android-15.0.0_r14 to android-15.0.0_r20 AOSP changelog

This changelog only includes Android Open Source Project changes; it does not include changes to any proprietary components shipped by Google or any hardware manufacturer. The raw log was generated using a modified version of a script originally written by JBQ and improved by Al Sutton.

Please do not copy this without attribution to this site, and to JBQ for the original script.

+- Project: platform/art

799187ab8c : Hide SDM status if not found.
b72ae2a45e : Mock calls to Binder in DexUseManagerLocalTest
ba201d247d : Revert^2 "arm64: Store resolved MethodType-s in .bss."
1be926562a : verifier: Check field access type early.
b71aeac727 : Change preferred-alloc-space addr to accommodate larger heap
96cf217d42 : verifier: Rely on zero-initialized allocation for flags.
b1d9ccb0fb : verifier: Remove return type descriptor validation.
c620ef884f : verifier: Do not collect CHECK_CAST locations.
a3b39b9c91 : Block on concurrent redefinitions instead of returning error
b9e21e08e3 : arm64: handle invoke-static in invokeExact intrinsic.
3fe3180b51 : Fix bump-pointer-space walk for black-dense
15f201a640 : verifier: Speed up `VerifyInstruction()`.
0c5d145e04 : verifier: Clean up constructor checks.
1cbdde43a5 : verifier: Re-introduce `VERIFY_ERROR_NO_FIELD`.
22ee56acfb : verifier: Refactor `Is{,Strictly}AssignableFrom()`.
68bb61936e : Add a host flag lib for CTS use.
0d07807785 : Implement missing Unsafe.get*Volatile and Unsafe.put*Volatile methods.
1553dd2732 : Refactor dex `Instruction` for `constexpr`.
701e215aa5 : Remove remnants of quickened opcode verification.
a0bdc8d7d8 : Clean up host APEX support.
efad3542a8 : Add newly introduced methods from aosp/3342998 to profile.
d1852d28b2 : cleanup: Remove extra SetGraph calls
7dcc228bc8 : Reduce hotness on interpreter lookups.
e19e9db1dd : verifier: Speed up `GetMethodIdxOfInvoke()`.
28f82fc275 : Speed up `AppendPrettyDescriptor()`.
065fef17f8 : Revert^2 "Adjust JIT thresholds."
b69aa62e67 : Make sure that MH/VH do not operate on uninitialized classes.
5b9716f84d : Add back HeapByteBuffer related unchecked methods.
dd303e6a58 : Only constructor names can start with `<`.
4e56223e86 : Prettify field type descriptor in `DexFile::PrettyField()`.
4c696c49b5 : Disable test 153- for all debuggable
9cf1396efc : verifier: Stronger uninitialized `this` access checks.
f324f3ffe1 : verifier: Fill merge table for `Constant{Lo,Hi}`.
a24e2c2551 : Make NativeLoaderNamespace instances immutable.
cca3341e67 : Do not hold g_namespace_mutex during library loads.
ecee442f64 : Add poisoning/unpoisoning to invokeExact intrinsic.
12515e8b1f : Start reporting the ArtDeviceDatumReported atom again
6963ab426a : Refactor ResolveMethod.
2191e73311 : verifier: Create `std::ostringstream` only if needed.
0c30ecd52c : Promote two ART run-tests to presubmits (2024-11-15).
54e83f1492 : Handle abstract and conflicting default methods in MH interpreter.
702b79ba77 : Add a dump option to dump SDM status.
178425422a : Regenerate ART test files (2024-11-15).
b82da21e55 : x86-64: handle invoke-interface in invokeExact intrinsic.
b16c991a56 : qemu: Remove opensbi
6f3f338c33 : Revert "Build host binaries with asan"
1e3dbffe69 : Fix bug number in comment
a3d9ad8d83 : Add APIs for ART managed install files.
86d746b735 : Add known failure 710-varhandle-creation for gcstress & debug
d7c74b43f8 : Remove dependency on desugar
9a2ad8a5b0 : Disable test 153- for more configs due to timeouts
8ec33c0a30 : Add a fast path for `Class::CanAccess`.
54cd7bf458 : Adjust art tests to java.lang changes.
6984f43344 : Fix races around JIT GC and deoptimizations.
fad4678f3a : Revert "arm64: Store resolved MethodType-s in .bss."
1d280b080a : Set BreakBeforeTernaryOperators to true for clang format
966eff16db : Allow the inliner pattern matcher to work with invoke-direct/range
98ffb6ed4c : verifier: Faster primitive assignability checks.
581aa379c2 : Revert "Adjust JIT thresholds."
23f722c82d : Flush profile on force save.
41b5aa7c71 : Increase timeout (give extra 60 min) for tests on QEMU.
500d22b953 : buildbot-vm.sh: download newer EFI for arm64 (the old one is missing).
16bad93289 : Order oat methods by startup
c987498b01 : Add comment and dcheck regarding SuperblockCloner edge case
d4b4368669 : verifier: Refactor type verification.
a21eb118c8 : Allow the inliner to devirtualize intrinsics
53d8174b00 : Log an error for an invalid fd in dumpLowOverheadTraceImpl
e5ff363769 : verifier: Specialized `RegType` for `j.l.Object`.
52d7390bf9 : Fix the 918-fields test
9d4c98438d : buildbot-vm.sh: download newer U-Boot version (the old one is missing).
303e0cbc68 : verifier: Use `ArenaAllocator` in verifier...
70cb3b60e2 : Fix LoopElimination mistakenly eliminating a loop
8ef719ad4c : verifier: Reduce size of `RegType`.
9f931d0b34 : Reapply "Update 913-heaps expected output after D8 reg alloc changes"
dc28ff6530 : verifier: Do not store `klass` in uninitialized types.
0210f35054 : verifier: Use type index-based cache more.
79b40df93a : verifier: Speed up `RegType::AssignableFrom()`.
b344a627ab : verifier: Use superclass index from `ClassDef`...
34759bdc2b : Ignore ReferenceQueue.add method when running method trace test
723708ddb6 : Revert "Always use an array in the DexCache for ArtField and ArtMethod."
dbe738240d : Add more information regarding packed flags to GraphVisualizer
e65b712edc : Clean up dexopt_chroot_setup_test.
5e5cbd9149 : Skip temporary mount points created by Package Manager.
44dda682e0 : Add an artd method to wait for profile save notifications.
60e19495ce : Revert "Use proper type for madvise."
67c5f64d33 : Disable test 153- for asan due to timeouts
58d7d14a39 : Implement profile save notification.
dd57ac6f44 : verifier: Reduce memory used by type index-based cache.
7ac25410e5 : verifier: Speed up `RegType::Merge()`.
c3df40627f : verifier: Reimplement 3 virtual functions as `constexpr`.
22de552a9c : verifier: Rewrite representation of constants.
fcb5c3a0cc : Force 4K ELF alignment on art/odex files.
7a97baeb4f : riscv64: Use `Stored()` instead of `Sd()` as the offset is too large.
35e5520291 : Don't reuse the old snapshot after a long running bg dexopt to do the file cleanup.
33a0e253cb : Document constructor call divergence from RI as deliberate.
11b1a19d9e : Revert "Update 913-heaps expected output after D8 reg alloc changes"
815774da7a : Update 913-heaps expected output after D8 reg alloc changes
c77bb270ee : Remove mcts-art from test_suites since they are not tagged as MTS
d6c05592b2 : Show string values of byte[] instances.
9fbfe26b27 : run-test: Create boot-image-compilation script
b881f3015e : Revert "Temporarily skip the crash symbolizer step for ART"
87a0d38e94 : Use uint64_t instead of size_t for black-dense calculations
707da44f4b : run-test: Use tgz instead of zip for the merged output.
5081f0fa46 : Export ART_TEST_ON_VM for tests.
9157849889 : Run exception catch callbacks after the dex pc moved callbacks
44f5dbb1a7 : Disable 153- for gcstress
0098033496 : Don't assume mainline boot jars presence when running tests on VM.
c6c8e76a69 : Explicitly use implementations where necessary for modules in the same apex
c2ad788c31 : riscv: Remove bad DCHECK
6713bfa5b4 : Move executable_method_file_offsets flag to system_performance namespace
3fa9eb38f8 : Re enable tests that are linked to fixed bugs
844dca8483 : Skip check for synthetic field in 918-fields test
dc42d4ed36 : Clarify GetExclusiveOwner() semantics
3051a5e03f : Remove now unnecessary checks from invokeExact intrinsic.
20cdc427d5 : Add missing Location::kNoOutputOverlap
83668f93e2 : Add 597-deopt-new-string to the known failures
8666596312 : Add an extra bit for intrinsics in ArtMethod
918a1812f6 : verifier: Use `constexpr` primitive types.
298dc96f06 : run-test: Make temporary directory name deterministic
2aaea4a0da : LUCI: Enable git superproject checkout for QEMU (non-repo)
b18b55181f : Add a comment to explain the API considerations for the pre-reboot dexopt usage of BatchDexoptParamsProto.
e0a2cec593 : Revert^2 "Check that the API level is set correctly for all intrinsics"
5656cd4148 : Reduce the likelihood of the child process being stuck
939772b8f0 : verifier: Do not keep `Handle<>`s for primitive types.
77c5372e7f : Update boot image and system server profiles [M80C35P56S0PP]
a202343b6d : Consolidate the use of some constants.
01df4b3a9b : Avoid `strlen()` for `ClassLinker::FindClass()`...
8a2ca00194 : Update hprof/veridex for C++20 `starts_with()`.
835f5dea1d : LUCI: Remove gcstress.ndebug and poison.ndebug (keep only debug)
e46f4012f3 : LUCI: Switch qemu builds to debug.
7f9f69e739 : run-test: Remove ART_HOST_TEST_DIR from env.py
5466b7e9fe : LUCI: Remove "debug" from builder names.
61560bdaa8 : Increase black-dense threshold to 95%
c7a572e7ec : Replace test_for properties with explicit dependencies on implementations
ec561332d9 : Revert "Check that the API level is set correctly for all intrinsics"
8cb3e137f7 : Check that the API level is set correctly for all intrinsics
8cbf793d00 : Introduce `RegType::Kind`.
f8ac417533 : Clean up after introducing `HRol`.
b506262278 : LUCI: Switch cmc from non-generational to generational
9bdf40477a : LUCI: Remove manual builder names.
39dbb1a7cd : LUCI: Rename remaining builders
475aeedad4 : LUCI: Refactor - invert some local variables and parameters.
2a9ed5f6a3 : LUCI: Remove deprecated and hidden builders.
07a58eb1cb : Revert^4 "IsMemorySharedMethod fix for intrinsics"
3086bbd64c : Fix use-after-free in dex2oat mini-debug-info compression.
cc22b57fd3 : Don't virtual dispatch non-copied method defined in an interface.
52f22d080c : Add packages/modules/UprobeStats to flag visibility
a687066b70 : arm64: Store resolved MethodType-s in .bss.
6b32080cb3 : Add implementation for Unsafe.compareAndExchangeLong
740ae3479b : Support all conditions in predicated vectorization
8cec104ae6 : Allow Pre-reboot Dexopt to be configured by BatchDexoptStartCallback.
d90eed6d02 : Ignore `setSplitName` in `BatchDexoptStartCallback`.
30da55cee8 : Speed up `Class::IsInSamePackage()` even more.
21be9add44 : Revert "[Profile Saver] Prevent a frequent lock/unlock of wait_lock_."
df418f0b14 : Speed up `Class::IsInSamePackage()`.
29ec7aef84 : Use `std::string_view` for `ClassTable` operations.
96dc77238e : Rename ResolveMethodWithoutInvokeType and use it more.
4d6fdd9fb3 : Use .data.img.rel.ro for app image methods.
389f84b5c4 : Fix HiddenApi bits of refersTo
ad2afaee62 : Fix buggy code in ProfileSaver.
55cc56fba2 : Wait and retry when class-pointer is null during marking
9792f354ea : Remove `ClassLinker::LookupClasses()`.
2b637e24ad : [Profile Saver] Prevent a frequent lock/unlock of wait_lock_.
14769da546 : Check that immediate is 32b in andl.
6ffbee2128 : riscv: define and use funct7 for Print32BinOp
4b18e4025e : verifier: Cache uninitialized type in initialized type.
bbd8e5e4ef : Promote `libnativebridge-lazy-tests` to presubmits.
84a268980e : Promote two ART run-tests to presubmits (2024-11-01).
1e4eb13a7e : Regenerate ART test files (2024-11-01).
46c8f60c6c : Temporarily disable failing gtests on riscv64.
da6fc933d3 : Revert^2 "Move native ABI asm out of quick_entrypoints*.S"
eb36868d95 : run-test: Create runner scripts as part of the build.
d30b340648 : Remove dependencies on the 1-variant fallback
5b8cb4477a : Update ART Service README.md
56950b0c1c : LUCI: Rename builders to avoid old device codenames
6f02672d20 : verifier: Track allocation dex pcs in `RegisterLine`.
644895b1db : Reland^2 "Use `ArenaAllocator` in `RegTypeCache`."
faccb4e6c9 : Build host binaries with asan
abfa6ce52e : Remove -Xprofile which is not used
bc3ae29926 : cleanup FixUpInstructionType
cbf82dafb9 : Run RTP after GVN to remove more NullCheck instructions
33499d1f02 : Revert "Reland "Use `ArenaAllocator` in `RegTypeCache`.""
ecc9c1afba : Reland "Use `ArenaAllocator` in `RegTypeCache`."
51887173c4 : Use proper type for madvise.
96f1c792b0 : Reland "verifier: Cache types by type index."
7472fab1e4 : Reland^3 "Introduce black-dense region in moving space"
fe6dff8697 : Do not defer library unloading from zygote
1f9c184392 : Always use an array in the DexCache for ArtField and ArtMethod.
323dfbf34f : Poll for boot image mapping in a heap task.
5be7f7dab7 : Don't block for long in UnloadNativeLibraries()
e60598cab9 : Add //frameworks/base:__subpackages__ to flag visibility
8a3f6fff32 : Add heap type to perfetto hprof
bed5054ed9 : Update method tracing to work when always_enable_profile_code
de7910029c : Revert "verifier: Cache types by type index."
f9f226f0b4 : Revert "Use `ArenaAllocator` in `RegTypeCache`."
b6413c0a1c : Remove interpret_one_instruction from switch interpreter
e54bbb5c16 : Use `ArenaAllocator` in `RegTypeCache`.
38e9b0c973 : verifier: Cache types by type index.
dba42ff46c : Do not package test module `ArtGtestsTargetInstallApex` in ART MTS.
ac0330d972 : Add `libnativebridge-lazy-tests` to ART Test Mapping and ART MTS.
857fb7041d : Promote `art_standalone_dex2oat_cts_tests` to presubmits.
c65b7a36c1 : verifier: Remove `PreciseReferenceType`.
6c819b64e4 : Add thread_suspension_timeouts.md
6c9d77fc10 : Fix app info code type.
8d20c5194f : Remove unused ShadowFrame.result_register_
d71ee1abed : Revert "Reland^2 "Introduce black-dense region in moving space""
a5557d9da3 : Revert^3 "IsMemorySharedMethod fix for intrinsics"
3dcb57a05a : Revert^2 "Add VMDebug_getExecutableMethodFileOffsetsNative"
0cdb0fedbf : x86-64: handle invoke-direct in invokeExact intrinsic.
398bedca50 : Revert^2 "IsMemorySharedMethod fix for intrinsics"
003071c7fc : Revert "Move native ABI asm out of quick_entrypoints*.S"
3f5ee33f21 : Update comment in modifiers.h
1d62bd6e7c : Fix marking methods as always throws
83387d63c4 : Revert "Add VMDebug_getExecutableMethodFileOffsetsNative"
b07fc736e3 : Use ScopedObjectAccess instead of ScopedFastNativeObjectAccess
72f6bb93a3 : Implement Executable::GetParameterTypesInternal in UnstartedRuntime.
88f65e2368 : Add VMDebug_getExecutableMethodFileOffsetsNative
466a011446 : run-test: Simplify boolean global variable assignments
b3e42351b2 : Optimizing: Clean up graph visiting.
2d2522780c : verifier: Clean up handling of final abstract class.
aa072d3b56 : run-test: Replace "true"/"false" with True/False
ebc4306c65 : run-test: Move global variables close to their first use
14c0c11936 : run-test: Replace "yes"/"no" with True/False
9be545efec : Remove unused variables.
8be53d09ef : Move native ABI asm out of quick_entrypoints*.S
bc44a6b142 : Split dirty-image-objects file between art and framework
66d17a5f66 : Update boot image and system server profiles [M80C35P56S0PP]
9e9f0bcd84 : Run-test: Add option to save runner script for debugging
e7fffca5a4 : Don't crash when generating image for nonexistent dex files.
1ef51a583c : Fix read barrier assertions for app image loading.
7e3d07a973 : verifier: Fix declaring class check for `iget*`.
c39bb32508 : Refactor run-test to use python argument parser.
881f7d31e3 : LUCI: Add gcstress hidden builders with new names.
399ab73f56 : LUCI: Allow all host tests to run on Ubuntu 22
c89e09f4d4 : Revert "Enable 153-reference-stress ART run-test"
5204c765a2 : Fix `--jit-on-first-use` run-tests.
b741af2590 : Separate kRuntimeISA/kRuntimeQuickCodeISA
1c595ed4f8 : Use better names for some predicates in `regen-test-files`.
b207b10e5d : Do not package non-runnable ART run-tests in ART MTS.
25e5dcdbed : Fix typo in regen-test-files's post-execution statistics.
0d8683f53e : Promote one ART run-test to presubmits (2024-10-15).
f9704473fa : Regenerate ART test files (2024-10-15).
761b73d30a : Add method and thread infos when dumping low-overhead traces
f0c45c0576 : fix inliner bug
a5f50ab9af : Instead of throwing ISE, abort transaction in ...
d96190b790 : Revert "IsMemorySharedMethod fix for intrinsics"
d5a09b3adf : Fix ambiguous FillVRegs template specializations
93ea75d7b4 : Enable 153-reference-stress ART run-test
8555b07360 : Revert^2 "Add intrinsics for the absolute forms of unsafe.{get,put}Int"
7a21281cfc : verifier: Fix a crash for abstract final class.
6253569462 : Share `RegTypeCache` for all methods in a `Class`.
3e5cfa5a75 : Provide faulty implementation of System.nanoTime in UnstartedRuntime.
54c1c76edb : IsMemorySharedMethod fix for intrinsics
aff0b1efe8 : Remove dead `MethodVerifier::ResolveCheckedClass()`.
9723509386 : Set native libs to non writable before loading it
e2d89170d5 : Don't bind-mount optional partitions if they don't exist.
81888a2d5c : Remove unused variable.
8c8e6e295f : buildbot-vm: create missed directory
6127341b74 : Arm64: fix VecPredToBoolean code generation for SVE
b50d59f4ca : Verify hardware supports ISA features in tests
8c67e8c9c1 : runtime: add alderlake variants
f2d6bfd7d6 : verifier: Do not resolve types in other class loaders.
b544a10177 : Arm64: Fix PackedSwitch codegen for large methods.
d3af812dd1 : Reland^2 "Introduce black-dense region in moving space"
64860dc5d7 : Update 2001 test to use lower task count on 32-bit systems
7074ffadae : Few fixes for low overhead method tracing
21730ffb79 : Fix typo in CodeGenerator::Compile
6fa9174e30 : Revert "buildbot-vm.sh: use QEMU bundled with cuttlefish for arm64 as well."
c1bd4376b1 : Revert "Add intrinsics for the absolute forms of unsafe.{get,put}Int"
23cfc82c8e : Refactor `HandleCache` out of `nodes.{h,cc}`.
5701a59d18 : Move `HCondition` creation function to `HCondition`.
d7118f3546 : Do not record dex PC in constant HIR.
bcb5c19e5e : Add intrinsics for the absolute forms of unsafe.{get,put}Int
6b16dc2993 : Refactor `ReferenceTypeInfo` out of `nodes.{h,cc}`.
b375b1cc44 : Remove stale TODO.
95362ffe17 : Revert "Reland "Introduce black-dense region in moving space""
c7d8ebe433 : Load app image without holding the mutator lock.
8d24de7515 : Skip gtest ImgDiagTest.ImageDiffPidSelf when running tests on VM.
b0349f656a : Remove extra LOG(FATAL)+Unreachable
719e731607 : Update 2001 test to use lower task count on 32-bit systems
2e74401703 : Clean up after "Add stack type"
2e78250a75 : Address comments from aosp/3282234
f758d6a753 : Fix `VerificationResults` use in `OatWriter`.
d576392c76 : Pass ISA to `Context::CalleeSaveAddress()`.
c50d679916 : Reland "Calculate the number of out vregs."
00db5b25da : Search in /system_ext for apex systemserver dexpreopt files
399e00d03a : Revert "Avoid move to same registers."
12e3c23315 : Revert "Avoid move to same registers."
92a62ec92e : Reduce memory used by `HEnvironment`.
c7f1c1039a : Refactor the ART APEX Soong modules to make them easier to follow.
600794a64c : Consolidate various APEX properties to the ART shared defaults.
d6ce14941e : Add libandroidio to all APEXes.
6cb63acdff : Avoid move to same registers.
80a1feff17 : Make entrypoints and Context ISA configurable
ee3063bda0 : Use dex2oatd in artd.
0c44cf6824 : Skip dexopt if storage is low during Pre-reboot Dexopt.
1c6c18df9a : Clean up some internal libraries from the native library lists.
c4064f4d53 : Add some logging to help debug pre-reboot dexopt.
bc9bac28de : Update 2001 test to use lower task count on 32-bit systems
e159a3bce2 : Revert^5 "Object.clone() allocates more movable objects"
7b74ddc161 : Source tools/buildbot-utils.sh from tools/run-gtests.sh.
85d0f119b0 : Add new HeapByteBuffer methods into profile
8ab90b3199 : Remove volatile from GUARDED_BY variables
aed7b8612d : Move code around to compile when asan is enabled.
9aae9efe55 : Fix spacing in verbose logging
871b53457d : Turn kNumPackedOpcodes enum into constexpr
d53f0ae0e7 : Reland "Do not unmap twice a mapping."
ea269f69d0 : Revert^4 "Object.clone() allocates more movable objects"
f255d9c2c5 : Reland "Introduce black-dense region in moving space"
6ba9268298 : Release trace buffer allocated by lowoverhead tracing on thread exit
b3b758f0ce : Add `art_boot_images` as required dep of some ART tests
9f3192455f : Refactor C++ entrypoints
bb2fb09b7d : Revert^2 "Add VarHandle implementations for void getAndUpdate methods"
8567fcdf97 : Revert "Add VarHandle implementations for void getAndUpdate methods"
3c4f9761bc : x86_64: remove subtype check from invoke-virtual fast path.
9ff2b61734 : Add VarHandle implementations for void getAndUpdate methods
a172151731 : Verify the presence of flag.info in ART APEXes
e99f7e7591 : Add stack type
63b8399f3f : Add more validation for const-method-handle.
0b8cc45c57 : Delete compact_dex_level.h.
68df0adae7 : x86_64: Handle invoke-static in invokeExact fast path.

+- Project: platform/bionic

bad36dce2 : Remove METADATA file
10d48c53f : Revert "Define linker.recovery"
9d738c798 : Define linker.recovery
4e5e0f2d6 : Move bionic test libraries to a default, for easy consumption in cts.
811e03238 : Fix an overflow bug in __get_elf_note().
254989946 : linker: ignore the possibility of page size migration for 32-bit processes.
06a86dc87 : Allow system_image_defaults to access libc_hwasan
40457eab8 : Add mmd AID to grp_pwd_test
e1a577fc8 : Revert "async-safe logging: __ANDROID_API_LEVEL__ isn't a thing."
48a3bef79 : tests: Disable cpu_target_features
0c603c3c8 : Remove the ILP32 __arch_swab cruft.
986afb973 : Update <stdbool.h> test for C23.
19b70dcbd : libc: swap memrchr to llvm-libc for most arches
79427db0c : libc: swap strlcat to llvm-libc for all arches
c90ba5560 : libc: swap strlcpy to llvm-libc for all arches
2e65afecb : bionic: libc: avoid -Wdeprecated-declarations via std::atomic_init
414bfb399 : libc: swap to llvm-libc for 8 fns on x86_64
273473bfd : Add <dlfcn.h> to the libdl documentation.
c5a98e629 : libc: swap bsearch to llvm-libc's impl
fab5e6f6d : Trivial pthread_setaffinity_np()/sched_setaffinity() test.
20eb73ee6 : Revert "libc: point some x86_64 function impls at llvm-libc"
987359572 : Protect against misuse of an implementation detail.
d354c42be : Trivial "can we get the cpu affinity mask?" tests.
7f095e3e6 : Fix broken link in MTE documentation
76b3685e6 : libc: point some x86_64 function impls at llvm-libc
8e5de06bc : Add API to set page size compat mode
e117d6ee7 : Add pthread_getaffinity_np()/pthread_setaffinity_np().
7e70bb499 : Update links to libc_init_static.cpp to libc_init_mte.cpp
eb6e5a49e : Add per-file OWNERS for docs/mte.md
eac5f73d0 : [MTE] fix missing thread id from stack_mte_ring mapping
62799312d : Fix Android.bp formatting
a9e144df0 : [MTE] Cleanup stack buffer for detached threads
0d43de892 : Remove a comment.
08fc8bd52 : Add mmd AID to grp_pwd_test
0198e074d : Revert^2 "No special treatment for vendor"
756e765fa : Factor out freeing of stack buffer to ease testing
28057b88c : Re-land Implement deterministic MTE globals for dlext RELRO sharing
3848423b2 : Re-land Fix dlext tests for MTE globals
477c1eb04 : Re-land^2 tests for MTE globals
4edc20d7e : Re-land^2 linker support for MTE globals
eabc47bf5 : Reduce flake in pthread mutex timed lock tests.
1f8c22586 : Add the __BIONIC_ALLOC_SIZE to reallocarray.
d407e5e63 : Hide the reallocarray polyfill by default.
b2aa69fb7 : Revert "No special treatment for vendor"
1443502bb : <ctype.h>: fix a stray `inline` that breaks C89 compatibility.
0ac5da00d : Refactor the android_mallopt code.
a6e9dcf4c : Move libc MTE init to separate file
e7e61390c : Add implementation notes for MTE
6e07b508e : No special treatment for vendor
e39602c9a : Use libmemory_trace for writing trace data.
c138f81e4 : [MTE] split heap and stack MTE initialization
3e56e562a : [MTE] do not use version number for memtag_dynamic_entries
02ce401d1 : API guard every post-21 API.
5a7331a4d : Add more __BIONIC_AVAILABILITY_GUARD().
36f97cac5 : Reland "Tidy up and document <limits.h>."
8b95640f2 : Allow a couple more swab implementation details through.
e5da49688 : Add __BIONIC_AVAILABILITY_GUARD().
a85f1589c : Revert "Re-land linker support for MTE globals"
499f74f97 : Revert "Re-land tests for MTE globals"
dc38b1404 : Revert "Fix dlext tests for MTE globals"
c44f80cbe : Revert "Implement deterministic MTE globals for dlext RELRO sharing"
56964b4ad : Revert "Re-land linker support for MTE globals"
de06c395b : Revert "Re-land tests for MTE globals"
f1f716ad2 : Revert "Fix dlext tests for MTE globals"
c46d4c257 : Revert "Implement deterministic MTE globals for dlext RELRO sharing"
8eceeea1a : Revert "Re-land linker support for MTE globals"
49eb5d9bb : Revert "Re-land tests for MTE globals"
287eb4776 : Revert "Fix dlext tests for MTE globals"
f0c2b2ee3 : Revert "Implement deterministic MTE globals for dlext RELRO sharing"
95680c7a8 : Re-hide fmemopen.
66136e7d5 : Re-hide some error.h stuff.
7b7abe9d0 : Re-hide mempcpy.
a0e834395 : libc.map.txt: add a version for API 36.
3c932f1c3 : __riscv_flush_icache() still ignores the address range in 6.12.
fe2a9eb32 : Clean up malloc stress test.
5b5bab5b1 : Add a <stdatomic.h> header test, and the missing kill_dependency macro.
e11877813 : Remove <private/bsd_sys_param.h>.
eae2849b9 : Revert "Tidy up and document <limits.h>."
414dd2d6b : Always include <sys/cdefs.h> first.
87a5d209f : Remove unused variable.
47eb52732 : Tidy up and document <limits.h>.
ba3805abd : Remove unused variable.
b4fa67f66 : Implement deterministic MTE globals for dlext RELRO sharing
c54d307ff : Fix dlext tests for MTE globals
61436e232 : Re-land tests for MTE globals
2e6e409dc : Re-land linker support for MTE globals
6459ad3e6 : bionic: Add page size 16KiB compat tests
45655d305 : Reuse the strtod_l()/strtof_l() polyfills as the "real" implementation.
755b9c127 : libc: Set __bionic_asm_align to 64 for arm and arm64
ce1c3cf77 : linker: LoadSegments: Load 4KiB ELFs on 16KiB page-sized systems
52fe6ce2e : Disable warning: passing no argument for the '...' parameter of a variadic macro is a C23 extension
b23787f48 : linker: LoadSegments: Preparatory work for 16KiB App Compat
1d4f7f59c : 16kb: bionic: Re-align libtest_invalid-zero_shdr_table_offset.so to 16kb
5b17c76f7 : 16kb: bionic: Re-align libtest_invalid-zero_shdr_table_content.so to 16kb
9018e5fcd : librust_baremetal: Use cc_baremetal_defaults
63fca20d4 : [AArch64] BTI flag is missing from the crt_pad_segment.S
bad227258 : Allow v4l2r access to libc_uapi_headers
0225a380f : 16kb: bionic: Re-align libtest_invalid-local-tls.so to 16kb
b71054867 : 16kb: bionic: Re-align libtest_invalid-unaligned_shdr_offset.so to 16kb
81d7c8238 : Add note about seemingly double relocation of the linker
80246ff67 : Get out of the business of hard-coding a list of uapi unistd.h files.
49074ba1d : Remove manual -fPIC from x86/x86-64 crt_so_defaults.
b24d217e7 : Make the strncpy() test test strncpy().
06f28bed6 : Remove 32-bit x86 wcs*()/wmem*() assembler.
e2d872ef4 : 16kb: bionic: Re-align libtest_invalid-empty_shdr_table.so to 16kb
c76342d72 : cpp: Explicitly retain array definitions
44df2c416 : Fix tests for change that modified #include names.
dc9b0fd61 : Revert^4 "Use DoNotOptimize rather than rely on a volatile."
f5d5f8356 : Revert^3 "Use DoNotOptimize rather than rely on a volatile."
9d1ef60fe : Product variants are not in the NDK API surface.
aad7abbd3 : Revert^2 "Use DoNotOptimize rather than rely on a volatile."
8469cdc27 : Revert "Use DoNotOptimize rather than rely on a volatile."
ea0b7fc2f : Use DoNotOptimize rather than rely on a volatile.
63fcca410 : Update to v6.11 kernel headers.
a8050343f : Revert "Update to v6.11 kernel headers."
ec07dacda : Use new riscv unistd names for syscall definition.
3b3b3569a : bionic: Disable arm32 cpu_target_features test
fb9df2530 : unistd.h: document chdir()/fchdir().
86da38c9d : fortify: rewrite strlen to fold to a constant
4ba54495e : Update to v6.11 kernel headers.
0cad9cbd6 : async-safe logging: __ANDROID_API_LEVEL__ isn't a thing.

+- Project: platform/bootable/deprecated-ota

f49abf1 : deprecated_tests: health v4

+- Project: platform/bootable/libbootloader

15951a1 : Simplify EFI fastboot variable interfaces
18b4c51 : gbl: Fix memory leaks and race in gbl_efi_image_loading tests
5e67731 : Update and plumb slots interface through libgbl
699df13 : android: add AVB digest to handle_verification_result declaration
21f624e : android: support persistent values in avb protocol
a165c2a : Fix fastboot test failure
21a06cf : Add documentation for GBL's A/B boot flow
a89b7bd : gbl: efi: add "fastboot boot" command
8a02169 : Update backup in primary header when flashing GPT
897e6bc : Remove libstorage_testlib
3bb3da3 : gbl: use image buffers instead of allocations
f8cee18 : android: support unlock and rollback indexes in avb protocol
796d340 : Support fastboot erase
dfb688b : Support erasing GPT
7b5f3f1 : android: allow FW to verify public key
5b5d8b6 : Merge impl blocks of libgbl fastboot
c2da578 : Always adjust last usable block when flashing GPT
b7c7336 : Override secondary GPT if different from primary
d0ad606 : Support platform specific fastboot variables
c33a7a7 : Dump error log if test fails
bd6ab10 : android: allow FW to handle AVB result
edd02c3 : Support legacy partition name "fuchsia-fvm"
ee600ec : Clean up leftover u-boot naming
3afc1c3 : Fix bug in sending fastboot okay before reboot
0951b95 : gbl: libgbl: src: android_boot: mod.rs
c9e730b : Support fuchsia fastboot set_active, reboot mode
f59364a : Improve buffer ownership for PartitionBlockDevice
d5c0066 : Move gbl's avb-related logic into separate module of libgbl
16345c7 : Reset system when panic in EFI
06c5087 : android: increase maximum amount of device tree components
e4bcecb : Add APIs for modifying/creating GPT
45fc699 : Make BlockIo hidden in GblOps
f5537a2 : Add check for partition range overlap
e3d9610 : Fork bazel.py/bazel.sh from Kleaf.
62cc955 : Move fastboot variable getters into GblFastboot
4ced125 : Improve Gpt buffer ownership flexibility
68d4631 : Replace copy_to_dist_dir with pkg_install.
ff294ad : Add a built-in ram-based BlockIo implementation
d1732f9 : Stop allowing deadcode and unused var by default
20df4f9 : Centralize libstorage API on BlockIo
f8b787a : Sync compiler version of rust_analyzer with the rest of the project
1fd8344 : Also correct include_dirs in rust-project.json
5709da0 : Add option to resize GPT when flashing it.
af05fc2 : gbl: move rust-analyzer into gbl/ dir
9afa1c9 : Add support for VSCode + rust-analyzer auto-completion setup
a5a8651 : Support "fastboot flash gpt" for updating GPT
124321b : Add API for updating GPT
d8f06c0 : Improve GPT check and error expressiveness
428d51b : android: rename generated bootimg bindings
af00cdf : VtsBootconfigTest: enforce requirement from U+
328ac6c : VtsBootconfigTest: enforce requirement from U+
c6fe753 : VtsBootconfigTest: enforce requirement from U+
470a6b6 : android: remove bootconfig cuttlefish specific logic
5c69af8 : android: support multiple dt as a part of boot/vendor_boot
184a400 : android: allow FW to select device tree components
152486c : Support broadcasting fuchsia fastboot mdns service
6072075 : Fix fastboot parallel flash test flake
1066cbd : Support oem add-staged-bootloader-file
6b1ab82 : libfdt: check fdt header correctness for FdtHeader
2a1c276 : gbl: Update GBL README.md
ac0e5e6 : Don't re-listen if already handshaking
3442318 : gbl: Add guide on running Fuchsia with GBL on emulator
6326284 : Support legacy names for Fuchsia boot
09cef94 : Support "fastboot continue"
5b1681a : Fix EFI net TX frame deallocation bug
a0ae06c : libefi: migrate to u-boot dt_fixup_protocol to update device tree
367e5a0 : Support stack allocated worker tasks for fastboot
83b7e9e : android: make sure fdt is extendable before modifying
5b784cb : Abstract fastboot buffer management
7cb7ba9 : libefi: use boolean to select device tree components
c33d046 : Update to Android Rust 1.81.0
cff23df : Switch to use libgbl API for fastboot
4db8e55 : GBL: cleanup TODO
25db10a : [gbl] Add memory mappings to ZBI items for fuchsia boot.
7b91b8d : [gbl] zbi-rs library sync
9bb23ce : Move fastboot loop logic to libgbl
281247e : android: provide device tree build by GBL for fixup
81dfd15 : Remove the use of associated_type_defaults
f4784be : Asserts that wait_io_completion must return
d46fec0 : fdt: use static memory to convert refs to pointers
e1152ca : Refactor EFI GBL network
e08c9bf : gbl: move Android boot logic into libgbl
ad2b54e : gbl: remove heap allocation from Android boot
0b3fa77 : android: request bootconfig and os commandline fixup for android boot
5f0152a : Support fastboot reboot
fb29f26 : libgbl: combine Fuchsia/Android avb ops
0c1ac3b : gbl: move Android libavb operations into libgbl
064da7a : Check fastboot mode in fuchsia bootflow
25e306a : [gbl] move decompression to libgbl
0826f8d : gbl: efi: src: android_boot.rs: make vendor boot partition optional
eb6a753 : gbl: efi: src: android_boot.rs: make fdt_bytes not mutable
36fea3f : gbl: create libutils
0bde4d0 : gbl: move libavb sysdeps to EFI application
672200f : libefi: update os configuration protocol signatures
eaf479c : Change delimiter to '/' for partition arg

+- Project: platform/bootable/recovery

1a98f019 : Specify recovery property in recovery_deps
08386b88 : Convert librecovery_ui_ext to Android.bp
e93314f1 : Convert librecovery_ui_ext to Android.bp
9e26d5ae : Define recovery resources prebuilt_res modules
facf3f6c : Convert recovery_deps to Android.bp
b6a21705 : recovery: use v4 Health HAL
6f85be04 : Keep RECOVERY_API_VERSION and RECOVERY_FSTAB_VERSION in sync.
ba882c78 : Make bootloader_message ramdisk_available.
65fd9470 : Display recovery reason in prompt_and_wipe_data
c74843c5 : rm -rf non-AB code

+- Project: platform/build/bazel

bdedf8e9 : Remove bp2build and queryview
94209dff : Delete unused ci scripts

+- Project: platform/build/bazel_common_rules

59c7ae7 : Mark copy_to_dist_dir & embedded_exec deprecated.
1d967e1 : exec: delete exec_rule.

+- Project: platform/build/blueprint

da9eeb7 : AddLoadHookWithPriority function
cc1c206 : Support ModuleProxy in a few blueprint singleton methods.
1d99261 : Add support for selects on string lists
97bbbf6 : Add VisitAllModuleVariantProxies to blueprint.
778c835 : Do not read moduleInfo map in Context.ModuleErrorf
60f7014 : Support ModuleProxy in OtherModuleType, OtherModuleErrorf. Also removed OtherModuleSubDir.
ef71ba9 : Change GetModuleFromPathDep to use ModuleProxy.
9fa6ea5 : Replace FinalModule with IsFinalModule.
ce87cd8 : Sort the subninja list of the incremental modules.
35714dc : Add VisitAllModuleProxies and VisitAllModuleVariantProxies.
2743fea : Support repacking lists of structs
5c855ad : bpfmt: Extend visibility to cargo_embargo
cd57e03 : Add NeverFar() option for transition mutators
203ef32 : Introduce the bp api `OtherModuleIsAutoGenerated`
946017e : Dedup addDependency and addVariationDependency
7905d7e : Remove the 1-variant fallback
9dfaaec : Use maps.Clone()
fd04a91 : Revert "Add a UniqueList that can store a slice in the unique package"
34d9407 : Revert "Use unique.Handle for DepSets"
e748cf8 : Remove the 1-variant fallback in vendor/
26923eb : Fix slices.Grow() calls
a3c144d : Partially remove the 1-variant fallback
602d141 : Don't print errors in RunBlueprint
5686ac4 : Move some gob helpers to a new package.
5753849 : Split bpmodify command into a library
289f3e3 : Use unique.Handle for DepSets
0808295 : Add a UniqueList that can store a slice in the unique package
c706bf2 : Move DepSet to blueprint
2b45552 : Introduce CreateModuleInDirectory(...)
2066b02 : Add error methods to transition mutator contexts
3eef82c : Support Int64 in fieldToExpr
97aa334 : Don't write empty module-based ninja files.
d469c91 : More minor optimizations to updateDependencies
c49fbf2 : Optimize out some calls to c.updateDependencies()
bc7accb : Remove c.modulesSorted
705cd21 : Remove distinction between parallel and non-parallel mutators
5ff6ab7 : Add test for disallowing mutator functions
62f80fa : Add comment describing directDeps vs newDirectDeps
e61e26a : Run blueprint module implementations on all variants
d5f678a : Coalesce compatible mutators
da7cb34 : Update gotestmain.go for go 1.23
51aa659 : Update finished mutator checks
3b98058 : Update directDeps immediately when adding new dependencies
b1bb3f6 : Annotate mutators that use methods that prevent mutator coalescing
3b7bb5d : Minor optimizations when running mutators
9a4e015 : Remove unused TopDownMutatorContext methods
3e3af9d : Make blueprint mutators parallel
fa2ed53 : Change the way to support custom gob encoder and decoder.
079d1b9 : Add ModuleProxy that should be used when visiting deps.
32f934e : Remove the 1-variant fallback from reverse dependencies
b62b6ec : Add utilities to repack a property struct to a bp file
6659e20 : Revert "Add tests for new reverse dependency behaviors"
20758c3 : Revert "Remove the 1-variant fallback from reverse dependencies"
ddf9bb8 : Add tests for new reverse dependency behaviors
e8090f2 : Remove the 1-variant fallback from reverse dependencies
81f60b4 : Add AddReverseVariationDependency
9e0ece8 : Move HasMutatorFinished to EarlyModuleContext
511ea71 : Remove the ability to configure the provider check
fe62964 : Remove aliases
87fe5cc : Remove CreateVariations and related functions

+- Project: platform/build

9e94795a3d : Version bump to BP1A.250305.019 [core/build_id.mk]
612b129e86 : Version bump to BP1A.250305.018 [core/build_id.mk]
fe6b32bfdf : Version bump to BP1A.250305.017 [core/build_id.mk]
ad85cfb291 : Version bump to BP1A.250305.016 [core/build_id.mk]
5731ef8072 : Version bump to BP1A.250305.015 [core/build_id.mk]
93ee51d85f : Version bump to BP1A.250305.014 [core/build_id.mk]
1a2846c65a : Version bump to BP1A.250305.013 [core/build_id.mk]
ccdb70c9af : Version bump to BP1A.250305.012 [core/build_id.mk]
da84d0687f : Version bump to BP1A.250305.011 [core/build_id.mk]
46d58423f3 : Version bump to BP1A.250305.010 [core/build_id.mk]
b5e47e9e5c : Version bump to BP1A.250305.009 [core/build_id.mk]
85eaa82d25 : Version bump to BP1A.250305.008 [core/build_id.mk]
11b8bb36c0 : Version bump to BP1A.250305.007 [core/build_id.mk]
a0b038995d : Version bump to BP1A.250305.006 [core/build_id.mk]
834757b420 : Version bump to BP1A.250305.005 [core/build_id.mk]
f5c974e80b : Update update_bootloader_radio_image.mk calling path
effb3b4171 : Version bump to BP1A.250305.003 [core/build_id.mk]
2651b80590 : Version bump to BP1A.250305.002 [core/build_id.mk]
b184f78491 : Version bump to BP1A.250305.001 [core/build_id.mk]
bb35c70c4c : Version bump to BP1A.241210.028.A1 [core/build_id.mk]
a245d218ae : Version bump to BP1A.241210.033 [core/build_id.mk]
b37d3f52a2 : Version bump to BP1A.241210.032 [core/build_id.mk]
5e880999c3 : Version bump to BP1A.241210.031 [core/build_id.mk]
3c69e6e380 : Version bump to BP1A.241210.030 [core/build_id.mk]
f0c00694ea : Version bump to BP1A.241210.029 [core/build_id.mk]
3bcc356319 : Version bump to BP1A.241210.028 [core/build_id.mk]
2a16018528 : Version bump to BP1A.241210.027 [core/build_id.mk]
b6fb725dfd : Version bump to BP1A.241210.026 [core/build_id.mk]
9f85f77d4c : Version bump to BP1A.241210.025 [core/build_id.mk]
1655e4cf51 : Version bump to BP1A.241210.024 [core/build_id.mk]
ab848a5f61 : Version bump to BP1A.241210.023 [core/build_id.mk]
7fe8bcbf00 : Version bump to BP1A.241210.022 [core/build_id.mk]
3886893514 : Version bump to BP1A.241210.021 [core/build_id.mk]
fc11007d75 : Version bump to BP1A.241210.020 [core/build_id.mk]
5374653452 : Version bump to BP1A.241210.019 [core/build_id.mk]
c771e98871 : Version bump to BP1A.241210.018.A1 [core/build_id.mk]
1d085b86a0 : Version bump to BP1A.241210.018 [core/build_id.mk]
c855860866 : Version bump to BP1A.241210.017 [core/build_id.mk]
d183208269 : Version bump to BP1A.241210.016 [core/build_id.mk]
354666e5b6 : Version bump to BP1A.241210.015 [core/build_id.mk]
2725fbdf3f : Version bump to BP1A.241210.014 [core/build_id.mk]
08d23ea714 : Version bump to BP1A.241210.013 [core/build_id.mk]
fb682f92f6 : Version bump to BP1A.241210.012 [core/build_id.mk]
c09a0f6424 : Version bump to BP1A.241210.011 [core/build_id.mk]
a082af41c8 : Version bump to BP1A.241210.010 [core/build_id.mk]
0aa8a120cc : Version bump to BP1A.241210.009 [core/build_id.mk]
8ef75e7f14 : Version bump to BP1A.241210.008 [core/build_id.mk]
96c11f69c5 : Version bump to BP1A.241210.007 [core/build_id.mk]
fd5b2d4bb6 : Version bump to BP1A.241210.006 [core/build_id.mk]
4b90495772 : Version bump to BP1A.241210.005 [core/build_id.mk]
b46dc95276 : Version bump to BP1A.241210.004 [core/build_id.mk]
a9a021e61e : Version bump to BP1A.241210.003 [core/build_id.mk]
caaf1902c0 : Version bump to BP1A.241210.002 [core/build_id.mk]
670696674c : Version bump to BP1A.241209.002 [core/build_id.mk]
26c0d8a4e7 : Enable real test discovery in build_test_suites
ec657cedbb : Synchronize the setup process of edit monitor
62035d9c96 : Add --device-build flag to build_test_suites.py
130da8edf5 : Update the logic to force cleanup all edit monitor instances.
71b1e1d177 : Version bump to BP1A.241206.003 [core/build_id.mk]
3a30d6a829 : Version bump to BP1A.241206.002 [core/build_id.mk]
c7284d08d3 : Version bump to BP1A.241205.002.A1 [core/build_id.mk]
46d6d66ede : Version bump to BP1A.241202.002.A3 [core/build_id.mk]
0508c32a97 : Return default value if fingerprint doesn't match
d14440fcf9 : wifi: Add 11BE feature support for hostapd
b227413207 : Add build_mixed_kernels_ramdisk to otatools.zip
8ff1f957c1 : Version bump to BP1A.241205.002 [core/build_id.mk]
3d9e40dc02 : Version bump to BP1A.241202.002.A2 [core/build_id.mk]
e116a42f0e : Version bump to BP1A.241202.002.X1 [core/build_id.mk]
8a27a2b71e : Ensure SOONG_CONV_DATA is generated even if no modules require conversion
44ce7800e7 : Fix broken enable_lz4diff flag
481aae6e07 : Add build_mixed_kernels_ramdisk to otatools.zip
0e54bd65d0 : Revert BUILD_ID back to MAIN.
338fa62cd4 : Export super image related variables to Soong
45cf348940 : Add build make config for allowing framework and service jars
a7107913d9 : Fix a race condition in edit monitor
b1b3434b06 : Version bump to BP1A.241202.002.A1 [core/build_id.mk]
516571b977 : Version bump to BP1A.241204.002 [core/build_id.mk]
1b9b6edcea : Fix crash when failing to parse test discovery output
05fe501226 : Revert^3 "Reland linking shared_libs to their modules"
ff804fff5c : Version bump to BP1A.241203.002.K1 [core/build_id.mk]
26cab60447 : Process source_file through GetTargetFilesZipForCustomImagesUpdates
aed81504b9 : Export BOARD_AVB_<image>_ADD_HASHTREE_FOOTER_ARGS to soong
1b88084ecd : Export TARGET_SCREEN_DENSITY to Soong
d2e0b0a384 : Add AAOS Display Safety triage team
afd7c98da2 : Revert^2 "Drop RRO autogeneration for soong apps"
b52bd8f89e : Revert "Drop RRO autogeneration for soong apps"
3fdf2553a6 : Correct partition name
b5201222d1 : Filter read-only + disabled flags from aconfig_flags.pb
5730ce71ac : Filter read-only + disabled flags from aconfig_flags.pb
cbf71a4361 : Version bump to BP1A.241203.002 [core/build_id.mk]
ad7cfb5601 : apex_sepolicy_tests only if default payload type is ext4
03d62d56db : apex_utils: do not use `deapexer list`
a6af369e71 : Export recovery partition related variables to Soong
b45129d218 : Export PRODUCT_FSVERITY_GENERATE_METADATA to soong
00543f116d : Version bump to BP1A.241202.002 [core/build_id.mk]
920c1a064e : Add HWASAN libraries in case of ARM64
2adedef9cf : Correct a comment in flags.mk
7d1c4b91e7 : Export LIBBINDER_REMOVE_CACHE_STATIC_LIST to Android.bp
4fd12f8f9d : Export libbinder_addservice_cacheflag to Android.bp
79b5ed6312 : Version bump to BP1A.241128.001.A1 [core/build_id.mk]
3a1ca807d2 : Set the property "debug.codec2.bqpool_dealloc_after_stop = 1" in GSI.
87655daa61 : Version bump to BP1A.241129.001.X1 [core/build_id.mk]
3596eccfe4 : Set `TARGET_RECOVERY_UI_LIB` as a Soong config string list
dae4e53982 : Version bump to BP1A.241127.002 [core/build_id.mk]
7a1d930837 : Version bump to BP1A.241121.002.A2 [core/build_id.mk]
a2ce6c339c : Version bump to BP1A.241127.001.K1 [core/build_id.mk]
8bd21dd86e : Add GSI targets using soong built system image
378c8f35cf : Export BOARD_VENDOR_RAMDISK_KERNEL_MODULES_OPTIONS_FILE to Soong
8cdf078a2a : Export vendor ramdisk kernel related variables to Soong
fbcc8b9f95 : Version bump to BP1A.241126.003 [core/build_id.mk]
29a0350589 : Deprecate cc_binary aconfigd and the controlling flag
7562f08ac9 : Handle null exception messages in FeatureFlagsImpl.java.template
1bf31fab4b : Version bump to BP1A.241121.002.A1 [core/build_id.mk]
155cc4662a : Version bump to BP1A.241126.002 [core/build_id.mk]
1e3db01eb2 : Add TARGET_USERIMAGES_USE_F2FS and BOARD_CACHEIMAGE_PARTITION_SIZE to soong_config_variable
9f6234fb1c : Deprecate cc_binary aconfigd and the controlling flag
0622cc1224 : codegen: don't log AconfigStorageReadException
75cc6ce6a4 : Export bootconfig variables to Soong
e003de4d74 : Increase dump-words-file limit again
39cd33701c : Salt init_boot and vendor_boot using the build number
fc1ea362c8 : Version bump to BP1A.241125.002 [core/build_id.mk]
fe3a08df5f : Support ADB product keys purging with a dedicated flag
4b2f1d48d3 : Add soong config variable for ShannonIms
b455956334 : Update dependencies on graphics.common HAL to latest
96a307fc25 : Version bump to BP1A.241122.003 [core/build_id.mk]
95769d3a7a : Version bump to BP1A.241122.002.X1 [core/build_id.mk]
509eed2383 : Parse RELEASE_PLATFORM_BASE_SDK_EXTENSION_VERSION
c4a5f78417 : Set soong config variables for cbd_v2
6c99ed6ee8 : Update Security String to 2025-01-01
1e995abe47 : Version bump to BP1A.241122.002 [core/build_id.mk]
4d098853f9 : Version bump to BP1A.241121.003 [core/build_id.mk]
db08311f1b : Add prefetch binary to system
fe3c6f1e20 : Export INTERNAL_KERNEL_CMDLINE to Soong
bedd13b544 : Roll out the edit monitor to 100% of users
c3ad53c4c8 : Add soong_config_set_string_list
a49492273c : Version bump to BP1A.241121.002 [core/build_id.mk]
653943281e : Remove trace redactor from soong-built system image
b9c461db08 : Update vendor_api_levels.go with the vintf finalization
a7da9cbd93 : update java local flag read code
132c5f0074 : Export boot image-specific variables to soong
3eea481e22 : Export BOARD_INCLUDE_DTB_IN_BOOTIMG to Soong
fc3d31bcc4 : capture NoClassDefFoundError
862e8b443f : Version bump to BP1A.241120.002 [core/build_id.mk]
f3b37bc9c9 : Add TARGET_RECOVERY_UI_LIB to soong_config_variable
da6ff4a4b5 : Revert^2 "remove file check"
d02a9cf4d2 : Revert "remove file check"
e38c1f486c : Export variables for avb boot images to soong
54612e3db6 : Export correct boot image related variable to Soong
bdef1f1fd5 : Fix bug in write fingerprint logic.
db9227d725 : Add fingerprint info to package context.
afa826a773 : Use the RELEASE_TARGET_JAVA_21 flag for makefiles
89a402b794 : Version bump to BP1A.241115.004.A2 [core/build_id.mk]
8b0499bb9a : Revert^2 "RESTRICT AUTOMERGE: Package xTS console into CTS 13 to support MCTS."
993c4de29a : Increase dump-words-to-file capacity
dc2ac98457 : Fix android_gsi dirs and symlinks
3d80c6a500 : add new storage sub library
5a322134e4 : Version bump to BP1A.241119.002 [core/build_id.mk]
a849e06827 : Use ext4 for android_gsi image
d51b0e0673 : Export BoardUsesGenericKernelImage to Soong
5540452429 : Version bump to BP1A.241115.004.A1 [core/build_id.mk]
6ede52cf4d : Add device target build script wrapper
47d30424cd : Revert^2 "Reland linking shared_libs to their modules"
78fd076d99 : Reboot edit monitor when memory exhausted
a6a0007ef2 : Export ProductBuildVendorBootImage to soong
6b1a0e1d0a : Drop RRO autogeneration for soong apps
b74e105778 : Version bump to BP1A.241118.004 [core/build_id.mk]
b44af1747c : Report optimized/unoptimized targets correctly
ab3a63970b : Version bump to BP1A.241118.003 [core/build_id.mk]
5848147ed1 : Remove trace redactor from build
6988272f77 : Set the edit monitor memory threshold as a percentage
95084b1e30 : Remove special case for ltp build rules
219a502e3e : remove file check
7cde0fa88f : Version bump to BP1A.241118.002 [core/build_id.mk]
2f93c91820 : change test map
d9ce51afaa : Revert^2 "add PlatformAconfigPackageInternal"
1b422968b0 : Revert "add PlatformAconfigPackageInternal"
01e6974377 : Version bump to BP1A.241115.004 [core/build_id.mk]
ae74dab702 : Revert "Reland linking shared_libs to their modules"
a84f5d555e : enable read from new storage
abb1839784 : Add --allow-read-write flag to aconfig
acc0a85371 : Export boot partition variables to soong
589b6a9caa : add PlatformAconfigPackageInternal
25cbc38585 : add PlatformAconfigPackageInternal
3dbe3117ac : Write fingerprint to package map.
bed80a0cca : Update dependencies on graphics.common HAL to latest
37f52dbfa6 : Version bump to BP1A.241115.003.X2 [core/build_id.mk]
d39edf5a67 : Version bump to BP1A.241115.003.X1 [core/build_id.mk]
93536cd8f1 : Version bump to BP1A.241115.003 [core/build_id.mk]
c8071e23cd : Version bump to BP1A.241115.002.X1 [core/build_id.mk]
fea93527a0 : Add a switch to configure APEX payload type
dafe563d95 : Add INSTALLED_DTBOIMAGE_TARGET to droidcore-unbundled
2a29dfdf34 : Rename generic_system_image to aosp_shared_system_image
2f5b12aca4 : Version bump to BP1A.241115.002 [core/build_id.mk]
ffb0168b89 : Rollout edit monitor to 50% of the users
5dbad404cd : Begin reporting Test Discovery Agent metrics
7d85971722 : Record initial metrics in build_test_suites
29b821e5b4 : Build using build_test_suites_binary
2b3b093e86 : Implement metrics agent for built_test_suites
af198bd02f : Export partition-specific avb information to soong
c513070f52 : Set system property from list of backported fixes
882116d631 : Reland linking shared_libs to their modules
0bae6dfd85 : add_img_to_target_files: Don't build unnecessary vbmeta images
d5403179ed : Add hyphenation files to layoutlib SBOM
97eacc5ab5 : Version bump to BP1A.241114.002 [core/build_id.mk]
5f84cf727b : Build ramdisk's build.prop with soong
f9156ffc9d : Revert "Include aconfig_flags.pb iff RELEASE_CREATE_ACONFIG_STORAGE_FILE is enabled."
f617be169a : Increase the memory consumption for edit monitor
585b4344ef : Make edit monitor error type clearer
06924ce157 : Revert^6 "Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly"
cc00a0cdae : Move write-partition-file-list out of if statement
e7b245dcc8 : Include aconfig_flags.pb iff RELEASE_CREATE_ACONFIG_STORAGE_FILE is enabled.
c5bc781ac2 : In order to give devs better local parity with CI and decouple from CI, migrate config to the repo.
67603050a1 : Revert "Check remaining bytes before attempting to read from buffer"
3a6004bd50 : Add PRODUCT_IGNORE_ALL_ANDROIDMK to ignore Android.mk files
031379d088 : Version bump to BP1A.241112.002 [core/build_id.mk]
2ae7f4bc82 : Revert^5 "Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly"
bae8557ca0 : Write list of files in the ramdisk
6f9ce5a7b3 : Rename linkerconfig to linker_config
de75a0fc56 : Version bump to BP1A.241111.002 [core/build_id.mk]
0cc202e8fb : Add hyphenation files to layoutlib data dist files
d28b8d8a59 : Version bump to BP1A.241108.004.A1 [core/build_id.mk]
be78edc9e5 : Define android_gsi module
411bc852d0 : Revert "Revert "add aconfigd-system binary to be installed on /s..."
097156931b : Revert "Revert "add aconfigd-system binary and libaconfigd_syste..."
4c12368a7c : Revert^4 "Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly"
8998d09540 : refactor
1dedc03974 : Convert PRODUCT_COPY_FILES to a list
5bd57513b6 : Add flag for enabling Rust aconfigd_system
1bfc8e11ae : Update code comment to match behavior
7ea5a9fe0d : Version bump to BP1A.241107.001.A2 [core/build_id.mk]
10d7fb1095 : Add flag for enabling Rust aconfigd_system
a4187a1460 : Add dlkm related board config vars for soong migration
1f11e898aa : Version bump to BP1A.241108.004 [core/build_id.mk]
8a34c88af5 : Version bump to BP1A.241108.003.X1 [core/build_id.mk]
6e0cd25486 : Version bump to BP1A.241108.003 [core/build_id.mk]
33238833eb : Revert^3 "Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly"
163223dc2f : Remove video_codec soong config variables
16c4e3a191 : Add BOARD_GENFS_LABELS_VERSION
0c357c9947 : Update linker configuration for generic_system_image
0cbf8adcdc : Revert^2 "Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly"
bdc44e3d1a : Version bump to BP1A.241108.002 [core/build_id.mk]
5c2997e41a : Version bump to BP1A.241107.001.A1 [core/build_id.mk]
549fe2a516 : Add vendor_kernel_boot files to target-files-package
62ac1d9186 : Make --allowlists optional
5f7a83f8b1 : Export vbmeta variables to soong
d751b5249d : Revert "Export vbmeta variables to soong"
10adf35cbf : Add build rule for build_test_suites
7e48ab3534 : Version bump to BP1A.241107.001.X1 [core/build_id.mk]
688cae0126 : Add fingerprint to package map.
177864ed33 : rename appfunction sidecar jar
0b20edce61 : Preload sidecar in base system ext
abaec6c6a3 : Fix ide_query prober script.
8cce56672e : Revert "Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly"
fab1cbef40 : Export vbmeta variables to soong
735f41a9be : Hook RELEASE_MESSAGEQUEUE_IMPLEMENTATION into build
274435657e : Delete the build/install rules of dlkm build.prop from make
f20d9745d0 : Rename build/make/target/board/Android.mk to android-info.mk and include it in main.mk directly
d1c4a8b298 : Change the flag to enable edit monitor to ENABLE_ANDROID_EDIT_MONITOR
1008b3e89d : java FlagImpl: Clear context without instrumentation
569b834bd3 : Add llndk_libs to PRODUCT_PACKAGES
cc5aa13bd3 : Revert "add aconfigd-system binary to be installed on /system"
65a9fe6409 : Revert "add aconfigd-system binary and libaconfigd_system librar..."
4438f35906 : java FlagImpl: Clear context into Provider
3b50d0d19c : Add errorcode in AconfigStorageException
5cac84339f : Move uprobestats into Mainline
d9d8dfaa14 : Export some board config vars to autogenerate android_info.prop
b59419f19b : Version bump to BP1A.241106.002 [core/build_id.mk]
e0dc6b9d77 : Revert "Symlink shared libs to their module in general-tests.zip"
aaf375685a : add aconfigd-system binary and libaconfigd_system library to generic system image
869cb22cec : add aconfigd-system binary to be installed on /system
d4a51e5cbf : Specify product_specific property in adb_keys
d2cc6a6c99 : Implement test discovery agent for test zip discovery
d5205f2218 : Version bump to BP1A.241105.001.A1 [core/build_id.mk]
a3c088ce1d : Convert build/make/tools/fs_config/Android.mk to Android.bp
2a9290258f : Symlink shared libs to their module in general-tests.zip
239c5308eb : Always include vendor_manifest.xml in PRODUCT_PACKAGES
3bba90d0b7 : Version bump to BP1A.241105.002 [core/build_id.mk]
a5bec26ceb : Roll out edit monitor to 10% of users
aaf27b6057 : Check remaining bytes before attempting to read from buffer
16b3842fdf : Add `chre` related variables to soong_config_variable.
5b55f92683 : core/Makefile: Make PACK_DESKTOP_{RECOVERY,UPDATE}_IMAGE into product variable
a2133ee538 : Add mmd to the base system image
e037024fb6 : Throw exceptions instead of returning null
0ed69beebb : Add DTS to Android build system
c3e457e329 : Version bump to BP1A.241030.002.A4 [core/build_id.mk]
be57597ba6 : Revert "add new flag new_storage_platform_system_api"
9610893156 : Use optional assignments for some virtual_ab properties
5bf20389cd : Version bump to BP1A.241030.002.A3 [core/build_id.mk]
bb67881c5d : Version bump to BP1A.241101.001.X3 [core/build_id.mk]
15baa34e12 : Version bump to BP1A.241030.002.A2 [core/build_id.mk]
b023411da2 : Export BOARD_SYSTEM_KERNEL_MODULES related variables to soong
ee253971ab : make getDefaultProvider static
7cf07b3ec7 : add new flag new_storage_platform_system_api
df7efa6242 : Explicitly declare selinux_policy_(vendor|odm) in PRODUCT_PACKAGES
6c68c5a3d4 : Export BUILDING_ODM_IMAGE to soong
a78e566b5f : Version bump to BP1A.241101.002 [core/build_id.mk]
bf7d45b9a1 : Version bump to BP1A.241101.001.X2 [core/build_id.mk]
2d9428e5e9 : Convert vendor_manifest.xml to soong
3ca7cefe72 : Start edit monitor only when the feature is enabled
345373c7e9 : Version bump to BP1A.241101.001.X1 [core/build_id.mk]
8c7495c217 : Use soong-built vintf/manifest.xml file
be41b10082 : add allow_instrumentation back
da05576410 : Export PRODUCT_PRODUCT_LINKER_CONFIG_FRAGMENTS to soong
c505dbfade : Refactor fingerprint logic.
f3540005c6 : aconfig: update c++/rust codegen
573c43eb34 : Version bump to BP1A.241031.002 [core/build_id.mk]
1de39ab525 : Version bump to BP1A.241030.002.A1 [core/build_id.mk]
7db151f413 : Make pbtombstone a host tool.
2c790f7b3a : fix comments in last change
4b341df865 : Version bump to BP1A.241030.002 [core/build_id.mk]
213409c851 : throw exception if read from deviceConfig doesn't work
afb25696da : create AconfigPackageImpl
96f41f56de : Remove TrimmedApex product variable
8e001ae470 : Version bump to BP1A.241029.002 [core/build_id.mk]
4d33c64f70 : Only install snapshotctl on debug builds
5d6eee1d43 : Remove non-existent Wear OS Trendy teams.
3368ebca60 : Revert "Add Cert processor system property"
bc00e060cb : Export PRODUCT_VENDOR_LINKER_CONFIG_FRAGMENTS to soong
b6618c2ece : Export some board config vars to autogenerate vendor-build.prop
0946070e68 : Export related vars to soong for libdrmresource
66d7d580e4 : Fine tune the soong config variables for libExynosC2OSAL
cd82659fc1 : aflags: Check for existence of container when filtering
78846c2bd0 : aflags: Check for existence of container when filtering
228842b761 : Version bump to BP1A.241025.003 [core/build_id.mk]
64ad75f20b : Support --verbose option in edit monitor
ca5c5e64a8 : Version bump to BP1A.241025.002.X1 [core/build_id.mk]
5257bc13ed : Add target_boots_16k soong config variable
d15982c80f : Version bump to BP1A.241018.003.A2 [core/build_id.mk]
f64dc60611 : RESTRICT AUTOMERGE Ignore-AOSP-First: Aosp doesn't need this. Better mechanism in place. For config captured only in tradefed
c13c65a4b1 : RESTRICT AUTOMERGE Ignore-AOSP-First: Aosp doesn't need this. Better mechanism in place. For config captured only in tradefed
e35e25afd6 : Separating framework platform crashrecovery jar
2c429b5a01 : Export libacryl related flags to soong
e91008d246 : Version bump to BP1A.241025.002 [core/build_id.mk]
5d162229e8 : A few fix of the daemon_manager
7d4268feec : Add integration test for edit monitor
c5b4bfb5e5 : Update edit_event proto
7f22db8fc1 : Support --dry_run option in edit monitor
ba64f31b56 : Report edit monitor error to clearcut
0d2730f871 : Define BOARD_API_LEVEL_PROP_OVERRIDE for GRF prop
cf106526e3 : Move RECOVERY_API_VERSION and RECOVERY_FSTAB_VERSION to build/core/android_soong_config_vars.mk
7665ff23a3 : Export wpa_supplicant_8 related settings to soong
7db1a60ffe : Version bump to BP1A.241024.002 [core/build_id.mk]
768f1ac566 : Version bump to BP1A.241023.002 [core/build_id.mk]
1a88cffd42 : Export PRODUCT_COPY_FILES to Soong
0597a9d775 : Filter out the logs from inotify_buffer in edit monitor
8a22579652 : Performance optimization for edit monitor
35bd3d27b9 : Ignore the edits unrelated to Android dev in the edit monitor
823f357f17 : Version bump to BP1A.241023.001.X1 [core/build_id.mk]
703a048b5c : Add framework-connectivity-b to the boot classpath
2b39a02e35 : Adding Desktop Stats trendy team
c0190662b4 : Add the actual main.py for edit monitor
cb4d82e027 : Make the result IDE query deterministic.
b414f050dc : Change location of font XML files for layoutlib build
27f5e69636 : Remove dependencies on the 1-variant fallback
4b1deb8c7a : Export BUILDING_PRODUCT_IMAGE to Soong
84adcd14d4 : Soong config for RELEASE_PACKAGE_WEBVIEW_VERSION.
a3197cb6fb : Version bump to BP1A.241018.003.A1 [core/build_id.mk]
2ba0213cc6 : Update ownership documentation
5b9fc904db : aconfig_storage_read_api: allow read api to read from mutable mapping
1f7a71e8ac : Version bump to BP1A.241022.002.X1 [core/build_id.mk]
7d8be54f50 : Move building bootloader/radio image task to proper place
2fb520ea28 : Move uprobestats client lib into Mainline
fdb76a591f : Version bump to BP1A.241018.003.X1 [core/build_id.mk]
a79b045846 : Version bump to BP1A.241022.002 [core/build_id.mk]
28810e40d8 : Export BUILDING_VENDOR_IMAGE to soong
6786fe4e10 : aconfig: make write api lib available to config infra
0545809fb2 : Remove the "metadata" module
c3eefb6d3d : Support a raw bootconfig file as input to vendor-bootconfig.img
bae5f72a3f : Add the basic edit monitor logic
c7b8a9268a : Version bump to BP1A.241021.002 [core/build_id.mk]
3394e2f22a : Use --sysroot when compiling against the NDK
f340caf028 : Export more video codec variables to soong
f198911bde : Revert "Use --sysroot when compiling against the NDK"
ad480c26a5 : core/Makefile: Fix filenames with commas
34f21c24f3 : Check ELF alignment on W+ devices.
d67eb0f0b7 : Version bump to BP1A.241018.002.X1 [core/build_id.mk]
709d4b2721 : Version bump to BP1A.241018.003 [core/build_id.mk]
09e76baa3a : [Ravenwood] Make sure that runner errors are not swallowed
f9671c3526 : Version bump to BP1A.241017.001.A1 [core/build_id.mk]
543ed4e7b2 : Version bump to BP1A.241018.002 [core/build_id.mk]
21058d0576 : Version bump to BP1A.241017.001.X1 [core/build_id.mk]
64887d8cc8 : Add tradeinmode to the system image.
66ab0787fd : Make the `container` argument required for the `create-cache` subcommand.
2083f1694c : Use --sysroot when compiling against the NDK
f33532e73c : aconfig_storage_read_api: make map_file function public.
eef83d0886 : aconfig: don't fail if a proto file is missing
ba109976fa : Move build/make/target/product/gsi/Android.mk to build/make/core/tasks/check-abi-dump-list.mk
d8535ef949 : Add soong config variables ADDITIONAL_M4DEFS for file_contexts.bin
174db7b179 : Remove phony module vndk_apex_snapshot_package and add its required modules to auto-included-modules
41cb2c2268 : Add trendy team for wear
da11b1651d : Version bump to BP1A.241015.003 [core/build_id.mk]
ab01c3918c : aconfig: OWNERS: mark amhk, joeo, jham as LAST_RESORT_SUGGESTION
d448be227a : Add test discovery agent skeleton
f81828cb7b : Version bump to BP1A.241015.002 [core/build_id.mk]
040a041265 : Rebrand STS SDK as AutoRepro
b5665cd53f : Add trendy team for automotive cast
a6b42ad768 : Version bump to BP1A.241014.003 [core/build_id.mk]
9aabdc04cc : Version bump to BP1A.241014.002 [core/build_id.mk]
98435af440 : Add odm/* symlinks to system image
a20a7fb5ec : Fix printf usage in _wrap_build()
9da40171fb : Add special paths for BoardConfig.mk for 32-bit watch gf.
d4907e91fc : Version bump to BP1A.241011.002 [core/build_id.mk]
0bfdb54a40 : Export video codec variables to soong
42ec305c6d : Revert^2 "Add missing modules to the internal system image"
9348f5455e : Revert "Add missing modules to the internal system image"
16b2818fa6 : Switch the FS type of generic_system_image to erofs
3a2809ab8c : Add missing modules to the internal system image
d04c2c504f : Update workspace to 1.23
f5f0f63e60 : Return default instead of crash if package not found.
8898a9e281 : Add soong config variables for selinux_policy_system in Soong
4d49111eff : Assign DependencyIds and GeneratedFiles to the root module.
f73f78703d : Remove non-system modules from generic_system_image
a1033d31bd : robolectric upgrade required changes
267956eff9 : Remove module llndk_in_system in build/make/target/product/gsi/Android.mk and add the required modules to auto-included-modules directly.
d490af1b25 : Import some product definition variables to Soong
5f837486f2 : Update codegen to return default value if package is not in storage file
c4e5f6cf4f : Remove non-existent health 2.0 recovery package
940a8fb4fa : Reapply "Add libcgrouprc to multilib.both.deps"
476da833e1 : Revert^3 "Use -target-feature for MTE"
936f02befa : Revert "Add libcgrouprc to multilib.both.deps"
e5d12064cf : Update Security String to 2024-12-01
7b43b96b94 : Add capability to let devices specify linker config fragments for /product partition.
89ea048f58 : Add adb_keys symlink to system image
715bc9adeb : core/Makefile: Add experimental desktop update image option
fe67d8f1f2 : Revert^2 "Use -target-feature for MTE"
ee23e8fbb3 : Version bump to BP1A.241004.002.A2 [core/build_id.mk]
efaeb5e58b : Adding EXTRA_ALLOWED_DEPS_TXT config for non-mainline deps
2c5a8a8615 : Version bump to BP1A.241004.002.A1 [core/build_id.mk]
d285e3d7a1 : Add Cert processor system property
d75493e3a0 : Remove aconfig storage files from next allolwlist
e4c50bf24b : Let genrule have the entrance depend on signapk
d46dbd539e : Change VINTF_FRAMEWORK_MANIFEST_FROZEN_DIR to a fix value
0b21e0f1a6 : Export related variables for vintf_data to soong
577407bac1 : Revert "Add soong config variables for selinux_policy_system in ..."
e270df131d : Move the logic of using prebuilt tradefed to build/core/task
981b13f8ac : Move building bootloader/radio image task to proper place
f460a638f1 : Define file_diff allowlist for the next builds.
05e28fa8f5 : Do not start edit monitor under cog workspace
49697ab9d1 : Version bump to BP1A.241004.002 [core/build_id.mk]
e4fb83024f : add default path to flag bp files in mainline module
79b2eddeb2 : Export libexynosscaler related variable to soong
51a066f9de : Convert build/make/target/product/security/Android.mk to Android.bp
382e6def41 : aflags: update aconfigd_protos rust build target name
364469d1b2 : Add missing modules to the internal system image
a34d80ee0c : Add libcgrouprc to multilib.both.deps
db4d71bd85 : Create trendy team for ravenwood
ad030fa54f : aflags: update aconfigd_protos rust build target name
5e55e4577c : Move test files from tests/ to data/v1/.
32721f125b : Revert "Use -target-feature for MTE"
b7679766ed : Add release configs artifacts to metadata build
8014b0510a : Convert modules below in build/make/target/product/gsi/Android.mk to Android.bp
3967e509e4 : Check for unnecessarily allowlisted files
52153e973d : Add java_aconfig_library to Android.bp
e472adf4d5 : add the test config lib to a target that is used in platform build
8d422a7456 : Adding EXTRA_ALLOWED_DEPS_TXT config for non-mainline deps
e22bf83b04 : Move ap3a configuration to build/release.
d32bf9f3aa : Add a main function for the edit monitor
11986e8242 : Add the protobuf defintion to store edit event logs
4ff9054912 : Move ap3a configuration to build/release.
ecd8c03e87 : Export TARGET_BOARD_PLATFORM to soong
d2badf9294 : core/Makefile: Add experimental desktop recovery image option
b3b48a4422 : Drop dexpreopt files of service-compos from install diff allowlist
f25d2e3b41 : Add ability to read both v1/v2 package map.
d28da5cfe3 : Support to cleanup all existing edit monitor instances
a23f934ae3 : Version bump to BP1A.241001.002 [core/build_id.mk]
47a8057d09 : Export ril related flag to soong
47619b0fec : Remove system-ext from test.
205a2fc6e4 : Support restart in daemon manager
aabd301152 : Drop extraneous entries from install diffs
ff713e997e : Read the proto paths with root
776e4da5c4 : Update error message for easier error resolution
e4da757a9f : Use profile at framework/base/boot/ instead of the combined one at framework/base/config
be64285ebb : Remove libvendorsupport.so from the filediff allowlist
8add49972f : Add missing modules to the internal system image
d620b935ec : Add proposed trendy teams for GTS modules
532d7f9e5b : Add libvendorsupport to `deps` of `generic_system_image`
26e0430ee0 : Define generic system image for Android targets
8eccfe7ed5 : Version bump to BP1A.240927.003 [core/build_id.mk]
e7f87c4d2b : Revert "Add has_boot_local_override to aconfig_storage_file.hpp"
2f739face8 : Version bump to BP1A.240927.002 [core/build_id.mk]
5f94039840 : Remove aconfig/flag.info from install diffs
7c737ca9c0 : Remove DEXPREOPT_IMAGE_PROFILE_BUILT_INSTALLED from make-metadata.csv
b5f1c79373 : Add has_boot_local_override to aconfig_storage_file.hpp
d8c50acce6 : Fix UnusedVariable errorprone issues
8fab8160a1 : Version bump to BP1A.240926.002 [core/build_id.mk]
7bed8164f3 : Add keep rules for WeaklyReferencedCallback annotation
613b643ccf : Support test_runner_options in android_test
37e6884982 : Export partition-related variables to soong
94748a3aa7 : aconfig: add a flag for launching aconfigd from mainline
1b0e3962cf : Make new storage and old starge seperate
331a91829c : Add alderlake variant
58c779db74 : Export AUDIOSERVER_MULTILIB to soong
52558650c9 : Update the tracking bug for init.environ.rc migration
7683b2f6d1 : aconfig: remove create flag info api
070bf54dce : Remove some entries from install diff allowlist
439a704bcb : Drop obsolete install diffs between make and soong partitions
b3e92d28ca : Return null if key is not found in map
dc2840dafc : Support monitoring the subprocess in edit monitor
ad7b673386 : Version bump to BP1A.240925.002 [core/build_id.mk]
7fd4b713ce : core/Makefile: Add desktop migration image target hook
1513eab6f9 : Remove prompt to repair Cog workspace
2c9d884f3d : Define __ANDROID_VENDOR_API__ for product variants
5ba8a8b775 : aconfig: create flag info file at build time
3e9e7ae4a9 : Remove checkPartitionsForJavaDependency()
2404c4a30e : Move cog setup logic into shell_utils so it can be used when any of these commands are run:
03b7abee1a : Delete the installation rules of host ART boot image from make
f6f2a75433 : Add soong config variables for selinux_policy_system in Soong
73d5bfc711 : Set ro.*.build.version.sdk_minor sysprop
6468f44329 : [RESTRICT AUTOMERGE] Enable build MCTS on aosp-android13
d297a0925a : Add dependency information to the ide_query output.
184ca73d96 : Add soong flag to use speed-profile in SystemUI
097ca8881c : Replace soong config module types with selects
8ae967e5c2 : Remove bazel completions from envsetup.sh

+- Project: platform/build/release

fbad0c3e : bp1a flag sync
59fa9d7a : Update aconfig flags to BP1A.250305.020
8ecd7e4b : Rollback `browsing_refactor` and `set_addressed_player` in 25Q1
38b31e2f : Add post_callback flag to bp1a
8d54e1df : Update boot flags to BP1A.250305.005
a8a9cfd4 : Removing flag android.media.audio.enable_ringtone_haptics_customization from bp1a.
bda6ccba : Removing flag com.android.server.notification.notification_vibration_in_sound_uri from bp1a.
6155ce4a : Advance Kernel to Build: 12919773 in 25Q1 Comet
f64115b9 : Advance Kernel to Build: 12919773 in 25Q1 Caimito
fd5cf1a6 : Advance Kernel to Build: 12919773 in 25Q1 Akita
7047c966 : Advance Kernel to Build: 12919773 in 25Q1 shusky
a9cda0c4 : Advance Kernel to Build: 12919773 in 25Q1 tangorpro
06984213 : Advance Kernel to Build: 12919773 in 25Q1 pantah
cd4fccdb : Advance Kernel to Build: 12919773 in 25Q1 felix
9898271d : Advance Kernel to Build: 12919773 in 25Q1 lynx
42908194 : Advance Kernel to Build: 12919773 in 25Q1 bluejay
007d3922 : Advance Kernel to Build: 12919773 in 25Q1 raviole
a3e4fbd1 : Trunk_staging: Set SPL to 2025-02-05
ff548773 : AP4A: Set SPL to 2025-02-05
8dd584fa : fix newline at the end of file
bb3a64a0 : Trunk_staging: Set SPL to 2025-03-05
039b6f11 : BP1A: Set SPL to 2025-03-05
a194a1b1 : Trunk_staging: Set SPL to 2025-02-05
f78ce6c1 : Advance Kernel to Build: 12883629 in 25Q1 Comet
404dac50 : Advance Kernel to Build: 12883629 in 25Q1 Caimito
bc4403b7 : Advance Kernel to Build: 12883629 in 25Q1 Akita
aa3f223e : Advance Kernel to Build: 12883629 in 25Q1 tangorpro
1f2ad1f0 : Advance Kernel to Build: 12883629 in 25Q1 felix
e96a2eed : Advance Kernel to Build: 12883629 in 25Q1 pantah
239d0158 : Advance Kernel to Build: 12883629 in 25Q1 lynx
692995a8 : Advance Kernel to Build: 12883629 in 25Q1 raviole
c04c6ff2 : Advance Kernel to Build: 12883629 in 25Q1 bluejay
404a7e5b : Advance Kernel to Build: 12883629 in 25Q1 shusky
d75ca78c : Enable network_time_update_service for 25Q1
8d4c6f2b : Removing flag android.crashrecovery.flags.synchronous_reboot_in_rescue_party from bp1a.
7fb75319 : Adding flag com.android.server.telecom.flags.end_session_improvements to bp1a.
d68506a2 : Adding flag com.android.settings.flags.satellite_oem_settings_ux_migration to bp1a.
cdf460bd : Advance Kernel to Build: 12859539 in 25Q1 Comet
72d1f0c6 : Advance Kernel to Build: 12859539 in 25Q1 shusky
3a44c7e7 : Advance Kernel to Build: 12859539 in 25Q1 Caimito
e61a6f4a : Advance Kernel to Build: 12859539 in 25Q1 Akita
fcfaa7c3 : Advance Kernel to Build: 12859539 in 25Q1 tangorpro
11ccc009 : Advance Kernel to Build: 12859539 in 25Q1 pantah
44fbe057 : Advance Kernel to Build: 12859539 in 25Q1 felix
b8f4ebca : Advance Kernel to Build: 12859539 in 25Q1 lynx
5373ad71 : Advance Kernel to Build: 12859539 in 25Q1 bluejay
1af9f22b : Advance Kernel to Build: 12859539 in 25Q1 raviole
f66d84ca : Removing flag libgooglecamerahal.flags.zsl_video_denoise_in_hwl_two from bp1a.
f9b06c4a : Advance Kernel to Build: 12846072 in 25Q1 Caimito
332a1fd7 : Advance Kernel to Build: 12846072 in 25Q1 felix
30c6e792 : Advance Kernel to Build: 12846072 in 25Q1 lynx
ae7ff580 : Advance Kernel to Build: 12846072 in 25Q1 bluejay
17bbad58 : Advance Kernel to Build: 12846072 in 25Q1 raviole
108d206d : Advance Kernel to Build: 12846072 in 25Q1 Comet
2e494b0f : Advance Kernel to Build: 12846072 in 25Q1 Akita
395ce3ed : Advance Kernel to Build: 12846072 in 25Q1 shusky
d48d1679 : Advance Kernel to Build: 12846072 in 25Q1 tangorpro
f2948672 : Advance Kernel to Build: 12846072 in 25Q1 pantah
ee66c422 : Don't exclude framework-photopicker from apex boot jars for bp1a
452248b2 : Advance Kernel to Build: 12819897 in 25Q1 Akita
751d792c : Advance Kernel to Build: 12819897 in 25Q1 tangorpro
eb49a979 : Advance Kernel to Build: 12819897 in 25Q1 lynx
6dab8a4c : Advance Kernel to Build: 12819897 in 25Q1 pantah
0c77d0ad : Advance Kernel to Build: 12819897 in 25Q1 raviole
0bc5bd01 : Advance Kernel to Build: 12819897 in 25Q1 Comet
2fcf31f5 : Advance Kernel to Build: 12819897 in 25Q1 Caimito
1e28469c : Advance Kernel to Build: 12819897 in 25Q1 shusky
f62cf7bb : Advance Kernel to Build: 12819897 in 25Q1 felix
f724f6e0 : Advance Kernel to Build: 12819897 in 25Q1 bluejay
139ed0e2 : Moving bp1a flags to build/release.
85202f15 : Removing flag com.android.intentresolver.fix_shortcuts_flashing from ap4a.
197c2adf : Removing flag com.android.intentresolver.fix_shortcuts_flashing from trunk_staging.
6bbe52c5 : Adding flag com.android.shell.flags.handle_bugreports_for_wear to trunk_staging.
b7a2c0c3 : Adding flag android.hardware.usb.flags.enable_udc_sysfs_usb_state_update to trunk_staging.
f3f54fb6 : Adding flag com.android.server.telecom.flags.keep_bluetooth_devices_cache_updated to trunk_staging.
5a3533f9 : Removing flag com.android.wm.shell.enable_pip2 from trunk_staging.
671aad83 : Adding flag android.view.flags.enable_scroll_feedback_for_touch to trunk_staging.
fe9a3b64 : Adding flag com.android.internal.telephony.flags.deprecate_cdma to trunk_staging.
433adb08 : Adding flag com.android.internal.telephony.flags.cleanup_cdma to trunk_staging.
3fd97d9c : Removing flag com.android.systemui.enable_view_capture_tracing from trunk_staging.
293f1952 : Remove test packages from production release configurations
e562e11f : Adding flag com.android.bluetooth.flags.hci_vendor_specific_extension to trunk_staging.
72f279b9 : Add tejaswiy, opg as OWNERS for aconfig overrides
5ba8727b : Adding flag android.service.autofill.improve_fill_dialog_aconfig to trunk_staging.
222b120a : Adding flag com.android.org.conscrypt.flags.spake2plus_api to trunk_staging.
4e297aa1 : Adding flag com.android.graphics.surfaceflinger.flags.deprecate_frame_tracker to trunk_staging.
72f6262c : Adding flag com.android.window.flags.enable_desktop_windowing_scvh_cache_bug_fix to trunk_staging.
84bfab63 : Removing flag android.view.flags.enable_scroll_feedback_for_touch from trunk_staging.
d2ed3061 : Adding flag android.app.nm_binder_perf_permission_check to trunk_staging.
1e8f5440 : Revert "Adding flag com.android.server.telecom.flags.telecom_app_label_proxy_hsum_aware to trunk_staging."
309c356e : Adding flag com.example.android.aconfig.demo.flags.pc_test_7 to trunk_staging.
668798f0 : Adding flag com.android.window.flags.app_compat_async_relayout to trunk_staging.
419f26c3 : Adding flag com.android.systemui.enable_view_capture_tracing to trunk_staging.
8ba3de75 : Adding flag com.android.settingslib.flags.ignore_a2dp_disconnection_for_android_auto to trunk_staging.
fe876986 : Removing flag android.os.adpf_measure_during_input_event_boost from trunk_staging.
c92ff2ff : Adding flag android.os.message_queue_testability to trunk_staging.
2c0b7d11 : Advance Kernel to Build: 12755779 in trunk_staging Comet
340e06f3 : Adding flag android.view.accessibility.copy_events_for_gesture_detection to trunk_staging.
2a85b9f7 : Advance Kernel to Build: 12755779 in trunk_staging Caimito
c416d27c : Adding flag android.permission.flags.use_frozen_aware_remote_callback_list to trunk_staging.
edb10d06 : Removing flag com.android.window.flags.ensure_keyguard_does_transition_starting from trunk_staging.
c73b0189 : Revert "Removing flag com.android.graphics.hwui.flags.high_contrast_text_small_text_rect from trunk_staging."
80e7fd2e : Adding flag android.view.flags.enable_scroll_feedback_for_touch to trunk_staging.
573c381a : Adding flag android.app.pic_cache_nulls to trunk_staging.
b1389d5c : Adding flag android.os.material_shape_tokens to trunk_staging.
15913592 : Adding flag com.android.server.telecom.flags.allow_system_apps_resolve_voip_calls to trunk_staging.
e16262bc : Update aosp_current to AP4A for 24Q4 release.
401c67e3 : Adding flag com.android.libcore.hpke_public_api to trunk_staging.
d3346ffe : Adding flag com.android.systemui.qs_ui_refactor_compose_fragment to trunk_staging.
e5ec8ea2 : Adding flag android.content.res.system_context_handle_app_info_changed to trunk_staging.
6a52fb87 : Adding flag com.android.graphics.flags.display_bt2020_colorspace to trunk_staging.
3553a446 : Bump SDK Extension version to 16
c357bef3 : Adding flag com.android.providers.settings.disable_bulk_compare to trunk_staging.
82f40050 : Removing flag android.permission.flags.use_frozen_aware_remote_callback_list from trunk_staging.
e6bde662 : Removing flag android.permission.flags.use_frozen_aware_remote_callback_list from trunk_staging.
79fbd644 : Advance Kernel to Build: 12748947 (6.1) in trunk_staging Husky and Ripcurrent
cd98fc7c : Adding flag com.android.launcher3.enable_use_top_visible_activity_for_exclude_from_recent_task to trunk_staging.
d16b4905 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_ONDEVICE_INTELLIGENCE_MODULE = true
9a59a211 : Adding flag com.android.window.flags.ensure_keyguard_does_transition_starting to trunk_staging.
e020ba1d : Removing flag com.android.bluetooth.flags.hci_vendor_specific_extension from trunk_staging.
003c88a9 : Adding flag com.android.internal.telephony.flags.force_imsi_certificate_delete to trunk_staging.
0865cea0 : Adding flag com.android.aconfig_new_storage.enable_aconfigd_from_mainline to trunk_staging.
5122e0d2 : Add display_compatibility_density flag
1441181d : Adding flag com.android.graphics.surfaceflinger.flags.display_config_error_hal to trunk_staging.
58e904c0 : Adding flag com.android.launcher3.enable_contrast_tiles to trunk_staging.
860888dc : Adding flag android.security.prevent_intent_redirect_show_toast_if_nested_keys_not_collected_r_w to trunk_staging.
3c466a49 : Adding flag android.car.feature.vehicle_property_remove_system_api_tags to trunk_staging.
28b3b34d : Removing flag com.android.server.am.fix_apply_oomadj_order from trunk_staging.
2dda5f56 : Adding flag android.appwidget.flags.remote_adapter_conversion to trunk_staging.
3470b4de : Adding flag com.android.server.power.feature.flags.move_wsc_logging_to_notifier to trunk_staging.
f2d31f9e : Adding flag com.android.bluetooth.flags.hci_vendor_specific_extension to trunk_staging.
3b0b53cb : Adding flag android.app.pic_separate_permission_notifications to trunk_staging.
435a64e2 : DO NOT MERGE ANYWHERE: Update RELEASE_PLATFORM_SDK_EXTENSION_VERSION to align with Q4Q3 (AP3A) release.
0c898425 : Advance Kernel to Build: 12742234 (6.1) in trunk_staging Oriole and Raven
4dc09549 : Advance Kernel to Build: 12742234 (6.1) in trunk_staging Husky and Ripcurrent
c92037c6 : Removing flag android.service.autofill.autofill_session_destroyed from trunk_staging.
851f1268 : Removing flag android.appwidget.flags.remote_adapter_conversion from trunk_staging.
ba265ecf : Adding flag com.android.internal.telephony.flags.action_sim_preference_settings to trunk_staging.
d4049139 : Removing flag com.android.systemui.shade_expands_on_status_bar_long_press from trunk_staging.
9973fa73 : Adding flag com.android.graphics.hwui.flags.bitmap_ashmem_long_name to trunk_staging.
bf37358b : Removing flag com.android.systemui.qs_ui_refactor_compose_fragment from trunk_staging.
e2278332 : Adding flag android.service.autofill.relayout_fix to trunk_staging.
1997a3d2 : Removing flag com.android.systemui.qs_new_tiles from trunk_staging.
55fecba7 : Adding flag com.android.graphics.surfaceflinger.flags.arr_setframerate_gte_enum to trunk_staging.
8c10dfa5 : Adding flag com.android.systemui.shade_expands_on_status_bar_long_press to trunk_staging.
f88040c9 : Removing flag com.android.window.flags.ensure_keyguard_does_transition_starting from trunk_staging.
9f8bf485 : Adding flag android.car.feature.audio_control_hal_configuration to trunk_staging.
7a34abb2 : Adding flag com.android.bluetooth.flags.rpa_offload_to_bt_controller to trunk_staging.
d8094b71 : Adding flag android.service.autofill.autofill_session_destroyed to trunk_staging.
4b6f8694 : Removing flag com.android.window.flags.enable_desktop_windowing_scvh_cache from trunk_staging.
93bd90e0 : Adding flag com.android.aconfig_new_storage.enable_full_rust_system_aconfigd to trunk_staging.
871fbfb7 : Adding flag com.android.media.flags.enable_route_visibility_control_api to trunk_staging.
6db5e574 : Advance Kernel to Build: 12735872 (6.1) in trunk_staging Oriole and Raven
a91e9b25 : Adding flag android.os.material_motion_tokens to trunk_staging.
5026efcb : Removing flag com.android.aconfig_new_storage.enable_full_rust_system_aconfigd from trunk_staging.
0941ca00 : Advance Kernel to Build: 12730958 (6.1) in trunk_staging Husky and Ripcurrent
f316633a : Revert "Adding flag com.android.server.display.feature.flags.subscribe_granular_display_events to trunk_staging."
bc6abe57 : Removing flag com.android.graphics.surfaceflinger.flags.arr_surfacecontrol_setframerate_api from trunk_staging.
10c3799b : Adding flag com.android.window.flags.ensure_keyguard_does_transition_starting to trunk_staging.
8a420989 : Adding flag com.android.window.flags.system_ui_post_animation_end to trunk_staging.
e26326a5 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_PACKAGE_NOTO_SANS_KHMER_VERSION = 2.004
c724c330 : Adding flag android.media.audio.dolby_ac4_level4_encoding_api to trunk_staging.
6528ccce : Adding flag android.permission.flags.cross_user_role_platform_api_enabled to trunk_staging.
57db1149 : Adding flag com.android.internal.camera.flags.data_delivery_permission_checks to trunk_staging.
4c6920c3 : Adding flag android.permission.flags.permission_request_short_circuit_enabled to trunk_staging.
3cfb69bf : Advance Kernel to Build: 12735872 (6.1) in trunk_staging Bluejay
666b082c : Add RELEASE_ACONFIG_REQUIRE_ALL_READ_ONLY build flag.
265116b5 : Adding flag android.view.flags.date_time_view_relative_time_display_configs to trunk_staging.
8aba5ea9 : Revert "Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RE..."
81e1af71 : Adding flag com.android.nfc.flags.use_device_lock_listener to trunk_staging.
ce9dd355 : Removing flag com.android.systemui.ensure_enr_views_visibility from trunk_staging.
068fc7ea : Adding flag com.android.internal.camera.flags.zoom_method to trunk_staging.
1bdfcc33 : Removing flag com.android.server.display.feature.flags.subscribe_granular_display_events from trunk_staging.
0721d6f4 : Adding flag com.android.hardware.input.touchpad_system_gesture_disable to trunk_staging.
9636c8a3 : Adding flag com.android.window.flags.update_dims_when_window_shown to trunk_staging.
8cd69740 : Adding flag com.android.providers.contacts.flags.cp2_sync_search_index_flag to trunk_staging.
ae915994 : Adding flag android.chre.flags.bug_fix_remove_exit_call_in_hal to trunk_staging.
813404d7 : Adding flag com.android.graphics.surfaceflinger.flags.connected_display_hdr to trunk_staging.
98279794 : Adding flag com.android.graphics.egl.flags.multifile_blobcache_advanced_usage to trunk_staging.
4a969296 : Removing flag com.android.window.flags.app_compat_async_relayout from trunk_staging.
bdde86d1 : Adding flag android.view.flags.scroll_capture_target_z_order_fix to trunk_staging.
84f88720 : Advance Kernel to Build: 12730958 (6.1) in trunk_staging Oriole and Raven
92d6ac55 : Adding flag com.android.systemui.qs_ui_refactor_compose_fragment to trunk_staging.
17bdea66 : Adding flag com.android.systemui.sim_pin_use_slot_id to trunk_staging.
ed558aae : Adding flag com.android.healthfitness.flags.phr_read_medical_resources_fix_query_limit to trunk_staging.
1f42a8a4 : Removing flag com.android.window.flags.ensure_keyguard_does_transition_starting from trunk_staging.
6a99d018 : Adding flag com.android.window.flags.app_compat_async_relayout to trunk_staging.
6c98e861 : Adding flag com.android.window.flags.enable_desktop_app_launch_transitions_bugfix to trunk_staging.
e51f633b : Adding flag com.android.window.flags.enable_desktop_windowing_exit_transitions_bugfix to trunk_staging.
4ebee060 : Adding flag com.android.window.flags.enable_desktop_windowing_enter_transition_bugfix to trunk_staging.
da089008 : Adding flag com.android.window.flags.enable_desktop_app_launch_alttab_transitions_bugfix to trunk_staging.
25002b61 : Adding flag com.android.internal.telephony.flags.cellular_identifier_disclosure_indications to trunk_staging.
01ebed5f : Adding flag com.android.window.flags.ensure_keyguard_does_transition_starting to trunk_staging.
e0c477b8 : Advance Kernel to Build: 12727355 (6.1) in trunk_staging Husky and Ripcurrent
2a2252ee : Adding flag com.android.bluetooth.flags.identity_address_type_api to trunk_staging.
2568ae29 : Advance Kernel to Build: 12730958 (6.1) in trunk_staging Akita
bee4d611 : Advance Kernel to Build: 12730958 (6.1) in trunk_staging Tangorpro
1493b882 : Adding flag android.app.admin.flags.unsuspend_not_suspended to trunk_staging.
1bcea500 : Adding flag android.net.http.preload_httpengine_in_zygote to trunk_staging.
68ba74cb : Enable flag to remove static list in trunk_staging
334efb69 : Add a flag to remove static list
7f7afa08 : Add Flag for addService cache
8d97c505 : Enable LIBBINDER_ADDSERVICE_CACHE in trunkstaging
3b082696 : Removing flag com.android.launcher3.enable_refactor_task_thumbnail from trunk_staging.
bb90d864 : Adding flag android.net.platform.flags.x509_extensions_certificate_transparency to trunk_staging.
d1e0a5e0 : Removing flag android.server.remove_java_service_manager_cache from trunk_staging.
c576c274 : Adding flag android.app.modes_multiuser to trunk_staging.
18f80a56 : Advance Kernel to Build: 12722286 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
957ccc1c : Adding flag com.android.bluetooth.flags.leaudio_allow_leaudio_only_devices to trunk_staging.
7718ba7e : Adding flag android.location.flags.density_based_coarse_locations to trunk_staging.
a8f69800 : Removing flag com.android.internal.camera.flags.data_delivery_permission_checks from trunk_staging.
b6a5ebc3 : Removing flag com.android.server.power.feature.flags.move_wsc_logging_to_notifier from trunk_staging.
862baf13 : Adding flag com.android.internal.telephony.flags.get_group_id_level2 to trunk_staging.
652e76cf : Adding flag android.os.adpf_graphics_pipeline to trunk_staging.
dfd45d73 : Revert^2 "New flag: RELEASE_APEX_USE_EROFS_PREINSTALLED"
e97b9312 : Adding flag com.android.bluetooth.flags.leaudio_add_opus_codec_type to trunk_staging.
a546ce62 : Adding flag android.permission.flags.updatable_text_classifier_for_otp_detection_enabled to trunk_staging.
5ba1c32a : Adding flag com.android.settings.flags.page_size_app_compat_setting to trunk_staging.
bc5659f0 : Adding flag com.android.bluetooth.flags.key_missing_public to trunk_staging.
d02cd232 : Adding flag com.android.internal.camera.flags.data_delivery_permission_checks to trunk_staging.
fb57ec3a : Adding flag android.location.flags.population_density_provider to trunk_staging.
a60925fb : Removing flag android.app.admin.flags.lock_now_coexistence from trunk_staging.
97d01062 : Removing flag android.app.admin.flags.reset_password_with_token_coexistence from trunk_staging.
dde267d4 : Reapply "BPF: Enable libbpf build flag in trunk staging"
96028c12 : Adding flag android.view.flags.scroll_capture_target_z_order_fix to trunk_staging.
215ee666 : Adding flag android.view.flags.scroll_capture_target_z_order_fix to trunk_staging.
3fc49e7b : Revert "New flag: RELEASE_APEX_USE_EROFS_PREINSTALLED"
207fdc8b : Advance Kernel to Build: 12715883 (6.1) in trunk_staging Oriole and Raven
0dddd845 : Advance Kernel to Build: 12715883 (6.1) in trunk_staging Husky and Ripcurrent
b5e5b539 : Adding flag com.android.systemui.show_toast_when_app_control_brightness to trunk_staging.
600f91d7 : Adding flag com.android.settingslib.flags.hearing_device_set_connection_status_report to trunk_staging.
d0adcba3 : Adding flag com.android.settingslib.flags.hearing_devices_input_routing_control to trunk_staging.
6d5b5220 : Adding flag com.android.healthfitness.flags.phr_upsert_fix_use_shared_memory to trunk_staging.
74c13d19 : Adding flag com.android.internal.telephony.flags.security_algorithms_update_indications to trunk_staging.
e2a0a223 : Adding flag com.android.bluetooth.flags.metadata_api_microphone_for_call_enabled to trunk_staging.
383ed422 : Adding flag com.android.bluetooth.flags.rfcomm_always_disc_initiator_in_disc_wait_ua to trunk_staging.
c918ed55 : Adding flag com.android.bluetooth.flags.socket_settings_api to trunk_staging.
db5c9543 : New flag for new apexd feature: mount data apex early
d6d0f68d : New flag: RELEASE_APEX_USE_EROFS_PREINSTALLED
7ac7b4e7 : Adding flag com.android.settings.flags.biometric_onboarding_education to trunk_staging.
dc8d8db4 : Add new build flag for Khmer font update
ff651675 : Adding flag com.android.internal.telephony.flags.carrier_id_from_carrier_identifier to trunk_staging.
98775647 : Removing flag com.android.systemui.show_clipboard_indication from trunk_staging.
cdd39ed9 : Removing flag com.android.systemui.override_suppress_overlay_condition from trunk_staging.
ecc18780 : Adding flag com.android.aconfig_new_storage.enable_full_rust_system_aconfigd to trunk_staging.
ee465d9c : Removing flag com.android.bluetooth.flags.gatt_clear_cache_on_factory_reset from trunk_staging.
2de60700 : Adding flag android.permission.flags.text_classifier_choice_api_enabled to trunk_staging.
528edbee : Removing flag com.android.bluetooth.flags.leaudio_broadcast_primary_group_selection from trunk_staging.
df3305f9 : Add RELEASE_USE_DEX_V41
6f748cb5 : Adding flag com.android.appsearch.flags.enable_blob_store to trunk_staging.
f38740f8 : Adding flag com.android.appsearch.flags.enable_result_already_exists to trunk_staging.
af5fa2e8 : Adding flag android.security.protect_device_config_flags to trunk_staging.
69e3bb1b : Adding flag suspend_service.flags.fast_kernel_wakelock_reporting to trunk_staging.
b5c9680b : Removing flag com.android.aconfig_new_storage.enable_full_rust_system_aconfigd from trunk_staging.
baa2084d : Adding flag android.security.aapm_feature_memory_tagging_extension to trunk_staging.
d5e4f79c : Adding flag com.android.systemui.gsf_quick_settings to trunk_staging.
1fb22676 : Adding flag android.security.aapm_feature_disable_cellular_2g to trunk_staging.
55e5358e : Adding flag com.android.systemui.communal_hub_use_thread_pool_for_widgets to trunk_staging.
da63a0aa : Advance Kernel to Build: 12709428 (6.1) in trunk_staging Panther and Cheetah
87ad8993 : Adding flag android.service.autofill.add_last_focused_id_to_fill_event_history to trunk_staging.
2a3f3474 : Adding flag com.android.appsearch.flags.enable_scorable_property to trunk_staging.
dde54903 : Adding flag com.android.systemui.gsf_quick_settings to trunk_staging.
4e37ec1b : Adding flag com.android.aconfig_new_storage.enable_full_rust_system_aconfigd to trunk_staging.
bfb4e1a5 : Advance Kernel to Build: 12704425 (6.1) in trunk_staging Tangorpro
620eb920 : Advance Kernel to Build: 12704425 (6.1) in trunk_staging Akita
0129ac61 : Revert "Removing flag com.android.systemui.keyguard_wm_state_ref..."
f6cac410 : Revert "Revert^2 "Adding flag com.android.systemui.keyguard_wm_s..."
f4601624 : Adding flag android.media.codec.native_capabilites to trunk_staging.
af934b2a : Adding flag com.android.bluetooth.flags.gatt_callback_on_failure to trunk_staging.
4d91bb2e : Adding flag com.android.settings.flags.biometrics_onboarding_education to trunk_staging.
4091b016 : Removing flag com.android.bluetooth.flags.rfcomm_always_disc_initiator_in_disc_wait_ua from trunk_staging.
8f644310 : Removing flag com.android.bluetooth.flags.hfp_allow_volume_change_without_sco from trunk_staging.
4459bef2 : Adding flag com.android.window.flags.enable_desktop_windowing_app_to_web_education_integration to trunk_staging.
70fc4fbf : Adding flag com.android.appsearch.flags.enable_search_result_parent_types to trunk_staging.
f7503ab9 : Adding flag com.android.appsearch.flags.enable_delete_propagation_type to trunk_staging.
8760aeb0 : Adding flag com.android.bluetooth.flags.le_impl_ack_pause_disarmed to trunk_staging.
76409bef : Adding flag com.android.bluetooth.flags.support_bluetooth_quality_report_v6 to trunk_staging.
a1dbc73c : Adding flag com.android.bluetooth.flags.a2dp_lhdc_api to trunk_staging.
ac8a1fb0 : Revert "BPF: Enable libbpf build flag in trunk staging"
78d815b6 : Removing flag com.android.bluetooth.flags.leaudio_allow_leaudio_only_devices from trunk_staging.
7ab1369b : Removing flag android.os.adpf_graphics_pipeline from trunk_staging.
272b9c1f : Adding flag com.android.launcher3.enable_refactor_task_thumbnail to trunk_staging.
d019016e : Removing flag com.android.systemui.sim_pin_use_slot_id from trunk_staging.
8efe7f93 : Adding flag com.android.appsearch.flags.enable_schema_embedding_quantization to trunk_staging.
37fd0194 : Adding flag com.android.appsearch.flags.enable_search_spec_filter_document_ids to trunk_staging.
37e4619c : Adding flag android.adpf.adpf_viewrootimpl_action_down_boost to trunk_staging.
d18953bb : Adding flag com.android.appsearch.flags.enable_list_filter_match_score_expression_function to trunk_staging.
80a14c80 : Adding flag android.provider.flags.device_config_writable_namespaces_api to trunk_staging.
a72ae7e8 : Adding flag com.android.server.power.hint.cpu_headroom_affinity_check to trunk_staging.
330ba51d : Adding flag com.android.bluetooth.flags.pbap_client_storage_refactor to trunk_staging.
9caf8534 : Adding flag com.example.android.aconfig.demo.flags.one_stage_rollback_test_flag to trunk_staging.
5791fbb4 : Adding flag com.android.settingslib.media.flags.enable_tv_media_output_dialog to trunk_staging.
d627f6cf : Adding flag com.android.internal.camera.flags.fmq_metadata to trunk_staging.
1923ba5b : Adding flag com.android.wifi.flags.wifi_scorer_new_stats_collection to trunk_staging.
d61e3897 : Adding flag com.android.server.biometrics.frr_dialog_improvement to trunk_staging.
822f020b : Adding flag android.content.res.layout_readwrite_flags to trunk_staging.
501418fd : Adding flag android.service.quickaccesswallet.launch_selected_card_from_qs_tile to trunk_staging.
5c5b4dc3 : Adding flag android.media.swcodec.flags.mpeg2_keep_threads_active to trunk_staging.
9c995c74 : Adding flag com.android.appsearch.flags.enable_document_limiter_replace_tracking to trunk_staging.
0d792bb5 : Adding flag android.content.res.resources_minor_version_support to trunk_staging.
d80932b7 : Adding flag android.media.codec.in_process_sw_audio_codec_support to trunk_staging.
7e9364e1 : Adding flag com.android.internal.camera.flags.depth_jpeg_extensions to trunk_staging.
bf9e01f8 : Adding flag com.android.server.telecom.flags.telecom_app_label_proxy_hsum_aware to trunk_staging.
9b8fecde : Advance Kernel to Build: 12704425 (6.1) in trunk_staging Panther and Cheetah
157c5a18 : Adding flag android.app.admin.flags.set_keyguard_disabled_features_coexistence to trunk_staging.
a6f68a29 : Removing flag com.android.permission.flags.decluttered_permission_manager_enabled from trunk_staging.
6c270ed7 : Adding flag android.security.prevent_intent_redirect_collect_nested_keys_on_server_if_not_collected to trunk_staging.
30ecf903 : Advance Kernel to Build: 12704425 (6.1) in trunk_staging Oriole and Raven
901e4557 : Adding flag com.android.nfc.flags.nfc_alert_tag_app_launch to trunk_staging.
e4143c5f : Assign android_virtualization namespace to AVF build flags
33e77e26 : Adding flag com.android.graphics.surfaceflinger.flags.graphite_renderengine_preview_rollout to trunk_staging.
9de6db27 : Adding flag android.app.admin.flags.lock_now_coexistence to trunk_staging.
424472b4 : Adding flag com.android.systemui.sim_pin_use_slot_id to trunk_staging.
2668db0c : Adding flag com.android.server.power.feature.flags.move_wsc_logging_to_notifier to trunk_staging.
09e5a858 : Adding flag com.android.systemui.shortcut_helper_key_glyph to trunk_staging.
3caf3783 : Adding flag com.android.server.display.feature.flags.is_always_on_available_api to trunk_staging.
ac63f18c : Adding flag com.android.server.display.feature.flags.subscribe_granular_display_events to trunk_staging.
15a1dc03 : Adding flag android.permission.flags.use_profile_labels_for_default_app_section_titles to trunk_staging.
5f284a5e : Revert^2 "Adding flag com.android.systemui.keyguard_wm_state_refactor to trunk_staging."
9ac353a6 : Removing flag com.android.providers.contacts.flags.cp2_sync_search_index_flag from trunk_staging.
8b60fb8e : Removing flag com.android.internal.camera.flags.zoom_method from trunk_staging.
cde63a08 : Removing flag android.security.protect_device_config_flags from trunk_staging.
c2cde09b : Removing flag com.android.internal.camera.flags.data_delivery_permission_checks from trunk_staging.
eef9acfb : Advance Kernel to Build: 12704425 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
22e03b99 : Advance Kernel to Build: 12704425 (6.1) in trunk_staging Lynx
a7f42c27 : Adding flag com.android.bluetooth.flags.aics_api to trunk_staging.
cf792cc4 : Adding flag android.nfc.nfc_check_tag_intent_preference to trunk_staging.
30472e98 : Add a flag for bluetooth socket service
727a91bc : Adding flag com.android.internal.camera.flags.data_delivery_permission_checks to trunk_staging.
fd400968 : Adding flag android.media.codec.codec_availability_support to trunk_staging.
3c6ac059 : Adding flag com.android.providers.contacts.flags.cp2_sync_search_index_flag to trunk_staging.
bf759abf : Adding flag com.android.nfc.flags.exit_frames to trunk_staging.
009dfeef : Adding flag android.app.background_install_control_callback_api to trunk_staging.
e5b1a99f : Adding flag android.os.adpf_graphics_pipeline to trunk_staging.
ccb1c027 : Adding flag com.android.permission.flags.decluttered_permission_manager_enabled to trunk_staging.
12f392e5 : Adding flag android.security.protect_device_config_flags to trunk_staging.
cb657b77 : Adding flag android.permission.flags.wallet_role_cross_user_enabled to trunk_staging.
e42d7cc6 : Adding flag android.os.message_queue_force_legacy to trunk_staging.
1ceb7230 : Adding flag android.appwidget.flags.engagement_metrics to trunk_staging.
beb09440 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_PACKAGE_MESSAGEQUEUE_IMPLEMENTATION = CombinedMessageQueue/MessageQueue.java
dbea1f33 : Removing flag android.chre.flags.reduce_lock_holding_period from trunk_staging.
2cb4260a : Adding flag com.android.server.usage.adjust_default_bucket_elevation_params to trunk_staging.
4e27d855 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_USE_SYSTEM_FEATURE_BUILD_FLAGS = true
2907ba4d : Adding flag android.app.admin.flags.secondary_lockscreen_api_enabled to trunk_staging.
0598aed1 : Removing flag com.android.appsearch.flags.enable_search_spec_filter_document_ids from trunk_staging.
7622c4e3 : Adding flag android.security.aapm_feature_disable_install_unknown_sources to trunk_staging.
d76b7255 : Introduce RELEASE_BUILD_PURGE_PRODUCT_ADB_KEYS flag
0396b6eb : Add build variant specific release configurations
de5a3528 : Adding flag com.android.window.flags.scrolling_from_letterbox to trunk_staging.
7229a1f9 : Add flag for PLATFORM_BASE_SDK_EXTENSION_VERSION
ca4b2a2a : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_SERVICE_WIFI_SPEED_PROFILE_ART_COMPILATION = true
632b7bcf : Adding flag com.android.window.flags.show_app_handle_large_screens to trunk_staging.
87bda838 : Adding flag com.android.net.ct.flags.certificate_transparency_job to trunk_staging.
20774220 : Adding flag android.permission.flags.device_policy_management_role_split_create_managed_profile_enabled to trunk_staging.
e98a7acd : Adding flag com.android.healthfitness.flags.phr_upsert_fix_parcel_size_calculation to trunk_staging.
5ffce198 : Adding flag com.android.appsearch.flags.enable_search_spec_filter_document_ids to trunk_staging.
c3a2a07b : BPF: Enable libbpf build flag in trunk staging
69b6738b : Advance Kernel to Build: 12695387 in trunk_staging Comet
5c2c064e : Adding flag com.android.internal.camera.flags.zoom_method to trunk_staging.
30c8ff54 : Adding flag android.media.codec.aidl_hal_input_surface to trunk_staging.
92c0fbd5 : Advance Kernel to Build: 12695387 in trunk_staging Caimito
18b193ca : Removing flag android.os.message_queue_force_legacy from trunk_staging.
58237e13 : Advance Kernel to Build: 12695387 (6.1) in trunk_staging Akita
ff3c566f : Advance Kernel to Build: 12695387 (6.1) in trunk_staging Tangorpro
83f1e937 : Adding flag com.android.server.am.fix_apply_oomadj_order to trunk_staging.
98c43492 : Adding flag android.service.autofill.test_flag to trunk_staging.
3df9433b : Adding flag com.android.systemui.gsf_bouncer to trunk_staging.
87b9e1a1 : Removing flag com.android.systemui.keyguard_wm_state_refactor from trunk_staging.
e13dc40a : Adding flag com.android.server.am.oomadjuster_cached_app_tiers to trunk_staging.
27c84d6d : Adding flag com.android.server.stats.add_pressure_stall_information_puller to trunk_staging.
6b2f569c : Advance Kernel to Build: 12688049 (6.1) in trunk_staging Panther and Cheetah
82b77a93 : Advance Kernel to Build: 12688049 (6.1) in trunk_staging Oriole and Raven
1ed21eae : Adding flag android.widget.flags.use_wear_material3_ui to trunk_staging.
e73ac761 : Removing flag android.app.admin.flags.secondary_lockscreen_api_enabled from trunk_staging.
57746f36 : Removing flag android.os.adpf_graphics_pipeline from trunk_staging.
b14683a4 : Adding flag com.android.server.display.feature.flags.enable_plugin_manager to trunk_staging.
ff448791 : Adding flag com.android.server.display.feature.flags.even_dimmer to trunk_staging.
0eeccaf3 : Adding flag com.android.bluetooth.flags.leaudio_broadcast_api_manage_primary_group to trunk_staging.
04b0b429 : Adding flag com.android.bluetooth.flags.leaudio_broadcast_primary_group_selection to trunk_staging.
0c941a01 : Adding flag com.android.settings.flags.regional_preferences_api_enabled to trunk_staging.
d5a921f1 : Removing flag android.multiuser.cache_profile_type_read_only from trunk_staging.
db0bc3f7 : Adding flag com.android.window.flags.predictive_back_three_button_nav to trunk_staging.
95af8a72 : Adding flag com.android.window.flags.predictive_back_default_enable_sdk_36 to trunk_staging.
8378de99 : Adding flag com.android.bluetooth.flags.connect_hap_on_other_profile_connect to trunk_staging.
023ddf1b : Removing flag com.android.systemui.enable_view_capture_tracing from trunk_staging.
a7652ed9 : Adding flag com.android.wm.shell.enable_pip2 to trunk_staging.
d394016c : Adding flag com.android.wifi.flags.wep_disabled_in_apm to trunk_staging.
3041e119 : Adding flag com.android.internal.camera.flags.ae_priority to trunk_staging.
f3691d95 : Removing flag android.app.contextualsearch.flags.enable_token_refresh from trunk_staging.
0763de95 : Adding flag com.android.window.flags.enable_display_windowing_mode_switching to trunk_staging.
25f43c92 : Adding flag android.uprobestats.mainline.flags.enable_uprobestats to trunk_staging.
0d143556 : Adding flag android.uprobestats.mainline.flags.executable_method_file_offsets to trunk_staging.
8f347639 : Revert^2 "Enable the mainline supplicant build"
abe293ee : Adding flag android.uprobestats.mainline.flags.uprobestats_support_update_device_idle_temp_allowlist to trunk_staging.
751a92c6 : Adding flag android.permission.flags.delay_uid_state_changes_from_capability_updates to trunk_staging.
6c02138c : Removing flag com.android.wm.shell.enable_pip2 from trunk_staging.
0c57a28f : Adding flag android.permission.flags.note_op_batching_enabled to trunk_staging.
e221f4fb : Removing flag com.android.wallpaper.clock_reactive_variants from trunk_staging.
9451f6ac : Removing flag com.android.systemui.clock_reactive_variants from trunk_staging.
16bc6104 : Switch RELEASE_USE_SYSTEM_FEATURE_BUILD_FLAGS to LAUNCH
89df78b4 : Adding flag com.android.graphics.surfaceflinger.flags.adpf_fmq_sf to trunk_staging.
ef3d889a : Adding flag android.os.adpf_use_load_hints to trunk_staging.
009cf598 : Removing flag com.android.internal.camera.flags.ae_priority from trunk_staging.
d496acc8 : Adding flag android.hardware.devicestate.feature.flags.device_state_rdm_v2 to trunk_staging.
4b90f359 : Adding flag com.android.calllogbackup.call_log_restore_deduplication_enabled to trunk_staging.
172a10e5 : Removing flag android.security.aapm_feature_disable_install_unknown_sources from trunk_staging.
17542161 : Adding flag com.android.nfc.flags.coalesce_rf_events to trunk_staging.
e903625a : Adding flag android.app.no_sbnholder to trunk_staging.
4abb1db4 : Add build feature to guard enable speed-profile compilation for Wifi.
ed994205 : Adding flag com.android.graphics.surfaceflinger.flags.adpf_native_session_manager to trunk_staging.
4ec4d6fa : Removing flag com.android.systemui.secondary_user_widget_host from trunk_staging.
6867f0bb : Removing flag com.android.wifi.flags.wep_disabled_in_apm from trunk_staging.
2868178b : Advance Kernel to Build: 12683036 (6.1) in trunk_staging Panther and Cheetah
18847105 : Adding flag com.android.bluetooth.flags.adm_fix_disconnect_of_set_member to trunk_staging.
eec4dfdb : Adding flag com.android.graphics.libgui.flags.buffer_release_channel to trunk_staging.
5ed85f2f : Adding flag com.android.hardware.input.use_key_gesture_event_handler_multi_key_gestures to trunk_staging.
bad5131d : Advance Kernel to Build: 12683036 (6.1) in trunk_staging Oriole and Raven
7b312040 : Removing flag com.android.server.display.feature.flags.even_dimmer from trunk_staging.
61a65129 : Removing flag com.android.window.flags.predictive_back_three_button_nav from trunk_staging.
26f94ba1 : Adding flag android.app.admin.flags.reset_password_with_token_coexistence to trunk_staging.
445a580f : Revert "Adding flag com.android.window.flags.predictive_back_three_button_nav to trunk_staging."
507504b0 : Adding flag com.android.launcher3.use_system_radius_for_app_widgets to trunk_staging.
218b4cd2 : Adding flag com.android.window.flags.enable_desktop_system_dialogs_transitions to trunk_staging.
db33c82a : Adding flag android.multiuser.add_launcher_user_config to trunk_staging.
364c9448 : Adding flag com.android.settingslib.flags.hearing_devices_ambient_volume_control to trunk_staging.
179c10b2 : Adding flag com.android.window.flags.predictive_back_three_button_nav to trunk_staging.
90b69551 : Advance Kernel to Build: 12673319 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
4e74595c : Adding flag com.android.wm.shell.enable_pip2 to trunk_staging.
c68388b2 : Adding flag com.android.bluetooth.flags.encryption_change_broadcast to trunk_staging.
c4a9ea4c : Adding flag com.android.bluetooth.flags.scan_manager_refactor to trunk_staging.
33d7c871 : Adding flag android.permission.flags.replace_body_sensor_permission_enabled to trunk_staging.
70640744 : Removing flag android.content.pm.verification_service from trunk_staging.
bae58ba4 : Removing flag android.permission.flags.replace_body_sensor_permission_enabled from trunk_staging.
72a5e334 : Adding flag com.android.launcher3.one_grid_specs to trunk_staging.
24f73227 : Adding flag android.os.adpf_graphics_pipeline to trunk_staging.
2dad543a : Adding flag com.android.appsearch.flags.app_open_event_indexer_enabled to trunk_staging.
ed6febc7 : Adding flag com.android.appsearch.flags.enable_additional_builder_copy_constructors to trunk_staging.
9ea1c6a2 : Removing flag android.hardware.devicestate.feature.flags.device_state_rdm_v2 from trunk_staging.
2fc8a87e : Adding flag android.appwidget.flags.use_smaller_app_widget_system_radius to trunk_staging.
ee27a688 : Adding flag android.permission.flags.replace_body_sensor_permission_enabled to trunk_staging.
53fc14b0 : Adding flag android.provider.new_default_account_api_enabled to trunk_staging.
5580b676 : Adding flag com.android.bluetooth.flags.channel_sounding_25q2_apis to trunk_staging.
90a5568d : Adding flag android.net.vcn.mainline_vcn_module_api to trunk_staging.
c68a1c8c : Adding flag com.android.hardware.input.keyboard_a11y_shortcut_control to trunk_staging.
792f667d : Revert "Removing flag com.android.server.am.limit_priority_scope from trunk_staging."
f7998a9b : Adding flag android.permission.flags.check_op_overload_api_enabled to trunk_staging.
3ae22089 : Advance Kernel to Build: 12673319 (6.1) in trunk_staging Panther and Cheetah
6730d261 : Adding flag com.android.systemui.notification_reentrant_dismiss to trunk_staging.
da2fdddf : Removing flag android.provider.new_default_account_api_enabled from trunk_staging.
f3688ec3 : Adding flag android.media.tv.flags.apply_picture_profiles to trunk_staging.
45991bbc : Adding flag android.chre.flags.offload_implementation to trunk_staging.
b42a15fb : Advance Kernel to Build: 12673319 (6.1) in trunk_staging Oriole and Raven
738d82ad : Adding flag com.android.graphics.libgui.flags.apply_picture_profiles to trunk_staging.
6b721ba0 : Removing flag com.android.graphics.libgui.flags.buffer_release_channel from trunk_staging.
f17bc3df : Adding flag com.android.graphics.hwui.flags.initialize_gl_always to trunk_staging.
becd694e : Adding flag com.android.hardware.input.input_manager_lifecycle_support to trunk_staging.
3e661c69 : Adding flag android.app.admin.flags.split_create_managed_profile_enabled to trunk_staging.
6773a6e0 : Removing flag com.android.launcher3.one_grid_specs from trunk_staging.
6e1feb36 : Adding flag com.android.bluetooth.flags.drop_acl_fragment_on_disconnect to trunk_staging.
8c7bd269 : Removing flag com.android.window.flags.ensure_keyguard_does_transition_starting from trunk_staging.
5b043aa9 : Adding flag com.android.launcher3.one_grid_specs to trunk_staging.
eb143c40 : Adding flag com.android.window.flags.enable_display_focus_in_shell_transitions to trunk_staging.
443377e3 : Adding flag com.android.graphics.surfaceflinger.flags.begone_bright_hlg to trunk_staging.
d31f5fda : Adding flag com.android.net.thread.flags.set_nat64_configuration_enabled to trunk_staging.
bb23732d : Removing flag com.android.input.flags.connected_displays_cursor from trunk_staging.
7a548dfb : Adding flag android.permission.flags.replace_body_sensor_permission_enabled to trunk_staging.
df6fbd35 : Removing flag android.permission.flags.check_op_overload_api_enabled from trunk_staging.
9d3e0310 : Adding flag com.android.server.am.add_modify_raw_oom_adj_service_level to trunk_staging.
e97ca749 : Removing flag com.android.internal.telephony.flags.cellular_identifier_disclosure_indications from trunk_staging.
d8752d3a : Removing flag com.android.internal.telephony.flags.security_algorithms_update_indications from trunk_staging.
a13b4c00 : Adding flag com.android.server.am.app_start_info_isolated_process to trunk_staging.
2aa2ffed : Removing flag com.android.net.thread.flags.set_nat64_configuration_enabled from trunk_staging.
f8615a56 : Adding flag android.hardware.devicestate.feature.flags.device_state_rdm_v2 to trunk_staging.
eec3bfc4 : Adding flag com.android.window.flags.ensure_keyguard_does_transition_starting to trunk_staging.
5151ff88 : Adding flag android.view.flags.surface_view_get_surface_package to trunk_staging.
fce0b188 : Adding flag android.view.flags.surface_view_set_composition_order to trunk_staging.
ec7ae0da : Adding flag com.android.window.flags.enable_drag_to_maximize to trunk_staging.
ccb39682 : Adding flag com.android.internal.telephony.flags.temporary_failures_in_carrier_messaging_service to trunk_staging.
faaa0841 : Adding flag android.permission.flags.check_op_overload_api_enabled to trunk_staging.
28c677fd : Adding flag com.android.bluetooth.flags.donot_update_sec_flags_on_csrk_save to trunk_staging.
8b842578 : Adding flag android.provider.new_default_account_api_enabled to trunk_staging.
40d39927 : Removing flag com.android.internal.camera.flags.zoom_method from trunk_staging.
b77f5e65 : Removing flag android.view.flags.buffer_stuffing_recovery from trunk_staging.
e809d11d : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Panther and Cheetah
b9e5b167 : Adding flag com.android.bluetooth.flags.disconnect_on_encryption_failure to trunk_staging.
5baa9059 : [Ranging] Add uwb namesapce to RANGING_RELEASE_STACK
2edfcbc1 : Removing flag android.provider.new_default_account_api_enabled from trunk_staging.
e22323c6 : Adding flag com.android.input.flags.connected_displays_cursor to trunk_staging.
60a4ed80 : Removing flag com.android.window.flags.system_ui_post_animation_end from trunk_staging.
964f31c0 : Removing flag android.permission.flags.replace_body_sensor_permission_enabled from trunk_staging.
9cd2b7fa : Adding flag android.companion.virtualdevice.flags.notifications_for_device_streaming to trunk_staging.
0b51ffe5 : Removing flag com.android.internal.telephony.flags.power_down_race_fix from trunk_staging.
924b4627 : Adding flag com.android.window.flags.enable_task_resizing_keyboard_shortcuts to trunk_staging.
f0579be3 : Removing flag com.android.launcher3.one_grid_specs from trunk_staging.
c914b194 : Adding flag com.android.window.flags.system_ui_post_animation_end to trunk_staging.
0ccdeae6 : Revert "Adding flag android.permission.flags.replace_body_sensor_permission_enabled to trunk_staging."
e5847d08 : Advance Kernel to Build: 12667688 in trunk_staging Comet
a44b0caf : Advance Kernel to Build: 12667688 in trunk_staging Caimito
5395de0e : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Felix
eeee2fb3 : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Bluejay
5afbfe61 : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Akita
ce75e2cf : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Lynx
10c4d8b7 : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
e05315db : Advance Kernel to Build: 12667688 (6.1) in trunk_staging Tangorpro
96e960f3 : Adding flag android.app.notifications_redesign_app_icons to trunk_staging.
93a2451b : Revert "Enable the mainline supplicant build"
cdb7d88e : Adding flag com.android.internal.telephony.flags.subscription_plan_allow_status_and_end_date to trunk_staging.
7aeb9f6c : Adding flag com.android.bluetooth.flags.prevent_service_connections_on_remove_bond to trunk_staging.
cf9fd62a : Adding flag com.android.internal.telephony.flags.power_down_race_fix to trunk_staging.
99d60403 : Revert^14 "Turn on CombinedMessageQueue for trunk_staging"
b5a34474 : Removing flag com.android.wm.shell.enable_pip2 from trunk_staging.
bbb9f768 : Adding flag android.app.device_unlock_listener to trunk_staging.
6869f699 : Adding flag com.android.launcher3.one_grid_specs to trunk_staging.
7634fa85 : Adding flag com.android.internal.telephony.flags.security_algorithms_update_indications to trunk_staging.
196bb88d : Adding flag com.android.internal.telephony.flags.cellular_identifier_disclosure_indications to trunk_staging.
2e36d321 : Adding flag com.android.bluetooth.flags.btsec_le_oob_pairing to trunk_staging.
19548ea0 : Adding flag com.android.bluetooth.flags.btsec_avdt_msg_ind_type_confusion to trunk_staging.
bd8b690e : Removing flag com.android.launcher3.one_grid_specs from trunk_staging.
fe4405fe : Adding flag android.os.enable_has_binders to trunk_staging.
d79d40cd : Adding flag com.android.wm.shell.enable_pip2 to trunk_staging.
edff0391 : Adding flag android.view.flags.dynamic_view_rotary_haptics_configuration to trunk_staging.
838e5e3e : Adding flag com.android.launcher3.work_scheduler_in_work_profile to trunk_staging.
32b89a0e : Adding flag android.permission.flags.replace_body_sensor_permission_enabled to trunk_staging.
35bf1b2a : Adding flag android.car.feature.async_audio_service_init to trunk_staging.
34019099 : Adding flag android.car.feature.car_property_supported_value to trunk_staging.
8f6a040a : AP4A: Set SPL to 2025-01-05
6beb4490 : Adding flag android.chre.flags.efw_xport_in_context_hub to trunk_staging.
5f389a50 : Adding flag com.android.bluetooth.flags.donot_validate_bond_state_from_profiles to trunk_staging.
105cb82b : Adding flag com.android.launcher3.enable_dismiss_prediction_undo to trunk_staging.
baa374ca : Adding flag com.android.internal.camera.flags.query_process_state to trunk_staging.
8b272710 : Removing flag com.android.wifi.flags.voip_detection_bugfix from trunk_staging.
ae73340f : Adding flag com.android.healthfitness.flags.replace_body_sensor_permission_enabled to trunk_staging.
135c79a3 : Adding flag com.android.bluetooth.flags.ble_scan_setting_does_not_disconnect_if_bt_on to trunk_staging.
bdabfcd3 : Adding flag com.android.bluetooth.flags.directed_advertising_api to trunk_staging.
615f8f1a : Adding flag android.os.message_queue_force_legacy to trunk_staging.
831713f7 : AOSP first per lint on ag/30439490 Add system ui tests to presubmit for release flag promotions
ca3ac52c : Adding flag android.view.inputmethod.adaptive_handwriting_bounds to trunk_staging.
1c5bc957 : Revert^13 "Turn on CombinedMessageQueue for trunk_staging"
074234ab : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_MOVE_VCN_TO_MAINLINE = true
78c0a9f6 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_ATTEST_MODULES = true
0bbf0b21 : Removing flag android.app.modes_multiuser from trunk_staging.
aa08d5fe : Adding flag android.app.live_wallpaper_content_handling to trunk_staging.
69ac59e2 : Adding flag com.android.server.backup.enable_restricted_mode_changes to trunk_staging.
67a91896 : Removing flag android.app.notifications_redesign_app_icons from trunk_staging.
54bf11bd : Adding flag com.android.bluetooth.flags.opp_set_insets_for_edge_to_edge to trunk_staging.
74b820e0 : Removing flag android.multiuser.add_launcher_user_config from trunk_staging.
9405563c : Removing flag android.net.vcn.fix_config_garbage_collection from trunk_staging.
0da03e3c : Removing flag android.net.vcn.mainline_vcn_module_api from trunk_staging.
cd237305 : Adding flag android.security.keystore2.attest_modules to trunk_staging.
e5b5cd2f : Adding flag com.android.settings.flags.enroll_layout_truncate_improvement to trunk_staging.
d018ee7e : Adding flag android.server.migrate_wrist_orientation to trunk_staging.
7da9e8b1 : Adding flag com.android.bluetooth.flags.remove_pending_hid_connection to trunk_staging.
d65ed951 : Adding flag android.content.pm.app_compat_option_16kb to trunk_staging.
538c8db5 : Adding flag com.android.media.projection.flags.stop_media_projection_on_call_end to trunk_staging.
53e4f4b5 : Adding flag android.media.audio.speaker_layout_api to trunk_staging.
abf685b1 : Adding flag android.nfc.nfc_set_service_enabled_for_category_other to trunk_staging.
1ce5a0fd : Adding flag android.service.autofill.autofill_w_metrics to trunk_staging.
2da59931 : Adding flag com.android.server.am.limit_priority_scope to trunk_staging.
20ccef10 : Add RELEASE_BUILD_USE_VARIANT_FLAGS
588a202d : Revert "Advance Kernel to Build: 12651858 (6.1) in trunk_staging..."
61f4b747 : Adding flag android.security.aapm_feature_disable_install_unknown_sources to trunk_staging.
b86c8154 : Enable the mainline supplicant build flag in the trunk_staging config.
34c5d34a : Add build flag to indicate whether the mainline supplicant binary should be included in the Apex.
abc78213 : Adding flag com.android.adservices.flags.adservices_outcomereceiver_r_api_deprecated to trunk_staging.
b3003837 : Adding flag android.provider.new_default_account_api_enabled to trunk_staging.
e29a4e5d : Adding flag com.android.launcher3.one_grid_specs to trunk_staging.
27e18b2c : Adding flag com.android.graphics.libvulkan.flags.vulkan_1_4_instance_api to trunk_staging.
4b8cf338 : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Panther and Cheetah
2769ddbf : Adding flag com.android.internal.camera.flags.camera_multi_client to trunk_staging.
3d178fb6 : Removing flag com.android.server.telecom.flags.telecom_main_user_in_get_respond_message_app from trunk_staging.
f6a08d40 : Adding flag android.app.admin.flags.active_admin_cleanup to trunk_staging.
31c444d7 : Revert^12 "Turn on CombinedMessageQueue for trunk_staging"
ce09b981 : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Oriole and Raven
f631ba1e : Adding flag android.app.admin.flags.secondary_lockscreen_api_enabled to trunk_staging.
7f4cf8ec : Removing flag android.app.live_wallpaper_content_handling from trunk_staging.
08bc22ef : Adding flag android.app.admin.flags.set_auto_time_enabled_coexistence to trunk_staging.
771c2dcd : Define RELEASE_MOVE_VCN_TO_MAINLINE
96d85382 : Adding flag android.multiuser.cache_user_info_read_only to trunk_staging.
e829a96e : Adding flag com.android.net.flags.tethering_with_soft_ap_config to trunk_staging.
437416e4 : Adding flag com.android.libcore.openjdk21_stringconcat to trunk_staging.
a3d56cef : Adding flag com.android.server.display.feature.flags.display_listener_performance_improvements to trunk_staging.
b0cec390 : Adding flag android.provider.system_regional_preferences_api_enabled to trunk_staging.
32f579bf : Removing flag android.app.admin.flags.secondary_lockscreen_api_enabled from trunk_staging.
ce645a08 : Revert^11 "Turn on CombinedMessageQueue for trunk_staging"
bf410249 : Removing flag com.android.nfc.flags.coalesce_rf_events from trunk_staging.
e08c9288 : Adding flag com.android.window.flags.record_task_snapshots_before_shutdown to trunk_staging.
9dfbc198 : Adding flag com.android.server.usb.flags.check_user_action_unlocked to trunk_staging.
31295625 : Removing flag android.hardware.usb.flags.enable_udc_sysfs_usb_state_update from trunk_staging.
acced98e : Adding flag com.android.graphics.hwui.flags.runtime_color_filters_blenders to trunk_staging.
0ca2c07e : Advance Kernel to Build: 12651858 in trunk_staging Comet
b8c30604 : Advance Kernel to Build: 12651858 in trunk_staging Caimito
3dddbc41 : Adding flag com.android.settings.flags.catalyst_network_provider_and_internet_screen to trunk_staging.
bfedf88a : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_MEMORY_MANAGEMENT_DAEMON = true
40f76afa : Removing flag com.android.settings.flags.catalyst_network_provider_and_internet_screen from trunk_staging.
48396cc8 : Adding flag com.android.settings.flags.catalyst_my_device_info_pref_screen to trunk_staging.
81142715 : Adding flag com.android.settings.flags.catalyst_wifi_calling to trunk_staging.
981c3a23 : Adding flag android.media.audio.hardening_permission_spa to trunk_staging.
0075d397 : Adding flag com.android.settings.flags.catalyst_accessibility_color_and_motion to trunk_staging.
aafb5855 : Adding flag android.service.quickaccesswallet.launch_wallet_option_on_power_double_tap to trunk_staging.
694eb628 : Revert^10 "Turn on CombinedMessageQueue for trunk_staging"
da07bb8c : Adding flag android.os.allow_thermal_thresholds_callback to trunk_staging.
0174b957 : Adding flag android.os.cpu_gpu_headrooms to trunk_staging.
c998d31c : Adding flag android.media.tv.flags.mediacas_update_client_profile_priority to trunk_staging.
154a8c35 : Revert^9 "Turn on CombinedMessageQueue for trunk_staging"
e48c88aa : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Bluejay
2aadca3a : Adding flag android.nfc.nfc_oem_extension to trunk_staging.
34994378 : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Felix
f4cd8519 : Adding flag android.app.admin.flags.secondary_lockscreen_api_enabled to trunk_staging.
f3dc250c : Adding flag android.view.inputmethod.verify_key_event to trunk_staging.
c108e320 : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Tangorpro
65093f1f : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
4aa4e2d1 : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Akita
82d726df : Advance Kernel to Build: 12651858 (6.1) in trunk_staging Lynx
72d189ac : Adding flag com.android.art.flags.executable_method_file_offsets to trunk_staging.
477fdd67 : Adding flag com.android.bluetooth.flags.auto_connect_on_multiple_hfp_when_no_a2dp_device to trunk_staging.
37d57e83 : Adding flag android.hardware.flags.luts_api to trunk_staging.
4b44df2b : Adding flag android.view.inputmethod.public_autofill_id_in_editorinfo to trunk_staging.
e843ac97 : Adding flag com.android.hardware.input.can_window_override_power_gesture_api to trunk_staging.
d2f55378 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_CRASHRECOVERY_MODULE = true
75f2a98a : Adding flag com.android.providers.media.flags.enable_mark_is_favorite_status_api to trunk_staging.
32551ef2 : Revert "Adding flag android.nfc.nfc_oem_extension to trunk_staging."
29be2182 : Removing flag android.app.admin.flags.set_auto_time_enabled_coexistence from trunk_staging.
1c59c656 : Adding flag com.android.hardware.input.override_power_key_behavior_in_focused_window to trunk_staging.
0dafffb8 : Adding flag com.android.nfc.flags.coalesce_rf_events to trunk_staging.
9227afc8 : Adding flag android.permission.flags.supervision_role_permission_update_enabled to trunk_staging.
3723c9a6 : Adding flag android.chre.flags.offload_api to trunk_staging.
9f4d5121 : Adding flag com.android.wifi.flags.public_bands_for_lohs to trunk_staging.
85a8dc20 : Adding flag android.multiuser.add_launcher_user_config to trunk_staging.
6717328a : Adding flag com.android.bluetooth.flags.guard_bonded_device_properties to trunk_staging.
04e626fe : Adding flag com.android.adservices.ondevicepersonalization.flags.executorch_inference_api_enabled to trunk_staging.
1b267a45 : Removing flag com.android.art.flags.executable_method_file_offsets from trunk_staging.
05b91a87 : Adding flag com.android.adservices.ondevicepersonalization.flags.is_feature_enabled_api_enabled to trunk_staging.
1177f099 : Adding flag android.nfc.nfc_event_listener to trunk_staging.
a24a8b58 : Adding flag com.android.media.metrics.flags.mediametrics_to_module to trunk_staging.
de866eec : Adding flag android.view.accessibility.remove_child_hover_check_for_touch_exploration to trunk_staging.
e49ae9a7 : Adding flag com.android.systemui.keyguard_transition_force_finish_on_screen_off to trunk_staging.
76da643f : Adding flag com.android.internal.telephony.flags.geofence_enhancement_for_better_ux to trunk_staging.
85837036 : Adding flag android.view.flags.buffer_stuffing_recovery to trunk_staging.
dbc9e74e : Adding flag com.android.window.flags.jank_api to trunk_staging.
7d5dd6ff : Advance Kernel to Build: 12644615 (6.1) in trunk_staging Panther and Cheetah
9356ff18 : Adding flag com.android.internal.camera.flags.feature_combination_baklava to trunk_staging.
edd011e7 : Advance Kernel to Build: 12644615 (6.1) in trunk_staging Oriole and Raven
dc028d6d : Adding flag android.app.supervision.flags.supervision_api_on_wear to trunk_staging.
c4b247aa : Adding flag android.permission.flags.health_connect_backup_restore_permission_enabled to trunk_staging.
6edda880 : Adding flag android.service.notification.notification_regroup_on_classification to trunk_staging.
f6d3dcb7 : Adding flag android.app.admin.flags.set_auto_time_zone_enabled_coexistence to trunk_staging.
76d86c61 : Adding flag android.app.admin.flags.set_auto_time_enabled_coexistence to trunk_staging.
8d7ce656 : Adding flag android.app.admin.flags.remove_managed_profile_enabled to trunk_staging.
d27f56f1 : Removing flag com.android.settings.flags.catalyst_wifi_calling from trunk_staging.
8a3dc83d : Adding flag android.uprobestats.flags.executable_method_file_offsets to trunk_staging.
81327af4 : Adding flag android.app.ondeviceintelligence.flags.enable_on_device_intelligence_module to trunk_staging.
be4fbe04 : Removing flag com.android.providers.downloads.flags.download_via_platform_http_engine from trunk_staging.
dce2bc6f : Adding flag com.android.settings.accessibility.add_brightness_settings_in_suw to trunk_staging.
bc5ed0bb : Adding flag com.android.wifi.flags.wifi_state_changed_listener to trunk_staging.
b067c7b9 : Adding flag com.android.settings.flags.catalyst_wifi_calling to trunk_staging.
7bff563b : Adding flag com.android.window.flags.release_snapshot_aggressively to trunk_staging.
ec8efa2c : Advance Kernel to Build: 12646621 in ap4a Caimito
df7452c7 : Advance Kernel to Build: 12646621 in ap4a Comet
5d720b71 : Adding flag com.android.providers.media.flags.enable_mark_media_as_favorite_api to trunk_staging.
2ec54b22 : Adding flag com.android.settings.flags.add_security_algorithms_to_eng_menu to trunk_staging.
816447a8 : Adding flag com.android.settings.flags.catalyst_screen_timeout to trunk_staging.
81269180 : Adding flag com.android.settings.flags.catalyst_restrict_background_parent_entry to trunk_staging.
9341093a : Adding flag com.android.settings.flags.catalyst_adaptive_connectivity to trunk_staging.
b38b4c30 : Adding flag com.android.settings.flags.catalyst_internet_settings to trunk_staging.
dca5c5c5 : Add RELEASE_ATTEST_MODULES flag
cd4c7db3 : Adding flag com.android.wifi.flags.bssid_blocklist_for_suggestion to trunk_staging.
1769c2e9 : Adding flag android.permission.flags.use_system_selection_toolbar_in_sysui to trunk_staging.
90d57481 : Adding flag android.permission.flags.system_selection_toolbar_enabled to trunk_staging.
3d1d96df : Adding flag android.car.feature.distant_display_transitions to trunk_staging.
a09a48b7 : Adding flag android.car.feature.task_view_task_reordering to trunk_staging.
98498b12 : Adding flag com.android.bluetooth.flags.a2dp_clear_pending_start_on_session_restart to trunk_staging.
ca5d1786 : Adding flag com.android.media.audio.hardening_strict to trunk_staging.
c03e51e2 : Adding flag com.android.graphics.hwui.flags.remove_vri_sketchy_destroy to trunk_staging.
578338ff : Adding flag com.android.window.flags.enable_desktop_windowing_pip to trunk_staging.
d079cf95 : Adding flag android.service.autofill.fill_dialog_improvements to trunk_staging.
8bcd3a7e : Adding flag com.android.art.flags.executable_method_file_offsets to trunk_staging.
1270113e : Removing flag android.os.allow_consentless_bugreport_delegated_consent from trunk_staging.
aec59fa6 : Adding flag com.android.settings.flags.catalyst_service to trunk_staging.
d12e821f : Adding flag com.android.internal.camera.flags.night_mode_indicator to trunk_staging.
46cc5b60 : Adding flag com.android.wifi.flags.wep_disabled_in_apm to trunk_staging.
ac6ef9cf : Adding flag com.android.wifi.flags.ap_isolate to trunk_staging.
4ba7a57b : Adding flag com.android.settingslib.flags.settings_preference_write_consent_enabled to trunk_staging.
6a110e95 : Adding flag com.android.settingslib.flags.write_system_preference_permission_enabled to trunk_staging.
26d09625 : Adding flag com.android.wallpaper.clock_reactive_variants to trunk_staging.
768b2668 : Adding flag android.security.afl_api to trunk_staging.
d3d7ed86 : Adding flag com.android.uwb.flags.hw_state to trunk_staging.
35cdcf69 : Adding flag com.android.wifi.flags.wifi_direct_r2 to trunk_staging.
d30e6ccf : Adding flag com.android.wifi.flags.mlo_sap to trunk_staging.
680ae8ee : Adding flag com.android.window.flags.supports_drag_assistant_to_multiwindow to trunk_staging.
af10c350 : Adding flag android.content.pm.support_minor_versions_in_minsdkversion to trunk_staging.
1fa267d6 : Removing flag android.view.accessibility.remove_child_hover_check_for_touch_exploration from trunk_staging.
27bdc6af : Adding flag com.android.wifi.flags.secure_ranging to trunk_staging.
3877c2dc : Adding flag android.net.wifi.flags.usd to trunk_staging.
2df4dc60 : Adding flag com.android.adservices.ondevicepersonalization.flags.fcp_schedule_with_outcome_receiver_enabled to trunk_staging.
3e701ef2 : Adding flag com.android.adservices.ondevicepersonalization.flags.data_class_missing_ctors_and_getters_enabled to trunk_staging.
f2e5b42a : Adding flag com.android.adservices.ondevicepersonalization.flags.execute_in_isolated_service_api_enabled to trunk_staging.
87365a20 : Removing flag com.android.providers.media.flags.enable_mark_media_as_favorite_api from trunk_staging.
3a026e7d : Adding flag com.android.adservices.ondevicepersonalization.flags.fcp_model_version_enabled to trunk_staging.
fb4fe78a : Adding flag com.android.launcher3.msdl_feedback to trunk_staging.
003236f6 : Adding flag com.android.internal.camera.flags.zoom_method to trunk_staging.
7f7f758f : Adding flag com.android.server.power.feature.flags.per_display_wake_by_touch to trunk_staging.
13fc2f15 : Advance Kernel to Build: 12637859 (6.1) in trunk_staging Panther and Cheetah
b376d25f : Adding flag android.app.pic_isolate_cache_by_uid to trunk_staging.
aa2133cb : Adding flag com.android.systemui.lockscreen_custom_clocks to trunk_staging.
79a5d16d : Adding flag com.android.org.conscrypt.flags.certificate_transparency_checkservertrusted_api to trunk_staging.
2f385a97 : Adding flag android.content.pm.reduce_broadcasts_for_component_state_changes to trunk_staging.
968f2870 : Adding flag com.android.org.conscrypt.flags.certificate_transparency_platform to trunk_staging.
d2bc4ce8 : Adding flag android.app.admin.flags.user_provisioning_same_state to trunk_staging.
f0d2f76d : Adding flag com.android.window.flags.enable_desktop_app_launch_transitions to trunk_staging.
96f1e69b : Adding flag com.android.healthfitness.flags.activity_intensity to trunk_staging.
f65d90d5 : Revert "Adding flag com.android.systemui.keyguard_wm_state_refactor to trunk_staging."
e222a1f5 : Adding flag com.android.systemui.notes_role_qs_tile to trunk_staging.
f30f79ca : Adding flag android.hardware.usb.flags.enable_accessory_stream_api to trunk_staging.
bde4abf3 : Adding flag com.android.systemui.keyboard_shortcut_helper_shortcut_customizer to trunk_staging.
d3a7a977 : Removing flag com.android.settings.flags.catalyst_internet_settings from trunk_staging.
537b9614 : Adding flag com.android.media.audioserver.enable_audio_input_device_routing to trunk_staging.
601d8fce : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_TARGET_JAVA_21 = true
4b9b5345 : Adding flag com.android.window.flags.skip_compat_ui_education_in_desktop_mode to trunk_staging.
abd2b845 : Add launch flag for RELEASE_ONDEVICE_INTELLIGENCE_MODULE
365bbbcc : Adding flag com.android.settings.flags.catalyst_power_usage_summary_screen to trunk_staging.
ed5eb3d4 : Revert "Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING:"
61400687 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_TARGET_JAVA_21 = true
f1b566f0 : Adding flag com.android.settings.flags.catalyst_sound_screen to trunk_staging.
df1071a9 : Adding flag com.android.settings.flags.catalyst_display_settings_screen to trunk_staging.
1259592b : Adding flag com.android.settings.flags.catalyst_text_reading_screen to trunk_staging.
18159fbc : Adding flag com.android.settings.flags.catalyst_battery_saver_screen to trunk_staging.
730d365f : Adding flag com.android.settings.flags.catalyst_bluetooth_switchbar_screen to trunk_staging.
5e8a63da : Adding flag com.android.settings.flags.catalyst_vibration_intensity_screen to trunk_staging.
e16e131b : Adding flag com.android.settings.flags.catalyst_internet_settings to trunk_staging.
cc58bf26 : Adding flag com.android.settings.flags.catalyst_tether_settings to trunk_staging.
a091736d : Adding flag com.android.settings.flags.catalyst_location_settings to trunk_staging.
5b4d4370 : Adding flag com.android.settings.flags.catalyst_network_provider_and_internet_screen to trunk_staging.
245ef773 : Adding flag com.android.settings.flags.catalyst_mobile_network_list to trunk_staging.
398ac891 : Removing flag com.android.systemui.keyguard_transition_force_finish_on_screen_off from trunk_staging.
c9828079 : Adding flag com.android.graphics.libgui.flags.edge_extension_shader to trunk_staging.
0e8d891f : Removing flag com.android.launcher3.enable_tiered_widgets_by_default_in_picker from trunk_staging.
93b0b909 : Trunk_staging: Set SPL to 2025-01-05
dd1c0e00 : Removing flag com.android.media.audioserver.enable_audio_input_device_routing from trunk_staging.
16921e5e : Adding flag com.android.wm.shell.enable_bubble_bar to trunk_staging.
bad86b2e : Revert "Adding flag com.android.wm.shell.enable_pip2 to trunk_staging."
099d15df : Adding flag com.android.systemui.keyguard_wm_state_refactor to trunk_staging.
15434012 : Adding flag android.app.accurate_wallpaper_downsampling to trunk_staging.
ef0ad8d9 : Adding flag com.android.internal.camera.flags.ae_priority to trunk_staging.
6f93849c : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_HARDWARE_AUDIO_USE_CAP_AIDL = true
11a2d812 : Adding flag android.media.audio.enable_multichannel_group_device to trunk_staging.
092b9077 : Adding flag com.android.adservices.flags.adservices_enable_per_module_overrides_api to trunk_staging.
4db54fe7 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_TARGET_JAVA_21 = true
97845436 : Adding flag com.android.wifi.flags.softap_disconnect_reason to trunk_staging.
9d6cf510 : Adding flag com.android.aconfig.flags.enable_system_aconfigd_rust to trunk_staging.
49ccdc81 : Adding flag com.android.wm.shell.enable_pip2 to trunk_staging.
f8d328d5 : Adding flag com.android.settings.flags.catalyst_lockscreen_from_display_settings to trunk_staging.
9937edbe : Adding flag android.app.customization_packs_apis to trunk_staging.
4e8bae24 : Adding flag android.app.job.get_pending_job_reasons_history_api to trunk_staging.
f2ba9558 : Adding flag com.android.adservices.flags.fledge_enable_schedule_custom_audience_default_partial_custom_audiences_constructor to trunk_staging.
78ef88c9 : Adding flag com.android.healthfitness.flags.permission_tracker_fix_mapping_init to trunk_staging.
85bc4c66 : Adding flag com.android.systemui.keyguard_transition_force_finish_on_screen_off to trunk_staging.
5edaa1d4 : Adding flag com.android.text.flags.typeface_redesign_readonly to trunk_staging.
1c1ef430 : Adding flag com.android.internal.telephony.flags.satellite_system_apis to trunk_staging.
ca8f26d5 : Advance Kernel to Build: 12632724 (6.1) in trunk_staging Panther and Cheetah
06e9318d : Adding flag com.android.launcher3.enable_tiered_widgets_by_default_in_picker to trunk_staging.
180e9004 : Adding flag android.nfc.nfc_associated_role_services to trunk_staging.
d0fd817e : Adding flag android.os.api_for_backported_fixes to trunk_staging.
5045d86f : Adding flag com.android.graphics.hwui.flags.clip_surfaceviews to trunk_staging.
a6342d0c : Advance Kernel to Build: 12632724 (6.1) in trunk_staging Oriole and Raven
8946b2af : Adding flag com.android.systemui.multiuser_wifi_picker_tracker_support to trunk_staging.
ec59a6b3 : Adding flag android.app.notifications_redesign_app_icons to trunk_staging.
147fc7a8 : Adding flag com.android.systemui.qs_new_tiles to trunk_staging.
9448e916 : Adding flag com.android.media.audioserver.enable_audio_input_device_routing to trunk_staging.
2528f677 : Adding flag android.media.audio.speaker_cleanup_usage to trunk_staging.
abcfd450 : Adding flag android.os.update_engine_api to trunk_staging.
c2596674 : Adding flag android.content.pm.cloud_compilation_pm to trunk_staging.
9fa3c01c : Adding flag com.android.art.flags.art_service_v3 to trunk_staging.
4d9c21fa : Adding flag com.android.window.flags.enable_desktop_app_launch_alttab_transitions to trunk_staging.
477bf710 : Adding flag com.android.adservices.flags.fledge_enable_report_event_for_component_seller to trunk_staging.
7684c31f : Adding flag com.android.window.flags.enable_desktop_windowing_exit_transitions to trunk_staging.
3c8ec477 : Adding flag android.os.update_engine_api to trunk_staging.
f80179c6 : Adding flag com.android.hardware.input.enable_new_25q2_keycodes to trunk_staging.
4f0067ff : Adding flag com.android.media.editing.flags.muxer_mp4_enable_apv to trunk_staging.
80113a51 : Adding flag com.android.window.flags.disable_opt_out_edge_to_edge to trunk_staging.
60612109 : Adding flag com.android.window.flags.better_support_non_match_parent_activity to trunk_staging.
46479181 : Adding flag com.android.settings.flags.screen_off_unlock_power_optimization to trunk_staging.
915f7917 : Removing flag com.android.server.am.limit_priority_scope from trunk_staging.
64e7710e : Advance Kernel to Build: 12632724 in trunk_staging Comet
052c209c : Advance Kernel to Build: 12632724 in trunk_staging Caimito
43b60116 : Advance Kernel to Build: 12632724 (6.1) in trunk_staging Bluejay
72d428d1 : Advance Kernel to Build: 12632724 (6.1) in trunk_staging Akita
d0992067 : Advance Kernel to Build: 12632724 (6.1) in trunk_staging Tangorpro
c511bdd9 : Advance Kernel to Build: 12632724 (6.1) in trunk_staging Lynx
f2997e1d : Adding flag android.media.codec.subsession_metrics to trunk_staging.
f9f16c30 : Removing flag com.android.window.flags.enable_desktop_windowing_exit_transitions from trunk_staging.
bed5af36 : Adding flag com.android.server.am.limit_priority_scope to trunk_staging.
2edc1240 : Adding flag com.android.systemui.media_controls_button_media3 to trunk_staging.
a4dc5480 : Adding flag android.view.inputmethod.writing_tools to trunk_staging.
dec5c829 : Adding flag android.security.secure_lockdown to trunk_staging.
2173bc4a : Adding flag android.media.codec.codec_availability to trunk_staging.
6312572b : Adding flag android.permission.flags.enable_sqlite_appops_accesses to trunk_staging.
9f8a7538 : Adding flag com.android.media.flags.enable_new_wired_media_route_2_info_types to trunk_staging.
0c31c3f7 : Adding flag android.app.enable_tv_implicit_enter_pip_restriction to trunk_staging.
341b0cdc : Removing flag android.app.notifications_redesign_app_icons from trunk_staging.
7429fbb0 : Adding flag android.view.accessibility.deprecate_ani_label_for_apis to trunk_staging.
bb52961f : Adding flag android.media.audio.spatial_audio_settings_versioning to trunk_staging.
077543be : Adding flag android.media.audio.iamf_definitions_api to trunk_staging.
4388fb2e : Adding flag com.android.server.telecom.flags.fix_user_request_baseline_route_video_call to trunk_staging.
30f33fd6 : Adding flag com.android.server.telecom.flags.new_audio_path_speaker_broadcast_and_unfocused_routing to trunk_staging.
5608116b : Adding flag android.database.sqlite.oneway_finalizer_close_fixed to trunk_staging.
ccc20fc2 : Adding flag com.android.bluetooth.flags.leaudio_gmap_client to trunk_staging.
8ced59a3 : Adding flag com.android.adservices.flags.fledge_enable_winning_seller_id_in_ad_selection_outcome to trunk_staging.
b79e6c9b : Adding flag com.android.internal.camera.flags.camera_heif_gainmap to trunk_staging.
65fbba38 : Adding flag android.media.codec.secure_codecs_require_crypto to trunk_staging.
c40da3c9 : Removing flag android.multiuser.add_launcher_user_config from trunk_staging.
3c43877d : Advance Kernel to Build: 12623605 (6.1) in trunk_staging Panther and Cheetah
e7d98398 : Adding flag com.android.media.audio.hardening_impl to trunk_staging.
05174a8a : Adding flag android.service.dreams.cleanup_dream_settings_on_uninstall to trunk_staging.
2910f986 : Adding flag com.android.media.extractor.flags.extractor_mp4_enable_apv to trunk_staging.
58ccb5d3 : Adding flag android.media.audio.spatializer_capabilities to trunk_staging.
ac12cfe6 : Adding flag android.multiuser.add_launcher_user_config to trunk_staging.
77cb445a : Advance Kernel to Build: 12626061 in ap4a Caimito
1f90f522 : Advance Kernel to Build: 12626061 in ap4a Comet
2e138d0e : Adding flag com.android.media.aaudio.offload_support to trunk_staging.
ec6c8cf6 : Adding flag com.android.adservices.flags.fledge_enable_custom_audience_component_ads to trunk_staging.
cd4804ae : Adding flag android.media.tv.flags.tuner_w_apis to trunk_staging.
b9487a84 : Advance Kernel to Build: 12623605 in trunk_staging Comet
2fd6650f : Adding flag android.media.swcodec.flags.apv_software_codec to trunk_staging.
d7d2447f : Adding flag android.media.codec.num_input_slots to trunk_staging.
e188af77 : Advance Kernel to Build: 12623605 (6.1) in trunk_staging Bluejay
a22d30b9 : Advance Kernel to Build: 12623605 in trunk_staging Caimito
3f9ac1dd : Run presubmit when enabling HealthFitness flags.
e03f153b : Advance Kernel to Build: 12623605 (6.1) in trunk_staging Felix
77325368 : Adding flag android.permission.flags.device_aware_app_op_new_schema_enabled to trunk_staging.
52f0993d : audio: Add a build flag for the framework to use CAP via AIDL HAL
af4a255c : Advance Kernel to Build: 12623605 (6.1) in trunk_staging Akita
0f7f5d47 : Advance Kernel to Build: 12623605 (6.1) in trunk_staging Tangorpro
e9c2a254 : Advance Kernel to Build: 12623605 (6.1) in trunk_staging Lynx
55442d9d : Adding flag com.android.bluetooth.flags.smp_state_machine_stuck_after_disconnection_fix to trunk_staging.
f065883a : Add a test mapping to shell flags
55e9ed79 : Removing flag com.android.bluetooth.flags.adm_fix_disconnect_of_set_member from trunk_staging.
63a44bf4 : Removing flag com.android.healthfitness.flags.activity_intensity from trunk_staging.
4de8dd4c : Removing flag com.android.internal.camera.flags.camera_heif_gainmap from trunk_staging.
a3efeb24 : Adding flag com.android.media.audio.audio_eraser_effect to trunk_staging.
3f34c616 : Adding flag android.app.notifications_redesign_app_icons to trunk_staging.
c48b5b5e : Adding flag android.media.audio.routed_device_ids to trunk_staging.
23af1ecb : Removing flag android.chre.flags.efw_xport_in_context_hub from trunk_staging.
4ed65250 : Adding flag com.android.trunk_stable_workflow_testing.test_migrate_flag to trunk_staging.
f75cd877 : Adding flag com.android.server.display.feature.flags.enable_get_supported_refresh_rates to trunk_staging.
e5c3eeea : Removing flag com.android.trunk_stable_workflow_testing.test_migrate_flag from trunk_staging.
58fd2201 : Adding flag com.android.systemui.clock_reactive_variants to trunk_staging.
706d56f1 : Adding flag com.android.server.am.phantom_processes_fix to trunk_staging.
987cdfc9 : Adding flag com.android.systemui.stoppable_fgs_system_app to trunk_staging.
b5447b8b : Adding flag com.android.systemui.user_encrypted_source to trunk_staging.
90457cf0 : Adding flag android.permission.flags.ranging_permission_enabled to trunk_staging.
1a5010a3 : Adding flag com.android.intentresolver.rebuild_adapters_on_target_pinning to trunk_staging.
1e1719b4 : Advance Kernel to Build: 12616601 (6.1) in trunk_staging Oriole and Raven
01f38159 : Adding flag com.android.systemui.status_bar_auto_start_screen_record_chip to trunk_staging.
8c44f58a : Adding flag android.app.admin.flags.set_mte_policy_coexistence to trunk_staging.
2ab4a918 : Adding flag com.android.text.flags.vertical_text_layout to trunk_staging.
0c194165 : Removing flag com.android.text.flags.typeface_redesign from trunk_staging.
40033e90 : Adding flag com.android.server.power.feature.flags.framework_wakelock_info to trunk_staging.
7130bfc5 : Adding flag android.content.pm.cloud_compilation_pm to trunk_staging.
11d29dff : Adding flag com.android.healthfitness.flags.activity_intensity to trunk_staging.
bc8704b3 : Adding flag android.companion.virtualdevice.flags.vdm_settings to trunk_staging.
0c74c938 : Removing flag com.android.window.flags.ensure_keyguard_does_transition_starting from trunk_staging.
827fc421 : Adding flag com.android.healthfitness.flags.expressive_theming_enabled to trunk_staging.
82f428f9 : Adding flag com.android.healthfitness.flags.phr_fhir_primitive_type_validation to trunk_staging.
7baab263 : Adding flag com.android.healthfitness.flags.phr_fhir_complex_type_validation to trunk_staging.
de13a873 : Adding flag com.android.healthfitness.flags.phr_fhir_oneof_validation to trunk_staging.
313c899b : Adding flag com.android.healthfitness.flags.phr_fhir_basic_complex_type_validation to trunk_staging.
07460f05 : Adding flag com.android.window.flags.enable_desktop_windowing_exit_transitions to trunk_staging.
2ac9a3e1 : Adding flag com.android.server.flags.rate_limit_battery_changed_broadcast to trunk_staging.
e5ac5f9c : Removing flag com.android.window.flags.release_snapshot_aggressively from trunk_staging.
b9fa3024 : Adding flag com.android.window.flags.ensure_keyguard_does_transition_starting to trunk_staging.
482e35c2 : Removing flag com.android.wm.shell.enable_bubble_bar from trunk_staging.
1832ba31 : Adding flag android.chre.flags.efw_xport_in_context_hub to trunk_staging.
a9100e67 : Add a build flag to select between MessageQueue implementations
955ab7e2 : Adding flag com.android.window.flags.enable_accessible_custom_headers to trunk_staging.
5dbb4ba3 : Adding flag com.android.server.deviceidle.use_cpu_time_for_temp_allowlist to trunk_staging.
bb528c3e : Adding flag android.view.flags.toolkit_viewgroup_set_requested_frame_rate_api to trunk_staging.
211a4054 : Add build flag to indicate whether the mainline supplicant binary should be included in the Apex.
32eab0c4 : Adding flag com.android.wifi.flags.aware_pairing to trunk_staging.
5482d439 : Adding flag com.android.wifi.flags.get_channel_width_api to trunk_staging.
edc47cf0 : Adding flag com.android.wifi.flags.autojoin_restriction_security_types_api to trunk_staging.
fd2ce9c1 : Adding flag android.security.prevent_intent_redirect_abort_or_throw_exception to trunk_staging.
8f0095e6 : Adding flag com.android.nfc.flags.check_passed_in_package to trunk_staging.
7b8f1495 : Adding flag android.view.flags.use_refactored_round_scrollbar to trunk_staging.
f1eafade : Removing flag android.security.prevent_intent_redirect_abort_or_throw_exception from trunk_staging.
b73b5616 : Removing flag com.android.window.flags.bal_strict_mode from trunk_staging.
623828b8 : Adding flag android.permission.flags.enhanced_confirmation_in_call_apis_enabled to trunk_staging.
8d1455d3 : Adding flag android.permission.flags.unknown_call_package_install_blocking_enabled to trunk_staging.
80431d69 : Adding flag com.android.wm.shell.enable_bubble_bar to trunk_staging.
1fa7ad33 : Removing flag com.android.systemui.media_controls_button_media3 from trunk_staging.
a7896034 : Advance Kernel to Build: 12609334 (6.1) in trunk_staging Bluejay
57770bf9 : Advance Kernel to Build: 12609334 (6.1) in trunk_staging Tangorpro
432aadde : Advance Kernel to Build: 12609334 (6.1) in trunk_staging Akita
3b1203bf : Advance Kernel to Build: 12609334 in trunk_staging Caimito
05a2b26c : Advance Kernel to Build: 12609334 in trunk_staging Comet
d7d13cdd : Advance Kernel to Build: 12609334 (6.1) in trunk_staging Lynx
00112619 : Advance Kernel to Build: 12609334 (6.1) in trunk_staging Oriole and Raven
5c79a679 : Adding flag android.app.admin.flags.suspend_packages_coexistence to trunk_staging.
14b04308 : Adding flag com.android.healthfitness.flags.personal_health_record to trunk_staging.
da58193d : Adding flag com.android.healthfitness.flags.personal_health_record_telemetry_private_ww to trunk_staging.
97f29b27 : Adding flag com.android.icu.icu_25q2_api to trunk_staging.
3175ee9b : Removing flag com.android.healthfitness.flags.personal_health_record from trunk_staging.
5ef6b66c : Adding flag com.android.art.flags.art_service_v3 to trunk_staging.
6105590a : Adding flag com.android.hardware.input.touchpad_three_finger_tap_shortcut to trunk_staging.
5b3bcb00 : Removing flag android.tracing.perfetto_wm_dump_cts from trunk_staging.
9c472e88 : Removing flag com.android.systemui.user_encrypted_source from trunk_staging.
2e47f302 : Add text namespace to the live flags
0cbc657d : Removing flag com.android.server.am.reset_on_fork_enabled from trunk_staging.
acb5226c : Adding flag com.android.net.thread.flags.channel_max_powers_enabled to trunk_staging.
bad15c7d : Adding flag com.android.systemui.show_clipboard_indication to trunk_staging.
1ded8440 : Removing flag android.content.pm.reduce_broadcasts_for_component_state_changes from trunk_staging.
f189943b : Adding flag android.media.tv.flags.set_resource_holder_retain to trunk_staging.
d373f958 : Adding flag android.media.tv.flags.tif_extension_standardization to trunk_staging.
9f3ec5e7 : Adding flag android.permission.flags.enable_aiai_proxied_text_classifiers to trunk_staging.
ae28d964 : Adding flag com.android.internal.camera.flags.camera_heif_gainmap to trunk_staging.
a6e7496f : Adding flag com.android.appsearch.flags.enable_apps_indexer_incremental_put to trunk_staging.
04cfa8f3 : Adding flag com.android.systemui.media_controls_button_media3 to trunk_staging.
fef3c969 : Adding flag android.app.job.get_pending_job_reasons_api to trunk_staging.
0a9e4d01 : Adding flag com.android.text.flags.deprecate_elegant_text_height_api to trunk_staging.
af6e0cc2 : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_UPROBESTATS_MODULE = true
b2a11a97 : Adding flag com.android.trunk_stable_workflow_testing.test_migrate_flag to trunk_staging.
886c8296 : Removing flag com.android.trunk_stable_workflow_testing.test_migrate_flag from trunk_staging.
3fe2af7b : Adding flag android.server.allow_network_time_update_service to trunk_staging.
9d306d90 : Advance Kernel to Build: 12609334 (6.1) in trunk_staging Felix
4f6d9bd9 : Adding flag android.car.feature.change_swaps_during_suspend_to_disk to trunk_staging.
e77e9c43 : Adding flag com.android.systemui.user_encrypted_source to trunk_staging.
452adef0 : Adding flag com.android.graphics.hwui.flags.animated_image_drawable_filter_bitmap to trunk_staging.
03ddbe25 : Removing flag com.android.healthfitness.flags.permission_tracker_fix_mapping_init from trunk_staging.
0c87d42f : Adding flag android.permission.flags.use_frozen_aware_remote_callback_list to trunk_staging.
9a1339a9 : Adding flag android.permission.flags.sync_on_op_noted_api to trunk_staging.
dac7f5fa : Adding flag com.android.systemui.override_suppress_overlay_condition to trunk_staging.
e4655c20 : Adding flag com.android.internal.camera.flags.color_temperature to trunk_staging.
3b226474 : Adding flag android.hardware.radio.hd_radio_emergency_alert_system to trunk_staging.
944e3b8d : Adding flag com.android.apex.flags.enable_brand_new_apex to trunk_staging.
49492eea : Adding flag com.android.bluetooth.flags.default_gatt_transport to trunk_staging.
df83aa3f : Removing flag android.hardware.radio.hd_radio_emergency_alert_system from trunk_staging.
6bfd168a : Adding flag com.android.settings.flags.mobile_network_security_2g_toggle to trunk_staging.
14b1a29c : Adding flag com.android.systemui.user_aware_settings_repositories to trunk_staging.
2f10ad3d : Removing flag android.database.sqlite.oneway_finalizer_close_fixed from trunk_staging.
b2d1cb49 : Add flag to control if to target java 21 by default
8ac48eb1 : Removing flag com.android.graphics.libgui.flags.edge_extension_shader from trunk_staging.
7c83e95d : Adding flag com.android.providers.downloads.flags.download_via_platform_http_engine to trunk_staging.
7f2436c5 : Adding flag com.android.providers.media.flags.enable_mark_media_as_favorite_api to trunk_staging.
d32383d3 : Adding flag com.android.hardware.input.enable_customizable_input_gestures to trunk_staging.
9fadff5f : Removing flag android.app.job.enforce_minimum_time_windows from trunk_staging.
ba838c03 : Adding flag com.android.healthfitness.flags.permission_tracker_fix_mapping_init to trunk_staging.
f300eeb5 : Adding flag com.android.media.extractor.flags.extractor_sniff_midi_optimizations to trunk_staging.
ec4b1d23 : Adding flag com.android.window.flags.predictive_back_system_override_callback to trunk_staging.
cd62bc2c : Removing flag com.android.net.thread.flags.channel_max_powers_enabled from trunk_staging.
d451e9cb : Adding flag com.android.window.flags.unify_back_navigation_transition to trunk_staging.
e3afbb3b : Adding flag android.content.pm.reduce_broadcasts_for_component_state_changes to trunk_staging.
c52606de : Advance Kernel to Build: 12602718 (6.1) in trunk_staging Bluejay
8f81a015 : Revert^8 "Turn on CombinedMessageQueue for trunk_staging"
4bb056f4 : Advance Kernel to Build: 12602718 (6.1) in trunk_staging Lynx
f6787b88 : Advance Kernel to Build: 12602718 (6.1) in trunk_staging Akita
16ec1e81 : Advance Kernel to Build: 12602718 (6.1) in trunk_staging Tangorpro
28dc174b : Adding flag com.android.window.flags.defer_predictive_animation_if_no_snapshot to trunk_staging.
75587aca : Adding flag android.app.job.enforce_minimum_time_windows to trunk_staging.
8c58273f : Revert^7 "Turn on CombinedMessageQueue for trunk_staging"
595bd878 : Removing flag com.android.bluetooth.flags.gatt_callback_on_failure from trunk_staging.
93db7469 : Adding flag android.hardware.radio.hd_radio_emergency_alert_system to trunk_staging.
f01814a4 : Removing flag com.android.bluetooth.flags.clear_pairing_state_when_no_devrec from trunk_staging.
8fa341f6 : Adding flag com.android.server.telecom.flags.end_session_improvements to trunk_staging.
a91af9a6 : Adding flag android.database.sqlite.oneway_finalizer_close_fixed to trunk_staging.
4b876a43 : Advance Kernel to Build: 12602718 in trunk_staging Comet
858d3b85 : Adding flag com.android.trunk_stable_workflow_testing.test_migrate_flag to trunk_staging.
7d8026c6 : Advance Kernel to Build: 12602718 in trunk_staging Caimito
a14274c3 : Adding flag android.appwidget.flags.not_keyguard_category to trunk_staging.
53dd0934 : Adding flag com.android.bluetooth.flags.leaudio_broadcast_api_get_local_metadata to trunk_staging.
5581b9e0 : Adding flag com.android.systemui.media_controls_button_media3 to trunk_staging.
5513525f : Adding flag com.android.wifi.flags.local_only_connection_optimization to trunk_staging.
45100402 : Adding flag com.android.systemui.secondary_user_widget_host to trunk_staging.
7a341ec3 : Adding flag android.view.accessibility.deprecate_accessibility_announcement_apis to trunk_staging.
70db5816 : Adding flag android.view.flags.calculate_bounds_in_parent_from_bounds_in_screen to trunk_staging.
156118d7 : Adding flag com.android.systemui.screenshot_policy_split_and_desktop_mode to trunk_staging.
44e01aa4 : Adding flag com.android.permission.flags.cross_user_role_enabled to trunk_staging.
008e1412 : Adding flag android.hardware.biometrics.private_space_bp to trunk_staging.
75aa5d93 : Adding flag android.hardware.biometrics.effective_user_bp to trunk_staging.
b66c93e3 : Adding flag android.tracing.perfetto_wm_dump_cts to trunk_staging.
59a44fce : Advance Kernel to Build: 12594479 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
d34665e6 : Adding flag android.hardware.usb.flags.expose_usb_speed_system_api to trunk_staging.
7931fb13 : Adding flag com.android.aconfig_new_storage.support_clear_local_overrides_immediately to trunk_staging.
8ab84398 : Set namespace of Android SDK build flags
2afcfda2 : Adding flag com.android.healthfitness.flags.personal_health_record_disable_export_import to trunk_staging.
b2792ace : Adding flag com.android.healthfitness.flags.personal_health_record_enable_d2d_and_export_import to trunk_staging.
27a4f6e8 : Adding flag com.android.healthfitness.flags.personal_health_record_disable_d2d to trunk_staging.
245d0ce0 : Adding flag com.android.healthfitness.flags.personal_health_record_telemetry to trunk_staging.
30dc7883 : OWNERS: assign ownership of Android SDK build flags
acc5835e : Adding flag com.android.window.flags.respect_animation_clip to trunk_staging.
a567ef62 : Adding flag com.android.net.thread.flags.configuration_enabled to trunk_staging.
c4d2444d : Removing flag com.android.net.thread.flags.configuration_enabled from trunk_staging.
c9d55f45 : Rename to RELEASE_AVF_ENABLE_VM_TO_TEE_SERVICES_ALLOWLIST
03d87840 : Advance Kernel to Build: 12594479 in trunk_staging Comet
73f9ac9c : Advance Kernel to Build: 12594479 in trunk_staging Caimito
0f4d9bc2 : Advance Kernel to Build: 12594479 (6.1) in trunk_staging Akita
380bcd09 : Advance Kernel to Build: 12594479 (6.1) in trunk_staging Lynx
ff5550d9 : Advance Kernel to Build: 12594479 (6.1) in trunk_staging Tangorpro
f0c574a2 : Advance Kernel to Build: 12594479 (6.1) in trunk_staging Felix
5fbce3d6 : Advance Kernel to Build: 12594479 (6.1) in trunk_staging Bluejay
cb172656 : Adding flag android.app.pic_uses_shared_memory to trunk_staging.
c7e70475 : Add new build system flag for mmd
7ff73452 : Adding flag android.permission.flags.enable_otp_in_text_classifiers to trunk_staging.
527d4138 : Adding flag com.android.bluetooth.flags.sec_disconnect_on_le_key_missing to trunk_staging.
788c975f : Adding flag android.view.accessibility.indeterminate_range_info to trunk_staging.
c7734795 : Adding flag com.android.server.power.feature.flags.policy_reason_in_display_power_request to trunk_staging.
99a8dddb : Adding flag com.android.systemui.home_controls_dream_hsum to trunk_staging.
0a967125 : Adding flag android.permission.flags.permission_tree_apis_deprecated to trunk_staging.
9c33c5eb : Adding flag android.media.audio.deprecate_stream_bt_sco to trunk_staging.
7349f169 : Adding flag android.app.job.handle_abandoned_jobs to trunk_staging.
fded1013 : Revert^6 "Turn on CombinedMessageQueue for trunk_staging"
66205110 : Revert^5 "Turn on CombinedMessageQueue for trunk_staging"
6f2fdd21 : Change the namespace of RELEASE_UPROBESTATS_MODULE
3f133e03 : Removing flag com.android.systemui.compose_bouncer from trunk_staging.
3b692214 : Adding flag android.media.audio.hardening_permission_api to trunk_staging.
adfef6cd : Removing flag com.android.systemui.home_controls_dream_hsum from trunk_staging.
23e549d5 : Adding flag com.android.systemui.notifications_redesign_footer_view to trunk_staging.
8b2004b0 : Advance Kernel to Build: 12587547 (6.1) in trunk_staging Panther and Cheetah
ccb5a20e : Adding flag com.android.window.flags.enable_tile_resizing to trunk_staging.
9e7878d3 : Adding flag com.android.healthfitness.flags.phr_fhir_structural_validation to trunk_staging.
78f7e353 : Advance Kernel to Build: 12587547 (6.1) in trunk_staging Oriole and Raven
2f5f3c63 : Adding flag android.service.notification.notification_conversation_channel_management to trunk_staging.
849f20a4 : Adding flag android.multiuser.ignore_restrictions_when_deleting_private_profile to trunk_staging.
5e0561aa : Updating build flag in RELEASE_CONFIG_TRUNKFOOD_STAGING: RELEASE_HC_PHR_FHIR_STRUCTURAL_VALIDATION = true
f4fa9503 : Adding flag android.companion.virtualdevice.flags.default_device_camera_access_policy to trunk_staging.
6343ebf2 : Removing flag com.android.healthfitness.flags.personal_health_record_telemetry from trunk_staging.
6841e49c : Adding flag android.content.pm.change_launcher_badging to trunk_staging.
9d4afbbc : Adding flag com.android.window.flags.disallow_app_progress_embedded_window to trunk_staging.
3c8c15d4 : Adding flag android.view.accessibility.supplemental_description to trunk_staging.
7caf3c43 : Adding flag android.security.prevent_intent_redirect_show_toast to trunk_staging.
7b848f54 : Adding flag android.media.codec.apv_support to trunk_staging.
f4f367a2 : Adding flag com.android.systemui.media_controls_umo_inflation_in_background to trunk_staging.
7248de25 : Adding flag android.app.live_wallpaper_content_handling to trunk_staging.
a4099c19 : Removing flag com.android.healthfitness.flags.activity_intensity from trunk_staging.
049a5a67 : Revert^4 "Turn on CombinedMessageQueue for trunk_staging"
a08f8209 : Adding flag android.os.binder_frozen_state_change_callback to trunk_staging.
9da95371 : Adding flag android.multiuser.cache_profiles_read_only to trunk_staging.
f342d024 : Revert^3 "Turn on CombinedMessageQueue for trunk_staging"
a8ded30b : Adding flag com.android.internal.os.ravenwood_flag_rw_2 to trunk_staging.
8896dd73 : Adding flag com.android.launcher3.grid_migration_refactor to trunk_staging.
a1fafbda : Adding flag com.android.systemui.smartspace_swipe_event_logging_fix to trunk_staging.
880f156a : Adding flag com.android.internal.os.ravenwood_flag_ro_2 to trunk_staging.
99854b0e : Adding flag android.os.profiling.system_triggered_profiling_new to trunk_staging.
edf54f0f : Adding flag com.android.systemui.compose_bouncer to trunk_staging.
505a4cb3 : Adding flag com.android.systemui.ignore_touches_next_to_notification_shelf to ap4a.
d36977ce : Adding flag com.android.systemui.home_controls_dream_hsum to trunk_staging.
3311156d : Adding flag android.view.accessibility.a11y_is_required_api to trunk_staging.
f3f72a1a : Adding flag com.android.internal.telephony.flags.ims_resolver_user_aware to trunk_staging.
9924aa21 : Adding flag com.android.window.flags.enable_desktop_windowing_app_to_web_education to trunk_staging.
a74e85db : Adding flag android.net.vcn.fix_config_garbage_collection to trunk_staging.
98631e8b : Adding flag android.net.vcn.mainline_vcn_module_api to trunk_staging.
477aa2b0 : Adding flag android.nfc.nfc_oem_extension to trunk_staging.
69a61c2b : Revert^2 "Turn on CombinedMessageQueue for trunk_staging"
b45bad69 : Adding flag android.os.vibrator.haptics_scale_v2_enabled to trunk_staging.
26bd927c : Adding flag android.app.job.ignore_important_while_foreground to trunk_staging.
8fc776ce : Adding flag com.android.systemui.home_controls_dream_hsum to trunk_staging.
7affd60b : Adding flag com.android.window.flags.vdm_force_app_universal_resizable_api to trunk_staging.
8d74f29d : Revert "Turn on CombinedMessageQueue for trunk_staging"
31eb8b43 : Adding flag com.android.healthfitness.flags.activity_intensity to trunk_staging.
94fcbc2c : Adding flag com.android.bluetooth.flags.adm_fix_disconnect_of_set_member to trunk_staging.
1df5d93f : Adding flag com.android.bluetooth.flags.key_missing_classic_device to trunk_staging.
61294c4d : Adding flag com.android.internal.telephony.flags.support_isim_record to trunk_staging.
23172ec4 : Adding flag com.android.internal.telephony.flags.support_ims_mmtel_interface to trunk_staging.
a2a485a8 : Adding flag com.android.window.flags.bal_strict_mode_ro to trunk_staging.
6fc79d70 : Removing flag com.android.window.flags.unify_back_navigation_transition from trunk_staging.
f419ca06 : Adding flag com.android.internal.camera.flags.multiresolution_imagereader_usage_public to trunk_staging.
585b0920 : Adding flag android.multiuser.property_invalidated_cache_bypass_mismatched_uids to trunk_staging.
e237904b : Adding flag android.content.pm.include_feature_flags_in_package_cacher to trunk_staging.
784328b1 : Removing flag android.view.flags.calculate_bounds_in_parent_from_bounds_in_screen from trunk_staging.
a7c948c3 : Adding flag android.view.accessibility.a11y_character_in_window_api to trunk_staging.
5731c066 : Adding flag com.android.bluetooth.flags.get_profile_use_lock to trunk_staging.
9ba86749 : Adding flag com.android.server.display.feature.flags.enable_get_suggested_frame_rate to trunk_staging.
847ba3d4 : Adding flag com.android.graphics.surfaceflinger.flags.arr_surfacecontrol_setframerate_api to trunk_staging.
691d704f : Turn on CombinedMessageQueue for trunk_staging
60290106 : Revert "Removing flag android.multiuser.property_invalidated_cache_bypass_mismatched_uids from trunk_staging."
f30cc779 : Removing flag android.app.pic_uses_shared_memory from trunk_staging.
e1b71258 : Adding flag com.android.server.am.unfreeze_bind_policy_fix to trunk_staging.
8180200d : Removing flag com.android.systemui.media_controls_umo_inflation_in_background from trunk_staging.
74566cd5 : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
bc0159a7 : Adding flag com.android.server.am.restrict_priority_values to trunk_staging.
a1b304d0 : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Panther and Cheetah
39c66e9b : Adding flag com.android.healthfitness.flags.personal_health_record_lock_screen_banner to trunk_staging.
aecee74d : Adding flag com.android.healthfitness.flags.personal_health_record_entries_screen to trunk_staging.
eddd71a4 : Adding flag android.permission.flags.allow_host_permission_dialogs_on_virtual_devices to trunk_staging.
59dfe3ab : Adding flag android.app.modes_multiuser to trunk_staging.
95301167 : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Oriole and Raven
072d465e : Adding flag com.android.window.flags.unify_back_navigation_transition to trunk_staging.
747229bc : Adding flag android.app.enforce_pic_testmode_protocol to trunk_staging.
8cf41dc4 : Adding flag com.android.bluetooth.flags.rnr_store_device_type to trunk_staging.
3e552dd0 : Adding flag com.android.bluetooth.flags.queue_dis_requests to trunk_staging.
d0f147d1 : Adding flag android.app.wearable.enable_provide_read_only_pfd to trunk_staging.
c6a5c8c4 : Adding flag android.app.wearable.enable_concurrent_wearable_connections to trunk_staging.
7d64dc08 : Removing flag com.android.window.flags.enable_windowing_transition_handlers_observers from trunk_staging.
07941cb9 : Adding flag com.android.healthfitness.flags.activity_intensity_db to trunk_staging.
a611a803 : Adding flag com.android.media.projection.flags.media_projection_connected_display to trunk_staging.
ee2f4eb3 : Revert^2 "trunk_staging: set codename to Baklava"
cc5575d0 : Adding flag com.android.window.flags.enable_move_to_next_display_shortcut to trunk_staging.
ff59cdfb : Adding flag com.android.window.flags.release_snapshot_aggressively to trunk_staging.
543ed56f : Adding flag com.android.settingslib.flags.audio_sharing_developer_option to trunk_staging.
8fd56214 : Adding flag com.android.bluetooth.flags.forward_get_set_report_failure_to_uhid to trunk_staging.
4085ffd2 : Adding flag com.android.settingslib.flags.audio_sharing_hysteresis_mode_fix to trunk_staging.
38add9cb : Advance Kernel to Build: 12574206 in trunk_staging Comet
0a76c695 : Advance Kernel to Build: 12574206 in trunk_staging Caimito
4b39d1c3 : Adding flag com.android.internal.camera.flags.mirror_mode_shared_surfaces to trunk_staging.
f18521ec : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Lynx
f57e2b9a : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Tangorpro
764c3108 : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Akita
f59518e3 : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Felix
3cca27fd : Advance Kernel to Build: 12574206 (6.1) in trunk_staging Bluejay
42161358 : Adding flag android.media.tv.flags.media_quality_fw to trunk_staging.
76856a58 : Adding flag android.app.pic_uses_shared_memory to trunk_staging.
58848ffe : Adding flag android.view.flags.toolkit_frame_rate_animation_bugfix_25q1 to trunk_staging.
1f7fd751 : Removing flag com.android.bluetooth.flags.get_profile_use_lock from trunk_staging.
eaeac51a : Adding flag android.view.flags.calculate_bounds_in_parent_from_bounds_in_screen to trunk_staging.
3d521021 : Adding flag com.android.server.telecom.flags.updated_rcs_call_count_tracking to trunk_staging.
d990e4d0 : Adding flag android.os.profiling.system_triggered_profiling to trunk_staging.
d4eff818 : Removing flag com.android.server.telecom.flags.enable_call_sequencing from trunk_staging.
a18701ea : Removing flag android.app.job.ignore_important_while_foreground from trunk_staging.
78ad596a : Removing flag com.android.systemui.notifications_redesign_footer_view from trunk_staging.
6312e9d9 : Adding flag android.hardware.biometrics.identity_check_api to trunk_staging.
3c6d7fd1 : Adding flag com.android.systemui.expand_heads_up_on_inline_reply to trunk_staging.
1f972b7f : Adding flag android.content.pm.sdk_dependency_installer to trunk_staging.
f303a5e8 : Adding flag com.android.window.flags.enable_desktop_windowing_hsum to trunk_staging.
2d8a2dce : Advance Kernel to Build: 12568106 (6.1) in trunk_staging Panther and Cheetah
61c0e8bc : Removing flag com.android.internal.camera.flags.mirror_mode_shared_surfaces from trunk_staging.
d9a58320 : Adding flag com.android.internal.telephony.flags.satellite_state_change_listener to trunk_staging.
619a020a : Adding flag com.android.intentresolver.target_hover_and_keyboard_focus_states to trunk_staging.
183671c0 : Adding flag android.os.allow_consentless_bugreport_delegated_consent to trunk_staging.
f4f78c87 : Adding flag com.android.graphics.libgui.flags.buffer_release_channel to trunk_staging.
0ec43b65 : Adding flag com.android.systemui.haptics_for_compose_sliders to trunk_staging.
9394bd7f : Advance Kernel to Build: 12568106 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
31671151 : Advance Kernel to Build: 12568106 (6.1) in trunk_staging Oriole and Raven
4b1e833b : Revert "Adding flag com.android.systemui.notifications_redesign_footer_view to trunk_staging."
bb04fdd8 : Adding flag com.android.systemui.status_bar_show_audio_only_projection_chip to trunk_staging.
8c3c2f52 : Adding flag android.os.vibrator.primitive_composition_absolute_delay to trunk_staging.
a13324d2 : Adding flag com.android.systemui.notifications_redesign_footer_view to trunk_staging.
89cab0f4 : Adding flag com.android.graphics.libgui.flags.edge_extension_shader to trunk_staging.
4d6f1408 : Removing flag com.android.server.power.feature.flags.per_display_wake_by_touch from trunk_staging.
21cfbbe3 : Removing flag android.os.allow_consentless_bugreport_delegated_consent from trunk_staging.
a4a8db3e : Adding flag com.android.providers.media.flags.motion_photo_intent to trunk_staging.
ea51416b : Adding flag com.android.internal.telephony.flags.support_sms_over_ims_apis to trunk_staging.
99ac0c18 : Adding flag android.app.job.ignore_important_while_foreground to trunk_staging.
de5c903b : Removing flag android.hardware.usb.flags.enable_udc_sysfs_usb_state_update from trunk_staging.
93d76f9c : Adding flag android.media.codec.rendering_depth_removal to trunk_staging.
85bca2bf : Removing flag android.companion.virtual.flags.stream_camera from trunk_staging.
48c54a40 : Revert "trunk_staging: set codename to Baklava"
2490fa6e : Adding flag com.android.launcher3.enable_first_screen_broadcast_archiving_extras to trunk_staging.
e742c33d : Removing flag com.android.server.accessibility.enable_magnification_follows_mouse from trunk_staging.
2fdd9705 : Revert "nfc: Turn on watchdog flag for AP4A builds"
cbaab0cd : Adding flag com.android.ranging.flags.ranging_stack_enabled to trunk_staging.
ceb066ce : Adding flag com.android.ranging.flags.ranging_rtt_enabled to trunk_staging.
68bf4c95 : Adding flag com.android.ranging.flags.ranging_cs_enabled to trunk_staging.
e82f7094 : Adding flag com.android.media.audioserver.use_bt_sco_for_media to trunk_staging.
652d6831 : Adding flag com.android.wm.shell.enable_task_view_controller_cleanup to trunk_staging.
6d06a5bb : Adding flag com.android.internal.camera.flags.mirror_mode_shared_surfaces to trunk_staging.
800afb13 : Adding flag com.android.server.accessibility.magnification_enlarge_pointer_bugfix to trunk_staging.
4fa073a3 : Removing flag com.android.systemui.expand_heads_up_on_inline_reply from trunk_staging.
3c4fe7ee : Removing flag android.app.notification_no_custom_view_conversations from trunk_staging.
b01e0f1e : Adding flag com.android.systemui.transition_race_condition to trunk_staging.
6ffdd6f4 : Removing flag android.multiuser.property_invalidated_cache_bypass_mismatched_uids from trunk_staging.
d94c174d : Temporary block O6/R3 kernels from trunk_staging.
beac7bab : Adding flag com.android.systemui.qs_quick_rebind_active_tiles to trunk_staging.
b94be066 : Adding flag android.os.allow_consentless_bugreport_delegated_consent to trunk_staging.
fa7ad05b : Adding flag android.net.wifi.flags.legacy_keystore_to_wifi_blobstore_migration_read_only to trunk_staging.
e5400493 : Adding flag com.android.bluetooth.flags.a2dp_source_threading_fix to trunk_staging.
e9f7afab : Revert "Advance Kernel to Build: 12553394 (6.1) in trunk_staging..."
0a3cd9b3 : Adding flag android.security.enable_intent_matching_flags to trunk_staging.
220a918f : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Panther and Cheetah
97f52d91 : Adding flag android.app.api_rich_ongoing to trunk_staging.
fec82f5c : Adding flag android.security.prevent_intent_redirect_abort_or_throw_exception to trunk_staging.
82e4328d : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
d21329e3 : Removing flag android.app.notifications_redesign_app_icons from trunk_staging.
1589df1b : Removing flag com.android.settingslib.flags.audio_sharing_hysteresis_mode_fix from trunk_staging.
c9cb1736 : Adding flag com.android.bluetooth.flags.leaudio_mono_location_errata_api to trunk_staging.
8a92775e : Adding flag com.android.settingslib.flags.audio_sharing_hysteresis_mode_fix to trunk_staging.
b17112d6 : Adding flag com.android.window.flags.universal_resizable_by_default to trunk_staging.
2d9b7982 : Advance Kernel to Build: 12561822 in trunk_staging Comet
df50fd14 : Advance Kernel to Build: 12561822 in trunk_staging Caimito
ff62ff32 : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Tangorpro
c9286205 : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Akita
3cec14e4 : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Felix
b9b5483a : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Bluejay
a4b4d351 : Advance Kernel to Build: 12561822 (6.1) in trunk_staging Lynx
d42c4ec5 : Advance Kernel to Build: 12561933 in ap4a Caimito
56542b09 : Advance Kernel to Build: 12561933 in ap4a Comet
af954a74 : Add initial set of system feature build flags
f3add90e : Removing flag com.android.launcher3.enable_first_screen_broadcast_archiving_extras from trunk_staging.
5d61b5fd : Removing flag com.android.systemui.transition_race_condition from trunk_staging.
88addd8b : Adding flag com.android.launcher3.enable_first_screen_broadcast_archiving_extras to trunk_staging.
6cd8b883 : Adding flag com.android.systemui.notifications_dismiss_pruned_summaries to trunk_staging.
9e46c37c : Removing flag android.view.flags.use_refactored_round_scrollbar from trunk_staging.
1934d325 : Advance Kernel to Build: 12553394 (6.1) in trunk_staging Oriole and Raven
65c2e436 : Adding flag com.android.systemui.transition_race_condition to trunk_staging.
b64481f1 : Adding flag android.server.remove_java_service_manager_cache to trunk_staging.
e8cad4ee : Adding flag com.android.bluetooth.flags.leaudio_mono_location_errata to trunk_staging.
3ba25abc : Removing flag com.android.systemui.expand_heads_up_on_inline_reply from trunk_staging.
1ea6596f : Removing flag com.android.systemui.expand_heads_up_on_inline_reply from trunk_staging.
294c7530 : Adding flag com.android.providers.media.flags.enable_cloud_media_provider_capabilities to trunk_staging.
5a218a1f : Enable RELEASE_HC_PHR_FHIR_STRUCTURAL_VALIDATION flag in staging
b622864e : Advance Kernel to Build: 12553394 in trunk_staging Caimito
522ffb48 : trunk_staging: set codename to Baklava
438778ba : Rollback kernel from 12546210 to 12539045 trunk_staging for Caiman
9e6fe77b : Advance Kernel to Build: 12553394 in trunk_staging Comet
ff5952da : Adding flag android.view.flags.use_refactored_round_scrollbar to trunk_staging.
d1b89d7a : Adding flag com.android.text.flags.typeface_redesign to trunk_staging.
d5ec0259 : Adding flag com.android.bluetooth.flags.associate_browse_l2cap_request_with_active_control_channel to trunk_staging.
85c0693b : Adding flag com.android.bluetooth.flags.support_remote_device_metadata to trunk_staging.
351d876d : Adding flag com.android.server.display.feature.flags.auto_brightness_mode_bedtime_wear to trunk_staging.
ad7ca5c6 : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Akita
00babe7f : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Felix
92d7b565 : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Tangorpro
3d780d35 : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Bluejay
3ab31c9a : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Lynx
e8d24486 : Adding flag com.android.window.flags.bal_strict_mode to trunk_staging.
16555d02 : Removing flag com.android.wm.shell.enable_optional_bubble_overflow from trunk_staging.
4763fa08 : Adding flag android.permission.flags.appop_mode_caching_enabled to trunk_staging.
db0729cd : Adding flag com.android.server.telecom.flags.telecom_main_user_in_block_check to trunk_staging.
79e3cbaf : Adding flag com.android.server.telecom.flags.telecom_main_user_in_get_respond_message_app to trunk_staging.
fff1911c : Removing flag android.content.pm.sdk_lib_independence from trunk_staging.
fed7ed0d : Add new build system flag for HC PHR structural validation
b5cb31c4 : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Panther and Cheetah
621125be : Adding flag android.app.modes_hsum to trunk_staging.
ac5404b6 : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
4e986be7 : Advance Kernel to Build: 12546210 (6.1) in trunk_staging Oriole and Raven
5136f8b3 : Removing flag android.app.modes_hsum from trunk_staging.
c90d9219 : Adding flag com.android.healthfitness.flags.personal_health_record_telemetry to trunk_staging.
3683da23 : Adding flag android.app.notifications_redesign_app_icons to trunk_staging.
c725942e : Adding flag com.android.systemui.notifications_footer_visibility_fix to trunk_staging.
9d44d36e : Adding flag com.android.libcore.schedule_at_fixed_rate_new_behavior to trunk_staging.
8f19e446 : Adding flag com.android.healthfitness.flags.personal_health_record_ui_telemetry to trunk_staging.
9819884f : Adding flag com.android.systemui.media_projection_request_attribution_fix to trunk_staging.
354ba68b : Removing flag android.content.pm.get_packages_from_launcher_apps from trunk_staging.
3a980a98 : Adding flag com.android.bluetooth.flags.l2cap_fcs_option_fix to trunk_staging.
13175f58 : Advance Kernel to Build: 12546210 in trunk_staging Comet
83607c51 : Removing flag android.server.remove_java_service_manager_cache from trunk_staging.
57fa3d28 : Adding flag com.android.bluetooth.flags.fix_add_device_properties to trunk_staging.
58621437 : Advance Kernel to Build: 12546210 in trunk_staging Caimito
fccd8515 : [2nd attempt] Update Emoji to 4.074 which has latest Unicode 16 emojis
11340d1f : Advance Kernel to Build: 12539045 (6.1) in trunk_staging Tangorpro
f943da00 : Advance Kernel to Build: 12539045 (6.1) in trunk_staging Bluejay
be125e62 : Advance Kernel to Build: 12539045 (6.1) in trunk_staging Akita
bf3cf4be : Advance Kernel to Build: 12539045 (6.1) in trunk_staging Felix
0647fb90 : Advance Kernel to Build: 12539045 (6.1) in trunk_staging Lynx
0bc1abd0 : Adding flag android.server.remove_java_service_manager_cache to trunk_staging.
5431f8f1 : Adding flag android.multiuser.cache_profile_type_read_only to trunk_staging.
df3cba07 : Move P24 and CT3 Kernel flag declaration here
2e4e73d8 : Adding flag com.android.tradeinmode.flags.enable_trade_in_mode to trunk_staging.
9ea699f4 : Adding flag com.android.bluetooth.flags.bta_ag_cmd_brsf_allow_uint32 to trunk_staging.
ebd313dc : Adding flag com.android.window.flags.predictive_back_swipe_edge_none_api to trunk_staging.
0455f455 : Adding flag android.app.supervision.flags.supervision_api to trunk_staging.
6f31ef78 : Move P24 and CT3 Kernel ap3a flag here
d1ec21a6 : Adding flag android.multiuser.multiuser_widget to trunk_staging.
124022a3 : Adding flag com.android.bluetooth.flags.leaudio_add_aics_support to trunk_staging.
dc7c6a35 : Adding flag com.android.providers.media.flags.enable_photopicker_transcoding to trunk_staging.
c904f7d0 : Adding flag com.android.internal.jank.use_sf_frame_duration to trunk_staging.
41fb440e : Adding flag android.media.audio.concurrent_audio_record_bypass_permission to trunk_staging.
30e8e2b5 : Adding flag android.media.codec.p210_format_support to trunk_staging.
b7213a2a : Adding flag android.content.pm.improve_install_freeze to trunk_staging.
ffd773a7 : Adding flag com.android.net.flags.net_capability_not_bandwidth_constrained to trunk_staging.
d6f0c7ed : Advance Kernel to Build: 12539045 in trunk_staging Comet
c67c36f8 : Advance Kernel to Build: 12539045 in trunk_staging Caimito
b058c473 : Adding flag com.android.server.accessibility.package_monitor_dedicated_thread to trunk_staging.
5928d54a : Removing flag com.android.ranging.flags.ranging_stack_enabled from trunk_staging.
5846ac3c : Adding flag android.hardware.biometrics.screen_off_unlock_udfps to trunk_staging.
aac318f0 : Adding flag android.os.enable_angle_allow_list to trunk_staging.
6baea94b : Advance Kernel to Build: 12532287 (6.1) in trunk_staging Tangorpro
89f69d62 : Advance Kernel to Build: 12532287 (6.1) in trunk_staging Felix
9bc1c681 : Advance Kernel to Build: 12532287 (6.1) in trunk_staging Akita
e811b006 : Advance Kernel to Build: 12532287 (6.1) in trunk_staging Bluejay
04961394 : Advance Kernel to Build: 12532287 (6.1) in trunk_staging Lynx
47406062 : Adding flag com.android.server.display.feature.flags.enable_has_arr_support to trunk_staging.
cc83010e : Adding flag com.android.graphics.surfaceflinger.flags.arr_setframerate_api to trunk_staging.
18b83506 : Adding flag android.app.report_postgc_memory_metrics to trunk_staging.
d7df62ce : Removing flag android.security.prevent_intent_redirect_abort_or_throw_exception from trunk_staging.
ba825271 : Build flag for WebView prebuilt version.
5f5b3bde : Adding flag android.security.keystore_grant_api to trunk_staging.
889f8793 : Adding flag com.android.intentresolver.save_shareousel_state to trunk_staging.
0638272d : Advance Kernel to Build: 12532287 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
d237285f : Override enable_increased_bmm_logging_for_restore_at_install flag to disabled state
c582486b : trunk_staging: set codename to Baklava
7ac99c0e : Removing flag com.android.libcore.schedule_at_fixed_rate_new_behavior from trunk_staging.
e19bb04e : Adding flag android.security.prevent_intent_redirect_abort_or_throw_exception to trunk_staging.
b7b48d70 : Add flag declaration for P24 kernel versions
d6dee406 : Adding flag com.android.window.flags.enable_desktop_windowing_enter_transitions to trunk_staging.
d20856e7 : Adding flag com.android.libcore.readonly.native_metrics to trunk_staging.
e0a3b4c3 : Adding flag com.android.libcore.schedule_at_fixed_rate_new_behavior to trunk_staging.
979b478b : Adding flag com.android.libcore.readonly.post_cleanup_apis to trunk_staging.
076a581e : Revert^6 "Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging"
8f53c0bf : Removing flag com.android.server.notification.notification_nls_rebind from trunk_staging.
071446dc : Advance Kernel to Build: 12526200 (6.1) in trunk_staging Akita
b5db11df : Advance Kernel to Build: 12526200 (6.1) in trunk_staging Lynx
aa754e80 : Advance Kernel to Build: 12526200 in trunk_staging Caimito
b67ba878 : Advance Kernel to Build: 12526200 in trunk_staging Comet
1c8ce576 : Advance Kernel to Build: 12526200 (6.1) in trunk_staging Bluejay
c10ec69a : Advance Kernel to Build: 12526200 (6.1) in trunk_staging Tangorpro
e91f5825 : Advance Kernel to Build: 12526200 (6.1) in trunk_staging Felix
5dcdb6af : Adding flag com.android.settingslib.flags.settings_catalyst to trunk_staging.
0ba6d554 : Adding flag com.android.server.power.optimization.accumulate_battery_usage_stats to trunk_staging.
6e0e2b58 : Adding flag com.android.bluetooth.flags.allow_free_last_scn to trunk_staging.
68536240 : Revert "Adding flag com.android.graphics.libgui.flags.buffer_release_channel to trunk_staging."
b38c4110 : Adding flag com.android.server.job.enforce_quota_policy_to_top_started_jobs to trunk_staging.
d619408a : Revert "trunk_staging: add VanillaIceCream back to set RELEASE_P..."
03ea9067 : Removing flag android.server.remove_java_service_manager_cache from trunk_staging.
56b3329e : Adding flag com.android.systemui.ensure_enr_views_visibility to trunk_staging.
9f477a5a : Removing flag com.android.trunk_stable_workflow_testing.test_migrate_flag from trunk_staging.
e9f18b17 : Adding flag android.multiuser.multiple_alarm_notifications_support to trunk_staging.
d84e5601 : Adding flag com.android.graphics.libgui.flags.buffer_release_channel to trunk_staging.
a9f68849 : Adding flag com.android.trunk_stable_workflow_testing.frozen_read_only to trunk_staging.
7e74220a : Advance Kernel to Build: 12526588 (6.1) in trunk_staging Panther and Cheetah
9f0781e3 : Advance Kernel to Build: 12526588 (6.1) in trunk_staging Oriole and Raven
9626189c : Advance Kernel to Build: 12526588 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
d2417e24 : Adding flag com.android.systemui.only_show_media_stream_slider_in_single_volume_mode to trunk_staging.
7a5df43b : trunk_staging: add VanillaIceCream back to set RELEASE_PLATFORM_VERSION_ALL_CODENAMES
cc76a3eb : Adding flag android.multiuser.cache_profile_ids_read_only to trunk_staging.
4b85cba9 : Adding flag android.server.remove_java_service_manager_cache to trunk_staging.
04bd079a : Adding flag com.android.providers.media.flags.cloud_media_provider_search to trunk_staging.
fd52fa05 : Removing flag android.app.modes_hsum from trunk_staging.
5168b506 : Adding flag android.service.notification.notification_force_grouping to trunk_staging.
0d97a01b : Adding flag com.android.server.notification.notification_nls_rebind to trunk_staging.
6c969a1d : Revert "Update Emoji to 4.074 which has latest Unicode 16 emojis"
1ab35580 : Advance Kernel to Build: 12516218 (6.1) in trunk_staging Felix
eef2e187 : Advance Kernel to Build: 12516218 (6.1) in trunk_staging Bluejay
b0ca7018 : Adding flag com.android.window.flags.reset_draw_state_on_client_invisible to trunk_staging.
6d47772a : Adding flag com.android.net.thread.flags.set_nat64_configuration_enabled to trunk_staging.
771b0a5a : Revert "trunk_staging: set codename to Baklava"
562c724a : Removing flag android.app.job.cleanup_empty_jobs from trunk_staging.
cecf29d5 : Add trunk stable build flag for terms of service experience in CarLauncher
41eda9f7 : Removing flag android.content.pm.improve_install_freeze from trunk_staging.
56ffd80d : Removing flag com.android.server.backup.enable_increased_bmm_logging_for_restore_at_install from trunk_staging.
2deb3f51 : Move RELEASE_READ_FROM_NEW_STORAGE to build/release.
6d4ba34a : Adding flag com.android.ranging.flags.ranging_stack_enabled to trunk_staging.
5fb8185b : Removing flag com.android.text.flags.typeface_redesign from trunk_staging.
945a655e : Adding flag com.android.settings.accessibility.toggle_feature_fragment_collection_info to trunk_staging.
393b368e : Advance Kernel to Build: 12518885 (6.1) in trunk_staging Panther and Cheetah
2452af05 : Removing flag com.android.settings.flags.updated_suggestion_card_aosp from trunk_staging.
0c8cc5f7 : Advance Kernel to Build: 12518885 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
0ad0a38d : Adding flag com.android.server.usage.screen_time_bypass to trunk_staging.
5a1ae060 : Removing flag com.android.graphics.hwui.flags.clip_surfaceviews from trunk_staging.
90f941ce : trunk_staging: set codename to Baklava
58418bdf : Rollback enable_increased_bmm_logging_for_restore_at_install from AP4A
357a3dbf : Advance Kernel to Build: 12516218 (6.1) in trunk_staging Tangorpro
ce1e72cc : Adding flag com.android.systemui.ignore_touches_next_to_notification_shelf to trunk_staging.
5fcebee5 : Adding flag android.multiuser.cache_profile_parent_read_only to trunk_staging.
07b4d013 : Adding flag com.android.bluetooth.flags.gatt_clear_cache_on_factory_reset to trunk_staging.
bbaee7b8 : Update Emoji to 4.074 which has latest Unicode 16 emojis
4fb846f3 : Advance Kernel to Build: 12518035 in trunk_staging Comet
a8b22d9c : Advance Kernel to Build: 12518035 in trunk_staging Caimito
2392cee1 : Removing flag android.tracing.client_side_proto_logging from trunk_staging.
562e0d5e : Adding flag com.android.bluetooth.flags.fix_avdt_rconfig_not_setting_l2cap to trunk_staging.
9e6d3481 : Adding flag com.android.server.telecom.flags.telecom_metrics_support to trunk_staging.
32c7dd80 : Adding flag com.android.bluetooth.flags.av_stream_reconfigure_fix to trunk_staging.
64fb95e8 : Advance Kernel to Build: 12516218 (6.1) in trunk_staging Akita
8334f768 : Advance Kernel to Build: 12516218 (6.1) in trunk_staging Lynx
a151a0c6 : Adding flag android.security.aapm_api to trunk_staging.
0d68148c : Adding flag android.app.assist.flags.add_placeholder_view_for_null_child to trunk_staging.
33c3cb0d : Adding flag android.view.accessibility.focus_rect_min_size to trunk_staging.
dc22ddee : Adding flag com.android.server.accessibility.enable_magnification_follows_mouse_bugfix to trunk_staging.
a22d2d42 : Adding flag com.android.graphics.hwui.flags.clip_surfaceviews to trunk_staging.
573a5b7b : Removing flag com.android.internal.camera.flags.use_context_attribution_source from trunk_staging.
7deb2639 : Adding flag android.app.jank.detailed_app_jank_metrics_api to trunk_staging.
aacc9231 : Add more flags for 25q2 AVF features & enable in trunk-staging
c10dc963 : Advance Kernel to Build: 12513677 (6.1) in trunk_staging Panther and Cheetah
28bf256d : Advance Kernel to Build: 12513677 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
289a1e65 : Advance Kernel to Build: 12513677 (6.1) in trunk_staging Oriole and Raven
a0ff76aa : Adding flag android.app.modes_hsum to trunk_staging.
484de0d9 : Advance Kernel to Build: 12509525 (6.1) in trunk_staging Tangorpro
414a208f : Advance Kernel to Build: 12509525 (6.1) in trunk_staging Felix
de2fb1a5 : Advance Kernel to Build: 12509525 (6.1) in trunk_staging Bluejay
7c292dc0 : Adding flag android.multiuser.property_invalidated_cache_bypass_mismatched_uids to trunk_staging.
a99df42a : Adding flag android.tracing.client_side_proto_logging to trunk_staging.
3a1048d8 : Adding flag android.os.vibrator.vibration_pipeline_enabled to trunk_staging.
42746fe9 : Adding flag com.android.server.job.adjust_quota_default_constants to trunk_staging.
7bd85062 : Adding flag com.android.server.accessibility.clear_shortcuts_when_activity_updates_to_service to trunk_staging.
8b731182 : Adding flag com.android.window.flags.enable_restore_to_previous_size_from_desktop_immersive to trunk_staging.
1fe86b23 : Removing flag com.android.graphics.hwui.flags.high_contrast_text_small_text_rect from trunk_staging.
b523cd58 : Revert^2 "Add RELEASE_NFC_MAINLINE_MODULE flag to trunk_staging"
554db886 : Adding flag com.android.bluetooth.flags.rfcomm_cancel_ongoing_sdp_on_close to trunk_staging.
29ba1ff1 : Introduce flag to guard DocumentsUI released as an apex
cd39acc4 : Revert "Add RELEASE_NFC_MAINLINE_MODULE flag to trunk_staging"
d0435e88 : Removing flag com.android.systemui.enable_view_capture_tracing from trunk_staging.
04b3bdaf : Removing flag com.android.systemui.expand_heads_up_on_inline_reply from trunk_staging.
f50e8f3e : Adding flag com.android.server.power.hint.reset_on_fork_enabled to trunk_staging.
7a818caa : Advance Kernel to Build: 12506974 (6.1) in trunk_staging Panther and Cheetah
d3f7c0e9 : Advance Kernel to Build: 12506974 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
a7fa3fa8 : Removing flag com.android.window.flags.skip_compat_ui_education_in_desktop_mode from trunk_staging.
e83f6b76 : Adding flag com.android.internal.telephony.flags.ignore_carrierid_reset_for_sim_removal to trunk_staging.
e1b99569 : Advance Kernel to Build: 12506223 in ap4a Comet
c00d107d : Advance Kernel to Build: 12506254 in 24Q4 shusky
cf9a35fe : Advance Kernel to Build: 12506223 in ap4a Caimito
3453688c : Advance Kernel to Build: 12506254 in ap4a Akita
a67b053c : Rollback expand_heads_up_on_inline_reply flag from nextfood
14856afa : Adding flag com.android.launcher3.enable_container_return_animations to trunk_staging.
0fefe04a : Removing flag com.android.healthfitness.flags.onboarding from trunk_staging.
1b42eb6e : Removing flag com.android.healthfitness.flags.onboarding from trunk_staging.
f3c61bec : Adding flag android.view.inputmethod.refactor_insets_controller to trunk_staging.
e0bdfa25 : Adding flag com.android.bluetooth.flags.hap_connect_only_requested_device to trunk_staging.
b425355a : Removing flag com.android.window.flags.release_snapshot_aggressively from trunk_staging.
8e42b277 : Removing flag com.android.internal.telephony.flags.ignore_carrierid_reset_for_sim_removal from trunk_staging.
ce2dfb4d : Advance Kernel to Build: 12502707 (6.1) in trunk_staging Felix
7698e444 : Advance Kernel to Build: 12502707 (6.1) in trunk_staging Bluejay
debac3ef : Advance Kernel to Build: 12502707 (6.1) in trunk_staging Lynx
1f7b321c : Advance Kernel to Build: 12502707 (6.1) in trunk_staging Tangorpro
38099d8d : Advance Kernel to Build: 12502707 (6.1) in trunk_staging Akita
ee0197a2 : Advance Kernel to Build: 12502707 in trunk_staging Caimito
cf2c107d : Advance Kernel to Build: 12502707 in trunk_staging Comet
e0b7c50b : Adding flag com.android.bluetooth.flags.close_hid_if_uhid_ready_too_slow to trunk_staging.
8503ab5e : Adding flag com.android.window.flags.release_snapshot_aggressively to trunk_staging.
52cd7c73 : Adding flag com.android.systemui.enable_view_capture_tracing to trunk_staging.
747a493e : Add RELEASE_NFC_MAINLINE_MODULE flag to trunk_staging
35c6314b : Adding flag com.android.graphics.hwui.flags.high_contrast_text_small_text_rect to trunk_staging.
9c335ac9 : Adding flag android.service.autofill.highlight_autofill_single_field to trunk_staging.
ed2cf8f8 : Adding flag com.android.permission.flags.permission_timeline_attribution_label_fix to trunk_staging.
c7e162a3 : Advance Kernel to Build: 12495865 (6.1) in trunk_staging Bluejay
7ba757bd : Advance Kernel to Build: 12495865 (6.1) in trunk_staging Tangorpro
39219496 : Advance Kernel to Build: 12495865 (6.1) in trunk_staging Lynx
3b05a43f : Advance Kernel to Build: 12495865 (6.1) in trunk_staging Akita
6d8ab21e : Advance Kernel to Build: 12495865 (6.1) in trunk_staging Felix
7005d180 : Advance Kernel to Build: 12495865 in trunk_staging Caimito
19acccbb : Advance Kernel to Build: 12495865 in trunk_staging Comet
a74a27d4 : Revert "Advance Kernel to Build: 12486647 (6.1) in trunk_staging..."
c0e746ba : Removing flag android.tracing.client_side_proto_logging from trunk_staging.
55e92aec : Adding flag com.android.bluetooth.flags.adapter_properties_looper to trunk_staging.
677931d3 : Advance Kernel to Build: 12499850 (6.1) in trunk_staging Oriole and Raven
b0384a66 : Adding flag com.android.systemui.clipboard_use_description_mimetype to trunk_staging.
f6ecf191 : Adding flag android.tracing.client_side_proto_logging to trunk_staging.
f67200dd : Adding flag com.android.providers.media.flags.enable_unicode_check to trunk_staging.
f8f8974a : Adding flag com.android.text.flags.typeface_redesign to trunk_staging.
63819a5c : Adding flag android.app.enable_current_mode_type_binder_cache to trunk_staging.
b381e215 : Adding flag com.android.bluetooth.flags.leaudio_improve_switch_during_phone_call to trunk_staging.
48557ab5 : Adding flag com.android.systemui.shared.return_animation_framework_library to trunk_staging.
d3d965e5 : Advance Kernel to Build: 12492175 in trunk_staging Comet
f28e9e06 : Advance Kernel to Build: 12492175 in trunk_staging Caimito
aba7c451 : Adding flag com.android.bluetooth.flags.guest_mode_bond to trunk_staging.
73362c7c : Adding flag android.webkit.user_agent_reduction to trunk_staging.
e855401f : Revert "Advance Kernel to Build: 12494547 (6.1) in trunk_staging Shiba, Husky and Ripcurrent"
605196d2 : Adding flag libgooglecamerahal.flags.zsl_video_denoise_in_hwl_two to trunk_staging.
53ee812a : Adding flag com.android.window.flags.enable_a11y_metrics to trunk_staging.
363e06b9 : Adding flag com.android.bluetooth.flags.uncache_player_when_browsed_player_changes to trunk_staging.
a6354a21 : Adjust flags for RipCurrent Pro and 24 to follow caiman.
09db2da5 : Run android.app tests on android.app flag flips
0235c0e9 : Advance Kernel to Build: 12492175 (6.1) in trunk_staging Akita
350bd591 : Advance Kernel to Build: 12492175 (6.1) in trunk_staging Tangorpro
fc94d622 : Adding flag com.android.server.job.enforce_quota_policy_to_fgs_jobs to trunk_staging.
fa5799b4 : Adding flag android.service.notification.notification_classification to trunk_staging.
c143ba68 : nfc: Turn on watchdog flag for AP4A builds
17fdcdd8 : Advance Kernel to Build: 12494547 (6.1) in trunk_staging Panther and Cheetah
a810e165 : Advance Kernel to Build: 12494547 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
ef22f3e3 : Advance Kernel to Build: 12494547 (6.1) in trunk_staging Oriole and Raven
e1a1f170 : Adding flag com.android.libcore.read_only_dynamic_code_load to trunk_staging.
248698d0 : Adding flag com.android.aconfig.flags.enable_only_new_storage to trunk_staging.
739d656e : Adding flag com.android.window.flags.predictive_back_timestamp_api to trunk_staging.
ff45ff24 : Adding flag com.android.settings.flags.revamp_toggles to trunk_staging.
41f1be23 : Adding flag com.android.server.display.feature.flags.enable_waiting_confirmation_before_mirroring to trunk_staging.
bbc2a522 : Adding flag com.android.server.display.feature.flags.enable_user_refresh_rate_for_external_display to trunk_staging.
ec0ee69b : Adding flag com.android.server.display.feature.flags.enable_apply_display_changed_during_display_added to trunk_staging.
51136233 : Removing flag android.permission.flags.delay_uid_state_changes_from_capability_updates from trunk_staging.
e80ab960 : Removing flag com.android.server.telecom.flags.telecom_metrics_support from trunk_staging.
efaae0d5 : Adding flag com.android.internal.telephony.flags.emergency_callback_mode_notification to trunk_staging.
9896f3ad : Advance Kernel to Build: 12492175 (6.1) in trunk_staging Felix
f617768e : Advance Kernel to Build: 12492175 (6.1) in trunk_staging Lynx
609d1d3a : Advance Kernel to Build: 12492175 (6.1) in trunk_staging Bluejay
95c29c66 : Adding flag com.android.settingslib.flags.audio_sharing_qs_dialog_improvement to trunk_staging.
cb23daab : Adding flag android.car.feature.always_send_initial_value_event to trunk_staging.
ae6e7bc5 : Adding flag android.companion.unpair_associated_device to trunk_staging.
f7fd17ec : Adding flag com.android.server.telecom.flags.telecom_metrics_support to trunk_staging.
c9f4fd0b : Adding flag com.android.permission.flags.wear_compose_material3 to trunk_staging.
c070ef06 : Adding flag com.android.window.flags.enable_desktop_windowing_immersive_handle_hiding to trunk_staging.
3d3c4d23 : Adding flag android.multiuser.cache_user_properties_correctly_read_only to trunk_staging.
01db4a11 : Adding flag com.android.launcher3.enable_recents_window_proto_log to trunk_staging.
65f6c07a : Adding flag com.android.launcher3.enable_state_manager_proto_log to trunk_staging.
665a011c : Add TEST_MAPPING for android.service.notification
ccf2d409 : Removing flag com.android.providers.media.flags.enable_malicious_app_detector from trunk_staging.
12efbebd : Advance Kernel to Build: 12486647 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
d0b8110c : Removing flag android.nfc.nfc_oem_extension from trunk_staging.
e10387c9 : Adding flag com.android.systemui.check_lockscreen_gone_transition to trunk_staging.
53246286 : Adding flag com.android.providers.media.flags.enable_malicious_app_detector to trunk_staging.
d310c91e : Adding flag com.android.server.compat.system_uid_target_system_sdk to trunk_staging.
3a1dc151 : Adding flag com.android.window.flags.enable_desktop_windowing_app_handle_education to trunk_staging.
9731265c : Adding flag com.android.bluetooth.flags.adm_verify_active_fallback_device to trunk_staging.
20824fb6 : Adding flag android.nfc.nfc_oem_extension to trunk_staging.
4972b067 : Advance Kernel to Build: 12476354 in 24Q4 tangorpro
abd28904 : Advance Kernel to Build: 12476354 in 24Q4 raviole
c5dca9af : Advance Kernel to Build: 12476354 in 24Q4 bluejay
5b627d6c : Advance Kernel to Build: 12476354 in 24Q4 lynx
28e86c95 : Advance Kernel to Build: 12476354 in 24Q4 pantah
8145d63b : Advance Kernel to Build: 12476354 in 24Q4 felix
3ed6d98e : Adding flag android.permission.flags.delay_uid_state_changes_from_capability_updates to trunk_staging.
2fb2d0b7 : Advance Kernel to Build: 12484404 (6.1) in trunk_staging Akita
4aa6d05e : Advance Kernel to Build: 12484404 (6.1) in trunk_staging Felix
2501579b : Advance Kernel to Build: 12484404 (6.1) in trunk_staging Tangorpro
a08c58a2 : Advance Kernel to Build: 12484404 (6.1) in trunk_staging Lynx
250c7da5 : Advance Kernel to Build: 12484404 (6.1) in trunk_staging Bluejay
bd501681 : Removing flag android.service.notification.notification_classification from trunk_staging.
47bdb4b2 : Adding flag android.view.accessibility.tri_state_checked to trunk_staging.
d039f583 : Advance Kernel to Build: 12482486 in trunk_staging Comet
ce7028f4 : Adding flag android.os.network_time_uses_shared_memory to trunk_staging.
d9b1488a : Advance Kernel to Build: 12482486 in trunk_staging Caimito
4cfe004f : Adding flag android.multiuser.invalidate_cache_on_users_changed_read_only to trunk_staging.
e1d81141 : Removing flag com.android.systemui.keyguard_wm_state_refactor from trunk_staging.
0c75d973 : Adding flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn to trunk_staging.
cdc64f51 : Advance Kernel to Build: 12481213 (6.1) in trunk_staging Oriole and Raven
d021007d : Adding flag android.service.notification.notification_classification to trunk_staging.
b88c7c5d : Removing flag com.android.server.am.defer_display_events_when_frozen from trunk_staging.
82c4fad1 : Removing flag android.service.notification.notification_force_grouping from trunk_staging.
062220d7 : Adding flag com.android.server.notification.notification_test to trunk_staging.
c8641ea1 : Adding flag com.android.healthfitness.flags.logcat_censor_iae to trunk_staging.
ec938f71 : Adding flag com.android.sdksandbox.flags.sdk_sandbox_verify_sdk_dex_files to trunk_staging.
6eecacd5 : Removing flag android.nfc.nfc_oem_extension from trunk_staging.
d0286244 : Adding flag com.android.hardware.input.manage_key_gestures to trunk_staging.
84379106 : Removing flag com.android.server.display.feature.flags.virtual_display_limit from trunk_staging.
1add5fc0 : Removing flag com.android.server.power.hint.reset_on_fork_enabled from trunk_staging.
41f12b87 : Adding flag android.companion.virtualdevice.flags.display_power_manager_apis to trunk_staging.
c4f0be4c : Removing flag com.android.window.flags.scrolling_from_letterbox from trunk_staging.
ac6c5b76 : Adding flag android.view.accessibility.warning_use_default_dialog_type to trunk_staging.
0544001e : Revert^2 "Adding flag com.android.server.power.hint.reset_on_fork_enabled to trunk_staging."
c34b7541 : Revert "Adding flag com.android.window.flags.scrolling_from_letterbox to trunk_staging."
22bb146f : Removing flag libgooglecamerahal.flags.zsl_video_denoise_in_hwl from trunk_staging.
dbaf57e3 : Adding flag com.android.input.flags.enable_per_device_input_latency_metrics to trunk_staging.
a68f9bf5 : Removing flag android.companion.virtualdevice.flags.display_power_manager_apis from trunk_staging.
14447311 : Adding flag libgooglecamerahal.flags.disable_capture_request_timeout to trunk_staging.
6daffbbe : Removing flag android.app.supervision.flags.supervision_api from trunk_staging.
a0d637da : Removing flag android.app.supervision.flags.supervision_api from trunk_staging.
fea3038b : Adding flag com.android.bluetooth.flags.dont_send_hid_set_idle to trunk_staging.
77bf6c61 : Adding flag com.android.healthfitness.flags.permission_metrics to trunk_staging.
c6a78b54 : Adding flag com.android.healthfitness.flags.ecosystem_metrics_db_changes to trunk_staging.
3f16157e : Adding flag com.android.nfc.flags.allow_multiple_hce_bindings to trunk_staging.
e00d5489 : Adding flag com.android.nfc.flags.post_callbacks to trunk_staging.
fad82f17 : Adding flag android.nfc.nfc_oem_extension to trunk_staging.
659796c4 : Adding flag com.android.nfc.flags.ee_aid_select to trunk_staging.
331041d0 : Remove libgooglecamerahal:zsl_video_denoise_in_hwl flag from ap4a
7bdca0bd : Adding flag android.permission.flags.wallet_role_icon_property_enabled to trunk_staging.
4679d3b5 : Adding flag com.android.internal.camera.flags.use_context_attribution_source to trunk_staging.
e2863c9e : Adding flag com.android.server.stats.accumulate_network_stats_since_boot to trunk_staging.
71c037fe : Removing flag com.android.window.flags.universal_resizable_by_default from trunk_staging.
65ff0064 : Advance Kernel to Build: 12474510 in ap4a Caimito
11697be0 : Advance Kernel to Build: 12474510 in ap4a Comet
dda17568 : Advance Kernel to Build: 12476592 (6.1) in trunk_staging Panther and Cheetah
fb12e740 : Adding flag android.chre.flags.reduce_locking_context_hub_transaction_manager to trunk_staging.
f176b10f : Adding flag com.android.systemui.keyguard_wm_state_refactor to trunk_staging.
fcad9d19 : Adding flag com.android.window.flags.scrolling_from_letterbox to trunk_staging.
f358e882 : Advance Kernel to Build: 12476145 (6.1) in trunk_staging Oriole and Raven
793eb793 : Adding flag android.app.admin.flags.dont_write_policy_definition to trunk_staging.
0afb66ee : Adding flag android.crashrecovery.flags.synchronous_reboot_in_rescue_party to trunk_staging.
4ebcbdfa : Adding flag com.android.server.display.feature.flags.virtual_display_limit to trunk_staging.
072c778e : Adding flag com.android.window.flags.universal_resizable_by_default to trunk_staging.
f277bc60 : Adding flag android.tracing.perfetto_wm_dump to trunk_staging.
a307e7fd : Adding flag com.android.window.flags.filter_irrelevant_input_device_change to trunk_staging.
335b3d58 : Adding flag com.android.server.stats.netstats_use_add_entries to trunk_staging.
c7686a2d : Adding flag com.android.text.flags.tts_span_duration to trunk_staging.
8187f100 : Fix namespace and reset RELEASE_CRASHRECOVERY flag
491cccef : Adding flag com.android.server.telecom.flags.resolve_active_bt_routing_and_bt_timing_issue to trunk_staging.
13b2118f : Removing flag com.android.aconfig.flags.enable_only_new_storage from trunk_staging.
cc889226 : Removing flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn from trunk_staging.
e1016cc5 : Adding flag com.android.server.utils.anr_timer_trace to trunk_staging.
1da1afb3 : Revert^5 "Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging"
9c468782 : Adding flag com.android.systemui.notification_content_alpha_optimization to trunk_staging.
d02257c1 : Adding flag com.android.internal.telephony.flags.async_init_carrier_privileges_tracker to trunk_staging.
c3201803 : Advance Kernel to Build: 12469715 (6.1) in trunk_staging Panther and Cheetah
06397355 : Advance Kernel to Build: 12469346 (6.1) in trunk_staging Oriole and Raven
b6e7ef44 : Adding flag com.android.aconfig.flags.enable_only_new_storage to trunk_staging.
8dc55bd7 : Removing flag android.view.inputmethod.refactor_insets_controller from trunk_staging.
600d1af9 : Adding flag com.android.internal.telephony.flags.ignore_carrierid_reset_for_sim_removal to trunk_staging.
b24d6662 : Adding flag android.view.inputmethod.refactor_insets_controller to trunk_staging.
2b32871c : Adding flag android.companion.virtualdevice.flags.display_power_manager_apis to trunk_staging.
01438853 : Adding flag android.security.keystore2.use_blob_state_column to trunk_staging.
3263c7d5 : Advance Kernel to Build: 12465044 in trunk_staging Comet
2cb4f738 : Advance Kernel to Build: 12465044 in trunk_staging Caimito
bf5ccd15 : Adding flag android.view.accessibility.support_multiple_labeledby to trunk_staging.
0b283662 : Adding flag com.android.internal.telephony.flags.support_carrier_services_for_hsum to trunk_staging.
661c52d1 : Adding flag android.webkit.update_service_ipc_wrapper to trunk_staging.
3ee2cdb9 : Advance Kernel to Build: 12465044 (6.1) in trunk_staging Felix
7466af3b : Advance Kernel to Build: 12465044 (6.1) in trunk_staging Bluejay
aa4fee11 : Advance Kernel to Build: 12465044 (6.1) in trunk_staging Lynx
5dd9cde8 : Advance Kernel to Build: 12465044 (6.1) in trunk_staging Tangorpro
0bb5cdc4 : Adding flag android.chre.flags.efw_xport_rewind_on_error to trunk_staging.
41b424e3 : Move AP4A mainline flags.
a5f990b1 : Removing flag com.android.systemui.clipboard_use_description_mimetype from trunk_staging.
771cefca : Adding flag android.media.codec.codec_buffer_state_cleanup to trunk_staging.
61124dd6 : Removing flag com.android.icu.icu74 from trunk_staging.
7f51bf2d : Adding flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn to trunk_staging.
51e67418 : Adding flag com.android.bluetooth.flags.signal_connecting_on_focus_gain to trunk_staging.
f41dc6dd : Advance Kernel to Build: 12461277 (6.1) in trunk_staging Panther and Cheetah
9cb23199 : Advance Kernel to Build: 12461277 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
03f6989c : Advance Kernel to Build: 12461277 (6.1) in trunk_staging Oriole and Raven
9eae63cd : Adding flag android.service.notification.notification_force_grouping to trunk_staging.
bb4392d0 : Revert^2 "Adding flag com.android.aconfig.flags.enable_only_new_storage to trunk_staging."
067ec5e8 : Adding flag android.companion.virtualdevice.flags.camera_multiple_input_streams to trunk_staging.
436a9e96 : Adding flag com.android.systemui.clipboard_use_description_mimetype to trunk_staging.
9e7f9682 : Advance Kernel to Build: 12462565 in ap4a Comet
f12519a6 : Advance Kernel to Build: 12462565 in ap4a Caimito
24654460 : Adding flag com.android.bluetooth.flags.leaudio_sort_scans_to_sync_by_fails to trunk_staging.
3d7443ec : Removing flag com.android.aconfig.flags.enable_only_new_storage from trunk_staging.
b73dd777 : Adding flag com.android.bluetooth.flags.leaudio_broadcast_resync_helper to trunk_staging.
616f38c6 : Removing flag android.companion.virtualdevice.flags.display_power_manager_apis from trunk_staging.
162b451b : Removing flag com.android.ipsec.flags.liveness_check_api from trunk_staging.
3a3b2714 : Adding flag com.android.bluetooth.flags.get_name_and_address_as_callback to trunk_staging.
8ce577ae : Adding flag com.android.aconfig.flags.enable_only_new_storage to trunk_staging.
57fd51f9 : Revert "Advance Kernel to Build: 12449905 (6.1) in trunk_staging..."
f0d2cce8 : Removing flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn from trunk_staging.
ee0a9304 : Adding flag com.android.systemui.communal_widget_resizing to trunk_staging.
93e45fe2 : Removing flag com.android.systemui.notification_content_alpha_optimization from trunk_staging.
97a9e6a5 : Adding flag com.android.window.flags.predictive_back_priority_system_navigation_observer to trunk_staging.
eec361c1 : Adding flag android.location.flags.use_legacy_ntp_time to trunk_staging.
45ae33e3 : Adding flag android.nfc.nfc_state_change_security_log_event_enabled to trunk_staging.
868beeb4 : Removing flag android.media.codec.secure_codecs_require_crypto from trunk_staging.
de514f2e : Adding flag com.android.intentresolver.individual_metadata_title_read to trunk_staging.
bced5cee : Adding flag android.hardware.usb.flags.enable_usb_data_signal_staking_internal to trunk_staging.
2ff015cc : Adding flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn to trunk_staging.
af3edfac : Add flag com.android.systemui.media_controls_posts_optimization to AP4A
b037cb8f : Adding flag com.android.bluetooth.flags.fix_hfp_qual_1_9 to trunk_staging.
9317b707 : Adding flag com.android.bluetooth.flags.bta_dm_defer_device_discovery_state_change_until_rnr_complete to trunk_staging.
6e34e01b : Add release_config_contributions module
cf524266 : Advance Kernel to Build: 12449905 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
87abcf89 : Advance Kernel to Build: 12453943 in trunk_staging Caimito
30616148 : Advance Kernel to Build: 12453943 in trunk_staging Comet
99602d90 : Adding flag com.android.server.am.logcat_longer_timeout to trunk_staging.
bab0ac21 : Adding flag com.android.settings.keyboard.keyboard_and_touchpad_a11y_new_page_enabled to trunk_staging.
1af36ba8 : Adding flag com.android.server.display.feature.flags.block_autobrightness_changes_on_stylus_usage to trunk_staging.
d798f413 : Adding flag android.multiuser.show_custom_unlock_title_inside_private_profile to trunk_staging.
d5e39d37 : Revert "Removing flag com.android.systemui.keyguard_wm_state_refactor from trunk_staging."
2a450e7e : Adding flag android.companion.virtualdevice.flags.enable_limited_vdm_role to trunk_staging.
5bca36ec : Revert "Adding flag com.android.server.power.hint.reset_on_fork_enabled to trunk_staging."
0c55f229 : Adding flag android.content.pm.improve_install_freeze to trunk_staging.
de8d28bb : Adding flag com.android.bluetooth.flags.snoop_logger_tracing to trunk_staging.
5cf260ae : Advance Kernel to Build: 12452533 (6.1) in trunk_staging Bluejay
e77e28dd : Advance Kernel to Build: 12452196 (6.1) in trunk_staging Akita
221e2a89 : Advance Kernel to Build: 12452196 (6.1) in trunk_staging Felix
acde946c : Advance Kernel to Build: 12452533 (6.1) in trunk_staging Tangorpro
c5134c7e : Advance Kernel to Build: 12452196 (6.1) in trunk_staging Lynx
1157573c : Adding flag android.view.accessibility.a11y_selection_api to trunk_staging.
c7e40514 : Adding flag android.view.accessibility.a11y_expansion_state_api to trunk_staging.
c459eecf : Adding flag com.android.server.power.hint.reset_on_fork_enabled to trunk_staging.
dd5eab70 : Adding flag com.android.server.am.reset_on_fork_enabled to trunk_staging.
24222f1b : Removing flag com.android.systemui.clipboard_use_description_mimetype from trunk_staging.
7564df26 : Adding flag com.android.bluetooth.flags.read_le_appearance to trunk_staging.
460766a4 : Adding flag android.media.codec.secure_codecs_require_crypto to trunk_staging.
359228ab : Enable flags for Caimito developer option in trunk staging
96633afe : Add flags for Caimito developer option
f960165d : Revert "Enable RELEASE_HIDDEN_API_EXPORTABLE_STUBS in trunk_staging"
5ce094de : Adding flag com.android.bluetooth.flags.l2cap_tx_complete_cb_info to trunk_staging.
ca0b2fb9 : Adding flag android.os.profiling.persist_queue to trunk_staging.
3c96673e : Adding flag com.android.bluetooth.flags.rfcomm_always_disc_initiator_in_disc_wait_ua to trunk_staging.
76a01716 : Adding flag android.content.res.dimension_frro to trunk_staging.
77c19109 : Introduce RELEASE_INSTALL_APEX_SYSTEMSERVER_DEXPREOPT_SAME_PARTITION
82471053 : Removing flag com.android.systemui.keyguard_wm_state_refactor from trunk_staging.
8bca91e8 : Adding flag com.android.text.flags.handwriting_unsupported_show_soft_input_fix to trunk_staging.
7995285a : build_flag: Add flag for NFC mainline module release
a7c45def : Adding flag com.android.systemui.clipboard_use_description_mimetype to trunk_staging.
d215fca9 : Removing flag com.example.android.aconfig.demo.flags.test_mendel_gantry_disintegration_again from trunk_staging.
4ee34076 : Advance Kernel to Build: 12449905 (6.1) in trunk_staging Panther and Cheetah
b90f8fe0 : Advance Kernel to Build: 12449905 (6.1) in trunk_staging Oriole and Raven
2453a616 : Adding flag com.example.android.aconfig.demo.flags.test_mendel_gantry_disintegration_again to trunk_staging.
461a3a04 : Adding flag android.app.supervision.flags.supervision_api to trunk_staging.
6a3118f7 : Adding flag com.android.window.flags.enable_resizing_metrics to trunk_staging.
1be3fa72 : Adding flag com.android.healthfitness.flags.health_connect_mappings to trunk_staging.
22bc01b1 : Adding flag android.provider.flags.new_storage_public_api to trunk_staging.
bbadc563 : Adding flag com.android.server.power.feature.flags.per_display_wake_by_touch to trunk_staging.
0c1c48f3 : Removing flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn from trunk_staging.
2b2ae628 : Adding flag com.android.systemui.media_load_metadata_via_media_data_loader to AP4A
7a1e2ffc : Adding flag com.android.launcher3.coordinate_workspace_scale to trunk_staging.
f95b121f : Adding flag com.android.internal.telephony.flags.oem_paid_private to trunk_staging.
f3b2390f : Advance Kernel to Build: 12443579 in ap4a Comet
1e55736e : Advance Kernel to Build: 12443579 in ap4a Caimito
1f1f5900 : Advance Kernel to Build: 12442519 in ap4a Akita
00a8c479 : Advance Kernel to Build: 12442519 in 24Q4 shusky
887619c8 : Adding flag com.android.input.flags.rotary_input_telemetry to trunk_staging.
7e3b2256 : Enable RELEASE_HIDDEN_API_EXPORTABLE_STUBS in trunk_staging
a9021322 : Adding flag android.app.contextualsearch.flags.enable_token_refresh to trunk_staging.
3bb53282 : Add release_config_contributions module
5f9543a4 : Removing flag com.android.server.power.feature.flags.per_display_wake_by_touch from trunk_staging.
dd22d746 : Adding flag android.media.tv.flags.tif_unbind_inactive_tis to trunk_staging.
748517da : Adding flag com.android.internal.telephony.flags.carrier_roaming_nb_iot_ntn to trunk_staging.
185dd41f : Removing flag com.example.android.aconfig.demo.flags.test_mendel_gantry_disintegration_again from trunk_staging.
0be53419 : Adding flag com.example.android.aconfig.demo.flags.test_mendel_gantry_disintegration_again to trunk_staging.
0976c95c : Adding flag com.android.bluetooth.flags.get_profile_use_lock to trunk_staging.
166a8b36 : Adding flag android.content.pm.verification_service to trunk_staging.
0089ea4a : Add build flag for fingerprint.
1ac0c211 : Adding flag android.security.prevent_intent_redirect to trunk_staging.
ab53bfd8 : Adding flag com.android.bluetooth.flags.gatt_client_dynamic_allocation to trunk_staging.
dd64a6be : Removing flag com.android.aconfig.flags.enable_only_new_storage from trunk_staging.
99c3b9b8 : Adding flag com.android.media.audio.stereo_spatialization to trunk_staging.
7d68e9de : Adding flag com.android.graphics.hwui.flags.query_global_priority to trunk_staging.
a643d942 : Move ap3a configuration to build/release.
993b25e3 : Revert "Adding flag com.android.aconfig.flags.enable_only_new_storage to trunk_staging."
7f00535f : Adding flag com.android.window.flags.touch_pass_through_opt_in to trunk_staging.
353f3ee3 : Adding flag com.android.server.notification.use_ssm_user_switch_signal to trunk_staging.
f0624328 : Adding flag com.android.server.am.defer_display_events_when_frozen to trunk_staging.
bca207fa : Advance Kernel to Build: 12442713 (6.1) in trunk_staging Panther and Cheetah
3b13f6f1 : Adding flag com.android.healthfitness.flags.add_missing_access_logs to trunk_staging.
c50e7145 : Adding flag android.app.modes_ui_empty_shade to trunk_staging.
8e5bc2d8 : Advance Kernel to Build: 12442713 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
915a339c : Advance Kernel to Build: 12442713 (6.1) in trunk_staging Oriole and Raven
f8e65e11 : Adding flag com.android.systemui.screenshot_multidisplay_focus_change to trunk_staging.
759b912e : Enable RELEASE_SUPERVISION_SERVICE in trunk_staging
9211a31c : Adding flag com.android.window.flags.offload_color_extraction to trunk_staging.
d2b8740a : Revert^4 "Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging"
db97f2fb : Adding flag com.android.server.power.feature.flags.per_display_wake_by_touch to trunk_staging.
17ba059b : Adding flag com.android.adservices.flags.sdksandbox_use_effective_target_sdk_version_for_restrictions to trunk_staging.
b8ee337a : Adding flag com.android.launcher3.enable_large_desktop_windowing_tile to trunk_staging.
e00e1def : Removing flag com.android.media.audio.stereo_spatialization from trunk_staging.
a3c08741 : Advance Kernel to Build: 12440080 in trunk_staging Comet
52139450 : Move ap3a configuration to build/release.
238b9ae9 : Advance Kernel to Build: 12440080 in trunk_staging Caimito
4de52233 : Adding flag com.android.text.flags.handwriting_gesture_with_transformation to trunk_staging.
87c0b95f : Trunk_staging: Set SPL to 2024-12-05
7d35ec2f : AP4A: Set SPL to 2024-12-05
2c85712d : Adding flag com.android.graphics.libgui.flags.wb_consumer_base_owns_bq to trunk_staging.
a7077b16 : Adding flag com.android.intentresolver.keyboard_navigation_fix to trunk_staging.
64d5a2a1 : Adding flag android.hardware.usb.flags.enable_usb_sysfs_midi_identification to trunk_staging.
0c0f71ce : Revert^2 "[Ranging] Enable ranging stack in trunk_staging"
3ce9d050 : Removing flag com.android.launcher3.enable_refactor_task_thumbnail from trunk_staging.
0f1e9e14 : Adding flag com.android.systemui.media_controls_umo_inflation_in_background to trunk_staging.
3e5b8b56 : Adding flag com.android.bluetooth.flags.sec_dont_clear_keys_on_encryption_err to trunk_staging.
5db21c28 : Revert^3 "Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging"
3a2196fa : Adding flag com.android.launcher3.force_monochrome_app_icons to trunk_staging.
38b9f251 : Adding flag com.android.server.notification.notification_verify_channel_sound_uri to trunk_staging.
5a936ac4 : Adding flag com.android.bluetooth.flags.btsec_check_valid_discovery_database to trunk_staging.
872f82d5 : Advance Kernel to Build: 12434637 in trunk_staging Caimito
a64945d0 : Adding flag android.content.pm.delete_packages_silently_backport to trunk_staging.
1ea3e18d : Adding flag android.appwidget.flags.security_policy_interact_across_users to trunk_staging.
0ef7c78a : Advance Kernel to Build: 12433216 (6.1) in trunk_staging Bluejay
9fcb1c25 : Advance Kernel to Build: 12433216 (6.1) in trunk_staging Felix
5feeb329 : Advance Kernel to Build: 12433216 (6.1) in trunk_staging Akita
0e85c2a0 : Advance Kernel to Build: 12433216 (6.1) in trunk_staging Lynx
70c8e0dc : Advance Kernel to Build: 12433216 (6.1) in trunk_staging Tangorpro
46d5ec03 : Adding flag com.android.window.flags.enable_desktop_windowing_app_to_web to trunk_staging.
b3e20e0a : Adding flag android.os.vibrator.haptic_feedback_input_source_customization_enabled to trunk_staging.
1bf6f496 : Adding flag com.android.server.locksettings.wait_for_internet_ror to trunk_staging.
9f9fa407 : Adding flag com.android.window.flags.enable_fully_immersive_in_desktop to trunk_staging.
7b8ba6ad : Adding flag android.app.fix_wallpaper_changed to trunk_staging.
3c4428b9 : Adding flag com.android.server.flags.pin_global_quota to trunk_staging.
3924bd29 : Adding flag com.android.bluetooth.flags.leaudio_big_depends_on_audio_state to trunk_staging.
062518c5 : Adding flag android.os.adpf_use_fmq_channel_fixed to trunk_staging.
0694c407 : Advance Kernel to Build: 12430953 (6.1) in trunk_staging Panther and Cheetah
f0901c65 : Adding flag com.android.systemui.keyguard_wm_state_refactor to trunk_staging.
ac3f6be7 : Advance Kernel to Build: 12430953 (6.1) in trunk_staging Oriole and Raven
dd35bd54 : Adding flag com.android.aconfig.flags.enable_only_new_storage to trunk_staging.
1f5c241e : Adding flag com.android.hardware.input.use_key_gesture_event_handler to trunk_staging.
f50f8207 : Removing flag com.android.server.power.feature.flags.per_display_wake_by_touch from trunk_staging.
1abc4343 : Removing flag com.android.systemui.update_corner_radius_on_display_changed from trunk_staging.
0b58f6b5 : Adding flag com.android.window.flags.wlinfo_oncreate to trunk_staging.
cf8f1969 : Advance Kernel to Build: 12428401 in trunk_staging Caimito
8df234ce : Advance Kernel to Build: 12428401 (6.1) in trunk_staging Bluejay
0df9ca00 : Advance Kernel to Build: 12428401 (6.1) in trunk_staging Akita
cc0ca491 : Advance Kernel to Build: 12428401 (6.1) in trunk_staging Tangorpro
5bdec3f2 : Advance Kernel to Build: 12428401 (6.1) in trunk_staging Felix
9a16f771 : Advance Kernel to Build: 12428401 (6.1) in trunk_staging Lynx
2180f245 : Adding flag com.android.net.thread.flags.epskc_enabled to trunk_staging.
41d1b231 : Adding flag com.android.systemui.update_corner_radius_on_display_changed to trunk_staging.
13db1f30 : Regenerate Android.bp files from the Telescope service
474e9f5a : Adding flag com.android.internal.telephony.flags.hsum_package_manager to trunk_staging.
bcd2884f : Adding flag android.server.remove_game_manager_service_from_wear to trunk_staging.
ea46bd5f : Adding flag android.media.tv.flags.kids_mode_tvdb_sharing to trunk_staging.
297a44f5 : Adding flag com.android.launcher3.enable_active_gesture_proto_log to trunk_staging.
4ceb34f3 : Adding flag android.app.remove_next_wallpaper_component to trunk_staging.
3cbed435 : Adding flag com.android.server.power.feature.flags.per_display_wake_by_touch to trunk_staging.
dac82d43 : Adding flag android.security.asm_reintroduce_grace_period to trunk_staging.
8a8762f3 : Removing flag com.android.internal.telephony.flags.hsum_package_manager from trunk_staging.
252525fa : Revert "[Ranging] Enable ranging stack in trunk_staging"
2c8be7ce : Adding flag com.android.internal.telephony.flags.hsum_package_manager to trunk_staging.
4687cf1a : Adding flag com.android.internal.telephony.flags.dds_callback to trunk_staging.
0e11a295 : Advance Kernel to Build: 12421218 (6.1) in trunk_staging Akita
71e55f1e : Advance Kernel to Build: 12421218 (6.1) in trunk_staging Tangorpro
f473271d : Advance Kernel to Build: 12421218 (6.1) in trunk_staging Felix
462d87b6 : Advance Kernel to Build: 12421218 (6.1) in trunk_staging Bluejay
085f9198 : Adding flag android.app.app_start_info_component to trunk_staging.
85419005 : Adding flag com.android.launcher3.enable_multi_instance_menu_taskbar to RELEASE_CONFIG_TRUNKFOOD_STAGING.
105de483 : Adding flag com.android.internal.camera.flags.enable_stream_reconfiguration_for_unchanged_streams to trunk_staging.
9da251fc : Adding flag android.app.notification_no_custom_view_conversations to RELEASE_CONFIG_TRUNKFOOD_STAGING.
cf185d51 : Removing flag android.app.modes_ui_empty_shade from RELEASE_CONFIG_TRUNKFOOD_STAGING.
6f9746b7 : Adding flag android.content.pm.remove_cross_user_permission_hack to RELEASE_CONFIG_TRUNKFOOD_STAGING.
c9630cbf : Adding flag com.android.media.editing.flags.stagefrightrecorder_enable_b_frames to RELEASE_CONFIG_TRUNKFOOD_STAGING.
8014c2c3 : Advance Kernel to Build: 12409676 (6.1) in trunk_staging Lynx
862faf04 : Advance Kernel to Build: 12410883 in trunk_staging Comet
1c976f5d : Advance Kernel to Build: 12410883 in trunk_staging Caimito
c03fe03a : Advance Kernel to Build: 12409676 (6.1) in trunk_staging Tangorpro
199d1b99 : Adding flag com.android.aconfig_new_storage.support_immediate_local_overrides to RELEASE_CONFIG_TRUNKFOOD_STAGING.
64ba3696 : Advance Kernel to Build: 12414938 (6.1) in trunk_staging Felix
31409a8d : Advance Kernel to Build: 12418065 (6.1) in trunk_staging Shiba, Husky and Ripcurrent
a6e2bda6 : Advance Kernel to Build: 12414938 (6.1) in trunk_staging Bluejay
6691f784 : Advance Kernel to Build: 12414938 (6.1) in trunk_staging Akita
e91d5046 : Adding flag android.app.modes_ui_empty_shade to RELEASE_CONFIG_TRUNKFOOD_STAGING.
ff6d5bd6 : Adding flag android.crashrecovery.flags.refactor_crashrecovery to RELEASE_CONFIG_TRUNKFOOD_STAGING.
ab8e408a : Adding flag com.android.systemui.notifications_background_icons to RELEASE_CONFIG_TRUNKFOOD_STAGING.
6df6dccf : Declare build flag to set default version code per release config.
19c0f6eb : Revert^2 "Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging"
cccffb05 : Adding flag com.android.window.flags.enter_desktop_by_default_on_freeform_displays to RELEASE_CONFIG_TRUNKFOOD_STAGING.
cd167d9f : Removing flag android.tracing.perfetto_wm_dump from RELEASE_CONFIG_TRUNKFOOD_STAGING.
38fcedfb : Adding flag com.android.server.net.never_apply_rules_to_core_uids to RELEASE_CONFIG_TRUNKFOOD_STAGING.
71e0f74a : Adding flag android.media.tv.flags.hdmi_control_enhanced_behavior to RELEASE_CONFIG_TRUNKFOOD_STAGING.
abfc47d6 : Adding flag android.provider.a11y_standalone_gesture_enabled to RELEASE_CONFIG_TRUNKFOOD_STAGING.
00ad78e2 : creating final snapshot of ap4a
16712f8f : Adding flag com.android.settings.flags.active_unlock_finish_parent to RELEASE_CONFIG_TRUNKFOOD_STAGING.
f81f5d8b : Adding flag com.android.internal.os.application_shared_memory_enabled to RELEASE_CONFIG_TRUNKFOOD_STAGING.
543b77d2 : Adding flag com.android.healthfitness.flags.ecosystem_metrics to RELEASE_CONFIG_TRUNKFOOD_STAGING.
1d00489c : Adding flag com.android.providers.settings.notify_individual_aconfig_sysprop_changed to RELEASE_CONFIG_TRUNKFOOD_STAGING.
c94ea86f : Adding flag com.android.window.flags.common_surface_animator to RELEASE_CONFIG_TRUNKFOOD_STAGING.
f3a91170 : Revert "Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging"
cabcc706 : Adding flag com.android.settings.flags.catalyst_legal_information to RELEASE_CONFIG_TRUNKFOOD_STAGING.
ccb09958 : Removing flag com.android.window.flags.scrolling_from_letterbox from RELEASE_CONFIG_TRUNKFOOD_STAGING.
11694bde : Adding flag com.android.settings.flags.catalyst_firmware_version to RELEASE_CONFIG_TRUNKFOOD_STAGING.
3a0c458c : Adding flag com.android.window.flags.remove_activity_starter_dream_callback to RELEASE_CONFIG_TRUNKFOOD_STAGING.
4989aafd : Removing flag android.hardware.usb.flags.enable_usb_sysfs_midi_identification from RELEASE_CONFIG_TRUNKFOOD_STAGING.
229e67bc : Adding flag android.appwidget.flags.remote_document_support to RELEASE_CONFIG_TRUNKFOOD_STAGING.
45f58cd4 : Adding flag com.android.server.telecom.flags.remap_transactional_capabilities to RELEASE_CONFIG_TRUNKFOOD_STAGING.
d367ed9d : Adding flag com.android.server.telecom.flags.csw_service_interface_is_null to RELEASE_CONFIG_TRUNKFOOD_STAGING.
96a738a5 : Adding flag com.android.text.flags.handwriting_track_disabled to RELEASE_CONFIG_TRUNKFOOD_STAGING.
304cb16f : Advance Kernel to Build: 12407776 (6.1) in trunk_staging Panther and Cheetah
19f6a6c3 : Adding flag com.android.providers.media.flags.version_lockdown to RELEASE_CONFIG_TRUNKFOOD_STAGING.
7b604280 : Advance Kernel to Build: 12407776 (6.1) in trunk_staging Oriole and Raven
595201c1 : Adding flag com.android.launcher3.enable_refactor_task_thumbnail to RELEASE_CONFIG_TRUNKFOOD_STAGING.
32f8ae00 : Adding flag com.android.adservices.flags.sdksandbox_dump_effective_target_sdk_version to RELEASE_CONFIG_TRUNKFOOD_STAGING.
748c2c11 : Adding flag com.android.systemui.media_projection_dialog_behind_lockscreen to RELEASE_CONFIG_TRUNKFOOD_STAGING.
738acd03 : Adding flag android.app.use_sticky_bcast_cache to RELEASE_CONFIG_TRUNKFOOD_STAGING.
1443c367 : Advance Kernel to Build: 12404474 (6.1) in trunk_staging Akita
1ca40b71 : Advance Kernel to Build: 12404474 (6.1) in trunk_staging Lynx
572f31d9 : Advance Kernel to Build: 12404474 (6.1) in trunk_staging Felix
9d0ee21a : Advance Kernel to Build: 12404474 (6.1) in trunk_staging Bluejay
16c6520e : Advance Kernel to Build: 12404474 (6.1) in trunk_staging Tangorpro
75606119 : [Ranging] Enable ranging stack in trunk_staging
b16c027d : Enable RELEASE_USE_OPTIMIZED_RESOURCE_SHRINKING_BY_DEFAULT in trunk_staging
56177984 : Add RELEASE_PLATFORM_SDK_MINOR_VERSION
4d0c8a45 : Adding flag android.media.soundtrigger.generic_model_api to RELEASE_CONFIG_TRUNKFOOD_STAGING.
f7c52f22 : Update flag value to use speed profile in trunk staging
50fec1b5 : Add build flag to use speed-profile in SystemUI

+- Project: platform/build/soong

a5c7daa54 : Bugfix for LOCAL_CERTIFICATE of AutogeneratedRuntimeResourceOverlay
13d76eefe : Do not emit FlaggedApi annotations in core-current-stubs-for-system-modules-no-annotations.
8222955af : Do not include assets in autogenerated RROs
f3c8ddf76 : Change SuperImageProperties.Block_devices to be a slice of string
b1bfa9d6b : Convert checkStaticLinkingToStubLibraries to use module proxy.
978f453c4 : Remove Base_dir from non system non recovery filesystem
0d467050d : Add selinux contexts to autogenerated partitions
d547ad117 : Disable IdentifierName
ecf667f8b : Remove cross partition modules from provideLibs
79730d4d9 : Add super_image module type and create super image module in fsgen
86fe1dcff : Add variable to aid errorprone update
06c2542c3 : Remove __future__ references from python scripts
e16761211 : Fixes for avb flags in soong-generated partitions
ef8b3b273 : Move autogenerated rro creation to a higher priority load hook
b0aabb1f6 : AddLoadHookWithPriority function in build/soong/android
9007f3893 : Specify dirs and symlinks for recovery partition
45893370a : Add recovery-resources-common-* module to FsGenState
96f38d29e : Remove neverallow restrictions for prebuilt_res module type
be90fc9fa : Change verifyNativeImplementationLibs to use ModuleProxy.
de588a30a : Revert^2 "Use soong built autogenerated RROs"
39acfc8f2 : Fix LOCAL_CERTIFICATE of autogenerated RROs
7e60122de : Change enforcePartitionTagOnApexSystemServerJar and provideApexExportsInfo to use ModuleProxy.
e47ba7b5a : Change enforceAppUpdatability to use ModuleProxy.
e8ef6f154 : Revert "Use soong built autogenerated RROs"
3e7309726 : Add erofs compressor information to soong-generated partitions
d40d90fba : Revert^2 "Support auto-generating prebuilt_* modules for recovery partition"
63bdf6337 : Change checkJavaStableSdkVersion to use ModuleProxy.
320ca7c7d : Introduce additional prebuilt_* module types
a32373316 : Revert "Support auto-generating prebuilt_* modules for recovery ..."
7195b065b : Add property for incremental nsjail genrules
ef56908c7 : Use soong built autogenerated RROs
10c4136b1 : Reland "Skip packaging cross container cc deps of apk-in-apex"
d7556eb83 : Support long commands in RuleBuilder
7e44feb5f : Support auto-generating prebuilt_* modules for recovery partition
d546507c3 : Support auto gen module type matching in neverallow
3ca07a1e4 : Introduce prebuilt_vendor module type
3216c986a : Auto generate recovery partition
430bb2e2c : Revert "Skip packaging cross container cc deps of apk-in-apex"
45e4001f7 : Move withNativeBridgeEnabled to build/soong/android
12887baab : Add RELEASE_ACONFIG_REQUIRE_ALL_READ_ONLY build flag.
2b4bf4c18 : Generate fsv meta only when PRODUCT_FSVERITY_GENERATE_METADATA is true
b1c0035d0 : Remove version script hack for libclang_rt.* libs
71be42d93 : Automatically add system and system_ext autogen RRO to vendor/product
6e0c11049 : Skip packaging cross container cc deps of apk-in-apex
cc36feb02 : Update aconfig storage generation function
de4f1bff7 : Fix -Wimplicit-int-float-conversion treewide
785fbd539 : Don't list build variant release configs
38aed8dec : Update arm64 trusty test project target name
72dd6fc97 : Specify options_file in vendor_ramdisk prebuilt_kernel module
c16c91925 : Add options_file support for prebuilt_kernel_modules
23be5bb23 : Reland^2 : "Do not allow vintf_fragments for modules installed in the filesystem"
f6b5e8fbb : Specify correct install location for vendor_ramdisk prebuilt kernel module
a8fa07111 : Create prebuilt kernel module for vendor_ramdisk partition
1d4e76c0f : Add missing root dirs/symlinks to soong-built system image
30bf8f6c5 : Add vendor_ramdisk support in PartitionTag
55b46b009 : Enable dex container (DEX v41) for the main branch
43a52c74a : Disable sparse filesystems
4004cc6ae : Specify bootconfig property in vendor_boot.img
9ef6edba2 : Allow overrides for TARGET_BUILD_VARIANT
e2f98da57 : Introduce module type to autogenerate RROS
7436834a0 : Adding "-Werror=dangling" on Android builds
6d9896746 : Ignore missing deps of module sdk during soong analysis
e545e64ec : Do not propagate transitive required deps of apexes
6dbff0392 : Revert "Reland: Do not allow vintf_fragments for modules installed in the filesystem"
92729e1a3 : Delay error on version_script, etc. properties on Darwin
2f3791f8c : Remove BuildWithUnescapedNinjaVars
c117f933b : Salt init_boot and vendor_boot using the build number
a8fb73be3 : apex: Do not compress EROFS APEX
065401609 : Upgrade clang to clang-r536225
b95180730 : Add apex_test.skip_validations
438082094 : Allow librecovery_ui_ext to set InstallInRoot() to true.
33a291c0d : Ignore missing apex boot jars if it does not exist
0c852084c : Suppress warnings while testing a new compiler
4297f40b6 : Remove ApexesProvidingSharedLibs
2dcbca69b : Replace InAnyApex with checking apex_available
7db05751d : Fix system image file diffs
1821b5ee2 : Specify cmdline property in autogen (vendor_)boot.img
761019cf3 : Soong: libbpf_prog: Define ENABLE_LIBBPF
4143ee819 : Add support for selects on string lists
70c1c68d9 : Specify dtb_prebuilt property in the autogen vendor_boot.img
0c4b415f1 : Fix remaining diffs for init_boot and boot images
aed2e74b6 : replace aconfig_storage_reader_java to aconfig_storage_stub
f1e0ff0f8 : Ensure that --revert-annotation flag is not generated for everything stubs
458fde5f9 : Collect aconfig files from *_libs in java_api_library
336b3ba20 : Avb sign boot images
cdeb7bbd4 : Flush verbose log every second
f69781731 : Remove InApexModules
df17e7b6c : Reland: Do not allow vintf_fragments for modules installed in the filesystem
83815b382 : Soong: libbpf_prog: change output filetype to .bpf
26bdac55e : Add partition size check to boot partitions
96fdba92e : Introduce boot_image_type property in bootimg module
95eb1dace : Auto generate init_boot image
1c9c335a0 : Add --os_version and --os_patch_level to bootimg
68382198f : Remove base_dir property on ramdisk filesystems
bb674a197 : Remove ApexContents
467b51d97 : Remove DirectlyInAnyApex
ff6559dcb : Use !NotInPlatform instead of !DirectlyInAnyApex for apex required deps
eacbdfcad : Introduce a java_system_features_srcs wrapper for codegen
e3a84fe7d : Revert "Do not allow vintf_fragments for modules installed in the filesystem"
920809ad4 : Use PRODUCT variable for default payload fs type
eb9c14874 : Make the certificate property Configurable
815626597 : Remove METADATA files from implicit dependencies of SBOMs
d098d4482 : Specify dirs property in generated ramdisk filesystem module
24938e2a1 : Auto-generate vendor ramdisk and boot image
6c03c8ee7 : Specify dev nodes list file for ramdisk partition generation
c128dd7e4 : Add `skip_setsid` option to nsjail for sbox.
59ee5d7c8 : Do not attempt to generate prebuilt_kernel_modules if they are not src
e4f348869 : Don't use DirectlyInAnyApex to enable dex and coverage
54de178b7 : Remove unnecessary base_dir property value overrides
5146e78cb : Deny nil outputpaths
f2a6e8bc9 : Build boot.img with soong
84dc3244d : [MTE] do not enable memtag-heap for memtag-globals
0510f5438 : Move LLNDK moved_to_apex into the Android.bp files
386afeb2f : Don't use ctx.directlyInAnyApex for bootstrap libraries
d6aaadc33 : Filter LLNDK ABI by SDK version
22abf219d : LLNDK version maps to NDK version
ed3dbce7c : Rename generic_system_image to aosp_shared_system_image
568530033 : find_input_delta: allow inspection of sharded jars
91d0beea4 : Disable source map id usage in eng builds
953476f26 : Fix missing avb keys
ade584ce0 : Modify highPriorityDepTag to embed PackagingItemAlwaysDepTag
97e21a98b : Add applied_backported_fixes to the system properties.
b90593168 : Implement support for build_prop.Footer_files
1f46a91b5 : build: limit concurrency of updateSymlinks
d3d89ede5 : Do not allow vintf_fragments for modules installed in the filesystem
4e9f5923c : Use fewer OutputPaths
6d4183018 : release_config: Inheritance should honor Redacted
866c60392 : Fix nondeterminism issue in high_priority_deps
7cd1d422c : Don't use echo -e
4ed895d55 : Add ninja determinism test
349073b6e : Add the following directories to the Android.mk denylist
fe46293c7 : [Ravenwood] Generate manifest.properties for Ravenwood tests
0d7b0117b : Add generated prebuilt_* modules to high_priority_deps by default
b5275328b : Use VisitAllModuleVariantProxies in generateModuleTarget. Also rename android.CommonPropertiesProvider to android.CommonModuleInfo.
481a7255d : Fix nondeterminism in chained vbmeta partitions
4169898d8 : Only coverage instrument device modules that are being compiled to dex
4385d3545 : Build ramdisk's build.prop with soong
0c19b699d : Remove AnyVariantDirectlyInAnyApex
6890a004d : find_input_delta: Add jar inspection, metrics generation
85125326a : Enable -Wenum-constexpr-conversion globally
47dadd9d6 : Fix non determinism in prebuilt_* modules generation
4552aeb5b : Revert^2 "Use VisitDirectDepsProxy in GenerateBuildActions and remove the"
04f12c961 : Create avbpubkey module in filesystem_creator
4325cbe40 : Convert avbpubkey installation rule to Soong
f27c3a5aa : Convert absolute source path to relative path in PRODUCT_COPY_FILES processing
c88cff15e : Make ModifyPackagingSpec generic to any filesystem
f0c85e998 : Add bugs field to FlagDeclaration
69725b3aa : Add unit test for prebuilt_* generation from PRODUCT_COPY_FILES
69c65106d : Remove SdkVersion condition from image variants
3c0a042d2 : Add GSI support for android_system_image
ecf76dddd : Define prebuilt_tvconfig module type
986d98cfb : Use VisitDirectDepsProxy in checkStaticExecutables.
ac483e081 : Use VisitDirectDepsProxy in aconfigUpdateAndroidBuildActions, GatherPackagingSpecsWithFilter and checkClasspathFragments.
d3228acdc : Change GetModuleFromPathDep to use ModuleProxy.
3a8759c27 : Support generating prebuilt_* modules from 1:n mapping of PRODUCT_COPY_FILES
0b1b008a3 : Make apex.Bootclasspath_fragments configurable
7103c4ffd : Make Contents and Standalone_contents configurable
76a6e9596 : Build ramdisk with soong
b51cd7100 : Revert "Use VisitDirectDepsProxy in GenerateBuildActions and remove the"
6c1a6354a : Use VisitDirectDepsProxy in GenerateBuildActions and remove the duplicated validation.
2047a4c20 : Remove linkerconfig prop to linker_config
6dfcbdfc5 : Build and install modules.blocklist in prebuilt_kernel_modules
aa2d3f577 : Revert "Revert "cc: use platform sdk version for vendor/product ..."
bee2e21f5 : Don't generate compile db entry for *.o files
5b493cd44 : Autogenerate vendor_dlkm and odm_dlkm
eb426b796 : Make `runDepmod` behave more like `build-image-kernel-modules`
ad402925e : Make `runDepmod` behave more like `build-image-kernel-modules`
912d26bd3 : Add fs_config_files/fs_config_dirs/build_prop to system_dlkm
03ae24043 : Make find_input_delta available
b3d6e7553 : Make chained_partitions accept other vbmeta modules
79170a94c : Remove some uses of ApexContents
323e6f2e3 : Target Java 21 wherever Java 17 was being targeted
8d5cccd0f : Revert "cc: use platform sdk version for vendor/product variants"
b40260dfc : Add find_input_delta
0f1a781e1 : cc: use platform sdk version for vendor/product variants
67118217c : Use common interface to build linker configuration
2598bb967 : Always package NOTICE.xml.gz
23ba87615 : Auto generate userdata.img
f424c9a70 : test_module_config for sh_test
563015765 : Remove --allowlists
1a6291fde : Add f2fs support to Soong filesystem modules
876b8038f : Revert "test_module_config for sh_test"
3552eb6ef : Build vbmeta partitions with soong
50cb131c3 : Revert "Build vbmeta partitions with soong"
88ea9ffcc : Replace FinalModule with IsFinalModule.
1fa1c6db4 : Build vbmeta partitions with soong
32a9557ef : Allow test_suite to nest
ea91a175f : Don't magically use implementations for modules in the same apex
cc27a84f6 : Delete trim_against support
5d52666ba : Remove no longer used test_for functionality
45dca5c8c : test_module_config for sh_test
27d3be035 : Add python library build rule
b9cef3b9a : Migrate build.prop for (system|vendor|odm)_dlkm to soong
2391918de : Revert^2 "Define core-current-stubs-for-system-modules-exportable"
7b25a5196 : 1:1 mapping between prebuilt_kernel_modules and _dlkm filesystem modules
9afc298e4 : Handle several symlinks in system image generation
0c79e26a9 : Implement test_suite.
aa37c16b0 : Create out/soong/packaging directory and put a placeholder manifest file in it.
27ff76790 : Add *_dlkm properties to android.ModuleBase
5b9897759 : Make resource_dirs configurable
a14c27ce2 : Make include_dirs and local_include_dirs configurable
dca2f2bd1 : Define additional prebuilt_* modules
996b52223 : Replace RuleBuilders in android_info module
e5b1bb3bd : Add a phony module for llndk libraries
7cf5d501a : Add sdk_minor to build prop of all partitions built with soong
4cd93b516 : Autogenerate an android_info module for vendor build.prop
c32e04610 : Add android_info module type
ec7043df1 : Collect kythe, obj and tidy files using ModuleProxy.
06689b180 : Enforce adb_keys to be installed in product partition
1d45f88bd : Remove bazel functions from tests/lib.sh
1b9b45077 : Add a location type to describe where to search for a file
81aeb9e27 : Use ModuleName() instead of Module().Name() in collectDepsMutator
add2bb263 : Separate mutators and modules creation into a separate file in fsgen
3e85da88f : Remove magic implementation selection based on test_for
516c545d4 : Add a check that dependencies don't include conflicting explicit versions
b614cd441 : Verify that libraries in apexes don't link to implementations outside the apex
32282f69e : Add boilerplates for test_suite module type to tradefed_modules
ae3e1d3fc : Add vendor and odm to autogenerated bp file
5958625d5 : Add soong config variables of paths of *_dlkm partitions and whether they are built for a product.
675d4688f : Revert^2 "Auto generate prebuilt_* modules from PRODUCT_COPY_FILES"
2e2b7441a : Define additional prebuilt_* module types
fe13786c3 : Delete filegroup test for mixed builds
40be6479d : droidstubs: clarify how to address API lints
37cc2714c : Target Java 21 by default
5e336426d : Autogenerate a system_dlkm android_filesystem
5286b6602 : Add documentation for select statements
4bc45d87d : Make platform_bootclasspath a regular module
c0a746543 : Revert "Auto generate prebuilt_* modules from PRODUCT_COPY_FILES"
fc7a41126 : Add NeverFar() option for transition mutators
76d9446ef : Remove 3 make vars related globals from cc.
c1ded7e7d : Support override_android_(app|apex) in deps of android_filesystem
5524b5b7e : rust-project.json: Set sysroot in generated file
93036a5ff : Add DirectoryPath for nsjail genrule
83f135b10 : Clean up filesystem_creator
235b13b93 : Auto generate prebuilt_* modules from PRODUCT_COPY_FILES
2b3da4636 : rust-project.json: Generate proc-macro paths
2a7bf750e : Add a neverallow rule for prebuilt_* module types
0da5ae932 : Define additional prebuilt_* modules
c57171660 : Autogenerate a soong module to build odm.img
f1f447df6 : Remove sdk_genrule
535277601 : Fix typo.
0c5eaedcc : Keep high priority deps separate from regular deps
7ab866007 : Enhance vintf_data to support vendor_manifest and odm_manifest
26cfe3c2d : Utilize high_priority_deps in autogenerated filesystem modules
79196c55e : Introduce packaging property high_priority_deps
173256b06 : Use a boolean to determine if linker.config.pb should be generated
8fe68dcfd : Create linker_config_srcs for autogenerated product partition
499c92ea1 : Stop dumping/diffing preview ABIs.
afd8aacf5 : Add -Wno-c23-extensions globally.
918191e35 : Dedupe logic to generate linker.config.pb
034af2c2e : Distinguish HideFromMake from SkipInstall
312cc4116 : Create linker_config_srcs for autogenerated vendor partition
9263188d9 : Add linker.config.pb support to android_filesystem
872e8463d : rust: Shorten name of generated Rust staticlibs
66e658641 : Reapply "Only use partial compile on eng builds"
7ceb14aa4 : Revert "Use Unique lists instead of pointers in PackagingSpec."
710a14b52 : Revert "Add device_first_vendor(_shared)_data"
28cc1c734 : Make all_apex_contributions a regular module
bef36af55 : Revert "Only use partial compile on eng builds"
eef29fea1 : Run apexDirectlyInAnyMutator even when module is disabled
cdef43f30 : Fix 1-variant fallback issue in aconfig_test.go
ae3c12bac : Ignore METADATA files in python packages shipped with prebuilts/clang, which are from upstream and not in the format used in Android.
eba15b66c : Fix metrics/hostinfo_darwin.
ca280bf75 : Fix a bug where ModuleInfoJSON was not serialized correctly because core was not exported.
e70976d4d : Revert^2 "Convert cc modules to use AndroidMkInfoProvider."
0040f01de : cc: Make export_header_lib_headers configurable
db901f84d : Use Unique lists instead of pointers in PackagingSpec.
f6742afe1 : Add objectLinker to cmake_snapshot supported module types
9759ed7b2 : Move disabling rust sanitizers for linux_musl_x86 earlier
7fd5b2e15 : Make java_resources configurable
c49443ed6 : Make prebuilt_font use the common arch
9820408a8 : Add defaults for sh_ rules.
6ec8d4ac7 : Add comment for imageMutatorBeginMutator
768b00f90 : Only use partial compile on eng builds
54b01cd37 : Add CPU and RAM information to metrics
ec62d843d : Introduce additional prebuilt_* module types
924871387 : Add fully qualified names of lincese modules as dependencies
878b2fc69 : [Ravenwood] Support java_defaults for ravenwood properties
5b7635d30 : Implement the 1-variant fallback in the image mutator
168098c39 : Autogenerate a vendor-build.prop for the current product
859cdef88 : Add vendor to list of supported partitions in gen_build_prop.py
f7a1f6418 : Support host-cross variant on golang modules
98bb3b8ed : Print bootstrap errors through ctx.Status
69464c3d3 : Install cross partition symlinks in autogenerated vendor.img
b304ea9b3 : Handle IgnorePrefer32OnDevice() in first_prefer32 path properties
50bc3bc67 : Fix race condition when running tests
e1860e455 : Remove overriden modules from deps of autogenerated filesystem modules
8ba1fb0e4 : Add optimized build metrics to soong metrics
a8bf946a2 : Add device_first_vendor(_shared)_data
3cadf7d81 : Move some gob helpers to a new package.
afcdce8be : Fix explicitly versioned tag
49f1a8fce : Fix apex rust_dyn_libs property
a2fdb61a0 : Make apex tests parallel
a14fb6a73 : Update DepSet references
8b0b38270 : Mark BSD-Like-Binary-Only as deprecated in favor of BSD-Binary-Only
976890e51 : Simplify glob messages with many excluded files
830f56a78 : Remove testing package
e8b1ac47c : Update neverallow rules for Trusty genrule
3f84f6ea6 : Define VendorApiLevelPropOverride for GRF prop
11d0c38a2 : Move DepSet to blueprint
e98d345ed : Convert DepSet to a wrapper around a pointer
222874dae : Introduce Soong API CreateModuleInDirectory(...)
1cb98b77e : Export PRODUCT_COPY_FILES to Soong
2ecf86297 : Introduce prebuilt_media_audio module type
18f03f1c0 : Add device_first_prefer32_data to java modules
4fc1fc936 : Revert "error for non-system, non-vendor fuzzer"
49cc3e822 : Add fs_config_files_vendor and fs_config_dirs_vendor to deps
6c2b01d20 : Do not install transitive packaging specs of overriden modules
ee45d33c1 : Include device_common_data in python test data
a8fa6b445 : Set Fsverity props conditionally
6dd13b600 : Generate product partition filesystem module in filesystem_creator
1c74304fb : Add a test for .flat files with flag path generation
8c67d5842 : Add support for .flat files with flags in the directory
95b1c7042 : Remove broken struct tag
2c9ff13b5 : Revert "Define core-current-stubs-for-system-modules-exportable"
405f2d449 : Use shared variant of dep for packaging
619ed5ce7 : Give flag to R8 if we explicity opted for optimized shrinking
e3b65316b : Autogenerate a vendor/ partition
fa6e0fdf2 : Make ImageInterface use its own context
fcc07c0f3 : Skip host variant in `collectDepsMutator`
f62f4228f : Specify flags_packages property in the android_app-generated rro module
c5288af94 : Inline android.FilterListPred so soong_ui doesn't depend on the android package
dafaa7f5c : Revert "Autogenerate a vendor/ partition"
65cb40a97 : Add new properties to aid in removing the 1-variant fallback
35f300dff : Remove bp2build and appurtenances.
d117bec81 : Remove multiproduct_kati and build_test.bash
d3dc1ee41 : Remove cuj script.
0e1812973 : Define core-current-stubs-for-system-modules-exportable
4e5d8de9d : Improve soong generated filesystem bp files write
9f007c34c : Add a soongdbg star command, which prints dependencies in both directions.
9e4f3b5dd : Update riscv64 minimum API level to FutureApiLevel
6d056508d : Use a partition filter only for autogenerated partition modules
69d829a14 : Revert^2 "Use a partition packaging spec filter for android_filesystem"
0e097dd2b : Use --sysroot when compiling against the NDK
02adec80d : Revert "Use a partition packaging spec filter for android_filesystem"
22f82067d : Revert "Use --sysroot when compiling against the NDK"
35bd00a3b : Revert "Add -nostdlibinc to bindgen's device cflags."
19f7b285e : Add -nostdlibinc to bindgen's device cflags.
c3f083e7f : error for non-system, non-vendor fuzzer
46c98d9ff : Autogenerate a vendor/ partition
8a49643e6 : Use a partition packaging spec filter for android_filesystem
2a506cff5 : Use packagingPropsStruct for image bp generation
4b0ca97d4 : Reland "Add baseProps and fsProps into system bp generation"
99f189609 : Update SOONG_PARTIAL_COMPILE logic
d9875bc11 : Add VNDK apexes to deps of autogenerated system_ext partition
d86882b57 : Fix fsverity metadata apk name
2757bb749 : Use --sysroot when compiling against the NDK
dd81e67ec : Extend TestIncludeDirectoryOrdering to cover the platform variants too
6850d8f88 : Set build_logtags true for system partition
64f2d844f : Make uses_libs and optional_uses_libs configurable
dcca9dbe3 : Revert "Add baseProps and fsProps into system bp generation"
3c7be418b : Set fsverity.libs for soong generated filesystem modules
50098f775 : Add support for kotlin plugins
37e6794ad : Revert "Convert cc modules to use AndroidMkInfoProvider."
8d5418480 : Add module dir to includes for cc_preprocess_no_configuration
72f812f6e : Add a diff test phony target per partition
906222c75 : Dexpreopt install path fix for system_ext apex sscp jars
5e63b25c5 : Add filegroup modules to module_bp_java_deps.json
e19d335cf : Switch from genrule to java_genrule
a077b9401 : Add baseProps and fsProps into system bp generation
0d545b824 : Revert^4 "Set the appropriate deps property for the soong generated fs modules"
c98856816 : Add cc_preprocess_no_configuration modules
01977217a : Modify disabled flags to be exposed in keep-flagged-apis
4f443e78c : Skip `none` system_modules from module_bp_java_deps.json
5a2117c93 : Revert^3 "Set the appropriate deps property for the soong generated fs modules"
29e2d062d : Revert^2 "Set the appropriate deps property for the soong generated fs modules"
73d2930e6 : Bold the file created from the build-flag command
72d86c62e : Do not install internal files of apex
bda1816d2 : Add error functions to IncomingTransitionContext
564000874 : Convert cc modules to use AndroidMkInfoProvider.
f6ac27595 : Ignore Android.mk files for NDK build in external/
9e0062b5c : [Ravenwood] Allow Ravenwood to use system-build.prop
76e19859e : Add directory support to nsjail genrule
7319cff8a : Respect THINLTO_USE_MLGO for ML inliner
cbe641aad : Skip aconfig flag files for system_ext partitions
c71b175e6 : Make the "apk" property configurable
7a46f6c42 : Reland "Create an empty system_ext partition for aosp_cf_*"
b6ac822b8 : apex: pass partition_tag to host_apex_verifier
ac2d1bab4 : Revert "Set the appropriate deps property for the soong generated fs modules"
8ea25038b : Make vintf_fragment modules use the common arch
481b6699b : Hide windows genrules from make
dc6492f01 : Set the appropriate deps property for the soong generated fs modules
acd01347b : Do not disable adb_keys module type
7cd4baddd : Minor cleanup of erofs property checking
c7e58c90b : Use global mutex for fsDeps update
41e4c9988 : Revert "Create an empty system_ext partition for aosp_cf_*"
8f86c881e : Add logic to get file system dependencies
efc456a76 : Create an empty system_ext partition for aosp_cf_*
c35d6fb14 : Add erofs support to Soong filesystem modules
33c1e24ef : Update Soong workspace to 1.23
8a9628098 : Remove MutatorHandle.Parallel()
2d2e68ffa : Fix TestFinalDepsPhase for parallel mutator
6978879d2 : Remove dependencies on 1-variant fallback
d2a959554 : Add a few module visiting methods that return ModuleProxy.
12d548a07 : cc_baremetal_defaults: Disable SVE
d3bd6b486 : Introduce cc_baremetal_defaults
b9c67e21a : Make java binaries common instead of common-first
79688ff5f : Add 'dsts' property to prebuilt_etc
f1c79cae6 : Generate android_device module from filesystem_creator
c55e68816 : Import some product definition variables to Soong
f2c53982a : Introduce android_device module type
597bad603 : Convert fuzzMutatorDeps to a transition mutator
5246a7efb : Fix a gob related issue where anonymous fields need to be exported.
946688209 : Add a tracking bug for non ext4 FS type support
6f01658ba : Do not set --lto-O0 for optimize_for_size targets
096b8d6bc : Add srcs of jarjar'd java libs back to module_bp_java_deps.json
f966e387e : Remove resolved_srcs
159860c84 : Revert^3 "Use -target-feature for MTE"
42816dccc : Introduce vintf_data module type
1e613d24f : Add product variable SelinuxIgnoreNeverallows for sepolicy
b8533a82c : Annotate mutators that use methods that prevent mutator coalescing
b2388e3e3 : Remove unused TopDownMutatorContext methods
0feac37f1 : Revert^2 "Use -target-feature for MTE"
9e866c861 : Fix output file path error in filesystem_creator
127f95c1d : Make rust rustlibs property configurable
dc14bd116 : Drop `common_first` special-case from `addRequiredDeps`
fbcd5fe31 : Enforce partition property on apex system server jars
8609a5569 : Added EXTRA_ALLOWED_DEPS_TXT to allow arbitrary allowedlist text files that enforces min_sdk_version for apex dependencies to avoid regression
6a1d02919 : Remove IDEInfo method from java_sdk_library
3d9b69e99 : Use RELEASE_CREATE_ACONFIG_STORAGE_FILE to create aconfig files
b67040ded : [Ravenwood] Run Ravenizer on resource jars
dd9ccb423 : Add ModuleProxy that should be used when visiting deps.
cdae631d4 : Revert "Add product variable SelinuxIgnoreNeverallows for sepolicy"
90584fb9d : Add install_symlink_host Soong module type
91ae5ecbd : Convert sabi to TransitionMutator
457e50624 : Skip resource verification on intermediate android libraries
467d7c5c3 : Change the way to support custom gob encoder and decoder.
00408eab2 : Remove cross compilation support from go packages
d1d08abbf : Update test for go modules with multiple variants
2b9f138c1 : Make sanitizer markapexes mutators parallel
7077be40e : Don't use sanitized kati when SANITIZE_HOST=address is set
1639ab5d6 : Add release configs artifacts to metadata build
e90a67190 : Fix file remove bug in dex rule.
950deca36 : Install dexpreopt artifacts of apex system server jars in same partition
92ccbe2cf : Create soong-generated filesystem diff test
e42c5d96f : Add JNI libs support to device variants of java_binary
258b96f39 : Make the package_name property for apps configurable
24cf036b5 : rust: Don't produce apex variants if apex_exclude
7f9bcd00b : Revert "Create soong-generated filesystem diff test"
2cf1bf562 : Create soong-generated filesystem diff test
989ee847c : [Ravenwood] Allow sending additional args to Ravenizer
7bd19d90a : Strip versioned module-info.class files
11eb55c8b : Properly sort release_config_contributions artifact
12f6ec9b3 : Make the package_name property configurable
5c6d83b9c : Add new module type for adb_keys.
a0463d3c6 : Use AddReverseVariationDependency to avoid 1-variant fallback
b77d695fa : Revert "Use -target-feature for MTE"
735a80bf3 : Add release_config_contributions modules
1119e46b4 : Rename all_build_flag_declarations.go before adding to it.
0ec5a753b : Add proto declaration for all release configs artifact
99939e9ca : Enforce app updatability in GenerateAndroidBuildActions
091ffd888 : Make the rust features property configurable
3413f3050 : Added EXTRA_ALLOWED_DEPS_TXT to allow arbitrary allowedlist text files that enforces min_sdk_version for apex dependencies to avoid regression
b8083bb94 : Delete more unused bazel code
98047cfd0 : Introduce soong_filesystem_creator module type
76e6a9276 : Fix soong_ui.bash on mac
639423daa : Enforce exclusive release config component directories
7297f05ee : Don't mutate sources when reusing objects
f22fe41cd : Add PostApex mutator stage
cceefc885 : Add test for sabi propagation to static libraries
8bce38183 : Remove the java property `exclude_static_libs`
da7ba6e88 : Remove the cuttlefish visibility
8e47abc75 : glob blame
523db9433 : Add SOONG_PARTIAL_COMPILE
371a037ef : Make the java jni_libs property configurable
1cde8fc24 : There is no verbose flag.
aa022a53b : Revert "Use AddReverseVariationDependency to avoid 1-variant fal..."
4e8976e7f : Use AddReverseVariationDependency to avoid 1-variant fallback
f4992e8e0 : Add AddReverseVariationDependency
6d39c70d0 : Make the java jni_libs property configurable
3275cdf44 : Add missing dependency on soong.variables
82bea76a4 : Support select for product variable "build_from_text_stub"
50858a72e : Forbid -gsplit-dwarf cflag
5351cf28d : Use profile at framework/base/boot/ instead of the combined one at framework/base/config
559356d01 : Revert "Add install_symlink_host Soong module type"
a8b745213 : Make `Bitness` undefined for the common variant
77e27d44e : Find matching variant of `required` for `common_first`
a6d0aa86f : Do not set JavaInfoProvider in java_sdk_library
b1ccc2f89 : Prevent adding new modules after the defaults mutator
648daea67 : Remove blueprint.Module helper functions
fd1c4fb04 : Revert "Remove some unused make vars"
e9c49d5f2 : Remove obsolete 'neon' feature.
3a6eced43 : Set version code of apex and apk based on RELEASE_DEFAULT_UPDATABLE_MODULE_VERSION.
fb09145a9 : Add install_symlink_host Soong module type
7b534d596 : Update the visibilities for soong system image
5d88f7fb8 : Support setting R8 dump directory with environment variable
953ce9e92 : Remove some unused make vars
75e722820 : Remove some unused make vars
f7cd03e17 : Add nsjail support to genrule
a237b6767 : Create etc/aconfig/flag.info file in soong system image.
874273545 : Remove top down strict updatability checks
b34ca77d4 : Add trendy team for build/soong
98e9ac607 : Remove the SdkLibraryDependency interface
4b34a7296 : Support test_runner_options in android_test
a4c0355a8 : Export partition-related variables to soong
18cb570c4 : Only propagate strict updatability linting to modules that are in the apex
b94c779a6 : Revert^2 "pass read new storage parameter to java codegen"
8bf14fcb8 : Allow WalkPayloadDeps to be called from mutators
b79aa8fe1 : Use providers for lint
b323c9112 : Fix coverage when transitive jars are enabled
d4530d612 : Explicitly set the bitness of the jni lib required dep
d32e85f63 : Allow selective R8 optimization for eng test_suites
38b4c94cc : Create the installation rules of host ART boot image in soong
b5c82a45d : Remove checkPartitionsForJavaDependency()
c3cd8e3f3 : Possible fix for stat error during globbing
117af4292 : Possible fix for stat error during globbing
a4f336b7b : Bump ART host base address
162b489c5 : Add alderlake arch variant
cf7fa899e : Revert "Fix coverage when transitive jars are enabled"
c44416282 : Convert RRO enforcement to transition mutator
d433bd5eb : Fix coverage when transitive jars are enabled
89eb6d268 : Remove unused parameter to GeneratedCcLibraryModuleFactory
96ce83bbb : Move sdk_library submodule build rules to sdk_library_internal.go
a51c952e8 : Revert^2 "Remove `prebuilt_apex_module_creator` mutator"
5523481ed : Check vintf_fragment modules' target partition
4d42248ec : Revert "Remove `prebuilt_apex_module_creator` mutator"
791d42220 : Define __ANDROID_VENDOR_API__ for product variants
343d09e77 : create flag info file in mainline build
3e6015e60 : Move cog setup logic into shell_utils so it can be used when any of these commands are run:
1e954b6a5 : Remove CreateVariations, CreateLocalVariations and AddInterVariantDependencies
cdf8fd726 : Handle RELEASE_PLATFORM_SDK_MINOR_VERSION
b24bbccaa : Add product variable SelinuxIgnoreNeverallows for sepolicy
a043a1bd9 : Modify core.current.stubs dist dependency based on release flag

+- Project: platform/cts

f78631c714b : Verify onUpdateEditorToolTypeMatcher was called after focus by stylus
001219f85dc : Flag FRRO changes for Launcher preview color fix (3/3)
43dbac00bfc : Add missing recordSliderEvents
769270f4a66 : Add function for extracting SearchResults from SyncSearchResults in AppSearchTestUtils
86c391681df : Account for the audio permissions propagation
9240b20c6fa : Revert "Stop opting out edge-to-edge under cts/tests/inputmethod"
0d5b4a84d7f : Add CTS tests for TestLooperManager
4091abff697 : Fix the thermal CTS with correct mock headroom value type
1aec0907521 : Cleanup CDMA CTS tests
69daf4586cc : Moving test behind flag
4f5fd66f4d3 : Require both ART and platform flag for DynamicInstrumentationManagerTest
58b50e09121 : Exclude instant from DynamicInstrumentationManagerTest
f11049b1886 : Revert "Fix failed CtsAutoFillServiceTestCases for secondary_user_on_secondary_display"
044ed9d9a82 : WebView: refactor tests off of UI thread
d935b38a501 : Add scroll to show "Test String" on smaller displays
927b9a465f7 : Added tests to verify the newly exposed API method TextClassifier#getClassifier
f1b266b0210 : Grab data dependencies for CtsBionicTestCases from bionic.
cbbd63c2b85 : Added VEHICLE_PASSIVE_SUSPENSION_HEIGHT CTS tests
5335d3433d4 : Added BRAKE_FLUID_LEVEL_LOW CTS tests
d08b33bfdb2 : Added BRAKE_PAD_WEAR_PERCENTAGE CTS tests
3b938ec94d0 : Added BRAKE_PEDAL_COMPRESSION_PERCENTAGE CTS tests
4ca8986c427 : Added ACCELERATOR_PEDAL_COMPRESSION_PERCENTAGE CTS tests
c873b06c7af : Added VEHICLE_DRIVING_AUTOMATION_TARGET_LEVEL CTS tests
14453114665 : Added VEHICLE_HORN_ENGAGED CTS tests
d6b0648ee69 : Added INSTANTANEOUS_EV_EFFICIENCY CTS tests
6a65b0aea63 : Added INSTANTANEOUS_FUEL_ECONOMY CTS tests
c6edb817f90 : Update insets types per frameworks change
f9802129426 : Add CTS for PERMISSION_READ_STEERING_STATE_3P.
c8124722c6f : Revert "Update waitAndAssertActivityLaunched to just check for an activity being in RESUMED state."
3f819dae984 : Ensure that activities launched in IntentTests are the ones from the intent.
a6de2a1bf87 : Temporarily remove some modules from MUMD passenger CTS plan
aefcf7be752 : Restrict Embeddings to Android W+.
cc0a2b38f41 : Updated EvChargingConnectorType CTS tests to support SAE_J3400 enums
f9c8027ddeb : Added TURN_SIGNAL_LIGHT_STATE and TURN_SIGNAL_SWITCH CTS tests
1a8beb739dc : ITS: Split the AWB check from AE check in test_multi_camera_switch
e85cb984076 : Add framework GlobalSearchSession cts test for global open blob read.
3f50ab64364 : Accommodate to historical version of LE_Get_Vendor_Capabilities_Command already in use by some root-canal versions.
ed5d042535f : Complete CtsSignedConfigHostTestCases dependencies
994a3f682fa : RESTRICT AUTOMERGE: Relax focus event timeout
222cf7906f5 : Added INFO_VEHICLE_SIZE_CLASS CTS tests
2c42f3804c9 : Added INFO_MODEL_TRIM CTS tests
fca50dfb5b0 : Relax the check waiting for the storage clear signal.
114c8df30e8 : [Relayout][CTS] Ignore test affected by relayout fixes
7b6a5b7c693 : Complete CtsSecurityBulletinHostTestCases dependencies
8f249d700d2 : Fix regression caused by ag/30476469
b906bf99733 : Fix regression caused by ag/30476469
6c1da3b7d06 : Tests for fill dialog improvements
9ac1eb73372 : Camera: Run feature combination tests unconditionally
1141dfd7e1d : Add more logging to diagnose flakes in StrictJavaPackages test
c90bfc912e5 : Clean up test use of activityEmbeddingAnimationCustomizationFlag.
b70a9d82ac8 : Add a test for tapping impact on barometer. Also update the error threshold of the squeezing impact to 0.3hPa.
9d87abf65f3 : Restore check to sync input transactions and wait for idle
b455df7106f : Remove logging company exemption
965983698dc : Add flag check to testIndividualProperty.
597e96fae86 : Ensure there are no wake locks in display timeout tests
dea444b3e47 : [cts] Clean up AF policy test
7045759c48d : Add bedstead AfterClass and BeforeClass with DeviceState rule
0986310889b : Add a CTS test for SDM support.
88d24341edc : Revert "Add CTS tests for TestLooperManager"
70cf3eef309 : Lock screenOrientation for LocationInWindowTests
41d5dfcc4ea : Verify placeholder launch on secondary
e1c6ff7a265 : DO NOT MERGE Update waitAndAssertActivityLaunched to just check for an activity being in RESUMED state.
c5d95e9e082 : Update waitAndAssertActivityLaunched to just check for an activity being in RESUMED state.
2ced0e179c5 : Reference CtsUsbSerialTestApp directly as a proper dependency rather than packaging separately
49ed787199e : [MQ API] add an option to exclude parameters
542df0bf3af : Make WindowInputTests multi-user and multi-display aware (1/2)
327845de315 : Update for December filter results
4e1f332ce77 : Increase waiting time to 10 seconds to fix ActivityManagerTest flakiness.
ca7ac6307fb : Add CTS tests for TestLooperManager
037e902d4d4 : Make WindowUntrustedTouchTest multi-user and multi-display aware
9762f3ad1ea : Use assumeTrue to skip the test when mManager is null
ddafe9ce0ac : Add TestIntrusionDetectionApp to cts test config
88536972ab6 : Revert "Add CTS tests for noteOp batching"
cf2ef111411 : Complete CtsSimpleApp dependency for CtsDevicePolicyManagerTestCases
3dd9a0bf748 : Remove multiuser config from AndroidTest
8615da4ecde : Make DistractingPackagesTest robust by using Bedstead
2d7922a52f5 : ITS: Check skip condition before loading scene.
915e28c9317 : Minor CTS fix to work with mainline module
839264bf92a : Skip polling frame tests when observe mode is not supported
1a9859688a0 : Add test cases to check null parameter.
97b2c14c83c : Update waitAndAssertActivityLaunched to just check for an activity being in RESUMED state.
c16fec4c56c : Changed DeviceConfig.dump() signature.
983c6ee9f61 : Add Notification.hasPromotableCharacteristics to public API
8d0eff7000e : Support capital letter of JPG file extension
a1e8969dbaf : Add CTS test for USD support APIs
fe4908223d6 : Add CtsVirtualDevicesCameraCtsTestCases to known failure
1ef67d010c7 : Improve AE / AF / AWB for low light tests
4602a13cf2e : Update Framework from Jetpack.
d0f8ec0f97b : replace fail by exception and assertThrow
ac274babf2a : [cts] Respect enterprise policy in AppFunctions
0cfbdb48635 : cts(nfc): Renamed the nfc flags to include package name
dc0c469f293 : DO NOT MERGE Update waitAndAssertActivityLaunched to just check for an activity being in RESUMED state.
c88612d2d0d : Update IntentTests to account for freeform windowing mode.
2b22142cb6f : Add CTS tests for NAN Periodic Ranging
a8fe89a093b : Change again how HiddenApiTest is ignored for JUnit 3
bc8b4b08da5 : Wait for lock screen focused before unlocking
8b7dace4123 : DO NOT MERGE Skip CtsJvmtiRunTest1911HostTestCases for Android 14
70ad5690127 : Revert "Use BeforeClass and AfterClass of bedstead"
c97f858c0bb : Verify SplitAttributesCalculator across displays
0f96fa0c769 : Verify AE on secondary display
98e6a9d4034 : Fixed for CtsLocaleManagerTestCases.
8a507364934 : Update mock modem with 2pSIM+1eSIM HAL API changes
1f9d305990e : Stop opting out edge-to-edge under cts/tests/inputmethod
9cf3d2ff86e : Fix expectation order in PowerManagerStatsTests#testAcquireModifyAndReleasedWakelockIsPushed
3be94e61162 : [PM] Add PackageInstaller CUJ test case (41/N)
fa58ba0e667 : DO NOT MERGE ANYWHERE: Loosen the requirement on tag
c9e0c6e44a2 : Revert "Remove mcts tags since cts-dalvik-host-test-runner is host required by other CTS test cases"
a615192fa17 : Add CPU/GPU_LOAD_SPIKE hints for one-off expensive workloads
291152ed37c : record and reset allowed networks for all SIMs
06369e4ef58 : Revert "Reland CTS tests for graphics pipeline"
2f8a351d4da : Repo hook: activate google-java-format
24f50005f9e : Add required flag for IDENTITY_ADDRESS_TYPE_API
0aa6cc4c3ed : Use shared library dependency and module_lib
16516cc7fe9 : Bluetooth: apply formatter
1d53aeb6eb8 : Add flag requirement for UprobeStatsTest
77762cf0311 : Updated CtsStatsdAtomApp for Sdk 35
ad535d8c2cc : Clear calling identity in CTS SecurityStateManagerTest
9377e6c92aa : [ITS][Zoom] add physical ID to scene6 tests
6d93da500c0 : cts(nfc): Renamed the nfc flags to include package name
1bc41780859 : Fix ownership file for GameFrameRate CTS
2a52f7cec49 : [Ravenwood] Remove usage of RavenwoodFlagsValueProvider
5794877e661 : Update Framework from Jetpack.
445d794e252 : Increase shell timeout for hidden-api tests
4467996089c : Use BeforeClass and AfterClass of bedstead
3b13d6044ca : MediaQuality CTS fix
1f1b9ff60d4 : Add a test for barometer smoothing across sensor activations.
944b2c078a3 : Add a test case for barometer smoothing within the same activation.
d7cdc2eeb5b : Improve Messaging in Audio Loopback Latency Test.
7b5ce2c84c2 : Tests for list-based method to restrict media route visibility by permissions
9312da577da : Update CTS test to accommodate resume-on-failure for DIS
857859a724e : Fix testIsTagIntentAppPreferenceSupported
f055eb39879 : CTS test: New System API to launch SIM Preference in Setting
ee49f3c5170 : Fix VDM sleep CTS flakiness
18e841c688e : Lock screenOrientation for LocationOnScreenTests
5b5a13508f4 : Add tests for new restricted backup mode behavior
9886f6e9b55 : Ignore updated MediaProjectionTargetChanged atom test
71fb1be4376 : Add a common Junit3 TestClass
af94c166d11 : CTS test for BLE assisted P2P discovery and pairing
5a0234beca4 : Filter out API .class files
67526138eaf : Attempts to fix the test flakiness
c074c0a736a : Verify EmbeddedActivityInfo on secondary display
e06c56291db : Verify ignore orientation policy on secondary display
24bd9c4db4d : Move VirtualDisplaySession to ActivityManagerTestBase
9f591afc212 : Add host side test for generated preview persistence
57f8831a632 : Tune testBatteryUsageStatsPerUid conditions
81455cfd8d6 : Add tests for associated role services in NFC
237a84c1a22 : Skip some tests on CtsJobSchedulerTestCases for devices that support visible background users
0133b33c02a : Add NDK test for CPU/GPU headroom APIs
e8c8ca23923 : Test that we rebind to the payment service correctly after it dies and can complete a transaction.
99a5bb10467 : Updated to CTS for the change that "restricting local/sim contacts creation" is only enabled on App with Target SDK Baklava or above.
6be8280c7f5 : Remove cts test for associationTag
46051621275 : Update Sensitivity Levels for Preferences
fd8449135f2 : Switch to non-ssl dev port
9930d370bf4 : test(high contrast text): fix CTS screenshot for Oriole
4719d05fbc8 : Wait until the storage is cleared as part of the cleanup.
91bf818cc0d : Update CTS test for bluetooth offloaded rfcomm socket
65139db3271 : CTS test for bluetooth offload socket
8ba9a59f370 : CTS for uprobestats / art target process API
d336024c274 : Add native test for thermal headroom callback API
48c97ff630f : Fix the wrong message in car audio CTS
f258dce94ee : CTSV: subtract latency offset for Anker adapter
5c9b1a5b34d : Improve error messages in VendorVibrationSessionTest
3a55f415b0e : Ensures that NFCAdapter is capable of enabling NFC before running tests.
de79f208a33 : [cts] implement app functions enterprise policy
4a765e20a5c : Add CTS tests for noteOp batching
242c4d4d41b : Remove assertTrue as it can cause failure on some low-end devices
5bb14f5f28e : let CtsOsTestCases use bedstead-enterprise instead of bedstead
d3a17bedc8c : Add GnssAssistanceTest
ae00bb7bbd6 : Session activity size to fix flaky CompatScaleTest
7706b3df4c8 : Allow @Disabled @Overridable on REL builds
4ab0e9062f6 : Update ParseDouble test name.
6a1c454c3ce : Skip some tests on CtsJobSchedulerTestCases for devices that support visible background users
45aca33b8cb : Skip MultiDisplayImeTests and MultiDisplaySecurityImeTests for visible bkg users
a788e19518f : Add CTS tests for envelope effects XML serialization
a641c8bb637 : Remove mts-neuralnetworks tag from CtsNNAPIJavaTestCases as it is causing mts issues.
613ca50d93e : Fix flakiness for networksecurityconfig tests
29da5ae5e19 : Revert "PopulationDensity: CTS test"
d82aac0d8c1 : Replace deprecated AndroidTestCase for NetSecPolicy CTS tests
89cd2d9a468 : Correct settings type of transition scale
2ff6d402a5c : Disable EuiccManager tests using the test UI component for watches
c3b62341f94 : Disable EuiccManager tests using the test UI component for watches
a008d527b7f : Enable enable-testing-secondary-user-on-secondary-display option on CtsAppTestCases
9f02e8ca4d2 : [CarrierLock] CTS fix to support carrierlock Old API to read the carrier lock info.
a3f08afa6a6 : Fix retry bug in test_night_extension.py
366ef30358d : Add new types to the blocklist
77478465bd9 : Allow @Disabled @Overridable on REL builds
51a76e97796 : let MediaBetterTogetherTestCases_defaults use bedstead-enterprise instead of bedstead
c461d8decfb : CTS test for getIdentityAddressType
8db002d6949 : Add CTS test for Dependency Installer's multi-user support
bd656ad4d64 : DO NOT MERGE Use DisplayId when verifying top task in MultiDisplay
f817222e878 : RESTRICT AUTOMERGE Add CtsHiddenApiBlocklistApi27TestCases to known failures
fb91b641912 : Reenable Support Message tests
46302ee35aa : Declare DecorInsetTestsBase$TestActivity in AndroidManifest
07d9baa6332 : ITS: Converge 3A and take capture in same session
876c806fb18 : Update Framework from Jetpack.
12170064dea : Enable enable-testing-secondary-user-on-secondary-display option on CtsAppTestCases
b8314524f6b : Skip VirtualDeviceMirrorDisplayTest of CtsHardwareTestCases for visible background users
a73b86871a1 : CTS for SetFrameRate GTE enum.
067fee72107 : Reduce the test threshold for refresh rate matching
1277fed90ef : Change the way divisor rate is verified
f8759f2ce6d : Add metadata necessary for the service manager to initialize a transport connection during CTS testing
a5c94ba30e4 : Improve logs for test case failures
4876bbdb503 : Remove MctsMediaMuxer and MctsMediaRecorder from cts, mts and mcts
4fd128632ab : Update PowerManagerTest to run with AndroidJUnit4
6e0baa352f6 : Enable secondary_user_on_secondary_display on CtsPackageManagerTestCases
e4be14985a7 : [audio] Rename ST method for API.
2b899693e00 : [Ravenwood] Remove RavenwoodCommonUtils usage
25acef1546f : Update ExpeditedJobTest capability checking
21a999a6dcf : Relax some ActivityManagerProcessStateTests expects with waitFors
a2f76007c1a : Revert "Add CTS for new ASurfaceTransaction_setFrameRateParams"
e3b130b9006 : Revert "Add CTS for new ANativeWindow_setFrameRateParams"
f1f28eee134 : [25Q2 ITS] test_video_stabilization_jca
a0ca692bdbe : Add mfasheh@ and shayba@ as owners of TestLooperManagerTest.java
1b49a03d958 : [Ravenwood] Remove RavenwoodCommonUtils usage
0ac08ee1629 : Cts for the custom virtual display dim brightness API
09dde471c52 : Make WindowInputTests multi-user and multi-display aware (2/2)
39400e75bf5 : Revert "Updated CTS test to adopt the migration to the LocalSimC..."
430cc6e10be : [cts] Add cts test for getRouteType().
173e8ff2bfe : Add satellite APIs in mock modem.
1da582d4bae : Tests for description crop hints
de8a0bfaad8 : DO NOT MERGE Increase the size of the test freeform activity.
1e8196f6edd : Add error messages to assumeTrue statements in CardEmulationTest
d57e90dae61 : Disable Play Protect during backup CTS
d9b7bf08224 : Add a package filter for a global search test.
0a6dfe65582 : [Relayout][CTS] Enable relayout flags
004a6fd23e3 : Update ParseDouble test name.
5f8ea4e7d49 : Add user aspect ratio settings metrics tests
4f2e3112a5c : [AAPM] Update tests to wait for hooks and callbacks to complete
c033edfbed7 : Add CTS Test for getGroupIdLevel2
a71627962b0 : Enable secondary_user_on_secondary_display on CtsPackageManagerTestCases
98fbc3e2f0f : Install package on current user to pass on HSUM devices
e0b70b232df : Disable EuiccManager tests using the test UI component for watches
dd73bc534d8 : Update test case to allow no override if the weight is the same
fb42f26e65c : Disable DisallowCellular2GTest#testStateAfterToggle()
9b06913645b : Fix to support secondary_user_on_secondary_display on CtsPackageManagerTestCases:PackageManagerShellCommandInstallTest
261ae141bdb : [Ravenwood] Remove RavenwoodCommonUtils usage
92f8c08d22a : FrameRateOverrideTest: re-enable testAppBackpressure
0a28f0c413f : Add scroll to show the view objects on smaller displays
2c5289b264d : Add scroll to show the view objects on smaller displays
e70e0271cc5 : Add scroll to show the view objects on smaller displays
b575e8ac11a : Fix flakiness of testMoreOptionsButton_simpleBiometricAuth.
202dd2eada6 : Fix InputDeviceKeyLayoutMapTest for secondary_user_on_secondary_display
403b18cce40 : Cleanup flag "create_windowless_window_magnifier"
52bc741efd0 : Increase the size of the test freeform activity.
78d2ee9862d : Increase the size of the test freeform activity.
8ba8e45510d : Add test for BlockingOption
b5fb4a4ed56 : Skip some CtsJobSchedulerTestCases test cases depend on NotificationListeners for the visible background users
13fd39b4a89 : Skip some CtsNotificationTestCases test cases that depend on NotificationListeners for the visible background users
69e597870f4 : Relax some ActivityManagerProcessStateTests expects with waitFors
57c16c07d57 : Update ExpeditedJobTest capability checking
b50bde58a7a : Skip CtsLegacyNotification28TestCases on the visible background user
cd9e2428cb8 : Add tests for BackportedFixStatusReported atom
eedf6f90702 : Fix to support secondary_user_on_secondary_display on CtsPackageManagerTestCases:PackageManagerShellCommandInstallTest
15f01653b43 : [Ravenwood] Remove usage of RavenwoodFlagsValueProvider
05b72897398 : Enable secondary_user_on_secondary_display on CtsHardwareTestCases
3c8714cf2dc : [Ravenwood] Remove usage of RavenwoodFlagsValueProvider
ccdcd5fc40b : Fix NPE in CardEmulationTest
26c0ed01200 : Fix flaky AppTaskTests#testMoveToFront.
22ff5f7f5e6 : Add CTS test to check the channel sounding feature.
f8db524130c : Cleanup flag "proxy_use_apps_on_virtual_device..."
bb162eea83b : Statsd Cts: removed obsolete test dependency
9d343cd2ed1 : Statsd Cts: removed obsolete test dependency
38ebe7496a1 : Add test for onSessionDestroyed API
ba658f3d04b : Convert hidden SatelliteManager APIs to System APIs.
1adce018636 : Fix InputDeviceKeyLayoutMapTest for secondary_user_on_secondary_display
2e7a68301e9 : Make IncompleteMotionTest multi-user and multi-display aware
2a13a957dcb : Remove android.service.chooser.chooser_payload_toggling flag
ff1972947aa : Add CTS Multidevice tests for PollingFrame data and type
e7cdba27379 : Remove testEventListener_commandTimeout from CardEmulationTest
c87430d4921 : CameraITS: Update flip camera resource pattern
6fe3ab6b87f : Fix flaky CarTaskViewControllerTest
12f18ee021d : Add CTS test for WindowManager flag to override power key gesture.
d33532318aa : Fix PowerPolicyHostTest
2fc419c3ca7 : Handle testing of devices with NO wired audio peripheral support.
cb80b2a4519 : Remove flagging from AudioDeviceUtils routines.
0088d41e6c4 : Disable DeviceConfig#getAdbWritableNamespaces test on ravenwood
f24f58bb014 : Fix PowerPolicyHostTest
768224b10b9 : Fix BlurPixelVerifier to use the correct Gaussian blur equation.
909d8466c73 : Adds a CTS test for the jank data API.
4da2ebabcaf : Add CTS test where Dependency Installer abandons sessions
b23689906d4 : Update expectations for a couple of jdwp cts tests
5202fb8900c : Add CTS tests for handling non-existing session id from DIS
ae787d764ec : Prevent test app from crashing when swapping roles
0e7a23924a9 : CTS changes for VirtualDisplay brightness API
ea6bffa3cad : Add tests for new surface binding for ADPF Timeline API
5e3f0aec6f2 : Add mcts tag to tests for host required dependencies.
e401c8a38dc : Run media session related tests with bedstead
b70f1e6f7fe : Increase test timeouts for MT6762 SOC
5894944707d : Deflaky BackNavigationLegacyGestureTest
b913f92e1db : Minor test fixes for sync ag/30558640
3103f91e151 : Restore transition scale after testClearOverrideActivityTransition
06b59f78d72 : Remove AccessibilityEventAndCacheTest from android15
9df9dfd800c : Enhance geofence cts for satellite service
aac395bb5e7 : Renaming test_module_config test.
fe376e16088 : Make WindowUntrustedTouchTest multi-user and multi-display aware
acab02033b9 : Fix AppConfigurationTests CTS-on-GSI issue
09f7d734685 : Sync BlobStore to AppSearch
bebea409976 : FrameRateOverrideTest: re-enable testAppBackpressure
e70df60e8f4 : CtsLintChecker: report attribute location
61bb3f400a7 : Fix flaky CarTaskViewControllerTest
2662aa7eab4 : CTS for frozen-aware RemoteCallbackList
b57f530de95 : Tests for new API method to restrict media route visibility by permissions
d73005e9c4d : Add scroll to show the view objects on smaller displays.
0c7ede4f607 : Update Framework from Jetpack.
d3ceeb957d7 : Fix flaky ForceStopTest
f39f30deee7 : Add ALLOWLISTED permission to DeviceConfigApiPermissionTests
74336de55df : Move TestExtensionAtomReported & TestRawAtomReported to statsd cts
d809678aeef : Return message wrapped in Result from receiveMessage
8d6d3be96ea : Reorder and cleanup the CTS content directory owners
7c0cb7ca992 : Add a test for overriding LUTs with null.
75d81ac78d3 : Remove Telephony tests for hidden APIs
675d6211768 : Fix Cts for transparency TelephonyCallback APIs
1bbb5d23e31 : Add scroll to show the view objects on smaller displays.
0f74d76c771 : AppExitTest: align timeout with platform default
80a168fcac2 : Enable secondary_user_on_secondary_display on CtsHardwareTestCases
2b9beaacbe9 : Move device-display association to VirtualInputDevice
a71146a8e00 : Remove usage of impulseVelocityStrategyForTouchNavigation flag
f83ecea62bb : Require pip2 to be off for minimal size test case
8fdff695610 : Use MMAP APIs directly.
20f85af5020 : Fix ClassNotFoundException
36d6fd4a422 : Add evitayan@ to the OWNERS file of VCN CTS
ee2ac1c23a6 : Modify cts tests for FillEventHistory
b65c4cad3e9 : Remove ARC Compat team as test owners.
41db724d346 : Bedstead: replace "Harrier" usage with "bedstead" or "bedstead-enterprise" if the enterprise dependencies are needed
04fcbe3d1a9 : RESTRICT AUTOMERGE: [ITS][Zoom] handle is_tele case where activePhysicalId is null
b01428e2fb8 : Consolidate blur-disabled tests
9d006d8b804 : Update CTS-system-virtual filter
8fb8f3e9161 : Fix flake where refresh rate change is not recorded
34d18eb3233 : [NTN][Geofence] CTS for geofence enhancement
f9583afd241 : Exclude frozen notification CTS on older devices
10c3e256d6a : Fix CtsSensorRatePermissionTestCases for visible background users
4cd4715bef7 : Add MotionEventBuilder for simpler creation of MotionEvents
012256a1b03 : Update Framework from Jetpack.
1595d3217c6 : camera security test: Change onResultReceived signature to match ICameraDeviceCallbacks
dae6dfdd5b5 : [AAPM] Tests for DisallowInstallUnknownSources state after disable
97b5c5f4a7a : Fix service startup failure
f94fd809a72 : Move DrmFramework tests using mpeg2 extractor to module
7c95c87d73d : Mark CtsPackageWatchdog as MTS for crashrecovery
740aa2b96e8 : Re-enable FrameRateOverrideTest on CTS
3afbf629aea : [RESTRICT AUTOMERGE] FrameRateOverrideTest: test only with the highest refresh rate.
ff740ec6835 : Re-enable FrameRateOverrideTest on CTS
969328496f6 : [RESTRICT AUTOMERGE] FrameRateOverrideTest: test only with the highest refresh rate.
2adbca88a62 : Make InputInjectionTest multi-user and multi-display aware
730e067a080 : Fix CtsSensorRatePermissionTestCases for visible background users
33225e4ec88 : AICS: update test name
61eb0e44e67 : ITS: Add scene2_g and test_preview_num_faces
91eb4d37000 : Make InputInjectionTest multi-user and multi-display aware
94be4f0c0b0 : CTS for deviceId
b7d7bff0a04 : RESTRICT AUTOMERGE: [ITS] use largest common video/preview size in test_ae_awb_regions
b3ee9610e3c : Skip some CtsJobSchedulerTestCases test cases that depend on NotificationListeners for the visible background users
9adfa78f4d2 : Fix broken UprobeStatsTest
8860cefd1b8 : Add more getDeviceIds tests
758c2d6c976 : [RESTRICT AUTOMERGE] Skip testSurfaceTransaction_setDesiredPresentTime_*ms from cts-on-gsi
03b02f4de23 : Add a comment for skipping testSurfaceTransaction_setDesiredPresentTime_*ms from cts-on-gsi
95319845fd0 : Add CTS test for setting USD channels
d095007e674 : Skip a CtsAppTestCases test case that depends on NotificationListeners for the visible background users
ec5ea14198a : Add CTS tests for IntrusionDetectionEventTransport
d9ed286ae67 : Support secondary_user_on_secondary_display on CtsJobSchedulerTestCases
ef872dfb4de : Support secondary_user_on_secondary_display on CtsJobSchedulerTestCases
c9ce2bb6187 : CTS test for BluetoothQualityReportTest
eac42428a1a : Skip CtsLegacyNotification28TestCases on the visible background user
d9bfa1da9e5 : Reland CTS tests for graphics pipeline
2a08e3474c9 : Skip some CtsNotificationTestCases test cases that depend on NotificationListeners for the visible background users
521e099cee1 : Skip some CtsJobSchedulerTestCases test cases that depend on NotificationListeners for the visible background users
eccd3ea9d3f : [Lut NDK] when call ADisplayLutsEntry_createEntry, convert dimension and key to predefined enum.
6b0bd0ce353 : Skip a CtsAppTestCases test case that depends on NotificationListeners for the visible background users
ae5beafbae2 : Skip a CtsLegacyNotification29TestCases test case that depends on NotificationListeners for the visible background users
66110d498af : memcgv2: Use total device memory for reclaim amount
e25f871569a : Fix to support secondary_user_on_secondary_display on CtsPackageManagerTestCases
2e334b00888 : Remove flaky test component pending fix
29a748b19e1 : Bluetooth: Cts can only use API flags
115ef1fac42 : Refactor TestDependencyInstallerService to run based on preferences
dc92d2f93d8 : Skip a CtsLegacyNotification29TestCases test case that depends on NotificationListeners for the visible background users
2b1b4f6a1d8 : Fix camera_id in file stem for test_night_extension.py
69d6a793839 : Cts for adaptive scribe bounds
4f66fdad357 : Call APIs for querying platform mmap policy directly.
dee1dfb56a4 : Prepare KeyguardDisabledFeaturesTest for coexistence flag flip.
a8584b1ec78 : Skip CtsWindowManagerDeviceActivity on the current user in Multi-Display configuration
1603e568fd6 : Skip CtsWindowManagerDeviceActivity on the current user in Multi-Display configuration
ea742d6909a : Re-enable FrameRateOverrideTest on CTS
40a5f0d17b2 : Fix to support secondary_user_on_secondary_display on CtsPackageManagerTestCases
65350b15cf1 : Change again how SystemApiAnnotationTest is ignored for JUnit 3
307651e2486 : Change again how SystemApiAnnotationTest is ignored for JUnit 3
cedafe39a7f : Change again how HiddenApiTest is ignored for JUnit 3
ffdbd1a2d28 : Add CTS test for USD
3f9735c7833 : Make WindowInputTests multi-user and multi-display aware (2/2)
9f1d353a6e1 : wifi: Use AAPM flag instead of WEP flag for API flag
6e5f704f42c : Clean up flag(3): com.android.window.flags.always_draw_magnification_fullscreen_border
a2489350510 : Bedstead: introduce a new "bedstead" module that doesn't depend on TestApps and bedstead-enterprise
9b6ad72d2fb : Add CheckFlagRule to DeviceConfigApiTests
200a0d86812 : [cts] Add multiuser cts tests
f50bfaf0f54 : Add reachability CTS metrics tests
c3030d6090c : Update CTS tests for renaming CloudMediaProvider capabilities API
1cf71773772 : Add tests for dependency installer role
f3ab14572c6 : [AAPM] Adding Memory Tagging Extension test
04a04a5ab9c : Stop the LockScreenSession on test end
1aad52d1a07 : Fix MediaProjectionAtomTests [2/2]
36fd8178e76 : Allow orientation request for orientation test
7695547b745 : Stop opting out edge-to-edge under cts/tests/tests/systemui
5db16a44cdd : Add CTS test cases for new APIs for monitoring selected satellite subscription id change events.
9d7116e6ab2 : Allow VCN CTS to access hidden VCN APIs
99ef4edcbb2 : Skip TextureViewCameraTest for secondary_user_on_secondary_display
f4bfe4b0e75 : bt(snippet): Don't hold shell identity on initialization
9512bc03227 : RESTRICT AUTOMERGE: Add xTS console to the cts-tradefed script.
a07a2567dd7 : WallpaperTest: WallpaperTest test case setBitmap_viaDpc_disallowed_cannotSet[IncludeRunOnAffiliatedDeviceOwnerSecondaryUser].
06f7f2bdfa5 : Remove state from WifiStateChangedListener test
bd3c1f668ad : Fix MediaProjectionAtomTests
57b41ebe323 : Update the point for touch outside
314028c263c : [PM] Add PackageInstaller CUJ test case (40/N)
5db0f507b0c : CameraPermissionTest: Test NDK path and fix proxy tests.
7f9c48e888b : [ITS][Zoom] use physical ID, not focal length to determine camera switch
a84e7dc4be8 : Use UinputTouchDevice in stylus navbar test
c0eb1e746f6 : Find object on the main display for visible background users
d8536957b58 : ITS: Add landmark check in test_preview_num_faces
b341a907b1c : Bluetooth: use module_current dependency for Cts
fa5467b6c12 : Updated CTS test to adopt the migration to the LocalSimContactsWriteException
54a165128c8 : cts: skip loopback test for automotive products
2a67026376d : cts: skip audio routing test for automotive products
00080403d6f : add CTS tests for Advanced Protection Mode's feature to disable 2G
18f96f9e68a : Add requestSatelliteDisplayNameForSubscription in SatelliteManager
3e4637f3127 : Skip testSwitchToHeadlessSystemUser_whenCanSwitchToHeadlessSystemUserEnabled on automotive
fa98ece1c19 : Add test for offload playback in aaudio.
89feac0c726 : Find object on the main display for visible background users
9131c9a9703 : Added null check for mNotificationListener at tearDown
a550d230620 : Prepare LockNow tests to enable coexistence flag flip.
7a56e0ddee6 : Tests for new static wallpaper set APIs using WallpaperDescription
276421baf5e : Remove a test failure workaround.
56bfaf0e181 : Remove testAutofillCustomHighlight_singleField_noHighlight
27e0a4754d3 : Bypass GPP for CtsWidgetTestCases29
ea68983ec58 : Added null check for mNotificationListener at tearDown
d27a980594f : Fail for no SIM in OutgoingCallTest test
888d976f8ba : [DO NOT MERGE] Skip PointerIconTest for Automotive
7553ad49cd1 : Remove flagged class construction from test init
a5c85230797 : ITS: Converge 3A and take capture in same session
d50fa1d94f2 : Clarify lock instruction for multi-display devices
41d09b50546 : ITS: Converge 3A and take capture in same session
3f16b22eeab : Implement test for device lock state listener
cdb446c7649 : Let Camera FOV Calibration handle insets properly
8e15237e18e : Stop opting out edge-to-edge in CtsAccessibilityServiceTestCases
1158111f860 : Stop opting out edge-to-edge in CtsAutoFillServiceTestCases
438bc7b6da4 : Update the action name for the dependency installer service
c9603e50755 : Add tests for minor version support in minSdkVersion
660f98187fe : Update android.sdk aconfig flag lib to new name
dee40efff1f : WallpaperTest: WallpaperTest test case setBitmap_viaDpc_disallowed_cannotSet[IncludeRunOnAffiliatedDeviceOwnerSecondaryUser].
0de269bb810 : Increase test timeouts for MT6762 SOC
e7d5cf01e73 : Stop opting out edge-to-edge in CtsPreferenceTestCases
12b8e75b4f5 : CTS for onSecureWindowHidden API
d636a7ed0ab : Test for INDETERMINATE RangeInfo
8dbdd4805ef : Stop opting out edge-to-edge in CtsTextTestCases
a3fb41e6c6e : Stop opting out edge-to-edge under cts/hostsidetests
41349309395 : Add test for DeviceConfig getAdbWritableNamespaces
08cfdae4357 : [cts] Add cts test for getMaxPausePollingTimeout
a9152e4c113 : Introduced forceResetCarrierKeysForImsiEncryption TestAPI to delete Imsi Certificate.
670dbae7b00 : Add linter to standardize CTS app names
9651a04d047 : Improve AE / AF / AWB for low light tests
07d86fd4c33 : [Cleanup] Convert CDD requirement comments to python annotations.
e70e489f8c1 : Update Framework from Jetpack.
ae23d3a365a : Skip testSwitchToHeadlessSystemUser_whenCanSwitchToHeadlessSystemUserEnabled on automotive
862771580a8 : Camera: Verify DEPTH_JPEG extension output
ef276eb0aa6 : CameraCTS: Remove AE_PRIORITY_MODE from expected CaptureResult keys list
316d0b0b362 : [PM] Don't run the test cases in a profile
1e1d0ce77a2 : Skip testGetStorageVolumeSDCard() for visible background user
5b67375ea44 : [audio] Remove getInputForAttr usage test
e6b6e83f04f : Get Casimir device ID from REST API rather than by passing in a commandline argument
e19aabdbb71 : add sdk_version
d3f2a9011ad : Skip ProcStateAtomTests#testTopSleeping for automotive devices
48ad34f3469 : [audio] Remove getInputForAttr usage test
8887e603f97 : [audio] Fix missing perm barrier on dropShellPerm
477b21065c7 : CTS: Update test to be more thread safe
a4807e1391a : [Lut NDK] NDK tests
2668b221e94 : Use activity window screenshot not whole device
759a7b5d2c1 : Add CTS test for SoftApCallback#onClientsDisconnected
2972d2f38aa : [audio] Remove getInputForAttr usage test
4bc673ed634 : Update OWNERS file for CTS
ff750f5d102 : [cts] Add cts test for NfcRoutingtableEntry.getType.
31d3efb5dd9 : Revert "Bedstead: introduce a new "bedstead" module that doesn't depend on TestApps and bedstead-enterprise"
b802acec834 : ITS:Converge 3A and take capture in same session
2ba8dcefbd2 : [ITS][Zoom] choose shared marker closest to transition image if possible
6e2fb36c5c3 : Add RequireMinimumAdvertisedRamDevice annotation
823e28b6dc0 : Fix flakiness in VirtualDevicePowerTest
4806ae6fac4 : nfc(cts): Add T4T Ndef Nfceee feature support
19419089629 : Poll when asserting the touch mode state
f70396491e0 : Bump CTS Verifier version
f295ab77cd6 : Add a test for walking impact to the barometer test.
bd2dde648e0 : Add a test case for barometer measurements under radio impact.
70b22ae54af : Use PropertyUtil to get the property: since the API is internal, CTS Verifier doesn't seem to be able to use it; it compiles but causes a runtime exception on device.
a5c31c4e2ad : Add test for squeezing impact on barometer
80cb3837420 : Fix MediaStore_FilesTest#testStandardSorting flakiness
4e2e4f3482b : PopulationDensity: CTS test
960a1e57d24 : Skip throughput test on Cuttlefish
27c7e3aa43a : Remove appfunction code from appsearch
37cee9f75b3 : Change how SystemApiAnnotationTest is ignored for JUnit 3
9fd843274e6 : Change how HiddenApiTest is ignored for JUnit 3
574dc867bc2 : Change how SystemApiAnnotationTest is ignored for JUnit 3
11790a9bcd2 : Use TestActivity.isStopped() instead of isDestroyed() for verification
7446db65304 : Stop opting out edge-to-edge in CtsWindowManagerDeviceTestCases
d357c9ece07 : Bedstead: move UserRestrictions annotations from multi-user to the enterprise module
becd6b51bcc : Change how HiddenApiTest is ignored for JUnit 3
a4151afa524 : Add exceptions for unfrozen interface enum values
ff575500405 : Rename setEnableAutoInstallDependencies API
1c0c7acf36f : Remove outdated sdk 26 tests
6139dfd0645 : mediapc: Extend initialization latency test for Dolby Vision
3a1a9112e2b : Update CTS test config for (re)moved Health Connect tests.
3de1b072ce6 : Update CTS test config for (re)moved Health Connect tests.
273e9e53ab8 : Fix flaky getPreviousForegroundUser_correctAfterReboot test.
35f41813575 : Add CTS test for when Dependency Installer doesn't wait
27ce89a52dc : Remove elegant text height params from the PcT tests
5104aa97814 : [PM] Don't run the test cases in a profile
0f8e5b3d8eb : Skip ProcStateAtomTests#testTopSleeping for automotive devices
666e63790cd : [PM] Don't run the test cases in a profile
b156424b97c : [PM] Don't run the test cases in a profile
7c6b549b4c6 : [Lut cts] add Lut cts for CIE_Y
43b4b6b2595 : Remove usage of mRearDisplayState from ExtensionRearDisplayTest
7e953b0cc98 : Make WindowInputTests multi-user and multi-display aware (1/2)
1e6a57406f0 : Update testRearDisplayStatusListeners to use property
9eb9e4c443d : Make CtsSurfaceControlTests debuggable to enable checkjni
a7a5ed8688a : AICS: add test
dfceef3d1ef : Enable SettingsMultiPaneDeepLinkTest on Automotive
2c99cc1cffa : Add headroom tests on custom calculation window size and tids
853d70a5959 : [cts] Fix the framework-targeting test overlay
c5a33351583 : Fix: TEST_MAPPING name for CtsDynamicInstrumentationManagerTest
5696afc991b : Clarify lock instruction for multi-display devices
5c88bd95ba2 : Replace CtsTouchUtils by UinputTouchScreen
1b3a1ae7dea : Modify test for setting inactive & active on media fgs.
c52aaa6ffed : Update CTS tests corresponding to updated API
2b07cc5e59c : Increase polling timeout
96bb3a1e0a7 : Migrate CtsOsTestCases to use test_module_config.
a94d748a141 : Deflake + improve test for tracking colormode atoms
46e9eda940a : Consider activity window location on screen.
ad07c424812 : Make IncompleteMotionTest multi-user and multi-display aware
1b02542b648 : RESTRICT AUTOMERGE Skip Bluetooth related tests on visible background users in CtsCarTestCases
a8f3bbc331f : Update CtsSettingsPreferenceServiceTest for API changes
25c11026421 : Add a test to check the temperature compensation of the barometer sensor.
3e0a309985e : Adds CTS tests for underlying sensor support
befb6d7cc0e : Add ADD_MIRROR_DISPLAY permission to the relevant CTS
1dbcd699bd5 : [CTS-Verifier] Create CTS Test for ProgressStyle
31caa0ef83e : Require pip2 flag off in keyguard locked tests
bcb8ad4d636 : Ignore flaky and failing tests in SatelliteManagerTestOnMockService
bfb72771f91 : CTS test of privatizing the DefaultAccountAndState's constructor.
7a09cda1275 : Add success test for setEnableAutoInstallDependencies API
7603fc1ac89 : Bedstead: introduce a new "bedstead" module that doesn't depend on TestApps and bedstead-enterprise
f6b0f429c65 : Bedstead: move Policy.java to the enterprise module, extract enterprise code from BedsteadJUnit4.java and move it into the enterprise module, prepare Harrier to build without the enterprise module
cf682246d5a : Improve messaging in Audio Stream Disconnect test.
ea6e5c627b8 : Add JobScheduler Quota enforcement changeids into allowlist.
3709a7f5810 : CTS: Annotate drm non-mainline tests with FrameworkSpecificTest
d88669b3463 : Adding CTS test for getLatestInferenceInfo
6ae3c647b25 : Ignore testBrightnessSliderTracking
678a9bbc430 : Ignore VirtualCameraTest on unsupported hardware
29618ef39fb : Clean up launched VDM API flags from CTS
f20f9845bd0 : Added logging of apkFile when error getting class defs
f34df2ad416 : Remove test failing after bug fix
d477a7b9fc3 : Skip checking disconnectCiCam return value
2e6905b0e26 : Remove assertTrue as it can cause failure on some low-end devices
a5926c2288c : CTS test for Android Security b/360807442
affde51b69c : Remove the wrong added tags of aosp/3297934, remove CtsMediaDrmFrameworkTestCases from mcts-media and mts-media, this is a partial cherry-pick of 94b7ffa9590f08e9869ccaec73eb9bcc2072716c.
275174a7c15 : Remove the useless tethering tags since CtsWifiTestCases only contain in mts-wifi test plan
358b85c1d04 : Remove assertTrue as it can cause failure on some low-end devices
83fd56ea76d : mediav2 CTS: Consolidate muxer-related support methods into a single file
2876e11e842 : To only run "run_on_work_profile" for photo picker tests
34373bbdc84 : Hide keyboard before checking texts.
4af8283e1fe : Revert^2 "Update Framework from Jetpack."
cb9bf8b01b0 : Skip Bluetooth related tests on visible background users in CtsCarTestCases
faec70d6b73 : Remove enabled flag fix_merged_content_change_event_v2
f3f38c49358 : Remove CtsProviderUiTestCases and CtsMediaBetterTogetherTestCases from MUMD passenger CTS plan
3507c612064 : Add CTS test for WifiStateChangedListener
305a8fd8281 : CTS Adds getSupportedRefreshRates tests
870d04d4092 : [RESTRICT AUTOMERGE][PATCH] Fix CtsContentTestCases on GTV devices
3f2d66a7c36 : Fix getCurrentAudioZoneConfigInfo failure
2fffc708bba : Add CTS to test thermal pending reason for getPendingJobReasons().
988ebdf756b : [ITS][Zoom] make W transition point 2.2x for test_zoom and test_zoom_tele
008998c5627 : [cts] fix testInstallArchivedUpdate on HSUM
eb45f1ed725 : Add CTS test cases for GRAPHICS_PIPELINE
b21a9ccf6fb : Exclude more failing tests from cts-system-virtual
ec36d59327d : Fix getCurrentAudioZoneConfigInfo failure
30f2024189d : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
a6289d9566a : Switch from SUPL Production SSL port to the Dev SSL port for CTS tests
38cd7e343f2 : Close resources in cts/tests/app tests
d02184368cd : Use RavenwoodRule for system properties
f6923e9e4fa : Add case for SysUI toast with Gravity.NO_GRAVITY.
304d8e80efa : Use activity window screenshot not whole device
813a4223f40 : Disable hidden API check tests.
1d0e2659e03 : Fix wrong-thread violation in test
49b2e8658df : Verify background color by TaskSnapshot instead of screenshot.
4d231cbe23b : [RONs] Change default value of canPostPromotedNotifications to true
05c843d925d : Ignore system annotation test
7b104a98147 : CTS test for getExecutableMethodFileOffsets
34f1c68a9fa : bt(snippet): Add methods to OOB bond to the snippet.
aea46c97a49 : Consider activity window location on screen.
f9332da54a5 : Disable a couple more CTS tests for MUMD background users
41687b48e1e : Add CTS tests for Vibration.BasicEnvelopeBuilder
1ccb3121bc8 : RESTRICT AUTOMERGE: Resolve a bad merge into OneTimePermissionTest#exitApp
5a20c9f06fd : [cts] delete verification verifier tests
8a845ce202e : RESTRICT AUTOMERGE [AAPM] Move security flags to static lib
024a8287be2 : Small adjustments to log list related CT verification tests for SCT-providing domain
550ded7ad9d : Camera CTS: Update testTurnOnTorchWithStrength
8a800e208ce : Adds arthuri@google.com to CTS sensors OWNERS
bb812ff85db : Revert^2 "CTS for IBinder.addFrozenStateChangeCallback"
39eee7eb13e : Ignore VirtualCameraTest on unsupported hardware
eb7e0c53f4c : Ignore system annotation test
6137210ff2e : Ignore VirtualCameraTest on unsupported hardware
69821b22519 : Increase polling time
3d6690dabb2 : Move OptOutEdgeToEdge tests to WindowPolicyTestsSdk35
44899aecdcc : Stop using windowOptOutEdgeToEdgeEnforcement in CtsViewTestCases
6fc79f902f9 : Change regex for finding media.
18e9158eef4 : VDM Flag cleanup in CTS
e16b1d82dc5 : mediav2 CTS: Adaptively disable code between cts and mts runs
f28a18fea60 : Log lack of StrongBox support and rename variable to follow best practices.
994fb5b72a8 : mediapc CTS: update to CtsMediaPerformanceClassTestCases-3.3.zip
6ee99d5b609 : [PM] Don't run the test cases in a profile
e07f53446e3 : [PM] Don't run the test cases in a profile
8096e644228 : Check the correct displayId in testTurnScreenOnActivity
285cc6a63fc : Add tests for MediaProjection stopping behavior
d292e2855d2 : Revert "CTS for IBinder.addFrozenStateChangeCallback"
eab8ff06ca7 : Skip StylusButtonInputEventTest for background user
5dc837c8aa7 : Skip Ringtone vibration tests on devices without a vibrator.
26f27453290 : Make A11 key tests multi-user aware
8d09575a535 : Migrate onBackPressed to onBackInvoked callback
93ab53c8af4 : Add CTS test for getCarrierIdFromIdentifier
794010b8d24 : [ID] Replace "forensic" with "intrusion detection"
69ee737a869 : Add package name exceptions for PackageSignatureTest
2ad618d0982 : Add CheckFlagsRule to QAWClient test to ensure RequiresFlagsEnabled annotation works.
7d4dd9148b0 : Exclude automotive form factor for test
8feba16482a : Add CTS for converting java hint sessions to native hint sessions
2b00552c385 : Add case for SysUI toast with Gravity.NO_GRAVITY.
e76bfc4e759 : DO NOT MERGE: Mark GameFrameRateTest#testGameModeBackpressure as Ignore
aabe8c5a324 : Camera: Enable complete HEIF UltraHDR verification
065901c44f8 : Address API Review comments: MediaCas, Tuner and TunerResourceManager: CTS
0093dbc5fc2 : Add CTS tests for requestSelectedNbIotSatelliteSubscriptionId
f22de942c60 : [ITS 25Q2 Refactoring]: Improve error message for test_flash_strength
25e7caff148 : add tests for input child runtime effects
6bb5f747642 : Test for Wi-Fi Direct R2 USD and pairing APIs
4d695a1a4ee : CTS: Skip Volume change test in HdmiCec tests on volume fixed devices
85428988ad2 : Check num_configs from eglChooseConfig
3e2c4cf244e : [AAPM] Move security flags to static lib
6af2c71269d : RESTRICT AUTOMERGE Adjust CTS tests for areAutomaticZenRulesUserManaged()
7ad78d83933 : Add test for NfcAdapter state change event
f27f57b36b4 : Skip ClusterHomeManagerTest on visible background users
0c24542c758 : ITS: Move function to utils file
e92d2e9676c : Skip TextureViewCameraTest for secondary_user_on_secondary_display
1386a5f9552 : CTSV: add route to WAV path, delete old files
5ba3f48af13 : Filter out NFC HCE multidevice test from CTS-V app if device is missing NFC feature
310addbf189 : Disable hidden API check tests.
5b25784d5f0 : Disable hidden API check tests.
c6267a7eba5 : Update BuildTest.getBackportedFixStatus_alwaysFixed
6b8d6278c9f : Fix timestamp test to work with cuttlefish and Pixel 9
3eafb403be8 : Add flag conditions for Cts tests
45940250617 : Fix: Flaky input tests due to residual locked modifier state
57f40abdf7f : Add explicit dependency on flags library
dcf9265009c : Use DisplayId when verifying top task in MultiDisplay
360e6516e54 : Change camera compat freeform opt-in override name.
e5a45bc2a9a : [AAPM] Rename permission
e85caa11749 : Rename waitAndAssertResumedAndFocusedActivity to waitAndAssertResumedAndFocusedActivityOnDisplay
7b248be8098 : Add tests to check when domain does not provide SCT (connection fails if log list present, otherwise fails open and succeeds).
9c08313b752 : Remove test failing after bug fix
f5c73d8da9a : Test for per-display WM brightness override
ceaba0e5c16 : Adding CTS tests for WaveformEnvelopeBuilder
9522ba1a131 : CTS for size compat restart button clicked
6d14a962a6b : Add explicit dependency on flags library
3ec44b8ae28 : Using adb shell commands to toggle display state
62a052d2b57 : Verify the behavior of universal resizable
91fd3806c09 : Remove SizeCompatRestartButtonStatsTests from cts-on-gsi
d77eac26ed0 : Fix flakiness in ActivityLifecycleTopResumedStateTests.
241c929c211 : Fix transition test flakiness due to not capturing the transition running state.
64c0dee860f : Skip ClusterHomeManagerTest on visible background users
c9968e31d03 : accessibility: adapt additional test to small screen
f4169b9f371 : [PM] Add PackageInstaller CUJ test case (39/N)
49343ab68bb : Add CheckFlagsRule for DisplayLutsTest
13e5a05918e : Fix WifiRttTest#testRangingResultBuilder
34fef282ff5 : Fix flakiness in ActivityLifecycleTopResumedStateTests.
c05e99a4ec7 : Add cts test for #isAlwaysOnDisplayCurrentlyAvailable()
a4d8c3dca43 : Add a test to verify priorities are considered within a process.
8cc23ec98ae : ITS: Move function to utils file
67753b41abb : Add CTS test for directed advertising APIs
dffd9a9275c : Add test for checking RemoteViews URI
8c9d014f10b : Add XR feature to FeatureUtil
2c9a0f4db8b : Add tests for multiple output device ids
72dd95747cd : [cts] Add cts tests for commitRouting and extractOemPackages
d2c16f3f44b : Add CTS tests for resolving dependencies when needed
fd833ffe99e : Remove dimension test from CtsAssistTestCases#testAssistantContentView
ea7dc37ea7c : Fix failed CtsAutoFillServiceTestCases for secondary_user_on_secondary_display
cada16647f9 : ITS: Add loggings for test_multi_camera_alignment
c634f583c04 : [ITS] Reuse camera capture session in test_yuv_jpeg_capture_sameness.py
6c36ba238ea : Remove 6 non-compatible tests from TunerTest
3161772747b : Add CTS test for getChannelSoundingSupportedSecurityLevels
2e78e17018a : Skip CameraCts with virtual camera on automotive
3b8c954dcb4 : Skip CameraCts with virtual camera on automotive
9c8f5569e04 : Revert^2 "Test InputMethodService#onShouldVerifyKeyEvent"
91a58873f1f : Support secondary_user_on_secondary_display on CtsPackageUninstallTestCases
585f8249c6a : TEST_MAPPING: re-enable MediaProjectionGlobalTest#testCreateVirtualDisplay
6e6f29483a4 : Add OWNERS for MediaProjectionTests
36682cb1f1c : Remove secure_window_state flag
61ca4c4476f : Add CTS for vibration primitive delay type in XMLs
86a7cf6536d : Disable dependency installer for existing tests
2f12d854c00 : Scroll MediProjection consent Dialog in tests
836f42f7db5 : Add test for disabling dependency installer shell command
0d673632dd2 : Skip SMS-related test cases if FEATURE_TELEPHONY_MESSAGING is not supported
e087064ea94 : Make sure testRenameAndReplaceFile is cleaning up created files
5350954174a : Update VCN CTS to allow unset nat timeout
0f378f59e48 : Fix CtsMatchFlagTestCases for automotive
e0a0c056df0 : Allow OEMs to change max shortcut count
6f7f8fa3242 : Check for testng's @Test methods in the entire class hierarchy.
6bb2957c31f : Add check for internet connection before running netsecpolicy CTS tests
962f38ff6dc : Update CTS test for Android Security b/304772709
98a73976108 : Add IllegalArgumentException condition to a tc
597d99c6543 : Ignore VirtualCameraTest on unsupported hardware
cec0d3aae74 : Disable GPP UI for CtsStatsdAtomHostTestCases
2112cf79bb2 : Add a CheckFlagsRule to mandate that a flag is required.
7ea2f65418b : Add a CheckFlagsRule to mandate that a flag is required.
f06b7fa0d68 : nfc(cts): Add cts tests for isTagIntentAllowed
53ffc711031 : Adding test to verify the default dim area of embedded TF
0764ae43aa4 : Add CTS test for new checkOp APIs
b28fb5e0cee : Add definition test for audio hardening ops
585a85f6d71 : Include the "exported" flag lib in VCN CTS
b96a994243f : Take display cutout into account for system insets
f3fe4720007 : Make sure screen is turned back on after test
7caf2838d2d : Fix flaky getPreviousForegroundUser_correctAfterReboot test.
103fd66519b : Revert "Test InputMethodService#onShouldVerifyKeyEvent"
77ccf1b41b6 : Add CTS for new transparency APIs
ad8e9b3ebdb : Exclude CtsStatsdAtomHostTestCases foldable CONCURRENT_OUTER_DEFAULT
eb80daeb890 : Filter out NFC HCE multidevice test from CTS-V app if device is missing NFC feature
d4859df7efb : NoWritingToolsSpan test
93fa7a63b73 : Add a CTS test for ContactsContract.Contacts.IN_VISIBLE_GROUP
4eb1f6b919b : Update CTS tests for the ImsCallSessionListener
678938b28fd : Avoid show-when-lock activity relaunch
ee90da76b3a : Add scroll to show POSITIVE/NEGATIVE button on smaller displays
16002bfc828 : Add OPSTR_WRITE_SYSTEM_PREFERENCES to AppOpDefinitionTest
e93c56bf563 : Ignore ContactsStorageSettingsTest, currently there are two activities handling the SET_DEFAULT_ACCOUNT.
567db576790 : VirtualDisplayTest fix for VDM behaviour change around trusted displays
dc3f1a2d52a : CTS for SubscriptionPlan status, end date
027e88feee7 : Add a test for AudioDeviceInfo.getSpeakerLayoutChannelMask()
a09607201a6 : Add cts test for PASN comeback cookie
99e7eae334c : Add AccountMigrationTest CTS
8eafb2acd62 : Update filter, additional cleanup and regex.
cba420836bc : Rename Ext to Extension
dde903b094e : Skip StylusButtonInputEventTest for background user
ac40e5b7b7b : Update cts-input-lib for the new UinputRecordingIntegrationTests
232a841701b : Revert^2 "Implement MultiUserMultiDisplayPolicyTest"
b44948de66b : Make A11 key tests multi-user aware
ac53aee926e : Fix ActivityManagerProcessStateTest for secondary_user_on_secondary_display
10ed1611523 : Clear keyguard disabled state after test
33c9036a7df : Remove @RequiresFlagsEnabled annotations for released flags
b0ae8aa2e84 : Introduce CTS for VendorVibrationSession.vibrate
f0efe2a3c0b : Test InputMethodService#onShouldVerifyKeyEvent
3863678a724 : [cts] Rename oemCallback onEnable/disable
43e15acb358 : Skip SMS-related test cases if FEATURE_TELEPHONY_MESSAGING is not supported
1cbe23e88f2 : CTS tests for INSTALL_DEPENDENCY_SHARED_LIBRARIES permission
f23b2ef5ea5 : Disable secondary_user_on_secondary_display for CtsMediaBetterTogetherTestCases
bac0cefb6d9 : Add additional values to the ReportLog for the Audio Loopback Latency test.
5bb63d4ca1d : Replace multiuser with run_on_work_profile
aaa385d368d : Remove CtsMediaBetterTogetherTestCases samples zip
30ecfcf6451 : [AAPM] CTS tests for Disallow Install Unknown Sources
e0e00a89bd3 : Create new flag for favorites API
8c01f1b2b71 : Added CTS for display event property change
84d08380490 : Don't run the test with two crops for square display
213cd5d8775 : Add cts test for VibratorEnvelopeEffectInfo
3c4e6fd6235 : Add some tolerance to CTS crop tests
86f4e9d9dfd : Generalize signatures of helper functions with entropy checks.
0c200127e03 : Run code formatter on KeyAttestationTest CTS test file.
4ee58ec7368 : Fix VendorVibrationSessionTest cleanup
457a3995f49 : Revert "Implement MultiUserMultiDisplayPolicyTest"
a08f61894fa : [CarrierLock] CTS fix to support carrierlock Old API to read the carrier lock info.
ee967e8dd93 : Remove CTS channel_sounding flag
13266035268 : [Forensic] CTS for ForensicManager
54932f4b526 : add tests for new RuntimeXfermode APIs
d5ce484ef63 : Test: Cleanup all permissions in setup
b5d8cf79f21 : Add case for SysUI toast with Gravity.NO_GRAVITY.
abf279cd3ba : Add multi-client support in camera2
f2006ef924b : ITS: test_preview_distortion: Not yet mandated failure for Preview recording issues
fbeba31f20d : Add test for standard extension frontend status in the right place
6ea2d53e927 : Add missing CTS tests for ASurfaceControl NDK
bf4585d70e0 : Add null check before checking value
0f1b0ddb5a4 : Skip a test if config_hsumBootStrategy isn't 0
2e5d03c55ed : Add test case for NPE for null family name
7784b5d69e1 : Support secondary_user_on_secondary_display in CtsHibernationTestCases
6de14231bdb : [cts] Use mock of INfcAdapter to test firmwareUpdate.
357bc186b55 : Add cts test for PressureStallInformation atom.
f96c229e103 : Add SDK test for CPU/GPU headroom APIs
5d11427a16d : Support secondary_user_on_secondary_display on CtsPackageUninstallTestCases
29d6c574259 : Implement MultiUserMultiDisplayPolicyTest
027e53523fe : Adds CTS Test for QuickAccessWalletClient.getGestureTargetActivityPendingIntent
80b7e1f845e : Stop depending on RavenwoodRule/RavenwoodConfig
6a49b363cd4 : Rename testProxyConnectDevice_noCameraProxyPermission_checkFullAttributionSourceChain_off
8ddc14b3fa7 : Support secondary_user_on_secondary_display on CtsPackageInstallAppOpDeniedTestCases
bcfaa068699 : [Lut API] add corresponding cts.
87774cc88dc : Skip telecom focus test for voip app
f7967fe75a8 : Support secondary_user_on_secondary_display in CtsHibernationTestCases
59cd0ade304 : Remove mitchp from OWNERS files
cb18cc7a268 : HDMI: Modify address in active source messages in CTS
d7b38c875e3 : Add a CTS test for secure HE-LTF version capab
2aab155b982 : wifi: add cts for testing new api about AAPM wifi feature list
9c7dad014f2 : Support secondary_user_on_secondary_display on CtsPackageInstallAppOpDeniedTestCases
b26a4f7f3a8 : Add a list of approved backported fixes
d117fdbde9b : Fix failed CarActivityManagerTest for secondary_user_on_secondary_display
abb4ccb4b55 : nfc(cts): Add API to return default NFC SubscriptionId
9509bdad25d : Implement MultiUserMultiDisplayPolicyTest
1eb8826f75c : Fix ActivityManagerProcessStateTest for secondary_user_on_secondary_display
46bdfe544f7 : Revert "Revert "Add public ADPF load hints with better rate limi..."
25c7fe64750 : Add CTS tests for DPM.setAutoTimePolicy.
8024572e85d : Revert "Add public ADPF load hints with better rate limiter and ..."
51b6da5e2a5 : Update OWNERS for DisplayLuts file
615fc05041d : VideoCodecClaimsPerformanceTest: Remove 720p and 1080p tests for VP8
49b04783f42 : [MTE] add CTS test for globals mte
fb51e129528 : Revert^4 "Add test for Custom IME Switcher button visibility"
83d5dcbc4ae : Update secondary lockscreen tests.
d710f8d820c : [AAPM] CTS tests for support dialog
6d769ea7328 : rename package name of AconfigPackage to android.os.flagging
46cfc81401d : Add shayba to RemoteCallbackListTest OWNERS
a0729cf6be5 : CTS tests for CloudMediaProvider Search APIs
c10dbb5fb27 : Add CTS tests for DPM.setAutoTimeZonePolicy.
29f689ad777 : Followup to ag/28612330, we should not use internal android namespace in CTS tests
77868d631c0 : le_audio Add CTS tests for new API broadcast to unicast fallback group
69b0836d768 : Check that size is changed if display has a maximum width
f747a24d4bc : Disable Sony Dualshock 4 Touchpad tests on Android 15
712508163b4 : mediacuj CTS: Re-workout the timeouts of CtsMediaCUJTests
d6e7f07fc82 : Allowlisted T-Mobile apps to access TM#getCarrierRestrictionStatus.
031a4166efd : Remove elegant text height expectation
e232067e0dd : mediapc: update FilmGrainValidationTest as per CDD 5.1/H-1-14
9512ca72ed2 : Extract associateWith out of UinputTouchDevice
fa472480f3b : [Ranging] Add support for new ranging permission
7a224ace71b : Enable secondary_user_on_secondary_display on CtsInputTestCases
efe91b5a52a : Fix for testOneDatasetAndSave test issue.
52b52681482 : VideoCodecClaimsPerformanceTest: Remove 720p and 1080p tests for VP8
64f4e9aeb0d : Add test case for applications with the same user ID
efd55eec0d3 : Add API tests for Z-order in TV views
ad556db125f : Fix failed createControlledRemoteCarTaskView_startsTheTask
de462e28182 : Add removeManagedProfile CTS tests
170ada09eb9 : Fix failed CarTaskViewControllerTest for secondary_user_on_secondary_display
fd5c3dabf0f : Remove string matching when checking for security exception.
6eb3c4c92fd : Update filter and remove duplicates
55d5b287332 : Annotate native ASurfaceControl CTS tests
bb31d99d599 : Set AppOps settle time to zero when testing mid-stream blocking.
c9f25fc35d8 : Revert "Revert^2 "Add test for Custom IME Switcher button visibi..."
5d6dbb6cc2c : [cts] Add cts test for setServiceEnabledForCategoryOther.
23c8440c471 : [cts] Add cts test for onLaunchRoutingTableFullDialog
6c5c2ae28fb : Fix version control for APV tests
e7ec838e70a : Wait for keyguard showing before entering credentials
37f36d470f2 : Add CTS for the new getPendingJobReasonsHistory() API.
e9f114769f3 : Print windows when injection fails in SeekBarTest
26d008b9667 : TEST_MAPPING: Temporary remove PowerManagerTest#testPowerManager_batteryDischargePredition from presubmit
c3c25ff7e39 : CTS test isolation for account args tests
7d06fd27ffd : Update ExpeditedJobTest capability checking
5cf04903b01 : Add public ADPF load hints with better rate limiter and hint batching
7128f3eb143 : Remove commented out api_levels_... properties
deff046c979 : Unit test to match SessionInfo and SessionParams
b6253508c64 : CTS tests for failure flow when installer is enabled/disabled
e3e6bdcb43a : Skip Bluetooth related tests on visible background users in CtsCarTestCases
f7df910a55d : Bedstead: add adb fallback for checking if the user has a lock credential
bc0de0a2d71 : Add CTS test for quick swipe back while the IME is shown
fe88a0a32dc : CTS tests to verify failure flow for Dependency Installer
bdd5f57805a : Enable large heap for Keystore cts.
2d8b9b93c8c : Add new test to detect that the log list related CT verification is working as expected for SCT-providing domains.
079eca87322 : Revert^2 "[cts] Add cts tests for maybeTriggerFirmwareUpdate."
5a446234002 : Bugfix for annotation
aee091a5330 : Extract associateWith out of UinputTouchDevice
5940043f108 : Add test for setting inactive media foreground service.
76efcc4695d : CTS for SusbcriptionPlan status, end date
8507322dda0 : Verify get not available when hvac power off
03f526fcca0 : Camera: Add tests for AE priority mode
bb982dcf0b0 : Revert^2 "Add test for Custom IME Switcher button visibility"
b30e3c163e2 : Add tests for rendering ISO gainmap metadata
130cd9d1697 : Change Typeface redesign flag to readonly
c7a009fce29 : Ignore sleep tests on automotive with visible background users
20bf847a8fd : Add smoke test that decoding a gainmap doesn't introduce any borders
0855e969c50 : Move TestExtensionAtomReported & TestRawAtomReported to statsd cts
09d3b522b0c : Require flag enabled for BackgroundActivityLaunch
1bba046fb86 : 25Q2_ITS: increase test_imu_drift time to 120s
1bacd195ca5 : Adding new tests for Accessory Stream APIs
8852332bd2d : RESTRICT AUTOMERGE: [ITS] Ignore TELE captures for scene6 tests in WFoV/RFoV rigs
dc668dc16f4 : Remove flag NETWORK_BLOCKED_FOR_TOP_SLEEPING_AND_ABOVE
16ea8c75310 : Cleanup usage of use_permission_manager_for_broadcast_delivery_check.
1a3eec543ff : Replace use_context_attribution_source / check_full_attribution_source_chain flags with read-only flag
7ed7cee2ded : Bedstead: add SKIP failureMode to RequireFactoryResetProtectionPolicySupported
1db4c239edc : Run multi-device tests with Cuttlefish
c04e02ab9fe : Maintain a set of unflagged colorspace ids in ColorSpaceTest
eff3986ea98 : Add userId to notification shell command for CtsJobSchedulerTestCases
bc4459306ec : Adapt test app layout to smaller screens
ebf5f2cebf8 : Change VerifierContext to static class
8fadbeefb9a : Fix floating point error in valueInRange check
16c72ebef7c : Update HVAC error messages to be more clear
04c0f5789e1 : Change verifySetNotAvailable to also wait for onErrorEvent
45c0b2fdcf0 : Add test for trigger NPE on Fill Dialog
48481a27d24 : Add CTS for setStreamWithCrops and setBitmapWithCrops
a473c24954c : Add permission for visible background user to get WiFi SSID in CtsJobSchedulerTestCases
12d0f1025e5 : Increase timeout when waiting for block from sensor privacy.
613ef0603b2 : RESTRICT AUTOMERGE Adjust CTS tests for areAutomaticZenRulesUserManaged()
e07f5cf1366 : Add userId to notification shell command for CtsJobSchedulerTestCases
0d0166d03e5 : Fix testKeepScreenOnCrossProcess latches
bdfc22f94a7 : Use uinput-based injection in FlagSlipperyTest
c26e447802b : Add APV Codec tests
c97c13f34d1 : Add getType test on a redacted uri
f91629a261a : Add permission for visible background user to get WiFi SSID in CtsJobSchedulerTestCases
db9c78cd1d2 : Update StreamedAppBehaviorTest due to flag removal
57a065ed99a : [TimeSync][Android] Remove check preventing NetworkTimeUpdateServiceSntpTest from running
fa54bfd8fa5 : Camera: Add test for CONTROL_ZOOM_METHOD control
bc5997a7a67 : Add CTS Tests for SettingsPreferenceServiceClient
2b38b107117 : [RESTRICT AUTOMERGE] Relax the assumption that config only changes once
375b0ffe897 : Remove flag NETWORK_BLOCKED_FOR_TOP_SLEEPING_AND_ABOVE
92bc98c76be : Increase time to allow OEM default settings to complete
adac991ff9d : Check for tuner feature before MediaCas tuner tests
13564b21605 : ITS: Delete test_num_faces in scene2_b, 2_f
c8edeb9caf2 : Close resources in cts/tests/app tests
f85369a7bde : Update CTS for API change in executeAppFunction
c2303da3835 : Skip VirtualDeviceMirrorDisplayTest of CtsHardwareTestCases for visible background users
707946fc047 : Skipping tests in KeyGeneratorTest for TV
f8c0c4ad1f2 : CTS: Set system audio mode to ensure it is enabled
e150bb79c3a : Updates to mumd passenger cts subplan
9ce3b2baba6 : Revert "[cts] Add cts tests for maybeTriggerFirmwareUpdate."
2fac50771b2 : Camera: Configure HEIF UltraHDR dataspace explicitly
5483c2f2530 : Fix testKeepScreenOnCrossProcess latches
01a1bfda010 : Add CTS test for onCarrierRoamingNtnSignalStrengthChanged callback.
18848ee8c86 : Fix testKeepScreenOnCrossProcess latches
e9bbab2fc3d : Temporary disable the API call
07ae65478ff : Disabling refresh rate tests for concurrent displays
e4ea1640ed2 : Add tests for associated role services in NFC
c354fdea56c : Enable secondary_user_on_secondary_display on CtsGraphicsTestCases
c325193a6dd : nfc(cts): Add a callback for onEeUpdated
809c02c1397 : Skip VirtualDeviceMirrorDisplayTest of CtsHardwareTestCases for visible background users
89c061cacd5 : Maintain a set of unflagged colorspace ids in ColorSpaceTest
e5077dde3c0 : CP2 Move API CTS isolation
b10046858ad : Enhance ImsService related tests to ensure that the correct user is bound
04a66dd335e : CTS test for BluetoothSocketSettings
6448b1817db : Enable secondary_user_on_secondary_display on CtsGraphicsTestCases
ffe209d2dbe : Add ScrollView to SSID Restriction page
5dbc19bc80c : Update expectations for a couple of jdwp cts tests
8a7c8865ee2 : change test for AconfigStorageReadException
dcee7172081 : Skip CameraCts with virtual camera on automotive
f02e96b1c50 : Verify errorcodes starting from Android U
5d2ee0b1313 : Remove mcts tags since cts-dalvik-host-test-runner is host required by other CTS test cases
8bb624460b9 : Remove IntegrityCheckStatsTests.java
d40a1bccec1 : [cts] rename appfunction sidecar jar
6c6f579f561 : CTSV: save WAV file for failed Data Paths
b01beb01cb7 : Update CTS test for new 25q2 keycodes
19e75c96713 : Check that size is changed if display has a maximum width
850742c8aea : Check for multi-window mode in AppTaskTests and ActivityManagerTest.
c24286bdb20 : Check for multi-window mode in AppTaskTests and ActivityManagerTest.
5f66bc781b6 : Re-enable AVF tests on CTS-on-GSI.
e19f2b11e06 : Night Mode Indicator
f765301f883 : [MQ CTS] Add cts module for MediaQuality
4a20db46676 : CTS tests for TV_IMPLICIT_ENTER_PIP permission
4260f403205 : Remove the @SdkSuppress and TODO from the Keystore migration test.
271eada49f9 : CTS: Exclude TV devices from AnalogHeadsetAudioTest
a3351738512 : ITS: test_preview_zoom: Make test mandated for only devices with first API level 15 or later
691aa886e2b : Add and update tests in TvInputManagerTest for TvInputServiceExtensionManager
b3ca1d490ea : Use the current user to check appops against
6b6bee282fd : ITS: Add test_preview_num_faces.py in scene2_d and scene2_f
274558d1681 : mediav2: Add CTS test for KEY_NUM_SLOTS
33fd19babbf : Revert "RESTRICT AUTOMERGE Adds tests around addSearchEmbeddings and related to known failures am: f2c8b6bbd0"
f95e371b003 : Separate write API test to multiple steps.
e48fd9d2e4b : Separate to multiple test steps.
3f52247bfdd : [le audio] Add CTS tests for new API broadcast to unicast group changed callback
1745dcd9366 : Adding one more atom to track during the boot
556c8153137 : Add CTS tests for Settings Preference Service
514bc773850 : CodecInfoTest: Use cts/mcts mode for codec selection
f99a8eb5269 : Reflect the disappearance of the BATTERY_USAGE_STATS_SINCE_RESET_USING_POWER_PROFILE_MODEL atom
f0d68f85c53 : [MTE] add CTS for stack MTE
296f100dc6b : Revert^2 "Relax some ActivityManagerProcessStateTests expects with..."
5bd447bb9a2 : AppOps: Add new ReadOxygenSaturation AppOp to tests.
ce561d62d97 : CTS test for deprecate AccessibilityNodeInfo#setLabeledBy
e978b3fe1f5 : Additional explanatory text to USB Headset Adapter Warning Dialog.
53e7b6cc9c3 : Revert "Skip adopting permissions that does not exist."
4485d085021 : ITS: test_param_color_correction.py use stationary_lens_capture
3019c6da577 : Stabilize EditTextTest#testSuspendAndResumeBlinkingCursor
5c230ad3bb4 : Revert "Relax some ActivityManagerProcessStateTests expects with..."
41a02891047 : Add ScrollView to SSID Restriction page
0ae4b96f0ab : Check that size is changed if display has a maximum width
41e4b4f81a7 : ITS: test_param_color_correction.py use stationary_lens_capture
43fd4fa0bff : Add tests for VibrationEffect.Composition delay type
5b147d410df : Update CTS-system-virtual exclusions
b91b90a63cd : Temporary disable ExplicitHealthCheckServiceTest
052ac762098 : Fix PackageManagerTest test failure.
e7adfc58920 : CTS tests for DPM.createManagedProfile and DPM.finalizeCreateManagedProfile
fc7b56bb228 : CTS test for Android Security b/305695605
eb6ceba5bc3 : Check for multi-window mode in AppTaskTests and ActivityManagerTest.
f271d883eec : Fixing DisplayTest for VRR display
c8838215366 : Trigger the battery_changed broadcast before getting its extra.
556e9226fc1 : Support secondary_user_on_secondary_display on CtsPreferenceTestCases
e93c80b70df : [ITS]Remove colour-science check in env setup.
c0b2f5ac1d6 : [A11y] Add DATA_TEXT_CHARACTER_LOCATION_IN_WINDOW_KEY CTS tests
3c4e1d20dca : Improve dream key handling tests consistency.
821fae9425b : Stabilize EditTextTest#testSuspendAndResumeBlinkingCursor
57d2694716e : ITS: Enable face detection in preview recording
72d9b05ebc5 : [ITS][25Q2] Replace sorted()[-1] with max.
c9181220101 : Fix initial value issue for adas property.
fbf018dbd4f : Use parameterized test for individual properties.
ad63e5ae28f : CodecInfoTest: Use cts/mcts mode for codec selection
f2c8b6bbd0a : RESTRICT AUTOMERGE Adds tests around addSearchEmbeddings and related to known failures
02f22aff6c4 : CTS for IBinder.addFrozenStateChangeCallback
79810e94842 : Revert "Add test for Custom IME Switcher button visibility"
45d6ea94120 : mediav2: Update to CtsMediaV2TestCases-5.1
e8cf4a26e9c : Create CTS API test for EditorInfo#AutofillId.
768412ea1b1 : Support secondary_user_on_secondary_display on CtsPreferenceTestCases
73bbbf90f90 : Add muxer test for APV codec
17cf94e7239 : Add another printing owner
517b59ef2fa : fix color filter tests running when Flag is disabled
0acbc319a5e : Add extractor test for APV codec
fd96b7156a2 : CTS for VirtualDisplay API for brightness callback and default.
68100413881 : Use separate flag for suspend packages coexistence
4cee9578b71 : Revert^2 "Adjust CTS tests for FRRO self targeting changes"
fd9dd15f467 : Check Verified Boot and bootloader state if locked not required.
00186ef4ee7 : decompose android.media.misc.cts.PresentationSyncTest#testThroughput
33d4d4b7216 : media CTS: update to CtsMediaExtractorTestCases-2.3.zip
3a31c8bc414 : media CTS: update to CtsMediaMuxerTestCases-2.3.zip
59fd82d0690 : Clean up group volume adjustment disablement flag
ec2599317b3 : [AAPM] Move security flags to static lib
df80293bd01 : Remove DebugInputRule for passing tests
f0b3d42e1ca : Log exceptions instead of throwing them.
b64f8e12bb1 : disable CrashGetDescriptor
0282fcd1a31 : BluetoothLeBroadcastTest: Fix StartBroadcast tests
c21481f0035 : Add CTS for the new getPendingJobReasons() API.
a393c6315b8 : [cts] Add cts test for onLogEventNotified.
8ccee952253 : wifi: add cts for testing new api about mld MAC address
3d32b242481 : Revert "Adding test to verify the default dim area of embedded TF"
30b5ec15740 : Revert^2 "Add tests validating image decoding metrics"
6b0ffc257f2 : Bad uid now returns MODE_IGNORED
20558880622 : Relax some ActivityManagerProcessStateTests expects with waitFors
6816c9f3280 : Fix failed CtsSharesheetTestCases for secondary_user_on_secondary_display
1ddf0ef7c9d : Add OemExtension callbacks support
b3c2f08434d : Fix failed CtsSharesheetTestCases for secondary_user_on_secondary_display
f6171919e2f : fix(non linear font scaling): fix CTS test to use proper conversion
5e8e66e39db : Fix the user information for CtsJobSchedulerTestCases
e341d1ee0ee : add cts test for new api
27a87c0ea10 : Add test case for vertical layout
7fd310fa435 : Fix the user information for CtsJobSchedulerTestCases
3c79dc5cd7c : Add another printing owner
9aa3aefabe7 : Removes usages of streams from tests
93363c79cf9 : Validate CAPABILITY_SUPPORTS_TRANSACTIONAL_OPERATIONS restrictions
830b8aaa09b : Skip adopting permissions that does not exist.
dd7e4292a14 : Enable secondary_user_on_secondary_display on CtsSensorTestCases
9826fff6b72 : Add android.cts.statsdatom.perf.UprobeStatsTest
73a8e947a40 : [Ranging] Add support for new ranging permission
68faf1a93d9 : Implement test for device lock state listener
574a2778e66 : Add CTS test for generated preview persistence
38f9603b628 : Add cts test coverage for apduservice pl filter additions.
ac35d0a8d4c : Fix misspelling in Multichannel mixdown test.
3a011344959 : Revert "Add tests validating image decoding metrics"
583dd17bccc : Fix flakes and failures
703300bfac9 : Re-enables A11y PIP window CTS tests
a25828f33be : [ITS][25Q2] Use CaptureResult zoom ratio in scene6 tests
d170894ee50 : Camera: Fix inconsistent assert message
bb2d46f9af9 : CTS test for VirtualDeviceParams API for host camera streaming
3f4948fbfa8 : Skipping test on GSI + first_api_level < U
14243dcb29f : Keystore: suggest vendor properties for device IDs
e92b23f1e78 : Keystore: coalesce device ID failure code
4ba87344957 : Add @Ensure annotations to some tests in UserManagerTest
3b2d6e4b662 : Make the insets to be ignored symmetrical horizontally
d5c81028cfb : Disabling refresh rate tests for concurrent displays
74eaf441626 : Adding test to verify the default dim area of embedded TF
4b3403c7fce : Changes to OWNERS files for USB.
dd058cd491c : Mark media as favorite API
53ceec2bcd2 : Fix AID for perfetto trigger atom test
a60dd6fdac7 : Reland "Fix perfetto trigger test"
a66714f5f20 : Enable secondary_user_on_secondary_display for CtsKeystoreWycheproofTestCases
22a7d9c0bdd : CTS test for new metadata API for microphone for call
acc7cddaaf3 : Test for P2P Compatibility Mode APIs
361234274c1 : Allow for KeyMint v4 in CTS tests
ca039e2fb2b : Enable secondary_user_on_secondary_display for CtsDownloadManagerApi28
b7262b98604 : [ITS][25Q2][Zoom] increase radius tolerance to 0.15
b0017ef1076 : [Accessibility API] CTS Tests for required API
37aa880095d : Revert^2 "AudioAttributesTest: test new system usage"
5da3f3fdaf4 : Make APC tests backwards compatible
b4b99613034 : [ITS][25Q2] Add STATISTICS_FACES to RecordingResult
43ecd4328fd : Revert back to the initial user and remove test users in CarUserManagerHostTest
68841012202 : Update test methods in CtsWindowInfoUtils
b3208b856e3 : Update javadoc in CtsWindowInfoUtils
3fe3c261ad1 : 25Q2_ITS: test_metadata handle fixed-focus cameras
fd969f029e6 : Remove AudioHalVersionInfoTest
1097eb5d941 : Remove AudioHalVersionInfoTest
7cd109ce756 : [Satellite] CTS test case added to verify the default datagram value.
8345d35bbce : Add CTS test to verify bounds are set for ViewStructure.
657be410550 : Introduce CTS for VendorVibrationSession
d64ac43f06f : Disable fingerprint admin policy test
ef7b018cd55 : Remove AudioHalVersionInfoTest from cts-platinum-*-normal.xml
1ad767d4cb9 : Add cts tests for LauncherUserInfo config and UserConfigChange callback
ffd2102506f : Add notification when satellite availability changes
1a746dfc7f8 : Camera: Add Heic UltraHDR ImageReader test
75da691d559 : Remove AudioHalVersionInfoTest
3999d10335b : mediapc CTS: update to CtsMediaPerformanceClassTestCases-3.2.zip
e4302e74eb6 : Replace multiuser with run_on_work_profile
3e2af54623b : Fix for flaky testGetDocumentUri_throwsWithoutPermission test
a0822ba866e : Add CTS test for update_provider SELinux attribute
0a988413fc8 : Refactor Verified Boot hash checks.
c599ed16920 : Add CTS tests for startWaveformEnvelope with initial frequency
65bc02fcb8b : Add test for override non-default close activity callbacks.
b68727bacab : AppOps: Add new ReadSkinTemp AppOp to tests.
843c2c0b435 : CTS tests for indeterminate RangeInfos
51156d4e80d : Initial cts-on-gsi-on-u.xml
1fbce9ef012 : Initial cts-on-gsi-on-v.xml
934ebaf3231 : Enable secondary_user_on_secondary_display for CtsKeystoreWycheproofTestCases
6dd7ff27288 : [ITS] Add Xiaomi Redmi Pad SE to low light utils
4fad32456d3 : [ITS] Add Xiaomi Redmi Pad SE to low light utils - patch
fdc40d94df2 : Add test case for long text case
e4a6e71b46a : CtsWidgetTestCases:operate: A patch for MediaControllerTest to fix a failure in testOnBackInvokedCallback()
661163f7fac : Add test verifying mic capability change
411a0b55b5d : AudioRecordPermission test remove appop cooldowns
45b5781ffd0 : Fix fixed source SetFrameRateTest expectation
a907854be9b : Fix fixed source SetFrameRateTest expectation
caed960df23 : Bluetooth: CTS Tests for HCI vendor-specific commands/events API's
806f322917b : [ITS] Add Xiaomi Redmi Pad SE to low light utils - patch
e9c2b2de9e0 : wifi: add cts for testing new api about ap isolation
f3399f18b47 : Move polling frame count check earlier in the timestamp test
1e82eb54278 : Add CTS MultiDevice test for Polling Frame VendorSpecificGain
022bfe128a5 : Test API for backported fixes.
836381188f8 : Enable secondary_user_on_secondary_display CtsCompanionDeviceManagerCoreTestCases
27c60665fd0 : Pin dhrystone to C99.
58e9e21840b : 25Q2_ITS: test_feature_combination lint cleanup
86591f852bc : Fix separator in SplitApp/needsplit/AndroidManifest.xml
6ba8b64671f : Tests opt-in strict intent resolution
7df178f2408 : Revert "AudioAttributesTest: test new system usage"
54fde4fd20f : Remove fullscreen asserts for keyguard tests in ActivityVisibilityTests.
1ae6d345463 : Remove fullscreen asserts for keyguard tests in ActivityVisibilityTests.
1d3caa1b43a : Add test case to check MoveContactsToDefaultAccount intent.
a62eb377e3f : Verify BATTERY_CHANGED broadcast is triggered immediately.
c794fbb420f : RESTRICT AUTOMERGE: Fix OneTimePermissionTest to use same thread.
915716d0249 : Check touch screen feature for judging system gesture support
b8f7df08a66 : [cts] Add a dedicated error code for missing function ID
f7a894242c8 : [CTS] Split the isAppFunctionEnabled method into two overloads
68a2cf24462 : [CTS] Add Error categories for AppFunction exe response
35c45c430a9 : [AAPM] Add Feature class
943eef08e60 : Revert "RESTRICT AUTOMERGE Adds tests around addSearchEmbeddings..."
604023a0764 : nfc(cts): Remove Tag from onTagConnected callback
c627dc94e03 : ITS: test_preview_stabilization_fov FOV > Android 15
ff8b31937e1 : Test capture timestamp increases for multiple image
0132e8c3268 : Add CTS for IME subtype getLayoutDisplayName
cf3c4fad7df : Verifier Streaming use a globally accessible link
034c96f8128 : CtsMedia: Update the links used in StreamingMediaPlayerTest
2be41a0ea25 : Remove fullscreen asserts for keyguard tests in ActivityVisibilityTests.
5c825a7090b : Update Contacts Storage Settings CTS test to use FLAG_NEW_DEFAULT_ACCOUNT_API_ENABLED flag.
8fbc5f031a8 : Add CTS tests for abandoned jobs
23aefb7aa0f : CTS tests for `createConversationNotificationChannelForPackage()`
9f078a5ad57 : Fix CVE-2021-0484
0b73289608f : Fix testAutomotiveCameraCharacteristics
05eb1445089 : Add cts test for supplementalDescription api
a5856b150d1 : [ITS] Add Xiaomi Redmi Pad SE to low light utils
6b8183da884 : Add CTS test cases to cover Satellite state change listener APIs
0557feb4ca9 : Add test for thermal headroom callback API
ecb828951fe : Fix a NPE in WearableSensingManagerIsolatedServiceTest
b33433e7430 : [VRR] Add cts tests for ViewGroup frame rate setting APIs
b7d09f6b9ee : Guard the failing tests with the feature flag
175b6e698b4 : Revert^2 "CTS for BAL Strict Mode"
25104a1be28 : Update CTS tests for the SIM info related APIs in TelephonyManager
ab89656e8bd : Update CTS tests for the IMS HAL APis in MmTelFeature
0e4c087781a : Revert "Adjust CTS tests for FRRO self targeting changes"
98f6a173305 : Revert "Add CTS test to verify bounds are set for ViewStructure."
5c37ee34c5f : CameraITS: fix glint error for test_feature_combination
e411fe43c3c : RESTRICT AUTOMERGE Adds tests around addSearchEmbeddings and related to known failures
3dcbd51ca87 : Fix Event Listener API tests to work with functional callbacks
d817bc4175b : Update CTS OWNERS in cts graphics for scheduler files
d61f4114cef : Camera: Test new MultiResolutionImageReader constructor
1a295339c75 : Set SDK level for tests-tests-networksecurityconfig-lib
90df7ac31c6 : Stop leaking persistable bundle key arrays in test
fe3ab0883f5 : Revert "CTS for BAL Strict Mode"
57e470ac26c : Add assumption for flag: SurfaceControl.Transaction#setFrameRate
ad7fbe0ae8c : RemoteSubmixTest: fix muting/unmuting of streams
f3d7ba0d0b2 : Remove CompatChange ID for ENABLE_PREVENT_INTENT_REDIRECT
39f684dbe30 : Enable --secondary_user_on_secondary_display for CtsMediaAudioPermissionTestCases
c101c67cccf : Enable --secondary_user_on_secondary_display for CtsMediaAudioTestCases
5cbab5abd7d : RESTRICT AUTOMERGE Also exclude "instant" FrameRateOverrideTest, DisplayTest from Android 15 CTS
5fc6a6adb78 : Revert "[Nfc_cts] Add cts tests for Nfc security log."
a4d786bbcfa : Revert "[Nfc_cts] Add cts tests for Nfc security log."
5e69d0080fd : RemoteSubmixTest: fix muting/unmuting of streams
332ddd81c97 : Change the language of the CTS-V test for FSI without permission with screen off to allow notifications to show as icons, but still disallowing the launch of an FSI if the permission is denied.
9cada2a21f7 : add tests for new RuntimeColorFilter APIs
401afe29ded : Revert "CTS for BAL Strict Mode"
05189a27880 : cts wm tracing: migrate to perfetto
a1a9bcd9490 : Bedstead: prepare multi-user to be able to build without enterprise module, make other Bedstead modules able to run on physical devices
2d97cfda6a6 : To only run "run_on_work_profile" for photo picker tests
a18914ca30f : Test failed capture when first frame fails to render
3d9f5e325f4 : Revert "Update Framework from Jetpack."
6bf6bc291b7 : Disable CtsBionicTestCases CPU Tests for GSI
070cb6a6a65 : Add CTS tests for non-UI context in WM extension.
5e1b671b147 : Add audio device type MULTICHANNEL_GROUP
1525d3b1162 : Fix excluded test classes
749a1df6d93 : Bug: b/298915013 The GSI image needs 3 new properties in the product partition; without them, Google's servers cannot be reached via DNS, so after flashing the GSI image the automatic time setting does not take effect and the device time differs from the real time.
b7418eb2782 : [Divider] Add CTS for interactive divider
b50a6d4f29d : Handle if activities are not launched in fullscreen mode in FreeformWindowingModeTests#testMultiWindowFullscreenOnNonPcDevice.
4b26d947f8c : Fix testRequestBluetoothPermission_Downgrade by enabling location
1be17358477 : Verify that receiver priorities defined by apps are restricted.
653446a43a9 : Ensure appops service has created the state for test app
78b397d2219 : RESTRICT AUTOMERGE Exclude FrameRateOverrideTest, DisplayTest from Android 15 CTS
f850ea387fe : Add CTS test for secure ranging
917fc8e9205 : Remove @RequiresDevice from codec|decoder|encoder tests
2889c524b8f : Fix flake in RolePermissionOverrideTest
3acf8824b34 : [le audio] Add CTS test for API get assistant local metadata for source
a8173fe2f7e : Add test for AnimatedImageDrawable setFilterBitmap and isFilterBitmap API
755e941a9f5 : [CTS] Fix pass-through opt-in CTS when flag is disabled
3bbfc503a96 : [cts] Add cts tests for maybeTriggerFirmwareUpdate.
c165d42983f : Skip some tests using DocumentsUI package for visible background users
2be9dbfe6d6 : Adjust user-update CTS test to account for modified icons in MODES_UI
4782b0485cb : Skip some tests using DocumentsUI package for visible background users
a902022caae : ITS: lint cleanups
13a2485f7e6 : Add CTS tests for looping vibration effects
bb31e383c67 : testApduFileReadTwoChannels: better handling of duplicate channels case
373ca9e925f : Add test to verify role doesn't override user choice
b6e5fb2701d : cts wm tracing: prepare for perfetto migration
00ba1ca2637 : AudioAttributesTest: test new system usage
d8873dd2e71 : ITS: test_metadata add focus distance units
56a01113178 : CTS for the custom VDM power timeouts
729b4f3166c : Updating CTS for getDataSystemDeDirectory
269bb9c6fec : CTS test for Android Security b/304772709
c916fe04cd8 : Add test case the deprecated Paint.setElegantTextHeight behavior
3e00114abc9 : Add effective weight test case
7deae47c465 : Remove multiarch=true in the test apps
986fe5728e9 : Disable Always-on VPN tests for devices without VPN support
64d9f8e1b8a : Add CTS for new ASurfaceTransaction_setFrameRateParams
ed54447389b : Fix to generate the input event on the display where the test runs
8700dacc41b : Handle if activities are not launched in fullscreen mode in FreeformWindowingModeTests#testMultiWindowFullscreenOnNonPcDevice.
61c005ddfcf : Add CTS for new SurfaceControl.Transaction#setFrameRate
f1bd50ee2c9 : Update SurfaceControlViewHostTests corner tap offset
604201ef273 : Exclude testGetSimCallManager where no telephony present.
e0b63a03c65 : Add SDK check to enable test on main branch
1bd8bbc034a : ITS: combine_feature_combo_results glint cleanup
8922e412e42 : ITS: fix glint errors in tools & utils
13eaeee08d6 : CTS for getSuggestedFrameRate api
df1ae34a64d : Poll when asserting the touch mode state
7335d170a5d : Remove dependencies on the 1-variant fallback
d3d1b56f677 : Revert "Make DEFAULT_RESCIND_BAL_PRIVILEGES_FROM_PENDING_INTENT_..."
beec38b1a76 : Add testing for high quality barometer part 1 flashlight test, verifying that the barometer isn't affected by lighting
c81193eedb8 : Bluetooth: Metadata: enforce unique registration
eb1e50a16e7 : Skip some tests using status bar for visible background users
6738b559df9 : Add a CTS test for verifying custom Ringtone vibration playback
3c65ca8ed1b : HDMI: Make cec-client run on port 1
1c21f90ccb7 : Check for protected support in testWrapHardwareBufferWithProtectedUsageFails.
280cd8be44c : ITS: test_imu_drift handle empty RV vector
e13465502fe : Adjust CTS tests for areAutomaticZenRulesUserManaged()
1e32d39e77e : Add CTS tests for Vibration.WaveformEnvelopeBuilder
e42c82fefb4 : Revert "Use get-brightness adb command"
1f81326284a : Revert "Use get-brightness adb command"
75b56f0a68f : Refactor collector to improve readability.
ebba39fa475 : Suppress non primary profile tests.
ca792827ecd : Enable logs to debug input in tests of WindowInsetsBehaviorTests
7a9781b0ed9 : Extend the assertion time by 10s according to the DONT_KILL_APP flag.
b4943e37704 : Add test cases for limiting the sending of the PACKAGE_CHANGED broadcast
d7a4914616f : Add tests validating image decoding metrics
b3b482d409e : Add permission tests for camera streaming
3a7c55ed905 : Fix AdvancedProtectionManagerTest#tearDown
a778cf0b649 : RESTRICT AUTOMERGE: Update CtsHibernationTestCases to support more form factors
934a25ea9cf : CTS for BAL Strict Mode
a982f1c3f46 : Remove dependencies on the 1-variant fallback
424a39a1f1d : Test whether authority is correctly parsed
a546c2cf5ce : Add CTS for new ANativeWindow_setFrameRateParams
bfd1edbe6e3 : JobScheduler: Ignore important while foreground flag
1712dc4e40b : Camera: Add test for per-surface mirroring mode
0f432a23d87 : Exclude testGetSimCallManager where no telephony present.
f4e66b680e5 : Support secondary_user_on_secondary_display for CtsContactKeysProviderPrivilegedApp
72660dea526 : Remove use of colour library from image_processing_utils
a80117bbf77 : Add Technology Type info for the active secure element list
f95fd8e07fd : [ITS][Zoom] force greyscale for test_low_latency_zoom
563870a9a76 : Skip some tests using status bar for visible background users
c7749408a96 : Fix AudioHalVersionInfoTest after deprecating HIDL 5
16be96bee63 : CTS for IpcDataCache
eda5beb446c : Remove dependencies on the 1-variant fallback
633f2852ce9 : Fix satellite CTS tests due to the change in provision acitivity
1f381368eef : getIccAuthentication may throw UnsupportedOperationException
8748f4edb9b : Use correct slot index for Remote SIM
c6a8ddf9143 : CTS for IpcDataCache
74f35b99dc3 : Ensure apps cannot act on bundle channels
04fa9ce9a85 : Remove the MCTS tags that are not in MTS test plan
4255e0fe610 : Make sure testRenameCanRestoreDeletedRowId is cleaning up created files
d31fba05cac : Use Radio HAL 2.3 tag instead of comparing to 2.2
469c4f5743f : Extend timeouts for AppZygote tests.
de71b21928c : DO NOT MERGE Remove some mcts tags to align with main and CTS R1
ec2a6987d32 : RESTRICT AUTOMERGE: Skip device ID attestation failures under GSI
d6631610dac : [AAPM] Disable for unsupported form factors.
62e608b99e8 : Revert "Use get-brightness adb command"
bb1a2416fc9 : Move CtsModernMediaProviderTests to presubmit
db56f481de3 : [2/n] CTS for ignore activity restrictions on virtual display API
2ca6b11c78a : Avoid show-when-lock activity relaunch
1b472bb5755 : Clean up flag get_address_type_api.
c65ef65d118 : Restore settings after test Settings_SecureTest#testUnsetSetting
c3cad0dc802 : Add unenroll & enumeration log CTS hostside tests
be8b9698d4b : Skipping test on GSI + first_api_level < U
3517728fb8d : Use 1/4 view width for drag from top.
1ec04a5a050 : Set target sdk 35 for legacy resizablility tests
8e9902913c7 : Enable --secondary_user_on_secondary_display for CtsMediaAudioTestCases
ed5723561a1 : Update cts test rotation assumption
883dfb7a0b5 : Update VirtualDisplayTest rotation assumption
79ef99dd3a5 : Update Framework from Jetpack.
b81a0c54d12 : Verifier Streaming use a gloabally accessible link
d3b2bb7edb0 : Remove H.263 clip from StreamingQuality Test
88ce87ade3d : [cts][7/N] Use class-level RequiresFlagsEnabled annotation
59b6f40d0e2 : Fix AID for perfetto trigger atom test
51b708be389 : [Accessibility APIs] CTS tests for expansion API
e5f1f7ebaae : Changes CtsAppSearchTestCases to use appsearch_flags_java_lib
509791ea6ad : Fix to generate the input event on the display where the test runs
0ded3ff00ea : DragDropTest:Resolve the cts issue about timeout
c42690443c7 : Camera: Add test for color temperature feature
746d0696fa7 : Update 7.5/H-1-3 to report MPC 15
b9946fec65a : Change time out seconds from 5 to 10 for LauncherCallback
7e29fde1710 : Add display option to OrientationRule for visible background users
71e9b9d088e : CTS test for providing read-only file descriptor to WSS
4ef295f0636 : Reduce wait time for hostside tile service test
f5095aa74e2 : Support secondary_user_on_secondary_display for CtsContactKeysProviderPrivilegedApp
4424624fa0f : Fix android auto support in multi-window checks in ActivityLifecycleTests
33ca0c1750d : CTS test for AudioManager.setWiredDeviceConnectionState
c3ed9fd01cd : Support multi-window occlusion in ActivityLifecycleTests
5c5177d8987 : Correct the string for Wifi test
c2810f0af89 : Undo assumption about autocreated channels
7342cdbf597 : Fix TileServiceTest for new tiles
58feb2acb39 : Add CloudMediaProviderTest for testing default behavior.
5ef77099ce3 : Make sure testRenameCanRestoreDeletedRowId is cleaning up created files
ef909726e93 : Change code owner for signature tests to Android SDK team.
57de779847a : Use actual user ID when enrolling fake sound model.
94cafac6ad6 : Add display option to OrientationRule for visible background users
2578a620aac : Remove EXTENSION_VERSION_CURRENT_PLATFORM_V6 in CTS.
b4f460b0e35 : Fix android auto support in multi-window checks in ActivityLifecycleTests
eb5516255fb : Adjust muted by property when using portID volume APIs
9c2dd7de1f8 : Use actual user ID when enrolling fake sound model.
cbd62606443 : Fix android auto support in multi-window checks in ActivityLifecycleTests
cfe04136043 : Use LargeTest for CarPropertyManagerTest.
81c15feaf96 : Change CTS app category to game
e74867fb71d : Move common HVAC VehiclePropertyVerifiers to same source
a44becbe570 : Add CTS for radio emergency alert
2a3a470b626 : Enable secondary_user_on_secondary_display for CtsRsBlasTestCases
ca211c3596a : CTS tests for enforcing the concurrent connection limit
c0bd9bb2f97 : CTS tests for concurrent connections
59d76f5bae2 : Remove launched flags
b4a12c8c854 : Skip testStartUserOnDisplayAndStopUser if no display is available.
35bae0b2e8b : [cts][5/N] use shell command to set policy
3f7ee5a0244 : Allow nesting for runWithShellPermissionIdentity.
3a5b981d508 : LeAudio: update getAudioLocation test
2e643944742 : Fix MediaStore_FilesTest#testStandardSorting
a3489ae5ad0 : Allow sharding for telephony CTS tests
aa4a16ca33c : Allow sharding for telephony CTS tests
7638ed04f5f : remove EnterpriseComponent dependency from TestAppsComponent
81b373ced9a : Create cts/tests/tests/uprobestats and add OWNERS
611d7fe913d : Allow sharding for telephony CTS tests
218f8b01587 : Fix StorageHostTest#testCache
2127aedbe8a : Change WAV file folder to a subfolder of Context.getDataDir().
6bf34bebaf6 : Deflake MediaSession2ServiceTest
51a9d9aaa76 : Update flag value checks in IME tests
146d5e5880d : CTS test for Android Security b/341680936
c9b3d944648 : Create test artifacts for ApkLiteParseUtilsTest
c8c8f8af5b8 : Add CTS tests for WindowLayoutComponent#getCurrentWindowLayoutInfo.
582407ca30b : Moving code to disable verification of adb install
54e0dd10482 : Updated the test accounts in ContactsContract_DefaultAccountTest to avoid account collision with ContactsContract_SettingsTest (which used the same account)
19115ca5a7e : Add initial tests for YCBCR_P210 format
e36b98932a3 : Remove unused variables.
7a3f4a5469e : Skip DualSuspendTests for visible background users
ba6b17dea43 : Fix test StylusHandwritingTest#testOnViewClicked_withStylusTap
875faf3a8f4 : Correct the string for Wifi test
1f2bbd66cf2 : nfc(mts): Fix mainline module name
2d3cd6b913d : Check for protected support in testWrapHardwareBufferWithProtectedUsageFails.
43fe834831a : Google RCS uses FTEU MO SMS for phone number verification Test cases
6727200969d : Adjust CTS tests for FRRO self targeting changes
c3616df87a4 : ParcelTest: testFixedArray fix breakage
7d733723ddd : ParcelTest: testFixedArray fix breakage
28fa421adff : ParcelTest: testFixedArray fix breakage
85808c0c806 : ParcelTest: testFixedArray fix breakage
4a131e0da5d : CTS updates to acquire permissions from app-streaming/device-streaming roles
42f540f40e6 : Exclude VulkanFeaturesTest from instant apps flavor of CTS
bc95e6133ca : Exclude VulkanFeaturesTest from instant apps flavor of CTS
c4635af7559 : Add CTS tests for invalid parameters of RecognitionConfig.Builder.
25b1f934aa0 : Clean up android.webkit.update_service_v2.
f3e841cc49f : SatelliteManagerTestOnMockService: skip on non-satellite devices
b27b4270525 : SatelliteManagerTestOnMockService: skip SMS test without messaging
3bcde435f5c : MediaCasTest is framework specific
3ea87278961 : Add permission after setting permission identity
e3d3278b209 : Make test more robust when other sessions intervene
ab503b078f8 : Fix testAutomotiveCameraCharacteristics
86d1f6177b2 : Add test for removing currentImeUserId
57ab018cee1 : WM CTS: Add a check to ensure that onCancelled is only called once.
33c198ab321 : Update virtual compliance filters for November
a1043bb198c : Fix CompatChangesValidConfigTest
4265d00bf38 : Update incorrect field names and encoding of attestation_id_* values.
da367fa4eba : Fix StorageHostTest#testCache
5be9170e4c3 : Clean up flag get_address_type_api under `cts`
5c8338e6d3f : mediav2: Skip EncodeDecodeAccuracyTest for vendor partition < U
0c9acf7552b : add secondary_user_on_secondary_display for media cts and mcts tests
a44191170c0 : Enable mCTS for MUMD
0238f6c4358 : Remove mcts-permission from backup hostsidetests
17143ef15fb : Remove testDisabledFeatureFlagBypassesVerifier
92f305c35c8 : Add 2 CarrierApiTest tests for multiple channels
8bc1a10c7ab : AudioAttributesTest: test all system usages, remove reflection
ab85fc5186e : Add CheckFlagRule to bypass KeyStoreManagerTest when flag disabled
09d6c36e0ed : Ensure that BlockedNumberContractTest restores block state
796c1459a0a : Ensure Media Performance Class is an approved value
11658d43b03 : Check flags for tri-state CTS test
13542242b40 : Allow only one binary to exist
ad82a9934bc : Wait for keyguard showing before entering credentials
a9d948c1d55 : change test behavior of testStorageReaderDisableInstance
d6bb07ef833 : Enable secondary_user_on_secondary_display for CtsUiRenderingTestCases
374a6489b46 : DO NOT MERGE Exclude from Android V CTS
5ae1938f172 : Ignore a few IntentTest test cases from Ravenwood
872ae386d00 : Do not kill process in multi-window mode in ActivityLifecycleTests
a62f1cad6f3 : Add CTS tests for Telecom audio focus locking.
17cecd520a8 : [ITS][25Q4] centralize handling of multiple ArUco versions
64147be4fb9 : BluetoothHeadset: skip createNetworkServiceStateFromParcel
d2dfb48e3f2 : Google Contrigution item #12 - subitem #2, opt [VZW-Skylo] apply Idle Mode Scanning for Terrestrial Network
2e2715535fa : Add CTS MultiDevice test for Polling Frame Timestamps
b93299bb09c : Update bug component for network policy hostsidetests
ac10a032d28 : Enable secondary_user_on_secondary_display for CtsFgsTimeoutTestCases
847b6ad2fcb : Revert^2 "Add tests for querying mmap support."
cc331d0fa38 : Make InputShellCommandTest multi-display aware
55a9b7c4bcb : Revert "Add tests for querying mmap support."
1a6cf2d6c38 : [API change] Add EDGE_NONE option for BackEvent#swipeEdge
d2fc6e351ca : Revert "CtsHardwareTestCases: Don't verify source for selected tests"
eea4a04df94 : Fix granting location for telephony tests using it
eaa50391563 : Skip WCG tests if the GPU driver doesn't support fp16.
420e127fa8c : Add test for Custom IME Switcher button visibility
3a26fa3f84f : Change time out seconds from 5 to 10 for LauncherCallback
1b480d0afe3 : AudioManager.addOnDevicesForAttributesChangedListener test permission
31f2d4d10ed : Add CTS test to verify bounds are set for ViewStructure.
5e323f002b3 : Allow other CarPropertyValue statuses beyond AVAILABLE
82a91750175 : Flag 24Q3: remove auto_on_feature
c7501012fe9 : Add CTS tests for getChildSurfacePackage() and clearChildSurfacePackage()
212d202aad8 : Fix flakiness in ActivityLifecycleTopResumedStateTests.
29f962456a4 : CTS test for querying contacts with account args
3cc83af7183 : Enable secondary_user_on_secondary_display for CtsUiRenderingTestCases
3a6832c9ea9 : Establish owners for IpcDataCacheTest
f4db779f8ab : Car x Bluetooth: use BlockingBluetoothAdapter
2b5552d6bb0 : Reland "Fix perfetto trigger test"
8662417a1e9 : Add CTS for always_send_intial_value.
1d23d4c1d08 : Update the IpcDataCacheTest
d11d901f7db : Re-enable CTS tests
475e67ec53f : Update bug component for DropBoxManager tests
36d7c51a7c7 : Add CTS test for triggering contacts storage settings. This test is to make sure calling SET_DEFAULT_ACCOUNT intent can be directed to Contacts Storage page in Settings.
cc22918be8a : Make InputShellCommandTest multi-display aware
00949297e3a : [ITS][NumPy2.0][test_jpeg_quality] use Python int for uint8 OOB issue
2e5d5317e4e : Deflake ASurfaceControlBackPressureTest
77fa2e9f0a7 : CtsHardwareTestCases: Don't verify source for selected tests
7cd6035fbaa : CtsHardwareTestCases: Don't verify source for selected tests
f19aae9f61d : Add tests for decoding ISO gainmap metadata.
2e10cb2989a : Catch AssertionError in waitUntil()
a7c019c21e9 : Fix failed testImeWindowCanSwitchToDifferentDisplays() for secondary_user_on_secondary_display
b1884c91201 : Enable secondary_user_on_secondary_display for CtsAppSearchTestCases
9e877c38389 : CP2 Move API CTS robustness
d49f9d3c7e3 : 25Q2_ITS: test_session_characteristics_zoom lint cleanup.
10cdea04b24 : Add tests for the Display_BT2020 ColorSpace
8adc3db6a1f : Change CTS app category to game
9f79cbafc4d : Update references to Job OWNERS in CTS tests
d602bf1cfae : Changing folder for saved WAV file to /storage/emulated/0.
87771513c56 : Update test expecation of font variation settings
bfe6fc685c3 : Skip testUniquenessOfHmacKeys for older devices
22522066a08 : Deflake ASurfaceControlBackPressureTest
bd0cb541280 : [satellite] CTS test for onCarrierRoamingNtnAvailableServicesChanged
4385609f68b : Wait until user is started and unlocked
de34ffdfce2 : Enable ATS by default in cts-tradefed.
d567c2290a3 : mts(nfc): Add a MTS module for nfc
64625ceafb1 : Make MultiTouchTest and PointerCancelTest multi-display aware
360e8ec259e : Remove READ_CAR_DISPLAY_UNITS permission from HVAC_TEMPERATURE_DISPLAY_UNITS verifier
b63a80a989d : Fix failed testImeWindowCanSwitchToDifferentDisplays() for secondary_user_on_secondary_display
1f31340c819 : Fix CtsWindowManagerDeviceDisplay:android.server.wm.display.WindowContextTests for the visible background user
18fe0750589 : [25Q2_ITS] Add tablet name and Android version to error message when tablet is not allowed.
0d3249ef497 : Add tests for querying mmap support.
835f4fef08b : Fix CtsWindowManagerDeviceDisplay:android.server.wm.display.WindowContextTests for the visible background user
624cc62a777 : AAudio: Update test copyright year to 2024
71e6aabda8f : Remove @RequiresFlagsEnabled annotations for released flags
26388cf36c9 : Skip the test if createHintSession fails
621318b9e48 : Add tests for verifying SurfaceControlEvent atoms
98812377c73 : Make MultiTouchTest and PointerCancelTest multi-display aware
86372d67ecd : Fix UiModeManagerTest for secondary_user_on_secondary_display
4e026fc37b6 : Save Audio Loopback Latency test audio to WAV file.
e870f623be5 : Add test for AccessibilityNodeInfo#getBoundsInWindow
b8986afecdd : Revert "Fix perfetto trigger test"
66d2335dda2 : Support secondary_user_on_secondary_display in CtsAppWidgetTestCases
4b5190eadc5 : Fix perfetto trigger test
03ad8a64927 : Add secondary_user parameter to PublicVolumeTest
b842b8c06e2 : [AAPM] Introduce new Service for Android Advanced Protection Mode
e66e0be022a : Update CTS for the NPE fix
72da4d8fdb6 : media CTS: Annotate CtsMediaDecoderTestCases
350a63e2c39 : riscv64: ignore missing libRS.so on riscv64.
c62465ca05c : mediacuj CTS: Re-workout the timeouts of CtsMediaCUJTests
5ad845af1c9 : Re-enable CTS tests
af6e81b5b2a : Extend timeouts for AppZygote tests.
a7e1d0553ae : Exclude CtsAdServicesEndToEndTestMeasurement com.android.adservices.tests.cts.measurement.MeasurementManagerCtsTest#testMeasurementApiDisabled_lowRamDevice and CtsAdServicesEndToEndTests com.android.adservices.tests.cts.topics.TopicsManagerTest#testGetTopics_lowRamDevice from cts-on-gsi
f089218c2ea : Giving tolerance on validateMirroredHierarchy calculation
cd25f8302f6 : Update filtered test names in cts platinium staging test config.
a9cfe9198d8 : Fix UiModeManagerTest for secondary_user_on_secondary_display
6996b2fa4bb : Support secondary_user_on_secondary_display in CtsAppWidgetTestCases
7edd8cfaf10 : Remove minSdkVersion and targetSdkVerison for car CTS.
63d762901c7 : Add initial tests for YCBCR_P210 format
9cf938387f6 : Add CTS for new Surface#setFrameRate
7fedc8b51c0 : remove multi-user and enterprise deprecated functions from DeviceState
e146c4ca903 : Add test for new API to allow app to opt out Intent Redirect Prevention
36d4d8ee667 : Write tests for shouldDefaultToObserveMode xml attribute
f92b90f566a : Disable AppOpsTest CTS test
6800038787d : Fix perfetto trigger test
dd9da44a5f1 : Fix the potential issue of ConcurrentModificationException
741e75f7ef1 : Add support for front default camera testing.
76999ce7f1c : [ITS][numpy2.0] Set offset type for convert_yuv420_planar_to_rgb_image
d1e3afbea1c : Increase stability of audio focus CUJ tests.
241da8cd21b : Add SimpleAfterLoginActivity in PasswordOnlyActivity, MultipFragmentLoginTest to fix fail MultiScreen test case.
12a8436bcdc : Add CameraFeatureCombinationVerifier in android-cts-verifier.zip
f8d7eafa05d : Ensure a11y shortcuts are cleaned up when the target was an Activity and become an AccessibilityService, so we won't skip the AccessibilityService permission dialog.
72f6c086287 : Improve exception message for proto parse failure
723b3be05a9 : Add checkFlagsRule for deviceIcon cts test
97bb4281185 : ITS: test_preview_distortion: Increase max resolution to find more 4:3 preview resolutions
904f4f53311 : ITS: test_preview_zoom: Pick preview resolution also supported by MediaRecorder
4b150f323e4 : Ignoring testPickAudioDir, testPickImageDir and testPickVideoDir tests
8f1187d7a6b : Revert "ITS: test_preview_zoom: Pick preview resolution also supported video profile"
38933cebb45 : Allow dashes as a valid start to an OTP
ea63f6fae8e : input: remove flag for MotionEvent.PointerCoords#isResampled
fb3282c38b0 : Submitting changes to detect APEX and related methods
60c27155b37 : Add ListeningPortsTest userdebug exception for gnssd port
b6919dcb80a : Remove references to deprecated method.
49923c5811c : Special handling (autopass) for emulators running AudioLoopbackLatency test.
9c88eb31901 : Ignore PublicVolumeLegacyHostTest
b0c52ddf055 : Add user option and set display id for CtsAppTestCases
1b843a1295a : Add user option to the shell command for CtsAppTestCases
f01280ad56f : Fix CloseSystemDialogsTest for secondary_user_on_secondary_display
adb0963d736 : Fix CtsWindowManagerDeviceAnimations:android.server.wm.animations.DisplayShapeTests for the visible background user
968c4063369 : Fix CtsWindowManagerDeviceAnimations:android.server.wm.animations.DisplayShapeTests for the visible background user
641c755c7f7 : Fix failing bedstead core tests.
3163ac081c1 : Add millis suffix to BackEvent#getFrameTime API
cf506f4e127 : Prevent remote submix devices from disconnecting before test finish
d9c448af155 : extract bluetooth module from Harrier
9a6432baa65 : Add tests for new KeyStoreManager grant APIs
0c3dc7916cd : [ITS] Add Xiaomi Pad 5 (nabu_tw) to Tablet allowlist
a3c09fb983d : Update test expectation
a539ddb53e1 : Add user option and set display id for CtsAppTestCases
1c17a76a270 : Add user option to the shell command for CtsAppTestCases
ead180768ac : WebView: disable setSafeBrowsingAllowlist test case
adbce09e2b7 : mediacts: end the codec usage upon error/exception
77865774752 : Change testAndroidBaselineProfile2021Support
d5f41b442e3 : [Ravenwood] Support DeviceConfig (CTS)
ac5c6b403cc : [cts][4/N] tests for adb bypass
b32786b2f47 : [cts][3/N] test for policy override and failure reasons
f55da50f518 : [ITS] Add Xiaomi Pad 5 (nabu_tw) to Tablet allowlist
23d4605429e : [Ravenwood] Support DeviceConfig (CTS)
08cbd24f7b1 : Allow auto-pass in AudioTapToToneTest when run on an emulator.
2449025ad5d : Enable secondary_user_on_secondary_display on CtsActivityRecognitionTestCases
195dce06fb7 : Add tests for connections for AttributionSource chains
88459733a0e : Correctly verify subscription failure.
75411edca5a : Use new subscribe API in CTS tests.
cd8a982db98 : Removed unused AndroidJUnit4 from CtsInputMethodTestCases
2ef06942fa0 : A workaround for Sharesheet tests failure when launched from the suit.
d099ed15743 : Modify shouldNotRedact to account for increased length allowance
482c756b1a8 : Resolve CTS test failure on non-telephony device
c9987bb457a : Flag gate CTS tests for #getPackagesWithCarrierPrivileges
2869d7037de : Ensure blocked number related test only runs where a Dialer is present.
abc790d896a : Support CDM UI cts test for landscape mode.
a245eb502d2 : Skip EmergencyCallTests if there's no telephony
9d1443fa986 : Add @RequiresFlagsEnabled to ignoreAsyncOpNoted
31126f9bfdb : Add distractionOptimized to CtsVerifier manifest for GearSelectionTestActivity
ee57234c044 : Correctly verify subscription failure.
953fc0782df : Increase _RED_BLUE_TOL to 20 for RGB not supported devices in Encoder
46e8ac5da85 : Fix CloseSystemDialogsTest for secondary_user_on_secondary_display
f2fc3784ab0 : Enable secondary_user_on_secondary_display on CtsInputMethodTestCases32
e1c40ed326f : Resize CtsVerifier car tests buttons
f19ae1a049f : mediacts: end the codec usage upon error/exception
34a86460351 : Add SimpleAfterLoginActivity in PasswordOnlyActivity, MultipFragmentLoginTest to fix fail MultiScreen test case.
6daee29de7d : Set trendy team on CtsGraphicsTestCasesRavenwood
0882b55d1b1 : Added the CTS test to ensure the new default account policy is implemented as expected.
9aed9a04eda : Update WidgetTransitionTest to use a concrete RemoteViewsFactory
6bd0cfc829f : Added CTS for wakelock ww logging
d170c5ef806 : Giving tolerance on validateMirroredHierarchy calculation
c6ef07dd49d : Exclude CtsInputMethodTestCases#StylusHandwritingTest some failed test
7634cec76bc : replace DeviceState methods using enterprise components by extension functions
756b2e7ba8e : Prevent remote submix devices from disconnecting before test finish
60e3552225f : extract content suggestions module from Harrier
d37d4b395ae : During cts testcase the testcase is switching render rate to 1Hz
280c4f982b3 : CTS issue onImeiMappingChanged fixed in case of 3 active modem.
de042a3713e : DO NOT MERGE Use DisplayId when verifying top task in MulitDisplay
6c5ed142846 : During cts testcase the testcase is switching render rate to 1Hz
53ca5f79a6d : Add device_config override commands to AndroidTest.xml
82db1ffa519 : Add a new variable to control the dynamic downloader for cts-tradefed script.
f64766c6675 : Enable secondary_user_on_secondary_display on CtsActivityRecognitionTestCases
3ff34c8ec3f : Use new subscribe API in CTS tests.
b69b6dd912a : Update October Filter
9a26390b0ef : feat(high contrast text): update screenshot tests for new design treatment: rounded rects
dc59598520f : Change testAndroidBaselineProfile2021Support
6673bef114a : [ITS][Zoom] create new test test_zoom_tele for TELE extension rig
4c12cdcf1a1 : DO NOT MERGE Disable A11y services before running ReviewAccessibilityServicesTest
03b212d397f : mediapc: limit 4k secure decoder tests to run at 1080p concurrently to reflect updated CDD
b5f1f14c4f6 : Handle unsupported EAP-SIM/AKA
939d7bb9fce : Enable secondary_user_on_secondary_display for CtsContentTestCases
b4e74349a47 : Remove all CTS tests verifying PWLE V1 behaviour
7f789c6ddb1 : Submit CTS tests for HDR bugs
f09ede16653 : add cts test for new api
133a4907e8f : Make E2E redaction tests set themselves as the SMS app
cad8a422fbe : Updates the LoginWithCustomHighlightActivityTest to test the new flag.
54ea92ac673 : Change testAndroidBaselineProfile2021Support
36fa9a194b7 : DO NOT MERGE Remove DefaultUITraceListener from CtsViewTestCases
809356ada51 : Revert^2 "Fix crash in CtsMediaProjectionTestCases"
a5e91f43ec0 : Exclude CtsGameFrameRateTestCases[instant] from CTS
ed8aefa9a54 : DO NOT MERGE : Bump CTS and CTS verifier version to 15_R3
fe1af180521 : Add test to validate proper handling of SurfaceSyncGroup timeouts
c2dab9f4ebe : [DO NOT MERGE] skip PermissionlessService_ignored on Android 15
1499eafe4cb : Enable secondary_user_on_secondary_display on CtsInputMethodTestCases32
08b442f4349 : Set flag defaults for CP2 Move API tests
b78b9e1fe1e : extract accounts module from Harrier, replace DeviceState account methods with extension functions, unify obtaining instance from locator using String class name
24e397339c3 : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
c461f67f4d4 : Introduce codename Baklava
23f9cd37b45 : Use a concrete RemoteViewsFactory instead of mockito
64643de0e42 : Bedstead runner: cache annotations cost and priority.
785c5dd2e56 : Keystore: suggest vendor properties for device IDs
f8acee5c733 : Keystore: coalesce device ID failure code
8e2d4e9c1c5 : DO NOT MERGE Revert "Wait user switch complete at the end of startActivityValidUser"
6b5e129794c : Take display cutout into account of system insets
260258767f6 : Fix Cuttlefish watch tests
056f9da57f9 : Keep unresizable for legacy compat test
4a382dae33c : Updated CTS Test for SmsMessage for getRecipientAddress
e5e82c6e42f : Revert "Fix crash in CtsMediaProjectionTestCases"
8d45728e580 : Support multi-window occlusion in ActivityLifecycleTests
7f9f7db9847 : Do not kill process in multi-window mode in ActivityLifecycleTests
04a790542c8 : Do not kill process in multi-window mode in ActivityLifecycleTests
4950570a967 : Fix the timeout logic in testAnr
49d551802e9 : Add android.permission.health.READ_HEART_RATE to sensor test app.
86596773eea : Correct CallDiagnosticServiceTest cleanup issue.
3739d56cf00 : Add CTS test for setProperty with null val.
89fd44a9f50 : ITS: handle multiple OpenCV versions for backup ArUco marker detection
1774adab326 : [ITS][Zoom] Fix RGB/BGR conversion bug in test_low_latency_zoom
040378227b9 : CTS for hasArrSupport
33e5e9c99b7 : Enable --secondary_user_on_secondary_display for CtsMediaAudioPermissionTestCases
eca2ef75651 : Uses VIRTUAL_DISPLAY_FLAG_TRUSTED for A11yDPTests's virtual display.
5bf30568ce2 : Skip CtsIdentityTestCases:android.security.identity.cts.UserAuthTest for visible background users
1eeb79091b3 : Exclude CtsGameFrameRateTestCases from CTS
604f655693f : Add tests for Build.get{Major,Minor}SdkVersion
a12dac4ff31 : Support multi-window occlusion in ActivityLifecycleTests
57c25f31fb2 : Press home instead of force killing the app for associaiton revoke test
e13f7462466 : Support CDM UI cts test for landscape mode.
865d454c8b0 : Update filter for October release
b3c7ad86ace : Clean up 24Q3 aconfig flag reset_mobile_network_settings
874b4f60adc : Remove MctsMediaMuxer and MctsMediaRecorder from cts, mts and mcts
9c3a7504a5e : Add test for (user) allowed NAS adjustments
1bd70f80539 : Remove IntegrityCheckStatsTests.java
6c0cfe1ad3d : Handle if activities are not launched in fullscreen mode in ActivityVisibilityTests.
32905629e1d : Replace "XXXXXXX" (7*X) and "XXXXXXXX" (8*X) with TAG.
5dfb72b3d6a : Disabling refresh rate tests for concurrent displays
0eb8363a032 : replace DeviceState functions calling UsersComponent with extension functions
0566f5cbb18 : Add test cases to test PackageManager.MATCH_ANY_USER flag
337ae81cc2a : Handle if activities are not launched in fullscreen mode in ActivityVisibilityTests.
5f8c98a6c4f : Make DEFAULT_RESCIND_BAL_PRIVILEGES_FROM_PENDING_INTENT_CREATOR overridable
b847bad46d4 : Skip CtsIdentityTestCases:android.security.identity.cts.UserAuthTest for visible background users
d6c79adf574 : Check flag before calling DefaultAccountAndState
56e8736d68f : [cts] Return early on "cannot execute function"
4ecd3400fb5 : CTS for AppOpsManager.setOnOpNotedCallback
36fd36443f9 : Clean up CtsAngleIntegrationHostTestCases
68f781b9c8c : nfc(cts): Some platforms may not have EE's for NFC
ca3a2e00d16 : Add tests for a color mode atom
194f9b2ed30 : CameraITS: Click on Cancel button instead of ON
b35c3291a2f : CTS for android:turnScreenOn on virtual displays
2ed638c6dae : Fix CTS in comply with RemoteCompose code drop
c8f82fc714a : Extend ServiceTest#testActivityServiceBindingLru to include app resume
4517199c91c : Disable AppOpsTest CTS test
a13f7e8fa45 : Update CarWatchdogDaemonTest to use proto dump instead of text dump
373e325747c : Skip call notification tests for visible background users.
cc7e5d4a338 : Support secondary_user_on_secondary_display in CtsContactsProviderTestCases
3b464c9e764 : CTS changes for the CREATE_VIRTUAL_DEVICE enforcement removal
6f0b595c594 : Deleting legacy frequency profile info
19d17d4be6e : Make spelling of writable consistent
c265ad3a258 : Add more context to Keystore CTS failures
4a65a86b44d : move UserTypeResolver outside of multi-user module and to not depend on multi-user and enterprise modules
f976c05cb0b : Add timestamp API extension for predictive back
71f94bdbb23 : Add RO_DCL_CHANGE_ID to OVERRIDABLE_CHANGES
8d0d9db6950 : Add text folks into CtsGraphicsTestCases owner
cf40fb5541d : Tests for new BAL modes
a6aeb0913be : Support secondary_user_on_secondary_display in CtsContactsProviderTestCases
4c0f0e2ef0c : Remove custom_biometric_prompt flag
ba986a02715 : Clean up 24Q3 aconfig flag DATA_ONLY_CELLULAR_SERVICE
08c3a63e085 : CTS Verifier: Test BT MIDI without USB loopback
d8942a4c615 : Fix flagsOff Move API CTS tests
b138aa43423 : Enable testMoveLocalContactsToSimDefaultAccount
7ca25265d43 : Fix failing assistant CTS tests on wear OS.
520ecf1a571 : Skip call notification tests for visible background users.
9af29038e29 : Revert^2 "Fix CtsSensorPrivacyTestCases on the current user in Multi-Display"
70bc3f5e8f1 : Media: Use java_defaults to reduce duplicated lines
0f03d042a9e : MediaStress: Use java_defaults to reduce duplicated lines
7cdaebf7dc1 : CtsVerifier take screenshots using MediaProjection
627f39a2352 : Add CTS test for tri-state checked api
2a0f2279607 : replace DeviceState functions calling TestAppsComponent with extension functions
b7b268e8458 : "Share Results" added to Datapaths Tests.
d44ed1d206c : Add cts test for getLong
8329af465ee : Fix bug in low_light_utils.py
d4764b7e5e4 : WindowFocusTests: Use CtsWindowInfoUtils to wait for window
405732bef91 : Skip ResetPasswordWithTokenTest if no DPC Component exists
cbba5cc9590 : CTS fixes for removal of VDM permissions from Shell
a5dbcb62c6e : set minimum sdk version for CtsMultiUserTestCases in order to allow execution on physical devices
084a4387518 : Revert "Revert "Display CTS changes for removal of VDM permissio..."
508d30de012 : Revert "Revert "Revert "Revert "CTS adjustments for removal of V..."
8ed0e4d26b8 : Add CTS test to ensure CapsLock can be produced through Alt+Meta
251d0396cf7 : Add modern media provider tests to postsubmit
d3edd7f7f39 : Add test for new frontend status StandardExt
fdfd8a8eaa4 : Collect screenshot and add DebugInputRule to debug test flakiness
37c34f047ba : Ignore low quality DefaultDialerApplicationTest
4b2f320bdfd : Create DynamicCodeLoadingTest
086264ea4f5 : Add haok@ to OWNERS for security tests.
a3914792862 : Fix test cases for the redesigning typeface
3d49c4c9aab : Analyze flakiness of setCredentialManagementApp, poll for 5 secs to check for the desired result.
93b42929bf9 : Remove flaky tests from Platinum prod
17a14c17e7f : Handle broken avd connection exception when executing adb commands.
ebc26c5503c : CTS for the VDM untrusted display limitations
0aea6321925 : Use DisplayId when verifying top task in MulitDisplay
e1fe0880d11 : audio: Correct test api access on user
0200b009271 : CtsVerifier take screenshots using MediaProjection
03037befb05 : Update VulkanFeaturesTest for 1.4
6e72bc688c3 : Fixes for errorprone update
1f702b1305a : Add Touch Pass-through Opt-In CTS for target SDK 35
8b4326056d7 : Add distractionOptimized to CtsVerifier manifest for GearSelectionTestActivity
acbb74e6871 : Disable AppOpsTest CTS test
d5a0829aece : Create new test target for modern MediaProvider
6bec83a87f7 : CTS test for getEligibleCloudAccounts API
e7db7f41974 : Revert^2 for "Add menu and media play/pause global actions [2/2]"
0a4c3396ec5 : Revert "Revert "Revert "CTS adjustments for removal of VDM permi..."
9738a78c3c9 : Revert "Display CTS changes for removal of VDM permissions from ..."
6680fc09e38 : Resize CtsVerifier car tests buttons
725aa0b7350 : Fix CameraPermissionTest error of using uninitialized BroadcastReceiver
817c481841c : BluetoothAdapterTest: remove deprecated usage
aefcc64b53c : BluetoothAdapterTest: use proper permission
f815799f2fb : BluetoothAdapterTest: use truth
ba4876e2be2 : GetName/SetName: clean test to use inline mock
b3eebed97aa : [cts] Add cts test for getRoutingTable().
5d2fe70c69d : Support secondary_user_on_secondary_display for CtsWindowManagerDeviceMultiDisplay
b59f6e48bf6 : Support secondary_user_on_secondary_display for CtsWindowManagerDeviceMultiDisplay
967d718daed : Add cts tests for new vibrator frequency profile
13d394aa0b1 : extract testapps module from Harrier
5196ea734e5 : mediapc: remove unnecessary usage of createByCodecName API call
53277dac175 : Remove obsolete testNoImeInsetsInSplitScreen
0b2116f1e90 : Display CTS changes for removal of VDM permissions from shell
11cfdce2431 : Remove the useless tethering tags since CtsWifiTestCases only contain in mts-wifi test plan
c70d9b044cb : Revert "Revert "CTS adjustments for removal of VDM permissions f..."
94b7ffa9590 : CTS: Annotate drm non-mainline tests with FrameworkSpecificTest
d7f8866e420 : Skip testSensorAdditionalInfo on emulators
e25b36589d6 : Support secondary_user_on_secondary_display in CtsWidgetTestCases29
7d891968b21 : [cts][2/N] verify session attributes including declared libs
f0d0b7cf06e : Use older CarPropertyConfig APIs to support older Android versions
3fc3b4a7745 : Support secondary_user_on_secondary_display in CtsWidgetTestCases29
acbf78118d2 : CTS test for setDeviceIcon api.
2a7d5b86932 : Revert "CTS adjustments for removal of VDM permissions from Shell"
f0eb6dfb1b2 : [ITS][Zoom] add force_greyscale option to find_aruco_markers
2e58bd1701e : CameraITS: Handle location tag window pop up
ca7ddcaa2fa : Revert "Make KeyboardVisibilityControlTest multi-user aware"
69d0064785b : BluetoothAdapterTest: apply format
48d979c2a23 : Added CTS test to support DCA to be SIM.
aed1f3d2e1a : Skip testTunneledAccurateVideoFlush* on devices with vendor APIs earlier than Android 12
c2901861ad3 : CameraITS: Handle location tag window pop up
b969f0e6cc4 : Support secondary_user_on_secondary_display for WindowFocusTests
129ea84fedd : Revert "Fix CtsSensorPrivacyTestCases on the current user in Multi-Display"
ffcf6c9a0dc : Add distractionOptimized to CtsVerifier manifest for GearSelectionTestActivity
7ce93fea322 : Fix CtsLocaleManagerTestCases for automotive
347b2b0aeb2 : nfc(cts): Adding default system code route to overwriteRoutingTable() API
2e1af09f289 : Skip some Statsdatom test on MUMD
83f79d7aedd : Skip testSensorAdditionalInfo on emulators
1e0fa05f974 : Add test for read_heart_rate app op
5d2d63b0ea9 : Disable apk install verification for CtsTelecomTestCases.
b712e2ca770 : Add test to verify CtsIsolatedInferenceService can only be bound by system-uid.
61614d1bcda : Fix CtsSensorPrivacyTestCases on the current user in Multi-Display configuration
7b9c8218904 : Fix CtsSensorPrivacyTestCases on the current user in Multi-Display configuration
17bc08e0769 : Support secondary_user_on_secondary_display for WindowFocusTests
bc0e3cec322 : Move `mLocator.prepareTestState()` to after `applyAnnotations()`.
02d0e562ad4 : Fix DRIVING_DISTRACTION flakiness
c3706916657 : Skip calling-dependent tests on data-only devices
0b81ea38ca1 : Update the Trendy team name for NNAPI, Renderscript, and RSBLAS tests from "trendy_team_renderscript_nnapi" to "trendy_team_machine_learning".
63346f28473 : Update filter for ctssecurityhost failures even after update.
643c62dfca6 : Add testcase to OnBackInvokedCallbackGestureTest
4290bd08a73 : Fix broken OnBackInvokedCallbackGestureTest
ec260f3dba7 : CTS test for Android Security b/159624555
575e685f6a9 : Fix EnsureHasAdditionalUser for the case of existing user being ephemeral.
39ee83c18ae : Add CTS test to ensure CapsLock can be produced through remapping
880c2931af2 : Fix the broadcast action name in the CTS test.
e7ff8e76248 : DO NOT MERGE : Bump CTS and CTS verifier version to 12_R15
b76edd290c9 : DO NOT MERGE : Bump CTS and CTS verifier version to 12.1_R13
614119b589e : DO NOT MERGE : Bump CTS and CTS verifier version to 13_R11
8703693d11d : DO NOT MERGE : Bump CTS and CTS verifier version to 14_R7
6c6dea56c36 : Revert^2 "VDM CTS for the PowerManager API display awareness"
510dd0eb727 : Make KeyboardVisibilityControlTest multi-user aware
8321d4f2c6e : CTS for set autojoin disallowed
01235c2ce58 : Fix CtsWindowManagerDeviceAnimations:android.server.wm.animations.DisplaySizeTest for the visible background user
86da30c7224 : Fix crashes in CarrierApiTest
d79a6c88b61 : Update CTS tests to support CP2 Move API
89c5f024de6 : Deleting legacy frequency profile info
bd6ee5984c1 : Revert "VDM CTS for the PowerManager API display awareness"
2035cc8f42f : Add SurfaceView composition order CTS tests
2b939ee5a81 : Add tests for TextureView dataspace changes
91d3006a450 : Skip some Statsdatom test on MUMD
16072e744d5 : Fix CtsLocaleManagerTestCases for automotive
d9c978cd80a : Update CtsHibernationTestCases to support more form factors
94957fae732 : Fix CtsWindowManagerDeviceAnimations:android.server.wm.animations.DisplaySizeTest for the visible background user
81f870e08c2 : Create a security test for cameraserver + AttributionSource
d060dbd0519 : Camera: Correct physical id selection in #testSettingsBinderParcel
1b1c73ab2e8 : Support secondary_user_on_secondary_display in some part of CtsWindowManagerDeviceDisplay
5c89bf2f7a1 : Skip testStackFocusSwitchOnStackEmptiedInSleeping for visible background users.
4011d169a43 : Check resumed activities on the primary display
f37ea9b50aa : CTS for the VDM power state API
23aeb2de8ba : VDM CTS for the PowerManager API display awareness
040a2a321b0 : Add sleep between scrollUntil and click
f3ae75117a7 : Expect correct exception in test
e242e336952 : Add a test for switching from a non-credential user to a credential user.
93dd26b8633 : Allow ANY for @EnsurePasswordSet and @EnsurePasswordNotSet.
6daea52264d : Add missing multi-display assumptions to VirtualDisplayTest
42fa3cfc36b : Use the same TEST_MAPPING file
b2b2e0ab871 : Add UNIVERSAL_RESIZABLE_BY_DEFAULT to allow list
4dbbee9666c : Add testNoImeInsetsInSplitScreen to verify that apps in split screen don't get IME insets
a52270df583 : Skip BUG_327749022 for Wear
666cbaff860 : Handle if test activity is launched in freeform mode in AssistantStackTests#testLaunchIntoSameTask.
c7c7f7ef8d8 : Cleanup dependencies of CtsResourcesTestCasesRavenwood
0bd2a9f84e9 : Handle if test activity is launched in freeform mode in AssistantStackTests#testLaunchIntoSameTask.
19fa6de42ca : ITS: add wait time for tablet brightness to change
5d889607667 : MediaCas: Add test to validate setResourceHolderRetain API
88f05d808f7 : Tuner: Add test to validate setResourceHolderRetain API
ac1492420cc : Increase timeout for LauncherAppsForHiddenProfilesTest
510401250ae : Skip a zen mode test for MUMD devices
38557abf4a8 : Added DefaultAccount cts tests.
01a7b6343da : Update CTS test for migrateLegacyKeystoreToWifiBlobstore to use the new asynchronous parameters.
337ffb95cb3 : Enable the option in DeviceSetup to ensure key events operate according to displayId when testing with secondary_user_on_secondary_display
e3dee482b9d : update test command to use local override rather than put
7302d444421 : Choreographer: Move asserts to main test thread
aba5c2a2b74 : ITS: add wait time for tablet brightness to change
a253167437d : CameraITS: Rename some proto enums
79bfd4cdd1b : decompose android.media.misc.cts.PresentationSyncTest#testThroughput
70864e5ed6f : [ITS][Zoom] add alternative offset check for smooth zoom
7ec28a8feff : move multi-user and enterprise annotations to the modules
022a0f39a48 : [cts] Add calling package onExecuteAppFunction
b23561dbfcd : Enable carrier_roaming_nb_iot_ntn flag in Telephony satellite CTS
05a73c3226e : CTS adjustments for removal of VDM permissions from Shell
e649bc6aa3b : Remove MTS tags from all app security CTS tests
d7df33fc90c : Updated OWNER of ContactsProvider CTS.
30689b17928 : Skip a zen mode test for MUMD devices
7482fe518ba : Increase the retry of scan to reduce flakiness
bcd367491e3 : [Nfc_cts] Add cts tests for Nfc security log.
c9a1255d1be : Add owners to display atoms CTS
f079ace3e57 : Add tests for A8 gainmap handling
3de3b151d75 : Make IMS Tests Pass on Cuttlefish
cdc90a5e2fc : Remove deprecated CddTest annotations from OnBackInvokedCallbackGestureTest
1ab81196d2d : Add new PRIORITY_SYSTEM_NAVIGATION_OBSERVER to OnBackInvokedDispatcher
850876aac88 : CUJ Test: Fix failure related to DND access change
1cb166b646b : Disable ASM_RESTRICTIONS flag
78b1bdadc98 : [RONs] CTS - Rename Step to Point
6f7e7f9dd41 : Refactor cts, split sidecar test from the rest
11f5d0c9011 : Add CTS tests for RecognitionConfig.CREATOR.
36de477f27d : Enable the option in DeviceSetup to ensure key events operate according to displayId when testing with secondary_user_on_secondary_display
0ebea108810 : MediaV2: Use java_defaults to reduce duplicated lines
13a5ae073e1 : Check resumed activities on the primary display
729fab78285 : Skip testStackFocusSwitchOnStackEmptiedInSleeping for visible background users.
225683bf39e : [cts] Add cts tests for saving routing option APIs.
32f8def500c : audio: Add multi record ref count test
a8da21af73b : ITS: handle multiple OpenCV versions for ArUco marker detection
18df49a2b4b : Fix failed ActivityEmbeddingPolicyTests for SECONDARY_USER_ON_SECONDARY_DISPLAY
b311df0fd00 : CTS tests for FRRO dimension support
d1e819b3f56 : ITS: test_in_sensor_zoom add TOL to AssertionError
32cea4133e0 : CodecInfoTest: Exclude wmv and vc1 codecs from some tests
b102af8207c : mediav2 CTS: replace vp9 clips from mp4 to webm container
d4d6e9b3738 : Exclude CtsGameFrameRateTestCases from CTS-on-GSI
08686a3f520 : mediav2 CTS: update to CtsMediaV2TestCases-5.0.zip
7a7db51e510 : [RONs]CTS Test - Remove Stable from StableId for Step/Segment
f8db8cb906a : Update mumd passenger cts subplan
bb0d839775e : [cts][1/N] verification test with test verifier
8aed2f66ee9 : CodecInfoTest: Exclude wmv and vc1 codecs from some tests
2aa0317adb4 : Adapt ResetPasswordWithToken CTS to run with permission based callers.
c19d7fb970b : mediav2 CTS: Update to CtsMediaV2TestCases-4.4.zip
873569de6e5 : Fix CTS in comply with RemoteCompose code drop
53084d79b96 : Skip MultiDisplaySystemDecorationTests#testLaunchSecondaryHomeActivityOnDisplayWithDecorations for visible background users
b0c43babf90 : ParcelTest: testFixedArray - remove UB
1dbf8b4c0fc : Disable two tests in SurfaceControlInputReceiverTests for AOSP
03fb364591a : Remove MTS tags from all app security CTS tests
f8fb70b1a82 : ITS: test_preview_zoom: Pick preview resolution also supported video profile
1b019301237 : Exclude CtsGameFrameRateTestCases from CTS-on-GSI
1afd0c07f84 : [cts] Add cts tests for more oem extension callbacks.
f0fd51a9781 : Enable secondary_user_on_secondary_display for windowmanager tests
6a68b04c33b : Make InputMethodServiceLifecycleTest compatible with concurrent multi-user IME
407a827fad8 : Support parameterization in AppFunctionManagerTests
71d2819484b : Check commit history for actual description.
0897cd0ba98 : CTS for VirtualDisplay#setSurface
cc518f91f3c : Replaced deprecated InstrumentationRegistry on UserHelper.
c82937bbf96 : Dump post-assertion screenshot from BlurTests
b38cd11200b : Revert^2 "[Autofill] Add more tests for save flow"
87e66a371d2 : Fixing DisplayTest for VRR display
e7e43c02424 : Revert "[Autofill] Add more tests for save flow"
da987f6ac66 : Fix flaky test: Wait till remapping is done at KCM level
cca49b5f6f7 : Check buffer before reading in PixelValidator
fad34b90487 : Exclude CtsAdServicesDeviceTestCases android.adservices.cts.AdSelectionTest#testGetAdSelectionService_lowRamDevice_throwsIllegalStateException and android.adservices.cts.CustomAudienceTest#testGetCustomAudienceService_lowRamDevice_throwsIllegalStateException from cts-on-gsi
b52de8f55b0 : [Ravenwood] Prevent dexmaker from going into Ravenwood tests
68827f4a829 : [Ravenwood] Support several axt libraries with Mockito
b5ad51dc134 : Add RequiresFlagsEnabled annotation for cts tests.
45833665ba0 : Add deprovisionSatellite api
5b34e8c2dcc : Make InputMethodServiceLifecycleTest compatible with concurrent multi-user IME
675584c2a48 : writeUnpadded -> write
94f851bfda9 : Revert "Replace InstrumentationRegistry#getTargetContext to ApplicationProvider#getApplicationContext"
7f1d4607e04 : HeadsetClient: createNetworkServiceStateFromParcel
90915d16406 : Tests for check getShowSoftInputOnFocus() for stylus input
1a4995aeff1 : Redisable CTS BluetoothLeScanTest#testOpportunisticScan
f0dec75f642 : HeadsetClient: createNetworkServiceStateFromParcel
0b4e3c7b864 : Use the correct annotation for some CTS tests
e79b66cc770 : Fix SurfaceControlInputReceiverTests for device with persistent system bars
d2fea47dc21 : [ITS][25Q2][scene6] use ArUco markers, restore offset check
15cd33ae5b4 : Fix flaky CarRotaryImeTest
dad7023d995 : Remove CTS metadata_api_inactive_audio_device_upon_connection flag
4052bb2433b : Fixed the test failure of testDefaultContactsAccountClass_sim.
3e86809961e : Deflake DisplayHashManagerTests
c29541385af : Remove CTS leaudio_broadcast_volume_control_for_connected_devices flag
ae62671d71d : Remove CTS leaudio_broadcast_monitor_source_sync_status flag
791fc59e02e : [Autofill] Add more tests for save flow
a5ba843e538 : Revert "[RESTRICT AUTOMERGE] Fix CTS test for inflating remote views"
5b27e14bb71 : Clean up group volume adjustment disablement flag
a1e3d2eb600 : audio: Remove DevicesForAttributesTest instantmode
4a51c11ac81 : [RESTRICT AUTOMERGE] Fix CTS test for inflating remote views
38c44a6ea6b : Add missing 3DES algorithm for TEE test cases
5d97d4cc514 : Update tests to accommodate the oneway interface change
61be0fc3607 : Make sure testRenameAndReplaceFile is cleaning up created files
8f941253049 : Skip MultiDisplaySystemDecorationTests#testLaunchSecondaryHomeActivityOnDisplayWithDecorations for visible background users
1193ab82623 : Adjusting test case to not rely on receiving new insets, but being the same as before
aa470a6febd : Fix EmbeddedPhotoPickerTest#testOnSessionError_sessionIsNull
72d5b99418a : Correct annotations order for an AccountsManagementTest.
25ea5c93d6e : Use KeyguardManager APIs to summon lock screen PIN pad to increase test reliability
b11713a434e : CtsWindowManagerDeviceTestCases-2 failures
29611a3b9fa : Add WM focus logs in DebugInputRule to debug focus test flakiness
d521323c1dd : [PM] Add PackageInstaller CUJ test case (37/N)
0da0a818e96 : Add LoopbackPassthroughTest
322a43a5fb5 : Revert^2 "Skip some IME tests for MUMD devices"
0c4cdb14de0 : Remove obsolete test
e1ece7b8cd1 : Test for new NAS apis
1258e3b29eb : Fix ActivityLifecycleTopResumedStateTests for visible background users
485a47040f4 : Revert "Disabling refresh rate tests for concurrent displays"
8c929fed7cb : feat(high contrast text): add CTS screenshot tests for high contrast text
cf028aef820 : Fix crash in CtsMediaProjectionTestCases
12a1e85644c : ITS: Fix numpy deprecation error for numpy.product()
91e665d0792 : ITS: Fix numpy deprecation error for numpy.product()
5bd662530db : Remove pre-created related CTS test
765f0ae0788 : Use the correct annotation for some CTS tests
e77ccf2beab : Support secondary_user_on_secondary_display on CtsAppEnumerationTestCases
87e01e414a0 : Enable secondary_user_on_secondary_display for windowmanager tests
3f8bae4487a : Fix photopicker CTS tests swipe logic for some tests to work with more form factors
9dd6d071bc4 : Disabling refresh rate tests for concurrent displays
a6454511b5d : Add a test case for executeAppFunction cancellation timeout.
572351ea167 : Fix mouse cursor position handling in MotionEventBuilder
cb84d802d32 : Add @FlakyTest annotation to ImeInsetsControllerTest#testChangeSizeWhileControlling
9101c55903b : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
ecf2042b688 : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
26051ce2c04 : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
fc7bd95657c : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
be92f690c9e : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
c3b71a759be : Revert "Add LoopbackPassthroughTest"
83fe12e4cf6 : ITS: test_yuv_jpeg_capture_sameness fix lint cleanup
ac405d8400a : Remove obsolete DisplayEventTest
0d87e37748d : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
3290b6c2dea : Support secondary_user_on_secondary_display on CtsAppEnumerationTestCases
8c4bd0fffed : Ignore sleep tests on automotive with visible background users
ebeba7b7b16 : Support secondary_user_on_secondary_display for CtsNoPermissionTestCases25
fb14bd1b333 : Test that ImsMmtel APIs Are Accessible
9513bfeccc1 : Make CameraExtension CTS test cases compatible with devices that only support basic extenders
0f6147c7b64 : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
d0aeb97652c : Remove leaudio_multiple_vocs_instances_api flag
fee71ce0a9f : Drop @NonMainlineTest in favor of @FrameworkSpecificTest
d769495e0fc : Fix test_suites for CtsAdoptableHostTestCases, CtsDirectBootHostTestCases and CtsStorageHostTestCases
f1d6f4301af : Add missing assumption to InsetsParamsTest
5185d371a45 : CTS for adding VD status bar on non-trusted display
6e47706cec1 : Fix testRenameFileNotOwned flakiness
f077887842d : CtsMediaEditing: Fix TranscodeQualityTest
35e88724ab8 : Update filtered test names in cts-platinum test config
a110559f8f6 : Update test expectation of font variation settings
409b730400d : Check lock screen support on attestation key tests
da3168d6047 : Remove tests for deprecated NFC-DEP
ca4db9f7a09 : Add proposed trendy teams for CTS modules
445126dd4df : Unstop apps as part of the test setup.
917f07200bc : Fix the case failure due to GMS Backup Transport
e9ecafbb11f : Reenable CTS BluetoothLeScanTest#testOpportunisticScan
d4764ed3de4 : Fixes for errorprone update
2e2243c176e : Add cts for satellite service
a34a9c64a62 : Fix ActivityLifecycleTopResumedStateTests for visible background users
e69428195b9 : Optimize Tuner Callbacks to only use ConditionVariable
d0e3156c3d3 : TunerTest: Add Content URL in IPTV createFrontendSettings
bd19bc3a7fd : TunerTest: Allow tests to pass without achieving frontend
ac7f332a7e2 : [cts_nfc] use default adapter instead of a mock for enableReaderOption
46994ff98e6 : Changes AppSearchDeviceTest to ignore apps indexer docs
8a7b655f456 : Remove pre-created related CTS test completely
1a5610b9b2b : Remove pre-created related CTS test
74723e7fcbb : [AVF] Update CDM multi-device test corresponding to AVF failure codes
9e88d16c95f : Disable ASM_RESTRICTIONS flag
bbf2e1f8636 : add required value for av1_frames_without_film_grain for 5.1/H-1-14
4f9051b86e8 : Add additional timeout for onLosing
ae7e9aa5415 : [cts_nfc] use default adapter instead of a mock for enableReaderOption
aeb6b77c38f : Add CTS for isAppFunctionEnabled and setAppFunctionEnabled for sidecar
e1682f71550 : Add CTS for isAppFunctionEnabled and setAppFunctionEnabled
36ae27a9b1a : Fix testRenameFileNotOwned flakiness
af931e63d40 : Add CTS tests for GenericSoundModel.
0ed03caaaf3 : Set verifier_engprod to 1 for scoped storage and DownloadManagerApi28Test
12293866cbe : Use textColorPrimary for helper text color
c94024eabda : Remove tests requiring specific carrier sim from staging sim normal config.
4de7b8c7205 : Support secondary_user_on_secondary_display for CtsNoPermissionTestCases25
c219755ca4b : Revert "[cts] Pass in package name in enableReaderOption."
8fd8e191936 : testExistingWallpaperWindows failed
43576496206 : Fix ComponentStateChangedReportedStatsTests failure
f86eb8cfeb7 : Disable hidden API check tests.
9ee50bbdd0b : Override HSUM CTS Bug Fix flags to enable during CTS.
6245edaa03d : Av1FilmGrainValidationTest: Remove first sdk check
b772425463c : Make TestApisReflection_Processor deterministic
aa66eef4213 : skip image codecs on gsi builds
346f5736a51 : Update mediapc README doc for generated code
0244f8bf256 : [ITS][NumPy2.0][test_jpeg_quality] use Python int for uint8 OOB issue
71e03339624 : [Autofill] Disable tests on TV
1db2c7e42f4 : [RONs] Implement CTS Test for Notification.ProgressStyle
d15e36a8d87 : Fix UnusedVariable errorprone issues
682e57eec07 : Remove deleted tests from platinum test configs
cef150a543a : [cts] Pass in package name in enableReaderOption.
81036fb6c57 : skip image codecs on gsi builds
0323a3051bb : Remove dead code from CtsInputMethodTestCases
79a492740c7 : Enable secondary_user_on_secondary_display on DownloadManagerInstallerTest
2f9b0008968 : Add CTS for touch pass-through opt-in
2ebdd2bcf4b : mediapc: Update Av1FilmGrainRequirementTest as per CDD
b319a879038 : Fix CtsTelephonyTestCases module failure.
385350588c4 : Add setScreenLockDisabled and getScreenLockDisabled to UserReference.
1e39fbdbcf7 : Enable large heap for Keystore cts.
1e9c488efa9 : Cleanup backup_service_security_log_event_enabled
cd621bc0529 : Use DesktopWindowing flag for Camera Compat for Freeform.
fe554984013 : mediapc: limit 4k secure decoder tests to run at 1080p concurrently
a532e719d5a : Add sleep times in ScopedStorageDeviceTest tests
be4be85d2a6 : Fix modified time issue in testMetadata test
de808165baa : Skip the Attestations ID Test without device IDs provisioned
67bec8b691b : DO NOT MERGE Remove unnecessary and error prone setup methods.
f2abbb7ff97 : [cts-multidevice]: Remove discovery tests from CTS-V UI.
b6831a856c6 : CTS test for Android Security b/341256043
efda693f53e : Skip WCG tests if the GPU driver doesn't support fp16.
cf9b15cab9e : Add CTS tests for RecognitionConfig.Builder.
fd8f707806c : CTS for mirror display creation with NEARBY_DEVICE_STREAMING
367db141619 : Exclude CtsVideoTestCases android.video.cts.VideoEncoderDecoderTest from cts-on-gsi-on-t.
af4025022ad : CTS test for Android Security b/281044385
a7d03a14ea0 : Add test for Adding an extra condition to check whether it is the default notification uri
7658e1157d9 : Revert "Skip some IME tests for MUMD devices"
72fdfb33b13 : Muxer tests use CtsMediaMuxerTestCases-2.2
23c5eef98df : [Ravenwood] Support several axt libraries with Mockito
18f876b0eff : Relaunch "Skip failed CtsUsageStatsTestCases for visible background user"
dfc921fd92e : Reland "Add cts testcase for setTransact..."
8638950ba25 : CTSV: add PASS/FAIL to DataPath logs
aec18fe21a3 : Revert "Skip failed CtsUsageStatsTestCases for visible background user"
c0fb475b38c : Add CTS tests for auth-bound key with at least LSKF
3952738ca21 : Fixed CTS EuiccManagerTest on HSUM devices
934f4e47083 : Add test for CancellationSignal propagation
3eca2c4cca6 : Revert "Revert "Revert "Add cts testcase for setTransactionCodeM..."
7ce2f4dbdb1 : Disable ColorSpaceTest on Ravenwood on AOSP
8be08af0efa : CTSV: add logs to mark BEGIN and END of tests
6ba6fc13068 : Ignore rollback lifetime tests for now
72c8efdd2bc : Use a more generous touch exploration test slow-swipe time.
b25dc373cc7 : [ITS][numpy2.0] Set offset type for convert_yuv420_planar_to_rgb_image
14c28159353 : Add av1_frames_without_film_grain measurement to 5.1/H-1-14
5242513d982 : CTS Verifier: Test BT MIDI without USB loopback
fcbb02cda34 : Add sleep times in ScopedStorageDeviceTest tests
aca002cbec1 : Don't create MMAP test modules for Java API (which doesn't support MMAP)
5b0fdaaf502 : fix testImeShowsAfterLockScreenOnEditorTap
53215885771 : Remove app integrity CTS tests
aaabe5466b2 : Reintroduce grace period for ASM
7f22afb52b3 : video: Fix the out of bound issue in arg length
f1c82120665 : DPMS: Enforce max byte length not character length
0f94f9f443b : Add more tests for DMRH uninstall
144b703c0a0 : Remove flaky annotation from ScopedStorageDeviceTest
eaf37943a03 : CTS issue onImeiMappingChanged fixed in case of 3 active modem.
0a32f05b2ee : CTS test for set and get autojoin restrictions
1805751f77e : Modify appfunctions sidecar test to use dynamically loaded library.
01779f2dd36 : Update PerformanceClassEvaluator to report data to MediaPerformanceClassTestCases
5f2bf47d75b : Delete deprecated cts platinum prod plans with indexes.
e621399e741 : Remove two flaky cts tests from platinum.
eb6300c9419 : Use a more generous touch exploration test slow-swipe time.
628cc9d9118 : Update Sept release
4095670a587 : Enable secondary_user_on_secondary_display for CtsMediaProjectionTestCases
56751361163 : Fix SurfaceControlInputReceiverTests for device with persistent system bars
4ccd1694e3d : Revert "Add caption bar consideration to TextureViewTest#testFirstFrames."
b79645722d4 : Remove library names cts-shim-host-lib and tradefed from test suites.
469cf120395 : Fix CtsTelephonyTestCases module failure.
1fa051eaee1 : Strip out the user ID when checking temp power exemption list in MissedCallTest.
e0d2135d465 : RESTRICT AUTOMERGE Remove 2 FrameRateOverrideTest from CTS on GSI am: dc52dc5383
04d659ca82e : Update testapps to target API 36
0acee01a949 : Ignoring these tests for TV as TV does not have any keyguards (bug: 362419010)
8a132114c71 : Remove `CtsJvmtiRunTest988HostTestCases` from CTS and ART MCTS.
930289be75a : Remove `CtsJvmtiRunTest988HostTestCases` from CTS and ART MCTS.
d8b7acba84f : Fix and enable secondary_user_on_secondary_display for CtsTimeTestCases
463db1105af : Delete StageOtaFlags CTS test
9fd43a96779 : move multi-user annotations to bedstead-multiuser module
698acfa60ca : [ITS][test_edge_enhancement] ensure lens STATIONARY before capture
1f6bef026e5 : Fix btest documentation
cbeb9bc51e0 : Update getChannels() to know about new system channels
f1ae0be2475 : UinputTouchDevice: Account for display scaling
d93f85b2a75 : [RESTRICT AUTOMERGE] Revert "WebView: disable Safe Browsing test cases"
cb603dbe4a7 : Fix and enable secondary_user_on_secondary_display for CtsTimeTestCases
8f3c84b61ca : HardwareBufferRenderer: Waiting for rendering to complete
04a547871b9 : Wait for the broadcast to be received before returning.
2485a4db02e : video: Fix the out of bound issue in arg length
3d7ae8604ea : Fix HideOverlayWindowsTest for non-fullscreen TDA
0df4422392a : Extend timeout defined in VideoEncoderTest#testEncode
d56fe1a11af : Make KeyboardVisibilityControlTest multi-user aware
c470f97d758 : Skip failed CtsUsageStatsTestCases for visible background user
dc52dc53838 : DO NOT MERGE Remove 2 FrameRateOverrideTest from CTS on GSI
5e7783e811a : CTS Verifier: Remove unused MIDI function
9277c603da5 : Make MockTestActivityUtil#launchSync test user aware
d42069e47b1 : Make InputConnectionHandlerTest multi-user aware
a1c291dd84c : [cts] Add cts tests for more extension APIs.
da4ca717fe1 : AppStartInfo start component cts
3c4da8426e7 : Remuxing comparing durations needs to signal EOS
fc621a78cf4 : CVE-2018-9472: disable FORTIFY
0e4c8cb8a33 : UinputTouchDevice: Account for display scaling
cd328edb9b4 : Wait for focus before dispatching Enter key
f3f1797a60a : Add device check to require valid sim for CtsVcnTestCases
f6041c7d95e : Enable secondary_user_on_secondary_display on DownloadManagerInstallerTest
e89174daa4f : Temporarily ignore tests that depend on starting cursor position
ff0ded2b3d1 : Update CTS-V ColdStart tests.
080a167132a : Remuxing comparing durations needs to signal EOS
9f25330ef27 : Enables automotive location bypass before testing.
60dd2cb1b22 : Add tests for OEM_PAID/PRIVATE int to/from string relation
2e6a9196841 : Added hostside test for NLS during user switch
f683f6d41c1 : RESTRICT AUTOMERGE: Fix PermissionReviewTapjackingTest to use pressBack() instead of pressHome().
3a1eef054e5 : Temporarily ignore tests that depend on starting cursor position
46779a48ff8 : Test for notif promotion api
5749a86537d : mediav2 CTS: add check for format support in CodecDecoderSurfaceTest
f4d3bceb250 : CTS test for Android Security b/304772709
f91fee7f344 : DO NOT MERGE ActivityTransitionTests does not respect animation scale.
668c4459a4e : mediav2 CTS: Reduce runtime of Decoder Tests
2147e0ccc70 : DO NOT MERGE Revert "Add menu and media play/pause global actions [2/2]"
99b2f3ac3b4 : Fix CtsSharesheetTestCases on CarUiPortrait
d19be0bc3b8 : Enable Photo Picker CTS tests for automotive
c8ec1f95429 : Enable settings UI test for Automotives
0028883be6d : Enable JobParametersTest for Automotive
c205e1145a4 : Enable Sharesheet CTS tests for Automotive
e1a69a224d2 : Enable AlarmStatsTests for Automotive
6bd0ad9d65e : CpuNativeJni: use std::nothrow if you want to check the result of new.
cc3605424d5 : Add test for new fields for reassembling fragmented media events
b84836601d1 : Replace InstrumentationRegistry#getTargetContext to ApplicationProvider#getApplicationContext
1d569e92244 : Enable null GameManager on Wear devices
e6ffad95d27 : Revert "Revert "Add cts testcase for setTransactionCodeMap and S..."
4b07bbe7c2d : Invoke invalidate to make a view get drawn in ViewTest
94b61004019 : mediav2: don't catch AssumptionViolatedException in getDecodedYuv
1d52e1116a9 : Skip the Attestations ID Test without device IDs provisioned
0ed44c7bd7f : Added --namespace parameter to dumpsys device_config
47cd641f5cb : Add CTS test for channel sounding 25Q2 APIs.
280b8a70cf0 : Skip some IME tests for MUMD devices
0a328a3f1d2 : Update notification strings
6fc769ace42 : Update notification strings
c11fed57169 : Add doBindAndWaitForService to testActivityServiceBindingLru.
853bcebe739 : Fix test crashes and failures on telephony-less devices
909aa734a8a : Introduce MockModemTestBase class to remove duplications
62788d36d55 : Call unified enforceMockModemDeveloperSetting method for MockModem cases
642c910a048 : Updating the CTS tests for the Callback Mode APIs
aedd6407d4d : Add test for SDK_INT_FULL
2d3e7d62741 : Verify if permissions are adopted (for S+ only).
4d9cf2c20c9 : Change unmounted callback timeout to 65 seconds
fd2f06e56fe : mediav2 CTS: replace vp9 clips from mp4 to webm container
ef65ad82ac8 : mediav2 CTS: update to CtsMediaV2TestCases-5.0.zip
344333d99a0 : Remove FLAG_ENABLE_CHOOSER_RESULT (CTS)
079bc6ab6cd : [RESTRICT AUTOMERGE] CTS test for Android Security b/232798676
ae2dbdd6df8 : AudioTrackSurroundTest: Add test for AC-4 level 4 playback
eb99c35bf90 : DolbyAudioFormatsTest: Add CTS tests for Dolby audio encodings
1e9f42a4e92 : ResourceHandle : Refactor resourceHandle data type to long
28abd72513a : CTS test for Android Security b/232798676
2feff1af281 : [RESTRICT AUTOMERGE] CTS test for Android Security b/289242655
8de3daa6b2a : DragDropTest:Resolve the cts issue about timeout
e033fb2b402 : CTS test for Android Security b/289242655
f0e7958d1f5 : enforce limits for VisualVoicemailSmsFilterSettings properties
a18395c46d9 : CTS for Android Security b/336323279
9b213092398 : DO NOT MERGE package-installer: scroll button into view
febe2f8e69f : CTS test for TtsSpan#TYPE_DURATION and ARG_SECONDS
73d71df62f1 : [RESTRICT AUTOMERGE] CTS test for Android Security b/336490997
ca12fe54b9a : CTS test for Android Security b/336490997
8372f2e4259 : CTS for Android Security b/336323279
8757ffb86df : Google RCS uses FTEU MO SMS for phone number verification [ Week 25]
a734de701d9 : Update owner
a282ea0007b : MediaCas : Add test to validate updateResourcePriority api
b7390a74237 : Add test mapping file for aaudio cts test.
618a2aefab2 : Add LoopbackPassthroughTest

+- Project: platform/development

53443cf94 : Re-run queries in one click
86ecacba5 : Parse mp4 files uploaded with separate metadata file.
ddfa6a1a9 : Isolate trace search initialization.
1980356df : Filter uploaded files for metadata.
09bcb6b0f : Add mp4box dependency.
f5d50c09a : Add analytics for trace search.
24734d836 : Create SQL views for Surface Flinger.
01721fa67 : Block timeline UI whilst search query is running/initializing
6a42cc8fa : Route trace search events to appropriate components to block UI.
1ee491028 : Custom query to initialize trace search.
c57f073c8 : Iterate on motion test viewer
afc4e7713 : STL introduces OverscrollEffects [2/2]
c62c4c2d8 : Highlight text filters when non-empty.
0c8779d21 : Ensure calculated properties not override by lazy fetching.
171102798 : Add tests for SF/VC to increase coverage.
d61b20521 : Strip AbstractHierarchyViewerPresenterTest.
51bd03ac8 : New test suite for common AbstractHierarchyViewerPresenter behaviour.
a78613998 : New test suites for PropertiesPresenter and HierarchyPresenter.
1b25b59da : Test util changes.
5416f2dd9 : Cleanup in hierarchical presenters.
723da4a89 : remove obsolete copyright line
7d3bd36c6 : Strengthen user notification checks.
d01805a53 : Collect unarchived uploaded files into one archive.
6237365ee : Set screen recording trace as active if only one uploaded.
0a334731b : Make TraceSearch added/removed events generic.
9e01d9313 : Add the zlib license to content-based analysis.
9eef19922 : Remove libusb directory from libusb1-sys crate.
2834703ff : Code cleanup for trace processor changes.
8021ca7d4 : Add additional logging and some fixes
98d48dc6a : Add a recorder demo activity in VdmDemos
2e7c9338f : Update e2e tests.
b94c08f04 : Revert the B VDM demo settings requirements to V
a8f0cf0f6 : Demo for the custom virtual display dim brightness API
ec57657cb : Update wm presenter test.
f5e5afdbe : ViewerSearchComponentTest.
5953b9cb9 : Add viewer search component.
2ed58cde3 : Search list component.
46cc5c9da : Changes to handle trace search events.
b8413cf5a : Add presenter for search.
b848d5382 : Presenter for search results.
49ce49ba5 : Add search viewer.
509a0ed2b : Don't save search filters between sessions.
5830f5b77 : Show missing/duplicate layer ids as fixed warnings
7f58ce082 : VDM demo changes for VirtualDisplay brightness API
592e5129f : Add setting to increase the # of QS rows
8e399bcc3 : Use separate status files for trace threads.
40cedb82c : Add a test flag to verify one stage rollback in Gantry
d6513b61b : Always log signal handler output.
c63c433c0 : Show window configuration container for all WM elements.
48d0fee29 : [DO NOT MERGE] Show window container for all WM elements.
10353935d : Use one AudioRecord per VirtualDevice
7fefbbe9e : Rebuild cargo_embargo every 2 weeks.
d8d11fdfb : Support deleting directories from crates.
cfb908d8a : Introduce SharedElement transitions with NestedSTLs
c3f89156e : Use ADD_MIRROR_DISPLAY in VDM Demo.
fe6c21af0 : Add colored border around demo STL
9a60e3bab : VDM demo: Address a number of SDK version TODOs with BAKLAVA
25b7458e5 : Winscope: Remove KEYCODE_ prefix from input event details
fdd116ca4 : Winscope: Update intDefMapping.json
957bce9c5 : Make license header optional for rules.mk
866da4e22 : VDM Demo for display brightness.
08fda4810 : Use updateMixingRules in AudioInjector
8bf331f52 : Rename generic_system_image to aosp_shared_system_image
e22c5ffaa : Use --versioned-dirs with 'cargo vendor'
ecfe101e9 : Use updateMixingRules when changing uids
179bfe0c8 : Make STL demo overlays closer to the Flexi dual shade
483b9aaf5 : Revamp import code.
367cf3694 : Add command to analyze crate imports for problems.
4cbb44cc8 : Add a new command to update TEST_MAPPING files.
df2cea1f7 : Crate for managing TEST_MAPPING files.
fd701b54d : Use verticalContainerReveal() in the STL demo
6948fdf60 : Update search parser test to check new TP columns.
b5e1f7fbc : Add current index to CUJ ui data class.
5a2bbe78f : UnitTestUtils#makeEmptyTrace.
9c17e8250 : Handle missing intdef values.
41710f514 : Expose fromContent and toContent to UserActionDistance (2/2)
f8be51e0a : Introduce parser and trace type for searches.
560a38944 : Revert^2 "Define core-current-stubs-for-system-modules-exportable"
660a1face : Log stdout & stderr of patch Command on failure
9d4681fce : STL demo, update spring configuration
550894297 : STL improve Swipe definition API 2/2
9e5f528bf : Android.bp: Add required bpfmt
334657b2b : Address concurrent sessions case when warning message present.
292952463 : PriorityNestedScrollConnection: Add flingToScroll [2/2]
1ce5b5147 : Make default proxy logging mode debug.
fc6068fea : Stop recording preview API levels.
b5446853b : Unfocus text boxes on escape key.
7de2659de : Show full name on hover of hierarchy name if simplified.
9f7e898e7 : Added some test flags to verify flag saturation improvements
5569fb253 : Report trace timeout during cleanup in UI.
1e24c07e5 : Handle 5+ perfetto concurrent sessions error.
baabde75e : Correct z order computation.
38495583f : Update intdef mapping and translation of left over.
3429c8830 : Bump Winscope proxy version to 4.0.2
696e3f43c : Winscope: Increase the timeout for waiting for the trace to finish
418a90b76 : Winscope: Show dispatch targets in input event details
68deebed3 : Winscope: Use input frame as input window rect
793a6f893 : Update the format of debug Winscope proxy messages
616a7e24d : Update API 36 release config and emulator files for 25Q2 DP 1
8493b4b23 : Add license term for libfuzzer-sys.
0a837975a : Pin serde@=1.0.210.
43d6bc0bc : Fix glob to find intermediates.
c2f99ff40 : Document options which were missing from documentation.
d7ff48e8f : Allow multiple entries for license_text.
79a6b6ffd : Add clipping and scroll to shades
ce4691054 : Allow multiple licenses.
f4fc34393 : Handle missing crates directory, in the case of a new monorepo.
2e9b5456c : Revert "Use exportable module for core-for-system-modules.jar"
d63eb51e6 : Correctly handle legacy crate directories with multiple versions.
ba4594afd : Write a .gitignore when initializing a pseudo-crate.
34b2b749c : Make preupload check work with different managed repo paths.
e8526c045 : Required changes due to refactored filters.
6a7c01fc7 : Remaining log viewers refactor.
bbf651940 : New and refactored log unit tests.
4ce62544e : Refactor log types.
3213bd61d : Use exportable module for core-for-system-modules.jar
2d971bb07 : Allow license build rule name to be overridden.
5b44129af : Allow specific version to be specified for multi-version crates.
2d01a4949 : Fix movable Notification picker with overlays
ddec6b885 : Improve handling of size during interruption + overscroll
792aaf2dd : Add new command to initialize a new managed repo.
fdcda0ad7 : Fix NullPointerException in MonkeyMotionEvent
68f63699a : s/emargo/embargo
351a1fa93 : Allow the path to the managed repo to be specified on the command line.
c5b78e3c5 : Allow Android.bp changes when doing a combined migration + upgrade.
f31055915 : cargo_embargo: Add dirgroup for Trusty soong build
b1d6447df : Add tiles to QS shade
a81aad8c8 : Improve handling of size during overlay interruptions
e84326798 : remerge3: Auto advance to next file if mergetool exited successfully
f011c0473 : Use INVALID_DISPLAY as the default displayId for the trackball
34345b2b8 : Make fetchartifact more progress bar friendly.
335ca839c : Demo for custom VDM power policy.
00aa7ce9e : remerge3: exit with success (0) exit code
962a81ac7 : Recover from "cargo add" errors.
d727741eb : Permission checks in the VDM demo preferences
9e28ea904 : Navigate filtered logs with arrow keys.
97b599136 : Show handler, participants and flags in transitions log view.
2e823c53f : Combine plain text titles and filters into headers parameter.
bc025d716 : Better output when analyzing updates.
fdde18aa7 : Ignore --allow=deprecated and --allow=unexpected_cfgs
21f61a3cd : Standardize how crates are specified on the command line.
75c0d68e5 : Rename monorepo tool to "crate_tool".
91d3c666b : Make UpdateTransitionChangesNames apply to PropertyTreeNode.
2efd145a0 : Scroll strategy for transitions.
5c39ad98c : Bring back is_perfetto_tracing_session_running
49a5e0bf0 : Don't label displays with "Mirror".
315d4d7f4 : Make LogFilter a class.
29793cc5c : Translate flag intdefs for transitions.
f8a35074e : Update intdef mapping.
30f415ffa : Add command to try crate updates and see if they succeed.
abfadf2bd : Add a command to update a crate to a new version.
9d073a7e9 : Add a command to suggest updates that seem likely to succeed.
8c074819b : Add command to analyze crate updates.
635c7b60c : Preserve README.android when migrating.
d444ee2c2 : Add command to find updatable crates.
a2909607a : Don't write unrecognized license to source tree.
a354038ac : Correctly restore DemoConfiguration.enableOverlays
68ac02207 : Remove STLState.enableInterruptions (2/2)
3d312d7d3 : Remove the lockdown handling from VDM demo
7de153450 : Demo for the new VDM power management APIs
6acc00ef4 : Preserve *.bp.fragment files
2bd6283b4 : Move Cargo.lock when running cargo_embargo
a6ca18832 : Print post-migration cargo_embargo output in verbose mode.
032b03003 : Avoid unnecessary backup files when patching.
f22da31bf : Fix bugs in LayerExtractor.
d9805a28a : Pinned nodes always full opacity.
be4e41f24 : Display change fixes.
99518d449 : Fix the shared Shade <=> QS time clock
6b760744e : Clean up lib.rs
136e52239 : Check exit status when recontextualizing.
38bf78638 : Make 'cargo vendor' status part of ManagedCrate state.
aa89e7219 : cargo_embargo: add another license combo
c4d055da2 : cargo_embargo: additional license selection
2bbf7ef41 : Add ability to recontextualize patches.
bb9e17c3f : Close QS when dragging FooterActions
7b25093b0 : Do not adjust bounding box after every scene change.
ca805bbbc : Store display selection.
f2586862a : Always show transactions properties related to flags.
7f404d895 : Log stderr output for perfetto traces.
28827abbe : Add .envrc to ignored files for migration health.
47b0501e9 : Only show the mirror display option for app streaming
8ff98cc99 : Extend presenter input tests.
e03f713d9 : Unify properties behaviour across log views.
d2022524f : Make migration-health errors non-fatal.
6890f595b : Add ManagedCrate struct, with major refactoring.
d1e98828c : Add license expressions and organize them a bit.
e6dccbacb : cargo2rulesmk.py: Pick rust version from envsetup.sh
a6954c155 : Update the path of the soong-built generic system image
971c5bdc9 : Add an exported flag in CubeLiveWallpaper manifest

+- Project: device/amlogic/yukawa

2a75ccb : yukawa: enable vendor package support
e52e15b : Revert "yukawa: enable vendor package support"
ff5ec96 : yukawa: enable vendor package support
14c96ba : yukawa: ship vendor package fetch script
97d6bb4 : Remove CEC libraries from non-Android TV Builds
879c99c : yukawa: Add --ramdisk_offset to BOARD_MKBOOTIMG_ARGS
b781a08 : yukawa: Set BOARD_INCLUDE_RECOVERY_DTBO := true to fix build break
e352691 : CEC: fix use-after-free in hdmicec_close
f156fca : CEC: do not join NULL thread

+- Project: device/generic/car

fa19d60 : update health HAL dependency
5b8dd39 : Enable dynamic audio routing in sdk_car targets
6e3d361 : Add audio configuration APIs to gcar audio control HAL
c239a7a : Updated VHAL version run by emulator from V3 to V4
aac64f1 : Added a rule to copy advancedFeatures.ini for system image generation
92deebd : Switch usbip_test to run as a unit tests instead of testmapping-host
ad1030a : Remove unused argument.
82aa8cb : Add owner "google" to internal car interface

+- Project: device/generic/common

c6c4de2 : Add GSI targets using soong built system image

+- Project: device/generic/goldfish

26528670 : Remove unnecessary checks
9d94022d : goldfish arm64: do not strip vendor_ramdisk modules
47cfe7d9 : Remove deprecated property that was used to enable RKPD
1a6c297d : Pull virtio_pci_legacy_dev.ko from GKI modules, not virtual-device ones
8efa4db6 : Fix slim emulator build.
c86dbb80 : Make provisionWifi more verbose if failed
da03b8e4 : Update kernel module paths
2ccdefa9 : Revert "Update kernel module paths"
a085e2ab : Update kernel module paths
9094949b : Revert "Update modules path"
324ee2a2 : [uwb-overlay] Support multicast list update rsp v2 on gf
bf888ebf : Update modules path
6eefc2d5 : Update the gatekeeper HAL
dc789ffa : Don't provision AndroidWifi for background users
6b9dee24 : allow host to enable vulkan for hwui
8cb3f2c6 : Convert emulator-info.txt to Android.bp which is used in dist only.
09d95bf7 : Convert lib_driver_cmd_simulated for goldfish to soong
ccbe665f : Move the radio specific rc part to a separate file
20961222 : Increase BOARD_EMULATOR_DYNAMIC_PARTITIONS_SIZE for sdk_phone64_*
0b4e2779 : Add Display ID when creating media projection
12bf8e23 : Delete the unused file
6a062af2 : Move phone/tablet specific files into the phone/tablet common place
6a89f9b6 : Move adding advancedFeatures.ini into the right place
af3952b2 : Delete the unused file
8eca65fe : Move the radio related files into radio/data
10045f74 : Put apns-conf.xml into /data/misc/apns
7c6f1267 : Retire unused code
eda05fe2 : Retire unused headers
23b79b00 : Retire librilutils
98514931 : Retire goldfish-fork-sap-api-java-static
7ae8e0a0 : Delete product/uwb.mk
55d50c51 : Enable Uwb feature by default for Goldfish emulator (guest)
04b8f753 : Add ANGLE feature overrides to init.ranchu.rc
e385b09a : Move fold configuration files into ifeq clause
3a5f7087 : Move radio related files where the radio is added
3710bdb6 : Rename radio/RadioConfig into EmulatorRadioConfig
72ff6f57 : radio: save tel alaska icc profile
252f66e8 : Replace atrace@1.0-service with a file
46668ecc : Update android.hardware.contexthub
adf93624 : Update android.hardware.dumpstate
72480e4e : Update sample power HALs
22bcbc94 : Update android.hardware.vibrator
af4864ef : Upgrade android.hardware.thermal
2f39c113 : set game_default_frame_rate_override as 60
84d5c1a7 : Remove biometrics.face-service.example
9a7f0121 : Revert "sdksetup: always perform provision"
5eddbb29 : Move fold configuration files into ifeq clause
13ced9e5 : Move generic_system.mk into handheld.mk from base_handheld.mk
e1ef1ba9 : [Tablet AVD] Prefer network time suggestion over telephony time suggestion.

+- Project: device/generic/goldfish-opengl

563486a1 : [Cuttlefish] add getLuts() interface
e0d4777e : Implement per-layer brightness in Composer HAL
3b17811f : [Cuttlefish] Lut backend implementation
b6c77a1d : hwc3: Free drmConnector object after taking backup
7257e959 : Sync with new API to start HDCP
69503943 : Add stubs for HAL IComposerClient getMaxLayerPictureProfiles
6d751211 : [owners] Remove yahan@google.com from OWNERS
54c571b3 : goldfish-opengl: add support for printing sync info
01de504a : [Cuttlefish] change executeLayerCommandSetLayerLuts function declaration.

+- Project: device/generic/trusty

ab86e6e : Reapply "Update kernel module paths"
c53d659 : Revert "Update kernel module paths"
30236be : Update kernel module paths
98aa37d : Revert "Update kernel modules path"
842eb72 : Update kernel modules path
b5679ca : Revert "BoardConfig.mk: Add ffa-module.ko"
afa5e8a : BoardConfig.mk: Add ffa-module.ko
4667d38 : Revert^2 "trusty: Add script to repack ramdisk to host package"
2e24898 : Revert "trusty: Add script to repack ramdisk to host package"

+- Project: device/google/akita

3eb1621 : gps: add certificate file for carrier
b67c554 : gps: official release 4.13.2_28_Release_248164
79cd618 : Add a way to disable auto prefer fit for launch.
e2baaa6 : audio: align volume curve
33717cd : [BT] Update LEA allow list
8d0c53a : gps: support Galileo in CP NILR for ATT and TMO
137c188 : Revert "powerhint: enable auto_prefer_idle in games"
d1b4263 : Revert "gps: set default SUPL SSL method to SSLv23"
b0ae6ee : Enable multi-codec architecture for Akita
0074216 : gps: Enable coredump report for user ROM
684c384 : akita: add microphone info for aidl hal.
846a5a7 : Disable Wifi BugReport for subsystem restart
f34455d : [NFC] Enable STNFC_ACTIVERW_TIMER
f4ee85c : Revert "gps: Enable coredump report for user ROM"
3e51fb3 : gps: Enable coredump report for user ROM
8547e6d : gps: set default SUPL SSL method to SSLv23
cfbfa0a : Enable CDPreferHighCap for CAMERA_STREAMING_HIGH for akita
8960e3e : powerhint: enable auto_prefer_idle in games
e5f155a : Enable TA and FG prefer idle for some camera streams
b22f973 : [akita] Define CAMERA_MULTICAM_BOOST
a89b803 : [Akita] Regulation e-label for Akita
4a36904 : Remove vibrator HAL service
48c8082 : Update PRODUCT_RELEASE_CONFIG_MAPS assignments to mark git project boundary
d368e19 : Disable stereo spatialization.
13d9ccb : Use auto prefer fit for launch
6c209b9 : Set the fcc cdev_ceiling to 1A before disable charge
27d9d57 : dumpstate: touch: Init using touch_predump

+- Project: device/google/atv

dd54e8b : Import translations. DO NOT MERGE ANYWHERE
f062382 : Import translations. DO NOT MERGE ANYWHERE
0fe8e5a : Import translations. DO NOT MERGE ANYWHERE
afa4994 : Import translations. DO NOT MERGE ANYWHERE
17afcf5 : Add TV release config map
ed6b074 : LE Audio: Add unicast toggle to device settings
c3bea08 : Add connectivity team as OWNERS for BT slice + services
6f23c73 : Import translations. DO NOT MERGE ANYWHERE
a506bb1 : Limit persistent logs to 60MB on all TV devices
8c7f074 : Import translations. DO NOT MERGE ANYWHERE
80452d7 : Remove audio_proxy folder
f1ce31d : Import translations. DO NOT MERGE ANYWHERE
42b9677 : HDMI: Expose power state change on active source lost setting in SettingsUI
448b80b : Import translations. DO NOT MERGE ANYWHERE
a8bab3e : Install advancedFeatures.ini for TV
b25a4b5 : Do not add apns-conf.xml in atv_emulator_vendor.mk
18d1cea : Fix WorkManager initialization issue
0de9689 : Import translations. DO NOT MERGE ANYWHERE
c02bd90 : Remove NETWORK_SETTINGS permission due to CTS restrictions.
b4d028a : Fix persist log makefile
4eaded8 : Remove unused audioproxy configs
c28becf : Import translations. DO NOT MERGE ANYWHERE
cb35f2d : Avoid memory leaks by unregistering BroadcastReceivers and cleanup upon destroy
3b71025 : Remove PRODUCT_USE_PROFILE_FOR_BOOT_IMAGE from atv_lowram_defaults.mk
7f93451 : Set existing default to skip home pinning for device

+- Project: device/google/bluejay

5581e80 : modem_svc: use modem_svc_sit version sepolicy
4cda211 : Update ISODEP routing setting
e2d874b : modem_svc: use shared_modem_platform to replace all modem_svc_sit
1a653ac : bluejay: Pull init.insmod.*.cfg from vendor_dlkm
7fc6a2a : Move modem_svc_sit from gs101 to bluejay
cff96f4 : Remove vibrator HAL service
34289b8 : powerhint: fix json syntax for wbs test
bc19c19 : gps: set default SUPL SSL method to SSLv23

+- Project: device/google/caimito

3223be9 : Fix LE Audio Unicast Allowlist
5c23b60 : powerhint: Disable auto margins
0977e86 : Add support for AMM experiment.
756cd13 : thermal: update policy for earlier USB port throttling/warning
58ca016 : support NTN with dual SIM
f91720b : caimito: apmg3: update 0 db tx tuning.
facfa9d : caimito: update libspeechenhancer_1203
22dcc34 : Add a way to disable auto prefer fit for launch.
e3c6ec6 : audio: [2024/12/05] TK4 Fortemedia table check in
c0b25f3 : audio: [2024/12/05] KM4 Fortemedia table check in
ef60280 : audio: [2024/12/05] CM4 Fortemedia table check in
03efb69 : Revert "caimito/haptics: Remove voltage restriction for haptics"
f4f7f5e : caimito/haptics: Remove voltage restriction for haptics
ed36a84 : Add Samsung Galaxy Buds 3 pro to the LE audio allow list
5d1df95 : BT: add skip uart suspend overlay config
a62254b : gps: support Galileo in CP NILR for TMO
b8d7b1d : Revert "gps: set default SUPL SSL method to SSLv23"
701babc : audio: add always available display paths
3e9baa9 : [caimito] Define multiple levels of PA kill to be used in Mendel experiments
3392f43 : 16kb: Filter out unnecessary modules from 16k mode
74d165d : powerhint: Set response_time_ms for clusters
4cdac3a : powerhint: Enable auto migration margins/dvfs headroom by default
f52a815 : Revert "Remove check for the 16kb kernel directory"
c59eaf6 : Revert "Allow PRODUCT_16K_DEVELOPER_OPTION from other targets"
ec685d8 : Enable TAPreferHighCap for first frame
149158d : Disable Wifi BugReport for subsystem restart
9e30c2d : [NFC] Enable STNFC_ACTIVERW_TIMER
d720b4a : gps: set default SUPL SSL method to SSLv23
18ccb4a : [Bluetooth] Set default LDAC quality mode to ABR
89e9f94 : Allow PRODUCT_16K_DEVELOPER_OPTION from other targets
0db03e1 : Remove check for the 16kb kernel directory
22536c7 : Remove 'RELEASE_PIXEL_BROADCAST_ENABLED'.
20ca167 : powerhint: Add WriteOnly flag for PA_KILL to avoid SELinux error
3232826 : Modify TARGET_LINUX_KERNEL_VERSION according to build flags
1830a99 : gps: Update official release 4.15.3_7_241024_R1 config on P24
6124fe3 : thermal: support stats for future temperature predictions
aa1a128 : [VSS] VSS to be supported only for P24 devices
c063e41 : ADPF:caimito: add tagged ADPF profile for SYSTEM_UI.
de9f892 : thermal: update P24 ml prediction model and config
7b857c0 : audio: 2024/10/11 Fortemedia table check in for next release
6d0de40 : Enable TA and FG prefer idle for some camera streams
f59daa1 : Remove device/google/caimito/Android.mk which is not needed any more since there are no other Android.mk files in subdirectories.
a6d5fc8 : powerhint: enable auto_prefer_idle in games
aff249a : Remove vibrator HAL service
c8d5ce9 : Enable 16kB developer option for caimito
5bdfebb : komodo: Update APMg3 tuning files
84a0c53 : caiman: Update APMg3 tuning files
80a1e9c : tokay: Update APMg3 tuning files
97bf4c0 : caimito: fix unexpected affinity setting.
9991f0a : cm4km4tk4: upgrade to media performance class 15
827f321 : Use auto prefer fit for launch
9bb79a5 : Disable stereo spatialization.
b990268 : [caimito] Define CAMERA_MULTICAM_BOOST

+- Project: device/google/caimito-sepolicy

bc94d49 : Remove rule from Caiman, Komodo, and Tokay.

+- Project: device/google/comet

2abb11f : powerhint: Disable auto margins
2bca538 : support NTN with dual SIM
0ea9a17 : Add a way to disable auto prefer fit for launch.
fe86da5 : comet: apmg3: update 0 db tx tuning.
ec4e82a : comet: update libspeechenhancer_1203
2f842ea : audio: [2024/12/05] CT3 Fortemedia table check in
4d46cb3 : BT: add skip uart suspend overlay config
1267a36 : Revert "comet/haptics: Remove voltage restriction for haptics"
8e61044 : powerhint: Set response_time_ms for clusters
279cd17 : powerhint: Enable auto migration margins/dvfs headroom by default
1a75186 : comet/haptics: Remove voltage restriction for haptics
0ae159f : Add Samsung Galaxy Buds 3 pro to the LE audio allow list
e01ffa6 : gps: support Galileo in CP NILR for TMO
7d4e309 : Revert "gps: set default SUPL SSL method to SSLv23"
4ffe473 : [comet] Define multiple levels of PA kill to be used in Mendel experiments
a45dfcf : Set a proper status bar height
7975640 : Disable Wifi BugReport for subsystem restart
613e9d9 : [NFC] Enable STNFC_ACTIVERW_TIMER
5eb0771 : gps: set default SUPL SSL method to SSLv23
6e1a6c0 : [Bluetooth] Set default LDAC quality mode to ABR
ef979d3 : Enable TAPreferHighCap for first frame
c9afc81 : Remove 'RELEASE_PIXEL_BROADCAST_ENABLED'.
61589ce : powerhint: Add WriteOnly flag for PA_KILL to avoid SELinux error
f4b9014 : Modify TARGET_LINUX_KERNEL_VERSION according to build flags
6893af6 : gps: Update official release 4.15.3_7_241024_R1 config on CT3
fc13cf0 : VSS to be included only in P24 devices
552f4d3 : Revert "ct3: upgrade to media performance class 15"
e227bcf : add fatp_camera_eeprom_inspector for comet
4b523ba : Convert device/google/comet/sensors/Android.mk to Android.bp.
f1815b7 : Enable TA and FG prefer idle for some camera streams
dcc7759 : powerhint: enable auto_prefer_idle in games
e538262 : Convert device/google/comet/sensors/Android.mk to Android.bp.
bb48ea5 : powerhint: port dvfs_headroom settings from p24
12af273 : powerhint.json: enable gpu capacity signalling.
f24c022 : Remove vibrator HAL service
c617d1d : Boosting kswapd uclamp min value when the panel is on
821731b : comet: fix unexpected affinity setting.
69ca723 : audio: enable software encoded Bluetooth broadcast
6832fe9 : Use auto prefer fit for launch
55d224f : ct3: upgrade to media performance class 15
6291d33 : Disable stereo spatialization.
6d42431 : Update volume panel radius to 52dp for comet
b319347 : thermal: Remove bcl related tzones
3b7c659 : [comet] Define CAMERA_MULTICAM_BOOST

+- Project: device/google/comet-sepolicy

efc74e6 : Move the rule to gs-common

+- Project: device/google/common/etm

84168f3 : Snap for 12680993 from e1a542245d4e5332cee41cb256820c728cd5acca to 25Q1-release

+- Project: device/google/cuttlefish

13ed53bad : Use Skia to handle more screenshot formats
e31d4c5cf : Add `cvd display screenshot` functionality
61c16b819 : Add HDCP AIDL to the KnownMissingAidl
14d15eca0 : Blocklist snd-aloop.ko kernel module
01eb90b89 : Mark `android.system.vold` as unimplemented
54d3208ca : Add vendor_capabilities_service to the list of missing HALs
a1abf1aa5 : Unlock after reboot in snapshot tests
4596136aa : file_contexts: support secure storage in system for test
939ffa804 : Workaround casimir dropping ints with value 0
c53799c30 : Add android.media.audio.eraser.types to kAlwaysMissingAidl
8db69c435 : Fix HexToBytes vector size
4d1e258e8 : Add bluetooth socket service
1adc362bf : Refactor input_connector code
7caffcab6 : Bpfmt all Android.bp
1d03ab6c9 : process_sandboxer: Fixes to support compiling after import
637e4c69b : [tests] Add HwCrypto hal as known missing AIDL
a36268962 : Stop matching deprecated Ika target
43cb82bca : Define vendor_ramdisk version of fstab.cf.* modules
025fe666c : Bpfmt shared/config/Android.bp
1ceb3a284 : Update cpuvulkan version to 1.3 to match swiftshader
5b0dd41ff : Load wildcarded kernel modules from conditional path
a556f71a7 : Revert "Use a local copy of vulkanhpp"
e5307028b : Add DEBUG logger to audio server
10ba2d888 : Revert "Use a local copy of vulkanhpp"
51175384c : Try a different unlock mechanism for snapshot tests
8900b4937 : Mark secure storage aidl as always missing
3d837e1f1 : Use a local copy of vulkanhpp
eea2e012c : Disable exposeES32ForTesting on Cuttlefish
a1812d8fb : Remote impl of KeyMint is v3 not current
4f4decc0f : Revert "[uwb-overlay] Support multicast list update rsp v2 on cf"
6407b61b5 : Add secondary command buffer snapshot tests
b7d58252e : Add test rule to unlock device before each test
1acda8ee5 : Set ro.vendor.hwc.drm.present_fence_not_reliable=true
522c79ff9 : Reapply "Update kernel module paths"
66f3d615b : Save screenshots on failure
680cc37ab : Revert "Update kernel module paths"
272f41107 : [uwb-overlay] Support multicast list update rsp v2 on cf
e67cffefd : sepolicy definitions for WV Trusty VM
a1d3dd467 : Update kernel module paths
c2d4959f8 : Decrease the trusty security VM memory
a355152f4 : Avoid empty cvd_bugreport_build.log.
7b12296c6 : Add AuthMgr AIDL to the KnownMissingAidl
c2b5678ac : Remove |ro.hardware.| prefix in KM VM sys property
53faad346 : Deduplicate `Result`-to-`Status` conversion in casimir_control_server
25103ebfb : Tie `CasimirController`'s initialization to its lifetime
257941a31 : Don't use `std::shared_ptr`s for hex strings
40cc3cc0a : Rename `utils.h` to `hex.h` in `casimir_control_server`
da0c6f11e : Order class members in `casimir_control_server`
6bc29337a : Put `casimir_control_server` entirely in the `cuttlefish` namespace
d8d63d7cb : Update sandbox policies for casimir using unix sockets
9b2cd94b8 : Use unix sockets for casimir
edf2ca428 : casimir_control_server: Support a unix rf server
6597d3b49 : Add REAR_DISPLAY_OUTER_DEFAULT to CF configuration
cca82ec60 : Revert "Reapply "Update kernel module paths""
148b5d924 : cuttlefish: use Health V4
56d4daadb : Add new log messages around super image mixing
ba4a3390c : Implement field on notificaitons and setting the power level on casimir
e1d289088 : Reapply "Update kernel module paths"
6e8608796 : Move early_vms.xml out of cuttlefish specific directory for reusability
fc584ba31 : Refactor all of the path logic together
487d80c07 : Revert "Update kernel module paths"
e9c422b22 : minidroid: fix build
0da427c64 : Update kernel module paths
b8def3319 : Rename generic_system_image to aosp_shared_system_image
78646a27b : Add vulkan snapshot tests
2ec62dc50 : Rename system property to enable KeyMint VM
8761f51db : Add `CuttlefishConfig::EnvironmentSpecific::casimir{nci,rf}::socket_path`
fb63a7ee4 : Use the fluent Command syntax for launching casimir
adbd90802 : tcp_connector: support unix sockets
5b2c759ce : blocklist dummy-cpufreq.ko for aarch64 cf targets
182f60ad9 : Ignore all Android.mk files in aosp_cf phone products of vsoc_x86_64 and vsoc_x86_64_only
7620aa709 : Revert "Change how hibernation image is generated"
69760a5c7 : Cope with new KeyMint tag
83e4284b7 : Hold the lock on the mutex only while accessing the data
cb5bcd6d3 : Move non-vcpu/critical tasks to workers cgroup
87987271e : Set guest_soc prop at boot
7b1767bf6 : cuttlefish: Update kHwComposerDrm to "drm_hwcomposer"
0c437bee8 : Run multidevice tests with Cuttlefish
de2cced87 : Derive the number of vCPUs from the vcpu_config_path instead
86f86d237 : Change TEST_MAPPING
b4d09f7a1 : Revert "Delete CF hal_{gatekeeper,keymint}_default.te files"
a8368f3f4 : Disable prime shader cache for CF
cbb889cf3 : Add `CrosvmBuilder` commands for cpu flags
dbf38117d : Move frequency domain crosvm arguments to a separate file
527f84791 : Delete CF hal_{gatekeeper,keymint}_default.te files
f9cd3210a : Remove `/tmp` mount in `assemble_cvd` or `avbtool`
ca49374eb : Add a `early_tmp_dir` flag to control file locations
315bd0310 : Change how hibernation image is generated
774502ed4 : Revert^3 "Changed how hibernation image is generated"
d99d9d6c9 : Revert^2 "Changed how hibernation image is generated"
13daa35a7 : Make auto_ethernet optional
71938af67 : Add wheel event handler to mouse
5e2256013 : Revert "Changed how hibernation image is generated"
a369894cd : Update overrides in vsoc_arm boardconfig
4679ff686 : Radio: bump rest of declared AIDL services
779de3ae5 : Radio: bump rest of declared AIDL services
d882d90a5 : Add parsing for freq_domain and cgroups
8483f5deb : Bump KeyMint version
3381bd867 : Fixing watchdog for trade-in mode
680083ec1 : Re-enable soong-built system image for aosp_cf targets
b78e5ea9c : Send touch up/down events for all contacts
fde718488 : Don't use slot id as tracking id
ee2d7d24d : Don't access iterator after erase
a9d066630 : Don't send BTN_TOUCH UP unless down is false
ea3b10887 : SnapshotTest avoid delete on snapshot fail, throw errors
64f1300bc : Changed how hibernation image is generated
7625f5eea : Update ARpcServer_newVsock for new method
be8715805 : Rename KM VM related system properties
d3b40dc79 : Use `JoinPath` rather than concatenation in all sandbox policies
f2cb73ecd : Use Kati to build the system image for cuttlefish targets
7c3f580d5 : Use a tmpfs mount for netsimd `/tmp`
3fb567197 : Use random data for sandbox pingback values.
8e316f654 : Remove `TraceAndAllow` implementation.
c88f895e3 : Bump KeyMint version
c196a4485 : Fix sepolicy errors on switching keymint/gatekeeper domains
2b1a5078c : Reapply "Add serialno access to the kefault keymint domain"
fe788787d : Fix function pointer type mismatch for C23.
60d913a69 : Revert^2 "Enable Media Quality Service on Cuttlefish"
f5578415c : Revert "Enable Media Quality Service on Cuttlefish"
0efbecbb8 : Disable CBS V4 on cuttlefish
5491a3733 : Remove egrep usage
52cd8e365 : Revert "Enable checkpoints in cf with ext4"
ee9df3eec : Remove displays from streamer after display sinks
5bb5ffb46 : Protect access to the display sinks with the send mutex
ae05510ff : Radio: bump declared AIDL services
b774f8f81 : Use libradiocompat aidl_deps for AIDL dependencies
b6866b06f : Reformat reference-libril makefile
925ece353 : Pin KeyMint dependency to correct/specific version
37b5a42f0 : Revert "Use new Radio HALs"
4206297b0 : Fix path for 6.6 kernel
ca15fe231 : Add support for multiple custom partition paths
dcc0f1623 : Add overlay xml file to fix test error for cuttlefish
3aa3c3dc6 : Revert "Add serialno access to the kefault keymint domain"
aee062532 : Remove dependencies on the 1-variant fallback
d3cb58ac1 : Set vsoc_arm to use 6.6 kernel
06fc37f52 : Enable Media Quality Service on Cuttlefish
13dab8199 : Rename trusty_vm_launcher and move it to packages/modules/Virtualization
4dd6c97e0 : Add serialno access to the kefault keymint domain
beeb6582c : simplify ProcessMonitor::Properties API
f16db40c3 : fix openwrt crosvm crash on CF shutdown
4d8c82f49 : Use new Radio HALs
e4d16d1eb : delete openwrt crosvm control socket on powerwash
6822d3ffb : Add android.media.audio.eraser.types to kAlwaysMissingAidl
3c6865ab6 : Use `AutoSetup` for `InitializeEspImage`
ac31394c9 : Update auto_portrait dimension
0f500fa96 : Add a default implementation of `SetupFeature::Enabled` returning `true`.
bf69ab9a1 : Delete snapshot after every test in SnapshotTest
68c5186c7 : Revert^2 "Add Cuttlefish frontend mouse support."
d77868439 : Revert "Removing vhost_user_vsock"
306965e4f : Add back drm hwcomposer support
976bab23e : Removing vhost_user_vsock
3ce82399b : CUTTLEFISH: cf_x86_64_desktop builds use AL kernel
23151cd24 : Use soong-built system image for aosp foldable
927e97664 : android-info: add prefer_drm_virgl_when_supported flag
7908da819 : [trusty] Move trusty kernel to etc/vm/trusty_vm directory
f2d28e319 : aosp_cf_x86_64_phone uses soong defined system image
a5e65999a : Use canonical copy of ABSL for WebRTC dep.
4db79c4f3 : Setup ethernet for cf_auto target.
ddc833225 : Add overlay xml file to fix test error for cuttlefish
880fdac38 : Add vsoc_arm target to be used by Wear targets
354159a52 : Fix build with fmtlib 11.0.2
f8067dd68 : Populate shared desktop directory
347a0bfe2 : Add support for vvmtruststore partition to Cuttlefish
51411d514 : Add back drm hwcomposer support
3a35d43a4 : Revert^3 "Drop guest-side socket_vsock_proxy"
f012f05cc : Sync with new drm common aidl interface
58f3d25c8 : Revert^2 "Drop guest-side socket_vsock_proxy"
8c3018893 : Don't audit vendor_boot_security_patch_level_prop read denial
b880d8c37 : Use profile at framework/base/boot/ instead of the combined one at framework/base/config
e12b8481b : Move the soong-built system image to build/make/target
36b2e32e9 : Move the soong-built system image to build/make/target
f901f1412 : shared: sepolicy: system_ext: enforce secure storage types
bfabbaffc : Fixes `cvd suspend` getting stuck when `adb_connector` is not running.
907736269 : Skip logs only if log dir already exists on restore
b815bf742 : shared: device.mk: add trusty-ut-ctl
ac67a1996 : shared: device.mk: Add secure storage for the Trusty VM
76d60d103 : Add new createVm argument
b18a0e44c : Replace uses of egrep and fgrep with grep -E/-F
0b0357ead : Add auto cf specific logic needed for hibernation
5998bc63e : Change vintf_fragment_modules to prebuilts
f555c7f61 : Enable checkpoints in cf with ext4

+- Project: device/google/cuttlefish_prebuilts

e612d2d9 : Snap for 12755599 from 1d4fff138a23cd1f3da505f0097b669024755e44 to 25Q1-release

+- Project: device/google/cuttlefish_vmm

33c1d3b : Snap for 12399304 from 9dd7090bb4943a582865f1067797e68a6afb244b to 25Q1-release

+- Project: device/google/felix

f62bd8d : modem_svc: use modem_svc_sit version sepolicy
a4f3009 : Revert "felix/haptics: Remove voltage restriction for haptics"
c8208e3 : felix/haptics: Remove voltage restriction for haptics
0b7eb46 : Enable TAPreferHighCap for first frame
3231326 : Update F10 Bluetooth LEA unicast allowlist: Samsung Galaxy Buds 3 pro
067d8c6 : Disable Wifi BugReport for subsystem restart
11690d6 : audio: fix cts AAudioTests failed on GSI image
ccd324c : Update ISODEP routing setting
2b76424 : Felix HAL: Fixed VibratorTest unit tests errors.
f669a1e : cs40l26: add DBC bin info and reduce duplicates
46065a4 : vibrator/cs40l26: update default scales of click, tick and long vib
0b3cc7e : cs40l26: organize dump() AIDL section
0f94013 : Add power profile config to reflect the presence of two displays
3f7ccdf : Update OWNERS
c5a772a : vibrator: correct debug() calibration file path
9c9f564 : modem_svc: use shared_modem_platform to replace all modem_svc_sit
32a1b78 : felix: Pull init.insmod.*.cfg from vendor_dlkm
17e4c02 : Move modem_svc_sit from gs201 to felix
8104233 : vibrator: Update location of PixelVibratorFlags
ea53f43 : gps: set default SUPL SSL method to SSLv23
c45009e : audio: enable software encoded Bluetooth broadcast
7105ca3 : vibrator: Format PWLE header in user driver
29af95b : [felix] Define CAMERA_MULTICAM_BOOST
ef3616f : dumpstate: touch: Init using touch_predump

+- Project: device/google/gs-common

92b5295 : Revert "Add amm experiment."
f94206f : Reduce the trace instance irq_gia_google's buffer size
2d6b42c : Remove code that just re-enables IRQ and GIA events
842b86d : Add amm experiment.
9e1cdbc : storage: add missing bug_map
d5909db : Add apf experiment.
8112ee6 : modem_svc: add modem_svc_sit to solve sepolicy conflicts arising from different device versions
2833eec : modem_svc: move shared_modem_platform related sepolicy to gs-common
3bcf1e5 : Add Intelligence rc
f7eae2a : Always include camera calibration tools in debug builds.
4250b91 : Add kswapd experiment.
2c41fda : Add Proc Vendor Sched Sepolicy Fix
afc6c28 : Add recovery support for perf experiments.
6711886 : Revert "Allow tachyon service to make binder calls to GCA"
97f5022 : Allow tachyon service to make binder calls to GCA
5085275 : Add libg3a logging initrc files.
e93068e : Rename aocx.IAoc to aocx.IAoc/default to support stable AIDL
219845f : dump_chip_info: dump more tables from chip-info driver
d6b9cc4 : Introduce interrupts module for debug and trace
db25f03 : Revert^2 "gs-common: Move cpufreq perf settings to gs-common"
168f30d : Revert^2 "gs-common: Added common perf init.rc"
8fb8122 : gs-common/esim: include sysprop setupwizard.feature.provisioning_profile_mode
74283c5 : Revert "modem_svc: move shared_modem_platform related sepolicy t..."
e8884c9 : dump_gps: collect gps logs in ascending order
20bb328 : modem_svc: move shared_modem_platform related sepolicy to gs-common
064b50e : Add sepolicy for edgetpu_tachyon_service to report metrics
e3df39e : Document radioext_interface_type soong variable usage
83e7cc5 : Build lyric from source if prebuilt directory is missing.
0649754 : mediacodec: add GPU access policy
993506e : GRIL sepolicy for aidl radioext v2.1
350e262 : storage: turn off writebooster flags upon init
4213243 : gsc: Change the criteria for building GSC targets
5a063cc : audio: update hdmi audio path
9758650 : Revert^2 "Add GIA (Google Input interface Abstraction laye..."
303cf04 : sepolicy: Allow hal_gnss_pixel create file
e546ba5 : Give ContextHub HAL access to AOC version
c68ac04 : Revert "Add GIA (Google Input interface Abstraction layer) relat..."
f39a955 : Introduce Pixel mailbox module
cfedcac : Remove bug comment
3330640 : Revert "storage: Defer blkio class configuration"
872e432 : Replace many app service permissions with app_api_service
ea38f5c : Add widevine SELinux permissions for L1
2f08dd6 : Allow command line tools to access Tachyon service in user builds.
ba53a62 : Revert^2 "Add more access for GCA to edgetpu"
84d3523 : Revert "Add more access for GCA to edgetpu"
132ad09 : Add more access for GCA to edgetpu
cb2c9c9 : Consolidate gca permissions inside gs-common
8d4f1c1 : Allow fingerprint HAL to access IGoodixFingerprintDaemon
5c50cca : Add permissions for GCA to access various services
50930b4 : Allow grilservice_app to binder call twoshay
8ad4c5c : RamdumpService: Update the SELinux policy for Flood Control to use Firebase Cloud Firestore.
1f83bb1 : Add GIA (Google Input interface Abstraction layer) related SEPolicy rules and AIDL compatibility matrices.
0a17aca : Introduce dump_chip_info module
62abd5d : add sepolicy rules for bluetooth common hal dumpstate
1de5b57 : add bluetooth common hal sepolicy rules for bt subsystem crash info files
952e4d7 : bt: add dumpstate for bluetooth common hal
cea50c9 : Remove mitchp from OWNERS
b7d645e : mte: add nnk@google.com to OWNERS
4352bbc : Move camera type back to project
69ffa90 : Remove the duplicate gxp rule
d76dcdc : add sepolicy rules for bluetooth common hal
25ac4cc : Vibrator: Add enable_pwle_v2
c3a0ad4 : storage: adjust ufs error history design
016ddaf : introduce pixel bluetooth common hal service
afd55f9 : [Audio AIDL] Move audiometricext to HIDL only.
6b137ff : insmod.sh: Support 'rmmod' directive
570dfe1 : storage: support new UFS error history algorithm
2c8ec7e : dump_gps: Support bugreport extract resource info
0694376 : Remove DBA from edgetpu.mk
d7d26a5 : Disable bootstrap for UGS devices (sold in Canada)
8af77ef : gsc: Change the criteria for building GSC targets
93d8e4a : [chre-hal-xport] Add file_contexts for new xport
df68b9b : Add permission for mediacodec to bindercall camera hal
0af034b : storage: Defer blkio class configuration
f24bfe8 : ban hal_dumpstate_default from execute_no_trans
21b3ed1 : touch: Support SW_LID event from sensor HAL
0379e1a : display: add pixel display trace to bugreport
1822201 : sepolicy: remove irregular policy
3c88c19 : Update AIDL to v4.
1d9653d : Add common lib for libgc2 encoders and decoders
2971584 : dumpstate: touch: Add touch_predump for focaltech
7d24596 : dumpstate: touch: Add touch_predump for stm
d36b2b7 : vibrator: Add vibrator HAL flags
c398fe1 : Allow gmscore to read setupwizard_feature_prop
617a80e : display-dump: use generic panel path
15c9c33 : audio: add soong configs for debugging
1ea1cff : [USB Audio] Fix SEPolicy issue

+- Project: device/google/gs101

19c3844e : dump_power: add battery caretaker dump into bugreport
410cabfc : update health HAL dependency
4981fc30 : Add sched qos support
1736bb0c : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in gs101
c3aac00c : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in gs101
a343001f : Clean up unnecessary data_connection_5g_plus overlay
dda01746 : init: gs101: move sched rate limit to late init
89b65695 : Set soong config variables for libExynosC2H263Dec and libExynosC2H263Enc
59579ee4 : Disable Wifi BugReport for subsystem restart
bab24fc4 : init: make pmu_poll_enable node readable
6a9c8ba9 : Set soong config variable "board_use_dec_sw_csc" for libExynosVideoCodec
a32d4142 : Switch to using gs101 instead of valhall for GPU UMD
a3eb6a65 : Update ldaf sensor device filename
df3964ba : dump_power: gs101: correct dump path
17c4543a : Move video_codec soong config variables into board config
7bbc59ee : dumpstate: gs101: fix dump path
ecb4cc26 : Revert "Add IGoodixFingerprintDaemon aidl interface"
c2b251f4 : Add IGoodixFingerprintDaemon aidl interface
ce576fa1 : Set BOARD_LIBACRYL_G2D_HDR_PLUGIN for soong
47aaa4bf : Add a soong config variable for CitadelProvision
838d1086 : gs101: MCP: Set the vendor customized max cached processes to 1024.
b2af3259 : [Pixel RR] Apply reviewed default permissions
60ef8d69 : gs101: Disable kmem cgroup accounting
7ed8a558 : Usb: Add status check to prevent NPE
e471d738 : Relocate modem_svc_sit to proper places
164d01e0 : gs101: Copy insmod configs from kernel to vendor_dlkm
7f7e44cc : [task_profiles]Add MaxPerformance and PreferIdle to InputPolicy profile.
583bb024 : Set dexpreopt and dexopt filter for SystemUI
5734db5a : Change any use case of folder name apis to tachyon_apis to avoid api review
66db2ee5 : wifi: Upgrade vendor hal version

+- Project: device/google/gs101-sepolicy

5f17f078 : Update SELinux error
f20c8a90 : modem_svc: move shared_modem_platform related sepolicy to gs-common
4a732d5e : Update SELinux error
9d43b259 : Revert "modem_svc: move shared_modem_platform related sepolicy t..."
94e8fa7a : modem_svc: move shared_modem_platform related sepolicy to gs-common
c8cc2683 : Update SELinux error
a6019b0c : Update SELinux error
4e105e14 : Update SELinux error
1df8457f : Update ldaf sensor device filename
c025f491 : sepolicy: allow dump_power to read debugfs
c8f947be : Remove cgroup_desc_file bugs.
af68091a : modem_svc: use shared_modem_platform to replace all modem_svc_sit
d338373c : Update SELinux error
a5766d42 : Update SELinux error
e746382d : sepolicy: allow dumpstate to execute dump_power
7561dcc9 : Remove duplicate service entries
57c566b2 : Update SELinux error
f5714487 : Update SELinux error

+- Project: device/google/gs201

515504b : Enable WIFI_FEATURE_HOSTAPD_11AX
2654550 : dump_power: add battery caretaker dump into bugreport
a13d935 : [Pixel VPN] Apply reviewed default permissions
cadf608 : update health HAL dependency
6eded85 : Add sched qos support
81a804e : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in gs201
a477bea : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in gs201
fac8b49 : Clean up unnecessary data_connection_5g_plus overlay
d817c86 : init: gs201: move sched rate limit to late init
f5e3505 : Add soong config use_google_qns to device/google/gs201/device.mk
e89e33a : Set soong config variables for libExynosC2H263Dec and libExynosC2H263Enc
f48f262 : init: make pmu_poll_enable node readable
2451f7a : Set soong config variable "board_use_dec_sw_csc" for libExynosVideoCodec
ba05fcc : Update ldaf sensor device filename
7d14002 : dump_power: gs201: correct dump path
837de89 : Move video_codec soong config variables into board config
12dd9e3 : Use full namespace path for BOARD_WPA_SUPPLICANT_PRIVATE_LIB
8202fc2 : dumpstate: gs201: fix dump path
e4f334f : Revert "Add IGoodixFingerprintDaemon aidl interface"
aed4820 : Add IGoodixFingerprintDaemon aidl interface
d764715 : Set BOARD_LIBACRYL_G2D_HDR_PLUGIN for soong
33f3bf6 : Add a soong config variable for CitadelProvision
9826c8d : init.gs201.rc: change permission of SJTAG writable files
08cc782 : gs201: MCP: Set the vendor customized max cached processes to 1024.
c40d53b : [Pixel RR] Apply reviewed default permissions
0bfc854 : gs201: Disable kmem cgroup accounting
f22f211 : Usb: Add status check to prevent NPE
617251a : Relocate modem_svc_sit to proper places
3ab1329 : gs201: Copy insmod configs from kernel to vendor_dlkm
2fef0de : gs201: Remove unused BFQ I/O scheduler configuration
28382ce : [task_profiles]Add MaxPerformance and PreferIdle to InputPolicy profile.
169b08d : Set dexpreopt and dexopt filter for SystemUI
991aeb8 : Change any use case of folder name apis to tachyon_apis to avoid api review
14885d6 : wifi: Upgrade vendor hal version
44855fc : dumpstate: Modify dumpTcpc path and content
24d312d : dumpstate: Dump logbuffer_pogo_transport

+- Project: device/google/gs201-sepolicy

d1f806c : modem_svc: move shared_modem_platform related sepolicy to gs-common
438a3ed : Update SELinux error
a3d0621 : Allow tachyon service to make binder calls to GCA
8059774 : Update SELinux error
0c22bea : Update SELinux error
2c027c6 : Revert "modem_svc: move shared_modem_platform related sepolicy t..."
1b9fcdf : modem_svc: move shared_modem_platform related sepolicy to gs-common
cde7e14 : Update ldaf sensor device filename
edc0829 : Update SELinux error
4f11538 : Update SELinux error
8b6e654 : sepolicy: allow dump_power to read battery_history_device
d2f8dde : Update SELinux error
491a1cc : sepolicy: allow dump_power to read debugfs
1b64d05 : Remove duplicate service entries
6497d42 : Revert "Update SELinux error"
5000f8a : Update SELinux error
588e82a : convert-to-ext4-sh.te: use su domain instead
f906b69 : modem_svc: use shared_modem_platform to replace all modem_svc_sit
ce5420f : Update SELinux error
315cc63 : sepolicy: allow dumpstate to execute dump_power
eb84e9c : Update SELinux error
3aeae9b : Update SELinux error

+- Project: device/google/lynx

b7fb9e1 : modem_svc: use modem_svc_sit version sepolicy
b7ac236 : Disable Wifi BugReport for subsystem restart
ae2ed69 : Update ISODEP routing setting
d510169 : [lynx] Define CAMERA_MULTICAM_BOOST
b01091a : modem_svc: use shared_modem_platform to replace all modem_svc_sit
22dd220 : lynx: Pull init.insmod.*.cfg from vendor_dlkm
473d219 : Move modem_svc_sit from gs201 to lynx
117b115 : Remove vibrator HAL service
1019f75 : dumpstate: touch: Init using touch_predump
cd8d3c9 : gps: set default SUPL SSL method to SSLv23

+- Project: device/google/pantah

6f0b328 : modem_svc: use modem_svc_sit version sepolicy
2eb5220 : Revert "pantah/haptics: Remove voltage restriction for haptics"
6606b83 : pantah/haptics: Remove voltage restriction for haptics
05b6cbe : Update P22 Bluetooth LEA unicast allowlist: Samsung Galaxy Buds 3 pro
4258e91 : Disable Wifi BugReport for subsystem restart
38c5d36 : Update ISODEP routing setting
99a2848 : [pantah] Define CAMERA_MULTICAM_BOOST
d214549 : modem_svc: use shared_modem_platform to replace all modem_svc_sit
e87bccc : pantah: Pull init.insmod.*.cfg from vendor_dlkm
cedffc0 : Move modem_svc_sit from gs201 to pantah
8dbfc34 : Remove vibrator HAL service
58d6159 : audio: enable software encoded Bluetooth broadcast
1ff2576 : gps: set default SUPL SSL method to SSLv23

+- Project: device/google/raviole

5003ac58 : modem_svc: use modem_svc_sit version sepolicy
d009e5b1 : Add PLAYVIDEOS_VERSION_DIR and PRODUCT_SOONG_NAMESPACES for Videos.
013069b4 : modem_svc: use shared_modem_platform to replace all modem_svc_sit
1ea54293 : raviole: Pull init.insmod.*.cfg from vendor_dlkm
cf891d2e : Move modem_svc_sit from gs101 to raviole
18a37326 : Remove vibrator HAL service
dfdffd07 : audio: enable software encoded Bluetooth broadcast
96a0ec11 : gps: set default SUPL SSL method to SSLv23

+- Project: device/google/shusky

7571b12 : Add a way to disable auto prefer fit for launch.
fc98bc4 : audio: align volume curve
a0670ae : Revert "shusky/haptics: Remove voltage restriction for haptics"
53e6a3d : shusky: Remove dbc properties for fw 7.2.81.
dcc3255 : shusky/haptics: Remove voltage restriction for haptics
eb79ff9 : PowerHint: Refine FIXED_PERFORMANCE mode CPU Frequencies
3f93d0e : Revert "thermal: Add JSON Schema Checker"
882ac42 : Add Samsung Galaxy Buds 3 pro to the LE audio allow list
758dd12 : Revert "powerhint: enable auto_prefer_idle in games"
c93b55b : bt: add bthal service permission to access bt wakelock control device node
fc28f05 : Enable bthal service recovery by restart
6b0cee8 : shusky: add microphone info for aidl hal.
5399e1c : Disable Wifi BugReport for subsystem restart
4a40152 : thermal: update thermal config
111f28d : [NFC] Enable STNFC_ACTIVERW_TIMER
2ca729d : Remove 'RELEASE_PIXEL_BROADCAST_ENABLED'
e0635d2 : Enable CDPreferHighCap for CAMERA_STREAMING_HIGH
ffdcb59 : 16k: Move BoardConfig-shusky-common.mk to device/google/zuma/BoardConfig-16k-common.mk
c4818d8 : 16kb: Set 16kb TARGET_ vars in BoardConfig files and targets
71af04a : 16kb: Use PRODUCT_BOOTS_16K to select the kernel and fs
c12042d : powerhint: enable auto_prefer_idle in games
efa15cc : Enable TA and FG prefer idle for some camera streams
41f7d06 : Remove vibrator HAL service
54e593d : 16k: Switch AOSP shusky 16k targets to use ext4 for RW filesystem
7ebc41a : Disable stereo spatialization.
11aaff6 : gps: set default SUPL SSL method to SSLv23
2609a6d : thermal: remove duplicate counting power rail
f3fe769 : audio: enable software encoded Bluetooth broadcast
2b1c430 : Use auto prefer fit for launch
d5729b0 : [shusky] Define CAMERA_MULTICAM_BOOST
f9be9f1 : dumpstate: touch: Init using touch_predump

+- Project: device/google/tangorpro

dc1995c : modem_svc: remove shared_modem_platform from T6pro
71f464d : modem_svc: make shared_modem_platform build empty in tangorpro
af98660 : Enable TAPreferHighCap for first frame
da4b0d2 : Revert "modem_svc: remove shared_modem_platform from T6pro"
b32c27d : modem_svc: remove shared_modem_platform from T6pro
ba93e73 : Add PLAYVIDEOS_VERSION_DIR and PRODUCT_SOONG_NAMESPACES for Videos in tangorpro.
2b73d48 : [Revert^2] Use mediadrm from private instead of tangorpro
4329839 : modem_svc: use shared_modem_platform to replace all modem_svc_sit
439d11d : Move $(PHONE_CAR_BOARD_PRODUCT)/BoardConfig.mk to the end
075b408 : tangorpro: Pull init.insmod.*.cfg from vendor_dlkm
8bb8258 : dumpstate: touch: Init using touch_predump

+- Project: device/google/tangorpro-sepolicy

5d140bd : Use SELinux rules from private instead of tangorpro for MediaDrm plugin
af493c2 : Remove unused audio_proxy sepolicy

+- Project: device/google/trout

73324e4 : Snap for 12710726 from 45db0fe73fe12717506a359aee4d0dfe544bb26d to 25Q1-release

+- Project: device/google/zuma

650bdd0 : 16KB: zuma: Do not filter out goodix_brl_touch.ko for 16KB mode
0824da9 : audio: fix headtracking permission for spatializer offload playback
413efc2 : dump_power: add battery caretaker dump into bugreport
a192558 : [Pixel VPN] Apply reviewed default permissions
b6c6a96 : modem_svc: use modem_svc_sit version sepolicy
9cc872d : Add sched qos support
f130ff4 : update health HAL dependency
2620d19 : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in zuma
7463e54 : Fix kasan logic
e9866cc : Clean up unnecessary data_connection_5g_plus overlay
a8a6c8f : 16kb: zuma: Filter out unnecessary modules from 16k mode
ff7717d : init: zuma: move sched rate limit to late init
d1cf23e : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in zuma
1f92d2a : Add hardware/google/graphics/zuma/libhwc2.1 to PRODUCT_SOONG_NAMESPACES
2908e97 : Allow metadata to be formatted as ext4
74ccf54 : Enable usb state update via udc sysfs
43c6cf5 : Add soong config use_google_qns in zuma
d827c45 : Set soong config variables for libExynosC2H263Dec and libExynosC2H263Enc
2047505 : Set soong config variable "board_use_dec_sw_csc" for libExynosVideoCodec
bc49636 : dump_power: zuma: correct dump path
ca66ae1 : Move video_codec soong config variables into board config
cc52939 : bcl: fix ocp_gpu_lvl in init.zuma.rc
8873769 : Reland "Add detailed error log to efs test"
42abd95 : Revert "Add detailed error log to efs test"
0ca2ed5 : Revert "Add IGoodixFingerprintDaemon aidl interface"
082fc23 : Add IGoodixFingerprintDaemon aidl interface
d0bdf1d : Revert^3 "Enable usb state update via udc sysfs"
2676f4b : Enable MTE in -eng builds on zuma devices.
32b6825 : Set BOARD_LIBACRYL_G2D_HDR_PLUGIN for soong
b7cc955 : Add detailed error log to efs test
03b2a5a : zuma: MCP: Set the vendor customized max cached processes to 1024.
4170eb8 : Update display dim configuration
9e1bb10 : 16k: Ignore 16k kernel settings if 16kb folder doesn't exist
fbedb6f : 16k: Move logic from device/google/shusky/BoardConfig-shusky-common.mk to zuma/BoardConfig-16k-common.mk
e192e6f : 16kb: Use TARGET_BOOTS_16K to select the efs config files
0420e6d : 16kb: Use PRODUCT_BOOTS_16K to select the proper rc file and fstab
d2b8339 : [Pixel RR] Apply reviewed default permissions
9106927 : Fix LE Audio sysprops typos in makefiles
a6c7198 : Set auto prefer idle task name
23cee31 : Usb: Add status check to prevent NPE
1df1b01 : [chre-hal-xport] Give permissions for new xport
115208a : modem_svc: use shared_modem_platform to replace all modem_svc_sit
0911e1e : [task_profiles]Add MaxPerformance and PreferIdle to InputPolicy profile.
72946bd : Zuma: Enable Secretkeeper on aosp targets
7a32e9d : dumpstate: Use generic dc_mains for all parallel chargers
a53df11 : Change any use case of folder name apis to tachyon_apis to avoid api review
5e2b322 : use 80x80 as the minimal resolution
7512629 : wifi: Upgrade vendor hal version

+- Project: device/google/zuma-sepolicy

b81b342 : Update SELinux error
4b9ca7c : modem_svc: move shared_modem_platform related sepolicy to gs-common
1b7a5a0 : Allow tachyon service to make binder calls to GCA
9f0f02d : Update SELinux error
9f5ced1 : Update SELinux error
b7ab33d : Update SELinux error
3c17e28 : Add udc sysfs to udc_sysfs fs context
9880272 : Revert "modem_svc: move shared_modem_platform related sepolicy t..."
41e0d76 : modem_svc: move shared_modem_platform related sepolicy to gs-common
80c32be : Update SELinux error
5515229 : Update SELinux error
6f1672a : Update SELinux error
139f530 : Revert^3 "Add udc sysfs to udc_sysfs fs context"
bf1d975 : Revert "Update SELinux error"
c2660d9 : modem_svc: use shared_modem_platform to replace all modem_svc_sit
a6eb313 : Update SELinux error
d898a7a : Update SELinux error
ce7cdaa : Allow systemui_app to set 'debug.tracing.desktop_mode_visible_tasks' system property
f688a56 : Remove duplicate service entries
c6822be : Update SELinux error
e40a281 : Fix error in systemui when toggling airplane mode
e6639e9 : Update SELinux error
1492b49 : Update sepolicy for nfc antenna selftest values

+- Project: device/google/zumapro

debbc81 : Reject e911 call during non-emergency satellite session.
2d3c2c6 : dump_power: add battery caretaker dump into bugreport
97ce99c : [Pixel VPN] Apply reviewed default permissions
5d037f2 : Configure satellite NIDD APN name for Zuma pro devices
d0338c0 : Add country codes of Canada and EU countries to the satellite allowed list
00419ae : Add enhanced geofencing data and satellite access config json for Zuma Pro
3dd14b5 : Add enhanced geofencing data and satellite access config json for Zuma Pro
6db5e8e : modem_svc: use modem_svc_sit version sepolicy
20ca1e9 : Set permission for rampup_multiplier
1c26820 : Use SCHED_QOS_SENSITIVE_EXTREME_SET for InputPolicy
6e9b0db : Add SCHED_QOS_POWER_EFFICIENCY profiles
36fdcd9 : update health HAL dependency
a547826 : Add `config_satellite_carrier_roaming_esos_provisioned_class` for the intent to trigger satellite provisioning
312ec7c : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in zumapro
5c61b70 : Add `config_satellite_allow_tn_scanning_during_satellite_session` flag.
bc9e47d : Fix kasan logic
4307055 : Clean up unnecessary data_connection_5g_plus overlay
0d33b2a : Vibrator: Update compatibility matrix
81cf0cf : [NTN][VZW P2P] Add config to enable sending check message datagrams when satellite modem is not connected.
7f6e9bb : init: zuma: move sched rate limit to late init
1887e6e : Change TARGET_RECOVERY_UI_LIB to use fully qualified names in zumapro
673b99f : Extend satellite support to Puerto Rico as a US territory
2aec743 : init.zumapro.board.rc: bluetooth own uart debug node
343c37a : Add hardware/google/graphics/zumapro/libhwc2.1 to PRODUCT_SOONG_NAMESPACES
4ee7fad : Revert^2 "devices: Move cpufreq perf settings to gs-common"
675c35b : Opt zumapro into RE-Graphite aconfig-based preview rollout
40d3412 : Allow metadata to be formatted as ext4
7d1e64d : opt in bt channel sounding for p24
5b478fd : Increase HCI command timeout for BT AOC passthrough
e1c5eb3 : Add soong config use_google_qns in zumapro
ec810e8 : Set soong config variables for libExynosC2H263Dec and libExynosC2H263Enc
5e05405 : Fixes issue where GIA keeps crashing on next release builds.
8f762b9 : Revert^2 "Add GIA (Google Input interface Abstraction laye..."
ea2353c : correct the naming of frame_interval_ns and expected_present_time_ns
2424e33 : Set soong config variable "board_use_dec_sw_csc" for libExynosVideoCodec
1ef792a : Move modem setup to init.persist.rc
a404117 : Revert "Add GIA (Google Input interface Abstraction layer) for z..."
3b4a0be : Fixes issue where ripcurrentpro devices ROM build will fail with GIA
fdbfd00 : Add config to disable satellite notifications in telephony for zumapro
478be30 : usb: support device state monitoring with internal hub enabled
2d2a42e : usb: dump port name for usb device state update
7a3202c : usb: clean up monitor thread in UsbDataSessionMonitor destructor
3f193d6 : dump_power: zumapro: correct dump path
7b57770 : Enable Secretkeeper HAL service on zumapro
512af9c : zumapro: always create the symlink of trusty_persist
4772281 : Move video_codec soong config variables into board config
e0e5b17 : Add dynamic color aspects support for vp8 decoder
c61d1d0 : Add p24 perf setup script
a419a7b : Add GIA (Google Input interface Abstraction layer) for zumapro devices.
ab4b51e : 16kb: zumapro: Verify that zumapro prebuilts are 16kb elf aligned
b625b7f : Revert "Add IGoodixFingerprintDaemon aidl interface"
fd73068 : Add IGoodixFingerprintDaemon aidl interface
473b174 : [Satellite] Changed start of satellite non-emergency mode in hidden menu to send intent instead of directly invoking satellite manager
2bf5d31 : Enable MTE in -eng builds on zumapro devices.
19ab0df : Set permission for sched qos nodes.
30c7fad : Enable sched qos for vendor groups
c0dae61 : Add sched qos profiles
41b27fe : Set BOARD_LIBACRYL_G2D_HDR_PLUGIN for soong
8033193 : usb: report compliance warning to framework
0105698 : zumapro: MCP: Set the vendor customized max cached processes to 1024.
2b041db : Update display dim configuration
b710f75 : VSS to be removed from Zumapro
2a8f680 : Mount efs and persist partitions at right stage for 4kb and 16kb
cee21e4 : Add manifest for VINTF target FCM level 202404
fdb10a4 : [Pixel RR] Apply reviewed default permissions
3f62c6b : Fix LE Audio sysprops typos in makefiles
76b662b : Set auto prefer idle task name
e5f8be3 : Usb: Add status check to prevent NPE
89ab3ca : Remove device/google/zumapro/Android.mk which is not needed any more since there are no other Android.mk files in subdirectories.
60dc676 : Copy files on efs/efs_backup/modem_userdata/persist partitions to /data in 16kb mode
e6525d2 : [chre-hal-xport] Give permissions for new xport
49a4d98 : Move setup of the persist partition to init.persist.rc
9111a59 : Copy fstab from zuma to zumapro for 16kB developer options
1285fd9 : init.zumapro.rc: change permission of SJTAG writable files
562f2de : [task_profiles]Add MaxPerformance and PreferIdle to InputPolicy profile.
9af16ad : dumpstate: Use generic dc_mains for all parallel chargers
cb436af : Change any use case of folder name api to tachyon_api to avoid api review
ea7bd38 : usb: Remove project-specific file modification
1a610d1 : set the minimal resolution as 80x80
7c7154d : Support FEATURE_HlgEditing for MPC15
5fcf705 : Support FEATURE_DynamicColorAspect for MPC15
2e83b04 : Set priority task name
27a5772 : wifi: Upgrade vendor hal version

+- Project: device/google/zumapro-sepolicy

db19f52 : Update SELinux error
862fbd7 : modem_svc: move shared_modem_platform related sepolicy to gs-common
1e5b6fb : Allow tachyon service to make binder calls to GCA
3057025 : Update SELinux error
a9b6884 : allow hal_bluetooth_btlinux write sysfs file
c22f870 : port display sysfs access
afb2839 : Add hal_shared_modem_platform to modem_diagnostic_app.te
57bf47f : add permission for hl7132 sysfs
1004368 : Update SELinux error
ec3dae0 : Update the PMS app seinfo for the certification change.
0d60be5 : Update SELinux error
62f34d8 : Revert "modem_svc: move shared_modem_platform related sepolicy t..."
7e11c79 : modem_svc: move shared_modem_platform related sepolicy to gs-common
78eaa18 : Support access to radioext service over AIDL
9faa399 : Update SELinux error
351ceac : Update SELinux error
233610e : correct frame_interval_ns and expected_present_time_ns naming
30306a3 : shamp: remove fixed bug from bugmap
f8891af : sepolicy: add label for logbuffer
2fe9123 : Update SELinux error
31d6e22 : Update SELinux error
d03f77d : Update SELinux error
35b65db : logger_app: allow logger_app to access persist.vendor.tcpdump.capture.len for logger_app
dde3987 : Update SELinux error
f1471f5 : Share same seinfo between propsetter app and GCA.
0e859b8 : Allow fingerprint HAL to access IGoodixFingerprintDaemon
7c85388 : Copy 16KB developer option sepolicy to zumapro
c5a7f8c : Add SELinux for camera debug app (propsetter)
537bf14 : add selinux permission for fps_touch_handler wakeup
3c83ed0 : Fix modem_logging_control sepolicy error
f43ae7b : Revert "sepolicy:tracking_denials: add btlinux vendor_aoc_prop"
f39431c : Remove duplicate service entries
693260c : remove b/350830796 and b/350830680 from bug map
ac26d97 : Allow hal_fingerprint_default to access sysfs_aoc_udfps
bf729b7 : Update SELinux error
a0407ea : Remove b/340369535 hal_audio_default from bug map
644a742 : Remove SELinux error tracing bug
81f027f : modem_svc: update sepolicy for UMI
a59097a : Update SELinux error
ad0fc36 : Fix error in systemui when toggling airplane mode
1ded01d : Update SELinux error

+- Project: device/google_car

a50ef88 : Update google_car to use current VHAL version
140a633 : Disable ro.adb.secure only for debug builds
5ed6e35 : Disable android.hardware.broadcastradio.
d12434a : Enable android.hardware.camera.concurrent
761fb08 : Remove android.hardware.identity_credential feature.
4ee62e9 : Remove android.hardware.context_hub feature.
e918ead : Refine unavailable features
5a4e5b2 : Enable Wifi STA + AP mode
7da2487 : Set density of tangorpro_car_cw to 150

+- Project: device/linaro/dragonboard

d9f5a35 : manifest.xml: drop android.hardware.bluetooth hal
78535e5 : manifest.xml: drop composer hal
3d90cfe : Revert "sepolicy/set_udc.te: fix build error on sysfs_udc type"
d3174ab : sepolicy/set_udc.te: fix build error on sysfs_udc type
75383d1 : sm8x50: Add support for sm8450-{hdk,qrd} and sm8650-{hdk,mtp}
c196acb : dragonboard: Use drm_hwcomposer hwc3 implementation
4203c4a : sm8x50: Make rules for adding overlays to the prebuilts

+- Project: device/sample

a8b7652 : APN Bug - Modify/Add APN Configuration for KOODO - 352352887
098c816 : APN Bug - Modify/Add APN Configuration for TELUS. Issue ID 352352886
5e7e07b : Change APN name for 20412 according to KPN request in b/373613975
e204ab2 : Modify Jawwal apn to sample apns
4741aa0 : Updating Orange Spain APN settings
a6bdc1a : Add apns-full-conf.xml module for Soong-built GSI
be30191 : Add APN configuration for Rcell (2646)
0bcfbd8 : apn: Update APN configurations for Carrier Pivotel
f02356c : Fixing shatelmobile carrier name from shatelmonil to shatelmobile
3e28e13 : Added mobi APN config for mcc313mnc460
ccc3ec6 : Removing Orange Entreprise APN configuration as requested by Orange France.
c3d1205 : Add Sparkle APN
68312a6 : Adding CarrierID to Webbing settings
1011799 : Add TATA MOVE apn for TATA Netherland(204-07,901-54) and Tata UK (234-27)
e70b781 : Merge internet and MMS APN for H3G to fix MMS issue
67f0b59 : APN Bug - Modify/Add APN Configuration for Public Mobile - 352345932
3418446 : APN Bug - Remove APN for Koodo - 352345931
f20ebaf : APN Bug - Remove APN Configuration for TELUS Mobility - 352345930

+- Project: platform/external/AFLplusplus

a7b83f62 : Pin to C17.

+- Project: platform/external/OpenCSD

7323ae8 : opencsd: Update Version for v1.5.4
81ea406 : opencsd: docs: Update for new configuration flags.
8933175 : opencsd: testing: Add options to test program
2ac12a5 : opencsd: etmv4: ete: Add checks for bad program image
2a3cff7 : opencsd: Add error types for bad program image.

+- Project: platform/external/TestParameterInjector

b399e81 : Edit OWNERS file
3900df0 : Allow car CTS to use TestParameterInjector.
8d1ab59 : Update time info, add robolectric as a dep, target junit4 sources
af50563 : Internal cleanups regarding createTestMethodProcessorList
a30fd37 : Update CHANGLOG with an entry for 1.18
66c58c2 : Bump version to v1.18 in README.md
5ea6810 : TestParameterInjector: Make implemented JUnit methods computeTestMethods(), methodBlock(), methodInvoker() public.
ae35863 : Bump version to v1.17 in README.md
d0a2a56 : Attempt to fix github-pages-deploy-action by updating src directory
03e76c6 : Attempt to fix github-pages-deploy-action by renaming parameters
7442607 : Increment github-pages-deploy-action version
bcb37a3 : Add support for parsing java.time.Duration from a string
9cf4a6c : TestParametersValuesProvider: Update javadoc now that TestParameters.TestParametersValuesProvider has been deprecated
e68d3f1 : Refactor code to allow Google-internal TestParametersValuesProvider interface to be deleted in favor of the new context aware version
97ac8c3 : README: Update TestParametersValuesProvider example to use new provider with context
123740e : Deprecate TestParameters.TestParametersValuesProvider
d8957ca : JUnit5 test: Migrate from TestParameters.TestParametersValuesProvider to context aware TestParametersValuesProvider
87d2b35 : JUnit5 test: Migrate from TestParameter.TestParameterValuesProvider to context aware TestParameterValuesProvider
96ba33a : Fix typo on getOtherAnnotations()
0510f11 : Fix javadoc of getOtherAnnotations
19a00a7 : Bump version to v1.16 in README.md
f64b872 : Add to CHANGELOG: When generating test names for enum values, the enum name is used instead of its toString() method
6631620 : Avoid metadata when putting enum values into the test name.
03a97fa : Support proto enum aliases: Update CHANGELOG and add documentation
9539541 : Support protobuf enum aliases by allowing the parsing of static fields on the enum
e7f3d29 : Convert incorrectly YAML-parsed booleans back to their enum values when possible
02c9d09 : TestParameterValuesProvider: Fix typo in javadoc
2f7f349 : TestParameterValuesProvider: Update obsolete comment
cb5ab94 : Add support for repeated annotations to TestParameterValuesProvider.Context

+- Project: platform/external/XNNPACK

5b3d182c3 : Remove unused -Wno-implicit-function-declaration.

+- Project: platform/external/aac

d950611 : Initial version of aacdec benchmark

+- Project: platform/external/abseil-cpp

7478ba71 : Add visibility for libtextclassifier and its dependents.
c32b93c1 : Add attributes and visibility for TensorFlow.
b512dc6d : Add myself as additional owner for ABSL.
38a58c08 : Allow all janitors to approve ABSL changes.
b3a5a589 : Expand visibility to WebRTC and its users.
4447c756 : Update GoogleTest dependency to 1.15.2 (#1736)
7e5c339b : Cherry-picks for Release Candidate 2 (#1727)
ebdba5af : Apply LTS transformations for 20240722 LTS branch (#1724)
33582861 : Fix LINT.IfChange syntax
3cb49889 : PR #1720: Fix spelling mistake: occurrance -> occurrence
f754f2b9 : Add missing include for Windows ASAN configuration in poison.cc
0598e582 : Delete absl/strings/internal/has_absl_stringify.h now that the GoogleTest version we depend on uses the public file
7795d084 : Update versions of dependencies in preparation for release
eb852207 : PR #1699: Add option to build with MSVC static runtime
65ede0a3 : Remove unneeded 'be' from comment.
646126a4 : PR #1715: Generate options.h using CMake only once
b86d574c : Small type fix in absl/log/internal/log_impl.h
cd75cb4a : PR #1709: Handle RPATH CMake configuration
26ee072e : PR #1710: fixup! PR #1707: Fixup absl_random compile breakage in Apple ARM64 targets
db1255ca : PR #1695: Fix time library build for Apple platforms
cd7f66ca : Remove cyclic cmake dependency that breaks in cmake 3.30.0
5b6285e7 : Roll forward poisoned pointer API and fix portability issues.
bb50cad0 : Use GetStatus in IsOkAndHoldsMatcher
6dee1532 : PR #1707: Fixup absl_random compile breakage in Apple ARM64 targets
f46495ea : PR #1706: Require CMake version 3.16
af4c589e : Add an MSVC implementation of ABSL_ATTRIBUTE_LIFETIME_BOUND
074a32af : Mark c_min_element, c_max_element, and c_minmax_element as constexpr in C++17.
eb46a63d : Optimize the absl::GetFlag cost for most non built-in flag types (including string).
6e701508 : Encode some additional metadata when writing protobuf-encoded logs.
0d9c2fc7 : Replace signed integer overflow, since that's undefined behavior, with unsigned integer overflow.
f36d3331 : Make mutable CompressedTuple::get() constexpr.
1278ee9b : vdso_support: support DT_GNU_HASH
37ebde53 : Make c_begin, c_end, and c_distance conditionally constexpr.
a2766235 : Add operator<=> comparison to absl::Time and absl::Duration.
649f5892 : Deprecate `ABSL_ATTRIBUTE_NORETURN` in favor of the `[[noreturn]]` standardized in C++11
57f04ad8 : Rollback new poisoned pointer API
4eb81046 : Static cast instead of reinterpret cast raw hash set slots as casting from void* to T* is well defined
a7c5f985 : Fix absl::NoDestructor documentation about its use as a global
d4cf6b71 : Declare Rust demangling feature-complete.
f3725a74 : Split demangle_internal into a tree of smaller libraries.
4b9a55fd : Decode Rust Punycode when it's not too long.
0ccc51f9 : Add assertions to detect reentrance in `IterateOverFullSlots` and `absl::erase_if`.
16452e14 : Decoder for Rust-style Punycode encodings of bounded length.
63d4b2fe : Add `c_contains()` and `c_contains_subrange()` to `absl/algorithm/container.h`.
c98bd9c8 : Three-way comparison spaceship <=> operators for Cord.
3ff94461 : internal-only change
9957f276 : Remove erroneous preprocessor branch on SGX_SIM.
e486af70 : Add an internal API to get a poisoned pointer.
a305e859 : optimization.h: Add missing <utility> header for C++
72dde987 : Add a compile test for headers that require C compatibility
74f8c1ea : Fix comment typo
0d5c20a2 : Expand documentation for SetGlobalVLogLevel and SetVLogLevel.
1ee05f28 : Roll back 6f972e239f668fa29cab43d7968692cd285997a9
6f972e23 : PR #1692: Add missing `<utility>` include
8a28a0c8 : Remove NOLINT for `#include <new>` for __cpp_lib_launder
0f29d3e8 : Remove not used after all kAllowRemoveReentrance parameter from IterateOverFullSlots.
10ac811f : Create `absl::container_internal::c_for_each_fast` for SwissTable.
93763764 : Disable flaky test cases in kernel_timeout_internal_test.
e1814101 : Document that swisstable and b-tree containers are not exception-safe.
69195d5b : Add `ABSL_NULLABILITY_COMPATIBLE` attribute.
b4e4b625 : LSC: Move expensive variables on their last use to avoid copies.
1315c900 : Add ABSL_INTERNAL_ATTRIBUTE_VIEW and ABSL_INTERNAL_ATTRIBUTE_OWNER attributes to more types in Abseil
f04e4890 : Drop std:: qualification from integer types like uint64_t.
9755364a : Increase slop time on MSVC in PerThreadSemTest.Timeouts again due to continued flakiness.
33dca3ef : Turn on validation for out of bounds MockUniform in MockingBitGen
7c03b80e : Use ABSL_UNREACHABLE() instead of equivalent
7c17d8bc : If so configured, report which part of a C++ mangled name didn't parse.
fc761208 : Sequence of 1-to-4 values with prefix sum to support Punycode decoding.
17137c08 : Add the missing inline namespace to the nullability files
567ebd05 : Add ABSL_INTERNAL_ATTRIBUTE_VIEW and ABSL_INTERNAL_ATTRIBUTE_OWNER attributes to types in Abseil
a0889af0 : Disallow reentrance removal in `absl::erase_if`.
cb319b3e : Fix implicit conversion of temporary bitgen to BitGenRef
1d401d9c : Use `IterateOverFullSlots` in `absl::erase_if` for hash table.
d30298a1 : UTF-8 encoding library to support Rust Punycode decoding.
96cdf6cc : Disable negative NaN float ostream format checking on RISC-V
2fc843ef : PR #1689: Minor: Add missing quotes in CMake string view library definition
5195c35d : Demangle template parameter object names, TA <template-arg>.
2f61aed1 : Demangle sr St <simple-id> <simple-id>, a dubious encoding found in the wild.
696b3278 : Try not to lose easy type combinators in S::operator const int*() and the like.
f875817b : Demangle fixed-width floating-point types, DF....
3941dc41 : Demangle _BitInt types DB..., DU....
9140cc7b : Demangle complex floating-point literals.
c6000317 : Demangle <extended-qualifier> in types, e.g., U5AS128 for address_space(128).
c586e8d8 : Demangle operator co_await (aw).
61e721f4 : Demangle fully general vendor extended types (any <template-args>).
59d0a7d1 : Demangle transaction-safety notations GTt and Dx.
6e607350 : Demangle C++11 user-defined literal operator functions.
2a40eb60 : Demangle C++20 constrained friend names, F (<source-name> | <operator-name>).
0cd50e6e : Demangle dependent GNU vector extension types, Dv <expression> _ <type>.
586a541d : Demangle elaborated type names, (Ts | Tu | Te) <name>.
66ef711d : Add validation that hash/eq functors are consistent, meaning that `eq(k1, k2) -> hash(k1) == hash(k2)`.
ed34153e : Demangle delete-expressions with the global-scope operator, gs (dl | da) ....
ffa1e4a5 : Demangle new-expressions with braced-init-lists.
e7a5d7ac : Demangle array new-expressions, [gs] na ....
54e1f14c : Demangle object new-expressions, [gs] nw ....
fe43a4cb : Demangle preincrement and predecrement, pp_... and mm_....
aad792d4 : Demangle throw and rethrow (tw... and tr).
9e72bd67 : Remove redundant check of is_soo() while prefetching heap blocks.
49e0099a : Demangle ti... and te... expressions (typeid).
8ece6dc4 : Demangle nx... syntax for noexcept(e) as an expression in a dependent signature.
699fcf35 : Demangle alignof expressions, at... and az....
8322d3ab : Demangle C++17 structured bindings, DC...E.
cba68bb9 : Demangle modern _ZGR..._ symbols.
29bd16cb : Remove redundant check of is_soo() while prefetching heap blocks.
8777d440 : Demangle sizeof...(pack captured from an alias template), sP ... E.
d8e17c00 : Demangle types nested under vendor extended types.
36d1644b : Demangle il ... E syntax (braced list other than direct-list-initialization).
b0e72168 : Avoid signed overflow for Ed <number> _ manglings with large <number>s.
9645a2fb : Remove redundant check of is_soo() while prefetching heap blocks.
4953bbcd : Remove obsolete TODO
65dfbf2b : Clarify function comment for `erase` by stating that this idiom only works for "some" standard containers.
d06b8277 : Move SOVERSION to global CMakeLists, apply SOVERSION to DLL
0d9746ac : Set ABSL_HAVE_THREAD_LOCAL to 1 on all platforms
9605d816 : Demangle constrained auto types (Dk <type-constraint>).
9a2da1a4 : Parse <discriminator> more accurately.
c8671e75 : Demangle lambdas in class member functions' default arguments.
36c2a14c : Demangle unofficial <unresolved-qualifier-level> encodings like S0_IT_E.
65a55c2b : Do not make std::filesystem::path hash available for macOS <10.15
3f9c3255 : Include flags in DLL build (non-Windows only)
7ce797f2 : Enable building monolithic shared library on macOS and Linux.
64457068 : Demangle Clang's last-resort notation _SUBSTPACK_.
77d0ac71 : Demangle C++ requires-expressions with parameters (rQ ... E).
abc0f8d1 : Demangle Clang's encoding of __attribute__((enable_if(condition, "message"))).
1f5a9cdc : Demangle static_cast and friends.
457fdbf9 : Demangle decltype(expr)::nested_type (NDT...E).
40b2776e : Optimize GrowIntoSingleGroupShuffleControlBytes.
cf071bb3 : Demangle C++17 fold-expressions.
44e077d6 : Demangle thread_local helper functions.
3ef92c6f : Demangle lambdas with explicit template arguments (UlTy and similar forms).
6ec17dc6 : Demangle &-qualified function types.
4a861bb1 : Demangle valueless literals LDnE (nullptr) and LA<number>_<type>E ("foo").
90d49cba : Correctly demangle the <unresolved-name> at the end of dt and pt (x.y, x->y).
ca81d343 : Add missing targets to ABSL_INTERNAL_DLL_TARGETS
1c177722 : Build abseil_test_dll with ABSL_BUILD_TESTING
baf07b1f : Demangle C++ requires-expressions without parameters (rq ... E).
49c1f36e : overload: make the constructor constexpr
f858e740 : Update Abseil CI Docker image to use Clang 19, GCC 14, and CMake 3.29.3
48235b89 : Workaround symbol resolution bug in Clang 19
52bc669d : Workaround bogus GCC14 -Wmaybe-uninitialized warning
50d39219 : Silence a bogus GCC14 -Warray-bounds warning
4a7c2ec6 : Forbid absl::Uniform<absl::int128>(gen)
0ef5bc61 : Use IN_LIST to replace list(FIND) + > -1
1343b6d0 : Recognize C++ vendor extended expressions (e.g., u9__is_same...E).
8bb0b503 : `overload_test`: Remove a few unnecessary trailing return types
d60c089e : Demangle the C++ this pointer (fpT).
b3cd0250 : Stop eating an extra E in ParseTemplateArg for some L<type><value>E literals.
a7d70c87 : Add ABSL_INTERNAL_ATTRIBUTE_VIEW and ABSL_INTERNAL_ATTRIBUTE_OWNER attributes to Abseil.
9e095d74 : Demangle C++ direct-list-initialization (T{1, 2, 3}, tl ... E).
cfac0a35 : Demangle the C++ spaceship operator (ss, operator<=>).
a3c25aec : Demangle C++ sZ encodings (sizeof...(pack)).
88c1f181 : Demangle C++ so ... E encodings (typically array-to-pointer decay).
41492937 : Recognize dyn-trait-type in Rust demangling.
1a31b81c : Rework casting in raw_hash_set's IsFull().
ac810bee : Remove test references to absl::SharedBitGen, which was never part of the open source release. This was only used in tests that never ran as part in the open source release.
aaed9b4a : Recognize fn-type and lifetimes in Rust demangling.
e7f1a950 : Support int128/uint128 in validated MockingBitGen
a2625a64 : Recognize inherent-impl and trait-impl in Rust demangling.
7a730c1b : Recognize const and array-type in Rust mangled names.
22108fae : Remove Asylo from absl.
bfbfc3c7 : Recognize generic arguments containing only types in Rust mangled names.
c025a934 : Fix missing #include <random> for std::uniform_int_distribution
6ab5b0aa : Move `prepare_insert` out of the line as type erased `PrepareInsertNonSoo`.
01283057 : Revert: Add -Wdead-code-aggressive to ABSL_LLVM_FLAGS
254b3a53 : Add (unused) validation to absl::MockingBitGen
93ac3a4f : Support `AbslStringify` with `DCHECK_EQ`.
cbfe51b2 : PR #1672: Optimize StrJoin with tuple without user defined formatter
6683a617 : Give ReturnAddresses and N<uppercase> namespaces separate stacks for clarity.
eba8db7b : Demangle Rust backrefs.
de8ae871 : Use Nt for struct and trait names in Rust demangler test inputs.
519ef3b3 : Allow __cxa_demangle on MIPS
73841853 : Add a `string_view` overload to `absl::StrJoin`
692d9e56 : Demangle Rust's Y<type><path> production for passably simple <type>s.
99bb2f6f : `convert_test`: Delete obsolete condition around ASSERT_EQ in TestWithMultipleFormatsHelper
1f6c241c : `any_invocable`: Clean up #includes
7b87d959 : Resynchronize absl/functional/CMakeLists.txt with BUILD.bazel
e444af7c : `any_invocable`: Add public documentation for undefined behavior when invoking an empty AnyInvocable
289d8626 : `any_invocable`: Delete obsolete reference to proposed standard type
77224c28 : PR #1662: Replace shift with addition in crc multiply
e0df4a72 : Doc fix.
b8c843eb : `convert_test`: Extract loop over tested floats from helper function
a28ee5b5 : Recognize some simple Rust mangled names in Demangle.
c1e1b47d : Use __builtin_ctzg and __builtin_clzg in the implementations of CountTrailingZeroesNonzero16 and CountLeadingZeroes16 when they are available.
7e149e40 : Remove the forked absl::Status matchers implementation in statusor_test
d9f501ef : Add comment hack to fix copybara reversibility
cba31a95 : Add GoogleTest matchers for absl::Status
d94c7aef : [random] LogUniform: Document as a discrete distribution
0941ce7f : Enable Cord tests with Crc.
f638e342 : Fix order of qualifiers in `absl::AnyInvocable` documentation.
d3b1c7bb : Guard against null pointer dereference in DumpNode.
8a3ae1b6 : Apply ABSL_MUST_USE_RESULT to try lock functions.
08b21bd0 : Add public aliases for default hash/eq types in hash-based containers
b0160ba4 : Import of CCTZ from GitHub.
b65852fa : Remove the hand-rolled CordLeaker and replace with absl::NoDestructor to test the after-exit behavior
c3b4f295 : `convert_test`: Delete obsolete `skip_verify` parameter in test helper
e022c806 : overload: allow using the underlying type with CTAD directly.
564372fc : PR #1653: Remove unnecessary casts when calling CRC32_u64
0908376f : PR #1652: Avoid C++23 deprecation warnings from float_denorm_style
b85096e6 : Minor cleanup for `absl::Cord`
7efc308b : PR #1651: Implement ABSL_INTERNAL_DISABLE_DEPRECATED_DECLARATION_WARNING for MSVC compiler
192e959b : Add `operator<=>` support to `absl::int128` and `absl::uint128`
9a61b00d : [absl] Re-use the existing `std::type_identity` backfill instead of redefining it again
6645f314 : Add `absl::AppendCordToString`
4eb6f626 : `str_format/convert_test`: Delete workaround for [glibc bug](https://sourceware.org/bugzilla/show_bug.cgi?id=22142)
fa57bfc5 : `absl/log/internal`: Document conditional ABSL_ATTRIBUTE_UNUSED, add C++17 TODO
e304ff50 : `log/internal/check_op`: Add ABSL_ATTRIBUTE_UNUSED to CHECK macros when STRIP_LOG is enabled
85419307 : log_benchmark: Add VLOG_IS_ON benchmark
8f9e5f02 : Restore string_view detection check
6c398278 : Remove an unnecessary ABSL_ATTRIBUTE_UNUSED from a logging macro
b8f2b2c6 : In example code, add missing template parameter.
61e47a45 : Optimize crc32 V128_From2x64 on Arm
1ec4a27e : Annotate that Mutex should warn when unused.
b59913e4 : Add ABSL_ATTRIBUTE_LIFETIME_BOUND to Cord::Flatten/TryFlat
377de9d7 : Deprecate `absl::exchange`, `absl::forward` and `absl::move`, which were only useful before C++14.
d5e42609 : Temporarily revert dangling std::string_view detection until dependent is fixed
643c7bab : Use _decimal_ literals for the CivilDay example.
fbd5fa17 : Fix bug in BM_EraseIf.
5953a488 : Add internal traits to absl::string_view for lifetimebound detection
d1dd9cd6 : Add internal traits to absl::StatusOr for lifetimebound detection
c02bb5f6 : Add internal traits to absl::Span for lifetimebound detection
f5227676 : Add missing dependency for log test build target
8a31d4a8 : Add internal traits for lifetimebound detection
770d0783 : Use local decoding buffer in HexStringToBytes
00f8c399 : Only check if the frame pointer is inside a signal stack with known bounds
160d3906 : Roll forward: enable small object optimization in swisstable.
c0f104bf : Optimize LowLevelHash by breaking dependency between final loads and previous len/ptr updates.
c5d722bc : Fix the wrong link.
41136ed1 : Optimize InsertMiss for tables without kDeleted slots.
52715dbd : Use GrowthInfo without applying any optimizations based on it.
ff0a0f2d : Disable small object optimization while debugging some failing tests.
18018aa4 : Adjust conditonal compilation in non_temporal_memcpy.h
a1ced69b : Reformat log/internal/BUILD
e8b549b7 : Remove deprecated errno constants from the absl::Status mapping
b70ad841 : Introduce GrowthInfo with tests, but without usage.
1ccc2eb3 : Enable small object optimization in swisstable.
6f0c5004 : Refactor the GCC unintialized memory warning suppression in raw_hash_set.h.
68ce303d : Respect `NDEBUG_SANITIZER`
e7858c73 : Revert integer-to-string conversion optimizations pending more thorough analysis
86f30194 : Fix a bug in `Cord::{Append,Prepend}(CordBuffer)`: call `MaybeRemoveEmptyCrcNode()`. Otherwise appending a `CordBuffer` an empty Cord with a CRC node crashes (`RemoveCrcNode()` which increases the refcount of a nullptr child).
ad5499a2 : Add `BM_EraseIf` benchmark.
48abb9fe : Record sizeof(key_type), sizeof(value_type) in hashtable profiles.
06e11906 : Fix ClangTidy warnings in btree.h.
c2cf2d38 : LSC: Move expensive variables on their last use to avoid copies.
7335a36d : PR #1644: unscaledcycleclock: remove RISC-V support
9a9502bf : Reland: Make DLOG(FATAL) not understood as [[noreturn]]
76f8011b : Separate out absl::StatusOr constraints into statusor_internal.h
5036f0b6 : Use Layout::WithStaticSizes in btree.
1cd7128b : `layout`: Delete outdated comments about ElementType alias not being used because of MSVC
e4c00cc6 : Performance improvement for absl::AsciiStrToUpper() and absl::AsciiStrToLower()
85166c91 : `layout_benchmark`: Replace leftover comment with intended call to MyAlign
42133464 : Remove absl::aligned_storage_t
4024afbb : Delete ABSL_ANNOTATE_MEMORY_IS_INITIALIZED under Thread Sanitizer
8fe6b423 : Remove vestigial variables in the DumpNode() helper in absl::Cord
1980d7b9 : Do hashtablez sampling on the first insertion into an empty SOO hashtable.
43c36ffa : Add explicit #include directives for <tuple>, "absl/base/config.h", and "absl/strings/string_view.h".
fa6a3cd6 : Add a note about the cost of `VLOG` in non-debug builds.
a41e0168 : Fix flaky test failures on MSVC.
d53b1e66 : Add template keyword to example comment for Layout::WithStaticSizes.
c1d4e4b9 : PR #1643: add xcprivacy to all subspecs
50a88673 : Record sampling stride in cord profiling to facilitate unsampling.
5e61a28e : Fix a typo in a comment.
4539c540 : [log] Correct SetVLOGLevel to SetVLogLevel in comments
5839a148 : Add a feature to container_internal::Layout that lets you specify some array sizes at compile-time as template parameters. This can make offset and size calculations faster.
56d3f227 : `layout`: Mark parameter of Slices with ABSL_ATTRIBUTE_UNUSED, remove old workaround
153186b6 : `layout`: Use auto return type for functions that explicitly instantiate std::tuple in return statements
16e21953 : Remove redundant semicolons introduced by macros
d0d902e6 : [log] Make :vlog_is_on/:absl_vlog_is_on public in BUILD.bazel
74df6975 : Add additional checks for size_t overflows
2f059101 : Replace //visibility:private with :__pkg__ for certain targets
1c233c55 : PR #1603: Disable -Wnon-virtual-dtor warning for CommandLineFlag implementations
2a7d0da1 : Add several missing includes in crc/internal
c6ed744e : Roll back extern template instatiations in swisstable due to binary size increases in shared libraries.
e4b07ec1 : Add nodiscard to SpinLockHolder.
321addf0 : Test that rehash(0) reduces capacity to minimum.
03856129 : Add extern templates for common swisstable types.
3c1f9be7 : Disable ubsan for benign unaligned access in crc_memcpy
686aae12 : Make swisstable SOO support GDB pretty printing and still compile in OSS.
5e54c9da : Fix OSX support with CocoaPods and Xcode 15
bb83acea : Fix GCC7 C++17 build
28e40003 : Use UnixEpoch and ZeroDuration
6cd8cf09 : Make flaky failures much less likely in BasicMocking.MocksNotTriggeredForIncorrectTypes test.
e20285c6 : Delete a stray comment
b9690836 : Move GCC uninitialized memory warning suppression into MaybeInitializedPtr.
d8027081 : Replace usages of absl::move, absl::forward, and absl::exchange with their std:: equivalents
b97e7f35 : Fix the move to itself
e9682564 : Work around an implicit conversion signedness compiler warning
d03f54ef : Avoid MSan: use-of-uninitialized-value error in find_non_soo.
53e6dae0 : Fix flaky MSVC test failures by using longer slop time.
6f0bb274 : Add ABSL_ATTRIBUTE_UNUSED to variables used in an ABSL_ASSUME.
1449c9a1 : Implement small object optimization in swisstable - disabled for now.
6bf3c73f : Document and test ability to use absl::Overload with generic lambdas.
8dc90ff0 : Extract `InsertPosition` function to be able to reuse it.
59daf188 : Increase GraphCycles::PointerMap size
7bd9ff91 : PR #1632: inlined_vector: Use trivial relocation for `erase`
7a434451 : Create `BM_GroupPortable_Match`.
e7fe9ec9 : [absl] Mark `absl::NoDestructor` methods with `absl::Nonnull` as appropriate
55d28d4b : Automated Code Change
b7372748 : Rework casting in raw_hash_set's `IsFull()`.
953cec75 : Adds ABSL_ATTRIBUTE_LIFETIME_BOUND to absl::BitGenRef
cfde5f74 : Workaround for NVIDIA C++ compiler being unable to parse variadic expansions in range of range-based for loop
653a6710 : Rollback: Make DLOG(FATAL) not understood as [[noreturn]]
c0bec1a7 : Make DLOG(FATAL) not understood as [[noreturn]]
9bbbbd3b : Optimize `absl::Duration` division and modulo: Avoid repeated redundant comparisons in `IDivFastPath`.
bde089f9 : Optimize `absl::Duration` division and modulo: Allow the compiler to inline `time_internal::IDivDuration`, by splitting the slow path to a separate function.
90ebb6fc : Fix typo in example code snippet.
1436831c : Automated Code Change
eef325b1 : Add braces for conditional statements in raw_hash_map functions.
d87dc03c : Optimize `prepare_insert`, when resize happens. It removes single unnecessary probing before resize that is beneficial for small tables the most.
0e72e54f : Add noexcept to move assignment operator and swap function
3afe4fed : Import of CCTZ from GitHub.
f2710ccb : Minor documentation updates.
831e57a4 : Change find_or_prepare_insert to return std::pair<iterator, bool> to match return type of insert.
92c8575d : PR #1618: inlined_vector: Use trivial relocation for `SwapInlinedElements`
b0f85e23 : Improve raw_hash_set tests.
f576ea0e : Performance improvement for absl::AsciiStrToUpper() and absl::AsciiStrToLower()
c28f689c : Use const_cast to avoid duplicating the implementation of raw_hash_set::find(key).
1449add2 : Import of CCTZ from GitHub.
d073d80e : Performance improvement for absl::AsciiStrToUpper() and absl::AsciiStrToLower()
a7012a5b : Annotate that SpinLock should warn when unused.
14b8a4ea : PR #1625: absl::is_trivially_relocatable now respects assignment operators
8a3caf7d : Introduce `Group::MaskNonFull` without usage.
4580d86d : `demangle`: Parse template template and C++20 lambda template param substitutions
d4578efe : PR #1617: fix MSVC 32-bit build with -arch:AVX
797501d1 : Minor documentation fix for `absl::StrSplit()`
4618865c : Prevent overflow in `absl::CEscape()`
c14dfbf9 : `demangle`: Parse optional single template argument for built-in types
0a362eb2 : PR #1412: Filter out `-Xarch_` flags from pkg-config files
4ea6e47c : `demangle`: Add complexity guard to `ParseQRequiresExpr`
34604d5b : Remove deprecated symbol absl::kuint128max
119e0d3f : Add ABSL_ATTRIBUTE_WARN_UNUSED.
4358cb2f : `demangle`: Parse `requires` clauses on template params, before function return type
df2c771e : On Apple, implement absl::is_trivially_relocatable with the fallback.
1ac7f340 : `demangle`: Parse `requires` clauses on functions
760b2153 : Make `begin()` to return `end()` on empty tables.
8eadbbac : `demangle`: Parse C++20-compatible template param declarations, except those with `requires` expressions
36442dd8 : Add the ABSL_DEPRECATE_AND_INLINE() macro
19c20d73 : Span: Fixed comment referencing std::span as_writable_bytes() as as_mutable_bytes().
99f0b6d1 : Switch rank structs to be consistent with written guidance in go/ranked-overloads
0be9f997 : Avoid hash computation and `Group::Match` in small tables copy and use `IterateOverFullSlots` for iterating for all tables.
3e59efa2 : Optimize `absl::Hash` by making `LowLevelHash` faster.
f4c713f5 : Add -Wdead-code-aggressive to ABSL_LLVM_FLAGS
c7ea3209 : Stop using `std::basic_string<uint8_t>` which relies on a non-standard generic `char_traits<>` implementation, recently removed from `libc++`.
643b48a3 : Add absl_container_hash-based HashEq specialization
e22f9c1f : `demangle`: Implement parsing for simplest constrained template arguments
563c86a8 : Roll forward 9d8588bfc4566531c4053b5001e2952308255f44 (which was rolled back in 146169f9ad357635b9cd988f976b38bcf83476e3) with fix.
0e289dc5 : Add a version of absl::HexStringToBytes() that returns a bool to validate that the input was actually valid hexadecimal data.
ddcf8be9 : Enable StringLikeTest in hash_function_defaults_test
c680be45 : Fix a typo.
0dc846d4 : Minor changes to the BUILD file for absl/synchronization
52a711fc : Avoid static initializers in case of ABSL_FLAGS_STRIP_NAMES=1
146169f9 : Rollback 9d8588bfc4566531c4053b5001e2952308255f44 for breaking the build
9d8588bf : No public description
971eada3 : Decrease the precision of absl::Now in x86-64 debug builds
7339447a : Optimize raw_hash_set destructor.
a3ee6ce2 : Add ABSL_ATTRIBUTE_UNINITIALIZED macros for use with clang and GCC's `uninitialized`
513a6f93 : Optimize `Cord::Swap()` for missed compiler optimization in clang.
4c7e7c7d : Type erased hash_slot_fn that depends only on key types (and hash function).
780bfc19 : Replace `testonly = 1` with `testonly = True` in abseil BUILD files.
2812af91 : Avoid extra `& msbs` on every iteration over the mask for GroupPortableImpl.
0aefaf7f : Missing parenthesis.
c44dd5ac : Early return from destroy_slots for trivially destructible types in flat_hash_{*}.
779a3565 : Avoid export of testonly target absl::test_allocator in CMake builds
cbdbec09 : Use absl::NoDestructor for cordz global queue.
04af270f : Add empty WORKSPACE.bzlmod
d5eb5032 : Introduce `RawHashSetLayout` helper class.
9a79278a : Fix a corner case in SpyHashState for exact boundaries.
42624b3d : Add nullability annotations
27f15a05 : Use absl::NoDestructor for global HashtablezSampler.
6dda8e52 : Always check if the new frame pointer is readable.
4676ffa9 : PR #1604: Add privacy manifest
f7d2b13e : Remove code pieces for no longer supported GCC versions.
b21b4898 : Disable ABSL_ATTRIBUTE_TRIVIAL_ABI in open-source builds
2be67701 : Prevent brace initialization of AlphaNum
04d8afe7 : Remove code pieces for no longer supported MSVC versions.
b03cda5e : Added benchmarks for smaller size copy constructors.
49ff696c : Migrate empty CrcCordState to absl::NoDestructor.
fe16a5e7 : Add protected copy ctor+assign to absl::LogSink, and clarify thread-safety requirements to apply to the interface methods.

+- Project: platform/external/angle

c174aa7079 : Trace perf: add a basic fps limiter
76691d2782 : Insert "-U_FORTIFY_SOURCE" in Android.bp cflags
3d32b3c105 : Vulkan: Remove vk::BindingPointer
2641424960 : Remove GetTexLevelParameter* from ANGLE_texture_multisample
94515733f4 : Vulkan: Remove release commands from checkOneCommandBatchLocked
e42047f0bb : Vulkan: Disable DescriptorSet cache for SwiftShader
045f281884 : End2end test for GL_MAX_SHADER_STORAGE_BLOCK_SIZE validation
473798bfa9 : WGSL: @align appropriate struct members in uniforms.
09578c4257 : Vulkan: Cleanup CommandPoolAccess implementation
4b84ee4c92 : Vulkan: Implement GL_EXT_EGL_image_storage_compression
739bcef0a5 : Vulkan: Rework finishOneCommandBatchAndCleanup
746090659c : Vulkan: Fix finishOneCommandBatchAndCleanupImplLocked
100c0b8cef : Preserve mMinSampleShading value when SAMPLE_SHADING enable is toggled
224f836ca6 : Vulkan: Update setupDispatch comment
cc841237de : Accept framebuffer modifications while PLS is active
57ccab32f7 : Skip dota_underlords on Windows NVIDIA
32924e1f31 : Manual roll vulkan-deps from cf7563f67639 to 86f73c27b3fc (1 revision)
96a49b8ac6 : Roll VK-GL-CTS from b27686793f88 to a9f7069b9a5b (18 revisions)
2d335b19ae : Roll Chromium from 16c655f60abe to 920d427801ff (660 revisions)
a504b6a24f : Support GL_OES_required_internalformat
5951cac9b0 : Update xxHash metadata
0bb109aa33 : Fix validation for 2D multisample array textures
808cb91c39 : Log when glGetPerfMonitor* calls fail or are missing data
75a64561ec : restricted_trace_perf: Windows fixes
4262c8e42b : Tests: Add CompressBlob()/DecompressBlob() tests
11495e5588 : Perf tests: restricted_trace_perf.py uses android_helper helpers
104d4e4aa7 : Vulkan: Improve usage of ErasePipelineCacheVkChunks method
2f595f56db : Update third_party metadata
c31a926bb1 : Refine test - TextureFixedRateCompressionTest.Invalidate
fa70c4cbe2 : CL/Vulkan: Implement the buffer rect enqueues
e3427ac7a0 : Add missing README.chromium fields to ANGLE and DEPS
cc5218afd3 : ANGLE will crash when the buffer is NULL in eglCreateImageKHR
da2920578b : Update third party metadata
cd255ae717 : Roll vulkan-deps from ff00298c3058 to cf7563f67639 (10 revisions)
83bf7a687d : Roll Chromium from 0f40455a74d6 to 16c655f60abe (624 revisions)
7adbb3e811 : Vulkan: Remove explicit destroy calls
b7e0a250a9 : Add tests for RGB8 and RGBA8 renderbuffer usage
65d674b0cc : Vulkan: Must run UnlockedTailCall from flush and finish
d81834b622 : Vulkan: Store VkDevice in vk::SharedPtr
7070a9e941 : Remove draw buffer validation clauses from PLS
b47144c638 : Roll vulkan-deps from ef4dc615f82d to ff00298c3058 (3 revisions)
7eeec0f178 : Roll Chromium from 9c3b7c9af896 to 0f40455a74d6 (554 revisions)
f7cac0bb8d : Start Win NVIDIA experiment
f51170b395 : Enable GL_KHR_texture_compression_astc_hdr
d57b1d30f0 : Vulkan: Support GL_OES_required_internalformat
e7eb8e270e : Skip TextureFixedRateCompressionTest.Invalidate on Pixel 6
d096f9222d : Roll vulkan-deps from 0e28d467e76d to ef4dc615f82d (2 revisions)
96952a9b1b : Roll Chromium from b640856c0ed4 to 9c3b7c9af896 (355 revisions)
a2d76f0399 : Add GetTexLevelParameterTest.Levels end2end test
b31f367a9a : Vulkan: Keep old VVL suppression until roll into Chromium
3f312d98b9 : ES31: GetTexLevelParameter{if} validation for TEXTURE_2D_MULTISAMPLE
5d3d299d47 : Expose the required GLES1.0/1.1 extensions in ANGLE
e10220f8d5 : Manual roll vulkan-deps from 3c7156644de7 to 0e28d467e76d (59 revisions)
01dee1cbb0 : Add implementation for GL_EXT_texture_storage_compression
8b7bb82cf8 : Vulkan: Disable dynamic vertex input state on Qualcomm
0f9c146b65 : Roll Chromium from c957bc3555b2 to b640856c0ed4 (1228 revisions)
6c1021ec7c : Vulkan: Switch DescriptorSetLayout to use AtomicSharedPtr
4aaeffd8b9 : Metal: Limit simulator texture size to 8k
2dc072ec71 : Vulkan: Switch PipelineLayout from AtomicBind* to AtomicSharedPtr
87d6199796 : Vulkan: Switch ShaderModule to use SharedPtr
6451a893b1 : Fix minimum caps for texture gather offsets.
6f28cb15b4 : Fix minimum caps for draw buffers and compute invocations.
5288ed3623 : GL: Enable ANGLE_texture_multisample on OpenGL ES
b6b826b32b : Add stubs for GL_EXT_EGL_image_storage_compression
8597a5fdcc : Skip TextureMultisampleTest.MaskedDrawWithSampleID on S22
9c7ad319dd : Revert "Delay EGLQueryContext render buffer change"
2e25ea1e72 : Add a new mutex in CommandQueue to protect Vulkan Command Pool
c796c571e5 : Disable BasicUniformUsageTest.* on Linux Intel WGPU for now
ea6e6b9fbd : Capture/Replay: Fix GetTexLevelParameterivANGLE buffer size
239ef6801a : Metal: Support ANGLE_texture_multisample
c56867ece5 : Refine sRGB support
233d9ee5c3 : Delay EGLQueryContext render buffer change
7a1da65f87 : Fix GetTexLevelParameter validation
777fd9127d : Roll Chromium from e5849daecf2c to c957bc3555b2 (609 revisions)
cecefe5343 : Vulkan: Improve recreateSwapchain() error handling
f458f86590 : Vulkan: Update AcquireNextImageUnlocked() implementation
ea86503c2a : Vulkan: Remove vkAcquireNextImageKHR multi-threading support
b94d785394 : Fix TextureMultisampleTest.CheckSamplePositions end2end test
bfa03a8465 : Metal: Fix vec swizzles to bvecs on AMD
e3199b5f8e : Add bison/flex binaries on Mac.
c02e01842f : Add GL_ARM_rgba8
89f7fd8b55 : CL/Vulkan: Skip crashing tests
18127908ca : Roll Chromium from 1cae31feac1f to e5849daecf2c (818 revisions)
9eab301c58 : Enable GL_KHR_texture_compression_astc_sliced_3d
cb31b8864d : Vulkan: pruneDefaultBufferPools when there is excessive garbage
a5dec6fea2 : Tests: ANGLETestBase subclasses non-zero initialize primitives
52fa677906 : Refactor indexed parameter query validation
6b9b8239f4 : Vulkan: fix un-initialized mCompatiblePresentModes
03380548cc : CL/Vulkan: Fix nullptr case for async build task
9c20227585 : Fix SAMPLE_MASK_VALUE indexed query validation
b856059e21 : Tests: linux-trace logs return code with OOM note
21d747de9d : Vulkan: Use vk::SharedPtr for SharedDescriptorSetCacheKey
0757254b4f : Tests: Add crash callback to test Replay (e.g. linux-trace)
a3452c2482 : Fix SAMPLE_MASK toggle validation
6544daecea : Fix indentation in GetQueryParameterInfo
2f7feccd99 : Tests: Fix EGLSurfaceTest.SwapWithoutAnyDraw test
bc09b1785b : CL: Run a subset of CTS tests on linux
0c7c14eb20 : Capture: add a sanity check to InitializeReplay4 args
8ba71e663e : CL/Vulkan: Update few device caps based on VK caps
b18b0b0234 : Skip couple non-deterministic angle_restricted_trace_gold_tests
8e9dc1a669 : Validate anonymous struct names with namespace
8409b41e3b : Report error on unsized interface block array
b76f0ae053 : Roll Chromium from 29f00e74db5f to 1cae31feac1f (560 revisions)
10c2dc7a1b : CL: Remove logic restricting CL version of passthrough backend
a58b35bcc6 : CL/Vulkan: Implement image creation from buffer object
94eebab508 : Tests: fix ProgramPipelineTest31 uninitialized memory read
70d1ef67fe : Vulkan: Ensure onFramebufferBoundary is called for offscreen
0dfe0a75f3 : CL/VK: Add writeBuffer to staging/transfer routine
8e0178fb66 : Vulkan: Switch SamplerBinding to Use SharedPtr
a41b798ee4 : WebGPU: Ensure mDefaultBindGroup is created
8f07cdde00 : Fix NegativeTextureMultisampleTest
e1fd42db9f : CL/Vulkan: Add parent dependencies and image layout transitions
ec77386581 : Qualifier in const for sampler2D fails validation
99d5d4d5b4 : Tests: fix replay failure handling in linux/win-trace
7672101dab : Roll Chromium from 185ab56e3a12 to 29f00e74db5f (604 revisions)
594a11acf8 : Tests: Add Need For Speed: No Limits trace
770242db74 : Translator: Remove the `gimage1D` base type
1f0ac74a7a : Translator: Remove SubpassInputMS
c4533e0aae : Translator: Remove the `double` base type
3515113e2b : CL/Vulkan: Remove redundant state in CLImageVk
10e073a297 : CL/Vulkan: Capture an event for async build task
ce53aff056 : Vulkan: Add per descriptorSet LRU cache eviction
ecfa487492 : Assert no GL errors at the end of image tests
3b3783bc77 : Reland "Possibly fix FixedQueue.ConcurrentPushPop flakiness"
99aac4b931 : Capture/Replay: Remove implied exts from RequiredExtensions
17904b43c7 : Vulkan: Restrict EGL_ANDROID_front_buffer_auto_refresh support
f59d6617c0 : Roll vulkan-deps from 867065ecbb6a to 3c7156644de7 (4 revisions)
ba74d33b77 : Roll Chromium from 0236845acb4a to 185ab56e3a12 (679 revisions)
c4ec8dbb44 : Vulkan: Expose EGL_ANDROID_front_buffer_auto_refresh
79c4ad91ab : ssci: mark rapidjson as Security Critical: No
74f74b63df : Vulkan: Add ContextVk::onFramebufferBoundary() function
087cc41138 : Vulkan: Add mRenderer to ShareGroupVk class
d2e543c77b : Vulkan: Consider PowerVR hardware coherent for framebuffer fetch
d8e183ab55 : Skip the antutu_refinery perf test on Windows/Intel
8504b3cfaf : Add third_party/rust/ to Chromium rolls
e961f7ab35 : Skip KHR-GLES31.core.texture_stencil8.multisample on S22
d81d29e166 : Revert "Possibly fix FixedQueue.ConcurrentPushPop flakiness"
42f9c200ca : Comments: www.anglebug.com -> anglebug.com
fd1f9e76c4 : Roll vulkan-deps from 59ce475cae66 to 867065ecbb6a (6 revisions)
f0ebb3743d : Roll SwiftShader from 4d3a7b64279f to 4074d9674b3e (1 revision)
8b29fe6533 : Roll Chromium from 45f7f2245fc1 to 0236845acb4a (600 revisions)
987cc0de1d : Vulkan: Add release utility for BufferViewHelper
dc8b2e2ba6 : CL: Pass in memory properties from cl entry point
743dd7bfde : CL: Add some event/memory helper functions
518e162e55 : Fix validation for 3D depth/stencil textures
174694c14a : Manual roll VK-GL-CTS from f674555ab03e to b27686793f88 (29 revisions)
828e2d190a : Tests: Add Block Blast trace
f21cfcd6da : Add a corner case for framebuffer fetch
8a2b60b2fa : Add stubs for GL_EXT_texture_storage_compression
84b175546e : Possibly fix FixedQueue.ConcurrentPushPop flakiness
51ff063ca8 : Skip real_racing3 trace on Linux NVIDIA
843191f572 : tools: add buganizer to DIR_METADATA
04222c1b36 : Roll vulkan-deps from c93d7c04d2f7 to 59ce475cae66 (12 revisions)
0dd9041431 : Roll Chromium from b3ded8346a3f to 45f7f2245fc1 (752 revisions)
15492c9bc4 : Vulkan: A few workarounds for older ARM drivers
33dc1606ee : Skip end2end failures on S22
f3404c4d24 : Tests: improve Windows support in android_helper
9dd69aae81 : Vulkan: Disable dynamic rendering on PowerVR
e3b3dd6845 : Vulkan: Optimize color clears vs read-only depth/stencil switch
adb80cbba7 : Vulkan: Optimize read-only depth/stencil switch after clear
2babbbd2bf : Add egl config support GL_RGB10_A2 for Linux Headless
29855942a1 : Vulkan: Add stubs for expose VK_EXT_image_compression_control
ba81145fb6 : Vulkan: Emulate coherent framebuffer fetch everywhere
33c86fdedc : Tests: Add SimCity BuildIt trace
5754fb8066 : Roll vulkan-deps from 1096ec1aabbb to c93d7c04d2f7 (7 revisions)
b3af056377 : Roll SwiftShader from d5c428477411 to 4d3a7b64279f (1 revision)
0dc407dc7a : CL/Vulkan: Address slicePitch and rowPitch issues in images
b9932271da : CL/Vulkan: Add handling for NormalizedSamplerMaskPushConstant
aaf0753464 : CL/Vulkan: Fix clEnqueueUnmapMemObject for host ptr image cases
08ce567228 : CL/Vulkan: Add pitches handling for image APIs
c5242b6f59 : CL/Vulkan: Add extent, offset, and layers handling for arrays
12b1484086 : Roll Chromium from 159c891eeeab to b3ded8346a3f (616 revisions)
7ba69645aa : Fix unnamed outs w/ initializeUninitializedLocals
52831fc0a4 : CL/Vulkan: Add handling for image push constants
03c75d3532 : Remove SeparateStructFromFunctionDeclarations
e557b60e8a : CL/Vulkan: Add destruction of staging buffer for CLImageVk
485f6c343d : CL/Vulkan: Implement clEnqueueFillImage
bf07bcb7fa : Vulkan: Remove dead code since mSwapchainImages always valid
df61fc7f8f : Avoid reordering structs when separating from func
54e8e665f1 : Trace tests: generate and use trace list according to gn args
8134b1ccf6 : CL: Rename ocl_cts_binary to ocl_cts_source_set
a402f9cbca : Metal: Do not use number digit separator in .mm
05bd847cd4 : Roll vulkan-deps from 4cd63162a07c to 1096ec1aabbb (5 revisions)
06f476ea35 : Roll Chromium from 8b70336fb4c3 to 159c891eeeab (662 revisions)
924ee1ba78 : CL: Enable CTS over GTest interface
817b7d204b : Trace Tests: Skip solar_smash on Intel Windows
b09008fd3e : FrameCapture: Warn when shaders are not attached yet
14e4435b27 : FrameCapture: Start active queries last in MEC
eb614d7ecb : Metal: Avoid leaking library and binary sources
d9af7ac733 : Roll vulkan-deps from d1b8acb8cd01 to 4cd63162a07c (10 revisions)
1ae78f5593 : Roll Chromium from 75279b66ac13 to 8b70336fb4c3 (552 revisions)
b2d84a6649 : Revert "Metal: Avoid leaking library and binary sources"
c7a43ec86e : Capture/Replay: Track framebuffers by context
4a835cf266 : WebGPU: send uniforms to GPU for use in shader
4707e5bb30 : Unskip mini_world for other QCOM devices
267a3daf43 : CL/Vulkan: Use enums for cmd exec status instead of bools
05b5873797 : CL: Add CL command execution status to the enum list
9cd3dc645e : Reland "Vulkan: Enable build on Chrome/Android"
f5b9e0edd8 : Metal: Query MSL print env var with bool getter
c46e6bac43 : Make intermediate output symbol names consistent
b8b962d2fa : Parse function parameters into TPublicType
4f4062aea4 : Add support for running the parser generation on Mac
ceb8f1ac16 : Roll SwiftShader from 76855a9baecc to d5c428477411 (1 revision)
13f74e2549 : Roll vulkan-deps from a2dfb2276ea5 to d1b8acb8cd01 (9 revisions)
0d02f857f1 : Test SeparateDeclarations
c9407fec79 : Roll Chromium from dcfc04e3c072 to 75279b66ac13 (575 revisions)
e5619a5c48 : Use EGL sync global lock for all EGL*sync entrypoint calls
1c70fb79b9 : Fix dependency metadata invalid date warnings
f0d8a82065 : Fix dependency metadata invalid date warnings
1f82ca25e9 : Enable KHR-NoContext* and KHR-Single* tests on Linux Nvidia
f198e807ec : CL/VK: Serialize cmds when queue profiling enabled
ac41a84aa0 : CL/VK: Fix reflection parsing out-of-order cases
a417229046 : CL/VK: Initial impl for migrateMemObjects
1071079a23 : CL/VK: Fix frontend cl_mem_flags default access
152f8035a2 : CL/VK: Fix missing default device on contextFromType
de28790921 : CL/VK: Generalize host transfer sync utility
efcb94abf0 : Output the type of constant union
e3011d961c : Translator: Optimize size calculation for variable arrays
6359ec1115 : Metal: Avoid leaking library and binary sources
ce13a00a2b : Roll vulkan-deps from 12e843b4aad1 to a2dfb2276ea5 (5 revisions)
6528879665 : Roll Chromium from 4201d3dd229a to dcfc04e3c072 (845 revisions)
7fea539cc9 : Vulkan: Remove extra non-conformant flag checks
d30d81d0ba : [code health] Remove underscores from test names in ANGLE (1/N)
11d73f1dcf : Revert "spirv::Print without ANGLE_ENABLE_ASSERTS -> compile error"
094a0b116e : Skip KHR-GLES31.core.tessellation_shader.single.primitive* on win
a08663463c : Roll vulkan-deps from 12b75c58255f to 12e843b4aad1 (4 revisions)
fec8786e94 : Roll Chromium from 369ac471ea04 to 4201d3dd229a (705 revisions)
2e2294003f : Fix ANGLE Preferences Menu Crash
2a61126bb3 : Remove GLES 3.2 tests from swiftshader backend
2c7e983068 : restricted_trace_perf: Setup and wrapper support
bdebee8c1e : Tests: add repro for running out of outside RP serials
750d9a24a0 : Skip SourceAHBTarget2DGenerateMipmap* tests on S22
d01f510100 : Tests: add wrappers for restricted_trace scripts
6cda99d868 : Tests: fix run_angle_android_test.py and trace bundle
7ac10ebe8b : Roll vulkan-deps from 155bbe2e1429 to 12b75c58255f (31 revisions)
a04cedac0e : Roll Chromium from aba74d57387d to 369ac471ea04 (477 revisions)
026ba84810 : Manual roll Chromium from 64d6e30da907 to aba74d57387d (848 revisions)
f44427b5da : Vulkan: Fix MSAA glReadPixels into PBOs
d399841a92 : Roll SwiftShader from 1495532f997f to 76855a9baecc (1 revision)
cc2edfd112 : Fix getPerfMonitorCounterData maxResults, skip AsyncCommandQueue test
644b91f7e0 : CL/Vulkan: Implement buffer map/unmap
d44204893a : Add check for iOS simulator when initializing caps in metal
6df20e5f9b : CL/VK: Add missing HostPtr-BufferVk sync on unmap
2a569b2b71 : Vulkan: Document that hex can be viewed with spirv-dis
17a01469ec : Vulkan: Bugfix TextureVk::generateMipmap
e2cd908247 : Vulkan: Bugfix in setCurrentImageLayout
5a9f361c53 : Roll Chromium from 69b5e685119d to 64d6e30da907 (601 revisions)
84a24a1ea6 : CL: Implement clone for kernel object
8dae26c65f : CL: Add missing validation checks
1daf17b5ad : Tests: restore angle_end2end_tests --help on Android
924f207938 : Manual roll Chromium from 601f829c4935 to 69b5e685119d (60 revisions)
47fafdb914 : Disable tracegz (trace interpreter) by default, remove from CI
e43d359150 : Tests: allow choosing Chromium/our test runner + screen checks
7ce8b268f5 : Roll vulkan-deps from a52547961655 to 155bbe2e1429 (7 revisions)
d06410dafa : Roll Chromium from d8c3950b24a5 to 601f829c4935 (447 revisions)
eccfec936e : Metal: gl_ClipDistance fails validation
7483897cc3 : CL/Vulkan: Add numeric versioning for extensions
9ce9e678b7 : CL/Vulkan: Set storage buffer usage for cl buffers
2f8ad9c104 : CL/Vulkan: Add support for sub-buffer creation
3c1e98a312 : CL/Vulkan: Fix clEnqueueMapImage/clEnqueueUnmapMemObject
bd9d028522 : Remove feature description / condition enums
5242386b0c : Manual roll Chromium from 61c4298a5d2e to d8c3950b24a5 (286 revisions)
2b8d6bbea5 : Vulkan: Use UpdateFullTexturesDescriptorSet when cache missed
fbe34df703 : Vulkan: More texture descriptorSet code cleanup
79b6c7ab30 : CL/Vulkan: Add fillWithPattern interface
1a3fadbf91 : Vulkan: Enable imagelessFB for recent QualComm drivers
c0a284034e : CL/Vulkan: Enable clEnqueueNDRangeKernel for Images and Samplers
0baeb12e81 : CL/Vulkan: Fix ImageDescriptor constructor
c3ff2bbe44 : CL/Vulkan: Enable clGetSupportedImageFormats
9db2e88bd8 : CL/Vulkan: Add support for required image formats
a05a0e154e : Validate PLS shaders against context state
a21b7ad0f2 : CL/Vulkan: Add skeleton for CLSamplerVk
b03f014823 : Metal: interpolateAtOffset fails validation
c0c541da40 : Remove the gl+d3d-only build of the translator
57ce489ff0 : CL: Check that arguments are set at enqueue call
5c26ffea1d : Vulkan: Optimize descriptorSet cache disable code path
f4a239bce5 : Roll vulkan-deps from 977783acab11 to a52547961655 (6 revisions)
2b6e5b2e40 : Roll Chromium from c0123634901e to 61c4298a5d2e (219 revisions)
bf29a047db : Metal: Remove uniform struct decl separation code
2156cd6e92 : Metal: Fix rewritten array variables clashes
7c99c22540 : CL/Vulkan: Implement clEnqueue APIs involving images
0624b4fba3 : Metal: Make ToposortStructs compile on c++17
20de3a8a10 : Manual roll Chromium from ada7221c4738 to c0123634901e (3970 revisions)
6b9d37627d : Vulkan: Optimize full texture clears
91ea8aef46 : scripts: Add restricted_trace_perf as data dep
66701c9854 : Vulkan: Remove extra non-conformant flag check
02f88b31f1 : Improve CanSupportAEP Error Reporting
8d12b278c2 : Make separated anonymous in/out structs work better
eaa3d4a9d2 : Add llvm-libc BUILD.gn
b8d546b2ed : Fix immutable string concats with ints
d4a9fa51cc : Vulkan: Re-enable dynamic rendering on newer ARM drivers
7bb1e0f6ce : restricted_trace_perf: Allow use of system ANGLE libs
236e0f486a : Add fix for create multi-window surfaces cause crash
42bfb55450 : Add llvm-libc dependency, now required for libc++
0d914d46ca : Roll vulkan-deps from a5edfbb83552 to 977783acab11 (10 revisions)
2df8d32b9c : Vulkan: Tag DescriptorSets properly with every command buffer
913251aa4e : Add clear tests related to layered image
63f5a32843 : Revert "Improve CanSupportAEP Error Reporting"
1652f8edbf : Vulkan: end2end tests when descriptorSetCache is disabled
0a372f29da : Vulkan: Remove docs about OpenGL line rasterization emulation
fb655e43f4 : Improve CanSupportAEP Error Reporting
d0a0fd1ac7 : Vulkan: Skip pool eviction when cache is disabled
9c0fc663be : Set use_custom_libcxx=false in MSVC builds
af73a7eb4a : Vulkan: Add WeakPtr to mirror std::weak_ptr
629dc14f90 : Manual roll VK-GL-CTS from cfd0b16e7b5e to f674555ab03e (12 revisions)
079a2bad66 : Roll SwiftShader from 3aaa6784ca31 to 1495532f997f (2 revisions)
a7ef644461 : Tests: skip D3D11 dEQP-GLES2.functional.polygon_offset
2ad2f0e68a : Roll vulkan-deps from 6bf0a68d2621 to a5edfbb83552 (11 revisions)
4397ff2f02 : Metal: SeparateCompoundStructDeclarations fails validation
4a5b0284f3 : Disallow discarded uniform block references
898a1c1241 : Metal: Fix ToposortStructs validation == failures
2a62e525d0 : CL: Fixup copying empty string
0e0e5eae7d : Minor clean up for mSamplerBindings usage
08c1724fec : Vulkan: Support GL_ARM_shader_framebuffer_fetch_depth_stencil
65fcf9c465 : Vulkan: Remove redundant dependent feature checks
c13e9963b2 : Tests: skip mac intel dEQP-GLES2.functional.polygon_offset
ba65fc48e0 : ANGLE unit test to check const expression in a shader with uniform
5987c2bc7a : Disable treat_warnings_as_errors on MSVC builds
0d5c0bd1b5 : ANGLE end2end test to check const expressions are handled correctly
a769fad4a0 : Vulkan: Optimize and fix glFinish for single buffered surfaces
d9f8fba818 : CL/Vulkan: Fix event queue serials
a0586d6e54 : Remove feature description / condition strings
5b4609de68 : CL/Vulkan: Adjust the pushConstant size/offset to multiple of 4
f2a409f82b : Roll vulkan-deps from b0229dbd25db to 6bf0a68d2621 (25 revisions)
127f1e15d7 : Roll SwiftShader from 145112eea713 to 3aaa6784ca31 (1 revision)
fe99836c8b : Vulkan: Use ANGLE_PERF_WARNING when no serial for reserved serial
715fe91d96 : Docs: add a Cuttlefish setup section for apk side-loading
4353d25cbc : Fix ASAN bug in GLSL test
2d6029d2b2 : DEPS: Add .git to libdrm URL
3a265f143b : Android tests: raise if --render-test-output-dir doesn't exist
ec262a3243 : Trace tests: offscreen gles1 fix framebuffer binding handling
1258404954 : Make SimplifyLoopConditions testable
f2315dbe32 : Reland: Vulkan: Update checks for promoted extensions
2dcc80ddf3 : Vulkan: allocateDescriptorSet to avoid repeated try on same pool
a070b65aba : Roll Chromium from e7eae5389783 to ada7221c4738 (2147 revisions)
a05fc2bcf8 : Revert "Vulkan: Update checks for promoted extensions"
70b711a890 : Roll vulkan-deps from d8276cfd24b7 to b0229dbd25db (2 revisions)
9a4c7495f3 : Vulkan: Add feature flag to enable descriptorSet cache
31c80bbf6d : Vulkan: Avoid redundant work in updateFullActiveTextures
922147f905 : Trace tests: offscreen sRGB traces use sRGB format
dd54eeeca4 : Reland "Vulkan: Track GPU progress for individual DescriptorSet"
60da450efa : CL: Implicit cmd queue submit on release
45cc47afde : Revert "Vulkan: Track GPU progress for individual DescriptorSet"
9c1f96b860 : Vulkan: Update checks for promoted extensions
99ba07d3af : CL/Vulkan: Implement createImage
d774f75c17 : Fix Python warning in overlay fonts generator
292102944a : Vulkan: Track GPU progress for individual DescriptorSet
47c66901fc : Vulkan: Set gl_Layer to 0 if the framebuffer is not layered
e869b4262c : Revert "Removed checks for promoted extensions"
f4e776a8c1 : GLES1: Fix eye distance for fog
04c0ef4689 : Roll SwiftShader from 0afe6a306dd2 to 145112eea713 (1 revision)
dad24ded41 : Roll vulkan-deps from 1ea770ceed23 to d8276cfd24b7 (7 revisions)
4bdcdf0dee : Vulkan: Switch RefCountedDescriptorPoolBinding to use SharedPtr
c52e849360 : Vulkan: Switch DescriptorPoolPointer to use SharedPtr
bbe6896334 : Vulkan: Fix `precise` vs `mat4(...)[index]`
5c2a2fd576 : Vulkan: Fix `vec4(...).zxwy[index]`
ef55ca0a90 : Update copy validation regarding ext textures
a19f09478b : Vulkan: Cache depth- and stencil-only views
c2219ef9ec : Removed checks for promoted extensions
c42ecd73af : Roll vulkan-deps from b48b5be748a7 to 1ea770ceed23 (16 revisions)
d7c5710c01 : Roll VK-GL-CTS from 5e9887eb393c to cfd0b16e7b5e (3 revisions)
4aa12e9e17 : Metal: Remove macOS 11.0 availability checks
8ec1d518e5 : Roll vulkan-deps from 844aac33d628 to b48b5be748a7 (3 revisions)
7bca73ac18 : Roll Chromium from 04637c3ecbf3 to e7eae5389783 (616 revisions)
ff455e8c41 : Add tests to check copy image with TEXTURE_EXTERNAL_OES
182aa4071a : Reland "Metal: translate IOSurface pbuffer's GL_RGB to RGBX/BGRX format."
0fded2c431 : Manual roll SwiftShader from 74b783dffb9b to 0afe6a306dd2 (1 revision)
4b1e58d943 : Fix for float constant precision in the GLSL backend.
2b329ee4e1 : Metal: fix memory leaks in Texture::getStencilView
78f146e3ad : Remove EAGL support
5b96316be9 : Revert "Metal: translate IOSurface pbuffer's GL_RGB to RGBX/BGRX format."
831a52f2dc : Hold on to error message in LinkTaskMtl as C++ string.
79ae1e5970 : Roll vulkan-deps from ad31dd1cb898 to 844aac33d628 (12 revisions)
e7aa81e5fc : Roll Chromium from fb88b76548ad to 04637c3ecbf3 (610 revisions)
2f644ed8a8 : Implement NULL translator output
323187d9bc : Vulkan: Fix color attachment limit with framebuffer fetch
3b0d1e4492 : Add NULL shader output type
3fa74223bb : Vulkan: Add check for VK_EXT_external_memory_host extension
2ee914a4f2 : CL: Add cl_image_format map autogeneration
897a565412 : CL: Rename isSet/isNotSet to intersects/excludes
8d86ae9f01 : Skip flaky end2end test
e40d858179 : Vulkan: Fix render pass revival vs framebuffer fetch and DR
7beb008de6 : Vulkan: Disable dynamic rendering on Nvidia
37dd8e9278 : WebGPU: Stream client arrays
68ba532b60 : Add validation for ObjectLabel
9e8b104ed4 : Do not test OpenGL backend on iOS
f0b367f2f0 : Roll vulkan-deps from 4c2208c976c8 to ad31dd1cb898 (1 revision)
597641fd94 : Roll SwiftShader from 7a9a492a38b7 to 74b783dffb9b (1 revision)
eb557ef4d4 : Roll Chromium from b20c9c684689 to fb88b76548ad (590 revisions)
0dbe85f317 : Increase the size of vector WriteImages to max
4a2f9b1ca4 : Manual roll vulkan-deps from b234b73ac73a to 4c2208c976c8 (6 revisions)
059f66beed : Bugfix in UnitTest_DMSAA_dst_read
91391c06b9 : Vulkan: Vertex attribute hole crash workaround
a1584f49e8 : Vulkan: Qualify framebuffer fetch with "Color"
f1296738cc : Skip end2end tests failing on iOS
1608d0be26 : Vulkan: Isolate framebuffer fetch no-RP-break optim from DR
76025caa1a : Roll Chromium from 91bda6332316 to b20c9c684689 (501 revisions)
576b5ef40a : Manual roll vulkan-deps from 73fd75175922 to b234b73ac73a (18 revisions)
e734111334 : Manual roll VK-GL-CTS from 179dd9f858f0 to 5e9887eb393c (20 revisions)
367e9e74a8 : Android: Update targetSDK to 35
4c35748f94 : Skip MultisampleTestES3.CopyTexImage2DFromMsaaDefaultFbo on S22
8f3678543b : Vulkan: Refactor ImageCopy shader
a9a924e1ca : Roll Chromium from f801b43c96ea to 91bda6332316 (730 revisions)
78a694a1b8 : Bugfix for ms_to_ss in dynamic rendering
e7f0d107f2 : Translator: Fix EXT_texture_query_lod shader types
a8d9d81383 : Trace perf: save individual screenshots in offscreen
0f2ce2cd6d : Tests: Add Supreme Duelist trace
ab1cdd22aa : Translator: Support EXT_texture_query_lod
9add9893bf : Stop Linux/Intel experiment
a813854729 : Translator: Support GL_ARM_shader_framebuffer_fetch_depth_stencil
2dd1698685 : Manual roll vulkan-deps from b8d6ceadf45d to 73fd75175922 (2 revisions)
68de004277 : Vulkan: Support glCopyTexImage2D from MSAA default framebuffer
028bb1cba7 : Add EXT_texture_query_lod stubs
14b495336c : Revert "Roll VK-GL-CTS from 179dd9f858f0 to 5dd667ee8fa8 (1 revision)"
9e9cbd9794 : Tests: Add Antistress Relaxation Toys trace
23b99e2f8c : Manual roll Chromium from b34a6d5a6f69 to f801b43c96ea (59 revisions)
ddbfae9658 : Manual roll vulkan-deps from e8e61a227e2c to b8d6ceadf45d (8 revisions)
eb087852c0 : Roll Chromium from f28e081e9fe5 to b34a6d5a6f69 (682 revisions)
3ef8d1714d : ssci: canonicalize / backfill dependencies managed by DEPS
b724eb0e8a : Vulkan: Fix assert with overlay and not dynamic rendering
e6130b90f8 : Tests: Add Car Race 3D trace
62f33a5cd0 : Vulkan: Make retainResource for descriptorSetPool consistent
5701d8c5d2 : Manual roll vulkan-deps from a7919b0e1d20 to e8e61a227e2c (4 revisions)
1043accc09 : Roll Chromium from a2c49c9bb8d4 to f28e081e9fe5 (589 revisions)
ae5c3b969e : Boilerplate for GL_ARM_shader_framebuffer_fetch_depth_stencil
3e8d09a1c2 : Vulkan: Enable FRAGMENT_SHADER_FRAMEBUFFER_FETCH_MRT_ARM
c9606f0095 : Fix extensions moved to core in GLES 3.2
30ae44bfa0 : Tests: Skip going_balls on Windows Intel
98b5cf46b3 : Tests: Add Piano Fire trace
2bb5b4436f : Tests: Add Billiards City trace
ba292370e1 : Vulkan: Disable imageless framebuffers on Qualcomm and PowerVR
f0a66ba26d : remove angle_gl_driver_all_angle when resetting
4984fe1294 : Add a test for framebuffer fetch and multisampling
1c14a0b0b3 : Roll Chromium from 3bc95aca3c88 to a2c49c9bb8d4 (704 revisions)
d0e2141a99 : Tests: GLES1 offscreen replay uses GL_OES_framebuffer_object
878e1c92af : Vulkan: Fix line-loop draw arrays after elements
dd66a284ea : Perf tests: custom throttling excludes VIRTUAL-SKIN*-MODEL-*
0b78963dcf : Perf tests: add thermalservice custom temp throttling
aa61c07652 : Autogen files for GL_ARM_shader_framebuffer_fetch_depth_stencil
af33337ad4 : Metal: Remove unused MSL helpers
cea5f080d0 : Enable retries in angle_deqp_gles2_webgpu_tests on Mac
aec90a8d02 : Fix ignoring blit bits when attachments are missing
0f7371ae34 : Metal: Remove ANGLE_tensor macro
b40f367ed6 : Roll vulkan-deps from dd729cf1f807 to a7919b0e1d20 (11 revisions)
db8df6134b : Roll Chromium from af63b8cf2be2 to 3bc95aca3c88 (508 revisions)
166b72c952 : GL_ANGLE_blob_cache implementation.
770dc68f08 : Tests: Add Thief Puzzle trace
492cf2658a : Stubs for GL_blob_cache_angle
37c69bd615 : Tests: gold tests assert that filter matches a test
07ad37e1fd : Roll Chromium from f110bcc1488e to af63b8cf2be2 (761 revisions)
c99f110623 : Debug: Allow forcing GL_RENDERER and GL_VENDOR
cd10ad463b : Metal: Rework allowSamplerCompareGradient feature
b3d85ccef0 : Vulkan: Consolidate write colorspace override states
95379bb4d5 : Suppress flaky end2end test on Linux NVIDIA Vulkan
6d6a168673 : Roll vulkan-deps from 2be80b8bd62c to dd729cf1f807 (13 revisions)
53b9dcdc11 : Roll VK-GL-CTS from 179dd9f858f0 to 5dd667ee8fa8 (1 revision)
aacbf041f6 : Revert "Vulkan: Enable build on Chrome/Android"
b38cc7faa7 : Vulkan: Consolidate read colorspace override states
605c2f8506 : Vulkan: Bugfix in FramebufferVk::blit(...)
f9709279de : CL/Vulkan: Add support for printf builtin processing
b61f9f9eff : Vulkan: Add operator<< for descriptorSet for debugging
d147a2caa7 : Vulkan: release descriptorSets from TextureVk::refreshImageViews
eb4eaea9b8 : Vulkan: Improve SharedCacheKeyManager::addKey performance
a921694bb6 : Metal: Support EXT_texture_shadow_lod
d550d96fb6 : Metal: Remove allowSamplerCompareLod feature
c94c37c14b : WebGPU: Skip ReadPixels if texture creation failed
f145842983 : Tests: Add Going Balls trace
e2a2385112 : Vulkan: Enable build on Chrome/Android
3f132f0cad : Tests: Add Woodoku trace
c861f0d65a : Skip KHR-GLES3.framebuffer_blit.scissor_blit on S22
e8fdc34146 : Android: Update targetSDK to 34
ad8627a8dc : Roll vulkan-deps from 1d7fd2888081 to 2be80b8bd62c (7 revisions)
9f609a5c6c : Roll SwiftShader from 07d3f212a083 to 7a9a492a38b7 (1 revision)
9257738866 : Roll Chromium from 4bfccb8742c8 to f110bcc1488e (573 revisions)
cd7f294923 : GL: Workaround constructor bug on Nvidia
b16d105fc6 : Remove Desktop GL front-end support
6024e9c055 : Manual roll VK-GL-CTS from 65470ff2e321 to 179dd9f858f0 (27 revisions)
b5d548bb93 : CL/Vulkan: Update map interface for CLMemoryVk
0a45269705 : CL/Vulkan: Enable support for multiple descriptor set handling
371539c3ac : CL/Vulkan: Move descriptor set and pipeline layout cache to context
e97a9ba630 : Roll SwiftShader from 72ca2005cd32 to 07d3f212a083 (1 revision)
627a3c526c : Roll vulkan-deps from 7aaa4e9a5b34 to 1d7fd2888081 (12 revisions)
01eb02205a : Roll Chromium from 7b11e2d1e07c to 4bfccb8742c8 (590 revisions)
9edd74e2ff : Tests: Add Traffic Rider trace
55980dbd39 : common: Improve/fix spir-v utils
be8cc064fc : DisplayWgpu: Remove wgpu::FeatureName::SurfaceCapabilities
f680925b5d : Tests: Add Warhammer 40000 Freeblade trace
435bd0a9ae : ssci: use canonical date format
80aced82ce : Roll vulkan-deps from fb8f0127fca4 to 7aaa4e9a5b34 (9 revisions)
5aa20fd13b : Roll SwiftShader from 8580e3a98e50 to 72ca2005cd32 (1 revision)
bba8c29280 : Roll Chromium from dd17ed0c05d9 to 7b11e2d1e07c (692 revisions)
7ff7775b2b : Metal: Align internal texture wrapper names
371ab6d885 : Metal: Refactor texture wrappers
86872374b0 : Metal: Refactor textureGrad wrappers
39b5f5fc5a : Metal: Refactor textureLod wrappers
80e5e611e4 : Metal: Refactor textureGradOffset wrappers
cc44090d34 : Vulkan: Add an extra descriptor set index
0040cda117 : Vulkan: Invalidate host visible non-coherent buffers on mapping
65ece02900 : Roll vulkan-deps from 223523f05dc0 to fb8f0127fca4 (8 revisions)
81c1196195 : Roll Chromium from a588c34f73df to dd17ed0c05d9 (541 revisions)
572fd30ee2 : Clean up LineLoopIndirectTest
67a5ea45f4 : Vulkan: Fix the error from multiple lineloop draws
e06b07a992 : Vulkan: populate ycbcr conversionDesc for yuv VkFormats
03b5ea3989 : Traces: --offscreen syncs on N-2 frame GPU completion
7c81171550 : Vulkan: fix crash when clearing stencil with ClearBuffer
163e24e4ca : Roll vulkan-deps from 4c709b68a2c6 to 223523f05dc0 (12 revisions)
cedad4757a : Roll Chromium from 37b223759e13 to a588c34f73df (702 revisions)
7b0212b337 : Retrace cod_mobile for minimum requirements
0621c95c61 : Add test for repeated indirect line loop draws
baf0c6b213 : Metal: Refactor texture[Lod]Offset wrappers
bc9c772dfe : Metal: Refactor textureProj[Offset] wrappers
15a64b27a6 : Roll vulkan-deps from 4b313c0d5593 to 4c709b68a2c6 (36 revisions)
474b7c2e37 : Roll Chromium from da7173a2a2dd to 37b223759e13 (591 revisions)
0ec8a7f1b5 : Prevent multiple solutions when retracing with get_min_reqs
05c62ebcad : Fix check for whether stencil write is masked out
b4ec585097 : Metal: Refactor textureProj(Grad|Lod)[Offset] wrappers
fe8c0390bc : WGSL: Run SeparateCompoundStructDeclarations to name structs
7e24988190 : HLSL: Emulate mix functions when the last parameter is a bool.
994bbbfc27 : Vulkan: Don't require renderability in AHBs
2af092362c : Vulkan: Enable monolithic pipelines on Intel Windows
94a8bfa8f3 : Roll Chromium from 0570283bb92a to da7173a2a2dd (623 revisions)
966739ac8b : Drop PLS support for EXT_shader_pixel_local_storage
a6ee4641d0 : WGSL: Output default uniform block and accesses to it
fe6c13d711 : Skip dota_underlords on Linux NVIDIA
036e3ff1f2 : Remove Framebuffer::usingExtendedDrawBuffers
f1843343bb : Skip flaky end2end test on Metal AMD
6cd8a2dbc6 : WebGPU: Use SurfaceTexture instead of SwapChain.
a2891e6cdd : Roll vulkan-deps from ab526a2539cd to 4b313c0d5593 (8 revisions)
c74da9ff8f : Roll SwiftShader from 2afc8c97882a to 8580e3a98e50 (1 revision)
0e3d6d30aa : Roll Chromium from c3264b853e52 to 0570283bb92a (570 revisions)
eaffa034c7 : Revert "Vulkan: Consolidate colorspace override states"
1798e1f044 : GL: Avoid infinite loops clearing GL errors
f5f419ec0b : Vulkan: Add verify-restore in CompressAndStorePipelineCacheVk()
0b610712c8 : WebGPU: Sync index buffers, add indexed draw calls
75297ee99c : Skip dEQP GLES3 crashes on Metal AMD
ff5dfad57b : restricted_trace_perf: Change loop order
9a744edefc : Enable Mac/AMD experiment
05c7d2eb51 : Roll vulkan-deps from 54e834b2bf55 to ab526a2539cd (8 revisions)
19f1dd9c83 : Roll Chromium from e5395ab2f022 to c3264b853e52 (505 revisions)
b563ede4e6 : WebGPU: initDefaultUniformBlocks outside of an ASSERT
a6ec0bb996 : Vulkan: Fix recursion in ensurePipelineCacheInitialized()
fdec693541 : Workaround supportsSurfaceMaintenance1 on Linux with llvmpipe
082380ec76 : Roll vulkan-deps from ccec2dffc262 to 54e834b2bf55 (8 revisions)
9416a22b64 : Roll Chromium from 994b97894d7d to e5395ab2f022 (655 revisions)
1e74ce33a5 : Reland "Vulkan: Prefer monolithic pipelines everywhere"
49ea6f0081 : Cleanup ImageTest skip conditions
bffcd235ba : Vulkan: Consolidate colorspace override states
167b9e8d22 : Vulkan: Fix pipeline cache store vs monolithic pipeline race
1bed7fddcb : CL/VK: Fix missed PushConstantRegionGroupOffset
cce1497c30 : CL/Vulkan: regionOffset as GWO for uniform
741e5355bf : CL/Vulkan: Add missing PushConstantNumWorkgroups
8eecc8e9b8 : CL/Vulkan: Update CL_DEVICE parameters
86a24b84b5 : Add TraceFrameIndex atrace counter
0de7c43f7a : Remove unused functions
4d613ca7ff : Skip angle_deqp_egl_vulkan_tests on linux-exp-nvidia
c2051a2d47 : Metal: Refactor texelFetch[Offset] and textureSize wrappers
2ca686d27c : Suppress flaky test on Linux NVIDIA Vulkan

+- Project: platform/external/apache-commons-compress

14a564dc5 : Remove OWNERS file
f1156e301 : Add apex_available for the library

+- Project: platform/external/armnn

62a0ad428 : Define vintf_fragments as modules

+- Project: platform/external/auto

5fa6e0f1 : Add com.android.tethering to available list for Auto usage
ef136f94 : [Ranging] Add com.android.uwb to available list for Auto usage

+- Project: platform/external/avb

7052520 : rust: allow to override mock result for rollback indexes
aad7b32 : tools/transparency/verify: Refactor verifier to handle new log.
556d5ce : Run libavb_host_unittest[_sha] purely under host-unit-tests
b4e1ba1 : Fix typo in comment about how "androidboot.vbmeta.device_state" is populated.
c32891f : Remove dependencies on the 1-variant fallback
f955791 : Remove dependencies on the 1-variant fallback
9581e36 : ANDROID: libavb_baremetal_*: Use cc_baremetal_defaults
5f36e84 : tools/transparency/verify: Fix broken links.
7c630de : Remove usage of base::SetCurrentDirectory from avb tests

+- Project: platform/external/aws-crt-java

b47514a : Eliminate submodules.
5003189 : Make bpfmt happy.

+- Project: platform/external/bazelbuild-rules_python

387c2f6 : chore: prepare 0.36.0 release (#2245)
b92927d : feat(bzlmod): add python.override APIs (#2222)
93eda70 : fix(platforms): include flag_values in config_settings (#2236)
1d7fd51 : fix(ci): use --enable_workspace for bazel-in-bazel tests (#2237)
ade0b2b : refactor(toolchains): split the implementation of toolchain rules to separate files (#2232)
27276b6 : fix: Prefix bootstrap file to prevent pytest reentrance (#2230)
654b4d5 : chore: remove mandatory default values for python version flag (#2217)
5a856e5 : feat: add //python:none as public target to disable exec_interpreter (#2226)
63114a3 : refactor: move hermetic Python runtime setup into macro (#2221)
b3862ec : docs: give some general guidance on how to define custom toolchains (#2220)
3f20b4b : refactor(internal): add a semver parsing utility function (#2218)
451e62c : refactor(internal): make the usage of MINOR_MAPPING variable explicit in full_version (#2219)
9b16bfb : chore: cleanup unused attributes for the host_toolchain repo rule (#2195)
c9972d3 : test(bzlmod): add python.toolchain unit tests (#2204)
7d42a93 : tests: make precompile tests pass when other toolchains are defined (#2213)
a6cd158 : build(deps): bump mdit-py-plugins from 0.4.1 to 0.4.2 in /docs (#2209)
2a8b2e0 : build(deps): bump certifi from 2024.7.4 to 2024.8.30 in /docs (#2210)
acfc125 : fix(sphinx): Support python 3.9 in Sphinx rules (#2208)
fcd1d5e : feat: default `py_runtime` version info to `--python_version` (#2198)
c804a13 : feat(toolchain): add patch_strip attr for python_repository (#2201)
a18ae49 : fix: Fix incorrectly generated `Required-Dist` when specifying requirements with markers in extra_requires in py_wheel rule (#2200)
148122a : docs: document py_cc_toolchain and py_runtime_pair (#2196)
118573b : feat: allow py_cc_toolchain libs to be optional (#2197)
f4596fb : fix: allow disabling exec_tools toolchain from looking up an interpreter (#2194)
30bd94d : tests: use `{package}` instead of hard-coded path in precompile_tests (#2193)
da0fe57 : build(deps): bump urllib3 from 2.0.7 to 2.2.2 in /tests/multiple_inputs (#2140)
cd4b9c4 : fix(bzlmod): use --lockfile_mode=update and add a separate job for lockfile testing (#2154)
a507673 : feat(rules): add build_data_file field to PyExecutableInfo (#2181)
53f7407 : chore: cleanup exposed python_repository symbols and add docs (#2189)
acc4cef : docs: add module_ctx, repository_ctx and path for xref support (#2188)
c46ee92 : fix: allow detecting if `--precompile_source_retention` was specified on the command line (#2192)
4248467 : cleanup: remove commented out debug statement in precompile tests (#2191)
65d1326 : fix: make bootstrap_impl=script compute correct directory when RUNFILES_MANIFEST_FILE set (#2177)
3fd5b83 : docs: add `testing.*` Bazel objects to Sphinx inventory and xref in docs (#2185)
612baef : tests: move various supporting code under tests/support (#2183)
fe1d9a7 : doc: clarify the precompile attribute affects the local target (#2179)
076fbc7 : build(bazelci): explicitly enable workspace where Bzlmod is disabled (#2184)
7b99948 : build(deps): bump alabaster from 0.7.16 to 1.0.0 in /docs (#2138)
9081cdb : build(deps): bump sphinx from 7.4.7 to 8.0.2 in /docs (#2137)
56abb01 : refactor(precompiler): give optimize/invalidation_mode flags default values (#2180)
79df3c9 : docs: fix some doc warnings and xrefs (#2176)
50f6ce7 : refactor(sphinxdocs): use bazel label format for internal object tracking (#2174)
36bb556 : feat(rules): add PyExecutableInfo (#2166)
b97a5d6 : fix(py_wheel): Avoid reliance on bash in `py_wheel` macro. (#2171)
54c9fab : refactor(flags): return FeatureFlagInfo in --python_version flag (#2167)
4f2dd2f : refactor: allow py_library to accept additional fragments (#2170)
e15fc16 : refactor: move bootstrap tests to their own directory (#2168)
31e4771 : fix: correctly check the arg count in precompiler.py (#2165)
4761c25 : docs: document the exec tools toolchain pieces (#2163)
2a5ba18 : fix(gazelle): Correctly resolve deps that have top-level module overlap with a gazelle_python.yaml dep module (#2160)
6361057 : build(deps): bump docutils from 0.20.1 to 0.21.2 in /docs (#2158)
2f46873 : docs: docgen python apis (#2149)
e331afe : fix: Handle relative paths properly in _absolute_url (#2153)
b99bb61 : fix(whl_library): remove `--no-index` and add `--no-build-isolation` when build sdist (#2126)
b679a79 : docs: turn a couple mentions of flags into cross references (#2146)
5eff339 : fix(bzlmod): keep the lockfile platform independent when resolving python (#2135)
dac8a5f : fix: Exclude external directory when generating python report (#2136)
04e2f32 : feat(gazelle): Update resolve.go to provide more human-friendly error output (#2120)
bdcf53a : fix: formatting directive bugs in test (#2134)
d7d2ce7 : refactor: move docs/sphinx -> docs (#2130)
79ca4a4 : build(deps): bump myst-parser from 3.0.1 to 4.0.0 in /docs/sphinx (#2109)
13ce192 : sphinxdocs: add docs; support sources from other directories (#2128)

+- Project: platform/external/bc

d0410eb5 : Make bpfmt happy.
110ec337 : Upstream doesn't build as C23, and doesn't appear to want to.
c8dabac6 : Add the Tonkünstler-on-the-Bund Fedora package to the README
26ec92dc : Increment the version and update the NEWS
e5d67578 : Don't unconditionally enable warnings after ignoring them
d2990582 : Increment the version and update the NEWS
89d6c345 : Fix Ctrl+D on FreeBSD
c5b7724e : Update the bc manpage with two minor corrections
324c3098 : Increment the version and update the NEWS
eede8e84 : Test with GCC on FreeBSD
11d129b7 : Fix the release script on FreeBSD
9c9a5f26 : Apply a patch from Stefan to fix build on FreeBSD
ed57c2ff : Add a new needed preprocessor define to the Windows build
13d1b4b9 : Remove some useless code
6440fa2b : Fix the leaks from the last three commits for real
7f68353d : Fix the last two commits
df4a7ab4 : Fix more leaks
a95d4d89 : Fix leaks
003022f7 : Make sure an extra math-only function is properly protected
3b63849a : Fix a compile error on FreeBSD
d7da0c26 : Fix the release script for FreeBSD
692218eb : Fix the strgen build
2f8f4b44 : Update the NEWS for the bug fixed in the previous commit
17da73ad : Fix a crash found by AFL++
1798c813 : Remove some compiler warnings in certain predefined builds
cd4fd098 : Provide error output on the test commands in configure
626289c0 : Increment the version and update the NEWS
9521342c : Update the dc manual for the behavior change of not clearing the stack
70141eb1 : Move the os.c compilation test file
56bb1825 : Fix a bug on the macOS build
a5d621a2 : Add a purpose-built file for testing compilation on OSs
9ebe0f05 : Fix an outdated comment
7bc8039d : Fix configure.sh not exiting with an error
b3e3099f : Make sure to clear the stack when dc is *not* interactive
2f2d90c3 : Change it so dc does not clear the stack
dcb0e585 : Fix configure tests for OSS-Fuzz
15fb2df7 : Remove all asserts
5378eaa7 : Add an assert for OSS-Fuzz debugging
81ed3e69 : Remove asserts for debugging OSS-Fuzz
6fe63648 : Fix the dc fuzzer args
f45d6bd7 : Add an assert for OSS-Fuzz testing
0d499cfc : Have the fuzzers ignore empty input more robustly
c46812af : Add an assert for OSS-Fuzz
14738581 : Make expressions not exit in an OSS-Fuzz build
35c54ba0 : Move the assert again
15b6f940 : Move the new assert for testing
3eac7b39 : Add another failing assert for testing
9aac0c71 : Guard the stdin/fuzzer buffer properly
7b2662a6 : Add a failing assert to check something in the OSS-Fuzz build
cf451066 : Remove the hardcoded sanitizer flags for OSS-Fuzz
904d0625 : Make sure to free the fuzzer data
7c2e7f31 : Make the OSS-Fuzz build use the data provided by the fuzzer
0448a657 : Build with memcheck for OSS-Fuzz
95a0f29a : Make sure the vm global is clean
dcef5248 : Make sure to use the CXX env var for OSS-Fuzz linking
11c0adb1 : Add libFuzzer support
b4e0fef3 : Fix style
7b074598 : Fix a typo in the development manual
a2e1976d : Increment the version and update the NEWS
76646861 : Add a memory bug
321d8190 : Add a clarification to a comment
32ab0beb : Refactor more code into its own function
1c3a5c2c : Refactor some code into a function
05a7cd3d : Update fuzzing stuff
4b83bf07 : Try to quiet Python syntax warnings
3b98de13 : Remove a stupid Clang warning
d99a5d20 : Fix style
5e1ab7b4 : Remove an invalid fuzz compiler arg
56e958d5 : Change fuzzing configs for changes to AFL++
59cf3b86 : Make configure.sh posix compliant
3278daef : Make `bc`/`dc` just exit immediately when failing to write to stdout
41b0c2ae : Change references to Mac OSX
220d2951 : Improve the fix in 235482c16, 90ece03c9, and 44669ead3
1b924ce8 : Make sure to not return BC_STATUS_QUIT
636c5429 : Fix the dc extended register tests
44669ead : Improve the last two commits
90ece03c : Improve the fix from last commit
235482c1 : Fix a bug found by a user
133bff36 : Add warnings about the different precedence of `!`
b1ecf8db : Fix wrong items in configure.sh -h
a8284c7e : Add a file to use clangd as an LSP server
811f3103 : Update the copyright year to 2024
7f6a907d : Fix a change with AFL++
43b4cc2d : Make bc and bcl build faster on Windows
b2f8f8d0 : Increment the version and update the NEWS
9c577c77 : Add the naggamura reproducer to the print2.bc script test
e2efb695 : Make the bc print2.bc script test work without extra math
5b044b2d : Make the bc print2.bc script test hit more cases
fcf071a8 : Improve a comment for my own sake
90bb2c09 : Fix the bc print2 script test for 32-bit
e9682c73 : Add a bc print test for printing at the edges of BC_LINE_LENGTH
39bd2c56 : Fixed printing error. Try obase=2; 2^105;
1a381d61 : Add scripts to test random numbers against sqrt()
7919ab40 : Add a couple more sqrt() tests
ce2c5fdd : Add scripts that I used to test for sqrt initial guesses
f6b72014 : Fix a typo
4803245b : Fix some debug code
d662bba9 : Fix style
a7caf291 : Fix a problem in the clang-format file
a00903f2 : Tweak a wrong comment
307d6fdf : Remove a small inefficiency in sqrt()
52c335a6 : Fix a typo in a comment
9f257871 : Add a small sig lock change found by naggamura on GitHub
53832b88 : Add benchmarks to test a pull request
52387634 : Add a way to delete generated benchmark files
d4cc8f85 : Tweak some benchmark things
5ee6a059 : Add a bit more automation to the package.sh script
f1f247c1 : Increment the version and update the NEWS
da5d5faf : Fix some problems in the bc manual
52b89889 : Remove a linker warning on Mac OSX
e2747a16 : Try to make the release script work on Mac OSX
29c07f2e : Add a release command for Mac OSX
7eaa40ab : Remove the "robust" commit
ae654ed9 : Increment the version and update the NEWS
7807eead : Make the Mac OSX build fix more robust
0a6165c5 : Fix library build on Mac OSX
0a53cd64 : Fix a typo
22253a3a : Remove some debugging code, increment the version, and update the NEWS
3aa9d816 : Increment the version and update the NEWS
536ed200 : Add back SA_NODEFER
0792964c : Remove the Clang Dwarf 4 workaround for Valgrind
eef531eb : Increment the version and update the NEWS
4e708521 : Add the new lib2 functions to the manual
bd768daf : Fix the test for i2rand()
38b98e72 : Fix bugs in the new lib2 i2rand() function
2689bdd7 : Add a test for the function in the last commit
d9bf4479 : Add a function to generate a random number between two bounds
55c99c6b : Add min and max functions to the extended math library
ee4c5915 : Fix a typo in the bc man page
677a1693 : Add a link to an algorithm

+- Project: platform/external/bcc

927ce3de : Pin to C17.
875a952c : Remove dependencies on the 1-variant fallback

+- Project: platform/external/blktrace

8900c19 : Make bpfmt happy.
ccc816d : Pin to C17.

+- Project: platform/external/boringssl

f8022b39 : Migrate 9 crates to monorepo.
7d6f4fe5 : Add libopenssl to the configinfrastructure apex
35cfbafe : ANDROID: boringssl_flags_baremetal: Use cc_baremetal_defaults
8331e18d : Suppress cast function type warnings where needed for bio
b7784dc5 : Add tokio-openssl to bssl_sys's visibility
d2227426 : Migrate 8 crates to monorepo
ff0438c8 : Add dirgroup for trusty genrule
3b47fd9a : Define BORINGSSL_FIPS in test_fips.
02b54c2b : Add com.android.wifi to the libcrypto and libssl apex_allowed lists.

+- Project: platform/external/bouncycastle

7906f9c4 : Completely revert R8 on bouncycastle.
bac8ca7d : Add AES/CFB to proguard flags.

+- Project: platform/external/bpftool

26c551d : Bpftool: revert local modification
20ce693 : mirror: Update expected diff with kernel sources
a668a13 : sync: Pull latest bpftool changes from kernel
0bac781 : libbpf, selftests/bpf: Adjust libbpf, bpftool, selftests to match LLVM
b79c0dc : bpf: Sync uapi bpf.h to tools directory
c1adf4a : bpftool: Clean up HOST_CFLAGS, HOST_LDFLAGS for bootstrap bpftool
3dc1ac6 : bpftool: Remove unnecessary source files from bootstrap version
0cb4aaf : bpftool: Enable libbpf logs when loading pid_iter in debug mode
d616707 : bpf: support BPF cookie in raw tracepoint (raw_tp, tp_btf) programs
3214350 : bpftool: Fix missing pids during link show
89d84b3 : bpftool: Cast pointers for shadow types explicitly.
73a2c7e : libbpf: Recognize __arena global variables.
a09e203 : bpftool: Recognize arena map type
c305ebf : bpf: Disasm support for addr_space_cast instruction.
ee84172 : bpf: Introduce bpf_arena.
bd1e316 : bpftool: rename is_internal_mmapable_map into is_mmapable_map
a655360 : sync: Update libbpf submodule
779cba7 : mirror: Add u16 definition to types.h
3e6f814 : sync: Pull latest bpftool changes from kernel
8328f37 : bpf: Introduce may_goto instruction
579d6b0 : bpftool: Add an example for struct_ops map and shadow type.
fc0ad76 : bpftool: Generated shadow variables for struct_ops maps.
99a3e05 : bpf: Replace bpf_lpm_trie_key 0-length array with flexible array
5aee3c0 : bpf: Clarify batch lookup/lookup_and_delete semantics
16e9ae8 : bonding: Add independent control state machine
e93d117 : sync: Update libbpf submodule
22621e3 : sync: Pull latest bpftool changes from kernel
6e0d7d0 : bpftool: Be more portable by using POSIX's basename()
43cbd7a : tools headers UAPI: Sync include/uapi/linux/perf_event.h header with the kernel
1c79dbd : sync: Update libbpf submodule
a04db04 : sync: Pull latest bpftool changes from kernel
1554ad5 : bpf: Add BPF token support to BPF_PROG_LOAD command
f29d89d : bpf: Add BPF token support to BPF_BTF_LOAD command
7892e21 : bpf: Add BPF token support to BPF_MAP_CREATE command
1ca16b5 : bpf: Introduce BPF token object
23baad8 : bpf: pass attached BTF to the bpf_struct_ops subsystem
c2a7c94 : bpf: pass btf object id in bpf_map_info.
4263cc0 : bpftool: Display cookie for kprobe multi link
25df69b : bpftool: Display cookie for perf event link probes
fb85c68 : bpftool: Fix wrong free call in do_show_link
39b14f3 : bpf: Store cookies in kprobe_multi bpf_link_info data
bb4dd70 : bpf: Add cookie to perf_event bpf_link_info records
1e82718 : bpf: Sync uapi bpf.h header for the tooling infra
c4cc180 : bpftool: Silence build warning about calloc()
184ca6d : sync: Update libbpf submodule
b0e69ac : sync: Pull latest bpftool changes from kernel
e725e62 : bpfilter: remove bpfilter
5ba3bba : net/sched: Remove uapi support for CBQ qdisc
98c8c08 : net/sched: Remove uapi support for ATM qdisc
67a1ca1 : net/sched: Remove uapi support for dsmark qdisc
1b6407c : net/sched: Remove uapi support for tcindex classifier
570698f : net/sched: Remove uapi support for rsvp classifier
da013dc : Revert BPF token-related functionality
bc61d0d : bpf: rename MAX_BPF_LINK_TYPE into __MAX_BPF_LINK_TYPE for consistency
ac0294b : bpf: add BPF token support to BPF_PROG_LOAD command
6384c2f : bpf: add BPF token support to BPF_BTF_LOAD command
21eccab : bpf: add BPF token support to BPF_MAP_CREATE command
c03dd58 : bpf: introduce BPF token object
dd0b761 : bpftool: Add support to display uprobe_multi links
e5aae2e : bpf: Add link_info support for uprobe multi link
5b0a3a4 : bpftool: mark orphaned programs during prog show
806bbc5 : sync: Update libbpf submodule
515739f : mirror: Fix CLANG_BPF_CO_RE_PROBE_CMD
687e7f0 : sync: Pull latest bpftool changes from kernel
62edbc3 : bpf: rename BPF_F_TEST_SANITY_STRICT to BPF_F_TEST_REG_INVARIANTS
d4ac6ea : bpf: add register bounds sanity checks and sanitization
0664481 : bpf: Add crosstask check to __bpf_get_stack
9429d83 : sync: Update libbpf submodule
7842164 : sync: Pull latest bpftool changes from kernel
b5aaf97 : bpf: Use named fields for certain bpf uapi structs
fa46ebb : bpftool: Fix prog object type in manpage
d712a3e : bpftool: Extend net dump with netkit progs
0fcd8de : bpftool: Implement link show support for netkit
c59bb37 : tools: Sync if_link uapi header
0946be6 : netkit, bpf: Add bpf programmable net device
b9530de : bpftool: Wrap struct_ops dump in an array
5978b98 : bpftool: Fix printing of pointer value
1485934 : sync: Update libbpf submodule
8485b9f : sync: Pull latest bpftool changes from kernel
087d22a : bpftool: Add support for cgroup unix socket address hooks
e4ddc81 : bpf: Implement cgroup sockaddr hooks for unix sockets
0a9ad5d : bpf: Derive source IP addr via bpf_*_fib_lookup()
f12f538 : bpftool: Align bpf_load_and_run_opts insns and data
e8b7df5 : bpftool: Align output skeleton ELF code
6982b3e : bpf: Add ability to pin bpf timer to calling CPU
f566717 : sync: Update libbpf submodule
b55f531 : mirror: Fix a typo in README.md
7b3bde5 : ci: Adjust branch names from "master" to "main"
4a6a35a : mirror: Add logo to README.md
e854f13 : mirror: Rework README.md file
79cc66c : mirror: Add a logo for bpftool: Hannah the Honeyguide

+- Project: platform/external/brotli

ad171d1 : Make bpfmt happy.
95927a1 : Add apex_available for the library

+- Project: platform/external/chromium-crossbench

6113173f : Initial empty repository

+- Project: platform/external/chromium-webview

9e17ed1 : Snap for 12770256 from 053c38e4f9b85d01bcbd1ab8eea2814e0ce6ee70 to 25Q1-release

+- Project: platform/external/cn-cbor

d4084d1 : Upstream assumes gnu99 (and has been dead for years).
5624032 : Make bpfmt happy.

+- Project: platform/external/conscrypt

c30981855 : Add SPAKE2+ API flag
fc618c041 : Revert^4 "Add support for enabling/disabling TLS v1.0 and 1.1 in Conscrypt."
5ec5e1c74 : Remove 3DES from Conscrypt
58708e275 : Specify aconfig_library in conscrypt.module.platform.api
5ae7b5c2f : Revert^3 "Add support for enabling/disabling TLS v1.0 and 1.1 in Conscrypt."
f7b7007b8 : Revert^2 "Add support for enabling/disabling TLS v1.0 and 1.1 in Conscrypt."
062a2d6bc : Make the Conscrypt CT platform flag visible to CTS tests
3996c2ccb : Revert "Add support for enabling/disabling TLS v1.0 and 1.1 in Conscrypt."
4d08f9ff5 : Add support for enabling/disabling TLS v1.0 and 1.1 in Conscrypt.
5c4d01361 : Limit clang-format to C/C++/Java files
25fbbb36d : Move flags to its own package
2453e79c0 : Add checkServerTrusted with OCSP and TlsData argument
c450bb4a2 : Parses timestamps as long
5bb7c554a : Add some scripts for building and testing uber jars. (#1246)
6e1d23aa5 : Move AddressUtilsTest to JUnit4 (#1249)
d286b36ff : Move OpenSSLKeyTest to JUnit4 (#1248)
f25d73d11 : Move NativeRefTest to JUnit4 (#1247)
5e30a8dff : Lint fixes. (#1245)
d3af5ef3a : Upstream AOSP (#1244)
7e01ec8b9 : Extract test file lists out to filegroups.
bf44974ed : Upstream AOSP (#1243)
8c8a05fd0 : Gradle tidy-up. (#1242)
0c47caf05 : Remove 3DES from Conscrypt
8abf0b721 : Remove CTVerifierTest in CTS
f991aa5f2 : Add /current/ to log list path
2d07616dc : Move ConscryptPrivateTestCases to presubmit
1eeef91a4 : Implement Android CT Policy for OCSP and TLS Data SCTs
77b3175ea : Add metric reporting for log list status
7f1e02b89 : Errata from upstreaming and downstreaming. (#1234)
66c8e9508 : Inner class logic fixes. (#1240)
cff2a6faf : Tidy up Javadoc generation. (#1239)
e57bf15ee : gradle: add max-page-size 16384 (#1213)
4c4119d33 : Use a workflow action to configure MSVC. (#1241)
90f0a8638 : Update CI config. (#1235)
8b63789f4 : Fix NativeCrypto.X509_verify() exceptions.
d8be94d05 : Tidy up filter doclet. (#1237)
1dff14ba0 : Use a thread pool to write to ConscryptStatsLog.
161f2de25 : Move TLSv1.0 and SSLv3 deprecation tests to a separate class in platform/
de68ba35b : Remove CT tests
4c04553ab : Add check for nonexistent isTlsV1Deprecated method within conscrypt.
7e3df46a1 : Add uid logging to tls_handshakes (#1233)

+- Project: platform/external/coreboot

abf6ee7f45 : Android.bp: Enable cbmem target and mark all tools as vendor

+- Project: platform/external/cronet

330637024 : Update Cronet build time
1f2ef2ab4 : Add preload API method to httpEngine
a1efc6825 : Disable proc_macro2 for host builds
c61648125 : Update gn2bp file name in Android.bp
1b9e3df6b : Remove dependencies on the 1-variant fallback
1fda9800d : Switch from cc_object to cc_preprocess_no_configuration
9082969b2 : Fix incorrect behaviour for UrlRequestBuilder wrapper
bdb1034bf : Disable failing licensing test due to b/372449684
526a1956f : Remove duplicated BSL-1.0 from License.
cf72a5533 : Add support for CC modules depending on rust modules
da9518dc7 : Add support for statically generated build scripts output
7a4288f5e : Generate Android.bp after applying rust changes
8c7366a66 : Relocate Android.bp destination for rust crates
344fc1b1a : Add support for rust_library and rust_proc_macro
8c46b35c9 : Follow up on comments in aosp/3281780
bfe5087d6 : Set enable_rust to true and generate desc_*.json files
b3652fc40 : [126.0.6423.0] Import Rust Crates to AOSP
bfd7d3f41 : Add rust crates without proper README.chromium to copybara exclude list
67bf11b12 : Ensure symlinks are relative and not absolute for LICENSE file
cca59c5e5 : Remove `mainline-presubmit` from TEST_MAPPING
9dde6050a : Add support for licensing generator for Rust directories
f71738746 : Delete .github, .gitmodules and BUILD files
37c1a1998 : Add third_party/rust to import list

+- Project: platform/external/crosvm

6471a5c72 : input: Add custom input device
74aea9525 : devices: serial: Support UnixStream for data source and destination
0ef87087b : arch: serial: Use default values in SerialParameters for tests
cf10d0325 : x86_64: move --pcie-ecam into --pci
0f36f6e53 : aarch64, x86_64: add cmdline option to configure PCI mem region
389e62502 : aarch64: add cmdline option to configure PCI CAM region
3cf3d942f : cmdline: fix --gpu max-num-displays help
13b958b96 : disk: Add seekable zstd disk support
8acdcbdc2 : Roll recipe dependencies (trivial).
09cc2f2c8 : Roll recipe dependencies (trivial).
af7d4bf7e : docs: memory_layout: fix aarch64 PCI CAM size
1f15edcd2 : devices: gpu: fix suspend/resume for 2D fallback
2a555e46b : Roll recipe dependencies (trivial).
ee773d6f3 : Roll recipe dependencies (trivial).
445810de5 : Roll recipe dependencies (trivial).
ce38ddaa7 : Roll recipe dependencies (trivial).
801b42452 : UPSTREAM: Don't error out if host doesn't support cpufreq
c71dedcbf : Roll recipe dependencies (trivial).
d27eca8c7 : Don't error out if host doesn't support cpufreq
a45f4fad6 : Roll recipe dependencies (trivial).
e3b10db9b : rutabaga_gfx: build.rs: check fence_passing_option1
16859c756 : aarch64, riscv64: inline get_resource_allocator_config function
5112d843b : aarch64: move constants to top of file
e500f8851 : aarch64: delete AARCH64_AXI_BASE constant
b32355490 : disk: more detailed create_composite_disk unit test
ba3a8e1dc : third_party/minijail: update to detect -Wxor-used-as-pow support
1e02c66d7 : Roll recipe dependencies (trivial).
0a0160f31 : Roll recipe dependencies (trivial).
b49eaa3a5 : Roll recipe dependencies (trivial).
5ca9bdaf6 : Roll recipe dependencies (trivial).
263c8e56c : Roll recipe dependencies (trivial).
4826555e7 : devices/virtio/media: handle interrupt resample events
fdfd498e8 : devices: add virtio-media support and enable example and proxy devices
e2cdbba57 : docs: book: Explicitly mention frontend and backend.
acf8752e8 : Roll recipe dependencies (trivial).
d39e3bbc3 : vm_control: change VmMemoryRegionId to use GuestAddress
e9de2c477 : vm_control: store address in VmMappedMemoryRegion, not gfn
78fa298b1 : vm_control: remove gfn from VmResponse::RegisterMemory
75bb3940f : Roll recipe dependencies (trivial).
b12e8c2ca : x86_64: add initial support for protected VMs
524e5eedb : Roll recipe dependencies (trivial).
f49c2869e : Roll recipe dependencies (trivial).
c20aa82fa : Roll recipe dependencies (trivial).
bb6d51e36 : virtcpufreq_v2: index cpufreq table by vcpu
b3ca8c646 : UPSTREAM: virtcpufreq_v2: index cpufreq table by vcpu
29a2dd897 : hypervisor: kvm: aarch64: assert system registers roundtrip
d804ccfea : UPSTREAM: virtcpufreq_v2: Use a shared perf for vCPUs in the same freq_domain
b534b0517 : Roll recipe dependencies (trivial).
e3becfb6c : virtcpufreq_v2: Use a shared perf for vCPUs in the same freq_domain
6c3dea2b0 : Roll recipe dependencies (trivial).
fdc597792 : Roll recipe dependencies (trivial).
1f65b83e3 : Roll recipe dependencies (trivial).
46b7dbe38 : Roll recipe dependencies (trivial).
1bc4e6296 : Roll recipe dependencies (trivial).
4363a89b7 : Roll recipe dependencies (trivial).
1adfae0cb : Roll recipe dependencies (trivial).
3650f0139 : Roll recipe dependencies (trivial).
8f42cdfe4 : Roll recipe dependencies (trivial).
0f2b758d3 : Roll recipe dependencies (trivial).
147b0fe6c : media: ffmpeg: fix undefined behavior in test_avpacket_drop
ea6b76607 : aarch64 clippy fixes for Rust 1.81
fbb8137c0 : Windows clippy fixes for Rust 1.81
0a4175294 : rutabaga_gfx: gfxstream_unstable --> fence_passing_option1
930bfd423 : rutabaga_gfx: reorganize rutabaga resource saving/loading
13b8982cd : UPSTREAM: virtcpufreq_v2: Place vcpu worker threads directly after vCPU threads
db652dcce : virtcpufreq_v2: Place vcpu worker threads directly after vCPU threads
5ebcb77f4 : Roll recipe dependencies (trivial).
30590f3fc : minigbm: update submodule to latest
e3a35164a : Roll recipe dependencies (trivial).
fadb586d0 : Roll recipe dependencies (trivial).
649c48e00 : Roll recipe dependencies (trivial).
7cbb56928 : Roll recipe dependencies (trivial).
d933faa19 : rutabaga_gfx: kumquat: refactor initialization
bede2f810 : rutabaga_gfx: introduce WaitTimeout abstraction
9f8b4eb83 : rutabaga_gfx: cross-domain: fix excessive visibility
7a826cea5 : Roll recipe dependencies (trivial).
64ed51b21 : Roll recipe dependencies (trivial).
c519c8a4a : tools: check if .git/modules exists in install-deps
a555cdda9 : crosvm_control: add crosvm_client_resume_vm_full api
ef672e603 : virtio: update virtio-media ID to 48
0fabdcacd : devices: virtio-video: remove unused PictureReady visible_rect
ab423fbec : Roll recipe dependencies (trivial).
0ff7cdcfa : Roll recipe dependencies (trivial).
fb02c4f8c : Roll recipe dependencies (trivial).
77a9eac63 : Roll recipe dependencies (trivial).
d5f088a8d : Roll recipe dependencies (trivial).
cd1a0ea1a : Roll recipe dependencies (trivial).
7655ffd1e : Roll recipe dependencies (trivial).
8ac5bf464 : rutabaga_gfx: make kumquat and rutabaga_gfx cross-platform
1ac142925 : Add khei@ to OWNERS
6be50c8f8 : Roll recipe dependencies (trivial).
09f2841b0 : Roll recipe dependencies (trivial).
cd61e392e : ANDROID: Rename libbit_field to prevent conflict
263667f16 : ANDROID: Remove .patch duplicating copy_out
cd0454fbe : Roll recipe dependencies (trivial).
cb02c471c : UPSTREAM: Require --virt-cpufreq-upstream for configs with custom frequencies
e9ecde874 : UPSTREAM: virtcpufreq_v2: Add a way to throttle vCPUs
413d899fc : Roll recipe dependencies (trivial).
3f1a383a4 : Exclude a few dead_code instances
17b696fd0 : fuzz: expect cfg(fuzzing)
27ed7f170 : plugin: annotate kvm::Cap transmute for clippy
1457967c0 : net_util: clean up transmutes in create_sockaddr()
705a9a646 : Roll recipe dependencies (trivial).
53af02e7f : Require --virt-cpufreq-upstream for configs with custom frequencies
6eb76a1a3 : virtcpufreq_v2: Add a way to throttle vCPUs
43cd01c17 : rutabaga_gfx: kumquat: reset cursor position before restore
d32482563 : Roll recipe dependencies (trivial).
e6dd56333 : Roll recipe dependencies (trivial).
40dcc1885 : devices: virtio-console: deprecate legacy-virtio-console option
4ef01e7a1 : UPSTREAM: virtcpufreq_v2: Support scaling for custom sized vCPUs
024383f0a : UPSTREAM: Reorder cgroup and affinity vCPU configuration
f1c401983 : UPSTREAM: Add freq_domain and CGroup V2 support
e17dfb03f : x86_64: bzImage: load extra data appended to kernel
f2056c668 : hypervisor: aarch64: rename VcpuFeature::SVE -> Sve
97dbce522 : hypervisor: aarch64: add SVE support
49a840466 : Roll recipe dependencies (trivial).
2d6b8064a : minijail: update to ec73d1d3ae015c1cecaea203e71991ed1bc29f41
13029b6fa : Roll recipe dependencies (trivial).
7e76cd914 : Roll recipe dependencies (trivial).
41f648e36 : virtcpufreq_v2: Support scaling for custom sized vCPUs
ef5d1fe9c : Reorder cgroup and affinity vCPU configuration
9148e0136 : rutabaga_gfx: use serde for snapshots
fc8a9dd07 : cros_async: remove debug prints.
d49f618c9 : Roll recipe dependencies (trivial).
e7cd61f41 : Roll recipe dependencies (trivial).
2d1ff7b19 : Roll recipe dependencies (trivial).
81d9015b9 : virtio-pmem: switch allocation to bottom up
94fce29ab : Add freq_domain and CGroup V2 support
4abcdf6b4 : disk: qcow: read table clusters in a single I/O
aa25762cd : disk: qcow: flush BufWriter before dropping
abc15b43c : Roll recipe dependencies (trivial).
f638d4e0a : x86_64: move PCI memory layout logic out of global
9c13ce4cc : devices: fs: check root_dir is a valid CString
2550f3568 : devices: fs: use C string literals to avoid unsafe
18e145492 : bat: Add fake_power/resume_fake_power crosvm_control api
b5592f1b7 : e2e_tests: fs: fix list formatting in comment
0ab9a6945 : Roll recipe dependencies (trivial).
5e7201d3a : arch: make generate_pci_topology private
53761f9b8 : Roll recipe dependencies (trivial).
d3851eab0 : devices: fix clippy::manual_unwrap_or_default lints
de09e2edb : devices: vhost-user: device: gpu: remove unnecessary imports
f67ee911d : devices: fs: Allow running virtio-fs without root
4d70c7310 : devices: vhost-user: device: gpu: do not hold the mutable reference when cancelling
7228ac3d0 : Cargo.toml: add missing Windows feature declarations
9bd5e72cd : rutabaga_gfx: Make fence state snapshot-able with serde json
846b26e08 : devices: virtio_gpu: Separate snapshot from suspend
7d917b9b0 : docs: Add a section for debugging tips of tests
1748262c8 : Cargo.lock: update libc 0.2.153 -> 0.2.161
d66023ee0 : x86_64: move DTB SetupData creation out of fdt module
e8dc70dcd : x86_64: construct a devicetree if overlays are present
d77b70bd4 : x86_64: make Android fstab optional in create_fdt()
6c57e3373 : Use inspect_err in place of map_err where appropriate
5fd5694f2 : rutabaga_gfx: Default suspend()/resume() should be no-op
7f8fba396 : Roll recipe dependencies (trivial).
94971fd07 : devices: virtio_gpu: save and restore shmem offset
d88a30a9b : crosvm_control: make validate_socket_path() unsafe
4bb586498 : Add --name option to the run command
0d73150cf : Add --name option to the run command
a4d827ded : rutabaga_gfx: Add suspend/resume hooks for components
3498a61b0 : Cargo.lock: update protobuf v3.2.0 -> v3.6.0
1214269d8 : Roll recipe dependencies (trivial).
ecdd71ae7 : devices: add virtcpufreq_v2
ad154e6d7 : devices: virtio_gpu: Keep state across sleep/wake
a2363b98f : vhost_user: add wait4 seccomp policy for vhost_user device
58f13de71 : Revert "devices: add virtcpufreq_v2"
97490c525 : devices: add virtcpufreq_v2
95fa5a8f3 : rutabaga_gfx: rutabaga_os: simplify descriptor handling
6210648ed : ANDROID: switch virtmgr from vm_control to crosvm_control
75458ee8f : Roll recipe dependencies (trivial).
d6140eb6e : Roll recipe dependencies (trivial).
f99e505ed : tools: contrib: crosvmdump support --block.
e002bc109 : Roll recipe dependencies (trivial).
8ff6fab74 : Roll recipe dependencies (trivial).
2c73ec1f0 : devices: vhost_user_frontend: panic if worker fails
9861eeb94 : docs: add Rust in building crosvm
4f06b28a9 : devices: vhost-user frontend: fix backend_req on Linux
065043968 : UPSTREAM: devices: vhost-user frontend: fix backend_req on Linux
4cdd0d99d : cmdline: add --no-pmu option
93272c20b : devices: bus: add test for read() zeroing behavior
18f43226d : devices: bat: get power property before first read
85697dd51 : device: fs: Enable fs_runtime_ugid_map feature
f36fc40bb : power_monitor: make system_api dep optional
8c5e8f212 : Revert "balloon: Extend dynamic_mapping tube support to linux"
40920e727 : hypervisor: pass IoOperation data as slices
787240bce : Cargo.toml: reindent all-android features
967294a8b : Roll recipe dependencies (trivial).
17a737e4a : vmm_host: Support vhost_user front-end device monitor sockets in Windows
16298be6a : vmm_host: Make vhost_user front-end device monitor sockets
8de3640c0 : devices: vhost-user frontend: rewrite worker as non-async
3e1811de7 : Roll recipe dependencies (trivial).
bb63b39cf : crosvm: simplify gdb setup
9e675e62b : crosvm: refactor tube plumbing in vm init
60053cdf0 : rutabaga_gfx: fix observed errors in cross-domain
3b64cca5a : Roll recipe dependencies (trivial).
4775054db : device: fs: Setup UID-GID mapping between guest and host without user-namespace.
d8fbb091c : power_monitor: add powerd client
0090567d2 : vhost: improve set_vring_addr() validation
1cba602f7 : Roll recipe dependencies (trivial).
b61d89501 : crosvm: linux: refactor init_balloon_size calculation
7a486c000 : Roll recipe dependencies (trivial).
3e12b0068 : Roll recipe dependencies (trivial).
d8250a4d1 : power_monitor: make powerd monitor individual module
4f352d379 : system_api: add power_manager bindings and update other bindings
157aa8ef5 : Roll recipe dependencies (trivial).
f0d853907 : Roll recipe dependencies (trivial).
a12226c4d : Roll recipe dependencies (trivial).
5e064ea1e : Roll recipe dependencies (trivial).
c420ddc4d : Roll recipe dependencies (trivial).
153b16df7 : vmm_vhost: remove deprecated non-standard message types
8201cad1f : Roll recipe dependencies (trivial).
1205969b2 : Roll recipe dependencies (trivial).
e813a994f : Roll recipe dependencies (trivial).
cce87a14e : Roll recipe dependencies (trivial).
6241e6aa0 : Roll recipe dependencies (trivial).
a9b621c62 : devices: vhost-user-fs backend: reuse virtio worker
dc2488b3f : Roll recipe dependencies (trivial).
2e3478a02 : Roll recipe dependencies (trivial).
dc8524cb3 : Roll recipe dependencies (trivial).
20803a395 : rutabaga_gfx: Add rutabaga_resource_wait_sync interface
511af5a11 : disk: windows: restrict file sharing
e56281d44 : windows: more vmexit tracepoints
82e0b2f81 : devices: add IDs and constants for virtio-media
8d06f7a77 : hypervisor: tests: add REP INSB/OUTSB test
c9fb63556 : hypervisor: haxm: fix repeated port IO (REP OUTSB etc.)
c4343a158 : Roll recipe dependencies (trivial).
53b97c399 : Roll recipe dependencies (trivial).
3751dc693 : Roll recipe dependencies (trivial).
787251f65 : Roll recipe dependencies (trivial).
993cf06c7 : Roll recipe dependencies (trivial).
d40b9d103 : devices: virtio: add Virtio device state machine
612394547 : devices: virtio: virtio suspendable macro fix and comment
132a7eebe : Roll recipe dependencies (trivial).
84388aea3 : Roll recipe dependencies (trivial).
76ca44221 : devices: vhost-user frontend: shorten thread name
6d32cf0a8 : Roll recipe dependencies (trivial).
b5fd9aa9e : Roll recipe dependencies (trivial).
cb48db6af : Roll recipe dependencies (trivial).
0f30f2e6f : plugin: fix needless borrow clippy warning
951d0ee77 : ext2: fix needless borrow clippy warnings
ee6e0ef0e : Roll recipe dependencies (trivial).
e1838f518 : Roll recipe dependencies (trivial).
2a96ff0fb : hypervisor: kvm: riscv64: don't mutate through a shared ref
8eca6e616 : ext2: Support xattr that fits inode records
aef7f5f5a : hypervisor: add gvm feature to Cargo.toml
3debe1909 : win_audio: duplicate the 2 ready Events
3fd465afd : Roll recipe dependencies (trivial).
b093ed45e : Roll recipe dependencies (trivial).
290b17e4d : Roll recipe dependencies (trivial).
a43b9346f : cros_async: remove ?Sized from Default bounds
72550d97b : hypervisor: x86_64: remove unused get_emulated_cpuid()
92e906674 : Roll recipe dependencies (trivial).
592b420ee : Roll recipe dependencies (trivial).
656090ee2 : Roll recipe dependencies (trivial).
7c9993946 : Roll recipe dependencies (trivial).
52f0f63d8 : hypervisor: geniezone: GZVM_CHECK_EXTENSION mutates its parameter
7e3e8f8cc : rutabaga_gfx: kumquat: use RutabagaEvent abstraction
16ab05b39 : rutabaga_gfx: os: introduce RutabagaEvent
b21a70b80 : Roll recipe dependencies (trivial).
7cf4c6e51 : Roll recipe dependencies (trivial).
5aaf0a3a5 : Roll recipe dependencies (trivial).
9cd9bcaf2 : Roll recipe dependencies (trivial).
d5555a8a2 : Roll recipe dependencies (trivial).
0dd634885 : Roll recipe dependencies (trivial).
df8a5b540 : Roll recipe dependencies (trivial).
1e1429dc8 : Roll recipe dependencies (trivial).
8162b08ed : Roll recipe dependencies (trivial).
caebe43a8 : ext2: Make Builder take a root directory path
1356b8277 : ext2: Extend Inode size to 256
994677f76 : Roll recipe dependencies (trivial).
f2d11d94d : hypervisor: disable getsec and invvpid tests on Linux
09e715bf6 : fix fdt location when static swiotlb is enabled
3b767d235 : crosvm device: add --socket-path and --fd, deprecate --socket
b073f872d : devices: vhost-user: remove keep_rds from VhostUserStream::new_socket_from_fd()
b222eb335 : net_util: add test for create_sockaddr()
e91ab4248 : Roll recipe dependencies (trivial).
bd545cf29 : Roll recipe dependencies (trivial).
76c3a1bfe : docs: update gfxstream guest docs
bbc406dd0 : Roll recipe dependencies (trivial).
a0db4972c : fix fdt location when static swiotlb is enabled

+- Project: platform/external/curl

e31fc9fdb : Move CURL_OS from Android.bp to lib/curl_config.h
b1ef0e1a0 : RELEASE-NOTES: synced
62020546c : THANKS: contributors from the 8.11.0 release
380790b24 : GHA/non-native: fix installing OpenLDAP on OpenBSD
087f77d85 : GHA/macos: drop WebSockets from job names
1b4897f3c : RELEASE-NOTES: update cmake LDAP-related entry [ci skip]
e1ed6b8e2 : mbedtls: remove failf() use from mbedtls_random
3a35901a1 : wolfssl: coexist with openssl, further work
f81abcbfe : RELEASE-NOTES: synced
413300779 : wolfssl: no more use of the OpenSSL API
46bd595b7 : ci: update dependency wolfSSL/wolfssh to v1.4.19
6b2bc8130 : openssl: extend the OpenSSL error messages
78c317292 : curl_addrinfo: support operating systems with only getaddrinfo(3)
25025419c : pytest: include curl version string and python platform in log
9d32724c1 : certs: add missing `-CAcreateserial` option for LibreSSL
97d69ec0d : winbuild: drop `gen_resp_file.bat`
d88db0b6b : tests: use a set for several of the curl_props
9b863ac67 : vquic: recv_mmsg, use fewer, but larger buffers
922235e56 : ngtcp2: do not loop on recv
2f22fc10e : GHA/linux-old: adjust configure job name
329a8e9da : unit1307: tidy up Apple OS detection
9640a8ef6 : schannel: fix TLS cert verification by IP SAN
fb711b509 : build: fix clang-cl builds, add CI job
9acecc923 : tidy-up: whitespace, fix CI spacecheck for docs
0cececef0 : config: rename the OS define to CURL_OS to reduce collision risk
c0d2b9bee : MQTT: remove trailing newline
98561a3be : RELEASE-NOTES: synced
bc6212004 : pytest: show curl features and protocols
e4aa07b52 : mqtt: fix mqtt.md wording and add clearer explanation
22e7b1512 : winbuild/README: consolidate command prompt section
85ee61402 : ci: update rojopolis/spellcheck-github-actions digest to 74c2a14
b57e9c8f9 : OS400: don't delete source files when building with debug
0d475da1c : pytest: fix run against multissl curl
a70f5bc4b : curl/config2setopts: move SSH related options into same block
84f96e336 : tool_operate: url_proto improvements
a273cc255 : multi: fix "Useless Assignment"
b7a06dee5 : setopt: return error for bad input to CURLOPT_RTSP_REQUEST
74c7b672d : runtests: add comment for handle64 pathsep requirement [ci skip]
cbc39a88d : setopt_cptr: make overflow check only done when needed
1a2d38c47 : GHA/windows: avoid curl.exe libtool wrapper
ef7399b8b : runtests: pass single backslashes with Windows Perl
cd2b45201 : src/lib: remove redundant ternary operators
080973dcd : lib: msnprintf tidy-ups
cb011ac09 : tls: avoid abusing CURLE_SSL_ENGINE_INITFAILED
974f6bcf8 : RELEASE-NOTES: synced
701813b23 : tests/http: add --insecure tests
0e0c8cdf8 : tests/scorecard: allow remote server test
770702fa3 : CI: bump wolfSSH and wolfSSL
80aa1fe46 : tool_getparam: drop unused time() call
398025a41 : appveyor: fix job names, tidy-up
1db9af2b9 : cmake: tweaks around debug mode and hidden symbols
0da1489eb : build: disable warning `-Wunreachable-code-break`
e77326403 : multi: split multi_runsingle into sub functions
522c89a13 : lib: remove Curl_ prefix from static functions
1160380e5 : docs: clarify FTP over HTTP proxy functionality somewhat
0910a412a : cmake: fix missing spacing in log message
aafc074f8 : cmake: clear package version after `pkg-config` detection
b8c12634f : INSTALL-CMAKE: fix punctuation and a typo [ci skip]
f66af623c : cmake: document `-D` and env build options
ffc4e6a88 : cmake: mark as advanced some internal Find* variables
45862f222 : cmake: tidy up and shorten symbol hiding initialization
1da1fcc43 : cmake: tidy up picky warning initialization
fff6afb0f : cmake: rename local variables to underscore-lowercase
f48609d4c : cmake: limit `CURL_STATIC_CRT` to MSVC
c9b54fadd : cmake: use `list(APPEND)` on `CURL_INCLUDES`
7e94680c9 : cmake: tidy up `CURL_DISABLE_FORM_API` initialization
ec68fb5a6 : cmake: drop obsolete items from `TODO` and `INSTALL-CMAKE`
02ac5547c : docs/libcurl/opts/Makefile.inc: alphasort the options list
469f53686 : curl: detect ECH support dynamically, not at build time
8cb2d5f48 : quic: use the session cache with wolfSSL as well
b34b757c2 : ngtcp2: set max window size to 10x of initial (128KB)
358eae42a : bearssl: improved session handling, test exceptions
30f66c8ba : mbedtls: handle session as blobs
3722ed037 : RELEASE-NOTES: synced
1056889f9 : url.md: clarify
9255e7a10 : version: minor cleanups
ac7ae08f0 : schannel: reclassify extra-verbose schannel_recv messages
0325e1b9b : mprintf: treat `%o` as unsigned, add tests for `%o`, `%x`, `%X`
7ca164fab : mprintf: do not ignore length modifiers of `%o`, `%x`, `%X`
f901ab84e : schannel: ignore error on recv beyond close notify
38c57bdf0 : GHA: update five dependencies
59831f806 : tool_operate: split up the huge single_transfer into sub functions
30da1f597 : setopt: split Curl_vsetopt() into several sub functions
b3816f67b : cmake: avoid setting `BUILD_TESTING`
7c023c3f6 : libssh2: delete duplicate `break`
6b440704d : GHA: drop "3" from openssl names and keys
b8de0dadd : cmake: tidy up line order [ci skip]
5f9411f95 : GHA/windows: work around Git for Windows perf regression
73d277919 : GHA/linux: drop patch from openssl3 thread sanitizer
e1099726a : CI: update dependency openssl/openssl to v3.4.0
53fdc5faf : runtests: use deterministic sort for `TESTINFO` lines
e43d37c54 : ci: fix renovate's matching for OpenSSL and quictls
b327a53f0 : GHA: use `--no-install-suggests --no-install-recommends` where missing
0e18bd394 : mk-lib1521: fix the long return code check
605bc2d2c : GHA/linux: merge 32-bit Linux workflow
acd134cfe : tests: Fix FILEFORMAT <file name=""> directive
b6219cd93 : GHA/linux: merge torture jobs into the main workflow
52851d325 : GHA/macos: use `test-torture` target for torture tests
096bfdeff : cmake/FindCares: fix version detection for c-ares 1.34.1
825a800e0 : cmake: use the `BSD` variable
9126eb5a8 : cmake: replace `CURL_*_DIR` with `{PROJECT,CMAKE_CURRENT}_*_DIR`
c33174d42 : GHA/windows: increase timeout for vcpkg jobs due to slowness
dcb27fdd4 : GHA: fix the msh3 renovate thing
943df95ae : CI: run with standard mod_http2
c2e263677 : GHA/windows: add http3 to libressl vcpkg job
1e0305973 : GHA/windows: ignore results for test 987
0978afd7a : GHA/linux: tidy up and performance
e89491e1f : cmake: fix compile warnings for clang-cl
7dd7cbac8 : version: say quictls in MSH3 builds
a58584a88 : checksrc: add check for spaces around logical AND operators
51724c43e : curl_ws_recv.md: the 'meta' pointer is only returned on success
d6bae1cb8 : curl_ws_recv: return recv 0 and point meta to NULL on all errors
2816cba2d : GHA/linux: bump to quictls 3.3.0
547d60047 : curl_multi_perform.md: fix typo
684773319 : docs: fix a typo in some cipher options
e29629a40 : GHA: update ngtcp2/ngtcp2 and awslabs/aws-lc
2c1d83e6a : Dockerfile: update Docker digest to d830561
4de627ab0 : winbuild: add initial wolfSSL support
b4e162566 : KNOWN_BUGS: LDFLAGS passed too late
5ea61a0b5 : hsts: support "implied LWS" properly around max-age
288cfcbe3 : RELEASE-NOTES: synced
fbc0da376 : cmake: set version for `project()` and add CPack support
1b155f034 : tool_operate: reuse the schannel backend check
29faa7919 : libcurl/opts: improve phrasing for connection cap related options
fe2a72029 : http2: auto reset stream on server eos
2ae8d9b57 : libtests: generate the lib1521 atomically
b9877b74c : GHA: drop the hyper job
b42eb27c1 : openssl: improve retries on shutdown
8cdbaba4b : tool_operate: break out of loop on error
38bfe1c2a : GHA: switch off proselint
9cc246401 : source: avoid use of 'very' in comments
d1323839b : DISTROS: avoid use of "very"
193f1b484 : DISABLED: disable test 1060 with hyper
c97cd8282 : tests/http: fix ubuntu GnuTLS CI failures
beeeb85a7 : tests: update some HTTP/2 over HTTPS tests
fde532629 : winbuild/README: document how to clean a build
1e01e2b54 : GHA/macos: merge autotools and cmake jobs
a2f913ef6 : CI: explicitly specify the OS version when necessary
41c980bb0 : tests: capture stdin to get the vsftpd version number
6478a36b6 : src: guard for double declaration of `curl_ca_embed` in unity builds
adf2b4fa5 : libssh: use CURL_PATH_MAX instead of PATH_MAX
7fbcf4b9b : vquic: fix compiler warning with gcc + MUSL
facf59c30 : libssh2: use the filename buffer when getting the homedir
083b4ab6e : libssh2: put the readdir buffers into struct
1cf187a4f : CI: update GHA dependencies
3040971d1 : GHA: silence proselint warnings and an error
8403e5a70 : tests: fix callback signatures to please UndefinedBehaviorSanitizer
eed3c8f4b : curl.h: remove the struct pointer for CURL/CURLSH/CURLM typedefs
ad1c49bc0 : lib: remove function pointer typecasts for hmac/sha256/md5
335d32570 : conncache: More efficient implementation of cpool_remove_bundle
e20b139a1 : GHA/linux: add cmake job for system mbedTLS with pkg-config
e33cf006e : server/mqttd: fix two memory leaks
8ea120f61 : GHA/linux: fixup pip for Ubuntu 24.04
69bf530df : tool_operate: make --skip-existing work for --parallel
9bee39bfe : url: use same credentials on redirect
eb77297cc : lib: move curl_path.[ch] into vssh/
a7ccd0261 : ftp: move listen handling to socket filter
3455d360c : mbedTLS: fix handling of TLSv1.3 sessions
513904c26 : wolfSSL: fix handling of TLSv1.3 sessions
aa43b4246 : curl-rustls.m4: set linker flags to allow rustls build on macos
960521d21 : smb: do not redefine `getpid` on Windows
e8a007de0 : GHA: optimize test prereq steps
75dfb7b64 : pytest: include `buildinfo.txt` in the output
66cc01575 : GHA/windows: drop vcpkg workaround
01a815799 : cmake: tidy-ups and rebase fixups
a3601cf57 : tests: allow pytests to run in out-of-tree builds
79809ffe1 : GHA/linux: mbedTLS 3.6.1
ba68eb02f : CI: update rojopolis/spellcheck, actions/checkout, actions/upload-artifact
7d53a5929 : CI: bump github/codeql-action, vmactions/omnios-vm and actions/cache
fe8399f06 : gnutls: use session cache for QUIC
954177b9d : tool_xattr: create the user.creator xattr attribute
dfd36d3ee : cmake: apply `WIN32_LEAN_AND_MEAN` to all feature checks
8e3450577 : cmake: untangle feature detection interdependencies
7bff68647 : ci: dump `curl_config.h` to log in all jobs
617feb7c9 : RELEASE-NOTES: synced
0095f9846 : libssh2: split the statemachine function into smaller sub functions
3b43a05e0 : netrc: cache the netrc file in memory
962097b8d : TLS: TLSv1.3 earlydata support for curl
d0377f5a8 : multi: convert Curl_follow to static multi_follow
be39ed19a : cookie: overhaul and cleanup
91d451b48 : cmake: replace `check_include_file_concat()` for LDAP and GSS detection
2c90f7f69 : cmake: allow manual configuration for LDAP
6074e3350 : cmake: add comments to feature check options applied globally
447bcea5a : cmake: stop adding dependency headers to global `CMAKE_REQUIRED_INCLUDES`
91519bfb7 : cmake: use `cmake_push_check_state()` around feature checks
ae5e538e5 : GHA: drop `--parallel` option for CMake + Ninja jobs
7bab201ab : cmake: add native `pkg-config` detection for mbedTLS, MSH3, Quiche, Rustls, wolfSSL
ae5351696 : cmake: tidy up detection C code
436bbbe7a : GHA/linux: skip installing rust if rustls is in cache
36bd80747 : GHA/linux, http3-linux: add CMake support, sync steps, other improvements
5b2d6448b : GHA/mac: simplify detecting SDK version bound to GCC
d3725f2bc : GHA/linux: fix mbedTLS cmake build
8c62479a7 : packages/OS400/curlmain: remove the strncpy calls
45b388fdc : tests/server/util.c: remove use of strncpy
08949637d : tool_getparam: replace two uses of strncpy(), ban strncpy
5ee43bb82 : tests: 780 - 783, new HSTS tests
a94973805 : hsts: improve subdomain handling
461ce6c61 : multi: make curl_multi_cleanup invalidate magic later
0f7e72fbc : wolfssl: use old version API without openssl extra
e377c9176 : GHA: add Linux and macOS mbedTLS jobs, fix issue
b941d16d5 : GHA/windows: drop vcpkg shiftmedia-gnutls, replace with mbedtls
6268caee8 : INSTALL.md: fix a typo that slipped into RISC OS
ee68b8db8 : RELEASE-NOTES: synced
adca93b53 : json.md: cli-option `--json` is an alias of `--data-binary`
e8c024aa9 : http_aws_sigv4: avoid local buffer and strcpy
c5b8569c7 : tftp: avoid two memcpy/strcpy
d90a8f07e : telnet: avoid two strcpy() by pointing to the strings instead
88ef62ba2 : setopt: avoid superfluous length checks before strcmp()
741e07edb : bearssl: avoid strpcy() when generating TLS version log message
3dfc256b9 : smb: replace use of strcpy() with snprintf()
45b7aa6b7 : altsvc: avoid using local buffer and memcpy
60d8663af : hsts: avoid the local buffer and memcpy on lookup
3e7a6fbb8 : configure: add GSS to `libcurl.pc` `Depends:`
9e19a577e : cmake: detect GNU GSS
3db50bd01 : CURLOPT_APPEND.md: goes for SFTP as well
699a2df35 : conncache: find bundle again in case it is removed
80dac51af : test1915: remove wrong comment
40bd652b7 : setopt: use a single function for HTTPAUTH and PROXYAUTH
a79f20d37 : cmake: do not propagate unused `HAVE_GSSAPI_GSSAPI_KRB5_H` to C
e888069f5 : cmake: detect `HAVE_NETINET_IN6_H`, `HAVE_CLOSESOCKET_CAMEL`, `HAVE_PROTO_BSDSOCKET_H`
86d5c2651 : configure: drop unused bare `socket.h` detection
6cfb615e9 : sws: fix unused static function with `TCP_NODELAY` undefined
2d1959dd0 : configure: drop duplicate feature checks for `poll()`, `if_nametoindex()`
5e7056609 : build: detect and use `_setmode()` with Cygwin/MSYS, also use on Windows
948a2b24f : ech: spelling, whitespace, say `--ech` default config
a71bc67f2 : GHA/macos: comment spelling and clarity
1d9606722 : build: add `ldap` to `libcurl.pc` `Requires:`
d9a9233af : RELEASE-NOTES: synced
19af07e7e : INSTALL-CMAKE.md: mention focus on shared libraries
6fe1c3bc6 : ci: update dependency ngtcp2/nghttp3 to v1.6.0
4407b890a : ci: update dependency ngtcp2/ngtcp2 to v1.8.0
3bd672866 : GHA/non-native: fix OmniOS job to fail on tests
b171ee601 : cmake: use OpenSSL for LDAP detection only if available
e9eda865d : warnless: remove curlx_sktosi and curlx_sitosk
57cc52337 : tests: enable additional ruff Python lint options
223fb00a7 : CI: run pytype and ruff on Python code
0f7ba5c5b : tests: change Python code style to pass ruff checks
2f3b7f20f : tests: fix some Python typing issues
0b864bde0 : CURLOPT_HEADERFUNCTION.md: do not modify the passed in buffer
6f454bab7 : asyn-ares: remove typecast, fix expire
b531e3307 : cmake: add missed variable to comment [ci skip]
bc055d08a : test1915: add tracing and connect timeout
566a6d7b0 : urlapi: normalize the IPv6 address
87e19ea68 : tests/valgrind.supp: remove a travis suppression, add a Debian
65eb20260 : openssl quic: populate x509 store before handshake
6c1b15768 : pytest: improve pytest_07_42a reliability
e1f93beff : test1515: add tracing and more debug info
28230bec1 : GHA/curl-for-win: tidy up `DOCKER_CONTENT_TRUST`
2400a6c6b : bufq: unwrite fix
08d13c0e4 : GHA/curl-for-win: re-enable image verification for debian:bookworm-slim
841f42150 : GHA/windows: add workaround for upstream vcpkg issue
a35f223cd : GHA/curl-for-win: disable `DOCKER_CONTENT_TRUST`
7b12c36ca : DEPRECATE: remove hyper in January 2025
679f18ef8 : RELEASE-NOTES: synced
bcec0840b : lib: use bool/TRUE/FALSE properly
78ed473db : wolfssl: add proper colon separator
98591551d : vtls: convert Curl_pin_peer_pubkey to use dynbuf
ebd9d67b8 : vtls: convert pubkey_pem_to_der to use dynbuf
9b0c0d6ad : tests: let openssl generate random cert serials
fe0ee1167 : GHA/linux: fix wolfSSL version in cache key
51d4b19ce : GHA/linux: drop duplicate names from cache keys
0acf0bced : tests: simplify `pathhelp.pm`, avoid using external tools
6fd5a9777 : wolfssl: convert malloc + memcpys to dynbuf for cipher string
b5d453eff : lib: avoid assigning 'result' temporarily
23386872d : multi: make multi_handle_timeout use the connect timeout
8eb983318 : GHA/labeler: adjust some docs patterns
39697dead : tests: remove debug requirement on 38 tests
8c76ae317 : vtls: skip a "useless assignment"
b0c82239c : tool: support --show-headers AND --remote-header-name
bc6072d24 : GHA/macos: update comment with new Xcode default for macos-13 [ci skip]
96fc2b88f : GHA/macos: drop unsupported Xcode version references
bf44536e2 : GHA/macos: delete `macos-12` jobs, update matrix for `macos-14`
4b4ff444d : GHA/macos: Sequoia chores, fixes for llvm 18
aa092012b : tests: fixup `checkcmd` `PATH` on non-unixy platforms
f88fb1c83 : tests: fix shell quoting on native Windows Perl
9c1ab7fa4 : tests: fix `%POSIX_PWD` on native Windows Perl
ca9106c1e : tests: replace `%PWD` with `%SSH_PWD` in SCP/SFTP tests
10ddf4c66 : RELEASE-NOTES: synced
303c0cf74 : CI: bump actions/checkout from 4.1.7 to 4.2.0
43cbe53ea : CI: bump github/codeql-action from 3.26.8 to 3.26.10
85a81d278 : docs/libcurl: expand multi documentation
c72cefea0 : select: use poll() if existing, avoid poll() with no sockets
72d2090fc : ftp: fix 0-length last write on upload from stdin
68c358619 : tests: replace hard-coded `/dev/null` with variable
31a29fc6b : tests: add and use `%PERL` variable to refer to the Perl binary
f6cb707b6 : tests: replace `%PWD` with `%FILE_PWD` for `file://`
18143579b : cmake: readd `generate-curl.1` dependency for `src` just in case
212ff19b6 : runtests: drop unused code for old/classic-mingw support
aed3f0231 : GHA: move Cygwin jobs back into the Windows workflow
7bc653a1d : appveyor: bump to OpenSSL 3.3
069a9654c : appveyor: delete unused WebSockets option remains [ci skip]
b85d37a3b : CI: bump vmactions/omnios-vm from 1.0.6 to 1.0.7
97c0f89bd : quic: use send/recvmmsg when available
876f17ad2 : ci: update dependency awslabs/aws-lc to v1.36.0
256fa6393 : CI: bump github/codeql-action from 3.26.6 to 3.26.8
cd6362973 : CI/winbuild: remove enabling of websocket - done by default now
e474b0230 : runtests: fix indentation
80a8e2495 : GHA/cygwin, msys: move tests to cmake jobs, to finish faster
47d604ae7 : GHA/windows: fix `find` in old-mingw-w64 `curl -V` step
2c419fc14 : ci: tidy-ups
8a8719d2e : cmake: websockets tidy-ups
842f88434 : GHA linux: restore `apt-get update`
7048d1d21 : docs/cmdline-opts: GnuTLS supports PKCS#11 URI in --cert option
f2ce14e10 : singleuse: limit checks to non-unity jobs
9541e6662 : GHA/windows: formatting, adjust timeouts, tidy-ups
95d33905f : CI: update 32-bit CI to Ubuntu 24.04 and enable more
7f3d59827 : CI: improvements in test reliability and performance
aca28abac : lib: fix disabled-verbose-strings + enable-debug build warnings
d78e129d5 : WebSockets: make support official (non-experimental)
cfae354a9 : codespell: extend checks to more subdirs
6b2824dae : GHA/torture: prefer pip `--break-system-packages` for speed
fcc89619d : singleuse: make `git grep` faster, add Apple `nm` support
f0f9e2c61 : GHA/http3-linux: add name to align with other Linux workflows
1b0da9cfc : RELEASE-NOTES: synced
44505adb3 : GHA/linux: improve cmake use, switch to Ninja
d08d16cac : multi: avoid reading whole struct pointer from pointer
6f2a1666a : tests: use '-4' where needed
841fbdb63 : tests: improve mqtt server handling
e61c5eb41 : tests: check http/2 and http/3 server responsiveness
ef400f4f3 : test190: replace %FTPTIME2 with a fixed value
dc284be4c : tests: remove the %FTPTIME3 variable
af60bdf4e : socks_gssapi: switch to dynbuf from buffer with strcpy
8e9a8dd97 : tests: add file: tests with existing files
dd0405c90 : test518: restore valgrind disable
5d7275d5d : openssl: convert a memcpy to dynbuf use
b2dc95540 : test1540: add debug logging
cbbf4d8ed : test504: fix handling on pending connect
17a7e12e1 : test2502: add libtest debug tracing
f51898277 : testrun: explicitly set proper IP address for stunnel listen/connect
8289ac1be : lib/cw-out: initialize 'flush_all' directly
21fb30b5d : test1035: convert host name back to utf8 as should be
4e22d7c56 : openssl: remove two strcpy() calls
f383a1761 : tool_doswin: simplify; remove unused options and strncpy calls
0b70b23ef : tests: add codeset-utf8 as a feature
5fb1b64fd : tests: introduce %CLIENT6IP-NB
7aa2b4e01 : tests: make precheck for HTTP on 127.0.0.1 into a feature
56183c1d6 : tests: postcheck is now in verify
7060b9b08 : build: fix cross-compile check for poll with bionic
da94b0237 : THANKS: cleanup duplicates
d82f9f965 : build: add pytest targets
ed766751c : GHA/linux: tidy up msh3 build step
b4cf21b45 : build: clarify CA embed is for curl tool, mark default, improve summary
73ea09b9e : GHA/linux: review and prune valgrind use
68a224c29 : tidy-up: indentation in autotools sources
b70e8f4b9 : cleanup: added space around ternary expressions
b7c9166c4 : test1185: Updated test data with NOSPACEC rule
cff75acfe : checksrc: Added checks for colon operator in ternary expressions
635253caa : configure: improve help string for some options
802577791 : curl_trc: fix build with verbose messages disabled
98c7d4b19 : multi.c: warn/assert on stall only without timer
336b8ca54 : GHA/linux: merge AWS-LC workflow
820afa2b7 : GHA/linux: merge wolfSSL workflow
f6036dead : build: show if CA bundle to embed was found
9b3a7d1e6 : GHA/linux: enable test bundles for cmake jobs
4619b4103 : build: fix possible `-Wformat-overflow` in lib557 with test bundle builds
954bd9083 : CI: improve readability of Circle CI config
d48747b26 : GHA/windows: mark 3023, 3024 flaky for mingw-w64 7.3.0 job
7b88bb444 : cmake: make `test-ci` target skip building dependencies
8e7ccfd17 : CI: add missed updates to the `configure-libssh` job
7b75bd52f : RELEASE-NOTES: synced
b4f7ec71c : tool_operate: let --create-dirs work for --dump-header as well
44fc2687b : tests: libtests and unit tests need explicit #include memdebug
7307c1a28 : gtls: Add P12 format support
a4703dac1 : checksrc: fixed typo
1f1fc27c1 : cmake: require quictls (or fork) when using msh3 on non-Windows
95a87a0e2 : ci: update rojopolis/spellcheck-github-actions digest to b83ca7c
22652a5a4 : curl: add options for safe/no CA bundle search (Windows)
668584a94 : build: drop exclamation marks, use "msh3/msquic" where incomplete
112417047 : GHA/windows: formatting
1b8449674 : GHA: use more ninja, build examples in the last step, and more
4b378ea43 : GHA: revert some build test steps added by #14772
71cf0d1fc : tests: speed up builds with single-binary test bundles
6a1dcdc5d : cmake: tidy up
0aece8f66 : tidy-up: indent, whitespace, `#error` in make files
d83b528a8 : tidy-up: spelling
1064dfa86 : tidy-up: indent, whitespace, comment in sources
8afdf8dc5 : RELEASE-NOTES: synced
867c187fd : build: use `configurehelp.pm.in` with autotools and cmake
7100c5bc9 : build: tidy up and improve versioned-symbols options
30ab1133c : configure: catch Apple in more target triplets
9f56bb608 : GHA/configure-vs-cmake: check `libcurl.pc`/`curl-config`, fix issues
ce7d0d413 : ipfs: add options to disable
8b42df3eb : src: tidy-up conditions for CA bundle search
fb35a5fe2 : CI: enable RTMP and WebSockets in old Linux build
28d1d71ff : lib: memdebug comment fixup [ci skip]
c34aaca5b : GHA/linux: disable unity build to fix scanbuild job
496da69aa : cmake: fix broken dependency chain for cmdline-opts, tidy-ups
5cefda1b9 : build: tidy up deprecation suppression, enable warnings for clang
e1ab01d1b : cmake: expand `CURL_USE_PKGCONFIG` to non-cross `MINGW`
c5e3d8ba9 : GHA: speed up builds in torture jobs, tidy up
60c3d0446 : autotools: add support for 'unity' builds, enable in CI
45202cbba : cmake: separate target for examples, optimize CI, fix fallouts
caefaecaa : runtests: log output improvements
b2331f3ee : request: on shutdown send, proceed normally on timeout
47d6ec980 : alt-svc: honor data->state.httpwant
433d73033 : url: connection reuse on h3 connections
c91c37b6e : tests: remove all valgrind disable instructions
876047d1c : libssh2: use the Curl_* memory functions to avoid memdebug
5895b71b0 : libssh.c: handle EAGAIN during proto-connect correctly
b20ac93f4 : multi.c: make stronger check for paused transfer before asserting
fcbe930ef : tests/valgrind.pm: fix warnings with no valgrind report to show
df5ad100f : GHA/linux: fix installing valgrind, libpsl for rustls job, other cleanups
bc6f719d2 : GHA/windows: add MSVC vcpkg MSH3 job
dff66196d : CI: disable dependency tracking in Circle CI jobs
8439007fe : GHA: keep default pkgconf, do not replace with pkg-config on Linux
3434c6b46 : unit1660: fix unreachable code warning in no-SSL builds
3efba94f7 : cmake: allow building tests in unity mode
aa1a15391 : lib: fix unity builds with BearSSL, MSH3, Quiche, OmniOS
895008de9 : tests: Only log warnings or worse by default in smbserver
33472dbc1 : runtests.md: Suggest a value for -j for torture tests
22ba044f0 : tests: Fix keyword for test1411
54fd903f9 : GHA/torture: bump test parallelism to `-j10`
cf2f4ca58 : cmake: sync torture test parallelism with autotools
bc2f72b9a : tidy-up: rename `CURL_WINDOWS_APP` to `CURL_WINDOWS_UWP`
445fb8123 : cmake, `Makefile.mk`: use `-isystem` for dep headers, silence BearSSL issues
8d32e878a : cmake: delete unused `NEED_LBER_H`, `HAVE_LDAP_H`
941394939 : tests: testrunner fairness
8498b1b95 : cmake/FindNGTCP2: use library path as hint for finding the crypto module
50e2cb589 : build: `buildinfo.txt` improvements
fdb8b40fe : autotools: tidy-ups in `src/Makefile.inc`
d1bf44799 : build: limit `arc4random` detection to no-SSL configs
44f9ce02a : cmake: disable default OpenSSL if BearSSL, GnuTLS or Rustls is enabled
fc708ea9e : dist: drop `.in` files from `EXTRA_DIST`
e666a678b : checksrc: check for spaces around '?', '>' and '<'
fbf5d507c : lib/src: white space edits to comply better with code style
a57b45c38 : TODO: IMAP upload unread
c4f781e69 : cmake: drop redundant assignments
abf737b3c : cmake: drop redundant zlib var, rename function (internals)
56b0442e4 : urlapi: drop unused header
c997f3e00 : processhelp.pm: improve `taskkill` calls (Windows)
888662b74 : tests: delete duplicate macro check
844528573 : CURLMOPT_PIPELINING.md: clarify that CURLPIPE_NOTHING is not default
8ad3597d2 : tests: testrunner reliability improvements
5a263710f : lib, src, tests: added space around ternary expressions
0236276c3 : RELEASE-NOTES: synced
1ec5336b6 : negotiate: conditional check around GSS & SSL specific code
c0a9db842 : curl_url_set.md: document HOST handling when URL is parsed
6d0a48e58 : sendf: add condition to max-filesize check
7eb8c0484 : RELEASE: synced
dabeb542f : THANKS: contributors from the 8.10.1 release
210cf7cd9 : GHA/windows: revert enabling SSPI option
0cfc7fcca : tool_cb_wrt: use "curl_response" if no file name in URL
89de54320 : GHA/non-native: bump vmactions/omnios-vm from 1.0.2 to 1.0.6
41290d437 : GHA/windows: fix bad typo in MSVC GnuTLS stunnel condition
8a7efdb87 : GHA: misc updates: impacket, timeouts, mingw-w64 32-bit
e53523fef : CI: move Azure jobs to GHA, fix fallouts, sshserver, runtests tweaks
0af017d66 : GHA/non-native: install Perl for FreeBSD cmake jobs
2b6bdb41c : cmake: fix MSH3 to appear on the feature list
71a83b878 : GHA/non-native: enable SFTP/SCP tests on FreeBSD
0ba70dd13 : singleuse: drop `Curl_memrchr()` for no-HTTP builds
5f93cad08 : GHA: replace make with ninja in Cygwin, MSYS2 and mingw-w64 cmake jobs
2e76e1f7f : GHA/non-native: replace make with ninja in cmake jobs
8f5d73af1 : GHA: add `valgrind` to the job titles using it, and tidy-ups
c3a7145a4 : GHA/macos: tidy-up
65168b8eb : GHA/windows: use libuv for event-based tests on openssl job
09b21e475 : GHA/windows: re-add GnuTLS after upstream fix
8709404fe : GHA/macos: make impacket found by tests
c89cc09ab : GHA/macos: replace make with ninja for cmake builds
51fc89965 : GHA/macos: tidy-ups, install impacket for cmake jobs
aef384a7d : http: make max-filesize check not count ignored bodies
7eda757d9 : FTP: partly revert eeb7c1280742f5c8fa48a4340fc1e1a1a2c7075a
2b652b863 : transfer: remove redundant variable use: select_bits
3e36334af : RELEASE-NOTES: synced
50166c0de : connect: store connection info when really done
a33bcc9b5 : transfer: fix sendrecv() without interim poll
897284512 : vtls/rustls: support strong CSRNG data
6d9b40d6a : vtls/rustls: simplify ciphersuite skipping
f09adc3ad : vtls/rustls: rustls-ffi 0.14.0 update
65b8d8946 : vtls/rustls: differentiate error messages
d38458d82 : vtls/rustls: simplify builder cleanup
bef0acaf2 : request: correctly reset the eos_sent flag
e70c22b62 : tests: tweak lock file handling and timers
8ca603083 : RELEASE-NOTES: synced
28fa417bf : autotools: fix `--with-ca-embed` build rule
79f0007c2 : setopt: remove superfluous use of ternary expressions
381de75ce : CURLMOPT_TIMERFUNCTION.md: emphasize that only a single timer should run
61e48b4df : vtls: fix `Curl_ssl_conn_config_match` doc param
70d3a9b6a : http2: when uploading data from stdin, fix eos forwarding
9dc0770e2 : cmake: ensure `CURL_USE_OPENSSL`/`USE_OPENSSL_QUIC` are set in sync
b29caf0ba : GHA/macOS: add an -e test
a610bb8d9 : test537: cap the rlimit max this test runs
283af039c : QUIC: on connect, keep on trying on draining server
0ca15307a : rustls: fixed minor logic bug in default cipher selection
6a9f3764f : lib: fix AF_INET6 use outside of USE_IPV6
48f61e781 : multi: check that the multi handle is valid in curl_multi_assign
30865e09b : RELEASE-NOTES: synced
31be4d5bf : runtests: accept 'quictls' as OpenSSL compatible
28ca199d8 : libcurl-docs: CURLINFO_LOCAL_* work for QUIC as well as TCP
a3bd1dda1 : RELEASE-NOTES: synced
5e225c84a : THANKS: contributors from 8.10.0
813995bb1 : GHA/windows: raise test run timeouts
3aef8b97b : CURLOPT_COOKIE.md: tiny language edit
805bbf7c5 : NTLM_WB: delete remains in tests, docs updates
7020bc9c4 : cd2nroff: only require "added-in" for source "libcurl"
c4ab33370 : CURLOPT_*-docs: provide additional details
fc1c326ab : tests: ignore the tests/buildinfo.txt file generated
4a382f4bf : CURLOPT_COOKIE.md: this cookie gets appended to the others
b61d6e088 : GHA/linux-old: add an autoconf/automake build
a744b7bb4 : server/getpart: delete unused code
0d6c8b753 : lib: enable strerror and strncpy checksrc warnings in subdirs
63ebc48b6 : content_encoding: avoid getting all encodings unless necessary
81300c30e : unit1398: test maximum input parameters/output segments
80df6a5c1 : checksrc: add STRNCPY as an opt-in rule to detect and error on strncpy
344a177aa : lib: remove the final strncpy() calls
eb8ad66f6 : asyn-thread: stop using GetAddrInfoExW on Windows
24606191f : doh: remove redundant checks
c72dd0bb1 : maketgz: fix RELEASE-TOOLS.md for daily tarballs
f6955e421 : Makefile.mk: update to use Markdown sources for manual
9783c4540 : autotools: fix MS-DOS builds
4a8be9131 : build: drop unused `NROFF` reference
1ce626158 : Makefile.dist: fix `ca-firefox` target
0cdd9afd1 : cmake: fix to show features/protocols with `CURL_DISABLE_INSTALL=ON`
1fdea1684 : build: generate `buildinfo.txt` for test logs
b0a1c9bdc : CI: update names of jobs that are now building tests [ci skip]
b12a8158a : .dcignore: remove
b1f0b8f60 : pop3: fix multi-line with LIST arg
435dd8aa6 : doh: cleanups
40017fb32 : firefox-db2pem: mention what "certutil" the script uses
8d6db8cd8 : scripts/delta: output bugfixes/day
2f040ac61 : RELEASE-NOTES: synced
88c7182b2 : GHA/distcheck: keep upload artifacts for one day only
56f90637a : CURLMOPT_SOCKETFUNCTION.md: expand on the easy argument
5c14d696f : maketgz: move from / into scripts
0d1504b20 : libcurl.def: move from / into lib
519be2b9d : system_win32: fix typo
4ea6ff55a : GHA/distcheck: bump actions/upload-artifact from 4.3.6 to 4.4.0
f16808173 : Dockerfile: Update debian:bookworm-slim Docker digest to 903d322
f905769fe : llist: only provide Curl_llist_tail in unit test builds
6aa5f25c6 : GHA/linux-old: split test step into build and run
db5eae112 : cf-socket: fix listen pollset for FTP active mode
464d466ae : smb: convert superfluous assign into assert
3e7ddf94a : schannel: avoid malloc for CAinfo_blob_digest
32eee8f13 : src: namespace symbols clashing with lib
5ebc820c7 : KNOWN_BUGS: cleanup
6588a7f03 : openssl: certinfo errors now fail correctly
bca9c7719 : lib: make SSPI global symbols use Curl_ prefix
6a9b71037 : cmake: restore variable names `CURL_CA_BUNDLE_SET`/`CURL_CA_PATH_SET`
9e629a148 : docs: document the (weak) random value situation in rustls builds
4e16f8aa6 : RELEASE-NOTES: synced
a07ba37b5 : cf-socket: fix pollset for listening
81a334287 : connect: always prefer ipv6 in IP eyeballing
933e202eb : KNOWN_BUGS: CURLOPT_CONNECT_TO does not work for HTTPS proxy
4ff04615a : lib: use FMT_ as prefix instead of CURL_FORMAT_
a2bcec0ee : openssl: fix the data race when sharing an SSL session between threads
2c2292eca : haproxy: send through next filter
e512fbfa6 : printf: fix mingw-w64 format checks
6004f9673 : cmake: default `CURL_DISABLE_LDAPS` to the value of `CURL_DISABLE_LDAP`
d76b64858 : rand: only provide weak random when needed
269fdd4c6 : lib: remove use of RANDOM_FILE
00ef60732 : url: fix connection reuse for HTTP/2 upgrades
76212cbf3 : curl_easy_handler.md: fix language
8bb71d5fd : curl.h: make CURLOPT_WRITEINFO and CURLOPT_CLOSEPOLICY compile
336299494 : build: add options to disable SHA-512/256 hash algo
83bcd335c : test1165: check if `curl_config.h.cmake` lists all `DISABLED` options
ad32fb42f : autotools: settle with option name: `--enable-windows-unicode`
1e58665c2 : configure: break indentation to fix `--help` output
3fc81be44 : cmake: sync `CURL_DISABLE_*` behaviour with autotools
d4240b9bf : cmake: allow disabling `RANDOM_FILE`
04e3621dc : build: add `poll()` detection for cross-builds
415573a76 : RELEASE-NOTES: synced
4cd10ee28 : POP3: fix multi-line responses
bc81292ea : llist: clear the list pointer when a node is removed
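The llist change above is the classic doubly-linked-list hygiene of nulling a removed node's link fields so stale pointers cannot be followed later. A minimal sketch of the idea (illustrative Python, not curl's actual C `Curl_llist` implementation; all names here are hypothetical):

```python
class Node:
    """A node in a doubly-linked list."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class LList:
    """A minimal doubly-linked list with head/tail pointers."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, node):
        node.prev = self.tail
        node.next = None
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def remove(self, node):
        # Unlink the node from its neighbors (or the head/tail).
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        # Clear the removed node's own link fields so nothing can
        # accidentally traverse from it into the list afterwards --
        # the hygiene the commit above refers to.
        node.prev = None
        node.next = None

lst = LList()
a, b, c = Node(1), Node(2), Node(3)
for n in (a, b, c):
    lst.append(n)
lst.remove(b)
assert b.prev is None and b.next is None  # stale links cleared
assert lst.head is a and a.next is c and c.prev is a
```

Without the final two assignments in `remove()`, the detached node would still point into the list, a common source of use-after-remove bugs.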
7143833f1 : cmdline-opts: language fix for expect100-timeout.md and max-time.md
22a6a0bc6 : http3.md: mention how the fallback can be h1 or h2
98395155d : mailmap: Aki Sakurai
23e6391c1 : managen: in man output, remove the leading space from examples
e5f9050b2 : cmake: use host OS to decide about libcurl manpage batch size
c280010d8 : managen: fix superfluous leading blank line in quoted sections
430af3fb5 : dump-ca-embed.md: set as "boolean", not "single"
946c96aa0 : docs/cmdline-opts/_VARIABLES: language polish
3cf45fedc : runtests: remove "has_textaware"
eeb7c1280 : ftp: always offer line end conversions
4becbb4af : test1050: mark as FTP
55672d0aa : test476: test ASCII FTP upload where file already uses CRLF
ee17f35d4 : test475: verify a 72K ASCII FTP upload
cc8b81376 : build: drop unused feature-detection code for Apple `poll()`
7c49279aa : GHA: update github/codeql-action digest to 4dd1613
4abf2b969 : openssl quic: fix memory leak
6354b35df : gnutls: send all data
44d1b6c27 : pytest: add ftp upload ascii test
64ab0ace2 : urldata: remove crlf_conversions counter
1b0568539 : cmake: fix internal variable names in Rustls detection
5b87b4edf : test1013.pl: require case match for features, order match for protos, fix issue
a5682d9cb : GHA/windows: vcpkg GnuTLS started breaking CI, temp drop it
09cdcac8d : RELEASE-NOTES: synced
4c744c3ee : tests/http: add HTTP/2 Upgrade and prior knowledge tests
9280bbea3 : urldata: remove proxy_connect_closed bit
80f9fce56 : cookie: add more debug tracing to set-cookie handling
ea6f5c9f0 : connect: limit update IP info
87f0a7943 : HTTP3.md: cleanup markup and language
0ae3fadb7 : pytest: tweak counts
29610e5f3 : transfer: skip EOS read when download done
1be704e17 : cpool: rename "connection cache/conncache" to "Connection Pools/cpool"
3af75e18d : configure: remove USE_EXPLICIT_LIB_DEPS
d620ec679 : CI: add test timeouts, more cmake build tests, fix VS2010 C warning
2625360b5 : configure: fix WinIDN builds targeting old Windows
09437d8cd : tests: delete `libhostname.so` and `chkhostname`
59b419f1a : CI: add a script and job to run cmakelint
fa37248d0 : mailmap: add Moritz Buhl
8fe1f562b : ngtcp2/osslq: remove NULL pointer dereferences
8dd0cb73a : share: don't reinitialize conncache
c96551cea : cmake: `Libs.private` improvements
2c9331be4 : GHA/macos: drop options no longer necessary
bc72a78a1 : GHA/windows: enable HTTPS tests with stunnel
4a0061697 : GHA/configure-vs-cmake: drop disabling dependency tracking [ci skip]
8132b170d : transfer: remove comments, add asserts
444e34c51 : CONTRIBUTE: polished
89b9fb64a : pop3: use the protocol handler ->write_resp
10873ec5a : GHA/configure-vs-cmake: delete stray backslash [ci skip]
26ab9027f : configure: fix indentation more
aaacd0246 : GHA/configure-vs-cmake: add Windows build, fix issues
7673c1292 : build: check OS-native IDN first, then libidn2
3307b9813 : configure: delete unused `CURL_DEFINE_UNQUOTED` function
dbf5fbd4a : configure: delete unused `HAVE_OPENSSL3` macro
c190338e0 : build: delete unused `REQUIRE_LIB_DEPS`
573e7e827 : lib, src: delete stray `curl_` prefix from printf calls
8b0913808 : cmake: minor tidy-ups
5d4d1c712 : GHA: update CI dependencies
dacd483a7 : README: refresh
6f3522641 : tests: tweak use of impacket in smbserver
4b791dca3 : GHA/macos: ignore flaky tests 2041 and 2037
8aadb8308 : GHA/windows: add Linux -> mingw-w64 cross-build (cmake, autotools)
a2ef5d36b : cmake: sync code between test/example targets
f73f6bf9f : GHA: add yamlcheck
5629bb7cf : CI: consolidate workflows for source and docs check
6429ce8e5 : docs: fix some examples in man pages
4d5d171ac : RELEASE-NOTES: synced
d1394a00e : urlapi: verify URL *decoded* hostname when set
fa461b4ef : GHA/macos: enable HTTPS tests with stunnel
7c0b6eb3b : cmake: respect cflags/libdirs of native pkg-config detections
4f09967a3 : cmake/FindGSS: bring closer to other Find modules
df15d9ff2 : gha labeler: make labeler.yml human-readable
dbc4b7072 : FEATURES.md: fix typo
3b057d4b7 : test1521: verify setting options to NULL better
17dde5396 : setopt: make CURLOPT_TFTP_BLKSIZE accept bad values
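Accepting an out-of-range `CURLOPT_TFTP_BLKSIZE` at setopt time and sanitizing it later typically means clamping into the valid TFTP blocksize range of 8 to 65464 bytes (RFC 2348). A hedged sketch of that clamping pattern (the helper name is hypothetical, not curl's internal API):

```python
# RFC 2348 bounds for the TFTP "blksize" option.
TFTP_BLKSIZE_MIN = 8
TFTP_BLKSIZE_MAX = 65464

def clamp_blksize(requested: int) -> int:
    # Rather than rejecting a bad value when the option is set,
    # pull it into the valid range when it is actually used.
    return max(TFTP_BLKSIZE_MIN, min(requested, TFTP_BLKSIZE_MAX))

assert clamp_blksize(0) == 8          # too small -> minimum
assert clamp_blksize(512) == 512      # in range -> unchanged
assert clamp_blksize(10**6) == 65464  # too large -> maximum
```

This keeps `curl_easy_setopt()` permissive while guaranteeing the transfer never negotiates an illegal block size.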
05609bac9 : setopt: let CURLOPT_ECH set to NULL reset to default
20d447c1a : getinfo: return zero for unsupported options (when disabled)
4be599fe7 : src: replace copy of printf mappings with an include
0c3789461 : cmake: pkg-config 'found' message sync with native CMake
c8c64c882 : GHA: trim markdown headers before proselinting
23749bfd0 : GHA: add a checksrc job
99ba50d9c : misc: general C style cleanups
42843af0b : tidy-up: spelling WebSockets
118f446ad : src: delete `curlx_m*printf()` aliases
0052b4b52 : configure: fix indentation
35034df1c : docs: Clarify OpenSSF Best Practices vs Scorecard
aebd50870 : sectransp: fix setting tls version
a4152864f : tests: constrain http pytest to tests/http directory
aeb1a281c : gtls: fix OCSP stapling management
c730c8549 : build: make `CURL_FORMAT_CURL_OFF_T[U]` work with mingw-w64 <=7.0.0
c04504885 : src: fix potential macro confusion in cmake unity builds
6292a332f : RELEASE-NOTES: synced
b000cdfb2 : CURLOPT_XFERINFOFUNCTION: clarify the callback return codes
972452642 : lib: delete stray undefs for `vsnprintf`, `vsprintf`
b3e1fe6dd : cmake: tidy up option descriptions
a62e3be67 : cmake: honor custom `CMAKE_UNITY_BUILD_BATCH_SIZE`
9fff0742b : GHA/windows: fix indentation in the MSVC section
b0b4b481b : setopt: allow CURLOPT_INTERFACE to be set to NULL
3065f106e : build: add `iphlpapi` lib for libssh on Windows
576b39b6d : cmake: drop libssh CONFIG-style detection
778391334 : unit1300: fix checksrc longline warnings
c8d71e598 : http2: fix GOAWAY message sent to server
eb5c3f370 : buildconf.bat: fix tool_hugehelp.c generation
81a086146 : cmake: fixup linking libgsasl when detected via CMake-native
fc8575ed4 : tidy-up: spelling wolfSSL [ci skip]
7ca719dee : mbedtls: fix incorrect macro condition mbed_dump_cert_info
69b50017a : docs/SSLCERTS: rewrite
8e9056f8b : GHA/macos: enable brotli and zstd in autotools and cmake jobs
2e88ef104 : version: fix shadowing a `libssh.h` symbol
ac207bf56 : ssh: deduplicate SSH backend includes (and fix libssh cmake unity build)
440d00d17 : tidy-up: spelling 'built-in'
e83c83807 : build: improve compiler version detection portability
ae2c753a8 : GHA/windows: add missing time limit for msys2 autotools test runs
0cbfce802 : tests: add test_17_09_ssl_min_max
3ca38f9a5 : tests: improve test_17_07_ssl_ciphers
925aea1ab : mbedtls: no longer use MBEDTLS_SSL_VERIFY_OPTIONAL
e8bfa9639 : GHA: update github/codeql-action digest to 883d858
422696f0a : cmake: migrate dependency detections to Find modules
cd683f907 : cmake: add `find_package()` missing from `USE_MSH3` option
d8cefac24 : cf-socket: prevent KEEPALIVE_FACTOR being set to 1000 for Windows
26e9d3a89 : curl: find curlrc in XDG_CONFIG_HOME without leading dot
96b9027f1 : GHA/windows: unblock TFTP MQTT WebSockets SMTP FTP tests
c555ab469 : cmake: limit `pkg-config` to UNIX and MSVC+vcpkg by default
211cbcb4f : cmake: rename Find modules
3a2e47afb : cmake: fix Find module and package names
c5cb8e7c7 : tidy-up: spelling quiche and Rustls
0fb4e5926 : tidy-up: adjust casing of project names (continued)
a5598b6fc : pingpong: drain the input buffer when reading responses
ca8823510 : KNOWN_BUGS: Heimdal memory leaks
145f87b9e : build: use -Wno-format-overflow
c2e814f8d : cmake/FindNettle: log message when found via `pkg-config`
9fc2d7b8d : cmake: adjust GSSAPI option description
12399737c : CI/azure: disable parallel tests, allow IDN tests
47849be5d : cmake/FindNettle: skip `pkg-config` for custom configs
5b2a659ea : mbedtls: fix setting tls version
ff94698d3 : wolfssl: fix setting tls version
38fa458e5 : rustls: fix setting tls version
7a7c7a899 : bearssl: fix setting tls version
73f62acaa : RELEASE-NOTES: synced
dcf5a5383 : cmake: fix `cmakelint` warnings
3e60f174e : cmake: tidy up more in Find modules
c57d3aeb5 : appveyor: drop uploading artifacts
1d2924653 : cmake: tidy up around ngtcp2 and wolfSSL
24889acbf : cmake: do not unset the deprecated mixed-case variables
9fbda4ca7 : cmake: rename wolfSSL and zstd config variables to uppercase
47a486471 : location: fix typo
5fcf96930 : docs: add description of effect of --location-trusted on cookie
88727f7ed : docs: improve cipher options documentation
b2488afb1 : GHA: update github/codeql-action digest to 429e197
6fc66e167 : SECURITY: mention OpenSSF best practices gold badge
88cae1455 : mbedtls: add more informative logging
2c4d04c4d : GHA: update dependency gnutls/gnutls to v3.8.7
a58b50fca : transfer: Curl_sendrecv() and event related improvements
432f2fd9a : cmake: sync up version detection in Find modules
d8de4806e : cmake: tidy-up continues
f3a03df6a : cmake: revert to `pkg_check_modules()`
4beb23647 : cmake: fixup variable reference in FindZstd
f9f2eaaec : internals/SPLAY.md: internal API documentation
8f562f744 : curl: make the progress bar detect terminal width changes
4e2f3641f : cmake: add missing version detection to Find modules
2bea3892e : GHA/windows: delete redundant options, tidy up
453d032b2 : tidy-up: misc build, tests, `lib/macos.c`
471b11a9f : RELEASE-NOTES: synced
0e06603b2 : docs: remove ALTSVC.md, HSTS.md, HTTP2.md and PARALLEL-TRANSFERS.md
1e03d4bc0 : rustls: add support for setting TLS version and ciphers
0d8fdd1c7 : cmake: add wolfSSH support
3ff147f8b : cmake: TLS 1.3 warning only for bearssl and sectransp
dcb51bafa : splay: use access functions, add asserts, use Curl_timediff
6c5a7af75 : scorecard: tweak request measurements
20aa8d8f3 : docs/internals: new subdirectory
393296479 : test1707: output diff more for debugging differences in CI outputs
0066d169e : managen: wordwrap long example lines in ASCII output
178e8ba2d : cmake: fix find rustls
160f02335 : multi: on socket callback error, remove socket hash entry nonetheless
ef1d606d1 : libcurl.pc: add reference to `libgsasl`
b042d5297 : tidy-up: misc spelling (bit, ASCII)
551baf7d6 : tests: move the disabling of 500 for hyper from CI to DISABLED
560320444 : curl: fix the -w urle.* variables
2401ee68a : cmake: show warning if libpsl is not found
c90a3f16b : mime: avoid infinite loop in client reader
9f23c8f20 : cmake: fix and tidy up c-ares builds, enable in more CI jobs
304a349e8 : GHA/configure-vs-cmake: add macOS build, fix issues
278480197 : cmake: add missing `pkg-config` hints to Find modules
dd3b3eca5 : RELEASE-NOTES: synced
136504195 : getinfo: add CURLINFO_POSTTRANSFER_TIME_T
c0233a35d : hash: provide asserts to verify API use
41a01033b : GHA/windows: enable HTTP/3 in wolfSSL MSVC job
fd662fb3f : GHA/windows: add GnuTLS to MSVC jobs
ed76a23fc : cmake: add rustls
db39c668a : cmake: sync up result variable names in Find modules
65f5caee0 : cmake: tidy up Find modules
5abfe451b : cmake: update list of "advanced" variables
1c42ea406 : smtp: add tracing feature
8058bbae5 : TODO: mqtt and gopher test fails on network blocks
9222f3120 : test649: improve robustness
e434cdb83 : test587: improve robustness
68dad8c4e : test httpd, tweak cipher list
623b87750 : gnutls/wolfssl: improve error message when certificate fails
6905b1f86 : hyper: call Curl_req_set_upload_done()
22d292b3e : urldata: introduce `data->mid`, a unique identifier inside a multi
ad6320b8a : tool_paramhlp: bump maximum post data size in memory to 16GB
8ae7049f4 : cmake: sync up formatting in Find modules
1a444e315 : runtests: log ignored but passed tests
d76389d82 : GHA/macos: disable AppleIDN for autotools in combinations jobs
0c4f05c6e : tests: don't mangle output if hostname or type unknown
af73743f8 : curl_sha512_256: fix symbol collisions with nettle library
624b20c63 : lib: prefer `CURL_SHA256_DIGEST_LENGTH` over the unprefixed name
d7e1a2dd7 : lib: avoid macro collisions between wolfSSL and GnuTLS headers
5a45e0c56 : cmake: update `curl-config.cmake.in` template var list [ci skip]
4111d1080 : lib: fix building with wolfSSL without DES support
28c12bc9b : sha256: fix symbol collision between nettle (GnuTLS) and OpenSSL codepath
71d3ab581 : vtls: fix static function name collisions between TLS backends
457427e03 : build: silence C4232 MSVC warnings in vcpkg ngtcp2 builds
b910122fe : cmake: add `CURL_USE_PKGCONFIG` option
fdc3e88bf : IDN: fix/extend/migrate test exclusion rules
77d722a05 : docs: update CIPHERS.md
eb6d6fce0 : GHA: bump deps: upload-artifact, codeql and spellcheck
cb17c069a : http2+h3 filters: fix ctx init
2cc56eb75 : GHA/macos: drop gcc-11
902d9a1d4 : wolfssl: fix CURLOPT_SSLVERSION
3e64569a9 : websocket: introduce blocking sends
0a5ea09a9 : spnego_gssapi: implement TLS channel bindings for openssl
9dfdc6ff4 : cmake: allow `pkg-config` in more envs
d222dbe78 : build: tidy up internal macro names for `libcurl.pc`
f3b14e1b0 : tidy-up: delete `Makefile.inc` from `EXTRA_DIST`
d2360b07f : RELEASE-NOTES: synced
ba235ab26 : llist: remove direct struct accesses, use only functions
6f00a05e8 : libcurl/docs: expand on redirect following and secrets to other hosts
f0a551814 : urldata: remove 'scratch' from the UrlState struct
4e51437de : docs/cmdline: refer to --show-headers instead of --include
f4376b5c7 : DEPRECATE.md: remove hyper after February 2025
b1fac8ed3 : cookie.md: try to articulate the two different uses this option has
552d32886 : TODO: remove 4.2 Alter passive/active on failure and retry
2c15ee4bd : multi: make the "general" list of easy handles a Curl_llist
9e4a2187e : autotools: add `--with-windows-unicode` option
2edbc229c : dist: add CI job to detect files missing from distro
515440a2f : cmake: limit libidn2 `pkg-config` detection to `UNIX`
aa3a31ce2 : cmake: exclude tests/http/clients builds by default
e48d8821a : Replace nonportable grep -o with awk
9cb7f08ef : lib: fix AIX build issues
a298df7f4 : cmake: more small tidy-ups
b828149b1 : tidy-up: delete unused `m4/xc-translit.m4`
beb871180 : dist: add missing `lib/optiontable.pl`
62b13ecfe : configure: fixup copy-paste mistake
4f05f6b3c : RELEASE-NOTES: synced
f27ba323a : test677: improve robustness
640febc7d : test579: improve robustness
ac6349b45 : test556: improve robustness
32f9130ae : mk-ca-bundle.pl: include a link to the caextract webpage
9fa0cf9c5 : HISTORY: fill in some events from recent years
a0ea955f8 : ftp: flush pingpong before response
badbd4eb4 : manpage: ensure a maximum width for the text version
919394ee6 : cmake: more small tidy-ups and fixes
d3f6b2ffa : krb5: add Linux/macOS CI tests, fix cmake GSS detection
e042073f9 : cmake: detect and show VCPKG in platform flags
daf9fdc4e : GHA/non-native: ignore FTP results in OpenBSD job
2d6fb0f58 : cmake: tidy up more value comparisons
a3155db43 : cmake: fix version variable references in FindGSS
c2889a7b4 : cmake: more syntax tidy-up
63e9e0679 : wolfssl: avoid taking cached x509 store ref if sslctx already using it
3ac1569c1 : tracing: allow CURL_DEBUG override
0bc5b2e37 : http/2: simplify eos/blocked handling
1e9c1e8f2 : curl: fix --proxy-pinnedpubkey
cf7a080c3 : verbose.md: polish, mostly remove back-ticks
d41916c43 : max-filesize.md: mention zero disables the limit
146759716 : cmake: fix `pkg-config`-based detection in `FindGSS.cmake`
2154f7c5f : krb5: fix `-Wcast-align`
cd51bb503 : cmake: add debug function to dump all variables
e20413980 : GHA/macos: tweak toolchain dump steps
588a6e334 : idn: more strictly check AppleIDN errors
a35687831 : idn: support non-UTF-8 input under AppleIDN
07843d816 : BINDINGS: add zig binding
493c6d79e : cmake: delete MSVC warning suppression for tests/server
b1153820f : dist: add missing `test_*.py` scripts
5f9426ec4 : tests: show snapshot commit in testcurl
0011df47b : ws: flags to opcodes should ignore CURLWS_CONT flag
b102763c1 : curl: fix --test-event --parallel
1b2544876 : curl: warn on unsupported SSL options
5c2ab55ab : vtls: add SSLSUPP_CIPHER_LIST
cd4aee156 : tests: ignore QUIT from FTP protocol comparisons
b3490c5bc : RELEASE-NOTES: synced
06c5829da : curl: support repeated use of the verbose option; -vv etc
53146dd26 : tool_help: handle longer lines, exit on too long
48818a41a : tests/runner: only allow [!A-Za-z0-9_-] in %if feature names
b0394b153 : runtests: if DISABLED cannot be read, error out
c6fb9895b : cmake: cleanup header paths
ada8ebe18 : GHA/macos: enable AppleIDN in autotools job
7b1c0ab75 : Makefile.mk: fixup enabling libidn2
ea3dfcb36 : cmake: drop unused `HAVE_IDNA_STRERROR`
6712bd600 : cmake: show CMake platform/compiler flags
dcc520952 : GHA: run badwords check on tests/*.md too
91fcbc5d1 : dist: drop buildconf
8577f4ca0 : cmake: add math library when using wolfssl and ngtcp2
bfa939d06 : docs: mention "@-" in more places
67d5e3624 : cmake: replace an `MSVC_VERSION` with `MSVC`
72ae0d86a : cmake: use numeric comparison for `HAVE_WIN32_WINNT`
8de8fe8c9 : configure: detect AppleIDN
232302f88 : cmake: add Linux CI job, fix pytest with cmake
f7d5f4705 : cmake: add support for `CURL_USE_LIBUV` option
e64e62cc7 : GHA/windows: bump msys2/setup-msys2 from 2.24.0 to 2.24.1
cf3e3d93d : aws_sigv4: fix canon order for headers with same prefix
f3e07e5c5 : docs: wolfssl and mbedtls add CURLOPT_TLS13_CIPHERS support
4c1289241 : wolfssl: add CURLOPT_TLS13_CIPHERS support
a18680f50 : VULN-DISCLOSURE-POLICY.md: small typo fix
82bbb386a : cmake: fix `GSS_VERSION` for Heimdal found via pkg-config
3f7dc8a40 : mbedtls: add CURLOPT_TLS13_CIPHERS support
d266d19d8 : ngtcp2: use NGHTTP3 prefix instead of NGTCP2 for errors in h3 callbacks
b9d465c89 : tool_help: fix a NULL deref in the --help option code
0238a9b0d : KNOWN_BUGS: "special characters" in URL works with aws-sigv4
38d334e3e : curl: use libuv for parallel transfers with --test-event
7c31ceb5d : RELEASE-NOTES: synced
35bf76628 : http2: improved upload eos handling
344ba8c88 : wolfssl: improve shutdown handling
4494005b5 : openssl: improve shutdown handling
6f1921066 : bearssl: improve shutdown handling
ed2850456 : configure: fail if PSL is not disabled but not found
7d45b5216 : KNOWN_BUGS: mention AppleIDN and WinIDN test problems
781c14c4e : tool_operhlp: fix "potentially uninitialized local variable 'pc' used"
3eec5afbd : sigpipe: init the struct so that first apply ignores
8d9811802 : wolfssl: add support for ssl cert blob / ssl key blob options
7b1444979 : cmake: add support for versioned symbols option
573aaec3b : easy: fix curl_easy_upkeep for shared connection caches
b7e769dc8 : vtls: stop offering alpn http/1.1 for http2-prior-knowledge
732cb15b9 : curl: add --skip-existing
eec908bb6 : revert "tests/http: configure test httpd to honor client cipher order"
8a9567899 : GHA/windows: add mbedTLS MSVC job
f81f351b9 : tidy-up: OS names
a4ad7dc5a : dist: add missing `docs/examples/CMakeLists.txt`
1159dc359 : RELEASE-NOTES: synced
0a94578a9 : maketgz: accept option to include latest commit hash
9a0cf5647 : curl: --help [option] displays documentation for given cmdline option
9b1e4b463 : tool_operate: support --dump-header % to direct to stderr
e26eefd9c : tool_operate: for -O, use "default" as filename when the URL has none
cb829f994 : doh-url.md: point out DOH server IP pinning
4f198c852 : tests: fixup `tests/data/Makefile.am` references
1556951c4 : GHA/non-native: ignore FreeBSD FTP test results
93d1af401 : pytests: add tests for HEAD requests in all HTTP versions
acbc6b703 : cmake: tidy-ups
b64d9d7d8 : RELEASE-NOTES: synced
272233e48 : docs/cmdline-opts: update see-also to use show-headers
b80798c24 : getparam: make --rate accept "number of units"
2d8464c4c : GHA/windows: move Cygwin into its own workflow
82c53f821 : tool_getparam: make --show-headers the same as --include
709a6a396 : cfilters: send flush
911c3166b : lib: add eos flag to send methods
0472afe5f : vtls: init ssl peer only once
5a9262a33 : url: dns_entry related improvements
2372a5915 : Curl_rand_bytes to control env override
0324d557e : CI: enable parallel testing in CI builds
fadb2ee6e : CI: realign cmake build settings (for nghttp2, libidn2)
8a3740bc8 : curl: support embedding a CA bundle
87aa4ebd8 : cmake: detect `nghttp2` via `pkg-config`, enable by default
f518c73a8 : cmake: drop unused internal variable
bb9c15e97 : vtls: fix MSVC 'cast truncates constant value' warning
170c28805 : ci: Update actions/upload-artifact digest to 89ef406
b6089c35d : cmake: drop reference to undefined variable
f5b826532 : cmake: drop no-op `tests/data/CMakeLists.txt`
f87c3363e : cmake: drop custom `CMakeOutput.log`/`CMakeError.log` logs
39b9ccea8 : x509asn1: raise size limit for x509 certification information
d2abf8ded : GHA/distcheck: add a reproducible release check
86039e6e4 : verify-release: shell script that verifies a release tarball
fab526c03 : Makefile: remove 'scripts' duplicate from DIST_SUBDIRS
d0afb3395 : dmaketgz: only run 'make distclean' if Makefile exists
4d34fd26d : autotools: fix typo in tests/data target
c6cf411ba : GHA/non-native: reduce FreeBSD test parallelism to -j8 [ci skip]
45246ebca : tests: gitignore newly generated files
ba44ac62e : progress: ratelimit/progress tweaks
eb0a366b7 : http2: improve rate limiting of downloads
4abf97b0a : GHA: update awslabs/aws-lc to v1.33.0
f6cb3c630 : tests/http: configure test httpd to honor client cipher order
754acd1a9 : dist: fix reproducible build from release tarball
c73b80a3c : cmake: add gnutls to multissl feature
1f61db590 : curl: allow 500MB data URL encode strings
9bfc7f923 : escape: allow curl_easy_escape to generate 3*input length output
8a9c22796 : CHANGES: rename to CHANGES.md, no longer generated
12774f450 : RELEASE-NOTES: synced
e3240db0a : GHA: scan git repository and detect unvetted binary files
c3fe2dd25 : GHA/windows: drop FTP tests
a79dc7b60 : GHA/windows: remove vcpkg bin path in MSVC jobs
0d1252872 : GHA/windows: timeout earlier with hung tests
65ece771f : INSTALL.md: MultiSSL and QUIC are mutually exclusive
02e0151a3 : lib: convert some debugf()s into traces
a118a6ecd : cmake: distcheck for files in CMake subdir
404679d25 : libcurl.pc: add `Cflags.private`
58946eed2 : dist: add missing `FindNettle.cmake`
8f89218b1 : tests: provide docs as curldown, not nroff
a9f63b8e0 : RELEASE-NOTES: synced
dd95a49d4 : rustls: make all tests pass
ec41cfb80 : GHA/windows: enable MultiSSL in an MSVC job

+- Project: platform/external/deqp

6c64e2a20 : Bypass dEQP tests and report nothing for the dEQP dependencies collection runs.
dcb0f4565 : Bypass dEQP tests and report nothing for the dEQP dependencies collection runs.
b7fc3d4a8 : Bypass dEQP tests and report nothing for the dEQP dependencies collection runs.
86040d470 : Bypass dEQP tests and report nothing for the dEQP dependencies collection runs.
6cdcf8efc : [RESTRICT AUTOMERGE] Move one more test case to 2023 testlist
d2434db4c : [RESTRICT AUTOMERGE] Move one test case to 2023 test list
7f28009f9 : Fix image layout transitions in multisample_resolve tests
8d04dcd1e : Fix transitioning image layouts in dynamic_state inheritance tests
d24b530d2 : Fix mandatory feature checks
b3119f3eb : Limit max sub group size for vktSubgroupsBallotMasksTests to 64
01a64e099 : Move some test cases to 2024 testlist
8a804aad3 : Move some test cases to 2024 testlist
e554a4a0c : Update incremental-deqp testlist for internal main
f69a46836 : Add watchdog touch to dEQP-VK.texture.explicit_lod test
50a01aef2 : Bypass dEQP tests and report nothing for the incremental dEQP trusted build run.
fb61e5fb0 : Add missing pipeline barrier after dispatch in expect/assume tests
94862517c : Add missing pipeline barrier after dispatch in zero_initialize_workgroup_memory tests
ec98146b2 : Add missing pipeline barrier after dispatch in cooperative matrix tests
7b8556050 : Add missing pipeline barrier after dispatch in push descriptor tests
9596362f0 : Add missing pipeline barrier after dispatch in robustness tests
560aad19e : Fix wrong bufferMemoryBarrierCount in pipeline barrier for some buffer_view tests
bc77c5afa : Add missing pipeline barrier after dispatch in descriptor limits tests
1094f71cb : Fix sparse tests to handle planar DS aspects
ec181873c : Enable missing extension in swapchain simulate oom tests
6ff5d119c : Exclude large pipeline barrier index buffer test
c83ec3e3a : [RESTRICT AUTOMERGE] Exclude large pipeline barrier index buffer test
896adfd8c : Remove dependencies on the 1-variant fallback
10b582e58 : Fix false/NULL confusion.
2eb25bc07 : Add check result for SetForeground.
dbd73c4b7 : Remove dependencies on the 1-variant fallback
8e8e6a6de : Add missing barrier in swapchain maintenance tests
4ef821fd6 : Bypass dEQP tests and report nothing for the incremental dEQP trusted build run.
9ce51f65b : Add proposed trendy teams for VTS modules
a3f392b97 : Set missing blend equation in extended_dynamic_state tests
d764869fe : Fix checking mesh shader support in shader object tests
c6e32594d : Fix the scratch size in acceleration structures update tests
8db845887 : Fix dstStageMask in sync none_set_reset tests
f7db16a63 : Fix synchronization issue in OpacityMicromapTests
a50b44de5 : Check R64 format qualifier for storage images in robustness tests.
e28deb745 : Fix gles3_android_native_yuv420_yuv_texture test
47b9ffbe9 : Fix egl_fence_persistent_buffer timeout
a66852d84 : Adjust deqp to support testing 1.4 instance implementations
92329edb9 : Loads the incremental dEQP tests only when the incremental dEQP check in IncrementalDeqpPreparer is done.
2d60fe8a3 : Move some tests from CTS 2019 to 2024.
2126211e3 : Loads the incremental dEQP tests only when the incremental dEQP check in IncrementalDeqpPreparer is done.
a8bb939ff : Add feature checks to conditional rendering tests
ed71c2953 : [RESTRICT AUTOMERGE] Move some tests from CTS 2019 to 2024.
18201b135 : Fix pipeline_robustness out_of_bounds_stride test
67f5dddac : Fix VK_PIPELINE_CREATE_VIEW_INDEX_FROM_DEVICE_INDEX_BIT tests
54e55def4 : Fix custom device used in nonsequential tests
923004aee : Fix using wrong device in bind_buffers_2 tests
8e773f500 : Enable vertexPipelineStoresAndAtomic in pipeline state tests
b6b26ac37 : Add R64 data format qualifier check in robustness tests
eb4cd9dec : Fix transform feedback issue
9721ad0f6 : Freeze 2024 testlists
c5ed87afc : Update vk-incremental-deqp-baseline testlist
490247052 : Move some copy_and_blit tests to the 2024 testlist
894d803f9 : [RESTRICT AUTOMERGE] Fix for render_to_image exceeding maxResourceSize
6129a7e27 : Allow starting helper invocations in maximal reconvergence tests
0a62a2302 : Remove extended_usage_bit_compatibility with image_format_list tests
aca834d3a : Fix setting scissor in shader object scissor tests
a6b525444 : Fix errors in unavailable_entry_points test
c615e22f5 : Build fixes for Visual Studio (2022)
609c09b31 : Allow running the CTS with unknown versions of Vulkan in the driver
a6706eb33 : Fix missing memory barriers in a deqp-vk test.
5fecb9a53 : Remove redundant pipelines from Image Atomic tests
873db64b2 : Add precise occlusion query feature check
f3550ed2b : Add mesh and task shader to s_resultItemMap for result parsing

+- Project: platform/external/dng_sdk

160512e : Catch null HuffmanTables when decoding jpeg
55bcf62 : Edit OWNERS

+- Project: platform/external/dnsmasq

8187e9c : Pin to C17.

+- Project: platform/external/doclava

316005d : Use ChangeId name instead of description in doclava

+- Project: platform/external/double-conversion

3bfbd19 : Allow more packages to depend on libdoubleconversion

+- Project: platform/external/drm_hwcomposer

8cfa7ff : drm_hwcomposer: Fix incorrect modeset buffer size
53da371 : drm_hwcomposer: Clear staged_mode_config_id_ after SetConfig
613a944 : drm_hwcomposer: Add property for config groups
e39af97 : ANDROID: add getLuts() aidl interface
bb9f3ea : Cherry-pick: Update drm hwcomposer to composer3 V4
2852f7d : drm_hwcomposer: Fix build_deploy script and switch it to hwc3
445ef20 : drm_hwcomposer: hwc3: Fix build with Android-13
012271d : Sync with new API to start HDCP
16933c3 : drm_hwcomposer: Check "Present Not Reliable" property
97b5abc : drm_hwcomposer: Add blocking SetConfig
897a709 : drm_hwcomposer: Use ALOG_IF to simplify logging
bcc587d : drm_hwcomposer: Add flag for blocking commits
fe70c80 : drm_hwcomposer: Simplify state tracking for staged configs
9a2e2f6 : drm_hwcomposer: Fix SetDisplayBrightness failure
e1d49f1 : ANDROID: Add unsupported stubs for IComposerClient getMaxLayerPictureProfiles
9c41b68 : drm_hwcomposer: Fix build failure on Linux-native targets
556f56b : drm_hwcomposer: Convert UNKNOWN sample range and dataspace
8998f8b : drm_hwcomposer: Deprecate usage of HWC2 setActiveConfig*
85be25d : drm_hwcomposer: Deprecate usage of HWC2 GetActiveConfig
9799ab8 : drm_hwcomposer: Deprecate usage of HWC2 GetDisplayVsyncPeriod
210c2ce : Revert "drm_hwcomposer: Workaround for screen de-activating causing db845c regression"
e11f723 : drm_hwcomposer: Query property when checking virtual displays availability
b6eb4b8 : drm_hwcomposer: Set std=c++17 in Android.bp
1f41ac7 : drm_hwcomposer: Fix DrmConnector log build failure
f4563dc : drm_hwcomposer: ci: Allow "Revert" as a valid commit subject prefix
2c36651 : drm_hwcomposer: Fix DrmProperty log build failure
634c89c : drm_hwcomposer: Remove unnecessary warning
a2249a5 : ANDROID: [DRM] Use Luts instead due to Lut AIDL change.
a611a54 : drm_hwcomposer: Add Drew to OWNERS
a2f3efa : drm_hwcomposer: Implement getDisplayPhysicalOrientation()
d36bbb8 : drm_hwcomposer: meson: Use link_whole for linking common code with HWC3
8891e91 : drm_hwcomposer: Remove unimplemented display brightness code
e5fbbbb : drm_hwcomposer: Stop using HWC2 for setting layer buffer
71a8f02 : drm_hwcomposer: Support multiple AIDL versions
6e5c82e : drm_hwcomposer: CI: Properly upgrade drm_hwc aospless file
57ba08a : drm_hwcomposer: CI: Upgrade Ubuntu and clang version
9472795 : Revert "drm_hwcomposer: CI: publish docker image to local container registry"
0fea188 : Revert "drm_hwcomposer: CI: use local container image for building"
fb5a9c3 : Revert "drm_hwcomposer: CI: Only run build-docker when necessary"
df22097 : Revert "drm_hwcomposer: CI: Use upstream container image conditionally"
3f0c01a : drm_hwcomposer: Plumb link status BAD to OnHotplugEvent composer callback
3217017 : drm_hwcomposer: CI: Use upstream container image conditionally
921c1cd : drm_hwcomposer: Correct CTM conversion
a37df7c : drm_hwcomposer: Correct spelling of matrix
5294f09 : drm_hwcomposer: Add getter/setter for Colorspace
9c6f7e5 : drm_hwcomposer: Only validate layer brightness
5d679b0 : drm_hwcomposer: Stop using HWC2 for layer z order
51c61e4 : drm_hwcomposer: Stop using HWC2 hooks for layer transform
7ab8c18 : drm_hwcomposer: Stop using HWC2 hooks for source crop
07b96f0 : drm_hwcomposer: Stop using HWC2 hooks for layer plane alpha
22d66b4 : drm_hwcomposer: Stop using HWC2 hooks for displayFrame
1b0d8b7 : drm_hwcomposer: Stop using HWC2 hooks for CompositionType
1afb579 : drm_hwcomposer: Stop using HWC2 hooks for DisplayConfigs
ac9681e : drm_hwcomposer: Don't use hwc2 interface for dataspace
a241a77 : drm_hwcomposer: Clean up SetLayerBlendMode for hwc3
8d38dc7 : drm_hwcomposer: Drop content type for hwc3
f39b015 : drm_hwcomposer: clang-format: Preserve header blocks
173247b : drm_hwcomposer: Add getter/setter for content_type
254b33c : Add .gitignore
16fb945 : ANDROID: Skip pre-push hook if it doesn't exist.
7aff49b : drm_hwcomposer: CI: Use upstream container image conditionally
8a167fc : ANDROID: drm_hwcomposer: Add pre-push hook
3dc274e : ANDROID: Update drm hwcomposer to composer3 V4
8053f2e : drm_hwcomposer: Calculate vsync period consistently
3f4469f : drm_hwcomposer: Set correct size for changed composition types
d827293 : drm_hwcomposer: CI: Only run build-docker when necessary
1f26384 : Support drm hwcomposer as an apex

+- Project: platform/external/dtc

7663711 : Work around failed link with HWASan
521b25d : Add dirgroup for trusty genrule
3c5eb97 : ANDROID: libfdt_baremetal: Use cc_baremetal_defaults
21b1890 : ANDROID: Android.bp: Introduce libfdt_baremetal

+- Project: platform/external/e2fsprogs

f54458bc : Define mke2fs.recovery
709a2f5d : Define e2fsdroid.recovery

+- Project: platform/external/edid-decode

d094f79 : edid-decode: Make available to vendor

+- Project: platform/external/elfutils

85a3fc48 : Make libdw.so available
c02dfd72 : Re-enable `libdwfl` functionality
b848ffee : Run bpfmt

+- Project: platform/external/erofs-utils

8496ea5 : Define *.erofs.recovery modules
1f383d5 : ANDROID: erofs-utils: Add dhavale to owners
6d909df : ANDROID: erofs-utils: enable HAVE_UTIMENSAT so we can set utimes correctly

+- Project: platform/external/error_prone

fcc83be : Update errorprone to 2.36.0
191be62 : Remove errorprone's copy of javac
32845c4 : Disable PatternMatchingInstanceof error prone check
184ae5e : Update errorprone to 2.32.0
c5f6763 : Revert^2 "Promote IgnoredPureGetter and ReturnValueIgnored to errors"
f2c826d : Revert "Promote IgnoredPureGetter and ReturnValueIgnored to errors"
d30b1c0 : Promote IgnoredPureGetter and ReturnValueIgnored to errors

+- Project: platform/external/escapevelocity

d1f2040 : Add support for android to escapevelocity module

+- Project: platform/external/executorch

91afa6e92 : Fix type-safety of `torch.nn.Module` instances
96a9d35f7 : Qualcomm AI Engine Direct - Optimize memory usage at runtime (#7003)
2c65b977d : Take advantage of C++17 in scalar_type_util.h (#7022)
8d71cd371 : [llama-mm] Make text decoder exportable (#6999)
2adb1bca4 : [Android] Remove runtime internal evalue list types (#7012)
4e90293bf : Update CONTRIBUTING.md to reflect "Squash and merge" option (#5117)
6ac19cc2d : Java Tensor and EValue serialization (#6620)
7375cf5ac : Fix _native_batch_norm_legit_no_stats_out
0070680ce : add Qualcomm SA8295 support (#6986)
3a358899b : Minor perf improvements to quantized linear 8 bit shader.
ce77995fe : Updating add operator with name space changes (#6433)
270d0558c : Arm backend: Make run.sh handle portable ops using aten::<OP>.<modifier>out
08ed54e79 : Add pytest option to run FVPs in fast mode
43555d21c : Add mobilenet_v2 and resnet50 to the examples
50b4ac3b1 : Add buck build for static llama runner
aa8d9049a : Use getops to pass flags and arguments for test_llama.sh
a39ea29ba : add remove ops to oss and callsites, [cadence][8/X] add reorder ops to oss and callsites, [cadence][9/X] add replace ops to oss and callsites, [cadence][10/X] merge passes with replace remove ops passes and update default pass order...
792ef43fc : [ET-VK] Move `save_cache` from Runtime dtor to model destroy (#7002)
0dc25e5c0 : [ET-VK] Only save_cache the first time (#7001)
6b0bb56ed : [ET-VK] Remove unneeded `Runtime` move (#7000)
e721945c1 : Clear out Dynamo cache in between tests
b132c96ec : Enable bool type for _local_scalar_dense (#6994)
a93e716f3 : Add exported program runner (#6969)
711f1c23d : Simplify Xcode configs.
bafae87bf : Use GitHub mirrors of mlplatform submodules to unbreak ExecuTorch CI (#6991)
359e9d3a0 : Fix custom annotation (#6978)
84222a9ef : Update apple-runtime.md (#6985)
dcacde01d : Update docs on linkage and debugging.
709e7395b : Qualcomm AI Engine Direct - Optimization in static llama
af8728302 : add simply ops to oss, update fuse simply callsites
f40daea76 : Fix ARM submodule URLs (#6973)
54feeef5f : add uint16 to serialization
1de96f8a2 : Move the rest qnn model jobs to the new qnn sdk docker
809a1a5f1 : Fix pyre
34d4398dd : split to_cadence_edge_executorch API to to_cadence and to_executorch_gen_etrecord
8470cb964 : import test graph builder to oss, import im2row
97cb8923d : add graph builder in oss for fuse ops
fbc384ca1 : Lower confidence threshold in tests to avoid flakiness.
f477fd559 : Label checker fix (#6953)
eae0b0417 : add fuse ops passes in oss
86cb5d79e : Fix error message.
04f6fcd4b : Fix ReLU fusion when conv/linear has > 1 user
8cce45bde : Add PR release note label checker bot (#6835)
85fca598f : Add documentation (#6883)
a0ac8203b : update OSS pre-existing passes and migrate callsites to OSS
7bfe3b9af : Fix Cuda out of memory issue for eager runner (#6935)
801382273 : Strip local file paths from builds.
727851945 : update sccache to latest version (#6837)
f145cede9 : Port memory planning to Cadence
88df185f1 : Fix erroneous `extension_features` initialization
4086509b5 : Dev weight sharing
76a0e0fc3 : Introduce static targets
ee74d0613 : remove codegen-nosub from q_8w
5b4d9bbf4 : [Executorch] optimized sigmoid
c242a59e5 : [Executorch][Portable] Dont upcast to double for sigmoid
3b475e334 : [llama-mm] Fix AOTI test for attention (#6915)
4f9ae32a8 : [llama-mm] Reduce copies in SDPA in MHA (#6917)
3784f06e5 : Add MobileNetV2Evaluator to Arm Backend
8526d0a2d : Arm backend: Set default allocation pool to 20MB
e95f17131 : Qualcomm AI Engine Direct - Quantizer refine for qat
19268dee3 : Introduce data schema to store raw tensors (#6919)
793a988d4 : Fix arm dependency
d5f91bf4c : Temporarily disable OOMing llama vision decoder test (#6908)
0692ef4e9 : Pyre fix
ca4496505 : Introduce no_volk buck targets
ae611007c : Remove assert for ArgSchema
3813f42b6 : Only use `LINEAR` tiling if it's available
fe5caad1c : Move linker flags to config files for clarity.
5663e3cc5 : [MTK] Add support for Llama 3.2 and code updates to align with current ET API for dynamic dim
c734ad4f7 : Change the branch to download frameworks from.
e32d1b75f : Simplify .xcoconfig.
f96be5bbe : Add UInt16 support to Cadence kernels
087792696 : Add UInt16 support to Q and DQ
92ee5228e : merge internal op registration with oss
b4ab76f94 : Remove template file after renaming and amending. (#6885)
54899feb1 : Adding tests old lowering flow for op_abs.py
041b7d64f : Refactor TOSAOperatorSupport to allow for mapping against TosaSpecification
4a18003db : Add TosaSpecification to ArmPartitioner
04b3d9211 : [llama-mm] Add torch.cond to replace if condition in MHA (#6869)
7b76f0f32 : Make TorchTune Llama model KV cache compatible in eager (#6643)
e384c1abf : Fix Xcode projects packages pins.
b38632223 : Update Xcode projects to use nightly builds. (#6873)
b0f6048c1 : Fix test_model.sh for vision text decoder (#6874)
ad158526f : fix lintrunner
a4e241462 : Update Hugging Face version (#6489)
cfac61056 : Force-push nightly build branch. (#6879)
937af6486 : expand dtype conversion support in aten_bridge
eccc633fa : Add back AWS upload step. (#6871)
d7826c819 : Automate SwiftPM updates. (#6868)
6c944db40 : Runner changes for TorchTune Llama3.2 vision text decoder (#6610)
27f31cdae : Export TorchTune llama3_2_vision in ET (#5911)
d9d485929 : Add shim/README.md to explain what the shim/ tree is for (#6860)
fd2c323b4 : Add unit tests for old lowering flow for op_cat.py
d06042625 : Skip building Apple demo app on fork PRs (#6865)
53caedefd : Improve ET_ASSERT_UNREACHABLE_MSG to use formatting
9f424e80e : Add missing Pyre mode headers
fc6d09711 : Disable llm module test (#6861)
71612a6b0 : Install requirements for llama vision (#6854)
4e83f2477 : Patch for div_mod build issue on Xtensa (#6814)
f32cffd26 : [ET-VK][Llama] Apply XNNPACK partitioner as well when lowering to Vulkan (#6857)
0f2995f75 : [ET-VK] Enforce GPU buffer limit when partitioning (#6856)
07c4d0e86 : Fix imports to use torchao public api
ecdc00768 : Linear tiling experiment
0a9598bff : [llama-mm] Fix vision encoder model test (#6842)
21eecff07 : add up to Uint64
6f638937d : Fix arm dep for jarvis test
efa462810 : Assert that the repo directory is named `executorch` (#6480)
7b03a8b22 : Update checksums in Xcode projects.
31dbfc904 : Qualcomm AI Engine Direct - support llama3.2 1B/3B with static llama in kv mode
473bc7b54 : [ET-VK][ez] Link MoltenVK statically for all Mac builds (#6834)
cafe0fabb : [llama-mm] Onboard torchtune vision encoder to ExecuTorch/AOTI (#6831)
bd768bf1c : Add op: masked_select
9393b8c03 : Fix broken trunk test_model.sh (#6826)
dd8f18289 : Remove PATTERNLINT suppressions (#6821)
93b202cc2 : [ET] Fix serde handling of `inf` (#6832)
ee32ea364 : Remove unused-variable in ods3/services/storage/gorilla/lib3/gorilla/micro_shard/GmscQuery.h
9511c2189 : [llama-mm] Add export friendly TiledTokenPositionalEmbedding (#6824)
bfb1e99e7 : Reducing VMA large pool allocation to 4 MB from 32 MB to reduce memory wastage.
d0e0466d7 : Expose method name as part of backend init context
f01b20b4e : [llama-mm] Bump PyTorch pin to 11/12 (#6819)
3da4b5da9 : Update model evaluator to check multiple outputs
ee14ad07f : Pin TorchTune (#6791)
4f2ac7e23 : Use lambda on all kernel prim ops
73893e174 : Arm Backend: Place pte file/data in DDR area
9abc9f49f : Solve circular import error
97e0417d0 : [executorch][serialization] Format program.fbs with consistent whitespace (#6809)
7b8511759 : Add qnn sdk docker (#6796)
d5c2b317c : Arm backend: Speedup generation of C/C++ header of the pte file
4c686608a : Add support for upsample_nearest2d op in the Arm backend
667f600b6 : Accept model type parameter in export_llama
d8a617f78 : [llama-mm] Add unit tests for exporting MultiHeadAttention with KVCache (#6801)
f943856f2 : Fix flaky ET attention test (#6795)
b6ebd3cdb : Fix missing move and bare new in pytree from_str_internal (#6803)
15b1f39f2 : Use std::variant to implement pytree Key (#6792)
5e0371479 : Fix pyre on builder.py (#6789)
9cd57d6b9 : [llama-mm] Enable kv cache for MultiHeadAttention (#6798)
4b7a60fac : Add missing quantize ops to q_dq_ops target
b3525d573 : [ET-VK] Replace `uint16_t` with `int` for swiftshader (#6788)
8127edda5 : migrate passes calls to pass utils
74bb5ff74 : Temporarily ignore flaky mha test (#6790)
f8fda0afe : Add a test to ensure that PT built from the pinned commit is used (#6785)
e332e2a32 : Remove .ci/docker trigger on Apple workflows (#6787)
043870bb6 : Update backend execute args in test
10a814efd : Run slow model E2E CI on periodic only (#6742)
b860f8f0b : remove extra torchao commit pin (#6784)
c8967dae3 : Fix missing header `<algorithm>` in tensor_ptr.h (#6778)
2f6d64f3f : torch.export()-only export Llama arg (#6695)
49756f669 : Print the number of tokens generated (#6773)
99ba7794d : [test] add Android java test to gh workflow (#6772)
acc63d3d3 : [ET-VK] Reduced int precision for texture coordinates in conv2d_pw op, to reduce shader register pressure. (#6776)
7f3d37aaa : [ET-VK] Removing tile input storage variable in conv_pw op and fetching the data in main loop. Also unrolling the main loop for performance improvement. (#6775)
1166669a3 : [ET-VK] Reduced int precision for texture coordinates in conv2d_pw op, to reduce shader register pressure and slightly improve performance. (#6774)
995c2bf03 : Add missing ops to ArmQuantizer (#6780)
dc41596b1 : migrate utils from jarvis to cadence
4947e2737 : Fix internal pyre test
f90cf2d0e : Tighten type hints for tensor arithmetic
576e96cfd : Qualcomm AI Engine Direct - Add llama sha transforming pass
623a9a61a : add the ability to have multi-round conversation with llama (#6769)
671f9c50c : update llama runner to decode single token (#6768)
6887ae960 : [ET-VK] Update partitioner to account for custom packed arguments (#6763)
b8b5146ee : move junit tests to android_test (#6761)
c411a75dc : add script to generate add.pte (#6760)
d544f9492 : [ET-VK] Statically link MoltenVK (#6762)
2660287a9 : added instrumentation test for LlamaModule (#6759)
b23c9e62a : [Android] Added instrumentation test for Module (#6751)
789598290 : Qualcomm AI Engine Direct - wav2letter e2e example
bec0625db : Fix arm related internal build
a809953b3 : Add torchao kernels to llama runner
146ca1ba5 : Swap mha (#6719)
7fcd0af54 : Search graph for quantization parameters (#6690)
793f17e52 : introduce slack channel for community
0a20d7205 : Arm backend: Add linear decomposition (#6661)
5b51bb8b6 : Support sym round and ceil
427b36d09 : Add Android standalone log target
289e84edd : Correctly set _GLIBCXX_USE_CXX11_ABI pybind compile options (#6744)
b1e6617f3 : Fix pyre
ddc8ea6f8 : Tosa specification handling (#6688)
6d6630eda : register quantized_linear.per_tensor in lib
1cd8a0685 : migrate passes and utils in cadence backend
b0f9a6176 : Add support for uint16 in quant and dequant kernels
485a5dfcf : Revert "Run tosa_reference_model using python binding" (#6729)
39e5b91c6 : [ET-VK][ez] properly parse skip memory metadata pass (#6723)
cb2a0e717 : Qualcomm AI Engine Direct - Reduce redundant observers (#6351)
4af687a76 : Revert "Qualcomm AI Engine Direct - Quantizer refine for qat (#6513)" (#6722)
437168ebe : [Android] added tests for Tensor.java
f9698d84b : Move quantize IO passes from BoltNN to ExecuTorch
abc8a5fab : Revert changes to executor_runner (#6687)
38346fdd0 : Added HiFi optimized mean and where ops. (#6483)
4bbe9945b : Run tosa_reference_model using python binding (#6658)
713d8a115 : Add max_pool2d op to Arm backend (#6285)
545535b63 : [Executorch] enable sleef consistently (#6705)
70f15e6f9 : [ET-VK] Fake u16vecn for devserver (#6704)
8f82198b5 : Add dvorjackz to ghstack_land.yml (#6644)
6051b2fee : Add per_tensor overload for quantized_conv
785ebf3ff : Add trunc scalar prim_op
b07386c85 : Remove custom implementation of string_view
026fe0b3c : Add support for bits16 in ETDump
17ad8d3f6 : Fix type handling for output types from TOSA reference model (#6660)
03b1ef26d : Remove IR check after aten in arm
b8e0ef9b2 : Add cat/stack ops to generic annotator
c438f8dc2 : Refactor pytest config + add default dump dir option
179d4954c : [data loader] move logic for FD data loader out of file_data_loader (#6682)
b4c6fe1ee : Refactor Init function arg
068f43c14 : Qualcomm AI Engine Direct - Quantizer refine for qat (#6513)
fbee392ca : Fix src file path in bp and add bp formatter
f7e26d749 : [llama-mm] Add export-friendly tile position embedding (#6671)
735e019f7 : Fix broken apple tests
c5b88cc21 : Add in-memory log buffer in Android JNI
836d5561a : [ET-VK] Introduce memory metadata tagging pass (#6669)
cefe51594 : [ET-VK] Refine partitioner to account for storage type and memory layout (#6668)
d99d26e7e : c10::optional -> std::optional
363505f96 : adding suppression tags to improve autodeps noise
63ff1aefe : Arm backend: Use better Ethos-U PMU counters for Ethos-U85
cd565b5b6 : Make EValue template constructor MSVC friendly
287488fff : Core runtime changes for Windows MSVC support
11d1742fd : compiler.h changes for Windows MSVC support
8a4e49285 : Revert "Search graph for quantization nodes (#6452)" (#6649)
3a1f8d2d6 : [AOSP] add file descriptor support to file_data_loader (#6611)
e1a9f91a7 : Add q_dq_ops targets to only build q dq ops for Windows
e2b5bcad3 : Add select for compiler_flags in kernels bzl files
8ab3385f4 : [Java][Android] add unit test for EValue (#6641)
f31bcca5f : [Java] add missing optional tensor list functions (#6640)
63017e403 : Search graph for quantization nodes (#6452)
244546b03 : Remove typing true flag from executorch
97a460006 : Enable aoti for preprocess + torch pin update (#6621)
f2c7700ed : Add per_tensor overload for quantized_layer_norm
09cf98201 : Update docs in executorch, remove capture_pre_autograd_graph references
40bac20d3 : Include external/executorch
3b458a7d4 : [Executorch] Revert compile optimization flags for op libs (#6618)
046a4e404 : Add preprocess to ci (#6616)
0691ae124 : Bump torch nightly pin (#6594)
ebe49112d : Executorch ops fail to build on macbook M1 due to bad formatting
c4b4e98f2 : Create pull_request_template.md (#5771)
a6328f0a3 : Use pinned commit PyTorch on CI instead of nightly (#6564)
465170fb0 : [ET-VK] Enable buffer implementation of `aten.linear` (#6608)
f0af4660e : [ET-VK] Allow clone op to transfer between memory layouts and storage types (#6607)
27eac4863 : Fix lint (#6606)
91c382df0 : [ET-VK][AOT][ez] Introduce vulkan export utils lib (#6605)
3aaf58419 : [ET-VK][AOT] Define pass application order (#6604)
c9f2fd017 : [Executorch] Make apple take accelerate path for blas (#6603)
f965c311d : [Executorch] Handle broadcast semantics for last dim (#6602)
d4a9ca01e : Remove open_source oncall marker in executorch owned code
1972e6921 : [ET-VK] Build Vulkan delegate on MacOS with MoltenVK (#6598)
0f6aeecb8 : [ET-VK] Reduced int precision for all int storage in q linear op, and reducing some texture coordinate storage variables to improve performance. (#6595)
77fe041bf : [Executorch] mul broadcast update (#6589)
3a1538a4e : [Executorch][Portable] Compile portable ops with -O2 (#6588)
b84d922ec : [Executorch][llm] Compile custom op with -O2 (#6587)
6086d6e2a : Support optrace
1bde32cca : [Executorch][Kernels] Build optimized lib with -O2 (#6586)
c045c350a : [ExecuTorch] Clean up pre-C++17 macros (#6583)
a17b59b24 : Fix find_package(executorch) failure (#6581)
f813b6a9a : Get rid of statement expressions from all core code for MSVC windows
10aeb5694 : Remove ET_ALLOCATE_LIST_OR_RETURN_ERROR from all core code for MSVC windows
20808774e : Revert "Revert "Updating cadence ops with new name space, rebasing 6 optimize…"
fd2844cdf : Add Android instrumentation test on emulator (#6364)
fbb0acf3f : [ET-VK] Used hashed layout instead of axis map UBO (#6574)
b07be360a : Fix compiled model export. (#6568)
c8a391802 : read meta["val"] from arg for place holders instead of generating a fake tensor using fake_tensor_mode.from_real_tensor
8be632712 : Arm backend: make tag_io_quant_pass tag all output dq-nodes (#6547)
ffad82455 : Update executorch-arm-delegate-tutorial.md with CS320 FVP
7dd981fb8 : ArmBackend: Add support for Conv1D
db38bcc49 : Remove deprecated InstructTemplate from llm_pte_finetuning example (#6557)
f0463c46f : upgrade lm_eval to 0.4.5 (#6561)
94818587b : Add uint16 support in exir serde
4a627d42d : Fix circular import for lm_eval (#6572)
538575fda : Android App with MediaTek Mode
74e6c1fac : Refactor preprocess to use EagerModelBase (#6567)
a2f1f14cd : Add MediaTek Llama Runner in Android App Readme
461d61d6b : Add readme for other backends
47bca20fe : MTK Android Llama Runner
1f38016c1 : Fix op not found issue in smoke test for pip wheel (#6549)
72b02ec88 : Remove buck2 CI from pull.yml (#6539)
2c32bf321 : Arm backend: Add select operator
41a57e6b0 : Add ScalarType 22 `BITS16` support in etdump gen and deserialization
6b01b9138 : Fix utf8_check_validity (#6543)
a4d09bd97 : add lucylq to ghstack_land.yml (#6537)
b8d97d86b : Enable int8 support for quantized_linear reference
ec27667fc : Fix pyre issue
4e6976dc6 : SGD and TrainingModule in Python
3b25b05fd : Docs: Typo Fix (#6526)
5429eea41 : add the ability to run eager runner via buck
85d3ff660 : [ET-VK] Fix op benchmarks code generation (#6528)
3366edff7 : migrate jarvis quant linear out hifi ops to oss
1b1297118 : migrate quant layer norm hifi ops to oss
80807fd01 : [cadence][oss] migrate jarvis dequant_per_tensor hifi ops to oss
9890c2454 : [ET-VK] Introduce AOT operator registry (#6511)
a3579e931 : Remove "Android" from update-viablestrict.yml (#6525)
5b49bc6c1 : Qualcomm AI Engine Direct - Update the evaluator API call for Llama (#6386)
b2f73a344 : Qualcomm AI Engine Direct - Set llama io as quantized tensor (#5383)
16b633b4d : Add ops: max.unary_out & min.unary_out
800fc276d : Fix to_edge_transform_and_lower
dc2e02a98 : Add rpath to libcustom_ops_aot_lib.dylib and libquantized_ops_aot_lib… (#6499)
cd1a25d72 : Move CI pin to latest nightly
dde3df383 : Replace torch.empty with torch.zeros
8234c1410 : Fix #6462 - run compile only for examples/qualcomm/llama (#6463)
904a2e514 : Set member variable to Attention module (#6376)
f044e91fe : All executorch tests pass now
8f9fb7e91 : Add ExtraTensorInfo to schema.py
e93ad5f20 : Add transpose ops to make convolutions channels-last.
cbfdf78f8 : Insert transposes around view_copy ops
28a213fb8 : Remove preserve ops
dfbf6fd53 : Mention torchao in llama readme page (#6487)
5889cc311 : Fix typo in apple-perf (#6491)
395b4d68e : add new dependencies added by D64735807
d695f15b3 : Add pass for decomposing (log)softmax
53a94afd4 : Support quantized llama models (#6486)
e6d93dece : Update Android and iOS demo app readme for Spinquant and QAT+LoRA model support (#6485)
8477fa950 : Fix selective build codegen for sym primops
e2526cc4a : Update ghstack_land.yml
ec154875c : Revert "Updating cadence ops with new name space, rebasing 6 optimize… (#6482)
8140a90c6 : [ET-VK] Implement generic reduction shader + mean, sum, amax, amin (#6473)
569220378 : [ET-VK][ez] Implement rsqrt (#6472)
cb2580942 : Android emulator job should use linux.24xl.spr-metal
eea172de0 : Fix CI for _android.yml (#6469)
6a1772d93 : Move android.yml to pull.yml (#6449)
2553788a5 : c10::nullopt -> std::nullopt
fa30e80cf : Reformulating matrix multiplication scale equation to reduce math ops and improve power and performance.
4f12131c0 : Remove unrelated file
f6778d595 : Fix dim order in op_permute
169ddbfcb : fix typo
66c57dba7 : [Android] Strip .so for Release build (#6443)
d192b1f91 : [Android] Introduce build types and set default to Release (#6442)
fe20be98c : Calculating axis's local wg size based on global workload and making it as close as possible to warp size of 32.
2c2e52775 : Reorganize llama readme
ded9f4a90 : Qualcomm AI Engine Direct - oss model enablement (retinanet_fpn)
9af634443 : Add the PTE/runtime compatibility policy (#6438)
cb0f53e0e : Arm backend: Add tanh operator
8c9680569 : executorch/examples/portable/custom_ops/custom_ops_2_out.cpp: fix llvm-17-exposed format mismatches
5d7bd718c : Reduced int precision for texture coordinates in q_linear op, to reduce shader register pressure.
5a6a57146 : Refactor out utility functions out of llama readme
89ba47ae2 : llama export with input vocab pruning
0aa802de6 : ghstack bot skip if a corresponding PR is merged (#6427)
979708d2d : Updating cadence ops with new name space, rebasing 6 optimized ops (#6407)
5a34bc1cf : Qualcomm AI Engine Direct - Fix Per Layer Dump (#6384)
484774f41 : fix eager run for cuda (#6429)
ca4783992 : fbshipit-source-id: f59bd1452bcb81e0b6eb4393909fad9bce14c20d
633b1163d : Fix int8 buffers support detection (#6404)
10a8e24ad : [ET-VK] Enable custom rotary embedding module replacement (#6424)
10f51b9f2 : [ET-VK] Introduce rotary embedding custom op (#6423)
030985448 : [ET-VK][ez] Apply rotary embedding as Module (#6422)
124754558 : Qualcomm AI Engine Direct - qat proto
3ea853884 : `c10::optional` -> `std::optional` in executorch/runtime/core/exec_aten/exec_aten.h
706df296b : `c10::optional` -> `std::optional` in executorch/backends/vulkan/test/op_tests/sdpa_test.cpp
9af2a982d : `c10::optional` -> `std::optional` in executorch/extension/llm/custom_ops/op_sdpa_aot.cpp
c7a3c3fd9 : Arm backend: Insert transposes between <4D and 4D tensors
01d878310 : Fix merge bot to use ref for merged PR (#6417)
54acd38ea : Skip diff train for ghstack bot generated PR
290eed9c7 : fbshipit-source-id: 50c91a66e57900df1de62220d3bd95fa40860c9a
0189ed9a7 : fbshipit-source-id: cd0b3898dc7c2b7bd41489b55d6b24cb8afb25f7
9ce50b4ea : Add training readme
1e9ebdfba : Align aot_arm_compiler to latest export flow (#6327)
f93270a4c : Arm backend: Add layer_norm decomposition (#6288)
46ea1a4fd : Fix params.json for llama models
af13be9c0 : C10_UNUSED to [[maybe_unused]] (#6357)
bb1ad31d8 : c10::optional -> std::optional in executorch (#5871)
324f02172 : update native_layer_norm to new layout gen & axis mapping (#6358)
b3932c028 : Add back extension suffix for portable lib shared object in pip wheel (#6363)
4d7b294ef : Add an interface for LLM runner (#6356)
8209bc155 : Optionally add qnn backend to llama runner buck file (#6355)
339bb74ba : Implement prepack nodes (#6352)
75c17c043 : Clean up prepack API (#6331)
59c8d828e : Refactor out llama2 specific content out of Llama readme (#6359)
7493aaed4 : fix llama eager runner and add ci (#6344)
179fd6941 : add ci job for eval_llama with mmlu task (#6337)
a343884ee : add CI job for eval_llama with wikitext task (#6336)
3ce5741ac : Update viable strict workflow (#6348)
6b2a0825b : Fix issue with partial UTF-8 string (#6317)
0eeea824b : Expose ET tokenizer to clients (#6335)
ced983aa8 : executorch/{kernel,examples}: fix llvm-17-exposed format mismatches (#6350)
b1c94ab67 : migrate jarvis quant-per-tensor hifi ops to oss (#6293)
fad26af95 : Back out "Revert D64082731" (#6339)
2c431901b : Remove duplicated ANDROID_PLATFORM and define in CMake (#6223)
18a7e659d : Script to ghstack land (#6270)
a835e0302 : executorch/backends/cadence/reference/operators/quantized_layer_norm.cpp: fix a few format mismatches via static_cast<int>() (#6322)
7510f8cbc : Run benchmark on background thread (#6320)
8f6c16ec4 : Removing q_linear.h and adding tiled q_linear implementation (#5492)
8d6d18a04 : Exclude log_outputs() from execute profiling scope (#6325)
6669e1880 : Fix scalar arithmetic and add test cases (#6224)
6a085fff7 : libexecutorch.a -> libexecutorch_core.a (#6333)
a5a147f54 : Update cmake docs to reflect the recommended way of running cmake (#6329)
5f12f28bc : Enable quantization as default for XNNPack for previous failing models (#6242)
d59a5118c : Add a null check for `tracer` in the `Module::load_method` (#6319)
da2cd5b1d : [pybind] Add pybind API reference docs (#6328)
ab628ccb3 : executorch/examples/devtools/example_runner/example_runner.cpp: fix format mismatches printing error_code_t values (#6324)
0a64c3eac : Git clone instruction with release/0.4 branch (#6323)
6b858f2a7 : Add pybind API reference docs (#6276)
5e4499153 : Upload iOS peak memory usage metric (#6282)
c242c4cec : Update doc for benchmarking (#6318)
8dd777495 : Update landing page to include Llama (#6218) (#6259)
ad0e5e81c : Add a null check for `event_tracer` param in the `Module::load_method` (#6298)
065d48084 : Pass QP8 flag only for Kleidi QB4W (#6220)
0f17eb10b : Update and fix docs (namespaces, consistency) (#6185)
52ebd2c60 : Update runtime tutorial to promote Module APIs in the beginning. (#6216)
1f33a7711 : Revert D64082731 (#6314)
f7ce18e16 : Update XNNPACK docs to use to_edge_transform_and_lower API (#6244)
ddee09f5d : Add placeholder runtime/COMPATIBILITY.md (#6316)
573ad9659 : LLM manual fixes on profiling part (#6315)
a060791b2 : Bump branch version in docs. (#6306)
7fedb2f14 : Update mediatek_README.md (#6294)
6a27cd11e : Add placeholder runtime/COMPATIBILITY.md (#6307)
2be010518 : Rename executorch-llama.aar to executorch.aar (#6313)
3b8b28b69 : Fix issue caused by llama2->llama renaming (#6310)
620d7ea9b : Point to to_edge_transform_and_lower in ExecuTorch API docs (#6283)
e86cb3aaf : Bump branch version in docs. (#6305)
ec1c431c7 : LLM manual fixes on etdump generation (#6299)
04a82b764 : Define unused variable in build_android_llm_demo.sh (#6303)
63f0718db : Codemod examples/models/llama2 to examples/models/llama (#6302)
0a8e00714 : Enable Vulkan 4-bit weight only quantization in `export_llama` (#6235)
58ee33d3f : Add custom `VkInt4WeightOnlyQuantizer` for vulkan (#6234)
841277a2d : Fix implementation of int4 quantized linear (#6200)
3b56fdfe9 : Fix shadowed variable in executorch/backends/apple/coreml/runtime/inmemoryfs/inmemory_filesystem.cpp (#6289)
2e67e3ada : Remove pad_max_tiles from preprocess (#6295)
295b2aa66 : Modify build script for MediaTek Neuron (#6292)
50aa51754 : Return the exported_program from transform to allow internal usage (#6210)
0c5536ef9 : Fix script export_hf_model.py (#6246)
dc4be7c74 : Qualcomm AI Engine Direct - Observer Fix and remove unused passes (#6225)
423f65d74 : update eager runner to use same options for loading the model (#6257)
1d7c3abf0 : fix llama runner (#6256)
2c8b14c55 : Re-sync with internal repository (#6291)
871efb78e : Unify the qnn version for a few remaining places (#6260)
49395c2a8 : Update mediatek_README.md (#6290)
cccc2c2b9 : Run CI for bf16 with custom SDPA (#5554)
867356725 : Point to to_edge_transform_and_lower in ExecuTorch API docs (#6236)
35aeaca07 : Add reduce_sum op to ArmBackend (#6044)
48d851e3c : Arm backend: Less logs during Execute() (#6098)
fe1b2e772 : Add sigmoid to one-to-one annotator in ArmQuantizer (#6047)
500019a66 : android-release-artifact only trigger if AAR doesn't exist (#6248)
254bdf90e : Rename capture_pre_autograd_graph private method (#6214)
4b29f2626 : fix event tracer usage for shader profiling (#6249)
41ffa93d5 : Do not raise error when there is mixed dtype in the checkpoint (#6247)
dae765e3e : Don't automatically backup settings in Android demo apps (#5822)
3e052a890 : Update landing page to include Llama (#6231)
11eeec787 : Android NDK use r27b (#6092)
59070e5c4 : Update the "This is an alpha release" wording to "this is beta release" (#6217)
c544d0a29 : Handle output from TOSA reference model being int8 (#6190)
6ae8b1daa : Unify the qnn version definition (#6193)
e91357d0b : Adding model stats to aot_arm_compiler (#5816)
97a19658f : Use Eigen blas for custom sdpa (#6229)
96ecf7834 : Instruct people to clone viable/strict as opposed to main (#6243)
7b3439e20 : Use a common sh to build android llama app related stuff (#6034)
435723037 : Update XNNPACK docs to use to_edge_transform_and_lower API (#5344)
e342a9206 : Width Packing Mat1 input for Quantized Linear (#6149)
517fddbf5 : Adding keyboard dismissal when prompt sent (#6238)
708c6b6fe : check to_copy args in vulkan_partitioner (#6215)
5c3439ddf : Increase number of iterations for benchmarking (#6209)
6108a0b59 : Convert directory fbcode/executorch to use the Ruff Formatter (#6232)
55ed63f60 : Qualcomm AI Engine Direct - Update arch and soc chipset terms (#6227)
e0e19ed90 : Update extension-module.md to avoid extraneous header (#6237)
da78f3bc3 : [Release] Update pyproject.toml (#6221)
8dbd14ccb : Fix get_control_flow_submodules_list call in debug handle generator pass (#6187)
d63b35240 : Add 2b embedding op (#5800)
7ba799015 : Revert D64334733: Use Eigen blas for custom sdpa
95e7aa3a6 : Use Eigen blas for custom sdpa (#6197)
4745070d0 : Llama2 model cleanup (#5859)
4e8f609f3 : Fix /opt/android folder permission (#6207)
bff26f3e5 : Update runtime tutorial to promote Module APIs in the beginning. (#6198)
5c8b115e3 : Add CoreML tests. (#6203)
3a7056e09 : Add op: masked_scatter (#6167)
09367a866 : Update doc to include Benchmark Dashboard (#6202)
9bd7f4600 : Populates EXECUTORCH_LIBRARIES and other variables in executorch-config.cmake (#6204)
ce767cbed : Disable XNNPACK workspace sharing by default (#6199)
8101bf124 : Use int8 quantizer in the OSS flow (#6166)
ce67b543d : MTK Android library integration (#6001)
aa852cc7d : Continue benchmark other models even if one fails to export (#6186)
a6c69a1eb : Aten _To_Copy (#6055)
8957dc8a2 : Update to new version with new 16x4 kleidi kernels (#6101)
cd2d2b402 : fixed empty output from runtime executor in CPU flow (#6184)
1f2b9aa27 : add instructions about getting mmlu score for instruct models (#6175)
5512fe091 : Update and fix docs (namespaces, consistency) (#6084)
4b3ffc4ae : Add Vulkan Quantizer to Llama export lib (#6169)
236e60d75 : Include `FuseDequantLinearPass()` in `vulkan_preprocess` (#6168)
10f83d697 : Remove test on FakeProgram reducing size (#6178)
0b671f3ae : Clean up Android prebuilt artifact path and verification (#6181)
3b4b9d2d7 : Clean up Android prebuilt artifact path and verification (#6182)
37e7d928d : Back out "Revert export_for_training migration in llm/export/builder.py" (#6180)
6fa454ced : Mark extension/llm examples/llama2 and examples/llava C++ APIs experimental (#6177)
e95aa9d2e : add option to run mmlu with 5 shots (#6146)
ab12b7c7c : Revert "Fix doc links to the benchmark apps" (#6176)
e02faecb8 : Add possibility to collect all TOSA tests to a specified path (#5028) (#6174)
61c501c0d : Use fbjni from maven (#6163)
f17c9e19a : Misc fixes for the demo app. (#6064)
6e788c729 : Add fake mode in verifier (#6132)
b07780127 : Adding executorch_prim::mod.Scalar (#6133)
539152b55 : Don't install torchao at the user level. (#6119)
485a486d3 : Move arm.passes to arm._passes (#5918) (#6128)
905a26f0f : Move vulkan.passes to vulkan._passes (#5919) (#6129)
9aa96f8da : Add README to run the LLM fine-tune example on ET (#6159)
c169d91a1 : [Arm] Fix tosa-reference model upstream tracking (#6162)
922c74b6d : Add MTK NeuroPilot Portal link for SDK in mediatek_README.md (#6160)
d094b093f : Qualcomm AI Engine Direct - Support topk (#5870)
4bddf0166 : Open the correct image picker. (#6171)
e28cb76ff : Qualcomm AI Engine Direct - Fix push wrong library (#6155)
063596807 : Package headers into pip wheel (#6138)
040015cf2 : Pick pybind (#6151)
7d1b0a167 : s/exec_aten::/executorch::aten::/ for runtime/**/*.h (#6161)
99827e4e4 : s/exec_aten::/executorch::aten::/ for extension/**/*.h (#6106)
5696b355e : Fix tosa-reference model upstream tracking (#6156)
84f51d171 : s/exec_aten::/executorch::aten::/ for runtime/**/*.h (#6030)
0b82b17ac : Fix image processing. (#6154)
6449c3898 : Move cadence.aot.passes to cadence.aot._passes (#6091)
83ec9d7f0 : Move mediatek.passes to mediatek._passes (#6089)
7a2d98636 : Move xnnpack.passes to xnnpack._passes (#6088)
a2bc6bdf5 : Change memory planning API to accept full algorithm as argument as opposed to string name (#6130)
67c959a84 : Use TensorMeta to check if inputs and outputs are memory planned (#6114)
56a3d1e1c : Add README to run the LLM fine-tune example on ET (#6150)
722bced8b : Mark extension/llm examples/llama2 and examples/llava C++ APIs experimental (#6125)
4204e50dd : Fix image processing. (#6153)
e0fcdd42e : Add OptionalIntArrayRef used by torchgen. (#6145)
4ad5350c9 : Update compiler-backend-dialect.md (#6139)
51ef93ee1 : introduce model-level end2end tests to dim order tests with different delegate (#6135)
2e4e17c70 : Fix qualcomm passes (#6110)
f52f4d475 : Fix backend and devtools debug handle based tests (#6143)
4a4a90fed : Allow int8 type in quantized_conv and im2row (#6049)
ab1e671f1 : [BE] Add --clean option to install_requirements.py (#6137)
eca44f03b : Fix delegate debug handle generation (#6134)
ba8dc288a : New Runtime pybind API (#6063)
edb43d804 : op_clamp: add downcasting tests & fix (#6131)
96bac8226 : Improve ArmTester logging (#6080)
247508b68 : Update xnnpack_README.md adding warning for unsupported ndk version (#6127)
9784813ff : Release docs proofreading (#6124)
550428467 : Update runtime-build-and-cross-compilation.md (#6123)
5f442a655 : Specify NDK version 26 (#6122)
140fc57ee : Update kernel-library-selective-build.md (#6121)
59c83f9dd : Remove unused empty images. (#6120)
965fd671a : [ExecuTorch] Disable animation in hot path of iOS example again (#6118)
d40dd1e77 : Serialize destruction of the delegate (#6111)
9a4d6ce34 : fixed CPU kernel, unary_ufunc_realhbbf16_to_floathbf16, cpu flow (#5991)
6655c3b1e : Qualcomm AI Engine Direct - Fix aihub path failing due to memory planning pass (#5748)
a43b4a609 : introduce model-level end2end tests to dim order tests with different delegate (#6093)
69766fb6c : Call llama2/install_requirements.sh in llava install_requirements.sh (#6115)
5d12e5bcc : Fix androd-perf workflow OOM (#6109)
f31df696b : Fix hardcoded mps model names (#6113)
40cb26e4b : Blit Node (#6038)
0d935b6b9 : Don't install torchao at the user level. (#6116)
cedcf7ea6 : Remove unused empty images. (#6117)
1be8521ef : make dim order test as default in xnnpack (#6104)
d8dacf345 : [ExecuTorch] Fix missing cstdint in vec_base.h (#6107)
a9ea8b459 : Migrate from capture_pre_autograd_graph to torch.export.export_for_training (#6103)
15c164288 : Let find_package(executorch) find the correct include directory (#6102)
40358faea : Add CMake instructions to apple-runtime.md (#6099)
644045a86 : Add execution permission to install requirements scripts. (#6087)
02bf97a59 : update profiling import issue, and latest output (#6083)
3c0c99453 : Fix reporting backends and dtype to benchmark results (#6073)
df5b2abf8 : Implement Graph node (#6037)
d6aea3d4a : Support more breakdown of latency metrics/stats for Llama (#6072)
83c95dffb : Move arm.passes to arm._passes (#5918)
69c2c766c : s/exec_aten::/executorch::aten::/ for extension/**/*.h (#6032)
192ca8289 : Upload Apple iOS benchmark results to benchmark database (#5982)
57e3c810d : Add execution permission to install requirements scripts. (#6082)
4b6a0336f : Support serializing scalar tensors as SymInt values (#6070)
1a0c2c732 : Update `RemoveLocalScalarDenseOpsTransform` to tag scalar tensors as well (#6069)
7bfab2176 : Bypass unused variable warning (#6078)
59c359f86 : Move cadence.aot.passes to cadence.aot._passes (#5921)
1a6ba2e20 : Reduce build size of op_cumsum (#6021)
d989680fd : Reduce build size of op_pixel_shuffle & op_pixel_unshuffle (#6020)
1083643b7 : Reduce build size of op_copy (#6019)
e0c26dd46 : Reduce build size of op_addmm (#6018)
be86a2cf3 : Reduce build size of op_convolution (#6017)
b3087441a : Move to dtype_utils (#6016)
5937f4a2c : Rewrite logical op pattern. Binary ops: logical_and, logical_or, logical_xor (#6015)
addb403a4 : Rewrite bitwise op pattern. Binary ops: bitwise_and, bitwise_or, bitwise_xor (#6014)
a2cf40949 : Introduce comparison op pattern. Binary ops: eq, ge, gt, le, lt, ne (#6013)
4334cec53 : Integer division binary ops: floor_divide, fmod, remainder (#6012)
9b1d333a1 : Introduce FLOATHBF16. Binary ops: atan2, div (#6011)
405f53158 : Introduce REALHBF16. Binary ops: pow, rsub, sub (#6010)
7606476cb : REALHBF16 binary ops: maximum, minimum, mul (#6009)
f8d182be7 : Check tensor dtype inside elementwise_utils (#6008)
607d4a3ee : Introduce notion of compute type. Refactor add/clamp/where (#6007)
41533718b : Introduce uni-tensor and bi-tensor equivalents (#6006)
a79caab72 : Move to elementwise_utils as apply_tritensor_elementwise_fn (#6005)
d27182586 : update profiling import issue, and latest output (#6079)
27330f20c : Default to cores/2 threads in JNI layer (#6042)
554eebe91 : Cover MPS models in the CI (#6036)
dc496bb2d : Fix doxygen docs for runtime API reference (#6067)
c0d007f14 : Make getting started clearer and reduce formatting bugs (#6071)
1eed36484 : Include `VkUtils.h` in `api.h` (#6066)
e764959b1 : Enable vTensor creation from external image (#5993)
991f0a970 : Enable externally allocated `VulkanImage`s (#5992)
d3cd09cf9 : Support staging any bitw8 image, take 2 (#6028)
fb63da908 : add missing namespace of getLeadingDims in hifi (#5997)
a922b3dba : Remove logic for appending or prepending tokens (#4920)
afb26644c : Extract a function that selectively displays or prints DF (#6031)
4f4dc0b22 : Fix weights not updating during training (#6068)
5fc56627d : Add kwarg example inputs to eager model base (#5765)
0ce221e7b : Allow int8 type in quantized_linear and quantized_fully_connected (#5900)
f5f6969d6 : Allow int8 type in quantized_layer_norm (#5899)
f4e25e100 : Allow int8 type in quantized_matmul (#5898)
012cba9a1 : Fix reporting backends and dtype to benchmark results (#6023)
c1b10a708 : [ExecuTorch] Unbreak optimized sub in the case where one input is a scalar and both are Half (#5940)
1498238c3 : Update executorch-arm-delegate-tutorial.md (#6065)
866b40c72 : Move qualcomm.passes to qualcomm._passes (#5920)
e1832efa5 : Move vulkan.passes to vulkan._passes (#5919)
b118d8e73 : Fix doxygen docs for runtime API reference (#6061)
243fffce3 : Update executorch-arm-delegate-tutorial.md (#6040)
9364c06e0 : Misc fixes for the demo app. (#6051)
2e64e62a4 : Make getting started clearer and reduce formatting bugs (#5954)
0fcc4de52 : Update packages commit. (#6062)
a57853c2b : Centralize soc_to_chipset map (#6033)
53f58708f : Support quantizing to and dequantizing from uint16_t (Bits16) (#5839)
f3002f6dd : Arm run.sh fix with working unit tests (#6056)
2027a1436 : Fix eval (#5955)
af1e067c6 : Remove "under active development" in XNNPACK (#6054)
80afaf2da : Migrate backends/apple/coreml to the new namespace (#5943)
867c96a51 : Fix weights not updating during training (#6039)
a6f754a98 : Add clarification about perplexity discrepancy (#6053)
b8f66f947 : Migrate backends/qualcomm to the new namespace (#6025)
c4a2eeb7e : Migrate backends/apple/mps to the new namespace (#5883)
7bebf8e42 : Migrate backends/cadence away from deprecated namespaces (#5905)
50f70f3aa : Migrate backends/mediatek to the new namespace (#5956)
cb3a546be : Remove "under active development" in XNNPACK (#6048)
b73fb1e43 : Qualcomm AI Engine Direct - Refine max spill fill buffer setting (#6041)
0a3002f78 : Revert "Arm run.sh fixes" (#6052)
8b012a0af : Arm run.sh fixes (#6043)
36a5bc635 : Revert export_for_training migration in llm/export/builder.py (#6029)
f4bbf1955 : Update build framework script to skip specifying buck executable. (#6004)
14df42cbb : filter gif from lint rules (#6002)
ed7e65f5c : Correct Core ML perf metrics (#6000)
566902b71 : Serialize destruction of the delegate (#5996)
befc1a905 : Arm backend: Make slice-op work without end index (#5782)
28c95480e : Add CastInt64ToInt32Pass (#5842)
e540bcb55 : Move mediatek.passes to mediatek._passes (#5922)
da1d2ca3d : Add mapping from C++ program::verification to Python (#5915)
a6b213b9e : Android cleanup java api (#6022)
72b3bb319 : Unbreak test models llama CI (#6026)
b6e6d062e : Revert D63918659 (#6027)
01fcdf420 : Qualcomm AI Engine Direct - Refine max spill fill buffer setting (#5989)
0d1250a79 : Support selector for functions.yaml and custom_ops.yaml targets in et_operator_library (#5957)
ed9f50f87 : Adding LSTM to example models (#5931)
ec0fceec6 : Minor fixes around the Arm testing framework (#5976)
e904e569a : Arm backend: Fix pte path in run.sh when using a model python file (#5979)
c35386f47 : ArmQuantizer: Quantize AvgPool2d (#5873)
c44f334f3 : Arm backend: Set example timing adapters values (#5980)
132709092 : Improve pip package build (#5965)
f663ba662 : Update build framework script to skip specifying buck executable. (#5985)
245aa4e7b : Fix doc links to the benchmark apps (#5999)
931d8d8c2 : Fix QA issues of the devtools README (#5998)
a9d7182af : Improve iOS readme (#5990)
0fa033c27 : [ET-VK] Miscellaneous fixes for Vulkan docs (#5986)
243536414 : Arm backend: Show delegation info from aot_arm_compiler (#5981)
cf18ceda2 : Arm backend: Make memory pool sizes configurable from cmake (#5841)
a8bc14499 : Add Arm pass-utils and update passes (#5579)
fe083e5bc : filter gif from lint rules (#5995)
c5fdebd42 : Add div decomposition in ArmQuantizer (#5267)
f8cec5370 : Support staging any bitw8 image (#5934)
6fa423399 : Fix doc links to the benchmark apps (#5984)
f0112a2c6 : Add MTK NeuroPilot Portal link for SDK in mediatek_README.md (#5872)
f3de2bbb6 : Improve iOS readme (#5877)
4629f3afb : Migrate to training IR in executorch, part 2 (#5950)
085193e7e : Fix QA issues of the devtools README (#5963)
c35fb94b6 : Support `texture2D` in `etvk.copy_offset` (#5933)
77b1f082f : import from_blob from executorch::extension
60cc6bcd8 : New URL for the Profiling page (#5966)
9619b9cb9 : Update docs on Module new APIs. (#5968)
fef1299f7 : Update demo app readme. (#5970)
71df0aae3 : Update Llama README.md for Stories110M tokenizer (#5975)
513d16633 : Miscellaneous fixes for Vulkan docs (#5972)
fe295b9a3 : Add helper method to generate missing debug handles (#5902)
48d08dac6 : Migrate backends/arm to the new namespace (#5974)
62a13c142 : Handle scalar tensor and mutable buffer inputs in Vulkan delegate runtime (#5930)
a4b88a354 : Support exporting of custom operator calls via `higher_order_auto_functionalized` (#5884)
400fefa4c : Add pass to remove local_scalar_dense (#5886)
cb12061fc : Add --clean option to install_requirements.py (#5348)
3298b228b : Rename llama3_2_mm to llama3_2_vision (#5892)
2eb6b0497 : Qualcomm AI Engine Direct - Fixed layer norm quantization annotation for 16bit (#5927)
ac2ae072e : Move benchmark apps to extension/benchmark dir (#5951)
986d0012c : Update demo app readme. (#5962)
12cb9ca06 : Update Llama README.md for Stories110M tokenizer (#5960)
7337f8e6d : Find portable_lib.so in pip package during cmake build (#5961)
03e451647 : Migrate backends/arm to the new namespace (#5904)
aad548c1a : Fix delegate debug handle generation (#5953)
7b93aa2ed : Use source_fn_stack in xnnpack tutorial (#5959)
0424eef25 : Update docs on Module new APIs. (#5952)
f005dd5ba : Migrate to training IR in executorch tests (#5941)
ee07d78db : Fix missing export_for_training import in bundled io tutorial (#5958)
9c4032b5c : Bump numpy from 1.21.3 to 1.22.0 in /.ci/docker (#4514)
19e5a4545 : Improve the qcomm aot part docs (#5939)
165144998 : New URL for the Inspector page (#5942)
37a13976d : Fix missing export_for_training import in bundled io tutorial (#5949)
c86d0d033 : Use source_fn_stack in xnnpack tutorial (#5948)
59cc8171b : Move xnnpack.passes to xnnpack._passes (#5917)
6e871c3b6 : Implement SDPA + KV-Cache operator (#5799)
0186c7f92 : aten.flip (#5879)
a9cbb3851 : Enable `uint8` dtype in shaders (#5932)
3e208748b : Benchmark app for Apple platforms (#5944)
0a11e9931 : Update README.md (#5945)
2726bdb87 : use --use_sdpa_with_kv_cache for 1B/3B bf16 (#5861)
5b45b75e1 : update the tested qnn version (#5936)
478a9b6d4 : Improve the qcomm aot part docs (#5868)
19584a82c : New URL for the ETDump page (#5938)
af6f3ed15 : Add documentation for the apple benchmarking app. (#5935)
e194febb9 : update the tested qnn version (#5903)
d17463731 : Qualcomm AI Engine Direct - oss model enablement (fastvit) (#5543)
c06a70824 : Revert "Add quantize option to the coreml script (#5710)" (#5906)
8fc3e2061 : Cleanup export_model API calls (#5882)
84f5a561e : Handle empty Android benchmark results (#5916)
17c2f3672 : Release docs proofreading (#5909)
5abfe13b3 : Update runtime-build-and-cross-compilation.md (#5452)
4651d6542 : Upload Android benchmark results to OSS benchmark database (#5808)
c678cbdcf : Add a script to download frameworks. (#5901)
2f9f94a06 : Use .bin extension for tokenizer. (#5907)
1052e3b17 : Unbreak optimized sub in the case where one input is a scalar and dtype is mixed or both are Half (#5894)
94289ad21 : Clean up organization of supported_ops (#5885)
84498b25c : Update kernel-library-selective-build.md (#5895)
acfcdd54d : Add option to disable operator profiling (#5720)
d9aeca556 : Update compiler-backend-dialect.md (#5890)
784eb51cf : Correct Core ML perf metrics (#5862)
58d72fd8e : Properly kill the buck2 daemon (#5863)
cf56ba77e : Migrate some random files away from the torch:: namespace (#5866)
594ab5e41 : New URL for the Model Debugging page (#5891)
4f41e652e : Migrate backends/xnnpack to the new namespace (#5896)
445b849ea : Migrate backends/vulkan to the new namespace (#5897)
f8d66890b : Fix input setting and invocation. (#5888)
4adf49601 : Fix load time. (#5889)
08308942c : New URL for the Bundled IO documentation page (#5860)
b43def54a : Use new threadpool namespace for all of //executorch/... (#5851)
34e7ad8a6 : Migrate backends/xnnpack to the new namespace (#5865)
a6d67c7dd : Move LLaMA tests to a subdir.
a4fcdcdc2 : PT Pin Bump: 20241002 (#5880)
98a58e0df : Add generic annotator for data layout ops (#5814)
00d804c18 : Arm backend: Add argument to list used fallback ops for run.sh (#5815)
f102d0652 : Add llama tests. (#5874)
b9e9479e9 : Migrate backends/vulkan to the new namespace (#5876)
be4b7f4b7 : Add missing Pyre mode headers
d34cc4e7e : Qualcomm AI Engine Direct -- add ssg2115p (#5867)
c2969f12a : add 16a8w matmul custom annotation (#5864)
20a157f23 : Rename flamingo to llama3_2_mm (#5759)
3a2565182 : Migrate some random files away from the torch:: namespace (#5836)
a4ee59a4b : Adding executorch_prim::mod.Scalar (#5721)
433ead09e : Just pass SupportedTensorDtypes for each tensor to apply_ternary_elementwise_fn (#5834)
b1fd74c8d : Simplify function pointers for apply_ternary_elementwise_fn (#5833)
aa8a93c33 : Dramatically improve op_clamp build time (#5784)
9c3ebfe0a : Add readme to Resources dir. (#5857)
0ddd91329 : Add a script to download frameworks. (#5858)
3dd1b9853 : New URL for the ETRecord documentation page (#5848)
876c6651a : Update xnnpack_README.md adding warning for unsupported ndk version (#5854)
2584e7208 : Specify NDK version 26 (#5855)
abae470da : fix the format in this README (#5853)
13408b984 : Arm backend: Avoid failing sigmoid unit tests after Pytorch 2.6 update (#5843)
0e5b92ddb : Add OptionalIntArrayRef used by torchgen. (#5735)
077f24c6a : Migrate extension/threadpool to new namespace (#5849)
70aee720a : Properly kill the buck2 daemon
fd8891327 : Move etensor types to their new namespace (#5773)
4e0e340f7 : New URL for the Debug in Delegates documentation page (#5846)
98c5efae3 : Daily `arc lint --take CLANGFORMAT`
18845b03a : Update the job name to reflect the new generic benchmark app (#5837)
92d1d1e41 : Enable Ethos-U85 and Corstone-320 in Arm run.sh flow (#5818)
ee5d0993b : Arm backend: Mark test_block_bottleneck_residual_tosa_BI unit test flaky (#5812)
b9aaf96e9 : Split up split test to collect artifacts correctly (#5785)
3da2658eb : Arm backend: Change run.sh to let cmake decide number of parallel jobs (#5813)
b78ec1bbe : Arm backend: Track target memory usage (#5788)
8ac2608f4 : Add support for Arm tests on MacOS (#5786)
7559ddd86 : Fix tensor views with tensors that use deferred memory allocation (#5831)
e2e2129bf : Implement `repeat_interleave` (#5830)
79b78966c : New URL for the Profiling page (#5819)
835bd34d4 : New URL for the Inspector page (#5810)
6f179472b : New URL for the ETDump page (#5809)
011d42bca : New URL for the Model Debugging page (#5817)
9ff3351f6 : add more options for loading checkpoints (#5823)
793f79b64 : Update install_requirements.py: Bumping PT pin to dev20240925 (#5824)
9263d889f : Use new threadpool namespace for all of //executorch/... (#5826)
436afcedd : Migrate extension/threadpool to new namespace (#5825)
bc4fd8a1d : Update the job name to reflect the new generic benchmark app (#5829)
1abeccea8 : Refactor the test suite.
5acd5c9d7 : Less Noisy Pybindings (#5828)
dc2498382 : New URL for the Delegates Debugging documentation page (#5769)
d80ebd572 : New URL for the Bundled IO documentation page (#5767)
450aecebe : New URL for the ETRecord documentation page (#5764)
68c33c7f9 : Fix llama demo app internal build (#5820)
3aa6b14de : op_clamp: add downcasting tests & fix (#5798)
9dcd71fe6 : Disable animation in hot path of iOS example again (#5821)
c10c96a3a : Add fake mode in verifier (#5805)
152e22d58 : c10::optional -> std::optional
0d6a098eb : Support bf16 for binary logical ops (#5706)
c48d867dd : update spinquant quantization options to be general purposed pre-quantization (#5797)
d708b94a2 : Allow Inspector to accept ETDump bytes directly (#5657)
5877c2abc : Corstone-320 download and tests (#5787)
393553cee : Remove preprocess xplat build (#5801)
43d7662c9 : Add custom_sdpa and use that instead of sdpa_with_kv_cache (#5669)
fdacfaa78 : Update EXECUTORCH_LIBRARY macro (#5668)
29364c428 : Refactoring sdpa (#5667)
bca3ad6a5 : Update SDPA op to use quantized kv cache (#5666)
5f324ce51 : Add update_quantized_cache op (#5527)
d459011ab : Refactor custom SDPA op to separate kv cache update from the custom sdpa op (#5665)
055bed53a : Add quantized kv cache to llama (#5664)
8ddb8469b : Wrap server generated yaml files inside et_operator_library (#5778)
6923ae5fb : Use TensorPtr in aten_bridge (#5789)
aced6d771 : add microkernels-prod to backend_xnnpack in build_apple_frameworks (#5795)
1a120d9e0 : Fix OOB error for ConversationHistory (#5775)
fbcd3320e : Fix android build (#5796)
d62c7ad21 : Package headers into pip wheel (#5734)
2d0237cb6 : set env in buck file (#5736)
7183f1985 : Add more instructions to Xcode setup (#5757)
ee3284899 : Support bf16 for isinf/isnan (#5690)
6a2758990 : s/unary_ufunc_realhb_to_floath/unary_ufunc_realhbbf16_to_floathbf16/ (#5678)
085b8176b : Make optimized op_exp support bf16 (#5677)
07bcd7f15 : UnaryUfuncRealHBToFloatHTest: test Half more widely (#5676)
b63c68eb0 : migrate all unary_ufunc_realhb_to_floath op tests to general infra (2/2) (#5675)
829ba3e1d : generalize tests for unary_ufunc_realhb_to_floath ops (1/2) (#5674)
04669a19b : Fix load time. (#5781)
8079eb700 : Kleidi Integration (#5162)
660ef7761 : Add warmup for Llama (#5756)
26dc9fdf1 : Ensure iOS benchmark app build is tested in CI (#5609)
a91eb8a9d : Generalize softmax for packed dim vs non packed dim (#5755)
41a765dd9 : Fix unqualified uses of executorch functions (#5772)
42bed3ca1 : Polish CoreML Llama Doc (#5774)
b30212287 : Add model files support for android tokenizer (#5768)
b60fa7125 : buckify eval_llama (#5437)
f0662bba1 : Fix OOB error for ConversationHistory (#5770)
e186bc9e5 : Corstone-320 support (#5628)
418c4c3e0 : Polish CoreML Llama Doc (#5745)
9c38cf731 : Add API to read value of `SymInt` and`ParamsBuffer` (#5754)
a5a76f7b4 : Introduce `virtual_clone` API to support view of view use cases + fix synchronization hazard with view tensors (#5753)
6ff52cc72 : Move etensor types to their new namespace (#5569)
56059542d : Fix unqualified uses of executorch functions (#5709)
51e79a0db : add command_alias in runtime_wrapper (#5737)
944bd6746 : Add animated gif for 3B SpinQuant (#5763)
972071572 : throw instead of segfault with invalid args in pybindings (#5726)
e2f1aca28 : Fix missing cstdint in vec_base.h (#5747)
969839387 : Qualcomm AI Engine Direct - add tutorial for op builder & quantizer (#5749)
3553ade72 : Update README for MediaTek backend (#5750)
48d586cfe : indentation fix (#5738)
b2f20cba7 : removing autodeps suppression tags (#5593)
2dd88fb0f : Support dim order in Arm backend (#5576)
1c6dbb6f4 : Qualcomm AI Engine Direct - support Conv2dTranspose (#5461)
e19677cf4 : Arm backend: Add squeeze-op (#5681)
68548e5f0 : Fix input setting and invocation. (#5752)
0d96f7519 : Improve ArmTester logging (#5629)
4bf7e2ff0 : Input name bugfix in runner_utils (#5071)
a9ad3c66e : Arm backend: Improve memory config and documentation in the runtime (#5580)
06ce22618 : Update aot_arm_compiler to use export_for_training (#5581)
c222a4410 : Arm backend: Add rsqrt lowering (#5577)
09f13c085 : Arm backend: Updated depthwise/conv2d test lists for Ethos U85 (#5625)
b71926f93 : Arm backend: Add options when using run.sh (#5627)
77e7ad109 : Add access to edge_program in ArmPassManager (#5542)
905b88c75 : Rename executorch_no_prim_ops to executorch_core (#5740)
fe0e676f5 : Add quantize option to the coreml script (#5710)
8093d518b : Update packages commit. (#5742)
e31e0b638 : Merge TensorImplPtr into TensorPtr.
8b5cf96ea : Fix `VK_NULL_HANDLE` comparison style (#5733)
b4a6148c9 : Migrate from capture_pre_autograd_graph to torch.export.export_for_training (#5730)
dacd0a2ff : c10::optional -> std::optional
2fff1713a : Enable AHB extension for Android builds (#5729)
55cc4302a : Always use two XNNPACK Partitioners (#5573)
9eb9bf8ab : Fix a typo in the memory planning doc (#5732)
2d0382ef9 : Document update (#5707)
bdaad8e68 : Add dtype arg to the script for exporting HuggingFace models (#5716)
c1c508031 : Fix a typo in the memory planning doc (#5723)
3f04c3c3d : Update README for MediaTek backend (#5386)
87dc49db6 : Qualcomm AI Engine Direct - add tutorial for op builder & quantizer (#5575)
4ee0437a7 : Remove TensorPtr::get() (#5687)
53936dc9c : Store the Tensor inline in TensorPtr (#5684)
e6237f715 : Add Buck config to disable XNNPACK Workspace Sharing (#5696)
fcdfe06d8 : Add model files support for android tokenizer (#5727)
c5ced7170 : Remove extract_constant_segment (#5680)
43b03f634 : migrate cadence cpu executor to use the existing ET sample (#5644)
1fedffc35 : Make export file compatible with buck (#5703)
c3460e570 : Use aliasing constructor instead of a custom deleter in TensorImplPtr. (#5725)
a5c9dee93 : Fix non-reentrant threadpool (#5714)
e18bf6f39 : New doc for the memory planning inspection util function (#5724)
53cdf6387 : Update Android XNNPack demo app doc for Llama 3.2 and Llama Guard 3 (#5697)
f2bed6878 : Readme docs update (#5713)
bd03d6bec : Improve llama README with SpinQuant
8f3a83b0e : Fix rope source transformation error message (#5630)
bdaeede40 : Remove unused includes from operators (#5538)
fc6b8ead0 : Add BUCK file to coreml export script (#5702)
7127ea9cf : New doc for the memory planning inspection util function (#5430)
d4afbe77e : Show loading model UI during model switch (#5691)
61c421f26 : Bump nightly torch (#5660)
a1ed265d3 : Allow softmax and log_softmax to operate on any dimension (#5694)
25a1be827 : [Android] Add a workflow dispatch for uploading release artifact (#5708)
c609f11bb : Add Llama 3.2 and subsections to Example dir README (#5701)
cd1fa99e2 : Cleanup xnnpack_README.md (#5700)
d6cf749d3 : Add llama 3.2 model type on Android (#5699)
570572b00 : Demo app android xnnpack quick-fix for the bookmark link (#5698)
de718e620 : Add documentation. (#5611)
e172c5ce3 : add performance numbers for 1B/3B (#5704)
7e9eaa887 : Readme docs update (#5695)
9d224a5f3 : Fix dequantize per channel to handle double scale type (#5524)
985f92db9 : Some updates to kv cache (#5663)
953ab5146 : Small improvements for module usage. (#5705)
ff6607e46 : Document update (#5692)
13869ecb2 : Add a workflow dispatch for uploading release artifact (#5606)
1cf3da322 : Fix linker flags. (#5689)
4dcee852e : Add MethodMeta object for python visibility (#5571)
3bedd8b31 : Add buck targets for QNN AOT export (#5476)
e57bbbb12 : Pin Xcode projects package deps on main to a particular commit instead of a branch. (#5673)
6259a2920 : Improve README page
7c647cd1e : add instruction for quantizing with SpinQuant (#5672)
dacbba7e1 : Add llama 3.2 model type on Android (#5646)
dd8d5beec : Cleanup xnnpack_README.md (#5662)
f3fa9fa7d : Add animated gif for Llama3.2 1B bf16 (#5671)
ba0958a67 : Add Llama 3.2 and subsections to Example dir README (#5661)
984986ef4 : Add some Llava related stuff (#5659)
88c2407bf : Fix typo
52d521865 : update export SpinQuant checkpoint to align with the new format (#5645)
7ab977e35 : build cadence cpu flow as a stand-alone cmake dependency (#5555)
52f9e033f : build cadence hifi flow as a stand-alone cmake dependency (#5551)
b9dadee81 : Add llama3.2 1B and 3B instructions (#5647)
cd4672101 : Linter fix (#5643)
6e9efa141 : Demo app android xnnpack quick-fix for the bookmark link (#5642)
a91444672 : Improve Llama page (#5639)
9b6d4b4a7 : Update iOS XNNPack demo app docs for Llama 3.2 (#5641)
82a505bb4 : Update Android XNNPack demo app doc for Llama 3.2 and Llama Guard 3 (#5640)
d2ba23837 : Revert D62874650: Arm backend: Track target memory usage
e425dbbaa : Arm backend: Track target memory usage (#5341)
d516309d8 : Add possibility to collect all TOSA tests to a specified path (#5028)
341545c0c : add option to quantize output layer perchannel for SpinQuant (#5614)
6f9cd8c84 : Add Int8DynActInt8WeightLinear module (#5605)
85e745897 : Update NDK version to r26d in docs (#5612)
5c56f960d : Fix typos in docs. (#5613)
99ee547e0 : Add documentation. (#5562)
8d16c5247 : Adding support to demo prompt classification with Llama Guard (#5595)
7f8ea4416 : [llava] Add tiktoken dep (#5608)
3e79ea413 : Transform embedding from SpinQuant checkpoint (#5552)
72245c331 : Add tiktoken dep (#5586)
ce7402498 : update copy_channel_offset to axis mapping (#5587)
b206b97f8 : Introduce convenience constexpr for memory access types (#5592)
f1c5fc6e9 : Use `MemoryAccessFlags` instead of `MemoryAccessType` when binding (#5591)
206043478 : Implement slice as a view (#5590)
3f447d71e : Add transpose op as view operator (#5589)
90dcea548 : Qualcomm AI Engine Direct - Fix aihub path failing due to memory planning pass (#5482)
8660faf39 : Add New Ethos-U85 support to Arm aot_arm_compiler (#5345)
df72b8cd9 : Use TensorMeta to check if inputs and outputs are memory planned (#5565)
f4728f40b : Add all relevant testcases for Arm Ethos-U85 (#5346)
9757eda3b : Fix duplicating latest prompt (#5568)
21ca2a6ee : Remove TIP Format and Replaced with Subheader in README (#5567)
224a9e3c3 : Fix image sizes in README.md (#5563)
85dc41ca8 : Remove `torch::` references from devtools/example_runner (#5559)
65de731ca : Remove stray uses of `torch::executor::` from examples/... (#5558)
14b3ee397 : Remove `torch::` namespace reference from LLaMMARunner.mm (#5557)
0267b77fc : Move examples/mediatek out from under the torch namespace (#5556)
5a984ccae : Generate benchmarks automatically (#5561)
ca0e48cbc : Refactor codegen components to prepare for benchmark generation (#5560)
61cb5b0a5 : Adding support to demo prompt classification with Llama Guard (#5553)
286799c9c : Include optimized kernels in pybindings' portable_lib if building them (#5520)
28c2ab623 : add BFloat16 to aten_bridge (#5519)
badd76eb2 : Support bfloat16 in op_index_put (#5500)
0a72cb063 : Support bfloat16 in op_index (#5499)
2eae7a963 : Move QMat2 to buffer storage and scales_and_zeros to Channels Packed (#5515)
8be3ce541 : Fix image sizes in README.md (#5550)
b611d59ba : add CI job for phi-3-mini (#5532)
cab6335bf : Allow using custom SDPA for non-float32 dtypes in llama demo (#5548)
f68a1387c : Remove `torch::` references from devtools/example_runner (#5495)
abe9c3633 : Remove stray uses of `torch::executor::` from examples/... (#5512)
3b63839a7 : Fix duplicating latest prompt (#5546)
b361f91d4 : Remove `torch::` namespace reference from LLaMMARunner.mm (#5516)
182f13829 : Move examples/mediatek out from under the torch namespace (#5478)
0ec003bc0 : Add "px" unit to image sizes in readme (#5540)
e6f7bb161 : Fix broken images in docs (#5545)
dbb00e55e : Improve iOS demo app readme (#5488)
e12b37e83 : Arm backend: Track target flash size metrics (#5342)
dabf08243 : Fix Xcode project. (#5541)
3bb4850ac : Fix tensor cloning when data is null. (#5537)
37dbd5dbb : Fix optimized kernels build. (#5536)
d770fe8c4 : Fix phi-3-mini build (#5522)
caed67195 : Remove `torch::` references from arm_executor_runner (#5521)
b2517d66b : Remove TIP Format and Replaced with Subheader in README (#5517)
55d6b0d55 : Fix Xcode project. (#5539)
45210bbb1 : Fix tensor cloning when data is null. (#5535)
3ec416126 : Fix optimized kernels build. (#5534)
d5fdbd441 : update conv1d to new layout specifier gen, axis mapping, and use non-singlethreaded local workgroup (#5504)
0eee42a6b : Don't require -march compiler flags to use bfdot (#5444)
c50f9fe5e : update copy_offset to new layout specifier gen & axis mapping (#5505)
613cfd6c1 : Add CMake instructions to apple-runtime.md (#5533)
0866c52e6 : Update GH link in docs (#5496)
7de3f8184 : Fix broken images in docs (#5514)
ffbe90ba5 : Add a note about cleaning the build system after a sync (#5485)
73fbb7447 : Migrate examples/apple/... away from the torch:: namespace (#5486)
dc4319314 : Move examples/qualcomm out from under the torch namespace (#5487)
ab2266dee : Add LLM subpages to navi (#5497)
d0a6e09c2 : Fix javadoc for LlamaModule.java (#5507)
1cabf8d2a : Fix missing newline typo on ios docs (#5510)
8618607e1 : Fix phi-3-mini build (#5513)
01dcebdc7 : Remove `torch::` references from arm_executor_runner (#5506)
b5741a65c : Fix javadoc for LlamaModule.java (#5502)
a9f3f810d : Android custom lib (#5501)
3512148ab : introduce {load|write}_texel_lpos helper for fetching/writing logical pos (#5494)
2df0cc1cd : Fix missing newline typo on ios docs (#5491)
a556a2d73 : Support SpinQuant to run on ET (#5435)
28c9a1dfe : Fix underflow error in `calculate_dim_order()` (#5498)
16673f964 : Update GH link in docs (#5493)
af098c31b : Enable Workspace sharing by default (#5336)
90d519179 : vTensor cleanup 7/N - Blanket replacement of `packed_dim_whcn_idx` with `packed_dim` (#5484)
7c6d58a9a : vTensor cleanup 6/N - Do not use `gpu_memory_layout` as a source of truth, use `packed_dim_whcn_idx` directly (#5479)
47f4f07de : Forward fix pull.yml (#5489)
73244a921 : Add LLM subpages to navi (#5475)
93ea1f1f9 : Add definition to `etrecord_path` to the devtools tutorial (#5481)
d6186d94f : Create qualcomm_README.md (#5480)
c715c3d52 : Update Llava README (#5477)
8ef6c79d5 : Move examples/qualcomm out from under the torch namespace (#5400)
c80969208 : Update ExecuTorchDemo project.pbxproj (#5469)
b89c52cdc : Add definition to `etrecord_path` to the devtools tutorial (#5458)
e148c1d36 : vTensor cleanup 5/N - clean up `indexing_utils.h` and clarify function names (#5443)
2afcd96aa : apply output layer pruning (#5426)
0a9bbaab2 : Add DeviceInfo in iOS benchmark run (#5410)
5a3eceba6 : added detailed instructions for xtensa cmake and devserver setup (#5398)
8e281884a : Add test_llama bf16 portable config to CI (#5472)
fdc7e45bc : Add llava animated gif to llava readme
ad95e4669 : Update a ReadMe (#5473)
67752ee9f : Add llama animated gif to llama readme (#5474)
61e5d4c52 : Improve iOS demo app readme (#5453)
f5f54b883 : vTensor cleanup 4/N - consolidate texture positions and extents to be logical positions and extents (#5442)
958afe13b : vTensor cleanup 3/N - Introduce conversion constructors for `vec` types (#5423)
ebff33c0e : aten.hardswish fix w coordinate (#5418)
0704ed3dd : [executorch][docs] Update stories cmd to use kv cache (#5466)
1a5c8bfcd : Make llama and llava more prominent in top-level readmes (#5471)
30f59d20d : Update versions in apple.yml (#5467)
6ed88733b : Add training readme
d2a38ccc8 : Update stories cmd to use kv cache (#5460)
0648a8a36 : Fix various using namespace issues in executorch (#5464)
26c736ee1 : Training demo (#5445)
53c1a5f50 : Batch-aware torch.ops.llama.sdpa_with_kv_cache (#4822)
1e4c316ce : Add more data points from benchmarking infra (#5432)
759e0c8c7 : Add convenience load method for forward. (#5454)
8bed35871 : Fix Android ExecuTorchDemo docs (#5439) (#5456)
9003900e9 : [doc][devtools][rename] New URL for developer tools overview page (#5457)
444480b60 : Buckify Cadence HiFi4 Operators. (#5154)
861a7bf43 : Fix Android ExecuTorchDemo docs (#5439)
41c21a40b : Integrate axis mapping in embedding op (#5440)
3e1a5788c : Add an error message to help debug issue when accessing secrets in CI (#5441)
ab6cbd949 : Update executorch pin (#5438)
8a0b48e76 : aten.hardsigmoid.default in unary_ops (#5396)
8a7cb9c50 : [RELEASE] Update references to release/0.4 (#5448)
72493cb37 : Upgrade the coremltools version. (#5429)
80c437fb3 : Update readme to include OSS references (#5449)
b14dea8c9 : Add convenience load method for forward. (#5446)
b7dfd8abc : Fix Android x86_64 build (#5434)
2e1043b02 : Update readme to include OSS references (#5433)
8f7d9d521 : Allow mutating input tensor (#4850)
aebc2e36c : Update llava readme docs to reference demo apps (#5427)
06c0fa3e9 : Unbreak Intel Apple Buck builds (#5431)
0658dce9c : Print overload name in print_ops_info (#5404)
fe53daf7f : Check for contiguous dim order in op_fast_hadamard_transform (#5419)
cb200d720 : Cast the vector from deduced type to desired type if needed. (#5416)
442b45ad2 : Remove buck2 from llama mac test (#5414)
3e2cfc7b3 : Integrate axis mapping into binary op (#5408)
b5f7b855e : Move type arg to the end to match Aten constructors. (#5391)
618466ed5 : Change memory planning API to accept full algorithm as argument as opposed to string name (#4727)
acfe0ba15 : Custom op for fast hadamard transform kernel (#5291)
19f5ed873 : Refactor fast_hadamard_transform_test shared implementation functions (#5388)
a6f838931 : Add android-arm64 execution platform in shim & plumb it to extract_sources.py (#5369)
92adb9418 : vTensor cleanup 2/N - remove `discard_and_reallocate` API (#5422)
aaf73d857 : vTensor cleanup 1/N - swap order of `vTensor` and `vTensorStorage` in cpp file (#5421)
bbe0ebd46 : [ExecuTorch] Reapply D62466496: Build optimized kernels with bf16 support and gate usage at runtime (#5420)
1645af0cc : Upgrade the coremltools version. (#5425)
a5d67891c : Add a README.md for benchmarking infra (#5413)
ec9d86ac6 : [doc][devtools][rename] New URL for developer tools tutorial (#5401)
22dfe5885 : [executorch] Update phi-3-mini lora export code and readme (#5393)
6d3128f61 : Add extra checks for the provided dim order and strides. (#5390)
5fb6db67c : [RELEASE] Update pytorch pin in ci to 2.5 (#5406)
3befc8aa4 : Update version.txt post 0.4 branch cut (#5411)
16daeb4a1 : Increase memory allocated for inputs in case semi hosting is used. (#5309)
9c5994dfe : Arm Backend: Generate input if not supplied in the non-semihosting case (#5308)
e8a557c6e : Cast the vector from deduced type to desired type if needed. (#5409)
c605bae0b : pad_max_tiles (#5271)
c8a7762af : Add SpinQuant into README (#5412)
c5d16619e : Add a README.md for benchmarking infra (#5403)
5c3be4a87 : Migrate examples/apple/... away from the torch:: namespace (#5405)
e36a0273c : [RELEASE] Remove nightly pin from install_requirements.py (#5402)
9c068abc3 : Create qualcomm_README.md (#5394)
7c661d727 : Add XOR Model Example (#5397)
8a8e876d9 : Register preprocess in pytorch (#5350)
f7954f65e : Fix batch dimension adjustment in shader indexing utils (#5399)
30ab618a3 : Add a note about cleaning the build system after a sync (#5395)
fa777716b : New URL for developer tools overview page (#5385)
2460e1595 : New URL for developer tools tutorial (#5384)
a9ffb3a60 : Increase timeout threshold for on-device benchmarking (#5371)
07c77bef8 : Remove unnecessary member structs from `Allocation` struct to reduce `vTensor` size (#5392)
a166a2530 : Export aoti for preprocess (#5354)
2b3cc276a : Reapply D62466496: Build optimized kernels with bf16 support and gate usage at runtime (#5376)
26375cce7 : Introduce `virtual_transpose()` to `vTensor` for no copy transposition (#5353)
c25255358 : Move type arg to the end to match Aten constructors. (#5379)
0a501ebad : Add extra checks for the provided dim order and strides. (#5377)
ef316082c : Add sdpa arg comments (#5323)
08f16d0ce : Update phi-3-mini lora export code and readme (#5327)
eecf74fb2 : Qualcomm AI Engine Direct - Intermediate Tensor Dump (#5310)
eb0cdf731 : Upload exported models and Android apps directly to S3 (#5375)
58700faa2 : Update iOS Llama demo app readme docs (#5359)
e0c931255 : Revert D62466496: Multisect successfully blamed "D62466496: [ExecuTorch] Build optimized kernels with bf16 support and gate usage at runtime" for one build failure
768f5c9a4 : Simplify setting output. (#5363)
74a56e492 : Update Android ExecuTorch Llama demo app readme docs (#5364)
262dfc002 : Move examples/models/... out of the `torch` namespace (#5318)
08ecd735a : Llava+llama demo (#5372)
248ba5256 : Build optimized kernels with bf16 support and gate usage at runtime (#5251)
69ae1e154 : Use bfdot if compiled with ARM_FEATURE_BF16 (#5249)
8d5ef1d13 : Add nongenai models to apple perf (#5370)
a7618c508 : Add API to set inputs independently from execution. (#5356)
68b75cd6a : Refine the tests to compare the result with the error code. (#5358)
0aa75e6ef : Fix EValue construction from a smart pointer. (#5357)
69d33a81a : minibench README (#5367)
67be84b57 : Script to export 🤗 models (#4723)
2001b3c92 : Revert D62621640: Increase timeout threshold for on-device benchmarking
f50787130 : No matrix needed in benchmarking apk build (#5339)
3450eccd3 : Fix API warning for older SDKs #2 (#5365)
9b5ba1f7e : Increase timeout threshold for on-device benchmarking (#5324)
bfce743ec : Try to fix test_model.sh (#5361)
62024d845 : Use ms for number report (#5362)
25168b708 : Enable tensor aliases with texture storage (#5347)
31e652db6 : Integrate axis mapping into naive matrix multiplication shaders (#5277)
71602a0e5 : Allow Partitioner to Force Dynamic Linear Computation (#5338)
034e09808 : Revert D62617066 (#5351)
0d1644fcd : Define generic Android benchmark metric structure (#5332)
6d1a57385 : Use parallel_for in bfloat16 gemm_transa_ kernel (#5248)
1bb5b2006 : Mark all pybindings.portable_lib names as @experimental (#5329)
bba4040ee : Compile and deploy QNN models to S24 (#5137)
933685b83 : mark sgd as experimental (#5316)
a0a249e46 : Fix tests. (#5349)
37f77eda5 : Refactor kernel registration tutorials (#5297)
7dbe15e26 : Simplify setting output. (#5334)
0d0b14abe : Clean up LlamaDemo test on AWS (#5340)
07e15a90e : Precompute multiplicative inverse when possible in op_div (#5209)
984501954 : Let Module tests use Tensor extension and aten mode. (#5298)
c080c486d : Polish UI (#5319)
9301ebb7a : Add experimental API for preserving ops from decomposition (#5236)
ca2ac5494 : Enable optimized build for portable configuration of test-models (#5317)
fe53d41a9 : Qualcomm AI Engine Direct - Add the tutorial to deploy llama3 8B Instruct (#5335)
fcbbef482 : Update minibench to support qnn (#5325)
bdd6a8e79 : Fix API warning for older SDKs (#5337)
aa1bcc322 : Update native library in AndroidManifest (#5331)
c20ed5e20 : Fix env in android perf so aar build (#5330)
9256b4ab2 : Fix so name in setup-with-qnn.sh (#5333)
3264a7b54 : Add lint suppressions for fast_hadamard_transform_special_unstrided_cpu.h (#5328)
b9ae0ec1a : Rename cmake option "EXECUTORCH_BUILD_SDK" to "EXECUTORCH_BUILD_DEVTOOLS" (#5272)
53c49fb58 : update doc for phi-3-mini (#5320)
523b41ebb : Update executorch documentation to use export_for_training (#5219)
c03219432 : Fix Hard Fault in arm_executor_runner (#5302)
4b3f1c5cd : Set EXECUTORCH_BUILD_QNN=ON for benchmarking app build (#5315)
4218d4581 : FFHT enhancements to fast hadamard transform kernels (#5290)
eedc38af4 : FFHT: ARM NEON port (#5289)
d3fb502a8 : FFHT: just expect benchmark to be installed (#5288)
ce1f8bd60 : Remove unneeded FFHT files (#5287)
2d4b9ed44 : Clean commit of FFHT dependency (#5286)
ab7553158 : add quantized fast_hadamard_transform_28N (#5285)
327a5b6f9 : Quantized fast hadamard transform (#5284)
b904833bf : Add fast_hadamard_transform and fast_hadamard_transform_28N kernels (#5283)
1d46d720a : Android Java introduce Experimental API annotation (#5303)
905df29af : Rename exec_aten:: to executorch::aten:: (#5296)
38892acd6 : Clean up non-exec_aten references to tensor types (#5254)
bcd156bfe : Enable Ethos-U85 support in Vela (#5002)
272d4d80f : int(max_seq_len) (#5269)
1793c4a46 : Preserve SDPA for CoreML (#5258)
7998b7f37 : Fix issue with debug TOSA dumps being overwritten (#5029)
8874de2d0 : Benchmark app update (#5240)
8888c0d12 : Adding per-method tracers to the module utility. Changing set_output_data_ptr to take in a method name. (#5279)
4053a18c7 : Remove dim order check in unsqueeze op to unblock supernova model release (#5299)
1d373322e : Add APIs to make from an existing Tensor and clone by copying and owning the data. (#5304)
40e6e52c5 : Expose shape dynamism. (#5306)
7ea5a1dba : Fix flaky tests. (#5305)
4c61317c9 : Failing DW test on executorch (#4929)
623b7b67c : Use Apple perf workflow to validate the test spec (#5293)
3ad2f1604 : Rename the "executorch/examples/sdk" folder to "executorch/examples/devtools" (#5207)
08c8c6eb0 : Add test for stateful model and fix output backings issue (#5294)
c5c69a9fc : Qualcomm AI Engine Direct - build with rpath for ease of use (#5268)
5022deb07 : Migrate RuntimeContext users to KernelRuntimeContext (#5270)
88508c5e2 : Make ForcedUnroll usage in bf16 BlasKernel actually work for -Oz builds (#5247)
17cf78262 : Add trailing dims memoization to improve performance of permute_copy_out (#5246)
5e7efe607 : port bf16 dot product kernel from ATen CPUBlas (#5245)
12a25c61d : Preserve undelegated Linear ops in Llama demo export (#5244)
81b543850 : Reskin the demo app with new UI assets and colors (#5282)
665fa0355 : Fix android setup qnn sh (#5275)
a4be79fe7 : Switch Apple benchmark workflow to use the generic ET benchmark iOS app (#5212)
c5c121ba9 : Move test spec file (#5218)
f9da6758a : Tuning LLM from PTE (#5233)
7e3ec96c7 : Remove references to exec_aten::RuntimeContext (#5257)
de3057283 : Just print Android instrument log when the test passes (#5280)
7c76e0302 : Switch Optimizer to std::map (#5230)
6328d41eb : Rename "SDK" -> "Developer Tools" in documentations (OSS files) (#5238)
338ef2609 : Remove explicit dereferencing for TensorPtr converted implicitly to EValue. (#5278)
d6897222e : Make `compare_results()` import path public (#5225)
6ac13656a : Fix Android LlamaDemo setup.sh (#5274)
d80f78fc7 : Read SpinQuant checkpoints (#5259)
41b463e29 : Do not load constant_segment if only the placeholder exists (#5229)
92d0559bc : Add missing Pyre mode headers
0af6c126b : Use ones() to create tensors. (#5273)
e462e5a3f : Bug fix partitioner (#5239)
75a56a205 : Add helper function to create random tensors. (#5266)
d6b800bb6 : Add helper function to create empty, full, ones and zeros tensors. (#5261)
4da3c5d0b : Add CoreML Quantize (#5228)
3171ede40 : Add scalar tensor tests. (#5260)
d73a653c0 : Add optimized op_linear (#5243)
68397af39 : Optimized op_mm using CPUBlas gemm (#5242)
af8080497 : Debug event populates event name (#5142)
7e374d762 : Add model execution scripts and runner (#5217)
d7a7ec6e1 : Updated the workflow to upload models to S3 (#5232)
41bc1ce4c : spinquant in eager mode (#5125)
69aed24f0 : link whole quantized_ops_lib (#5253)
7942d2cf3 : Allow core aten op exception list (#5237)
d423131b8 : Android app UI/flow improvements (#5241)
e4d72ce60 : Update setup.sh for LlamaDemo (#5235)
ca889fb59 : Minibench use model_dir instead (#5250)
cba5bee4b : fbshipit-source-id: f63634ba171da01328849d84552b125b829403e8
cac2c05d8 : [ET-VK] Integrate axis mapping into optimized matrix multiplication shaders + massive code cleanup
f07e4d5cf : Update setup-with-qnn.sh with runner util flag (#5210)
ab6d91c5c : Fix internal executorch_llama_jni
4cce62007 : Minor fix: Create root dir when it doesn't exist. (#5075)
e245590d8 : App side change
ced40f4fa : Fix models in benchinfra (#5226)
2b50c76a3 : Use dynamic bound by default.
4ce0f9d3e : Introduce PlatformMemoryAllocator
b54206d78 : Update the minimum C++ version to C++17
a4d67e2d3 : Android: Leverage prefillPrompt and prefillImage on Llava
d38ca81db : Android refactor cmake build
c76b22fc9 : Qualcomm AI Engine Direct - Fixed the order of the transforms for llama (#5221)
02304d7c0 : Update bundled_program to use new namespace
db342399a : [LLava] Fix stats for C++ runner
30acae55f : Switch over backend tests to export_for_training
43e2f2d50 : Qualcomm AI Engine Direct - support skip quantization (#5070)
e826de3e3 : Add Half/BFloat16 tests for op_mul
549f14b55 : Restore constant segment
657789e97 : Qualcomm AI Engine Direct - Apply spin quant R1 and R2 (#5175)
126abb5f6 : Update the API of registering fake kernels to new standard (#5084)
1eeded16a : Let the app check "aatp/data" subdir for AWS.
083b9e65b : [ET-VK] Fix gpuinfo CI
370f30416 : Add slice_scatter test: large end value
63e794aa6 : Add pass to convert special case of mean.dim to averagepool2d
67ae762f6 : Qualcomm AI Engine Direct - Add the argument to specify soc model (#5211)
f471556c0 : Partition Mutable Buffer as Core ML State (#5165)
c5a385e3b : Update schema to include infinity for double values
f4126309f : Qualcomm AI Engine Direct - Uplevel QNN version for ci test (#5174)
7650667b2 : Add a default delegate time scale converter
59d9bad82 : Use c++17 for size test
542ecb59a : Add Echo parameter to multimodal runner (llava) and jni layer (#5181)
6ce9f5216 : t to z start ops | add dim order sanity check
28beeff0d : Clean up devtools/etdump
b23ee01ba : Register LLM prefill native method in JNI
d2014e3a5 : Add a target rule for ops_registrations (#5083)
85410e401 : Qualcomm AI Engine Direct - Optimization and fix mutable buffer issue (#5072)
eca9ed501 : q to s start ops | add dim order sanity check
6b1e3287a : [ExecuTorch] Support BFloat16 in CPUBlas gemm
b69ae0cd2 : Hide and simplify operator registry internals
cd9d5361f : Make convert to linear an export pass
b52d4b6f8 : Enable Llama3 Multi-turn conversation
647bfd4ee : Add an overload to skip dtype and sizes.
2dee34e5d : Refactor namespace usage in module tests.
0f4caa10a : [flamingo] Update preproc imports (#5160)
99fbca390 : Provide more options to create an owning tensor.
fd6a59085 : [ExecuTorch] Handle rank 0 tensors correctly in optimized add/sub/div/mul
286353612 : [ExecuTorch] disable text animation in iOS Llama demo app
7c682efdb : [Android script] Add QNN related lib to AAR
237744fac : Give more instructions on java format fix
cb944b702 : Expose the compute number of elements helper function.
cb7119328 : Make TensorImplPtr custom deleter copyable.
258cf71a8 : Reland add proper calibration for pt2e flow
13da62b00 : [CoreML] Add support for running stateful model. (#5143)
3268da2ad : [eval_llama] Add option to save checkpoint after eager transforms.
32d83b0db : [Android Java] Get rid of forwardOnes
8ff79efbd : print tensor storage bytes in print_readable value list section
ab4810cc9 : Use metal instance for Android emulator for KVM support
26e72e4c3 : Add coreml models to CI and benchmark workflow
1511fc1d7 : [ExecuTorch] Allow setting dtype to bf16 in export_llama
1d420c95f : [ExecuTorch] support BF16 in LLM runner & sampler
03a016863 : [ExecuTorch] support BF16 in op_add
234f94894 : [ExecuTorch] support BF16 in op_where
b2ca27093 : [ExecuTorch] support BF16 in op_scalar_tensor
9fed53b57 : [ExecuTorch] support BF16 in op_slice_scatter
3d6edb06c : [ExecuTorch] support BF16 in op_copy
c02546c8c : [ExecuTorch] support BF16 in op_mm
e33c25cbb : Add logic to not print stop token on Android
1cc850305 : Allow qnn to use the IR from torch.export.export
8afdc48e8 : Ensure 0-index in constant buffer is carried through
617f9d8af : Disable fail_fast for benchmark jobs
5d4d821a0 : App skeleton.
fb86e610e : Add Echo parameter to llama runner, jni+java layer, and demo app
f55ce1f23 : Llava prefill Java API
27632333c : Revert "Add proper pt2e calibration" (#5136)
17103dcd9 : [Llama] Dump RSS info for Linux
a25db2f95 : [WIP][Llava] Add support to cross compile llava_runner for Android
2ce4ad121 : Remove extract_constant_segment from config
c83fd2e2f : Build frameworks with EXECUTORCH_XNNPACK_SHARED_WORKSPACE flag.
7122d3108 : Add proper pt2e calibration (#5095)
20d93fb16 : Trigger android.yml with tag ciflow/android/*
a48f91653 : Revert "Implement dumping operator distribution for TOSA graph" (#5131)
41ec7fa1a : [ET-VK] Integrate axis mapping into staging <-> image transfer shaders
973960910 : [llava] Expose prefill image and prompt APIs
030fc3f81 : [LLAVA] Enable 2nd XNNPACK Partition pass for the text model
40720f065 : Use Android llm benchmark runner
0458c2e4d : Retire the ManagedTensor.
cab29eaf6 : [ET-VK] Introduce axis mapping for no-copy permute of texture-backed tensors
5f4a81154 : [ET-VK] Fix negative dim in `normalize_to_dim_index`
3c582377a : Update module/test .pte file
89829b5fa : [Build] Use C++17 Constructor for tiktoken.cpp when C++20 is unavailable (#5025)
8fb1defe0 : Revert default to 8650 for llama
10288a204 : [ExecuTorch] support BF16 in op_mul
c9ac212b1 : [ExecuTorch] support BF16 in op_to_copy
b8a1899e8 : [ExecuTorch] Implement BFloat16 and hook it up to scalar_type_util
6ccb290c0 : Switch to the new tensor API internally.
cea5abbcd : m to p start ops | add dim order sanity check
cdb54389d : Migrate executorch examples to use export_for_training
4e2cd6ce8 : [executorch] Update Llava install_requirements.sh (#4886)
2ba2f4787 : Allow user to define ANDROID_ABIS in build_android_llm_demo.sh
91089db77 : Implement dumping operator distribution for TOSA graph
866e9b84d : Switch XNNPack tests to use export_for_training
e8549670b : Build QNN in android_llm_demo.sh and perf
ee752f0be : Update Android lint path
83d92ff1b : Fix typo in runner interface.
3d4904bb4 : Resync Android BUCK part 2
b8a2cbd5a : Add LLaVa runner.
9ae7c0da2 : h to l start ops | add dim order sanity check
d23548be7 : [ET-VK][BE][ez] Enable automatic layout slot index incrementing
e119d51af : Add more docs and give API warning in ET Java
e4a232292 : Make QNN chipset with the device pool (#5098)
caf48b706 : Migrate //executorch/... from PyTorchBackendInterface to BackendInterface
5156615fa : Resync Android related BUCK file
a8c592e98 : Buckify backends/arm for meta internal use.
cd1c833b0 : Unified Android aar support for llava and llama models
084659e0d : Update script to build and upload MiniBench artifacts
e793795d8 : [ET-VK] Add `TmpTensorVRef` struct to recycle temporary tensor memory
6ec534230 : TrainingModule
1a4cf514f : Remove usages of extract_constant_segment=True
2806554ac : d to g start ops | add dim order sanity check
e73dce293 : [INT4-MM] Add Texture3D storage type
79b97e450 : [tokenizer] Consolidate how runner decide which tokenizer to use
3716680ec : [ET-VK] Persistently map staging buffers
f326ee1d5 : Adopt the new tensor API for aten_util.
ae05ed804 : Remove usages of extract_constant_segment=False
03a8f34f2 : b&c start ops | add dim order sanity check
a0b8bf059 : Format cmake files.
e1dfb14fe : Exclude some extra duplicated runtime symbols.
550cd9954 : Rephrase namespaces for brevity.
d32bf9955 : Add op: narrow_copy.out
0c78a9db8 : Remove expensive print from lowered_backend_module.py
8781604cd : Fix CI by correcting namespaces.
3cf9237af : a-start ops | add dim order regulation
8c0f63e48 : Adopt the new tensor API.
59d83ca61 : Build for Apple frameworks.
761688cdf : Fixx OSS tests.
5a4188fc0 : Revert D61959566
aca758d43 : Add LLaMA perf benchmark workflow for Apple iOS
46a1e6ca7 : Build for OSS.
5b6cac650 : FIx typo in templated constructor.
3f5773ac8 : [runner] Print prompt before call into text prefiller
f271159ca : Adopt the new tensor API for aten_util.
d76134642 : Convenience API for creating Tensor with a data blob.
d0708c0f8 : Fix linter for minibench
d96bcca3f : Tensor managed by a smart pointer.
3c4e26f57 : TensorImpl managed by a smart pointer.
d519b4d3a : [executorch] Add logs for helping debug address space overflow issue
23f03b9e7 : Rename PyTorchBackendInterface to BackendInterface
83c8a165d : Trigger apple.yml on examples/demo-apps/apple only
ec8922302 : Introduce ciflow/android and ciflow/apple label
887abab50 : Update pytorch pin for ET
0293cac20 : Fix wrong Android app artifact path after #5004
77df7b402 : Fix MacOS CMake build
e3cbeed35 : Add op: scatter.src_out
35d0f59cd : [ET-VK] Add type for symbolic integers
f65531bc1 : Swap to better default symshapeevalue pass
6961eed17 : [ET-VK] Add test to track sizes of various objects
bc56a97d8 : Add op: convolution_backward
324864d3d : Add op: topk
a4092c590 : introduce dim order tests to op test
5af5ed09d : [ET-VK] Simplify Allocator's buffer creation methods
b73312f19 : Pass images by value to allow rvalue args.
f824c1e5a : Add exp and log ops to Arm backend
1c2e57f40 : Arm backend report Ethos-U PMU counters
473274967 : Add pass for adjusting conv2d input shape to parameters
9e8ffbbf2 : [evaluate] pte mode: Add error message and instruction for full logits.
ef3c53d56 : Update pytorch pin for ET
05010e893 : [ET-VK] Remove unused Allocator function
61ddee540 : Android library update for benchmarking support
36c1f5431 : [codegen] Change cmake file to take CMAKE_CURRENT_SOURCE_DIR and add logging
91dc80141 : [ET-VK] Rename `StorageBuffer` to `StagingBuffer`
37cad019d : Exposing custom_ops_aot_py to executorch clients.
e7e86478b : Add a new MiniBench apk for benchmarking - app skeleton
47ac24ba6 : Changing sdpa_with_kv_cache tests to use a wider dynamic range.
0a8547a9f : Enable MKL on x86 to get around long-context discrepancies with torch.nn.functional.scaled_dot_product_attention
9c1a52cd4 : Fix export vit w/ QNN delegate
cd8aed663 : Use a separate threadpool library
1d6662dac : Remove tokenizer variants for Android build
2520d50b1 : Include stories with qnn in benchinfra
b95e4b3c5 : Qualcomm AI Engine Direct - Check the version QNN API and backend API (#4998)
12639645d : Preprocess C++
12039af2a : Format Cmake files.
369f804e7 : Update default segment alignment to 128
f99e25f42 : [llama] Build the runner with tiktoken by default
ff4a73602 : Hide and simplify backend registry internals
7608ab8fa : Decouple model exporting from dataset downloading
49b4dde59 : Update cpuinfo dependency version
f3077486f : Add tiktoken support in setup-with-qnn.sh (#4991)
959bb1be0 : Update ExecuTorch for XNNPACK 87ee0b4 (#4916) (#4916)
9fd8e53db : Add op: scatter.value_out
455ddaafb : Update pytorch pin for ET
db5abf626 : Add int4pack_mm shader and operator
9de12bc77 : Add custom op: batch box cox
e49d4dd8c : Let Module set_output_data_ptr accept an EValue.
7b3549bf6 : Propagate mul optimizations from D61504544/D61560825/D61560826 to add/sub/div (#4966)
0c6a77e54 : Refine tokenizer (#4940)
58efb8b67 : Optimized 2D-by-1D broadcasting in optimized op_mul (#4965)
ba06861dc : Add vectorized scalar path for single-element Tensor passed to optimized mul (#4964)
89ebeb08b : Move threadpool to extension
1ae997c30 : [executorch] Ignore leading 1 dimensions when checking optimized path for op_mul (#4963)
2553b85d8 : Arm backend: extend Softmax to handle dim < 0
66b2f73c5 : Implement bmm op for Arm backend
eeb52d5c8 : Fix sym int deserialization and add to shape_env.var_to_val
4cfdd22d2 : [ET] Introduce `TensorInfo::is_memory_planned`` API
8bcaca2a4 : Remove unused metadata util.
1f59accf5 : Format cmake files.
1774638c6 : [llava] Quantize embedding
4a8b8eee9 : [executorch] Increase mac capacity / fix conda for llava test on trunk
1899d15c8 : update doc to disable dynamic shape (#4941)
230511ed8 : Fix linter in quantizer_lib.py
52c9f3020 : [executorch] Sync torchao version
a5157de76 : Allow delegate to consume buffer mutations
cc9fb50c1 : Add buffer_to_buffer prepacking
20024900f : Add op: gather.out
88edab839 : Add op: pixel_unshuffle
89a24e01a : dont emit non mutable weights
89c499e86 : Remove util/util.h
8c4427c72 : Enable permute_memory_to_nhwc for corstone300 unittests
65d552f5c : fix typo in doc
e636ef662 : Move preprocess into subdir
5205cb2e0 : Update setup-with-qnn.sh (#4943)
4116cb24d : Qualcomm AI Engine Direct - Model sharding for LLM (#4923)
35e2302cd : Add a workflow to validate and upload iOS demo app test spec
ff46dd56b : [executorch] Rename phi-3-mini-lora directory to match phi-3-mini-lora
a79b1a6a4 : Support regnet_x_400mf and regnet_y_400mf (#4925)
5395ae696 : fold quantize in convert
69472e5c4 : [llava] Enable memory profiling
7bb8e9c21 : Clean up index utils
11434975a : [llm_manual] Use the new namespace, and clean up headers
b578d6d88 : Release ethosu driver after use.
9fafdb053 : Add Half support: full.out
d91f6121f : Add support for int_oo in exir serialization
7600f21b3 : Run ExecuTorchDemo test suite on AWS Device Farm
3f7707891 : serialize fqns.
1cea0eeea : preprocess e2e test
37db39a50 : Remove read_file.h
d2e54b622 : Simplify setting inputs in Module execute.
49156d0f7 : [llava] Expose max_seq_len as a parameter to export_llava
4890748f5 : [mediatek] Link portable kernel lib
d9ae7c3a3 : Rename tensor test and use gtest class fixture. (#4928)
395d3f539 : enable parallel prefill again
f92139f78 : [module] Change the deleter of Program in Module to make android happy
f65c28e2a : Handle null data edge case in data_is_close testing util.
3fb03dcd2 : Fix llama runner build (#4817)
801e1c947 : Forbid having TensorImpl with zero number of elements and null data.
5942e4ade : Allow EValue to be constructed with a smart pointer implicitly.
da2142bba : Fix module forward calls after api changes.
7efdfc051 : Inline some of the Module methods.
4689c9116 : Swap --gc-sections to -dead_strip for clang compilation
2c7b7e838 : [llava] Enable dynamic shape for image preprocessor
b284866b5 : Remove torch:: references from examples/portable
1f0487d65 : Update llama special tokens
2b2911b50 : Add a script to install Apple certificate for CI iOS jobs
f56068266 : Allow single EValue to be passed to Module execute.
6feb6399c : Remove all uses of PrepareInputTensors
a532d9c68 : Delete multi_runner and relocatable_runner
1253ed5e7 : Remove unnecessary INativePeer
0ae82f934 : Vulkan logger fixup
dc66414c7 : WhyNoPartition
260cf6fc1 : Revert D61290864
29797d41e : Add support for lifted tensors in ArmPartitioner
ef9c07f5d : Add unsqueeze op to Arm backend
e3ac39a12 : Add ReLU operator to Arm backend
2b7aa2b0c : Improve logic for getting submodules from target name
316bd15eb : Add cat op to Arm backend
ebe9075ad : typing for decorators - fx/_compatibility
8471c22fa : Enable MKL on x86 to get around long-context discrepancies with torch.nn.functional.scaled_dot_product_attention
7b4be5431 : Qualcomm AI Engine Direct - Use AIHub's context binary file for Stable Diffusion (#4836)
48f4eee47 : [exir] Enable dict for sym shape eval pass
26e921e54 : exir dialect view to squeeze/unsqueeze pass
6d29c1da4 : [Build] Define ssize_t for windows build.
b157e5f3e : [Build] Link pthreadpool and cpuinfo for windows.
bf71fd4bb : Clean up install scripts
047656b36 : Add quantized ops to pybindings
7fe3a69bb : [ET-VK][Ez] Add utilities to check if one vTensor is a view of another
4afc4fb45 : [ET-VK] Add buffer implementation for matrix multiplication
33fbe03fc : Remove custom op pad tests
d8be9b1c5 : [ET-VK] Set export log level INFO
bc66ff278 : [Build] add kernel_link_options for windows (#4865)
ee6e4e9af : [Build] Add install_requirements.bat for windows build (#4862)
c7bc7e0de : [Build] Support windows in setup.py (#4858)
b0d67c22e : [Build] Remove .exe suffix on windows (#4864)
0add885d2 : Update Xcode project to include new sources.
4915c9f91 : Move towards quantize_ api for blockwise
f492d9624 : Add prim ops neg.Scalar
11e8ed33b : Reduce the memory usage of logits from O(context_length) to O(1)
6c26a8723 : Fix Llama demo app after move in #4460
02a76ea67 : Show warning if java code needs formatting
891521a78 : [ET-VK] Use dim order as the source of truth for tensor strides
d7c069f49 : Fix SDPA decomp problem
bf6481916 : Rename directory "executorch/sdk" to "executorch/devtools"
bfc5b1739 : [ET-VK][ez] Enable no-op ExecuteNodes for view ops
87b38cfdb : [ET-VK][ez] Empty initialize ShaderInfo and add `bool()` operator
65473de3b : [ET-VK][ez] Introduce `check_close` function in `compute_api_test` to account for small numerical differences
0a211022a : Add option to generate full logits
3af50f9e2 : emit metadata
c2044a425 : [ET-VK] Register conv_with_clamp custom op
4442a91fe : Cross attention mask C++
226f9751c : Migrate extension/llm to new namespace
ce4917c6b : remove pad custom op (#4801)
ea4a1870c : Fix non-existing docstring parameters (#4827)
c7aff77df : Fix android-perf job
288e758d7 : [executorch] Migrate most of extension/... to new namespace
33231642d : Make it runnable via buck
78b08676b : update pytorch pin to 08/21
30a777cbf : cpp infra and tests
66bfa41cc : Only export LoRA layers in backward pass for phi-3-mini w/ LoRA
1fc0cfc17 : Buckify ArmBackendEthosU + bugfix.
6daa00273 : Run trunk.yml when we update PT pin
342e7f788 : Allow multiple eos ids
d8a00e677 : Add mul-op for Arm backend
b66d62ab9 : Add ET_EXPERIMENTAL
db7a37884 : Revert "[ao] Pin torchao to 0.4" (#4814)
ccd684f82 : Use pread() in FileDataLoader.
48e4d2955 : Add strides to ManagedTensor
75e6413df : Android MultiModal JNI binding
56001c3dd : Fix stories model name
055af0997 : Check that ET and Eager are numericaly equivalent
f887d7212 : [Build] Implement install_requirements.sh with Python (#4748)
1f783f550 : Fix the arg format passed to update-viablestrict
65531968c : [executorch] Migrate exec_aten utils to new namespace
53f401e36 : Fix required checks for update-viablestrict
ee7ae68da : Delete unused code
482f60fca : Add the example non-genai qnn model to ci and benchinfra
24780c8da : [ao] Pin torchao to 0.4
c1c8b0091 : [XNNPACK][Partitioner] Migrate completely to new config based partitioner (#4798)
f93a5b5f8 : [XNNPACK][Partitioner] SDPA Config (#4797)
7a2d885c9 : [ExecuTorch][XNNPACK] Don't partition 3d and transposed convs (#4796)
4a27a5388 : [XNNPACK][Partitioner] enable src based partitioner (#4795)
48d664c68 : [ET] promote to_edge_transform_and_lower to public API (#4790)
5950611bb : [ET-VK] Introduce copy constructor for vTensor to allow for zero-copy operators (#4791)
80b4a72ff : Support Llama3 qaihub (#4789)
012d61f99 : Fix tests in test_mm.py
447dc6c30 : Don't run Android device tests on forked PRs
187bd055f : Remove underscore prefixes from compiler.h macros
d3da92df8 : Lower phi3 mini with LoRA to edge for training
2b3c01c3c : Make consumed lifted_constants as delegated nodes
c6347f330 : Fix MPS test internal build.
6911e8e1c : added empty out
b1f7b60aa : [llava][22/N] Turn on xnnpack shared workspace and add README.md
54b692956 : Fix copying of state dict
9e478e865 : Update fused kernels and call _safe_softmax from SDPA
eaf383a14 : Add pass to convert split to many slice
4c0690760 : Qualcomm AI Engine Direct - add cli tool for QNN artifacts (#4731)
6cb5726b5 : Allow sharing Program among several Modules.
45e9f6b46 : [llava][21/N] Add llava runner test binary and build script (#4667)
622de2dff : Mark thread-safe methods of DataLoader as const.
b75e7d7cd : Add default dim_order asserts
7b795d7e9 : Make seq_len param available in JNI layer generate()
96e7f0a3d : Define @deprecated and @experimental decorators
a54d62c70 : fix eager_eval with kv cache and improve pybind eval speed
5c9a00a3e : Make the Module non-movable.
add6e2e86 : Support mutable tensors in TensorParser
9a98abb27 : Back out "Back out "[executorch][PR] [Core ML] Implement intermediate tensor logging""
aead1d506 : Revert "Add event tracing and ETDumps to executor_runner"
f25f13516 : [executorch] Migrate runtime/core tests to new namespace
3e4508a94 : Refactor delegation code
ae299cf91 : [executorch] Migrate runtime/core to new namespace
2dcf0f310 : cria runner
39aeff99d : Back out "Implement intermediate tensor logging"
bf29bd6e6 : [executorch] Migrate runtime/platform tests to new namespace
9b0b8e7cb : Fix android perf periodic default spec
2b9c4b268 : [executorch] Migrate runtime/platform to new namespace
ba2ff6383 : Add QnnBackend dependency to the ET main test binary app in buck for Android OS
c4ccad361 : added expand and gelu ops
caadd81e6 : VulkanQuantizer for weight-only quantization on linear
35da5bfa9 : Add event tracing and ETDumps to executor_runner
48b430410 : Implement mm op for Arm backend
938748b35 : FuseDequantLinearPass to convert dq -> linear into weight_int8packed_mm
54f8932e0 : API life cycle and deprecation policy in official documentation
a9ed83525 : [Core ML] Implement intermediate tensor logging
35a15a632 : [llava] Fix llava test-model-linux CI job
5c4a2a263 : [MPS] Add support for Int4 groupwise quantization
f1b741e12 : Use the common return_type field to support ET-QNN internally and externally
84100d1bf : [llava][20/N] Add llava runner using building blocks in e/llm/runner (#4666)
ef564147b : [XNNPACK] Share workspace across delegate instances
1cb97e075 : Initial Implementation of MediaTek Backend for Executorch
c541bc130 : Fix return type mismatch in choose_qparams_tensor_out
278052860 : Update bundle identifier.
023ab35e8 : Use MmapDataLoader::MlockConfig::NoMlock for Module::LoadMode::Mmap
0b513632f : Add 256 constraint executorch.max_kernel_num for
6efc2225c : [llava][19/N] Add multimodal runner base class and build file
7b27f9b81 : Back out "Back out "[executorch][PR] Add stories ci for qnn""
5d151d05b : [llava] Use huggingface LLaVA instead of depending on third-party/LLaVa
49c6a10b7 : Pin lm_eval to 0.4.2
5477369ca : Repeat op + pass expand -> repeat
dbd40f443 : Use single rounding as default for TOSA lowering
ba3448c3c : Expand Program Interface
2378cda8e : Use a single map for model metadata.
1e9e5d07b : update generation.py to run in eager mode as well
6982c03fa : Add the get method to Module to return a single EValue.
85b786931 : Add llama sdpa to generation script
9293b780f : [llava][18/N] Move token generation loop to a class (#4705)
e404e1982 : Implement load_into for buffer data loader
ccaaa4625 : Upload Android test spec to ossci-android
d51ec6d27 : Qualcomm AI Engine Direct - File Structure Refactoring
51e6b2ac0 : Add .github/ghstack_direct
9a32a4acb : [executorch] Preprocess export test
2654f5962 : Back out "Add stories ci for qnn"
2117c1a2f : add a list for ops to be added
e71fa0309 : Add stories ci for qnn
56f843b25 : Move metadata util to extension/llm/runner.
5e9bab8c5 : Delete dead code
b6de6ed49 : Fix periodic run and model name for benchmarking
b165c2827 : Implement load_into for file data loader
3e0eb0ff0 : Do not print eos (#4654)
728a29ded : Pack buffer-backed tensors correctly when moving into and out of staging
8f4697180 : allow models to use customized token ids during export
440048cff : Add an activity for benchmarking only
0c26dc047 : [Cadence] Enabled x86 executor flow with numerical verification
9b2bfb608 : Update phi3 lora example documentation
d53f8fa8e : Not hardcode llama2 model in perf test
e80062630 : Qualcomm AI Engine Direct - fix conv2d to meet QNN constraint
99e1ae1f5 : Skip storing unnecessary metadata in ManagedTensor.
18b829c42 : Replace custom op pad with aten op, post-export
ce7f5a0ef : [llama] Fix text prefiller
c9e7714f9 : improve sampling time
7f34796b6 : Qualcomm AI Engine Direct - fix release build issue
a70d0709d : add fp32 bmm op
593da7010 : Skip checking for dim order and strindes if those are not provided explicitly.
98b8ae173 : Statically Quantize Image Encoder
e2ca877e7 : [executorch] Avoid division in Sampler::sample (#4656)
f7684ad26 : Fix bundled program and plan_execute in pybindings
d9cfd6aaf : add buck targets build coverage to kernels operators and codegen
bcbdfa89a : Add core.pyi to resolve pyre issue when importing from executorch.extension.pybindings.core
a5c1bb9ec : Small refactoring of TensorImpl. (#4640)
79c15efb5 : Fix llava model definition for export
c5a816edb : fixed quantized_layer_norm
37c4f9748 : fixed quantized_matmul_out
d3a7c711d : [QNN] fix linter (#4645)
83a32af14 : Improve ET llama runner logging and debuggability
82608bf7a : Gh/larryliu0820/46/base (#4643)
c04bc993e : [MPS] Add support for flatbuffer serialization > 4GB
bbabd282d : Implement tile_crop custom op (#4622) (#4622)
3f9b39e60 : Qualcomm AI Engine Direct -- update documents
e4897dd68 : [ET-VK] Do not apply zero padding for buffer backed tensors (#4637)
0b1695fa4 : [ET-VK][Ez] Allow ParamsBindList to append a single Binding Info (#4636)
4e57f9c67 : [coreml] Remove references to build_apple_frameworks.sh "--Release" flag
5be55e5da : [ET-VK][Ez] Improve vec class (#4635)
05f13fce1 : Update on Mac runner build
3b21e79ef : Buckify open sourced ET-QNN
1b092e923 : fix CI errors
192d463f3 : Skip computing capacity for dynamic tensor.
2405f7499 : Remove Proxy from exported programs and modules
d04c9d62c : HiFi4 NNLib added as a third party lib for Cadence DSP backend compilation
d3367e63a : Introduce preprocess custom ops
b671e24b3 : add empty shim/TARGETS (#4600)
8ba7d0374 : Remove unused shim (#4597)
ff317c09c : Expand dataloader interface.
2718dd438 : cmake flatcc build issue
91252e7a9 : Qualcomm AI Engine Direct - Add configurable job number for build script
f19f9d915 : Move exir.delegate to PyTorch core to enforce no out-of-tree HOPs
9a4f32d56 : Qualcomm AI Engine Direct - Add source transform for kv cache and sdpa
5c270451a : Test new partitioner on all models
f863fe4b9 : Sub, Sqrt, Pad
6607c7d79 : Prelu, Pow, Slice
62f90c6a8 : Mean, Min, Neg Configs
b4a0b44ed : Floor, Hardswish, LeakyRelu configs
e61094755 : update encoder latency
8c813f94c : Simplify prompt fields logic and added modelType selector
530d4a1e8 : Upsample Bilinear 2d
92edd0438 : Revert "update the pinned pytorch hash (#4321)" (#4583)
1b484fd88 : Prep TensorParser for Loading Mutable Segments (#4545)
76c85e100 : llava use to_edge_transform (#4580)
ad3c6f4e7 : don't deep copy program (#4575)
3816c9171 : Remove inline Tensor decl from tensor_util.h (#4577)
3a5426ddb : Add module load option in Java/JNI (#4578)
34e487010 : update readme for phi-3-mini (#4551)
a50d0322e : update the pinned pytorch hash (#4321)
bf477e42e : Fix wheel build and smoke test (#4429)
05a7d5251 : Back out "BC Deprecate XN00 Support" (#4573)
9d9cda09d : More robust DCE pass (#4565)
20cb298d5 : Move metadata util to a separate header for reuse (#4550)
f52d8ab6a : Handle multiple memory IDs using pid (#2974)
fadc076d6 : Filter aliasing nodes from memory timeline (#4468)
60699fb56 : Adding mode to enable memory offsets in Executorch trace profiler (#4472)
de300e0ca : Add README in extension/llm (#4544)
ec4f57687 : Max Configs (#4540)
a86204cf3 : Bugfixes to enable call_map. (#4471)
5636c9c74 : Fix overload name for prims ops used in `call_map` nodes. (#4465)
6aaab87f1 : Suppress -Wglobal-constructors for llama example main (#4547)
0f5794e43 : Fix doxygen comment (#4457)
d88f368ee : Permute, Softmax, Sigmoid Configs (#4537)
a0d63cc71 : Div, Mul, Elu Partitioner Configs (#4539)
4217b9e8c : Support Conv1d (#4532)
3a62cb2b9 : Cat, Ceil, Clamp (#4368)
5ebd62c77 : Abs and AvgPool (#4366)
eb8bee0ee : Conv2d + BatchNorm (#4372)
8b0e7fb2a : Add, Relu, Hardtanh configs (#4367)
ef4f99255 : Add GEMM Config to partition Linear, QLinear, DQLinear, Addmm (#4373)
1a68779eb : refactoring some quant utils (#4374)
4826cb449 : Add to_edge_transform_and_lower stage (#4375)
ded57a7ca : XNNPartitionerConfig + Config-Based XNNPACKPartitioner (#4370)
e1cb7bf4f : configeration based partitioner (#4369)
864e0b0a3 : Implement runner for phi-3-mini (#4500)
14c24732b : Improve prefill speed (#4531)
9b06921f8 : Make quantized relu more flexible with quant params and use nnlib kernel on HiFi (#4530)
7cd96f71c : Revert "Make flatcc cross-compile deterministic (#4312)" (#4528)
76f0b6184 : Fix Stats in jni_layer_llama.cpp (#4527)
fbc183f21 : Dont generate a mutable segment if you ahve no mutable data (#4523)
738842d87 : Type Promote Div (#4516)
20c86cacf : Move test delegate to new AoT flow (#4522)
0590ed17a : Add test training model (#4511)
7a034522c : Move runner stats into its own header (#4499)
ee8359c6d : Update bilinear test to handle dim_order (#4520)
164762271 : Revert dim_order ops by default (#4518)
cd529cdbf : Add dim-order op revert pass for delegates (#4470)
15815dd5c : Make flatcc cross-compile deterministic (#4312)
1b6d5bb3a : fix forward _partition_and_lower_one_graph_module
0caaf3f38 : Support devices and delegates parameters in android-perf workflow (#4484)
448c7d3c0 : Support int8 texture tensors without requiring int8 buffers (#4485)
4483bb66e : Add Wav2Vec2 base model (#4513)
1090bcd26 : Qualcomm AI Engine Direct - Enable HTP emulator test in x86 host (#4503)
d59419c4d : Rename optimizer buck target to be more specific (#4509)
c329d6af5 : BC Deprecate XN00 Support (#4450)
aa56e8ceb : Split quantize_pt2 to allow calling the same APIs in testing and regular flows (#4505)
301a017bb : Add weights to model outputs (#4302)
4ba11e34f : Register grid_priors nn.Module test (#4489)
64b77338b : update sampler reference (#4508)
d207eb0be : move get tokenizer to export_llama_lib (#4451)
c1d53baf0 : Fix CI OOM issue (#4507)
6fce77f28 : example of updating dim order for specific part of graph (#4404)
34c3c3d7c : make rust-project deps oss compatible (#4506)
0bbcabe35 : Qualcomm AI Engine Direct - Add index and index_put op (#4481)
1882837f3 : Fixed typo in default devices (#4483)
ad371a4e4 : Move calculations away from GPU in Bandwidth profilers (#4445)
a743a3be6 : export phi-3-mini-wrapper (#4478)
a65700cf6 : add a wrapper for running phi-3-mini with kv cache (#4491)
5b3752498 : Add customized static cache implementation (#4490)
1114539fd : Change warning to a different log level (#4482)
f611219e0 : Add workflow for on-demand benchmarking (#4441)
f9d2de1c7 : Create a buck genrule for schema_generated.h
5890a9c52 : Allow us to move the proto files outside of fbcode (#4417)
6bfefa84b : Use Core ML Quantizer in Llama Export (#4458)
febd9c138 : Change deprecated impl_abstract to register_fake (#4392)
dcdd25477 : Non-GenAI models coverage XNNPACK (#4474)
6cd7f380e : Move llm custom ops to extension (#4467)
227b49de9 : Fix unsupported linker flag on Mac (#4473)
2852bda4e : move delegate debug tools to sdk (#4459)
2f8ecf397 : Program.fbs change to support serialized mutable state (#4216)
a567abfd0 : Porting over ET MultiModal Demo App (#4455)
9aeceeee3 : Implement grid_priors op (#4440)
69f3f1c7d : Fix prewarming (#4454)
1ec344470 : Migrate sampler to extension/llm (#4460)
1727aa18c : fix eval llama (#4469)
e03181d1f : Refactor and class split (#4432)
586712988 : Add metric for 3D texture max concurrent cache read (#4421)
298b625a9 : Add config file support for constants and test control (#4337)
ea0c01722 : Add 3D Texture Bandwidth metric (#4336)
da7ca6ff2 : Fix build error (#4464)
b7c8378d5 : nop validation during build (#4449)
28cfabb58 : Fix use_sdpa_with_kv_cache option (#4456)
3c25aec9c : Add docstrings to all unittest.TestCase:s (#4391)
38724d072 : Add test debug features (#4144)
c659b9c23 : Add flakyness mark to conv BI test (#4390)
3d5a1491a : Add FVP tests for linear op (#4393)
1e143339d : Add exportable baby llama example (#4345)
db1c4d838 : Add an option to turn on/off sdpa_with_kv_cache (#4444)
318a178e3 : Delete hooks.h (#4448)
da24d1855 : Add slice op to Arm backend (#4072)
7f6a3416c : Remove redundant generate_*_compile_spec funcs (#3869)
711ecec40 : fix zero arg export in training_ir and constant tensor handling (#4382)
e6684f766 : Use linux.24xlarge for llava test (#4446)
f695f8e8d : Support qmatmul with different dims tensors (#4438)
e087ac83f : Qualcomm AI Engine Direct - Fix UT example script hang when exception happened (#4355)
dd8870871 : Hoist numel out of loop condtion in op_embedding (#4146)
1e4603d2e : FileDataLoader fails to read the file when size > INT32_MAX (#4435)
5a20a4951 : Fix numpy and pandas versions. (#4430)
91298923a : immutable accessors in graph signature (#4433)
5d3ec1323 : Revert D60253955: immutable accessors in graph signature
11407f05e : immutable accessors in graph signature (#4428)
faeeca8ec : remove unused tensors from VK model's graph (#4427)
889e5cbc0 : Enable SPIR-V compiler optimization (#4402)
77c905d13 : Define custom op for grid points generator of single level feature map (#4395)
dbf87b0bf : add phi-3-mini eager mode example (#4315)
5b0700bf1 : some bug fixes (#4409)
dbf7d6e22 : Build and link against sentencepiece 3rd party lib. (#4410)
5ad00e725 : Update readme to use the latest release. (#4406)
85d4d12ca : Prepare the script to run tests on Android emulator (#4387)
d6d691ec6 : Add llama3.1 to readme (#4378)
5e8cce1b2 : remove target_sdk_version attributes (#4394)
d8866e30d : Move sentencepiece to extension/llm/third-party (#4388)
99623bee0 : Sort .gitmodules (#4389)
944e1e816 : Add "schedule" option in gather_test_models.py (#4397)
f6bad568d : Update Apple runtime doc with a new version. (#4398)
69dcfe075 : Set FLATBUFFERS_MAX_ALIGNMENT=1024 (#4215)
56120f9df : Qualcomm AI Engine Direct - Refactor & centralize common keywords (#4357)
6c69ebd3d : Support llama3.1 (#4376)
11b2fcb7b : Add sub operator for Arm backend (#4074)
47d309a2a : Introduce periodic.yml (#4348)
fd2dccf51 : Fix log.cpp when compiled with `executorch.enable_et_log=false` (#4358)
48da61aa3 : Enable aten.relu_.default in the CadenceQuantizer (#4344)
3154afc21 : Update llama docs on main (#4361)
628b280af : update phi-3-mini readme doc (#4377)
93c56cb3e : Prepare for merging _export/exported_program.py and export/exported_program.py (#4338)
ae1f098b9 : Re-enable CI with smaller model for validation (#4304)
78df3327a : Android update prebuilt library link (#4362)
dbc73a620 : update qnn doc (#4356)
6153b1bf7 : Update ExportedProgram ctor callsites. (#4347)
09cfc9275 : Add sigmoid operator to Arm backend (#4114)
f0ebfa20f : Add Warp Size metric for alternative SM workload distribution (#4305)
3269e6106 : Add more tests to sdpa_with_kv_cache for speculative decode (#4325)
74b0e8925 : Add GQA test for sdpa_with_kv_cache large seq length (#4324)
d28807619 : add dim order into executorch concept documentation (#4349)
5d58203f4 : Disable sdpa_with_kv_cache for now (#4319)
6556991ac : Add full op for Arm backend (#4073)
908b5a5fa : Update xnnpack (#4340)
b7fb9bd7f : Directly load program in sdk_example_runner (#4352)
8bdafb0ca : Add missing quantized_matmul meta kernel (#4343)
0e032c59c : Add support for unbacked symints (#4326)
0e2b20580 : Qualcomm AI Engine Direct - add program validation (#4297)
f0364e879 : Fix quantized_matmul with 4D inputs (#4335)
844a69f26 : Simplify tokenizer header (#4283)
6dbb4dcfa : Fix sdpa flash attention op for et llama deployment (#4322)
9d859653a : update the pinned pytorch hash (#4313)
5865a576d : Update Hugging Face packages to latest versions (#4306)
7e417f42d : Add Warp Size metric (#4298)
ba052a434 : Add a step to build Java extension code in LLAMA docs (#4265)
8b43bf59e : Fix crash when tokenizer files are not found (#4266)
9a4823b5d : Migrate all DataLoader::Load callsites (#4291)
033339066 : Add export_llava.py (#4295)
1933dae71 : Add Llava model definition (#4259)
c7574994e : Upload android artifacts to s3 (#4300)
71bb56591 : Apply late review feedback for D58874164 (enable parallel prefill) (#4279)
8e0f856ee : Remove Buck2 from examples/sdk/README.md (#4299)
282e7fe8f : Refactor android demo job (#4288)
92b87e43f : Fixed the command (#4296)
1d7d71dbf : Allow expression of scalar tensor buffers, non string values in variants (#4292)
e5687a4c5 : Add Shared Memory Bandwidth Profiler (#4277)
a4deccaf7 : Add UBO Read Bandwidth profiler (#4270)
c72190ac5 : Add Buffer Memory Bandwidth profiler (#4262)
544462d08 : Add HF RoPE into llama_transformer (#4256)
4a8831892 : Intermediate output logging enablement - Inspector logging (#4293)
1b0bf1c3e : Qualcomm AI Engine Direct - enable loading context binary directly (#4163)
c3357e1c4 : use _preserve_ops for to_edge_transform_and_lower (#4273)
b448254a8 : Update Xcode project paths after tokenizer source code move. (#4290)
8950d901a : Use external_deps for sentencepiece (#4269)
037cfcfb8 : Disable uploading to S3 and test on-device (#4287)
2b5419495 : Add homebrew package for libopqs (#142)
0cde6b896 : Move tokenizer into extension/llm/tokenizer (#4278)
740a0a51f : Update serializer to sync with export serializer (#4264)
242f2c027 : Delete deprecated non_const_buffer methods from Program (#4272)
ef640bfa6 : Provide kernels with true reference implementations for quantized ops (#4108)
7d6c8fca1 : Update kernels readme (#4274)
8a1589dbb : Allow llama_transformer take embedding as optional argument (#4257)
58b1b18ec : Rename llava_encoder to llava (#4242)
76397d53b : Take dynamic shape as an argument to LLMEdgeManager and (#4241)
a22e80950 : Migrate the quantizer to use aten ops directly (#4195)
a7ac3d5a5 : Add pyre-ignore to fix executorch for D59723984 (#4260)
fbe0af196 : Update qc docs (#4263)
4cf348999 : Move tokenizer.py into extension/llm/tokenizer (#4255)
8775280fd : Remove llama related stuff out of bpe_tokenizer (#4235)
0a999a1a6 : Add linux package names to be installed while running GA
047e7d676 : Only include base64.h in tiktoken.cpp (#4234)
7ee140354 : Remove uses of deprecated executorch non_const methods (#4205)
09fc6d84c : Remove print statement in SPIR-V compilation scripts (#4268)
fc20d21f0 : Fix build failure instroduced by https://github.com/pytorch/executorch/pull/4227 (#4267)
d3c92de23 : Remove memory-format workaround for Arm backend (#3981)
ac4360614 : Removed deprecated dialect field from EP schema. [1/2] (#4254)
ae4d8d58a : Move SDPA decomp pass from Qualcomm's directory to be shareable and call it for Cadence backends (#4258)
6903715b1 : Add buffer cacheline size metric (#4228)
dd7fa6a9d : Add ArmPassManager (#3749)
4b45264d3 : Get device information from OpenCL (#4210)
93a77251d : Pass in correct backend id into data load from runtime (#4218)
c7e407eda : Switch to DataLoader::load in runtime (#4217)
bda0f7ce2 : Introduce new DataLoader::load() with segment info (#4158)
08f433408 : Fix constant propagation pass to consider kwargs (#4166)
d159de2e5 : Let models provider their own specific special tokens (#4227)
7021d519d : Intermediate output logging enablement - Runtime APIs (#3822)
7047162cd : Implement aten.squeeze_copy.dims (#4223)
9221ab658 : Fix pybinding build. (#4243)
728c1f68f : Use cmake to build runner dependencies. (#4238)
cfbe63d19 : Add support for quantized bmm (#4047)
e9aa542ef : Add tiktoken OFF/ON options for android prebuilt aar (#4203)
d8c4afa93 : Remove unused function from executorch/backends/apple/coreml/runtime/inmemoryfs/inmemory_filesystem.cpp (#4240)
17ae00962 : remove buck2 from custom operator registration (#4233)
37c9897fa : remove buck2 from mps readme (#4232)
2c29c856f : remove buck2 from custom variables (#4231)
e599d3791 : remove buck2 from backend dependencies doc (#4226)
8c9df714b : remove buck2 from backend integration tutorial (#4225)
9aafd0fc0 : remove buck2 from Kernel Library Selective Build (#4224)
6051ce706 : remove buck2 from kernel registration (#4220)
1c42a7f21 : remove buck2 from runtime overview (#4219)
d1712972c : remove buck2 from cmake installation tutorial (#4214)
051b01bf6 : remove buck2 installation from setup page (#4213)
d863214c5 : Do not error out when op is not in ATen namespace (#4229)
f9efb0521 : Create exceptions for spec.const in verify_graph_input_output (#4083)
e45365297 : Duplicate dequant nodes for dq encoder (#4222)
b7df20d30 : Bug fix to disable lower recomposed sdpa (#4211)
a92450085 : Expand verifier to be multiple on ExportedProgram (#4184)
1ec626365 : aten.minimum.default (#4192)
a9cbe6417 : transposed convolution: add channels last tests (#4208)
e570a2233 : fix example input generation for dynamic shapes (#4207)
c698791ef : Enable RANGE definition in shader YAML for repetitive shader generation (#4196)
e3e674405 : RegCount concurrency calculation (#4173)
09336a6b2 : RegCount max registers calculation (#4171)
ac1c7d058 : RegCount NITER calculation (#4159)
63e8025bc : Link to readme from ci.bzl
074a81e18 : Quantization types (#4094)
19ed01853 : Update phi-3-mini to use the export library (#4190)
a71935a9c : Add option to export phi-3-mini model via buck2 (#4189)
4fa7cfc86 : Move builder.py into extension (#4188)
3fb70836b : Simplify Module usage in LLama runner. (#4175)
c96c20519 : Let Module use FIleDataLoader when requested. (#4174)
1faa1bb87 : Add op: enable transposed convolution (#4197)
90d7d07a6 : Add memory planning pass for fully delegated path (#4176)
bc632308a : Parallelize SPIR-V compilation (#4200)
f80c6ef6f : Fix OSS CMake build deps (#4198)
561c035af : Rewrite etdump debug data value comparison (#4152)
b10b763f1 : Fix executorch_no_prim_ops_shared linkage. (#4185)
00bea84a4 : Fix export_and_delegate.py (#4193)
7774c3888 : Separate `PhysicalDevice`/`DeviceHandle` from `Adapter.*` (#4187)
81cf3ab53 : Convert `DeviceHandle` into a struct (#4186)
073a96d2d : Rename `RuntimeConfiguration` to `RuntimeConfig` (#4182)
d93176ad8 : Create `containers/` folder in `api/` (#4181)
fdc6a679c : Move `GPUMemoryLayout` and `StorageType` to `utils/` (#4180)
f09a8f602 : Move `utils/` and `vk_utils` out of `api/` (#4179)
709994419 : Consolidate MeanToSumDiv export pass into VulkanPreprocess (#4178)
4027c1bd2 : Remove Duplicate Dequant Node Pass (#4169)
2d9c6b528 : Move portable/util.py into extension/export_util (#4141)
bdbfcc657 : Use linux runner for Android Device Farm test job (#4177)
352102153 : Fix OSS CI macos test-custom-ops-macos (#4170)
d948c1acd : Bug fix in bpe tokenizer (#4149)
5f2ab0e0d : make the predicate truly data dependent. (#4130)
a33936b5c : Remove buck2 from selective build readme (#4003)
09ff722c3 : Fix missing llama2 model on Device Farm (#4162)
46b10a7f1 : add coreml stories end to end ci (#4161)
b91c20bc5 : integrate coreml delegate to llama_main (#4160)
e4eeadc94 : Fix a typo in the comment (#4140)
5584b9e3c : Qualcomm AI Engine Direct - Support kv_cached stories 110M llama2 (#4142)
29fdaa185 : Delete inaccurate comments. (#4139)
67882f8f2 : Add dtype selective build support for op_to_dim_order_copy (#4049)
f1e673f60 : add mps stories end to end in ci (#4137)
f32d707fd : Add `requires-python = ">=3.10"` to pyproject.toml (#4131)
38d67db95 : Link MPS and CoreML delegate library into portable_lib (#4040)
516f7f0e8 : Add PR/issue workflow to CONTRIBUTING.md (#4107)
998fc08c7 : Skip model load when lowering (#4103)
e00de25c8 : Fix typo in README.md (#4148)
d3ae67cc2 : Fixes in github issues template. (#4147)
8bd51eacd : Improve README page and Getting Started page. (#4145)
970e278bf : Inspector should allow explicit None for delegate_map (#4136)
28638d78e : Update strings in Arm unittests (#4143)
459bc1dfe : Workaround for accuracy problem detected on Arm64 (#4092)
649d7b17e : Integrate Corstone-300 in unit tests (#4015)
c2147cbef : Add Tiktoken v5 vision tokenizer (#4086)
28a45cdbe : Order stdlib includes after ET includes (#4127)
17bab7d2e : Move files using the Vulkan API to `vk_api/` (#4125)
55fb6c89c : Remove trailing newline (#4132)
c839b9e74 : Move `Tensor.*` to namespace `api` (#4124)
049040295 : Organize `StringUtil.h` and `Utils.h` into `utils/` and `VkUtils.h` (#4123)
3cc7d9c79 : Shorten namespace `api::utils` to `utils` (#4122)
95feccec5 : Move `gen_vulkan_spv.py` out of `api/` (#4121)
55af84086 : tests remove run_decompositions from Export Stage (#4109)
152988d3a : Fix README typo (#4135)
96fb2bfb9 : Move `ParamsBuffer` and `StorageBuffer` to standalone files (#4120)
da04a6065 : Move `ParamsBindList` to `Descriptor.*` (#4119)
da35965be : Delete unused functions in `Adapter.*` (#4118)
901303028 : Delete unused functions in `Context.h`, `Command.h` and `Utils.h` (#4117)
83d660c04 : Delete unused function in `Types.h` (#4116)
5bcb28e7a : Delete empty file `StringUtil.cpp` (#4115)
154579dad : Refactor ArmQuantizer and quantize utils for readability (#4095)
95b02e053 : Move llama related stuff out of builder.py (#4105)
63d97f375 : Introduce op_benchmark binary for matmul (#4096)
07683cd09 : Initial github issues template (#4129)
c79243de9 : Update read me with llama3 numbers
e74aa2e27 : Make dynamic shape based export selectable (#4100)
5dc3b2b4c : aten.hardswish.default in unary_ops (#4087)
8740c6932 : Halve model loading time for llama demo on iOS (#4032)
01a1b28c2 : Move partitioner and quantizer to export_lib (#4089)
c572f9e50 : Check for 0 size allocation before erroring out in ET_TRY_ALLOCATE_* (#4097)
8f12da1a3 : Added support for HiFi build for ISS (simulator) under EXECUTORCH_BUILD_CADENCE cmake-config switch (#3629)
3eec95a34 : Add vision transformer (#4077)
748a4f8e4 : Make memory allocation macros windows-compatible (#4071)
929fc806b : Use lowered_module graph instead of self.node.graph in emitter (#4079)
4fcd90315 : update the on device delegate compatibility (#4060)
59e706a3c : enable parallel prefill (#4068)
38cff0987 : Enable dynamic shape tests for sdpa with kv cache (#4067)
51c32105c : Enable is_causal tests for sdpa with kv cache op (#4066)
2b029fbac : More test refactor to enable attention mask (#4065)
061bf8120 : Refactor sdpa_with_kv_cache tests (#4064)
4157af428 : Add LLaMA model test (#4070)
50baff184 : Add debug_buffer_size for ETDumpGen in pybindings (#4063)
85cf584e6 : Support compute_unit in coreml backend when calling ct.convert (#4059)
38046ba1a : Add resize function for aten.upsample_nearest2d.vec operator (#4069)
eebffae6c : register nn.Module test for upsample_nearest2d (#4055)
34bda85e7 : Export mini phi3 LoRA model (#4062)
f538eae73 : update Vulkan nn.module test to support not decomposition (#4054)
1ba8946b8 : Migrate workgroup API for trivial cases (#4061)
07970da5a : remove clone ops pass (#4058)
fdeda8ede : Add mod prim operator (#4057)
e7ebb5ea6 : add support not decompose into vulkan_partition (#4056)
34fd76700 : Migrate conv2d shaders to new layout API (#4053)
ae175c55c : Allow overwriting local workgroup size (#4046)
72426cd6d : Use macros for all shader codegen variables (#4052)
b1f74b693 : Reduce conv2d_pw global wg size (#4045)
16941c90c : Generalize I64toI32 ExportPass to convert between user-specified arbitrary dtypes (#4037)
6b3de999b : Stylize check_fn in codegen (#4044)
6f0631603 : Standardize 2-space tabs in codegen (#4043)
39e17e4e0 : add 2x4 tile in mm computation (#4031)
caf3b1ba8 : Add more quantized kernel pytest (#4036)
398ce661d : Let custom_ops_aot_lib use portable_lib (#4024)
78688b7dd : VAD perf improvements - Improve aten::full performance (#3974)
5e22836d6 : Add function to get temp allocator (#4034)
b2a4bb63c : Add to_out_variant tests (#2880)
03de34411 : Update update_placeholder_tensor_specs to include lifted tensor constants (#4029)
b8e88cc8e : Fix ExecutorBackend output value assignment (#4020)
9c9919a43 : add int to generated shader variants for copy_offset (#4033)
3454f917a : build test/demos/rpc (#3941)
204391da6 : Add CMake option to override PAL default (#4026)
380704523 : Remove buck2 references from examples/portable/README.md (#4027)
f428d03e6 : report a second digit of model loading time for iOS LLaMA demo (#4023)
f2b0595b3 : refactor mm and linear implementation (#4011)
524dd490a : fix skip dim order use in test_vulkan_delegate (#4028)
30a2dc2ee : Pass in exported program in call to ops_to_not_decompose() (#4022)
8dee613e3 : Take care of overload name when replacing aten op with custom op in to_edge_transform_and_lower (#4010)
b53b9836e : add half dtype to slice_channel codegen (#4025)
8834d2e0a : Rename PAL target files to match style guide (#4001)
60750542a : Fix partitioning for constants (#3991)
d67c6c415 : Update tracker issues after successfully cherry-picking a PR (#4016)
052427a46 : new fuse_view_copy pass to merge chains of single-user view_copy nodes (#4019)
d11e8e1f6 : Move the quantization API to OSS (#3997)
802a0b2f3 : add temp allocator for the module (#4009)
18bccf7d9 : Make flatc-build optional in setup.py (#3844)
c6d4e8baa : Make Tokenizer methods const (#4014)
51f64552e : Implement aten.constant_pad_nd.default with constant mode (#3975)
337174c4d : Enable build on aarch64 linux (#3896)
172574a6b : Update PT pin to 6/18 (#4007)
a68c340ef : Update vela numpy fix (#3998)
c6fb9da97 : Fix dequantize_per_channel for single dimension input tensor (#3867)
7028a7104 : skip dim order for now to avoid delegation issue (#3982)
99284c735 : Migrate convolution workgroup API (#3996)
ed81c5ba0 : Print local workgroup size (#3995)
2216cae34 : Remove targets.xnnpack_dynamic_quant_utils from cmake_deps.toml (#3999)
8a320dd4b : Update version on main to 0.4.0 (#3993)
bcabc3757 : Add PFH in buck for FoA (#3985)
d30b70864 : log skipped nodes in Vulkan Partitioner (#3978)
56914ec15 : Appease pyre (#3979)
7c0f4c2b6 : Remove op_common.py in Arm-backend (#3972)
4b6331d86 : Improve static typing in tracer. (#3973)
da9b92df7 : Arm test: Pass inputs to run_method_and_compare_outputs (#3750)
089858b34 : Fix to pin dependency revisions properly and so fix some builds (#3971)
9c8cb1e55 : Model example for fbcode only (#3977)
70743bbf0 : add phi-3-mini runner (#3951)
4ed5bc7f5 : disable gradient calculation when getting the prop constant tensor (#3948)
f338c761e : Change brotli target names
1892c108d : Update flatcc to 896db54 (#3967)
1345bc2c6 : Update flatbuffers to v24.3.25 (#3966)
3de5ad47d : Add best practice for custom ops APIs (#3956)
c67ce8652 : Enable tests to run in OSS (#3707)
a1e9c7a25 : remove old buck2 setup code (#3870)
1547bcf6f : Dtype selective build refactor (#3938)
7f0c0b806 : add int to generated shader variants for view (#3970)
4cf9a46c4 : use _to_copy or _to_dim_order_copy in I64toI32 pass (#3968)
2d35b304c : Update lintrunner libraries (#3963)
22b063d77 : Add exir/backend/test to pytest (#3965)
d308fd51a : FC preparation for int_oo in PyTorch (#3947)
8f08b8b0e : Add runtime/kernel test (#3954)
70a4d98b5 : Fix Pyre problems in executorch (#3955)
189d5480b : Remove unused pyre ignores (#3950)
7f51fa760 : Bump up coremltools version. (#3962)
2ce1a91d5 : Buckify MPS AOT components. (#3925)
2e5fd9c59 : Add copyright headers to shim files
8cb13dca7 : SDK Integration 5/n: EventTracer (#3961)
d000224dc : SDK Integration 4/n: Expose QueryPool data to ComputeGraph (#3960)
8f3ae8d94 : SDK Integration 3/n: QueryPool With node_id (#3959)
ac7927c48 : SDK Integration 2/n: Runtime DDI Deserialization (#3958)
92704a5da : SDK Integration 1/n: AOT DDI Serialization (#3957)
d8fac8f95 : Add extension/parallel test (#3952)
6862e6fa9 : Remove schema/default.profraw (#3953)
bc655a639 : Add threadpool_test (#3942)
c5e31b358 : Add exir/tests (#3937)
34d31723a : pybindings test (#3934)
159375669 : Fix discrepancy on eos and bos tokens for different pte files (#3923)
10de8b23a : Add emit/test (#3933)
88e9737fe : Change the elementwise broadcasting contract from graph to kernel (#3894)
0ed71a2f9 : Remove buck-config dtype selective build (#3931)
dcf00b776 : Remove unused parameter (#3858)
e053a8dc9 : Add exir/program/test (#3935)
76fa85b6a : Add tokenizer test into OSS (#3926)
2152d7986 : Cadence - Add RNNT joiner from torchaudio (#3920)
53d0025e8 : update portable (#3927)
b39dbadf9 : Fix manual permute within channels_last block (#3932)
6840b9d1c : ptez_loader (#3897)
5aecf9a04 : create deflate decompressor (#3891)
a7a9bacd6 : Update Arm delegate tutorial for MobileNetV2 usages (#3914)
0276e9ea2 : Add Etdump C++ test (#3921)
5b9795c66 : Use portable lib in exir/emit/test/test_emit.py (#3918)
55883fec1 : Remove @EXECUTORCH_CLIENTS from export pass deps (#3915)
0bb54927a : combine all sdk test in oss config (#3916)
4b4f38b57 : enable bp test in oss (#3917)
a3ff00d32 : Implement weight_int8packed_mm (#3893)
be7150c88 : Add more sdk pytest (#3910)
afdd4b104 : Add more xnnpack pytest (#3909)
27d5329c6 : Complete revamp of float/promotion sympy handling (#126905)
bd3acf25b : Test int64 dtype (#3728)
9068baf0a : Add `executorch` keyword to model identifier (#3906)
232268f3c : Apply custom op `conv_with_clamp` to graph transform (#3888)
c411b946a : Introduce I64toI32 export pass (#3727)
3e5b3cd77 : Add examples/models/llama2/tests to pytest (#3908)
038d50f03 : Add XNNPACK C++ test (#3907)
6cac9c163 : Enable more portable and optimized kernel tests (#3900)
84a11cadd : Update header_namespace for OSS
d92806644 : Create export_edge_to_executorch, call export_to_executorch in testing flow, and call print_ops_info in export_to_executorch (#3863)
c18cea726 : Delete legacy test code (#3829)
fce208cf0 : Add support for int8 textures and buffers (#3892)
61d688b0d : Implement custom op `conv_with_clamp` on Vulkan (#3887)
355a1d260 : Mitigate the "dim_order" not found error in dim order op (#3905)
44e54164e : Update frameworks version in docs. (#3904)
9a813e26c : Add quantized kernel test (#3899)
5715d2f5e : Define custom op `conv_with_clamp` through python (#3886)
dc04a6bd0 : Add x509-parser to oss shim (#3825)
91c048565 : Inline test custom_pass (#3895)
aadfc0fc0 : Enable more portable kernel tests (#3890)
6554fa544 : Add colab/jupyter notebook in getting started page (#3885)
31b766b86 : Fix test failures (#3876)
2ee83bea4 : executorch/examples (#3880)
21a6fe5d1 : update <tokenizer.model> in llama2 readme (#3879)
b1e5ba82a : Creation ops (#3877)
98f0d82e9 : Add runtime/executor/test CMake (#3865)
7fdb9e7c4 : Separate aten and portable tests in memory_planning_ops (#3873)
b57cfb737 : memory_allocator_test in oss (#3874)
04674a0ab : Fix example code in `Running an ExecuTorch Model in C++ Tutorial` (#3868)
32335e5ea : Add more files to shim
c3500752a : Update llama readme, use main branch for llama3 (#3861)
13dcf1070 : Workaround for fbs file not found (#3855)
7bb2c587a : Add Manifold support for ModAI Manager (#3849)
5292bacb7 : add export_model.py (#3842)
f48f392ee : Update quant overview for 021 (#3845)
800911413 : Allow explicit dtypes and data ranges for certain arguments (#3839)
32d31aa81 : register test suites via function decorator (#3838)
7b60703f8 : backend/arm: Update Ethos-U software to version 24.05 (#3847)
63f05b944 : Add folly certs/defs.bzl to shim (#3851)
b2fafe9d2 : Cadence examples -- Add RNNT encoder from torchaudio (#3691)
e85b52af5 : Android improve prebuilt workflow and provide tutorial (#3840)
1ec6a6812 : Improve error message for compiling quantized models without quant libs (#3801)
8c70a50b1 : Tester: rename inputs to example_inputs (#3800)
9d58de167 : aten.full_like.default (#3843)
f1843296f : Update headers for folly-config.h path
7eafa2b96 : Fix a bunch of licenselint problems (#3834)
2354945d4 : Clean up Shader Profiling Capability (#3792)
f21676cba : Fix MPS buffer initialization during runtime (#3824)
a1222af6f : backends/arm: Bump ITCM mem to 1MB in Corstone 300 linker script (#3830)
2b48e1351 : Ignore backends/arm unit tests from pytest.ini (#3831)
1713392df : Update doc on iOS Runtime. (#3833)
d7ffa18a0 : Add docs on iOS runtime. (#3828)
8523db595 : Add support for immutable list (#3811)
3b5bb0044 : Add buck file to selective build (#3812)
a17d5ff74 : Update top-level README.md to reflect current state (#3816)
2c00ade46 : Rename so file to match soname (#3810)
ab6f177c7 : Turn on python dispatcher for EdgeOpArgValidator (#3809)
b9bbb71b2 : Enable data_is_close for Half tensor (#3790)
cc9e2d185 : Generalize MeanToSumDiv to any dtype (#3794)
bb4f761b3 : Fix compute_graph_op_tests, attempt 2 (#3805)
13ba3a7cd : Add docs on Module extension. (#3798)
1358e03c3 : Fix compute_graph_op_tests (#3796)
fa3bf82f1 : Enable kernel testing in OSS (#3795)
7bcdb0ee3 : Removing lm_eval as a third-party module (#3793)
1d7c7c8ef : Add more tests and clean up helper (#3783)
a463f0be2 : aten.avg_pool2d (#3770)
8c8d9652a : Enable operator<< for Half tensor (#3779)
70e33954c : Qualcomm AI Engine Direct - gMLP Enablement (#3774)
0412dead3 : Manual LICM for numel() in quantize ops (#3785)
d1d2e7aba : Manual LICM for mutable/const_data_ptr calls in quantize ops (#3784)
ccce5faef : Update README.md to ask user to double check python env (#3782)
e650dd91a : Qualcomm AI Engine Direct - SqueezeNet Enablement (#3748)
9e8686052 : Fix report_coverage (#3781)
ba672960b : extension/runner_util test (#3777)
f1cc9826b : Add get_inputs to method.cpp (#3775)
ff4e9edc5 : Enable Vulkan tests in CI (#3737)
62cdfb90e : buck2: shim: unused lint (#3773)
d72467351 : Fix GPTQ import error after torchao refactor (#3760)
1ad4ae636 : Handle time wrap around in inspector (#3766)
a36ace7ed : aten.embedding (#3762)
f03499967 : extension/module c++ test (#3710)
0bbc6548a : Trigger trunk when .ci/script is modified (#3741)
4f2f7e08b : fuse conv and batch_norm (#3769)
870c753a1 : Update deps paths
56a68553e : Add necessary third party libs
2badd7643 : Switch the order of the to_dtype function and source transform (#3757)
fbbba34dc : Factor out eager val from eval_llama_lib (#3756)
82663ac2b : Collect OSS GTest code coverage (#3734)
8f65b6d4d : Qualcomm AI Engine Direct - ESRGAN Enablement (#3751)
d9194d1cc : Mass migrate to pybind11 2.10.4 #1 (#3763)
0b34dc5d9 : Replace buck with cmake in docs (#3739)
55d11e1f3 : add flag for dynamic shapes to filter out static ops (#3733)
2eaed1b07 : Inspector CLI Specify source and Target Timescale (#3761)
757a6ad21 : Upsample add support for output_sizes argument
da73e8f88 : aten.arange.start_step (#3754)
26daed736 : Fresh setup m1 (#3759)
c665c17fc : aten.index_select (#3744)
24b37f25e : make builder accessible (#3743)
2b91ebadf : to_edge_transform_and_lower (#3483)
636c5c3de : Implement upsample_nearest2d.vec op to pass codegen operator tests: scale factor supports only
c3d1680de : Add support for exception list in EXIRATenDialectVerifierBase (#3481)
9d4727d17 : Fix binary_op int variant (#3745)
b8f92dfe4 : add handling of out-of-range indices to Slice (#3689)
c937ecebb : Migrate upsample op to new layout declare using the gen_vulkan_spv.py (#3730)
79e9b7960 : Remove torch._constrain_as_value (#3731)
b24aa497e : Newline linter in OSS (#3729)
128912c02 : Fix Mac CI (#3740)
5e9db9ae4 : Mac OSS CI use cmake built gtest (#3713)
610eb4fea : Enable Arm unit test when running pull jobs (#3371)
01ae21dd8 : Reconfigure ET documentation orders (#3736)
9d12efd97 : set per channel quant for weight (#3709)
f42942abd : improve the eval perf with kv cache (#3732)
d44877beb : Added optimizer implementation (#3699)
1343224a1 : Fix OSS build + separate test build into its own CMakeLists.txt (#3724)
377c75024 : Use the pinned buck2 version on MacOS (#3726)
c4c165b04 : Enable customization of atol and rtol (#3720)
2194486c4 : aten.sin, aten.cos, aten.neg (#3706)
8cfad952b : Fix typo in homebrew install commands in CI (#3725)
11a77c45f : register and test `batch_norm` (#3716)
94dd87916 : Fully quality all deps in the shim (#3723)
199134638 : Default construct QueryPool with nonzero values (#3721)
c28ca1869 : Integrate a placeholder upsample_nearest2d.vec to Vulkan codegen operator tests (#3711)
1dd493500 : Verify types in custom op schemas (#3705)
25cc1dbb5 : Fix docs (#3719)
3c43fd694 : Ease MPS backend and partitioner imports. (#3714)
ed2da4f51 : Clean up lowering APIs (#3690)
91bf5b99d : Fix -Wdocumentation warning. (#3715)
fe910e1e1 : Buckify Core ML AOT components. (#3701)
168183708 : ArmQuantizer: quantize dropout with SharedQuantizationSpec (#3633)
306f06f85 : Linux OSS CI use cmake built gtest (#3609)
2d48cdc80 : Add support for buffer storage tensors (#3684)
ce751fc35 : update docs (#3285)
dd6246eed : Add libunwind third-party package (#3708)
f13a28e9e : Add examples/portable/test to pytest (#3703)
75dcaa627 : Fix torchchat model not working on demo apps (#3668)
c3b423c07 : add clamp op int variant (#3680)
e5fdd23a3 : Merge order-dependent exir/dialects/backend/test (#3700)
0dc948878 : Enable eval for pt2e quantization (#3652)
435ea9dd7 : Set CMAKE_ARGS for CI and add more pytest (#3693)
16a892dcf : Improve homebrew header times (#3695)
ab1c8aafc : Fix zero size tensors (#3702)
705ac9639 : Arm: Enable MobileNetV2 MI unittest (#3675)
04b99b7cd : Fix buck2 OSS builds (#3694)
0a87530db : register aten.hardshrink.default (#3687)
455f5cc1f : Refactor arm quantization (#3616)
d689b53fd : Upgrade BUCK2 version to 2024-05-15 (#3698)
d79ba63b5 : In remove noop pass check for symints (#3577)
900e39bc6 : update Llama 2 README to include more Llama 3 info (#3582)
d1e6fac96 : Add SGD state and options (#3595)
8261bf25a : Fill in third-party dependencies for folly
3d3039b8d : Fill in missing pieces from macro layers for folly
1428f3127 : Update the existing extra data ARN when running Android test on devices (#3688)
bc5ba991b : Arm backend: Provide more debug info for numerical diff issues (#3596)
a70755063 : Remove pt2_quant flag (#3676)
07dcf35b3 : Add ETDumpGen_Done state and reset when in that state (#3667)
69f4a2fd0 : Qualcomm AI Engine Direct - Model enablement - Mobilenet_v3 (#3673)
4b7c6db5d : Add fp16 qb4w linear test coverage (#3626)
a397bff43 : Add int to possible input dtypes for permute (#3685)
47a29a13f : Qualcomm AI Engine Direct - Update the build script (#3677)
91ef7fb7a : skip partitioning ops with bool args/outputs (#3679)
2b603df91 : aten.hardshrink.default in unary_ops (#3681)
788676f30 : Improve cmake testing setup (#3666)
a32e729fa : apply simple sdpa to coreml/mps backend (#3660)
39834b1f2 : Qualcomm AI Engine Direct - Model enablement - DINO v2 (#3655)
5fd2a11c7 : Simplify _parse_tensor_value in executorch/sdk/inspector/_inspector_utils.py (#3678)
0f21c6630 : Add find_missing cache to oss re client
c2efeda9e : Fix the naming for some of the eval wrappers (#3665)
90742d06b : add visibility of transforms (#3647)
074c35e7f : Add more scalar types to debug buffer parser (#3643)
a662ed675 : Adding exponentiation operator to Vulkan (#3661)
dcc61001c : Fix typo in XNNPACKBackend error message (#3642)
90dde7bc4 : Improve backend/arm batchnorm2d unit tests (#3658)
7874186d1 : Add div unit tests for Arm backend TOSA MI (#3657)
7dd0e7ff3 : Improve ET perf on M1 mac (#3628)
ac60698fc : raise error for wip work to prevent mis-use (#3648)
e3bea92d1 : Add unit test for aten.slice.Tensor operator (#3645)
dc36ef73e : Add more sdk/ to pytest (#3650)
ae311b482 : Use c++17 for OSS gtest (#3654)
400860066 : Add tests for zero-dim tensors (#3644)
6efe44ce5 : Improvements to OSS Test CMake generation and add tests (#3649)
0364d455d : separate quantize and export_to_edge in builder (#3613)
36f83eb6a : Make //executorch/backends/apple/mps:mps build internally (#3641)
524703f8a : Fix -Wdeprecated-definitions + -Werror data_ptr complaints in kernels (#3639)
5c7012132 : Enable zero-size tensors (#3640)
c7104db6b : Call the quantizer from the OSS repo (#3597)
46ec26be5 : check "transposed convolution" at beginning
7f05874d7 : clean up trunk by disabling skipped tests (#3625)
7a35b6f50 : Add a script for generating OSS gtest CMakeLists.txt (#3608)
4c2ee9b87 : add missing cmake flags for llama android ci (#3618)
29f8326a6 : Allow `.model` extension for tokenizers. (#3637)
121b04c4e : add missing cmake flag to set up JNI for LLama Android Demo (#3619)
ad419a512 : mean_to_sum_div export pass (#3615)
f87307428 : Fix span_test (#3627)
245434e7f : Print number of Vulkan subgraphs (#3624)
52c2cbb40 : Fix runtime/core unit tests (#3623)
2b1ed2616 : Ignore the FLATC_EXECUTABLE env var when empty (#3612)
d34527848 : Fix stories model export (#3622)
6b6c1fa43 : Recompose linear operator during serialization (#3621)
4b3744450 : Fix buck2 targets command for non-ocamlrep users of the fbcode shim (#3563)
8ddf836c2 : add missing cmake flags to build llama runner for android (#3611)
99ec94694 : Add typed-arena to oss shim (#3576)
fb24c9d8d : Fix headers search paths for nlohmann json in Core ML. (#3614)
aaa2f2e33 : Add a way to run c++ unittest on OSS (#3606)
39f9c0fb5 : temporary disable dim order ops in executorch/examples to mitigate oss ios ci issue (#3610)
0e7955d22 : Implement `aten.linear.default` (#3594)
e8a520c4f : Fix headers search paths for nlohmann json in Core ML. (#3607)
b64182d64 : Fix memory.view insertion except for output nodes (#3602)
01ce72c93 : Fix includes in Core ML backend. (#3603)
c69861ddc : Ease Core ML partitioner and quantizer imports. (#3564)
c853b3cc4 : Add base for sgd optimizer (#3496)
ebe701eda : Install mpmath in pip installation (#3593)
ea9647f47 : Add support for slice_scatter; enable index_put (#3399)
4b5e434c4 : Reuse the existing clone of the ios toolchain for Core ML. (#3575)
87d828aab : don't partition max pool with ceil mode (#3578)
e28803978 : Comment out memory planning log (#3581)
a88a78d39 : Separate mm and addmm into separate implementation files + cleanup (#3592)
c229e7b2c : Update README.md with correct link (#3591)
9db0a6928 : Simplify gen_oplist_copy_from_core file (#3549)
629b1127f : Cadence - refactor quantizer and passes (#3539)
60c94e81b : gelu (#3573)
43bfcd2b3 : Batch norm (#3569)
60fa0d3a7 : build and upload aar (#3381)
b93b7ae4a : Change custom_skip_targets meaning for constant_prop_pass (#3491)
1871ec1d0 : min pip version (#3526)
8e2e2e214 : explicitly import _export.exported_program. (#3574)
c83af25b7 : Add target_link_options_shared_lib to coremldelegate build (#3556)
749f6ab41 : Rust: Update sysinfo crate (#3520)
0563aa7a5 : Disable view_copy elimination for graph outputs (#3565)
8a63430b1 : add per-channel tests for linear (#3551)
6c5612201 : Break flag bc (#3128)
2ac7f2a84 : Add support for method level executorch backend config (#3266)
cc2d3b574 : Revise memory monitor to align with Xcode metrics (#3131)
ebdb152ab : Save and load VkPipelineCache data if path is specified (#3546)
f6f881af6 : Remove unneeded `api::` prefix within `namespace api` (#3554)
edaae14f6 : Split `Resource.h` into multiple files within `memory/` (#3553)
cc1154167 : make extract_delegate_segments=True by default (#3405)
b3c3e6547 : Use compile-time promotion to reduce optimized le op size & build time (#3534)
a4110e082 : Use compile-time promotion to reduce optimized add/sub op size & build time (#3533)
5adc8bfb6 : Use compile-time promotion to reduce optimized mul op size & build time (#3532)
8ebe1c948 : Use compile-time promotion to reduce bitwise op size & build time (#3487)
38c30d6fa : use utils:{min,max}_override in {min,max}imum ops (#3453)
21f5fbffe : Catch check on symbolic shapes (#3537)
fff20a738 : Update pytorch pin (#3562)
dd81fc76d : Fix backend/arm tests failing when there is no tosa installed (#3560)
f8403ed16 : Error out when token is outside of vocab size (#3535)
2d68bd35d : Rename `Allocator.*` as `vma_api.*` (#3552)
c8f306d50 : Stylize struct members with snake_case (#3545)
7841e96c7 : support Half in minimum and clamp (#3457)
5c05deffe : Use compile-time promotion to reduce fmod size & build time (#3456)
ad33982c6 : Use compile-time promotion to reduce remainder size & build time (#3458)
a2e13101b : Use compile-time promotion to reduce floor_divide size & build time (#3455)
25214d476 : Use compile-time promotion to reduce max/min size & build time (#3459)
3f125b67e : Update `sorted_vector_map`
a3adba416 : Easy fix for vulkan (#3548)
0492eebf3 : Remove dependency from pytorch core as git submodule (#3530)
c86128e5d : Remove dependency from torch gitsubmodule Part 1/N (#3538)
37d65a9b7 : Add clap_complete to our rust dependencies (#3544)
8a5668a90 : Size test all ops (#3522)
b0c0e3087 : Use c++11 for size test (#3509)
f9db02af6 : Fix backends/arm test after _skip_dim_order update (#3529)
251aa748c : reconcile Dim4D and NchwDim into DimIndex (#3489)
bd6cbc4c6 : add dim order check in edge verifier (#2544)
5b65163c3 : log_softmax (#3411)
2c1e2836e : Add instructions to convert Hugging Face models to PyTorch (#3523)
818b178a3 : softmax (#3410)
e5846dc4d : `addmm` and `baddbmm` (#3401)
7109df4b1 : optimized matrix multiplication, bmm (#3343)
c001f597f : Cadence - Move primary code to backends folder (#3353)
6c0c6752e : don’t check torch.min in the verifier (#3531)
a82db7c05 : Arm backend batchnorm2d fixes and tests (#3369)
69d2a8493 : Prevent converting runtime-assertion related ops (#3066)
f227fc3da : Improve logging when running tosa_reference_model (#3120)
d6591ff00 : use edge_compile_config in verifier (#3521)
e95f87b7e : make memory format pass as default in Executorch AOT (#2205)
50e9ee92f : fix constant tagging in mps backend (#3503)
7d850e47c : Port simple_view and simple_clone tests to new test setup (#3123)
a116d89b8 : Check preprocessor flag using #if instead of #ifdef. (#3525)
0beb072a3 : remove all torch.ones from tests (#3494)
2835d01e3 : make quantized model tests better (#3495)
f96e0351a : Fix exporting qnn quantized model (#3513)
9ceee3732 : Use compile-time promotion to reduce eq/ne scalar op size & build time (#3423)
b55410ab0 : Use compile-time promotion to reduce comparison op size & build time (#3422)
a3816fe81 : Use compile-time promotion to reduce op_mul size & build time (#3420)
1b73db448 : Arm backend: Update documentation: Main (#3392)
3df8695a8 : end2end memory format pass test with both aten and lean mode (#2177)
cf407922c : introduce _to_dim_order_copy op to runtime (#1970)
834b54578 : get compile config back to EdgeProgramManager.transform (#3500)
5e425c39c : Register aten.repeat in partitioner, add aten.sqrt (#3510)
d34f9f2ed : Only unset HOME when running cmake as root (#3507)
0909c5ac5 : relax check on bf16 (#3460)
3a2b2e8a3 : use c++14 for size test (#3441)
53078c409 : Refactor to_edge into smaller functions that can be reused (#3482)
1fd80feff : fix the temp allocator for backend (#3490)
b9488fe9d : Add suffixes to cmake flags related to building kernels. (#3499)
494f98143 : Rename frameworks to differ between kernels and backends. (#3484)
ae22b6c88 : Add descriptive suffix to frameworks. (#3479)
e19d48f08 : Build Mac frameworks. (#3429)
d0535fac5 : Format all cmake files. (#3498)
f3874e25a : Use compile-time type promotion for sub.Scalar_out (#3461)
bdc712031 : Use compile-time type promotion for add.Scalar_out (#3451)
2a31cbfdb : Add compile-time promote_type_with_scalar_type template (#3450)
623f0e03e : Add support for MobileNet v2 in Ethos-U55 and arm baremetal runner (#3056)
6353c412b : Delete core FunctionRef (#3452)
4bd45c39a : Resolve `compiler_flags` to allow Mac build (#3478)
74538f8d1 : fix examples/xnnpack/README (#3317)
9371c168f : Pin the setuptools version in pre_build_script.sh (#3493)
049c6a1d9 : remove exir.capture from test_mps_utils (#3072)
e493c92e3 : remove exir.capture from comment (#3177)
6a2d04500 : remove exir.capture from test_delegate_aten_mode.py (#3173)
6f0b8e020 : remove exir.capture from test_lowered_backend_module (#3169)
3c4a22342 : remove fixture tests (#3158)
753385cd7 : remove exir.capture from comment (#3159)
b76edd783 : remove exir.capture from test_graph_partition (#3167)
cc12d9b5a : Add MPS support to llama runner (#3464)
aee435df9 : Look on pytorch servers when installing pip deps (#3477)
474001324 : Audit and update the pip package metadata (#3476)
b5dd16905 : Dynamically determine the version of the pip package. (#3475)
f8a4d8cd9 : Implement smoke_test.py for pip wheel jobs (#3474)
904b50c6f : Disable wheel delocation on macos (#3473)
da2d695dc : Set the RPATH of _portable_lib.so so it can find libtorch (#3472)
61d7af734 : Build pybindings with -D_GLIBCXX_USE_CXX11_ABI=0 to match libtorch.so (#3471)
7d5ba40a3 : Build pybindings and link in backends when building pip wheels (#3470)
f6b152785 : Don't recurse submodules when building wheels (#3469)
e098cff1d : Add project.ignore to .buckconfig to reduce watched files (#3468)
f54f2b207 : Have setup.py unset HOME when running buck2 (#3467)
9214139d0 : Install build requirements in pre_build_script.sh (#3466)
3a253e96b : Use compile-time promotion to reduce op_where size & build time (#3417)
12bff870d : Reduce build time for op_where by consolidating bool & byte (#3416)
43ed46824 : Properly register quantized out ops into AOT export flow (#3449)
5d2a17b29 : In large graph sanity test, resize inputs between inference and log latency metrics (#3447)
a4ffd96a5 : Simplify SDK tutorial by moving cmake commands to a script (#3438)
1ebc1826c : allow specify -d fp16 (#3342)
104ea44ba : min setuptools version (#3448)
c4225e3cd : Simplify FunctionRef
1269777a8 : silence log in common case of pybindings set_output_pointer
df7e344b4 : Fix third-party/rust/Cargo.toml
36765d34c : Refactor Codegen.cmake to add argument names (#3382)
90d0933b4 : Update demo app images.
06230bf1c : Cut size & build time for add_scalar_out
c415d8b11 : Use compile-time promotion for op_add size/build time
f4e4ea6dc : Use compile-time type promotion support to lower op_sub build time
34aadcbb9 : Update view shader to support change packing (#3414)
a2cfefc75 : Use compile-time type promotion support to reduce op_pow build time & code size
8c8563f62 : Add compile-time promote_types template
62a7a1312 : Set min numpy version 1.25.2 (#3440)
d92e47aaf : Add out stream parameter to pretty_print and print_program (#3397)
d24dbe165 : remove fairseq log (#3436)
41eec7af6 : Inline simple TensorImpl methods (#3304)
6141d1097 : Add support for getting executorch quantized model (non-lowered) from ModAI
f0f4db877 : Add operators to Partitioner (#3407)
8458e639e : split_with_sizes with more test codegen (#3389)
38fad23b1 : Fix conv2d_prepack_test (#3415)
44ec67d17 : Minor updates to xnnpack tests
9a0e676a7 : add size test to ci (#3412)
b12ea4361 : half support for index_copy, copy and slice_scatter
987a1f98b : Manual LICM in op_convolution (#3274)
32a592691 : Remove -s compile option from release mode (#3367)
c992983f5 : migrate runtime to modern ET libraries (#2994)
1f7f8c9d9 : apply verbose flag from args
e9691ae90 : rename the print delegate graph api
130d7e8e1 : Fix the broken cmake commands in sdk integration tutorial (#3391)
7fa91fdd6 : Update llama3 8b enablement to include s24
b00185502 : Move kernels/portable/README.md to kernels/
fac1ae664 : aten.cat with more codegen (#3388)
ae250c03a : copy_channel_offsets node (#3351)
9314595c6 : Update the upload job (#3385)
dc726f9a0 : Fix stack buffer overflow when XNNPACK tensor has too many dims (#3233)
e5471a5a7 : Inline val_at (#3273)
6c06f2624 : Simplify conv2d weight prepacking (>2x pipeline-creation speedup) (#3368)
92b5aea5e : Fix remove _to_copy pass.
8dcc1ada3 : Add Disclaimer (#3378)
f3c5695c2 : Remove preview wording and TODOs from README (#3374)
f9b24154e : Integrate QNN backend to LlamaDemo (#3346)
5a57d63c1 : Reland Add more prebuilt artifacts
d3cfd00d5 : Change test case to not use deprecated constructor
44d4bac1c : Fix quantized_linear cpp op schema
7b3f5c614 : Reword "preview release" notice now that we are at alpha (#3364)
c209e12bd : Fix extension/data_loader installation (#3355)
c32b0a214 : Export the ET_VERSION_DOCS variable in doc build (#3358)
80d72f2d6 : Add EXECUTORCH_SEPARATE_FLATCC_HOST_PROJECT cmake option (#3356)
7b3b485f8 : Half support for index op
8ec0af9e7 : Extend setup cmake ability (#3349)
319a4f251 : Remove unneeded _to_copy in edge dialect.
8fcba3680 : Eliminate deprecated api usage (#2695)
fd63d0cad : Enable doc job to run on -rc tags. (#3345)
30128f3f8 : Build custom ops in pybinding (#3263)
79b79cb77 : Update llama2 readme file - main branch (#3340)
b2a724321 : register `view`, `reshape` and `select`
590cbcee7 : add buck2 installation into setup.md
b2c794a67 : copy node, aten.repeat (#3299)
9811eeaaf : Remove the sorting of the nodes from partitioning (not needed for now as Custom Metal kernels are not yet enabled) (#3328)
453ebada6 : Update MPS documentation; add helper script to build mps_executor_runner (#3324)
035aee4ad : Update readme.
aa3e73620 : Fix sdk_example_runner.sh (#3298)
34f59edd8 : llama2 readme (#3315)
f6758fc52 : Update custom kernel registration API
ce1e9c191 : Update readme.
b669056c1 : update typos (#3300)
727a68d97 : Revert D56480274: Add more prebuilt artifacts
98a7e6602 : Update examples/README.md with Llama 3 and names (#3275)
b560864c6 : Use relative links in llm/getting-started.md (#3244)
e25e5d25c : Fix portable is[inf|nan]_out compilation on older Linux (#3272)
5b0030f5d : Update readme. (#3301)
66a350b1b : add dynamic export into llm manual (#3202)
2dac5f3c2 : clone node (#3219)
d05361107 : Unsqueeze (#3172)
b5bb9217e : DynamicShim for dlsym user (#3136)
de0c2338e : update memory planning docs (#3270)
bf9888ff3 : delegation debug page (#3254)
b0a400c2d : Fix for TOSA BI clamp ops (#3092)
2f5cbd488 : Capture output of Vela and print on error (#3057)
67121854d : Add semihosting to cmake for executor_runner (#3008)
8b1f49a1c : Tie quantization of add operands and result together (#3091)
b7b40ac67 : Add delegate time scale converter to Inspector (#3240)
d98dc013b : Fix compilation with gcc-9+ (#3262)
ba0caf8e5 : Fix broken links on the coreml tutorial page (#3250)
02a6b66c2 : Add index.Tensor and aten.logical_not (#3221)
e9d7868ab : fix qnn install link (#3260)
9c99fe118 : Inspector APIs page
719b368c5 : Add allocate_temp method to KernelRuntimeContext (#3209)
329184ac2 : Update Profiling Section in XNNPACK Delegate Docs (#3237)
8748d572d : update XNNPACK/README.md (#3236)
b6e54d0c0 : move code under executorch/example (#3176)
45fd79619 : `conv1d` general case (#3223)
f89c31201 : SDK tutorial doc update (#3238)
3b0f27182 : Add more prebuilt artifacts (#3245)
1eaed2b9b : Add iPad support to demo apps. (#3251)
cf487f144 : update sdk delegate integration (#3246)
ee288684f : Update tutorial (#3242)
ca8e58964 : Fix a small inconsistency on the SDK debugging page (#3247)
783e932d6 : bundled program alpha document (#3224)
c004efe55 : Update Core ML Backend Doc (#3188)
ee8c3a62f : Enable doc upload for tags, disable for release branches (#3153)
6c36f10b1 : Support tensors in prim_getters (#3203)
7b854b648 : Expand visibility of targets needed for executorch_llama2 kernel (#3174)
cb777638f : Pin CoreMLTools 7.2 (#3170)
0afb73d01 : Bump torchfix from 0.1.1 to 0.5.0 (#3220)
4342cf2e1 : fix typo (#3235)
d8e94b0d6 : strip symbol when linking (#3234)
4668b5d8b : Support llama3 (#3232)
9783697c2 : Update TorchNightly to 2024.04.22 (#3225)
aec2549b2 : Update to transformers 4.38 (#3227)
6c30eea98 : Fix executor_runner_mps and mpsdelegate linking with pybind (#3222)
438944248 : Fix LLAMA app (#3228)
03c7a99b8 : Bump the torch pin (#3199)
b41f763b4 : Update some SDK docs from MVP (#3212)
9d2af4c31 : backout the schema definition change (#3213)
9769386f1 : Bring back `extents_ubo()` as `texture_limits_ubo()` (#3217)
dbf90c272 : Create dependabot rule to upgrade TorchFix version (#3208)
67f337677 : Remove unused extension/aot_util directory (#3216)
969aa9690 : Add a pure python wrapper to pybindings.portable_lib (#3137)
3bb591ce9 : Qualcomm AI Engine Direct - Fixed uint16 tensor and linear op (#3196)
1a93deebd : Update setup.sh for tokenizer selection (#3207)
67123b676 : Refactor export_llama_lib.py
90d0c1a1f : Add memory and vector include in managed_tensor.h (#3201)
d24af2bbe : Add quantized ops to pybindings (#3206)
8dc54d569 : Update Xcode project to build tiktoken tokenizer for LLaMA 3. (#3197)
73599f483 : support emit sym value from delegate (#3103)
ebc38b211 : Fix linter. (#3195)
a7a9ab3c1 : Specify OSX deployment target for python package. (#3194)
7c74010ea : Specify OSX deployment target for python package. (#3193)
c350e58b0 : Deprecate `gpu_sizes_ubo()` and `extents()`; also toggle packing layout via specialization constants (#3181)
d89eabb52 : Use "latest" as the version for prebuilt frameworks. (#3161)
36453fc2f : Switch to a dedicated branch for prebuilt packages. (#3184)
87eb15564 : conv1d with bias=False
7b1f10d5e : Qualcomm AI Engine Direct - Enable SSD300_VGG16 (#3010)
1d467d0a6 : conv1d, special case
70baafe6e : Update model arg name rope_theta to be consistent with those in llama's website (#3147)
c8b43d27b : add kv cache to eval (#3162)
7469a280a : Slice, with lots of codegen improvements (#3171)
4ea04738b : fix test-demo-android (#3168)
023ca0724 : Add missing ops for RNNT predictor (#3125)
d47f9fe5a : Docs for lower smaller models to mps/coreml/qnn (#3146)
08005944c : Adding .model tokenizer to selection (#3163)
3ef9d2c28 : Rename tokenizer file in Xcode. (#3160)
db17853ca : Introduce `ParamsBindList` to prevent needing to pass `shared_ptr` to bind parameter UBOs (#3150)
bf5093a26 : Clean up api::vTensor class (#3149)
825db6c73 : Call destructor explicitly when move constructing `Value` (#3148)
bd07c7525 : make op_split_with_sizes_copy support dynamic shape (#3152)
fa433cbdc : Add link to llama3 README file (#3156)
269b6adb1 : Fix embedding_4bit out variant (#3151)
ceae80a8e : Instructions for Llama3 (#3154)
3257c669c : qnn end to end flow for stories model (#3038)
74dba6ef8 : Comply llama2 runner with gcc 11.4 (#3140)
2c467dde4 : Fix quantized embedding export logic (#3095)
cf781073f : Add a simple sdpa (#3037)
b5085aa64 : add cpu device to run eval on cpu (#3133)
060d151c7 : Update Llama3 perplexity numbers in README.md (#3145)
3db036298 : Add reference to the llama2 example for llama3 (#3142)
06beacef4 : Update README.md on the evaluation parameters (#3139)
1eed1253e : aten.view_copy (#3129)
523c2cbb3 : Update README.md for llama3 (#3141)
02ec58944 : Adding Gotchas in README.md (#3138)
74204f409 : Update README.md (#3094)
8fd92bc89 : 4b embedding quantizer (#3135)
944dd4c5d : delete exir/experimental (#3109)
8d25288dd : Buck build - fix use_tiktoken config
ab02a9c83 : fix llama-runner-linux-android (#3127)
29faa2e75 : Preserve modelname (#3122)
f2e660b60 : cherry-pick: Add required deps to pyproject.toml (#3117)
4c552d4c9 : Define embedding_4bit (#3121)
e0b064752 : Free Vulkan delegate segments after compileModel (#3116)
e69a662c5 : Fix typo in sub & clean up (#3100)
eb47c4e38 : Add quantized cmake option back to fix build-apple-framework (#3115)
6510625b5 : Delete llama_quantized lib (#3119)
910f85106 : fix embedding_4bit resize (#3118)
414cd05d1 : Documentation for Vulkan Delegate (#3113)
d73186695 : Handle missing data types. (#2984)
b19d58605 : throw Java exception when execution fails (#3112)
203ae4045 : remove exir.capture from example delegate test (#3101)
b3ac5332e : update readme to not use exir.capture (#3107)
9c2b41b9a : Don't crash when execute_method fails (#3104)
cca9f65dc : remove exir.capture from quant fusion test (#3106)
5fbd1f489 : make_seq_tensor in codegen (#3088)
28be9d680 : Improve codegen for aten.permute (#3087)
de0071764 : aten.permute_copy.default (#3086)
49928bcde : select_copy.int (#3085)
78cb141f8 : Enable additional specialization constants in compute shaders (#3079)
0815c2b55 : Introduce `SpecVarList` to represent specialization constants (#3078)
7e14c0e61 : remove exir.capture from test_rpc.py (#3102)
b341223dc : move mask as sdpa input instead of attribute (#3036)
f729b2d1d : Create __init__.py in example folder (#3093)
73438a5b9 : remove exir.capture from dynamic_shape_propogation test (#3070)
20bf0dbdc : change call_delegate_autograd (#3073)
5f9478d7c : Fix build time warning (#3097)
980aaca0d : Fix llama2 README.md cmake instructions (#3096)
ebde8e1f1 : forward fix ConstantArgument initialization (#3074)
65f2693e5 : Remove noindex from upload to gh-pages job (#3077)
22dfc6ac7 : Load missing state dict in edge program serialization (#3076)
bae038792 : [executorch][llama] support mqa (#3080)
1f4b63109 : Add quantized op support to llama runner (#3062)
f14dc83df : Handle empty (size=0) tensor in Inspector (#2998)
5d7949d86 : Update doc-build.yml (#3071)
9b55f48bb : Fix Android llama2 demo app after #2962 (#3032)
54f9f3eea : Set kernel default visibility to hidden (#3060)
4b6d2c393 : fix linear recomposition (#3064)
eb664a06d : Add int16 support to aten_bridge (#3069)
c73bfc02a : Update doc-build.yml (#3045)
89cfa73f8 : ETRecord ser/de handling "None" outputs and more (#3039)
9931301af : Fix formatting issues in executorch/test/size_test.cpp (#3065)
ab62707bc : Enable FP16 type in operators (#3059)
d481c1156 : Bump Vulkan API requirement to 1.1 and enable 16 bit and 8 bit types in buffer storage (#3058)
473c98c2c : Fix test_llama_runner by hiding tiktoken (#3055)
3b31eff78 : 4b quantized embedding table operator (#3050)
458d743bc : aten.select.int (#3033)
d0208d0b3 : Fix iOS build by excluding external CoreML SDK dependencies (#3043)
7b375fe7f : Dynamic Conv1d + W2L (#2976)
15f141b89 : Update pytorch commit pin to 04/15 (#3047)
780ed2556 : Add tiktoken to eval (#3044)
49d1f02b2 : Add tiktoken (#3015)
eb44e887a : aten.full.default (#3013)
74576e83d : native_layer_norm (for width dim) (#3001)
075fe4073 : remove duplicate generate_lib_aten target under aten kernel (#2951)
64497b7ca : Move compile spec to ArmTester interface (#2991)
59023ed74 : Clean up shader library and introduce some new conventions (#3024)
645256dc1 : generation.py with kv cache (#3030)
7c8115558 : Fix lint in clang-format (#3041)
7616d421e : Fix handling constant inputs when delegating (#3031)
057e43228 : Update to clang 18.1.3
57dd7f19b : oss: Upgrade `clap`, add `string` feature (#3035)
c61ef4447 : Apply clang-format 18
c095046f7 : update the pinned pytorch hash (#2824)
4d7dd036a : Throw in VK_GET_OP_FN if op is not found (#3028)
cd32712b1 : Update README.md and add submodule update (#3029)
21fdc4e46 : Change tokenizer name to bpe_tokenizer and extract a base class (#3009)
c075eea06 : Remove RemoveRedundantViewCopyPass (#2464)
cd248b447 : Run LlamaDemo app on AWS Device Farm (#3004)
17c64a37c : add more instructions and examples on Delegation (#2973)
0f379bafc : Update README.md (#3012)
74eb8b345 : Decouple custom ops in llama_transformer.py Part 2/N (#3007)
488afc5e9 : Decouple custom ops in llama_transformer.py Part 1/N (#3005)
b1edc3db5 : Add util to print out ops and frequency (#2983)
5b7c4bab9 : Fix 3 CI jobs (#3006)
6acc86ff5 : Add exir.save and exir.load with export_serialize (#3000)
ab323a59b : add export configs (#2965)
d1bc794b1 : Dynamic ViT (#2476)
33f41bd37 : Dynamic ResNet (#2474)
fec9c2f1f : dynamic mv3 (#2475)
1f5a8339f : dynamic mobilenetv2 (#2440)
bf59da65d : dynamic qd8-fc test with 2 batch dims (#2441)
65be9b4d2 : Dynamic Shapes (#2442)
46cf1c75f : Add Tiktoken in python (#2986)
76d851380 : Introduce `vTensorPtr` to prevent reference invalidation and remove `get_val()` API (#2978)
5ef8427cf : Extend constant prop pass to work with int/float/etc scalars and fix input specs. (#2950)
3b727a785 : Fix build-framework-ios CI job (#2996)
ce344bc84 : Skip annotate boolean input (#2957)
62a4dd34a : Replace view copy with view (3/3) (#2463)
1f6f711f9 : fix et-view (#2843)
c32268574 : Use new API to register custom ExecuTorch kernels into ATen (#2937)
7b8343b3a : Update name from xtensa to cadence (#2982)
c7fd39490 : Fix tutorial for Qualcomm AI Engine Direct Backend (#2956)
6e431355a : Use new API to register custom ops for llama model (#2916)
e641ffc06 : Add llama2 readme in examples/README (#2992)
7c719709a : Minor fix in README.md page
7d4bafcea : Core ML Has Added `Index_Put` Support, No Need to Skip Anymore (#2975)
d761f99f6 : Add a mock perf test for llama2 on Android (#2963)
2fc99b061 : Forward fix macOS job after test-infra #5086 (#2980)
75c27c375 : Fix llama runner test (#2981)
cb9caa30c : Add the missing import generate_etrecord to doc Getting Started with LLM (#2977)
859e924cd : Update OSS repo (#2033)
948760aae : Consolidate tokenizer interface (#2954)
d209e41e0 : Consolidate EXECUTORCH_BUILD_CUSTOM option (#2935)
8f8d969cd : Custom ops API small fixes (#2936)
554cd2792 : Qualcomm AI Engine Direct - Enable per channel linear op (#2822)
26365f188 : Android demo app tutorial fix for XNNPACK and QNN (#2962)
e733f2db3 : Refine the LLM manual (focus on the debugging and profiling part) (#2952)
a983ebc9e : Replace `std::stringstream` with `std::string` for Shader names (#2964)
d993797d4 : Fix failing CI jobs caused by #2934 (#2961)
b14570151 : add aten.sum.default (#2807)
f0bfc3c03 : Add convolution cases to codegen (#2920)
8aaf2c5bb : aten.convolution (Bias=False) (#2887)
564c276a5 : Compute graph print readable (#2825)
d9781d0bf : Initial empty repository
de7fdaa86 : resolve_buck.py: Add an entry for darwin-x86_64 (#2868)
218f643fa : Make minor updates to LLM guide setup instructions (#2940)
99c4f4e08 : aten.convolution (Pointwise) (#2886)
f00afe721 : aten.convolution (Depthwise Output-Tile) (#2885)
02f565ee3 : Fix indentation in selective build example code (#2773)
3661a1169 : s/heirarchies/hierarchies/ (#2772)
6cb6051d6 : Use __ET_UNLIKELY in assertion macros (#2949)
b26eee879 : Introduce convenience constexpr for `StorageType`s and `GPUMemoryLayout`s (#2948)
459965051 : Fix Validation Layer warnings about wrong image layout (#2854)
c4ac14c5a : aten.convolution (Depthwise) (#2884)
8a6427e5a : aten.convolution (Transpose) (#2883)
cb6ddae73 : update torch pin (#2944)
c4054f166 : oss: Add `buck_build` cfg (#2902)
d309e9dc5 : Add executorch_no_prim_ops target (#2934)
5d299fe0e : aten.convolution (SlidingWindow) (#2812)
b51be728a : Fix comment style for parameter names (#2847)
1ae8aa570 : Refactor Pool.cpp (#2836)
9630f9d45 : Introduce add_tensor_like consuming ValueRef (#2835)
7f1fa70fe : Assign trivially moveable values to const-ref variables (#2834)
9c9d2d703 : Rename to max_pool2d.yaml (#2820)
9fd1a0e31 : Bump version 0.2.0->0.3.0 (#2703)
fcefd105f : Move legacy range constraint calculator to executorch to unblock pytorch CI (#2925)
3708e7431 : Qualcomm AI Engine Direct - HardSigmoid follow up for FP16 / Test cases complement (#2790)
8cabeac6c : Fix generation speed calculation. (#2932)
992e73127 : Prefer native buildifier suppressions over @lint-ignore BUILDIFIERLINT
01bac3d95 : Refine LLM getting started guide for uniformity, fix critical errors (#2911)
6db9d72ea : Fixing minor issues in llama2 7b repro (#2926)
ce447dcef : Update iphone 15 pro benchmarking numbers (#2927)
599cfde9f : exclude mutated buffer (#2876)
dc7e4d516 : Update docs (#2919)
9ba8bc956 : Update docs for the demo app. (#2921)
3e256ffa6 : Use unified path /data/local/tmp/llama (#2899)
d3326a207 : Fix checksum for prebuilt packages.
4449f695f : Update packages checksum. (#2917)
643c62848 : Revert "Use new API to register custom ops for llama model (#2840)" (#2912)
61ad48d9c : Qualcomm AI Engine Direct - Requantization Mechanism Implementation (#2823)
86b326a9f : Fix model path/tokenizer path dialog (#2896)
71f899d40 : The LLM Manual (#2905)
e6540c126 : Fix llama runner mac CI jobs (#2903)
fd266660c : Add static module extension target (#2904)
81a7e8817 : Use XNNPACK for stories110M model (#2895)
020d8bee8 : Use new API to register custom ops for llama model (#2840)
4d9863fa6 : Add a link to Android app tutorial (#2893)
b6c107013 : Use release build (#2897)
1e8c0fff1 : Fix android runner command in llama2 readme (#2891)
081c8492c : Print checksum for the uploaded frameworks. (#2901)
3b85946d0 : update tutorial and provide a SDK setup tutorial (#2879)
3945100de : Update README.md with TorchTune reference (#2873)
62193672c : remove image (#2888)
678c1c01a : Fix executorch/exir/backend/test:test_passes (#2890)
4c0627491 : Update benchmarking numbers (#2881)
6a96e090b : OSS CI: Test our endorsed llama path
ba7bbe8c3 : Review getting-started-setup for flow + UX (#2817)
d17864dab : Update to ExecuTorch/XNNP submodule to et_v20 (#2770)
ec1554275 : delete exir.capture from some more places (#2852)
6dfa457f9 : setup.sh improvements (#2858)
fce75e706 : Fix llama2/README.md. Add Hugging Face link. (#2874)
1adf2683b : bump torchao version (#2849)
cc9a71503 : Fix vulkan_backend linking in Android demo app (#2872)
e8af9aa26 : update buck (#2860)
76bf85457 : Restore original placeholder names (part 1: top-level renaming) (#2859)
a51b07dd5 : Update android and what is coming next section (#2866)
e5a8de09a : torchao sync 04/04/2024 (#2865)
86ee2f632 : Update Xcode project to use debug libraries by default. (#2869)
92b5cfc78 : Provide debug packages. (#2863)
8d8fe09fb : Android app use stats from runner (#2801)
f64637190 : Add eval for eager et (#2856)
3b4a9e803 : Refactor Swift package structure. (#2851)
ac5dd3046 : Fix exir deserialization logic (#2857)
033f96215 : Adding instructions for generating model accuracy (#2855)
f64130e7f : Expose timestamp stats (#2794)
dd06b7a69 : skip one op for coreml partitioner (#2826)
fbf834aac : Cmake llama instructions (#2853)
bacc0c821 : Update results section (#2850)
88b6cd242 : remove exir.capture from coreml script (#2830)
f1badd008 : add custom ops to buck oss (#2712)
65505b2f8 : Add dummy files for debug frameworks dependencies targets. (#2848)
923f2d5ed : Make pthreadpool not use GCD on apple platforms (#2839)
077e4030f : Build debug frameworks. (#2828)
64a6757e6 : fix constant tagging util function (#2816)
d9c09f4aa : Moving Quant functions out to quant_lib.py: Part 1 (#2846)
399482c33 : Remove unused GPTQ code (#2844)
7c7886b3e : Use CMAKE_CURRENT_SOURCE_DIR for host flatcc paths (#2845)
b5d400247 : Android automate cmake build in gradle (#2838)
ea3927ae5 : Remove examples/third-party/llama (#2832)
7581f91c1 : Make examples/models/llama2/install_requirements.sh executable (#2833)
39d251990 : Prepare Llama2 README.md for consumption (#2831)
a25dea630 : fix a set union bug (#2827)
51d3389c2 : delete test_quant_lowering_custom_backend_pass.py as there are no tests in it (#2809)
be618c27e : Port Arm backend op unittests: conv op + other ops (#2647)
bd6ceab42 : Handle str enum breaking change in python 3.11 (#2819)
29d7b4dfb : remove exir.capture from a bunch of tests (#2719)
ca9530f9d : Add placeholder wrappers for ET evaluation: Eager and Runtime (#2821)
65f3e189e : Add pooling and softmax unittests for Arm backend (#2645)
b8a28d44f : Build with forked API instead of depending on PyTorch (#2813)
4b0ed9194 : create et_view primop (#2553)
6c3daa02f : Replace view_copy with view (2/3) (#2462)
93fa3d629 : Replace view_copy with view (1/3) (#2461)
b1efd43cd : Fix StorageBuffer size in PrepackNode (#2811)
31d5c614f : Force callers to manually provide GPTQ args (#2795)
06668cf50 : change warning from pybindings around memory planned outputs (#2799)
d670a7317 : Remove obsolete vTensor members for buffer storage support (#2774)
eed70fa4d : remove exir.capture from coreml example (#2724)
26d7bccb6 : Fix xnnpack pybind (#2802)
8ab6daf1e : remove exir.capture from test_arg_validator (#2808)
7938344ee : update the pinned pytorch hash (#2737)
bb8a8c650 : integrate qnn backend to runner (#2806)
d612c23e5 : Migrating to executorch (#2768)
c731673da : Fix static llama AOT tracing (#2800)
c06c89f78 : Update namespace from at::native::vulkan to vkcompute (#2798)
a6e73175e : Fork Vulkan API into ExecuTorch directory (#2797)
696512016 : Fix partitioner parameters recognition (#2777)
755c89ccf : add aten.sum.dim_IntList (#2796)
daa217f2e : Use threadpool (#2761)
6e2705e55 : Arm CI fix (#2771)
f353405d7 : Update Xcode project with linker flags.
15d9ddd16 : Qualcomm AI Engine Direct - FbNet enablement (#2706)
24477df84 : Add fuse_dq_q_pass in exir/passes and also add it to HTP backend (#2295)
f178b1a23 : Add custom ops framework package. (#2766)
091e524e7 : Propagate debug handle. (#2783)
02325d63e : Upload custom ops framework.
fd1a338be : Refactor select_copy into util (#2784)
067a82921 : Build custom ops framework. (#2765)
21944ef52 : Mark optimized_kernels as a dependency for custom_ops. (#2764)
3e2e26dbb : extract the delegate segments to allow emit large models (#2714)
e283967de : Add Llava model to examples (#2576)
250f6818d : New pip build system (#2499)
db7c0570d : Update tests to also count constants (#2775)
480431849 : Do not ignore init() errors (#2778)
dc909bef9 : Revert D55175415: Multisect successfully blamed "D55175415: Fix lift_constant_tensor_pass to make sure constants are not inserted in the state dict" for one test failure (#2767)
d4b3e5cc6 : Automatically generate operator tests (#2754)
1c98d78ec : Android screenshot (#2756)
5cb8575a7 : update xnnpack static docs for alpha (#2755)
c6f13f92c : Add llama demo app to CI (#2763)
764c3532c : Ease main thread by updating UI on every Nth token only. (#2760)
f0c666637 : Restrict promoting commit to stable branch by including Android & Apple jobs (#2757)
f799c0ef6 : add runner android build ci (#2733)
05b0d3add : refactor runner to fix bugs (#2752)
a624345ca : Fix lift_constant_tensor_pass to make sure constants are not inserted in the state dict (#2558)
57e34494a : Defer resolution of the default value of arguments used by quantize (#2738)
4df99cc44 : Plumb group_size to 4b quant (#2734)
501b5a2f2 : fix bug in mask slicing for 1 length sequence and no kvcache (#2742)
cd7d5c364 : Android build fix (#2753)
95b0daa85 : Fix Android docs and add gradlew helper (#2751)
45c255700 : Simplify pip install step of install_requirements.sh (#2731)
ebf0a065f : Fix release script (#2736)
4de53bd93 : add abs, sigmoid, tanh to ET-VK (#2605)
f7fbc7a9b : Skip building debug frameworks. (#2748)
132fecf39 : include vector explicitly (#2740)
eb8eca807 : Android build improvements (#2725)
694d84183 : Fix frameworks build. (#2744)
9922c5421 : Restore CMake configure comment (#2723)
649eea979 : Fix llama runner cmake for android (#2673)
a0977b2ce : Use eigen blas for blas routines (#2672)
3881bff40 : Cmake sdpa custom op (#2671)
33e13a23a : Enable blas and add eigen blas (#2670)
a1974126a : Fix pyre errors (#2729)
a15e7954e : Update docs (#2717)
d6a5fb39d : Prepare a simple TOC for 4/2 launch. (#2728)
34b1cfc62 : Fix frameworks build flags. (#2739)
0eaa59918 : Fix cherry-pick workflow (#2727)
e7a429a2d : Qualcomm AI Engine Direct - Adapt to new IR capture flow (#2431)
3cf9f2237 : Build debug frameworks. (#2732)
244932651 : Minor bug fix in the post-release script (#2705)
bf96c4dd1 : use curl instead of wget to download ci artifacts (#2720)
25c5b67d7 : Fix BinaryOp broadcasting for packed dim (#2653)
04d568d9f : Revert D55434703: Add cherry pick script from PT
7a9c45330 : forward fix for taking list of partitioners (#2713)
609f71e20 : Add custom info plist and entitlements keys. (#2721)
25cc29562 : Add cherry pick script from PT (#2716)
a19a32bcf : New util function for getting delegation summary (#2611)
66c5fc89d : Add xnnpack to llama runner mac & linux CI job (#2677)
d64fc74c5 : No hardcoded default checkpoints (#2708)
79a4ba049 : Make docs build use the install pip package (#2678)
cf839648d : CMake Use python instead of python3 when in a (non-base) conda environment (#2687)
456dab5ea : Update Swift PM manifest checksums. (#2715)
08b644f46 : Android gitignore (#2692)
ccecd3ac1 : Move all headers under executorch dir. (#2707)
3c64e8f0c : Better 4bit packing (#2649)
7149a9c81 : Differentiate between resolve_buck failing due to bad buck version and failing from unexpected error (#2696)
f4b97b242 : taking list of partitioners (#2628)
5b3f748d1 : fix add qs8 const test (#2699)
853fb5b4a : Fix project frameworks search paths. (#2702)
e506df2e0 : Fix lint errors (#2701)
a37efd391 : Forward fix master branch (#2700)
3095d09fc : (Partial) Bug fix for test_xnnpack_dq4_kv_fp32_llama (#2691)
3aefd5653 : Fix export tests (#2693)
253f2fae2 : Demo app fix (#2697)
265a5e83d : Clean up demo app and docs (#2685)
acf5d83e0 : Add Xcode project. (#2694)
fa08ac3f4 : Docs and setup.sh (#2650)
4f98779d6 : pull in java deps automatically (#2596)
dfeadabaa : Verbosity control for export (#2690)
a526af360 : mitigate llama2 export issue (#2682)
5e23c3329 : off by one error in sdpa cache op (#2689)
9baa2dfa9 : Enable Half for scalar_tensor, full_like, where.self_out (#2663)
cde514ca0 : fix runner max seq len (#2688)
61b1b8385 : Update Swift PM manifest checksums. (#2686)
14996cc91 : Automatically download buck2 when not provided (#2619)
3527bd79b : Update Swift PM manifest to include optimized ops. (#2668)
20bf71173 : Upload framework with optimized ops. (#2683)
ec1f34397 : Bundle optimized ops. (#2680)
4111b3f98 : Rename quantized and optimized build options for OSS to follow the rest of options. (#2664)
682d9ac59 : Turn off optimized kernels so that they don't build and link unless needed. (#2676)
bddeb994c : Fix Android CI (#2659)
a3bf63b08 : skip emformer predictor ci
9cb5ab9d5 : Skip recursive lookup for submodules. (#2654)
40f21e765 : Build optimized ops framework. (#2667)
9de4fb201 : Disable internal fp16 llama unittest
bc929069c : Mark optimized ops depend on portable ops to avoid duplicated symbols error. (#2665)
d3c23bf3f : work around +/- inf issue when compiling flatbuffer file (#2460)
1346824d6 : Mark quantized ops depend on portable ops to avoid duplicated symbols error. (#2662)
927df2826 : fix export dynamic shapes key error (#2656)
378961e8d : Fix flaky test test_lowered_backend_module (#1930)
551aaf09a : Fix the build of examples/xtensa (#2642)
02050de74 : Patch test_quantized_aot_lib.sh in OSS (#2652)
e3cc1f4e8 : add is_xplat to env_interface and runtime wrapper (#2643)
4f97b78e9 : Cleanup dependencies for the runner. (#2648)
9a2dafad6 : Disable fp16 llama oss CI jobs (#2651)
6a0a6c7b8 : Add REGISTER_OPTIMIZED_OPS option to CMake build (ET + Llama) (#2632)
d70c52b75 : Refactor aot_util build (#2631)
67a7d20e0 : Fix handling of nonpersistent buffers (#2604)
a531ca5a5 : Qualcomm AI Engine Direct - Enable zero copy feature (#2531)
9e922d3f8 : add qnn option (#2606)
be4325510 : Add quantized backend dependencies placeholder. (#2641)
cd53f2952 : Add quantized backend to Swift PM. (#2603)
7f96f5a85 : add duplicate constant node pass (#2570)
e603f4fb0 : allow dummy_llama2 script to take real checkpoint/params (#2588)
c8f2d8d72 : Update pytorch pin (#2636)
ef2178752 : Move LM wrapper generation into helper (#2609)
542ef506f : Fix examples/portable/custom_ops/CMakeLists.txt
45597937d : Add Vulkan backend to llama demo binary (#2608)
1d34cee1d : Build and upload quantized backend artifacts on CI. (#2615)
0a1e855c7 : Fix after pytorch API change (#2624)
2dd59f1db : Expand cmake linter rule to exclude checks for line length and more (#2625)
1899be4ee : directly using dim order for memory format comparison. (#2170)
102614a2d : rename default dim order as contiguous dim order (#2157)
31160b1bc : make utils support empty dim order (#2142)
8be87e2a4 : fix xnn executor runner cmake (#2613)
42d59d7cb : Migrate google java format from 1.7 -> 1.21.0 (#1771)
126f91835 : fp32 as default data type because fp16 not fully supported (#2597)
c0715652d : Build option for quantized backend framework. (#2617)
b697a0b99 : Log more cmake flags. (#2616)
dfa5a6246 : Consolidate build options. (#2614)
5919212b7 : Back out "Build CPUBLAS as a separate CMake target" (#2610)
a9dc34130 : remove torchao dependency (#2599)
4c9994886 : Fix formatting data types in executorch/extension/pybindings/pybindings.cpp
913e12249 : Fix formatting data types in executorch/runtime/executor/tensor_parser_aten.cpp (#2497)
579cccea4 : Format Eval output and enabled cuda support (#2569)
725c59055 : add xnnpack backend (#2585)
14e31f0cf : Add github workflow for cherry-pick (#2594)
e3b34f737 : Build CPUBLAS as a separate CMake target (#2601)
00be1124c : refactor the transform as a standalone function (#2593)
8532e79a6 : Update coreml runner to only profile model when prof… (#2574)
911e9a356 : Build optimized library with CMake (#2530)
24fe99c02 : Qualcomm AI Engine Direct - Fixed the bug in dummy_llama2 script (#2587)
1d934f029 : Kv Cache as mutable buffer (#2595)
d52ebdcbe : Revert D54763963: Add etdump generation to llama_runner
e7748f87d : Add script to apply release-only changes (#2578)
2d8fa1f13 : Add some gptq related args to quantize function (#2577)
ba920e452 : Update pytorch pin to 2024/03/21 nightly (#2562)
101d78fa5 : Basic gradle rule (#2589)
2743d729b : Skip building frameworks and switch to Swift PM. (#2583)
a6aefc004 : Use `Int8DynActInt4WeightQuantizer` in torchao (#2551)
914f64276 : Fix creating graph signature in delegation (#2580)
001cc5f7e : Add convolution unit test for Arm backend (#2427)
f6803b8d5 : Re-sync with internal repository (#2582)
0b12daf24 : Qualcomm AI Engine Direct - Implement sdk profiler and integrate with Qnn profiler (#2227)
3152d7f33 : Arm backend: Add and use ArmQuantizer (#2561)
d06ccd27a : Fix quantization for input to reference model (#2317)
9c7bb4561 : Add debug feature to deserialize TOSA fb on dump_artifact (#2560)
a13a953bb : Add script to cut release branch (#2567)
c89a75813 : use partitioner instance directly in to_backend (#2513)
87167806c : Optional and ArrayRef for make_aten_functor_from_et_functor (#2495)
6bef9e7db : Rename int4 to 8da4w in llama2 quantization (#2573)
60eb1bb27 : Clean up max_pool2d (#2575)
a701d82f2 : preserve is fine in our ops (#2572)
b7c146a52 : Add etdump generation to llama_runner (#2359)
ec6b88aae : Add a new line for lowered model in print_delegated_graph()
9f552af4f : Add support for event tracer in module constructor (#2360)
07eb60983 : Add Vulkan executor runner to Android JNIs (#2563)
c45d6d5b4 : Fix @param doc comments. (#2565)
863397972 : Plumb max_seq_len into llama model (#2566)
cd9a84c33 : remove duplicate deepcopy (#2550)
2be3fd88b : Fallback to deepcopy when fake program fails
1c2ed7bf6 : Fix coreml runner build (#2536)
43bae287f : Add standard libs and frameworks deps to Swift PM manifest. (#2568)
12b5324c2 : Generate int variants for binary_ops (#2472)
8f634a87f : Add EXECUTORCH_LOG_LEVEL option to CMake (#2532)
e3ee6d923 : add coreml backend option (#2554)
a41ac1c6c : aten.max_pool2d_with_indices (#2548)
a8a49a988 : aten.max_pool2d_with_indices (#2547)
8cf4d1fa1 : Add support for SchemaKind.mutable to to_out_variant (#2541)
7bff77180 : Enable storage type and memory layout settings to be serialized with Vulkan graph (#2540)
810fcdd11 : Refactor sampling time log (#2549)
634a2065e : Calibrate portable ops (#2537)
3270d22e4 : Qualcomm AI Engine Direct - Enable 4 bits BW quantization (#2506)
5a93353be : Get access to ExecuTorch (#2471)
60764d617 : Replace wget with curl when setup macos job (#2539)
ccd2fe8a6 : Make sdpa_with_kv_cache thread parallel (#2501)
08733f0c7 : Make RoPE freq calculation broadcast for per head (#2353)
9c2092916 : Update Swift PM hashes. (#2546)
42a84b71e : scalar_type doesn't depend on other runtime/core lib (#2538)
8d55f91ca : Load Q4_0 gguf model and run in eager mode (#2510)
04dc65a04 : Add tensor subclass to handle ggml quantized tensor (#2504)
202bcf872 : Add groupwise quant support (#2512)
f9cad4e11 : Build iOS demo app in trunk for every commits again (#2535)
f5d75ffa2 : Remove deepcopy (#2502)
dd99909a7 : Add license headers to Quantizer files
5826af6f7 : Temporary fix of different memory formats between Pytorch and TOSA (#2371)
80e39891b : Fix debugger tests - sdk issue (#2503)
22cd0f868 : Create aot_util extension, build convert_to_qc4w under aot_util (#2508)
7c86c2b5f : Don't bundle the license file since it violates the archive format. (#2529)
31d7ae54b : Eval_llama clean up
5e832cbff : Move fb-specific files back to fbobjc. (#2526)
625257b0e : Make `WIDTH_PACKED` the default packing scheme (#2518)
5f133b305 : Add matmul operator (#2517)
f7300b2e0 : Add binary op support for height and width packing GPU layouts (#2516)
2d9c48900 : Add CPU/GPU transfer shaders for width and height packing (#2515)
f0864e077 : Update Swift PM hashes. (#2525)
cff0dca80 : Fix Swift PM manifest. (#2524)
0f0c30713 : Return if unary_op is out of bounds (#2520)
969060bc4 : Fix gguf util for models without rope_freq_base (#2521)
24029a789 : Fix eval port for pre-to_edge evaluation (#2519)
289c7de64 : Fix bug: gguf converter (#2514)
12d9e251d : Upload iOS frameworks to S3 on demand (#2500)
d0a434f23 : Create benchmark flag for threads
a5add5f2b : Load model in background thread (#2470)
07f52ffb6 : Add configuration file for compiler (#2292)
0e6f77560 : Adds a script to extract CoreML models from a pte file. (#2485)
9b5bd5ef7 : add mps backend option (#2507)
f5f50b537 : fix mps backend (#2505)
7a04a7904 : Delete old/irrelevant quantize tests (#2491)
ba7074195 : Adding initial eval port for pre-to_edge evaluation (#2477)
252508b36 : Regression fixes and tidyup for Arm backend (#2372)
5bff24ed2 : print graph break in llama -- a different if-else branch (#2493)
98f679fff : Reduce log level of mlock failure in mmap_data_loader (#2498)
4c9e3ea72 : Generate int variants for nchw_to_image and image_to_nchw (#2488)
92f6254e4 : aten.clamp, aten.hardtanh, aten.relu (#2446)
1ba14b962 : Serialize NoneType as Null/None (#2445)
8b6f5d7d4 : Don't build unnecessary flatbuffers/flatcc targets (#2466)
5cc6067b6 : Add a CI job to build Android demo app (#2392)
089ff8bc9 : Don't partition non_packable constants (#2423)
bb275fde6 : ArrayRef and Optional make_boxed_from_unboxed_functor (#2473)
fdde7e177 : Plumb checkpoint path for quantizing and tokenizer for 8da4w-gptq (#2478)
1c2b08abe : Fix Swift PM manifest. (#2490)
7f490299d : Fix debugger tests. (#2489)
1255c4712 : Don't absorb constant data unless explicitly tagged (#2424)
df254ce85 : Fix flatcc + pybindings linking on linux (#2467)
28075699f : Enable third level in left nav (#2435)
1d852edf8 : Move Swift PM manifest to the root. (#2487)
7edd2fa2d : Fix tests compilation on sdk < 14.4 (#2484)
8e4918f72 : Don't bundle cmake-out dir. (#2482)
f3019a6b3 : Add a package manifest for Swift PM. (#2483)
45df80052 : DQLinear: Add support for per_token activation and fp16 dtype (#2327)
588c39177 : Bundle each framework separately for distribution. (#2479)
b3456eb84 : Optimize memory when exporting. (#2341)
c9dea32aa : Expand export options. (#2481)
84cd2bbe3 : Qualcomm AI Engine Direct - support multi-context, dlbc (#2450)
246ed45ef : Enable GPTQ in executorch (#2425)
391c498e9 : Set # of threads to use performant cores (#2352)
86b9ff7b8 : Add "private" method to reset num threads in threadpool (#2351)
e65358c02 : Add util to find out # of performant cores (#2350)
24587170c : print graph break in llama (#2469)
1a3c56ee1 : Disable flaky test TestQuantLoweringCustomBackendPass (#2465)
bad9c6e86 : Re-enable generate etrecord (#2448)
ae84721f7 : Integrate sdk (#2314)
39c93aaa0 : use dequantize per channel group for embedding (#2374)
e76cd8890 : LLAMA demo app fix (#2457)
22707f081 : Upload apple ios artifacts (#2449)
6a704bff2 : Switch to per-shader YAML files (#2459)
90f6fa496 : Fix install_flatc.sh version checking (#2402)
b83002105 : import more quantized decomposed ops into edge dialect (#2454)
8084a74bb : Change install requirements for lm-eval (#2453)
eb976d2b6 : Delete loop unroll in SDPA op (#2438)
71ea7cb44 : make partition graph compilation based on xnnpack compile config (#2452)
04fbaae3b : Create placeholder pages for LLM TOC. (#2419)
1269262f1 : Android LLAMA demo app build rule (#2406)
e4ea524f7 : pybind env var (#2447)
6b96df4c1 : Dump more stats on output mismatch (#2328)
a62733adf : Move macros out of torch::executor namespace (#2436)
faaacbf5b : Enable Partial GPU lowering via Vulkan in stories model export (#2368)
dae670ea2 : Wrap edge manager copy inside the generate_etrecord arg check (#2437)
9cb0be60a : Add copy_ (#2432)
63a1fde8d : Bump transformers version to most recent (#2428)
1647cd0c7 : Fix OSS build for Vulkan Delegate (#2434)
a5faf0336 : Include alpha to fix aten.sub.Tensor (#2418)
caee336d6 : Add time to first token for llama runner (#2141)
9e83fde6b : Build Apple backends with pybindings on CI. (#2395)
38b36cb96 : Provide an extra arg to install_requirements to optionally build pybindings. (#2396)
4096dca9d : dont memory plan buffers (#2415)
051ec0d9e : add comments for with statement when tracing llama (#2089)
9b22596c3 : Prevent BUCK files in dirsyncs (#2429)
4401d3d2d : remove clang pinning functions (#2326)
3e414fb5f : Back out "Add example of LLava encoder" (#2420)
31355a229 : Use deserializer from oss. (#2407)
94517f066 : Build frameworks. (#2413)
e98a7e053 : Access last element of vector for current input details (#2393)
57237cd6b : Fix ExecuTorch documentation build (#2417)
bdc8338bd : Always memory plan mutable buffers (#2411)
c176a68bd : Fix import paths for Core ML delegate script. (#2421)
43fef124e : temp remove generate_etrecord call from export
63d04b14c : Clean embedding_byte checks (#2377)
96d429983 : Copy XNNPACK quantizer to ARM backend (#2401)
83541beaf : Adding new ExecuTorch Executor that supports HTP-delegated models (#2322)
1eb9a15dd : Serialize all scalar types (#2414)
dc7df4f8b : Fix source for export to executorch tutorial (#2400)
d5f898dbf : Create model with device='meta'
d0512b685 : Serialize tuple types from Node (#2405)
21cbfd69e : Serialize list types from function args (#2404)
16a31567c : Shorten torch.fx.Node to Node (#2403)
e807a7561 : Adding lm-eval 0.3.0 as an install req in llama2 (#2394)
9a53ffec6 : Add api to explicitly load a method (#2408)
ad9f1868c : Ensure descriptor set pools don't run out of memory (#2398)
35074122c : Add CoreMLQuantizer (#2338)
624ce59a0 : Add config option for log level (#2391)
d42d21b0b : Add torchao as a dependency (#2387)
20f5073aa : Add wrapper code to wrap an ET kernel into an ATen kernel (#2186)
188ebdc41 : Demo CoreML Partitioner (#2384)
845329465 : LLAMA Android demo app Fix (#2354)
684d41a1d : Add a few more TARGETS files (#2149)
9ff2c0e05 : Update pytorch commit to 03/12 (#2370)
7e5a84136 : Add nightly build workflow (#2363)
7156f1bdc : Dynamic shape support in Vulkan Backend (#2367)
835279e11 : Introduce graph runtime shader library that enables dynamic shapes (#2366)
341d2d972 : Add new tests: 4b dq, kv sdpa (#2378)
498141d7b : Update kernels for non-fatal (part 21) (#2121)
4af855e5a : Update kernels for non-fatal (part 20) (#2119)
eb63db541 : Update kernels for non-fatal (part 19) (#2113)
c1630c210 : Update kernels for non-fatal (part 18) (#2118)
5acdc0581 : Update kernels for non-fatal (part 17) (#2114)
9eaaa7537 : Update kernels for non-fatal (part 16) (#2117)
6169802d1 : Update kernels for non-fatal (part 15) (#2108)
2ba12d8b4 : Update kernels for non-fatal (part 14) (#2111)
ba2c8681b : Update kernels for non-fatal (part 13) (#2112)
f93c24342 : Update kernels for non-fatal (part 12) (#2105)
8b443e001 : Update kernels for non-fatal (part 11) (#2120)
13804b3ed : Update kernels for non-fatal (part 10) (#2110)
c89a0f463 : Update kernels for non-fatal (part 9) (#2116)
310b78be0 : Update kernels for non-fatal (part 8) (#2103)
9560ceed6 : Update kernels for non-fatal (part 7) (#2109)
74789ba86 : Update kernels for non-fatal (part 6) (#2123)
9e555a88d : Update kernels for non-fatal (part 5) (#2122)
e5e8e6e55 : Update kernels for non-fatal (part 4) (#2107)
f8fd8a73c : Update kernels for non-fatal (part 3) (#2106)
1f49252ab : Update kernels for non-fatal (part 2) (#2104)
7d1bd16c8 : Update kernels for non-fatal (part 1) (#2115)
72b369dd6 : Update kernel utils for non-fatal (#1770)
cbe8cbd83 : Truncate long op_types lists and reindex df rows (#2390)
8f830de76 : Generate ETRecord from llama export script (#2362)
467021156 : Fix doc build GLIBCXX_3.4.30 not found (#2376)
8b8094d02 : Fix a small typo (#2253)
111379b28 : Add lm-eval as a submodule (#2383)
6c0c2c936 : Arm backend: add unittest for linear op (#2369)
1c08bc674 : Runtime, fix error check (#2380)
42eeebc78 : Add example of LLava encoder (#2375)
4fea98398 : FIx bug in the optimized cpu ops configuration (#2349)
15790c87e : remove unused aten_compatible argument (#2356)
35a847ebe : Add torch.no_grad() guard on export (#2373)
767507301 : Some fix for GPTQ (#2330)
e2b733e87 : remove exir.capture_multiple (#2275)
da8e2876d : fix partitioner CI (#2358)
75284d2f4 : Add template based unboxing (#1284)
4dfb6376f : Enable Dynamic shape support via tensor virtual and physical resizing (#2340)
91134d605 : optional zero points on dequantize per channel (#2364)
969638c0b : Modify signature of dequantize ops for decomposed quantized Tensor (#2308)
8dfb03044 : Move demo app to OSS. (#2355)
02edc9ef3 : New option to specify output name for repeatable testing independent of naming
e5ef31d8c : Revert D54497840: Add as install req in examples/llama2
1fb447e9d : Add as install req in examples/llama2 (#2234)
4ef264e5b : remove exir.capture from model inventory (#2302)
a33fbd816 : Support multiple UniformParamsBuffer (#2348)
caade55af : Binary size flag patch (#2347)
689796499 : Fix reshape dq operator
a6e45f4c5 : remove exir.capture from models.py (#2325)
70c5be385 : Cut dependencies and clean up Arm backend unit tester (#2231)
47d27375f : fix oss bugs after removed generated_lib_all_ops target
ceb1f1d05 : Add logging to the demo app. (#2337)
36c21bd82 : Build Apple extensions with CMake. (#2336)
7f301196a : Build in release mode by default and build portable framework explicitly. (#2335)
b2b12918c : Strip logs when disabled by preprocessor flags, (#2334)
88a4a6211 : Buffer log messages before any sinks are added. (#2333)
a45793463 : Dump logs to syslog. (#2332)
7e37a2629 : use generate_lib to replace generate_lib_all_ops (#2315)
754429dd0 : fix sn test (#2320)
d112857aa : Handle multiple output from forward (#2310)
ac8267b03 : Organize utils (#2323)
e5c285386 : Fix Android test linkage (#2319)
0b6add801 : Add GPTQ quantization to executorch (not tested) (#2274)
daaecb1cc : suggested fixes for congruences (#2299)
2487a4ee7 : Add missing XNNPACK wrappers to fix OSS build (#2313)
f3487ed0f : Enable dynamic operator registration (#2305)
9b3925d48 : Clean up OperatorRegistry (#2304)
ed079d1d4 : Remove test dependency on ATen/native/vulkan/impl (#2312)
5316e69b6 : Introduce write_to_file api (#2307)
e44d5b2fb : linter (#2306)
8e59cf56d : et_log (#2303)
f9b3a4429 : Update serialize_pte_binary to return cord (#2296)
ae0706dfd : new Dynamic Runtime Support (#2224)
c7fb96757 : Update XNNPACK revision to fcbf55a (#2223)
b22c224f7 : Support None's When Calculating Emitted ExecutionPlan Outputs (#2243)
23c817243 : Adding Support for preserve_format when converting Evalues (#2242)
f6d0d4950 : Skip querying for current ticks for filtered out logs. (#2300)
9d6bf72ee : Fix operator<< for Half (#2294)
49065cc0f : Use autoglob.
990f6ca5e : UX Improvements (#2291)
b56065b0d : Add generated_lib dep (#2297)
424f1a99b : Move vela to a newer pin and remove patches we've merged upstream. (#2158)
6be02ceec : Serialize union fields with single entry dict. (#2279)
47b837b99 : Use cords to store constant and delegate segment data (#2281)
8602e9e22 : Revert D54538932: Enable tensor closeness check for additional types
187079e25 : Stop after first EOS post-prompt (#2255)
63b0a22ad : Enable tensor closeness check for additional types (#2256)
aed32c400 : Use lazy descriptor pool allocation (#2285)
05702947e : Add cord data structure (#2273)
12fcfcf29 : Extend support for scalars and scalar lists in Value class (#2271)
3b0dd33ac : Revert D54553809: Serialize storage_offset in sym int.
c0167c556 : Fix tests. (#2277)
8ad8a2e1f : Serialize storage_offset in sym int. (#2267)
61a69d5cf : Use dynamic quantized linear partitioner of xnnpack (#2252)
34db73d72 : Fix 4bit groupwise dynamic linear quantization (#2251)
bcba73924 : Remove runtime dependency on ATen/native/vulkan/impl (#2270)
0de3a9736 : rename original module to original exported program (#2263)
aef3a7c21 : emit programs with mutable buffers (#2233)
c11924742 : Updating Media Enhancement codebase to support ET (#2168)
61d639351 : Fix groupwise quantization when group size divides channel size (#2264)
80f3b1bec : Introduce instrumentation test and custom shader library test for Compute API test (#2259)
e7197a14a : Fix .module() calls (#2258)
aaba33d6b : Fix op schema and avoid re-registering.
49fb74b5a : Generalize ExecuteNode args with ArgGroup (#2262)
4a975ea54 : Merge ArithmeticPrepack into PrepackNode (#2261)
b2862eac2 : Merge StagingNode into ExecuteNode (#2260)
a5c18907c : parallel_for should return true if precondition fails (#2240)
69bf18b4f : Enable operator<< for Half (#1733)
862f75571 : Merge ArithmeticNode into ExecuteNode (#2247)
fae9ef0e0 : Nit Arithmetic cleanup (#2246)
dfb5f5105 : Remove Functions.h/cpp (#2245)
46036135f : Remove CopyNode (#2244)
911094eb4 : Qualcomm AI Engine Direct - enable per-tensor dump mechanism (#1294)
65f970198 : Build ARM V8 ONLY (no native FP16) (#2207)
9260d7bc6 : Add mixed dtype linear test (#2206)
a1293b22f : using exec_aten::SizeType to replace Tensor::SizesType in copy_ops_utils (#2250)
10e251009 : Disable exported_program.__call__ (#1994)
829a2bd7e : Set up Vulkan executor_runner (#2239)
6c9880c4d : Implement global shader registry (#2222)
eb2d8b69f : Allow using files from local storage (#2200)
566209d6e : Implement thread parallelism with threadpool (#2173)
568673ed8 : Update ufmt version (#2237)
bff4e0ccd : Add embedding_byte.dtype to enable output dtype be different than scales/zp dtype (#2236)
14b00d0ac : Add op: _cdist_forward.out (#2221)
205cfeafa : Add op: _pdist_forward.out (#2220)
38532e441 : Add op: native_group_norm.out (#2219)
a810f346e : Add ops: _native_batch_norm_legit variants (#2218)
1ae981913 : Add op: diagonal_copy.out (#2217)
ace5bc3a1 : Move as_strided_copy template to copy_ops_util (#2216)
21ab4e7d1 : Add op: flip.out (#2215)
1b22459c5 : Add op: roll.out (#2214)
09d331829 : Add op: var.correction_out (#2213)
a03912603 : Add op: clamp.Tensor_out (#2212)
3609c7231 : Add copy_ to aten op lib. (#2202)
dc77c852e : Update black linter in OSS lintrunner (#2229)
d25b57b71 : Back out "Enable embedding_byte output dtype be different than scales/zp dtype" (#2210)
2a4273762 : Add choose_qparams_per_token_asymmetric for llama on XNNPACK
3ff0f77ee : Fix llama quantize_per_token numerics
9283e506c : Skeleton for GGUF conversion (#2018)
a6d71e275 : Be able to set RopE.Freq_Base explicitly (#2064)
e585a57bb : Refactor llama2/models.py (#2054)
8299fe3ea : Add mixed dtype linear (#2023)
57e192b03 : Qualcomm AI Engine Direct - support embedding op (#2057)
75352ad37 : apply Black 2024 style in fbcode (9/16)
2966e3889 : Enable embedding_byte output dtype be different than scales/zp dtype (#2091)
5a18cc6e1 : Avoid converting k and v to q dtype (#2201)
03c056b05 : Update int 4 flow for consistency
a81bfe212 : Remove padding from Linear,supported by quantization
ac833fa41 : Enable padding for quantization
de2d0c8d4 : Allow selecting models
c3c667ed9 : Restrict promoting commit to stable branch
ea4a74fb9 : Fix pybinding build.
1b431c5ea : Rationalize quantization
8958294f7 : Set up CMake build for Vulkan Delegate (#2179)
81b3232f0 : Qualcomm AI Engine Direct - Move soc information from cpp to python (#1679)
06035f395 : Remove num_values from emitter
e2a8f9548 : Remove documentation on old arg in BackendDetails
38a3574ee : fixup for fs-err
95e41f6df : Fix delegate for constants, parameters, buffers
3eb648c4c : update pin to 2/29
780b85e75 : Add warning if a seemingly fairseq2 checkpoint is passed to model constructor
74baa0af8 : use optimized cpu op lib (#2156)
76cbfb7cc : Dont memory plan for inputs (#2155)
c67f0ff57 : modify model to use sdpa_with_kv_cache op (#2154)
92ac8902e : Add python impl of sdpa_with_kv_cache (#2153)
5fae05aba : Support aten.mean.dim where dim = [-1, -2] (#2058)
843db937e : Fix of the test-arm-backend-delegation flow (#2143)
47ff3657f : remove unsqueeze from test
23d60bb54 : skip inplace variant nodes in memory planning
6a65288ef : Enable XNNPACK pybindings (#2152)
539f18869 : Add third-party dependencies required for Vulkan Delegate (#2161)
5aa4cac3c : update buck url
a78a07eb7 : Build pybinding as part of iOS CI.
6bfc6c7c6 : Add a few more TARGETS files
8aca4412e : Refactor graph directory structure (#2151)
3ed51440f : Split Graph.h classes into multiple files (#2150)
758168e62 : Revert changes to getKVCacheSize()
ce99c2168 : Add SDPA with kv cache op
b8c733e25 : fp32 flash attention from aten
c244dd7c6 : Fix vulkan delegate partial test
2fbd2c496 : Remove runtime_min and runtime_max arguments (#2028)
24bd94e19 : Adjust pad value to meet the strict convolution shape calculation (#2059)
f327e53b3 : Add model exporting and inferencing steps into Llama runner cmake CI job (#2092)
52dde47c9 : Add note to setup-macos.sh about buck2 workaround
e4a5c3694 : Memory planning debug log (#2125)
e09c29b28 : android llama demo app skeleton
dc5a9af39 : Separate OpNode to PrepackNode/ExecuteNode (#2102)
3a90aa6f1 : Fix floor_div delegate test (#2101)
722af90b5 : Be able to set hidden_dim explicitly
68a16df59 : Revert erroneous 4bit quant changes
1d74652e6 : Resolve recurring errors where query is c10::Half and key and value float
566528fd7 : Add options for embedding quantization: bitwidth, group_size on CLI
4ab839f5c : pow(n,2) => n*n
efa0e05d1 : Fix export for loading llama2 7b (#2088)
3d59fe9cf : Enable 4 bit quantization via PT2 quantizer (#2055)
f6665a73d : Move graph runtime from PT directory to ET directory
3e806ffc6 : Provide a way to dispatch PAL logs to multiple subscribers.
68ca9187a : Publicly link libtorch.so into pybind portable_lib
83bd5c211 : Introduce Vulkan partitioner (#2062)
a58bfd062 : Introduce GraphBuilder abstraction (#2061)
78ce089df : Support fp32 activation for quantizing llama2 to int8 activation and int4 weight (#2032)
33306d324 : Fix dtype check for builder.py
6d04257b0 : Optimize release build for binary size (#2009)
1aa395a74 : Address failing inf/nan gelu test (#2046)
2006cc0e2 : allow customize buck2 and python path (#2044)
c89377c12 : Introduce OperatorRegistry (#2048)
bd0a4c9fa : Bump version of indexmap (#2050)
33ba5638f : Add seq_len to llama runner for early stopping (#2051)
c4a1e95c6 : Refactor export_llama_lib (#2030)
ca6995bd5 : java instrumentation test with MV2 (#2029)
1186129c5 : BC Fix Stage 2 fully migrate to constant data (#1960)
7973d7a5b : BC Fix Stage 2 new DQ Linear Path (#1961)
56c093f2a : Serialize with OperatorCall (#2045)
d89a34271 : update the release buck2 version in doc (#2034)
5cf961e2d : Update shim for ocamlrep (#2043)
99c70f98b : Improve schema names and comments (#2035)
d98741cd8 : Do not define static target for `vulkan_backend_lib` (#2042)
7e438464c : Qualcomm AI Engine Direct - Fix Missing Kernels (#2040)
48537273d : Build static extension_module lib for iOS demo apps (#2038)
f70759084 : Qualcomm AI Engine Direct - LLAMA2 Infrastructure (#2020)
8fed60b09 : Delete old preprocess code (#2017)
abec72283 : Fix Lint (#2036)
20714e7c0 : fix lint warning (#2027)
6b1fc8bc1 : Add CMake for llama runner (#2022)
3af672a51 : fbcode//executorch/backends/vulkan/test (#2024)
6375f0187 : Serialize constant data outside of flatbuffer (#2016)
04df8ab80 : Introduce `VulkanDelegateHeader` to manage constant data and shader data (#2013)
bfdd8ab2a : Change log to debug for const_inputs in runner_util (#1810)
5d7edcb7f : Add support for outputting constants (#1774)
0c38c6062 : Qualcomm AI Engine Direct - refactor documents (#1168)
a704dd649 : Inject Inplace Copies into Graph for Mutable Buffers (#1995)
b32161627 : xnnpack aot_compiler minor fix (#2015)
bb8589251 : BC Fix Stage 1 Update Runtime to support XN00 and XN01 (#1878)
ce9b5ccd2 : duplicate schema for AoT and Runtime (Temporary) (#1877)
bb2185a58 : qnn_executor_runner release build flow (#1531)
5d0a7a0b1 : Re-sync with internal repository (#1993)
027ad541b : Fix a rotary position encoding bug in kv cache
f4c4ad3bc : Upgrade buck2 toolchain (#1996)
0fa67f697 : Qualcomm AI Engine Direct - support online prepare (#1692)
d25a2196d : Remove unused variables in executorch/examples/sdk/sdk_example_runner/sdk_example_runner.cpp (#2007)
53f251354 : Llama stories example (#1997)
288c948fb : Clarify log when ignoring mlock error (#2004)
2dbe2cf33 : Extend github-export-checks to include merge rule from ExecuTorch (#2000)
7b1ac5d37 : Factor out model loading and provide a way to stop generation. (#2002)
534664d78 : Cast error code to int (#1914)
5c6cefcc9 : Use custom cpp op for packing 4 bit weights (#1899)
b601b49b2 : [Executorch][xnnpack] Hack to speedup dqlinear lowering (#1893)
800de21fc : update tensor::SizesType to exec_aten::SizeType (#1988)
32356df5c : Add llama export lib into executorch_llama bento kernel
4b9e8a4ae : Fix mask dtype mismatching query dtype properly (#1984)
dd01c6d8b : docs: add editable installation instructions (#1953)
4e3dd2eda : Fix mismatch between tests and supported tests for Arm backend (#1889)
e4a78c233 : Refactor Arm backend unit test to align with XNN-pack unit tests (#1971)
899642b7f : Arm Baremetal Runner cleanup (#1909)
5bb10003b : Fix Arm setup.sh support for Darwin OS (#1910)
0f2bd67f6 : FileDataLoader calls read() in a loop (#1978)
3c81c71bb : Back out "Fix Core ML export script" (#1990)
1689ed886 : Fix linter (#1989)
40356ef95 : Revert D53075378: Disable exported_program.__call__
b8ff0a138 : Fix ExportedProgram calls (#1985)
ac9c121ef : Fix demo app workflow and tutorial (#1979)
75aa0b45c : MmapDataLoader can handle non-page-aligned offsets (#1975)
f739212d4 : Disable exported_program.__call__ (#1954)
8fe00b502 : Llama stories oss ci (#1973)
bdde144ce : Demo CoreML Partitioner (#1981)
1d07fc47b : Fix llama_aten (#1982)
d6b07b225 : Fix Core ML export script (#1977)
98d224edc : Add temperature to llama runner (#1969)
da5ab274a : Save flatc input files when serialization fails
b5ed07598 : Use xnnpack and quantized ops deps (#1852)
63deed3b6 : Use upperbound shape (#1851)
df2074229 : Duplicate dynamic quant chain (#1867)
b16ea47cd : Fix lint
c4e01854e : Use export_delegate_segments
d8d60a00c : Add option to redirect stdout and stdcerr to a file for xcrun compatibility
1711bbcad : Add segment alignment option to export.py, default constant_segment to True (#1965)
ccb41c234 : Fix mask dtype
ccc1eb250 : Group-wise support for Linear quantization
f95d88f08 : Add missing persistent flat in deserialize_input_spec
6d5491154 : Operator registry key fix
a3dec31fb : Fix serialize_metadata
7e84e5038 : Fix example targets build for langauge_llama_vocab.bin
3b623f971 : Allow tokenizer_py to run in oss
084abda7c : Fix inconsistent "HALF" enum value name in scalar_type.fbs (#1929)
dda3dfbe0 : Disable arm CI job temporarily (#1949)
e4a509845 : Fix folders that contain a BUCK but no TARGETS (#1895)
145a89e63 : Fix arm CI (#1944)
70b747cc9 : Add op: div.Scalar_mode_out (#1928)
7df81cffe : Introduce math_util (#1927)
6344ecfe0 : Add op: reflection_pad3d (#1926)
8bd5b0798 : Add op: reflection_pad2d (#1925)
1b7a09657 : Add op: reflection_pad1d (#1924)
738fa1f62 : Add op: replication_pad3d (#1923)
74305f489 : Add op: replication_pad2d (#1922)
06dce7d2a : Add op: replication_pad1d (#1921)
6a2598ed6 : Add ops: any.out & any.dims_out (#1876)
52a5a059f : Add ops: prod.out & prod.int_out (#1875)
d81f36f75 : Export llama targets (#1943)
8a6764573 : Fix extension/pybindings/test:test_pybindings_aten_lib (#1940)
c4d301d27 : Link python bindings with Apple backends. (#1942)
4f3e43db8 : Fix llama2 buck targets (#1941)
017fd87c4 : Fix data dependent symint issue in kv cache branch (#1933)
2cf66b996 : Build MPS backend on apps with lower target SDK version. (#1937)
9249ed7cb : Conditionally link pybindings against Apple backends when building with cmake. (#1934)
dccfaf517 : Use mutable data ptr over deprecated data ptr. (#1935)
c9349ae60 : Link against standard frameworks when building with cmake. (#1938)
6e4ee9f7c : Upgrade setuptools and wheel on install. (#1936)
9ccdc0762 : Link against standard frameworks when building with cmake. (#1939)
65eee43a4 : Groupwise quantized embedding (#1864)
7012fa8ef : Temporarily disable mobilebert quantized on xnnpack
a9439586f : Add int8 per token dynamic activation quant and int4 weight quant for llama2 in executorch (#1904)
5d4d0ca60 : CI failures with dtype-override (#1919)
83d4e52c3 : Fix test constant_prop_pass_for_add (#1920)
7128b3e2c : make memory format pass test support dim order propagation (#1900)
5e492b28c : move memory format pass into to_edge (#1891)
bf7b701ac : Install snakeviz for cProfile flamegraph (#1915)
28355e3c9 : fix the file name in the instruction
828ae623c : Re-enable quant_lowering_custom_backend_pass (#1916)
338fc6236 : Deepcopy LoweredBackendModule meta (#1913)
eb50c4648 : Run example llama2 model with fp16 (#1902)
636f9a71f : Linear (#1901)
6bdd250f4 : Ensure that lifted tensor constants don't show up as inputs in emitted program (#1897)
9371da816 : Java library (#1869)
02cbaf2ca : Small fixes to enable OSS buck2 build for llama runner (#1883)
b67a7d86c : Integrate memory planning into Vulkan delegate (#1898)
eabb2b0cc : sync schema_type.fbs and update warning msg to solve broken ci (#1908)
f7ce83cbd : Set `fold_quantize` to True in `convert_pt2e` (#1882)
d3065bd3d : Update pin to 2/9 (#1912)
d5964cb49 : Cache results from SDPA.get_graph and bilinear_2d graph (#1911)
c3c198967 : Add a skycastle job for ExecuTorch running llama
316468850 : Fix interval construction in `storage_overlap()` (#1905)
4e7a6fddb : Qualcomm AI Engine Direct - Fold_Quant Enabled (#1773)
63489e887 : Use checkpoint dtype to export llama (#1870)
136300459 : Qualcomm AI Engine Direct - Enable W2l (#1767)
0ea44aaca : update nightly version (#1896)
7c80cd397 : Fix broken emitter test (#1892)
f57ca1eb5 : Handle tensor memory offsets > 32 bits (#1792)
ac841ada9 : aten.pow.Tensor_Tensor (#1880)
8699b9812 : Convert internal tests to using .module() (#1831)
82a27050a : Simplify export_llama options (#1887)
9c019ab87 : Fix the default export_llama checkpoint and config. (#1884)
73d2a694b : Fix TestEdgeYaml test (#1885)
da140c3d2 : Use embedding_byte operator for embedding (#1819)
672425bb1 : llama runner add namespace to elementSize (v2) (#1879)
d79c9b37c : Fix llama export (#1873)
4f8ddc5cc : reenable memory format ops pass test (#1881)
a274d2ed7 : sync schema_type.fbs from core to sdk
fd584606b : Revert D53247301: Set `fold_quantize` to True in `convert_pt2e`
544d296bf : Introduce `mem_obj_id` to `TensorSpec` (#1868)
b76d4090c : Enable running examples/arm/setup.sh (#1861)
f23e2c207 : Add embedding quantized operator (#1818)
2a6b3fd6e : Remove fold_quantize flag (#1766)
e972f96e1 : Check named buffers in emitter, for non-persistent buffers (#1865)
68336ba36 : JNI library - internal
3431a0cbe : Fix lifting constants (#1862)
637d06697 : Use compile specs for backend configuration (#1820)
cc75b636f : Fixes for arm example run for runner_util:prepare_input_tensors move (#1800)
b026e39af : dynamic quantize linear and lower to xnnpack (#1849)
8e531f818 : add support for profiling activation memory (#1783)
638bc0a61 : embedding quantization (#1782)
71d43e013 : Init n_kv_heads when not specified (#1781)
be586e8ac : Add gcc to executorch ci (#1840)
b87ccd218 : FP32 Emforer model test (#1848)
460803cb7 : don't delegate nodes with empty tensors (#1847)
bbe3f5bfe : Re-sync with internal repository (#1860)
3cf738a70 : Migrate the macOS runners label from macos-m1-12 to macos-m1-stable (#1838)
f5ee612aa : Use std::string for runner. (#1856)
dd0e3b45b : Qualcomm AI Engine Direct - Enable quantized VIT (#1839)
aad3bdced : Avoid unused ivar warning for managed tensor.. (#1857)
add728794 : update torch nightly version (#1855)
dca9b5cd2 : Update MPS docs to reflect latest runtime changes (#1842)
91bf41996 : add util to print the delegated graph after delegated (#1846)
3327fca70 : Fix use_static_deps
05d2c935b : Import _trace directly (#1843)
d7ab1ea4f : Do not expose module state directly (#1826)
2827c2493 : stop using exir.capture in export_delegated_program.py (#1824)
c47595bab : remove exir.capture from test_verification (#1841)
ca20e057f : remove exir.capture from test_delegate.py (#1825)
fb7afb2a9 : fix tests (#1836)
f995945a3 : Serialize the flag of appending EOS tokens into the model (#1832)
96cdb2a60 : Support kv cache on llama_runner (#1835)
e20d8d570 : Simplify metadata retrieval and sampler logic (#1833)
42d83bdd8 : Add tensor wrapper (#1830)
186ed5f17 : Comment out kv cache as class attributes (#1829)
cc284848f : Run llama in xplat (#1828)
fee1528cb : Export llama CI (#1809)
ad67a3d15 : Fix linter error (#1823)
c09541b63 : Add some debug info in ExecuTorch runtime (#1821)
1f7c0f7cf : preemptive ci pin update (#1812)
8490570b7 : Add mixed mm op (#1791)
6110f6b3d : remove exir.capture from memformat tests (#1822)
c49e1eef7 : support non-persistent buffers (#1817)
687af0bb7 : SDPA (#1805)
a3e3cca95 : Verifier support for torch.float16 (#1803)
9948146cf : Add most ops (parity w/ fp32) (#1797)
1af274f69 : Don't promote fp16 to fp32 (#1796)
779d58bf8 : Add Half dtype support (#1795)
b5dd5cbf3 : Base support for the dtype (#1794)
340c69b97 : Fix MobileBert timeout CI failures (#1729)
e9737d0ff : Groupwise Embedding table quantization (#1815)
4d1d8d6c7 : Quantize output ffn (#1813)
f20beee54 : Remove calls to torch._export (#1804)
7ba9b0ddc : Fix build failures (#1747)
b2636ff62 : Make tokenizer build for iOS. (#1814)
a9642cce3 : Add a callback to runner generation to provide tokens. (#1816)
394e15b64 : Define meta function for embedding_byte (#1793)
47ba2a903 : Implement initial SDK profiler support in XNNPACK delegate (#1632)
24e428c57 : Replace `constraints` with `dynamic_shapes` in pye and executorch (#1714)
cf87e792e : Update stale comment name (#1807)
acb401f98 : Update torch pin (2024-02-01) (#1808)
258744643 : Simplify check_node_has_valid_dtype (#1801)
7ea3ebd69 : Add op: maximum (#1758)
28cd4d614 : Add op: eq.Tensor_out (#1757)
4356ce204 : Add op: atan2 (#1756)
9c26d9890 : Add op: trunc (#1755)
a5ff4fceb : Add op: expm1 (#1754)
5a77a7b3f : Add ops: log variants (log10, log1p, log2) (#1753)
e9c011233 : Revert D53253905: support non-persistent buffers
814ddea98 : Add tokenizer to bento kernel (#1798)
2f5b626fd : Enable NATIVE_FP16 for aarch64 (#1788)
deb24a849 : Use Intel x86 intrinsics for half/float conversion (#1789)
0c3f61755 : update quantized op name to match quantized.yaml (#1790)
779f904b6 : Add cprofile option to export_llama (#1784)
e16193a4d : Micro-optimization for memory_planning (#1785)
3be5e12ed : support non-persistent buffers (#1772)
aff6f6815 : Stop SymInt from coming in (#1778)
14d9699a3 : Embedding with Half scales/output & null zero points (#1762)
814bddd2b : FP16 native support (#1779)
5688f14a4 : Don't lower aten.sym_constrain_range.default to edge op
a0af36801 : Qualcomm AI Engine Direct - Partitioner Enhancement (#1561)
d0050dd85 : Add a workflow to promote green commit to stable branch (#1697)
c6af8484b : Make module conform to the snake style. (#1768)
3c4e4cbaf : Rewrite kv cache implementation for llama (#1763)
83ebb2cc5 : Refactor export_llama_lib.py to serialize more constants for kv cache (#1764)
98ffe4440 : Support empty inputs into executorch method (#1765)
c99e5a5a4 : Fix fp32->fp16 cast for scalars (fixes llama2 fp16 for MPS) (#1752)
7b5419a70 : Populate the backend name of cpu DELEGATE_CALLs (#1749)
f80c05c22 : TensorParser tests (#1745)
c86bda2a7 : Rename runner and cleanup deps. (#1741)
1a58959d9 : Run fp16 language llama test with llama runner (#1742)
61dd51e43 : Buckify runner. (#1740)
2599b0ccd : update the pinned pytorch hash (#1658)
badd58e34 : Make executor_runner use the new prepare_input_tensors (#1710)
04d969380 : Reimplement PrepareInputTensors in terms of the new inputs.h (#1711)
61bc5d1e8 : Refactor and move prepare_input_tensors (#1712)
e77a56b91 : find bias for mm in all nodes (#1730)
6a1b70a0c : Add string to supported types in et_schema (#1738)
a34cbd409 : Matmul kernels: op_addmm (#1720)
935f873fc : ExecuTorch Vulkan floor_div (#1737)
fc0e62583 : Pin all TOSA components to v0.80 (#1701)
52ea3c60c : Build MPS delegate with Werror=1 (#1736)
359faa617 : Handle more args in ET_CHECK_OK_OR_RETURN_ERROR macro. (#1723)
3d650a94c : Half support: cat, mean, pow, sigmoid, softmax, to_copy (#1734)
1ebd43ec5 : Half to float only after checking common type equals output type for scalar variants (#1732)
95003f396 : Remove check for half dtype (#1691)
09b7b0b5b : Prefill all prompt tokens at once
e83f31c4e : Let ET_UNWRAP handle message format string to log. (#1727)
2a2fd12a1 : Enable Half output for ExecuTorch (#1731)
038944049 : Specify dimensions in call to squeeze (#1728)
23aceb8cb : Return a set instead of a vector for method names in a program. (#1724)
a4188a827 : Fix MPS build (#1726)
6ca6479ac : Fix lint errors (#1725)
43b35b4b3 : Split export into a main program and a library (#1719)
f240b35b6 : make definition of Half available to CPUblas (#1722)
2b4e07805 : Matmul kernels: op_mm, op_bmm (#1721)
366960aa6 : Add ETDump support to MPS runtime (#1715)
7b05f509f : Add support for MPS partitioner (#1717)
6a2b62c0a : Add tick rate API to PAL (#1651)
eb5d0fefc : Add dump of weight dimensions if args.verbose (-v), other fixes (#1699)
32f2f2506 : Half support: unary ops with realh pattern (#1709)
48466fa2b : Half support: unary ops with realhb to bool pattern (2 ops) (#1708)
f74a221fa : Half support: unary ops with realhb to floath pattern (18 ops) (#1707)
810f1c1e7 : Half support: sub (#1705)
ae51ef5b0 : Half support: mul (#1704)
4807a6b09 : Half support: add.Scalar (#1703)
6642995a9 : Add convenience switch macros (#1702)
462c197c6 : Remove noautodeps from files inc executorch/extension/pybindings/TARGETS (#1303)
2985564c2 : Remove a left over debug print (#1713)
9af451c85 : Fix MPS build (#1689)
7d0631918 : fix the mps graph builder (#1706)
49af5d6fc : remove exir.capture from exported_module.py (#1674)
9796603af : Fix buffer alignment for iOS15/16, macOS12/13 (#1698)
ab114145d : Add support for ScalarType::Half (#1693)
2fd4c0094 : Fix small issues on export and runner (#1696)
1986e33ec : Allow sampler to take fp16 logits type (#1695)
59b757ae3 : Avoid type promotion pass for fp16 weights (#1694)
d207d49b1 : Remove use of ET_EXTACT_SCALAR in portable kernels (#1683)
f2e759479 : Fix emission of weights that are views
9a31c00c6 : Add error checking for disconnect between nbytes and storage size for constant tensors (#1685)
017bf8d8d : Add fold_quantize=True for all convert_pt2e calls (#1640)
72c017fbc : Add buck files to llama example. (#1688)
0d4cd755e : Split llama_runner into main and class (#1690)
ef0f4e554 : Automated build of language llama (#1669)
bb50b4e07 : Update torch pin (2024-01-23) (#1684)
9e094583f : fix broken tests (#1686)
1d46ddd82 : Include missing header files (#1677)
70679da50 : Add DQLinear with 4bit weights (#1664)
c0c2fabdd : quantized w2l (#1623)
19392e656 : Quantization Support in Conv1d pass (#1624)
be0eea2b7 : FP32 Wav2Letter Tests (#1625)
001bfd1a4 : Use fuse_activation pass in delegate (#1626)
336b0199f : pass to fuse activations (#1627)
cb04be3c5 : create target for serializer and schema (#1628)
e1aa5a536 : Fix lint errors (#1682)
a1f50c793 : Remove extra semi colon from executorch/kernels/portable/cpu/op_zeros.cpp (#1675)
f20610620 : Enable MPS tests on CI. (#1671)
6a1c7a2cc : Use force-push when rebuilding Docker images (#1582)
cbc0c9502 : Add new mps runtime with support for FP16, lifted and unlifted graphs (iOS15+, macOS12+) (#1655)
71aedc553 : Replace `constraints` with `dynamic_shapes` in export-to-executorch tutorial (#1659)
93dd96fc7 : remove exir.capture from pybindings test (#1673)
b87c414bf : Another minor lint error (#1672)
0e17257b1 : Use CoreMLTools Main Branch (#1666)
ca8e6c89b : Fix linking flatcc library for debug builds (#1668)
705923e5a : Fix lint errors (#1670)
842acfce5 : Back out "Extract constants to segment by default" (#1667)
e2369c99d : Change the way tokenizer and model are being serialized (#1657)
bc317e107 : Add a runner for llama (#1656)
809522d35 : Add a sampler (#1643)
58a82f5e7 : require Module to be passed to export
023fa2ff9 : Call fs2 to llama checkpoint conversion (#1661)
1c0f7a382 : Create a llama2 checkpoint from fairseq2 checkpoint (#1660)
d01511f39 : Remove duplicate code (#1654)
e261672e9 : Enable export_llama to read LLama-style checkpoints created with fairseq2 (#1603)
90daa9c7a : Include LLM/Llama model parameters in pte file generated by to_edge (#1653)
e41b580a2 : Capture params in model.pte with export_llama (#1646)
3a5cd9ac9 : llama2 export with xnnpack (#1645)
e0ff564f9 : Hide internal-only Method methods (#1593)
4da94a961 : Further Module improvements. (#1622)
b46cc573e : Macros to check and return if error. (#1621)
52e8ba9c6 : clean up quantize.py (#1647)
1468f5700 : Skip quantizing output linear layer (#1644)
a4b54f7a3 : make bundled program as an integrate class (#1616)
ef50e7a4a : replace match with if-else to meet python version from other packages (#1636)
3ee0830f1 : update the pinned pytorch hash (#1613)
ba1e853e3 : Add int8 quantization support (#1642)
1a000e7d5 : Fix CoreML CI failures (#1633)
0935d8993 : Update test script. (#1650)
8ad4e8e58 : Extract constants to segment by default (#1522)
6aab49e49 : Add a tokenizer (#1641)
78ccd2ec1 : Add a tokenizer python script (#1611)
fa50ded67 : SDPA (#1540)
62c6a43b6 : squeeze mask for XNNPACK before SDPA (#1536)
6c4ab8a08 : Disable memory_format pass test (#1639)
a402c449b : Support FP16 Llama model inference (#1596)
58b746733 : Update callsites to specify extract_constant_segments=False (#1606)
5bd5129c3 : Replace `constraints` with `dynamic_shapes` in executorch test (#1608)
05d169b25 : Fix update-commit-hash env condition on nightly workflow (#1635)
da9cb81ee : Add filtering for null debug handles when inserting (#1638)
927e2a128 : Add kvcache to llama2 (#1619)
d44773785 : typo clean (#1615)
265025fd6 : Update pytorch pin to 01/18 (#1634)
19496c4ad : Add option to build python module portable_lib (#1588)
fa118dc75 : Try to fix omitted tests (#1629)
cfa27d64b : Fix linter errors in Core ML partitioner. (#1630)
33f8887bb : modify tests to take an nn.Module (#1609)
41fe5d2e0 : Fix missing prim_ops symbols for executorch OSS build (#1594)
bd024e593 : Fail when trying to resize DYNAMIC_UNBOUND tensors past their capacity (#1610)
94ef5de9c : Add a Rudimentary Core ML Partitioner (#1415)
d52de8488 : fbcode//executorch/backends/xnnpack/test (#1617)
89dcb96ad : Clean up a few more ops (#1581)
18e98b684 : Fix failing doc build on CI due to sphinx version (#1614)
65c3d5788 : Update PyTorch pinned commit semi-automatically (#1597)
1446b7280 : Add support for logging DSP events in HTP (#1478)
70d3b474f : remove directly access to internal schema-type bp (#1607)
91a2310a0 : Dont lower constrain_range_as_size to edge ops (#1584)
a60333251 : Enhanced Runner and Module classes with improved loading and error handling (#1573)
aaba8b247 : Update pytorch pin to 01/16 (#1605)
2d39c223f : Add to_dataframe for Inspector and Event (#1476)
4ec2b7f2a : change signature of map_impl (#1580)
a06b4504c : Fix various lint warnings under runtime/executor/... (#1589)
54b2e0a9d : Add a target for executor_test (#1591)
8dd082702 : Rollback pytorch pin to 01/10 (#1592)
af317ac4c : Avoid CHECK and buffer over/underrun in JF instruction (#1570)
d19f1c5e7 : Handle missing and zero non_const_buffer_sizes (#1571)
0b5e3494b : Bounds checks for values/inputs/outputs (#1572)
a9bd42574 : Fail earlier if chains are not present (#1545)
409c95a43 : Don't return null-ish method names (#1508)
653518eaf : Non-fatal error on missing delegates field (#1521)
057aad777 : Check for missing fields when parsing values/instructions (#1509)
4d7e25bf0 : Switch to serialize_xnnpack_binary for perf gain (#1544)
a8da94a08 : Handle XNNHeader in XNNPACK Runtime (#1543)
99d0ea1db : Serialize constant Data outside of flatbuffer (#1542)
4361d6280 : Introduce XNNPACKHeader to manage flatbuffer data and constant data (#1523)
428da4fae : Update pytorch pin to 01/11
96166da65 : Replace `c10::ScalarType` with native equivalent
7a6abb4a1 : Remove the redundant install_requirement script when test iOS on CI (#1583)
f0b7f126e : Additional op cleanup to use ET_KERNEL_CHECK (#1579)
9917fc6d3 : Minor op cleanup to use ET_KERNEL_CHECK (#1564)
d1eac35a6 : Refactor resize_to_broadcast_target_size for non-fatal (#1558)
02eca50a4 : Clean up masked_fill, softmax, and constant_pad_nd (#1560)
c40f461ab : Clean up nonzero, slice_scatter, slice_copy, select_scatter (#1559)
04062e441 : Cleanup up copy-family ops (#1557)
afe896771 : Clean up arange, cumsum, glu, log_softmax (#1539)
ec2f342f4 : bugfix local_scalar_dense impl (#1578)
0cfc7f58b : Fix CI: cmake paths for the runner module, include runner module exported headers, fix Xcode project. (#1567)
c0b436152 : update export_llama (#1565)
52a4cecf2 : linear fused relu in xnnpack delegate (#1551)
10ef41e2a : Fix portable kernel utils for aten mode (#1563)
845ce383f : Fix cmake build for portable ops (#1562)
b85a48cc9 : Update torchvision version (#1566)
8012187d4 : Enable running TOSA reference_model e2e tests on executorch public CI (#1335)
af3578753 : add include and context to call guard (#1547)
379926cd4 : Reformat function names to match pep 8 convention (#1503)
5902ff5b6 : Update executorch pin to 01/08 (#1556)
1a03df948 : wrap dict in struct (#1555)
663882fe7 : Add MoE structure in the llama example (#1533)
f6f8d0d80 : Remove use of equality_constraints (#1472)
797481f1a : Forward stderr and stdout through pybindings (#1537)
6cdc9b6a1 : Update pin (#1546)
5cf362a05 : Back out "Add constant segment to export options" (#1538)
30732fe8c : Add constraint to not partition standalone batch norm (#1501)
5318baacd : Fix forward test failure.
32fee1412 : integrate gen_oplist as a function (#1498)
67e448340 : Check for missing chain instructions (#1510)
eb4744ed4 : Avoid fatal checks in populate_operator_name (#1511)
e08991001 : Fix some printf warnings in executor_runner.cpp (#1529)
4a545ce58 : Remove a leftover debugging log statement (#1534)
700bcb3a9 : Check for missing method-related field before creating MethodMeta (#1507)
d3dfd1ba2 : Check for missing backend id string (#1512)
fb2554a0a : Check for missing KernelCall/DelegateCall args (#1513)
d55212e00 : More checking on sizes and dim order during tensor parsing (#1514)
fdf20a3ab : Back out "Back out "[executorch] allow passing dynamic shape information in examples dir"" (#1535)
5bc506624 : Refactor ExportPassBase. (#1532)
f37bb470a : Improve verifier to not specialize on dialect. (#1528)
03093c2cc : Migrate xnnpack tests to use functional IR (#1506)
1993d92f9 : Fix xnnpack tutorial reference to XnnpackPartitioner (#1530)
46300bc5c : Remove fatal ET_CHECK for unknown instruction (#1515)
9846ba450 : Check for out-of-range op indices (#1516)
fa9a8176e : Check for out-of-range argument value indices (#1517)
89958d27b : Catch invalid scalar type when parsing tensors (#1518)
d86938548 : Check for missing arrays in Program (#1519)
b4c6afc2c : Non-fatal error for unknown KernelTypes type (#1520)
d9a540073 : Update schema versioning format. (#1491)
f36107718 : Use scaled_dot_product_attention in llama model
a33e6a618 : Improve serialization of union types. (#1502)
504366f9c : Constant segment runtime tests (#1505)
60df68274 : Cleanup EXIREdgeDialectVerifier to use the same logic to verify core (#1504)
a81c2d4c0 : Add constant segment to export_program for tests (#1493)
38b6b9745 : Bump transformers from 4.34.0 to 4.36.0 in /.ci/docker (#1500)
0d120ba61 : Use Module facade. (#1496)
046735b73 : Build Module facade for OSS. (#1495)
b12187b50 : Refactor Runner and Module facade. (#1494)
6fcf2722f : Qualcomm AI Engine Direct - refactor unit test (#1475)
53b99f3bd : Refactor llama_runner to use Module (#1488)
ff28629e3 : Only error out when the number of kernels being registered exceeds the limit (#1487)
746460085 : Back out "allow passing dynamic shape information in examples dir" (#1485)
9d9ba2914 : Add LlamaRunner (#1483)
88cae48d7 : Consolidate memory ownership for runner.cpp (#1482)
32368a51a : Support nn_module_stack in non_strict mode (#1480)
dc9d43fa1 : Enable quantized mobilebert in xnnpack backend (#1481)
52af70ee4 : fbcode//executorch/exir/tests (#1474)
3d7c1b5c9 : Add to_dataframe to Inspector docs (#1473)
344919c4c : allow passing dynamic shape information in examples dir (#1471)
146b09248 : Clean up reduction ops (#1460)
8d7a681ad : Qualcomm AI Engine Direct - update build flow (#1447)
0488f4091 : New partitioner constructor flag for dynamic shape ops (#1429)
ce23edeea : Refresh src_partition after each graph update (#1466)
99f6eb3b3 : Dynamic shape fixes for slice and squeeze (#1467)
e9f2d7303 : Add hacky per-tensor quant support (#1470)
393bea19d : Cleanup remaining legacy tests (#1450)
169b0f3d5 : Migrate dqlinear tests (#1449)
833259b44 : Migrate quantized tests (#1448)
bae556f03 : fbcode//executorch/sdk/fb/visualizer/tests (#1469)
5d8de3ab0 : Delete unused file wrappers xnnpack_wrappers/* (#1463)
fb7a60388 : Delete unused file wrappers xnnpack_wrappers/q* (#1464)
6959ad177 : Delete unused file wrappers xnnpack_wrappers/f16* (#1461)
708767ada : Delete unused file wrappers xnnpack_wrappers/f32* (#1465)
05a7d0f9d : update xnnpack commit (#1462)
b9b26e1ad : fix export_llama CI (#1457)
786c92488 : Fix broken ET CI on macos builds (#1458)
1b8d2d709 : Debug data on visualization (#1446)
16d50219e : Pass the enable_module_hierarchy flag E2E (#1444)
e457d5fa0 : Rename constants; fix shapes; add dtype (#1443)
f308467ce : add _local_scalar_dense.default (#1420)
bb489e890 : enable lower llama2 to xnnpack (#1441)
cf9bdb382 : ETDump changes to support tensor lists (#1427)
d4d6c2db4 : Update PT commit pin (#1439)
2a157887c : Back out "Update memory plan default to use ConstraintBased instead of HintBased" (#1424)
bd0eeaa29 : Add dynamic shape support (#1425)
c8a1f3bf8 : Migrate quantized conv2d tests (#1387)
f0499a3fd : Qualcomm AI Engine Direct - Set vtcm size into graph (#1218)
c1aedde16 : Split of "Mass update arvr/third-party/pybind11 to point to third-party/pybind11" #2 (#1433)
9dc698980 : Add warning message about missing etdump data (#1432)
ffc87a3f5 : Let Llama export using buck (#1431)
0ba14a1e5 : Migrate pass tests (#1400)
fbba1cbc5 : Introduce Runner and Module facades to simplify runtime usage. (#1411)
f836d2d19 : Pre-allocate space for Executorch inputs and outputs (#1385)
dc9cc8e19 : add oncall annotation for TARGETS files in fbcode based on contbuild information - /data/users/bayarmunkh/target_oncalls_26 (#1417)
0f192cac0 : address lint error (#1414)
b30f26a50 : Update llama ci test (#1423)
bc3860def : Fix lint (#1422)
fdf3977d8 : Skip Tensor Debug Output population if no buffer is provided (#1421)
1d138ded4 : Custom Llama2 export script (#1412)
eed4c6aba : Add constant segment to export options (#1406)
befb33998 : Fix Dynamic QLinear with new XNNPACK Version
f6342bee1 : Cleanup qualcomm example option from root CMakeLists.txt (#1403)
10c66858b : Cleanup MPS example option from root CMakeLists.txt (#1402)
9d986076f : Build size_test separately (#1395)
7ff6e6dec : Build xtensa example runner separately (#1394)
0f27954ae : add oncall annotation for TARGETS files in fbcode based on contbuild information - /data/users/bayarmunkh/target_oncalls_11_pytorch (#1404)
9b09bb0e4 : Remove extra semi colon from executorch/extension/pytree/pytree.h (#1401)
68754cd75 : Delete unused option from CMakeLists.txt (#1393)
d8aadf68c : Build sdk examples separately (#1392)
922310c60 : Cleanup selective build example options from root CMakeLists.txt (#1381)
4372862dd : Clean up custom ops example cmake options from root (#1380)
287db578e : Adding ETDump integration to coreml_executor_runner (#1382)
b7db64086 : Pass extract_constant_segment to ExecutorchProgram (#1391)
703ae21e5 : update test_passes to use export (#1304)
83ad15a43 : Update memory plan default to use ConstraintBased instead of HintBased (#1399)
97e88891b : Revert D51955472: Fix Dynamic QLinear with new XNNPACK Version
85dabdedb : error out when tag input/output node (#1398)
6b11d1954 : Update executorch serialization (#1397)
c7880d58a : make MapHigherOrder create map_impl (#1388)
c2babbdcf : Fix Dynamic QLinear with new XNNPACK Version (#1373)
e718ce471 : Attempt to fix flaky cmake missing libzstd issue on MacOS (#1390)
ca6322dcf : update Meta-internal googletest references (#1377)
366c9696e : Updated pinned pytorch (#1384)
e39460c91 : Add dtype dict example (#1359)
adae047c3 : Abstracting away flatcc from any users of etdump (#1376)
a08c8b79a : Add dequantize_per_channel.out kernel (#1375)
cff7a9794 : Getting delegate sdk integration documentation up to date (#1368)
e811fbcbb : Adding documentation for debugging models using SDK (#1367)
99f912ac8 : Replace Linear lowering of using Matmul with Conv2d (#1336)
cb76c5183 : Rewrite backend passes w/o ExportPass (#1372)
5ab43ca4d : Memory profiling (#1301)
0d0dd1dae : Update runtime to read new schema (#1369)
9beb5947f : Add the ability to insert delegate map entries via handles (#1371)
dbcdce2d2 : Qualcomm AI Engine Direct - refactor quantizer (#1339)
abc0ddc2f : Offset calculation fix for etdump (#1365)
9cded5c6f : Plumb a Delegate Metadata Parser (#1356)
01452ad78 : executor_runner on iOS fixes (#1364)
0e98e8849 : compare_results() implementation (including matplotlib plotting) (#1347)
70f90e406 : Enable MPS backend builds for fbcode and xplat. (#1288)
fb0bd8947 : Remove explicit supports_static_listing key
0030acef0 : Add support to BoltNN executorch app for ETDump (#1351)
af2a82345 : Add quantize per channel in quantized kernel (#1323)
55a487ac1 : Add api for dictionary with kernel metadata (#1334)
a89c043e5 : Parsing reference outputs from etrecord and save in inspector and event block (#1346)
fcf87ba87 : Fix lint in test code (#1355)
e6d05117b : Refactor extract_segments out of serialize_pte_binary (#1353)
3835ace23 : Enable sleef, BLAS for executor_runner_optimized (#1354)
0415a35f3 : Only add extended header if segments exist (#1352)
d08d65817 : Update kernel key version comment (#1349)
c44dadf2d : constant_segment tests (#1348)
8305ce46f : Migrate additional op-level tests from test_xnnpack.py (#1345)
b2f5dfe80 : Copy over tensor constants (#1332)
61fcf400a : Move constants to constant_segment (#1343)
303e9d220 : etrecord error message to use f-string (#1344)
988fb0886 : rename kernel_temporary_allocator to temporary_allocator (#1341)
6758f7376 : Qualcomm AI Engine Direct - Enable quantized inception v3 (#1293)
9516b71a1 : Refactor extract_segments -> extract_delegate_segments (#1337)
5f7280271 : block temp allocator == method allocator
d96c49e87 : Fix cached state bug in ToOutVarPass (#1333)
47b44b46c : add memory manager test target back (#1331)
463d9b11d : Add Executor runner benchmark to iOS PyTorch Benchmark app (#1322)
4e7aa6406 : suppress errors in `executorch` (#1329)
d70db7d12 : Switch PyTorch nightly to commit-based pin (#1247)
3a4bb06b3 : Fix SDK CMake (#1271)
d707966ba : Add constant_segment to schema (#1305)
531dbed7c : InspectorUtil Test: find_populated_event (#1326)
5967b8573 : Consume DebugEvents from ETDump into EventBlocks (#1325)
cf17b675a : Change delegate debug metadata to use a bytes array + len (#1317)
008511c68 : Migrate conv2d tests to new format (#1327)
158c7ed79 : Fix lints in test_emit.py file (#1315)
23d608cff : suppress errors in `executorch` (#1320)
e8646d977 : Fix emitter build for Apple (#1321)
c11906561 : Fix test_mps.py (#1256)
475231e92 : Fix bug in memory planning (#1300)
cb2ff0f28 : SDK example runner changes to support intermediate tensor logging (#1307)
23df70b19 : Support ExecutorchModule as a callable in pybindings (#1312)
388490cd5 : lintrunner (#1309)
bed91223f : pow.Scalar_out (#1292)
b8a6b6268 : Update torch pin (11/29) (#1308)
8f9321b78 : Add support for statically allocated buffer in etdump (#1295)
cfccfdfdf : ETDump cpp changes for intermediate logging (#1290)
e37290713 : Separate benchmark implementation to executor_runner_lib for sdk/fb/executor_runner (#1208)
22ea16796 : update test_serde.py to use export (#1269)
19f409926 : Remove noautodeps from files inc executorch/extension/pytree/TARGETS (#1298)
61aad492a : Preparatory fixes & lint suppressions for c10::optional->std::optional (#1226)
2a554ec3a : Reland attempt (#1291)
e7bca6252 : update XNNPack tester to use export (#1265)
1175ba80b : update test_emit to use export (#1259)
5159de436 : Remove usage of detect_fake_mode in lift constant tensor pass. (#1289)
5df2c7544 : Add file parameter to Inspector.print_data_tabular (#1278)
3f96fe1d2 : update docs to use export (#1264)
8c0e36267 : update test_backends_lifted.py to use export (#1266)
e01a95410 : update test_profiler_e2e.py to use export (#1267)
1693e652b : Use python 3.11 and the correct conda env on MacOS runners (#1285)
6dc33910f : Enable mobilebert quant example (#1043)
463b5d829 : Adding logging for program outputs and delegates (#1276)
1f584d77e : Add hooks in event_tracer for logging evalues (#1275)
36e03ce54 : etdump schema and etrecord changes for intermediate tensor logging (#1255)
f4578fc15 : fix CI after partitioner type change (#1277)
8c7159802 : Add a util to generate schema for out variant op from other variants (#1253)
df13e9f2e : fix lint warning (#1274)
1c63f4af2 : Add ETDumpGen APIs to pybindings (#1272)
29ed30398 : Update schema (#1245)
9770ee417 : Migrate tests for additional ops (#1273)
ccdb12eeb : patch partitioner change to partners callsite (#1270)
9b439b7d1 : Fix unittest ci (#1260)
002420e60 : Tester: enable quant const folding by default (#1234)
e55b80fa1 : Add support for fold_quantize=True (#1235)
893e04ee7 : SymInt hunter for XNNGraph dataclass (#1236)
2222a0931 : update codegen concept (#1258)
32538f6a8 : fix lint warning
e06d5feb4 : Update documentation README to reflect the changes in the C++ and Python API builds. (#1248)
22591bd32 : Allow allocate via temp allocator in backend.execute (#1187)
19fe8ecd1 : fix the type annotation (#1251)
e94f58b2c : Fix infinite recursion when parsing op graph (#1250)
fc20dc86e : Add div op-level test for xnnpack backend (#1242)
ad5f76f79 : partitioner instance instead of type (#1147)
f5e4a1e74 : remove _deprecated_global_ns (#1224)
835a36332 : Save reference outputs (sourced from BundledProgram) in ETRecord (#1243)
e5fb274be : Fix lint error (#1246)
98862f293 : reset temp allocator (#1207)
32161cd5a : Skip wrapping Core ML and MPS lowered modules. (#1241)
9a5803156 : Add a way to use cprofile to generate stats and render as flamegraph (#1225)
2ddea28c9 : Don't link against libclog.a for xnnpack backend anymore. (#1237)
b5501b2ea : propagate disable ir (#1233)
71322456b : Update XNNPACK to commit b0e2ac129b0952dd070889fade85aa3943308a61 (#1223)
42268fe40 : Fix broken xnnpack build (#1229)
91b2a6187 : Add atol and rtol as args for verify_result_with_bundled_expected_output (#1209)
78a514116 : suppress errors in `executorch` (#1220)
b83dfd525 : new AOT BundledProgram class to store ExecutorchProgram instead of Program (#1213)
b4844d482 : Increase recursion limit in tree validation to pass mobilebert test (#1211)
3f32b9692 : refactor method to return err together (#1212)
3448a745e : Remove constraint on having all the input arg dtypes to be (#1205)
f746540b5 : Few fixes to export to enable exporting sam (#1204)
84f97c0a2 : change build mode to release (#1219)
3311a3ff5 : Move lifted tensors out of state_dict (#1210)
1cb4b726f : Add links to dialect specifications (#1217)
e9cbfcbac : Create XNNPACK __init__.py file (#1215)
74acb4062 : suppress errors in `executorch` (#1206)
3e9993011 : hash constant buffer to improve deduplication performance (#1185)
47900c963 : Slight refactor of ETRecord Consumption (#1200)
7a441d5c5 : Add oncall for targets owned by executorch (#1198)
968217257 : Use release build for OSS CI runner (#1079)
f4679ae94 : Dependency refactor, add tests (#1165)
d664d76d1 : Update torch nightly to 11/10 (#1192)
bfebedaac : Fix lint error (#1193)
27d649fea : Update to torchfix 0.1.1 (#1184)
5b253d614 : Ignore licenselint for llama2 example in Executorch
f905dd572 : add dialect test (#1191)
3053662e4 : initialize the member variable (#1186)
d153257b4 : Enable FP32 Mobilebert (#1158)
1a0d8758e : MobileBert Test (#1159)
146fbbde5 : enforce dtype in partitioner (#1160)
343cbbfd5 : executorch (#1177)
0b9905dfc : Add initial lowering of aten depthwise convolution to tosa.depthwise_conv2d (#1070)
87bf9dbee : Update torch nightly to 11/09 (#1180)
b5c747dac : Replace Arm git server with something more stable (#1170)
043e8e8c4 : Fix executorch models. (#1173)
82c91ba99 : Qualcomm AI Engine Direct - retire "_unlift" IR capture option (#1130)
6765b94ba : Enable verifier [2/n] (#1155)
b182621de : Update README file (#1166)
8c5aaf0dc : Remove nlohmann json dependency from runtime (#1145)
5c92ab908 : Move executorch_call_delegate to torch.ops.higher_order.executorch_call_delegate (#1149)
8d680077c : Move map_impl to torch.ops.higher_order.map_impl. (#1052)
c0b42d2bf : Introduce pybind11 and build extension/pybindings/portable_lib (#332)
c226e6f37 : Add constant prop pass (#1146)
0fc438362 : Move timescale validation earlier in Inspector Constructor (#1153)
c873c691b : Move constants from Inspector to the util file (#1154)
64e8d25ba : Add a util function to get the delegate list in a graph (#1156)
da0efcadc : Re-sync with internal repository (#1157)
b63c94489 : fix flakerules error for type comparison (#1152)
f281d5fb3 : Fix sdk lint errors (#1148)
6f3737357 : simplify runtime api input (#1140)
c177cf717 : Enable IC3/IC4 (#1150)
764ca74fa : Update Explicit __init__ for SDK and propagate import paths (#1151)
266ee5f6a : Revert D51029740: Namespace doesn't need to be followed by semicolon
03f963e8e : Namespace doesn't need to be followed by semicolon
50f97de74 : Rename selected_mobile_ops --> selected_op_dtypes (#1098)
2a7bd9a0f : Attach op level runtime data to graph (#1139)
53b7cc757 : Migrate the visualizer to use Inspector for etrecord and etdump parsing (#1137)
6349acec8 : distributes test (#1118)
065a4cfce : move schema files under bundled_program/schema (#1057)
beb0aa47a : executor::util -> executor::bundled_program for bp runtime api (#1056)
b5ea51614 : move all runtime apis under bundled_program/runtime.cpp/.h (#1050)
9fe1b3aea : put bundled program under executorch/sdk (#1014)
766a963f3 : Add a note to help with missing `lld` (#1144)
6f6d129b1 : Update PyTorch Nightly Pin to 11/03 (#1143)
ea33dfdd0 : Upgrade to latest version (#1138)
7cd009eb1 : Portable selected op workaround for arm backend (#1122)
a5e529548 : Move remove get item pass (#1027)
a8887ee05 : Move BN fusion pass (#1030)
de1b2af62 : Fix arm CI (#1135)
84ff864ba : Clean up verifier [1/n]. (#1121)
2513f419f : Delete staticdocs files (#1133)
6ce3b4c6d : Move Channels Last Reshape Pass to Tester (#1028)
88fd32f0a : Pass manager should take in type[pass] instead of pass instance (#1029)
0d56bf09f : run method on different stages (#1034)
7e01419c3 : add dump_artifact for debugging (#1127)
53a5cb691 : Refactor TOSA backend preprocess to use visitor pattern (#1099)
a221780c1 : Surface delegate_debug_metadata to Event (#1131)
81b19fdfd : Fix lint issues in size analysis tool (#1129)
49b94df72 : Fix selective build CI job (#1128)
a8e05cc09 : OSS CI skip if xnnpack is not used (#1125)
fdce5db15 : Re-sync with internal repository
3a74a1f06 : Deprecate etrecord.program_buffer (#1042)
3b0c760a6 : Fix arm_backend.py script (#1120)
2535a25ed : Arm build and runner CI (#1111)
bd2462ec0 : Fix CI job (#1115)
dbec07585 : add spec violation note in the tutorial (#1116)
12bcaf6a2 : Remove final references to /docs/website/... (#1094)
097d09bd9 : Point to control flow docs in core pytorch (#1083)
c4fb6621d : Move some delegate docs into the new docs tree (#1082)
b0720d846 : Move program_file_format.md into the new docs tree (#1081)
ee0897323 : Move portable_programming.md into the new docs tree (#1080)
6152ac456 : Install libexecutorch.a and libportable_kernels.a (#1036)
7814dd7f2 : Improvements to gen_sample_etrecord (#1073)
1f7ad3399 : Move gen_test_files from AIBench to ExecuTorch examples (#1107)
ec4868ae3 : Fix lifetime issue of PyBundledModule bytes object (#1108)
66fdef635 : Update export api in codebase to long term version (#999)
251801e18 : Fixing cast issues in Executorch codebase
8eb983526 : Clean the RUNNER_ARTIFACTS_DIR before copying the doc build artifacts (#1103)
c6e6ac1b8 : gen_custom_ops_aot_lib should always use rtti and exceptions (#1096)
6a284bab6 : fix D50422563 oss broken (#1095)
2a494a724 : Fix printf for enum class (#982)
2fb586730 : Don't run duplicated BUCK and CMake runner workflows (#1074)
00d3dc702 : dtype selective build, link codegen and scalar_type_util (#1090)
423f3bc4b : Update optimized op names to match optimized.yaml (#1091)
f1aca09c6 : Don't trigger app-build on xnnpack changes (#1089)
dab0d710b : structural api and schema update (#1009)
b32f5a364 : add util to get non lower nodes (#1088)
28e116541 : Migrate test-demo-backend-delegation to a separate job (#1078)
52da1698b : Remove duplicate dq/q nodes
d7626f2e0 : XNNPACK workflow don't stop on one run error (#1077)
427641cce : Fix cat tests (#1086)
c6c2ea9d1 : Add Llama2 license (#1085)
b258e4227 : Move test-coreml-delegate from pull to trunk (#1084)
199817738 : Run XNNPACK on single workflow (#1071)
cc1a8bd55 : Tidied up Ethos-U delegate binaries (#1046)
d8e9b2600 : Clarify that the python example isn't exactly the same (#1072)
2bbc7926c : Update arm_tosa_e2e tests with new to_edge APIs (#1047)
f8c27421a : Remove dependency on exir.capture from generate_etrecord. (#1002)
896727135 : Simplify gather_models step (#1069)
f53aeeb2d : Propagate version number in the doc build. (#1022)
d65bb6be5 : suppress pyre erros in xnnpack/fuse_batch_norm
f4d9c4aa5 : Add amax partitioner constraints (#1064)
c27a59aa9 : Remove event choices in gather_test_models (#1066)
e677d80da : Onboard ExecuTorch to ciflow bot (#1060)
a815742b5 : Update tutorial to reference dummy model instead of MV2 (#1059)
08fe6b24f : remove breaking progress bar (#1054)
8f446e3fb : Update portable op names to match functions.yaml (#1049)
156c7f0df : Improve gather_test_models (#1038)
17f62529f : Update front-page docs to point to the new forum category (#1051)
86f15a7a5 : Fix README doc in example (#1048)
137e510ec : sdk tutorial (#1024)
e8d30eac6 : update extension of Bundled Program from .bp to .bpte (#1013)
6e471a391 : Switch to oss serializer. (#1016)
6d6130eec : Add CoreML backend to CI. (#1040)
6b05301fe : Trigger the build demo apps CI workflow on any dependent file change. (#1041)
a62900819 : Ignore C901 in index.Tensor upper bound calculation (#1033)
ff20f4032 : api for bp type check (#1017)
ea8aa038d : CoreML delegate scripts cleanup (#1025)
9c29c1a2d : Update PyTorch pin to 10/19 (#1031)
e91de92df : Fix "Tutorials" and "Getting Started" top-nav links (#1026)
2570eec34 : Generate selective_mobile_ops.h file for executorch (#1021)
0969ca84b : Register upper bound inference for index.Tensor (#1018)
8d5258004 : Move iOS app build to a separate workflow (#1010)
9c258d35e : remove "Bundled" prefix for bundled program schema (#1008)
fbff69f59 : give bp a specific namespace (#1007)
32bedbbe5 : Macro to switch on dtype (#1011)
0b82089f8 : Handle negative dim in cat (#1015)
dedb7e311 : Update docs-build job (#1012)
a1187c6eb : Fix lint error. (#990)
c4da03435 : Remove Progress bar from tutorials where it's not loading (#988)
889165562 : Prepare for changing the namespace of map_impl. (#984)
48b0232f9 : Correcting usage of Core ML (#985)
d0396839f : Remove call_spec argument from ExportedProgram ctor. (#981)
8b4abf40d : Remove sh from CoreML tutorial for consistency (#962)
945100dbd : Fix 'getting started' links in examples folder (#979)
82fa49680 : getting started code format (#978)
8c5d0da29 : Update the runtime build doc. (#973)
fa262146f : Update mps docs (#974)
264633c98 : Update the tutorial to make certain steps more clear. (#966)
1545a0415 : Fix C++ API docs rendering. (#960)
4e93e972e : SDK Tutorial: ETDump usage docs (#865)
39c7e61df : executorch->ExecuTorch, other nits (#953)
5d9dffa63 : Fix top-level README.md (#963)
41fe5e58c : Fix lowering example (#958)
4204f9343 : Provide an alternative clone instructions. (#957)
3d97e3477 : QA fixes for Arm tutorial page, Part 1/N (#954)
3942dfa39 : Nit fix in sdk-profiling.md (#951)
b5548c8a4 : Update xnnpack delegate doc (#948)
3e9633bf0 : Fix the xnn_executor_runner location (#952)
51981b0b3 : Fix sdk-etdump.md (#949)
81b38fb7c : Fix selective build broken links and format (#947)
bb7d5e6bb : update delegate doc (#956)
5fc6e72b2 : Fix links in top examples/README.md (#946)
9593f62bc : Rephrase backend dialect definition (#950)
4a14eff91 : Add the ExecuTorch logo to the ET landing page. (#907)
4d4aa7436 : QA fixes for "kernel-library-overview"
e87f20c53 : Update delegate tutorial (#938)
66ffd3527 : QA fixes for "BUNDLED PROGRAM" page (#942)
d5739327c : QA fixes for "Building with CMake" page (#941)
38e327e30 : Fix SDK tutorials link (#940)
3434d75f7 : Update install_requirements script to use https (#936)
c2bd146fc : Follow consistent ordering in examples/README.md (#937)
927859297 : Update nightly version pin. (#921)
d799bd87d : Fix broken link in QNN tutorial card (#935)
25a89372f : Simplify bundled program example comment (#927)
01533bbfc : concept map (#917)
04235e163 : Add README for old folders (#933)
5fe3d5367 : Remove "WIP" from top-level README file (#932)
1ad52ac61 : Remove preview URL reference from runtime-overview.md (#931)
dd1196d23 : Added Apple license (#925)
42797cde2 : Tutorial title consistency (#926)
022c5f5dc : Fix broken etrecord link (#911)
bfb0fe01f : Fix variable number of inputs in mps_example.py (#918)
9a3376913 : Sync list of tutorial cards and left-hand nav section (#922)
b305bc73b : Fix lint errors (#915)
41e9a3e60 : Remove "TBA" runtime docs (#913)
43093e2ca : Update top-level README.md to recommend using a release tag (#912)
286aaccd2 : CoreML delegate changes (#753)
954b5b8df : upsample bilinear2d (#919)
f30c79532 : Update readme to clarify quantization bits
60931689b : Quantization doc cleanups (#767)
8705a04ac : sdk example tutorial (#846)
5c00a83f7 : loose sdk example verification requirement (#906)
96321d6e0 : Update runners for better illustrations. (#879)
21b221582 : use flat_tree from core instead of executorch (#880)
28418dcee : Remove profiling hook from method_meta (#887)
62f3d8ac3 : Re-sync with internal repository (#909)
0a855ba40 : Minor changes to sdk integration tutorial to add command to generate bundled program in cmake flow (#881)
d71efe403 : Rename module etdb to inspector (#908)
062cba711 : Update serialization schema to input/output specs. (#845)
a729ec82a : Link to Setup Instructions (#905)
3bb194104 : Update export api in documentation to long term version (#888)
751e1ccb2 : Fix xnnpack quantization instructions (#904)
a62c53ac6 : Fix Export to ExecuTorch API Reference format issues (#878)
b0f72a95c : concepts.md (#896)
c33932c54 : Support unlift=false for per_channel qparams (#868)
d6bd01274 : Fix per_channel quant axis for depthwise conv2d weights and bias (#903)
7a931b2cf : Setup CI to run tests on Simulator. (#839)
d84b5fb1d : patch some formatting (#902)
35639b85a : getting-started-architecture.md (#900)
6b3036136 : Fix link (#899)
a6667d3ff : Use issue instead of TODO (#898)
6e5b79e14 : intro-overview (#893)
45108391c : Rename getting-started-build-and-cross-compilation back to runtime-build-and-cross-compilation (#895)
cfe2cfd0d : Fix demo iOS in the left nav (#892)
e415d4b02 : Fix link to the tutorial (#869)
37b0d8e24 : Update torch nightly. (#891)
c1b533c51 : Add concepts (#884)
9880be6d9 : Update how-it-works doc (#890)
52c8752dc : Moving CMake build documentation to getting started section (#883)
2aeed4432 : Reland "Fix graph signature data model to list of specs." (#866)
469ee1cf0 : Fix docs (#882)
ee958a4d0 : Fix code block in .md for ipynb (#874)
8beb21776 : Fix ipynb generation (.md -> code block) (#862)
ddc61f002 : pick_doc_commits.py prints commits we should cherry-pick (#863)
fd2b256dd : Add version dropdown to the layout html (#873)
b448008da : tutorial update (#870)
6ce4b6082 : Disable IC3 IC4 Q8 XNNPACK (#871)
506ecc58a : Update README.md (#851)
790239f12 : Fix time scale conversion in inspector (#860)
cb54dcc3c : Redirect top nav to ET resources (#855)
3f181de8d : Fix doc-build job (#861)
92f2d7ba8 : Revert D49876258: Fix graph signature data model to list of specs.
26f556201 : Fix ExecuTorch SDK tutorial home page card (#856)
ca89995d1 : Fix graph signature data model to list of specs. (#836)
18dac767a : Link Fixes (#857)
15d80d32b : Fix link (#858)
6e12632fa : Add a workflow trigger to release branches (#853)
d61c9bc0d : Fix unnecessary text in tutorial (#832)
52a95b7eb : Update job to strip from "v" and patches. (#854)
14715f67a : Change default memory planning algorithm to greedy. (#830)
3748b0088 : Update more examples to use long-term export APIs (#810)
7baf02cc0 : Add android app screenshot (#850)
dc0be34e0 : Fix xtensa demo tree structure and format (#852)
67d4c6b0b : Fix documentation frontpage. (#848)
ed16f6675 : Add MPS delegate (#754)
102ec0ac1 : fix typo "buck2_oss" -> "buck2" in integrate tutorial (#847)
e9c06644a : Fix Sphinx build warnings (#840)
9833feb35 : Get export APIs ready for PTC (reland) (#835)
bc17c52d5 : move tutorials higher in left pane (#844)
66505007e : fix a couple todos (#842)
b5d569998 : Re-word misleading delegate doc (#826)
ed0e858dd : add __ET_NODISCARD to backend.is_available (#838)
3ff6a0b62 : Remove etrecord parse_etrecord API from doc (#805)
61a79678e : add diagrams and partition part for delegate (#768)
469e92216 : Update torch nightly for branch cut for v0.1.0-rc1 (#834)
205c8de56 : Op fusion pass (#733)
0f2bc699c : Revert D49742989: Get export APIs ready for PTC
f2650f513 : Fix typo and improve introduction for SDK doc (#825)
a4a86df82 : Get export APIs ready for PTC (#566)
c73fea87f : Linking to an item in .rst from tutorial .py (#798)
469874b96 : Add links to the tutorial for xtensa demo (#831)
2c6b9dddd : Remove the temp for xtensa lint (#829)
234ed8afc : Update docs et remove loop from executor (#828)
9338ff17f : update function doc to rst format for auto-gen (#823)
8e00e3bf1 : Remove suggested tags for issue report (#822)
3e19338ee : Added tutorial to build and run QNN backend (#772)
447f6f8b6 : Fix spacing in SDK Tutorial (#827)
72fa25eb5 : Expose readme to tutorials in docs. (#770)
0e6b27793 : fix minor typos (#786)
a5a428a37 : Remove extra * (#800)
c5000e6ef : examples nits (#820)
9d47e8c20 : doc - update code block decorator from console to bash (#824)
ad8878957 : native_layer_norm: add (void)ctx (#813)
4f00517c3 : Fix & cleanup optimized native_layer_norm (#812)
7ef721c07 : Move Inspector Tutorial Code into individual code cells (#814)
c20df2aee : Add support for EdgeProgramManager and ExecutorchProgramManager in etrecord (#788)
592ac127f : Move xtensa custom kernels to third-party and add license file (#789)
b3a59611a : Readme to point to tutorial
8bc746a42 : Clean examples/xnnpack/readme (#784)
42d8dd9a5 : XNNPACK Lowering Tutorial (#634)
c74bf0967 : XNNPACK Delegate Overview (#635)
1ea5377b7 : Cleanup existing top-level README.md (#819)
012c9d74e : Adding CMake and OSS Buck build details for ETDump (#779)
beb212955 : Fix upload-gh-pages (#817)
9c615a45a : index.rst: Remove "release status"; add compile/export links (#816)
164aeab57 : test for torchvision VIT (#356)
4f85f6400 : Delete old index helper functions (#808)
f6bb2bd8f : Rewrite scatter_add (#807)
d8456718a : Fix kernel registration doc (#815)
09e51f9a0 : use headers in concepts for anchor links (#802)
3c3241f1c : Documentation and model gen update for Arm target (#773)
ee78051c3 : Link website tutorial to README.md (#782)
269c6d00e : lintrunner and internal lint fixes (#801)
ab3b52297 : memory planning doc (#781)
8ee941120 : Add supported host environments (#776)
cf05295f8 : Update the Note Format for SDK Integration Tutorial (#804)
a32431ed3 : Update doc build job to publish stable tags (#740)
d62c41ca8 : Update SDK Integration Tutorial to use ProgramManagers (#799)
542f49709 : XNNPack --> XNNPACK pt2 (#803)
469b5caca : Fix optimized log_softmax (#792)
e89b74400 : Cleanup log_softmax & softmax (#791)
20df1a2a5 : Fix op t_copy (#790)
2628c0e5a : Fix index.rst links to tutorials (#737)
f52944302 : Remove runtime instruction in llama model (#796)
b9d139c37 : Bug fixes (#795)
0615470a7 : Fix CMakeLists.txt by setting a boolean in if conditions (#783)
2f7c03fc3 : DL3 XNNPACK FP32 precision correctness test (#794)
a347c8550 : Add div.Tensor_mode sample input (#760)
81b9cc2f5 : restructure bp export example (#751)
e57a9063b : update types for bundled input and expected output (#748)
4a0c64de0 : fix typo in doxygen file (#785)
16d74c5dc : Add tutorial page and minor fix to doc (#777)
671964d61 : Move old ETDump schema to fb specific folder (#716)
ee5ec6a4c : etrecord is opaque (#762)
79f331a41 : Add Tutorial Examples to SDK Integration (#771)
d0d664dad : missing include for some builds (#775)
1c22c4598 : Update patch_repo to take repo_dir instead (#761)
29f6dc935 : Script to automate running the Demo App tests in Simulator. (#744)
d3ef770d9 : All the ops needed for the linear demo example (#749)
3916ab8f8 : Rename unit tests. (#769)
ca6ece13d : Improve example used to demonstrate delegate debug mapping generation (#519)
1de1a8227 : Rename bundled_executor_runner to sdk_example_runner (#727)
35030bf6c : Fix CMakeLists.txt to skip building executables with iOS toolchain. (#766)
5b02e836b : Remove inception model part (#765)
b68f757f9 : Increase threshold for MV3 test accuracy issue (#763)
d39918f1b : Fix op squeeze_copy (#759)
16e8bfe61 : Fix op transpose_copy (#758)
fc76d9e98 : Fix op select_copy (#757)
77cb4a58c : Fix op index_select (#756)
09e7cbac2 : fix typo "Exececutorch" delegate example page (#755)
caa7080ae : Link to SDK from the concepts page (#752)
470d47d3b : Print headers with timescale in tabular mode (#747)
0a5767174 : Fix op permute_copy (#726)
4eb35da6a : Fix op native_batch_norm (#725)
017a718df : Rewrite index_put (#724)
b6c2f0126 : Introduce advanced index util (#723)
66962d248 : Implement output broadcasting for split_with_sizes_copy (#712)
7a7f9dd2e : Simplify op copy (#711)
80d98577f : Fix kernel ops (convolution, avg_pool2d, max_pool2d_with_indices) (#710)
19da79a25 : Update docs and aot demo files (#750)
9e657ae1f : remove duplicated property "median" (#746)
65902454a : Fix minor nits (#732)
efc427c36 : Disable random debug print (#745)
7e3c1f474 : Update the readme file. (#713)
4dbcdb72f : Update the Setting Up tutorial. (#728)
696b934ac : make GetProgramData only for bp (#730)
bac81e689 : Update doc (#661)
30c51c03a : Move selective build docs to docs/source (#736)
be73d8598 : Revert "Separate XNNPACK and HTP lib (#682)" (#738)
e46025950 : Merge delegate runners on Arm Baremetal (#718)
4bb03ffb3 : fix quantized ic3/ic4/edsr examples (#574)
81a302af5 : Reorder the pages, renamed etrecord and etdump titles (#739)
b167f04a5 : Minor Script Updates (#715)
e15f002bc : Runtime API tutorial (#673)
04bd18b0b : Clarify that buck2 version is important (#734)
809cc4b4f : Exclude Framework events from node association (#735)
94c6692b6 : Exclude OPERATOR CALL Rows from print_tabular (#729)
88a3b3bba : Update export examples to use the long term APIs (#641)
5da410431 : Update Xcode project to use xcframeworks instead of regular libs. (#691)
19fa9b2ed : Export a Jarvis program in OSS with executorch (#654)
a1cc31a6f : Add flatcc deps for oss (#722)
80e957837 : fix the instruction issue (#731)
b25c4956c : use 3.19 version of cmake_dependent_option (#721)
27e4d07e6 : ETRecord OSS documentation (#510)
4cd34570c : Inspector APIs documentation - .rst part (#518)
32fa61ed8 : Replace GitHub link with the link to pytorch/executorch (#671)
5a3e873a0 : Add GA tag (#719)
162290056 : Fix op index_select (#709)
3b9b2c69c : Fix op constant_pad_nd (#708)
b98d6a75a : Fix & cleanup op layer_norm (#707)
bbdf57984 : Fix & cleanup op gelu (#706)
a96dce50d : Fix & cleanup op lift_fresh_copy (#705)
2ba820c25 : Fix & cleanup op select_copy (#704)
6a6733d8d : Fix & cleanup op slice_copy (#703)
7f36c70da : Fix & cleanup op scalar_tensor (#702)
c1032d735 : Fix & cleanup op t_copy (#701)
eaca8f7f7 : Fix & cleanup op transpose_copy (#700)
d3df91a1e : Fix & cleanup op ones (#699)
75df54d9f : Fix & cleanup op any (#698)
2c190cb6b : Fix & cleanup op as_strided_copy (#697)
fe4e70de1 : Fix & cleanup op tril (#696)
0a9947d05 : Fix & cleanup op logit (#695)
7a838f795 : Add tensor utility functions (#694)
788d3391a : Functions for handling zero dim tensor (#693)
006e3b2ce : Separate XNNPACK and HTP lib (#682)
070f7a858 : Cross compiling for Xtensa (#579)
83f41a731 : Hide QNNPACK from OSS (#681)
a2257b0a6 : Adding documentation for ETDump (#632)
67913e3cc : A tool to automate iOS frameworks creation. (#688)
b602439d9 : Remove <model>.pte generation from export_bundled_program.py (#714)
8035f47ab : Profiling documentation (#631)
4418dfdc1 : Check if dl library is available and only then link it in for executorch in CMake. (#657)
51d6afa6e : Add initial lowering of aten.convolution to tosa.conv2d support (#615)
9c7b2c9e3 : A tool to create XCFrameworks from libraries build with cmake. (#680)
207715c80 : more fix to permute memory format pass (#689)
b7e3a29fd : Add custom kernel registration doc (#619)
7af058d9f : Simplify flatc build option default value logic (#650)
4ff9736be : Fix CMake schema generation (#685)
e60998257 : Fix lint errors
ddd1cd75e : fix permute_memory_format_pass (#684)
7966c5df3 : Inspector APIs documentation - inline docstring part (#626)
a77faff9c : Hide the latency label and portable ops mode. (#683)
09e60a03e : fix the example backend (#664)
be8253558 : Rename edir into debug_format (#674)
78197a308 : Initial framework of an ethos-u runtime backend #3 (#659)
0aff4daff : Move fbjni to examples/third-party (#678)
af31d8e6d : More demos fixes (#676)
882d98704 : Add progress bar for delegate docs (#669)
7df85c8ab : Executorch -> ExecuTorch
85ba2b75e : Restructuring demos (#531)
dab79113f : "How ExecuTorch Works" - final touches (#668)
9e379eabc : Move sdk executor_runner to sdk/fb/runners folder to hide from OSS (#604)
01066df1d : Clean up UI (#660)
8c728db59 : Add the ExecuTorch Overview section (#662)
6e02c3c5e : Kernel Library Overview (#629)
d2c0f1e7c : Core Opset Definition (#628)
94119f6a2 : Address some runtime-overview.md comments (#646)
d235ba89b : Fix bundled_executor_runner schema visibility issue (#663)
fd82f21e0 : ExecuTorch Concepts (#624)
e8e8f3f3d : SDK Integration Tutorial (#620)
a5cb1e2f2 : Add `squeeze_copy.dims` (#655)
68719386c : CMake changes to build bundled_program and bundled_executor_runner with ETDump (#653)
737186ecb : Tutorial for building and running a simple model on Xtensa (#587)
aaf1f4898 : Add example script to run on cpu (#611)
7648be84c : Add support for building bundled_program schema in CMake (#656)
5e6281503 : Use ic4 xnnpack for classification (#652)
55d9c1274 : Fix qualcomm readme.md file path (#651)
7c64f01e0 : make bundled_executor_runner only for bp (#532)
62cba608b : Update bp export example (#633)
1ed625018 : Test (#578)
f2273df31 : Test (#577)
921120fd3 : Test (#575)
54ddab9ac : add convert scalars to attrs pass in quantize tester stage (#576)
6230f8fca : quant params from static inputs (#573)
98af7191c : Fix the android demo app tutorial (#649)
eeb85a3c5 : Update torch pin (10/05) (#644)
3afa24f67 : Fix some license header (#647)
f296eb481 : fix a few typos and rephrase
2ce732973 : Delegate Integration with SDK Documentation (#643)
144c8c414 : delete outdated ios demo app
8f0ba2538 : Allow switching between two backends
07aaad406 : Add llama model to examples (#559)
85116fd82 : Move bundled_executor_runner to sdk/bundled_executor_runner using hg mv (#621)
1dabb9046 : Android app integration (#547)
d40997584 : Add "Loader=..." to yaml.load callsites (#588)
0692affdf : Fix formatting and cleanup (#593)
79f3db2fe : add cpp api reference (#618)
43eb74d4c : Add EXECUTORCH_ENABLE_LOGGING and EXECUTORCH_ENABLE_PROGRAM_VERIFICATION (#638)
c85b683cf : Rearrange libs per backend and add CoreML deps. (#637)
d935d9a8a : Use only cmake for MacOS CI to save capacity (#613)
ef9714845 : Qualcomm AI Engine Direct Backend (#490)
de3c0247b : runtime-overview.md tweaks (#636)
974e2e360 : Properly introduce selective build options (#625)
86449bfc9 : SDK intro documentation (#535)
b06dfda05 : Markdown template for tutorials (#609)
5e7f49bdc : runtime-overview.md: mention inference-only; update diagram (#627)
aa041b393 : A tool to print all exported headers of a buck target. (#612)
d8f407516 : Remove default argmode (#601)
cb09cd7f5 : Move GenMode back to OpInput (#608)
9ef850c4f : Migrate runtime-platform-abstraction-layer.md (#623)
0e5ddd13d : Refactor install_flatc.sh (#606)
5e2e6ce10 : replace deprecated functions (#596)
97189f990 : Remove shape testing code from dialects edge (#607)
dfd947a08 : Cast to float in nearbyint function for quantize and choose_qparams ops (#598)
bc6acf62f : update delegate api in document (#600)
17fee78d0 : Add `buildRescale` helper function with scale width and double rounding option (#567)
bebff52a6 : Add lowering for aten.clone and aten.view_copy (#602)
7c68ec666 : Move MacOS jobs to trunk to reduce its load (#610)
9e47a086b : fix handling of optional inputs (#605)
504b9b324 : Add a readme file with build instructions. (#586)
3a69ea032 : Use proper Xcode schemas instead of autogenerated. (#584)
5c71c95ce : move install_requirements.sh to oss folder (#597)
04782baf7 : add exir-api-reference.rst (#457)
782fdedcf : Move the new demo app to a new location. (#585)
a3b0f0fa6 : runtime-overview.md (#571)
f4e16928f : Add aten::squeeze and aten::unsqueeze to functionalization (#274)
ee3c5e191 : XNNPack --> XNNPACK (#590)
53fd3b301 : version bump for mobilebert (#592)
2a0203262 : update set_tensor_data to use MethodMeta and set_input (#524)
f533d5aa4 : add long running tag to e2e model tests (#372)
393a324a8 : Re-sync with internal repository (#599)
ee4d98bf8 : Remove getter for event_blocks (#581)
1297f7dca : Make etrecord path optional (#572)
7ed747972 : Add buck config to remove ET_LOG (#569)
fdaf6fab7 : flag a bunch of method functions as deprecated (#523)
e0cde5b93 : Add Meta llama as a submodule. (#560)
141fcf61d : Forward fix test_create_submodule_list_return to be backwards compatible (#591)
d5547f12d : Remove unused imports in exir/delegate.py (#589)
4df159215 : Print errors in hex (#476)
9c62249eb : EXIR spec (#482)
8b0fc05d2 : Amend toolchain file matching in CMakeLists to avoid building executables for iOS. (#583)
a5b6f5e3d : internal fixes for FunctionalTensorMode usage in AOTAutograd (#538)
0b0f49e1c : Add `split.Tensor` and `unbind` decompositions to core ATen decomp table (#550)
84b333d47 : fix resizing scalar tensor with empty sizes (#548)
f9e23ea01 : iOS demo app. (#557)
145b624c2 : If arg is None during OpGraph generation then ignore it (#530)
99824b1f7 : Add quantized linear layer lowering (#549)
77c466877 : Update release condition (#522)
764253885 : Simplify runtime FunctionRef (#555)
869704eda : Fork FunctionRef for pytree (#554)
deabaca09 : Add Inspector to init file (#570)
39de0bf15 : Cleaning et_schema of InferenceRun for OSS (#565)
d3ff3ea0f : Fix cmake linter errors (#580)
7906c18a5 : Cast long inputs to float32 before feeding to xnnpack graph runtime (#553)
83af3a454 : "CLI" aka a script to run the Inspector (#546)
a3a1d6441 : Update instruction to install flatc (#447)
99b1539e0 : Pass flatc executable path via env var for exir serializer. (#568)
80ee5f61a : Enable building ETDump with CMake (#564)
d3ccc62dd : Update torch pin to 10/02 (#562)
d1f917726 : Selective build (#563)
22354c8a0 : Fix the docs build by using unique titles (#561)
578fdf3d4 : Append xnnpack prefix to dynamic_quant_utils target built with cmake. (#528)
1de10f476 : Rename etdb directory to inspector (#552)
10138facf : OSS doc (#462)
86a88c295 : update error message (#499)
d7dc494c3 : Fix lint error in Utils.cmake (#545)
79124a0c6 : Fix Kernel Target (#551)
ee7ae7df7 : Allow lintrunner to run internally (#533)
1ab02edff : Move ETDB to fb/ (#543)
0b96a7e40 : Make print_data_tabular work for both terminal and notebook environment (#544)
3f535e0cb : Add split_with_sizes_copy (#537)
e7228d4b9 : Fix in terminology.md (#507)
8fcaa60ea : Sort cmake_deps.toml alphabetically. (#527)
036712d50 : First commit of quantized lowering of Add (#495)
91d29fb23 : Update torch pin (9/29) (#539)
c25414a9f : Work around an old GCC bug in memory_manager.h (#529)
aa82b7d46 : Add extension/data_loader targets to cmake build. (#521)
7b945cc25 : build and cross compile (#498)
1f2a9bf1b : Adding support for a scale factor when parsing ETDump results (#534)
4b2f0b7fd : Fix lint warning introduced in D49715284 (#526)
a6ec5dca4 : Make edge dialect program a first class citizen (#508)
30eb47f8f : Update third-party pytorch to 869226b (#520)
620b7692e : Remove ET_LOG and verification from OSS Release build (#509)
8accb8629 : helper function to print node info (#512)
645594e90 : Replace node.meta source_fn with source_fn_stack (#210)
61077c4a4 : Fix to the Export ExecuTorch section (#506)
5d424e274 : High-level architecture and components of ExecuTorch (#505)
06c7af518 : Add support for tuple as input type to delegate in emitter. (#511)
732d92bd8 : Add support for using ETDumpGen (flatcc) in sdk/executor_runner (#502)
63e0f768f : Rewrite the Setting Up ExecuTorch doc (#496)
e3ca6197d : Update docstring for DelegateMappingBuilder (#504)
c8725807b : Populate Event attributes with op nodes metadata linked by debug handles (#401)
1cc64fea0 : Re-enable ir validation for example models (#492)
4f9a01d7c : Add support for delegate profiling in event_tracer and etdump (#484)
2500d3a49 : Update torch pin (9/27) (#500)
153cf3bf5 : Allow kernel manual registration (#491)
8cbfa80e0 : Fix the key used for extraction of delegate debug handle map in emitter (#493)
e7ec26787 : Fix size test flag name (#497)
4da9d6650 : Update ExecutorchModule to use data loader (#485)
d51513539 : Revert D48854790: buckification
e3104ffab : Decouple model e2e tests (#487)
59d84eee1 : Add a test for building size_test with no ops (#375)
938345397 : Separate delegate doc into two parts: vendors and users
57142858e : Add EXECUTORCH_BUILD_HOST_TARGETS option to guard host-only targets (#463)
b6e182c40 : Verifier for exported program (#292)
f586b597a : export and delegate example fix, typos (#478)
f6a8d9d85 : Add more Edge->TOSA lowerings and fixes (#475)
82e26a8f2 : subtle typo update
a704b2b69 : (#488)
13bac8404 : Print executor_runner outputs using EValue's new operator<<() (#481)
691400301 : Make operator<<() wrap long EValue lists (#480)
bd26dbf58 : Implement operator<<() for EValue (#479)
9b384a4c0 : Remove legacy `tosa` backend - which is now landed as `arm` (#464)
12b3a9142 : buckification (#460)
fa7a46470 : Fix missing flatc in doc build (#483)
e34ecdd41 : Upgrade flatbuffers version to 23.5.26 (#465)
c3a6a6af0 : Export to Executorch Tutorial (#474)
48e27b4ea : How it works (#466)
fd30e3aef : Enable quantization for mobilebert (#477)
5b32d8aca : Fix %hhd format specifiers as PRId8 (#470)
a89017d09 : Replace Executorch with ExecuTorch, Part 6/N (#471)
c857a5496 : Replace Executorch with ExecuTorch, Part 5/N (#469)
749a365d3 : Add event_tracer pointer to delegate backend context (#409)
7a1b7f4a0 : Add support for delegate profiling in event_tracer (#408)
68cb85106 : Install Executorch when building docs (#467)
1027a2f22 : Replace Executorch with ExecuTorch, Part 2/N (#468)
a3e648082 : Fix build breaks from llvm-17 (#461)
89b7e896e : Don't print really large output tensors (#448)
c19b58f60 : event_tracer logging inside method.cpp and program.cpp (#346)
57c328c6a : Edit profiling_and_debugging_delegates.md using inpage editor
74bd5481d : Enable quantization for EDSR (#456)
b541a48c6 : Replace Executorch with ExecuTorch (#455)
2f2450207 : Work around spec violation in executorch program unit test. (#453)
cf0676bf2 : DL3 needs 12xlarge instance (#438)
a35d1ac19 : Fix forward unittests.
167b72da0 : Summary: Initial TOSA support in executorch (#161)
c3f3ef40b : Add EDSR model (#201)
5169d8d10 : remove executor.h 14/N (#348)
d24ea23cf : Add support for event_tracer in KernelContext (#344)
2bb389695 : Prohibit customize export config (#337)
da7c87306 : Back out "delete .int overload" (#432)
a2378953d : remove redundant inner header from delegate authors section
baa8d8749 : Update optimization recipe (#220)
e045952f7 : gather-model to pull git submodules from the PR (#431)
767e965f3 : Export User Guide (#418)
dc009f883 : Enable quantization for inception_v3 (#426)
5e3fcbbb9 : The number of inputs given to exported program should be 2 not 3.
175ca79c7 : Delegate + Passes (#419)
5ac7661c8 : 3p dependency guideline (#272)
af370c26d : update cross compile instruction
9099f7943 : enable vit quantization (#417)
aa247da37 : delete .int overload (#423)
87fdedd2d : Fix broken unit tests and timeout for deeplabv3 (#425)
78f884f2b : sdoc update (#422)
d9eef24bb : Make docs build less flaky (#420)
bc176aae8 : bump num kernels (#421)
a216a0388 : update prim ops lib (#349)
31c80cfb2 : Update torch pin (9/20) (#407)
052f62463 : Fix op_to_copy (#416)
0d58374af : end2end test for bundled program (#395)
74278c6f9 : support multimethod by method name runtime (#394)
7108f3718 : support multimethod by method name on AOT (#341)
96b83ced3 : conv1d fix (#411)
fedc04c49 : Migrate runner-like targets to use the new MemoryManager API (#403)
6944c4556 : Clean up MemoryManager API (#402)
9eca2a9de : Make linter happy by removing unused imports (#412)
96a84051a : Add AoT apis (#345)
dea872c19 : Migrate ExecutorchModule.mm to use the new HierarchicalAllocator span ctor (#391)
2c975163c : Delete the unused TestMemoryConfig target (#390)
e1719b8e6 : Migrate pybindings to use the new HierarchicalAllocator span ctor (#389)
0bce2cb0a : Migrate runner-like targets to use the new HierarchicalAllocator span ctor (#388)
58c8c924f : Make HierarchicalAllocator use buffers instead of MemoryAllocators (#387)
650333e83 : Split out HierarchicalAllocator tests (#386)
e03bd6c0a : Add lift constant tensors passes after aten_to_edge (#359)
49d2e683c : Update how we input kwargs (#314)
b78576ec9 : Enable quantization for deeplabv3 (#398)
ebb752fd9 : fix nonzero upper bound (#393)
60ea5c603 : always partition static attr and addmm op is supported (#354)
2b7eb62dd : remove transpose addmm weights hack (#358)
51b238579 : Operator Support uses Exported Program (#357)
d5ed0e8c9 : Fixing xnnpack+qnnpack test (#355)
932f39f32 : Update rst files for python docstring generation (#399)
395e51acd : Rename MmapDataLoader::From to ::from (#383)
e41017f1a : FileDataLoader::From -> ::from for all of //executorch (#382)
e2dd0bec0 : Rename FileDataLoader::From to ::from (#381)
4ee204f10 : Program::Load -> Program::load for all of //executorch (#385)
8a5f3e89c : Rename Program::Load to Program::load (#384)
3acbbb172 : Should automatically pop modes
0f3d42f26 : Constraint based upper bound memory planning. (#264)
a7e9dcfcc : Rename SymShapeEval pass to HintBasedSymShapeEval pass and add warning for it. (#377)
c52000a73 : Update the quantization location in the executorch_stack (#380)
432bfe371 : Add sample input for pixel_shuffle (#364)
1f5e54a82 : Have a DelegateCall error not abort the runtime (#360)
7b29899be : Add helper to resolve debug handles of Event (#368)
e60110a58 : Convert Flatcc ETDump Profile Entries into Underlying EventBlocks (#367)
4e0353611 : ET Schema change to add module hierarchy to node metadata (#376)
bfa89be3e : fix example partitioner (#373)
0199c1b52 : ETRecord changes for delegate (#352)
e6e1898e3 : Don't generate docs for exir.serialize (#371)
9c8036a10 : Fix test_end2end.py (#369)
30d915a4e : Add an option to specify an existing flatbuffers compiler for cmake build. (#365)
e04d9ba7f : Re-enable inception_v4 quantization (#366)
31a96ea81 : Fix Debug Handle Map Typing (#363)
1315a924a : Add aten::pixel_shuffle.out portable variant (#351)
1200d595d : Adding support for delegate map in emitter (#334)
f0125ba27 : Change partition function to take exported program (#340)
8209cb3e3 : Update Inspector interface based on new design and define underlying data classes (Event, EventBlock, PerfData) (#343)
cc5e34188 : Add argmode argument to DtypeRunner.run (#350)
d30104733 : Remove metadata on delegate getitem nodes (#342)
65189b519 : Run more C++ tests on CI (#333)
0574637db : rename to backend/interface.h (#285)
7f395fdda : Export emformer RNNT encode, predict, join (#173)
2590257d5 : Fix type hints for optional dtype (#347)
72abf1c6f : hg mv backend example to a different place (#328)
270c80db3 : update index: dtype constraints changed in ATen (#321)
18dba91ec : Fix select_dtype_combinations return for GenMode.All (#339)
d1a009e23 : Add etdump_gen to generate flatbuffer containing profiling/debug events (#338)
348283015 : remove the supportive for attachment/metadata (#281)
4f3e5e65d : Hide Method's ctor and init() (#336)
73f699acc : Remove Inspector API usage from lib.py (#327)
c5fecc823 : Enable W2L XNNPACK delegation (#212)
1bf2ff4ef : Add macros and helper classes in event_tracer needed for profiling (#294)
31aaa1292 : Add TorchFix linter (#298)
e697d213b : clean up TODOs (#310)
10361c4a5 : unified partitioner for qs8 mv2 (#306)
8abb05f00 : unified partitioner for qs8 mv3 (#309)
4923fafd1 : Make Default Testing Config Canonical Config (#303)
4955252da : Make tester partitioner default partitioner (#300)
a0e9dbad5 : quant_params should come from implicit nodes (#301)
7bfec8fa8 : move quant_params class out of quant_utils (#308)
18ffbcc0c : add RunPasses Stage to XNNPACK Tester (#307)
1eee0214d : expose all stages in init (#302)
b4c881265 : fix mv2/3 input shape (#304)
3e78b2fc1 : remove short-term quant (#305)
8e5616f38 : Bug fix in node.metadata deserialization for edge ops (#299)
dfde23d26 : Enable flake-finder and distributed execution (#273)
4209306f9 : Add deeplab_v3 model to examples (#60)
192da3660 : Use optimized library if op available (#324)
a07832dfc : Re-sync with internal repository (#326)
0cd9c57b8 : Use sample input for edge ops (#291)
632a6e172 : Make bundled_executor_runner use MethodMeta interface and adjust non-const indices (#319)
a404e289d : Make sdk/executor_runner use MethodMeta interface and adjust non-const indices (#317)
04d5c240b : Make executor_runner use MethodMeta (#318)
f6f3f005f : Migrate relocatable_runner to use MethodMeta (#316)
5762802b4 : Adjust MethodMeta's non-const indices by 1, and update all users (#315)
1f558c7fd : Fix missing runner in Linux selective build and custom ops jobs (#313)
63bb3b521 : Support torch.int32 as a dtype for quantize and dequantize (#289)
fbbec0081 : Initial gtest bringup for executorch c++ tests (#271)
8c0dcea02 : Add schema changes for delegate profiling in etdump_flatcc (#280)
f618121bf : Update export documentation (#296)
b2ef921da : Wrap partition tags as part of partition result (#269)
1909b32a3 : add cmake install as part of setting up executorch
65830d9d2 : fix lint warning (#295)
69a68b418 : fix test name (#293)
04138365f : Back out "fix the test name"
2c22fbb59 : fix the test name
8e2ed67d6 : Fix emitter bug which doesn't allow input lists to delegates (#282)
2f21fe636 : example quantizer and delegate backend
89f3e39bb : Allow each model to choose its own runner (#277)
0c1d726e5 : Run lintrunner on CI (#266)
951425d39 : Update nightly version, 9/12 (#283)
b490757fd : Apply lintrunner --all-files -a (#275)
26527efdc : Partitioner new op guideline (#284)
7692b51ec : Rewrite Index Op (#278)
da70c5db9 : linearize_access_indexes only needs dim of broadcast_to (#270)
e9c585188 : Add backend init context to backend.init (#276)
925cd1b33 : Unified XnnpackPartitioner (#265)
71470a750 : Move examples export to two stage export (#238)
27dabc56e : Cover tosa AOT tests in CI (#267)
77ea624ba : Initial print profiling for xnnpack ops (#227)
060f4b3bd : Ignore .hypothesis dir (#268)
38277dab6 : Add init context
32acb602c : remove executor.h 8/N (#193)
b5ae26fdb : delete duplicated declaration (#256)
d84d9021a : remove executor.h 7/N (#192)
3200b94f7 : add export utils to example targets (#228)
40f0cca41 : Add backend runtime context to backend.execute (#211)
2c9057a35 : Run all unit tests on CI (#261)
fea0bd480 : Add sample input for a subset of core ATen ops (#262)
6fa02377f : Add util to find functional and/or out variant of an operator (#250)
7e6b2b1d2 : Move opinput api.py argtype.py and dtypes.py to edge dialect (#249)
094d00e90 : set_output_data_ptr api (#223)
64d451f52 : buck macros (#260)
aecef92de : passes on edge dialect will be done via tranform (#236)
29bbab89f : Fix torch nightly versions (#258)
51ec1a59d : Update `add` tests (#259)
0affcf24f : Improve module import (#235)
4f297a5c0 : move modules to ops (#257)
36d11381a : Add gtest to executorch third-party (#253)
c7808d4a0 : Fix executorch/exir/tests:quantization (#243)
2c0f52926 : Enable CMAKE linter (#255)
05167efdc : Create pytest.ini (#230)
2b0be2a05 : Fix CI breakage (#254)
e04ec330d : remove executor.h 5/N (#186)
69d1b7fce : Enable CLANG-FORMAT via lintrunner
c19cd11f6 : Depend on lintrunner-adapters pypi instead (#251)
2b5d9ebc8 : ETDump FlatCC Python Schema (#248)
5e6e4370a : Support inception_v4 in examples (#203)
68c1b01ce : conv_to_linear: Allow logger to control the verbosity (#179)
d7c4ccf00 : Adding EventTracer in runtime and support for it in Method (#232)
9d134cbe4 : Add option to disable XNNPACK executor runner build (#242)
b388d3d85 : Add priliminary support for lifted graphs (#199)
790a5cdae : Update nightly version (#237)
b2af64f74 : Update torch.ops.higher_order.cond (#247)
0acc96bfa : Add demo backend with delegate mapping generation (#221)
8a7cfa779 : Outline for Delegate Debug Index Documentation (#234)
d50a8ec64 : Add ufmt linter/formatter (#246)
02337bccd : Update the development guide with an improved package management script and newly added packages (#229)
1a83ee4ec : Update custom ops example (#245)
9c4c5c3c9 : sanity check prim inputs (#176)
50a95bf6b : Create separate ETDump schema for FlatCC (#214)
73f653230 : Add backend runtime context (#209)
c6a56c21c : Fix exporting issue (#239)
1406fe875 : Add flake8 via lintrunner (#241)
6a703ce85 : Remainder: std::fmod vs aten remainder (#90)
da53c68fd : remove executor.h 9/N (#194)
4db7dc1c8 : Hide exir.serialize; rename [de]serialize_{to/from}_flatbuffer (#187)
e379356d4 : Add docs README.md and a custom directive (#208)
b2fa50822 : Add operator name and arg type in Error message (#188)
d2e6750f8 : FF jarvis (#178)
bba84db13 : Fix flaky issues on MacOS CI (buck2, libzstd) (#200)
770e4cca9 : Increase the job timeout duration from 30m to 60m (#218)
f3c4edc5a : Change _generate_new_graph_signature (#206)
f5796c7b4 : Add debug and release build mode in CMake (#205)
8219bcc4c : suppress errors in `executorch` (#207)
dd57cc2ea : Set max kernel num (#204)
6f3cb16ef : Add debug handle map (#146)
d93a7d02a : remove executor.h 4/N (#185)
834c53f12 : Remove accidently checked in model
ab258aaac : Fix ._transform naming (#162)
a3fbf2de4 : Remove executor.h includes 1/N (#183)
4f832e05a : Add MobileBERT (#181)
6bc218136 : Add div_mode_out (#190)
ad404a89b : use transformers==4.31.0 (#198)
a74e11c33 : Remove graph pattern based partitioners (#196)
5d075e7c9 : Fix T162411243 (#191)
9ca4ed025 : Introduce DelegateMappingBuilder (#184)
511b441c6 : ignore the copied files (#197)
7539564e7 : Turn on ir validator for example models (#195)
f32caa84a : Update torch versions (#189)
8a738bcc2 : Fix bugs in selective build cmake (#182)
11feabf3f : bugfix framework tax (#177)
0a414fad5 : factor out shared test logic, name libs appropriately (#174)
89559ad52 : Add pypi/transformers to OSS deps (#180)
ca12e1b2b : refactor pybindings lib (#132)
33e1155dd : Comparison Ops Common Type (ge/gt/le/lt) (#171)
56df264d2 : Re-sync with internal repository (#175)
5f815bac0 : Update the executorch stack diagram
34317e2a6 : Add example script to generate bundled program and consume in examples/executor_runner (#166)
03405980d : Remove models.py (#168)
c8f8bcc62 : Remove dependency of bundled program from testing:tensors_are_close (#165)
8ede8c932 : Add mapping for bundled program in setup.py (#164)
72a0cc1e5 : Add CMake selective build examples (#167)
c63197fbd : Fix serialization due to OSS schema changes (#155)
f1b3eb9b9 : Create models via model base and factory (#143)
390544be3 : Fix broken link (#163)
1f88ff4e4 : Update doc and code to run quantized model (#157)
b9f37cc68 : Remove redundant prim ops (#160)
7ce236929 : Build xnn_executor_runner with cmake (#153)
68f6c20ae : ET XNNPACK performance showcase
822574b35 : Some fix for oss (#156)
f8a890a7c : Fix deps
70a797339 : Make kernels/portable/cpu compatible with C++11/14/17/20 (#128)
366247830 : Make the core runtime C++11/14/17/20 compatible (#127)
ae8c6d92b : move codegen pybindings out of public pybindings lib (#131)
f9163a6b1 : Enable MV3 in xnnpack_examples (#149)
68a1a0d51 : Add selective build examples for buck2 (#152)
56fc73fa0 : Decentralize schema lib cmake build file (#151)
cca86d01a : Decentralize portable lib cmake build file (#150)
7299bf3da : Add lru_cache for node comparison (#148)
9ede8132d : Fix XNNPACK Example for MV3 (#147)
ec4d4e8e2 : Create an initital recipe for xnnpack optimization (#145)
0f6bae0db : MobileNetv3 FP32 + QS8 (#142)
41125f9a7 : add unsupported module list for partitioner (#139)
76b13851b : Use IDs for placeholder/outputs of implicit q/dq nodes (#141)
80ed4560c : handle static ints and floats (#140)
e4253779f : Revert D48588121: Add support for mul and mul_relu
06da346b3 : Add support for mul and mul_relu
17fa8f8ee : Fix timm installation (#144)
b04fec240 : Update the example flow to include emit program directly from lowered module (#134)
98c1ec9af : Emit lowered module (#89)
da993fd7a : Add `transform_exported_program` stub to ease renaming of `transform` method (#135)
e6bdec2f3 : Attach debug handle to control flow modules
6b350bd4d : Make sure XNNPACKQuantizer works with the pre_dispatch=True
d61d5b794 : Add XNNPACK with and without quantization tests to CI (#123)
036b933e7 : Reinstall buck2 arm64 binary on MacOS if the wrong version (x86) is installed (#129)
1c92891a8 : Remove strange helpers for python -> tensor conversions, and at -> et tensor conversions (#121)
56f685f56 : Add proper error message when shared library is missing
02fbd7495 : Fix fbcode buck run quantize example
35db16ddc : Fix xnnpack demo (#122)
3794945ad : add typing to pybindings (#120)
5a8c428f1 : CONTRIBUTING.md file with code style guidelines (#114)
927bda15a : Rearrange placeholders (#117)
47cd8a824 : Run quantized example models (#118)
792388f92 : Cleanup operator class (#113)
8683882a4 : Fix OSS build (#116)
da1f29b1a : example with mv2 (#64)
236675dd0 : delete unused pybinding function
fbb960bf6 : delete stuff from pybindings.cpp (#110)
f4ed04fcd : Pybindings Module is more flexible with non_const set ups (#109)
ef94eedb0 : delete load_from_buffer shared ptr api from pybindings module (#97)
eaf006e8d : MobileNetv2 FP32 + QS8 Test (#82)
ab3d408bc : Fix Long Term Quant Testing (#78)
def08cf71 : fix permute copy name (#80)
b2b9766be : add preprocessor flag for enabling dq linear in runtime (#66)
e13df91d3 : use Method not ExecutionPlan or Executor (#98)
2b85bcbd0 : Dtype compliance: clamp (#83)
1f903c464 : Enable ResNet-18 (#107)
2d3685ff1 : Introduce a new interface in lib.py and the ETInspector class (#111)
85c2b4c09 : delete get_param and is_param from executorch codebase (#108)
3e856c79f : Remove all ET_MMAP_SUPPORTED hacks (#102)
511809787 : dont require mlock in pybindings (#99)
fc64ccc2b : Add schema version to serializer/deserializer (#107420)
1487c0a16 : Enable quantized ops in fbcode (#101)
85605df42 : CMake support quantized ops (#103)
95c26f43b : Extract out CMake selective build and codegen logic into functions (#100)
45357a818 : fix flaky unit test (#105)
16497820c : Enable resnet50 (#104)
546663838 : delete some stuff from pybindings (#96)
b1b8a56e4 : Update toplevel graph signature (#95)
800c491c3 : Split out :executor_runner_lib (#93)
9af7d57e6 : Deserialize operator args even if operator wasn't found
ddde60b80 : Support graphs which return get_attr nodes directly as output
8a4d87981 : export_example should be python_binary (#91)
73c6cb3c8 : Add Inception V4 model to examples (#86)
4885608aa : Add timm package to CI (#88)
0ece1961e : Register quantized ops into quantization example (#85)
511a85d1f : Refactor pull workflow to avoid code duplication and add quantization export test (#84)
6c0dbd96c : Add EXPECT_CLOSE macros with tol arguments
cfd36a6ad : Add Inception V3 model to examples (#74)
50f89710d : Implement avg_pool2d
91bf785b3 : Change docs about kwargs
8043bc1b3 : persist constants file (#63)
abeb4dd30 : move qnnpack utils under xnnpack (#65)
3cf11270f : Fix CPUINFO dependency for xnnpack/threadpool (#62)
961c7a35d : Add more sample models to OSS CI (#76)
b26db1d76 : Remove fb dependency by deleting sdk.py and replacing its usages with lib.py
db8303b75 : Move specific quantizer related things outside of main quant code base
b5a7f81b2 : Small fix to addmm broadcasting implementation
ce5d965d1 : Fix accessing nullptr in some ops
19a599433 : Remove direct calls to serialize_to_flatbuffer
8079496e0 : Consolidate serialization tests
9b13bf087 : Remove call to serialize_to_flatbuffer from executor_backend_preprocess
fba265542 : Remove direct calls to serialize_to_flatbuffer
ffc28e61c : Remove direct calls to serialize_to_flatbuffer
1728279dd : Remove exir/scripts
8a85ced2a : Dtype compliance: arange
87cf2bd57 : Dtype compliance: index & index_put
102fe5394 : Modernize indexing kernels
16f3933b4 : Add ExecuTorch docs TOC (#75)
5151eb6d1 : Fix log softmax shape compliance
d7dce4a48 : Implement max_pool2d_with_indices
d8bb5d6e2 : Refactor + Migrate convolution to non-fatal failure pattern
8a389229f : Revert D47573238 - "[ET][Portable] Dtype compliance: clamp" (#77)
d70822283 : Update fixtures causing failed tests
ee98531dc : Add missing TestUtil headers
961815af7 : Standardize op_empty_out helper signature
78925641e : Special handling for op_cat for 1D empty tensor
e6deec6d5 : Add export_to_edge util
d0cb85129 : Add SymIntToTensorPass in to_edge
e05bf2a66 : Fix CI job (#73)
f396b810d : Fix cat shape compliance
128c27aee : Dtype compliance: clamp (#69)
23cfcaaa6 : Remove setter for graph_module
950ce1b83 : Fix ScalarType -> C type conversion in quantize_per_tensor
3644b60b5 : Dtype compliance: native_batch_norm
b35b665cb : Dtype compliance: native_layer_norm
8aa5db494 : Add ceil op
e7aebe971 : Pattern: unary_ufunc_real
bf33da432 : Dtype compliance: index_select
076fd0935 : handle case when arg is None
ad7829121 : Add custom ops test to CI job (#72)
cd9db9c5c : Update delegate docs
a6f628a2e : Add wav2letter model to examples (#71)
d85b2ad14 : Update custom ops readme
814414cd9 : Add macos_kernel_link_options for MacOS (#70)
95810861b : Add buck2 example for custom ops
d1563b687 : Executorch Model Size Analysis Tool (#68)
1363a0e54 : Update quantization related section in readme
8ec4f8f12 : Update documentation for submdule sync
f66d6876e : Add cmake Linux build and test job (#52)
30a4b36f4 : unblock list optional tensor
54412bed7 : delete py_bin for quant. Add message to indicate succesful finish (#61)
43cda4828 : Modernize matmul kernels
09e8e195e : clean up Program api
49ccb7092 : Hook up KernelRuntimeContext.fail()
6e0e0ccf2 : Remove unncessary exec_aten.h include
642a926a6 : Fix cumsum shape test
64ee556b1 : Fix import for XNNPACKQuantizer
d9361aa66 : Standardize native_batch_norm helper name
af2c4952c : Load dylib for custom ops if platform is MacOS
5951d838f : Minor fix in README.md
04f4977d6 : suppress errors in `executorch`
76e2e12c5 : Dtype compliance: split_copy
88eda7b28 : Dtype compliance: unbind_copy
91dbce70d : Dtype compliance: slice_scatter
97d6b65e2 : Dtype compliance: scatter_add
a82d03c1c : Dtype compliance: copy
af67e28ff : Dtype compliance: convolution
827388da9 : Dtype compliance: select_scatter
8c77ef85b : Add convert template
bd95cf984 : ATen Compliance: masked_fill (Dtype, Shape & Broadcast)
848b8bd1b : Patch cmake build on mac
c35171784 : Bump flatbuffer version
3817862d9 : Add verification for quant examples and update README.md to include quantization
6851268af : Remove emformer model
889e46397 : Fix more shape tests for 0-D tensor
3c2adcb2a : Special handling for 0-D tensor
7108fb331 : Move extension/fb/threadpool into backends/xnnpack/threadpool (#55)
c932ebd08 : Update ExecuTorch cpuinfo generate-wrapper script (#58)
060bb5ba4 : changes to run custom op script in oss
5fe3df22f : Migrate existing check functions to return bool pattern
9cb9ba043 : Log mismatched values during `tensors_have` check functions
48bb99fc5 : export_to_ff -> export_to_pte
7911ea717 : Add support for backend dialect in serde
4090d6c0f : Remove redundant pt2_mode=True (#54)
1bd137a1e : Update 00_setting_up_executorch.md (#59)
711cf6ce1 : Fix test boltnn test failure.
eba90b719 : Updated instructions to genenrate and run example models
5d2e43cfb : Add quantized ops to Executor Runner lean mode deps
dfc2117c3 : Bump torch version
f8d8ac68f : Clean up output validation helper
4720677bb : Clean-up on by default flag] prefix_/_5
015954e9b : Lookup for libtorch.so and create buck2 target
3b0533ff3 : Add back selective build to custom ops AOT lib
5f8aa6db8 : Let custom ops registration code only include ATen headers
9bc3662b4 : Add torchvision_vit model to the examples (#33)
f529efc81 : Install torchaudio as dep for examples
974c98f60 : More ff to pte renaming
61dded953 : Fix image path
6506314c5 : Re-sync with internal repository (#57)
f245ac7b7 : Support preserving calling convention to some modules.
e023d8f43 : Add pattern + replacement for Embedding with padding_idx
0b317e573 : Add pass to properly check if q and dq nodes are implicit (#49)
67b74cfe2 : Remove unexpected failure of Upsample2d emit test
3b8ea6064 : Update linker flag
215e48a9c : Add ios example app
a9d72654f : fix reduce_util for 0-D tensor
1db9e08f4 : Clarify the output of quantization flow
49ef23cb5 : Update instruction for cross compilation
f61b60027 : map third party deps between OSS and Internal
e3da931d3 : add XNNPACK (#45)
73b936c66 : add FXdiv (#48)
53cda68aa : add fp16 (#46)
1e9765839 : add cpuinfo (#44)
d66b7354a : add pthreadpool (#47)
d62a9349e : reduce_util: clamp the apply fn start/end
3a92785f6 : Update torch version and torchvision version
7122df57b : Introduce non-fatal check functions
e5f786979 : Rename `context` to `ctx` in kernel code
1fd18b5b7 : Add emformer to ET examples
74cbddeb2 : add op_empty
c81360568 : Fix op_t_copy
e346b8227 : add get_execution_plan(name) helper to Program
a3ecc2dc9 : Fix small expand_copy bug
a9aa9b534 : Not use util:mark_memory_as_unused
4a630c4d9 : MethodMeta (#42)
d1dd975f2 : Add a pass to replace memory_format ops
1ac53eeb3 : Introduce dim_order edge dialect op for .to()
7b330cf48 : Introduce dim order utils
b66faf27c : Update pytorch 2.0 export quantization doc
418dbbfb5 : Explain what unlift config does
578a81883 : Another fix for test_backend CI
336bbd2d4 : Patch changes needed for ios/android build
26f9efa16 : Support custom mem_id in MemoryPlanningPass algos
5462df750 : Add CMake build example for custom ops
217ddba0e : grab parameter from node
bd9e0b141 : fix flatbuffer deps for xnnpack delegate
4fc157aa6 : Fix type inconsistency in EValue
311164e6a : Make Resnet work with rexportable flow (#40)
d7d91a29f : Update the torch version (#50)
d3c5f7860 : Temp fix oss ci
906cd6d05 : Provide CMake build for custom ops
f9a1fd02c : Add tomli as part of pip dependency
a8cab0bce : Add OSS CI workflow for MacOS M1 (#32)
231e4e9bf : ignore editor temporaries
943e4ec9f : Fix broadcast semantics
333dfb4ba : Re-sync with internal repository
afad46710 : Refactor method.h to hide forward-declared flatbuffer types
be659019b : move backend aot api to exir
15389107e : Update exir.pass_base to use export.pass_base
6fab1e598 : Try remove no_grad in exir capture.
a81505b0d : Fix aten-mode copy_tensor_data()
14711d3d5 : Fix model test timeout issue
0fa2da2e3 : Fix non-kernel refs to the deprecated Tensor::data_ptr()
0b576d97f : Move clients.bzl under fb/
416d114f3 : Fix minor CMake issue
ed1e3b407 : Use lang_compiler_flags to avoid breaking C libraries
1e996d3b9 : Fix lint warning
9870e11b0 : Add custom ops registration to examples
789f4ce04 : Preserve two folders for ios build and android build
524fd6176 : Remove unused exir dynamo configs.
77175d790 : Rename private CMakeLists.txt vars
58b664f51 : Add cmake_build_system.md
07f031b98 : Fix export config in examples (#34)
7cffa8685 : CMakeLists.txt for executor_runner
5c56a7f77 : extract_sources.py generates cmake sources lists from buck2
b21db6cdb : Fix build breakage (#36)
4a5e147f8 : avoid torch.ops.load_library() calls
15c972497 : delete jarvis
b47b593e4 : add support for symbool and symfloat
c6ccaab38 : Remove uses of `data_ptr`
1418b1501 : Use a preprocessor flag to optionally depend on dynamic linking header
0c330f8a3 : Split out custom ops lib from executorch_generated_lib
5aba402ee : Generate etdump from Executorch BoltNN app
03f9b9cce : Clean-up on by default flag] prefix_/executorch/
e2682a850 : schema.fbs -> program.fbs
4179cc8df : .ff -> .pte
2582b911d : fix test_backend unit test
cd8f23278 : Support get_attr in lists that are inputs to nodes
13ca8c91e : Add support for edge dialect ops in exir/serde
f8a19a758 : Add sleef as a direct fbandroid_platform_deps of operator targets
9c9442edb : groupkey_0b8c7657-2719-4624-bde9-79fbdb590e5e_2
ac466663c : Delete dtype tests
bfc8ef7d9 : Add missing TestUtil headers
666b2c707 : Replace TEST_F with TEST
2093409ab : Standardize helper names
baf07cfd4 : Replace exir cond with torch cond
750756b98 : suppress errors in `executorch`
005429d43 : Untie the lifetime of returns and the lifetime of module in pybindings
e822e9769 : updated backend delegate doc for lifted params
f354a9635 : new backend tests for lifted graphs
fb602f802 : Remove stale google colab link
585254ac5 : Update Pytorch pip-install s/nighlty/specific_version
3351eb438 : setting_up_executorch cleanup
ee5ffe3e5 : [ET][Docs] Update setup_et and examples/README.md to use `executor_runner`
8d28f1cda : Memory format WIP note
f747ff953 : Remove exir's patch for capture_scalar_outputs
4b1d473b6 : Backend dialect clarification on opset
8973bfabc : Update colab doc link
f48c314c9 : Pin PyTorch nightly in Executorch Linux CI (20230731) (#31)
6a9fe14c4 : Initial commit

+- Project: platform/external/f2fs-tools

4245bb0 : Define *_f2fs.recovery modules
1734b84 : Define fsck.f2fs.recovery
ad3736c : mkfs.f2fs: remove IMMUTABLE bit
05fde8e : f2fs_io: add more options for randread test
036af19 : f2fs_io: support 1GB dio buffer
9206c3b : f2fs-io: unify default block size
2893f7c : mkfs.f2fs: adjust zone alignment when using convention partition with zoned one
b7b6cac : fsck.f2fs: fix incorrect parent blkaddr when adding lost dots
6617d15 : f2fs-tools: use stdbool.h instead of bool
43d6b66 : f2fs_io: {set,clear}flags: support nocow flag
bce0e1e : f2fs_io: {set,clear}flags: support immutable flag correctly
bd9d283 : mkfs.f2fs: don't trim on aliased partition
83f090d : f2fs-tools: remove linux/fcntl.h but define the hint directly
7326e5a : f2fs_io: choose MB/s instead of MiB/s
312de6f : f2fs_io: support fadvice for read
0cd64a7 : f2fs_io: add fdatasync
8cc4e25 : mkfs.f2fs: add device aliasing feature
c35fa8c : mkfs.f2fs: change -c option description
5c06793 : f2fs-tools: add write hint support
48fb947 : fsck.f2fs: remove redundant i_ext.len set to zero
4ce6d22 : fsck.f2fs: support to add missing '.' or '..' dirent
82b59c7 : fsck.f2fs: fix to detect double '.' or '..'
662e619 : mkfs.f2fs: use correct endian conversion for writing lpf inode
9ad0ad3 : inject.f2fs: add dentry injection

+- Project: platform/external/fbjni

440a427 : Include platform/external/fbjni
536c56a : Initial empty repository
474795f : Prepare version 0.7.0 (#100)
cdd876d : NDK 27 & Support 16kb page size for Android15 (#98)
72e44a2 : Fix CI broken for host.gradle (#99)
e9af44f : fbandroid/libraries/fbjni/test/FBJniTests.java
95a6972 : Remove unused exception parameter from fbandroid/libraries/fbjni/test/jni/inter_dso_exception_test_2/Test.cpp
4ec7ce4 : Update for libc++ 18
231bed1 : Remove libstdc++ support
968e381 : Apply clang-format 18
cd6e61a : Log Java stack trace for uncaught JniExceptions
17d30bc : Migrate google java format from 1.7 -> 1.21.0 (#95)
8862b32 : h2kt - support set
5587a7f : Prepare upcoming version of fbjni
1cf7637 : Prepare version 0.6.0
3bf8172 : feat: Bump Android NDK (#94)
0454714 : feat: Add `bigEndian()` and `littleEndian()` fields to `JByteOrder` (#93)
8b45d9c : Add dynamic_fn_ptr attribute to dynamic fn ptrs in M4A
00c8672 : Start enabling -Wextra and -Wconversion in "rn_xplat_cxx_library" (1/2)
0f2b807 : Use a separate executor/thread for ClientSync
08a04f0 : Bump Gradle minSdkVersion (#91)
d128913 : xplat/arfx/atlas/core/presenter/java/jni
59461ef : Fix missing "#include" causing GitHub Actions failure (#92)
408af4e : fbjni | Fix lints exposed by clang-tidy
12995ec : xplat/arfx/atlas/core/presenter/java/jni
bd94aae : Bump to NDK 26 (#89)
52a14f0 : fixed -Wdocumentation warning
4e00822 : Back out "Fuzzing harness for fbjni utf16toUTF8"
070e640 : Fuzzing harness for fbjni utf16toUTF8
66fa297 : Prepare 0.6.0-SNAPSHOT and fix double closing issue
8dc225a : 0.5.1 Correctly publish the -java-only artifact and the -headers artifact
7621c81 : Add missing publishing properties to gradle.properties
c58c8ee : Fix publishing for 0.5.0
e058590 : Bump Version to 0.5.0
147c95b : Format fbjni with CLANGFORMAT
f6969e1 : Format fb-ffmpeg with CLANGFORMAT
b8a7f9e : fbjni - Add getApplication() to AContext
22a9cdb : compile/target SDK to 33
cc92b5d : NDK to 25.1.8937393
43d1b7d : CMake to 3.22.1
49ef4a5 : AGP to 8.0.2
c247571 : Cleanup and simplify publishing
19041c0 : Use namespace and remove unnecessary manifest file
cf86584 : Modernize the CI of fbjni
4407d37 : Bump AGP to 7.4.2
94c44af : Bump CMake to 3.18.1 (#83)
53ff025 : reland: [fbjni] Keep all the HybridData's members
91f5b98 : Bump version to 0.4.1-SNAPSHOT
a4be46c : Revert D46769581: Keep all the HybridData's members
d08a84e : Keep all the HybridData's members
83cc597 : Prepare FBJNI version 0.4.0 (#82)
8efea4d : Add getMessage to JThrowable and getSimpleName to JClass (#78)
521c0d1 : Remove global initializer in fbjni
423ecdd : fbjni | Remove usage of global constructors in Exceptions/Lyra.
568b012 : Fix CI
3036463 : Remove remaining usages of fest
11791b4 : fbjni | Add 'getPackageName' to AContext.
378aeb4 : Bump SoLoader version to 0.10.5 (#119)
5eacdd1 : Fixes for -Wextra-semi,-Wextra-semi-stmt
6015a2f : fix release build again
428f535 : Do explicit cast from size_t to jsize.
40bdad2 : Bump SoLoader version to 0.10.4
d4c43e3 : fbjni | Fix includes when using FBJNI_DEBUG_LOG_REFS.
0068c03 : Back out "Temporarily disable stack traces for libc++ arm64"
f379750 : Restore "[lyra] Temporarily disable stack traces for libc++ arm64"
250d889 : fixes to support -Wstring-conversion compiler warning
de205a3 : Back out "Temporarily disable stack traces for libc++ arm64"
2554f9b : Implement ReadableNative*::mapException without dynamic_cast
0464ece : v0.3.1-SNAPSHOT (#75)
6420543 : Use new publish action (#74)
34535fc : Use new credentials format (#73)
ebe4309 : Revert D32593957: v0.3.1-SNAPSHOT
8aa8ffa : v0.3.1-SNAPSHOT
0c14863 : v0.3.0
af4c8e6 : Bump ndk (#72)
befc998 : Remove sdkmanager (#71)
aacc5d8 : Upgrade deps (#70)
dc673a2 : Create a new method to return the cached environment and whether we are in the middle of attaching
11ea7b1 : Make sure TLS data is initialized before calling attachCurrentThread()
8cd8713 : Default branch renames (#66)
8b5aa9e : Add a default copy assignment operator since there's a default copy constructor
bb2910d : Add a default copy constructor for JMethod
fc4d127 : Revert D29433693: Fix JNI leaks
a98ec18 : Fix JNI leaks
089df7f : Add JHashMap::create(int)
f730250 : Add simple bindings for container construction
d31edcb : Reorder DocTests.java
a031656 : v0.2.3-SNAPSHOT
7e1e1fe : v0.2.2
e7a5554 : Don't create separate distribution for headers (#55)
f107a1f : v0.2.2-SNAPSHOT
7af8406 : v0.2.1
8c12e95 : Publish headers archive again (#54)
288c0d6 : v0.2.1-SNAPSHOT
76b6860 : v0.2.0 (#52)
b71ac9f : Remove the package exclusions (#51)
dae99dc : Update maintainer docs (#49)
d521fc5 : Run release on release creation (#48)
80f1552 : Add release GitHub workflow
741158a : v0.1.1-SNAPSHOT
db683d9 : v0.1.0
e36da74 : Migrate to Maven Central (#47)
1417653 : Bump nativeloader (#45)
984fdda : Add updated integration instructions (#46)
87e261c : Enable prefab native Android distribution (#43)
fc7d970 : Upgrade Gradle and dependencies (#42)
b592c55 : add install target, which is expected by PyTorch. (#41)
346b771 : Apply BLACK
933d3a5 : Random template fix
c6cd3dc : Typedef descriptorType to avoid compiler bug
905db88 : Avoid temporaries in toCall
53059f9 : Fix const handling in attachCurrentThread
a8dabb5 : Fix makeNativeMethod on MSVC
e92c5f6 : Fix test builds under Windows
8e9b66e : Fix test build on MSVC
2c94078 : Fix lyra build with MSVC
269005c : Tiny formatting fix in lyra_breakpad
8a0c112 : Use more explicit reference type
3efa7be : Use more explicit type names in ElementProxy
6bea888 : Fix MSVC build for SimpleFixedString
1351943 : Disable unknown exception type handling on Windows
c745f67 : Upgrade to newer Gradle for OSS build
d0541b4 : 0.0.5-SNAPSHOT
bcbd0b1 : v0.0.4
01a67fd : Try to get a name for unexpected exceptions.
d52535b : Expose function to determine if a global JVM is available
814bf29 : Fix crash in Lyra when throwing an exception without a destructor
b288162 : Fix thread names for android traces
7a17f48 : Make detail::ElementProxy's copy ctor explicitly defaulted
3e3f712 : volatile mNativePointer
501e6df : Temporarily disable stack traces for libc++ arm64
6e39301 : Eliminate use of alloca
a0022e6 : Get lyra building on Windows (but not really working)
adc9439 : Reorder some includes
64ccb9d : Fix linker issues on windows
77ebe3a : Support thread-local storage on Windows
bf9f1ea : Use a typedef for thread-local storage keys (pthread_key_t)
982cc3f : Fail more clearly when using Windows's jni.h
31cbe39 : Make WrapForVoidReturn perform conversions as well & rename it
586623c : Outline more code from cthis
c4ffa64 : make deleted function public (#30)
9450d23 : use nullptr in Hybrid.h (#29)
c3ca020 : enable CI for pull request (#31)
056c140 : Avoid conversion in return from getHolder
3b3717c : Extract getHybridDataFromField helper for cthis()
5fb1abb : Coalesce isHybrid and mHybridData field statics
62fdcf7 : Create non-entry-point variant of FunctionWrapper
8d3d245 : Delete JObjectWrapper
c82ae13 : Remove puzzling duplicate call in ReprStorage<Repr>::jobj()
ae74427 : Trim a few more bytes in getHolder
9d89a47 : Make WrapForVoidReturn take a function pointer instead of a function template argument
de6be4c : Switch CI to use Clang (#28)
00330b4 : Fix build with gcc (#27)
170cfb6 : Add move ctor from other type to weak_ref
0771762 : Add cross-referent-type move ctor to basic_strong_ref
8ccd5c1 : Add detailed checking of reference counts in testAssignmentAndCopyCrossTypes
a8ade20 : Reduce duplicate codegen for getHolder
8ef050c : s/NativeMethod/JNINativeMethod/
f660280 : Eliminate NativeMethodWrapper & make exceptionWrapJNIMethod constexpr
e888eb0 : Eliminate std::string in makeDescriptor
a85423b : fbjni requires c++14 now
aed5c91 : constexprify JMethodDescriptor
aed54f5 : Implement exception stack traces for libc++
482ae6d : Update android_setup.md (#26)
b5346bd : Update fbjni version number
f908b58 : Reduce minimum CMake version (#24)
70d5c3f : Fix warning about const return type
a5ff63a : Unify implementation of ReprAccess
88906d3 : Remove ApplicationManifest
4fd223c : Add a flag to disable building tests
8cfb2d4 : Add java-only upload step to maintainer docs
ba85e01 : Set up java-only JAR publishing
ac0de60 : Remove litho references
f71e03b : Add ApplicationManifest (#23)
e109650 : Revert auto formatting changes in DocTests
8b1694b : Add copyright to build_quickref.py
3c6041b : Update documentation (#22)
a9de8b2 : Support JArrayClass without ::javaobject and document its use (#20)
c493524 : Support getMethod without ::javaobject or local_ref in return (#19)
bf80462 : Support get[Static]Field without ::javaobject (#18)
d1a61fe : Support {static,dynamic}_ref_cast without ::javaobject (#17)
54a993d : Extract a helper constexpr function for templates: IsJavaClassType (#21)
ad76f41 : Daily `arc lint --take GOOGLEJAVAFORMAT`
d2a2bb2 : Add runnable quick reference documentation (#14)
22696f4 : Yearless license header
3e3f9ad : Add pthread flags explicitly (#16)
a99eebc : Remove unnecessary ::javaobject in ref types (#10)
629f526 : Use a more specific return type for autoboxing (#11)
311dbd8 : Fix argument name in make_jstring (#12)
fc1c94a : Add templates for issues and PRs
0c19650 : Add missing method into fbjni for pytorch mobile.
3aee97a : Fix copyright headers
3dcc97d : Document release process
9c14e1a : Ensure that ANDROID_HOME is set
2e0188d : Run C++ and Java tests in GitHub action (#8)
b8e3418 : Set up basic Github Actions workflow (#5)
393462f : Only download gtest if library can't be found
16c10cc : Download gtest on demand
69a94e7 : Add Darwin case
a823574 : v0.0.3-SNAPSHOT
af6cb4d : v0.2.0
3f0ad0f : Run fbjni host tests in Open Source build
0c6a1e5 : Fix soloader reference
6f1663c : Port fbjni from SoLoader to NativeLoader
b0e897e : Add initial README
64429fb : Use a separate preprocessor symbol for reference logging
d3eca00 : Fix some printf format strings for 64-bit
6b78a98 : Move fbjni tests into fbjni directory (#2)
5aefe56 : Support host builds for fbjni
67b3baa : Adopt Contributor Covenant
b7bf503 : Adding Contributing file (#1)
91830a7 : Fix fb4a land blocking test
a41520f : Set up headers publication
8c4e251 : Set up Maven Snapshot/Bintray integration
ef2a6f2 : Generate separate AAR with header files included
48b23b6 : Initial commit

+- Project: platform/external/fec

b36cef1 : Make bpfmt happy.

+- Project: platform/external/federated-compute

7785ba2 : Move UploadLocation to odp/common.proto.
cfe10cb : Remove unnecessary imports from exception reporting proto file and update the comment to reflect the updated path based on recent discussion.
8b195b5 : Switch Federated Compute to canonical ABSL.

+- Project: platform/external/fhir/spec/r4

9beebd7 : Third-Party Import of: R4 FHIR spec
2d8c06d : Initial empty repository

+- Project: platform/external/flashrom

c08865ab : UPSTREAM: doc: Change link from gitiles to github
ef60e45e : UPSTREAM: linux_mtd: fix build with clang >= 19
66cf4389 : Bring back identifier's fields into METADATA
c92d4913 : Bring back identifier's fields into METADATA
38caa74f : UPSTREAM: flashchips: add GD25F128F
7d5e834e : UPSTREAM: doc: Add chip models support into recent development
3dfdfe94 : UPSTREAM: writeprotect: Fix inaccurate return code
dc635367 : UPSTREAM: VERSION: Change name pattern to match tag name from now on
49c91b78 : UPSTREAM: erase/write: Deselect all smaller blocks when large block is selected
162746d6 : UPSTREAM: flashchips: Add Support for XMC XM25LU64C
4ab5318b : UPSTREAM: flashchips: add GD25F256F
6bd6d4b4 : UPSTREAM: flashchips: add GD25F64F
d1b644a8 : UPSTREAM: erasure_layout: Erase larger block only when all sub-block need erase
e252db48 : UPSTREAM: doc/contact: Add note to IRC section and calm down the formatting
97d4ef7d : UPSTREAM: ichspi: Probe opcode in POSSIBLE_OPCODES[] as well
3bb69daf : UPSTREAM: ichspi: Merge spi_master implementations for Intel ich
05d781e9 : CHROMIUM: file_lock: Fix lock file path checking
83981789 : UPSTREAM: ichspi: const-correct POSSIBLE_OPCODES[]
e1f73216 : UPSTREAM: ichspi: Change the opcode position for reprogramming on the fly 2->4
df3f023f : UPSTREAM: build: never install cmocka
0b7ab947 : UPSTREAM: doc: Convert doc for ft2232_spi
af020055 : UPSTREAM: doc: Add manpage item for nicintel_spi
23c30c52 : UPSTREAM: dummyflasher: Enable higher frequency emulation, add docs and tests
4f72285b : UPSTREAM: en29lv640b.c: Fix the comment about chunksize
708a0660 : UPSTREAM: dediprog: Fix comment about usb transfer size
d4bc0111 : UPSTREAM: Fix FEATURE_NO_ERASE chips and add test for them
26ef6fe1 : UPSTREAM: erasure_layout: Fix init_eraseblock segmentation fault
7e9fa4bf : OWNERS.android: Add bernacki@google.com
ad4d502c : Add init.rc script with flashrom data directory setup
e8041808 : UPSTREAM: ichspi: Add RaptorPoint PCH support
081f524f : UPSTREAM: reduce DELAY_MINIMUM_SLEEP_US to 100 by default
7d4b8dff : UPSTREAM: ch347_spi: Add spi clock frequency selection
47bcf245 : UPSTREAM: flashchips: Update test status for Fudan FM25Q08 and FM25Q128
789633e2 : UPSTREAM: flashchips: Add definitions for Fudan FM25Q04, FM25Q64 and FM25Q128
e82c1663 : UPSTREAM: stlinkv3_spi: Mark STLinkV3-Mini not working
a337cef0 : UPSTREAM: chipset_enable.c: Use PCI_ACCESS_ECAM to access pci register
7030397d : UPSTREAM: doc: Remove leftover reference to building_with_make
c1ab7468 : UPSTREAM: flashchips: add GD25LF512MF model
016997a3 : UPSTREAM: flashchips: adding GD25LB512MF/GD25LR512MF
b989a496 : UPSTREAM: flashchips: add support for MX77U51250F chip
1fd7ab4c : UPSTREAM: flashchips: add GD25LB512ME
a3b1a83d : UPSTREAM: doc: Add doc describing release process
202c6154 : UPSTREAM: Remove the Makefile
a0652621 : UPSTREAM: flashchips: add GD25LF256F
edab204f : UPSTREAM: doc: Convert the doc for MSI JSPI1
2fbae241 : UPSTREAM: doc: Fix the link to In-System programming doc
93f8e9ee : UPSTREAM: doc: Add overview doc for user_docs
d9184286 : UPSTREAM: doc: Add doc for buspirate programmer
99532389 : UPSTREAM: doc: Add doc for in-system programming
6e7216b2 : UPSTREAM: doc: Add page with misc notes and advice
970fb90b : UPSTREAM: MAINTAINERS: Add David Reguera for Bus Pirate
8829215c : UPSTREAM: flashchips: add support for chip model Winbond W25Q32JV_M
81530185 : UPSTREAM: doc: Add manpage entries for nic3com, gfxnvidia, satasii
4eec29b6 : UPSTREAM: flashchips: Add Support for XMC XM25QU512C/XM25QU512D
bf1cfb28 : UPSTREAM: flashchips: Add Support for XMC XM25QH512C/XM25QH512D
2fbfa873 : UPSTREAM: flashchips: Add Support for XMC XM25QU256D
60ecc17e : UPSTREAM: flashchips: Add Support for XMC XM25QH256D
6665769d : UPSTREAM: flashchips: Add Support for XMC XM25QU128D
c249fb8c : UPSTREAM: flashchips: Add Support for XMC XM25QH128D
273de1aa : UPSTREAM: flashchips: Add Support for XMC XM25QH64D
2d5cae03 : UPSTREAM: flashchips: Add Support for XMC XM25QU32C
4794b30a : UPSTREAM: flashchips: Add Support for XMC XM25QH32C/XM25QH32D
a1e58d8c : UPSTREAM: flashchips: Add Support for XMC XM25QU16C
4d81b1f4 : UPSTREAM: flashchips: Add support for MXIC MX25U25645G
c2d30b30 : UPSTREAM: how_to_add_new_chip: Add a section for feature bits and WRSR handling
6ec5e65d : UPSTREAM: VERSION: Update version to 1.5.0-devel
b5a0e07e : UPSTREAM: doc: Add download info to release notes 1.4.0
5c1971d2 : UPSTREAM: flashchips: Add chip models GD25LB256F/GD25LR256F
35a38d98 : flashchips_crosbl: Mark GD25LB256F/GD25LR256F as duplicate
1d38c63e : UPSTREAM: VERSION: Update version to 1.4.0
4bb118cc : UPSTREAM: doc: Release notes for version 1.4.0
11f59e54 : UPSTREAM: meson: Fix project name as flashrom
023c5e0d : UPSTREAM: VERSION: Update version to 1.4.0-rc2
f102ba81 : UPSTREAM: tests: Move SKIP_TEST macro to common header
6354e3e6 : UPSTREAM: libflashrom.map: remove non-existent functions

+- Project: platform/external/flatbuffers

937612ae : Make bpfmt happy.

+- Project: platform/external/fmtlib

4046f972 : Fix -Wmissing-noreturn warning (#4194)
6bdc12a1 : detail_exported -> detail
786a4b09 : Cleanup fixed_string
2cb3b7c6 : Update README.md
e9cba690 : Update README.md
02537548 : Cleanup an example
c68c5fa7 : Test FMT_BUILTIN_TYPES
22701d5f : Address build failures when using Tip-of-Tree clang. (#4187)
e62c41ff : Conform `std::iterator_traits<fmt::appender>` to [iterator.traits]/1 (#4185)
18792893 : Silencing Wextra-semi warning (#4188)
c90bc918 : Bump actions/checkout from 4.1.6 to 4.2.0 (#4182)
c95722ad : Improve naming consistency
db06b0df : Use countl_zero in bigint
b9ec48d9 : Cleanup bigint
3faf6f18 : Add min_of/max_of
d64b100a : Relax constexpr
ff9ee046 : Fix handling FMT_BUILTIN_TYPES
1c5883be : Test nondeterministic conversion to format string
cacc3108 : Don't assume repeated evaluation of string literal produce the same pointer
fade652a : Require clang >=15 for _BitInt support (#4176)
96dca569 : Module linkage fixes for shared build (#4169)
891c9a73 : Cleanup format API
9282222b : Export more
e5b20ff0 : Deprecate detail::locale_ref
ff922235 : Simplify locale handling
80c4d42c : Cleanup format.h
3b70966d : Add width and alignment support to error_code
05226c4b : Remove type_identity
c283b458 : Cleanup format.h
fe79932c : Fix conversion warning on chrono.h (#4170)
23fcf194 : Apply clang-format
3f296e3d : Workaround clang-format nonsense
a197a994 : Add member format_as for std
6d43c755 : Fix a typo
1f87b1c5 : Use fmt::formatter specialization for std::reference_wrapper to avoid undefined behavior (#4164)
ed8f8be7 : More chrono padding (#4161)
55a0a9cd : Cleanup pragma detection
5c926d9f : Remove FMT_UNCHECKED_ITERATOR
8b024662 : Remove unnecessary inheritance
2f1424be : Simplify handling of arrays
239aa691 : Remove unwrap_named_arg
497df6db : Remove formattable
a25e594f : Remove range_mapper
503dff93 : Simplify has_formatter
3374a95b : Simplify has_formatter
0e62e5dc : Simplify has_formatter
7ce01397 : Sync value ctors and type mapper
07e70151 : format std::reference_wrapper
41977277 : Improve handling of unformattable args
527e98e3 : Remove unformattable
8a19b2db : arg_mapper -> type_mapper
e97df46a : Cleanup type mapping
39f1e090 : Remove FMT_MAP_API
d832830f : Cleanup type mapping
b329ff19 : Always detect encoding on Windows
2af403ce : Simplify type mapping
b7513b1d : Simplify type mapping
761d35f7 : Cleanup format_as handling
545dc414 : Add value ctor taking name_arg
3f5e45dd : Simplify handling of _BitInt
2e3b6fbd : Remove redundant check
a0328e1f : Improve error reporting
de28ef5f : Remove make_arg
2d5e561a : Cleanup argument handling
6537fb43 : Update changelog
50aac2ac : Add reference to iterator_traits
538d8777 : Workaround a bug in libstdc++
03353123 : Demacrify UTF-8 check
463fe65f : Cleanup FMT_COMPILE_STRING
1782a6ea : Rename pragma macros
b52fb988 : Fix no locale build
b6a6ec7f : FMT_EXCEPTIONS -> FMT_USE_EXCEPTIONS
89999f16 : Simplify pragma
b90b4bc9 : Remove FMT_STATIC_THOUSANDS_SEPARATOR in favor of FMT_USE_LOCALE
a1d6f9a9 : Minor cleanup
689ec7a0 : Cleanup
28143dc9 : Cleanup chrono
1bde49e5 : Remove FMT_USE_USER_LITERALS
f924d16e : fix: pass /utf-8 only if the compiler is MSVC at build time
ab8f9d5b : Cleanup format API
6f62db09 : Cleanup format API
ab44ee75 : Avoid shadowing
0d4e7e3f : Remove old workaround
8ee89546 : Remove old workaround
a5deb96b : Update gcc version
61a241f0 : Cleanup
ff82d8d2 : Cleanup visit
0cc20f56 : Remove iterator_t
2ba6785d : Remove unused type
5644e750 : Remove unnecessary forwarding
5345cfe6 : Adjust clang-format
3e9fdb3a : Cleanup
3ada4aed : Optionally exclude Unicode data
b37be85b : Optionally disable named arguments
70643b25 : Don't use format_error if exceptions disabled
967e2d17 : Cleanup
02c5d637 : Cleanup
047bf75c : Cleanup
2d3ba32e : Improve debug codegen
6c90b31f : Improve debug codegen
9408c2ae : Readd support for FMT_BUILTIN_TYPES
cc3ff152 : Cleanup
158893b3 : Cleanup
f5a16a48 : Cleanup
cad876be : Switch to vargs
debf6f82 : Switch to vargs
35f4fab4 : Simplify value ctor
ff8f3247 : Minor cleanup
bd48715d : Simplify make_format_args
57d6df62 : Simplify make_format_args
8ed4a9dc : Improve debug codegen
f288f45e : Prepare for arg_store unification
5bf577ca : Backport from GoogleTest: "Work around a maybe-uninitialized warning under GCC 12" (https://github.com/google/googletest/commit/0320f517fd920866d918e564105d68fd4362040a)
b6de6681 : Backport from GoogleTest: "Always initialize fields in MatcherBase constructors" (https://github.com/google/googletest/pull/3797)
6870e4b0 : Workaround for GCC regression: false positive null-dereference in vector.resize
5cdef760 : Switch to gcc-13 for C++23 tests
a2c290bc : Suppress a bogus MSVC warning
f1e3016c : Optimize debug codegen
106dc8fd : Reduce usage of type_identity
c3344e21 : Cleanup base API
5f438c96 : Remove make_arg
2a257798 : Reenable FMT_BUILTIN_TYPES
22d50c1a : Add support formatting std::expected<void, E>
1cc10ab6 : Make is_formattable work with const/volatile void
6aaf7f4b : Suppress a gcc 13 warning
b4d1d7f8 : Improve debug codegen
1e0771c7 : Fix broken CI for shared library on macos and linux (#4151)
3df47a46 : Make is_formattable work with void
b4aea98b : Small fixes for some issues with modules builds (#4152)
565461a0 : Update MSVC workaround in compile-test
e2b72387 : Cleanup format string API
1e0c6cdc : Make symbol sizes shorter
a8bcf81f : Minor cleanup
15694c9a : Workaround an MSVC bug
4cae2da0 : Workaround a clang 17 bug
79e5ae91 : Fix locale tests on FreeBSD
894b71da : Fix handling of _BitInt
7a6a2a79 : Improve debug codegen
387395fc : Cleanup base API
6a884154 : Add FMT_APPLY_VARIADIC
9a2aae37 : Cleanup base API
88037683 : Cleanup base API
4fa533c7 : Cleanup base API
d980dd71 : Cleanup base API
4eed488c : Cleanup base API
a6ecd25b : Improve debug codegen
9f29345e : Simplify mapped_type_constant
4986b4c0 : Simplify arg_mapper
a5f4d982 : Simplify arg_mapper
bc3af512 : Reduce the number of instantiations
60740b7c : Cleanup base API
9ef160d3 : Cleanup base API
0b80978c : Cleanup base API
4f39d886 : Cleanup base API
a86b1acf : Add mapped_t
c9ef07bc : Minor cleanup
8c4cfab5 : Detemplatize parse
7e3aa6d9 : Minor cleanup
7c662160 : Minor cleanup
1416edab : Cleanup base API
d4aeca99 : Bump actions/upload-artifact from 4.3.3 to 4.4.0 (#4141)
eee93ddf : Bump github/codeql-action from 3.25.11 to 3.26.6 (#4142)
b310a0d4 : Simplify parse_format_string
985c3399 : Make map static
4a55b0d5 : Remove duplicate error in compile-time checks
64a6c845 : basic_format_parse_context -> parse_context
66920fee : Improve compile-time checks
f4dad85c : Improve handling of named arguments in compile-time checks
db4becab : Reduce template instantiations
fec2cc7a : Improve handling of named arguments
621e9c17 : Clarify why we have TYPE in native_formatter
bca70405 : Simplify compile-time checks
8c4b17fe : Simplify compile-time checks
516a2e20 : Cleanup FMT_STRING
6797f0c3 : Cleanup compile-time checks
db496b47 : Remove old gcc hack
8eda3c8e : Cleanup compile-time check
53316903 : Move string_literal to format.h
8a484ad5 : Minor cleanup
b446cc9e : fwrite_fully -> fwrite_all
0204dd35 : Fix _BitInt formatter
d8876b77 : Minor cleanup
c0fab5e2 : Reject modernity, embrace tradition
64313e91 : Move redundant initialization to compile time
8e3da9da : Improve binary size
2a2f73f7 : Improve binary size
6dd9194a : Simplify format_to_result
a017bba0 : Minor cleanup
5eb023cd : Improve binary size
f213d833 : Disable locale more
b3ccc2d2 : Disable locale more
7477dda2 : Simplify is_utf8_enabled
e582d377 : Simplify locale handling
cd8d01d8 : Minor cleanup
377cf203 : Add opt out for built-in types
5a0a3734 : Add support for _BitInt on clang (#4072)
bbf8b3bd : insert else branch to avoid unreachable code warning (#4130)
a3f3f2ec : Fix gcc 8.1 - 8.3 bug and compilation (#4131)
e3676ca3 : Change std::copy to detail::copy in chrono to fix MSVC compile errors (#4132)
0379bf3a : Workaround -Wstringop-overflow
c59ee969 : Improve compile-time formatting (#4127)
1a79bbfa : Cleanup chrono formatting
89af1ad7 : Cleanup chrono formatting
0e741e0d : Minor cleanup
d1acc667 : Minor cleanup
4fb7008c : Cleanup duration cast
589898e2 : Fix %S doc
62382e36 : Test full exponent range
94b8bc8e : Add an experimental writer API
020af729 : Simplify ostream
fb07b37c : Prioritize using the header files of self (#4116)
31354212 : Minor cleanup
993f56cf : Make sign a proper enum class
c6c830e2 : Make align a proper enum class
b906c321 : Get rid of bit fields
f8c0c8ee : Cleanup public API
c71d03fc : Make `support/python/mkdocstrings_handlers/cxx/__init__.py` PEP 8 compliant (2 of 2) (#4115)
50a8c3e9 : Reduce format specs size
98314319 : Fix ambiguous overload
0ce49aeb : Add a test case
bf870ae3 : Fix back_inserter lookup for non-std containers
c9851835 : Make `support/python/mkdocstrings_handlers/cxx/__init__.py` PEP 8 compliant (1 of 2) (#4110)
9f0c0c46 : Add 'n' specifier for tuple and pair (#4107)
9f269062 : Simplify default formatter
15f939c3 : Improve handling of dynamic specs
928a07bb : Simplify handling of dynamic specs
78916997 : Simplify handling of dynamic specs
58aba5a3 : Deprecate append instantiation
5ee14d35 : Reintroduce constexpr fmt::formatted_size for C++20 (#4103)
b9c0e4dd : Improve spec parsing
8445327c : Simplify spec handling
8a06cee8 : Optimize shortest float formatting
1db22749 : Use us if Unicode is disabled
34ead4b3 : Bump msys2/setup-msys2 from 2.23.0 to 2.24.0 (#4098)
3bf26009 : Bump ossf/scorecard-action from 2.3.3 to 2.4.0 (#4099)
d326c729 : Fix conversion a surrogate pair (#4095)
6e462b89 : Get rid of std::copy
aff640c3 : Make fmt::appender implement std::output_iterator concept (#4093)
e23fb6a8 : Apply clang-format
16b3542f : Remove float_specs
29d7e580 : Remove float_format
919f7c5e : Reduce float_specs usage
a80d668a : Diagnose invalid precision
707d7d92 : Apply coding conventions
de6ed8df : Test alignment
ffdc3fdb : Align digits table
0c028137 : Fix doc build
f8581bce : Add redirect page
31b3c325 : Mark namespace scope constexpr variable 'buffer_size' inline. (#4084)
52b32081 : Wrap private module fragment content within conditional extern "C++", to match declarations. (#4083)
0b0b09f4 : Constrain format_uint
4173a631 : Improve format_decimal
4239dfe0 : Simplify format_decimal
ba36a048 : Remove counting_iterator
f6b4a23b : Unbloat chrono
42d3d703 : Remove the commenting attempt
9fcd9c4c : Remove all warning suppressions
7f157dca : Workaround gcc stringop-overflow bug
524ca1c7 : Improve parsing
bdc45eef : Simplify on_text
439b6d72 : Reenable print optimization
3cc32fdc : Mark more formatters nonlocking
0c9fce2f : Update version
b47d662e : Update changelog
e84297f2 : Bump version
0ad234ad : Update changelog
de684ef7 : Make appender compatible with fill
447c6cbf : Update changelog
bc8d32e9 : Update changelog
0f87d6ff : Improve sign processing
808ea019 : Cleanup test
55e76e6c : Update check-commits script
8757f1f8 : Add a script to test multiple commits
9228f349 : Inline visit
e10643ad : Add a perf-sanity test
f29a7e79 : Don't use memcpy in append
f97deb0d : Minor cleanup
35413535 : Apply minor optimization
5ef93a9f : Expand FMT_FORMAT_AS to include implicit conversions (#4055)
c9102619 : Avoid extra reserve
58d792b6 : Apply minor optimizations
25adca56 : Remove redundant overload
1408f182 : Simplify iterator detection
3fe4641d : Add 2 more constexprs to fix compile error (#4065)
33e7ed1e : Improve handling of back_insert_iterator that writes into a buffer
6a192f8d : Fix broken links in README.md (#4066)
92cdbbae : Update api.md
13038f37 : Support printing (const) volatile void* (#4056)
67252257 : Update changelog
e60ff504 : Fix usage with std::generator (#4057)
ccea3380 : Update lint.yml
92227c77 : Improve support for non-POSIX platforms more
486838f2 : Improve support for non-POSIX platforms
a4339119 : Update changelog
7a8b54a0 : Don't confuse Glib::ustring with std::string
b50e685d : Update version
e314776c : Fix version check
2208143a : Update changelog
a9625970 : Improve std::complex formatter (#4050)
232c6bc4 : Update changelog
503e183b : Bump version and add version validation
e50c8b6b : Fix disabling Unicode support
9d946a2f : Fix compilation errors due to `make_format_args` in gcc 14.1.1 with c++20 (#4042)
c4f6fa71 : fix: Make basic_format_arg::visit() const (#4043)
10f12fd3 : Bump github/codeql-action from 3.25.3 to 3.25.11 (#4041)
24c1f886 : Remove double has_value check (#4040)
0041a40c : Update version
686339f7 : Minor cleanup
e355c116 : Tweak wording in the changelog
707bb5b3 : Fix grammar
6f68c62c : Ignore doxygen files
d059fe42 : Ignore vagrant files
43c5b347 : Fix package build
e89568e6 : Update vagrant config
f5bf6f77 : Update build script
bd9af9a9 : Update changelog
16521089 : Fix typo
84f61318 : Fix formatting of release notes
dedc17c1 : Fix handling of tables, take 3
5d0adb6d : Fix handling of tables, take 2
3f251fc9 : Fix handling of tables
1930ed4b : Fix release script
26d07e49 : Fix formatting
949d5d17 : Fix build script
53186535 : Bump version
602e3c3d : Update build script
2952130c : Fix doc build
1e94a463 : Create build dir
a3412032 : Update doc script
0fae326c : Update site dir
8b1fcf5c : Update doc dir
ec46c3de : Update build script
2d9d32c6 : Update build script
4703ade7 : Update build script
52e7b25f : Update changelog
b61c8c3d : Change actions/github-script from e69ef54 -> 60a0d83 (#4038)
bbf44cc0 : Defines are still needed for FMT_MODULE as well (#4027)
06948fa7 : Pin deps
d9899492 : Simplify deps
ff72f553 : Update changelog
7f951f25 : Optimize range formatter
7ae102bd : make format_int constexpr (#4032)
edde9731 : Update test names
b1efe851 : Prevent silent data loss
2c0d9e94 : Add a define to force the use of fallback_file
18a9676d : Add an experimental path
af8cd4e4 : Module purview can only contain direct preprocessor code (#4029)
514b6955 : Suppress a bogus warning in MSVC (#4023)
ac967732 : Added missing std::declval for non-default-constructible types (#4024)
c00149f5 : Fix a typo
71244e07 : Cleanup includes
a57b0194 : Correct comments
febd8ed5 : Cleanup includes
0434026a : Remove build-docs.py
0882bfef : Don't deploy docs from a PR
2a2048a7 : Don't pass seconds as a double in examples
ea1187f4 : Generate doxyxml in build
1334eeda : Improve docs
709169a4 : Set the anchors
2bf1b300 : Update changelog
8687315e : Guard more system headers by `FMT_MODULE` (#4006)
98dd673c : Cleanup cmake
a245a8d4 : Update changelog
e0b66e8f : Remove dependency on <ranges>
794df69c : Added range_format::(debug_)string formatter (#3973)
1d9df9ce : Remove a redundant comment
c4ea9032 : Only install `FILE_SET` when needed (#4013)
3e3062c1 : Update msys2/setup-msys2 to v2.23.0 (#4012)
b998b471 : Update changelog
bff1de15 : Fix deploy docs (#4010)
90932dd2 : Update doc.yml
232c5e85 : Update doc.yml
26cdd1cb : Update doc.yml
ad34d4df : Update doc.yml
f7962644 : Fix doc workflow
28673d96 : Update api.md
a5c1b5d4 : Update changelog
cc4d1245 : README.md: update to remove "not yet release" remarks on clang-tidy
18a325f3 : Disable footer
a1337aa8 : Merge literal-based API doc into the parent section
51a690ab : Check if `.cc` exists in `fmt.cc` (#4005)
f332a81b : Remove unnecessary build step
33a1de57 : Deploy docs, take 3
c7252b33 : Deploy docs, take 2
3f71b606 : Deploy docs
215ce4d9 : Fix error getting config 'user.email'
89f3a810 : Fix error getting config 'user.name'
1f170d3f : Install mike
d175db8f : Fix doc CI and clean workflows
a8cfc0cc : Deploy dev docs
65e278b2 : Don't pollute the source directory
3620c174 : Fix doc build
702b6f37 : Update docs
ed21034a : Implement deployment
76d57f93 : Remove old script
ab6b257a : Implement doc building
077e4ae7 : Added generator expression to /utf-8 compile option (#3995)
d4a8d26c : Temporarily disable doc build in CI
b5c8fd78 : Fix doc build
735a6138 : Build docs
a6e6e9c3 : Fix a link
e6d4f927 : Improve docs
8de3e87d : Add a CMake option to control Unicode support
46d2acb3 : Don't add `os.cc` to sources with FMT_MODULE (#4004)
fad0222a : Export `compiled_string` so that user can customize one (#3999)
d1cab6a9 : Drop parentheses
fcb6a452 : Improve docs
72928661 : Improve docs
d6ec6b7e : Update docs
e845fc57 : Ignore old changelog
2bf811b1 : Also allow compiled format for clang >= 12 (#4001)
9653eed8 : Don't hide the navbar
9b5d1826 : Update changelog
fe741daa : Mention namespace `fmt::literals` in the document (#4002)
0f6e7165 : Fix missing includes in fmt.cc (#3994)
a3d95971 : Update changelog
7bd11b5c : Remove a redundant extension to reduce divergence from std::format
21372bc0 : Update cmake config
a0495e3e : Update changelog
cba5e861 : Update changelog
e9609dec : Update changelog
6ebbaf4b : Split changelog
4e31d2dc : Update changelog
fcc0b499 : Fix `FMT_INSTALL` with `FMT_MODULE` (#3998)
0560c334 : Fix build with `FMT_MODULE=OFF` (#3997)
db9365a1 : Update lint.yml
5c445bc4 : Reverting check to make shorter branch comes first
94f96d11 : Fix undefined reference when compiling with FMT_STATIC_THOUSANDS_SEPARATOR and chrono.h
6abc1204 : Check if the generator is ninja
a9b85176 : Use native c++ module support from CMake
fba06f0e : Update changelog
598e5180 : Remove redundant tests
0a555818 : Usage -> Get Started
966a1b3d : Update docs
adb8e27d : Fix rendering of template parameters
2c84fa9a : Update docs
8da0240d : Improve docs
83bf1423 : Update changelog
595e5491 : Cleanup docs
c636967c : Improve docs
2392367e : Set primary color
06f8e02f : Remove rtd compat
c71d08fc : github: update lint.yml to post details on formatting issue (#3988)
d9b90029 : Update docs
c0029b98 : Update docs
1ac9b317 : New landing page
f68dee53 : Fix syntax highlighting
fb9ee2ed : Simplify doxygen config
d29ceaf9 : Update .gitignore
9b12491c : Migrate docs
ab29ef37 : Migrate docs and cleanup
97117cbb : Migrate to mkdocs
886237ae : Emit anchors
904f2a5c : Remove a non-pinned dependency
dab1a65d : Sort out directory URI config
509d0181 : Fix a link
75ab3bc2 : Add a script to invoke mkdocs
871538d3 : Fix install dir
250456d5 : Migrate to mkdocs
38ba3d39 : Migrate to mkdocs
07141139 : Add macro support to api doc extraction
03d14c3b : Add support for multiple namespaces
416ac0fc : Bump actions/checkout from 4.1.0 to 4.1.6 (#3986)
596add89 : Bump ossf/scorecard-action from 2.3.1 to 2.3.3 (#3984)
a10e0321 : Improve docs
febeb51b : Documentation improvements
f18c2b65 : Fix rendering of aliases
e3910b8a : Improve apidoc rendering
34b85678 : Render members
e5c07c83 : Improve apidoc formatting
933d8ba3 : Improve apidoc formatting
e7ba467e : Improve apidoc formatting
91a859ee : Switch to markdown
6180442e : Render template parameters
418c5d09 : Render template params
aafdde7e : Switch to JavaScript syntax highlighter
d2ecfcfc : Fix rendering on github
26b24943 : Improve doc presentation
4f330567 : Improve apidoc generation
19927462 : Convert API doc to Markdown
a4d42c44 : Cleanup comments
ddd8a542 : Add mkdocs config
fcd3e1e1 : is_convertible_v -> is_convertible::value (#3983)
dc401b1c : Move handlers outside of the docs
f7c5588c : Cleanup syntax doc
a4e40677 : Fix markdown
3479828e : Fix markdown
191b0cb4 : Fix markdown
e80f4a9b : Cleanup syntax doc
022d8efe : Update doc.yml
ca8eeb09 : Add glibc ext for day of month and week of year (#3976)
cddb41f6 : Fix markdown
0b0a0577 : Remove old contents
caa97da1 : Add a word joiner to prevent line break
cf9833f4 : Cleanup apidoc comments
b6638f9c : Convert usage to Markdown
d9034601 : Fix markdown
ba2fbf6e : Fix markdown
6e49bb88 : Remove CSS
e0f3e850 : Fix markdown
4fc3fce9 : Improve syntax markdown
d6427ae7 : Improve syntax markdown
3d686906 : Improve syntax markdown
551aa8d5 : Add CSS
9e07045f : Fix links
5735048b : Improve mkdocstrings handler
33eba104 : Minor comment fix
43ab964c : MSVC 17.10.0 + modules cannot find definition (#3972)
728f9bc3 : Added std::type_info formatter (#3978)
e721046e : Convert index to Markdown
552842c4 : Convert syntax to Markdown
2c38766f : Add a mkdocsstrings handler
c8f1b4e7 : ci: Remove macos-11 runners, add macos-14 (#3980)
529dcd11 : Fix workflow, take 2
1441c660 : Fix workflow
ecd15597 : Improve styles
a57a63dc : Fix styles
8691f21b : Fix styles
7e4fac3f : Improve styles
4a368625 : Replace less with sass
f4e1ec81 : Cleanup html
89c0d101 : Update description
12ef9e09 : Fix class conflict
5afa6813 : Remove redundant github button
cc131020 : Fix navbar style
8ee6c940 : Reintroduce GCC-11 C++20 into CI (#3979)
766300b3 : Update html
4115219e : Fix CSS path
95076981 : Update documentation deps
1752d7fb : Added formattable concept (#3974)
1768bf97 : Added FMT_EXPORT for fmt::range_format and fmt::range_format_kind (#3970)
fc723fd6 : Fix regression in #3710 (#3968)
b8176106 : Check range_begin is dereferenceable (#3964)
706eabd5 : Resolved warning C4127: conditional expression is constant (#3967)
028bffa0 : Update checks for dynamic_cast usage when compiled with no rtti (#3963)
86741b3e : Bazel support: Add missing platform dependency (#3965)
75e89242 : Minor cleanup
0b5287f8 : Remove unused function
a4715c48 : Bazel support: Add utf-8 to Windows build (#3962)
8e728044 : Fix format_as for non-const begin/end views (#3955)
1f436c64 : Cleanup locking/buffering
db1ee420 : Cleanup unicode check more
7d6ae972 : Cleanup unicode checks
3460b30f : Improve utf-8 detection
b7809f91 : Enable Unicode support by default
1dc71f21 : Enable Unicode by default
8db8f224 : Optimize join_view
d2473b7b : Simplify join_view formatter
328d256c : Apply coding conventions
57593a12 : Simplify map formatter
10508a30 : Enable fmt::join for uncopyable iterators (#3946)
16cec4f5 : Make the map formatter correctly handle elements with custom formatters
77bfd849 : Split range and map formatters
3af8ac7a : Privatize write_debug_string
ceb406d0 : Remove range_default_formatter
19afc9c3 : Update README.md
6ff593b0 : Update README.md
78420bed : Update README.md
a21bc7b8 : Update doc.yml
97d0613b : Update doc.yml
04b0ae41 : Update doc.yml
27dd1dcf : Update lint.yml
3649c395 : Update lint.yml
7650ed04 : Fix to_nonnegative_int
9234fe83 : Add tests to check that isnan doesn't cause FP errors
8a8f4825 : Fix: isnan() shouldn't cause FP exceptions
17062a0c : Bump actions/upload-artifact from 4.3.1 to 4.3.3 (#3950)
88d3997f : Bump github/codeql-action from 3.24.9 to 3.25.3 (#3949)
48c90845 : Fix CodeQL alert (#3945)
cf1f55f7 : Specialize `formatter` for all `std::basic_string` types (#3943)
400f6a8e : Dedup ADL begin/end lookup
a3e0931e : Update signature in the doc
51eeccd0 : const void* is neither a fundamental nor string type
f30f1fd5 : Document formatter specializations provided by base.h
f4b256c6 : Fix warning C26439
f746a59a : Cleanup FMT_ASSERT
ee0c3351 : Fix format_to + FMT_STRING for wide character type (#3931)
99735764 : Fix FMT_USE_NONTYPE_TEMPLATE_ARGS define back (#3937)
aa52eb76 : Resolved warning C4996: 'fileno': The POSIX name for this item is deprecated. Instead, use the ISO C and C++ conformant name: _fileno. (#3930)
116a9ce4 : Added FMT_IMPORT_STD feature macro (#3928)
5eb68c0e : Fix mix-up of 'FMT_BEGIN_EXPORT' and 'namespace detail'. (#3924)
550437b2 : Resolved warning C4127: conditional expression is constant (#3923)
4e8640ed : Fix: enable `FMT_NORETURN` without exception support too (#3917)
c70e7b74 : Coding conventions and minor fixes
71144eea : implement year_month_day (#3913)
843e2935 : Bump github/codeql-action from 3.24.6 to 3.24.9 (#3915)
f5ec5ada : Update syntax.rst
3b5f3de3 : Make CMake version message less confusing (#3907)
ca919398 : Replace std::fill_n with fmt::detail::fill_n (#3909)
74a18728 : Implemented fmt::day, fmt::month, fmt::year and related unit tests (#3906)
88620e53 : Range formatting documentation (#3905)
5d63e87d : Add a formatter for float128
aecec01b : Initial support for extended FP types
5af88653 : Cleanup
45b772f8 : Improve std::complex formatter to be compatible with P2197R0 (#3900)
53347891 : Make line buffering test less flaky
38881e5a : Fix handling of the fileno macro
6c7cc6a0 : Fix group_digits for negative integers (#3901)
365c3fbd : Bump timeout
c0dac839 : Use p2197 format for complex by default
bb882c03 : Simplify path formatting
12acd798 : Fix ambiguous call
c710bfa1 : Apply clang-format
73f2b344 : Add std::complex formatter
9f3fc6e3 : Add XChar support into nested_formatter
c17816cb : Fix invalid fmt::formatter<>::format return type (#3895)
df6e7b22 : Fix relative path for cmake in usage doc (#3890)
c816fa67 : Fix a warning
e281749c : Simplify range formatter
11f2f30f : Simplify range formatter
13cfaa2a : Guard against usage of _isatty when header was not included (#3880)
0861db50 : Support character range formatting (#3863)
dfe5b12c : Update os-test.cc (#3883)
09935d82 : Bump github/codeql-action from 3.23.2 to 3.24.6 (#3876)
3bc6cc1e : Protect against locking formatters
4fcc317d : Bump actions/upload-artifact from 4.3.0 to 4.3.1 (#3875)
ae1e93d3 : Fix warning C4702 emitted from format.h (MSVC) (#3866)
f68f452d : Workaround an ld warning on macOS
ebea5736 : Fix chrono locale format bug for RHEL gcc (#3859)
ddf0b7d2 : Fix warning C4365 emitted from printf.h (#3865)
0166f455 : std.h c++23 build fix (#3856)
8e42eef4 : Don't error on min time_point
91b30e5b : More API details
7a63e233 : Readd core.h to headers
44c3fe1e : Fix handling of static separator
ae181cc9 : C++23 compatibility: basic_string_view cannot be constructed from nullptr (#3846)
3a6fb2fc : Fix some typos. (#3843)
08795047 : Fix typo in typename. `containter_type` -> `container_type`. (#3844)
34f415b5 : Fix %S formatting for chrono durations with leading zeroes (#3814)
e17bc675 : Make scan variadic
06311ed1 : Fix fixed rounding around zero in Dragon
e5bab8da : added formatter for std::expected (#3834)
9f5f39cb : Bump actions/upload-artifact from 4.0.0 to 4.3.0 (#3837)
ea581437 : Bump github/codeql-action from 2.22.5 to 3.23.2 (#3836)
6321a97d : Simplify color formatting
4b6b32f3 : Deprecate wide stream function
1b54ba4b : Fix UB in format_arg_store implementation. (#3833)
71a4a8d4 : Really fix MSVC warning about <bit> only being available in C++20. (#3832)
8e62172a : Fix a warning
28afff36 : Improve buffering
af44c297 : Separate buffer initialization from flush
a1e1eedb : Minor cleanup
ffce3632 : Add glibc stream support
b5669512 : Don't hang on test failure
6435b169 : Add support for line buffering
6f260455 : Add scan_data::make_args
e1832bcf : Consider ADL begin() and end() when joining ranges (#3824)
2caf1b3b : scan more
668fe265 : doc: fix the chrono %C example value (#3822)
06fc25f2 : Don't always enable typeid usage under msvc (#3821)
11ba1270 : Fix flush
4c5b4af0 : Improve name argument validation
2eb36329 : Fix custom formatter example (#3820)
0147e082 : Document println
6b68dff9 : Write directly to a stream buffer
b2cde48d : Reduce usage of float_specs
8510838d : Make format_specs not depend on code unit type
090ee135 : Pass char type to write
470c4e6c : Fix scope for glibc ext for sec, min, and hour (#3812)
13ec66bf : 🛠 Add basic array safety functions and backwards-compatible result type (#3805)
64091b7a : Fix naming
e9548235 : Make fill independent on code unit type
f80a2bee : Update README.md
83652dfe : Restrict always inlining to the top-level API
d249fd9f : Workaround for gcc 6 (#3810)
73d91351 : Mark `iterator_buffer` move constructors as `noexcept`. (#3808)
fe0d910a : Replace multiple error reporting mechanisms with report_error
f9294f0e : Improve handling of format specs
c98a5a59 : Remove unnecessary checks
5f30d371 : Update README.md
3647feaa : Improve scan
e420a58f : Improve scan prototype
ca37503f : scan -> scan_to
123e058e : Improve scan prototype
f924d20d : core-test -> base-test
d7072921 : Fix constness
362b40c1 : Fix docs
56fa4d61 : Fix docs
cacdf143 : Remove nonstandard alias
4d766b16 : Invert dependencies
c10859f1 : Remove deprecated options
d0963d48 : Make ranges only depend on fmt/base.h
da0f84c4 : Cleanup copy functions and move to base.h
59baac52 : Remove unused functions
21b04582 : Use std::allocator_traits (#3804)
df6a3564 : Fix MSVC warning: "The contents of <bit> are available only with C++20 or later." (#3807)
7c163acf : Fix conversion warning in filesystem::path formatter (#3806)
1b55d103 : Update api.rst
5d9d376d : Update api.rst
6064b85c : Update api.rst
deb584c0 : Update build.py
297b22f5 : Remove <memory> dependency
3c960841 : Remove redundant detection of experimental string_view
0cdee904 : Add a missing num_bits specialization
7e58af4e : Fix an ICE on clang <= 15
f1924d32 : Cleanup macros
52174953 : Cleanup consteval detection
b71d9877 : Reduce usage of FMT_CONSTEXPR20
810d1750 : Cleanup constexpr detection
170ffb1f : Simplify constexpr checks
e470ba8b : Simplify exception detection
bf98e3e4 : Cleanup macros
fd87a23d : Reduce memory dependency
b71ef65b : Remove iterator dependency
c5340539 : Remove unnecessary trait specialization
971f7ae7 : Minor cleanup
6159e2b0 : Bazel support: Switch to globbing to collect header files
da7a232b : Cleanup contexts
2595bf57 : Fix formatting of ranges with begin()&/end()&
6f5d53ce : Add fmt::is_contiguous<std::basic_string<Char, Traits, Allocator>>
961df829 : Fix buffer overflow if output iterator is std::back_insert_iterator and value is escaped (debug format)
401f0873 : Fix write_uintptr_fallback
72599292 : Update build.py
3d84b45a : Update core.h
4331abed : Move fmt::format to fmt/format.h
fc8f6ba9 : Separate compilation for println
58a6bd48 : Add core.h for compatibility
79f1506f : Add base.h
4d616479 : Simplify make_format_args
cf8426cf : Sort links on fmt/std.h section
e915d521 : Update api.rst with support provided by std.h
7ba64205 : Optimize debug codegen
97867e27 : Extend Bazel build support to bzlmod (#3792)
8875cf96 : Fix spelling: othewise ==> otherwise (#3791)
f7ed65fa : Simplify format_arg_store
f34f31b3 : Move format_arg_store to detail
fb66131e : Improve arg storage
6af30d8f : Remove legacy workaround
c177324b : Simplify basic_format_args
545d37a8 : Remove extra level of indirection when building args
9f73e45c : Update README.md
a5ae9ae1 : Split standard context into a separate class and optimize
23e8109d : Remove buffer_appender
679af1f5 : Remove redundant get_container
48d7fb26 : Merge back_insert_iterator and appender
f348d1a2 : Reintroduce back_insert_iterator detection
df67df7b : Add is_back_insert_iterator
17f79ac6 : Minor cleanup
dbdfc99f : Don't crash if flush fails during unwinding
c1d9e884 : Remove unnecessary final and apply clang-format
7d73ef85 : Cleanup ranges
ae9b0b52 : Disable transitive includes
f73388f1 : Update README.md
08878044 : Update readme
1b7d9db0 : Remove string dependency
0641b844 : Cleanup string traits
1e938dda : Simplify char_t
2e5b14bf : Add compile-time checks to color formatting
e0b604be : Remove a redundant function
6c617c96 : Update documentation (#3789)
bee495c9 : Remove dead code
794e1b61 : Remove redundant overload
c7061776 : Bump version
dc52d176 : Cleanup dependencies
0e3e61cc : Remove limits dependency
800a0bb2 : Remove dependency on string_view
f2e43f96 : Remove char_traits dependency
c9287eb9 : Reduce char_traits usage
61f144bd : Move copy_str for format.h
4687f8e3 : Remove dependency on <iterator>
f2c55f6b : Remove dependency on back_insert_iterator
c9d233c0 : Decouple appender from back_insert_iterator
242bcaec : Update README.md (#3784)
4e43a469 : Clarify why we can't have nice things
c36ed77f : Get rid of addressof
e2ab9ab2 : Add FMT_GLIBCXX_RELEASE
0378d171 : Replace remove_cvref_t with remove_const_t
baea8f69 : Update docs
eedfdb4c : Fix docs
5dbd7fd7 : Switch to bootstrap 4 because 5 breaks menus
7fd18026 : Fix FMT_OS definition (#3783)
63ce1708 : Replace virtual dispatch with normal functions in buffers
74ffea0d : Simplify to_ascii
6c3b2d49 : Fix constness
4ec9c290 : Apply coding conventions
f4a7d40d : Dedup throw_format_error
9659f22d : Don't include os.cc in the module if it is disabled via FMT_OS
068bf9ba : Make visitation compatible with std::format
50565f98 : Move misplaced join overloads to fmt/ranges.h
0b39d671 : Remove detail::error_handler
c1423850 : Improve the pipe API
398ddb8f : Don't include fmt/os.h in the module if it is disabled via FMT_OS
58372949 : Remove duplicate version
6c5bcca8 : Fix the release script

+- Project: platform/external/freetype

ed1d6509a : Add dirgroup for trusty genrule

+- Project: platform/external/giflib

7d49b5f : Fix potential overflow when calculating ImageSize
230b35c : Fix potential overflow when calculating ImageSize
579301a : Fix potential overflow when calculating ImageSize
f3ca2db : Fix potential overflow when calculating ImageSize
2cd3a5f : Fix potential overflow when calculating ImageSize
abc6707 : Fix potential overflow when calculating ImageSize
a6ede43 : Fix potential overflow when calculating ImageSize

+- Project: platform/external/gmmlib

3eb406c : Add Android bp (#217)
567dc09 : Xe2 Caching demotion for L3XD|L4WT|NC request (#212)
552325f : Check if BaseWidth and BaseHeight is zero (#213)
06c0515 : Fix Debug build type(#210)
4d9f382 : Update README with new platform support info (#208)
1ed3f87 : Introduce Battlemage full support (#205)
a999c84 : Introduce Battlemage basic enabling support (#203)
8d74657 : Remove media compressed formats (#202)
58d1ff0 : Introduce LunarLake Support (#188)
0d65e69 : Revert "Failing the resource allocation if the U plane offset exceeds the HW …" (#199)
40348e9 : Add RGB format perftag (#197)
c6380d3 : Update Readme with more build details (#198)
e2a8b80 : Fail if pTextureCalc is NULL (#196)
dcc4b85 : Add new DG2 device IDs (#194)
ac964bd : Failing the resource allocation if the U plane offset exceeds the HW limitations (#191)
93cf7ea : Correct DG2 Depth/Stencil MSAA 16x 8x Texture Allocation (#192)
fdd1f77 : Handle out of memory case (#193)
0d744fb : Add new Xe_HP Device IDs (#190)
47185b6 : Added new ADL-N Device IDs (#189)
544a8be : Introduce ARL-H support (#172)
f594362 : Update DG2 device ID macro (#183)
c9ca417 : Assert and return on divide by zero (#187)
4c04653 : Remove structurally dead code (#184)
1f69d42 : Fix possible integer overflow
f056c5d : Guard FormatTable to be within bound
bbfc32c : Add PVC device ID

+- Project: platform/external/google-fruit

e8ba5d7 : Make bpfmt happy.

+- Project: platform/external/google-smali

43f7ef13 : Add a new owner
aef7c0e4 : Edit OWNERS file

+- Project: platform/external/googleapis

cee86c9a9 : Add host cc target for code + status protos
02312e267 : Create new rule googleapis-field-behavior-java-proto

+- Project: platform/external/googletest

c7a87e0f : Add dirgroup for trusty genrule

+- Project: platform/external/grpc-grpc

8e1018b031 : Pin third_party/upb to C17.
74a0fd198c : Remove more submodules directories

+- Project: platform/external/grpc-grpc-java

3147ff40e : make grpc available for com.android.virt

+- Project: platform/external/gsc-utils

53834fde2 : Add trendy_team_desktop_hwsec and enable libtpmgenerated_test.
dc6fdcd55 : Revert "Enable libtpmgenerated_test."
1a6228a45 : Enable libtpmgenerated_test.
716d1e443 : Add an `Android.bp` for libtpmgenerated.
7446d69fd : gsc-utils: switch to macros from const for boot_param
d54dcb45f : Remove libchrome dependencies from tpm_generated.
4892d06b6 : presubmit: Ensure gsctool changes have devboard tests
37ca239ce : gsctool: Restart Ti50-based GSC on AP RO start
936ec42b3 : gsc-utils: build BootParam and DiceChain
d7612293e : gsc-utils: rename dice to boot_param
a45e4fcab : Merge remote-tracking branch 'origin/upstream-gsc_utils' into main
56777da8b : gsctool: Include ToT version of DT FW
085910bd6 : Adds the Android Desktop Vendor Binary Folder to ts_write
5eb14c075 : ti50: Update release notes for 0.24.120
0a5166add : Deletes tpm_generated/Android.bp
881028bf9 : Add DICE library
5a0809d8a : Removes Android Specific Files
ccd06bd3d : Build tpm_generated and its test.
106ceaabe : gsctool: Add more DT versions to USB list to avoid TPMV
f009c2c8d : gsctool: Handle special case for version 0.21.x
f087583ce : Import tpm_generated and its dependencies.

+- Project: platform/external/gturri-aXMLRPC

7f61b01 : change visibility of aXMLRPC library

+- Project: platform/external/guava

a3b51888c : Include `j2objc-annotations` in the Gradle runtime classpath.
6a070d804 : Bump github/codeql-action in the github-actions group
e5b58e2fc : Use `assertThrows` even in GWT/J2CL/J2KT-compatible code.
294f07da2 : Bump github/codeql-action in the github-actions group
7bc08c1e9 : Remove stale comments about `@NonNull` annotations in the JDK.
930cc5833 : Bump actions/setup-java from 4.2.2 to 4.3.0 in the github-actions group
bb9fc2316 : Bump the github-actions group with 2 updates
27c544073 : Bump github/codeql-action in the github-actions group
9c84ddbac : Remove comments about compiling with JDK 8.
bff7090ab : Stop testing code with inaccurate nullness annotations under J2KT.
508cef789 : Changed `*.concat()` to throw `IllegalArgumentException` if the input arrays contain too many elements.
137798d21 : Specify `@InlineMe` for `{Doubles,Floats}#compare`
388e0980d : Call `Thread.suspend` and `resume` reflectively (when we even try to call them at all).
781068569 : Synchronize on empty arrays instead of `Integer` instances.
7c9455c0d : Bump github/codeql-action in the github-actions group
d6192739d : Suppress Error Prone findings locally.
55911e843 : Bump the github-actions group with 2 updates
e1eaeff67 : Prepare for release 33.3.0.
a94ff8be6 : Update nullness annotations after cl/662127972, and prepare for the forthcoming version of our nullness checker.
a4a7f6bd0 : Recommend the JDK `compareUnsigned` methods over our equivalents.
e232035c7 : Fix ImmutableList.Builder to throw a useful exception when its size would overflow.
0cb9cc6d7 : Group overloads.
36dfb16a2 : Simplify the implementation of a test `Predicate`.
7c4ca4154 : Validate that ImmutableSortedSet.Builder is O(n log n).
6bd3a14c0 : Make ImmutableSet.Builder.addAll(Set) smarter about sizing the internal tables in current.
636e580b3 : Stop hashing in presized ImmutableSortedSet.Builders in java7 variant. This could only happen in code paths involving the just-added ImmutableSetMultimap.orderValuesBy.expectedValuesPerKey.
f7c5569b7 : Make ImmutableSet.Builder.addAll(Set) smarter about sizing the internal tables in current.
c3d5b17dc : Allow size hinting for ImmutableMultimap and subtypes.
91b6ebeae : Synchronize on empty arrays instead of `Integer` instances.
c78405955 : Fix or suppress `JUnitIncompatibleType` findings in Guava.
cab8ad16b : Update Public Suffix List data.
5f0e88627 : Make lock objects `transient`.
641b4c502 : Bump the github-actions group with 2 updates
8bab177c1 : swap Charsets for StandardCharsets
a629c5d9b : Soften the comment about _deprecating_ Multimap factories, as it hasn't been done in eight years.
372f6f09b : Update Public Suffix List data.
1a01958a9 : Bump the github-actions group with 2 updates
2466a099a : Add `text/markdown` to `MediaType`
e2b7a333a : Initialize our `Charsets` constants using the fields in `StandardCharsets`.
582742210 : Address some https://errorprone.info/bugpattern/NonApiType warnings.
85c6f8876 : Fix some [style violations](https://google.github.io/styleguide/javaguide.html#s5.2.3-method-names) in test method names.
41800d53d : Suppress some https://errorprone.info/bugpattern/JUnit4ClassUsedInJUnit3 warnings.
71666ca53 : Hard-code Java versions for plugins other than `maven-surefire-plugin`.
76f87bbcc : Bump a few Maven plugins.
5041fbe61 : Migrate off our `Charsets` constants, and further discourage usage.
04c1b7a8f : Reimplement our `compare` methods in terms of JDK equivalents.
e74da9293 : Update some comments / docs about JDK 7.
3dce9a2fe : Suppress an `UnusedVariable` warning for an unusual `Comparator` implementation.
1dffea265 : Clean up threads for `CacheBuilderTest` and `FuturesTest`.
5ccc169d0 : Remove some references to Java 7.
e4494149e : Bump the github-actions group with 2 updates
514f21277 : Update links.
63734b9df : Use iteration instead of recursion in `Graphs.hasCycle`.
61bfd841b : Improve bug report template.
354136fc8 : Bump github/codeql-action in the github-actions group
4a2d30252 : Bump Truth to 1.4.4.
123ae0bda : Fix a bad name for an exception parameter.
8b6d8bbdb : Use try-with-resources to close a `URLClassLoader`.
7e66ec48f : Remove Impossible from open-source release.
2f6d6186c : Switch FutureCallbackTest to an explicit monitor for j2kt compatibility.
c28e65234 : Rename `HighwayHash64FunctionTest` to `HighwayHashFunctionTest`. Simplify some testing setup.
558c2be63 : Internal change.
c94072c3f : Roll back suppressions for bogus nullness errors now that we've fixed our checker.
542e58830 : Downgrade to a version of `plexus-io` from before a performance improvement was reverted.
ad57f5221 : Add a Sequential Executor implementation for the j2kt super source. Also remove an J2KtIncompatible annotation.
2c24eb8b4 : Bump actions/upload-artifact in the github-actions group
8dac90754 : Compare for equality, not approximate equality within a tolerance of `0.0`.
c8829bf39 : Migrate JUnit `double` equality assertions to Truth.
8b0d007aa : Suppress https://errorprone.info/bugpattern/Finalize warnings.
3618043d0 : Address some `UnusedVariable` warnings.
9ba5fea8e : Bump github/codeql-action in the github-actions group
0a76ba8ed : Tweak code to avoid upsetting the nullness checker.
e549ba5c8 : Remove suppressions that are unnecessary now that our nullness checker sees `@NonNull` on JDK APIs.
a869b8665 : Migrate from legacy com.google.gwt to org.gwtproject.
4facd69c9 : Bump Truth to 1.4.3.
4b96e432b : Internal change.
b50df335c : Suggest using streams instead of `Maps.toMap()`.
93c7a3ab6 : Remove accidental whitespace from cl/646505537.
76fca99db : Remove `@Beta` from the `Duration` overload of `Suppliers.memoizeWithExpiration`
2a3fa8f07 : Internal change.
dc80e7e9f : Optimize Futures.allAsList handling of already completed Futures. Skips adding a listener on already completed futures and directly collects the completed futures value.
8c31d5268 : Bump gradle/wrapper-validation-action in the github-actions group
19cfef214 : Address https://errorprone.info/bugpattern/FunctionalInterfaceClash warnings.
f46b178bd : Document and add suppressions for our intentional use of `&` and `|`.
faf36af34 : Minor followup from cl/645331066:
cd11e83cf : Remove helper methods that are unused externally from the external codebase.
263712a20 : Fix typo.
20348c7ed : Remove unnecessary explicit NaN check.
1648ef7a3 : Mark Synchronized.java and references as J2ktIncompatible. Change Guava locks in remaining j2kt compatible code to J2ktCompatibleMonitor.
b310b7e1e : Update Sec-Ch-UA-Form-Factor to Sec-Ch-UA-Form-Factors to follow latest spec.
22151fa71 : Bump the github-actions group with 3 updates
040d0b977 : Use a Map<K, ImmutableCollection.Builder<V>> instead of Map<K, Collection<V>> in ImmutableMultimap.Builder.
95b394ca7 : Lazily initialize the map in ImmutableMultimap.Builder.
808e0799f : Use imports instead of fully qualified types.
a5f9bcafd : Make `CacheBuilder` `Duration` overloads available in `guava-android`.
fdfbed198 : Migrate some tests and testing infrastructure from `AssertionFailedError` to `AssertionError`.
c2bbd73e2 : Remove workaround for [ancient Android `unmodifiableMap` bug](https://issuetracker.google.com/36997692).
e0e44b562 : Fix `MissingOverride` findings.
9ea68165f : Use natural units for durations.
635571e12 : Suppress https://errorprone.info/bugpattern/SunApi warnings.
d328b5175 : Remove unused `writeObject` method.
0d3726990 : Bump github/codeql-action from 3.25.6 to 3.25.8
f94db26dd : Group all dependabot updates together in the same commit to avoid merge conflicts.
62c7fbbc9 : Bump deps.
c86c09dc3 : Remove `@Beta` from the `guava-android` `Collector` APIs.
25322a7cc : Prepare for release 33.2.1.
6c672281b : Migrate usages of obsolete Guava APIs.
c497a975b : Use `AssertionError(String, Throwable)` instead of supplying the cause later.
f74135f69 : Eliminate the need for a few `rawtypes` and `unchecked` suppressions.
142ddbc32 : Fix `@VisibleForTesting` violations.
c4b883de9 : Use Number.prototype.toPrecision in Platform.formatCompact4Digits.
3f61870ac : Change `InetAddress`-`String` conversion methods to preserve the scope ID.
f238ae451 : Bump github/codeql-action from 3.25.5 to 3.25.6
4ed497230 : Remove workaround for an ancient Android `toArray` bug.
f0c709997 : Remove workaround for an Android bug that was fixed at some point before Lollipop.
dd2fac88b : Remove `@Beta` from `TestingExecutors.sameThreadScheduledExecutor`.
ec7069a1d : Bump actions/checkout from 4.1.5 to 4.1.6
6dd1cb162 : Reenable `NullPointerTester` tests for our internal Android-emulator testing.
dc200d32f : Internal change.
225c1b5f9 : Internal change.
7c934a2d5 : Add a J2kt super-source implementation of DirectExecutorService that doesn't rely on other unsupported classes.
cc2875120 : Bump github/codeql-action from 3.25.4 to 3.25.5
aeadb633f : Update the documentation of `RegularImmutableMap`.
243b32747 : Bump ossf/scorecard-action from 2.3.1 to 2.3.3
8b70996ad : Bump github/codeql-action from 3.25.3 to 3.25.4
b9b6084df : Standardize parameter names across `Immutable*.of(...)` methods.
1b3969e80 : Bump actions/checkout from 4.1.4 to 4.1.5
6342a23cd : Unconditionally use `Throwable.addSuppressed` in `Closer`.
61d3f25b1 : Use `Throwable.addSuppressed` in the Android copy of `ServiceManager`.
381835deb : Accept `AutoCloseable` in the Android copy of `ClosingFuture`, too.
8f212ba60 : Add missing `@since` tag for `merge` in `TreeRangeMap` and `ImmutableRangeMap`.
b15afd574 : Roll back initial attempt at micro-optimizing `singletonIterator`.
ba820c6a8 : Roll back middle attempt at micro-optimizing `singletonIterator`.
2f4154d53 : Roll back the most recent `singletonIterator` optimization attempt.
57f76e3a7 : Prepare for release 33.2.0.
96fca0b74 : Make our `Collector` APIs available in `guava-android`.
fddc95d79 : Micro-optimize `singletonIterator` one more time.
70a98115d : Fixed a potential `NullPointerException` in `ImmutableMap.Builder` on a rare code path.
e075e8955 : Bump github/codeql-action from 3.25.2 to 3.25.3
9596c9960 : Update Animal Sniffer scents to Lollipop.
50047447e : Bump gradle/wrapper-validation-action from 3.3.1 to 3.3.2
093852ef2 : Document that we now run our Android tests under Lollipop.
348be7bd1 : Actually write the manifest we generate to the jar.
21fb20119 : Suppress or work around false-positive errors from the forthcoming version of our nullness checker.
730ca9f23 : Bump actions/checkout from 4.1.3 to 4.1.4
f88cd5c20 : Add Javadoc to the guava-android copy of `Equivalence.doEquivalent`.
4be0c8156 : Bump github/codeql-action from 3.25.1 to 3.25.2
759bad652 : Bump actions/upload-artifact from 4.3.2 to 4.3.3
b9d2b93da : Update comments to reflect our recent handling of "sneaky checked exceptions."
38c8017bd : Add "Sec-GPC" header.
d092aeda2 : Add a missing `@since` tag, and remove `@FunctionalInterface` from a package-private method.
183837d2f : Further micro-optimize `singletonIterator`, just to be safe / waste my morning.
1792bffe7 : Bump actions/checkout from 4.1.2 to 4.1.3
e21aac943 : Bump gradle/wrapper-validation-action from 3.3.0 to 3.3.1
6585e3e32 : Bump actions/upload-artifact from 4.3.1 to 4.3.2
ffb708e6c : Bump github/codeql-action from 3.25.0 to 3.25.1
df2c2e55c : Use multicatch even when it produces the type `ReflectiveOperationException`.
eab511707 : Address a few nullness mismatches that our checker newly detects.
b33044e57 : Bump Maven Wrapper's version of Maven to 3.9.6.
26f4538b5 : Bump gradle/wrapper-validation-action from 2.1.3 to 3.3.0
cf4e8ad28 : Bump github/codeql-action from 3.24.10 to 3.25.0
858caf425 : Fix [Gradle GWT compilation breakage](https://github.com/google/guava/issues/7134).
dad0c594f : Internal change
75de44c2b : Make `Comparators.max`/`min` return the type of the comparands, even if the `Comparator` accepts a more general type.
983b4516a : Internal change
31e7a17c0 : Internal change
1a583c32f : Reduce the thoroughness of mutation testing.
616802ad0 : Bump gradle/wrapper-validation-action from 2.1.2 to 2.1.3
911661a95 : Check `isEmpty()` instead of `size == 0`.
48b664367 : Internal change
ac17a14c7 : Fix typo reported by @Marcono1234.
d364abbfd : Discourage use of HMAC-MD5.
faa651613 : Bump github/codeql-action from 3.24.9 to 3.24.10
57ff7d93b : Use the diamond operator more.
aecd2f0ac : Bump github/codeql-action from 3.24.8 to 3.24.9
a32589e46 : Bump gradle/wrapper-validation-action from 2.1.1 to 2.1.2
85d57205e : Update Public Suffix List data.
bc56e904d : Add `@CanIgnoreReturnValue` to `ForwardingMapEntry.setValue`.
578ee2a30 : Bump github/codeql-action from 3.24.7 to 3.24.8
7dc01ed27 : Add "Ad-Auction-Allowed" header.
c99891cb8 : Make a class initialization deadlock more impossible
d5fbccac9 : Test `LocalCache` when async refresh takes longer than expire-after-write duration.
41d0d9a83 : Adds Permissions-Policy-Report-Only definition.
fcc46da8b : Bump actions/setup-java from 4.2.0 to 4.2.1
ad249c3f6 : Remove unused `DecodingException(Throwable)` constructor
a6bff7ac1 : Bump github/codeql-action from 3.24.6 to 3.24.7
b6d8d91b9 : Go back to using Java 11 to generate snapshot Javadoc.
cc2e372c5 : Prepare for release 33.1.0.
fe28211b2 : Bump actions/setup-java from 4.1.0 to 4.2.0
f2b8c4f47 : Make build and tests work with Java 21, and enable CI for them.
c6e91c498 : Update to Error Prone annotations 2.26.1
f9850325b : Bump actions/checkout from 4.1.1 to 4.1.2
3dc9e73d6 : Add `@since` tags and other Javadoc for `AbstractNetwork` methods from cl/591404913.
d48c6dfbb : Update to Error Prone 2.26.0
d70366d64 : Stop setting up and trying to inherit Javadoc from the JDK.
8b9387e6c : Bump `j2objc-annotations` to 3.0.0.
52a62b289 : Avoid calling `checkNotNull` on nullable values except during actual precondition checks.
0b1c47731 : Collection tests: Add explicit type parameters for J2KT and enable J2KT tests
1bb3c4386 : Deprecate the constructors of our `ExecutionException`-like classes that don't accept a cause.
b90ce5bde : Add action version comments in GitHub workflow files
cf86414a8 : Deprecate the remaining two `propagateIfPossible` overloads.
aa1df9f08 : Migrate from soon-to-be-deprecated `propagateIfPossible` to equivalent `throwIfInstanceOf` and `throwIfUnchecked` calls.
c96c7d42b : Make null-unsafety explicit in `TableCollectorsTest#testToTableNullValues`
4f7989d63 : Enable manual j2kt tests for com.google.common.net
5f7750959 : Declare the `V` parameter of `toTable` as non-nullable.
43abfe4e6 : Disable a `LongMath` test that is slow on J2KT
c7d03fd3f : Add missing `@Nullable`s to MultimapsTest
14eef4bb9 : `ObjectArraysTest` J2KT fixes
b775e6851 : Table tests: add type parameter to express whether table can contain null
032e2f92a : Workaround bad J2KT/Kotlin smartcast interaction
61766305f : MapsTest tweak to avoid J2KT `<? extends Object>` bug
5c8fd7bd0 : Add more “trivial” `@Nullable` to collection tests
71aa784c6 : The boxed primitive constructors are no longer scheduled for removal.
a6a34dc42 : Bump Truth to 1.4.2.
174d4cd1c : Added isEmpty() override for sequential Lists.transform() implementation.
e6ef66759 : Bump github/codeql-action from 3.24.5 to 3.24.6
7b8ff4044 : Enable a few more Guava Primitives tests for J2KT
f231543e3 : Improve nullness of types in some APIs related to `Map` merging, and fix `Collectors.toMap` null-handling.
fe62266a3 : Bump actions/setup-java from 4.0.0 to 4.1.0
3789a5645 : Miscellaneous test cleanups noticed during cl/610731287 and cl/611055831.
515601dba : Avoid test methods with type parameters
aedd38122 : IterablesTest: use `Iterable<@Nullable String>` where necessary
526682c20 : Workaround https://github.com/google/guava/issues/6824 for J2KT test
0ea03dae3 : OrderingTest: nullness type-bound fix
91d2a3408 : Bump github/codeql-action from 3.24.4 to 3.24.5
25fef6f2e : Fix, suppress, and/or localize suppressions for `unchecked` and `rawtypes` warnings under `gwt`.
58e30d60a : Disable collection tests for J2KT broken due to Java emulation issues
be01f1ab0 : Collection tests: workaround rawtypes when translating to Kotlin
418402455 : Fix, suppress, and/or localize suppressions for `unchecked` and `rawtypes` warnings in `collect.testing`.
fc21ea79c : `AbstractSequentialIteratorTest`: work around a J2KT generics issue
d5cef8fdc : `testPoorlyBehavedTransform`: tweak it for Java/Kotlin exception difference
b4cc971bc : Add `@J2ktIncompatible` to collection tests of `@J2ktIncompatible` features
37ce42927 : Roll forward ChecksumHashFunction optimization.
186f982cb : Remove the copy of `ToStringHelper`'s tests from `MoreObjectsTest`.
7e0661832 : Fix, suppress, and/or localize suppressions for `unchecked` and `rawtypes` warnings in `collect`.
12020e2dc : Fix, suppress, and/or localize suppressions for `unchecked` and `rawtypes` warnings in `collect` tests.
e5b68f120 : Fix, suppress, and/or localize suppressions for `unchecked` and `rawtypes` warnings in non-`collect`, non-`gwt`, non-test code.
7fa9f1e54 : Fix, suppress, and/or localize suppressions for `unchecked` and `rawtypes` warnings in tests (other than `collect`).
d16216e9a : Bump github/codeql-action from 3.24.3 to 3.24.4
7ff645780 : Automated Code Change
603a84415 : Disable tests on J2KT that fail due to a bug with missing RandomAccess
2da8a3b68 : Make `com.google.common.collect.testing` compile with J2kt
4148c5bc8 : Remove `@J2ktIncompatible` from `toImmutableEnumMap`
c200ab0e9 : Add some `@Nullable`s to collection tests (prep for running on J2KT)
9700e787b : Fix testing-j2objc when built with no-wrapper-methods
dd582b3a2 : Bump Truth to 1.4.1.
1d96bb9ae : Remove unnecessary suppressions.
59c063bef : Address a couple Error Prone warnings.
5fb34c074 : Add `@NullMarked` to collection tests
fd1e344e3 : Add `@J2ktIncompatible` to many `@GwtIncompatible` tests
ecdd5c403 : Remove `@Nullable` from `AbstractImmutableSetTest#copyOf(E[])`
301d6e322 : IteratorsTest: Replace `Integer` varargs with `int` varargs
54e1c7572 : Remove `@J2ktIncompatible` from `Maps#immutableEnumMap`
f50be6525 : Add `@Nullable` to `CollectSpliterators#flatMap(toPrimitive)` function parameter’s return type
ff9477ac6 : Bump github/codeql-action from 3.24.1 to 3.24.3
1bb5464a7 : Standardize on `Java N+` in documentation.
09e655f6c : Change the return types of `transitiveClosure()` and `reachableNodes()` to `Immutable*` types.
17c032873 : Internal change.
4caddb98e : Roll back CRC32c optimization while we address a versioning problem with a prebuilt build tool.
0bf03c490 : Bump github/codeql-action from 3.24.0 to 3.24.1
98c8b997a : Enable c/g/c/u/concurrent tests for j2kt-native
afb35a5d1 : Optimize checksum-based hash functions on non-array ByteBuffers by at least an order of magnitude, at least for Java 9+.
345cd115a : Bump gradle/wrapper-validation-action from 2.1.0 to 2.1.1
4937bf820 : Enable more `com.google.common.base` unit tests for J2kt native
5ddc67be6 : Remove two more obsolete `@J2ktIncompatible` from `primitives` tests
fc6bfb51e : Update Public Suffix data.
4ef54a7a7 : Add some missing nullness annotations on `handleInvocation` overrides.
86c1c5173 : Bump gradle/wrapper-validation-action from 2.0.1 to 2.1.0
5c950ae13 : Mark `Sets.complementOf` `@J2ktIncompatible`
f5ee97211 : Fix typo in Splitter.limit Javadoc.
0a1bd1724 : Enable j2kt tests for escapers; no "actual" code changes.
0e1aebf73 : Roll back [the change that made `LocalCache` avoid `synchronized`](https://github.com/google/guava/commit/1512730820a99e329a475b7143b7295775ef26ba).
476faf9e2 : Bump gradle/wrapper-validation-action from 2.0.0 to 2.0.1
7f4bbf75d : Enable additional `com.google.common.math` tests on J2kt
1bd273ff6 : Migrate usages of `Truth8.assertThat` to equivalent usages of `Truth.assertThat`.
f12f918d2 : Bump actions/upload-artifact from 4.3.0 to 4.3.1
06ba46097 : Remove `@J2ktIncompatible` from passing `primitives` tests
f346bbb6a : Expose `FakeTicker` `Duration` methods to Android users.
750ba3eb4 : Suppress bogus nullness errors until we're able to fix our checker.
3688a537d : Improve performance of InternetDomainName::ancestor.
cc1b9d2be : Bump github/codeql-action from 3.23.2 to 3.24.0
6e2d0d829 : Add an overload of `spliterator()` to the Android flavor of one of our classes.
7e0e6edb1 : Bump Truth to 1.4.0.
ee17e1f12 : Bump gradle/wrapper-validation-action from 1.1.0 to 2.0.0
d3232b71c : Move tsan suppressions to annotations
a9d243cef : Automated Code Change
da9bc28e5 : Add a tiny bit more `@SafeVarargs`.
e5d98af4c : Internal change.
9aa7ee694 : Bump github/codeql-action from 3.23.1 to 3.23.2
35feec942 : Bump styfle/cancel-workflow-action from 0.12.0 to 0.12.1
518fd1d4f : Bump actions/upload-artifact from 4.2.0 to 4.3.0
fccfe9685 : Make J2CL super-source'd classes pass nullness checking.
4f394e66d : Nullmark J2CL super-source'd classes of Guava collection.
813797368 : Bump a few deps.
36980a944 : Add a missing nullability annotation for the j2kt super source. Also renames folders to follow convention.
8dca77634 : change behavior of views returned by graph accessor methods that take a graph element as input: they now throw IllegalStateException when that element is removed from the graph
52654b7c9 : Relax tests slightly to allow kotlin native behaviour that seems to be still conforming with the spec.
8dc5e98e3 : Internal Change.
9541c7662 : Bump actions/upload-artifact from 4.1.0 to 4.2.0
6be1208af : Remove stale suppressions.
c537e5aad : Bump github/codeql-action from 3.23.0 to 3.23.1
8481d3329 : Document the JDK 9+ alternative to `ByteStreams#toByteArray`
a5200a397 : Remove obsolete TODO comments
bc85d0727 : Bump actions/upload-artifact from 4.0.0 to 4.1.0
823412c09 : Add a missing nullability annotation for the j2kt super source. Also remove an unused method.
0824e6e2e : Remove `final` so that J2KT can generate overrides in subclasses.
eccc0f6e9 : Prepare for forthcoming `@Nullable` annotations on `Optional.orElseGet`.
cabc1833e : Roll forward `Suppliers` change.
2fe7b166d : Internal change.
4eaa5a5c1 : Roll back `Suppliers` change.
76e46ec35 : Add `Duration` overload for `Suppliers.memoizeWithExpiration`.
ca0ad2ab2 : Bump copyright year.
757a5d5f8 : Bump github/codeql-action from 3.22.12 to 3.23.0
1de727004 : Fix some typos in comments
f347fb7a2 : Document that we now run our Android tests under KitKat.
8a8efc75e : Use fake filesystems to make Guava tests work better [under Windows](https://github.com/google/guava/issues/2130) and Android, and prepare jimfs for some preliminary Android testing.
2d30e0d6d : Update Public Suffix data.
25ff2376b : Bump github/codeql-action from 3.22.11 to 3.22.12
9a32e48c5 : Bump Truth to 1.2.0.
c7dcd6e71 : Add a missing nullness annotation.
5eda11b98 : fix a typo in a class name.
5c4f5b205 : Make a few small optimizations to JRE `ImmutableSet` construction for this "new" world in which we never reuse the array passed to `construct`.

+- Project: platform/external/harfbuzz_ng

9ef44a2d6 : 10.1.0
a9b76edca : [ci] Pin Python version to 3.12 on macOS
c85a6c2a2 : [cairo] Respect HB_NO_VAR
5e32b5ca8 : Bump actions/setup-python from 5.2.0 to 5.3.0
4148c8d4e : Bump actions/checkout from 4.2.1 to 4.2.2
f8e0ba5ef : Bump github/codeql-action from 3.26.12 to 3.27.0
9974a6616 : [icu] Make it build with ICU 76
392581be0 : [subest] get benchmark subset working again.
e5139c51a : bug fix in hashmap get_with_hash()
825bc1964 : [perf] Simplify meson.build
de1a1e27d : [coretext-font] Implement get_glyph_v_origin()
786097029 : [coretext-font] Implement get_glyph_v_advances
e1026a225 : [coretext-font] Implement get_variation_glyph()
d44cc8a1f : [coretext-font] Implement get_glyph_name()
0ce67f56d : [coretext-font] Implement font_get_h_metrics
e31ea830c : [ft] Try using a ref-counted ft_library
52becf1c6 : [test] Fix a leak
a8360b7e9 : [perf] Respect new envvar HB_FACE_LOADER
c224178a0 : [perf] Add hb-benchmark.hh
2dc633413 : [tests] Remove invalid tests from collections.tests
734ba5ab4 : [hb-info] Fix font face number recording for .dfont
67591f851 : [util] Add --face-loader
75d168cbf : [util] Rename a variable
aa933abb7 : [util] Use hb_face_create_from_file_or_fail()
12fc715dd : [ft] Add hb_ft_face_create_from_file_or_fail()
89c83b5b0 : [coretext] Add hb_coretext_face_create_from_file_or_fail()
5f8b77d19 : Bump github/codeql-action from 3.26.11 to 3.26.12
bb5a8284e : Bump actions/upload-artifact from 4.4.0 to 4.4.3
ea3b6c60b : Bump actions/checkout from 4.2.0 to 4.2.1
b12acba49 : [face] Add hb_face_create_from_file_or_fail()
2437fd883 : [face] Add hb_face_create_or_fail()
2166a46ad : [coretext] Don't set CoreText funcs on new CoreText fonts
62ae9fbd6 : [coretext-font] Implement get_glyph_from_name
b5e9f2cb2 : [coretext-font] Implement get_glyph_extents
8db2997e4 : [coretext] Configure hb_coretext_font_create() with CT font funcs
8a805271a : [coretext] Start implementing CoreText font-funcs
064b24177 : [coretext] Rename hb-coretext.cc to hb-coretext-shape.cc
e1269215f : Revert "Fix a compiler warning"
755929c48 : Fix more compiler warnings
377e3c67a : Fix a compiler warning
ab3608992 : [CFF] Increase max op num limit
1a4bdd699 : [font] Change fallback y_advance sign
9c00255b4 : [ci] Fix Codecov upload
6a25df24b : [COLR] Add and use get_clip_list ()
5462978c9 : [COLR] Lets see if this makes CIFUZZ any happier
008505e1c : [COLR] Pepper some hb_barrier()'s around
cec95a2d2 : Try to fix heap-buffer-overflow
4d1f6e049 : [COLR] Enable COLRv0 support in get_extents()
4587e08a4 : [VarStoreInstancer] Fix null deref
e8de8d88d : [CONFIG] Remove unused HB_NDEBUG
50d67b202 : Bump codecov/codecov-action from 4.5.0 to 4.6.0
d9c029755 : Bump github/codeql-action from 3.26.9 to 3.26.11
e15720549 : unused-parameter in test/fuzzing/hb-draw-fuzzer.cc
8de0d9116 : missing-field-initializers in test/api/test-ot-face.c
b6196986d : [USE] Update the data files
31b22016a : [ot-tags] Update IANA and OT language registries
5772f4ffc : missing-field-initializers in main.cc
18f1d9121 : missing-field-initializers in hb-draw.h
c1c0e82e3 : Revert "Bump setuptools from 73.0.1 to 75.1.0 in /.ci"
4aad43c82 : Bump github/codeql-action from 3.26.8 to 3.26.9
a89144544 : Bump actions/checkout from 4.1.7 to 4.2.0
a87fa89b4 : Bump setuptools from 73.0.1 to 75.1.0 in /.ci
fa79b51d1 : Bump fonttools from 4.53.1 to 4.54.1 in /.ci
c7ef6a2ed : Remove the hack re variation-selectors
a1d9bfe62 : 10.0.1
527e60b01 : [morx] Relax sanitizing
867366ccf : [test] Add GeezaPro tests for MacOS 15.0
70ca19dff : Use hb_barrier() instead of longer name
d5261f723 : 10.0.0
667ce682a : [hb-view] Support cairo script as output format
7a390b509 : [hb-view] Simplify background drawing
700ef11c9 : [meson] Add back with_libstdcxx build option
242b58440 : [USE] Update the data files
7a890c2eb : Add hb_barrier() to switches of unions
7aace3d3f : Ignore CGJ and Mongolian Variation Selectors during GPOS
540666b80 : [paint] Err. Make previous commit actually work
ed6362600 : [paint] Intersect clips
b0cf3d81e : [paint] Comment
b95cb48f8 : Revert "Bump setuptools from 73.0.1 to 75.1.0 in /.ci"
5e5cd10e1 : Don't make variation-selectors default-ignorable if not-found set
b94a39d7f : Follow up to variation-selector-not-found glyph
25591a3a1 : Bump setuptools from 73.0.1 to 75.1.0 in /.ci
29157e6a6 : Bump meson from 1.5.1 to 1.5.2 in /.ci
5807388b8 : Bump github/codeql-action from 3.26.7 to 3.26.8
095892950 : [post] Add a hb_barrier()
487740521 : [GDEF] Sprinkle some hb_barrier()s
e4e9f6a18 : [gsubgpos] Add a barrier
0dace9f34 : [PairPos] Forgo an optimization for the sake of compatibility
287046f71 : [buffer] Hook up not-found-variation-selector-glyph
a003890e8 : [buffer] Add hb_buffer_[sg]et_not_found_variation_selector_glyph()
6fd76e1f6 : [subset] offset format fix in gvar table
63d09dbef : Bump github/codeql-action from 3.26.6 to 3.26.7
e30dedbb4 : [Unicode 16] Add tests
c66f25ef1 : [Unicode 16] Send the new scripts to USE
85a9ec897 : [Unicode 16] Update the USE table
87c685d37 : [Unicode 16] Update the vowel constraint table
7f325b6c8 : [Unicode 16] Update the Indic table
554658e3a : [Unicode 16] Update the emoji table & cluster test
dc3e005d1 : [Unicode 16] Update the Arabic joining script list
351e20c21 : [Unicode 16] Update the Arabic table
42369b843 : [Unicode 16] Update the UCD table
f279e2581 : [Unicode 16] Add new `hb_script_t` values
a5c9cc4e2 : [USE, Unicode 16] Update the data files
98353ecef : [test] Run shape tests with C locale as well
70334d74d : Run subset tests with C-locale
0a82f43a6 : [arabic] Remove non-sensical code
a070f9ebb : Bump github/codeql-action from 3.26.5 to 3.26.6
cb6ba6871 : Bump actions/setup-python from 5.1.1 to 5.2.0
a75d48753 : Bump actions/upload-artifact from 4.3.6 to 4.4.0
a141e25c4 : [subset] remove unnecessary check on name IDs
2ba0b9ee9 : Turn some byte data into unsigned char, from char
28cc53c9e : Smalll fix on documentation
2ae90ed32 : Bump github/codeql-action from 3.26.0 to 3.26.5
eaa97d650 : Bump setuptools from 72.1.0 to 73.0.1 in /.ci
ea430c10e : [doc] Quote the table name
868a75b61 : [style][doc] Mention explicitly it goes through STAT table
16c196e0c : Fix warnings with -fstrict-flex-arrays=2
634778efc : [subset] bug fix in post table
cdbd966e9 : [buffer-verify] Fix a compiler warning
a411de2b3 : [cff] Try to silence static code analyzer
39ea4cdd7 : [hb-subset] Fix a resource leak
e25fa0bff : Bump github/codeql-action from 3.25.15 to 3.26.0
f25d952b8 : Bump actions/upload-artifact from 4.3.5 to 4.3.6
72502ef02 : [instancer] dont return false when variation data is empty after partial instancing
c1b9f846f : Bump hendrikmuhs/ccache-action from 1.2.13 to 1.2.14
3894b1c96 : [face] Update docs for get_table_tags
d5596dfb0 : [hb-subset] Report "Invalid font file."
59a97ac02 : [test] More get_table_tags test
a55b00714 : [test] Add get_table_tags test for hb-coretext
98355724a : [hb-coretext] Implement get_table_tags func
bd79bfb65 : [test] Add get_table_tags test for hb-ft
a459753ef : [test] Test get_table_tags of face_builder
8896b1d57 : [test] Add test for get_table_tags
830326fe1 : [hb-ft] Implement get_table_tags func
76770eb00 : [face-builder] Implement get_table_tags func
ff04f28b2 : [face] Add get_table_tags callback
84f165646 : [test] Move code around
22e1a5a78 : [test-paint] Fix warnings
191c6eedf : Fix Linux bot
e0c3cbf16 : [subset] Fail if source face has no glyphs
31f1d25e2 : Bump actions/upload-artifact from 4.3.4 to 4.3.5
d6088fb46 : Bump setuptools from 72.0.0 to 72.1.0 in /.ci
2edc371e9 : Bump github/codeql-action from 3.25.13 to 3.25.15
c86909393 : Bump ossf/scorecard-action from 2.3.3 to 2.4.0
1304587ba : Bump meson from 1.4.1 to 1.5.1 in /.ci
ebcf5514e : Bump setuptools from 71.1.0 to 72.0.0 in /.ci
788b469ad : [ChainContext] Fix fast-path deviation from slow path
fe7dc0c3c : Bump github/codeql-action from 3.25.12 to 3.25.13
354691b7a : Bump setuptools from 70.3.0 to 71.1.0 in /.ci
1c01944e9 : Added forward declaration to fix build with Visual Studio 2017.
5c7eb8540 : meson: Fix builds against ICU >= 75.x on Visual Studio
8aa7db54f : Bump github/codeql-action from 3.25.11 to 3.25.12
0ef0a0e53 : Bump actions/setup-python from 5.1.0 to 5.1.1
acf2febb7 : Bump setuptools from 70.2.0 to 70.3.0 in /.ci
0706f3989 : Update wasm-shaper.md with link
7c41d91e7 : Update of https://behdad.org/text2024
4eb89942b : Bump actions/upload-artifact from 4.3.3 to 4.3.4
48193e028 : Bump setuptools from 70.1.1 to 70.2.0 in /.ci
7ab9733f8 : Bump fonttools from 4.53.0 to 4.53.1 in /.ci
677d6646a : [subset] Make sure the clamp is done in a int64_t space
495937f96 : [subset] Use hb_clamp instead of consequent hb_min and hb_max calls
e079dd203 : [instancer] remove the warning for CFF partial instancing
b8087dbdd : Update README.md
492848947 : Bump setuptools from 70.1.0 to 70.1.1 in /.ci
c481ea51e : Bump github/codeql-action from 3.25.10 to 3.25.11
67c301fdc : [cmap] Fix macroman lookup
7b14feb42 : Drop the README symlink
e1df06748 : [docs] Typo
1a06d3f51 : [ci] Fix tarball path
9c03576c4 : 9.0.0
b461c4224 : Fold the remaining Makefile.sources into CMakeLists.txt
a38f853e8 : Drop unused Makefile.sources files
9af6902c9 : Drop more remnants of autotools build
b9d243ef4 : Try to fix macos-aat-fonts job
e2cd1be6e : Try to fix dist job
fa82ecd2c : Fix CMake build
cf1fdf163 : Drop autotools build
93930fb1c : fix build with HB_TINY
dce8e4579 : typo `acsii` -> `ascii` in `hb-subset-input.cc`
59617de1b : [BUILD] Update Arch Linux instructions
76d9905c0 : Bump setuptools from 70.0.0 to 70.1.0 in /.ci
49c8493f5 : [test] Build with HB_MINI
106e4068b : HB_SCRIPT_CANADIAN_ABORIGINAL removed, as deprecated
8de7e9fdb : deprecation cleanup: HB_BUFFER_FLAGS_DEFAULT removed
7946a2840 : Move constant for max composite ops per glyph to hb-limits.hh
6289e475d : In _fill_unicode_and_glyph_map add a second unicode -> gid lookup which is general.
d7b3ea644 : Bump actions/checkout from 4.1.6 to 4.1.7
181f6e46c : Bump github/codeql-action from 3.25.8 to 3.25.10
4a352b3a4 : Bump codecov/codecov-action from 4.4.1 to 4.5.0
2266d2582 : Try fix fuzzer build on 32bit
cef1eafb7 : Use find_package for ICU
a109d5fbc : [BUILD] Actually build project!
a1c803dfb : [limits] Increase number of glyf points
f9b7ca8b1 : Bump github/codeql-action from 3.25.6 to 3.25.8
eba1add71 : [hb-info] Use 128 as max glyphname / name length instead of 64
de2a2f27f : Another try at fixing 32bit fuzzer build
7be12b33e : [subset] refactor populate_unicodes_to_retain.
0c2f5ecd5 : [normalizer] Add c.override_decompose_and_compose
8a9bc5230 : [normalizer] Move a couple functions around
bda5f647c : [normalizer] Allow c->plan to be nullptr
3e06b7054 : [ot-map] Make shaper categorizer independent of shape planner
4ec3cb0fc : [Glyph] Don't round to int when shifting glyphs
2db636c65 : [VARC] Try fixing build failure on i386
1e2bd4983 : Another include fix
e9870e8d5 : Add missing include
c1ca46e4e : Bump fonttools from 4.52.1 to 4.53.0 in /.ci
09947ae1f : Bump meson from 1.4.0 to 1.4.1 in /.ci
42bf7ce7c : Try to fix warning on 32bit system
9456f6bdf : [test] Fix a few compiler warnings
c2b5b7b9c : [ot-tags] Update IANA and OT language registries
86942e9af : [ot-tags] Let Võro fall back to Estonian
88868411b : [ot-tags] Remove obsolete overrides
135d6537d : test-draw-varc.c: Fix “warning: unused function”
3fa47cea2 : [subset] Add HB_SUBSET_FLAGS_NAME_LEGACY to keep_everything()
e8049ae9a : [VARC] Sanitize ConditionList
88e9cd3fd : [VARC] Check for an OOM
9f8f81403 : [main.cc] Add note
ec437ccd7 : [VARC] Adapt to change of meaning of RESET_UNSPECIFIED_AXES
93d58f831 : meson: set -std=c++17 when building with icu >= 75
09a17a086 : Bump github/codeql-action from 3.25.5 to 3.25.6
1407b6b3d : Bump codecov/codecov-action from 4.4.0 to 4.4.1
200b20aa8 : Bump setuptools from 69.5.1 to 70.0.0 in /.ci
73ae5f709 : Bump fonttools from 4.51.0 to 4.52.1 in /.ci
ee0c7d6bc : [geometry] Use && instead of "and"
484cb2608 : [CFF] Handle error in case of Null used on Unsized type :(
361d30e28 : [CFF] Ignore unknown operators
fecc5789d : [var] Minor, make a function a template
bc90b29b3 : Bump github/codeql-action from 3.25.4 to 3.25.5
71ff393dc : Bump codecov/codecov-action from 4.3.1 to 4.4.0
95526ef96 : Bump actions/checkout from 4.1.5 to 4.1.6
1da053e87 : [aat] Remove unused template parameter
204778e83 : [aat] Use buffer-digest for non-state-machine kerning as well
fbcfc1984 : [aat] Change buffer-digest threshold to 32
f536a416f : [aat] For short words, use buffer digest to skip kerx machine subtables
3ff9ebc86 : [aat] For short words, use buffer digest to skip morx subtables
687c218fc : Bump codecov/codecov-action from 4.3.0 to 4.3.1
c9d6bbcf4 : [aat] Minor don't copy variable
136097901 : [VarStoreInstancer] Add cache argument
c270a254d : [COLR] Remove redundant variable
b32e0a70c : Comment
fff48b457 : Remove unnecessary comment
cd1d8b8bf : [varc] Use multiVarStore instead of GDEF varStore
ac411f26b : [Condition] Finish evaluation of ConditionValue
66cd7c04e : [Condition] Shuffle code around
d2ca8a593 : [Condition] Implement ConditionValue
6129c7261 : [varc] Use Condition instead of ConditionSet
3fda07b48 : Replace an abort with assert(false)
2e5336348 : [TupleValues] Add a likely
e8139bea2 : Revert "[varc] Reuse x_deltas and y_deltas vectors"
f97d1ea23 : [varc] Reuse x_deltas and y_deltas vectors
fa6f123ee : [var] Add a fast path for coord == 0
7b1b20fa0 : [varc] Move code around
6af0c5199 : [varc] Remove unused method
db06c673f : [VARC] Tweak cache use
a7fd55569 : [varc] Use a varStore cache
9489c8565 : [varc] Simplify iterator
2d01e1a97 : [varc] Shed another vector
5ed773506 : [varc] Some error handling
8961b1c51 : [varc] Add __forward__ to the iterator
731f78151 : [varc] Improve iterator
84a755bfd : [TupleValues] Minor add an enum
eaa1fb141 : [varc] Use an iterator to unpack TupleValues
3b86ec0af : [varc] Optimize use of coord_setter
df330e7ab : [varc] Optimize
49c5ed38e : [varc] Remove one vector allocation
761468c66 : [varc] Reuse a variable
70990ac7c : [varc] Micro-optimize "* scalar"
9c7d871b0 : [varc] Fix return type
c13fc5579 : [varc] Micro-optimize
88eab447d : [varc] Remove unnecessary check
e451b6cbf : [varc] Minor style
2c87c319e : [varc] Fix compiler warning
a00cf8c51 : [varc] Remove also from face tables when disabled
69e615f17 : [varc] Fix guard
421a134b6 : [varc] Micro-optimize record
53c019a87 : [varc] Speed up hidden components
4e0845ab7 : [varc] Micro-optimize
d07d70aef : [varc] Add test
1121d80b3 : [varc] Add a conditional test font
7c8743546 : [varc] Implement conditionSets
6608b4578 : [varc] Read & discard reserved records
946a461f0 : [varc] Whitespace
a3211515a : [varc] Add another hb_barrier()
7e4adde0b : [varc] Move includes around
cf3ce69f0 : [TupleValues] Add a pre-alloc
3ffd92f09 : [varc] Add a couple of seeds for the fuzzer
f1f5c7dcf : [varc] Micro-optimize non-variation case
f403215a6 : [varc] Simplify scaling
85237065c : [varc] Fix get_upem()
599d08a57 : [varc] Implement edge-count limiting
3d846a8d0 : [limits] Centralize graph edge limits
1339a6850 : [varc] Flip depth accounting
2b94779d2 : [varc] Implement max depth
aed01d016 : [varc] Implement cycle-detection
d5ab62a19 : [varc] Rename macro
3901a87ae : [VARC] Undefine macros after use
805272d87 : [VARC] Minor simplify
bf27f4a3b : [varc] Fix config
ed57ab906 : [VARC] Comment
57a18ac76 : [varc] Error check
91a06cef7 : [VARC] Cleanup
bb3bfe8cf : [glyf] Remove old glyf1 VarComposites support
72c9deb5f : [varc] Fixups
12ad2ff62 : [varc] Fix thinko
11388c162 : Fix build
00d56b12a : [varc] Apply VarComponent transform
fb333ce42 : [varc] Move some code to VARC.cc
924432816 : [varc] Apply variations to VarComponent transform components
825ed6a6f : [varc] Set coordinates on recursive components
745ff05a0 : [varc] Add coord-setter
320dcedec : [varc] Reading VarComponent transform components
edd1a4443 : [varc] Flesh out VarComponent a bit
f77aa8be8 : [varc] Add TupleList
4919f3648 : [geometry] Flesh out transform & transform_decomposed
d32c5164c : [varc] Add hb_transform_decomposed_t
aeb564381 : [varc] Start decoding VarComponent
3b8e7d3b6 : [HBUINT32VAR] Change return type
7a766b33d : [varc] Use enum class
ef7c0a9b8 : [varc] Add VarComponent::flags_t
c6ae8d586 : Add hb-geometry.hh
70665adc2 : [varc] Add guards
7ef7ded29 : [varc] Add VarCompositeGlyph
c819a0b49 : [varc] Add VarComponent
7cf84272f : [varc] Fix sanitize for HBUINT32VAR
a94a5c636 : [varc] Add get_point_at to glyf/CFF2
0d6f77e62 : [varc] Add table
0f2fe7550 : [varc] Add MultiItemVariationStore
2ddd1a279 : [varc] Add MultiVarData
a9776a3d2 : [TupleValues] Rename delta to value
763f83906 : [TupleValues] Move compile/decompile here
b24511d2b : [TupleVariations] Rename encode/unpack to compile/decompile
8ccebae1f : [TupleVariations] Simplify encode API
3d0c03aec : [TupleVariations] Take array instead of vector in encode
0ee6164a4 : [varc] Simplify a bit
7b57a8a40 : [varc] Implement SparseVariationRegion
0dcdbb4f4 : [VARC] Add consume_all to unpack_deltas()
ae35e30bd : [VARC] Add SparseVarRegionAxis
ef04b5c25 : Try fixing MSVC builds
fa9dc530f : Move CFF Index types to open-type
cf3e88e17 : [CFF] Remove duplicate typedef
bd795373f : [TupleValues] Encode 32bit values
0c6054539 : [TupleValues] Decode 32bit values
e1d0ee3bd : [var] Mark a few methods as static
e339fed72 : [open-type] Add HBUINT32VAR
1ad48fddd : Meson: Use actual FreeType version when using CMake

+- Project: platform/external/icing

8234d15 : Update Icing from upstream.
e619cca : Refresh the mmap related objects/addresses after remap
e47640e : Update Icing from upstream.
ea02d8a : Revert "ICU with Reverse JNI Tokenization." from upstream.
e60c93c : Update Icing from upstream.
a9ba50e : Add range check when accessing document_id_old_to_new
877dbca : Update Icing from upstream.
72c615f : Update Icing from upstream.
c49b061 : Update Icing from upstream.
ad910e6 : Update Icing from upstream.
956623c : Update Icing from upstream.
3cda345 : Update Icing from upstream.
fdad642 : Clean up projects using protobuf plugin
c330af3 : Update Icing from upstream.
8c2b9d6 : Update Icing from upstream.
cf26d7c : Update Icing from upstream.

+- Project: platform/external/icu

5abf120c6 : Add new lines for nicer formatting
0fd93e87d : Write atrace event in ZygoteHooks
bdbc8c3f2 : replace aconfig_storage_reader_java to aconfig_storage_stub
69afbbf83 : Remove the abandoned icu74 flag
ad5311114 : Android patch: Expose Unicode 16.0 constants
e29877ee3 : Expose stable APIs from ICU 76
0b1fdb329 : Remove the usage of the icu_v_api flag
320582c10 : Expose new API constants from ICU 75
b006ff0ef : Move to flagged-api.json
8a73fd0d0 : Add @FlaggedApi annotation support
cff6a1f54 : Revert "Load Unicode extension subtag key map in Zygote"
1760e4bc6 : Android patch: Reintroduced changes in ULocaleCollationTest.TestNameList()
89f9333b1 : Do not add libraries to dist for layoutlib
13b4fd61d : [Ravenwood] Move Ravenwood processing out of individual repos
b4592409d : Load Unicode extension subtag key map in Zygote

+- Project: platform/external/igt-gpu-tools

bb30fea80 : Extend IGT visibility to DTS tests
80b33f5f1 : igt: move gtests to vendor folder

+- Project: platform/external/intel-media-driver

c1223c3cd : Cherry pick "Add Android.bp for v24.3.4"
87dde8f38 : Cherry pick "Fixed compilation issues in Android"
010a609e2 : Remove C/C++ code downstream changes
081fc57f7 : [Media Common] [VP][MOS] fix double free issue
a3b37815c : [Encode] update Xe2 encode caps
360640797 : [Encode] update encode readme file
07689430e : [VP] fix_free_kernel_build
c7f94de7f : [Encode] Move AV1 files for LNL
e2ff46f63 : [Media Common][VP] add New Modifier support in Media Libva
c3863270a : [VP] Avoid duplicate pointer in packet pipe recycle pool
5ee6c4151 : [VP] Fix Coverity issue about 444 read kernel integrate code
4358b7185 : [Encode] move avc/hevc xe2_lpm file
b19e20e9c : [Media Common] [VP][MCPY]Remove AUX MMIO and refactor CreateGpuConent for VE copy from Xe2.
cef9a33db : [Media Common][VP] APO Mos switch: surface dump switch
bb88de942 : [Encode] update aqm common feature
b5958fdc7 : [Media Common] [VP] Refine and add etw trace for Media Copy
34fc7a2e1 : [Media Common] [VP] Gpu context trace level
adbd371d4 : [Decode] Update decode features
e50188500 : [Encode] update parameters and comments
ea588fcaf : [Media Common] [VP] Add BMG VP features
f682ac612 : Revert "[Media Common] [VP] Add etw trace for media copy info"
1ef53da32 : [Decode] VVC incomplete status report handling
f31a2124b : [Media Common] [VP] Add etw trace for media copy info
d4e97425e : [VP] VP FC 444 3 plane read kernel integrate
c8c3b9678 : [Media Common][VP] APO Mos: Create/free resource with GraphicsResourceNext
b5a60cb74 : [Media Common] [VP] Add DG2 new device ID 0x56AF
d9c4512c2 : [Media Common][VP] APO Mos: switch layer, create osdevicecontext and get device info
cb850188b : [Encode] Fix AV1 Vdenc TU7 GPU Hang Issue
1336d08d9 : [CP] Fix HEVC long scalability hang issue
57d410178 : [VP] add WA ID for linear output issue
e7e989d61 : [Media Common][VP] report user setting
9306d43ea : [Encode] update avc img state program
3edea6f29 : [VP] Enable 3DLut kernel case w/o vebox workload
628bf904c : [Media Common] Bump up version to 24.3.4
a14824a3a : [VP] Fix L0 FC FastExpress Aligned Coordinate Wrong
6875da2f6 : [Media Common] [VP][MCPY]separate BLT Mocs and BCS SWCTRL per HAL
722793d2e : [VP] Fix gstreamer ximagesrc segfault
347eff0f8 : [VP] Add Switching Meachnism for L0FC
ac5f0ff9e : [VP] add YUY2 width unalign plane definition for 2 plane l0 corruption
88e60b40f : [VP] Remove duplicated getInfo: fixing S3D failure
c1b9da6bb : [Encode] Refinement for QualityInfo block level DDI
3a161d062 : [Decode] Disable Xe2_Hpm VD-SFC
98760145a : [VP] fix sfc2pass tdr issue
f6857864d : [Encode] Modify cmdpar/hwcmd/impl/headerfiles
8445b5e6d : [VP] enable SFC Linear Output By TileConvert for potential HW issue
d953a3fe5 : [VP] Add RGB24/RGB565 implementation in FC
e99b7875e : [VP] Fix L0 FC FastExpress Half Corrupt
574dedb5f : [VP] Fix L0 FC Rotation
91d12c477 : [Media Common] Bump up version to 24.3.3
7a5cb428a : [Media Common] Fix Debug Override mode issues
4461a2703 : [VP] Add globalInst null check
f807bf110 : [VP] Recheckin Open Source L0 FC
5ce0d2c89 : [Media Common] [VP] BMG media interface upstream
486b94919 : [Encode] Add LUT for ComputeRdMult
2585059e9 : [Encode] Modify vdaqm
c7df900d4 : [Encode] Add linux caps for vp9 seg_id_block_size
998651ca8 : [Media Common] [VP] BMG DDI upstream
026370fab : [Encode] Reformat MV dump for Hevc Encode pipeline
4f6a855f4 : [CP] Fix some potential Coverity issues2
4e87d0e76 : [Encode] update xe2_hpm vdenc interface
f852e1dc2 : [Decode] BMG CodecHal Layer Open Source
d9e615e00 : [Media Common] [VP] Fix open source kernel build error
0f3d45370 : [Encode] update xe2_hpm huc hwcmd & impl
ce6e9ac91 : [CP] Only Local to Staging Copy Use CPSmallLocalBar MemCopyCallbackImpl
1e2abe29a : [Decode] Fix AV1 decode media reset issue
70128be83 : [Decode] BMG MHW Layer Open Source
cd88a5590 : [VP] Add Perf Tag for L0 FC
5a0440288 : [Encode] QualityInfoOutput block level DDI enable for AV1
1a3ae6226 : [Encode] update avc/hevc featureIDs
c12125271 : [Encode] Modify features sensitive name
43d94f2d0 : [Media Common] [VP] BMG VP/common HAL upstream
82d252480 : [Media Common][VP] gpucontexthandle use map not vector
bf4ff61bf : [Media Common] [VP] BMG VP/common MHW upstream
9a8624119 : [Encode] Modify AV1 RhoDomain
5641059be : [VP] Integrate L0 FC FastPath
2dce560ba : [Media Common] [VP][MCPY]fix cons_vpp_10b_422_y210_composition Y210 output corruption due to VPL used same MCPY deive for different size surf copy
07788117e : [Media Common] Enable VEcopy Null HW for all components sw latency test
f40b45dd7 : [Encode] update jpeg base class
e56e0fa10 : [Media Common] [VP] Fix query HW config error for XE KMD
dbf60cc15 : [Encode] AVC sliding window MaxRateRatio modification
c7c27bf41 : [Encode] Add support for VP9 seg id block size
3f46155e4 : [VP] refine code for DdiMedia_CreateImage and DdiMedia_PutImage
a599dc7f5 : [Encode] memset EncodedBitstreamWrittenBytesCount metadata for av1
019772d5f : [VP] gstreamer plugin vapostproc produces tiled output
9d59b60ef : [Media Common] Bump up version to 24.3.2
d3f59ce37 : [Decode] Coding Style Refinement
128e17905 : [Media Common] [VP] Add LNL vp features
7e466aa1c : Reorganize KMD header files
68edce60a : [Media Common] [VP] [MCPY] Disable preemption for render copy
7300aa7d5 : [Encode] [HEVCe] add VAProfileHEVCMain422_10 profile check
8017d9c92 : [VP] Use forward_references as the reference for ADI
7c9645864 : [VP] fourcc format mismatch
8b90bafc7 : [Media Common] [VP] Fix gcc-10-mtl-off build error
ec37ce05c : [Decode] Update decode features
367d72922 : [Encode] fix 1 coverity issue
32f1e92a5 : [Decode] Fix of MHW VVCP patch list size calculation
4f57c0e40 : Add Lunar Lake platform support
72fe3189d : [CP] Fix some potential Coverity issues
fa13cd34b : [HEVCe] fix a build issue
bd5d3355f : [VP] fix sfc line buffer reallocation long latency issue
85f546d33 : [Media Common] [VP] Fix gcc-10-mtl-off build error
12a3eb9fa : [Encode] Add HuC load status check logic for VP9 encode on Gen12 platforms
8fbf3b8f5 : [Decode] Fix fast dump can not work issue
19a4399c6 : [Encode] enable adv streamout
61045832c : [Encode] update lnl vdenc interface
a51c7d26e : [Decode] LNL JPEG cmake issue fix
93c9bbde1 : [Media Common] Bump up version to 24.3.1
619c25a08 : [Decode] Enable pat index for prime buffer
6f2977f57 : [VP] Fix coverity issue for heap manager
76260ff3b : [Decode] Fix decode coverity issue
f0d7aeb9d : [Media Common] [VP] Fix linux long latency issue
7a27da957 : [VP] Add LNL upstream support for cm kernel
a5de39800 : [VP] fix_reuse_params
22fe6b5e9 : [Encode] comment fix
213acc4b4 : Add max resolution for MPEG2 on Gen12
c7e70b413 : [VP] Fix L0 FC Alpha and Blending
0232c1b21 : [Encode] Move common featurse to common path.
3919a5b64 : [VP] Enable bindless suface state mode
00848c9eb : [Media Common] [VP][MCPY] add support for small local bar L8 format.
c6cadb7ca : [VP] Separate FP16 and HDR Legacy Code and fix alpha perf issue
1b8f7b4c2 : [Media Common] [VP] Fix LNL build issue of missing files
5376e43d5 : [Encode] Potentially Fix Render PF issue on RPLP platform
47bae3c80 : [VP] Refactor for build issue
5b952fd8e : [Decode] Fix some coverity issues
d62cce3df : [VP] Fix hdr capture issue
30ba8600f : [VP] Move Reserved Bit to the End
35962456e : Revert "[Encode] update ddi layer"
4f7b96a76 : [VP] Update FPS tracking logic
76762dd2e : [Encode] update ddi layer
36067cd02 : [VP] Clear Bayer input in use CSC params
32a5aa652 : [VP] Enable 64b Effinicient
34429a3fa : [Decode] extend budmgr api in create prime bo
8b9ce3291 : [VP] RefactorHDR10 1k1dlut and DV 256 1dlut on base code
1ab896958 : [Encode] Vulkan AVCe IPB Enable
3fc582322 : [Encode] Fix AVC unaligned padding issue
b3e54f0d9 : [Media Common] [VP] LNL DDI upstream
99cd35de9 : [Media Common] [VP] LNL media interface
8c5ba9ccd : [Media Common] [VP] set nativePrgressFence
710fa9d87 : [Encode] Refactor AV1 Pipeline functions
33e1b82a9 : [Encode] Minor code refactor for fast dump
f796cb889 : [VP] Add negtive Dst surface left/top value handle for apo path
fe32ed711 : [Encode] Refactor pipeline ActivateVdencVideoPackets
ca338661b : [Encode] VP9 Unalignment Issue Fix.
45503b18a : [Encode] Refactor AV1 packet submit function
4b9ed1df9 : [VP] Integrate L0 FC
63da90a6d : [VP] Remove unnecessary descriptor setting
5c504fcc8 : [Media Common] Bump up version to 24.3.0
eefcca2a9 : [Encode] Fix LNL Linux IBC corruption issue
0410255f3 : [Media Common] [VP][MCPY]BLT copy linear to linear add support buffer format override scenario
6369aafa8 : [Decode] Add status report error+CRC for new codec
2e27b41e8 : [Media Common] [VP] LNL VP/common HAL upstream
c4550594b : [VP] Fix typo for potplayer page fault legacy path
e215d12d1 : [Media Common] [VP][MCPY]VPL usage copy completion wait move to VPL from UMD to reduce RT latency
65fcac075 : [Encode] Add random access B 16 support for ICQ mode
80f88cf8d : [Encode] AV1 enable sliding window for CBR
aedbec48e : [VP] pront not DW align
c15f6b7f5 : [VP] Fix PotPlayer PageFault issue.
a389a8360 : [Media Common] [VP] xe2 keyword update
48fdaefb5 : [Media Common] [VP] LNL VP/common/Decode MHW upstream
a455c1199 : [Encode] Rext444 unaligned surface p frame corruption issue
2a60d2d1d : [VP] CPU latency optimize
de1a43ed2 : [Media Common] [VP][MCPY] CP small local bar force VE copy path support
d1cf532e1 : [Encode] Fix AV1 Linux numtilegroup > 10 issue
64e0020e5 : [VP] fix legacy code xml dump error
2e190f42e : [VP] fix TGL hdr state size related PF
2c6fe5df1 : [Decode] Decode roubustness enhancement for invalid reference surface
632f240ad : [Media Common] [VP] Disable overloaded virtual being hidden warning
2b7105b06 : Fix double free crash when create mmd fail
bfa2f3f0b : [Encode] Reduce MHW CMD parser perf impact
e3b29f19e : [Encode] Add AVC BRC error handling
5610e1ac7 : [Decode] Disable VT scalability on ADL
4b9dd3996 : [Encode] Enable REXT on Linux
6941724a3 : [Media Common] [VP] Move MTL Media CCS registry to common file
12561f682 : Add registry to configure MTL Media CCS
e77144df9 : Fix GCC14 compilation issue
aa60920e1 : [Encode] fix HLK R2R PF issue
02d10ce23 : [VP] update vebox rgb format pertag
f14159d25 : [VP] VP TGNE DN Cmodel match bug fix
bb85fe57d : [VP] demosaic_scaling_sfc
ecfcdea1c : [VP] Remove fmd kernel
96b7c11bc : [Media Common] [VP] [MCPY] Clean up MTL+ VE copy code
f4d94724c : [VP] add indirect state dump
1f53df429 : [Encode] Add OF support on Asyn Device
91f8d1050 : Revert "[VP] add indirect state dump"
9298999f2 : [Media Common] Bump up version to 24.2.5
57e34c253 : [Media Common] [VP] Add pat_index debug message and correct error message for vm bind
1ef612cee : [VP] regkey clean
02094a53c : [Decode] Fix decode MAVD memory violation issue
a0391f7a0 : [Decode] Update readme for platform
1a49b4558 : [Media Common] Bump up version to 24.2.4
4bf73406e : [VP] VP DN TGNE cmodel bitmatch
0eb5d8ddb : [VP] fix channel swap issue
4495b6100 : [Media Common][VP][MCPY] BLT copy update ColorDepth calc to support more formats RGB16 and R16
aacebe055 : [VP] Allocate VP dumper temp surface in VIDEOMEMORY for protected surface
968bb0173 : [Encode] Av1e update tileNum info for brc
90a02f634 : [Encode] HuC Release for V20240509
bd956d93a : [VP] add indirect state dump
c389832c2 : [Media Common] [VP] [MCPY] Fix Y410 VE copy
b48ce0921 : [VP] recheckin 2Pass iiScaling
ae5516af7 : [VP] feature graph frame id
b861af7bd : [Encode] Refactor AV1 Packet PatchTileLevelCommands
610eed9ad : [Script] upgrade workflow node version
930661df8 : [VP] update getveboxoutput format
4dcc378b5 : [VP] SR 1440p 60fps fallback
c0b4a07b4 : [VP] enable SFC fp16 output grey fill
de94eafde : Revert "[VP] fix_iscaling_2pass"
6a99cb5d5 : [VP] fix_iscaling_2pass
38f4c618c : [Encode] Add IsCompressFlagNeeded check for scc
e6c7f8027 : [Media Common] [VP][MediaCopy]Enable R16 and RGB16 for Camera Pipe on BLT Copy
788f0e797 : [VP] Enable 3dLut kernel
c785edd08 : [VP] Update L0 ARGB Scaling Bit Match
7ec3a50e6 : [VP] Expand FC Kernel Size Limitation
a24ec4eb1 : [VP] Fix hdr stage condition typo
c4691e49e : [Media Common] [VP][MCPY] Clean up DG2 VE copy code
f2adc9978 : [VP] remove unused WA
a79e5622a : [Media Common] [VP] Enable MC and RC for BLT copy input surface on DG2 and MTL.
1bddae204 : [Encode] Consolidate AV1 Packet Funcs
3036f1305 : [VP] L0 ARGB Scaling
7c1c775a4 : [Media Common] [VP]HW WA for media compression Format
ee70b3b1e : [Media Common] [VP] Add ACM new DIDs
3e2801c29 : [Media Common] Bump up version to 24.2.3
6ceff0171 : [Media Common] [VP] Enable FP16 media copy
9ac6a9ce5 : [Encode] Optimize on AVC ROI mem configs
f3a2f355c : Revert "[Media Common] [VP] Enable FP16 and ARGB16 for media copy"
66239ba85 : [CP] CP media perf profile opensource.
103513da3 : Fix BGRx AVC/HEVC Enc for g12
6bea76a50 : [Media Common] [VP] Enable FP16 and ARGB16 for media copy
1785055f0 : [Decode] fix hevc sf slice dump
42b761bd9 : [VP] Refactor SR model configuration table
cb99fcd83 : [Decode] VP9 Prob buffer update base packet refine.
11bda2b2e : [VP] Refactor Vp Kernel Add Inline Data
347ade62a : [VP] Fix Coverity issue by legacy code change
db0e28979 : [Encode] Fix AV1 8k CQP multi-tile silicon/cmodel mismatch
fb0075759 : [Encode] Add look up table rounding for Video Conference Scenario
5a9189e9a : [Decode] Fix query blob issue in xe drm
c42c8049e : Revert "[Encode] Enable AVC Native ROI for BRC"
cb81ee143 : [Encode] fix a coverity issue unchecked return value
593d750b9 : [Encode] Consolidate AV1 Packet Func
6a478ee17 : [VP] update legacy render intermediate surface
9619a58e6 : [Encode] Enable AVC Native ROI for BRC
8e8e3c1f4 : [Media Common][VP]Interface change for sync
a9efec9b0 : [Media Common] Bump up version to 24.2.2
1f913d7f7 : [CP] Enable streamout data dump
8aa866dc6 : [Decode] Correct condition check when dump avc mv buffer
c5425fe64 : [Encode] Fix AV1 frame header size calculation issue for Ffmpeg HDR enabling
30d283ace : [Decode] Null ptr check in AV1 pipeline
68cd0b720 : [VP] Enable SFC fp16 output
829a706db : [Encode] Fix MHW CMD Parser json file dump issue for AVC
e889e87ba : [Encode] Separate mmc state for HEVC reference surfaces
059386476 : [Media Common] [VP] Unify SubmitCommandBuffer declaration to avoid build error
9b79eb657 : [VP] Refactor common code to swfilterHDR for 3DLUT Linux support
39ddc3bc4 : Revert "[VP] Enable SFC fp16 output"
5aec512c1 : [VP] Change Vebox output buffer compression setting
93e5d4ce0 : [Media Common][VP] Sync Clean
5e12517b0 : [Encode] HEVC BT2020 encode integration test
222a1b93b : [VP] Add regkey for TLB Prefetch force disable
47d641366 : [VP] Enable Vebox FP16 Input
b236f374c : [VP] Enable SFC fp16 output
7f872a7a7 : [VP] fix first frame long latency issue
bff22e2b0 : [Encode] update compare operation
969c0c46c : [VP] Fix 4K monitor failed to run videowall 4k on multi channel
4ca4d1326 : [Encode] add nullptr check for hwInterface
4e33abcf9 : [Media Common] [VP] MCPY linear surface dump rework and minor fixes
842f9cf24 : [VP] SR new model for perf tuning
d9e5d4887 : [VP] Enable Lace cache table
42324f2cc : [Media Common] Bump up version to 24.2.1
fb0d32296 : [Decode] Refine execution syncs log in xe drm
2d3ef82af : [Decode] Refactor all xe drm implement of query function
ca08ff11e : [Media Common] [VP][MediaCopy]Change cache configuration for render copy
9c062b9ae : [Decode] Fix memleak when enabling macro HAVE_VALGRIND
0f46bd679 : Enable Xe KMD by default
868056839 : [VP] Enable euThreadSchedulingMode for SR
45ac3405a : [Media Common] [VP] enable new platform compression
43461f4fe : [Media Common] Bump up version to 24.2.0
87c1e81d0 : [Encode] DDI TU report in driver
71171e4b6 : [VP] optimize the semaphore clear
85fe0d884 : [Media Common] [VP][Media Copy]Switch default Media copy to BLT engine
72af1dbdf : [VP] VaSyncSurface failed
6a2111c35 : [VP] fix osinterface perf issue
87baae9a3 : [Media Common] [VP] Remove legacy caps for platforms using apo path
2ecf41e78 : [Decode] Optimize xe synchronization for sync dep
7d1045e7b : [Encode] quality feature DDI enable for av1, hevc, avc
c10d2dd18 : [Media Common] [VP] Add ATS-M new DID
3aeb60fd6 : [Encode] Enable AVC Freq Boost flag in Remote Gaming scenario info
425f4c9f4 : [VP] Commonlibctx WA
7f1ba2d5e : [VP] Update EDSR binary for performance tuning
bccccb470 : [Decode] Fix MediaUL dumping issue
70f5ed026 : [Media Common] [VP] Add query va_bits interface
34f2f5197 : [Media Common] [VP] VCIP feature add 3X3 support
cf22977e7 : [Decode] Optimize xe synchronization
d3bcd2d94 : [Encode] Update AVC/HEVC TU strategy for hpm

+- Project: platform/external/iperf3

ee3cd66 : Enable host_supported for iPerf3.

+- Project: platform/external/iproute2

2087fae5 : Remove unused -Wno-implicit-function-declaration.

+- Project: platform/external/iputils

42da5f4 : Pin to C17.

+- Project: platform/external/jackson-databind

4c4eed7b3 : Update Jackson to support Json-schema-validator

+- Project: platform/external/jarjar

e110f69 : Write data to buffered output stream.

+- Project: platform/external/jemalloc_new

1abcb968 : Remove the unreachable() macro that conflicts with the C23 one.
a6de53b6 : Decrease size of arena structure.
1d10546c : Handle migration of tikv-jemalloc-sys to crate monorepo
8974b235 : Fix fork deadlock issues.

+- Project: platform/external/jetpack-camera-app

5691905 : [external/jetpack-camera-app] Remove Experimental annotations in camera-core module
6d1c0e9 : [external/jetpack-camera-app] Remove Experimental annotations in camera-core module

+- Project: platform/external/jsmn

bca0a7c : Make bpfmt happy.

+- Project: platform/external/json-schema-validator

1cc33c1 : Updating json-schema-validator to properly build
45eda76 : Third-Party Import of: https://github.com/networknt/json-schema-validator
75ab7da : Initial empty repository
3416e28 : upgrade to 1.4.0 and update changelog
6f44455 : Explicitly handle if the discriminator property value is null (#988)
0f983b0 : Refactor walk (#986)
eea61d6 : Fixes uri, uri-reference, iri, iri-reference formats and does iri to uri conversion (#983)
95911ba : Support custom vocabularies and unknown keyword and meta-schema handling (#980)
fed46cf : Fix message (#975)
8c2f60b : Make ethlo excludable (#974)
bc93b44 : upgrade 1.3.3 and update changelog
53a3402 : Support GraalVM and refactor (#972)
bd085ed : Fixes for discriminator (#971)
2879ca3 : Fix validation messages (#969)
7cda40e : Add unevaluatedProperties test (#968)
1f60740 : Reduce memory usage and improve performance (#966)
c768bc1 : Set result at the end of schema processing (#963)
5069ba4 : upgrade to 1.3.2 and update changelog
8f47aac : Update upgrading doc on fail fast (#961)
5dc929e : Improve schema retrieval docs (#959)
e60f81f : Refactor format validation (#958)
32356e9 : Add test for OpenAPI 3.1 schema validation (#956)
937105f : Fix patternProperties annotation (#955)
fb40864 : Add test for type integer (#954)
a61cf42 : Improve vocabulary support (#953)
51fd82b : Fix resolve (#952)
61bf64a : Locale.ENGLISH should set. (#951)
85d642b : Fix issues with hierarchy output report (#947)
91385c2 : Add test for type loose for array and update doc for behavior (#946)
c7e7ab4 : Support type loose for multipleOf validator (#945)
2795d79 : Fix for required annotations for evaluation not collected (#944)
49a44c9 : upgrade to 1.3.1 and update changelog
9c95c06 : Add annotation support refactor keywords to use annotations implement output formats (#942)
e95642c : upgrade to 1.3.0 and update changelog
b76c9f4 : fixes #933 update javadoc and a test case (#934)
7f1ec11 : Support Draft 2020-12 and refactor schema retrieval (#931)
f3825f3 : Fix getSchema() anchor fragment lookup. (#930)
44b1ec5 : Upgrade ITU library to version 1.8 (#929)
747b47f : update changelog for a typo
85f112a : upgrade to 1.2.0 and update changelog
9b73d10 : Support schema resource (#922)
5a94df7 : Refactor of paths (#915)
5dd0be3 : Basic test on URI create to improve coverage (#923)
48ca8c3 : Refactor validation message generation (#910)
0aaa967 : Update docs on CollectorContext (#913)
0c14a7a : upgrade to 1.1.0 and update changelog
5535107 : downgrade logback to 1.3.14 for JDK 8 compatibility (#912)
b818d54 : upgrade to 1.0.88 and update changelog
b138dc6 : 665: Can't load JSON schemas with URN value in id field (#906)
97b4cba : upgrade logback to 1.4.14
b33561f : Refactor to remove ThreadLocal usage (#896)
2971b79 : upgrade slf4j to 2.0.9
2f611ed : compile configuration is deprecated (#900)
84f613e : Escape single quotes in validation messages (#898) (#899)
0924dfe : Fix JDK regex support (#888)
9ed6dc2 : fix: make JsonSchemaFactory more thread-safe #891 (#892)
92141e3 : Adapt collector context documentation (#876)
20a179f : Added test cases for not allowed validator, Handled invalid keyword test case for recursive reference and empty array case for prefixItems validator (#890)
e86c817 : Fix pl_PL message translations (#887)
181f88a : Fix invalid class passed to getLogger (#886)
9ece704 : upgrade jackson to 2.15.3
f03a306 : docs: clarify commons-lang3 exclusion only required for 1.0.81 and older (#883)
b041fcc : Fix indentation in example in walkers.md (#866)
b8a008e : Revert "Bump io.undertow:undertow-core from 2.2.25.Final to 2.3.5.Final (#858)" (#863)
01a7da2 : Bump io.undertow:undertow-core from 2.2.25.Final to 2.3.5.Final (#858)
b634de7 : upgrade to 1.0.87 and update changelog
185b456 : java doc update
21a6143 : New resource bundle languages added for issue #844 (#852)
3ff32ee : Use correct namespace URI to pass XML validation (#837)
520d997 : upgrade to 1.0.86 and update changelog
5703036 : Adds support for $recursiveAnchor and $recursiveRef (#835)
4d51a58 : Stops unevaluatedProperties and unevaluatedItems being applied recursively (#827)
b1572db : Always normalize uri keys of JsonSchemaFactory.jsonMetaSchemas on both read and write. (#834)
84a7137 : fixes the version for 1.0.85
a4c5135 : upgrade 1.0.85 and update changelog
d353623 : Adds support for writeOnly (#823)
200d9e9 : Reverts Undertow version to 2.2.25.Final (#819)
20ec058 : upgrade to 1.0.84 and update changelog
c5d21a4 : Ignores fail-fast when evaluating a member of an applicator (see https://json-schema.org/draft/2020-12/json-schema-core.html#name-applicators). (#816)
ad3ff30 : Corrects Java's failure to match an end anchor when immediately preceded by a quantifier. (#815)
ac6ecd0 : Adds support for walking if-then-else (#813)
9d78bf3 : Ensures context is reset after validating regardless of which method is used by the client. (#812)
efbb37e : Adds support for walking dependentSchemas (#811)
ad3fa5c : Ignores siblings of $ref when dialect is Draft 4, 6 or 7 (#809)
b88ccc7 : Updates Jacoco configuration to ignore the embedded Apache code (#807)
b2d6135 : Simplifies how evaluated properties and array items are tracked (#790)
3c9d1d4 : Enables unit-tests for refRemote validation (#806)
1f9e591 : Corrects issue with deserializing JSON Schema Test Suite tests. (#805)
ad795de : Support config param to disable custom messages from schema (#801)
bf1bf01 : fixes #793 Updating jackson version to 2.15.2 (#796)
4fcfc6a : Supports fail-fast when a pattern does not match. (#795)
a4ce435 : upgrade jackson to 2.15.1
e9dea3b : upgrade to 1.0.83 and update changelog
3810f30 : fixes #788 update JsonSchema to fix the javadoc issues (#789)
aa9f164 : Allows to override date-time and duration validators (#787)
de1cb89 : Allow walking of schema for items keyword when non-array node is provided (#786)
f09740a : Resolves improper anchoring of patternProperties (#783)
77cd232 : Makes JsonSchemaFactory solely responsible for creating JsonSchema instances. (#781)
3cf9bb6 : Adds support for cross-draft validation (#779)
1834353 : Adds support for handling integer overflow (#777)
ba4b910 : upgrade to 1.0.82 and update changelog
48a509b : Adds support for validating idn-hostname and idn-email (#775)
84d8546 : Fix issue #769 - Add minContains / maxContains correct keywords (#773)
19011c3 : Adds support for validating an IRI (#768)
cb7c53b : Update README.md
d21140a : Supports iri-reference format validation. (#766)
1e81668 : Supports uri-reference format. (#764)
d87c019 : Supports relative-json-pointer validation. (#762)
e73d129 : Enables validation of json-pointer formats (#760)
69555e6 : Adds support for validating uri-template formats (#758)
aaddd0d : Bug fix for handling of common special characters when building paths (#750) (#757)
5b5a192 : Bug fix for JSON Pointer parsing (#752) (#755)
2143b54 : Resolves incomplete validation of unevaluatedProperties. (#754)
8a4cffa : #750 Escape double-quote in produced JSON Path expressions (#751)
98d540c : Enables unit-tests for the unevaluatedItems keyword. (#749)
1d674ad : #686 Better localisation support (#743)
6382ac6 : Updates LICENSE and NOTICE to comply with section 4d of the Apache License Version 2.0 (#741)
d8c1310 : Enables unit-tests for ECMA 262 regular expressions. (#738)
6a73316 : Enables unit-tests for 'not' keyword (#735)
ec8b6e1 : Updates tests from JSON Schema Test Suite (#733)
0188052 : upgrade to 1.0.81 and update changelog
262d3ee : Improves performance (#731)
86c6ac6 : Removes need for network access when executing unit-tests (#730)
cff8961 : Adds explicit Java module descriptor (#728)
a3eccdf : custom uri fetcher doc (#725)
594479b : update the contributors and sponsors
284e952 : Produces validation messages when oneOf has no valid schemas. (#720)
6cea2ca : upgrade to 1.0.80 and update changelog
7afdd3b : fixes #709 throw the exception as it is (#717)
3c93e45 : update the comments only
8107dfe : Adds support for unevaluatedProperties that uses a non-boolean schema. (#716)
58b0ddf : Adds explicit support for tracking evaluated properties (#714)
a6b63f9 : Corrects malformed tests (#712)
3e5312f : Adds support for the Draft 2020-12 interpretation of `prefixItems` and `items`. (#710)
796bbf0 : fixes #708 remove System.exit from I18nSupport (#709)
460526d : Corrects treating 1.0 as an integer (#707)
636a346 : Adds support for validating regular expressions (#706)
1f3ae93 : Adds support for email addresses containing an IPv6 literal value (#705)
4acf3d6 : Adds support for validating leap seconds (#704)
388ffe4 : Corrects validation of duration and provides the option to validate against a more recent version of ISO 8601 (#703)
6a5a429 : Adds support for minContains and maxContains (#702)
dbdc79c : Updates tests from JSON Schema Test Suite commit 987a4c8fc4468f37c555db362f5de5f9052a13ff (#701)
4cf4783 : fixes #698 avoid warning for additionalItems keyword (#699)
a6dc0f9 : Moves JSON Schema Test Suite to a separate test-resources folder (#697)
39e5529 : fixes #695 add then and else as NonValidationKeyword for v7 (#696)
132a616 : Uses JUnit dynamic tests to generate tests from specification files (#691)
b5ef628 : upgrade slf4j to 2.0.7
4e66b53 : upgrade logback to 1.4.6
eb1e36d : #687 Support for (valid) JSONPath and JSONPointer expressions as message paths (#689)
5a67ecc : [CI] Bump used latest non-LTS Java: `19` -> `20` (#688)
84f564f : upgrade to 1.0.79 and update changelog
e672edb : Issue683 (#684)
c1d0837 : updating to keep java 8 compatibility
577b647 : Adds support for translating one URI into another (#682)
2d9d5ef : add a doc for metaschema validation
a199ac6 : #604 add disabled test case to reproduce the NPE
cb248c3 : changing ReadOnlyValidator to use boolean property instead of array. Including the concept of read/write modes.
dcd6749 : Add option to disable uri schema cache in JsonSchemaFactory (#679)
d22b587 : Avoid throwing exceptions and error-level logging (#676)
54b5e82 : Update README.md (#675)
34c4fbb : fixes #672 add multiple language doc
dcc76e4 : Support time offsets in `format: time` (#671)
a6a5403 : upgrade to 1.0.78 and update changelog
82ca8e5 : issue668: handle references to yaml sub-schemas (#670)
5b03a98 : Update README.md
dd3f787 : issue-664: Provide/unify schema path for applicator schemas (#667)
05fc14a : Clarify usage of Apache commons lang in README.md (#666)
bb63267 : Use full schema path to look up type validators for `anyOf` operator (#663)
b888a00 : [#475] Make DependentRequired error message more helpful (#661)
24bc57e : upgrade to 1.0.77 and update changelog
18805f1 : upgrade jackson to 2.14.2
f443adf : Map BinaryNodes to type string (#651)
5b15c46 : Improve logging performance (#649)
435c921 : Drop unused test dependency: Mockito (#648)
12bb112 : Use Javadoc badge with dynamic version instead of plain link in README (#647)
b1aec99 : Add ability to detect spec version optionally (#646)
5645d56 : Add MavenCentral badge to README (#645)
1ec44ff : Improve example of Gradle dependency in README (#644)
a87773a : Make sure all constants are `static final` (#643)
234e595 : Remove unused fields from `JsonSchemaVersion` (#642)
9e2e151 : Improve error messages on spec version detection (#641)
816c167 : Update build badge from README to point GH Actions CI (#640)
0a86653 : Drop Travis CI config (#639)
e22246b : Restore code coverage calculation (#638)
e622ccd : Adding tests for overriding error messages at schema level for individual keywords (#636)
2fcd0c6 : Setup CI based on GH Actions (#637)
aef6e9e : Quick fix for issue causing the wrong custom message to be used (#634)
f972484 : add persian language to json validator (#635)
2822c69 : Issue #627 custom message for format (#632)
8b91c28 : upgrade to 1.0.76 and update changelog
f94930c : adding new walk method to start walking from a specific part of a given schema node (#629)
90275b5 : upgrade to 1.0.75 and update changelog
acd5839 : schema path fixes in oneOf,allOf and anyOf validators (#628)
ba3c84a : downgrade undertow to 2.2.21.Final
907ee25 : upgrade to 1.0.74 and update changelog
c7056a9 : fixes #620 upgrade commons-lang3 to 3.12.0
527fe6b : Add support for subschema references in getSchema(URI) (#619) (#625)
63e0ceb : Correcting the oneOf,anyOf and allOf child schema validators to use the full path (#626)
b92359f : upgrade undertow to 2.3.0.Final
e5bf574 : upgrade jackson to 2.14.0
c41b83d : [docs] Beautify code blocks (#617)
4992d7d : Update spec version tests (#614)
f69ea6c : Update the specversion.md and pom.xml (#613)
88ce022 : upgrade to 1.0.73 and update changelog
6041dd7 : issue_563 Support adding custom message at attribute level and custom validator for all schema versions (#608)
cbc03f8 : Issue 606: Handle matched state in AnyOfValidator (#607)
1c69c01 : upgrade undertow to 2.2.18.Final to 2.2.19.Final
ba3a614 : GH-597 - add italian translation (#598)
195d716 : add validator for duration format (#593)
1ecc9da : Remove commons lang as a compile time dependency (#594)
61ed6df : Add NonValidationKeyword "else" on 201909 and 202012 (#592)
f83e386 : upgrade to 1.0.72 and update changelog
0372dbe : upgrade undertow to 2.2.14.Final to 2.2.18.Final
4e7df5f : fix: add V202012 to SpecVersionDetector And JsonMetaSchema (#586)
667a2d5 : fix: changed data type to preserve order of schema attributes (#585)
268da37 : upgrade to 1.0.71 and update changelog
b933e9f : Fix unevaluatedProperties with patternProperties and type union (#582) (#583)
9592543 : update the readme.md
0d1d8f2 : Add support for draft 2020-12 (#380) (#580)
640768b : fixes #556 upgrade jackson to 2.13.3 (#576)
5970f14 : fixes #575 (#578)
ebf4a59 : upgrade slf4j to 1.7.36
95a80b0 : upgrade logback to 1.2.11
79ec3f1 : upgrade jackson to 2.13.3
22240ea : upgrade to 1.0.70 and update changelog
ba17f1d : proposal for issue #535, part 2: fix the same issue in AnyOfValidator, too (#559)
63cd3bf : Upgrade javadoc plugin (#570)
27b969b : Fix broken tests on non-english setup (#569)
678ff47 : Remove unused variable in JsonNodeUtil (#566)
4906d90 : Improve performance of URLFactory.create() (#565)
39cc79b : Prevent from throwing an exception when setting default values (#561)
99a5389 : Add French translation for validation messages (#558)
3e110b1 : upgrade to 1.0.69 and update changelog
cbac5b1 : removed unnecessary check (#554)
a2ca992 : Setting default value even if that value is null (#555)
08ec68e : Fixing unevaluated properties with larger test base (#544)
7e9375f : Add schemaPath to ValidationMessage. (#552)
e68be00 : Revert "The bug fix for the issue 520 (#543)" (#547)
ae7c0ca : The bug fix for the issue 520 (#543)
c9ed1bf : Allow fetching /properties from map with comparator (#541)
45d3bc1 : upgrade to 1.0.68 and update changelog
2ab2199 : Fix oneOf bug (#537)
0b51d25 : PR #511: Improve validation messages (German and default) (#536)
ca36ea4 : Refactoring-code (#539)
5007242 : Adding Unevaluated properties keyword. (#534)
4fb2eba : fixes #532 Invalid (non-string) $schema produces NullPointerException (#533)
1b43f0d : Fixed a typo in the validators documentation (#530)
af6bba1 : Updates to German translation (#529)
bc4cabc : upgrade to 1.0.67 and update changelog
1c68e2c : Leap seconds are handled even better (#508) (#525)
9af96e8 : Fix handling of leap seconds in date-time validation (#508) (#524)
53fa5de : fixes #523 synched ipv4 and ipv6 and fix some gaps for the IP format validation
416572e : fixes #522 synch the official test suite for draft v4 from schema.org
48cba95 : fixes #509 NPE with oneOf and custom URI Fetcher or Factory (#515)
1fbd73f : Make date-time validation align with RFC3339 (#508) (#521)
67bff81 : Preserve `#` suffix during metaschema URI normalization (#519)
035f63c : fixes #516 fix the additionalProperties in oneOf failed test cases
15a378b : AdditionalPropertiesOneOfFails test (#505)
a015e31 : fixes #510 try to reproduce the issue but failed
62f418f : Add German validation messages (#511)
8fce164 : Support fragment references using `$anchor` (#500)
4b5f7aa : upgrade to 1.0.66 and update changelog
4172360 : Improve type validation of integrals (#496)
7f96315 : parent schema of additionItems can be correctly referenced (#493) (#497)
788c0aa : upgrade to 1.0.65 and update changelog
8ada62c : Sort ValidationMessage by its type (#492)
30d4b4b : Handle the situation when context class loader is null (#490)
4ac0fbd : Fix flakiness in `CollectorContextTest` (#489)
0c2a855 : upgrade to logback 1.2.7 to resolve some x-ray warnings
61e3dc9 : upgrade to undertow 2.2.14 to resolve some x-ray warnings
911091a : Fix violations of Sonar rule 2142 (#488)
fad2a58 : apply default in objects and arrays (#477)
fdae2f8 : Issue386: FailFast should not cause exception on if (#485)
8814cfa : Add Java Syntax Highlighting to specversion.md (#483)
4a3d23a : upgrade to joni 2.1.41 to resolve a security concern (#482)
a068e6a : upgrade to 1.0.64 and update changelog
dd57a89 : issue-478 : Time format validation supports milliseconds (#480)
2dee3ae : Add dependentRequired and dependentSchemas validators (#479)
1f7d062 : upgrade to 1.0.63 and update changelog
9cb204b : Remove additional properties validation message when there are other … (#473)
9c2311c : update the link to the gitter chat
7184d27 : fix i18n doesn't work with locale CHINA. (#472)
0908741 : upgrade to 1.0.62 and update changelog
76a4d9a : fixes #456 OneOf only validate the first sub schema
455cf0a : upgrade to 1.0.61 and update changelog
77390a4 : Fix test NPE (#464)
45d2358 : fix for #461 (#462)
9419ce7 : Correcting the ref listeners config in WalkEvent class when fetching the getRefSchema (#459)
df9f67a : upgrade to 1.0.60 and update changelog
82aafd9 : fix for #451 (#452)
7522176 : changed from isIntegralNumber to canConvertToExactIntegral to support numbers with 0 decimal value as valid integers for max/min lengths. (#450)
a03b92c : Refactor JSON Schema Test Suite tests (#449)
9e6b137 : Test CI with JDK 11 (#448)
ccb9696 : Bump JUnit version to 5.7.2 (#447)
8b26d19 : upgrade to 1.0.59 and update changelog
58947aa : JsonValidator: mark `preloadJsonSchema` as default (#445)
6a2255f : $ref caching issue (#443)
8f9189b : Issue 426 custom message (#440)
b825939 : upgrade to 1.0.58 and update changelog
e8c9a2d : "Adding custom message support" (#438)
5ec08c7 : Implementation of #436 (#437)
9a8680c : add i18n support for ValidationMessage. (#439)
2a1fac7 : Added exampleSetFlag to nonValidationKeyword (#435)
308e258 : Issue425 (#432)
36095ff : Issue428 (#431)
b50c7cf : Update collector-context.md (#430)
e152f9c : Update collector-context.md (#429)
8ecc14c : upgrade to 1.0.57 and update changelog
d3c9376 : Avoid NPE in AdditionalPropertiesValidator (#424)
2273b37 : Fixed loss of accuracy (#422)
afb72f0 : 201909 false flag keywords "additonalItems", "then" (#418)
80ef1b9 : upgrade to 1.0.56 and update changelog
5eea5e8 : Fixes https://github.com/networknt/json-schema-validator/issues/416 by marking each JsonSchema as being initialized (#417)
00923c2 : Simplify the uri format validation regexp (#414)
f4809e5 : upgrade to 1.0.55 and update changelog
cbc1877 : Fix empty fragment and query string being invalid (#412)
751f436 : upgrade to 1.0.54 and update changelog
ed8d440 : Added JsonValidator#preloadJsonSchema() to allow eager preloading of schemas. (#410)
faec7f1 : Use a more restrictive regex for URI validation (#409)
72fe835 : fixes #404 add test case to replicate the issue
9d31e80 : update changelog
3196d72 : upgrade to 1.0.53 and update changelog
c41df27 : Introduce `forceHttps` flag in `JsonSchemaFactory.Builder` (#400)
d907f55 : upgrade to 1.0.52 and update changelog
decfda9 : Fix for #389. Also fixes the special situation of nested discriminators not working correctly when the parent schema has a discriminator, but is not referenced via anyOf/ (#399)
6c573d4 : Properly support propertyNames (#397)
dfd63aa : Added documentation for #390 (#394)
b4f43ab : upgrade to 1.0.51 and update changelog
415980e : Changed the development approach - new attempt from scratch (#390)
9a9032d : NPE due to concurrency bug #392: Make JsonSchemaRef more thread-safe; Removed unused members and constructor (#393)
afcd64e : override default EmailValidator, if set custom email format (#391)
26b28b1 : upgrade to 1.0.50 and update changelog
4fa1ab4 : Fixing concurrency and compilation issues. (#385)
b1a8e4e : fixes #387 resolve the test case errors for TypeFactoryTest (#388)
46cc360 : Add lossless narrowing conversion (#379)
ca1847a : Potential fix for #383 (#384)
e692d47 : upgrade undertow to 2.2.4.Final
f68462b : upgrade to snakeyaml to 1.26
4ed6882 : Bump jackson-databind from 2.10.4 to 2.10.5.1 (#378)
368b7e9 : upgrade to 1.0.49 and update changelog
b322058 : Fixed parallel processing (issue-335) (#377)
9df611d : PropertyNames to return validator value on error (#375) (#376)
ff0a6fd : upgrade to 1.0.48 and update changelog
3758d52 : Fix for issue-366 (#372)
60917a8 : #326 property names with pattern (#374)
a5f9f7f : upgrade to 1.0.47 and update changelog
a5eca44 : Fixing Walk Listeners Issues. (#368)
1783eb5 : upgrade to 1.0.46 and update changelog
76ad178 : Validation of oneOf depends on schema order (#361)
cb332a3 : IF-THEN-ELSE Conditional with properties attribute (#358)
e1a5aa2 : Date-time validation fails depending on local time zone (#363)
0ec1133 : fixes #360 add four project links to the README.md
a03170d : #354-fixed OneOf validation error if number of valid schemas is not equal to 1 (#355)
13a5f0a : YAML source location handling (#340) (#352)
94dcb57 : Add 'anchor' and 'deprecated' as NonValidationKeywords for v2019-09 draft (#351)
068a3e5 : upgrade pom.xml and update changelog
01119cf : javadoc update
0d442fd : Add builder method that accepts iterable (#350)
c4b27b0 : Fix NPE (#348)
8e663d0 : Update docs about javaSemantics flag (#346)
6cb431f : fixes #345 optimize imports in the src folder
f1c56df : fixes #334 Improve type validation of numeric values
0776752 : Add contentMediaType, contentEncoding and examples as a NonValidationKeyword for v7 and v2019
8e9d3f8 : fixes #338 comment out the moditect-maven-plugin for mvn deploy
0d4dfd2 : upgrade to 1.0.44 and update changelog
a04a1fa : Walk Changes
3c675e6 : Walk Changes
a47f34f : Changes for walk listener
95b3c61 : Bump junit from 4.12 to 4.13.1
f757baf : fixes #329 JRuby Joni dependency and its dependencies
b867cf1 : Add '$comment' as a NonValidationKeyword for v7 and v2019 drafts
236967a : fixes #327 Unknown MetaSchema cannot be reproduced
f5903a4 : Changes for walk methods
2a94cc3 : Changes for walk method
9f2ad72 : Changes for walk methods
fcd51c2 : Changes for walk methods
730f7cf : Fix slf4j and Jackson module requirements.
f9f73fc : Changes for walk methods
acabad5 : Generate module-info, fix build on JDK11+.
a4ffec1 : Changes for walk listener
3013f3c : Changes for adding walk capabilities
24511a6 : FIX: potential duplicate log entry due to race condition
8f42eac : Adding walk changes to applicable validators
924d95f : Follow up(#317) Compatible with Jackson 2.9.x
5d8e6ac : upgrade to 1.0.43 and update changelog
65a3f3c : fixes #319 resolve a java doc warning in CollectorContext
2c01183 : (#317) Compatible with Jackson 2.9.x
1abc99b : fixes #315 implement propertyNames validator for v6, v7 and v2019-09
d5a861a : add test case Issue285Test from SirMoM
3a3e1a8 : fixes #313 add test case to reproduce the issue
3069333 : upgrade to 1.0.42 and update changelog
0fdee4a : Split the PatternValidator into 2 classes.
e0e3c88 : Updated structure and text to be more descriptive. Uniform code block style.
f4b5908 : Fixed cust-fetcher and cust-meta link not working. Updated structure and text to be more descriptive. Uniform code block style.
fa7e587 : Class comment added
8c56c7b : Added schema version detection based sample. Updated structure and text to be more descriptive. Uniform code block style.
3156bb4 : Added schema version detection based sample. Updated structure and text to be more descriptive. Uniform code block style.
52e606a : Added schema version detection method
a8bec51 : upgrade to 1.0.41 and update changelog
5dadf03 : Marked dependencies to joni with resolution optional (fixes #307)
fc24edd : upgrade to jackson 2.10.4
0a02a8f : upgrade to undertow 2.1.1.Final
2fa85e6 : Added schema version detection with schema tag; some small code clean up
d29cc40 : Update UUIDValidator.java
cf3b4fe : upgrade to undertow 2.1.1.Final
e1571c6 : upgrade to 1.0.40 and update changelog
e32a541 : fixes #273 normalize the meta schema uri
00eb26d : reproduce the unknownMetaSchema error
47de83e : Compatibility matrix
93fa3bb : Add document for custom validators
49215e6 : upgrade to 1.0.39 and update changelog
5120bce : Changes for adding getAll on CollectorContext class
51e18fd : upgrade to 1.0.38 and update changelog
3d1e9e4 : fixes #281 EmailValidator use ValidatorTypeCode Datetime
8832315 : upgrade to 1.0.37 and update changelog
00c8853 : Add null check in validationContext.getConfig()
a0e00e1 : downgrade undertow to 2.0.29.Final
cf92bd6 : Trivial readme fixes
874cf94 : upgrade undertow to 2.0.30
5457143 : update changelog
c8346d9 : upgrade to 1.0.36 and update changelog
0cf42c2 : add small documentation about URNFactory
524dc47 : #258 fixed cyclic dependencies
8b2dbfd : fixed potential npe
69121ad : added test schema
e1ce67d : fixes #273 make the getInstance() deprecated
6f4a06e : switch to version.joni property
bc681d2 : upgrade to 1.0.35 and update changelog
858a838 : Use ECMA-262 validator when requested
2971a32 : fixes #270 404 validators reference error
55794b4 : upgrade to 1.0.34 and update changelog
133f8e4 : Adding getValidators public method in JsonSchema and making newSchema protected in JsonSchemaFactory
b2e5e96 : Collector Context changes to handle simple Objects
fa40960 : Collector Context changes to handle simple Objects
f227b96 : fixes #266 reformat the code and resolve javadoc warnings
83a09fd : upgrade to 1.0.33 and update changelog
e4686dd : Handling JSONPointer (URI fragment identifier) with no base uri
fde3a1c : fixes #255 add test cases for issue 225
c65023b : Revert java version change
f707525 : Handling local id ref
92aa6fc : fixes #261 remove Java 8 stream code from the test case
0df798b : upgrade to 1.0.32 and update changelog
7194f77 : Changes for adding collector context as a ThreadLocal object
a67b7bd : Changes for adding collector context
662f455 : Changes for adding collector context
c4c1891 : upgrade to 1.0.31 and update changelog
b658913 : Fixes 226, Implements contains.
a03428e : upgrade to 1.0.30 and update changelog
eaddcd9 : Resolve schema id from the schema document (for v6 and above)
cdfc710 : Reactivate plugin
13f0585 : reset distributionManagement
0fac9c1 : Move test data out of the official specification
f54ed47 : Improve accuracy of rounding with multipleOf
d95b85b : fixes #242 add customized fetcher and meta schema doc
b61f378 : upgrade to 1.0.29 and update changelog
63b0c8c : fixes #232 update meta schema URI to https
f3cf218 : Update description in pom.xml to match readme.md
b96065a : fixes #229 move the remotes to resources from draftv4 so that they can be shared by all tests
caf5259 : fixes #228 support boolean schema in the dependencies validator
1dbf2db : enable const validator test for v6
efcd587 : fixes #224 support boolean schema for the item validator
709a9bb : fixes #222 add document for URL mapping
21d80d2 : LIFEAPTE-5144 enable json-schema-validator to work with Android 5 and 6
c7f3a41 : upgrade to 1.0.28 and update changelog
ec9f933 : Aligned tests with JSON-Schema-Test-Suite.
268413b : upgrade undertow to 2.0.28-Final
a6c35aa : Fix for oneOf when not all properties are matched.
c08c168 : Added an example code snippet
efa7000 : add yaml validator document
69813b9 : add yaml validator document
8232309 : upgrade to 1.0.27 and update changelog
24aad4f : Fix remote ref to follow redirects
a2f520f : #214 - Fix conditional of validation (if-then-else)
08ca9e4 : fixes #54 convert all numbers to decimal node for comparison in enum
e83fc2e : fixes #54 implement Const validator
9f8febd : fixes #54 add boolean schema validator to resolve some of the test cases
cc0c136 : fixes #54 add v6 and v7 test cases
0ea1128 : add test cases to the v201909 test suite
7aaa5b0 : fixes #54 pass the common built-in formats to each spec instance
b2741f9 : fixes #54 support for draft V6, V7 and V2019-09
a34af03 : fixes one missing link to github.com
4af02c8 : correct the github url location for the remote file
42900f5 : fixes #211 move the current test cases from tests to draft4 folder in the resource
d16a005 : update pom.xml to integrate with codecov
760777b : add code coverage
421ea40 : upgrade to 1.0.26 and update changelog
31a7712 : Fixing ref validation with same name but different files
bbddae6 : upgrade to 1.0.25 and update changelog and document
2f37b9e : Adding Unit Tests for IfValidator
01dfc0b : Implementing IF-THEN-ELSE Conditional (Draft 7)
3bcab62 : upgrade json-schema-validator to 1.0.24
9d0accb : fixes #205 Update README.md to add more content
c70b2b7 : fixes #205 Update README.md to add more content
50925d1 : fixes #204 add documentation for configurations
ee44d86 : upgrade to 1.0.24 and update changelog
64f6367 : fixes #203 String for Number should fail with the default SchemaValidatorsConfig
5788bba : update README and changelog before release
8579314 : feat/#199-fail-fast - Adding feature to fail fast on the very first validation error.
cf1b188 : upgrade to 1.0.22 and update changelog
fa8b6a1 : replace reflection in ValidatorTypeCode
8e62936 : upgrade to 1.0.21 and update changelog
6db536a : fixes #187 SchemaValidatorsConfig not propagated
25543d2 : fixes #192 upgrade jackson to 2.9.10
dbdfe90 : fixes #190 keep the old test cases for oneOf
7875ef7 : Fixes #190 - OneOfValidator cannot validate object with multiple properties, additional test case
f335920 : Fixes #190 - OneOfValidator cannot validate object with multiple properties
417ffb3 : update travis ci dist
b0c8b86 : specify build dist, as the default xenial does not support jdk8 anymore
b07aaf1 : fixes #188 could not validate the email format in json schema
22d29af : upgrade to 1.0.20 and update changelog
8852fe5 : Fixes #185 Validation issue in oneOf when elements have optional fields
c26d997 : Fixes #185 Validation issue in oneOf when elements have optional fields
0816c2d : Fixes #185 Validation issue in oneOf when elements have optional fields
8364aad : Added additional validation
0f0c75f : Fixes #185 - OneOf validation error
3b21a47 : Fixes #183 Validation error when field is nullable and consumer sends in a null value
6d52a9b : Fixes #183 Validation error when field is nullable and consumer sends in a null value
aaa9e18 : upgrade to 1.0.19 and update changelog
9ccc25e : fixes #182 upgrade jackson databind to 2.9.9.3
3622e9d : fixes #182 upgrade jackson databind to 2.9.9.2
b67154b : Prevent recursive parsing of json schema under the conditions described in the json-schema specification, section 8.3.1: https://json-schema.org/latest/json-schema-core.html#rfc.section.8.3.1
1de8822 : upgrade undertow to 2.0.23.Final
e37b2d4 : Added a means for retrieving the URIFactory that is being used by the JsonSchemaFactory.
23b4fe2 : Removed the code causing the new OneOf unit test to fail.
6cc45a8 : Added a unit test to reproduce bug #177.
99d2187 : upgrade to 1.0.18 and update changelog
b47ef33 : #173 - Do not ignore other errors in case of anyOf validator
41b3623 : upgrade to 1.0.17 and update changelog
10bb35a : More insights into commentary for #157
2447bc4 : Some insights on discussion brought up in #157
0a1d65e : fixes #171 validation of quoted numerics added
413b4b7 : upgrade to 1.0.16 and update changelog
0128c25 : Fixed some funky formatting.
2224e26 : fixes #168 resolve javadoc warnings
409c4a5 : Moved the URISchemeFactory into the ValidationContext.
5f541d4 : Fixed some issues in the UriMappingTest class.
0a64900 : Finished adding support for URIs. Standard URLs and the classpath URLs are handled by default, but can be overwritten.
28d5a62 : Updated the URI interfaces.
53bf363 : Updated URL variables to URIs. Also created a URIFactory interface that needs to be used instead of URI#resolve and URI#create.
fbae1d6 : upgrade to 1.0.15 and update changelog
d1c9468 : upgrade undertow to 2.0.21.Final
7ce7f98 : -ignore performance test, shouldn't be included in the build.
d96ac6e : -upgrade java.testversion so that lambdas are supported in test cases
f31f698 : -change minimum validator to use BigDecimal for comparison, with similar logic to MaximumValidator -added test cases
40dc57e : -added a performance test comparing against the double type; the time difference after the change to BigDecimal is very small
7096890 : -add final to schemaNode -change logback-test from debug to error level because it may affect the performance test
a456ff5 : -only compare as Long values when the schema type is int/long and the compared node is an integer type; otherwise one of the schema or the compared node has a decimal, so convert both to BigDecimal for the comparison. -changed some edge test cases, since the way of measuring changed and some edge scenarios no longer apply.
1fd7a2e : upgrade to 1.0.14 and update changelog
b9fbd91 : fixes #163 update typeLoose to false, as it was before merging PR 141
148f356 : fixes #162 bump up java version to 1.8
ee0b21c : -fixed max validator when there is an integer type with a double max value -added test cases
278533e : upgrade to 1.0.13 and update changelog
61ed0f8 : Make a colon in date-time format zone optional (close #158)
2449ee0 : upgrade to 1.0.12 and update changelog
4829f9d : Fix 24-hour validation error
93a698b : upgrade to 1.0.11 and update changelog
8dd4ab5 : add test case for UUIDValidator
b35d50e : fixes issue #151
48ca6ec : upgrade jackson to 2.9.9
f8e4b7d : upgrade to 1.0.10 and update changelog
ab49e2c : -fixed the case where the node is a BigInteger: when it is smaller than Long.MAX_VALUE, it should do the comparison instead of returning false directly. -added test cases.
3471475 : Improved error message
25fbfeb : Added time and date-time type check
66fa314 : Removed unused params
7114855 : Enhance the DateTimeValidator to support validating time zone shifts and milliseconds
ccb1851 : upgrade to 1.0.9 and update changelog
fcf0bf8 : Added DateTimeValidator
4a0c830 : upgrade to 1.0.8 and update changelog
dc9137f : Updated my RefValidator implementation to utilize the recently introduced URL mapping changes.
99bc05b : Removed comment put line
37d6870 : Fix bug parsing array query params when only one item present
c58aab8 : fixes #120 update version to 1.0.7
3c9d0e0 : Adding test cases & Adding $ref condition
730862b : Fix enum object validation error
e3b4c4a : I was a bit hasty in my last commit. There is a use case for supplying a URL along with a JsonNode. The schema could be embedded into a json file and may still need to have a URL location (we use this for testing purposes and I broke our tests with the last commit).
8be9498 : Removed the factory methods that I added previously. Someone could get the same behavior by supplying a schemaUrl and utilizing a custom URLFetcher.
bb6cbf3 : Fixed the actual problem that I was running into and reverted the JDK version back to 1.8. I just was skipping the problematic unit test due to some problems in my dev environment.
4adbc38 : Updated the travis JDK version to OpenJDK11. Java 1.8 now requires a special license in order to use in production, so it's probably safe to eliminate full support for it. The issues that exist in Java 1.8 are only relevant to relative values that have upper case characters.
c74aadf : Added a test that I previously deleted and fixed it so that it correctly reproduces the problem described in issue #12.
298fc58 : Removed unused imports and variables.
708bec8 : Fixed some issues with the previous commit and made it so that schemas have to be valid for the unit tests.
f7d767a : Updated the RefValidator so that it is more robust and supports relative references.
15978fa : -fixed maximum, minimum validator due to BigInteger issue
d2a3a8d : upgrade to 1.0.7 and update changelog
1c6edd7 : Fix negative case
2759140 : Convert double to BigDecimal in MultipleOfValidator to make the validation more accurate
a586428 : fix: avoid performance penalty of using URL in map or set
350013b : feat: allow $ref and other URLs to be mapped to local URLs
ba7e96a : test: add test that invalid external ref fails if not mapped
706e60d : upgrade to 1.0.6 and update changelog
7a2e592 : maximum/minimum validation of integral numbers
22fdad0 : upgrade to 1.0.5 and update changelog
e096d1a : Revert "Develop (#128)" (#129)
9746310 : Develop (#128)
a000f78 : fixes #127 update license copyright and add NOTICE
e97d280 : feat: Add URL mappings (#125)
490b20a : Add link to Javadocs (#123)
4d6df4f : upgrade to 1.0.4 and update changelog
0f0a24e : Almost JSON-spec compliant validation of numeric values (#119)
6f7ea7d : add support links to the README.md
23287fc : fixes issue #120 (#121)
babb6d6 : upgrade to 1.0.3 and update changelog
01aa633 : fix a java doc issue
4ed192a : Fix/#116 fail to validate numeric (#117)
d0f9869 : upgrade to 1.0.2 and update changelog
3aafcd9 : Fixed validation for path parameters and query parameters (#113)
b7d37b9 : fixes #114
25aefc0 : upgrade undertow to 2.0.16.Final
9dc6917 : upgrade to 1.0.1 and update changelog
c20e0b4 : downgrade to 1.0.0 and update changelog
bf5a723 : upgrade to 1.0.1 and update changelog
8e0d55a : upgrade to 1.0.0 and update changelog
fef6916 : Fixes #111 problem started from oneOf validation - Validation failure for optional field in a schema - in the PropertiesValidator
7294ea4 : AnyOfValidator: only return expectedTypeList if not empty
1071a9f : upgrade jackson to 2.9.8
d90866c : upgrade to 0.1.26 and update changelog
e7c21bc : Fixes #110 Validation Error when using OneOf in OpenAPI specs - additional recursive validation fix
625234e : Fixes #110 Validation Error when using OneOf in OpenAPI specs
dd0dac3 : upgrade to 0.1.25 and update changelog
d832ad9 : fixes #109 update after the merge between two PRs
99b56b2 : -Rename ValidatorConfig to SchemaValidatorsConfig
3af631f : bug fix related to union type in anyOf; test cases for union types
506ec3f : remove local test files
6717cea : fix for performance issue
9d6d94e : -Revert files whose only changes were imports reordered by the IDE -clean up a legacy constructor
27eaa02 : -Move ValidatorConfig from wrong package to correct package -Implement light-rest-4j #64 for Swagger ValidatorHandler
cee6226 : -use Config class instead of a Map for validation configuration -code clean up from previous commit.
e271a79 : roughly add an option to differentiate between validating different request parts
ccd9212 : upgrade to 0.1.23 and update changelog
c0e2c24 : temporary fix to performance issue
493b6a9 : adding test case to debug perf issue
a6b04df : upgrade to 0.1.23 and update changelog
e1776f5 : fixes #103 Boolean type validation for the string type is incorrect
baab1d9 : fixes #101 remove unused regex patterns
fdabb5c : upgrade to 0.1.22 and update changelog
6ba7aed : fixes #101 enhance TypeValidator trying to convert type from TEXT
da5514d : upgrade to 0.1.21 and update changelog
228d117 : Check nullable for null instead of catching the exception.
af97444 : Fixing min/max int validation to not output decimal constraints.
3cd9c88 : Adding support for nullable fields
b7d78f5 : upgrade to 0.1.20 and update changelog
0ff98e9 : fixes #92 rollback type validator for null value as it is against the json schema spec
09fdcda : fixes #91 update one test case to ensure compatibility of Java 6
416e893 : Remove unused dependency to slf4j-ext due to security issue
b270f13 : type validation: allow null for optional values in the json schema
e62f3fc : Added example for custom keywords in tests
e5db9fb : Update version in maven dependency sample
279ab0c : upgrade to 0.1.19 and update changelog
636eb84 : fixes #84 remove Java 8 optional to ensure that this library can be used in Android
cbdce4e : fixes #83 upgrade to undertow 1.4.23.Final in sync with other repo
6c52e59 : upgrade to 0.1.18 and update changelog
37d2d6b : fixes #80 upgrade to jackson 2.9.5 and undertow 1.4.20.Final
22e28b9 : oneOf was broken - it did not fail when there were no valid schemas
54f6da2 : Make remaining JsonSchema constructors public
7c8c50a : fixes #74 upgrade changelog for a new release
150009f : fixes #72 build JAR with OSGi support
dcb51d2 : fixes #71 Github Quickstart section out-of-date
bb4ebe9 : upgrade to 0.1.16 and update changelog
8a9fd70 : Minor optimizations
da104f5 : Only apply required validator to objects
3b36f38 : Only apply pattern validation on strings
8b640f9 : upgrade to 0.1.15 and update changelog
39d0383 : fixes #65
2f02bc8 : upgrade version and update changelog
dc1baba : Add simple tests for ValidatorTypeCode
4b2094d : Restore ValidatorTypeCode.fromValue()
d0c358b : Optimize AdditionalPropertiesValidator
87a0316 : EnumValidator optimizations
ad07477 : upgrade to 0.1.13 and update changelog
d8d5019 : Optimization for OneOf
5a5a912 : References that cannot be resolved should be treated as an error
37368a5 : Resolve sub schema node only if really needed
724a442 : update changelog
0fc1483 : upgrade to 0.1.12 and update changelog
e4a7af3 : Improved handling for unknown keywords
79acd4d : Support custom meta schemas with custom keywords and formats
9c24e16 : Use LinkedHashSets for ValidationMessages
03f5ef2 : Change access modifiers in ValidationMessage
c61b761 : Remove unnecessary todo
b79da28 : Change access modifiers in ValidationMessage
fcc6e6a : Added test case for loading schemas from classpath
ae43a2e : update changelog
fca59f8 : update changelog
3e9d79d : update changelog
e357462 : upgrade to version 0.1.11
4bb14fb : Bumped version number
6d48541 : Replaced 'new URL' with a call to a new URLFactory, which also handles related schema files that are only available on the classpath
3dd481e : upgrade to 0.1.10 and update changelog
9e37ab5 : compile src/main for java 1.6 and src/test for java 1.7
790170c : backport 5 of 3000 source lines that prevent compiling for java 1.6
fcf2fb8 : upgrade to 0.1.9 and update changelog
1889e50 : adding relative $ref url
6087924 : upgrade to 0.1.8 and update changelog
cd245ba : Moved duplicate constructor code to init method
3f0c4a7 : Clean up imports and whitespace
e5244a2 : Prevent recursive loading when a schema id is its own URL
6245eac : update README.md
f295a40 : upgrade version to 0.1.7 and update changelog
0b5d8f4 : added missing codes
a776738 : if the schema is not valid against oneOf, add all errors that aren't additionalProperties
713f1a6 : added test with id schema as url
0958e50 : fixes #25 enable undertow server for remote ref tests
38d934e : update changelog
500716e : upgrade to version 0.1.6. thanks @eskabetxe
6cb3959 : uncomment jacoco
b68ba25 : update version on README
f2bea32 : update dependencies versions
1dd05c8 : only check subschema if distinct from schema, and minor changes
1778aa3 : formatting correctly
85fef1e : If using id for subschema, validation isn't working
1026599 : fixes javadoc issues
ad14e06 : added default messages to empty messages on ValidatorTypeCode
3c89412 : added default messages to empty messages on ValidatorTypeCode
2a46668 : fixes #19 to change undertow as test scope dependency
6cc2fef : Update README.md
f37eedf : Revert "Switch to targeting Java 7"
e6a7670 : Switch to targeting Java 7
72dbdc4 : update changelog
80a8183 : Match subsequence instead of entire input sequence.
98d3fba : add a test case for self ref
6bef6c8 : update readme
3770a9f : Add Gitter badge
6b5f230 : upgrade to 0.1.4
c3f593c : update changelog and readme and version for release
b101658 : fixes #4 unicode length
f653dad : update readme
0f81f13 : update readme
c847498 : add travis icon
51c807c : move gpg plugin into the release profile so that the build can be performed on travis
9c55de8 : add test cases and update readme.md
461f4e0 : add travis ci
bbfe46f : update readme
683c629 : update readme
6d59682 : sync with official test suites
22e4362 : add change log and update readme for the latest version
7edeeab : upgrade to version 0.1.2
16f4582 : fixed typo
194abaa : fixed broken escaping in uri pattern
046d082 : update copyright
f59e9fc : get JsonSchema from JsonNode in JsonSchemaFactory
39ba04b : initial checkin
086797e : Initial commit

+- Project: platform/external/jsoncpp

27383f5 : Edit OWNERS file

+- Project: platform/external/jsoup

bb7e475b : Replace JSpecify stub with the real thing.

+- Project: platform/external/jspecify

526170b : Make bpfmt happy.
0947587 : Add apex_available to JSpecify
a59c722 : Add apex_available to JSpecify
463ba62 : Set min_sdk_version in JSpecify to a numeric value
0b3cf01 : Set min_sdk_version to apex_inherit for jspecify
4d7aef5 : Add visibility for AndroidX to use JSpecify
c25f168 : Add visibility for AndroidX to use JSpecify
26c54b2 : Import JSpecify.
a1703ec : Initial empty repository
5bf4ae4 : Run build workflow on any branch whose name starts with 'dev-'. Run the docs workflow on any branch whose name starts with 'dev-' or ends with '-1.0'. (#561)
87a2864 : add OSGi metadata and reproducible build (#428)
f0af7de : Use Gradle toolchain to specify the jdk to use (#558)
378925a : Update to gradle 8.9 (#557)
2e9ff60 : Run docs action for `docs-1.0` branch. (#556)
48ffa9b : Direct people to file an issue in the reference checker repo (#552)
bc70dc4 : Exception parameters are intrinsically NonNull (#508)
dbb3673 : docs: Update docusaurus to 3.4 (#549)
9abe30d : Run the build-documentation job (but not deployment) on pushes and pull requests on the `specification-1.0` branch as well as `main`. (#546)
7fd7d97 : Remove "Draft" from the footer. (#542)
d5b0dfd : Don't run the build or update-conformance-test workflow on docs-only changes. Don't run the docs workflow unless there are docs changes (or gradle or workflow changes). (#541)
48b97d8 : Update the start-here page (#540)
4764b40 : Remove "draft" (#539)
3562ddd : Update user guide to refer to all four annotations, and other updates (#538)
61c8778 : Add and normalize license headers. (#536)
1b33a22 : Stop checking in generated files for GitHub Pages (#528)
5b49db6 : Update Gradle to 8.8. (#530)
32b6e23 : Bump braces from 3.0.2 to 3.0.3 in /docs (#529)
f60152c : Fix a couple problems with building under JDK 22: (#517)
4c7e842 : Regenerate Javadoc to remove warnings. (#527)
97f5374 : Remove now-empty "Important warning" heading. (#526)
8d61d53 : Remove pre-1.0 warnings (#511)
17c9224 : Fix link from "Nullness-subtype-establishing path" to point to the correct section. (#520)
1ad705a : Regenerate Javadoc. (#518)
d660272 : gradle 8.7 (#504)
60b8ce9 : Bump webpack-dev-middleware from 5.3.3 to 5.3.4 in /docs (#499)
3c0df8f : Bump follow-redirects from 1.15.4 to 1.15.6 in /docs (#497)
6ca910f : Bump gradle/gradle-build-action from 2 to 3 (#476)
91514a0 : Bump express from 4.18.1 to 4.19.2 in /docs (#502)
6c705a5 : Bump gradle/wrapper-validation-action from 2 to 3 (#506)
e2d0bd4 : Bump JamesIves/github-pages-deploy-action from 4.5.0 to 4.6.1 (#512)
74f79fa : Update kevinb9n's email in POM (#510)
5bfa09b : Add copyright and license info to Gradle files. (#513)
8989d39 : A brief note on the non-hierarchicalness of packages (#489)
d138f92 : Bump clsx from 1.2.1 to 2.1.0 in /docs (#479)
56207d7 : Bump JamesIves/github-pages-deploy-action from 4.1.5 to 4.5.0 (#475)
a64a84c : Use upload-artifact@v4 to match download-artifact@v4. (#474)
9eb0bb3 : Add comment to sample files covered by conformance test assertions. Remove comment from conformance test assertions that documents which sample files are covered. (#473)
9430515 : Bump eta and @docusaurus/preset-classic in /docs (#472)
ae75dcc : Bump @docusaurus/module-type-aliases from 2.0.1 to 3.1.1 in /docs (#468)
d527eed : Bump gradle/wrapper-validation-action from 1 to 2 (#463)
365edaa : Bump follow-redirects from 1.15.1 to 1.15.4 in /docs (#452)
6b5cc3b : Bump actions/download-artifact from 2 to 4 (#464)
1a44d9d : Bump actions/setup-java from 3 to 4 (#465)
1f9a69a : Bump actions/checkout from 3 to 4 (#466)
2a160c1 : Bump actions/setup-node from 3 to 4 (#467)
1b62da3 : Bump @docusaurus/core from 2.0.1 to 2.4.3 in /docs (#469)
3983c65 : Add conformance tests for annotations in unrecognized locations (#461)
421e1ba : Update to gradle 8.6 (#470)
d815c17 : Configure dependabot behavior (#462)
6b15f01 : Make conformance tests (deps, really) compatible with Java 8, just like the annotations (#451)
1ae91f3 : Fix workflow context key (#450)
97eb333 : Publish conformance test snapshots (#449)
aac1d5f : Update to gradle 8.5 (#441)
8f1c509 : Publish conformance tests to depending projects (#446)
ee91db6 : Bump minimatch, recursive-readdir and serve-handler in /docs (#445)
263e7ee : Bump json5 from 2.2.1 to 2.2.3 in /docs (#432)
ea5d0ba : Bump loader-utils from 2.0.2 to 2.0.4 in /docs (#433)
d600274 : Add missing link to issue 260 (#442)
8a06253 : Put the samples files into the conformance tests zip. (#438)
23b4bfb : Change to install `java-library` and `distribution` plugins instead of `java-library-distribution` so that there's only one distribution. (#437)
bbc8648 : Bump webpack from 5.74.0 to 5.89.0 in /docs (#435)
a07c333 : Bump semver from 5.7.1 to 5.7.2 in /docs (#404)
ab34db4 : Bump ua-parser-js from 0.7.31 to 0.7.35 in /docs (#406)
b98c208 : Bump http-cache-semantics from 4.1.0 to 4.1.1 in /docs (#407)
76de7ee : Bump postcss from 8.4.14 to 8.4.31 in /docs (#418)
6726170 : Bump @babel/traverse from 7.18.10 to 7.23.2 in /docs (#419)
038eebd : Bump @sideway/formula from 3.0.0 to 3.0.1 in /docs (#403)
bb137fa : Add an assertion about a wildcard capture type. (#430)
6cce94e : remove Kengo and SpotBugs team from the doc (#399)
40787a7 : Remove old copies of conformance test assertions. (#429)
eff8f18 : Produce a single conformance test ZIP (#422)
2691a60 : Adjust build tasks for reference checker (#423)
380bcda : Produce conformance test artifacts (#420)
61b6402 : Upgrade Gradle to 8.4. (#421)
94d90b0 : Update to gradle 8.3.
829b41f : workflows: checkout the repo before deploying (#416)
0162c7d : workflow: specify an intermediate path when downloading built docs (#415)
4c0b01c : workflows: fix yaml syntax error
40433ac : workflow: fix build and deploy docs workflow
cac53dd : workflow: fix head ref identifier
1ccf74d : Add a few sink-type assertions to prove they work. (#409)
faa38e6 : Add `test:expression-type` assertions. (#397)
4634ca2 : Don't fail the CI if the conformance tests or samples tests fail. (#400)
b1eca8b : Default to `jspecify-reference-checker` if `vars.reference_checker_repo` is unset. (#402)
ecf1241 : Update dependencies (#398)
5c78249 : Add irrelevant annotation assertions
b9773b6 : Make a section in `@NonNull` match the one in `@Nullable`. (#394)
1756736 : Update README.md (#393)
d84685c : Fix broken Javadoc links.
1d0f1d1 : start-here page: link to reference-checker and portray it as ready (#388)
d70b360 : Reformat new samples to fix the build.
da9d0d0 : Add samples for `@NonNull`.
f7ef2e9 : Trigger updates to reference checker conformance test reports on pushes to the main branch (#386)
23f81d2 : Fill in a couple license years that had been autogenerated.
cfb1fb2 : Consistently use actions/checkout@v3.
ec998d5 : Update spotless version, format annotations, and apply to all .java files.
e5a827e : Run the conformance tests on the new directory and on the samples directory. (#380)
21c1b3c : Create a new conformance test file to start a new directory. (#378)
54b72b0 : Use `test:cannot-convert` in place of `jspecify_nullness_mismatch` in a few places. (#377)
c740bcb : Samples for `@NullUnmarked`.
20865fe : Remove trailing whitespace.
57e311e : Update gradle-nexus.publish-plugin to 1.3.0 (#374)
6f1a0bb : Run conformance/sample tests on push (post-merge). (#372)
3709779 : docs: fail builds on invalid markdown links; update action.
3f35c70 : Run reference checker tests on PRs.
6041cdd : Update to gradle 8.0.
17993b6 : Update spotless and errorprone.
9f97aaa : Sample for an annotation on a multidimensional primitive array.
959f5ad : Fix "Draft Javadoc" link.
edd3942 : Add JDK19 (non-Early Access) as a non-experimental CI job.
679f301 : Update start-here.md
668775a : Update start-here.md
cf7c385 : Update about.md
48118d9 : Update about.md
94f48d9 : Update Gradle to 7.6.
576b463 : Add a sample inspired by `NullsFirstOrdering` and `NullsLastOrdering`.
267c950 : Delete old annotations (#357)
e5f8e13 : Add warning that spec is outdated, and make it more consistently outdated (#355)
9856c90 : Export both old and new packages from the module. (#353)
7b29dea : Remove the old package from hosted javadoc (#352)
e5fc93d : Remove MODULE from NullUnmarked (per #348) (#351)
36cf861 : healthy round of javadoc updates (#349)
ca56f22 : Update Sonar references in 'about' page (#341)
5d6b0c9 : fix link that brought the whole site down (#345)
d43fe4a : move /start-here to /docs/start-here, since it's permanently useful and this makes it appear in the sidebar (#343)
4efb2fa : random tiny doc fixes (#342)
00eb1cf : A new "Start here" page, update javadoc hrefs, misc other changes (#338)
dda8189 : Goodbye, @Implies (#339)
fff7b48 : Update README.md
5ac1fcf : Fix references to o.j.nullness -> o.j.annotations.
f96922c : Move annotations from `.nullness` to `.annotations`, leaving deprecated versions behind (#335)
7dc31e2 : docs: Add a missing svg asset (#336)
bf09452 : more small tweaks to home page (#331)
9eef007 : Link the central button to a list of things to read (#330)
2b12e1c : Tighten wording to similar length as the other two. (#329)
58f2eff : Update group links (#328)
d9bf441 : Switch back to Standard (from Tool-Independent) in titles (#327)
6917ad4 : Update README.md
a0268d2 : first round of javadoc revisions (#326)
c1c5a5f : revise about page (#325)
6ec92cb : use absolute link (#324)
00a5f22 : various small web site updates (#323)
506d50e : various small updates including comments from PR #321 (#322)
30ecefe : Update README.md
d0f0d5b : Trying moving the javadoc where we want it (#319)
0202610 : add javadoc instructions
e182ccd : Don't create Javadoc before Java 15.
fa7b1d8 : docs: Make the warning about changes pre-1.0 more prominent.
4ee6768 : Fix minor typo in user guide.
31ec5b8 : First javadoc commit, representing current working-decisions (#269)
94543bc : Mark 19-ea as experimental.
498e587 : Test that NullMarked.class.getAnnotations() really does fail under JDK8.
4242439 : Change to first person plural from singular.
0a8f5d2 : Run CI jobs on multiple JDK versions.
5f0879a : Bump Gradle to 7.5.1.
2870c39 : Set javadoc encoding.
11f45c2 : Update home page images to be more consistent between light/dark themes (#263)
72367b5 : Change `@Implies` from a repeatable annotation to one with an array of values. (#258)
a46c488 : Fix a homepage description and a few links (#262)
29b03a7 : Update jspecify.dev website (#257)
6a1837b : Update spotless to 6.9.1.
80c2887 : Add new potential annotations and update existing ones (#256)
b5b3051 : Update README.md
985a6c1 : Update README.md
c8a8487 : Update README.md
e07e9a9 : Update README.md
7c4f07a : Extend OutOfBoundsTypeVariable to show that usages remain non-nullable after substitution.
0a075ce : Update eisop entry.
914adca : Update index.rst
43cfab9 : update member list
582473a : mrjar: Set `@Target({..., MODULE})` in the root, and move `module-info` out.
6d9d111 : Remove unused imports in samples.
34043b6 : Reformat Java sources.
5a05bae : Fix/update timeline (which was never when we thought we'd be "done"), and reformat.
2ba3036 : Another sample inspired by Google-internal `FutureCallback`-like code.
283efa2 : Sample inspired by some Google-internal `FutureCallback`-like code.
d6f5943 : Better justify why substitution produces `MINUS_NULL` instead of `NO_CHANGE`.
45c4bb0 : Sample inspired by some Google-internal flag-parsing code.
7ef19d0 : ExtendsSameType, based on SuperSameType.
cd7df84 : Sample more directly inspired by the missing annotation on a `Map.compute` override itself.
07cf26f : Sample prompted by a missing annotation on an override of `Map.compute`:
3747a01 : Samples for arrays.
4ff6511 : Sample based on `Equivalence.onResultOf`:
202046a : Sample based on `Equivalence.wrap`:
aae7388 : Fix sentence to remove an extra "that".
6f78b15 : Sample exercising whether `Lib<? super Object>` is a subtype of `Lib<? extends Object>`.
845538b : Fix expectation in ContainmentExtendsBounded.
4ac4961 : Remove TODO about substitution into type-variable bounds.
c47abca : Handle the capture of `? super Foo` more simply and correctly.
5b2eceb : Remove NullnessUnspecified.
3dbd521 : Try to clarify "recognized locations."
dba5afb : Assorted minor spec clarifications.
c7088d1 : Expand samples for ternary operator.
6e58480 : Add a sample inspired by a mis-annotated Guava API that our prototype missed.
78f3984 : Add a sample for a problem with type-parameter bounds I had to fix as I tried to update our prototype checker.
ddbf498 : To avoid tickling a bug in our prototype, allow nullable type arguments.
d64e55a : Further expand samples for member selects to include those in non-@NullMarked code.
9bb4add : Add more samples of member selects that are dereferences and those that aren't.
ca75c36 : Add a TODO to probably make `T super @Nullable Foo` null-inclusive.
8d8d84a : Rephrase comment about what a warning "means."
ff59b90 : Minor updates for welcoming 2022.
e50c4f7 : Assorted minor clarifications.
ec51c41 : Standardize formatting of augmented types.
7fd0d92 : Remove weak justification for why we picked `MINUS_NULL`.
095c713 : Present "requirements on subcomponents" as only an example.
e752201 : Rewrite the explanation of the nullness-delegating subtyping rules for Java.
9a23e72 : Note that "nullness-subtype-establishing path" is used from 2 different definitions.
4615c93 : Fix location of concept-references.
64d9a19 : Attempt to summarize the lower-bound rule.
f050023 : Loosen a condition of the lower-bound rule.
4e8fee4 : Explain why we provide a nullness operator for the output of the lower-bound rule.
80410e5 : Provide a nullness operator for the output of the lower-bound rule.
bc7d914 : Remove "otherwise" case: There are naturally no edges if there are no applicable type parameters.
435f379 : Introduce the section on direct-supertype edges.
53e7241 : Say more about the null-inclusive and null-exclusive rules.
b8492fb : Make another attempt at explaining "mostly transitive."
cfd5506 : Fix typo.
54ccbbd : Attempt to explain subtyping and nullness subtyping better.
12ec671 : Fix typo.
c6ad697 : Suggest thinking of `UNSPECIFIED` as the lack of a type.
e1f9ac1 : Suggest scare quotes for terms like "subtyping."
22c1026 : Refer consistently to "relations" instead of "checks."
ac58054 : Remove false claim about `ImmutableList.Builder<@Nullable Foo>`.
59a297c : Attempt to clarify the role of `MINUS_NULL`.
ac65e9c : Run docs through a Markdown formatter.
9c17900 : Sample documenting the expected mismatch fixed by the previous 2 commits.
4941322 : Add missing bound in AugmentedInferenceAgreesWithBaseInference.
b354dde : Add missing bound in BoundedTypeVariableReturn.
d77763c : Rename Facebook -> Meta.
1962f5f : Update actions in workflows
b83c95a : build: bump up Gradle modules plugin to resolve warnings
0d39a0e : Set version to 0.0.0-SNAPSHOT.
6c3f286 : Allow build with JDK 17.
6785efd : Update index.rst
9a84cf6 : Remove design overview doc.
2c5fd80 : Expand samples around substitution for null-exclusive type parameters.
e23c033 : User guide changes to address the issues in #203 and several other issues (#207)
3190434 : Add `developer` section to the project metadata. (#202)
318ba3e : Still more spec clarifications.
94e1aff : Further set off non-normative comments as italic.
2715f09 : Present an informal, non-normative overview of the nullness operators.
517d322 : Better explain which locations are recognized.
f1d885a : Prepare for deployment onto Sonatype Nexus.
6aef68b : Keep `spotlessCheck` working with `JAVA_HOME` set to JDK16+.
b12be94 : Note that we haven't maintained the design overview recently.
8797899 : Still more spec clarifications.
a0d6589 : Set `group`.
b8cf474 : Fix typo.
c80ab8c : Remove stray characters.
71bef48 : Expand and clarify our "expectations" about tool behavior.
e017206 : Rewrite introduction of user guide (#198)
ff3984e : Fix cross-references and formatting (#199)
bc2c979 : Generalize warning (#197)
21a697d : Link Nullable from NullMarked. (#196)
b25c996 : Use http instead of https, which we haven't set up for the .org site.
2ca81d8 : Use jspecify.org (#195)
645ef90 : Link more to User Guide, and tweak an existing reference.
5cdb4db : Define "base type" in terms of the JLS.
f928ec6 : Remove a few glossary citations.
da83751 : More assorted spec clarifications.
93bda34 : Assorted spec clarifications.
4f24e2b : Candidates for minimalist javadoc in 0.2.0 (#191)
cc53ef3 : Partially fix anchors/fragments/targets/slugs.
de120a5 : Remove the word "possible" from `@Nullable` Javadoc.
086eb14 : Stub Javadoc for `@Nullable`.
364f62d : Add a missing backtick.
d5ccc5b : Initial version of the User Guide. (#189)
4198ec6 : Assorted minor clarifications.
4a0dbc7 : Standardize references to lowercase. Also, rewrap.
99fa94f : Remove advice about reStructuredText.
412e1b3 : Remove file used to prove domain ownership to Sonatype.
ed4c965 : Resolve warning of the document generation with markdown (#188)
ff87a6d : Explain how to view the built docs.
2b034c7 : Begin cutting over from reStructuredText to Markdown.
2fea900 : Demonstrate some fallout from the `T <: T!!` fix.
75d25e9 : Fix subtyping so that `T` is not necessarily a subtype of `T!!`.
7d1ed2c : Remove warning about concatenated docs.
1b0b161 : Link directly to OSSRH-69944.
5aa17c2 : Create file to demonstrate ownership to Sonatype.
900adc1 : Update Gradle and Java libraries.
9443a52 : Various clarifications.
ee6ae7d : Remove "trusted" from the "null-* under every parameterization" terms.
e29a6db : Use the term "directly annotated" (from the Java Compiler API, IIRC).
74ef532 : Remove the comment about the glossary definition of "augmented type."
62ee401 : Refer to "Java module" instead of "JPMS module."
72b1577 : Expand the introduction, including discussing normative and non-normative.
307584c : Further discuss how our rules are non-reflexive and non-transitive.
9970900 : Small clarifications around "multiple worlds."
5da6513 : Refer to "all worlds" and "some world" instead of "the least convenient world" and "the most convenient world."
53c59a0 : Further clarify "recognized locations."
1812df8 : Rename "General" to "+Details common to all annotations."
9d78ae0 : Split non-spec parts of the design overview back out.
6dad422 : Various spec changes I have had sitting in my client for a couple weeks:
611a6d0 : Put type annotations next to types.
898dcea : Link "unrecognized" to the section that now uses that term.
aff63da : Remove the distinction between "illegal" locations and "unspecified" ones.
820f9d8 : Say "null-marked scope" instead of "null-marked context."
8e11233 : Update 0.1 spec for rename of package from org.jspecify.annotations to org.jspecify.nullness.
c04a8f2 : Update 0.1 spec for rename of DefaultNonNull to NullMarked.
45704f1 : Remove goals.
4bef163 : Update prose for naming of NullMarked.
145e19c : Regenerate rST from Markdown.
e88788c : Concatenate 0.1 spec with subtyping spec.
b3c7ec2 : Rename tsttcpw.md to spec.md in preparation for merging with 0.1 spec.
7dd979f : Update file names for naming of NullMarked.
24dc388 : Update tsttcpw for rename of DefaultNonNull to NullMarked.
ebb63a6 : Remove duplicate comment about obligation.
cbb9b3f : Emphasize the distinction between information and tool behavior.
e30ea33 : Give an example of adding information when JSpecify leaves nullness unspecified.
3c2e870 : Emphasize that samples are used mostly as test cases nowadays.
c3dbcba : Rename DefaultNonNull to NullMarked.
df06583 : Create org.jspecify.nullness, and move the contents of org.jspecify.annotations there.
d1cd6b2 : Samples for some cases and non-cases of annotated outer types.
e060560 : Use `source +=` instead of fully overwriting.
fc8a3ed : Additional catch sample that somehow breaks my existing handling.
19961fa : Remove explicit setting of `srcDirs = ['src/main/java']`.
6fae6f0 : Eliminate separate compileJava8Java task.
8135398 : Redo mrjar handling.
57f79c6 : Put sources jar, prefer Java 9+ sources over Java 8.
cb77e72 : For Javadoc jar, prefer Java 9+ sources over Java 8.
eb59cb2 : Sample that checks subtyping on a union type.
ce3d194 : Samples for annotations on local array variables and their component types.
ea669bf : Replace 2020 with 2021 in document and copyright notice.
97d3318 : Minor updates for document.
d75eb80 : Put some throwing code in the body of the `try`.
8c6cc40 : Samples for annotated local variables.
1b04483 : Build a multi-release jar.
68a7dac : Remove mismatches for types that are both null and non-null.
ca0dcfb : Remove InstanceOfMethodCheck.
84b90e2 : Check Foo.class.isInstance instead of Object.class.isInstance.
3f6c660 : Treat CF and PMD like SpotBugs in participants table.
093c717 : Revert "Change `@DefaultNonNull` to `CLASS` retention."
41961e9 : Make class name consistent with file name.
d24a208 : Explain how MINUS_NULL arises.
69bdf78 : If a value is equal to null, make it nullable, even if it was non-null before.
8b44d53 : If a value is equal to null, make it nullable, even if it was unspecified before.
cafb9fc : Check `instanceof Foo` instead of `instanceof Object`.
985ddfc : Refactor samples README.
4fc89fd : Change `@DefaultNonNull` to `CLASS` retention.
60e827a : Sample showing that the type-parameter bound does not affect containment checks.
c6d29f7 : More samples that pass type arguments with unspecified nullness where non-null is required.
bb3c6a6 : Add "trusted" to the beginning of the "null-inclusive" and "null-exclusive" terms.
4466940 : Comment that nullness operators are not "inherited" from overridden methods' types.
ba8c924 : Rename "containing scopes" to "enclosing scopes."
2f800e9 : Minor clarifications.
817452f : Move substitution rules below the subtyping rules.
b67586c : Add a special case for one kind of "out of bounds" substitution.
6922fef : Fix repeated apostrophe-s, and convert en dashes to em dashes.
93fa2ec : Link to the section on intersection types more frequently.
b4044a1 : Move "Applying a nullness operator to an augmented type" below "Substitution."
4b2800b : Add MINUS_NULL, but don't add the rule that produces it yet.
23cf34b : Clarify which nullness operator is being preserved during capture conversion.
ed42c94 : Rename "Unioning an augmented type with a nullness operator" to "Applying a nullness operator to an augmented type."
824d120 : Rename "CODE_NOT_NULLNESS_AWARE" to "UNSPECIFIED."
d3a7a82 : Rename "additional nullness" to "nullness operator."
80dfb27 : Add a TODO to write and link a "Don't say '@Nullable'" doc.
2b3231c : Sample for casting from a wildcard capture to a type variable.
5aa6782 : Sample for an unusual case of wildcard capture.
08a3666 : Samples for casting the result of a wildcard-returning method.
0106114 : Put Javadoc before annotations instead of after.
2e5798b : Merge fullyQualified into a single package.
147409a : Merge fullyQualified classes into fewer packages.
d7b4fa5 : Add one more sample for the ternary operator.
aea86c7 : Sample for the ternary operator.
edd77f0 : Add samples for some more unrecognized type-annotation locations.
cc0c03c : Change retention to RUNTIME.
0e1b715 : Sample with annotated receiver.
9da93c5 : Samples that call a method accepting `<T extends @Nullable Object>` with `@Nullable E`.
5f67b5e : Note something to change if we support parameter contravariance.
e55eb43 : Bump year based on current staffing estimates.
96fdb3e : Add a few samples based on support of jspecify annotations in Kotlin
972fad0 : Samples for overrides that use parameters.
26163f8 : Sample that compares the result of an assignment expression to null.
82d1348 : Sample for Class.isInstance.
f9668d7 : Samples for the result of string concatenation.
20f8410 : Sample input to show that `Foo<T>` is a subtype of `Foo<T>`.
3b6b2bf : Sample to demonstrate that nullness does not affect overload selection.
e1e8a9b : Sample for casting to a primitive type.
9cc4e16 : Best-effort refactoring tool to migrate CF annotated-code to JSpecify-annotated code.
5b118e3 : Sample for class literals.
04f5611 : Sample for instanceof check.
04c33cc : Give NotNullAwareDirectUseOfTypeVariable a more accurate name.
7abdd5b : Sample with a null-aware usage of a type variable with a non-null-aware parameter declaration.
01d4b71 : Sample that passes a type variable with unspecified bound for a non-nullable type parameter.
5d5317a : Sample with return type of a type variable corresponding to a bounded type parameter.
ab0bd81 : Add a sample with uninitialized fields.
2995fe0 : Add a warning to ComplexParametric.
e996503 : Rename most Intersection* samples to CaptureConverted*.
95deacc : Sample that has an intersection type as a supertype.
bcf6ff2 : Sample for subpackages.
0e151a0 : Samples for fully qualified type names, including more than just plain classes.
1f68761 : Samples for null checks of type-variable usages.
1da16dc : Treat CF and PMD like SpotBugs in participants table
5a9fc92 : Samples for null checks.
99cd705 : Change Pivotal to VMware.
b74aa7b : Sample for catching and dereferencing an exception.
fe68b72 : Use parameters instead of fields.
b4e0841 : Samples for local variables.
7db141f : Samples for `if()` conditions.
bb10a6f : Also show primitive arrays used as type arguments.
9e157a3 : Show a nullable <array of int>, which is fine.
5cab2f0 : Sample with annotations on enum constants.
6da4ac6 : Specify project name, to avoid defaulting to directory name
328ba3c : Link to GitHub from jspecify.dev.
55e649d : Fix github-pages-deploy-action version
3697966 : Make class name consistent with file name (#143)
2a6f618 : Split "paradox" into 3 categories.
e685492 : Demonstrate that inference for augmented types follows inference for base types.
3ecf8bd : Provide an initial high-level overview of subtyping.
d8d5650 : Sample inputs for overrides in non-null-aware code.
8e339b6 : Document the format for samples.
169015c : Sample inputs for overrides.
1863663 : Sample for a package-level defaulting annotation.
22f7588 : Split IllegalLocations.
82dc378 : More tests for illegal locations, mostly doubly nested types.
0f6cc3b : Change casing to "JSpecify" (#116).
a473f35 : Simplified NotNullAwareUnboxing.
80baf7a : More samples for primitives.
5268231 : Rename ConstantsNeverNull to Constants.
0e1be76 : Samples for literals/constants.
26e4a78 : Only primitives with *annotations* are structurally illegal.
2fdacc6 : Add a sample in which there are multiple "paths" from one type variable to another.
84d0ece : Eliminate method bodies.
77948bc : Sample for a null-aware type-variable usage of a type parameter with a not-null-aware bound.
35514f1 : Fill out the suite of UseOfTypeVariableAsTypeArgument.
adf8c1e : UseOfTypeVariableUnspecAsTypeArgument, derived from NotNullAwareUseOfTypeVariableAsTypeArgument.
a74f085 : Make NotNullAwareContainment calls one-liners.
ce7cf14 : Samples for null literals.
ccc2667 : Samples for dereferences.
7ee7305 : Update to comment sequences from #122.
60399a1 : Use a separate class for *ClassToSelf.
94b139d : More tests of illegal (and legal) locations.
e65d5b7 : More samples for non-@NullAware/non-@DefaultNonNull containment.
3da1c5a : Sample for non-@NullAware/non-@DefaultNonNull containment.
03a3b6a : Add first samples for non-@NullAware/non-@DefaultNonNull code.
ecf150f : Add samples that check type arguments of bounds.
e3ed629 : docs: link to the new page from the site top (#125)
739fe24 : Update previous commit to use DefaultNonNull instead of NullAware.
7e553de : Revert "Remove references to @NullnessUnspecified."
e4ab659 : Add @NullnessUnspecified for testing purposes.
049230c : ci: update the target branch name to build (#121)
d068c45 : Test that annotations are not inherited.
2bfba90 : Tests for covariant and contravariant returns.
149f3df : Change NullAware to DefaultNonNull.
5f0ecfb : Remove references to @NullnessUnspecified.
a08677a : Enhance the build portability (#117)
3bf03b1 : Add TODOs in ComplexParametric.
4260910 : Share the sample inputs I've been experimenting with.
19b816c : Try to clarify "do what we mean" / "mostly transitive."
bdb8e9c : A path need not go to F, only to any type with that base type.
3bdd98c : Minor simplifications and clarifications to "Nullness-subtype-establishing direct-supertype edges"
14f989a : Fix typo.
90220f7 : Add a link to the slide (#106)
d6c4efc : Rename package and module, to prepare for install and deploy (#107)
dd5934c : Say "least/most convenient world" instead of "strict/lenient mode."
c4e5623 : Add license to module-info.
e996287 : Fully qualify annotation names.
c3cb14a : "The Simplest(?) Thing That Could Possibly Work for subtyping"
2f3fa10 : Set copyright author in docs to "The jspecify Authors"
87416a5 : Add $ before shell command for consistency.
e845bc8 : How to build without Docker; CommonMark vs. reStructuredText.
764c8e1 : Add "placeholder" warning to README.
893b0ec : Warn that all annotations are placeholders.
f59fae2 : Note that manual builds often aren't necessary
789bb79 : Define a JPMS module for our annotations (#105)
7cce133 : Introduce Gradle, Errorprone, Spotless and GitHub Actions
1d97016 : Restore "not an officially supported product."
ee18102 : Remove Eclipse as requested.
6586ed5 : Update README.md to use the jspecify name.
6045feb : Update AUTHORS to use the jspecify name.
202e866 : Update README for the new GitHub org.
4d07a27 : docs: add more translation
82cd6a3 : ci: deploy the HTML files to gh-pages branch
dc4b356 : ci: move files in the container
77b9e9c : docs: translate the doc partially
4f3e8be : docs: copied content from #52
2c9dafa : build: fix the wrong indent in the YAML file
c1b1bd9 : docs: add translation mechanism
31a9bf3 : docs: temporary commit
dc27bac : Remove sample annotation.
07502b2 : Update README.md
4960586 : This is not an officially supported Google product
620d044 : Add an example annotation.
0acec22 : Initial boilerplate
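
Many of the jspecify sample commits above (e.g. "Samples for null checks" and "If a value is equal to null, make it nullable") exercise null-check narrowing. A minimal plain-Java sketch of that pattern, with JSpecify's `@Nullable` shown only in a comment so the snippet compiles without the `org.jspecify` artifact (the class and method names here are illustrative, not from the samples themselves):

```java
// Sketch of the null-check narrowing pattern many samples exercise.
// JSpecify's @Nullable would annotate the parameter; it is shown as a
// comment so this compiles without the jspecify dependency.
public class NullCheckDemo {
    static int lengthOrZero(/* @Nullable */ String s) {
        if (s == null) {      // after this check, s is treated as non-null below
            return 0;
        }
        return s.length();    // safe dereference
    }

    public static void main(String[] args) {
        System.out.println(lengthOrZero(null));  // prints 0
        System.out.println(lengthOrZero("abc")); // prints 3
    }
}
```

A JSpecify-aware checker flags dereferences of a `@Nullable` value unless a check like the one above has narrowed it.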

+- Project: platform/external/kernel-headers

495c71c : Update to v6.11 kernel headers.
a17575e : Revert "Update to v6.11 kernel headers."
025dd59 : Update to v6.11 kernel headers.

+- Project: platform/external/kotlin-compose-compiler

34ed980 : Initial import of Compose Compiler plugin compatible with Kotlin Compiler 2.0.0
e983f98 : Initial empty repository

+- Project: platform/external/kotlinc

bc258c8 : Add a target for kotlin-serialize-compiler

+- Project: platform/external/kotlinx.atomicfu

31c3c49 : Version 0.23.1
46cf1c9 : Update Kotlin to 1.9.21 (#373)
0a4ff49 : Apply Native IR transformations only if Kotlin version in the project is at least 1.9.20 (#372)
aedae16 : Enable error loglevel for partial linkage messages (#367)
f83c00d : Version 0.23.0
c9972c7 : Introduce Native IR transformations (#363)
5fe6c0d : Introduce WebAssembly target (#334)
9d2a3e4 : Integration tests (#345)
90d52fd : Update Kotlin to 1.9.20 (#361)
f3e66a2 : [infra] Fix error in passing kotlin version for train
899b29d : [infra] Pass the custom repo with dev-builds of kotlinc to AFUGP's testing Gradle projects
2b86fbf : [infra] Remove outdated conditional removal of JS/Legacy-related buildscript parts
c379bee : [infra] Refactor Kotlin aggregate/Kotlin user project buildscript parts
4448c71 : [infra] Enable binary compatibility validation
c9287d1 : [migration] Kotlin LV 2.0: KT-59660
ef96cfc : [migration] Kotlin LV 2.0: bump Gradle version to 8.3
c59a938 : Fix the WA for the failing clean task on Windows (#351)
b2b815f : Set dependency between compileNativeTest tasks and Sign tasks (#347)
1f78257 : clean task can't delete the expanded.lock file on Windows (#350)
6b97f4b : Upgrade JDK target version to 11 in integration tests. (#349)
de38382 : Get rid of `previous-compilation-data.bin` file in META-INF (#344)
0a74d6f : Use kotlin.concurrent.AtomicInt in SynchronizedTest instead of the old kotlin.native.concurrent.AtomicInt.
dbdcd85 : Fix expect/actual mismatched member scope for `open expect` compilation errors (#329)
75667aa : Update apiVersion/languageVersion to 1.9 for atomics implementation.
2036268 : Update native atomics implementation
dec5b94 : Update of Gradle Version to 8.1 revealed the problem that publish task uses the output of sign task as an error. This commit sets the explicit dependency between Publish and Sign tasks. (#335)
8345a07 : Version 0.22.0
2151dcc : Update of Gradle Version to 8.1 revealed the problem that publish task uses the output of sign task as an error. This commit sets the explicit dependency between Publish and Sign tasks. (#335)
cfb2f22 : Update atomicfu-gradle-plugin tests according to the atomic properties visibility requirements updated in JVM compiler plugin (coming in 1.9.20).
7b764f9 : Version 0.22.0
367ad27 : Update Kotlin to 1.9.0 (#330)
b948f11 : Update kotlinx.metadata to 0.7.0 (#327)
46d34fd : Comply with new compiler restriction on actual declaration annotations (#325)
b0c444b : Remove obsolete no longer supported kotlin.mpp.enableCompatibilityMetadataVariant that will be promoted to ERROR as part of KT-55891 (#326)
7697fff : Conditionally remove targets that are removed after 1.9.20. (#320)
33fbf92 : Update gradle version to 8.1 (#319)
f8a2a75 : Updated gradle-node-plugin to be compatible with Gradle 7 (#316)
8c3b92a : Version 0.21.0
7ff4e2f : Do not rename original destination directory of the compile task. (#312)
88a5e14 : Always configure test compilation classpath (#308)
fea9b5d : Fix duplicated class files in jar for JVM-only projects (#303)
fb6555d : Configure jvm transformation task via multiplatform configuration function
c201c53 : Fix target compatibility
14ddcd1 : Update Kotlin to 1.8.20
0fac729 : Update gradle to 7.3 (#300)
d5962f0 : Opt-in into experimental interop (KT-57728) to fix aggregate build (#299)
4dd0a6c : Remove JS Legacy configurations (updated) (#296)
0f801b3 : Fix build error on using KGP removed method (#297) (#298)
d235d6e : Version 0.20.2
a9a08b9 : compileOnly dependency is not published to the compile classpath of the project that depends on the library with applied atomicfu -> use implementation (#290)
bfdd8a7 : Version 0.20.1
e2433f3 : Set apiVersion and languageVersion back to 1.4, because atomicfu-gradle-plugin should be compatible with the Kotlin version provided in Gradle. (#287)
f2d2d0b : Fix compiler plugin dependency in AtomicfuGradlePlugin (#286)
4daefcd : Clearly pass `kotlinx-atomicfu-runtime` dependency to the runtime classpath and avoid leaking of transitive dependency. (#283)
d2bfdcd : Replace 'interop-as-source-set-klib.gradle' with cinterop commonization
8cd5571 : Move license to the root so that GitHub recognizes it (#280)
f1d0401 : Version 0.20.0
ad78630 : Update Kotlin to 1.8.10
2a467e0 : Support all official K/N targets (#275)
38fef80 : Version 0.19.0
73391b4 : Update LV to 1.8 (#270)
3a3deb6 : Specify legacy JS backend explicitly in atomicfu-gradle-plugin tests (Kotlin/JS Legacy compiler backend was deprecated in Kotlin 1.8.0)
42a635a : Update Kotlin to 1.8.0
16d679c : Update LV to 1.7 (#267)
48ba832 : Legacy JVM bytecode transformer will not support atomic delegates for Kotlin languageVersion >= 1.7. (#266)
eee81a9 : WA for 1.7 languageVersion: changes in bytecode generation for unchecked casts
b9395cf : Update kotlin version for atomicfu-gradle-plugin tests
a1aba90 : Chore(infra): Prepare atomicfu for including to the Kotlin Aggregate build //KTI-1016 (#265)
89a8859 : Minor fix in README regarding delegated properties
462e5be : Replace jcenter() repo with mavenCentral() (#259)
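
Several commits above concern atomicfu's JVM transformation of atomic properties. In Kotlin source an atomicfu counter is declared as `private val value = atomic(0)`; on the JVM the plugin lowers that to a plain volatile field driven by a field updater, roughly like this hand-written Java sketch (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

// Rough JVM-level equivalent of an atomicfu `atomic(0)` property:
// a volatile int plus a static AtomicIntegerFieldUpdater, avoiding
// a boxed AtomicInteger object per instance.
public class Counter {
    private volatile int value = 0;
    private static final AtomicIntegerFieldUpdater<Counter> VALUE =
            AtomicIntegerFieldUpdater.newUpdater(Counter.class, "value");

    public int incrementAndGet() {
        return VALUE.incrementAndGet(this);
    }

    public int get() {
        return value;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.incrementAndGet();
        System.out.println(c.get()); // prints 1
    }
}
```

This is also why the commits above enforce that atomic properties be `private` or `internal`: the transformation rewrites every access site, which only works when all accesses are within the compiled module.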

+- Project: platform/external/kotlinx.coroutines

8810b50d : Add sdk_version to the kotlinx_coroutines_guava target
756c14f8 : Add sdk_version to the kotlinx_coroutines_guava target
cd696d3f : Version 1.8.1
c1ba5af8 : Fix the ticker channel example giving wrong results on the website (#4109)
2430d9a7 : Fix broken API reference links to the Project Reactor Javadoc (#4111)
f22b229a : fix the link to `Thread.uncaughtExceptionHandler`
515308dc : fix: get rid of horizontal scrolling by splitting the comment and showing more of the code mentioned (#4100)
f8d1821f : Fix typo in coroutine-context-and-dispatchers.md (#3941)
20707d35 : fix: remove `sampleStart` and `sampleEnd` comments from example in coroutine-context-and-dispatchers.md (#4081)
01485343 : chore: fix indentation in example loadContributorsBlocking() (#4085)
74774df1 : Docs: reference to The Hitchhiker's Guide to the Galaxy (#4082)
d106ac7c : Docs: avoid scrolling sample code; fix test description; add () to functions calls (#4080)
f9a4545e : Update Dokka to 1.9.20 (#4057)
0ca73585 : Fix `Flow.timeout` swallowing the channel closure exception (#4072)
60d2fe84 : Version 1.8.1-Beta
29e82130 : Ensure that inline functions only access atomics from the same class (#4041)
c990f597 : Potential workaround for #3820 (#4054)
48178b37 : Bump Kover version to `0.8.0-Beta` (#4055)
7bed717a : Reduce the overall time spent on stress tests in stress mode (#3890)
390be6fc : Rework CompletionHandler to avoid subclassing a functional type (#4010)
f86cbc76 : Fix a bug that could leave the coroutine scheduler's PRNG in an invalid state (#4052)
6c21369a : Remove outdated ExperimentalTime annotation (#4046)
8c516f5a : Version 1.8.0
90d9a302 : Disable DebugProbes.enableCreationStackTraces by default (#4028)
83fa0b4f : Supply MDC context propagation with examples.
1d044524 : Revisit SupervisorScope, supervisorScope, and coroutineScope docs
17bae3f1 : Don't say that job completion causes `CancellationException`
92df6e19 : Reword the prompt cancellation guarantee
8eb49638 : Improve the explanation of how `await` throws exceptions
fdc08186 : Clarify that using `runBlocking` in `suspend` functions is allowed
d0dabb9b : Ensure that flow operators propagate the cancellation exceptions (#4038)
761bdebf : Fix a build error on nightly compiler version (#4032)
be781c50 : Use dashes instead of asterisks for lists in KDoc (#4027)
555c65fd : Separate test facilities into a separate module and clean up
08491dc9 : Perform a major cleanup of Gradle scripts after the translation
2bf15195 : Translate *.gradle to *.gradle.kts
07226cc9 : Move *.gradle to *.gradle.kts
648ba8a5 : Re-structure DebugProbes documentation and mention classloading (#4024)
7c34ebc1 : Update the copyright notices to the new requirements (#4016)
e11395f9 : Do not fail on nullable YarnRootExtension fields in Gradle
8671bce6 : Get rid of native-mt leftovers (#4013)
d562f26e : Simplify internal handling of suppressed exceptions (#4007)
1a0287ca : Version 1.8.0-RC2
11dc96f7 : Work around #3968, start running integration tests for Native (#3996)
c8105d16 : Document newFixedThreadPoolContext in common code (#3994)
445f5837 : Fix a compilation error due to deprecations (#3995)
d12eb45b : Update to Gradle 8 (#3966)
7bb06388 : Ensure that non-local returns work properly in inline functions (#3986)
b05e15e1 : Increase code coverage in core module (#3962)
39ea1090 : Specify the rules for release candidate feedback (#3990)
605ec561 : Change the order of atomicfu and kotlin plugins in the classpath (#3984)
84bad926 : Update Dokka to 1.9.10 (#3973)
5251db64 : Simplify Dokka source link configuration (#3969)
26fab945 : Fix flakiness of MainDispatcherTestBase.testWithTimeoutContextDelayTi… (#3970)
69dc487d : Version 1.8.0-RC
c39f92ff : Make the benchmark instructions version-independent
85afa72f : WASM target implementation (#3849)
17bdf4a1 : Remove metaInfo usage (#3964)
7f32340a : Ensure Dispatchers.Main != Dispatchers.Main.immediate on Android (#3924)
3ceb35d0 : Fix kotlinx-coroutines-debug having the wrong module-info (#3948)
5cd845b6 : Fix warnings that manifested themselves as errors after multiple simu… (#3959)
7a87eabc : Update Gradle dokka configuration to make sure "source" button is visible in all API docs (#3960)
a79db374 : Create Dispatcher.Default threads with the same context classloader a… (#3877)
00a07672 : Explain SafeCollector hackery (#3811)
9a98eabe : Rollback runTest timeout to 60 seconds, configure it with getenv (#3945)
d5c0effc : Fix warnings in Native (#3957)
addb5247 : Work around KT-60304 by suppressing "INVISIBLE_SETTER"
aa5d0ce9 : Fix KT-62279 by suppressing `INVISIBLE_SETTER`
2d0bb86a : Suppress MULTIPLE_DEFAULTS_INHERITED_FROM_SUPERTYPES warnings (#3896)
f5b3f967 : Ensure that `println` is properly resolved in the JVM tests (#3955)
e1342d25 : Update Kotlin to 1.9.21 (#3954)
9b2bd3a7 : Work around KT-63238 by removing Any.unwrap altogether
2ec8c281 : Add INVISIBLE_REFERENCE to every INVISIBLE_MEMBER suppression
2d852a59 : Make SchedulerTask a proper interface
d559909f : Rename the `completion` field in `SafeCollector`
9e0cfe05 : Remove memoryModel = experimental (#3947)
142f7973 : Introduce jvmBenchmarks into kotlinx-coroutines-core (#3946)
28ed2cd8 : Change Gradle configuration to make sure "source" button is visible in all API docs (#3929) (#3930)
ff95ab76 : Properly round `Duration` instances to milliseconds (#3921)
ecb5b3e1 : Remove reference to actors from coroutines guide (#3935)
3dd48ace : Specify that `subscriptionCount` not conflating is a guarantee (#3934)
661e554b : Remove JDK7 reference from exception handling guide (#3928)
5d169381 : Update exception-handling.md (#3927)
e69bc2c6 : Kotlin 1.9.20 update followup (#3926)
b2ef7aba : Update Kotlin to 1.9.20 (#3925)
5a570e1f : Work around KT-58685 (#3881)
2b5d93f2 : Introduce the notion of @BenignDataRace (#3873)
44d46ea0 : Ensure `runTest` unsubscribes from the exception handler (#3909)
ed0cf7aa : Update README.md (#3910)
1fc8c379 : Add java.base/sun.security.action required for Lincheck to run on Java 17 (#3908)
64d88749 : "member scope mismatch for open expect" was downgraded from error to warning
2a580dfd : Tweak `CoroutineScope.plus` docs for accuracy (#3903)
7659d65a : Introduce workaround (in fact, fix the previously incorrect expect-actual mapping) for CancellationException. (#3850)
84a2cbaa : Minor: add forgotten Suppress for ACTUAL_CLASSIFIER_MUST_HAVE_THE_SAME_MEMBERS_AS_NON_FINAL_EXPECT_CLASSIFIER (#3891)
fd5a58bd : Document that SharedFlow.collect subscribes by its first suspension (#3885)
7e2d2374 : Fix the documentation for the Main dispatcher (#3879)
cedb0669 : Fix the display of a code snippet on the website (#3878)
454acd88 : Kotlin 1.9.20 migration: Fix ACTUAL_CLASSIFIER_MUST_HAVE_THE_SAME_MEMBERS_AS_NON_FINAL_EXPECT_CLASSIFIER compilation error (#3870)
9d1c62af : Get rid of ACTUAL_FUNCTION_WITH_DEFAULT_ARGUMENTS to reduce the tensi… (#3869)
4673870a : Mute "expect/actual classes are experimental" warning
2d64ba1f : Kotlin 1.9.20 migration: Fix NO_ACTUAL_CLASS_MEMBER_FOR_EXPECTED_CLASS compilation error (#3870)
aa6b74e3 : Removing unused imports in build.gradle (#3860)
6e92c23c : Change "parallel decomposition" to "concurrent decomposition" (#3856)
f2274323 : Update coroutines-basics.md
5ac64726 : Fix NO_ACTUAL_CLASS_MEMBER_FOR_EXPECTED_CLASS compilation error. The compilation error appeared after KT-59665
b9ff2183 : Fix NO_ACTUAL_CLASS_MEMBER_FOR_EXPECTED_CLASS compilation error #2 (#3851)
e1a0b5dc : Update atomicfu to 0.22.0 (#3848)
203ab5b7 : Explain potential memory leak that nulling BufferedChannelIterator.continuation out prevents (#3837)
9942a308 : Fix expect/actual mismatched member scope for `open expect` compilation errors
db57e088 : Fix NO_ACTUAL_CLASS_MEMBER_FOR_EXPECTED_CLASS compilation error (#3826)
1ccc9689 : Move async-style warning above the paragraph in question (#3835)
b78bbf51 : Update Knit to 0.5.0-Beta (#3838)
1781a1ea : Add Optin for kotlinx.cinterop.UnsafeNumber due to KT-59859
a1134f2c : Revert "Add optins required in Kotlin 1.9"
bb21e8eb : Ignore adding additional optIns also for "jvmCoreMain" source set
77b630c6 : Migrate from the deprecated native atomics
6fb30e24 : Add optins required in Kotlin 1.9
e0dd2773 : Conditionally remove native targets that are removed in 1.9.20 (#3825)
bb54504e : Fix typo in CommunityProjectsBuild.kt build script (#3829)
bc35cd8f : Update Kotlin to 1.9.0 (#3814)
35d88f1b : Version 1.7.3
47f0a46d : Fix the IDEA debugger (#3822)
99f08044 : fix: errors in runnable snippet in doc (#3818)
3c9e8564 : Make annotations on expect declarations comply with new compiler restriction
c675e3fb : 3789: Update flow.timeout example to re-throw (#3801)
9b06a690 : Clarify documentation of Mutex.lock() behavior (#3816)
ef623b84 : Revert "Remove `@PublishedApi` from `unwrap` to comply with new compiler restriction (#3810)"
5c4a252a : Stop building and publishing compatibility MPP metadata variant (#3809)
38909c75 : Fixed visibility of atomic properties, they are required to be private or internal (#3808)
4cfd88aa : Update kotlinx-coroutines-core build file for IDE friendliness (#3795)
2bd0f290 : Remove `@PublishedApi` from `unwrap` to comply with new compiler restriction (#3810)
fa218c64 : Fix ABSTRACT_MEMBER_NOT_IMPLEMENTED compilation with newer Kotlin (#3805)

+- Project: platform/external/kotlinx.serialization

33ae4016 : Export r8 rules to users

+- Project: platform/external/libaom

d6f30ae47 : Remove experimental feature AOM_CODEC_USE_PRESET
7dc70f92e : Remove experimental status of AOM_CODEC_USE_PRESET
71a912841 : av1_encoder.dox: fix doxygen 1.9.8 warnings
f8eb8a5b4 : rtc: Change subpel motion speed feature for speed 11
6c3ba08f7 : Do not mention experimental AOM_CODEC_USE_PRESET
6f04c0253 : minty: Add neon speed up to change log
d441f2d6e : Update version and changelog
d608f882d : Enable skip_encoding_non_reference when frame_dropper enabled
0983375d9 : rtc: Fix to color_sensitivity for fading scene transitions
924099142 : Avoid the use of aom_write_bit_buffer
33e1a1e4d : Do not use aom_write_bit_buffer in write_av1config
7ab13f951 : fft.c: make some functions static
00fc79453 : rm inv_wht_sse2.asm
acf9c27f1 : Do not export aom_dsp_rtcd,aom_scale_rtcd,av1_rtcd
e39f938bd : decodetxb.c: make av1_read_coeffs_txb() static
8665342b5 : metadata_test: fix test w/CONFIG_REALTIME_ONLY=1
2ca5aecaf : metadata_test.cc: remove unneeded bitstream.h include
463d73047 : aom/exports_com: rm aom_img_metadata_array_{alloc,free}
1ea0656f2 : global_motion.c: make av1_warp_error() static
f3f4638ab : decodeframe.c: make av1_read_frame_size() static
7e245f0f7 : av1/exports_com: rm aom_wb_write_unsigned_literal
7e1c71b6f : av1/exports_dec: rm av1_add_film_grain
018ed756a : aom/exports_test: rm aom_*_metadata_to_frame_buffer
8ef799506 : av1/exports_test: rm av1_fwd_txfm2d_test
669fcf17a : rtc: Constrain the reset of ref_idx for spatial layers
93a3f4de1 : Add I8MM 4/1 scale spec. for av1_resize_and_extend_frame
4d7c17180 : Add I8MM 2/1 scale spec. for av1_resize_and_extend_frame
3211b7fb5 : Add Neon Dotprod 4/1 scale spec. for av1_resize_and_extend_frame
7a8fa705b : Add Neon Dotprod 2/1 scale spec. for av1_resize_and_extend_frame
6c62c374e : Add 6-tap spec. for av1_resize_and_extend_frame_neon
f51490d5e : Clean up av1_resize_and_extend_frame_neon
01126ee44 : Add unit tests for av1_resize_and_extend_frame
c2cf6f6b7 : Remove the unused function av1_resize_frame420()
fece42acd : Revert "Export film grain funcs again"
381748126 : Export film grain funcs again
5cceaf8a4 : rtc-svc: Fix to reset ref_idx for svc
a28da9475 : Don't export film grain funcs for examples & tests
ea17a0feb : Add test/noise_model_test.cc to test_libaom target
3177866cd : Add SVE implementation of non-downsampling av1_compute_stats
5903d21c9 : Add SVE implementation of av1_compute_stats_highbd
b42f4a19a : Fix bitstream buffer size updated for frame parallel encoding
07ec09a7a : rtc: Allow for lower-QP on scene/slide changes
4ce4878fd : Remove AV1_COMP_DATA::frame_display_order_hint
06257035d : Add buffer bound checks for FPMT and Annex B buffers
ee2623e31 : reenable SVE/SVE2
bca512200 : generate_config.sh: use clang for arm64
c2143dec6 : Optimise av1_apply_temporal_filter_neon_dotprod
892d77fed : Optimize av1_apply_temporal_filter_neon
b78a27110 : encoder_encode: Check cpi_data.frame_size earlier
09c7db893 : Some cleanup for av1_convert_sect5obus_to_annexb()
0367482b8 : Avoid overflow in rc->this_frame_target * 7
a91a05634 : Fix to possible integer overflow in reset_rc
b9c2f49eb : Optimize Neon impl. of av1_compute_stats with no downsampling
4731ee2c0 : Optimize Neon implementation of av1_compute_stats_highbd
9b1a1ce1c : Saturate cast double to int for this_frame_target
01ef5628d : Remove the EncodeFrameResults struct
cf88b1261 : aarch64_cpudetect: detect SVE/SVE2 on Windows
41722afe2 : aarch64_cpudetect: detect I8MM on Windows via SVE-I8MM
a060e1806 : reconinter.c: wrap fns in CONFIG_AV1_DECODER check
b6b67834c : resize.c: make av1_upscale_normative_and_extend_frame static
226cedfc9 : av1_convolve_2d_facade: use intrabc rtcd functions
d32bdcdc9 : rtc: Disable speed feature at speed 11 screen
1dba35885 : Remove grain and noise model from CONFIG_REALTIME_ONLY
8d4d4d8fd : third_party/vector: comment out unused code
3c7b9693b : don't define aom_var_2d_u16() w/CONFIG_AV1_HIGHBITDEPTH=0
476841e4e : bitreader_buffer: rm some fns w/CONFIG_AV1_DECODER=0
b8e520d01 : rtc: Remove cfl setting in svc_encoder_rtc
f3cf030f5 : move av1_iqmatrix() to decodeframe.c
692b0b7db : av1.cmake: rm SVE srcs w/CONFIG_REALTIME_ONLY=1
8e516c4a6 : Binary size reduction from cfl removal
af41a65ce : rtc: update documentation on set_ref_frame_config
316eee7f6 : rtc-svc: Reset ref_map_idx for references not used
0c9ac983f : {highbd,}sad_sse2.asm: add missing CONFIG_REALTIME_ONLY check
fe2bb817f : Revert^2 "Binary size reduction for realtime build"
538a49e69 : arm/warp_plane_*: Hoist filter load if alpha == beta == 0
975db3301 : rtc: Allow QP to decrease more aggressively for static content
1b6cd7247 : rtc: Add checks to set_ref_frame_config
47f42d050 : Add CONFIG_AV1_DECODER to resize tests
7ab9ad1c9 : rtc: Fix for dropped frames for svc
05d5d9b68 : Merge two for loops in av1_set_svc_fixed_mode()
a50971334 : Use the local variable rtc_ref
dadbea849 : Simplify prototype of av1_write_uleb_obu_size()
08f6c2a1a : rtc: Add CONFIG_AV1_DECODER to resize and svc tests
d5265b173 : Fix Clang -Wunused-but-set-variable warnings
a64332214 : Check it's safe to cast 64bit duration to uint32_t
fc9c85897 : Revert "Binary size reduction for realtime build"
f10e098b5 : Add AOM_CODEC_USE_PRESET for aom_codec_enc_init()
8a09cc2de : ratectrl.c,cosmetics: fix some typos
08b07b222 : Add more documentation for aom_svc_ref_frame_config
f3cf370b2 : Binary size reduction for realtime build
6d875d0bf : tools_common.sh: add TOOLS_COMMON_DIR variable
dba2aaf00 : Add the experimental feature policy
dfa2b4118 : av1_c_vs_simd_encode.sh: fix variable dereference
dee70e618 : av1_c_vs_simd_encode.sh: fix arm64 source list
5f48eb3b3 : aom_image.h: add lifetime note for img_data
6e1ad5ac5 : warp_plane_{neon_i8mm,sve}.c: Move add_const into USDOT
778bc191f : rtc: Round framerate for computing max_consec_drop
8536402b3 : av1_c_vs_simd_encode.sh: fix missing cpu-used vals for arm
bcd5f2d46 : Add buffer bounds checks to av1_pack_bitstream()
78fa6dbf7 : Remove unneeded casts in av1_write_last_tile_info
5cbe41832 : rtc: Fix setting of intra-only frame for set_ref_frame_config
e736ba87a : Remove two unnecessary/incorrect uint32_t casts
c78e883c6 : Do not use a bit writer in av1_write_obu_header()
febe83829 : cmake: only build threepass.c w/CONFIG_THREE_PASS=1
599199beb : Clamp the calculation of sb64_target_rate to INT_MAX
20cc3d17d : av1_c_vs_simd_encode.sh: add --fresh to cmake call
f6e03fbc1 : Tweaks to VBR rate control based on size
62ab1810e : README.md: add security report note
a4828c116 : intrapred_neon.c: add missing const for static tables
fc04863e6 : Enable palette for default_extra_cfg for CONFIG_REALTIME_ONLY
26e33eefc : encoder.c: make 2 functions static
9608e8af7 : av1_quantize.c: make 2 functions static
09c26b0c1 : tx_search.c: make av1_uniform_txfm_yrd() static
b6cd51f50 : Pass bitstream out buf size to av1_pack_bitstream
f901db8bf : rtc: Adjust threshold on datarate test
374d2cae7 : rtc: Reduce psnr thresholds in rt tests
9ed60cc5f : Add macro name as comment for header guard #endif
17dfba0fb : Add #ifndef header guard to config/aom_version.h
7e1785a94 : rtc: Update default_extra_cfg for CONFIG_REALTIME_ONLY
8e728c01b : hash_motion.c: make av1_hash_table_clear_all() static
dc6229a0d : resize.c: remove most av1_{highbd,}_resize_frame*()
8906f0031 : rd.c: remove function w/CONFIG_REALTIME_ONLY=1
416dbf190 : cdef: add missing CONFIG_AV1_HIGHBITDEPTH check
172161d87 : cdef_block.c: make cdef_directions_padded[] static
43cb919ec : don't define SumOfSquaresTest w/CONFIG_REALTIME_ONLY=1
a6f6232bc : mcomp.c: make several av1_init_*_compensation*() static
00e1fb83e : palette.c: make av1_remove_duplicates() static
50cdf4364 : ratectrl.c: make some functions static
d977f753c : txb_common.c: make av1_nz_map_ctx_offset_*[] static
6219314ee : remove aom_ssim_parms_16x16()
d23155855 : rd.c: don't define av1_fill_lr_rates() w/CONFIG_REALTIME_ONLY=1
a5ee6ea31 : encoder.c: make av1_increment_scaled_ref_counts_fpmt() static
edd540129 : reconinter.c: make av1_modify_neighbor_predictor_for_obmc static
b33512221 : rd.c: remove av1_model_rd_surffit()
a24432b07 : cdef.c: add missing CONFIG_AV1_HIGHBITDEPTH check
0f98fd86d : remove aom_get_mb_ss() w/CONFIG_REALTIME_ONLY=1
b95c9fdda : Remove binary_codes_writer.[hc] w/CONFIG_REALTIME_ONLY=1
a73386537 : partition_strategy.c: make some functions static
c1ac9ef23 : rtc: Fix for artifacts for screen with active_maps
374426728 : aq_variance.c: make some fns static w/CONFIG_REALTIME_ONLY=1
878cc4e83 : aom_dsp.cmake: rm highbd asm w/CONFIG_AV1_HIGHBITDEPTH=0
fa82bedb8 : bitwriter_buffer.c: make aom_wb_overwrite_bit() static
8057a57fb : Change default_extra_cfg to an array
feaeb3317 : Reorder the two default_extra_cfg structs
35c90ab67 : rtc: Bugfix for active_maps with sb_size=128

+- Project: platform/external/libavc

7d6f857 : libavcdec: Fix integer overflow issue in ui_max_frame_num
1a896d4 : libavc: Add VSCode configuration files for Linux & MacOS
266cda3 : libavc : Enable support for MacOS
045d0c9 : libavc : Fix mutex initialization index in apv_proc_start_mutex

+- Project: platform/external/libbpf

415b9e9 : Revert "Add visibility to '/test' for use in tests"
9fb631e : Update path to libbpf-sys crate.
0527346 : Update visibility of libbpf-rs
d7876da : libbpf: Add pieces required to build rust crates
a508615 : libbpf: bump version to v1.4.5
ebd40ad : libbpf: detect broken PID filtering logic for multi-uprobe
92943f3 : libbpf: bump version to v1.4.4
fd61a39 : libbpf: fix BPF skeleton forward/backward compat handling
f8c6e03 : libbpf: bump version to v1.4.3
92f681c : libbpf: keep FD_CLOEXEC flag when dup()'ing FD
1b35758 : libbpf: bump version to v1.4.2
17b7d5b : libbpf: improve early detection of doomed-to-fail BPF program loading
d61acc8 : libbpf: fix libbpf_strerror_r() handling unknown errors
5116e2d : libbpf: handle yet another corner case of nulling out struct_ops program
ce41163 : libbpf: remove unnecessary struct_ops prog validity check
63738ed : libbpf: v1.4.1 bugfix release
2809148 : libbpf: better fix for handling nulled-out struct_ops program
95222a5 : libbpf: handle nulled-out program in struct_ops correctly
20ea95b : ci: sync DENYLISTs with BPF CI
902af69 : ci: clean up temporary patch
25a9cc2 : ci: regenerate latest vmlinux.h
8db4a2f : sync: latest libbpf changes from kernel
7fee466 : libbpf: Define MFD_CLOEXEC if not available
ddf722f : libbpf: fix u64-to-pointer cast on 32-bit arches
137193b : libbpf, selftests/bpf: Adjust libbpf, bpftool, selftests to match LLVM
d2676a5 : bpf: Sync uapi bpf.h to tools directory
4d95d8b : libbpf: Add new sec_def "sk_skb/verdict"
f5828cc : libbpf: add support for BPF cookie for raw_tp/tp_btf programs
cbd6e35 : bpf: support BPF cookie in raw tracepoint (raw_tp, tp_btf) programs
7cfc365 : libbpbpf: Check bpf_map/bpf_program fd validity
a5459ea : libbpf: Skip zeroed or null fields if not found in the kernel type.
f84ee80 : libbpf: Prevent null-pointer dereference when prog to load has no BTF
2d042d2 : libbpf: Recognize __arena global variables.
4524a45 : libbpf: Add support for bpf_arena.
0868253 : libbpf: Add __arg_arena to bpf_helpers.h
6de941b : bpf: Disasm support for addr_space_cast instruction.
1675c13 : bpf: Introduce bpf_arena.
385d344 : libbpf: Allow specifying 64-bit integers in map BTF.
d71c0ed : netdev: add queue stat for alloc failures
5e80833 : netdev: add per-queue statistics
2778cbc : ci: add xdp_bonding fixes from bpf/master
4f87586 : sync: latest libbpf changes from kernel
438adf4 : libbpf: Rewrite btf datasec names starting from '?'
fc8b86b : libbpf: Struct_ops in SEC("?.struct_ops") / SEC("?.struct_ops.link")
d5d0b6e : libbpf: Replace elf_state->st_ops_* fields with SEC_ST_OPS sec_type
060c604 : libbpf: Sync progs autoload with maps autocreate for struct_ops maps
cb42614 : libbpf: Honor autocreate flag for struct_ops maps
aded62b : libbpf: Tie struct_ops programs to kernel BTF ids, not to local ids
f7fd5db : libbpf: Allow version suffixes (___smth) for struct_ops types
00b08dc : bpf: Introduce may_goto instruction
bf52494 : libbpf: Correct debug message in btf__load_vmlinux_btf
acfaeff : libbpf: Convert st_ops->data to shadow type.
0758d8b : libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
fa4d002 : bpf: Replace bpf_lpm_trie_key 0-length array with flexible array
f749be8 : bonding: Add independent control state machine
fb98d4b : include: fix BPF_CALL_REL definition
f4e9b60 : ci: clean up bpf_test_no_cfi.ko for v5.5.0 and v4.9.0.
ff95bd6 : sync: latest libbpf changes from kernel
a894b0c : tools headers UAPI: Sync linux/fcntl.h with the kernel sources
afa81fb : bpf: Clarify batch lookup/lookup_and_delete semantics
16e68ab : libbpf: Make remark about zero-initializing bpf_*_info structs
b19fdbf : libbpf: Add support to GCC in CORE macro definitions
445486d : ci: Pass arch parameter to setup-build-env
820bca2 : ci: verifier_global_subprogs can't be run on 5.5
8a8feae : sync: latest libbpf changes from kernel
a20b60f : libbpf: Use OPTS_SET() macro in bpf_xdp_query()
b24a627 : libbpf: fix return value for PERF_EVENT __arg_ctx type fix up check
25fe467 : ci: allowlist tests validating libbpf's __arg_ctx type rewrite logic
f11758a : sync: latest libbpf changes from kernel
cbb8ba3 : sync: auto-generate latest BPF helpers
95b4beb : libbpf: Add missed btf_ext__raw_data() API
5b7613e : libbpf: Add btf__new_split() API that was declared but not implemented
245394f : libbpf: Add missing LIBBPF_API annotation to libbpf_set_memlock_rlim API
3b19b1b : libbpf: Call memfd_create() syscall directly
7529e0c : libbpf: Remove unnecessary null check in kernel_supports()
688879f : libbpf: add bpf_core_cast() macro
0303e25 : libbpf: add __arg_trusted and __arg_nullable tag macros
9b306ac : libbpf: Add some details for BTF parsing failures
c57fb75 : libbpf: fix __arg_ctx type enforcement for perf_event programs
0b412d1 : libbpf: integrate __arg_ctx feature detector into kernel_supports()
3b09738 : sync: remove NETDEV_XSK_FLAGS_MASK which is not in bpf/bpf-next anymore
5139f12 : sync: latest libbpf changes from kernel
830e0d0 : libbpf: Fix faccessat() usage on Android
fad5d91 : libbpf: make sure linux/kernel.h includes linux/compiler.h
8ca3062 : Makefile: add features.o to Makefile
274d603 : libbpf: add BPF_CALL_REL() macro implementation
0f84f3b : ci: regenerate vmlinux.h
3dea2db : ci: drop custom patches for fixing upstream kernel issues
2f81310 : sync: latest libbpf changes from kernel
0e57fad : sync: auto-generate latest BPF helpers
a36646e : libbpf: Support BPF token path setting through LIBBPF_BPF_TOKEN_PATH envvar
e1a4380 : libbpf: Wire up BPF token support at BPF object level
a3b317a : libbpf: Wire up token_fd into feature probing logic
d42f0b8 : libbpf: Move feature detection code into its own file
9bf9504 : libbpf: Further decouple feature checking logic from bpf_object
9454419 : libbpf: Split feature detectors definitions from cached results
8082a31 : libbpf: Add BPF token support to bpf_prog_load() API
ac4a66e : libbpf: Add BPF token support to bpf_btf_load() API
8002c05 : libbpf: Add BPF token support to bpf_map_create() API
5cc8482 : libbpf: Add bpf_token_create() API
21fb08c : bpf: Add BPF token support to BPF_PROG_LOAD command
eb9d108 : bpf: Add BPF token support to BPF_BTF_LOAD command
1386c15 : bpf: Add BPF token support to BPF_MAP_CREATE command
5cd6a84 : bpf: Introduce BPF token object
385ae49 : libbpf: Ensure undefined bpf_attr field stays 0
517ff8d : libbpf: Correct bpf_core_read.h comment wrt bpf_core_relo struct
0b0dfaf : libbpf: Find correct module BTFs for struct_ops maps and progs.
d767ead : bpf: pass attached BTF to the bpf_struct_ops subsystem
d70e207 : bpf: pass btf object id in bpf_map_info.
fe50838 : bpf: Store cookies in kprobe_multi bpf_link_info data
de2f366 : bpf: Add cookie to perf_event bpf_link_info records
390c623 : libbpf: call dup2() syscall directly
89ca11a : libbpf: Apply map_set_def_max_entries() for inner_maps on creation
4c3742d : bpf: Sync uapi bpf.h header for the tooling infra
f4211a7 : libbpf: warn on unexpected __arg_ctx type when rewriting BTF
939ab64 : libbpf: feature-detect arg:ctx tag support in kernel
82ebbd9 : perf/x86/intel: Support branch counters logging
9705c4c : perf: Add branch stack counters
528cb9d : README: update Ubuntu link
f81eef2 : ci: skip two tests failing due to kernel bug
feabd96 : ci: regenerate vmlinux.h
1570d56 : Makefile: bump to v1.4.0 dev version
e2203b3 : sync: latest libbpf changes from kernel
3102067 : libbpf: implement __arg_ctx fallback logic
a4f0740 : libbpf: move BTF loading step after relocation step
9447025 : libbpf: move exception callbacks assignment logic into relocation step
4d68ea9 : libbpf: use stable map placeholder FDs
2ea3d80 : libbpf: don't rely on map->fd as an indicator of map being created
e9ce551 : libbpf: use explicit map reuse flag to skip map creation steps
3fb45d3 : libbpf: make uniform use of btf__fd() accessor inside libbpf
2e49eb8 : net/sched: Remove uapi support for CBQ qdisc
5473fe6 : net/sched: Remove uapi support for ATM qdisc
c04d1b6 : net/sched: Remove uapi support for dsmark qdisc
717798e : net/sched: Remove uapi support for tcindex classifier
f2c790c : net/sched: Remove uapi support for rsvp classifier
c008eb9 : libbpf: Fix NULL pointer dereference in bpf_object__collect_prog_relos
6252a2f : libbpf: Skip DWARF sections in linker sanity check
c378eff : libbpf: add __arg_xxx macros for annotating global func args
c65b319 : Revert BPF token-related functionality
43e7309 : xdp: Add VLAN tag hint
b166b99 : libbpf: support BPF token path setting through LIBBPF_BPF_TOKEN_PATH envvar
5df9eba : libbpf: wire up BPF token support at BPF object level
b14daa8 : libbpf: wire up token_fd into feature probing logic
fab327c : libbpf: move feature detection code into its own file
feda072 : libbpf: further decouple feature checking logic from bpf_object
11c977f : libbpf: split feature detectors definitions from cached results
9d2f8aa : libbpf: Add BPF_CORE_WRITE_BITFIELD() macro
5f68c57 : libbpf: Add pr_warn() for EINVAL cases in linker_sanity_check_elf
235ea85 : bpf: Load vmlinux btf for any struct_ops map
400cbd6 : bpf: rename MAX_BPF_LINK_TYPE into __MAX_BPF_LINK_TYPE for consistency
ec1cab7 : libbpf: add BPF token support to bpf_prog_load() API
207b6eb : libbpf: add BPF token support to bpf_btf_load() API
a23b8ff : libbpf: add BPF token support to bpf_map_create() API
f8954ca : libbpf: add bpf_token_create() API
1ebea57 : bpf: add BPF token support to BPF_PROG_LOAD command
544acb9 : bpf: add BPF token support to BPF_BTF_LOAD command
9abcc5e : bpf: add BPF token support to BPF_MAP_CREATE command
33de35f : bpf: introduce BPF token object
ac9cd25 : netdev-genl: spec: Add PID in netdev netlink YAML spec
cfa6e42 : netdev-genl: spec: Add irq in netdev netlink YAML spec
36f30e4 : netdev-genl: spec: Extend netdev netlink spec in YAML for NAPI
e4fcfe7 : netdev-genl: spec: Extend netdev netlink spec in YAML for queue
419eab9 : xsk: Add option to calculate TX checksum in SW
95134be : xsk: Add TX timestamp and TX checksum offload support
2f95d28 : xsk: Support tx_metadata_len
afb384f : bpf: Add link_info support for uprobe multi link
467dd7b : libbpf: Add st_type argument to elf_resolve_syms_offsets function
9c794e5 : libbpf: Start v1.4 development cycle
eb40a93 : tools: ynl: add sample for getting page-pool information
1baa3e2 : ci: move /dev/kvm permissions setup from to actions/vmtest.yml
1b2ae67 : ci: custom patch to patch out BPF_F_TEST_REG_INVARIANTS flag
20c0a9e : sync: latest libbpf changes from kernel
b88b3ac : sync: auto-generate latest BPF helpers
96ed1c5 : bpf: rename BPF_F_TEST_SANITY_STRICT to BPF_F_TEST_REG_INVARIANTS
7ccc41c : bpf: add register bounds sanity checks and sanitization
785a079 : bpf: Add crosstask check to __bpf_get_stack
a6b9909 : ci: disable sockopt selftest for 5.5 kernel
4161e1f : ci: disable a number of selftest causing CI for LATEST kernel
93f360c : ci: don't set /dev/kvm permissions when CI user is root
5ff0102 : ci: use config.vm for kernel config when present
0c54691 : ci: apply temporary patch to make bpf-next build
168630f : ci: give /dev/kvm 0666 permissions inside CI runner
5d4237d : ci: regenerate vmlinux.h
fa0e866 : sync: latest libbpf changes from kernel
0fa5ff4 : bpf: Use named fields for certain bpf uapi structs
2d5df9f : libbpf: Fix potential uninitialized tail padding with LIBBPF_OPTS_RESET
2cb0236 : libbpf: Add link-based API for netkit
cc7f085 : tools: Sync if_link uapi header
62b1e49 : netkit, bpf: Add bpf programmable net device
3189f70 : docs: attempt to fix .readthedocs.yaml
6a57760 : sync: latest libbpf changes from kernel
acecaf8 : sync: auto-generate latest BPF helpers
365cefa : libbpf: Don't assume SHT_GNU_verdef presence for SHT_GNU_versym section
f4b6dcf : documentation/bpf: Document cgroup unix socket address hooks
7487874 : libbpf: Add support for cgroup unix socket address hooks
8a08d63 : bpf: Implement cgroup sockaddr hooks for unix sockets
c9f8eb5 : bpf: Derive source IP addr via bpf_*_fib_lookup()
1c03588 : bpf: Add ability to pin bpf timer to calling CPU
20c1170 : libbpf: Fix syscall access arguments on riscv
b44eb3a : libbpf: fix bpf-checkpoint-commit
1464826 : ci: Regenerate latest vmlinux.h for old kernel CI testts

+- Project: platform/external/libchrome

fe2bcfd55c : Compile strcat.cc

+- Project: platform/external/libchrome-gestures

3e416e5 : Include is_tap when stringifying buttons gestures
9cd1400 : Enable c++20
0095adf : Allow finger lock to switch to a new finger
f4e0375 : Make HardwareState strings nicer to read

+- Project: platform/external/libconfig

3c8f496 : Pin to C99.

+- Project: platform/external/libcups

150fb7f5 : Remove unused -Wno-implicit-function-declaration.
fcc500ff : Add additional printing owner

+- Project: platform/external/libcxx

44b6f30e1 : Add dirgroup for trusty genrule

+- Project: platform/external/libcxxabi

2add5c0 : Add dirgroup for trusty genrule

+- Project: platform/external/libdav1d

2156d85 : libdav1d: enable sve2 intructions
a6f1e26 : libdav1d : enable NEON DotProd and i8mm optimizations
32cf02a : NEWS for 1.5.0
c3fa1db : NEWS: add itx to riscv list
789a1f6 : riscv64/itx: Replace vwadd+vnsra with vnclip
389450f : NEWS: last updates about optimizations
79f7188 : NEWS: add an entry for the Power9 optimization
572c5a6 : riscv: Fix argon test failure
257b04f : loongarch: fix argon tests failure
b2e7f06 : riscv64/mc: warp_8x8 and warp_8x8t 8bpc
56f6d16 : riscv64/mc: Re-order instructions
3d12677 : riscv64/mc: Add bidir functions
50ac826 : riscv: Add $vtype helper definitions
cc7d877 : riscv64/mc: Branchless vsetvl in blend_v function
2da8107 : riscv64/mc: Branchless vsetvl in blend_h function
b374b24 : riscv64/mc: Branchless vsetvl in blend function
0e3f70e : riscv64/mc: Add VLEN=256 8bpc RVV blend_v function
a5b9544 : riscv64/mc: Add VLEN=256 8bpc RVV blend_h function
83485c5 : riscv64/mc: Add VLEN=256 8bpc RVV blend function
7f2bb2f : riscv: Move get_vlenb() from checkasm_ to dav1d_
01da36e : riscv64/mc: Add 8bpc RVV blend_v function
d3a94f1 : riscv64/mc: Add 8bpc RVV blend_h function
f851fcd : riscv64/mc: Add 8bpc RVV blend function
848c5a2 : Tone down loop to only 2 iterations
a0a08d8 : Scalar dc calculation
c8749f0 : riscv64/itx: Special case 16x16 8bpc dct_dct eob=0
0cdf1b4 : ipred_paeth
b830ac8 : pal_pred
44541df : ipred_smooth
d711f97 : ipred cfl functions
2f5bfc3 : riscv64/cdef: filter functions
f223436 : pal_idx_finish
38f74bd : riscv: Allow multiple .option arch with vararg ext
7072e79 : x86: Make AVX2 SGR gatherless
21d9f29 : tests: Add a fail fast option
ed004fe : loongarch: minor improvement on decode_symbol_adapt
62a51df : loongarch: rewrite optimization functions in loongarch/itx.S
757f294 : LoongArch: Add save_tmvs_lsx
3d96175 : loongarch: refactor loopfilter
7058202 : loongarch: add lasx implementation of sgr_3x3 for 8 bpc
96d6e47 : loongarch: rewirte warp_8x8/8x8t_lsx for 8 bpc
b9e9a0e : loongarch: Refine prep_8tap_8bpc_lasx
af11a10 : loongarch: add lasx implementation of wiener filter for 8 bpc
90a9549 : Loongarch: Optimized load_tmvs_c function by LSX
411fc21 : Loongarch: Optimized ipred_z1 8bpc functions by LSX
7c63bb1 : Loongarch: Optimized emu_edge_c function by LSX
e3101dd : LoongArch64: Implement checked_call()
7f89159 : Loongarch: Optimized ipred_filter 8bpc functions by LSX
f398bf9 : loongarch: Add the some optimization function about itx for 8bpc
13a857d : loongarch: add lsx implementation of itx_8bpc.add_16x8 series function for 8 bpc
843f00e : loongarch: opt inv_txfm_add_adst_dct/dct_dct/identity_identity_16x4_8bpc_lsx
083cf42 : Loongarch: Optimized cfl_pred_cfl, cfl_pred_cfl_128, cfl_pred_cfl_top and cfl_pred_cfl_left 8bpc functions by LSX
3f6c845 : Loongarch: Optimized pal_pred 8bpc functions by LSX
b26f315 : loongarch: Add prep_8tap_8bpc_lsx
ce45ebd : Loongarch: Optimized blenc_h_c function by LSX/LASX
5319278 : Loongarch: Optimized blend_c/blenc_v_c function by LSX
0b9c756 : Loongarch: Optimized ipred_smooth, ipred_smooth_h and ipred_smooth_v 8bpc functions by LSX
7463c2a : Loongarch: Optimized ipred_paeth 8bpc function by LSX
3e9d80d : Loongarch: Optimized ipred_h and ipred_v 8bpc function by LSX
2a9cbcc : Loongarch: Optimized ipred_dc,ipred_dc_128 8bpc,ipred_dc_left and ipred_dc_top functions by LSX
62c47f3 : Loongarch: Optimized cdef_filter_block 4x4,4x8,8x8 8bpc function by LSX
fa7b72d : Refine mc_put_8tap
02309b9 : msac: Add msac_decode_bool_equia_lsx and msac_decode_hi_tok_lsx
2154425 : Loongarch: Optimized cdef_find_dir_8bpc function by LSX
f6ffdc9 : loongarch: opt inv_txfm_add_identity_identity_8x32_8bpc_lsx
5de878a : loongarch: Minor improvement on identity4*, identity8* and dct32*
2fc6566 : loongarch: add lsx implementation of itx_8bpc.add_8x16 series function for 8 bpc
643ae52 : loongarch: add lsx implementation of itx_8bpc.add_4x16 series function for 8 bpc
d60d93a : loongarch: add lsx implementation of itx_8bpc.add_4x8 series function for 8 bpc
74e0eeb : loongarch: Opt one functions of itx_8bpc.add_16x32 series
f2c3ccd : meson: supports the iOS platform
a7a40a3 : Define __ARM_ARCH with older compilers
8e993f4 : Support older ARM versions with checkasm
8d9b1e2 : ppc: Factor out dc_only itx
75d3ad1 : ppc: itx 16x4 pwr9
0bf331a : ppc: itx 4x16 pwr9
19e122e : ppc: Remove high bitdepth macros from the 8bit-only code
b1d847b : ppc: itx 8x8 pwr9
da51b12 : ppc: itx 4x8 and 8x4 pwr9
33b9d51 : ppc: itx 4x4 pwr9
2123596 : NEWS: get ready for 1.5.0
bd87548 : Update NEWS for 1.4.3
dd32cd5 : Use #if HAVE_* instead of #ifdef HAVE_*
82e9155 : AArch64: Trim Armv8.0 Neon path of 6-tap and 8-tap MC functions
f4a0d7c : Remove dav1d/ prefix from dav1d.h
74ccc93 : meson: don't generate version.h
4385e7e : Improve density of group context setting macros
166e1df : tests: Add an option to dav1d_argon.bash for using a wrapper tool
79db162 : AArch64: New method for calculating sgr table
ec5c305 : AArch64: Optimize lane load/store in MC functions
a992a9b : AArch64: Optimize Armv8.0 Neon path of SBD H/HV 6-tap filters
2d808de : AArch64: Optimize Armv8.0 Neon path of HBD HV 6-tap filters
93339ce : AArch64: Optimize Armv8.0 Neon path of HBD horizontal 6-tap filters
109b242 : AArch64: Optimize Armv8.0 Neon path of HBD horizontal filters
d268788 : Support using C11 aligned_alloc for dav1d_alloc_aligned
7629402 : meson: fix include directories when building as subproject
507b697 : Allow software renderers with placebo-gl
312972d : Disable the mouse cursor in dav1dplay
b9cc27d : Allow quitting dav1dplay with the escape key
2f9fc72 : Allow playing videos in full-screen mode
4e1a8b4 : dav1dplay: Ensure that SDL is shut down when the application quits
cc6eb3d : Allow getopt fallback to compile on non-Windows platforms
bdef299 : picture: copy HDR10+ and T35 metadata only to visible frames
6b3c489 : Check for sys/types.h before using it
7490d98 : Only include unistd.h and pthread.h when necessary
a796f66 : Remove unused sys/stat.h includes
4104018 : Allow compile time CPU detection to be used when trim_dsp is disabled
41511bf : aarch64: Split the jump tables to a separate const section
0d8abee : Fix the macro parameter name for the CHECK_SIZE macro
0255c2b : Ensure that the refmvs_refpair union is packed
033a090 : Detect availability of pthread_setname_np and pthread_set_name_np
ccb02dd : aarch64: Enable detection of SVE/SVE2 on Windows
27491dd : aarch64: Fix a label typo
e560d2b : aarch64: Avoid looping through the BTI instructions
5a33c5c : aarch64: ipred: Use the right fill width loop in ipred_z3_fill_padding_neon
472b31f : AArch64: SVE MS armasm64 fix of HBD subpel filters
3329f8d : aarch64: mc16: Optimize the BTI landing pads in put/prep_neon
01558f3 : AArch64: Add HBD subpel filters using 128-bit SVE2
713c076 : AArch64: Add USMMLA impl. for SBD 6-tap H/HV filters
287e90a : AArch64: Fix typo in SBD 6-tap 2D/HV subpel filter
5ef6b24 : decode_coefs: Optimize index offset calculations
2355eeb : AArch64: Move constants of DotProd subpel filters to .rodata
7fbcdc6 : aarch64: Explicitly use the ldur instruction where relevant in mc_dotprod.S
431f4fb : Add Arm OpenBSD run-time CPU feature detection support
32bf6cd : x86: Add 6-tap variants of high bit-depth mc SSSE3 functions
ca83ee6 : itx: restrict number of columns iterated over based on EOB
01b94cc : cli: Prevent buffer over-read
92f592e : AArch64: Fix potential out of bounds access in DotProd H/HV filters
da2cc78 : x86: Eliminate hardcoded struct offsets in refmvs load_tmvs() asm
26a2744 : refmvs: Consolidate r and rp_proj allocations
54801d0 : refmvs: Remove dav1d_refmvs_init()
89a200c : refmvs: Simplify 2-pass logic
ca156d9 : x86: Add 6-tap variants of 8bpc mc SSSE3 functions
8afbd4f : x86: Add minor 8bpc mc SSE improvements
85c1639 : x86: Remove 8bpc mc SSE2 asm
d3997ac : x86: Remove unused macro in mc16_avx512.asm

+- Project: platform/external/libdrm

38ec7dbd : build: bump version to 2.4.124
f314a43f : modetest: Make modetest availble to vendor on Android
028effbd : Fix the asprintf() build warning rather than suppress it.
e68e9b80 : android: add genrule for generated_static_table_fourcc.h
50da61ee : xf86drm: print AMD modifiers properly
0248a64b : Migrate 13 crates to monorepo
c0a08f06 : include/drm/README: update drm-next link to use gitlab instead of cgit
0a1162e2 : modetest: add support for YUV422 and YUV444 plane format
38c043dc : modetest: simplify planar YUV handling
bea14386 : build: simplify Linux system check
887fec2c : tests/util: Call `drmGetDevices2()` instead of `drmOpen()`ing all modules
25dec5b9 : build: bump version to 2.4.123
f3f56f41 : Disable ioctl signed overload for Bionic libc

+- Project: platform/external/libese

95e6902 : Pin KeyMint dependency to correct/specific version

+- Project: platform/external/libevent

d61242c : Set _GNU_SOURCE where appropriate.
15c0937 : Take C23 fix.

+- Project: platform/external/libfuse

5d0214a : libfuse: create libfuse_headers target
856f44d : ANDROID: ignore mtab on android
0c9af7d : Reland "Libfuse: merge upstream-master up to eca63dab"
4311f67 : Revert "Libfuse: merge upstream-master up to eca63dab"
eca63da : Enable passthrough mode for read/write operations (#919)
58f85bf : Add in the libfuse version a program was compiled with (#942)
2bdec0b : Handle NO_OPEN/NO_OPENDIR support automatically (#949)
0128f5e : Bump actions/checkout from 4.1.4 to 4.1.5 (#946)
e4e6873 : fuse_common.h: fix warning on _Static_assert() (#939)
b701673 : Fix missing fuse_loop_cfg_destroy() in fuse_session_loop_mt_31 (#944)
26fa6c1 : Bump actions/checkout from 4.1.3 to 4.1.4 (#943)
8a9e2dc : Use std=gnu11 (#940)
cdcaabd : Bump actions/checkout from 4.1.2 to 4.1.3 (#934)
45effd5 : [libFuse 3.16.2]Compilation failure on freeBSD #936 (#938)
80663a7 : Use single place to define the version and increase version to 3.17.0 (#932)
a8f1ae3 : example/: Convert all fuse_session_loop_mt users to 3.12 API (#931)
285da32 : passthrough_ll: fix fd leaks in lo_destroy() (#929)
73cd124 : Add clone_fd to custom IO (#927)
0800773 : fix max_write update in do_init() (#926)
20de66d : fusermount: Fix use of uninitialized x_mnt_opts (#924)
e2df577 : Add more documentation for FUSE_CAP_EXPORT_SUPPORT (#917)
3e283a1 : Add support for FUSE_CAP_HANDLE_KILLPRIV_V2
67d4db4 : Fix FUSE_CAP_DIRECT_IO_ALLOW_MMAP - use new numerical value
e3bd7de : Install all test/build python packages from requirements.txt
dd95d13 : fix readdirplus when filler is called with zero offset (#896)
f01d927 : reset got_init after handling FUSE_DESTROY message (#910)
e547a66 : ci-build.sh: Fix checking for function arguments (#909)
c021e91 : Add FUSE_FILL_DIR_DEFAULTS enum (#903)
9a823df : Fix example/fix-notify_inval_inode.c (#908)
b4804ce : Bump actions/checkout from 4.1.1 to 4.1.2 (#905)
425f52a : Add final "meson setup --reconfigure" to README.md
0d90a90 : /test_ctests / test_notify1: Print cmdline on failure
e48c71d : Add glibc backtrace to signal handler
99ef7a9 : ci-build.sh: Add back test without versioned symbols
255de0b : Build clang/sanitized build first
982743f : Fix use-after-free in example/poll.c
f041818 : Add back s-bit for compiled fusermount
71a9722 : Fix test failures: Create missing mount dir
38d40c5 : ci-build.sh: Reduce pytest --maxfail from 99 to 1
aab146e : ci-build.sh: Always install and add s-bit for fusermount3
b1cdc49 : ci-build.sh: Run ASAN and UBSAN at the same time
290c65b : posix_spawn style updates
bb9cecb : Use posix_spawn instead of fork+exec
a6ac2ec : Fix undefined loff_t in test_write_cache.c on alpine
9e35add : fusermount: Fix head-buffer-overflow in extract_x_options
6bda409 : meson: Point OSX (darwin) to https://www.fuse-t.org/
420a6c3 : example/notify_store_retrieve: Fix races and handle errors
31bf17c : Fix tests/test_write_cache in write back mode (#892)
c458633 : Enable direct IO for passthrough examples when open has flag O_DIRECT
74b1df2 : Passthrough options starting with "x-" to mtab (#894)
fce970c : passthrough_example: make parallel_direct_writes more clearly
402c8ff : remove duplicated fuse_chan_put() (#893)
67d28fb : make FUSE_CAP_EXPIRE_ONLY test depend on available cap and not on version
54007ee : add support for kernel flag FUSE_HAS_EXPIRE_ONLY
0c12204 : Add processing for FUSE_CAP_HANDLE_KILLPRIV and disable it by default
2c736f5 : Don't set FUSE_CAP_PARALLEL_DIROPS by default
22741f5 : Add FUSE_CAP_DIRECT_IO_ALLOW_MMAP and use in passthrough_hp
0392acb : Add in rename to FUSE_DIRECT_IO_ALLOW_MMAP
6ee7a42 : Synchronize fuse_kernel.h from current linux master
3c7ba57 : examples/notify_store_retrieve: Add a clean shutdown
bd89859 : Allow *xattr operations on root directory (ino 1)
3f6cf53 : Clarify fuse_lowlevel poll() docstring (#862)
c990534 : Pass FUSE_PARALLEL_DIROPS to kernel (#861)
a466241 : Update fuse_common.h (#855)
c814e3f : fuse_clone_chan: avoid additional FD_CLOEXEC setting if O_CLOEXEC defined (#852)
463871a : Bump actions/checkout from 4.1.0 to 4.1.1 (#854)
05b696e : passthrough_hp: Fix clone-fd option (#850)
063ef8e : Enabled parallel direct IO writes for passthrough examples
ef11cf9 : Fix typo in comment
7a92727 : Released fuse-3.16.2
f99b7eb : Fixes typo in fuse.h (#844)
9ca35f4 : xfstests example: Use export in local.config and remove comment (#811)
eb9ccbe : Bump actions/checkout from 4.0.0 to 4.1.0
98ee575 : passthrough-hp: Fix --clone-fd
7a9717e : passthough_hp: Add a direct-io option
f44214b : Bump actions/checkout from 3.6.0 to 4.0.0 (#837)
1f3a338 : Bump actions/checkout from 3.5.3 to 3.6.0 (#833)
869a4a6 : Add NTFS3 kernel driver fs to the whitelist of mount targets (#830)
5fd5039 : Added missing file, update release process docs.
1f0dfae : Released fuse-3.16.1
0d830af : Don't attempt to put signify signature into gz header
7b9e7ee : Make errnum-verification more flexible (#824)
98eb808 : Pass cache_readdir and keep_cache from high level API (#822)
7c109e2 : Add dependabot for GHA
0d7f43f : Hash-pin workflow Actions
624783d : Allow linking with mold / fix the version script (#814)
d888c30 : Use signify to sign releases.
0b50e26 : Released fuse-3.15.1
1cb6e17 : Reduce default write size by half
b51f69f : Add missing include.
51bc827 : Make expire only function fail if no kernel support (#789)
7567268 : Clarify behavior of fuse_session_exit().
010d820 : Improve wording of user_allow_other usage instructions (#806)
b58a001 : Wrapper around test applications for cross compiler environment in meson.build (#804)
6d08472 : Released 3.15.0
e0041ad : Error handling for fusermount's commfd (#786)
690b12f : util/meson.build: don't install udev.rules if udevdir cannot be determined (#801)
113ce78 : meson.build: pass -D_FILE_OFFSET_BITS=64 to C/C++ compiler (#799)
0433b40 : util/mount.fuse.c: compile with linux headers < 3.5 (#798)
eb9eb1d : Fix memory leak (#785)
30a300a : Remove unnecessary `_GNU_SOURCE` in `fuse.c` (#787)
841cd09 : Add security policy (#797)
6fc5ffd : Add minimal token permissions
4c177c9 : Add support for running xfstests.
ad1abf3 : Do not daemonize to early
dba6b39 : Do not pass unsupported mount options to the kernel.
bb1890a : Fix issue #746. (#782)
fcd293f : Fix memory leak in high level API (#781)
36c2250 : Fix doxygen deprecation warning (#774)
34d9d2a : Disable leak suppression (#773)
7297044 : Fuse mount: make auto_unmount compatible with suid/dev mount options (#762)
681a0c1 : Update fuse_kernel.h to state of linux-6.3
eb88309 : Migrate away from deprecated distutils
04215e9 : Fix doxygen warning about parameter name
c261271 : Fix typo
6ce27f4 : Fix PytestReturnNotNoneWarning
b9b4307 : Fix deprecated @pytest.mark.hookwrapper
7555d03 : Fix meson deprecation warning
10f2915 : Fix meson deprecation warning also in FreeBSD
49b14f3 : Fix deprecated udev.get_pkgconfig_variable in meson
b37f6df : Upgrade meson version in CI
1edfed0 : Add long `--options` to fusermount (#764)
c60a90b : Fix MS_LAZYTIME not defined on uclibc and move all MS_* and UMOUNT_* (#753)
4cf25c2 : Document risks of auto_unmount (#763)
7be56c5 : Workaround musl bug when mountdir has whitespace (#761)
2113871 : Fix compiler warning in hello_ll.c (#760)
d65686a : Add unit tests for setxattr() et al
0f8cb28 : Fix typos and configure spellcheck for PRs
ed9be12 : Fix meson deprecation warning
f2144c6 : Fix use-after-free warning
1703ccb : Review feedback: rename and comments
055478f : Fix `auto_unmount` to work without `allow_other`
b509964 : Update script to drop references to Travis CI.
d8d1f84 : Released 3.14.1
024eccb : Revert "upgrade of fuse_kernel.h based on Miklos expire_only kernel patch https://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git/commit/?h=for-next&id=53e949edb7692dce02220eba926c9d75ecbb47f7"
81ad52c : Add more time mount options to fusermount / fix lazytime
1e66c92 : Add more time mount options
b45d66c : Respect includedir when installing libfuse_config.h
ab5ca07 : Fix max_threads command line parameter propagation
a5eb7f2 : Enable parallel direct writes on the same file.
4f8aae7 : Update description of keep_cache.
17e8b3e : Migrate from Travis to Github actions
2da03e3 : Avoid max-idle threads warning
df2cde2 : Change define for C++ too
91e6e3c : Add a github actions file
3f65bb6 : Released 3.14.0
418b7ef : fuse_lowlevel.h: add more setattr flags
d7560cc : Split config.h into private and public config
becc030 : Released 3.13.1
db35a37 : Install a the configure_file (config.h) and use in headers
e42b972 : Update travis to ubuntu jammy (22.04)
19d95c0 : use off_t over __off64_t
9acd183 : Released 3.13.0
c67b921 : passthrough_hp: Avoid a bit code dup in readdir
856c683 : passthrough_hp: Add options for clone_fd, max_threads, daemonize
aad5c3a : Fix loading of FUSE modules
50c74e6 : Support application-defined I/O functions for FUSE fd
c0a344e : Test for fuse_lowlevel_notify_expire_entry. This test is too simple to check for all functionalities of notify_expire as it always successfully passes when libfuse supports the function (even if kernel does not support it - it just takes it as notify_inval)
91083df : adding comments and capability discovery, enum for flags moved to top of file
7f430a3 : upgrade of fuse_kernel.h based on Miklos expire_only kernel patch https://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse.git/commit/?h=for-next&id=53e949edb7692dce02220eba926c9d75ecbb47f7
8fd95ab : Initial patch provided by Miklos Szeredi <mszeredi@redhat.com>
d372d3f : Fixes when HAVE_LIBC_VERSIONED_SYMBOLS is not defined
3736e0c : convert __APPLE__ and __ULIBC__ to HAVE_LIBC_VERSIONED_SYMBOLS
f212ec0 : Fix ublic/apple build for the fuse_parse_cmdline ABI symbol
3373695 : Remove partial locking of paths when using high-level API
30d423e : Move try_get_path2 earlier in the file
06be456 : Revert "libfuse custom communication interface"
40b0cf9 : update mount.c, in order to pass through -n.
5aaec92 : Make it work even if max_idle_threads is set to 0
7776639 : libfuse custom communication interface
b1290d4 : Fix the fuse_parse_cmdline@FUSE_3.0 ABI compat symbol
0c3fbe2 : Released 3.12.0
0590458 : Add option to specify init script location
9e1601a : Use destroy_req instead of free to destroy fuse_req
8de32bc : Add summary of changes regarding 'max_threads' to ChangeLog.rst
d823cab : fuse_session_loop_mt: Accept a NULL config - use defaults
af5710e : fuse-loop/fuse_do_work: Avoid lots of thread creations/destructions
30a126c : API update for fuse_loop_config additions
7657ec1 : Revert "Increase meson min version and avoid get_pkgconfig_variable warning (#682)"
e609591 : Remove member m from fuse_fs (#684)
8db2ba0 : Increase meson min version and avoid get_pkgconfig_variable warning (#682)
8d93412 : Fix a test strncpy compilation warning with recent gcc
a56147d : Released 3.11.0
96ad05c : patch: document ignored fill parameter of readdir
7e5278c : Add missing kernel flags up to 1ULL << 33
34a7ad5 : Set FUSE_INIT_EXT in fuse_init_out::flags
646ff0b : Passthrough_ll should display cmd line options
4df0871 : Modify structures in libfuse to handle flags beyond 32 bits.
f0bba7e : passthrough_hp: Disable splice with the --nosplice option
f8a24e9 : passthrough_hp: Fix inode ref in sfs_unlink
2da64ec : Fix fd leak with clone_fd
3c2ba7a : Removed duplicates code. (#642)
5128cee : Fixed returning an error condition to ioctl(2) (#641)
b08e275 : Fix ReST end-string nits (#638)
6ddd14f : Avoid ENOENT response when recently invalidated fuse_ino_t is received from the kernel (#636)
435a14e : Add test for FOPEN_NOFLUSH flag
dad15ae : Add no_rofd_flush mount option
1b498ac : Add support for FOPEN_NOFLUSH flag
48ae2e7 : Document possible NULL paths when directories are removed (#633)
cee6de8 : test/test_syscalls.c: allow EBADF in fcheck_stat() (#631)

+- Project: platform/external/libjpeg-turbo

7090679 : Enable AFDO for libjpeg-turbo
1a048ea : Edit OWNERS file

+- Project: platform/external/liblc3

a091e6c : Edit OWNERS file

+- Project: platform/external/libnetfilter_conntrack

6aadd6b : Make bpfmt happy.

+- Project: platform/external/libnl

157ff698 : Make bpfmt happy.
7b5a2ca6 : Add com.android.wifi to the libnl apex_available list.

+- Project: platform/external/libogg

9b055be : Make bpfmt happy.

+- Project: platform/external/libopenapv

9e99266 : Fuzzing test 20241122 (#26)
51edb5f : APV: Adding extra sanitizer
a3f40ff : Fuzzing test (#25)
270a705 : fixed error driven by fuzzing test
67d9ab8 : fixed avx2 dquant, removed sse2neon.h, and added code for handling malformed bistream
9b55a94 : Remove unused -Wno- flags.
598a481 : OpenAPV Add fuzzing
087e84b : Update after review
f7ca514 : Update version.txt
141eb19 : Update apv_isobmff.md
3f6b3c7 : Update apv_isobmff.md
adca9d8 : Update apv_isobmff.md
d7e5a9b : reformating code and fixed AVX overflow issue
ce56d67 : Update apv_isobmff.md
fc16ce6 : [Fix] 32bit arm arch build fail
8db348c : Update apv_isobmff.md
7c35b51 : Remove no-op config file
164db3b : docs: Update README.md
510fd91 : [Bug Fix] My cause overflow on QP index
e71e1c9 : clarify qp and q matrix
c32c687 : clarify QP and Q matrix
40f1dca : Update README.md
1a222b1 : added conditional compiliation 'OAPV_STATIC_DEFINE'
723b873 : refactor app and decoder code
0aa3091 : Adding libopenAPV
e50b603 : Fix export definition for OAPV_STATIC_DEFINE to avoid unnecessary export declarartions.
9d40137 : Update version.txt
48983ed : Refactoring code and compilable for ARM 32bit and x86 32bit
93d6382 : docs: Update README.md
f91c08e : added logo section in readme
b1370dc : Update version.txt
1a621f3 : added feature list of openapv and support float number fps
55d890f : Update README.md - added iso fileformat
a53bb63 : Update README.md
2c5bca3 : added APV ISOBMFF spec (draft)
fce2364 : Update README.md - 1
2bd6c0e : Initial empty repository
a2b9521 : Update version.txt
99d01f1 : refactoring error handling
2562a32 : reduced number of global functions and refactored metadata code
40d5f28 : hotfix of runtime error in rate control caused by overflow calculation
b3bc804 : modification
7063999 : modification of test bitstream and input parameter
f7f5eca : fix after review
5676326 : ci: Add build and test job for GitHub Actions
e14f1d0 : Update README.md
468bc02 : Update README.md
9c11b28 : fix: fix wrong include path
efef404 : modification for P210
02cff66 : initial commit
38a78e7 : initial commit
269556d : initial commit
e16f1d3 : initial commit
46857b7 : updated license
c7ba7c5 : Initial commit

+- Project: platform/external/libpalmrejection

086d65d : Stop explicitly adding bionic subdirectories to the include path.

+- Project: platform/external/libpcap

d6c0af63 : Make bpfmt happy.

+- Project: platform/external/libphonenumber

fad5a66e : Update libphonenumber to v8.13.51
2cf069dc : Update libphonenumber to v8.13.46

+- Project: platform/external/libpng

5c8823eb2 : Make bpfmt happy.
93dde8ce5 : Edit OWNERS file

+- Project: platform/external/libprotobuf-mutator

11d593a : Edit OWNERS file

+- Project: platform/external/libsrtp2

1900051 : Make bpfmt happy.
b961e24 : Remove unused -Wno-implicit-function-declaration.
8d51c1d : Update METADATA file and add OWNERS file

+- Project: platform/external/libtextclassifier

9bc42fc : Use canonical ABSL and remove local fork.
b7a0b53 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
0f0038f : Edit OWNERS file

+- Project: platform/external/libtraceevent

bd47bd5 : libtraceevent: 1.8.4
fe0bc49 : libtraceevent: Print function pointer address when TEP_EVENT_FL_PRINTRAW is specified
f2224d5 : libtraceevent: Have sizeof() parsing handle u8/s8 through u64/s64
5f570de : libtraceevent: Print arrays like Linux does

+- Project: platform/external/libultrahdr

4cba06e : fix overflows while encoding large images
1c2fdce : fix overflows while encoding large images
6def84b : Add avic_OWNERS to OWNERS file
285824d : ultrahdr_enc_fuzzer: Simplify hdr_img_fmt initialization
2222dad : bump to 1.3.0
6143f22 : map +inf of hdr intent half fp values to max instead of zero
1766f18 : iso: generate fields only if they will be written
86ad83a : iso: improve error checking during decode
00c6243 : iso: use signed types to represent offset and content boost
3c457ab : iso: fix channel count computation during decode
3912add : fix clamping of hdr pixels to diffuse white
9c10af0 : add riscv32 compile support
bdf5e48 : add rgba8888 to yuv444 conversion using arm neon
a7fd97f : update enc fuzzer for half fp inputs
3cac18b : Add OWNERS file
a265b54 : fix overflows while encoding large images
fd67a9a : Extend benchmark tests to profile half float inputs
d87030e : Add return after SkipWithError in benchmark tests
1a9d45c : Add .gitignore
f1a7dda : add support for decoding images with non-zero hdr/sdr offsets
79978aa : use resize filter during applyGainMap process
25e4c8e : Update ISO 21496-1 Metadata (#314)
990f103 : apply correct normalization factor during decode
613fee1 : add support for encoding linear half-float images
232bd43 : apply ootf and its inverse for hlg transfers
98ddcdc : fix incorrect ootf impl used during tonemapping
6b85f33 : revert e432c4d partially
a034ec5 : cosmetics: group declarations in gainmap math
cb0b016 : Fix floating point exception in generateGainMap
f6c9918 : add sample app to the install target list
698a51c : use libjpeg ports while building for wasm target
8bb953f : update user-guide to reflect correct cmake version
7b59b05 : Fix oob access while resizing an image
46f1337 : Add fuzzer for legacy jpegr APIs and improve encoder fuzzer
686ee47 : add loong64 support
202e5c3 : add support for generating packages using cpack
89d4dc5 : fix issues in gainmap weighting
494b664 : validate gainmap metadata before usage
e432c4d : add API to set mastering display, target display peak brightness
8f894f7 : Update benchmark tests for uhdr APIs
b35a75d : Make tonemapping and gainmap generation public
01ace38 : fix oob access while encoding image edited with crop effect
5e3aa22 : Use TRUE instead of true
279a085 : fix hang if gainmap image is larger than base image
21582ec : fix typo in var declaration
031c0cb : Dont discard return values from func floatToUnsignedFraction
aad4ffc : fix hang if gainmap image is larger than base image
17a0461 : Add option to enable compiling with -Werror
55ee308 : fuzzers: add editor effects api to fuzz tests
57247bf : ossfuzz.sh: re-enable shift sanitizer
5e7ceda : update fuzzers for improved coverage
b9e22c0 : Fix doc of windows platform build with MSYS2
2e5d8be : add support for generating pure static binaries

+- Project: platform/external/liburing

9bbd42a : Remove unused -Wno-implicit-function-declaration.
9706a10 : Clean up a #include.

+- Project: platform/external/libusb

94da7f9 : Add a new target for clients running at Android OS level
de38189 : Enable linux netlink event monitoring for Android OS platform services
7bc88c0 : macos: Fix Zero-Length Packet for multiple packets per frame
7adb291 : docs: Fix broken doxygen references
0b4eda6 : docs: Hide internal descriptor.c structure from doxygen
28a6afb : docs: Document internal_ssplus_capability_descriptor
467b6a8 : winusb: Fix winusb_get_device_list() failing to find port numbers
8776b80 : descriptor: Fix clang -Wimplicit-int-conversion warnings
a319969 : xcode: Adjust file indentation settings
30ec25f : examples/ezusb: Fix error checking regression in recent commit
4528752 : windows: Base HID device descriptor on cached values
d04fc0e : openbsd: Use default clause in _errno_to_libusb()
bc12cda : netbsd: Debug print all errors
9d595d4 : Replace atoi() with strtol() which allows error checking
bd0fcdb : Add KEYS file for release files verification
c3873d5 : xusb: Define proper exit status
e8d76b1 : clang-tidy: Stop suppressing readability-misleading-indentation warnings
197e305 : libusb.h: Match parameter names in declaration and definition
55f8c95 : descriptor: Fix addition overflow by correcting casts
e3ccc46 : descriptor: Eliminate all duplicate branch bodies, as they are bug-prone
9cf8457 : Avoid assignments within `if` statements
a18a964 : darwin: Fix multiplication overflow by better matching type sizes
6883f84 : darwin: Explicitly compare string compare function results to -1, 0, or 1
418aadc : darwin: Always use uppercase literal suffixes for improved readability
3616e75 : examples/xusb: Match size of loop index to what is iterated
a7e471d : examples/xusb: Make some parameters const where possible
85055a4 : examples/xusb: Make all macro replacement lists parenthesized
9ffdb7f : examples/fxload: Eliminate all reserved C identifiers (leading underscores)
00454ab : examples/ezusb: Replace rewind with fseek, to check for errors
e678b3f : Emscripten: Avoid uncaught TypeError on browsers without USB support
916c740 : descriptor: Avoid buffer over-increment in parse_iad_array function
678c812 : descriptor: Small clarifications with no behaviour change
016a0de : descriptor: Fix potential offsetting of pointer by too much
5144b1c : descriptor: Restore implicitly casted-away const
d795c0b : descriptor: Defer potentially truncating cast to last minute
2c32efa : descriptor: Replace parse_descriptor() function

+- Project: platform/external/libva

64baa94 : Fix include directories and generated header files in Android.bp
f14d443 : Update Android.bp to generate va_version.h and build only for x86_64
c4578f7 : Add Android.bp to replace mk files
b3ce6fa : Remove most downstream changes
217da1c : libva 2.22.0
86cd48f : update NEWS for 2.22.0
c7a4be4 : meson:remove autogen.sh from the meson script
bb178ef : Add VVC decode LibVA interface.
a619226 : va: fix --version-script detection for lld >= 17
558d03b : trace: Add bit_depth capturing in trace log
1b7d71f : wayland: add support for linux-dmabuf
6f3e068 : libva 2.22.0.pre1
0cfc607 : update NEWS for libva 2.21.0

+- Project: platform/external/libvpx

5a812e3ba : enable SVE optimizations
9f9b7e9ba : Changelog: add neon optimization speed up stats
0ba09cc79 : Update CHANGELOG and version
4e5a33cb2 : Expose libvpxrc on x86-64
c3af9b137 : libs.mk: Expose RC_RTC_SRCS as libvpxrc_srcs.txt
02fb5310c : build: Export include config directories
3939c5ebb : vpx_highbd_convolve8_avg_sve2: fix C fallback typo
816a90fe7 : Update AUTHORS and .mailmap
192b4a4ce : rtc-vp9: Always disable svc_use_low_part
cdd4e3501 : vp8: Fix integer overflow in encode_frame_to_data_rate
aa73610d0 : Fix a typo: avg_frame_index => avg_frame_qindex
417204d7f : rtc-vp9: Fix to integer overflow in vp9-svc
ac68e7f99 : aarch64_cpudetect: detect SVE/SVE2 on Windows
729b78a12 : aarch64_cpudetect: detect I8MM on Windows via SVE-I8MM
6dfdc4ee1 : tiny_ssim: fix argc check
d581bce1f : enable NEON I8MM optimizations
696c488d3 : rtc-vp9: Disable svc_use_low_part for screen
b5ba83092 : enable NEON DotProd optimizations
c6de95ce0 : Initialize gf_picture in vp9 tpl
3ba1fada8 : vpx_image.h: add lifetime note for img_data
fbf63dff1 : vp9: clamp the calculation of sb64_target_rate to INT_MAX
507aea8e2 : vp9_speed_features.h: fix partition_search_type comment
50aa6cca4 : README: add security report note
f00fa3ce7 : Add macro name as comment for header guard #endif
35b908f80 : Add #ifndef header guard to vpx_version.h
2c778f4da : remove vp9_{highbd_,}resize_frame*()
312a9004c : remove vpx_ssim_parms_16x16()
a5ea71f09 : Key frame temporal filtering
5d20cc308 : IWYU: include vp9_ext_ratectrl.h for tpl
ee2552d90 : vp9 ext rc: TPL & final encoding use same QP
a69eeb0af : ext rc: Override encode QP in TPL pass for VBR
d9d6c5e2c : Remove ext_rc_recode
95568674c : remove redundant `&& __GNUC__` preproc check
fcd1f39e5 : Improve temporal filter prediction and process
13be4a719 : Remove a stale TODO in ext RC
b222d7228 : Add the saturate_cast_double_to_int() function
c18b9f7c6 : Add min/max q index to ext rc config
634e1f8fb : vp9_calc_iframe_target_size_one_pass_cbr: clamp final target
bb95d3652 : update libwebm to libwebm-1.0.0.31-10-g3b63004
428f3104f : Include "gtest/gtest.h" using the shorter path
1865f20e9 : Extend border for vp8 loopfilter
9f06827ee : Run clang-format on three files
0c4af6b4c : vpx_fdct16x16_avx2: add missing cast
b5451de5c : vp9_extrc_update_encodeframe_result: normalize decl & def
4295bf4f0 : Update third_party/libwebm to commit f4b07ec
2ab292e9e : Remove unused parameters from ext rc callback
3cc287bbd : vpx_scale,scale1d_c: add assert(dest_scale != 0)
8db1b663e : vp9_subexp,remap_prob: add an assert
f987e3514 : doxygen: quiet warnings in decoder-only config
85d386599 : systemdependent.c: fix warning w/CONFIG_MULTITHREAD=0
cdf8da4c0 : vp8: fix OOB access in x->MVcount
f9120b789 : vp8,calc_iframe_target_size: clamp kf_boost
d63ecb411 : Reset the ref_table array for the key frame GOP
f809c987b : Remove repeated ref_frame assignments
f96deb0bb : Add tpl propagation with updated ref_frame idx
3fb0e5d75 : Remove unneeded cpi->output_framerate assignment
057e53d75 : Small refactoring in vp9_firstpass.c
9a1e8ae7a : README: add link to issue tracker
efe615f80 : add repro for crbug.com/352414650
3219f76ce : Remove printf warning statements in set_size_literal()
72018e8c7 : Some cleanup in vbr_rate_correction()
77974ec04 : vp9_svc_adjust_avg_frame_qindex: fix int overflow
a40848c80 : Do not include vpx_version.h
1640ed408 : Turn off frame_stats == NULL error.
066ea57e3 : Fix unused function warnings in real-time only
7cc7bbba1 : Allow TPL group to reference more frames
4ac9c4ba3 : Fix int cast errors in vp8 on max target bitrate
27c39522f : vpxenc.c: Fix UBSan integer errors in test_decode
a396ac214 : Fix unsigned int overflow in init_rate_histogram()
af599a0c5 : Fix further overflow issue in VBR.
ac117ca7f : Remove static from vars in parse_stream_params()
2693255a2 : Let vp9_ext_ratectrl getting key frame decision
253d6365e : rtc-vp9: Allow scene detection for all speeds
f07ca82f7 : set_analyzer_env.sh: remove -fno-strict-aliasing
d6ae3ea46 : rtcd.pl: add license header to generated files
68deb7ee2 : Add missing header in vp9_firstpass.c
ff67a4f20 : Fix typo of received again
277a5cdaa : Remove redundant setting of max_layer_depth.
2ca6e875c : Typo recieved -> received
fb01e53c9 : configure: add -c to ASFLAGS for android + AS=clang
b0c9d0c6f : configure: remove unused NM & RANLIB variables
ed95b102c : Move ext_rc_define_gf_group_structure
271b3f0bf : tiny_ssim: mv read error checks closer to assignment
a2508b571 : configure: disable runtime cpu detect w/armv7*-darwin
ec129c190 : Document the internal maximum of rc_target_bitrate
b401a1ff2 : Remove unnecessary double cast for cpi->framerate
faf12bdb8 : vp9: round for framerate and _min/max_gf_interval()
713e0faca : vp9: round avg_frame_bandwidth result
60807f0ab : Use round for RC calcutions in cyclic_refresh
9d734db16 : Rename gop_size by show_frame_count
fd84dccd5 : Fix high target data rate overflow.
ffe9c9a45 : Handle ARF and GF gop cases
ddf3c281e : Remove a redundant condition in firstpass.c
25540b3c1 : Fix some UBSan errors in vp8_new_framerate()
495c4b596 : Add a #endif comment for CONFIG_VP9_HIGHBITDEPTH
f3e064e1d : {aarch*,arm}_cpudetect: align define with comment
1f65facb6 : Fix a UBSan error in vp9_rc_update_framerate()
db4d6a5f5 : Fix a typo in the CpuSpeedTest.TestTuneScreen test
6c079c8be : Account for gop_decision->use_alt_ref
5b4cfe88e : vp9-rtc: Fix integer overflow in key frame target size
611d9ba0a : Fix error handling in vp9_pack_bitstream()
b1cf64c40 : vpx_decoder.h: Change "size member" to "sz member"
498097b15 : vpx_dec_fuzzer.cc: Initialize stream_info.sz
5913401eb : Add vp9_ratectrl.h header to vp9_firstpass.c
db2558196 : Assert a vpx_img_set_rect call always succeed
1a3cd4922 : vpx_dec_fuzzer: add vpx_codec_peek_stream_info coverage
e934e3551 : vp9 rc: also run tpl for GOPs without ARF
8433fe639 : vpx_ext_ratectrl.h,cosmetics: Correspondent -> Corresponds
108f5128e : encode_api_test.cc: assert encoder is initialized
314ee14b6 : Fix a rare memory overflow bug
9f7337782 : vp9_pack_bitstream: remove a dead store
35f0262c5 : configure: Do more elaborate test of whether SVE can be compiled
3e713e39a : vp9_ethread_test: move 'best' mode to a Large test
6db3f6e57 : Add several utility functions to set gf_group
f65aff7b9 : Remove vpx_ports/msvc.h
8372a5cfe : vpx_ext_ratectrl.h: make rate_ctrl_log_path const
847b3548b : Better format comments for vpx_ext_ratectrl.h
1c77f7fc0 : Fix comments in vpx_ext_ratectrl.h
c0db981ea : Include <stdio.h> or <cstdio> for *printf()
e9be4f607 : encode_api_test.cc: apply iwyu
7a0089dc0 : vpx_ext_ratectrl.h: fix doxygen comments
f93e6aa33 : Print gop_index in ENCODE_FRAME_RESULT
b61b27220 : vp9_rdopt.c: make init_frame_mv static
63b9c2c0e : VP9: add vpx_codec_get_global_headers() support
3015c41f0 : Add VPX_RC_NONE
6f5839f98 : vp9_encoder.c: fix printf format string
2b88a07bc : vpx_image_test.cc: add missing stdint include
fd28f6f3c : Add rate_ctrl_log_path
85dafa9c6 : Initialize frame_mv in rd pick inter
976134c50 : Add 10 and 12b ranges to vpx_color_range_t comment
89efe85cd : Clarify comment about buf_align in vpx_img_wrap.
3f4055b05 : Introduce local vars uv_x,uv_y in vpx_img_set_rect
74c70af01 : Fix a bug in alloc_size for high bit depths
7d37ffacc : Apply stride_align to byte count, not pixel count
8b2f8baee : Avoid wasted calc of stride_in_bytes if !img_data
06af417e7 : Avoid integer overflows in arithmetic operations
2e3227627 : Fix integer overflows in calc of stride_in_bytes
3dbab0e66 : Add test/vpx_image_test.cc
8762f5efb : Define the MAX_NUM_THREADS macro in vp9_ethread.h
0752960c6 : Add missing header for EBUSY on mingw
6445da1b4 : Fix GCC -Wmissing-braces warnings
2bafeadd3 : Add missing configuration includes
588beb020 : Unit test config changes for Chromium
05a4c855b : Compare ctx->pending_cx_data with NULL
1d007eafa : vp9 rc: Fix GetSegmentationData() crash in aq_mode=0
976cedd64 : Set priv->cx_data_sz to 0 if cx_data alloc fails
419f36e8e : encoder_encode: Assert pending_cx_data_sz is valid
dc74cf923 : Dont use VPX_CODEC_CORRUPT_FRAME in set_frame_size
bf932674a : Add SVE2 implementation of vpx_highbd_convolve8_avg
9274c2bbf : Merge horiz. and vert. passes in HBD SVE2 2D 4tap convolution
43d12d507 : Update yv12_mb initialization
5396643be : Add invalid value to gop decision enums
5f5dfb330 : Assert the return value of read_tx_mode() is < 5
ccefddef3 : Initialize yv12_mb array
d790001fd : Perform bounds checks in vpx_write_bit_buffer
d5501945f : vp9 rc: override GF_GROUP decisions using ext RC
3f8f19372 : vp9: Fix to alloc for row_base_thresh_freq_fac
73703c188 : Perform bounds checks in vpx_writer
5bea4606d : Fix a typo in comment: "it" -> "is"
34277e53a : Free row mt memory before freeing cpi->tile_data
d2ba3a22b : Add 2D-specific highbd SVE2 horizontal convolution function
cd9d72c06 : Add SVE2 implementation of vpx_highbd_convolve8
c9bd57353 : Replace "cpi->common." with "pc->"
4f579df33 : Declare VP9BitstreamWorkerData dest_size as size_t
e38718743 : Add the buffer_end field to the vpx_writer struct
9137f7fa4 : rtcd.pl: add empty specialize() check
d48577579 : Pass output buffer size to vp9_pack_bitstream()
cab4f31e1 : encodeframe.c: remove some unused includes
3c58cb1bc : VP8_COMMON: remove unused cpu_caps member
6e879c617 : Save encode_tiles_buffer_alloc_size() result
afc8b452b : aarch64_cpudetect: add missing include
55d4b736b : vpx_scaled_convolve8_neon: add missing include
6358ef626 : vp9 rc: Add ref frame list for each frame in GOP
18059190d : Remove vpx_rc_gop_info_t
6641e9e03 : Add update type and ref update idx to gop decision
458e1c687 : Remove TPL IO functions
19832b170 : vp9: fix to integer overflow test
c29e63728 : Fix to buffer alloc for vp9_bitstream_worker_data
7fb8ceccf : Restrict ranges of duration,deadline to UINT32_MAX
bc5a22eb6 : Replace timestamp_ratio by oxcf->g_timebase_in_ts
6c0bf97a9 : Detect integer overflows related to pts & duration
8c36d36bc : Add high bit depths, 4:2:2, 4:4:4 to VP9Encoder
ad1d0ece3 : Disable SVE2 if compiler doesn't support arm_neon_sve_bridge.h
c1494fa57 : neon: fix -Woverflow warnings
daa33cca3 : Remove return statement after vpx_internal_error()
0ba7b5033 : Ignore the pts parameter when flushing the encoder
7f6ba04e8 : Move the local variable sd to the innermost scope
cf1b7a65f : VP9RateControlRtcConfig: relocate some initializations
0af724497 : ratectrl_rtc.h: remove use of vp9_zero()
ca7fd396e : ratectrl_rtc.h: move some includes to .cc
0ba67bb93 : *ratectrl_rtc.h: remove unneeded 'public:'
cd88d25c5 : vp8_ratectrl_rtc.cc: fix include order
5391609fb : Add SVE2 implementation of vpx_highbd_convolve8_avg_vert
45ea306da : Add SVE implementation of vpx_highbd_convolve8_avg_horiz
2c3a9b69e : Add SVE2 implementation of vpx_highbd_convolve8_vert
282e9aa0e : Add Arm SVE2 build flags and run-time CPU feature detection
a87978a53 : VP8: Always reset the setjmp flag before returning
f51417671 : Include system headers first
03c7f6a10 : libs.doxy_template: remove DOT_TRANSPARENT
f46d99bcf : Clear dangling ptr in vp8_remove_decoder_instances
ec06dcc31 : Subtract pts_offset from pts after calling setjmp
99e887c09 : vp8/encoder/encodeframe.c: sort includes
a6647c9ca : Add vp8_ prefix to sem_* macro names
6b6916be0 : Refactor standard bitdepth Neon scaled convolve
9b94b7bd0 : Optimize Arm Neon implementation of transpose_u8_8x8()
fa64af7bb : vp8/encoder/encodeframe.c: add missing include
1f066bf77 : build/make/Android.mk: update configure/build comments
b0e26cdcf : aarch64_cpudetect.c: Avoid unused variable warning
148c7f65f : IWYU: fix clang-tidy complaints
b207d1c9b : Only #define __builtin_prefetch if it doesn't exist.
a571299b0 : vp9 ext rc: Do motion search on key frame in TPL
9d8d71b41 : Refactor Arm Neon transpose_concat_*() to not need lookup table
5a8e2f705 : Cosmetic: Remove 'vpx_' prefix from static Neon functions
f394f2be7 : Delete "public" from struct definitions
d4959f982 : Handle EINTR from sem_wait()
b7b5d0a56 : Use the `value` param in Win32 version of sem_init
fa50b2684 : Only enable AArch64 extensions if the compiler supports them
3646b1292 : Specialise highbd_convolve8_horiz_sve for 4-tap filter
c78f1ef4a : Rename dot_neon_sve_bridge header file
2bc1012c5 : Remove redundant code for neon_dotprod 2D convolution
baece7460 : Merge h. and v. passes in 4-tap SBD Neon DotProd 2D convolution
d191c5f98 : Remove redundant code for neon_i8mm 2D convolution
cef5b0da9 : Merge h. and v. passes in 4-tap SBD Neon i8mm 2D convolution
a3209600f : codec_factory.h: fix -Wpedantic warnings
b5578f128 : Add inter/intra_pred_err to VpxTplBlockStats
ff9591f8d : vp9 ext rc: assign srcrf_dist/rate instead
5433b943a : resize_test.cc: fix warning w/CONFIG_VP9=0
fca3d1755 : fix void param declarations
5e90a97fa : tokenize.h: remove undefined vp8_tokenize_initialize()
79284f4c8 : vp8: add uv_delta_q support to external RC
1659e73b0 : vp9_context_tree.h: add name to union
5d022e45e : vp9_rdopt,skip_iters: normalize use of const
9c0c5144e : Add SVE implementation of vpx_highbd_convolve8_horiz
7e9da9702 : Use Armv8.0 Neon 4-tap vertical convolution for all arch levels
9f7a70bdf : Further accelerate Armv8.0 Neon 4-tap convolution
4340382bb : Move THREADFN macro definitions to vpx_pthread.h
4e384da53 : Delete a duplicate definition of thread_sleep()
3316c1124 : Delete unused macro definitions
d63efe067 : Handle EINTR from sem_wait(&cpi->h_event_end_lpf)
e1da3834b : Add base qp to ext rc config
e92dd0512 : Add VPX_WORKER_STATUS_ to values of global-scope status enum.
4c0cf7458 : Split pthread wrapper to vpx_pthread.h.
b01d61c9a : Remove unused signals for get_encodeframe_decision
591c78743 : vp9 ext rc: Remove initializer for gop_decision
455cb2699 : Optimize Arm Neon USDOT narrowing sequences in convolve kernels
939bcd402 : Optimize Arm Neon SDOT narrowing sequences in convolve kernels
a64bf87fb : Fix gop decision and gop index in TPL pass
8cf26c128 : Backport thread-related changes from libaom.
491c16a9f : Merge horiz. and vert. passes in HBD Neon 2D avg convolution
364326c37 : Merge horiz. and vert. passes in HBD Neon 2D convolution
58731e2b7 : Specialise highbd Neon 2D horiz convolution for 4-tap filters
3127962e7 : Specialise highbd Neon vert convolution for 4-tap filters
70b14bf4d : Specialise highbd Neon horiz convolution for 4-tap filters
b1c9bbeaa : Remove unneeded assert in vpx_filter.h
00135942d : Add 2D-specific highbd Neon horizontal convolution function
08eb51bc1 : Call scalar impl. immediately from HBD Neon 2D convolution
ef3fd00c2 : Refactor Neon highbd 2D convolution definitions and merge files
72cc21e3a : Refactor SBD Armv8.0 Neon horizontal convolve8 paths
81ce6067c : Refactor SBD Armv8.0 Neon vertical convolve8 paths
e32f9d413 : configure: remove profile from CONFIG_LIST
8408251f4 : README,cosmetics: break some long lines / fix whitespace
96b64eaac : Refactor Neon highbd_convolve8 kernels
18bc7ffe5 : Optimize vpx_highbd_convolve8_horiz_avg_neon
04c8813a2 : Optimize vpx_highbd_convolve8_horiz_neon
a0f3eb8ce : Delete a useless clamp(q, x, y) call
f5e1a0ab7 : Include headers to fix clang-tidy complaints
6c0035648 : Refactor vpx_highbd_convolve8_avg_vert_neon
01edfb3df : Refactor vpx_highbd_convolve8_vert_neon
de7883604 : Init using 0-vector instead of load-broadcast in mem_neon.h
a7a853c3a : Remove stride width == 4 tests in mem_neon.h helpers
4084250cc : Align intermediate buffers for 2D Neon convolutions
c6a8fa27b : Added documentation on PGO for optimization analysis
7eec109a8 : Add profile guided optimization support
58fb0f1d2 : Add SVE implementation of vp9_block_error_fp
a9d91d7a0 : Add SVE implementation of vp9_block_error function
075569f3a : Add SVE implementation of vpx_sum_squares_2d_i16
1258773dc : Ext rc: Remove max_frame_size from frame decision
a9bd789d2 : Delete #if USE_PARTIAL_COPY code
c35f3e9e3 : Cosmetic: Refactor Arm Neon i8mm convolution functions
224f2dc82 : Refactor Arm Neon DotProd convolution functions
3b1039c82 : Rewrite ext RC test
9aefcb317 : Ext RC: remove gop_info parameter
91bc8ec56 : vp9: Set VPX_FRAME_IS_INVISIBLE for no show frame
861981b13 : Allow external RC to control key frame
56b67113d : Move vp9_estimate_qp_gop to vp9_tpl_model.c
8630b1832 : Fix gf group index used in TPL pass for WebM RC
7ad5f4f69 : vp9_scale_and_extend_frame_ssse3: fix uv width/height
ef09c2e1a : vp9_encoder.c: make vp9_svc_twostage_scale static
adac06ace : vp9_scale_references: condense hbd #if
6cf6e1f08 : Simplify Armv8.4 DotProd correction constant computation
fd6b80b15 : Move Neon dotprod and i8mm convolution kernels into .c files
189c135d5 : Merge Arm Neon dotprod and i8mm convolution files
433577ae3 : Update third_party/libwebm to commit affd7f4
2edd69749 : Pass the aligned width/height in lookahead_push
fed0dfe96 : Allow SVE variance functions to be called from Neon subpel var
33ef1caf2 : Add SVE implementation of HBD get<w>x<h>var functions
0e2325634 : Enable HBDGetVariance test for different implementations
c43ec846f : Enable GetVariance test for different implementations
95d0fcae0 : Add unit tests for vpx_get<w>x<h>var functions
989c393b2 : Initialize members in VP8/VP9RateControlRTC ctors
d84436d53 : Add SVE implementation of HBD variance functions
0f6a27496 : vp9_ratectrl.c: add missing include for INTER_LAYER_PRED_ON
43e1c8bf1 : encode_api_test,RandomPixelsVp8: fix stack overflow
eeb1be7f2 : Support VPX_IMG_FMT_NV12 in vpx_img_read/write
d3a946de8 : Make img_alloc_helper() fail on VPX_IMG_FMT_NONE
db6a5c09c : README: remove library version
71a5cb6e8 : Add SVE implementation of HBD MSE functions
b95d17572 : vp9-rtc: Fix to reset on scene change for temporal layers
e001eeb5b : Enable Neon Dotprod impl for HBD MSE
41e0655e5 : Fix highbd_get_block_variance_fn input parameter
25f03e456 : vp9-svc: Fix to sample encoder for mismatch check
cc306fac7 : vp9-svc: Fix to max-q on scene change for svc
8aeb5848a : Do not use TPL QP from RC for final encoding
0eecce72b : vp9_quantize.c: add missing include for get_msb()
0a91e18ec : vp8_datarate_test.cc: add missing include
43bd56795 : vp8: Fix to integer division by zero and overflow
aeb4928c6 : Add SVE 16-bit dot-product helper functions
0801bfca3 : Add -flax-vector-conversions=none to Clang builds
e03c9d2a6 : Update of get_files.py to support Python3.x
6ea3b51ec : Require Arm Neon-SVE bridge header for enabling SVE
756b29a77 : vp8: Fix to race issue for multi-thread with pnsr_calc
aef73b22c : Make encoder know frame size increase from config
c5f808670 : Move VPX_TPL_ABI_VERSION to the ext RC ABI version
469150a92 : configure: add -arch flag when targeting darwin23
3b3e8b5f2 : vp9 ext rc: if->assert, more comment for TPL ctrl
1474e3c72 : Return error if TPL related interface isn't set
f6b7166a2 : Clear some clang-tidy complaints
bd7803407 : Add codec ctrl to control TPL by external RC
655da33b8 : Use get_lsb in vp9 and vp8 invert_quant functions
41ced868a : Remove VP9E_GET_TPL_STATS
df655cf4f : Clarify the comment for update_error_state()
4fe07a0c4 : Return correct error after longjmp in vp8e_encode
193b15119 : Fix to integer overflow in vp8 encodeframe.c
a75859c43 : Remove redundant comment in convolve8_4_usdot
7e735cdf4 : IWYU: include vp9_scale.h and vpx_codec.h
3a88c0c20 : Avoid dangling pointers in vp9_encode_free_mt_data
fa60c7d9c : IWYU: include yv12config.h for YV12_BUFFER_CONFIG
1ed56a46b : Update frame size in actual encoding
50ed636e4 : Fix a bug in simple motion search
585798f75 : Set pred buffer stride correctly
2f258fdee : vp9_frame_scale.c,cosmetics: funnction -> function
476d02a2d : Fix two clang-tidy misc-include-cleaner warnings
f9b7c8576 : README: update target list
c4c920805 : Remove SSE code for 128x* blocks
7dfe34319 : Use vpx_sse instead of vpx_mse to compute SSE
4c2435c33 : Fix several clang-tidy complaints
12e928cb3 : Add unittest for issue b/314857577
97184161d : Add "IWYU pragma: export" to some public headers
7cfc58de4 : RTC RC: add screen content support for vp8
8bf3649d4 : Fix a bug in frame scaling
9ad598f24 : Improve test comments.
db83435af : configure: Add darwin23 support
f10481dc0 : Set skip_recode=0 in nonrd_pick_sb_modes
5dcb4c174 : Define VPX_DL_* macros as unsigned long constants
bf0755418 : Add the needed Android API level predicates.
a9f1bfdb8 : Fix edge case when downsizing to one.
5cad6fdc9 : CHANGELOG: add CVE for issue #1642
070d7e5cf : Document vpx_codec_decode() ignores deadline param
845a817c0 : Fix scaled reference offsets.
d144e6e95 : Specialise Armv8.0 Neon horiz convolution for 4-tap filters
a33ac12dc : Specialise Armv8.0 Neon vert convolution for 4-tap filters
b027590c3 : Define vpx_enc_deadline_t type for encode deadline
cc89450a4 : Specialise Armv8.6 Neon 2D horiz convolution for 4-tap filters
0dc67ecf5 : Specialise Armv8.6 Neon horiz convolution for 4-tap filters
9cdb68891 : Specialise Armv8.4 Neon 2D horiz convolution for 4-tap filters
68ef57f99 : Specialise Armv8.4 Neon horiz convolution for 4-tap filters
2f8e94715 : Specialise Armv8.6 Neon vert convolution for 4-tap filters
bdc9e1c9d : Specialise Armv8.4 Neon vert convolution for 4-tap filters
1b3ec0676 : Make reporting of filter sizes more granular
9b729500d : Delete redundant code in Neon SDOT/USDOT vertical convolutions

+- Project: platform/external/libwebsockets

2862e24d : Remove unused -Wno-implicit-function-declaration.

+- Project: platform/external/libxaac

4bbe4a0 : Make bpfmt happy.

+- Project: platform/external/libxkbcommon

9ba24b3 : Make bpfmt happy.
02ee184 : Actually pin to C11.

+- Project: platform/external/libxml2

b7c0f9d2 : string: Fix va_copy fallback
a870088f : xpath: Hide internal sort functions
51394929 : python/tests: fix typos
f9a6469a : Update NEWS
c7b27866 : Avoid Python 'licence' distribution option is deprecated; use 'license' error
bf3619c3 : fuzz: Don't unlink DTD when replacing nodes
a4c16a14 : xmllint: Improve --memory and --testIO options
3ac214f0 : xmllint: Support --html --sax
225ed707 : html: Accelerate htmlParseCharData
74dfc49b : parser: Clarify logic in xmlParseStartTag2
20799979 : html: Handle numeric character references directly
0bc4608c : html: Use hash table to check for duplicate attributes
24a6149f : html: Make sure that character data mode is reset
c32397d5 : html: Improve character class macros
e8406554 : html: Rewrite parsing of most data
f77ec16d : html: Optimize htmlParseCharData
440bd64c : html: Optimize htmlParseHTMLName
c34d0ae9 : html: Deprecate htmlIsBooleanAttr
6040785a : html: Deprecate AutoClose API
188cad68 : html: Remove obsolete content model
0144f662 : html: Remove obsolete code
0ce7bfe5 : html: Try to avoid passing XML options to HTML parser
76cc6394 : test: Fix XML_PARSE_HTML constant
575be6c1 : html: Fix line numbers with CRs
be874d78 : html: Ignore unexpected DOCTYPE declarations
462bf0b7 : html: Rework options
16de1346 : parser: Make new options actually work
42c3823d : html: Update comment
9f04cce6 : html: Remove unused or useless return codes
e179f3ec : html: Stop reporting syntax errors
c6af1017 : html: Test tokenizer against html5lib test suite
27752f75 : html: Fix EOF handling in start tags
b19d3539 : html: Fix EOF handling in comments
17e56ac5 : html: Fix parsing of end tags
24a09033 : html: Fix bogus end tags
bca64854 : html: Allow U+000C FORM FEED as whitespace
6edf1a64 : html: Fix DOCTYPE parsing
9678163f : html: Don't check for valid XML characters
a6955c13 : html: Parse numeric character references according to HTML5
4eeac309 : html: Start to fix EOF and U+0000 handling
e062a4a9 : html: Add HTML5 parser option
17da54c5 : html: Normalize newlines
341dc78f : html: Deduplicate code in htmlCurrentChar
3adb396d : html: Parse bogus comments instead of ignoring them
84440175 : html: Add missing calls to htmlCheckParagraph()
86d6b9b0 : html: Deduplicate some code
0d324bde : html: Simplify node info accounting
ccb61f59 : html: Remove duplicate calls to htmlAutoClose
e1834745 : html: Add character data tests
f9ed30e9 : html: HTML5 character data states
59511792 : html: Parse named character references according to HTML5
d5cd0f07 : html: Prefer SKIP(1) over NEXT in HTML parser
dc2d4983 : html: Rework htmlLookupSequence
637215a4 : html: Always terminate doctype declarations on '>'
72e29f9a : html: Fix quadratic behavior in push parser
a80f8b64 : html: Allow attributes in end tags
f2272c23 : html: Handle unexpected-solidus-in-tag according to HTML5
939b53ee : html: Stop skipping tag content
dcb2abb2 : html: Parse tag and attribute names according to HTML5
d67833a3 : xmllint: Use proper type to store seconds since epoch
81d38ed0 : meson: Fix duplicate listing of libxml2.devhelp2
b1c5aa65 : xpath: Deprecate xmlXPathNAN and xmlXPath*INF
55ddccb6 : io: Make sure not to pass partial UTF-8 to write callback
c46b89e2 : xpath: Deprecate xmlXPathEvalExpr
03f1bdd2 : xpath: Make recursion check work with xmlXPathCompile
dae160c6 : encoding: Fix table entry for "UTF16"

+- Project: platform/external/linux-firmware

e03cac8 : IMPORT: rt2800usb
0edc320 : ANDROID: Add firmware_import_workflow for rt2800usb
472686b : ANDROID: Add initial README.md and copy.bara.sky
a871e99 : IMPORT: r8152
be82fe7 : ANDROID: Add firmware_import_workflow for r8152
e57da94 : ANDROID: linux-firmware: Add OWNERS
2297552 : Initial empty repository

+- Project: platform/external/linux-kselftest

053f45be4e35 : Snap for 12551711 from 86f44e2f3fa835709cd5aba4998e213378882ff0 to 25Q1-release

+- Project: platform/external/llvm-libc

2622e1e17ca0 : Project import generated by Copybara.
496d77ec8fa7 : Android.bp: move memrchr to most arches
aa97f032d38e : Android.bp: move strlcat to all arches
8f6900a36077 : Android.bp: move strlcpy to all arches
a84b19b4e95b : Android.bp: roll out 8 fns to x86_64
dd77140bcdce : Android.bp: fix up build metadata
497e1ab2c040 : Revert "Android.bp: export more 'baseline' libc func implementations"
28b95e731c64 : Android.bp: export more 'baseline' libc func implementations
4f3b4bc9aaca : Project import generated by Copybara.
7c11e784aa5b : external: llvm-libc: build 32b arm
35892f289b97 : external: llvm-libc: enable 32b x86
d949ae7a266e : Project import generated by Copybara.
551222e584a7 : Project import generated by Copybara.
47e9021884db : Project import generated by Copybara.
06bb7e72930d : Project import generated by Copybara.
4de0ca2ba829 : Project import generated by Copybara.
b8f316a963ca : Project import generated by Copybara.
5d20f490927f : Project import generated by Copybara.

+- Project: platform/external/ltp

7f4f899e2 : ANDROID: Add lowmem and hwasan configurations using extra_test_configs
817acb7e3 : ANDROID: Move test definition from vts-testcase/kernel/ltp
01fd306d6 : libswap: fix tst_max_swapfiles() for SWP_SWAPIN_ERROR_NUM
cf41f0a51 : ANDROID: memcg/regression: toybox swapon and swapoff need paths
29f4e65e8 : cgroup_core02: Requires cgroup2 mounted with nsdelegate
953774a67 : Revert "nft02: Fix initializer element is not a compile-time constant"
6d391af90 : ANDROID: Remove vts scenario_group
68f2de545 : scenario_groups/default: remove io, filecaps and cap_bounds
d862601d1 : LTP 20240524
d25f1aad5 : ANDROID: Don't build kvm_svm04
a21f47355 : ANDROID: Add back package list generation
2be5d01b5 : Pin to C99.
4634c8dc0 : ANDROID: update kernel timespec checks for Android
a7558eecf : runtest/mm: create TMPFILE in TMPDIR for mmapstress07
178264313 : ltp: syscalls: pipe2: Update pipe size check
0e72ac7ea : ltp: syscalls: epoll_wait: Work around edge triggered semantics
dd6fc8d8e : ltp: security: dirtyc0w_shmem: Fix backing file size
85eab5a3f : ltp: syscalls: mincore: Iterate vector in kernel page size granule
139a6b89a : ltp: syscalls: fcntl: Clarify default pipe size for privileged user
f502aad17 : ltp: controllers: memcg: Fix fault granularity
a2b6c319b : ltp: syscalls: mlock: Fix fault granularity
e5375f640 : ltp: syscalls: signal: Fix alignment and size of altstack
9f8d8a610 : ltp: syscalls: msync: Fix pagemap index
347abf561 : ltp: Introduce pgsize_helpers.h
8f21ebba4 : LTP 20240524
0358f7a27 : syscalls/msgstress01: Fix off by one in array access
dac76a85f : syscalls/msgstress01: Fix timeouts
8a2dca14e : syscalls/msgstress01: Fix the stop logic
f888bc21f : sbrk03: Convert to detect support with flags
7e08a58e5 : Refactor fork14 using new LTP API
56f63e54e : lib: Add .needs_abi_bits
374238e81 : tst_kernel.h: Convert docs to sphinx
3ec7b4ebc : libswap: Remove function description
78a6e1f55 : libswap: Fix tst_max_swapfiles() for SLE12-SP5
61ad7ef65 : libswap: Split long lines (readability)
6ab10dec5 : setsockopt03: Fix typo in docs
3922d75f3 : setsockopt03: Convert docs to docparse
e644691d3 : docparse: Fix list formatting
1c0bf86a4 : getsockname01: Add case for errno EINVAL
5dd33b797 : getsockopt01: Add case for errno EINVAL
4850d9a24 : tcindex01: Pass if the tcindex module is blacklisted
ee1bf39b3 : tst_netdevice: Add permissive macro for adding traffic filters
b1e97fd95 : readahead01: pass on pidfd
76c8c04b5 : setitimer: Pass the kernel-defined struct __kernel_old_itimerval to sys_setitimer()
7fd200fc3 : open_posix_testsuite: Replace old -W command line argument
02ef6efd8 : syscalls/mlock05: add mlock test for locking and pre-faulting of memory
09f729b18 : bind: Add negative tests for bind
ba69dd79e : KVM: Add functional test for VMSAVE/VMLOAD instructions
45069d033 : KVM: Implement printf-like formatting for tst_res() and tst_brk()
8e97c8e56 : KVM: Implement strchr() and basic sprintf()
7e10cebe2 : KVM: Disable EBP register use in 32bit code
ff13d6750 : syscalls/mmap08: Use macro TST_EXP_FAIL_PTR_VOID()
11fb88089 : syscalls/mmap06: use macro TST_EXP_FAIL_PTR_VOID()
1bddece8b : madvise11: ignore EBUSY for MADV_SOFT_OFFLINE
8c9ecdfbf : wait01: Use TST_EXP_FAIL2() for wait
059cb0233 : swapping01: Add sleeps in the loop that dirties the memory
99b3e43c3 : doc: Clarify that the only public CI testing is build only
9e9654cf2 : doc: Bump minimal supported kernel to 4.4
947393d25 : syscalls: arch_prctl01.c fix compilation on old distros
0d9dc994e : runtest: Move io content to ltp-aiodio.part4
071727828 : runtest: Move capability related tests to new capability
c5500841c : Add case about arch_prctl syscall.
b3102e21c : doc: Improve TDEBUG docs
7352ba023 : hugemmap15: Support RISC-V to do __cache_flush
e59f1a917 : doc: Use more common doc for gdb
dc2c4f8bc : doc: Add links to git man pages
095f00ec6 : doc: Link modules to kernel doc website
0364c2671 : doc: Link kernel file names to git repo on kernel.org
e4260eee6 : doc: More link file/directory names to GitHub sources
6052dca5d : ci: Specify only library devel packages
349bab5f0 : ci: Fix libaio package rename on Debian
84155fae8 : ci: Rename docker build config
180bc2bb9 : ci: Run sphinx test only when files changed
3fe59efe4 : Add utime07 test
b8a5974d3 : statx04: Skip STATX_ATTR_COMPRESSED on Bcachefs
67f155054 : safe_mmap(): Fix compiler warning in tst_res() format
7841eed7c : ci: Add sphinx related job
a835b0730 : open_posix_testsuite: Avoid non portable GCC extensions without a guard
ef286ba37 : KVM: Move kvm_pagefault01 to the end of KVM runfile
27700e7c4 : KVM: Add VMSAVE/VMLOAD functions to x86 SVM library
e93dc2146 : KVM: Add system control MSR constants
320fc82e3 : kvm_find_free_descriptor(): Skip descriptor 0
29b76a954 : kvm_svm02: Fix saved stack segment index value
90f80322a : Rewrite msgstress testing suite
703406ba4 : doc: libltpswap: Add kerneldoc
b0343add6 : mmap15: Enable in compat mode
dd0e8ded2 : tst_test.h: Turn 1 bit tst_test members to unsigned
504bdede6 : zram01.sh: Remove unneeded return
96ef4b40a : zram01.sh: Increase timeout for check_read_mem_used_total
8cb61291f : doc: Update building docs section
0a682f1af : sched_stress: Use time_t instead of long for type
f62d2cbc1 : kallsyms: Fix docparse formatting
e29c89f6e : kallsyms: Utilize ksymbol table for unauthorized address access
9725496ca : lib: add SAFE_CALLOC macro
1ba882cf3 : m4: Remove now unused ltp-nommu-linux.m4
b635fe826 : doc: UCLINUX has been removed
81fbc2937 : Remove doc/old/nommu-notes.txt
664c0d320 : lib: Remove -C option and self_exec.c
7cdb2bedb : syscalls/ustat02: Remove UCLINUX
be7e87d12 : syscalls/sysinfo02: Remove UCLINUX
5f31b9cf3 : syscalls/sigrelse01: Remove UCLINUX
b6b583098 : syscalls/setsid01: Remove UCLINUX
070b10e84 : syscalls/setgroups04: Remove UCLINUX
b36429dde : syscalls/read02: Remove UCLINUX
ebbb3c9ce : syscalls/sock*: Remove UCLINUX
9b02fceb9 : syscalls/send*: Remove UCLINUX
e3e3d0217 : syscalls/recv*: Remove UCLINUX
f3a290e1c : syscalls/pause: Remove UCLINUX
dba03bf22 : syscalls/pipe: Remove UCLINUX
0dff3d5f7 : syscalls/writev05: Remove UCLINUX
d4063d5fa : syscalls/munmap: Remove UCLINUX
90b735af7 : syscalls/mlockall: Remove UCLINUX
ae0808881 : syscalls/madvise02: Remove UCLINUX
883a911b3 : syscalls/kill: Remove UCLINUX
aecdb74cb : syscalls/semctl06: Remove UCLINUX
7464f09b4 : syscalls/fcntl: Remove UCLINUX
e3fc7c44a : syscalls/creat06: Remove UCLINUX
450309c5b : syscalls/connect01: Remove UCLINUX
00ca6ec8a : syscalls/clone02: Remove UCLINUX
ad8ed74aa : tlibio.c: Remove UCLINUX
e4d3f8551 : lib/parse_opts.c: Remove UCLINUX
fca021a2a : tree: Remove FORK_OR_VFORK and tst_vfork()
ebff440f9 : test.h: Remove MAP_PRIVATE_EXCEPT_UCLINUX
b6a09758d : make: Remove UCLINUX (nommu detection)
891fee45c : make: Remove WITH_POWER_MANAGEMENT_TESTSUITE
2f13c1364 : doc: Link file/directory names to GitHub sources
c240726a6 : libswap: Use {SAFE_,}MAKE_MINIMAL_SWAPFILE()
c33f65c39 : libswap: Add {SAFE_,}MAKE_SMALL_SWAPFILE() macros
6b791b727 : doc: Remove dead link to README.md
ccb072923 : doc: Fix link to github repo
7248e5c5f : doc: update syscalls statistics
f8c922454 : lapi: getrandom05: Add getrandom() fallback
b4970ae94 : lapi/fs: Replace loff_t with long long
965d1fa3c : tst_safe_macros_inline.h: Add man page + more explicit doc
2cf78f47a : unlink: Add error tests for EPERM and EROFS
d9280782d : getrandom: Add negative tests for getrandom
7c8997c06 : gethostname: Add negative test for gethostname
03333e6f8 : doc: introduce sphinx extlinks
b150e3a21 : doc: test_case_tutorial: Fix link
b318e7822 : tst_test: Merge needs_cgroup_ctrls C comment into sphinx doc
04cca38b5 : mincore02: refactor with new LTP API
c99c5a4be : swapoff0[12]: Remove unneeded tst_brk()
f77ebacb7 : libswap: Use tst_res_() instead of tst_res()
2eec2625a : libswap: Move file & line macros to macros
2bec70ce2 : doc: documentation: Fix typos
46f4aa523 : doc: Add section for C API documentation
638934e8b : doc: Documentation usage and development
70d3ea085 : controllers: remove use of LINE_MAX
4961781fd : mremap06: fallocate is not supported on nfsv4.1 or earlier
b592cdd0d : dnsmasq: Final fix of library inclusion
8f3b7bb06 : syscalls: Add test for splicing to /dev/zero and /dev/null
a0bf6550e : syscalls: Add test for splicing from /dev/zero and /dev/full
a49a7e9d7 : dnsmasq: Proper fix of library inclusion
1cb8e3153 : dnsmasq: Fix variable initialization
a8e3009e1 : getsockopt01: refactor with new LTP API
cce93d3a3 : Refactor with new API and merge fcntl27 + fcntl28
8fb231135 : Refactor open09.c with new LTP API
0b6ca26f9 : refactor fallocate03 with new LTP API
76ed92c1b : waitid10: Add .needs_tmpdir=1 to run test in temporary directory
91feb8458 : unlink05: Convert docs to docparse
73c196c24 : unlink05: Add identifier name
807ff91c1 : getsockname01: refactor with new LTP API
6f97789ca : network/README: Document ping dependencies
64b11656f : tst_net.sh: tst_ping(): check for ping -I support
87d90a00e : doc: Remove make install
0d688b956 : .github: Remove GitHub wiki mirroring hook
6cf992812 : include: doc: Convert comments into linuxdoc
175d91a74 : doc: Add more to spelling_wordlist
4a72aada8 : New LTP documentation
0ac55a649 : realpath01: Use TST_EXP_FAIL_PTR_NULL
3fef321cb : shmat02: Use TST_EXP_FAIL_PTR_VOID
8995610c3 : lib: Add TST_EXP_FAIL_PTR_{NULL,VOID}{,_ARR} macros
4bba73929 : test_macros02: Reduce duplicity
ce7060d84 : Refactor sigaltstack02.c with new API
42f2c155f : sctp/test_1_to_1_events: memset() struct sctp_event_subscribe
747fe069f : tree: Fix tst_clone_args initialisation
f09c3b0db : Refactor mmap09 test with new LTP API
18ab64692 : Refactor gethostbyname_r01 with new API
b1f31f4f3 : Refactor fcntl08 with new API
350f353d6 : fanotify14: fix anonymous pipe testcases
634376ea6 : tst_test_macros.h: Require to pass array size in TST_EXP_FAIL*_ARR()
64bb39076 : lib: Add tst_selinux_enforcing()
23e3083b8 : getpeername01: Refactor with new LTP API
df933e38b : setpgrp02: refactor with new LTP API
3f571da28 : include: Move inline functions to special header
3c3c36f02 : tst_safe_macros: Move implementations into C file, rename
67ab430a2 : lib: Merge security related sources
d1e742459 : syscalls/timer_getoverrun01: use kernel_timer_t type
8ea05b559 : swapon01: create 128MB swapfile
f987ffff5 : libswap: add two methods to create swapfile
aa33d2bb8 : mknod09: Refactor with new API
e2fce8fc6 : lib: Add SAFE_SSCANF()
77449d296 : getxattr01: Convert to new API
bc9b785a6 : iopl02: Convert docs to docparse
fd3f66831 : iopl01: Convert docs to docparse
a3dc45fcd : madvise06: set max_runtime to 60
be4368630 : fpathconf01: Fix SPDX license ID
2282da7c4 : github: Add issue template
f99740db4 : setreuid07: Add missing .needs_tmpdir = 1
c3f8ace4a : tst_lockdown: Add copyright
3e0648fc5 : docparse: Correct spelling mistake
dbfe867b4 : fpathconf01: Convert to new API
fc1e87cb8 : Add more check points for Review Checklist doc
690d44d75 : make: Delete gitignore.mk
91cee3203 : Makefile: Add doc target
d06621fed : tests: Run test_kconfig03 in CI
7ba556640 : tst_safe_macros: Fix formatting
6a582f415 : doc: Document the oldest supported clang
9b118fea5 : lib/newlib_tests: add test_kconfig03 in .gitignore
bc8b87088 : stack_clash: make use of tst_kcmdline_parse
3c0b6c7ab : init_module: To handle kernel module signature enforcement
180834982 : kconfig: add function to parse /proc/cmdline
14c710cae : scenario_groups/default: remove connectors
71f75ca07 : tools: fix broken failure-detection when using individual dmesg logs
02109a38b : safe_mount: Temporary clear umask before mount()
ab1c8d16e : memcontrol03: Using clean page cache to avoid dependency on IO rate
f21f1e4ee : tst_fs_setup.c: Add tst_ prefix to new API functions
b78078e90 : include: Move new API only functions to new API header
441dcca68 : logrotate: Simplify log checking
dd7aa665f : logrotate: Rewrite into new API
20ea2221c : send02: Turn docs into docparse
9856625b2 : Add shmat04 SysV IPC bug reproducer
ff5f945e1 : Print prot flag when SAFE_MMAP() fails
b366afb64 : Add SAFE_MPROTECT() macro
9fa305fe3 : send02: Fix typo in TINFO message
cbc2d0568 : mkdir03: Convert docs to docparse
d824f59a2 : Add more testcases in mkdir03
754c518e5 : munlockall: re-write test case
d6e3d0c44 : network: Remove clockdiff01.sh test
37bc7f250 : network: Remove telnet01.sh test
4fb5e8e2e : network: remove xinetd_tests.sh
f85a9df7a : network: Remove host01.sh
0b38797a8 : getxattr04, 05: Change to docparse comment and typo fixes
dce8b26e2 : syscalls/mmap13: Rewrite the test using new API
6f82542fc : libswap.c: Improve calculate swap dev number
ee628efff : include/tst_fs.h: Fix missing header
0f5d8c520 : libswap.c: Check free space with correct mnt path
50626b4a1 : cgroup_dir_mk: set the umask to '0' before creating the subdir
d95f453ac : statx07.c: set umask to 0 within setup
3c89830fc : pipe15: Adjust fd check for pipe creation
fc6adb845 : pipe15: Avoid SIGSEGV in cleanup
ea1a7e8f1 : tree: Relicense GPL-2.0 (v2 only) => GPL-2.0-or-later
75744cf01 : tree: Fix SPDX license GPL-2.0-only
e8564a2bc : tree: Fix license GPL-2.0-or-later
4ce197857 : setxattr03: Convert docs to docparse
1700062eb : setxattr02: Convert docs to docparse
f487f3e29 : setxattr01: Convert docs to docparse
b85c7bca3 : getxattr05: Add missing linux tag
ca03a9b92 : listxattr03: Convert docs to docparse
f74b8b422 : listxattr02: Convert docs to docparse
f47428eee : listxattr01: Convert docs to docparse
697a06a82 : ioctl02: Use correct termios structure
dbd224078 : Add fallback for RHEL9
8fd941649 : syscalls/swapon03: Simplify this case
cf99f511d : swapon/Makefile: Remove useless section for MAX_SWAPFILES
ee95bc7fc : swaponoff.h: Remove useless header
fe1782ed6 : syscalls/swapon03: use tst_max_swapfiles() and GET_USED_SWAPFILES() API
319693d0b : libltpswap: alter tst_count_swaps API
c1b8c011e : libltpswap: Add tst_max_swapfiles API
a3b05b8c7 : Use memset() to fill buffers in diotest
afb6277fb : swapo{n,ff}: Remove useless tag .needs_tmpdir
386844083 : include/tst_cmd.h: Improve programming doc
2c89b2f78 : include: Add SAFE_CMD() programming doc
928c93ca2 : doc/C-Test-API: Reword SAFE_CMD()
7b1c5e0a2 : tst_fd: Use raw syscall for fanotify_init()
e97f41970 : move_pages12: compacting memory before each test loop
631b5acd8 : doc: Fix typo in constant name
23ec4f144 : link05: Use constant for number of links
25ad0c50a : link05: Return on link() failure
58952a874 : net.nfs: Fix nfs06.sh runfile entries
f62beb00d : open07: Convert to new API
fba66012d : settimeofday02: Simplify test using TST_ macros
512cf0e75 : settimeofday01: Convert docs to docparse
5f596e662 : Refactor mount02 test using new LTP API
a43ab76d6 : refactor fcntl29 with new API
82562d5cd : Add test for file truncation over NFS
1050f12f6 : swapon03: swapon() file on mounted filesystem
3f79bcb94 : Refactor mount01 test using new LTP API
fe4068694 : io_submit: Link against libaio only io_submit01
31b988058 : Refactor timer_getoverrun test using new LTP API
ab803e8a9 : futex_waitv: Convert 32bit timespec struct to 64bit for compatibility mode
64f7e5604 : nfsstat01.sh: Run on all NFS versions, TCP and UDP
17568a518 : nfsstat01.sh: Add support for NFSv4*
ee00c2ebe : nfsstat01.sh: Validate parsing /proc/net/rpc/nfs{,d}
36b2baa46 : runtest/net.nfs: Rename test names
34cc32bd6 : process_state: Enhancement of process state detection
ee6cfd5d9 : pwritev2: Convert docs to docparse
15841bafa : waitpid01: Add subtests from waitpid05
dcf371c4d : waitpid01: Fix signal value
ef154a8df : swapon03: Fix formatting
8bea380a0 : lib: tst_buffers: Fix checkpatch.pl warnings
767ae231b : waitpid04: Convert to new API
b07ef3bff : libswap: Refactor is_swap_supported function to return status
009a407a0 : swapon/off: enable all_filesystem in swap test
6249e87b5 : libswap: customize swapfile size
fb3f4c08c : libswap: Introduce file contiguity check
0d85fd1c7 : libswap: add function to prealloc contiguous file
f1e2c3bce : swapon01: Improving test with memory limits and swap reporting
a0fb0c3f2 : swapon01: Test on all filesystems
9a18d9fbe : libswap: add known swap supported fs check
2a50d18cc : README: Mention -f param for strace
ac69c8125 : hugemmap24: Postpone free()
ed5ccf6c1 : waitpid01: Test all standard deadly signals
c6a51e024 : inotify: Convert doc to docparse
6bb15044f : ioctl: Convert doc to docparse
2959a26d5 : runtest/net.nfs: Restore running nfsstat01.sh on NFSv3
54fb751b2 : nfsstat01.sh: Move local to the beginning of the function
8137e4778 : doc: Update C And Shell Test API Comparison table
c5e71f9e2 : Increase default appends operations in dio_append
3cc510997 : Fix dio_append/aiodio_append tests
359047c97 : fanotify01: Test setting two marks on different filesystems
711878673 : Add test for ASLRn't bug
9eb8d2dc7 : lib: Add tst_is_compat_mode() helper function
46eb69ffd : execl01.c: set stack to unlimited
222054d4c : lib: Add .ulimit
491927d79 : waitpid03: Convert to new API

+- Project: platform/external/lz4

65998fe : removed lz4c target
36df905 : update API and man page documentation for v1.10
de6531b : fixed minor conversion warning
8ce9e94 : improved lorem ipsum generator speed by a factor > x8
2a1de07 : improved speed of lorem ipsum generator
cc1508f : more correct extDict detection
2ea0373 : level 2 compatibility with LZ4F dictionary compression
21837b9 : fixed traces
d31aaac : Level 2 is now compatible with dictionary attach mode
be35b7c : minor documentation edit
1b12a15 : fix minor conversion warning
f458b2b : LZ4_loadDictHC is compatible with LZ4MID level 2
7480aee : added a dictionary loader dedicated to level 2 (LZ4MID)
33c62fa : promote LZ4_attach_HC_dictionary() to stable
08bfdbe : update changelog for v1.10
0642ae2 : minor: document version for new entry point
089210b : added CODING_STYLE documentation
4f88c7a : exit on invalid frame
6d4d56e : added runtime tests for binary produced by generated Visual solution
f8432bf : removed problematic v140 version test
814d9be : fix VS2022 directory for appveyor
4cc0431 : remove Visual Solutions
a2a21bc : removed failing tests
8017ca0 : add documentation about decompression side
9b72ddf : removed failing tests
4f25e37 : --list automatically triggers -m
397d684 : Bump github/codeql-action from 3.25.11 to 3.25.12
aa6a748 : Bump actions/setup-python from 5.1.0 to 5.1.1
42a43c5 : promote LZ4 dictionary API to stable
0990811 : promote LZ4F dictionary API to stable
2db419e : removed implicit stdout
b20025f : minor readability refactor for version extraction logic
f76c979 : add lz4file.h to include list
e5563ea : update logic that determines LZ4_BUNDLED_MODE
6ce6e6d : improved logic to extract version number
b5139c7 : do not test gcc/clang flags when building for visual
32c05ee : export the generated VS2022 solution as artifact
75391bd : added test for script generating solution on GH
d2f2e18 : fixed VS2017 build script
fae5a66 : provided scripts for other versions of Visual (2015+)
c292e7d : test some scripts to generate visual solutions on Windows
aafb56e : update gpl license to 2.0-or-later
7aaf095 : disable multithreading when compiling for m68k
e36fadf : support multithreading for linked blocks with dictionary
b3dd37f : fix leak issue
ed4d7d1 : Linked Blocks compression (-BD) can employ multiple threads
c78899a : add support for environment variable LZ4_CLEVEL
c0e4305 : Bump actions/upload-artifact from 4.3.3 to 4.3.4
461881c : fixed c90 compliance
461f369 : automatically enable multithreading by default on Windows
32f7fd3 : minor completion port update
cfdb8ac : completion ports: minor readability refactor
c688b4b : updated threadpool API
50d541f : simpler execution expression
f0b29a9 : updated completion ports logic
1dc60aa : minor: removed one variable
5301a75 : fix cpuload measurements on Windows
7ec8526 : build: minor: fix lz4 project in VS2017 solution
b7b0ee0 : Add MT control via Environment variable LZ4_NBWORKERS
8dd837f : document LZ4_NBWORKERS_MAX
718d1dc : warning message when nbThreads above limit
79e72be : minor optimization: allocate worker array at runtime
04341f1 : changed Makefile variable HAVE_THREAD -> HAVE_MULTITHREAD
b5639ad : fixed Visual Studio type warning
71ef5b7 : Makefile: automatic MT detection under native msys2/mingw64
10cc4e3 : fix C90 comment style
604122c : Makefile build automatically detect multithreading for Windows
425cd51 : count nb cores on Windows
c7fd52a : fixed queue size control
3628163 : minor: Semaphore init value
097f0fb : minor parameter sanitization
b4e3708 : queueLock no longer required
7183fe9 : minor simplification
9443ed9 : working implementation of completionPorts
0437458 : fixed tsan warnings
c379947 : fix pedantic warning
2e867a5 : fix comments for C90 strict compatibility
7b4e804 : fix minor cast warnings related to -Wc++-compat
e9b8907 : optimize asyncio parameters
091ec26 : implemented asyncio for lz4f decompression
06ce31e : add status update when decompressing legacy frames
03b1a99 : Bump github/codeql-action from 3.25.1 to 3.25.11
0501c0f : Bump actions/checkout from 4.1.6 to 4.1.7
2a7cb52 : Update CMake tests to verify that the unified target always exists.
e554fde : removed FreeBSD 13.2 test from Cirrus CI
e8a09f0 : Fix lib32gcc versions for clang and clang-{8,9,10}
5d4a42f : Fix for gh-actions breaking change
5b2af93 : Bump actions/checkout from 4.1.5 to 4.1.6
6aedc23 : [cmake] Always create lz4 target.
a5a46f1 : Bump ossf/scorecard-action from 2.3.1 to 2.3.3
15e2f10 : Bump actions/checkout from 4.1.4 to 4.1.5
bdc8e14 : [cmake]: just a minor refactor of the symlink installation paragraph
db10b08 : len should be unsigned
642f6b8 : Prefer OR over ADD for splicing numbers from byte-addressed memory
7f042fc : Define mlen = MINMATCH at the start of the loop
9838a0d : Update function comment
ae585f7 : Bump actions/upload-artifact from 4.3.2 to 4.3.3
6428f91 : Bump actions/checkout from 4.1.3 to 4.1.4
ba744bd : CMake: Separate symlinks creation and installation
fb301fa : Bump github/codeql-action from 3.24.9 to 3.25.1
7789c91 : Bump actions/upload-artifact from 4.3.1 to 4.3.2
667c5b1 : Bump actions/checkout from 4.1.2 to 4.1.3
1a5b83b : benchmark results are displayed to stdout
3cd6d01 : Bump actions/setup-python from 5.0.0 to 5.1.0
5b44fd1 : Fix typo `libzstd` -> `liblz4`
993e77f : Fix: 25 typos
58c3e9d : Suppress VS2022 warnings
e1ccdfc : Bump github/codeql-action from 3.24.7 to 3.24.9
643a51c : minor: assert successful context creation
fab3068 : add test for bug #1374
e50a894 : fix #1374
fc0943e : Bump actions/checkout from 4.1.1 to 4.1.2
d8f0e86 : Bump github/codeql-action from 3.24.6 to 3.24.7
6b88d0a : fix to please C++ compiler
3233600 : minor: keep old single-thread code
7a1b705 : fix variable LZ4IO_MULTITHREAD
43ae8c8 : first implementation of async io for decoder
e0410ac : reorganize mt code
56e80ab : Fix typo
a1b741e : Bump github/codeql-action from 3.24.3 to 3.24.6
e5207e0 : Add unified CMake target if building only a shared or a static library.
be8a4f6 : Added preprocessor checks for Clang on Windows
ed37236 : minor: fix missing include
1de550f : fixed minor conversion warnings
718fe2a : updated lorem ipsum generator
1fa7266 : Bump github/codeql-action from 3.24.0 to 3.24.3
6423a73 : Bump actions/upload-artifact from 4.3.0 to 4.3.1
d676576 : fix incorrect assert
80b7db0 : fix overly cautious static analyzer warning
9712d31 : fix inaccurate address overflow condition
5ccd334 : fix out-of-limit match in level 2
0abb17f : fix back limit bug
cc0f287 : fix minor sign comparison warning
9e8649e : removed assert always true
ad204dc : fix "source has 2 buffers" for level 2
bde614d : fix dictionary support for level 2
65ee88f : fix _destSize() variant for level 2
5a2516c : made hash8Ptr compatible with big-endian systems
3dc0caf : minor refactor, for clarity
7d304b5 : fill table more
6774551 : skip over incompressible data
8002658 : long matches look for 7 bytes
15ed811 : fix the hash8 formula
10921d8 : another minor compression improvement for level 2
3116d85 : slightly improved compression ratio of level 2
ee86631 : first implementation of level2
75c9df7 : Bump github/codeql-action from 3.23.2 to 3.24.0
372ce8e : switch sparc test to ubuntu-20
1a39244 : add sparc compilation test
95cb4da : minor: lower literal range
e0cfe3e : C90 comment style
f8a0a37 : improve LZ4F dictionary compression in fast mode
10ac725 : fix init for dictionary compression in HC streaming mode
b108deb : Bump github/codeql-action from 3.23.0 to 3.23.2
72c1d52 : Bump actions/upload-artifact from 4.2.0 to 4.3.0
ef7baa6 : fixed visual studio solution
93a905c : removed COMPRESSIBILITY_DEFAULT
72be517 : fixed meson build
0bac9ab : fixed visual studio projects
47ffbfd : finish generation with a newline character
6536283 : datagen uses lorem ipsum generator by default
5f9a5c6 : fix meson recipe
69d708d : fixed minor unused variable
1511ec6 : made lorem slightly less compressible
e6791b2 : fix a very picky visual studio warning
0a1499f : fixed meson formula
dac61f7 : fixed Visual Studio solutions
d3b5fe9 : minor optimization
5bc3919 : fix minor static analyzer warning
73dd539 : minor optimizations
823d37f : bench.c no longer needs datagen
f9eb266 : added a lorem ipsum generator
87ad5e8 : Fix Python 3.6 string interpolation
34c22c9 : Bump actions/upload-artifact from 4.1.0 to 4.2.0
540b539 : change INSTALL_DIR into MAKE_DIR
a393433 : Bump actions/upload-artifact from 4.0.0 to 4.1.0
f484041 : Bump github/codeql-action from 3.22.12 to 3.23.0
1230f1c : regrouped all compile-time variables into programs/lz4conf.h
d61b4f1 : LZ4IO_MULTITHREADING can be forcefully disabled at compile time
0d57b82 : Build variable LZ4_NBTHREADS_DEFAULT
15c3072 : update Github Actions tests
330b9be : update cirrus CI FreeBSD tests
112c788 : added tsan tests
f4d9a94 : added msan tests to Github Actions
196e7db : minor optimization: only create thread pools if needed
1e4dce4 : minor optimization : share thread pools
38f605f : adjust tests for more parallelism
312693b : fixed potential leak scenario detected by @t-mat
bde2e95 : fixed initialization issue
f3860a1 : sequential test, for easier debugging
93a1bc6 : fix leak when input is an exact multiple of job size
1286666 : fix potential leak after failure in legacy format
6a3932d : make file open errors recoverable in legacy mode
8e114d9 : fix MT compatibility with dictionaries
e9e9beb : manually select single-thread path with -T1
1f43e4d : updated manuals
0dfbe96 : report nb of threads in verbose mode
132c214 : removed left-over trace
88faa80 : second attempt at fixing mingw32
34e9895 : fixed MT incompatibility with --content-size
b45de8b : attempt to fix mingw32 compilation issues
2b23260 : removed cancelled declaration
3bdaad3 : silence some clang x32 test
d7c82ff : fixed a few Visual Studio Static Analyzer warnings
e1350f8 : make visual compilation test on appveyor less permissive
3227a55 : added a simple test for MT CLI commands
ef6442b : fix minor static analyzer warnings
f376a90 : fix VS2022 solution
5fb1bd2 : fix VS2010 solution
a2d5ce4 : fix standard_variables test
4333be5 : fix minor static analysis initialization warning
c6762ec : fixed meson build
db7495d : fix another C90 pedantic warning
ee38a87 : fixed several minor pedantic conversion warnings
94496a9 : fix pedantic C90 compatibility warnings
bd47756 : lz4io: verbose compress/decompress operations display a summary
a49ca22 : fix cmake recipe
ca2e106 : fix minor conversion warnings
39845fa : fix minor unused assignment
d169286 : fixed minor conversion warnings
4984c7f : updated Welcome message to specify if binary support multithreading or not
cbe7211 : make: fix lz4 mt compilation on linux/posix
d3ae8e0 : fix numCores detection on Linux
f4dda82 : fix minor comparator warning
da5e8b7 : lz4frame: new API: LZ4F_compressBegin_usingDict()
fd5f76d : minor traces upgrades
fc6029f : preparation to support linked blocks
e58a06d : fixed smaller block sizes
95c2bc7 : fix legacy format compression
d4c21a4 : fixed stdin support
815f355 : fix frame checksum in MT mode
3acabda : multithreading works with lz4f format, but
610baf1 : more generic read and write jobs
553962b : better time measurement and reporting
127fa57 : control over nbThreads
8a1c8aa : first implementation, for compression of legacy format
b239d6a : added a simple (untested) threadpool implementation
88e477d : fix 1308
af08a06 : clarify man page on lz4 CLI being single threaded.
3f1eb79 : Appveyor: Visual: faster compilation for compilation-only tests
efd7029 : create local_LZ4_decompress_safe() to circumvent dllimport warning
467db78 : make arrays static
f239a17 : fix read when requesting out-of-range codec
ded75e7 : fixed minor conversion warning
4c5858e : make appveyor ci tests faster
a03e877 : fullbench: -i0 runs a very fast (but still measured) run
f5d14ab : can list algorithms to benchmark
df3b602 : array for benched compressors
fde3157 : fullbench: record benched decompressors into a C90 array
2109153 : reduce appveyor CI test duration
16f33b1 : Bump github/codeql-action from 3.22.11 to 3.22.12
a37a62f : update ossfuzz test time
a0abf19 : fixed -j for versionsTest
f91702f : use make -j more often for CI
5e663b9 : refactor C++ tests
5c9e349 : changed test name cpp->cxx
e2e8d9b : rename USERCFLAGS for consistency
dd7e999 : rename variable LIBDIR for consistency
fc25339 : rename variable to LIBDIR for consistency
631cd0f : minor adjustments for examples/Makefile
7b7cecd : fix attempt for ppc64 test on circleci
cecb7f0 : versionsTest: pass variables via MOREFLAGS
669e255 : add traces to versionsTest
736bcf7 : minor CI adjustments
098a987 : fix circleci tests
6bdfc56 : fix minor conversion warnings on macos
48db0b2 : abiTests fixes
a467d7e : abiTest : add more traces and messages
63e73c1 : adding traces to abiTest to better observe potential issues
bbe41f4 : attempt to fix abi Test
6b5f566 : adjust speed python test
1ec11b1 : adjust appveyor windows test
e760561 : adjust test-lz4-speed for absence of MOREFLAGS
6beb1a6 : adjust appveyor tests for absence of MOREFLAGS
968134d : adjust Github Actions tests for absence of MOREFLAGS
da955d7 : streamlines tests/Makefile clean logic
c952eaf : centralized clean logic for lib/Makefile
fe18314 : more thorough target classification for programs/Makefile
854d13a : fix uninitialized memory
6b9fc5f : minor: C90 comment style
863cf7c : streamline free logic
00ad605 : lz4file API returns more accurate error codes
06b2287 : very minor conversion warning fix
4e8788e : ensure make install target doesn't create files
dc474bd : fix minor conversion warnings
38b7377 : refactor appveyor.yml
167623d : Bump actions/upload-artifact from 3.1.3 to 4.0.0
675be98 : Bump github/codeql-action from 2.22.9 to 3.22.11
cd7ce04 : missing clean target
e2cb69c : SED can be defined on command line and with environment variables
e52a826 : link final binary rather than copy
a86d5c8 : avoid accidental redefinition of LZ4_STATIC_LINKING_ONLY
acf06f1 : Bump actions/setup-python from 4.7.1 to 5.0.0
b6eef8d : Bump github/codeql-action from 2.22.8 to 2.22.9
a2e4da3 : updated code documentation
629ba80 : decomp: refine read_variable_length codegen layout
d09975f : Bump github/codeql-action from 2.22.5 to 2.22.8
ee25fc2 : Add LZ4_compress_fast_extState_destSize() API
aa9c54e : Bump ossf/scorecard-action from 2.3.0 to 2.3.1
d9ed2c0 : Bump github/codeql-action from 2.22.4 to 2.22.5
84a1e9c : lz4: remove unnecessary check of ip
9c7c87d : Bump actions/checkout from 4.1.0 to 4.1.1
64b7f0b : Bump github/codeql-action from 2.22.3 to 2.22.4
0e4b22d : updated NEWS and raised version number
212da69 : Fix compiler preprocessor
8de247b : added new qemu targets for CI (MIPS, M68K, RISC-V)
39734d2 : Enable basic support on riscv64
9fa9bf2 : Add null pointer check before `FREEMEM()`
3743fcf : Bump github/codeql-action from 2.22.0 to 2.22.3
f99c438 : Use `-Wpedantic` instead of `-pedantic`
7c50ac9 : fix: fix PR #1286
e270f2e : Make Makefile version number parsing more robust
b389bbf : Make Meson version number parsing more robust
47daa7e : Make CMake version number parsing more robust
86e43fd : Ignore Visual Studio Code files in `.gitignore`
0f43297 : Introduce `.clang-format` rule file
2acec86 : add: cmake static lib test
5ba4803 : chore: suppress warning C6385 for MSVC 17.7
952c503 : Bump ossf/scorecard-action from 2.2.0 to 2.3.0
0ea3257 : Bump actions/setup-python from 4.7.0 to 4.7.1
496eb37 : Bump github/codeql-action from 2.21.9 to 2.22.0
e92efba : fix: issue #1269
42c95e9 : Bump actions/checkout from 4.0.0 to 4.1.0
0b7dc8b : Bump github/codeql-action from 2.21.7 to 2.21.9
2160059 : Add Scorecard Action
fa83473 : fixed meson build
0b8dc46 : Bump actions/checkout from 4.0.0 to 4.1.0
8586d6a : fix example for low resolution timers
6ea2ed7 : removed cxxtest
65d3772 : fix minor c++ compat warning
67532ae : fix examples
ac8d683 : removed x32 tests from inter-version-abi-test
b21ba41 : fix x32 CI tests
05d77ab : minor comment update for x32 ABI
e79d3c3 : update frame fuzzer to be able to generate bug1227
f10aaa0 : created new test case bug1227
2acbb0f : minor unitTests refactor
ea7a371 : minor frametest refactoring
9a03570 : update comment on @.stableDst parameter
eaae715 : frametest: added RAND_BITS() macro
6c95e59 : Bump actions/upload-artifact from 3.1.2 to 3.1.3
e23d0c9 : Bump actions/checkout from 3.6.0 to 4.0.0
8a0d78f : minor: fixed incorrectly positioned comma (,)
4bd6e02 : lz4hc: increase count back search step
b9343a6 : Change test to decompress-partial that has no dependencies
b2abefe : Move GNUInstallDirs include before it is referenced first
2afd925 : Add test cmake project and CI integration
cf17807 : Bump actions/checkout from 3.5.3 to 3.6.0
38cc73c : Move GNUInstallDirs include before it's referenced first
4387bee : Improve the README
eef01f7 : Hide the functionality behind a feature flag and document it
9e9664b : bump cmake minimum to 3.5
f313ed9 : Added namespace declaration for xxhash in CMake
cfee8d5 : Apply pyupgrade suggestion to Python test scripts
1bb50a1 : Hash-pin GitHub Actions
b6d94d2 : Make hashes identical between LE and BE platforms
06a27a6 : fix: missing LZ4F_freeDecompressionContext
ccef95b : fix: issue #1248
bdecb2e : fix #1246
13c508e : Discard trailing spaces
b7ec6e3 : Macros should not use a trailing semicolon
bdfe4c0 : Enclose by a do-while loop to avoid possible if/else logic defects
20241ec : ci: fix batch files for msvc 2022
6d685dc : ci: add clang-15 and clang++-15
0d64541 : add: gcc-13 and g++-13
864c9b6 : Set WINDRES only if it isn't already set. This resolves the problem with clobbering WINDRES more generally.
34d9127 : updated README with repology status
c8b856e : Adjust output location of liblz4.dll and liblz4.dll.a.
72618e5 : Don't conflate the shared library name with the shared library filename. The target for building the library now uses the real filename, to ensure that the target matches the file that the recipe writes.
2d7009e : Ignore generated .rc files.
4ac63d8 : WINBASED uses yes/no as values, not 0/1. Check for 'yes'.
cb3a5ff : Don't clobber default WINDRES in MinGW environments.
7075bc2 : fixed minor typo
d3d7ad9 : Add security policy
56ee317 : Update README.md
70ad629 : Update ci.yml
a26a46c : add: CI build test for VS2022
532c923 : refactor: Build script for VS2022
0784fef : Create build.bat
eeb33d1 : fix: workaround for false positive analysis from MSVC v17.6 (part 2)
099892c : Add missing build/VS2022/lz4/
9c0b42e : Update .gitignore
d678159 : fix: workaround for false positive analysis from MSVC v17.6
0d109b2 : fixed github workflow
c075220 : fixed circle CI script
8c06506 : removed travis script and reference
990f8ab : Set cmake policy max to something recent
42aef71 : Reduce usage of variable cpy on decompression
8ef36c0 : Fix build break on Visual Studio 2010
ae72509 : remove whitespace
b87247f : Add packing support for MSC
0973fe5 : Remove redundant error check
5953ac1 : lib/Makefile: Support building on legacy OS X
3060839 : fix: GH-Actions - removed ubuntu-18.04
a515219 : merge into a single UTIL_isDirectory() method
45a4880 : refuse to compress directories
1a3f35c : Add 64-bit detection for LoongArch
c3addfe : improve LZ4F_decompress() documentation
7ab223b : build: move meson files from contrib, to go alongside other build systems
ab8328b : Clean up generation of internal static library
b1fd838 : Fix typo found by codespell
fe389ca : version note
2fc9a85 : Install lz4file.h only when default_library isn't shared
3301f31 : Only build the freestanding test on Linux x86_64
3946e3d : Add Meson override for the library
5b83db4 : Change the version of lib[x]gcc for clang-(11|12) -mx32
95d703a : Remove PATH=$(PATH) prefix from all shell script invocation
e4ea198 : Fixed const-ness of src data pointer in lz4file and install lz4file.h
2a782cc : Add copying lz4file.h to make install
812c4b1 : Declare read_long_length_no_check() static
08f1483 : Add environment check for freestanding test
198e532 : Update Meson build to 1.9.4
7213a32 : uncompressed-blocks: Allow uncompressed blocks for all modes
4dafb85 : fixed usan32 tests
e3974e5 : minor refactor of lz4.c
68848ec : fix another ubsan warning in lz4hc
a3d1762 : use LZ4HC_match_t structure directly to store match candidates
2fefb1d : removed virtual pointer from optimal parser
0a2e406 : removed virtual match pointer from HC parser
a0adc61 : sequence encoder accepts offset as a value
952942d : LZ4 HC matchfinder returns an offset value
586e9a4 : added code documentation on heap mode
1ae9a50 : Update snapcraft.yaml to reflect build of v1.9.4
b214902 : added notes about LZ4_compressFrame() and stack/heap memory usage
dc94419 : fix rare ub
7f54a56 : fixed minor UB warning
2c8fd11 : removed a few more usages of base ptr
2620c09 : remove another usage of base
251e04a : added test able to catch bug #1167
ec0d3e6 : fix benchmark more using Dictionary
3c1d581 : add a test to catch issue #1164
fdfbe3a : update v1.9.4 NEWS
5799b2d : document Makefile variables
5ccbd38 : build: Support BUILD_SHARED=no
72b9348 : Clarify documentation for LZ4F_HEAPMODE
32bfb20 : clarify Data Block in the Frame format documentation
f6c1848 : simplify getPosition
d91d16b : updated documentation : no more issue with 32-bit compilation on recent compilers
9bed6b5 : updated Github Actions tests documentation
fbd2f9f : fixed a few ubsan warnings in lz4hc
8b7c57a : attempt to enable ubsan tests in CI
2822825 : added LZ4F_compressUpdate() in fullbench
cedf9cd : allocation optimization for lz4frame compression
36de5a5 : Bump actions/upload-artifact from 1 to 3
cd64053 : Bump actions/checkout from 2 to 3
19532ca : Bump actions/setup-python from 2 to 4
78b3078 : Add dependabot
c871a28 : Cancel in-progress CI if a new commit workflow supplants it

+- Project: platform/external/lzma

62c50ee : Upgrade lzma to 24.09

+- Project: platform/external/marisa-trie

cb5387b : Make bpfmt happy.
0df2115 : Add OWNERS file

+- Project: platform/external/mdnsresponder

596a504 : Pin to C17.
ebf0c4b : mdnsd: fix compilation error if MDNS_DEBUGMSGS is on

+- Project: platform/external/mesa3d

1eb4f6523c6 : gfxstream-guest: update offset to correct value
5f6158adb92 : Use try_unbox in VkDescriptorBufferInfo
bb27914b526 : The BumpPool of VkStream is not freeAll'ed
2e02f6817eb : Wrap queue related functions on codegen
a52bad65b48 : UPSTREAM: meson: Remove experimental from gfxstream driver build
c23df8e6585 : UPSTREAM: gfxstream: Avoid repeated functionality
5c374aa1a3c : meson_to_hermetic: Fix parser not accounting for files with '-'
3e54f76350f : Change C style cast on extension structs
85de971256d : meson_to_hermetic: Add global base project config / inheritance
5c2f2bffa3d : ANDROID: Rerun codegen
8143ffa1f72 : UPSTREAM: gfxstream: change output location
2ef2455984b : UPSTREAM: gfxstream: for Android, look for the autogenerated files
30792309579 : UPSTREAM: gfxstream: delete qemu_pipe target
646ed2c09ce : UPSTREAM: gfxstream: conditionals for using gfxstream::aemu
e41275d1cc6 : UPSTREAM: gfxstream: use canonical Mesa dependencies
d046307c381 : UPSTREAM: gfxstream: guest: use internal version of AEMU headers + impls
0af5ac322ef : UPSTREAM: gfxstream: modify libaemu for Mesa use case
3784c1caf5a : UPSTREAM: gfxstream: aemu: vendor it
b01f01f4539 : UPSTREAM: gfxstream: nuke EntityManager.h include
00cd89f7007 : UPSTREAM: gfxstream: use vulkan_lite_runtime
119f6a678fe : UPSTREAM: gfxstream: nuke android::base::SubAllocator
a86af1b160d : UPSTREAM: gfxstream: move isHostVisible function
92b3cec1454 : UPSTREAM: util: add c++ guards to u_mm.h
05e0c72332c : Update auto-generated comments.
ff21d29fdbb : gfxstream snapshot: DescriptorSet allocate and update
01f501817a9 : gfxstream snapshot: avoid double boxing dispatchable handle
9391fab7e63 : meson_to_hermetic: Add bazel code generation
6690e88449c : UPSTREAM: gfxstream: update Kumquat API
7130382b005 : global_state_wrapped_decoding of vkCreateComputePipelines
47572197017 : meson_to_hermetic: Add jinja code generation for Android.bp
b4b9d49ef34 : Allow VK_KHR_line_rasterization
14522af57b3 : Keep VK_EXT_line_rasterization for codegen
2e9c7f9dea3 : VulkanBatchedDescriptorSetUpdate toggled on caps on Guest
bc1b5ca26fb : ANDROID: add OWNERs for src/gfxstream
bf5fbedc670 : UPSTREAM: gfxstream: move generate-gfxstream-vulkan.sh script
fe8ea14dc31 : UPSTREAM: gfxstream: Handle tmp folder explicitly on codegen
89c67dfa509 : UPSTREAM: gfxstream: Add VkPrivateDataSlot handle type
888f3241563 : UPSTREAM: Update decoder.py to use try_unbox on destroy calls
ca2acbf2826 : Check metal extension for external memory
065c7c67429 : ANDROID: fix android deps
2955af2d047 : meson-to-hermetic: add pyyaml to python libs
b4e91ac5e99 : UPSTREAM: util: add sync_fence_info
ba738243233 : UPSTREAM: gfxstream: use util/libsync
93a2ab03d00 : UPSTREAM: gfxstream: nuke util function
0cdc145681d : UPSTREAM: gfxstream: add clang-format
711c536e488 : UPSTREAM: gfxstream: use sync_fence_info
3921a02ed53 : Use KHR version of the line_rasterization extension
c0225193932 : meson_to_hermetic: Capture bazel project state
a5d0cf74d81 : meson_to_hermetic: Add project state tracking (Android)
93e2218cef6 : Remove obsolete `neon:` clause.
81ec5fcd88d : meson_to_hermetic: add missing python dependency
7550d9e8ae2 : UPSTREAM: gfxstream: use gralloc metadata in vkGetAHBPropertiesANDROID

+- Project: platform/external/mime-support

1350784 : Revert "Update Android's bundled mime.types mapping."

+- Project: platform/external/minigbm

aac2a17 : Revert "Revert "UPSTREAM: cros_gralloc: Avoid using masks in han..."
8956e04 : Revert^2 "Merge remote-tracking branch 'aosp/upstream-main'"
8a3215b : Revert "Merge remote-tracking branch 'aosp/upstream-main'"
3068d2b : Revert "UPSTREAM: cros_gralloc: Avoid using masks in handle_usage()"
3b405c8 : UPSTREAM: cros_gralloc: Avoid using masks in handle_usage()
ccda090 : i915: allow DRM_FORMAT_YVU420_ANDROID for camera use
ff0509d : mediatek: allow DRM_FORMAT_YVU420_ANDROID for camera use
6d687b7 : Revert "gralloc: Error when locking buffer alloc'd without CPU_ usage"
d56aa13 : cros_gralloc: Fix gralloc -> gbm usage flag mapping
eb0e5fa : gralloc: Error when locking buffer alloc'd without CPU_ usage
94e1bdc : OWNERS: add ryanneph@google.com
453a4c0 : i915: Remove media compression support
2381df8 : virtgpu_cross_domain: fix planar size calculation
05865d7 : virtgpu_cross_domain: force LINEAR for YVU420_ANDROID
e2fdd90 : dri: pass use_flags to dri_bo_create_with_modifiers
ac54d29 : dri: make dri_driver an opaque object
2743358 : dri: make dri_driver a proper object
a160cb4 : minigbm: Update common.mk to unbreak gcc and bfd builds
a2a049d : Initialize `emulated_metadata` fields to avoid uninitialized warnings
4cca6f6 : Error when attempting to lock buffer alloc'd without CPU_ usage

+- Project: platform/external/mksh

8032c7c : Define sh_recovery

+- Project: platform/external/mobile-data-download

79fbb03 : Format Android.bp file
cb34945 : Add package visibility for using by MagicPortrait app
8aa9fd2 : Update OWNERS file.

+- Project: platform/external/mobly-bundled-snippets

1b9d6cf : Add "android-support-multidex" dependency to mobly-bundled-snippets-lib
9592e4b : Extend UI wait time for BT action to 2 seconds
d255e75 : Remove unused android.content.pm.PackageManager (#210)
7dd1c69 : Update method name for checking if an account exists on the device
c60aba1 : Rename RPC methods with wifi prefix. (#209)
fa39fdc : Update RPC name to match with MBS style and throw an error when account does not exist
530ffbf : Update AndroidManifest.xml
456bbde : Add `isTdlsSupported` in `WifiManagerSnippet` (#202)
01e2235 : Create WifiAwareManagerSnippet.java
8ae5228 : Update snippet description
242734f : Support removing contact under specific account
f3f30a8 : Add new snippet for removing a contact with the given email address
1f69100 : Remove on-device contact support and make snippet name more straightforward to its use case
62aafdc : Rename sync function to a proper name and make it as a separated RPC
386e87f : Add Javadoc for starting an asynchronous sync operation
f810d3f : Rename new snippet file to ContactSnippet.java
4d090cf : Add new snippet for adding a contact with the given email address
6608837 : Update Utils to remove targetSdk check, the Build SDK should be enough
32807ab : Fix MBS Utils to work while being instrumented
ac70166 : Fix the error that class "kotlin.jvm.internal.Lambda" not found. (#198)
7eb2d18 : Add the network callback for wifi connection check (#195)
d4d23e2 : Add the permission to post notification. (#196)

+- Project: platform/external/musl

ef333d36 : Add unreachable() to musl's <stddef.h>.

+- Project: platform/external/nanopb-c

cb7a38e7 : Add dirgroup for trusty genrule

+- Project: platform/external/nos/host/generic

d1e48f5 : Export protobuf app options files

+- Project: platform/external/noto-fonts

f9d1e87 : Update NotoSansKhmer font to 2.004
f03797b : Remove released flags
f1567f3 : Use the latest emoji for SDK for signing.
77626fa : [2nd attempt] Add Unicode 16 emojis
c0818f0 : Revert "Add Unicode 16 emojis"
8fccc74 : Remove CJK VF flag
e9afa66 : Add Unicode 16 emojis

+- Project: platform/external/nullaway

2212f6d : Export javac classes

+- Project: platform/external/oj-libjdwp

f40a8b35f : Replace `requires` with `runtime_lib` to fix dependency tracking.

+- Project: platform/external/okhttp

8517428 : Migrating test option from TEST_MAPPING -> Android.bp
cdecc10 : Grant okhttp-norepackage for packages under /vendor.
9c25fcd : make grpc available for com.android.virt

+- Project: platform/external/okio

b8a6fdc : make grpc available for com.android.virt

+- Project: platform/external/open-dice

d948a94 : ANDROID: Remove libopen_dice_standalone_headers
c388cc3 : Support multi-algorithm in the baremetal open-dice library
6c169c1 : Move DiceClearMemory() declaration to own header
f1e6fb4 : clear_memory.c: Include missing <stdint.h>
b020503 : Update CoseSign header algorithm to use authority key principal
db3c928 : ANDROID: Add libopen_dice_clear_memory
5021907 : roll: third_party/pigweed/src 39dbc59..aeecb42 (69 commits)
c2f2b64 : Check invalid context in multialg open-dice
0103afd : roll: third_party/pigweed/src 5421a43..39dbc59 (70 commits)
2a8963f : Rename DICE_PRIVATE_KEY_SIZE to DICE_PRIVATE_KEY_BUFFER_SIZE
d2bef2d : [ANDROID] Drop removed files in Android.bp and makefile
6d511e9 : Merge upstream changes to support multi-alg DICE derivation
6535993 : roll: third_party/pigweed/src 6d68ac5..5421a43 (70 commits)
15fd3e5 : Support switching algorithms in DICE derivation
b2f0f3f : Add boringssl multi-algorithm config with CBOR certificates
d2a06ff : Configure with functions instead of constants
46fc4ac : roll: third_party/pigweed/src 5eec847..6d68ac5 (67 commits)
31a5bd1 : Remove unnecessary include in boringssl_ed25519_ops
ccead2c : [refactoring] Regroup DICE types in types.h
afdf723 : Rename DICE_SIGNATURE_SIZE to DICE_SIGNATURE_BUFFER_SIZE
3b6ba02 : Rename DICE_PUBLIC_KEY_SIZE to DICE_PUBLIC_KEY_BUFFER_SIZE
fe59b1e : OWNERS: Don't suggest Pigweed team members
8410d96 : roll: third_party/pigweed/src c221c63..5eec847 (45 commits)
d7d234a : roll: third_party/pigweed/src 223c6d9..c221c63 (50 commits)
329da0d : Fix broken link to RKP VM docs
1058dec : Add dirgroup for trusty genrule
2d6dd74 : ANDROID: libopen_dice*_baremetal: Use cc_baremetal_defaults
72dc983 : roll: third_party/pigweed/src 6ad0bec..223c6d9 (47 commits)
4da1a4c : roll: third_party/pigweed/src a5a1995..6ad0bec (72 commits)
4af0512 : roll: third_party/pigweed/src f2d25b9..a5a1995 (100 commits)
cf3f4cc : roll: third_party/pigweed/src 53 commits

+- Project: platform/external/openthread

599cf082d : [border-agent] invoke ephemeral key callback on session connection (#10699)
314d2b41d : [log] add Android log system support
bb2f3bd16 : [trel] bind the socket to TREL interface (#10957)
fc1a4a94e : update the CSL request ahead time to 4000us
45cb48bcf : [trel] Add cli to get TREL UDP port.
3143763c9 : Enable TREL
b44679068 : [trel] fix crash
9f9d53aa4 : [trel] add otSys API to init/deinit TREL
41fb7e555 : Enable OPENTHREAD_POSIX_CONFIG_UPSTREAM_DNS_BIND_TO_INFRA_NETIF
f13454307 : [nat64] add API `otNat64ClearIp4Cidr` (#10848)
7d918c346 : [posix] bind the resolver's UDP socket to the infra network interface (#10864)
18b5b5529 : Enable OPENTHREAD_CONFIG_DNS_UPSTREAM_QUERY_ENABLE
1311c462e : [border-agent] update ephemeral key connection timeout handling (#10609)
c0ddca37d : [border-agent] add api to disconnect from secure sessions (#10754)

+- Project: platform/external/openwrt-prebuilts

169dce7 : Fix the indent of wireless configuration file.
c6ffb5a : Modify VirtWifi2 to be a password protected AP
e806623 : Modify VirtWifi2 to be a password protected AP

+- Project: platform/external/ot-br-posix

a48832e7 : [Thread] fix typo in FakeOtDaemonTest
5e750ae5 : [Thread] beautify & simplify code
21885a7d : [Thread] refactor OtDaemonServer::Initialize argument order
f954c8bb : Make RunOtCtlCommand and Dump as AndroidThreadHost methods
1ed85bb3 : [log] add Android log system support (#2614)
e4d41dd9 : [Thread] explicitly enables/disables border routing in setConfiguration
7242cc66 : Handle mdns resolved address that contains interface name
b294c655 : Add TREL to telemetry data
06df71a4 : Support enable/disable TREL.
2b665643 : Use shared_ptr for service and host subscriptions
f721b3f5 : Replace ot get methods with ThreadHost NetworkProps
449dd9f3 : [thread-host] add common network properties (#2598)
5d645d17 : Add AndroidThreadHost methods
495714a8 : [host] add active dataset as a network property (#2540)
5f87711c : [thread-host] make AddThreadStateChangedCallback as a ThreadHost API (#2587)
c0ca77b6 : [doxygen] remove empty line at end of block (#2506)
684e324e : Add AospThreadHost with SetConfiguration method
7d6b1436 : [Thread] avoid disruption when joining the same network twice
7cc5a2d1 : [trel] updates for switching trel interface
73811b28 : Move utils in otdaemon into a separate header
6b3ffff5 : [thread-host] implement schedule migration for rcp host (#2559)
4e2e698b : Add Border Agent info to telemetry data
88f942e7 : Clear NAT64 CIDR when null CIDR is passed to ot-daemon
bb0d23a1 : [tests] add unit test for RcpHost SetCountryCode (#2564)
29bec1d1 : Add upstream DNS counters and query state to telemetry data
238a7f57 : [test] add unit test framework for ThreadHost APIs (RCP) (#2549)
320d9ba7 : Replace the implementation of setChannelMaxPower
a74e0bab : [android] allow disabling border router
ad636e79 : [thread-host] add set channel max power api (#2551)
9f371b2c : Sync threadnetwork_atoms.proto from frameworks/proto_logging/stats/atoms/threadnetwork/threadnetwork_atoms.proto
7c623e87 : Replace the implementation of getChannelMask
cbccf20c : Support configuring the upstream DNS server
8633b706 : [thread-host] add get channel mask (#2553)
813ec46b : Replace the implementation of setCountryCode
b7d82de1 : Support specifying the NAT64 CIDR at ot-daemon
5f689552 : [thread-host] add set country code api (#2550)
88f9ab16 : [host] add SetThreadEnabled to ThreadHost APIs (#2542)
87c50d1b : Enable ephemeral key feature.
8a549a86 : Call setConfiguration at the beginning of initializeInternal instead of at the end
0b194d23 : Change OtDaemonState ephemeralKeyExpiryMillis to ephemeralKeyLifetimeMillis
bd8cd773 : Support setConfiguration at ot-daemon
d7ccc6cf : deactivateEphemeralKey disconnects any active session
ce9e33f7 : Support ephemeral key feature.
4dbacd51 : Make ot-daemon-aidl visible to //packages/modules/Connectivity/thread

+- Project: platform/external/pandora/bt-test-interfaces

1bfd5a9 : a2dp: Add codec reconfiguration methods (#28)
27d5293 : version: bump version to 0.0.6 (#22)
84fcba2 : Update grpc dependencies (#21)

+- Project: platform/external/pdfium

a23be8713 : Fix native crash in Files with pdf-viewer integration

+- Project: platform/external/perfetto

89ad131621 : [stdlib]: Add a combined reason to the android binder breakdown
56f8a422d4 : ui: Group all old openTable into Chrometto plugin
fbc6d1645d : ui: Use dataset for the thread state aggregation tab
f93db9ce07 : ui: Convert track event to area selection when marking a span with 'M'
7f88ac8672 : ui: Allow typed omnibox prompt choices
74ba5f7342 : Add utid to atrace counter events
637b03f953 : tp: fix parsing battery stats history strings that contain commas
3afd5fd2b5 : Add AndroidProcessMetadata to gc metric.
9d63f91225 : ui: allow pinning tracks with children if they are not summaries
fb73928c18 : tp: don't mark cpu_profile_stack_sample as sorted
621f02f5b8 : stdlib: Add a pixel.camera module
c398a7b56c : tp: parse out pid first when parsing ProcessStats
e8c4fac37e : tp: parse standalone batterystats checkin files
d252cf6031 : stdlib: Add binder interface to the android_binder_txns table
45d960e7eb : stdlib: Add android_logs table to prelude
77f48ebbf7 : ui: Open complex types with 'Open table...' command
7348fe8743 : Initial TargetMemory implementation
6557f67a7c : string_view_splitter: Fix include guards and gen builds
0923a69a52 : BUILD: disable ETM importer for bazel
f9344dc10d : Add buildtools/open_csd/ to .gitignore
5dd44f879e : Decoder vtable for ETM data
0159bea4ed : base: add integer conversion utilities for StringView objects
1e9df39585 : ui: Remove hardcoded explore page table.
b50ab38408 : tp: import Battery charging & plug status events from dumpstate
217e8749de : COPYBARA_IMPORT=Project import generated by Copybara.
20850ceaae : ui: Add total energy column for Wattson
49275b0aad : ui: explore page persistent state/charts.
86fa00fea8 : Make libperfetto_c available to vendor/.
cb874d56b7 : [ui] Add a cmd to add counter track by ftrace args
30c7000340 : ETM Importer skeleton
00d1fadae4 : stdlib: Add broadcast record and process_queue_id
7f7bc4f905 : tp: fix^2 binary size regression
948bc7ba61 : stdlib: Enable column type validation
62a5311638 : tp: Parse Battery counters from battery stats
08d1b008dc : base: add StringViewSplitter utility
cdd5af689b : trace_config_utils: avoid useless copies
76eff5a449 : metrics: Add period name to Wattson atrace apps
c30e71d47b : stdlib: Add singleton wakeup power attribution
62f903045c : stdlib: Make idle attribution JOIN more readable
f5baed8adf : ui: minor fixes to plugin docs
f3fd323948 : ui: fix cuj query
7f28b77427 : tp: fix binary size regression
41843aedd1 : weak_runner.h: Move from src/ to include/ext/
c81bc8ab53 : ui: Disable "Open with legacy UI" flag by default
17bb7a2bd4 : Fix name group index in banned drop check script.
eb22bdb6dc : COPYBARA_IMPORT=Project import generated by Copybara.
c74c8cf69e : fix typos in trace config proto comments
6eefe6bfc1 : tp: Populate ScreenState track from dumpstate battery_stats
64286a141e : tp: Parse Battery Stats History items from checkin
256869f7fc : tp: remove unreferenced file
c50d98853c : tp: remove legacy track interning code
b6b1fcf853 : cmd: fix incorrect trigger atoms
72cc5eb04d : tp: migrate existing uses of InternTrack to new track interning
d7af5727ac : tp: migrate chrome, systrace, fuchsia and json to new track interning
a8584ff7cd : ui: Fix bug where dragging anywhere in details panel would resize it
9743f831e8 : stdlib: Disable JOINID validation
5730647f18 : stdlib: Add ARGSETID type
0658eab21b : Revert "tp: remove deprecated function call"
966f5fc99f : tp: migrate remaining counter tracks to new track interning
5961fe52fc : tp: migrate perf counters to new track interning system
cfe5cd9a23 : ui: fix bad query in cuj plugin
76247369d0 : tp: migrate track event parsing to new track interning
034383b5d5 : tp: migrate system level and ftrace probes to new track interning
722000b015 : tp: migrate vulkan and system probe counters to new track interning
a0f0946c18 : tp: migrate iostat, thermal and android counters to new track interning
ad088181f1 : tp: migrate most ftrace counter events to new track interning system
bd25374153 : ui: Put "Open with legacy UI" button behind flag
c05e674cef : tracing_service_impl: Avoid copying weak_ptr into lambdas
ddec78a079 : tracing_service_impl: Use WeakRunner for TracingServiceImpl
b60f5a1306 : tracing_service_impl: Use WeakRunner for ProducerEndpointImpl
d95ad0b086 : base: Add WeakRunner
f273013bbb : COPYBARA_IMPORT=Project import generated by Copybara.
f640d105d4 : COPYBARA_IMPORT=Project import generated by Copybara.
e1fe418051 : shared_lib: Thread safety annotations to PerfettoDsImpl
f20a36fd6d : ui: sort process counter tracks by name
5ac4baad20 : shared_lib: Thread safety annotations to TrackEvent::GlobalState
a6fd5e2dea : move config pb <> txt into src/trace_config_utils
f9774ebf1e : tp: introduce new track interning system
f506cb8d3b : tp: migrate counter tracks to intrinsics and prepare for revamp
010d1378fb : docs: remove outdated information from trace processor page
30b1e9f0a3 : tp: fix the unit for the GPU memory track
6d4d5f29d5 : ui: fix recent wakeup info regression in thread_state and sched tabs
e34abf37f1 : shared_memory_arbiter: Log number of chunks on kStall before crashing
d08ebe3da0 : shared_memory_abi: Rename PageHeader::layout to header_bitmap
ff99ba98bd : ftrace: fix controller doing twice the intended data per ReadTick
ddaf3b31ce : ftrace: bump "have we caught up to the writer" heuristic to half a page (from ~3.5k bytes)
83997c60b0 : tp: Fix calling* on disengaged value
ed3392a39b : ui: export protos as a namespace rather than individual types
5214f1513d : metrics: Add Wattson atrace apps estimate
b6ec6a9f68 : ui: Sort fuzzy matches by shorter strings first
f6aa087fdf : Close broken suspend_enter slice earlier.
d16d30d2a4 : traced: don't wait for frozen producers on start
f86fa52b1a : ui: Make 'mem' group track a summary track
740dd926ed : traced: Do not include service.cc in unittests
2659b32a30 : tracing_service_impl: Remove consumers_
8d65241c0c : ipc: add lalitm@ as owners to ipc for merges/refactorings
f0b6f48110 : ui: Roll canary
bb21d93d89 : ui: Open table command opens sqlTable (without ID and JOINID cols)
f9e5211e58 : stdlib: Add JOINKEY type
4d7d32df2a : profiling: Fix an unwinding fail issue on RISC-V64
8b89c10b55 : ui: Add a plugin showing IO request congestion
bec47c4e45 : Add missing source to perfetto-trace-processor-python-srcs
f1a68efc43 : perfetto_cmd: add a new logging atom when the session is cloned
8dbc2fc1a9 : Update ChromeFrameReporter with Raster Scroll
cb4244f336 : ftrace: kprobes: delay teardown until all data sources are gone
c6e04f444c : tp: fix brittle diff tests
3a7c2560e8 : tp: Migrate counter to __intrinsic_counter
91e41f52f1 : ui: add base/event.ts
1e71038bce : ui: clean up base/result.ts
2fd89a88ae : [perfetto] Fix path parsing in tools/check_sql_modules.py
529dc73037 : ui: Fix typo in registerAreaSelectionAggreagtor
605f41543a : ui: Migrate various pages's scss files into their plugins
ad69f2f3ef : ui: Allow plugins to define scss files
0a9bc87c6c : ui: Create new 'components' root level dir and move files into it
d7221d611f : stdlib: Disable type checks
060948ffa2 : Add 'clone_snapshot_trigger' packet to the trace if it was cloned by the trigger.
1162ae9208 : fix deps for protos/perfetto/trace:test_extensions_@TYPE@
b87dbee9bc : tp: fix bad parent uuid for thread track
975cee09e0 : tp: Add a diff test for block_io_* tracepoints
dbf709c43e : tp: switch from ucpu -> cpu on track tables
9391393a23 : protozero: mark constchars as hashable
1c44cc827b : base: make base::Hash as constexpr as possible
91eca3fff8 : Fix the optimizations extension to gracefully handle multiple app startups.
436d16387a : [chrome] Support flow on chrome trigger
a26171ee5e : tracing: Make incremental state per instance in the SDK
89decae7c3 : Add optimization status to the App Startup plugin.
375ce2bb8b : stdlib: Add Id type
3179af13eb : track: Emit whole hierarchy from TrackRegistry
af72efbf1c : Add 'clone_snapshot_trigger' packet to the trace if it was cloned by the trigger.
d007277e0f : stdlib: Add Duration type
79e084033a : ui: Vastly improve findTrackByUri() perf & flow events rendering
90f3b5a720 : ui: Vastly improve getTracksById() and flatTracks performance
e3eeb29513 : ui: Resolve track_id -> TrackDescriptor on selection change
48bec16c4c : ui: Speed up classNames() by 10x
a4cc493cf1 : ui: Created CollapsiblePanel widget.
a73c5e5218 : ui: Enable Wattson plugin by default
f24d60d57d : stdlib: Add Timestamp type
88be77628d : stdlib: Add type checks
4c623124ef : Add PerfTracker
0d48bf6e49 : ui: Open table command
e5f783890a : Introduce block_io_* tracepoints support
2a90a6ae5e : ui: Load Wattson on track expansion
299525595d : ui: Reorder Wattson aggregation
6b76500fc5 : ui: Add includes for Wattson aggregation
f808e184a2 : ui: Simplify Wattson track logic
534a3708ff : ui: add missing attrs for timestamp widget
7bd7780971 : ui: fix duration formatting bug with ms and us
582680c0b2 : ui: add import/export functionality for pinned tracks
9bcbe78de4 : ui: add support for saving/restoring sets of tracks by name
45277f31f1 : ui: switch pinned tracks save/restore feature to use Zod
646314997d : ui: Fix flow rendering for pinned tracks & improve canvas performance
ac5b610da3 : ui: fix order id not being respected for intermediate tracks
189a8de1d4 : ui: Use plugin deps to add tracks to process groups
c17c04997a : ui: Don't render tracks while the trace is loading
9294501f95 : tp: migrate cpuidle to new tracks system
cce3455341 : wattson: Add DSU support for VM traces
e9807b4e88 : ui: Move various components into public/lib
d6bb5948e6 : ui: Move ChromeScrollJank plugin to /plugins
ad124f2132 : tp: workaround OOB offset returned by SQLite in sqlite3_error_offset
3ff07af694 : ui: Added explore page styling for scroll.
1d40d0e8e7 : ui: Standardize track constructor args
819c771428 : Fix bug in GC sched reporting.
f9029e2d40 : ui: Stdlib support prototype
67bb549e71 : Remove axis sharing for battery stats counters.
b14bb413a2 : Add a new enum value for CompositorGpuThread
5ee55cf2c5 : ui: Use zod to parse feature flags from local storage
50c99220e0 : ui: Remove logic to port chrome scroll jank plugin flag to plugin flag
0e1f159468 : ui: Export dataset from CPU slice tracks
d8a389d920 : Fix incorrect GCA RSS memory calculation
dd5d43dea5 : Report average power from charge difference in milliwatts
e49496c138 : [android_Desktop_mode] Fix bug when reset atoms in trace.
f90dcb05aa : Update atoms descriptor.
0b42bf564a : ui: Add cmds and hotkeys to navigate to prev & next track events
3de471f004 : COPYBARA_IMPORT=Project import generated by Copybara.
81d23433de : ui: Fix missing cpu column in dataset for ftrace tracks
aae882ee36 : stdlib: Add sched.latency module
56a7149ec6 : Add buganizer component ID to DIR_METADATA
4bd51f1976 : ui: Switch to a more "OO" implementation for `Dataset`
1ce65fb331 : docs: format several docs
8d0abd900b : docs: reformat script and styles for docs
9e3d3982a7 : ui: Add Dataset & support for area aggregations on most slice tracks
bbd13ca03c : Document `update-statsd-descriptor`.
9e11af5648 : [chrome] Support chrome trigger hash translation
483d3f5aa7 : COPYBARA_IMPORT=Project import generated by Copybara.
d6ac03ff9b : heapprofd: Fix outdated log message
6aa8ad4340 : ui: make ftrace track 10x faster
cc0d967e1f : ui: add mithril autoredraw behind a flag
1554c9d0f1 : Modify trace redactor target to be included in module apex
1290ac2584 : wattson: Fix estimates when entire CPU track missing
33e2344d71 : ui: remove unnecessary full redraws
1ce711a437 : ui: Remove old PopupMenu and replace with PopupMenu2
cf95cb604a : ui: clean up raf_scheduler.ts and perf.ts
f295feb968 : ui: tp: plumb errors in NotifyEndOfFile into UI
b642011a30 : ui: release canary
fe05b818b2 : tp: fix kernel idle tasks for boot-time traces
017511ffa5 : add open_exec ftrace events
389e81f9b3 : add do_sys_open ftrace events
7ba39f99ba : Make power counter always positive
a82189fc54 : build: switch libpprofbuilder to allowlisted public visibility
252fdc6d63 : ui: Display histogram in explore page.
e8d28d395b : Add critical blocking calls related to compose
2c5927538b : Cleanup and simplify 'PerfettoCmdlineTest'.
40dba54373 : ui: don't disable sidebar when passing hideSidebar=true
71f1dab196 : ui: move all recording code into a plugin
66830b4b09 : Add the time range info. to the slices/threads section info.
f69cb9ae19 : ui: Created "Add chart" menu.
ee6cc386ea : base: rm non-thread-safe `localtime` function
b05094e835 : ui: Fix bug where flows were not being shown for frames tracks
6513a371ab : COPYBARA_IMPORT=Project import generated by Copybara.
80f68c5b63 : docs: remove common queries page
b153c7feda : thermal: perfetto/ftrace: Parse param_set_value_cpm
b4f79a6046 : thermal: perfetto/ftrace: Add cpm_trace/param_set_value_cpm event
a55c9b4269 : tp: add table for breaking down thread_slice by scheduling states
f344bb3dc8 : proto: propagate indirect proto deps correctly
af93d6cd59 : ui: Add a command to show uninterruptible thread count
aa112473cf : tp: fix flamegraph searching without any prefix
a854b1379d : Add android device manufacturer to trace metadata metric
a6028e3eb6 : ui: move GcsUploader to base
f182424dcd : Fix 'PerfettoCmdlineTest#CloneByName' test.
e662cb2e6f : ui: expose featureFlags in the App interface
36fa40c91c : ui: fix staleness of record instructions, remove permalink dep
52e5b48b55 : Add unknown enum types as strings. This means args from enums have consistent types.
cfbe74a6c5 : android jank cuj: fix incorrect cuj boundaries for some cujs
b8d5a5a644 : stdlib: Remove duplicate _has_descendant_slice_with_name fun declaration
bb9a68487e : ui: Update plugin docs
8d4918c9b6 : Reapply "stdlib: Remove common package."
3c499f398f : ui: Render additional thread state information
aca46322ca : Android jank cuj: Improve the quality of cuj related SF frames
84e32613d8 : Add dim bounds proto to perfetto
ea011a2c2d : traced: change guest_soc.model prop string
42ad5898f0 : ui: remove final deps on globals
a3c18dfaf5 : ui: de-controllify recording
ab3bf51c83 : perf: add userspace frame pointer unwinder
28e404c560 : Revert "stdlib: Remove common package."
1aa0626cd8 : Fix scroll jank plugin
5212a65ae2 : bazel: Restrict visibility of libperfetto_c
82876a9a35 : ui: move perfDebug to app
73b594fc6a : Add energy usage estimate to android_batt metric
b9831613f0 : stdlib: Remove common package.
f9be497803 : tp: add summary table for native heap profiles
e538f8946a : ui: Fix crash when creating sdebug counter tracks
6a31ce1694 : ui: move most pages to their own plugin
9926ca4f01 : tp: fix opening JSON files with no trailing ]
3d3e1688dc : ui: record all screenshots on failure
9b86b1f745 : ui: _ctx -> ctx
0555a09f69 : ui: Rename example plugins to start with com.example
529d03ac93 : UI: improve sidebar API
e965be2b8e : ui: Simplify & merge 'simple' tracks & debug tracks
6d76517f19 : Disable JIT for debuggable test app
ceea397fcf : ui: remove dead code
f072aa7d8b : ui: remove a few things from state and actions which are unused
97f48fd123 : ui: add Page API
2ce3fd3f23 : Add --background to traced_perf
98c2fa0da1 : Add static table function to return args with defaults based on proto schema.
f1e96e3b5e : Add hardcoded mapping of Winscope table name to proto name and allowed fields.
7642b216b8 : Optionally add defaults in ProtoToArgsParser.
53befb79ad : Add default values from descriptor set.
7e2a493cdc : Add raw proto column to Winscope tables.
dfe2882153 : metrics: Add period_id for Wattson per thread metric
5637abca4a : metrics: Support for multiple periods in Wattson per thread
28d86d6065 : metrics: Remove duplicate code in Wattson per thread
993240526f : ui: fix misuse of ontraceready callback
a36fea4d85 : Deflake HeapprofdJavaCtsTest Runtime tests
12fde27086 : Export kprobe_profile stats
3b3f6cbbdd : tp: Stop exposing VIRTUAL and non perfetto tables and ban old docs schema
9aa13af322 : Use shared descriptor pool to parse Winscope traces.
4982a29488 : tp: simplify sys_stats classification handling
8dcfddb877 : Fix build with bazel 7.3
ceaa8b8d6d : tp: Add quiet mode to diff tests
683e423b70 : docs: don't show intrinsic and experimental tables in docs
11fe38f78d : Discard non-startup events from the startup metric
5ea57a9daa : Remove PERFETTO_INTERNAL_TRACK_EVENT
62d5898e42 : [stdlib] Added uid, internal, and public stop reasons to job_scheduler_states module tables.
44f0bcc2c1 : COPYBARA_IMPORT=Project import generated by Copybara.
161cc038da : Increase kprobes maxactives
2d2d6d8d4e : metrics: Add energy to Wattson per rail metrics
9307d2e497 : metrics: Add idle overhead cost to Wattson metrics
65bc131167 : metrics: Add power model version to Wattson metrics
ab7919266c : Revert "Increase kprobes maxactives"
dd8d17c61f : Increase kprobes maxactives
1d1c891130 : docs: add documentation for unscoped tracks and ordering tracks
6cc3a6780a : ui: unbreak extraSqlPackage injection
2a92bb9935 : tp: Stop forcing lexicographic ordering of tracks and use `child_ordering` when possible.
7de7671d03 : ui: implement serialization of flamegraph state for single selects
fb6c16b02a : ui: add infrastructure for serializing details panels
3bf2fc2f88 : ui: allow any App to obtain the matching Trace for the plugin
4341480e39 : [android_desktop_mode] Fix bug where instance ids are reused.
fe43562940 : ui: move details panel computation to selection manager
a06de91a73 : tp: Add device manufacturer to TraceProcessor
108da269c0 : ui: make QueryFlamegraph a class instead of a mithril component
acb3597595 : ui: simplify flamegraph filters
bdecef9048 : ui: remove last dependency between globals > Trace
1e9a82c838 : ui: remove unused accessors from Globals
5cbcc7ff9a : ui: simplify bottom tab handling
38cce5733b : ui: fix debugtrack test
6f9824f511 : ci: improve detection of UI changes to save resources
acf590aebe : ui: remove some gratuitous usage of globals
a02778fdd3 : ui: fix missing redraw calls on logs panel
4043fab1dc : ui: move TraceSource back into core
e0f1ba8fe6 : traced: Add device manufacturer to system_info
dbbf1bdf2b : [chrome] Export base Uuid
18a81e943a : Remove/replace agarwaltushar@ in OWNERS.
8a2b235d7f : fix PERFETTO_UNLIKELY scopes
3e4dbdcd6f : ui: rework check_imports
b5fc72586c : Enable Thread Safety Annotations for Perfetto standalone builds.
f639cfc2e5 : ui: Create interfaces for vega chart spec and state.
b7080b8b1d : Enable Thread Safety Annotations for Perfetto standalone builds.
4d0bb20959 : ui: rearchitect area select to be simpler and fix several bugs
d24c292bfd : ui: cleanup some more globals and AppImpl.instance usage
1e37e2a416 : ui: deglobalify a bunch of misc things
2160d102fa : perfetto: fix a bunch of fallout from aosp/3323137
2b8dc4d071 : ui: cache the viewport when the search is started
b2d78876ed : ui: remove dep between heap-profile and sidebar.ts
5e8f9a73e4 : Implement a Chrome histogram summary metric in Perfetto.
626f1396a1 : ui: gate child area selection based on summary track
6dd9c8b72f : Enable Thread Safety Annotations for Perfetto standalone builds.
79509629f7 : ui: fix area selection checkboxes
f98a7006b3 : ui: Add initial stdlib table menu with table viewer to explore page.
e91817ec7b : add upid and utid to TraceSliceSection and TraceThreadSection.
39f683d79d : COPYBARA_IMPORT=Project import generated by Copybara.
c5fd83915f : [chrome] Add org.chromium.system_metrics data source
7d53275271 : ui: deglobalify timeselectionpanel
20141387c1 : ui: deglobalify sidebar
63fcbdf688 : ui: deglobalify panelcontainer
0508c94806 : Add system metrics config
7390a36811 : ui: deglobalify notes panel
621c515c00 : ui: deglobalify TabPanel
7564e9e712 : tp: add handling of the language encoding zip flag
acf33a93ee : Add missing 'lock_guard' in 'UnixTaskRunner::Run'.
6d95e77f1a : ui: move hash to base
7042fe6bea : ui: break dep between workspace and raf
38e23cc44e : perfetto: rework proto descriptor gen to inform gn about deps
370bf01985 : perfetto_cmd: fix stray ; in error message concatenation
5cd63ff824 : [API] Expose clone session to perfetto API
fc90168be0 : stdlib: Fix old schemas
f32719bb23 : tp: fix parsing of multi-thread gecko profiles
5fb20bc65a : ui: Improve error when negative timestamps used with INTERNAL_LAYOUT
5d7a93af0d : tracing: export ProcessDescriptor and TrackDescriptor proto headers
2dae9d0a40 : ui: Fix track error capturing logic
7eee697f4a : protos: Add atrace_name to TrackDescriptor
8dbadce4fa : Add typedef for key_code in AndroidInputEvent
4750d14fff : Fixes a SQL condition for the Input Events plugin
c645ce524c : ui: Create separate histogram widget.
7456724c83 : ui: Flush before disabling trace with "stop" button.
a4b32d898f : ui: Rename PluginDescriptorStatic -> PerfettoPluginStatic
b59999f3ca : tp: Rename "z-index" to "rank" in TrackEvent ordering
3c402e57a9 : ui: Move threads details to plugin & expose via plugin deps
ea1e4a964a : ui: Add plugin dependencies
c281123b71 : ui: Use fzf library for fuzzy finding instead of our home-grown impl
9de6626b5e : ui: Close all related visualised args tracks on close click
022a7745c9 : ui: Fix bug in visualised args tracks and Briticise spelling
ac1dc46361 : ui: Migrate to self-contained class-based plugin descriptors
d14252d226 : ui: remove unexpected string from query
fe729bd98f : metrics: Switch Wattson per rail to interval_intersect
9c0772bbfb : metrics: Convert Wattson per thread to interval_intersect
719e9bf244 : metrics: Include last slice in Wattson trace metrics
a558193715 : stdlib: Fix startup_breakdown module performance
6028433d85 : tp: fix crash with linux device frequency tracks
057280ef0c : ui: Remove some gratuitous use of private member variables in plugins
6c9f16ba1f : ui: Turn onTraceReady into an optional hook on Trace
44a77fc7d7 : perfetto_cmd: Implement --clone-by-name
5dd9f9901d : tracing_service: Implement clone by unique session name
906e6b8849 : tp: outline gpufreq name
a9e55d1d63 : docs: Cleanup docs json for better parsing
797137a5e8 : ui: Make plugin activation require restart & refactor PluginManager
5de4f572c7 : ui: Remove unused onActivate() calls in a couple of plugins
d1f85fa4c7 : ui: Remove onTraceUnload hook & expose trash on trace interface
b7869df7d5 : perfetto_winscope-lite: add sdk_version "current"
2a4a8843f6 : ui: simplify sidebar click handler logic
44108c3aee : ui: fix SearchOverviewTrack race
d8e5beab32 : ui: change permalink UX to show a modal dialog
2d7fec474a : ui: remove JobStatusUpdate and simplify sidebar logic
43b653793d : Create a filegroup for heap_profile
adb90dc0b6 : ui: Fix trace off-by-one in UiMain
3f8589ae5a : ui: Only import plugins with an index.ts
0296ee49b2 : Show slices from atom_counters_slices in the UX.
bbd80b5e6f : tp: UI uses TrackDescriptor ordered tracks
e372718a20 : tp: Add child tracks ordering to TrackDescriptor proto
2293e922c3 : Add Android Statsd Logging to "Record new trace" UI.
7c86eb2431 : stdlib: Add a 'launch_delay' startup reason to the breakdown
7474caecb7 : [stdlib]: Add job scheduler tables based on ScheduledJobStateChanged atom to stdlib.
0cd910560a : ui: Added initial explore page to sidebar.
d29bb46aca : Update atoms descriptor.
d888a3001f : fix proto deps to handle indirect dependencies
447c7c5eb0 : stdlib: Parse guest_soc for Wattson
9ec47d5514 : tp: Add guest SoC model to TraceProcessor
0778307366 : traced: Add guest_soc.model from system_info packet
4e41815d36 : Revert "Create a filegroup for heap_profile"
3c10e85aa9 : ui: move analytics, embedded and testing move away from globals
1658c115c9 : tp: remove deprecated function call
732bc5ec01 : COPYBARA_IMPORT=Project import generated by Copybara.
04aba47294 : Reapply "stdlib: Fix suspend calculations for some traces"
c0d7669274 : stdlib: Only add non-zero duration to suspend
05802053d4 : ui: Remove remaining stray usages of globals from all non-core plugins
4a24fda99a : ui: Allow deep linking to events from tables other than 'slice'
9a7a72a592 : Create a filegroup for heap_profile
24a1c07b1f : tp: fix ubsan violation
25defafa3e : stdlib: Function to query for Wattson in time window
b9535cc9e3 : codec_metrics: Adding power data to metric
517d2123ec : ui: Fix highlighting on cpu tracks
d5ba33fa75 : Add a plugin for Desktop Windowing statsd atoms.
717f7bd2d9 : tp: fix failing tests on debug
109ab3ebcb : tp: simplify track classification
dccfefa0c5 : ui: Add area selections to permalinks
8ff6f0f67d : ui: Move most core_plugins to plugins
842f9c1cc8 : ui: Roll canary
2cf86e463d : tp: migrate all cpu counter tracks to new track name system
cc724c2f8a : tp: migrate all cpu tracks to new naming system
c9ee4baf3b : tp: improve track name specification and migrate global tracks to it
f647ebbb5b : tp: remove dependency of devfreq parsing on name
a33a53bb04 : tp: simplify track map
afb55653f4 : tp: rename tracktracker functions to clearly separate legacy from non legacy
e130b067d7 : tp: remove unnecessary tracktracker function
145dc82b08 : tp: merge irq and softirq tracks into track table
d61e77bf3b : tp: cleanup gpu_work_period_track and uid_track
e49f781f7d : shared_memory_arbiter: Fix TSAN race on destruction
67dacc7b77 : tp: fix sequences with no interned data being marked as dropped
bd63e0a479 : tp: defer including prelude until NotifyEof is called
2e635dbac1 : tp: add summary of heap graph class tree
6715e7476e : tp: fix crash when parsing JSON traces
5af76377f1 : Revert "stdlib: Fix suspend calculations for some traces"
2cd7f1cd9f : ui: Add an app startup breakdown track
6d9f29187b : perfetto: remove user build guardrail
9d7d5e6939 : stdlib: Expose app startup breakdown module
8d9ce4223c : Revert "Export liblog headers"
036895c42d : tp: change energy estimate breakdown track tables into entries in track
a9dbd59e5e : tp: migrate linux device tracks to using InternTrack
096b68f201 : Zigzag decode Pigweed integers.
1d2daf2cd3 : tp: make classification and dimensions public
43dae4c340 : stdlib: Fix suspend calculations for some traces
5e35a49901 : stdlib: Fix CPU0123 suspend calculations
63c8749609 : tp: some small cleanups in TrackTracker and GLobalArgsTracker
ae5fb3d704 : Add a token_id_hex arg to Pigweed events.
238df56025 : tp: Generalise all Reserve functions to ReserveDescriptorTrack and make DescriptorTrackReservation public
286bedbbf3 : ui: Fix issue where track shells flash white on click
745efebad6 : tp: move track classification out of track_tracker
7dfb6b531d : ui: Remove some more gratuitous use of globals in tracks/timeline
e0e916637e : ui: migrate track_decider into a global grouping plugin
2faffe69a5 : ui: fix tests screenshots
8f470eaa03 : ui: Add command to pin a track by name
f4c8349d73 : Extend Perfetto with kprobes tracing
2789f2877e : Add time_class_initialization to startup metric, matched by "L*/*;"
fcd791b689 : ui: Update Wattson columns to reduce confusion in total vs runtime
7ecdbd6992 : ui: fix stack profiling flamegraph
83b6522243 : stdlib: Add support for DSU dependent devices
d31780ef41 : stdlib: Move CPU dependence calcs into separate file
2003810ea8 : stdlib: Add support for new device
9adf6289eb : stdlib: Add function to query for devfreq counters
de8cad0bd0 : tp: Parse devfreq and add to counter track
5d53841dd3 : tp: Typed Intern functions don't take `std::optional<Dimensions>`
904f88678f : tp: Add TrackTracker::InternCounterTrack()
a0fe5a4223 : tp: Use track_tracker to intern/create tracks in {track_event|async}_track_trackers
9b5639a89e : perfetto: centralise trigger logging in traced
1e23ee8f4a : tp: add a cpu profiling samples table with samples from all sources
84d779d3e1 : ui: simplify gpufreq track addition
6663a13917 : ui: Remove `Optional<T>` type and all its usages
0e8ab9c53a : ui: Remove BottomTab & BottomTabToTabAdapter
8f1c8cae4b : ui: Format ui-plugins doc
9507be6ffe : ui: Move detailsPanel from TrackDescriptor to Track
54958edd56 : ui: Break cycles using an extensions registration interface
214e205751 : ui: Add duration in top bar for single selections
5bf8d52a5c : [track] Create NamedTrack class
eb7af20cc1 : ui: Merge perfetto.Slice plugin into perfetto.AsyncSlices
dc1a10095a : ui: Fix bug in ion track nesting in track_decider
2f59874bf0 : ui: Add cpu to suspend resume details panel
2289b0ce9c : tp: Add cpu number to suspend resume slice args
1eb7a79967 : Read the atom_counters table from g3.
d0f9f1e858 : tracing_service_impl: Do not expose internals to tests
5dc5f14d81 : tracing_service_impl_unittest: Avoid accessing trigger_window_ns_
2d6bb12b31 : tracing_service_impl_unittest: Mock randomness
10a0b922a4 : tracing_service_impl_unittest: Use mock clock
9e4635d6a4 : tracing_service_impl: Use virtual interface for clock
4ae4350331 : Revert "ui: Merge slice and async_slice plugins"
67c32eada5 : tp: don't parse V8 events if JSON parser is not available
3c2962fe48 : tp: speculative fix crash in ForwardingTraceParser
c77dfaaebc : perfetto: release new Python version
d1338de442 : ui: Merge slice and async_slice plugins
353bad0041 : tp: add stat for when all packets on a sequence are lost
6699e0a88e : [Reland] tp: add support for importing legacy v8 samples from JSON
f24f1f7bd2 : tp: simplify approach for including JSON parser
285e26147f : tp: Cleanup track tests
25a3024529 : ui: Remove union selection
46a8356778 : Revert "tp: add support for importing legacy v8 samples from JSON"
f23aa034fc : ui: Add cmd to create area selection from track event selection
64d118ec4e : tp: add support for importing legacy v8 samples from JSON
2ce4d181eb : tp: catch common errors with write_into_file and put in stats
24764a1d9c : stdlib: Refactor Wattson s/ungrouped.sql/estimates.sql
a753766ce8 : stdlib: Remove redundant Wattson module
57242edb35 : Roll prebuilts for v48.1
0a27255ef2 : stdlib: Support AIDL::ndk in android.binder module
15fbb2b956 : COPYBARA_IMPORT=Project import generated by Copybara.
e9ccb723e0 : ui: tp: remove legacy derived events from trace processor
21f9bd9a34 : Update CHANGELOG for v48.1
59e5b24bc0 : tp: formalize different parsing modes of trace processor
bc99b0c04d : ui: Remove legacy selection types from serialization code
b3ba14e41e : docs: add pointer when hovering over stdlib summaries
990f621daa : tp: Interval intersect errors on negative ts
a8616ff8db : tp: add parsing of thread names in ART method traces
050c04eb49 : tp: add support for the ART method tracing, streaming format
5056b3582b : [Python API] Improve TraceProcessor error reporting
52e84a1543 : tp: delete unused jank cuj derived events
57d05dcf4d : Add support for TAR files
76c65eae5f : Refactor AndroidBugreportReader
26b801b97f : tracing/ipc: Fix build with MSVC
f19b3dde17 : ui: Merge thread_slice & async_slice plugins
1e7aef15ad : base: move clock snapshotting logic to base and use in TP
4375daa275 : tp: properly set tid and mono clock snapshots
7911d6d1d0 : Add proposed trendy teams for VTS modules
c6d8013f44 : Roll prebuilts for v48.0
a5ac928573 : ui: Fix incorrect alignment bucket size in base_slice_track
6fa57ed3d1 : tp: increase max slice depth
6af5f0c817 : Revert "ui: Merge thread_slice & async_slice plugins"
eae417b3f2 : docs: Add CHANGELOG for v48
6e7483a371 : tracing_service_impl: Emit lifecycle event when flush starts
03b7c02b64 : tracing_service_impl_unittest: Test slow to flush data sources
1168bc2d39 : tracing_service_impl_unittest: Test slow to start data sources
9b684465b1 : COPYBARA_IMPORT=Project import generated by Copybara.
61b1bf5f6b : ui: Tweak the way the status message timeout is managed
e097f28ca1 : ui: Merge thread_slice & async_slice plugins
6f094227c4 : ui: Add track selection
8f084bf05a : ui: Remove TraceController
8ff4403da0 : ui: Remove remaining unnecessary scrollTos at selectSqlEvent callsites
81daed5081 : Add ART heap type enum to the heap graph proto output
8d03c66623 : Change defaults for java heap dump buffer size
9ae9aa9ada : docs: stdlib: Nicer stdlib docs
9a3176900f : ui: Fix omnibox state restoration bug
6ac70fb833 : Handle absence of process association in dmabuf plugin
9565f3ca18 : ui: Move heap/cpu profile details panels into their own files
a356b918b7 : ui: small cleanups to ui/common
2daf6ce9dc : ui: Format track crash popup
7ec60b0122 : tracing_service_impl_unittest: Remove producer id wrapping test
fd8f303606 : tracing_service_impl_unittest: Rework producer test
2df7316f08 : tracing_service_impl_unittest: Remove some unused utils
557d70292f : tracing_service_impl_unittest: Remove RegisterAndUnregisterTraceWriter
28ec718a12 : tracing_service_impl_unittest: Avoid GetShmemArbiterForProducer
bdd680b9db : ui: Fix crash on 0 byte trace
0044e3c439 : tp: add support for parsing the perf text format
74ba719d3b : tp: add test for simpleperf gecko traces
d429cd26f6 : ui: minor cleanups in preparation of TraceController -> TraceLoader
6fcebee1db : Take into account track type/etc. in save and restore pinned tracks plugin
6fc0becb16 : tp: cleanup some comments in trace processor's API
03d4cf09a8 : ui: Allow pinned tracks to be reordered
bbecf27e91 : ui: fix track nesting when intermediate tracks do not have events
bff880b072 : tp: remove sorting option which has long since been obsolete
17c99c0b4c : ui: Move stdlib docs fetcher function to public/lib
d4948144d9 : tp: Add dimensions column to track_table
7db5784002 : perfetto: fix chrome autoroll
7a2fd2c8f5 : ui: move deeplink querystring to dedicated plugin
4cc6080d69 : ui: Make stdlib docs available in the UI
49bfd0e0e7 : ui: Deprecate 'enableScrollJankPluginV2' flag
cf23a8e08b : perfetto: update tp/stdlib changelog
ad50ba9c4c : ui: move globals.threads -> trace.threads
ec53911c2d : ui: move metric errors and ftrace info to TraceImpl
7b22d4d159 : fix proto deps to handle indirect dependencies
5849a82b79 : ui: Update changelog
82d9de58a6 : shared_lib: Add tests for PerfettoDsImplGetInstanceLocked
33991d23e5 : tp: fix trace type detection for simpleperf Gecko traces
f426725426 : test: Remove old unused header
9b768b9cd5 : ui: Remove 'GENERIC_SLICE' legacy selection
fd97512c1c : tp: add support for ART method tracing format
103cc402d4 : tp: Add RegisterSqlPackage() and start sunsetting RegisterSqlModule()
847f4ede6b : tracing_service_impl_unittest: Do not check internal buffers
34ad7ddf34 : tracing_service_impl_unittest: Use MockProducer::CreateTraceWriter
f63951d5e1 : tracing_service_impl_unittest: Avoid internals in ScrapeOnDisconnect
3d3de5af2d : tracing_integration_test: Remove WaitForTraceWritersChanged
deee9b14a4 : tracing_service_impl_unittest: Remove WaitForTraceWritersChanged
ae0a8c4cd4 : tracing_service_impl_unittest: Do not access internal triggers
cc3eb23028 : tracing_service_impl_unittest: Rework marker test
992a839d9a : tracing_service_impl_unittest: Do not access internals in clone test
2c94cc13be : tracing_service_impl_unittest: Don't check for internal ds state
a0da06a6c4 : tracing_service_impl_unittest: Do not test number of pending flushes
7923f3858c : ui: move overview loading into overview_timeline_panel
26e304667e : ui: Remove unused state
2c54e0bbd0 : Fix code to add modem slices.
6a4c32a6d0 : ui: initialize AppImpl explicitly
fea9434556 : ui: Added test plugin for testing nested SimpleSliceTrack tracks
6552adc6c6 : gn: add link_deps support
49c1bffda5 : tp: add wrapping view for all intrinsic tables
93bd93060e : cmd: fix incorrect trigger atoms
52c4625873 : perfetto: Introduce devfreq ftrace events
76e86fff8a : Add dmabuf plugin to defaults
cb87a6ba3e : ui: Remove some gratuitous use of globals
ac15318a1d : ui: Move heap|perf select to onTraceReady to avoid stalling plugins
f627a23e5c : Guard different types of modem tracks separately.
45ee6192c1 : Speed up dmabuf query
af3b706991 : perfetto_cmd: add a new logging atom when the session is cloned
6537135abc : codec_metrics: using new functions for slice cycle
7fbe53f3de : ui: remove rerunControllers injections
29fa37a961 : ui: move slice_args_parser from controller to frontend
5a905b04f9 : ui: Move PostedTrace to TraceSource
2974870c47 : Revert^2 "ui: set override flag when registering modules"
3d6f91af42 : ui: move pendingDeepLink and statusMessage out of state
4161a339ac : stdlib: Include interval_intersect module
25e08ce9ed : stdlib: Use interval_intersect to create system_state
0428c99470 : stdlib: Prepend NULL dummy slices per CPU for Wattson
221383a924 : stdlib: Separate cpu_idle|freq|freq_idle
d04742025f : Revert "ui: set override flag when registering modules"
94fdaa2894 : ui: Split TraceController in async functions
f5c1355050 : ui: Add counter track arguments to the details panel
dcf2527b82 : ui: Fix screenshot tests
c20974fb63 : ui: Remove LOG legacy selection type
9927a61dc1 : ui: Remove PERF_SAMPLES legacy selection type
1d5d134a39 : ui: cleanup engine-related plumbing
3ffd08ab7c : ui: move PluginManager under AppImpl
79eeb65187 : ui: set override flag when registering modules
f64fdd7bb8 : ui: Remove CPU_PROFILE_SAMPLE legacy selection type
c4d33750f8 : ui: Remove HEAP_PROFILE legacy selection type
e89e0eaca2 : ui: move engine state from State to EngineBase
aad5eb5b7b : ui: Improve engine lifecycle
b165ef34e5 : Adds an Android input event plugin
f0ce1b1097 : ui: fix typo in object count unit
6f32c22802 : ui: move state.traceUuid into TraceInfo
b0c4436b8c : ui: Turn details panel hook on TrackDescriptor into a factory
607c3846cd : ui: add plugin for cpuidle time in state
18c53c13b2 : tp: Refactor stdlib naming from module to package
07503afb78 : ui: Add selection details to single selection and port plugins
dc52398fc4 : Fix include for presubmit
e16d1d0409 : ui: add object counts to heap graph size flamegraph
e7a5cd6cb8 : tp: RestoreInitialTables() restores added/overridden custom SQL modules
6713f7ca02 : tp: add support for importing Gecko JSON traces
1fab4be377 : ui: Disable suspend resume track from async slices plugin
6d56a6605f : ui: Add link to thread in Suspend/Resume details panel
fac67c8ad9 : ui: Introduce a customized details panel for SuspendResumeLatency plugin
cb95ebfdae : ui: Introduce SuspendResumeLatency plugin
b2fa2fdf0e : tp: Add additional slice args for suspend / resume slices
f42b53c9e7 : base: add missing include
965ea0539c : stdlib: Add CPU cycles calculation for specified slices
a6fc0364ae : tp: add generic cpu profiling module to stdlib
c22620a856 : tp: add table exporting perf samples in a simple tree
077bb360ed : ui: Split AppImpl and TraceImpl
9f209ef56e : ui: move UI responsibilities out of TraceController
cd2021f02a : [Python API] Make BatchTraceProcessor more resilient
845bd1eaad : ui: Only render timespan note durations when selected
c0cbddeb51 : ui: Move initial perf sample selection from TraceController to plugin
71cfbe1c05 : ui: Move initial heap profile selection from TraceController to plugin
074514fc06 : ui: cleanup Engine tracker
735254ace5 : ui: Move timestamp format functions into timeline
7b37bd5a23 : ui: Move hoverCursorTimestamp into Timeline
71aaedeaea : docs: use updated container image for perfetto-site builder
2812a39f61 : ui: Add sql selection resolver for async tracks
993a055d30 : [ui] Fix screenshot from aosp/3286715
c5e9446f43 : Fix windows build
b4f27ab375 : ui: remove TraceErrorController
97a53a9b96 : [ui] Fix anchor overlap
4913d6210e : [ui] Add "duplicate" button to SQL table viewer
5063b37c49 : [ui] Make table columns reorderable
b124a04f4d : ui: use the interrupt context when showing scheduling wakeups
f336c0afd5 : Fix source code link in docs
5d0513d94c : Dmabuf plugin skeleton
4e1ce2fa6e : ui: decontrollify Flows
c4d875b728 : [ui] Fix adding filters for sql table
ce07e60ec1 : Treat integer args as integers
b9ff56b011 : ui: Remove LegacyDetailsPanel
659cf723ba : ui: Add selection resolver API
32aa2b0257 : [stdlib]: Show GC info without thread_state info
87ecfdd2d5 : Add support for ARM SPE data
fbc4abbc9d : ui: Add missing canvas redraws to track panel mouse events
a48be8a445 : test_task_runner: Implement AdvanceTimeAndRunUntilIdle()
a82e6cdd0c : trace_processor: Show slow start data sources
1ea8afe098 : ui: Added filtering to column headers for sql table.
731a74bd6d : ui: Roll canary
9ed6c7f027 : ui: fix handling of parent ids for async slices trees
af029f370f : Detect g3 module by just trying to include it and detecting if we fail.
a7b705031a : shared_lib: Add two more PERFETTO_TE_ track params
8067bb8c65 : stdlib: Add CPU cycles calculation in interval
27db257f09 : ui: remove old ui screenshots pre-playwright
0c6297e75d : ui: fix addSqlTableTab
414f15c451 : tracing_service_impl: Emit lifecycle event with slow flush data sources
152535bcf3 : tracing_service_impl: Emit lifecycle event with slow start data sources
b597a4c6bf : tp: fix integer overflows when computing CPU cycle tables
7365d0280a : COPYBARA_IMPORT=Project import generated by Copybara.
780df6eccb : ui: Improve track shell accessibility
aeec337832 : Add average power to android_batt metric
5876c3b7a4 : Add a counter for power calculated from current and voltage
db8c66f656 : ui: decontrollify PivotTable
114b571577 : ui: Remove setLegacy* methods, just use setLegacy instead
53d2aa9052 : ui: simplify handling of AreaSelection
3ea6a233c3 : ui: Render underlay and overlay on pinned tracks
81b198a64f : ui: Roll stable
dba1b86b4e : ui: add tests for Wattson plugin + allow plugins to be enabled via url
d4506fc07e : [stdlib]: Fix binder_breakdown interval_intersect
bb214f1c19 : ui: de-controllify aggregators
6059a6d693 : ui: add tests for aggregation
9eab236253 : Improve performance for android_input_events
1802537873 : ui: Fix parent/child track nesting in async process tracks
f913caf2a1 : ui: Fix overflowing track titles when title text contains newlines
f57db47e3f : ui: Add isSummary flag to TrackNodes
a0892a1b97 : ui: Add standard group for input events
0e9cfe6c48 : Enhance AUX DATA processing
0d9e46f9c0 : ui: Nest async global tracks properly as defined in the trace
a50d0be535 : ui: Render nested tracks
7c742269fa : trace_processor: Expose cpu identifier in Cpu table
747468b763 : traced: Capture CPU identifier
b1b6d304fc : tp: add some context for perf_sample data quality counters
7f264d1ee2 : ui: update download_changed_screenshots.py for playwright
b6305ba0fb : ftrace: buffer polling: update kernel version checks
39dbbc4d0d : ui: Improve usability of workspace API
6470d45cae : Convert voltage from Health HAL from mV to uV
923cd5933d : Add number of cpus to startup message.
61fa76ccc6 : ui: move query_result{table,tab} to public/lib
34ddcb7991 : ui: move debug_tracks under public/lib/
81229eb98a : ui: plumb Trace object and remove getEngine()
41ba86a049 : ui: specify escape character for flamegraphs
1809089e3d : trace: remove redundant deps from "non_minimal_@TYPE@"
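
One of the trace processor changes above (096b68f201) zigzag-decodes Pigweed integers. As background only — this is not Perfetto's implementation — zigzag coding maps signed integers to unsigned ones so that values of small magnitude encode to small numbers, which keeps varint encodings short. A minimal Python sketch, assuming 64-bit signed inputs:

```python
def zigzag_encode(value: int) -> int:
    """Map a 64-bit signed integer to its unsigned zigzag form.

    Python's >> is an arithmetic shift, so (value >> 63) yields -1 for
    negative inputs and 0 otherwise, matching the C idiom
    (n << 1) ^ (n >> 63).
    """
    return (value << 1) ^ (value >> 63)


def zigzag_decode(value: int) -> int:
    """Invert zigzag_encode: recover the signed integer."""
    return (value >> 1) ^ -(value & 1)
```

With this mapping, 0 → 0, -1 → 1, 1 → 2, -2 → 3, and so on, so decoding is a cheap bit-twiddle rather than a table lookup.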

+- Project: platform/external/pffft

89e3b0e : Make pffft be visible by vendor to use FFT for audio processing

+- Project: platform/external/pigweed

3f8617fee : Project import generated by Copybara.
a87b43ed5 : Project import generated by Copybara.
bea72ed83 : Project import generated by Copybara.
b4aa092fd : Project import generated by Copybara.
cb1711c80 : Project import generated by Copybara.
d7bb0b92f : Remove copy.bara.sky from external/pigweed
9a9e023d1 : Project import generated by Copybara.
cbe00cf6e : Project import generated by Copybara.
45aa7f045 : Project import generated by Copybara.
e46b74471 : Project import generated by Copybara.
5d1fbda0b : Project import generated by Copybara.
74fd7ba84 : Project import generated by Copybara.
8ece86aac : Project import generated by Copybara.
99ec7aae4 : Project import generated by Copybara.
b891f79e1 : Project import generated by Copybara.
507a82cf4 : Project import generated by Copybara.
aacd6d5fb : Project import generated by Copybara.
5b4df984d : Project import generated by Copybara.

+- Project: platform/external/protobuf

3e513f9e2 : Add neuralnetworks apex available for libprotobuf-java-nano
b9a368035 : Suppress ReturnValueIgnored errorprone issues

+- Project: platform/external/python/bumble

23f46b3 : HAP: wait for pairing event (#551)
e16be1a : docs/examples: Add `run_cig_setup` description
2fa8075 : examples/run_cig_setup: Fix the address type and CIG params
46ec39c : avatar: update to latest version to correct flakiness (#568)
eef418a : Collect Mobly logs (#566)
9e663ad : Clarify Bluetooth address comments
f28eac4 : docs/examples: Fix typo
669bb3f : run_csis_servers: Update `usage` and add docs entry
347fe8b : Add codecs info in controller info app
35bef7d : Fix whitespace
d069708 : Support netsim.ini tmpdir on linux
bdba5c9 : pyusb: check devices_in_use before removal (#559)
f06a357 : Replace unsafe default values

+- Project: platform/external/python/cpython2

cd4dcab179 : Fix cpython2 for C23.

+- Project: platform/external/python/jinja

cfb608a : Add dirgroup for trusty genrule

+- Project: platform/external/python/markupsafe

772709e : Add dirgroup for trusty genrule

+- Project: platform/external/python/pyee

6f931de : Edit OWNERS file

+- Project: platform/external/python/six

c07d9bd : Add dirgroup for trusty genrule

+- Project: platform/external/python/watchdog

197282e : Import watchdog
8c7cc64 : Initial empty repository
2b62830 : Version 5.0.2
2f5377a : Enable OS specific Mypy checks (#1064)
4427aa4 : [watchmedo] Fix `tricks` argument type of `schedule_tricks()` (#1063)
236a57c : Bump the version
8658cfc : Version 5.0.1
cede9b6 : chore: run ruff
72be3e4 : [kqueue] fix `TypeError: kqueue.control() only accepts positional parameters` (#1062)
5daadf5 : docs: tweak
ca453f5 : Bump the version
7c6ca3b : docs: tweak
76f3ba8 : Version 5.0.0
516d4ac : core: more types (#1061)
6847b0e : chore: remove doctest `needs`
b134646 : chore: remove unused file
9af32e0 : fix: types
31b0c34 : feat!: more kwarg-only
cc05691 : docs: typing in examples
4e6f036 : docs: tweak
80576b4 : docs: clean-up headers to ease maintenance + add funding
324e044 : feat!: Enable Mypy `disallow_untyped_defs` rule + fixes (#1060)
837ee40 : [inotify] Add support for `IN_CLOSE_NOWRITE` events (#1059)
84d5adb : feat: Improve typing references for events (#1058)
b87e645 : feat!: more typing clean-up + enforce keyword-arguments (#1057)
3b00a42 : chore: No more `typing.Optional` nor `typing.Union` (#1056)
2872c7e : feat!: Enable `disallow_untyped_calls` Mypy rule + drop Python 3.8 support (#1055)
a318f39 : Bump the version
9c5a432 : Version 4.0.2
aac4328 : chore: add git attributes file
6a33516 : docs: tweak
cff604e : feat: Python 3.13 support (#1052)
7503d34 : fix: possible race condition in `AutoRestartTrick` (#1002)
7d4a369 : [core] Run ruff, apply several fixes (#1033)
7cd723a : chore: partly move settings from `setup.cfg` to `pyproject.toml`
654707e : fix: remove execution rights from `events.py`, and `watchmedo.py`, files
4043ef0 : tests: improve flakiness + clean-up
206843c : chore: remove useless kwarg on `BaseObserver` subclasses
914923c : feat: centralize platform checks (#1051)
ab5117a : [fsevents] Add missing `event_filter` keyword-argument to `FSEventsObserver.schedule()` (#1050)
6294daf : docs: Update PatternMatchingEventHandler documentation (#1048)
402ad01 : docs: Simplify the quickstart example (#1047)
9f23b59 : Bump the version
1a1f400 : Version 4.0.1
b92b6fa : doc: tweak
29ab159 : [inotify] Fix missing `event_filter` for the full emitter (#1032)
2fe1609 : Bump the version
9a4f3e2 : ci: final fix
d134073 : ci: fix missing wheels
5c8f620 : Revert "Bump the version"
df07c90 : Bump the version
c7a7842 : ci: fix publish workflow
d1439f9 : ci: fix blob storage
4f451f7 : Bump actions/setup-python from 4 to 5 (#1028)
e614fde : Bump actions/checkout from 3 to 4 (#1027)
1ce88d5 : Bump actions/upload-artifact from 3 to 4 (#1029)
26457de : Bump actions/download-artifact from 2 to 4 (#1026)
192c4ae : ci: add Dependabot for GitHub Actions
b07dec3 : ci: fix pypa/gh-action-pypi-publish
7d651ac : Version 4.0.0
6cdf07e : chore: Update supported Python versions (drop 3.7, add 3.12) (#1017)
6cfe9cc : fix: mypy "type: ignore" comment errors (#1016)
e4e2f8e : style: run black & isort on inotify (#1015)
52d8692 : feat: Add `DirectorySnapshotDiff.ContextManager` (#1011)
2338837 : fix: mypy errors introduced by #1012 (#1014)
5f9d93c : feat: Add typing to `dirsnapshot` (#1012)
75a3289 : doc: log the watched folder in the README (#995)
48c49a1 : Added event filter for the emitter (#989)
363fe62 : [windows] Remove the `WATCHDOG_TRAVERSE_MOVED_DIR_DELAY` hack (#986)
41fca1e : [events] Use `dataclass` (#987)
f991928 : [watchmedo] Add missing import
30a149a : [tests] Skip ``test_unmount_watched_directory_filesystem()`` outside the CI
916af32 : [tests] Improve `FileSystemEvent` coverage
d825a94 : [watchmedo] Log all events in `LoggerTrick`
f3dfc4b : [events] Log `FileOpenedEvent`, and `FileClosedEvent`, events in `LoggingEventHandler`
2cdb2e9 : Use SPDX license identifier (#984)
a4f28a1 : Bump the version
da09c06 : Release 3.0.0
9fc1ce2 : fix: missing `>` in `FileSystemEvent.__repr__()` (#980)
1838e0b : doc: clean-up
ce6218c : Update global.rst.inc
989fddc : Update installation.rst
71f2df3 : Update README.rst
9c28c61 : mypy check_untyped_defs (#966)
764a234 : tests: refactor test setups towards fixtures and hinting (#968)
ddb9bd1 : tests: xfail tests until we work on them (#975)
344f342 : tests: skip pypy on windows (#976)
b0255ba : [tests] use pytest tmpdir, remove watchdog tmpdir (#970)
bf0f6b3 : add isort (#969)
e716122 : return from InotifyEmitter.queue_events() if not launched when thread is inactive (#963)
5400b1f : fix tricks yaml generation (#965)
0d27bff : Remove handling of threading.Event.isSet spelling (#962)
61158c6 : [ci] apply misc simple mypy settings (#961)
76a03bd : Enable mypy warn_unused_ignores and make windows files 'optional' (#959)
2345147 : [ci] add job timeouts (#960)
017f6d0 : [ci] cancel previous jobs for pull requests (#958)
941a300 : [ci] use a single test/lint/hint/doc matrix (#957)
d66d664 : [ci] drop pull_request: paths: constraints (#956)
3eed70e : PyPy 3.8, 3.9, drop 3.7 (#955)
1c50e13 : Move mypy test dependencies to the tox config (#954)
f1fa93c : Drop support for Python 3.6 (#953)
4d1510d : Add mypy type checking (#908)
c0bd8b5 : Use Tox to specify the `[watchmedo]` extra
1ca358b : Fix Python 3.6 in CI
2bde22b : git-ignore `.venv/` virtual environment directories
c4e014b : Bump the version
60a97bf : Version 2.3.1
346ee46 : Bundle the `requirements-tests.txt` file in the source distribution
d38ab5a : Second run for `black`
b4a8a14 : [watchmedo] Log `FileOpenedEvent`, and `FileClosedEvent`, events in `LoggerTrick`
25a2d1f : [watchmedo] Exclude `FileOpenedEvent` events from `AutoRestartTrick`, and `ShellCommandTrick`
54f2cf0 : Run `black` on the entire source code
16015f5 : Bump the version
4f83d70 : Version 2.3.0
a9fc7f5 : [watchmedo] Add option to not auto-restart the command after it exits. (#946)
d2837e9 : [watchmedo] Exit gracefully on KeyboardInterrupt exception (Ctrl+C) (#945)
3140f52 : [watchmedo] Add optional event debouncing for `auto-restart` (#940)
2b09f64 : [inotify] Add support for ``IN_OPEN`` events via ``FileOpenedEvent`` events (#941)
d565049 : [doc] Add `--verbose` flag to log example (#914)
75c56f8 : Bump the version
858c890 : Version 2.2.1
37cfcc1 : [ci] Set the expected Python version when building release files
6687c99 : [watchmedo] [regression] Fix usage of missing `signal.SIGHUP` attribute on non-Unix OSes (#936)
da7bc03 : doc: time to move forward
68ee5cd : Add more files to MANIFEST.in
293a31e : Enable mypy to discover type hints as specified in PEP 561 (#933)
14e95bb : [ci] Update actions versions in use
82e3a3b : Bump the version to 2.2.1
7773a25 : Version 2.2.0
d493ec2 : doc: tweak changelog
3a940dd : doc: update copyrights date
fe4a111 : Support Python 3.11
cf0a195 : Remove unnecessary code of dirsnapshot.py (#930)
78adc2f : Allow musl libc's `-1` error message during testing (#923)
21e9f4c : [watchmedo] Add SIGHUP termination signals (#912)
e8c20a4 : Deprecate the fsevents2 module (#909)
0800dba : docs: fix simple typo, schduling -> scheduling (#910)
eb331de : Add documentation testing (#902)
ccb5a29 : Fix the one flake8 error in `setup.py`
95d3981 : Add `setup.py` as a flake8 testing target
3041b97 : Remove the `docs/echo.py.txt` file
bb199cf : Remove the `docs/_fsevents.pyx` file
df7f7ca : Remove a Windows XP TODO comment
085c151 : tests: skip eventlet installation on Python 3.11
5785c5c : tests: add Python 3.11
ea3d34c : Bump the version to 2.1.10
4cbf15c : Release 2.1.9
1dd7fa8 : [inotify] Avoid bad file descriptor shutdown (#895)
df1574c : [watchmedo] Avoid zombie sub-processes when running shell-command without --wait (#897)
fa806f1 : [watchmedo] Make auto-restart restart the sub-process if it terminates (#896)
72f2eb7 : [CI] Run flake8 only once in CI (#892)
5e5cb4a : [tests] Fix flakey watch deletion test for FSEvents (#891)
68ef624 : Bump the version to 2.1.9
2d8b7c5 : Release 2.1.8
9a5df95 : [watchmedo] Support setting output verbosity with `-q, --quiet`, and `-v, --verbose` (#889)
4baea36 : [inotify] Fix `Observer.unschedule` hang on linux when called with watch of an unmounted path (#869)
5747e93 : [watchmedo] (minor) check top_command instead of command in watchmedo.main() (#890)
a60ab4f : Modernize `super()` calls
3e064cf : tests: fix `test_schedule_failure_should_not_prevent_future_schedules()`
3a6eace : [watchmedo] Fix broken parsing of commands from auto-restart, and shell-command (#888)
2830914 : [watchmedo] Fix broken parsing of boolean arguments (#887)
a250c51 : docs: Update the changelog, and contributors
0ab956e : Fix adding failed emitters on observer schedule (#872)
030fb0a : [watchmedo] Fix broken parsing of --kill-after argument for the auto-restart command (#886)
8433a15 : tests: Improve debug on warnings issues (#865)
8e4f0ee : Fix typos (#873)
e57ec02 : Bump the version to 2.1.8
5fbbccd : [docs] Fix error: Unknown target name: "treewalker"
4641ccd : Release 2.1.7
be50c42 : [watchmedo] Fix calling commands from within a Python script (#880)
77e1f46 : Fix typo in README (#866)
f945d82 : Fix pypy CI version (#864)
3d5c4f9 : Run Github actions on master (#863)
fa3b3b5 : [watchmedo] Lazy loading of the PyYAML module (#847)
5f89548 : [inotify] Fix `not` equality implementation for `InotifyEvent`
61d8f86 : Eliminate timeout in waiting on event queue (#861)
57e80a5 : Bump the version to 2.1.7
c6f3ead : Release 2.1.6
63eab27 : NEWs entry for #836
3d72fd1 : [watchmedo] Improve the help output
9c61301 : s/Mac OS X/macOS/
2325b00 : [watchmedo] Removed nonexistent WindowsApiAsyncObserver references
0191c40 : Removed argh from watchmedo and replaced with argparse (#836)
b9774e9 : Fix test suite on FreeBSD (#842)
d444b4a : Update tests for FreeBSD and make these more visible by adding a badge (#841)
5a7022c : Use is_alive() instead of isAlive()
43f7982 : Add project URLs such as changelog and docs (#833)
ccc7811 : Adapt documentation date
7dc087c : Bump the version to 2.1.6
e695197 : Release 2.1.5
354cf70 : Revert "Allow overriding or adding custom event handlers to event dispatch map (#814)"
75fe2d2 : Convert regexes of type str to list (#831)
c74381c : Bump the version to 2.1.5
12de093 : Release 2.1.4
97ff41c : Fix test mocks for big endian systems (#828)
4ef348b : Update the changelog
be845f3 : Allow overriding or adding custom event handlers to event dispatch map (#814)
ea884d7 : Correct changelog
426e29d : [mac] Convert absolute watch path in FSEventsEmitter with os.path.realpath() (fixes #821) (#822)
384dfea : Fix SkipRepeatsQueue._put() raises AttributeError (#817) (#818)
1be65c0 : Ignore more folders
05da647 : [doc] Add quotes around brackets (#816)
8407554 : [macOS] Emit FileModifiedEvent on permission change (#815)
44ced23 : [watchmedo] Fix usage of os.setsid() and os.killpg() Unix-only functions (#810)
31801ac : Bump the version to 2.1.4
e9afb60 : Release 2.1.3
6d2f9aa : Publish macOS arm64 and universal2 wheels (#807)
aadb53f : Bump the version to 2.1.3
7e478de : Release 2.1.2
8fb9409 : [mac] Fix relative path handling for non-recursive watch in FSEventsObserver (#799)
50af6eb : [windows] Fix event detection right after starting an emitter on PyPy (#796)
4e7ca8d : Bump the version to 2.1.2
69d5ba2 : Release 2.1.1
a70b350 : [mac] Fix callback exceptions when the watcher is deleted but still receiving events (#786)
8ebb408 : Set long_description_content_type in setup.py
2b47824 : Bump the version to 2.1.1
d691038 : Release 2.1.0
2c069e9 : Update changelog with the latest inotify change
1415238 : [inotify] Simplify libc loading (#776)
7728a45 : [watchmedo] Add the `--debug-force-*` arguments to tricks (#783)
1c67997 : [mac] Add support for non-recursive watches in FSEventsEmitter (#779)
cd18dfc : Bump the version to 2.0.4
a2b015e : Release 2.0.3
d424b69 : Docs: update links (#777)
5efb90e : [mac] Use logger.debug() instead of logger.info() (#774)
331fd7c : Bump test sleep time from 10ms to 100ms on macos (#768)
f164748 : Bump the version to 2.0.3
5018022 : Release 2.0.2
6dfb322 : [mac] Add missing exception objects (fixes #765)
4c37723 : Be more specific about macOS requirements
c5f6a6b : Bump the version to 2.0.2
5fbd4b6 : Release 2.0.1
dbc5689 : Moved the CI from Travis-CI to GitHub Actions (fixes #751)
4e54e6b : [mac] Fix a segmentation fault when dealing with unicode paths (fixes #762)
4bddc32 : Release 2.0.0
16d9010 : [inotify] Allow to stop the emitter multiple times (#760)
72a2545 : [mac] Refactor handling of coalesced events (#754)
d9a6e63 : [mac] Improve handling of rename events (#750)
fc1242e : [mac] Support coalesced fsevents (#734)
2fab7c2 : [inotify] Add support for the IN_CLOSE_WRITE event (#747)
847a059 : [mac] Skip calling PyEval_InitThreads() on CPython 3.7+ (#746)
98cb3dd : Capability to release from any branch (#742)
5565a30 : Bump the version to 1.0.3
d80b55c : Release 1.0.2
f9ff9e1 : Add workflow for building and releasing wheels (#739)
2758815 : Uniformize event for deletion of watched dir (#727)
1d7cb5d : [mac] Add compatibility with old macOS versions (#733)
44b9618 : [mac] Return byte paths if a byte path was given in fsevents (#726)
b5c8cc1 : [mac] Don't pass event flag as ID (#725)
d0bb7b8 : [mac] Fix missing event_id attribute in fsevents (#722)
4fbca6e : Bump the version to 1.0.2
0f9acb7 : Release 1.0.1
9a99a2f : Bump the version to 1.0.1
954d363 : Release 1.0.0
614ccfc : [mac] Regression fixes for native fsevents (#717)
7f7937d : Drop Python 3.5 support
f2c28b2 : [windows] winapi.BUFFER_SIZE now defaults to 64000 (instead of 2048)
3d302a0 : doc: minor fixes
86b983a : Finish the Python 2.7 and 3.4 drops and prepare for the upcoming v1.0.0
a5cc00d : Fix installation of Python 3.7.9 on Windows (#705)
3dea877 : tests: bypass the eventlet test on Python 3.10
efdc69f : Use the standard pathlib instead of pathtools (fixes #556)
432c31f : Drop Python 2.7 support and update encoding handling (#703)
4dcb0f7 : Bump the version to 0.10.5
186a39c : Release 0.10.4
14a9406 : [mac] Prevent compilation of watchdog_fsevents.c on non-macOS machines
d35ef6b : watchdog is working on Python 3.9
7ddd5d4 : doc: Use `finally` for teardown in doco examples
b803b3e : fix: Handle repeated signals better in watchmedo
60c57ae : Expand tests to Python 2.7 and 3.5-3.10 for GNU/Linux, macOS and Windows
3b9904c : [Mac] Performance improvements for the fsevents module (#680)
d95692b : Add watchdog.tricks to the API documentation
1c6f52a : [Mac] Remove unused initwatchdog_fsevents() prototype (#681)
c36a7df : Rename docs/dependencies.txt to docs/requirements.txt (#678, fixes #650, fixes #666)
e985b67 : Add logger parameter for the LoggingEventHandler (#676)
4bd3cfe : Replace mutable default arguments with the correct implementation (#677)
e32272e : Bump the version to 0.10.4
1133684 : Release 0.10.3
dee8e5f : Fix flake8 errors
e249acf : Update the changelog
4e5768b : [inotify] Prevent raising an exception when a file in a monitored folder has no permissions (#669, fixes #670)
37bcf6c : Ensure ObservedWatch.path is a string (#652)
077d5b1 : [inotify] Allow to monitor single file (#655)
1675cb6 : Bump the version to 0.10.3
c3dd0d3 : Release 0.10.2
80f51e4 : Removed pinned macOS requirements for fsevents2
6e01c8d : [BSD] Restore support (#641)
977866c : [tests] Mark tests from test_delayed_queue.py as flaky
f61c0f3 : [BSD] Further support in tests
9ac83bb : [bsd] Partially restore support (#638)
9ef3b1b : Added FreeBSD CI support
f1aa087 : Adapt tests to improve BSD support (#633)
b752e37 : Fixed the build_ext command on macOS Catalina
502df2b : Refactor FileSystemEventHandler.dispatch()
8f7e051 : Bump the version to 0.10.2
a723e22 : Release 0.10.1
0889faa : [mac] add -Wno-nullability-completeness to compilation arguments
6baa210 : Fix macOS build_ext for old and new versions
9134164 : Add LC_ALL=C to Python 3.6 test on GNU/Linux
d557574 : Fix Python 3.6 installation when the OS locale is set to POSIX (#616)
42aae6e : Update v0.10.0 release date in changelog.rst
b4da673 : Fix two changelog titles to push to PyPI
5ee9a38 : Add the EmptyDirectorySnapshot class (fixes #612)
28b75d0 : Bump version to 0.10.1
6c1c96a : Release 0.10.0
6793588 : Fix simple typo: seperate -> separate
d092a69 : [snapshot] Removed the walker_callback parameter
587d11f : Update the changelog
56b712b : Fix a TypeError when deleting the watched folder on Windows
7a55f31 : Added parameter `ignore_device` to constructor of DirectorySnapshotDiff (#597)
771c192 : Fix double 'http://' in README
ee167a4 : Remove AppVeyor (superseded by Travis-CI)
97e6457 : Change documentation link to readthedocs.io (#591)
76e7520 : Support eventlet monkey patching in SkipRepeatsQueue (#589)
2c56a83 : Setup Travis to support Windows (#581)
e311ea7 : Join the polling emitter thread in its fixture
183a06e : Garbage collect on no_thread_leaks fixture
bee6687 : Avoid private threading._dangling object
a606644 : Skip unprocessable socket files on Unix (fixes #509)
18bb843 : Fix wrong source path after renaming a top level folder
ecaa927 : Snapshot: don't walk directories without read permissions (#573)
c73eaad : Fix issue with observed directory deleted (fixes #570)
a8db635 : Add flake8 checks in tox and Python 3.8 in supported versions
6ab52af : Fix CIFS documentation markup
4fcf037 : Added usage instruction to use with CIFS
11fae8c : Fix changed tox behaviour on Appveyor CI
f252554 : Drop Python 3.4 tests
c81655b : Fix test_observers_winapi.py::test___init__() on Windows
af370b4 : Added missing st_size in win32stat to fix broken tests against Python 2.7 on Windows (#558)
70026b1 : DirectorySnapshot.walk(): also filter on EINVAL (#559)
845d767 : Uniformize use of the with context manager with files
3f687c6 : Watchmedo: Handle all available signals
8b94506 : Remove emitters which failed to start
6ba82ac : FSevents: Support for unscheduling deleted watch
9852970 : Fix Inotify behavior when presented with non-ASCII path events
f313e9d : Trigger modified event when the file size has changed
11cb8d5 : Fix thread leaks in tests
495ac6b : Remove obsolete buildbot files and documentation
46141d4 : Identify synthesized events
29d7cce : Update changelog
bc8b5b9 : General code clean-up
caa95e8 : Refactor all tests to use pytest
6062026 : kqueue.py: fix AttributeError: 'DirectorySnapshot' object has no attribute 'path_for_inode'
7273ab8 : Fix how we raise OSError
9acf185 : Fix FreeBSD detection
f3c7f84 : Add test for several observers on the same path
a5ecb7b : Add test for schedule() after unschedule_all()
b8cf1b0 : Generate sub created events only if recursive=True
7a726d1 : Force polling option on auto_restart
19eb494 : Avoid race condition crash when a directory is swapped for a file
c7e6f8d : Remove empty .gitmodules and project files for Eclipse.
22ddba2 : Skip watchmedo tests if PyYAML dependency not satisfied
ee2c3de : Update test documentation (fix #518)
e4297f0 : Fix watchmedo tests in Windows
ce724cb : Update .gitignore
2b2091a : Remove delay from non-move events
6cf6d5d : Update README.rst
773d3da : Move test configuration to setup.cfg
a37e1ae : Make 'watchmedo' an extra installation
48701f3 : Use yaml.safe_load() instead of yaml.load()
2cea5e8 : Use scandir to save memory
edba525 : Fix tests with Python 3.4. Using pytest < 3.3 for pytest-timeout < 1.2.1 compatibility.
17b3c30 : Remove `easy_install` instructions
7a66c5f : Move conftest.py into tests
6f5604a : Fix AppVeyor badge link and the Travis-CI one
9dcd848 : Should use `os.pathsep` to split paths
cedf21a : WindowsApiEmitter made easier to subclass.
eea9b69 : Use separate ctypes DLL instances
b4dca3f : Remove unused import
437976f : specify unit of timeout
d5d5aa6 : Fixes small typo in README.rst
350d9f4 : Add -Werror compile arg
c0f5496 : Fix missing field initializers
c5039e4 : Update after review
a7fafc0 : Add build status badge to README.rst
48bf46b : Add Appveyor CI for Windows
e2730fd : Allow to call FSEventsEmitter.stop() twice without error
f6d7683 : Add Travis-CI facilities for macOS
152d3f8 : Drop support for Python 2.6, 3.2 and 3.3
4a89e2e : Fix unused parameters
0c5e353 : Fix bytes <> str warnings
43ea4bf : Handle all supported Python versions
6aef6bb : Fix deprecated imp module usage
6987131 : Fix Python 3.7 DeprecationWarning with ABCs
b78fc0c : Bump v0.9.0
f5f9010 : Fix installation to make every Travis CI job pass
d95bcbd : Fix .travis.yml and tox.ini for Python versions
0dd1e13 : Clean "except" blocks
825b94a : Move fixtures to conftest.py
0a22783 : Clean imports
f48f735 : Fix typo in DirMovedEvent with inotify
c4818c2 : Fix Python version in tox.in and setup.py
95e68ed : Add Danilo J. S. Bellini to AUTHORS
268a46d : Add a workaround to support uClibc library
d33bee0 : Fixes issue #381
b692c5b : Fix crash on shutdown
eaedd76 : Add trove classifiers for python versions supported
cb70f70 : polling observer: deleting observed directory now emits DirDeletedEvent
6a38937 : Catch RuntimeError in _load_libc()
4cb5ad8 : - fix indentation
5dc9ff4 : sys.platform does not start with 'bsd' on any of the BSDs implementing kqueue.
47904b9 : Fixed constant in winapi.py.
0ec7746 : Update .travis.yml to be based on tox
5d41464 : Add a tox.ini that doesn't depend on setup.py test
ca335db : Remove duplicated directory "delete" on Linux
69d4ca4 : Mock inotify & test the c-d-c-d-ds-i-ds-i sequence
5fdc62a : Remove Inotify._remove_watch_bookkeeping
d695019 : Add test_fast_subdirectory_creation_deletion
a7bd635 : No need for time.sleep
475dae4 : utils: remove unused 'namedtuple' import
f8dddef : utils: fix undefined name 'platform'
62e55ab : Remove unneeded debug statements and remove super() call.
f6a040d : Remove unneeded debug statements.
a40a187 : Adds option on linux to allow all move events.
de78669 : Adds option on linux to allow move events to be passed through, even if only one part of the event is watched. Tests for the new feature also added.
5388391 : Add --debug-force-polling to shell-command
c9fb86c : Remove accidental newline
c582cf0 : Use `setsid` function from Python's `os` module
fe73cb6 : Fix dead link to nose
1ce1de5 : fix pytest invocation
4cd5138 : Bugfix in fsevents2.py: unicode path param not properly set
0a70d87 : v0.8.3
eadb257 : manifest: only include source files
35bbbad : changelog.rst indentation
b6207a4 : [linux] auto fallback to polling on unsupported libc
c77f677 : [osx] auto fallback to polling since kqueue may require c extension installed
45a5d67 : [inotify] fix possible unbound local error
7cac23b : fix fallback encoding. getfilesystemencoding() can be None.
ceeb0d8 : [inotify] refactor libc loading and improve error handling
d7b7a1b : allow event handlers to modify the observer from the callback
3502ce6 : [tests] don't join thread on teardown.
794f502 : use custom loggers. closes #274
aec1529 : v0.8.2
c678ecd : show missing lines in coverage report
ddfb57e : fix changelog
2e571a2 : Remove old, unused .c and .h files
2e83199 : Avoid race condition crash on directory deletion
f34d475 : whoops. backported wrong version
0244dae : add threading.Event backport. in py2.6 wait returns None
2b119df : remove dead code
4a3844b : handle failed starts
7a7d09f : update api changelog
2e9530b : move emitter initialization into `start`. (don't block/start watching on object creation)
0fd68df : fix the issue where watching & unwatching the same folder on Windows may cause an "Access Denied" exception; close the dir handle
37ad8e7 : add accessor to emitters created by observer
46b0b5d : [tests] join observer before verifying, so it doesn't depend on synchronous impl. of stop
060906d : [tests] remove unnecessary test. tested in test_emitter.py
282a673 : [tests] verify that emitter terminated in function teardown
9c261dc : simplify inotify buffer implementation by factoring out a generic delayed queue. separates the inotify/threading work from the queue
d5fbf87 : [inotify buffer] immediately unblock read_event and return None on close instead of using the special stop event. properly interrupts all threads and subsequent calls.
5799f3e : move sphinx markup to docs/
f2aa073 : rename DaemonThread to BaseThread. preparations for making all threads non-daemon
c38e025 : [inotify] make delay a class attribute
3fda8fa : [inotify] update some comments and names for readability
d344a02 : explicitly check if we have the emitter. assignment here does nothing.
78b61bb : refactor: remove the silly get/add/remove wrapper methods for internal collections. use explicit checks instead of catching KeyError.
d5eb1e3 : dont start emitters on schedule if observer is not running
6b5c0a8 : Fix unused arg treated as error with clang
bc677d7 : add observer api tests
b86533f : remove dependence of implementation details from test
114d116 : raise AttributeError instead of ImportError on missing library functions
6bced05 : cosmetics
9582bc5 : remove legacy tests
d116a31 : make is_directory for events a class attribute
79e69e2 : cosmetics
3d14116 : make event_type in FileSystemEvent a class attribute
aadfc3a : remove legacy tests
ef40898 : add teardown of emitter tests. fixes pytest deadlocking on failing tests
0667f20 : v0.8.1
82f5cd8 : join emitter threads after stop. fixes #252
d03ea36 : Fix inotify stop.
1d40eff : Fix anon_inode descriptors leakage
cefefb5 : fix changelog formatting, maybe
93ff1da : v0.8.0
9c84dc5 : [snapshot] remove deprecated method. note in changelog instead
b54636d : [snapshot] fix ino/dev order
71bdf62 : [snapshot] rename parameter to avoid confusion between inode and (ino,dev) tuple
433ef1c : [snapshot] fix wrong key used for root and function stored instead of path
2b253de : bump argh version requirement
2530910 : [doc] minimum python version is 2.6 now
bf1acc6 : [doc] update caveats
95e0cef : [doc] remove outdated information
4c23cce : remove utility functions from documentation.
000e39d : [watchmedo] remove absolute path conversions
3515a63 : [watchmedo] remove unused code
2418837 : remove unused imports
bfe4a5a : [inotify] clean up path handling
cc52b3d : fix typo
aacd4a3 : [inotify] rename `buffered` to `buffer`
dc33063 : remove failing repr and str legacy tests
dd69d0c : [win] fix reduce not imported (py3k)
e4c06c8 : remove absolute path conversion
ff93032 : [win] use repr to make event printable
abb6fd4 : use repr for event paths. fixes encode error when printing event where unicode isn't supported
5bada01 : remove unused import
545aa5f : typo
ba28d86 : remove unused code
ba0b3bc : increase sleep time in snapshot tests for windows. mtime seems to have lower resolution here too.
a7e9993 : [win] only silence exceptions from io cancel. closes #167
24c7fb0 : [win] refactor to similar structure as inotify implementation.
dd0eb9d : [win] refactor
5064ef2 : [win] remove unused code
a88c27b : [win] merge winapi modules
1250090 : [inotify] don't read in_close events.
ee4af8d : [inotify] off by one error. last event in buffer not read
8f0371d : [inotify] refactor/cosmetics
ce975ab : [inotify] generate sub created events for move to
12d00da : cleanup sub event generation
514ff71 : change watchmedo loglevel to info. (don't print messages from inotify_buffer etc.)
0f93b5e : add back working tox script
a15b6db : add changelog to manifest template. closes #244
5d85da3 : Enable automated testing with Python 3.4
3bb104d : give more helpful error messages on inotify errors. closes #155
c6788c3 : remove unused code
2a15219 : refactor queue imports
401e64d : port emitter tests to osx
3a8cb6d : initial implementation of new fsevents observer using pyobjc
fb122ff : [tests] properly skip inotify tests (pytest wont skip imports)
cc8c734 : [tests] path to tempdir contains symlinks on osx. convert to real path
e3d6e93 : fix failing tests on osx because of low mtime resolution
52aa6c8 : add documentation for inotify buffered
a9cafbb : cleanup inotify emitter
54b9b78 : close inotify in tests
7c604cd : mark inotify tests linux only
0cf696c : refactor pytest fixtures
fbed63b : refactor
19db36b : separate old integration tests from currently up-to-date and maintained ones
5914f2a : generate dir. modified events on moved files. fixes tests
645c960 : add tests of inotify emitter
d763422 : [buffered inotify] add tests
aeaaa6d : [buffered inotify] move default delay to class
9bdb422 : [buffered inotify] mark all attributes protected
ac45e89 : add .egg to gitignore
5e7bd7d : fixing deprecation warnings with argh
91cce36 : Typo
266fecd : add changelog file
9d26e52 : bump pyyaml version. closes #234
bce7cd1 : add changelog to long description
e962fef : fix open file not being closed
807f6d3 : Include the unit-tests with the sources released
a839194 : rename no_parallel_processes
60efd74 : close opened files
cd649a0 : remove unused import
66e58e2 : fix references
87ecfdc : fix heading
c45a6dc : add 'contribute' section to readme
2f095a2 : improve documentation
b8e5db0 : add polling observers to api docs
68a32e3 : add vfs observer
ff42831 : [inotify] fallback to 'libc.so' (android)
3bcd354 : [polling] listen for stop event instead of using blocking sleep()
060e59a : Create pull request for adding --no-parallel as coded by @BernieSumption in issue #227
7095e95 : Send the signal to the process group of the spawned process thus handling all its children in the same group (setsid is needed to get the process into a different group than watchdog)
d3214cc : remove references to old flask theme
133ecf3 : win32stat: Change python2 syntax octal integers to also work with py3k
10ee902 : remove unused imports
1a2be46 : forgot to check for recursive
ce27391 : remove outdated information
dfaa083 : add listdir parameter
2b9ecf2 : avoid calling stat twice for each path by using listdir instead of os.walk
f4282d6 : include st_dev when comparing files
13c6286 : fix rename from previous commit
2af868a : undeprecate stat_info
a59f58d : add option to specify stat function for dir. snapshot
5200577 : use default stat result struct. added memory consumption seems insignificant
0090690 : implement st_dev/mode/mtime to avoid double work by calling stat again
2af8469 : update emitter to use buffered inotify
7ef8dce : initial implementation of buffered inotify
af5f241 : inotify: remove non_blocking option. not in use nor implemented
08a0a3d : fix close for inotify. os.close never worked
7252ad5 : inotify: generate dir modify event for create/delete
12d60a9 : fix invalid handle value for x64. closes #123
bf89c14 : update tests to reflect changes
f8b6640 : update dirsnapshot to use new stat function
54c2730 : fix mv. shutil.move will silently fail on windows and just delete the source file instead
2c2723d : add custom implementation of stat with ino support for windows
d88ae9e : fix path in windows tests
1fcb7c2 : fix path in windows tests
8d5342c : polling: take the first snapshot in thread. init blocks observer.start()
801765f : cosmetics
273cc8b : polling: explicitly check if we should stop instead of setting to none
1e71946 : remove unused code
d071c12 : remove unused code
2d1b877 : refactor DirectorySnapshot and DirectorySnapshotDiff
9a45398 : fix some tests and add delay for everything that tests modification (mtime have too low precision)
2f2f53a : be quiet travis
54bbc15 : remove old tests. covered by test_snapshot_diff.py
e37e490 : add tests for known issues with snapshot
b70fe06 : change sphinx theme. closes #45
808d858 : v0.7.1
a440f19 : fix indentation error in docs from previous commit
d520a6d : remove the observer selection and import explicitly based on platform
c962996 : fix inotify import for python 3
a45612f : remove unused coverage config file
459edd8 : remove old run test script
68c97e1 : remove unmaintained tox script
99d8cf9 : update ci scripts to use pytest
cc0a92c : cleanup travis ci script. use setup script
2514634 : add unittest2 dependency to setup.py
66a7987 : cleanup setup.py
058b1c1 : add py.test test runner to setup
087a01a : Added Python 3 support to setup.py
9f92424 : v0.7.0
7a89234 : Use locally declared unittest lib in unit tests.
9c42a79 : Declare unittest lib used within python version
d38767d : Always use unittest instead of unittest2 (same api used in this case)
7e2a43f : Updated fsevents observer to properly compile and run on Python 3
81f78b6 : Removed running 2to3 from setup.py since it butchers the custom Python 3 work I did
18b2a05 : Fix ignore_regex default value. Close #189
587cbee : update docs on supported os. closes #178
ee30e78 : remove unused constant ALL_INOTIFY_BITS
da2393e : refactor: split low level inotify code and watchdog emitter implementation into separate files
5c8bc48 : Revert "fix kqueue platform check"
c907113 : fix indentation
90a3fd6 : undo bad refactoring. (closure refer to variables outside)
3539a4c : fix kqueue platform check
5ca1c21 : cosmetic cleanup of setup.py
c7ee337 : added AttributeError and cosmetic adjustment to winapi
a1e0d8c : cosmetic / semantic refactor of the observer import system, slight redesign. added backcompat importlib2 to watchdog.utils
695e981 : semantic refactor of watchdog.observers.inotify.read_events
9a83813 : cosmetic refactor of watchdog.observer
8861b81 : cosmetic refactor of top level modules, watchdog.[utils,tricks]
a9b7018 : cosmetic refactor of tests and utilities
f928298 : remove README symlink
4e79dfe : Fix include python. Closes #188
ff699c3 : cosmetics
ce92680 : inotify: dont fire modify event on IN_CLOSE_WRITE
5aaef0d : Readability, enable pygments highlighting in README.rst
6a9a442 : pass encoded path to os.path.isdir et.al. as they do not use a fallback encoding and will fail if locale is C and path unicode.
b1518b1 : Changed unicode_paths to only use positional arguments for Python 2 compatibility
2f71d66 : Updated .travis.yml to test against Python 3.x
721e550 : Added skips to directory snapshot tests on Windows since Windows does not have the ability to detect renames using the directory snapshot util
98be5da : First pass at Python 3 compat
277fc12 : inotify: dont remove watches before closing the fd.
18d4361 : clean up the stop and on_thread_exit mess. see #169
8a0b2f1 : Cleanup some edge conditions in polling - Don't hold lock while sleeping - Avoid derefing a None _snapshot after on_thread_exit() is called
bcbc883 : Fix DirectorySnapshot to handle 2 different cases 1) file X and Y exist. move X Y was not reporting proper results. Was only reporting that Y was deleted. 2) file dir1/X and dir2/X exist. move dir1/X dir2/X similar issue. Old code reported 'dir1/X' was deleted. And nothing about dir2/X
2d14857 : Watchdog (on Windows) would create bad FS changed events streams
b1e0271 : inotify: dir/tree might be gone when we try to add a watch for it
29260f3 : read_directory_changes: On directory create check if it may have been a move
ef15049 : key (used for equality) in move event was never overriding. fixes #130
9289410 : cosmetics
b61458d : dont fire move events for tree or watch sub-directories unless set to recursive. fixes #73
aac9816 : tests: Fix polling test moves to work in Windows
254127e : tests: Fix test to work on Windows
164f941 : tests: Fix failing unittest test_watchdog_observers_polling.TestPollingEmitter
ae037ca : drop support for unicode paths in inotify.
6bafb5e : don't call on_thread_exit in emitters; observers handle starting/stopping. fixes #117 fixes #149
ec75c90 : cosmetics
e16b00f : detecting moves/renames with polling never worked: method copy doesn't exist. not copying appears to be safe
ae82972 : fix indentation
052cf48 : Set src_dir_path to None when generating sub_moved_events
32f5609 : fix pattern matcher handlers: src_path may be None
3fee609 : dont convert input path to absolute.
a5a1561 : update dependency list
adce642 : Document implemented observer classes
4fac911 : inotify: encode/decode paths as utf-8
2c956bf : Update quickstart.rst
f057522 : adding pip-style dependencies so readthedocs can create the documentation
e17799e : inotify: retry if system call was interrupted
202675b : Moved import inside of .
b10b953 : Fix for issue 129 at https://github.com/gorakhargosh/watchdog/issues/129 to allow import of ctypes-agnostic code when ctypes isn't present.
04c6921 : Fixed version comparisons to work with Python 3.
b20e3a4 : catch AttributeError thrown on win xp
5848e4e : Add .travis.yml for Travis CI (http://travis-ci.org/)
d2b4803 : Require pathtools>=0.1.1
e1b0c2c : Ignore -1 watch descriptor value.
ac9b287 : Adds bootstrap.py to .gitignore.
ca845d7 : Fixes bootstrap.sh.
6cebae0 : Adds new bootstrap.sh script.
4c3ef9d : Removes bootstrap.py
29e15b8 : Fix traversal of symlinks
a9349d6 : fixed some tests for windows, fixed directory move event for windows
119fe05 : handle exceptions that occur during io canceling
bdcef77 : fixed indefinitely blocking in winapi
7fbe566 : indentation error in previous commit
55f67ce : dont subclass, so the caller is able to see which observer is used
cd07f79 : correcting uncaught exception for thread safety
5caa45f : added an except to fsevents.py which was causing it to not be threadsafe in Django
6383ff2 : dont block while waiting for system callback
fe451b7 : removed unused on_thread_told_to_stop
5d9de53 : Updates package manifest to include COPYING.
c1f8479 : Adds a copy of the Apache License 2.0
253049a : Renames LICENSE to COPYING.
fa7b6df : Allow quickstart code to run without arguments.
4c58c0d : Cleaned up formatting in example code.
dfe1b42 : And another copyright line.
c6cf18c : Fixes another copyright line.
937d16b : Fixes trove classifier for license.
dbd88b9 : Bumps version up to 0.6.0.
4ac8608 : Adds executable permissions to bootstrap.py
6e530ba : Formatting according to google style.
bab1f3e : Updates more author lines.
1796ba5 : Updates author lines.
cded678 : Fixes copyright lines.
273771e : Updates AUTHORS file.
8004501 : adding python-setuptools to build-deps
703657f : Adds a new bootstrap.py script.
cf8e712 : Fixes readme formatting.
f7eba9f : Updates copyright notices.
13bf35c : Updates copyright notices.
9ffa9d3 : Updates readme and copyright.
c0d63f6 : Handle SIGTERM in the same manner as SIGINT so that this program has a chance to stop the child process.
c484d89 : Allow multiple directories for the auto-restart command
3e9f9a9 : Added an AutoRestartTrick and an auto-restart watchmedo command.
b23d69f : Add Debian packaging
9ea9e13 : Subscribe to inotify modify events
7db4c15 : Added an event handler which matches on regexes.
10d4795 : BaseObserver: don't hold the lock while waiting on the queue
ffa3751 : fix #79
19aa18d : scripts/autobuild.sh moved to tools/autobuild.sh. Update hacking.rst to reflect this.
ae582a5 : Revert commit 3a297e86d0a8777972d3afa07ff33633a07b5477.
3a297e8 : setup.py: replace setuptools with distutils
74ec8af : inotify: correctly report deletion of the watched directory
0656443 : inotify: emit a `DirDeletedEvent` if the watched directory is deleted
2a75987 : terminate the `FSEventsEmitter.run` loop when calling `stop` (fix #64)
32efcb8 : Fix very small path issue in hacking guide
624ffa4 : update build and testing scripts to reflect actual location of utility scripts
ce06b77 : whitespace (damn you github inline editing!!!)
4f64b6c : Don't let OrderedSetQueue.unfinished_tasks be incremented if `put` does not add anything to the queue.
b7999e3 : Adds description for the --wait flag to shell-command.
7f057d1 : Updates AUTHORS
5d45d33 : Add option to wait for the shell command process
f2765c7 : `FSEventsObserver.schedule` returns the created `ObservedWatch` instance.
0bc9713 : Note about vim. Editing on Github is cool! Immediate possible use is to write the blog right in here!
a1cbf2d : Minor edit in the readme.
33b8eb3 : Adds a note to the readme about kqueue and ulimit.
9a66fa1 : Adds eclipse project files to make it easy for you to import.
a8d16ec : Re-enable FSEventsObserver
b524eeb : FSEvents API now works.
3941786 : Functional FSEvents module.
26f0795 : Some Cython declarations for the FSEvents API.
bc6cc4f : Properly formats license header.
97c30b7 : Updates bootstrap.py
0edf088 : Updates examples.
877018f : Drops the dependency on Brownie.
a1c8f3f : Even more licensing updates.
3f40bf1 : More licensing updates.
f93e484 : License is now Apache License, version 2.0.
9604fa8 : ``setup`` to ``setUp`` for unittest2.
18b4245 : Minor correction.
67a7236 : Fix is_set attribute test for Python 2.5
2237842 : with_statement needs to be imported in Python 2.5
d654f36 : Fixes a couple broken tests.
69a4fc4 : Adds missing testenv dependency on coverage.
56bc9d3 : Fixes formatting in run_tests.py
0ccd9f6 : Moving to unittest2.
1523750 : Adds tox configuration.
ef2b12c : Updates buildout to use unittest2.
3b05239 : Removes nose configuration.
55f56d4 : Adds a unittest2 test runner.
8100e08 : Updates Makefile to use current practices.
0388994 : Renames "scripts/" directory to "tools/"
cf1b866 : Adds Simon Pantzare <simpa395@student.liu.se> to AUTHORS
a8e6289 : unschedule_all instead of unschedule
f0c2144 : getattr was missing default argument
efa45bb : add ignores for virtualenv
6ea7a8f : Closes issue #48
44e4aab : Adds Senko Rašić to list of Authors
88b8d5a : observers.inotify: gracefully handle moves from non-monitored directories
e6c3c0c : add logging configuration
4858a15 : update my email address in AUTHORS file
8fc6224 : Updates AUTHORS
a7713fd : add logging configuration with INFO log level, so quickstart example shows some output
16f51a6 : Updates email address
3205476 : Update project with my name.
e32dd4a : Update version number.
144f587 : Allow tricks to set source_directory per trick
a58b20a : Disable fsevents compilation as it is unused.
910bbd1 : Defer inotify_init1 error from import time to call time in order to avoid breaking on Linux <2.6.27.
2c41bfe : Minor fix release 0.5.2
7ffe507 : Small fix for issue that borks WindowsApiObserver
f5acde4 : Prepares for the next release of brownie
8129c80 : make watchmedo directly executable
8a151f8 : Fix docs version
96ff6a3 : Remove bricks module from docs
8d530c9 : Adds --recursive option to repo clone instructions
1a67b3d : Adds dist command to Makefile
4ee1fb1 : Updates reference to autobuild.sh in docs
9f36843 : Disable FSEvents for now
815d685 : Fix for issue #26 trap/bpt error for unicode path
ba357e8 : Adds Brownie 0.3 as dependency.
1fe5fdb : Adds Tim Cuthbertson to the inotify module docs
1417f42 : Credits Tim Cuthbertson in the inotify.py module
343de17 : Adds Tim Cuthbertson to AUTHORS list.
3d90085 : Formats fix for issue 24 by gfxmonk
610df8d : Merged fix for issue 24 from gfxmonk by cherry-picking
f8d8a99 : Fix for issue 23
8fe6eda : Add Makefile to ease development setup
96f4550 : Add online documentation location
62969af : Add pathtools as dependency in documentation
04cbbba : Includes project eggs in sphinx buildout config
decd1bf : Updates source code to use pathtools
e9f27aa : Use pathtools in dirsnapshot and watchmedo
c6fe2ff : Adds pathtools as dependencies
32ddc3c : Moves the multi-bootstrap script to scripts/ dir
401d475 : Clean up docs/source/conf.py
a9a3ed4 : Moves bootstrap.py to scripts/ directory
559a708 : Updates C module documentation
34dbd60 : Adds version constants to the C module
8245638 : Updates quickstart example for API changes
46959ea : Ignore configuration files for IDEs
f9aee7f : Moves the autobuild.sh script to the scripts dir
43b9cf5 : Formats C source
66e482e : Formats setup.py
44155cd : Update C module documentation
502feda : Adds version attributes to the C module
1c87679 : Makes __version__ attribute consistent throughout
7f4edb0 : Adds a __version__ attribute to the C module
9478d60 : Pedantic C compilation and version macros
483bb21 : Wraps an extern "C" block around C header API
bbbe112 : Documentation clean up
8925f8d : Adds documentation for all C code
11030f3 : Updates ignore rules to exclude *.o files
599f5e7 : Single blank line at end of LICENSE
8d05851 : Add ant-glob wildcard example to README
48d4c3b : Clean up .gitignore list
0000f8b : Removes superfluous `import logging` from README
abe0c3e : Clean up rst links
57a3b7a : from __future__ statement must be first in module
765c1d7 : Fixes indentation in code example
3038fd5 : Update example in README
286ea20 : More fixes
c1ffe87 : Makes minor corrections in README
15f1a31 : Fix rst in README
f3df986 : Adds info about PyYAML and uses rst in README
8b2ba1f : Removes idea configuration
f3efdcf : Update documentation about tricks.yaml
c4074eb : Simplify the tricks.yaml file
ea8dd50 : Add PyCharm configuration
07e6b59 : Simplifies tricks.yaml
4094724 : Parsable json
67f3593 : Adds a tricks.json file
413205d : Make async winapi impl raise ImportError
f2fe9f6 : Adds blank ReadDirectoryChangesW async impl
a2f10b6 : Adds script to dump FSEvents constants
ab70ade : Moves the src/scripts directory to the project dir
8a3dae2 : FSEvents basic now works. Library is usable
b0a4746 : FSEvents monitors subdirectories by default
f145054 : Adds basic FSEventsObserver
5525716 : Uses loop index type size_t to match `num_events`
e7df8d6 : Raises error when event stream fails to start
97dc191 : Refactors _watchdog_fsevents.*
68de68d : Update setup configuration to use src/ as srcdir
a6e1557 : Move all watchdog lib/script source into src/
12e1f32 : Add order comment in observers/__init__.py and format
45ba3df : More formatting
1b06736 : Format argument option
ed673c9 : Replace `schedule` with `add_watch` in PyArg_ParseTuple
9bfb7da : Renames in FSEventStreamInfo struct
c68dc66 : Casting is rarely required in C with void *
c450e5f : Removes g__* object refs from _watchdog_fsevents.c
79e8805 : events.py formatting
ead618c : Clean up
c6d9af1 : Cleans up ``watchmedo.py`` and adds documentation
44c890b : Formats the watchdog/watchmedo.py script
64b4a4b : Updates file header; passes all of malthe's tests
da8062c : Method renames
e2764f6 : Formats code
6797e8e : Rename 'observer' to 'emitter' in _watchdog_fsevents
9f10165 : Adds argument checks, updates signatures, and docs
b034a04 : String literals don't need enclosing parentheses
53dcd4d : Clean up and modularize FSEvents Python/C API
9c565ec : More accurate descriptions and names
bd332e4 : Update C module documentation
304daed : (void *) casts are unnecessary in C
c462f16 : Remove unnecessary Python C arg tuple parsing
c952d71 : Add docstrings for functions in _watchdog_fsevents
7ccd1dd : Remove stray comma from setup.py
5d54306 : Adds RETURN_IF_NOT & RETURN_IF macros and cleans up
216e248 : Adds a ``ctypes_find_library`` function
a947e20 : PyDev cleaned up formatting in bootstrap.py
cffbc53 : Adds __init__ methods to clean pylint warnings
4de790f : Makes handler.dispatch public
b9c6c64 : Cleans up more code
94ff278 : Cleans up even more code
7e948f5 : Cleans up a lot of code thanks to pylint
24d92fe : Adds warning note about kqueue performance
e0dec54 : More clean up
2a95f1d : Cleans up code as recommended by pylint
832b469 : Update references to winapi async observer
e655ed5 : Removes unused import in read_directory_changes.py
e509bcd : Use the ``ctypes`` module as namespace
6c7b2d4 : Fixes renamed debug imports in ``watchmedo``
1d9f451 : Adds ``--interval`` command-line option
6dc6286 : Renames generate-yaml -> generate-tricks-yaml
818ed83 : Capitalizes usage of Copyright symbols
d1c6fa5 : Adds missing ``ctypes.util`` import to find libc
f09f138 : Uses the raw buffer to read Windows events from
c8816b2 : Read C strings instead of wchar_t and decode
753260c : Synchronize read_events for inotify
36d08a4 : Fixes typo: `FSEevnts` to `FSEvents`
0076585 : Converts bytes to long for get_FILE_NOTIFY_INFORMATION
58f6c90 : Fixes typo in the copyright line for ``winapi.py``
5bbc040 : Adds link for get_FILE_NOTIFY_INFORMATION
de309e4 : Adds Thomas Heller as contributing author
4a13f14 : Adds parser for FILE_NOTIFY_INFORMATION struct
fa15979 : Adds using FILE_NOTIFY_INFORMATION to read events
bfa9375 : Removes remaining references to pywin32 from docs
695b9e2 : Enables OrderedSet on supported Python versions
a186c8f : Fixes author email formatting
7592b18 : Will McGugan and Ryan Kelly are authors
e0e81ad : Removes pywin32 from setup.py
bb19440 : Drops dependency on pywin32
1701745 : Renames ``winapi.py`` to ``read_directory_changes.py``
e0f0764 : Ensures on_thread_exit is called on daemon threads
38d948d : Drops the dependency on pyinotify
f17251e : Renames ``scripts/nosy`` to ``scripts/nosy.py``
1e6c78b : Adds copyright information to scripts/nosy
bf29db6 : Sync queue_events and handle ctypes AttributeError
399aca2 : Generates sub events for directory movement
11b8381 : Swallows fired but unpairable moved_from events
c654a2d : Adds Sebastien Martini to documentation
1853878 : Starts queuing inotify reported events
a4b161f : Enables displaying event mask names in __repr__
7a6ec65 : Adds Sebastien Martini as contributing author
60050f4 : Clean up ignored watches
4816b6d : Adds automatically adding new directory watches
7e21b75 : Adds basic working inotify file system monitoring
eeb2e31 : Initialize the Inotify() instance correctly
c8b2808 : Adds wrapper class and struct for inotify_event
a6cb1f9 : Enables creating non-blocking Inotify instances
fd7e5e7 : Adds links to inotify articles and documentation
53f7bea : Adds an initial wrapper for the inotify API
542d5ec : Adds bitmasks and ``inotify_event`` struct parser
1613cad : Precalculate all events mask.
07f85ca : Cleans up add/remove kevent descriptor code
14d67b6 : Stops generating native API documentation
65993ad : Removes a period from the end of admonition title
9641ee6 : Fixes a typo: c_unit32 -> c_uint32
cf2af70 : Adds basic skeleton and constants for inotify
0f84d34 : Disables installing ``select_backport`` for Linux
94f2d4e : Adds Luke McCarthy as additional copyright owner
5f1cfb2 : Adds docs and empty inotify emitter implementation
1da782c : Updates documentation with recursive clone command
73074e6 : Add a why watchdog section in the readme.
e28226a : Disable using the watchmedo.bat script. console_scripts generates executables.
7acfa66 : Don't need an rlock in KeventDescriptorSet
95289fe : Update test docs
26ecc6a : Update doc examples
e3a51d0 : Cleans up imports in winapi observer
16e7b81 : Windows install fails if manifest includes directory names with ending slashes. Fixes this.
a59fe1e : Adds a --timeout option to the watchmedo log subcommand.
40bff3f : Updates loggertrick to log specific events
901b745 : Removes unnecessary print statement from kqueue observer.
982dd1b : Formats setup.py and updates Trove classifiers for the package
121dd22 : Update manifest and use setuptools
afbd726 : Updates contributors list adding Raymond Hettinger
fa70e77 : Add Filip Noetzel's email address
4773992 : Adds Filip Noetzel as a contributor.
bbbfb13 : watchmedo gets autogenerated, delete script
e115aa8 : setup.py: pack all packages, not only 1st level
59b3f05 : let setup.py generate the "watchmedo" cli tool
ccc8dd6 : setup.py: s/install_requires=/requires=/g
2c8e410 : Use valid project/version requirement specifiers
f328ecd : Formatting for the setup.py
97b3407 : Indent the KqueueObserver class one level in so that it executes only on BSD and Darwin
60f50f7 : setup.py: Don't read now-excluded README.md
6b4ce68 : Update code comments.
0d5239f : Clean up
d8aa7f9 : Remove unnecessary code and update answer to TODO in kqueue observer
2da4ff2 : Includes information about why we're ignoring temporary files in kqueue observer. Locked files cause python process to die with bus errors
ed17b08 : Ignore temporary files in kqueue observer
b60fb74 : Fixes the locking code for KeventDescriptor by now using a reentrant lock instead.
a326100 : kevent object cannot be dictionary key
a63b2f5 : Detect the python version properly in kqueue observer.
d34c9b1 : Corrects the names of the windows observers in watchmedo and watchdog.observers and fixes the watchmedo logger.
549f5d3 : Comment update for timeout behaving as interval in polling observer
3551cc4 : Adds a Windows API ReadDirectoryChangesW-based implementation using the cleaned up API spec.
7d46b7a : Adds cleaner names for the Windows API observers.
f5387a1 : Rename w32_api.py to winapi_common.py
6250c65 : Excludes README.md from the sdist to fix issue 15 with broken symlink from README.md to README.
4e76433 : Update docs
9881117 : Update more references to watchdog.utils.collections to watchdog.utils.bricks
e356d61 : Use _Observers from the observer implementations.
c52e7f7 : Update references to watchdog.utils.collections with watchdog.utils.bricks
cd1b364 : Rename collections test file to bricks.
34fb68c : Rename collections.py to bricks.py to avoid conflicts with the python collections module.
c88f608 : Move collections/__init__.py to collections.py
a9a7193 : Fix a bad call in observers.kqueue to _queue_dir_modifications and add new observer classes.
353df3b : Remove dangerous code that uses list literals for param values
507ffaf : General cleanup and wildcard removal
fdc92c5 : Remove wildcard imports
c10ff42 : More pylint clean up
e268953 : remove the previous observer implementations
0fd8e66 : Don't write code without pylint. Caught a multitude of syntax problems and errors.
7bc390f : Update classes to use absolute_path and modified api in dirsnapshot.
b977b05 : Removes the use of real_absolute_path, prefers absolute_path instead and adds a collection class by Raymond Hettinger
d914e50 : Set the encoding of the tests.utils module to utf-8
24d76a9 : Adds an ls() utility function to tests.shell
d42662c : Produces detailed errors
7323b17 : Remove the older kqueue and polling implementations
10be2d5 : Adds documentation to watchdog.utils
1bd4913 : Update docs
88d1d10 : Remove the unnecessary modules/ directory from docs/source/
3e96555 : Organize documentation
ad0ac8f : Correct typo in documentation.
b08e66a : 'interval' to 'timeout'
f7aba47 : Lot of documentation updates
3b2e3f5 : Clean up syntax problems and add documentation in watchdog.observers.kqueue
ce87463 : Get tests to work again
7d69d53 : Add an under-construction kqueue emitter
a123a11 : Make the polling emitter block for a timeout before re-reading events.
d8b3ad0 : Intervals for emitters are actually blocking timeouts, so name them so.
ed17b65 : Removes ObservedWatch.signature property.
466c259 : Add docstring for ObservedWatch.signature property.
ab43932 : removed unnecessary doc
1ae3f9d : Use simple lock and update documentation in PollingEmitter
5cce428 : Move shell test utils to tests/shell.py module.
49826bd : Update reference to polling emitter documentation
4284586 : Update all references to polling_emitter module.
b87e7eb : More name changes.
aa9f2a9 : Update observers/__init__.py
654fe8b : More renames
d5659e4 : Renames
8864fa8 : renames
41b7b85 : Move emitters to their own module package.
110bb48 : Full watchdog.observers coverage. 76% watchdog.observers.polling_emitter coverage.
83a31de : 100% testing on watchdog.observers.api
96f785a : 95% coverage of watchdog.observers.api with some crap thread testing that needs fixing
983e4b1 : 74% coverage of watchdog.observers.api
fbd82ed : Revamped and much cleaner API.
5b7db05 : Don't let kqueue_observer and polling_observer face broken imports
7727d07 : Add watchdog.utils.rst and eventemitter and eventdispatch classes.
2833ccc : Add doc for test
0a9d290 : Clean up
1a7541e : Check order and set behavior on ordered set queue
b62d117 : 100% test coverage for watchdog.utils.collections
0fc6ec9 : Clean up test_watchdog_events.py
f2bed6a : Coverage more packages.
d1faa93 : Add pythoscope as develop dependency
0d94902 : More clean ups in events.py
73a3e1c : Remove unnecessary pass statements
d7b4722 : Remove unnecessary code from PatternMatchingEventHandler
e80e019 : Clean up test_watchdog_events.py and 100% coverage of the module completed with tests.
9d63220 : 100% test coverage on watchdog.events.
5071405 : Allow bin/python to use test eggs
41caafe : Remove doctest from FileSystemEvent which is already covered by nose and add documentation for watchdog.observers.Observer
976f6bc : Update docs
8084e4f : Update missing cd to source directory in installation instructions.
b3887fd : Move examples to docs/source/ directory
5472fdb : Add information about hacking
989bf02 : Add information about flask-sphinx-themes
0a0b989 : Move scripts to scripts/
06b271b : Add mailing list info to docs
7bc41df : Update documentation
a320207 : Update nosy to use sphinx-build from buildout bin/ directory.
abbfcb2 : Use buildout to set up entire development environment without depending on system python.
f50543e : Don't clean up pydev files
4378da2 : Cleaned up docs
616fc58 : Rename install.rst to installation.rst
87d575b : 95% coverage of watchdog.events
d0c4861 : Update test runners
af8b337 : Remove unnecessary file
adedb92 : Update readme with API change
5632895 : Update examples and code to use moved Observer class from watchdog.observers
23df354 : Reorganize docs
818b304 : Rename introduction.rst to quickstart.rst
46c4f99 : Update documentation with examples
d85f819 : Clean up docs
e34d4d9 : Clean up documentation
3cef601 : Rename polling_observer.rst
1dab824 : Moving ordered set queue to collections.
934f9d7 : Rename events.rst to watchdog.events.rst
4daa367 : Update ordered set queue docs
5056827 : Update documentation for ordered_set_queue
53277ab : Configure sphinx to use flask theme
525e41f : Add flask-sphinx-theme as a submodule
0fdb6c1 : Remove _themes directory
273ca14 : Remove flask theme
e4ba716 : Add sphinx themes from flask as a submodule.
e3ae382 : Corrected coverage is now 53% for watchdog.events.
464c9af : Coverage by itself generates better reports.
2cd3519 : Update ignore rules to exclude htmlcov/
df33077 : Update ignore rules to exclude .pythoscope/
9a59a58 : 60% coverage of events.py in test_watchdog_events.py
33d917e : Nosy runs only tests at the moment.
6bb4bb7 : Clean up events and add error handling to the event handlers.
a4fd64f : Add a match allowed and ignored patterns for path method in watchdog.utils
2ab626e : Update nosy to build documentation too in the background
d6cc592 : Add a nosy script that currently polls the directory for py changes and automatically runs tests.
b1c3d1e : Remove the run_tests.sh script. We're adding nosy.
bbcb7fc : Add nose-exclude
9b3ca12 : Clean up tests and update run_tests.sh to use tests/run_tests.py
15fa7fc : Make bootstrap.py executable to exclude it from nosetests and rename bootstrap to multi-bootstrap
ba3261b : Include watchdog.utils.echo in coverage
c11dc1d : Get tests/run_tests.py to read the nose.cfg file
27ee09f : Add coverage support to nosetests
1198b2d : Add a tests package.
f7b5e62 : More doctests for FileSystemEvent
872ad9e : Clean up docs
217a154 : Update to use tests only when platform matches.
286d459 : Add documentation and doctests to events.py
d95cbcd : Update documentation
057089c : Get nose working using a nose.cfg file in the project directory
ab017d0 : Move decorator_utils.py to utils/decorators.py
8a464a7 : Preparing docs for clean up
65c916f : Preparing for clean up
8a134e2 : Preparing for clean up
742b2fa : Apparently, 'win' in sys.platform would trigger for OS X as well.
3a93903 : Missing import
9b6bb22 : Don't try to detect moves on Windows. It doesn't have inodes.
a0b0ede : Typo
53ac882 : Revert to is_stopped as a property
07eaeb4 : Clean up win32 observers.
55c25ce : Clean up polling observer.
44e9499 : Events revamped.
602eaaa : Don't check attributes on every call to DaemonThreads methods. Set missing attributes on the class if unavailable once.
3cfa412 : Fix typo
4a94b59 : Store the handler in the event object itself. Makes life easier. =)
97289a8 : Another typo. =|
a0aec45 : Fix typo
1137c27 : Use EventQueue in win32ioc observer for testing
2734972 : Add an EventQueue implementation.
e5ce16f : Fix typo
f5e9d10 : Clean up synchronized from win32ioc_observer
7560333 : Win32 observer clean up, manual locking and using ordered set queue for testing.
646b3c8 : Update manual locking in fsevents_observer and inotify_observer
e2252a3 : Manual locking in kqueue_observer.py
0d9013b : Clean up names in kqueue_observer.py
d67c742 : Formatting in dirsnapshot.
7638846 : Use explicit locking and clean up API in PollingObserver.
e7d327f : Use an ordered set queue in polling observer.
b9d51ca : Clean up _PollingEventEmitter
ea9f3c8 : Clean up imports
040ae5c : Moved DaemonThread class to watchdog.utils
67aa314 : Fix problems with orderedsetqueue
c1251f1 : Add Lukáš Lalinský to the authors list. Used his ordered set queue impl.
8c8e9ea : Add an ordered set queue implementation
af5069a : Add README.md as a symlink
ddfa92e : Rename README.md to README
27dba32 : Delete the README symlink.
dfde6c8 : Make a symlink to README.md to shut distutils up about the missing file.
8b3e390 : Update .gitignore to exclude MANIFEST
3cd6407 : Update manifest and remove setuptools usage
992aa58 : Update MANIFEST.in to include documentation and add a setup.cfg
67dbf64 : Move watchdog/dirsnapshot.py to watchdog/utils/dirsnapshot.py
8a7f0e6 : Update docs
9d1f13e : Add basic introductory documentation
2e3786e : Update autobuild.sh patterns
2013c7e : Clean up properly in win32ioc_observer.py
a24716d : Ignore directory events in autobuild.sh
b832aa7 : Force a first build, then monitor.
6e205bd : Update autobuild.sh with vim patterns and patterns to ignore.
ec0e8bf : Update the autobuild.sh script to monitor the watchdog/ module directory as well.
4de00b6 : Move autobuild.sh to parent directory.
f6cab5d : A script that automatically builds documentation as I write it, so I don't have to keep typing
4081228 : Clean up configuration.
53236bf : Set up configuration for documentation.
5db9f8f : Keep documentation in one place
ee6666a : Move eclipse styles and basic examples to docs directory.
0a1b348 : Add initial sphinx documentation.
eee2c60 : Updated tricks.yaml example file
d9fc19b : Third level heading in README.md
eef6d9e : Add information about tricks.yaml to the README.md
c7cd89b : Use has_attribute to detect whether a threading.Event object has is_set attribute.
6bea60b : Include documentation for the watchdog.utils.echo module.
bd8d827 : Update MANIFEST.in to include watchdog.utils
33cc54e : Update AUTHORS with information about echo library
9e7f310 : Add tracing option to the log subcommand
896c305 : Add Thomas Guest as author of echo library dependency included within watchdog.
cd2e146 : Using super for inheritance.
9f22e43 : We weren't calling dispatch on the event handlers in inotify_observer. =|
c14f7b1 : Add a logger trick example.
6e1e87b : Make the logger trick echo
f3061d4 : Move echo.py into utils.
be0198b : Make utils a package.
6fa7b2c : Update attribute in echo.py
63f841f : Remove unnecessary logging statement
2a2cbed : Add an echo module that lets me trace calls
4553237 : Now to test whether the pattern matching event handler triggers on Linux. It works on OS X.
215ad37 : Clean up
d19d4e6 : Failed attempt at fixing the file movement problems on windows.
06b1d0d : Attempt detecting movement to subdirectories in the win32ioc_observer.py
a80ba16 : Reduce moved events delay on win32
63a6940 : Fix typo.
d5c5913 : Fix bad import in fsevents_observer.py
d054e5d : Add missing ) in polling_observer.py
4291163 : Make arguments explicit for get_moved_events_for function.
43c52f6 : Remove stray ) from fsevents_observer.py
b5ee176 : Major import and path clean up
8869864 : Fix broken dependencies
42b798a : Update watchmedo file descriptions.
e153510 : Fix typo and update short option for --recursive
e87a448 : Add epilog
1d945f1 : Add tip to view help in readme
f9ccab7 : Update README.md with information about executing shell commands in response to file system events.
477d5a7 : Add shell-style string interpolation command strings to shell-command subcommand
3ce7431 : Add more short subcommand option flags to watchmedo.
bd470ba : Add documentation about shell command interpolation variables.
ccc01dc : Add installation information to the README.
258edf1 : Add python2.5 as a dependency.
5335ead : Fix typo in readme
bfc6989 : Include information about watchmedo in README.md
2480dac : Clean up readme
374743d : Enlist supported platforms in the README.
949a192 : Add documentation where temporary files are handled.
d07440e : Handle quick temporary file creation and deletion. for example creates many files very quickly and deletes them.
8e7b3e3 : Handle very fast file movement in kqueue_observer.py
0a32fbf : Update ignore rules.
fb7d490 : Use Win32IOCObserver as the new default implementation on Windows.
13b75a6 : Update dependency check in setup.py
8209e9b : Drop the dependency on select26. Add select_backport as the replacement dependency. It's more up-to-date.
b225386 : Update dependency on argh to version 0.8.1
b181cde : Add a bootstrap wrapper script that sets up multiple python configurations with zc.buildout.
3a6dc22 : Add information about win32 file movement with moved dir HACK.
e0ca756 : Update AUTHORS
522ccb7 : Clean up trailing whitespaces
c8156dd : Add a win32ioc_observer implementation and add a hack to read movement of files within a moved directory.
6cb9552 : Try at firing moved events for files within a moved directory. Needs a way to wait for I/O completion.
9ecded4 : Add force observer implementation flags for the log subcommand for all the observer types.
d56b8e0 : Fire moved events for directory entries as well in win32_observer.py
4cd7433 : Make code python2.5 compliant as well
df88e7a : Updated setup.py
5fc91e6 : Remove unnecessary alias for the shell-command subcommand in watchmedo.
1778edc : Update trove classifiers with operating systems > BSD
60b1c2d : Fix a typo in watchdog/__init__.py that would have prevented FSEvents observer from loading.
39e87d3 : Add documentation to kqueue_observer.py
e28202c : Clean up when the thread stops running not when stop() is called in kqueue event emitter
d17f2b1 : Remove unnecessary documentation.
581ddd0 : OS X can use kqueue too, so mention that in the README.md
09ecd99 : Fixed descriptor bookkeeping for directory movement events and cleaned up code in kqueue_observer.py
bca37ea : Fix incorrect moved paths
88f1bbc : Add select26 to the list of dependencies in the README.md
6aef96d : Update README.md
1077a30 : Add imports to load the KqueueObserver on BSD Unix. Update README.md
e5876c9 : Add a working kqueue-based observer implementation.
210b17f : Clean up paths before accepting them in PollingObserver.schedule
8d000ba : A basic working kqueue-based observer implementation. Now to fill the event queue with these events. =)
43a7885 : Add the ability to fetch paths and stat information from dirsnapshot using inodes.
749cee4 : Clean up the _watchdog_fsevents.c source code a bit and use PyDoc_STRVAR macro for documentation.
5aca1ae : Remove commented code from kqueue_observer.py
e9f4aeb : add a skeleton of the kqueue_observer.
42de5a3 : Add a flag to the log command to force the kqueue observer implementation.
243d711 : Add a get_walker function to watchdog.utils that returns a recursive/non-recursive directory tree walker.
6da69ba : General cleanup
bc3e25e : Add select26 as a dependency for python2.5 and earlier.
ef2f75a : event.path -> event.src_path; event.new_path -> event.dest_path; marks event.path and event.new_path as deprecated.
0315339 : Bump version number to 0.4.0
cc407a7 : Add a simple function to watch paths
74bce3e : Add a debug flag to the watchmedo log subcommand that forces using the polling observer implementation.
5c17fed : Update README with api change.
b6b4ce3 : Move the recursive argument after the paths argument to keep more consistency. API changed.
2907cc8 : BREAKING API changeset. Now monitors non-recursively by default and allows monitoring recursively by setting a flag.
d7c937c : watchdog.dirsnapshot can now take non-recursive snapshots.
11f5d53 : Clean up indentation in buildout.cfg
0539941 : Update README.md with dependencies and their links.
a8a6a70 : Fixed a broken import that caused win32_observer observer imports to fail.
ebcee0d : Fix typo in copyright statement in watchmedo.bat
05ff663 : Add copyright information to the watchmedo.bat script
cdd4e00 : Add a subcommand to the watchmedo script that executes shell commands in response to file system events.
2b75582 : Update the watchmedo.bat script to import watchdog and execute the entry-point function via the python command line.
f07651f : Add an entry-point function to the watchdog/watchmedo.py module and call it from within the watchmedo script.
20e27ed : Moved watchmedo script to module watchdog/watchmedo.py
2848cd6 : No need to generate the name for the log command observer schedule
ebcc3c5 : Split command-line patterns for the log subcommand.
8ecb20c : Add --append option to generate-yaml subcommand in watchmedo and use relative paths for python-path.
af4a7f4 : Replace the call to print() with one to sys.stdout.write() in watchmedo.
e03bd10 : Flesh out the watchmedo log subcommand.
f28928d : Get watchdog.utils.filter_paths to handle None as arguments.
f96c07c : Allow setting python path from tricks.yaml.
e2dac0f : Clean up more logging.
d1c30ec : Implement the 'tricks' and 'generate-yaml' subcommands for watchmedo.
55681b6 : Clean up logging.
08ff2df : Replace the class_instance method with one that loads a class instead.
fb1f9df : Update project source code text in README.md
b8bc5ab : Add the watchmedog script to setup.py
fde5421 : Update link for issue tracker.
45bf95d : Fix escaping in README.md.
8be8dc5 : Add basic trick execution functionality and an example tricks.yaml file
f8f580a : Add a few more utility methods to watchdog.utils
7c7bace : Update the unschedule method in all observers to unschedule all watches when called without arguments.
9d6af99 : Clean up the win32 observer implementation.
255e020 : Add copyright information to the README.md
1ee84cf : Bump version to 0.3.6. Bugfix release.
267f280 : Fix a broken build.
de3e9e8 : Fix missing observers subpackage from sdist.
569a742 : Bump version to 0.3.5
16148b8 : Move all the observers into the watchdog.observers.* subpackage.
22ab85e : Remove unnecessary import from watchdog.utils.py
bcaf91b : API change. Events and handlers are not available in watchdog.* namespace now. Import them from watchdog.events.* instead
941ae91 : Clean up imports and remove old testing code from observer modules.
7adfaab : Add watchdog to the eggs list. It uses the one in this repository.
f8c1b7e : Add a PatternMatchingEventHandler
e13a77e : Remove unnecessary import from polling_observer.py
1edb42d : Update AUTHORS to include Tim Golden's watch_directory.py script.
f89de24 : Add a logging handler and add copyright information to all the source files.
0c69af6 : Move version information to watchdog/version.py
80326df : Correct indentation and clean up trailing spaces from buildout.cfg
89a2487 : Update ignore rules and add ipython to buildout.cfg
b2e673d : Add a develop line to the buildout.cfg
c17f237 : Add dependencies and get buildout to install a python interpreter which we can use.
5c654d2 : Remove executable permissions from bootstrap.py
d9de284 : watchmedo script
f7b1ea0 : Add PyYAML >= 3.09 as a dependency
8a44081 : Add a wrapper .bat script for the watchmedo command.
1754361 : Get builds to work.
e8dca9c : Update MANIFEST.in to include _watchdog_fsevents.h header file.
65b4175 : Fix download urls and bump to version 0.3.4.
c95c53f : Add download_url param to setup.
51fddea : Bump version to 0.3.2
6cf38f7 : Add dependencies list
aa4115a : Fix a bug where path and new path were switched in inotifyobserver moved event
2521ff5 : Bump version to 0.3.1. Version includes InotifyObserver
f9a22a0 : Use InotifyObserver if the pyinotify dependency is satisfied.
3c5dadd : Add a todo
4d05df4 : Add InotifyObserver implementation based on pyinotify.
7d29867 : Add an example file named simple.py to documentation.
17530ae : Add linux pyinotify dependency in setup.py
1613cbd : Clean up setup.py and update Windows installation requirements.
c343101 : Avoid recalculating flags per call.
167563d : Update copyright statement
45f0819 : Update AUTHORS file with Andrew Schaaf as a contributor.
2d16821 : Update reference to README in setup.py with README.md
3620e05 : Update MANIFEST.in to include README.md
60a8b4a : Remove usage of identifier 'file' from the win32_observer.py module.
212f51d : README tweaks, added ".md" for GitHub
2a9cc29 : Bump version to 0.3
34d1754 : Update AUTHORS
edad18f : Clean up win32_observer.py
54ba7af : Update __init__.py to use Win32Observer on Windows systems.
969f9c3 : Add a Windows API-based observer.
74f3c80 : Update information about Windows requirements
0311d82 : Clean up polling_observer.py and enable subclasses to inherit from PollingObserver.
01d9d0f : Update README with Windows API information.
8826b0c : Update ignore rules and bump version number to 0.2
e89da17 : Add TODO remark and clean up trailing spaces.
6ebdeaa : Update README to present the example first.
f606db9 : Rename the _fsevents module to _watchdog_fsevents to avoid conflicts with pyfsevents and macfsevents libraries.
8d8a779 : Rename the c files
e5b3ea7 : Add logging to watchdog module and clean up logging in fsevents_observer.py
a6cf9cb : Fix the FSEvents Observer KeyError by backtracking to watched path keys which exist in the snapshots dictionary.
eaf2b13 : Use absolute paths stripped of the right most path separator only in fsevents_observer.py
f958add : Catch and handle keyboard interrupts properly. Add missing stopped threading event to polling observer.
3d1cfa3 : Import as instead of assignment in watchdog/__init__.py
8ab4bc0 : Fix incorrect order of arguments in the usage example in the README.
b1defbe : Clean up README
3c31697 : Update readme with documentation.
7d2fb16 : Correct the capitalization of the name of the project.
d7c00db : Update version number and development status.
47f1f6e : Add ignore rules to exclude files generated by eclipse.
7983705 : Clean up
0672746 : Remove the temporary logger.py module.
6260fef : Expose the observers through a common API
043a5e7 : Clean up logging in events.py
9485686 : Update temporary logger
d7680a8 : Finally, a working FSEvents-based observer implementation.
e0e646b : Update the polling_observer.py API to use names to mark watchers.
1ab40a9 : Encapsulate logging style in a module.
41f945a : Remove unnecessary void pointer casts
7d9252c : Move common macros and type declarations to a separate header file called _fsevents.h
3a7af77 : Add eclipse formatter rules
5f7c9b4 : Clean up formatting of _fsevents.c and add more documentation.
ebd1f14 : Clean up the _fsevents.c file a bit.
ce0cf10 : Update test code with the name change
9dc3814 : Rename Observer in polling_observer.py to PollingObserver
805b286 : Clean up documentation.
41a1fe3 : Clean up documentation of the Observer.run() method in polling_observer.py
beb4231 : Update logging code to use common configuration
b73c803 : Clean up API and update polling_observer/Observer to support it.
2c2d42e : Add documentation to classes in events.py
0827e04 : Add documentation to dirsnapshot
5e510af : Remove unnecessary code from the watchdog/__init__.py module.
cfb2dec : Update remove rule method to clean up properly.
02cd84f : Clean up polling_observer.py code
46de60a : Synchronize the add_rule and remove_rule methods and fix a missing import error in events.py by adding import logging
514d92a : Rename polling_watcher.py to polling_observer.py
be54622 : Rename PollingWatcher.py to polling_watcher.py
501db2d : Move the FileSystemEventHandler class to events.py
ae471a9 : Better documentation for the PollingProducer class
c9abda2 : Fix events code.
5be79d7 : Add documentation to the classes in the PollingWatcher module.
2ae3eae : Add documentation to the FileSystemEventHandler class and flesh out methods with logging.
bb893b0 : Dispatch events to the right methods
19f4867 : Fix the remove_rule method
3222904 : Update polling watcher with events. Pending calling EventHandler methods.
d9b1969 : Fix typo enlisting deleted files in the modified files list.
dbfdc13 : Detect changes to directories as well as files now.
6b4179e : First implementation of a PollingWatcher.
efeb090 : Add documentation to dirsnapshot.py
eb62c8e : Add a __sub__ implementation to DirectorySnapshot
a703e23 : Rename the property decorator in decorator_utils.py to propertyx
b5c93d5 : Add a directory snapshotting module.
7d372bd : Replace the RETURN_NULL_IF_DUPLICATE_STREAM macro with a cleaner generic replacement.
34d0bc7 : Fix the synchronize decorator
f616e09 : Don't hide global variables in macros.
3ec6bab : Fix mismatching parentheses
25019a2 : Better coding practices for if conditional expressions.
3b50b9b : Macro documentation fix
17bdd95 : Fix a few macros that would otherwise bomb and use them in the code to clean up code.
e14e535 : Add appropriate string qualifiers in _fsevents.c
f468afc : Add missing documentation for RETURN_NULL_IF_NULL and FSEventStreamInfo.thread_state
b9bb1ff : Better name for function that converts python string list to cfmutablearray of cfstrings and clean up unnecessary casts.
40a53c2 : Cleanup the pyfsevents_schedule function into 4 individual functions to improve readability.
3624cad : Move _fsevents.c into repo.
fbe5b96 : Update _fsevents.c to choose the correct init method based on the PY_MAJOR_VERSION macro.
fa8b15b : Update comments.
bc1f050 : Make num_events const unsigned int and i unsigned int in _fsevents.c
8c64f00 : Move strings to the top of the _fsevents.c file.
86991c1 : Flesh out the event_callback_handler function.
a18b4f9 : Update ignore rules to exclude .so files
f09a781 : Building now.
ba05d38 : Add parts of the _fsevents.c module with documentation.
bdf7ffa : Add setup.py, watchdog package, and decorator_utils.py
de33ecf : Clean up the README to make it more consistent.
8c0f58e : Update ignore rules to exclude buildout-generated directories.
591ca34 : Make bootstrap.py executable.
3852436 : Add zc.buildout bootstrap.py script
572b3f5 : Update setup.py, add an empty watchdog package, and update ignore rules to exclude the build/ directory.
46fbb57 : Add base MANIFEST.in and setup.py
5101a30 : Drop the extensions from the AUTHORS.txt and LICENSE.txt files
cc0c6e3 : Remove trailing whitespace from AUTHORS.txt and buildout.cfg.
d5f5d4b : Update README to use Markdown syntax.
9ec7738 : Rename README.txt to README.
dabe63b : Add README.txt
613c5ec : Add LICENSE.txt
5033549 : Add AUTHORS.txt
6137716 : Add ignore rules to exclude certain types of files from the repository.
162be23 : Add basic buildout.cfg

+- Project: platform/external/pytorch

b5ea807cbd : Python soong target does not support "apex_available"
a8d6afb511 : Disabling amp context when invoking compiler (#138659)
f31b8bbc5b : [MPS] Fix sliced cast (#138535)
848e7ac42a : [SDPA-CUDNN] Make CuDNN Attention Opt in (#138587)
885c823759 : Update doc copyrights to 2024 (#138650)
8c3ed97baa : Update cpuinfo submodule (#138600)
e3a10289ad : Include platform/external/pytorch
70cf2bbc0b : Add link to torch.compile the missing manual in troubleshooting (#137369)
cde6b382ff : Don't try to load cufile (#138539)
4076a738b0 : [Cherry-Pick] Use cuda 12.4 pytorch_extra_install_requirements as default (#138526)
a97c15174b : update getting started xpu (#138090)
9a0dfa64f5 : Initial empty repository
32f585d934 : [Release only] use triton 3.1.x from pypi (#137895)
417a0763a7 : [split build] move periodic split builds into own concurrency group (#135510) (#137265)
119e7344d9 : [RELEASE-ONLY CHANGES] Fix dependency on filesystem on Linux (#137242)
783a6a424c : [MPS] Add regression test for `fft.fftfreq` (#137215)
5375201dff : [MPS] Add missing dispatch to rshift.Tensor (#137212)
1de132ec9e : [MPS] Fix 5D+ reductions over negative dimensions (#137211)
0b1b609ed7 : [NCCL] Don't override `waitUntilInitialized`'s setting of `comm->initialized_` (#137210)
0b45af9c10 : Fix addmm silent correctness on aarch64 (#137208)
1a0b166ba2 : [ONNX] Add assertion nodes to ignoring list (#137214)
3a541ef8c2 : Clarify that `libtorch` API is C++17 compatible (#137206)
f8c4c252ca : [Release only] Set WITH_PUSH when WITH_PUSH_ROCM is set (#137177)
8af31b2e49 : [RELEASE-ONLY Change] Push ROCm images on RC (#137148)
8a71edcca5 : [RELEASE-ONLY CHANGES] Disable slow workflows (#136805)
058d3de7b9 : [dynamo] Do not treat user defined nn module attributes static for dynamic shape infra (#137025)
17d25897b2 : [FlexAttention] Fix output layout (#135882) (#136905)
70298e91f9 : [Cherry-pick][DSD] Fix distributed state dict full_state_dict option hang during set_state_dict (#135725) and Fix loading uneven full tensor into sharded state dict (#136365) (#136903)
69ed7c7093 : [SymmetricMemory] improve multicast initialization/fallback logic (#136894)
d80f521ee2 : fix requirements.txt installation failure issue on Windows (#136893)
57717c8768 : Fix lint (#137052)
550ed97a89 : [cuDNN][SDPA] cherrypick Support `attn_bias` in cuDNN (#130482) (#136885)
051df20ac2 : [ROCm] Update to AOTriton 0.7b (Cherry-picked) (#135869)
bc421d456e : [RELEASE ONLY CHANGES] Revert XNNPACK Update (#136522)
aa574ab7e3 : SDPA regression fix to work around high-precision by default (#136536)
24bd87d5dd : [Docs] fix inconsistent docs in conv1d, conv2d, and conv3d (#136813)
6101aafa34 : [Update] Update note for Getting Started with PyTorch on Intel GPUs (#136731)
396413f05c : Fix ROCm skip decorator for test_ddp_tp and multiprocess UTs (#136161) (#136801)
c25781c5d2 : Update current maintainers (#136769)
ecd330669e : Constraint setuptools to 72.1.0 or older in requirements.txt (#136729)
1715708183 : Revert "Trace fwd graph under no_grad mode #134872" (#136734)
2e2c00f74c : Make test_skip_data_serialization regex more flexible (#136710)
cbe476a5a7 : Disable iOS workflow (#136706)
4b030d47b1 : [RELEASE-ONLY CHANGES] Don't push to https://ghcr.io/ (#136703)
9b80ddecd6 : Fix hardcoded ROCm paths in `Caffe2Targets.cmake` (#136700)
6e86793f75 : [ROCm] upgrade ROCm CI builds to py3.10 (#134108) (#136696)
7c550fea95 : [ROCm][CI] upgrade CI to ROCm 6.2 (#132555) (#136467)
7a00785c23 : [ROCm] Cherry-pick unit test fixes to release/2.5 (#136557)
4e6a99e5f3 : fix stride compare failed when size value equal to one in ForeachUtils.h (#136426)
dd73223b90 : [ROCm] [BUGFIX] Re-enable rocm-specific tuning parameters v2 (#133852) (#136139)
c5e5254a79 : Fix xpu memory stats error (#135818) (#136420)
ffed7b71e8 : Fix dynamo benchmark skip logic for cpu device (#135193) (#135793)
becdf8ae4f : [Inductor] Increase multiplier to 3 for Inductor AMP FP16 benchmark correctness check (#135932) (#136262)
fb276d2652 : [ONNX] Fix numpy method to return the correct type (#136162) (#136203)
b7de7932fd : Update torch-xpu-ops pin (ATen XPU implementation) (#135833)
1954439802 : Update document for autocast on CPU (#136082)
3920988456 : [Cherry-pick] [ONNX] Update fake mode usage in onnx docs (#135512) and Drop final None values as inputs for nodes in exporter graph (#135520) (#136005)
1db2a6562c : [DCP] Fixes the stateless optimizer issue of distributed state_dict (… (#136000)
4b5bf41476 : [inductor] [cpp] fix the input contiguous check in max-autotune (#135561)
a889c85498 : [Inductor] simplify indexing_exprs in LoopBody._init_with_copy (#135574) (#135935)
813e06461c : [ONNX] Fix symbolic values and numpy implementation (#135786) (#135868)
9887030485 : Revert "[fx] Bypass custom __setattr__ in Node.__init__ (#135079)" (#… (#135625)
9e315fef22 : Revert "[Release only] Temporary disable triton xpu build" (#136276)
6b14e6cfdd : [Release only] Temporary disable triton xpu build (#136206)
828d686e1c : [ONNX] Fix scaled_dot_product_attention with float scale (#135710)
612fc7c447 : [ONNX] Improves documentation of ONNX exporter (#135526)
cea562006e : [Cherry-Pick] Bump triton pin and release version, revert temporary changes to build from pin (#135613)
ba27502501 : Use upload-artifact@v4.4.0 for create_release.yml (#135534)
4a3dabd67f : [RELEASE-ONLY CHANGES] Temp changes to build triton from pin for first RC (#135517)
e130696270 : [RELEASE-ONLY CHANGES] Branch Cut for Release 2.5 (#135506)
b7eb7256fb : docs: `torch.nn.utils.rnn.pack_padded_sequence`: docs improve (#135417)
c1ae78be92 : [inductor] calibration inductor windows uts (18/N) (#135449)
defb515306 : [NJT]Add permute ops support (#135336)
31c4e0d37d : [inductor] Cleanup analysis done at lowering time (#135412)
53290ca00b : [inductor] Refactor BaseSchedulerNode.__init__ (#135400)
16f5155992 : [inductor] Fast path for extract_read_writes without tracing (#135377)
37144be03d : [inductor] Remove ReadWrites.op_counts (#135306)
3bdc54ed18 : [inductor] Refactor LoopBody.memory_usage (#135286)
2196f32475 : [22/N] Fix clang-tidy warnings in jit (#135319)
cfc227ad43 : [reland][dtensor] move DTensor to public namespace (#134203)
20cab91a12 : [dynamo] Remove skip from jit freeze tests (#135281)
a6fae2e811 : Use BRGEMM for Half flash attention forward kernel (#131879)
042f2f7746 : [ONNX] Re-raise the exception if the dynamic shapes cannot be refined (#135418)
fd494dd426 : Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401)
8334cb2fb9 : remove commented out breakpoints (#135363)
e72ed4717e : [Dynamo] Fix Huggingface PretrainedConfig get non const attr (#135413)
3bebc09be9 : [FlexAttention] Align the matmul tensorcore usage (#135168)
a2db22e6bb : [inductor] Catch BrokenProcessPool and print a more helpful message. (#135120)
eac5e12548 : [inductor] Move LoopBody to its own file (#135257)
18479c5f70 : [Doc] update max-autotune for CPU (#134986)
f7c0c06692 : Add oneDNN BRGEMM support on CPU (#131878)
b53d97c7be : [Intel GPU] Add XPU memory-related APIs (#129919)
6c1da66407 : [Reland] Refactor caching device allocator utils (#130923)
d7c97e7245 : [inductor][cpp][gemm] cache blocking config for dynamic shapes (#133538)
be9f4ffe88 : [inductor][cpp][gemm] enable dynamic M for k-slicing (#133447)
692faa9bc6 : [inductor][cpp][gemm] reduce memory alloc overhead by allocating local acc once per thread (#135277)
32f3af72b7 : [ONNX] Support FakeTensor in ONNXProgram (#135399)
ebab5c85c4 : [FlexAttention] Skip very small block size unit tests on H100 due to Triton bug (#135393)
3d734d837b : [ONNX] Handle mixed sequence inputs properly (#135378)
c92227c41a : [quant][pt2e] fix placeholder typo and related quantization tests (#135379)
e6a0221fc6 : [Inductor] Optionally allow padding on non-GPU devices (#135280)
a6b9d444fb : [ONNX] Refactor exporter errors (#135180)
d42b0c8f22 : Add release matrix for 2.5 (#135383)
941d094dd1 : [Dynamo][DTensor] Fixes SymNodeVariable() is not a constant error in Compiled DDP + TP unit test (#135315)
b1a934741e : Change test_constant_prop_preserve_metadata (#135268)
0c661f3e1a : [Split Build] Refactor split build binary builds into their own workflows and move split build binary builds to periodic (#134624)
2c7e314803 : [Inductor][CPP] Fix the issue of view dtype (#135301)
ead4407f57 : [inductor] Fix loop split optimization (#135303)
2f5b40c099 : [aoti test] Disable FP8 funz dtypes in fp8 runtime check test (#135373)
993b5647ab : [export] fix placeholder name collision tests by removing map call (#135366)
2ab26806f1 : Require tlparse for failing tests in test_structured_trace.py (#135376)
b1612569f6 : [BE] Clarify defaulting behavior in optimizer (#135384)
dc0e818738 : [FR] Automatically infer a common filename prefix (#135158)
06e414d7fe : [FR] Make trace_dir a required argument (#135157)
a681260caf : Revert "[ONNX] Refactor exporter errors (#135180)"
95e976a63f : [dynamo] recursively skip frames when Dynamo cache limit is hit (#135144)
306ac44eaa : [ez][TD] Fix request for issue body returns None (#135389)
a7643baceb : Revert expectFailureIf condition on tests with torch.compile on Windows (#134759)
a4030e37be : [dynamo] reland map/zip iterator related changes (#135074)
22e1fb6faa : [test][easy] Add debug utils for cpu select algorithm test (#135038)
2a4890e315 : [ONNX] Clean up the missed lines from previous PRs (#135368)
3ce433aef2 : [TCPStore] use wait counters (#135283)
7f2d20e687 : Run all autograd node post hooks (#134728)
32fd29c1ea : [ONNX] Properly handle Attributes in traceable functions (#135367)
5eebd9315a : [ONNX] Refactor exporter errors (#135180)
a15aabc975 : Add MaskedTensor passthrough: unfold, F.Unfold, F.Fold, stack (#125262)
b143426db3 : [Inductor] Use argument names as the key for the `constants` dict and the `signature` dict (#135170)
13ba0a2e5c : Run bypassed graph compile outside the except block to avoid chaining of exceptions (#135175)
8520ce5f78 : Fix incorrect trace of post-accumulate grad hook on tensor with zero dims (#135226)
196748d491 : [elastic] support local_addr across all rendezvous impls (#135262)
177e4f4218 : remove _check call on item() for torch.istft (#135234)
3988b3468b : [aoti][easy] remove breakpoint() in wrapper.py (#134807)
04118d8617 : [export] Record the global torch version in serialization. (#135243)
24482e5c68 : [torch][fx] Set maximum warning count during fx.Graph.lint (#135069)
c0ec599f27 : Update submodule ideep to include aarch64 change (#134897)
7074de43c0 : Porting to GCC 15 (#135188)
771dcce11d : [AOTI][Tooling][6/n] Fix long dtype input tensors calling `mean()` in `aoti_torch_print_tensor_handle` (#135072)
de74aafff4 : error on exporting ScriptModule (#135302)
ad29a2c0dc : Add Inductor config for default stride behavior (#135238)
3a9e33dca8 : [torchelastic] Don't do signal handling when off the main thread (#135088)
a086882d72 : [inductor][triton] mark workspace args as mutated (#134648)
84ae6b7d6b : AOTDispatcher: limit cases when we detach() graph inputs to non-leaves (#134193)
60a097a071 : [CD] Update binary_linux_test.sh to include calling builder smoke test (#133869)
13bae39e22 : [inductor] [cpp] improve cache blocking for is_dynamic_M (#131306)
4ef6c05f65 : [inductor][cpp][gemm] fix autotune runtime error from linear_binary fusion (#135275)
d6b9bd3e60 : Also handle compiler collective when input variable doesn't exist on all ranks (#135147)
d0591f4658 : Ignore fresh unbacked when doing recursive make_fx inside HOPs (#135053)
b5dea061c8 : check compilation status before query cudnn version in conv (#135332)
041960a1ce : [Dynamo] Automatically in-graph traceable tensor subclass ctors (#135151)
67c7924ea1 : [inductor] Fix gen_transposed_tile_load_store (#135307)
217ba7b2ab : [Docs] Update FileCheck doc (#135199)
758d515d98 : [Inductor][CPP] Select tiling factor for lower precision data types (#133830)
60d98b4cfb : Update torch-xpu-ops pin (ATen XPU implementation) (#135300)
590a3e9f8a : [export][training ir migration] quantized_decomposed.quantize_per_tensor decomposition (#134525)
764ee6e3f9 : [FlexAttention] Specify padding_value for boundary checked loads (#134573)
67f98a99a4 : [DeviceMesh][Easy] Make RuntimeError a bit more descriptive by including the actual world_size (#135271)
e020a8755a : [Fix][FR][ez] Remove debugging logs (#135308)
7ffb3b201c : [inductor] Remove LoopBody.reads,writes,other (#135256)
f946bf88c4 : [inductor] Skip retracing an existing LoopBody (#135235)
66da3b3b2a : [fx] Bypass custom __setattr__ in Node.__init__ (#135079)
41e653456e : [RDP] Fix "No module named 'libfb'" (#135244)
e40a0a9359 : Add randomness checking for sdpa vmap (#135176)
c05a7adb36 : [inductor][debug] fix draw_buffers (#135266)
5f57be7571 : [Distributed] Change function call in test to non-deprecated to eliminate warning (#134938)
29d72c1100 : [inductor] check intel compiler minimal version (#135209)
3b1a334c0f : [Inductor][CPP] Avoid mistaken wgt tensor delete (#135100)
07689a38bf : [Inductor] Fix AOT weight alignment issue on CPU (#135205)
06a7dc21c1 : Remove dead expect_rational (#135105)
d9a18173fa : Report qualname of exception type rather than <class 'RuntimeError'> (#135146)
d8543e3162 : Include exception type qualname when rewrapping InternalTorchDynamoError (#135145)
ad01fc194d : Consolidate raise and rewrap raise error branches (#135148)
e162414963 : add instrumentation of CCA stats for reserved and allocated memory size (#135231)
9e5a797771 : Improve test_public_bindings import module error reporting (#135258)
b46a1b9e2d : Use Python 3.9 on all libtorch jobs (#135245)
9688014820 : aarch64: extend matmul heuristic checks to all neoverse platforms (#134548)
8f6e73f068 : [ONNX] Enable experimental exporter logic to dynamo_export and support refine dynamic_shapes (#134976)
1e57ef08fa : [AOTI] Support MKLDNN qconv ops in cpp wrapper (#134795)
614b86d602 : [AOTI] Support MKLDNN qlinear ops in cpp wrapper (#134783)
0b96dfb736 : [AOTI] Support MKLDNN conv ops in cpp wrapper (#134475)
62b221d5cc : Add Percentages to Function Events (#135155)
66dd4577b1 : Track base of FunctionalTensor in inference mode. (#135141)
cc28634172 : [Submodule] Bump pybind11 to v2.13.5 (#135202)
c83cdf068b : [DTensor] Fix view op replicating on tensor dim when the size of the tensor dim = 1 (#135054)
28ccfba248 : [ONNX] Delete ONNXProgramSerializer (#135261)
b2386bdca1 : [debug] Add helper to run cProfile on a function (#135084)
bdfc8d9f96 : [fx] Don't use generators in map_aggregate (#135082)
70779dded8 : [fx] Compile time optimization in Node.__update_args_kwargs (#135076)
ea231300d1 : [inductor] Improve compile time regression from MemoryDep.normalize (#135070)
8f66995459 : Revert "Support rolling over a percentage of workflows (#134816)"
144fde4fd2 : [MPS] Add support for autocast in MPS (#99272)
43f4947d44 : fix fake tensor tolist implementation (#135131)
65e1c34061 : [rfc] scuba for flight recorder (#134794)
830247c355 : [Intel Triton] Update Intel Triton to release/2.5.0 (#134074)
4262755b5a : [cond] fix typo in cond codegen (#134708)
3825607144 : Add torch._logging.scribe (#135224)
3c8f71ff93 : [cuDNN][64-bit indexing] cuDNN v9.3+ supports non-batch-splittable convolutions with > 2**31 elements (#134890)
fc890b55b5 : Support rolling over a percentage of workflows (#134816)
058a69d91a : [fbcode][dynamo] Turn on guard_nn_modules using justknobs_check (#134928)
6c5920d515 : Tune int8 AMX WoQ micro-kernel for CPU (#134832)
116fd474da : [export] Expand coverage to more copied sym ops for unflattener. (#135119)
a5d70cf545 : [PyTorch] Add isfinite to BFloat16-math.h (#135052)
7fe819d917 : [PyTorch] Fix -Wshadow -Werror build in BFloat16-inl.h (#135031)
f63571060c : Revert "Use actions/upload-artifact@v4.4.0 for rest of workflows (#135264)"
38fead8f7c : [hop] preserve metadata in re-tracing hop subgraph by running with interpreter (#135159)
24a223c49d : Run inductor micro benchmark on x86 metal runner (#135042)
e4920a1364 : [Traceable FSDP2][Dynamo] allow tracing through auto_functionalized HOP (#135169)
bc5ecf83d7 : [training ir migration] Fix quantization tests (#135184)
e55c0f59e5 : Revert "[Reland] Refactor caching device allocator utils (#130923)"
a4cf9653ee : Revert "Remove Caffe2 code from tool scripts (#134941)"
9c0b03020b : Use actions/upload-artifact@v4.4.0 for rest of workflows (#135264)
034717a029 : [ROCm] remove triton-rocm commit pin and merge pins with triton.txt (#133438)
9c38b00999 : [export] Add ability to run eagerly on UnflattenedModule (#133996)
8efe547046 : Use actions/upload-artifact@v4.4.0 for triton builds (#135263)
82d00acfee : Allow cross-device copies for cpu scalars in refs (#135140)
098431a29d : Update Resize.cpp with new device type (#135117)
be660ea2d3 : [PT2] Directly set meta.val in group_batch_fusion_aten (#135078)
52c7c89ea4 : [Inductor][CPP] Leverage full bits for BF16/FP16 vectorization (#126502)
1efd341d15 : [fake_tensor] Move unrecognized_type NotImplemented before ConstProp (#135033)
a096f2899d : Add torch.serialization.skip_data context manager (#134504)
dbeb8a1691 : Render log filepaths that are not anchored in torch's directory in a reasonable way (#135165)
b1f72e2984 : Gradient scaler for DTensor (#132816)
bb3c2408f4 : [inductor][test] in test_unbacked_symints, replace inductor's skipCUDAIf with common device type's skipcudaif (#133936)
2c99f17a32 : Implement VariableTracker.python_type() (#134215)
0043dcd79e : Switch torch pt2e xnnpack tests to use export_for_training (#134788)
2e2fb668fa : Upgrade expecttest to 0.2.1 (#135136)
9d24f945ba : [CI] Use larger instance for building triton whl (#135201)
ecbd715363 : [Intel GPU][Windows] Fix overriding default CMAKE_CXX_FLAGS (#135093)
58f2477a26 : [Dynamo] Support builtin function frozenset (#134563)
43dcb4bb61 : Revise CPU vectorization ISA support API (#135075)
50d1e37079 : [AOTI] Fix an unbacked symint retrieve bug (#134670)
b99ef1a02e : Update torch-xpu-ops pin (ATen XPU implementation) (#135185)
8a5c8e5db9 : Update unbacked symints in masked_select more precisely (#134899)
c7328dff7f : Enhance the stability of the complex divide code (#134647)
749dc6ceda : [inductor] [cpp] use_local_acc if template_buffer_has_other_users (#135081)
eaeae0ac95 : [c10d] Change collective to take in a list of tensors so it work fully for all collectives (#135049)
5a0e7a408f : restore CSE'd node metadata in runtime asserts pass (#134516)
81a8624296 : [Intel GPU] Customized XPU behaviour in indexing, group norm (#134453)
731fd3172a : [inductor] [cpp] generate reindexer for each epilogue_node (#134984)
9d705605dd : Fix decomp behaviour in export training IR (#134801)
05feb6e4ed : [Inductor] support masked vectorization for the tail_loop for dynamic shapes (#131745)
7b280c31ba : [export] dynamic_shapes serialization, load/dump (#134718)
f2a7228aed : [executorch hash update] update the pinned executorch hash (#135162)
8fb1281db9 : [Traceable FSDP2] Skip _backward_prefetch under compile, and rely on compiler pass to have prefetching (#135163)
a7a53b796b : [Intel GPU]device guard codegen for XPU (#133980)
30b98940b8 : Fix typo in comment (#135111)
724faac260 : [FSDP] casting input args with dataclass(frozen=True) (#135067)
04e11c7eed : Update current scripts used for setting up s390x runners (#129866)
a3e0d4bf07 : [FlexAttention] Fix mismatched backward strides for eager impl (#135152)
27d86f93fe : Remove redundant code (#134955)
32f45f01a9 : [dynamo] Retire CompileProfiler (#135133)
4a661e089a : [FR] Add version based logic to FR script and make printed traces filterable (#135154)
105ac2418c : Fix binary builds artifact download (#135139)
560f449d8f : Fix: use clone_preserve_strides in auto_functionalized_v2 (#135142)
956da79bda : [CUDA][AMP] Fix autocast_dtype (#133938)
977a909250 : [CI] Build pytorch wheel with Torch XPU Operators on Windows (#133151)
b3ef0c99f5 : [PP] Fix zero bubble composability with DP (#134052)
43c9b4e0e6 : Fix unintentional deduplication of returned tensors (#134726)
00a8666708 : [ONNX] Support output_names in dynamic_axes when dynamo=True (#135134)
4f70b3cfae : [CUDA][complex][TF32] Update `test_noncontiguous_samples` tolerances for `complex64` (#134526)
359077fa43 : [export] Fix indentation (#135128)
9810ce9ca7 : [PP] Go back to export instead of _export (#134299)
804852c1f9 : [dynamo] Search for _torchdynamo_inline only for functions (#135130)
13a4a0c60d : [Inductor] Apply loop split optimization in codegen_node (#132389)
87842cc658 : [dynamo][super] Corner case where the class is not present in the __mro__ (#135129)
d9ae92cd6e : [Dynamo] Support for proxying frozen dataclasses (#134846)
ed06772e35 : [TorchElastic] add warning when users try to pass a "use_libuv" argument to create_c10d_store (#135062)
fb1c580892 : [BE][optim] Make pyright recognize exported symbols (#135043)
2276940f8c : Make Dynamo inline through torch._library.custom_ops.autograd (#135066)
4e6df83d19 : [PT] Add out variant for avg_pool1d and adaptive_avg_pool1d (#135051)
a8611da86f : [dynamo][backend match] Optimize backend match for common case (#135121)
09a339fc06 : [Flex Attention] update __getitem__ without tree_map_only to support compile (#134627)
741d52c69f : Revert "Add support for 32KB multi_tensor_apply kernel arguments (#134373)"
dd7cd182ab : [AIInfra][DCP] All gather keys checkpoint utils bug fix (#135045)
eb0fd17bc4 : [Profiler] Fix Raw Metadata Iterator (#135096)
c88c19c6de : Revert "restore CSE'd node metadata in runtime asserts pass (#134516)"
873abfc18e : [inductor] fix compile time regression due the (disabled) loop ordering after fusion (#135071)
d7b57c4d63 : Fix tensor.data access under inference_mode and compile (#134878)
0d193a0adf : Add ExecuTorch warning to mobile_optimizer (#134697)
193c547461 : [inductor] Refactor simplify erase_nodes() (#134822)
2ddf3ed707 : [inductor] Allow cudagraphs with unused CPU inputs (#134749)
cff1158200 : [inductor] Pass to fix device on index(..., [iota]) (#134748)
7858045491 : Revert "Fix set_unbacked_bindings when list of Tensors is returned (#133585)"
8759ed2ac5 : Revert "Compute and do renamings even when ignoring fresh unbacked symbols (#134407)"
fc07e6bf56 : Revert "Ignore fresh unbacked when doing recursive make_fx inside HOPs (#135053)"
c8ab9b06a2 : Redesign custom op functionalization for better re-inplace (#134409)
195ac85fb6 : [Profiler] Allow kwinputs to be non-string values (#134893)
60dfe1b35e : Fix lint after Bump actions/download-artifact update (#135109)
8bfd4916d6 : fast path for sympy gcd in floordiv (#134880)
67208f08bd : [CD] Enable XPU nightly build on Windows (#134312)
6c5669903f : Fix Invalid NaN comparison due to infinity-zero multiply on latest sympy (#135044)
a178a053ad : Ignore fresh unbacked when doing recursive make_fx inside HOPs (#135053)
46cb2af7d8 : Compute and do renamings even when ignoring fresh unbacked symbols (#134407)
5690f003a6 : C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED and C10_DIAGNOST should be used in pairs (#135004)
dcf05fcb14 : Fix stale job using non-existent ARC runner (#134863)
a8467c17c3 : Remove specific lazy initialization of PrivateUse1 (#135002)
80a6d60829 : Moving _run_autocast_outofplace to basic class named TestAutocast to reduce redundancy (#134460)
c2ff9fe042 : [fp8 rowwise] Retune the tile heuristics to increase perf (#134781)
eec8fa038e : [fp8 rowwise] Support transposing operands in order to change output layout (#134773)
679b8fe426 : Update generate-xnnpack-wrappers.py parsing to handle build identifier (#134724)
1dfb105239 : restore CSE'd node metadata in runtime asserts pass (#134516)
9f00317997 : rationalize STATIC vs. None (#134877)
9809080b9e : [Reland] Refactor caching device allocator utils (#130923)
6448d351db : [inductor] clean up cpp_builder code. (#134909)
2c9b4d2052 : [executorch hash update] update the pinned executorch hash (#135077)
6b05aafc57 : Add specializations for VecMaskLoad and VecMaskCast (#126501)
ffd1e214df : Back out "[FSDP2] Set `ctx.set_materialize_grads(False)` for post-backward (#133498)" (#135059)
c818ecd169 : Remove Caffe2 code from tool scripts (#134941)
9e6f4f3f77 : [dynamo] Use __eq__ for backend match (#135039)
367a78495f : Bump actions/download-artifact from 2 to 4.1.7 in /.github/workflows (#135068)
362ecd9817 : [inductor] Skip the sub-process pool until it's ready (#133508)
7600e9b36f : [ONNX] Use the stable APIs in onnxscript and sync the latest logic (#134782)
982e27e532 : [halide-backend] Update CI pin (#130258)
ae3aa8ff73 : [AOTI][Tooling][5/n] Refactor the debug printer call to a level lower (#134789)
ea89f01281 : Remove unused comment (#135034)
175485097a : [EASY] Typofix (#135022)
15c25c4580 : Fix dim mismatch logic automatic dynamic not working with compiler collectives (#135025)
4ebf6b04a8 : Turn on expanded index path for Half on CPU (#133553)
e000cf0ad9 : Fix license metadata in setup.py (#129219)
45743019cf : [PT2][Optimus] Skip meta update on symbolic shape (#134975)
9ffcca7060 : [Profiler] Handle Tensor Sizes/Strides Parsing Error (#134862)
f05b716d6d : Add validator to ensure runner determinator script is kept in sync (#134800)
469429b959 : Refactor runner determinator (#134796)
c044deb9ce : Revert "c10d/logging: add C10D_LOCK_GUARD (#134131)"
2fd36086bc : Revert "Add torch.serialization.skip_data context manager (#134504)"
85fa019697 : [Docs] Fix call to deprecated function (#135037)
14c8ef5198 : autolabel aotinductor->export (#135040)
c40e622966 : [inductor] add openmp config for intel compiler on Linux. (#134973)
272f3b9fe1 : [FlexAttention] Update tolerance for failing test (#135035)
e7731b3f8a : [TorchElastic] make torch elastic not have to realize TCPStore backend type and rely on c10d to decide which backend to use (#134882)
71383dd3da : [MPS] Fix batchnorm_2d for channels last (#134618)
758d787901 : Added complex support for `torch.logsumexp` (#133187)
6c3767452d : Move auto functionalize tests in their own test file (#134834)
2e0b114c06 : add a new Gauge API with an empty backend to PyTorch core (#134883)
7804c089c6 : [BE] Update numpy version to 2.0.2 (#134875)
1b9f51bd88 : [ONNX] Bump onnxscript version in CI; temporarily remove op test (#133748)
27677ead7c : Revert "[ONNX] Bump onnxscript version in CI; temporarily remove op test (#133748)"
a258844a32 : Properly handle empty CPUINFO variable (#134916)
f927bcb934 : Revert "[Inductor] Apply loop split optimization in codegen_node (#132389)"
6eed63c8b9 : [ONNX] Bump onnxscript version in CI; temporarily remove op test (#133748)
33ba952e31 : [subclasses] Do not fakeTensor const prop subclass args (#134855)
2a49296d75 : Fix set_unbacked_bindings when list of Tensors is returned (#133585)
2443507acc : Update torch-xpu-ops pin (ATen XPU implementation) (#134983)
39935e0fde : Update cpuinfo submodule (#134891)
23a2161ad1 : Changed addmv to be a decomposition and not a fallback (#134823)
9856bc50a2 : Switch nanmedian to not cuda synchronize (#134819)
6fce1faa10 : change multinomial to use async asserts instead of a synchronization (#134818)
db193d1e29 : add msg to _assert_async (#134813)
d14fe3ffed : [Inductor][CPP] Turns on inline_inbuilt_nn_modules for CPP GEMM template testing (#132487)
a00fad0177 : Add specializations for vectorized conversion between float and BF16/FP16 (#126500)
45f11094b6 : [ONNX] Delete `op_level_debug` from `torch.onnx.ExportOptions` (#134961)
4c1dd13ba3 : [BE] better type annotation for `torch.types` (#129559)
76710d4f95 : Corrected docstring of ``solve_triangular`` (#129766)
ee03530fd9 : Add a test to avoid decorator based regression for cprofile traces (#133086)
16de25b1dc : fix tensor_repr(at::Tensor) (#134762) (#134764)
3daca187aa : [Inductor] Allow customizing the padding format (#133939)
de3a641476 : [executorch hash update] update the pinned executorch hash (#134914)
3cb5d25122 : [Inductor] Apply loop split optimization in codegen_node (#132389)
c140fa1426 : Reorg cache code to make it simpler (#134911)
0cbcef12bd : Stop adding useless prefix to error message here, you're pushing the important info off the screen. (#133108)
208442ea18 : Don't setup try-except handler when Dynamo compiling (#133239)
ea01aec8b1 : Move FunctionSchema implementations to cpp file (#133856)
2dadc2c8fc : Log fx graph cache bypass reasons (#134792)
1595e755af : [Reland] [Torchgen] Pass mutable to cpp.valuetype_type (#134549)
b1a00b7b6d : Abate `-Wsign-compare` warning spam in `Indexing.cu` (#134805)
d03f767cae : Check function declarations of Vulkan code (#134550)
c25b64a057 : expose host_emptyCache to python, fix a bug in freeing cudaHostRegist… (#134919)
caa04e0cae : [ET] codegen: bool array as array ref (#134886)
29b7852dc1 : drop gil in couple places (leads to deadlocks) (#134910)
7239b8a4f1 : Clean up RemoteCache classes (#134032)
590d96be64 : [inductor] move test_fuse_large_params to slow test. (#134900)
f4641ca481 : [Inductor] Remove VecChecker and fallback non-supported Vec op to Scalar impl with a for loop (#134569)
16f119e62a : Update compiled optimizer tests for tensor betas (#134169)
4e71418566 : [dynamo] rewrite addcmul_ to remove graph break (#134168)
3fb4c6bc38 : [dynamo] Rewrite foreach pow to broadcast scalar argument (#134167)
471c33f007 : [dynamo] Rewrite foreach_lerp to avoid aten item call (#134166)
eed0d76682 : [dynamo][itertools] refactor `itertools.islice` to use polyfill (#133876)
ec660c383e : [dynamo] reduce overhead for `PolyfilledFunctionVariable.call_function` (#134842)
d9cc693719 : [jit] Change argument names (#134828)
136badae64 : [inductor] preload icx built in math libs (#134870)
090d9cf410 : [Dynamo][autograd.Function][vmap] support torch._C._are_functorch_transforms_active (#134889)
34b85d301f : [executorch hash update] update the pinned executorch hash (#134894)
64fad53b50 : [Inductor] Support passing module map parameter to Triton make_ir API (#134774)
aef5da50f4 : Cleanup unused `pytorch.version` (#134381)
86e03a64e1 : Revert "[Inductor] Allow customizing the padding format (#133939)"
f95085fd91 : [BE][MPS] Prefer xfail to skip (#134858)
050ad925f3 : [benchmark] Add to torchbench relative path search (#134871)
a854c3a25e : [dynamo] refactor `builtins.enumerate` to use polyfill (#133894)
ebbdeeede1 : [dynamo][itertools] refactor `itertools.chain` and `itertools.chain.from_iterable` to use polyfills (#133864)
5dad6a5a84 : [ONNX][DORT] Lazy-import `onnxruntime` (#134662)
2384f77d76 : [XPU] Fix Windows XPU build (#134276)
e688b78791 : [Dynamo][autograd.Function] Trace fwd graph under no_grad mode (#134872)
8b258b3b14 : [Inductor] Allow customizing the padding format (#133939)
a1ba8e61d1 : Revert "[ROCm] remove triton-rocm commit pin and merge pins with triton.txt (#133438)"
f6398eb0fa : dynamic shapes for combo_kernel/foreach_kernel (#134477)
db17a9898d : regenerate ci workflows for binary builds with new g4dn runners (#133404)
98b813d0d4 : Enable cudagraphs in cpp wrapper (#133885)
bdfa94b787 : [RFC] Make fr trace script a console scripts (#134729)
a0d0c6b7e6 : Used `torch.equal` in `test_foreach_copy_with_multi_dtypes` (#134861)
1993a2aa9e : [FR] Make pg_name unique, show P2P collective status and fix bugs when running the script as command (#134780)
15f5a4858b : [inductor] enable Intel Compiler(icx-cl) for inductor windows (#134772)
9e0ddc0e14 : [inductor] don't allow triton config pre_hook (#134633)
e21d7b77ce : Update `ForeachfuncInfo.sample_inputs_func` to yield scalars & scalarlists that are more friendly to test_meta (#134552)
577a93514f : [dynamo][dynamic][heuristic] Mark tuple getitem integers as static (#134734)
08184aa85c : Add support for 32KB multi_tensor_apply kernel arguments (#134373)
a19a7524f6 : [export] Make sure getitem replacement are synced with module call graph. (#134830)
f5b0caee71 : Rewrite `unsafe_remove_auto_functionalized_pass` using `decompose_auto_functionalized` (#134831)
351ba3e67c : Revert "[c10d] Remove Option for ProcessGroup and Expose backend Options to reflect the correct code structure (#132931)"
994438040c : Improvements for associative_scan - combine_mode (#133012)
c6ecf57dd2 : Revert "[dynamo] simplify implementation for `functools.reduce` (#133778)"
7a85c488a8 : Revert "[dynamo] simplify implementation for `builtins.sum` (#133779)"
1ad08c7a5b : Revert "[dynamo][itertools] refactor `itertools.chain` and `itertools.chain.from_iterable` to use polyfills (#133864)"
8aa44e14cf : Revert "[dynamo] refactor `builtins.enumerate` to use polyfill (#133894)"
10c31e96df : Revert "[dynamo][itertools] refactor `itertools.islice` to use polyfill (#133876)"
d261a1751a : [HOP] fix export x inline_inbuilt_nn_modules (#133731)
932c4ca5a0 : make make_fx collective test single threaded (#134775)
c07e566baf : [CUDA][P2P] Check device capability in `requires_cuda_p2p_access` (#134523)
92f282ca52 : Enable batch matmul for result sizes > 2**32 where the tensor can be split along batch axis (#133430)
50efbb9f1e : [DeviceMesh][Test] Add a unit test for get_local_rank for flattened mesh (#134603)
0f8bec4399 : [dynamo] mark_static_nn_module (#134713)
a5630239ad : [dynamo] Improve minifier error message when fp64 not supported (#134737)
1011e0ae98 : Generalize devices specific UTs for dynamo (#130714)
7a694f6683 : [justknobs] Override __bool__ method (#134799)
75b86b1554 : [executorch hash update] update the pinned executorch hash (#134736)
5e8bf29148 : [ROCm] remove triton-rocm commit pin and merge pins with triton.txt (#133438)
1f1e2eeb9d : [inductor] Install `tlparse` for test\dynamo\test_structured_trace.py UTs. (#134806)
0d5f978795 : add basic nn modules diff time benchmarks (#134658)
a645a18d2e : [reland][dtensor][MTPG] make sharding prop lru cache not shared among threads (#134509)
27ffa67984 : Support __class__ attr for tuple and list variables (#134099)
cf11fc0dcb : dynamo: Only log if we've disabled eval_frame once. (#134529)
8b68912dfc : Correctly detect "Rate limit exceeded" error (#134785)
3402a5d865 : fix windows xpu build issue (#133845)
3775fc982d : [Inductor][CPP] Fix Index name error (#134645)
d13ce2e2b5 : [c10d] release gil lock during eager init (#134779)
71ff168dbb : pytorch: llvm_codegen: prefix JIT generated functions with 8B of data so jitted code can be called from ASAN+UBSAN on LLVM17 (llvm/llvm-project#65253) (#134572)
496e57283d : add add_loop benchmarks (#134652)
65864d0134 : [c10d] Remove Option for ProcessGroup and Expose backend Options to reflect the correct code structure (#132931)
8b4c487581 : Fix AOTInductor compilation on ROCM (#134522)
1e92d7b688 : [inductor] move loop ordering after fusion (#126254)
416a7894fe : [Windows][XPU] Disable Kineto PTI on Windows only (#134620)
7d12e6dceb : [dynamo][itertools] refactor `itertools.islice` to use polyfill (#133876)
a2566adfb6 : [dynamo] refactor `builtins.enumerate` to use polyfill (#133894)
1b70366957 : [dynamo][itertools] refactor `itertools.chain` and `itertools.chain.from_iterable` to use polyfills (#133864)
eaa449fbf0 : [dynamo] simplify implementation for `builtins.sum` (#133779)
b5f1ffa7ab : [dynamo] simplify implementation for `functools.reduce` (#133778)
e09324e7da : [dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)
b977abd5de : [Inductor] Fix error checking for scaled_mm lowering (#134765)
6180574771 : Move py 3.8->3.9 pull, trunk, inductor, periodic CI tests (#133624)
202e5cc87d : [inductor] Fix error in debug_str_extra (#134747)
43e1df64f8 : register all entry_point backends on first attempt (#132546)
5470fcd5b9 : [5/N] Reconcile barrier and NaN checker (#134707)
d91b49dbaa : expandable_segments <-> other allocator options (#134338)
3fc6e47d42 : [AOTI] Fix cosmetic indentation issue in cuda cpp wrapper codegen for DeferredCudaKernelLine/GridLine (#134705)
5573c17877 : [BE][Ez]: Update ruff to 0.6.3 (#134769)
ce96146623 : [PT2] Fix node metadata setting in group_batch_fusion_aten (#134543)
348d02a983 : Changed masked out rows logsumexp to be -inf and not zero (#134650)
36a6516290 : [export] use single FQN for param_buffer_mapping (#134500)
d9d95dc55e : [4/N] Test NaN checker against broadcast (#134701)
ab646cd805 : Revert "[reland][dtensor][MTPG] make sharding prop lru cache not shared among threads (#134509)"
26aea277f7 : [3/N] Set correct device to CUDA guards (#134357)
d503217ea4 : [inductor] calibration inductor windows uts (15/N) (#134586)
9953f55f4c : [2/N] Add flag to control which rank should perform NaN check (#134345)
387d3fc296 : [AOTI] Switch benchmarking to use export non-strict mode (#130977)
0dbc72887b : [CPU][flash attention] make the stride of output align with input (#134656)
4fcd15a667 : Fix test_sgd_weight_decay_xpu accuracy error (#134744)
594162f7ab : [dynamo] Support reading attributes from pybind objects (#134630)
92e38a476f : preserve aten::to device in export training (#134622)
092349dcdd : Never CSE aten.empty in the partitioner (#134703)
70853b792a : [dynamo][itertools] support `itertools.tee` (#133771)
9e806c1a60 : [dynamo] simplify implementation for `os.fspath` (#133801)
d01a7a9faa : [dynamo] Graph break on FSDP flat_param inconsistent tensor and grad dtype (#134614)
fb35d1e01f : [reland][dynamo][exceptions] Support raise from None (#134621)
2bf622685d : [dynamo][dicts] Support hasattr on dicts (#134590)
2446dead35 : [dynamo][exceptions] Use exception subclass whenever possible (#134610)
cfb642bb6b : [DTensor] Extend implicit replication to replicate DTensor for foreach ops so model doesn't have to be fully tp-ed when using 2D (#134551)
3645634f3c : [1/N] Move NaN check onto NCCL stream (#134300)
578b8d75e5 : [2nd try][Traceable FSDP2] Allow tracing through FSDP2 impl in trace_rules.py (#134539)
834d8b0965 : [Inductor][mkldnn] Bug fix: incorrect codegen arg order for qconv (#134579)
b0a6d9ad27 : [DTensor] Add pointwise ops strategy for aten.isinf, aten.isneginf, aten.isposinf (#134699)
da9e61ef70 : Get accumulate dtype for Intel GPU (#134465)
94db935749 : Add torch.serialization.skip_data context manager (#134504)
297b42012d : [PyTorch] Use pinned memory for zero_cuda_out (#134712)
a32255481b : [caffe2][hipify] remove un-used flag from `pybind_utils.h` (#134404)
4655eb3ee2 : Uses MemPoolContext to route allocations from CUDACachingAllocator (#134685)
4b4ba7ab06 : [NJT] Support NJT SDPA + meta-device flop counting (#134289)
17e9c2d1e7 : Add oneDNN support for Half LSTM on CPU (#132607)
41e36e2b46 : Reflect check_labels status as a signal (#134711)
4f9c68454a : [inductor]Let output or input_as_strided match exact strides (#130956)
4811dc3de9 : Revert "[dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)"
f65df5edae : Revert "[dynamo][itertools] support `itertools.tee` (#133771)"
eaec9e80b8 : Revert "[dynamo] simplify implementation for `os.fspath` (#133801)"
76f975948e : [inductor] Cleanup generate_node_schedule (#134306)
cccb121d4e : [Inductor] add inductor config: masked_vec (#134566)
c5f114747e : fix flakiness in update_hint_benchmark.py (#134649)
f0fceed432 : Revert "[dynamo][exceptions] Use exception subclass whenever possible (#134610)"
67d7040fce : Revert "[dynamo][dicts] Support hasattr on dicts (#134590)"
40cebde3bc : Revert "[reland][dynamo][exceptions] Support raise from None (#134621)"
c35d1f7b3a : Revert "[dynamo] Graph break on FSDP flat_param inconsistent tensor and grad dtype (#134614)"
25531eb735 : Revert "[2nd try][Traceable FSDP2] Allow tracing through FSDP2 impl in trace_rules.py (#134539)"
cbf5ba1e97 : Revert "[1/N] Move NaN check onto NCCL stream (#134300)"
33d0c11b26 : Revert "[2/N] Add flag to control which rank should perform NaN check (#134345)"
43dc17fd00 : Revert "[3/N] Set correct device to CUDA guards (#134357)"
503c0dd923 : Revert "Add MaskedTensor support to *_like API (#128637)"
1285443994 : Revert "Add torch.serialization.skip_data context manager (#134504)"
e7711d6c7d : [MPS] Fix SDP training (#134719)
ca03a14cf7 : hang dim hint constants off Dim (#134702)
ee1b680438 : [Doc] Fix rendering of the unicode characters (#134695)
7a554e96b4 : [AOTI][Tooling] Follow up to print location of saved file path for `torch.pickle_save()` (#134651)
202600bc23 : Add torch.serialization.skip_data context manager (#134504)
f997b2b8e6 : Revert "Add MaskedTensor passthrough: unfold, F.Unfold, F.Fold, stack (#125262)"
6dd3f81aaf : Add export_for_training as public API (#134677)
a7933acd5a : Improve custom ops aliasing error message (#134688)
dd443f418a : Improve opcheck docs. (#134692)
afc76c6f2d : [3/N] Set correct device to CUDA guards (#134357)
5ff97e79ee : Skip test_mutable_custom_op_fixed_layout2 on ROCM (#134690)
2fe7e332c7 : [2/N] Add flag to control which rank should perform NaN check (#134345)
26ec06e45d : [amd][lowering] hipify shim v2 headers (#134689)
7b3da5f297 : Revert "[dynamo] Cache _dynamo.disable results (#134272)"
20b62fed21 : Create processes in parallel in mp.start_processes for forkserver (#134629)
f685018ea9 : Add MaskedTensor passthrough: unfold, F.Unfold, F.Fold, stack (#125262)
b6e51711a0 : Add MaskedTensor support to *_like API (#128637)
4c16797e71 : [c10d FR analyzer] Output a meaningful debug report for users (#134528)
de35d3062f : Runtime Estimator for estimating GPU compute time (#134243)
cae817c862 : [ET][CodeGen] Remove TORCH_API from NativeFunctions.h declarations (#134245)
b07d0a22f5 : [hop] require hops to override __call__. (#134352)
66c33d5989 : Revert "[2/N] Add flag to control which rank should perform NaN check (#134345)"
23e26b84af : Revert "[3/N] Set correct device to CUDA guards (#134357)"
3b40b07efb : Update PyTorch for XNNPACK 87ee0b4 (#134518)
042b733ddd : [dynamo][freezing] Set is_static_type to false after marking an input static (#134653)
79c88679b9 : Fix docstring for torch.signal.windows.nuttall (#134704)
aa31e7019a : [FSDP] Made `clip_grad_norm_` norm compute order deterministic (#134673)
47ba47a81f : [compiled autograd] error instead of deadlock on reentrant autograd (#134530)
c352b6aaaf : [compiled autograd][cpp node] point c++ custom autograd functions tracing error to google doc (#134514)
ba5aec88c6 : [reland][dtensor][MTPG] make sharding prop lru cache not shared among threads (#134509)
310eb6d8c6 : [AOTI] Fix test_aoti_inference CPU build issue (#134675)
633a9a3b13 : add back sum_floordiv benchmark. (#134635)
b8859dc4b8 : [PyTorch Pin Memory Allocator] Optimize the free list implementation and add lock sharding (#134154)
40de63be09 : parameterized test_graph_optims and test_graph_scaling_fused_optimizers (#133749)
c7338f457c : [DCP] Fixes the BC issue where the traversal doesn't support versions before 2.4 (#134158)
13d40f6fc5 : Revert "hang dim hint constants off Dim (#134484)"
2c88a923a7 : Revert "Refactor caching device allocator utils (#130923)"
d52aff3e73 : Revert "Adding entry-point based support for out-of-tree rendezvous plugins (#132633)"
85d9946001 : [CI] change conda to miniforge for XPU images (#134455)
208b922327 : [Intel GPU] Remove special dispatch logic for xpu in adaptive_avg_pooling (#132217)
e6bf1710ff : [Inductor][Refactor] Rename CPU benchmark test configs (#134639)
c142af7209 : hang dim hint constants off Dim (#134484)
3e42f21eee : Bucketize fix to include number and tensor inputs (#133652)
bb22132c8d : [aotd] Make effects op registry WeakKeyDictionary (#134470)
97c8a0739e : [Dynamo] Support inspect.signature.Parameter getattr (#134636)
26e392132d : [2nd try][Traceable FSDP2] Allow tracing through FSDP2 impl in trace_rules.py (#134539)
8693322ef0 : [Dynamo][autograd.Function] Support mark_non_differentiable (#134087)
d01415409b : [PGNCCL] Improve logic to infer device for barrier (#134617)
e4a5958ab5 : [dynamo] Graph break on FSDP flat_param inconsistent tensor and grad dtype (#134614)
e96dc3665a : [reland][dynamo][exceptions] Support raise from None (#134621)
c566f2465f : [dynamo][dicts] Support hasattr on dicts (#134590)
880e3d18a4 : [dynamo][exceptions] Use exception subclass whenever possible (#134610)
bf7db4e4f9 : [Inductor UT] Generalize inductor UT for intel GPU (#133309)
2ba60a1618 : fix torch.prod vectorized path for bool (#128009)
89929d9abc : [AOTI][Tooling][4/n] Add `torch.save()` for individual intermediate tensor (#133871)
ca77f0a986 : [executorch hash update] update the pinned executorch hash (#133386)
e3308d835d : [audio hash update] update the pinned audio hash (#134632)
bb4dfe90b8 : [Reland] [1/N] Fix clang-tidy warnings in inductor (#134544)
71d0eff6e7 : Back out "[pytorch][PR] [export] Schematize nn_module_stack serialization" (#134628)
ec3f52dd27 : [21/N] Fix clang-tidy warnings in jit (#134537)
5beb859e74 : [BE] no need to print stream in comm abort (#134362)
f33bcbe5fd : c10d/logging: add C10D_LOCK_GUARD (#134131)
c45ca8092d : Refactor caching device allocator utils (#130923)
d96254631e : [CD] Fix docker builds by installing setuptools after python build (#134631)
2b95da7ef4 : allow conv_bn mixed dtype folding in post-grad (#133968)
f7467c3b95 : using new device-agnostic api instead of old api like torch.cpu or torch.cuda (#134448)
0c7856973b : [export] enumerate unsupported sympy.Functions (#134271) (#134598)
3b33f26513 : Add device daemon (#131814)
d6091c8726 : Add compile time instruction count metric (#133834)
ef0f5919c7 : [ROCm][Inductor][CK] Fix codegen after ck signature change (#134483)
5ead965026 : [export] don't duck size for DIM.AUTO (#134486)
30094bedbc : Revert "[dynamo][dicts] Support hasattr on dicts (#134590)"
d966d91e37 : [FlexAttention] Fix Sparse block multiple to ceildiv instead for floor div (#134538)
f5c67917d3 : [FlexAttention] Remove unused code (#134511)
856a8410f2 : [FlexAttention] Create new variables for the subgraphs (#134507)
41e512a4cd : [EZ] Restore `test_unicode_comments` (#134589)
1ba39ec1d0 : Add test case test_arange_length_with_float32_dtype (#134415)
b58a0c3c4d : [split build] fix distributed problems (#134502)
289486d007 : Move attention kernels back from fake_impls to meta_registrations (#134288)
39ca96398b : Update label_to_label with oncall: pt2 hierarchy. (#134582)
b567ca0f51 : Remove unused imported names in python files (#134438)
d23c0150f3 : [dynamo][dicts] Support hasattr on dicts (#134590)
16b8146c9e : Exclude test_transformers and unit tests which require recent GPU arch (#132895)
44dadf2506 : [Fix] Check name when registering privateuse1 backend (#134071)
f754c0ae1b : [easy] rm duplicate definition for inductor in TORCH_LOGS documentation (#134480)
fe6d0e3a04 : Do not compute unnecessary `tensor!=0` for bool tensors in `count_nonzero` (#134254)
b744ed6816 : Add a cpu_dispatch_key parameter to the cpu_fallback function (#134321)
adf401f822 : Links to contributors' GitHub accounts (#133787)
534f43ddce : [Doc] Fix rendering of the unicode characters (#134597)
3ef4c27ab3 : Update pt2e numeric debugger to use node.meta["custom"] field (#134040)
38b96d3399 : Do not use `<filesystem>` on Linux (#134494) (#134604)
ed494603c7 : [inductor] calibration inductor windows uts (16/N) (#134587)
b094972051 : [inductor] calibration inductor windows uts (17/N) (#134588)
9d0e0e6f1d : [inductor] calibration inductor windows uts (14/N) (#134585)
05ac7cd760 : [MPS] Remove superfluous label/link (#134090)
d5aefadb17 : [CD] Fix docker builds by installing setuptools (#134595)
a4b44dd2ef : [AOTI] Introduce DeferredCudaGridLine for cuda cpp wrapper (#129268)
5fd670e0ef : [ROCM] Properly disable Flash Attention/Efficient Attention with environment variables (#133866)
5b392d22c6 : Revert "fix stuck floordiv (#134150)"
0159ebb654 : [dtensor] add test for local_map decorator (#127752)
8de0d7690c : Use newer `toAccumulateType` signature in `Normalization.cpp` (#134540)
68b1a09422 : Integrate device agnostic APIs in FSDP library [1/n] (#134337)
13049cd6e5 : [aotinductor][UserDefinedTritonKernel] fix case with non-constexpr params declared after autotuned params (#134520)
13114da4ef : [3/N] Set correct device to CUDA guards (#134357)
be7752ead3 : [2/N] Add flag to control which rank should perform NaN check (#134345)
9dc4bd7466 : Create a JustknobConfig for use in config (#134161)
94caba4899 : [1/N] Move NaN check onto NCCL stream (#134300)
c582602245 : Update partitioner's is_fusible heuristic to respect triton kernels (#134491)
761cf91e3c : [DeviceMesh] Add get_all_submeshes in _MeshEnv (#134275)
d028b810fe : Fix flaky GroupNorm ModuleInfo test (#133899)
2033934ff8 : Clarify error messages for NEWOBJ and BUILD in weights_only unpickler (#134346)
2ac710e667 : Make torch.serialization.set_default_mmap_options usable as a context manager (#134371)
0fa0ac80e4 : Do not use `<filesystem>` on Linux (#134494)
3418708abf : Revert "[FlexAttention] Create new variables for the subgraphs (#134507)"
87a3f664e1 : Revert "[FlexAttention] Remove unused code (#134511)"
3e10a1eb5a : Revert "[FlexAttention] Fix Sparse block multiple to ceildiv instead for floor div (#134538)"
c7cbcdad76 : Update partitioner's is_fusible heuristic to respect auto_functionalized (#134490)
b84e8c6848 : Move module_tracker to logging for confused hierarchy (#134467) (#134501)
6a79d4afcd : [ROCm] Prevent accidental enablement of efficient attention. (#134531)
dde5974b13 : Implementation for rng ops on hpu and xpu (#133068)
e0ddbffbc3 : [Release Only] Disable flaky failing tests in release. Pin optree. Pin sympy (#134489)
ef8236f12b : Provide default value None for the attn_bias parameter (#133981) (#133986)
a34320a6f2 : [FlexAttention] Fix Sparse block multiple to ceildiv instead for floor div (#134538)
767c47d3c0 : [FlexAttention] Remove unused code (#134511)
4d0a44d34a : [FlexAttention] Create new variables for the subgraphs (#134507)
f480385277 : Remove explicit Amz2023 reference from jobs (#134355)
0916d72e99 : Fix the warning for cat operators with same qparams (#133999)
3515090006 : Fix TypeError when itering NoneType in instantiate_device_type_tests() (#134457)
136b19b062 : Adding entry-point based support for out-of-tree rendezvous plugins (#132633)
4a18fcf7af : [inductor] calibration inductor windows uts (12/N) (#134428)
0b81f700aa : [PT2/Profiler] Add Context Info to Torch-Compiled Regions (#132765)
de57a6e806 : Back out "[dynamo][exception] Support raise exception from None (#134028)" (#134513)
02b0b524b5 : [inductor] Turn on UT: test_randint_int64_mod (#134510)
d0147290d8 : [BE][Easy][dynamo] ensure `trace_rules.MOD_INLINELIST` in alphabetical order (#134246)
2ee201a7d0 : [CMake] Remove BUILDING_WITH_TORCH_LIBS (#134434)
bdfc1d3987 : Remove unnecessary expect_true in split_with_sizes (#133439)
c7ca89a11a : Improve print stack/locals printing in comptime (#133651)
58771315d3 : Unify lowerings for auto_functionalized and triton_kernel_wrapper_functional (#134466)
141a9c7204 : Revert "[export] enumerate unsupported sympy.Functions (#134271)"
4df10a6340 : [FlexAttention] Fix bug when checking whether to return LSE (#134495)
b98d33c155 : [inductor] calibration inductor windows uts (13/N) (#134429)
74341e1150 : [dynamo] simplify implementation for `os.fspath` (#133801)
1dbd3476de : [dynamo][itertools] support `itertools.tee` (#133771)
43bbd781f2 : Back out "[Traceable FSDPS] Allow tracing through FSDP2 impl in trace_rules.py (#133532)" (#134478)
46ecc673ae : [ROCm] Prevent accidental enablement of efficient attention. (#133331)
0be6584203 : [Inductor UT] Refine test case `test_codegen_upcast_to_fp32_upcast` to pass on XPU. (#134474)
1565940114 : [MPS] Add `test/test_nn.py` to test suite (#134184)
79b7fff188 : Fix docstring for torch.signal.windows.nuttall (#134512)
ddd71e3479 : [export] enumerate unsupported sympy.Functions (#134271)
55236d0cb7 : TestForeach::test_parity: Remove check for error message text (#134251)
ef8c474fcf : Add the fast path for bfloat16 lgamma (#134344)
3c5883e550 : Fix test_parity xfail for sigmoid (#134253)
a23dae22d5 : Update AC pass use_reentrant message (#134472)
314f033e65 : Use ephemeral runners for windows nightly builds (#134463) (#134496)
9c1f78e018 : [CD] Use ephemeral arm64 runners for nightly and docker builds (#134473) (#134493)
dbef2b05b4 : [dynamo] Cache _dynamo.disable results (#134272)
3675fc52ae : Use ephemeral runners for linux nightly builds (#134367) (#134492)
920c023664 : docker: Use miniforge, install from pip (#134497)
28a4db84f2 : [ARM] Fix infinite recursion in unwind (#134387)
900c5083ed : [inductor] calibration inductor windows uts (9/N) (#134425)
68624cf089 : [dynamo][guards] De-dupe DUPLICATE_INPUT guard (#134354)
af82dc816a : Fix lint failures (#134488)
2588b5e51a : Move module_tracker to logging for confused hierarchy (#134467)
a0e062c6f1 : Add mean.dtype_out (#133506)
3541e450af : Support larger page sizes with `use_mmap_weights` (#131000)
3322ee236d : [aoti] remove c_shim_version v1 logic (#134283)
1d231ff8ba : [HOO] add hints_wrapper to support passing context hints (#132860)
1ccc8f0200 : [dynamo][super] Improve handling of getattr on super (#134039)
1dd4b9221b : [inductor] enable clang for Windows inductor (#134444)
0a3c064c12 : [inductor] fix _maybe_subprocess_run not support Windows path (#134365)
78128cbdd8 : [CD] Use ephemeral arm64 runners for nightly and docker builds (#134473)
0f5b052dba : [inductor] calibration inductor windows uts (11/N) (#134427)
73604eed0c : [20/N] Fix clang-tidy warnings in jit (#133399)
019b80855f : [inductor] calibration inductor windows uts (10/N) (#134426)
7ff576072f : [inductor] calibration inductor windows uts (8/N) (#134424)
adcce538b7 : Revert "Allow mp.start_processes to create processes in parallel (#133707)"
d0ac5d55ba : Memory optimization for DSD for TorchTune LoRA (#134025)
fc61aae70f : Remove color in CI (#133517)
42955e04f1 : Revert "[dynamo] Cache _dynamo.disable results (#134272)"
e94bdc7876 : Revert "[dynamo][guards] De-dupe DUPLICATE_INPUT guard (#134354)"
a6fac0e969 : Use ephemeral runners for windows nightly builds (#134463)
b417e32da2 : [CD] fix xpu nightly wheel test env (#134395) (#134464)
c507f402f1 : Add linux arm64 ephemeral runners (#134469)
17e8a51ff2 : Revert "[inductor]Let output or input_as_strided match exact strides (#130956)"
1c4780e69a : Revert "c10d/logging: add C10D_LOCK_GUARD (#134131)"
50e90d7203 : Revert "[dynamo] simplify implementation for `functools.reduce` (#133778)"
472c7cf962 : Revert "[dynamo] simplify implementation for `builtins.sum` (#133779)"
3d7f3f6a55 : Revert "[dynamo][itertools] support `itertools.tee` (#133771)"
e1fc4362fb : Revert "[dynamo] simplify implementation for `os.fspath` (#133801)"
bb67ff2ba7 : Migrate Windows bin jobs to runner determinator (#134231)
27d97b9649 : Remove unnecessary test skip (#134250)
be96ccf77c : Revert "[CD] fix xpu nightly wheel test env (#134395)" (#134461)
96738c9d75 : [CD] fix xpu nightly wheel test env (#134395)
1ff226d88c : [inductor] support vec for atomic add (#131314)
bf5c7bf06d : [FR] Fix the bug in FR script (e.g., checking all ranks dump check) (#134383)
92c4771853 : fix stuck floordiv (#134150)
c5f6b72041 : [dynamo] simplify implementation for `os.fspath` (#133801)
38f97ec8e3 : [pt2] Add meta for poisson (#134103)
ed86ac2f25 : [BE] typing for decorators - fx/_compatibility (#134054)
7b6b10417d : Remove ansi escape chars in assertExpectedInline and add options to skip comments and to skip empty lines (#134248)
2ec149cd3e : [inductor] fix test_functional_call_sequential_params_and_buffers expectation on Windows (#134394)
7af38eb98b : Fix unexpected inference_mode interaction with torch.autograd.functional.jacobian (#130307)
dc1959e6a7 : [inductor] calibration inductor windows uts (7/N) (#134420)
97fd087cdb : [inductor] calibration inductor windows uts (6/N) (#134419)
b5dd60fa75 : Fix namespace issues with qnnpack (#134336)
7940f2428f : [torch/package_importer] add compatibility name mapping (#134376)
816061843a : [Distributed/Profiler] Fix input/output dimension overflow (#134360)
e93ca12c88 : [CUDNN][SDPA] Fix unsupported trivial stride-1 transpose case (#134031)
08d111250a : [ez][c10d] change ERROR to WARNING (#134349)
4648848696 : Revert "[ROCm] remove triton-rocm commit pin and merge pins with triton.txt (#133438)"
e5563f7ad7 : Revert "[dtensor][MTPG] make sharding prop lru cache not shared among threads (#134294)"
268092db83 : [DeviceMesh] Allow _flatten() to take in an optional mesh_dim_name (#134048)
326db8af4c : Replace sympy Min/Max with reimplementations (#133319)
8db8ac700d : line by line logging (#134298)
907c32faac : [inductor] calibration inductor windows uts (4/N) (#134401)
74ef74be36 : [inductor] calibration inductor windows uts (3/N) (#134400)
d33d68e326 : [Profiler] Add test to make sure FunctionEvents are processed lazily (#134359)
af4c87953e : [inductor] calibration inductor windows uts (5/N) (#134402)
94f92fbd88 : Use integer divison in arange length calculation when start/end/step are integral (#134296)
1a0d00f1f4 : [traced-graph][sparse] enable to_dense() for compressed (#133371)
050aa67e41 : [traced-graph][sparse] fix restrictive assert for sparse add (#134037)
90fb83749e : [inductor] fix test torch package working with trace on windows (#134397)
9cd53b3212 : Add Arm copyright line to LICENSE (#133982)
50d5aa8c10 : Enable optimized dynamic quantization on aarch64 (#126687)
f71c3d265a : [ROCm] remove triton-rocm commit pin and merge pins with triton.txt (#133438)
6245d5b87b : [CI] Update XPU ci test python version to 3.9 (#134214)
a63efee5cd : [inductor]Let output or input_as_strided match exact strides (#130956)
cdb9df5efe : [dynamo][guards] De-dupe DUPLICATE_INPUT guard (#134354)
d433a603af : [BE] use torch.amp.autocast instead of torch.cuda.amp.autocast (#134291)
a1061009c9 : [PT2] use statically_known_true in slice_noop (#134270)
ff77c67d16 : Use ephemeral runners for linux nightly builds (#134367)
592038351e : [Release only] Use amazon linux 2 runners for CI (#134350)
ff7d94c67e : [compiled autograd] fix saved tensor hook firing count (#134361)
929de1d0d4 : Re-enable skipped compiled autograd eager tests (#134163)
ad8bdfae1e : add compiled_autograd to programmatic set_logs API (#134162)
1431663693 : [compiled autograd] finish classifying tests (#134290)
0b228a2af8 : [compiled autograd] match eager behavior for ctx.saved_variables (#134286)
6cc57c64b2 : [compiled autograd] match eager behavior for post acc grad hooks (#134205)
d7a25e1d8c : [compiled autograd] add config patching for certain eager tests (#134200)
0d9208a398 : [compiled autograd] match eager behavior for inplace detached activations (#134186)
ccafc93be5 : [AOTI][CPU] Make int8 qlinear work (#134368)
eb15b1a016 : [dtensor][MTPG] make sharding prop lru cache not shared among threads (#134294)
1034f456ef : [inductor] fix munge_exc not support windows path (#134348)
0694918aeb : [export] Temporarily bypass torch_fn in partitioner (#134292)
f260cc2edf : Enable DTensor sharding propagation of `native_layer_norm_backward` to more fully accommodate optional args (#133502)
8d3c6494ae : [Inductor][FlexAttention] Rename IS_LAST_BLOCK to CHECK_BLOCK_BOUNDARY (#134378)
5ad759ca33 : [inductor] calibration inductor windows uts (2/N) (#134358)
5ae9c01794 : [DTensor] Add naive replicate strategy for aten._linalg_eigh.default (#134284)
962e1f6ca7 : [DTensor] Add aten.any.default,dim,out to linear_reduction_strategy (#134206)
5d39b14b68 : [DeviceMesh] Add DeviceMesh slicing support for flatten mesh dim (#133839)
195abdb85c : ppc64le: VSX Support for Inductor (#132746)
519342962d : Pass process group info into NcclWork (#134269)
e2a87fb1e9 : [ONNX] Update exporter logic (#134304)
a1d0b4d568 : Add option to skip functional passes in the pattern matcher's replacement graph (#134364)
2c8fc3f4ce : [inductor] Move imports to top of file in generated code (#134195)
1aa0e35a04 : [inductor] Remove dead code in multi_kernel.py (#134194)
4ff1a4dd0f : [export] support set_grad_enabled hop in dynamo to enable re-tracing (#134281)
9dc47f5e62 : [FlexAttention]Fix how we realize input buffers (#134351)
4c28a0eb0b : c10d/logging: add C10D_LOCK_GUARD (#134131)
e52e93e8fd : Update scale-config files with linux.24xlarge.ephemeral (#134380)
54ff320519 : [export] refactor ExportGraphSignature construction (#134059)
aa9f4cc733 : [Inductor][CPP] Support vectorization of remainder (#129849)
286f2dba9f : [2/N refactor NCCLPG error logs][c10d] Make msg in monitoring thread in NCCLPG more accurate and simpler (#134036)
2cfc2da527 : [export] Make move_to_device_pass function public (#134263)
c638a40a93 : [Caffe2] Remove unused AVX512 code (#133160)
1f19ccb5b3 : [Inductor/Triton] Customize triton codegen to optionally preserve input dtype on tl.load (#132406)
8ff3a5be1b : [export] basic auto dynamic shapes (#133620)
f5a2a22dc4 : [export] Fix unflattener to respect nn.Parameter requires_grad (#134353)
eaa2c0e009 : Improves error message when passing wrong tensor type to torch.nn.functional.one_hot (#134209)
09a82f3d24 : [EZ][BE] Delete references to non-existing `AWS_SCCACHE` secrets (#134370)
adf0f50cc7 : [Compile] Add NEON implementation for bf16->fp32 cast (#134297)
69813dbbfd : [export] Schematize nn_module_stack serialization (#134049)
78d69bfe11 : [SymmetricMemory] introduce multicast support, multimem_all_reduce_ and multimem_one_shot_all_reduce (#133424)
2ca7f0fc5c : [Minimizer] for sequential mode, respect find_all setting (#134339)
58e2cf364b : Make DTensor sharding propagation for `scaled_dot_product_efficient_attention` and `scaled_dot_product_flash_attention` more conservatively cached (#134146)
157de30f53 : [sparse] Update cuSPARSELt to v0.6.2 (#134022)
74a9001ada : [aoti] Add additional custom op input type support (#132454)
f8fbfe5846 : Always emit end events even on failure, use thread local storage for stack (#134279)
847a042a0d : [Release only] Disable triton build workflows (#134347)
a23d86c178 : [hop] ban creating hop by directly instantiating HigherOrderOperator. (#133645)
3546628a2a : Allow mp.start_processes to create processes in parallel (#133707)
afd081c9d4 : [inductor] Fix needs_fixed_stride_order silent incorrectness (#133639)
2553278bae : .github/merge_rules.yaml: added multiprocessing to Distributed (#134262)
6eae569546 : [dynamo][fix] always use POSIX-style path in `trace_rule.py` (#133987)
2eef749b31 : [Inductor][FlexAttention] Fix IS_DIVISIBLE bug and add unit tests (#134055)
8ae4f82243 : [aotd] Support HOP effects in backward (#132638)
7fd3b69886 : Revert "[dynamo][super] Improve handling of getattr on super (#134039)"
09127b096c : Revert "[inductor] Fix needs_fixed_stride_order silent incorrectness (#133639)"
75c22dd8bf : Revert "[dynamo][fix] always use POSIX-style path in `trace_rule.py` (#133987)"
0e49b2f18e : [dynamo][itertools] support `itertools.tee` (#133771)
8d90392fb0 : [dynamo] simplify implementation for `builtins.sum` (#133779)
6c0b15e382 : [dynamo] simplify implementation for `functools.reduce` (#133778)
cc3a76edba : [dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)
ca3f48dd5b : [XPU] Set `make triton` install pre-built whl by default (#130313)
55cdcef0f7 : [fp8 rowwise] Work around CUDA Invalid Memory Access bug (#134227)
9d81767d43 : [fp8 rowwise] Rework dispatch logic (#134226)
0afb4872aa : [fp8 rowwise] Support non-contiguous inputs and clarify checks (#134225)
9f8d3f511f : [fp8 rowwise] Some clean-up (#134224)
2f198605ac : [fp8 rowwise] Simplify epilogue visitor tree via common blocks (#134223)
25b2e46573 : [dynamo] add max iterator limit while inlining generators (#134233)
673b9bd561 : [WIP] [Inductor UT] Reuse inductor UT for intel GPU `test/inductor/test_multi_kernel.py` (#133943)
80846caa8c : [inductor] fix dynamic size array(vla) build error on msvc v4 (#134221)
49b9f2d8b0 : [inductor] fix signbit build fail on Windows. (#134229)
311af3b988 : Add new ops wrapped_linear_prepack and wrapped_quantized_linear_prepacked (#134232)
b23779ef0a : [dynamo][fix] always use POSIX-style path in `trace_rule.py` (#133987)
a699bd1155 : [dynamo] Cache _dynamo.disable results (#134272)
b454c51060 : remove dynamic_dim (#134211)
058302494c : [AOTI][Tooling] Add a test case where `config.debug_intermediate_value_printer=True` to check codegen (#133326)
d2c60749ac : [Inductor][FlexAttention] Respect user's input kernel_options (#134065)
8301add833 : [4/N] Further refactor FR script to make it more modulized (#134196)
bcfc560aea : [Profiler/CPU] Add Test for Dynamic Activity Toggling [4/n] (#134149)
bf5addb613 : [FlexAttention] Enable different qk and v head-dims (#134043)
7c93c4f8cf : [CI][dashboard] Change aarch64 perf run (#134265)
b3821f1da1 : [dynamo][guards][logs] Generate code_parts for debugging (#134181)
edbadc904b : Do not broadcast uniqueId during a split (#133962)
b2eb0e8c6a : docker: Use miniforge, install from pip (#134274)
30d7e7a1cd : [XPU] Fix patch for old llvm package error for triton xpu (#134204)
629bd6f718 : Update FlexAttention with masking semantic (#133373)
e7929809f3 : [c10d][ez] Add comments to CudaEventCache class (#134172)
b319fa3fd9 : [ONNX] Opt into ruff fmt (#134120)
25499de814 : Remove ncclIdToCommMap_. (#133961)
b0cf287b46 : [export][training ir migration] Fix getitem not exist (#134259)
f0ba309d78 : [CI][dashboard] Add jemalloc back for aarch64 (#134189)
1b6bbaa016 : Remove PMI dependencies in PyTorch (#133960)
ff61f55387 : [Dynamo][autograd.Function] Supports ctx.set_materialize_grads (#133978)
5633773188 : Convert various jobs to be Linux Foundation fleet compatible (#134128)
0eb9c870fd : [reland][ROCm] TunableOp for gemm_and_bias (#128919)
978c5a80a0 : [export][training ir migration] fix batch norm pattern match in quantization (#134157)
fee677eeb6 : [fbode-testing][dynamo][reland][inline-inbuilt-nn-modules] Mark attri… (#134136)
c469d14a14 : [NJT+SDPA]Fix flash_attention output when batch_size=1 and seq_len=1 (#133595)
bd92fa2cf2 : Update conda-env-iOS.txt (#134239)
eba4d08497 : [MPS] Gather sliced inputs to batch norm (#134121)
8f7d66f0c3 : Enable dynamic rollout for Linux binary workflows (#131472)
d95aedf5fd : [BE] typing for decorators - fx/_compatibility (part 1) (#134202)
44fa9f991c : [NJT] add aten.to.dtype support (#134164)
b6abac68ec : [BE][dynamo] reorganize polyfill module hierarchy (#133977)
c95ddd4bf2 : [dynamo] ensure polyfill function has the same signature as the original function in `substitute_in_graph` (#133813)
240467adfe : [fx] Implement deepcopy for Proxy (#133706)
b0171c3920 : Revert "[ONNX] Opt into ruff fmt (#134120)"
828ab84e19 : Improve error msg on _lazy_init() error (#134159)
3c5485fb7f : [Retry] Log chromium events to scuba (#134118)
1b10a5c652 : Allow SymInts and SymFloats as other in div_softmax_pattern (#133989)
afc2615d33 : Add proper casting to fuse_linear_bn_weights (#134105)
b459ca78eb : [NJT]Add unit tests that cover the internal use cases using new NJT API (#133513)
1a7e8e5780 : Revert "Update FlexAttention with masking semantic (#133373)"
88c973005d : Revert "[FlexAttention] Enable different qk and v head-dims (#134043)"
83b5d449a3 : Add full float16/bfloat16 support to MaxUnPool (#133774)
c9c84ae3ee : [BE][Ez]: Update CUDNN_frontend submodule to 1.6.1 (#134007)
108a75b454 : [PP] Add ZeroBubble schedule (#133467)
cedfac20c7 : Revert "[SymmetricMemory] introduce multicast support, multimem_all_reduce_ and multimem_one_shot_all_reduce (#133424)"
592a172910 : [FSDP2] Resolved strided sharding todo in clipping tests (#134152)
4c645c04d8 : Fix type of get_raw_stream (#134187)
5fb8754434 : [inductor] write cpp code with encoding utf-8 (#134027)
aea1148d56 : [fp8 rowwise] Clarify dtypes (#134114)
72586ccd14 : [fp8 rowwise] Don't build separate kernel for no bias (#134113)
d64fa11095 : [fp8 rowwise] Fix bias calculation being done in low precision (#134112)
15faed60ca : [fp8 rowwise] Make schedule selection more readable (#134111)
b8ea5b01c9 : [fp8 rowwise] Allocate workspace as a PyTorch Tensor (#134110)
4c8193b8f0 : [14/N] Fix clang-tidy warnings in aten/src/ATen (#132733)
90c821814e : SparseCsrCUDA: cuDSS backend for linalg.solve (#129856)
64cfcbd8a3 : Tune _int_bsr_dense_addmm for int8 inputs on A100 (#134035)
b7baa062fc : Update torch-xpu-ops pin (ATen XPU implementation) (#133850)
cdb9c7d228 : Add support for using privateuse1 backend name in `instantiate_device_type_tests()` (#133082)
24c2dd2002 : Migrate fuse_chunk_reshape_concat_pass to PT2 (#134026)
938f37b745 : Added batching rule for sdpa_math, sdpa_efficient_attention forward, cudnn, and flash attention (#133964)
e2ff094008 : [inductor] calibration inductor windows uts (1/N) (#134033)
0d7ac1966a : kill sharing of constraints (#134045)
de06345e9b : Avoid Host & Device Sync In LR Scheduler (#133663)
e847b6bb9b : [FlexAttention] Enable different qk and v head-dims (#134043)
7868b65c4d : [Dynamo] Support dict.setdefault (#134083)
7b20514f8e : [export] Device remapping in export (#133660)
df467f8746 : [CI] Do not set Intel OMP for aarch64 (#133997)
6bddfb9546 : [FSDP2] Add cache for FSDP wrapper class (#134135)
2a73ba298c : Upgrade submodule oneDNN to v3.5.3 (#131620)
2213c07dcd : [CpuInductor] Enable NEON ISA detection on Linux ARM (#134165)
5f0bd98767 : Increase max total number of dynamo partitions to 15 (#134153)
a5ef04a3b8 : add relevant function (#133946)
8604c0a150 : [inductor] Fix needs_fixed_stride_order silent incorrectness (#133639)
d2204d4f0f : Remove skip ci recommendation (#134134)
255cd75a97 : [sparse] Add cuSPARSELt as a backend (#128534)
0870398fa8 : [ONNX] Opt into ruff fmt (#134120)
96dfe95ed0 : Fix DDPLoadBalancingPlanner docstring (#134044)
5d5a45dc85 : [CI][dashboard] Collect Export pass rate separately (#134076)
b3eef3deaf : Triple number of shards for aarch64 cpu inductor tests (#134123)
345578afb4 : Add int8 support to bsr_dense_addmm and bsr_dense_mm Triton kernels (#133855)
a3e1416c05 : Fix out_tensor device in diag_test.py (#134020)
6c1e2d2462 : [easy] Force inline_inbuilt_nn_modules to remove divergence (#134122)
865facda44 : [pytorch] Remove thread naming when torch is imported (#134066)
1491a61769 : Revert "[hop] ban creating hop by directly instantiating HigherOrderOperator. (#133645)"
5fcfccefc6 : [export] Migrate `capture_pre_autograd_graph` to `_export_for_training` (#132815)
18aaceb7be : Update conda-env-iOS.txt (#134068)
84b3f1900a : C++ network flow implementation in c10 (#132188)
05304f59f0 : [Doc] Fix typo in `torch/fx/passes/README.md` (#134078)
32e057636c : Enable scribe environment for compile-time benchmarks if requested. (#133891)
750d68ff70 : Use amazon linux2 for Docker builds, fix build-docker-conda condition (#134116)
696107efcb : [hop] ban creating hop by directly instantiating HigherOrderOperator. (#133645)
6835f20d20 : [HOP] support generating schema for hop (#133521)
dd5a7c8397 : [PT2] Add a pass to convert stack to unsqueeze cat (#133966)
1da3a049da : [dynamo][super] Improve handling of getattr on super (#134039)
3ef1cc8583 : [export] Implement common_getitem_elimination pass. (#133618)
4ebe5b7cf4 : Avoid autocast deprecation warning in DataParallel (#130660) (#134057)
2db28a9611 : Revert "[BE]: Update Typeguard to TypeIs for better type inference (#133814)"
57625bacea : [partitioner] Fix must_be_in_backward corner cases (#134002)
346e0f605f : [Inductor] short-term fix for needs_fixed_stride_order silent incorrectness (#133452) (#133888)
362a6ca99a : Add xpu_cmake_macros.h to xpu build (#133649)
68425e68fe : Revert "[dynamo][reland][inline-inbuilt-nn-modules] Mark attributes of nn mod… (#133714)"
32e052e468 : [docs] improve `torch.stack` example code to be reproducible (#133857)
585c049fa3 : Fix `Extension` attribute name in `CppExtension` example (#134046)
afaa5fcecb : [BE][Ez]: FURB142,FURB92 misc preview fixes (#133880)
683609c631 : Skip cpp_extension test internally (#134011)
4b1fb3b0ed : [PP] pt-native input/weight grad split (#132691)
2bffbe06bd : [Inductor][CPP] Support vectorization of load_seed and randn (#130317)
313bc11963 : [inductor][cpp] complete vectorization for int32/int64 (#122961)
539be0a769 : [dynamo] support `ClassMethodDescriptorType` (#133862)
dab239be1f : [cpu][flash attention] fix nan issue (#133598)
30faa177c4 : Fix warning when pickle.load torch.Storage (#133597)
0d79f67a25 : [dynamo][exception] Support raise exception from None (#134028)
bd0db490bf : [dynamo][set] Fix EQUALS_MATCH guard for constant sets and lists (#134016)
c929e1e11f : [dynamo] fix polyfill for user defined constructor `__new__` (#133822)
695291be2f : Fix test flakiness due to not resetting state (#134058)
18736d2b55 : [inductor] parallel compile: Create new pipes for subproc communicati… (#134042)
30dc6338c1 : [effects] Prevent inductor dtype promotions for HOP effects tokens (#134003)
19eb14493a : [Inductor] Moves intermediary tensors which are constructed on the cpu to XPU when safe, align with CUDA. (#132843)
6535f11259 : [Inductor] Support _check_triton_bf16_support on XPU. (#132748)
c2e2602ecd : [Inductor] Move `GPU_TYPE`(The runtime avaliable gpu type, cuda or xpu) from (#132740)
3d8db41337 : Add new op wrapped_quantized_linear (#134024)
022cd7c9aa : [RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)
843fdf81c2 : Fix a getenv segfault due to a race (#133744)
af664882dd : Safely infer device type + docstrings + tests (#133668)
b39ec7fbe9 : [1/N] Make NCCL PG error messages more accurate and simpler (#134017)
66d3eb783c : [SymmetricMemory] introduce multicast support, multimem_all_reduce_ and multimem_one_shot_all_reduce (#133424)
8337b4d96e : [training ir migration] Fix ReorderConvertTest (#134010)
e8fc1e0118 : [ONNX] New export logic leveraging ExportedProgram and ONNX IR (#132530)
06cc2e83f0 : Make optim.swa.util content accessible from the torch.optim doc (#133393)
d1abd6241a : [CI][BE] Update retry action to v3.0.0 (#119403)
c42ac54d9e : [inductor] prune unused constants in graph scheduling (#132208)
5f3d22a609 : Avoid GPU syncs by reusing Pre-allocated Zero Tensor (#128069)
5a7b544e5c : Update FlexAttention with masking semantic (#133373)
bc785c2d9a : [Inductor][FlexAttention] Don't trigger dynamic shape on building empty block mask (#133836)
f7c1f32803 : Fix partially initialized module error (#134019)
41fab40be7 : [report_exportability] Avoid re-exporting duplicated modules (#133930)
1ae5d5bb62 : [dynamo][user-defined] Improve getattr_static for user_defined objects (#133742)
a36739f36a : Cherry-Picking don't resolve conflicts (#134047)
2e1830c7c8 : Implement 2D version of masked_select for nestedtensors (#133889)
15b5a0b67f : Revert "[RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)"
88ead0afc6 : Revert "[dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)"
3fa874abbe : Revert "[dynamo] simplify implementation for `functools.reduce` (#133778)"
98e6a1d8ff : Revert "[dynamo] simplify implementation for `builtins.sum` (#133779)"
2540ee372a : Revert "[dynamo][itertools] support `itertools.tee` (#133771)"
ccc0aa69ce : [ONNX] Remove torch.onnx._export (#133824)
b03381cac2 : [dynamo] support `cls.__flags__` (#133970)
5229b52bf2 : [dynamo] support `cls.__base__` (#133969)
bb0bf09aff : [easy] skip test_sdpa_autocast on windows (#134009)
28ce3c0227 : [dynamo][itertools] support `itertools.tee` (#133771)
3f58a8051a : [dynamo] simplify implementation for `builtins.sum` (#133779)
37b4bc60a4 : [dynamo] simplify implementation for `functools.reduce` (#133778)
178e8563b8 : [dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)
71dd52f51a : [RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)
49430bfd5c : [DeviceMesh] Add a _MeshEnv attr to record the mapping of flatten mesh_dim_name to its mesh dim index in root mesh (#133838)
c188d419db : [BE] [EZ] Allow linux-build workflows to run on the default runner type (#133640)
1002815f17 : Pass `torch.load(weights_only=)` internally to avoid FutureWarning (#133594)
d3d93a897b : Replace [[unlikely]] with unlikely(x) (#133583)
81a822ddc9 : Back out "[1/N] Fix clang-tidy warnings in inductor (#131979)" (#133922)
49f6ea6dd9 : Revert "[report_exportability] Avoid re-exporting duplicated modules (#133930)"
43f78bf37a : [MPS] Gather sliced inputs to batch norm (#133610)
278bc985d7 : [report_exportability] Avoid re-exporting duplicated modules (#133930)
333890b701 : Enable CUDA 12.4.1 (#132202)
e41b520ee3 : [3/N] Refactor FR script - Add a processor module (#133933)
bce0caba78 : [BE]: Update Typeguard to TypeIs for better type inference (#133814)
fbf3fc2a30 : [inductor] Use int64_t as index type for all platfroms 4 (#133892)
3caf3baabb : [inductor] enable inductor backend for dynamo on Windows. (#133921)
c3d02fa390 : [Reland2] Update NVTX to NVTX3 (#109843)
33f1ee036e : [dynamo][user-defined] Simplify call_hasattr (#133935)
8d93fe510e : Remove NestedTensorFactories.h (#133809)
187d55018a : [BE] Fix MYPY issues (#133872)
52dfe99dbf : Skip test_custom_op_add_abi_compatible_cpu_with_stack_allocation internally (#133704)
3a2f7192c3 : Revert "return state dict without optimized module (#132626)"
f2b57d8831 : Fix `torch._C` submodules population (#133919)
b02695d65f : [export] training ir migration, fix export_rle_model (#133937)
6590f4fb0e : [CD] Enable python 3.13 for xpu nightly build (#133670)
36376efd06 : [2/N] Refactor FR script - add a loader module (#133929)
2bd02e0c82 : Revert "[RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)"
91fd270535 : Revert "[dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)"
5109c5ef23 : Revert "[dynamo] simplify implementation for `functools.reduce` (#133778)"
241df7e7f8 : Add multi-cache autotune test (#133868)
11af423eca : [SymmetricMemory] make buffer_ptrs_dev, signal_pad_ptrs_dev, buffer_size, and signal_pad_size accessible in python (#133680)
08b5e07e6c : Revert "[dynamo] simplify implementation for `builtins.sum` (#133779)"
68570fca69 : Revert "Add MaskedTensor support to *_like API (#128637)"
42097f0ec1 : Revert "[BE]: Update Typeguard to TypeIs for better type inference (#133814)"
25d5a815f7 : [Dynamo] Guard on torch function mode global state (#133135)
48ee0984ac : Add C API to return all torch function disablement status (#133136)
d97ca968cd : [Dynamo] Test intermediate tf mode construction (#133134)
626acaeb16 : [Dynamo] Support torch function stack len (#133133)
d1fdf984c3 : [Dynamo] Support push torch function mode stack (#133132)
c0b4aaa8c5 : [Dynamo] Support pop torch function mode stack (#133131)
f147349568 : Fix DeviceContext bug (#133729)
09e366cb57 : [Dynamo] Add torch function mode stack guard to dynamo (#133130)
7492da804f : Mark disabled tests as fixed (#133940)
e8d3c4be36 : [dynamo][reland][inline-inbuilt-nn-modules] Mark attributes of nn mod… (#133714)
f08d484702 : Add itertools.islice support in dynamo (#133893)
b6891f4002 : [1/N] Refactor fr trace script to make it modulized - config (#133927)
15addb00e6 : Update test_control_flow.py to device-agnostic. (#133843)
994fcb9acd : Killswitch based rollout for flight recorder (#133237)
32f57ac627 : [BE] Fix lint issues in qlinear_prepack.cpp (#133797)
b0bafd2be5 : remove tensor weak ref from constraint target (#133890)
188cb5e67b : Bump scikit-image to 0.22.0 (#133932)
6c82a1c68c : [AOTI] Introduce DeferredCudaKernelLine for cuda cpp wrapper (#129135)
c51fc7e98e : Enable clang-tidy in aten/src/ATen/native/nested/ (#133829)
c6ea7b3f21 : Update xpu CD used driver to rolling version (#133454)
c7af2728d3 : Remove aten dispatch to empty in foreach_norm cuda kernel (#133897)
874ae854eb : [c10d] Land CudaEventCache with roll out flags (#133727)
cfcb9e388d : [PT2][Optimus] Add move reshape out of split stack pass (#133710)
6f738d6434 : Remove early exit in constant_pad_nd for export (#132679)
9a998d98f1 : Fix edge case in inductor triton clean script (#130837)
65b3e42074 : Warn on fx graph cache bypass and log it to tlparse (#133826)
2ec95ffe57 : [cond] support unbacked symbool inputs (#133589)
3f525c9d5d : Upgrade nightly wheels to rocm6.2 - 2 of 2 (binaries) (#133238)
2b95007d12 : [dynamo] support random.Random (#133725)
06faa15194 : [pytorch][counters] add pytorch.wait_counter.fx_codgen_and_compile (#133107)
afb3e5ed6a : Add onnx and onnxscript to CI requirements (#133647)
1fdeb4e329 : [dynamo] simplify implementation for `builtins.sum` (#133779)
ff9be0eda9 : [dynamo] simplify implementation for `functools.reduce` (#133778)
59ca56e56c : [dynamo] simplify polyfill registration for `builtins.all` and `builtins.any` (#133769)
641724ed1d : [RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)
8de56e2958 : Add MaskedTensor support to *_like API (#128637)
14ddd932fd : Add MaskedTensor support to _is_any_true (#128574)
432638f521 : Remove useless environment in reusable workflow (#133659)
d131048056 : Change install_triton to do git checkout, apply patch, pip install (#133878)
66d6d8b1b9 : Support TORCH_COMPILER_COLLECTIVES envvar (#133696)
0d4eacb9d2 : [fake tensor] unbacked symint support for binary op fast path (#133584)
565e2ea019 : Scale XBLOCK in triton for `pointwise` (#133300)
fb26b84390 : Update fused kernels and call _safe_softmax from SDPA (#133882)
f1dc3b108a : Back out "[export] fix test for training ir migration" (#133697)
a8619c9a1d : Add nitpicker, which allows adding comments to PRs when they match a file pattern (#133861)
64d9afd8a7 : Register nll_loss2d decompositions for core aten (#133534)
ad7dda7b32 : [CI] Bump up TIMM pin (#133528)
773a782249 : Decompose _unsafe_index_put into index_put (#133365)
517aee5369 : [torchscript] Add a sampled logging integration point. (#133484)
6564e746ed : [PT2] Port remove_noop to PT2 pre_grad passes (#132183)
da69a28c6f : [pipelining] Add schedule runtime for lowered schedule (#130488)
f31404ba6f : Revert "Update xpu CD used driver to rolling version (#133454)"
6ca68357b3 : [dynamo] Save class vt in UserDefinedObjectVariable (#133800)
08f14d5492 : [refactor][dynamo][side-effects] Helper function for __new__ for user defined class (#133799)
d6f30b91e5 : Add a smaller default config option for decode (#133646)
e37eef8a7b : return state dict without optimized module (#132626)
8d404581fc : Revert "[ONNX] New export logic leveraging ExportedProgram and ONNX IR (#132530)"
68fcd54226 : Lower cache mocking to test more pytorch code (#133579)
32ed4a3beb : Update xpu CD used driver to rolling version (#133454)
df6831562c : [Flight Recorder] Add more basic analysis to the script (#133412)
76b0284744 : Revert "[inductor][cpp] complete vectorization for int32/int64 (#122961)"
318d3b39c4 : Revert "[Inductor][CPP] Support vectorization of load_seed and randn (#130317)"
5153550e4b : [CI] Add FP32 dynamic, AMP static, AMP dynamic for AOT inductor accuracy CPU CI test (#132836)
5fab35d77c : [ONNX] New export logic leveraging ExportedProgram and ONNX IR (#132530)
92151c814b : [ROCm] Set _HAS_PYNVML to false if amdsmi not installed (#132990)
0a976b8899 : Enable bf16 float32 mkldnn matmul when float32 precision is 'medium' (#130919)
8b6b1721c8 : remove StrobelightCompileTimeProfiler.profile_compile_time from stacktrace when strobelight profiling not enabled (#133831)
4bae7ae3d9 : [DeviceMesh][Easy] Fix typo (#133790)
35f36363ec : Revert "[dtensor] move DTensor to public namespace (#133113)"
42e61c783c : [Inductor][CPP] Align Half load with BFloat16 load (#132011)
ae00063570 : Change default runner's AMI to Amazon 2023 AMI - Part 1 (#133641)
e72e924eb5 : Add correct typing annotations to rsample() for all distributions (#133516)
c0c82a5f6a : [CUDA][SDPA] Bump tolerances for `test_mem_efficient_attention_attn_mask_vs` (#133738)
cf60fe53a8 : [BE]: Update Typeguard to TypeIs for better type inference (#133814)
0d4cedaa47 : [13/N] Fix clang-tidy warnings in aten/src/ATen (#133807)
47ed5f57b0 : [12/N] Fix clang-tidy warnings in aten/src/ATen (#133425)
fbd020fce6 : Add new prop to _XpuDevicePropertie for triton gemm optimization (#131738)
fed6096e73 : [dynamo] Support object.__new__ call (#133746)
d56a395971 : [dynamo] Support os.fspath (#133747)
27dfd63ee8 : remove unnecessary slicing in EffectTokensWrapper (#133737)
d717df2071 : [compiled autograd] fix flaky tests due to torch.cuda.memory_allocated() != 0 (#133733)
fb9d2dc641 : Remove Wno-invalid-partial-specialization from CMake (#133398)
f8cf1829b5 : [Reland] [11/N] Fix clang-tidy warnings in aten/src/ATen (#133758)
0bde3c4f2f : Run cudagraphs on AOTAutograd cache hit (#132294)
d6368985af : [BE]: Fix setuptools not installed with Python 3.12 (#133561)
b4a1673a67 : profiler/unwind: include <dlfcn.h> for dladdr (#133582)
215b14530a : Add Half for sparse.mm reduce (#133672)
1c6fbae579 : [Easy][dynamo] fix builtin function names for `itertools` (#133711)
a0ef8888e6 : [Inductor][CPP] Support vectorization of load_seed and randn (#130317)
99b3b58f39 : [inductor][cpp] complete vectorization for int32/int64 (#122961)
d5f6d68d68 : [PT2] Resolve PT2 compatility issue in slice and diff (#133740)
cd89bf77c8 : [inductor][cpp][gemm] easy: adjust indentation of template, var renaming etc. (#133312)
4dc9795ebf : [refactor][easy] Directly call var_getattr method for PythonModuleVariable (#133745)
2ee6b97464 : [dtensor] move DTensor to public namespace (#133113)
1a4709cef5 : [dtensor] add more documentations (#133306)
addee9f4d1 : [dtensor] add missing __all__ to public modules (#133305)
702c810780 : move param's device check to `_init_group` for fused (#131153)
12b8e29203 : Add a fudge factor to ephemeral NCCL timeout increase (#133722)
695d7db2d6 : remove dead code for suggesting legacy dynamic shapes fixes (#133700)
455f6bda56 : Add cache timings info to tlparse (#133504)
dcfa415e6e : [Inductor UT] Reuse inductor UT for intel GPU `test/inductor/test_compiled_optimizers.py` (#133083)
983bea399d : [compiled autograd] move non-hot path logs into default logger (#133541)
0a6cc15079 : [compiled autograd] use same graph node names as AOTDispatcher (#133148)
4b3ed8bc52 : [compiled autograd] log aot id for CompiledFunctionBackward (#133115)
b0803129e8 : Added meta registration for `_fused_adamw_` (#133728)
ec28121017 : [inductor] Fix test_cudagraph_trees_expandable_segments.py for internal (#133698)
648fc6c9c1 : [Inductor][CPP] Refactor the tiling select into a standalone module to enhance its extensibility (#130892)
d04cd7f3ba : Improvements for associative_scan - Reverse feature (#133011)
19ff9059eb : Revert "[Inductor][CPP] Support vectorization of remainder (#129849)"
98d6a6eb7d : [inductor] clean up TODO comments. (#133718)
271ee90851 : [easy] Fix type annotation for `ExportedProgram.run_decompositions` (#133720)
99e789b52b : [Fix 1/n] GPU Test skips - fbcode/ caffe2/test/quantization (#133158)
fd33499b0c : [PT2][Optimus] Fix mixed precison training problem in decompose mem bound (#133626)
be207af6e1 : Disable unwrapping scalar tensors when used as outputs (#132859)
861bdf96f4 : [MPS] Add native strided API for MPSNDArray starting with macOS 15 (#128393)
447f428d6d : [ROCm] Fix text_export cudnn_attention UT (#133234)
f57b00704e : [Traceable FSDP2][Dynamo] Support reconstructing CUDA event object within Dynamo graph (#133635)
bc9e20b927 : Move the layout constraint registration of aten._scaled_mm.default to module scope (#133669)
88ba50279c : Consolidate the format for `--max-acc-splits` flag (#133724)
3ac527ac5f : [BE][Ez]: Update cudnn_frontend submodule to 1.6.0 (#133687)
41e6619509 : [codemod] Del un at::native::metal @ MPSCNNFullyConnectedOp.h:6 (export D59157302) (#133515)
a0cb54ab46 : Revert "C++ network flow implementation in c10 (#132188)"
fb59440791 : Use dedicated docker-build environment for manywheel, libtorch and conda Docker builds - 2 (#133709)
678a8f9e66 : [Inductor][FlexAttention] Small cleanup for FlexAttention kernel template (#133664)
611c104370 : [MPS] Add workaround for nonzero with large/complex inputs (#126188)
0063e56949 : Make FX Graph Cache work with distributed training (#133374)
5ee070266f : Workaround ASAN failure (#133623)
90c3669cd9 : Make sure T::is_traceable is bool (#133673)
eb3d517605 : [Test] Add SkipIfRocm to test_grad_acc_cpu_offload (#132975)
e5baf43b61 : [Inductor] short-term fix for needs_fixed_stride_order silent incorrectness (#133452)
caaa339e0f : Use dedicated docker-build environment for manywheel, libtorch and conda Docker builds (#133699)
b833990a8f : Revert "[CUDA][CUTLASS][submodule] Fixes for CUTLASS upgrade (#131493)"
4ee65c7e4e : Add message text to BypassFxGraphCache exceptions. (#133505)
1df1d00ffc : [Traceable FSDP2] Remove usage of tuple() generator and simplify code (#133636)
374c61cc82 : [inductor] make conv template work with symbolic stride/padding (#132938)
2cffe82dea : Fix triton build failure due to tritonlang.blob.core.windows.net not available (#133694)
f735038c8f : [PT2][Optimus] Add unbind_stack_to_slices pass (#133420)
6790eb52f9 : [Traceable FSDP2] Set torch._dynamo.config.skip_fsdp_hooks to True by default (#133531)
6d85077168 : [Traceable FSDPS] Allow tracing through FSDP2 impl in trace_rules.py (#133532)
18705e371d : S390x nightly binaries for python 3.13 (#132984)
770086fe39 : [Dynamo] Support torch.cuda.device ctx manager (#133385)
38e5ee1a34 : mixed_mm: add more extensive dtype testing (#133292)
9c2d119194 : [Profiler/CPU] Add API for Dynamic Activity Toggling [3/n] (#133353)
46af996ce7 : [c10d] Do not call ncclCommAbort if comm is not initialized (#133630)
8b8b4e5ae9 : AutoHeuristic: documentation for mm (#133611)
0e0077f3b6 : AutoHeuristic: mm ranking heuristic h100 (#133608)
e51c8ad369 : AutoHeuristic: Heuristic that ranks choices for mm (#131714)
51e13745be : [BE]: Update ruff to 0.6.0 (#133609)
eca8b4220f : [inductor][cpp][gemm] fix k-slicing bug and add thread blocking config (#132730)
a6aa451bde : Move python 3.8 to 3.9 for linux-binary-manywheel workflow (#133621)
e1b9b89d94 : Revert "[Flight Recorder] Add more basic analysis to the script (#133412)"
b444343087 : Fix printing symfloat pow in triton (#133614)
762b1b4c17 : [inductor] [cpp] fix accuracy when template_buffer has users other than the epilogue nodes (#133073)
dd69013c7a : deprecate `search_autotune_cache` (#133628)
15183f5ebf : overestimate `time_taken_ns` for autotuning (#133633)
30fbf5b19c : Remove AMD restrictions on triton hashing (#133616)
5ed3b70d09 : remove redundant upper bound check at runtime (#133627)
f64146aff0 : Update source matcher to use torch_fn (#133642)
d12bbcd785 : Add auto-tuning for sparse semi-structured MM operator (#123742)
3d45717219 : [ROCm][CK][Inductor] enable dynamic shapes for CK backend to gemm max autotune (#133285)
8ea5b572a6 : [PT2][Optimus] Add missing example value for the nodes introduced in group batch fusion (#133414)
8a2b064236 : [dynamo][user_defined][stable-diffusion] Raise ObservedAttributeError on UserDefinedObject var_getattr (#132806)
fcc2fc1a70 : [Flight Recorder] Add more basic analysis to the script (#133412)
d9f17cf4e4 : [fx] Do not add Proxy on Tensor (#133470)
8a5708ba3d : [dynamo] Support object creation of classes with custom __new__ (#132977)
a1a869f2f5 : [ts_converter][reland] Add support for LinearOpContext and Conv2dOpContext in quantization pass (#133622)
1653f7786d : Fix type promotion for `ldexp` (#133519)
3a904d1163 : AutoHeuristic: Enable explicit support for ranking (#131710)
add0f0085c : AutoHeuristic: Support ranking/pruning choices (#131705)
929d2f8253 : [3/N] Fix clang-tidy warnings in torch/csrc/autograd (#133389)
c22f51ce7c : [inductor][cpp][gemm] improve large bs perf with better cache blocking (#132729)
8f7cf796ea : [14/N] Use std::optional (#133417)
d9576c9440 : Fix failures when default is flipped for weights_only (#127627)
c8ad5e37e8 : Fix all RuntimeErrors during weights_only load from being erroneously reported with the weights_only message (#132349)
0d2be06d94 : [export] fix test for training ir migration (#133587)
7ad3108ef2 : [CUTLASS][FP8] Skip scaled_mm rowwise test on sm89 (#133612)
413416cf33 : [PT2] Consolidate args and kwargs usage in pre_grad passes (#133518)
f347174d61 : Hipify Pytorch3D (#133343)
29c4b4ea5a : [executorch] Refactor delegation code (#132773)
86aa327e4a : [FSDP2] Added eager fast-path for fp32->bf16 param cast (#133369)
24e04f3bfd : Remove QNNPACK reference from setup.py (#133526)
90d2593b3e : Revert #132806, #132736, #132539, #132487 (#133570)
5f1470d45d : [export] Add InterpreterModule to trace_rules (#132949)
09a489b177 : Fix serialization for tensor list output (#133539)
cdf217cda1 : Disable distributed nccl tests to unblock Amazon2023 ami upgrade (#133355)
161cc137d2 : [DTensor] Add naive replicate strategy for aten.triu.default and aten.tril.default (#133545)
99cf567714 : Make SCRIBE_GRAPHQL_ACCESS_TOKEN available to test jobs running on main (#133536)
7e0ef343b0 : [ROCm] Check supported archs before setting preferred blas backend to hipblasLT (#133359)
5dfb22d4c8 : AutoHeuristic: tests (#133496)
7673ee5456 : remove benchmarks/__init__.py (#133390)
dff388491b : Fix docs for L1Loss and MSELoss (#133501)
27538671ae : Enable clang-tidy coverage on torch/*.h (#133422)
4aa66f68a8 : [CUDA][CUTLASS][submodule] Fixes for CUTLASS upgrade (#131493)
41d6cabca1 : [c10d]Control logging c++ traces with a flag (#133490)
546c53b784 : Bump max runners for linux.24xlarge to 500 (#133569)
59b3f5911d : [sigmoid] Support custom obj deserialization. (#133463)
5ec9c0bc4a : Fix `linearize(grad(...))` call (#133364)
cfec69e2a1 : Revert "Update fused kernels and call _safe_softmax from SDPA (#131863)"
d3b458e603 : [export] Do not use export.export for `capture_pre_autograd_graph` (#133370)
2236194c6b : [traced-graph][sparse] cleanup test guards (#133375)
a7c6e30a3f : [c10d][ez] Add space between PG ID and PG UID (#133497)
018e48c337 : [Reland] Add wrappers for synchronous GPUDirect Storage APIs (#133489)
c23dceb8f1 : Add Adafactor foreach impl (#132336)
3434a54fba : [CP] Rewrite ring attention backward algorithm and enablement APIs (#131351)
7470ae85e4 : Fix triton codegen with math.trunc (#133354)
fc5aa24a6e : Rewording doc string for clip_grad_norm_ (#133406)
a75248528f : [export] refactor _process_dynamic_shapes (#133391)
dd6ce2fe7c : Restore mixed dtypes GEMM auto-tuning for Ampere (#129058)
758a0a88a2 : [BE][Easy] enable `ruff` rule `PIE790`: unnecessary `pass` statement (#133200)
57d1ffc512 : Ignore `torch.onnx._internal` in `test_circular_dependencies` (#133110)
a6ad834fa8 : Fix counting execution time in run_test.py (#133199)
ec49ce5f8e : [CUDA]: Add frexp CUDA bfloat16 support (#133313)
59e33cd1f4 : [FSDP2] Set `ctx.set_materialize_grads(False)` for post-backward (#133498)
07adae3dac : Revert "Make FX Graph Cache work with distributed training (#133374)"
32d890745d : Revert "Add cache timings info to tlparse (#133504)"
bbddde311a : Migrate inductor jobs to runner determinator (#133457)
9876aa39c0 : AutoHeuristic: pad_mm documentation (#133411)
f32a9e953f : AutoHeuristic: mixed_mm documentation (#133410)
142353eca3 : AutoHeuristic: util scripts (#133409)
b0fc6aa412 : fix a typo in the householder_product docs (#124279)
b6335cfeab : Add an option to use do_bench_using_profiling in TORCHINDUCTOR_PROFILE (#133523)
cf1fc07bd4 : [DTensor][Easy] Minor fix to Partial Placement Docstring (#133149)
e6272acaec : C++ network flow implementation in c10 (#132188)
c88174df95 : typing for remote_cache (#133446)
7eb31e5023 : Add cache timings info to tlparse (#133504)
448d54ee92 : AutoHeuristic: instructions (#132894)
8624a571b4 : [Inductor][CPP] Support vectorization of remainder (#129849)
1120b5ab55 : Revert "[CI] Change inductor-perf-test-nightly naming (#131476)"
c2b2969b5d : made some args optional in create_mask (#133413)
8676401707 : [MPS] Enable MPS mm from macOS >= 14.4 (#133494)
dcdb25453e : Make FX Graph Cache work with distributed training (#133374)
6d4287419c : [ONNX] Disable op_level_debug tests (#133485)
7a74294786 : [sparse] enable meta tests (#133379)
3965f11837 : Minor type annotation updates following up D60954888 (#133382)
d8c494910b : [EZ] Enable explicitly opting into the old Linux Amazon 2 ami - Pt 1 (#133469)
3fc9ee5a31 : [DeviceMesh] Directly retrieve flattened mesh if already created (#133195)
44eaef46d0 : [DCP] Fix meta tensor loading (#133256)
c0be0105c7 : [aarch64] Replace OpenBLAS with NVPL in cuda arm docker (#132811)
2e8c1be947 : Update date for 2.5 in RELEASE.md (#133503)
86cb24e6eb : [CI] Change inductor-perf-test-nightly naming (#131476)
bedf96d7ff : [AOTI] Switch fbcode HIP to C shim version v2 (#133105)
6980e9e569 : [AOTI] Disable split_cat_aten passes (#133014)
63e5b09218 : Add unit test for asymmetric compilation (#133363)
6f51782a59 : Add comptime.sleep (#133362)
cf81180007 : allow `SubConfigProxy` of arbitrary depth (#133418)
d46e0761ca : Revert "[11/N] Fix clang-tidy warnings in aten/src/ATen (#133298)"
07c73a964b : [MPS][BE] Delete MacOS-12.3 specific checks (#133141)
58ab993dcc : Fix recent build error on ppc64le (#133416)
7b269cc484 : [TD] llm retrieval to not use bash -l {0} (#133464)
4bb1650ca3 : Bump maximum num warps (#132458)
d114fd78bd : [FSDP2] Enable HSDP + TP (#133335)
7f40ac9be2 : Migrate periodic jobs to use runner determinator (#133124)
118b2a4139 : Convert inductor jobs to Linux Amazon 2023 (#133352)
62cd065de2 : Validate that TK_ASSIGN nodes have fields initialized (#127878)
e554f71d7e : Implement filter in dynamo (#131674)
854a5ba958 : [lint] fix lint broken by #131912 (#133428)
378b12f3ad : Improve namespace for `c10::MemoryFormat::Contiguous` in `torchgen/api/cpp.py` (#131622)
6ba64be950 : fix for launching kernel invalid config error when calling embedding … (#133346)
d61995868c : [Doc] update guide install mkl-static from conda to pip (#133328)
efc6e8457a : [inductor] [cpp] fix the reindexer from template_buffer to Y (#133070)
52741043e7 : [Inductor][FlexAttention] Support non-divisible sequence lengths (#133019)
b5711297a0 : Add support for SetVariable.discard (#133317)
ef580a0e5c : [DeviceMesh] Restrict slicing to be a contiguous or non-contiguous subsequence of the root mesh_dim_names (#133193)
d143f879e2 : [DTensor] Add more aten._foreach ops to _pointwise_ops.py (#133271)
a6413d2924 : Regression test for S429861 (#133376)
a30504b2a2 : fix silly error when printing diff (#133345)
4d11a9b783 : [CI] Fix rowwise scaling tests on h100 (#133384)
7aee3376e2 : [aotd] HOP effect tokens wrapper above SubclassWrapper (#131672)
2a4304329b : [wip][lowering] Add max_acc_splits (#133041)
f951fcd1d7 : Inductor-CPU WoQ int8 GEMM micro-kernel with scale epilogue (#131887)
918367ebb0 : Add new runner: G4DN Extra Large with T4 for windows binary builds (#133229)
1206958d89 : [Dynamo] add EventVariable reconstruct (#133236)
d1d6b370ce : Upgrade nightly wheels to rocm6.2 - 1 of 2 (docker images) (#132875)
14750dd737 : Correct return type of grouping helper function in Optimizer (#133360)
5fff960004 : [PT2][Optimus] Extend split_stack_to_cats when split and stack have different dims (#133060)
4af4910b1a : Reland "Construct NJT without graph breaks" (#133196)
f23dbefe52 : [export] Support "custom" metadata field. (#131912)
c2eeda5da0 : [structural binding][12/N] Replace std::tie with structural binding (#131031)
7666ef9d9b : [GHF] Fix co-authors attribution (#133372)
3578598401 : [11/N] Fix clang-tidy warnings in aten/src/ATen (#133298)
fbb0adbc32 : [TunableOp] lazy init TuningContext singleton (#133347)
5947169c9d : Add missing endline in exception message (#133240)
c91bc499f7 : [CI] Do not emit color escape sequence during testing (#133350)
caba37e99b : Update fused kernels and call _safe_softmax from SDPA (#131863)
9de023d44d : [Dynamo] Make torch.Size can be reconstructed by LOAD_CONST (#133342)
c17d26c3c1 : [AOTI][Tooling] A couple fixes / minor updates for initial debug printer (#133016)
41da528378 : [BE] Skip inductor+profiler test for templates if we didn't run select_autotune (#133344)
8e074c4583 : [ROCm] skip SymmetricMemory related UTs for ROCm (#133241)
5a1d4f7ddc : Migrate lint.yml to runner determinator (#133320)
a9d34138df : [traced-graph][sparse] add to_dense() operation to sparse export test (#133175)
69de9e78e9 : Revert "typing for remote_cache (#133299)"
fa7ae6cdbc : can't infer device on benchmarked function with no args or kwargs (#133290)
dfc7c860e4 : Allow SymInt input for torch.fx reinplace pass (#133178)
61625a18ef : [profiler] Only parse kineto requests and build tree when required (#132713)
657d58bbd8 : [inductor] Fix test_codecache::test_inductor_counters (#133244)
2fde1934f9 : typing for remote_cache (#133299)
a1ca4dfe0b : [ONNX] Fix onnx conversion scaled_dot_product_attention (#133314)
19416bf38b : Reland "[2/2] PT2 Inductor ComboKernels - automatic horizontal fusing (#131675)" (#133291)
dadb20a9d6 : [Memory Snapshot][Viz] Add Allocator Settings Tab (#132518)
7172c732d9 : [Memory Snapshot] Skip C++ warmup unwind() call if context is not set (#133038)
be400ee2b4 : [inductor][test] Fix test_vertical_pointwise_reduction_fusion (#133276)
26735e7364 : [MPS][TYPE_PROMOTION] Fix Clamp (#133260)
89795da5e3 : [inductor] process compile_only case in all build options class. (#129975)
19270cff61 : Add a reference for the LRScheduler class (#133243)
aa4fbba42d : Make q info optional in prep for inference (#133261)
660436d843 : Convert Periodic to use Amazon2023 runners (#133036)
2f30473fba : [19/N] Fix clang-tidy warnings in jit (#133067)
2e7d67e6af : Migrate slow.yml jobs to use runner determinator (#133232)
c518b50c4c : Remove functorch dispatch keys in `legacyExtractDispatchKey` (#133018)
cd565bc455 : Refactor process_inputs outside of create_aot_dispatcher_function (#130962)
4cca18d5b6 : Revert "Update fused kernels and call _safe_softmax from SDPA (#131863)"
095c5ccf9f : [CD] Change XPU nightly build back to ABI=0 (#132854)
e0a5536cc9 : [2/N] Fix clang-tidy warnings in torch/csrc/autograd (#133295)
7756175273 : Add Sleef Implementation for maximum Kernel for ARM (#131642)
40061bd61e : [export] overwrite placeholder names when deepcopying (#133269)
947a446be4 : [executorch hash update] update the pinned executorch hash (#131420)
9f17037e8b : [dtensor] move tensor constructors to the api module (#133129)
50e837d9c2 : [10/N] Fix clang-tidy warnings in aten/src/ATen (#133155)
af7830e353 : [1/N] Fix clang-tidy warnings in torch/csrc/autograd (#133180)
4671e98656 : [export] fix node.users when inlining HOOs (#133144)
fa36eba77d : Turn off remote caching in unit tests unless explicitly on (#133258)
1e9bedf688 : Add `_codecs.encode` and `builtins.bytearray` to `_get_allowed_globals` to support bytes and bytearray serialization (#133189)
f1c439cbed : AutoHeuristic: refactoring (#133170)
e76f0e0646 : Remove QNNPACK reference from setup.py (#133177)
7be77658e9 : [Inductor] support masked vectorization for the tail_loop for INT8 datatype (#131155)
370b072d8d : [Inductor] support masked vectorization for the tail_loop of the 2d tiles kernel (#130724)
e61def65d5 : Update fused kernels and call _safe_softmax from SDPA (#131863)
00aa086298 : Revert "[dtensor] move tensor constructors to a separate module (#133129)"
89670d5bdd : Revert "Inductor-CPU WoQ int8 GEMM micro-kernel with scale epilogue (#131887)"
844103197d : Revert "[2/2] PT2 Inductor ComboKernels - automatic horizontal fusing (#131675)"
656465fc77 : Revert "Conversions between strided and jagged layouts for Nested Tensors (#115749)"
d4b31f7bcf : Refactor BlockMask constructor and add Factory func (#132969)
e553ef69d0 : [BE] Fix typo (#133247)
8585dee85d : [inductor] Add some more reinplacing tests (#132839)
592682fe22 : Migrate nightly.yml to use runner determinator (#133225)
80ed3e9ccd : s/dipatch/dispatch/g (#133192)
4f0d5f6551 : Pin sympy to 1.13.1 (#133235)
36c4ed8e49 : [inductor] add FreeLibrary to DLLWrapper for Windows. (#133184)
cdcc7dc891 : update commit pin for xla (#133120)
cc1cc71c46 : [MPS] Fix relu for 0-element input case (#133191)
666362865c : [test/profiler] Make test_profiler_pattern_matcher_json_report write … (#133009)
fa1d7b0262 : Revert "Remove unused Caffe2 macros (#132979)"
afb73d253c : [custom_ops] torch.library.{custom_op, register_kernel} disable Dynamo (#133125)
d53dfa4680 : [BE] Raise when the target model has scalar parameters (#132934)
0e4c0ef29f : fix type of `eta_min` parameter in CosineAnnealing (int -> float) (#132482)
e7d8d73582 : [minor] Correct in-code documentation for complex numbers and LBFGS (#133020)
d51e5467fd : TunableOp unconditionally add all validators (#132464)
d61815cb7d : [torch][ao] Use returned model from Quantizer.transform_for_annotation in prepare_pt2e (#132893)
1371c420c3 : Migrate binary builds to use Amazon2023 runners (#131826)
b06959e614 : [export] change deepcopy to copy in _replace_with_hop passes (#133142)
3128640c31 : [Memory Snapshot][Viz] Show event timestamps if collected (#132523)
454713fe9d : Add inductor-cu124, inductor-rocm to upload test stats (#133143)
9641abe97a : Revert "[export] change deepcopy to copy in _replace_with_hop passes (#133142)"
e9eb8795bb : Revert "[Memory Snapshot][Viz] Show event timestamps if collected (#132523)"
26b0a0c2f3 : Fix fsdp_state_dict_type_without_warnings (#132621)
f5e704a6f2 : Add instruction count benchmark to run on pull requests (#131475)
27c44c884e : [Memory Snapshot][Viz] Show event timestamps if collected (#132523)
7f08b73980 : Revert "[Memory Snapshot][Viz] Show event timestamps if collected (#132523)"
456909e5d3 : [Memory Snapshot][Viz] Show event timestamps if collected (#132523)
2d71f03db1 : [export] change deepcopy to copy in _replace_with_hop passes (#133142)
e7b870c88b : mixed_mm: fix segfault when allow_tf32=True (#133173)
04f37ed57d : Add support for returning LSE from FlexAttention (and also differentiating through it) (#133159)
78ccbad678 : [inductor] remove dtype check/assert for reduction vec and support bool for min/max (#132473)
79ca596dc6 : Optimize test_transformers.py (#133049)
a7912bf9dc : Make step != 0 test in slice irrefutable (#133091)
5b7b3e4af0 : Fix some issues detected by static analyzer (#132970)
92f650c5b3 : [Inductor][Intel GPU] Support codegen empty_strided_xpu, align with #118255. (#126678)
4a3a30c36e : [inductor] remove deprecated cpp_builder implementation. (#133161)
32be3e942c : Remove -Wno-error=pedantic from CMake (#133074)
b9922f7a5a : [compiled autograd][cpp node] No recaptures from saved float scalars (#133048)
c860889a65 : [compiled autograd][cpp node] No recompiles from saved int scalars (#132771)
2ad011ca73 : [inductor] remove debug code of AotCodeCompiler (#132823)
343071cd96 : Fix privateuse1 backend name case (#132980)
c8275e25a7 : fix requirement for error classification (#133122)
9f0d90655d : [inductor] cpp_builder add dynamo time trace for compile_file (#133103)
cc5a57d185 : Return from monitoring thread on TCPStore failure (#133150)
e888f401c5 : Fix autotuning for flex_decoding (#132157)
05de2b2d0f : Revert "Construct NJT without graph breaks" (#133145)
e890d888d9 : [dtensor] move tensor constructors to a separate module (#133129)
8fbd7d92a8 : Inductor-CPU WoQ int8 GEMM micro-kernel with scale epilogue (#131887)
c89936eaa0 : [CUDA][SDPA] Bump `grad_key` fudge factor in `test_flash_attention_vs_math_ref_grads` (#133051)
f037803290 : Add ChromiumEventLogger, log FXGraphCache and AOTAutogradCache (#132864)
de48d54042 : [TorchRec] Add Support for FakeProcessGroup (#133039)
3899465268 : relax unification checks when size-like symbols can be 0 (#133112)
72f2b29bb0 : [CI] disable xpu kineto build (#133069)
21302d5891 : AutoHeuristic: script to generate data for mm (#131617)
e7512ab752 : inductor mm autotuning: add back previously pruned configs (#131616)
e5fa190e01 : AutoHeuristic: tuned_mm (#131615)
3b440f358c : [elastic collectives API] add missing rank tracing support (#132818)
6beb2be2ed : Fix _dynamo.variables.torch_function.global_mangled_class_name (#132744)
d2ecdcb2f7 : [Profiler] Add API for Dynamic Activity Toggling [2/n] (#133035)
b0b4723062 : [c10d] Rename PG name and PG ID attribute (#132915)
4110cb6ba7 : Add explicit GQA support. (#131559)
dc8bb2636c : [c10d][doc] Add docs for ENV variables TORCH_NCCL_ASYNC_ERROR_HANDLING, TORCH_NCCL_TRACE_CPP_STACK, and TORCH_NCCL_COORD_CHECK_MILSEC (#132920)
78fa32a77b : Turn off Function Event Accumulation by Default (#133095)
c44cb89e06 : [export] detach constant tensors when they're not registered as buffer or parameter in unlift (#133031)
cd307fb0b1 : [FSDP2] reset FSDPParam.sharded_param in lazy_init (#132954)
78cf8df4a0 : [aoti] forward fix of [inductor] switch AotCodeCompiler to new cpp_builder. (take 3) (#133042)
472b0daeaa : [DDP][FSDP2] keep DTensor params for replicate(fully_shard) (#133059)
e66084f9bf : [BUG FIX] Refactor _scale_attn_mask_fusion_kernel to Use Runtime Argument Instead of Template Parameter (#132434)
b41d62a3a2 : Fix typo in docs of `all_gather` (#133066)
f3eab23c42 : Fix typo in `mypy.ini` (#133097)
31ef900a65 : Revert "added persistent option to buffers and namedbuffers (#132994)"
6c012f7217 : [c10d][Log] Use pg_id instead of pg_name for logging prefix (#132058)
655ec07525 : [ROCm] TunableOp logging improvements (#132173)
d13e72fd6a : [c10d] set a shorter heartbeat detect timeout to avoid race with NCCL timeout (#133028)
574cdf1232 : [export] Merge functions in replace set_grad/autocast with HOO (#132724)
2dbe5cb979 : [C10D] Clarify warning for concurrent PG usage (#131895)
bc57d5b6ff : [Inductor][CPP] Turns on inline_inbuilt_nn_modules for CPP GEMM template testing (#132487)
23b877cb54 : [inductor] a less ambitious way to solve the scalar tensor (#132702)
50595ecef4 : Revert "[BE] Raise when the target model has scalar parameters (#132934)"
065f7aa44b : [inductor] tensor_is_align falling back to False if unbacked expr not comptime evaled (#132423)
4bdb4bbd86 : Fix fbcode AOTI GPU lowering for ARM64 hosts (#133017)
f2bacd908a : [BE] Move function definitions to .cpp (#132927)
465e071898 : Revert "[CUDA][CUTLASS][submodule] Fixes for CUTLASS upgrade (#131493)"
f565d16acb : Fix work-around item non-sync issue on AMD (#133054)
927b4c1114 : [CUDA][CUTLASS][submodule] Fixes for CUTLASS upgrade (#131493)
7b8ab7eb3e : [dynamo] Partially support random.Random class (#133037)
ea00036841 : [BE] Raise when the target model has scalar parameters (#132934)
5707c6e952 : [Fake tensor] Align the appearance of `device_put` op in fx_graph generated for CUDA and XPU, which is exposed in the issue #130823 (#132479)
da65cfbdea : Remove unused Caffe2 macros (#132979)
05e8e87a69 : [Submodule] Remove foxi (#132976)
bb6eef8ed1 : [2/2] PT2 Inductor ComboKernels - automatic horizontal fusing (#131675)
8875226d62 : [dtensor] multi-dim mesh redistribute follow up (#133023)
3b7edc12c6 : [dtensor] more refactor to imports/paths (#133022)
22ea248aa8 : dynamic shapes mismatch errors (#132982)
8967d55b01 : [18/N] Fix clang-tidy warnings in jit (#132963)
313aa151da : Revert "[ROCm] TunableOp logging improvements (#132173)"
4101dd14c2 : Make debugging backends accept and ignore options kwargs from torch.compile (#132892)
0ff0bf3d31 : [Replicate] Fix replicate with DeviceMesh initialization (#133024)
10c2168b31 : [pt2-bench] use larger multiplier for smaller tensors for a few models (#132952)
3c5b246d3c : [export] Remove Proxy from exported programs and modules (#132956)
e2b94923ba : [PyTorch] Speed up decomposed quantize_per_channel (#133029)
fa8c34301a : [ts-migration]: Quantized ops to standard ops pass. (#133026)
45cf8ef557 : add impls for required for nt ops (#132710)
1434e0b121 : Add a private _safe_softmax (#131060)
1f66487c69 : [BE] Reroute all uses of proxy_tensor.maybe_disable_fake_tensor_mode to fake_tensor.unset_fake_temporarily (#132770)
f25df31008 : TunableOp more unit test follow-up (#130065)
3d0de6e1cd : [Inductor] Add config option to force higher-dimensional tiling (#132937)
8707c6dfac : added persistent option to buffers and namedbuffers (#132994)
9cca0494b9 : [ROCm] TunableOp logging improvements (#132173)
cd30861857 : [PT2][Optimus] Update unbind_cat_to_view pass to include more complicated cases (#132831)
40767e8468 : [BE] rename testHelperPrefix test (#132916)
7bd0732cbd : Fix flaky internal mixed_mm tests (#133015)
a9954d22f8 : Raise exception if torch.func.* calls torch.compile functions (#128736)
b845068db2 : [dtensor] refactor examples folder (#132914)
c326533999 : [ROCm][Inductor] Enable AOT Inductor CPP UTs for ROCm (#131521)
de288e2203 : Fix inf value reduction in non persistent reduction for scans (#132293)
322c9d03a0 : [FSDP][dtensor] use _StridedShard to represent nested sharding for correct full_tensor() result (#130760)
21906ddaba : [AOTI] Fix complex64 not defined (#132810)
ac95b2a2f2 : Migrate slow self-hosted jobs to Amazon2023 AMI (#131771)
75eb66afc0 : Support 'non-contiguous with holes' NJTs for contiguous clone() (#132776)
3ec9ec03a8 : Revert "[pipelining] Add schedule runtime for lowered schedule (#130488)"
942ffd1b2d : Make the __module__ name of HOO to be always "torch.ops.higher_order" (#132775)
eeb6ad0744 : [quant] Speed up dequantize_per_channel (#132828)
dfc5bb0099 : Login to Meta's ECR when using non-meta runner (#132870)
4a4dc9d6d9 : [inductor] Disable remote caching in failing test_cpu_repro tests (#132955)
9d5c85c499 : Move exir.delegate to PyTorch core to enforce no out-of-tree HOPs (#132525)
4ee5547b37 : [triton_op] Skip HOP dispatch when possible (#132822)
b885ad8fce : Revert "[Inductor][CPP] Turns on inline_inbuilt_nn_modules for CPP GEMM template testing (#132487)"
0ca8f66e3a : [NestedTensor] Modify softmax on ragged dimension to allow for 2D nested tensors (#132812)
c4071c4707 : Remove noqa: G004 warnings (#132917)
9db5bfccdc : [inductor] disable test_torchinductor failed UTs on Windows (#132973)
51ddcde110 : [BE] Introduces runner variants for amzn2023 to simplify lf-scale-config.yml and lf-canary-scale-config.yml (#132918)
6f99e97f0a : Revert "[ts-migration]: Support quantized operation transformation (#131915)"
42cd397a0e : Loads .pyd instead of .so in MemPool test for windows (#132749)
d1f73fd844 : Revert "[BE] Reroute all uses of proxy_tensor.maybe_disable_fake_tensor_mode to fake_tensor.unset_fake_temporarily (#132770)"
902c6f3a19 : [BE] Reroute all uses of proxy_tensor.maybe_disable_fake_tensor_mode to fake_tensor.unset_fake_temporarily (#132770)
0e43175e22 : [BE] Get rid of unnecessary inner_torch_dispatch method (#132769)
35fd4391bc : Format torch.fx.experimental.proxy_tensor.py (#132767)
b4e2411f6f : Big enough count to trigger stack overflow (#132062)
aec6332356 : Only thunkify proxies in some situations (#132421)
54efd43022 : [BE] Simplify code interacting with get_proxy_mode/enable_tracing (#132675)
361db32d47 : Consolidate SymDispatchMode into ProxyTensorMode (#132674)
0f19d4150b : Revert "[inductor] a less ambitious way to solve the scalar tensor (#132702)"
ec49796b8f : [Inductor] Support use_libdevice_for_f64 for pointwise ops on XPU, align with CUDA. (#132739)
24dee99cb7 : Populate submodules of `torch._C` to `sys.modules` recursively (#132216)
7f71f2a997 : [dtensor] improve docs and comments (#132683)
9e37e73e01 : [dtensor] refactor and improve readability of _dispatch.py (#132682)
ac960dced1 : Skip Reformer for Dynamic size testing (#132468)
9c5e0d47fe : Add xpu_cmake_macros.h to xpu build (#132847)
751c744ad0 : Optimize sort kernel for contiguous tensors (#132236)
83e4af203d : [dtensor] rewrite redistribute algorithm for multi-dim mesh (#131210)
479d460471 : [DeviceMesh] Add a private _flatten() API for device_mesh (#132632)
0e8541766f : [ts-migration]: Support quantized operation transformation (#131915)
9e584d0c05 : [BE] Test foreach optimizer for FSDP1 optimizer state_dict (#132933)
a270800f0b : [export][reland] Add print_readable to unflattened module (#132817)
745665d8b5 : [BE] Using with_temp_dir for test_distributed_checkpoint (#132908)
aff48f7378 : Autoselect default device in FSDP construction. (#127609)
4a1edbe475 : Disable SymDispatchMode when torch.compile'ing (#132433)
5ae979ab10 : [Dynamo] Support torch.autograd._is_checkpoint_valid (#132611)
4fd0d594a1 : [sym_shapes] Not eval sym expression for printing storage_offset (#132911)
b483ca05a9 : [inductor] a less ambitious way to solve the scalar tensor (#132702)
ac6398b630 : [FSDP2] Follow-up fix to correct relaxed overlap test (#132953)
636a7c4859 : [13/N] Use std::optional (#132527)
fd874b799f : [AOTI][refactor] Update MKLDNN ops cpp wrapper support (#132367)
c69b2d24e3 : [dynamo] Support remove method of set (#132943)
194ec49d27 : [dynamo][lists][stable diffusion] Do not add source on list slice (#132912)
45d0e90bd3 : [export] Allow str outputs (#132808)
4ca616e6d4 : Disable sparse tests in export (#132824)
fb6b001cde : Disable expandable segments IPC in fbcode, because some jobs seem to be failing. (#132890)
5709375d56 : [AOTI][tooling][1/n] Add intermediate value debug printer (#132323)
59f4725b49 : [NJT] manually autocast in SDPA handling (#132835)
bbf568aac8 : Split of "[reland] [export] fix zero arg export in training_ir and constant tensor handling" (#132307)
0f90ffe94a : Remove ProcessGroupRoundRobin (#132888)
5cb05a82b4 : [BC breaking] move benchmarking + prefer inductor path (#132827)
a9036e1cf8 : [inductor] raise unsupported msg in capture_pre_autograd_graph on Windows (#132841)
441c1c03d5 : Prevent an unnecessary device -> host copy for CuPy arrays when not explicitly setting a device in torch.as_tensor. (#132595)
374747818d : Run performance test non-alternately (#131935)
f16d87eeff : Print where raw cprofile lives (#132866)
b73d4b6555 : [pipelining] Add schedule runtime for lowered schedule (#130488)
9282e6ca78 : Don't use _disable_current_modes as decorator (#132809)
42226ca3a3 : Don't use use_lazy_graph_module as decorator (#132804)
5e4d8eb831 : Don't generate stack entry for DebugContext.wrap (#132802)
708a99e52a : Stop using with_fresh_cache_if_config as decorator (#132801)
c3e51c09ed : [PP] Add get_schedule_class util (#132768)
383f2ac914 : AutoHeuristic: mixed_mm H100 heuristic (#132685)
c327710a87 : [export] Publicize validate function (#132777)
21d4c48059 : Allow distributed breakpoint to skip the first few calls (#129511)
acad2050c1 : [easy][dynamo] Add tx as an arg in getitem_const (#132899)
700a11fdd4 : Make inductor kernel metadata comments more descriptive (#126698)
48f7bdbbe1 : aot_autograd: copy metadata from fw to bw nodes (#126573)
260e7cb143 : Make CUDA device properties's `__repr__` output actually printable (#132863)
525fdc0f95 : [docs] fix incorrect example in `convert_conv3d_weight_memory_format` (#129318)
6a348e5e57 : [CUDAGraph] Warn once if too many distinct sizes (#132832)
e76bd0b603 : [BE] put "show_dispatch_trace()" print logic in .cpp file (#132717)
7830373662 : Update owner for BC test (#132891)
59bbaea3a7 : [inductor] disable capture_pre_autograd_graph related UTs on Windows (#132848)
7ea8374c0e : `nn.ModuleList.__getitem__` overloads (#132834)
83fa7f871f : Work around item non-sync issue on AMD (#132772)
ff81ca8e0c : Revert "Populate submodules of `torch._C` to `sys.modules` recursively (#132216)"
4fe6a5dc34 : Move slow tests to be in repo (#132379)
26b0011fb8 : [XPU][Kineto Submodule] Introduce kineto-based XPU profiler (#130811)
07551887b8 : Revert "Disable SymDispatchMode when torch.compile'ing (#132433)"
ca713b8393 : llvm update for backward-breaking APIs in 18 and 19 (#132825)
a9ff190867 : Revert "Consolidate SymDispatchMode into ProxyTensorMode (#132674)"
9d476fee53 : Revert "[BE] Simplify code interacting with get_proxy_mode/enable_tracing (#132675)"
f2ad3c89b0 : fix dtype mismatch in lobpcg eigen solver (#132762)
1749025081 : Revert "Fix infinite recursion while walking to submodules (#132763)"
25df063f04 : [dynamo][user_defined][stable-diffusion] Raise ObservedAttributeError on UserDefinedObject var_getattr (#132806)
40ce0a53bb : [FSDP][dtensor] add FSDP2+TP distributed state dict test (#131408)
ad0ce89050 : [3/N][dtensor] Strided Sharding offset calculation util (#132391)
0b0c660c02 : [2/N][dtensor] Strided Sharding shard_to_replicate (#130239)
92a17f454a : [1/N][dtensor] introduce StridedShard placement type and _split_tensor() logic (#126697)
123d9ec5bf : Revert "Loads .pyd instead of .so in MemPool test for windows (#132749)"
a62710c820 : [FSDP2] Relaxed overlap test to address CI flakiness (#132869)
32a284c275 : [9/N] Fix clang-tidy warnings in aten/src/ATen (#132842)
ffd0d92c18 : fix autotuning init issues (#132837)
8b50d5398f : [DeviceMesh] Create new group for 1D mesh when default backend is 'gloo' and 'cuda' is available (#132709)
258f47fc0b : Add `padding_side` to `pad_sequence` with `"left"` and `"right"` options (`"right"` as default) (#131884)
780310fed7 : Revert "Only thunkify proxies in some situations (#132421)"
de9b8a42c1 : Revert "Add support for other backends in get_preferred_device (#132118)"
13fa59580e : Enable clang-tidy on aten/src/ATen/cpu (#132830)
ed97fb77f9 : Conversions between strided and jagged layouts for Nested Tensors (#115749)
fb146fc3c6 : Only store necessary tensor_dict fields in node meta (#132805)
7c79e89bc5 : Stop using clear_frame as decorator (#132778)
bb99008c9e : Only thunkify proxies in some situations (#132421)
32f9a809c7 : Replace [[unlikely]] with unlikely(x) (#130816)
8c8eb9670a : [CI] Enable inductor UT test on avx512 (#132645)
37ab0f3385 : Loads .pyd instead of .so in MemPool test for windows (#132749)
8333ecf085 : Support hasattr tracing for more PythonModuleVariable (#132731)
c8c964f950 : [inductor] check best templates first for fusions (#132829)
c184ac0f6b : Add support for other backends in get_preferred_device (#132118)
87053132ea : [DeviceMesh] Remove parent mesh concept from _MeshEnv and replace by root mesh (#132339)
dc00eeb0f4 : [Dynamo] fix incorrect kwargs in create_proxy (#132723)
2206a3de00 : [Compile] Speedup int8-to-float conversion on aarch64 (#132676)
4faa0e3efb : [Inductor] support masked vectorization for the tail_loop (#126526)
8bc5ef563e : Grouped Query Attention (#132689)
527f104a69 : add L2 cache size to device properties (#132819)
bfeb45e46b : [17/N] Fix clang-tidy warnings in jit (#132753)
03480213de : [8/N] Fix clang-tidy warnings in aten/src/ATen (#132728)
919e384247 : [PT2][Optimus] Add unbind_stack_to_cat_pass (#132542)
063a45ed27 : Fix infinite recursion while walking to submodules (#132763)
73c083e02c : [Inductor][CPP] Turns on inline_inbuilt_nn_modules for CPP GEMM template testing (#132487)
ed224554eb : [BE] Don't unnecessarily suggest -k for rerunning tests locally (#132807)
837898d9c8 : Stop using preserve_rng_state as decorator (#132774)
b01402b0a4 : [7/N] Fix clang-tidy warnings in aten/src/ATen (#132727)
178dc0c9c7 : various doc fixes (#132803)
cb4d1bfb71 : Clean up some tflop calc and add option for saving (#132799)
cbee9c1fd2 : Revert "Deprecate `torch._utils.is_compiling()` and `torch._dynamo.external_utils.is_compiling()` (#127690)"
e98eac76b3 : [inductor] switch AotCodeCompiler to new cpp_builder. (take 3) (#132766)
c7113a6186 : Revert "[DeviceMesh] Create new group for 1D mesh when default backend is 'gloo' and 'cuda' is available (#132709)"
0d6caeb259 : Add logging + counter for missed reinplacing opportunities (#132758)
cd7f527c59 : [3/3] 3D Composability - move tp dp tests (#129802)
179b572fd9 : [2/3] 3D Composability - move pp tests (#129801)
825002c9c6 : [export][fx] More robust DCE pass (#132764)
073cee531c : [Test][Easy] Remove print in test_device_mesh.py (#132780)
1a23ef2ece : [DeviceMesh] Create new group for 1D mesh when default backend is 'gloo' and 'cuda' is available (#132709)
18b678082e : [Easy] log output code path on cache hit (#132718)
3c1033eeb0 : Don't auto request review for reopened PRs (#132681)
2073ddfd1c : Actually report the HOP and subclass/mode when there isn't a registration (#132550)
623d0204f0 : [NJT] Support Chunk backward for simple cases (#132193)
2f908ffa4a : [traced-graph][sparse] sparsity propagation for all current tests (#132690)
029f8fc701 : Bump rexml from 3.2.8 to 3.3.3 in /ios/TestApp (#132469)
e47b684c33 : Revert "Temp disable MKL in DistributionKernels.cpp (#132532)"
94155ce31b : [Torch] Support meta device in checkpoint (#132684)
de00c79583 : [dynamo][inline_inbuilt_nn_modules] Mark nn module tensor static for cudagraphs (#132736)
1954bfacda : [Inductor] Small performance, precision, and dependency updates to B2B-GEMM (#132354)
775c310c0c : Preserve source_fn_stack in the training IR decomp (#132033)
4faa5804f6 : [c10d] Used float tensor for PG NCCL barrier all-reduce (#132701)
1e65ccc3de : [inductor] export kernel for gemm template. (#132580)
81a5a7a30a : [Quantizer] Fix getattr for quantizing constants (#132705)
c2bccfd431 : [BE] Simplify code interacting with get_proxy_mode/enable_tracing (#132675)
1de4ebc85d : [Quantizer] Fix Maxpool2d share q params (#132704)
db0bd04151 : [AOTI] Switch to use shim v2 for fbcode (#132750)
8d2c272e5a : properly register conjugate/neg fallthroughs to prim ops (#132699)
c6582f11cd : Add get_optin_feature() to allow opt-in to amz2023 (#131792)
e3394e5548 : torch.autograd.graph.increment_version: accept List[Tensor], use in AOTDispatcher (#132652)
af67b8df6d : [export] Fix exportdb test (#132678)
e6eee04875 : dynamo: use equality guards instead of id guards for Placement/DeviceMesh (#124401)
f50621989b : Construct NJT without graph breaks (#130292)
406b50835b : Use FakeTensor cache for subclass inner tensors (#131803)
a94c441e48 : Fix symbolic nested int printing (#131916)
ffdf48e63b : Consolidate SymDispatchMode into ProxyTensorMode (#132674)
7045bc5a77 : [export] change error message for specializations (#132698)
ca7ce2fca1 : [ts-migration][1/N]: Add prim::Loop for constant number of iterations and condition (#131418)
c803e35c4b : Reduce number of guards introduced by check_cudnn_tensor_shapes when cudnn version is high enough (#132384)
fc7849b93f : [pt2e][quant] Ensure BN node is erased after convert (#131651)
679cdf606a : Converted `__all__` literal tuple to literal list. (#132404)
6753ee127c : Allow torch.cuda.memory.mem_get_info to take a device str argument with an unspecified device index. (#132616)
7100c36c8a : Revert "[inductor] export kernel for gemm template. (#132580)"
656a4d1408 : [6/N] Fix clang-tidy warnings in aten/src/ATen (#132620)
a8f0979962 : Add cudagraph static inputs logging (#132726)
da320214e6 : Format tensor (#127992)
728374d7f7 : Changed create_block_mask to just accept BLOCK_SIZE (#132697)
91df66ee74 : [caffe2] Wrap constexpr with preprocessor statements (#132582)
4260f365ba : [inductor] Replace torch.allclose with torch.testing.assert_close in test_fx_fusion (#130618)
4e610924d4 : [c10d] Add a new API for adding ephemeral timeout for one local rank and the timeout will reset when the first collective finishes (#130905)
39c9b75a68 : Add registration mechanism for aoti model runner (#131638)
345bea01dc : Refactor thunkify to return proper thunk abstraction (#132407)
93fad2f0f2 : [export] Fix import in D60427208 (#132707)
2f16e68cab : [Intel GPU] Allow XPU device in copy, cdist, index_put_impl (#130088)
38674bcb45 : Revert "Conversions between strided and jagged layouts for Nested Tensors (#115749)"
d6a24b3b92 : Removed duplicate `__all__` declarations. (#132405)
96471ea47c : [inductor] support vectorization for torch.any(bool) -> bool (#132472)
26c6786109 : return_and_correct_aliasing: skip dispatcher when swapping storage (#132524)
eca0cb0fbe : Conversions between strided and jagged layouts for Nested Tensors (#115749)
4306eebab1 : [DeviceMesh] Update slicing documentation to include nD and non-continuous slicing (#132311)
1add8c5f1c : [Easy][DTensor] Rename args_sharding to args_schema for OpSchema __str__ (#132187)
3ef45e5669 : Fix ODR (#131032)
a74e5abda4 : Fix issues in activation_memory_budget for float8 (#132687)
a4ed8eeb33 : [hop] makes compiled hops not share code objects (#132427)
4a2cf50edf : [export][reland] Convert autocast to HOO (#132677)
ea42027e0e : [micro_pipeline_tp] support all _scaled_mm args (#131984)
2b5e31d099 : Move sigmoid run_const_graph HOP to PyTorch core (#132526)
af8b8a47cb : fsdp.set_: convey to functionalization that it mutates storage (#132322)
1a0db29932 : move torch._functionalize APIs to pybind. add one for marking storage mutations (#132337)
4db368a475 : make functorch CSE respect mutations as barriers (like fsdp.set_) (#132243)
ee0ae11b34 : Fix a typo in the example code. (#132601)
9a1ad3345f : Fix periodic windows test (#132648)
6b12dc0224 : [Reland] [11/N] Use std::nullopt and std::optional (#132622)
6f4dc56735 : [inductor] Default to 1 compile thread for internal (#132540)
1471473b84 : Add tests to bsr_dense_addmm_meta. Tune bsr_dense_addmm kernel for ViT shapes. (#132646)
b7bcfdaff2 : Change deprecate warning on dispatch_on_subclass to warn once (#132374)
2764bee942 : Revert "[MPS] Add support for autocast in MPS (#99272)"
a3ea96b762 : Revert "[export] Convert autocast to HOO (#131914)"
1d34f33d00 : Scale XBLOCK in triton reduction configs to avoid hitting max grid (#128826)
e1c2bdac2f : [easy] fix f-string messages in torch/_ops.py (#132531)
aec948adfc : [export] Convert autocast to HOO (#131914)
8d9c3a71f6 : Support IPC for Expandable Segments (#130890)
618e2c9de4 : fix torch rec test failure (#132437)
1c7dc335f7 : [ROCm][CK][Inductor] Enable addmm for CK backend to gemm max autotune (#130576)
7b2664ece6 : Temp disable MKL in DistributionKernels.cpp (#132532)
baa2483cea : Revert "Refactor thunkify to return proper thunk abstraction (#132407)"
d5045cceff : [16/N] Fix clang-tidy warnings in jit (#132604)
e8645fa2b9 : [Doc] fix some typos (found by codespell and typos) (#132544)
3d87dfc088 : Add basic OpenReg module scaffolding with autograd (#131708)
df59084012 : Drop GIL around cudart APIs (#132520)
6919e8baab : [MPS] Add support for autocast in MPS (#99272)
d532c00c81 : [test/torch_np] Fix usages of deprecated NumPy 2.0 APIs in numpy_tests (#131909)
a672f6c84e : [inductor] unify SUBPROCESS_DECODE_ARGS variable in cpp_builder.py (#132615)
9945caec65 : [inductor] Fix autotune non-close attr crash on Windows (#132630)
a8490a0762 : [traced-graph][sparse] propagate sparsity in fx graph (#131920)
14edd986b3 : Fix missing include file (#132647)
70cb16b316 : [DTensor] Added naive replicate strategy for more diagonal ops (#132201)
c65cb37657 : Refactor thunkify to return proper thunk abstraction (#132407)
b465a5843b : DTensor: add more foreach ops to supported sharding prop list (#132066)
c3ee07c71c : add missing profiler include in cpp code generation (#132419)
b30d0916d9 : [FSDP2] Added missing event wait (for future) (#132568)
fb87796d4f : [DeviceMesh] Add supports for non-continuous slicing (#132310)
27f61eba58 : serde sympy functions (#132493)
55b0c39d82 : Reland "[1/2] PT2 Inductor ComboKernels - Foreach cases (#124969)" (#132182)
ae44b8f410 : [inductor] support vectorization for torch.argmax/min(float/int64_t)-> int64_t (#131016)
1fb498d6e3 : Add try except for _maybe_evaluate_static call in IndexPropagation (#132128)
c7cfa51721 : Always use high precision for SDPA math backend (#128922)
01cdcbf7c8 : [dynamo] revert map/zip iterator related changes (#132528)
09f9c256ad : Add basic mypy annotations to inductor (#132416)
6e79932543 : Add basic mypy annotations to dynamo (#132415)
3558a8cf4a : Revert "Add basic mypy annotations to dynamo (#132415)"
f2ddd5e9e0 : Revert "Add basic mypy annotations to inductor (#132416)"
9be33bc584 : Revert "[inductor] Add type hints to functions in mkldnn_fusion.py (#131820)"
0a25666f92 : Revert "[dynamo] revert map/zip iterator related changes (#132528)"
fd4b649e6c : [BE]: Simplify some list comps to generators C419 (#132578)
4226ed1585 : [BE] Format uncategorized Python files with `ruff format` (#132576)
c35061c542 : Migrate Python code formatter from `black` to `ruff format` (#132574)
09fcd792eb : [Fix]: ScriptObject lifting issue (#130952)
5dac4d2c78 : Revert "[easy] fix f-string messages in torch/_ops.py (#132531)"
105ba7b58c : [5/N] Fix clang-tidy warnings in aten/src/ATen (#132565)
908d2a153b : [easy] fix f-string messages in torch/_ops.py (#132531)
87d46d70d7 : [inductor] export kernel for gemm template. (#132580)
d2dc173664 : Remove lint dependency `ufmt` (#132573)
f7aeb394b6 : [BE][Easy] Remove empty `ISORT_SKIPLIST` (#132572)
f3fce597e9 : [BE][Easy][17/19] enforce style for empty lines in import segments in `torch/[a-c]*/` and `torch/[e-n]*/` (#129769)
2714adce20 : [caffe2] Fix compiling ATen-hip in non-opt mode (#132581)
522fa03e91 : [Submodule] Bump ONNX to v1.16.2 (#132566)
2a8e94347f : [TP] verify numeric parity on Transformers for multiple iterations (#132543)
8ff310392e : add __torch_function__ handler to get_device cpp (#132567)
7f8a384a8f : [inductor] add msvc_cl compiler check (#132571)
81b8d3586f : Update torch-xpu-ops pin (ATen XPU implementation) (#132390)
6ec4af6865 : [Inductor][CPP] Add vectorization support for double (#131886)
d984105748 : Revert "[export] Convert autocast to HOO (#131914)"
6c65fd0394 : [inductor] Add type hints to functions in mkldnn_fusion.py (#131820)
bc46f205c4 : [15/N] Fix clang-tidy warnings in jit (#132564)
00097f3458 : Revert "C++ network flow implementation in c10 (#132188)"
e3387c6712 : [inductor] use uint64_t replace long to add Windows support. (#132491)
bbce517221 : [Inductor][FlexAttention] TestFlexAttention -> TestFlexDecoding (#132547)
21d02f8b4b : Revert "[easy] fix f-string messages in torch/_ops.py (#132531)"
a896fb1b36 : check unsupported sympy functions for runtime asserts (#132457)
0e7e61f7ce : Deprecate `torch._utils.is_compiling()` and `torch._dynamo.external_utils.is_compiling()` (#127690)
159d508f03 : [Fix]: prim::If with multiple outputs and input return directly (#131779)
36ec0fdf10 : [inductor] check compiler exist on Windows. (#132533)
8ad9f89ccc : [inductor] Reland: Add flag to ignore unsupported @triton.autotune args in user-written kernel compilation (#132562)
06581c277a : [dynamo][stable-diffusion] Support dict(obj) on constrained subclasses of dict and OrderedDict (#132558)
b28c01d90d : [export] Convert autocast to HOO (#131914)
ed4493de0e : dim name is identifier (#132557)
1f5dfe00da : Subtracer should always be real to inherit fake/real tensors from parent config (#132488)
6966d44eda : [ONNX] Rename _internal/exporter to _exporter_legacy (#132429)
5973aec671 : [fx] python_code(verbose=True): show size/strides for all tensors (#132192)
0b571b1058 : [codemod][pyre] Add missing Pyre mode headers (#132548)
373e9be457 : [Inductor][FlexAttention] Add kwarg to top level for users to specify kernel params (#132015)
25903f3932 : [easy] fix f-string messages in torch/_ops.py (#132531)
419b76c4ac : [dynamo] Reland 132308, 132314, 132318, 132334 - Make builtin nn modules attributes static (#132539)
841cadd555 : Fix discrepancies from 129973 (#132545)
243a763e1b : ci: Remove split-build CUDA testing from pull.yml (#132537)
a503136583 : [export] Detect whether case_name is registered in exportdb (#132420)
64720f3b89 : Introduce checks to validate public API tests (#131390)
fcef6cc6d1 : [13/N] Fix clang-tidy warnings in jit (#132477)
705ac311aa : Fix Distributed EventList usage (#132448)
e3513fb2af : [ts_converter] handle python list append, list add, aten.to.dtype+mutation_op pattern (#132529)
85f19ce14a : Support meta["val"] that is a dict, for triton kernels and for the partitioner (#132466)
bcac71517c : [Profiler] Test Logging for Empty Traces (#132444)
1962f9475f : [NJT][flop counter] attention: if offsets are fake, use max seqlen (#132356)
37c3d503b7 : [pipelining] Make test_schedule quiet (#132369)
7c1cca9fda : [pipelining] Add schedule send/recv pass (#130378)
625f494619 : [Pipelining] Add schedule unshard/reshard pass (#129810)
f379bbd46d : [dynamo] support inspect.signature.bind (#132330)
642257db1a : Update the FQN for auto_functionalized HOO. (#132171)
dccce77935 : C++ network flow implementation in c10 (#132188)
f49d5e30eb : Change owners of test/test_transformers.py to module: multi-headed-attention (#132519)
e81e74ca6c : [dynamo] revert map/zip iterator related changes (#132528)
b71cd149ce : Fix file lock issue in AotCodeCompiler (#132343)
bcb4f7c172 : Revert "Grouped Query Attention (#128898)"
afca6f5b47 : [PT2][Optimus] Add missing example value for introduced nodes (#132297)
24d0a32f98 : Revert "[dynamo] Wrap unspecialized nn module getattr with UnspecializedNNModuleSource (#132308)"
e696f17467 : Revert "[dynamo] Track builtin nn modules with UnspecializedBuiltinNNModuleVariable (#132314)"
e4e3575fb0 : Revert "[11/N] Use std::nullopt and std::optional (#132396)"
59b73079a0 : Revert "Always use high precision for SDPA math backend (#128922)"
193a19ee91 : Revert "[dynamo] Treat attr of unspecialized builtin nn modules as static (#132318)"
b8f7019df0 : Revert "[dynamo] Track params/buffers and mark them as static (#132334)"
e0514a5b99 : [AOTI][refactor] Consolidate how python_kernel_name is set (#132320)
a9e1133faa : [AOTI][refactor] Move set_cpp_kernel to base class (#132319)
df781343e2 : Link libc10 to pthreads (#132484)
19897a1647 : [export] change deepcopy to copy in _replace_set_grad_with_hop pass. (#132181)
87d58cc81f : [4/N] Fix clang-tidy warnings in aten/src/ATen/native/ (#132001)
207e24ff83 : Enable clang-tidy on aten/src/ATen/cudnn/* (#130133)
0c491702c4 : [ONNX] Define the `TORCH_ONNX_USE_EXPERIMENTAL_LOGIC` flag (#132299)
9167113c16 : [easy][MPS] add torch.mps.is_available() (#132426)
fc32732596 : Don't attempt to compute hints for unbacked expressions (#132060)
8fff976355 : Revert "Refactor thunkify to return proper thunk abstraction (#132407)"
1197550876 : Revert "Don't attempt to compute hints for unbacked expressions (#132060)"
296c339f98 : Ensure compiler collective is called even when no graph is compiled (#132163)
82b6480b0a : Update SavedTensorHooks TLS stack to use SafePyObject (#131700)
9eeb5eebab : Revert "Ensure compiler collective is called even when no graph is compiled (#132163)"
fca2dba7ca : [pytorch][counters] Pybind for WaitCounter (#132357)
d224857b3a : Revert "Change signature of CompilerFn for register_backend decorator (#131880)"
63eb06c051 : Disable SymDispatchMode when torch.compile'ing (#132433)
5aafdc2f87 : [3/N] Fix clang-tidy warnings in aten/src/ATen/native/ (#132000)
78f4a3919f : Remove duplicate XPU switch case in DispatchStub (#132480)
ccf9ce8e8c : Change signature of CompilerFn for register_backend decorator (#131880)
053e5080f6 : Enable exception chaining in call_user_compiler (#131186)
48929184e9 : AutoHeuristic: mixed_mm heuristic for A100 (#131613)
b9cb1abf65 : [12/N] Use std::optional (#132361)
56f2917bef : [dynamo] Bugfix for recently added str handler (#132461)
0d9c9716b2 : Ensure compiler collective is called even when no graph is compiled (#132163)
d342dc0179 : Don't attempt to compute hints for unbacked expressions (#132060)
d903e664c6 : Refactor thunkify to return proper thunk abstraction (#132407)
290f09f829 : Ban decorator usage of dynamo_timed (#132328)
8668bc279d : [inductor] continue to fix restrict keyword. (#132463)
d2e9a8bf6d : [Reland] Fix inlining module-scoped store global (#132439)
a4ea776881 : Add pinned memory support to sparse COO/CSR/CSC/BSR/BSC tensors (#129645)
babb249a89 : [dynamo] Track params/buffers and mark them as static (#132334)
2ee9895304 : Support optimizer capturable on hpu and xpu (#132119)
f936e68506 : [CI] Update CPU inductor smoke test model list and target (#132221)
e5560d10f4 : [CUDA][SDPA] Fix expect export on sm90+ (#132194)
7d8b95e8fb : [easy] more debug in partitioner assert (#132456)
35d14d22a0 : Fix some issues detected by static analysis tools (#131989)
5ea0f51187 : [Dynamo] Support abc.MutableMapping.get (#132363)
2b86a7fcc7 : fix printing of scores and mods names (#132424)
07fe1dd58f : [13/N] Fix clang-tidy warnings in jit (#132411)
1250171866 : Use fresh inductor cache on unit tests (#132432)
6c4ce4331c : [dynamo][exception] Raise Observed KeyError exception for dict __getitem__ (#132425)
cd5452aace : [CUDA] `is_bf16_supported()` should not crash if there are no GPUs (#132313)
3a355c1891 : Correct sample creation of torch.histogram in UT op_db to align with PyTorch-defined operator semantics (#131630)
bc510916fa : Only make wait_tensor as a side_effect op (#132341)
ef426d5183 : [nccl] Wrap nccl code update with version check (#130419)
50ed6ce277 : Support built-in id function for TensorVariable on parameters (#130100)
64235c6a71 : Skip test_fp8 in test_aot_inductor to temporarily (#132453)
56334c854c : [2/N] Fix clang-tidy warnings in aten/src/ATen/native/*.{cpp,h} (#131834)
ee1ef066fd : add src map to data-dependent errors (#132393)
625af2d27c : [dynamo] fix add_push_null callsites with CALL_FUNCTION_EX (#132329)
0016be8051 : [Docker] Replace epel release rpm by yum install (#132449)
3855ac5a5d : Revert "[export] Add print_readable to unflattener (#128617)"
0c3ac428a2 : [BE][typing] fix types in common pruning (#132309)
87ddf70fc6 : Set weights_only=False in export `deserialize_torch_artifact` (#132348)
1362d51e7d : [AOTI] Fix number type for AOTI (#132180)
35400f750f : [torchbind] don't warning for certain skippable methods. (#132306)
2f54c38594 : [AOTI] Fix bfloat16 in CPU (#132150)
a356a03f4a : Fix DEBUG=1 asserts for mvlgamma backward with NJT (#132422)
92bebb46fa : Support XPU ABI=0 build (#130110)
997f64af38 : fastpath FunctionalTensor sizes() (#132084)
c8958f8f84 : Revert "Ban decorator usage of dynamo_timed (#132328)"
78927d37f6 : Add basic mypy annotations to inductor (#132416)
71e22e0959 : Add basic mypy annotations to dynamo (#132415)
12f61e65eb : [mtia][sdpa] MTIA SDPA dispatch via _fused_sdp_choice_stub (#132008)
596f568592 : [dtensor][debug] adding js script to pytorch github so that i can host the browser visualizer on pytorch (#132185)
9853c048eb : Ban decorator usage of dynamo_timed (#132328)
40c8f73099 : Revert "Fix inlining module-scoped store global (#132224)"
93979e7063 : Skip frame if torch dispatch mode enabled (#131828)
fbf3bc0a60 : Always use high precision for SDPA math backend (#128922)
0eea2b3947 : Cast inputs to low precision kernels in emulate low precision mode (#132345)
ce61300141 : Enable oneDNN for tanh based GELU on aarch64 (#130925)
97eba8e174 : [AOTI] Fix a typo in ExternKernel.codegen_const_args (#132191)
f467d55329 : Disable remote cache on test_aot_autograd_cache (#132409)
010fc7858a : [export] Fix serialization of OpOverload w/ SymInt outputs (#132126)
ff4ca0d02a : [Easy] Fix argument name collision in `HigherOrderOperator` dispatched functions (#132377)
7b816d7d6d : [dynamo] Treat attr of unspecialized builtin nn modules as static (#132318)
69cbf05529 : Fix recent build error on ppc64le (#129736)
30293319a8 : [BE][Easy][19/19] enforce style for empty lines in import segments in `torch/[o-z]*/` (#129771)
c59f3fff52 : [PP] Forward only schedule (#132177)
ee09d066d3 : [dynamo] Add line number to _warn_capture_scalar_outputs() (#132333)
35fcd59fd8 : [inductor] make restrict_keyword cross OSs. (#132394)
920f0426ae : Add None return type to init -- tests rest (#132376)
221350e3a4 : Add None return type to init -- tests (#132352)
a6985c09cb : Add None return type to init -- functorch and torchgen (#132351)
72d2dba992 : Add None return type to init (#132335)
30d7f0b15a : Remove wget call to builder install_cuda.sh (#132410)
c99adce9a1 : [12/N] Fix clang-tidy warnings in jit (#132209)
0d88dd0f77 : [TS2E] Remove reference to torch.onnx internals (#132186)
d7d6190493 : [11/N] Use std::nullopt and std::optional (#132396)
a4013e8b72 : [inductor] cpp codegen alignas for all OSs. (#132387)
f6fb80b0f9 : Fix mkl-static issue for Windows. (#132401)
6c1f1563e1 : [inductor] fix UndefinedTensorImpl singleton can't export on Windows. (#132326)
6ff1e43a41 : [BE][Easy][13/19] enforce style for empty lines in import segments in `test/j*/` (#129764)
672ce4610e : Populate submodules of `torch._C` to `sys.modules` recursively (#132216)
d95756f6a5 : [Quantizer][Add] Fix add annotation with constant (#132092)
bdd83c4c7f : Add Full block support to flex_decoding (#131404)
043e41f4f4 : [10/N] Use std::nullopt and std::make_optional (#132364)
d6a82ce39b : [dynamo] Track builtin nn modules with UnspecializedBuiltinNNModuleVariable (#132314)
aa0ed2496f : [dynamo] Wrap unspecialized nn module getattr with UnspecializedNNModuleSource (#132308)
612ea35395 : [dynamo] Introduce UnspecializedBuiltinNNModuleSource (#132312)
4c29c1a96a : [EZ] adjust test to accept training IR input (#131999)
7a779b5257 : Add functions from `torch.masked._ops` to `__all__` for `torch.masked` (#131288)
928adb7cc2 : Fix empty fake mode problem (#131995)
f32ab3b9e3 : Migrate Inductor scheduler, dependencies, ir, and codegen/common to use OrderedSet (#130004)
bcd1d2e832 : [dynamo] Introduce UnspecializedNNModule guard source (#132304)
e772547d70 : [dynamo][rename/refactor] Rename guard_source NN_MODULE to SPECIALIZED_NN_MODULE (#132302)
90fa64bd7e : [torch][take2] Implement BFloat16 __hip_bfloat16 overloads (#132234)
7911b7bfb7 : [inductor][cpp] stabilize do_bench_cpu (#131873)
b25ef91bf1 : [BE][Easy][18/19] enforce style for empty lines in import segments in `torch/d*/` (#129770)
bc7ed1fbdc : [FSDP2] add __repr__ to FSDPParamGroup and FSDPParam (#132350)
46ed33b207 : add decomposition_table as an arg to get_isolated_graphmodule (#130886)
073430ebea : Don't check for autograd state when lowering to inference IR (#131988)
81db69278d : unsupported sympy functions in export solver (#132325)
10344d76bd : Revert "[AOTI] Fix bfloat16 in CPU (#132150)"
a28cda11ef : Revert "AutoHeuristic: mixed_mm heuristic for A100 (#131613)"
589aef4bb0 : Fix py codegen to delete values that don't have any users (#131028)
718c13cd39 : [inductor] Reinplacing should not allow an op to mutate the same input multiple times (#132238)
344c15a0bb : AutoHeuristic: mixed_mm heuristic for A100 (#131613)
2276d9045a : [cpu] add more VecConvert for 8bits (#131876)
7c89ec0f7c : Implements torch.cuda.MemPool() API (#131152)
4e966e8a1c : Update inference_mode doc (#132321)
a488113062 : [AOTI] Fix bfloat16 in CPU (#132150)
6b28af1b79 : Grouped Query Attention (#128898)
f0da167ce5 : Add fx graph runnable to tl parse (#130976)
645c1052a6 : Refactor local autotune remote cache to make the code less error prone (#132289)
b0e06d9d6a : Make config.autotune_remote_cache be a three-way option (#132285)
260c991e20 : [inductor] Fix unsoundness with negative-valued indexing expressions (#131761)
e74ba1b34a : [BE][Easy][15/19] enforce style for empty lines in import segments in `torch/_d*/` (#129767)
ad9826208c : Remove string length limit in ET (#132169)
d3cefc9e3a : AutoHeuristic: Collect data for mixed_mm (#131611)
f8b6e91840 : Add sequoia runner to mac-mps (#132190)
d72e863b3e : Fix lint after PR #130572 (#132316)
aeb78c9849 : [TD] More files for test_public_bindings (#132284)
cb4c107d70 : [pytorch][counters] DynamicCounter (#132166)
dc38646c58 : Revert "[pytorch][counters] Pybind for WaitCounter (#132167)"
6955bc170d : Some updates to merge rules (#132296)
2138a710eb : enable test_max_pool2d6 after resolving empty array (#132219)
cfe61e84ac : Add a 'to' method for moving to and from device for BlockMask (#132087)
898a431a46 : Dump files that look like FX graphs to structured log (#132100)
f9e4d05c15 : Save and run post compilation steps within FXGraphCache (#130572)
b40249b462 : propagate XLA's metadata after functional sync (#131076)
7eb2a99585 : Fix to support unary pointwise ops when an NJT is not the first arg (#131937)
c3a31d90e7 : Fix inlining module-scoped store global (#132224)
6214b5388b : typing ir.py - part 1 (#131845)
144639797a : Improve side effects error message (#132223)
784a6ec5a3 : Revert "Migrate Inductor scheduler, dependencies, ir, and codegen/common to use OrderedSet (#130004)"
9826c542f0 : [inductor] skip remote fx caching in failing pattern matcher tests (#132206)
bdd7a0322d : [Dynamo] Fix - `str` handler for UserDefinedObjectVariable (#130506)
fe4f8e97cd : [Intel GPU] xpu-ops codegen via backend whitelist (#130082)
aec8bc5e4c : [easy] fix type annotation on constraint_violations variable (#127064)
c85088b1f9 : [ROCm] performance optimization for index select (#131713)
13d744464f : Migrate Inductor scheduler, dependencies, ir, and codegen/common to use OrderedSet (#130004)
2c7bd61afa : [pytorch][counters] Pybind for WaitCounter (#132167)
39a3c98aa6 : [inductor] fix scalar missing constructor for long type. (#132117)
b2118573d6 : [BE] Unify PG assignments (#132230)
9c52013559 : [subclasses] Fix nested subclasses flattened tensors ordering (#132096)
5406e46b00 : Revert "Add fx graph runnable to tl parse (#130976)"
3d7f541597 : [BE][TP] Check module has bias before access (#132137)
dad125a64b : Address clang-tidy nits in BFloat16 (#132203)
45e6a364ee : Avoid autocast deprecation warning (#132207)
f4f7aba75d : Expose function to probe whether PyTorch was built with FlashAttention (#131894)
548c460bf1 : [BE][Easy][7/19] enforce style for empty lines in import segments in `test/[a-c]*/` and `test/[q-z]*/` (#129758)
46994e753b : [NestedTensor] Integrate the layer normalization operator along the jagged dimension into NestedTensor (#132172)
89053e382a : [NestedTensor] Integrate the softmax operator along the jagged dimension into NestedTensor (#132170)
e7eeee473c : [BE][Easy][14/19] enforce style for empty lines in import segments in `torch/_[a-c]*/` and `torch/_[e-h]*/` and `torch/_[j-z]*/` (#129765)
9e473fd868 : Make adding Buffers more like adding Parameters (#125971)
a94e507c39 : [aota] Needs autograd if an input requires_grad, agnostic to enable_grad (#128890)
e9d1c26275 : fix uniform op in dynamo (#132160)
ae708e9791 : [ONNX] Remove the deprecated SymbolicContext (#132184)
89da94594e : [11/N] Fix clang-tidy warnings in jit (#132131)
91299c95ec : Revert "Add functions from `torch.masked._ops` to `__all__` for `torch.masked` (#131288)"
27c9262d29 : Fix stdout / stderr typing in SubprocessHandler (#132071)
52c3af62d6 : Add fx graph runnable to tl parse (#130976)
deb788f6cc : Merge `torch.nn.utils.rnn` type stubs (#131872)
78020ea55d : Add functions from `torch.masked._ops` to `__all__` for `torch.masked` (#131288)
df0494bbba : Clean redundant link libraries for XPU (#131322)
c07aa1c9c9 : [Easy] reorder functions in `torch._jit_internal` (#130531)
fbe6f42dcf : [BE][Easy][8/19] enforce style for empty lines in import segments in `test/[k-p]*/` (#129759)
914577569d : Remove python 3.8 nightly builds (#132138)
05317cd8f7 : [dtensor][be] improving readability and reducing repeating code (#132070)
f85feef127 : [DTensor] add support for custom op registration (#131108)
31205d5198 : [Inductor][CPP] Fix Local Buffer issue with inplace result line (#132018)
882d80fd92 : Add lowering for updated _scaled_mm (fixing submodules) (#130422)
fdcd2f0dd1 : [PT2][Optimus] Add unbind cat to view pass (#132152)
afb04d78c8 : Don't try hard to compute alignment of unbacked expressions (#131649)
5a33657b31 : [micro_pipeline_tp] implement the pass for fused_scaled_matmul_reduce_scatter (#131951)
524aac413c : Initial OpInfo-based testing for NJTs (#131704)
93facac02c : [NeuralNetInference] Bring up iOS builds (#131917)
53a5e0f1a8 : [BE] delete spmd module (#132072)
a141334c88 : mitigate wrong tensor.dim_order() (#131366)
2b43fab555 : [DTensor] Added naive support for `nn.init.orthogonal_` (#132104)
3e142d766a : [EZ] Make consistent with scale-config.yml (#132164)
69c34f6e4c : Corrects Error Codes from cudaHostRegister (#132089)
ff377e16ab : Improve logging in the TSConverter (#132082)
495d413519 : Include code object of frame being compiled in stack (#132161)
19db4f6014 : [capture_triton] fix special kwargs path (#132143)
1118c74b5f : [PT2] Port fuse_chunk_reshape_unsqueeze_concat_pass to PT2 pre_grad passes (#131902) (#132078)
d53b11bb6e : Strict shape checking for NJTs with TestCase.assertEqual() (#131898)
58f76bc301 : Revise skip torchrec logic (#130783)
964f97539f : [MPS] Correct nonzero warning and fix the test (#132127)
f2dedc910e : Improve SpeculationLog error message (#131982)
e6cddc9271 : Fix public API tests (#131386)
f217b470cc : [CMAKE] Avoid double setting of LDFLAGS (#130370)
3816f6420a : [BE] remove unnecessary _dispatch_sqrt by using ** 0.5 (#131358)
9f6d7df3d9 : docs(multinomial): Add reference to `Multinomial` class (#131904)
239d4d2489 : Revert "[reland][inductor] switch AotCodeCompiler to new cpp_builder (#130127)"
9027db1ab8 : TCPStore: fix remote address (#131773) (#131913)
3864a2d834 : [profiler ut] Update event name in test_profiler.py (#131757)
32c57e78ed : Specialize sym node when used as device kwarg (#131811)
33ce9cf7f9 : [FSDP2] Relaxed overlap timing check to avoid flakiness (#132116)
16e0868a3d : [FSDP] Add hpu device to _get_remote_device_str (#132120)
a843178529 : Let dynamo inline functional_call (#128646)
12b67bd998 : Fix pyi annotation for `ProcessGroupGloo.Options` (#132080)
499ead96ff : Revert "Grouped Query Attention (#128898)"
bdf57da6a6 : [3/N] Enable clang-tidy on torch/csrc/inductor (#132101)
eccbd408e5 : [10/N] Fix clang-tidy warnings in jit (#132122)
83db609ee5 : [inductor] fix the cudagraph tree test (#132043)
36e8289129 : [PT2][Optimus] Optimize cat node inputs pattern (#131866)
54d4f6bbca : [Inductor][FlexAttention] Correct partial/full blocks naming (#131993)
03e058189e : [dynamo] Support dict unpack of MutableMapping objects (#131961)
f806128619 : [dynamo] Skip <frozen abc> to skip __isisintance__ check on abc objects (#131956)
13457d1da0 : [dynamo][log] Suggest to use pytree when graph-break on optree (#131827)
fc6066b80f : improve mkldnn_linear_pointwise_binary performance for contiguous tensor with non default contiguous strides (#132019)
40f8db5741 : [audio hash update] update the pinned audio hash (#132105)
aa1488fe02 : [inductor] turn on enable_kernel_profile on Windows. (#132025)
475da800c7 : [inductor] optimize cflags for Windows. (#131980)
bdc42e3fb8 : [inductor] validate_can_generate_cpp_wrapper add win32 support. (#131978)
baa4c9ca46 : Optimize aten.cat calls of a repeated element (#132081)
f8e4060484 : [Inductor][CPP] Enhance cppcsevar data type deduce (#130827)
b6c1490cc0 : [dynamo] make more unpack_var_sequence calls forced (#132069)
8721b21b38 : Fix fake_tensor w/ non-view tensor (#132050)
9598c58618 : Add config option to skip autotuning conv (#131839)
5a2620302b : [inductor] Replace self_cuda_time_total function calls with self_dev… (#131029)
a147fa577b : [MPS] Fix masked_fill_ in non_contiguous cases (#131957)
3716934b1a : [Inductor] Refactor autotuning utils to compute max block sizes (#131730)
7a7dd8c29e : Revert "[NestedTensor] Integrate the softmax operator along the jagged dimension into NestedTensor (#131518)"
ab9791c0e3 : [export] Add print_readable to unflattener (#128617)
2a4d9aa548 : Disable expandable segments checkpointing internally (#132048)
be5e44192d : Revert "[NestedTensor] Integrate the layer normalization operator along the jagged dimension into NestedTensor (#131519)"
b1ccd0c407 : [CI] Update environment varible setting for aarch64 (#132046)
14ab5b5059 : Add single Python 3.10, single Cuda 12.1 build with dependencies included (#132094)
e3dc20c94b : [NJT] support cat backward (#132076)
5298acb5c7 : Back out "[1/2] PT2 Inductor ComboKernels - Foreach cases (#124969)" (#132065)
8b507a922a : Mode to emulate amp numerics (#131595)
884eadcd19 : Fix multi grad hooks thread safety (#132055)
e55e9d8126 : Clear speculation log when restarting due to compiler collective (#131983)
62b2e7a553 : Revert "Add config option to skip autotuning conv (#131839)"
8fe2bf212d : [NestedTensor] Integrate the layer normalization operator along the jagged dimension into NestedTensor (#131519)
d039b14207 : Grouped Query Attention (#128898)
05a8540041 : [cpp-wrapper] create null pointer for zero-size array (#132023)
d8358a2d86 : Made `register_multi_grad_hook` return type `RemovableHandle` (#132074)
d5e9fbb012 : Revert "BE: reset dynamo before each test in test_module.py (#131372)"
a4723b566f : Revert "BE: reset dynamo before each test in test_ops_gradients.py (#131397)"
bdf5a6dca9 : Add decomposition for unsqueeze_copy (#130942)
3c1562158e : [BE] Fix torch.compile docstring formatting issues (#131837)
dcb03106b7 : [Land Internally] MTIA equivalent of torch.cuda.memory_stats (#132007)
082d0b80ca : Min and max NaN propagation fix in MPS backend (#130445)
f44446e851 : [dynamo] Turn on inline_inbuilt_nn_modules (#131275)
4c2bcf92cb : [inductor] Enable FX graph caching in OSS by default (#125863)
484852c02b : [Doc] update guide install mkl-static from conda to pip (#130026)
301ec32ae8 : [EASY][TEST][CUDA] Fix typo in test_graph_make_graphed_callables_same_pool (#132059)
5cc34f61d1 : [CI] add new test config label `ci-test-showlocals` to control test log verbosity (#131981)
4694ee1ad2 : [BE][tests] show local variables on failure in tests (#131151)
ab912b7fef : [2/N] Fix clang-tidy warnings in inductor (#132040)
c764ef6d53 : [9/N] Fix clang-tidy warnings in jit (#132010)
f389bca2e9 : [dynamo][inline_inbuilt_nn_modules] Skip test_dpp_graphs for now (#132053)
6c6fbb4691 : Fix pyi annotation for ProcessGroupNCCL.Options (#130957)
025242d065 : [cpu-test] enable test_cpu_repro in fbcode (#132022)
ca8153ae67 : BE: reset dynamo before each test in test_ops_gradients.py (#131397)
527901f054 : BE: reset dynamo before each test in test_module.py (#131372)
bd1a29b158 : [BE][Ez]: Update ruff to 0.5.5. Bugfixes and better LSP support (#132037)
6cf493158e : Revert "Enable FlashAttention on Windows (#131906)"
3d4de8e96d : Add config option to skip autotuning conv (#131839)
e73a4cb21f : Revert "[pt2e][quant] Ensure BN node is erased after convert (#131651)"
f72266ecea : Revert "Let dynamo inline functional_call (#128646)"
962f248437 : Add decomposition for expand_copy (#130940)
e393c7fa05 : Tighten torch.library.infer_schema input types (#130705)
957a89f56c : Revert "[inductor] Fix unsoundness with negative-valued indexing expressions (#131761)"
ca254d145f : [BE][Ez]: Update fmtlib submodule to 11.0.2 (#132036)
5aab1acc84 : Let dynamo inline functional_call (#128646)
e0e4e84ef9 : wrap self.call_function(...) in try finally block to undo changes to self.kw_names (#130490)
1e9cdf7d91 : Relax constraints for creating a `GenericContextWrappingVariable` (#129091)
6cbad37bee : make `_inductor.config.rocm.supported_arch` set order deterministic for caching (#131921)
14108c1677 : Fix error handling in _triton.py (#132006)
be3eba382f : [CI] Run perf test for perf_cpu_aarch64 (#132038)
c35f21e5fc : Revert "[BE][tests] show local variables on failure in tests (#131151)"
06fe99a097 : Revert "[CI] add new test config label `ci-test-showlocals` to control test log verbosity (#131981)"
7ef927da15 : Revert "[dynamo] Turn on inline_inbuilt_nn_modules (#131275)"
efca51e171 : [8/N] Fix clang-tidy warnings in jit (#131997)
eb9409511e : Revert "support zb1p and zb2p algorithms (#130752)"
9d497887b8 : Changes to support clang-19 (#131905)
b67811abda : [1/N] Fix clang-tidy warnings in inductor (#131979)
d47c470f47 : [dynamo] implement `var_getattr` in UserFunctionVariable (#130413)
dfa18bf3f3 : [CI] add new test config label `ci-test-showlocals` to control test log verbosity (#131981)
f151f25c0b : BE: reset dynamo before each test in test_torch.py (#131388)
30e7fc0fe1 : Cpp wrapper: set args to CppWrapperKernelArgs in cpp template kernel (#129557)
03760be271 : [inductor] Fix unsoundness with negative-valued indexing expressions (#131761)
2a02b5cd22 : [Intel GPU] Dispatch Stub support (#130019)
5b3b2b9cc7 : [7/N] Fix clang-tidy warnings in jit (#131996)
ddd539ba6c : [6/N] Fix clang-tidy warnings in jit (#131986)
7b0e10f0e5 : fix _MaskPartial when multiple embeddings coexist (#131264)
0ab6551bcb : [inductor] Handle NoneLayout in count_numel (#131645)
7c1fbc7fe9 : [5/N] Remove unused parameter (#131998)
f901b02066 : [Distributed] Do not expose `nlohmann/json.hpp` in public headers (#131925)
75c8d59ea1 : Remove mypy ignore from torch/_dynamo/variables/lazy.py (#131785)
7c29665f77 : Remove mypy ignore from torch/testing/_internal/distributed/ (#131870)
2e4807575c : Remove mypy ignore from torch/_dynamo/polyfill.py (#131786)
cc512ea0f6 : [inductor] Fix flaky tests in test_aot_inductor.py (#131994)
6de65d5dd4 : [dynamo] Turn on inline_inbuilt_nn_modules (#131275)
8927fc209f : [inductor] Add type hints to functions in debug.py (#131836)
500aea8d50 : Build PT aarch64 on arm runner (#131964)
945bf78894 : Revert "[BE] typing for decorators - fx/_compatibility (#131568)"
b002ec61b6 : Revert "[BE] typing for decorators - masked/_ops (#131569)"
a3ba405871 : Revert "[BE] typing for decorators - library (#131570)"
a0abb77007 : Revert "[BE] typing for decorators - distributed/_tensor/ops/utils (#131571)"
a8a9882899 : Implement fused_scaled_matmul_reduce_scatter for async-TP (#131950)
0538a69a8d : [micro_pipeline_tp] support all-gather -> _scaled_mm (#131833)
492e9a4886 : [micro_pipeline_tp] add support for type-erased all-gather pattern observed in DTensor + float8_experimental (#131832)
fd5b7d4bf9 : Revert "[BE] typing for decorators - _meta_registrations (#131572)"
609447a626 : Revert "[BE] typing for decorators - _jit_internal (#131573)"
4684b8e9d7 : Revert "[BE] typing for decorators - _inductor/lowering (#131574)"
07b7f51877 : Revert "[BE] typing for decorators - _inductor/fx_passes/post_grad (#131575)"
6a0c3bae21 : Revert "[BE] typing for decorators - fx/experimental/migrate_gradual_types/constraint_generator (#131576)"
b1d640a2b7 : Revert "[BE] typing for decorators - ao/quantization/quantizer/xnnpack_quantizer_utils (#131577)"
d3c17fea90 : Revert "[BE] typing for decorators - _library/custom_ops (#131578)"
065d0fe570 : Revert "[BE] typing for decorators - fx/experimental/graph_gradual_typechecker (#131579)"
5ced63a005 : Revert "[BE] typing for decorators - utils/flop_counter (#131580)"
2c4023d65f : Revert "[BE] typing for decorators - _refs/nn/functional (#131581)"
e448f32944 : Revert "[BE] typing for decorators - signal/windows/windows (#131582)"
d90f6b45c0 : Revert "[inductor] Add type hints to functions in mkldnn_fusion.py (#131820)"
8f5cf46405 : Revert "Fix public API tests (#131386)"
7be0ce51b6 : Fix handle serialization error (#131871)
3e0ccb3a9f : Fixing fake tensor SymInt caching (#131966)
d07a125af2 : [Inductor] supporting pointwise intermediate nodes in B2B-GEMM (#131685)
14158d892a : [BE][tests] show local variables on failure in tests (#131151)
466ea8ce54 : Add fallback() to torch.library (#131707)
8e5a367311 : [5/N] Fix clang-tidy warnings in jit (#131969)
918ece4f4d : [BE][Easy][11/19] enforce style for empty lines in import segments in `test/dy*/` (#129762)
ae9f17a821 : [aoti] Rename OSS DynamicArg and OpKernel (#131862)
8cdfdb41bc : Revert "[NestedTensor] Integrate the layer normalization operator along the jagged dimension into NestedTensor (#131519)"
07389163f0 : [C10][BE] Use range loop (#131922)
f83ef69b84 : Fix typo in assignment operators (#131890)
c82441e07a : Fix std::optional checking bug (#131874)
93a4671746 : Add out_dtypes to fused_all_gather_scaled_matmul's args (#131831)
12cd040edd : [micro_pipeline_tp] exclude simple overlappable collectives as micro-pipeline TP candidates when reorder_for_compute_comm_overlap is enabled (#131410)
36d24925c6 : [inline_inbuilt_nn_modules][inductor-cpu] More skips for dynamic shapes when inlining enabled (#131948)
aee6bcdba4 : [Traceable FSDP2][Inductor] Apply compute/comm reordering passes to achieve overlap (#131614)
9e06572704 : [Traceable FSDP2][Inductor] Create grouped nodes for FSDP2 all-gather code block and reduce-scatter code block (after Buffer/Operation split) (#131510)
99e13e68e9 : [4/N] Fix clang-tidy warnings in jit (#131903)
f862f45730 : [NestedTensor] Integrate the layer normalization operator along the jagged dimension into NestedTensor (#131519)
bcf5c68c18 : [NestedTensor] Integrate the softmax operator along the jagged dimension into NestedTensor (#131518)
c49e857d32 : [pt] immutable accessors in graph signature (#131940)
96c1862e0b : Remove mypy ignore from torch/_dynamo/variables/__init__.py (#131784)
1bfe7eb7e6 : Update how we do sdpa testing (#131743)
bcdba9f91d : Added hpu backend support in fsdp utils (#127757)
28fd2e905d : [inductor] enhance cpp_builder lint check. (#131752)
a90b8b967a : [inductor] enable windows inductor UTs (#131767)
3768faec2f : carry cond in data-dependent error (#131932)
9606d61e0c : [reland][inductor] switch AotCodeCompiler to new cpp_builder (#130127)
fdf1451bfa : Add `__all__` to torch.optim to define public interface (#131959)
8458980bbf : Move benchmarks/dynamo/huggingface configuration to YAML (#131724)
ef8d118c67 : Sync with changes to test-infra's scale-config.yml (#131955)
8b04edcac1 : Delete unused yml files (#131298)
1e00f055a4 : Move distributed experimental jobs back to the amazon2 for now (#131963)
91fcfd8760 : Fix public API tests (#131386)
02b922900b : [aoti] Fix float16 and bfloat16 for generated GPU code (#131437)
0272934238 : [Inductor][CPU] Fix an InvalidVecISA issue on CI (#131812)
5489ff8e94 : Use Mermaid for the diagram in torch/ao/quantization/fx/README.md (#131412)
16cd1aaa1d : [inductor] Improve sort kernel perf (#131719)
b90bc66766 : Enable FlashAttention on Windows (#131906)
d73b55d64b : Support meta tensors as inputs to the triton_kernel_wrapper HOPs (#131896)
fb98cd33f1 : [inline_inbuilt_nn_modules][inductor-cpu] Skip test_quantized_linear_amx (#131928)
c8626a4e1f : [BE] add a list of inductor test files to skip resetting dynamo (#131551)
fde577702d : [TD] More synonyms for filepath (#131838)
1bda3a3135 : Migrate nightly.yml workflow & docs to Amazon 2023 (#131821)
0e6df1e0fb : Disable remote cache on test (#131908)
071ac38141 : fast-path FakeTensor detach (#131899)
2ec8312a28 : Add rerun_disabled_tests for inductor (#131681)
da1a1fa55f : Move load_yaml_file to common (#131924)
6c95f79645 : [CI] Increase the timeout for aarch64 docker build (#131926)
782efd8e5b : Revert "Add rerun_disabled_tests for inductor (#131681)"
0f9bf208ec : Revert "[BE][tests] show local variables on failure in tests (#131151)"
a3cdbd8189 : [FlopCounterMode] Fix register_flop_formula (#131777)
cd53698df0 : Add hpu backend support for dynamo torchVariable _in_graph_classes() function (#129948)
5f2c80d16d : Add inductor OrderedSet (#130003)
1dd10ac802 : [BE] [Reland] Make nn.Module state_dict load_state_dict pre-hook and state_dict post-hook public (#131690)
8158cf2f59 : [c10d] Fix split_group usage when there is a single rank (#131824)
e191b83462 : Revert "Add wrappers for synchronous GPUDirect Storage APIs (#130633)"
e4db5dc1c4 : Revert "[BE] remove unnecessary _dispatch_sqrt by using ** 0.5 (#131358)"
2576dbbc35 : [dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725)
35b4de32fa : [dynamo] add itertools repeat/count bytecode reconstruction (#131716)
40cc5c0697 : [AOT Autograd] Donated Buffer (#130580)
9589d986fa : [UT] Relax atol for test_non_contiguous_input_* (3 tests) (#131822)
161bb67116 : Revert "Fix static `py::object` dangling pointer with `py::gil_safe_call_once_and_store` (#130341)"
c382fc3fea : [Reland] Fix vulkan builds with missing overrides errors (#131760)
1a2edf6dca : [AOTI] Fix _mm_plus_mm codegen (#131689)
696e83a1da : Revert "TCPStore: fix remote address (#131773)"
404a8ae8f6 : [export] fix set_grad x tensor constant. (#131787)
bb64702eb3 : Revert "[reland][inductor] switch AotCodeCompiler to new cpp_builder (#130127)"
d57de73fe0 : AutoHeuristic: Add support for kernel choice selection (#131610)
a38890a53f : Revert "[2/3] 3D Composability - move pp tests (#129801)"
13ab92b72d : [dynamo][recompile-logs] Suggest force_parameter_static_shapes on the recompile log for parameter-related recomps (#131825)
7feaa73057 : [export] Remove deprecated fields from ExportedProgram ctor. (#131697)
546df5daf8 : Revert "[3/3] 3D Composability - move tp dp tests (#129802)"
2988d33c80 : [3/N] Fix clang-tidy warnings in jit (#131830)
5612408735 : _get_operation_overload: dont raise exception when overload does not exist (#131554)
eba2ffd278 : [pt2e][quant] Ensure BN node is erased after convert (#131651)
9440a4824d : [CI][dashboard] Add a workflow to collect A10g perf (#131816)
535c17efb3 : [torch] Implement c10::BFloat16 ctor from __hip_bfloat16 (#131359)
e4ace1a396 : AOTDispatcher: properly bump version counter on input mutations in inference graphs (#131665)
5570a0da0a : dont dispatch aten.conj(scalar_tensor) back to python (#131482)
8bb9aa93a7 : dynamo: mutations on .data should be invisible to autograd (#131403)
7339c8ab28 : Revert "immutable accessors in graph signature (#131807)"
e76e566cfb : [Dynamo] Support zip_longest (#131497)
c9888c2739 : Revert "[BE] typing for decorators - optim/optimizer (#131583)"
7ee6831ae8 : Revert "Fix vulkan builds with missing overrides errors (#131760)"
d3e932dc10 : [CI] Add inductor cpu accuracy test running on AVX2 runners (#128682)
e73fa28ec8 : [CI] Fix arm64 docker build arch (#131869)
608057afe2 : [inductor] Fix duplicated range tree codegen in split scan (#131669)
945946e817 : [AOTI] Fix another ABI-compatible CPU issue (#131798)
7d282d8755 : [dynamo] add lazy IteratorVariable implementations for map and zip (#131413)
115994fea2 : [aotd] Align partitioner graph output type to tuple (#131759)
1e24f7875e : [AOTI] Fix ABI-compatible mode link issue for CPU (#131791)
6fd28fc228 : immutable accessors in graph signature (#131807)
bceb91222c : Fix meta error in _convert_weight_to_int4pack (#130915)
2bf649f5ae : suggested fix for data-dependent error (#125378)
fb3ddafbcf : [inductor] Add type hints to functions in mkldnn_fusion.py (#131820)
13e806a591 : [NestedTensor] Add support for transposed NestedTensors where ragged_idx > 1 for sum and mean operators (#131517)
63374dda69 : [BE][Easy] explicitly define global constants in `torch.testing._internal.common_utils` (#129826)
aebfd3d4de : [CUDAGraph] skip cudagraph if too many distinct sizes (#131387)
16d7cb5049 : [CUDAGraph] Type annotation for cudagraph_trees.py (#131621)
dfba85c26b : Update torch-xpu-ops pin (ATen XPU implementation) (#131643)
baa93e160f : [MPS] Add native implementation for shift ops (#131813)
a1dad77dfa : [BE] typing for decorators - optim/optimizer (#131583)
8689d377f9 : [BE] typing for decorators - signal/windows/windows (#131582)
dbf7c318b2 : [BE] typing for decorators - _refs/nn/functional (#131581)
81c26ba5ae : [BE] typing for decorators - utils/flop_counter (#131580)
33069630ce : [inductor] Add type hints to functions in decompositions.py (#131780)
5b05ad9697 : fix non-persistent buffers (#131756)
a617919541 : [dynamo] Do not guard on keys for _forward_hooks and _forward_pre_hooks (#131682)
3d7c424a75 : [inductor] update users to buffers instead of scheduler nodes (#131796)
6dbf343936 : Fix aten implementation for low memory max_pool2d (#131717)
c2f3266c8e : Not remove collective ops in dce since they have side-effect (#131023)
e0d3e4a498 : remove unused code for XPU (#131856)
236d055330 : [Traceable FSDP2] Add partial-graph (graph-break) unit tests (#131747)
03f49c9523 : Revert "[CUDAGraph] Type annotation for cudagraph_trees.py (#131621)"
16699c7d84 : [CUDAGraph] Type annotation for cudagraph_trees.py (#131621)
2ff98bc57f : [inductor][autotune_at_compile_time] fix some codegen-ing for standalone autotuning file (#131726)
b343644f3a : Revert "MTIA equivalent of torch.cuda.memory_stats (#131673)"
b893a57f96 : [Dynamo] Fix guard_on_nn_modules unit tests discrepancy between OSS and fbcode (#131810)
246e32055a : [benchmark] Add hf_T5_generate to inline_inbuilt_nn_modules (#131804)
c92f2a19a4 : [BE] Use assertEqual in MultiKernel tests (#127725)
9ae288f4be : [inductor] Simplify multi-kernel codegen by unifying kernel args (#127724)
14920c149b : Revert "[dynamo] Turn on inline_inbuilt_nn_modules (#131275)"
adbe4f5ecf : TCPStore: add better logging on wait timeout (#131808)
e9443860e7 : add python binding for _get_current_graph_task_keep_graph (#131038)
eac83479cc : Enable Wunused-function and Wunused-result globally (#131596)
2a4ca5ccc4 : [dynamo] Pop the exception stack on handling the StopIteration natively (#131801)
11673851d9 : [dynamo][exception][bugfix] Add a pop for < 3.11 version (#131795)
f885a70fab : [inductor][autotune_at_compile_time] support Triton kernel with sympy fn str arg (#131253)
b4b62d3945 : update to 2.5.8 (#131684)
51f4f87718 : [Reland] Ensure staticmethods can be allowed in graph (#131789)
4de85e3c30 : [DeviceMesh] Remove _parent_mesh as an attribute from DeviceMesh and remove it from DeviceMesh's hash (#131636)
79f0c4dc04 : [BE] typing for decorators - fx/experimental/graph_gradual_typechecker (#131579)
c65b197b85 : [BE] typing for decorators - _library/custom_ops (#131578)
5ee6a6dacc : [BE] typing for decorators - ao/quantization/quantizer/xnnpack_quantizer_utils (#131577)
37d76c7d48 : [BE] typing for decorators - fx/experimental/migrate_gradual_types/constraint_generator (#131576)
42dc5a47a1 : [BE] typing for decorators - _inductor/fx_passes/post_grad (#131575)
b2cbcf710b : [BE] typing for decorators - _inductor/lowering (#131574)
f0f20f7e97 : [BE] typing for decorators - _jit_internal (#131573)
bfe0079b72 : [BE] typing for decorators - _meta_registrations (#131572)
4b985e6f80 : [BE] typing for decorators - distributed/_tensor/ops/utils (#131571)
5731b486c8 : [BE] typing for decorators - library (#131570)
aa58af8b43 : [BE] typing for decorators - masked/_ops (#131569)
193f62fde9 : [BE] typing for decorators - fx/_compatibility (#131568)
709ddf7a9d : Add wrappers for synchronous GPUDirect Storage APIs (#130633)
0455344777 : [dynamo] Turn on inline_inbuilt_nn_modules (#131275)
513ce5f69a : MTIA equivalent of torch.cuda.memory_stats (#131673)
9039131a89 : TCPStore: fix remote address (#131773)
520182dbff : [reland][inductor] switch AotCodeCompiler to new cpp_builder (#130127)
a34692c0a3 : [Inductor] Added and_masks and or_masks utilities & make fully masked out rows 0 instead of nan (#131552)
89bdd9c18f : [kineto] populate src/dst rank for p2p (#130812)
1c58aacbc8 : [dtensor] move ops to private (#131211)
605dfd8fb4 : Switch sync_distributed_folder to use non-reverse order (#131683)
fe2e6f0c51 : Revert "[reland][inductor] switch AotCodeCompiler to new cpp_builder (#130127)"
1ad4e6f228 : Refactor cudagraphs to use serializable placeholder info (#130252)
69d63b2318 : [CUDA][Pooling] Clean up unused `accscalar_t` in `maxpool2d` forward (#131728)
fdc4d6fe96 : [inductor] Refactor fusion of inplace operations (#130835)
61d7bb3e79 : Migrate trunk workflows to Amazon2023 ami (#131677)
a6ebd56f7b : Factor out cudagraph post compile into its own function (#129384)
58b8704f28 : [aot] Keep backward mutations in backward (#129130)
6c31e02971 : Fixes the example for `convert_conv3d_weight_memory_format` (#131742)
fba24252bd : [dynamo][frame summary] Skip frame summary for frames from inside torch/nn/modules (#131744)
a1fad03fa8 : [ROCm] Enable cudagraph expandable segments UTs in inductor/dynamo (#131111)
8c4683c978 : Add device argument to the large_grid unit test (#131702)
bf6aae1468 : Improve `torch.masked.mean` and `torch.masked._std_var` scaling (#131293)
2c1851f04e : [export] fix output node's meta (#131706)
dfc9bfc883 : [reland][inductor] switch AotCodeCompiler to new cpp_builder (#130127)
f3df7deab8 : Revert "Add flag to ignore unsupported @triton.autotune args in user-written kernel compilation (#131431)"
2423d89d0c : [dynamo] mirror training flag in OptimizedModule (#131546)
c3679bed35 : Revert "Fix py codegen to delete values that don't have any users (#131028)"
ec3829795d : [3/3] 3D Composability - move tp dp tests (#129802)
29571c5c06 : [2/3] 3D Composability - move pp tests (#129801)
75c4176b05 : [export][BE] consolidate export and export_for_training (#131496)
6bc8db1d32 : Rename is_training flag to have more information (#131618)
f063027d54 : [aoti] Fix constant inputs passed to aoti (#131594)
ffc6bf8149 : [dynamo] lazily guard and specialize on the symint when used in f-string. (#131529)
96e8df6a3a : [ts_converter] Support prim::max and prim::if with multiple outputs (#131593)
b07ea91c4c : [2/N] Fix clang-tidy warnings in jit (#131735)
49a8e061b6 : Revert "Support IPC for Expandable Segments (#130890)"
a4be5cb50e : Simplify some c++ code (#131612)
c3d099ddd1 : [BE][Easy] Add hooks to doc for Optimizer base class (#131628)
745b55d14a : [CI][dashboard] Add a workflow to collect aarch64 perf (#131729)
1eedb0a962 : fix torchrun log message (#131652)
d0e2ab617d : Migrate conda, manywheel and libtorch docker builds to pytorch/pytorch (#129022)
4a5a87168e : [BE] typing for decorators - _prims_common/wrappers (#131567)
7260eaeca0 : Fix vulkan builds with missing overrides errors (#131760)
fddb1bcdea : [CCA][Memory Snapshot] Move user_defined annotations to Native Caching Allocator (#130964)
c88c90a897 : [TS2E] Improve logging (#131711)
316c0d3e6b : [inductor][cpp][gemm] support k slicing for static shapes (#130821)
d962dba0c4 : Revert "[2/3] 3D Composability - move pp tests (#129801)"
9c4cf866c2 : Adafactor forloop basic impl (#129905)
e8956c9fe6 : Allow cpu scalar to be moved to HPU in masked_fill_decomposition (#127871)
91aba7baac : Fix py codegen to delete values that don't have any users (#131028)
2784b3f1b7 : [inductor] Fix split-scan interaction with multi-kernel (#131044)
c04f70bb30 : [BE] enable UFMT for `torch/ao/` (#128864)
434f60ce33 : Refactor nightly checkout tool (#131134)
054d214c50 : [BE][tests] show local variables on failure in tests (#131151)
c4bf4005d1 : [dtensor][debug] adding new noise level which allows users to only print operations with dtensors (#131592)
41e9f9cb7c : [inductor] Fix flaky tests in test_select_algorithm.py (#131709)
3afdbecb23 : [inductor] Fix flaky tests in test_debug_trace.py (#131722)
059f9fb30b : [BE][inductor] Type annotate `codecache.py` and `config.py` (#131427)
ace6decc99 : Fix static `py::object` dangling pointer with `py::gil_safe_call_once_and_store` (#130341)
59ef88ea5b : [inductor] Fix flaky tests in test_pad_mm (#131699)
ee996cd63c : [inductor] Fix flaky tests in test_benchmark_fusion.py (#131733)
42a4df9447 : Support CUDA nightly package in `tools/nightly.py` (#131133)
ceab3121de : [inductor] Fix flaky tests in test_memory_planning.py (#131703)
35bb0d3638 : Fix unsigned type bug in CUDACachingAllocator.cpp (#131464)
5f3f14e5e4 : [BE] Annotate subgraph_lowering (#131545)
00e19ae97a : [MTIA] Support module.mtia() (#131499)
2ce734cee9 : [BE] enable UFMT for `torch/ao/quantization/` (#128863)
a2f6eb33d0 : Register buffer in static input test (#131686)
62704db5c3 : [Distributed] [10/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d/control_plane (#131671)
2d7c135757 : Bump setuptools from 69.5.1 to 70.0.0 in /tools/build/bazel (#130893)
d6115439be : [MPS] Add SDPA implementation (#131362)
d98d00487d : [2/N] Remove unused variables (#131468)
538258bc13 : [1/N] Fix clang-tidy warnings in jit (#131034)
46e42ae85d : [4/N] Fix Wunused-parameter warnings (#131291)
03979a599e : [BE] enable UFMT for `torch/ao/pruning/` (#128862)
973a1362b9 : [BE] enable UFMT for `torch/ao/nn/` (#128861)
c047bddbca : [easy][dynamo] Update test for inline_inbuilt_nn_modules (#131718)
01bc2a8165 : [inline-inbuilt-nn-modules] Skip mobilenet_v2 test for cpu inductor (#131694)
b5c006acac : [BE][Easy] enable UFMT for `torch/nn/` (#128865)
8ea4c72eb2 : [1/N] Fix clang-tidy warnings in aten/src/ATen/native/*.{cpp,h} (#130798)
ab609d6aa6 : [ts_convert] Update conversion for aten.tensor (#131549)
e20fb5e975 : [PTD][c10d] Include PG status into flight recorder (#131268)
c3fe9075a9 : [ROCM] Use hipblaslt version from hipblaslt runtime instead of header for tunableops validator (#131078)
803c5b8640 : [CMake] Fix private compile options for CUDA code (#130546)
7a42470bcb : Annotate all InstructionTranslator (#131509)
7535b23a25 : [export] Fix set_grad hoo if output is empty (#131511)
29c9f8c782 : [export] Fix `graph_break` log registration error when importing export/_trace.py (#131523)
236e06f9f9 : Revert "Ensure staticmethods can be allowed in graph (#130882)"
5db5865614 : Revert "Annotate all InstructionTranslator (#131509)"
a7e20ef7e4 : [BE] Get rid of missing destructor override warning (#131204)
b56939dae1 : Annotate more InstructionTranslator (#131680)
f9322c26b2 : Remove _export/exported_program.py (#131597)
eb54ca7abe : Revert "[BE] Get rid of missing destructor override warning (#131204)"
544f950d14 : [BE] Improve error message when there are internal changes (#131547)
7f61324268 : Add sparse block to flex_decoding kernel (#130884)
b90aa18569 : [aoti] Add initial custom op support (#127034)
44fdf24967 : [BE] typing for decorators - jit/_decompositions (#131566)
2b83e4f8d7 : [ROCm] Enable flex decoding unit tests (#131048)
84cd062fb2 : [2/3] 3D Composability - move pp tests (#129801)
a9e6356271 : [ONNX] Update torch.onnx.export API (#131501)
9db567f17d : [ONNX] Set dump_exported_program to True in bench (#131670)
85fa66be04 : Add rerun_disabled_tests for inductor (#131681)
65ce2bf465 : Allow setting `PYTHON_LIB_REL_PATH` via environment variable (#128419)
074b46b7d9 : [1/3] 3D Composability - move fsdp tests (#129800)
e0f1bf14a4 : Fully type torch/utils/_config_module.py (#131676)
05681b6838 : Migrate missed experimental jobs to Amazon2023 AMI (#131485)
05064f2827 : [CI] Move all ROCm jobs to periodic frequency (#131637)
8aff6caf67 : [CI][dashboard] Rename cpu-x86 to cpu_x86 (#131658)
3ce6f61416 : [AOTI] Support fallback ops not in inductor_fallback_ops (#131247)
aeca9845a6 : Migrate Lint jobs to Amazon 2023 AMI (#131514)
b98b3127f7 : [easy][pytorch][counters] Move WaitCounter in c10/util (#131021)
7718024d2b : [3.13] support 3.13 multiline traces in munge_exc (#131207)
f0378912a0 : [3.13, dynamo] fix test/dynamo/test_bytecode_utils.py (#131206)
a86909d251 : [inductor] Type annotate constant_folding.py (#131364)
8fe5b93667 : support zb1p and zb2p algorithms (#130752)
5e6cfb7db5 : Add an extra shard for distributed periodic jobs (#131498)
106c6a49f5 : [dynamo] limit number of compiles per frame (#130891)
abcd329359 : [BE] typing for decorators - onnx/symbolic_helper (#131565)
0e71a88f9b : Support IPC for Expandable Segments (#130890)
eb5883f8aa : Add new runner labels to actionlint (#131525)
72d17d95d7 : [inductor] Enable dynamo for Windows. RC1 (#131286)
4c7f22dee2 : [BE] remove unnecessary _dispatch_sqrt by using ** 0.5 (#131358)
98984422eb : [triton_op] fix autotuning (#131363)
bc938184de : [FSDP2] Added `set_reduce_scatter_divide_factor` (#129286)
8ffd109a00 : Revert "Fix py codegen to delete values that don't have any users (#131028)"
451462dbff : [1/N] Add missing constructors or assignment operators (#131077)
0c6f1ca064 : Introduce torch._dynamo.config.enable_compiler_collectives for syncing compilation across ranks (#130935)
85d3ee1d67 : [micro_pipeline_tp] refactor all-gather and reduce-scatter pattern matchers to be more flexible and testable (#131409)
89d5391bbf : [inductor] Kill mark_node_as_mutating (#130834)
6415c45da5 : [inductor] Use multiple outputs for flex-attention (#130833)
95c248751b : [inductor] Make UserDefinedTritonKernel a multi-output operation (#130832)
a4c3f29047 : [ONNX][BE] Remove ruff skips in torch/onnx (#131368)
62e566b345 : [BE] Remove suppression of inconsistent missing overrides (#131524)
83d19620f6 : kill tmp _is_executorch flag (#131488)
1e34870796 : [CI][dashboard][reland] Collect PT2 cpu perf nightly (#131560)
276b5238ef : [bug] Add is_compiling check for optimizers to avoid untracked tensor during graph tracing (#130909)
41189b0da4 : Simplify THPEvent_get_device (#131466)
e782918b8e : [NestedTensor] Add example NestedTensor objects with inner dimension of size 1 to tests reducing along jagged dimension for NestedTensor (#131516)
e9db1b0597 : Add flag to ignore unsupported @triton.autotune args in user-written kernel compilation (#131431)
eafbd20f23 : Annotate all InstructionTranslator (#131509)
5772c13f56 : Dont wrap negative indexing in scatter reduce (#131503)
9f96d4b61b : Disable inlining on cudagraph fallback tests (#131557)
9575b1afad : Ensure tensor dict is populated with compiled autograd (#131556)
dffbd3a1e2 : Add mypy typing to pattern_matcher (#131506)
7124efa81b : Include _native.h for structured_native_functions (#131208)
31da9ee711 : Use explain function to provide more meaningful information when conversion failed. (#131214)
0ceaabaf71 : [easy][inline-inbuilt-nn-modules] Update test (#131563)
0e780a7d69 : [BE] Remove some mypy allow-untyped-decorators that are no longer needed (#131564)
abb313b466 : [torch.mtia] Noop set_rng_state and get_rng_state APIs (#130873)
aa1c78c7e9 : [PTD][c10d][EZ] LOG error for nccl error rather than info (#131483)
466c167b71 : Fix py codegen to delete values that don't have any users (#131028)
14495ce288 : [BE][MPS] Use `isOperatingSystemAtLeastVersion:` (#131513)
76f7b3e560 : [inductor][cpp][gemm] improve thread blocking heuristics (#131024)
fdc9a1404e : Remove _BLACK_LISTED_OPS (#131361)
2cf220956a : [inductor] fix CacheBase.get_system on AMD (#131365)
480ae51f85 : [pytree] Only import optree if it's used (#131478)
6850e42266 : [dynamo][exception] Remove older specialization for StopIteration (#131512)
e2b941a1b4 : [dynamo] Rename TENSOR_ALIASING to OBJECT_ALIASING. Permit OBJECT_ALIASING for dict guards (#131480)
e39f136c35 : [debug][dtensor] implemented activation checkpointing differentiation (#130996)
7b375c3682 : [dtensor][debug] changed which module tracker I inherited from to fix bug with activation checkpointing (#131419)
161c18ed0b : SymmetricMemory-based, low contention intra-node all-gather and reduce-scatter (#130583)
1930698140 : Fix fake tensor SymInt caching when there's a SymInt storage_offset (#131500)
fc3d2b26cd : Use fake PG for test_compute_comm_reordering.py unit tests (#131415)
980bb54361 : [BE][Inductor] fix failures in test_padding.py (#131417)
53f1f75061 : [BE][Inductor] fix do_bench test (#131402)
5a0068cc69 : [BE] mypy: disallow untyped decorators (#131428)
e3ca4e79e1 : Fix mypy errors introduced by #131400 (#131522)
c9e74449f3 : bump executorch commit pin. (#131486)
8a890b72dc : [BE] Get rid of missing destructor override warning (#131204)
4eee2e7a6d : [operator_benchmark] Remove TARGETS from broken benchmarks (#131460)
8497930766 : Revert "[CI][dashboard] Collect PT2 cpu perf nightly (#131369)"
d4e3fd613c : Revert "[CI] Relax config name matching for cpu inductor tests (#131467)"
7b82ed2d59 : Delete very old misleading info from .ci README (#131502)
93fdd0237d : Ensure staticmethods can be allowed in graph (#130882)
faddb0f30c : [NestedTensor] Integrate the mean operator along the jagged dimension into NestedTensor (#131132)
120ca23a1f : Fix IMAs in Flash-Attention splitkv kernel (#131277)
f75d724482 : Updating Types in torch/_dynamo/utils.py (#131001)
aa54bcb6d2 : [CI] Relax config name matching for cpu inductor tests (#131467)
94f22eb6b2 : refactor post-trace fakification in strict (#131421)
f85c35872b : Remove GraphModuleOpUpgrader in _export.serde.upgrade.py (#131373)
22906be8f0 : Do not abort on SPARSE_STATUS_INVALID_VALUE (#130382)
cfb9ccab6c : [export] Filter errors by exception type, add case name (#131327)
6b8ec2b371 : Revert "[triton_op] fix autotuning (#131363)"
3fe72e0c2e : [4/N] Non-Tensor: Support layout, device and dtype for aten operations (#125897)
68c725a094 : [custom ops] Add register_vmap for custom ops (#130589)
404d640c39 : [1/2] PT2 Inductor ComboKernels - Foreach cases (#124969)
979429ca89 : [inductor]Add DtypeView to avoid memory leak and unnecessary kernel generations (#128883)
f93a6a4d31 : Add mypy typing to torch_version.py (#131447)
eab1595ce2 : [dynamo] Delete wrong assertion in bind_args (#131405)
e4b5645f83 : Revert "Add wrappers for synchronous GPUDirect Storage APIs (#130633)"
f7754c6dc5 : Run pull jobs with new AMI (#131250)
5f0b65bee7 : Revert "Replace manual parsing of "TMPDIR", "TMP", "TEMP" and "TEMPDIR" with std::filesystem::temp_directory_path() (#130842)"
4ca8705035 : Add mypy typing to fx node (#131434)
ded5bdb0de : Use inductor TestCase for test_replicate_with_compiler.py (#131053)
a5ad02d05d : Remove MacOS M2 14 runner from MacMPS job (#131465)
c1ef214046 : Print ExportedProgram without color by default (#131399)
db376fb643 : Ensure non-contiguous indices are handled (#131430)
4f0497c747 : Divorce triton and pt2 remote caching (#131345)
154f27455a : [triton_op] fix autotuning (#131363)
3aa45cae77 : [export] Removed deprecated dialect field from EP schema. [2/2] (#131344)
b61600f6cc : [pytorch] fix the leak for pinned memory when using _create_cpu_state… (#131270)
1e86387871 : Revert "Support IPC for Expandable Segments (#130890)"
f064dac588 : [CI] change xpu ci build runner type to reduce build time (#130922)
6bbef2a06b : [dynamo] Support set on KeysView (#131389)
e7c5e06772 : [dynamo] Support __contains__ on __dict__ on UserDefinedClassVariable (#131378)
0bc5e26067 : [dynamo] Support dict conversion of objects derived from MutableMapping (#131367)
a944cce5b8 : [dynamo] Support if callable on list (#131347)
250cdb2ac7 : Fix cuda_half_test.cu (#131416)
4ac77fc6bd : [HOP] Don't send HOPs to torch_dispatch (#131370)
027f35d9e6 : [Inductor] Allow customize decompositions for fwd_only trace function (#131329)
eb146b10db : Only depend on sympy 1.12 for conda (no 3.13 there anyways) (#131355)
9851c7313d : [CI][dashboard] Collect PT2 cpu perf nightly (#131369)
3f3b226ffc : Fixes for the extension backend tests (#130933)
d8e2e1fe50 : [aoti] use reshape instead of view for flattening tensors for the nan checker (#131302)
16247987a1 : Add decomposition for t_copy (#130939)
16a2a1aad3 : Annotate graph.py (#131400)
102d8e5a63 : MPS LSTM backward kernel workaround on MacOS 14.4+ (#130038)
29e2e2afb6 : Revert D59561509: Multisect successfully blamed "D59561509: [FX][export] DCE pass, check schema for node impurity (#130395)" for one test failure (#131341)
b2ad16f01d : avoid OpOverloadPacket.__getattr__ calls in inductor lowering (#131348)
99d9b369f4 : [Optim] Support tensor lr for all optimizers and check it is 1-element (#131065)
781189f25d : Add `nvjitlink` to the list of loadable global deps (#131295)
02cd4dbcf4 : [BE][CI] Get rid of duplicated code (#131406)
35a0e0f018 : [tp] improve SequenceParallel and its documentation (#131346)
12434504a2 : [c10d] remove non-necessary tests (#131212)
8a591da3e7 : [CI] Enable AOT inductor in cpu performance smoke test (#130097)
6cbb1437c1 : Revert "Add sparse block to flex_decoding kernel (#130884)"
28b0ad4f46 : [PT2] Minor fix in signpost (#131332)
b435d84261 : Revert "[custom ops] Add register_vmap for custom ops (#130589)"
8963623494 : Re-implement pin_memory to be device-agnostic by leveraging the Accelerator concept (#126376)
074b420641 : [custom ops] Add register_vmap for custom ops (#130589)
1e5ecc4277 : move save/load from _export to export (#131353)
26f7dd286b : [export] Allow non-CIA ops to be preserved (#131075)
69b1999586 : TunableOp size hotfix (#130800)
8ae1963a61 : [Autograd] Cond Higher-Order Operation (#126911)
c74396e890 : Revert "[c10d] remove non-necessary tests (#131212)"
f8f41dcb24 : Revert "[inductor] Make UserDefinedTritonKernel a multi-output operation (#130832)"
15eb10df02 : Revert "[inductor] Use multiple outputs for flex-attention (#130833)"
f8875e8277 : Revert "[inductor] Kill mark_node_as_mutating (#130834)"
d33804f8b6 : Replace manual parsing of "TMPDIR", "TMP", "TEMP" and "TEMPDIR" with std::filesystem::temp_directory_path() (#130842)
a136a7d623 : [Functional Collective] enable custom work registration from python (#130354)
a3922acc06 : [TD] More synonyms, new heuristic for test_public_bindings (#130397)
0bf59db6cc : Add sparse block to flex_decoding kernel (#130884)
83b355bad5 : [aoti] forward fix of D60006838, add back test_multiple_output_alias (#131331) (#131356)
e3eaa22126 : [torchbench][multisect] Run accuracy check at Diff time (#131266)
0c074352ab : [c10d] remove non-necessary tests (#131212)
781a33f5d8 : Enable dynamic rollout for Linux trunk workflows (#131325)
406f510f89 : [c10d] add bfloat16 support for NAN check (#131131)
9e753d1f20 : [AMD] catch exception when other processes belong to other users (#131018)
23ae6e2eb3 : [FSDP2] Removed state dict error for HSDP (#131320)
d3556786b8 : Blocklist certain modules for weights_only load (#131259)
93ef2e53f8 : [3.13, dynamo] support FORMAT_SIMPLE/FORMAT_SPEC (#130751)
375a4d7e9e : [3.13, dynamo] decompose fused load/store instructions (#130569)
157f38bc4d : [3.13, dynamo] support STORE_FAST_LOAD_FAST (#130568)
1e116c7a1e : [3.13, dynamo] fix END_FOR (#130567)
4319147ca9 : [3.13, dynamo] fix closures, MAKE_FUNCTION, LOAD_CLOSURE; support SET_FUNCTION_ATTRIBUTE (#130566)
44e689d947 : Revert "[TD] More synonyms, new heuristic for test_public_bindings (#130397)"
56bb047449 : [pt2] Increase dynamo/inductor default log level to info (#131311)
d8a35d5722 : [TD] More synonyms, new heuristic for test_public_bindings (#130397)
b9912f31ef : Revert "[export] fix zero arg export in training_ir (#130990)"
32c2f84e34 : Support IPC for Expandable Segments (#130890)
0246b28510 : [aoti] refactor aoti_torch__scaled_mm and skip aoti fp8 test for some cases (#130868)
5b5e0698a5 : Add wrappers for synchronous GPUDirect Storage APIs (#130633)
5c78581fc9 : Fix documentation for tensor.repeat. (#131195)
26383a6cc0 : Revert "Added and_masks and or_masks utilities (#131073)"
3eb9fa5d58 : Add support for using LF Canary runners (#131188)
69e2590490 : Fix MKLDNN check in `test_aot_inductor.py` (#130982)
92bb323d36 : Added and_masks and or_masks utilities (#131073)
68df24f9b6 : [xla hash update] update the pinned xla hash (#126672)
6d65a2c3f4 : [3/N] Non-Tensor: Support string parameter for aten operations (#125831)
8da19fec60 : [Inductor] Support store SPIR-V binary file output from Intel Triton. (#130849)
2820e1d9f8 : Update CPython support policy (#130989)
1614891946 : [Profiler] exclude gpu_user_annotation when accumulating cuda time total (#130733)
c2425a3b57 : [BE] Use `_linux-build.yml` instead of `-linux-build-label.yml` flavor (#130762)
500cbb5b90 : Add decomposition for view_copy (#130938)
f628813066 : Fix out_wrapper, _make_copy_from_view to handle all signatures (#130937)
b193894b94 : FakeTensor cache SymInt support (#127596)
ebce85172e : FakeTensor cache SymInt support: flatten cache key (#129780)
f3562e2cdc : backport dataclass(slots=True) (#131014)
1439bd3c9c : [Easy][pytree] enable CXX pytree under `torch::deploy` (#130144)
ddde9dd25c : [dynamo][automatic_dynamic] Trigger dynamism on stride changes (#130232)
e506dfa640 : [dynamo] Add a JK kill switch for disabling compile (#131258)
1d1d074072 : [3/N] Fix Wunused-parameter warnings (#131271)
d57af32e63 : Fix undefined tensor error in _copy_from_and_resize when fallback to cpu. (#130237)
13283fb4bc : [distributed] test_store: remove flaky bind test (#131262)
407c87a32c : [debug][dtensor] fixed updating current module (#130995)
33f036a6f7 : [inductor] Kill mark_node_as_mutating (#130834)
fccbe85475 : [BE] Improve CUDA UpSample error message (#131252)
a7a951a4ae : [executorch hash update] update the pinned executorch hash (#130001)
b6d477fd56 : [BE][Easy][16/19] enforce style for empty lines in import segments in `torch/_i*/` (#129768)
8e478d4fb1 : Add Alban and Piotr into Core Maintainers (#130903)
637ab85e7f : fix for launching kernel invalid config error when calling embedding … (#130994)
a8319698b3 : [inductor] [cpp] improve cache blocking with CPU info (#129348)
0b44e1a74c : [inductor][cpp][gemm] optimize arbitrary N in packed gemm template (#130690)
ebc012ace6 : Add hooks for execution on intel gaudi devices - 1 (#128584)
d31f2ae904 : Ensure invariant that all inputs have tensor dict (#131249)
37337ef5c3 : add some description on create_block_mask and mask mods (#131209)
168c0e24a5 : [IntraNodeComm] Fix some issues in two-shot all-reduce (#131244)
d2bd9acabd : [BE] bump `optree` version to 0.12.1 (#130139)
50436d5bdb : [export] fix zero arg export in training_ir (#130990)
3c43fe068f : [inductor] parallel compile: Create new pipes for subproc communication (#131194)
9df8ea1cf2 : [inductor] Use multiple outputs for flex-attention (#130833)
deacc543f1 : [inductor] Make UserDefinedTritonKernel a multi-output operation (#130832)
27c2a0d63b : [inductor] Separate Buffer and Operation into two concepts (#130831)
bb4251213b : Add decomposition for channel_shuffle (#118775)
f0075c179b : Pin `sympy >= 1.13.0` (#130895)
30d1826b2b : Revert "[executorch hash update] update the pinned executorch hash (#130001)"
cd8bbdc71a : [2/N] Fix Wunused-parameter warnings (#131170)
207fb96155 : [functorch] saved tensor hooks error should only apply to grad, vjp transforms. (#131191)
4821f72457 : [executorch hash update] update the pinned executorch hash (#130001)
7c299b46ca : Revert "Invalidate StorageImpl instances when tensor is overwritten with cudagraphs (#125264)"
35bf05561c : [Inductor] B2B-GEMM performance tuning with test (#130778)
6657b14a64 : [inductor] Fix the method for checking the variable type of entry.numel (#131026)
0e72baddf0 : Revert "[easy][pytorch][counters] Move WaitCounter in c10/util (#131021)"
4aef5a1134 : [c10] add an option to pg_config split share (#130877)
0ca7b6ddd9 : [easy][pytorch][counters] Move WaitCounter in c10/util (#131021)
c64ad2403c : LF runners: Add new runner types for Amazon2023 AMIs (#131246)
85ca88a2bb : [Distributed][PP export] update tracing to handle autocast inclusion (#130998)
ceee87df2e : [export] modify export code owners (#130894)
5f981388ec : Revert "[ROCm] Enable ROCm support for inductor's dynamic_rblock_scaling (#129663)"
125be005eb : [Docs] Fix fake tensor doc (#131205)
e49c0acc39 : [dynamo] Revert https://github.com/pytorch/pytorch/pull/130416 (#131058)
042be441ba : [aoti] Unskip some aot inductor tests (#130973)
9b5c70878b : [Fix] Missing parameter happens when retracing an already jit.scripted module (#129787)
abb3f2822c : [aotinductor] Support additional lifted constants supplied to const folding. (#130743)
31e79aae6a : Another follow up to #130260 (#130993)
d4a79d4a7c : Fix an example: Resolve broadcasting error in attn_bias and attn_mask… (#130209)
451fc029fe : docs: note transposed weight initialisations (#130122)
5f3d8b8788 : Revert "[c10] add an option to pg_config split share (#130877)"
25d8a0480b : [lint] Remove unnecessary BUCKRESTRICTEDSYNTAX suppressions
a6a2cd6257 : Typo fix (#131037)
1b72cf0b09 : Add hasattr for tensor variable (#131008)
1f961ad495 : Runs aten cuda cpp tests in CI (#131061)
d7a78ec8b9 : [ROCm] Enable ROCm support for inductor's dynamic_rblock_scaling (#129663)
feef057691 : [1/N] Fix Wunused-parameter warnings (#130924)
eee76c86a8 : Write trace_structured events to scuba (#130955)
982309b501 : Initial commit of flight recorder trace (#130764)
fd4899bc58 : [ONNX] Run ruff pyupgrade to update type annotations (#130657)
4f60a2e39c : Set correct output dtype for dequantize op during convert_pt2e in decomposed mode (#128953)
d59803fb67 : Refactored flexattention kernel (#130904)
ac76dd606f : [dynamo] Alternative way to skip empty hooks guards on inbuilt nn modules (#131057)
00e54e74ff : [dynamo][cpp-guards] Fix bug in dict tags (#131056)
3c622fbcd3 : [inductor] Fix var_to_range in IndexPropagation (#130984)
b556d31586 : Update torch-xpu-ops pin (ATen XPU implementation) (#131015)
52cb9abb1d : Add deterministic support in nn.functional.interpolate for XPU (#129864)
39493aa934 : [inductor][cpp][gemm] move bias add to epilogue (#130675)
5a6a806b19 : [Inductor UT] Generalize device-bias code in case TestFxGraphCache.test_inductor_counters. (#131006)
208dffa702 : [Compiled DDP] DDP + AC unit test (#130981)
3cc6183ce1 : Fix getAugOp error (#131033)
6e7b9ee8a0 : [inductor] adapte windows file path (#130713)
e880cb2fe0 : [ONNX] Remove beartype usage (#130484)
fb3674b1f4 : Revert "[Autograd] Cond Higher-Order Operation (#126911)"
686b7f046a : [Fix]: TSConverter handles call ops with multiple outputs (#129294)
7f1cda1533 : Autoheuristic: Do not store choices as metadata (#130304)
4d9f2a6d56 : Small expandable segments refactor. (#130889)
d8fed480ef : Move handle-creation logic into cudacaching allocator. (#130888)
3e9cf1cc80 : Fix potential segfault during deletion (#131036)
f7058b735e : [Autograd] Cond Higher-Order Operation (#126911)
24467ba2ec : Update pin (#130896)
793b17ebcb : Add numeric_debugger top level APIs (#130643)
726b9268d2 : Revert "Re-implement pin_memory to be device-agnostic by leveraging the Accelerator concept (#126376)"
e7f7c5c3f8 : [inductor] Avoid fallback case for custom scan op lowering (#130936)
367213a608 : [c10] add an option to pg_config split share (#130877)
c015e5b9e3 : Make sure that TransformGetItemToIndex for all graph replay (#131003)
82242a258a : rm duplicate index_dtype arg (#130803)
6d9f74f0af : Add flex decoding benchmark (#130850)
fff92d4f18 : Revert "Use inductor TestCase for test_replicate_with_compiler.py (#129494)"
745324e487 : [export] turn on hybrid symints by default (#130775)
22388ffe03 : Graph break on tostring for numpy remapping (#131007)
8bf0be7c78 : [CUDAGraph] Add operator.mul to skip list for find_input_mutations (#130986)
5979014059 : DSD for TorchTune LoRA (#129635)
5484c86021 : [export] Fully support extension op in serialization/deserialization. (#130851)
85451b2cde : [DTensor] Fix shard_dim_alltoall fake tensor return (#129945)
16aaff7783 : Fix mm pad regression - more conservative estimation of plannable inputs (#128909)
27ded03545 : [FX][export] DCE pass, check schema for node impurity (#130395)
32ff04d30a : [dtensor][debug] adding functionality to control noisiness of the debug output (#130410)
8ea03372a1 : [MPS] Store philox counter as part of the RNG state (#130662)
7c90a82970 : [Reland] [5/N] Change static functions in headers to inline (#131010)
d6ae8bbf16 : Revert "[export] Add print_readable to unflattener (#128617)"
120fdf7ee2 : Revert "[aota] Needs autograd if an input requires_grad, agnostic to enable_grad (#128890)"
5a90ed3523 : Reinplacing should ignore copy_ nodes where the mutated arg is not read (#130866)
dd39dca034 : Removing some cruft and updating signatures for consistency (#130871)
9f6db5d0e2 : Revert "Ensure staticmethods can be allowed in graph (#130882)"
63a0a65df9 : Define 'zero-preserving unary functions' in docs (#130804)
1b07d42171 : Add @syed-ahmed to CUDA `CODEOWNERS` paths (#130971)
c986aeea2d : Re-implement pin_memory to be device-agnostic by leveraging the Accelerator concept (#126376)
38b7d89aa4 : Uses context pointer for deleter to enable multiple CUDAPluggableAllocator usage (#130472)
28a74b9fa4 : [NestedTensor] Integrate sum along the jagged dimension into NestedTensor (#130425)
e98135d1ad : [aota] Needs autograd if an input requires_grad, agnostic to enable_grad (#128890)
cf3f4285a8 : Add recursive metadata guard test (#131002)
134bc4fc34 : [BE][Easy][12/19] enforce style for empty lines in import segments in `test/i*/` (#129763)
dfc3347c4a : [pytorch][counters] Make WaitCounter backend pluggable (#130934)
b732b52f1e : Revert "[BE][Easy][12/19] enforce style for empty lines in import segments in `test/i*/` (#129763)"
6c2c8ee15b : [export] Remove preserved ops from decomp list (#130970)
aecc746fcc : [BE][Easy][12/19] enforce style for empty lines in import segments in `test/i*/` (#129763)
740fb22966 : [BE][Easy][4/19] enforce style for empty lines in import segments in `functorch/` (#129755)
a085acd7d6 : [dynamo] Revert back changes to UnspecializedBuiltinNNModuleVariable (#130991)
9f392f8294 : Use inductor TestCase for test_replicate_with_compiler.py (#129494)
433ef4e444 : Revert "[FX][export] DCE pass, check schema for node impurity (#130395)"
bd56bcf0ab : [TEST] Fix _scaled_mm tests (#130897)
9fee87e4cd : [export] Add print_readable to unflattener (#128617)
a0ae77b25b : Simplify cub::unique_by_key code (#130907)
d818c3319f : Autoheuristic: add config options for specifying optimizations to collect data for and use heuristics (#130245)
051971ab32 : Reorder MIOpen conditions so getCUDAHooks only called when CUDA input (#130867)
e22b0acc76 : [FX][export] DCE pass, check schema for node impurity (#130395)
73d0f484b3 : [structural binding][11/N] Replace std::tie with structural binding (#130830)
e14d1d10ef : Unwrap Identity in prepare indexing (#130967)
d77af49380 : [Traceable FSDP2] Preserve fsdp.set_ op through lowering; Add unit test for multiple .set_ into same primal; Add unit test for FSDP2 module layer reuse (#130786)
fc3dbcd1c3 : [Traceable FSDP2][Inductor] Re-inplace all_gather_into_tensor (#129773)
442bfa7fc4 : Fix mypy error (#130992)
a0da1265c5 : Define key in codecache (#130979)
31e3330040 : [Reland][FSDP2] Allowed `List[nn.Module]` as arg (#130949)
ff7e021e94 : [Reland][PT-D] Relaxed `contract` to allow `Sequence[nn.Module]` (#127773) (#130947)
90105a4f3e : [ts-migration] Support RaiseException, prim::Unitialized, prim::Enter, and prim::Exit (#129416)
874bbc53c9 : Revert "Define key in codecache (#130979)"
43a6d20883 : Add decomposition for reflection_pad{1,2,3}d_backward (#130299)
0eb43ed189 : Revert "[ts-migration] Support RaiseException, prim::Unitialized, prim::Enter, and prim::Exit (#129416)"
ebdfc7e37d : [BE] Rename `ISORT_WHITELIST` to `ISORT_SKIPLIST` (#130987)
df5919393c : [ROCm] std::clamp work-around for hip-clang compiler (#127812)
f0faecd291 : [ts-migration] Support RaiseException, prim::Unitialized, prim::Enter, and prim::Exit (#129416)
4112f68783 : Define key in codecache (#130979)
0b134c15cd : Revert "Relax constraints for creating a `GenericContextWrappingVariable` (#129091)"
c49f909aab : Revert "wrap self.call_function(...) in try finally block to undo changes to self.kw_names (#130490)"
65b4163bd2 : [dynamo][nn-module] Make slice getitem on nn module container sourceless (#130852)
a8bd2933d9 : wrap self.call_function(...) in try finally block to undo changes to self.kw_names (#130490)
882fd91869 : Relax constraints for creating a `GenericContextWrappingVariable` (#129091)
41f5d5dcaf : Revert "[inductor] adapte windows file path (#130713)"
1bf4a44b33 : Revert "[ts-migration] Support RaiseException, prim::Unitialized, prim::Enter, and prim::Exit (#129416)"
b0387449db : Ensure staticmethods can be allowed in graph (#130882)
e4f9d01cd9 : Add test for dataclass field accesses (#130848)
470f07c840 : Add guard override capability for tensor subclass metadata (#130780)
bea6762c01 : Add guards on subclass metadata (#130779)
752c817898 : [AOTI][refactor] Unify UserDefinedTritonKernel.codegen (#130796)
efefea52e0 : renamed inductor kernel args in flexattention properly (#130869)
480a5bd881 : Renamed mask_fn to mask_mod (#130818)
d96c80649f : [export] constants & non-persistent buffers for training IR (#130864)
ef0511245a : [ts-migration] Support RaiseException, prim::Unitialized, prim::Enter, and prim::Exit (#129416)
d552e5c3d5 : Fix ciflow/nightly triggering commit hash update workflow (#130570)
db3290846e : [BE][Easy][10/19] enforce style for empty lines in import segments in `test/d*/` (#129761)
1e13cb2f28 : Log cache state to structured logs (#130845)
af0b5ee924 : Reduce number of samples in {svd,pca}_lowrank OpInfos (#127199)
6e916f112f : [inductor] skip fx remote cache for 2 tests in test_metrics.py (#130853)
1fb572289b : [BE][c10d] Add a warning messages in the comment about cuda hang (#130844)
b7d2abd766 : Fix vectorized ops.masked (#130130)
b29b23137c : [Easy] Fix argument name collision in dispatched functions (#129562)
c0ed38e644 : [BE][Easy][3/19] enforce style for empty lines in import segments in `benchmarks/` (#129754)
32995dec28 : Add support for XPU accumulate type (#128579)
76169cf691 : [BE][Easy][9/19] enforce style for empty lines in import segments in `test/[e-h]*/` (#129760)
cbf274d4a7 : [aoti] Add packaging solution (#129895)
94a910b43b : Revert "Renamed mask_fn to mask_mod (#130818)"
d027aef8f8 : Revert "Removed q_num_blocks from constructor (#130819)"
4b7ff35622 : Fix flex_attention import in score_mod (#130906)
e1b2d8f975 : Revert "[cuDNN][SDPA] Support `attn_bias` in cuDNN (#130482)"
d3a11a0198 : [Inductor] Handle device_put op in constant folding. (#130824)
2af2d26562 : [Inductor UT] Generalize device-bias code in test_triton_kernels.py and test_torchinductor.py (#130817)
2300bb2a88 : [3.13, dynamo] support TO_BOOL (#130565)
539acf7656 : [3.13, dynamo] support CALL_KW (#130564)
e2365c05d7 : [3.13, dynamo] fix instruction line numbers (#130461)
82b2e7a253 : [3.13, dynamo] fix CALL_FUNCTION_EX in symbolic_convert (#130460)
8c9a996091 : [3.13, dynamo] support LOAD_FAST_LOAD_FAST and STORE_FAST_STORE_FAST (#130459)
bb62e9d7c3 : Avoid autocast deprecation warning in DataParallel (#130660)
f6838d521a : [BE][Easy][5/19] enforce style for empty lines in import segments in `tools/` and `torchgen/` (#129756)
ba48cf6535 : [BE][Easy][6/19] enforce style for empty lines in import segments in `test/` (#129757)
e51e971a86 : [inductor] adapte windows file path (#130713)
7c45476d38 : [pytorch][counters] WaitCounter cleanup (#130664)
419b8df0b6 : [inductor][easy] add debug logs for inlining constants (#130799)
f2552dcc3d : refactor cached tensor more generic (#129359)
c6aa03bd4e : Add allow_xpu to enable XPU UTs (#130312)
fc238db62a : Separate AOTI Eager utils as a single file (#125819)
d1c4e6b55f : [BE]: Enable a few additional ruff rules (#130700)
c24c50da92 : fix tensor print behavior for XPU (#130523)
aa95fb99af : On advice of James March, log pid instead of tid (#130679)
e9023d57b0 : [ROCm] Return correct AMDSMI socket_power metric (#130331)
03c660468e : Removed q_num_blocks from constructor (#130819)
1a97bcf93b : Renamed mask_fn to mask_mod (#130818)
6024fea0f8 : Compute q_num_blocks from kv_num_blocks if q_num_blocks is not passed in (#130809)
ef9d9be236 : TCPStoreLibUvBackend: log port on error (#130797)
25cb4426d3 : [inductor] Add num_matches_for_scatter_upon_const_tensor to list of cached metrics (#130843)
8458dc8966 : Revert "Use inductor TestCase for distributed tests (#129494)"
d7a8e8f7c5 : Revert "[PT-D] Relaxed `contract` to allow `Sequence[nn.Module]` (#127773)"
9a6d81b178 : Fix pytorch JIT build for LLVM 18+ (#130661)
de177b50f8 : [cuDNN][SDPA] Support `attn_bias` in cuDNN (#130482)
4f40a7078e : Revert "[FSDP2] Allowed `List[nn.Module]` as arg (#127786)"
7919f0b952 : Add buffer static input tests to cudagraph trees (#130402)
415d5e53ae : Propagate buffer and parameter indices through AOT (#130393)
5f3c356a56 : Revert "[inductor] adapte windows file path (#130713)"
2eec02523b : [autograd] Support GradientEdge as output for torch.autograd.grad (#127766)
c1e7e40f24 : Revert "[Traceable FSDP2][Inductor] Re-inplace all_gather_into_tensor (#129773)"
4e479568df : [PT2] Log compile ID in the signpost event (#130801)
2ceade37c5 : [SymmetricMemory] put socket files in /tmp (#130757)
0468f2616a : [SymmetricMemory] make sure different subgroups with the same name use different store prefixes (#130756)
f2f31027ce : [Traceable FSDP2][Inductor] Re-inplace all_gather_into_tensor (#129773)
156b99cfb1 : [inductor] Handle inductor counters in fx graph cache (#130635)
d548417d95 : [NJT] throw an exception if nested_tensor_from_jagged is fx-traced without being fx.wrapped (#130702)
0851de5b16 : Revert "[ONNX] Remove beartype usage (#130484)"
09b1b113f5 : Cache min / max seq len for torch.nested.as_nested_tensor(t) (#130766)
408c921d96 : Make hashing a SymInt raise an error again (#130548)
1d8baa4df2 : [torchbench][servicelab] Fix servicelab test failures (#130781)
1794c35912 : [ONNX] Remove beartype usage (#130484)
67e22d6c61 : [Fix]: Convert operator that does specialization to its symbolic counterpart (#129578)
e8998d68c8 : [export] add non-strict training IR (#130062)
d2f44eabe7 : [Export] Support aten.full.default and aten.full_like.default (#130639)
f272e0ab4a : [inductor] support unbacked symint divisors in vars_and_sizes (#130595)
2b43d339fe : Make FlexAttention API public (#130755)
cbda8be537 : Revert "Propagate buffer and parameter indices through AOT (#130393)"
9cb23ba85b : Revert "Add buffer static input tests to cudagraph trees (#130402)"
c509319210 : [inductor] Disable remote fx graph cache in test_snode_runtime (#130655)
aa4ad711ef : [CCA][Memory Snapshot] Create TraceEntryRingBuffer class for alloc_trace logic (#130741)
e11c41035c : Directly use empty strided in cudagraph copy (#130777)
4c3348932c : typing: convert_frame (#130670)
ea25febfab : typing: storage (#130669)
8390843eba : Invalidate StorageImpl instances when tensor is overwritten with cudagraphs (#125264)
1fbfb3202d : [docs][TorchScript] document c10::AliasAnalysisKind::CONSERVATIVE (#130765)
69e9917245 : [inductor] adapte windows file path (#130713)
53e5b8ac5b : [BE]: Update flake8-comprehensions and enable C420 (#130699)
213685ba97 : [torchao][pt2 benchmark runner] Run performance test non-alternately (#130136)
67c6941b4e : Update torch.cat decomp for 0-dim (#130763)
705da70f2c : [inductor][cpp] align dtype convert cache between vec and scalar kernels (#130677)
68a4f2a3df : Revert "Tighten torch.library.infer_schema input types (#130705)"
dee0f43fde : Add a CI job to check runner det sync (#129746)
e57101d927 : Add testing regarding SparseAdam state_dicts (#130645)
168e41009b : [structural binding][10/N] Replace std::tie with structural binding (#130784)
747b38c131 : [BE][Easy][2/19] enforce style for empty lines in import segments in `.ci/` and `.github/` (#129753)
096dc444ce : Keep zero check be compatible with different sympy versions (#130729)
fedae41c57 : [dynamo] Do not mark nn.module containers as BuiltinNNModuleVariable (#130773)
83eedf66b9 : Update libfmt submodule to 11.0.1 (#130628)
c549629696 : [CD] Fix xpu nightly wheel test failure (#130742)
95dbbf713e : [Distributed] [9/N] Fix clang-tidy warnings in torch/csrc/distributed/rpc (#130109)
7b2e802f31 : [dtensor] add a few dunder methods to pointwise ops (#130754)
2b2671a7b1 : [dtensor] fix foreach_norm when ord is 2 (#130753)
a29052a0bf : [BE][Ez]: Update ruff to 0.5.2 (#130698)
ad314a2f05 : Pass `torch.load(weights_only=)` internally to avoid FutureWarning (#130663)
3cd2ae331a : Use inductor TestCase for distributed tests (#129494)
39eeaac4e5 : inductor: avoiding moving constructor to cuda when it would cause h2d sync in index_put_ fallback (#130338)
93a03edcf9 : Update error message in meta__convert_weight_to_int4pack (#130707)
a3abfa5cb5 : [BE][Easy][1/19] enforce style for empty lines in import segments (#129752)
5e617d7ef5 : [CUDA] Actually bump tolerances for `test_grad_pca_lowrank` (#130770)
80236dca90 : Add buffer static input tests to cudagraph trees (#130402)
69a77389e2 : Propagate buffer and parameter indices through AOT (#130393)
200d3d0a89 : Remove static param counting if inlining NN modules (#130503)
0d0c09702a : Update mark_static_address for inlining NN modules (#130392)
d8616eb66a : Mark nn_module params and buffers as static in dynamo (#130391)
9ab8d47f9d : Constant folding for dynamic shape node (#129686)
ea4f310ff1 : [Nested Tensor][easy] Add softmax backward support (#130602)
d3ab8ceced : [FSDP2] Allowed `List[nn.Module]` as arg (#127786)
b27695791e : [PT-D] Relaxed `contract` to allow `Sequence[nn.Module]` (#127773)
54a932b0ac : Support for expandable segments with cuda graph trees (#128068)
006020ff6e : Fix the cudagraph capture of SDPA (#130712)
50ef099ad0 : Learn a heuristic to decide whether to pad before mm (#128643)
9a5204dc2d : [inductor] Remove "spawn" as an option for parallel compile method (#130746)
3f031b96c6 : [Fix] Correctly identifying arguments for sub-blocks with renaming logic during TorchScript to ExportedProgram conversion (#128386)
b893aa71ca : Rename generate_numeric_debug_handle to numeric_debugger (#130590)
535016967a : Enable UFMT on all of torch/sparse (#130545)
7d4f50de19 : dynamo add support for `defaultdict(set)` (#130745)
3928ca2ab6 : [dynamo] update call map to allow multiple input parameters (#130748)
6f32dc0c7b : Don't pass error message as `places` in `assertGreaterAlmostEqual` (#130648)
dff9d68f18 : Revert "Fix names conflict when lifting (#129817)"
78799e82b0 : Revert "Invalidate StorageImpl instances when tensor is overwritten with cudagraphs (#125264)"
db3a641b71 : Implement operator for micro-pipelined all-gather -> _scaled_mm (#129289)
77fb5b0e23 : [c10d] a new Pytorch API (split_group) to create a process group (#130507)
ac3e2cb64a : [BE] Delete unused -rg.yml workflow (#130759)
ee6f0ab190 : [DeviceMesh][Reland] Only include the real thread_id in DeviceMesh hash under threaded backend (#130495) (#130685)
27322355de : Added some more documentation to block mask creation (#130649)
0e79e1f958 : [NJT+SDPA]Fix flash_attention output when batch_size=1 and seq_len=1 (#130652)
074a5c0c9b : Revert "[BE] bump `optree` version to 0.12.1 (#130139)"
f1456c74a0 : Fix mkl-static issue for Windows. (#130697)
a7cfe40c9b : [dtensor] Improve from_local API with run_check (#130289)
3342f3aa4e : [dtensor] simplify sdpa strategies (#130288)
7d82dc2c23 : [dtensor] slice_backward to use op strategy (#130287)
53cf46b8c6 : Fix names conflict when lifting (#129817)
b4b64f76e5 : Ensure tensors devices match on `torch.index_put` batch rule impl (#130479)
00d71b3e86 : Tweak tolerances for test_vjp_linalg_tensorsolve_cuda_float32 to pass in Windows / debug builds (#130449)
9e161af179 : Revert "Increase tolerance for tensorsolve tests (#130620)"
8fcb156e8b : [BE] bump `optree` version to 0.12.1 (#130139)
1e897a0ca4 : Revert "Fix names conflict when lifting (#129817)"
0099e15b47 : Also put unbacked symbols in symbol_to_node in split_module pass (#130535)
ca2d424c6e : Tighten torch.library.infer_schema input types (#130705)
9df4bc6a0d : Revert "Constant folding for dynamic shape node (#129686)"
7cd48df2da : Refine the logic of device construction when only device index is given (#129119)
9cae2160f5 : Introduce the concept of Accelerators to PyTorch doc (#129363)
74da2a467f : Fix names conflict when lifting (#129817)
ee039c0614 : [custom_op] triton_op API V0 (#130637)
6beec34b1c : [structural binding][9/N] Replace std::tie with structural binding (#130404)
ac28ae18dc : [BE][Ez]: Update pybind11 submodule to v2.13.1 (#129827)
1d983bbb28 : [easy][inline-inbuilt-nn-module] Update test output (#130681)
1a266def4f : [dynamo][unsoundness but very controlled] Skip guards on inbuilt nn module hooks (#130420)
dc7725cc16 : [halide-backend] Random number generation (#130211)
1bc390c5f5 : Invalidate StorageImpl instances when tensor is overwritten with cudagraphs (#125264)
a3c0bab502 : [inductor] [cpp] use non-temporal tile load for A (#129455)
c547b2e871 : Fix python detection in cuda.cmake (#130651)
c0897919da : Revert " [5/N] Change static functions in headers to inline (#130673)"
28f6ae2718 : [9/N] Replace c10::optional with std::optional (#130674)
774ca93fd2 : Added zb1p schedule (#130210)
5fe9515d35 : [structural binding][8/N] Replace std::tie with structural binding (#130544)
81322aee74 : [Inductor][CPP] Support more than one LocalBuffer (#129121)
adaa0fea5a : [Inductor][CPP] Enable Local Buffer for Outer loop fusion (#126967)
dcaa111dc8 : support intersection by polyfill (#130672)
4d7bf72d93 : [BE][Easy] fix ruff rule needless-bool (SIM103) (#130206)
fa5f572748 : [cudagraph] fallback to eager if re-record too many times (#129349)
4410c44ae6 : [5/N] Change static functions in headers to inline (#130673)
6f275ae4d0 : Add kwinputs to Kineto Traces (#130373)
f9f85bfc0b : [Inductor] FlexAttention supports partial masking (#130415) (#130626)
cbb7e26acd : [3.13, dynamo] fix jump target offset calculation (#130458)
0b5792c0ae : [3.13, dynamo] fix NULL ordering in symbolic_convert CALL (#130385)
87b406d7e5 : [3.13, dynamo] codegen TO_BOOL before conditional jump (#130384)
92ac9ee83c : [3.13, dynamo] swap null and pop_null in codegen (#130383)
97cfc65dbc : Back out "[DeviceMesh] Only include the real thread_id in DeviceMesh hash under threaded backend (#130495)" (#130676)
e5de25896f : Fixed CUDA randint generation for large ranges. (#126066)
1f162a5fce : Revert "[Inductor][CPP] Support vectorization of remainder (#129849)"
8714b7fc69 : [dynamo][cpp-guards] Use dict tags to skip guards on immutable dict getitems (#130654)
7c83f5f7d5 : [8/N] Replace c10::optional with std::optional (#130509)
0effcb70ef : Revert "[ONNX] Remove beartype usage (#130484)"
567482973d : typing fake_tensor.py (#128041)
1ad0f38a37 : Fix IMAs in FlexAttention + autotuning (#130352)
c03e667276 : [Inductor][PatternMatcher] Always prevent match across mutations (#130584)
3710a79622 : Flex Attention HOP: Add support for flex decoding (#129415)
f44739cf42 : [ONNX] Remove beartype usage (#130484)
a7f54c7f8a : [dynamo] add meta fn for aten.kthvalue.default (#130562)
634b62f111 : typing proxy_tensor.py (#129182)
ea78b0c177 : Revert "Fix static `py::object` dangling pointer with `py::gil_safe_call_once_and_store` (#130341)"
f422027fce : fix torch.linalg.lstsq input check (#130612)
06ebf87a1e : Fix and improve reorder_compute_for_overlap (#130573)
619029e892 : [easy] Small rendering fix in Tensor.module_load doc (#130489)
95046c86e3 : [HOP] add HOP x torch_dispatch interaction (#130606)
f093cd4086 : Fix custom ops warning during export (#130623)
7c289c2a5c : Add torch.serialization.safe_globals context manager (#127939)
f0d7164cb9 : Revert "[inductor] switch AotCodeCompiler to new cpp_builder (#130127)"
103b6ccab2 : Increase tolerance for tensorsolve tests (#130620)
af4da0799c : [PyTorch] Half: don't disable direct conversion to/from float on mobile (#130465)
d727e2f2d1 : add total wall time in calculate_time_spent (#130611)
60fc01d0ab : [CUDA] Don't double-destroy CUDA graph when debug dump is used (#130401)
43b98fa521 : Add debug repr to SymNode (#129925)
2c4303c1d1 : [ROCm] [BUGFIX] Re-enable rocm-specific tuning parameters (#130617)
741c1710e8 : [cond] inlining into one of the branches when pred is a python constant (#130493)
0bf9a091ec : [torchbind] add tracing_mode support (#129586)
c3e77d144e : [3.12, 3.13, dynamo] simplified construction for frame f_locals/localsplus (#129185)
b0a597fcb4 : Fix #121334: graph break on constant method call (#130158)
4865c6425c : Add new control plane handler (#129712)
55dc82bef9 : [EZ] Make test_pytree_inputs actually run tests on CUDA (#130593)
988ed4d5db : [export] clean up allow_complex_guards_as_runtime_asserts flag (#130596)
dafef3ff35 : [CP] Make CP loss curve on par with TP (#129515)
c35f12c67c : [EZ] Add formatting changes to .git-blame-ignore-revs (#130627)
22fd89c904 : [TEST][Inductor] Fix scaled_mm call (#130582)
34e57025e1 : Add unsigned int types to torch/types.h (#130616)
2b1df24877 : Revert "Make hashing a SymInt raise an error again (#130548)"
2a1f22e57f : Change BN to eval before QAT Convert phase (#130598)
18418a7dbb : [ONNX] Fix torch_onnx patch accuracy bug in benchmark (#130586)
e5657024b5 : Fix loss_parallel with BF16 logits (#130550)
ea4b80e6d6 : [FX][export] strict DCE pass, check schema for node impurity (#130552)
febadda107 : [MPS] Fix `torch.[all|any]` for 5+D tensors (#130542)
d443fbc025 : [inductor] Cache precompilation functions based on configs (#130350)
9c69684af8 : [custom_ops] expose torch.library.register_torch_dispatch (#130261)
ba941769b5 : Add API for open registration between operators and subclasses (and modes) (#130064)
ae3ac9cb64 : Only test _is_param if doing instance check on Parameter base (#130578)
6f54e961ea : Add trace_shape_events artifact tracing for ShapeEnv events (#130473)
3100455b8e : Make hashing a SymInt raise an error again (#130548)
b75cc70875 : [Pipelining] add looped schedules to fsdp/ddp test (#130563)
da030e7add : Revert "[Inductor] FlexAttention supports partial masking (#130415)"
207564bab1 : [Inductor] FlexAttention supports partial masking (#130415)
e568c91a7b : [CP] Fix the incorrect ring schedule in the fwd and bwd (#129514)
0d8dedb01b : [dtensor] Add dtensor to TORCH_LOGS (#129512)
b6215f44ef : DCP checkpoint_dist_client integration (#130452)
ff25dfca5a : Save quantization_tag in export graph serialization (#127473)
b7d287fbec : Constant folding for dynamic shape node (#129686)
ae0edadea0 : [SDPA] Replace `masked_fill_` with `aten::where` (#130281)
c16e90fe06 : The device_suffix in a test_name is "privateuse1" sometimes. (#130091)
9ae40c6bc0 : Fix and improve raise_comms and sink_waits (#129980)
c6a676add4 : [Traceable FSDP2][Inductor] Add GroupedSchedulerNode to contain nodes that must be scheduled together (#128568)
c101c4517a : Add python type for list iterators (#130511)
536b5b19b5 : Revert "Simplify c10::string_view (#130009)"
7f2436014e : add MTIA as valid device type for prof averages (#130340)
7ce5b5767c : Revert "Make c10::string_view an alias of std::string_view (#130417)"
b5b91b418d : [Easy] Update record_function Comment (#130561)
18b7633bfb : [export] fix kwargs in run_decompositions() for training IR (#130553)
26c2b92525 : [export] make with_effect mark op has_effect to prevent them from DCEed. (#129680)
9c6c0deadc : Add eager_compile_backwards_failure to tlparse (#130434)
d97d962082 : Revert "Add decompositions for copy variants of view ops (#128416)"
a2f630a9a4 : Revert "Decompose expand_copy and permute_copy (#129476)"
fc872e98f3 : Infer prim tags from equivalent aten ones (#130367)
726a287271 : [export] Expand verifier to be multiple on ExportedProgram (#130364)
5c6edd29ec : Turn on splitShare=1 to make the optimization of comm_split effective. (#129929)
c50b189280 : Move trunk windows builds to CUDA-12.1 (#130446)
bc18863713 : Corner-case fix for upscale_histogram in the new HistogramObserver (#130316)
cd9bae30de : Allow kwargs in _remove_effect_tokens_pass (#130491)
578388bed8 : Revert "Support for expandable segments with cuda graph trees (#128068)"
1cae60a87e : Caching attr_proxy for nn_module attribute to fix guard check failure (#130280)
0a4fe2ff86 : [DSD] Use no_grad() to make some operations faster and avoid possible memory leakage (#130355)
973037be6a : [BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): `list()` / `tuple()` / `dict()` (#130199)
492de213e2 : Revert "Change deprecated warning on dispatch_on_subclass to warn once (#130047)"
f21a21828a : Change deprecated warning on dispatch_on_subclass to warn once (#130047)
3896ba3260 : [DeviceMesh] Only include the real thread_id in DeviceMesh hash under threaded backend (#130495)
72d9135679 : increase tensor size to force out of memory exception on the latest generations of GPUs (#130334)
9c1ba5ac10 : [BE] Cleanup unused vars in MPS (#130541)
68ad3eb722 : Do not set hints for mark_unbacked quantities (#130483)
ca023f77bc : [CD] Add pytorch xpu wheel build in nightly (#129560)
fb9bc6d74a : [custom op] add doc for CustomOpDef.set_kernel_enabled (#130406)
5ed72ff5f5 : Reduce all tensors to their metadata in AOTAutogradCache; add tests (#128583)
be7bf20234 : Add JK to enable fx graph cache for amd (#130463)
6f662e9575 : update the input `weight` of `_convert_weight_to_int4pack` to `[n][k / 2] uint8` (#129940)
c4a2b6a943 : [2/N] Fix NVCC warnings (#130214)
a833582dbb : [dynamo][tuple] Optimize guard for small tuples - helps conv2d guards (#130400)
f7d7b94017 : [dynamo][unspecialized-nn-module] Distinguish between user-defined and builtin nn module (#130416)
fed8b0055f : [dynamo][bugfix] Fix the value for key manager (#130368)
9c612df504 : [dynamo][cpp-guards][QOL] Print NO_TENSOR_ALIASING guard once (#130285)
bac10cdd6f : [DCP] Fix duplicated logging messages when enable both c10d and dcp l… (#130423)
0d66ccaf23 : [IntraNodeComm] fix an issue where input check fails when running all-reduce on sub groups (#130492)
f261c6ebe8 : Revert "[halide-backend] Update CI pin (#130258)"
354edb232a : Make public binding test only consider files that are packaged in the wheels (#130497)
215013daad : [cuDNN][SDPA] Limit cuDNN SDPA head-dim to 128 (#130494)
9822fdc354 : [7/N] Replace c10::optional with std::optional (#130510)
f52b2ee90f : Modularize aten parameter parser and checker (#125308)
2a51ccc77e : When translation validation is enabled, assert that hint is consistent (#130478)
c9551a3f50 : Make c10::string_view an alias of std::string_view (#130417)
c5b66c3fe1 : Enable -Werror=pedantic on torch targets (#130319)
5db9bd467e : Skip test_nnc_correctness for new op _unsafe_masked_index (#130375)
b1942a1af4 : [fbgemm_gpu] Break up `fbgemm_cuda_utils.cuh`, pt 10 (#130468)
79c41bb58a : [inductor] switch CppCodeCache to new cpp_builder. (#130132)
75ab027fbb : [dtensor] move bernoulli to op strategy (#130286)
fdc83610f2 : Support for expandable segments with cuda graph trees (#128068)
da24823e06 : [BE][EZ] Migrate to new dcp save and load APIs (#130475)
5835ff1ed5 : [Easy][Inductor] Add comment for .min_order and .max_order (#130390)
a4576dad34 : [reland][custom ops] infer schema (#130079)
9f401187c7 : [pipelining] Refactor test_schedule to fix "-k" (#130294)
dfd1d1971e : Fix warning when pickle.load torch.Storage (#130246)
4fcfd475be : [halide-backend] Update CI pin (#130258)
df9d1b44e7 : Preserve _numeric_debug_handle through deepcopy and re-export (#129287)
a205a53c50 : Make sym_node log more useful (#130436)
79e34800c3 : Suppress guards generated by empty_strided in ir_node_to_tensor (#130431)
798b9652f7 : [6/N] Replace c10::optional with std::optional (#130438)
5bc18ec0a1 : [Inductor][CPP] Support vectorization of remainder (#129849)
6adc725157 : doc - fix the `max_norm` value in a note (#129687)
358da54be5 : [inductor] Better messaging when triton version is too old (#130403)
ceedee23ec : [DTensor] Included meshes in cross-mesh error msg (#130454)
2abc7cc21b : [inductor] switch AotCodeCompiler to new cpp_builder (#130127)
551b3c6dca : Use irange to avoid -Wsign-compare errors (#130388)
ce499eee0c : Revert "Add API for open registration between operators and subclasses (and modes) (#130064)"
83c95c48f7 : Flight recoder data as JSON (#129505)
86bca69c5f : Revert "[custom_ops] expose torch.library.register_torch_dispatch (#130261)"
e14a0f45ed : Revert "[reland][custom ops] infer schema (#130079)"
46c52661bc : Use a better cherry-pick strategy for stable pytorch w/ distribute changes (#129987)
80a421a54d : [TD] Pin numpy to 1.26.0 in indexer (#130442)
cd2638be09 : Revert "[pipelining] Refactor test_schedule to fix "-k" (#130294)"
b81767161e : Revert "[aota] Needs autograd if an input requires_grad, agnostic to enable_grad (#128890)"
1b3b4c2fb9 : [runtime asserts] deduplicate runtime asserts & CSE (#128599) (#130380)
1352f13f78 : [pipelining] Refactor test_schedule to fix "-k" (#130294)
cf090e222e : Update torch-xpu-ops pin (ATen XPU implementation) (#130333)
4b7ee51260 : [BE][MPS] Cleanup optimizers code (#130453)
08d5423d33 : [aota] Needs autograd if an input requires_grad, agnostic to enable_grad (#128890)
0beeac35fa : Revert "[cond] inlining into one of the branches when pred is a python constant (#128709)"
b4b7477d3f : Fix CPU Annotation Overlapping with Python Events (#129599)
6b3460ae0d : fix discrepancy from the export of #126601 (#130296)
7d4cb21098 : Decompose expand_copy and permute_copy (#129476)
a7aa066b09 : Fix link to dynamo in torch/fx readme (#130233)
a09910d3a9 : add strobelight profile links to tlparse (#129703)
fe3e6878c4 : [cond] inlining into one of the branches when pred is a python constant (#128709)
9d94b122f0 : Fix usage of USE_ROCM when calling cudaFuncGetAttributes (#130441)
ae73489b7d : [codemod] Use C++17 [[fallthrough]] in 1 file inc caffe2/aten/src/ATen/native/cuda/DistributionTemplates.h (#130433)
bef085bdfa : [reland][custom ops] infer schema (#130079)
ce4d95143f : Add scale kwarg to FlexAttention (and some changes that get FlexAttention numerics to be as accurate as FA2) (#130250)
a7715e36de : Add block mask utility support for batches and heads > 1 (#130227)
c83b941141 : [export] add dynamic shapes argument and infer from graph nodes (#129928)
d31f866b33 : [BE] [CMake] Remove AT_CORE_STATIC_WINDOWS option (#130409)
81ea298600 : Wrap the test func with try/except to always call destroy_process_group (#124961)
81df076bfd : Fix Apple crash when running PyTorch with Metal API validation turned on (#130377)
417c83e7cf : [ROCm] Unskip scaled_dot_product_attention tests on ROCm (#127966)
b38de2f9e2 : [decomps] Fix aten._to_copy decomp (#130381)
bd3452f431 : [5/N] Change #include <c10/util/Optional.h> to #include <optional> (#130408)
99967e1119 : [MPS][TYPE_PROMOTION] Fix Clamp (#130226)
6ce0bd7d3b : [HOP] Use user directed names for variables where possible (#130271)
637cc8d27f : Revert "update the input `weight` of `_convert_weight_to_int4pack` to `[n][k / 2] uint8` (#129940)"
a1590e16df : Add single Python 3.10, single Cuda 12.1 build with dependencies included (#130349)
cb2bce98de : [MPS][BE] Reduce the number of parameters encoded for no momentum fused SGD (#130131)
6367f02a0e : update the input `weight` of `_convert_weight_to_int4pack` to `[n][k / 2] uint8` (#129940)
e29657efb6 : [Inductor][CPP] Fix typo in merge rules (#130405)
10c7f037fe : Simplify c10::string_view (#130009)
a17d1e5322 : Fix static `py::object` dangling pointer with `py::gil_safe_call_once_and_store` (#130341)
5abe7ebd41 : Add new (private) capture_triton API (#130178)
99c68f7bea : Refactor TritonKernelVariable's logic so it can be shared (#130177)
868d9a4f12 : [cpu][flash attention] fix nan issue (#130014)
68751799b8 : Add decompositions for copy variants of view ops (#128416)
007e75958f : [4/N] Change #include <c10/util/Optional.h> to #include <optional> (#130329)
9912209743 : check if the input fx graph of aot_compile returns a tuple (#129824)
85b8503621 : [Caffe2] Remove Caffe2 documentation (#130089)
7a3ab1fe79 : [structural binding][7/N] Replace std::tie with structural binding (#130216)
fb696bf264 : Revert "Add block mask utility support for batches and heads > 1 (#130227)"
44815ed67e : Revert "Add scale kwarg to FlexAttention (and some changes that get FlexAttention numerics to be as accurate as FA2) (#130250)"
5b5a1f5202 : Add on to Mark some test_decomp tests as slow on win #130260 (#130337)
fd43a2ba27 : Forward fix for test_compare_cpu_cuda_float32 (#130360)
3be4922a9d : Revert "[HOP] Use user directed names for variables where possible (#130271)"
37d4d04309 : [torchscript] Add logging for model id. (#130118)
fb5cb17fbe : [torch][fx] Add normalize_args constructor argument to FxGraphDrawer (#130348)
df83142131 : [CCA][Memory Snapshot] Stop duplicating annotations to all device_traces (#130315)
bb9a73f767 : [custom_ops] expose torch.library.register_torch_dispatch (#130261)
c23d103afa : Add API for open registration between operators and subclasses (and modes) (#130064)
9c9744c3ac : Revert "[runtime asserts] deduplicate runtime asserts & CSE (#128599)"
f85bda8bdd : c10d/Handlers: expose running handlers from Python (#130149)
1d93367cfa : Fix typo (#130305)
721a798886 : add bits16 to graph dtype_abbrs (#130339)
42f647219a : [ROCm] Add int4 support (#129710)
adb65682af : [HOP] Use user directed names for variables where possible (#130271)
a6345d3477 : [CMake] [3/N] Remove unused code (#130322)
3477ee38e4 : fix the use of initial learning rate in the OneCycleLR example (#130306)
d990dada86 : [CMAKE] Look for `Development.Module` instead of `Development` (#129729)
3689471ea4 : [inductor] Add FileCheck to flex attention epilogue test (#129343)
c6cce976b2 : Fix an issue where ENABLE_INTRA_NODE_COMM=1 + multiple process groups leads to failure (#130269)
cb4bec311a : Fix nodes has more than one output users after replace_set_grad_with_hop pass (#129716)
e4c51d22c5 : [cuDNN] Cleanup < 8.5 #ifdefs (#130283)
cab90b0049 : [custom ops] disable kernel temporarily (#130190)
e4ee3be406 : [Release only] use triton 3.0.x from pypi (#130336)
edf273edf4 : Revert some PRs (#130303)
71efbf701d : [3/N] Change #include <c10/util/Optional.h> to #include <optional> (#130300)
a5f816df18 : Add more dtypes to __cuda_array_interface__ (#129621)
3e48d92733 : Add scale kwarg to FlexAttention (and some changes that get FlexAttention numerics to be as accurate as FA2) (#130250)
86fb76e871 : [SDPA] Clean up `print` in `test/test_transformers.py` (#130302)
953c6476bd : [CMAKE] Look for `Development.Module` instead of `Development` (#129669)
b139b5090f : [pytorch] Name threads in thread pools for better debugging (#130270)
312652c325 : [RFC] Add support for device extension autoloading (#127074)
6c4efd4e95 : [Memory Snapshot][BE] Clean up record function callback scope (#130265)
ded469cfbd : [issue scrubbing] Fix imports in test_memory_planning.py to work with pytest (#130275)
e235db98c9 : [Inductor] Add aot_mode UT to new cpp_builder. (#130105)
31df1d235e : Support tensor stride (#129297)
e836ee1955 : Enhancements to recompiles logs (#130043)
29861779ce : [2/N] Change #include <c10/util/Optional.h> to #include <optional> (#130236)
d1e0653fad : [fx][easy] print_readable should recursively apply options (#130268)
f2c9f0c0db : [HOP] improve naming for subgraph inputs (#130255)
abe81d5d05 : Fix the rest of foreach flakers (#130277)
d44c30e2f9 : Revert "Add API for open registration between operators and subclasses (and modes) (#130064)"
75fa10066d : Mark some test_decomp tests as slow on win (#130260)
7f08d3d9a0 : [C10D] Fix corrupt log due to uint_8 printing as char (#130184)
4c19623800 : Change numeric_debug_handle to store per-node id (#129811)
a28bb3268d : [Pipelining] Reorder _Action from F1_1 to 1F1 (#129786)
60d9f3f7d9 : Set the epoch timestamp when uploading data to dynamoDB (#130273)
b4cc25f126 : [custom_op]Fix self in mutation_args (#130179)
17ca0d0edf : Add linux manywheel python 3.13 binary workflows (#130030)
00335a27b4 : Accept min / max sequence length in nested_tensor_from_jagged() constructor (#130175)
922d2737d5 : Add API for open registration between operators and subclasses (and modes) (#130064)
44a773c121 : Revert "[custom ops] infer schema (#130079)"
f9bb258892 : Revert "[Inductor] Add aot_mode UT to new cpp_builder. (#130105)"
5e467604c3 : Revert "[inductor] switch AotCodeCompiler to new cpp_builder (#130127)"
09d57f577b : Revert "[inductor] switch CppCodeCache to new cpp_builder. (#130132)"
856fe230c7 : [AOTI] better approach to generating runtime checks for symbolic dimensions (#130220)
3fe324ffb6 : [custom ops] infer schema (#130079)
1e61cb8c87 : Revert "[3.12, 3.13, dynamo] simplified construction for frame f_locals/localsplus (#129185)"
f059201e0d : [dtensor][debug] added deviceMesh for relevant operations and module parameter sharding and module fqn (#130072)
3e53cae0fc : Release 2.4 matrix update. Future releases dates (#130267)
36e2608783 : [Quant][PT2E] enable qlinear post op fusion for dynamic quant & qat (#122667)
a8985a97f9 : elastic/store: use wait instead of get for barrier (#130148)
22c809aa73 : [FSDP] Runtime Error on Checkpoint Loading for optimizer state (#129110)
9158bb7837 : Ignore functional tensor wrapper when caching (#128335)
6dc64026cb : Restrict fusions in foreach if there are dependencies on multiple subkernels (#130046)
64139987c0 : Add block mask utility support for batches and heads > 1 (#130227)
cd683212a2 : Fix indexing twice with score_mod (#130224)
e16276b9bf : [ROCm] Check supported archs before setting preferred blas backend to hipblasLT (#128753)
b428f1ad77 : [3.12, 3.13, dynamo] simplified construction for frame f_locals/localsplus (#129185)
d325aaef39 : [halide-backend] Use get_reduction_combine_fn for reduction ops (#130212)
a18568f293 : [dtensor][debug] Added functionality to convert log into a json file (#129994)
61017eb77b : Add missing mapping between DLDevice and ATenDevice for MAIA (#129615)
63743b223c : [AO] catch qparam mismatch for cat (#123769)
f4774d64bf : Skip test_profile_memory on windows (#130037)
d7b7f8b79f : Revert "[ROCm] Add int4 support (#129710)"
c8ab2e8b63 : Set seed per sample for OpInfo tests + support for restricting to a single sample input (#128238)
acf9e31cf8 : adding MTIA to supported activities (#130052)
16d53cb7d5 : Only run mixed_mm heuristic if shapes are static (#130081)
010009e642 : [compiled autograd] c++ autograd function saved_data: lift tensors (#130057)
f4dcf2ae93 : [1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)
f053be2a97 : [dynamo] Graph break on random_ op (#130222)
31bb65de19 : [Inductor] Fix conditional codegen (#129492)
c5c9dbece1 : [dynamo][user-defined] Simplify and improve scope of UserDefinedObject var_getattr (#130169)
d0ad13fa42 : [ROCm] Add int4 support (#129710)
d1b832e739 : [inductor][mkl][inline-inbuilt-nn-modules] Change assertion (#130219)
940e4477ab : [runtime asserts] deduplicate runtime asserts & CSE (#128599)
0c44684901 : [Typo] Fix typo in DispatchKeyExtractor.h (#130221)
e423224546 : Revert "[Inductor][CPP] Enable Local Buffer for Outer loop fusion (#126967)"
1b57dce35f : Revert "[Inductor][CPP] Support more than one LocalBuffer (#129121)"
f794cf59bd : [Inductor][CPP] Support more than one LocalBuffer (#129121)
98929ceae3 : [Inductor][CPP] Enable Local Buffer for Outer loop fusion (#126967)
a3ce9eddd6 : [BE][Easy] apply autofix for ruff rule unnecessary-literal-set (C405) and unnecessary-map (C417) (#130198)
9983242c8e : [inductor] support adding a new inductor backend using PrivateUse1 (#129953)
3d138af943 : [Inductor] First implementation of the B2B-GEMM pass with tests (#129995)
3957b3b349 : [inductor] switch CppCodeCache to new cpp_builder. (#130132)
dc5f37193f : [inductor] switch AotCodeCompiler to new cpp_builder (#130127)
dfe3534134 : [1/N] Fix NVCC warnings (#130191)
3f50e197c4 : [BE] annotate `torch.autograd.graph` (#129558)
01ec03bac6 : [inductor] switch HalideCodeCache to new cpp_builder. (#130146)
2f219f7d79 : Enforce unused-{variable/function} checks to all torch targets (#130189)
096eca2f9a : [2/N] Replace exceptions with static_assert(false) in some templates (#130116)
520a4642bf : [CI] Enable build with asserts (#129924)
da66e50e6e : Added compile option to create_block_mask (#130106)
963f430d13 : Revert "[runtime asserts] deduplicate runtime asserts & CSE (#128599)"
aa4899eee9 : [CCA][Memory Snapshot] Fix race on alloc_trace vector - S430480 (#130180)
e019540c9e : Revert "Fix the SDPA AOT export issue (#130164)"
bf609630ae : Fix a bunch of stride issues with FlexAttention (#130160)
10c831567b : Make sympify'ing SymInt/etc produce their sympy expression (#130166)
acd03ca2d9 : [halide-backend] Support scan kernels (#129035)
c5110f6388 : [halide-backend] Use 0D scalar inputs/outputs (#130129)
0267b2ddcb : [runtime asserts] deduplicate runtime asserts & CSE (#128599)
7c43f59a45 : [audio hash update] update the pinned audio hash (#129429)
bd0252fb98 : [dynamo][user-defined] Support method descriptors (#130159)
a1a2023eb8 : Back out "Pass device to is_pinned call inside TensorProperties.create_from_tensor" (#129972)
1927c40684 : Fix the SDPA AOT export issue (#130164)
c5ede865c4 : [pt2-bench] raise tolerance for squeezenet1_1 (#130165)
0fcbca9adb : [pt2-bench] use eval mode for vision_maskrcnn (#130163)
e5841bb8d5 : [3/N] Enforce unused-function and unused-variable checks (#130084)
126796d239 : [c10d] fixing an UT after a change in eager mode new group (#130167)
d1d0a7080f : [torchgen] reference generated comment to actual location of the generator and template (#130020)
6fc771d19b : Revert "Change deprecated warning on dispatch_on_subclass to warn once (#130047)"
df50452279 : Pin optree==0.11.0 on windows CI (#130155)
18e75c098b : [DCP] Adds Checkpointing Team (dcp) to merge rules (#129582)
739fc01ac9 : [NCCL] Make sure current device is correct in `torch.distributed.barrier()`'s `streamSynchronize` (#129908)
faebaef089 : [EZ] Fix typo in upload stats OIDC rolename (#130168)
3d56673b24 : [Split Build][BE] remove extraneous .py, .a, and .so files (#130053)
8ff243bcf1 : Change deprecated warning on dispatch_on_subclass to warn once (#130047)
784e3b4123 : Revert "Change numeric_debug_handle to store per-node id (#129811)"
889ed48a22 : Fix missing id-token write in upload stats (#130153)
7c5f3cd049 : Add explain function to TSConverter. (#129968)
7ea8a3c9b8 : [dynamo] Validate check_fn (#118448)
7192ee0735 : Default to input tensor device for as_nested_tensor(t) (#130050)
a33ee73a28 : Upload perf stats to both Rockset and dynamoDB (#129544)
e7ab7b83bc : Have torch_key hash entire torch directory (#129250)
eea4ece256 : Revert "[audio hash update] update the pinned audio hash (#129429)"
4b05d9d233 : Revert "[NCCL] Make sure current device is correct in `torch.distributed.barrier()`'s `streamSynchronize` (#129908)"
8f6765f7a7 : [pt2-bench] fix accuracy failure for beit_base_patch16_224 during training (#130005)
c0735a3dd3 : [pt2-bench] fix accuracy failure for a few models (#129941)
8f1c2e1e28 : [pt2-bench] pass acc test if ref is NaN (#129996)
78a0b010eb : Refine XPU UTs (#130138)
3240bff56a : [benchmarking] Add join_results.py (#129202)
30fc4b06f5 : [audio hash update] update the pinned audio hash (#129429)
c9f1db265e : [NCCL] Make sure current device is correct in `torch.distributed.barrier()`'s `streamSynchronize` (#129908)
7128504424 : [inductor] Add Triton template for Conv3D (#129518)
e590168865 : Enable sharing meta tensors between processes (#129520)
21eeedb455 : [Inductor] Add aot_mode UT to new cpp_builder. (#130105)
d496145534 : [CD] Add triton xpu wheel build (#129730)
f78b79daaa : Forward fix the missing torch.nn.Module.set_submodule from D59140215 (#130075)
5b5f4b02c2 : [pipelining] [BE] Move pipeline_order validation to schedules.py (#129369)
6dfa53ca76 : Revert "[pt2-bench] pass acc test if ref is NaN (#129996)"
fa3953a2e1 : Revert "[pt2-bench] fix accuracy failure for a few models (#129941)"
54da35a2e0 : Revert "[pt2-bench] fix accuracy failure for beit_base_patch16_224 during training (#130005)"
57d05f2616 : [RELAND] Add xpu to getAccelerator (#129205)
551f3b92b2 : [Dynamo] Add assertion for tensor unpack shape mismatch (#130077)
f3962cfd9c : [RELAND] XPUHooksInterface inherits from AcceleratorHooksInterface (#129463)
fa4e489d70 : [dynamo][dynamic-shapes] Graph break if out shape changes on out= variants (#130074)
e98587c58d : Update torch-xpu-ops pin (ATen XPU implementation) (#129353)
bffb278700 : [ONNX] Add `artifacts_dir` to torch-onnx-patch in benchmark (#130069)
d62d351107 : [Optim][BE] Change str(device) to _get_device_type(device) (#129984)
42f3d7e948 : [MPS] Add mps profiler env vars to docs (#129552)
07b06f0f0a : [2/N] Remove outdated CMake code (#130006)
26be691e6b : Unify shard logic for inductor and dynamo test_config (#129508)
9c9ac670a0 : [dtensor][be] Reduced redundant LOC by creating functions to set up models used in example (#129613)
0b9995c1ce : [dtensor][debug] Added forward and backward differentiation for module level tracing (#129602)
e2e624a02f : [AOTAutograd] Micro-optimize runtime_wrapper (#128188)
a7a7363be0 : [dynamo] Skip side effect tracking for c wrappers/descriptors (#129914)
da8af685ac : [dynamo] Skip ID_MATCH guard on GetSetDescriptorType (#129913)
8405ba21c1 : [inductor][cpp] fix the vec conversion between float and int64 on AVX2 (#130013)
9afe4ec096 : Update torchbench model expected accuracy values after pinning numpy (#129986)
99ec7bbee7 : Force inconsistent-missing-override for torch targets (#130010)
0af8c8a981 : [pt2-bench] fix accuracy failure for beit_base_patch16_224 during training (#130005)
dafbd603ee : [pt2-bench] fix accuracy failure for a few models (#129941)
51fa0bd436 : [pt2-bench] pass acc test if ref is NaN (#129996)
9108b74bbc : Updates to scaled_mm for rowwise scaling (#130059)
cd70ac884f : c10d/Utils: better error message on 0 bytes (#130056)
efb73eda51 : [2/N] Fix some violations of unused-function and unused-variable checks in torch_cpu (#129878)
d95a019704 : [export] construct empty graph when there's no tensor computation (#129541)
2fe7c1fe04 : [custom ops] Support factory function (#129978)
779fc8119e : Revert "XPUHooksInterface inherits from AcceleratorHooksInterface (#129463)"
8a9725bedb : Revert "Add xpu to getAccelerator (#129205)"
a9a744e442 : Change numeric_debug_handle to store per-node id (#129811)
b0d0114f5b : Enable automigration for windows jobs (#129977)
a79bb8db91 : Make `_embedding_bag_backward` explicitly dispatch to CPU and CUDA. (#129691)
7bbd6cf931 : [custom_ops] Mark older custom ops prototypes as deprecated (#130032)
a21d4363d2 : [Profiler] Remove all instances of TMP_USE_TSC_AS_TIMESTAMP (#129973)
042d764872 : [export] Update example inputs format for DB. (#129982)
9b902b3ee3 : AOTI: dont treat views of buffers as constants (#129688)
35600bcaad : Print float with full precision, don't truncate (#130027)
01e41f1814 : Modified autotuning for flex_attention to pass in (proper) fake inputs for the block sparse entries (#129915)
e2eb33b089 : Added methods to blockmask to visualize them (#129950)
29c68df600 : Stop immediately specializing common constants 0/1 for plain int (#128327)
9e1e58e052 : Support allowlisted modules and op overloads in AOTAutogradCache (#128329)
64a04d2225 : Make sparse empty constructors specialize instead of fail on symbolic inputs (#129983)
735044191f : [Easy] Add whitespace after comma when re-rendering tuple default value in schema (#129884)
8f70bf7a94 : Skip TestSDPAPrivateUse1Only on FBCODE (#129997)
62b710782d : change LayoutLMForSequenceClassification inference accuracy tolerance (#129728)
4fc9157e90 : [halide-backend] Disable split reductions for Halide (#129320)
0abcca85b7 : [halide-backend] Support manual schedules (#129321)
8af58f66bb : Fix typo in floordiv solver code that affects flipped relation (#129888)
424cd1e1df : Enable TORCH_TRACE by default on Conda on Mast (#129988)
1026b0f687 : Use setup-miniconda step from test-infra for llm retrival workflow (#129720)
31fc5b8966 : Add support for inline_asm_elementwise in Inductor lowerings (#129846)
9ee8c18309 : TCPStore: add ping to verify network connectivity on connect (#129985)
91a8376d47 : run_test: Unset cpp stacktraces after reruns (#129004)
c77c139878 : [Intel Triton] Update Intel Triton to resolve installation issue on manylinux. (#129847)
c686304277 : Enable UFMT on test/test_public_bindings.py (#128389)
3b77b122c5 : [Inductor UT] update rtol for convolution on XPU. (#129782)
1e27af335e : [easy] enhance local model loading (#129897)
be2d79a16b : [dynamic] config to disable duck sizing (#129804)
111f9b5d44 : [Dynamo] Add config to skip/inline torchrec (#129912)
89646ebb11 : Revert "[export] make with_effect mark op has_effect to prevent them from DCEed. (#129680)"
921c116089 : [inductor] Kill mark_node_as_mutating (#129346)
b2ac8d2af3 : [inductor] Use multiple outputs for flex-attention (#129344)
45844e0d4e : [inductor] Add FileCheck to flex attention epilogue test (#129343)
7955cd3e83 : [inductor] Make UserDefinedTritonKernel a multi-output operation (#129325)
fb078c20c1 : [inductor] Separate Buffer and Operation into two concepts (#128893)
872d972e41 : [custom_op] better error message on no returns (#129896)
aa0352ca38 : [custom ops] add default value support for device types (#129792)
d7680a564b : Bug fixes for disabling 0/1 specialization on plain int (#129961)
29ffa20bb1 : [CUDA] Bump tolerances for `test_grad_pca_lowrank` (#129902)
b5fdbc1a9f : Revert "[pipelining] [BE] Move pipeline_order validation to schedules.py (#129369)"
b6f781e433 : Bug fix for capturing execution trace grid function (#129832)
39357ba06f : [dynamo] don't constrain range on the replacement for a symbol (#129907)
c22e66896f : Revert "Fix typo in floordiv solver code that affects flipped relation (#129888)"
1ddb100318 : [FSDP1][Easy] Remove Spammy Log Line in _runtime_utils.py (#129967)
deefc10dd3 : [executorch hash update] update the pinned executorch hash (#129428)
26de2c2487 : [3/N] Enable clang-tidy on torch/csrc/jit/serialization/* (#129850)
8ec5ba960f : [MPS] Add tensor_lr overloads to fused adam & adamw (#129451)
2631a96f2a : Stop updating hints (#129893)
1f6c1fcd36 : [dtensor][debug] add operation tracing to comm_mode (#129017)
bf05ea2bab : Re-generate Linux build workflows after #124014 (#129976)
080149cb38 : [Inductor][FlexAttention] Add helper functions of converting score_mod to block_mask (#129909)
1f3e2d7877 : [Inductor] Rename TemplatedAttention to FlexAttention (#129859)
aa7ea6b45c : Add wraps back (#129933)
ec789a3c9d : [pipelining] [BE] Move pipeline_order validation to schedules.py (#129369)
4eb449f7dc : [pipelining] add small logging section to docs (#129368)
34e94c507a : [Inductor] Make FlexAttention block_mask argument as tuple (#129831)
9105d54c6b : [dynamo][sparse] Graph break on sparse tensors (#129883)
75443d3daf : [dynamic-shapes] Dont create symbol if .item() is a nan (#129881)
d146a62e77 : [MPS][BE] Introduce `mtl_setBytes` (#129910)
9fb2dec7a6 : [custom ops] Add unknown arg (#129614)
e3b3431c42 : Fix for HistogramObserver (#129387)
03440a1c13 : Revert "Add support for inline_asm_elementwise in Inductor lowerings (#129846)"
3fd128361e : [traced-graph][sparse] add relay override for layout_impl (#129930)
dacc33d2fa : Make sym_min/sym_max handle Numpy scalars (#129917)
f1df13f023 : [BE][Easy] Fix `PYI001`: unprefixed-type-param in `torch/utils/data/datapipes` (#129885)
499621e7bb : [CherryPick][FSDP2+TP] Disable 2D state_dict (#129519) (#129923)
257b9c7936 : Fix layout for *_like() factories on NJTs (#129879)
6c2a8b6b38 : [Ez][BE]: Enable new stable ruff rules (#129825)
2926655761 : [inductor] optimize cpp builder configuration code (#129577)
e5bda62849 : [CherryPick][DCP] Fix Optimizer Learning Rate not being loaded correctly (#129398) (#129683)
6cb0ad3375 : [BE]: Update NCCL submodule to 2.21.5 (#124014)
dc75ec252a : [inductor] Fix can_merge check for expr=q0*q1 (#129806)
37e3c60897 : [Inductor][CPP] Remove redundant INT8-specific logic in the INT8 GEMM template (#129470)
b6379591a9 : [Inductor][CPP] Pass weight dtype explicitly for cpp gemm template (#129221)
72fa864098 : [Inductor][CPP] Enable Quantized Linear with AMX MicroGEMM (#129220)
a796358330 : [Inductor][CPP] Enable Quantized Linear GEMM Template with Binary Fusion (#129103)
86e2d16ba0 : [Inductor][Quant] Change the schema of QLinear Binary (#129049)
07450e9713 : Revert "[MPS] Add support for autocast in MPS (#99272)"
0441173ab2 : Add slowTest marker to test_linalg_solve_triangular_large (#129903)
95a5958db4 : [ROCm] Update nightly triton-rocm pin to release branch (#129361)
3c6c3b9448 : Fix typo in floordiv solver code that affects flipped relation (#129888)
8ef8240172 : Don't mark conversion to float as is_integer = False (#129890)
eb1ff76f23 : Make are_strides_like_channels_last size oblivious (#129677)
ebeeb22669 : Correctly put mark_unbacked symbols in shape_env_to_source_to_symbol_cache (#129869)
567dd1a3ca : [inductor] unify toolchain code. (#129816)
badc638eb6 : Add support for inline_asm_elementwise in Inductor lowerings (#129846)
ccc4ee7793 : check boolean alpha and beta of Fake tensor impl for Tensor.addr (#129839)
5c9d5272e4 : fixes #124582 (#128483)
1ad683033b : Implemented flexible PP schedule (#129597)
3e2df3ca9d : Add xpu to getAccelerator (#129205)
6353a12e6a : XPUHooksInterface inherits from AcceleratorHooksInterface (#129463)
76259ebfdd : [inductor] split cpu vec isa to dedicate file (keep git history) (#129789)
f6edd1f7c9 : [BE] Make ActivationWrapper an abstract class (#129808)
c2d0b7b96d : Revert "[ROCm] std::clamp work-around for hip-clang compiler (#127812)"
6240cfd5c7 : [MPS] Add support for autocast in MPS (#99272)
600bf978ba : [Pipelining] Add to/from CSV format and improved __repr__ (#129264)
83e6ec2ccd : [FSDP2+TP] Disable 2D state_dict (#129519)
46366888d7 : Remove outdated CMake code (#129851)
7e4329c258 : [EZ][BE] Bump min cmake version to 3.18 (#129906)
9645eaaaec : [BE] Improve logging for runner-determinator (#129679)
eeef68671d : [autograd] Do not detach when unpacking tensors that do not require grad (#127959)
87693b534c : [ROCm] Use AOTriton as a dynamic library (#129094)
8c2c3a03fb : [ROCm] std::clamp work-around for hip-clang compiler (#127812)
750c701e49 : [ROCm] Update xlogy comment detailing issue (#128151)
78cda9a810 : [symbolic-shapes] Add FloatPow in the symbolic shape guard closure (#129857)
53d67165c0 : [dynamo] Skip FUNCTION_MATCH guards for descriptors (#129858)
f86dbae247 : Fix typo in lxml requirement (#129695)
fdd0a7f9b4 : Run test_mps_allocator_module serially (#129340)
b02186ffc1 : Revert "Allow get attributes on DDP similar to FSDP (#128620)"
bb0f3df562 : Fix index issues in torch.fx.interpreter (#129527)
1956d87c1f : Increase riscv implementation in DepthwiseConvKernel (#127867)
c9dc9887db : Revert "Enable UFMT on test/test_public_bindings.py (#128389)"
433b691f98 : Revert "[inductor] optimize cpp builder configuration code (#129577)"
19e17216a2 : Revert "[inductor] split cpu vec isa to dedicate file (keep git history) (#129789)"
b6dc37bb4e : Revert "[inductor] unificate toolchain code. (#129816)"
ca5d13c672 : [1/N] Enable unused variable warnings on torch_cpu and fix some violations (#128670)
e385bf8ef8 : Revert "[halide-backend] Disable split reductions for Halide (#129320)"
a83eaf1c3a : Revert "[halide-backend] Support manual schedules (#129321)"
cc9b005bf2 : Enable torchao nightly workflow (#129779)
75f64e1203 : Fix test `test_type_hints.py::TestTypeHints::test_doc_examples` (#129829)
e1b426b345 : [ROCm] CUDA_VISIBLE_DEVICES fallback option for device_count (#129650)
313eec02cc : Add hash function of std::string_view to torch/csrc/lazy/core/hash.h (#128800)
f6a0be5023 : Add warpSize to Device properties (#128449)
04a0d85620 : [BE] Print all pip packages installed on the system after TorchChat (#129809)
eb1583dbc1 : [2/N] Fix clang-tidy warnings in torch/csrc/jit/serialization (#129300)
e62073d799 : [dynamo] Skip FUNCTION_MATCH on method-wrapper objects (#129830)
24b6c5a41f : [cuDNN][SDPA] Bail out of dispatching to cuDNN for head dim > 128 on Ampere (#129587)
f845a7a91a : [cuDNN][SDPA] Remove `TORCH_CUDNN_SDPA_ENABLED=1`, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)
7b0e9a27ba : Restore `allowed_info` in OOM message when applicable (#129546)
8755e035d2 : [CUDA][Pooling] Fix 64-bit indexing in `avg_pool_2d` backward attempt 2 (#129818)
4dd3cff234 : [CUDA] Fix more `DeviceIndex` printing (#128540)
68484621fe : [cuDNN][functorch] Bump tolerances for `nn.functional.conv2d` in `test_vmap_autograd_grad` (#129796)
fff633f087 : [CI] Enable AOT inductor FP32 accuracy test for CPU (#129040)
8a5fda0377 : added type hints for __contains__ (#129653)
1a689ea38c : [Inductor][CPP] Enable Quantized Linear GEMM Template with INT8 output and Unary Post Op (#129048)
35a197defa : [Inductor][CPP] Enable Quantized Linear GEMM Template with FP32 output (#128825)
fe5424d0f8 : Enable UFMT on test/test_public_bindings.py (#128389)
4ee1cb9b95 : [BE][Easy] replace `import pathlib` with `from pathlib import Path` (#129426)
2effbcfcd8 : Revert "[BE][Easy] replace `import pathlib` with `from pathlib import Path` (#129426)"
67c9ec2b6d : [inductor] unificate toolchain code. (#129816)
3fec0efd34 : [Inductor][CPP] Support vectorization of bitwise fn (#129733)
705e3ae420 : Improve error message for weights_only load (#129783)
6d75604ef1 : [BE][Easy] replace `import pathlib` with `from pathlib import Path` (#129426)
7837a12474 : [BE] enforce style for empty lines in import segments (#129751)
9ae78a578c : [halide-backend] Support manual schedules (#129321)
a18eb651d3 : [halide-backend] Disable split reductions for Halide (#129320)
4cb8cb04a7 : [halide-backend] Enable bfloat16 support (#129036)
b93bf55b6a : [halide-backend] Add GPU support (#127506)
86cadc6385 : [halide-backend] Dimension-based indexing (#129026)
da5f37515e : [halide-backend] Generate standalone runtime (#129025)
e34b7e6af3 : [halide-backend] Initial implementation of HalideKernel and HalideScheduling (#126417)
13d4be1dc7 : [pipelining] Support W action for schedules (#129233)
a6da01bd01 : [pipelining] Support arbitrary stage ordering on ranks (#128976)
18ae3bab2f : [Pipelining] Support separate dw_runner for PipelineStage (#128983)
b0e5c9514d : use shutil.which in check_compiler_ok_for_platform (#129069)
56935684c3 : Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in `.pyi` stub files (#129419)
9120992c72 : [BE][Easy] enable postponed annotations in `torchgen` (#129376)
8a67daf283 : [BE][Easy] enable postponed annotations in `tools` (#129375)
58f346c874 : [inductor] split cpu vec isa to dedicate file (keep git history) (#129789)
a676b7c5f3 : Add XGLMForCausalLM to the flaky model list (#129776)
5d1763d159 : Add lcnet to the inline_inbuilt_nn_module list (#129775)
89696db4b0 : Revert "[LLVM/TensorExpr] Update for an API change in LLVM 18." (#129797)
3ef44df667 : [ts-migration] support prim::SetAttr and fix prim::GetAttr (#129440)
ec47d4d9a8 : [Inductor] FlexAttention supports block sparse mask (#129216)
7b5a8424a1 : [GPT-fast] Update micro benchmark numbers as A100-50G (#129799)
065c386990 : Allow get attributes on DDP similar to FSDP (#128620)
2bc6f329b2 : Make PyTorch argparser understand complex (#129580)
dfd55d1714 : Revert "[cond] inlining into one of the branches when pred is a python constant (#128709)"
3d96217891 : Revert "[BE][Easy] use `pathlib.Path` instead of `dirname` / `".."` / `pardir` (#129374)"
c0782e7c81 : Kineto profiler: collecting observer traces from C++ child threads (#128743)
a32ce5ce34 : Revert "[BE][Easy] enable postponed annotations in `tools` (#129375)"
6063bb9d45 : Revert "[BE][Easy] enable postponed annotations in `torchgen` (#129376)"
83caf4960f : Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in `.pyi` stub files (#129419)"
00d7bba2fa : Revert "[BE] enforce style for empty lines in import segments (#129751)"
fa6c0fe3e4 : Revert "Conversions between strided and jagged layouts for Nested Tensors (#115749)"
24f69eef6a : [FSDP2] Ran reduce-scatter copy-in in default stream (#129721)
f06e3a1569 : [Split Build] Make script not crash if split build is not set (#129774)
7bda23ef84 : [BE]: Update ruff to 0.5.0 (#129744)
0a337613f8 : Fix typo in stack_module_state doc (#129126)
f5ff1a3ab9 : [BE] enforce style for empty lines in import segments (#129751)
5b96a552df : Add a check and error message for no support on MPS for conv with output_channels > 2^16 (#129484)
bc8883a7c4 : fix the error msg in device_mesh (#129747)
45f3e20527 : Improve error message for weights_only load (#129705)
99456a612b : [AOTI] Properly indent launchKernel calls in AOTInductor (#129616)
b26cde49b6 : [Windows] remove mkl shared library dependency. (#129740)
6120aa3718 : [nn-module] Use standard dict for _parameters, _modules and _buffers (#129164)
db4c7bb7fc : Refine typing annotation for compile (#129136)
59e4e92556 : sdp::SDPBackend::flash_attention support PrivateUse1 (#126392)
26d633b721 : [BE] Correctly catch skip signals emitting from sys.exit in Sandcastle (#129731)
c12a4f2e65 : Add decomposition for slice_scatter (#123744)
6897631ceb : Guard on inner tensor names for traceable wrapper subclasses (#129618)
b84036e3fb : [AOTI] Fix test_dynamic_scalar_abi_compatible_cpu_with_stack_allocation (#129173)
04264efab6 : Add structured logging on FXGraphCache hit (#129588)
e40f50cb87 : Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in `.pyi` stub files (#129419)
494057d6d4 : [BE][Easy] enable postponed annotations in `torchgen` (#129376)
59eb2897f1 : [BE][Easy] enable postponed annotations in `tools` (#129375)
2e3ff394bf : [inductor] optimize cpp builder configuration code (#129577)
eabe6574c0 : [metal] Parameterize group_size in int4_mm test, fix int4mm shader for group_size > 128 (#129628)
635d6c9d66 : [FSDP2] Ran post-acc-grad hooks manually (#129450)
fe4032fe20 : [BE][CMake] Do not use `EXEC_PROGRAM` (#129714)
98d34d849d : Add a XPU UT to ensure lazy init (#129638)
22a06869f2 : include jit/*.pyi (#129654)
12ad767daf : [distributed] NCCL result code update (#129704)
1164d3cb9c : Add threadfence to 2-stage reduction for correct writes visibility (#129701)
9533637daa : Inductor to fail gracefully on Voltas for bf16 tensors (#129699)
fadd3cc4ab : [MacOS] Improve libomp packaging (#129697)
80277a50bc : Remove cuda check in the CUDAGraph destructor (#129696)
424068d0d2 : [Windows] remove mkl shared library dependency. (#129493)
a0dac3de31 : Use same size/stride as input for noise tensor to improve performance in the channels-last case. (#129467)
999eec8dea : Revert "[cuDNN][SDPA] Remove `TORCH_CUDNN_SDPA_ENABLED=1`, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)"
d21993bbb8 : Revert "[cuDNN][SDPA] Bail out of dispatching to cuDNN for head dim > 128 on Ampere (#129587)"
c43923a116 : Revert "[Inductor] FlexAttention supports block sparse mask (#129216)"
73eb4503cc : Enable UFMT for numpy_test files, test_xnnpack_integration.py (#129023)
b019f38fdd : [inductor] Fix pattern replacements with multiple users (#129689)
7854d84acb : [cuDNN][SDPA] Bail out of dispatching to cuDNN for head dim > 128 on Ampere (#129587)
8d4216af8c : Fix compile error with Intel oneAPI compiler (#129589)
4b8a5e0374 : [export] make with_effect mark op has_effect to prevent them from DCEed. (#129680)
4b598d87d3 : Fix FindBLAS.cmake (#129713)
b9d3cedd64 : [Inductor] FlexAttention supports block sparse mask (#129216)
c07a799ed5 : [Traceable FSDP2] Add Dynamo support for run_with_rng_state HOP (#127247)
36b9d9cfcd : [Inductor UT] Generalize device-bias code in newly added UT `test_scatter_optimization.py` (#129622)
deaab33f3f : [custom op] add error message (#129417)
8ba0f6c7c2 : Revert "[nn-module] Use standard dict for _parameters, _modules and _buffers (#129164)"
9e1f3ecaa7 : [BE][Easy] use `pathlib.Path` instead of `dirname` / `".."` / `pardir` (#129374)
d4b6ff6fbe : Disable llm-td step (#129722)
0ffb17547e : [Simple FSDP] Add unit test for torch.compile + reparameterization + SAC (#129641)
169b4ca07e : add uuid in cudaDeviceProperties (#125083)
fb5888c719 : Remove unused type traits in torch/csrc/utils (#128799)
3fc279633b : [ATen] Make argsort.stable CompositeImplicitAutograd (#129529)
7cf0b90e49 : [BE] enable UFMT in `torch.utils.data` (#127705)
f911957573 : [BE] sort imports in `torch.utils.data` (#127704)
d80939e5e9 : [BE] enable UFMT for `torch/storage.py` (#127706)
67416a2996 : [c10d] Introduce a util for detecting DMA connectivity among devices (#129510)
305ba62906 : Add support to `GradScaler` for respecting an already set `grad_scale` value (#123429)
83a4a8b510 : [C10D] clean up pointless 'or None' clause (#129522)
5e7ac69a67 : [Dynamic Shapes] fixed dynamic shape inference (#128807)
b8398b771c : Upload test stats when workflow regardless of conclusion (#129694)
1d0efedc85 : [Profiler] Add TSC Clock Callback to CUPTI (#125036)
d0831d65aa : Tunableop hotfix2 unit tests, release/2.4 (#129607)
ca8d4d1751 : TunableOp hotfix (#129499)
602b5cb218 : [inductor] switch HalideCodeCache to new cpp_builder. (#129441)
39427288f4 : Taskify training IR + run_decomp flow failures (#129547)
23adf166e1 : [cond] inlining into one of the branches when pred is a python constant (#128709)
71f5ecd1ee : Fixed Memory Leaks in tests (#129640)
dabaebd339 : Make run_decomp work (#129249)
ec284d3a74 : Prototype for export_for_training (#129092)
4dcc1ceff3 : [dynamo] Fakify result of delegate (#128752)
389492e264 : Fix runner determinator bug (#129612)
a4d7aa498b : [Traceable FSDP2] Add auto-functionalize support for mutable list[Tensor] (copy from Brian's PR #127347); enable E2E inductor unit test for transformer model (#129502)
9174d14551 : Don't install remaining caffe2 python files (#129067)
e0bba37d66 : [codemod] Add `[[noreturn]]` to 2 files inc caffe2/c10/util/TypeCast.cpp (#129575)
321bdcb372 : Fix device propagation for checkpointing (#128671)
04206d1898 : TunableOp hotfix, unit test follow-up (#129606)
5c6af2b583 : [cpu] Fix div with rounding_mode="floor" when division overflows (#129536)
3d7d7927ca : Upload release tag source code to s3 (#129600)
5f7de217cb : Add warning for weights_only (#129572)
072d9e8ac9 : Add example for torch.serialization.add_safe_globals (#129573)
1f84579407 : Cherry pick #129244 #129251 #129509 (#129574)
5ceba6a3cb : Revert "[Inductor] FlexAttention supports block sparse mask (#129216)"
82c8fc3a2b : [inductor] Add size_hint to conv dilation (#129631)
483dbfcf2a : [BE] Correctly catch skip signals emitting from sys.exit (#129581)
2d9012ad25 : Forward fix internal pyre failure from D58983461 (#129525)
0680e6cd1c : [Profiler] Add sraikund16 to profiler paths in CODEOWNERS (#129591)
ad607b91f4 : [dynamo][onnx] Skip some dynamic=True test with inlining in built nn modules (#129610)
a028e5862d : [profiler] Directly use end_ns to create the FunctionEvent instead of using start_ns + duration_ns in pytorch profiler post processing for checking parent-child precisely (#129554)
ff026f3d0a : Fix an issue in meta_scaled_mm (#129521)
9f29a2291c : Feat: Updated torch.nn.Modules.set_submodules() (#127714)
c9798d123b : [dynamo][compile-time] Manually trace torch.nn.Module.parameters (#129583)
cf392d8a89 : [pytorch][cuda] Generate kernels for 5x5 filters on depth wise convolution backward (#129609)
4082759925 : [Inductor] FlexAttention supports block sparse mask (#129216)
5ee893a84a : Add inductor support for conv3d transpose (#129458)
9b5b93c58f : [CUDA][Inductor][CI] Revert PR#127150 since cu124 is now behaving similar enough to cu121 (#128423)
ea588d7fd3 : [SymmetricMemory] use SCM_RIGHTS socket control message to share exported cumem handle (#129412)
84ad5452f6 : [MPS] Fused SGD optimizer (#129350)
e19042481b : [cuDNN][cuDNN Frontend] Bump cuDNN FE submodule to 1.5.2 (#129592)
9450e198aa : Conversions between strided and jagged layouts for Nested Tensors (#115749)
c9ceae3fac : Use JK for mast rdzv handler tcpstore handling and additional logging (#129603)
b9697eacd3 : [torchbind] support tensor ops inside of __obj_flatten__ (#129605)
cdbd6542d0 : Fix inductor benchmarks (#129620)
27a14405d3 : enable device index check for all device types (#126767)
0b7e8df7d8 : [CUDAGraph Trees] Enable input mutation support in OSS (#129184)
7bb558fd6e : add _flash_attention_forward and _efficient_attention_forward to compute intensive ops in partitioner (#129533)
b6689e0fb8 : [ts migration] add logging as part of torch logging system (#129405)
90f6043368 : Don't decompose functional composite ops in export inference IR (#128077)
64f1111d38 : Expose nlohmann json to torch (#129570)
5ad2ad5921 : Update start_, end_ and retired only for the right entry when retire a work (#128948)
b8e5678ad2 : Delete lazy ddp optimizer (#120727)
13316a8d46 : [Profiler] Add Rank to NCCL Debug Info (#129528)
7b1988f922 : [ez] Give trymerge id token write permissions after #129503 (#129594)
795db80975 : Upload release tag source code to s3 (#128842)
28480dd7dc : [CI] Fix runner determinator for ciflow (#129500)
d3d6764082 : [pytorch][logging] add fb internal ODS implementation of wait counter (#128605)
90f82426b9 : RS migration - trymerge to upload merge records to s3 (#129503)
895316119d : Revert "[BE][Easy] use `pathlib.Path` instead of `dirname` / `".."` / `pardir` (#129374)"
e9aefad641 : Revert "[CUDA][Inductor][CI] Revert PR#127150 since cu124 is now behaving similar enough to cu121 (#128423)"
cca85c96cd : [export] minor typo fix (#129543)
87d14ad419 : [inductor] Fix TORCHINDUCTOR_FORCE_DISABLE_CACHES (#129257)
61bf1452a3 : Add one more shard for CPU jobs (#129299)
b9a1c2c991 : [ROCm] Enable F8 Inductor Unit tests (#128353)
8e4f7f742f : [DCP] Capture reader, writer and planner components in the DCP API logger (#129548)
7373492c9b : Use _unsafe_masked_index in masked_scatter decomposition (#123667)
1b1fd0f4fe : [ROCm] Use additional shard for inductor workflow to resolve timeouts (#129480)
bc68907caa : [EZ][BE] Replace `assertTrue` with more appropriate checks (#129569)
9cf8e5dd32 : chore(quantization): Enable PT2E symmetric dynamic quantization (#124615)
f7708ffebb : Revert "[AOTI][refactor] Unify UserDefinedTritonKernel.codegen (#129378)"
4d83bca8d8 : Revert "Cherry pick #129244, #129251, #129239, 129396 into release/2.4" (#129571)
04339eec05 : [Inductor][Intel GPU] Support reduction split. (#129120) (#129337)
22a4d46e2b : Cherry pick #129244, #129251, #129239, 129396 into release/2.4 (#129478)
560869918d : Documentations for XPU functionality to PyTorch (#129266)
2bf37985b1 : Support HSDP + Monolith Checkpointing (#128446) (#129254)
491e9e2d4a : [DSD] Add unittest to verify HSDP1 + broadcast_from_rank0 (#128755) (#129255)
ec19059347 : [DSD] Correctly handle shared parameters for optimizer state_dict (#1… (#129252)
04e98d3d0e : [cpp_extension][inductor] Fix sleef windows depends. (#128770) (#128811)
474d743dba : [torchao][benchmark] Skip all accuracy tests by returning `pass_due_to_skip` (#129545)
25cec43678 : Remove dependency on private _compat_pickle in CPython (#129509)
3b531eace7 : Add example for torch.serialization.add_safe_globals (#129396)
303ad8d7f5 : Add warning for weights_only (#129239)
52009068bc : [AOTI][refactor] Unify UserDefinedTritonKernel.codegen (#129378)
42d490d41d : [AOTI][refactor] Move generate_user_defined_triton_kernel (#129267)
53fafdd0c3 : [BE] Runner determinator: more resilient user matching (#129462)
211f38e742 : Revert "[ALI] [Reland] Use LF runners for Lint (#129071)"
92be3403ea : Fix an issue in oneShotAllReduce where different ranks perform reduction in different order (#129501)
f2840bb220 : [nn-module] Use standard dict for _parameters, _modules and _buffers (#129164)
ead97ee486 : [Compile+SAC] Only warn for in-place ops once (#129397)
c422a9549d : [easy][DCP] Fix test_fsdp_ep.py for _MeshEnv.create_child_mesh API ch… (#129445)
8b8e2fcdda : [DCP] Fix Optimizer Learning Rate not being loaded correctly (#129398)
000f2d637b : Refactoring the code to make it lint clean (#129424)
610894e978 : [MPS][BE] Generalize Fused optimizers (#129105)
d02bba519c : [export] match fake mode for _decompose_exported_program() (#129421)
7420bad74c : [BE] Do not assert if the barrier is not created (#129497)
c04cec609d : [dtensor][debug] fixing CommDebugMode module collective tracing (#128887)
bd3a11776f : [dtensor][test] test case suite for comm_mode features (#128729)
6181e65cd8 : Nested tensor subclass support (#127431)
cda4d4887d : Skip signals from older runs of the same workflows (#129291)
c718e2f43b : [pytorch][logging] add empty wait counter implementation (#128466)
54f27b886e : [Inductor UT] Reuse test_distributed_patterns.py for Intel GPU (#129437)
555f71a15b : Fix test_auto_simd in machine with AMX support (#129444)
a89a1ed072 : [easy][DCP] make BroadcastingTorchSaveReader device generic (#129231)
90d5a6f001 : [inductor] Add lowering and codegen for aten.sort (#128458)
b7e7a4cb01 : [cuDNN][SDPA] Remove `TORCH_CUDNN_SDPA_ENABLED=1`, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)
9554a9af87 : [GPT-benchmark] Distinguish LLM models and micro-benchmarks (#129498)
0d0d42c4a7 : test_qat_mobilenet_v2 succeeding on dynamo (#129532)
112ef79f29 : [inductor] Remove comm-specific node attributes from scheduler (#129084)
d1f9e822dd : [DTensor][Test] Update implicit replication unit tests for tensor arg being the first in args list (#127803)
575bc1e3af : [Reopen #114036] Allow "must recompute" in torch.compile + selective checkpointing (SAC) (#129295)
f389541ce0 : Add Strided Input test for flex attention (#128915)
87ebd627a7 : RS migration - upload sccache stats to s3 instead of rockset (#129490)
52341c28e8 : Revert "[FSDP2] Ran post-acc-grad hooks manually (#129450)"
bbd47f7b2f : Remove ProcessGroupCudaP2P and change async-TP to use SymmetricMemory (#128762)
1c5df9107d : [BE] Fix several incorrect skip tests (#129488)
fd414d6189 : [inductor] don't materialize the large sparse matrix in CE bwd (#129043)
e1499f6342 : [C10D] Make new_group eager when used with comm_split (#129284)
e58ef5b65f : [export] Rewrite exportdb formatting. (#129260)
551e412718 : [CUDA][Inductor][CI] Revert PR#127150 since cu124 is now behaving similar enough to cu121 (#128423)
79959d707c : [Inductor][ROCm] Composable Kernel backend for Inductor (#125453)
ae0f84d89c : [CI] Enable amp accuracy check for inductor cpu (#127758)
45f2876934 : [Fix] NumToTensor resulted from numel() and size() in TSConverter (#128761)
e68ee2cadb : TunableOp hotfix (#129281)
1865fe282f : Log whenever we sleep (#129197)
b1f486aff9 : Revert "Add warning for weights_only (#129239)"
7cf454ec52 : Revert "Add example for torch.serialization.add_safe_globals (#129396)"
0298560ca2 : TCPStore: improve connect and retry logic (#129261)
816e8a3f21 : [MacOS] Improve libomp packaging (#129473)
b045878f81 : Revert "Remove test_mps_allocator_module XFAIL (#129340)"
7ebffef4d0 : [FSDP2] Ran post-acc-grad hooks manually (#129450)
dd00f5e78d : Fixes T192448049 (#129146)
53f462c506 : Write dynamo benchmarks performance result to csv when throw exceptions (#126764)
e317a8b264 : Add guard to use AMX for x86_64 only (#129479)
45b2931b7e : Revert "[Traceable FSDP2] Don't decompose fsdp.split_with_sizes_copy (#129414)"
fb40ba6fc2 : Revert "[Traceable FSDP2] Add Dynamo support for run_with_rng_state HOP (#127247)"
ad76da6c16 : Revert "[inductor] Fix TORCHINDUCTOR_FORCE_DISABLE_CACHES (#129257)"
b38f6d4cd2 : Revert "[inductor] Enable FX graph caching in OSS by default (#125863)"
f8db12a538 : Fix logic to find sbgemm in BLAS library (#125227)
665d6ea05b : [export] Fix IR canonlization. (#129401)
e364290718 : Support linear backward for NJT with dim > 3 (#129393)
0e6bb7f1ce : [caffe2][be] migrate global static initializer (#128784)
fd4af87855 : Fix non-portable path warning (#129474)
cb1c56caba : Set target dependencies to always build for sm90a on rowwise scaling (#129402)
71ebe5121a : [MPS] Fast math env var (#129007)
bbdeff76fc : fix add decomposition for complex numbers (#129044)
6508f0f5d4 : Improved backward tracking and attribution, fixed typing for python < 3.10 (#129400)
63474620ab : test_jit: Replace plain assert by test assert (#128950)
0314c4c101 : [BE][Easy] use `pathlib.Path` instead of `dirname` / `".."` / `pardir` (#129374)
4ca8eecca4 : skip test_graph_capture_oom for jetson (#128661)
8bfd9e9815 : [cuDNN] Graph-capturable cuDNN CTCLoss (#128271)
533c4190f9 : [inductor][cpp] support nested kernel with indirect indexing (#129223)
665dbc2f52 : [easy][DCP] Fix test_fine_tuning.py for get/set_state_dict API changes (#129365)
0e1e289033 : [ONNX] Benchmark refactored ONNX export (#129427)
f18becaaf1 : Add example for torch.serialization.add_safe_globals (#129396)
381ce0821c : Add warning for weights_only (#129239)
c5f7755e86 : Allow BUILD/NEWOBJ instruction for items added via torch.serialization.add_safe_globals (#129251)
1bb1e3463c : Fix allowlisting of builtins for weights_only unpickler (#129244)
aa4ee2cb9e : [Traceable FSDP2] Add Dynamo support for run_with_rng_state HOP (#127247)
b24787b757 : [Traceable FSDP2] Don't decompose fsdp.split_with_sizes_copy (#129414)
e6bfa2958b : Add aten._unsafe_masked_index (#116491)
4d04203852 : [BE] Runner determinator: Expect usernames to be prefixed with '@' (#129246)
533395e204 : Fix build error on s390x (#129326)
c4dd752d97 : [dynamo][compile-time][inlining-inbuilt-nn-modules] Manually implement nn.Module._call_impl (#129285)
514f9279f8 : [dynamo][compile-time] Manually implement nn.Module.__getattr__ to reduce compile time (#129315)
c012013aa6 : Revert "Add Strided Input test for flex attention (#128915)"
1315be4893 : [aotinductor] only autotune at compile time when enabled via config (#129413)
78e40b271b : Change index_put on GPU to accept FP8 inputs (#128758)
8b6391ee59 : [Test][DTensor] Temporarily skip gloo test for test_depthwise_convolution (#129391)
81de71fdc5 : [inductor] fix a double clone in coordesc tuning (#129399)
14dc08ddc7 : Inductor to fail gracefully on Voltas for bf16 tensors (#129288)
4c1e4c5f30 : [inductor] Enable FX graph caching in OSS by default (#125863)
7b57ddd38c : [inductor] Fix TORCHINDUCTOR_FORCE_DISABLE_CACHES (#129257)
b22f0f5f51 : [torchbind] fix bug of mutating FakeScriptObjects twice in aot_export (#128844)
41bb81b582 : Add Strided Input test for flex attention (#128915)
00f675bb4c : [Nested Tensor]fix sdpa backward for the special case with ragged second batch dim and constant length (#128349)
7b7f357042 : Fix DEBUG=1 asserts with NJT ops (#129014)
5f912f480c : Fix max_pool2d decomposition for empty list and integer limits (#129106)
e096faaf30 : Fix rot90 decomposition for no rotation (#129097)
fbca70718f : Fix scatter lowering when src is a Number (#129096)
8edb7b96b1 : Enable dynamic rollout for pull workflow (#129243)
30bfdf1afc : Errors when 0-dim tensor of complex or bool type passed to aminmax. (#128404)
18fdc0ae5b : [executorch hash update] update the pinned executorch hash (#129099)
93a33bf3ac : [BE] update type annotations for basic utilities in `torch/__init__.py` (#129001)
1a54bb0f96 : Revert "[halide-backend] Initial implementation of HalideKernel and HalideScheduling (#126417)"
063facf352 : Revert "[halide-backend] Generate standalone runtime (#129025)"
c888ee3632 : Remove test_mps_allocator_module XFAIL (#129340)
cb4919344a : Revert "[BE] update type annotations for basic utilities in `torch/__init__.py` (#129001)"
7b910285db : Revert "[inductor] Refactor fusion of inplace operations (#128979)"
df51d0b623 : [aotinductor][UserDefinedTritonKernel] use appropriate expr printer when printing args (#129301)
e53d959028 : [BE] update type annotations for basic utilities in `torch/__init__.py` (#129001)
c89a9f5d17 : Allow SAC policy_fn to return bool for backward compatibility (#129262)
9094248090 : [FSDP2] Fixed `unshard` without lazy init (#129241)
d21f311af8 : [Easy][Traceable FSDP2] Skip rocm for the E2E tests (#129339)
662e9e1076 : [BE] enable UFMT for `torch/nn/functional.py` (#128592)
8a2fed7e6a : [Inductor][CPP] Fallback QLinear Binaryfusion from postop sum to binary add when others is view (#128808)
287c68c5ec : [Inductor][Quant] Use output dtype torch.uint8 explicitly (#128804)
7b9e6430ed : [Split Build] Add periodic and trunk CI for cuda builds (#129269)
f85d1e845a : [BE] enable UFMT for `torch/nn/*.py` (#128593)
dadc0ed4c8 : [Traceable FSDP2] Add `aot_eager` backend E2E tests for transformer model (#129157)
b91a9dc328 : [Brian's PR #128754] Use torch.ops.fsdp.set_ for FSDP2 storage resize; dont functionalize resize_, set_, split_with_sizes_copy.out (#129203)
62ccf6d7cd : [BE] enable UFMT for `torch/nn/modules` (#128594)
440d8fbd4a : FSDP2 Memory Tracker (#125323)
17d1723aee : [dynamo][unspecialized-nn-modules] Remove dead (also incorrect) code (#129316)
cac6f99d41 : Fix Windows CUDA periodic inductor/test_pattern_matcher test (#129198)
749c03406c : [metal] Add int4mm weight packing mps kernel, and improved int4mm shader (#128965)
856541c701 : [custom_op] support default dtype values (#129189)
3e02ecd740 : Test only one sample with huber_loss (#129245)
94dc3253a0 : [BE][Easy] enable UFMT for `torch/distributed/` (#128870)
e165a5971f : [Traceable FSDP2] Fix support for CUDA resize_storage_bytes_ (#129215)
0e6118a68e : [dtensor][debug] added logging module tracing table to file feature (#128721)
1afd492d88 : [dtensor][example] add functionality allowing users to choose which example they'd to run (#128720)
10c64c3b49 : [halide-backend] Generate standalone runtime (#129025)
4f9399bd0d : [halide-backend] Initial implementation of HalideKernel and HalideScheduling (#126417)
79aabaf626 : [3.13, dynamo] codegen PUSH_NULL when callable is codegen'd (#129172)
905dfa186c : Fix ConstraintViolationError exception string when exprs are int (#129271)
920ebccca2 : [inductor][cpp] refactor CppTemplateKernel to inherit CppKernel (#129101)
72e3aca227 : [inductor] Refactor fusion of inplace operations (#128979)
88a35b5b64 : BE: User future annotations in _inductor/comms.py (#129083)
73ba226d98 : [inductor] Linear time dead node elimination (#129082)
cb126711cd : [merge_rule] add more cpp inductor files (#129192)
b57fa8d9c0 : [BE] Remove JNI from libtorch builds (#124995)
9ffdbb5d12 : Forward Fix PR for #128683 (#129037)
64743de6d8 : [Split Build][BE] consolidate pip install commands (#129253)
7661d1220a : [Split Build] Fix typo in pull ci (#129270)
b0044e2e18 : [Split Build] Support nightly release (#129011)
b72ef9df0d : Update torchbench model expected accuracy values after pinning numpy (#129213)
f42d5b6dca : [Memory Snapshot] Make recordAnnotations callback initialize lazily (#129242)
858fb05dac : Modify ExternKernelAlloc with NoneLayout to not assign its result to anything (#129188)
2f8b301c32 : Clean up distributed/CONTRIBUTING.md (#128450)
5b14943213 : Run TestAOTAutograd test suite with cache (#128222)
c5b9ee7408 : [easy][dynamo] Remove try except from call_getattr (#129217)
1c75ddff35 : Revert "[cuDNN] Graph-capturable cuDNN CTCLoss (#128271)"
ef55446538 : [FSDP2] Add 'TORCH_LOGS=+fsdp' to log hooks(pre/post forward/backward) and FQN (_init_fqns) (#128663)
9d1b65b569 : [PT2][Observability] Change the log logic (#129201)
40e8675fcb : [cuDNN] Graph-capturable cuDNN CTCLoss (#128271)
9103b40a47 : Fix small typo in docstring in ParameterList (#129193)
92ca17d85d : Update triton pin (#126098)
d52684e9a8 : [BE]: Update CUDNN_frontend submodule to v1.5.1 (#128612)
ebf25e128c : [autograd] Do not stash version counter for saved tensor (#128545)
58cefaf53b : Fix hipify regular expression for AOTI wrapper (#128912)
2db33054b3 : Disable fast path in `TransformerEncoderLayer` when there are forward (pre-)hooks attached to modules (#128415)
699c056479 : [ROCm] Include hsa headers for rocm-triton whl (#129235)
8edd4c71c6 : [AOTI][refactor] Remove GridExprCppPrinter (#129142)
bdc39eef3b : [inductor] Add --inductor-config benchmark flag (#129034)
bb4ab59651 : [inductor] Run more test on correct device (#129033)
feb3f3ad77 : [inductor] Refactors for Halide backend (#129024)
49d2eec960 : [custom ops] Switch out references from old landing page to new landi… (#129237)
165e09874b : [docs] Redirect custom ops landing page to the correct place (#129177) (#129236)
237c4e6163 : Improved flexattention bwd perf + added configurations for benchmarks (#129013)
bdd11483ea : [3.13] get C dynamo to compile with python callback and custom frame eval (#129171)
b0ae0db815 : [Inductor][Intel GPU] Support reduction split. (#129120)
fb0c51b61c : [Inductor UT] Fix UT failure 'test_polar_dynamic_shapes_xpu' introduced by #128722 (#129124)
715b09ae2d : Revert "Fix DEBUG=1 asserts with NJT ops (#129014)"
479ce5e2f4 : Remove outdated CUDA code from CMake (#128801)
2c7c286fa4 : [1/N] Fix clang-tidy warnings in torch/csrc/jit/serialization (#129055)
53be7ff0e4 : Make tl.atomic_add relaxed (#129133)
62e5d045c0 : [AOTI] Auto-tune Triton kernels in a separate block (#129057)
9795dba1e0 : Optim package docstring fix (#129086)
93c51dc84b : Re-enable py3.12 nightly wheel builds and add triton dependency for ROCm (#129161)
b697808056 : [BE][Easy] eliminate relative import in `torchgen` (#128872)
e1c1052829 : Backward support for unbind() with NJT (#128032)
27ae1f981d : [inductor] fix linear_add_bias for autocast case (#129138)
67a815abd2 : [Release only] Temporary change to depend on pytorch-triton (#129232)
5d8e23b49c : [custom_op] Support string default values in schema (#129179)
08b616281f : [custom ops] Switch out references from old landing page to new landing page (#129178)
311fadb1fb : [docs] Redirect custom ops landing page to the correct place (#129177)
d2e4cc71f1 : [inductor][ci] Fix torchbench dependency issue with numpy (#129074)
0233f8df5b : [ROCm] [Triton] - Include roctracer headers in triton whl (#129227)
217aac96d7 : Introduce a prototype for SymmetricMemory (#128582)
f0443ad174 : [compiled autograd] flatten runtime inputs with fast path (#129116)
d97dfe9313 : [compiled autograd] move inputs to cuda with non_blocking=True (#129181)
8f320fd6c6 : [compiled autograd] treat input params as static (#128987)
fafa1867d1 : [compiled autograd] use in_compiled_autograd_region instead of compiled_autograd_enabled_count (#128982)
68b33453f4 : [aot autograd] collect static parameter metadata when graphs fallback to inference (#128905)
123812790b : [compiled autograd] update benchmarks to use cli flags for fullgraph/dynamic (#127960)
aee512cc9d : [dtensor][op] Fixed stack op strategy (#129018)
6b5fbc544e : [dynamo] Use polyfill to trace through the attributes of torch.jit.* and lru_cache_wrapper (#128336)
914d3ca2ba : [inductor][cpp] BF16 AMX micro-gemm support (#127195)
632910e2a8 : Add test to xfail_list only for abi_compatible (#128506)
62e425ab03 : Memory Tracker for tracking Module wise memory (#124688)
2b1b055a96 : [Split Build] Fix libtorch_python RPATH (#129088)
c008488b9c : [dynamo][guards] Dont run TYPE_MATCH for DICT_LENGTH C++ guard (#129163)
5c676bb8b3 : Remove Caffe2 handling from onnx_unpack_quantized_weights (#129021)
3a2fdbb142 : [dynamo] - Add JK killswitch for dynamo compilation. (#128538)
f73b451e78 : Revert "Improved flexattention bwd perf + added configurations for benchmarks (#129013)"
b542825066 : Enable deterministic support for oneDNN (#127277)
e8dbb45e98 : [dynamo][user-defined-object] Check that object is valid (#129117)
e99a24ce7c : Remove TensorImpl_test.cpp (#129054)
880e894c39 : [Brian's PR #128981] fix dynamo isinstance inlining for nn.Parameter + subclasses (#129162)
8cd9b10456 : Fix exp decomp numerics (#129154)
ff89ebc50a : Improved flexattention bwd perf + added configurations for benchmarks (#129013)
0acd09aecd : [torchrec][pt-d][model store] introduce LocalShardsWrapper for DTensor (#129150)
31c9e3d2f4 : [FSDP][Test] Test save model save with FSDP1 and load into FSDP2 applied model (#129028)
8758fedbfc : [export] copy sym ops when respecting call module signature (#129153)
5da428d9eb : [cpu][flash attention] fix attention mask issue (#128816)
d4022b4658 : Revert "[BE] enable UFMT for `torch/nn/modules` (#128594)"
cc8193c707 : Revert "[BE] enable UFMT for `torch/nn/functional.py` (#128592)"
9c929f6ce9 : Revert "[BE][Easy] enable UFMT for `torch/distributed/` (#128870)"
9dd8f8cf8b : [cpuinfo][submodule] bump cpuinfo to the latest to support amx isa check (#127505)
c027c8935b : [distributed] NCCL result code update (#128777)
43060a1dbc : Add shard support to test_inductor (#129160)
31d5753247 : Short-term fix to preserve NJT metadata cache in torch.compile (#122836)
63a724d8e1 : Revert "Introduce a prototype for SymmetricMemory (#128582)"
5fba5d83f0 : add xpu for amp (#127276)
adc14adb88 : Fix flakiness with test_binary_op_list_error_cases (#129003)
61fa3de4cb : ci: Hardcode runner-determinator (#128985)
aace8ffc00 : Revert "[BE] enable UFMT for `torch/nn/*.py` (#128593)"
f2f4dde2d3 : [dynamo] Remove ID_MATCH for FSDPModuleVariable (#129015)
e84cf805d2 : Revert "Modularize aten parameter parser and checker (#125308)"
254487f288 : Revert "Separate AOTI Eager utils as a single file (#125819)"
73340f0909 : Revert "[3/N] Non-Tensor: Support string parameter for aten operations (#125831)"
8c2542623b : [Traceable FSDP2] [Dynamo] Add tracing support for out-variant custom ops that return None (#129078)
734891ac22 : Fix export log script (#128967)
ddb95dbb0d : Fixing equalize with three things and improving functionality (#124632)
832fc35211 : Revert "Improved flexattention bwd perf + added configurations for benchmarks (#129013)"
65286883d4 : [export] reland "experimental joint graph API." (#129081)
fc5b0ff2d7 : [BE][Hackaday] deprecate legacy cuda docker image (#128859)
b2a9b8d485 : [CpuInductor] Enable NEON ISA detection on Linux ARM (#129075)
e0aa992d73 : Fix inductor and deploy jobs timing out (#129108)
434bf9559f : [Release 2.4] Release only changes for triton 3.0.x build (#129143)
2bb8ee602b : Fix DEBUG=1 asserts with NJT ops (#129014)
7178b4e987 : [Dynamo x torch_function] fix incorrect source (#128980)
ea47d542ca : [dynamo][guards] Remove BOOL_FALSE - not needed after C++ guards (#129098)
50e57d4f3f : Revert "[Release 2.4] Release only changes - use pinned triton." (#129139)
54b0006cb2 : Evaluate symexprs on load path of cache not write (#128997)
799acd31b4 : [MPS] Add lu_factor (#99269)
0d25f096c1 : [CppInductor] Fix erfinv codegen when non-vectorized isa (#129090)
6d2b3c90f1 : Improved flexattention bwd perf + added configurations for benchmarks (#129013)
ad2593cb86 : [Animesh's PR #125340] [dynamo][fsdp] Track FSDPNNModuleVariable for mutations (#129045)
19f3abcde4 : [Docs][MPS] Add mps environment variable table (#129008)
609ffaf717 : Add more shards for slow CPU and ROCm jobs (#128873)
d8db074988 : [Traceable FSDP2] [Dynamo] Fix OptimizedModule._initialize to allow tracing into FSDP2 module hooks for module from user-defined module class (#129046)
859fa183fe : BE: Use future annotations in inductor scheduler and ir (#128892)
a2b1673dfb : [Horace's PR #126446] Prevent partitioner from ever saving views (#129039)
9d06e3783d : [Inductor][CPP] Fix the symbolic size cast issue in GEMM Benchmark (#128824)
a6ac6447b5 : Re-enable py3.12 nightly wheel builds and add triton dependency for ROCm (#128525)
571a0db132 : [inductor] Fix logging for run_and_get_cpp_code (#128794)
277f2914a5 : [9/N] Remove unused functions (#128704)
fca408fa29 : s390x vectorization: rework operators (#129066)
73f5d2b787 : Run ET unit tests on PT CI (#128560)
df94d57c0a : Revert "[export] experimental joint graph API. (#128847)"
b5d541609d : [Memory Snapshot] Add recordAnnotations to capture record_function annotations (#129072)
bafd68b4fc : [inductor] fix windows python module ext and func export declaration (#129059)
0707811286 : [export] experimental joint graph API. (#128847)
0fc603ece4 : [optim] Fused implementation stability table (#129006)
1b92bdd0ea : [ALI] [Reland] Use LF runners for Lint (#129071)
edcc77dadb : Remove leftover warning causing log spew (#128837)
0e0a9c5a5c : [Inductor] Fix the High Order Op layout issue (#128275) (#128834)
236fbcbdf4 : [Split Build] Test split build in pull CI workflow (#126813)
7d33ff59ba : [Split Build]Use same package (#127934)
4af5000bff : [Port][Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#128591)
ffb50fb691 : [ONNX] Add onnx::Gelu support for version 20 (#128773)
562cdc2084 : [tp] refactor and fix PrepareModuleInput for DTensor inputs (#128431) (#128719)
b1d53f07b2 : [inductor] fix compile time regression by caching get_gpu_type (#128363) (#128717)
3397d5ef90 : Revert "[ALI] Use lf runners for Lint" (#129070)
86271445d6 : [Inductor] Update Intel GPU Triton commit pin. (#124842) (#128615)
d71de3c95c : Revert "Make torch_geometric models compatible with export (#123403)"… (#128511)
e7dde73d43 : [custom_op] stop using nonlocals to store information (#128547) (#128616)
9ad8a5b657 : Clean up xpu ut to make CI happy (#128383) (#128614)
ed624a0483 : Change Dynamo's custom ops warning message to be less spammy (#128456) (#128581)
082c4f7e64 : [inductor] fix linear add bias pattern (#128473) (#128577)
118f9ceb7c : [inductor][ci] Fix torchbench dependency issue with numpy (#128968)
e49525275d : Make TraceUtils.h to be device-agnostic (#126969)
7fac03aee9 : [ALI] Use lf runners for Lint (#128978)
50567f7081 : Pass device to is_pinned call inside TensorProperties.create_from_tensor (#128896)
d3e8b8bf47 : Remove cuda check in the CUDAGraph destructor (#127382)
ba92f5277f : [inductor][refactor] Unify the use of generate_kernel_call (#128467)
3a185778ed : [aotinductor] Add torch.polar fallback op for shim v2 (#128722)
a584b2a389 : Revert "Add test to xfail_list only for abi_compatible (#128506)"
fcf2a1378b : Enable fp8 rowwise scaling kernel on cuda, TAKE 2: #125204 (#128989)
2f88597aad : [inductor] For internal, allow multiple workers if the method is "subprocess" (#129002)
1f0a68b572 : [ROCm] Fix fp32 atomicAdd for non-MI100 GPUs (#128750)
acefc5c016 : [torch.compile] Enable bwd compilation metrics (#128973)
eb9f4da11e : Modified template indexing to broadcast indices to out instead of mask and some other flexattention micro-opts (#128938)
8771e3429c : Introduce a prototype for SymmetricMemory (#128582)
ed5b8432cd : Enable mixed_mm only if casting from lower-bitwidth type to a higher one (#128899)
df85f34a14 : Add test to xfail_list only for abi_compatible (#128506)
4bc90185fb : fix: Print statements causing parse error (#128969)
eda375a490 : [Inductor] Remove min/max from inductor opinfo test (#128925)
459e2aa454 : Revert "[cuDNN][SDPA] Remove `TORCH_CUDNN_SDPA_ENABLED=1`, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)" (#128539)
6be0234f07 : Revert "Deprecate `torch._utils.is_compiling()` and `torch._dynamo.external_utils.is_compiling()` (#127690)" (#128542)
24a3885ef6 : Revert "Set simdlen based on ATEN_CPU_CAPABILITY (#123514)" (#128541)
2458f79f83 : [Inductor UT][Intel GPU] Skip newly added test case test_torchinductor_strided_blocks:test_reduction for Intel GPU (#128881)
b0d2fe6299 : Revert "Short-term fix to preserve NJT metadata cache in torch.compile (#122836)"
5ffb032be6 : Revert "Backward support for unbind() with NJT (#128032)"
35c78668b4 : Improve the debugging message for when foreach mta_called (#128991)
99f042d336 : Revert "Forward fix to skip ROCm tests for #122836 (#128891)"
670b94c9c8 : [inductor][mkldnn] Use floats instead of ints for pattern matcher test (#128484)
c5e0b84484 : [dynamo][trace_rules] Remove incorrectly classified Ingraph functions (#128428)
cb5e9183c6 : [Caffe2] [2/N] Remove Caffe2 from tests (#128911)
62417c6ca9 : [dynamo] Fix for #127696 (#128530)
ac5f565fa7 : [FSDP2] Added `set_post_optim_event` (#128975)
d9c294c672 : [Inductor] Fix arguments passed to triton kernel launch hooks (#128732)
a0e1e20c41 : [BE][Easy] enable UFMT for `torch/distributed/` (#128870)
3b798df853 : [BE][Easy] enable UFMT for `torch/distributed/{fsdp,optim,rpc}/` (#128869)
cec31050b4 : [BE][Easy] enable UFMT for `torch/distributed/{tensor,_tensor}/` (#128868)
e47603a549 : Fix weight_norm decomposition behavior (#128956)
2227da4431 : [Profiler] Clean up use_mtia to follow standard use_device instead (#126284)
4cc3fb5ee2 : Bump urllib3 from 2.2.1 to 2.2.2 in /tools/build/bazel (#128908)
5dc4f652bc : Backward support for unbind() with NJT (#128032)
44722c6b10 : Revert "[dynamo][fsdp] Dont take unspecializedNNModuleVariable path for FSDP modules (#128453)"
1babeddbbf : Revert "[inductor][mkldnn] Use floats instead of ints for pattern matcher test (#128484)"
5bc9835d64 : Revert "[dynamo][trace_rules] Remove incorrectly classified Ingraph functions (#128428)"
9a7e2519d3 : [MPS] Fused Adam & AdamW (#127242)
fe8558b7aa : [DSD] Add unittest to verify HSDP1 + broadcast_from_rank0 (#128755)
abde6cab4c : Remove compile_threads=1 in test_inductor_collectives.py (#128580)
04a5d3228e : [ts migration] Support prim::tolist and aten::len (#128894)
44483972bd : [EZ] Keep weight_norm var name aligned (#128955)
bdffd9f0c6 : [export] Graph break on nn.Parameter construction (#128935)
1a527915a6 : [DSD] Correctly handle shared parameters for optimizer state_dict (#128685)
d77a1aaa86 : DOC: add note about same sized tensors to dist.gather() (#128676)
1877b7896c : [checkpoint] Clean up selective activation checkpoint and make public (#125795)
77830d509f : Revert "Introduce a prototype for SymmetricMemory (#128582)"
84c86e56bd : Update tracker issues after successfully cherry-picking a PR (#128924)
4e03263224 : [CUDA][Convolution] Add missing launch bounds to `vol2col_kernel` (#128740)
26e374e3ca : [EZ] Fix typos in RELEASE.md (#128769)
9818283da1 : re-enable jacrev/jacfwd/hessian after #128028 landed (#128622)
ec616da518 : RNN API cleanup for cuDNN 9.1 (#122011)
108318ad10 : [BE][JIT] Handle case where codegen object can be unset (#128951)
4817180601 : make fallback for aten.argsort.stable (#128907)
22d258427b : [BE][Easy] enable UFMT for `torch/distributed/_shard/` (#128867)
e6d4451ae8 : [BE][Easy] enable UFMT for `torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/` (#128866)
f2805a0408 : [FSDP2] Added APIs for explicit fwd/bwd prefetching (#128884)
3dd5f0ecbb : Remove circular import (#128875)
304c934572 : Move MKLDNN Specific IR to Separate File (#126504)
6e43897912 : [BE][ptd_fb_test][3/N] Enable TestSlide for MultiThreadedTestCase (#128843)
60baeee59f : [BE] Skip the test if CUDA is not available (#128885)
e3a39d49a0 : [Traceable FSDP][Compiled Autograd] Add queue_callback() support (#126366)
f7eae27946 : Pass params to dump_nccl_trace_pickle (#128781)
d9eaa224f2 : Fixes #128429: NaN in triu op on MPS (#128575)
59b4983dc0 : DebugPlane: add dump_traceback handler (#128904)
17abbafdfc : [inductor] Fix some windows cpp builder issue (#128765)
4061b3b822 : Forward fix to skip ROCm tests for #122836 (#128891)
c017c97333 : [dynamo][inlining-inbuilt-nn-modules] Update test output (#128880)
4e97d37fd9 : [inlining-inbuilt-nn-modules][pre-grad] Adjust efficient_conv_bn_eval_graph for inlining (#128878)
22f1793c0a : [dynamo][easy] Use LazyVariableTracker for UserDefinedObject var_getattr (#128877)
43998711a7 : [CUDAGraph] add more docs for cudagraph trees (#127963)
e12fa93b8b : add is_big_gpu(0) check to test_select_algorithm tests in tests/inductor/test_cuda_cpp_wrapper.py (#128652)
9e8443b56f : Remove dtype from gpt-fast micro benchmark experiments model name (#128789)
fbc7559ceb : [custom ops] convert string type annotation to real type (#128809)
c35ffaf954 : [Inductor][CPP] Add ne with VecMask (#126940)
beb29836cd : [Inductor][CPP] Add Min/Max with VecMask (#126841)
11ff5345d2 : Changed colored logging to only be turned on if printing to interactive terminal (#128874)
b70440f0a7 : Document the torch.cuda.profiler.profile function (#128216)
95b5ea9cde : Add mark_unbacked (#128638)
8415a4ba98 : Back out "[ROCm] TunableOp for gemm_and_bias (#128143)" (#128815)
3b8c9b8ab1 : [Docker Release] Test if pytorch was compiled with CUDA before pushing to repo (#128852)
1835e3beab : Fix the inductor ci (#128879)
7baf32b5e7 : [c10d] fix p2p group commsplit (#128803)
1fd7496ab2 : [MTIA] Fix synchronize API (#128714)
163847b1bb : [1/N] [Caffe2] Remove caffe2_aten_fallback code (#128675)
8953725e6d : [Inductor][FlexAttention] Tune backwards kernel block sizes (#128853)
a489792bb2 : [GPT-benchmark] Fix memory bandwidth for MoE (#128783)
8c06eae17e : [GPT-benchmark] Add metric: compilation time for GPT models (#128768)
a59766ee05 : replace `AT_ERROR(...)` with `TORCH_CHECK(false, ...)` (#128788)
0f89e66d17 : Validate logs are created by default (#128522)
1577328ea4 : Set bash shell on Windows (#128854)
b181b58857 : Fix Storage.filename to not track the filename when storage was mmap-ed with MAP_PRIVATE (#128725)
213eba7d2e : Configure mergebot via config (#128840)
c172b58fe0 : Revert "Update DALLE2_pytorch expected accuracy result on CPU (#128718)"
5344c41d43 : Use forked torchbench branch with pinned numpy (#128856)
d35cdee97f : [Caffe2] Remove caffe2 onnx tests (#128687)
153362fbc9 : Support HSDP + Monolith Checkpointing (#128446)
c6b180a316 : Created docs (and example) for cudart function in torch.cuda (#128741)
fc2913fb80 : Remove amax return from _scaled_mm (#128683)
73b78d1cbe : Document the torch.nn.parallel.scatter_gather.gather function (#128566)
316b729677 : [Fix] TS converter constant to tensor (#128442)
a87d82abd7 : [BE] enable UFMT for `torch/nn/*.py` (#128593)
f6e6e55fa7 : [BE] enable UFMT for `torch/nn/functional.py` (#128592)
95ac2d6482 : [BE] enable UFMT for `torch/nn/modules` (#128594)
dff6342a0b : [BE][Easy] enable UFMT for `torch/nn/parallel` (#128596)
bfad0aee44 : [export] Preserve requires_grad for export inputs. (#128656)
2a41fc0390 : Short-term fix to preserve NJT metadata cache in torch.compile (#122836)
24443fe16a : [inductor] parallel compile: Print traceback detail when there's an exception in a sub-process (#128775)
e3093849e5 : [Docs] Update links (#128795)
0f81473d7b : Update fake tensor error checks for bool tensor subtraction (#128492)
b0282071c4 : [dynamo] override torch.nn.modules.activation._is_make_fx_tracing (#128748)
b40a033c38 : [cpp_extension][inductor] Fix sleef windows depends. (#128770)
a52c8ace98 : [3/N] Non-Tensor: Support string parameter for aten operations (#125831)
74e11a4210 : Enable clang-tidy on torch/csrc/mps (#128782)
f9dae86222 : Concat namespaces in torch/csrc/utils/* (#128787)
6cbdbb6c3c : Remove top lev numpy dependency from fuzzer.py (#128759)
f8d60e0e0a : [Inductor][CPP] Fix Half data type cse cache issue for CPP Backend (#128498)
979edbbe12 : [Traceable FSDP2] Dynamo support FSDP2 use_training_state context manager (#127854)
e4d8aa4d24 : [torchbench] Enable some models with inline_inbuilt_nn_modules (#128315)
cc518ebd38 : [Inductor Intel GPU backend Upstream] Reuse inductor test for Intel GPU (PART 2) (#124147)
f1ee3589a1 : [Inductor] Emit strided block pointer from ModularIndexing and FloorDiv (#127342)
a61939467a : Enable passing dynamo-traced complex test (#128771)
ab13980424 : [ONNX] Update 'person_of_interest.rst', 'CODEOWNERS' and 'merge_rules.yaml' (#126364)
6079c50910 : Make config.fx_graph_remote_cache be three-value switch (#128628)
94c0dcbe1d : [inductor] Parallel compile: handle crashes in subprocesses (#128757)
f0d68120f4 : [subclasses] Handle dynamo inputs that are subclass views with (-1) in the view (#128662)
18634048a1 : Separate AOTI Eager utils as a single file (#125819)
7a39755da2 : Introduce a prototype for SymmetricMemory (#128582)
60bbdc0b40 : Modularize aten parameter parser and checker (#125308)
de4f379cf2 : run mkldnn test with inlining (#128749)
b50c0e94c2 : TCPStoreLibUvBackend: use somaxconn and enable TCP_NODELAY (#128739)
e4c32d14a8 : [3/N] Remove inclusion of c10/util/string_utils.h (#128504)
472211c97a : Make assert_size_stride to return all errors (#128764)
4ccbf711e2 : Learning Rate Scheduler docstring fix (#128679)
108adbc726 : [dynamo][side effects] Raise assertion error if the object is already tracked for mutation (#128590)
9ebf77b13b : Fix windows inductor defination issue (#128686)
7e092a62e6 : [dynamo] Support weakref objects (#128533)
62a0e39ced : [dynamo][inlining-nn-modules] Update tests with new expected counts (#128463)
2d01f87737 : Enable torch.empty for float8 dtypes + deterministic mode + cpu (#128744)
846bb30e13 : Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)"
5efe71f134 : Revert "[export] Add print_readable to unflattener (#128617)"
f37121bb74 : Add model name, quantization and device to gpt_fast micro benchmark output (#128091)
3f47c72268 : add multiprocessing checks in test_dataloader.py (#128244)
73ba432d32 : [custom_op]Fix None return schema (#128667)
6616ad030f : [Inductor] Fix the High Order Op layout issue (#128275)
5d9a609b4f : [export] Add print_readable to unflattener (#128617)
d67923b955 : Adding kwargs to composable AC API to enable full capabilities (#128516)
271852aa7e : inductor: pre-grad bmm pass shouldn't match if output is mutated (#128570)
ba19ed9a1a : FunctionalTensor: dispatch metadata directly to inner tensor (#127927)
574a2cbcb7 : Enable UFMT on common_device_type.py and common_dtype.py (#128490)
0492ec460a : [BE] Remove external testing of torch::deploy (#127952)
bd72e28314 : [1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)
52d4442a00 : [c10d] Socket, TCPStore: add better logging (#128673)
4abecd7102 : [AOTI] fixed performance issue for AOTI_TORCH_CHECK (#128402)
fd27138c4a : Update DALLE2_pytorch expected accuracy result on CPU (#128718)
d3a4d9e4fe : Update cu124 dynamo benchmark expected values (#128737)
bca2cf00ed : [ONNX] Add dynamic axes support to torchscript exporter with dynamo=True (#128371)
f103247a14 : Run all samples for torchinductor tests (#128343)
e9c6e8369c : Torchbind call method + effects support (#128397)
65d3ddcb8b : Add GLIBC requirements for libtorch to solve #113124 (#128135)
e9a29aaa4a : [ONNX] Add upsample trilinear to skip decomp (#128259)
e6e102cf85 : Dynamo testing: add some skips (#128734)
11de50f17c : [Dynamo] skip some TorchScript tests (#128731)
4b96575a09 : [dynamo][aot autograd] Silently disable default saved tensor hooks during tracing (#123196)
1aafb9eb90 : [dynamo][yolov3] Track UnspecializedNNModuleVariable for mutation (#128269)
9c77332116 : [torch.compile][ci] Flaky models in CI (similar to DISABLED_TEST) (#128715)
2e5366fbc0 : Extended Module Tracker (#128508)
d50712e5e3 : [PT2] add inductor log for unbind_stack_pass (#128684)
9035fff2de : [BE] Do not test deprecated `torch.nn.utils.weight_norm` (#128727)
27458cc097 : [BE] Refactor repeated code in test_weight_norm (#128726)
a6bd154a42 : [inductor] Support mm decomps for matrices with unbacked sizes (#128655)
b94c52dd29 : [GHF] Refuse merge to non-default branch (#128710)
be0eec9031 : [export] Improve static typing in tracer. (#128552)
2367161e4b : Revert "[ROCm] Unskip scaled_dot_product_attention tests on ROCm (#127966)"
d7fc871175 : [inductor] Improve superfluous mask handling in triton codegen (#128518)
2357490524 : [PT2] Enable shape_padding multiplier adjustment (#128346)
d4807da802 : Various fixes of torch/csrc files (#127252)
089e76cca3 : [traced-graph][sparse] remove redundant assert in sparse prop test (#128523)
1fb4effe7a : [GPT-fast benchmark] Add MLP, gather + gemv, gemv micro benchmark (#128002)
4c84af0f5d : Fix indexing and slicing of ranges in dynamo (#128567)
f75f5987aa : Revert "Extended Module Tracker (#128508)"
732b4e9074 : Fix generated vararg types (#128648)
8629939a51 : [torch/c10] Add C10_UBSAN_ENABLED macro and use it to disable SymInt_… (#127967)
ee140a198f : Revert "[Port][Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#128591)"
c187593418 : Prevent expansion of cat indexing to avoid int64 intermediate (#127815)
c339efaf02 : [ROCm] Unskip scaled_dot_product_attention tests on ROCm (#127966)
c76a9d13cb : Revert D56709309 (#128481)
9972e5f447 : Rename impl_abstract to register_fake, part 2/2 (#123938)
a2d9c430b4 : Adding a note for Getting Started with PyTorch on Intel GPUs (#127872)
dfc4b608e1 : Remove leftover warning causing log spew (#128688)
e1dfc61250 : Document CI/CD security philosophy (#128316)
bfd5ea93e0 : Enable clang-tidy on c10/util/Float8*.h (#120573)
1f46284f9e : Extended Module Tracker (#128508)
e397ad6883 : Improve codegen for ops.masked in triton (#128054)
7e734e2d08 : [inductor] Fix nested indirect indexing case for index_propagation (#128378)
99988be423 : [halide-backend] Add test shard (#127308)
03e8a4cf45 : [Port][Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#128591)
43ae3073f9 : Revert "[traced-graph][sparse] remove redundant assert in sparse prop test (#128523)"
0344f95c2e : Add missing #include <array> to thread_name.cpp (#128664)
03725a0512 : [dtensor][example] added MLPStacked example for printing sharding (#128461)
dd3b79a08f : [dtensor][be] improving readability of comm_mode.py and comm_mode_features_example.py (#128451)
e886122e98 : [dtensor][debug] add module level tracing and readable display (#128369)
4669c6d3ae : [quant][pt2e][quantizer] Support `set_module_name_qconfig` in X86InductorQuantizer (#126044)
674be9d3be : Update cu124 dynamo benchmark expected values (#128589)
18f35d9e12 : Revert "Run all samples for torchinductor tests (#128343)"
f48f7615dc : [easy][subclasses] dynamo.reset() in test_subclass_views (#128659)
9ac08dab1f : Updates diskspace-cleanup for ROCm CI (#127947)
eff01bce21 : Only run inductor A100 perf benchmark smoke test periodically (#128677)
ba3726d02b : [traced-graph][sparse] remove redundant assert in sparse prop test (#128523)
685fcfb40d : Fix docstring in autograd (#128657)
0186b386cd : Revert "[ONNX] Add upsample trilinear to skip decomp (#128259)"
f48ca2561d : Document `torch.cuda.profiler.start` (#128098)
41df20c07c : Run all samples for torchinductor tests (#128343)
6895a5804c : Revert "[checkpoint] Clean up selective activation checkpoint and make public (#125795)"
6564d63e69 : Use mv kernel for small M (#128632)
ae2359638b : Save DOT file of graph instead of SVG for GraphTranformObserver (#128634)
6f181756dc : Use by-column algorithm for fp16/bf16 CPUBlas gemm_transb kernels (#127318)
18f5357f4f : Introduce heuristic for mixed_mm on A100 (#128232)
9ebec1f345 : Enable Wunused-function in torch_cpu (#128576)
6767e38267 : Fix manual licensing (#128630)
afdaa7fc95 : [while_loop] expose it as torch.while_loop (#128562)
c486e2ab64 : Add coloring to fx graph print out (#128476)
61421c42c0 : [custom_op] don't invoke autograd.Function when unnecessary (#127976)
b72989a2b5 : [ONNX] Add upsample trilinear to skip decomp (#128259)
8c20f53a5e : Try seeding individual foreach tests (#128220)
865d7b3424 : [Reland][dynamo] Enable some inlining inbuilt nn module tests (#128440)
3a0006ef22 : Remove global variable SIZE, and fix linter warning (#128559)
6211e67e49 : Document `torch.jit.frontend.get_default_args` (#128408)
bf8a05f483 : [FSDP2] Included module FQN in `FSDPParamGroup` `record_function`s (#128624)
c8e9656a12 : Revert "Add test to xfail_list only for abi_compatible (#128506)"
8763d44bf1 : add xpu to torch.compile (#127279)
790138fdc7 : Add profiler annotation for fused_all_gather_matmul and fused_matmul_reduce_scatter (#127556)
3b28dc6c9d : Improve the scheduling for fused_matmul_reduce_scatter (#127455)
c0b40ab42e : doc string for torch.jit.frontend.get_jit_class_def method (#128391)
a3af32c2fb : Add functionality to make ViewAndMutationData (slightly more) cache safe (#127618)
39193b10e8 : [inductor] fx graph cache: memoize devices to make cache key calculation more predictable (#128366)
c54e358bdb : enable comprehensive padding internally (#128555)
cdc37e4bff : Add a shape property to IR nodes (#127818)
5a80d2df84 : [BE] enable UFMT for `torch/nn/utils` (#128595)
9f55c80a9f : [AOTI] Fix a minimal_arrayref_interface test failure (#128613)
a265556362 : inductor fusion logs: make it easier to attribute to aten graph (#127159)
de9a072ac4 : Updating the `sigslot` license to Public Domain (#128085)
8733c4f4be : docs: Add link to test-infra issue (#128608)
dd19c9150c : Revert "[aota] compiled forward outputs requires_grad alignment with eager (#128016)"
52f529105d : force_stride_order on fused_all_gather_matmul/fused_matmul_reduce_scatter's operands to avoid a copy due to layout transformation (#127454)
d5780396c7 : Skip debug asserts for mixed dense, subclass views in autograd_not_implemented_fallback (#128057)
9a8917fdbd : Naive CPU kernels for jagged <-> padded dense conversions (#127007)
a0604193a2 : handle call_function with Parameter args in DDPOptimizer splitting (#128034)
3e3435678c : Remove some implications from the static_eval pattern matcher (#128500)
0fdd8d84fa : Do not generate -1* in SymPy expressions when canonicalising (#128411)
bdeb9225b0 : Do not call `get_implications` unnecessarily (#128410)
e2a72313e8 : Concat namespaces of torch/csrc/profiler code and other fixes (#128606)
7c370d2fb0 : expose set_thread_name to Python and set thread names (#128448)
b05b8d3989 : [EZ][ALI Migration] Add logging for workflow type determination (#128619)
e9b81e4edf : Fakify torch bind input by default (#128454)
c63ccead5e : Revert "[dynamo] Enable some inlining inbuilt nn module tests (#128440)"
17b45e905a : Fix get output code when caching is enabled (#128445)
93a14aba6e : [BE]: Update mypy to 1.10.0 (#127717)
49366b2640 : Add test to xfail_list only for abi_compatible (#128506)
cf7adc2fa1 : [Inductor] Update Intel GPU Triton commit pin. (#124842)
edb45dce85 : Add OpInfo entry for as_strided_copy (#127231)
7cc07a3eb1 : [custom_op] stop using nonlocals to store information (#128547)
2b9465d62a : [aota] Allow some mutations in backward (#128409)
d0c08926d1 : allow inlining functions in _python_dispatch and _is_make_fx_tracing (#128485)
1fd2cd26a0 : [inductor][cpp] support bf16/fp16 gemm template epilogue fusion (#126545)
c897651392 : [inductor] Add BackendFeature gating (#128266)
88974fedd0 : Clean up xpu ut to make CI happy (#128383)
ce79b09415 : [CUDA][Sparse] Change comparison function of `test_sparse_semi_structured.py` and bump tolerances for `sp24_matmuls` (#128553)
0678742924 : [MPS] Add Metal implementation of exp op (#128421)
14c9eb5ed2 : Add XPU code owners (#128486)
518c9e6455 : Forward fix lint (#128587)
c52eda896e : [dynamo][trace_rules] Remove incorrectly classified Ingraph functions (#128428)
1f6e84fa68 : [inductor][mkldnn] Use floats instead of ints for pattern matcher test (#128484)
ea541dd965 : SymIntify cross_entropy_loss_prob_target numel call (#128141)
ade3d07483 : GGML inspired int8 MM Metal shader (#127646)
b86b4ace88 : Invalidate eager params when inlining and freezing nn modules (#128543)
83bb9b7c53 : [BE] explicitly export subpackage `torch.utils` (#128342)
2229884102 : Introduce int_oo (#127693)
d3b8230639 : Fix profiler_kineto Clang errors (#128464)
d630e1e838 : Revert "[dynamo][yolov3] Track UnspecializedNNModuleVariable for mutation (#128269)"
7fe9ab9ccc : update amp example to device-agnostic (#127278)
3f9b8446cf : [8/N] Remove unused functions (#128499)
ede74940a1 : optimize vec isa check dispatch logical. (#128320)
c1cd946818 : [cond] add a set_ and data mutation expected failure test (#128457)
c472cec565 : [checkpoint] Clean up selective activation checkpoint and make public (#125795)
25b7537a27 : doc comment typo fixes and improvements (#128512)
eb1db6702f : [2nd try][AOTI] Switch to use shim v2 (#128521)
4423e1bbdc : [release] Increase version 2.4.0->2.5.0 (#128514)
3bc2004f91 : [ts_converter] Fix prim::dtype (#128517)
2fa6f80b13 : Perform reciprocal optimization with foreach_div (#128433)
8db4a41973 : Use computeStorageNbytesContiguous if possible (#128515)
e2610240f9 : [ROCm] Enable several inductor UTs (#127761)
bb3cf8a339 : Lift inductor lowerings for jagged <-> padded dense kernels (#125968)
b4a7b543e5 : Add targeted unit tests for guards-related functions used in the codecache (#128482)
1f302d6885 : Support aten operations with out tensor (#124926)
f4edd67fe7 : [c10d] fix OSS commSplit bug (#128459)
f39ab8a0fe : Fix side effect pruning (#128028)
3008644297 : [Caffe2] Remove remaining unused perfkernels (#128477)
55a6b38f52 : [inductor] enable fx graph cache on torchbench (#128239)
6206da55ef : Fix lint after #119459 (#128558)
2b28b107db : [dynamo][fsdp] Dont take unspecializedNNModuleVariable path for FSDP modules (#128453)
6aef2052ea : Save backward graphs lazily to cache (#126999)
87072dcfdb : Change Dynamo's custom ops warning message to be less spammy (#128456)
c53d65b3d3 : [inductor] fix linear add bias pattern (#128473)
bb13fad7aa : Share TCPStore by default when using c10d rdzv handler (#128096)
c0ea8fc3a3 : Disable inlining nn modules on static inputs tests (#128529)
ff3ba99320 : Disable inline nn modules on unstable ptr test (#128528)
1026b7cfbe : Add docstring for the torch.typename function (#128129)
cba840fde9 : Fix accidental variable shadow (#128460)
0444e89931 : [export] Remove replace_sym_size_ops_pass (#128443)
67e6c76a18 : Support apply_(callable) sugar for CPU NJTs (#125416)
dd143d44cc : [BE] enable UFMT for top-level files `torch/*.py` (#127707)
cc231a8e2b : First version of AOTAutogradCache (#126791)
7775fee10f : [tp] refactor and fix PrepareModuleInput for DTensor inputs (#128431)
ec1fdda196 : Fix jagged NT softmax semantics (#119459)
817ce6835b : Revert "[cuDNN][SDPA] Remove `TORCH_CUDNN_SDPA_ENABLED=1`, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)"
6d1b1ddd3e : Select Runner Label Dynamically (#127287)
7db501ba2b : Revert "[cuDNN][SDPA] Support different key, value dimension in cuDNN SDPA (#128350)"
d71f92213c : [DSD] keep 'exp_avg' as DTensor after torch.distributed.checkpoint.state_dict.set_optimizer_state_dict (#128004)
624e8ae491 : Documentation for is_dependent function (#128197)
a70a7337d2 : Update torch.nanmean() docstring to mention input dtype requirement (#128155)
0f52dc7e51 : Document `torch.cuda.profiler.stop` (#128196)
5001f41b90 : Revert "Make TraceUtils.h to be device-agnostic (#126969)"
f89574fa23 : Revert "Pass params to dump_nccl_trace_pickle (#128307)"
81e4e12f02 : Revert "Support aten operations with out tensor (#124926)"
c5172b8de8 : Revert "[AOTI] Switch to use shim v2 (#127674)"
9e39c62908 : correct avx512_vnni isa name. (#128318)
f2dcbe89d6 : Revert "Prevent expansion of cat indexing to avoid int64 intermediate (#127815)"
8df56afc20 : Add support in Python API for the recommended max working set size. (#128289)
b19c2319e4 : [ROCm] TunableOp for gemm_and_bias (#128143)
3c971d2ef3 : Flip default value for mypy disallow_untyped_defs [final] (#127836)
15ab636007 : Revert "Fix side effect pruning (#128028)"
5ef70faaa7 : Revert "Make torch_geometric models compatible with export (#123403)" (#128377)
71f491554c : Revert "First version of AOTAutogradCache (#126791)"
abc3eec22d : First version of AOTAutogradCache (#126791)
2e065f2486 : [Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#127592)
46a35a1ed4 : [BE] enable UFMT for `torch/__init__.py` (#127710)
26433b86de : [BE][Easy] sort `__all__` in `torch/__init__.py` (#127709)
2386045e4f : Add OpInfo entry for alias_copy (#127232) (#128142)
1edcb31d34 : [RELAND][inductor][cpp] bf16/fp16 gemm template computed with fp32 (#128472)
ebb00a92bd : [dynamo] Skip freezing expect failure for inlining inbuilt nn modules (#128470)
1602c7d0c8 : [dynamo] Enable some inlining inbuilt nn module tests (#128440)
04037f3d22 : [BE] sort imports in `torch/__init__.py` (#127708)
0b331fd5d7 : [CUDA] Abate `SoftMax.cu` compiler warning spam (#128468)
8b3daf1768 : Add FloatTrueDiv and ToFloat to SYMPY_INTERP (#128418)
a421699998 : Revert "[tp] refactor and fix PrepareModuleInput for DTensor inputs (#128431)"
dcc0093dba : [BE][Easy] export explicitly imported public submodules (#127703)
62311257ad : Add 1 test case for Convtranspose1D in op microbenchmark (#127216)
089f9a116a : [tp] refactor and fix PrepareModuleInput for DTensor inputs (#128431)
77a0ca66e4 : Add threadfence to 2-stage reduction for correct writes visibility (#128455)
c0b87afcad : [RELAND2][dynamo][nn-modules] Trace through nn.Module dunder methods for UnspecializedNNModule (#126578)
02e7519ac3 : DOC: strip inaccurate either float32 or float64 statement from set_default_type (#128192)
8cf302dce4 : [5/N] Change static functions in headers to inline (#128406)
86b5df3e71 : Documenting the torch.fx.annotate.annotate function (#128337)
7c2058338a : Improve convert fp32 to fp16 fx pass (#127829)
3ddec713b8 : Revert "[cuDNN][Quantization] Don't print when plan finalization fails in cuDNN quantization backend (#128177)"
85eeb90d2c : [dynamo] Fix graph breaks related to HF ModelOutput (#127780)
7f6daf289b : [inductor] parallel compile: set LD_LIBRARY_PATH for sub-processes in internal (#128376)
3d55d84ec2 : [Fix] Check tensor dtype before using torch.allclose in _trace log (#128438)
bb2a995529 : Back out "[Dynamo] Treat integers stored on nn.Modules as dynamic (#126466)" (#128432)
9538bf4e7c : [2/N] Remove inclusion of c10/util/string_utils.h (#128372)
219da29dfd : [7/N] Remove unused functions (#128407)
fb013ecb24 : Remove unused private List::ptr_to_first_element (#128405)
6af4c6acad : Migrate test to internal base class, fixes (#128367)
786c24a4cd : [inductor] Always realize sigmoid for CPU (#128339)
5d8c7f39d4 : Revert "Introduce int_oo (#127693)"
c9c1fed065 : Revert "Flip default value for mypy disallow_untyped_defs [10+2/11] (#128374)"
94fea82d66 : init sub comment (#128082)
447173198b : Add docstring for the torch.fx.operator_schemas.create_type_hint func… (#128139)
b79d056e76 : [export] FIx unflattener for preserving modules containing unused inputs (#128260)
eb567b1f40 : Pass params to dump_nccl_trace_pickle (#128307)
1dd2431f86 : [Test] Add test for only_active flag (#128191)
5fcb5f0c8b : init reshape_from_tensor_shape comment (#128171)
a55d0d9718 : Fix side effect pruning (#128028)
8c1247cffb : [BE] Fixed CPU autocast warning (#127774)
70a1e85718 : [Traceable FSDP2] Use custom ops for AllGather copy-in / copy-out and ReduceScatter copy-in (#127856)
adb699189b : Revert "[RELAND][dynamo][nn-modules] Trace through nn.Module dunder methods for UnspecializedNNModule (#126578)"
45dccfddcd : [cuDNN][SDPA] Support different key, value dimension in cuDNN SDPA (#128350)
3e09123797 : Enable UFMT on test_nestedtensor.py (#128359)
61f922c2ca : Fix 'get_real_value' on placeholder nodes (#127698)
984b1a8c35 : Fix 'get_attr' call in dynamo 'run_node' (#127696)
205410cb44 : add xpu to torch.tensors (#127280)
cac7a22b92 : [cuDNN][Quantization] Don't print when plan finalization fails in cuDNN quantization backend (#128177)
8a09940a54 : [inductor] fix compile time regression by caching get_gpu_type (#128363)
1d233b8f50 : Revert "Make nn.Module state_dict load_state_dict pre-hook and state_dict post hook public (#126704)"
491c4a5dcb : Revert "Make sure #126704 is BC for torch.save-ed `nn.Module` (#128344)"
4345d98663 : [dynamo] Fix for #127696 (#128358)
a838e90964 : Add Intel Gaudi device/HPU to auto load in instantiate_device_type_tests (#126970)
29081059b6 : [Static Runtime] Fix & run gen_static_runtime_ops (#128299)
f8c45996d5 : [MPS] Make erfinv compilable for bfloat16 (#128375)
c13e03c874 : Flip default value for mypy disallow_untyped_defs [10+2/11] (#128374)
053930e194 : [MPS][BE] Remove code duplication (#128373)
9a38cae299 : [AOTI] Switch to use shim v2 (#127674)
55901fb3da : [fx] Preserve Fx graph node order in partitioner across runs (#115621)
fc77fdca6f : [guard_size_oblivious] Add gso ExpandUtils:_sym_to (#128224)
648625b230 : Make TraceUtils.h to be device-agnostic (#126969)
207c2248a8 : [inductor] Fix lowering full with SymBool value (#128213)
a206dcc79e : fb_memcache: Move to fbcode from thirdparty (#128174)
f2d7f235a6 : [dynamo][yolov3] Track UnspecializedNNModuleVariable for mutation (#128269)
402b289f3b : Properly register parameter for binary folding test (#128356)
a32157c67c : Mark params static if inlining modules and freezing (#128355)
24e7f29099 : Lowering for avg_pool_3d_backward (Fixes:#127101) (#127722)
5b5d269d34 : Speed up fx graph iteration by implementing it in C++ (#128288)
fa88f390a0 : Revert "[inductor] enable fx graph cache on torchbench (#128239)"
fe39c07826 : [pipelining][doc] Remove duplicated words (#128368)
cba195c8ed : Support aten operations with out tensor (#124926)
16e67be7f1 : Also preserve unbacked SymInts when partitioning as backward inputs (#128338)
1cd41997e9 : [Release 2.4] Release only changes - use pinned triton. (#128388)
7afffdf48b : [CI] Comment hf_T5_generate, hf_GPT2 and timm_efficientnet in inductor cpu smoketest for performance unstable issue (#127588)
ca45649eb5 : [easy][dynamo][inline work] Fix test with inlining inbuilt nn modules (#128254)
665e568381 : [inductor][inlining nn module] Skip batchnorm version check test for inlining (#128268)
4077cdd589 : [pipelining][doc] Update arg list of pipeline API (#128361)
e4bd0adca5 : [6/N] Remove unused functions (#128309)
793df7b7cb : Prevent expansion of cat indexing to avoid int64 intermediate (#127815)
d1d9bc7aa6 : init add comment (#128083)
841d87177a : Make sure #126704 is BC for torch.save-ed `nn.Module` (#128344)
3b555ba477 : Add docstring for torch.utils.data.datapipes.decoder.basicandlers (#128018)
734e8f6ad7 : [inductor] enable fx graph cache on torchbench (#128239)
99f5a85a09 : [Clang Tidy] Fix misc-header-include-cycle errors in clang-tidy and ignore some files (#127233)
f843ccbb1a : [MTIA] Add set_device support (#128040)
30875953a4 : [1/N] Remove inclusion of c10/util/string_utils.h (#128300)
2126ae186e : Remove caffe2/perfkernels files (#128186)
739aa224ec : [Fix] Parameter un/lifting issues in the TorchScript to ExportedProgram converter (#127975)
b2d602306a : [RELAND][dynamo][nn-modules] Trace through nn.Module dunder methods for UnspecializedNNModule (#126578)
05711eece9 : [dynamo][inlining inbuilt modules] Ensure BC for nn_module_stack (#128295)
a287ff75d0 : Use init_torchbind_implementations in inductor torchbind tests. (#128341)
4bbadeee8a : Revert "Set simdlen based on ATEN_CPU_CAPABILITY (#123514)"
c85e2cacd3 : [Release 2.4] Release only changes (#128347)
2176ef7dfa : [compiled autograd] support .backward(inputs=) (#128252)
583a56d5a8 : DOC: add docstring to construct_and_record_rdzv_event() (#128189)
c38b3381a1 : Make nn.Module state_dict load_state_dict pre-hook and state_dict post hook public (#126704)
a2d4fea872 : [easy] Move state_dict hooks tests to test_module_hooks and decorate tests that call load_state_dict with swap (#126906)
58083ffb10 : Improve unbacked reasoning involving has internal overlap (#128332)
6630dcd53c : Add docstring for the torch.serialization.default_restore_location function (#128132)
3a2d0755a4 : enable test_ParameterList with dynamo if nn module inlining enabled only (#128308)
b459713ca7 : [aota] compiled forward outputs requires_grad alignment with eager (#128016)
4460e481bc : Disable jacrev/jacfwd/hessian if compiling with dynamo (#128255)
90bb510ece : Revert "Deprecate `torch._utils.is_compiling()` and `torch._dynamo.external_utils.is_compiling()` (#127690)"
38e0a0440c : [AMD] Default to hipblaslt in gemm (#127944)
946f554c8f : Flip default value for mypy disallow_untyped_defs [10+1/11] (#128293)
55646554b7 : [EZ] Fix typos in SECURITY.md (#128340)
9cab5987bd : Introduce int_oo (#127693)
db2fa7b827 : Revert "[export] FIx unflattener for preserving modules containing unused inputs (#128260)"
093a4ff5f8 : [export] FIx unflattener for preserving modules containing unused inputs (#128260)
fa8ec8e718 : [dynamo] handle hashable exceptions in trace_rules lookup (#128078)
136bdb96cb : Update Kineto submodule with fix to test_basic_chrome_trace (#128333)
83941482f7 : Add docstring for the torch.distributed.elastic.utils.distributed.get_free_port function (#128133)
08d038f8a8 : [PT2] Fix a typo and lint problem (#128258)
46948300a2 : [c10d] integrate PMI NCCL initialization to NCCL-PG (#128243)
ab3a0b192a : [RFC] add per-collective timeout value in flight recorder (#128190)
8e482e909b : Add some guard to size oblivious has_internal_overlap (#128328)
7b9c5e0e3f : Turn on GraphTransformObserver for inductor (#127962)
ca561d639b : Revert "Fix 'get_attr' call in dynamo 'run_node' (#127696)"
d22287d1ad : Revert "Fix 'get_real_value' on placeholder nodes (#127698)"
3b73f5de3a : Revert "Add OpInfo entry for alias_copy (#127232) (#128142)"
c993f1b37f : Fix edge cases for gather in inductor (#126893)
04da6aeb61 : Add OpInfo entry for alias_copy (#127232) (#128142)
b66e3f0957 : Set simdlen based on ATEN_CPU_CAPABILITY (#123514)
df43d5843e : fix miss isa bool check (#128274)
26f6a87ae9 : [5/N] Remove unused functions (#127185)
d3817d8a60 : Don't create python tuple when _maybe_handle_torch_function is called from C++ (#128187)
cd2ad29afe : [inductor] Reduce binding overhead of _reinterpret_tensor (#128185)
253fa9c711 : [AOTAutograd] Remove runtime import from view replay function (#128184)
55b2a0a002 : [AOTAutograd] Use _set_grad_enabled instead of no_grad (#128183)
5e7377e044 : [Dynamo][TVM] Make the `opt_level` parameter adjustable (#127876)
c7e2c9c37e : [c10d][doc] add a doc page for NCCL ENVs (#128235)
0bf2fe522a : [RFC] Provide optional switches to _dump_nccl_trace (#127651)
75b0720a97 : Revert "Use hidden visibility in OBJECTCXX files (#127265)"
4c971932e8 : [cuDNN][SDPA] Remove `TORCH_CUDNN_SDPA_ENABLED=1`, enable cuDNN SDPA by default on H100 and 2nd on other archs >= sm80 (#125343)
3964a3ec73 : Complete revamp of float/promotion sympy handling (#126905)
31c3fa6cf5 : [audio hash update] update the pinned audio hash (#128178)
7bfd1db53a : [4/N] Change static functions in headers to inline (#128286)
f681e3689b : [dtensor][experiment] experimenting with displaying distributed model parameters and printing sharding info (#127987)
2c2cf1d779 : [dtensor][experiment] experimenting with displaying model parameters (#127630)
d34075e0bd : Add Efficient Attention support on ROCM (#124885)
6e7a23475d : [easy] Run autograd if any mutations on inputs that require grad (#128229)
aee154edbe : [Traceable FSDP2] Make FSDPParam._unsharded_param creation traceable (#127245)
0dd55ee159 : Fix bug in _update_process_group API (#128262)
3494f3f991 : [dynamo] Skip inlining builtin nn modules for torch.compile inside cond (#128247)
33972dfd58 : [easy][inline-inbuilt-nn-modules] Fix expected graph for control flow test (#128246)
57536286e2 : Flip default value for mypy disallow_untyped_defs [10/11] (#127847)
8db9dfa2d7 : Flip default value for mypy disallow_untyped_defs [9/11] (#127846)
27f9d3b0a1 : Flip default value for mypy disallow_untyped_defs [8/11] (#127845)
038b927590 : Flip default value for mypy disallow_untyped_defs [7/11] (#127844)
7c12cc7ce4 : Flip default value for mypy disallow_untyped_defs [6/11] (#127843)
3a0d088517 : Flip default value for mypy disallow_untyped_defs [5/11] (#127842)
62bcdc0ac9 : Flip default value for mypy disallow_untyped_defs [4/11] (#127841)
afe15d2d2f : Flip default value for mypy disallow_untyped_defs [3/11] (#127840)
ea614fb2b1 : Flip default value for mypy disallow_untyped_defs [2/11] (#127839)
dcfa7702c3 : Flip default value for mypy disallow_untyped_defs [1/11] (#127838)
2369c719d4 : [DSD][BE] Cleanup unused variables and rename variables to avoid exposure to the users (#128249)
02a901f1e9 : Revert "[RFC] Provide optional switches to _dump_nccl_trace (#127651)"
57a24c4fdb : Revert "[RFC] add per-collective timeout value in flight recorder (#128190)"
348b181a97 : Deprecate `torch._utils.is_compiling()` and `torch._dynamo.external_utils.is_compiling()` (#127690)
917387f66d : [AOTI] fix a constant tensor device move issue (#128265)
695502ca65 : [3/N] Change static functions in headers to inline (#128194)
73d6ec2db6 : Increase verbosity of FX graph dumps (#128042)
0e6c204642 : [pipelining] Friendly error message when not traceable (#128276)
44371bd432 : Revert "[dynamo][nn-modules] Trace through nn.Module dunder methods for UnspecializedNNModule (#126578)"
6e13c7e874 : Revert "[dynamo] Support if cond on UnspecializedNNModuleVariable and add inline tests (#128158)"
94165dba7b : Revert "[dynamo] Inline the getattr of fx graph and proxy graph (#128172)"
8a0bc8c9ee : [fsdp2] simplify fsdp_param logic with DTensorSpec (#128242)
cbb7e3053f : View specialization (#127641)
310f80995b : Added memory budget to partitioner (#126320)
ffc202a1b9 : Added remove_noop_ops to joint_graph_passes (#124451)
c446851829 : [fsdp2] update foreach_reduce accumulate_grad (#128117)
613c7d270d : [pipelining] Format doc (#128279)
2e42671619 : [pipelining] Rename to stage.py and schedules.py (#128278)
0e3fe694d1 : [pipelining] Restore a stage constructor for tracer path (#128273)
8a45cf4c64 : [AOTI] align data_size of the constants (#127610)
1d84c7e100 : [DeviceMesh] Update get_group and add get_all_groups (#128097)
6e5c2a1a3b : [inductor] Add missing files to torch_key (#128230)
6220602943 : [torchbind] support query schema of methods (#128267)
0ef5229569 : Revert "Change lerp decomp to use aten.as_strided_copy instead of prims.copy_strided (#128030)"
f9508b4c1f : [pipelining] Update Pipelining Docs (#128236)
fe74bbd6f0 : init sigmoid comments (#127983)
921aa194c7 : [pipelining] Move modify_graph_op_device to _IR.py (#128241)
ad96f991a5 : [pipelining] Add pipe.build_stage() (#128240)
5ef081031e : [MPS] Include MPSGraphVenturaOps.h for complex types on macOS 12 (#127859)
647815049e : Inductor: Allow small sizes of m for mixed mm autotuning (#127663)
ef2b5ed500 : [4/N] Remove unused functions (#128193)
39dd4740e6 : [inductor][dynamo-inline-nn-modules] Fix test with inlining flag (#128200)
bef586111a : [pipelining] pipelining.rst updates (#128228)
09cccbc1c7 : [RFC] add per-collective timeout value in flight recorder (#128190)
11f2d8e823 : Move inductor cuda 124 jobs to a separate workflow that is not triggered by ciflow/inductor (#128250)
5b3624117a : update test_issue175 to handle inline_inbuilt_nn_modules (#128026)
ba81c3c290 : [inductor] add cpp builder code. (take 2) (#125849)
3a620a0f65 : bug fix of dynamo_timed in cprofile (#128203)
8892ddaacc : [TD] Test removal on sm86 (#127131)
fdf1666b20 : Change lerp decomp to use aten.as_strided_copy instead of prims.copy_strided (#128030)
e647ea55a3 : [pipelining] redirect README to document (#128205)
dcb63fcedb : [pipelining] Remove num_microbatches from stage (#128201)
cafbcb6376 : [BE]: Update ruff to 0.4.8 (#128214)
8ca4cefc7d : [C10D] Ensure gil is not released when calling toPyBytes (#128212)
0a6df4fca6 : delete inductor config.trace.compile_profile (#127143)
82d7a36a27 : Added torchao nightly workflow (#128152)
0c7f4353e5 : [inductor] simplify indexing (#127661)
662a78f957 : [dynamo] Inline the getattr of fx graph and proxy graph (#128172)
19b31d899a : Fix 'get_real_value' on placeholder nodes (#127698)
b741819b05 : Fix 'get_attr' call in dynamo 'run_node' (#127696)
3aa623d407 : Fix assume_constant_result for UnspecializedNNModuleVariable methods (#127695)
754e6d4ad0 : Make jobs with LF runners still pass lint (#128175)
85758fa5ae : [c10d][TCPStore] make TCPStore server use libuv by default (#127957)
6c824cd9fb : [BE][c10d] fix use of TORCH_ERROR in TCPStore libuv backend (#127956)
b9b89ed638 : [pipelining] fix LoopedBFS (#127796)
d9696ea624 : [AOTInductor] [Tooling] Update NaN and INF Checker for AOTInductor (#127574)
fc6e3ff96d : [ROCm] Update triton pin to fix libtanh issue (#125396)
128952625b : Revert "Added memory budget to partitioner (#126320)"
c219fa5eb9 : [3/N] Remove unused functions (#128179)
8d16a73f0f : Manipulate triton_hash_with_backend so that it doesn't contain any keywords (#128159)
852b7b4c99 : [inductor] Enable subprocess-based parallel compile as the default (#126817)
ac51f782fe : Revert "Complete revamp of float/promotion sympy handling (#126905)"
23c156cd2d : Revert "[inductor] simplify indexing (#127661)"
a1b664adeb : Add default values to PyTorchMemEffAttention::AttentionKernel::Params members (#112215)
3090667cf9 : [pipelining] pipeline() taking microbatch as example input (#128163)
224b4339e5 : Revert "Make ValueRange repr less chatty by default (#128043)"
6e75024ff0 : Run TestAOTAutograd with dynamo (#128047)
771be55bb0 : Documenting `torch.onnx.operator.shape_as_tensor` (#128051)
3f9798a4fd : add docstring to masked_fill, expand, select, unsqueeze, cat fns (#128055)
543a870943 : [pipelining] Rename ManualPipelineStage -> PipelineStage (#128157)
5f81265572 : [Traceable FSDP2] Return early from _register_post_backward_hook when compile (#127864)
7efaeb1494 : [AOTI] docs: add suggestion to turn on freezing on CPU (#128010)
0c16800b4a : [pipelining] include lifted constants in input_to_state (#128173)
01601ebd41 : Retire torch.distributed.pipeline (#127354)
70724bdbfe : Bugfix for nondeterminstic torch_key (#128111)
00c6ca4459 : [compiled autograd][cudagraphs] Inputs runtime wrapper to move cpu scalars to cuda (#125382)
190f06d468 : [pipelining] Lower _configure_data_parallel_mode to stage (#127946)
a448b3ae95 : [Traceable FSDP2] Check hasattr('fsdp_pre_all_gather') only when not compile (#127855)
2ff312359c : skip hf_T5_generate in dynamic shape test (#121129)
d943357a21 : [XPU] Add xpu support of `make triton` (#126513)
68cc63ae27 : introduce skipIfNNModuleInlined and skip test_cpu_cuda_module_after_dynamo (#128023)
7e48d6a497 : reset dynamo in test_do_not_skip_side_effects unit test loop to avoid dynamo cache limit hit (#127487)
dc8e3c2e90 : [inductor] subproc parallel compile: initialize future before sending work to the pool (#128086)
6a2bf48cfa : [inductor] subproc parallel-compile: start thread last in init (#128037)
e8e0bdf541 : [inductor] parallel-compile: call triton_key() before forking (#127639)
96806b1777 : [pipelining][doc] Add frontend description and change tracer example (#128070)
3df53c2a8f : [dtensor] directly return local_tensor under no_grad (#128145)
747fc35ff5 : [dynamo] Support if cond on UnspecializedNNModuleVariable and add inline tests (#128158)
5e5bbdb35e : [DDP] Bucket handling: make first bucket size equal to bucket_cap_mb if it was set (#121640)
4d0ece8196 : [pipelining] Consolidate chunk counting between stage and schedule (#127935)
476bfe6cce : fix torch.compile with triton kernels under inference_mode (#124489)
50155e825b : [export] provide refine function for automatically accepting dynamic shapes suggested fixes (#127436)
65aa16f968 : Revert "Default XLA to use swap_tensors path in nn.Module._apply (#126814)" (#128170)
f99409903c : Documenting `torch.distributions.utils.clamp_probs` (#128136)
740cd0559f : Filter non input symexprs from codecache guards (#128052)
117ab34891 : Documenting the torch.utils.collect_env.get_pretty_env_info function (#128123)
901226ae83 : [inductor] simplify indexing (#127661)
7ede78f9f5 : [dynamo][nn-modules] Trace through nn.Module dunder methods for UnspecializedNNModule (#126578)
e5b3387166 : [dynamo] Bugfix for nn parameter construction (#128001)
6dfdce92ba : Fixed typos in the complex numbers portion of the autograd docs (#127948)
56a3d276fe : Handle custom op during TorchScript to ExportedProgram conversion (#127580)
80fa2778ed : Update types for verbose in lr_scheduler (#127943)
0a761f0627 : [RFC] Provide optional switches to _dump_nccl_trace (#127651)
54fe2d0e89 : [cuDNN][quantization] skip qlinear test in cuDNN v9.1.0 (#128166)
04272a0e12 : Add docstring for the torch.ao.quantization.utils.get_combined_dict function (#128127)
baaa914bf7 : [small] test clean up (#128079)
9554300436 : [inductor][codegen] Codegen constexpr globals and constexpr annotated globals correctly. (#126195)
2184cdd291 : Added memory budget to partitioner (#126320)
7e059b3c95 : Add a call to validate docker images after build step is complete (#127768)
e8670f6aea : [Dynamo][TVM] Support macOS and Linux/aarch64 platforms (#128124)
de4f8b9946 : [BE]: Update cudnn to 9.1.0.70 (#123475)
fba21edf5b : [CI] Ensure inductor/test_cpu_cpp_wrapper is actually run in inductor_cpp_wrapper_abi_compatible (#126717)
936225d7b2 : [mergebot] Fix pending unstable jobs being viewed as failed (#128080)
32fb68960e : [FSDP2] Added experimental warning to `unshard` API (#128138)
78a6b0c479 : update test_reformer_train test to handle nn module inlining (#127467)
304956e1fb : Switch to torch.float16 on XPU AMP mode (#127741)
1d0c1087dd : Allow overriding per-dim group options via _MeshEnv.set_dim_group_options (#126599)
e9c5144cbc : Fix bug in update_process_group DDP API (#128092)
2ffdf556ea : Add back API that some people rely on in torch.cuda.amp.grad_scaler namespace (#128056)
2d47385f0f : [BE]: Enable ruff TCH rules and autofixes for better imports (#127688)
4f87f47ea1 : [dtensor] reuse DTensorSpec as much as possible (#128112)
f0dd11df55 : Make ValueRange repr less chatty by default (#128043)
0de6d2427f : Bump tolerances for `inductor/test_efficient_conv_bn_eval.py::EfficientConvBNEvalCudaTests::test_basic_cuda` attempt 2 (#128048)
a5b86a1ec0 : Revert "FP8 rowwise scaling (#125204)"
a5ba9b2858 : Fix for addcdiv contiguous problem (#124442)
c58d3af3b4 : Revert "Add OpInfo entry for alias_copy (#127232)"
9d849d4312 : Disable py3.12 nightly wheel builds for ROCm (#127968)
48a54146e7 : Revert "[dynamo] Support ndarray.dtype attribute access (#124490)"
f08fd8e9e3 : Remove redundant device guard in Resize.h (#126498)
c97e3ebb96 : Fix wrongly exposed variables in `torch/__init__.py` (#127795)
457df212e1 : Add OpInfo entry for alias_copy (#127232)
f5328542b5 : Allow multiple cudagraph recordings per compiled graph (#126822)
5a3bea1e88 : Remove unused arg to GraphLowering (#126821)
70ba6f0ab6 : Collect static parameter metadata in aot (#126820)
c8ff1cd387 : [FSDP2] Changed `test_register_forward_method` to use multiprocess test (#128100)
638f543ac2 : Enable single nadam test (#128087)
cd42b95047 : Handle aten::__contains__ during TorchScript to ExportedProgram conversion (#127544)
68eb771265 : [2/N] Remove unused test functions (#128005)
2f7cfecd86 : Complete revamp of float/promotion sympy handling (#126905)
c1a43a69e4 : [NestedTensor] Add error checks for unbind operator coverage when ragged_idx != 1 (#128058)
9795c4224b : Revert "[DDP] Bucket handling: make first bucket size equal to bucket_cap_mb if it was set (#121640)"
b4a0161449 : Build SYCL kernels for ATen XPU ops on Native Windows (take 2) (#127390)
6adcf21b2b : Documenting the torch.cuda.nccl.version function (#128022)
bf2c05352e : Make length == stop size oblivious too (#128050)
80d34217c6 : Typo fixes: et al. (#127811)
d3ad84c38f : Use pexpr, not texpr in Triton launch codegen (#128038)
8bcebc8dae : Add runtime dependency on setuptools for cpp_extensions (#127921)
2fd75667b4 : [Caffe2]Remove Caffe2 scripts and benchmarks (#126747)
e98662bed9 : [DDP] Bucket handling: make first bucket size equal to bucket_cap_mb if it was set (#121640)
ffaea656b5 : WorkerServer: add support for binding to TCP (#127986)
a7c596870d : [BE][Eazy] remove `torch.torch.xxx` usages (#127800)
4123323eff : [ONNX] Single function for torch.onnx.export and torch.onnx.dynamo_export (#127974)
01694eaa56 : Move cuda 12.4 jobs to periodic for both pull and inductor (#127825)
8184cd85fc : [fake tensor] Set _is_param for base fake tensors for views (#127823)
626dc934d1 : [dynamo][pippy] Hotfix for nn_module_stack for pippy usecase (#127972)
72e863df27 : Update _learnable_fake_quantize.py (#127993)
6e545392cd : Move nongpu workflows from trunk to periodic (#128049)
6412c6060c : [reland] Refresh OpOverloadPacket if a new OpOverload gets added (#128000)
bb68b54be0 : [BE][ptd_fb_test][1/N] Enable testslide (#127512)
3acbfd602e : Document torch.utils.collect_env.get_env_info function (#128021)
6454e95824 : [FSDP2] enable CI for torch.compile(root Transformer) (#127832)
4adee71155 : [dynamo] Support ndarray.dtype attribute access (#124490)
a9cc147fa1 : [DSD][FSDP1] Deprecate FSDP.state_dict_type and redirect users to DSD (#127794)
9acc19f8da : [inductor] Take absolute value of strides when picking loop order (#127425)
22964d1007 : [DSD] Deprecate submodules feature for DSD (#127793)
5dc9128229 : FP8 rowwise scaling (#125204)
4f9fcd7156 : Handle unpacking during TorchScript to ExportedProgram conversion (#127419)
9f2c4b9342 : Replace with standard type traits in torch/csrc (#127852)
3d617333e7 : Simplify CMake code (#127683)
df75a9dc80 : Remove Caffe2/onnx (#127991)
d48c25c7d1 : [BE] Fix missing-prototypes errors in Metal backend (#127994)
8992141dba : Restore MPS testing on MacOS 13 and m2 metal (#127853)
879d01afcb : [dynamo][numpy] Add unsigned integer dtypes (#125717)
4ce5322a1f : Enable UFMT on test_shape_ops.py test_show_pickle.py test_sort_and_select.py (#127165)
faabda4fc9 : [Inductor] Skip model_fail_to_load and eager_fail_to_run models in inductor benchmarks test (#127210)
c3949b20a1 : Opt model save and load (#126374)
9a8ab778d3 : Revert "[BE]: Update cudnn to 9.1.0.70 (#123475)"
bb2de3b101 : Fixed broken link and removed unfinished sentence from issue #126367 (#127938)
4a384d813b : [SDPA/memeff] Backport changes from xFormers to PT (#127090)
b054470db2 : Remove unused functions (#127881)
30788739f4 : [c10d] add a simple test to demonstrate the user usage of collectives (#127665)
e505132797 : [export] track TORCH_DYNAMO_DO_NOT_EMIT_RUNTIME_ASSERTS for export runtime asserts (#127554)
d5cb5d623a : Revert "Complete revamp of float/promotion sympy handling (#126905)"
55a4ef80c4 : [pipelining] test pipeline_order in schedule (#127559)
71e684bfae : [BE][Mac] Add missing prototypes (#127988)
ce4436944c : Fix IOS builds (#127985)
a135776307 : Remove tensor subclass detection logic from weights_only unpickler (#127808)
8e496046e5 : Update torch-xpu-ops pin (ATen XPU implementation) (#127879)
6c07e2c930 : fix redundant tensor (#127850)
8830b81208 : [c10d] Add commCreateFromRanks to c10d (#127421) (#127982)
7fdfb88f03 : [pipelining] rewrite interleaved 1f1b (#127332)
1f67cfd437 : [inductor] raise tolerance for cspdarknet (#127949)
907cb28f67 : Revert "Inductor: Allow small sizes of m for mixed mm autotuning (#127663)"
f4b05ce683 : Add registry for TorchScript to ExportedProgram conversion (#127464)
0eb9ec958a : Revert "Inductor respects strides for custom ops by default (#126986)" (#127923)
20f966a8e0 : Ignore undocumented PipelineSchedule.step (#127955)
a7b1dd82ff : Default XLA to use swap_tensors path in nn.Module._apply (#126814)
1b704a160f : Add linker script optimization flag to CMAKE rule for CUDA ARM wheel (#127514)
6dc0a291b9 : Revert "[dynamo] Bugfix for nn parameter construction (#127806)"
597922ba21 : Reapply "distributed debug handlers (#126601)" (#127805)
e76b28c765 : [dtensor][debug] added c10d alltoall_ and alltoall_base_ to CommDebugMode (#127360)
01e6d1cae4 : [dtensor][debug] added c10d reduce_scatter_ and reduce_scatter_tensor_coalesced tracing_ to CommDebugMode (#127358)
9a25ff77af : Revert "[inductor] Enable subprocess-based parallel compile as the default (#126817)"
f27c4dd862 : [dynamo] Bugfix for nn parameter construction (#127806)
569c5e72e7 : [dynamo] Unspec nn module when global backward hooks are present (#127802)
c7e936a56a : [dynamo] Tensorvariable - track grad with _grad field (#127785)
3bcc3cddb5 : Using scalarType instead string in function _group_tensors_by_device_and_dtype. (#127869)
0ff60236ab : Revert "Retire torch.distributed.pipeline (#127354)"
627d2cd87d : [CI] disable td for xpu ci test by default (#127611)
36e9b71613 : Enable UFMT on test/test_jit_fuser_te.py (#127759)
ff32f6c93b : Use freshly traced jit-traced module to be used in export analysis (#127577)
c490046693 : [BE]: Update cudnn to 9.1.0.70 (#123475)
97ea2b5d83 : documentation for pattern_matcher.py (#127459)
7a60a75256 : Add typing annotations to pattern_matcher.py (#127458)
9adfa143d7 : fix post_grad pattern (#127457)
f8c6d43524 : Concat namespaces and other fixes in torch/csrc/utils (#127833)
91461601b6 : [TORCH_FA2_flash_api] Update total_q to the reshaped query 0th dimension (#127524)
c209fbdc53 : [inductor] Fix missing unbacked def for unbacked in input expr (#127770)
059cae6176 : [Caffe2] Remove Caffe2 proto and other files (#127655)
4c074a9b8b : Revert "[torchbind] always fakify script object by default in non-strict export (#127116)"
fb696ef3aa : Complete revamp of float/promotion sympy handling (#126905)
db515b6ac7 : [ROCm] Fix error in torch.cuda initialisation if amdsmi is not available (#127528)
49048e7f26 : [FSDP2] Fixed variable shadowing of `module` (#127776)
f325b39303 : Introduce Inductor passes to micro-pipeline all-gather-matmul and matmul-reduce-scatter in certain cases (#126598)
cf77e7dd97 : [inductor] Enable subprocess-based parallel compile as the default (#126817)
b9c058c203 : Retire torch.distributed.pipeline (#127354)
6abca6a564 : [export][unflatten] More strictly respect scope when removing inputs (#127607)
e216df48c8 : [Dynamo][TVM] Fix ignored `trials` argument for MetaSchedule (#127747)
2122c9e2a9 : [BE] Enabled lintrunner on torch/distributed/utils.py (#127771)
ef77f2ca4a : [pipelining] Simple 1F1B schedule (#127673)
f4b77ce8e2 : Masked scale meta function registration #119984 (#127389)
e7cb43a2d2 : Check unused variables in tests (#127498)
2ad0e4197d : [ts-migration] support aten::__is__, aten::__isnot__, aten::__not__, profiler::_record_function_enter_new, profiler::_record_function_exit (#127656)
8d153e0bab : [Inductor] Add FlexAttention backward kernel dynamic shape tests (#127728)
e793ae220f : [Inductor][Flex-attention] Support different sequence lengths for Query and Key/Value (#127678)
dae757c971 : Specify supported OS matrix (#127816)
22368eac10 : [FSDP2] Fix submesh slicing to enable 3D parallelism (#127585)
69f5b66132 : [Inductor] FlexAttention backward kernel optimization (#127208)
2498ef7490 : Fix scheduler typehints (#127769)
6580a18f86 : [c10d][BE] fix test_init_pg_and_rpc_with_same_socket (#127654)
7e906ec9e5 : [PT2][Optimus] Improve group batch fusion with same parent/users fusion enablement (#127648)
c32fe6b279 : [FSDP] keep paras in torch.distributed.checkpoint.state_dict.set_optimizer_state_dict (#127644)
4d0386ce1c : [torch/jit-runtime] Add explicit include of <chrono> to torch/jit/run… (#127779)
ddef7c350f : Add comments about runner labels (#127827)
1208347d09 : [inductor][ez] fix loop ordering test (#127807)
41033a4274 : PyPI: fix link to images to be rendered (#127798)
05fa05cbae : [2/N] Change static functions in headers to inline (#127764)
dbf39a6e63 : [inductor] fix linear_add_bias path (#127597)
b42cfcabc4 : Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu (#125946)
ac568fc007 : [CUDNN] Remove defunct cuDNN V8 API build flag (#120006)
0e7bd7fedd : [ROCm] TunableOp improvements (#124362)
0f1f0d3015 : Onboard ARM bfloat16 to gemv fast path (#127484)
f6ca822366 : Patch ARM Half use_gemv_fast_path gate to avoid kernel duplication (#127478)
6faa3d5f18 : Onboard ARM bfloat16 to gemm-by-dot-product-for-gemm_transa_ infrastructure (#127477)
01fc22056a : [BE] enable UFMT for `torch/masked/` (#127715)
406532f864 : [AMD] Fix power_draw api (#127729)
c27882ffa8 : [torchbind] always fakify script object by default in non-strict export (#127116)
3efac92888 : [torchbind] support torch.compile with aot_eager backend (#127114)
c6dc624690 : [torchbind] remove test cases that don't fakify script objects (#127113)
6d4ec9b2ec : [RFC] Introduce Checkpointable for DCP (#127540) (#127628)
a4064da8ca : Always simplify sympy expressions before printing. (#127543)
ef9451ac8d : Move the build of AOTriton to base ROCM docker image. (#127012)
941316f821 : [pipelining] Stress test schedules with multi iters (#127475)
db9d457a3f : Use sleef on macOS Apple silicon by default (#126509)
2fc907971a : Revert "[Inductor] FlexAttention backward kernel optimization (#127208)"
3f45fa63f2 : Revert "[Inductor] Add FlexAttention backward kernel dynamic shape tests (#127728)"
c35b65715c : Revert "[Inductor][Flex-attention] Support different sequence lengths for Query and Key/Value (#127678)"
3437177e2b : Quick Fix on #126854, deepcopy `lr` and other possible `base_parameters` (#127190)
d8d0bf264a : Inductor: Allow small sizes of m for mixed mm autotuning (#127663)
7c3740d388 : [NestedTensor] Extend coverage for unbind when ragged_idx != 1 (#127493)
4d32de14b6 : [export] Handle serializing duplicate getitem nodes (#127633)
12c4a2c297 : [BE]: Apply PLR1736 fixes (unnecessary index lookup) (#127716)
21144ce570 : [dtensor] implement scatter op with simple replication (#126713)
ded580a594 : [dtensor] standardize multi mesh-dim strategy with utils (#126712)
d1fad416a8 : Revert "Add aten._unsafe_masked_index (#116491)"
53f001c599 : Revert "correct BLAS input (#126200)" (#127762)
8677508167 : [c10d] guard gpu context during abort (#127363)
430cdfc0ac : [ATen][Native] fixes sparse SPMV on aarch64 (#127642)
badf898df2 : Remove unstable ARC jobs (#127563)
63d7ffe121 : Retry of D58015187 Move AsyncCompile to a different file (#127691)
3f8b8f08c8 : [Split Build] Make libtorch_global_deps accessible from libtorch wheel (#127570)
d05cddfe23 : Revert "FP8 rowwise scaling (#125204)"
f03f8bc901 : Add aten._unsafe_masked_index (#116491)
d6963e769c : Force Inductor output code to be dumped even if it fails to compile (#127700)
f343f98710 : [jit] Validate mobile module fields parsed by flatbuffer loader (#127437)
e017b56c0c : [dtensor] local_map UX change: keep func signature and be compatible with Tensor input (#126924)
2d1ad0c31a : [CI] Add freezing for cpu inductor accuracy test in inductor CI (#124715)
10e3406ea5 : [Inductor] Add FlexAttention backward kernel dynamic shape tests (#127728)
6d21685b45 : [DSD] Fixes various bugs for broadcast_from_rank0 (#127635)
48846cd164 : Update torch-xpu-ops pin (ATen XPU implementation) (#127730)
e2e3ca94cc : [Inductor][Flex-attention] Support different sequence lengths for Query and Key/Value (#127678)
288df042c5 : [1/N] Change static functions in headers to inline (#127727)
1b182ea0d2 : Remove c10::guts::{conjunction,disjunction} (#127726)
3399ad8d9d : [Inductor][CPP] Add UT for bitwise right shift (#127731)
7e97b33fbb : [Dynamo] Log backward graph compilation metrics (#126629)
84776d7597 : Revert "[BE]: Update mypy to 1.10.0 (#127717)"
e57f51b80f : Update _dedup_save_plans.py (#126569)
fec8ef8c17 : [Aten][BlasKernel] Add function prototype to fix compiler error (#127719)
8b08b0f340 : [BE] enable ruff rule `Q` from flake8-quotes (#127713)
139b9c6529 : Avoid reference cycle in inner closure (#127711)
30213ab0a7 : [BE]: Update mypy to 1.10.0 (#127717)
fb53cd6497 : [aten_cuda/flash_attn] Add typename to template argument Kernel_trait… (#127634)
08653fe355 : Beef up the allow_in_graph docs (#127117)
e24a87ed8d : [BE][Ez]: Apply PYI059 - Generic always come last (#127685)
c2547dfcc3 : [BE][Ez]: Enable ruff PYI019 (#127684)
67ef2683d9 : [BE] wrap deprecated function/class with `typing_extensions.deprecated` (#127689)
c1dd3a615f : Implement Graph Transform Observer (#127427)
4e7f497bb3 : [Submodule] Remove ios-cmake (#127694)
2129903aa3 : Properly detect nested torch function args (#127496)
16578e8584 : [symbolic shapes] if symbol not in var_ranges default to unknown range (#127681)
4fd777ed59 : [ONNX] Add quantized layer norm op to opset 17 (#127640)
c19ad112f6 : [Inductor UT][Intel GPU] Skip test case which doesn't currently work on the XPU stack but newly re-enabled by community. (#127629)
2cef2fc2b4 : [ts migration] support aten::dim, aten::len, aten::__getitem__ (#127593)
0d9e527c4d : Remove tensor storage_offset/storage_bytes from the cache key (#127319)
2e779166eb : [Functorch][cuDNN] Bump tolerances for `test_vmapjvpvjp` (#127355)
6e2e09f6cc : [inductor] fix redis-related env vars in remote_cache.py (#127583)
b505e86475 : [Inductor][CI][CUDA 12.4] Update dynamic_inductor_timm_training.csv - change gluon_inception_v3 from fail_accuracy to pass (#127672)
17dea09b15 : Revert "Default XLA to use swap_tensors path in nn.Module._apply (#126814)"
82cd7a7dab : Revert "Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)"
42312a52b3 : [DSD] Adds type_check param to copy state dict utils (#127417)
edffb28d39 : [BE][Ez]: Enable B019 - flags memory leaks through LRU cache on method (#127686)
22f392ba40 : Revert "[easy?] Move AsyncCompile to a different file (#127235)"
d49dc8f4b8 : Revert "Add noqa to prevent lint warnings (#127545)"
114c752b14 : Revert "Improve MAGMA conditional macro in BatchLinearAlgebra.cpp (#127495)"
efcea2d2fd : [dynamo] Support __getitem__ on NNModuleVariable __dict__ (#126956)
4129c3e596 : Let us find out why we wrote foreach meta regs (#127623)
ac60bdaf01 : Allow slow foreach to run for any backend, not just CPU (#127412)
4aa7a1efcf : [dynamo] Initial exception handling support (#126923)
25994a7ed1 : [AOTI] Fix a bug when mutated buffer meets .to (#127671)
c3be459f26 : [inductor] fix mkldnn linear binary fusion check ut (#127296)
e62925930f : Clear dest impl extra_meta_ info when shallow_copy_from src impl to dest impl. (#127616)
554265d450 : [Inductor]: Use new device-agnostic libdevice import from triton.language (#127348)
7ef7c265d4 : Ack codecvt_utf8_utf16 as a deprecated func in C++17 (#127659)
3c1cf03fde : Add fake impl for aten.unique_dim (#126561)
25447ba241 : Always Link libtorch and libtorch_cpu to ensure the functionality for AOT mode (#127381)
df53cc7114 : [reland] "[reland] `_foreach_copy` with different src/dst dtypes" (#127186)
ff8042bcfb : Enable AOTI shim v2 build and add into libtorch (#125211)
a8c9b26534 : [BE] Fix dependabot security errors (#127567)
f7171313ab : [Inductor] FlexAttention backward kernel optimization (#127208)
57baae9c9b : Migrating CI/CD jobs to macOS 14 (#127582)
02248b73eb : [EZ] Port over all test-infra scale configs to lf runners (#127645)
bb1468d506 : Updates state dict in state dict loader (#127617)
f33beb767d : [NestedTensor] Use maybe_mark_dynamic instead of mark_dynamic (#127453)
6bfc6e0875 : Add back private function torch.cuda.amp.autocast_mode._cast (#127433)
923edef31c : FP8 rowwise scaling (#125204)
806e6257f3 : Unconditionally assign symbolic shapes as locals (#127486)
033e733021 : Revert "[BE] wrap deprecated function/class with `typing_extensions.deprecated` (#126898)"
ea13e9a097 : correct BLAS input (#126200)
bbf892dd58 : Revert "Add back private function torch.cuda.amp.autocast_mode._cast (#127433)"
1103444870 : [AOTI] Add back include_pytorch for specifying link paths (#126802)
8af1c655e5 : improve eager overhead of _disable_dynamo (#127325)
b704c7cf0f : Re trying Support min/max carry over for eager mode from_float method (#127576)
121c55d8d1 : Old branch deletion script to also delete old ciflow tags (#127625)
0be06b08fc : [GPT-fast benchmark] Merge GPT-fast and micro benchmark output as one CSV file (#127586)
4a0d96e496 : Add a GH action to autolabel docathon PRs (#127569)
b2f5fd8efb : [ts_converter] Basic support for prim::If conversion (#127336)
3e66052e16 : Improve python3 discovery code in CMake (#127600)
8d7393cb5e : Update triton-xpu commit pin merge rules for XPU (#127203)
1699edaabb : [DeviceMesh] Adding nD slicing support back (#127465)
8bf2c0a203 : [BE][Ez]: Update ruff to 0.4.6 (#127614)
58b461d57a : Revert "[ROCm] Update triton pin to fix libtanh issue (#125396)"
225ec08e35 : Fix typo in .ci/docker/ubuntu-cuda/Dockerfile (#127503)
67f0807042 : [Inductor] [CI] [CUDA] Skip the failed models and tests the better way (#127150)
64c581a1d4 : [DSD] Make distributed state_dict support torch.distributed is not initialized case (#127385)
8b4ad3a8d9 : [DSD] Unify the API signatures of set_model_state_dict and set_optimizer_state_dict (#127384)
bd868eeb28 : [DSD] Support flattening the optimizer state_dict when saving and unflattening when loading (#127071)
6b1b8d0193 : [DSD] Remove the support of Dict[nn.Module, Dict[str, Any]] state_dict (#127070)
a010fa9e24 : [DCP] Fix variable spelling (#127565)
75e7588f47 : [Inductor UT] Fix expected failure but pass for test case on Intel GPU. (#127595)
4644def434 : Update docstring for weights_only (#127575)
cddb8dbebe : add workloadd events to pytorch (#127415)
10a92b5f84 : [AOTI] Fix a bool value codegen issue when calling custom ops (#127398)
17c5b6508b : [AOTI] Support _CollectiveKernel in the cpp wrapper mode (#127037)
413b81789f : [AOTI][refactor] Unify val_to_arg_str and val_to_cpp_arg_str (#126916)
aaef7b29e9 : Only register _inductor_test ops if not running with deploy (#127557)
029b3ec775 : Revert "[inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)"
a6bae1f6db : Remove more caffe2 files (#127511)
df0c69f32d : [inductor] Add fallback for collectives size estimation for unbacked (#127562)
f4d7cdc5e6 : [dynamo] Add current instruction to BlockStackEntry (#127482)
2a03bf5a14 : [inductor] fix grid z bug for large grid (#127448)
4935a019e4 : [ONNX] Update decomposition table to core ATen ops (#127353)
0c5faee372 : Replace python::python with Python::Module (#127485)
b5e85b8ecc : Add deferred_runtime_assertion pass after run_decompositions (#127305)
ae47152ca8 : Expand supported labels to most self-hosted linux pull.yml workflows (#127578)
ec098b88b6 : [compiled autograd] torch.compile API (#125880)
ee08cf5792 : Improve MAGMA conditional macro in BatchLinearAlgebra.cpp (#127495)
159632aecd : [dynamo] Support hasattr on BuiltinVariable (#127372)
bb6bfd9ad8 : [dynamo][compile-time] Cache the child guard managers (#127377)
f264745ff1 : [interformer] batch pointwise op + unbind stack pass in post grad (#126959)
8629f9b3f2 : Remove more unused variables in tests (#127510)
0aaac68c57 : Add structured logging for tensor fakeification (#126879)
b1792a622d : [pipelining] handle param aliasing (#127471)
d535de1747 : [inductor] remove reordering_reindex (#127367)
7646825c3e : Revert "distributed debug handlers (#126601)"
d44daebdbc : [Submodule] Remove deprecated USE_TBB option and TBB submodule (#127051)
da9fb670d2 : Nadam support the flag for "maximize" (#127214)
f6e303fa47 : Revert "[DeviceMesh] Adding nD slicing support back (#127465)"
af5ed05416 : Include triton in py3.12 binaries (#127547)
fc73d07e5e : [c10d] Decorate methods in `NCCLUtils.hpp` with `TORCH_API` (#127550)
a2bff4dc8c : Fix lint (#127584)
e72232f8f0 : [DeviceMesh] Adding nD slicing support back (#127465)
214dd44608 : [c10d] add Work's numel to logger for debugging purposes (#127468)
620ec081ec : Extract inner loops into separate function for ARM64 fp16_dot_with_fp32_arith (#127476)
603bde1de3 : Use efficient ARM fp16 dot product for gemm_transa_ general case (#127451)
74b89b9283 : Extract dot-product functions from fp16_gemv_trans gemv kernels (#127435)
a3c00e4331 : [Easy] Move V.fake_mode inside of replace_by_example (#127494)
f9a1bc2c65 : [FSDP] Remove _sync_module_states (#124678)
029af29e6d : support operator.index function (#127440)
3b88c27c46 : Mark DynamicShapesExportTests::test_retracibility_dict_container_inp_out as slow (#127558)
e02971fcfb : Revert "Enable UFMT on test_shape_ops.py test_show_pickle.py test_sort_and_select.py (#127165)"
4ee003abdf : [inductor] Repeat should not return a view (#127533)
a288b95d4e : Enable UFMT on test_shape_ops.py test_show_pickle.py test_sort_and_select.py (#127165)
f471482eb2 : Try to include NCCL related header file with macro USE_C10D_NCCL (#127501)
6849b80411 : Add `ninja` as dev dependency (#127380)
094183dba6 : [torchbench][pt2] Enable Huggingface and Timm models for interal buck runner (#127460)
bf2f5e70dd : Fix warnings in SmallVector (#127250)
ad1b18ab2f : Add repo-specific scale config files (#127566)
846f79e61a : Revert "Reduce number of samples in {svd,pca}_lowrank OpInfos (#127199)"
cce2192396 : [pipelining] Support calling multiple recv fwd/bwd ops (#127084)
aa3d041830 : [pipelining] Fix block comments for doc rendering (#127418)
ff23c5b7d7 : [cudagraph] improve log for mutating static input tensor addresses (#127145)
19333d1eb9 : [ROCm] Update triton pin to fix libtanh issue (#125396)
2cb6f20867 : Warn env vars only once during program (#127046)
4afc5c7bb9 : [torchscript] Handle prim::device and prim::dtype (#127466)
fa426b096b : Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)
bfdec93395 : Default XLA to use swap_tensors path in nn.Module._apply (#126814)
39cf2f8e66 : Added sorting notes for eig/eigvals (#127492)
7827afca14 : Copy the constant folding pass to the pass under export/passes folder (#127456)
f9937afd4f : Add noqa to prevent lint warnings (#127545)
12d6446507 : Revert "[inductor] fix mkldnn linear binary fusion check ut (#127296)"
e9a6bbbf7c : Revert "[CI] add xpu test in periodic workflow (#126410)"
8777443d73 : Remove FindMatlabMex.cmake (#127414)
b506d37331 : Fix multiple errors while parsing NativeFunctions from YAML (#127413)
ea5c17de90 : Revert "Add torchao nightly testing workflow (#126885)"
be7be9fa16 : [Distributed] [8/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#125102)
576c5ef1dd : [inductor] fix some tests in test_max_autotune.py (#127472)
d2df0f56a3 : Fix compilation_latency regression caused by #127060 (#127326)
ffe506e853 : Better graph break msg (and warning) on Dynamo x Python C++ extension (#127301)
c9beea13ac : Rewrite existing links to custom ops gdocs with the landing page (#127423)
18a3f781e6 : Reduce number of samples in {svd,pca}_lowrank OpInfos (#127199)
48538d3d14 : Implement svd_lowrank and pca_lowrank for complex numbers (#125580)
3fb8a0b627 : Fix nextafter in inductor CPP codegen (#126876)
ce63b676f3 : Revert "[compiled autograd] torch.compile API (#125880)"
6e0eeecc7c : Add back private function torch.cuda.amp.autocast_mode._cast (#127433)
3f5d8636aa : [inductor] Copy RedisRemoteCacheBackend into pytorch (#127480)
cdeb242fc9 : [inductor] fix mkldnn linear binary fusion check ut (#127296)
9f73c65b8f : xpu: pass MAX_JOBS building xpu_mkldnn_proj (#126562)
30d98611a3 : [CI] add xpu test in periodic workflow (#126410)
1071437169 : Introduce cuda_p2p based fused_all_gather_matmul and fused_matmul_reduce_scatter (#126634)
705346bf8d : [ONNX] Skip optimizer when it fails (#127349)
cd06ae0cb8 : Relax use_count constraints for swap_tensors when AccumulateGrad holds a reference (#127313)
d44ab8ba6d : [dynamo] utility to generate bytecode from template function (#127359)
5d316c81be : [Inductor] Add 0 initialization to Triton masked loads (#127311)
3947731887 : enable test_parameter_free_dynamic_shapes test when nn module inlining is on (#127424)
15cc9f2e7e : [dtensor][be] added checksAssert function and refactored test cases (#127356)
998f38814c : [dtensor][debug] added c10d allgather, allgather_coalesced, and allgather_into_tensor_coalesced tracing to CommDebugMode (#127334)
f58fc16e8f : [easy?] Move AsyncCompile to a different file (#127235)
e0fc1ab625 : Forward fix for templates + views (#127446)
3d541835d5 : distributed debug handlers (#126601)
e1c322112a : [compiled autograd] torch.compile API (#125880)
da39461d61 : [optim] Move test_grad_scaling_autocast_fused_optimizers to test_cuda.py (#126418)
67739d8c6f : Revert "[Submodule] Remove deprecated USE_TBB option and TBB submodule (#127051)"
1abcac9dab : New Custom Ops Documentation landing page (#127400)
49ad90349d : Correct error message for aten::_local_scalar_dense on meta tensor (#124554)
d66f12674c : Handle tuple and dict during TorchScript to ExportedProgram conversion (#127341)
f14dc3bde8 : Fix check message (#126951)
76fc58c160 : Document the legacy constructor for Tensor (#122625)
7931eee5c5 : Support torch.dtype as parameter in pybind11 cpp extension. (#126865)
8ea1dc8748 : Use Python::NumPy target (#127399)
0fa2c5b049 : Fix mask propagation in the presence of where (#125574)
15a7916c0e : Ability to capture Process Groups information into Execution Traces (#126995)
3174e6cb8e : [Temp][CI] Run older MPS tests/Mac builds on MacOS 13 (#127428)
9257a0698b : [Split Build] Load dependencies from libtorch in __init__.py (#126826)
b0ef363972 : [dtensor] rename _Partial -> Partial for all imports (#127420)
d99b115eb3 : Fix delete old branches workflow (#127442)
38a33c3202 : don't call .item in onehot for XLA (#127335)
84b5aa9a68 : [Caffe2] [Reland] Remove Caffe2 proto files (#127394)
92d081e228 : [Docs] Add `str` type to `cuda.get_device_name()` and `cuda. get_device_capability()` function (#126743)
24a4bfdcc2 : [AdaRound] Make versatile for data / extra param for callback function (#126891)
c404b2968c : Support min/max carry over for eager mode from_float method (#127309)
82a370ae3a : Revert "Refresh OpOverloadPacket if a new OpOverload gets added (#126863)" (#127366)
05e99154ee : Allow int vals to go down the fastpath for _foreach_max (#127303)
601c5e085d : Add _foreach_max (#127187)
90f4b3fcb2 : PyTorch Distributed security assumptions (#127403)
5196ef1b59 : support builtin id function on user defined object variables. (#127146)
ff65b18fcf : Update the is_causal explaination in the SDPA doc (#127209)
9cc0d56fdc : Remove unused variables in tests (#127379)
d938170314 : Add torchao nightly testing workflow (#126885)
090a031d6f : Use bit_cast instead of UB type-pun-via-union in Half.h (#127321)
8b5cbb7c68 : Improve NLLLoss docs (#127346)
28de9143a3 : opcheck should be usable without optional dependencies (#127292)
8a31c2aa84 : [export] allow complex guards as runtime asserts (#127129)
cc6e72d882 : Drop caffe2 core tests and some other stuff (#127089)
e8e327ba82 : Cover clang-tidy to torch/csrc/onnx/init.cpp (#127393)
7de1352457 : [1/N] Replace exceptions with static_assert(false) in some templates (#127371)
c69562caf9 : [Caffe2]Remove more caffe2 files (#126628)
80a8fc07b2 : [dynamo] Handle np.iinfo/finfo/dtype as input (#124482)
9a8e8101a8 : Fix wording in nn.Linear docstring. (#127240)
ade075444f : [dynamo] Support numpy.dtype (#124481)
bf966588f1 : [BE][Ez]: Update cudnn_frontend submodule to v1.4.0 (#127175)
0910429d72 : [BE][CMake] Use FindPython module (#124613)
942d9abd66 : [AOTI] Update reinplace to cover mutated buffer (#127297)
af69a52f06 : Reapply "Remove more of caffe2 (#126705)" (#127317)
749a132fb0 : [BE] wrap deprecated function/class with `typing_extensions.deprecated` (#126898)
699db7988d : [Submodule] Remove deprecated USE_TBB option and TBB submodule (#127051)
02b1cdab23 : [Sync torch_FA2 and FA2 flash_api] + [Expose seqused_k & alibi_slopes arguments] (#126520)
dae33a4961 : [inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)
65af1a9c26 : FIX the document of distributed.new_group() (#122703)
6c81856dca : [inductor] Add a subprocess-based parallel compile (#126816)
92bc444ee3 : [inductor][cpp] epilogue support for gemm template (#126019)
00999fd8b1 : Prefer flip over index_select (#126783)
8a21532e53 : Fix constant propagation pass (#114471)
51b22d9cf2 : [dynamo] Support enum construction (#127364)
ad7700bfdb : [inductor] Misc changes (#127307)
cef776bcd1 : [inductor][cpp] GEMM template (infra and fp32) (#124021)
719589c9bf : [dynamo] move bytecode tests from test_misc to new bytecode test file (#127329)
a60b06bd2b : [dtensor] update public API docs (#127340)
2c9a420da3 : [dtensor] move some modules to private namespace (#127339)
72ef2555e3 : [dtensor] make Partial placement public (#127338)
5359af0c7e : [dynamo] wrap GraphModule exceptions in dynamo-wrapped tests (#126341)
cdf2133186 : Add compile time profiler for non fbcode targets (#126904)
2b72e2a596 : [Cudagraph] better support for streams (#126809)
a41f828da7 : [c10d] fix group_name/group_desc set up in eager initialization (#127053)
932e04142d : extract calculate_time_spent from print_time_report (#127362)
a25b28a753 : [Split Build] Add option to create libtorch wheel and use it to build pytorch as a separate wheel (#126328)
8090145936 : [pipelining] add back support for multi-use parameters/buffers (#126653)
781f26240a : Add script to copy distributed commits to stable branch (#126918)
10d2373abd : Add a registry for GraphModuleSerializer (#126550)
cdbb2c9acc : Revert "[Submodule] Remove deprecated USE_TBB option and TBB submodule (#127051)"
7a506dd005 : Revert "[Caffe2]Remove Caffe2 proto files (#126134)"
669560d51a : Use hidden visibility in OBJECTCXX files (#127265)
52e448a7f9 : Revert "Enable Wunused-variable on tests (#127161)"
85172fbe84 : Back out "Prevent partitioner from ever saving views (#126446)" (#127316)
a40658481a : [Caffe2]Remove Caffe2 proto files (#126134)
f4cbcff8ef : [TorchScript] Expand TorchScript __init__ annotation warning (#127045)
1be7e4086a : Drop caffe2 nomnigraph (#127086)
f6ef832e87 : [inductor] Use symbolic_hint when bounding fallback size hint (#127262)
26a8fa3a06 : [inductor] Restore ExpandView sanity checks (#127251)
db0a0ecb60 : [FSDP2] Added test for N-way TP and 1-way FSDP with CPU offloading (#127024)
6b24155827 : [dtensor][debug] added c10d gather, reduce, scatter tracing to CommDebugMode (#127134)
a76faff71c : [NCCL][CUDA] Optionally avoid rethrowing CUDA Errors in NCCL Watchdog (#126587)
93bfe57144 : cudagraphs: fix backward hooks & fsdp interaction (#126914)
4154c8358a : [BE] Wrap store check in a try/catch (#127030)
f206c5c628 : [export] handle new roots & root swapping in derived dims suggested fixes (#125543)
0a9d73a814 : Remove c10::guts::bool_constant and c10::guts::negation (#127300)
03005bb655 : Improve the clarity of the torch.Tensor.backward doc (#127201)
f600faf248 : [metal] Improve perf of int4pack_mm shader (#127135)
c9172d4471 : print default value in FunctionSignature (#127059)
045309aa35 : [MPS] Enable toch.mm and friends for complex dtypes (#127241)
829f594d7d : [small] guard_size_oblivious, skip check for meta (#127298)
9521528f71 : Log export result of torch.jit.trace to scuba (#126900)
3f79e09515 : Revert "Made some minor improvements to flexattention perf + added more autotune configs (#126811)"
254783ce80 : [Fix]: populate input parameter name when convert TorchScript to ExportedProgram (#126787)
122282111d : [inductor][reland] Various improvements to error handling during autotuning (#126847)
df360e2add : Update derivatives.yaml (#127193)
cbb79a2baf : [export] Disable backend decomps for capture_pre_autograd (#127120)
c40408850a : [1/N] Fix clang-tidy warnings in aten/src/ATen/cuda/ (#127183)
3d88c618d5 : Concat namespaces in torch/csrc/profiler and other fixes (#127266)
4d4d2a96f2 : Add space in MetaFallbackKernel.cpp error message (#127291)
a6b994ed54 : Fix lint after #126845 (#127286)
ec8b254ef4 : Refactored template codegen to explicitly set current body when generating code (#127144)
457b9f7397 : Optimize mask memory for flash attention (#126961)
1507d5205a : [dynamo][fsdp] Skip Dynamo tracing of __getattr__ if its top-level frame (#127263)
d6e3e89804 : Remove c10::void_t (#127248)
246311c944 : Unconditionally add asserts after export (#127132)
e4b245292f : Remove caffe2::tensorrt target code from cuda.cmake (#127204)
c6b36ec2f9 : Remove calls of deprecated _aminmax (#127182)
d957c2d5de : [Doc] update default magma cuda version in readme (#122125)
7c61e7be5c : Address issue #125307 (#126351)
8979412442 : Enable ufmt format on test files (#126845)
57000708fc : Remove c10::invoke_result (#127160)
6436a6407d : Enable Wunused-variable on tests (#127161)
70d8bc2da1 : Fix various errors in TCPStoreLibUvBackend.cpp (#127230)
0ff2f8b522 : update kineto submodule hash (#126780)
25a9262ba4 : Add structured logging for fx graph cache hash (#127156)
26f4f10ac8 : [5/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort torch (#127126)
c7f6fbfa9d : Revert "[FSDP2] Added test for N-way TP and 1-way FSDP with CPU offloading (#127024)"
7121ea6f70 : Revert "Add compile time profiler for non fbcode targets (#126904)"
00fe0a0d79 : Revert "Remove more of caffe2 (#126705)"
1110edb94b : Fix stream type to generic in comms default hooks (#120069)
55c0ab2887 : Revert "[5/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort torch (#127126)"
4608971f7a : Revert "[inductor][cpp] GEMM template (infra and fp32) (#124021)"
343a41fba8 : Revert "[inductor][cpp] epilogue support for gemm template (#126019)"
68fddebf84 : Revert "[inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)"
ed9951ace7 : Revert "[inductor][cpp] support bf16/fp16 gemm template epilogue fusion (#126545)"
4c2e671a3b : Revert "[Inductor][CPP] Add Min/Max with VecMask (#126841)"
5247446396 : Revert "[Inductor][CPP] Add ne with VecMask (#126940)"
60523fa674 : Revert "Move MKLDNN Specific IR to Separate File (#126504)"
ff63e8bac8 : [CI] fix doctest case by adding requires (#126855)
22712ba5c5 : Radam support the flag for "maximize" (#126765)
5cca904c51 : [3/N] Enable clang-tidy in aten/src/ATen/detail/ (#127184)
1c2e221e25 : CUDA 12.4 ARM wheel integration to CD - nightly build (#126174)
7763c83af6 : [5/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort torch (#127126)
4fdbaa794f : [Submodule] Remove deprecated USE_TBB option and TBB submodule (#127051)
6aa5bb1a76 : [inductor] Support persistent reductions for dynamic shapes (#126684)
bf2909b871 : Move MKLDNN Specific IR to Separate File (#126504)
39de62845a : [decomp] Fix default values missing from inplace `rrelu` decomposition (#126978)
06934518a2 : [AMD] Fix deprecated amdsmi api (#126962)
ee6cb6daa1 : Turn the mutation dependency of MutationOutput to weak deps (#127151)
f8c4c268da : [Inductor][CPP] Add ne with VecMask (#126940)
1ef4306ab1 : [Inductor][CPP] Add Min/Max with VecMask (#126841)
b8ee7d0cc1 : Change direct uses of MutationOutput to `mark_node_as_mutating` (#127149)
3817c4f9fa : Unify add_fake_dep and add_mutation_dep, as they're literally the same thing (#127148)
9bead53519 : [2/N] Fix clang-tidy warnings in aten/src/ATen/detail/ (#127168)
a28bfb5ed5 : [4/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort functorch (#127125)
35ea5c6b22 : [3/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort torchgen (#127124)
0dae2ba5bd : [2/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort caffe2 (#127123)
da141b096b : Enable UFMT on test/test_hub.py (#127155)
12d11fe4e5 : Revert "reset dynamo cache before each test (#126586)"
71eafe9e97 : Refactor dispatch logic to clarify control flow (#126402)
7642cdef25 : Improve fusable_read_and_write() (#127061)
6c79299a35 : Improve score_fusion_memory() (#127060)
ba3b05fdf3 : [1/N][Easy] fix typo for `usort` config in `pyproject.toml` (`kown` -> `known`): sort stdlib (#127122)
4a997de8b9 : [AOTI] support freezing for MKLDNN (#124350)
e7a42702f9 : generalize custom_fwd&custom_bwd to be device-agnostic (#126531)
c09205a057 : Deprecate device-specific GradScaler autocast API (#126527)
ef86a27dba : Mark test_set_per_process_memory_fraction serial (#127087)
0f67d38f0f : add TORCHDYNAMO_CAPTURE_DYNAMIC_OUTPUT_SHAPE_OPS (#127017)
84e59f052d : Made some minor improvements to flexattention perf + added more autotune configs (#126811)
9f11fc666a : [1/N] Fix clang-tidy warnings in aten/src/ATen/detail/ (#127057)
bd24991f46 : reset dynamo cache before each test (#126586)
8bd26ecf0b : [pipelining] test composability with DDP and FSDP (#127066)
c1d2564acf : [pipelining] Add grad test for interleaved schedules (#126931)
eaace67444 : [pipelining] do not check inputs for non-0 stages (#127136)
cc9a3412d4 : Implement a post_compile step for aot_dispatch_autograd (#126193)
52bcf120e5 : Make inductor config hashing more portable (#127022)
665637714f : Remove SparseAdam weird allowance of raw Tensor input (#127081)
29a1f62f23 : Replace c10::invoke_result with std::invoke_result (#124169)
9ef6f8dfc1 : Fix typo in inductor workflow for CUDA 12.4 jobs (#127121)
ed838793df : [pipelining] Remove qualname mapping (#127018)
5f15110499 : Update dispatch stub to make SDPA routing cleaner (#126832)
db9c6aeec6 : Revert "Skip test_memory_format_nn_BatchNorm2d in inductor (#125970)" (#126594)
b03dc3d167 : don't check memory format for empty tensors (#126593)
84f8cd22ac : [dynamo][TensorVariable] Support "if param.grad_fn" usecase (#126960)
bbeb0906c4 : Register creak_node_hook (#126671)
72f0bdcc22 : Remove torch._constrain_as_value (#127103)
d5bf3a98db : [inductor] Refactor indexing() into triton.py (#127047)
92433217cb : [inductor] Misc refactors (#126945)
1b6e3e3bcb : [inductor] Refactor part of IterationRangesEntry into triton.py (#126944)
83617017e0 : [dtensor][debug] add c10d allreduce_coalesced_ tracing to CommDebugMode (#127040)
59052071b7 : Disallow fusions of foreach and reductions (#127048)
023c1baf82 : Add global configurations to cache key (#126907)
c133665d4a : [CUDA] Parallelize upsampling OPS across the batch/channel dimension. (#127082)
b0871f9b33 : [DSD] Add a test to verify FSDP lazy initialization case (#127069)
7394ec7123 : [AOTI][refactor] Update DTYPE_TO_CPP mapping (#126915)
800f461b2a : [User-Written Triton] Handle the `scf.for` and `scf.while` case (#127065)
dce29a8a87 : Replaced same with assertEqual in two files (#126994)
c34f8c7f91 : Revert "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu (#125946)"
fdda9a22c3 : Performance parity for 32-bit-precision in FP16 ARM matrix-vector kernel using FMLAL instruction (#127033)
1d3aa08327 : Cleanup: use c10::ForceUnroll and constexpr variables in ARM FP16 matrix-vector fast path (#127016)
67d52d7fcb : [caffe2] Remove import_legacy.cpp (#126149)
5e69e11d09 : Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu (#125946)
9d4731f952 : [AOTI] Disable stack allocation for OSS (#125732)
72d30aa026 : [AOTI] Fix an int array codegen issue (#126801)
71f1aebe1f : [AOTI] Add more fallback ops (#126720)
f508cd6e00 : Update assigntome job (#127027)
3cb16ebf08 : [BE]: Update ruff to 0.4.5 (#126979)
4a09117d16 : Introduce ProcessGroupCudaP2P (#122163)
01f04230cf : [cond] support torch built in function as subgraph (#126909)
2d6d2dbc0b : [dynamo] make callable(nn_module) return True (#127026)
f2c6fddbe1 : Remove unnecessary const_cast and other fixes (#127054)
9117779b0a : [FSDP2] Added test for N-way TP and 1-way FSDP with CPU offloading (#127024)
87f79af24d : Fix map_location for wrapper subclass and device tensors that go through numpy (#126728)
4ff9113e3d : [MPS] Add `_weight_int8pack_mm` tests (#127041)
194950c0ca : Default TreadPool size to number of physical cores (#125963)
5ae9daa4a2 : Revert "[AOTI] support freezing for MKLDNN (#124350)"
2ac739cc80 : [DOCS] Fixed KLDiv example (#126857)
4105f91cfc : [inductor] fix an assertion for node debug str (#127021)
654afb6f3a : [AOTI] support freezing for MKLDNN (#124350)
43baabe9b9 : [inductor][cpp] support bf16/fp16 gemm template epilogue fusion (#126545)
4aa43d11f3 : [inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)
56c412d906 : [inductor][cpp] epilogue support for gemm template (#126019)
dd64ca2a02 : Inductor respects strides for custom ops by default (#126986)
f14cdc570d : Fix to #126656 (#127050)
47c976b904 : Revert "[AOTI] Add more fallback ops (#126720)"
f749c5def8 : Revert "[AOTI] Fix an int array codegen issue (#126801)"
fd9cdeed19 : Revert "[AOTI] Disable stack allocation for OSS (#125732)"
f95dbc1276 : Remove more of caffe2 (#126705)
0d1e228550 : [inductor][cpp] GEMM template (infra and fp32) (#124021)
505b8ceaa2 : Double registers per iteration in FP32-arithmetic FP16 ARM gemv kernel (#126877)
e8fa0f10c5 : Quadruple registers per iteration in ARM64 FP16 kernel (#126794)
f6366454db : Add privateuse1 in FSDP's sharded grad scaler (#126971)
2f6954c7c3 : Update the modification api (#127035)
894efcd0e9 : [DTensor] Supported simple replicate strategy for SVD (#127004)
70dc59c55f : Fix perf regression caused by #122074 (#126996)
cb6ef68caa : Propagate tokens in aotautograd (#127028)
99a11efc8a : Revert "Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu (#125946)"
cfb374dc73 : [BE] Create grad check util (#126991)
27594be3ed : [dtensor][be] remove repeated test in test_comm_mode.py (#127029)
89c638f9a5 : [dtensor][debug] add all_reduce_coalesced tracing to CommDebugMode (#127025)
575cb617db : Add compile time profiler for non fbcode targets (#126904)
e2f081837f : Lift jagged -> padded dense forward / backward kernels from fbgemm_gpu (#125946)
3f5b59eef4 : [codemod] c10::optional -> std::optional in caffe2/aten/src/ATen/DeviceGuard.h +117 (#126901)
95e5c994f9 : [Submodule] Clear USE_QNNPACK build option (#126941)
dfabae5b89 : Revert "[pipelining] Add grad test for interleaved schedules (#126931)"
2db13633e7 : [export] disable forced specializations, even when solvable with single var (#126925)
6eac3f45c7 : Add basic sanity checks for graph ops to cache key (#124745)
ff82e2e7cf : [traced-graph][sparse] propagate sparsity metadata into traced graph (#117907)
93ba5e7291 : Fix typo for input (#126981)
d11e44c0d0 : Reset grad state across unittests (#126345)
a31a60d85b : Change run_test.py arg parsing to handle additional args better (#126709)
09a73da190 : Downgrade requests to 2.31.0 for ios and android (#126989)
0d2ac9782b : [FSDP1] Update docstring to include device_mesh arg (#126589)
0902929d58 : [CUDA] [CI]: Enable CUDA 12.4 CI (#121956)
abf6d4e6bc : [pipelining] Add grad test for interleaved schedules (#126931)
c46b38bc75 : [pipelining] Generalize definition of MultiMLP for testing interleaved schedules (#126927)
6b39146b3f : [pipelining] Validate stage input/output shape/dtype (#126732)
9b91c91e64 : Don't add to replacements when guard is suppressed (#126210)
f8857cef45 : [Reland] Verify types in custom op schemas (#126861)
c921c5cc77 : [c10d] Print certain logs only on head rank of each node (#125432)
0625f92993 : [inductor] Run some tests on correct device (#126943)
abf40320dd : remove ax/ay arrays in fp16 ARM matmul kernels (#126793)
5dcf3d0f9e : use arith-by-dot-products approach for fp32 accumulation in fp16 matmul (#126746)
fd4fd24080 : add tail fixup for fp16 gemv transposed fast path (#126745)
b36e390b6c : Revert "Default XLA to use swap_tensors path in nn.Module._apply (#126814)"
6a06d36296 : Revert "Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)"
041e8d73fd : Separate non/strict functions in _export (#126718)
e5db6758c8 : [BE]: Use make_unique (#126966)
264155a8d7 : [DCP][AC] Add test for apply AC with FSDP1 (#126935)
bbe68a16b9 : [codemod][lowrisk] Remove extra semi colon from caffe2/caffe2/core/observer.h (#126976)
a63310eebc : TorchScript 2 ExportedProgram Converter (#126920)
1b29c16e5e : Revert "Introduce ProcessGroupCudaP2P (#122163)"
ab61309ab8 : Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)
eb41ed5d90 : Default XLA to use swap_tensors path in nn.Module._apply (#126814)
f0366de414 : [dynamo] Support __contains__ on obj.__dict__ (#126922)
25b8dbc3e4 : Revert "[inductor][cpp] GEMM template (infra and fp32) (#124021)"
45784cd229 : Revert "[inductor][cpp] epilogue support for gemm template (#126019)"
926327e8fc : Revert "[inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)"
30c9ca0899 : Revert "[inductor][cpp] support bf16/fp16 gemm template epilogue fusion (#126545)"
da7bf1d588 : [export] Fix unflatten with empty nn_module_stack (#126785)
a6155d23d1 : [easy] Delete dead code global (#126903)
cc61d03ac9 : Do not trace into triton/backends (#126083)
558c4413ce : add strobelight cli function profiler (#126693)
7b6d036c05 : [inductor][cpp] support bf16/fp16 gemm template epilogue fusion (#126545)
31412cb2f2 : [inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)
08f57b4bff : [inductor][cpp] epilogue support for gemm template (#126019)
9da7efa677 : [inductor][cpp] GEMM template (infra and fp32) (#124021)
aa6de76181 : Fix silu test for flexattention (#126641)
36e70572d0 : [Dynamo] make bytecode of resume function resemble natural bytecode (#126630)
2c90b99267 : Revert "reset dynamo cache before each test (#126586)"
b1e214ceb1 : Revert "don't check memory format for empty tensors (#126593)"
df4b7cb5f7 : Reapply "Skip test_memory_format_nn_BatchNorm2d in inductor (#125970)" (#126594)
4f14282e35 : Revert "[inductor][cpp] GEMM template (infra and fp32) (#124021)"
657d39e44c : Revert "[inductor][cpp] epilogue support for gemm template (#126019)"
205f08140e : Revert "[inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)"
2b57652278 : Update requests to 2.32.2 (#126805)
ebbd431d9e : [CPU] Bump `test_complex_2d` thresholds for LBFGS on `complex64` (#126358)
57c185b4c7 : [inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)
57108d9a49 : [inductor][cpp] epilogue support for gemm template (#126019)
2ac33a9f66 : [inductor][cpp] GEMM template (infra and fp32) (#124021)
e3db9ba37a : [FSDP2] Added test for manual reshard with `reshard_after_forward=False` (#126892)
203f2641e9 : [FSDP2] Used `CommDebugMode` for comm. count test (#126887)
69325e4de6 : [FSDP] Warned on wrapping `ModuleList`/`ModuleDict` (#124764)
b0e849870e : Change error message when nn module inlining is enabled for MiscTests.test_map_side_effects (#126444)
17186bd5b6 : [inductor] make conv lowering work with dynamic shapes (#126823)
14c5c753de : [inductor] use smaller RBLOCK for expensive reduction kernels (#126477)
ce6e36bf8b : Revert "Skip test_memory_format_nn_BatchNorm2d in inductor (#125970)" (#126594)
12dee4f204 : don't check memory format for empty tensors (#126593)
43f2f43eb3 : reset dynamo cache before each test (#126586)
08c260bc29 : [pipelining] Test schedules against manual stage (#126735)
6a539e80dd : Update descriptor fields to resolve fft precision issue (#125328)
5ccc634603 : [CI] Pin uv==0.1.45 for lintrunner (#126908)
a30baec0c3 : [Docs] Fix NumPy + backward example (#126872)
e4623de4cf : typing scheduler.py [2/2]: Apply types (#126656)
3591bce6c7 : Add usage explanation in torch.dot ducment (#125908)
0939b68980 : Support `dtype` kwarg in `_foreach_norm` (#125665)
d62b025efc : [TorchElastic] Option for sharing TCPStore created by rdzv handlers (#125743)
fde1e8af7a : [dtensor] implement distributed topk operator (#126711)
af633e4a7b : [dtensor] remove unused failed_reason (#126710)
a8195f257e : [custom_op] use new python custom ops API on prims ops (#124665)
db0b74bbc5 : [CUDA Caching Allocator] Allow division of 0 (#126833)
d4ec18bdad : Prevent partitioner from ever saving views (#126446)
51e707650f : Fix flexattention not realizing inputs before lowering (also refactored runtime estimation) (#126615)
3e826c477a : [pipelining] Add pipeline stage test (#126721)
403012b50a : [pipelining] expose APIs per pytorch rule (#126812)
599e684ad6 : [AOTI] Disable stack allocation for OSS (#125732)
ff617ab6c8 : [AOTI] Fix an int array codegen issue (#126801)
19cd4484ec : [AOTI] Add more fallback ops (#126720)
0d17aae242 : Teach FakeTensor to fill in item_memo when converting scalar CPU tensor (#126245)
86ad101370 : Enable pickling `torch._C.Generator` (#126271)
ed734178ab : Refresh OpOverloadPacket if a new OpOverload gets added (#126863)
082251e76b : fix invalid call to aoti_torch_tensor_copy_ (#126668)
2dd2699860 : Introduce ProcessGroupCudaP2P (#122163)
8a4597980c : Revert "Fix flexattention not realizing inputs before lowering (also refactored runtime estimation) (#126615)"
0f37fd06d9 : Revert "Prevent partitioner from ever saving views (#126446)"
d2cbbdee31 : Revert "Fix silu test for flexattention (#126641)"
4575d3be83 : [Quant][onednn] fix performance regression of depth-wise qconv (#126761)
aede940975 : [inductor] Fix cuda compilation under fbcode remote execution (#126408)
edea2b81b5 : [ONNX] Adds Support for Some Bitwise Ops in Onnx Exporter (#126229)
b516de8cac : [halide-backend] Add HalideCodeCache (#126416)
d937d0db0f : [SAC] fix ignored ops in eager mode to recompute (#126751)
3b0f6cce5c : [pytree] freeze attributes of `TreeSpec` (#124011)
6edf989e2f : [CUDA Caching Allocator] Round to nearest 512 bytes boundary if number of divisions=1 (#126830)
ae66c94eaa : Capture dtype in Flight Recorder (#126581)
7530cfe7e4 : [dynamo][flaky tests] test_conv_empty_input_* (#126790)
ac1f0befcf : Remove redundant serialization code (#126803)
608a11c496 : [pipelining] Retire PIPPY_VERBOSITY in favor of TORCH_LOGS=pp (#126828)
e3c96935c2 : Support CUDA_INC_PATH env variable when compiling extensions (#126808)
5fa7aefb49 : [pipelining] Do not print loss (#126829)
e6f655697b : [AOTI] Fix unsupported type of output=s1 (#126797)
a379ed6e98 : Fix SobolEngine default dtype handling (#126781)
28f29e074b : Dont mutate tensor stride in place in cudnn conv (#126786)
66c23cb021 : Add micro-benchmark framework and multi_layer_norm as an example (#126754)
636e79991c : [FSDP2] Fixed 2D clip grad norm test (#126497)
25ea32567e : [caffe2][1/n] migrate global Static Initializer (#126688)
10a5c1b26c : [Dynamo][TVM] Fix tvm backend interface (#126529)
1e818db547 : [torchbench] Fix torchao benchmarking script (#126736)
9dba1aca0e : [inductor] Relax type annotations for statically_known_* (#126655)
c08afbb3da : [inductor] Add kernel_code logging artifact (#126631)
4e921593a4 : [c10d]skip nan tests for lower versions of CUDA (#126701)
f6ffe32a9d : [AOTInductor] Automatic detection for buffer mutation and binary linking (#126706)
fed536dbcf : [DTensor][Optim] Add support for fused_adam and fused_adamw when lr is a tensor (#126750)
7ee74d986a : Enable UFMT format on test/typing files (#126038)
1cc9354cb0 : Unify the dtype to VecMask<float, N> in ops.masked (#126662)
fd7293db71 : Bump rexml from 3.2.5 to 3.2.8 in /ios/TestApp (#126455)
fe0a36fd7c : Fix a link in the compiler backend doc (#126079)
5325a6de64 : [dtensor] remove `output_` prefix from OpStrategy properties (#126359)
c73c9457aa : Add guard_size_oblivious to vector_norm (#126772)
97eef61474 : Don't assume compare_arg is fx.Node (#126771)
fc594ed219 : Remove lint from retryable_workflows (#126806)
4e6673e244 : Remove MAX_STACK_ENTRY from _build_table (#126583)
0c76018714 : [inductor] Don't inherit `__future__` flags from the calling scope when `compile` -ing generated modules (#126454)
7428fd19fe : Remove outdated options from setup.py (#125988)
b40fb2de59 : [AOTI] Fix a codegen issue when .item() is used for kernel arg (#126575)
5e2de16a6f : [AOTI] Codegen None as empty tensor (#126369)
ac51920656 : Reapply "c10d: add Collectives abstraction (#125978)" (#126695)
d8f5627a88 : prune back configs (#126570)
85fd76f76d : Add test coverage for fp16 matrix-vector specialized kernel (#126700)
bae3b17fd9 : Tweak a comment and fix spelling (#126681)
0756f9f5fd : Remove debug breakpoint (#126756)
140ab89c02 : typing scheduler.py [1/2]: Bug fix (#126610)
ac2c547838 : [TD] Upload names of failures to s3 for pytest cache (#126315)
4a7b46be3d : small changes to padding (#126716)
980f5ac049 : Revert "[Quant][PT2E] enable qlinear post op fusion for dynamic quant & qat (#122667)"
b36e01801b : [3.12, inductor] re-enable AsyncCompile.warm_pool for 3.12 (#126724)
faa72dca41 : Remove QNNPACK submodule (#126657)
7d34cfd28a : Update torch-xpu-ops pin (ATen XPU implementation) (#126744)
4b23c4fc5d : [Pipelining] Clean up function names in 1f1b schedule (#126582)
8c9d332953 : [c10d] fix excepthook crash on exc after destroy_process_group (#126739)
e363a8a222 : Revert "[pipelining] Add pipeline stage test (#126721)"
dc2560f073 : [Pipelining] Add debug logs for batch p2p ops (#126539)
b96d9090d2 : [C10D] make get_node_local_rank() accept fallback_rank (#126737)
c1b90a4e8a : [Dynamo] Treat integers stored on nn.Modules as dynamic (#126466)
a83e745356 : [BE] split seq_id to collective_seq_id and p2p_seq_id (#125727)
5f64086d08 : [NT][SDPA] Bump tolerances for `test_sdpa_with_packed_in_proj_cuda_bfloat16` (#126356)
40cc616909 : Fix caching allocator of out-of-tree device is destructed before the … (#126677)
51c07f9f69 : [dynamo] Allow asserts to fail (#126661)
d777685ef9 : Script for choosing template configurations (#126560)
d30cdc4321 : [ROCm] amdsmi library integration (#119182)
b948b1ad7a : [pipelining] Add pipeline stage test (#126721)
31ba6ee49b : Traceable wrapper subclass support for deferred runtime asserts (#126198)
82b4528788 : [cudagraph] fix verbose graph logging (#126694)
4644611b14 : [cprofile] log manifold link instead of raw data to trace_structured (#126451)
b85f9d7fa2 : Add symbolic_shape_specialization structured trace (#126450)
cd3a71f754 : Fix silu test for flexattention (#126641)
da2292ce6b : Prevent partitioner from ever saving views (#126446)
831efeeadf : Fix flexattention not realizing inputs before lowering (also refactored runtime estimation) (#126615)
14dc8d4f63 : Protect codecache against cache failures (#126696)
6f1935b0b5 : doc: `torch.utils.data.Sampler`: `__len__` is optional (#125938)
74b053d7c4 : Pass model path to observer (#126503)
acfe237a71 : Fix C++ compilation error for tensor array in abi_compatible mode (#126412)
3d4f1c3083 : [export] Make error name private (#126715)
d28868c7e8 : Change skipIfs to xfails in test_mps.py for test_isin (#125412)
8bca0847c2 : Revert "[TD] Upload names of failures to s3 for pytest cache (#126315)"
2813f0672a : fix huggingface models input issue in torchbench (#126579)
11c2d127ec : [AOTInductor] Add config to allow buffer mutation (#126584)
2068dadbe8 : [torchbench] Add torchao to PT2 Benchmark Runner (#126469)
022adf8c5e : Fix bug for comptime.get_local for cells/closures (#126637)
f9de510121 : [dynamo] Graph break on set_num_threads (#126623)
89c1cfe144 : [export] Allow modules to be created in the forward (#125725)
655038687a : [TD] Upload names of failures to s3 for pytest cache (#126315)
8c38d0cd64 : [inductor] Fix edge case in JIT vs. AOT fusion after finalizing MultiTemplateBuffer (#126622)
7aa853a54e : [CI] Install sccache on XLA build job (#126117)
3642e51ea5 : [Quant][PT2E] enable qlinear post op fusion for dynamic quant & qat (#122667)
2f53747ec6 : Speedup bf16 gemm fallback on ARM (#126592)
cb69c51b6f : Revert " Updated test_graph_optims and test_graph_scaling_fused_optimizers to use new OptimizerInfo infrastructure (#125127)"
7100a72950 : [inductor] Fix ops.scan for non-commutative operators (#126633)
d9c3485146 : Revert "c10d: add Collectives abstraction (#125978)"
53f73cdeb6 : Revert "Add symbolic_shape_specialization structured trace (#126450)"
5ad2f10034 : Revert "[inductor] Load python modules using importlib (#126454)"
cf35a591b9 : Updated test_graph_optims and test_graph_scaling_fused_optimizers to use new OptimizerInfo infrastructure (#125127)
5fb11cda4f : [compiled autograd] Better cache miss logging (#126602)
be67985bd7 : [compiled autograd] log in cpp using python logger (#126483)
574ae9afb8 : [Submodule] Remove third-party onnx-tensorrt (#126542)
853081a8e7 : Replace torch.library.impl_abstract with torch.library.register_fake (#126606)
5ea956a61f : Update hf_BirdBird periodic-dynamo-benchmarks results (#126414)
c4dfd783f4 : UFMT torch.utils._sympy.functions (#126553)
7dae7d3ca5 : Remove unnecessary implementations from MockHandler (#126511)
71b6459edc : Revert "[Dynamo] Treat integers stored on nn.Modules as dynamic (#126466)"
e3230f87aa : Cached required_fw_nodes creation (#126613)
abc4b66124 : Forward fix the failed new test from D57474327 (#126596)
ad67553c5c : Updated test_torch.py to use new OptimizerInfo infrastructure (#125538)
99af1b3ab0 : Refactor variables / function names related to non-strict export (#126458)
6bb9d6080d : [Dynamo] Treat integers stored on nn.Modules as dynamic (#126466)
a44d0cf227 : [Traceable FSDP2] Change from register_multi_grad_hook to per-tensor backward hook (#126350)
d4704dcacc : Map float8 types to uint8 for allgather (#126556)
bf099a08f0 : [2/N] Non-Tensor: Scalar Support: Add scalar to the cache for eager-through-torch.compile (#124070)
c1767d8626 : Faster(?) FP16 gemv kernel (#126297)
b98decfc38 : [halide-backend] Refactor codegen/triton.py into codegen/simd.py (#126415)
74b99438f2 : [Submodule] Remove third-party CUB (#126540)
1191168c45 : [pipelining] Follow improvements in export.unflatten (#126217)
661ecedbd0 : gitmodules: switch cpp-httplib to https (#126580)
224f2bef9f : [C10D] Add __repr__ to P2POp class (#126538)
bcee6f708a : [Pipelining] Fix 1f1b schedule (#126419)
41fb4bcc73 : [AOTI] Flag to include aoti sources when building lite interpreter (#126572)
2863c76b1f : [torch-distributed] Make log directory creation idempotent (#126496)
0d5ba547ec : Tool for scouting exportability in one shot (#126471)
54bc55c515 : Remove dist_ prefix from TORCH_LOGS shortcuts (#126499)
93844a31b3 : Fix aarch64 debug build with GCC (#126290)
d54c28e7fc : Added error checks for invalid inputs on thnn_conv2d (#121906)
173b1d811d : [dynamo] Sourceless builder - ordered dict and re.pattern (#126468)
faa26df72e : [inductor] Load python modules using importlib (#126454)
d7de4c9d80 : Fix issue of lowering nn.linear ops with kwargs (#126331)
c26f6548f9 : [AOTI] config target platform (#126306)
09fd771485 : Disable vulkan test batch_norm_invalid_inputs (#126571)
bed1c600bb : Experimental prototype for converting torch.jit.trace modules to export (#124449)
30b70b1a63 : [ROCm] enable faster_load_save for Fused_SGD (#125456)
d782e43464 : Revert "[FSDP2] Fixed 2D clip grad norm test (#126497)"
95b2766864 : [BE][Ez]: Use NotADirectoryError in tensorboard writer (#126534)
90a5aeea79 : [distributed] Add cpp-httplib to pytorch (#126470)
eb0b16db92 : Initial implementation of AdaRound (#126153)
875221dedf : Revert "Fix aarch64 debug build with GCC (#126290)"
f89500030b : Revert "Remove redundant serialization code (#126249)"
de42af4b00 : Add coms metadata to execution trace (ET) (#126317)
6931f781c2 : [quant][pt2e] Allow multi users without output observers (#126487)
ecd9a4e5c3 : Enable FX graph cache for huggingface and timm benchmarks (#126205)
66dc8fb7ff : Allow tensor subclasses and add `torch.serialization.add_safe_globals` that allows users to allowlist classes for `weights_only` load (#124331)
31ea8290e7 : Workflow for uploading additional test stats on workflow dispatch (#126080)
6bcf15669e : [inductor] fix unbacked case in pointwise + reduction vertical fusion (#125982)
7e9a037b47 : [Perf] Vectorize more dtype for int4mm (#126512)
81277baa0c : Remove removed ruff rule TRY200 (#126256)
402170b22f : Early return in _recursive_build if obj is a Tensor (#125639)
7e166e8057 : [optim] Fix: wrong ASGD implementation (#126375)
078e530446 : Delete refactored function, move changes over (#126407)
ab307a8992 : Default to env variable instead of config value for precompile parallelism (#126333)
3f28906311 : [FSDP2] Fixed 2D clip grad norm test (#126497)
55033ab43a : Update ops handler documentation some more (#126480)
4ed93d6e0c : [Submodule] Remove zstd dependency (#126485)
6c503f1dbb : save the reciprocal of weights for welford_reduce (#125148)
8619fe6214 : variable search spaces for gemm autotuning (#126220)
45f2d09452 : [Quant][Inductor] Enable lowering of qlinear-binary(-unary) fusion for X86Inductor (#122593)
2edaae436a : Fix cummax and cummin lowering for empty case (#126461)
15ca562f86 : [DTensor] Turn on foreach implementation for clip_grad_norm_ for DTensor by default (#126423)
f9a7033194 : Refactor partitioner and clean it up (#126318)
5756b53dd8 : [XPU] call empty_cache for dynamo tests (#126377)
9edf54df4d : [dtensor] refactor view ops to use OpStrategy (#126011)
a0df40f195 : Add dist_pp shortcut to TORCH_LOGS (#126322)
4b2ae2ac33 : c10d: add Collectives abstraction (#125978)
a8c41e0678 : dont pad 0 dim mm inputs (#126475)
88582195fd : [FSDP2][Test] Fix _test_clip_grad_norm (#126457)
1a27e24ff5 : Make inductor scheduler graph extension configurable (#125578)
da1fc85d60 : Add symbolic_shape_specialization structured trace (#126450)
d2f5a8ac99 : [doc] expose torch.Tensor.xpu API to doc (#126383)
776b878917 : [easy] Fix typing for `map_location` docs in torch.load (#125473)
697ed6f5b3 : [DeviceMesh] Supported N groups in `from_group` (#126258)
1018a68e31 : [export] Delete predispatch tests (#126459)
8bb7a2f46d : Fix documentation for register_fake_class (#126422)
762ce6f062 : Add Lowering for FlexAttention Backwards (#125515)
337830f657 : Revert "[inductor][cpp] GEMM template (infra and fp32) (#124021)"
4a5ef0b793 : Revert "[inductor][cpp] epilogue support for gemm template (#126019)"
59ca0d8c14 : Revert "[inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)"
cb3b8cd0d3 : Use object identity for deepcopy memo (#126126)
55628624b8 : [c10d] add pg_name and pg_desc to logger (#126409)
796dff7147 : Import MKL via //third-party/mkl targets (#126371)
62403b57b9 : Add prefix option to CapabilityBasedPartitioner (#126382)
c226839f5c : Eliminate some C++11 checks (#126308)
f17572fcf6 : add 3.12 inductor CI tests (#126218)
93524cf5ff : [compiled autograd] clear compiled_autograd_verbose once test is done (#126148)
cef7756c9c : [inductor] Clear cache on ctx manager exit (#126146)
4cd4463c1c : [compiled autograd] Fix LoggingTensor flaky test (#126144)
4b7eee3450 : Print export warning only once in capture_pre_autograd (#126403)
e9719aec30 : Fix strict default value in StateDictOptions (#125998)
f5abf28e41 : [Traceable FSDP2] Use DTensor.from_local() in _from_local_no_grad when compile (#126346)
4f1a56cd42 : Switched from parameter in can_cast to from_. (#126030)
82c66bc41a : Make 'pytest test/inductor/test_memory_planning.py' work (#126397)
866ca4630c : Don't install inplace_methods on MockHandler, not needed (#126398)
8f0c207e18 : xpu: implement xpu serialization (#125530)
da9bf77f0a : [Dynamo] Support SET_UPDATE (#126243)
aab448e381 : Remove redundant serialization code (#126249)
5862521ad1 : [onnx.export] Cache SetGraphInputTypeReliable (#124912)
a0429c01ad : [BE][FSDP] Remove unnecessary warnings (#126365)
0dd53650dd : [BE][FSDP] Change the logging level to info (#126362)
9fbf2696d7 : [AOTI][refactor] Add aoti_torch_item as a util function (#126352)
0332b5812e : [AOTI] Support InplaceBernoulliFallback in the ABI-compatible codegen (#126183)
5792bc3c3e : [AOTI] Refactor some fallback op util functions (#126182)
c5f926ab87 : [AOTI][torchgen] Support at::Generator via C shim (#126181)
a55d63659a : Add 2nd shard to ROCm trunk workflow for core distributed UTs (#121716)
f155ed6bf2 : [ROCm] amax hipblaslt integration (#125921)
14d8e3aec0 : Add distributed/_tensor/test_attention to ROCM_BLOCKLIST (#126336)
91bf952d10 : Fix aarch64 debug build with GCC (#126290)
ab07867084 : [FSDP2] Supported `set_all_reduce_gradients=False` for HSDP (#126166)
c2f8c75129 : [Reopen] Upgrade submodule oneDNN to v3.4.2 (#126137)
691af57fbc : Fix broken link of scikit-learn (#120972)
4333e122d4 : [Traceable FSDP2] Add all_gather_into_tensor out variant (#126334)
d61a81a9e7 : Fix lint failures coming from #126035 (#126378)
0716f75cfb : Revert "Add Lowering for FlexAttention Backwards (#125515)"
cdcba4dee5 : Revert "Fix lint failures coming from #126035 (#126378)"
58378f1224 : [Doc] Add deprecated autocast comments for doc (#126062)
08aa704d0c : [1/N] Non-Tensor: Scalar Support: Enable aot compile to support aten operations with scalar input like alpha (#124177)
5fa1f4c6e4 : Fix lint failures coming from #126035 (#126378)
e661a42428 : [Add sliding window attention bias] (#126061)
8dc6f455bd : [ez] fix exported diff mismatch (#126357)
6e6e44bdcc : Generate runtime asserts when propagate real tensor is used (#126287)
c860df5a9d : [c10d] Add an option for NAN check on every collective (#125726)
0214711f05 : Add mode to MemoryDep to track atomic accumulates (#123223)
d0dfcd2c34 : fix the device type for with_comms decorator (#125798)
bcc8d25e47 : [dynamo] Delete extra testing of cpp guard manager (#126343)
95b9e981c3 : Add Lowering for FlexAttention Backwards (#125515)
ae6fdfa539 : Revert "Initial implementation of AdaRound (#126153)"
e3c5d1b7d7 : Revert "[optim] Fix: wrong ASGD implementation (#125440)"
175c18af81 : Initial implementation of AdaRound (#126153)
927e631dc2 : [inductor][cpp] bf16/fp16 gemm template computed with fp32 w/o epilogue fusion (#126068)
059b68fbdf : [DeviceMesh] Fix hash and eq not match (#123572)
1876f0fec1 : [dynamo][nn module guards] Use TENSOR_MATCH, and not ID_MATCH, for numpy tensors (#126246)
315389bfed : Revert "Remove deprecated _aminmax operator (#125995)"
6dca1e639b : [TEST][Dynamo] fix test_deviceguard.py (#126240)
7844c202b2 : [inductor][cpp] epilogue support for gemm template (#126019)
6065a4d46e : Revert "Switched from parameter in can_cast to from_. (#126030)"
5efad4ebc1 : [inductor] [FX graph cache] Ignore unbacked symints in guards expression (#126251)
bd63300bae : [dynamo][inline-inbuilt-nn-modules] Add and update test_modules.py for nlining work (#126327)
7aa068f350 : [dynamo][inline-inbuilt-nn-modules] Change test to not depend on id of mod instance (#126314)
0f8380dd65 : [Inductor][Flex-attention] Make num_head support dynamic (#126342)
f9d107af66 : [optim] add fused_adagrad support for CPU device (#124905)
51e9bb8783 : [Export] Allow ExportedProgram to take empty decomp table (#126142)
b3f1882d17 : [easy][dynamo][inline-inbuilt-nn-modules] Change test to check for params (#126316)
06d6bb4eba : Switched from parameter in can_cast to from_. (#126030)
3ae118204e : Make propagate_real_tensor more safe (#126281)
b2d9b80fba : Also remove compile_time_strobelight_meta frame when generating stack (#126289)
9c9d0c2fab : Add VariableTracker.debug_repr (#126299)
a7af53cec1 : [FSDP2] support fully_shard(model_on_meta, cpu_offload) (#126305)
bcdd0b11ca : [dynamo][inline-inbuilt-nn-modules] Bug fix - Only unspecialized nn modules (#126303)
5cab7a7662 : [dynamo] fix https://github.com/pytorch/pytorch/issues/93624 (#125945)
56a89fcc08 : [dynamo] graph break on issubclass call with non-const args (#125943)
100e3c1205 : [dynamo] graph break on const dict KeyError (#125882)
b5432ad5ab : Fix triton codegen main do_bench_gpu import error (#126213)
2c5ad9a3d7 : [optim] Fix: wrong ASGD implementation (#125440)
5af4b49285 : Remove expected failure in `test_eager_transforms.py` (#125883)
0ca8bf4b41 : Enable UFMT on `test/test_datapipe.py` (#124994)
18cbaf6dbf : Remove Caffe2 python code (#126035)
ad7316b4c2 : [CI] Add AMP models in inductor cpu smoketest for performance (#125830)
f0d34941dd : Improve Storage copy_ size mismatch error message (#126280)
d15920a7d0 : Warn SDPA users about dropout behavior (#126294)
31d22858e9 : [onnx.export] Avoid unnecessary copy of debug_names (#123026)
90461d4986 : [dynamo] Detect monkeypatching on nn module forward method (#126203)
c8130dfe84 : [FSDP2] allow meta tensors during loading state dict and cpu offloading (#126267)
d74c89fb10 : 2 rocm shards on trunk.yml (#125933)
d2b2727d66 : Fix public api allowlist logical merge conflict (#126321)
e2d18228fe : [DCP] overwrites existing checkpoint by default (#125877)
b659506d82 : Parametrize test_dim_reduction (#126292)
2086f91c4c : Revert "Fix aarch64 debug build with GCC (#126290)"
2978f07d0e : [FSDP] Fixed docs for inter/intra node PG helpers (#126288)
af9acc4168 : Fix public binding to actually traverse modules (#126103)
a961e1ac83 : Fix aarch64 debug build with GCC (#126290)
196661255f : Enable UFMT format on test/test_utils.py (#125996)
44efeac24e : Beef up error message for pending assert failure (#126212)
26f6f98364 : Forward fix failures for torch.export switch to predispatch (#126081)
0d49c5cb06 : Skip padding cost of fusible/planable inputs (#125780)
4fb5d69b3b : Reland '[Inductor] GEMM shape padding improvements (#118522)' (#125773)
a91311e7c2 : [easy] Remove aot_config from pre_compile returns, rename fw_metadata in post_compile (#125854)
44e47d5bd0 : [onnx.export] Avoid linear loop over symbol_dim_map (#123029)
490d72e4e6 : CMake: Improve check and report of Magma (#117858)
f91cae461d : [Dynamo] SizeVariable supports hasattr (#126222)
c1dc8bb858 : [DTensor] Turn on foreach implementation of optimizer for DTensor by default (#123394)
4ab2c399be : Faster int8 quantized (#125704)
719a8f42bf : Foward fix lint after #125747 (#126295)
9689532106 : [CI] 3 procs non cuda (#125932)
718bb9016f : Revert "[Memory Snapshot] Add recordAnnotations to capture record_function annotations (#124179)"
f9dda37a74 : [export] Cover more cases to copy tensor conversions. (#125628)
c53e0ac7ba : [Inductor] Generalize new introduced device-bias code. (#126261)
ba3cd6e463 : Enable UFMT on `test/test_fake_tensor.py`, `test/test_flop_counter.py` and some files (#125747)
187aeaeabf : [Memory Snapshot] Add recordAnnotations to capture record_function annotations (#124179)
ee8c1550d6 : [AOTI][torchgen] Add a few more fallback ops (#126013)
563aa3e035 : [AOTI][torchgen] Update NativeFunctionsGroup mapping (#125962)
a0aaf56114 : Don't assert about pending when we are peeking (#126239)
8f30f367d0 : [CUDA] [CI] Add cu124 docker images (#125944)
f060b0c6e6 : [inductor][cpp] GEMM template (infra and fp32) (#124021)
79655a1321 : Add force_disable_caches to the docs (#126184)
2d35b4564a : [audio hash update] update the pinned audio hash (#126248)
03467b3fed : Add a few "warm start" smoketest runs to CI (#125955)
c87c39d935 : [benchmarking] Suppress csv creation on cold-start phase of --warm-start-latency (#125953)
9f0d3f71c9 : Adjust number of repeats when using --warm-start-latency benchmark flag (#125917)
0dedc1aff2 : Update CUDA out of memory mesage with private pool info (#124673)
5178baefa9 : use statically known instead of suppress guard for ddp stride propagation (#126234)
e74a6f487a : [Inductor] Skip test_nll_loss_backward for intel GPU. (#126157)
b950217f19 : Support third-party devices emit a range for each autograd operator (#125822)
bdea4904c1 : Add some type annotations to python stream and event classes (#126171)
7dfd2949d7 : Add missing type uint16, uint32, and uint64 to TensorHash in LTC. (#125972)
dfab69fdf1 : [Inductor] Flex attention supports dynamic shape (#125994)
1485621ccb : [BE] Abstract out strings to top of file (#125640)
24c30096e3 : Set dtype when copying empty tensor (#126124)
51ed4c46cf : [Dynamo] Supports torch._C._is_any_autocast_enabled (#126196)
314ba13f01 : Support trace_subgraph in _MakefxTracer (#125363)
73d8c10f13 : Refactor make_fx to better support hop subgraph tracing (#125267)
470723faea : [pipelining] Add manual pipeline stage (#126123)
dccb5cf7ca : Allow for trailing 'a' in sm_arch (#126185)
92eb1731d4 : [torch/distributed] Bugfix: wait for all child procs to exit before c… (#125969)
e5cce35c21 : Remove use of USE_C10D (#126120)
fd48fb9930 : Revert "[CUDA] [CI] Add cu124 docker images (#125944)"
b6d8b256e6 : Revert "[inductor][cpp] GEMM template (infra and fp32) (#124021)"
c1aa05f80c : [easy][dynamo] Use disable_dynamo for torch.manual_seed (#126192)
c6f3f1d239 : [reland][dynamo][disable] Move disable impl to its own __call__ method (#126191)
41fabbd93f : Fanatically correct real tensor cloning for propagate_real_tensors (#126175)
328b75d1a0 : Enable epilogue fusion benchmarking internally (#125455)
e046c59e5b : [export] handle aliased/unused params for unflattening (#125758)
4d063c8e8a : Do not print escape characters in xdoctest logs (#126219)
b522e65056 : Check pointer for null before deref in Aten/native/sparse (#126163)
bbdbfe3661 : Reland add `write_record_metadata` to PyTorchFileWriter (#126087)
1ba852c1dc : Fix torch elastic test SimpleElasticAgentTest.test_restart_workers br… (#126002)
3a58d40b93 : [Profiler] Clean up deprecated use_cuda by default (#126180)
534c34b320 : Fix copy-pasted docs, reversing the load and save description (#125993)
2973c9bb88 : [export] add SchemaCheckMode testing for pre-dispatch export, OpInfo (#125481)
534ddfa619 : Move compute unbacked bindings call to track_tensor_tree (#126168)
54131ecb25 : Remove redundant spaces in CMakeLists.txt (#126042)
7ed67cdbcc : Add compile time smoketest for foreach (#126136)
a8eac0efa8 : fix: unknown CMake command "check_function_exists" (#126165)
4a8db9d45b : [dynamo] reset grad state in aotdispatch test, add failing trace functional tensor test to dynamo (#126113)
f6a00a8032 : [inductor] Add abs to index_propagation (#124616)
c30ea3387b : [inductor] Improve stability of scaled softmax (#124119)
352a893b0c : Fast standalone symbolize for unwinding (#123966)
5fb4a766b8 : [CUDA] [CI] Add cu124 docker images (#125944)
ed327876f5 : [codemod] `c10:optional` -> `std::optional` (#126135)
b55f57b7af : [codemod][lowrisk] Remove extra semi colon from caffe2/c10/core/SymNodeImpl.h (#123055)
023f05cfe6 : Allow symbols to reach conv_layout stride argument #125829 (#126116)
0e6462f69a : [pipelining] Consolidate test models into a registry (#126114)
38b8b614a2 : [ROCm] Implement forward AD for miopen_batch_norm (#125069)
1a28f731dc : [optim] Merge the pyi files into py files of optimizer (#125452)
a00a99e801 : [profiler] Report strides in json trace (#125851)
50c3d58734 : [onnx.export] Cache AllGraphInputsStatic (#123028)
3cba50e478 : [quant] Make per_group and per_token quant match torch.fake_quantize (#125781)
3892e86c94 : [FSDP2] Changed grad acc test to use data parallel ref model (#126161)
4ded666535 : [FSDP2] Factored out `MLPStack` to de-dup code (#126070)
48f98bcdfc : [TD] Enable test removal on most default configs + distributed CUDA for everyone (#125931)
db3b38202b : Improve dead code elimination of unnecessary int arguments (#126074)
9df2f8687f : cprofile every compile id [x/y] to keep consistent with tlparse (#125659)
2e4d011195 : [FSDP2] Used `CommDebugMode` in grad acc test (#126067)
20aa7cc678 : Revert "[c10d] Add an option for NAN check on every collective (#125726)"
aac215a824 : SymInt-ify unsqueeze_copy (#125976)
ed76079af3 : Revert "Remove Caffe2 python code (#126035)"
d1f254dce8 : Add a cache mechanism to accelerate torch.compile-for-eager (#116368)
b3a8a3cbab : Fix typos in `torch._dynamo.config.py` (#126150)
680a568721 : Fix typo in HistogramKernel.cpp (#126156)
9a1bf39c66 : Remove Caffe2 python code (#126035)
9641a8db25 : [optim] deprecate `LRScheduler.print_lr` (#126105)
37596769d8 : Autocast `vdot` (#125697)
556e4ec6c9 : [FSDP] Add device in pin_memory argument (#119878)
9dec41b684 : add avx512 specialization for vec_shuffle_down (#125147)
8bf9e99cea : [pytorch][cuda] Some speedup on depth wise convolution 2D forward (#125362)
1370f3a00d : [inductor] make mm template work with non-contiguous input (#126106)
60b00b4b4d : [CI] Upgrade intel support packages for XPU (#125655)
c312cd8890 : add simple test for nccl metadata (#125317)
b805d3cbcb : Modify device check in capturable optimizer to support more devices (#124919)
e0e9d3ed79 : make sure device mesh can be imported from torch.distributed (#126119)
2ae65b72ff : [dtensor] early return for _split_tensor (#125810)
bdaa9b2981 : [Dynamo] Wrap set as SetVariable and support isdisjoint by polyfill (#126046)
bc9587778c : update pointwise cat heuristics (#125772)
d0f3ae8e67 : [Doc] Update Intel GPU Support on README (#126001)
812534d27e : Skip two LR schedulers with eager memory leaks in compiled optim tests (#126133)
9a2beb862d : Permit trivial solves for floating point equality in ShapeEnv (#125915)
2ba102f689 : Implement native support for float inputs in Dynamo and ShapeEnv (#125325)
04877dc430 : Update context manager for cudnn (#126122)
aeb9934bda : [AOTI] Fix a problem in https://github.com/pytorch/pytorch/pull/125730 (#126110)
71467abc44 : Changes to compile with 3.13 (#126033)
ef7d8ad6af : Use source code hash instead of torch version (#126092)
3c4058cf18 : Add master cache disable switch for inductor (#126084)
c712b0f8a3 : [export] Fix runtime assertions to add call_function (#125878)
6a5acd91c3 : add shape check for rrelu_with_noise (#122870)
6db3271007 : [c10d] Add an option for NAN check on every collective (#125726)
1e47c7b11b : [inductor] enable software pipelining on AMD devices (#125858)
ec7f2b2626 : [DCP] adds type safety to str filtering in EmptyStateDict (#126082)
bd3cbdba2f : Revert "[optim] add fused_adagrad support for CPU device (#124905)"
36e6f3b339 : [caffe2] Make all get_backtrace() implementations lazy (#125750) (#126064)
c098cd0cbb : Eliminate a C++11 code pattern in pimpl.h (#126069)
b9e7b35912 : Remove caffe2 from more build files (#125898)
b620231378 : Fix nested fqn discovery (#125957)
9e1826deff : [torchbind] Add inductor support (#123709)
4d8fa7df40 : Fix four misspellings of "its" in documentation (#125681)
7f1d5aba93 : [FSDP] Use generic device handle instead of cuda (#121620)
3183d65ac0 : use shutil.which in _find_cuda_home (#126060)
637074983e : [inductor] Make load_mask() codegen determinstic (#126017)
82edc8b5d5 : [NT] Make NestedTensor register as having symbolic sizes/strides (#124687)
96bdb7a0fb : in `test_foreach.py` pacth `KINETO_LOG_LEVEL` to silence profiler log (#126048)
7899034282 : [fbcode] remove xcode_public_headers_symlinks (#125966)
56b271fd7a : `STRONG_CONSTEXPR` -> `constexpr` (#125872)
f0c8b93487 : Make wrapIndexOnce check async, avoid DtoH sync on index_put_ (#125952)
c0b7b56cf4 : [xla hash update] update the pinned xla hash (#126052)
afda6685ae : fixed typo in documentation (#125974)
1c3fe84033 : [optim] add fused_adagrad support for CPU device (#124905)
4b88a5bd0b : Remove AnalyzeTemporaryDtors from clang-tidy config (#125985)
34910f87f0 : [BE]: Update ruff to v0.4.4 (#125031)
ae9a4fa63c : [ROCm] enforce ROCM_VERSION >= 6.0 (#125646)
0116ffae7f : Remove deprecated _aminmax operator (#125995)
037615b989 : [inductor][cpp] GEMM template (infra and fp32) (#124021)
02093b6c6a : Keep track of `ViewMeta` with symbolic inputs. (#125876)
6ffc94fa62 : Fix cpp node instance check (#125875)
07d6ab5aa2 : [pipelining] Add pipeline schedules (#125975)
f19e07b056 : Memoize local_scalar_dense calls, refactor all memos (#125623)
0935b3d794 : [dynamo] Turn on guard_nn_modules (#125202)
0dda3389e5 : [AOTI][torchgen] Minor improvements to C shim torchgen (#125928)
2df114e6be : [AOTI] Fix 'int' object is not subscriptable (#125731)
3f11958d39 : Remove FFMPEG from CI scripts (#125546)
d49abf039a : Revert "update pointwise cat heuristics (#125772)"
d5470749bc : Revert "[dynamo][disable] Move disable impl to its own __call__ method (#125486)"
a174c536f8 : GPT-fast benchmark: adding memory bandwidth and use A100-40GB as target (#125881)
b24ad7eab5 : Enable dynamo traced test_param_group_with_lrscheduler_goes_right_direction (#124544)
e72ef4f22a : Fix capturable enablement conditions (#125826)
b833fc0ecb : Tighten fallback conditions for compiled optim (#125825)
1115a25c36 : Add obc counter for TS migration. (#125986)
7e92a2c1c9 : Revert "Allow symbols to reach conv_layout stride argument (#125829)"
e9c5f1cb80 : [MPS] Improve _int4pack_mm (#125983)
9f4bb4d6bc : Enable UFMT format on test/test_throughput_benchmark.py test/test_type_hints.py test/test_type_info.py (#125906)
9dee3ef919 : Ingest gpt-fast benchmark results from S3 to Rockset (#125891)
c1690a3e12 : Fix the link to torch.compiler_custom_backends. (#125865)
0a9c6e92f8 : Skip test_memory_format_nn_BatchNorm2d in inductor (#125970)
da7ced6e8c : S390x binaries (#120398)
d8708a35f6 : [pipelining] Add _PipelineStage runtime (#125729)
c6e5d0d2e6 : Revert "Memoize local_scalar_dense calls, refactor all memos (#125623)"
01fb9676b8 : Enable UFMT format on test/license.py test/logging.py (#125737)
a5c93a6899 : Speed up _extract_graph_with_inputs_outputs (#125937)
4457cd9a30 : [Distributed] [7/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124987)
31946c10d0 : Add missing parameter doc of Adagrad (#125886)
ee804d256b : Revert "[caffe2] Make all get_backtrace() implementations lazy (#125750)"
45628e3b66 : Remove Caffe2 python (#125143)
b08072f645 : [CI] Relax per proc memory by a little bit, mark a test as serial (#125960)
c61bfd24c1 : [PT2] Register fake impl for quantized embedding bag ops (#125884)
538877d204 : [AOTI] Fix convolution_backward (#125730)
aca0807101 : [AOTI] Use random inputs to autotune the backward pass (#125291)
9e85d3d830 : Add "accurate" FlopCounter implementations for NestedTensor SDPA kernels (#125776)
4dad988822 : Revert "Remove vision packages from CI scripts (#125546)"
0e853327cb : Implement wrappers for aot_dedup and aot_synthetic_base (#125764)
c520929c83 : add typing in torch.optim.lr_scheduler (#125556)
59f2e716cc : Test foreach functions with all dtypes except qints (#125527)
10c17b13d7 : fix cudnn attention check (#122391)
bef7d650c4 : [CI] 3 procs on sm86 (#125598)
ff98731803 : Speedup `convert<float>(Vectorized<half>::loadu(ptr, 8))` on ARM (#125889)
f25c7c9699 : functionalize storage resizing, minimal ppFSDP traceable forward (#122434)
f42ea14c3f : Remove vision packages from CI scripts (#125546)
d7fe3c4123 : [RELAND] Switch default behavoir of export IR to be predispatch (#125860)
4996a3fda3 : [BE][Easy] Remove usage of deprecated `ast.Str`, `ast.Ellipsis` and `ast.NameConstant` (#125912)
53a64e446f : `STRONG_NODISCARD` -> `[[nodiscard]]` (#125873)
5f58cf65d1 : Refactor other post compile wrappers in forward functions (#125610)
cc4da72b47 : [caffe2] Make all get_backtrace() implementations lazy (#125750)
31372fa842 : Support generic stream/event on CUDA/HIP backend (#125757)
946b96fd54 : [AOTI] Add a failing test case (#123235)
f87fbfdb01 : GPT-fast benchmark: remove Embedding layer from model size (#125901)
d81db9c1df : GitHub workflows / Dynamic rollout (#125680)
2ed17e0b1e : Remove binaries using caffe2 functionality (#125885)
013722bcb8 : Allow symbols to reach conv_layout stride argument (#125829)
fcbf2b61e6 : Memoize local_scalar_dense calls, refactor all memos (#125623)
8be4104cf3 : Update conda to latest version for Docker release builds (#125887)
d14d6127f6 : [BE] Rename `macos-12` to `macos-13`/`macos-` jobs (#125859)
2ad794550a : Support generic stream/event on XPU backend (#125751)
d19d932183 : update pointwise cat heuristics (#125772)
978b572652 : Add registration API for torch.compile-eager (#121387)
c9a258e474 : [export] handle constant aliasing for export (#125509)
fd816bf630 : Add script for removing Inductor dependencies from Inductor generated code (#125811)
3267814d53 : [inductor] refactor: device dispatch inside do_bench (#125736)
13545fe68a : [export] Don't create a new fake mode if dynamo tracing (#125185)
23e71ffd82 : Remove unused caffe2 subdirs (#125818)
350a3ed82f : Fix unused variable 'kEps' (#125870)
477612c0f6 : [dynamo] Clear GenerationTracker on dynamo reset (#125855)
52fad83335 : [onnx.export] Avoid linear look up in env for exist_in_env (#124909)
37d2ecd123 : Only log toplevel torchscript calls. (#125714)
e43d656921 : FakeTensor speedup: minor cleanups (#124224)
a08be4b705 : FakeTensor speedup: Split cache_key so we only validate once (#124223)
6a8b1da18d : FakeTensor speedup: Delay formatting stack trace until it's actually asked for. (#122911)
eaaf0f3299 : Print capture_pre_autograd_graph warning only once (#125848)
20271f0a3b : Drop caffe2-linux-jammy-py3_8-gcc11-build (#125857)
ae5e2ab92e : [dynamo][fsdp] Use Tensor match for FSDP modules (#125827)
0d4fdb0bb7 : Revert "[ROCm] amdsmi library integration (#119182)"
966ebd2e24 : Add --warm-start-latency to benchmark harness (#125353)
ee00349780 : [dynamo][logs] move recompilation reason within compile_id scope (#125805)
a7575e8bd5 : [dynamo] Use correct source for custom getattr (#125828)
7c00635125 : [CI] Move gha artifact download before xml parsing for test stat uploads (#125609)
1ecea513b6 : Fix common_methods_invocations example inputs to _efficient_attention_forward (#125788)
6fd745255e : Revert "add uuid in cudaDeviceProperties (#125083)"
74a0ef8f8c : Enable UFMT format on test/test_package.py test/test_per_overload_api.py (#125834)
ed8a560845 : Update Release Calendar for 2.3.1 and 2.4 releases (#125794)
85447c41e3 : [ROCm] amdsmi library integration (#119182)
0e419b9146 : Fix graph partitioner and make runtime assertion work with submodules in export (#125793)
98821b3d92 : Disable various flaky tests in test_foreach (#125783)
ae20f15941 : [dynamo] trace through nn parametrize (#125771)
6ea226b99c : Fix DDP no_sync when find_unused_parameters is True (#124193)
8fb3ff2a4e : Revert "[profiler] enable CUPTI range profiler in build (#125685)"
26b942c4fc : [C10D] Document destroy_process_group usage (#122358)
257d40ba2e : Docker release - push nightly tags only for amd64 builds (#125845)
3ccf107f01 : [export] remove upgrader. (#125625)
0241ed9331 : Fix sparse fake tensors detach (#125679)
7e86a7c015 : Lint: Update older-python test to 3.6 (#125843)
b8a706a321 : [EZ][BE] Use `untyped_storage` in tests (#125838)
4e29e80bf0 : Run MPS tests on MacOS Sonoma (#125801)
b9588101c4 : [Inductor][Quant] Fix PT2E Dynamic Quant regression (#125207)
c337395cdb : [Inductor][Quant] Change the QConv output scale name (#124246)
d83ab88f81 : [Inductor] [Quant] Enable lowering of quant per tensor and refactor quant pattern (#124041)
96c8447001 : change error message to avoid failing when nn modules inlined (#125612)
da2f4bbc33 : remove empty partition (#124920)
e5766f02d0 : [onnx.export] Avoid dict <-> unordered_map implicit copies (#123063)
c59a2369be : [fsdp2] Accomodate FSDP2 to accept parent mesh > 2 (#125778)
aaa2f93a4f : Add meta for _embedding_bag_dense_backward and _embedding_bag_per_sample_weights_backward (#125785)
ed48ea9997 : [AOTI] Refine the C shim autogen mechanism (#125589)
0bde9c08ef : Prevent rendezvous shutdown on worker restarts (#124819)
6c4f43f826 : Decouple most Caffe2 components from the build systems (r-barnes) (#125711)
fdff9920f6 : [pytorch] fix blasLt on windows (#125792)
902a74c1d6 : [caffe2] Lazily symbolize backtrace in c10::Error (#125787)
ea3f625e32 : Revert "[Inductor] [Quant] Enable lowering of quant per tensor and refactor quant pattern (#124041)"
ca579c177b : Revert "[Inductor][Quant] Change the QConv output scale name (#124246)"
97509c8eb2 : Revert "[Inductor][Quant] Fix PT2E Dynamic Quant regression (#125207)"
19bab45e67 : [Inductor] Add SDPA pattern for OOB GPT2 models (#125562)
3da949b0fb : [Inductor][Quant] Fix PT2E Dynamic Quant regression (#125207)
d474d79420 : [dynamo][disable] Move disable impl to its own __call__ method (#125486)
9ba9f7fa82 : [Inductor][Quant] Change the QConv output scale name (#124246)
33e6791645 : [Inductor] [Quant] Enable lowering of quant per tensor and refactor quant pattern (#124041)
1b1b18a7a4 : Add LRScheduler Composability E2E Tests (#125653)
8c9c169b48 : LRScheduler composability kernel tests (#125383)
69eeef0727 : Update LRScheduler to handle tensor LR (#123753)
7b36b4a765 : Fix user warning for tensor LR (#123752)
0ea6ffc613 : Swap warning counter to flag in LRScheduler (#123751)
78a1693266 : [Inductor Intel GPU backend Upstream] Reuse inductor test for Intel GPU (PART 1) (#122866)
4dd33a1c2b : Better core binding in torch.backends.xeon.run_cpu when launced from torchrun with --nproc-per-node (#123711)
8def2e92f2 : [inductor] autotune benchmark support for cpu (#125159)
96a5698408 : Fix torch.profiler Schedule Function (Function Event only) (#125510)
ff090c6937 : [dynamo] support tracing nn.Module @property that accesses closure cells (#125724)
93f3d561f9 : [dynamo] don't make nn parametrized Modules unspecialized (#125710)
e71207b729 : Fix infinite recursion in API BC test (#125706)
04bf7713e8 : [c10d] Reduce test time by reusing ProcessGroup (#125648)
8f27c7f181 : [sparse] Fix type-dispatch errors (#124777)
1b8891a31d : make torch._check understand Eq commutativity (#125629)
346343e6b5 : [DeviceMesh] Make _validate_tp_mesh_dim support 3D (#125763)
e457fdcd81 : Revert "[caffe2] Lazily symbolize backtrace in c10::Error (#125682)"
7e0edafe86 : [compiled autograd][dynamo] improve lifted autograd.Function.backward handling and fallback to pseudo-eager (#125661)
de8ce3be20 : [TD] Heuristic based on file path (#125477)
17ab7f77c2 : [Kineto] Update Kineto Submodule Hash (#125621)
255a3afbf1 : [dynamo] don't LOAD_FAST local context variables in modified bytecode (#125719)
0feca5e341 : Increase Python version for Docker builds (#125782)
19a9de114a : Forbid subclassing _TensorBase directly (#125558)
afea237935 : [minimizer] Create block traverse mode in minimizer for graph aware debugging (#125613)
603d1e6049 : [DTensor] allow numel 1 tensor operand to be implicitly replicate DTensor (#125073)
445a0c01da : Retry: Low mem max_pool2d_with_indices (#122832)
005a12722d : Remove duplicated nodes in dfs_iter_find_cycle (#125585)
3f36145db2 : add uuid in cudaDeviceProperties (#125083)
f4b2d50fd7 : [export] disable_forced_specializations (#124949)
74b1674860 : Use nvidia/cuda:CUDA_VERSION-devel-ubuntu22.04 as base for official Docker release (#125770)
faf0015052 : [dtensor] run transformer sdpa in dtensor (#122997)
efece3f142 : [dtensor] add op support for memory efficient attention (#122996)
08be8ec8a9 : [dtensor] improve new factory strategy (#122995)
affd7a9789 : Get PT2 Cutlass backend working under fbcode [take 2] (#125688)
87f86fd586 : Fix multi template debug trace (#125703)
e28d9947a1 : AsyncCollectiveTensor: prevent wait_tensor() calls on graph inputs from getting DCEd (#125677)
5d97c22845 : AOTAutograd: use info not debug logging for ViewAndMutationMeta (#125676)
6f619cc727 : [ez] functorch/test_vmap and test_dataloader to run in parallel (#125597)
bd2635578b : [vision hash update] update the pinned vision hash (#125521)
2e237fcd70 : Revert "[inductor] add cpp builder code. (#124045)"
c5b6c696c1 : Start refactoring runtime wrappers (#125595)
13462ecd27 : Update preserve_node_meta to reset torch.fx.traceback.current_meta (#125500)
8cad88e1f3 : [BE]: Improve exception typing. Remove NOQAs (#125535)
82b7b59d2a : [inductor] Check if n is the input tensor of conv_pointwise (#125119)
d17be10df1 : make torch.amp.autocast more generic (#125103)
320af5eaa6 : Compute bounds for the variables created during codegen (#123100)
15a9770225 : [DSD] Implement broadcast_from_rank0 option for optim state_dict (#125339)
0542fd485f : [DSD] Implement broadcast_from_rank0 option for model state_dict (#125338)
88fbe79550 : [DSD] Fix set_optimizer_state_dict() changes the parameters with some optimizers (#125708)
469383755f : [inductor] add cpp builder code. (#124045)
08f6ef0e1c : [caffe2] Lazily symbolize backtrace in c10::Error (#125682)
a1a22a22d5 : [ROCm] Parameterize the triton build dir (#125420)
50073127b5 : [tp] add some test for shard output layouts for rowwise parallel (#125713)
9a2375b6b7 : [dtensor] improve some pretty print in op schema (#125695)
65fec7bbbf : [dtensor] make sure meta tensor random op does not alternate rng state (#125693)
38baa02a40 : Meta kernel for _pack_padded_sequence (#124794)
2deea9e6e9 : [profiler] enable CUPTI range profiler in build (#125685)
9fedf41b60 : Dockerfile should set the syntax directive to v1 (#125632)
58e045d03c : [MPS] Fix strided ELU op (#125692)
21aaac47e7 : [torchelastic] add timing events to different stages of rendezvous (#125636)
a3d97f6ce4 : [ONNX] Benchmark onnx export w/ ort fusions (#125700)
baf36f6d11 : Pad bandwidth bound split k kernels on a100 (#125650)
ba27548679 : [MPS] Remove in place views (causes too many crashes) (#124895)
3fb53bb6a7 : [MPS] Fix strided mse_loss (#125696)
939b701d3a : SymInt-ify mem-efficient attention forward op signature (#125418)
bb6ba31250 : [DCP] Adds storage metadata, and passes it during the save path (#124772)
244d93039d : Remove fbobjc_configs from xplat (#125586)
8b4d62009d : [EZ] Update jinja2 to 3.1.4 (#125698)
8be4c1bc2f : [export] Add metadata for nodes insert_deferred_runtime_asserts (#125414)
8024e72326 : [export] Warn on capture_pre_autograd_graph. (#125602)
021ff7fd77 : [BE] Explicitly handle all types `c10::isSigned` (#125637)
51f25c08f4 : Fix 'Could not infer dtype of SymBool' on torch.tensor call (#125656)
e3d5afc60a : Enable dynamo'd test for 116499 (#123469)
0f02e0aa39 : Disable dynamo on functional optims if capturable=False (#123619)
0fd1fc17c3 : [MPS] Fix `abs` for complex types (#125662)
2163956208 : [TGIF][HHC][Sharding] add device_ordinal to Subgraph (#125616)
b356a0de86 : Add support for multiple flexattention calls in a single compile (#125516)
d4225c55d9 : [fx] Prioritize runtime assertions ops (#124213)
2f79a18324 : Revert "[inductor] add cpp builder code. (#124045)"
c5e04a4479 : More accurate is_bw and prompt parents cleanup for ModuleTracker utils (#125634)
fdfef759a6 : Add userbase library dir to windows dll search path (#125684)
7864d287a1 : [inductor] add cpp builder code. (#124045)
b23b6e7108 : Ensure that vmap is restored properly if an exception is thrown during frame eval (#122074)
196a0b1722 : Add Inductor micro benchmark workflow (#125450)
5fd0b6e5f7 : Revert "add uuid in cudaDeviceProperties (#125083)"
f7d48302b6 : [DSD] Fix to remove non_persistent buffer in distributed state dict (#125337)
a89177936c : [DSD] Correctly handle _extra_state (#125336)
9f1d3eebf5 : Update PyTorch ONNX Exporter maintainers (#125630)
6f1e3a6bf7 : [DCP] Always flatten mapping even if no tensors present (#125335)
790f43c315 : Run test_inductor_distributed with run_test (#125647)
22767e4791 : [DCP] Always create requests for non-tensor objects (#125334)
9782439277 : [Profiler] Do not emit a warning when using CPU profiler (#125654)
7863e04615 : Back out "Get cutlass_library import working under fbcode" (#125606)
71dc15742c : [DSD] Improve the performance of distributed state_dict (#125501)
0e57bbb6d7 : Set timeout for C++ tests (#125517)
1b396d69cb : Revert "[CUDNN] Remove defunct cuDNN V8 API build flag (#120006)"
848fce35b5 : [CI][ez] Don't retry when it says don't retry (#125643)
0de9ce9bb3 : [export] Fix serialization of empty torch artifact (#125542)
b37bef9b13 : Use triton_key instead of triton.__version__ for hash (#125624)
8573d9551a : Fix to preserve tensor wrapper subclass dtype through multiprocessing serialization (#125615)
b29d77b54f : Separate arm64 and amd64 docker builds (#125617)
5dee46266a : Fix & optimize open device registration test. (#125572)
f0c6d6100b : Enable dynamo-traced optimizer peak memory tests (#124543)
5033d3ba6d : Disable fb_memcache for MTIA (#125658)
e72936c27c : [PT2D] Fix the circular import issue (#125618)
acafabaa29 : Rename TorchDynamo -> Dyanamo in the dynamo tutorial doc (#123431)
058e28108f : [inductor][cpp] support int64 vertical vec reduction (fix #124821) (#125563)
a60fa960e5 : refactor: extract `get_lr` warning (#125545)
461ffaaaf3 : [dynamo] support torchbind object input (#124978)
c165a8e71d : Enable UFMT on `test_decomp.py`, `test_expanded_weights.py` and some files (#125117)
48b6c8dbc3 : [Inductor] log fusion failure due to index mismatch (#124986)
f35fe4eaf1 : add uuid in cudaDeviceProperties (#125083)
4332fc4095 : [export] Allow constant attr mutation (#125424)
c0c2f6156a : Updated docs to add the error case for torch.multinomial Issue#125388 (#125495)
3407899ba1 : DTensor Fused ADAM (#125369)
65fc3c31bc : [BE] Delete unused `AT_FORALL_SCALAR_TYPES_AND[456]` (#125607)
3411d54811 : fix loading optimizer options from archive (#125215)
ee4cafa098 : [CUDNN] Remove defunct cuDNN V8 API build flag (#120006)
b98c689261 : Better repro command: include test class + fix paths for py3.8 (#125498)
22bcfc25ef : Initial implementation of Inductor FX Graph Remote Cache (#124669)
05bd7fe3eb : Nested Tensor + AOTI test (#125513)
1b3fd83ab2 : [TD] Enable TD on AVX related configs (#125482)
8c74162074 : Reduce the number of layers for mixtral moe model to adapt CI memory limitation (#125608)
7ddf57e9f5 : xfail codegen dynamic if the test is xfailed (#125573)
373a00df9a : [dynamo] better file open method in funcname_cache (#125435)
cbb3791891 : [pipelining] Add tests for tracing frontend (#125449)
bdaa7bbd7d : [dynamo] fix potentially missing _torchdynamo_inline from ScriptFunction (#125447)
ad9a27f3e5 : Move autocast op list to autocast_mode.h to make sure other backends can reuse it. (#125114)
2a42c40791 : Revert "Compute bounds for the variables created during codegen (#123100)"
9cd4bcb2c4 : [FSDP] mark pre_backward_hook unserializable (#125464)
7d10b06e1a : Allow building for sm90a (#125523)
ee0c47349c : Revert "Upgrade submodule oneDNN to v3.4 (#122472)"
af144139df : Remove some pre-c++17 cruft (#125590)
daf1eb44bc : try to fix the warning in distribute_tensor (#125476)
7ffa5558ee : Revert "[FX] Update type hints in `torch.fx._compatibility.py` (#125469)"
bb668c6468 : Compute bounds for the variables created during codegen (#123100)
3827810453 : [export] suggest constant dim values in dynamic shapes fixes (#125458)
6ebec38453 : Add ciflow/linux-aarch64 to auto labeler on mkldnn PR's (#125599)
e30e6d321f : [MPS][BE] Introduce MetalShaderLibary class (#125550)
7bf6ed01ac : [inductor] Remove symbol exports in C shim for Windows (#125472)
b6bcd09173 : Get rid of tabular and sizes, beef up verbosity of output graph (#125507)
71bec453b1 : [xla hash update] update the pinned xla hash (#124599)
60efb1060a : Make codegen dynamic test faster (#125569)
24b64fc482 : [HOP][inductor] Support pytrees as associative_scan input (#122137)
68a1f787c8 : [inductor][cpp] move some common cpp utils to cpp_utils.py (#125152)
fc183f0bde : [Inductor] Properly package target info for triton.compile (#125553)
1dd42e42c4 : [BE]: Try TCH autofixes on torch/ (#125536)
ccbac091d2 : Revert "Add `write_record_metadata` to PyTorchFileWriter (#125184)"
1b1d593c8c : Don't call item() into torch.scalar_tensor uselessly (#125373)
ecd62746e3 : Also pull size/stride info from example_value (#125505)
d1a3271a55 : [ez]2->3 shards for asan slow (#125499)
94c4855e75 : [Inductor max autotune] Make autotune_select_algorithm more robust (#124928)
58d8388ed3 : Remove Inductor IRs for legacy functional collectives (#124992)
235b4d6ec2 : [FX] Update type hints in `torch.fx._compatibility.py` (#125469)
30c9fd96f6 : [FX] Add missing forbidden mutation methods in immutable collections (#125468)
7c59720ba7 : [comm] Ensure ncclComm is not aborted before checking exception (#124466)
99e4909677 : Remove assertion for cat target_func (#125540)
650a248d3e : Rename is_unspecialized to pass_arg_as_tensor, add comment (#125496)
12da7ee58f : Don't use wrap_fx_proxy_cls for wrap_symint (#125494)
617e473da5 : Split wrap_symint out of wrap_unspecialized_primitive (#125483)
10f673541e : [Inductor cutlass backend] Enabled nonzero workspace and Cutlass StreamK (#125406)
f70bd71a48 : [FSDP2] Computed grad divide factors at runtime (#125484)
dba689bbfd : Revert "[FSDP2] Computed grad divide factors at runtime (#125484)"
8a0529e986 : [2/2] Remove Caffe2 db and distributed code (#125533)
7f0c5eb023 : Added some more flex attention tests (#125487)
6d30803d64 : Revert "[Inductor] Properly package target info for triton.compile (#125241)"
084d818e71 : Revert "try to fix the warning in distribute_tensor (#125476)"
a32ad828dc : Revert "Don't call item() into torch.scalar_tensor uselessly (#125373)"
f04c8471a4 : [dynamo][prepare for nn module guards] Guard nn modules for a few benchmarks (#125324)
5ba777f46e : [guards][cpp-guards] Optimize NN module getattr guards (#124522)
76a26a885d : Add module tracker (#125352)
1a20b4ef3f : [dynamo] handle inactive nullcontexts across graph breaks (#125518)
6f70d22277 : Extend torch.utils._sympy.symbol for more Inductor symbols (#125419)
5cd7c75bd9 : [pipelining] Add tracing frontend (#125448)
2b4fe183db : Don't call item() into torch.scalar_tensor uselessly (#125373)
5ef50d75f8 : Don't short circuit if shape is same (#125188)
83845a7c78 : [1/2] Remove caffe2 db and distributed from build system (#125092)
2b41e1d6fc : try to fix the warning in distribute_tensor (#125476)
b62e89c1b8 : [dynamo] Do not turn on record relay with TORCH_COMPILE_DEBUG (#125488)
ff061baa94 : [comm_mode] adding some initial c10d ops to CommDebugMode (#125475)
d4727fd4eb : [TD][ez] Better check for is pr or not (#125485)
0302dc68bf : [Reland] Fakify script object inputs and attributes for non-strict ex… (#125490)
bfd5bb0c44 : [c10d] only PG0 should dump when monitoring thread timed out (#125356)
d325c55896 : Add CUDA paths to `CODEOWNERS` (#125409)
8a1af95b09 : [Inductor] Properly package target info for triton.compile (#125241)
9aa7699185 : [FSDP2] Computed grad divide factors at runtime (#125484)
996bb74077 : [FSDP2] Added HSDP grad acc tests and some minor changes (#125479)
b96b1e8cff : [Distributed] Add P2P versions of *object_list operations (#124379)
f2ab96a57e : [dynamo] fix crash when context manager is passed to a function (#125321)
59abd1dccb : Fix lint after PR 122611 (#125512)
4abcf36dde : Make c10::Error empty backtrace as an optional argument (#122611)
a783fef990 : [AOTI] Add a missing mypy ignore (#125508)
2b5ae2611e : s390x: use runtime detection for vectorization support (#123936)
5503c29357 : Introduce torch.utils._sympy.symbol (#125395)
1a578df57c : [FSDP2] Added test to show rank 0 broadcast for HSDP replicas (#125431)
c941fee7ea : [CPP extention] Baton lock is called regardless the code version (#125404)
645baef05d : s390x: remove workaround for sleef issue (#124730)
b1a7455b99 : [Inductor cutlass backend] Fix cutlass_utils.get_max_alignment() for strided layouts. (#124930)
a988b4ed76 : [AOTI] Generate mul_Scalar instead of mul_Tensor (#125397)
e84a5b6cc0 : [AOTI] Add missing std::move for constant args (#125329)
d6052a35d4 : [RFC][FSDP2] Added `register_fsdp_forward_method` for user fwd methods (#125394)
52f9128a0d : [AMD] Fix cutlass path in inductor (#125463)
e10b2ba357 : Script for compiling count + time of test at file granularity (#125322)
12a69afa6d : [export] Fix deserializer node meta handling. (#125454)
30610251ec : [MPS] And naive quantized intmm and `.gputrace` capture hooks (#125163)
a99ada5b27 : call `super().__post_init__` in `ForeachFuncinfo.__post_init__` (#125457)
79af814369 : [FSDP] Added private `_unshard` API (#124304)
ca98c2a932 : inductor: Add Conv3d support (#124361)
489b4586e9 : [optim]fix ut and sgd kernel (#124904)
bebefcf845 : Driver folder check (#117548)
e5cc7ada67 : skip triton template precompilation in 311.0-3.11.7 to workaround 311 cpython bug (#125446)
dd92637f44 : Add `write_record_metadata` to PyTorchFileWriter (#125184)
4c84789743 : [vision hash update] update the pinned vision hash (#123227)
071ee40793 : [dynamo][nn module] Check for duplicate tensors in register_attr_or_module (#125421)
ef757a5c00 : [export] use tree_map for _flatten_dynamic_shapes (#125415)
394ec2da30 : Remove GPU Check from Basic Chrome Trace test (#125430)
8706da2bad : [dynamo][cpp-guards] Improve recompilation reason logic for NO_TENSOR_ALIASING guard (#125439)
d156cb2e12 : Fix mem size mismatch from split/chunk in const folding (#125199)
a40d6df448 : [MPS] Native nonzero implementation (#125355)
e15da7856c : [MPS] Fix overflow in cumsum when dtype is bool (#125318)
acac7aa70f : [CI] Unskip Linalg tests on ARM (#125377)
d18a6f46d0 : Adding Compare in torch.utils.benchmark documentation (#125009)
4440d0755a : Support custom layout call under torch dispatch mode (#125379)
7551755cec : Update tolerance for flex fp32 (#125444)
3b5f6b10ad : [Inductor] default block size for head_dim = 256 for flex attention (#125380)
5c7b71dccf : [DCP] Adds strict option to DefaultPlanner (#123869)
2c8237c6aa : [ATen-VK] Resolve compiler_flags to allow Mac build (#125361)
55c705b602 : [dynamo] add trace_bytecode logging artifact (#125360)
a0e2f62edd : Revert "Include support for the scatter gather cuda kernels to allow for comp… (#124809)"
b1b03992d0 : Merge the pyi files into py files of optimizer (#125153)
edad82fc90 : Add private helper for determining which version of FA2 closest matches kernel version (#123653)
0199ce8d6c : [pipelining] Add microbatch split and merge utils (#125273)
1657f7e262 : [Doc] Update docstrings for torch/random.py (#125265)
fc76764a56 : Always pass down kernel_file and grid as string (#125384)
dae574c713 : Don't make replacements for i variables (#125398)
4f62494bf9 : [DCP] Move async logic into filesystem for better encapsulation (#124944)
2bbfb70831 : ignore unsupported module from flop counter (#125346)
799f1460af : [DCP] Provides default AsyncStager (#124939)
3741fb3680 : [DCP] Introduce async staging extension points (#122965)
da991fac22 : [ROCm][CI] upgrade CI to ROCm 6.1 (#124300)
1eb7b8eb60 : [PT2D] Ensure the trace rules are correct with distributed (#125333)
e93b57a570 : Add propagate_real_tensors mode for unbacked (#125115)
fb1bfe1156 : Get cutlass_library import working under fbcode (#125257)
8046de3512 : [Inductor cutlass backend] Remove epilogue nodes from Kernel call (#124929)
a13a0a2479 : [dynamo][easy] Simple fixes to prepare for nn module guards (#125316)
0b70026d3b : Do not pass none to has_pending_mutation (#125359)
aa7be72cc5 : Convert `ForeachFuncInfo` to `dataclass` (#125001)
da5d2d9b3e : Hotfix: restore CPP guard string in structured trace (#125303)
fff7a31800 : fix torchdeploy issue on sharddim_alltoall op (#125344)
f59ce798f9 : [ROCm] TunableOp for scaled_mm (#123987)
5ea54839c9 : Make min(stride, strides[idx]) in collapse_view_helper size oblivious (#125301)
b119e1bcc2 : Fix refcount handling for dtype, layout and memory format (#125271)
4731130ea8 : Add a code comment about torch._check_is_size in tensor_split (#125292)
a9309502af : Revert "Refactoring to remove unused variable (#125252)"
b03fb49ed8 : Revert "[dynamo] use lazy disable dynamo for manual seed (#125196)"
9e24c263f9 : Include support for the scatter gather cuda kernels to allow for comp… (#124809)
f1f142c44f : Revert "Fakify script object inputs and attributes for non-strict export (#124239)"
9022f131b5 : [inductor] switch assume_aligned_inputs to False (#124336)
c281d3a0cb : Enable UFMT on test_indexing&test_view_ops (#125112)
9043ccafdf : Require nnz==0 in sparse meta tensors (#125221)
46f326eff5 : explicitly reset stderr/stdout in precompilation (#125289)
6f5f405b05 : [ncclx] Rename NCCL-EXP to NCCLX (#125238)
6cfb55dd5d : Add a variable for some testcases. (#124708)
c451d108da : Implemented isin_Tensor_Tensor_out for MPS backend (#124896)
506eda538b : Fix windows build error not propagating (#125306)
599a2e25f1 : Reland "make sure dynamo doesn't inline DTensor __new__ or __torch_dispatch__ (#123347)" (#125288)
9e9ba61fde : AOTAutograd: force tangents to be contiguous when subclass inner tensor is noncontiguous (#124400)
5173cbe260 : fix FakeTensor creation on noncontiguous subclasses (#124399)
7058563078 : support as_python_constant on PlacementClassVariable (#124398)
2d794bcb8a : Delete NegateSource handling, I think it's dead (#125311)
746da8755c : switch tests from constrain_as* to torch._check* (#125253)
dbcf123105 : Upgrade submodule oneDNN to v3.4 (#122472)
c99617706e : Add lintrunner as dev dependency (#125304)
197612c84c : ProcessGroupWrapper support custom backend (#124447)
b4ccc615cd : Do exact type match on int so we don't pick up bool here too (#125305)
a216d87c6b : [export] Fix for unflattening modules with duplicate tensors (#125192)
af67704dcc : [privateuse1] _refs.masked_fill support privateuse1 when value.device.type is cpu (#124835)
07422fd0b9 : add missing space to first cmake append (#125294)
bf6acf9add : [ROCm] Add extra cuda_to_hip_mappings.py (#125108)
c8d2a55273 : Intel GPU: specify the tolerance for torchbench models (#125213)
e3627d05e7 : [CMake] Add NVPL BLAS/LAPACK option (#125268)
39eb5d4fa4 : Add Sanity Testing to Pytorch Profiler (#124773)
4d410155b2 : Revert "Include support for the scatter gather cuda kernels to allow for comp… (#124809)"
e16f1ee4cc : [ez][CI] Move test_modules and test_schema_check off CI_SERIAL_LIST (#125193)
8fde9a988c : CI: Extending unit test coverage for aarch64 linux (#125255)
b094622bc9 : Refactoring to remove unused variable (#125252)
e09f98c705 : Include support for the scatter gather cuda kernels to allow for comp… (#124809)
e421f1b4a8 : docs: `torch.nn.utils.rnn`: docs improve (#123559)
a2715144c3 : Add NEON-accelerated int8mm for bfloat16 (#125290)
9fbb4dfc12 : Fix AttributeError when doing mock patch for FileTimerServerTest.test_expired_timers (#125144)
47ba7a76e2 : [ATen][CUDA][AMP] Fix dtype mismatch in linalg_vector_norm (#125175)
c59cce38a9 : [MacOS][CPUInductor] Fix includes to system Python (#125285)
52142192d4 : [pipelining] Add stage backward function (#124958)
aead440c62 : [Inductor] Further tune block size for templated attention on H100 (#125286)
c511aed27f : [Meta Tensor] fix meta inplace set storage (#123880)
c3c4465f50 : Add has_guarded_code to CompilationMetrics (#125279)
081f41a920 : Use BFloat16 in distributed quantization when supported by NCCL (#125113)
14857e71c2 : Export `torch.jit.interface` from `torch.jit` package (#125209)
75a8e9ee77 : [inductor] better cache clearing in fx graph cache tests (#125280)
787afc5180 : Add LR as tensor tests (#123750)
1c905f1be3 : [EZ][BE] Don't import pathlib twice (#125260)
abaa717350 : [FSDP2] Removed logic to save and remove pre-backward hook handles (#125269)
37c993546d : [dynamo][guards] Bug fix for set_export_info (#125275)
4d5f8070c4 : add a decomposition for select_scatter (#124426)
e9ce23985f : [TorchScript] attach target function to OSError when source can't be found (#125248)
8f31988088 : [C10D] Document 'tag' limitation for nccl send/recv (#125278)
74e8817311 : [inductor] Minor fixes to various tests before enabling fx graph caching in OSS by default (#125258)
0506e95433 : [dynamo] support inactive context managers across graph breaks (#125203)
1b9d353e4f : [Torch] Add more mm kernel choices (#125000)
a59dc14877 : Keep node.meta when fusing subgraph (#125261)
0ee5c14163 : [PT2][Optimus] Read the patterns from the config instead of hard-code passes (#125136)
25691558d9 : Change templated_attention -> flex_attention (#125251)
a7023b89f8 : Use torch._check for safety assert in _reshape_view_helper (#125187)
1bcbc9158f : Add CUDA 12.4 workflows (#121684)
d6c713884a : [dynamo, 3.12] xfail refleaking tests due to buggy getattr_static (#125062)
c12c85e919 : Revert "[benchmark][cudagraph] Explicitly call aten.div with CUDA denominator for cudagraphs (#119729)" (#125246)
9fec26e231 : Fix typo under torch/_inductor directory (#119658)
ca0f070065 : Revert "Add registration API for torch.compile-eager (#121387)"
00dd4d55e3 : Refactored _remove_auto_functionalization_from_graph_helper (#125180)
ea347fa6ce : Revert "Fix & optimze open device registration test. (#124712)"
c1a3fcfa47 : [pipelining] Add util and debug facilities (#124875)
75fa54a9d1 : Revert "Convert `ForeachFuncInfo` to `dataclass` (#125001)"
56e4cbc69d : Fixes two build problems on ROCM 6.1 + Ubuntu 22.04 (#118216)
90258e8369 : forward fix preferred blas backend and windows CI (#125080)
04a241947a : [dtensor] delete the old unused mesh_alltoall (#124879)
00df0d3e94 : [dtensor] implement shard dim change with alltoall (#124872)
02e7800b3f : [Torch][Timer] Skip expired timer logging for empty expired timers (#125039)
3946fa1c12 : Fix bug in get_update_constraint (#125194)
07958c538c : Setup initial testing harness and cache key generation for AOTAutograd Cache (#124642)
8242fb62a7 : [quant][pt2e] Fix conv-bn weight + bias per channel QAT (#125208)
05be0fb62d : [minimizer] Add exclusion function to minimizer base (#124504)
80046c315b : Add templated attention BLOCK_M & BLOCK_N default size for different head_dim (#125139)
04c6424fbf : Remove caffe2 image and video (#125045)
a03b9a2189 : fix: typo (#125226)
d699ade0cb : [dynamo] Refactor into torch/_inductor/runtime/compile_tasks.py (#124681)
254128c16e : [inductor] Remove usage of device_interface from _inductor.runtime (#124592)
5f4c6d9b49 : Upgrade nightly wheels to rocm6.1 (#124811)
9466335ae4 : Convert `ForeachFuncInfo` to `dataclass` (#125001)
ecc2e034f7 : Fakify script object inputs and attributes for non-strict export (#124239)
ab80a59677 : CI: add opt-in aarch64 linux workflow (#121284)
b7d67e476d : upload pt2 cprofile stats to manifold (#125162)
2480e8b8a1 : Add MAP_SHARED option for torch.load(mmap=True) (#124889)
761a7b84ba : [Dynamo] Fix alias issue with respect to wrapped numbers (#124731) (#124774)
9aed5dcfe6 : Clarify wording in docstring for `CosineAnnealingWarmRestarts` within `lr_scheduler.py` (#125161)
e3db465029 : Re-enable nightly testing for linux and macos binaries (#123390)
07d3af8e6a : Added ARC test jobs to all build jobs in the unstable bucket (#125142)
dc514df2af : [inductor] add triton code to SchedulerNode.debug_str (#125091)
a587a93f4c : [inductor][easy] add buffer layout to SchedulerNode.debug_str (#125090)
e0d2c24de1 : Fix device type issue in `_get_device_handle` (#124390)
5e5f890273 : [dynamo][source] Remove inspect getattr_static from AttrSource (#125200)
8320b770fd : [dynamo] use lazy disable dynamo for manual seed (#125196)
e7846447e0 : dynamic shapes builder API (#124898)
31801918e9 : Add pooling support for 3d channels last (#116305)
16e8431963 : Fix hybrid sparse COO tensor conversion to meta tensor (#125120)
74b7c56517 : [Autotune] Use half the number of warps for reduction tuning on AMD. (#125084)
0969f01d73 : [FSDP2] Accumulated in `reduce_dtype` if not syncing grads (#125191)
631d2b87f1 : [FSDP2] Fixed fp32 param dtype/bf16 reduce dtype test (#125190)
2369ee49cc : Update torch-xpu-ops pin (ATen XPU implementation) (#125011)
724c7491d0 : Revert " [Distributed] [7/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124987)"
e7631d6eae : Revert "CI: add aarch64 linux workflow (#121284)"
744f341aa4 : Fix ref leak in `dtype.to_complex()`/`to_real()` (#125154)
4d717cd7c3 : [TD] Enable td on cpu windows (#125049)
8ee6105f84 : Fix edge case in cudagraph pool detection (#124981)
e1e6ef753b : [dtensor] use str for reduce_op (#125172)
ccaf03fd89 : Fix: `nn.Parameter` return type identified as `Tensor` instead of `nn.Parameter` (#125106)
26f8d96cab : Fix typo in `compile` docstring regarding default `cache_size_limit` (#125145)
8c219251c5 : Add backwards support to FlexAttention (#123902)
720e5f306d : Update CODEOWNERS - Dataloader (#125181)
faee0e5ee8 : [ez][CI] Move test_linalg and test_sparse_csr off CI_SERIAL_LIST (#125068)
946e202c07 : [export] Restore user input names to unlifted graph modules (#124765)
f1d1e3246f : Revert "[dtensor] implement shard dim change with alltoall (#124872)"
3bd67dab32 : Revert "[dtensor] delete the old unused mesh_alltoall (#124879)"
3d1dd79b80 : make sure to stopTrace() on exception (#125131)
a434d1487b : Fix EtcdServer leak in etcd_server_test.py file (#125121)
fab5bd5359 : [checkpoint] Improve error message when use_reentrant=True is used with .grad() (#125155)
f03cf9d4dc : Fix & optimze open device registration test. (#124712)
32cf04cb7f : CI: add aarch64 linux workflow (#121284)
ae13c7e593 : Revert "[Meta Tensor] fix meta inplace set storage (#123880)"
96cc73dc13 : [oss][torch.package] fix multiple error messages within PackageExporter (#124943)
f7f018a0ed : [dtensor] delete the old unused mesh_alltoall (#124879)
6b79469d24 : [dtensor] implement shard dim change with alltoall (#124872)
8d46ab4104 : [dtensor] move pad/unpad_tensor to separate utils (#124871)
935a946241 : [RFC][FSDP2] Renamed `FSDP` to `FSDPModule` (#124955)
da44d2f7fb : split out flop counting its own method (#125061)
e5e623af4b : Codegen runtime asserts in Inductor (#124874)
e498e28b2f : Remove API that allows for extra deferred runtime asserts during lowering (#124864)
303880e16b : Update gen.py aoti_fm install dir (#125087)
5585138db9 : Remove caffe2 contrib and experiments (#125038)
555f1aeb02 : Fix module buffer mutation (#124586)
06b845dedc : Make metadata serialization more strict (#124411)
cc06c00a56 : Don't run auto grad safe mode when predispatch is on (#125066)
e3b9b71684 : [BE]: Ruff - TRY401 - Avoid verbose exception logging (#125126)
3e1fb96964 : [BE]: RUF018 - ban assignment in assert (#125125)
a05b2ae302 : Enable UFMT on `test/test_dataloader.py` (#124710)
518ab48e85 : Enable UFMT on test/test_functionalization.py (#123926)
cccae93551 : [Meta Tensor] fix meta inplace set storage (#123880)
6761b49551 : Ensure autocast device_type is a string + Unit test (#125014)
1a0b247762 : [dynamo] Bug fix for LOAD_GLOBAL and STORE_GLOBAL (#125002)
0f139b04b3 : [dynamo] Fix test (#125107)
49ca2b3429 : [BE]: Apply RUF025 perf fixups (#125104)
94b328ee45 : add likely/unlikely macro for unsupport c++20 compiler. (#124997)
42a192db0f : Fix Conv BN folding with deadcode (#124808)
c1e0dea023 : Delete unused param 'OP' in KERNEL_PRIVATEUSEONE (#125008)
5f7c4181b5 : Correcting valid device name of privateuse1 (#125018)
c5b1a4c269 : [inductor] share more cse cache during swap buffer (#124921)
57790fd088 : [inductor] share cse cache during vectorized indirect load (#124597)
7478b7f1ca : Add common used score_mod functions for templated attention (#124670)
df08140de2 : [dynamo] Collect cell_and_freevars correctly (#125097)
7aa6bd7fa0 : Refactor all top level usages of record_shapeenv_event to ShapeEnv class (#123735)
9ce58542ba : Ignore torch/distributed/_tensor/_collective_utils.py for TOR901 (#125082)
b4a008209a : Expose tensor check from guard for reusing (#124836)
f0a5a0d298 : OSS: Capture triton kernel in ET (#124775)
8246f42864 : Export torch.newaxis=None for Python Array API/Numpy consistency (#125026)
9bf53b128c : [codemod] Remove unused variables in caffe2/aten/src/ATen/test/scalar_test.cpp (#125041)
905318818d : [codemod] Fix missing field initializer in caffe2/torch/lib/libshm/core.cpp +2 (#125047)
61e937f3d6 : Add registration API for torch.compile-eager (#121387)
620d808da0 : [Pytorch 2] Forward fix for broken test (#125065)
d4a1b3e093 : Make c10d_functional ops call into _c10d_functional ops (#124979)
91a4740e72 : Disable the CUDA fast path for split_with_sizes_copy when capturing (#125052)
b3fd94d15e : [Distributed] [7/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124987)
ce503c1b40 : Dynamo x autograd.Function supports setup_context (#124802)
a866bfff45 : [cuDNN] cuDNN SDPA (Flash Attention) Backward (#122510)
5944a53555 : [MPS] Fix nextafter for negative values (#125029)
35b332882b : [Quant][PT2E] Enable linear-binary(-unary) post-op recipe for X86Inductor quantizer (#122387)
dc4c75ba72 : elastic/rendezvous: make barrier and rank assignment operations O(n) instead of O(n^2) (#124982)
1a6fef15ef : [compiled autograd] verbose logs for debugging cache misses (#124980)
43a7ab2a21 : [compiled autograd] introduce verbose logs, add autograd node info to graph (#124954)
e592a609fd : [Quant][ONEDNN] improve performance of qconv by reducing integration overhead (#123240)
368f5212fa : [cpu] [inductor] decompose bmm for memory bound in lowering (#124826)
ebb8905e0c : [cpu] add VecConvert between 8bits and 16bits (#124828)
fd24d8c05a : [dynamo][nn module] Use correct sources for _call_impl (#124970)
43069c460e : Correct check for Boolean list input type (#124899)
be2c09725a : [dtensor][experimental] local_map (#123676)
83e7b9d25f : [Inductor] Support fusion of chained reductions even if keepdims=True (#124843)
a68a8c0f6b : Disable test_binary_op_list_error_cases in test_foreach (#125046)
c6b7504d47 : Fix torch.library.register_fake's module reporting (#125037)
cd06c73cbd : [Inductor Cutlass backend] Improved GEMM template (#124577)
4a6dfbe480 : Add label to label config to auto apply labels based on other labels (#125042)
4e2b4c6ed6 : Fix broken docs (#124940)
9266e472e2 : rename ort to maia in dynamo's ort backend. (#124967)
abcb42cdd2 : Avoid COW materialize in various places (1) (#124984)
2ea1e84d40 : log pt2 config dict to signpost from inductor post grad (#124593)
91d565da0c : [dynamo] Add support for tensor's is_complex method (#124927)
781ea00c90 : [TD] Query Github API for base (#122214)
858fdd8c40 : Remove cppwrapper option on inductor benchmark workflow (#124971)
392dc45597 : Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799)
b4d39a5de9 : Revert "[TD] Query Github API for base (#122214)"
8461e7ed9e : Add test_cpp_extensions tests for stream_and_event and mita_backend (#123614)
73744a2c00 : torch.mtia module for MTIA device backend (#123612)
36af9c0d7d : [Aten] Fix XPU convolution_overrideable input memory format. (#124841)
a8574a9719 : Fix global flake8 issues (#124771)
609c958281 : Fix mypy issues in fake_tensor.py (#124428)
8d12ba9acf : add methods for open device in PackedSequence module. (#124923)
b003e0f29e : [TD] Query Github API for base (#122214)
6b54f9d3e1 : Revert "fix Invalid call to aoti_torch_tensor_copy_ #123039 (#124037)"
6bef5e9f67 : [CI] Add retry mechanism to check if the Docker daemon is running (#124728)
2f3b0befed : [BE]: Apply ruff FURB 118. (#124743)
fc2aa23c1e : Test reland "AOTAutograd: gate view-replay behind config, not the def… (#124948)
fc13c1c850 : [aot_inductor] Enable test_aot_inductor tests for ROCm (#123393)
3d8585e501 : [XPU] Add manual_seed and synchronize method (#124709)
74afccdd80 : [parametrization] fix `requires_grad` propagation (#124888)
d1b25596d5 : Revert "Add common used score_mod functions for templated attention (#124670)"
bba59b718b : Teach ShapeEnv that a <= b => a < b + 1 (#123436)
fa5ea29863 : Apply guard knowledge to all simplifications (#123342)
359ff49bf4 : Revert "[dtensor] move pad/unpad_tensor to separate utils (#124871)"
35a82d4a4a : Revert "Refresh OpOverloadPacket if a new OpOverload gets added (#124654)"
7324ddd80c : Revert "Delete erroneous print (#124972)"
19a83eacb5 : add new API torch.amp.is_autocast_available (#124938)
a46c27d961 : Revert "Verify types in custom op schemas (#124520)"
9c7c81b897 : [BE] Test everything against scipy-1.10.0 (#124983)
63d4dc5a80 : Remove TMP_LIBKINETO_NANOSECOND flag from Compilation (#124734)
4ad291d07f : [DeviceMesh] Removing mapping child_to_parent_mapping from `_MeshEnv` (#124890)
f131c2c199 : Revert "Fix mypy issues in fake_tensor.py (#124428)"
1ac60484c1 : Revert "Fix global flake8 issues (#124771)"
e607dc8abb : Revert "Refactor all top level usages of record_shapeenv_event to ShapeEnv class (#123735)"
e323c681ad : Update trymerge to honor the list of unstable failures from Dr.CI (#124965)
b3cf36cb7c : Implement deepcopy / clone for SymNode, NestedIntSymNode (#121361)
14430564ce : [cudagraphs] add cudagraph_skips counter (#124804)
855939904b : [cudagraphs] add more info to skip messages (#124700)
62b5738a8b : [benchmark][cudagraph] Explicitly call aten.div with CUDA denominator for cudagraphs (#119729)
769b1e6cdc : [profiler] Split up profiler test file (#124856)
f9a611a3ce : Update Jinja to 3.1.3 (#124976)
43f4e71daa : Making _MeshEnv subclassing thread local (#124555)
e913f77c60 : Revert "Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799)"
b2f521f376 : Revert "remove empty partition (#124920)"
9bccafc31c : Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799)
7321005dd8 : Add support for capturing tensors with score_mod (#124444)
3a810bcf91 : skip unsupported rocm test (#124968)
f9379ebbbf : fix Invalid call to aoti_torch_tensor_copy_ #123039 (#124037)
333f095d07 : Delete erroneous print (#124972)
c4b6ed4609 : guard_size_oblivious in unbind (#124959)
c715e76799 : [inductor] optimize isa dry compile time. (#124602)
db3a2d751c : [MPS][BE] Error-check linear (#124952)
973d724e21 : [CUDA] Fix 64-bit indexing in `vol2col` in conv3d (#124650)
33fae4fcf4 : Revert "Use recursive blob for package data (#119257)"
98835fff9f : remove empty partition (#124920)
724f8dd8c5 : [export] Serialize empty list based on argument type (#123748)
7bb89bcaa4 : [export] Fix state dict reparametrization in non-strict. (#124847)
4259e5d0e0 : [inductor] Specialize on unguarded alignment of example inputs (#123319)
8db42e7688 : [EZ][GHF] Rephrase cancelled message (#124947)
00c5859aeb : [dynamo] Add support for DELETE_SUBSCR (#123526)
8c515a14fd : [caffe2] Add build configuration for linux-arm64 (#124618)
84fb96130f : [export] Fix check for optional tensor returns (#123739)
4b586a434f : [ROCm] Triton upstream AMD backend integration (#121801)
b8b04b26fb : Forward fix for D56289438 (#124882)
d5182bb75b : Enable UFMT on `test/test_cuda*.py` (#124352)
977dc5593a : [EZ] Get rid of utf-8 quotes (#124932)
751d9a319d : [AOTI] Add a unit test (#124486)
a8aed4ce3f : Fix MPI_Group initialization errors (#124824)
29b22fbef9 : Typo fix: s/nonzero/unique/ (#124935)
93a319a4fc : [export] kill _process_constraints() (#123985)
9aeeb8e925 : [Inductor Cutlass backend] Improve GEMM op filtering (#124576)
e04c7b19f4 : Revert "torch.mtia module for MTIA device backend (#123612)"
4a1299cc0e : Revert "Add test_cpp_extensions tests for stream_and_event and mita_backend (#123614)"
3de78a1b48 : [dynamo][cpp-guards] EQUALS MATCH - Cache first passing value (#124627)
87079f5e91 : [DCP] Fix broken validate checkpoint api test (#124786)
cdc66e9dc3 : refactor autocast python APIs (#124479)
f01275934b : Fix global flake8 issues (#124771)
44bb5da529 : Fix mkl cmake not support static mkl on Windows. (#124925)
25c0d3f3f0 : Fix mypy issues in fake_tensor.py (#124428)
87bec7db4e : Refactor all top level usages of record_shapeenv_event to ShapeEnv class (#123735)
61e05f2fb4 : Don't ignore fresh unbacked symbols in AOTAutograd forward analysis (#124785)
b4597fffce : Try to reuse old symbol name rather than new symbol name when renaming (#124782)
4c44e2b236 : Improved unbacked SymInt input support in Inductor (#124739)
1ac402a96c : [Distributed] [6/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124701)
f6ce94dca5 : Revert "[inductor] Remove usage of device_interface from _inductor.runtime (#124592)"
58806d6531 : [decomp] Remove dead device_hint function (#124849)
5f9ea26185 : Revert "OSS: Capture triton kernel in ET (#124775)"
3890848ec2 : Revert "[ROCm] Triton upstream AMD backend integration (#121801)"
e520233526 : Revert "[dynamo] Refactor into torch/_inductor/runtime/compile_tasks.py (#124681)"
f3af049b88 : [DDP][PT2D] Fix the import issue (#124846)
0ca1ff3dce : Revert "Add support for capturing tensors with score_mod (#124444)"
c0fd7894cc : Revert "Fast standalone symbolize for unwinding (#123966)"
2d7f709752 : [Inductor] Force the parallel depth as outer loop fusion depth (#123899)
24ed909934 : Revert "[CUDA] Fix 64-bit indexing in `vol2col` in conv3d (#124650)"
678662a557 : Revert "Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799)"
48a016157d : Revert "[benchmark][cudagraph] Explicitly call aten.div with CUDA denominator for cudagraphs (#119729)"
6a92b352ee : Revert "[cudagraphs] add more info to skip messages (#124700)"
154157416c : Revert "[cudagraphs] add cudagraph_skips counter (#124804)"
7a6813b7b3 : Revert "[cuDNN] cuDNN SDPA (Flash Attention) Backward (#122510)"
9d139eedcf : [AOTI] set alignment for aot constant (#124272)
e68d65dae2 : [dynamo][cpp-guards] Differentiate dict guards wrt to guarding on key order (#124779)
59a1f1f308 : [dynamo][inline inbuilt nn modules] Do not inline for export (#124814)
94af62b000 : Updated test_graph_grad_scaling to use new OptimizerInfo infrastructure (#123581)
acc4cbea39 : Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799)
9a70e7f58c : [Nested Tensor]Add unit test that cover the internal use cases (#124880)
8b2f8ee5ef : [DDP][PT2D] Fix no_compiled_forward flag in the test (#124829)
b21bf5e4e4 : [foreach] Use same `dtypes` when `dtypesIfCUDA` is `None` (#124813)
84666389e1 : [FX] Update opinfo tests (flattened diff) (#124657)
4e340a7f8b : [custom_op] setup_context fills in default values (#124852)
fdad16b851 : [cudagraphs] add cudagraph_skips counter (#124804)
0ed38c9b22 : [cudagraphs] add more info to skip messages (#124700)
c021c9b8e4 : [benchmark][cudagraph] Explicitly call aten.div with CUDA denominator for cudagraphs (#119729)
0b0eea2229 : [dtensor] move pad/unpad_tensor to separate utils (#124871)
13ab24f192 : Reimplement unbacked symbol bindings in Inductor (#124394)
66b0156e0b : Ban replacements with unbacked SymInt on both sides (#124316)
5e58227d27 : Rebind and refresh unbacked bindings in FakeTensorUpdater (#124314)
9692b954c6 : FakeTensorProp works with unbacked bindings (#124310)
141888765b : Verify types in custom op schemas (#124520)
050dd65a87 : [onnx.export] Track new nodes added during _run_symbolic_function (#123027)
4f398eed0b : [custom_op] register_autograd supports non-tensor kwargonly-args (#124806)
31522391a8 : [custom_op] Blanket ban kwarg-only Tensors (#124805)
2b1c13e3a3 : [custom_op] fix schema inference for kwarg-only args (#124637)
c5e567c573 : [Torch][Timer] Adding debug info logging interface for expired timers (#123883)
43313a506a : Dont precompile if we search_autotune_cache but not max autotune is set (#124870)
68225072e8 : Match insignificant strides for sdpa inputs (#124859)
36c983a973 : [DeviceMesh] Added `DeviceMesh.from_group()` (#124787)
cb94845b14 : Force upsample to be float32 (#121324)
02ed2992d9 : [export] Capture tensor.to() under export. (#123732)
4f29103749 : [ez][CI] Move test_cuda off CI_SERIAL_LIST (#124649)
85b28ffc3a : [quant][pt2e] Move batch norm op between eval/train for cuda (#123957)
82fe9071c2 : [ROCm][CI] fix 5.7 nightly wheel build (#124797)
a89f442f0b : add -fclang-abi-compat=17 to HIP_HIPCC_FLAGS (#124862)
7809b34288 : [DTensor][Easy] Update OpSchema __repr__ to show args_schema in format print (#124812)
a248c24694 : [ROCm][Inductor] Disable conv cache emptying with hipgraphs (#124791)
80ab062103 : [MemoryViz] Improve description of blocks with missing frames (#124784)
8885638f95 : [quant][pt2e] Propagate get_attr meta through known ops only (#124415)
355dc34f86 : Add test_cpp_extensions tests for stream_and_event and mita_backend (#123614)
381653de63 : torch.mtia module for MTIA device backend (#123612)
408aa0182c : Build device generic torch.Stream and torch.Event based on c10::Stream/Event (#123611)
a22847a9cb : We should not be in kernel invocation before we restore fake mode (#124762)
0d58aeb73a : Handle size/etc accessors in FakeTensor, support accessing symbolic types from toInt/etc in IValue (#124760)
9bd6e93a04 : [inductor] Add option to create parent directory for write_atomic (#124646)
adbf62cd0a : Fix layer norm in static runtime when input is non-contiguous (#124789)
71d92bace2 : [CUDA] Fix 64-bit indexing in `vol2col` in conv3d (#124650)
8fe0b8b6a8 : No CPP or xdist process level reruns (#124798)
c55309e58f : OSS: Capture triton kernel in ET (#124775)
872eeb0d7d : Refresh OpOverloadPacket if a new OpOverload gets added (#124654)
7ad6dc2cf3 : [Profiler][PrivateUse1] Profiler support PrivateUse1 key (#124818)
f07b6227e6 : Initial add of torch.distributed.pipelining (#124776)
40cf38fd15 : [BE]: Apply ruff rule FURB192 (#124742)
48312a7fc3 : [DeviceMesh] Removed unneeded `.to(cpu)` (#124768)
927ae80afa : Release 2.3 compatibility matrix (#124861)
1db7d64af2 : [DeviceMesh] Initialized mesh tensor with CPU context (#124767)
674e15ae07 : Back out "Switch to predispatch" (#124860)
9888d7495e : [ROCm] Triton upstream AMD backend integration (#121801)
ed120b08c4 : Add common used score_mod functions for templated attention (#124670)
977105466f : Remove activation checkpointing tag to get correct FQNs (#124698)
bf834d388b : Mark test_xavier_uniform as slow (#124801)
e739a2d59e : Revert "[quant][pt2e] Move batch norm op between eval/train for cuda (#123957)"
92295fbacd : Revert "Verify types in custom op schemas (#124520)"
7d94f52a8a : [Inductor Cutlass backend] clean up CUTLASSGemmTemplate.add_cutlass_gemm_choices (#124575)
49f0d127fb : Fix a bug in retrieving approximate bsr_dense_addmm kernel meta data (#124371)
a47f4253ab : [Inductor Cutlass backend] Set INDUCTOR_TEST_DISABLE_FRESH_CACHE in test setup (#124574)
e76b5e3cc8 : [Inductor Cutlass backend] Disable epilogue fusions (#124107)
537aebc99f : [Inductor cutlass backend] Add bmm support (#121734)
fb69eef1b4 : Add int testing for foreach_copy on cuda (#124703)
89ca0cb7a0 : [FSDP2] Added test to show rank 0 CPU full state dict flow (#124741)
e0e2d897ed : Handle Tensor returns in PropagateUnbackedSymInts (#124297)
b04dca1502 : Add pending_fresh_unbacked_symbols, populate unbacked_bindings for Dynamo (#124290)
0848051844 : Migrate linux-test Job yo ARC (#124386)
290bfbe01f : [DDP][PT2D] Lazy Initialization of DDP Module for Replicate API (#123424)
81740fd1f6 : [DCP] minor readability fix: make param name consistent with overriden function (#124770)
34f468e66f : remove the redundent '* 1000' to timestamp (#124374)
0da94f3a08 : [device_mesh] add a private init backend option (#124780)
b91f83f181 : [cudagraph] add config for cudagraph managed input mutation support (#124754)
bee924d173 : Enable test config selection when doing workflow dispatch (#124795)
9dded148d0 : Fix test_extension_backend on non-AVX systems (#117272)
2e7b8ff116 : [ROCm] Fix Int_mm() Integration with hipblasLT (#122431)
f0f7452e31 : Do not propogate (#124769)
952a00eda7 : torchelastic: change monitor_interval default to 0.1 (#124692)
03fa2421dc : Get ARC jobs to run on both classic and ARC infra (#124753)
2716e77cf7 : [FSDP1][2D] Fix FSDP1 2D state_dict to use run_check=False (#123802)
57a12d9d0f : Add Half support to torch.sparse.addmm for CPU (#124694)
1ab0b3c9f8 : [ROCm] avoid heap buffer overflow in hiprtc failure logs (#121865)
4efb28c900 : [quant][pt2e] Move batch norm op between eval/train for cuda (#123957)
64af899fdf : [cuDNN] cuDNN SDPA (Flash Attention) Backward (#122510)
31ca27af62 : Add the quant lift up pass in convert phase (#122777)
c933af2709 : Switch to predispatch (#123573)
3145522427 : [Profiler] Update third_party/kineto submodule hash (#124737)
e8f9f37b03 : [FSDP2] Added test to show rank 0 broadcast meta-device flow (#124651)
a21327e0b0 : [ROCm] update hipDataType support and hipify mappings (#120751)
1c4ad87396 : [TorchElastic] Option to enable TCPStore libuv backed (#124684)
3999b72d46 : Dont error in check consistency if old is None (#124751)
98ffdf930c : Revert ARC jobs to run on classic infra again (#124748)
cc268a710d : Revert "AOTAutograd: gate view-replay behind config, not the default (#124488)"
4ceb44c40d : Add torch.library.opcheck (#124496)
763dc26e59 : [Dynamo] Add dynamo support to torch.func.linearize (#123118)
8cf54929e3 : compiletime->compile_time (#124579)
d40774f4ed : [export] Fix up nn_module_stack for nodes occured around tracepoint ops. (#124457)
e94c846cf7 : [ez][TD] Unique td_exclusions file name (#124301)
57e92162eb : [inductor] Keep inductor cache for tests when TORCH_COMPILE_DEBUG is specified (#124755)
5532c7949f : Fix import error in update_failures.py (#124695)
e112792a69 : [export] refactor _AddRuntimeAssertionsForInlineConstraintsPass (#124503)
35a448f3cb : Record structured log for overall AOTAutograd backwards compilation (#124648)
abdd569e41 : [easy][test_profiler.py] if tqdm is not available, pass instead of None (#124729)
1d3a13d3d1 : Conform torch.mps to device module interface (#124676)
4e66aaa010 : update kineto submodule commit id to include new pg naming (#124332)
7c253a7776 : Add support for capturing tensors with score_mod (#124444)
0792ceab4b : [dynamo] Refactor into torch/_inductor/runtime/compile_tasks.py (#124681)
5d45eb77f1 : [inductor] Remove usage of device_interface from _inductor.runtime (#124592)
25a2d18dd9 : [Profiler] iterate frontend function events for profiler post processing (#124596)
05db64024c : [DDP][PT2D] Correctly calculate the numel with symint in DDP fusion (#124422)
47330ca133 : AOTAutograd: gate view-replay behind config, not the default (#124488)
b2fd224f27 : [AOTI] Add more ABI-compatibility unit test (#123900)
e558008a05 : [PyTorch] Add test that canEnableStaticRuntime rejects prim::CallMethod (#120853)
fb6d052e9c : Specify the exact table we upload metrics to (#124321)
772ae6da1e : Fast standalone symbolize for unwinding (#123966)
cf98cab1b6 : [export] Forward fix XNNPackQuantizer.module_type_config to detect str nn_module_stack (#123662)
7ecbbc40c3 : [HOP][inductor] Add higher order associative scan operator (#119430)
64491c0811 : Restore CompileContext as well in backwards (#124626)
4f3e1f1c93 : Revert "Add support for capturing tensors with score_mod (#124444)"
5b98d43488 : Verify types in custom op schemas (#124520)
107f944f22 : Support fp8 quantization (#123161)
f8f6c460cd : [Inductor max autotune] Make autotuning robust against very slow Kernels (#123932)
25f321b84f : Refactor autocast C++ APIs to be device-agnostic (#124359)
3c964ad1ca : add fused_sgd_kernel support for CPU device (#123629)
4efb980e07 : [BE] Update older scipy used in CI to 1.8.1 (#124675)
7b6e354ecd : [DDP][PT2D] Fix some tracing bugs of DDP (#124421)
9a5b4d2403 : Do not forward parent's value range to CSE variable for variables created within codegen. (#123099)
edcd968b51 : Add out wrappers to some decompositions (#115437)
e0c5113dec : Add support for capturing tensors with score_mod (#124444)
c82fcb7b30 : Add testing and fix `weights_only` load for quantized types and nn.Parameters with python attrs (#124330)
de5d689cf9 : [EZ] Update pillow to 10.3.0 (#124614)
7706cd7d12 : Extend CPU inductor merge rule (#124671)
660db767ef : Don't clean up fresh inductor cache on error (#124620)
7e095be4b6 : Fix test_max_autotune_remote_caching (#124655)
375ec25f55 : Add missing aten::sort.any op for assistant lm models (#123982)
ea61c9cb29 : [Distributed] [5/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124043)
5f5778476a : rename ort to maia (#123265)
bffecb5aff : [Inductor] Enable VecMask store (#123710)
dd440ac734 : Add Matmul recipe into x86_inductor_quantizer (#122776)
1fcdea8cd6 : Do not import transformers when import torch._dynamo (#124634)
0c21161488 : Add meta function for `torch.histc` (#124548)
6054789874 : Make numel equal test size oblivious in reshape_symint (#124611)
abf3f90781 : [MPS] Fix large copy (#124635)
72a34eeb99 : Dynamo x autograd.Function supports non-{Tensor, symnode, constant} inputs (#124360)
302d7e9a6e : [Binary Build] Increase timeout for Linux nightly binary builds (#124668)
87a35d5a29 : Use new function to log one cluster per line (#124628)
501edc7e59 : [inductor, test] remove cast for test_tmp_not_defined_issue2_cpu (#114910)
ba3c00c266 : [test_profiler.py] Disable tqdm monitor thread and torch.compile with compile_threads=1 (#124409)
c01499ecc6 : [sym_shapes][perf] Cache ShapeEnv constrain_symbol_range calls (#124610)
05addd5658 : [tp] add kwargs support to prepare_module_input (#124114)
5785b02ba6 : Skip workspace permission change for ROCm CI (#123816)
bb37910e30 : [AOTI] Fixes ScatterFallback codegen (#124580)
fd59554be6 : Scripts to compile reruns + td exclusions and upload to s3 (#124312)
0bbbc754dd : Add AOTInductor generated cpp code to TORCH_TRACE (#124617)
0093735ccd : [inductor] Use compile time config values in runtime (#124561)
cb9fe91f5c : [inductor] Remove config check for 3D tiling (#124569)
4620a45542 : [inductor] Refactor runtime files into torch._inductor.runtime (part 5) (#124560)
0cc0e60e30 : [inductor] Refactor runtime files into torch._inductor.runtime (part 4) (#124559)
7fd8870e6b : [inductor] Refactor runtime files into torch._inductor.runtime (part 3) (#124557)
bb8815bc31 : [inductor] Refactor runtime files into torch._inductor.runtime (part 2) (#124553)
480585fd2b : [inductor] Refactor runtime files into torch._inductor.runtime (part 1) (#124552)
16eea7c6a5 : Revert "[inductor] Refactor runtime files into torch._inductor.runtime (part 1) (#124552)"
56714cb497 : Revert "[inductor] Refactor runtime files into torch._inductor.runtime (part 2) (#124553)"
0b90af0bf5 : Revert "[inductor] Refactor runtime files into torch._inductor.runtime (part 3) (#124557)"
b3d6c2fe9b : Revert "[inductor] Refactor runtime files into torch._inductor.runtime (part 4) (#124559)"
0f44ef93ab : Revert "[inductor] Refactor runtime files into torch._inductor.runtime (part 5) (#124560)"
8973c5b846 : Revert "[inductor] Remove config check for 3D tiling (#124569)"
30dec1da84 : Revert "[inductor] Use compile time config values in runtime (#124561)"
d77e7b7c54 : Make some kernel static asserts clearer (#124519)
c2f8bfae9c : Make torch._inductor.dependencies.Dep a proper class (#124407)
77c35334c1 : Fix build on s390x (#123250)
be2e56b5ab : s390x: update using vectorization builtins (#124396)
0ee514e628 : [CI] Upgrade xpu driver to LTS_803.29 (#123920)
9c2ac4476c : Allow ONNX models without parameters (#121904)
6ede882c0b : preferred blas library; cublaslt gemm implementation (#122106)
9a322ba1b0 : [user triton] Return unbacked SymInts used in the grid (#124594)
277ab8a4c0 : Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)"
d5037c389c : [Inductor cutlass backend] Fix tests: skipIfROCm always skips when using as class annotation (#123930)
ad7b5d32b6 : Intel GPU oneDNN Upstreaming: Convolution operators support (#117529)
9d4bef6248 : Intel GPU oneDNN upstreaming: Conv primitive integration (#117512)
42bd1abc62 : [Inductor Cutlass backend] Tolerate dynamic shapes (#121497)
34bce27f0d : Revert "fix Invalid call to aoti_torch_tensor_copy_ #123039 (#124037)"
3af12447f8 : [inductor] Use compile time config values in runtime (#124561)
317c0af149 : [inductor] Remove config check for 3D tiling (#124569)
3ac30bc32a : [inductor] Refactor runtime files into torch._inductor.runtime (part 5) (#124560)
9ea2a09510 : [inductor] Refactor runtime files into torch._inductor.runtime (part 4) (#124559)
fcf28b0ad5 : [inductor] Refactor runtime files into torch._inductor.runtime (part 3) (#124557)
f4d47f5bbb : [inductor] Refactor runtime files into torch._inductor.runtime (part 2) (#124553)
a7035cc11a : [inductor] Refactor runtime files into torch._inductor.runtime (part 1) (#124552)
6e24cc012b : fix Invalid call to aoti_torch_tensor_copy_ #123039 (#124037)
b1984237a0 : [Profiler] Unify the device(CUDA, XPU, PrivateUse1) in torch profiler post processing (#123247)
c5fafe9f48 : [BE]: TRY002 - Ban raising vanilla exceptions (#124570)
fd90991790 : [rfc] opentelemetry in pytorch (#122999)
29cc293725 : [BE]: FURB142 - Remove set mutations. Use set update (#124551)
5a1216bb2e : [BE]: Update ruff to 0.4.1 (#124549)
f34905f61d : Assert that TracingContext is available when set_example_value is called (#124284)
0e6367dd44 : Factor var_to_range assignments to _update_var_to_range helper (#124283)
cbf420b67a : [inductor] for UserDefinedTritonKernels don't mark all inputs as mutating (#124425)
0d90d4d613 : [Dynamo] Fix NamedTuple hasattr bug (#124531)
a6a3f2e06b : [MPS] Fix GELU, LeakyReLU and Mish on non-contiguous tensors (#123049)
98f3e0214b : [NCCL][TEST] Synchronize proper devices (#124517)
d6f88105ce : Fix the problem about load_state_dict with unexpected key whose prefix matches a valid key (#124385)
afa78ad08c : Call writeline from writelines (#124515)
a32eac345f : [dynamo] Return gm.forward for eager backend (#124109)
febc4d8759 : [dynamo][easy] forbid_in_graph check to use getattr_static (#124445)
97ccfad915 : Fix test_decomp test for ops with py_impl(CompositeImplicitAutograd) (#116832)
a3e3693afc : [Dynamo] Fix missing bracket in ListVariable (#124532)
f20e3ae0c3 : Use recursive blob for package data (#119257)
0d0b5b2655 : Enable dynamo rosenbrock sparse tests (#124542)
184f16016e : Enable dynamo-traced deepcopy test for RMSprop (#124541)
6a730698e2 : Enable dynamo-traced Adamax tests (#124540)
f1cbaf1764 : Adds LSE output for templated-attention-hop if inputs require grad (#124308)
0d64b82f0b : Make CompiledFxGraph portable between machines (#124438)
c5a4ba2257 : [inductor] consider pointwise nodes when deciding reduction hint (#124131)
57f64197f3 : Reduce warning msg in torch.profiler (#124469)
b79b0d3d6a : Enable UFMT on test/test_legacy_vmap.py (#124381)
3d8b903d95 : [PyTorch] Remove ArrayRefTensor::numel_ (#124516)
f9fce110af : [FSDP2][ez] Removed error check for swap tensors flag (#124513)
1c2cb36811 : [FSDP2] Added CPU offloading (#120256)
cf5ca58e7f : [NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)
acbf888a13 : rename sl to strobelight (#124455)
0feab7d6c3 : Revert "Build device generic torch.Stream and torch.Event based on c10::Stream/Event (#123611)"
929242a15c : Revert "torch.mtia module for MTIA device backend (#123612)"
52da03edeb : Revert "Add test_cpp_extensions tests for stream_and_event and mita_backend (#123614)"
f8f7cfbeee : Add __torch_function__ support for generated tensor methods/property of PrivateUse1 (#121723)
19850d770d : update triton pin (#124429)
d8a98ddd60 : Prep PR for cutlass 3.5 update (#124412)
b3504af56e : Enable UFMT on `test/scripts` and some files (#124137)
f0560f7b3b : [opcheck] Stop doing test_aot_dispatch_static by default (#124495)
37d18966ea : [custom_op] set some tags when constructing the op (#124414)
1900f79b72 : [FSDP2] Added `set_reshard_after_backward` (#124319)
10b9d4d19c : [export] handle Dim.lower = 0, 1 for ep.run_decompositions() (#123602)
c74dfca5e7 : Int4MM: Unswizzle for different dtypes (#124448)
000d55870a : Enable in oss (#124031)
e6a788ac26 : Fix compilation on aarch64 with gcc (#124511)
179108f14d : Use separate flags for MultiTemplates from BenchmarkFusion (#122825)
73f56e1e81 : [sym_shapes][perf] Do not calculate hint in advice_is_size (#124472)
661fd23640 : [AMD] TunableOp take priority over DISABLE_ADDMM_HIP_LT (#124161)
f87c788a34 : Revert "Capture triton kernel in execution trace (#124140)"
761de37ab7 : [sym_shape][perf] eval_static: guards, unbacked compute once (#124217)
8869b543e8 : [AMD] Remove deprecated macro from ConvUtils (#124158)
b0d83726bd : [5/x][AMD][Lowering Enablement] Hipifying aoti code_wrapper (#124241)
25c65d6642 : Change register_autograd to reflect ordering of setup_context and backward (#124403)
a8e17b2d4d : Move schema inference to torch._library (#124199)
a78450a00b : Excise uses of the old custom ops APIs (#124134)
9489019085 : Small fixes for deferred epilogue (#123229)
39fc280dce : Dont precompile already seen keys, limit epilogue choices (#122642)
7ae835eee4 : Enable SourcelessBuilder to build GraphModule generated by make_fx (#123673)
68a027f144 : Fixes for 123400 (#123406)
5050e627dc : Defer marking_static_address (#124309)
1531a29fb9 : Enable tests related to 116061 (#123405)
406d99e46c : Fix for 117147 (#123404)
203d111c54 : Enable dynamo test_forloop_goes_right_direction_multi_gpu (#123324)
293f756cdc : Support aot_export torchbind op (#123370)
e62169a8fa : Support torchbind op dispatch in python (#123367)
136f8378e1 : Re-land precompile triton templates (#124030)
bad8d25881 : Add torch.library.register_kernel (#124299)
3918dfedc5 : [custom_op] Rename register_impl to register_kernel (#124200)
22a2f676c3 : [custom_op] add ability to provide manual schema (#124180)
a56e057814 : [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
8b1ad51881 : Better Error Message in `ChainedScheduler` and `SequentialLR` (#121633)
c9db59e9e4 : [sparse] Add fast semi-structured sparsification kernels (#122350)
96724a769b : [ptd] drop ncclGroupStart/end for ncclCommInit (#124363) (#124416)
88fa843e58 : Add vectorized norm fill for ppc64le (#113351)
8e280862ff : Add custom joint graph passes (#124443)
b412b75b42 : [optim] add fused_adam/adamw_kernel support for CPU device (#123074)
9a71d12d92 : [CUDAGraphTree] Support mutated inputs from prior cudagraph pool (#123231)
58e403c739 : Added a docstring for torch.Size.numel. (#124186)
520bc1080e : Revert "[Profiler] Unify the device(CUDA, XPU, PrivateUse1) in torch profiler post processing (#123247)"
b2f6cfd9c0 : Fix AVX2 int4pack_mm_kernel crash if weights are unaligned (#124433)
a6f044a490 : [dynamo, 3.8-3.9] support dataclass with `frozen=True` in Python 3.8/3.9 (#124393)
e408d9ca25 : Revert "Migrate linux-focal-cuda11_8-py3_10-gcc9-build to ARC (#123721)"
96a067b190 : Revert "Migrate linux-focal-cuda12_1-py3_10-gcc9-build to ARC (#123722)"
1ba85b34dd : [AOTI] Enable mmapped weights when CUDA is used (#124346)
87f44d70b1 : [torch/distributed] Check gloo availability when doing isinstance(pg,… (#124233)
768ce2cdda : [Profiler] Unify the device(CUDA, XPU, PrivateUse1) in torch profiler post processing (#123247)
803a08f8ae : [ROCm] Add cublasGemmAlgo_t -> hipblasGemmAlgo_t (#121030)
290e3e7abb : Add ability to save TORCH_COMPILE_DEBUG logs for CI failures (#124408)
889e3eeed3 : Avoid cuda init to FakeTensorMode (#124413)
e620c3e814 : Optimized templated attention to use exp2 (#124356)
0bde4efa84 : Fix broken test in `test_aot_inductor.py` (#124329)
0affd23014 : Enable UFMT on test/test_python_dispatch.py (#124373)
ddd0ed1b43 : distributed: templated ring attention (#124215)
4946638f06 : [AOTI] Add ABI-compatibility tests (#123848)
cbefaf2a37 : [AOTI] Move c10/util ostream function implementations to their headers (#123847)
9ed9b22ec0 : Implement efficient_conv_bn_eval_decomp_graph_transform to handle conv and bn fusion after decomp (#123680)
ca6a0e1348 : [c10d] remove the env of TORCH_NCCL_ABORT_IN_DESTROY_PG (#124334)
2f45be46f6 : [DeviceMesh][Test] Add 3d unit test for `get_local_rank()` (#124142)
e0792cf3d6 : Make copy_cast, softmax and cat_out unranked (#123191)
e4f6340f21 : realize inputs to mem bound mm decomposition (#123165)
5ba6bb7b2f : Add swap_tensors path to nn parametrizations (#124130)
87f651c7e7 : fix cpu test errors (#124116)
2e48b39603 : Fix example_value of map (#124203)
4a0900d04b : Revert "[NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)"
61bc188f42 : Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)"
89407eca3b : Capture triton kernel in execution trace (#124140)
74bedbb9e1 : [export] Serialize rational symint ranges (#123884)
b6f0159db0 : Add test_cpp_extensions tests for stream_and_event and mtia_backend (#123614)
37215a4fa2 : Fix memory leak in pattern_matcher (#124345)
d7e1bf9ff9 : torch.mtia module for MTIA device backend (#123612)
cb17721899 : Build device generic torch.Stream and torch.Event based on c10::Stream/Event (#123611)
7a6edb0b66 : Possible fix for einops warning (#124084)
a8cf91c395 : Fix predispatch tracing for aten::lift_fresh_copy (#124198)
e1062f5738 : [export] Add a printer to unflattened module. (#124315)
415a8f6398 : Fixed issue in affine_grid_backward when grad_grid is non-contiguous (#124370)
aa2da0cdd2 : [Export] Add runtime assert to non-strict export (#123681)
5677128cb8 : [MPS] Fix crash with binary_cross_entropy is invoked for half dtypes (#124258)
ef93402f61 : [NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)
bbb6e36495 : [FSDP2] Fixed `set_requires_gradient_sync`'s `recurse` arg (#124318)
9385ef2a5d : Revert "Skip workspace permission change for ROCm CI (#123816)"
1325fd94a4 : Support xpu autocast policy (#124052)
b51f66c195 : [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
1542874311 : Delete qualname from custom_op decorator (#124092)
648c39c47d : Add OpOverload.redispatch; use it in new custom ops API (#124089)
645173a0b5 : Add torch.library.register_autograd (#124071)
8135c4b921 : torch.library.register_fake now accepts more types (#124066)
a0466061e1 : Support xpu host allocator (#123080)
b5d4ebe9ae : Migrate linux-focal-cuda12_1-py3_10-gcc9-build to ARC (#123722)
d032a78008 : Migrate linux-focal-cuda11_8-py3_10-gcc9-build to ARC (#123721)
6fcbeb3489 : [ATen] Add CPU fp16 support for nll_loss and cross_entropy_loss (#123256)
d59f1da62f : [sym_shapes][perf] _find not update unchanged replacements (#124274)
9eba1995d0 : [sym_shapes][perf] Use sympy xreplace instead of subs (#124208)
2b82345e48 : Revert "Re-land precompile triton templates (#124030)"
704fac5618 : [dynamo][cpp-guard] Reland Attempt 1 - Enable cpp guard manager (#124231)
6e86a40694 : Revert "[Dynamo] Check for __bool__ attribute before accessing it (#120943)"
8ff85b42f9 : Revert "Add swap_tensors path to nn parametrizations (#124130)"
ec608a5d66 : Refactor CUDA's amp cast policy to be generic (#124051)
8ad66e05d2 : [4/x][AMD][Lowering Enablement] Enabling meta internal AOTInductor compilation on ROCM (#124123)
de1c0d2497 : [cublas] Keep explicit workspace creation to avoid OOM (#124250)
c9ab9248ce : [Inductor Intel GPU backend Upstream] Generalize device-bias code in (#124249)
27daa110c8 : Back out "Refresh OpOverloadPacket if a new OpOverload gets added (#123578)" (#124324)
f213f262af : [dynamo][cpp-guards] Improve when to use Dict vs DictSubclassGuardManager (#124237)
9fed2e826b : [DTensor][Test] Add unit tests to keep track of DTensor sharding for 2D (#123687)
dca24d70ba : [dynamo, test] remove skip for unhandled exception test (#123876)
812bae09be : [dynamo] fix 3.11+ refleak (#124238)
7c94652d7d : [benchmarks] Add --use-warm-peak-memory (#124326)
0ddd17bdc6 : [benchmarks] Add --snapshot-memory to get memory pickles for eager vs compiled (#119411)
6b4b857a60 : [dynamo][nn_module] Enable torch.compile/disable as decorators on the class (#124187)
b6b757701e : [aot] trim refcount for subclass runtime wrapper (#124155)
1f04c29be5 : [inductor] Freeze the layout of the conv input to channels_last (#122765)
51a56efbb9 : [inductor] modify the output_stride of ConcatKernel (#122761)
78f3b99a94 : [inductor] Modify the rules for freezing the layout of x.unwrap_view() in convert_to_reinterpret_view (#122760)
b71423c2e4 : [inductor] let coordesc tuner respect max RBLOCK (#124325)
43b4ac956e : Add index_reduce decomposition (#122579)
030bb13fe8 : Re-land precompile triton templates (#124030)
fae31495ff : Try to speed up lintrunner in CI (#124311)
cc66c43d51 : Make macro with AMP more generic (#124050)
102a223216 : Enable dynamo test_state_dict_deterministic (#123323)
d88fcb86d8 : Enable dynamo traced test_forloop_goes_right_direction (#123322)
57a3dc56d4 : Small Adamax fix (#123498)
21f7cbdc1c : Enable UFMT on `test/test_autograd.py` (#124141)
025387f4dd : [ez][CI] Reduce CI_SERIAL_LIST pt2 (#124298)
38bfe7bcd1 : add link to custom ops troubleshooting page on tensor data_ptr error (#124240)
5a60a1abde : Move the implementation of register_fake onto torch.library.Library (#124065)
d1e1d671ef : Stop requiring a pystub for register_fake by default (#124064)
f5049de242 : Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)"
64f6ddf12c : Add swap_tensors path to nn parametrizations (#124130)
b5235694f4 : [FSDP2] Made `unshard` return type consistent (#124293)
64f42bfd52 : [dynamo] Support list.reverse (#124210)
dd7aeedb72 : [Dynamo] Check for __bool__ attribute before accessing it (#120943)
00372b1211 : Extend int[48]mm ops to float32 input (#124287)
14162eecfc : Update Security Policy to provide Security Guidance for users (#120531)
9875a834e4 : [Intel GPU] oneDNN GPU GEMM support (#117202)
6330acae76 : Refactored implementation for upsample_nearest decompositions (#122783)
bebdbb63ce : Introduce set_example_value and use it throughout Dynamo (#124176)
d23bf9cef0 : Add fake impl for aten.unique2 (#124306)
cc18afa25f : Intel GPU oneDNN upstreaming for primitive integration (#117112)
944d046645 : Revert "[DeviceMesh][Test] Add 3d unit test for `get_local_rank()` (#124142)"
1ec05c769b : all_gather and reduce_scatter autograd (#123989)
a403757913 : [DeviceMesh][Test] Add 3d unit test for `get_local_rank()` (#124142)
cdc855af97 : [Test][2D] Turn on 2D state_dict tests for uneven sharding (#124255)
93e249969b : [BE] enable `ruff` rule `RSE` and remove useless parentheses in `raise` statements (#124261)
b726a23d4e : change `tf32` thresholds for `test_per_sample_grads_embeddingnet` (#124104)
4efdf9a6a6 : fix pytorch version for onnx in doc (#124182)
24cecf06d7 : Update autotune jk knobs (#124214)
f433517181 : [dynamo][decorator] Support disable on nn modules (#124185)
46324fe073 : Speedup int4mm_kernel with NEON (#124257)
9b1d6c8d98 : improve F.adaptive_avg_pool2d error messages on mps (#124143)
7e1c98c171 : [dynamo] support `object.__setattr__(obj, name, value)` (#124068)
36f6928a37 : Revert "[Profiler][PrivateUse1] Profiler support PrivateUse1 key (#120556)"
d2b0c0a34e : Fix index_reduce sampler filter when op_info.variant_test_name is specified (#123375)
5a735ece6b : Remove @abock from ONNX approvers/codeowners (#124259)
b880a71010 : [BE] Add missing `std::` prefix to `Unique.mm` (#124232)
5f378e1853 : [Inductor cutlass backend] Fix flaky test ( CUDA IMA ) (#124106)
47dbfecd37 : Rename impl_abstract to register_fake, part 1/2 (#123937)
6efcb6c718 : Fix wrong ufmt exclusions in `.lintrunner.toml` (#124135)
2dc15b6849 : Revert "[sparse] Add fast semi-structured spasification kernels (#122350)"
3f89f565bb : Revert "Re-land precompile triton templates (#124030)"
77ad630f5d : Revert "Dont precompile already seen keys, limit epilogue choices (#122642)"
acc466751b : Add bfloat16 support to binary_cross_entropy for CPU (#123823)
c4878abab0 : Fix Setup Linux for ARC (#124171)
d0211e207c : inductor cpp wrapper: add GIL release back (#123897)
e3effa5855 : Enable UFMT on all of `test/distributed` (#123539)
ed22dde877 : Pointer to the nonzero limit ticket (#124244)
dd3cea3291 : Fix derived dim bugs in ep.run_decomp (#123326)
236b0d12fa : Don't clamp slices generated from cat kernel (#124139)
050051f412 : Dont precompile already seen keys, limit epilogue choices (#122642)
51cc808ac7 : [dynamo][cpp-guards] Missing decref on early returns in DictSubclassGuardManager (#124230)
d68196e7ef : Re-land precompile triton templates (#124030)
32ca18ea3b : Handle the case when one of the output of forward pass is None (#123988)
6e4c4e93b6 : [Inductor] add contiguous layout optm for bmm input (#122599)
1fd9e320ea : Remove unnecessary FileLock in Fx Graph Cache (#124212)
f56c4572a6 : Fix typos in docs (#124218)
bf45ac8c98 : [FSDP2] Added explicit `unshard(async_op)` API (#120952)
0abd3f60fd : [CI] Reduce CI_SERIAL_LIST list (#124085)
946b50c788 : [ez][TD] Increase logging (#124082)
e7cf6f81ea : [sym_shapes][perf] Skip assert in check_is_size (#124209)
cebf65126c : FakeTensorProp assert consistency of sizes when metadata previously existed (#124059)
4322a0e782 : Skip workspace permission change for ROCm CI (#123816)
3eea300680 : [quant] Do not decompose choose_qparams_per_token_asymmetric (#124178)
3e90e93a78 : [inductor] disable comprehensive padding in fbcode (#124191)
b3f88317ec : [dtensor][5/N] have table-wise sharding use LocalShardsWrapper on participating ranks only (#122853)
d419fcd19f : [dtensor][4/N] have row-wise sharding always use LocalShardsWrapper (#122843)
1d7ac7baa0 : [dtensor][3/N] add torchrec row-wise uneven sharding example (#121392)
9d3543df9a : [dtensor][2/N] add torchrec table-wise sharding example (#120265)
9d88339b53 : Revert "make sure dynamo doesn't inline DTensor __new__ or __torch_dispatch__ (#123347)"
440e4353c7 : [DCP] Remove overlapping loader in async case (#123942)
606c4f1367 : [PT] [ST] fix test_sharded_tensor (#124103)
46a25cc0db : [DCP] Adds support for non-primitives in async_save by deep copying during cpu offloading (#123941)
14b2273b0c : [sparse] Add fast semi-structured sparsification kernels (#122350)
383d2d1f6c : Add testing and fix issues for weights_only load for LRScheduler (#123775)
1f89bf4188 : Revert "[reland] `_foreach_copy` with different src/dst dtypes (#123844)"
42e22bb444 : [nccl-pg] Pass pg name and desc to NCCL communicator (#124149)
72271fb07e : Add NEON ISA support on aarch64 (#123584)
67bd43b510 : [compiled autograd][dynamo] use aliases for stack restore when partial graphs steal inputs (#124127)
d838cc8f66 : [DCP] Returns a copy of sd in copy sd (#123567)
0f6ce45bcb : [Inductor] handle AMD special launch options (#124146)
4dc160864b : [dynamo, 3.12] enable dynamo-wrapped tests in CI (#123307)
962096bce6 : [dynamo, 3.12] skip some failing profiler dynamo-wrapped tests (#124124)
5e17f62d10 : [dynamo, 3.12] move functorch/test_aotdispatch.py::TestAOTAutograd::test_view_detach from dynamo xfail to skip (#124100)
9309580d69 : [dynamo, 3.12] handle possibility of NULL local variables during graph breaks (#124095)
2b3594f90e : [dynamo] fix call_finally issue in Python 3.8 (#124122)
298eb69c91 : [EZ] Make `weight_int4pack_mm` compilable for `half` input dtype (#124136)
bb0c768c5b : [dynamo][refactor] Move LazyGraphModule handling (#124113)
530bf391cc : Revert "[dynamo] Turn on CPP guard manager (#123547)"
52be63eb2c : Revert "Enable UFMT on all of `test/distributed` (#123539)"
2e48f7b044 : [pytree] add `tree_iter` function (#123913)
0eab740db3 : [Docs][Distributed] Add migration notes for `--local-rank` option style change for `torchrun` in PyTorch 2.0 (#109480)
7530c5a85d : [DOC] Fix example and typo (#123959)
5bef127c2e : [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
83ef3bb128 : Fix AVX512 int4pack_mm_kernel crash if weights are unaligned (#124128)
df5829d0ba : [inductor] let rand_strided support fp8 (#124120)
89ac37fe91 : Enable UFMT on all of `test/distributed` (#123539)
e4efa311f1 : Refactor test_tensor_set_data to be parametrized (#124105)
791e5db705 : Part 3: UFMT fix the rest files in torch/optim due to the pr-sanity-checks (#124055)
ac74a6783b : Part 2: UFMT fix 2 files in torch/optim due to the pr-sanity-checks (#124054)
560efaa471 : Part 1: UFMT partial files in torch/optim due to the pr-sanity-checks (#124053)
f30704f5f3 : add preparatory work for torch/optim/lr_scheduler.py (#124048)
6babf00014 : [inductor] Bypass FX graph cache when we have HigherOrderOperators (#123325)
ff1e3ff5a5 : [reland] `_foreach_copy` with different src/dst dtypes (#123844)
a4c8002ee0 : MPS FFT implementation bug (#123274)
eeb626b46a : [BE] Do not use `using namespace` in mps headers (#124117)
1cf62e86a4 : skip various unit tests for Jetson (#122531)
aaad0554b4 : [Inductor] Fix endless recursion in codecache.DLLWrapper.__getattr__ (#123931)
c2596fd3e0 : [Distributed] [4/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124032)
9079c76689 : Fix Asynchronous PyTorch Profiler Trace (#124080)
1885c3972d : [C10D] Add dist.get_node_local_rank helper (#123992)
2b54b00e30 : Update some more APIs to have positional-only args (#124063)
3c25b18d76 : Excise old custom ops prototype from custom_op_db (#124062)
a03711d24d : [custom_ops] Support TensorList inputs/outputs (#123615)
5a15cbfa44 : Fix typo in TorchScript annotate docstring (#123719)
70ad64e8a6 : update docs for separate context and forward functions (#121955)
9fa922c2ed : [profiler] Log process group name instead of pg uid (#124035)
bd222473fc : [EZ][BE] Fix unknown pragma warning (#124086)
9aba918bd8 : Support Accelerator OOM Error (#121200) (#121702)
495a4d4a42 : [FSDP2] Added `mesh` arg to `fsdp_pre_all_gather` (#123953)
d1a0821e7e : [FSDP2] Added pre/post-all-gather extensions (subclass) (#122908)
ea52918e81 : [FSDP2] Generalized all-gather outputs to >1 per parameter (#119302)
975f77784f : Fix CUDA out of memory error message formatting (#123984)
95a090fb56 : [CI] Update bazel deps (#124076)
601112fdb4 : [dynamo][log] Print missing skipped frame info on debug (#124078)
e5b404b809 : [inductor] Fix fresh_inductor_cache() (#122661)
99059affb9 : Use packed metadata from triton to reduce launch latency (#123842)
6c9f5064ea : Avoid retrieving launch metadata if launch_enter_hook is not installed (#123841)
90d1720861 : [export] Restore original placeholder names (part 3: constant input de/serialization) (#123590)
fb6f6270d6 : [inductor] comprehensive padding (#120758)
221c397e2e : Use NEON to speedup `int8pack_mm` on aarch64 (#124023)
8ce29f1416 : Enable UFMT on `test/onnx_caffe2`, `test/optim`, `test/package` and `test/profiler` (#123901)
63dcb5b0f2 : make sure dynamo doesn't inline DTensor __new__ or __torch_dispatch__ (#123347)
9c4fc5fa34 : [BE][Ez]: Fix minor potential perf regression from #123960 (#124013)
fea1b99d89 : Remove warning from LazyModuleMixin constructor. (#123968)
af9a707233 : Use uv in lintrunner init when it is available. (#124033)
7cd7a7aa8e : [xla hash update] update the pinned xla hash (#124042)
e3ac61587a : Enable UFMT on `test/functorch` (#123541)
03a05e791a : Don't add non-integer Triton kernel arg 1 to equal_to_1 (#123886)
19f50333e9 : Improve assert message for unbacked symint not written out (#123965)
a096e99a5d : Enable int8mm kernel for float16 (#124022)
9bb54c7f3c : [pytree] enable `functools.wraps` in Python pytree with dynamo (#124012)
f5331aade5 : Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)
635c238bad : Enable UFMT on all of test/quantization/jit &pt2e (#124010)
0dfe72c63b : [dynamo, 3.12] fix positions and offsets of added instructions when we clean (#123991)
88a7159493 : [NT] Fix typo in declared strides variable (#123856)
f3fd280238 : [dynamo] Relax strict_mode for autograd.Function forward inputs (#123910)
6440d1baa6 : [dynamo, 3.12] fix the block stack... again (#123978)
da7db5d345 : [BE] migrate import sorter configurations to `pyproject.toml` (#123846)
b60af92c17 : [Distributed] [3/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#123312)
97261be0a8 : Revert "Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)"
6ac8fe46dd : Enable UFMT on all of test/quantization/ao_migration &bc (#123994)
285c93d64d : [inductor] Write generated files from parent process (#123409)
5961e23e76 : primitive attribute assignment (#123898)
891736f115 : Fix links rendering when surrounding code in Dynamo deepdive (#123427)
7e3f80f00f : accelerate `binary_cross_entropy_with_logits` (#122789)
d39e6b3156 : Cleanup: Remove redundant `inference_patterns` PatternMatcherPass (#121602)
eeea3b12aa : Fix _LazyConvXdMixin.initialize_parameters and add related tests (#123756)
2216068559 : Enable UFMT on test/test_ops* (#123935)
71b8363f40 : [inductor] Remove unused local variable. (#120227)
2a2e1d8e4f : [functional collective] change the Python APIs to only use the native funcol ops (#123777)
2da3e113ca : [functional_collective] remove the logic that forces torch-xla to use legacy funcol (#123776)
58afcd7b61 : [dynamo][dict] Add UnspecializedNNModuleVariable to dict keys (#122812)
fefe6e2fea : [dynamo][3.12] Stop backend detection on the first RETURN_VALUE (#123878)
f1654fd4b0 : [PT2D][FSDP] skip FSDP hooks base on dynamo config (#123021)
77a45883ce : [Reland] [Distributed] [2/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#123821)
879f5b9a39 : Pass triton kernel info to record function (#123871)
7234f180f3 : Add mtia to codeowner (#123975)
1d6c5972c1 : [BE]: Optimize min/max/sum comprehensions C419 (#123960)
961eb39348 : AOT logging: log fw_metadata with each graph (#118646)
585cd117e6 : [nccl-pg] print broadcast ncclunique id duration (#123963)
3e98bdd66d : [dynamo] Turn on CPP guard manager (#123547)
d564fe7dca : [sparse] add proper path for cloning sparse tensors (#123127)
3dde6a461f : fix cpp path in torch/_C/_autograd.pyi (#123924)
380180c918 : Fix typo (#123767)
7b11fb4695 : [Dynamo] fix opcode `YIELD_FROM` and `SEND` (#123912)
4b889d1247 : stop TestMakeFx leaking to other tests (#123958)
c65aa5af6e : [Pytorch] doc sync-stream-and-free-HBM counter in memory_stats (#123799)
5d1f9bd2bc : Move the trace_rules.py docs up (#123873)
79deff689f : Update compile doc to suggest Module.compile (#123951)
762e19606e : [quant] Enable backward for choose_qparams_per_token_asymmetric (#123452)
3346ec8263 : [BE] Document what is tested in TestOptim (#123853)
2f0fc04fa3 : [CUDA][64-bit indexing] Bump large tensor threshold of `test_cross_entropy_large_tensor` to 70GiB (#123772)
8069469081 : [dynamo] Support Tuple[int] args to autograd.Function (#123887)
70b8c58f84 : [dynamo] Emit warning to turn on capture_scalar_outputs (#123896)
e3935783f7 : [dynamo] Fix @property on user-defined nn.Module (#123804)
6bac183dc2 : [dynamo] Support numpy.iinfo/finfo (#123803)
11e6f84ad8 : [dynamo] Graph break on uninitialized nn.Module (#123790)
6022600cc6 : [inductor] Handle meta tensor ops in graph (#123786)
6b0ba6bbd3 : [dynamo] Improve constant-prop for regex/torch.__version__ (#123705)
a625705290 : Enable UFMT on all of `test/nn` (#123809)
04acdad829 : [PT] [FSDP] [test] add barrier device ids (#123866)
366b24e242 : [Inductor] Add a device agnostic DeviceGuard class to inductor (#123338)
6367eab1a6 : Normalize remote/local cache names (#123914)
23dbe2b517 : Add test for skipping hf logging during export (#123410)
3c09c6b91a : Fix memory planning compile error (#123867)
ab647bd325 : Add missing interfaces of `torch.optim.swa_utils` (#117036)
5c0a380bdf : [pt2e][qat] Support conv-transpose-bn[-relu] QAT fusion (#123652)
e4c887fbf6 : [AOTAutograd] Replay views on output using `FunctionalTensor` metas. (#121007)
435db051d0 : get torch.distributed.breakpoint() to work under Python/Meta contexts (#118645)
07e0faf3ef : [export] set AOTAutograd ctx to enable_grad with pre-dispatch export (#123671)
757daece95 : [sparse] add meta support for add operation (and copy) (#123594)
951582949b : [export] Enforce final classes in serialization. (#123861)
2cb3301f80 : [ROCm] Add cast to kFloat in amax calculation (#123872)
b024c0c2ef : Convert MKL symbols from global to local (#122284)
616446cc0a : Update Kineto Hash to fix OSS json output (#123885)
41613a0803 : [Profiler][PrivateUse1] Profiler support PrivateUse1 key (#120556)
6f4c7eeb08 : Migrate linux-focal-py3_11-clang10-build to ARC (#123441)
e023863474 : Migrate linux-focal-py3_8-clang10-build to ARC (#123440)
1cdde98df4 : Intel GPU oneDNN upstreaming for library compilation (#117098)
3120dbbf81 : Revert "[sparse] Add fast semi-structured sparsification kernels (#122350)"
f9f7ef33c4 : AOTAutograd: add config to error when overlapping input checks would cause slow compile / runtimes (#123455)
f0eb162730 : Revert "Switch quantized_decomposed over to new custom ops API (#123454)"
7fc3aa5f81 : [compiled autograd][aot] Trim runtime refs for list inputs from dynamo (#122535)
540b451e91 : [compiled autograd][dynamo] Codegen aliases to keep grad mutated tensors alive (#123359)
d274d57037 : [compiled autograd][dynamo] Make compiled graph take in boxed inputs (#122353)
9dfeec9cdc : Add a mode to avoid clone() in DDPSink (#122927)
4e9094533e : [c10d/nccl-pg] allow user to pass process group description (#123472)
73f0ecc1ac : [BE] UFMT directory `torch/_functorch` (#123723)
7ff53e169f : add option to turn on return_tuple in _SplitterBase (#123868)
d994d993c0 : Revert "[inductor] Fix fresh_inductor_cache() (#122661)"
e881d567f4 : Revert "[inductor] Write generated files from parent process (#123409)"
5669334175 : Revert "Add Matmul recipe into x86_inductor_quantizer (#122776)"
db895ace1d : Only run backward part of COW test if results are strided (#123870)
87f7486df9 : Support SparseCsrPrivateUse1 (#123826)
706f7d1f22 : Enable UFMT on `test/jit_hooks`, `test/lazy` and some files (#123807)
4e3022dbe9 : [dynamo][logs] Print bytecode before tracing (#123877)
ede9e8237a : [dynamo] Bug fix for GET_YIELD_FROM_ITER (#122943)
aaec97a403 : [sparse] Add fast semi-structured sparsification kernels (#122350)
868e5ced5d : [Dispatcher] Collect autograd sequence numbers on PythonTLSSnapshot dispatch keys (#123304)
358ace1a1b : functional_collectives: add first differentiable collective -- all_to_all_single_grad (#123599)
5b648afba4 : Enable UFMT on test/test_multiprocessing (#123840)
7efaf54dc4 : Fakeifying views shouldnt create symbols when dynamic=False (#123348)
96fe3c5d46 : fix correctness for dynamo inlining RangeVariable __contains__ (#122751)
2fe672b146 : compile: ban mutations on non-compositional uses of as_strided (#122502)
22ba180e55 : [c10d] add more fields for periodic logging (#123860)
78824fd212 : [inductor] Fix recompiles bug for torch.full (#123811)
5b8c81eb82 : [PT] [FSDP] fix HSDP sharding placement (#123778)
7f6884f620 : Added some extra repr to triton template buffers and added autotuned block configs to templated attention (#123813)
3d9dc976ae : Handle unqualified imports in custom Triton kernels (#123703)
604c9c5601 : Enable UFMT on all of `test/jit` (#123623)
d0ccf599cc : [export] Restore original placeholder names (part 2: higher-order-op subgraph naming) (#123587)
b9675e820e : [dynamo][cpp-guards] Improve the logs (#123780)
2e6871f924 : [dynamo][guards-cpp] Early return in DictGuardManager for empty dicts (#123787)
b0b7aa201c : [dynamo][cpp-guards] Introduce DictSubclassGuardManager (#123773)
bd225189f1 : [inductor] Change OverridesData to take callables instead of strings (#123397)
af1e03fb8f : Quantized relu (#123004)
c75a32b9f9 : [FSDP2] Fixed `is_last_backward` for 1f1b (#123857)
13070e2753 : [DCP] Adds better handling in logging of specific kwargs (#123658)
b7fac76fc2 : [DCP] Fixes for _load_state_dict_keys and supports nested keys (#123679)
e70bf23b7b : [dynamo] apply certain bytecode cleaning transformations unconditionally (#123785)
3cd06f56b1 : [ez] test_profiler in serial (#123665)
fa013f69bb : dynamo assertion that graph has no fake-tensor constants should check for subclasses (#118644)
e979f45610 : [while_loop] add a simple op_info test (#123814)
f82d20c207 : [NEON] Remove implicit type promotion in `Vectorized<c10::Half>::operator!=` (#123864)
5a7fd20aa1 : [dynamo] Support autograd.FunctionCtx.needs_input_grad (#123700)
016ca546aa : Adding health check server hook in torch elastic (#122750) (#123504)
ee869c9bb7 : Avoid COW materialization in backward ops (4) (#123798)
69249a218b : Avoid COW materialization in backward ops (3) (#123797)
9c3b87833a : AOTAutograd: keep set_() input mutations in the graph, ban other cases (#122981)
10a03c56e5 : fix leaky fake tensor on attribute assignment, support buffer assignment (#122337)
7c451798cc : [inductor] Disable channels_last heuristic when channels==1 (#123758)
79c565b24e : [inductor] Write generated files from parent process (#123409)
902374cc09 : [CI] show doc coverage repro instructions (#123688)
6b24ec480c : [Tensor] Detect more cases of symbolic sizes/strides (#123696)
fe092da874 : Revert "[quant] Enable backward for choose_qparams_per_token_asymmetric (#123452)"
efa36ef092 : Natively support int truncation, don't guard on positive/negative (#122827)
c83900887f : [quant] Enable backward for choose_qparams_per_token_asymmetric (#123452)
134e56fa33 : inductor: log unique id to match output_code to aot graphs (#118647)
638729c0cd : Switch quantized_decomposed over to new custom ops API (#123454)
1b4419dc4d : Refresh OpOverloadPacket if a new OpOverload gets added (#123578)
8a5e7a01b5 : [custom_op] Schema inference now includes default values (#123453)
b2a0b8c446 : Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)
4f244cfaa0 : Enable int4mm for both half and bfloat16 (#123794)
02b29e7d07 : Add meta function for channel_shuffle operation (#123033)
84580f76d9 : fix flop counter issue with out parameters (#123768)
e8e9261b90 : Add Matmul recipe into x86_inductor_quantizer (#122776)
8798f5bf0d : Add Quantization recipe filter per operator type for x86_inductor_quantizer (#122775)
6b7741546b : Fixed arange decomp for float dtype (#121013)
2ac99d539b : Only initialize state if needed in SGD (#123757)
e00282fecf : [c10d] make monitorThread sleep when we try to dump (#123788)
a510afb885 : [aot] refactor runtime_wrapper's epilogue args access (#123674)
a8d2504eec : [aot] always pass inputs to runtime_wrapper as list and add type annotations (#123630)
c2f687f32c : Option to include stride and device annotation in gm.print_readable() (#123690)
8aad72b0d3 : Support all unsigned int sizes on unique (#123643)
416f532753 : [AOTI] Serialize large weights (#123002)
7fc4b170d8 : [EZ] Update mypy to 1.9.0 (#123595)
cacc8e27a5 : [inductor][cpp] refactor code to use define_kernel and call_kernel similar to CUDA (#123704)
2a597cfd2c : [EZ] Pin scipy to 1.12 for Py-3.12 (#123795)
57a2032c7a : Delete Lark (#123689)
bbcdd28409 : Report LRU cache stats at end of program for symbolic shapes (#123724)
3ebbeb75fd : [Profiler] Make Kineto traces export ns granularity for finer timestamps (#122425) (#123650)
ec00daf4f1 : [aotinductor] Fix benchmarks with self.autocast for run_performance_test (#123699)
902cb2c842 : [multi_tensor_apply] revert the optimization introduced in #119764 (#123763)
0d0fd80033 : [AOTI] fix relocation overflow error when .data is large (#123639)
281810e307 : Avoid COW materialization in backward ops (2) (#123740)
793df52dc5 : Avoid sync for privateuse1 backend inside Onehot. (#123621)
3e43dc086a : implement bmm to support nested tensor and normal tensor (#119762)
b3feb01910 : [dynamo] Update co_names if needed in fix_vars (#123697)
c64184b097 : [FSDP] Made patch functions thread safe with barrier (#123754)
4d1d71ecac : Add aoti_torch_dtype<> specializations for half and bfloat16 (#123692)
e29e990ddc : Add the VecConvert between 8bits and float (#123512)
01ab5a3104 : aot_eager and aot_eager_decomp_partition: include input mutations in graph (#123646)
8d36354187 : AOTAutograd: fix detection for mutations under no_grad when there are outstanding aliases (#122433)
57634ce74f : Don't intersect when clamping for size oblivious (#123675)
b36b523c05 : Fix guard_size_oblivious on non-symbolic expression (#123743)
a13cf5d396 : Workaround dind-rootless bind mount permissions (#123641)
69c6e0b851 : [ROCm] Fix ROCm bug that causes numerical errors in float8_experimental (#123275)
6b18daf205 : Revert "Delete Lark (#123689)"
49d5553f5a : Avoid COW materialization in backward ops (1) (#123657)
4bee4c7c25 : [3.12] enable inductor unittests (#123654)
f688d7a2f7 : Only suggest debug envvar when debug is on (#123647)
cda383e7bc : [inductor] Fix fresh_inductor_cache() (#122661)
cf8139b956 : Revert "Fix derived dim bugs in ep.run_decomp (#123326)"
63c24f73ef : Upsample2d backwards to int64_t (#123682)
a631461eef : Delete Lark (#123689)
8d9af8b91c : Revert "[Quant][PT2E] Enable linear-binary(-unary) post-op recipe for X86Inductor quantizer (#122387)"
30c4efe6d2 : Update preferred-citation in CITATION.cff (#123575)
2bcc83dfbd : Preserve dispatch state across function tracing (#122073)
a65e9a06f0 : Revert "[AOTI] Serialize large weights (#123002)"
4322874282 : Fix derived dim bugs in ep.run_decomp (#123326)
cd3c1132a9 : Create a mock benchmark results for torchao cudagraphs_low_precision (#123419)
c3de2cc154 : Enable UFMT on test/test_foreach.py (#123718)
c9c099b271 : Add kwargs to RecordFunctionFast (#123600)
26a9b05bce : Set stacklevel on checkpoint warning (#123717)
66e61af467 : [ROCm][CI] skip float16 for TestTemplatedSDPA (#123668)
49be96efe8 : Instantiate VaryingShape<c10::Stride> (#123542)
7a925c2657 : Migrate linux-focal-py3_8-clang10-onnx-build to ARC (#123435)
d017645dc7 : Revert "Support all unsigned int sizes on unique (#123643)"
5c1bde99c0 : Fix the incorrect return value of Tensor.numpy() (#123538)
bdee35a870 : [BE] rewrite logical-`and` expression to if-statement (#123638)
8aa08b8b9d : Support all unsigned int sizes on unique (#123643)
4a4baff0f3 : [dynamo, 3.12] force LOAD_SUPER_ATTR second bit on (#123686)
d60135e915 : [FSDP1] fix _same_storage check for DTensor (#123617)
37fd547518 : [DeviceMesh] Make dtype of mesh tensor from `init_device_mesh()` consistent with directly calling `DeviceMesh()` (#123677)
1346ebf12e : [dynamo][guards] Delay DUPLICATE_INPUT guard because of incorrect ordering (#123605)
1dc4e1e335 : [dynamo][logs] Bug fix (#123606)
cdc47ad991 : fix amp for AOTInductor (#122883)
fe4d1aff05 : UFMT formatting on test/export (#123520)
e8ad5460c0 : Fix skip logic bug in dynamo benchmark runner (#123544)
65710d95c9 : Fix example in torch.distributed.new_subgroups docstring (#123492)
713f065c8d : Enable UFMT on test/test_dispatch (#123644)
e15ae63a42 : [cuBLAS][cuBLASLt] Remove CUDA 11 heuristics for dispatching to cuBLASLt (#119939)
247646333e : Fix py opcode (#118977)
b7f898c4a6 : Generalize host allocator to be device-agnostic (#123079)
82e0153487 : [Quant][PT2E] Enable linear-binary(-unary) post-op recipe for X86Inductor quantizer (#122387)
69c7bd4587 : [Compile FSDP2][3/n] Check all_gather_work is distributed_c10d.Work before calling .wait() (#123491)
7bcac56140 : Update Kineto Submodule in PyTorch (#123565)
bd59e1113d : Improve docstring for tensorboard add_embedding() (#120408)
0288fa7cae : [inductor][cpp] expose config options via env vars (#123519)
786c6db519 : Revert "UFMT formatting on test/export (#123520)"
624e58f2c6 : [CUDA] Update `size_1` conv tests with TF32 thresholds (#118022)
af27bc443b : fix typo in 4 files (#123529)
8a566161cd : [ATen-VK] Remove duplicate function from Resource.cpp (#123659)
ec7551d1b7 : UFMT formatting on test/export (#123520)
c773913407 : Add torch.while_loop support to AOT Inductor (#123586)
3908ebca86 : Test COW materialization in backward ops (#123593)
298171df5c : [benchmark] Add namedtuple pytree serialization (#123648)
60d7fbe89a : Register matmul out variant so it is used (#122979)
27eb5daee4 : [AOTI] Serialize large weights (#123002)
d6fb1da806 : Fix doc example of masked_scatter (#123664)
adcfc2b582 : Add meta reg for addcdiv/addcmul ScalarList (#123486)
b287dbbc24 : [export] Fix naming if state dict contains colons (#123601)
a96e4ad0d1 : [Inductor] Pass device interface to the worker compile (#122492)
f7cdc1b9bb : Add test_aot_inductor to test_inductor (#123340)
bff321716c : Remove special handling of step with closure (#123620)
3db618d656 : [CUDA] Use 64-bit indexing in `CUDA_KERNEL_LOOP` in `im2col` (#118005)
b3eb1b2f74 : Revert "fix amp for AOTInductor (#122883)"
041be901b3 : fix ctc_loss zero-length/neg-length corner cases (#123193)
a6080f79e9 : [Build] Add linker script optimization (#121975)
178ce1433c : Hoist out auxiliary values in optional-typed arguments (#123613)
1970a802b3 : Only print bw result for the first time we benchmark a kernel (#123568)
5712c326a5 : Teach pattern_matcher to use a pre-traced pattern if given (#121314)
4044e93a51 : Add mm_pattern and bmm_pattern to serialized_patterns (#121313)
f772ea5493 : Improve return value docs for Module.load_state_dict (#123637)
10d06fc92e : Revert "[EZ] Update mypy to 1.9.0 (#123595)"
1b5944358e : Ignore logging.Logger.* calls during dynamo export (#123402)
2a37793249 : [Dynamo] Ensure that Higher Order Ops can be composed in dynamo (#123357)
497bac223c : Add XPU backend check on NamedTensor (#123081)
56dd7603da : Cleanup comment (#123467)
7a78534468 : [Compile FSDP2][1/n] Support using user-defined object instance method as hook (#123399)
9a661636e3 : Make lint clean on OS X (#123052)
46903d978b : fix maybe_initialize_device for custom device. (#121379)
270dd99180 : Fix record issue on XPUGuardImpl (#123523)
266e278ccf : UFMT formatting on test/distributions, test/error_messages, test/forward_backward_compatability (#123527)
c96bd3de06 : Enable UFMT on all of `test/fx` (#123622)
3b3962f7b3 : Enable UFMT on `torch_version.py` and `types.py` (#123131)
2834c68deb : Migrate linux-jammy-py3_10-clang15-asan-build to ARC (#123434)
6980c5048d : UFMT formatting on test/mobile (#123521)
f61b04a1f0 : [EZ] Update mypy to 1.9.0 (#123595)
491e2ed6d1 : [AOTI] Fix an internal test regression (#123481)
a4a49f77b8 : fix amp for AOTInductor (#122883)
4656ea5768 : Migrate linux-jammy-py3_8-gcc11-pch to ARC (#123433)
15745a52b0 : Inductor: don't change the stride_order of FlexibleLayout if it's already the same as required (#122945)
7c23fed12c : Move step to cpu if state is already initialized (#123618)
526a69f5ee : Remove incorrect check (#123616)
d04957c0c6 : Revert "Ignore logging.Logger.* calls during dynamo export (#123402)"
b9d2b75bac : Revert "Add test for skipping hf logging during export (#123410)"
666a628bea : [Inductor pattern] support int8 woq mm pattern matcher with freezing pass (#122955)
d86cb9c747 : [Quant][Inductor] Add qlinear_pointwise.binary op for X86Inductor backend (#123144)
7283c37c98 : [dynamo] Keep guards on global function (#123423)
98cf183629 : [merge rules] add mkldnn_lowerings.py to CPU inductor rule (#123627)
0d3a771f7b : Allow for git worktree when computing clangtidy scm root (#123060)
0fd072bf90 : [inductor] easy: move mkldnn lowerings to its own file (#123556)
f07c0977d5 : [dynamo, 3.12] avoid using co_lnotab in symbolic_convert (#123577)
b63477faa2 : [PT2][Inductor] Add decompose_mem_bound_mm to the customization pre and post grad passes (#123376)
8865425ff7 : [minifier] Add config flag to ignore non-fp values (#123006)
6d005ca590 : [PT2][Observability] Add model_type and global_rank for the scuba log for the dashboard Optimus pattern frequency monitor (#123398)
7420b8c5be : [effects] Add way to register effectul op (#122348)
493478db4a : [effects] Add inductor support for tokens (#122347)
9bd6d6e8b0 : Add mem-eff-attention's sliding window arg to align with xformers (#123571)
565e8c0645 : [Reland] Enable dynamo'd tests disabled for #115679 (#123552)
8bd6223730 : [export] construct set_grad_enabled HOO subgraph inside other HOO subgraphs (#123391)
3969f85769 : add TORCH_NCCL_HIGH_PRIORITY option (#122830)
9faa8848ea : [aotinductor] Add test case for outputs with views (#123415)
c3d37a88ed : Fix a perf regression in MultiTensorApply (#123566)
ba55ef8e21 : Add test for skipping hf logging during export (#123410)
1b9eebb6bb : [AOTI] Handle null outputs (#123460)
75933ff523 : Ignore logging.Logger.* calls during dynamo export (#123402)
aa9aed2fcf : Removes forgotten print statement (#123579)
d8e0c26e64 : [dynamo] Support warnings.catch_warnings (#123511)
6951626735 : [Reland] Enable tests disabled for #115607 (#123551)
4765570359 : [onnx.export] Avoid building vals_to_params_map (#123025)
c48f6680ff : Skip test_artificial_grid_cpp_wrapper (#123211)
1be2126ff6 : [pytree] Fix namedtuple serialization (#123388)
c797fbc4e1 : Enable UFMT on `test/cpp_api_parity`, `test/cpp_extensions`, `test/create_dummy_torchscript_model.py`, `test/custom_backend`, `test/custom_operator` (#123518)
b279034e5a : [DDP][PT2D] Add the trace rules for DDP (#121741)
89e6292d48 : Defer setting capturable in optimizer variable (#123497)
73e235f0a6 : Swap to ID guard for optimizer Variable (#123496)
6a3b47ec8f : [PT2D][DDP] Remove the hack to pass None as the process group (#123207)
aa73d5bb5c : Register COLLECTIVE_COMM profiler activity type, if available (#121461)
a2327d203b : [PT2D][DDP] Remove some hacks to get the test work (#123206)
3e8d3577be : Revert "Swap to ID guard for optimizer Variable (#123496)"
d9ac80f80c : Revert "Defer setting capturable in optimizer variable (#123497)"
e4e5449dfc : [aoti][reland] clear precomputed symbol replacements before cpp wrapper compilation (#123136)
61be8843c9 : [TD] Use label to configure td on distributed for rollout (#122976)
4f66db80ca : Migrate linux-jammy-py3.8-gcc11-no-ops to ARC (#123432)
3a7351bf91 : [xla hash update] update the pinned xla hash (#123549)
76b290344f : Defer setting capturable in optimizer variable (#123497)
26bf05ccac : Swap to ID guard for optimizer Variable (#123496)
bb04f3f66a : [dynamo][logger] Log graph break on Unsupported bytecodes (#122684)
07cecf4168 : [dynamo][cpp-guards] Fix bug for slices (#123516)
6ceec53579 : [dynamo][cpp-guards] Fix test for CPP guard manager (#123515)
212e460dce : [dynamo] Support custom __setattr__ on UserDefinedObjectVariable (#123318)
89724843bb : Use graph.find_nodes in pattern matcher (#122331)
5aab2b9acf : Use graph.find_nodes in functorch (#122258)
287680176b : Use graph.find_nodes in dynamo (#122257)
f8465df9f0 : Use graph.find_nodes in inductor (#122256)
33783e43e9 : Use graph.find_nodes in inductor/fx_passes (#122255)
03b13851d9 : [FX] Add side table to FX Graph for O(1) op/target query (#121565)
0355f6e954 : [Bug Fix] Fix Cuda 12.4 compilation - Refactor SFINAE boxing logic (#123377)
1ea2f1eaa1 : [BE][MPS] Reorganize logics and naming in copy.mm (#123310)
77681facac : [fix] inductor `split` lowering fails if `item()` is captured (#123032)
e3ea316623 : [dynamo] Save/restore cublas_allow_tf32 in convert_frame (#123509)
eff1e4899c : Add sparse COO/CSR/CSC/BSR/BSC meta tensor input support to torch.sum (#121673)
7ce42ebd44 : Generalise mod value ranges (#123253)
caed7f6727 : profile pt2 compile time with strobelight (#123311)
c66d503194 : Revert "[Profiler][submodule] Make Kineto traces export ns granularity for finer timestamps (#122425)"
89cbb2d86d : Allow docs build on workflow dispatch (#123493)
ecb2418dd6 : Revert "Adding health check server hook in torch elastic (#122750)"
7b02910163 : [Compile FSDP2][2/n] Support streams created outside of compile region (#123487)
6f7dd2f84a : [Profiler][submodule] Make Kineto traces export ns granularity for finer timestamps (#122425)
a4ef9cdd28 : benchmark: raise tolerance to unblock triton upgrade (#123484)
77643ed2eb : [torch quantization] raise exception when OOM during combine histogram in observer (#123309)
d3596cf004 : [dynamo][cpp-guards] Fix missing decref in GradGuardAccessor (#123488)
6fa72480d3 : Enhance RecordFunctionFast input args and use input args in triton_heuristics.py (#123459)
8c84fe3c86 : [dynamo][guards] Forward fix for #123302 (#123485)
841112d074 : [dynamo, 3.12] fix graph break issues with BINARY/STORE_SLICE (#123401)
284b07ba63 : [dynamo, 3.12] fix block stack related issues (#123392)
9189d04cb1 : [inductor] Add explicit ops.fma and use it in softmax_backward (#122518)
3e8c64a637 : [AOTInductor] Fix non-determinism in CUfunction declarations (#123266)
e94b81b254 : Revert "Enable tests disabled for #115607 (#123314)"
239abb2a14 : add record function Id to Torch ops (#122948)
f4e2a226aa : ScoreMod API (#121845)
8e98fda7a9 : [dynamo][easy] Add AC test and improve graph break message (#121394)
954d750516 : Revert "Enable dynamo'd tests disabled for #115679 (#123315)"
d9d25076fe : Reduce guards of optimizer state dict to guard once per param group (#123413)
f7e41a2b7a : [pt2] Clean up for removing 2 decompose patterns (#123422)
d472ebf94a : Enable dynamo'd tests disabled for #115679 (#123315)
9564e204c1 : Enable tests disabled for #115607 (#123314)
61d431fab0 : Adding health check server hook in torch elastic (#122750)
22b9987144 : [dynamo][cpp-guards] ListGetItemGuardAccessor and TupleGetItemGuardAccessor (#123396)
cd6c58baea : [custom_ops] mutated_args -> mutates_args (#123437)
81e7a7c955 : Add mutated_args field to custom_op (#123129)
9e8d2b6de2 : Add register_autograd to register backward formulas for custom ops (#123110)
d8e1c1087d : Add is_tensorlist_like_type helper (#123109)
067851dd0d : Expand is_functional_schema to work with torch._C._FunctionSchema (#123108)
42c2a5477c : [export] nn_module_stack to return class name str (#123308)
63c221b7fa : Clone mutated inputs in first pass of CPP wrapper compilation (#123316)
4946558dd4 : [minifier] Don't recompile for accuracy minification (#123005)
f5b8c9b730 : Ignore some known duplicated modules in doc build config script (#123425)
b0c86a5bc1 : [PT] [ST] support non contiguous rank validation in sharded tensor (#123230)
d78991a738 : Make torch_geometric models compatible with export (#123403)
cbde0f048b : [dynamo, 3.12] enable tests disabled due to missing dynamo 3.12 support (#123300)
ae6f8d923c : Pass and record process_group_name when creating ProcessGroupNCCL (#123117)
d7f23f6826 : [export] Restore original placeholder names (part 1: top-level renaming) (#122904)
f71e368969 : UFMT formatting on test/autograd test/ao test/cpp test/backends (#123369)
de7edeea25 : [DCP] DCP logger (#121352)
c8e117fb76 : Tiny comments improvement (#123426)
9b8f446e95 : [pytorch profiler] Add metrics for performance timing and other statistics (#123412)
32f9453c2a : [dynamo] Emit FUNCTORCH_STACK_MATCH guard in vmap(compile(f)) case (#122786)
8c7d8f0ff2 : Revert "Make torch_geometric models compatible with export (#123403)"
5b0ce8f334 : [Wheel] Change libtorch_cpu OpenMP search path (#123417)
cfd06bd60c : [CI] Switched to the _linux-build-label workflow for pull, rocm, slow and trunk jobs (#123255)
9743e3a19c : [Inductor Intel GPU backend Upstream] Add Inductor Intel GPU backend. (#121895)
9078191666 : [Inductor] Add the possible fusions group by priority (#123067)
bac2a39aee : [Inductor] [ReImplement] Outer Loop Fusion for CPP Backend (#121625)
2ffab6e663 : Make torch_geometric models compatible with export (#123403)
18c9d46068 : Fixes format utils executable (#123407)
7b575f0814 : Handle transposes in second batch of matrices in bmm (#122194)
86c5cc6559 : [ONNX][dynamo_export] Integrate onnx-rewriter optimizer (#123379)
7ffad9ab04 : Use out-of-place version of `put` inside `take_backward` (#123268)
c575e378ba : Update torch.compile_faq w.r.t to functorch (#122213)
dbe0c474a9 : Ensure all `torch.func.*` functions capture can be disabled (#122212)
84658d9c4f : Enable `capture_func_transforms` by default (#122211)
3d20cc1332 : Cleanup some duplicated placeholder py:module docs (#123244)
16cb5d48dd : Revert "[inductor] Add explicit ops.fma and use it in softmax_backward (#122518)"
535a84c125 : Publish PyTorch docs to pytorch/cpp repo (#122895)
d7ccde58a7 : [ONNX][dynamo_export] Fix ONNX export with print (#123368)
4cf5a9c505 : [pt2] remove 2 decompose patterns (#123371)
0c8a165b43 : [Export] Improve metadata and output parsing during deserialization (#122793)
064a650b63 : [AOTI][refactor] Improve generate_extern_kernel_out's signature (#123351)
aa063054ce : [AOTI] Fix the codegen for aten.randint.low_out (#123346)
e61d04e467 : Revert "[sparse] Add fast semi-structured sparsification kernels (#122350)"
6107cbba1b : doc: `torch.nn.utils.rnn.pad_sequence`: improve the example description (#123183)
19f8cf0167 : [FSDP2] Used `ReduceOp.AVG` if bf16 reduce-scatter (#123362)
5494b2a8d3 : enable test_sampled_addmm_zero_sized_cuda for rocm (#121940)
4b1b4db231 : [export] Add stack_trace for non-strict export (#121034)
512759a3d7 : Fix for tensor attribute missing (#123313)
30598c162d : [pytorch][cuda] Optimized softmax forward native CUDA implementation (#122970)
05984e642b : [inductor] Add explicit ops.fma and use it in softmax_backward (#122518)
d486cb7c1b : Deprecate calling FakeTensor.data_ptr in eager-mode (#123292)
fd60752786 : Turn _allow_unsafe_data_ptr_access into a config option (#123291)
de14717819 : [triton] Backport https://github.com/openai/triton/pull/3433 (#122470)
5c7e2fd270 : [dynamo, 3.12] use pymalloc allocator instead of malloc/free for frames (#123299)
d59c5d7353 : [dynamo, 3.12] enable dynamo on 3.12, enable most dynamo unittests on 3.12 (#123216)
c63a7b5691 : [sparse] Add fast semi-structured sparsification kernels (#122350)
d8717c2d68 : Revert "Skip test_artificial_grid_cpp_wrapper (#123211)"
a808559fc6 : Revert "[inductor] Fix fresh_inductor_cache() (#122661)"
721dcaff94 : Revert usage of NJT views in SDPA (#123215)
8b83327cd5 : [FSDP] Fixed `summon_full_params` on submodule (#123290)
fe84155083 : [TP][Tests] Replace assertEqual with deepcopy (#123218)
98e5238ad8 : [codemod][lowrisk] Remove unused exception parameter from caffe2/caffe2/image/image_input_op.h (#123056)
836a86064c : Ensure torch.library doctests runs under xdoctest (#123282)
8f20cf1c71 : Update the functionalization error message (#123261)
e0c9764660 : Back out "Precompile triton templates (#121998)" (#123305)
595613d746 : [CI] Workaround to the dind-rootless limitation to restore user on build.sh and test.sh (#122922)
5ecfe58cfb : Remove ulimit setting for ARC dind-rootless (#122629)
26b4ccf9d1 : Use numpy 2.0.0rc1 in CI (#123286)
d9cbd57dfe : Make u/int8 cat inductor fallback cpu-only (#123278)
b5488cbe64 : Use Vectorized Half for eager and compile (#123260)
54801e6fd6 : Revert "[Distributed] [2/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#122892)"
6890333e3d : [inductor] fix tensor overlap detection that cause cudagraphs being disabled (#123327)
f00ece024b : Handle wrong workflow name from GitHub (#123301)
dbeb214043 : [aot_inductor] Fix issues in pre_grad passes (#123181)
eee8413b8d : [Inductor Intel GPU backend Upstream] Enable triton installation for Intel GPU (#122254)
2a24b54e65 : [inductor] simplify expr when looking up size hint (#123140)
5bc6bd3cb8 : Remove excessive warnings and rewrite FSDP docstrings (#123281)
6694628170 : [dynamo][guards] Remove workaround after #122858 (#123303)
5b45ec8892 : [dynamo][guards] Use DATA_PTR instead of ID_MATCH for tensors (#123302)
fb7664d5bf : [dynamo][optimizer][guard-overhead] NOT_NONE guard for param.grad instead of TENSOR_MATCH (#123285)
63d17d3c90 : Revert "Revert usage of NJT views in SDPA (#123215)"
ba7d396eb7 : [inductor] Fix fresh_inductor_cache() (#122661)
1ea6d3a9b4 : Fix conv decomp when running to core-aten (#123283)
c58b0ac7c2 : IntraNodeComm primitives for allgather_matmul (#118038)
0ba16ffd35 : [Distributed] [2/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#122892)
41e7619875 : [EZ] Do not test for an undefined conversion behavior (#123258)
0fcddb5625 : Revert usage of NJT views in SDPA (#123215)
6e99f73923 : [CMake] fix cmake regex to match newly introduced 9.0a architecture (#123243)
05289a278c : Fix for MPS regression in #122016 and #123178 (#123234)
4732375042 : make RecordFunctionFast take inputs (#123208)
a5cf9a5800 : [CI] Do not install Nvidia drivers in ARC (#122890)
3e2b7e6052 : [dynamo][guard overhead] Data ptr guard optimizer state tensors (#122858)
63a0ce89a0 : [PT2][Inductor][3/n] Customize pre grad and post grad patterns (#121915)
7deb842b0d : [MacOS] Default parallel jobs to performance cores (#123038)
9e0838ff27 : fix typo in export/test_export.py (#123228)
620aaaf0cb : [DCP] Adds ability to create a CPU state dict that is both shared and pinned (#122338)
a4035bea5c : [while_loop] support closures (#123018)
5a66c2d65b : Update pytorch/xla pin (#123217)
9aed9c8c87 : Reduce CPU overhead of copying inputs in CUDAGraph trees via foreach_copy (#123162)
691054eeef : Fix error message of autograd (#123154)
700917c361 : Adjust logging content for TS usage logging (#123133)
9aa1a4d386 : Remove mypy-ignore-errors from custom_op_db (#123107)
44c0c0fc0f : Add torch.library.custom_op (#122344)
aa16c0163f : Only update momentum buffers for SGD if momentum is enabled (#122349)
fe29a8fbea : [quant][be] Simplify fake_quant_per_channel (#123186)
8b6b179a8a : [cuBLAS][cuBLASLt][FP8] Enforce restrictions on `amax` computation for scaled fp8 gemms (#122821)
bde1a93bc4 : Add lowering for resize, decomp for resize_as. (#122317)
a8b9dcb9af : Skip test_artificial_grid_cpp_wrapper (#123211)
8a0436014d : Support map in pre-dispatch functionalization (#121444)
8ac0f072e6 : [aot eager] Support frontend graphs with list arguments (#123212)
d895192e87 : Fix zeros_like on sparse compressed fake tensors (#123084)
74b3a7920e : [Inductor Cutlass backend] GEMM size threshold for Cutlass backend usage (#121491)
f2e67179ee : [Inductor] Make codecache CUDA compilation more robust & flexible (#121490)
25ad90adc0 : Revert "Support map in pre-dispatch functionalization (#121444)"
957b8d5c00 : [Inductor Intel GPU backend Upstream] Register general runtime device for Intel GPU (#121883)
3eb84b6343 : [dynamo][cpp-guards] Init LocalState only when TENSOR_MATCH guard present (#123152)
68cffd19f6 : DTensor: add ring attention for _scaled_dot_product_flash_attention (#122460)
f06d77caba : [TP] Improve MLPStacked test (#123199)
eb3a34d280 : Optimize multi_tensor_apply (take 2) (#119764)
d91db70295 : [dynamo][cpp-guards] Optimize tensor.grad accessor (#123226)
9288b27461 : Support map in pre-dispatch functionalization (#121444)
15bd81bfaf : expose transformer header in cmake and wheel (#122586)
102c676418 : [DTensor] Added some more foreach ops (#123214)
15529de901 : Remove FlameGraph comment from export_stacks (#123102)
2964b1ef21 : Extend XPU merge rules: Add torch/csrc/xpu/, torch/xpu/ and test/xpu (#122856)
0c6e8af257 : [AOTI][refactor] Update some test cases (#123093)
0a2e0eb4c0 : [functional collective rewrite] support rewriting reduce op for reduce_scatter_tensor (#122834)
f15fd650b7 : [funcol] add deprecation warning for the legacy backend (#122666)
31aff29b79 : Add clone if output is a view from constant. (#123200)
c77352b5cc : Add torch._library.register_fake_class to fakify torchBind class (#122622)
46c7235406 : add tensor queue example (#122621)
5d6a447357 : [torchbind] change to parametrized tests for pre_dispatch (#122620)
071f23f4f3 : [torchbind] redispatch call_torchbind in proxy dispatch mode (#122619)
e3d80f2fa9 : [ONNX] beartype to emit warning instead of error by default (#123205)
b1aca36f4c : [export] Allow legacy IR to be unflattened with weaker submodule ordering. (#123192)
d7fe0603a1 : Move sparse tests to TestOptimRenewed (#123146)
f2838c99a0 : Add a tensor lr test for optimizers (#123139)
cb8fc30e4a : Move LRScheduler integration tests to OptimizerInfo (#123134)
12e36dc1df : [dynamo] Fix torch._dynamo.disable on flatten_graph_inputs wrapper (#123007)
71085983ae : [c10d] [NCCL] Fix work handle for coalescing manager (#122849)
5027ef7e9c : [TP] Add wildcard support (#122968)
35f4d70240 : [sparse] proper sparse iteration (#123128)
19c2ed15c0 : update submodule onnx==1.16.0 (#123125)
8244ee00cf : Add fuzzer instructions to pt2 bug template (#123156)
0ff6155eee : [AOTI] Support module buffer mutation (#123164)
a9a9ce6d9c : [ez][FSDP2] Removed `_contiguous_orig_stride` (#123142)
bcb6e5aa72 : [DCP] Support partial load (#122829)
feabb645a7 : Revert "Handle transposes in second batch of matrices in bmm (#122194)"
1c61401086 : [cuBLAS] Fix typo in `CUDA_VERSION` `ifdef` for explicit workspace allocation (#123114)
64d743044d : Add inline constraints to non-strict exported program (#123017)
d17eea9c0f : [dynamo] fix broken 3.11+ windows build failure (#123104)
251ad1232b : Handle transposes in second batch of matrices in bmm (#122194)
aaef246c74 : remove log2 decomposition; add log2 lowering (#123112)
5f46312dbb : Reapply "Switch cudagraph backend to cudagraph trees (#121019)" and "Add Cudagraphs disable checking (#121018)" (#121864) (#122713)
638b003cb7 : [NJT] .to() properly updates device of offsets (#122797)
b27ee6548d : Add a Dynamo deepdive to documentation (#122305)
c40f386afd : [Inductor][1/n]Split cat customization (#123045)
1f503dffb3 : Revert "[aoti][reland] clear precomputed symbol replacements before cpp wrapper compilation (#123136)"
72662bf05b : [BE] Add torch.ops.aten._sparse_compressed_tensor_with_dims (#123083)
f9b2ffa7c4 : Forward fix lint after #119727 (#123137)
7eadb157bd : [aoti][reland] clear precomputed symbol replacements before cpp wrapper compilation (#123136)
969bbf8e82 : [dynamo][guards] Skip aliasing guards for optimizers (#123044)
1d52c2d985 : Add vec256_half_neon (#122918)
7a934e4031 : [c10d] dump on any exception (timeout + nccl error) (#123023)
f1c4d0fb2c : [dynamo] show inlining reasons from trace_rules (#123014)
0a038cf0cf : [TP] Avoid splitting path twice (#122919)
9d9d2af786 : [BE] Move tests using functional API to OptimizerInfo (#122822)
597f479643 : Add torchbench on-demand test workflow (#122624)
fdc281f258 : [inductor] lower min SM requirement for gemm autotuning to 68 (#123121)
12ced0f986 : make user defined triton kernel work with new ASTSource.make_ir API (#123124)
bc65c98588 : [AOTI] enabled a couple of tests for CPUs (#122992)
09c72eaa3f : [inductor] Remove identity from ops.scan (#119727)
4d5cdc2e1e : Fix empty_like bug for sparse tensors. (#121900)
891994fd1b : Update dynamo test failures list (#123111)
489f4a063b : Revert "Preserve unbacked SymInt on SymNode (#120816)" (#122988)
8b49782ba6 : [Inductor] require channels last output for channels last input for max_pool2d_backward (#122749)
d765e223ac : [dynamo][PT2D] avoid skipping dynamo_resume_* in torch/testing/_internal (#123013)
5d0ac887b9 : [dynamo][higher order ops] Make the subgraph sourceless (#123071)
69fa28f483 : [dynamo][cpp-guards] Enable a few tests to prevent frequent regressions (#123059)
234287aa16 : [dynamo][cpp-guards] DUAL_LEVEL guard (#123058)
ffd1e4e9ba : [dynamo][cpp-guards] Always Reset relational guards (#123046)
c4cbad4106 : Fix broken test (#123034)
f461444be8 : [inductor] make inductor work with new triton kernel launch API (#123076)
76a87e33a0 : Remove cuda dependencies when building AOTriton (#122982)
c422bce131 : [codemod] Fix some namespace issues in caffe2 (#121847)
533c1b6c49 : Disable vulkan logsoftmax test (#123103)
d7a274e1b0 : [dtensor] switch aten.t to use op strategy (#122950)
9e1447dad6 : [dtensor] make sure expected input spec have correct tensor meta (#122949)
afee5bea92 : [dtensor] refactor schema suggestions in output sharding (#122929)
b4c810491e : [export] Temporarily block mutating ops in quant tests. (#122863)
526ca5f28e : [vec] fix compile warning in vec_n.h (#123090)
9ff2a9dcdd : [dynamo] Skip leaf check on `assert_metadata_eq` if grad tensor level is `-2` (#122728)
03439d4c1c : [inductor] Lower divide by constant as multiplication by reciprocal (#121924)
6939279a17 : [dynamo] Forward OptimizedModule.__setattr__ to the wrapped module (#122098)
dd8a24b8b7 : [xla hash update] update the pinned xla hash (#123078)
4b725e1619 : [AOTInductor] Support quantized linear on CPU with fbgemm (#123069)
6b1f13ea2f : Add skip models by device in Dynamo Test (#122591)
8b7da5b791 : Inductor cpp wrapper: fix dtype of ShapeAsConstantBuffer (#122297)
781e8d2201 : [dynamo] Support __next__ on UserDefinedObjectVariable (#122565)
5fc0f52bf0 : [BE] Use modern C++ in ATen tests (#123031)
fa6178d246 : [CI] Updated expected result files after https://github.com/pytorch/pytorch/pull/122846 (#123035)
6c2f36c984 : Upgrade submodule pybind to 2.12.0 (#122899)
6d8bb0e984 : [Distributed] [1/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#122884)
a52e89b6f7 : [inductor]re-enable cpu reduction ut (#122289)
56451cd49d : Enable x86 CPU vectorization on windows [submodule sleef] (#118980)
2b1ba0ceae : [DeviceMesh] Cache and reuse sliced result (#122975)
35c493f2cf : [CPP Extension] Escape include paths (#122974)
557e7c9c16 : Add some type hints to functions and update a few spelling mistakes (#123015)
e203aa9fab : [FSDP] [easy] fix HSDP validation error msg (#123019)
ec58f1f74e : [inductor] make mask_rcnn inference work in max-autotune mode (#123008)
5e878be101 : Revert "Enable x86 CPU vectorization on windows [submodule sleef] (#118980)"
b8550f527f : Support gpu trace on XPU (#121795)
eb7adc3ae0 : Refactor gpu trace to be device-agnostic (#121794)
99f8f77de9 : [Inductor] Fix AFOC QPS Regression. (#122944)
2cd3ef4777 : Check scale dtype for fake_quantize_per_channel_affine_cachemask (#120987)
07f0ff6ed7 : [DCP][FSDP2][Test] Add_adamW to test_train_parity_2d_transformer_checkpoint_resume (#122002)
ed457c7dbe : [export] Add torch_fn (#122693)
3a9eead4ab : [inductor] Don't compile MultiKernelCall in a subprocess (#123010)
6c0911f1d9 : [inductor] Skip cudagraphs warning on CPU (#123009)
0b7a156f68 : [executorch hash update] update the pinned executorch hash (#122662)
c66a44ea79 : [AOTInductor] Support many outputs aliasing the same tensor (#122846)
aaba3a87b1 : tune down batch-size for res2net to avoid OOM (#122977)
5a06b8ebfd : Remove skipIfTorchDynamo from TestComposability in test_eager_transforms.py (#121830)
3d3d4e1cd5 : export XPUStream to doc (#121398)
f4ff063c33 : Add attributes to xpu device prop (#121898)
b5bef9bbfd : Fix cpp tests not running + failing to surface (#122845)
4282bb8b07 : [c10d] add the source rank which detects the timeout (#122850)
d7d77a152c : [ez] Increase slow grad check shards 4 to 6 (#122631)
ea33adf6c2 : [vec] test VecMask in vec_test_all_types (#122878)
c9b32c9caa : [vec] test at::vec::convert in vec_test_all_types (#122869)
6f4ed57b8a : [inductor][cpp] unified the vectorized conversion with `at::vec::convert` for all data types (#119979)
05e54536fb : [CI] Removed tests for torch.utils.tensorboard.summary.hparams (#122556)
482d8bf1ea : [aoti] Change aot_compile callsites (#122225)
267145c5d0 : Enable full state checking (#122971)
4d6cb7bca0 : Use Q-NEON register to compute the dot product (#122952)
73e362756b : Avoid COW materialize in conv forward ops (#122748)
7423092227 : [TorchGen] [2/N] Remove unused variables and simplify dictionary iterations (#122585)
57a9a64e10 : [BE] Give a different error message when evaluating an integer. (#122938)
3178ba0dc9 : Don't use sympy Float functions, use an opaque one with no reasoning (#122823)
ae0cf1f98d : [TD][ez] Set pytest cache bucket default to gha-artifacts (#122901)
99d939f51f : [dynamo] Bugfix for HASATTR guard (#122947)
0a7162f898 : Fix svd_lowrank parameter `M` (#122681)
487b6d40ec : Add RMSNorm module (#121364)
3243be7c3a : [FSDP2] Removed `wrapSwapTensorsTest` since no longer needed (#122962)
a236fa9f06 : Revert "[aoti] clear precomputed symbol replacements before cpp wrapper compilation (#122882)"
2a137f7af1 : [dynamo] Support hasattr on UserDefinedClassVariable (#122564)
772e142e70 : [dynamo] Delay cuda device registration (#122795)
315bd951e4 : Add inductor fx pass unit test for shape propagation (#122897)
b83c94339e : Fix performance regression and memory storage handling of Flash Attention on ROCM (#122857)
d8b69de73b : [EZ] Run fp16 torch.mm/torch.mv across CPU threads (#122951)
fb90b4d4b2 : [TorchGen] Use std::optional in generated code (#121454)
375a8041ed : [AOTI][refactor] Improve logging (#122932)
769d1909f0 : Enable clang-tidy warnings of aten/src/ATen/functorch (#122933)
38946bff51 : Added DispatchKey.CompositeImplicitAutograd to all upsample_nearest*.default decompositions (#122782)
b524a404e0 : Fixed support for uint8 in upsample bicubic2d decomposition (#120411)
d94db5f6ee : Enable x86 CPU vectorization on windows [submodule sleef] (#118980)
35c56f85fd : [dynamo][pt2d] avoid skipping modules from torch/testing/_internal (#122851)
10bdf64427 : Properly pexpr the actual sympy.Expression, don't repr it. (#122893)
ed37fbdf60 : made gpt_fast benchmark run faster (#122872)
b9c9f037d1 : Added some checkpointing tests (#122848)
b6201a60c5 : [BE] minor logging cleanup in distributed (#122921)
6a45809580 : Simplify forward AD missing support error (#122639)
76d8020e62 : Add tests for pre_dispatch + run_decomp flow and taskify failures (#122508)
f041df8530 : Fix order conditioning of norm kernel (#122874)
6b8205d3de : Revert "Support map in pre-dispatch functionalization (#121444)"
16771747c2 : Add tensor step and capturable support to rprop (#122261)
e63e013c3b : Skip use_count() debug assert for _nested_get_offsets() (#122917)
6fc5ad931c : Use zeros for NJT dummy to avoid messing with randomness (#122902)
f476d707fd : Remove previous grad impl. in torch dynamo (#122215)
079feea337 : Support map in pre-dispatch functionalization (#121444)
481c9bb1fc : Upgrade submodule oneDNN to v3.3.6 (#122164)
3924d2189c : [FSDP2] Simplified `_move_states_to_device` (#122907)
3beb9d85a6 : Revert "Add non strict inline constraints and runtime assertions to non-strict exported program (#122722)"
8852b09abc : [FSDP2] Used `_chunk_cat` for reduce-scatter copy-in (#122888)
8df99732a4 : Revert "Workaround dind-rootless volumes mount as root (#122787)"
dacc73669c : [export] Make quantizer compatible with the standard nn_module_stack. (#122819)
384de46395 : [aoti] clear precomputed symbol replacements before cpp wrapper compilation (#122882)
646dd1ab8d : Rewrite quantized conv transpose2d for vulkan (#122547)
71b5b7e081 : Let dynamo trace some functions in functorch.deprecated.* namespace (#121665)
966ae943df : Add wrapper for fbgemm quantization operations (#122763)
e296722e0e : Z3 validation: Lift operators later when we actually run with Z3 (#122791)
3d2d7ba19d : Delete torch.autograd.function.traceable APIs (#122817)
a3b30851c5 : Add quantized.linear_unpacked_dynamic_fp16 (#122762)
59f6393209 : [docs] Update PT2+Profiler docs (#122272)
091a24495b : [AOTInductor] Support use_runtime_constant_folding for CPU. (#122563)
8a33a77fd1 : Back out "Added a check in register_lowering to avoid decomposed ops (#117632)" (#122709)
4670dcc94c : [Inductor]Fix a couple of broken unit tests (#122714)
07f94df1a6 : [torch quantization]fix HistogramObserver OOM when (self.max_val - self.min_val) is too small (#122659)
d65b9dff73 : [AMD] turn off triton memcache for amd devices (#122560)
d9a08de9a4 : Add Opinfo entries for HOP testing (#122265)
0bfa9f4758 : [ROCm][ATen][Native] Fix kernel cache selecting kernels for incorrect architectures (#121401)
9693797491 : [PT2][Inductor][Observability] Improve the optimus scuba log (#122361)
049d68d8bb : [inductor][Autotune] Add matrix_instr_nonkdim to triton_meta (#122852)
1e8d4b389b : Super tiny fix typo (#122881)
958dbb876c : Revert "`_foreach_copy` with different src/dst dtypes (#121717)"
8698121636 : Revert "Add RMSNorm module (#121364)"
8007d9a34a : Revert "[fx] Preserve Fx graph node order in partitioner across runs (#115621)"
9208df45cb : Fixed increasing CPU overhead of `RemovableHandle.__init__` (#122847)
4290a57e9c : Revert "[NJT] .to() properly updates device of offsets (#122797)"
d6aed1b692 : Fix clang-tidy warnings of aten/src/ATen/functorch (#122779)
6e1c81c687 : Revert "Let dynamo trace some functions in functorch.deprecated.* namespace (#121665)"
f9eab9ca92 : Let dynamo trace some functions in functorch.deprecated.* namespace (#121665)
f178d996a8 : [dynamo] Fix traceback generation on runtime errors (#122746)
1d96791661 : [dynamo] Fix list proxy to list element proxy source propagation (#122691)
0284bca99b : Don't cache device_count if we haven't initialized CUDA yet (#122815)
84dc76156a : Workaround dind-rootless volumes mount as root (#122787)
d1da9cc654 : [ClangTidy] Disable misc-include-cleaner (#122855)
8c8e4e31f2 : Some improvements to nonzero post guard_size_oblivious (#122156)
caa57e4fcd : Add tensor step and capturable support to rmsprop (#122264)
927bc4b558 : [vision hash update] update the pinned vision hash (#122754)
c10352a406 : [audio hash update] update the pinned audio hash (#122584)
235f24fc66 : [inductor] Add FileLock around V.debug.copy (#122665)
1b5ccdb0f0 : Avoid COW materialize in more forward ops (#122720)
60f3c092d4 : [dynamo] Config option to Inline builtin nn module forward (#122725)
d4317becce : [dynamo][easy] Force recompilation in a test (#122818)
52b1d2a73d : Increase timm batch sizes to make less overhead-bound and less noisy (#122581)
e6ee8322d7 : nn.Module: use swap_tensors for Tensor subclasses (#122755)
3e7fd45b40 : [NJT] .to() properly updates device of offsets (#122797)
574a8ccf10 : Remove several `expectedFailureNonStrict` (#122802)
12116aee68 : Add Flash Attention support on ROCM (#121561)
8d676a6e8e : [dynamo][cpp-guards] Bugfix for size/strides for tensor match (#122828)
66510c641f : [c10d][NCCL] Refactor coalesced storage (#122651)
cc12668053 : Fix swap_tensors path in _apply for modules that inherit from RNNBase (RNN, GRU, LSTM) (#122800)
0348773655 : Forward fix for subtly breaking AC with compile in the case of stacked (#122841)
a8b7480f0d : fix dynamo.explain examples (#122745)
a54ea7bbd8 : Made several changes to min-cut partitioner that allow it to recompute more things (#121692)
bef01c7c2b : Revert "Optimize multi_tensor_apply (take 2) (#119764)"
222dfc4282 : [Inductor] Run pattern matcher over the original graph (#122519)
530e13cf3d : Revert "[c10d] disable compute_duration by default (#122138)" (#122539)
933d3a7829 : Allow dynamo to inline through "hessian" (#121410)
a7306de0dc : Add RMSNorm module (#121364)
b693fff5d7 : Add non strict inline constraints and runtime assertions to non-strict exported program (#122722)
abe4a0e9eb : [dynamo] pop result of print reordering (#122744)
76fe0faadd : [dynamo, 3.12] add END_SEND (#122743)
c5d372dafc : [dynamo, 3.12] trace through __mro__ attribute access (#122742)
71d40ff861 : [dynamo, 3.12] fix typing variable tracing (#122741)
5d0a792d5f : [dynamo, 3.12] fix some tests (#122740)
a9704848d1 : [dynamo, 3.12] add CALL_INTRINSIC_1 (#122739)
8e5a4248a3 : [dynamo, 3.12] add LOAD_SUPER_ATTR (#122738)
8cd7bb7422 : [dynamo, 3.12] add LOAD_FAST variants (#122737)
a9b27bbbe9 : [dynamo, 3.12] update jump instructions (#122530)
f44f16ebd5 : [dynamo, 3.12] add END_FOR (#122456)
bcdd0c6f59 : [dynamo, 3.12] add BINARY/STORE_SLICE (#122455)
7b13228038 : [dynamo, 3.12] fix DICT_VERSION C++ guards (#122449)
01547960bc : [dynamo, 3.12] remove LOAD_METHOD, update LOAD_ATTR (#122356)
8ba26f4aa5 : [dynamo, 3.12] support RETURN_CONST (#122355)
3a67c86f72 : [dynamo, 3.12] remove references to PRECALL instruction in 3.12 (#122354)
35382f0573 : [dynamo, 3.12] Use CPython internal _PyOpcode_Caches instead of hardcoding (#122335)
2564f6cf0e : [dynamo, 3.12] Allocate Dynamo shadow frames by mimicking CPython (#122146)
ccfc87b199 : include scheduler_on_plateau in optim.h (#121722)
ceff2205e9 : [dynamo][cpp-guards] Bugfix to pass on correct example_value (#122769)
7281c5afdc : [dynamo][fbcode][torchrec] Selectively inline torchrec/distributed/types.py (#122716)
5b42c41b19 : [dynamo][improve-guard-overhead] Skip TENSOR_MATCH guards on parameters for optimizers (#122647)
c108696228 : [dynamo][guards-cpp-refactor][easy] Env variable to turn on cpp manager (#122646)
1b9c7e41bb : Remove .data call in LSTM as it is not necessary (#122733)
1d6fc0d4de : Fixed `_infer_device_type` warning in `checkpoint` (#122726)
37e3c8f33f : [DCP] Supporting resolve_bytes in LoadPlanner (#122700)
cd51496f8b : add a couple debug options (#121033)
5af839f86d : [quant][pt2e] Enable observer sharing between different quantization specs (#122734)
b63f6f78dc : Revert "[Inductor] Run pattern matcher over the original graph (#122519)"
f3b82a4dc2 : [xla hash update] update the pinned xla hash (#122628)
f140309e9c : Revert "Only update momentum buffers for SGD if momentum is enabled (#122349)"
70c3deef2d : Revert "[xla hash update] update the pinned xla hash (#122628)"
eb5381da66 : Skip storage check debug assert in view codegen when output is a subclass instance (#122718)
105381ea11 : [inductor][cpp] simplify CppVecKernelChecker (remove bool/int8 load as mask and load as float flags) (#119734)
49121603ab : [inductor][cpp] support vectorized indirect indexing (#119655)
a697d972b1 : Fix torchbench errors (#122735)
367ec62ae3 : [inductor][cpp] generalize vector mask for dtypes (#119654)
f2c1060de3 : [fx] Preserve Fx graph node order in partitioner across runs (#115621)
d1104d76aa : [Easy] Fix freezing bug with mismatched bias sizes (#122724)
249e65b92d : Graph-Safe RNG State Exchange for Tensor Parallelism (#114068)
fe41ba4765 : Optimize multi_tensor_apply (take 2) (#119764)
67a4d6d6cb : Stopped TORCH_COMPILE_DEBUG from printing out a bunch of logs (#122688)
602c2af9e3 : Cleaned up/fixed get_args after_aot repro (#122686)
c81c9ba472 : Disallow {FakeTensor,FunctionalTensor}.data_ptr (#122514)
04399a3091 : [xla hash update] update the pinned xla hash (#122628)
07b618e2d4 : Graph break cleanly in Dynamo for module parametrization (#121041)
2367d0dacd : [AOTInductor] Add tensor_constantX to pass constant buffer update's check (#122562) (#122690)
09cb42ce29 : [dynamo] delete graph_out_{n} after restoring local vars (#122658)
df724153c1 : Add option to skip cudagraphing on dynamic shape graphs (#122520)
e229ec6886 : [NEON] Speedup float16 convert (#122702)
6767c04fde : Forward fix for broken internal tests related to NJT view dummy (#122704)
291848bf30 : [Build] Fix AVX detection logic (#122708)
3bede14fa7 : Don't create world pg variable out of thin air when rewriting c10d collectives (#122561)
852111e1c2 : [TORCH_TRACE] Record stack when no compile context is available (#122644)
f631586084 : Revert "[dynamo] Forward OptimizedModule.__setattr__ to the wrapped module (#122098)"
537cd66e73 : [Inductor] Support custom op in JIT with cpp wrapper (#122554)
e61aaab725 : Log autotune time in scuba (#122637)
1f5fcb4e20 : [Inductor] Run pattern matcher over the original graph (#122519)
8cfbdc0451 : [Easy][DCP] Fix small typo in assert (#122633)
30a579dba3 : Add XPU ATen merge rule (#122484)
e08cbc0d41 : update comment of test_invalid_last_dim_stride in test_transformers.py (#122679)
8bad7b63c8 : [ez] Add more files to trigger inductor (#122669)
9b90c5e2a1 : [CI] Switch pull job linux-jammy-py3_8-gcc11-build to use ARC with runner groups (#122503)
85845a29db : Refactor ShapeEnvSettings so it's directly on ShapeEnv (#122310)
7e176ebb47 : Log compilation_metrics to TORCH_TRACE (#122638)
99c822c0ba : Let dynamo inline through jacfwd (#121254)
2b4173e0de : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op HardTanh with int8-mix-bf16 (#122374)
293579363c : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op HardSwish with int8-mix-bf16 (#122373)
caf9c23310 : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op SiLU (#122268)
41d24df08f : [export] hack skip index_put_ in dce (#122683)
e0329cba8a : [Quant] [PT2] Add SiLU into X86InductorQuantizer Conv2d Unary Annotation (#122267)
b7089937dc : Disable test (test_mm_plus_mm2_cuda_cuda_wrapper) (#122682)
f8eeae7aaa : Enable CPP wrapper codegen registration (#121296)
d1f58eaaf5 : [inductor] Fix bug with freezing + split_cat passes (#122544)
268b0cc714 : Do not run CUDA lazy init if it is triggered with fake mode on. (#122636)
dd3f2cb53a : [Inductor] Add NEON ISA support on arm64 Macs (#122217)
a333b080c1 : Only update momentum buffers for SGD if momentum is enabled (#122349)
0c47f8028e : Keep example_inputs when saving and loading ExportedProgram (#122618)
47e8d60627 : [dtensor] add op support for view_as_complex and view_as_real (#122569)
1af6fc5e03 : Remove top-level DisableFuncTorch; clearing interpreter stack should work. (#122610)
f42818321b : Restore DILL_AVAILABLE for backwards compat with torchdata (#122616)
55f36d1ada : Revert "[AOTInductor] Add tensor_constantX to pass constant buffer update's check (#122562)"
4e0b5d59fa : [dtensor] add backward support for scaled dot product attention (flash-attention) (#122541)
83ad8e01b1 : fix the problem that cpu_fallback for aten::triu_indices on custom device crashed (#121306)
5e66bf5f42 : Avoid COW materialize in nn.functional forward ops (3) (#122443)
b6982bf2b2 : [dynamo] Forward OptimizedModule.__setattr__ to the wrapped module (#122098)
eda279c997 : [CpuInductor] Implement masked_load for integral types (#122608)
57a3d00b06 : [AOTInductor] Add tensor_constantX to pass constant buffer update's check (#122562)
ebde6c72cb : Precompile triton templates (#121998)
9b095c3fe6 : [dynamo] Config to not emit runtime asserts (#122603)
1f67da5105 : [executorch hash update] update the pinned executorch hash (#122152)
46a76cfef5 : [ROCm] Fix test_trace_rule_update.py (#121524)
bc7f3859b3 : Update jvp to support symbolic execution. (#120338)
1c1268b6e9 : seg-fault of "basic_string::_M_construct null not valid" fix for getNcclErrorDetailStr (#121905)
05bbcae5bb : Refactor functorch meta conversion (#122202)
9223b2cb31 : Pop codegened parent graph from wrapper in GraphLowering (#122469)
b2c496ba24 : Revert "[TorchGen] Add mutable parameter to valuetype_type function in api/cpp.py (#121415)"
f84e3bf36d : [ez] Fix XLA auto hash updates (#122630)
9d1de31634 : [BE][CPUInductor] Use C++17 helper templates (#122607)
2d4197c9b7 : add case for creating storage on ort (#122446)
2db7d874a9 : [inductor] Improve error message for shape errors in slice_scatter (#122543)
db506762d1 : Revert "Change ATEN generator argument type to const std::optional<Generator>& (#120076)"
c7bf5871ce : CUDAEvent::elapsed_time could accidentally initialize a non-used GPU (#122538)
198927170d : Avoid COW materialize in nn.functional forward ops (2) (#121992)
55becf02bc : Avoid COW materialize in nn.functional forward ops (1) (#121991)
4c70ab26ef : [MPS] Enable `index_select` for complex types (#122590)
e6a37eeb06 : run some cuda testcases on other devices if available. (#122182)
70ac13b876 : [ez][TD] Hide errors in llm retrieval job (#122615)
47a9725de9 : Implement prefer_deferred_runtime_asserts_over_guards (#122090)
e49a38973f : Update DimOrDims typing in torch.sparse (#122471)
06f22537ca : [dynamo] Suppress warning about torch.autograd.Function() (#122566)
0465a90b00 : [export][reland] Fix unflattened submodule ordering. (#122341) (#122507)
11dfa72153 : [BE] Remove unnecessary state dict update. (#122528)
5152945441 : GPT2 SDPA inference pattern-matching for Inductor-CPU (#121866)
4dc09d6aa4 : Revert "Graph-Safe RNG State Exchange for Tensor Parallelism (#114068)"
b9d6f8cc18 : Fix clang-tidy warnings in aten/src/ATen/core/*.cpp (#122572)
1e404c9b12 : Remove redundant query to tensor_to_context (#122278)
49b81af45f : Delete dead memoized_only kwarg in FakeTensor (#122271)
f32ce4e28e : Delete FakeTensorConverter.__call__ in favor of from_real_tensor (#122270)
069270db60 : [dynamo] Fix list comparison ops (#122559)
5891c5b3a6 : Factor meta conversion through serializable MetaTensorDesc (#122044)
cf06189a2d : [CPPInductor] Fix another out-of-bounds access (#122580)
deeeaded1f : Add metas for randint/rand factory functions out overload (#122375)
a01d35c7f6 : [TorchGen] Remove unused variables (#122576)
e75ecd5618 : [BE][veclib] Use `is_same_v`/`enable_if_t` (#122533)
14e348b7ad : Handle JIT test failure when the GPU is newer than the CUDA compiler or vice versa (#122400)
36188360dd : [dynamo] support torch.distributed.{group.WORLD, GroupMember.WORLD, distributed_c10d._get_default_group} (#120560)
3e4a4bea12 : [dynamo] Graph break on SymNode control flow (#122546)
adeedc060f : [Inductor] Fix unbacked symbol in stride when using item() (#122298)
c1fe09dc37 : [TorchGen] Add mutable parameter to valuetype_type function in api/cpp.py (#121415)
ca9606f809 : Update COW OpInfo test to include kwargs and expected materialization (#122437)
9d4218c23e : Handle JIT test failure when the GPU is newer than the CUDA compiler (#122402)
808a035658 : [Dynamo][4/N] Enable clang-tidy coverage on torch/csrc/dynamo/* (#122534)
f0d461beac : [vision hash update] update the pinned vision hash (#122536)
5f7e71c411 : [dynamo] Add HASATTR guard for UserDefinedObject attrs (#122555)
07d037674f : [inductor] Fix issue with randint + symbolic shapes (#122428)
476585b190 : Preserve unbacked SymInt on SymNode (#120816)
a52b4e2257 : Change ATEN generator argument type to const std::optional<Generator>& (#120076)
788638fcdc : Suggest TORCHDYNAMO_EXTENDED_DEBUG_ envvars when appropriate (#122473)
cdc7f0fd3b : Fixed failing pyhpc_equation_of_state due to cpp nodes fusion with compatible ranges (#122420)
4758837930 : [BE] Do not use `importlib.load_module` (#122542)
bf40e3f880 : [EZ][BE] Add missing `acosh` op to vec256_float_neon.h (#122513)
a39e638707 : Update bsr_dense_addmm kernel parameters for sizes 3 x 2 ^ N (#122506)
8a209344c9 : Fix access to uninitialized memory in VSX vector functions for quantized values (#122399)
c677221798 : remove torchao dependency (#122524)
19d27a13ea : [CPUInductor] Fix out-of-bounds read/write in cvt_int64_to_[fp32|int32] (#122511)
4d8a3f8bb3 : changed aliasing checks to properly recurse for computing last usage (#122444)
50036ec781 : [Inductor] Add a test for creating a cpu inductor-> triton backend (#122396)
41d69ff324 : Add a shape inference tool (#120097)
29bca8547b : Fix failing test_cpu_repro without vectorization support (#117262)
a84f1d3def : [effects] Fix backwards handling (#122346)
e7fa3f7812 : AOTDispatch: allow subclasses to correct when we guess metadata of tangents incorrectly (#118670)
f7b8d8e249 : Support for sapling scm (#122072)
482f6c4693 : [Dynamo][3/N] Fix clang-tidy warnings in torch/csrc/dynamo/* (#122392)
3f99306452 : [export] Remove from_export flag (#122500)
03184a82dd : [TD] TD on ASAN PR jobs (#122332)
271cc687de : Audit retracibility errors and fix some ez ones (#122461)
29132c2e47 : Prevent dup initializers when ONNXProgram.save is called many times (#122435)
4eaa000acc : Teach dynamo about torch.func.jvp (#119926)
3795ebe925 : Revert "[Inductor] Make codecache CUDA compilation more robust & flexible (#121490)"
97d3bf71b9 : Revert "[Inductor Cutlass backend] GEMM size threshold for Cutlass backend usage (#121491)"
8013c4409f : [inductor] config to control whether we assume inputs are aligned (#122158)
5790096059 : [dynamo] Remove uses of `raise unimplemented` (#122136)
ed15370aab : [aoti] Add handling of ir.Constants in promote_constants (#122419)
52e9049ffa : Remove unused variables (#122496)
bbe846f430 : Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support (#118828)
34d33df056 : [DCP] Check if pg exists in async before checking for cpu PG (#122316)
400cc518fc : pt2 dper passes: run shape prop before each pass (#122451)
152fa9ecc2 : skip moondream for training (#122483)
a3d4eaf253 : [inductor] device guard for max autotune benchmark (#122479)
3db64c1955 : [NCCL PG] Enable ncclCommDevIdxMap unconditionally (#122049)
f305c96cac : [DCP] Add bytesIO object to test_e2e_save_and_load (#122112)
86082f1fdc : [aot_inductor] added runtime checks for input/output tensors in debug compile mode (#122047)
90a13c3c5b : Added a check in register_lowering to avoid decomposed ops (#117632)
9347a79f1c : [Watchdog Timer] Clear timer for already terminated process (#122324)
018f5e2c32 : Fix unused variable warning in `int4mm.cu` (#122286)
7fd14ebb52 : [export] Use randomized inputs to examples. (#122424)
60bc29aa0b : Revert "[Quant] [PT2] Add SiLU into X86InductorQuantizer Conv2d Unary Annotation (#122267)"
b30b396d05 : Revert "[Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op SiLU (#122268)"
777ac511cc : Revert "[Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op HardSwish with int8-mix-bf16 (#122373)"
dbedc6bb7c : Revert "[Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op HardTanh with int8-mix-bf16 (#122374)"
02fee6caec : Revert "Change ATEN generator argument type to const std::optional<Generator>& (#120076)"
e6986e4317 : Public API for NJT construction from jagged components (#121518)
65c37fe05a : AOTAutograd: ensure traced tangent subclass metadata takes non-contiguous outputs into account (#118669)
09be5800c8 : dynamo: support placement kwargs for DTensor.to_local() (#119947)
2e44b12dd4 : dynamo: handle DTensor.device_mesh.device_type (#118803)
ea8e0c75c7 : [quant][pt2] Fix create FQ with FixedQParamsQSpec (#122104)
6e6891e843 : [jit] Fix _batch_norm_with_update shape function (#122430)
23a6d74f93 : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op HardTanh with int8-mix-bf16 (#122374)
f65373e278 : Revert "Factor meta conversion through serializable MetaTensorDesc (#122044)"
700c92e1b9 : [Inductor Cutlass backend] GEMM size threshold for Cutlass backend usage (#121491)
d34514f8db : Renamed mutationlayout/aliasedlayout (#122474)
eca30df846 : Added load_args to repro (#121624)
783fd89ff1 : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op HardSwish with int8-mix-bf16 (#122373)
99f0fec7d0 : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op SiLU (#122268)
bb75313f0a : [dynamo] Optimize handling of BINARY_OP (#122465)
2c6eeb26d3 : [Quant] [PT2] Add SiLU into X86InductorQuantizer Conv2d Unary Annotation (#122267)
6bbd697306 : [Inductor] Make codecache CUDA compilation more robust & flexible (#121490)
a337ee0a3a : [Quant] Enable QConv2d with silu post op (#122266)
b78e8c0d37 : remove duplicate method run_subtests (#122421)
6ba85cfc2a : Fixed memory leak in Python dispatcher w.r.t. THPDevice. (#122439)
3600778ede : Do not create a new node if no normalization is needed (#122330)
e2d89e9704 : Factor meta conversion through serializable MetaTensorDesc (#122044)
ecbe82b9ce : Change ATEN generator argument type to const std::optional<Generator>& (#120076)
ef0d470eb3 : [vision hash update] update the pinned vision hash (#122453)
fb57d1699b : [export] Fix handling output in remove_effect_tokens_pass (#122357)
09eb07bee8 : Introduce XPU implementation for PyTorch ATen operators (#120891)
e419011471 : [inductor] Add torch.while_loop support to JIT Inductor (#122069)
5e0440edb4 : Revert "Optimize multi_tensor_apply (take 2) (#119764)"
470b44c048 : Support for torch.nested.as_nested_tensor(t) (#113280)
cd6bfc7965 : Proper view support for jagged layout NestedTensor (#113279)
bde22835c6 : [PT2] - Guard oblivious on meta registrations (#122216)
4f93b3d958 : [Dort] Reduce excessive warning to info (#122442)
a001b4b048 : Inductor: Don't clamp views when the views come from split_with_sizes (#122149)
b1fa0ce4aa : [export] build the infra to rollout predispatch export. (#122326)
4b535906aa : Better handle test-config labels on PR (#122155)
bce640709c : Revert "Precompile triton templates (#121998)"
c4486d3e88 : Allow fake models to run with ONNXProgram.__call__ (#122230)
4ba51bb2c4 : Add keys used for templated attention impls (#122423)
224beecee6 : Revert "Proper view support for jagged layout NestedTensor (#113279)"
12e7602cf9 : Revert "Support for torch.nested.as_nested_tensor(t) (#113280)"
816db3bd29 : Revert "Public API for NJT construction from jagged components (#121518)"
48afb5c325 : [inductor] Use python constants in IndexPropagation (#122031)
99055ae165 : [aoti] Fix compilation bug for buffer mutations (#121688)
332456c44d : triton_kernel_wrap shouldn't access FakeTensor.data_ptr (#122418)
621fdc9db8 : infer_schema can add alias annotations when passed a list of mutated args (#122343)
639d6201b4 : Expand the types infer_schema can infer (#122320)
0dd78f1828 : Add standalone tests for infer_schema (#122319)
23524710e6 : [dynamo] use proxies to nn.Module in dynamo generated GraphModules (#120756)
2cd0a5d516 : [Inductor] Fix for WrapperCodeGen.statically_known_int_or_none (#121808)
968c4c4154 : Revert "Refactor gpu trace to be device-agnostic (#121794)"
13afbcfc85 : Revert "Support gpu trace on XPU (#121795)"
182bb0f2ca : Revert "Introduce XPU implementation for PyTorch ATen operators (#120891)"
628dcde136 : [AOTI] Disable stack allocation when there is a fallback op (#122367)
af9b71c82f : fix typo in while_loop_test (#122416)
d131cbc44f : Fuse the input -> p2p buffer copy into one-shot all-reduce kernel when the input is small (#121213)
765c3fc138 : fix breaking changes for ONNX Runtime Training (#122000)
c2651a7f0e : Make check_is_size clamp to sys.maxsize - 1, so sys.maxsize comparison returns False (#122372)
780f70b728 : Make expected stride test in torch._prims_common size oblivious (#122370)
25bf5f7e61 : Revert "Enable x86 CPU vectorization on windows [submodule sleef] (#118980)"
b8df2f0ca5 : Precompile triton templates (#121998)
17175cdbc7 : [Docs] Add extended debugging options for troubleshooting (#122028)
c20bc18d59 : [export] allow static constraints in dynamic_shapes (#121860)
16935de961 : Support alias for NestedTensorCPU/CUDA (#117711)
148a8de639 : Introduce XPU implementation for PyTorch ATen operators (#120891)
204fd69ca6 : Make ONNXProgram.model_proto and disk file the same (#122196)
f9996ed764 : [BE] Enable torch inductor tests running on MacOS (#122360)
456b112dca : [inductor] Support non-Tensor predicate in torch.cond (#122378)
0b68a28c87 : Optimize multi_tensor_apply (take 2) (#119764)
0d8e960f74 : Revert "[Sparsity] add support for H100 compute capability 9.x (#121768)"
7f8bb1de83 : [Dynamo][2/N] Fix clang-tidy warnings in torch/csrc/dynamo/* (#122362)
ea1cd31b50 : [c10d] Log the target of FR dump (#122345)
365e89a591 : Add tensor step to adadelta (#122252)
7fa1be506b : Add an option to sdpa benchmark to specify backend (#122368)
18c164ef7c : [Inductor] Match insignficiant strides on outputs (#122239)
b915877deb : Support numpy array in `Tensor.__eq__` (#122249)
bf18e967b4 : [c10d] disable compute_duration by default (#122138)
ea6f67853e : [inductor fbcode] Add python include paths for Python.h (#122363)
d4dff9cf5e : Public API for NJT construction from jagged components (#121518)
17c9c70265 : Support for torch.nested.as_nested_tensor(t) (#113280)
77bed8f7f2 : [ONNX] model_type flag is only supported under `SKIP_XFAIL_SUBTESTS` (#122336)
cc0cadaf4c : [vision hash update] update the pinned vision hash (#122154)
61f69c7fc4 : [audio hash update] update the pinned audio hash (#122153)
885fb9742d : Handle special kwargs in user-written Triton kernel calls (#122280)
3e6fdea390 : [ONNX] Fix list dtype finding bug in dispatcher (#122327)
ae913175c3 : Fix GraphModuleDeserializer (#122342)
e9dcda5cba : Graph-Safe RNG State Exchange for Tensor Parallelism (#114068)
91ead3eae4 : Support gpu trace on XPU (#121795)
74deacbf31 : Refactor gpu trace to be device-agnostic (#121794)
57734202c6 : [HSTU][TGIF] Provide a API to check whether running in torch_dispatch mode (#122339)
e38d60bc07 : Remove some stale xla dynamo backend (#122128)
c20cf97366 : Move some cudagraphs checks into C++ (#122251)
be5863de39 : Remove usage of deprecated volatile (#122231)
1686e2d1e4 : [symbolic shapes][compile-time] Minor compile time optimization in has_free_symbols (#122144)
c2eedb7f8a : [Dynamo][1/N] Fix clang-tidy warnings in torch/csrc/dynamo/* (#122259)
c80601f35a : Revert "Avoid COW materialize in conv, log sigmoid, repeat, group_norm, batch_norm (#121537)"
d5b5012dc4 : [CUDA] Raise `softmax_forward_64bit_indexing` GPU memory requirement (#116075)
5855c490f0 : Proper view support for jagged layout NestedTensor (#113279)
057892f4be : [CPU] optimize Lp norm for 1-dimensional vector (#122143)
aa74a8b9e5 : Enable x86 CPU vectorization on windows [submodule sleef] (#118980)
666d6291af : Cast checkpoint weights to match model parameter's dtype (#122100)
2289fa5f5a : [while_loop] fix mode not on stack error (#122323)
512251c8f3 : Use tree_map to get device ids and device types for activation checkpointing (#121462)
1dd1899fd6 : Add missing throw of std::runtime_error in dynamo/guards.cpp (#122306)
d2a8d3864c : [PT2][Inductor] Change the log for the group batch fusion (#122245)
61ff41f0ca : [while_loop] disable closure capturing and manually set the inputs. (#122244)
2f6e8e84c5 : Fix `_chunk_cat.out` issue (#122076)
c84f81b395 : [export] add pass to remove auto functionalized hop (#122246)
d813474363 : [Pytorch] auto format _python_dispatch file (#122226)
821ad56ea6 : [CI] Enables support for pytorch ci build in ARC + introduces _linux-build-rg.yml. (#121930)
91fdaa1b41 : [Sparsity] add support for H100 compute capability 9.x (#121768)
d1e8b97387 : [export] Log module hierarchy. (#121970)
0696db8202 : Revert "Teach dynamo about torch.func.jvp (#119926)"
1d13c82559 : Precompile in background (#121997)
65eb22158e : Revert "Update jvp to support symbolic execution. (#120338)"
072935917b : Update cuda_to_hip_mappings.py (#122110)
334f7e43f9 : [TD] Remove credentials requirement for retrieval (#122279)
2e02e1efad : Skip nonzero unbacked SymInt memo in inference mode (#122147)
15a8185cd3 : Revert "Enable x86 CPU vectorization on windows [submodule sleef] (#118980)"
06db0a9f78 : Revert "Upgrade submodule sleef to fix build warning (#122168)"
8a94005d46 : [dynamo][runtime_asserts] Ignore failures on sorting sympy relations (#122205)
afc4c9382f : Update jvp to support symbolic execution. (#120338)
17489784b6 : Teach dynamo about torch.func.jvp (#119926)
eb1d6ed9f9 : [Inductor] fix addmm fusion check (#121953)
ee6ce31b1d : [BE][fix] fix test_tp_random_state and add it to periodic test list (#122248)
a1d02b423c : XFAIL detectron2_maskrcnn_r_101_c4 CPU inductor accuracy (#122263)
477d154ffd : [dynamo] Add missing _nonvar_fields annotations (#122219)
46bf37b3f7 : [dynamo] Replace VariableTracker.apply with visit/realize_all (#122218)
a0db2e4237 : [dynamo] Fixed handling of ImportError (#122222)
7832efb242 : [export] skip nn_module_stack verifier for non-fx.GraphModule modules (#122210)
7d2b2dec4b : [Pytoch][Vulkan] Register `run_conv1d_context` (#122172)
e7141d117f : [IntraNodeComm] refactor rendezvous into a separate method for better code organization and error handling (#120968)
9f572b99a6 : [Clang-tidy header][29/N] Enable clang-tidy warnings in aten/src/ATen/core/*.h (#122190)
11e64b4ba8 : [dtensor] aten.cat to use stack strategy approach (#122209)
5b7ceab650 : Support auto_functionalize in pre-dispatch (#122177)
dc89d8b74a : Fix broken lint after #116876 (#122253)
de950039fc : Use .get in xml parsing (#122103)
6662627c89 : Add APIs for custom device using TensorIteratorBase. (#120792)
f8565c4a28 : [sigmoid] Clean up serialization API. (#122102)
1f8177dedf : [Inductor][CPU] fix flash attention last_stride!=1 issue (#122083)
55310e58a9 : Use constexpr for index variables (#122178)
eec8b252b7 : Upgrade submodule sleef to fix build warning (#122168)
cbbed46377 : Defer selection of triton template (#120275)
e5e0685f61 : Revert "[dynamo] Forward OptimizedModule.__setattr__ to the wrapped module (#122098)"
19d6004b97 : add int8 woq mm pattern matcher (#120985)
6fefc52a2b : Set py3.x build-environment name consistently (#122247)
6c659bbc36 : [codemod][lowrisk] Remove unused exception parameter from caffe2/c10/mobile/CPUCachingAllocator.cpp (#116875)
6b95dc8884 : [codemod][lowrisk] Remove unused exception parameter from caffe2/torch/csrc/jit/frontend/lexer.cpp (#116876)
d0153ca755 : use make_storage_impl to create storages for COWStorage. (#121896)
4aaf25bc38 : delete useless cast_outputs call in unary_op_impl_float_out (#120486)
2980779d0b : [codemod] Remove unused variables in caffe2/caffe2/experiments/operators/tt_pad_op.h (#120177)
2239b55cd1 : Add some more sanity asserts to checkPoolLiveAllocations (#122223)
139647d317 : Fix #83241: torch.nn.TripletMarginLoss allowed margin less or equal to 0 (#121978)
a843bbdb21 : [codemod] Remove unused variables in caffe2/caffe2/opt/nql/graphmatcher.cc (#118116)
f05af9e377 : [codemod] Remove unused variables in caffe2/caffe2/opt/nql/ast.h (#120176)
03b987fe3f : [CI] Test that NumPy-2.X builds are backward compatible with 1.X (#122157)
f8becb626f : [codemod] Remove unused variables in caffe2/caffe2/contrib/fakelowp/spatial_batch_norm_fp16_fake_op.h (#120178)
94eb940a02 : [codemod] Remove unused variables in caffe2/caffe2/operators/softmax_op_cudnn.cc (#121995)
a6aa3afa77 : [codemod] Remove unused variables in caffe2/caffe2/video/video_decoder.cc (#122151)
a80c60ad8f : [codemod] Remove unused variables in caffe2/caffe2/operators/conv_op_cudnn.cc (#122161)
02f436da6d : [codemod][bugfix] Fix addressing bug in caffe2/caffe2/video/video_input_op.h (#121856)
1c4887d52b : fix dlrm accuracy test in max-autotune (#122012)
c71554b944 : Revert "[aot_inductor][easy] enable test_triton_kernel_multi_output_arg (#122052)"
7678be4667 : Replace numel with sym_numel in is_int_or_symint (#122145)
6915a5be70 : Increase numel limit to 2^63 for replicatepad1d (#122199)
b12d297b44 : [AARCH64] Hide FP16 scalar arithmetic behind proper feature flag (#122204)
901ba2be86 : [quant][pt2e] Add support for conv transpose + bn + {relu} weights fusion in PTQ (#122046)
bc1fef113d : Respect TORCH_DISABLE_ADDR2LINE in symbolizer (#121359)
7718a1cd4f : T159183991: Error: EXC_SOFTWARE / SIGABRT at IGPyTorchFramework:-[MPSImageWrapperTrampoline endSynchronization:] (MPSImageWrapper.mm<line_num>):cpp_exception_clas (#122132)
c0b2e56c8f : Support triton.language.dtype with torch.compile -- Second Attempt (#122141)
58a805da71 : [UserDefinedTriton] Move constant args out of the fx graph (#122140)
c5ffebebab : [export] allow Dim(1,2) for export dynamic shapes (v2 after revert) (#121910)
d56ab7b020 : Revert "[torch export][serialize] create a more compact stacktrace format for serialization (#121675)"
36e5c1dcab : Revert "Teach dynamo about torch.func.jvp (#119926)"
88999674a0 : Revert "Update jvp to support symbolic execution. (#120338)"
e0d57001ef : [codemod] Remove unused variables in caffe2/caffe2/experiments/operators/fully_connected_op_prune.h (#122165)
6bd2d12bc7 : release gil in prepareProfiler (#121949)
7fb2d69282 : [PT2] - Fix cat backwards wrapping on symints (#121527)
8de4d86479 : Back out "[fx] Preserve Fx graph node order in partitioner across runs (#115621)" (#122113)
eae89138d8 : [torch export][serialize] create a more compact stacktrace format for serialization (#121675)
271b12c790 : [Functorch] Bump tolerances for `test_per_sample_grads_embeddingnet_mechanism_functional_call_cuda` (#122014)
ba9a1d96a4 : Add scuba logging for TorchScript usage (#121936)
4819da60ab : [TD] Add LLM retrieval + heuristic (#121836)
cec0fd6f2f : [pt2] add symbolic shape support for decompose mm and expose max_block to user config (#121440)
764eae9c4e : Revert "Add Flash Attention support on ROCM (#121561)"
88ebdbc97c : [dynamo] Forward OptimizedModule.__setattr__ to the wrapped module (#122098)
2164b7f746 : Flatten/Unflatten micro optimization in proxy_tensor.py (#121993)
42624bceb6 : Fixes nan with large bf16 values (#122135)
e26280ad8b : Fix typing for autograd.Function with ctx-less forward (#122167)
f9ed1c432d : Revert "Refactor gpu trace to be device-agnostic (#121794)"
c05bf0037d : [dynamo] Remove copy_graphstate/restore_graphstate (#122067)
7673cb534a : Revert "Skip nonzero unbacked SymInt memo in inference mode (#122147)"
6c01c25319 : [Clang-tidy header][28/N] Fix clang-tidy warnings in aten/src/ATen/core/*.h (#122175)
6c50308801 : [ATen-Vulkan][EZ] Small fixes: fix gpu size calculation and Half scalartype ctype mapping (#122096)
39877abee2 : Update jvp to support symbolic execution. (#120338)
edd04b7c16 : Teach dynamo about torch.func.jvp (#119926)
6b5259e507 : [lint] bump lint dependency PyYAML to 6.0.1 to support Python 3.12 (#122022)
8168338063 : Add CPU implementation for `torch._int_mm` (s8*s8->s32) (#121792)
0d845f7b07 : Fix auto_functionalize (#121990)
a2a88f39ee : Avoid COW materialize in conv, log sigmoid, repeat, group_norm, batch_norm (#121537)
0ff1109e26 : Refactor gpu trace to be device-agnostic (#121794)
09ce76809c : Improve compiler detection on MacOS (#121406)
8499767e96 : add sdpa choice for DeviceType::PrivateUse1 (#121409)
5bc7f7f977 : [dynamo] Make tx.next_instruction lazy (#122066)
153a01833b : [dynamo] Optimize SourcelessBuilder (#122063)
8082adcf65 : [dynamo] Only rename a proxy once (#122060)
2bec55c5f9 : [dynamo] Remove VariableTracker.parents_tracker (#122058)
3c706bf483 : [dynamo] Optimize BuiltinVariable (#122055)
07caea5c12 : [dynamo] Refactor COMPARE_OP and comparison builtins (#122043)
769ff86b91 : [dynamo] Optimize COMPARE_OP (#122039)
e1706bba3b : [Clang-tidy header][27/N] Fix clang-tidy warnings in aten/src/ATen/core/*.h (#122023)
5e26873912 : Skip nonzero unbacked SymInt memo in inference mode (#122147)
8860c625ea : [dynamo][guards-cpp-refactor] Integrate cpp guard manager with CheckFnManager (#120726)
f84d560236 : [dynamo] Raise accumulated cache size limit (#122130)
7084528eb9 : [dynamo][model_output] Do not include none for CustomizedDictVariable (#122005)
2b06098380 : Enable x86 CPU vectorization on windows [submodule sleef] (#118980)
6502c888cf : Enable fx graph cache in torch_test.py when using PYTORCH_TEST_WITH_INDUCTOR=1 (#122010)
18d94d7165 : Make FX nodes sortable (#122071)
1f4d4d3b78 : [fx] preserver partiioner order fix (#122111)
34f36a28df : [MPS] Fwd-fix for clamp regression (#122148)
ae983d2d6e : Fix typo in sparse.rst (#121826)
e6cf3e90a5 : [AOTAutograd / Functionalization] Fix incorrect expand_inverse (#122114)
ba69dc6675 : [Easy] add option to print compilation time (#121996)
2ab8b34433 : Error out in case of in-source builds (#122037)
e6a461119a : [functorch] Add batch rule for linalg.lu_unpack (#121811)
773ae817f7 : Batch Norm Consolidation (#116092)
a17cd226d6 : [inductor] Enable FX graph caching on another round of inductor tests (#121994)
7c5e29ae71 : Back out "Support `triton.language.dtype` with `torch.compile` (#121690)" (#122108)
685ace3834 : [compiled autograd] add dynamo segfault test (#122004)
40acc84aaf : Fix torch.clamp in MPS to handle NaN correctly (#121381)
0a1b3be216 : chore: add unit test to verify split_by_tags output_type (#121262)
676a77177e : Revert "[BE] Migrate pull.yml to use S3 pytorch-ci-artifacts bucket for linux-jammy-py3_8-gcc11 and docs builds/tests (#121908)"
df1cdaedeb : Log restart reasons and extra compile time in CompilationMetrics (#121827)
74c09a757b : Simplify Storage meta conversion with PyObject preservation (#122018)
32410f80ec : [Caffe2 CPU tests] Update CMakeLists.txt (#119643)
5d52b163d1 : [dynamo] Optimize load/store/const op handling (#122038)
4034873a31 : [dynamo] Optimize builtin handling (#122035)
6ca0323615 : [dynamo] Optimize VariableTracker.__post_init__ (#122034)
115c9c6d6b : Remove __getattribute__ on autograd.Function (#122033)
5a10b56083 : [dynamo] Small microbenchmark changes (#122032)
1a58e9d357 : [TD] LLM indexer to run daily (#121835)
ceb1910bad : Revert "[BE] Enables support for pytorch ci build in ARC + introduces _linux-build-rg.yml. (#121930)"
11b36e163d : [BE] Enables support for pytorch ci build in ARC + introduces _linux-build-rg.yml. (#121930)
c4d24b5b7f : special-case cuda array interface of zero size (#121458)
f7908d9fa8 : enable reshape+linear+reshape fusion for dynamic shapes (#121116)
f2f8eeea94 : Inductor: fix Conv output stride for dynamic shapes (#121400)
206da97b8b : [aot_inductor][easy] enable test_triton_kernel_multi_output_arg (#122052)
65ccac6f17 : Fix triton import time cycles (#122059)
bc9d054260 : [executorch hash update] update the pinned executorch hash (#122061)
7380585d97 : [vision hash update] update the pinned vision hash (#122062)
e39aedfcc5 : Fix fx graph triton import bug (#122041)
5030913d6a : [test] Delete variables that have been declared but not referenced di… (#121964)
d9460758df : [Clang-tidy header][26/N] Fix clang-tidy warnings in aten/src/ATen/core/*.h (#122015)
c568b84794 : [dynamo][guards] Move backend match to eval_frame (#121954)
fc504d719f : [executorch hash update] update the pinned executorch hash (#122036)
6f74b76072 : Move get_unwrapped outside of disable_functorch (#121849)
3bd38928ba : [export] Improve consistency for nn_module_stack metadata, add checks to _trace.py (#120661)
6d9588a12b : [inductor] disable linear weight prepacking pass on double (#121478)
9990d1bc22 : Add 'profiler/python' to the package.' (#121892)
5f601a41e0 : Pin protobuf to 3.20.2 on macOS (#121918)
4d9d5fe540 : [executorch hash update] update the pinned executorch hash (#122009)
4d92928fe2 : [dynamo] Add tests for fake FSDP (#121610)
0b7d9711d4 : [dynamo] Add support for nn.Parameter constructor (part 2) (#120965)
040b925753 : [Compiled Autograd] Reorder accumulate grad nodes (#121735)
f0b9a8344a : [vision hash update] update the pinned vision hash (#121177)
b94691700e : [FSDP] Avoided CPU sync in `clip_grad_norm_` (#122001)
7bc91d5dc2 : [mergebot][BE] If we don't have any required checks, don't run required checks (#121921)
2b71b21a3f : Don't use Proxy torch function in the sym size calls (#121981)
37e563276b : Document complex optimizer semantic behavior (#121667)
12662900f9 : [inductor] FX graph cache: Fix bug handling constants (#121925)
6b0f61891f : [Clang-tidy header][25/N] Fix clang-tidy warnings and enable clang-tidy on c10/cuda/*.{cpp,h} (#121952)
0cc60a05da : Revert "Fix torch.clamp in MPS to handle NaN correctly (#121381)"
07ec3356b9 : Revert "Force upsample to be float32 (#121324)"
256c0ec1e5 : [docs] Added comment on replicate -> partial for `_NormPartial` (#121976)
b717aa6f36 : Revert "[BE] Enables support for pytorch ci build in ARC + introduces _linux-build-rg.yml. (#121930)"
ca80d07ac7 : Fix torch.clamp in MPS to handle NaN correctly (#121381)
26aaabb979 : [c10d] initialize lastEnqueuedSeq_ and lastCompletedSeq_ (#121980)
dfc5e9325d : format caffe2/torch/_export/serde/serialize.py (#121670)
53d2188df9 : Update get_aten_graph_module (#121937)
af86d67d61 : [Doc][NVTX] Add documentation for nvtx.range (#121699)
b92daff6e9 : [DTensor] Enable ASGD foreach optimizer and add the associated unit test (#121942)
f4dd2fda51 : [DTensor] Supported 2D `clip_grad_norm_` (#121945)
2c33e3a372 : [BE] Enables support for pytorch ci build in ARC + introduces _linux-build-rg.yml. (#121930)
6f4fa8e9a1 : [inductor] FX graph cache: simplify "current callable" logic (#121903)
d0d09f5977 : Fix torch.compile links (#121824)
8a5a377190 : Move doc links to point to main (#121823)
535bc71d03 : Enable FX graph caching in another batch of inductor tests (#121697)
3ee319c49c : Fall back to eager mode when viewing with differing bitwidths (#120998) (#121786)
409b1a6081 : Add lowering for cummax, cummin (#120429)
d04faf4531 : [dynamo][compile-time] Remove preserve rng state per op (#121923)
67ec870234 : Fix FakeTensorUpdater logic for updating fake tensors (#116168)
239d87af5e : combine loops so fn_name correct in error message (#121601)
39fdde7f84 : [release] Increase version 2.3.0->2.4.0 (#121974)
565d1e28ab : update kineto submodule commit id (#121843)
3c3d7455a3 : Disable inductor (default) and inductor (dynamic) by default in the perf run launcher (#121914)
ef25d83a62 : [export] Add serialization support for tokens (#121552)
014f91a9d9 : [FSDP2] implement HSDP (#121569)
4cbf963894 : [BE] Migrate pull.yml to use S3 pytorch-ci-artifacts bucket for linux-jammy-py3_8-gcc11 and docs builds/tests (#121908)
2770e3addd : Force upsample to be float32 (#121324)
e25054b248 : [compiled autograd] free stack objects before calling compiled graph (#121707)
5a2b4fc8f0 : [dynamo] Convert invalid args into graph breaks (#121784)
fc33bbf827 : better support set_default_dtype(torch.float16), update doc (#121730)
8fdd8125b6 : [executorch hash update] update the pinned executorch hash (#121871)
fb10e13000 : [Clang-tidy header][24/N] Fix clang-tidy warnings on c10/cuda/*.{cpp,h} (#120781)
e4fda049c2 : DTensor: add comm tests to test_tp_examples (#121669)
02083f5452 : [DCP][DSD] Add AdamW to distributed state dict unit tests (#121774)
efbeefbb84 : [executorch] Make trymerge force merges actually work with executorch (#121920)
a623666066 : [dynamo][compile-time] Make output_graph new_var linear (#121858)
3bc2bb6781 : use two pass reduction for deterministic reduction order (#115620)
0cd094a4fd : Revert "[aoti] Fix compilation bug for buffer mutations (#121688)"
01d7c948e2 : Make torch/_inductor/comms.py recognize native funcol IRs as collective IRs (#118498)
60ccf81490 : [dynamo] Refactor update_block_stack into a seperate function (#121810)
1e9a7df8fe : [dynamo] Compile time optimizations in tx.step() (#121790)
1afa8e0985 : Fix #83153: torch.nn.hardtahn allowed min_val to be greater than max_val (#121627)
710446b1eb : [dtensor] refactor and generalize stack strategy (#121869)
92ed8553a6 : Revert "Switch cudagraph backend to cudagraph trees (#121019)" and "Add Cudagraphs disable checking (#121018)" (#121864)
d604ab81a2 : [PyTorch] Fix static runtime sigrid_hash precomputed multiplier pass (#120851)
cceabe873f : [jit] ClassType hashing: hash on compilation_unit as well (#121928)
2d9cee20a2 : [jit] AliasDB type hash - don't always return 0 (#121874)
57b20c51b9 : Don't record autograd state ops while torch.compile in pre-dispatch export (#121736)
bd7beef529 : [Inductor] Update the cpp_wrapper entry function signature (#121745)
8be80706b4 : [AOTI] Add pybind for tensor_converter util functions (#121744)
46493ee9b5 : [AOTI][refactor] Update tensor_converter util functions (#121743)
3df1b3b0ad : [jit] support getattr/hasattr on NamedTuple (#121863)
818b14025a : [AOTI][refactor] Remove is_legacy_abi_kernel and abi_compatible_kernel (#121523)
43e243180b : Add gpt-fast as a static benchmark (#121886)
0e68eb1505 : Add privateuseone flags for c10::EventFlag (#121118)
9f314d4aa8 : [aoti] Fix compilation bug for buffer mutations (#121688)
0636c11811 : [AOTInductor] Include build cmds at the end of wrapper file (#121872)
c409292197 : [sigmoid] Use deserializer from oss. (#121839)
499136a4dd : [Inductor] Fix a dynamic shape problem when lowering diagonal (#121881)
5b1642516f : [with_effects] Skip over profiler.record_function_exit (#121829)
f1f7c5c31e : [ez] Document for add_var_to_val (#121850)
4c3a052acf : [BE] Add S3 bucket argument to number of workflows (#121907)
38d7d366b9 : [FSDP2] Added 2D DCP save/load test (#121747)
443444dc7f : [c10d] Add generic scuba logging capability into c10d (#121859)
83f8e51404 : Add CUTLASS kernel as choice for (u)int8/(b)float16 mixed MM autotuning (#119986)
be0bdf111c : relax tol for flaky nansum_out_dtype_cuda_float32 test (#121550)
7e13b5ba29 : Checkout release branch rather then commit_hash when building triton release (#115379) (#121901)
956059fa2e : [Fix] Fixed behaviour for the conversion of complex tensors to bool (#121803)
1251f0fa31 : Add CUTLASS kernel as choice for _int_mm() Inductor autotuning (#119685)
38d9bb5abc : Make PyTorch compilable against upcoming Numpy-2.0 (#121880)
b4c53aa0ec : Do not compile FP16 arith internally (#121844)
3eb322ff29 : Handle transitive replacements in Triton kernel mutation analysis (#121867)
4cd503c1f3 : Enable FX graph cache for a batch of inductor tests (#121696)
15abc56bd5 : Graph break on step closure in optimizer (#121777)
f85f58bf86 : Fix quantized linear vulkan tests (#120960)
a37caa6ed3 : [Quant][Inductor] Enable quantization linear pattern fusion with int8_mixed_bf16 for gelu (#116004)
43d68e9c8f : [Quant][Inductor] Enable quantization linear pattern fusion for gelu inside inductor (#114854)
25e00545bb : [Quant][PT2E] Enable linear and linear-unary post-op gelu quant recipe for x86 inductor quantizer (#114853)
a04e7fca8e : Use memcache versioning for autotune remote cache (#121748)
7e076c75bd : [C10D] Fix coalescedCollective op Flight Recording (#120430)
bf7ac4ddf7 : Revert "[export] allow Dim(1,2) for export dynamic shapes (#121642)"
3e02a7efcd : Only FA2 doesn't support attn-mask (#121825)
a8dcbf2749 : [export] allow Dim(1,2) for export dynamic shapes (#121642)
70c6f542f2 : Revert "[dynamo] Convert invalid args into graph breaks (#121784)"
aaff8d274a : CUDA fast path for `_chunk_cat()` (#120678)
c53e3f57b5 : allow fp16 in quant/dequant decompositions (#121738)
c7193f4099 : [DDP][PT2D][2D] Enable DDP + TP and add test for compiled DDP + TP (#120479)
dd568f4207 : [Export, AOTInductor] Populate ShapeEnv's var_to_val during deserialization (#121759)
a2a4693c1b : Revert "Init CUDA instead of faking memory stats (#121698)"
45a835cef2 : Revert "[compiled autograd] free stack objects before calling compiled graph (#121707)"
8b1b61bc70 : [compiled autograd] support custom ops backed by c++ autograd::Function (#120681)
58ff55aac5 : Add support for tt.scan to triton kernel mutation analysis (#121828)
8e6d572b4e : [DDP][PT2D] Allreduce fusion fx pass using concat and all_reduce_coalesced (#113209)
0c1ac4484d : Support `call_method` in DDPOptimizer (#121771)
0df39480f6 : [dynamo] Convert invalid args into graph breaks (#121784)
5b90074540 : [compiled autograd] free stack objects before calling compiled graph (#121707)
2460f0b1c7 : Init CUDA instead of faking memory stats (#121698)
cd949d133e : Support setUpClass & tearDownClass with instantiate_device_type_tests() (#121686)
ffabb25c48 : Count the number of entries directly in avg_pool2d lowering (#121429)
a19a05fd1d : Add lowering for avg_pool{1, 3}d (#116085)
79fac48bb3 : Use pytorch bot's labeler (#121762)
05df03ec1b : Allow custom attributes for torch function subclasses (#121693)
92a2b214f8 : Make translation validation more user friendly (#120880)
b1d5998956 : Upgrade to tlparse 0.3.7 (#121772)
5498804ec2 : [MPS] Fix naive matmul for BFloat16 (#121731)
559ca13b3f : [dynamo] Refactor TorchInGraphFunctionVariable for compile time (#121616)
51cf57c6c6 : Revert "Include torch warn in each error in cudnn/Conv_v8.cpp (#120719)"
a157a0d00d : [constraints] Fix scalar type for constraint_range to Long (#121752)
7fe0cc53e9 : make _process_dynamic_shapes an implementation detail (#121713)
5088e4956e : Add quantized conv transpose2d op (#120151)
e99fa0042c : Back out "[DeviceMesh] Add support for nD slicing (#119752)" (#121763)
be33d31ae2 : add std::ostream& operator<< for BFloat16 in BFloat16.h (#121302)
5986552ebe : [nit][DCP][DSD] Remove variables not being used in test_state_dict.py #121204 (#121773)
da2a9a0512 : `_foreach_copy` with different src/dst dtypes (#121717)
a13dd92d88 : [dynamo] Minor compile time optimizations in torch.py (#121615)
d619be57c0 : [executorch hash update] update the pinned executorch hash (#121056)
0c1d59b72f : CI: Fix flaky artifact upload step (#121733)
52ed35bb64 : [inductor] Update triton pin (#121268)
07330ff7b6 : [MPS][BE] Define `_compute_tolerances` (#121754)
f83392b677 : cublasLt workspace warning info is misleading, the unit of measuremen… (#121073)
e755dab0d1 : [ROCm] Enable several test_unary_ufuncs UTs on ROCm (#121104)
f24ae66abf : [AOTInductor] Skip tests on RoCM for duplicate_constant_folding (#121750)
9f235971f0 : Gate tt.reduce Triton mutation tests on Triton version (#121753)
7d05c4c093 : Remove error anti-pattern when dealing with dynamic shape output (#121681)
9df0dca7f6 : Revert "[ Inductor ] Shape padding honors output stride preservation (#120797)"
02bb2180f4 : [torch export] replace traceback.extract_stack with CapturedTraceback.extract (#121449)
7a53dedb07 : CI: Specify libc and libstdcxx versions in conda environments (#121556)
68be750e17 : Cleanup some exception handling in triton mutation tracking (#121739)
a9274c9a2c : Fix aoti doc to avoid cannot bind non-const lvalue reference error (#121672)
79ee6bbde3 : Support `triton.language.dtype` with `torch.compile` (#121690)
22bb24986d : [dynamo][guards] Use lazy variable tracker for func defaults (#121388)
519151a062 : [fx] Preserve Fx graph node order in partitioner across runs (#115621)
a95ceb51a2 : Release fix pinning slow-tests.json (#121746)
a5ec45f2ec : [Inductor Cutlass backend] Move tests to separate file (#121489)
844bfbbd2e : feat: Update Dockerfile default versions for Python, OS, and CUDA arch list (#121560)
d62bdb087d : [Profiler] add missing field device_resource_id (#121480)
5478a4e348 : Don't run non-strict for test case that doesn't need non-strict (#121710)
5b506c8bce : Revert "[dynamo][guards] Use lazy variable tracker for func defaults (#121388)"
522d972924 : [eazy] add more log when accuracy check fail (#121656)
f50c652422 : avoid aten dispatch shadowing type with variable (#121659)
6d8a7d6e58 : [pytorch] optional zero points on dequantize per channel (#121724)
a6149eba12 : [easy] Refactor MultiOutput. codegen_list_tuple_access to use subclass type checks (#121662)
90e886aa6c : Sanity check for non-strict (#121687)
443e241cc5 : Don't cache predispatch kernels (#121712)
a26480a4d1 : [dtensor] move early return check into redistribute autograd function (#121653)
00a53b58dd : Refactor release only changes to two step execution (#121728)
4e63d9065a : [dynamo] Delete record replay tests as they are not maintained (#121705)
cd1751b14f : [dynamo] Measure Dynamo cache latency lookup (#121604)
22489bfe70 : [dynamo][guards-cpp-refactor] Directly call root guard manager in eval_frame (#121622)
2348e8e4e7 : [dynamo][guards-cpp-refactor] Simplify DYNAMIC_INDICES guard (#121614)
0398dc9e8e : Revert "[DCP] Makes fsspec public (#121508)"
b84f94f6a3 : Restore timestamps on C++ logs without glog (#121384)
704e15307e : [caffe2] replace refernces to np.asscalar (#121332) (#121545)
d1715c3adb : [export] Update error message for set_grad (#121666)
3c8c7e2a46 : [dynamo] Tweak naming for module hook bw_state (#121609)
7a68e0a3e8 : [DCP][state_dict] Remove the check of FSDP has root (#121544)
85dc254364 : [DTensor] Moved `Transformer` sharding to staticmethod (#121660)
cc51e100f5 : [ET-VK] Enable Dynamic shape support via tensor virtual and physical resizing (#121598)
2a99e6f299 : Update error message (#121644)
edf22f3a48 : Modify signature of dequantize ops for decomposed quantized Tensor (#119173) (#121450)
06d2392003 : Support tt.reduce in Triton kernel analysis pass (#121706)
78b4793c96 : [dynamo][compile-time] Caching VTs to reduce compile-time (#121031)
52ad2b682c : Generate predispatch tests (#121678)
656134c38f : [ROCm] enable complex128 in test_addmm_sizes_all_sparse_csr for rocm for trivial (k,n,m) cases (#120504)
86a2d67bb9 : Simplify guards using info from previous guards (#121463)
703e83e336 : Fix AARCH64 builds (#121700)
159f30331f : [quant][pt2e] Call sub-quantizers' transform_for_annotation in ComposableQuantizer (#121548)
7fc497711d : Also test predispatch serialization (#121652)
6ca9ae4f86 : Express y grid > 2^16 in terms of z grid (#121554)
fb1d7935bb : [optim][BE] move complex_2d (last of complex tests) to OptimInfo (#120618)
a37e22de70 : Add Flash Attention support on ROCM (#121561)
3a5f48d55f : Port remove_split_ops to PT2 pre-grad passes (#121674)
5b5d423c2e : Benchmark templates (#118880)
7676433012 : [AOTInductor] Reuse generated kernels between constant graph and main graph (#121564)
272cf29e4d : [FSDP2][BE] Refactored `check_1d_sharded_parity` to use mesh (#121357)
cd1dc5e484 : Delete requirements-flake8.txt (#121657)
fd0dbcd891 : Revert "Batch Norm Consolidation (#116092)"
498a94a7f5 : Don't install torchfix for python<3.9 (#121655)
b2f09c1859 : Revert "[compiled autograd] support custom ops backed by c++ autograd::Function (#120681)"
d1f45a93af : Check for releasing GIL at compiletime (#116695)
fd13a56f61 : Refactor some testing helpers for FX graph cache testing (#121520)
e01b07e1e8 : [ROCm] Autocast RNN Support (#121539)
fc712311ce : port fuse_parallel_linear (without changing weights) to PT2 pre-grad (#121617)
3461404869 : [pt2 export]fix name collision on constant name (#121145)
b091a32909 : Add a section on release wiki about pytorchbot cherry-pick command (#121648)
dd2062c737 : fix CMake FindCUDA module for cross-compiling (#121590)
5fd7f5c4e3 : Include torch warn in each error in cudnn/Conv_v8.cpp (#120719)
9aa3fedb75 : Slightly faster FX graph iterator (#121611)
ae22bdaefe : Update torchbench commit pin, add sam_fast benchmark (#121420)
dccc1ca839 : [torch] Use __prepare_scriptable__ for closures (#121553)
b4160fd9c7 : Clean up macOS x86 binaries build jobs (#116726)
8d03c59d59 : Bring torch_xla pin to the latest torch_xla commit (03/08/2024). (#121529)
39ed038f41 : [TEST] Prepare test_cumulative_trapezoid for SciPy 1.12 (#121541)
6801595349 : Fix round robin sharding (#121022)
e2ac2dc13a : Update NCCL submodule to v2.20.5 (#121635)
89add71168 : fix synchronization behavior for copies with type change (#121341)
03717430cc : Fix lower precision check for MKLDNN on Windows (#121618)
e29004615f : Add NEON accelerated torch.mv kernel (#119992)
fac06a12c8 : CI sanity check test for env vars (#120519)
6c11d3ce0c : Add support to save safetensors checkpoint directly into onnx (#121001)
485f8ebc07 : add __repr__ function to FunctionSchema for Python (#121484)
d1510e01fa : Upgrade submodule onednn to v3.3.5 (#120767)
605c0a28aa : [dtensor][debug] force visualize_sharding not to print for empty tensors (#121217)
3a5ab17bdc : [dtensor][debug] visualize_sharding skip if the current rank is not in mesh (#121382)
b383123e37 : [dtensor][debug] visualize_sharding only compute offset on the first rank in mesh (#121385)
9c50ecc84b : Fix `get_rank` under a non-default group. (#120481)
7cc476ea16 : [dynamo] Fix support for nn.Parameter constructor (part 1) (#120163)
32488b0664 : [dynamo] Support _unsafe_set_version_counter (#121086)
7a4e451184 : [Dynamo] Fix function overrides (#120885)
f11f2b0d55 : split predispatch pass into multiple passes (#121592)
13e8181b7b : relax assertion on fake shape (#121599)
660ec3d38d : [Export] Fix bug removing node from wrong graph (#121574)
41286f1505 : [IntraNodeComm] fix a hybridCubeMeshAllReduceKernel breakage caused by a recent refactor (#121575)
60cd2a43ca : [DeviceMesh] Add support for nD slicing (#119752)
e90cddb0d3 : [inductor] Log triton kernel source and metadata on failure (#120494)
168a04e752 : [inductor] Changes to support newer triton pin (#121267)
459c5bca58 : [inductor] Refactor common triton imports into one function (#121438)
8c96b4367a : Remove opmath cast for im2col decomp (#121363)
71d0202627 : [dynamo] support rewriting dist.all_reduce with explicitly specified reduce op (#120181)
cf9742371c : Revert "Add CUTLASS kernel as choice for _int_mm() Inductor autotuning (#119685)"
761783a4ff : [profiler] Fix recorded profiler step number (#121127)
242e03ba86 : [dtensor] add async_op option to redistribute and some refactor (#121477)
a6a67da333 : [quant] Add error check for input_edge annotation (#121536)
e8836759d0 : [export] Add effect token to export (#121424)
eb3919944d : [C10d][NCCL] Refactor complex all_reduce and broadcast (#121045)
752d164b2f : Add CUTLASS kernel as choice for _int_mm() Inductor autotuning (#119685)
13a25c647f : [export] improve binary op fast path broadcast check (#121546)
d482614fec : [DCP] Makes fsspec public (#121508)
6791b0c09e : Change default torch_function behavior to be disabled when torch_dispatch is defined (take 2) (#120632)
ca9678405a : [CUDA graphs] Pool argument for make_graphed_callables (#121475)
b2f19dd284 : [C10d][UCC] Retain CUDA context in progress_loop (#121446)
ed8eebd1c2 : Changed cublas reproducibility URL (#121534)
b0a0850a5c : [DCP] Replaced `storage()` with `untyped_storage()` (#121538)
8887c95004 : [inductor] Skip welford combine on first reduction loop iteration (#121488)
fe78cf040b : [profiler] add a function to allow adding preset user-defined metadata to traces (#121487)
9eb8fae02d : Revert "Fix round robin sharding (#121022)"
bc02fca358 : [dtensor] to_local backward grad placement passthrough (#121474)
9373ad0bb8 : Switch cudagraph backend to cudagraph trees (#121019)
7b3febdca7 : Change assertion throw to error message for const_run_impl call. (#121396)
038b2e8780 : [c10d] Add complex support for P2P (#121240)
4af0e634bf : Add Cudagraphs disable checking (#121018)
7d0ad5c6f0 : [FSDP2] Zeroed padded tensor in `_apply` (#121509)
f2d5e96db4 : [export] Add docs for 2.3 release (#121466)
2c2d6ce515 : Revert "CI sanity check test for env vars (#120519)"
35d3adb4b0 : Add ATen Op _chunk_cat and _chunk_cat.out (#121081)
a656e12bf5 : Disable test_torch_name_rule_map_updated in code (#120627)
82bb06334d : Update python binding for in-place foreach to return `List[Tensor]` (#121405)
d27509c384 : [compiled autograd] support custom ops backed by c++ autograd::Function (#120681)
f43b9c56c5 : CI sanity check test for env vars (#120519)
75bb049d38 : Skip AOT Inductor test_cond_* tests on ROCm (#121522)
53d5276d69 : Improve Dynamo support for torch function and class methods in general (#121365)
c0996866f4 : Revert "Change ATEN generator argument type to const std::optional<Generator>& (#120076)"
c78f72d7e7 : [c10d] Deprecate torch.distributed.pipeline (#121464)
27a0900946 : Revert "[fx] Preserve Fx graph node order in partitioner across runs (#115621)"
937e89f252 : cudagraphs backend refactoring (#121017)
bc117898f1 : Revert "Update XLA pin (#121501)"
22cd2658b4 : Disable GroupRegistry's thread isolation by default (#121457)
2c9c57c061 : Only profiling when it's enabled. (#121404)
df06b94778 : Add complex support to parametrizations.spectral_norm (#121452)
0f3f4f5534 : Revert "[nit][DCP][DSD] Remove Unused Variables in test_state_dict.py (#121204)"
d55d803812 : Add operator length hint support (#121495)
9b03a06288 : [BE] [MPS] Fix `out` resize logic in `torch.where` (#121476)
9cc89970a9 : [BE] Cleanup where_self_out (#121494)
1866ee6735 : Enable `out` OpInfo testing for `torch.where` (#121473)
0dd21c0c34 : Update Quantizable LSTM to support QAT (#121448)
b52e0bf131 : Deprecate torch.autograd.function.traceable, is_traceable (#121413)
08460f4bae : [tp] remove deprecated tp_mesh_dim arg (#121432)
30982ce072 : [tp] doc fixes (#121431)
effdea5fc6 : Fix round robin sharding (#121022)
9d83f9dc0e : Update XLA pin (#121501)
a2a8c1fda0 : [AOTDispatch] Return mutated inputs directly when keeping mutations (#120514)
f7ec984b1b : [DTensor][XLA] support XLA backend in distribute_module API (#121355)
7b4f70eda5 : Batch Norm Consolidation (#116092)
c253d1c1db : Add links to _ex variants in all linalg functions that support them (#121451)
975d428425 : [Quant] Add the operator of decomposed fake quant per channel (#121297)
8ed0932172 : Update link to OpenVINO backend in torch.compiler.rst (#121303)
b3f24b57fb : fix accidental specialization with faketensor input checks (#121460)
2e789ad522 : [DCP][state_dict][doc] Update the distributed state_dict document (#121290)
e628f2cc66 : suggested fixes for congruences (#121418)
96ed37ac13 : [DCP] Makes async_save public (#121325)
13366a101a : [DCP][state_dict][doc] Fix the documents for distributed_state_dict (#121276)
72dd9b2430 : [inductor] Make some improvements to FX graph caching (#117888)
909d73d8cb : [DCP] Removes `no_dist` and `coordinator_rank` from public DCP API's (#121317)
23ac0cd561 : more passing dynamo tests (#121378)
4186c36531 : [nit][DCP][DSD] Remove Unused Variables in test_state_dict.py (#121204)
0f8c9acc29 : Revert "[fake_impls] Fix seed/offset device for attention kernels (#120839)" (#121447)
dc514b967e : [dtensor][TP] check funcol calls and improve doc for loss parallel (#121366)
25c74a93cd : [fx] Preserve Fx graph node order in partitioner across runs (#115621)
7dc1ab8989 : make dynamo work with _LazyGraphModule.lazy_forward (#121259)
9bff1599b6 : [Torch Elastic][Draft] Refactor SubprocessHandler to separate module for easier subclass (#120373)
c86a1ce125 : [dynamo][guards-cpp-refactor] Func defaults and kwdefaults accessor (#121338)
79a04f2df9 : [dynamo][guards-cpp-refactor] Permit dict version guard in DictGuardManager (#121327)
962c1b4c69 : Update XNNPACK revision to fcbf55a (#120583)
090616d9a1 : [Inductor] Support auto-tuned custom PT ops in ABI-compatible mode (#120877)
04a5d6e8d3 : [dynamo][guards] Use lazy variable tracker for func defaults (#121388)
5d8e4126b6 : Fixup test_trace_rules (#121351)
af62a70fab : [export] Fix nn_module_stack in retracing (#121423)
4f120dc2a6 : Clean up mode handling in python dispatcher (#121083)
0811f15270 : [DCP][state_dict] Let _offload_state_dict_to_cpu to return the companion_obj if it exist. (#121273)
f76e541ec7 : [BE] NO MORE discrepancy between forloop foreach capturable YAY (#121269)
9d6c5be781 : Add ASGD capturable API for forloop (#121264)
24821fec26 : Add RAdam capturable API for forloop (#121260)
b1657beac1 : feat: Add min, max ranges to mark_dynamic API (#119737)
e0c534fe02 : Revert "[Inductor] Add support for NEON ISA in the Inductor C++ backend (#105590)"
3d089de851 : Add torch.cond support to AOT Inductor (#121120)
26740f853e : Remove unnecessary use of ctx.resolve_tools. (#120493)
d14d62b7aa : [dynamo] add more refleak tests (#120657)
6490441d8f : Remove dead get_shape_groups (#120813)
18d574a07a : [Inductor] Use indices for constants in triton_meta (#121427)
f61192b014 : Fix for Wait kernel lowering in inductor not accepting MultiOutputs from non-collective calls (#121428)
76f1461892 : [export] Serialize union fields with single entry dict. (#121263) (#121337)
4c58f2b675 : [PyTorch] Use uint32_t for ProcessedNode::num_outputs (#121335)
ea8f6e2e54 : Subclass view fake-ification via reified ViewFuncs (#118405)
63ec5cd158 : TD Heuristic for tests mentioned in PR body, less verbose TD printing (#120621)
c7a65f58b0 : [CI] Script to fetch creds from current AWS session (#121426)
2b1661c7a0 : Revert "[compiled autograd] support custom ops backed by c++ autograd::Function (#120681)"
60aaba4128 : create function to get ProcessGroupNCCL uid (#121132)
83d095c213 : [BE] Remove unnecessary requires_cuda in common_optimizers.py (#121249)
53bdae736d : Add capturable single tensor Adamax (#121183)
af88425cdc : Forward fix lint after 121202 (#121425)
c3c15eb9a6 : [export] update docs to not export raw functions (#121272)
862b99b571 : Revert "[ATen][CUDA][CUBLAS] cublasLtMatmul increase workspace_size (#120925)"
eea37c6db4 : [profiler] record nccl version in distributed info (#121044)
3aa512cd72 : [Clang-tidy header][23/N] Enable clang-tidy coverage on aten/src/ATen/*.{cpp,h} (#121380)
9a45001905 : [dynamo] relax missing symbols runtime assert (#121339)
0339f1ca82 : [Inductor] Allocate another shard for testing cpp-wrapper JIT (#121310)
7e598c0053 : [Inductor] Enable ABI-compatible mode for cpp-wrapper JIT (#121309)
57fc35a3af : [Inductor] Shape padding honors output stride preservation (#120797)
4305c64fea : Change ATEN generator argument type to const std::optional<Generator>& (#120076)
1ce5049692 : [inductor] fix the layout problem for nll_loss2d_backward (#121173)
b3065f6899 : add int8 packed gemm support on CPU device (#118056)
e8e3049f57 : [FSDP2] Relaxed check for parent mesh (#121360)
db36d21f5c : Add SDPA pattern for HuggingFace models BF16 (#121202)
953c6c37cb : Wrap remote cache creation with a try-catch (#121340)
291ce86a6c : Modify StorageImplCreateHelper (#118459)
f848e9c646 : [Quant][Inductor] Fix q/dq per channel lowering with 64-bit qparams (#120984)
4f9d4e1ab0 : [DTensor][XLA] refactor DTensor _xla API (#113214)
c723514ef4 : [CUDACachingAllocator] Simplify update_stat and avoid casts (#120964)
55232c4e1c : Make CausalBias a torch.Tensor subclass again (#121358)
df2ad1fecc : [dtensor][debug] have visualize_sharding correctly print for sub-mesh DTensor (#121216)
77873f6fe5 : [dtensor][1/N] add torchrec even row-wise sharding example (#120260)
9cc0f23e5c : [dtensor][debug] allow visualize_sharding to print header (#121179)
a2854ae904 : Bugfix consume_prefix_in_state_dict_if_present function to keep the order of the state_dict (#117464)
edd80f87b8 : Prevent infinite recursion within Tensor.__repr__ (#120206)
eb4d87f237 : graph break on sparse tensors constructions (#120458)
1a28ebffb3 : [TP] Introduce Sequence Parallel Style for LayerNorm/RMSNorm/Dropout (#121295)
967dd31621 : [cuDNN] Cleanup cuDNN < 8.1 ifdefs (#120862)
b9087f8571 : [profiler] Add execution_trace_observer as an optional argument to profiler (#119912)
eb1145436a : [DCP] Adds main in format utils (#120128)
5cc511f72f : Use c10::irange and fix other index types in ForeachReduceOp.cu (#121123)
c268ce4a6d : Make ATen-cpu cuda/rocm agnostic (#121082)
e50ded03a6 : Use type check for also `is_not` (#113859)
a88356f45c : [dtensor] make add_.Tensor/div_.Scalar to be linear pointwise instead (#121294)
2f064d895c : Switch TORCH_TRACE to accept a directory by default (#121331)
372f192050 : [DTensor] Initialized RNG tracker if needed (#121328)
b0e2ed4d67 : removing some macros (#120314)
69cedc16c5 : Add padding dimension checks and tests (#121298)
d7a5e59647 : [dynamo] support group=None when rewriting collectives (#121043)
3fee05f242 : Triage the remaining fallbacks (#121312)
e865700f6a : [FSDP2] Added initial meta-device init support (#120351)
3cf02c5e06 : [Dev Container] Fix container build by preventing conda prompt (#121128)
58ac4a2007 : Remove llava from ci_expected_accuracy as it's flaky (#121322)
23fb37fa41 : Revert "[export] Serialize union fields with single entry dict. (#121263)"
76f3663efe : Fixed a memory leak when calling from_numpy on a numpy array with an … (#121156)
360761f7d0 : [Torchelasic] Create root log directory by default (#121257)
418568d2e3 : Add Float8 support to onnx exporter (#121281)
5a2527db22 : [Clang-tidy header][22/N] Fix clang-tidy warnings in aten/src/ATEN/*.{cpp,h} (#121102)
c5ef4df274 : guard on grads being `None` in compiled optimizers (#121291)
7feabe9b73 : [export] Serialize union fields with single entry dict. (#121263)
c66d68ba51 : [PT2] Add tolist() to FunctionalTensor for torch.export (#121242)
05c256849b : [compiled autograd] support custom ops backed by c++ autograd::Function (#120681)
b27d76949b : [ROCm] Enable several fake_crossref UTs on ROCm (#121112)
b529c19bdf : Revert "Batch Norm Consolidation (#116092)"
8dd4b6a78c : Fix venv compatibility issue by updating python_lib_path (#121103)
a427d90411 : add int4 packed gemm support on CPU device (#117475)
54d92f2e37 : Add jacrev support in torch.compile (#121146)
49d1fd31cf : Fuse nodes with sizes (s0*s1*...,) and (s0, s1, s2, ...) (#120077)
aa0b0944d5 : [dynamo] Re-dispatch `torch.Tensor.new` into `torch.Tensor.new_empty` method. (#121075)
e3bd6efe72 : [dynamo][guards-cpp-refactor] Prevent duplication of leaf guards (#121164)
b6b2d5b00a : [dynamo][guards-cpp-refactor] Pass source name for debug ease (#121154)
52d89d8491 : [dynamo][guards-cpp-refactor] Simplify DictGuardManager by removing KeyValueDictGuardManager (#121147)
af7f55ffc8 : [dynamo][guards-cpp-refactor] Add argnames in pybind'ings (#121121)
0b9bfcf9bb : [non-strict export] support tensor attribute without other args (#121176)
8087912622 : Revert "[XPU][Profiler] Add Logic To The Profiler For Processing XPU-backend Data (#120185)"
099ff51d45 : torch check the division by zero in batch_norm_update_stats (#120882)
2eec0e7c5f : [BE] Remove `__inline__` from `__global__` (#121246)
31bfa59970 : Capture primitive data type arguments for profiling python_function (#120949)
5680f565d5 : Batch Norm Consolidation (#116092)
f72eb5ae4c : __grid_constant__ is only supported on cuda version >= 11.8 (#121275)
dad1b76584 : Introduce EphemeralSource for symbols that should be simplified out (#120948)
d968fc442b : [FSDP] restore fully_shard after exit from mock.patch (#121058)
8dafc81ba9 : [cuBLAS][cuBLASLt] Fix expected failures for `int_mm` on `sm75` (turing) (#121277)
ce6a7d56fc : Don't merge qnnpack (#120676)
4b3903379a : Add assign argument to torch.Tensor.module_load (#121158)
27389e03f0 : [easy] Fixed requires_grad preservation for nn.Module.load_state_dict(assign=True) (#121157)
87a533ed1b : c10::intrusive_ptr, self assignment (#119275)
412c687e2e : Fix permuted sum precision issue for lower precision on CPU (#108559)
34e3f6f3c9 : fix segfault in torch.native_channel_shuffle when input is empty (#121199)
8473cd92e4 : remove compute capability 3.5 for CUDA 12 (#114930)
d13ed8503c : CI: Add aarch64 docker build and ciflow tags (#120931)
cac36e232e : [PyTorch] Split StaticModule out of test_static_runtime (#121028)
f5391dad82 : Update docs to point to new sdpa_kernel context manager (#121180)
8bb3e0b643 : [pytorch] Name the main and autograd threads for better debugging (#121170)
24944f6717 : [doc] Fix math display in ChannelShuffle doc (#121247)
b3a9d677a3 : [ez] Add super() calls in test_custom_ops (#121239)
34a28f01dd : [Autograd] Improve error for leaf tensors as out argument to fallback (#121089)
eae9751e82 : Fix linalg_eigvals invalid use of composite dispatch key (#121142)
393b4ab432 : Fixes issue_119785 (#121048)
8ccf8b2c47 : Avoid COW input materialize in more forward ops (#121070)
81dbc487c7 : ci: add "typing_extensions" package to ci requirements list (#121136)
3239f86a3d : [ATen][CUDA][CUBLAS] cublasLtMatmul increase workspace_size (#120925)
8aeb247a3d : [export] Remove WrapperModule. (#121042)
0e604becc5 : [NJT] support chunk on batch dim (#119713)
ae4c85960f : Add Deberta pass (#121206)
5abf7972d1 : [DCP][state_dict] Implement pin_memory and shared_memory copy for _offload_state_dict_to_cpu (#120378)
6ecd65886a : Remove unnecessary const_casts (#121225)
85c807b3fd : [export] Ensure optional fields always have default value. (#121163)
35004b8ab4 : [dynamo] Fix handling of invalid args (#121110)
4f19b5f7ef : [dynamo] Remove extra guard for tensor constant attrs (#121106)
e4352182bd : Disable remote cache test on ROCM (#121210)
f25a25fde5 : Fix lintrunner-noclang (#121205)
fbf36d01a0 : Update Triton (#119457)
59d9f1e227 : Spectral norm value test (#121068)
d621e3e3b8 : Add exhaustive module and optimizer tests for torch.load(state_dict, weights_only=True) (#121049)
42821d462a : [ATen][Native][CUDA] Decrease max_threads in ctc_loss (#120746)
12191f4b3e : Fix make triton command on release branch (#121169)
ee557d8f61 : skip detectron2_fcos_r_50_fpn in dynamic shape test (#120697)
c4a1570864 : Temporarily increased compile time limit of #GPUs to 120. (#121076)
de8af28083 : [FSDP][StateDict] Allow FULL_STATE_DICT option for 2D (#120837)
507611f9ae : [CUDACachingAllocator] Turn Allocator::allocate into non-const (#120969)
46c9d646dd : [Dynamo] Fix inspect.getattr_static doesn't work well for torch.utils._cxx_pytree.PyTreeSpec (#120812)
311cc564f6 : Fix README Typo (#120892)
a7e93c341f : [hoo] Add with_effects to handle side effectful ops (#120296)
29976519a1 : Make configs hash part of remote cache key (#121152)
43416e3059 : Correctly read the cache key for remote cache (#121151)
9e16622397 : Move JK check to on-demand (#121182)
9ccff0aff9 : Remove ids_of_folded_args from test_triton_kernel_equal_to_1_arg (#121192)
4b49bc19e8 : [export][reland] Disable exported_program.__call__ (#120019)
6ddf5cf85e : [AOTI] Update cpp wrapper codegen to use v2 C shim (#120714)
bd19d6d822 : [AOTI] Use torchgen to generate C shim functions (#120513)
ffe45a8188 : [ATen-vulkan] Implement global shader registry (#121088)
c3c618c750 : Update torchbench pin (#121029)
a15c02562a : Fix dynamo failure (#121167)
3381f282c3 : Revert "Update Triton (#119457)"
9deaa2e812 : [BE]: FURB187 Use inplace reverse on lists: faster, more readable. (#121140)
ec4146c535 : [inductor] skip foreach kernel for benchmark fusion (#121168)
bcf35c6ae6 : [tensorboard] Handle bfloat16 type in add_histogram (#120087)
a3a8137484 : [onnxrt, dynamo] Fix run with inputs on mix devices (#121159)
83c312990f : Add missing newline to repro and some utility thing in repro (#121051)
eba28a6f91 : [VK-API][Op Redesign][3/n] Expose new Context and Resource APIs (#121060)
70c23a51ac : Revert "[ATen][CUDA][CUBLAS] cublasLtMatmul increase workspace_size (#120925)"
df3c8b8390 : [fake_impls] Fix seed/offset device for attention kernels (#120839)
6a5c7d5f95 : [ATen-vulkan] Enable deferred descriptor pool initialization (#121134)
0c07c0c15f : Revert "add int4 packed gemm support on CPU device (#117475)"
74b19fa8b9 : fix fsdp device mesh dependency issue (#121061)
7a065e3b23 : improve the ConstantLR doc (#120852)
cb812c9832 : Add windows constraint to mkl package in wheel (#121014)
4cdc2d7096 : [dynamo] Remove expected dynamo test failures (#120836)
a98c17edc7 : Revert "add int8 packed gemm support on CPU device (#118056)"
9ff65d56a5 : Revert "delete useless cast_outputs call in unary_op_impl_float_out (#120486)"
26431db939 : [ONNX] Perform implicit casting of constants for the onnx::where operator (#118733) (#120619)
58047205ed : Delete unnecessary code (#120365)
2e6c08a14b : Update flash_attention kernel from 2.3.6 to 2.5.5 (#118935)
d49864f6a5 : Update Triton (#119457)
6566b3db67 : Add an autotune cache for inductor generated kernels (#120963)
3ef0befdc9 : Better error messages for impl_abstract_pystub (#120959)
ce2903080c : Add sparse compressed fake tensor support (#120920)
c06499981d : Add a decomposition for torch.put, 2. (#120179)
8ba49d0e53 : Fix compilation error: 'load_fp32_from_fp16' was not declared in this scope for ppc64le (#120307)
27ac73073b : Fix hipification issue (#121107)
2e50566722 : [dtensor] change distribute_module input/output_fn to accept module (#120895)
3045b16488 : Do not use warm_pool() if TorchTnT is used (#121047)
4b494d0750 : Fix comparison of integer expressions of different signedness (#121066)
c83dfc8854 : [PT2][Inductor] Fix missing "example_value" for nodes introduced by group batch fusion (#120974)
cead0363a8 : [jit][nested strided tensor] support nested tensor in check_trace (#121039)
089f4c0bd9 : If data dependent, check if guard_size_oblivious would fix problem and report if so (#121011)
13fadea888 : [Clang-tidy header][21/N] Fix clang-tidy warnings in aten/src/ATEN/*.{cpp,h} (#120763)
4f0481e1d5 : [inductor] add decompostition for mm in backward (#120933)
b7f2522692 : [dynamo][compile-time] Remove unnecessary tree_map_only (#121052)
368f242e37 : Revert "[PT2D] Make the speedup benchmark works with DDP + CompiledAutograd (#120454)"
0e0a621e0c : [dynamo] Minor refactors (#120966)
8e4301077e : [dynamo][comp-time] BuiltinVariableTracker - inspect signature only on failure (#121053)
7aced61c46 : [DCP] deletes legacy formatting test (#120127)
7f81563e5e : [dynamo][guards-cpp-refactor] Skip type and length check guard for DictGuardManager (#120739)
82d1465d8d : [dynamo][guards-cpp-refactor] DICT_CONTAINS guard (#120673)
bab4b5a341 : [dist][sharded_tensor] Fix ChunkShardingSpec metadata offsets for empty shards (#121002)
66b20b4297 : [export][ez] minor variable rename (#121040)
505637198a : [export] cleanup to rewrite steps (#121037)
b0cfa96e82 : [Torchelastic][Logging] Pluggable logsspecs using python entrypoints and option to specify one by name. (#120942)
f351a71dbb : remove constraints from capture_pre_autograd_graph (#120981)
83d848e1c7 : [Quant][Inductor] Enable lowering of dynamic qlinear for X86Inductor (#120605)
af5376c444 : [dtensor] add support for loss parallel (#119877)
c4ed456fc3 : [inductor] fix accuracy failure for a few models under freezing (#121054)
f84375ca5d : add int8 packed gemm support on CPU device (#118056)
5258c3645d : [ATen-vulkan][EZ] Bug fixes: only create the image view when memory has been bound, invalidate cmd on flush (#121027)
2d9efad38f : Add the bound check for flatten with out_dim (#120894)
06fe6ed82b : [dynamo bug burndown] update tensor creation to support sequences of tensors (#120872)
a3b81666b1 : [Dynamo] Fix guards for code objects (#120909)
f7a2bae0ac : Change TestOpWaitiness to use MultiProcessTestCase (#121046)
4cf6d1172b : [FSDP2] Used `ReduceOp.AVG` if fp32 reduce-scatter (#120919)
85157af784 : Fix more xfails for scaled_dot_product_attention (#121032)
7c71d7f32b : [DTensor] Supported `foreach=True` for `clip_grad_norm_` (#120910)
f0e8e7cf43 : [DTensor] Supported `foreach=False` for `clip_grad_norm_` (#120238)
30befa592e : add int4 packed gemm support on CPU device (#117475)
c8e56b4965 : [c10d] dump from one and only one thread (PG0's monitor thread) (#120893)
3d7cf8f392 : Revert "Limit loop unrolling (#120023)"
d8395830ea : [ONNX][dynamo_export] Skip instance_norm decomp for export (#120866)
581fe26792 : [C10D] Add ProcessGroup op_id to track ops inside coalescing region (#120745)
0a38a6ac80 : [ATen][CUDA][CUBLAS] cublasLtMatmul increase workspace_size (#120925)
06b52dd103 : TD outside of test job (#118250)
d08ce51881 : [compiled autograd] refactor eager test loading and run custom ops tests (#120679)
8cb4855d1e : Release the GIL in serialization when it is safe to do so (#120818)
fd2ab1f613 : [PT2][Inductor] Change the split cat log to debug (#120823)
797d4fbdf4 : [export] Log operator set. (#120951)
d3876f73e7 : Preserve metadata for `MutableMapping` and `MutableSequence` in `pin_memory` and `collate_fn` (#120553)
a7c799fb85 : [executorch] Add support for method variants in aten executorch code gen (#121016)
7a64eb65e4 : Fix Dynamo tests failing with "Failed running call_function <built-in function linalg_norm" (#120993)
39e4d1a535 : Make TestEmbeddingNNDeviceTypeCPU::test_EmbeddingBag_per_sample_weights_and_no_offsets_cpu_int32_float32 compatible with TorchDynamo (#120831)
e02047add4 : [BE][Ez]: Update ruff to 0.3.0 (#121003)
af93849a3a : [pt2 export] small fix on non_persistent buffer unlift (#120715)
19fcf6de1a : Add lowering for fractional_max_pool2d (#120460)
cdb50d0380 : remove constraints from aot_compile (#120979)
55ae8fb1f6 : Switched m1 runners to the label macos-m1-stable (#120997)
de3202abea : [EZ][BE] Remove Python-2 installation logic (#121015)
b474a523c6 : Ban passing in free function into capture_pre_autograd_graph (#120817)
ce50db22c2 : Handle transposition pattern seen in SDPA with unbacked SymInts (#121005)
11f2e8beac : [Dynamo, Compiled] Save some python overhead when calling compiled function with many tangents (#118730)
0b18ed1c47 : [FSDP] Added warning about unsupported double backwards (#120926)
f01a23d01b : Don't aggressively rewrite asserts for symbolic expressions (#120564)
c844b377fa : [dynamo] Reorder logs (#116106)
9fc56f8209 : Exclude operators that produce unbacked symbols (#120917)
ea7149aa22 : Replace TTIR string parsing with structured MLIR walk in Triton kernel mutation analysis (#120476)
8861507ba3 : Fix guard for SUPPORTED_NODES (#120798)
b8e6ca6f76 : Add sparse compressed meta tensor support (#120707)
70d4d109f2 : Make SparseCsr a functionality dispatch key (#120703)
eee040c939 : expose nested header to wheel (#120603)
c646030cd2 : Support higher order op functionalization in predispatch IR (#115314)
82b356193d : Move VariableInfo into its own file to avoid circular dependency (#120732)
8c2e569928 : [PT2D] Make the speedup benchmark works with DDP + CompiledAutograd (#120454)
77ef9d4022 : Add verbose parameter to torch.hub.list (#120717)
63b259492a : Revert "[dynamo] Reorder logs (#116106)"
86e6497c6f : [Inductor][cuDNN] Disable tf32 in `test_mutate_view_for_conv_output` (#120953)
6ed26392b3 : Update xfails for scaled_dot_product_attention (#120928)
2a08a51738 : Add _assert_scalar and teach Inductor to codegen it (#114148)
77aea289ae : Add test to check that COW inputs are not materialized (#119507)
13a54ce279 : Avoid COW materialization in `at::parallel_for/parallel_reduce` (#120455)
d053dcfa69 : delete useless cast_outputs call in unary_op_impl_float_out (#120486)
c5472628ff : [dynamo] Reorder logs (#116106)
02a410ee12 : Enable TORCH_TRACE by default in all Tupperware like environments (#120915)
518a23bb03 : support bool as Scalar Type in TorchScript (#113835)
2e84d01d05 : [executorch hash update] update the pinned executorch hash (#120747)
65d568680c : Revert "[Dynamo] Fix inspect.getattr_static doesn't work well for torch.utils._cxx_pytree.PyTreeSpec (#120812)"
e49f31ca02 : [onnxrt, dynamo] Enable custom ONNX model transforms in `onnxrt` dynamo backend (#120854)
67c97a9aad : fix the scaled dot attention doc (#120859)
b35551f357 : Ban reset_to_zero argument to triton.autotune in user defined kernels (#120938)
06f8af30fa : Change FakeTensor serialization to consider only an _active_ FakeTensor mode (#120848)
e3dbd194f4 : [dynamo] Support module backwards hooks (#120685)
9b2c35b4fe : [dynamo] Fix convolution meta kernel when input channel is 0 (#120944)
d534a49767 : Reinplace auto_functionalized (#120829)
791f8ef350 : [Composable APIs] Add composable API `fully_shard` deprecation warning (#120929)
fd35aafc26 : Teach dynamo about vjp (#119405)
9d5dea7812 : [DCP] Adds storage reader and planner classes for online loading/sharding of models in torch.save format (#119816)
33da8d5c12 : Revert "Fix guard for SUPPORTED_NODES (#120798)"
7ebfe21724 : Fix nll loss dynamo failure (#120805)
d03b11ad5b : Pass inductor strides forward in ddp optimizer (#120523)
772db2a3ae : Fix handling of torch.return_types in dynamo (#120826)
da559c98e3 : Fix isin decomp and add python meta registration (#120821)
76d3a6bb4a : Revert "[C10D] Add ProcessGroup op_id to track ops inside coalescing region (#120745)"
e7039e3a0b : [dynamo][easy] Dynamo test changes (#120927)
39c092d242 : Skip semi-structured-sparse on windows (#120807)
1a1f58ffbe : [rocm][cmake] retrieve rocm location from ROCM_SOURCE_DIR env if specified (#120898)
b2dddcfe27 : [FSDP2][DCP][DSD] Add test to ensure FSDP2 model/optim state_dict work after a full training loop (#120871)
67d3e4f2a2 : [TorchElastic] Refactoring to support non-default logging strategy (#120691)
277bc97709 : [FSDP2][ez] Combined communication test files (#120904)
0b924d7cde : Revert "[inductor] Optimize welford reduction (#120330)"
0a7666801d : SymIntify prod_backward (#120776)
313abcdba2 : [c10d] fix the unwanted reason (#120863)
f94933ed42 : Refine value ranges on inequalities (#120800)
81c4c0dda2 : [functional collecitve] don't import torchdynamo when running torchdeploy (#120900)
f7a809c96a : fix dupe deprecated warning in dynamo export (#120896)
0290fe65bd : Test TD (test removal) on crossref (#119426)
1458f1de66 : Revert "Update flash_attention kernel from 2.3.6 to 2.5.5 (#118935)"
96eff4ef70 : [inductor max autotune] Detailed autotuning result logs (machine-readable) (#119004)
a911eb74ae : [dynamo] Graph break when faking named tensors (#120779)
1104e0798c : [Dynamo] Fix inspect.getattr_static doesn't work well for torch.utils._cxx_pytree.PyTreeSpec (#120812)
ca679384c2 : [rocm][cmake] correctly check the ROCM_SOURCE_DIR environment (#120858)
9e016debeb : [dynamo] Fix inference_mode context variable (#120830)
98c4ba683e : [EZ][BE] Fix ResourceWarning (#120886)
664dd61b29 : Add some more symbolic shapes related files to ciflow/inductor (#120887)
558316b5f4 : Emit grid wrapper inlined with the user defined triton kernel (#120824)
84e2accd6c : Make triton_meta be part of user defined triton kernel cache (#120809)
342e7929b8 : [export] kill deprecated constraints API (#120860)
3cfed01228 : [AOTI] Store OpOverload in ir.ExternKernel (#120629)
fa7241ed79 : [AOTI] Change the cpp wrapper codegen for sdpa (#120592)
52e3c78a43 : [AOTI][refactor] Move a few util functions in aoti_torch (#119987)
5b9e5f854b : [profiler] Log process group id instead of backend id (#120475)
576c0482a5 : Remove hard numpy dependency from guards.py (#119519)
5db5049b34 : Move TRITON_CONSTRAINT setting to common binary_populate_env.sh, BE - Cleanup unused build scripts (#120744)
f988f649be : [IntraNodeComm] accept P2P buffer size as constructor argument (#120856)
22b5548f5d : [IntraNodeComm] refactor all_reduce variants as private methods (#120855)
96793e0f10 : [ROCm] enable scaled_gemm (#117822)
09aefe1502 : Fix output typos (#120870)
14c5ebc8a1 : [Dynamo] Do not attempt to make nditer spawned arrays writable (#120868)
169c220bf8 : [torch.compile] Provide capability to register callback on compile start/stop (#120764)
82cbd9b131 : [dynamo][guards-cpp-refactor] PythonLambdaGuardAccessor (#120730)
66d05a8900 : [dynamo] Fix source for default dict default_factory (#120864)
df1e855313 : [fake_impls] fix max_seqlen return values in efficient_attention_forward (#120842)
d1d50d2e4c : [Inductor][cuDNN] Disable tf32 in `test_mutate_base_for_conv_output` (#120867)
8a42cff7b1 : [DeviceIndex][7/N] Use DeviceIndex in XPU (#120576)
4b18ab869f : [torch.export] Support is_compiling() flag for non-strict mode (#119602)
0a46102b37 : Add equal_to_1 to triton_meta for user-written Triton kernels (#120579)
4407138bf6 : [inductor][easy] fix a typo in test (#120832)
2d17230212 : [inductor] Do not reuse buffers across scopes in mem planning (#120777)
f5b99976ad : [C10D] Make _set_pg_timeout work with DeviceMesh PG (#120850)
26d6ddc232 : [bug burndown] Fix #119784 (#120846)
fad228c7cc : Fix a potential race condition in the test decorators for enabling/disabling native funcol (#120833)
2c0c70f763 : [Dynamo] enumerate imported names for eval_frame.py (#120778)
ef9e89984c : [pytorch] Support output types that are non tensors (#120804)
0dbef1618f : [inductor] Apply fx passes recursively to nested subgraphs (#120665)
db1cc781db : Revert "[dynamo] Function => FunctionCtx for placeholder obj (#120577)"
b2e4b621cc : Reduce create_env log level to DEBUG (#120772)
9e0631cc8a : get CommsDebugMode to work with DTensor (#118769)
381a7ad3f1 : [C10D] Add ProcessGroup op_id to track ops inside coalescing region (#120745)
f85d3a022c : [C10D] Fix pointToPoint op Flight Recording (#120270)
7f4d673885 : [C10D] Add record_id to flight recorder (#120724)
950b484356 : skip three pyhpc models with dynamic shape test (#120599)
3179107629 : [DDP][PT2D] Ignore gradient sync if the gradient is not defined (#120419)
ab38354887 : Allow str inputs in non-strict tracing (#120536)
1b8bb027f6 : Fix guard for SUPPORTED_NODES (#120798)
aa36821615 : [Memory Snapshot] Stop clearing history when changing context (#120436)
86ff31c4a0 : Revert "Avoid COW materialization in `at::parallel_for/parallel_reduce` (#120455)"
dbe0967a0a : Revert "Add test to check that COW inputs are not materialized (#119507)"
7e185277cd : [cuDNN] bump cuDNN-frontend submodule to 1.1.2 (#120761)
9c9bde515c : Factor out Submod compilers (#120527)
5b5bcf0470 : Test that tlparse understands the structured logs we output (#120658)
d6c202975c : Move attention kernels from meta_registrations to fake_impls (#120682)
50073248ed : add a note wrt torch.nn.functional.scaled_dot_product_attention (#120668)
e2ee87d48b : Fix segfault on mac when running vulkan tests (#120337)
e317e39a02 : Fix `nonlinearity` arg issue in RNN (#120234)
8b22fe9594 : [FX passes] Set group/batch fusion log to DEBUG level (#120780)
4903e33e19 : Revert "Capture non tensor arguments in record_function (#120017)"
01ec8df6d8 : [Compiled Autograd] Introduce BackwardState capture (#120382)
c016ffed5b : [C10D] Fix logic for default group=None in _set_pg_timeout (#120686)
11de40f82f : [flight recorder] record process group configuration (#120262)
5aa7f8646f : [inductor][Gemm] Autotune with matrix_instr_nonkdim for AMDGPU (#120742)
b020ee5b05 : [PyTorch] Use MaybeOwned when promoting indices/offsets in embedding_bag (#120755)
98d1529474 : [PyTorch] fix mixed int32/int64 indices/offsets for embedding_bag_out (#120752)
db92558229 : [codemod][lowrisk] Fix deprecated use of 0/NULL (#120740)
491c2b4665 : Let torch dynamo inline torch.func.grad (#118407)
5472923998 : derived dim (#118729)
9c55aa6ff6 : TransformerEncoder/Decoder: add type hints (#120550)
4b7a521856 : Update flash_attention kernel from 2.3.6 to 2.5.5 (#118935)
a9d9077f12 : Revert "Increased compile time max GPUs to 512. Switched to int16_t DeviceIndex. (#119639)"
1c67f6cb26 : fix decomposition of aten.diag_embed (#120549)
f422467ccb : [BE]Delay the call to set_pytorch_distributed_envs_from_justknobs (#120625)
91190d8087 : [quant][pt2e] Relax `model_is_exported` input (#120720)
f67c77c497 : Update engine.cpp (#120773)
0ab2ec3738 : [XPU][Profiler] Add Logic To The Profiler For Processing XPU-backend Data (#120185)
3e8b56d362 : [Inductor] Track constant's original_fqn mapping (#120524)
702e82da28 : [cuDNN][Flash Attention] Minor cleanup for cuDNN SDPA (#120750)
364faafe75 : [DCP] Asserts CPU backend for async_save (#120241)
c8a34a4013 : [ez] Smaller weight for some TD heuristics (#120736)
dfe7b9d471 : Move user defined triton tests to inductor test folder (#120738)
df40847486 : Add xpu header to include/ATen/xpu (#120786)
7881b95c73 : Don't suppress error codes in lint job, properly activate conda (#120769)
facfc0baaf : Update _constrain_as_size docs (#120728)
82099ab87b : [easy] Reword unexpected success error messages and generated github issues now that we have sentinel files (#120766)
46e3f670b4 : refactor code to share across different devices (#120602)
a11a49af58 : Add NCCL work sequence number to work info (#120596)
be31e522ce : [PT2][Inductor] Fix "example_value" absent for stack nodes (#120655)
12995a5d9d : [2/2] Intel GPU Runtime Upstreaming for Generator (#118613)
8ba4cb451f : Fix an import loop (#119820)
e9a961f66a : [dynamo][refactor] Use originating_source for HASATTR (#120723)
a774baa501 : [audio hash update] update the pinned audio hash (#120748)
184e815c74 : Add TORCH_LOGS_FORMAT=short alias (#120757)
bd5f290505 : [vision hash update] update the pinned vision hash (#120749)
bfa71b523d : add complex32 to v3_dtypes (#120388)
5a53c0ff23 : [dynamo][refactor] Rename LIST_LENGTH to SEQUENCE_LENGTH, separate DICT_LENGTH (#120721)
1627d9e06d : [aot_inductor] added a utility function aoti_torch_print_tensor_handle (#120660)
d21c6eb215 : Do not wrap output with input device inside _to_copy (#119868)
33499ec41b : [FSDP2][DCP][DSD] Add FSDP2 model state dict unit test with distributed state dict (#120680)
1aa9099839 : [CLANGTIDY] Enable clang-tidy in torch/csrc/xpu (#120616)
1a1fc1047d : Add structured trace logs (#120289)
677e67c399 : Update nn.Module._apply to not gate on should_use_set_data when swap_tensors is set (#120659)
213b3ac3f2 : [BE] fail_* variables don't need to be shared across restarts, they're set only once (#120712)
2ebf2c88ba : Add test to check that COW inputs are not materialized (#119507)
cabc09a5f2 : Avoid COW materialization in `at::parallel_for/parallel_reduce` (#120455)
1e9fafc160 : [Clang-tidy header][20/N] Fix clang-tidy warnings in aten/src/ATEN/*.{cpp,h} (#120574)
9c597ff137 : use condition_variable and wait_until in nccl dump on timeout (#120544)
14b258b5bc : Fix broken link in README (#120698)
5929d4e830 : [CUDA][cuBLAS] Check if a context is present when grabbing a cuBLAS handle (#120131)
f36e00b8ce : Revert "[inductor][Gemm] Autotune with matrix_instr_nonkdim for AMDGPU (#120639)"
6cc7f9a2e6 : Limit loop unrolling (#120023)
f3dd2a544c : Revert "Add structured trace logs (#120289)"
65efece3a4 : [CUDA][cuBLAS] Bump `test_cublas_baddbmm_large_input` tolerances (#117889)
5b5c167adc : [dynamo] Add some helpers to PyCodegen (#120684)
0c8bb6f70c : [dtensor] standardize tuple strategy handling for foreach ops (#120695)
440a9b212d : [profiler] log process group config information in distributedInfo field (#119443)
78f53a3f73 : [inductor][Gemm] Autotune with matrix_instr_nonkdim for AMDGPU (#120639)
3f62b05d31 : [export] Use forward hooks to capture module signatures. (#120468)
ed3c256b61 : Add lowering for adaptive_max_pool2d (#120254)
27bb73fe46 : [AOTI] Fix a strict-aliasing warning (#120628)
c29ac05ac0 : [inductor] correctly retrieve the "shared" attribute from a Triton binary (#120666)
435063aa89 : Decomposition for upsample_linear{1d, 3d} (#114774)
2ad66e6bf0 : Fix test failure: Add torch.cuda._get_device_properties to dynamo trace rules (#120620)
e3d64c4d5d : [dynamo] Desugar accumulate_grad, fix .grad handling (#120590)
9db6a849ed : [FSDP] Clean missing and unexpected keys (#120600)
b2a318d856 : [PyTorch][ExportedProgram] add 'is_lifted_tensor_constant' and 'get_lifted_tensor_constant' utils (#120546)
7c556428c7 : Increased compile time max GPUs to 512. Switched to int16_t DeviceIndex. (#119639)
cbbc309cae : [pytree][reland] Require pytree serialized_type_name (#120636)
12f724c779 : [export] preserve constant fqn (#120664)
a358b23a6a : Keep test order due to rename_privateuse1_backend is disposable (#120464)
5a5b654481 : [BE]: Enable ruff LOG checks (#120674)
b6139b1e57 : [PyTorch][CUDA Caching Allocator] Export sync-stream-and-free-HBM counter in memory_stats for performance debugging (#120050)
a1c641f118 : [executorch hash update] update the pinned executorch hash (#120675)
237773132d : Restore artifact name in log messages (#120671)
ac28571742 : [vision hash update] update the pinned vision hash (#119944)
9d423f0e91 : [audio hash update] update the pinned audio hash (#120135)
63f874b476 : [dynamo][guards-cpp-refactor] DictGetItemGuardAccessor for f_locals (#120593)
27990045ff : docker: Only match tags that start with v* (#120670)
cf6df886a0 : Remove hard numpy dependency from experimental_ops.py (#119520)
2de7468d2b : Switch to native functional collective by default (#120370)
8a59f49da2 : [dynamo][compile-time] Collect guard debug stack info only with logs enabled (#120520)
2e0e545759 : [EZ][BE] Use nested namespace in functorch (#120663)
b3fe53e1ad : [1/2] Intel GPU Runtime Upstreaming for Generator (#118528)
f064dec7e0 : Add torch.ops.aten.print (#120295)
ef9b6d6816 : Replace individual detaches with overall torch.no_grad decorator (#120638)
df72819f91 : clip_grad_norm can use fast foreach path for inf norm (#120623)
b01bd1f7a1 : Revert "Add torch.ops.aten.print (#120295)"
17560eb472 : Revert "[Dynamo] Remove deadcode: unwrapping script_if_tracing (#120444)"
e874376f6a : Mark test_reference_numerics_extremal__refs_frexp_cuda as xfail on windows (#120640)
d341b66e96 : Revert [dynamo] support group=None when rewriting collectives (#12018) (#120677)
fdae9363b3 : [meta registration] efficient_attention_forward fix for NT inputs (#120594)
9dfaef962c : Add structured trace logs (#120289)
ecb3f33a1a : [dynamo] fix segfault in _debug_get_cache_entry_list (#120635)
64660b51f6 : Add the hyperlink of the transformer doc (#120565)
c59b14163b : Implement aten::upsample_linear1d on mps (#115031)
30625ae582 : Add cpp stack traces to our own reruns (#119408)
41adec3c59 : Revert "Switch to native functional collective by default (#120370)"
7b1cc140aa : Use lxml in scripts/compile_tests when it is available (#120633)
5a0a964444 : [Dynamo] Fix guards for script_if_tracing or lru_cache fn with default args (#120390)
55b5908427 : [PT2][Inductor]Add unbind node normalization (#120253)
274b362442 : [FSDP] Removed `.detach` in `clip_grad_norm_` (#120612)
fd3cf88f27 : Rewrite docs about why we guard on dynamic dims (#120566)
759204253f : [export] Change runtime asserts to using assert_scalar (#119608)
2fb32a5f07 : Enable fake tensor caching in fbcode by default (#118555)
ee01d0807b : [dynamo] Function => FunctionCtx for placeholder obj (#120577)
7eb7ac815f : [inductor] Optimize welford reduction (#120330)
c39bbd6def : Numbers based TD (#119901)
86063b4d03 : Add torch._print to dynamo trace_rules (#120533)
8a32a07856 : Revert "Add meta device support to sparse compressed tensors (#120498)"
b381a4372b : make GPT2ForSequenceClassification pass inference accuracy check (#120537)
f4cf25bb24 : Fix a bug where nn.functional._AllGather.backward produces wrong gradients (#120582)
c617e7b407 : Add resnet50/mobilenet_v2_quantized_qat in into deterministic_algorithms exclusive list (#120384)
a299db2983 : [dynamo][guards-cpp-refactor] NO_HASATTR guard (#120469)
1c7b0e7cd1 : [inductor][cpp] disable masked load for non-fp data types (#120558)
ea20885d95 : [executorch hash update] update the pinned executorch hash (#120264)
c18623b7ed : [dynamo] Reland 120147 - - Use EQUALS_MATCH guard for mod.training (#120578)
685d862c45 : Add SparsePrivateUse1 in backend_to_string, layout_from_backend and check_base_legacy_new. (#119263)
4328e772bf : [dynamo][guards-cpp-refactor] DICT_VERSION guard (#120416)
c269e48af0 : [dynamo][guards-cpp-refactor] DictGuardManager (#120359)
775a4388d9 : [dynamo][guards-cpp-refactor] WEAKREF_ALIVE guard (#120344)
5d71ba6885 : Add meta device support to sparse compressed tensors (#120498)
834c7a1d3e : [dynamo][refactor] Move some helper functions to global scope (#120426)
5c7b761f8e : Fix default world_size when running on 1 or 0 GPU (#119372)
81f0b2c14e : [Clang-tidy header][19/N] Enable clang-tidy on torch/csrc/autograd/profiler_legacy.* (#120552)
298c686d3f : [dynamo] support group=None when rewriting collectives (#120118)
3e382456c1 : Fix compiler check (#120492)
0f20cc1e0e : Don't use size on TensorVariable when doing out resize test (#120567)
54c1cf8d8a : add distributed checkpoint support for custom device (#120201)
56203fc407 : Add profiling for backward (#120540)
a17979faa6 : [dynamo] add stronger test for dynamo memory leaks (#120459)
a62d9184d5 : [ET-VK] Move graph runtime from PT directory to ET directory (#120528)
1f1bc0e6ac : Switch to native functional collective by default (#120370)
33938cfddd : [BE][Ez] Update ruff to 0.2.2 (#120517)
79f059987e : Update find_test_dir() to check for skip files relative to the local path first. (#120521)
15add24bf2 : fix: set codegen in _SplitterBase partitioner (#120361)
3eefe96297 : Update scripts/compile_tests/update_failures.py (#120529)
b7df3bba62 : add decomposition for frexp (#119217)
f7e79299c7 : register torch.return_types in torch.fx._pytree (#120027)
c3496d50f0 : Fix torch.return_types init signature (#119284)
623632a401 : More informative stacklevel for autograd function warning (#120512)
4d2073bc3f : [Dynamo] Remove deadcode: unwrapping script_if_tracing (#120444)
8e20385447 : [c10d] fix the macro definition of NCCL_COMM_DUMP (#120502)
7cd623aa89 : Remove monkey-patch for torch.utils._rebuild_tensor (#120446)
ed0ea2f30b : add `export` to `torch.jit.__all__` (#120432)
e29eb39e04 : [EZ] Fix typo in gcc version detection (#120489)
007606e520 : [dynamo][guards-cpp-refactor] TENSOR_MATCH guard (#120342)
4b65d192f0 : [dynamo][guards-cpp-refactor] DYNAMIC_INDICES guard (#120096)
a92ce46dc3 : [dynamo][guards-cpp-refactor] GlobalWeakRefGuardAccessor (#120093)
bb331b1eb5 : [dynamo][guards-cpp-refactor] LENGTH_CHECK guard (#120123)
2eac593ffd : [dynamo][guards-cpp-refactor] TUPLE_ITERATOR_LEN guard (#120119)
da95421f05 : [dynamo][guards-cpp-refactor] TupleIteratorGetItemAccessor (#120091)
39f0a5ecc9 : [c10d] simplify the dump timeout logic and unify the async call (#120331)
8646872ff7 : Make balance_gradient preserved in export (#120332)
182ed1e32c : Use a dtype property in torch inductor nodes (#119227)
d54121d13f : Increase bazel CUDA tests timeout to 480s (#120443)
6b35415a54 : Create a sentinel file for each dynamo test skips (Part 2) (#120501)
cffdd642a9 : Create a sentinel file for each dynamo test skips (Part 1) (#120500)
2120f65174 : [AT-VK][EZ] Move ops to dedicated folder (#120364)
6d920dd3c6 : [ET-VK][Op Redesign][2/n] Introduce OperatorRegistry (#120363)
3e2ac1f094 : [AT-VK][EZ] Define OpNode constructor (#120362)
232f09e0ea : Add copy of scripts for setting up s390x workers (#120417)
3b944113c8 : Add torch.ops.aten.print (#120295)
97918e8c37 : [Clang-tidy header][18/N] Enable clang-tidy on headers in torch/csrc/cuda (#118504)
2892d2f31b : Revert "[inductor] Optimize welford reduction (#120330)"
2c85c9e77e : [Memory Snapshot] Add Total memory used after allocation in Trace View (#120339)
d9db9e62e3 : Describe special case in avgpool (#120335)
cef9f70f4b : Move torchbench model configuration into a YAML file. (#120299)
54bac042e7 : Fix error in examples of `torch.linalg.lu_factor` (#120484)
b96ea097ee : [aotinductor] rename CppWrapperCodeGen and CudaWrapperCodeGen (#120391)
72fec96e59 : fix no shard state dict loading (#120367)
9e9eaf0032 : [CUDA] Workaround register spilling issue in mem-efficient SDP kernels on `sm60` (#120445)
edf1c4e552 : [Dynamo] Handle guard_size_oblivious in user code (#120379)
a5548c6886 : Create a sentinel file for each dynamo test failure (#120355)
2240018c03 : Construct `c10::Half` from `float16_t` on ARMv8 (#120425)
3f6be7696b : [cuDNN][cuDNN RNNv8 API] Fix math type behavior in cuDNN RNN (#120277)
36c1cc962a : Update cutlass from 3.3.0 to 3.4.1 (#120434)
f609f2050f : [structural binding][6/N] Replace std::tie with structural binding (#120353)
3426c6f559 : update the tensor.scatter_ doc (#120169)
bb6f50929b : Fix lint after https://github.com/pytorch/pytorch/pull/105590 (#120461)
2b0168aeb0 : [c10d] update the work progress of PG periodically (#120438)
8f4ffd3d8a : [HigherOrderOp] makes control flow operators respect global decomp table (#120412)
156954d6a2 : [Inductor] Add support for NEON ISA in the Inductor C++ backend (#105590)
4c6ba16f82 : [inductor] Optimize welford reduction (#120330)
722afe6171 : Revert "[dynamo] Use EQUALS_MATCH guard for mod.training (#120147)"
3588e7f265 : Ignore .numpy() under FakeTensorMode() (#120261)
f9eb66e16d : [BE][EZ] Flatten preprocessor hierarchy (#120422)
1c7ba330b2 : [BE][optim] Simplify _init_group. (#120055)
5603d95375 : [DeviceMesh] Ensure mesh tensor is a cpu tensor (#120046)
c11bd724fe : [ROCm] replace ROCmLoops.cuh with hipified CUDALoops.cuh (#120101)
77692736d1 : Use privateuseone during external module register test (#120399)
edd03f975f : highlight readme code block (#120228)
1eae8950b9 : [Dynamic] Fix dynamic shape size inspection bug (#120341)
11e4a9266d : Temporarily support ranks + tag as pg identifier in native funcol (#120226)
5a3e19578f : Make tests using CommDebugMode work for both legacy and native funcol (#120070)
a4c5f48b11 : Prepare test_dtensor.py for native funcol migration (#120043)
1c9fc720ae : Change the .clone() in native funcol's all_reduce to use at::MemoryFormat::Contiguous (#120042)
7b8f6736d1 : [cond] make sure subgraphs in cond are decomposed according to current decomp table (#120366)
680cfec295 : Fix the default value of side in torch.searchsorted (#120066)
c37d07a1bc : [FSDP2] Removed `super().__setattr__` call (#120340)
2ba798df60 : [inductor] decompose memory bound mm (#120047)
ce807c17c0 : modify comment of SparseTensor coalesce (#120221)
bb72bfe2ac : Add code example for torch.stack() (#120304)
ca64f7cbb8 : Fix rendering in the doc of `PackedSequence` (#120385)
a77226aa49 : [inductor] improve kernel metadata logging (#120274)
b88621040a : [profiler] Add kineto init delay when used in daemon mode (#120276)
be0ee93467 : [pytree] support `X | Y` union type in `tree_map_only` (#120389)
65627cfd6a : [dtensor] implement scaled dot product attention (flash-attention) (#120298)
f2452e98a6 : Revert "Native Half on ARM (#119483)"
c7328602ed : [ROCm] enable tests test_sampled_addmm_autograd_cuda_*, test_sample… (#117501)
1c1028ac49 : [DCP] Adds utility for converting torch save to dcp (#119815)
aae7ccd2d5 : [FSDP2] disable compile in broken unit tests (#120358)
1ab441a7dd : [DCP] Adds utility for converting dcp to torch save format (#119814)
e0a7b024b0 : [ROCm] Skip test_parity* unit tests in test_foreach only if ROCm version < 6.0 (#117301)
de60050801 : [inductor] Colorization improvements for bandwidth profiler (#120343)
03f7235caa : [Dynamo] Fix dynamo trace rules (#120371)
0e4bd25a33 : [inductor] When generating debug logs don't fail if nvcc not found (#120346)
c2b2e57032 : Intel GPU Runtime Upstreaming for Guard (#118523)
dcfe463600 : fix xpu build failure (#120315)
faad8ecb26 : Use opmath for sinc on CPU (#120311)
5c5b71b6ee : Capture non tensor arguments in record_function (#120017)
7e6bce9684 : [amd] fix unused variable device_flags (#120369)
5210a22b39 : Add basic shampoo test (#120293)
354a436d96 : Remove device assert in Gradscaler (#119362)
fff9d98e58 : Revert "Increased compile time max GPUs to 512. Switched to int16_t DeviceIndex. (#119639)"
8fa6340701 : Revert "Ignore .numpy() under FakeTensorMode() (#120261)"
1aad5c98b4 : [structural binding][5/N] Replace std::tie with structural binding (#120142)
d514df63ea : Reenable triton tests and clean extra clones after the pin update (#120324)
952b37145b : Ignore .numpy() under FakeTensorMode() (#120261)
450339ab2d : Test for fatal signal in test_pynode_destruction_deadlock (#120279)
306642b66d : [export] fix test_passes on ci (#120322)
e0268821dd : Increased compile time max GPUs to 512. Switched to int16_t DeviceIndex. (#119639)
27c5bbe5cb : Add is_nested_int() (#119975)
2e77629b9f : [pytrees] Allow tree_map_only to support predicate function as filter (#119974)
722e87899a : [Memory Snapshot] Clean up elem text (#120245)
a5893926f2 : [dtensor] simplify outputs wrapping handling (#120297)
e06978be4b : [CI] Add initial inductor cpu smoketest for performance (#116456)
9630bcbd49 : [execution trace/chakra] remove backend_id from pg_info (#120038)
e7eab2f07e : Fix to keep stride in return_and_correct_aliasing() (#117860)
fa77829126 : Remove bc linter label triggers after test-infra #4956 (#120148)
e87deb8004 : fix: conversion of max memory allocated and reserved to GB (#120172)
d336be2942 : Update torch.mean() description about dtype restriction. (#120208)
9c64068ef8 : [dynamo][guards-cpp-refactor] TypeGuardAccessor (#120089)
ec6783990a : [dynamo][guards-cpp-refactor] GlobalsGuardAccessor (#120068)
66c52d678f : [dynamo][guards-cpp-refactor] GetItemGuardAccessor (#120067)
7a0c2a9d0a : [dynamo][guards-cpp-refactor] NO_TENSOR_ALIASING guard (#120065)
8d5ae8c0b3 : [dynamo][guards-cpp-refactor] TENSOR_ALIASING guard (#120064)
034955b2fc : [dynamo][guards-cpp-refactor] DATA_PTR_MATCH guard (#120062)
cc6cf89c30 : [dynamo][guards-cpp-refactor] GLOBAL_STATE guard (#120061)
5066bec743 : [dynamo][guards-cpp-refactor] DEFAULT_DEVICE guard (#120060)
8f3fd79b23 : Native Half on ARM (#119483)
29b2131c62 : [Inductor] Fix bug around out of order constexprs in inductor (#120287)
cfddfce0d3 : Alternate sharding (#119078)
a24cba35b0 : [c10d][flight recorder] dump additional NCCL debug info (#120063)
06bc203c7b : Update dynamo_test_failures list (#120271)
9199468401 : Properly trace into mark_static (#120232)
d38a3627a5 : Support privateUser1 key in RNN op. (#118182) (#118351)
eae025b1d7 : Fix bug with block pointer multi dim args (#120263)
3cd6a21e8f : [DeviceIndex][6/N] Use DeviceIndex in more places (#120133)
d5d13ab15e : Remove C10_FALLTHROUGH (#120157)
d6801578c3 : Update tracing rules for new cudnn functions (#120268)
65519d183b : Remove old optimizer tests (#120257)
b4cef25a1e : add register_device_op_overrides (#119268)
3993771617 : Expose recordSize in ChunkRecordIterator (#120239)
26610175d2 : pass device_str for async_compile.triton function (#120202)
800e9acd43 : [inductor] fix bandwidth estimation for StarDep (#120266)
20f7e5a719 : Remove dependency of triton during inductor codegen (#120193)
dd6b5e236e : Prepare test_inductor_collectives.py for native funcol migration (#120025)
af765dbdfd : [ez] Explicit env for run_test (#120251)
a1fc29cd78 : Revert "[pytree] add function `tree_iter` (#120155)"
701f651f9c : Change the parameter type from int to float in torch.nn.Softplus (#120183)
35891e5007 : Explicitly set nn.Module.set_extra_state return type to None (#120161)
e54c4e8659 : [aot_autograd] handle subclass input mutations correctly in collect_metadata_analysis.py (#120136)
b36404159d : [aot_autograd] support inplace mutations for subclasses (#120141)
96092e1f55 : Extend aot_graph_input_parser to sym shapes (#120246)
7acdd08fcc : [FSDP2] Used stream APIs for CUDA event handling (#120231)
dfb83df889 : Revert "Add cpp stack traces to our own reruns (#119408)"
2d6c0cc81b : Run test_functional_api.py with both legacy and native funcol impls (#119982)
d42ede8ae4 : [torch.compile] Log compilation start time for timeline view (#120220)
be8ba5ef2d : Revert "use two pass reduction for deterministic reduction order (#11… (#120243)
4f0f25b7ce : [Inductor][bugFix] fix a bug in merge_splits (#119956)
957f37686a : Refactor instance_descriptor for new triton version (#119636)
8464654ae4 : Add missing words to torch.utils.checkpoint doc (#120196)
b33e8d3f6b : [Inductor][fx pass] Add split cat pattern to remove cat nodes (#115004)
cccacf6c8e : add a test that non_overlapping checks dont generate too many guards (#120106)
6d82a7e9b0 : Add pixel_shuffle to core aten decomps (#120092)
53bfae2c06 : [MPS] Add `torch.fft.` support (#119670)
5f3f8fd3c7 : [Inductor] Setting kernel launch and exit callbacks for inductor generated triton kernels (#119450)
d3839b624b : [ROCm] HIP Lazy Streams (#119996)
26fbbc3e84 : DTensor + dynamo: fix is_shard/replicate always inlining to False (#118668)
609cde94f9 : DTensor: use memory_format in the hash for all aten ops that use that arg (e.g. aten.clone) (#118667)
6819452a08 : fix multiple-fake-modes bug with compile + subclasses (#118191)
b4b1480b06 : remove redundant to_dtype in Fused Scheduler Nodes (#118365)
c28a43988e : Fix typo under aten/src/ATen/native directory (#119686)
389b56b4c4 : [dynamo][guards-cpp-refactor] GetAttrGuardAccessor (#119833)
96f45d15d8 : [dynamo][guards-c++-refactor] EQUALS_MATCH guard (#119827)
0802951081 : [dynamo][guards-c++-refactor] Introduce LeafGuard, GuardManager and GuardAccessor classes (#119822)
0512ba43ab : [executorch hash update] update the pinned executorch hash (#120214)
a7e2b609d3 : Skip less replacements (#119570)
cc7ef43423 : use two pass reduction for deterministic reduction order (#115620)
ae7830051d : [BE] Delete GCC-7 ICE workarounds (#120122)
0bdeaad936 : Revert "add register_device_op_overrides (#119268)"
3ad067fe2b : [CPP] Update GCC minversion check to 9 or newer (#120126)
48bdd0fb47 : [ROCm] TunableOp bugfix filename handling (#120144)
f1fbba8f35 : Revert "Fix lint after #119268 (#120207)"
a73a98c9ae : Revert "Updating sleef submodule to eb3d97785 to fix export errors (#119953)"
d9d0f1dccc : Fix lint after #119268 (#120207)
92bf2a4550 : [torchbench] Update skipped models. (#120117)
637cf4a3f2 : Test parametrization utils for native funcol migration (#119950)
40786ca509 : Handle unwaited work objects on process termination (#119881)
84de851539 : [Inductor] Enable the decomposition of quant/dequant per channel (#119177)
fa9cbdce99 : Updating sleef submodule to eb3d97785 to fix export errors (#119953)
f2cf0768d1 : [dynamo][distributed] handle _rank_not_in_group, _get_or_create_default_group (#119628)
372d078f36 : [pytree] add function `tree_iter` (#120155)
61a3a7628c : [nit][DTensor][Test] Update test name to reflect the actual test (#118960)
2864a7e161 : add register_device_op_overrides (#119268)
70bc3b3be4 : [executorch hash update] update the pinned executorch hash (#120165)
d74bdd5042 : [inductor] Always allow 64 bit in next_power_of_2 (#120164)
de15781af0 : [cuDNN] Bump cuDNN frontend submodule to 1.1.1 (#120137)
b642a18e80 : [dynamo] Use EQUALS_MATCH guard for mod.training (#120147)
0b11b0edd6 : [dynamo][refactor] Use existing helper functions for CLOSURE_MATCH (#120145)
0c972c7c4e : enhance next_power_of_2 function (#120153)
2fea475215 : [dynamo] Refactor reconstruct() not to return anything (#120150)
757fc663a8 : [dynamo][refactor] Use TYPE_MATCH instead of manually constructing guard (#120140)
48d96c08f2 : [dynamo][guards] Use EQUALS_MATCH for NAME_MATCH (#120132)
a9953a5ef3 : Remove unused c10/util/C++17.h inclusion and outdated checks (#120149)
fac598c4ae : [inductor] allow padding mm/bmm/addmm in the presence of dynamic dims (#120073)
2f8a80ecb2 : Fix skip for test_set_nccl_pg_timeout (#120130)
badf84bd6b : [inductor] Add torch.cond support to JIT Inductor (#119759)
30000aa3fd : [c10d] remove one line of verbose log (#120138)
fa0e39560c : [AOTI] Fix a typo (#120094)
0a7471e0df : [executorch hash update] update the pinned executorch hash (#120134)
ac2ba7889d : [export] turn on replace_set_grad_with_hop_pass in pre_dispatch (#119915)
737630268c : [export] manually create test cases for split and inline (#119914)
8d81e61fb6 : [export] make node_inline_ also inline the get_item calls (#119913)
812f05d731 : [export] add replace_set_grad_with_hop_pass (#119810)
4769e6916a : [export] add node_inline_ to prepare replacing set_grad_enabled with hop (#119736)
068659ddc2 : [export] add sequential_split to prepare replacing set_grad_enabled with hop (#119732)
becfda005e : tiny improvement to the cprofile wrapper (#120100)
36e118b810 : [inductor] logging meta data for inductor generated triton kernel (#120048)
24968ff042 : Add quantized gelu (#119935)
7973ac586d : [Memory Snapshot] Add CUDAAllocatorConfig details into snapshot metadata (#119404)
9aa8bbf7f2 : [BE] Delete `C10_IS_TRIVIALLY_COPYABLE` (#120120)
79569d117d : Add hpu device support in storage/resize (#119761)
6b63d3bac9 : [ONNX][dynamo_export] Adjust to new symbolic shape name format in value_info (#119855)
e61c8ef3aa : Simplify c10::is_pod implementation and remove unneeded inclusion of C++17.h (#118212)
6952d6ddad : [structural binding][4/N] Replace std::tie with structural binding (#120039)
761fa5d6ec : Add FakeTensor support to torch._utils._rebuild_tensor (#108186)
7ad4ab4765 : Remove unused import (#120004)
7b1f5c874f : [PT2][Optimus][Observability] Log the optimus graph transformation to the scuba (#119745)
006eead7d2 : [dynamo][functional_collectives] Add all_to_all_single, all_gather_list, reduce_scatter_list to dynamo remapping (#119683)
4f4629d522 : [Dynamo] Fix ListIteratorVariable repr to avoid log flooding (#120053)
26343451be : DTensor: make tensor_flatten more compatible for dynamo getattr (#118209)
ee7bcf23db : dynamo: support attribute access on tensor subclasses without sources (#117666)
67f6aca0d0 : dynamo: respect autograd.Function + multiple save_for_backward calls (#117667)
4ac857f94e : Support broadcast in native funcol (#119229)
24d5caba6e : [EZ] Fix argument parsing in build_with_debinfo (#120088)
2d4aa91a10 : Fix searchsorted function signature in docs (#120086)
288d1f3698 : [Optim][Rprop] Replace new().resize_as_() by torch.full_like() (#119978)
6ea4480818 : [quant][pt2e] Add `model_is_exported` util function (#119726)
312ce35c1f : Rename singleton int to nested int (#119661)
b97fa6ac30 : Make roll a decomposition and remove its lowering (#119857)
8b02d64197 : Correct index propagation for % (#119864)
00524970e8 : Simplify indexing when doing ModularIndexing + index propagation. (#119863)
86dedebeaf : Revert "Add pixel_shuffle to core aten decomps (#119899)"
b10ae9e54c : [pytree] Properly register immutable collections (#120036)
124c251510 : Guarantee init cuda before attaching hooks (#120052)
fbe8e0f92d : Fix missing right square bracket to match glog format (#119966)
9726d7ca8e : Add lowering for logcumsumexp (#118753)
3f4dd9bfa4 : Back out "[pytree] Require serialized_type_name" (#120041)
4625ecb858 : Add decomp for linalg.cross (#119809)
3693d8f467 : Do not convert UnsupportedFakeTensorException into RuntimeError in runNode for proper graph breaking. (#120026)
54025c01a7 : [DCP][state_dict] Let distributed_state_dict filter out the compiler prefix (#119830)
bc7f3efb09 : [aot_inductor] move CppWrapperCodeGen into a separate file (#119871)
78c9b2948a : [aot_inductor] move CudaWrapperCodeGen into a separate file (#119870)
8f9f12c068 : Intel GPU Runtime Upstreaming for Device Allocator (#118091)
b8be8b639f : Add Runtime Constant-Folding function of AOTInductor for AOTInductorModels used internally. (#119823)
4dc75f9084 : Intel GPU Runtime Upstreaming for Event (#117734)
02fb043522 : Change native funcol inductor tests to use fake pg (#119104)
62e5840b36 : [Dynamo] Do not create TorchInGraphFunctionVariable for tags (#120005)
ddde1e4dee : [executorch hash update] update the pinned executorch hash (#119943)
4eefe7285a : Use ARMV8 fconv insns to speed up scalar fp16<->fp32 (#120012)
3e5e8590f4 : Account for inference mode in FakeTensor cache (#119963)
8bfc87ce74 : fixed flop counter formula for conv transposed backwards pass (#119874)
17c345ebd9 : [FSDP] compile compute and CI with @test_compiled_fsdp (#119933)
c802c50196 : Setup Nvidia Runtime before Indexer (#119923)
4319735ace : Add meta registration for _foreach_norm (2nd try) (#119927)
707cde9b31 : [DTensor][Test] Improve math_ops test (#118956)
94f19fe545 : [3/N] Replace std::tie with structural binding (#119962)
2a63dd8889 : [Dynamo] Support lazy module with namedtuple/dict input (#119972)
f9f602fcb8 : Clean up decorators (#119925)
444c628e06 : Include the scalar tensor auto-transfer in the doc (#119967)
47300221c2 : Revert "[export] Change runtime asserts to using assert_scalar (#119608)"
da1df5d7b8 : [ROCm] Update triton wheels to ROCm 6.0 (#119765)
3f4f91f2eb : [inductor][easy] fix profiler (#119959)
65fd8b6730 : Revert "[export] Disable exported_program.__call__ (#119466)"
744898b311 : Add doc page for environment variables that affect PyTorch Runtime (#119087)
d707e3c9c6 : Fix handling none source in build_torch_function_fn (#119724)
9548860b37 : Fix typo in istft docstring (#119776)
a2f07bb317 : Fix typo under docs directory (#119657)
2d7a395c0f : Fix typo in functional.py (#119775)
c3b4d78e17 : [Dynamo][Easy] Fix a small bug in test_trace_rules.py (#119973)
b4c7afe101 : [pytree] Require serialized_type_name (#119718)
f32560c939 : Remove Redundant Bullet Point (#120007)
605de946cf : Clarify the patience in ReduceLROnPlateau (#119872)
26b6de43e5 : Revert "Use ARMV8.2 scalar fp16<->fp32 conversion (#119895)" (#120001)
9b6fae2d79 : Tweak to pr#119719 - eager & fullgraph (#119921)
01ee85c8ab : [PyTorch][Vulkan]remove redundant test of log_softmax (#119964)
8835ff1b09 : [AMD] Update hipify code to oss (#119958)
143b5f2745 : Fix the missing device in _memory_profiler (#119751)
98fd23cccc : [EASY] Move OpsHandler and MockHandler to their own file (#119851)
6f324e8776 : [ATen] Tag isinf as a pointwise op (#119728)
e386bfa688 : [CUDA][cuSPARSE] Work around IMA in cuSPARSE ALG1 on SM 8.9 devices (#119610)
2429495820 : [FSDP2][ez] Made typing more strict to avoid `cast` (#119985)
840426e793 : [export] Log export time. (#119960)
9b38ee2343 : Revert "Alternate sharding (#119078)"
a83a1bc43b : Adding c10 device type to newly added DeviceAccelerator (#119961)
e5bfdde7ba : Fix the skip condition for test_c10d tests (#119938)
c26884f063 : [export] Disable exported_program.__call__ (#119466)
f4d641ba2f : [export] Change runtime asserts to using assert_scalar (#119608)
c83af673bc : Allow CUDA extension builds to skip generating cuda dependencies during compile time (#119936)
d4882e438a : [DeviceIndex][5/N] Use DeviceIndex in more places (#119866)
68328ad394 : Check existence of caffe2::mkl target (#119945)
0898ead2d5 : Timestamp Embedding Indices Generated for TD (#119955)
af346df6a0 : [PyTorch][Vulkan]fix the issue of log 0 after softmax (#119898)
cd08dc37f8 : Support tracing native functional collective via python APIs (#119103)
5f9b432494 : [2/N] Replace std::tie with structural binding (#119879)
9ff9798716 : Fix a bug in kernel analysis with ttir defined args (#119934)
7f5b87c953 : [torch.compile] Log more compilation time breakdown (#119865)
516f38a144 : [RelEng] Define `BUILD_BUNDLE_PTXAS` (#119750)
a07fd51b6b : [caffe2] Add an avx512 implementation of adagrad_update (#113289)
861acda205 : Alternate sharding (#119078)
b4252d73b1 : Make pattern matcher more robust (#119876)
daf1050ae5 : [dtensor] refactor sharding cost model to count for latency (#119897)
99cb807e25 : Skip test_wrap_bad if run under pytest (#115070)
d833e2f236 : Use ARMV8.2 scalar fp16<->fp32 conversion (#119895)
096ebcca73 : [FSDP2] Added gradient accumulation w/o reduction (#118298)
8f27fde2f5 : [export] Log private api uses. (#119848)
340b6fa972 : Deduplicate docs between global and non-global full backward hooks (#119708)
3713103db4 : Revert "[Inductor] Setting kernel launch and exit callbacks for inductor generated triton kernels (#119450)"
756cf2913d : Fix NJT stride access in SDPA dispatcher logic (#119846)
0560c193a6 : Fix meta registration for _flash_attention_forward() [ROCm forward fix] (#119910)
734ae20f2e : [C10] Expand half unittest (#119892)
3470ab42bb : [DCP] Automatically set `no_dist` if distributed is unavailable (#119813)
cd380c794f : [CUDNN][SDPA] Experimental cuDNN Flash Attention v2 Inference (#115663)
9ec8dd2467 : Reify view_func() closures as ViewFuncs (#118404)
6b04251b87 : [inductor][scheduler] Use set for origin (#119861)
29235c7063 : Handle aliases correctly in foreach (#119508)
e0f6fa6a7c : Windows Dynamo Error Removal CI Check (#115969)
9201d7335a : Add pixel_shuffle to core aten decomps (#119899)
244b124bb8 : Add linux cpu test for 3.12 (#117853)
bb67a28738 : [DTensor] Enable Adamax foreach optimizer (#119850)
2aad3f93f8 : Fix guards for field access through properties (#119719)
7797a8c2cb : [testing][inductor] Allow grad tolerance override (#119844)
15f1b9f1c4 : Improve TORCHDYNAMO_EXTENDED_DEBUG for GuardOnDataDependentSymNode (#119412)
0e6eee3c89 : [ROCm] TunableOp (#114894)
90f785dc34 : Change default TORCH_LOGS format to match Meta/glog standard (#119869)
d999222fba : [dtensor] add op support for nll_loss_backward (#119256)
47182a8f4b : Add cpp stack traces to our own reruns (#119408)
6cf48187c5 : [export] Remove references to capture_pre_autograd_graph inside test_export (#119875)
ee3a7bdc2d : [export] Don't error if nn_module_stack doesn't contain a class (#119753)
3e21c785a4 : [ROCm] Initial ir.Scan/aten.cumsum lowering support on ROCm (#119369)
fb492f7ca1 : [inductor] Reorder if check to avoid more expensive check. (#119817)
184605ae7d : [inductor] Replace generators with map. (#119818)
edd9ddf73f : Propagate allow_non_graph_fake between get_fake_values_from_nodes and get_fake_values (#119731)
87c6cd2f00 : [1/N] Replace std::tie with structural binding (#119774)
a45c627f27 : [c10d][flight recorder] store a copy of string in entry (#119837)
4a50572c92 : [inductor] Recursively unwrap_storage_for_input when convert_to_reinterpret_view fails (#119867)
9f44274373 : Add tests to verify disabled optimizers (#118919)
ca55468416 : Target Determinator Indexer Workflow (#118824)
caf9d9d7c1 : [executorch hash update] update the pinned executorch hash (#119733)
54a30f6d4e : [Dynamo] Update trace_rules.py and re-enable skipped tests (#119860)
8ba2675488 : Fix for-loop divisibility parsing (#119859)
1f0e4ac146 : Add support for while-loops in ttir analysis (#119838)
5ffac768f6 : Add support for labels to ttir analysis (#119836)
3f09c5ee66 : Add TTIR verification (#119835)
b257ff80da : Add test scf.for with multi return (#119834)
72bbbab70a : Add the missing test_dynamo_list_index from #119151 (D53392287) (#119854)
563f1b9fef : [inductor] Use torch.cuda.clock_rate instead of triton.testing.nvsmi (#118662)
80379ef0aa : [dynamo-must-fix] Use ID_MATCH for UserDefinedClass (#119853)
4240304da4 : [TorchElastic] Handle SystemExit with code == 0 (#119697)
5ce305270b : Add a decomposition for isin() (#115390)
75a6d6aef7 : [inductor] Support storage resizing (#119749)
31e59766e7 : Fix meta registration for _flash_attention_forward() (#119812)
179ecab7e7 : Do full checkout in lint workflow to rebuild new Docker images (#119858)
690f54b0f5 : [dynamo][nit] Cleanup analyze_kernel_mutations nits. (#119703)
f9f0c67445 : beef up non-overlapping checks for detecting false aliasing of graph inputs (#119826)
c9459e7f55 : Update atomicMaxFloat (#119577)
8e029dc616 : [export] fix tuple return with symints (#119829)
4a5b2cd6cb : Revert "Windows Dynamo Error Removal CI Check (#115969)"
16369816a2 : [sparse] semi-structured sparse refactor (#117302)
2536c5186e : [BE] Properly mark destructor overrides (Take 2) (#119656)
cb0886ecf2 : [DeviceIndex][4/N] Use DeviceIndex in more places (#119741)
b2e779868f : make internal lintrunner mypy clean (#119840)
507db17675 : Update HF pin (#119717)
b51e0246b7 : sccache version update (#119554)
be35fc9ea7 : Size oblivious test for slice optimization (#119625)
d81d5f52d5 : [FSDP2][ez] Replaced `groupby` with `all` for same-dtype check (#119825)
cf117e37d5 : Refactor THPStorage_resize_ (#119671)
ca777fbbb7 : Add Accelerator device and shell hooks (#119329)
e9b78f2db0 : Rewrite group_batch_fusion.find_independent_subset_greedy() to be iterative. (#118324)
ba1eb0e27f : [ROCm] upgrade CI to 6.0 (#119495)
df9b44436a : [ROCm] Enable float16/complex32 fft tests on ROCm (#117296)
63d64c8995 : [MPS] Enable more bfloat16 ops (#119738)
eb9a3383c2 : [MPS] Add naive std_mean implementation (#119777)
ee5b59dd4b : [ROCm] CatArrayBatchedCopy performance improvement (#118685)
6665b96ebb : Rewrite maybe_reduce more carefully for unbacked SymInt (#119562)
28f299a870 : [c10d] Fix compilation of NCCL_EXP path (#119805)
f9200c8608 : [BE][Ez]: FURB129: remove unneeded readlines() (#119796)
3319dbcd23 : Update vmap guard to avoid recompilations (#119061)
abadbbc4b0 : [c10d][flight recorder] remove unintended assignment of entry (#119748)
34638c82a6 : [mergebot] No unique behavior for facebook bot re pending jobs (#119735)
8ec3d8e35f : Fixed FxGraphDrawer compat constructor (#119767)
8ec8d78ef2 : [quant][pt2e][be] Rename eval_utils -> export_utils (#119725)
0a2e000edf : [BE] Enabled mypy in `common_fsdp.py` (#118755)
8c1480f568 : [FSDP2] Added mixed precision (#118223)
3956ce01e0 : [FSDP2] Added autograd/memory/overlap/frozen/2D/AC tests (#118136)
39c68efd85 : [dynamo] Capture untyped_storage().resize_() (#119647)
c0e5cca4f8 : [DDP] Change the --no-optimize-ddp flag to reflect the latest usage (#119437)
c2522554dd : Prevent DCE'ing unbacked SymInt for view outputs (#119552)
52de407b6c : Avoid performing replacements when it would unrefine ranges (#117356)
0fd371c868 : fix torch.cumsum docs (#117944)
c2a835d710 : [inductor] Refactor device guard Python codegen to allow nested indentation (#119673)
f4b5f710e8 : Fix typo in private attr of `inference_mode` (#119167)
3629287151 : Implement analysis for for-loops (#119730)
2ae655b4f1 : caffe2: remove support for specifically running "flaky tests" (#112007)
60148f1761 : [EZ] Set maximum supported version of Python as 3.12 (#119743)
beb0041384 : improve cuda graph symint logging msg (#119739)
bfb9ea1a43 : fix compile DTensor.from_local in trace_rule_look up (#119659)
379183a0dd : Skip log line if no tensors were dedupped (#119742)
a4c476a081 : [BE] Use more GTest primitives in XPU unit tests (#119527)
47a2e6b6b8 : Fix C++20 build (#112333)
2bda6b4cb8 : [DTensor] Only wait on AsyncCollectiveTensor after DTensor-based state dict loading (#119716)
2502a01110 : Linear-BN Fusion: add precondition check (#119264)
15ef52a015 : [MPS] Enable `conj` and `conj_physical` (#119669)
214f06ae3a : Revert "Add Accelerator device and shell hooks (#119329)"
7d4b666870 : Revert "[BE] Properly mark destructor overrides (#119656)"
2921c2b3d9 : [mypy] refactor mkldnn_fusion._is_valid_binary to avoid [union-attr] has no attribute (#119085)
db228f1efd : [Lint] replace [assigment] with [method-assign] for methods (#119706)
9f8c84a399 : Revert "Add missing include for internal build (#119721)"
ea8e4fd5ac : Support FunctoolsPartialVariable::get_function, fix NamedTupleVariable::as_proxy and handle call_function in get_fake_values_from_nodes (#119435)
74d55b0e63 : [dynamo] Support torch.distributed.fsdp._flat_param._same_storage_size (#119627)
472500e32a : Revert "Avoid performing replacements when it would unrefine ranges (#117356)"
2492f8748e : Revert "Improve TORCHDYNAMO_EXTENDED_DEBUG for GuardOnDataDependentSymNode (#119412)"
830ed6d9b2 : [quant][pt2] Fix _disallow_eval_train error message (#119694)
55483fc2c9 : Min-cut partitioner always saves tensors that are returned as-is in backward (#114970)
bd9db6a9c7 : Update to TorchFix 0.4.0 (#119424)
5acd1f0f7d : Add cherry-pick workflow (#119352)
f15b517055 : [export] suppress type error (#119720)
b3df3e4e94 : Restore OpInfo/ModuleInfo tests in Inductor-wrapped tests (#119693)
e0cabebad9 : Add missing include for internal build (#119721)
70c93c6097 : [inductor] Update JIT Inductor cpp wrapper entry function signature (#119280)
02b60e76c9 : make flash_attn_bw impl correct w.r.t. meta when k and v have different strides (#119500)
10789ccd83 : Remove redundant CMake NUMA code (#119650)
34a61c527b : Revert "Enable x86 CPU vectorization on windows (#118980)"
10f3abc6b8 : [DeviceIndex][3/N] Use DeviceIndex in more places (#119635)
064b61009b : Correctly formatting the example in get_state_dict (#119532)
ad217d4266 : [ez] Add try catch for deleting old branches (#119696)
7eecbf8a30 : Remove unnecessary skipIfTorchDynamo from test_jit_fuser_te (#118728)
28c30f29be : Update documentation for set_flush_denormal support on ARM (#119354)
7d780ff86f : Revert "Enable fake tensor caching in fbcode by default (#118555)"
110919c984 : Check QNNPACK support for the platform before running test (#119139)
7adfeba47a : Add Python 3.12 as experimental to release 2.2 (#119705)
82248f0b1c : [export] improve FakeTensor serialization (#119531)
482345d747 : Refactor out shape test into InputMetadata::maybe_reduce (#119559)
c24b74efc7 : Revert "Optimize multi_tensor_apply (#119153)"
8d8fb9783c : [MPS][EZ] Fix cfloat->chalf conversion on MacOS13 (#119681)
eb0f9efd31 : fix is_ and is_not (#118978)
0e5b6594b7 : [Dynamo] Minor cleanup of redundant function lookup logics (#119666)
ed20e9118b : Fixed hash issue in `fx_graph_cse` (#119567)
27ffede878 : [reland] Fix estimate_nccl_collective_runtime (#118986)
b2043c0543 : [c10d] PGNCCL refactor part 2: Simplify ProcessGroupNCCL into single-device style (#119421)
893dcac068 : [c10d] explicitly abort communicators in destroy_process_group call (#119250)
31f00b0160 : Clarify that legacy cat behavior only applies for 1-D tensor (#119684)
059bf1baa4 : Separate clang lint? (#119575)
bc521f2ce3 : In dynamo tracing for index() use None as the default indicator for end and not -1 (#119151)
cf474a09f5 : Decompose torch.ops.higher_order.auto_functionalized in Inductor (#118673)
8069b29603 : [export] Implement logging for scuba. (#119585)
757201c213 : Refactor ExportedProgram to expose the functions for pre and postprocessing (#119513)
72d9a38118 : add get_function to TorchInGraphFunctionVariable (#119314)
1c1dc0e4e0 : [sparse] Add in out_dtype support (i8i8->bf16, i32) for cusparselt (#119296)
5f69d95b2b : Enable x86 CPU vectorization on windows (#118980)
52a3de6cbf : [AOTI][refactor] Move ThreadLocalCachedOutputTensor into a separate header (#119392)
24bdd03d23 : Revert "Reify view_func() closures as ViewFuncs (#118404)"
79df897608 : Fix some tests in test_c10d_functional_native.py (#119102)
0342b227e5 : Revert "[c10d] PGNCCL refactor part 2: Simplify ProcessGroupNCCL into single-device style (#119421)"
8a3c241094 : Remove unused header inclusion (#119667)
dcb08a7044 : Add CUDAEvent recording for constant folding to show up. (#119216)
bc4d0277cd : [executorch hash update] update the pinned executorch hash (#119648)
76fac69577 : add a couple more cases to pointwise_cat perf tests (#119521)
647564dbaa : Implement conditional statements in kernel analysis (#119664)
663dd5d006 : [inductor] Update the compile options for CppPythonBindingsCodeCache (#119415)
069581b3ca : [BE] Properly mark destructor overrides (#119656)
a4cc6b85dc : [dynamo][eval][perf] Remove unnecessary dict copies. (#119305)
e5f46a1d35 : Check alignment of ReinterpretView args of custom Triton kernels (#119649)
b8e4423278 : [torch][cuda][perf] Avoid unnecessary dicts. (#118011)
95a8d5b1bc : [random] Replace for loop with list comprehension. (#119143)
4394e0dc2c : [inductor] Use list comprehension to initialize unused_views. (#119618)
24be7daf79 : Optimize multi_tensor_apply (#119153)
2c91e13afc : Add lowerings to special functions (#119187)
4ee8aac432 : [MPS] Enable `bfloat16` support on MacOS 14 (#119641)
68e009dd8f : [BE][EZ] Use `dyspatch_sync_with_rethrow` in searchsorted (#119646)
6cd82253ae : fix torch.set_float32_matmul_precision doc (#119620)
88183923d2 : Remove unneeded linking of torch_shm_manager in CMake (#119540)
0bed0501fa : Don't skip register-spilling configs in custom Triton kernel auto-tuning (#119634)
3ab08946d5 : Revert "[aot_inductor] move CudaWrapperCodeGen into a separate file (#119448)"
d8e319a961 : Revert "[aot_inductor] move CppWrapperCodeGen into a separate file (#119491)"
6db6a1b526 : [aten] Use emplace instead of insert. (#119614)
2c8722182e : [dynamo][guards] Avoid unnecessary stack copies. (#119115)
568740f080 : [DeviceIndex][2/N] Use DeviceIndex instead of int in allocators (#119545)
57d8f67619 : [Dynamo][17/N] Rename SkipFilesVariable to SkipFunctionVariable and move to functions.py (#119619)
dcce5327bb : [core][perf] Use set comprehensions in _RecreateLookupTables. (#119617)
c5116d9e44 : Fix optim.lr_scheduler examples in doc to use optimizer vs self.opt (#119563)
34db6f1b13 : Revert "make flash_attn_bw impl correct w.r.t. meta when k and v have different strides (#119500)"
c0f1183eb4 : [inductor] Fix compile error on scan with no mask (#119555)
e71c202520 : Use CUDA if cuda's macro is set for AOTI runner's pybind (#119616)
3581428ea0 : Do not mark tt.load's arguments as mutated (#119631)
6c5bf5a5ce : Implement kernel analysis for functions with multiple return values (#119615)
e693089c7a : [Dynamo] Refactor tensor methods handling (#119581)
699ae72f51 : [DCP][state_dict] Fix the issue that get_state_dict/set_state_dict ignore the buffer (#119573)
a82c50793e : [executorch hash update] update the pinned executorch hash (#119510)
8fd11cb307 : [2/2] Intel GPU Runtime Upstreaming for Stream (#117619)
f2778e3874 : [vision hash update] update the pinned vision hash (#119511)
42ca82dfb1 : [audio hash update] update the pinned audio hash (#119612)
3278b4c557 : be more conservative until regression is debugged (#119583)
70a364d402 : non-strict improvements: constant args and kwargs (#119529)
760056bbdc : [aot_inductor] move CppWrapperCodeGen into a separate file (#119491)
095f471307 : make flash_attn_bw impl correct w.r.t. meta when k and v have different strides (#119500)
e1c1b8c2b2 : [dynamo] Improve support for backwards hooks (#119525)
05602915f5 : Link torch_cpu to cudart only if CUPTI is enabled (#118232)
44796682d0 : [torch][ao] Fix module name filter for pytorch2 quantization for underscores (#119344)
34f7dc9eba : [ONNX] Support op consistency error reproduction (#119512)
bb287d73ec : [ONNX] Apply modularizarion to exported program exporting (#119498)
3372aa51b4 : Integrate swap_tensors into nn.Module.load_state_dict (#117913)
a7f82b7d62 : [fix] tmp fix for import issue in dtensor (#119582)
bf8db86a19 : [FSDP] Added deprecation msg for `NO_SHARD` (#119553)
f3e7d80993 : [c10d] PGNCCL refactor part 2: Simplify ProcessGroupNCCL into single-device style (#119421)
0597dab523 : [aot_inductor] move CudaWrapperCodeGen into a separate file (#119448)
9a1df7cfd7 : ReduceLROnPlateau init _last_lr (#119366) (#119556)
bf8a5a11be : Fix Inductor CSE Across Separate Reductions (#119410)
f208795182 : Improve TORCHDYNAMO_EXTENDED_DEBUG for GuardOnDataDependentSymNode (#119412)
01e248d6f1 : Fix FallbackKernel behavior on mutable ops (#118649)
25a0fa6d13 : Revert "[dynamo] Improve support for backwards hooks (#119525)"
4b9568a360 : Add Accelerator device and shell hooks (#119329)
d5a6762263 : Reify view_func() closures as ViewFuncs (#118404)
261f0138a2 : [easy] Fix pass_manager type annotation (#119499)
5747ec24b4 : [export] fix canonicalization for input mutations (#119533)
cf42dd09ca : [FSDP2] Replaced version-ctx with `no_grad`; removed `no_grad` (#119550)
f3a2094065 : [Dynamo][Export] Mitigate legacy issue that aten op as export entrance function (#119528)
5356b5d1f0 : [Dynamo][16/N] Move skipfiles to trace_rules.py (#119432)
7082e24ce8 : [quant][pt2e][bc-breaking] Set `fold_quantize` to True in `convert_pt2e` (#119425)
3f82e435eb : Fix delete branches (#119399)
c6f39740c7 : Revert "Fix delete branches (#119399)"
53a6ab3fda : [BE] Update Pillow to 10.2.0 (#119517)
b1f4b2a63c : [dynamo] Improve support for backwards hooks (#119525)
5d6e323549 : No TD (test removal) option in CI (#118808)
e1fc7e1ebc : Fix delete branches (#119399)
5d81ade484 : [Inductor max autotune] Multithreaded Precompilation (#119386)
173256424a : Update setuptools to 68.2.2 (#119456)
eff93fbd86 : Revert "[Dynamo][16/N] Move skipfiles to trace_rules.py (#119432)"
90dabff260 : Avoid COW materialize in various operations (#119506)
8a09f1320c : Avoid COW materialize in index, reduce, compare, unique, and copy ops (#119504)
0e6b314fc2 : Avoid performing replacements when it would unrefine ranges (#117356)
064610d8ac : Don't guard if there are unbacked SymInts (#119312)
a13bb9f6a8 : Add symbol_guard_limit_before_specialize (#119347)
a050d146b7 : [Inductor] Add Int8 data type into Inductor CPP backend vectorized code generation (#119179)
5918622d72 : Avoid COW materialize in pooling, batch linalg, upsample, softmax ops (#119503)
53deddd66d : Avoid COW materialization for TensorInfo with const type (#119502)
fba5b7f7c8 : Avoid COW materialization for TensorAccessors with const type (#119501)
fa071a2e1b : Clarifying windows cosine behaviour in the documentation (#119444)
0f2fbbff10 : Enable fake tensor caching in fbcode by default (#118555)
2cdf9b7674 : [BE] Update requests to 2.31.0 (#119516)
458e83b5b3 : Revert "Add FakeTensor support to torch._utils._rebuild_tensor (#108186)"
930b60f5aa : Add Debug Utility To Generate Inputs for AOT Graphs (#119409)
2d474e17cb : Don't log canonicalized expressions (#119471)
8994f2367d : Revert "Fix jagged NT softmax semantics (#119459)"
88429a8084 : [inductor] Add split scan kernel (#117992)
01edb8a559 : [inductor] Refactor triton range_tree handling (#117991)
6efda849b5 : Update chunk_dtensor to support HYBRID_SHARD (#119481)
454abb6b99 : Disable tests that use bfloat 16 for SM < 80 (#118449)
915f9db03c : [Dynamo] Support kwargs for lazy module (#119445)
45c4a0ce9d : Update setup tools to 65.5.1 (#119456)
a8d1645f15 : Revert "Add lowering for logcumsumexp (#118753)"
560c92c324 : [DeviceIndex] Use DeviceIndex instead of int in CUDA wrappers (#119142)
e98dbae0a0 : [ROCm] enable hipsolver backend for linalg.eigh (#115177)
0f12c0af44 : [export] allow user input mutation in aot_export (#119356)
9f8ade04cc : [aot_inductor] replace TORCH_CHECK with AOTI_CHECK in the generate cpp code (#119220)
71e772f827 : Update logging.cpp for explicit chrono import (#119469)
45e7af5818 : Windows Dynamo Error Removal CI Check (#115969)
0827510fd3 : [export] Remove torch._export.export (#119095)
a7754b2b60 : [dtensor] switch softmax backward ops to OpStrategy (#119255)
d9a1b25807 : Fixed an issue where nn.Linear would cause an internal int underflow … (#119221)
7fd6b1c558 : s/print/warn in arch choice in cpp extension (#119463)
db1a4dcb5a : [BE] Add dtypesIfMPS to ModuleInfo enabling float16 tests for MPS and remove all skipIfMPS for float64 (#119039)
4e93b00b69 : [Inductor] Setting kernel launch and exit callbacks for inductor generated triton kernels (#119450)
6adadbaf79 : Fix jagged NT softmax semantics (#119459)
278a0e1600 : [NestedTensor] Support binary pointwise ops with >2 inputs (if inputs are non-tensors) (#119419)
cd9a1934fb : [ONNX] Bump to onnx1.15.0 and ort1.17.0 in CI (#119106)
91f038161a : [FSDP2] Used `split_with_sizes_copy` for all-gather copy-out (#119451)
def572929b : [export/nonstrict] always create FakeTensorMode (#119446)
7ec6ac89e8 : Add lowering to special.modified_bessel_i0 (#118993)
9242523ad5 : [ET-Vulkan] aten.pow.Tensor_Tensor (#119423)
b51b27922b : Add to_empty() suggestion in the error message (#119353)
5a77ee6587 : Add lowering for logcumsumexp (#118753)
7315ec7505 : Revert "Fix estimate_nccl_collective_runtime (#118986)"
1d61011c11 : [MPS] Add support for complex scalars (#119318)
2b9cba86cf : Fix deadlock in ExecutionTraceObserver (#119242) (#119398)
896cf9d1ce : [inductor][cpp] vectorization support for int32/int64 (#119001)
8182fce769 : Revert "Add cpp stack traces to our own reruns (#119408)"
8da2f81527 : [export] Convert internal tests to using .module() (#119105)
c3e0836084 : [export] Remove CallSpec (#117671)
9436710afd : Implement shallow copy functions for `FunctionalTensorWrapper`. (#118783)
6d8f192fd0 : [DCP] Call os.sync if os.fsync does not work for fsspec (#119287)
b251bca205 : [dynamo] inlining into __iter__ of user defined object (#119243)
b181e52a8f : [export] Support non-tensor tuple hoo outputs (#119402)
7f05c72864 : [nccl flight recorder] record time we discover start and complete (#119249)
3a8bf25fdd : [SparseCsr] Remove triton sdpa skip after triton pin update (#109601)
d947534782 : [DCP] Enable filesystem/fsspec auto detection (#118888)
4f2bf7fa87 : Print the value of constants in __str__ (#119276)
579999a731 : [PyTorch] Back scalar value to pinned memory for .item() (#119202)
08657b82f5 : Reduce scope of dispatching in logcumsumexp_backward (#119397)
56364124af : [Dynamo][16/N] Move skipfiles to trace_rules.py (#119432)
0a41ac3cf3 : [1/2] Intel GPU Runtime Upstreaming for Stream (#117611)
7d516bbd5f : Update MacOS deployment target to OS version 11.1 (#119373)
5f6b35915a : [executorch hash update] update the pinned executorch hash (#119336)
f579c65ef6 : Release GIL for torch::autograd::clear_autocast_cache (#119416)
9d6bf20022 : [FSDP2] Added backward prefetching (#118118)
1d2382f141 : [DDP] Use compiled_autograd to trace DDP backward allreduce (#110662)
113506d2d4 : Add FakeTensor support to torch._utils._rebuild_tensor (#108186)
9a992b0918 : [4/4] Intel GPU Runtime Upstreaming for Device (#116869)
3cb7ec312c : [PT-Vulkan] aten::conv1d - opt: width-pack weight tensor (>2x speedup) (#118835)
2349e473f1 : Forward fix for same_shape oblivious guard (#119383)
64aaa8f508 : Fix typo on Contribution Guide (#119428)
fbe6f6236e : Add cpp stack traces to our own reruns (#119408)
33761969a4 : Remove parent device mesh check (#118620)
029a16c41f : [c10d] PGNCCL refactor part 1: adds assert size==1 (#119099)
6fe5a3adaf : release GIL for cudaEventDestroy (#119393)
ad75d9e2ca : [easy] Fix test_triton_kernel_reinterpret_view_mem_leak by cloning fwd input (#119219)
81abc2b249 : Revert "[quant][pt2e][bc-breaking] Remove fold_quantize flag (#118701)"
a6e16fe202 : Fix global in header warning (#119380)
35aa353c48 : Change watchdog log from "NCCL" to "Process group" (#118121)
892a7bf674 : [BE]: Add filelock typing to mypy stubs (#119390)
d0db80126e : [EZ][CI] Fetch full history for MPS jobs (#119401)
51fb99250b : Fix missing MAST log when there is Unicode non-decodable text in logs (#119298)
02c24b0b5e : Add Python binding `resizable` to class `{Untyped,Typed}Storage` (#119286)
d054cd3e44 : [FSDP2] Added `reshard_after_forward` (#118017)
482d952e88 : [quant][pt2e][bc-breaking] Remove fold_quantize flag (#118701)
0e2330d84c : fix lint (#119395)
23b030a79c : [easy] Add testing utilties for torch.nn.utils.set_swap_module_params_on_conversion (#118023)
d5a718d27b : Add swap_tensors path to nn.Module._apply (#117167)
91d1d2c421 : Make MHA Query Scaling Behaviors Consistent (#119323)
5eda355e54 : [inductor, test] remove cast for test_pow2_cpu (#114912)
0dab6fb352 : Fix estimate_nccl_collective_runtime (#118986)
088d538a8d : Revert "[Inductor] GEMM shape padding improvements (#118522)"
f6bf7d26e1 : Print full exception info in Graph break log (#119292)
f79ae7599a : [export] fakify module state in nonstrict (#119297)
40ec155e58 : [AOTI][refactor] Split common aoti_runtime utils into a separate header (#119066)
059994d2b7 : Migrate load_state_dict hook tests to OptimizerInfo (#119310)
0320e62255 : Migrate test_state_dict hooks to OptimizerInfo (#119308)
5c46600f84 : [RELAND] refactor lazy init to device-agnostic (#119248)
3625ccfbea : Move step global hooks test to OptimizerInfo (#119299)
7b3762e6bc : Move step pre/post hook tests to OptimizerInfo (#119288)
99ddfaf572 : Add symbol guard counts instrumentation (#119290)
7c95cc5e03 : Add basic reference documentation for symbolic_shapes.py (#118997)
1435cfecfa : Increase accumulate_grad_ gradient's expected refcount to account for pybind (#119068)
326dcf9dc8 : Never reuse accumulated gradients' buffers (#119334)
8e14e1d514 : Fix gradient refcounts in pybind and compiled autograd (#118817)
d85631b721 : Revert "Fix deadlock in ExecutionTraceObserver (#119242)"
dfdbd73360 : add Half support for flash attention (#119247)
0f478d9d61 : [Dynamo][15/N] Merge allow_in_graph/inline/skip trace rules check into trace_rule.lookup (#118971)
284b0b5f44 : Add --local-ranks-filter to torchrun: allow logs filtering by rank (#118562)
6c3600d008 : Enable optional tensorList fallback to cpu. (#119273)
53ee47ca32 : [vision hash update] update the pinned vision hash (#119337)
ee1c2449f7 : [dynamo] delete dynamo cache entry when guard function is invalidated [attempt 2] (#119107)
fcc36de9d6 : [ONNX][dynamo_export] Turn off opmath type promotion for div (#119112)
45a79323fe : Add torch.dtype instances to the public API (#119307)
8c2fde1fcf : [EZ][BE] [CMake] Remove checks for GCC-7 (#119306)
e9907a3446 : [PyTorch] Free up 8 bytes per intrusive_ptr_target (#117986)
5f2ad407a9 : Fix typo on torch.frombuffer() documentation (#119214)
5ae6f6cffe : Test seo torch cuda (#119324)
728228a7c7 : LazyGraphModule: improve the fix for the FakeTensorMode mismatch issue (#119311)
e868a7fedd : [AOTI] Rename config.aot_inductor.abi_compatible (#119065)
c814d8e5c2 : Fix handling random() calls encountered inside inlined code. (#119218)
5e78c4b0f4 : [dynamo] Functools partial reconstruct (#118583)
62cc1053d8 : [dynamo] Fix missing guards in FunctoolsPartialVariable (#118616)
6fc775ae13 : Fix deadlock in ExecutionTraceObserver (#119242)
d0ca849fdf : Refactor Symint Deduping to separate pass (#118938)
dea15c9fdc : Revert "Add meta registration for _foreach_norm (#118604)"
6c1cca153e : [quant][pt2e] Allow users to override train/eval behavior (#119091)
9d46fe603d : Revert "[c10d] PGNCCL refactor part 1: adds assert size==1 (#119099)"
0f68bcaa5c : Make filename optional in update_failures.py (#119289)
422b4271ae : Change PrivateUse1's resize_bytes to PrivateUse1HooksInterface (#117839)
ae4e866bba : [dynamo] refactor CacheEntry and ExtraState to eval_frame.c to C++ (#118438)
73f0fdea5b : [fix] accounting for dilation in pool padding assertion (#118897)
ec31d11580 : [dynamo] Skip dynamo when inside a functorch context (#118901)
f3645fc38b : Update auto_functionalize docs (#119228)
f85b0ea8bb : Migrate last lbfgs test over to OptimizerInfo (#119283)
3f0fd36835 : Introduce size oblivious guards (#118579)
5410385c42 : [dynamo] support comparing stream with constant (#119199)
fa157af69c : [mypy] declare type for DynamoTestCase._exit_stack (#119084)
238d87f74d : Add a short code snippet in the RNN doc (#119150)
169c070076 : Move catch_errors_wrapper to convert_frame (#119253)
790858afa9 : Make start compiling stack trace omit framework frames (#119251)
22669843c2 : Reserve sizes in c10::VaryingShape::concrete_sizes(), c10::TensorType::computeStrideProps() (#119189)
8ee9f26ce8 : [Dynamo] Remove build_checkpoint_variable from call_getattr (#119236)
2ad3599a71 : Add torch.backends.mha.get_fastpath_enabled to FUNC_INLINELIST (#118979)
a77be631e0 : Bugfix to MixtureSameFamily's _pad_mixture_dimension (#118947)
499040ac32 : Revert "Add FakeTensor support to torch._utils._rebuild_tensor (#108186)"
1e4b408b02 : [decomp] Add tests for different dtypes to SDPA decomposition (#119239)
85033759d6 : Update scatter_reduce_ test with parallel backend check (#118708)
7d7a3f0b37 : [inductor] Support sympy.expr in user-defined Triton kernel grid fn (#119165)
8a8e70477e : Fix type hints on nn.attention.sdpa_kernel (#119140)
720f781160 : [CPU] Optimize softmax as flash attention v2 (#118957)
4ab852b6c5 : [c10d] PGNCCL refactor part 1: adds assert size==1 (#119099)
884b6d2a67 : [inductor] Implementing missing magic methods on IR values. (#118933)
e47f571da7 : Revert "Update scatter_reduce_ test with parallel backend check (#118708)"
12ac3ba383 : [executorch hash update] update the pinned executorch hash (#118936)
3497388b9f : [export] Fix serialization for auto_functionalization (#118810)
03db96c248 : [Dynamo] Enhance autograd.Function strict mode test (#119237)
074f2bb5ce : Fix dynamo benchmark runner for torchbench skip sets (#118615)
9250965f8b : [ez] Lower windows timeout limit for trunk, set test step timeout (#119234)
86d5d1650b : [dynamo] support dict.clear() (#119197)
c0164f2393 : Revert "[BE] Add dtypesIfMPS to ModuleInfo enabling float16 tests for MPS and remove all skipIfMPS for float64 (#119039)"
3829b55416 : [inductor] Support ProxyExecutor argument codegen for sympy.Expr (#119166)
781f7c9080 : [BE] Use OptimizerInfo step_requires_closure, only_supports_sparse_grads (#119230)
69344fe987 : c10d: Don't add NCCL backend by default without CUDA (#119149)
fd0bf96c2b : [inductor] make multi-kernel work with cpp-wrapper (#117813)
04d52d5399 : [BE] Add dtypesIfMPS to ModuleInfo enabling float16 tests for MPS and remove all skipIfMPS for float64 (#119039)
d9d8c2b79f : Remove HSDP validation check (#112435)
966db82c9d : Revert "Remove extra graph breaks (#118987)"
b8bb12cd45 : Add meta registration for _foreach_norm (#118604)
51e096114b : Increase recommended logging in DEFAULT_LOGGING (#119207)
5086e1cf3f : Remove distributed/c10d/Functional.hpp (#119138)
200108c6e6 : Delete old branches (#117079)
b816760a2f : More progress on type checking ValueRanges (#118870)
b92819a039 : Move nn.Module.load_state_dict tests from test_nn.py to separate file (#118028)
71655bccbe : Fix wrong mobile build Docker image (#119213)
962fca6839 : [storage][perf] Reduce _get_device_from_module overhead. (#119144)
b964a1222c : Revert "[inductor] make multi-kernel work with cpp-wrapper (#117813)"
b2e0f8d82d : [mypy] added type annotations to codegen_nodes methods (#119080)
88e346680b : Patch all_gather to support HSDP + TP (#118638)
f481835115 : Revert "add Half support for flash attention on CPU (#118368)" (#119204)
ab613a4019 : Revert "refactor lazy init to device-agnostic (#118846)"
124a54ef16 : [jit][perf] Reduce lookupInModule overhead. (#119145)
fa8d97776c : [aotinductor] Migrate fuse_split_linear_add from dper_pass to AOTI based on predispatch IR (#118983)
5f9f771711 : [DeviceMesh][Test] Remove test_raises_mesh_dim_less_than_2 (#119172)
d444a3b443 : [MPS] fix float32 error on mps, in linalg.matrix_rank and linalg.pinv (#114771)
a72190fd51 : make nanogpt work with both compiled autograd and _LazyGraphModule (#118981)
d670dfb7ae : Update scatter_reduce_ test with parallel backend check (#118708)
0348975a87 : Set up new logging artifact for SymNode (#119158)
0245000be8 : [DeviceMesh] Temporarily disable re-use subgroup (#118940)
0c3a1c893e : [dynamo] Setup the globals for guard_fn without a reference to f_locals (#118447)
b8307513e5 : [torchelastic][rendezvous] Add option to enable libuv for TCPStore based rendezvous backend (#118944)
5ebed6f1c3 : [torch] fix comment typo (#118656)
0d5f53a2f9 : fix forward test_memory_planning.py (#119109)
052e824467 : improve CUDACachingAllocator lock contention (#118550)
b41f3e8df1 : [AOTI] Make abi_compatible as default for OSS CI (#119126)
79b20aec76 : [AOTI] Support copy_, _fft_c2c and view_as_real in C shim (#119125)
cee16353db : [Dynamo][autograd.Function] Should graph break on stride accesses in backward (#119137)
8f82a44a5b : Run device mesh tests with native funcol enabled (#118437)
e3371ff739 : Use correct type of indices in ForeachUtils.h (#119116)
6620176da7 : Add documentation for meta device (#119119)
dab16b6b8e : s/supress/suppress/ (#119132)
abc09b27b9 : Some minor type stub improvements (#118529)
3ed9df36a9 : Clean up some obsolete TODOs in run_test and several test files (#119113)
26a2743162 : Fix placeholder tensor is empty for relu in mps (#118965)
0ddcb5c3ca : Include the documentation on scale arg being a keyword only arg (#119129)
ffae20e594 : [BE][MPS] Add `dictionaryFromPlaceholders` (#119077)
2d64fddd48 : [dtensor] add op support for nll_loss_forward (#118917)
4c397e6ec6 : [Dynamo] Add correct guards for tracable tensor subclasses (#119110)
7a52455102 : [dynamo] Refactor TensorVariable method handling (#119111)
fcf22a853d : Enable test_ellipsis_index_2 with Torch dynamo (#118773)
1adedc3c86 : [decomp] Remove pixel_shuffle from core aten decomps (#118921)
4dc53f777b : Fix dynamo failure w/ astype (#117952)
c6c851102f : Fix test_compressed_layout_conversions_coverage to check BSC format (#117951)
6c8faf4680 : [executorch] Run llama in xplat (#118831)
a64b03a58e : Move lr tensor to cuda if needed (#119073)
41b63b26c2 : [dynamo] Fix incorrect docstring placements in _guards.py. (#119114)
9a8e3b07d7 : Remove extra graph breaks (#118987)
ce40ee8ecd : [FSDP] Fixed `device_mesh` and auto wrap (#119064)
18fc1ca7d9 : [MPS][BE] Add native lerp support (#119036)
30d3ff1659 : Inline gradcheck functions since they don't have C bindings (#119047)
372e9550bd : ProcessGroupGloo::reduce_scatter_tensor_coalesced (#118911)
65314a6129 : [c10d] add an unit test for unordered destruction of PGs (#119045)
857508fa36 : Change the internal assert to torch_check in torch::nn::functional::InterpolateFuncOptions (#117831)
9ffed22391 : Document file format returned by torch.save (#118719)
2eba82d122 : [dynamo] decrease logging level for graph break in higher order op. (#119079)
d91d21fd6f : [submodule kineto] Enable profiler connection to daemon during init for cpu only jobs (#118320)
494c2ec054 : [DCP][BE] Let FsspecWriter and FsspecReader inherit from FileSystemWriter and FileSystemReader (#118887)
6b009aceea : Enable scaled_mm on sm89 devices (#118881)
440b7d5279 : [auto_functionalize] Remove mutated_args_name from args (#119050)
3aeaa21eb0 : Revert "Remove parent device mesh check (#118620)"
de6a906093 : Expose aggressive_recomputation as an inductor config (#118943)
7bbd9befed : Improve example for ``torch.mode()`` (#115308)
c24ffc3f66 : [inductor] make multi-kernel work with cpp-wrapper (#117813)
576383c2eb : Add torch check for dtype within bilinear (#118900)
a4355d6b9a : Revert "Add --filter-rank to torchrun: allow logs filtering by rank (#118562)"
63fd6883fd : [c10d] logging utility for cpp-python stacktrace (#118924)
a3cec6a7fa : [ONNX] Eliminate redundant TODOs (#119060)
454e6b380c : [export] Prevent specialization on backends (#118683)
db2225da37 : [export] fix forward test_lift_unlift (#119090)
9fe3693bbb : [dynamo] bypass graph break due to masking if inference mode (#119056)
4d45c68ca6 : [fx] fix for subgraph rewriter (#119052)
c908caf92b : [DeviceMesh] Alllow 1d slice from 1d mesh (#118895)
6379010ebd : [dynamo][higher order ops] Remove restore side effects logic (#118420)
113138aa55 : add test cases for GradScaler on CPU (#109994)
426339e4de : Add FakeTensor support to torch._utils._rebuild_tensor (#108186)
3b41793412 : Purge redundant module init tests (#119028)
a69016a741 : Add lowering to special.bessel_j1 (#118992)
c7ba5f6c6f : [AOTI] Fix a cpp kernel missing arg type issue (#119021)
debc3b3254 : Download reports only if they're necessary (#119027)
a68cf3ef7d : update_failures.py: add option to also remove "skipped" tests (#118931)
1de50f8654 : [HigherOrderOp] fix stack trace to report user stack (#118826)
3c0c387429 : Support symbolic min/max on unbacked SymInt (#118953)
f641c55c9b : Make torch._dynamo.mark_static work inside graph (#118962)
29f99a3365 : Update XLA commit pin (#118871)
bd8c91efc0 : Remove some now-succeeding tests from dynamo_test_failures.py (#118928)
bf4e171539 : [export] support non-persistent buffers (#118969)
b5ba80828f : [optim] Rectify capturable testing and fix bugs! (#118326)
8b00e5aa12 : [FSDP2] Added pre/post-backward (#118004)
a688b4b397 : Update pointwise concat heuristics (#118453)
3a1ae86a93 : Fix internal failure D53291154 (#118907)
fd000340fd : ProcessGroupGloo::allgather_into_tensor_coalesced (#118910)
70605d150b : [quant][pt2] Add `move_exported_model_to_train` (#113492)
52b679d415 : [BE] Cleanup CircleCI README (#118927)
0e5fe4b3ae : [AOTI] Fix a RAIIAtenTensorHandle premature deallocation bug (#118963)
53da422582 : [export] Move _create_graph_module_for_export to torch/export (#118893)
b374f8987d : [ROCm] Hipify trie re-engineering and adding unit tests (#118433)
65efbf078c : Optimize dict keys guard when all the keys are constant (#118855)
cdbc29e91a : [dynamo,optim] Use the actual sources from the parameters when tracing "params" in an optimizer (#118535)
a3770bcf10 : Add functools.partial and UserDefinedFunction to dict keys (#118199)
9d592c14eb : Don't assume all subclasses of BaseUserFunctionVariable have a fn attribute (#118208)
188628d99e : [dynamo,easy] Add Typing variable to possible dict keys (#118003)
ecf7d0e8ac : Make dict guards amenable to the CSE pass (#118194)
eb2bdfae88 : Make variables in dict LazyTrackers (not lazily guarded yet) and avoid using DICT_KEYS guard (#117625)
75a5c41921 : [dynamo,optim] Place guards on the args before assuming they exist (#117983)
b1da929df9 : Use SourcelesBuilder in BuiltinVariable (#118098)
0f3e20a1b6 : Print the malformed guard when there's a guard error. (#117982)
292243d1aa : Automatically pull test reports from CI (#118882)
0f7954107a : Add ability to print histogram as a github issue (#118874)
520771d7b3 : refactor lazy init to device-agnostic (#118846)
2de327cedc : Fixed an illegal memory access in cross entropy loss when using an index that is not a valid class (#117561)
05ac295177 : [export] Fix bug with user input mutations (#118942)
cc46829f96 : [Inductor] GEMM shape padding improvements (#118522)
855d5f144e : Relax MKL_INT assumption to int64_t (#118946)
2964170f3a : Revert "[optim] Rectify capturable testing and fix bugs! (#118326)"
4a5a2c6571 : Update auto_functionalize schema (#118809)
89b7ab671e : Protect against modules without __file__ (#117445)
3d8c36786b : Add device for distributed examples (#118867)
da5cbb1269 : [export] fix for duplicate constant lifting (#118776)
32f48e917d : [minimizer] Defined traverse (#118889)
3f1f057adf : Remove parent device mesh check (#118620)
9cc6422ab6 : Revert "[executorch hash update] update the pinned executorch hash (#118936)"
8cc8cf75f3 : [executorch hash update] update the pinned executorch hash (#118936)
497ea17684 : Limit reductions into pointwise cat fusion (#118452)
babd6c776d : [inductor] skip launching kernels with zero grid in AOTInductor when using backed symints (#118654)
946ea47a4f : [inductor] Fix an internal test issue (#118903)
8b729fb826 : [ez] Fix CI log file piping error (#118807)
d947b9d500 : [optim] Rectify capturable testing and fix bugs! (#118326)
08472a4fd5 : [dtensor] add op support for aten.gather.default (#118513)
8ca8729321 : [PT-Vulkan][EZ] Adjust string-report width (#118914)
7e1ac59016 : [pytorch][vulkan] add 1d tensor support for linear (#118690)
796278b57e : Revert "[inductor] make multi-kernel work with cpp-wrapper (#117813)"
9153174cd1 : [pt-vulkan] Introduce `SharedObject` class to `ComputeGraph` (#118756)
a5a63db3bf : add Half support for flash attention on CPU (#118368)
838c1c553e : Add back recompile test (#118905)
4b59bfe8e5 : [CI] Filter should not fail if pr_body is empty (#118934)
08d90a1ea9 : Workaround for super() calls in test_torchinductor_dynamic_shapes (#118586)
7c609f01ff : [PT-Vulkan] aten::conv1d - support any batch size (#118834)
dc4779b010 : Split out fake_impls from fake_tensor (#118878)
844a76ebe8 : [MPS][BE] Remove stale TODO (#118902)
a16df1d85f : [Dynamo] graph break on isinstance calls if we don't know the type (#118778)
39aab55c1c : Add myself to CODEOWNERS for serialization-related files (#118892)
46ef73505d : Clarify how to get extra link flags when building CUDA/C++ extension (#118743)
dbba1d4bf5 : Revert "Some minor type stub improvements (#118529)"
d4a94ad041 : [ONNX] Fix upsample_bilinear2d decomp skip with output shape (#118823)
6692f2c91e : [no ci] Add myself to MPS codeowners (#118904)
6929322a28 : [PT-Vulkan] aten::conv1d - support any channel-group combo (#118833)
61b572ed56 : [inductor] more accurate throughput calculations for kernel benchmarks (#118858)
20484a1936 : [inductor] make multi-kernel work with cpp-wrapper (#117813)
54668ad6dc : Cleanup max cuda device (#118779)
f63dc9a21d : s/DIRECLTY/DIRECTLY/ (#118877)
923a7c7572 : add test elipsis to dynamo test functions (#118754)
318e6ff40e : Fix `__name__` on a reconstructed NestedUserFunctionVariable (#118768)
b0e65dd1b4 : Fix TCP Store Windows (#118860)
df048f4da4 : Revert "[RELAND] Remove deprecated fbgemm operators (#112153)"
0f7e63620f : CUDA fast path for split_with_sizes_copy.out (#117203)
68f9c28e00 : Don't make default arguments dynamic (#118772)
24dd9f42ce : [MPS] Fix `use_metal_mm` condition (#118830)
3e79ef6db8 : Complete decomposition for aten.round (#118635)
0010b6145e : Reduce register usage of fused adam(w) (#118361)
b73a2b7795 : [ait] inspect get_attr nodes for _decline_if_input_dtype (#118760)
ff9ce94489 : Create empty host tensor for privateuseone (#118854)
d790c1dca6 : [CUDA][cuDNN][TF32] Misc TF32 updates (#118781)
687946eea1 : [FSDP2] Added reduce-scatter (#117975)
9c2b43cc50 : [inductor] Handle special values correctly in ir.Scan codegen (#118788)
221747507d : Revert "[export] support non-persistent buffers (#118612) (#118722)"
4a5a3bcc89 : Revert "fused adam(w): Reduce register usage (#117872)"
a1dd367716 : Fixed error in bicubic upsampling aa=false for uint8 input (#118389)
8b140da804 : Use MKL_INT in MKL wrapper interfaces (#118734)
a205e7bf56 : [3/4] Intel GPU Runtime Upstreaming for Device (#116850)
eaa45f47f8 : [sigmoid] fix for torchbind serialization (#118791)
0dc15ff674 : [reland][export] Fix graph signature for primitive outputs (#118818)
b8e71cf302 : fused adam(w): Reduce register usage (#117872)
eba4bd6b86 : Updated test_upsamplingBiMode2d_consistency (#118388)
7e0ea0d5df : [export] Only deepcopy graph in unlift (#118821)
4fc4f5eb06 : [Dynamo] Support tensor is not tensor (#118840)
a1280f0cc6 : Add an OpInfo test for split_with_sizes_copy (#118512)
2b48891e62 : [AOTInductor] Add Runtime Constant-folding for AOTInductor (#118765)
b97ab47619 : [pytorch][ao] Update `PerChannelMinMaxObserver` default `_load_from_state_dict` (#118659)
526701cfb7 : [executorch hash update] update the pinned executorch hash (#118698)
45d2dff844 : [easy] Enable test_neg_view for 5D SampleInput for torch.nn.functional.linear (#118815)
adff335095 : [vision hash update] update the pinned vision hash (#118825)
9b28621369 : [FSDP2] Added forward unshard/wait for unshard/reshard (#117973)
8d6e34b21b : Add verbose option to failures histogram (#118757)
499f31d40b : [dynamo] use par_style = "xar" in minifier targets file (#118603)
a43c28368c : [export] support non-persistent buffers (#118612) (#118722)
4cba1dd0c3 : [submodule] Update cudnn_frontend to v1.0.3 (#118782)
2f79a7bf9e : [export] make spec comparison indifferent to fx collections (#118718)
6c67f3333a : [Inductor] Skip triton templates for mixedmm on SM70- (#118591)
da4b4d961e : Support printing storage while FakeTensorMode is enabled (#118780)
30f43e3d89 : [ONNX][bench] Deepcopy model to another device before export to avoid OOM (#118710)
21ce53b9c5 : Add inf norm support for _foreach_norm (#118441)
e87ac82c98 : Fix missing default dim param in weight norm interface decomp (#118762)
e426924c19 : Change classification to beta for TORCH_LOGS (#118682)
fb391a016d : Test that optimizers are running cudagraphs (#118716)
8dee7b7a16 : Add TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED (#118750)
c978f38bd4 : Some minor type stub improvements (#118529)
5ced432a0d : Revert "[export] Fix graph signature for primitive outputs (#118655)"
a768a50a55 : Re-enable test_nan_to_num (#118711)
9391af9796 : Merging heuristics (#118029)
3280fdb883 : [FSDP2] Added `_to_kwargs` root forward input cast (#117955)
d33f9dcefe : [FSDP2] Added all-gather and unsharded parameter (#117950)
483001e846 : Revert "Workaround for super() calls in test_torchinductor_dynamic_shapes (#118586)"
649f2e3000 : Fix for out of bounds registers_ access in mobile TorchScript interpreter (#110300)
8026534a2f : Add torch.complex128 and torch.complex32 to DTYPE_TO_ATEN dictionary. (#117929)
82b6ee5a2a : Fix build error in ppc64le (#118516)
aca41a3a74 : [optim] lbfgs: handle complex params as independent real params (#118184)
82b0341af3 : s/verison/version/ (#118749)
41dfd0e063 : Update Dynamo passrate/histogram scripts (#118752)
99b69e1ffb : add PrivateUse1 device support in function options_from_string. (#118627)
7aff92c838 : [torch] Expose dynamic_shapes api at multiple levels (#118695)
6bd1807ae9 : enable mkl_gemm_f16f16f32 in cpublas::gemm (#118367)
81d12846dc : Add decomp for pixel_shuffle/unshuffle (#118239)
81b55f58ce : Matmul decide should_fold using has_out instead of grad_mode (#118617)
a5a0fdcae9 : Remove some unnecessary skipIfTorchDynamo (#118725)
680cc6b17a : [export] Fix graph signature for primitive outputs (#118655)
8455447972 : Support builtin callable with object arguments in dynamo (#118678)
68c3cb7594 : s/fialure/failure/ (#118744)
5586d7797e : fix up batchnorm folding in pt2 quant (#118720)
4a677da36b : Add more triton kernel mutation tracking tests (#118691)
b4f4fd0c28 : Parse and handle functions in TTIR (#118595)
1bf9ddf130 : add test_truth (#118597)
1128cf96f0 : [AOTI] Support _embedding_bag in C shim (#118706)
8db8ff652c : [AOTI] Add aoti_torch_view_dtype in C shim (#118705)
dd52939438 : [inductor] Refactor ir.ComplexView (#118704)
35f3ccffd4 : [Cutlass 3.3.0 submodule upgrade] (#118629)
c3a3e61bcb : Resolve TODO in test_slice_mutation2 (#118712)
9afd539075 : [sigmoid] update serialization to include custom objs (#118684)
56718cab8d : Unskip test_complex_type_conversions (#118694)
73229b4f93 : Add --filter-rank to torchrun: allow logs filtering by rank (#118562)
995f69623d : Add Silu to Dtensor Pointwise ops (#118702)
74f4947caf : Fix admm over empty tensors and broadcastable input (#118619)
2d37a046e7 : [export] Enforce serialization BC/FC with updater script. (#118424)
697ca4f292 : Preliminary DeviceMesh + native c10d functional integration (#118423)
e3cde68534 : [FSDP2] Added initial `_lazy_init` and FQNs for debugging (#117881)
f7ae454003 : [vision hash update] update the pinned vision hash (#118700)
6d7cfb5c3f : [audio hash update] update the pinned audio hash (#118699)
0a7e2ce0e1 : [PT-Vulkan] aten::conv1d - support any stride, padding, dilation (#118660)
68a75d4539 : [lint] remove merge_base_with from .lintrunner.toml (#118677)
07a7feca74 : [FSDP2] Sharded parameter in `FSDPParam` (#117877)
4a019047ad : Enable nested namespace check in clang-tidy (#118506)
1b03423526 : [meta registration] fix _efficient_attention_forward for jagged inputs (#118657)
6fa162e681 : Reland: [aotinductor] Replicate split_cat from torch IR to predispatch IR" (#118590)
7761ceb6b3 : Fix a bug with python lambda capture (#118676)
616e9dbed8 : add torch.float64 precision support to the transformer test suite in TP/SP (#116436)
1f376b3b24 : Flix lint after #117814 (#118689)
1e78dc95a4 : Fix/Temporarily disable tests broken due to triton version mismatch (#118661)
2f7839e6db : register decomposition for rsub in torch._refs (#118288)
04ded1399d : Fix signatures of torch.{add, sub, mul} (#118398)
6ea233a14c : [FSDP2] Added initial `FSDPParamGroup`, `FSDPParam`, `ParamModuleInfo` (#117867)
ae6233ec47 : [FSDP2] Added `mesh` arg, `FSDPState`, move to device (#117814)
7aa4b35b75 : [FSDP2][Reland] Introduced initial `fully_shard` frontend (#118525)
48f876143a : Fix missing permission in create release workflow (#118681)
1aa836f502 : Dont fuse write into read if indexing differs (#118210)
82a7460b67 : [quant][bc-breaking] Turn on fold_quantize by default (#118605)
ba1be17733 : Remove `voznesenskym` from the list of autoreviewers (#118680)
f2682e75e6 : Workaround for super() calls in test_torchinductor_dynamic_shapes (#118586)
4f5785b6b3 : Enable possibly-undefined error code (#118533)
e332653eb3 : [inductor] Use at::detail::empty_strided_* in cpp_wraper mode (#118490)
1562dae62c : [BE]: Apply RUF025 dict.fromkeys preview rule (#118637)
e33e88e5bc : Add separate logging target for cudagraphs (#118329)
e180218949 : [c10d] Log the last enqueued and completed collective (#118582)
9247641f34 : [PT-Vulkan] aten::unsqueeze - nit optimization (#118575)
d0627cc2af : [export] do not rewrite state dict when unlifting (#118611)
be90ab7efd : [export] do not unlift cond/map submodules (#118610)
4ee8aa6028 : [export] adopt KeyPath API in nonstrict mode (#118609)
ca090b2c77 : [export] do not use tree_flatten_spec (#118608)
bc9642f578 : Skip more tests under rocm (#118624)
e6e7d7f26b : [pt-vulkan] Introduce MemoryAllocation class and enable deferred allocation and resource aliasing (#118436)
40ece2e579 : Revert "Enable possibly-undefined error code (#118533)"
6511811ebb : [export] preserve metadata during nonstrict tracing (#118607)
644f64f2d1 : [c10d] added docstrings and tests for src / dst (#118593)
19e8ba95e5 : [RELAND] Remove deprecated fbgemm operators (#112153)
2327879fb6 : Add lowering to special.bessel_j0 (2nd try) (#118565)
fbf92500fb : enable privateuseone to perform streaming backward (#117111)
15702a8027 : Fix lnit after #118533 (#118633)
827949cef2 : accelerate `binary_cross_entropy_with_logits` by using `log_sigmoid` operator (#115539)
e5bb527d3e : [inductor][cpp] support scalar value in vec reduction (#118511)
91690983ff : [easy] Faster empty LIST_LENGTH guard (#118542)
64efec9953 : Port FakeProcessGroup to cpp (#118426)
da0635d17c : Add pytorch-distributed justknobs helper (#118568)
3ecc2f3a0d : [PT2][Runtime Numeric Check] Fix compatibility issue (#118578)
b7c8485704 : refactor mm_plus_mm check to pattern match (#118456)
c7af626a26 : [c10d] allow nonblocking wrap of ncclCommInitRankConfig (#118256)
e632d0c0dc : Break Triton MutationTests to one kernel per test (#118553)
4a48899b6e : [CUDA][complex] Define `LIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS` in CUDA build (#117061)
c203d88795 : Skip mutation tests on rocm (#118588)
fe07851173 : [CUDA][TF32][functorch] Also disable TF32 for vjp and jvp tests (#118592)
8be6dee14b : [inductor] Fix codegen bug with Native Triton kernels with ReinterpretView args (#118569)
4f13f69a45 : Enable possibly-undefined error code (#118533)
5dfcf07449 : Reland PR117393 [inductor/fb] log config dict when compilation finishes (#118552)
dcc077eea2 : [executorch hash update] update the pinned executorch hash (#118594)
0d47f6a44f : [ez][inductor] fix a typo in should_pad_bench (#118598)
135f785d77 : [audio hash update] update the pinned audio hash (#118338)
ff0cb38693 : [vision hash update] update the pinned vision hash (#118340)
2eefbc02a0 : [ez] Discover tests without importing torch (#118574)
eb9905be5d : [export] Remove the branch for skipping verifier. (#118139)
b778f44e97 : Allow using native c10d_functional via _functional_collectives (#113057)
126c1621ce : Add Support for CausalBias to torch compile (#116071)
67d8db9252 : Remove semicolon after `return_from_mutable_noop_redispatch` (#118538)
0ed24cb1af : [export] comments about runtime_var_to_range. (#118539)
b1f8b6b8fc : Forward Fix accidental removal of import (#118572)
460950d3aa : [Nested Tensor] Support ragged_idx != 1 on aten::is_same_size, aten::_to_copy (#118442)
6c9f72156e : Fix constant folding bug with sym size tensor (#118411)
aef820926c : Add some tests for 3d channels last (#118283)
bacbad5bc9 : add GradScaler on CPU (#109993)
796d270392 : [easy] Fix small typo in register_state_dict_pre_hook doc (#118571)
413a434846 : [export] Convert all export tests to .module() (#118425)
ca7cbf1226 : Add memory_format to typehints of Tensor.cpu and Tensor.cuda (#118392)
e1cbf6dff5 : Use SEQUENTIAL posix_fadvise on mmapped files (#117805)
67c6152f4e : [HigherOrderOp] support while_loop in dynamo (#116913)
e3d7a19f73 : [CI] add wait for /orig branch in mergeability check (#118576)
a40be5f4dc : Autograd doc cleanup (#118500)
fc5cde7579 : [dynamo] constant fold torch.cuda.get_device_properties to avoid graph break (#118422)
f99adbb4ec : [inductor] Remove ROCm xfail on test_cum{sum,prod}_zero_dim (#118558)
6591741183 : [dynamo] support inference_mode with no arguments (#118427)
e0d04b7119 : [Caffe2] Fix bug in `str` on wide types (#117531)
68b18dc2a2 : [DeviceMesh] Removed print of `self._dim_group_infos` (#118527)
bb55970e5b : Revert "Add justknobs env helper for pytorch distributed (#118451)"
0288db3120 : [DCP] Removes Checkpoint Wrapped Prefix from state dict fqns (#118119)
fb11354594 : Revert "[c10d] relax the nccl error check for nonblocking mode (#118254)"
3011a4406f : [BE][GHF] Do not hardcode default branch name (#118530)
65f8276bc6 : add an option to specify custom addr2line binary (#118328)
abe3c55a6a : Update DDP dynamo debug docs (#118295)
f9971daaee : Fix divergence between internal + external (#118509)
04c1df651a : [inductor][cpp] enable vectorization with constant bool (#118380)
ee3dfbbe47 : [Inductor] Fix Argmax codegen with Nan input (#118358)
41dfdde9f5 : Handle some numpy functions with out arguments correctly in dynamo (#118248)
4d1bb2175a : Add justknobs env helper for pytorch distributed (#118451)
41902a6ebc : [dynamo] Optimize is_tracing checks (#118474)
eba240afcb : Revert "[FSDP2] Introduced initial `fully_shard` frontend (#117776)"
e6f3a4746c : include a print for _get_cuda_arch_flags (#118503)
47b5a6b05d : [Dynamo] Analyze triton kernels via tracing to determine mutations (#117300)
2951bbf0f7 : Add some type annotations to torch._inductor.codegen.wrapper (#118491)
5f59d0c748 : [C10D] Disarm PGNCCL Heartbeat Monitor to gather data (#118344)
890d8e6692 : [executorch hash update] update the pinned executorch hash (#118502)
0d9aff2523 : Removed unused “device” argument in torch.frombuffer() #118273 (#118439)
acc700739e : Upgrade mypy version to 1.8.0 (#118481)
338596dfbc : Forbid follow_imports = skip from mypy.ini (#118480)
119b66ba16 : Use strict to toggle strict options in MYPYSTRICT (#118479)
ecca533872 : Use dmypy instead of mypy in lintrunner (#118475)
cad79bd0bb : Remove follow_imports = skip from sympy (#118469)
59b4d2cd40 : [mypy] Remove colorama ignore_missing_imports (#118468)
46712b019d : Enable local_partial_types (#118467)
2ed0af2bde : [executorch hash update] update the pinned executorch hash (#118477)
9d5b950bdd : [BE][Easy]: Update ruff to 0.1.14 (#118466)
ca1d70632d : [14/N][Dynamo] Make trace_rules.lookup only handle function + callable type (#118366)
62c1e4a578 : Added missing CircularPad*d references so the docs are actually built. (#118465)
2728c9137d : [easy][AOT] Fix shortcut path for simple tuple/list spec (#118460)
1460334436 : [quant] Remove deprecated torch.jit.quantized APIs (#118406)
d03173e88c : Unify MYPYINDUCTOR and MYPY (#118432)
42062e2622 : [pytree][BE] update treespec `is_leaf()` access (#116371)
26473460a4 : [ET-Vulkan] ExecuTorch Vulkan floor_div (#118428)
8d790abab9 : [NCCL][c10d] Log failing pointer if deregistration fails (#118455)
dabb90f2a4 : Revert "[Exception] [6/N] Remove use of torch::TypeError (#117964)"
bb6eba189f : [export][ez] remove unused argument from InterpreterModule (#118364)
89a1175e0e : Upgrade mypy python_version to 3.11 (#118418)
978faf1fa2 : Use an op counter to decide when to realize a kernel (#117030)
800e2e823f : Add compilable foreach RAdam support (#117912)
fe10b1800f : LazyGraphModule (#117911)
70699a6357 : [C10D] Add tests for gather and gather_object with subgroup (#118359)
28625d746f : [executorch hash update] update the pinned executorch hash (#118443)
993e4f3911 : [c10d] relax the nccl error check for nonblocking mode (#118254)
40c08795b0 : [JIT] python IR bindings: consolidate tests, add short docs in OVERVIEW.md (#118319)
9bce208dfb : Replace follow_imports = silent with normal (#118414)
af1338bfbf : fix escape nested comments in C++ (#117882)
5b31516008 : [dynamo] inline torch.jit._unwrap_optional (#118434)
4aa1f994be : [dynamo][assume_constant_result] Dont put symbolic guards for assume_constant_result (#118430)
838d3620cd : [NCCL PG] log NCCL comm at creation and abort (#118335)
80cb6db90d : [CUDA] [CI] Disable flash attention for sm87 architecture when the head dim > 192 (#117678)
7cc7bf9dda : [GHF] Add co-authors to PR (#118347)
4d771c56de : [xnnpack] Move x86 flags to platform_compiler_flags (#117923)
ff8e33556e : Enables load balancing duplicates in DCP (#116469)
b95c45fbf7 : add stack trace to device skip (#118112)
b256b7b348 : Add way to actually delete a torch.library.Library object (#118318)
f129e3fe03 : [inductor] Handle cum{sum,prod} on zero-dim tensors (#117990)
074ac822d5 : [ONNX] Skip empty input test case in aten_mm (#118413)
eee63ac845 : [dynamo] move torch._C._get_cublas_allow_tf32 to constant_fold_functions (#118342)
d41cfc92e6 : [CI] simplify mergeability check workflow (#118415)
84251d1d71 : [ez] Windows log printing + save successful test logs (#118124)
5c56822be2 : [export] Various fixes to .module() (#118272)
2ed1b1747a : Fix Auto Functionalize to handle specified default values (#118331)
07499074bb : Increasing session duration for AWS credentials for _rocm-test.yml (#118412)
939008a268 : Fix RuntimeError: NYI: Named tensors are not supported with the tracer (#118393)
bfbb8d8220 : Don't manually invoke atexit exit handlers in tests (#118409)
728789d850 : Deflake stream tests, part 2 (#118391)
e696fa1ee7 : [tp] enable rowwise embedding sharding in RowwiseParallel (#118242)
dc8357b397 : [dtensor] implement dim-0 (row) embedding sharding with MaskPartial (#118080)
910b49c48b : [dtensor] rewrite embedding ops using op strategy (#118079)
25f72194e8 : Realize inputs to DynamicScalar before unwrapping storage (#118125)
96d94f574e : Fix several bugs related to unbacked SymInt codegen in inductor (#117862)
89a0b1df51 : fix lint for cudnn codes (#117091)
2842d3c9d3 : [Nested Tensor] view: basic support for ragged_idx != 1 and _unsafe_view (#118317)
533637d9a3 : Revert "Check if enable inside run call (#118101)"
f1aef2c094 : Don't check is_conj for `_refs.linalg.svd` (#117972)
af8f37c2b6 : Revert "Use SEQUENTIAL posix_fadvise on mmapped files (#117805)"
6da0e7f84b : [Clang-tidy header][17/N] Apply clang-tidy on headers in torch/csrc/cuda (#117829)
8ff55c7e68 : Clarified sampling process of torch.randn for complex dtypes. (#118315)
b66c4eda61 : [Inductor] Add Thread Number Checker in scatter_reduce_ fallback for CPP backend (#118278)
0857a3a753 : [c10d_functional] fix an issue where mutation on views fails in inductor (#118333)
4d0b471389 : fix key error in pre_grad fx_passes_numeric_check (#118325)
8dd1be49b7 : [Inductor] Use sleef implementation for CPP backend acosh codegen (#118350)
2ea38498b0 : [FSDP][BE] Only show state_dict log when the debug level is detail (#118196)
4f4e61bb75 : [DCP] Add tests to demonstrate DCP checkpoint conversion (#117773)
644bc69530 : [DCP] Allow users to save and load without creating storage reader and writer (#117772)
fc30bd3b7b : Revert "[dtensor] rewrite embedding ops using op strategy (#118079)"
bfb5e7642e : Revert "[dtensor] implement dim-0 (row) embedding sharding with MaskPartial (#118080)"
bc67f87559 : Revert "[tp] enable rowwise embedding sharding in RowwiseParallel (#118242)"
2c9a90cde6 : [ROCm] backward compatible type enums (#118137)
f8e14f3b46 : [PyTorch][Vulkan] Clean up aten::stack (#118314)
2b1ee9be7a : [executorch hash update] update the pinned executorch hash (#118339)
0c5da6100f : [PyTorch][Vulkan] Clean up aten::unsqueeze (#118311)
8467de4e97 : Fix kaiser_window for lower precision data types on CPU (#117345)
ef29fe745f : [CUDA] Add missing TF32 annotation to `test_uint4x2_mixed_mm` (#118143)
b599f5608c : Fix mergeability check for ghstack PRs (#118258)
4e456fd95b : [AOTI] Support scalar to tensor in the ABI-compatible mode (#118024)
66c3152e36 : [CI] Build docker on larger runners (#118167)
385d8b32fc : Update PocketFFT submodule (#118348)
3cdd4e236e : [inductor][easy] dump triton kernel names in the log (#118313)
7a9012d7e8 : [tp] enable rowwise embedding sharding in RowwiseParallel (#118242)
8cc02b46c3 : [dtensor] implement dim-0 (row) embedding sharding with MaskPartial (#118080)
3d062f9abe : Revert "[pytorch][kineto] log process group config in distributed info (#117774)"
6596a3f23d : [Export] Remove ScriptObjectMeta (#118241)
401aa1a1de : Use SEQUENTIAL posix_fadvise on mmapped files (#117805)
de9ddd19a5 : Various CI settings (#117668)
8c167f9fc3 : [CMake] Explicitly error out if CuDNN older than 8.5 (#118235)
71757093c5 : [dynamo] avoid graph break on torch.backends.cuda.matmul.allow_tf32 (#118236)
b5c9623835 : [export] Add node meta into UnflattenedModule (#118138)
a93940b5db : [export] Allow constant outputs + None input/outputs (#117894)
24133e44b1 : Fix return type hint for list types (#118238)
52c5803088 : [NestedTensor] Support ragged_idx != 1 in pointwise ops (#118157)
91d5f94f85 : [FSDP] Idempotent reshard (#117997)
b10b08227a : Passes process group to `_all_gather_keys` in `dcp.load` (#118301)
02a411d4a6 : [mergebot] Dry run for labels + easier to read Dr CI result (#118240)
26f1da0b1b : Fix node traversal when setting up stacktrace preservation hooks (#118252)
b8bd3bb30a : Fix aot_autograd seq_nr logic (#118249)
3c77a3ed03 : export ATen/native/sparse/*.h (#118274)
fae569b4f2 : [dynamo] avoid graph break on tensor.element_size() (#118229)
bd6bf97ea5 : stop using torch.Tensor in dynamo/test_export_mutations.py (#118287)
f7f7283ec7 : Skip test_none_names_refcount under Dynamo-wrapped CI (#118309)
4e45d791e7 : Remove set_ exclusion in FakeTensor dispatch cache (#118154)
13bdd6c4e2 : Revert "[Dynamo, ONNX] use environment variable ONNXRT_DUMP_PATH to dump onnx models created by onnxrt backend (#117551)"
ea851eb027 : Uses Serial Loader for DCP.save when more than one thread is used. (#118114)
708e6241ed : Fix sympy_subs to preserve integer and non-negative properties. (#118150)
2de24c11f6 : [inductor] Slightly faster memory allocation on CUDA (#118255)
3e76a0e9c2 : Install an excepthook which annotates exceptions with rank information when distributed is initialized (#118190)
1565d58ad9 : [inductor] correctly generate grid info for benchmark_kernel (#118202)
b47cf4182e : Fix support non tensor inputs to operator.pos function (#118251)
476b744e23 : [AOTI] Forward fix https://github.com/pytorch/pytorch/pull/117989 (#118291)
1f6aa4b336 : [mypy] Enable follow_imports = normal for mypy-torch.backends.* (#116311)
3221585af0 : [Dynamo, ONNX] use environment variable ONNXRT_DUMP_PATH to dump onnx models created by onnxrt backend (#117551)
9768f73cb2 : [AOTI] Skip test_index_put_with_none_index on rocm (#118290)
83581f91ca : [Dynamo, ONNX] use environment variable ONNXRT_DUMP_PATH to dump onnx models created by onnxrt backend (#117551)
bb3db079b1 : [Export] Introduce class_fqn into CustomObjArgument (#118158)
fed0f2946f : [FSDP][BE] Fix optim_state_dict_to_load doc errors (#118195)
01388d0790 : [dynamo] Slightly better error message if key not in dict (#117902)
e1f9eca113 : [DeviceMesh] Reuse sub_group pg if exists (#115716)
a289dba7b1 : Add missing cuda libraries for context_gpu_test (#117493)
eb054cc012 : Revert "Fix Auto Functionalize to handle specified default values (#118035)"
8810fdd21e : fsdp: Unit test for ModuleWrapPolicy as a Callable (#117395)
c1e0674485 : [DCP][BC] Remove the dependency on _shard.TensorProperties (#116248)
316579e30c : [FSDP2] Introduced initial `fully_shard` frontend (#117776)
4f78869c18 : [state_dict] Calls wait() for the DTensor to_local() result (#118197)
817debeb89 : [inductor] Slightly faster memory allocation on CPU (#118171)
d6b556bd98 : Added `"any"` mode to `register_multi_grad_hook` (#117984)
173777461c : expose nested tensor header file (#117956)
865945cc1f : Convert `requires_cuda` to full decorator (#118281)
87fb8b6218 : [DTensor] Relaxed `to_local` `requires_grad` warning (#118186)
a5230e6019 : [ez][docs] Fixed render of `tensors` in `backward` (#117994)
8f973038d5 : Update update_failures.py given feedback (#118237)
b5b36cf0c4 : Fix failure of test_dynamo_distributed & test_inductor_collectives (#117741)
ee1dbb2acf : [AOTI] Fix a None as index codegen issue (#118187)
d1e661a1ce : [AOTI] Add _scaled_dot_product_efficient_attention to C shim (#118169)
5c7a18c5cb : [AOTI] Refactor shim_common.cpp (#118168)
4b4e6550f2 : Update oneDNN build option for older systems (#118057)
eebe7e1d37 : Migrate update-viablestrict to test-infra (#118163)
357a06f7c9 : [ONNX] Fix type promotion pass (#118246)
2c6a233c45 : Report the type of a tensor in wrap_to_fake (#118220)
8b95fb4eb8 : Add stack trace to "start tracing" log (#118217)
2a178dade8 : Augment create_symbol with user/infra backtrace fragment (#118215)
514159ddcb : Add torch_dynamo to resume_in for ease of debugging (#118201)
5a83c47d98 : [vision hash update] update the pinned vision hash (#117594)
e0903b0720 : [executorch hash update] update the pinned executorch hash (#118040)
e5e9f390be : [dynamo] Optimize overheads from _TorchDynamoContext (#118070)
a40951defd : [C10D] Fix nccl flightrecorder ignored dump timeout (#118142)
87335fabae : [Exception] [6/N] Remove use of torch::TypeError (#117964)
67300a11cb : Support custom autograd Function forward AD return non-Tensor in forward (#118234)
2d7a360911 : Fix Auto Functionalize to handle specified default values (#118035)
4a49e2b52d : refactoring (#118111)
4448f2a49d : Log stack trace of mutated idx reland (#118110)
5b819d9ef0 : Properly move retains_grad hook on in-place over view for base (#117552)
9c1348feb3 : [pytorch][kineto] log process group config in distributed info (#117774)
89530c8590 : [dynamo] Test for using torch.nn when replay_records are enabled (#116215)
7c33ce7702 : [CI] Install dill in ci (#116214)
b53cc6cf8d : [dynamo] Fix test_replay_record.py (#116230)
61865205b6 : Deflake Dynamo stream tests (#118205)
5e0ef84b01 : [dynamo] Refactor install_global_once, remove usages of install_global_unsafe (#118100)
2abb812a78 : Check if enable inside run call (#118101)
dba160e676 : [13/N][Dynamo] Refactor torch ctx manager classes check out of trace_rules.lookup (#118130)
4e29f01bf2 : Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689)
77186af028 : [DTensor][BE] re-enable test_dtensor_ops in CPU CI (#118134)
e6288820e3 : Revert "Update triton ROCm version to 6.0" (#118179)
af9b6fa04e : Revert "Check if enable inside run call (#118101)"
15608d8cb4 : Add guardrails preventing complex params in LBFGS & SparseAdam (#118161)
17ecd1e9cd : Migrate test_complex_optimizer to OptimizerInfo (#118160)
6978c3ddf3 : Removes an Incorrect Type Specification from AdaptiveMaxPool1d (#118162)
821b2c543c : [AOTI] Support .item() in the ABI-compatible mode (#117989)
2f6fc33c20 : Move skip sets into a new file. (#118032)
e599a08796 : [dtensor] rewrite embedding ops using op strategy (#118079)
b025e5984c : Get Device instance with correct type when privateuse1 backend is registered (#117966)
6fc015fedc : Check if enable inside run call (#118101)
fc135454ca : [PT2][Optimus][Reliability]Fix a bug in gradients computation for runtime numeric check (#118105)
1e185c7803 : [c10d] Barrier uses stream sync instead of device sync (#117804)
6e78592cbb : Added type checking for ExportedProgram (#117231)
af1ebc45d3 : [quant][pt2e] Add fold_quantize=True for all convert_pt2e calls (#117797)
90b3cf33ac : [C10] Make Scalar constructable from longs (#118149)
880f9bb57e : Remove xfails for consistently succeeding tests (#118152)
bd99115276 : [AOTI] Enable for MacOS (#118076)
a545ebc870 : Switched macOS runners type to macos-m1-stable (#117651)
12662f4d95 : [dynamo] add username in debug path (#117820)
7d396918c6 : [Inductor] Fix `argument unused during compilation` warning (#118077)
50ead5d8ae : [fx] add an option to not retrace when doing op fusion (#118120)
c5702a0891 : [dynamo] Optimize BACKEND_MATCH guard (#118065)
ed0ec2e0be : Remove dynamo runner's dependency on distributed build (#117903)
725f4b58ac : Cache dfs path in propose_partitions and re-use that later when trying to find cycles in the graph (#115943)
d59c2d6e05 : [dtensor] refactor partial redistribution logic (#113334)
03205ff3ba : [dtensor] make local_shard_size_on_dim be staticmethod (#118078)
8d49737f2b : [CUDA][Complex] Bump thresholds for conv3d (#118151)
46c228f0e2 : [DTensor][BE] rename PlacementStrategy.output_spec to output_specs since now we support a tuple of DTensorSpec as output (#116437)
26968cefb0 : [DTensor][fix] re-enable [add]mm tensor test (#118132)
155f27a97b : [DTensor][fix] fix is_tensor_shardable to correctly handle Replicate placement (#117726)
e9c240670f : [sigmoid] Add canonicalized IR as an option. (#116758)
21e8546b11 : [inductor][fx] Fix broadcast_tensors with unbacked symints when translation validation is off (#118066)
41a56f7828 : Fix swap_tensors to swap PyObjects associated with TensorImpl (#116955)
fc30c4d769 : Migrate forloop directional tests to OptimizerInfo (#117410)
5b671ce486 : [dynamo] fix typo in 3.11 resume_execution.py (#118108)
b7b1affe97 : Add half specializations for load of sum (#106454)
c0732c8d5e : [Dynamo] Add complex to literal constant (#117819)
cd084c4909 : Add `TensorIteratorConfig::add_const_input` to avoid COW materialize (#118053)
abd759d50d : [fx] Add hooks to intercept node replacements. (#117825)
b369888bec : Replace `constraints` with `dynamic_shapes` in caffe2/test/cpp & torchrec/distributed/tests/test_pt2 (#118026)
6ac284122b : [Memory Snapshot] Track context for SEGMENT_FREE and SEGMENT_UNMAP (#118055)
c6930aad46 : Update Triton pin (#117873)
13d2cdffa2 : Remove optimizer.step patching for profiler hook (#115772)
77705e7486 : [dtensor] fix unnecessary redistribute in new_factory_strategy (#118037)
58e7ec5843 : Revert "Log stack trace of mutated idx (#117720)"
364728b27b : Reduce pytest prints (#117069)
5ec2d7959d : Revert "[ez] Provide a slightly better error message if process times out (#117865)"
6784594532 : Fix sparse windows on CPU with MKL (#102604)
7598a4efdc : [ROCm] Disable MIOpen for empty tensors for RNN (#117672)
0c9b513470 : [Export] Fix serialize_metadata (#118031)
9ebaa27922 : Fix types.MethodDescriptorType related bug in dynamo (#118041)
3b38f7b266 : Remove skips for passing tests (#118000)
3ec4f00316 : [inductor] Allow reinplacing functionalized scatter ops (#116899)
5502a63b22 : [inductor] Allow reinplacing before meta-only users (#117121)
eb0fcab421 : [inductor] Move reinplace pass to its own file (#116898)
e309d6fa1c : Better unsupported op error message (#117770)
4d625c1c92 : [AOTI] Fix a bug in the torch._export.aot_load API (#118039)
bff348b28f : [AOTI] Add missing include to `model.h` (#118075)
2963e85a3f : [EZ][AOTI] Fix typos (#118074)
ae459c5809 : Don't use private accessor on SymNode to get _expr (#118007)
73c9be1395 : Don't use private accessor on SymNode to get _expr (round 2) (#118013)
905a7cc340 : [ROCm] skip test_eager_transforms.py test_compile_vmap_hessian_cuda (#118009)
4cfd16cb6d : [Inductor] optimize transpose_mxn with bf16 data type (#117958)
40890ba8e7 : [CI] Add python test skip logic for XPU (#117621)
455bba38f4 : [C10D] Make Flight Recorder report time_created in ns (#118047)
5df92a9244 : [C10D] Add version tag to NCCL Flight Recorder Dump (#118046)
dace1fda2e : [C10D] Make NCCL Flight Recorder dump produce a dict (#118044)
28c8a07b4d : add mask_convert_to_lp to support bool->fp16/bf16 convert (#117830)
6049998971 : [C10D] Finer-grain nccl heartbeat, avoid false positive hangs (#118016)
a8978d3676 : [dynamo] Add size(), get_coordinate() support for DeviceMesh in dynamo (#117710)
bb28965924 : Revert "Remove skips for passing tests (#118000)"
d84173c025 : [export] fix unlifting of custom class constants (#117979)
7b0979ef8e : [export] fixes to unflatten + custom obj composition (#117978)
e056cf5507 : [ac][pattern matcher] Do not percolate tags beyond the inputs of matched portion (#118034)
3708f2608e : [DTensor] Skip `[add]mm` empty tensor test (#118045)
0036385b55 : [Inductor][Reliability] Add runtime numeric check for pt2 Optimus in the pre grad pass (#115142)
3c339b5b21 : Remove skips for passing tests (#118000)
4646d0e1b2 : Update xla.txt (#117999)
fed45aee54 : Replace invoking self.value if there is a user defined init, avoiding arbitrary code execution (#117818)
dc1b9d758e : Update passrate calculation script to skip inductor and export (#118030)
162f643090 : Script to generate failures histogram (#118008)
af7cd5c32a : [Dynamo] Install module globals per output_graph (#117998)
a85fd20d45 : [ONNX] Improve support to mmap for ONNXProgram.save (#117863)
052860294f : Replace `constraints` with `dynamic_shapes` in export-to-executorch tutorial (#117916)
d810b10232 : Add beta1 support to CyclicLR momentum (#113548)
d01ba4e94e : enable fp8 cast for inductor CPU (#117737)
d8420c0b0c : [Nested Tensor]Add helper functions to set max_seqlen/min_seqlen directly (#117815)
a27a6e8cf1 : [ROCm] skip test_sparse_csr test_triton_bsr_softmax_cuda (#118006)
c6be5d55a5 : Migrate param_group testing to OptimizerInfo (#117675)
d280b6ae58 : Ensure that deleter is called even for a no-data tensor. (#117418)
cef5b93f28 : [ez] Serial when NUM_PROCS is 1 (#117977)
f9fca33baf : [codemod][highrisk] Fix shadowed variable in caffe2/caffe2/onnx/onnx_exporter.cc (#117996)
b901999350 : [inductor] For View.create(x, sizes) call realize_input() instead of realize() when handling unbacked symints (#117013)
f96b7d06d7 : [export] skip export tests when test with dynamo in ci (#117988)
c14751b6cf : Remove extraneous [[fallthrough]] in ivalue.cpp (#117985)
b5799d9977 : Revert "[c10d] Barrier uses stream sync instead of device sync (#117804)"
792dfa7e16 : Allow dynamic shapes of `tuple` type for inputs of `dataclass` type (#117917)
4df65bf51b : Optimize recursive_add_node in fx splitter (#117969)
86e8551446 : [dtensor] switch softmax forward ops to OpStrategy (#117723)
fdac55c35d : Added example regarding weight_decay distinction with per-parameter API (#117436)
b14d57ceda : Replace `constraints` with `dynamic_shapes` in scripts/sijiac/prototypes and test/inductor (#117915)
95a6866220 : Migrate fused optim load_state_dict to OptimizerInfo (#117890)
9a2c8f644b : Mark DynamicShapesExportTests::test_retracibility_dynamic_shapes as slow (#117896)
903e1913ff : Rename unbacked SymInt prefix to u (#117859)
0f6bbb1c07 : [c10d] Barrier uses stream sync instead of device sync (#117804)
c170fbd309 : [dtensor] refactor redistribute and fix uneven sharding redistribution (#115525)
2bb2cc0b71 : [tp] add clarification to doc and improve TP examples (#117618)
01abb5af21 : additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
56ef5afdee : [dynamo] Add more dynamo call_methods and getattr support for Placement (#117733)
f612e96180 : [export] set proper fqn in lift constant tensor pass (#115222)
80cf0ce153 : Enhance torch.vmap support from inside torch.compile (#116050)
b2a3d6ba0d : [exportdb] Remove torch/fb/exportdb (#117866)
a359afbc3f : Make and/or on uint8 tensors properly return 0x00 or 0x01 (#117827)
c6c54df81b : Fix incorrect type hints of `Module.to` (#117937)
60519fa3b7 : change master to main in datapipes readme (#117919)
86b4b27e26 : [docs] start a new FSDP notes doc (#117323)
8dc421a6b4 : Revert "accelerate `binary_cross_entropy_with_logits` by using `log_sigmoid` operator (#115539)"
c3780010a5 : Remove calls of c10::guts::void_t (#117942)
3580e5d407 : [executorch hash update] update the pinned executorch hash (#117953)
39df084001 : [Clang-tidy header][16/N] Enable clang-tidy on headers in torch/csrc/autograd (#117821)
3baade4425 : Remove calls of c10::guts::conjunction,c10::guts::disjunction,c10::guts::negation (#117926)
02209b5880 : Revert "[docs] start a new FSDP notes doc (#117323)"
c393b2f1ee : [export] require Module to be passed to export (#117528)
3ee092f75b : VSX: Fix overflow in complex division (#116972)
afabed6ae6 : [inductor][custom ops] Add tag to custom ops to preserve stride orders in inductor (#117298)
41556324a9 : [cpp_wrapper] Change CppWrapperCodeCache to use faster python binding (#117693)
7f474da6bc : [docs] start a new FSDP notes doc (#117323)
b50ccad86e : [BE]: Add type alias typing annotation to prims_common (#117928)
df4e3d9d08 : Document OpsHandler protocol (#117790)
8f7caaee67 : [cuDNN] Fix cuDNN version parsing against future versions of cuDNN (#117908)
fbd1d567ed : [inductor] Fix CPP wrapper codegen for ExternKernel args (#117931)
fa1e89b337 : Ban mutation on dropout outputs in export (#117879)
949a76a7f0 : [executorch hash update] update the pinned executorch hash (#117899)
2ae66ddba0 : [export] fix test ownership (#117886)
bad5e1e0bb : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op hardswish (#117489)
05ef2030ea : [c10d] Add logs for NCCL Comm Abort call (#117868)
2de3474711 : Simplify kwargs propagation in __call__. (#117880)
50633620b2 : sympy.Symbol is a subclass of sympy.Expr (#117857)
af831415a8 : fix cpp backend relu codegen with inf input (#117622)
4bf481fb1b : Fix inductor pattern match error for qlinear with bmm (#117633)
0ae952db76 : enable mkldnn bf32 matmul (#116015)
aaae2d8bb6 : Add compilable and capturable foreach adamax with tests (#117835)
e732adf0a7 : [pytree] add access api (#117771)
a1b3b5748f : [PyTorch][Vulkan] Create context for conv1d (#117780)
10923f8720 : Revert "[inductor][custom ops] Add tag to custom ops to preserve stride orders in inductor (#117298)"
94f0472579 : [Quant] [PT2] Add Hardswish into X86InductorQuantizer Conv2d Unary Annotation (#117488)
1967394690 : [inductor][custom ops] Add tag to custom ops to preserve stride orders in inductor (#117298)
181e6dafd0 : [MPS] Fix linear for 5D tensors (#117837)
d4cc1c5bff : Add new pattern matchers for SDPA (#113004)
8f91a53e9a : Add environment for close-nonexistent-disable-issues (#117885)
3c1498d117 : [ONNX] Add bfloat16 support for scaled_dot_product_attention (#117878)
f684e44fd6 : Revert "Reduce pytest prints (#117069)"
5538b37a06 : [ez] Provide a slightly better error message if process times out (#117865)
29f899ef87 : [pytorch][vulkan] cumsum dim <= 1 (#117580)
dd6c0f6844 : Trim Dynamo shards 7->3 (#117869)
365c7a292f : Log stack trace of mutated idx (#117720)
6c99bf0766 : move disable_cudagraph_reason disabling after codecache is accessed (#117823)
c4eab49ded : [MacOS] Embed libomp.dylib/omp.h into MacOS wheel (#114816)
414a1fd29f : [PyTorch] Add IValue::IValue(std::vector<T>&&) ctors (#117769)
d45fd68012 : OIDC for update_pytorch_labels (#117876)
ad3d41692e : [PyTorch] return `decltype(auto)` from getItem (#117569)
632fcc4831 : [PyTorch] Make `List::get() const` match `List::operator[]() const` (#117568)
15d568d621 : [Inductor] Use codegen reference for buffer to string (#117838)
1f5c27eb18 : cleanup code comments _compute_numerical_gradient (#117484)
ab216bbaeb : cleanup code comments analytical Jacobian as vjp projection (#117483)
40dbd567e0 : Reduce pytest prints (#117069)
2f4456a73e : Remove xfail on test_make_weak_keyed_dict_from_weak_keyed_dict (#117848)
b637fdc8b3 : Revert "additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)"
f316c35a34 : [export] Support preserving submodule callling convention in non-strict export (#117796)
249a226113 : [export] Error on not pytree-flattened nodes (#117598)
6c5c2121b1 : Run some OOMing tests serially (#117759)
de25718300 : [release] Docker Release build trigger on rc for testing (#117849)
03b12e56c7 : accelerate `binary_cross_entropy_with_logits` by using `log_sigmoid` operator (#115539)
98a044d33e : [CI] Build M1 conda binaries on M1 runners (#117801)
17c5f69852 : Run test_jit with PYTORCH_TEST_WITH_DYNAMO=1 in CI (#117765)
f115f1cde1 : [Quant] Enable QConv2d with hardswish post op (#117487)
5756b7a08e : Remove math_compat.h (#117828)
f2d6e99f8d : Workaround a cusolver bug on CUDA < 12.1 in triangular_solve (#117636)
4057d005ff : Initial torchbind support in PT2 (#117697)
c51a4e64c0 : Add support for compiling SDPAParams (#117207)
8524fa566c : [executorch hash update] update the pinned executorch hash (#117593)
f302a0d380 : Re-enable SGD (#117434)
924ed91612 : Move getDurationFromFirstEvent to USE_C10D_NCCL ifdef (#117738)
38d9b3d937 : Remove use of math_compat.h (#116167)
5c17f66a3d : [Exception] [5/N] Remove torch::IndexError (#117713)
3131e0460e : Changed return type of randint64_cpu to int64_t to prevent codegen is… (#117443)
1adf77ce5e : Don't use functional tensor inside _unstack_pytree (#117811)
c16e6e4cf7 : [ProcessGroup] Make watchdog check work queue more frequently (#117297)
aadbaf8e2d : [EZ][BE] Move `build_android_gradle.sh` (#117795)
d618e86328 : [ONNX] Bump transformers in CI test (#117703)
74e1362499 : additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
c317bf2c2b : [HigherOrderOp][BE] factor out merge_graph_inputs (#116912)
c6028f8f73 : [HigherOrderOp] Add while_loop support (#116823)
113f0749f5 : [HigherOrderOp] move some common utils in cond to utils.py (#116721)
77cfacab55 : Revert "Reduce pytest prints (#117069)"
a468b9fbdf : Update xla.txt to fix missing commit (#117708)
2f84a9d37c : Revert "[CUDNN][SDPA] Experimental cuDNN Flash Attention v2 Inference (#115663)"
2f89ef2300 : Reduce pytest prints (#117069)
e432b2e607 : [inductor] multi-kernel support (#103469)
fee96adde7 : [EZ] Update `weekly.yml` to use actions from test-infra (#117775)
6d9432c44c : [ONNX][dynamo_export] Decomposition skips using custom operator (#117314)
92d718aed1 : [export] Add lifted constant obj to input (#116985)
eba5d5485d : [dynamo] make ConstantSource propagate through built-in ops for TensorVariable (#117704)
1462d72904 : Speed up triu_tril_kernel (#115013)
16ebfbbf07 : All tests run with markDynamoStrictTest now (#117763)
5278200507 : Add some better docs for dynamo_test_failures.py (#117761)
07216721cf : [codemod] markDynamoStrictTest batch 23 (#117754)
def4959662 : Revert "[inductor] allow mm template to accumulate with float16 dtype (#117479)"
23d53a4360 : add test_public_bindings to internal CI (#117712)
1b773df3c6 : Place .lrodata later in the binary (#117575)
7451dd0585 : Revert "Add node meta value into UnflattenedModule (#117686)"
5aa895e53e : Don't run inductor tests in Dynamo shard (#117747)
646229218f : Revert "[export] Error on not pytree-flattened nodes (#117598)"
4720109d7f : [dynamo] add common methods to DistributedVariable (#117590)
044b9012d5 : Update PocketFFT (#117595)
db1a6eda9e : [codemod] markDynamoStrictTest batch 22 (#117729)
fa86fa7a61 : Fix MSVC 14.38 - VS 2022 Build (#117497)
a669319450 : [inductor] Faster C++ kernel python bindings (#117500)
6e4e81a9ef : [dynamo] Extend LazyVariableTracker to tuples (#117426)
26956980c6 : [AOTI] Add torch._export.aot_load (#117610)
2fb9d8811f : Don't try to directly compare symbols, it won't work (#117674)
8bf788c390 : [SAC][Dynamo] Add support for functools.partial in CheckpointHigherOrderVariable (#117657)
b0084be114 : Revert "Re-enable SGD (#117434)"
0d1e7053ac : [easy] Log guard failure (#117639)
4ba5318d3f : [dynamo] Add DictView variable tracker (#108420)
f4df0f061c : Implement set in terms of dict (#110524)
bc85eb948f : Break on unsupported keys for dicts / elements for sets (#117630)
4512a95371 : [easy]Remove specialized value (#112252)
2dd4a254a0 : add Half support for interpolate operators on CPU (#105648)
c9528a11dd : Add Half support for masked_softmax on CPU (#117028)
e60bc502b4 : [Inductor Intel GPU backend Upstream] Generalize part of Inductor test case (#117513)
b72ddbab60 : [Clang-tidy header][15/N] Enable clang-tidy on headers in c10/cuda and c10/mobile (#116602)
57ca455471 : [dynamo] Add hasattr support for TupleVariable (#117694)
bc9cb04822 : Replaced CHECK with TORCH_CHECK in order to not abort, but throw a Ru… (#117653)
e7fac72be7 : Re-enable SGD (#117434)
79811e765c : [2/4] Intel GPU Runtime Upstreaming for Device (#116833)
61ea3036bc : Allow explicit shutdown of the compile-worker pools (#117664)
1859895ffa : Docs: fix docstring errors in model_averaging (#117038)
4f2620ce56 : [PT2][split_cat] fix a bug in merge_splits (#117707)
02c96f6949 : [export] modify torch.export tests to pass a Module in (#117572)
ccc8440609 : [export] introduce WrapperModule (#117571)
5697986482 : [export] change exportdb to require torch.nn.Module (#117570)
41153542ae : Use wait stream instead of synchronize() in cudagraph warmup (#117578)
560213de2d : [export] Error on not pytree-flattened nodes (#117598)
634ce3c913 : Document and type torch._inductor.virtualized (#117658)
16ff6cd340 : Catch some missing unbacked symbol dependencies (#117650)
cb2b98ad6b : [codemod] markDynamoStrictTest batch 21 (#117609)
bbf65bc451 : Revert "[Dynamo] Remove the workaround since it has been fixed (#117615)"
cbf24ba962 : Add node meta value into UnflattenedModule (#117686)
6d96beb6be : [c10d] Remove health check (#117699)
21ddca4225 : Enable HIP build for //sigrid/predictor:pytorch_disagg_gpu_task (#117616)
3882714168 : Fix check-labels.yml for ghstack PRs (#117680)
f7143b79bd : Stricter pull_request_target in labeler.yml (#117677)
58c4bc62bb : [c10d] Deprecate Work.result() (#117565)
5aa92b5090 : [CUDNN][SDPA] Experimental cuDNN Flash Attention v2 Inference (#115663)
a60b566d37 : [TorchElastic] Support for overprovisioning in C10 based rendezvous (#117066)
a1afd1b195 : Revert "[inductor] Faster C++ kernel python bindings (#117500)"
410515241d : [c10d] Remove CoalescedWorkNCCL (#117696)
387ea260af : [c10d] Enable watchdog for coalesced work (#117682)
396a5c3091 : [Exception] [4/N] Replace torch::IndexError and torch::ValueError with C10 counterparts (#117317)
c64fd8b89c : [codemod] markDynamoStrictTest batch 20 (#117702)
3770311093 : [codemod] markDynamoStrictTest batch 19 (#117701)
82c0083819 : Fix triton wheels build (take 2) (#117706)
898f6a48a9 : [codemod] markDynamoStrictTest batch 18 (#117700)
b3e2571e83 : [Dynamo] Remove the workaround since it has been fixed (#117615)
3114813314 : Replace `constraints` with `dynamic_shapes` in deeplearning/aot_inductor test (#117573)
2db53a01e5 : propagate torch stack trace metadata to copy_() nodes during input mutations (#117587)
26a63907ba : Ordering placeholder and get_attr nodes in unflattened module (#116910)
e457b6fb18 : [inductor] Faster C++ kernel python bindings (#117500)
763ddb396d : Revert "[codemod] markDynamoStrictTest batch 18 (#117604)"
01c0c67937 : Revert "[codemod] markDynamoStrictTest batch 19 (#117605)"
87c2427173 : Revert "[codemod] markDynamoStrictTest batch 20 (#117606)"
84cfe6d8b2 : Drop all gather stats to debug not warning (#117669)
8841d26046 : [dynamo] LazyVariable - redirect __str__ to the realized variable __str__ (#117583)
a7fbbc2a4a : [inductor] allow mm template to accumulate with float16 dtype (#117479)
208e64a9ba : Initial implementation of FakeTensor caching (#113873)
c0940d2e93 : [pytree] reuse `flatten_fn` in `flatten_with_keys_fn` to ensure consistency (#117656)
bffc8ecfb0 : [codemod] Fix shadows in PyTorch (#117562)
da6abaeeac : Revert "[inductor] Faster C++ kernel python bindings (#117500)"
cb0bfcf590 : Revert "Ordering placeholder and get_attr nodes in unflattened module (#116910)"
89cf1ddb5c : [AOTInductor] Allow user to explicitly specify Device to run on (#117413)
308e154af5 : [codemod] markDynamoStrictTest batch 20 (#117606)
0cda1e0b21 : [codemod] markDynamoStrictTest batch 19 (#117605)
24f288114a : [codemod] markDynamoStrictTest batch 18 (#117604)
006d655956 : [codemod] markDynamoStrictTest batch 17 (#117219)
1967165d4d : [codemod] markDynamoStrictTest batch 16 (#117218)
ca0abf8606 : Add inductor-specific testing strict mode denylist (#117553)
12561bb5fe : Ordering placeholder and get_attr nodes in unflattened module (#116910)
bb0fd1bd3c : [inductor] Faster C++ kernel python bindings (#117500)
0c26565d5d : Revert "Add pull request target to bc lint (#106065)"
9da01affd3 : Revert "[inductor] Faster C++ kernel python bindings (#117500)"
8c7e3a18ff : Revert "Ordering placeholder and get_attr nodes in unflattened module (#116910)"
e877c2e6ff : Revert "Add inductor-specific testing strict mode denylist (#117553)"
7f3cac06b9 : Revert "[codemod] markDynamoStrictTest batch 16 (#117218)"
29fa6fbc4e : [Dynamo] Fix a corner case of reinplace_inplaceable_ops pass for triton kernels (#117612)
e94b79f627 : Revert "[codemod] markDynamoStrictTest batch 17 (#117219)"
8483f493af : Revert "[codemod] markDynamoStrictTest batch 18 (#117604)"
0bfd9653ef : Revert "[codemod] markDynamoStrictTest batch 19 (#117605)"
d51583b214 : Revert "[codemod] markDynamoStrictTest batch 20 (#117606)"
06dab05405 : Revert "[export] Error on not pytree-flattened nodes (#117598)"
d0fc268918 : Fixed issue in upsample_nearestnd lowering with scales (#117538)
ab847a2f5c : [codemod] markDynamoStrictTest batch 20 (#117606)
45d7859e75 : [codemod] markDynamoStrictTest batch 19 (#117605)
70b22be32a : [codemod] markDynamoStrictTest batch 18 (#117604)
6d1406d177 : [oidc] Migrate Triton wheel upload to oidc (#117648)
35e8478305 : [export] Error on not pytree-flattened nodes (#117598)
40a6710ad3 : Mark set_ as an inplace view op (#115769)
5bb2298da7 : [codemod] markDynamoStrictTest batch 17 (#117219)
3bb8d2b905 : Update triton ROCm version to 6.0 (#117433)
e2830e6328 : [PyTorch] SDPA decomp: actually use attn_mask (#117579)
1deb75b584 : [c10d] Move the timeout dump check from watchdog to monitoring thread (#117168)
ed6006ee5d : [Reland][ONNX] Guard xfail tests with error messages (#117592)
9448065061 : [pytree] add key path api (#116786)
5667a990fd : Chore: improve log message about cache size limit exceeded (#116557)
3cd2c68fbe : Fix syntax highlighting in android (#117439)
735715e6d3 : [Dynamo] Make profiler function will be ignored warn only once (#117585)
2c5488d719 : Match all_gather_into_tensor args names in remapping (#117224)
8f1bc876b2 : [quant] Support custom qmin/qmax for activation and weight for xnnpack quantizer (#117305)
e4c2dfb35b : [Dynamo, ONNX] Run llama attention with onnxrt and dynamic shapes (#117009)
fb06ed36d1 : Change dynamo_test_failures.py to silently run skipped tests (#117401)
9056c7d941 : use getPinnedMemoryAllocator for privateuseone (#117530)
8852bb561c : More efficient multi-threading in Softmax & LogSoftmax CPU kernels (#116367)
4a54ab328c : Removed an internal assertion for the optional stable value and inste… (#117414)
1872834247 : [MPS] Fix `torch.mm` correctness for large matrices (#117549)
f518cf811d : [DCP] Adds support for meta tensor loading for DCP.load_state_dict() (#113319)
4a44a3c76d : update kineto submodule (#114297)
cf470e7b59 : Migrate update-commit-hash to test-infra (#117506)
1d14adfa66 : [mta] Fused SGD (#116585)
5aac95c713 : Introduce slice_inverse() op (#117041)
f6767244cf : Added meta function for _upsample_bicubic2d_aa (#117347)
b1c3f9f1b9 : Fix missing mkl-dnn include paths (#117492)
46a8408fa1 : [codemod] markDynamoStrictTest batch 16 (#117218)
ab6207a342 : Add inductor-specific testing strict mode denylist (#117553)
5e0e78585d : Ordering placeholder and get_attr nodes in unflattened module (#116910)
4ec667cc64 : Revert "[ONNX] Guard xfail tests with error messages (#117425)"
3a52147cc5 : [inductor] Faster C++ kernel python bindings (#117500)
2a3fb7dbb6 : [ROCm] Fix NHWC related tests in test_inductor_freezing (#117158)
4712c7dac8 : [inductor] add C-shim for index_put (#116667)
3e8c8ce37b : Update Reviewers for PT-D team (#117409)
1993956da3 : [ONNX] Guard xfail tests with error messages (#117425)
28be47c267 : [RELAND][export] Exempt autograd ops for predispatch export (#117448)
99e54744f7 : Fix ExecuTorch pinned commit update failure (#117518)
c30346db0e : Check in some torch.compile helper scripts (#117400)
a7a2773567 : Check invariants for dynamo_test_failures.py (#117391)
29516bd2a0 : add _amp_foreach_non_finite_check_and_unscale_cpu_ and _amp_update_scale_cpu_ kernels on CPU (#109281)
0fa6ee44d9 : [CI] Skip lib for xpu binary unit test (#117514)
13473df0d7 : [MPS] Make addmm support empty matmul (#117223)
28bb31e4a5 : [Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358) (#116897)
f20eaadfef : [vision hash update] update the pinned vision hash (#117509)
ae3d7091cb : [BE] Replace deprecated `set_default_tensor_type` (#117505)
dd2cff1591 : [Dynamo] Use isinstance rather than istype when check if python module type (#117022)
bac0878780 : Error if compiled nondeterministic backward called in deterministic mode (#114780)
c1ab2777c0 : Update state_dict.py to propagate cpu offload (#117453)
1a57c18760 : Fixed cuda grads for interpolate::trilinear on non-contig grad output (#117373)
001585f446 : [fx][inductor] Add statically_known_true utility for SymBool (#117359)
661747c727 : XPU, move oidc to top level workflow and use gha_workflow_s3_and_ecr_read_only policy (#117498)
7a8013fbfa : [inductor] Handle more edge cases in slice and slice_scatter (#117377)
5c700f60a5 : Properly preserve SymInt input invariant when splitting graphs (#117406)
75818adcf7 : Pyi doc inclusion + fix (#117267)
7a851fedc8 : support torch.mm with conjugate transposed inputs (#117238)
41ffea2f99 : Properly unwrap_storage tensors sent to DynamicScalar (#117444)
d9b265adaf : modify the conditions as PythonModuleVariable (#116856)
d089bb1b72 : [xla hash update] update the pinned xla hash (#117485)
2b56d80460 : [inductor][cpp] apply simplify_index_in_vec_range to vector store and vector transpose (#117263)
3b00dd5843 : [inductor][cpp] apply simplify_index_in_vec_range in select_tiling_indices to enable more contiguous vec load (#117260)
3a0bcd2c12 : [audio hash update] update the pinned audio hash (#117423)
19502ff6aa : Fixed typo in build_activation_images.py (#117458)
03c6f79548 : [vision hash update] update the pinned vision hash (#117311)
2200118f59 : Enable some uint{16,32,64} tests that are working (#116809)
a298fba146 : [MPS] Increase metal language support to 2.3 (#117472)
61a181e83c : Report function name in stack trace annotations (#117459)
a6d33614d6 : add float8 types to dtypes table (#117375)
c3e2b94827 : Realize non-ReinterpretView Views in custom Triton kernel args (#117468)
62496ffd0d : [dynamo][easy]: Add support for `operator.truth` (#117463)
2748f05056 : Add torch.fx.interpreter to uninteresting_files (#117460)
a1155883d4 : Clean up Docker config on ROCm runner (#117432)
a76610e6fb : [BE] Delete unused is_dynamo_compiling (#117455)
347255809c : Make c10::SymInt typecaster support scalar-like fake tensor (#117454)
796fe40a96 : [BE] Delete unnecessary variable fastpath (#117452)
220cf46c2a : Always accept 0-d scalar tensors as int, even if __index__ fails (#117451)
38c18f3825 : [c10d] Add a timeout check interval variable for timeout dump (#117093)
003c900d5e : Add _assert_scalar (#117378)
1a8545164a : [export] Add unit test for SDPA export result (#117390)
bf27dd6df9 : Add dynamo support for operator.abs (#117442)
1a790f5a61 : [RELAND] Error grad mode op in export API (#117420)
d6847c5977 : [CI] Set correct permissions for auto_request_review (#117408)
53f3361319 : [BE] Use nested namespaces for sparse (#117415)
d8bdb50379 : [reland] pass shape/stride during tensor unflatten (#117340)
eebf115686 : [fsdp][2d] FSDP sync module states handle tensor subclass (#117336)
fc044b5cdb : [pt-vulkan] Add build time flag to control descriptor pool sizes (#117398)
2c8975387d : [Optimus] fix batch layernorm numerical issue (#117404)
f008efa8e7 : Reconstruct streams via global registration, temporary impl to unblock FSDP (#117386)
ef3217d9f7 : [PyTorch] Mark USDT probes as noinline to avoid duplications in ThinLTO mode (#117381)
302f931c25 : Update Reviewers for PyTorch Distributed team (#116231)
96163eb010 : Switch nightly binaries to oidc. Remove aws keys (#117416)
22ddf91dbb : [torch][fx] more strong typed codegen for partial specialized code on boolean (#117201)
2bc7da1ab7 : [HigherOrderOp] change signature of map_impl (#117161)
f2f47c6848 : [dynamo] realize LazyVT's in DICT_MERGE (#117282)
3e397cefc5 : Add uint1 to uint7 dtypes (#117208)
52575eb1bb : The permission id-token write needs to be set on rocm-test callers (#117422)
9746f36e50 : [export] Minor fixes to serialization (#117374)
7f1f0b1135 : [C10D] Add duration_ms to flight recorder (#114817)
7a7535283f : Some basic support for uint{16,32,64} codegen in CPU inductor (#116810)
4b25948ee6 : Torchbench Dynamo Runner: Enable DDP for perf test and traces (#113332)
c329eddcb9 : Migrate the rest of state_dict testing to OptimizerInfo (#117186)
bcf1f312a0 : Migrate nontensor step and CUDA params state_dict tests to OptimizerInfo (#116509)
7b753cc7b8 : Skip some slow tests (under Dynamo) (#117389)
d73846689d : Rename test_legacy_vmap.py TestCase names (#117320)
06576d859d : Stop running ModuleInfo tests under Dynamo (#117318)
fbd9bccb75 : [C10D](reland) Add GIL checker to NCCL watchdog monitor (#117312)
7b0926cc3e : Fix wrong class inheritance in pyi (#116404)
c167c34396 : Skip unsupported tests on arm (#117344)
384c4885fa : [ProcessGroup] Do not print NCCL_DEBUG before NCCL init (#117328)
18bd5c05bc : FFT: Handle noop fftn calls gracefully (#117368)
5cf481d1ac : [CI] Explicitly specify read-all permissions on the token (#117290)
013a59acbd : Update `BCEWithLogitsLoss` documentation regarding `pos_weight` (#117046)
e54b40e5eb : [dynamo] GetItemSource - restrict the supported index Source to be GlobalWeakRefSource (#117138)
657545dbdd : Migrate rocm test to using oidc (#117160)
cb42bc705b : Make auto_functionalized HOP fallback in inductor (#117084)
a97d00cca5 : [Nested Tensor]Support SDPA math fallback for jagged layout nested tensor (#116445)
21d370819b : [CI] Set permissions for stale workflow (#117371)
172dd13ecf : [inductor][cpp] improve vector contiguous checks for FloorDiv and ModularIndexing (#117221)
6c624aad37 : [CPU] Disable floating-point contraction when compiling (#116318)
6ebb26d572 : Fail Conv Binary Inplace check when act and accum are same tensor (#117331)
19a9fdbf3a : Add more alias and mutation check for other input of Conv Binary Inplace fusion (#117330)
f7d9047864 : [inductor] Iterative percolate tags (#117306)
47c9d12ffd : Add super().setUp() to TestFFT1D (#117329)
50049cfaa0 : [1/4] Intel GPU Runtime Upstreaming for Device (#116019)
7dac2f9f2d : [export][ez] Fix getting meta["val"] (#117313)
40f12cec93 : Change predispatch tracing API (#117278)
ec443089c7 : enable fp16 mkldnn fusion/prepack in inductor (#117206)
9d5954e2a9 : ignore ill-formed solution of reduce_inequalities (#117310)
638f85fd67 : Add default parameters to rrelu_with_noise() (#117141)
d29bf0a37e : Fix ONNXProgram.save to use torch.load(..., mmap=True) for large models (#117295)
b62ba82cdc : Update initializer path for ONNXProgram.save due to onnx.checker limitation (#117294)
b3b585af64 : Revert "[codemod] markDynamoStrictTest batch 16 (#117218)"
ac0bed01df : Revert "[dynamo] GetItemSource - restrict the supported index Source to be GlobalWeakRefSource (#117138)"
3214ada631 : [MPS][BE] Better format nested ternary (#117198)
04604eea8a : [inductor] check nan/inf for graph inputs (#117189)
47119785ac : [codemod] markDynamoStrictTest batch 16 (#117218)
c278a1b39c : [dynamo] GetItemSource - restrict the supported index Source to be GlobalWeakRefSource (#117138)
5d2d21a7be : [bfloat16][easy] kthvalue, median (#117279)
5c6e7962f4 : [c10d][EZ] Add more logs in the destructor of ProcessGroupNCCL for better root cause investigation (#117291)
53cba40651 : [Distributed] Fix tests when CUDA not available (#117163)
9f87760160 : Revert "[Nested Tensor]Support SDPA math fallback for jagged layout nested tensor (#116445)"
0a5aa5c2d1 : [pt-vulkan][ez] Remove reference to c10::MemoryFormat from `api/` folder (#117183)
8b0bfb3aaa : [FSDP] remove unused flat_param_part_view (#117082)
3c66c89057 : [pt-vulkan] Replace `c10::ScalarType` with native equivalent (#117181)
331ae7f89f : [pt-vulkan][ez] Replace `c10::overflows` with native equivalent (#117180)
4205892be6 : [pt-vulkan][ez] Replace `ArrayRef` with `std::vector<T>&` (#117179)
b209de6699 : [pt-vulkan] Replace `TORCH_CHECK` and similar macros with native equivalents (#117178)
fe298e901a : [pt-vulkan][ez] Replace `ska::flat_hash_map`, `c10::get_hash` with `std::unordered_map`, `std::hash` (#117177)
57b76b970b : [pt-vulkan][ez] Miscellaneous small c10 deprecations (`c10::irange`, `C10_LIKELY`, `c10::SmallVector`, etc.) (#117176)
24c39bb5e5 : Upgrade nightly wheels to rocm6.0 (#116983)
e55a778cbb : [Nested Tensor]Support SDPA math fallback for jagged layout nested tensor (#116445)
92cc8ae172 : [FSDP] Cloned unsharded tensor slice in optim state dict load (#117261)
88bf84f106 : [benchmark] add --compile-autograd to dynamo benchmarks (#117196)
83c45a9931 : Faster gc_count update for CUDACachingAllocator (and avoid nullptr de… (#117064)
5bc896e5dc : Dockerfile; Add cuda bin to PATH (#117105)
9e3580f793 : Fix #117011: add the TORCH_CHECK(grad_output) of upsample_nearest::backward() (#117100)
f89725fb41 : [DCP][BC] Add the backward compatibility test (#116247)
7e9cbc6834 : [CI] Catch more exception types when running eager in PT2 tests (#117120)
5b24877663 : Improve uint{16,32,64} dlpack/numpy compatibility (#116808)
623b7fedc4 : [c10d] Add comments to the rest environment variable within NCCLPG (#117092)
3d1869d0ae : [DCP][BE] Improve the readability of filesystem and fsspec filesystem (#116246)
4c7b602645 : Add Support For Symbolic Shapes in Register_replacement, SDPA Pattern Matching (#115441)
bfc336308a : Revert "Error grad mode op in export API (#117187)"
767e1b6349 : Revert "Bring docstring to .pyi file (#114705)"
7005a4bcb6 : [dynamo] Added dyn shapes support for math trigo ops: sin(h), cos(h), tan(h) ... (#114866)
2b5a201aa6 : [Exception] [3/N] Replace torch::NotImplementedError and torch::LinAlgError with C10 counterparts. (#116824)
89ef426ba0 : Error grad mode op in export API (#117187)
0e1f43c44d : [inductor] don't access cluster_dims for too old version of triton (#117192)
3b2ddb6f71 : Update TorchBench pinned commit (#117073)
1cefc58905 : init tls grad_mode/local_dispatch_key set while fork new thread in (#113246)
9f57cf502f : [inductor][cpu]disable pointwise_cat on CPU (#116313)
e3d4f4d14b : [ProxyTensor] dedupe symbolic shapes in tracing (#116158)
6f9fcc79c2 : [DCP][BE] Remove unused fields (#116245)
263cc12fab : Add Dynamo Reset in PT2E Quantization testing (#117200)
5ae221a214 : [ONNX] Refactor op consistency tests (#116319)
9b1fac694e : [c10d] Add extra sleep in waitForDumpOrTimeout to ensure enough time for all ranks dump debug info (#116545)
ca23c56efc : [codemod] markDynamoStrictTest batch 15 (#117139)
9dbe4eae82 : [codemod] markDynamoStrictTest batch 14 (#117133)
a526d0a926 : Skip all OpInfo-based test when running with PYTORCH_TEST_WITH_DYNAMO (#117129)
dc43ad4286 : add is_grad_enabled check in runtime_wrapper before running with torch.no_grad (#117089)
203430a778 : [dynamo] easy - better assert message for EQUALS_MATCH guard (#117006)
79de14546d : [export] Add TORCH_LOGS=export (#116993)
6f0f4f12ca : [BugFix] Prevent LSTM to run with wrong input shape (#115542)
10509dac85 : [C10D] Rename flightrecorder key vars to avoid confusion (#116905)
1174e82bde : Revert "Add _assert_scalar and teach Inductor to codegen it (#114148)"
0f10a706f6 : add a docblock for torch._scaled_mm (#117190)
edec54b9de : Add `torch._lazy_clone` to create COW tensors (#113397)
71343507cd : Add super().setup in test_numeric (#117148)
2f17a21b2b : [Reland] [13/N] Enable clang-tidy on headers of torch/csrc (#117088)
8783fe9cf3 : [export] Modify SDPA decomposition to decompose _scaled_dot_product_flash_attention_for_cpu (#117097)
f70aeb4ffd : Fix backward for reshape() on jagged layout NT (#117137)
e10cfdd895 : Update matmul requires_grad checks (#117067)
7e6a04e542 : Allow unMarkDynamoStrictTest to work on tests (instead of just classes) (#117128)
1b8ebb6c42 : [codemod] markDynamoStrictTest batch 13 (#117127)
79e6d2ae9d : Remove incorrect usages of skipIfTorchDynamo (#117114)
d6540038c0 : Fix 0-dim Index in Index Copy decomp (#117065)
b9293e74a2 : [ROCm] Fixes for hipblasLt for mm use case. (#116537)
7e37f63e5e : [Reference Cycle Detector] Ignore FakeTensor in cycle leak detection (#117116)
3e9bb8d4de : Run docker release build on final tag (#117131)
73990c37e6 : [c10d] To make ProcessGroupNCCL to use globalStore for coordination (#117075)
180425df9b : [c10d] Add a recursive method to get the inner most store (#117074)
6f8fc42dba : [inductor] Add support for tl.make_block_ptr (#116079)
9bf9586c6d : Pytest do not rewrite assertions by default (#117060)
fad7734fa7 : [AOTI] Remove caching for compiled model.so (#117087)
e4e80dc9b3 : [FSDP] sharded grad scaler: copy found_inf after waiting on async reduce_all (#115710)
9eb842cbd6 : Compiled autograd: Lift autograd functions' backward and provide default key for custom autograd functions (#115573)
b4a35632f9 : Add function to materialize COW storages (#117053)
ec98df70f3 : [CPU] _vec_softmax_backward, _vec_log_softmax_backward, _vec_logsoftmax: fix CHUNK_SIZE to avoid unnecessarily large allocation (#117029)
e0da05e1ba : [codemod] markDynamoStrictTest dynamo/* (#117077)
04f788f925 : Unflake test_auto_functionalize (#117076)
5046b4981d : [ROCm] Add opt-in option for inductor's layout optimisation on ROCm (#116329)
94db6578cc : [Quant] Add dynamic quantization config for x86 inductor backend (#115337)
558cc69641 : Fix torch function kwarg dispatch (#117083)
e88d0648ed : Revert "[export] Error grad mode op in export API (#116339)"
77ecb3d725 : Revert "[export] Exempt autograd ops for predispatch export (#116527)"
20f394f10a : [LLVM/TensorExpr] Update for an API change in LLVM 18. (#117086)
20f769544c : [12/N] Apply clang-tidy and fix warnings in headers of torch/csrc (#116486)
90df7c008a : Migrate state_dict bc test to OptimizerInfo, increase coverage (#116500)
19e93b85b9 : Fixes last_dim stride check for singleton dimensions (#117001)
8bcdde5058 : Support uint{16,32,64} deterministic empty fill and scalar Python binding handling (#116807)
43a23a704a : Support uint{16,32,64} copy (#116806)
2e983fcfd3 : Support unsigned int for randint, item, equality, fill, iinfo, tensor (#116805)
4a10e9eed4 : update build guide to use mkl-static. (#116946)
b4f1ab4505 : Docs: fix docstring errors in ddp_comm_hooks (#116866)
16d69290c6 : Use view name instead of view_copy name for functional inverses (#117056)
fdfdba7c13 : [BE] Use `__builtin_overflow_sub` when available (#117015)
a6325ad86c : Fix cuInit test on Windows (#117055)
907e80239d : Fix broken lint after #117052 (#117080)
d9fc438083 : [cpu][vec512][double] unsigned left shift for mask (#117021)
0b72ce1bd1 : Add at::sparse::full_coo_indices utility function. (#116352)
152bde6e27 : [MPS][BE] Move `kernel_index_offset` to HistogramKernel (#117037)
8918ce4087 : Add TORCH_LOGS_OUT to direct TORCH_LOGS output (#117005)
b6028acfa4 : Add _assert_scalar and teach Inductor to codegen it (#114148)
d2033a0639 : [quant][pt2e][xnnpack_quantizer] add support for linear_relu (#117052)
4f3d698cac : Impl. call_hasattr for BaseUserFunctionVariable (#116049)
8a6c43fbe5 : add predispatch_pass to hold pass functions to be run when config.is_predispatch is true (#116788)
39ae4d8cd7 : Revert "[inductor] Add support for tl.make_block_ptr (#116079)"
848cfe8d45 : [reland] unflatten_tensor on compute stream for DTensorExtension (#117020)
1dd4813328 : [BE][dynamo]: Add operator is and is not tests to dynamo tests (#116397)
5866284d4a : Make not passing use_reentrant back to warning instead of erroring and clarify docs (#116710)
4e666ba011 : Update torch.autograd.graph logging to not print out grad_output (#116523)
29ae4f22bf : Enables private_use_one lazy_init by PrivateUse1HooksInterface (#115067)
ab1ac43752 : [pytree] extend pytree operations with `is_leaf` prediction function (#116419)
902807a86d : enable pytree tests in fbcode (#116787)
b4eb97a072 : Revert "[C10D] Add GIL checker to NCCL watchdog monitor (#116798)"
b8374314cc : [AOTI] Update AOTI runner util (#116971)
d527df707a : [inductor] Add support for tl.make_block_ptr (#116079)
94363cee41 : [inductor] Indexing refactors (#116078)
84b04e42a1 : [ROCm] Enable aot_inductor tests (#116713)
ad22bd2fa1 : [export][refactor][6/n] Remove equality_constraints (#116979)
bdeaaad70c : [CPU] _vec_log_softmax_lastdim: fix CHUNK_SIZE to avoid unnecessarily large allocation (#116990)
75968e2f94 : Optimize operator (#117017)
0dd5deeced : Bring docstring to .pyi file (#114705)
cfd0728b24 : Feature: cudnn convolution out (#116759)
0ef1266bc6 : [BE] Fix CUDA build warnings (#117023)
b6962208b8 : [CI] Add initial ci test workflow for XPU based on IDC runners (#116554)
6784030df4 : [MPS] Add support for 64-bit index operations (#116942)
81b7a09d27 : [CI] Test that cuInit is not called during import (#117010)
db79ceb110 : [ROCm] Enabling additional UTs on ROCm (#115738)
f0bbc2fcf5 : [AOTInductor] Small refactor so both Meta internal and OSS can deal with misplaced args and kwargs for Extern Fallback kernels (#116779)
6e2f879d7f : [ROCm] hipify mapping for cudaDevAttrMaxSharedMemoryPerBlockOptin (#116984)
d78776e2e6 : Stop unconditionally applying hermetic mode (#116996)
6cf1fc66e3 : [cuda][easy] cosmetic and small syntax changes to layer_norm_kernel.cu (#116920)
104a23e4f5 : [cpu][vec512] improve int load/store/with mask (#116964)
4e54a70451 : [cpu][vec512] improve double load/store with mask (#116963)
428807f9bc : [cpu][vec512] improve fp32 load/store with mask (#116962)
a0bd7dfec1 : [cpu][vec512] improve bf16/fp16 load/store with mask for inductor (#116961)
bac0de160c : [ROCm] Add minimal inductor test to rocm-test workflow (#115425)
4c0d63180a : Support NNModules as dict keys (#116723)
92cf7ba36b : [vision hash update] update the pinned vision hash (#117002)
ff0a3f35a4 : [audio hash update] update the pinned audio hash (#116954)
14be2ee271 : Inductor qlinear int8_bf16 with bmm (#116604)
153b3a0996 : Inductor qlinear int8_fp32 with bmm (#116599)
6ca31ae1d3 : [CI] Add inductor workflow for rocm (#110544)
227579d6a0 : [Inductor] [Quant] Add remaining user check for qconv binary fusion (#115809)
33d90cfd16 : Allow for [-oo, oo] ranges for bools (#114362)
f26ed0a71d : [dynamo] Move graph breaks in for/while->skip after logging (#116981)
e728ebb66d : Small docstring fix (#116947)
28e2e12b2a : [quant][be] enable xnnpack_quantizer tests to run in internal CI (#116911)
534c73d478 : Fix NaN bug in torch.signal.windows.kaiser (#116470)
d006cae2a8 : Update documentation for unsigned int types (#116804)
fd0c071969 : Add tolist support for unsigned types (#116803)
f4e35e2c3d : Proposed mechanism for handling uint64_t in Scalar (#116595)
7073dc604e : Merge merging rules of CPU inductor and x86 CPU quantization (#116937)
a2d73e21d1 : follow up #115078, broken distributed tests (#116217)
ad507789d1 : [Reland] [11/N] Enable clang-tidy warnings on c10/util/*.h (#116751)
e780213340 : [xla hash update] update the pinned xla hash (#116958)
6173386fc4 : [MPS][BE] Remove unused nOffsets parameter (#116940)
f663935935 : [MPS] Fix boundary checks in generateKernelOffsets (#116915)
aa718065b2 : [MPS][BE] Refactor common code (#116904)
57491d2046 : Add bfloat16 + fp16 support to fractional_max_pool for CUDA and CPU (#116950)
7d61fa23df : Add float16 support to CUDA logaddexp2 (#116948)
2fe90e4d47 : [vision hash update] update the pinned vision hash (#116908)
6c32cd05a3 : [executorch hash update] update the pinned executorch hash (#116936)
376f036570 : Add bfloat16 CUDA support to multinomial (#116951)
8257b867d8 : Add bfloat16 CUDA support to binomial distribution (#116932)
4a37f57c69 : Add batched sparse CSR/CSC/BSR/BSC to sparse COO conversion support (#116206)
4b74bb6c34 : [Exception] [2/N] Remove THPUtils_assert (#116772)
3c7f358c91 : Update the expected accuracy value for demucs (#116944)
de005b14ab : [dynamo] fix more broken dict tests (#116943)
8ddac14a15 : Add unsigned integer dtypes to PyTorch (#116594)
8e273e23b5 : Refactor promoteType to no longer use shifting strategy (#116693)
c5e6485d14 : Add AT_DISPATCH_V2 (#116698)
9557b63c85 : [MPS][BE] Do not crash if Metal function can not be found (#116938)
20c2ec9a15 : [CPU] Add flash attention mask version (#115913)
b847290ddd : Back out "[2d] unflatten_tensor on compute stream for DTensorExtension (#116559)" (#116939)
4b5b8f8a75 : Add bfloat16 CUDA support to smoothl1loss (#116933)
a7902571be : Add bfloat16 CUDA support to gamma unary functions (#116929)
8e1119f7b2 : Fix typo in CUDA Macro (#116930)
83e8a0721d : Reland #111196 (take 4) "Support tensors as Dict keys" (#116934)
95041829c8 : Add bfloat16 CUDA support to RNN (#116927)
a5b86847ef : Fix compiler warnings in cuda code (#116921)
65da4e1ba2 : [CI] Use jemalloc for CUDA builds (#116900)
c05dd2aaf0 : [EZ][MPS] Use dispatch with rethrow for indexing (#116903)
9519c8afd4 : [export] Remove hacks for passing pinned version test. (#116871)
2dca3e99eb : Revert "Support tensors as Dict keys Re-PR of #111196 (#116785)"
88197f2202 : Rename experimental API (#116895)
830ace33bc : [C10D] Add GIL checker to NCCL watchdog monitor (#116798)
f24bba1624 : [executorch hash update] update the pinned executorch hash (#116800)
78c3098470 : cmake: Include `CheckCXXCompilerFlag` where it is used (#113028)
1badad9ce9 : Support tensors as Dict keys Re-PR of #111196 (#116785)
ff0f79d3c7 : [MPS] Mark `torch.[all|any]` as working with complex on MacOS14 (#116907)
0b0c76bace : Support squeeze.dim for jagged NT (#116891)
8894a97707 : [Dynamo] Fix source for autograd.function default value (#116894)
5323b2daa5 : [docs] add mode="reduce-overhead" into torch.compile to enable cuda g… (#116529)
2753960177 : markDynamoStrictTest most of test/lazy/.* (#116893)
af2ded23eb : [export] Exempt autograd ops for predispatch export (#116527)
9431798521 : [export] Error grad mode op in export API (#116339)
8fd4efacb4 : markDynamoStrictTest most test/functorch/* (#116892)
e5f2ac18da : [codemod] markDynamoStrictTest batch 12 (#116881)
7562a00946 : Make TORCH_LOGS="dist_ddp" include DDPOptimizer logs (#116794)
5377b994da : [aot_inductor] Retrieve original FQNs for weights (#116157)
521dbbfaff : Remove cpp/tensorexpr benchmarks (#116868)
99ef47098d : Use smaller shapes in lstm test to fix the CI timeout (#116453)
499ca71e49 : [codemod] markDynamoStrictTest batch 11 (#116880)
ef7abdbd1a : [C10] Mark Complex::imag as C10_HOST_DEVICE (#116877)
c72d9f5de3 : [no ci] Add pytorch-dev-infra as owners of .ci folder (#116901)
0f0020d76f : [GHF] Add support for new style stacks (#116873)
71d8fe690f : Replace recursive stable_topological_sort() with iterative. (#116761)
476e9d5f77 : [codemod] markDynamoStrictTest batch 10 (#116879)
764a18016d : VSX: Fix vectorized abs function for complex tensors (#116859)
63ee35c4e0 : BugFix: Fix F632 bug in dynamo (if statement is always false) (#116867)
d455c33cca : [ez][td] Pipe TD logs to log file (#116796)
ebedce24ab : [FSDP] enable autograd in forward prefetching (#116792)
7f124167b5 : [BE][Easy]: Update libfmt submodule to 10.2.1 (#116864)
4b6961a629 : [no ci] Fix spelling (#116872)
0a0209e8a1 : [ROCm] Use MI210 CI runners for all trunk commits (#116797)
9ac0e6971a : Revert "[1/4] Intel GPU Runtime Upstreaming for Device (#116019)"
7956ca16e6 : Enable reverse view_funcs by default for python subclasses (#116512)
3c21264c9b : Introduce reverse view_funcs (#115894)
053b15c596 : [codemod] markDynamoStrictTest batch 9 (#116836)
ee07260337 : [codemod] markDynamoStrictTest batch 8 (#116834)
c0da5a4c68 : [codemod] markDynamoStrictTest batch 7 (#116829)
6747d1383f : [codemod] markDynamoStrictTest batch 6 (#116827)
9543caadc8 : [codemod] markDynamoStrictTest batch 5 (#116802)
0159e3abbd : [dynamo] add a handler for itertools_chain_from_iterable and test (#116849)
0249c4a785 : Add config toggle suggestions for data-dependent/dynamic output shape (#114337)
53f8d17d1e : Specialize SymNodeVariable when used as module index (#114377)
0e8698c3b6 : Prevent unbacked symbol reallocation by forcing unification for unbacked symbol def sites (#114368)
f692fc9e7f : fix typo (#116828)
5f5405f809 : I have seen this deprecation and I am curious if this is the fix (#116714)
79ba39710e : [AOTI] Forward fix a Windows build failure (#116790)
2ccc7af028 : Revert "[CPU] Add flash attention mask version (#115913)"
bbfd81f513 : [codemod] markDynamoStrictTest batch (#116791)
6d9b837c27 : Graphbreak when creating a map with unsupported keys (#116460)
7c8f38700a : [dynamo] Fix np.issubdtype (#116459)
76a3fbb709 : [CPU] Add flash attention mask version (#115913)
6413511713 : [export][refactor][4/n] Make equality_constraints optional (#116233)
db69956feb : [Dynamo] Catch ImportError when tracing_rules load objects (#116783)
b0393ebe9b : [MPS] Make test_mps.py passable on Sonoma (#116764)
d0cf2182ea : Fix TransformerEncoderLayer for bias=False (#116760)
e3ca7346ce : Re-add initial Flash Attention support on ROCM (#115981)
8195a0aaa7 : Move array_of helper to c10/util (#116749)
5ac57a06eb : [export] Refactor ExportPassBase. (#116778)
e7d741b0fd : [C10D] Dump cpp stacktraces on heartbeat monitor timeout (#116717)
d23972df00 : Update libfmt submodule to 10.2.0 (#116363)
70f3a530d7 : [AOTI] Add pybind for AOTIModelContainerRunnerCpu and AOTIModelContainerRunnerCuda (#116269)
56d7a47806 : [BE] Use precompiled headers to speedup clang-tidy (#116780)
39f8853313 : [inductor] Use max sm clock when calculating device tflops (#116754)
6793b99107 : [BugFix] Fix SegFault when torch.all/any dispatched to mps or other backends (#116457)
b4cebe2c34 : [1/4] Intel GPU Runtime Upstreaming for Device (#116019)
43fb1b671c : [export] Improve verifier to not specialize on dialect. (#116705)
f1a393c029 : [codemod] markDynamoStrictTest batch (#116745)
311548b79c : [codemod] markDynamoStrictTest test_sort_and_select (#116744)
30f0a05207 : [codemod] markDynamoStrictTest test_stateless (#116743)
46b44fb246 : [codemod] markDynamoStrictTest test_subclass (#116742)
c2174974ae : [codemod] markDynamoStrictTest test_tensor_creation_ops (#116740)
7c5704fc00 : [codemod] markDynamoStrictTest test_tensorboard (#116739)
caa33e1eb1 : [codemod] markDynamoStrictTest test_testing (#116736)
882d1f4ea6 : [codemod] markDynamoStrictTest test_transformers (#116735)
eb958d7552 : Fix bug in unflatten pytree (#116750)
75dae4f691 : Revert "[dynamo] Fix np.issubdtype (#116459)"
3a0f6897c5 : Revert "Graphbreak when creating a map with unsupported keys (#116460)"
c2a020a218 : Graphbreak when creating a map with unsupported keys (#116460)
81f98f1082 : Experimental non-strict mode (#114658)
91bbcf8c71 : [1/N] replace THPUtils_assert with TORCH_CHECK (#116675)
faea6f2c7a : [C10D] Make heartbeat_ atomic (#116702)
2bdc2a68cb : [ez][td] Fix for emit metrics can't find JOB_NAME (#116748)
670e7992fd : [Easy] Document AGGRESSIVE_RECOMPUTATION flag in min-cut partitioner (#114007)
a8a9695047 : Move promoteTypes to cpp file (#116685)
f071687ef1 : Clean up macOS x86 CI build and test jobs (#116725)
9b88354b80 : [executorch hash update] update the pinned executorch hash (#116668)
b5c33ccdb3 : [dynamo] Fix np.issubdtype (#116459)
e2359f72c8 : [BE]: Update ruff to 0.1.11 (#116704)
e70dfe07f6 : [audio hash update] update the pinned audio hash (#116747)
c14a0b6c84 : [codemod] markDynamoStrictTest batch (#116734)
bfb9df3684 : [codemod] markDynamoStrictTest batch (#116733)
a308a25fb7 : [codemod] markDynamoStrictTest batch (#116732)
9255f55767 : [codemod] markDynamoStrictTest batch (#116731)
1f7badd856 : [codemod] markDynamoStrictTest batch (#116730)
d1d6b90a1b : [codemod] markDynamoStrictTest torch_np/numpy_tests/core/test_scalarmath (#116729)
3ba35548c3 : [codemod] markDynamoStrictTest torch_np/numpy_tests/core/test_shape_base (#116728)
3acb7972b0 : [BE] Test CrossEntropyLoss for `torch.half` (#116681)
6fece41e9a : [codemod][lowrisk] Remove extra semi colon from caffe2/c10/util/Float8_e5m2.h (#115761)
5395331644 : Avoid GIL during exit (#116709)
4926146537 : [Inductor] Fix Conv Binary Inplace Fusion issue (#115153)
ce2df3f690 : [HigherOrderOp] set set_subgraph_inputs to flatten_manual for map (#115853)
a2f3770b24 : [BE] Remove `arch -arch arm64` (#116724)
4e330882da : [inductor] Add ABI shim function for torch.scatter_reduce (#116700)
a75b587803 : [codemod] markDynamoStrictTest torch_np/numpy_tests/fft/test_helper (#116654)
f3e2661555 : [codemod] markDynamoStrictTest torch_np/numpy_tests/fft/test_pocketfft (#116653)
bf4c1a3d66 : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_arraypad (#116652)
f4168c0e2e : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_arraysetops (#116651)
dab1599d81 : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_function_base (#116650)
8a76c07b98 : [threaded pg] add devices to avoid seeing warnings (#116678)
b10cb168a7 : [tp] disable some assertion temporarily for torch.compile (#116573)
7309f6fdf0 : Remove hardcoding arch to arm64 (#116680)
f6be25bae6 : [inductor] Add shape checks to ExpandView (#113839)
1c69d0bdb5 : Revert "[11/N] Enable clang-tidy warnings on c10/util/*.h (#116353)"
0aa50909f3 : Revert "[12/N] Apply clang-tidy and fix warnings in headers of torch/csrc (#116486)"
791db94c62 : Revert "[13/N] Enable clang-tidy on headers of torch/csrc (#116560)"
71523c2289 : Add 116583 to `.git-blame-ignore-revs` (#116676)
9693b3740b : [easy] [c10d] Add documentation for the `device_id` parameter for `init_process_group` (#116222)
f543093e06 : [ONNX] Fix output mismatch issue of repeat_interleave when dim is None (#116689)
68105da229 : Revert "[Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358)"
68b77311ad : Fix bug in non-strict input processor (#116674)
1429c204f8 : Increase hub download chunk size (#116536)
c919935cb7 : [export] Update schema versioning format. (#116462)
2ae55e99fe : [release] Add Launch Execution XFN meeting process to release runbook (#116701)
d2fc00d2cc : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_histograms (#116649)
2d1011d84f : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_index_tricks (#116648)
c47ab693ff : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_shape_base_ (#116647)
6a300bd1c6 : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_twodim_base (#116646)
34a8c64c92 : [codemod] markDynamoStrictTest torch_np/numpy_tests/lib/test_type_check (#116645)
fe287af812 : [codemod] markDynamoStrictTest torch_np/numpy_tests/linalg/test_linalg (#116644)
28a8e4bdb6 : [codemod] markDynamoStrictTest torch_np/test_basic (#116643)
146426a0df : [codemod] markDynamoStrictTest torch_np/test_binary_ufuncs (#116642)
efe3b7f457 : [codemod] markDynamoStrictTest torch_np/test_dtype (#116641)
d760014b9f : [codemod] markDynamoStrictTest torch_np/test_function_base (#116640)
efee9e689e : [codemod] markDynamoStrictTest torch_np/test_ndarray_methods (#116639)
608091e4d1 : [codemod] markDynamoStrictTest torch_np/numpy_tests/core/test_multiarray (#116673)
70eb53505b : [export] Update range constraints to runtime_var_to_range (#115427)
f081c45a34 : Add out_dtype support for sparse semi-structured CUTLASS back-end (#116519)
ba06951c66 : [BE] [cuDNN] Always build assuming cuDNN >= 8.1 (#95722)
3407541b0c : add cpu inductor merge rule (#116679)
b57d473091 : [codemod] markDynamoStrictTest torch_np/test_nep50_examples (#116638)
49de03f0fd : adapt to other acceleration devices (#116682)
c1b88723f8 : Fix buck build after recent clang-tidy updates (#116669)
2a87ab4508 : Refactor some tests by using TEST_CUDA & TEST_MULTIGPU instead (#116083)
d9c0e37bab : [2d] unflatten_tensor on compute stream for DTensorExtension (#116559)
29674b8e1d : [dtensor] fix dtensor _to_copy op for mixed precision (#116426)
b0749bce6c : [export] Allow None as the meta value for tensor output. (#116664)
3fe437b24b : [BE]: Update flake8 to v6.1.0 and fix lints (#116591)
09ee96b69d : [MPS] Fix CrossEntropyLoss for float16 (#116597)
75359934bd : [C10D] Improve Heartbeat Monitor exit logs (#116268) (#116661)
1ae39a372e : Inductor cpp wrapper: fix cumsum codegen (#116171)
ef98987017 : Fix user input mutations for run_decompositions (#116382)
c5bd88b56a : [export] Improve serialization of union types. (#116511)
ca4df16fdd : [c10d] Make DebugInfoWriter Singleton across all PG objects (#116489)
41f265b06a : [quant][pt2e] Preserve numeric_debug_handle in quantization flows (#116477)
f73b1b9388 : [EZ] Update lxml dependency to 5.0.0 (#116657)
6e9ca2f220 : Enable eye on CPU for bfloat16 dtype (#116616)
5005f36c12 : Clean up files under fb/vulkan/... (#116665)
3ac0aaf478 : [codemod] markDynamoStrictTest torch_np/test_random (#116637)
884e449753 : [codemod] markDynamoStrictTest torch_np/test_reductions (#116636)
8ec606d4c5 : [codemod] markDynamoStrictTest torch_np/test_scalars_0D_arrays (#116635)
9b27fcf65a : [codemod] markDynamoStrictTest torch_np/test_ufuncs_basic (#116634)
0ce32ce409 : [codemod] markDynamoStrictTest torch_np/test_unary_ufuncs (#116632)
a1191ce4bf : optimize (u)int8 vectorized operator* (#116235)
0f6f582c0d : Add config to disable TransformerEncoder/MHA fastpath (#112212)
9dc68d1aa9 : clangformat: fused adam (#116583)
3ff4572fe7 : delete sharded tensor from fsdp/tp tests (#116244)
dfccaac31b : [2d] Ensure gradient clear out pending AsyncCollectiveTensor in FSDP Extension (#116122)
a2061ceefe : ci: Output runner OS / HW for macOS (#116627)
640d46f823 : [inductor] Control the cpp_wrapper mode with an env variable (#116615)
295bdaafb7 : [codemod] markDynamoStrictTest test_module_init (#116625)
074dfc2648 : [codemod] markDynamoStrictTest test_linalg (#116624)
5d8e066f6b : [codemod] markDynamoStrictTest test_indexing (#116622)
fc7546e9db : [codemod] markDynamoStrictTest test_functional_optim (#116621)
88d1638139 : [codemod] markDynamoStrictTest test_autograd_fallback (#116619)
39339df8d7 : [codemod] markDynamoStrictTest test_autocast (#116618)
0bc21c6a6b : [C10d] Fix Log Prefix in NCCLPG so that each instance gets its own prefix (#116520)
6d8d3c1334 : add a DTensor test for weight tying (#116475)
fb5a9f2f5c : Fix implicit conversion to double (#116614)
77d979f748 : Autograd attaches logging hooks only in debug level (#116522)
b18d8d4595 : Add a wrapper to transform a NumPy function into a PyTorch function (#114610)
be455921f5 : Fix missing words in README.md (#116606)
95a86ed9ca : [Quant] Add int8 linear op gelu for quantization PT2E with Inductor. input is an int8 CPU tensor; weight is an int8 MkldnnCPU tensor (#114852)
a81edf9f23 : [inductor] Fix cpp_wrapper codegen for ir.ComplexView (#116481)
b0629cdd67 : [13/N] Enable clang-tidy on headers of torch/csrc (#116560)
1ed8efa9b3 : [MPS] Speedup addmm (#116548)
abd80cbb15 : [Inductor] Decompose bmm if batch2's last dim size is 1 and coordinate_descent_tuning is enabled (#116582)
4ffe1fb7f4 : [BE]: Improve typing to respect ruff PYI058 (#116588)
cf618452d3 : [BE]: Fix F821 error in torch/fx/experimental (#116587)
035e55822a : vulkan: fix gcc build errors (#115976)
4451ca068c : [xla hash update] update the pinned xla hash (#116388)
bd10fea79a : [BE]: Enable F821 and fix bugs (#116579)
6c02520466 : Remove unneeded comment and link for `BuildExtension` (#115496)
db752f2f1a : Pin the version of expecttest to 0.1.6 in requirements.txt (#116238)
60844ccc4f : [MPS][BE] Refactor common code (#116566)
aec4377257 : Optimize batch_norm_cpu_collect_stats_channels_last_impl when N <= num_threads (#113619)
fc5fda14bc : Try creating a bf16 tensor as a last resort of `is_bf16_supported()`. (#115924)
127812efee : [BE]: Further improve pathlib checks in torch serialization (#116577)
4bfaa6bc25 : [MPS] Fix addmm (#116547)
aef06c316b : [BE]: Add better handling of pathlib.Path with os calls (#116564)
86cd6655a1 : [BE]: Use exist_ok arg for os.makedirs calls (#116561)
4f9858a902 : [BE]: Use os.fspath and os.PathLike in torch serialization (#116562)
5aa258eb09 : [12/N] Apply clang-tidy and fix warnings in headers of torch/csrc (#116486)
37aae5932c : [11/N] Enable clang-tidy warnings on c10/util/*.h (#116353)
97891b184c : [Dynamo] Trace autograd.function in dynamo when inputs require grad (#116358)
c5d9173d04 : [BE]: Enable readability-redundant-function-ptr-dereference check (#116538)
5e58be678c : Make collect env BC compatible (#116532)
bd7d26bb96 : [CI] Fix docker builds (#116549)
961fbbe967 : [CI] Add initial ci build test for XPU (#116100)
de4d48df34 : [c10d] Fix timeout dump path write path overlap when there are multiple PGs (#116218)
db2b4078b9 : Add missing cstdint includes (#116458)
71ec3edbf7 : Enhance Opinfo to support privateuse1 (#116417)
e01e00fba8 : fix code spell (#116530)
afadfa0175 : [c10d] Add stream info during nccl comm abort call (#116076)
e8a9d088c6 : [DevX] Add tool and doc on partial debug builds (#116521)
df85a920cf : [Inductor][Observability] Add logging for split cat pass (#116442)
8deaa13417 : [EZ][Distributed] Add 'c10d' to distributed TORCH_LOG comment (#116526)
ef94499ad7 : [executorch hash update] update the pinned executorch hash (#116474)
240121587a : [vision hash update] update the pinned vision hash (#116524)
cab79ceb51 : [Inductor Intel GPU backend Upstream] Step 2: Register and add Intel GPU Inductor backend (#116330)
8173d98c57 : [quant][be] Skip conv-bn folding when there are no batchnorm ops (#116440)
33917150d3 : Cleanup scope ref properly (#116169)
4371939751 : Removing HTA documentation (#116513)
8220d5c66d : Support `pathlib.Path` as input to `torch.load` when `mmap=True` (#116104)
02e2158e75 : Fix for out of bounds read in mobile interpreter INTERFACE_CALL opcode handler (#110301)
7e12e722af : [Dynamo][12/N] Remove allowed_functions.py (#116401)
439f2a6c1f : [RelEng] Missing signal for release branches (#116516)
4af1c27fa8 : Migrate repr, deterministic state_dict test to OptimizerInfo (#116496)
f3c4395358 : [BE] Add helper in common_optimizers to get all optim inputs (#116471)
577529daec : [Dynamo] Implement a simple mutation tracker for user defined triton kernels (#116466)
f10c3f4184 : Fix module pre bw hooks when input doesn't req grad but gradients are changed by the user (#116454)
fb91acd33b : [release] Add specific section about building and testing final rc (#116476)
b5e83b8c50 : Fix edge case for size 1 channels dim in AdaptiveMaxPool (#116482)
dfc898ede4 : Don't decompose functional ops in predispatch functionalization (#116383)
80c07df659 : Update doc for the constraints of FractionalMaxPool2d (#116261)
d791074c81 : Clean up PyTorch op BC check list (#116468)
6243dbb5c0 : [DTensor][BE] unify PlacementStrategy print function (#116428)
87fea086aa : [DTensor] remove experimental DTensor op backward layer norm (#115689)
575f17ebd4 : [DTensor] add layer norm backward support (#115683)
b3f7fdbf0a : Add decomp for pad_sequence (#116285)
d59350cc1c : [Dynamo] Consolidate common constant types (#116366)
6375eb15ef : [Dynamo][11/N] allow_in_graph/disallow_in_graph decorator refactor (#116365)
53e32d12c4 : [c10] Use nested namespace in c10/cuda (#116464)
93b86bf531 : [GHF] Implement stacked revert (#116447)
5fcc2519f5 : [GHF] Refactors (#116446)
85628c0e57 : Revert "[export] Update range constraints to runtime_var_to_range (#115427)"
a17069684c : Improve nn.modules.activation and batchnorm docs (#113531)
3149e4a667 : [dynamo] fix `sum()` function with `start` argument (#116389)
83502feabe : [BE]: Enable readability-simplify-subscript-expr clang-tidy check (#116356)
8d84b5041c : [pt-vulkan] Address CLANGTIDY warnings in `api`, `graph`, and `impl` folders (#116431)
bbe3261dd3 : [BE]: Use `iterable.chain.from_iterable` where possible (#116376)
e0e90bc0d4 : Revert "[dynamo] fix `sum()` function with `start` argument (#116389)"
5c9464fb51 : add CALL_FINALLY opcode (#116159)
f657b2b1f8 : [Dynamo][10/N] Remove TorchVariable and is_allowed (#116312)
87da0e1d23 : [GHF] Fix gh_get_labels for small repos (#116444)
e14026bc2a : [CUDNN] RNNv6 API deprecation support (#115719)
0aa5b751bb : [executorch hash update] update the pinned executorch hash (#116438)
924f1b841a : [optim] Allow torch.float64 scalars for forloop + foreach implementations (#115841)
1d13086492 : [BE] force DTensorTestBase.build_device_mesh to use world_size rather than NUM_DEVICES constant (#116439)
6b91e6907e : Add setUserEnabledNNPACK config (#116152)
9c3ae37fc4 : [Distributed] Add finer granularity tag for distributed submodule (#116434)
2c89e5a5e5 : [inductor] Sort unbacked symbols before iterating on them (#116421)
362bc6d7cb : Fixed a segfault issue when passing an empty kernel to quantized_max_… (#116342)
d0395239c1 : [DTensor] allow OpStrategy to represent ops whose return type is a tuple (#115682)
44b98c09ca : [BE] migrate all assertRaises tests to OptimizerInfo test_errors (#116315)
8abeacda6f : Refactor user defined triton kernel tests (#116425)
3b709d7c1e : Revert "[Dynamo][10/N] Remove TorchVariable and is_allowed (#116312)"
13505898c9 : Revert "[Dynamo][11/N] allow_in_graph/disallow_in_graph decorator refactor (#116365)"
0aa185f394 : [BE] Make `torch.cuda.has_magma` a build time check (#116299)
0edc348788 : Revert "[Dynamo] Consolidate common constant types (#116366)"
e86636266f : [Quantized] Fixed `equal_quantized_cpu` for QUInt4 (#116307)
e5bcfe205e : [inductor] fix cpp_wrapper inputs mismatch (#116197)
7571511af9 : [inductor] More tweaks to fusion logs (#115084)
6051f9f404 : multiply int8/uint8 for AVX512 (#116346)
51eef859eb : min, max, clamp* for AVX2 hosts (#116236)
427ecc61c0 : [Easy][BE]: Fix none type comparison (#116399)
0978482afa : Revert "Implement aten::upsample_linear1d on mps (#115031)"
f4230ec9fd : [inductor] Remove the float16 restriction for cpu cpp_wrapper (#116205)
c6969cb8a9 : Implement aten::upsample_linear1d on mps (#115031)
4c6e842496 : [inductor][cpp] load as scalar for the index invariant in the vector range (#116387)
3c9076f070 : [dynamo] fix `sum()` function with `start` argument (#116389)
bb2a1e9941 : Enable readability-redundant-smartptr-get in clang-tidy (#116381)
ffe6f9ac91 : [inductor cpp] support vectorization for index_expr that depends on tiling itervar or with indirect indexing (#114545)
a254fbfd61 : Initialize variable for all codepaths in dynamo benchmarks (#116260)
f6dfbffb3b : [c10d] Add hashing as a debug feature for before and after NCCL collective call (#113238)
039fbeb016 : [dynamo] fix `functools.reduce()` function with `None` as `initial` (#116398)
c7e9c15102 : Ignore SIGINT in codecache workers (#116380)
951da38800 : [Dynamo][11/N] allow_in_graph/disallow_in_graph decorator refactor (#116365)
22742d93a5 : Expose functional IR to capture_pre_autograd (#115210)
76b1d44d57 : pre_dispatch aot_export (#115188)
36dccc2aba : [Dynamo] Consolidate common constant types (#116366)
199e07f108 : [pytree][BE] update treespec `num_children` access (#116370)
81cebca3d2 : [Inductor] [Quant] Fix QConv Binary Inplace Layout Issue (#115613)
dfb6815170 : [Reland] [PT2] [Quant] Change the QConv2d Binary post op name from add to sum (#116172)
7cdbdc789d : [executorch hash update] update the pinned executorch hash (#116362)
f1cdb39da3 : [dynamo] Fix handling of one_hot (#116338)
dbbe8485b4 : Fake Tensor refactors part 2 (#116345)
6c419a0efd : Fixed a segfault when calling topk on a quantized scalar tensor. (#116337)
3a4fe835cc : Fixed segfault when trying to permute empty tensor (#116335)
015bd0e0a1 : [Dynamo][10/N] Remove TorchVariable and is_allowed (#116312)
4912922297 : Fake Tensor refactors part 1 (#116344)
08b404e3a2 : [Dynamo] Remove ExecutionRecorder.MOD_EXCLUDES during replay & record (#116347)
7663ffb673 : [10/N] Fixes clang-tidy warnings in c10/util/*.h (#116326)
84b2a32359 : [executorch hash update] update the pinned executorch hash (#115599)
60f4114769 : Support nn_module_stack in non_strict mode (#116309)
0931170a13 : [vision hash update] update the pinned vision hash (#116343)
4f4b931aba : [inductor] Do variance calculation in opmath type (#115181)
65c5eed01d : [sigmoid] Remove workaround for constant output. (#116288)
3f9e9ecfe4 : Fix torch.detach doc-string (#115850)
b940fa2fce : Delete unused global variable (#116228)
f08c4da86d : Add a decomposition for take() (#114813)
341c4227a8 : Update F32 sparse semi-structured support for CUTLASS back-end (#116017)
0b9146bf5d : [BE][Easy]: Update ruff to 0.1.9 (#116290)
0e39f4db92 : Disables denormal floating numbers on ARM CPU (#115184)
9a0c217a0a : [9/N] Fixes clang-tidy warnings in c10/util/*.h (#116185)
c7514ccc8c : Delete unused API again (#116226)
7a6cb9fdfb : [Inductor Intel GPU backend Upstream] Step 1/3: Generalize device-bias code in code generation. (#116020)
7d0ad6e870 : Make native c10d_functional ops work with AOTInductor (#113735)
718b576e2c : Port all_to_all_single to native c10d_functional (#113438)
cb489e769c : Delete unused API (#116225)
b6473065c6 : [AMD] Fix build for intra_node_comm (#116291)
b342286646 : adds async save, makes checkpointer private (#116293)
ad3c0b2c00 : [torch.export] fixes for unlifting lifted tensor constants (#116266)
764b4cd44e : Remove outdated string function wrapper for Android and Caffe2 (#116186)
b47aa69685 : [c10d] Fix the hang issue in store.check(TIMEOUT_DUMP) (#116297)
94f3781145 : Fixed bug with unpickling integers > 64-bits (#115264)
9736deae76 : [vision hash update] update the pinned vision hash (#109957)
db25462ffd : [quant][pt2e] Relax constraints on dtype and qscheme to allow for customizations (#116287)
fdf8718225 : Update reviewes for PyTorch Distributed (#116296)
4b97ed2ed8 : [SparseCompressed] support csc layout for add sparse/dense. (#115433)
910baa3a03 : [SparseCompressed] Support `add(sparse_compressed, dense)` (#115432)
d2d129de65 : [sigmoid] replace unflatten with upstream version (#115468)
127cae7ec8 : [C10D] Increase TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC (#116267)
d6de2df6b6 : Improve the error message when a PR lacks the necessary approvals (#116161)
99f7e721fe : [inductor] make inductor work with new triton compile interface (#115878)
247f9c3de4 : Preserve strides of custom Triton kernel args (#116219)
a27ed4d364 : [dynamo / DDP] Add optimize_ddp_lazy_compile config to control lazy compile for DDPOptimizer (False by default) (#116292)
1e834e0e50 : Fix bug in mem_eff kernel with attention mask and MQA (#116234)
52f0457d7d : Support view returns for functional inverses on narrowing views (#115893)
b5c866db13 : [export] Add FlatArgsAdapter to unflatten (#115467)
01ec3d1113 : [export] upstream some final fixes to OSS unflatten (#115795)
bc3ef1684e : [export] refactor unflatten.py to be a top-level API (#115466)
497777e302 : Revert "Mark set_ as an inplace view op (#115769)"
0e63837ec7 : [dynamo] Skip some tests using scipy.kstest (#116263)
199b04fdbd : Back out "Implement pass-through `state_dict` and `load_state_dict` for dynamo OptimizedModule (#113423)" (#116243)
ed03834693 : Revert "Expose functional IR to capture_pre_autograd (#115210)"
a357a0f315 : Back out "[Kineto] Initialize libkineto profilers during torch init process during pybind set-up (#112623)" (#116201)
ff4aac109a : [BE][Easy]: Enable clang-tidy check readability-misplaced-array-index (#116210)
cc2c2c6ca9 : [Easy][BE]: Enable clang-tidy check for duplicate includes (#116193)
2dce364634 : [AOTI][refactor] Remove model_container_runner_cuda.cpp (#116113)
f71d302c63 : Revert "[Easy][BE]: Enable clang-tidy check for duplicate includes (#116193)"
348cb2f8f9 : Revert "[BE][Easy]: Enable clang-tidy check readability-misplaced-array-index (#116210)"
ec6c4fed3f : Revert "Support nn_module_stack in torch.export(strict=False) (#115454)"
0567f71ac6 : Revert " pre_dispatch aot_export (#115188)"
f170d6665c : [DCP] Add a profiler function for benchmarking save and load (#116007)
a548ff40de : [DCP][BE] Remove unused function (#116006)
4b59b4dffb : Expose functional IR to capture_pre_autograd (#115210)
8fd1963ae2 : [dynamo][collective_op] Use the value of the wrappered attribute async_op in dynamo when checking supported or not (#115921)
74e8cfc9a0 : Forward fix torch package bug - dont depend on dynam in fsdp directly (#116229)
db35ccf463 : Revert "[innductor] make inductor work with new triton compile interface (#115878)"
65d3dde665 : Fix allowed dtypes for mem_eff attention (#116026)
c1d960aadd : [Quant] [Inductor] add input shape check for quantized conv binary lowering (#115247)
be9de33240 : [Dynamo][9/N] Make SkipFilesVariable wrap functions only (#115963)
a734085a63 : [ONNX][Dort] Fix bug preventing running with OrtValueVector (#116124)
259b0af367 : [ONNX] Add copy before export for perf bench to avoid mutating base model (#115945)
feafbcf437 : [AOTI][refactor] Refactor model runner API (#116047)
9502fa8d84 : add a transformer suite in TP/SP tests (#115530)
7ca6e0d38f : [EZ] Add `CUSPARSELT` to build variables (#116213)
74119a3482 : [EZ] Fix typo in `USE_GLOO` var (#116212)
f206e31e2f : Swap slots if slots match in swap_tensor (#116128)
8aae46f843 : [ROCm] fix nightly 5.6 build (#116029)
be90b757d9 : Enable compiled Adam in the benchmarks (#116093)
bbded928b3 : [innductor] make inductor work with new triton compile interface (#115878)
5d5ef016a6 : [BE][Easy]: Enable clang-tidy check readability-misplaced-array-index (#116210)
897600eb35 : [inductor] Some tests have both CPU and CUDA variants running with CPU tensors (#116131)
7c7208a9e7 : Forward fix to remove xfails for vmap NT tests in Dynamo (#116216)
edf1ea622d : Move step is noop tests (#115299)
8f3a0594e9 : Move tests depending on listed configs to OptimizerInfo (#115025)
05d60931b3 : Migrate test_peak_mem_multi_tensor_optimizers to OptimizerInfo (#115023)
4fb92b591d : [BE] remove redundant _test_derived_optimizers by migrating more to OptimizerInfo (#114802)
0fae3dfef7 : Add convenient things for Dynamo testing (#116173)
19207b9183 : Allow more backend worker threads with each using a separate cuda stream (#116190)
0dd64174bd : Do H2D/D2H of input/result on separate threads/cuda.Streams (#116189)
3793ad6a7e : Fix bugs in metrics calculation in inference benchmark and rerun baseline (#116188)
75a4b10d56 : [easy] Add option for profiling backend in inference benchmark (#116187)
31f21e033e : Run inference in an Executor (#115286)
b72127cd4b : [inductor] Support sym exprs in lowering constant promotion (#116196)
a267d67350 : pre_dispatch aot_export (#115188)
4afe2687d5 : Reland "Serve multistream graph captures from correct pool (#114647)" (#116199)
199bacaf77 : [Dynamo] Fix broken trunk and re-enable test_torch_name_rule_map_updated (#116146)
6e2c9be501 : [Easy][BE]: Enable RUF008 and RUF016 checks (#116195)
bc0d8649a4 : Fix missing dependency in torch.utils.tensorboard (#115598)
1d5a9a1c1a : [Easy][BE]: remove itertools.accumulate Python 2 shim and apply UFMT (#116192)
602abf6b55 : [ROCm] more 6.0 changes (#115946)
ea3a5f8ddc : Add chunk for jagged layout NT (#115842)
29b198dcf8 : Add markDynamoStrictTest to NT tests (#116111)
f2c1fb3ee4 : Fix crash in SymInt unary minus (#116160)
f8ad664cf2 : [export] Update range constraints to runtime_var_to_range (#115427)
1be6a070bc : Add support for torch.cond in vmap (#114523)
06ae9b79ed : [mtia] add module exporter to net minimizer (#115687)
6de28e92d2 : [BE]: Apply FURB118 (prev): replaces unnecessary lambdas with operator. (#116027)
2d2016fdf8 : WIP Add compatibility with channels_last_3d for conv3d (#114790)
8bff59e41d : [ROCm] add hipblaslt support (#114329)
0b0b9b3275 : [c10d][libuv] add partial read test for libuv backend and fix an error which only happens when partially reading a buffer (#116141)
ee5d981249 : [BE]: Enable RUFF PERF402 and apply fixes (#115505)
8837df1d71 : [c10d] Expose check method to Python for store via pybind (#116144)
71cb13869b : [Easy][BE]: Enable clang-tidy check for duplicate includes (#116193)
fe15645619 : Revert "Serve multistream graph captures from correct pool (#114647)"
ea7f2de6f3 : [docker] Fix typo in docker-release workflow (#116191)
16e539e0e6 : Fix index range check (#116062)
fabf9433e7 : [AOTI][refactor] Organize model runner files (#116022)
4d6a1ad400 : Activation checkpoint and checkpoint_sequential errors if use_reentrant not passed explicitly (#115868)
cfb3cd11c1 : Add basic autograd TORCH_LOGS support (#115438)
cfbf647adb : Add aten/src/ATen/native/quantized/cpu/ path to CPU quantization merge rule (#116145)
8eb7f6276b : Ensure wrapping subclasses with `as_subclass` is supported (#116091)
c215e59bf2 : Revert "[inductor] Avoid bool being upcast to int (#109913)"
968b94bef2 : [8/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#116082)
d72d99e591 : Fix sparse compressed tensor invariants checks when nnz==0 (#115826)
bdfabe5e7d : Revert "[Dynamo][9/N] Make SkipFilesVariable wrap functions only (#115963)"
af8a50e656 : Revert "Fix allowed dtypes for mem_eff attention (#116026)"
6e1ba79b7f : [re-land] Introduce 3 low-latency, intra-node allreduce algorithms for small messages to PyTorch (#114001) (#116125)
9df4ee8d38 : Fix ColwiseParallel typo (#116151)
545d2126f6 : [pt-vulkan] Enable Python code blocks in shader templates and upgrade shader template generation (#115948)
9766781512 : Skip some flaky Dynamo tests (#116165)
3747aca49a : [C10D] Make all PGNCCL LOG usages use logPrefix() (#116060)
6ffe1da375 : Add support for multi device foreach ops (#116064)
c72bc61bcd : [ROCm] Fix caffe2 build with hipblasv2 api (#116073)
a597a00c87 : [AOTI][refactor][3/n] Declare python_kernel_name and cpp_kernel_name in ExternKernel (#115972)
4f02cc0670 : [C10D] Add logPrefix to abortCommsFromMap (#116059)
c3bc65d9d8 : [dynamo] Restore constant tensor original FQNs (#116086)
6730b5bcb4 : Support nn_module_stack in torch.export(strict=False) (#115454)
c173a9d9b3 : add Half support for layer_norm on CPU (#99590)
45cfe9cdf7 : [export] Fix test to run internally (#116118)
c55210b4f0 : [Inductor] Deduplicate grid wrapper statements for user defined triton kernels (#115849)
9a2a44457a : SDPA extend backward realized tensor alignment checking to forward realized tensors (#116069)
110339a310 : Fix c10::div_floor_floating compile error (#115647)
68c7aac809 : [export][reland] non-strict export with dynamic shapes (#116048)
cd449e260c : Mark set_ as an inplace view op (#115769)
0759240001 : [sparse] update cslt to 0.5.2.1 (#115988)
d55365dc05 : [CUDA] Workaround shmem limit for certain input sizes in `AdaptiveAvgPool1D` (#115231)
7d92449171 : Add call to run_tests for more tests? (#115781)
7f7a7b0b48 : Reset stepcurrent cache if file succeeds (#115775)
f88c9af98e : [TEST] Skip scaled_dot_product_attention test on sm < 80 (#115760)
ae6f1f4a47 : [BE]: enable readability-delete-null-pointer clang-tidy check (#116107)
d85314c95c : Support Predispatch functionalization (#113728)
1474eb5f29 : Fix jagged composite impl of flatten() (#115192)
cbc70e9b9c : [caffe2] Add option for build_cpukernel_avx2 (#116008)
77d5f60740 : [fsdp][torch.compile] FSDP changes (#115497)
e52983939c : fix(conv_v8): optimize lru cache in conv v8 (#114110)
d749b4a152 : Implements `permute_tensor` in functional collectives (#115078)
71bedc3a69 : [Inductor UT] fix unreachable code (#116094)
5ba87a31bc : Unflake test_reference_numerics_large__refs_special_multigammaln_mvlgamma_p_1_cpu_bfloat16 (#116058)
7b7f11f230 : [dynamo] test number of guards when inputs are views (#115793)
91e184fd74 : Revert "Introduce 3 low-latency, intra-node allreduce algorithms for small messages to PyTorch (#114001)"
b6d0d0819a : Revert "[PT2] [Quant] Change the QConv2d Binary post op name from add to sum (#115329)"
c539f7df10 : Revert "[Inductor] Deduplicate grid wrapper statements for user defined triton kernels (#115849)"
505a9e4854 : add support for dynamic shapes in round (#115259)
a7bfa04da6 : Revert "More markDynamoStrictTest (#115870)"
24af118e55 : Revert "markDynamoStrictTest more tests (#115871)"
5b6b680517 : Revert "Adamw refactor (#115983)"
92998693a9 : [inductor] Avoid bool being upcast to int (#109913)
992c4e7b24 : Actually run Dynamo tests in all Dynamo shards (#115962)
0bd5a3fed7 : [releng] Docker release Refactor Push nightly tags step. Move cuda and cudnn version to docker tag rather then name (#116097)
a31effa15f : Update device_mesh.py docs imports (#116074)
2a44034895 : [CUDA] Include `<thrust/swap.h>` in `LinearAlgebra.cu` (#116072)
327bdcdb14 : Some tiny modification about torch.set/get_default_device (#116014)
b48abbc020 : [DeviceMesh] Fix DeviceMesh docstring (#116053)
8b0122ad33 : Add lowerings for reflection_pad{1, 3}d_backward (#115645)
9dda4b20a0 : [MPS] Enable select/[broad]cast ops for complex dtypes (#115727)
1544c37520 : [7/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#115495)
9b8f934068 : Remove memory_format check for native_group_norm_backward (#115721)
01b979fc9a : [Inductor] Fix constant folding and extern kernel mutation tracking bugs (#115908)
bb5a27052f : [Dynamo][9/N] Make SkipFilesVariable wrap functions only (#115963)
47908a608f : Revert "[ROCm] add hipblaslt support (#114329)"
ed0c0c49ef : Revert "[ROCm] fix nightly 5.6 build (#116029)"
368a0c06d4 : [releng] Docker Official release make sure cuda version is part of image name (#116070)
5894af83be : Use dequantized weight and bias in conv2d quantized ops (#115615)
270ed13e87 : [DTensor] Make DTensor `from_local` backward partial() to replicate() pass through (#115967)
3472a9200d : expand subclass type tests in dynamo (#116024)
054f9548b4 : [dynamo] Store CompilationEvents in a buffer in torch._dynamo.utils (#115788)
fc58909bab : Fix allowed dtypes for mem_eff attention (#116026)
6b120c6cf9 : Update the sdpa benchmark to measure forward backward time in isolation (#115986)
bf62511e07 : Reshape decomposition for jagged layout NT (#115191)
63e242b1e4 : [ROCm] fix nightly 5.6 build (#116029)
8452f41305 : Adds allreduce to inductor remap (#115950)
2a5659a797 : add length assertion to PrepareModuleInput and PrepareModuleOutput (#115957)
a699b10339 : [buck2][win] fix caffe2 protobuf_rule (#115954)
2f7bb18def : [Doc] Add padding size constraint in nn.ReflectionPad2d (#115995)
1e272fb6d6 : [export] Undo "module: export" labeling (#116042)
c4748b425e : Add main in dynamo/test_compile.py (#115941)
a1a0b290d2 : [tp] further fix the docs (#115974)
8868c1cfae : [sparse][ci] Add cuSPASRELt to CI (#115369)
2b2ed52799 : [xla hash update] update the pinned xla hash (#116003)
7b6210e8a4 : Use matrix generate script for docker release workflows (#115949)
e30d436b01 : [fx][split][testing] Add testing for #107981 (#108731)
bf20b56e9d : Fix PyTorch build error on ppc64le (#115729)
77366ba637 : Increased hardcoded limit for number of GPUs. (#115368)
80b1ecc308 : Run eager adam optimizer in benchmarks where possible (#115445)
8a445f7bd5 : Serve multistream graph captures from correct pool (#114647)
3b70bd3970 : Take 2 of "Add an option to log the source of the Triton kernels generated by torch._inductor (#115979)
386776c49a : [torch] Reduce memory usage by adding flags for clearing intermediate graphs used for optimization during inference. (#115657)
dd367b7c8f : check tensor subclass when using torch.compile + SAC (#115960)
e43d33f4f7 : [export] Support torch.sym* ops (#115854)
647f14e70b : [BE]: Enable clang-tidy check for readability-string-compare (#115994)
d7caef7996 : [CI] Update clang-format (#116002)
c285ca7916 : [AOTInductor] Add updaing constant buffer to active buffer. (#116001)
34fe850d00 : SymInt'ify sparse_compressed_tensor (#107903)
419f2ca3e3 : Fix a crash in sparse compressed tensor invariants check when nnz == 0 (#115825)
eafeba71c1 : Adamw refactor (#115983)
87ea6fb844 : Make input contiguous for DTensor reduce scatter to fix the incorrect numerical values (#115847)
bc4115ffcf : [Inductor][Observability] Change to log.debug to avoid excessive long of logs (#115474)
4123cca859 : [AARCH64] Fall back to GEMM if mkldnn_matmul fails (#115936)
b06b02559e : Support non grapharg and intermediary grad access (#115898)
c5dcb50c00 : [easy] aten ops: support passing all args as kwargs, including `self` (#114920)
88207b10ca : Enable thp(transparent huge pages) for buffer sizes >=2MB (#107697)
622947afa8 : [BE] Use nested namespace in ATen/native (#115938)
e3aefe2970 : Revert "Initial Flash Attention support on ROCM (#114309)" (#115975)
8283491eff : [TEST] Increase numerical tolerances in test_torchinductor_opinfo:test_comprehensive (#115768)
49af19cd8e : Skip some flaky Dynamo tests in test_linalg.py (#115925)
2a2f2e454a : [inductor] Fixed issue with true div on integer input with dyn shapes (#115920)
d08905db7e : Trigger a mergability check on ghstack prs (#115944)
14a6b24c8b : [Dynamo][8/N] Wrap itertools.* as ItertoolsVariable (#115802)
056a882cb9 : add markDynamoStrictTest to TestOptimRenewed, removing flakiness (#115947)
0597eb56c2 : Generate exhaustive compiled optimizer tests (#115906)
034e871710 : [Dynamo] Look up variables from old frame, rather than copy variables to new frame; skip some copy to save time. (#115062)
94d28161fa : Fix broken PyYAML 6.0 on MacOS x86 (#115956)
74dfdc567b : [MPS] aten::erfinv bug fix: add storage offset buffers to handle slicing (#105801)
d92d4133e7 : [8/n] Update XNNPACK Submodule Version Part 8 Everything Remaining to get it to work (#115714)
2e517b20d9 : [MPS] Add Conv3D support for MPS (#114183)
9fcf6fb6fe : [C10D] Add waitForDumpOrTimeout to log on dump abandonment (#115876)
82e0d00da9 : [c10d] Polish NCCL PG monitor thread log message (#115888)
1f3bdf40ad : [export] Update schema version (#115712)
715d663794 : [inductor] split test_cpp_wrapper.py into cpu and cuda test files (#115479)
50c9665f92 : Revert "[export] Support torch.sym* ops (#115854)"
80a9625d9f : Revert "non-strict export with dynamic shapes (#115862)"
1bb0d0fc1f : non-strict export with dynamic shapes (#115862)
347cb91946 : [export] Support torch.sym* ops (#115854)
6c2103bdf7 : Fixed some failing inductor tests with exact_dtype=True (#115828)
91b848bf81 : Revert "markDynamoStrictTest on more tests (#115879)"
c006c8b50e : Revert "markDynamoStrictTest some more (#115885)"
61abacf829 : [tp] improve documentation (#115880)
d5115bfb06 : Revert "[AOTI][refactor][3/n] Declare python_kernel_name and cpp_kernel_name in ExternKernel (#115831)"
72eab5aa43 : Configures distributed_checkpoint label (#115833)
1b506e7469 : Revert "non-strict export with dynamic shapes (#115862)"
7ed2bc7c67 : [GHF] Do not block reverts with internal changes (#115903)
f54bb1ed56 : non-strict export with dynamic shapes (#115862)
b062ea3803 : [ROCm] add hipblaslt support (#114329)
287a865677 : [AOTI][refactor][3/n] Declare python_kernel_name and cpp_kernel_name in ExternKernel (#115831)
66994bca5f : Revert "[inductor] split test_cpp_wrapper.py into cpu and cuda test files (#115479)"
55ce4693ff : markDynamoStrictTest some more (#115885)
8b650cdd3c : markDynamoStrictTest on more tests (#115879)
2d43e31aa9 : Fix wrong behavior of is_alias_of and c10d::reducer on MTIA (#115553)
4ea7430ffb : [BE] Don't copy CuDNN libs twice (#115872)
b4d6443bcf : [Dynamo] Log innermost user frame filename & lineno for better error aggregation (#115899)
4edc921857 : Introduce 3 low-latency, intra-node allreduce algorithms for small messages to PyTorch (#114001)
cd47e335d1 : [TEST] Skip test_schema_correctness for float8 dtype (#115757)
c1c9b739e2 : Back out "[aotinductor] replace lld with the default ld linker (#115478)" (#115875)
478f0e96dc : markDynamoStrictTest more tests (#115871)
7f686c8fe1 : More markDynamoStrictTest (#115870)
9ae0e62929 : [PT2] [Quant] Change the QConv2d Binary post op name from add to sum (#115329)
653acd8fe1 : [inductor] split test_cpp_wrapper.py into cpu and cuda test files (#115479)
9056903b09 : [CUDA] 64-bit indexing for avg_pool_backward (#114193)
8e2d63cbc3 : [export][reland] Remove runtime assertion pass (#115597)
7d4ccd7b9e : [AOTI][refactor][2/n] Rename kernel to python_kernel_name (#115766)
8e1cff96e3 : [C10D] Log PG size in init log (#115807)
5989e1222d : [BE] Set `torch.cuda.has_half` to True (#115884)
a8e354a9a0 : [sparse][semi-structured] enable fp32 support, separate sparse and dense constraints (#115550)
6d5fe07659 : Fix numpy warning when importing torch without numpy installed (#115867)
9e84d0fa60 : [MPS] Fix opposite error message in empty_mps (#115746)
85262b0a9e : markDynamoStrictTest some test_cpp_extensions.* (#115858)
8ddca5aeae : markDynamoStrictTest some more tests (#115857)
3477a2ee03 : unMarkDynamoStrictTest on OpInfo-based tests (#115856)
0722ce35f5 : Increase number of Dynamo shards from 2->7 (#115855)
4ccd8eb613 : Add Dynamo test expected failure mechanism (#115845)
5477120ebf : [executorch] Update iOS toolchain with a modern cmake syntax. (#115799)
f90a5f891b : [AOTI][refactor][1/n] Rename cpp_kernel to cpp_kernel_name (#115783)
1b8599283f : Optimize quantized max pool 2d (#115690)
6fee208064 : Handle -1 in jagged layout NT view ops (#115843)
c947ed1135 : [BE][ROCm] Use modern C++ (#115844)
7e6ec8d3db : [ONNX] Add proper iobinding synchronize for ONNX cuda bench (#115773)
823523acc0 : [ONNX] Dump sarif diagnostics for failed onnx exports in benchmark (#115673)
0959e67de3 : [ONNX] Set correct cuda.current_device for multi-device onnx performance bench (#115670)
59f7355f86 : Revert "[ROCm] add hipblaslt support (#114329)"
66b04e3cb7 : [nccl flight recorder] nullptr profiling name (#115851)
21b8127f1c : [Inductor] Deduplicate grid wrapper statements for user defined triton kernels (#115849)
194d57dae7 : Add values backward support for sparse CSR, CSC, BSR, and BSC tensors (#115586)
49d826bcd3 : [dtensor] update op db tests (#115722)
ef6a0faf89 : [export] Fix canonicalization. (#115830)
bb2bb8cca1 : [ROCm] add hipblaslt support (#114329)
04ef21f5dd : [C10D] Make dumpDebuggingInfo share a mutex across PGs (#115803)
7ecddaef23 : Revert "Introduce 3 low-latency, intra-node allreduce algorithms for small messages to PyTorch (#114001)"
67232199b1 : [dynamo] Log shape_env_guard_count separately from guard_count (#115776)
353f2dbd9c : [CUDA] Fix V100 expected failures in `test_mm_decomp` and `test_linalg` (#115666)
28e37d4f3b : Update Triton pin (#115743)
87547a26b8 : [aotinductor] add no weight change version of fuse_parallel_linear (#115791)
ca4caf4eac : Revert "[inductor] Do variance calculation in opmath type (#115181)"
0fe014bd8a : [C10D] Change PGNCCL logs to prefix [PG {} Rank {}] (#115801)
e94267587b : [C10D] Refactor NCCL logs to use common prefix helper (#115800)
eb6e70cf66 : [C10D] Only open NCCL dump pipe file once per process (#115798)
74d2b9dd15 : [C10D] Make DumpPipe disabled when FlightRecorder disabled (#115771)
b618869208 : [inductor] label cpp test files with oncall: cpu inductor (#115167)
c80e2d5bb2 : [fbcode] consolidate usage of fp8 linears for inference models (#115808)
5bddbed399 : Initial Flash Attention support on ROCM (#114309)
ac60a70e06 : Migrated loss functions to ModuleInfos (#115584)
f727bed2e6 : [inductor] Updated upsample_bilinear2d decomposition (#104182)
28e4004286 : Add doc for torch.distributed.breakpoint (#115656)
fcb95bf31b : [2/N] Use std::in_place (#115480)
6500ccebd7 : enable fp16 autocast for dynamo benchmark (#114088)
afe6d272c6 : Fix buck OSS build after #115570 (#115804)
adfbd2b219 : Introduce 3 low-latency, intra-node allreduce algorithms for small messages to PyTorch (#114001)
36c6c0c7dc : [pytree] expand `tree_map` to accept multi-inputs (#115642)
7e1542b938 : [CUDA][FP8] Skip `test_dtypes` on FP8 `_scaled_mm` (#115661)
f5458f8f00 : [C10D] Make DumpPipe pipe file configurable (#115770)
ef01e78fd9 : disable test_ddp_profiling_autograd_profiler in distributed_test.py (#115704)
722752fc28 : Revert "Increased hardcoded limit for number of GPUs. (#115368)"
5e615f5f3a : [BE] Use `version.txt` to determine version of nightly builds (#115794)
661c1cf2aa : numerical mismatch fix for test_mem_efficient_attention_attn_mask_vs_math_ref_grads in test_transformers.py (#115707)
ffc826bf10 : [nccl-pg] Store PG global rank information in tracing logs (#115730)
b38e14c12a : [Reland][HigherOrderOp] remove unused get_item in MapHigherOrder (#115758)
626b7dc847 : Revert "Migrated loss functions to ModuleInfos (#115584)"
3fa3ed4923 : Workaround to avoid MSVC std ambiguous symbol error (#115748)
67ce57ff66 : Add pragma once to headers (#115739)
c7ae2c170f : [inductor] Added non-integer expr support for floordiv in triton codegen (#115751)
3643548447 : [Export] Support ser/des test on existing cases (#115413)
a34d56a64a : [Export] Support retraceability test on existing cases (#115402)
43efe39cb1 : [codemod][lowrisk] Remove extra semi colon from caffe2/caffe2/opt/optimizer.cc (#115018)
ad76a4e1e7 : [inductor] Allow sympy expressions to participate in type promotion (#115676)
869e52e3dd : Support torch function user objects (#111765)
81321baf5c : [PyTorch] Remove ArrayRefTensor::dtype (#113578)
8c57fde21f : Let all_reduce_coalesced accept one tensor as well (#115650)
b9af126908 : [PyTorch] Add input numel assert for minimal arrayref interface (#113577)
db851b1bc9 : [Dynamo][7/N] Wrap python modules under torch as regular PythonModuleVariable (#115724)
54d552e991 : [funcol] Directly import DeviceMesh to avoid circular dependency (#115649)
7388d40165 : Make pytorch_qnnpack a shared library (#115570)
c90fdb9ac0 : Fix torch.distributed.breakpoint (#115705)
8a8d0adc0b : Fix `torch.gradient` check for spacing arg list length (#115686)
23bff71de4 : [llvm][oncall] Fix build for llvm-18+ (#115652)
4d8ad4fb82 : Move SingletonSymNodeImpl from c10 to aten (#114895)
2a514f48d7 : Add huggingface gpt2 fake tensor unit test for torch.onnx.dynamo_export (#115380)
926236305f : [sigmoid] fix for FX tracing unflattened modules (#115708)
75d3bbaaa2 : Fix cudagraph check message (#115664)
42390a097b : [inductor] Do variance calculation in opmath type (#115181)
95de4f5764 : add sm80orlater check to test_sdpa (#115702)
caddcf9de5 : Fix lint error in `aten/src/ATen/native/cuda/CUDALoops.cuh` (#115616)
afa62d6237 : [nccl-pg] Pass group global rank information to NCCL PG (#114736)
193f87857e : [BC breaking] Remove check_sparse_nnz argument of gradcheck (#115658)
310f6ab11a : [fsdp] Replace acc_grad hooking with register_post_accumulate_grad_hook on flat_param (#112184)
97888725c5 : [Export] Test non-strict mode on existing test cases (#115399)
66a76516bf : [ROCm] Disabling Kernel Asserts for ROCm by default - fix and clean up and refactoring (#114660)
fb80f05ee2 : [inductor] Fix angle decomposition return type (#115700)
9cdc80d581 : [inductor] Fix torch.bernoulli decomposition return type (#115699)
0e0dd8f985 : [dynamo][BE] Move torchvision import inside of test_multi_import (#115677)
3807fc690f : [OSSCI oncall] fix lint (#115737)
0870afb85c : Revert "[Export] Test non-strict mode on existing test cases (#115399)"
bda6f02343 : Revert "[Export] Support retraceability test on existing cases (#115402)"
3b87681ddc : Revert "[Export] Support ser/des test on existing cases (#115413)"
f9cf6ae889 : [PyTorch] AOTI: add minimal arrayref interface (#112800)
331128b444 : [c10] signal_handler: atomically exchange the signal count to fix data race in ExecuteStepRecursive() (#115510)
50db2aa70a : [funcol][BE] Apply ufmt to _functional_collectives.py and turn on lintrunner for functional_collective (#115648)
db8d409d08 : [DCP][BE] Apply ufmt to DCP and turn on lintrunner for DCP (#115302)
cc28f61fa3 : [DCP][BE] Move DCP._state_dict_utils out from DCP (#115523)
1500379b6d : [MPS] Enable `torch.rand[n]` for complex types (#115514)
4744359163 : [Export] Support ser/des test on existing cases (#115413)
b0c7dd47cd : [Export] Support retraceability test on existing cases (#115402)
2411a92e9d : [Export] Test non-strict mode on existing test cases (#115399)
dd42201cb8 : [export] Preserve FQN in export_to_torch_ir (#115462)
0dad85b402 : [Dynamo] Fix torch.tensor call with tuple (#115713)
38101e349e : [usdt][torch] Sample dispatch operator integration (#115593)
17c104ac18 : [export] Do not copy state_dict in run_decomp (#115269)
99554112d3 : [pytorch] add namespace for optTypeMetaToScalarType in codegen to avoid not declared when compile (#115623)
1392843e7b : [inductor] make sure bitcast input and target type have the same bitwidth (#115619)
469d6d45fe : [BE] Bye bye, CircleCI (#115701)
76ced0df03 : Consider storage_changed for assigning alias_of_input in aot_autograd when computing differentiable outputs that alias each other (#115315)
946de1cf4c : [export][fix] Add back export strict argument (#115668)
48ed165380 : [FSDP][state_dict] Create a FSDP/EP unittest (#115567)
639060cb0b : Use get_mkldnn_enabled for decompositions (#115448)
f78f23d753 : [export] Turn off output value from sources for export. (#115442)
af09fe256a : [Inductor] Implement a deduplist data structure for name to user tracking (#115609)
ffb2a28a67 : Fixes expected behavior when `no_dist=True` in `state_dict_loader.load` (#115660)
f138b08d2e : Migrated loss functions to ModuleInfos (#115584)
1becd2c314 : Align checks in `_use_cudnn_ctc_loss` with those in `_cudnn_ctc_loss` (#115617)
c3ed9f65a0 : Revert "[8/n] Update XNNPACK Version Part 8 Everything Remaining to get it to work (#115587)"
ac4f6beb00 : [Dynamo] Make resume function name more explicit by adding lineno (#115608)
40ce9a4cfb : [c10d] Create a python c10d API _set_pg_timeout to set timeout (#115453)
8a58af2a9f : [Reland][HigherOrderOp] make MapHigherOrder create map_impl (#115561)
8739d1e3f9 : Fix a fast mode gradcheck bug where specified eps argument is ignored when switching to slow mode (#115634)
75ab294eb5 : Enable builtin tests for ONNX Export with ExportedProgram models (#114762)
d954ef208f : [DCP][state_dict] DCP state_dict cannot correctly find FQN when the leaf module is wrapped by FSDP (#115592)
0ff155fb65 : Fix SDPA for SAM (#115636)
8885128dcc : Fix backward for SDPA NT jagged layout (#115576)
7553c49514 : [S382174] Fix distributed debug w/ non-equal split (#115483)
d521857411 : Terminate handler (#101332)
36b5136270 : [inductor] Don't print disable_cudagraphs_reason when cudagraphs is disabled (#115489)
670eb83573 : Enable test_sparse_addmm for crossref tests (#115536)
a8dc9d8e35 : [8/n] Update XNNPACK Version Part 8 Everything Remaining to get it to work (#115587)
e918461377 : Add instructions for generating optimal Triton kernel parameters of bsr_dense_addmm (#115504)
32286512cc : Add tune_bsr_dense_addmm as an API to find optimal triton kernel parameters for bsr_dense_addmm (#115499)
40dc0580a6 : [inductor] De-duplicate triton helper functions (#115546)
02196c21ac : [inductor] Parameterize ir.Scan on combine_fn (#109132)
d5286d7ea8 : [export] Add canonical form for differentiating IR (#115589)
de4b2e59a7 : [PyTorch] AOTI: add more basic aoti_torch getters (#112799)
c5c4d81b1b : Switched stale workflow to linux.large.arc (#115635)
4fafc36c33 : [MPS] Fix `sum` and `prod` for complex types (#115554)
07f03b4a62 : [MPS] Add support for `MPSDataTypeComplexFloat[16|32]` (#115513)
21cf6e76c2 : Revert "Use linux.large.arc for stale workflow (#115440)"
dadb3694ff : Use linux.large.arc for stale workflow (#115440)
7350dcb307 : [CI] Fix lint errors on master (#115627)
bc51a0c22f : Revert "[PyTorch] AOTI: add more basic aoti_torch getters (#112799)"
f98b0f3ebc : Add bfloat16 support to torch.sparse.addmm for CPU (#115535)
d6f8850653 : Revert "[Export] Test non-strict mode on existing test cases (#115399)"
a8acd6c410 : Add Half support for AvgPool2d on CPU (#109578)
92fd3927b0 : [export][reland] Add math.* ops to pass base (#115559)
36527df344 : [Export] Test non-strict mode on existing test cases (#115399)
fdf814c6ca : Revert "[MPS] Add support for `MPSDataTypeComplexFloat[16|32]` (#115513)"
46694e92b7 : Revert "[MPS] Fix `sum` and `prod` for complex types (#115554)"
f28687dfb2 : Do not use `pytorchbot-env` from upload-test-stats (#115606)
1eca63c6ac : [DeviceMesh] Move helper function 'get_mesh_dim_by_name' to MeshEnv class (#115572)
3de2596abe : [PyTorch] AOTI: add more basic aoti_torch getters (#112799)
2b323e61ad : [PyTorch] AOTI: Use static_cast, not dynamic_cast (#112798)
ca52195112 : [PyTorch] AOTI: Avoid aoti_torch_data_ptr calls for constants at inference time (#112405)
24c67fe8cf : [PyTorch] AOTI: Emit static constexpr int array vars when possible (#112174)
ff6f987adc : [PyTorch] Replace cached thread_locals with stack allocation in AOTI (#112116)
405a0040cf : Adds tool to visualize sharding (#114307)
65651d970b : Optimize the copy of Half to Float and Float to Half on CPU (#103148)
b6a4866330 : [export][reland][refactor][3/n] Move unlift to separate file (#115558)
36199747f3 : [export][reland][refactor][2/n] Move tracing logic (#115557)
dd9a989b83 : [export][reland][refactor][1/n] Split dynamic shapes (#115556)
744d74c456 : [inductor][optimus] enable smart fusion (#115471)
fbb744fd49 : [dtensor] enable radam foreach optimizer (#115566)
c322e5b5e9 : [dtensor] add test for nadam optimizer (#115565)
4bd661c472 : [dtensor] enable adadelta foreach optimizer (#115564)
8a27352d6b : [dtensor] add a implicit replication flag (#115297)
c70f995b5c : [DeviceMesh] Add mesh_dim_names to DeviceMesh __repr__ if it exists (#115579)
0fc04e274d : [inductor] Fix an aliased output bug (#115373)
89ee3af076 : [Reland][Dynamo] Don't log compilation metrics for PyTorch unit tests (#115571)
064846dbc2 : [cpu] flash attention optimization (#115151)
0379c11248 : [c10d] Enable PG NCCL monitor thread by default (#115577)
6988e40b48 : [quant][fx] Lower operator.matmul in convert_fx (#113954)
0a464ad1a7 : [dtensor] turn back on symbolic shape in tests (#115568)
078773b32b : [ROCm] Add owners for more HIP-specific paths (#113989)
17de38c9af : [Dynamo] Check duplication when loading dynamo tracing rules (#115059)
0692240b90 : [dtensor] account for empty list when turning to OpStrategy (#115298)
19c67a9db5 : [dynamo] Fix a closure cell empty error (#115541)
617c228fba : [CI] Lower the smoketest speedup threshold for nangpt (#115562)
4471fe6c39 : [sparse][semi-structured] add alg_id to _cslt_sparse_mm and _cslt_sparse_mm_search (#115178)
8b28380c8e : [MPS] Fix `sum` and `prod` for complex types (#115554)
a4bb4a2373 : [MPS] Add support for `MPSDataTypeComplexFloat[16|32]` (#115513)
288822c968 : Increase ROCm test shards to 6 (#110997)
4307ccde99 : Move ONNX's TorchModelType to pytorch_test_common to fix circ. dep. (#115353)
ccd5bde6a3 : [export] Reintroduce InterpreterModule to unflatten (#115436)
c137335b5c : [export] make UnflattenedModule not inherit from GraphModule (#115408)
8c1567d021 : [c10d] Change watchdog inner loop function name to make it more accurate (#115404)
99f06c0cc2 : [BE] update errors to be more descriptive (#115443)
b706c4116d : [MPS] Add MacOS 14 runtime check (#115512)
03ff44c958 : [c10d] Fix Store check condition in NCCL PG watchdog (#115475)
ccc9e5f5bc : Optimize conv2d pw quantized (#115221)
585aea6e77 : [xla hash update] update the pinned xla hash (#115528)
505574c46a : Add decomposition for torch.block_diag (#115096)
5fe2b138e3 : Revert "[inductor] Fix an aliased output bug (#115373)"
c52b78ebc2 : [ez] Remove some args from run_test.py (#115459)
b5578cb08b : [ez] Remove unittest retries (#115460)
5c0976fa04 : Revert "[dynamo] guarded config (#111299)" (#115386)
6db7b30db4 : Revert "[dynamo] Cache size calc for differing config (#111300)" (#115385)
f06f51b152 : Revert "[Dynamo] Don't log compilation metrics for PyTorch unit tests (#115452)"
f5f6618813 : [executorch hash update] update the pinned executorch hash (#115311)
40a14e07ef : Revert "[sparse][semi-structured] add alg_id to _cslt_sparse_mm and _cslt_sparse_mm_search (#115178)"
5f41fc7619 : [c10d] Change NCCL PG watchdog error msg and test comments (#115403)
794545c11f : [BE]: Enable RUF015 codebase wide (#115507)
1e5636f791 : [sparse][semi-structured] add alg_id to _cslt_sparse_mm and _cslt_sparse_mm_search (#115178)
b88be1686d : Revert "[export][refactor][1/n] Move dynamic shapes logic (#114764)" (#115508)
f017a1af3f : [MPS] add complex_out to MPS backend (#110851)
de89a53df8 : [benchmarking] Reduce box_detections_per_img for vision_maskrcnn (#115487)
274fdc81f8 : [Dynamo][6.3/N] Further cleanup torch.py (#114669)
fe01605830 : [aotinductor] replace lld with the default ld linker (#115478)
1310f0bf38 : [inductor] Fix an aliased output bug (#115373)
2e6b809d6b : [AOTI] Fix a missing declaration for the result of item() (#115175)
9b3cb1c66c : Fix environment condition for docker-release.yml
38f890341d : Implement pass-through `state_dict` and `load_state_dict` for dynamo OptimizedModule (#113423)
26266c9718 : [CI] Call torch.cuda.empty_cache to release device memory (#114663)
694cc6af56 : [benchmarks] Fix NameError: name 'args' is not defined (#115494)
21a1d31ed8 : [caffe2] update Meta-internal googletest references (#115407)
24a463c46c : Revert "[export][refactor][2/n] Move tracing logic (#114768)" (#115503)
b4ef59f740 : Revert "[dynamo] remove unused `OptimizeCtx` field - export (#113901)" (#115401)
b36fc6790e : Revert "[dynamo] Guard on `HAS_GRAPH_BREAKS` if graph breaks are present (i.e. cache miss if compiled object requires nopython) (#114073)" (#115384)
6c1e75e646 : Revert "[HigherOrderOp] make MapHigherOrder create map_impl call_function node instead of map (#115205)"
100c466bff : [CI][Inductor] Skip CPU tests when running on GPU (#115430)
08d63a75a4 : Revert "[HigherOrderOp] Remove additional get item calls in MapHigherOrder. (#115207)"
fbeca60b1f : Remove replace_all and make VTs mutable (#113725)
f71d931b32 : [Dynamo][6.2/N] Dump the in graph function list(~2600 ops) and add unit tests. (#114196)
4eb5838e18 : Revert "Enable builtin tests for ONNX Export with ExportedProgram models (#114762)"
2ee240d14a : Revert "Move ONNX's TorchModelType to pytorch_test_common to fix circ. dep. (#115353)"
4490d4692b : [doc] Rewrite benchmarks/dynamo/README.md (#115485)
8ddc549c0f : [BE][JIT] Do not wrap shared_ptr with optional (#115473)
641ec2115f : [AOTI] move model runner into a library (#115220)
c039f01bd9 : Increased hardcoded limit for number of GPUs. (#115368)
99f222372b : [5/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#115354)
937d616e82 : Re-enable type checking for distributed_c10d.py (#115223)
485ea9a70a : [DTensor] Add DTensor experimental op for LayerNorm backward sharding rule propagation (#115398)
eb3aa424ce : [Reland][Dynamo] Added support for math.radians on ints with dynamic shapes (#115477)
960ad9d94e : Move ONNX's TorchModelType to pytorch_test_common to fix circ. dep. (#115353)
13d2e3eba7 : Enable builtin tests for ONNX Export with ExportedProgram models (#114762)
7e941a932b : Store user model to simplify ONNXProgram.{adapt_torch_*,__call__} APIs (#115281)
da341d0d48 : [Dynamo][6.1/N] Refactor out TorchInGraphFunctionVariable and improve heuristic (#113432)
1c1f2bbe8a : Add a space in the error message (#115465)
3ebf9acea1 : [Triton] Replace triton.runtime.jit.get_cuda_stream with torch.cuda.c… (#115397)
516bd4a72c : [1/N] Use std::in_place (#115170)
2ed47fecc5 : Robustify torch.multiprocessing.spawn error reporting to be less deadlock prone (#114688)
2962271f58 : [ONNX][dynamo_export] Extend expected fx output types for int, float, bool (#115431)
41b1919208 : [nested_tensor]Python subclass NT overhead improvement (2/n): avoid getting from WeakTensorKeyDictionary twice during __init__ (#115450)
d40a7c6026 : Add decompositions for replication_pad (#115113)
d7705f325d : Patch `--save-xml` when `TEST_IN_SUBPROCESS` (#115463)
c9c4cdf9a9 : [AOTAutograd] Do not call ctx.mark_dirty on mutations hidden from autograd (#115324)
3361496f96 : Fix the corner case of index_add (#114929)
3c54ff6bcd : Update ONNX's IO Adapter to support FakeTensor with ExportedProgram (#114407)
495054545c : Allow `preserve_rng_state=True` when torch.compile + selective checkpointing + CUDA (#113718)
cd444aa075 : [Dynamo] Don't log compilation metrics for PyTorch unit tests (#115452)
e1370ff80f : Vectorize CPU ATen mean kernel for BF16 & FP16 dtypes (#114582)
f614ed78b8 : [docs, dynamo] fix typos in dynamo custom backend docs (#115444)
fb19947962 : Add decompositions for reflection_pad{1, 2, 3}d (#115100)
9f7b3a4e18 : Move autolabeler to "oncall: distributed" not "module:.." (#115447)
749f0c90e1 : Revert "[export][refactor][3/n] Move unlift to separate file (#114787)" (#115457)
28de29fdda : [releng] version 2.2 -> 2.3 (#115446)
3e47e3f441 : Revert "[export] Fix graph output mismatch issue with constant outputs. (#115280)"
3dab46fe19 : Revert "[export] Dont skip output caching for now. (#115374)"
aaaf5c08fb : [ez] Don't run workflows on forks (#115429)
b5d3d3ebf0 : [ao] making hist_obs handle torch.inf and closeby values (#103467)
1215f2ffe2 : [dtensor] readme typo (#115383)
af925a56a1 : Revert "[export] Add math.* ops to pass base (#115271)"
12d7ea19af : [Inductor][fx pass] Add sub and div pointwise ops to the post grad fusion (#115389)
e8e4141773 : Revert "[Dynamo][6.1/N] Refactor out TorchInGraphFunctionVariable and improve heuristic (#113432)"
d7180161b5 : Revert "[SparseCsr] Remove triton sdpa skip after triton pin update (#109601)"
4186932bac : Revert "[export] Remove runtime assertion pass (#115196)"
317486edb0 : [C10D] Decouple flight recorder from enableTiming (#115358)
3d999d2f2c : [export] optimize unflattener (#115364)
494cb28231 : [PyTorch] AOTI: add ArrayRefTensor (#112115)
a2b89154bf : New swap function (#111747)
5f2ff29569 : Fix typo in `https://pytorch.org/docs/stable/sparse.html` (#115282)
68f74dd162 : Add python and C++ support for LPPool3d (#114199)
1c3a4a864c : Remove always restore (#115317)
a3f93dc44d : [EZ] [CD] Enable Triton 3.12 conda builds (#115424)
81b565b142 : [CI] Fix a missing write_csv_when_exception problem (#115370)
c370450f02 : [inductor] Remove hashing of tensor data for constants (#115356)
e61d6b42f0 : [Dynamo][6.1/N] Refactor out TorchInGraphFunctionVariable and improve heuristic (#113432)
898554a3a3 : [torchgen] Add logic in custom ops to return empty tensor (#114143)
b3b5bd51ea : [raas][torch][jit] Allow not storing the optimized graph (#115381)
f64b10803f : [SparseCsr] Remove triton sdpa skip after triton pin update (#109601)
72e58a756c : Set markDynamoStrictTest in functorch/test_vmap.py (#115274)
cc8f6f56dc : [quant][pt2e] Add convert callback to Observer module (#115001)
ca15671c30 : Fix failing test_invalid_input_csr_large (#114940)
23fa9621e4 : [DeviceMesh] Rename _device_mesh.py to device_mesh.py to prepare for beta (#115099) (#115193)
6c585de076 : [CUDA] baddmm should fall back to addmm for batch=1 (#114992)
4d70802133 : [c10d] Use TCPStore to record NCCL timeout and dump debug info (#115226)
2c84616a94 : Move the shape env symint cache to a symbol cache, better routing for subclass fakification [re-pr 115227] (#115396)
d0f161eae4 : [vision hash update] update the pinned vision hash (#111264)
9521331ba5 : [pytorch] Multiprocessing api to use sigkill if sigterm doesn't kill the process (#115219)
459845b82d : [cuDNN][cuDNN frontend] Bump `cudnn_frontend` submodule to 1.0 (#115218)
e071d6a9eb : [Nested tensor]avoid using shape in python subclass NT, use _size instead (#115371)
5432088098 : Adds Checkpointer Wrapper for DCP [3/N] (#114603)
3b01f30b20 : Prevent invalid pointwise ops on jagged with transposed ragged dim (#115190)
784e20e3d7 : [C10D] Make dumpPipe use async launcher (#115375)
bb7746275c : Add is_integer to SymFloat (#114703)
f5919335db : Fix _load_from_state_dict for num_batches_tracked in batchnorm (#115285)
18d57dde2d : Remove remaining uses of copy_graphstate (#115321)
ecba053cff : [quant][pt2e] XNNPACKQuantizer skip inserting observers for non-float Tensors (#114999)
dacf5d6e92 : [DTensor] Remove assert to allow tensor sharding dimension < Shard(x).ndim (#115114)
7562b45454 : Reland "[C10D] Use future for flight recorder dump (#115176)" (#115332)
fd79995fd6 : [export] Dont skip output caching for now. (#115374)
6a6a1e3ef7 : [dtensor] update README to make all example runnable (#115365)
c06ab369e8 : [OAT] toggle for forcing matmul precision matching (#115326)
7faa67f6ef : [inductor] enable mkldnn op weight pre-packing on aarch64 (#115037)
7201edc0a5 : Fix RNN class constructor signature (#115341)
21cca2494d : Move test_multi_tensor_optimizers to use OptimizerInfos (#114797)
16373bbc1f : fix error message in pytorch (#115349)
eb4ba35b07 : fix test_weak.py on mac (#115367)
b0a9641815 : [Inductor][fx pass] Fuse pointwise operators in the post grad (#114778)
3a5fb0d456 : markDynamoStrictTest in functorch/test_eager_transforms.py (#115268)
a1bfaf75dc : markDynamoStrictTest: add nopython flag, set default to False (#115276)
2847045ed9 : Set _dynamo.config.capture_func_transforms=False (#115267)
3e66385ddd : Add Work to distributed docs (#115172)
ee8b33f7d5 : Fixed crash when calling pad_packed_tensor when packed with cuda tensors and ensure_sorted=false due to indexing with tensors on different devices (#115028)
686a3e0bf0 : [pytorch][PR] introduce WeakHashRef (#115216)
684ce1b21d : Revert "Assert that output could only be the last node of the FX graph (#115179)"
dd6ae6d3b4 : [HigherOrderOp] Remove additional get item calls in MapHigherOrder. (#115207)
8b74735878 : [HigherOrderOp] make MapHigherOrder create map_impl call_function node instead of map (#115205)
be3efbebb6 : [HigherOrderOp] make MapHigherOrder use should_flatten_output=True (#115204)
998c87f93c : [BE][HigherOrderOp] extract redundant code that unflattens the output (#115115)
43f42bf3cb : Updated docs for deprecated `torch.set_default_tensor_type` (#115041)
441ecf03e2 : Update gloo submodule (#115158)
7b8084d1c6 : [5/N] Fixes clang-tidy warnings in c10/core/*.h (#115232)
d08b20d534 : Update FlashAttention to v2.3.6 (#115313)
78b945484b : [c10d] Extend NCCL communicator splitting to more use cases (#114916)
a6736ac851 : Add call to run_tests for a few tests (#115097)
3c882925da : Make subclass type instances constants (like UserDefinedClasses) (#115323)
5e3631db31 : [DTensor] force re-compute sharding when normalized_shape differs in fwd layer norm (#115250)
622688fab9 : [export] Fix graph output mismatch issue with constant outputs. (#115280)
e1f159e6b2 : Remove redundant api named is_int_list (#115136)
5309ac1b98 : Add test case to prove non-strict export supports external call (#115245)
a93b9ee9d8 : [quant][be] Add a test for per channel quant for groupwise conv (#115224)
b7eb9b1e7e : [Autotune] Enable register pressure handling logic for H100. (#115295)
f55ab176fc : [OAT] move matmul precision out of system info (#115242)
7ec145bfed : [Quant] [PT2] Fix XNNPACKQuantizer set_module_type issue (#115252)
6c0a4ced53 : [export] Add math.* ops to pass base (#115271)
d7160c9223 : Handle potential ValueError exception when stringifying signals (#114696)
ac7d14baad : Revert "[C10D] Use future for flight recorder dump (#115176)"
3a18211622 : Guard on subclass inner tensors (#114965)
c163b3c035 : [export] Remove runtime assertion pass (#115196)
73c0035160 : Add `reset_storage` method to FunctionalTensorWrapper (#115235)
4e9fe496cd : Remove c10::either (#112733)
240f4b2d25 : make __lookup_backend return None when cache misses (#114766)
7457a5f4be : [inductor] adapt to the get_max_simd_tflops Triton API change (#115288)
ae5365819d : [ONNX] Extend `test_fx_op_consistency.py` to cover ExportedProgram model type (#114886)
3642f29a64 : DistributedDataParallel._post_forward, fix return (#114678)
0e07e3dbe4 : [C10D] Use future for flight recorder dump (#115176)
0757e2ba84 : [aotautograd] Fix an output shape error when inputs are aliased (#115279)
7e0e124a5d : Automated submodule update: FBGEMM (#115103)
83cb6a75ad : [dynamo] add list iterator contains (#115237)
71bf4f3b87 : [CI] Add torch/_functorch/_aot_autograd to auto-label rule (#115283)
1489e4bcf3 : [Quant] [PT2] Enable batchnorm in _move_exported_model_to_eval (#114547)
c99db5617a : Introduce general metadata cache to jagged layout NestedTensor (#115212)
b6de337d16 : [funcol] a few optimizations to funcol (#113324)
2cf0cf8137 : [dynamo / DDP] - lazily compile submodules - to propagate real tensor strides to backend compiler (#114154)
967863d91d : [export][refactor][3/n] Move unlift to separate file (#114787)
0ab57ee7ea : [export][refactor][2/n] Move tracing logic (#114768)
53bf8cfcf9 : [export][refactor][1/n] Move dynamic shapes logic (#114764)
5f939e32e3 : [CI] Log load_model failures in csv (#114784)
67c8ad7285 : Fix autograd.Function x enum input x torch.compile (#115206)
233ce0d24b : Support GPU annotations for auto-trace jobs similar on-demand support (#114638)
d4c79a3078 : Add an attention bias subclass for a lower right causal masking (#114823)
4a9fb9832a : Assert that output could only be the last node of the FX graph (#115179)
fcf6a76108 : [aot_inductor][pass] fuse parallel linear based on pre grad aten IR (#114776)
d250b2158e : [4/N] Fixes clang-tidy warnings in header files (#115163)
f4c67ffff4 : [dynamo] Improve support for dynamic shapes str.format and _assert (#115203)
4ff4e06b5b : Update xla pin (#115211)
534f25887b : [inductor] avoid inplace for ComplexView (#115166)
490f2d7570 : Skip privateuse1's checkZeroPoints (#114117)
acdd06e00f : [executorch hash update] update the pinned executorch hash (#115215)
a548e80536 : Use `test_vulkan` to validate run_test without boto3 (#115233)
2bff36bb0e : [c10d] Change set timeout API name to _set_default_timeout (#115197)
b56b002842 : Fix NULL dereference in binary CPU ops (#115183)
892a14a450 : [vision hash update] update the pinned vision hash (#111408)
ef6cbf4e1f : remove myself from CODEOWNERS (#115230)
b0b190f7c0 : More descriptive error message for unsupported inputs to HOP (#115187)
b5b011a5cd : Expand input types for HOPs that use manually_set_subgraph_inputs=False (#115186)
bc46347152 : Refactor how HOPs create new args to subgraphs (#115185)
f6291a5e93 : [Quant] [Inductor] Enable QLinear weight prepack when input dimension size exceeds 2 (#113928)
6d0cf26c3a : [Quant] [Inductor] Enable Dequant Promotion when Linear input dimension size exceeds 2 (#113912)
4a624d1f8a : [Quant] [PT2] Enable QLinear input with multi dims (#113733)
b8ce05456c : enable cat for cuda bits types (#115044)
b9c4fb68c5 : [ONNX][Bench] Fix model name retrieval and remove unused argument (#115108)
ae457a2c4a : [PyTorch] Change test_aot_inductor CPU test failures syntax (#115180)
01ec71e466 : [NFC][Autotune] Use device_prop.regsPerMultiprocessor instead of hardcoded reg number. (#115094)
1102d37958 : remove aot_config.keep_inference_input_mutations from assert_functional_graph (#115195)
7aac689b19 : [inductor] Add ir.Scan and lower aten.cumsum on CUDA (#106581)
d78fe039eb : Introduce OptimizerInfos + add a test_errors (#114178)
99257002fa : Extend auto_functionalized to support ops that return Tensors (#115135)
d0aad93249 : Refactor can_auto_functionalize (#115134)
4620170008 : [Dynamo] Revert multiple PRs since they triggered compilation stuck internally (#115126)
80527c0cf2 : [AOTInductor] Double buffering for Weights (#114446)
12085914b8 : Replace bsr_dense_mm triton kernel with bsr_dense_addm triton kernel (#115030)
f35f52e4a6 : Update auto_request_review.yml (#115182)
f09e8381b7 : [Inductor][fx pass] Fix a bug in batch linear fusion in the post grad (#115061) (#115131)
ab120e65fb : Fix FSDP + TP state dict in param unflattening (#115105)
22704426c3 : Expand dynamic dims support for traceable subclasses (#114311)
259a99669d : [NCCL flight recorder] Dump when writing to pipe (#115139)
5fdae89c03 : [docs][aoti] Link to export docs in AOTI docs (#115088)
a8bd593252 : [c10d] Add _reset_nccl_collective_timeout so users can change timeout of a NCCL PG (#115141)
85d4708512 : HTA docs (#115060)
063423edf5 : Revert "enable cat for cuda bits types (#115044)"
01afa54df5 : [dynamo][FSDP] unit test: FSDP should not be lifted as fx graph attrs (#115112)
4b8ddbbc7e : [dynamo] Improve graph break message for copy.deepcopy (#115120)
522bae20df : [dynamo] Support any() on SymNodeVariable (#115119)
88642d44d9 : [dynamo] Add RestrictedListSubclassVariable (#115057)
a97ed2470a : [dynamo] Support hasattr on dataclass (#115046)
aa70e31610 : [dynamo] Fix MutableSideEffects returning alias (#115095)
5f89cedf9b : Add note to set_default_device about functions with shared memory (#114825)
a987ad3d89 : [BE]: Update ruff to v0.1.7 (#115169)
4c5fe66880 : [DTensor][BE] fix bug in OpStrategy for Tuple output (#115161)
c9853ccadc : Relax tensor contiguity requirement for P2P ops (#114982)
daf89b4101 : Update oneDNN submodule to v3.3.2 (#112700)
4cf97c40f7 : enable cat for cuda bits types (#115044)
a827ac71f2 : Revert "[DeviceMesh] Rename _device_mesh.py to device_mesh.py to prepare for beta (#115099)"
0a9819e3e1 : Prefer is_number over is_constant() (#114513)
5de0dff7ea : Disable bugprone-unchecked-optional-access as it can cause clang-tidy to hang (#115124)
ee96399bb4 : Revert "[Reland2] Update NVTX to NVTX3 (#109843)"
e06bff8bbe : [AOTI] Handle empty input args (#114682)
3d8c174069 : Tie some torch.library def/impls to library objects in testing (#114956)
cfa4370c07 : torch.compile should auto-functionalize certain mutable ops (#114955)
94faba5224 : [nccl-pg] Revert accidental renaming of env variables (#115082)
0ee1e469cb : Revert "Modify pointwise cat heuristic to only apply when inputs are all pointwise and outputs are all pointwise (#114520)"
1224acc018 : [3/N] Fixes clang-tidy warnings in header files (#114431)
89569be2bd : Pin z3-solver on Windows to 4.12.2.0 (#115150)
58809e8914 : [Inductor][Optimus]Move group/batch fusion logic out of inductor (#115128)
d5af6b0301 : Dont pad broadcasting bias dimension in pad mm (#115098)
1dc4588c6a : Add an SDPA dispatcher for nested tensors with jagged layouts (#114164)
fb92983c9b : Added More Information About Adadelta Optimizer (#106290)
eaa64339d6 : [DeviceMesh] Rename _device_mesh.py to device_mesh.py to prepare for beta (#115099)
e199b769b6 : Unbreak vectorization (#115086)
7843df60e4 : [executorch hash update] update the pinned executorch hash (#115116)
1d0e70ad65 : Add get_mutation_names to ir.Wait (#115104)
3cf5348239 : [inductor] Replace rand[n].generator with inductor prim if generator=None (#115051)
3d0bbb24a1 : [dynamo] Improve support for list subclasses (#115052)
fe690f430a : [dynamo] Fix dict.get with no default (#115048)
f6b6fad136 : Fix `torch.inductor._utils.get_device_tflops` on ROCm (#115102)
c56d91ba39 : Log pt2_compliant custom ops used with torch.compile (#115083)
288b1acaa9 : [dtensor] fix empty shape init for dtensor constructors (#115091)
5cfda9b7f8 : Revert "Add an SDPA dispatcher for nested tensors with jagged layouts (#114164)"
aa6920c542 : Fix hang in VonMises rejection sampling for small values of concentration (#114498)
1474dad28c : [quant][pt2e][xnnpack] Add support for QAT dynamic quantization for linear in XNNPACKQuantizer (#113288)
a7bcc78bff : Make it clearer that current selective AC is PT2-only and private (#115081)
4ba37e1804 : Add tests for bsr_dense_addmm and bsr_dense_mm triton kernels (#114800)
aafa8233a4 : Add an SDPA dispatcher for nested tensors with jagged layouts (#114164)
43e3242490 : [BE] Remove test corner cases for CUDA older than supported 11.8 (#114989)
8ef44e6110 : [autograd.Function] Fix torch.compile w/ once_differentiable leads to opaque graph break (#113625)
8dbae73e62 : Use 2d weight and bias texture for conv2d quantized op (#114902)
6317a0350e : [PyTorch][Vulkan] Refactor performance test binary (#114712)
62df4f3428 : Revert "Update oneDNN submodule to v3.3.2 (#112700)"
a70c85ce90 : [dynamo] Improve support for inspect.signature().parameters (#115047)
40218436c4 : Remove size asserts from fx_insert_profiling (#114830)
8bb3cd192f : Revert "Assert that output could only be the last node of the FX graph (#114973)"
dcb486232d : [Reland2] Update NVTX to NVTX3 (#109843)
753c07bbe0 : All gather keys before processing Stateful objects in save/load [2/N] (#114304)
f1c8c427da : Fix https://github.com/pytorch/pytorch/issues/114892 (#115054)
a9e9590934 : FF inductor failure (#114980)
4cb7dd0fc9 : [sparse][quant] Add support for vector alpha in cusparselt mm (#112056)
f101426790 : Revert "Move class definition of DebugInfoWriter to TraceUtil as well (#114901)"
453d509b73 : [xla hash update] update the pinned xla hash (#114586)
bfa2c844a8 : [inductor][cpp] avoid redundant lowp type cast for direct load/store (#115006)
3da67ffad1 : [Inductor] Do not promote int to float for torch.mm (#115043)
3fbfa8cd0a : [dynamo] support `dict.copy()` / `OrderedDict.copy()` / `defaultdict.copy()` (#115012)
917a52d2a2 : [dynamo] support `dict.update(seq2)` / `OrderedDict.update(seq2)` / `defaultdict.update(seq2)` (#115011)
2e8ac5ea93 : [dynamo] support `dict.fromkeys()` / `OrderedDict.fromkeys()` / `defaultdict.fromkeys()` (#115010)
541591dd79 : Add the appropriate check on div_value to the cpp frontend (#114671)
50833021dd : [Inductor] We re-enable the batch_fusion and group_fusion flags in order not to disturb the current production model implementation (#114841)
491f3c8037 : [CI] Small follow up for triton conda builds
bf16fec463 : Fix up triton builds (#115039)
7979ba7b43 : [inductor] Add dropout type check to match eager (#115040)
69a8f9b07e : [inductor] Fix shape mismatch in sdpa pattern matcher (#115038)
55064a4ef9 : [BE] add parentheses to kwargs unpacking `func(*args, **(kwargs or {}))` (#115026)
4d8b9964e1 : [aotinductor] support at::convolution for AOTInductor (#114961)
7f49603ed3 : Fix https://github.com/pytorch/pytorch/issues/114899 (#114985)
3cdfba0a7c : Make DynamicShapes*Tests show up properly in the test failure repro string (#115019)
c808a84680 : Better logging for "cannot fuse" reasons (#115003)
a797821fd6 : [executorch hash update] update the pinned executorch hash (#115021)
3f366aa317 : [audio hash update] update the pinned audio hash (#114997)
a6294d8b9f : [RelEng] Enable Py312 conda builds (#114819)
2391f3717e : [BE] Same install command for aarch64 and x86_64 wheels (#115017)
3cbe7a53a9 : Automated submodule update: FBGEMM (#114444)
d7b303dcf8 : [BE]: Enable a PLC0131, PLC0132, PLC0205. Fix PLC0132 bug. (#115015)
13410d0eda : Moving target/code path to non-pytorch repo (#114095)
8a90249bc2 : [inductor] Update triton pin (#114772)
3a2e2044cd : Revert "[DeviceMesh] Rename _device_mesh.py to device_mesh.py to prepare for beta (#114710) (#114991)"
af5a3bda45 : [merge rule] add CPU quantization (#114994)
28925902fa : [TP] fully rewrite Tensor Parallel APIs (#114732)
729ac7317a : [DeviceMesh] Rename _device_mesh.py to device_mesh.py to prepare for beta (#114710) (#114991)
0fef82b3df : [dcp] fix fsdp state_dict to use run_check=False (#114995)
1f51f977ae : misc visualization/utility improvements (#114984)
3d47b92dfb : Modify pointwise cat heuristic to only apply when inputs are all pointwise and outputs are all pointwise (#114520)
a5a1f0a6b1 : [executorch hash update] update the pinned executorch hash (#114996)
f1fd02503b : Reland #113487 and #112527 (sdpa shim & fp8 AOTInductor support) (#114974)
fe08d995ef : [vision hash update] update the pinned vision hash (#111523)
2882d7fdaf : [BE] Remove stale workaround for CUDA<=11.2 (#114979)
a9aad4ea21 : [AOTInductor] Generate Triton header even if scheduler is not invoked. (#114972)
fb806f487f : [AOTInductor] Add method to get storage size in shim (#114976)
8f164017ee : [quant][pt2e][xnnpack] XNNPACKQuantizer skip quantization for input and output to workaround histogram observer problem (#113405)
7bbc19adc4 : [dynamo] Unskip DALLE2_pytorch (#114960)
4cfe997490 : [dynamo] handle setting .data on a tensor (#113080)
77c4565d58 : [ONNX][Bench] Remove double export and session init in perf test (#114907)
b0a36944cc : [ONNX] Add sanity check in CI for onnxbench (#110178)
1fce51037e : Add `profiler/unwind` to the package (#114981)
d47f715d29 : Expose Flash attn to autograd (#114378)
80d8a2a237 : improve mkldnn_linear_pointwise performance for contiguous tensor with non default contiguous strides (#114939)
e666159e2f : Fix lint in group_batch_fusion.py (#114993)
c546ca9f80 : AOTAutograd: support mutations on buffers that happen during the bw (#114953)
a85df9eb0b : Assert that output could only be the last node of the FX graph (#114973)
3c78ea4c9d : [DDP][Compile] Test to Ensure torch.compile works w/static_graph=True (#114621)
6e495eef60 : [tgif] allow preserving non-forward methods during deepcopy (#114849)
4ee80fd7f4 : [dynamo] Support UNPACK_SEQUENCE nn.ModuleList (#114959)
68a8d74f3f : [inductor] benchmark epilogue fused matmul template (#114809)
8a51845b38 : [C10D] Add filename to dump finished log (#114957)
9cc040fef6 : Switch env variable use in test harnesses to the non-deprecated names to fix warnings (#114880)
1bcefaf575 : [inductor] post_grad batched linear fusion (#112504)
f073dcd4f7 : Stateful Checkpointing for Distributed [1/N] (#113867)
6f32eb7eef : Add decomp for `replication_pad2d` and use for CUDA deterministic (#111590)
c6e975bc0e : Revert "[Quant] [PT2] Enable batchnorm in _move_exported_model_to_eval (#114547)"
afbaa0c165 : Update oneDNN submodule to v3.3.2 (#112700)
93b1e47586 : [inductor][Observability] Add log for Optimus to enable easier debug (#110452)
32b928e582 : Tests have main linter (#114882)
3fc58a6bbe : Revert "Make offsets dynamic by default (#113734)" (#114889)
ec124b90b8 : [pytree] hardcode values for `none_is_leaf` and `namespace` in C++ pytree (#114858)
5eb36166f8 : Fix hard-coded `cuda` device in `ConstructorMoverPass`. (#114932)
833200c54f : s390x: fix build (#114508)
76362cc9a0 : [BE] Do not use AT_ERROR (#114883)
d90d67a146 : Added a check to prevent accessing blocksize during Tensor.to_sparse … (#114905)
9e94c951a8 : Fix missing meta for proxy.node (#114659)
57083542ee : Added support for custom pre-grad passes (#113823)
25b83521be : [c10d] Log NCCL trace buffer size (#114926)
9a075d9a8f : Update expected values after #114828 (#114918)
67562c8cf8 : Add DALLE2_pytorch to skips (#114924)
38e1440bae : [MPS] Remove redundant topk test and move all pad tests inside a class (#113313)
88a659e752 : [MPS] Move non-nll loss tests outside TestNLLLoss (#113312)
4875e4d63f : [tp] delete dead code (#114731)
1b27eae65e : [MPS] Fix out-of-bounds fill to sliced tensor (#114838)
aa390cec21 : [profiler] Fix description to use nelems rather than size (#114735)
373f2060ba : fix extending torch native API docs (#114863)
5687285ca5 : Skip quantization tests running from BaseTestQuantizePT2EQAT_ConvBn (#114829)
d6c0d1b58b : [pytree] support collections.deque type for Python pytree (#113256)
0019196f1b : Refactor `move_constructor_to_cuda`. (#114626)
9267ab9032 : [executorch hash update] update the pinned executorch hash (#114915)
ab5385fc50 : [Dynamo][6.3/N] Further cleanup torch.py (#114669)
64fd706b21 : [quant][pt2e] Add generate_numeric_debug_handle pass (#114315)
2dd2fb91d9 : [DeviceMesh] Add get_local_rank() API to DeviceMesh (#114709)
fb325bbd46 : Move class definition of DebugInfoWriter to TraceUtil as well (#114901)
2a2f74727a : [dynamo, test] add test for backend registration API (#114908)
033f98b7e0 : Remove confusing warning message from SDPA about mask alignment (#114909)
235eaabfed : [inductor][easy] print out exception message upon failing to write to a file (#114836)
1aa54bdebf : [ONNX] Fix op level debug on complex dtype support (#114885)
1d95644740 : [Execution Trace] record root rank for broadcast/gather/reduce/scatter (#113828)
6cba8b584d : [Dynamo] Support torch.cuda.amp.custom_fwd/custom_bwd by inlining (#114891)
7f40640342 : [Dynamo] Support torch.amp.autocast as decorator (#114845)
ad09d81694 : Allow functionalization to work with optional mutable (#114803)
7b3e45be59 : [DeviceMesh] Rename get_dim_groups to get_group (#114708)
38ae17d166 : [dynamo, docs] update dynamo backend registration docs (#114820)
1f845d5898 : [CI] Fix a REQUIRE_HIGHER_TOLERANCE comparison bug (#114870)
386b9c2adc : build small pip wheels for CUDA 11.8 (#114620)
2ab2e8e1c0 : [pytree] support collections.defaultdict type for Python pytree (#113255)
baeb0705fe : [ONNX][Bench] Add warmup for onnx cuda runs (#114821)
c867fddab5 : [inductor] Fix in CppPrinter._print_Pow (#114872)
81adbb6131 : Sort the output of TORCH_LOGS=help (#114657)
b35ca2cb94 : Better error message for misconfigured torchbench model (#114827)
57e482010a : Fix build-deps in benchmarks/dynamo/Makefile (#114815)
7b3429d97c : Fix error with int+SymBool (#114828)
2a3d8e50fb : [pytree] test aligned API signature for C++ and Python pytree (#112485)
e6b3a8ce5f : [export] Refactor export() and separate the non-strict part. (#114697)
e3c42d3fb3 : Inductor cpp wrapper: fix buffer free in non-AOT mode (#114741)
f93ea14309 : [dynamo] Added support for math ops on ints with dynamic shapes (#114507)
69f112d586 : Call triton bsr_dense_mm/bsr_dense_addmm kernels on mm/addmm float32 inputs when appropriate (#114757)
d4128b164d : Fix nn.utils.parametrizations.weight_norm for BFloat16 (#114785)
272e38e78b : [DeviceMesh] Update DeviceMesh's hash (#114812)
db698f733d : Update fbgemm_gpu pin (#114847)
92cd78b1df : [C10D] logging/comment clean ups (#114625)
5c3f03e2dd : [inductor] add a config to specify the shape attribute for the generated svg graphs (#114811)
e97e2ff445 : [CI][MacOS] Cleanup left over local site-packages (#114843)
8ae3835323 : further deprecate PairwiseParallel and SequenceParallel from test (#114402)
c1e51fcbfc : [ONNX][Bench] Relax tolerance for cuda accuracy check (#114767)
fd7201029a : [Quant] [PT2] Enable Inplace Dropout in _move_exported_model_to_eval (#114725)
06eb28c32a : [executorch hash update] update the pinned executorch hash (#114814)
bab054063c : [Quant] [PT2] Enable batchnorm in _move_exported_model_to_eval (#114547)
4ed9e65038 : [C10D] Add time_created_us to flight recorder (#114810)
1f5726708b : [PyTorch][ET] Collect Execution Traces in Chakra schema (#114753)
3b7d60b6ff : Fix keep-going (#112098)
d5544125a0 : [distributed] NCCL flight recorder timeout fix (#114804)
e70a7c3296 : [CI] Update torchbench pin (#114694)
f1fe0b685c : [export] Remove combine_args_kwargs (#114782)
165f4f6ccf : [PyTorch] Redirect c10::optional to std::optional (#101995)
013675ff59 : Revert "Add decomp for `replication_pad2d` and use for CUDA deterministic (#111590)"
9f3ec2ad45 : deprecate PairwiseParallel from test (#114314)
5262484ece : [easy][aotinductor] fix typos & add static typing (#114728)
4ba649e207 : [FSDP][state_dict] Avoid assigning the root _device_mesh to the children _device_mesh (#114384)
8cfc95368f : [Experimental][ONNX] Export with symbolic shapes in proto (#112179)
f0cc6364ed : [export] Remove convert_to_cpu flag (#114775)
34ea0a2bdc : [PyTorch][Vulkan] Create context for layernorm (#114701)
597d3fb86a : Add additional guard for index_put fallback for bfloat16 on whether it's accumulating or not (#114788)
80ae00d11a : [AOT Refactor] jit compile runtime wrappers (#114564)
741414b739 : [AOT Refactor] dispatch compile graph (#114563)
abb84051a3 : [AOT Refactor] alias runtime wrappers (#114562)
4d4093a5de : [AOT Refactor] traced function transforms pt. 2 (#114561)
dab89d546c : [AOT Refactor] traced function transforms pt. 1 (#114559)
0f41a0e99d : [AOT Refactor] (missed) graph signature to i/o analysis (#114558)
5ab61c1ae1 : [AOT Refactor] runtime wrappers (#114557)
7eafdee4d6 : [AOT Refactor] input/output analysis (#114556)
7cb2e8387b : [AOT Refactor] collect metadata analysis (#114555)
e9b03ac36d : [AOT Refactor] subclass utils (#114554)
721d99181e : [AOT Refactor] schemas (#114553)
1971eda1db : [AOT Refactor] functional utils (#114552)
850887b0de : [executorch hash update] update the pinned executorch hash (#114717)
ec4b59305b : [AOT Refactor] logging utils (#114551)
41c1090e48 : [AOT Refactor] utils (#114550)
b5c4b1d9fe : Make Float8 types serializeable (#114662)
fe7b845c8d : [tgif] preserve non-forward method during torch package serialization (#114702)
f1286161a6 : Add decomp for `replication_pad2d` and use for CUDA deterministic (#111590)
0ced55e06c : Optimize `inspect.stack()` call in caffe2/torch/library.py (#114700)
acdb278144 : [BE]: Enable more ruff PLW checks. Disable one PLR that is preview. (#114759)
7c1a5012f0 : [BE][SparseAdam] cleaner way to verify no sparse params (#114425)
febbc48f43 : [DeviceMesh] Make our mesh_dim kwarg naming consistent (#114707)
d197f5c72b : Remove unused call to `inspect.stack()` in torch/_custom_op/impl.py (#114698)
a9d5133207 : [ez][doc] Fix sample code in onnx_dynamo.rst (#114770)
ffa974b940 : [CI] Dump more detailed error msg in PT2 integration tests (#114683)
e38a3a6079 : Revert "[dynamo / DDP] - lazily compile submodules - to propagate real tensor strides to backend compiler (#114154)"
83c0763dda : [CI] Use linux.12xlarge for cpu_inductor integration tests (#114729)
c1f7d4ad6a : [Inductor][fx pass] Refactor code to easily add pointwise op to do the batch fusion (#113381)
ba4285bd9e : Deprecate primTorch module, replace it with decompositions in module Owners (#114754)
b6df841460 : Fixed an issue where a user-specified default device clashed with the… (#114560)
b20330ef81 : [CI] Test PyTorch on M1 using OpenMP (#114738)
e891a3bba9 : [releng] Add release 2.2 to Release Compatibility Matrix for PyTorch releases (#114758)
4a4c9fb0b8 : [ROCm] Add ROCm AMDGPU support for inductor cpp codegen (#105141)
a3bbf9ce3e : [BE][RelEng] Remove `dynamo` extra (#114720)
b6a30bbfb6 : [Dynamo] Forward fix dynamo trace rule test failure due to landing race (#114739)
d2f4215dbb : [quant][pt2e] Fix the order for implicit sharing code (#114704)
7692595834 : Use different conv layout optimization heuristics for inference (#114600)
4e38178bb8 : [Reland] [1/N] Fixes clang-tidy warnings in header files (#114668)
c10893654e : [export] Fix run_decomps to work with fake mode (#114714)
a076a74f11 : [Nested Tensor] Add xpu device in assertion for nested tensor creation (#114664)
69c4819f53 : Add bsr_dense_addmm triton kernel (#114595)
57a5a687b0 : [Dynamo][6.2/N] Dump the in graph function list(~2600 ops) and add unit tests. (#114196)
05f071d922 : [export] Fix state dict device serialization (#114695)
7c8d3639cf : Revert "[fx] log the node when it's get eliminated (#112684)"
64ccdd4afb : AOTAutograd: keep input mutations in the graph if they are under no_grad, even if they require_grad (#114646)
ce00c8fb45 : [PyTorch] Remove hardcoded device=cuda in test_aot_inductor (#112797)
5b9add666f : [PyTorch] AOTI: Emit CACHED_TORCH_TYPE only as needed (#113997)
73a661abf1 : Stop using excess memory in generate_opcheck_tests, re-enable fbgemm TBE tests (#114641)
6256d3710e : [fx] log the node when it's get eliminated (#112684)
24f06c7783 : [no ci] Add `.watchman` to .gitignore (#114718)
48820c928c : Revert "[test] AOTAutograd: support mutations on buffers that happen during the bw (#112906)"
4bfb19827e : Cleanup `.watchman` file (#114716)
ae593d0393 : [sparse][semi-structured][inductor] meta registrations for _cslt_sparse_mm + additional stride checking in test. (#114685)
43d0659d74 : [C10D] Fix DUMP_ON_TIMEOUT env (#114699)
bc34f02c38 : [BE][Easy]: Apply RUF019: remove duplicate checks for dict access (#114478)
c8974d649d : [test] AOTAutograd: support mutations on buffers that happen during the bw (#112906)
11277cc510 : [CI] Remove an exception catching for Triton compiler error (#113064)
3fccc0446c : Add dtensor and fsdp/2d tests to inductor_distributed CI (#114642)
765d4599ee : Give users control over packages in torch.utils.collect_env (#112993)
ce4bff4013 : [dynamo] fix functools.wraps on nested functions (#114279)
a26d747615 : [PyTorch][Vulkan] Fix matrix multiplication performance test binary (#114624)
d114f31b30 : add testcase when bytecode hook changes the bytecode; fix code map (#114487)
47e6cc4d22 : Remove yet more type-ignores in dynamo/inductor (#114684)
9f073ae304 : [BE][Easy]: add some PLR pylint checks and exclusions to ruff (#114519)
74e10f0f60 : [inductor] Fix torch.split bug on unbacked symint (#113406)
4aa2c51a09 : [doc] fix typo on graph 3 that is recorded (#114666)
4a35ec3c0e : [docs] correct the code for cudagraph trees integration (#114583)
44c9e4cbf0 : [C10D] Decouple PGNCCL desync from dbg dump (#114614)
cef79c0df4 : [inductor] `_sparse_semi_structured_linear` fallback - no meta registration; not on testing path (#114477)
ddf1cb7870 : AOTAutograd: handle set_(), detect metadata mutations that cancel out (#111554)
e83c05c833 : [ONNX] Add ONNX ExportedProgram tests (#114633)
39f16c221e : Adding event_tracer evalue logging calls in codegen (#114584)
e6a8052051 : [C10D] Flight recorder - disable c++ stacktrace by default (#114651)
b060694088 : Add `bits` dtypes to `torch._C` stubs (#114661)
0bef97fac3 : [dynamo] Support itertools.groupby (#114192)
cc7a969bb3 : [FSDP] Added test for `ignored_states` + auto wrap (#114612)
79ee99e6d2 : [easy] Dispatch torch.from_numpy to torch.as_tensor (#114609)
0bb2600c28 : Allow to differentiate through NumPy code (#114608)
89a1fe6966 : [pytree] register pytree node type in both C++ pytree and Python pytree (#112111)
088fc7779e : Eliminate unnecessary copy in CUDA addmm with sparse compressed block operand (#114484)
00412e6dfa : [export] Add meta to params (#114622)
95aec251aa : [Quant] [Inductor] Enable the Inductor Lowering of QConv2d post op hardtanh (#114580)
8c1f65dc2b : [Quant] [PT2] Add Hardtanh and ReLU6 into X86InductorQuantizer Conv2d Unary Annotation (#114579)
8a35a68bb7 : [Quant] Enable QConv2d with hardtanh post op (#114578)
06abac971a : [FSDP] Simplified FSDP wrapping in ignored module test (#114611)
5cfa0647a7 : Update mypy to 1.7.0 (#114160)
71b742b42c : [inductor] Remove more type: ignore comments (#114162)
3f574eadb4 : [dynamo / DDP] - lazily compile submodules - to propagate real tensor strides to backend compiler (#114154)
6636c2b178 : [executorch hash update] update the pinned executorch hash (#114648)
8933ff3595 : Make torch::jit::module movable (#114041)
2f875c74bf : Print ghcr docker pull during build/test (#114510)
0de67e7949 : [cpu] Modify inductor opt flag (#113347)
11f11e95df : [Quant] [Inductor] Fix an issue in QConv Binary Pattern Match (#114541)
8556a09d44 : Require less alignment for attn bias (#114173)
4abf2b2261 : [dynamo] fixed record_replayer issue when TORCH_COMPILE_DEBUG=1 (#114623)
2333d381b2 : Make 'distributed' TORCH_LOGS include ddpoptimizer (#114376)
ae40a3ebcf : [inductor] added a config to dump profiling results to a file (#114587)
6ae0554d11 : Enable the lowering of quantized reshape (#114443)
4ba3e6758d : Canonicalize runtime asserts (#114509)
74370a8a9d : Add adaptive_avg_pool2d and flatten into x86 Inductor Quantizer recipe (#114442)
e25b146b8c : [BE][Easy]: Enable flake8-exe rules in ruff too. (#114521)
304ea761f5 : [executorch][be] update test_emit to use export (#114294)
cf9f3ae8d8 : Skip an example of test_instance_norm when running internally due to its size (#114452)
e592b9a469 : [Quant] [PT2] Fix an issue in Conv Binary Quantization Annotation (#114540)
b1fb591272 : [replicate] Simplify replicate() init logic and remove unnecessary variables in _ReplicateState (#113679)
dffa5f3f23 : [dynamo][reland] `ExecutorchCallDelegateHigherOrderVariable` - add sanity check that input and output tensors are disjoint (#114167)
a0be4b7ea7 : [fx] Update symbolic_trace nn_module_stack (#114422)
f505d76462 : Bug fixes to DDP _update_process_group API. (#114194)
7c98bac4a0 : [BE] Speedup register schema compilation (#114438)
e4b1378a92 : Fix dynamo test_logging handling of partial qnames (#114429)
2ea2421b44 : Skip unit tests that fail on MI210 runners (#114613)
2ac0b61e60 : [HigherOrderOp] dedup repeated get_attr placeholders in branches of cond (#112874)
4c794f2ef1 : Reinplace foreach when safe and allow aliasing during lowering (#112440)
e0d2a24967 : Reland "[export] Support user input mutation. [1/2]" (#114496) (#114596)
800cf5f7cb : Add USE_C10D_NCCL around NCCL trace utils (#114597)
69024883fb : Make dynamo's test_logging print helpful error (#114428)
7fa1251080 : [BE][Easy]: Enable NPY lint rules for ruff (#114476)
1793ef77c6 : [BC-breaking] conv1d & conv3d (#114594)
4bb3a02d02 : [BE]: Enable Ruff + Flake8 G201,G202 logging format rule. (#114474)
3a4dea99df : ROCm triton commit pin update (#114348)
bcfca41a2a : [Inductor] fix wrong Inductor UTs (#114504)
9fd447c346 : [CI] Bump up the graph break count for DALLE2_pytorch temporarily (#114598)
56a95afb42 : [RelEng] Pin disabled and slow test for release (#114515)
cff84871ce : [reland][opinfo][fix] conv3d & fix conv{1, 2}d for neg dilation|groups & add ErrorInputs for conv ops (#114589)
ccb1de3595 : Revert "[inductor] Fix torch.split bug on unbacked symint (#113406)"
fa1ccc34c4 : Revert "[export] Support user input mutation. [1/2] (#114496)"
8232d4d1c3 : Revert "[BE]: Enable Ruff + Flake8 G201,G202 logging format rule. (#114474)"
150aaf46ca : Revert "[opinfo][fix] conv3d & fix conv{1, 2}d for neg dilation|groups & add ErrorInputs for conv ops (#113885)"
68a36d2faa : [dtensor] refactor some existing test util to use comm mode (#114404)
b62c0d96bc : [export] Support user input mutation. [1/2] (#114496)
624f202522 : [dtensor] add CommDebugMode for debugging (#113592)
081c5b3adc : Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926) (#114526)
4fa1ff8404 : [opinfo][fix] conv3d & fix conv{1, 2}d for neg dilation|groups & add ErrorInputs for conv ops (#113885)
028071c4a1 : Fix test assertions in test_min_max_nodes_parse. (#114537)
bbdd9b059f : [executorch hash update] update the pinned executorch hash (#114486)
d37c4c6995 : Update `torch.compiler_troubleshooting.rst` (#114530)
0f5e24bda9 : Properly type CachedFunction & rename to CachedMethod (#114161)
d30497f6b6 : [BE]: Enable Ruff + Flake8 G201,G202 logging format rule. (#114474)
c6d88604d5 : [Inductor] Fix mutation tracking of ConvolutionBinaryInplace (#114501)
0a063ad2c0 : [inductor] Pass None and skip constexpr in custom Triton kernel calls from C++ (#114475)
cd7d6938c1 : [inductor] Fix torch.split bug on unbacked symint (#113406)
51390722e9 : Fix ConvolutionBinaryInplace using target node (#114436)
07e00de8d7 : Add missing member initialization in c10::ExtraMeta constructor (#114448)
dad3cc4d02 : Fix type for keep_inference_mutation flag (#114482)
fa71f5efdc : [BE][aot_autograd] Remove unnecessary fields from ViewMutationData (#114481)
e6e650d5eb : [BE][aot_autograd] Remove num_mutated_inputs (#114479)
a378ae33e9 : [BE][aot_autograd] Remove mutated_inp_indices (#114421)
b76e2949f7 : Fix pool_size type in TaskThreadPool (#114063)
a28876832c : Fixed an export problem when moving tensors to CPU during `torch.export.save` (#114029)
fd1a01a393 : Set default LR value of SGD to 1e-3 (#114467)
85aa372374 : [inductor] Fixed conv issue with dynamic shapes (#114351)
01366efcc9 : Revert "[pytree] register pytree node type in both C++ pytree and Python pytree (#112111)"
a76bb5d84d : Add support for models with mutated buffer on torch.onnx.dynamo_export (#112272)
7daeb6509f : Update audio pinned commit nightly (#114426)
6f340c6f30 : Handle the case when opening a reverted PR with deleted head branch (#114423)
a43edd836c : Revert "Add support for models with mutated buffer on torch.onnx.dynamo_export (#112272)"
066e072524 : Retry #112889 (Opportunistically use ncclCommSplit when creating new NCCL groups) (#114385)
ed05af278c : [DTensor] Passed `dynamic=False` for compile tests (#114390)
34326e43eb : [DTensor] Made `DTensorSpec` hash recomputation lazy (#114379)
36763d3135 : [ProcessGroupNCCL] Move new trace utils (#114367)
c340db56d5 : [executorch hash update] update the pinned executorch hash (#114427)
088043fc49 : [FSDP] Passed `TORCH_NCCL_DESYNC_DEBUG` instead of `NCCL_DESYNC_DEBUG` (#114432)
d18e6b07aa : Overload vec::dequantize to eliminate rounding error for quantized sigmoid (#114098)
c4a22d6918 : Add support for models with mutated buffer on torch.onnx.dynamo_export (#112272)
b27565ad7d : Forward fix D51468211 (#114381)
7a697c4683 : [RelEng] Tag docker images for release, pin unstable and disabled jobs, apply release only changes (#114355)
2bae888f65 : Automated submodule update: FBGEMM (#113977)
272b40aee5 : Revert "deprecate PairwiseParallel from test (#114314)"
f961bda939 : [export] Move serialized custom class objs to toplevel (#114371)
6a86cf00ad : [CUDA][cuBLAS] Remove explicit cuBLAS workspace allocation for CUDA 12.2+ (#113994)
5f504d1de7 : Check for boolean values as argument on pow function. (#114133)
aca6446a6e : [executorch hash update] update the pinned executorch hash (#114325)
6f3cd046ab : [BE] remove skipIfDynamo for some module hook tests (#114387)
2f536ff92c : Refactor values kwarg in foreach tests (#112781)
ea7d70aecc : [BE]: ruff FURB136: replace ternary with min/max (preview) (#114382)
88a8a0daa4 : Revert "Require less alignment for masking (#114173)"
e7726b596e : [FSDP] Added DDP parity test for CPU training (#114372)
1b66701379 : ci: Bump TorchAudio, less third_party deps (#114393)
d416e5b34f : [torchrun] fix incorrect warning for non static backend (#114335)
f882c175d8 : Require less alignment for masking (#114173)
07b6f377b4 : deprecate PairwiseParallel from test (#114314)
9d68cfee0d : [sparse][semi-structured] Make cusparseLt handle + flag thread_local (#114273)
84909fef52 : Add meta registration for aten.linear_backward (#114359)
0f887a6d1a : limit fused kernel num args. (#113131)
1f1ff629a8 : Use parent class attribute supports_out for foreach_zero opinfo (#112778)
d6578b3678 : [quant][pt2e] Refactor some internal code for observer insertion (#113500)
b927a4e2ca : Revert "Opportunistically use `ncclCommSplit` when creating new NCCL groups (#112889)"
00ae299016 : [c10d] Remove unused function (#114341)
9fcf1f9632 : [export] Update schema (#114172)
9bab96c78c : [ONNX] Consider negative dim in _index_fill_reshape_helper (#114050)
f2ca07b680 : [ProcessGroupNCCL] Remove jumper to UCC (#114170)
d7f698102e : Disable MPS tests on macos-m1-13 runners (#114360)
324cde59b2 : [MPS] Fix test_copy_cast_no_leak (#114313)
33fad1c0d4 : [AOTI] Fix a weight loading issue when the weight size can be 0 (#114280)
2f3beb715c : Revert "Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926)"
e239a2b2d7 : Revert "[dynamo / DDP] - lazily compile submodules - to propagate real tensor strides to backend compiler (#114154)"
b4faa6bfa4 : [dynamo] report guard failure user stack, fix incorrectly skipping interesting files (#114053)
2b72543f36 : Solving pickle error when saving CyclicLR state_dict (#110931)
a0e3321f0c : [inductor cpp] vectorize embedding lookup (#114062)
3e1abde46d : Revert "AOTAutograd: handle set_(), detect metadata mutations that cancel out (#111554)"
172a103857 : [dynamo] `strict=True` kwarg for zip (#114047)
c77a4a4096 : Fix compiling add with torch.int32 and scalars (#113965)
9f0deb132b : [Inductor] Refactor group/batch fusion to support user defined execution order and configs (#113738)
bebe66e262 : [ONNX] Benchmark to save sample inputs to disk before running (#114163)
bd44bdb675 : [ONNX][dynamo_export] Turn off opmath for type promotion (#113780)
e7326ec295 : [DTensor] Computed `DTensorSpec` hash lazily (#114322)
c5ddfa79b3 : [HigherOrderOp] add output tensor meta check for cond (#113900)
9e657ce2ed : [HigherOrderOp] set should_flatten_output=True for cond (#113819)
e0ec71deab : Fix module: distributed labeler (#114324)
0a33cf95c6 : Add python-3.12 to triton wheels build matrix (#114327)
2c4930a91d : Revert "[fx/DDP] add nested ctx_manager test for DDP Dynamo (#114056)"
db8f9686a7 : [cmake] set 'mcpu=generic' as the default build flag for mkldnn on aarch64 (#113820)
6187153753 : Consolidate sym/non-sym overloads for _make_wrapper_subclass (#114236)
a785fbe513 : [reland][quant][pt2e] Refactor insert observer to do sharing checking in the same place (#113458) (#113920)
3f736c2d77 : Add ONNXProgram.__call__ API to run model with ONNX Runtime (#113495)
044cd56dcc : [Easy] make @markDynamoStrictTest set nopython=True (#114308)
d5d62e8561 : [fx/DDP] add nested ctx_manager test for DDP Dynamo (#114056)
4d07428ede : Fix for out of bounds read in mobile interpreter FORMAT opcode handler (#110303)
9cbee4757e : [Autotune] Reduce XBLOCK for outer reduction (#114284)
995fae6060 : Move small pypi build as default for linux cuda 12.1 (#114281)
628586606e : [test] fix broken test, enable test (#114235)
066ac56e02 : ci: Clean up logic for `merge -r` (#114295)
afdc528520 : Print the index and summary of the SampleInput that failed an OpInfo test (#99444)
7fc292930c : Add support for `torch.Generator` type in TorchScript (#110413)
b88abb1674 : [ONNX] Fix export issue of aten::layer_norm in opset 17 (#114058)
62de29d06f : [optim] be explicit about CPU scalar tensor dtypes (#111008)
266054c3ca : [dynamo / DDP] - lazily compile submodules - to propagate real tensor strides to backend compiler (#114154)
54d04553ea : [fx, DDP] fx.split_module will setup/unwind autocast & grad_mode (#113374)
6ff7260700 : [CI] Switch to check against expected result files for cpu inductor integration tests (#113668)
a9f9f98e2f : [CI] Switch to check against expected result files for dynamo_eager and aot_eager benchmark tests (#113559)
212f668408 : [CI] Remove CI skip list for inductor integration tests (#113446)
3c8a4f01b9 : [CI] Increase the shard numbers for torchbench tests (#113575)
799d8c3035 : [CI] Rename the inductor test config names for dynamic shapes tests (#113574)
ebeaec71bf : [aotinductor] don't generate python profiling code in the cpp world (#114182)
64a5372e6c : Opportunistically use `ncclCommSplit` when creating new NCCL groups (#112889)
3b108a150a : A fix for reduction + pointwise + multi-level reduction optimization (#112935)
2abfb8ec7d : Correctly codegen math.inf in Inductor (#114159)
c47d2b8035 : Add Half support for CPU autocast on eager mode (#112484)
4e4a6ad6ec : [pytree] register pytree node type in both C++ pytree and Python pytree (#112111)
85b97605ab : Enable set sequence nr (#114120)
1a3dbf57ca : vmap: simple inplace batch rule (#113513)
f66add9b85 : [dynamo] graph break on `np.ndarray.tobytes` (#114208)
7694b05416 : [DTensor] Reduced to one `isinstance` call in `is_shard` (#114140)
ef90508f75 : [AOTI] Support ReinterpretView in abi mode (#114169)
b5dd37f23e : [MPS] Fix memory leak in copy_from_mps_ (#114197)
4b7f9fa436 : Meta register all foreach ops (#112281)
1f8d00c5a3 : [inductor] Added decomposition for upsample_nearest_exact Nd (#113749)
7733599b2e : update pthreadpool to 4fe0e1e183925bf8cfa6aae24237e724a96479b (#113904)
2aa486de9b : vendor packaging.version (#114108)
8ec59d3553 : Revert "[dynamo] report guard failure user stack, fix incorrectly skipping interesting files (#114053)"
dd6ef0877e : Revert "[inductor cpp] vectorize embedding lookup (#114062)"
1efff12a88 : [pytorch-vulkan] BinaryOps auto convert int tensors into float (#114145)
5f0d72124e : Revert "Print the index and summary of the SampleInput that failed an OpInfo test (#99444)"
6c597ef015 : [PyTorch] Fix attr cleanup after constant folding (#113957)
2c0474c02d : [inductor cpp] vectorize embedding lookup (#114062)
8f8722e3f1 : [nccl-pg] Avoid using NCCL_ prefix for non-NCCL env variables (#114077)
e122c90d3c : [executorch hash update] update the pinned executorch hash (#114008)
99af534e93 : [docs][jit] Mention dynamic-shapes settings in jit/OVERVIEW.md (#113964)
7ea184d7e3 : Handle item() on boolean tensor (#114157)
18e1a37c4e : [ao] updating embedding_bag support for fx and eager (#107623)
dc65f6c601 : [c10d] Remove deprecated multi-gpu-per-thread APIs (#114156)
f67696f45e : Update TorchFix to 0.2.0 (#114190)
e76c54bd87 : [vision hash update] update the pinned vision hash (#113217)
bbc39b7bb4 : [dtensor] enable RMSprop optimizer foreach support (#114152)
bcd310a7ad : [dtensor] enable adagrad foreach support (#114151)
9b50611002 : [dtensor] add test for SGD optimizer (#114150)
b09bd36402 : [dtensor] add test for adamw (#114149)
36869463e0 : [DTensor] add forward layer norm test (#114174)
87925789ae : Make V.graph properly typed (#114025)
4812a62ca0 : [inductor] Delete more type-ignores in dependencies.py (#114013)
a911b4db9d : AOTAutograd: handle set_(), detect metadata mutations that cancel out (#111554)
81f93991d3 : Update merge rule to allow pytorchbot to land ExecuTorch hash update (#114180)
e8996055a9 : [iOS][PTMCoreMLCompiler] update other deprecated function (#114177)
77f16eb00c : Fix prod double backward when there are 2+ zeros (#113969)
85ce8a602b : Pin pywavelets to 1.4.1 (scikit-image dependency) (#114146)
585332fb8d : [ProcessGroupNCCL] Fix avoid-record-stream warning for P2P (#114168)
6ec344b08f : Fix empty cpu tensor output in cudagraph (#114144)
3e49621f3b : [DTensor] Cached hash for `DTensorSpec` (#113915)
fb25fd6f86 : [DTensor] Replaced neg dim normalization with assert in helper (#114141)
d70857bd9e : [pytorch][lite interpreter] add tracer run under inference guard (#114003)
e7f12b1eb0 : Print the index and summary of the SampleInput that failed an OpInfo test (#99444)
e4a88d9581 : Convert SymInts to SymFloats with SymPy (#113683)
4182092feb : [reland][HigherOrderOp] remove _deprecated_global_ns (#113813)
c1d9d4a2b5 : checkpoint_sequential warns if use_reentrant not passed explicitly (#114158)
2ca1119d53 : Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926)
7afceb9f64 : [AOTI] add float support of triton (#114014)
ae00d9623e : [inductor] Add ABI shim function for torch.scatter (#114027)
4b07fca7d7 : [export] Allow shifted constraint ranges in dynamo._export (#114024)
c39c69953f : [DTensor] Used new placements for neg dim in `distribute_tensor` (#113930)
e2095a04ae : [DTensor] Ensured `grad_placements` was tuple (#113925)
f4ffd46c08 : [DTensor] Used new placements for neg dim in `from_local` (#114134)
b41ad7d695 : [DTensor] Used new placements for neg dim in `redistribute` (#113924)
77e058f055 : [DTensor] Made `_Partial`, `Replicate` frozen dataclasses (#113919)
97d2b439ce : [BE] Use definitely_true/sym_eq for same_meta (#114137)
13dd7f0c98 : [export] Add missing builtin ops. (#113982)
8c4812be80 : Replace expect_int with guard_int (#113921)
59ad51e10a : Insert deferred runtime asserts into Dynamo FX graph (#113958)
473b17c4c1 : Run sympy expressions with Python values / FX tracing (#113978)
cd2798943d : [dtensor] support convolution ops (#113123)
af51c948ac : Add mechanism for make_fx to not error on data-dependent-ops (#114129)
d1bb0b0e4d : Mark more built-in ops as pt2_compliant (#114128)
811bec46ef : Don't DCE item nodes if they're float (#114135)
0c450f4504 : [functorch] fix potential race condition while loading `vmap` decomposition library (#113520)
2b97f5a9a1 : Disallow fp8 type promotion (#113975)
0bb29f9450 : [dynamo] Guard on `HAS_GRAPH_BREAKS` if graph breaks are present (i.e. cache miss if compiled object requires nopython) (#114073)
2b4c489f71 : [lint] Install compatible numpy for 3.8 (#113869)
fc39efc4c1 : Fix filename typo 'funtionalized' (#114132)
934e9c3346 : Boolean masking backwards doesn't work even with dynamic output shape ops, break accordingly (#114126)
039a4689a2 : Update sdpa doctstring to point to flash-attn-v2 (#114124)
9d2425c8a4 : [dynamo] Be clearer about dict subtype source availability (#114069)
100b9952b1 : [dynamo] Fix user defined object sourceless callable (#114066)
e4ec5545cd : [export] Turn on verifier for serialization. (#113980)
d1ae5efa94 : [torch][fsdp] More informative assertion error when rank mismatch (#113765)
59bc98e4ae : [EASY] Rewrite test_anomaly_aot_autograd to more reliably trigger error (#114122)
95eab508e3 : [caffe2] Add non-x86 stub definition for `libraryFor` too (#114023)
aeb5fd52c7 : Remove dead tensor_has_hints. (#114071)
7d5e8c1d51 : [BE][easy]: Update ruff to 0.1.6 (#114125)
cbc6873538 : [Dynamo][Forward fix] Add torch.ao back to is_allowed list (#114016) (#114111)
140c54e6cc : [xla hash update] update the pinned xla hash (#110377)
f36d09fcb7 : Revert "Add function to materialize COW storages (#113396)"
fe428a284b : Revert "Add `torch._lazy_clone` to create COW tensors (#113397)"
d40d72d664 : Revert "Skip test_lazy_clone for Inductor (#114012)"
7d0339fb9a : Revert "[Dynamo][Forward fix] Add torch.ao back to is_allowed list (#114016)"
7963aaac41 : add Half support for AdaptiveAvgPool2d and AdaptiveMaxPool2d on CPU (#102079)
5a96a42cea : [AOTI] Improve the two-pass wrapper codegen (#114067)
226384b460 : [2/N] Cleanup header inclusions in torch_cpu by iwyu (#109964)
0bd4d1f4ab : Add sparse tensors support to dataloader. (#112842)
12f95df0e9 : Eliminate unnecessary multiplications by 1 in addmm with sparse compressed tensor operand (#114026)
826ab0e32d : [dynamo] report guard failure user stack, fix incorrectly skipping interesting files (#114053)
edc5ae3113 : Allow for calling lift_fresh_copy manually (#113923)
72a8329ec9 : [reland][aotinductor] Add example_value metadata to nodes (#113986)
33c6cae13b : [pytorch-vulkan][5/n] Enable BMM with the new packing. Massive refactor. (#113943)
e3eca4c49f : Revert "Convert SymInts to SymFloats with SymPy (#113683)"
fb3bc3949a : [Inductor] remove GPT2ForSequenceClassification from ci skip list (#112100)
84f791e697 : Fix checking symbolic shapes inside torch._check (#113811)
bae61ecb96 : [Reland 1] Cleanup header inclusions in torch_cpu by iwyu (#112311)
68ab458fe3 : Don't recommend max_split_size_mb first (#113481)
d968c4cac3 : [torchelastic] ensure grandchild processes are restarted correctly (#113231)
958f3b0df6 : [nccl-pg] Migrate to getCvar* functions for env variable checking (#113797)
09fe36274a : [Dynamo][Forward fix] Add torch.ao back to is_allowed list (#114016)
b30580e121 : [PT] Include tensor shape info in the error messages of torch split (#113984)
0ec66b3be5 : Convert SymInts to SymFloats with SymPy (#113683)
870539670a : [Dynamo] Support skip/inline function by name and consolidate skip/inline check logics (#113888)
f0dedb340f : [C++] Fix clang compilation issue. (#114017)
11857e9a64 : [Inductor] Allow autotuned argument to be anywhere in the argument list (#114002)
e0c3936843 : [Inductor] Support ReinterpretView in inductor codegen (#113967)
ff7c06a01b : Revert "limit fused kernel num args. (#113131)"
b53d47a719 : [inductor cpp] refactor: CppVecOverrides inherits CppOverrides (#113950)
f8516cef88 : [pytorch-vulkan][2/n] Height packing (#113883)
fdaddec2c3 : make_fx can now SymIntify int inputs (#113452)
33f7c6638f : Guard when fetching non-symbolic value out of Scalar (#113911)
bc0d87cde3 : Explicitly enumerate all method to operator mappings (#113968)
ecd8d388b9 : Skip test_lazy_clone for Inductor (#114012)
caffa44b1c : Correctly use real boolean operators, not bitwise in shape guard prints (#113927)
7b442c2b0a : limit fused kernel num args. (#113131)
5e30741754 : Clean up optimizer imports in test_optim (#113971)
46542f6ce2 : [reland][export] make aot_export_module uses dynamo's fake_mode (#114009)
310e3060b7 : [Caffe2] Handle `cpuinfo_initialize()` failure (#114011)
855a5cf427 : 312 test fix in named tensor and TS deprecations (#113981)
4667e20b3f : Delete a bunch of type-ignores (#113990)
47220bc72a : fixes multiple GPU detected error for test_fsdp_fine_tune.py (#112406)
1567917e5a : [ROCm] Enable several inductor UTs (#112777)
b169f04170 : [ONNX] Fix bench w/ iobinding; Remove cpu fallback (#113703)
d4189d8007 : Extend _TestONNXRuntime to reuses all tests for new model format (#112289)
2efa89a388 : [torch/csrc/onnx] Use nested namespaces (3/N) (#113993)
d6744a698c : [torch/csrc/onnx] Use nested namespaces (2/N) (#113992)
c83a897348 : [torch/csrc/onnx] Use nested namespaces (1/N) (#113991)
e360f4c6dd : [DTensor] Renamed `shard_spec` -> `placements` in test file (#113917)
8372983fe3 : [AOTInductor] Use ProxyExecutor for aten op if c-shim is missing (#113918)
dab272eed8 : [td] Consistent pytest cache (#113804)
033d7b670a : [Dynamo][6.1/N] Refactor out TorchInGraphFunctionVariable and improve heuristic (#113432)
3fc38e6c83 : [GHF] Abort merge on rebase failure (#113960)
a450c784da : [AotAutograd] Move mutations hidden from autograd in graph (#113454)
4d8c73b2b7 : Trivial fix for minor typo in torch.jit._script.py (#113892)
e736d27e38 : [inductor] Fix slice scatter shape calculation (#113838)
e5102ccd27 : [quant][pt2] Support conv1d-bn QAT fusion (#113714)
d40d2709c9 : Minor fix in Unit Test test_max_autotune.py (#113889)
5d439b07ca : Fix failing test_mkldnn_pattern_matcher if built without MKL (#113949)
69d9267c4f : [BE]: ruff - enable PIE804 (#113951)
4b1583fe57 : type-ignore issues exposed by import following (#113979)
0885c58296 : Add Bfloat16 scalar support to gloo backend (#113557)
c435b8c10a : Fix autograd engine callback error propagation from device thread (#113702)
957312a4cf : [ONNX] Relax unsupported node analysis on complex dtype (#113785)
76bf10e551 : Revert "Fix checking symbolic shapes inside torch._check (#113811)"
c51827b8ce : [ez] Hash update to reuse issues again (#113961)
ac08022137 : [BE][benchmarks] Minor comment cleanup, typos (#113898)
00b67193ef : [utils] move `config_typing.pyi `to `torch.utils` (#113929)
a7b701ed21 : Update ExecuTorch pinned commit daily (#113832)
d4bb16f443 : Change functorch import to proxy_tensor import (#113913)
631fb33fd6 : Enable import following in MYPYNOFOLLOW (now MYPYINDUCTOR) (#113830)
0c8362de1a : [dynamo] Make {guards,eval_frame}.py pass follow_imports typechecking (#113721)
e2b114ab9f : [BE] Package dynamic_dims/constraint_dims into CreateSymbolicPolicy (#113802)
dc3d0caab3 : BUG: fix np.ndarray.resize under dynamo (#113931)
6849d75300 : Automated submodule update: FBGEMM (#112312)
7c35874ad6 : Fix for PyTorch mobile flatbuffer loader out of bounds reads (#110162)
9f47580ad7 : [BE] Don't mutate torch.compile global config in tests (#113882)
4f8cb52ed9 : Fix checking symbolic shapes inside torch._check (#113811)
dbb96ef30d : improve annotation device parameters where a device ordinal is allowed (#113647)
a56af02913 : [dynamo] Added support for is_contiguous with dynamic shapes (#113645)
3df2c42921 : [dynamic_shapes] SymNode's `hint` does not always conform to `pytype` (#113848)
a5e4d4f25f : [dynamo] promote skipfiles logging to verbose (#113916)
b62230a685 : [dynamo] remove unused `OptimizeCtx` field - export (#113901)
78318d0249 : [dynamo] Cache size calc for differing config (#111300)
5927e9cbf2 : [dynamo] guarded config (#111299)
7731c97e06 : Revert "Fix checking symbolic shapes inside torch._check (#113811)"
f27ab241a4 : [dynamo] Fix UnspecializedNNModuleVariable's source (#113852)
7c38b76efe : Make offsets dynamic by default (#113734)
c94fdebd3e : [dynamo] chore: Fallback on `const_handler` instead of special-casing on `ConstantVariable` (#113893)
c233cef8fd : [dynamo] Enforce lifetime of output fx graph and its metadata (#113517)
16da135550 : More replacing assert with CUDA_KERNEL_ASSERT in kernels (#113563)
015fd2eb41 : [NCCL PG] Add dumping flight recorder in the NCCL watchdog timeout (#113678)
0ea126e834 : add use_fake_all_gather and use_fake_reduce_scatter to FSDP for ablation studies (#113106)
4979f9c0d7 : [EASY] Support SymInt tracing on broadcast_shapes (#113877)
e8ee14292e : Export `_C` in `torch/__init__.py` explicitly with `from . import` (#113887)
7f224f6714 : Fix checking symbolic shapes inside torch._check (#113811)
237cbd5be6 : BUG: trace frames with numpy scalar -> ndarray functions (#112959)
99b89db174 : [DTensor] Added `op_call` in no-mesh dispatch assert message (#113903)
0894981f6c : [HigherOrderOp][BE] change _make_inlined check callable() (#113881)
ae94c7e491 : [dtensor] add foreach_zero_ support (#113897)
9916d8a9ea : Add `torch._lazy_clone` to create COW tensors (#113397)
e2f090086b : Add function to materialize COW storages (#113396)
a9134fa99a : Skip cudagraphs when there is sparsity (#113791)
31459e3e56 : [ONNX][dynamo_export] Add 'aten::rsub' type promotion (#113697)
b3308c4856 : [FSDP][Docs] Omit "on CPU" (#113753)
2ac33ad98a : [dtensor] group dispatch unwrapping to a method (#113846)
769f924bc6 : robustify parametrize default name (#113856)
03bebd90f6 : cleanup test parametrization (#113855)
277229d0c6 : [dynamo] Fix incorrectly casting `SymNode` to `int` when input is `bool` (#113871)
986634a117 : Add Pass to move constructors from cpu to cuda (#109665)
ec20c9044e : [TD] Fix metric emission for split test files (#113789)
1480c670a0 : [AOTI] Delay the fallback kernel naming decision to the codegen time (#113660)
bab41f44b8 : [dynamo] Fix allow_in_graph decorator doesn't work on autograd.Function (#113510)
3f6e5e87f8 : Revert "[1/N] Fixes clang-tidy warnings in header files (#113608)"
d9f2cf9974 : [BE]: Enable ruff rule PIE800 - unnecessary nested dict expansion (#113880)
bdf0b196db : Quantize bias for conv2d quantized op during setup (#113582)
e19ea53e1d : Add optional torch.export.ExportGraphSignature to ONNXProgram (#113477)
9a9232956f : Include job name in the emitted metrics (#113884)
2530d47cbe : [dynamo] re-add option to log all guard check fails (#113585)
40dfabf970 : Revert "[export] make aot_export_module uses dynamo's fake_mode (#113681)"
2abb04d1dc : [inductor] Relax symbolic guard for sizevars.evaluate_min (#113841)
98df3088c3 : Revert "Make offsets dynamic by default (#113734)"
3c4e4d9947 : Revert "[quant][pt2e] Refactor insert observer to do sharing checking in the same place (#113458)"
de4fd3843c : [Inductor][fx pass] Fix a bug in the merge getitem cat pattern (#113822)
8dc4b12fa7 : [Pytorch][Vulkan] refactor layer_norm (#113676)
0d6d97d956 : Relax constraints on `test_cast_round_trip` (#113872)
c4c45ab9b5 : Fix resize matrix_power.out dynamic shapes (#113695)
8a183bf1ab : [BE] Consistently query tracing context for fake mode in Dynamo (#113768)
3a3a979984 : Add torch.distributed.breakpoint (#113775)
eddce3c054 : [AOTInductor] Rename model_runner to model_container_runner (#111324)
1d96034816 : [BE][easy] Simplify the registration of a few metafunctions (#113635)
ef982418df : Add OpInfo test that tests meta functions binary ufuncs with different dtypes (#113674)
9b3e694f5d : Fix metafunction for many pointwise operations (#113634)
3e3c6cc05e : Do not error when printing view created in no-grad modified in-place in no-grad (#113716)
6cdb6234d6 : [ROCm] Supports ROCm6.0 reorganization and cleanup (#111486)
070b2d3cff : cholesky_solve_backward: speed up using output_mask (#112981)
25fb88cf23 : Add all 3.12 binary build for wheel. Let's see how it goes. V2 (#112882)
275403be16 : [doc] Add nn.parametrizations.weight_norm (#113783)
62d86f27c2 : Revert "Add Pass to move constructors from cpu to cuda (#109665)"
3bac94b107 : Add Pass to move constructors from cpu to cuda (#109665)
7183926622 : [HigherOrderOp][BE] consolidate UserFunctionVariable.call_function pattern to _make_inlined (#113814)
d19cef34fb : Do not attempt to compile unwind.cpp on aarch64 (#113782)
f9bf104c64 : [2/N] Fixes clang-tidy warnings in header files (#113727)
ecf129565b : Avoid adding to lazy device cache if cache size is 0 (#113710)
51cbe780cb : [pytorch-vulkan][1/n] Enable Packing for Vulkan Tensors (#113627)
5fb1d8f18a : [NCCL PG] Enable storing nccl traces into storage and make it configurable (#113503)
c1c4882367 : [aps] Sync thrift (#113810)
8033f65c0b : Don't toggle torch logger to NOTSET if it is not set; always use pre-existing (#113842)
9efbb4ea73 : Make offsets dynamic by default (#113734)
b612e27221 : [Easy] Fix typo in TagActivationCheckpoint comment (#113818)
cffea773e3 : Fix bsr_dense_mm with a non-contiguous out argument. (#113801)
0a9dbbbaad : Make _inductor/fx_utils.py, _dynamo/utils.py pass follow_imports typechecking (#113722)
bbd73c746e : Revert "[ONNX][dynamo_export] Add 'aten::rsub' type promotion (#113697)"
8241fe6edb : [quant][pt2][be] Rewrite QAT annotations using subgraph matcher (#113709)
8efa6ad1fc : [vision hash update] update the pinned vision hash (#113821)
48800e9bb0 : [ONNX][dynamo_export] Add 'aten::rsub' type promotion (#113697)
670311190d : [HigherOrderOp] Move _map.py to _higher_order_ops (#111152)
1364f84b42 : [easy] encapsulate fb changes from OSS (#113677)
cebad9867b : graph break on intermediate leaves that require grad (#113277)
c5f26a409a : Build and test ExecuTorch on PyTorch (#113364)
c41a32a3bf : Move test_utils.py back to MYPY (#113745)
a3b859fc67 : Drop dynamo-specific type hints on Tensor in favor of type-ignores (#113720)
605d274300 : [dynamo] Make {mutation_guard,symbolic_convert,side_effects}.py pass follow_imports typechecking (#113610)
df9acc61fb : [inductor] Make {freezing,ir}.py pass follow-imports typechecking (#113534)
d52b9ba6a8 : [torch.compile + selective checkpoint] Attach `context_fn` to the checkpointed graph module, fixing flaky tests (#112672)
b526aae95a : test_lazy: skip HashTest.Scalar (#112747)
72ce5dd13e : [2D] Remove enable_2d_with_fsdp() API and make remove_enable_2d_with_fsdp private (#112473)
c2c22dc427 : [BE] Some debug logging for track_symint in produce_guards (#113774)
bd6b3c4df4 : [BE][profiler] add test for EventList (#113764)
f8eb46d623 : index put device error checking (#113729)
1e260c851b : [ez] Don't retry onnx in shell (#113803)
5d170fce29 : Revert "Support tensors as Dict keys (#111196)"
463489ec95 : [ez] Add some more pyre related files to gitignore (#113796)
7137f5f8c3 : Revert "[easy]Remove specialized value (#112252)"
c99d88afa4 : [AOTI] Remove try_find_schema (#113617)
b19cf868e8 : Back out "Support fp8 in AOTInductor + support optional<> in C ABI (#112527)" (#113747)
094beca0c6 : [export] make aot_export_module uses dynamo's fake_mode (#113681)
6435fc17bb : Remove ignore_sublcass from FakeTensorMode (#113795)
97a62c715d : [BE] Remove duplicate storage_offset equality test (#113790)
9b736c707c : [Codemod][python/main_function] caffe2: (#113357)
87aeb248c9 : More random stepcurrent (#113620)
4534cf102a : Revert "[funcol] a few optimizations to funcol (#113324)"
dd28006d8d : SGR/Assistant: making sure linker drops unnecessary dependencies (#112871)
585e315b3a : [quant][pt2e] Refactor insert observer to do sharing checking in the same place (#113458)
deec2380c7 : Add 0dim Tensor overload for _foreach_div (#113688)
2164598c40 : Improves comparison of state dicts for Checkpoint E2E Tests (#113181)
275a4521a9 : [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
12b2dd16b0 : [Kineto] Initialize libkineto profilers during torch init process during pybind set-up (#112623)
cc11c0d11b : aot_autograd: keep input mutations on requires_grad=True tensor out of the graph for inference (#113584)
032e5a4528 : handle cross-dtype views during AOTAutograd view-replay (#113416)
720e866d18 : graph break on out= ops with noncontiguous out args (#113267)
05d949279c : [C10] cpuinfo error handling (#113771)
c1315ae2b9 : Only check significant strides in test torchinductor (#113389)
42b2b9e663 : fixed pyi file for ReduceLROnPlateau (#113659)
b3423889fe : [inductor][fx pass] handle numpy compatibility arg names (#113078)
ca9e654353 : [FSDP] Fix FSDP submodule with DeviceMesh does not return DTensor state_dict error (#113593)
277474f1a0 : Revert "[2d] pass shape/stride during tensor unflatten (#113547)"
c678c5ef38 : [doc] caution torch.multinomial usage (#112892)
296c9e3ce7 : upgrade lintrunner to the lowest supported versions on python 3.12 (#113562)
7f9fafed53 : Resolve docstring errors in throughput_benchmark.py, weak.py, _traceback.py, file_baton.py, _contextlib.py, _device.py, cpp_backtrace.py, bundled_inputs.py, run_cpu.py, hooks.py, mobile_optimizer.py, _freeze.py, __init__.py, mkldnn.py, dlpack.py (#113311)
e100ff42fd : Fix chrome trace entry format (#113763)
dedb47d94c : Revert "Fix resize matrix_power.out dynamic shapes (#113695)"
6c187246d6 : Add support for float8_e4m3fnuz and _e5m2fnuz (#107586)
c3918c18b5 : Fix resize matrix_power.out dynamic shapes (#113695)
9146ca6a07 : use sourceless builder for builtin getattr (#113340)
50101d59ba : [export][retry] Move lifted tensors out of state_dict (#113689)
17e2313dd3 : Add an API to DDP for dynamically updating the underlying process group. (#113580)
7f1eda8c29 : Minor: fix a typo (#113648)
757f36b988 : [docs] Fix `torch.compile` "tensorrt" backend docs (#113711)
9b0f2f8d94 : expose sdpa helpers to python (#110496)
78f3937ee8 : [BE] Handle errors in `set_num_threads` (#113684)
1a8d076e0c : [inductor cpp] simplify test for uint8 add/sub (#113407)
dadca7aeec : remove \ in cache_dir (#110945)
fda94124d7 : [inductor] Make {cudagraph_trees,decomposition,post_grad}.py pass follow_imports typechecking (#113609)
6f4409073f : [doc] two diff meanings of rv generated by torch.tensor.geometric_ and torch.distributions.geometric.Geometric (#113183)
fcdfcdeef9 : [inductor cpp] fix non-contiguous reduction store (#113261)
f9ea697112 : [quant][pt2][be] Refactor QAT tests for future patterns (#113658)
77f66ade66 : Revert "use sourceless builder for builtin getattr (#113340)"
84ee7453ad : ci: Add clickable PR link to trymerge (#113712)
92e3f45f0e : Revert "[dynamo] Refactor test cross importing (#113242)"
6bffde99b0 : Revert "[inductor] Move things into torch/testing/_internal/inductor_utils.py (#113275)"
45671be2a0 : Revert "Only check significant strides in test torchinductor (#113389)"
6a25bb8545 : [inductor] use fusion_log for verbose logs (#113701)
1e60174891 : Revert "[dynamo] Add run_inductor_tests entrypoint (#113278)"
9724d0fd87 : docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992)
252e68a83b : Revert "Add support for `torch.Generator` type in TorchScript (#110413)"
c892f1a318 : Doc: Add and fix docstrings for torch.distributed files (#112735)
b8b3c26d3d : If we re-fakeify a FakeTensor with the same ShapeEnv, preserve symbols (#113651)
cab039fe9b : [1/N] Fixes clang-tidy warnings in header files (#113608)
31e16847ea : [doc] torch.tensor.geometric_, torch.tensor.uniform_ fix PMF vs PDF (#113109)
56c453233f : [doc] clarify the range of sampled rv for torch.tensor.exponential_ (#113195)
f5ce4d8baf : Fixed docstring errors in gradcheck.py, forward_ad.py, profiler_util.py, profiler_legacy.py, functional.py, grad_mode.py, function.py (#113266)
28228e1517 : Only check significant strides in test torchinductor (#113389)
cf6e9f572e : Update xla pin (#113603)
91973e1c31 : Issue113185 (#113523)
6b01126df5 : [Easy] [Dynamo] Catch OSError when calling inspect.getfile (#113671)
1d640566d4 : [BE] Do not warn when safely loading legacy dicts (#113614)
538114db65 : [MPS] Fix and refactor unary/binary ops with non-zero offset or non-contiguous output (#97085)
9f71452331 : Disable atomic_add fallback for cpu (#113655)
18d7b8e4f7 : [BE]: ruff apply rule PLW1510 to find silent subprocess errors (#113644)
53e7de4b65 : Issue 112599 - fix pydocstyle errors (#113177)
a05639cea6 : Add some checks about Device and Layout when create/convert named tensor (#113628)
20eaa49dde : [PT-D] Made `_get_registry` return `None` if no APIs applied (#113654)
afef32bd23 : [Pytorch][Vulkan] native_layer_norm (#113573)
b7b2178204 : [BE]: Remove useless lambdas (#113602)
2a8a7425be : Fix to wrap jagged dims for split() / split_with_sizes() (#113591)
ea39cc34f9 : Refactor NestedTensor subclass to remove ragged_size from constructor (#113491)
cdc9a05c89 : cudagraph_trees.py: remove duplicate line (#113624)
149b9dfd04 : [easy]Remove specialized value (#112252)
b0805fa5d0 : Support tensors as Dict keys (#111196)
f22486b0fc : [doc] scale parameter notation for torch.Tensor.cauchy_ is misleading (#113178)
e6bffc6b87 : Fix docstring errors in default_hooks.py, optimizer_overlap.py, checkpoint_wrapper.py, copy.py, benchmark_ddp_rpc.py, utils.py, dependency.py, phony.py, pipeline.py, checkpoint.py, worker.py, batchnorm.py, quantization.py (#113511)
3b80577212 : [Memory Snapshot] Add timestamps to memory events collected in snapshots (#112266)
5465f2bb6c : Revert "Improves comparison of state dicts for Checkpoint E2E Tests (#113181)"
79e3833703 : Enable clang-tidy in torch/csrc/quantized and some fixes (#113604)
14eb92cb43 : [quant][pt2][be] Remove add/relu from conv-bn QAT pattern (#113006)
a7b75f586a : [RELAND] Disallow skipping dynamo (#110222)
8f5fead86e : Improves comparison of state dicts for Checkpoint E2E Tests (#113181)
93372455a7 : [2d] pass shape/stride during tensor unflatten (#113547)
7117bffff9 : [funcol] a few optimizations to funcol (#113324)
b16e3b5373 : [funcol] add two APIs: wait() and numpy() (#113323)
a1e3c50165 : A small fix for do_bench_using_profiling (#113611)
c21320b3b1 : CPU Publish: Fix Assign device error, when module has multiple devices (#109149) (#113509)
b3a76ccc12 : [BE] Make legacy type storage warning point to the caller (#113601)
ffc3731dc4 : Update TensorBase.to()'s' signature; create {guards,compiled_autograd}.pyi (#113536)
5b95715bc0 : Make {Tracing,Compile}Context.get() return non-optional type (#113535)
d561654d99 : [ONNX] Support more sympy operations in fx-onnx exporter (#112758)
78ae49d104 : [vision hash update] update the pinned vision hash (#113598)
567db94d87 : Add markDynamoStrictTest (#112768)
edd967fe78 : Add testing for foreach scalar Tensor overloads in inductor (#111600)
d94bfaff2e : Add TorchFix to the CI (#113403)
e1c872e009 : Add optimal triton kernel parameters to bsr_dense_mm and scatter_mm for bfloat16 and float32 dtypes (#113553)
ff82dcd8fa : [2/N] Enable clang-tidy checks in torch/csrc/profiler (#113439)
a43c757275 : Fixed error with cuda_ver in cpp_extension.py (#113555)
4b09b08d2e : Fix recompilation issue with content store (#113533)
ad06e9f060 : Support logging aliases to list of modules (#113567)
92ebf74ac1 : Refactor loggers to use NOTSET when not set by user (#113566)
54493fe8c4 : Add support for `torch.Generator` type in TorchScript (#110413)
3eacdaf1b3 : [HigherOrderOp] add pytree operands tests for cond (#112661)
68278cf7a8 : [dynamo] Initialize tensor_weakref_to_sizes_strides with a weak dict (#113412)
6ed20af10e : [dtensor] refactor op dispatch and fix is_same_size/equal (#112927)
9062e429db : Fixed docstring errors in torch/nn/functional.py (Docathon H2) (#112856)
a2552d5521 : Fixed docstring errors inside torch/cuda/ and torch/optim/ (Docathon H2) (#112964)
27c3774320 : Forward fix efficient attention rocm failure (#113588)
b3a7d9208b : disable test int_mm for sm90 or later (#113327)
01478f1afa : Fix pydocstyle errors listed in issue 112589 (#113227)
0e6b6a2483 : Revert "AOTAutograd: handle set_(), detect metadata mutations that cancel out (#111554)"
cfee3bcf97 : Add inheritance to ONNX's InputAdaptStep and OutputAdaptSet impl (#113476)
b01e89587e : [ROCM][CI] Introduce tests-to-include as rocm-test workflow input (#110511)
2ea3d64f47 : fix docstring issues in torch.utils.tensorboard (#113336)
a144eb502a : [aotinductor] add versions for the sdpa shim api (#113487)
6ea20f5dc5 : [AOTI] Use expr_printer to print sympy expr (#113317)
c0b57d4e3b : fix docstring issues in torch.distributed (#113337)
5e10dd2c78 : fix docstring issues in torch.utils (#113335)
44367c59b2 : Update skip reason for failing unit tests on ROCm 5.7 (#113286)
1aece432ba : Implement narrow from a regular tensor to jagged tensor (#112770)
3700894099 : Fix FSDP `summon_full_params(..., with_grads=True)` when grad precision is not `fp32` (#112746)
47a59ee4d1 : [ONNX] Update exporter issue report instructions for quantized models (#113494)
c46fc46dba : expose mem-eff to autograd (#110495)
3afb4e5cf7 : AOTAutograd: handle set_(), detect metadata mutations that cancel out (#111554)
1d9919c46d : Fix pydocstyle for issue 112591 (#113233)
0fd856ca22 : Revert "[ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)"
d64bc8f0f8 : use sourceless builder for builtin getattr (#113340)
115da02432 : [xla hash update] update the pinned xla hash (#113549)
44f1c6e41c : [inductor] Handle variance corrections larger than number of data points (#113284)
2bcff4d8e3 : [state_dict][11/N] Implement cpu_offload and full_state_dict for get_state_dict (#112837)
b910d9eaa6 : Add tensor.is_privateuseone (#113421)
7afb503e3c : [inductor] Label align() with [[maybe_unused]] (#113502)
8b61daaf73 : Prune more unnecessary includes from CUDA transformers (#113493)
9c331be919 : [pytorch] Remove dot if no suffix (#113273)
7f1cbc8b5a : remove intel_extension_for_pytorch from THIRDPARTY_SKIPLIST (#112840)
70064ac416 : [Dynamo] Match closures by code ID (#109427)
fe5d8850e2 : Fixed docstring errors in _fuser.py, _state.py, __init__.py, _freeze.py, _async.py, _recursive.py, _tensorboard_vis.py, _trace.py, _await.py, _check.py, _serialization.py, _script.py, annotations.py, _monkeytype_config.py (#113371)
15a2caea8e : Enables copy/clone/reshape/contiguous operations for bits types (#113508)
d00c983b63 : [dynamo] Make {testing,debug_utils,utils}.py pass follow_imports typechecking (#113519)
6805d1e1d6 : [inductor] Make graph.py pass follow_imports typechecking (#113518)
a8cf04fd2a : [inductor] Make {output_graph,pad_mm}.py pass follow_imports typechecking (#113413)
8d41a5c605 : [inductor] Fix cat decomp when first tensor is empty (#113514)
39ca5a3226 : [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
b00311ce9e : [dynamo] Add run_inductor_tests entrypoint (#113278)
fb9a136383 : [pytorch-vulkan] Add operator<< for uvec3 (#112113)
ef49f61f19 : [vision hash update] update the pinned vision hash (#113499)
66d09f8217 : [inductor] Move things into torch/testing/_internal/inductor_utils.py (#113275)
4309d38f5d : [dynamo] Refactor test cross importing (#113242)
5e03af8295 : [inductor] Enable floor_div indexing to work under ABI-compat mode (#113276)
e75e01e6b9 : Skip if the max element is 0 to avoid invalid config for CAT (#113321)
3b915f9de0 : [pt2] enable meta tests for `foreach` ops (#113484)
28e11f54ab : [dynamo] skip test_internal_error_suppress_errors in fbcode (#113482)
575be044c3 : [TD] Disable HistoricalClassFailurCorrelation (#113497)
3cb6cf1e8a : Revert "[ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)"
9f15fbae53 : [Dynamo]fix bug for bytecode hook and leave a test case (#113457)
670abff6ff : docs: Fix docstring lint errors in torch/distributed/fsdp/_flat_param.py & torch/distributed/fsdp/_init_utils.py (#113358)
4916a7e94f : Revert "[Kineto] Initialize libkineto profilers during torch init process during pybind set-up (#112623)"
0a7eef9bcf : [BE] Remove stale CUDA version check from cpp_extension.py (#113447)
740e8a536f : Perf improvements for eager GridSampler (#113341)
e8e3afb784 : [ONNX] Refactor MaxPool to support dynamic inputs (#113318)
a4dc3716c0 : Deprecated verbose parameter in LR schedulers (#111302)
d4e670c37c : Add pyre internal configs to gitignore (#113480)
06dc2f162d : [AOTI] Implement support for user defined kernels that use triton.autotune (#113229)
f2cd68102a : [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
cbf12dfba6 : [LLVM] Replaced getInt8PtrTy with getUnqual (#113455)
48c2f89399 : [BE] Add friendly error message if you compile_fx_inner but not return tuple/list (#113451)
dfa9e7b511 : Allow inferring divisibility on unbacked SymInts and do replacement trick (#113165)
91c90f232a : Fix docstring errors in reductions.py, spawn.py, pool.py, parameter.py, cpp.py, grad.py, __init__.py, profiler.py, queue.py, graph.py (#113052)
9752ef595c : [BE] Consistently use the sym_stride lowering, instead of short-circuiting before (#113071)
958f755a0e : [FX][CodeGen] Make sure fx code is valid in python (#113345)
5540d276ce : Fix docstring errors in container.py, _functions.py, transformer.py, comm.py, parallel_apply.py, data_parallel.py, scatter_gather.py (#113250)
7b28f8c5ea : Better error message when applying interpolation on non-4D tensors (#113459)
a62a88bb84 : [Kineto] Initialize libkineto profilers during torch init process during pybind set-up (#112623)
6b38836c73 : [BE] Don't reify entire graph.nodes just to access last element (#113450)
ae2c219de2 : Revert "[BE] Remove stale CUDA version check from cpp_extension.py (#113447)"
a2c32b8bd0 : [inductor] Make codegen/{common,wrapper,cuda/cutlass_utils}.py pass follow_imports typechecking (#113411)
5a9f08feb5 : [inductor] Make {joint_graph,inductor_prims,utils}.py pass follow_imports typechecking (#113410)
b0ede09682 : [inductor] Make pattern_matcher.py pass follow_imports typechecking (#113409)
6e243f475d : [inductor] Move `has_torchvision_roi_align` check inside test_roi_align (#113385)
c4fe817a69 : [inductor] Fix test_dist on pre-sm80 and add skipCUDAIf decorator (#113384)
7ccca60927 : [BE] Remove stale CUDA version check from cpp_extension.py (#113447)
cb233dada4 : Fix docstrings on torch/nn/modules (#113260)
b794bec581 : [PyTorch] AOTI: add AOTIInductorModelGetNumOutputs & use for internal runner (#113299)
b1eb9e172a : remove jit from dynamo benchmark (#113338)
2cd8c0565c : Revert "[AOTI] Implement support for user defined kernels that use triton.autotune (#113229)"
3c9a59cb8d : Revert "[BE] [cuDNN] Always build assuming cuDNN >= 8.0 (#95722)"
2a271a3efa : Revert "[pytree] register pytree node type in both C++ pytree and Python pytree (#112111)"
6e714d7315 : [state_dict] Rewrite _gather_state_dict to extract the traversal logic (#112885)
c197c48ceb : [aotinductor] Add a demo tutorial (#112457)
91e4b0fc4e : Improve torch.unique docs (#113424)
23e0923c74 : Revert "[pytree] reorganize submodule structure for C++ and Python pytree (#112278)"
d4c810cc11 : [state_dict] Add cpu_only and ranks_only support for _gather_state_dict (#112836)
08641a3232 : Make FakeProcessGroup traceable (#113314)
c3c4e70b2c : Revert "Revert 107846 and 109695 (#111099)" (#113420)
8880584015 : Improve test_float8.py (#113361)
574e313643 : Add thiagocrepaldi as person of interest for onnx exporter (#113402)
71ca42787f : Replaced deprecated pkg_resources.packaging with packaging module (#113023)
f49b8e9313 : Register SymInt-aware meta function for mm out, symintify resize (#113202)
4f2b2883dc : [Inductor] [Quant] Enable QLinear int8-mixed-bf16 Lowering (#112486)
eb1534027f : Back out "[inductor] scale up num_warps for reductions to lower register pressure (#113039)" (#113400)
86d32bedc2 : [Inductor] [Quant] Enable QConv2d Binary int8-mixed-bf16 Lowering (#112551)
65e99357ae : [Inductor] [Quant] Enable QConv2d Unary int8-mixed-bf16 Lowering (#112550)
63d65dd6cd : Correct output shape of meta registration for qlinear_pointwise (#112390)
41e8632ca4 : [1/N] Fix clang-tidy warnings in torch/csrc/profiler (#112360)
0f7ac2635d : Uniformly use SourcelessBuilder to handle user defined types (#113390)
59592389fc : Revert "[dynamo] Refactor test cross importing (#113242)"
eeeb40b327 : [pytree] reorganize submodule structure for C++ and Python pytree (#112278)
68bf0f1e7d : Revert "[inductor] Move things into torch/testing/_internal/inductor_utils.py (#113275)"
8943207925 : [dynamo] Support kwargs for lazy module call. (#113387)
7a1314c548 : [Kineto] Fix the Chrome trace loading issue with all_to_all input split length > 30 (#113392)
f9114193bd : [NCCL PG] ADD a separate monitoring thread to ensure we collect debug info and check watchdog heartbeat (#112518)
265d6aac0b : [MPS] Fix crashes during Conv backward pass (#113398)
7064fbf1ea : Fix selective activation checkpointing with subclasses that override sizes() (#113380)
cb48f7855a : [inductor cpu] fix uint8 add and sub (#113253)
c7e12c7427 : Rerun disabled tests on MacOS x86 (#113315)
866457e746 : Fix pydocstyle errors in fully_sharded_data_parallel.py, api.py, graph_utils.py, distribute.py, iter_graph_module.py, comm_tensor.py, experimental_ops.py, batch_dim_utils.py, data_parallel.py, graph_optimization.py (#113216)
773b1cbe4f : [BE] Parenthesize and clauses for clarity (#113362)
a0d00349ed : [pytree] register pytree node type in both C++ pytree and Python pytree (#112111)
5e2adc8650 : [pytree] align function signature between C++ and Python pytree (#112482)
605236af06 : Force fp16 for vision_maskrcnn inference (#113110)
8bdce9bb74 : Fix `UntypedStorage.resize_` to keep same CUDA device index (#113386)
1488bafb27 : [AOTI] Implement support for user defined kernels that use triton.autotune (#113229)
44d0226690 : Fix logging exception/stacks from logging (#113394)
66150b29e3 : Revert "[pytree] align function signature between C++ and Python pytree (#112482)"
9a90989121 : Revert "[pytree] register pytree node type in both C++ pytree and Python pytree (#112111)"
d18d7a603e : [fbgemm_gpu] add pt2_compliant tag to some ops (#113201)
cada6c7fee : [dynamo] Fix a bug by desugaring in-place ops on constants (#113117)
bf452dcde6 : Revert "[pytree] reorganize submodule structure for C++ and Python pytree (#112278)"
c967dc526a : [inductor] Move things into torch/testing/_internal/inductor_utils.py (#113275)
8a91138f60 : Dont error on returned constant, fix for levit_128 (#112544)
f8a6ea770c : [UCC] Fix input tensor in scatter (#112246)
c7e0fa49b6 : [UCC][CUDA] Overlap p2p (#111608)
bb06725ee0 : Update mentions of deprecated functions in complex_numbers.rst (#113391)
afbf345807 : [ROCm] Unskip functorch tests that now work (#110760)
0aed86a175 : Fix docstring errors in Zero Redundancy Optimizer (#113200)
e6f0960762 : [inductor] Make debug.py pass follow-imports typechecking (#113307)
a65969928c : [inductor] Make codecache.py pass follow-imports typechecking (#113306)
87082bd025 : Reduce single reader check time for inline_container (#113328)
a3a55df4af : [dynamo] Add .pyi declaration of _CacheEntry (#113305)
767ce2b81c : [dynamo] Make decorators.py pass follow-import typechecking (#113304)
4e2e0437ea : [fx] stylistic improvements for fx.split_module (#113373)
82369e44a9 : Add sym_node to uninteresting files (#113349)
ff592f1038 : [iOS][PTMCoreMLCompiler] Refactor use of deprecated writeToFile:atomically: (#113377)
b8a302ae6a : Disable flaky cpp test (#113302)
501d118255 : [quant][pt2e] Add transform_for_annotation method in Quantizer (#113115)
e53da90fe6 : [Execution Trace] record global rank in pg_config_info (#113316)
5ccd22502f : [contextlib] Wrapping a function with `set_grad_enabled` will consume its global mutation (#113359)
0381d8ce68 : Quantized max pool 2d (#112937)
44c0521e8c : fix: docstring error in torch/distributed module (#113241)
977e555ca6 : Skip conv-bn folding on multiple conv uses (#112543)
b0c9ccdc4b : Add standard deviation of metrics over runs to inference benchmark (#113309)
d977f118ad : Update ruff linter to v0.1.5 (#113355)
9834fb7fd0 : [dtensor] full_tensor to return synchronously (#113322)
4da5d4b2ef : Fix Clang compilation error with Lib ATen for ppc64le (#106446)
289d887a41 : Fix ZeroDivisionError when unfolding a zero-dimension tensor in compile mode (#113259)
1d56e7b5af : Adds broadcast to functional collectives (#112668)
bf2c20be55 : [inductor test] enable dynamic loop for test_adaptive_avg_pool1d_argmax (#113339)
f98ba596f1 : Use CapturedTraceback symbolizer for C++ exceptions from Python library (#113207)
e6eab49e11 : [dynamo] graph break on setattr `requires_grad` (#113163)
8c704f7a0e : [inductor cpp] fix argmax with >1 reduction dims (#113168)
be66d5e845 : Add file name and size to the serialization metadata logging (#113077)
addb8e29cd : Enable 2d + AC torch.compile (#112536)
acd595e352 : [easy][tp] Fix typo (#113292)
0093e23e52 : [dynamo] GradModeVariable should only be eagerly initialized when doing the equivalent of `set_grad_enabled` (#113293)
b3ad29e269 : [export] Fix executorch models. (#113296)
fbf7866ac9 : [Inductor] Fallback scatter when src dtype is bf16 (#113204)
31ded95cd5 : [2D] Bind _fsdp_extension to FSDP instances (#113237)
204ec11e6d : [inductor][easy] Fix fusion logging (#113308)
adcf9bb2bd : optimize case where div denominator is -1 (#112878)
b694f88ef6 : Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)
c88a36ebce : Grandfather in some more pytorch ops to be pt2_compliant (#113050)
e2236ae097 : [tp] Fix test_tp_transform_with_uncovered_op (#113310)
15b61d6c1a : TensorImpl: Lazily compute numel and contiguity when symbolic (#112785)
8c4bdac560 : TensorImpl: Move symbolic refresh_numel and refresh_contiguous into their own class (#112890)
8858edad65 : [dynamo] Refactor test cross importing (#113242)
325e0fdfdd : Enable masked_scatter_backward for inductor (#109642)
14811d69d7 : [BE] Cleanup sdpa test helper usage (#113294)
84d64d72d6 : Persist copy_ in training graph for inputs that don't require grad (#111046)
2c4be77f02 : Revert "[dynamo] Graph break on `setattr(Tensor, "data", Tensor)` (#113043)" (#113297)
94d95a91a2 : Revert "[dynamo] graph break on setattr `requires_grad` (#113163)"
12c257cc00 : [quant][pt2e] Support allow_implicit_sharing flag (#112929)
625958d8bc : Inductor support for native c10d_functional (#112439)
297c26bb8e : Support fp8 in AOTInductor + support optional<> in C ABI (#112527)
ee777a7c3c : docs: Add docstring for torch.masked._ops.logaddexp (#113206)
f6c00b16c8 : [aotinductor] Update the benchmarking script to clone an eager model (#113046)
24bb60d8a1 : [inductor] Add test for debug.trace mode (#113240)
5506b9db43 : [decomp] Fix _scaled_dot_product_flash_attention decomposition bug (#113102)
aef9e43fe6 : Revert "Replaced deprecated pkg_resources.packaging with packaging module (#113023)"
b30f178d09 : Replace assert with CUDA_KERNEL_ASSERT in Reduce.cuh for consistency (#113098)
77e8e8fd2d : Rewrite docs so that it is OK to use record_stream before uses (#113282)
5da9abfec2 : [dynamo] Enable typechecking for comptime.py (#112999)
26f907e09b : [dynamo] Enable typechecking for skipfiles.py (#112975)
7fb56993ba : [dynamo] Enable typechecking for device_interface.py (#112974)
152f9bbb9a : [dynamo] Switch MYPYNOFOLLOW config from includes to excludes (#112973)
bea2b703b0 : [dynamo] Enable typechecking for bytecode_analysis.py (#112972)
c1fa708b03 : [dynamo] Enable typechecking for utils.py (#112971)
1c40d1c683 : [dynamo] Enable typechecking for profiler.py (#112970)
dc63248b76 : Make dynamo configs more amenable to static type checking (#112130)
d5eb9f725c : Fix test_add_scalar_with_empty_list_tensor (#113262)
d261687d5f : [dynamo] graph break on setattr `requires_grad` (#113163)
a66f2a1b99 : [state_dict] Move _gather_state_dict to dcp module (#112835)
d98182e34e : Revert "Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)"
81bf0bd68d : [no ci] Fix typo in `persons_of_interest.rst` (#113283)
e49b9492c6 : Revert "Grandfather in some more pytorch ops to be pt2_compliant (#113050)"
16f82198ca : Export ReduleL1/ReduceL2 ONNX ops for aten::linalg_vector_norm(ord={1,2}) (#113173)
81b0166ca2 : [Inductor][fx pass] Normalize nodes created by users (#113179)
0ab2a48e7e : Reland: [TD] Add heuristic for class level historical correlations (#113213)
5ea76f1760 : [DeviceMesh][Test] Update 2D related test to use init_device_mesh (#113236)
e138d80e8e : [DTensor][2/N][forward fix] extend util function normalize_to_torch_size to accept single int size (#113244)
088587574d : [DTensor][1/N] add forward layer norm support (#113105)
9e6e9587c1 : Make numel/sym_numel PyInterpreter work symmetrically to others (#113065)
78b8465565 : [Distributed] Limit world_size to 8 for FSDP Unit tests (#103412)
66577c0f3b : Update ROCm triton pin (#111129)
9bda1e874c : Reland "[aot inductor] Move constant loading logic from Container to Model" (#112197)
6e73ae2022 : [ci][ez] Add job_id to emit_metrics (#113099)
3914566c73 : [dynamo] Refactor OrderedDict to dict (#113234)
728ed37663 : [AOTInductor] Allow using ProxyExecutor for ATen fallbacks (#112976)
df4f0b3829 : [BE] [cuDNN] Always build assuming cuDNN >= 8.0 (#95722)
8ba11bf79d : [AOTI] Support non auto-tuned triton kernels in aoti (#113090)
9f3e378125 : [nested tensor] add split and layer_norm_backward operations (#113108)
3a429423fc : Upgrade CI to ROCm5.7 (#110465)
fa895da968 : [pytree] reorganize submodule structure for C++ and Python pytree (#112278)
3e4d14702a : On grad access, check if grad has changed and update stored example grad as needed (#112811)
d01f8b291d : Fix visualize_overlap for Inductor comm reordering (#113066)
95f52611c7 : [pytree] register pytree node type in both C++ pytree and Python pytree (#112111)
1f3fa13f0a : Handle unbacked SymInt sized outputs in AOTAutograd (#113159)
aa376e31fd : [export] Enable verifier [2/n] (#113075)
f2963642c2 : [DDP] Add device_mesh to DDP ctor (#112761)
9d765d28ca : [pytorch] Add binding to get nccl version suffix (#112884)
93cea394de : CMake: Loosen CUDA consistency check (#113174)
b7acd374c9 : Remove unecessary warning when getting storage.filename (#113212)
ceb07656c2 : [dynamo] use APIs to use device interface instead of raw object in dynamo capture (#113000)
a6ed86bfdb : Add torch.onnx.dynamo_export test using ExportedProgram from file (#112271)
2043d92472 : [PyTorch][Vulkan] Add `LayerNorm` performance test binary (#112915)
e5b758b855 : S390x complex division (#108516)
a8097ed479 : Fix docstring errors in _composable_state.py, remote_device.py, value_ranges.py, utils.py, run.py, rendezvous.py, launch.py, argparse_util.py, __init__.py, _cycles.py (#112953)
bae8506589 : [TorchElastic] Add option to configure log prefix for each rank (#112357)
d1c092ae1b : Update impl_abstract_pystub to be less boilerplatey (#113182)
aae418aea6 : Remove TODOs to add docstrings (#113197)
eb5487361d : docs: fix docstring errors in quantized modules and others (#112695)
edcbd5a895 : Make TORCH_COMPILE_DEBUG=1 work again (#112917)
041b6b5c6b : TorchInductor Opinfo fixes for rng ops (#108170)
498a760802 : Update comm_analysis.py license (#113184)
a3a2486be8 : [dynamo] Avoid eager imports of classes with custom VariableTrackers (#112319)
e4c8737a0c : [PT-D] Updated Dynamo skip message for `@contract` tests (#112793)
0fee7a0181 : Revert "[TD] Add heuristic for class level historical correlations (#112162)"
356f3458c4 : [dynamo] Remove incorrect sources (#112961)
bd8d924e9b : [dynamo] Relax NullContextVariable and RangeVariable guards (#112962)
8cee0a25bd : fix: Flake8-BugBear code B-026 for PyTorch (#111362)
2da062da51 : [pytorch-vulkan] fix zero-dim test (#113116)
ff1ae35205 : [TD] Add heuristic for class level historical correlations (#112162)
056f2cba17 : Deprecate "fallthrough" as autograd fallback default (#113166)
f496c8c4a7 : [tp] handle non-covered ops (#112530)
0af8fb71ab : add test for consecutive aot inductor compiles (#111170)
ad1c3467e2 : [dynamo] run guard fail hooks for each cache entry for which there is a cache miss (#110325)
c0aba9be41 : [quant][pt2] Fix custom dtype per channel weight in QAT (#112612)
538ec4942a : Do not generate zero-numel NT by default in helper and improve to_padded_tensor msg (#113162)
0c991acab0 : Factor out test_nestedtensor setUp tearDown and call super (#113091)
5fe96eaaf4 : [dynamo] Remove VariableTracker.propagate (#111726)
843a8ecd24 : [dynamo] Remove VariableTracker.add_options (#111725)
9664190952 : [dynamo] Eagerly install guards (#111415)
2964682490 : [dynamo] Add LazyVariableTracker (#111306)
2322d989e8 : Apply release only changes to core (#109208)
0c448526a4 : [experiment][TD] Rating number system (#112676)
82875e69fe : [inductor][fx pass] Fix a bug for the merge_stack_tahn_unbind pattern (#113101)
785e586eb0 : [CUDA][cuBLAS] Separate reduced precision reductions on/off for addmm tests (#112545)
bc3e2e03cd : Revert "Update impl_abstract_pystub to be less boilerplatey (#112851)"
2fc940e0c4 : [DTensorTestbase] Add run_subtests to DTensorTestbase and fix test_ddp checkpoint test error (#113051)
7c4e49ec80 : [Fix] add validation logics to TCPStore queries (#107607)
56e514aefb : [dtensor][BE][1/N] fix DTensor Ops test (#113104)
92e7f79609 : Doc: Add and Fix docstrings for torch.util.data files (#112817)
740137df6f : [MPS] Add bucketize op (#112830)
c4bb77323d : [MPS] Add searchsorted op (#112829)
70eeb82f00 : s390x: skip tests relying on specific openblas precision (#112843)
611a7457ca : [Inductor] Kill MutationLayout from ir.py (#112925)
562c4ae4bc : Update Pillow pin to 10.0.1 (#113111)
4fecbebc37 : Fix OOM in test_large_block_sizes (#113153)
6ce5de5275 : Avoid calling as_tensor twice (#112866)
6ae4e3a8d2 : Update impl_abstract_pystub to be less boilerplatey (#112851)
9a28a7b498 : Revert "Add support for `torch.Generator` type in TorchScript (#110413)"
bb7ac12cbf : [ProcessGroupNCCL] Avoid recording stream for broadcast and scatter (#112896)
98564d2d7a : If you have i0 = i1 * 12, perform this replacement directly (#112653)
493b52b3d9 : Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)
85832c0b9b : Grandfather in some more pytorch ops to be pt2_compliant (#113050)
a06832f911 : Grandfather in c10d_functional ops to pt2_compliant (#113049)
c6f435befd : Don't recompute numel and contiguous in detach (#112689)
52e2b87d00 : [Kineto][NCCL][5/n] Populate in/out split size info for all_to_all from CPU to CUDA kernel (#112308)
220c3bae6d : Pass in parallel strategy to tp_transform API (#112286)
f6fb9fd681 : use smaller batch size for timm_efficientdet in inference (#113095)
65304d8fd0 : s390x: fix inductor constructing floats out of bytes (#112723)
ff51f94e32 : [Reland] Fix default timeouts for python entrypoints (e.g. init_process_group) (#113094)
68c4507bc2 : [Inductor] Allow None values to be passed in as arguments to triton kernels (#113056)
bfa717c6a6 : [Inductor] Improve reinplace_scatters pass (#112801)
f6008be266 : Move all triton related testing utils into shared file (#113008)
dbf44dffc9 : [Inductor] Cache generated user defined triton kernels on tensor dtype and non tensor parameters (#112752)
f99b5f1f23 : [Inductor][fx pass] Remove split nodes with split section size one (#112922)
7bd066ab48 : Package `pybind11/eigen/` (#113055)
10a829b85d : Retarget sym_size/sym_stride lowerings to their .int overloads (#113054)
c847fd2ac8 : Fix `torch.compiler.cudagraph_mark_step_begin` example (#112807)
74c24d2367 : Fixes a bug in inductor.triton.load (#113047)
ddfe572534 : [dynamo] Graph break on `setattr(Tensor, "data", Tensor)` (#113043)
5c1ea30ca3 : bump torchbench commit (#112650)
5cfe973bed : [PyTorch FX] ProxyableClassMeta skip map_aggregate if not is_fx_tracing (#112934)
4c04ae2451 : [ROCm] fix test_softmax_forward_64bit_indexing_cuda OOM (#113093)
8768b87bd1 : Remove torch distributed from CODEOWNERS (#112813)
1fea599d9a : Revert "Grandfather in c10d_functional ops to pt2_compliant (#113049)"
19dbd8aca3 : Revert "Grandfather in some more pytorch ops to be pt2_compliant (#113050)"
d94d72b397 : Revert "Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)"
ad844e7919 : [inductor] fix out of shared memory issue (#112916)
c608b0eb35 : [Dist] Enable FSDP on CPU (#112145)
5ffa98f7ba : [Dist] Add fallback reduce_scatter_base, allgather_base APIs to Gloo (#112144)
e9496fdc34 : [pytorch-vulkan] Disable failing test on vulkan_api_test (#112936)
4893a2814f : [pytree] align function signature between C++ and Python pytree (#112482)
7715b47f44 : [fx] Speedup ShapeEnv cache invalidation checks (#112687)
65ecb36621 : Move ShapeEnv config out of dynamo (#112933)
b4dbb02d46 : Adjust _list_with_default to also work with SymInt input (#113073)
8219bf051b : [BE]: Apply RUF015 to torch folder (#113025)
fb8ffba47f : [PyTorch][Vulkan] Reduce 2D float matrix multiplication shader latency by more than 50% on some Android GPUs (#112918)
24b61a45c9 : [inductor] scale up num_warps for reductions to lower register pressure (#113039)
c2084da14a : [NT] Backward support for broadcasting binary ops (#112519)
d5007d8d8e : Split out input_metadata.cpp from input_metadata.h (#113031)
1d4d5e4319 : Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)
efae8449a8 : Grandfather in some more pytorch ops to be pt2_compliant (#113050)
fe8570a1fe : Grandfather in c10d_functional ops to pt2_compliant (#113049)
71dca16610 : Grandfather autogen'ed ops as pt2_compliant (#113036)
75adb9f371 : Revert "Fix default timeouts for python entrypoints (e.g. init_process_group) (#112893)"
eefe327b11 : Rename torch.onnx.ExportOutput* to ONNXProgram* (#112263)
21b6030ac3 : Don't set CUDA_HOME when not compiled with CUDA support (#106310)
27e31ab6e8 : Add support for `torch.Generator` type in TorchScript (#110413)
7b99b3efb1 : added 'weights_only' param in torch.load examples (#112860)
c83112a31f : Add Autocast support to Conv through explicit cast (#112806)
f9d47e1381 : Fix default timeouts for python entrypoints (e.g. init_process_group) (#112893)
81ea7a489a : Replaced deprecated pkg_resources.packaging with packaging module (#113023)
67256d5c1c : [aotinductor] Solves a problem where a tensor is returned more than once (#112177)
718035791d : Prefer `e.is_number` over `not e.free_symbols` in SymPy (#112688)
19e9f5cc7b : [torchgen] Add support for optional tensor (#112938)
bdfde62e54 : [Inductor CUTLASS backend] Epilogue fusion codegen (Step 1) (#110890)
59e003d159 : Fixed cat uint8 lowering (#112753)
542fa4a2e7 : Revert "Revert "Use OpOverload instead of OpOverloadPacket for size/s… (#113058)
118e842fdf : [2D][test] Update 2d test to reflect distributed_state_dict API changes (#112967)
4d9546cc1b : [pytorch-vulkan] conv1d, only handle special case (#112880)
ab1f6d58bc : [c10d] use allocator trace callbacks for NCCL PG register (#112850)
c6ecd018d5 : Fix docstring errors (#112693)
5248bc9c8e : [LTC] Fix type inference for native_layer_norm_backward (#112948)
a810126cf7 : [FSDP][optim_state_dict] Skip the parameter if the parameter does not belong to the current FSDP instance (#112804)
5f562afff3 : [DTensor] min, max and prod sharding propagation rules (#112403)
b6e85eb8d5 : [quant][pt2] Support quantized conv bias in QAT fusion (#112528)
e39668770a : [CUDA] 64-bit indexing fixes for cross-entropy kernels (#112096)
a50f6d3685 : Move release docker container builds to ubuntu22.04 (#113032)
3f62531191 : Fix: docstring errors in `torch.nn.utils` - parametrizations.py/prune.py/weight_norm.py (#113021)
88920b26be : [Cmake] Check that gcc-9.4 or newer is used (#112858)
77d5f0379e : Revert "[HigherOrderOp] remove _deprecated_global_ns (#112757)"
a1d1b73a7c : Revert "Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)"
679ca510b0 : Revert "[Cmake] Check that gcc-9.4 or newer is used (#112858)"
185515368b : Add generated opcheck test for if the pt2_compliant_tag is incorrectly applied (#112759)
376217cc0b : [BE]: Apply FURB145 to make code more readable and idiomatic. (#112990)
fa9045a872 : [xla hash update] update the pinned xla hash (#113011)
2bc1378d7b : Revert "[aotinductor] Solves a problem where a tensor is returned more than once (#112177)"
455241bbd3 : Add Half for aten2, logaddexp, logaddexp2, hypot, and nextafter on CPU (#112138)
bd9be877e4 : [aotinductor] Move cache_dir to utils.py (#112728)
46a34e8c75 : Inductor cpp wrapper: fix QMaxPool (#112379)
3be0e1cd58 : `c10::DriverAPI` Try opening libcuda.so.1 (#112996)
d0a80f8af1 : Better errors in `c10::DriverAPI` on `dl` failure (#112995)
57191172f8 : [BE] Use static local variable instead of `call_once` (#112994)
9c1fb2cbb3 : [BE]: Enable ruff PIE794 and fix bugs it found in test suite (#112989)
07123bc198 : [ROCm] Build Triton in Centos for ROCm (#112050)
a5cb8f75a7 : [dynamo] Replace checkpointing with speculate/restart in graph_break_if_unsupported (#112921)
7818a2887a : [dynamo] Replace InstructionTranslator.checkpoint with speculate/restart (#112902)
7a18376187 : Add Half support for poisson and use float for Half cumulative distribution on CPU (#112124)
674c104d12 : Fix RecursionError in Inductor for large for loops (#112320)
e64d250210 : Add a tool for a semi-automatic optimization of bsr_dense_mm meta parameters. (#112737)
26b5e27ace : Add Half support for cummax, cummin, cumprod, logcumsumexp, and prod on CPU (#112132)
64f326097b : [dynamo] Refactor handling of state in context managers (#112939)
ea4b63db62 : Back out "[aotinductor] Add example_value metadata to nodes (#112415)" (#112946)
3a41fff5c0 : [dynamo] Remove empty_checkpoint (#112899)
d78b5e5403 : [dynamo] Remove checkpoint in GenericContextManager (#112920)
2ba2525d12 : [dynamo] Remove checkpoint in conditional (#112898)
a6b42b5ada : [dynamo] Remove checkpoint in inline_user_function_return (#112897)
847c7c6da6 : Update ruff to v0.1.4 (#112966)
f908b0e9a3 : [dynamo] Enable typechecking for hooks.py (#112565)
fe41a9ce08 : [dynamo] Enable typechecking for resume_execution.py (#112564)
3b34c818ac : [dynamo] Enable typechecking for test_minifier_common.py (#112563)
ca4fe028c8 : [dynamo] Enable typechecking for replay_record.py (#112562)
b8ac5bbcbd : [dynamo] Enable typechecking for bytecode_transformation.py (#112561)
854882bbf4 : Add test for init_process_group timeout (#112803)
247b5bdbb5 : [dynamo (easy)] Add skip reason to debug logs (#112869)
d5fff7338e : BUG: gracefully fall back to numpy.random if asked in dynamo.config (#109205)
9af3f98faf : [DTensor] Fix DTensor.from_local() returns DTensor with wrong size for uneven sharded tensor (#110781)
add78ac425 : Fix a type error in AppendOnlyList (#112362)
ad894cd072 : [Cmake] Check that gcc-9.4 or newer is used (#112858)
dfb26d5999 : Reland "Symintify repeat_interleave (#109133)" (#112726)
596dab4277 : [DeviceMesh] Remove _validate_mesh from device_mesh.py (#112928)
fb044e2b17 : [aot_autograd] Check that autocast states are never mutated by graphs passed to AOTAutograd (#112822)
0ac748cd29 : Make pattern-matcher failure diagnostics lazy (again) and added an error message if format string is too long (#112923)
418c5206ec : Make `test_distributed_spawn.py` tell you how to run it correctly (#112924)
b4ce501137 : [Inductor] [Quant] Re-structure Quantization testcase pattern matcher check (#112570)
042445b7d3 : Add new Macro to count ops and time lazy tracing (#112679)
075cb6bab6 : [pytorch-vulkan] slices to support zero-size output (#112879)
62cbe86ac0 : [torch] Skip the assertion on the return type when the annotation is a forward reference (#112870)
e36dba3a94 : [Cutlass 3.2.2 submodule upgrade] Adapt Inductor cutlass backend to Cutlass 3.2.2 (#112762)
8f10a2321d : [pytorch-vulkan] log, log_softmax (#112828)
df149581bc : Tabulate outputs in inference benchmark (#112900)
6ba2748690 : [Quant] [PT2] Enable Decomposed quant per tensor/channel to accept bfloat16 input (#112225)
67e8762e83 : [Inductor] Kill has_aliasing (#112875)
65b74c9254 : Make init_process_group timeout kwarg override pg_options (#112611)
fa81237af7 : [HigherOrderOp] remove _deprecated_global_ns (#112757)
55971c5c4e : Enable concurrent reader for getRecord function (#112818)
57a3af900e : Add suggested changes to init.py (#112864)
973f730dda : [DCP] Add test for planner option for load_sharded_optimizer_state_dict (#112891)
63fc48257a : Configure labeler for 'module: distributed' (#112812)
6e1494ec7c : correct output dir (#112760)
f58ecd4823 : docs: fix docstrings for datapipes and other (#112765)
132cb57e47 : Skip aliasing correction for `lift_fresh`. (#112202)
c799689437 : Refactor inference benchmark and add runner script to do sweep (#112863)
dc1a3581e4 : Remove c10::variant (#112725)
a91baaf314 : [aotinductor] Solves a problem where a tensor is returned more than once (#112177)
a3db4377eb : docs: Fix some docstring errors in torch.nn.utils parametrize/spectral_norm/stateless (#112786)
d084a024ae : [easy] skipIfTorchInductor - use condition variable (#112774)
2c3ab60506 : [profiler] skip flop compute for Nested tensor (#112767)
43fb5147e2 : [BE] Enable Ruff's Flake8 PYI001 (#112823)
e2e5897269 : [CI] Do not use `packaging` in run_tests.py (#112873)
feb479757f : Make addc[mul|div] support different out dtypes (#112682)
028e4fc6fa : Add `packaging` to requirements-macOS.txt (#112854)
44a28a5efa : [DCP][test] Make dim_0 size of params scale with world_size in torch/distributed/checkpoint/test_fsdp_optim_state.py (#112825)
fd6e571207 : [aot_autograd / dynamo] restore grad_mode and other globals to state prior to tracing; add grad_mode mutations to runtime wrapper (#112396)
001573b687 : [Inductor] Support one node creating multiple mutations in scheduler (#112547)
21bc37fad8 : [5/N] Apply clang-tidy to aten/src/ATen/core (#112219)
3e2c9410e1 : Fix docstring errors in memory.py, nvtx.py (#112751)
29716e865c : Enforce both input tensor shapes of CosineEmbeddingLoss to be equal. (#112782)
2337d8d062 : Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)
7f143d7ef5 : [aotinductor] Allow specifying a .so name in the aot_inductor.output_path config (#112651)
871e27a61c : [Quant] [PT2] Remove the output Annotation of Conv/Linear in x86InductorQuantizer (#112140)
a53d29cc18 : Enable oneDNN QLinear FP32/BF16 output (#112126)
b6fc7af8a0 : Enable oneDNN QConv FP32/BF16 output (#112010)
9089242048 : Fix typo under test directory (#112346)
94ebf52ea3 : [cuda] introduce trace tracker callback in cache allocator (#112238)
53fff56ab8 : Graph break cleanly for test_nestedtensor (#112662)
88b98191b7 : [FSDP][state_dict] Add world_size 1 unittest (#112669)
458e7d09fd : Add meta func for scaled mm (#112609)
3be99012d4 : Switch some more SymInt tests to TORCH_CHECK_ALWAYS_SHOW_CPP_STACKTRACE (#112626)
62c88ba0fc : E2E test for FSDP, HSDP, FSDP+TP in Distributed Checkpointing (#112541)
4a17693d19 : [CODEMOD][caffe2] replace uses of np.float with np.float64 (#112675)
8665a51baf : Initialize logging facility when running ProcessGroupNCCLTest (#112809)
0d95378341 : [Profiler][Easy] Make timestamps in memory timelines be in microseconds (us) (#112772)
2d5fec4d59 : Revert "Enable concurrent reader for getRecord function (#111426)"
32039883d1 : Set default for IS_FBCODE flag (#112766)
13d62e28a3 : [Inductor] Add Dynamic shape support to user defined triton kernels (#112523)
f6dc09c1b1 : [dynamo] Fix typo in higher_order_ops.py (#112750)
12dab00173 : Fix Docstring errors in init.py (#112617)
2e29172942 : Revert "Add meta func for scaled mm (#112609)"
c63693ca27 : Revert "[Fix] add validation logics to TCPStore queries (#107607)"
c27a03a4e5 : [ONNX] Cast scale back to fp16 after _attention_scale. (#112554)
0a92ec9452 : warn once for use flash attention and memory efficient attention (#112773)
e9d7fac89c : [state_dict][10/N] Let set_state_dict returns IncompatibleKeys (#112414)
3904b81420 : [pytree] Add back a default serialized name (#112748)
50a9981217 : [Fix] add validation logics to TCPStore queries (#107607)
12a6f5aa6b : Enable concurrent reader for getRecord function (#111426)
9d0c3e21d0 : [state_dict][9/N] Add get and set APIs for model and optimizer state_dict (#112203)
0adb28b77d : Show CUDAExtension example commands as code (#112764)
07c9b053f7 : Enable planner to be used for loading sharded optimizer state dict (#112259)
b10fa8a447 : Adds lucasllc to CODEOWNERS in distributed (#112055)
db7a3cc436 : fix missing nvml in c10/cuda/driver_api.cpp issue (#112121)
4e67c69a7d : [TD] Support downgrading test relevance (#112671)
d9ad7ac390 : Skip test_fork_wait_4 and test_fork_wait_4_async (#112743)
157bda1bf0 : Fix pydocstyle errors in torch/nn/module (#112674)
ac9476ba99 : Add .boxed() to c10d::ProcessGroup and c10d::Work's pybind (#111997)
6a3922d523 : BUG: compile np.array(list_of_arrays) (#112711)
c1dc4cda5b : Delete unused is_inside_mode (#112677)
eadb6aca9d : Improve repeat_interleave error message to report repeats/input sizes. (#112729)
50767a075a : [export] Clean up verifier [1/n]. (#112505)
8198474eb7 : Fix scope name when parent scope is empty for torch.onnx.export (#112654)
9d09d29297 : [DTensor] Add rand_like, randn_like, randint_like ops to shard propagation (#112576)
0bd2955f15 : Memory leak from bsr_scatter_mm_indices_data argument cache (#112301)
75174c3797 : Add meta func for scaled mm (#112609)
dd957138ec : Pin Docker images to main (#112692)
543a618ae8 : [inductor][fx pass] Fix a split cat bug in the pre grad (#112667)
7cbf9869d5 : Add v0 inference benchmark script (#112582)
b1f50ead4f : [state_dict][8/N] Ignore meta parameters (#112167)
6929ebf2b0 : [quant][docs] Add x86 inductor quant docs (#112648)
954cba2ede : [optim/dynamo] shortcut adagrad with `has_complex` (#112722)
ca33dd780e : Revert "[pytree] Add back a default serialized name (#112748)"
82e428723a : Followup patch for cpuinfo fix in ppc64le (#112707)
174aef71af : Clarify maximize option in optimizer.py (#112724)
25e17f3522 : Revert "Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)"
1245a7e75b : Revert "Remove default timeout from PGNCCL::Options ctor (#112555)"
75f6d52971 : [DTensor] Fix DeviceMesh.__repr__ to output valid Python syntax (#112401)
ca72d23613 : [pytree] Add back a default serialized name (#112748)
09df6b771b : Add a note about performant record_stream use. (#112526)
51a38380d1 : Fix torch.load(..., weights_only=True) for NT (#112516)
85e93632e7 : Remove default timeout from PGNCCL::Options ctor (#112555)
a1ab22b81d : Reland "Trigger specialization when you call size()/stride() from C++ (#111935)" (#112605)
68dead4a6c : [c10d] print NCCL_SUFFIX in NCCL version log at PG init (#112560)
0276d5621a : Fix typo in compilation_unit.h (#112572)
ae85ba820f : [inductor] Memory planning (#112178)
db66f15785 : docs: fix docstrings in distributed.py and others (fixes #112604) (#112657)
b07cfd79fe : [DeviceMesh] Move DeviceMesh out from torch.distributed._tensor (#112364)
6f681ab5d9 : [torch.compile] autograd.Function with multiple return values (#112475)
59869903b3 : Fix mem eff bias bug (#112673)
40ab6409da : [Trivial change] Remove duplicate line in freezing.py (#112538)
493ae78201 : [inductor] nan-checker (#112091)
01e4984bac : Add decomposition for dynamo_export + ExportedProgram and remove None from input (#112444)
6c19de07cd : [Quant] [PT2] Add ConvBNAdd(ReLU) Annotation into X86InductorQuantizer (#111281)
56ca0043f6 : [Quant] [PT2] Enable QAT Quantization flow in X86InductorQuantizer (#111280)
8191fb3e06 : [Reland2] [inductor][BE] split triton_meta and inductor_meta (#112351)
ff35e1e45b : [pytree] Add custom treespec fqn field (#112428)
131e0f1b75 : [export] Separate out graph signature (#112412)
b63335c27a : Make ci_expected_accuracy/update_expected.py apply csv linter (#112655)
af1a8f4cb2 : Allow passing in dynamic_shapes without original argument name (#112298)
c1e2ccdb97 : AssertionError -> AttributeError in cuBLASModule (#112606)
258874888b : Refine replacements with equality tests on runtime asserts (#112156)
793c62b79c : Allow binary pointwise operations to cause refinement on unbacked SymInts (#112155)
4f5acf8329 : Log non-pt2_compliant ops encountered by Dynamo (#112581)
00d6d2f66b : [aotinductor] Add example_value metadata to nodes (#112415)
f8285b1195 : [dynamo] Fix nested torch function mode not setting correct value on exiting (#112621)
9e2af971fc : [Quantization] Add "quantization_tag" as metadata to fx proxy (#108764)
e06288f8f1 : skip test in test_eager_transforms.py while Triton lacks ARM support (#112092)
5b0840c71b : Guarantee expr is a sympy.Expr before xreplace'ing it (#112619)
5d7f23b1f4 : [HighOrderOp] allow aliasing a variable from outer scope in higher order op (#112537)
9d23440c81 : Nvfuser code base nuke (#111447)
5a6f8014c4 : Add a decomposition for _weight_norm_interface. (#112193)
1b86d5ef2f : [Ci] Add arm64 libtorch CI config (#112474)
7f77ec37be : [Inductor] Clarify mutation related comments (#112466)
dd24e92949 : Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)
ab20bab729 : [ONNX] Fix partial name matching when searching parameter tensors (#112517)
623a311d22 : fix torch.distributed.rpc example incorrect usage (#112367)
54c7d0d99d : [GHF] Bot should reopen PR after revert (#112614)
4a2242e479 : [BE] Use GITHUB_API_URL (#112613)
fd209543d5 : Add `torch.utils.deterministic.fill_uninitialized_memory` flag (#111377)
cce5016653 : [Profiler] Manual Submodule Update for Kineto (#112540)
84f59d893a : [fx] Cache translation_validation_enabled on `ShapeEnv` (#112493)
9e89c36a54 : [FakeTensor] Reuse flat_args throughout FakeTensorMode.dispatch (#112418)
a126bbfea3 : [AOTInductor] Include AOTI debug folder in package (#112514)
29f3d392bf : Inductor cpp wrapper: support QLinear (#112378)
337d69e40a : Inductor cpp wrapper: support QConv (#112373)
e061144aaf : [inductor] replace ops.div with ops.truediv (#112243)
2ed3a73e40 : [dynamo] treat `torch.device`, `torch.dtype` as constant literal; revise guards to have access to `torch` module (#112426)
76918367ff : fix(dynamo): `Optimizer._init_group` did not handle return value (#110709)
c73da67d46 : new_qtensor support privateuseone allocator. (#111464)
748c1a1d81 : [dynamo] Be stricter about `HigherOrderOperator` kwargs (#111938)
320ac546ed : Clarify difference between share_memory and from_file (#111856)
df0a3c0541 : Upload ROCm artifacts from the new workflow to S3 (#112442)
dcd94814a3 : [inductor][fx pass] Add split-stack-tanh-unbind pattern detection (#111854)
a1e222ef02 : metric table (#109245)
5296c14094 : Add inverse gamma distribution and fix `sign` bug in `PowerTransform`. (#104501)
0347b36b52 : SummaryWriter.add_figure: add type hints (#110021)
6dd002f24e : avoid readonly arrays (#112524)
3cee033b98 : Reland of a bunch of pattern matcher + indexing fixes (#112476)
ef1f08c5a0 : State_dict serialization for meta tensors (#112213)
41720c2a48 : [dynamo] add infinite generators `itertools.{count, repeat, cycle}` (#110967)
9bfebf754f : [dynamo] fix graph break, improve hygiene - enforce using ConstantVariable for `torch.device`,`torch.dtype` (#112416)
74e6c877e9 : Revert "[inductor] Memory planning (#112178)"
333d5821ee : [ROCm] Add gcnArchName to collect_env and torch.cuda.get_device_properties (#107477)
4daf8afe8e : Revert "Fix bug: not creating empty tensor with correct sizes and device. (#106734)" (#112170)
0f4d2904be : [dynamo] compiled_autograd support for post_acc_grad hooks (#112326)
16953482d9 : Revert "Enable planner to be used for loading sharded optimizer state dict (#112259)"
c8b74fd012 : Add assigntome-docathon workflow (#112525)
9e0cd64c5e : [fx] Add Graph option for replace_pattern (#112409)
53acdb66f7 : [primtorch] `aten.normal` decomp has wrong return type due to `elementwise_type_promotion_wrapper` (#112467)
24f217ee64 : [Nested tensor] Add more ops in Python subclass nested tensor (#112302)
17fd4885aa : [dynamo] Support custom dict constructor with kwargs (#112513)
f74d766632 : feat(optim): use `has_complex` shortcut flag for all applicable optimizers, use `_view_as_real` auxiliary function (#110706)
90bef4411e : [Profiler] Disable CUPTI Teardown when using CUDA Graphs (#112507)
bc098c7fc2 : Revert "[dynamo] `ExecutorchCallDelegateHigherOrderVariable` - add sanity check that input and output tensors are disjoint (#111960)"
b1b3d489f3 : Revert "[dynamo] Be stricter about `HigherOrderOperator` kwargs (#111938)"
f64a97c6f8 : [inductor] Memory planning (#112178)
aa649f713f : [dynamo, test] remove #ops comparison to fx.symbolic_trace from dynamo standard_test (#112420)
bb45f89cd9 : Hackable distributed filesystem reader and writer (#106635)
1df1ae66cc : [DTensor] Assert shard dim is less than tensor ndim (#112404)
6ae21e73d3 : [inductor] FX graph cache: Add support for symbolic shapes (#111421)
1483097679 : Update how Dynamo decides to graph break on an OpOverloadPacket (#112200)
fb0e3a5740 : Refactor TD tests to own folder (#112166)
5f461e9ec1 : Revert "Error early when dataclass is not registered (#112211)"
a21851c69d : fix(inductor): `ForeachKernelSchedulerNode` group shape should be opaque for graph debug (#110336)
2e40e09d57 : [dynamo] `{*}Tensor.__init__` from list of Tensor/ndarray as `torch.stack(List[FakeTensor])` (#111741)
2f51b9223c : Make sure namedtuple are preserved when adding backward hooks on Module (#112433)
94f3df27e4 : [aotinductor] reland: return a copy of any constant (#112370)
36164265ae : [export oncall] add some examples during oncall (#112445)
fbafff3668 : [reland][inductor] benchmark fusion (#112450)
481a7a9643 : [execution trace] ignore some properties when symbolic size/strides exist (#112458)
a5641bc56b : [TD] Enable Test Class granularity on heuristics (#112161)
5cd1208415 : [quant][pt2][be] Refactor QAT q-dq patterns (#112279)
231129ea36 : [quant][pt2] Fix QAT conv-bn bias derived qspec (#112159)
30237aaeec : [MPS] Fix bug when value is of complex (#111937)
3db0095ea2 : [reland][quant][pt2e][be] Cleanup observer insertion logic (#111828) (#112453)
a1c56df1f0 : [inductor cpp] vectorize support for truediv (#112234)
b91fcdf4aa : [dynamo] Add support for register_post_accumulate_grad_hook (#112325)
04024926f4 : Use `pytree.tree_map_` everywhere (#112417)
66c32d099a : Use `pytree.arg_tree_leaves` everywhere (#112394)
046c0c66fd : [pytree] Add arg_tree_leaves to optimize flattening function arguments (#112393)
86196bf116 : add batch impl. for inplace `index_add` operation (#112276)
424c093fc7 : Fix comment spelling error (#112468)
a310cc8968 : Add Half support for kthvalue, cross, hist, and logit on CPU (#112135)
8d6b4322d0 : [CI] Limit libtorch builds to `shared-with-deps` (#112452)
70b392ae02 : [dtensor] enable foreach operators for adam optimizer (#112108)
e66ec5843f : [RESUBMIT] Cleanup error reporting for ProcessGroupNCCL (#112419)
cb942ef2b1 : Revert "add batch impl. for inplace `index_add` operation (#112276)"
08dbfecdbd : Revert "Symintify repeat_interleave (#109133)" (#112245)
6cebacdbc0 : [vision hash update] update the pinned vision hash (#112455)
710337244d : [Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation (#107832)
f50ec341bc : inductor cpp wrapper: add GIL release and acquire (#111888)
bb97ce4c7f : [ivalue] operator<<: don't error on invalid IValue tags (#112232)
c3113514e9 : Fix regression from pointwise + multi-level reduction fusion (#112297)
6ab1121bdc : Enable Mypy checking for scheduler.py (#105600)
0ce8cf7c7a : Update small wheel nccl-version to 2.19.3 (#112293)
236eff9531 : [BE] Refactor repeated assets in test_foreach.py (#112348)
e3c8c63dea : add batch impl. for inplace `index_add` operation (#112276)
2f09da3a21 : [dtensor] Introduce full_tensor API to DTensor (#112224)
e2cd69a770 : [CI] Call upload step `upload` (#112451)
b8a10a8a2d : Add batch decomposition for torch.unsafe_chunk (#110862)
40569b28f4 : Constrain fx_stride order for scaled_mm (#112430)
12a9e09200 : [inductor] Fix bug handling output_strides in fx graph cache (#112041)
cf3aa985a9 : Don't rewrite assert in pytest (#112436)
479f5eb029 : [dynamo] Remove dead code - `real_value_tensor_positive_aliases` (#111911)
6188f2e899 : Enable planner to be used for loading sharded optimizer state dict (#112259)
ac71fea1a8 : [test][functorch] fix function name in factory_fns (#112315)
31c0ef934b : [pytree] Remove LeafSpec construction cost in tree_flatten (#112392)
0f2b7a99e3 : [pytree] Avoid constructing intermediate lists in tree_{flatten,leaves} (#112391)
da90c31593 : [export] Upstream unflattener. (#112189)
67638d4dad : torch.compile: fix bug of fallback_randn when 'generator' is None (#112240)
9f1ccd4dac : Fix internal test listing errors (#112300)
80de49653a : Prevent OOB access in foreach_list variants (#112349)
a14f8e09bb : [dynamo] torch._dynamo.optimize to torch.compile in cudagraph trees tests (#112314)
69b9e54d45 : Add openvino backend into torch.compile docs (#112321)
4fbf884f58 : [fuzzing result][fuzz_torch_jit_lite_interpreter] read-heap-buffer-overflow-far-from-bounds (size 4) in c10::IValue::IValue() (#110453)
4b8a5e1854 : [dynamo] Remove VariableTracker.as_specialized (#112363)
b97afc4018 : Support 'BaseOutput' and subclasses from 'diffusers' in dynamo (#111978)
d713b8dd5d : Revert "[inductor] Fix bug handling output_strides in fx graph cache (#112041)"
fc0b0820fc : Revert "Readded device_assert skipping in index and index_put (and also added (#112093)"
4439b906c4 : Revert "Some cleanups in pattern matcher (#112101)"
052f7a3edc : Revert "Added patterns for randperm + index_add (#112102)"
013f622dd2 : grid_sample: support bfloat16 (#112331)
3b58755c1c : Fix FakeTensor tolist when size is not symbolic (#112206)
0cda4c8abe : Replay view with view_func instead of as_strided in meta_utils for NT (#112205)
503955f5ec : [Pytorch][Vulkan] layer_norm (#112322)
33c41daf60 : Fix scatter_mm kernel failure on non-contiguous tensor arguments (#112337)
cf6041e942 : Use weakref in storing tensors as keys (follow-up to #111470) (#112076)
e5c8ac8544 : Eliminate try-catch block around triton::_triton_bsr_dense_mm_out call. (#112154)
21330e5ba1 : [pytree] align `__all__` for C++ and Python pytree (#112110)
219763c38d : Support calling user defined triton kernels with kernel.run (#112292)
1250032c2e : [Inductor] Add triton.autotune support for user defined triton kernels with complex grids (#112290)
5a1a9dc354 : [inductor][fx pass] Add new split cat pattern detection (#110923)
31c223a52c : Forward fix a dynamo tracing rule test failure due to landing race (#112368)
a8c74e8225 : torch.export: cannot instantiate Dim from REPL (#111231)
92cc52ab0e : [CPU SDP] Remove mem efficient attn checks in CPU (#112375)
255a4d0bd3 : Fix doc of fullgraph parameter in torch.compile (#111906)
f77b9bf3ba : [xla hash update] update the pinned xla hash (#112374)
e36dacaeed : [Docs] fix typo in example of `torch.linalg.solve_triangular` (#112361)
29844adbe0 : Add Half support for logspace and range on CPU (#112131)
0d669f06a6 : Update Android to R21e (#109355)
bbd5b935e4 : Use `pytree.tree_leaves` everywhere (#112324)
a0bf137a78 : [pytree] Add optimized `tree_leaves` implementation (#112323)
bfbc2e3ca8 : [fx] Cache `_torchscript_schema_to_signature` (#112327)
919c9b713e : [Typo fixed] in triton_heuristics.py (#112350)
088d1648ec : [test][fx] fix incorrect method call in test case (#112336)
a9ebee30fa : Make numpy core tests Dynamo traceable. (#112141)
ccab8ce745 : Make numpy fft and linalg tests Dynamo traceable (#112146)
740d636165 : Add clang-tidy checks in torch/csrc/autograd (#112313)
ace2713d1e : Revert "Add `torch.utils.deterministic.fill_uninitialized_memory` flag (#111377)"
ae72607e5f : Add way to determine which overload an OpOverloadPacket will resolve to (#112199)
235a04c0de : Add getAllSortedOperatorsFor helper function (#112198)
f5088d2e45 : [dynamo] fix None routing bug during var_getattr on UDO (#111614)
b165abaa3b : Error early when dataclass is not registered (#112211)
eb8af4dc67 : [dynamo] Be stricter about `HigherOrderOperator` kwargs (#111938)
c14c4efc0e : [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
12c1465d76 : [DeviceMesh] Make mesh_resources private (#112294)
a7a0955790 : [pytree][BE] reorganize imports and format code style and update type hints (#112268)
0948550c53 : [dynamo] Remove mutation in AutogradFunctionContextVariable (#112216)
c7b78fb76c : [dynamo] Replace recursively_contains with parents_tracker (#112122)
a380bf3297 : [dynamo, test] skip flaky dynamo-wrapped tests (#112310)
31f605344f : [Resubmit][S372460 follow up] Reduce embedding feature validation failure carry-on impact (#111838)
fdcd927d8a : [vision hash update] update the pinned vision hash (#112306)
a2dcf26df4 : [c10d] Pass avoidRecordStreams into collective() function (#112195)
25f06ee51b : [dynamo] `ExecutorchCallDelegateHigherOrderVariable` - add sanity check that input and output tensors are disjoint (#111960)
3080fd8383 : [profiler] add send/recv src/dst info (#111811)
2c7c2b7827 : [torch op][xs] verbose error message for type mismatch in toList() (#110872)
2225e6361d : Support for as_nested_tensor() with jagged layout + fixed nested_tensor() semantics (#112304)
8d44999183 : Revert "[Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)"
668c3b3f3b : Add embedding op to jagged NT (#112288)
1ff0b82be9 : Added patterns for randperm + index_add (#112102)
a1a765c195 : Mirror of Xformers Fix (#112267)
46a6435203 : Make numpy/lib vendored tests dynamo traceable (#112147)
128f4db77e : A small fix in "do_bench_using_profiling" (#112223)
3d2041b342 : [inductor] Fix bug handling output_strides in fx graph cache (#112041)
dbb31a2984 : [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
c67236a05d : Revert "[dynamo] Be stricter about `HigherOrderOperator` kwargs (#111938)"
089e7aa4ac : Revert "[dynamo] `ExecutorchCallDelegateHigherOrderVariable` - add sanity check that input and output tensors are disjoint (#111960)"
061bf1a153 : [5/N] Make torch context manager a TorchCtxManagerClassVariable (#111622)
1460e5b7f5 : updated aarch64 maintainers in docs (#112047)
f7dc0ae16c : Some cleanups in pattern matcher (#112101)
6d685ff54f : [BE] Remove float8 from vec is_floating_type definition (#112196)
ca2106e871 : [pytorch-vulkan] floor-divide for tensor, tensor (#112190)
1774704fc1 : [dynamo] Simplify add_dict in preparation to refactor it with call_set (#110523)
1dcbd1c088 : [dynamo] [easy] Move Set to dicts.py (#110522)
b9cb4103d7 : Fix iphoneos compilation (#111502)
328a4c5475 : [BE] Enhance `OpInfo.supported_dtype` (#111995)
192e795f3f : Change save -> load in comment (#112217)
c120e5606e : Use ops_and_refs in test_ops.py instead of _ops_and_refs (#112022)
c7dcba9276 : Remove passing disable_fastpath in kwargs (#112250)
b110d87ac2 : Readded device_assert skipping in index and index_put (and also added (#112093)
baf3e054e3 : Fixed an error in the comment of file torch.utils.data.dataloader.py#944. (#112244)
33daaeb6b5 : Automated submodule update: FBGEMM (#112118)
700071869a : [no-ci][EZ] Update RELEASE.md (#112253)
cb48ef21cc : [no-ci] Clarify revert handling in release branches (#112262)
a26cb0a3f2 : [dynamo] Enable typechecking for testing.py (#112129)
d3bf6803b6 : [dynamo] add sanity check that we do not wrap tracked tensors (#112025)
d97332f839 : Add cuda status checks to FA templates (#112229)
63c089b09d : [c10] Move profiler clock to libc10 for timestamps (#111972)
fdbb73fa4e : Check both ops and refs in test_strided_layout (#112160)
bd0ea72b28 : torch.library: Create helper function `is_functional_schema` (#111660)
7df675743c : Stop using defaultdict for deferred_runtime_asserts (#112172)
9f7bff1171 : Add timeout for master store if clients do not join (#111805)
cf5479b57e : [MPS] Make the device in MPSGenerator consistent with MPSAllocator (#112188)
7265c22a5d : [AOTInductor] Enforce no_grad for Run entries (#111613)
2a86bcbac2 : [FSDP][state_dict] Cleanup the usage of _get_pg_default_device (#112168)
46667c97fd : [Pytorch][Vulkan] var.dim (#111965)
20fc2b4186 : [dynamo] Enable typechecking for compiled_autograd.py (#112128)
632ac01bef : [dynamo] Enable typechecking for exc.py (#112127)
6a99291546 : Removing sdpa conv layout constraint (#112045)
572b66331e : [PyTorch][ET] collect comms in ET for send/recv (#111985)
7e5e951dfe : [tp] update node meta with partitioned val (#112080)
033680c9af : [tp] fix PrepareModuleInput for multiple inputs (#112204)
a6e556f8b0 : Support calling __torch_function__ attribute access (#111737)
589625cbae : Add bandwidth to extern kernel calc (#110539)
c84dbd2c03 : [2D] Enable 2D optimizer set_state_dict() (#111778)
aa9e65d8f5 : [DCP] Add fsspec.transaction context when writing checkpoint to storage (#112191)
7cb72704cc : Constrain sdpa to fx strides (#111721)
94e90c199c : [dtensor] fix pointwise op linearity with strategy (#112107)
64fd027f2e : Revert "[inductor] benchmark fusion (#108193)"
0a3199dd7e : Revert "Readded device_assert skipping in index and index_put (and also added (#112093)"
797d7100de : Revert "[quant][pt2e][be] Cleanup observer insertion logic (#111828)"
ac4cc5dbea : [Dynamo] Do not crash if numpy is not installed (#112175)
22221c6d60 : Revert "Trigger specialization when you call size()/stride() from C++ (#111935)"
1569df7f01 : Don't search getitem for batch fusions (#112088)
5b71834785 : Avoid c++ exception and stack trace (#111438)
acd02a60d5 : Add a test making sure we are not importing SymPy when importing torch (#112038)
47ccf04885 : Split SymNode into its own file (#112037)
deac5357db : Make proxy_tensor.py not depend on SymPy (#112036)
4f7f46ee35 : Move SymDispatchMode to its own file (#112035)
55ab9932f5 : Revert "Constrain sdpa to fx strides (#111721)"
4a94f77c8e : Revert "Make numpy/lib vendored tests dynamo traceable (#112147)"
73cc5d1cdd : [inductor] benchmark fusion (#108193)
e660bd1422 : Re-enable some embedded bag tests (#111712)
190b6e4ba8 : Make numpy/lib vendored tests dynamo traceable (#112147)
abe172e268 : Revert "Cleanup error reporting for ProcessGroupNCCL (#111979)"
d91a18c433 : Grandfather in torchgen'ed aten ops to torch.Tag.pt2_compliant_tag (#112053)
27cf49549a : [dynamo] `ExecutorchCallDelegateHigherOrderVariable` - add sanity check that input and output tensors are disjoint (#111960)
73f36e44fb : [aotinductor] Add a debug compile flag (#112021)
f66cc67562 : [aotinductor] Fix duplicated unbacked symbol declarations (#111823)
f839a5627b : Add bf16 support to replicate padding (#112099)
8a7c3cec78 : Constrain sdpa to fx strides (#111721)
1b702b185e : [pytorch-vulkan] disable one zero-dim tensor test to fix test (#112087)
5e5329155e : [aotinductor] only include -lc10 for non-fbcode case (#112125)
3a284dae30 : Revert "Do not materialize entire randperm in RandomSampler (#103339)"
b7affa2ac3 : Add unit test for ONNX models with torch.distributions.normal.Normal (#111498)
8bc0b382fa : [HigherOrderOp] Move map_impl to torch.ops.higher_order (#111404)
f6f81a5969 : Update get-workflow-job-id to also return job name (#112103)
485cc0faae : Revert "[inductor] benchmark fusion (#108193)"
7da713bbaf : Convert evaluate_expr GuardOnDataDependentSymNode into graph break (#111919)
036abd43b3 : [dynamo] Preserve node names in export (#111947)
b126adcdee : [aotinductor] Pass TorchIR to AOTInductor (#110020)
ed2cc4dd59 : TST: make torch_np added tests dynamo traceable (#112149)
42e4c648a2 : New @decorateIf decorator for param-specific conditional decoration (#112033)
7671be8108 : [aotinductor] allow generating default args in fbcode (#112085)
c8a5bb451e : Do not import sympy within torch._prims_common (#112034)
d6724a51f9 : [dynamo] md5 hash non `compile_ignored` configs (#111298)
1c89ea7f72 : Add Half support for softmax and log_softmax on CPU (#103315)
fbff99ffea : Add regex matching to Inductor all2all collective unit tests (#112077)
395614c1a4 : keep sync bn training flag same with converted bn's training flag (#111998)
e38347f490 : Readded device_assert skipping in index and index_put (and also added (#112093)
d090c18fca : [dynamo] annotate config with `@compile_ignored` (#111303)
89bd17552d : [dynamo] Enable typechecking for funcname_cache.py (#112031)
413baa1b25 : [dynamo] Enable typechecking for codegen.py (#111992)
e67d2c9825 : [dynamo] Enable typechecking for allowed_functions.py (#111894)
b61efe1c2b : Fix `torch.[size|stride](dim=None)` invocation (#111991)
ec0cdcdf6a : [inductor] benchmark fusion (#108193)
edafe2ddb9 : [dynamo] Be stricter about `HigherOrderOperator` kwargs (#111938)
2aaa7e542c : AOTAutograd: avoid intermediate_base logic when all aliased outputs came from a multi_output_view (#111411)
28c0b07d19 : [ROCm] remove HCC references (#111975)
f1785373c0 : Add `torch.utils.deterministic.fill_uninitialized_memory` flag (#111377)
7a3a00bb0b : [inductor] Remove redundant views (#111773)
64d75f72d4 : [fx] Add a faster method for inserting positional argument. (#111974)
b29c658265 : Cleanup error reporting for ProcessGroupNCCL (#111979)
74adb4cccc : Updated flop counter to accept pytree inputs/outputs (#111990)
d641450180 : Revert "[cpu][inductor] improve cpu vec implementations of log (#111898)"
3831cf4891 : TST: make test_multiarray traceable by Dynamo (#112084)
a4e4f41cce : MAINT: graph break on numpy.__version__ (#112083)
7352c88f58 : TST: add x{pass,fail}IfTorchDynamo (#112082)
5b7caf31c1 : CI: remove numpy_torch_interop from CI (#112081)
d8e19bb03a : Revert "[2D] Enable 2D optimizer set_state_dict() (#111778)"
0ed461ae4c : [dynamo] Ensure Dynamo uses this graph's fakes for `Tensor` `example_value`s (#111954)
17b732eb04 : increase CPU memory requirement for test_nll_loss_large (#110963)
8516b4d7da : Automated submodule update: FBGEMM (#106168)
2971bdd6fc : Ignore Dims of value 1 in Require_Stride_order (#111976)
4851c973ae : Update FlashAttentionV2 kernel to 02ac572 (#111886)
ec18ef62f4 : Native c10d_functional ops (#110570)
7fe51e3e9b : Add cudagraph_mark_step_begin in torch.compiler, reference in error message (#111722)
f2a0bef35a : [export] Upstream support of (tensor, tensor list) in op returns. (#111857)
e5049648be : Add a "pt2 compliant" tag; add config to graph break on non-pt2_compliant ops (#111933)
6365992f92 : [opcheck] Add way to initialize blank failures dict (#111948)
3219b728b6 : [torch.library] Clarify torch.library.define's schema (#111915)
2d04be9a00 : [torch.library] Add mechanism to add tags during define (#111912)
ed15fa7cc2 : [Kineto][NCCL][3/n] Get the NCCL communication info from PARAM_COMMS_INFO (#111846)
1623cc5815 : [easy] Make test_mandelbrot_numpy deterministic (#112042)
b33220063d : [TD] Historical edited files and profiling heuristics (#111510)
36b3e1789a : Docker release build don't include build suffix in the release (#112046)
b54ab57522 : Document torch.from_file and fix UntypedStorage.from_file docs (#111688)
f3b42ab5b9 : feat(dynamo): remove inconsistent tracing histories by acknowledging possibility of inconsistent side-effects (#110804)
cb4e62a498 : Fix broken lint on trunk (#112051)
b365acba28 : [ONNX] A better way to safe guard 2GB model serialization (#111984)
6b7b90462f : [aotinductor] Turn clang warning ignored-optimization-argument into error (#112008)
7e654c8f88 : Revert "WIP / TST: allow testing torch._numpy under Dynamo (#110401)"
e9804aaacc : Fix unit tests and add logging for Inductor intra-graph reordering (#111981)
9d4dbebc34 : Add support to ExportedProgram as input to torch.onnx.dynamo_export (#111497)
07ccaabee7 : Make profiler function will be ignored warn only once (#111921)
2b952834c7 : [pytorch][PR] [Inductor][FX passes] Pre grad batch relu fusion (#111146)
721b1a6683 : s390x vectorization: implement atanh for complex vectorized data (#111653)
49489d478b : Update onnx 1.15.0rc2 submodule (#111964)
5ce8002d24 : Revert "Remove deprecated fbgemm operators (#104535)"
5846705e36 : Trigger specialization when you call size()/stride() from C++ (#111935)
5ed4a423de : WIP / TST: allow testing torch._numpy under Dynamo (#110401)
6fd3659391 : Make require_stride_order peek into AliasedLayout (#111681)
ac08b10d60 : [pytorch] bfloat16 support in erfinv (#111257)
247f39f603 : Revert "Fix inconsistency of max_split_size between DeviceStats and CUDAAllocatorConfig (#111555)"
8253e0524c : Add "device not supported" assert to inductor (#112001)
88244cd7a9 : [torchx] Do not terminate parent process if exit code from child isn't valid (#111961)
28ebe5df7a : yolov3: reduce batch size due to OOM (#111959)
5120c97f32 : Revert "Add support to ExportedProgram as input to torch.onnx.dynamo_export (#111497)"
52eec50d31 : [2D] Enable 2D optimizer set_state_dict() (#111778)
d8a9b6640e : [Kineto][NCCL][2/n] Add records NCCL meta to more collective functions (#111843)
43d0ae4822 : [Kineto][NCCL][1/n] Add the world size info in NCCL metadata (#111842)
bf998a2c5d : [quant][pt2e][be] Cleanup observer insertion logic (#111828)
8dc4887e84 : [2D] Enable 2D optimizer get_state_dict() (#111774)
6625269e14 : [vision hash update] update the pinned vision hash (#111982)
f9cc7f6a1c : Enable Wno-unused-private-field,Wunused-lambda-capture and fix CUDA warnings (#110856)
9e6c97890b : Dynamo runner: add FSDP handcrafted module wrapping policy (#111505)
a29a844938 : [Inductor] Support top level constants in user defined triton kernels (#111970)
bb550b25c9 : [Inductor] Support user defined triton kernels calling other triton kernels and activation functions (#111956)
b570320364 : [cpu][inductor] improve cpu vec implementations of log (#111898)
e574a8ab55 : [dynamo] Add sanity checks to ensure no double-wrapping of `FakeTensor`s produced by the current graph (#111913)
4f42edfb6e : Add support to ExportedProgram as input to torch.onnx.dynamo_export (#111497)
6e2dfb360b : [quant][be] Clean up prepare code (#111827)
3acaf8564d : [easy] use number of param bytes as the chunk size if it's not provided (#111844)
ad4971c0b1 : Delete deepcopied model after use in benchmark to reduce memory consumption (#111868)
a8760f1b42 : [Quantization] Add a test for QAT + PTQ selective quantization in (#111689)
192477b5ba : Enable flake8-bugbear B020 lint (#110823)
b600aed237 : [TD] Make test class times available during CI (#111836)
1dd57082a4 : [inductor] Decompose boolean min/max into all/any (#110311)
46e80ce58a : [ATen] Support multi dim any and all reductions (#110310)
9849ef1253 : Remove requires_grad_info from AOTDispatch (#110773)
5344468712 : Revert "[dynamo] Properly track user-defined types for `type()` (#110794)"
4839f319da : Apply same 'pick_grad' on generating fp64 reference outputs (#111593)
ec2e0712db : [ONNX] Enable onnx inlining in benchmark for >2GB models (#111867)
5da903ff78 : [qnnpack] suppress empty translation unit warning (#111475)
b0087b4cf7 : Revert "record_function: remove legacy internal operators (#72303)"
e72fcd382b : [aotinductor] Fix a problem when the generated graph is empty (#111822)
b01e87d0c0 : [BE][EZ] Use `setup-ssh` actions from `test-infra` (#111922)
ddcf9c050b : [Inductor] Support calling user defined kernels with different type of arguments (#111939)
4ac848cf77 : [dynamo] Perf (`MapHigherOrderVariable`): do not unnecessarily `get_real_value` (#111920)
3c46e859aa : [TD] Enable trial mode for new heuristics (#111858)
7bec7d95e4 : Automate release only changes, binary_linux_test.sh (#111862)
d92459617e : Automate passing conda-pytorchbot-token-test for release (#111821)
cd034e1793 : [HigherOrderOp] don't manually set input for cond (#111611)
a0043d4840 : [PyTorch] AOTI: cache dtypes and device types at DSO load (#111820)
de2b41bbbf : [PyTorch] AOTI: override VecISA selection in fbcode (#111816)
6afd00a318 : [PyTorch] AOTI: use array of constants (#111815)
b70efde3ad : [easy] Reapply D49842542 (remove pessimizing move) (#111910)
b89c2202bc : [pytorch-vulkan] Support zero-dim (#111680)
062850f4b9 : Remove TorchText from RELEASE.MD (#111940)
f97c2dabd9 : Move negative index checking to common.py - Fix issue 97365 (#108690)
f32eb9bc55 : fix missing non-contiguous output handling for add op (#111758)
0c64ac0d3a : Add tests for strided layout in factory functions (#111463)
fb7047e1a1 : Place local_used_map_dev_ on CPU for MTIA (#111581)
ad3572a5dc : Unify torch.SymInt and torch.types.SymInt (#110573)
099efd8346 : Fix reduction + () + multi-level reduction optimization (#111781)
a887ad0b60 : Add continue-on-error if ssh step is failing (#111916)
1ddbdb5144 : Optest: Allow parametrized names for xfails checks (#111797)
4f79161452 : Add tensor parallel sharding APIs for torch export (#111236)
ebcc42ea10 : [Dist] Fix coalescing manager + DETAIL debug mode (#111878)
babb6c6ac4 : nccl flight recorder (#110960)
9dfaba6f10 : [dynamo] add repro for functorch/fx interop issue (`allow_in_graph`) (#111746)
4b804dac33 : [MPS] Add complex support for `fill` (#111885)
0ad91c2bfb : Add an explicit _shutdown method to ProcessGroupNCCL (#111392)
6d78f34a06 : fix regression which creates a new fake tensor (#111864)
0e0f6a248d : Fix num_batches_tracked of BatchNorm when load_state_dict (#110850)
30cbd2ea37 : Add Benchmark for freezing + max autotune, turn on in weekly run (#111853)
cbc6213f5d : [inductor] Defer memory operation lowering to wrapper (#111402)
6977ba6e3c : [inductor] decomposition for complex addition (#110740)
b3bb94b980 : [dynamo] Update test_invoke_in_pt2_compiled_autograd (#111817)
a469aca1cc : Exposes a fast_fp8_accum option to _scaled_mm (#111847)
702aaf8aea : [sparse] semi-structured sparse + torch.compile support (#111049)
5eac44bc72 : Ignore beartype if its version is 0.16.0 (#111859)
9132734a35 : Use Dr.CI GitHub checkrun summary when querying its API fails (#111628)
e62c887bab : Revert "[inductor][BE] split triton_meta and inductor_meta (#111397)"
0a26e5fd8f : Use 'device' argument in test_sparse.py::TestSparseAnyCUDA::test_as_sparse_gradcheck_* (#111584)
b969c675f5 : Add batched dimensions support to the second operand of bsr_scatter_mm (#111796)
6382011843 : Add NVIDIA A100 optimized meta parameters to bsr_dense_mm (#111760)
f3d08ab271 : Use more performant bsr_scatter_mm within bsr_dense_mm when blocksize is 16. (#111489)
6078ed95cc : Use lru_cache to cache indices data for bsr_scatter_mm. (#111470)
b56699b699 : Add post grad graph logging (#111808)
0ea9646cdd : Rewrite torch.library's documentation (#111310)
66b74d231a : Change torch.library.impl to accept a device string (#111659)
6463f2b51c : Rename name->qualname in torch.library.impl_abstract (#111380)
0be84bb41e : record_function: remove legacy internal operators (#72303)
4ed4753ac3 : [inductor][easy] skip test_extension_backend.py in fbcode (#111591)
d22e5e4b52 : Fix DDP notes (#111833)
070b94dc08 : [inductor][BE] split triton_meta and inductor_meta (#111397)
73170b23d4 : Add compile support for NT unbind (#111531)
4d45c21c3f : [Export] Don't serialize missing args with default value (#111715)
185e76238d : [2D][Documentation] Add some comments to _chunk_dtensor (#111775)
3b5b7ebd09 : [ci] Save various json files from test infra into folder (#111516)
e509b162ed : Disable FlashAttenion for is_causal=True when seqlen q not equal kv (#111007)
98e749a306 : [Pytorch][CPU] Switch building compiler to Clang (#111537)
6c384cf4a6 : Don't DCE unbacked SymInt if it is returned as shape constant buffer (#111803)
0b602b13c8 : [small] fix tcpstore doc arg (#111807)
d4708a6da7 : Add scatter_mm and bsr_scatter_mm operations. (#110396)
3b9246ba18 : Add CSR tensor with non-contiguous values support to CuSparseSpMatCsrDescriptor (#111742)
335582584f : [inductor] Adding a way to force fusion of int_mm with mul (#111413)
e264b42a2e : [re-land][inductor] Refactor and optimize allocation calls (#111117) (#111511)
184aee12cc : make no-inline calls to throw exceptions (#111787)
36d34ce951 : [dynamo] support comparing LHS constant with tensor (#111492)
59ae0d9f9d : Allow setting logger output format with TORCH_LOGS_FORMAT (#111770)
01a2c801d4 : Pass `BUILD_ENVIRONMENT` to MPS tests (#111595)
0247dce6cb : [Pytorch][Vulkan] mean.dim (#111609)
39c09d4da6 : Revert "Revert "Nvfuser code removal (#111093)"" (#111604)
ce48d36324 : [aotinductor] Update test utility to use AOTIModelRunner (#111657)
4b6b8fcf6d : Disable dynamo when running generated opcheck tests (#111685)
e644b03775 : [Forward fix] torch.fx.passes.shape_prop should not be skipped (#111771)
4b324a8717 : Add Half support for aminmax on CPU (#106853)
ad4ccf9689 : [dynamo] Properly track user-defined types for `type()` (#110794)
a22e238db0 : Additional lint fixes (#111793)
f3d02d9ae6 : Add support for sym_ite (#111440)
09040f6fbb : bypass nvml for torch.cuda.device_count() if rocm (#110418)
236472b32a : Allow to specify specific files for debug info (#111748)
024ffd342a : [ATen] Make _unsafe_index CompositeExplicitAutograd (#111795)
f3991df408 : [caffe2] avoid variable shadowing (#111476)
e676ec2fe7 : Fix undefined __assert_fail on FreeBSD (#111761)
1eb6c4314b : [xla hash update] update the pinned xla hash (#111788)
fb8876069d : Support tracing base torch_function impl (#111731)
0b424ee0b7 : Fix inconsistency of max_split_size between DeviceStats and CUDAAllocatorConfig (#111555)
f7401de1bb : Add mha to Autocast CPU (#107674)
1d9a7f9e43 : [Reland] TensorWithTFOverride inheritance from TensorVariable (#111766)
c65c0682b1 : [dynamo] Expand _nonvar_fields names (#111749)
2b2b6caf8f : [inductor] Implement clone removal for user defined triton kernel via reinplace_scatters (#111627)
d118531733 : Use `\odot` everywhere instead of mixing `\odot` and `*` for the Hadamard product (#111763)
5af97fedd2 : [dynamo] Fix context wrapping grad mode variable (#111534)
798efab532 : Fix S367052 to unblock ICVR MC3 (#109937)
c4ab229a82 : [dynamo] Implement `set.__contains__` for `Tensor` as object match of `FakeTensor` (#111738)
977d3bcc46 : [Inductor] Support user defined triton kernels in inductor (#111434)
e2e1189f41 : [dynamo] Fix guard for ndarray calling `torch.as_tensor(None)` (#111665)
8e60d646b9 : [dynamo][stream]support device-agnostic stream in dynamo and capture stream/event method in fx graph (#108312)
57c7aa12db : Remove deprecated fbgemm operators (#104535)
bf01a7b023 : [3/N] Merge skipfiles.check rules (#111451)
61461f39d1 : [dtensor] handle negative dim and fix TP regression (#111750)
1d291e1f19 : [dtensor] hide xla imports to avoid warning (#111751)
c9ca0dde0d : python_arg_parser + dynamic shapes: fix segfault coercing symint to intlist (#111642)
62942b075c : dynamo: graph break on resize_ (#111553)
f0cde8613c : Revert "Use fmt::format in NCCLUtils and ProcessGroupNCCL instead of c10::str (#107268)"
cc776d2186 : [PyTorch Pinned Allocator] Create per thread task pool for mapping memory space (#111545)
7bd004297a : [inductor] Move inductor ops to CompositeExplicitAutograd (#111702)
1a528c826e : [Compiled Autograd] Error if tensor_post_acc_grad_hooks is set (#111701)
a1154e673b : [Compiled Autograd] Turn accumulate_grad into an op (#111700)
39f484646b : [4/N] Apply clang-tidy to aten/src/ATen/core (#111406)
47eed65481 : [dynamo] Add `is_` support for `Tensor`s, force `get_fake_value` to reuse previously computed `example_value` if available (#111565)
9455af58b5 : [easy][dynamo] Cleanup guard builder selection (#111723)
cc28b9c10a : Fixed a memory leak in PyTorchFileReader (#111703)
344fc98991 : [dynamo] fix: `SetVariable` should test `Tensor` identity based `example_value` FakeTensor, not `fx.Node` (#111696)
d054078b74 : Fix missing guards from logs (#111698)
9c9f66c042 : [TorchFix] Update old pretrained TorchVision API in tests (#111708)
920c9adcc6 : [MetaTensor] fix inplace copy for meta tensor (#111705)
5737545467 : [vision hash update] update the pinned vision hash (#111720)
3c4581d613 : Remove outdated declarations from setup.py (#110660)
c84c86f018 : SymIntify convolution (#111599)
0a147fd112 : Pointwise fuse cat with pointwise inputs or outputs and <= 4 inputs (#111233)
03da0694b7 : Fix buffer overflow in `torch.sort` (#111672)
62df159c3f : move tf override tensor to torch_function.py (#111714)
5034e98393 : Fix create source distribution step for release (#111697)
8376079b97 : [DTensor][XLA] Support Xla backend in distribute_tensor API (#110275)
ff864efd53 : [DCP][Test] Add use_dtensor subtests for test_state_dict FSDP test (#111615)
cb2fef1f47 : [DCP][Test] Update fine-tune e2e test to use init_device_mesh and DTensor state_dict (#111598)
7709382b50 : Fix regression in `torch.equal` behavior for NaNs (#111699)
aa24459595 : [NCCL][CUDA][CUDA Graphs] Flush enqueued work before starting a graph capture 2 (#110665)
f9d45f63dd : [torch] Add LOAD_METHOD_SUPER and LOAD_ATTR_SUPER (#111707)
9b499b417e : [BE]: Apply subprocess check to github scripts (#111684)
43c211facb : [quant][pt2e] Actually support transitive sharing for SharedQuantizationSpec (#111172)
1ad0f0b308 : [BE]: remove unnecessary enumerate calls (#111690)
c2a248bdb3 : Revert "[ROCm] Unskip functorch tests that now work (#110760)"
e9422b1fb0 : Fix test listing error (#111630)
101210e2ce : [dynamo] cast single-elem tensors to float and int (#111518)
079394e9d6 : [documentation] adding desc for adaptive_autorange (#111612)
4c6e85365f : Add NVIDIA license to comm_analysis.py (#111670)
71b35862d3 : [ROCm] Unskip functorch tests that now work (#110760)
303c54dbd9 : [dynamo] share a subgraph tracer across fwd and bwd in autograd.Function (#111588)
bdba54fb4d : [HigherOrderOp] use assertExpectedInline for control flow tests (#111610)
8ffbc36f8f : [Pytorch][Vulkan] Fix the implementation of `aten::sum.dim_IntList` (#111586)
e4e7d34fe9 : [pt2][quant] Clean up QAT get conv-bn-relu nodes (#111515)
cc37d8d3f8 : [Easy] Fixed typo in `init_device_mesh` note (#111658)
14c2f296e0 : Don't suppress original error message for data-dependent value (#111596)
ba04d84089 : S390x inductor support (#111367)
8d03a0dd75 : [ez] Remove extraneous files (#111668)
fdc29f58c6 : [TP] Refactor style to make it work with torch.compile (#111625)
d1afb7d43d : add Half support for multinomial on CPU (#104178)
d1110a18de : [Dynamo] make sure resume functions have valid names (#111635)
a55ecec195 : [dynamo][`__torch_function__` 2/n] Refactor TensorWithTFOverrideVariable (#109556)
11a3c7696b : [dynamo - testing] Add repro for higher order op list inputs (#111647)
9656ef88b6 : [sigmoid] Switch to oss serializer. (#111455)
974c47a20e : remove flatten.using_ints, linalg_*, linear, log_softmax.int, logdet, special_* from xfail list (#110985)
8df42f9220 : [PyTorch][Vulkan] Allow 0-size tensors to be represented in PyTorch Vulkan (#111512)
2452e65960 : [BE] More nested namespaces (#111575)
a267d95c2a : Reland: Add `lazy_clone_storage` to create COW storages (#111579)
619ae87a1d : Disable inductor layout_opt on ROCm (#111474)
3ca81aed42 : Add sdpa to Autocast CPU (#111558)
6c56e1ce2b : Use fmt::format in NCCLUtils and ProcessGroupNCCL instead of c10::str (#107268)
37253c0cd5 : Update RUFF to 0.1.1 (#111618)
ff835fb464 : [AOTInductor] Disable NonABI tests in fbcode (#111616)
e24fdfa177 : [vision hash update] update the pinned vision hash (#111624)
93a9b1314b : Make step() faster by passing in a tensor vs scalar 1 (#111084)
ca7d084ff9 : Add ScalarTensor or 0dim overload for _foreach_add (#111079)
935f697754 : remove movedim.intlist, tensor_split*, to.* from xfail list (#110999)
652f4c656e : Freeze fuse two mms (#111232)
cb856b08b2 : [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
c90f8c883d : [ONNX][s390x] byteswap data when serializing to external files during onnx exporting (#111543)
8899abde32 : [PyTorch][ET] Improve Process Groups Mapping Info Collection (#110908)
675df7520a : [tgif][multiforward] allow codegen to generate different func name (#111446)
f0fac6a94f : Update gloo submodule commit to include recent ROCm6.0 related updates (#111465)
7a3c3d63bf : fix gloo cuda sparse_allreduce dispatch (#111485)
dc31dbbcab : Optimize reduction + amax fusion (#111122)
786c51d626 : Symintify torch.diff (#111530)
74f6f7adcf : Fix NT subclass test typo (#111529)
79529ef657 : [dynamo] fix graph break when listlike of tensor contains const (#111572)
2a40b7efcb : Add Half support for addcmul, addcdiv, cumsum, and topk on CPU (#103319)
715dfced72 : Revert "Nvfuser code removal (#111093)"
ca5f6f7af3 : [MPS] Skip virtualized devices (#111576)
0617f7fa75 : [ez] Remove unused code in upload_test_stats (#111504)
4e310fd875 : [Autograd] Track when mutations are for triton kernels (#111500)
971f67c988 : Allow SymInt to specialize to FLOAT (#111219)
40c44c2307 : Force specialization on INT_LIST (#111216)
aa3243bceb : [vmap] symintify : is_same_size and split_with_sizes (#111491)
03e28bde2e : [tp] fix torch compile regression (#111521)
894b9957c8 : [DOCS][CUDA] Update TF32 docs for sm90 (#111337)
503f44fbb8 : Fix: preserve input's NaN values to prevent undefined behavior for `matrix_exp` function (#111539)
90e2117a99 : Allow optimizer state conversion to accommodate optimizers that have no tensor state (e.g. SGD) (#111501)
5ce2ab8466 : [cuda] Preserve operations order between vectorized and non-vectorized in ln grad input (#111488)
b2b5f1377b : [caffe2] replace numpy.object with object (#111494)
e3463fe4ca : [ONNX] Benchmark to store test data along exported model (#111095)
71d7173ab3 : Introduce is_big_gpu condition for test_max_autotune (#111467)
4ec777e9a5 : [BE] Clean up trymerge code handling broken trunk failures (#111520)
4f0cf1e1ff : Mark more decomp tests as slow (#111524)
18cc8a92ac : [ProcessGroupNCCL] Avoid recording stream for synchronous ops (#111431)
a7883ee470 : Bump urllib3 from 2.0.6 to 2.0.7 in /tools/build/bazel (#111435)
547a116fcf : Fix redundant asserts (#111445)
ba2ba9621c : More NT subclass op support for SAM (#111253)
53c1dca6a3 : [Reland] Add a workflow to release Android binaries (#110976)
a771fde8b1 : Update the magma to version 2.7.2 (#111442)
102fbd402c : [ci] Move step to get workflow job id before test step in linux (#111483)
9c7391ea36 : Revert " [1/N] Apply clang-tidy to c10 cuda files (#111137)"
7fabb73dae : Add ciflow/rocm label to run ROCm jobs (#111394)
16cb3bdd57 : Skip `test_quick_core_backward_baddbmm_cuda_float64` (#111493)
93e5065ba0 : [CODEMOD][caffe2] replace numpy.bool with bool (#111432)
fa995626a8 : [ROCm] Bump kineto submodule commit to clear kineto cache to avoid memory leaks (#110849)
256a5ff49d : int4 mm kernel enhancement (#111460)
b72a1402f5 : [AOTInductor] ProxyExecutor skips serializing missing args with default value (#111425)
543dc75746 : [Reland] horizontal concat fusion (#111437)
3eb5cae3af : Revert "[Compiled Autograd] Turn accumulate_grad into an op (#111271)"
0be90c5d7f : Revert "[Compiled Autograd] Error if tensor_post_acc_grad_hooks is set (#111273)"
a389e2c7c7 : Revert "[inductor] Move inductor ops to CompositeExplicitAutograd (#111274)"
ed7739d690 : Revert "[aot_inductor] return a copy of any constant (#111356)"
08f580d498 : Revert "[inductor] Refactor and optimize allocation calls (#111117)"
a4391f085b : Add regression test for cuda_stream type checks (#111430)
e2f1d03d73 : [BE] Use `C10_UNUSED` (#111439)
1ac36dbd2a : [aotinductor] Make writing of the weight files to be conditional (#111379)
108378e2af : Fix: `torch.matrix_exp` performance issue (#105225) (#110848)
a9b3afd3d8 : [aotinductor] Refactor the generated result (#111080)
e9a51a6a07 : [BE] Revive test_typing (#111428)
572628e520 : Nvfuser code removal (#111093)
0b14ec8ca6 : [ONNX] Add dynamo_onnx_aot_inline to bench (#110183)
eafce2394d : [pytorch-vulkan] aten::floor_divide (#110785)
2dc1726ab7 : Compile NestedTensor with AOTAutograd (#110529)
e708de83b9 : [4/N] Reorder VariableBuilder._wrap (#111409)
41490119f2 : Revert "[sparse] semi-structured sparse + torch.compile support (#111049)"
17002d25c5 : [export] Remove call_spec argument from ExportedProgram ctor. (#111407)
2bb1692334 : fix dict size change during iteration (#111267)
cc9b7bb85c : [reland] [inductor] fix a max-autotune rng state related bug (#111381)
1aad6d803a : [Reland][Inductor] Disallow OpOverloadPacket in ir.FallbackKernel (#110567) (#111396)
6e8079e00f : Fix timeout value for memory leak check job (#111386)
543a763cd8 : [DCP] Add HSDP checkpoint unit tests (#111399)
2c313880fc : [TD] Make test class correlation scores available to heuristics. (#111229)
973c87b320 : raise instead of skip in test/test_meta.py (#110939)
71e1f34923 : [aot_inductor] return a copy of any constant (#111356)
7a740e2b85 : Revert "direct runtime assertions (#111262)"
29048be41c : [Reland] Add int4mm kernel (#111403)
7b7f070ec5 : [3/N] Apply clang-tidy to aten/src/ATen/core/ (#111301)
43b023694e : [1/N] Apply clang-tidy to c10 cuda files (#111137)
46000bede6 : Fix a typo in fake tensor test. (#111193)
013b51f8cc : [state_dict][7/N] Add a fine tuning e2e test case for distributed.state_dict and DCP (#111111)
9ce0ae836d : [inductor] Refactor and optimize allocation calls (#111117)
3e354ef3e3 : Increase coverage of clang-tidy to CudaIPCTypes.cpp (#111371)
a0632389b7 : [BE]: Update lintrunner mypy to 1.6.0 (#111375)
c8a72db432 : [BE]: Update ruff to 0.1.0 (#111391)
19a6487ad4 : [state_dict][6/N] Change API names to avoid conflict and simplify the API signatures (#111120)
7fb09b804b : Reland "AOTAutograd: Go down inference path if no outputs require grad (#111011)" (#111347)
f84755bcac : Fix _CudaStreamBase type annotations (#111387)
9683a26c55 : [state_dict][5/N] Add submodules save and load support (#111110)
bd9a2465e7 : Back out "Add a workflow to release Android binaries (#110976)" (#111401)
408f210938 : [sparse] semi-structured sparse + torch.compile support (#111049)
deb800ee81 : Fix typo under test directory (#111304)
1e70f4d02c : Revert "Reland #2 "[C10] PG observability hooks. (#108815, #110907)" (#111072)"
5a8a89360d : Handle the `.tolist` method of np.arrays in dynamo (#111382)
afb4914c3d : Align torch.library.impl with the new torch.library style (#111308)
9d9cc67592 : Make torch.library.define consistent with the new APIs (#111307)
5c3955200c : Add linear quantize function to custom ops (#111148)
408e991dfe : Revert "Quant: add weight int4pack mm kernel (#110914)"
5ff9b49063 : Revert "update int4 tinygemm kernels (#111327)"
f29b957475 : [cuda] vectorized implementation for layer_norm_grad_input_kernel (#111021)
8b46a106f2 : [inductor] Move inductor ops to CompositeExplicitAutograd (#111274)
cba0dd0fdc : [Compiled Autograd] Error if tensor_post_acc_grad_hooks is set (#111273)
04b04c0686 : [Compiled Autograd] Turn accumulate_grad into an op (#111271)
6f06832219 : Fixed typo in activation.py (#111358)
97a513ed07 : Revert "Add `lazy_clone_storage` to create COW storages (#110192)"
c271df9239 : IPUHooksInterface: fix a typo, remove const & (#111372)
07f0413b70 : [c10d] add nccl version to c10d logger (#111215)
ff432c048d : [easy] Remove duplicate exprs in produce_guards (#111270)
b691d09010 : fix: reset prefetch flag upon reshard (#111354)
9ab6ac5bc1 : [ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935 (#110956)
9f562a3de3 : [dynamo] make disable_cache_limit also disable accumulated cache limit (#111334)
89f11c69a8 : Revert "[inductor] Adding a way to force fusion of int_mm with mul (#111125)"
59281d5631 : [tp] fix SP style regression (#111353)
493618d745 : Revert "[C10D] Introduce C++ side Collective Callbacks. (#110307)"
6462d71c10 : Fixes a typo in docstring: should be "elastic" (#111352)
0d368f586a : fix wrong meta for index_select.out (#111364)
4cf23c6a61 : FunctionalTensor: avoid spurious not_implemented logging during proxy tracing (#111040)
50b80185d6 : fix bugs about traceback.walk_stack in python3.8.x (#110922)
126d422cf0 : Error if you try to run Dynamo compiled function under torch.jit.trace (#111321)
78909a6f0b : [xla hash update] update the pinned xla hash (#111360)
9af82fa2b8 : Revert "[vision hash update] update the pinned vision hash (#111316)"
b4745d476c : Revert "[sparse] semi-structured sparse + torch.compile support (#111049)"
bfcd86955e : [TP] Fix TP doc format to show examples correctly (#111346)
e0e15a4ac6 : update int4 tinygemm kernels (#111327)
882bc1708b : [dtensor][11/n] adds some __str__ for ease of read (#111278)
6b5d736bf7 : [dtensor][10/n] switch pointwise op to use op strategy (#111234)
f34f3b5421 : [dtensor][9/n] matrix ops to generate strategy (#110717)
b4ab8ac515 : [dtensor][8/N] Introduce cost model for sharding (#109145)
25a2845d78 : [TP] Enable embedding sharding in TP API (#111177)
e942fddb83 : Fix get_estimated_runtime for symbolic shapes (#111314)
e6d9350d7f : direct runtime assertions (#111262)
7df287dc18 : [state_dict][4/N] Support strict flag for model.load_state_dict (#111109)
da36444990 : [vision hash update] update the pinned vision hash (#111316)
4a388e70f2 : Update mypy to 1.6.0 (#111305)
48989bc820 : trace frames with np.ndarray (#110512)
da662248fb : [Dynamo] Fix autograd.Function tracing errors loudly involving saved tensors (#111277)
ff3d773dd9 : [TP] Add deprecation warnings in the documentations for Pairwise parallel, sequence parallel and other prepare input/output functions (#111176)
73d288fdf9 : [aotinductor] Relax ExternKernel kwargs checking (#111167)
5caf2e55d4 : [FSDP] fix: fix for fsdp zero2 validation error (#110139)
6dc54fe8d6 : [BE] Compile FBGEMM with ASAN (#111266)
cff8bf47c3 : update the dispatch of some operators which accept scalar (#110918)
8085e08a84 : [TP] Add prepareInput and output for input/output DTensor layout annotation in the parent module in TP API (#111166)
7c67139e7b : [state_dict][3/N] Cleanup StateDictOptions, make it more readable (#111275)
3a8b10e2da : [TP] Refactor Parallel Style to make it more usable (#111160)
b28cb43f5c : Intra-graph reordering pass on Inductor scheduler IR (based on #100762) (#108091)
8bd5eb8c96 : [2/N] Apply clang-tidy to aten/src/ATen/core/ (#111006)
d7317d8a11 : Fix size_hint call sites failing on unbacked SymInts (#110520)
0013611c81 : [inductor] Allow backend compiler to skip (#111153)
48e4d18388 : [BE] Move ASAN from clang-12 to clang-15 (#111218)
581d97c19e : Revert "[state_dict][3/N] Cleanup StateDictOptions, make it more readable (#111108)"
11ac4ace5f : [export] Use meta val from the old nodes in run_decompositions(). (#111225)
ac02531bab : [sparse] semi-structured sparse + torch.compile support (#111049)
1c30814417 : Add `lazy_clone_storage` to create COW storages (#110192)
482782406a : Revert "Add `lazy_clone_storage` to create COW storages (#110192)"
4f4e2c1c08 : Add constant node sizes to proto size calculation (#111097)
3b08a4a6b2 : [dynamo] collapse local and global guard builders (#111226)
bb89a9e48c : Skipped CUDA Flags if C++ Extension Name includes "arch" Substring (#111211)
f4297576e6 : [inductor] Adding a way to force fusion of int_mm with mul (#111125)
c151163333 : Documentation Clarification on torch.compile Example (#110942)
00d962631c : [BE] Enable Ruff's Flake8 PYI045 (#111184)
ba7b9211ee : [export] Update serialization schema to input/output specs. (#845) (#111204)
a3e9b80082 : Fix torch.diagonal for torch.onnx.export when dim1<0 or dim2<0 (#111130)
375e7bd003 : Un-skip a bunch of UnaryUfuncInfo bfloat16 tests (#110799)
d8de45d22c : Update arg{min,max} tests and docs (#110845)
d38472c176 : Don't sympify reflection_pad2d ranges (#111212)
382327bd0e : [BE] Enable Ruff's Flake8 PYI034 (#111105)
2fd546aa5e : Allow strided layout in torch.normal (#111205)
b1db959085 : [state_dict][3/N] Cleanup StateDictOptions, make it more readable (#111108)
625a3b1a42 : Remove some patterns from PrimTorch merge rules (#111230)
6aa91c8dad : [dynamo] Register einops functions lazily (#110575)
8747e4c8c1 : [dynamo] Add specialized variable tracker for sys.modules (#110990)
058cb70ad9 : [CI] Add auto label rule for torch/_export (#111181)
8db72a430d : [sparse] Add padding for dense matrices in semi-structured sparse (#110583)
2b6f281e5c : Revert "Remove dead code (#111207)"
cf6b1cdf6a : Revert "AOTAutograd: Go down inference path if no outputs require grad (#111011)"
d84dcfb3e0 : [Doc] Fix typo in cpp/installing when wheel is used (#111143)
c2ed714f54 : Remove dead code (#111207)
e99abaae2f : [state_dict][2/N] Let distributed.state_dict accepts single optimizer (#111107)
ac768333be : [dynamo] fix prim lowering validation logic for dynamic shape args (#111208)
247d5e16fc : [DCP] Improve with_temp_dir robustness (#111106)
5a2ab7dcb7 : [CUDA][cuFFT] Initialize CUDA context for cuFFT before execute is called (#110326)
f68d6e8108 : Revert "Move at::{Refcounted,}MapAllocator to c10 (#109881)"
8162f4170b : Fix typo under c10 directory (#111155)
ac48c11ab7 : Fix typo under torchgen directory (#111154)
b460c30893 : [BE] Enable Ruff's Flake8 PYI042 (#111114)
5db9f911ac : [pt][group_fusion] fix shape guarding in fusion candidate search (#111174)
84975339bd : [PyTorch] AOTI: generate reused thread_locals when tensors provably have static shape (#110892)
bf72a723ef : [PyTorch] AOTI: Add aoti_torch_assign_tensors to ABI (#110909)
cff71c47dd : [dynamo] Forward fix a bunch of distributed collective allow fixes (#111171)
33f1513486 : Add `lazy_clone_storage` to create COW storages (#110192)
35750bf9d1 : [export] Fix issue with internal model (#111140)
359336e3e9 : [C10D] Introduce C++ side Collective Callbacks. (#110307)
d24539ee6a : Improve reflection_pad2d lowering for dynamic shapes (#110988)
0dfa354570 : [inductor] Implement Fx graph caching to improve warm compilation time. (#103453)
69dcbc02b0 : [Dynamo] Expose bytecode hooks and add example usage for decompilation in docs (#110714)
cdc8d709cb : Fix mkldnn_matmul error on AArch64 (#110150)
24bd3301d9 : Fixed description of run_on input for linux-binary-test workflow (#111191)
af05fbb84a : Linter to avoid csv merge conflicts (#111163)
8209bbbd06 : [AOTInductor] Improve validation for C++ wrapper codegen (#111102)
898482f1bf : [logging] log exceptions when provided (#111164)
4c01686027 : Public API for constructing NT with jagged layout from tensor list (#111078)
a2c17a2b00 : [PyTorch] AOTI: add CPU fast path in aoti_torch_empty_strided (#110877)
b85f848233 : [PyTorch] -DNDEBUG in inductor codecache builds (#110876)
168bad5f23 : [export] Reland "Fix graph signature data model to list of specs." (#111136)
9980876cab : Quant: add weight int4pack mm kernel (#110914)
8713a1a363 : add Half support for bernoulli on CPU (#104176)
74b1f4f71a : Update sdp_utils functions to accept const& params (#111144)
21dc1d2547 : [Vulkan] Add the 2D case to Layernorm operator (#110796)
9c7f464eef : [inductor]: Better debugging of `can_fuse` decisions with `TORCH_LOGS=fusion` (#110415)
1208a44799 : [docs] export full aten opset (#111161)
ad4472833c : define public API for torch.nn.utils (#111026)
8f90be4478 : Expand NT subclass to support SAM (#109123)
e0eaa95e99 : [DCP] Remove _shard_tensor() call in load_sharded_optimizer_state_dict in optimizer.py (#111096)
6748a14a71 : [aot_inductor] add a test with AOTInductor + TorchScript (#111124)
397deaa825 : Fix typo in mixed dtypes linear operator implementation. (#111127)
7fbfa4e020 : Revert "[inductor] Implement Fx graph caching to improve warm compilation time. (#103453)"
f9053877b4 : Add pypi required metadata to all wheels except linux (#111042)
94c9dbff22 : Disable cutlass_template on ROCm (#111132)
bb1424d46e : Reland #2 "[C10] PG observability hooks. (#108815, #110907)" (#111072)
dede1e96e2 : [BE] Enable Ruff's Flake8 PYI018 (#111101)
53a9ac534c : Added decorator `skipRocmIfTorchInductor` and skipped failing tests (#107760)
918054f422 : [Inductor] support channel last for xpu conv in inductor layout opt path (#111018)
5ace912263 : fix: do not reshard parameters twice (#110948)
aec0f98e70 : Move cuda driver exit handling from helpers to threads (#111061)
2f53085f3f : [BE] Enable Ruff's Flake8 PYI030 (#111103)
68a1219f74 : Move at::{Refcounted,}MapAllocator to c10 (#109881)
42b89aea4b : Revert "[export] Fix graph signature data model to list of specs. (#111017)"
395d0eaea0 : Dynamo - config gated torch.distributed allow, exclusion for special leaf funcs (#110894)
499146354e : Use CUDA image for lintrunner (#110502)
d8ad0ba5c1 : [Dist][ez][nit] Formatted nccl version string in startup (#111076)
b35279dfac : [DDP] Make _ReplicateState inherit from _State and make replicate eagerly initialized (#109647)
5614023f5e : Move export.constrain_as_* to torch._constrain_as_* (#110757)
6ce3a38050 : Revert "Move export.constrain_as_* to torch._constrain_as_* (#110757)"
f0e7a91030 : [vision hash update] update the pinned vision hash (#111098)
5e8be63e99 : Allow specifiying inputs as GradientEdge in autograd APIs (#110867)
33b69509d3 : [export] Fix graph signature data model to list of specs. (#111017)
097defb160 : [device mesh] only check when world size > num_devices per host (#111091)
9316c8b4bc : Use torch._check for cat error checking (#111035)
6dca81c054 : Revert 107846 and 109695 (#111099)
07f0f383fa : update tensor-like to check instance for torch function impl (#111087)
8e32e62f67 : [TP] Validate TP mesh dim for 2D composition (#111001)
80ea8784f3 : Bump xla_base version tag to v1.1 (#109757)
e0ddc3ff9c : [quant][pt2e][be] Move xnnpack quantizer tests to separate file (#111004)
8f8d8a0b50 : Linear Quantize (#110581)
5292a92e03 : Add `torch.unravel_index` (#110580)
577e3dff88 : [aotinductor] Fail models temporarily (#111100)
986ad3bfa6 : [2/N] Dynamo supports skip by function & removes skipfiles circular import (#110835)
a6b452dfdc : [2/N] Enable Wunused-result, Wunused-variable and Wmissing-braces in torch targets (#110836)
6d7744ca46 : Fix typo under torch/_functorch directory (#111067)
4d29b40299 : torch.compile DTensor E2E (#105236)
3553eb9b89 : Add CUTLASS-based support for mixed dtypes matrix multiplication (#110981)
0f924cdee3 : Fix `functional::smooth_l1_loss` signatures to not override `beta` (#109798)
73f4c1a406 : [reland2] Update custom Function preserve torch function when inputs … (#110895)
52e76a3056 : fix ShardedTensor.gather when shard is empty (#110962)
ded5ee75ac : AOTAutograd: Go down inference path if no outputs require grad (#111011)
6d8e0c4b5a : [export] Get export APIs ready for PTC (reland) (#111030)
20a7366147 : Fix Android publish step with lite interpreter (#111071)
6c7013a3dc : [Doc] Add weight dtype in torch.nn.CrossEntropyLoss (#110998)
d589106bcd : [quant][pt2e] Disable remove_qconfig (#111000)
cf1da9bd17 : enable index add test (#111016)
e151307db0 : Clean-up composite implicit ops for aten::isfinite, isreal and log_sigmoid (#110896)
d3205f8377 : Revert "[2/N] Dynamo supports skip by function & removes skipfiles circular import (#110835)"
80dfc974dd : [2D] Enable 2D FSDP+TP model.load_state_dict() (#110925)
fd4ba806f6 : Implement tensor slice in inductor to stop falling back for aten.index (#111015)
6c136c3302 : [2D] Enable 2D DTensor state_dict for FSDP + TP (#110846)
0bd4ce728b : [2/N] Dynamo supports skip by function & removes skipfiles circular import (#110835)
de1ca4a081 : [dtensor] small change to refactor random ops (#110900)
657e8f2cad : [dtensor] make replicate -> partial do division instead (#110898)
204f338f71 : Reland [Profiler] Improve the docstring for export_memory_timeline (#110983)
cae3a2e6eb : Revert "[sparse] Add i8i8->i32 support for cuSPARSELt (#110499)"
86619c9c9d : [aotinductor] Add both cpu and cuda tests for the AOTInductor cpp test (#110920)
3058700f7f : [aotinductor] Add AOTIModelRunner as a utility class (#110891)
b17c247eb1 : [aotinductor] Update the cpp test example (#110652)
3062e267b1 : [cond] Add more tests for valid inputs of cond (#110727)
ef19824db8 : Suppress warnings in tensorpipe.h (#111012)
f2d476843e : [MPS][BE] Avoid redispatch in `sign_out` (#110955)
fc1105b282 : [inductor] Implement Fx graph caching to improve warm compilation time. (#103453)
2cf9782912 : [generate_opcheck_tests] Add some reasonable defaults (#110977)
4abfa22812 : [aotinductor] Add a perf smoke test for AOTInductor (#110972)
98c329b19e : Revert "[core ATen IR] Add decompositions for max, min, var_mean (#110906)"
95ff51d8ed : [MPS] Add support for Softshrink to MPS Backend (#110814)
de370eb313 : [Distributed] Small nits to apply_optimizer_in_backward (#110903)
0821868110 : Revert "[export] Get export APIs ready for PTC (#110410)"
74029fae9d : Fix broken period workflow after #110976 (#111013)
056d6247c7 : [MPS] Use Metal Events to synchronize buffers in MPSAllocator (Part 1) (#106938)
b96ea9f361 : [export] Get export APIs ready for PTC (#110410)
1e7947b3e0 : Revert "Reland 3rd try [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#109323)" + Forward fixes + test (#110964)
e49ea87162 : Fix socket.cpp compilation using gcc-9.4 (#111002)
a614281ea9 : Add current_device() to torch.cpu (#110987)
110382bacf : Make NestedTensor compilable with eager backend (#109171)
e0dbaa04d2 : Fix the meta func for mem_eff_backward (#110893)
0e551bbcd7 : [quant][pt2] Preserve source_fn_stack after QAT fusion (#110899)
5aee22e0e0 : Move export.constrain_as_* to torch._constrain_as_* (#110757)
c9eb8d8d90 : Add set_checkpoint_debug_enabled that overrides local setting (#110728)
02f6a8126e : Support a simple subset of functions as backward hooks on intermediate tensors (#109537)
79212430df : feat(inductor): fx graph debug should display device (#110346)
24bf9aeb6b : Fix arange with dynamic end argument. (#110979)
a11d4a8378 : [Reland] [Inductor] Break the loop fusion when node2 depends on node1 mutations (#110677)
314a502eb0 : Revert "Reland "[C10] PG observability hooks. (#108815)" (#110907)"
2edc75a669 : Add a workflow to release Android binaries (#110976)
5aa96fd336 : [dynamo] list index: add more list types to testing, support namedtuple, improve error handling (#110919)
9606cda64e : [core ATen IR] Add decompositions for max, min, var_mean (#110906)
3100d3e661 : Revert "[inductor] Implement Fx graph caching to improve warm compilation time. (#103453)"
f98d6ad8b3 : [1/N] Apply clang-tidy to aten/src/ATen/core/ (#110861)
ca03f36233 : Change ProcessGroupNCCL default timeout to 10 min (#110947)
cd275dc24f : Remove RangeConstraints in favor of ValueRanges (#109859)
7a69e3d30b : [fx][subgraph_matcher] Add a matcher that supports name to node map (#110743)
91eeb77260 : StackDataset batched sampling (#110694)
ac01304e22 : pin_memory support for NT (#110404)
43ea782af3 : Multiprocessing support for NT (#110292)
7f2d25c547 : [ONNX] bump onnx submodule to rel-1.15.0 (#110663)
3a29cdc5e6 : [optests] Add dontGenerateOpCheckTests and is_inside_opcheck_mode (#110951)
d9eb5a57aa : [FSDP] Change _create_chunk_dtensor in fsdp/_shard_utils.py to use public API from DTensor (#110831)
6e770c0dda : [dynamo] Add `itertools.repeat` via polyfill (#110953)
02a02a23ee : Revert "Move at::{Refcounted,}MapAllocator to c10 (#109881)"
495f77be7a : [cpu] explicitly vectorize digamma (#110217)
7678cd22af : Reland "[C10] PG observability hooks. (#108815)" (#110907)
84ad3ed7b2 : [dynamo] add config for displaying all guard failures (#110927)
8cf1a02e80 : Revert [Profiler] Improve the docstring for export_memory_timeline (#110978)
bc49b1e50b : [reland] Use is_symbolic instead of testing isinstance in some place (#110676)
df9a6bcaef : [reland] Symintify guards.cpp (#110675)
3842b175d2 : [reland] Add symbolic singleton int (#110674)
fda0a965c7 : [reland] Support SingletonSymNode mul with coefficient (#110673)
fb4b9e9c8e : Re-enable a couple of fixed tests (#110770)
5183760ca5 : Adding Backward Support for NestedTensors and FlashAttention (#97485)
77e5f5d8a4 : Updates to patch version release plans (#110952)
52b1470935 : [Profiler] Improve the docstring for export_memory_timeline (#110949)
31611b40b9 : cmake: allow to build pytorch as a CMake subproject (#110373)
57f6368b8e : [collective] Add a torch.compile + functional_collectives test (#110688)
c5f06b9753 : Re-enable test_copy_transpose_math_view, neg_view/dce fix (#110651)
ba86dfcd83 : AOTDispatch subclass (#104483)
8bc04f46fe : [inductor cpp] use c10::bit_cast to avoid violating strict-aliasing (#110809)
7b25c2b90e : [FSDP][optim_state_dict] Move local optimizer state to FSDP compute_device (#110929)
fb68aa0a92 : [Easy] Remove unused return type from utils (#110887)
a425307589 : [ATen core IR] De-register `full_like` and `empty_like` as core (#110924)
37567fdf31 : Nvfuser cpp api deprecation attempt 2 (#110881)
8820dda943 : Revise def of contiguity in bmm (#110811)
35e48e262c : [custom op] Use canonical API to constrain unbacked values (#108372)
33403336fa : Revert "[user errors] compulsory case names, allow multiple (#110878)"
8891da40d7 : [vision hash update] update the pinned vision hash (#110915)
19ecb5d0d5 : Revert "[Inductor] Disallow OpOverloadPacket in ir.FallbackKernel (#110567)"
2ae71c4598 : [user errors] compulsory case names, allow multiple (#110878)
f10aab03c4 : [sparse] Fix semi-structured sparse shape mismatch bug (#110420)
468a73f0e3 : Support Numpy ints in the `torch.nn.functional.interpolate` dtype check (#110778)
de3ae93e9b : Include rank of default PG in C++ log messages (#110623)
0341deb1c7 : Move at::{Refcounted,}MapAllocator to c10 (#109881)
3704bf4ee8 : [export] Update custom ops docs (#110492)
28d7d7fc42 : device agnostic: torch.cpu.set_device (#110716)
2aa0ba38a4 : Make is_sparse a property of MaskedTensor (#110725)
6c8096ec31 : [ATen core IR] Register additional ATen operators as core (#110882)
733368a822 : Change default NCCL_ASYNC_ERROR_HANDLING to 3:SkipCleanUp (#110723)
0a580da582 : Add batch decomposition for torch.linalg.eigh (#110640)
201d02ef77 : stop non-differentiable values from being materialized in aotautograd (#110721)
c596db762f : refactor aotautograd to set requires_grad on info rather than a separate array (#110720)
db760527e0 : fix(dynamo): list index via polyfill (#110817)
2a76c7f018 : [dtensor] skip move to device when device_type match (#110774)
50bd252863 : Fix typo `the the` (#110869)
b5f9696d81 : Fix typo under torch directory (#110824)
d1c157c598 : Revert "[reland] Update custom Function preserve torch function when inputs r… (#110679)"
8ae623db9d : Don't pass tuple to with statement (#110864)
4b881b0da3 : [MPS] add support for sgn to MPS backend (#110829)
144cda7f06 : [BE]: Enable ruff's flake8-PYI rules (#110830)
306b2284f2 : Add meta kernel for ctc_loss.intList (#107949)
bbdc8c7b05 : Revert "deprecating nvfuser c++ API (#110318)"
2e57b1e847 : [BE]: Update NCCL submodule to v2.19.3 (#110827)
a18b98f8a2 : [xla hash update] update the pinned xla hash (#110852)
3a70a02a81 : Enable Wrange-loop-analysis (#110837)
d2a2a67fa4 : Added new test sample to interpolate op in OpInfo (#104181)
ddb0c26511 : [inductor] Re-enable more fixed tests (#110798)
92fea5ae3f : [GHF] Re-enable `test_internal_changes` (#110834)
3ec33957eb : [1/N] Enable Wunused-result and Wunused-variable in torch targets (#110722)
e1f0f9c64e : [dynamo][easy] Move code from GetAttrVariable to a suitable place (#110535)
ad24965f6c : typo: add space after cudnn error messages (#110806)
a603dcc307 : Fix typo under test directory (#110826)
afed0314a8 : Fix typo under aten directory (#110822)
105f3b5f91 : Fix typo under caffe2 directory (#110825)
fde28fdc8c : Fix typo under torch/_decomp directory (#110821)
8a8668e1ae : [inductor] Implement Fx graph caching to improve warm compilation time. (#103453)
5ef490f736 : Update AOTInductor compile logic for CPU backend for Meta internal env (#110729)
36e6b0cfa2 : Fix cpuinfo related crash on ppc64 (#110708)
bff28ec568 : Fix typo under torch/_export directory (#110808)
844ea6408b : feat(dynamo): handle accumulate kwargs ("func", "initial") (#110686)
fa8e4ea212 : Add support for hasattr on ListVariable (#110438)
58637c4b43 : [dynamo] Remove SuperSource (#110475)
6b4c686b9a : [aotinductor] Forward fix a performance regression (#110800)
1824ea3c0f : Add a test to make sure all modules in the codebase are importable (#110598)
230a124a7a : [5/N] Move remaining c10::variant calls to std::variant (#110423)
459cef8649 : switch dtensor and functional collective to use optree (#110670)
defa0d3a2d : Add a side table for triton kernels to avoid using itertools.partial (#110633)
57cc886639 : Fix public binding check to check all submodules (#110601)
8edb561631 : Fix use after free in tensor creation (#106707)
0a5f0b5db3 : Support tracing HuggingFace models (#110748)
d84bcb9c8c : [HigherOrderOp] expose torch.cond (#110293)
0a5bb1c2eb : Feature/stft no window warn (#110695)
c3e4e4f6d2 : [4/N] Add -Wdeprecated and related fixes (#110204)
096b14eae8 : Fix numel test to be > 2 (#110731)
2dc5e166a5 : [TP][Inference] Enable DTensor TP inference (#110751)
19ce68a45c : Fix typo under torch/_numpy directory (#110782)
a119efe9c7 : [AOTInductor][ez] Fix FallbackKernel.codegen() (#110777)
12f97bb2e9 : [Reland][3/N] Add -Wdeprecated and related fixes (#110518)
98b79e9488 : [inductor] Add AOTI ABI shim function for torch.nonzero (#110766)
13a2f42635 : [inductor] Add size, stride, storage_offset to RAIIAtenTensorHandle (#110764)
abb00f66d8 : [inductor] Add AOTI ABI shim function for repeat_interleave.Tensor (#110745)
432df71820 : [inductor] added a config to always add tensor constants (#110491)
840e68301c : [AOTInductor] Change UpdateConstants to UpdateConstantsMap (#110576)
18f0d3af72 : Revert "[user errors] compulsory case names, allow multiple (#110733)" (#110783)
d54e20f457 : [FSDP][state_dict] Add a unittest for local_state_dict resharding (#110625)
1b34238d67 : fix get device index if has _utils._get_device_index in privateuse1 (#108123)
c2e7a0d689 : [core IR] Add decomps for `aten.sum` and `aten.squeeze` variants (#110645)
c77dd684c9 : Enable typechecking in _inductor/ir.py (#110112)
e8ef8bfdce : [Inductor] Allow matmul to have flexible layout when we are not autotuning (#110726)
5cc1a38370 : [release_notes] Some updates after 2.1 release (#110771)
bf0866fc16 : deprecating nvfuser c++ API (#110318)
983f6f36db : [user errors] compulsory case names, allow multiple (#110733)
90bf6e3938 : [FSDP][optim_state_dict] Enable cpu_offload config for optimizer state_dict (#108434)
563728f61c : [reland] Update custom Function preserve torch function when inputs r… (#110679)
1c97808f81 : [dtensor] support lt/gt op (#110585)
9378a2ceda : [dtensor] support aten.where and enable implicit scalar promotion (#110584)
e3bf5000a7 : Hide the contiguous requirement for user input mesh when initializing DeviceMesh (#110628)
a0bbd075b2 : Add the Mode section in the extending doc (#110073)
6b1007b2a7 : Fix error in div lowering with integers (#102809)
d35e3dbd06 : Fix concurrency limits for Create Release (#110759)
9b55194f81 : fix(dynamo): Incorrect `accumulate` implementation, bad tests (#110683)
4342b0849f : [vision hash update] update the pinned vision hash (#110667)
f952551963 : Handle invalid cancellation signals in trymerge (#110690)
2aa3064364 : [inductor] Add aoti_torch_dtype_bool to AOTI ABI shim (#110713)
65d40a72c4 : Delete rogue print from `test_quantize_pt2e.py` (#110732)
59592ce9f2 : [CUDA Host Allocator][ROCm] fixes (#110715)
3d87c52cef : Remove stuff for Python before 3.8 from install_conda.sh (#110671)
f4796df914 : Add support for generators on the IPU device (#110704)
44d34fe65c : different bounds for same Dim name (#110638)
0d4a360fa2 : remove replaced symbols from range_constraints (#110644)
f74937741e : Remove runtime assertions between export and AOT compilation (#110710)
7cc0020a80 : [decomp] Fix different return type in threshold_backward vs. eager (#110689)
756b4e9e08 : [export] Add codeowners. (#110718)
b8a3998c23 : add batch rule for missing inplace ops (#110692)
1b1bc08557 : [Dynamo] SizeVariable can be indexed by symint (#110349)
ff0358b038 : Revert "[C10] PG observability hooks. (#108815)"
37a0265992 : [Inductor] Disallow OpOverloadPacket in ir.FallbackKernel (#110567)
0c7a877745 : [C10] PG observability hooks. (#108815)
17348b0f51 : Implement split_with_sizes backward for NT (#110647)
48240ec62e : Make unbind() overrideable for NT subclass (#110646)
33da6c8951 : [sparse] Add i8i8->i32 support for cuSPARSELt (#110499)
f7ce19d40a : Fix typo under torch/onnx directory (#110697)
69ea214cc2 : [reland] Update singleton int to error when inequality relation is undefined (#110672)
576b80d23e : Revert "[HigherOrderOp] expose torch.cond (#110293)"
e75f2e2ea1 : Fix clang-tidy warnings in CUDAPluggableAllocator (#110678)
601f872831 : [HigherOrderOp] expose torch.cond (#110293)
e8f1f4ed66 : [quant][pt2][ROCm] follow-up PR 109908 for miopen_batch_norm (#110653)
c4db607607 : Doc test non packages (#110568)
a3e5ec453a : Move Docker official builds to Cuda 12.1.1 (#110703)
261cae793a : [cpu] remove vec code for ops that do not support complex numbers (#110280)
ceb773b68d : Fix #110680 (requires_grad typo in decomp) (#110687)
d776dd04ac : perf(optim/dynamo): shortcut `is_sparse` iteration in SGD multi_tensor (#110648)
96f616a054 : Revert tl.int1 casting change for ROCm to avoid hangs (#110531)
6b92c367c5 : Add test_jit_cuda_fuser to ROCM_BLOCKLIST (#110440)
65afa760a6 : Add a script to run iOS test app on AWS Device Farm (#110202)
7d98549ca9 : retain_graph=True in compiled_autograd (#110367)
63fe5de89b : feat(optim): add SGD sparse multitensor to testing path (#110562)
371d8ba599 : vmap: decompose real and imag instead of registering batch rule (#110508)
e8605f6f22 : Correct outdated Doxygen link (#110654)
6d23193aab : Added strict=True to zip in aot_autograd (#110668)
d279979102 : perf(inductor): improve `Adam` compile times by shortcutting for loops (via `has_complex`) (#110607)
26bfb0fc21 : Check for both workflow and job names from Dr.CI (#110661)
64583c4d04 : [CUDA Host Allocator] Add support of CudaHostRegister (#108488)
57e9969021 : feat(optim): Add adadelta multi_tensor support for complex, with `has_complex` shortcut (#110631)
11047be10e : feat(optim): Add `NAdam` support for complex, with `has_complex` shortcut (#110634)
347ea3fe0d : feat(optim): Add `RAdam` support for complex, with `has_complex` shortcut (#110635)
be5dc3a00d : [export] Update ArgumentSpec definition. (#110612)
83061ee177 : [aotinductor] Fix benchmarks with self.autocast (#110490)
8a09fe4a05 : [ez] Remove print in heuristics aggregation (#110621)
dac895c10a : Revert "Multiprocessing support for NT (#110292)"
555c83d097 : Added a UserWarning when using torch.{std,var,std_mean,std_var} with dof<=0 (#109824)
81ce5d5725 : Revert "pin_memory support for NT (#110404)"
11b3210a11 : [Reland2] Remove calls of c10::either (#110487)
330db8278b : Revert "Update singleton int to error when inequality relation is undefined (#110044)"
1c3fae46ee : Revert "Support SingletonSymNode mul with coefficient (#110369)"
236afe73a2 : Revert "Update custom Function preserve torch function when inputs returned as-is (#109825)"
fdf6055ea7 : Revert "Add symbolic singleton int (#110370)"
585e2bd818 : Revert "Symintify guards.cpp (#110371)"
bcd44dac60 : Revert "Use is_symbolic instead of testing isinstance in some place (#110372)"
5d963474aa : Replace enforce_dtype with dtype in ShardedTensor.gather (#110561)
f274c7b32c : Add functional collective all_to_all_single and support it in Inductor (#110195)
df7d01aed5 : perf(inductor): use for loop with shortcut in `Optimizer`s to speedup against list comprehensions (e.g. complex conversion) (#110613)
7b6042111f : [quant][pt2e] Refactor conv related annotation for XNNPACKQuantizer (#110308)
be02103786 : [BE] Get rid of code duplication (#110619)
82e353fffc : [BE] Use nested namespaces in autograd/templates (#110618)
cae537126f : Set _diffThreshold on our TestCase (#110603)
668eb55488 : [BE]: Enable some basic pytest style rules (#110362)
c95cf4b4c9 : [dtensor] add grad placements kwarg to to_local API (#110629)
ada65508d2 : Add option to flop counter formula registration to get raw values (#110591)
9e72c9cccd : [torch] easy missing move in aoti_runtime/model.h (#110469)
71beca4899 : [dynamo, logging] Report name of defining class alongside function name in Dynamo logs (#110190)
c99de9f37c : fix(optim): adagrad sparse multitensor incorrect early exit (#110454)
ecdd1bcf03 : Back out "[Inductor] Break the loop fusion when node2 depends on node1 mutations (#109172)" (#110622)
88616349d7 : [state_dict][1/N] Implement the basic functions of distributed.checkpoint._state_dict (#105902)
298f01d9a2 : [aotinductor] Avoid generating redundant kernel loading code (#110510)
f1b94461aa : [AOTInductor] ProxyExecutor support Dynamic Shape (#110526)
a0cea517e7 : Add 9.0a to cpp_extension supported compute archs (#110587)
c89d35adfe : Bump pillow from 9.5.0 to 10.0.1 in /.ci/docker (#110494)
efdf155383 : Add requirement for input to AllGatherIntoTensor to be contiguous (#109561)
f21c322e20 : Fix typo in BatchLinearAlgebraLibBlas.cpp (#110608)
d6e5898e8d : Quieter logs in CI (#110033)
3597325bc7 : pin_memory support for NT (#110404)
cc1de49340 : [HigherOrderOp] fallthrough some keys by default. (#110478)
26f634eefb : Enable aarch64 for fixing undefined symbol error. (#110542)
a94b6f39d1 : [ROCm] conditionally enable hipsparse const descriptors for version >= 2.4.0 (#110317)
f767a6c57a : Made pattern-matcher diagnostics lazily reported + added TORCH_COMPILE_CPROFILE (#110504)
1e4c0641ce : Revert "Made pattern-matcher diagnostics lazily reported + added TORCH_COMPILE_CPROFILE (#110504)"
1a729618ef : [FSDP][optim_state_dict] Make the new optimizer allgather fusion work with fine-tuning models (#110540)
f17fe89e14 : Multiprocessing support for NT (#110292)
7c72238e4b : Back out "Enable pickling model prepared with QAT qconfig" (#110392)
cf1b494afd : [AOTInductor] Store loaded kernels in the model (#110554)
c36b31d530 : `torch::nn::AdaptiveLogSoftmaxWithLoss`: check length of `cutoffs` (#106777)
00b9afa429 : [vision hash update] update the pinned vision hash (#110571)
416eca9736 : export db links for user errors (#110555)
21019620ee : Revert "[Dynamo] SizeVariable can be indexed by symint (#110349)"
62cad5b5b0 : [quant][pt2] Support cudnn_batch_norm in QAT fusion (#109908)
4b1e138162 : [dynamo] [easy]Remove InstructionTranslator from within Set (#110521)
a93337ed55 : [export] Add ir spec (#110394)
a8653f35de : One more small Perf Tweak to fill_ (#110294)
434a996c42 : Fix typo under torch/_inductor directory (#110530)
9648df1a6a : Made pattern-matcher diagnostics lazily reported + added TORCH_COMPILE_CPROFILE (#110504)
e686341f64 : Consider that ops can be fused into cat in the min-cut partitioner (#110501)
d24e7be243 : Include `onnx` and `onnxscript` information in collect_env.py (#110560)
653f966df0 : Fix type promotion of float8_e5m2 and float8_e4m3fn (#110279)
c121f957c2 : [aotinductor] Enable test_non_default_cuda_device on CI (#110509)
9f40ffeec6 : [optim] disable large_tensor tests for ROCm (#110559)
6a974bec5d : Change flash attention outputs to be SymInt instead of int (#110533)
f1d81134ef : Print output type if assert fires (#110534)
f3aba45049 : [ONNX] Create onnxscript-torchlib specific xfails/skips for fx tests (#110536)
95c59b30b8 : Update fully_sharded_data_parallel to fix typing (#110545)
0daa7d4815 : [test][docs] Fix doctest warnings for syntax errors (#110517)
053367b1ed : fix: flake8-bugbear code B024 (#107265)
449271f3f1 : [pytree] Extract reusable generic tests for pytree (#110395)
37afa0c349 : fix(inductor): Increase coverage of Inductor ATen lowering (#110473)
2e31fae5c5 : Cleanup the code in the `dynamo` userbenchmark (#110519)
0949d97c16 : fix batch_isend_irecv example incorrect usage (#110408)
8672d64fed : Use is_symbolic instead of testing isinstance in some place (#110372)
e1cfcdfa06 : Symintify guards.cpp (#110371)
a7145cb3a4 : Add symbolic singleton int (#110370)
eb8feb8ff8 : Support SingletonSymNode mul with coefficient (#110369)
07331c65e6 : Update singleton int to error when inequality relation is undefined (#110044)
4e73eee93f : Update custom Function preserve torch function when inputs returned as-is (#109825)
21d77bcf80 : added path to correct directory containing headers (#110063)
6fc09aee36 : constant output errors (#110472)
a9df9e5187 : [inductor] get_system shouldn't error if CUDA is not installed (#110282)
6db3853eeb : Add doc for torch.cond (#108691)
901aa85b58 : fix TEST_ROCM definition to disable test_jit_cudnn_extension on rocm (#110385)
46a5558cd5 : [AOTInductor] Simplified AOTInductor interface and model class (#110411)
baa9af155e : Add more tests for native triton kernels (#110486)
f04b1a0d27 : [AOTInductor] Implement autograd eager backend for native triton kernels (#110403)
c0c2e052a4 : [aotinductor] Clean up fallback kernel cpp name generation (#110267)
539367f0bc : [aotinductor] Refactor optional value codegen (#110233)
247c574313 : [jit] make register parameter/buffer thread safe in torch::jit::Module (#110488)
2c1b009e39 : Fix typo under torch/_dynamo directory (#110459)
4c3d3b7176 : [inductor] Lower small gemvs on CPU (#110456)
30c4c6ff9b : [PyTorch CCA] Refactor caching allocator config code (#110123)
156aefa89b : Revert "[3/N] Add -Wdeprecated and related fixes (#109698)"
5220d0dfaf : Increase header coverage of clang-tidy (#110443)
0e55cc4986 : [HigherOrderOp] Flatten outputs of `wrap`. (#109433)
f68f49c462 : Refactor expect tests on test_higher_order_ops.py. (#110290)
9f0601df6d : Fix a typo in `cholesky_inverse` documentation (#110364)
31d635803b : [Dynamo] Fx proxy for builtin all with list iterators (#109972)
2bf3ca1be7 : [torchdynamo] preserve deterministic_algorithms_warn_only in convert_context (#110457)
dddf581da7 : [dynamo] Add graph break on requires_grad_() (#110053)
562c68e56f : [nccl] denoise warning msg (#110433)
a0e321d5ad : [vision hash update] update the pinned vision hash (#110489)
3fd938369f : add `foreach_abs` meta registration and inductor decomp (#110468)
08c7dcda65 : [pt2e][xnnpack_quantizer] quantize "mul" (#110428)
66202ed29c : [pt2e][xnnpack_quantizer] add util function to convert scalars to attrs (#110427)
64416a1fc7 : [quant][docs] Fix formatting (#110460)
005e8ddcb9 : cache the hash construction on Guard (#110464)
3fe3439242 : Use LLVMSymbolizer directly for unwind inside fbcode (#108800)
510ec7e3c5 : [Dynamo] SizeVariable can be indexed by symint (#110349)
50054b1a62 : [AOTInductor] ProxyExecutor support ReinterpretView inputs (#110451)
dd95eaaf1a : turn back on constant folding in fbcode (#108604)
efb73fe8e4 : Fix send()/recv() to adhere to timeout (#109611)
a0bffe7ed7 : [S366352] Print nccl version during initialization (#110305)
c31fcdaa4f : [3/N] Add -Wdeprecated and related fixes (#109698)
836ba6430a : [AOTInductor] Initial functionality for Inf and NaN checker (#109526)
06e88d2cfc : [aotinductor] Remove output_spec from AOTInductorModelCache (#110462)
98c8550158 : Fix Triplet Margin Loss Opinfo (#110302)
a8a31bc165 : [dynamo][BE] test_misc.py shouldn't change the default dtype globally (#110412)
dc794ec32c : [dynamo] Trace through builtin `abs` (#110398)
a389181f2e : [MPS] add support for aten::nextafter (#109685)
9ce2e02fd6 : Revert "[ROCm] Remove PYTORCH_MIOPEN_SUGGEST_NHWC flag (#90725)" (#110319)
b457e3f79a : Reland attempt 2 of "Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)" (#109906)" (#110079)
b5c3a17c2c : [fuzzing result][fuzz_torch_jit_lite_interpreter] read-heap-buffer-overflow-far-from-bounds (size 4) in c10::IValue::IValue() (#110441)
da63c7f2c3 : [AOTInductor] remove CUDA dependency for cpp backend (#110409)
df3ab70dde : Revert "Added new test sample to interpolate op in OpInfo (#104181)"
40be6b72e1 : [ez] Type function in distributed_c10d (#110435)
5977d17953 : Update common_methods_invocations.py (#110383)
aecfe5d168 : [aoti] Remove pessimizing move (#110446)
174e46b853 : [inductor][easy] Free functions in headers should be declared inline (#110445)
cd0e7d133b : Migrate MacOs wheel binary builds to ephemeral M1 runners (#110432)
7f0a659ccc : Script to compare measured (trace) runtimes with estimated runtimes (#108037) (#109076)
f2a1b93549 : Back out "[quant] Support integer implementations for adaptive_avg_pool2d (#104226)" (#110316)
9bc5e10899 : [New][1/N] Dynamo skipfiles refactor (#110330)
aa3629ee3e : Fix typo under docs directory (#110359)
4069d1de59 : [distributed] Remove recordStream for callback that ends a profiler event (#109933)
ff96f6d04f : [core IR][reland] Add `split.Tensor` and `unbind` decompositions to core ATen decomp table (#110323)
4cdc52a2d4 : Bump urllib3 from 2.0.2 to 2.0.6 in /tools/build/bazel (#110421)
2cbfcc740f : use torch.xpu.manual_seed_all in torch.seed (#110376)
428cbd7513 : [ao] fixing multihead attention convert size (#110407)
f76e5c846d : Speed-up casts to FP8 (#110251)
f4c0ef95bc : [AMD] Fix broken build from nested transformer utils (#110245)
d9fe1713c3 : Enabled batch rule decompositions for upsample*.vec ops (#110333)
15219f53d1 : [AOTInductor] Fix ProxyExecutor's handling on multiple outputs (#110374)
03f28dbce3 : [HigherOrderOp] Better testing strategy for wrap that checks guards and recompiles (#110343)
ce50132748 : [vision hash update] update the pinned vision hash (#110424)
d15d7a6485 : [DTensorTestbase] Add "cpu:gloo,cuda:nccl" backend to DTensorTestbase (#110397)
f7909cb947 : Build and test iOS on GitHub M1 runners (#110406)
3fe94e46c2 : Skip test_retracibility under ASAN (#110414)
3bd229b53c : Add quantized tensor function to get scale and zero point (#110095)
f69e9c8c91 : run_tests.py minor logging changes (#110188)
e55d6f923c : minor tf32 fixes for unit tests on H100 and L40 (#110201)
3812f2e40c : Preserve layout on like constructors (#110242)
d58a91b2a6 : [4/N] Move remaining c10::variant calls to std::variant (#110382)
01b2f25ebd : [inductor] Cast loads from boolean tensors to `tl.int1` (#110388)
28b3ff7974 : [quant][pt2e][docs] Update main quant doc with pt2 export quantization information (#110260)
cba3f407b1 : Revert "[HigherOrderOp] Flatten outputs of `wrap`. (#109433)"
859733512f : Revert "Refactor expect tests on test_higher_order_ops.py. (#110290)"
cdde899a73 : [FSDP][optim_state_dict] Fuse allgather for optim_state_dict when use_orig_params is True (#108298)
15dfe7b8e3 : Actually enable typechecking for _inductor/index_propagation.py (#110110)
80b6f072e3 : [ATen] Remove ATen.h includes from transformers (#110199)
c28bb46445 : Fix test_mem_efficient_attention_vs_math_ref_grads tolerance from test_transformers.py (#108094)
6b2c52278e : Benchmark flag to include slowdowns when computing gmean of speedups over eager (#108375)
b5268456f9 : Fix optimize_for_inference to support modules that don't have a forward method (#110013)
651b198cdf : [HigherOrderOp] Flatten outputs of `wrap`. (#109433)
d9aecaefbe : Refactor expect tests on test_higher_order_ops.py. (#110290)
92242f599a : [PyTorch] Add Expanded call stack to nodes [Take 2] (#110229)
16e3f158b9 : Add function to port FX minified graph to HLO via StableHLO (#109084)
7e6cf04a84 : Revert "Multiprocessing support for NT (#110292)"
881e7304d6 : Multiprocessing support for NT (#110292)
7827ae2864 : Increase job timeout limit when running with memory leak check (#110193)
8d6479725a : Revert "Adding Backward Support for NestedTensors and FlashAttention (#97485)"
26900d21c2 : [dtensor] skip pytree when not necessary (#110132)
fd6c993eea : Add missing CUDA error check (#110368)
46d1f9b385 : fix(lint): Fix lint issues on main (#110389)
a3c1e3c95c : Generalize toAccumulateType() (#108248)
7853f8f6da : Fix override warnings in nvfuser (#110350)
e47e946bbf : [aotinductor] Use dynamic_shape instead of constraints (#110360)
87f8bc65f8 : Added new test sample to interpolate op in OpInfo (#104181)
175b626216 : Enable torch.promote_types in Dynamo tracing (#110358)
e0348ceceb : Avoid undefined behavior in JIT-generated conversion code (#110212)
f7812cdbd9 : [inductor][Optimus] Improve logging for Optimus (#110186)
06464a3477 : Change compiled_autograd tests to xfail instead of skip (#110348)
a588648759 : [DCP] Fix 'torch.cpu' has no attribute 'current_device' in checkpoint/optimizer.py (#110299)
13af952f94 : [export] Add run_decomposition() function to ExportedProgram (#110236)
13681382d5 : Add heuristic for when `evict_first` should be set (and some other minor things) (#108841)
e4414716d5 : [onnx] support attn_mask fp16 type (#110306)
55905c4a1a : [2/N] Enable clang-tidy to c10/test/*cpp (#110270)
ef5ff79019 : [2/N] Clean up CMake target linking (#109986)
669faab0ad : [AOTInductor] Add non-default device test (#110024)
2bcae75513 : [vision hash update] update the pinned vision hash (#110344)
e8c0364f36 : [AOTInductor] Add model runner to avoid using torch_extension (#110263)
898656e9d1 : [AOTInductor] ProxyExecutor supports Tuple of Tensor and List[Tensor] in returns (#110187)
6bb448a2d3 : [inductor][fbcode] Add -D C10_DISABLE_TENSORIMPL_EXTENSIBILITY to cpp_compile_command (#110122)
d0ad848aa5 : Enable misc clang-tidy checks (#110283)
2ead6c2f6e : Skip launching kernels with zero grid in AOT Inductor (#110312)
81a74457ca : [BE] Clean up trymerge code handling flaky failures (#110133)
f7ba3e85e2 : [Dynamo] Add functional triton kernel wrapper (#110185)
6b84658433 : [CUDA][cudaMallocAsync] Improve `PYTORCH_CUDA_ALLOC_CONF` error message (#104891)
ad8aef0f98 : [BE] [3/N] Use nested namespaces (#110314)
8745d2d4f2 : Small optimization to how we call flash-attention (#110324)
7eeb392eb3 : [Inductor] Enable the item() and nonzero() codegen test on CPU (#110262)
e0be9ebc18 : Simplify the conditionals used for learning rate calculation for `ConstantLR` learning rate scheduler (#109785)
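The schedule simplified above can be sketched as a plain function; this is a hand-written illustration of `ConstantLR`'s documented behavior (scale by `factor` until `total_iters` epochs have elapsed), not the scheduler's actual implementation:

```python
def constant_lr(base_lr: float, factor: float, total_iters: int, epoch: int) -> float:
    # ConstantLR semantics: the learning rate is base_lr * factor while
    # epoch < total_iters, and reverts to base_lr afterwards.
    return base_lr * factor if epoch < total_iters else base_lr

assert constant_lr(0.1, 0.5, total_iters=4, epoch=0) == 0.05
assert constant_lr(0.1, 0.5, total_iters=4, epoch=4) == 0.1
```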
993eea0edd : [aotinductor] Fix a missing schema issue for repeat_interleave (#110105)
ee0bff209c : [LTC] correct AdaptiveAvgPool3d channel dim index for shape inference (#109822)
5a87477e3f : [BE] Use `std::make_unique` (#110298)
b083058e45 : Revert "Make unbind() overrideable for NT subclass (#109122)"
1e95a1ae8c : MAINT: pytorchify torch._numpy tests: core/ and fft/ (#109815)
9c7071b0e3 : [fuzzing result][fuzz_torch_jit_lite_interpreter] read-heap-use-after-free (size 8) in std::_Function_base::_M_empty() (#110289)
f2d7faf4ba : Revert "MAINT: pytorchify torch._numpy tests: core/ and fft/ (#109815)"
28d69d5256 : Adding Backward Support for NestedTensors and FlashAttention (#97485)
359c2a53f5 : dynamic_shapes + retrace exported program (#110276)
c2c7c4035f : Revert "Simplify the conditionals used for learning rate calculation for `ConstantLR` learning rate scheduler (#109785)"
b253fc9c93 : Revert "[1/N] Dynamo skipfiles refactor (#109567)" (#110296)
bc047ec906 : [inductor] Make sure unfuse_addmm and addmm patterns don't overlap (#110235)
d04b35e7e3 : [inductor] Fix bug in input mutation (#107614)
d7de26804e : [AOTInductor] ProxyExecutor supports List[Tensor] return type (#110182)
d6d3f6cfe5 : Add weight update for DSOModel. (#110273)
6e2c14a0e8 : [Codemod][[codemod] Replace third-party mock with unittest.mock] caffe2/caffe2 (#106541)
88ef126a93 : rename nanogpt_generate to nanogpt to also support train (#109746)
30759848fa : [inductor] handle non-list/tuple outputs for FallbackKernel (#110145)
defb364adf : Clean up test_external_module_register (#110254)
0ff1155d3a : [aotinductor] Refactor test_aot_inductor to take different devices (#110216)
ce6d09a775 : [aotinductor] Refactor test_aot_inductor (#110215)
28f52f2f80 : Fix aminmax on CUDA when input shape contains 0 (#107564)
2d50a30d77 : [Dynamo] Add native support for Triton Kernels to Dynamo (#109623)
3693777a86 : Pickle support for NT (#110219)
c9511e8ac9 : [foreach][BE] cleaning up MultiTensorApply.cuh (#110228)
92f4a7b663 : [inductor] Add fbcode include path for cuda (#110240)
758735b739 : [dynamo] Convert dtype arguments as well as inputs in `cast_to_fp64` (#110232)
24e5d61af8 : Log usage of optimizer in backward (#110206)
acac92f806 : [vision hash update] update the pinned vision hash (#110258)
d615f0078c : Updating documentation for `PolynomialLR` (#110151)
07ec95b17c : TD: Fix sorting bug for historical correlations heuristic (#110257)
3dc479e70b : [1/N] Apply clang-tidy to c10/test/*cpp (#109278)
e6b5e0ecc6 : removing the functionality of nvfuser python APIs (#110124)
88de391692 : [torch.library] Fix some docstrings (#110214)
83283b4f0d : Simplify the conditionals used for learning rate calculation for `ConstantLR` learning rate scheduler (#109785)
c9b8e06060 : [quant] Enable quantization for wav2letter (#109830)
ce8b4f56d8 : [dynamo] Dont put nn module guards on torch inbuilt nn modules (#110230)
20dabea35d : Inductor cpp wrapper: support MkldnnRnnLayer (#107858)
d1a13129bb : Add support for item() and nonzero() codegen in Inductor (#109893)
3de42995e4 : [quant][pt2e] Add quant API re-entrant test (#110125)
bbb95878e9 : [LLVM] Update apis incompatible with llvm versions in codegen (#110200)
ae546db562 : [GHF] Update mergebot tests (#110221)
be3b16daad : [decomp] Fix baddbmm decomposition (#109714)
41d6c29b19 : [BE] Fix `pointless comparison` warning (#110227)
f82a29e32b : [inductor] Add CI jobs to test AOTInductor (#108419)
81da6db74a : fix a missing keyword virtual (#110220)
e0b035c220 : Revert "[core IR] Add lift_fresh, split.Tensor, and unbind decompositions to core ATen decomp table (#110102)"
aaaa3c1586 : Fixed minor issues for bmm/mm decomposition (#109836)
168f516fae : [3/N] Move c10::variant to std::variant (#110141)
84c5435b29 : [1/N] Dynamo skipfiles refactor (#109567)
e3eb1d92d8 : [quant][docs] Add documentation for `prepare_pt2e`, `prepare_qat_pt2e` and `convert_pt2e` (#110097)
3603f646eb : BUG: fix torch._numpy.arange(5, dtype="float32") (#110005)
5f7eff0adb : Replace node.meta source_fn with source_fn_stack (#108595)
1d0a8eed5d : [generate_opcheck_tests] Enable using same failures_dict for multiple testclasses (#110164)
f2c360e3e5 : Reorganize and rename COW files and APIs (#110191)
c62be12061 : Added batch rules for _upsample_bi*2d_aa and _upsample_bi*2d_aa_backward (#110172)
2a246c5259 : update type() calling to not use unneeded device (#110163)
7f5fd92372 : Reland use std::make_unique after internal changes (#109742)
7f5737392d : [FSDP] fix: fix for fsdp exec order pre fwd record (#110138)
6f48d872d0 : Re-land: Break graph on `manual_seed`. (#109109)
5f417fd710 : [aot_inductor] Lightweight model runner (#110158)
ad0ba5e187 : [torchbench] Consistent accuracy results with dynamobench (#110189)
8e14e76c34 : [inductor] Enhance an input type assertion msg (#110176)
248a1b7011 : Revert "Enable function declaration check in Vulkan and Metal backends (#106762)"
eb082ef604 : [inductor] Decompose addmm if it's a dot product on cpu (#110010)
ee8983da70 : 109605 dynamo scalar ndarray pow gen (#109953)
5da5e068f3 : deprecate constraints in favor of dynamic_shapes (#110143)
419ec3b229 : Enable pickling model prepared with QAT qconfig (#109288)
c71a64ccce : [aotinductor] Rename if name is prefixed with integer (#110113)
e20c35a53b : Allow public access for imports (#108914)
fc1fcc4d17 : Enable typechecking for _inductor/fx_passes/group_batch_fusion.py (#110111)
3e7f23e04f : [inductor] Actually enable typing for sizevars.py and joint_graph.py (#110109)
a81d083b1c : [Reland] Add -Wdeprecated and related fixes (#110019)
7f2b51c668 : [AOTInductor] ProxyExecutor supports custom op with tuple output (#110140)
75462fd870 : Revert "[1/N] Dynamo skipfiles refactor (#109567)"
68b0db1274 : Define the public API for torch.distributed.fsdp (#109922)
1ca68c971c : distributed doc fix (#110157)
f5a23ca78d : Make unbind() overrideable for NT subclass (#109122)
f8e0ebec8c : [1/N] Dynamo skipfiles refactor (#109567)
22e706f768 : [core IR] Add lift_fresh, split.Tensor, and unbind decompositions to core ATen decomp table (#110102)
840bb650f8 : [AOTInductor] Update regex rule for symbol (#110184)
9399e0b1ff : add fp16 support for gemm (#99498)
d796518485 : [refs] Fix size check from #108360 (#109083)
85e408217a : [ONNX] Move out onnx bench bash scripts (#103983)
60b46d7902 : Add ROCm folks as CODEOWNERS for triton.txt (#110108)
40b83d98de : fix bugs in export docstrings (#110169)
bf7307adf8 : Support inference_mode decorator (#109274)
a200bb5e54 : [BE] Do not use `assert` in unit tests (#110179)
2ff9d1fda3 : Add size to constant - type dispatch through BaseListVariable.cls_for (#110166)
7782108792 : [AOTInductor] Fix freeze for AOTInductor (#110055)
955298bc40 : Use Dr.CI results to classify flaky failures in trymerge (#110054)
213badf632 : [dynamo][guards-log] Add debug msg for nn_module_guards only when log is enabled (#110167)
6aae636f69 : chore(inductor): Simplify `will_fusion_create_cycle` and cleanup to `node.ancestors` (#109976)
b123fd168a : Higher order op for preserving leaf functions through trace, particularly for getting user defined hooks to compiled autograd (#109690)
fe11227764 : [dynamo][higher order op] Fix minor bug in error msgs (#110099)
7c1702f099 : Keep JSON mocks file in gzip format (#110173)
d4b06dc426 : Pass S3 credentials to ios upload workflow (#109222)
21ff0cc3ac : [xla hash update] update the pinned xla hash (#109999)
ae064ad4c6 : Fix XLA update rules (#110177)
5ef5f1ab9a : [HigherOrderOp] wrap (and checkpoint) should accept pytree inputs (#109962)
58c33789c6 : Fix governance.rst link rendering (#110171)
36eb1bb548 : Use constexpr members in ConstantSymNodeImpl (#110142)
a8bed7191b : [Easy] use BaseListVariable cls_for for all list-y type dispatching (#110159)
ec5bbef8af : [AOTInductor] Switch ProxyExecutor to use AtenTensorHandle (#109748)
633bd0765e : Integrate xpu into torch.Generator and torch.seed (#109866)
0511df0ee9 : [ROCM] enable skipped test_api cpp tests (#109817)
063d2572da : Revert "Use Dr.CI results to classify flaky failures in trymerge (#110054)"
8791e8697a : Print full stack trace on suppressed error (#110106)
0721a394b6 : [executorch][kernel reg] Allow kernel manual registration (#110086)
1265400ba6 : Revert "Reland: implement a function to convert a storage to copy-on-write (#110022)"
7dbdf3be1e : Fix inductor CI (by updating graph break count) (#110160)
bf8617c37d : Enable function declaration check in Vulkan and Metal backends (#106762)
774137d506 : Add torch.ops.import_module (#110090)
34ded74399 : [Dynamo] fix signature in dynamo types (#110081)
a51b8df261 : Add support for event_tracer in codegen layer (#109990)
10c646295d : When doing typed typecheck, also check signature with symint removed (#109727)
1b51d29b66 : [quant][pt2e] Enable constant folding for quantize ops (#109343)
6138750ab1 : [vision hash update] update the pinned vision hash (#110127)
ddbf1aab64 : [export] Add dynamic_shapes to _export.aot_compile (#110101)
f7c9ef88f5 : Add masked_select abstract impl (#110103)
33d8f5f73e : fix typo (#109965)
869226bf94 : Avoid passing generator to parametrize (#110104)
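The pitfall behind this fix is generic Python behavior: a generator is single-use, so passing one where a reusable sequence is expected (such as test parametrization) silently yields nothing on the second traversal. A small stdlib-only illustration:

```python
# A generator can be consumed only once.
gen = (i * i for i in range(3))
assert list(gen) == [0, 1, 4]
assert list(gen) == []  # already exhausted

# Materializing to a list (or tuple) keeps the values reusable.
params = [i * i for i in range(3)]
assert list(params) == [0, 1, 4]
assert list(params) == [0, 1, 4]
```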
dec140f1ea : [core IR] Add a core decomposition for aten.all (#110093)
51a8c166a6 : Add test for `ShapeEnv` recording fallback. (#109944)
9928c10e71 : [core IR] Add glu as a core decomposition (#110043)
4d0ae7c9da : [inductor] support _scaled_dot_product_flash_attention fallback (#110085)
19ca883f8b : [pytorch][jit] allow passing in obj loader in unpickle api (#109730)
3262c5358f : Use _check_is_size for validate_dim_length (#109849)
27443eadeb : [dtensor][7/n] remove reduction rule (#109144)
2dd9a79d22 : [dtensor][6/n] refactor reduction to use op strategy (#108264)
986d255db2 : [dtensor][5/n] switch random ops to op strategy (#108263)
d0f82cd082 : Use Dr.CI results to classify flaky failures in trymerge (#110054)
bb9779ecd2 : Revert D49640259: Revert D49615962: [optests] Test names in failure dicts should be prefixed with test class (#110094)
ac3190c52c : [cpu] vectorize atanh (#107786)
194d9aa0f2 : Revert "[Dynamo] Match closures by code ID (#109427)"
a7409695bb : [export] Verifier for exported program (#109519)
0a60219fe3 : [foreach] Fix 0-size handling for real for real (#109402)
317e39a8ad : [C10d] Cleanup collective sequence number. (#109136)
818f2297e6 : Ensure fill_ works when value is a view of self (#109835)
3705e65254 : Add `pin_memory` to `torch.Tensor` type annotation args (#109797)
1277d0e834 : [BE] Add sharding data by default to metrics (#110035)
d91492a7a4 : [MPS] Fix sort with empty tensor. (#109584)
993530ee4f : [aotinductor] Relax the CUDAGuard device index check (#110030)
47adcd412f : Increase timeout for slow tests (#109206)
0dcea70bfd : fix sfdp patern 13 accuracy issue (#110001)
2393864070 : Revert "[optests] Test names in failure dicts should be prefixed with test class (#110045)"
a5de10d7a5 : Remove linux.t4g.2xlarge Usage (#110064)
ea20db8aa0 : [optests] Excise unused operator_compile_check (#110011)
812bf847b7 : Revert "Add test for `ShapeEnv` recording fallback. (#109944)"
e05eb69c93 : Don't link to libcpuinfo on s390x (#109875)
92d86cd1ad : [inductor] Fix triton compiler error in multilayer any (#109325)
1b90f07f5a : Revert "Reland "Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)" (#109906)"
132a138a01 : MAINT: pytorchify torch._numpy tests: core/ and fft/ (#109815)
8140494afd : [3/N][2D] Enable training with new 2D flow (#110034)
0673aa3d28 : [dynamo][guards-log] Print nn module guard saved dict versions for debugging (#110028)
5df8aca994 : [core IR] Add a core decomposition for floor_divide (#110046)
26e8cc0465 : Add test for `ShapeEnv` state when not recording. (#109945)
2ac7e52d34 : [dynamo][nn_module_guards] Config flag to disable nn_module_guards (#110039)
dd819138da : [pytorch vulkan] add tensor vulkan check for at::cat (#109936)
5dcee01c2b : Monitor baseline for TD prioritizations (#110031)
ac1e85161e : [MPS] Fix nll_loss with default ignore_index (#109574)
0087118997 : [MPS] Fix mps to cpu copy with storage offset (#109557)
129f535778 : [VMAP] Add linspace and logspace batch rules (#105451)
5589b81173 : Remove redundant change for gloo (#106750)
dddf07e56a : Reland: implement a function to convert a storage to copy-on-write (#110022)
76fcec74c4 : [optests] Test names in failure dicts should be prefixed with test class (#110045)
41bb5c27a2 : Enable typechecking for _inductor/fx_passes/joint_graph.py (#109955)
86762f33d1 : Enable typechecking for _inductor/fx_passes/pad_mm.py (#109954)
55f8553078 : Enable typechecking for _inductor/fx_passes/pre_grad.py (#109952)
89fc66fb36 : Enable typechecking for _inductor/fx_passes/split_cat.py (#109951)
ac60638c6c : [ndk] Clean up LLVM and libc++ 12 and 13 (#107326)
f8fcc54f70 : Add torch.library.impl_abstract (#109912)
b481349d3c : [dynamo][guards-log] Do not print duplicate guard entries (#110023)
56659844f9 : [profiler] Show shapes for lists of tensors in chrome traces #109263 (#109751)
4bf1cd6961 : [aotinductor] Rename aot_runtime to aoti_runtime (#110007)
b07bebd4bd : Add default arguments to sym_constrain_range_for_size (#109858)
bcedbac96a : Re-enable more Windows tests (#109847)
a81cb0de16 : [Dynamo] Support python class member_descriptor (#109956)
5f6216b12c : Add torch.fx.experimental.recording to uninteresting_files() (#109887)
7af30ea54c : [AOTInductor] Bug fix for redefining symbol name (#110041)
6275f91654 : Improved DDP checkpoint documentation (#106985)
7ed06e8317 : [inductor] enable mypy checking in torch/_inductor/codegen/cpp.py (#109729)
f87863335c : [BE]s/DEFINE_ENUM/DEFINE_ST_ENUM_VAL_/ (#109917)
57cdad2396 : [aotinductor] Update benchmark to include compilation time (#109998)
ab70183c53 : [RFC] Allow "spawn" start method for torchinductor workers. (#108850)
a4dec8d306 : Add test for `ShapeEnv` recording fallback. (#109944)
5c4b5baf21 : Fix python decomps for OpOverloadPackets and add tests (#107707)
c1a2f35805 : Revert "Disallow skipping dynamo (#109476)"
c4f2b6dbd2 : [profiler] use PyCFunction_Check to check both PyCMethod_Type and PyC… (#110002)
83deaa16ed : Revert "[1/N] Cleanup header inclusions in torch_cpu by iwyu (#101178)"
d6cc3ac8b2 : Add PR number to metrics when available (#109406)
3de0857503 : [Dynamo] Match closures by code ID (#109427)
09c598745c : Rename `torch._C._TensorBase` to `TensorBase` (#109940)
a565f1bee6 : [aotinductor] Skip benchmarks with control flow (#109661)
6b39cf863f : Fix invalid arg to getLogger in torch distributed checkpoint (#110008)
7de669f2f9 : [core IR] Remove trunc decomp and add trunc to core (#109902)
fe5e63f5db : [inductor] Do type promotion in pointless cumsum pattern replacement (#109960)
4734496a0c : Extend storage access error api for untyped_storage() (#109750)
a5364b12bb : Revert "[ONNX] Remove the deprecated function `_export` (#109763)"
52e14787ae : Revert "MAINT: pytorchify torch._numpy tests: core/ and fft/ (#109815)"
f5886bf352 : Revert "[3/N][2D] Enable training with new 2D flow (#109553)"
837272f150 : Python 3.10 Union operator | support for JIT (#109293)
d0fe8fa5db : Reland "Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)" (#109906)
3beed41e12 : [Easy] Remove hook warning where source is always guaranteed (#109898)
5565a29568 : Release GIL in torch.cuda ops wherever possible. (#109159)
96a3a7cc82 : [pytorch] make IterableDataset of Iterable type (#109645)
6a202c36af : Minor fixes in semi-structured sparse code (#105595)
829b5c0949 : Revert "[Dynamo] Support python class member_descriptor (#109956)"
217b37c023 : [3/N][2D] Enable training with new 2D flow (#109553)
12cd776d90 : [Dynamo] Support python class member_descriptor (#109956)
265acd4bea : Clean up CMake target linking (#109959)
7c9052165a : add fp16 support for native conv and deconv on CPU (#99497)
ca5f3a7436 : TST: test that numpy dtypes do not graph break (#109974)
84a67c0665 : Use wrapper instead of V.graph.wrapper_code (#109883)
10f9edc99d : Don't -Werror on cast-function-type (#109796)
bb74d9104f : [PTD][TP] Refactor the test and temporary disable one test case (#109919)
5ad1baf6fa : MAINT: pytorchify torch._numpy tests: core/ and fft/ (#109815)
0d3db1048a : remove nvfuser test in upstream pytorch (#109918)
befe60afc2 : TST: pytorchify test/torch_np/test_dtype.py (#109967)
95e2eec9bf : Better invariants - always route list/tuple to their requisite VTs instead of ConstantVariable (#109869)
e9c9b1ed59 : [Inductor] Generalize inductor triton backend device agnostic (#109486)
b7a95f4fdb : [1/N] Cleanup header inclusions in torch_cpu by iwyu (#101178)
dee100945e : [2/N] Move c10::variant to std::variant (#109723)
c13177f2cb : [FSDP] Propagate requires_grad attribute to unsharded params (#109892)
ebb30bdd6f : Revert "Better invariants - always route list/tuple to their requisite VTs instead of ConstantVariable (#109869)"
d9627c4264 : Revert "[inductor] fix a max-autotune rng state related bug (#109828)"
b89ce814c0 : [FSDP] Remove _set_use_dtensor in post_load_state_dict_hook (#109924)
7bb1d10c2f : Disallow skipping dynamo (#109476)
460fc9da62 : Disabled UserWarnings for some public functions in torch.overrides (#109890)
f35cc0fb6f : Don't record function call if `ShapeEnv` is not found. (#109904)
92c49e2168 : MAINT/TST: pytorch-ify torch._numpy tests (added tests only, not vendored) (#109593)
8d47f90e50 : Pytorchify numpy vendored tests in torch_np/lib/ (#109718)
835c18e7ea : Avoid saving `self` for mean.backward (#109935)
a13201e857 : [DCP] Add unit test for FSDP -> TP checkpoint conversion (#109899)
2872f788aa : add path for DPC++ SYCL device code in Float8_e4m3fn (#109911)
85ddc985d0 : Back out "[pytorch][PR] [Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation" (#109931)
54faedf5f2 : [AOTInductor] Load model on arbitrary device (#109816)
bbdce93571 : Basic fp8 support in Inductor (#109168)
ff7af15e80 : Re-enable max_autotune tests for the CUTLASS backend. (#109831)
c0d746c90e : [ONNX] Relax getting module attributes in ONNX export (#109759)
c789ed6e62 : [Inductor][FX]support nn.Linear nn.ConvTransposeNd for efficient_conv_bn_eval (#109722)
3663436db3 : [inductor] fix a max-autotune rng state related bug (#109828)
d7f3986314 : Fix S367052 to unblock ICVR MC3 (#109853)
06aa6966a8 : Better invariants - always route list/tuple to their requisite VTs instead of ConstantVariable (#109869)
691f8ca4f4 : faster build instructions CONTRIBUTING.md (#109900)
8ed08e5a7c : [dynamo] Graph break on rng get/set state - remove GeneratorStateSource (#109410)
a902150a1e : [Easy] ConstantVariable() -> .create (#109896)
e42d450a55 : [core IR] Add div.Tensor_mode, div.Scalar_mode, and copy as core operators (#109812)
334ead04a9 : Back out "[decomp] Fix baddbmm decomposition (#109714)" (#109855)
f0d71de4ac : Update caffe2 with LLVM-18 API change (#109408)
c26270c733 : [C10D] Even more store scalability work. (#109218)
de1b00abda : inductor: tighter upperbound for rblock scaling (#109839)
e2cfbca5ab : Add clip to dynamo runners (#109840)
2895fbd857 : Enable typechecking for _inductor/pattern_matcher.py (#109613)
411ca10e74 : [Pytorch][Vulkan] Add baddbmm (#109851)
1df14f1bf8 : Move has_triton to top level triton utils so that dynamo can also access (#109832)
4b0281b32c : [BE][foreach] name tests correctly. noncontiguous inputs != fastpath (#109771)
92de1d3222 : [C10D] Push store scalability a bit further. (#109217)
c27c56a5c4 : [inductor] Add back a missing header include (#109845)
d0c8e8240d : Revert "When doing typed typecheck, also check signature with symint removed (#109727)"
629a628cc8 : Revert "Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)"
2512017814 : Fix for out of bounds read in torch mobile flatbuffer loader (#108439)
93ce6df931 : Fix torch.utils.benchmark API while using privateuse1. (#108548)
f092eecc92 : Handle C++ exceptions raised during `finfo`/`iinfo` calls (#109743)
d7dfa91e12 : [inductor] Refactor some libtorch c shim interfaces (#109834)
098d62d278 : Add global_step parameter to SummaryWriter.add_hparams (#109572)
b4ede53776 : Use constrain_range_as_size for nonzero/repeat_interleave (#109857)
56ef200c2d : When doing typed typecheck, also check signature with symint removed (#109727)
24ba4b7059 : [dynamo][`__torch_function__` 1/n] Add getset descriptor and `__get__` vars (#109542)
d7c05bb2e8 : [ONNX] Remove the deprecated function `_export` (#109763)
b5d6e831a9 : Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)
63526a63f5 : Make FunctionalTensor subclass to be more like functorch (interaction with ZeroTensor + Conjugate key) (#109023)
7a21e960c6 : fix infinite loop with primtorch and .to(meta) (#109632)
46b0b7bff7 : _return_and_correct_aliasing: fix for schemas with mutable tensor in kwargs (#109662)
dae9aa8925 : fix subclass custom sizes dynamic shapes caching (#108654)
ebc7039bcb : New export API with dynamic shape specifications instead of constraints (#108448)
cd99cdc3af : fix std::move warnings from gcc (#105780)
4ff294522a : [Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation (#107832)
8124a6c40c : [TORCH_LIBRARY] Add impl_abstract_pystub (#109529)
3268b039ec : Handle unbacked symints in Triton size hints (#109609)
abd9b763ca : [RFC] Add debug log as we lower each FX node (#109602)
e1d71231e2 : [Pytorch][Vulkan] Add bmm op (#109360)
8856c1628e : [inductor] Change AOTInductor to return output tensors (#109790)
d43f9f7707 : Add redirect links to the contributor wiki (#106863)
8dcdc74915 : torch->onnx export support: quantized::linear_relu (#109755)
175ccfc4c8 : Verify flatbuffer module fields are initialized (#109794)
d65e067baa : Updates to attn_mask handling in mem_eff (#109620)
b5fde4c382 : Revert "[Reland] Remove calls of c10::either (#109708)"
255d1a776a : [MPS] Add support for Mish to MPS backend (#109786)
f7ddc54503 : [aotinductor] Update performance benchmark code (109560) (#109820)
8dedc9dd9b : Add meta tests for layer/group/batch norm backward (#109591)
83b4aab5bc : Allow zero sized tensors to be resized with meta_randperm (#109721)
8207118d55 : MAINT/TST: pytorch-ify test_linalg, vendored from NumPy (#109775)
e9e93c5350 : [Reland] Move torch::make_unique to std::make_unique (#109780)
c6b9481c15 : Update type hint for `Tensor.__getitem__`. (#109531)
b1f1b39feb : Revert "Add PR number to metrics when available (#109406)"
09622d8d49 : Allow inferring size-nature from sizes passed to empty constructor (#109720)
6ca964b410 : Remove torchtext from Build Official Docker images (#109799)
0351e2042b : Avoid throwing exception in ClosingTHPObjectPtr (#109758)
2cd0b94533 : Hide __getattr__ from type checkers (#109683)
ef8d461b09 : Fix torchbench --multiprocess (#109657)
ba0362a09e : Remove unused build system checks and definitions (#109711)
5e19216a6e : Add PR number to metrics when available (#109406)
6b7b9c796e : Fix registering jit decompositions for jvp for out wrapped decomps (#109367)
406b8412c2 : Revert "[inductor] Use _unsafe_view decomposition (#109669)"
3f3e353885 : torch.compile + selective activation checkpointing (#105489)
a5145364d9 : [FSDP] Fix _use_dtensor not automatically turn on for model state dict when using DeviceMesh (#109767)
62555930a0 : [inductor] Enable mypy checking for codegen/triton_foreach (#109643)
4eada253e1 : [inductor] Set CUDA_VISIBLE_DEVICES for multi-device subprocess autotuning (#109500)
169ae7540d : Revert "Handle unbacked symints in Triton size hints (#109609)"
ac967e9dad : [export] Fix tree spec matching behavior. (#109679)
d38379f9f1 : Update dynamic shapes documentation (#109764)
86a9534165 : Upgrade nightly wheels to rocm5.7 (#109571)
600d0d0284 : Add "cuda" to MPI backend capabilities (#109614)
b91ba226ce : Don't use cpuinfo on s390x (#109496)
772e104dfd : [inductor] visualize fused ops in svg graph (#107752)
f5b753bab1 : Fix inline_container_test on Windows (#109754)
b780b246eb : Use a reduction implementation for unique when dtype is bool on CPU (#109695)
cddd0db241 : Add `finfo` properties for float8 dtypes (#109744)
e2e9d15726 : Unblock float16 dtype for xla autocasting (#109554)
13bd4ed933 : Add docs for torch.compile(numpy) (#109710)
7a04ae6fba : [export] Remove redundant no_grad() for exported program execution. (#109686)
e4d8ec9fe8 : inductor: only do the conv+bn folding for the freezing path (#109587)
9e2b07ac9d : [Inductor] Break the loop fusion when node2 depends on node1 mutations (#109172)
9c2715bbb2 : [inductor] Clean up AOTInductor runtime ABI (#109678)
4e3b03217d : [BE] Replace 8 with `CHAR_BIT` (#109740)
6e3a7473cf : Trace calls with Python `Enum` values. (#109507)
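A stdlib-only sketch of the pattern this enables: code that branches on `Enum` members, which Dynamo can now trace instead of falling back on. The names here are illustrative, not from the PR:

```python
from enum import Enum

class Mode(Enum):
    TRAIN = 1
    EVAL = 2

def step(mode: Mode) -> str:
    # Branching on an Enum member; calls carrying Enum values like this
    # can now be traced rather than causing a fallback.
    return "training" if mode is Mode.TRAIN else "evaluating"

assert step(Mode.TRAIN) == "training"
assert step(Mode.EVAL) == "evaluating"
```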
55685d57c0 : [JIT] Fix typed enum handling in 3.11 (#109717)
7ce69d5dbe : [RELAND] Remove some unnecessary <iostream> includes from headers (#108150)
05b3a4dd88 : Fix test_libtorch.bat not exiting on error (#109393)
0735f6c0d5 : [Reland] Remove calls of c10::either (#109708)
cadb566bbc : [RELAND] [ATen] Update pre-compiled header (#108149)
8bc00dfffd : Hashing for constant and singleton SymInt/SymBool (#109170)
5252fcb133 : Handle constant SymBool in unary and binary operations (#109169)
8597d37536 : Implement numpy(force=True) (#109636)
1f6828ca99 : Fix numpy(force=False) (#109634)
9a1b6d44bb : [C10d] Add PG::enableCollectivesTiming to make it dynamically enabled. (#108814)
3add22b716 : Created nested utils.cpp (#109304)
559d1f94a0 : Revert "[Dynamo][Test] reland testcase with state (#109713)"
f9947830bb : [ONNX] Remove the deprecated function in symbolic_helper (#109681)
f3c12f5aa2 : [DCP][test] Update test_dtensor_resharding.py (#109619)
7e05cd4eca : [autotuning] move logging logic into logging function (#109155)
90a2026cd1 : [inductor] Use _unsafe_view decomposition (#109669)
6f0cf5a837 : [decomp] Decompose unsafe_split{,_with_sizes} into safe variants (#109668)
9e629dd73c : [decomp] Add all std and std_mean overloads to core decompositions (#109667)
36a8105f54 : [decomp] Fix baddbmm decomposition (#109714)
b60a7c59ea : Refactor check_fast_path_restriction in preparation for has_empty_tensor variant (#109534)
5c897eacff : [Dynamo][Test] reland testcase with state (#109713)
be712a02e9 : Trace `pytree` calls inside `vmap` implementation. (#109107)
654731a52b : Handle unbacked symints in Triton size hints (#109609)
1c4e811565 : replace data_ptr with aoti_torch_get_data_ptr for cpp codegen (#109615)
cdb51d2ad0 : Revert "[2/N] Add -Wdeprecated and related fixes (#109564)"
af3741745c : [CI] Add `torch.compile` works without numpy test (#109624)
b771c04d6e : Handle unbacked symints in buffer reuse calculation (#109603)
63025d4218 : Do not redundantly min start with new_size[dim], since end is already min'ed with it (#109599)
1cc052bcab : Revert "[1/N] Add -Wdeprecated and related fixes (#108626)"
db6e9f66f1 : Use pretty print for checking no duplicated pattern (#109066)
d24ba7a634 : Add 3d Attn Pattern to match HF Whisper (#109156)
881bfbf21d : [c10d] Add tests for using libuv through init_process_group. (#108661)
567e8ebf94 : [1/N] Move c10::variant to std::variant (#103675)
e87bd9f588 : [aot inductor] Make unit tests work on CPU (#109625)
e73efbffab : [Test][ShardedTensor] Add test for corner case for chunk sharding spec (#109626)
a019e5cbff : s390x onnx: byteswap data when serializing it (#107963)
40b2c796dc : [Decomposition] baddbmm (#108534)
b30ee35a6f : [Inductor][FX]Support efficient conv bn eval (#108757)
595af261b2 : [ao] Support Subclasses of `FloatFunctional` in eager mode prepare (#109646)
293205c54b : [AOTInductor] Fix aot_inductor/test:test_custom_ops (#109660)
5b50641bac : [2/N] Add -Wdeprecated and related fixes (#109564)
d137b620c5 : Fix c10_tempfile_test failure on Windows (#109680)
ad53b53518 : Generate patterns in fp16 and fp32 (#109142)
122264a0c0 : [generate_opcheck_tests] tests should ignore meta/FakeTensors (#109641)
d3d71367b9 : [generate_opcheck_tests] Always print a repro (#109640)
af900fe228 : [generate_opcheck_tests] flip unified_diff order (#109639)
7564f04389 : [generate_opcheck_tests] add type checking (#109638)
10d575911e : [generate_opcheck_tests] rename "success" to "xsuccess" (#109637)
d271a5c796 : [minimizer]skip mode for minimizer (#109399)
067f172930 : Serialize Remaining Patterns (#108917)
16d608d70d : Add Python serialization to Pattern Matcher patterns (#108894)
1a5e0edf56 : [dynamo] Avoid divide-by-zero error when printing out choices (#109328)
76dd38b591 : add back in unsafe view decomp (#109663)
238fb66085 : python functionalization: support higher order ops (#108656)
d9342cde6e : custom ops: don't error if autograd input is a tensor subclass (#109248)
c9b60a691b : functorch: fallthrough on calls to custom size/stride/storage_offset calls (#109024)
0317626df5 : [MPS] adding weight_norm_interface support for mps (#108008)
1b3e5b53f3 : [FSDP][optim_state_dict] Add device to _shard_utils.py to explicitly use the device from fsdp_state (#109631)
6b760ffd6c : improve unique performance on CPU (#107846)
518308a740 : Trace through `pytree` API with dynamo. (#108533)
103260a43b : Re-define check for `typing` classes. (#109201)
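Checks of this kind are usually best expressed with `typing.get_origin`/`typing.get_args`, which are stable across Python versions, rather than probing private attributes. A minimal illustration of that introspection (not the PR's actual check):

```python
from typing import Dict, List, Union, get_args, get_origin

# get_origin/get_args give version-stable introspection of typing constructs.
assert get_origin(List[int]) is list
assert get_args(Dict[str, int]) == (str, int)
assert get_origin(Union[int, str]) is Union
assert get_args(Union[int, str]) == (int, str)
```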
85d26f7868 : [inductor] Enable mypy checking for torch/_inductor/codegen/triton.py (#109146)
8705fc1bbd : Revert "Add Python serialization to Pattern Matcher patterns (#108894)"
8b4b1817c8 : Revert "Serialize Remaining Patterns (#108917)"
b1d2028eb0 : Add compiled optimizer test for nadam (#109548)
c2f5d4d8f0 : Revert "Generate patterns in fp16 and fp32 (#109142)"
11c6a98bca : [torch] add use_buffers to swa_utils interface (#109078)
14994cc978 : Generate patterns in fp16 and fp32 (#109142)
7b53303d3c : Improved the docs for torch.std, torch.var, torch.std_mean, torch.var_mean and torch.cov (#109326)
7bf08b77f3 : Serialize Remaining Patterns (#108917)
7db175b6f6 : Add Python serialization to Pattern Matcher patterns (#108894)
5845fc2fa6 : [PyTorch][Coreml] Bubble up NSError from loadModel (#109444)
a86727a06b : [Pytorch][Vulkan] rewrite available() check and add tests for them (#109541)
964b79c813 : [EASY] Update dynamo dependency installing Makefile (#107229)
caf4376349 : [PyTorch] remove branch in isIntrusivePtr (#109273)
e29330deab : [PyTorch] clang-format ivalue.h (#109272)
cd31c170c9 : Revert "[ONNX] Remove deprecated functions (#107208)"
70f2adaec3 : Setup_context does not contain default values of forward() (#108561)
1427b8149c : Revert "Eliminate c10::guts::make_unique_base (#109429)"
a68280e2c3 : [cpu] Vectorize nan_to_num (#98329)
1895bd9bb5 : [inductor] Decompose torch.ops.quantized.embedding_bag_byte_unpack (#109398)
d0cc623192 : [Decomposition] _unsafe_view (#108713)
deea268e43 : Update aten_fill to avoid d2h sync (#109533)
2e721aab98 : [Decomposition] Trunc (#109319)
ae66d0b3bf : [Decomposition] clamp_max (#108718)
25e81f19f3 : reland "python functionalization: add helpers, functionalize_sync and mirror_autograd_meta (#107917)" (#109518)
677a1010e6 : Implement traceable torch.tensor when you have SymInt/SymFloat inputs (#109515)
8ed906030c : add fp16 support for mkldnn conv and deconv on CPU (#99496)
54c28c564f : add Half support for BatchNorm on CPU (#102070)
2f53bca0fc : [Docs] Fix typo in `torch.unflatten` (#109588)
af867c2d14 : [Docs] Fix `compiler.list_backends` invocation (#109568)
a53a677b4d : [1/N] Add -Wdeprecated and related fixes (#108626)
4a60bd22b2 : [Quant][Inductor] Enable quantization dynamic batch size support (#108550)
ac603bc2f8 : [Reland] Eliminate invocations of c10::stoi,c10::stod,c10::stoull,c10::stoll (#109566)
2c1554a032 : Make SymFloat behave symmetrically with float in torch.tensor (#109513)
e8ab8c877d : [exir] Add lift constant tensors passes after aten_to_edge (#109382)
0ec9f59f70 : Loudly Error in dynamo bench if eager fails (#109536)
98208e5160 : [export] Update deserialized FakeTensorMode/ShapeEnv with same configs as export (#109522)
a44cf44067 : improved type hints ScriptModule (#109535)
871b5caae7 : Fix hpu deserialization bug (#109499)
5b13f74e9b : [export] Update how we input kwargs (#109160)
a6d34c60a1 : Fixing searchsorted doc (#109364)
6f4b9cc9ab : [export] Skip noop runtime assertion pass. (#109395)
550b0ec3d4 : Release GIL around VariableInfo::zeros to avoid deadlocks (#109454)
0e2b22c451 : [ONNX] switch from onnxscript-preview to onnxscript (#109139)
9863286abf : [ROCM] Enable bwd cross_entropy on ROCM now that eps tolerance update (#109384)
0bf30c140a : [pytree] Use OpTree for PyTree manipulation (#93139)
8a567bb59d : [HigherOrderOp] Should automatically pop modes (#109157)
73ac814148 : [Pytorch][quant] Move xnnpack quantizer to use aten.linear (#109254)
77d745666b : Add TORCH_CHECK_ALWAYS_SHOW_CPP_STACKTRACE (#109373)
8a1bbf383d : Out-of-line cannot call with symbolic error test (#109372)
050c56d0a5 : [dynamo][ci] Pin beartype to 0.15.0 (#109510)
4d44d8c00a : Revert "Eliminate c10::stoi,c10::stod,c10::stoull,c10::stoll (#109179)"
70ca3ee951 : Revert "inductor: only do the conv+bn folding for the freezing path (#109270)"
5cd8a6d40a : Enable typechecking for _inductor/fx_utils.py (#109415)
fe452108fb : Enable typechecking for _inductor/debug.py (#109335)
9172c9f03f : Fix spelling / capitalization in freezing.py error message (#109347)
bab627073a : Enable typechecking for _inductor/freezing.py (#109269)
282aa26764 : Update the instruction to enable dynamo logs (#109409)
a399f839ac : Revert "Add PR number to metrics when available (#109406)"
05c31b3b69 : typo in DispatchKeySet.h (#109431)
cddeceb6b6 : [inductor] scale down RBLOCK for occupancy (#109275)
5d5990fc49 : Remaining replacement of c10::stoi with std::stoi (#109482)
6ffa59031a : [inductor] Fix CudaStreamGuard in AOTInductor ABI compatible mode (#109471)
d2ca5fa6c5 : [lintrunner] Capture mypy internal error (#109421)
591c01995b : Add CONDA_CMAKE=yes for all ROCm docker configs (#109334)
88600e7d2e : [RELAND] Force synced KJT to trace unbacked SymInt (#108960) (#109216)
1a361e4e9f : [inductor] realize_into should not alias src and dst (#108126)
fc47ba2794 : [Decomposition] clamp_min (#108717)
a6d4cca7c0 : [Decomposition] unsafe_split.Tensor (#108544)
af93b29c5e : [Decomposition] std.correction (#108733)
a683bc54fd : move transform_bias_rescale_qkv vectorized code to cpu sub folder (#109095)
f0fb4b3897 : Add PR number to metrics when available (#109406)
9e86a093e4 : add torch.device to python type (#108116)
6d725e7d66 : [BE]: enable ruff rules PLR1722 and PLW3301 (#109461)
a9a0f7a4ad : Build CUDA image for lintrunner (#109456)
0cae3b5df5 : Revert "[PyTorch] Add Expanded call stack to nodes (#108426)" (#109468)
f9e72acc8f : Guard default dtype in torchdynamo (#109459)
71420a98ab : Revert "Remove c10::either (#109299)"
525e4f42d0 : Revert "replace torch::make_unique with std::make_unique (#108866)"
07f2efa285 : Revert "[HigherOrderOp] Should automatically pop modes (#109157)"
49b18ae546 : Revert "python functionalization: add helpers, functionalize_sync and mirror_autograd_meta (#107917)"
17193faf1a : Revert "Created nested utils.cpp (#109304)"
c8e4e08c8d : [inductor] Forward fix a windows test error (#109449)
75b954b715 : [4/N] Enable clang-tidy in torch/csrc/autograd (#109455)
c7017fff38 : inductor: only do the conv+bn folding for the freezing path (#109270)
51d2d825ab : [3/N] apply clang-tidy in torch/csrc/autograd (#109368)
d8da2a7c85 : Switch to CUDA event based profiling (#109338)
92b0db2967 : Don't find MKL if it isn't used (#109426)
6b1a15d1bb : Eliminate c10::guts::make_unique_base (#109429)
4e4314da7f : [dynamo] remove DummyGlobalSource (#109411)
9a95b4bc7b : [dtensor] quick fix to #109306 (#109428)
f15adf204b : [BE]: Replace undocumented constant in logging (#109434)
0aedacb4f7 : [2D][1/N] Add _enable_extension to fsdp state (#109242)
322bf50dbe : [2D][2/N][DeviceMesh] Add get_parent_mesh_dim() in _MeshEnv class (#109330)
b275a902d3 : Small type hint fix (#109414)
247e2f8461 : [BE]: Update ruff to v0.0.290 (#109435)
0f646b1d15 : [inductor] Add a C shim layer for libtorch (#109391)
d860313903 : Improve can't call type() error message (#109378)
58bdc63dd6 : [inductor] Remove a bunch of check_gradient=False in opinfo tests (#109417)
1e4f2b576d : Have inductor tests call output_process_fn_grad (#109416)
7f3885137f : Add meta function for _segment_reduce (#109359)
55c19a3c6d : Inductor: Increase multiplier to 3 for Inductor AMP benchmark correctness check (#109097)
7014ef0f43 : Eliminates c10::guts::array (#109423)
b03ef1d969 : [Dynamo] Fix numpy error in test_numpy_torch_operators (#109087)
852f1b8417 : Eliminate c10::stoi,c10::stod,c10::stoull,c10::stoll (#109179)
393fe9339a : Back out "Revert D49107540: [pytorch][PR] split by tag" (#109332)
7bce7f50f3 : Add torchgen path in gen_vulkan_spy (#108980)
706d8e2230 : [dynamo] Respect shape dynamism of SymInt sized tensor (#109331)
fb58a72d96 : Use `torch.cumsum` instead of numpy one (#109400)
4ee179c952 : Fix `ConstantVariable` init method if NumPy is missing (#109388)
b904432e82 : [dynamo] preserve some FX node metadata of GraphModules (#107067)
7af792ab05 : Revert "[inductor][Optimus]Improve logging for group batch fusion (#109314)"
a14d30d8d1 : [1/N] apply clang-tidy in torch/csrc/autograd (#109032)
b4ea3260d7 : [JIT] Document torch.jit.interface (#109356)
ec8b58f5ba : Add support for tolist on AsyncCollectiveTensor (#109377)
806c52b4c9 : Update chunk_sharding_spec.py (#108915)
afad0d074b : [inductor][Optimus]Improve logging for group batch fusion (#109314)
71b4b32014 : return_and_correct_aliasing: massage some schemas to work with torchgen (#108897)
0ad595954a : python functionalization: add helpers, functionalize_sync and mirror_autograd_meta (#107917)
f22b303f65 : Add TorchDispatch version of functionalization (#106404)
504dceacb1 : [ONNX] Fix indexing issue of meshgrid op (#109350)
4c208c1475 : Remove unneeded linking in CMake targets (#109192)
d3a64ff249 : Display subclass name when tolist() fails due to tensor subclass (#109376)
9d297cc773 : Remove c10::either (#109299)
cc03e3a892 : [AOTInductor] Do not hardcode directory with .cubin files (#109151)
7da3c938cf : [quant][be] Move QAT tests to its own file (#108061)
369a84e5c4 : [core][sparse][pruning] Add (i8i8)-> fp16 support to cuSPARSELt matmul (#109214)
ab99a95470 : Update planner.py (#107998)
86e6bd3e53 : [inductor] Enable mypy checking for torch/_inductor/bounds.py (#109271)
a9bf1031d4 : [BE] Do not use `numpy` in `torch._inductor.codegen.cpp` (#109324)
653c1564bf : Fix broadcasting cosine_similarity (#109363)
aed9bee041 : [inductor] Lower masked_scatter on CUDA (#108803)
3943afc94e : [quant][be] Remove unused APIs (#109342)
f3d1401843 : Fix cond branches take no arguments (#109308)
1aba61e977 : Allow cond to have more dynamo cache beyond limit (#109318)
dfdc0b63c9 : Bisect FX node asserts on `ValidationException`. (#107493)
a873f523ba : [aarch64][caffe2/torch/csrc/profiler] Support aarch64 in inline assembly (#104707)
faf3de35db : Fix max/min.reduction_with_dim opinfo test for bool tensors (#109264)
19f8b05afe : Disable gradient check for linalg.eig (#109165)
66fdea606d : Enable typing for _inductor/exc.py (#109176)
bd89f80bae : Add more types for inductor_prims.py (#109173)
1cc0921eb6 : Add tensorboard to pip requirements (#109349)
9456de937b : [dtensor] Fix and improve the sharding cache behavior (#109306)
0cbca85707 : Add check to prevent NumPy ndarray from being treated as tuple when indexing (#108954)
f786fbdebd : Reland 3rd try [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#109323)
af7d79923c : Remove thrift from Docker builds (#109344)
34ddf08f27 : [inductor] update fbcode skips for AOTInductor (#109313)
2b6d983b8b : Reland [dynamo][activation checkpointing] Trace through ActivationWrapper (#109327)
924723bda7 : Created nested utils.cpp (#109304)
2d4924db32 : Remove S3 Update Workflow (#109317)
b3272b2c00 : Trace attention inference patterns with p=0, cleanup (#109118)
5349615240 : [dynamo] Unblock a model with jit.isinstance (#109178)
2bca5f2af7 : [C10D] Track pg name in c++. (#108813)
58a883093f : [quant][pt2e] Add test for serialize and deserialize quantized model (#109158)
36b8ca4e48 : [2/N] apply clang-tidy in torch/csrc/autograd (#109277)
ec3c748fa2 : Document Triton dependency for the release process (#109296)
8cb96f5f2c : [Reland]Use cpuinfo to determine c10::ThreadPool thread number (#107339)
fa62308673 : [tensorboard] Fix TensorBoard summary encoding for torch.bfloat16 tensors (#108351)
bf5622e965 : Revert "split by tag (#108892)"
be9f73f031 : Revert "Add meta and OpInfo for _embedding_bag_dense_backward (#109211)"
28169193b4 : [TD] Improve heuristic metrics collection (#109305)
89b6276be9 : split by tag (#108892)
2bf7a283cb : Remove expected test failures for cond (#108709)
6140facf00 : Support SymBool input to torch.compile (#107850)
ea94344821 : [ROCm] Enable Lerp tests for complex32 (#108100)
54c5f474a7 : Forward rank and world size info to Torchbench models when using dynamo runner (#108438)
03e35efbf7 : replace torch::make_unique with std::make_unique (#108866)
f03b8abd47 : [HigherOrderOp] Should automatically pop modes (#109157)
492a93d185 : [HSDP] Updating HSDP test - test_hsdp_init_with_device_mesh (#109202)
602413a0a0 : Refactor `test_foreach.py` (#107869)
f7574ea43f : `torch.load`: Replaced multiple one byte read() calls during the `_is_zipfile` check with a single call (#109119)
c382ad47dd : Deprecate torch.cross default behaviour (#108760)
78cd86c552 : NT support for gt (#109121)
263ca7d69b : [ONNX] Remove deprecated functions (#107208)
0edb616793 : Add test/onnx_caffe2 to ONNX Exporter merge rule (#109295)
fe14e43d14 : Add meta and OpInfo for _embedding_bag_dense_backward (#109211)
b121e4df92 : Increase tolerances for baddbmm opinfo test (#109164)
9187559e75 : [quant][be] Remove test/quantization/pt2e/test_quantize_pt2e_fx.py (#108925)
900288f138 : Revert "[inductor] Lower masked_scatter on CUDA (#108803)"
d4990ad5a1 : Fix the example in the extending.func.rst (#109279)
9021fb8dac : [dynamo] implement custom dict variable as a general solution for HF's ModelOutput class (#105044)
e4036ed706 : [inductor] Lower masked_scatter on CUDA (#108803)
800c665618 : Revert "[inductor] Add ir.Scan and lower aten.cumsum on CUDA (#106581)"
1b502139f3 : Added a flag is_cpu to the AOTInductor runtime (#109300)
3acccb3aa0 : [AOTInductor] Add is_cpu for AOTInductorModelContainer (#109287)
b226373d16 : Revert "add Half support for BatchNorm on CPU (#102070)"
94a54b89aa : [dynamo] Add BACKEND_MATCH guard to detect and recompile when backend changes (#107337)
9b3f5823f3 : Added test for interpolate nearest exact (#108558)
111b9ef390 : [ROCM] Enable test_fn_fwgrad_..._functional_binary_cross_entropy on ROCM (#109038)
7f1f5afc91 : Run only one pytest parametrization when generating optest (#108936)
7f7f6267e9 : [AOTInductor] Skip pre_grad_passes for exported graph. (#109246)
b6a1d3fb97 : add Half support for BatchNorm on CPU (#102070)
41e2189843 : [quant] Remove reference representation rewrite for adaptive_avg_pool2d (#108924)
a6fadf643f : Re-do D48544397: [TGIF Inplace] [xlv2][1/n] Expose a couple APIs from inline_container that will be used for chunk read" (#109183)
9cd4548f01 : AOTInductor dynamic shape (#109012)
f4e96df60a : [export] Preserve shape dynamism for unused inputs. (#109239)
25bf1a49c0 : [FSDP][Wrap] ModuleWrapPolicy callable (#109117)
f558e86fa0 : [FSDP] continue if param not exist in sharded load (#109116)
6898754401 : [ONNX] bump ort-nightly==1.16.0.dev20230908001 (#109212)
90068ab30a : Fix CUDA-12 wheel loading on AmazonLinux (#109244)
47f79e9a2b : Revert "Support SymBool input to torch.compile (#107850)"
de76c88d90 : Revert "Remove expected test failures for cond (#108709)"
05170b0b73 : Reformat line of code header to put co_name after (#109233)
c914ca7577 : [quant][be] Add TestPT2ERepresentation test case (#108923)
064ae9ff33 : Support register_hook on input tensors (#108903)
50a084070f : [inductor][easy] Enable mypy checking for all inductor files that already pass (#109238)
acad84ba6c : Disable cutlass tests in fbcode (#109241)
62732bdcdb : [ez][inductor][fx passes] quick fix for invalid nodes (#109234)
5edbee9404 : [export] Normalize nn_module_stack paths. (#109231)
109ab6a0df : Support str() on user defined functions (#108973)
a08e1370ef : Remove expected test failures for cond (#108709)
9f6d70b2fd : Support SymBool input to torch.compile (#107850)
025d1a18ab : [export] Separate out exported_program.py (#109147)
4a09ed5459 : [inductor] Parallelize Max Autotune step 2: Use multiple GPUs (#109127)
ce4283933f : [inductor] Parallelize Max Autotune step 1: refactor autotune_process (#109126)
dbddf1816a : Remove include_0d from sample_inputs_gather (#109125)
61f0578787 : Update take_along_dim docs to include `dim=None` case (#109120)
d046376c4f : Dispatch `numpy.take_along_axis` to `torch.take_along_dim` (#108880)
49e3d76684 : Add `SymInt` support to `torch.take_along_dim` (#108879)
aca3bd44d1 : Fix failing inductor test (#109220)
33c94b8b16 : Better error handling for cond (#108817)
04a765f95d : Revert "add Half support for BatchNorm on CPU (#102070)"
c44f816960 : Disable tests mentioned in 109213 (#109232)
2d26364fb3 : [caffe2][cuda] Fix instrumentation of malloc/free SDTs for `CUDACachingAllocator` (#108907)
faa5985dfe : Fix issue when input/output buffer of functional collective (e.g. allreduce / allgather) is incorrectly reused later (#108811)
54dd65f93a : [FSDP] Only check exec order if DETAIL (#109049)
916183a012 : [MPS] Fix crash if nonzero is called concurrently (#108996)
35aeb6aa85 : Do not use a specific LOC in link (#108957)
32f50b7021 : Improve type annotations for `jit.script` (#108782)
8851603a9c : Back out "[Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation (#107832)" (#109174)
c657d9ecc5 : [PyTorch] Add Expanded call stack to nodes (#108426)
00908475e6 : Use global variables to register the return_types namedtuples (#108832)
6065e7a97c : add Half support for BatchNorm on CPU (#102070)
f6d8ecf9b3 : Use the correct channel token when uploading nightly triton conda (#109073)
c9fdfafb00 : Allow marking multiple unstable configs of the same job name (#109185)
fe198f3141 : inductor/test_max_autotune serial in CI (#109209)
d05a6e5ade : Add missing DeviceMesh import (#109187)
f2639a2c37 : Back out "Dynamo support for autograd.Function w/ once_differentiable (#108686)" (#109199)
264f1e7b4c : [inductor] Enable Mypy Checking for torch/_inductor/codecache.py (#108789)
ad90ab31f2 : Flash Attention v2 (#105602)
55f956f1d2 : optests improvements based on torchvision usage on nms (#108929)
bfa8429c6a : [optests] Changed failures_dict format to json; automatic update of failures_dict (#109110)
db48bc80d9 : Check index size during decomp of index_add (#108826)
d2d36aad6f : Enable typechecking for _inductor/virtualized.py (#108916)
c5e7588613 : Revert "[dynamo] preserve some FX node metadata of GraphModules (#107067)"
aee5dec3aa : torch/csrc/profiler/README.md - stubs, RecordFunction, Autograd interaction (#108470)
de0b18fad9 : Use user directed names for variables where possible (#109092)
015be4cedb : Forward fix lint (#109177)
3d8d59e68b : Update inductor ci_expected_accuracy (#109148)
3ac2396e00 : Fix `torch._numpy.random` (#108944)
41e5d410cf : Symintify repeat_interleave (#109133)
a09539f454 : Add torch.export.register_dataclass API (#109152)
375d2ca6c9 : [dtensor][4/n] don't use make_fx for strategy propagation (#108262)
09f3e08bcc : [dtensor][3/n] use dedicated TensorMeta instead of the fx one (#108261)
fc1dcfb9ab : [dtensor][2/n] use op overload instead of function schema (#107306)
48e6ffbe30 : [DCP][Test] Fix device assignment in test/distributed/checkpoint/test_file_system_checkpoint_cpu.py (#109141)
91e154fcd7 : [ONNX] Support None in fx.args as torchlib inputs (#108708)
a2ff345416 : [HigherOrderOp] Support SymInt as input to body function (#108967)
4667a5c948 : Update SingletonSymNode to allow more comparisons (#108315)
a46df6ebce : [pytorch-vulkan] add aten::randn_like & aten::normal_ (#109075)
e5f300f085 : Make mutation test work with quantized tensors (#108935)
687f027896 : [submodule] Fix eltwise share buffer issue in ideep (#108038)
e027de2c86 : Add torch.distributed get_rank and get_world_size to constant_fold_functions (#109029)
12e8530b35 : Record and replay for ShapeEnv. (#107989)
e066056414 : fix 'Node' object is not iterable in functorch.compile.minifier (#103011)
063a62622b : Add memory overlap check to `meta_copy_` (#108989)
f08885287f : Fix cumprod f16 opinfo test via ref-in-float + increasing tolerances (#109128)
6869b25f1b : Fix a bunch of opinfo tests by using reference_in_float (#109089)
baefe47161 : Fix std_mean f16 opinfo test by using reference_in_float (#109081)
4c5e43574c : Reland 2: Add PyObject preservation for UntypedStorage (#109039)
6dc56d3490 : [DTensor] Remove compute_local_offset from _utils.py (#109096)
cf26e5575d : [quant][be] Reduce warnings in tests (#108922)
9118073fe7 : assign var for "not populated" str (#108844)
91aab161d0 : Revert "[inductor] Lower masked_scatter on CUDA (#108803)"
b01b934aca : [quant][be] Cleanup xnnpack_quantizer implementation (#108921)
bde75eb9a8 : [Gloo] Properly pass op type to Work (#108812)
a2d5f13310 : [Inductor CUTLASS backend] Step 5: Gemm CUTLASS templates (#108015)
097fd43f8c : [Inductor CUTLASS backend] Step 4: CUDA (template) kernels (#107931)
b2d764ece0 : [Inductor CUTLASS backend] Step 3: autotune_process, and CUDABenchmarkRequest (#107901)
102fefac21 : [Inductor CUTLASS backend] Step 2: CUDACodeCache (#107847)
a14761b68a : [Inductor CUTLASS backend] Step 1: Inductor config for cuda / cutlass, util functions. (#107802)
15b13d3cff : Revert "CI Sev - pin docker images for A100 workers (#108871)" (#109071)
cd46b5db76 : make sure all torch._numpy tests run on CI (#108762)
abd83ce180 : Small fix in SDPA docstring codeblock (#109086)
1b9b3a2d15 : [MPS] Adding lgamma, digamma, and polygamma implementations (#106292)
c8e577bf40 : [inductor] Lower masked_scatter on CUDA (#108803)
464f9c3725 : [meta] Add meta implementation for aten.masked_scatter (#108802)
c3945b5f84 : Update HF version to commit hash (6c26faa) (#107400)
58391aeaf1 : [export] Lift constant tensors as buffers (reland) (#109040)
1d32c9c7f2 : Revert "Force synced KJT to trace unbacked SymInt (#108960)"
8c981c8c4b : [ONNX] bump submodule to onnx==1.14.1 (#108895)
5a7c008b30 : Revert "[ROCm] Add ROCm AMDGPU support for inductor cpp codegen (#105141)"
5531a23b20 : Don't set requires_grad inside meta function (#108988)
bc3f0d341a : LazyBatchNorm{1-3}d support dict&set (#109015)
41bd0fde7e : Revert "Remove fixed skips (#108674)"
65a3d398f1 : [Pytorch][Vulkan] Call binary_op_scalar when 'other' is a 0-dim tensor (#109035)
59f605be57 : Revert "Reland 2: Add PyObject preservation for UntypedStorage (#109039)"
47be61e12b : untracked inputs in constraints (#109037)
f9a250c35b : Force synced KJT to trace unbacked SymInt (#108960)
6c8b0dfba6 : [export] Add a private interface for customizing decomp. (#109058)
15202cc80c : [caffe2] Remove cxx override to c++17 (#108687)
b1f21399c8 : Prerequisite of ATen/native/utils header for C++ extension (#109013)
85428f5ea5 : Fix 0-sized views of tensors in cudagraphs (#109055)
419e4e17a2 : Reland 2: Add PyObject preservation for UntypedStorage (#109039)
2039f30c06 : Revert "[inductor] Parallelize Max Autotune step 1: Use Popen (#107982)"
c36c2bfcb2 : Revert "[inductor] Parallelize Max Autotune step 2: Use all GPUs (#107983)"
f150f96255 : [Reland] increase clang-tidy coverage in torch/csrc (#108875)
b6f9d4dbc4 : [DCP] Enable nD device_mesh resharding DTensor in DCP and add associated tests (#106230)
8025b193a9 : Re-enable some Windows tests (#108930)
4691cb26b3 : Disable compile for massive data pipe test (#109063)
55a204ebc8 : [Easy] log graphs in compiled_autograd if TORCH_LOGS=compiled_autograd (#108991)
33c1136f89 : Added limit on number of warps for coordesc autotuner (#108997)
241e84bf98 : [quant][be] Rewrite xnnpack_quantizer_utils.py to use decorators (#108920)
b2cba439b4 : Introduce Tensor overload to linspace and logspace (#104889)
405f014c26 : [jit] Skip NNAPI, test_ivalue, CPU NNC tests in fbcode (#108937)
293d3b89d8 : Add Opinfos for the Tensor overload of linspace/logspace (#107958)
03fd3544a2 : fixed lgamma documentation error (#108719)
97d9188178 : Special treatment to build AOTInductor with cuda-12 from Meta internal (#108831)
29c29339e5 : Add torch_lazy_enable_device_data_cache to disable lazy device data cache (#107827)
03bf745e1d : Fix the parameter error in test_device_mesh.py (#108758)
bb14805bcd : fix an incorrect indent in documentation (#108273)
a4138b1f99 : [ez] Fix small type error in run_test (#109036)
5c8efa6077 : [export] Fix export arg type declaration (#109060)
b0656ac81f : [pytorch-vulkan] move glsl random utils to Random.h (#108724)
e7bd9c5315 : [CUDA][CUDA Graphs] Fix CUDAGraph::reset function (#108896)
fb288aa99b : Add Bfloat16 support to CrossKernel.cu (#108941)
5976a08eea : [inductor] Add ir.Scan and lower aten.cumsum on CUDA (#106581)
2bcff92540 : Add NestedTensor python subclass (#108314)
4a4a2fc1a5 : Enable Mypy Checking for torch/_inductor/fx_passes/fuse_attention.py (#107369)
e276d70451 : Revert "Add Opinfos for the Tensor overload of linspace/logspace (#107958)"
a7f5abeade : Revert "Introduce Tensor overload to linspace and logspace (#104889)"
1d42148fee : [dynamo] preserve some FX node metadata of GraphModules (#107067)
ba4782e3c0 : cleanup typos; redundant parentheses (#109003)
3b265e021f : Support Optional typehint without graph breaking (#108970)
090fe45e1c : Revert "make sure all torch._numpy tests run on CI (#108762)"
3efc1882e8 : Update CopySlices to not internal assert when grad_output is undefined (#108353)
e8a402c56e : [quant][pt2] Fix and rename `move_model_to_eval` (#108891)
57e5239321 : Introduce Tensor overload to linspace and logspace (#104889)
106e0a0ef1 : Add Opinfos for the Tensor overload of linspace/logspace (#107958)
e19a855b4d : [HSDP] Fix Node 1 unable receive parameters from Node 0 (#108331)
9a492fc27f : Fix unknown c++ flag detection in CMake (#109000)
18225cc6aa : inductor: add custom pass hooks in post_grad_passes (#108615)
a6b153b311 : inductor: remove redundant memory copy when view a ExternKernelAlloc buffer (#108635)
a6ada463ec : inductor: make onednn linear inputs are always real contiguous (#108560)
e716505345 : Graph break within check_kwargs for higher order ops #108597 #108730 (#108821)
1a3a07ac2c : [inductor] Enable Mypy Checking for torch/_inductor/codegen/triton_utils.py (#108951)
4a98c898e2 : Refactor ios-build-test workflow to support binary release (#108322)
63ae1051e1 : MAINT: do not test numpy funcs in torch._numpy (#108807)
59254c75a1 : [Reland] fix c10:TempFile APIs on Windows (#108508)
f81eacd30c : typo fix strategy_comb in basic_strategy.py (#108972)
2c61313ff3 : [inductor] Parallelize Max Autotune step 2: Use all GPUs (#107983)
d685668003 : [inductor] Parallelize Max Autotune step 1: Use Popen (#107982)
89eb7a75a2 : CI Sev - pin docker images for A100 workers (#108871)
2b9ad3d5c4 : Fix setitem with SymInt (#108873)
9b12a28d89 : [MPS] Implement `mul` operation for complex types (#108395)
c7bb842d35 : [MPS] Add complex `add`/`sub` (#108394)
92de1d2d02 : Revert "[Dynamo][Test]Add a testcase for module with training state (#108750)"
56c2386157 : Revert "reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)"
53a4ca4b58 : [MPS][BE] Add `dispatch_sync_with_rethrow` (#108393)
2b138e4f7d : [export] torch.export landing page (#108783)
7abeb92796 : make sure all torch._numpy tests run on CI (#108762)
003c5bb156 : Add checks to `num_layers` for `RNN`, `LSTM`, `GRU` (#108853)
4c503f2451 : [Inductor] Extend Pattern Matcher to Match Equivalent Function Invocation (#107832)
e4350d6d4e : Functools partial support in dynamo (#108846)
8ff00360a4 : [ROCm] Add ROCm AMDGPU support for inductor cpp codegen (#105141)
0f88d93b10 : decomposition spectral ops fixes (#108360)
d4230e5574 : reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)
ed7f9cac91 : [inductor] Add CPU-side profiler event names for templates and foreach kernels (#108449)
311fbe43e6 : [DeviceMesh] Fix __getitem__ docstring typo (#108837)
7b3efeaf42 : Follow-up #108379 (#108905)
2c3febb273 : [dynamo] disable flaky test_unhandled_exception_in_dynamo2 (#108906)
324b23f337 : MAINT: torch/_numpy: remove stubs raising NIError (#108902)
b41b189b71 : Un-skip the linalg_ldl_solve tests (#108842)
a5e1d38025 : add check for torch_arg (#108397)
af8b04d5f6 : Add create_graph_input debug log (#108836)
66f67d9a25 : Print restart attempt as part of Dynamo log context (#108864)
703cdd711f : Revert "[export] Lift constant tensors as buffers (#108592)" (#108893)
f30f9fec87 : Fix the issue described by #106769 (#108340)
8caaa4f4cd : Revert "Re-land: Break graph on `manual_seed`. (#108647)"
296f015f42 : [Dev Container]Add readme for devcontainer (#108848)
137afe74e0 : Don't fastpath conj copy when conj/neg bit mismatch (#108881)
bd1229477d : [ONNX] Add initial support for FP8 ONNX export (#107962)
fa542cc4bb : update triton pin (#108104)
39ff80125f : Add support for an operator level thread local observer (#108822)
68238606f3 : Revert "Reland: Add PyObject preservation for UntypedStorage (#103907)"
8d863560bd : Allow adding extra dispatch keys to wrapper tensor subclass (#108808)
aa3355da8a : Refactor torch.onnx documentation (#108379)
e91f66471c : [reland][inductor] Switch to use the runtime interface for AOTInductor testing (#108878)
a81290ccb9 : Add DLPack bool support (#108486)
b0de6a8002 : [quant][executorch] Support inception_v4 in examples (#108382)
25d657c701 : Fix possible naming collision issue (#107743)
8990174676 : [Dynamo] Should inline __new__ function rather than skipping frame (#108549)
9b83402666 : Add support for symbolic repeat_interleave (#108763)
ef2bbe1ae1 : Dynamo support for autograd.Function w/ once_differentiable (#108686)
16c2fb702b : fix a CMake syntax warning (#108849)
fa8bfe5ca2 : Revert "increase clang-tidy coverage in torch/csrc (#103058)"
cdf7f3e780 : increase clang-tidy coverage in torch/csrc (#103058)
2028987bf7 : Fix finding Intel MKL on Windows, as well as LAPACK, cuDNN and cuSPARSELt (#108040)
366baf690b : Back out "[Dynamo x FSDP] Add support for params, buffers, submodules on FSDPManagedNNModuleVariable (#107923)" (#108823)
39180a8414 : Comment about prune_dead_locals in dynamo (#107787)
51c2b587c9 : Back out "[PyPer][BE] Fix test_scripted_module in StatCollector" (#108588)
ddbaad6d74 : updated pad_sequence type hint (#108765)
09f7cb0eaf : fix typo of mkldnn linear dynamic shape path (#108330)
a9c663c269 : Revert "Flash Attention v2 (#105602)" (#108827)
e40d6ae0a7 : Improve torch.cuda.amp type hints (#108630)
6c7260407b : Back out "Horizontally fuse input concatenation (#108115)" (#108793)
428f5f9e7e : Revert "[inductor] Switch to use the runtime interface for AOTInductor testing (#108663)"
4965fffeda : [dynamo] Move global state guards to C++ (#108624)
258bc2d845 : [vision hash update] update the pinned vision hash (#108818)
30a33b76b9 : [AOTInductor] Include constants in AOTInductor .so file. (#108473)
72f24d0001 : Revert "[dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)"
e45b290127 : Revert "Revert "Flash Attention v2 (#105602)" (#108827)"
24e9bbe22a : Revert "Flash Attention v2 (#105602)" (#108827)
8391e3fba4 : fixed nn.Module.to type hint (#108767)
f90444cf0b : [Dynamo][Test]Add a testcase for module with training state (#108750)
dec2b267d4 : [dynamo] Add "Torch-Compiled Region" profiler event (#108462)
38fcf77a1b : Revert "[dynamo] Add BACKEND_MATCH guard to detect and recompile when backend changes (#107337)"
e3280a7c88 : fix returning in void function (#108774)
a6dab86259 : [C10d] Fix TCPSTore::wait to be robust to interruptions. (#108425)
fc2b980000 : [Lint] Auto format graph_module.py (#108594)
c458fa0d35 : Decompose/add reference for `view_as_complex` (#108005)
366ce589d0 : [inductor] Switch to use the runtime interface for AOTInductor testing (#108663)
c55cb29bb2 : enforce equalities (#108429)
247c603da9 : Run mm decomposition tests for CPU and GPU (#108620)
1a64ec7dd4 : [dynamo] Add BACKEND_MATCH guard to detect and recompile when backend changes (#107337)
b26af5d5ac : [c10d] Add TCPSTore libuv backend support to c10d rendezvous. (#108284)
96d269eab1 : [Dev Container][CUDA]Fix linker path (#108766)
09a17c512d : Add better error messaging to scaled_mm (#108454)
1f20531939 : fall back to eager on `NotImplementedError` (#107863)
8ba23e48fa : Revert "[inductor] Add ir.Scan and lower aten.cumsum on CUDA (#106581)"
774c822979 : Fix expected test failures for predispatch export nested cond and out_dtype (#108715)
53a27021c5 : [inductor] Add ir.Scan and lower aten.cumsum on CUDA (#106581)
ab9fb03d6f : Remove fixed skips (#108674)
77691e8bc3 : Revert "[dynamo][activation checkpointing] Trace through ActivationWrapper (#108599)"
c75aec90d3 : [dynamo] Record nn_module_stack also for unspecialized nn modules. (#108281)
121cfb60c0 : fix the issue described by #108380 (#108759)
b928e08f3d : Initial vmap + NT support with unbind fallback (#106786)
e4f3e5434f : [Reland] Eliminates c10::guts::to_string (#108748)
c887309437 : Re-land: Break graph on `manual_seed`. (#108647)
9f37aec964 : Add torch._check_is_size (#108685)
e1aba2c8c3 : [CI] Update the pinned timm version (#108076)
b193f295b6 : Add capturable ASGD impl (#107857)
4fa283e0a4 : [Reland] Simplify c10::string_view implementation (#108622)
fae9547cb7 : [inductor] Refactor wrapper.py (#108653)
6a448816f5 : [fx][split] Copy node metadata for placeholders (#107981)
56b848157c : Reland: Add PyObject preservation for UntypedStorage (#103907)
35974234c4 : [inductor] simplify time_and_log fallback (#108489)
96dd173fa0 : [inductor] simplify cudagraph_fail_reasons printing (#108468)
7bc25e38c0 : [HSDP] Raise error when HSDP device_mesh has a parent_mesh (#108603)
275e71c562 : [inductor][easy] Enable Mypy Checking in torch/_inductor/kernel/ (#108678)
3f88e3105f : Reland: Remove remaining global `set_default_dtype` calls from tests (#108088)
54e73271c7 : When patching dynamic shapes test class, don't run the original tests (#108681)
027e3b7910 : [Forward-fix] check if source is None when using tensor out variants (#108700)
34bb74c4cf : [dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)
d830e4658a : [export] Fix unlifting pass param name handling. (#108659)
d301fb4022 : Fix broken doc tests after #108482 (#108725)
e3407238f6 : [export] Lift constant tensors as buffers (#108592)
43527d41a2 : Revert "Remove fixed skips (#108674)"
27fe45eaf6 : [inductor][easy] Enable Mypy Checking for torch/_inductor/decomposition.py (#108682)
9efe0f7bf2 : [dynamo][activation checkpointing] Trace through ActivationWrapper (#108599)
c1877e99c5 : [Quant] Move to BFS instead of DFS to check for connectedness (#108572)
2a40fe2dbf : [experimental] use EXCEPT_FOR env to suppress CPU tests from GPU RE (#108672)
6a304ed1f2 : Revert "Skip ROCm jobs on PR (for now) (#108083)"
e73ec92ad2 : Minor fixes to make torchbench runnable on torch/xla (#107919)
518cfda2dd : Remove fixed skips (#108674)
e6042db0f1 : Try to use linux.arm64.2xlarge runners (#107672)
cd6a332bc5 : Use linux.24xlarge for conda linux nightly builds (#108695)
d856f3b47d : [export] Change _generate_new_graph_signature (#108571)
089950b83a : Fix inductor `sub` with symbolic integers. (#108518)
3f74e57e34 : add packaging to requirements.txt (#108554)
8a76f8e6fe : Enable mypy checking in torch/_inductor/sizevars.py (#107862)
32a16d4999 : [quant][pt2e] Support int16 quantization (#108453)
bee7e78130 : [PT2 Inference] Prototype of Inference Runtime (#108482)
5a4fe05a15 : Revert "Force synced KJT to trace unbacked SymInt (#107788)" (#108684)
1aacbaed8b : Revert "[export] Fix dict.get() to dict.setdefault() for param lookup. (#108587)"
27d5dcf589 : Revert "Use global variables to register the return_types namedtuples (#107000)"
e5e653a660 : Revert "docs: Match open bracket with close bracket in unsqueeze (#95215)"
20812d69e5 : Fix extension rebuilding on Linux (#108613)
4e042cfed5 : Improve triton bsr_dense_mm performance on column-major ordered inputs with float32 dtype (#108512)
1dabfb68e7 : Add TORCH_API to expose RPC module functions for RPC module device extension (#108553)
e471c12a01 : Enable mypy checking in torch/_inductor/__init__.py (#108336)
738106c1f7 : Torchbench model tolerance changes (#108598)
aa89f0a1fd : [Doc] Move Dynamo IPEX backend to training/inference category (#108643)
79bc4eeb2b : Fix empty vector segfault during version parsing in quantized serialization (#108418)
ebed490c2f : [sdpa decomp] change sdpa decomp to be consistent with flash attention (#108608)
6edd06441a : Fix copy=True behavior for torch.asarray when device is not None/cpu (#108511)
aebb86fef7 : Back out "Faster gc_count update for CUDACachingAllocator" (#108632)
ca9f4222e1 : Inductor cpp wrapper: fix codegen of positional args with default value (#108552)
60bd30ee0b : [inductor] Move AOTInductor runtime headers (#108564)
b60273b88a : [MPS] Pixel shuffle unshuffle support (#99306)
ca2cdb3009 : [DeviceMesh] Minor docstring update for init_device_mesh and rename variables (#108391)
3fe8417643 : [PyTorch] Add the lazy init call for p2p access function (#1991) (#108589)
49aa8d19dd : [DTensor] Replace usage of compute_local_offset by compute_local_shape_and_global_offset (#108547)
ce4967ad18 : [vision hash update] update the pinned vision hash (#108611)
3b92ef814d : Force synced KJT to trace unbacked SymInt (#107788)
c8e72a4a5c : Improve mem efficiency of constant folding (#108421)
28c5b62210 : [inductor] Use empty_strided to create output tensors when testing AOTInductor (#108364)
d494b923a9 : [pytorch-vulkan] aten::.rand_like (#108086)
d471eaeb1d : fix inline_container.cc inplace loading (#108573)
ff28b4b908 : Fix dynamo benchmark config --print-graph-breaks (#108584)
bae14b3d9f : Update clang7 CI jobs to clang9 (#108339)
c99a70c8df : [export] Fix dict.get() to dict.setdefault() for param lookup. (#108587)
eab57145ab : fix matrix_power documentation bug (#108585)
208fd1cb84 : [RFC] Somewhat BC breaking: make checkpoint_wrapper default to NO_REENTRANT (#108435)
db6d09c086 : [RFC][FSDP] Don't move ignored params / buffers to device (#108033)
3334ec3a00 : [RFC] Don't materialize ignored modules for FSDP (#108032)
fee9fc1df0 : [pytorch] Update docstring for FSDP.set_state_dict_type (#103864)
64ad16a5e1 : [XNNPACK] Enable XX kernels (#108440)
66af4f6ec7 : [HSDP] Add device_mesh to FSDP kwarg and add dtensor state_dict support for HSDP (#107533)
b1729d8bbe : Fix doc preview page url at CONTRIBUTING.md (#108580)
fac7a1f730 : fix issue with lift_fresh_copy when using export + compile (#108243)
da914aed21 : error when using _dynamo.optimize_ddp=True and _inductor.keep_output_stride=False together (#108235)
def33d4d7a : Fix inductor <> ddp_optimizer issue (#108081)
ae8eb7a3f9 : Use global variables to register the return_types namedtuples (#107000)
d64e1c5f9d : Fix error message concatanation (#108581)
7cdfc38433 : [inductor] Update how AOTInductor resizes output tensors (#108412)
1b76a5c24b : Revert "Use std::filesystem in c10 tempfile and tempdir (#106656)"
a9a6423261 : Revert "[export] Copy gm before calling PassManager" for test or build failures (#108441)
0b44fdfaec : fix use_deterministic_algorithms docstring (#108551)
23e8a11fef : [c10d] Introduce TCPStore client metrics collection. (#108348)
4a472d9e95 : [jit] Verify stack size and index to prevent off-by-one error (#108413)
a74f50d524 : torch.compile-functorch interaction: update docs (#108130)
42f94d7e9f : add Half support for maxpool on CPU (#98819)
1e0e55c504 : [xplat][buck2][typing] Fix typechecker issue (#108525)
8da04e023e : Revert "Eliminate c10::guts::to_string (#108480)"
5b31a41841 : Revert "[NCCL][CUDA][CUDA Graphs] Flush enqueued work before starting a graph capture (#104487)"
29f1097891 : [dynamo] Reduce cache size limit to 8 (#108526)
03aac0bff6 : add input check at the beginning for C++ API `interpolate` (#108506)
9f71a4ebd4 : Revert "Simplify c10::string_view implementation (#108479)"
e8005781be : Softmax in functorch example fixed (#107988)
e787708ad7 : [jit] Validate statement parsing during class deserialization (#108417)
96d74073f8 : Horizontally fuse input concatenation (#108115)
6a1a893f8f : Bump version 2.1.0 -> 2.2.0 (#108156)
a16b0aa26a : [dynamo] Fix return type of Tensor.shape (#108240)
7c931f2491 : [dynamo] Add dynamic shapes support to torch.Size.numel (#108239)
b2c6383f44 : [pytorch] Small fix to docstring of FSDP.optim_state_dict_to_load (#108383)
0ef2556351 : Update sparse_funcs to include primtorch types (#107421)
e27ddd2cee : s390x SIMD: update abs() function for complex numbers (#108515)
0a8296da7d : ReduceLROnPlateau: inherit LRScheduler (#108464)
efc7c366f4 : Remove auto_gil.h (#108492)
a9d9803bfd : Enable MKLDNN ASAN tests (#108478)
468660d03e : use std::initialization_list for vector literals (#108504)
3d2938b1fc : [inductor] Add an aot_inductor class in inductor config (#108369)
ff38c0e2f9 : [Inductor] Make aot-inductor work with pip installed torch (#108319)
159ce22694 : [rpc] Fix assertion on vector length during message parsing (#108414)
48286d34a4 : Revert "Break graph on `manual_seed`. (#107594)"
e08577aec5 : Spelling fix (#108490)
51c2e22e94 : When byteorder record is missing load as little endian by default (#108343)
7e878c9d10 : Add decomposition for `aten.take_along_dim` (#108185)
4146be192e : Eliminate c10::guts::to_string (#108480)
06b173780d : [dynamo] "TorchDynamo Cache Lookup" event: use C++ api (#108436)
621463a3e6 : Update libfmt submodule to 10.1.1 (#108431)
ce03b78a8f : Simplify c10::string_view implementation (#108479)
aff7fdcb4c : Add a missing argument (#108477)
cc50e654d4 : [aten decomp] Update sdpa decom (#108371)
ba9acbebfc : [Doc] Update the dynamo deepdive doc (#108147)
7b91f762b6 : Use std::filesystem in c10 tempfile and tempdir (#106656)
1b3dc05c3e : Use contiguous() to handle noncontiguous outputs during elementwise decomposition (#108140)
e5548f8195 : NT support for cat with dim > 0 when representable as jagged (#108428)
76ccf6c770 : NT support for narrow() on dim=0 (#108362)
01b662bafe : [gen_operators_yaml] add arguments to control include_all_overloads (#108396)
b9dfdc091b : [AOTInductor][Reland] Proxy Executor for Extern Fallback kernels (#107279) (#108350)
b9fc6d7ded : [Dynamo] Update the implementation of _debug_get_cache_entry_list (#108335)
de58600126 : Improve docs for `torch.unique` `dim` argument (#108292)
0cc2f06aec : [Reland] Improve MKL related logic in FindOpenMP.cmake (#104224)
ffc0c46092 : [Quantization] Add metadata porting for nodes added by quantization (#107107)
d6a9c2b4b5 : [BC BREAKING] Remove outdated python submodules (#108236)
eb67c452c8 : [Quant] Add DQ duplication pass (#107900)
f8d1ca9835 : [Quant] Bug fix (#107899)
37b0d76e35 : [Quantization] Make annotation util functions return annotated nodes (#107106)
99168c1fa9 : [Quant] Use input_qspec_map for weight quantization of linear (#107105)
ab6a86dccd : [vision hash update] update the pinned vision hash (#108460)
ed92d9345e : Refactorings for constant folding (#108450)
5f5caed25a : do not cast all inputs in benchmarks (#108456)
b8af8ac784 : [CUDACaching Allocator] Release the allocator lock on the slow path (#108367)
4084d039b7 : Only add triton dependency to CUDA and ROCm binaries if it hasn't been set as an installation requirement yet (#108424)
2e3fce5450 : Add dynamo support for `rdiv` dunder method. (#108422)
fa8edd93b7 : [inductor] Handle aten.full's dtype in the decomposition (#108443)
2c1f0772d5 : Revert "Horizontally fuse input concatenation (#108115)"
a27f01083d : [S362716] turn off constant folding (#108389)
e3933609d4 : Make make_fx cond preserve node meta (#108356)
ac42b4ea4d : [pt2] Turn on cudagraph tree in fbcode (#108416)
ad032a76f3 : print equalities (#108427)
add45aea1c : Flash Attention v2 (#105602)
234f00e1cd : [PyTorch][Vulkan] Add a matrix multiplication performance test binary and fix GPU latency measurement (#108266)
8f02884569 : add Half support for GroupNorm on CPU (#100234)
54dcb0ea61 : NT support for matmul of (B, *, C, D) NT with dense (D, E) (#108370)
a78b78cd76 : [DTensor][random] add DTensor constructor: randn (#108285)
c67ebae344 : Put logging in run_tests (#107987)
29f17e1f14 : Fix `full` on symbolic value. (#108166)
fc1c862e62 : [export] Properly handle duplicated params. (#108415)
2d9a828900 : enabled AT_USE_JITERATOR() for `tan` and `tanh` kernels. (#102427)
6ba2b6e147 : [ONNX] Show sarif_report_path (#108398)
e58d3ed81d : [inductor] Generalize pointless_cumsum_replacement pattern (#108373)
0f1a225f33 : [CI] Enable max-autotune for Sunday dashboard run (#108386)
2a6ef9b04d : [dynamo] Avoid recompilation when the PyTorch function accepts scalars (#108162)
591cb776af : [FSDP][state_dict][optim_state_dict] Log slow optim and model state_dict paths (#108290)
db63bf3d7e : [NCCL][CUDA][CUDA Graphs] Flush enqueued work before starting a graph capture (#104487)
4a9c6f1b73 : [PyPer][BE] Fix test_scripted_module in StatCollector (#108232)
d96446b9c2 : [export] Fix duplicated params for AOTInductor. (#108354)
e18f512b81 : Update accuracy checking for nan, floats (#108202)
90ef3b82d1 : [DeviceMesh] Add unique mesh_dim_name check in init_device_mesh() (#108326)
3702980717 : dynamo: trace autograd.Function with tensor subclass input (#108093)
414cb26ded : NT support for cat with dim=0 (#108361)
a9fe0b5b74 : [quant][pt2e] Move propagate_annotation from quant flow to quantizer (#108320)
ab5b4c4419 : Revert "[HSDP] Add device_mesh to FSDP and add dtensor state_dict support for HSDP (#107533)"
8289ad8e5e : Support is_mtia attribute. (#108307) (#108310)
d569e506ab : Revert "Flash Attention v2 (#105602)"
ee0e04ac48 : Allow float dtype when Autocast CPU Disabled (#107348)
6c342ec368 : Revert PR-107951 to only support new graph capture API in Quantization (#108317)
fb808c30c7 : x86_inductor_quantizer switches to new graph capture API (#108214)
aadd86b1e8 : [DCP]Add unit test for tp checkpoint (#108286)
63eee52ba7 : Add Drq to BF16 Higher Tolernace (#108368)
9178deedff : removing some redundant str splits (#106089)
cc220e45a8 : [HSDP] Add device_mesh to FSDP and add dtensor state_dict support for HSDP (#107533)
a29b9101fa : [dynamo] fix dynamo + DTensor to work with 2d (#108329)
eafc05887f : [dtensor] fix two more requires_grad callsite (#108358)
3e75fd06e2 : Pin pandas version for inductor Docker image (#108355)
bae409388c : [MPS] Fix `.item()` for multi-dim scalar (#107913)
5b6ba4110b : Fallback to eager for float8 ops in inductor (#108293)
49df1de383 : Cudagraphs support for compiled optimizers (#107504)
d5ff8ca4ef : Relax divsibilty by 16 for leading dimension of mat1 in scaled_gemm (#108308)
aeb4d6d5c5 : Fix constant folding of arithmetic operations with symbolic values. (#108160)
eb8659fe81 : pass inference accuracy check for detectron2_fcos_r_50_fpn (#108328)
95f268e426 : Add examples for `nn.CosineEmbeddingLoss` (#108215)
f8c93df2d1 : Fix boolean tensor for map (#108289)
46f0d17498 : Change to torch.ops.higher_order.cond in verifier (#108302)
74ff028839 : [dtensor] fix new_empty_strided op (#107835)
46cd2fef3f : Create empty host tensor for MTIA device type. (#108198)
dabdb97087 : [Dynamo] Graph break on functions using tensor out variants (#108182)
877561f388 : Enable Mypy Checking in torch/_inductor/dependencies.py (#107675)
2e1e7ed610 : Revert "Fallback to eager for float8 ops in inductor (#108293)"
335767e7da : Raise an error for unsupported ctx managers (#108272)
5727b07ac6 : TD: logging bugfix (#108288)
06d74e6b24 : Revert "[AOTInductor] Include constants in AOTInductor .so file. (#10… (#108349)
01dfa7620d : MAINT: np.unique works with f16 directly (#108228)
cbf7c91883 : inductor: make fallback for cpu scatter_add (#108220)
9df3d882c8 : Flash Attention v2 (#105602)
239ee76177 : Add refs/decomps for dot/vdot (#108194)
239fed7e1e : Add reference for linalg.vecdot (#108188)
150088a9cd : Revert "Use ctypes to serialize raw content for tensors. (#108287)"
691e0e9799 : [export] Copy gm before calling PassManager (#108321)
31ef33871d : [vmap][dynamo] run vmap under python dispatcher (#107947)
58268137f1 : [pytree] Allow register_pytree_node to take in 5 inputs (#108256)
50fa5880e8 : [vmap] symintify alias and squeeze (#107577)
138fafe72d : [export] Fix torch.export() issues for server use cases. (#108275)
43f28beffc : Use ctypes to serialize raw content for tensors. (#108287)
c24d0d3163 : clang8=>clang9 in jobs (#107144)
a20fac89c8 : [4/N] fix clang-tidy warnings in torch/csrc (#108305)
d72b990bab : [ONNX] Move large scale models without non-persistent buffers to runtime test (#108084)
9ed0b3fcd9 : [release_note_tool] Update test and skip commits that errors out (#108252)
9862c7196b : [Dynamo] SetVariable supports contains (#108189)
98aa3745c2 : Fallback to eager for float8 ops in inductor (#108293)
0e4752bafc : Allow registering decomps for HigherOrderOp; add decomp for out_dtype (#108080)
95e3126370 : Revert "[BE] Pin scipy to 1.10.1 (#108270)"
11860d9d41 : Added info for each artifact option, added a help option to TORCH_LOGS, and changed the error message (#107758)
cd3860cf16 : [BE] Pin scipy to 1.10.1 (#108270)
b535ed2c1a : Update to RNN documentation (issue #106085) (#106222)
23a6706c7d : Fix triton upload channel detection (#108291)
7cb4bf675b : [inductor] no-side-effect codegen (#107617)
3817de5d84 : Fix layernorm cpu precision issues (#108089)
8a089f632e : [inductor] Fix MKL issue with test_indirect_device_assert (#108172)
b2fe5eb710 : [inductor] Ignore sympy.PolynomialError while simplifying (#108280)
6830480999 : [inductor] Move test_inductor_sequence_nr out of test_aot_inductor (#108237)
7fb131043c : [memory snapshots] _record_memory_history_legacy bug fix (#108260)
5911faeb8f : Horizontally fuse input concatenation (#108115)
704b0b3c67 : [RESUBMIT] Standardize on error types for distributed errors. (#108191)
6dacf52f88 : [submodule] [C10] Update gloo. (#107236)
39130c7433 : Add reinplacing pass for scatters + incremental fake tensor updating (#106192)
d0b725ea8a : reduce overhead in split and chunk for NestedTensor (#108213)
071f9ccd8b : [inductor] Add input generation fn option for autotuning (#108242)
ad17e5ec4e : Faster gc_count update for CUDACachingAllocator (#108071)
238cc84af9 : [TD] Emit metrics to compare heuristic quality (#108192)
d695486f69 : [Vulkan] Fix addmm & linear when bias needs to broadcast (#108199)
5683ab74f4 : [export] Fix autogenerated stacktrace (#108217)
6ad5568cbc : Break graph on `manual_seed`. (#107594)
b1b9a3646a : Increased logging threshold for profiler matching (#108010)
01fc6466d1 : [Reland] [1/N] fix clang-tidy warnings in torch/csrc (#108114)
7be233f3a5 : Remove commit hash when building triton wheel and conda in release mode (#108203)
057b807178 : [quant] Move dropout replacement to `move_model_to_eval` (#108184)
0fb1c05c5a : [pytorch] Add decomp rule for scaled_dot_product_attention (#108180)
e31038d574 : Check results dtype in index_out (#108167)
fe1f26af8a : Add support for PickleOpCode::APPEND in torch unpickler (#104027)
0297232053 : Fix operator precedence (#108196)
813246c554 : Add scalar conversion using avx instructions for half (#102140)
ca7249b80a : Remove duplicate sentences in description of torch.linalg.eig (#108230)
3a79621c9d : [Inductor] Add fused_attention pattern matcher with additional clone (#108141)
e45b391ebd : Enable Mypy Checking in mkldnn_fusion.py and quantization.py (#108131)
1a5fdc2458 : Re-enable some Quantization UTs after Quantization flow updates (#108125)
620d267ef3 : Refactor TestPrioritizations to support more priorities and reduce risk of accidental mutations (#108117)
5e0ec03a71 : [inductor][easy] reuse a single is_aligned function (#108135)
bf517f4092 : [vision hash update] update the pinned vision hash (#108201)
283ce12aa9 : Add channels_last3d support for mkldnn conv and mkldnn deconv (#95271)
13e4cce83c : [DTensor] Add util API to compute_local_shape_and_global_offset for checkpointing purpose (#107996)
556bfe7cb5 : [inductor] let codegen not rely on node order (#107320)
7264b75763 : Remove Anaconda Prune (#108111)
cd07214a41 : Fix various issues on build-triton-wheel workflow (#108187)
2c87ef3dbf : [inductor] Fix inputs with existing offsets (#108168)
c3239442a3 : [AOTInductor] Include constants in AOTInductor .so file. (#107718)
fa49be2a49 : [docs] Properly link register_post_accumulate_grad_hook docs (#108157)
525b593954 : Fix focus builds of macOS apps on apple silicon. (#96966) (#107816)
86bc50ae60 : Add AMP support to linalg.vecdot. (#108165)
75884f4e1d : Error when someone calls train/eval on pre_autograd graph (#108143)
cadd97feef : Remove case for `RecursionError` on `try_solve`. (#108144)
68b518c13e : Add check for out of range pointer. (#107510)
78810d78e8 : Fix the coredump described by #106702 (#108002)
fa885baf04 : [ROCm] Update ROCm pin to fix triton wheel lib issue (#108137)
4e47ea5131 : Revert "Break graph on `manual_seed`. (#107594)"
fe2cda64dc : [C10D] Implement new libuv backend for TCPStore. (#108066)
80c7fdf49f : wrapper subclasses: support non-cpu device for dynamic shape overload (#107926)
c6e3adaf54 : add dynamic shapes support for subclasses that override size/stride (#107916)
4f34caf164 : add return_and_correct_aliasing() util for wrapper subclasses (#107915)
6c28de2437 : Break graph on `manual_seed`. (#107594)
b7624fc91e : Cleaned up test_mps.py::test_output*_match (#108092)
f3a8d57aea : [Dynamo x FSDP] Add support for params, buffers, submodules on FSDPManagedNNModuleVariable (#107923)
977e4302ab : skip dynamic shape test for test_conv_bn_fuse (#108113)
147b3495e2 : [quant][pt2e] Add reference representation for dynamic quantized linear (#108073)
0cfc5899f9 : [inductor] Improved grid_sampler_2d decomposition for cuda (#104710)
d040d5b9ee : Fix multi output layout error in indexing dtype calculation (#108085)
e68b3ad14f : update triton pin with needed inductor change (#107722)
00eed6f367 : Better Error Message for invalid Out_dtype + Bias for scaled_mm (#108097)
1b2eac00cb : [vision hash update] update the pinned vision hash (#108112)
6648880aca : Revert "Remove Array.h (#106810)"
de5ffa8a3a : [inductor] Add aten.multinomial to disallowed cudagraphs ops (#108105)
6d61d74545 : [dynamo] Fix setattr nn.Module with new attribute (#108098)
39297eb22f : Remove Array.h (#106810)
da54f3c519 : reorder proxy / fake modes so they always run last (#104482)
5efd63b1b8 : better support for fakeifying and dynamoing through torch_dispatch subclasses (with dynamic shapes) (#107415)
378ffde8c1 : Revert "Remove some unnecessary <iostream> includes from headers (#106914)"
2f226804a0 : Revert "Minor fixs to make torchbench runable on torch/xla (#107919)"
de972529dc : [logging] Add more flags to default logs (#107912)
5251ae6fb7 : Explicitly include iostream (#108103)
2d54d4c913 : [inductor] Add constant_to_device for ir.Constant (#108087)
73235d08c3 : [dynamo] Graph break on pack_padded_sequence (#108096)
d4ff06ec84 : Revert "Standardize on error types for distributed errors. (#107651)"
cd4f74fb2e : [PT2] - Add check for stack (#108012)
3488837ec1 : Update ruff to v0.0.286 (#108058)
8caa89917b : Revert "[ATen] Update pre-compiled header (#106915)"
9d2ffc5dfa : [reland][Dynamo] cache_size policy #107496 (#108069)
cd20a89ccc : [ROCM] Add ROCm support to debug_dump and enable_debug_mode (#107845)
0e2317479b : Standardize on error types for distributed errors. (#107651)
9fdb5ef26b : Skip ROCm jobs on PR (for now) (#108083)
199e23bc3a : [quant][be] Clean up QAT tests in test_quantize_pt2e.py (#107991)
18a58f0bd6 : Implement "RAdamW" optimizer (#107507)
8cbf77585d : Revert "[1/N] fix clang-tidy warnings in torch/csrc (#107648)"
b0d109f29f : [ONNX] Bump onnx submodule to 1.14.1; ONNX Runtime 1.16 (#106984)
bcda859e34 : fix typos (#108006)
5d85d897e0 : Torchrec Enablement Fixes - Re-PR 107910 (#108018)
73cbe95005 : [pt2][autotuning] add logging for failed autotunings (#108034)
182a9cf366 : Add Independent Memory Efficient and Flash Attention Build Flags (#107985)
f0c6e5c91f : Fix the use of inputs.build_environment in #107868 (#108075)
584a01b650 : Fix LayerNorm(bias=False) error (#108060)
054f3f1d8f : [3/N] fix clang-tidy warnings in torch/csrc (#108024)
356b8f6339 : [dynamo]bugfix:implement numel() for SizeVariable (#107944)
7349e8c1a1 : Don't use `np.random` for TorchDynamo (#108009)
a1d8132210 : Enable mypy check in torch/_inductor/optimize_indexing.py (#107943)
20f3808aa2 : Implement decomposition for aten.tensor_split.tensor_indices_or_sections (#107251)
010064159b : Fix the issue described by #106532 (#108036)
c8f7f2659b : Two small mem_eff bug fixes (#103201)
67371c7431 : Binary op support for (B, C, *, *) NT with (C, 1, 1) dense (#107890)
33d70be95f : Binary out-of-place ge.Scalar / eq.Scalar support for NT (#107892)
e917d2749a : Unary out-of-place sin / cos support for NT (#107891)
264df88a2d : [C10D][Logger]Add more info to c10d logger (#107331)
dcc674de8e : remove step invocation warning (#107216)
60bb02a907 : Fix fallback FBGEMM implementation for Big Endian systems. (#96422)
49e964cad6 : Automatically turn on dynamo in cond (#108028)
6f8eecfb10 : Add UncapturedHigherOrderOpError to always raise exceptions for cond. (#108027)
138e2895d0 : Enable tuple operands for cond (#108026)
8688965337 : Move cond to torch/_higher_order_ops/ (#108025)
1fd4e787ce : [2/N] fix clang-tidy warnings in torch/csrc (#107966)
9ae3d7ca90 : [reland][quant][pt2e][xnnpack_quantizer] Add support for mul and mul_relu (#107930) (#107992)
a432f37e49 : Serialize pytree to json string (#106116)
4b27e46ddb : [Quant][Inductor] add UT of dequant promotion for linear (#106935)
f3adbab4bb : [Quant][Inductor] Enable quantization linear pattern fusion inside inductor (#106934)
15ceafb5c5 : [Quant][Inductor] Enable qlinear weight prepack inside inductor constant folding (#106782)
e9b0f62a19 : [Quant][PT2E] Enable linear and linear-unary post-op quant recipe for x86 inductor quantizer (#106781)
2179ebde1f : [inductor] correctly handle resize for AOTInductor wrapper calls (#107848)
a6d3da1835 : [Quant] Add int8 linear op impl for quantization PT2E with Inductor. input is an int8 CPU tensor; weight is an int8 MdkldnnCPU tensor. (#105818)
bad3f2db40 : [vision hash update] update the pinned vision hash (#108011)
808e088615 : Update writing_batching_rules.md (#108007)
a18ee0c6ec : [ROCm] ROCm compatible configs for triton kernels (#107584)
15e5bd5103 : [ONNX] Support `torch.compile(backend="onnxrt", options=OrtBackendOptions(...))` (#107973)
c85c5954f2 : [Quant][PT2E]Make _fuse_conv_bn_ support graph capture by torch._dynamo.export (#107951)
fdbc2ec5cb : [Quant][Inductor] Fix the non contiguous load with uint8 data type (#106958)
9e3f3f0b3d : [Quant][Inductor] Enable lowering of qcat (#106838)
1147a28b0b : [Quant][PT2E] Add cat and avg_pool2d recipe into x86InductorQuantizer (#106836)
15d4dedbbf : [quant][pt2e] Add reference representation rewrite for statically quantized linear (#107994)
162109f6c2 : [export] Don't save example_inputs for now. (#107978)
d4a99631dd : Handle 2D blocking with foreach (#107840)
558a9501fa : [cuda] remove dead CUDA code in layer_norm_kernel.cu (#107976)
9fa5283401 : [dynamo+aten] Enable embedding_bag_byte_unpack + meta kernel impl (#107937)
2bddfb0263 : [submodule][Quant][PT2E] Upgrade IDeep to remove redundant QConv weight scale reciprocal calculation (#107565)
780a5a0c7d : [Quant][PT2E] Enable weight scale optimization in QConv PT2E (#105996)
9319dd1c7c : [Quant][Inductor] Enable the lowering of quantized maxpool2d (#105906)
70ca18f8a0 : [Quant][PT2E] Enable X86InductorQuantizer single quantizable op(maxpool2d) (#105639)
c5ad44be1d : Add torch.sparse.as_sparse_gradcheck decorator of gradcheck that allows gradcheck input function to receive and return sparse tensors (#107150)
e4b38b9ce9 : Support torch.sparse_mask on strided input with sparse CSR, CSC, BSR, and BSC mask. (#107777)
fe3309b4b8 : Add optional is_coalesced argument to sparse coo tensor factory function. (#107638)
781b7ebe91 : [DeviceMesh] Expose init_device_mesh (#107969)
35f4bb9a25 : [ONNX] Return input itself for non-fp inputs and support decimals for aten::round op (#107920)
ed8f21282f : Minor fixs to make torchbench runable on torch/xla (#107919)
95cacb7fa9 : [reland][inductor] make thread order consistent with loop order (#107902)
4e9d7f878b : [export] Serialize getattr nodes (#107924)
27afb1c61f : Disable Constraint constructor (#107918)
f877d0a4bf : [dynamo] Treat monkey patched .forward as dynamic (#107104)
240bdbea61 : [quant][pt2e] Fix annotation for conv no bias case (#107971)
25d98a3e3b : [ONNX] Remove API reference for TorchScript export diagnostics (#107979)
52eb773e9c : Add runtime assertions for prim values (#107939)
f92f69dbfb : [quant][pt2e] Enable testing for reference quant model representations (#107474)
8d44b0f5a5 : Revert "[quant][pt2e][xnnpack_quantizer] Add support for mul and mul_relu (#107930)"
3267996372 : add channel last 3d support for maxpool3d on CPU (#97775)
ee171465ad : [ONNX] Support constant tensors in FakeMode exporting (#107836)
42d60d012e : Bias overflow fix mem eff bias (#107968)
1d1739dc6d : [quant][pt2e][xnnpack_quantizer] Add support for mul and mul_relu (#107930)
d35d7de60e : Revert "Handle 2D blocking with foreach (#107840)"
af229ecd34 : [RFC] Change --standalone to bind to a random port (#107734)
7ef13b1831 : [TP][2D][EZ] Fix Error in FSDP 2D test (#107975)
08e49fe97a : Make openxla and opexla_eval backend show up in list_backends (#107905)
6c0ce03b1f : [inductor] WeakDep should not prevent dead node elimination (#107813)
71045f4885 : AOTInductor: error: ‘c10::Dispatcher’ has not been declared for CPU model (#107935)
1374974d60 : [Quant][Inductor] Enable quantization conv_binary(add/add_relu) pattern fusion inside inductor (#105456)
d2105a8688 : inductor: support masked load for cpu path (#107670)
f87ffe473d : Handle 2D blocking with foreach (#107840)
d9fb7166d6 : [BE] use DeviceIndex instead of int64_t for related device interfaces (#103068)
4656e09431 : Fixes #107737 SGD doc blank line (#107738)
161ea463e6 : Revert "Remove remaining global `set_default_dtype` calls from tests (#107246)"
c68d0a7042 : [ATen] Update pre-compiled header (#106915)
a6c29b7227 : Remove some unnecessary <iostream> includes from headers (#106914)
78a053bad7 : [activation checkpointing] Add default autocast keys to functional rng wrappers (#107934)
3992450e8d : Add backward check for test_memory_format (#106104)
c1e0fb7ff0 : [Quant][Inductor] Enable quantization conv_unary(relu) pattern fusion inside inductor (#105455)
4f3ff16baf : [Quant][Inductor] Enable dequant promotion inside inductor (#104590)
087c0613c3 : Implement size checking for copy_ with meta tensors (#107779)
46f63e283b : [Quant][Inductor] Enable quantization conv pattern fusion inside inductor (#104588)
572bc4817d : Fix how DDPOptimizer clones dynamo callback (#107834)
25678e31dc : [Quant][Inductor] Enable quantized conv weight prepack inside inductor constant folding (#104581)
2b7271c703 : Support cond and out_dtype for predispatch (#107941)
8ef057255d : [Quant][PT2E] Enable qconv for quantization 2.0 export (#104580)
679e8e9d48 : [cuda] Fix the incorrect types in int8_gemm (#107895)
d24c457b30 : [inductor] Add cat + split_with_sizes elimination pass (#107956)
9af0e47653 : Hide `transform` method by renaming it (#107940)
598babf017 : Added normal op decomposition for specializations of the normal op (#106792)
b4c6c4da88 : Revert "[Dynamo] cache_size policy (#107496)"
4b44b1861d : [export] Store the arguments used to trace the exported program in itself (#107906)
48e05d5d44 : [ROCm] enable missed cpp tests - test_libtorch_jit (test_jit and test_lazy) (#107234)
b382d55338 : [core aten] Remove split.Tensor from core aten (#107938)
b445ed3158 : Cleanup RUNNER_TEMP folder (#107868)
3a3cf0e09d : Revert "[optim] Make casting to match params a hook (#106725)"
b9472decf8 : Initial Python 3.12 build fixes (#106083)
97a291f6bd : [ONEDNN][BC-breaking] update onednn from v2.7.3 to v3.1.1 (#97957)
ff37f6018d : Enable custom device support in fsdp checkpoint (#107289)
b18e1b684a : Bump scipy from 1.9.3 to 1.10.1 in /.ci/docker (#104746)
196ef78b90 : [ROCm] Use rocm manylinux builder image for triton wheels (#107600)
39854df1d3 : Make validate private by renaming validate to _validate (#107927)
f2f82855e2 : Add tests for foreach copy (#107860)
925d71e72e : [core][sparse][pruning] cuSPARSELt Kernels and ops. (#107398)
c2ac0da445 : Enhance fakepg: add fsdp+tp tests (#107626)
bfcd26459c : improved error message for IO mismatch (#107907)
bfb09204bd : Expose torch.export.{save,load} APIs (#107888)
4f2ff1d019 : add get buffer from exported program (#107809)
a0cfaf0688 : [quant][pt2e] Make sure XNNPACKQuantizer works with the pre_dispatch=True (#107872)
86f9fec3ac : Avoid decomposing `_unsafe_index` in Inductor (#107882)
e00bd83124 : Fix the example of torch.slice_scatter (#107849)
8b7b824dca : [inductor][ac] preserve recompute tags through pattern matching (#107742)
df2ca1871d : [vision hash update] update the pinned vision hash (#107911)
6e85a68829 : [MPS] Implement `polar` via metal shader (#107324)
00e9735ee3 : [ONNX] Enable 'ExportOutput.save' for models larger than 2GB (#107904)
fc33dc014a : [inductor][fx passes] batch tanh in pre grad (#107881)
0a9778a372 : Expose cudaStreamCaptureMode in CUDA Graphs, use local setting in inductor (#107407)
c18d2a3c05 : profiler tree test: skip cudaGetDeviceProperties_v2, cudaGetDeviceCount (#107887)
ec10b17cfb : [FSDP] verify backward_prefetch works correctly with unit test (#107058)
485de73004 : Improve unbacked symint error msg (#107806)
49eeca00d1 : [1/N] fix clang-tidy warnings in torch/csrc (#107648)
7dd1113463 : Expose ExportedProgram and related classes (#107852)
49fbaa29e6 : [c10d] Increase socket buffer size to allow ProcessGroup init up to 12k ranks (#107878)
8a7a6867b9 : [PyTorch][Tensor] Introduce tensor.dim_order (#106835)
2fbe6ef2f8 : [pytorch][Quant] Fix bias quant bug (#107810)
497571df58 : [aot_inductor] fix hardcoded output dtype (#107825)
870fa460be : Enhance fakepg: send and recv (#107625)
66dc1aba03 : [Inductor][MacOS] resolve macos openmp problem and provide a holistic instruction (#107111)
4175a6e944 : [Dynamo] cache_size policy (#107496)
d7130e9704 : Add SingletonSymIntNode (#107089)
a41d15e458 : Update nccl submodule to 2.18.5 (#107883)
5c133e91c3 : Add guidance on the tutorials release proccess (#107871)
1ef4bd169d : [ROCm] Add conditions for channels last logic (#107812)
40cbda274b : document memory snapshotting (#107660)
cd031f13ba : [security] Move s3-html-update workflow into its own environment (#107889)
6c508e0be4 : refactor common code, fix test discovery (#107506)
969bf8a054 : Fix the document of torch.nn.functional.conv2d (#107851)
f6cce3c468 : Fix sym_{sizes,strides} slow path (#107839)
35de780aa6 : Fix Inplace tensor update on transpose (#104689)
3cc5c42a23 : Fix aot sequence_nr to reset bwd flag (#107210)
eefce56b66 : Revert "[dynamo] Treat monkey patched .forward as dynamic (#107104)"
aa8ea1d787 : Remove remaining global `set_default_dtype` calls from tests (#107246)
918df10198 : [Easy] use dtype.itemsize in partitions (#107749)
0156eeb564 : [dynamo] bugfix - make module setattr more restrictive (#107828)
85b0e03df8 : Default permissions for torch.hub downloads (#82869)
64d5851b1f : make python decomp for native_batch_norm CompositeImplicitAutograd, remove native_batch_norm from core aten opset (#107791)
91a674ccd4 : Fix docstring for shape of `target` for MultiLabelSoftMarginLoss (#107817)
256fed02e9 : Check tensors are defined before attempting to access their impl (#106787)
c91d2f5bf6 : Remove CUTLASS extensions merged upstream (#107612)
cf76938f70 : remove redundant dynamic_dim (#107815)
8354d32f6b : Ensure optimizer in backward works with 2d parallel (#107748)
1491bae277 : [reland][inductor] Adjust dynamic SMEM limit when above default in AOT (#107814)
16fcb07846 : [quant][pt2e] Add support for channel in DerivedQuantizationSpec (#107833)
387556318e : [ONNX] Cap opset version at 17 for torch.onnx.export (#107829)
444875cd25 : constraint violation error messages (#107790)
1e71c51350 : [export] Serialize map correctly (#107837)
1166f9a02c : [export] Custom object serialization (#107666)
6ec2ec845c : [exportdb] Fix generating docs (#107838)
2fcda650cf : Revert "inductor: remove conv_bn folding from pre_grad pass (#106686)"
3af04ce0ff : Revert "enable conv+bn folding for mixed-dtype when bn has post activation (#107142)"
6178022aac : [vision hash update] update the pinned vision hash (#107831)
9bda8f1e16 : [inductor][fx passes]batch linear in pre grad (#107759)
f8119f8bda : Move `Constraint` class to torch.export() to avoid circular dependency in _dynamo package (#107750)
7bab98f161 : [export] Serialize cond submodules (#107818)
a560135516 : [Inductor] Add new fused_attention pattern matcher (#107578)
9b2d43df93 : Handle empty lists properly (#107803)
d707724ac9 : [DeviceMesh] init_device_mesh docstring update to include one d mesh initialization (#107805)
26ae48832e : Remove run torchbench. Torchbench runs are now part of the dynamo ci. (#107826)
4fdfe33ae6 : Bump scipy from 1.9.0 to 1.10.1 in /.github/requirements (#104763)
96c27c2d81 : Support is_mtia() in TensorBase.h (#107723)
4fd42e62c6 : Remove unnecessary import in python_variable.cpp (#107794)
6e71ad0509 : Add tensor post accumulate grad hook API (#107063)
3828cd4b79 : [TP][EZ] Update doc for TP parallel style (#107819)
432fce4e0d : Revert "Add tensor post accumulate grad hook API (#107063)"
bc0790559b : Revert "Remove unnecessary import in python_variable.cpp (#107794)"
2c45a579ca : Add wait_tensor so print always has a correct result for AsyncCollectiveTensor (#107808)
3d3f18260f : Move conda uploads into environment (#107807)
9a365fe914 : Use docker-build env to access GHCR_PAT (#107655)
b74b8e33db : [Redo] Enhance fakepg: alltoall and alltoall_base (#107798)
726b7ff608 : Support integer implementations for padding(cpu and cuda) (#107755)
8c66f97c9b : [profiler] move _enable_dynamo_cache_lookup_profiler (#107720)
cb107c74bb : [profiler] DISABLE_CUPTI_LAZY_REINIT for CUDA 12 as well (#107744)
4a022e2185 : Update unary_ufuncs groupings to include primtorch types. (#107345)
9f86d85172 : [optim] Make casting to match params a hook (#106725)
92f6454ff8 : [export][reland] ExportedProgram.transform updates graph_signature automatically (#107792)
2515ab93c4 : [FSDP][Docs] Add note on `NCCL_CROSS_NIC=1` for HSDP (#107784)
c0ba9a7840 : Fix docs, missed a // in LaTeX for nadam (#107736)
36399d067a : Port existing heuristics to TD framework (#107071)
d7f943ec82 : [mergebot] Flaky and broken trunk should take precedence over ic (#107761)
ee4b99cc3a : Decomp for aten.dropout (#106274)
50024d04a8 : [core aten] Add ops to core aten set (#107766)
8c62f01cb7 : [dynamo][guards] Use dict for storing weakrefs (#107645)
221daeb1a7 : Fix deepcopy for tensor with MTIA device key. (#107427)
42b6ba3484 : Use TORCH_SYM_CHECK for check_size_nonnegative on SymIntArrayRef (#107785)
cdd0821f00 : [2/N][DeviceMesh] Overriding __getitem__ for DeviceMesh to support Mesh Slicing (#107730)
652ccfadc1 : Expose torch.export.constrain_as_{size,value} APIs (#107735)
9d23b8b3ea : Remove unnecessary import in python_variable.cpp (#107794)
79b3a9f945 : [dynamo] Treat monkey patched .forward as dynamic (#107104)
977aba7cfe : Revert the removal of a SampleInput for gather (#107776)
c9b5e9d7a8 : [allocator] register oom observers on every device (#107399)
cc54448a07 : [memory snapshot] add 'address' key to block (#107171)
2b964d6efd : [FSDP] Enable async all-reduce for HSDP (#106080)
50e1378680 : [FSDP] Break up `_post_backward_hook` into smaller funcs (#106068)
55d6b80188 : torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768)
61fe49b8ed : pt2: make aot_eager backend handle basic float8 operations (#107783)
5b632bf7a6 : [ONNX] More debug logging from fx to onnx (#107654)
bb1852fb9e : [ONNX] Clean up diagnostic rules (#107653)
c3c1b68ae8 : [ONNX] Enclose package info for modules exported as local functions (#107409)
7a8db57e37 : [ONNX] Re-purpose 'name' field of GraphProto (#107408)
398f4ae451 : Back out "[inductor] make thread order consistent with loop order (#106827)" (#107796)
f7a51c4208 : fix pad_sequence docstring (#107669)
42738c56a0 : Skip the extra copy operation in broadcast_object_list if tensor_list has only one element (#107509)
ecde622649 : Revert "reseed all Generators in Dataloader's _worker_loop() -- via GC (#107131)"
3f2ecf7755 : [inductor] Separate to_{dtype,device} from lowering to avoid copying (#107640)
3022a395f3 : test_memory_format test now passes on rocm (#107696)
469e7479e8 : [CI] Delete .github/ci_commit_pins/huggingface.txt (#107729)
9f5c705806 : [CODEOWNERS] Add wz337 as a reviewer for Distributed Package and Distributed Tests. (#107747)
6f0d0b3850 : fix type check of overflow (#107579)
48b1208e05 : Disable nn.MHA fastpath for floating point masks (#107641)
207b06d099 : [dynamo] Wrap ndarray dunder methods (#107689)
b5c90ba7e7 : [dynamo] Fix ndarray.__pow__ (#107746)
2b6249e209 : Wrap indirect indexing on CUDA (#105055)
c81c217a2f : Make ExportedProgram valid tracing callable (#107657)
400c4de53b : [ONNX] Add huggingface models into CI tests (#107247)
610f64d72a : inductor: also check index_exp when select tiling var (#106765)
4a40e27583 : Enable mypy check in torch/_inductor/config.py (#107448)
d0f8ee45bd : [ONNX] Exclude FXSymbolicTracer from _assert_fake_tensor_mode (#107712)
31b0445702 : Fix torch.compile with FakeTensor that has SymInt sizes (#107662)
83517c8dba : Enable Mypy Check in torch/_inductor/virtualized.py (#107127)
4cc05c41fa : [MPS] Fix `torch.std` for negative dimensions (#107754)
17675cb1f5 : [vision hash update] update the pinned vision hash (#107757)
09c642bfc8 : Fix the use of head_branch in filter-test-configs action (#107753)
cbcd551045 : Fix torch.compile FunctionalTensor inputs for higherOrderOps (#107604)
fc380a2b5a : [ez] Minor refactors (#107656)
d395088dc8 : Add _native_batch_norm_legit_no_training to core IR (#107732)
d34cf147d1 : MatMul heuristics for aarch64 (#107167)
fada0527fa : Dispatch take_along_axis to gather (#107711)
62113a2361 : [dynamo] np.sort(complex) is not implemented (#107710)
2fc828312c : Support negative indices in ndarray.__getitem__ (#107688)
db39a81e1e : Add a flag that allows breaking on NumPy ops (#107687)
e573abec12 : Revert "[ATen] Update pre-compiled header (#106915)"
874d1b18b0 : [BE] reorganize opt disables in dynamo for clarity (#107709)
0c4fa02296 : fallback to cpu_kernel for VSX (#98511)
42897e8127 : Revert "[inductor] Adjust dynamic SMEM limit when above default in AOT (#107601)"
68c941d228 : [Mypy] move inductor to exclude list (#107741)
660e8060ad : [BE]: Update ruff to 0.285 (#107519)
e8278d6058 : Support graphs which return get_attr nodes directly as output (#107610)
979e706f8e : [dtensor] update some comments (#107608)
945fa7e8a8 : [dtensor] fix requires_grad in distribute_tensor (#107606)
8367cf1220 : Add Early Link To Intro Doc (#107344)
b115da8361 : [MPS][BE] Refactor atan2_out_mps (#107334)
d9460bb8f8 : Update test_MaxUnpool_index_errors XFAIL after #107483 (#107658)
a711679527 : Add skipLazy marker for tests and use it for tests not working with LazyTensor (#107382)
4d13422997 : fix errors about mypy check in torch/_inductor/compile_fx.py (#107508)
5025fb9213 : Revert "pt2: make aot_eager backend handle basic float8 operations (#107642)"
9c56ca80f3 : Delete accidental print statement (#107745)
c093fdf924 : Fix wrong hardcoded value for _scaled_mm (#107719)
c14f4d66c3 : [pytorch][export] Move is_param and get_param out of exir and into export (#107264)
8fb6416bfa : Revert "Remove CUTLASS extensions merged upstream (#107612)"
bcee3d6fa4 : [BE] Make nadam decoupled_weight_decay clearer, add missing setstate (#107706)
b282787409 : Revert "Wrap indirect indexing on CUDA (#105055)"
d59a6864fb : Revert "[BE]: Update ruff to 0.285 (#107519)"
1e9b590df9 : Optimize Net._get_next_net_name (#107479)
24147a8e1c : pt2: make aot_eager backend handle basic float8 operations (#107642)
ba5eeed4ac : [inductor] Add CPU-side profiler event for triton kernels w/ python wrapper (#106351)
614b865721 : [profiler] _RecordFunctionFast - faster python bindings for record_function (#107195)
137d96a26e : Expose torch.export.dynamic_dim() API (#107635)
515aa993e3 : Document post acc grad hooks in backward hooks execution (#107323)
b0e93e206c : Grant upload-stats jobs access to S3 (#107717)
2e054037da : fixing named tensor unflatten example (#106921)
28dc1a093f : Revert "Remove some unnecessary <iostream> includes from headers (#106914)"
c8a6c74443 : Remove aws ossci metrics upload keys from rocm (#107613)
a5f83245fd : Access ROCKSET_API_KEY from ephemeral runners (#107652)
de8a91f40a : [ROCm] Remove expected inductor UT fails for batch norm (#107027)
e0238577b6 : Always import test selection tools (#107644)
4dc9df2f87 : Slightly more flexible naming system for disable + slow tests (#104002)
e740491674 : [caffe2][cuda] Trace `allocate` and `local_raw_delete` events with PyTorch USDTs (#107322)
a408920817 : Reland fakify FunctionalTensor (#107569)
02d41b7afd : allow result of at::for_blob to advertise as resizeable (for tensor subclasses) (#107416)
2c8759df9d : Allow storage() to work on python tensor subclasses, but error on future data accesses (#107417)
df42f15e28 : Improve `generate_opcheck_tests`, add opcheck utility (#107597)
3f655277d4 : Add tensor post accumulate grad hook API (#107063)
bcede143bd : Do not mutate `SymNode` expression. (#107492)
d2215f14ba : Fix: transactional translation validation insertion. (#107523)
3f3479e85e : reduce header file to boost cpp_wrapper build. (#107585)
94d85f18c9 : Enable Mypy Check in torch/_inductor/triton_heuristics.py (#107135)
431d25a141 : [export] Add save/load function (#107309)
134d415615 : Unlift mutated buffers (#107643)
8ed169b162 : Re-enable AVX512 ATen kernels for compute-intensive ops (#104165)
ee72071fc7 : Avoid executing side-effectful graph_module as validation step (#107271)
155d12856c : Update utils.h and correct misleading error messages (#107602)
f9f88f2d31 : [ONNX] Add unittest for exporting embedding_bag (#105862)
849fbc6929 : [vision hash update] update the pinned vision hash (#107649)
a506d0ad8f : [dynamo] Store originating source in the Guard object (#107634)
12b0372a75 : [dynamo] Continue on fbgemm import fail (#107622)
3361fae89b : Fix FP16Planner documentation (#107620)
f13101640f : Quick return when there's nothing to bound in bound_sympy (#107549)
85c673e6b2 : Wrap indirect indexing on CUDA (#105055)
8292b03c47 : Use fast traceback for symbolic shapes (#107439)
072bb06117 : Change how caching/cleanup for CapturedTraceback works (#107471)
bbb216bca4 : Move torch.export() to torch.export.export() (#107609)
2e73c86d45 : [fx][split] make sure we copy node.meta over during split (#107248)
9c2b4a35a3 : [dtensor] group all dynamo tests together (#107487)
42f25d49f8 : [dynamo] enable 2d torch.compile test (#107473)
8c10be28a1 : Update reduce_scatter_tests to work for world_size > 1 (#104424)
1641d671e5 : [optim] FusedAdam/W accepts lr: Tensor without h2ds (#106916)
350fb16f47 : Add space to merge cancel comment (#107603)
da67b414d9 : torch._numpy: remove noops and half-implemented nan-functions (#107596)
f5d1df3c2f : [1/N] Introduce init_device_mesh() (#107254)
5ddb8ef827 : Make emit_metrics importable without having boto3 installed (#107070)
3920ce2f6e : [inductor] Adjust dynamic SMEM limit when above default in AOT (#107601)
cfd98d3c42 : Remove CUTLASS extensions merged upstream (#107612)
6981bcbc35 : fixing bug with non-contiguous mixed_mm [inductor] (#107495)
977a77ca2c : Manually enable `capture_func_transforms` for testing (#107122)
a816aa785b : Implement autograd support for sparse compressed tensor constructors (#107384)
04a7915dbc : Run check api rate limit on ephemeral runner (#107621)
a250cc9bd7 : Update persons_of_interest.rst (#107592)
d7c0c5de2d : Set crow_indices outputs as non-differentiable. (#107447)
a4eae43315 : [ONNX] Update xfail reasons in fx runtime tests (#107257)
612c8a8c84 : Guard numpy imports in the dynamo folder (#107299)
79d35bfc01 : [BE]: Add PYI files to ruff lintrunner (#107524)
e201e3ffa1 : [dynamo][eval frame] Make CacheEntry a PyObject (#107405)
3b2c5d47c0 : Use default build env and test config for test times (#107325)
ad07a4bc56 : Print per-tensor guard messages for TENSOR_MATCH (#107562)
3336aa191c : Adding allocated and reserved memory values to memory timeline view. (#107056)
da765995fb : [2d] remove ShardedTensor from fsdp extension (#107472)
e0f1fe102a : Revert "Add scalar conversion using avx instructions for half (#102140)"
df16b1ed53 : [dynamo+aten] Enable embedding_bag_byte_rowwise_offsets + meta kernel impl (#106105)
d5b8c71112 : [inductor] Revert inductor changes in #105977 (#107468)
a5efb5eb84 : [export] Serialize constrain_as_size ops (#107386)
5f56c4fb32 : [torch.compile x autograd.Function] More test cases (#107467)
72de9b2ec2 : [HigherOrderOp] stop erroring out on non-Tensor returns (#107461)
c5c41f9601 : [HigherOrderOps] Saner error message (#107459)
796ce67229 : Single source of truth for guard logging (#107532)
8316affc45 : Add frame/recompile counter to all log messages in tracing context (#107530)
5ed60477a7 : Optimize load inline via pch (#106696)
24968383b5 : Fix RenamePlanner documentation (#107535)
7ba513b6e4 : [FSDP][state_dict] Expose optimizer state_dict config (#105949)
63e9b5481d : [export] Add schema version to serializer/deserializer (#107420)
6dea9927a8 : Don't use thrust::log(complex) in CUDA as it takes a FOREVER to compile (#107559)
5ce88e7e71 : remove unnecessary import introduced in PR 106535 (#107440)
b9befc53a6 : benchmark: higher tolerance for RobertaForQuestionAnswering (#107376)
1ea83f04d2 : benchmark: convert output of fp64 to torch.float64 (#107375)
d77e95c3bf : [Compiled Autograd] Improve nyi error messages (#106176)
59c5424654 : [inductor] Improve handling of index_expr with floating point dtypes (#105021)
3b160ecc71 : Fix wrong error messages with torch.nn.AdaptiveMaxPool1d (#107450)
96c5be8bc4 : Revert "Fakify leaf of FunctionalTensor (#107062)"
c1cc74c7da : Enable a number inductor of tests on CPU (#107465)
71632d4d24 : [cpu] add sdpa choice and UT (#105131)
a46217d2ef : [CPU] Enable fused_attention pattern matcher (#107128)
6d647762d0 : [cpu] enable bfloat16 and refactor for flash attention (#104863)
3fc321f342 : [cpu] implement flash attention backward (#104693)
5516fe12ec : [cpu] implement scaled dot product flash attention (#103826)
02dfacb1ec : expand functional map for reduced floating points on CPU (#104584)
68b9bf9671 : Simplify verbose error guard printing (#107516)
d6d485fa8c : Revamp guard debug logging (#107505)
db3a199b2c : fix symint meta val (#107491)
4d0e7908c3 : disable multi_linear_share_same_input for dynamic shape case (#107123)
e21ca06f46 : [BE]: Update cudnn_frontend submodule to v0.9.2. (#107525)
2c3d2fa2d2 : do not raise constraint violation on trivial guards (#107470)
b1e8e01e50 : [BE]: Apply PYI autofixes to various types (#107521)
24f0b552e1 : [EASY] Use runtime_var_to_range for guards (#107329)
88ab3e4322 : [BE]: Update ruff to 0.285 (#107519)
02c2b750c5 : Add support for GET_YIELD_FROM_ITER, YIELD_FROM, SEND (#106986)
4f3284e3ed : [ATen] Update pre-compiled header (#106915)
60936e4c29 : Remove some unnecessary <iostream> includes from headers (#106914)
eee2f57257 : Raise TypeError for calling moduletype in dynamo (#107393)
3349725766 : Fakify leaf of FunctionalTensor (#107062)
11602ac564 : [dynamo] fix disable_saved_tensors_hooks - graph break (#106875)
4eac43d046 : Trace through Tensor slots (#107159)
8df298bc1e : [functorch] vmap-dynamo: run vmap_impl under fake_mode (#107462)
871d7d242d : Silu support Complex for CUDA (#106854)
3ddf30505f : fixing internal test failure on non sm_80 machines (#107340)
b5642f0b02 : [vision hash update] update the pinned vision hash (#107498)
302278b4d5 : [pytorch][fakepg] enhance fakepg: broadcast and scatter (#107480)
017499b078 : Update reduction_ops groupings to include primtorch types (#107338)
93f2a64d4d : Update submodule NCCL to v2.18.3 (#104993)
64e02de93c : Revert "Use CUDA DSA in ATen (#95300)" (#107483)
2d7a062db0 : Update shape_funcs to test primtorch operators (#107336)
5814380e7b : Revert "Revert "Reland "Add forward mode AD to out-place foreach functions (#102409) (#106043)""" (#106320)
bc662ffff9 : [ROCm] Update ROCm skip decorators (#106138)
28be2c674a : [quant][pt2e] Move specific quantizer related things outside of main quant code base (#106806) (#107259)
4ee6224767 : Remove jbschlosser from symbolic-shapes auto request list (#107482)
35e222e152 : Enable mypy check in torch/_inductor/fx_passes/post_grad.py (#107449)
77f080ee29 : [pt2] test if core decomps are differentiable (#107241)
5b7b9e7896 : Update binary_ufuncs groupings to include primtorch types (#107419)
af0ed25ea8 : Change >= in the GRU and the LSTM document to \ge (#107379)
c2706e5b5d : Enable mypy check in torch/_inductor/kernel/unpack_mixed_mm.py (#107445)
2d2d43d9fb : add more check on LSTMCell (#107380)
bdecdfd202 : [Compiled Autograd] Fix duplicate visits of same node (#105887)
67bb3c05b0 : Add verbose_guards logging artifact (#107388)
36bb7a1f42 : Add fast traceback utilities (#107358)
d5f7df3b8a : Hand bind CapturedTraceback (#107438)
d8f2ef10a6 : [dtensor][1/n] refactor op dispatch logic to reduce overhead (#107305)
8d6a487d69 : [dynamo] Make KeyedJaggedTensor a variable. (#107319)
ea3381d92c : Make StarDep.index throw an error (#107092)
139437bb84 : Make Openxla dynamo backend take boxed input (#107260)
3c11184ca8 : Revert "Fakify leaf of FunctionalTensor (#107062)"
c21e9de25d : Inductor cpp wrapper: fix optional tensor input (#106847)
e10791c0bd : enable mkl_gemm_bf16bf16f32 in cpublas::gemm (#107196)
42625da5e1 : reseed all Generators in Dataloader's _worker_loop() -- via GC (#107131)
95f1591acb : error on bad input to equality constraint (#107311)
9c9982a0aa : Turn on typechecking for _inductor/kernel/conv.py (#106258)
18b1c2907d : [inductor] Add ir.WelfordReduction with multiple outputs (#104725)
3699c6adaa : [DTensor][random] add DTensor constructor: rand (#106535)
d465d6a838 : [inductor] scatter_reduce - skip .item() in backward if GradMode is not enabled (#107353)
a815e719e8 : Turn on typechecking for _inductor/utils.py (#106252)
1d6a446567 : Add scalar conversion using avx instructions for half (#102140)
b31a357eaa : [dynamo][eval_frame] Set destroy_extra_state deleter as part of co_extra (#107117)
4608b9422c : [dynamo][eval_frame] Unify cache entry and frame_state on the same co_extra index (#106917)
fcd1a0e93e : [inductor] Use divisor_override param in aten.divisor_override lowering (#107401)
2bb59a9ac6 : Fix vscode test discovery (#107404)
6cb0128c8a : Fakify leaf of FunctionalTensor (#107062)
36141de427 : Throw error if `stateless.functional_call` called with `nn.DataParallel` (#107403)
600f9ef2ad : [nullability] Suppress -Wnullable-to-nonnull-conversion errors in caffe2 (#107418)
9ca2106e5f : Use CUDA 12.1.1 patch version in CI (#107295)
35b2b3ee47 : Fix rst formatting in torch.compiler_troubleshooting.rst (#107360)
608afe8083 : Added xla friendly codepath to single_tensor_adamw (#102858)
89de048563 : [BE] Use allocator to allocate workspace (#107178)
3c3874d623 : Align formula in Impl::General mode with Impl::Contiguous mode in batch_norm_elementwise_cuda op. (#106943)
02bcaf45f6 : Revert "Add backward check for test_memory_format (#106104)"
d3f92ca9e9 : Revert "[C10D] Implement new libuv backend for TCPStore. (#105870)"
266772472e : Describe the 'float32_matmul_precision' settings in more detail (#107169)
2b3917dc63 : [ONNX] Fix memory leak when exporting models (#107244)
d8dadb0f25 : aot_inductor: fix compile returning None if cache hits (#107020)
37eb969939 : Update test name in multiGPU test (#107397)
b9c86c521d : Make mergebot work with review comments (#107390)
4874b02379 : [BE] Remove deprecated github gql param and disable inconsistent test (#107385)
8ccfd801be : Introduce CUDA-only `_scaled_mm` op (#107341)
2e44adb066 : Add backward check for test_memory_format (#106104)
c69514ccb2 : Update `generate_opcheck_tests`, also use it to test some internal tests (#107328)
bbf03561a9 : [functional collectives] Move back to registering finalizers on wrappers. (#107250)
3c841163ce : [C10D] Implement new libuv backend for TCPStore. (#105870)
d86445a506 : [cuda] vectorized gamma and beta loading in vectorized_layer_norm (#107287)
8a0425fdd6 : [export] Remove setter for graph_module (#106651)
2d727c8c3f : remove the duplicate method `is_private_use1` in class Device (#107198)
aca3d1433c : Estimate Scheduler node runtimes (#106426)
aa04b0536b : Fix inference_mode decorator pass mode as kwarg (#107349)
4bfc55ba8b : [MPS] Enable forward test for renorm (#106666)
8298720299 : Enable Lowering Channels last Conv1x1 when max autotune is set (#107004)
f96617fdcd : Add deployment environment for docs and upload test stats (#107318)
aa9f6a4335 : Fix native_batch_norm_backward returning non-channels_last_3d grad (#107270)
e108f33299 : Update distutils.Version to packaging.version due to the deprecation … (#107207)
a98f745c80 : Use compiled model in torch.compiler_get_started (#107267)
f21b9cb954 : Enable mypy check in torch/_inductor/kernel/mm_common.py (#106776)
11e366943d : Fix rst formatting in dynamo/guards-overview doc (#107275)
384e0d104f : [vision hash update] update the pinned vision hash (#107342)
29813c61ea : enable conv+bn folding for mixed-dtype when bn has post activation (#107142)
f53ecfbcc6 : Correctly format original traceback for delayed CUDA error (#107297)
e9af315e02 : Fix torch.bucketize docs for "right" (#104474)
25d87c8301 : torch.ops.aten.*: sort aten ops before jit overloads (#107138)
983fd5ba79 : [2D][TP] Enable DDP TP integration with unit test (#106583)
4979a1b8f9 : Fix trymerge broken trunk detection when the merge base job was retried (successfully) (#107333)
5b9b816b17 : WAR by avoiding querying device before env mutation (#107301)
b234b94760 : Add in-place `_foreach_copy` (#107226)
8507b22fea : propagate _GLIBCXX_USE_CXX11_ABI to NVCC (#107209)
a4229690e3 : Add Some Checks about dim (#107223)
f3b0d83fe3 : [EZ][TP] Refactor FSDP 2D integration extension code so that it can re-used (#107313)
b08b0c915f : [easy] Fix docs for sd calculation in BatchNorm1d/3d for consistency with BatchNorm2d (#107308)
d3c4ec767b : [quant][pt2e] Fix handling for SharedQuantizationSpec (#106922)
6bfb4f7c4b : [CUDA][Linalg] Patch crash of `linalg.eigh` when input matrix is ill-conditioned, in some cusolver version (#107082)
0434a2c7c8 : [BE][PG NCCL] Improve input mismatch error msg (#107281)
ba6fcc4eae : [caffe2][SDT] Check whether `TORCH_DISABLE_SDT` macro is defined before referencing it (#107149)
f16be5e0d4 : Reordering tests experiment (#106347)
884c03d240 : Improve activation checkpoint docs wording (#107296)
e9ae820279 : Unfuse bias add before pointwise ops (#106912)
c88775b937 : Make Nd tensors hit fused addmm pass (#106911)
1c6f39098f : [export] avoid calling the callable during export. (#107249)
d0e50d9094 : Move overloaded_args from FunctionSignature to PythonArgs (#106983)
1f6c1d9beb : Fix inductor torch.cat edge case for empty tensor (#107193)
7cb2a6bfab : [dynamo][fallback] Fallback to eager when backend fails with fake tensor exceptions (#107179)
3577ae3e53 : Fix TestDistBackendWithSpawn.test_backend_group and test_backend_full_group (#107231)
ddba7a5a55 : Expose torch.export() API (#106904)
528a2c0aa9 : Fix bug: not creating empty tensor with correct sizes and device. (#106734)
4de5e1775a : Improved log1p implementation for complex inputs (#107100)
35cca799ff : [vision hash update] update the pinned vision hash (#107272)
e0d6072f69 : Add API to mark input tensors static for cudagraphs (#107154)
9921b48558 : Extend Inductor to support the third-party backend (#106874)
6c0bba3daf : Revert "Use cpuinfo to determine c10::ThreadPool thread number (#107010)"
1af324b560 : Revert "Introduce CUDA-only `_scaled_mm` op (#106844)"
22f5889753 : [Dynamo, ONNX] Replace `onnxrt` backend with new backend from ONNXRuntime team (#106929)
d290511ecd : [gtest-static-listing] Enable for cc_test (#107186)
19db570cd9 : [ROCm] Add miopen determinism support for convolutions (#107028)
bc053070f8 : Mark test_gradient_extreme_cases as slow for inductor (#107189)
f6a9c15421 : [FSDP][state_dict] Make optim_state_dict_to_load work with use_orig_param=False + NO_SHARD (#107185)
f76250f6e3 : [ONNX] Relax not exist assertion for 'register_pytree_node' (#107245)
d8a71a6633 : [ONNX] Set 'Generic[Diagnostic]' as base class for 'DiagnosticContext' (#107165)
c71828b097 : Lift non-FakeTensor restriction for compile (#107042)
22cade56ba : Revert "[Reland] Upgrade NVTX to NVTX3 (#97582)"
87cd10bc7b : Add basic TD framework (#106997)
b860c8c5b8 : Revert "ExportedProgram.transform now updates graph_signature automatically (#107080)"
10ce16bebb : Specify if mismatch is input or output in export (#107145)
5673c0874c : Use expect_true to make split with unbacked sizes work. (#106788)
e1ee10e6f5 : Add expect_true for irrefutable guards (#106720)
388ba7e5ae : [ptd] make multithreaded pg wait for readiness before the 1st collective (#106954)
e9cb7179cb : [ONNX] Fix diagnostic log and add unittest (#107158)
19a76290d8 : [ONNX] Public diagnostic options for 'dynamo_export' (#106741)
45128ab67c : [Reland] Add OnCompletion Hook to ProcessGroup (#106988) (#107233)
d9dc4b2b4c : [BE] Add missing override to remove build warning spam (#107191)
8c44cfef5e : Add some support for detecting false aliasing in AOTAutograd (#106461)
517ba2add7 : AOTAutograd: allow input mutations on inputs that are non-contiguous (#106460)
ad0476540d : Use cpuinfo to determine c10::ThreadPool thread number (#107010)
7fb543e36d : [ROCm] Enable hipsolver unit tests for batched linalg drivers (#106620)
ed0782125a : [gtest-static-listing] Make decision for caffe2 targets (#107129)
fd214aa8be : Revert "Add OnCompletion Hook to ProcessGroup (#106988)"
2abcfc40b0 : Enable torchgen for MTIA dispatch key (#107046)
935f2dd084 : adding fused uint4x2_mixed_mm to inductor (#106516)
4add06eb5c : [CUDNN][CUDNN V8 API] LRU Cache for cuDNN frontend `ExecutionPlan` (#104369)
b7431d815f : [submodule] Pin fmtlib/fmt to 10.1.0 (#106672)
20c5add133 : [export] Refactor `constrain_as_value` and `constrain_as_size` (#106591)
d6c120d7f9 : [TP][DTensor Perf]Fix DTensor Spec hash (#107181)
2d841bcb9f : Remove type promotion workaround (#107202)
c9c90765c1 : grad_mode decorators without paren (#107086)
ba1da47e8f : Add OnCompletion Hook to ProcessGroup (#106988)
2624da638d : Support third-party devices to use the init_process_group method with… (#107113)
574442ba01 : CI upgradeapalooza `bionic`->`focal`, `gcc7`->`gcc9`, `clang7`->`clang10` (#105260)
9440a8cbec : Introduce CUDA-only `_scaled_mm` op (#106844)
e4e9aa28a7 : Add `generate_opcheck_tests`, a PT2 crossref testing mechanism (#106903)
ddf36c82b8 : [PT-D][FSDP] Handle corner case of load with multi-backend PG (#107172)
064d813f37 : Add distributed/c10d *.hpp files to lintrunner (#107160)
facadc6c97 : [Easy] Make Work::retrieveOpType a const function (#107141)
dd6319198d : Apply clang-format to distributed/c10d folder (#107140)
858b465d74 : fix str splits in single line (#106005)
759c4995e7 : [ci] clean up some multigpu tests, and add funcol test (#107153)
9ae51e3ad9 : [thread_pg] fix reduce_scatter to respect different cuda device (#107152)
4be8fe0f0d : [thread_pg] fix all_reduce to respect different cuda device (#107151)
50927e25f7 : Correct compile doc string format (#107124)
2f0ca722d1 : Typo fix in Nonzero.cu (#107090)
2c5f96deac : [Inductor] Make softshrink composite implicit (#107052)
6d899571d6 : Simplify sign lowering in triton (#107051)
3b1254e800 : Make hardshrink's decomp composite implicit (#107039)
45c7880486 : Simplify some decompositions. (#107038)
80988b6277 : Introduce memory stacks for free (#106758)
df8493455e : [ROCm] enable test_api (test_libtorch) cpp unit tests (#106712)
1e007d044d : [AOTInductor] Prepare for ProxyExecutor, OSS only change (#107065)
4a6ca4cc05 : [TP][DTensor Perf] Some perf improvement to reduce DTensor CPU overhead (#106524)
00751772e6 : Upload perf benchmark to Rockset in batch of at most 5000 records (#107095)
8c9b2fe8f0 : ExportedProgram.transform now updates graph_signature automatically (#107080)
05db3d9969 : improve doc on how to understand dynamo (#106860)
770a565e26 : [dynamo][easy] Only xfail test_dynamic_shapes_float_guard_dynamic_shapes if z3 is available (#107137)
6af6b8f728 : Reland: Remove `set_default_dtype` from nn tests (#107069)
32f93b1c68 : [Security] Use github environment for update-commit-hash workflow (#107060)
5bbfb96203 : [Reland] Upgrade NVTX to NVTX3 (#97582)
461c703ee6 : Add typecasting for gelu backward kernel (#86673) (#106856)
2932b0bf37 : Extend impl_backward to be usable with torch.library operators (#106817)
db9a0cf689 : Extend impl_backward to handle non-Tensor outputs (#106800)
9fcce1baf1 : [custom_op] Allow constructor to infer more types (#106799)
d8ad74857c : Run translation validation on tracing error. (#106645)
937cd3742b : [xla hash update] update the pinned xla hash (#107120)
2b1058c542 : Enable mypy check in torch/_inductor/wrapper_benchmark.py (#106775)
d392963ac4 : [fbcode] Use FastCat in PT Concat implementation (#106727)
e7a3fb13e7 : [pt2] add Python metas for `special` ops (#106683)
b897c57d47 : [TGIF][Inplace][Perf] Copy tensor to device with pinned memory & move copy weight sleep to getRecord (#106849)
ddd2f682b9 : [executorch] Let custom ops registration code only import ATen headers (#107064)
f26aa2dcd9 : Keep fx node name consistent with aot_export (#107068)
8472c24e3b : [inductor] Optimize away zero-element loads (#107074)
aa36e16f95 : Add gfx90a target for ROCm CI (#106879)
6f83382161 : [inductor][easy] add a missing parenthesis (#107001)
5b04e9b6ce : Install torchrec/fbgemm from source in CI (#106808)
9858edd99f : Revert "Reordering tests experiment (#106347)"
c9cbcb2449 : [device_mesh] move remaining collectives to a separate file (#107012)
22095acfd7 : [ONNX] Migrate to PT2 logging (#106592)
5d09e49947 : Make the __call__ op of ExportedProgram follow calling convention. (#106186)
42660015b4 : [Dynamo x FSDP][2/x] Small changes to distributed to make it dynamo friendly (#106886)
91778ada87 : [inductor] graph replayer (#106952)
6730d5f9a0 : [inductor][easy] show kernel name in str(ExternKernel) (#106990)
2c8f24829f : Decomposition of bmm and mm for dot product (#106593)
ec0f3fda7d : Revert "Remove `set_default_dtype` from nn tests (#105775)"
3d00170b20 : [inductor] fix test_dim_function_empty (#106994)
547ccae0db : [export] Support preserving calling convention to some modules. (#106798)
354484ea6d : Revert "Add `_foreach_clamp` (#106574)"
c9cdcb299a : Remove ExclusivelyOwned from register_dispatch_key (#106791)
d97b18d769 : Propose nkaretnikov as general PrimTorch/meta/decomp reviewer (#106970)
fbfb9a1648 : [Dynamo] Improve PT2 fbcode logging observability (#106932)
1cfe292061 : Mark test_lstm_packed as slow (#107048)
22a20d0850 : Add `isFloat8Type` predicate (#106977)
5c48ff20b5 : AsyncCollectiveTensor: dont sync on view ops (#105240)
e165938853 : Implement decomposition for aten.rrelu_with_noise (#106812)
b1b3f61f2c : Skip Triton templates in MM max autotune with zero-size inputs (#106865)
656412f0cb : Add multiprocess option to dynamo benchmarks (#106394)
3fe1dba068 : Fix test_cond_functionalized_aot_func_check_functional (#106889)
a926be39d4 : torch.jit.script escape hatch (#106229)
71be8f2223 : Revert "Add initial support for FP8 ONNX export (#106379)"
e18ca4028b : [inductor] add one triton config for reduction (#106925)
6696a75ea8 : [inductor] make thread order consistent with loop order (#106827)
745d29b0cc : Revert "[export] Refactor `constrain_as_value` and `constrain_as_size` (#106591)"
0b05aef8d0 : Add ONNX export support for huggingface's bigscience/bloom-560m model (#106930)
9f26503bf0 : SymInt'ify tile (#106933)
a5d841ef01 : `asarray`: take the default device into consideration. (#106779)
171341ee65 : Support complex inputs in `nan_to_num` (#106944)
7db6eb7156 : [test_nn] add custom device support for dropout tests, lazy_modules te… (#106609)
03414081ff : adding mixed_dtype_mm to torch._inductor (#106443)
18989890bf : [export] Refactor `constrain_as_value` and `constrain_as_size` (#106591)
df6aaf7bc2 : inductor: fix compile error for inplace variable multi-defined (#106852)
7460adf7f3 : Causing internal clang tidy to error (#106895)
71a336ef75 : [Dynamo x FSDP][1/x] Builder support for deque, appendleft (#106884)
4df84c3b4d : make sure mkldnn convolution given same stride as ref path for nc11 contiguous input (#106966)
a9dca53438 : NumPy support in torch.compile (#106211)
8f774330af : [inductor] Use shape env bounds in inductor bounds.py (#106175) (#106568)
62b3018024 : [Vulkan] Introduce GPU Memory Layout qualifier (#106978)
8c8477e55a : Add _unsafe_index decomp (#106814)
152203d3c3 : [pytorch][ao] Add `torch.matmul` in FloatFunctional/QFunctional (#106831)
dfb1b95919 : [caffe2] Add enforce inside ScatterAssignOp (#106882)
aef27bdbe7 : Reword release version: major->minor in README.md (#106980)
a62de2d5ec : [inductor] Enable multilayer reductions with dynamic shapes (#106747)
fa65df3745 : [inductor] Type triton size arguments in the kernel index_dtype (#106870)
3b2cb459fc : [inductor] Fix reference_as_float gradcheck (#106626)
02abbb8109 : Fix some typos, mostly "that that" (#106901)
7b94d93431 : [FSDP] Fix train -> EMA -> eval with mixed precision (#106858)
79449e6272 : [quant][pt2e][fix] Remove the requirement of using no_grad for reference model that contains quantized conv2d (#106924)
1afbc985fe : Make RNGStateTracker support cuda-like device (#106771)
bb6b157458 : Fix IndexKernel.cu build (#104423)
393e9eed90 : [inductor] modify index_reduce to pass opinfo tests (#106429)
a14d99bb6c : Close non existent disable issues complete rollout (#106923)
c0f80c6696 : [forward-fix] Fix multigpu varying tensor optim tests (#106887)
149e458846 : [BE] RPC is missing RRef docs (#106902)
89fd1b8717 : Upgrade all inductor workflows to CUDA 12.1 / gcc9 (#106876)
4d6a891baf : Remove `set_default_dtype` from nn tests (#105775)
22bc08da29 : inductor: remove conv_bn folding from pre_grad pass (#106686)
35a1913370 : [inductor] Added affine_grid_generator decomposition (#104709)
bb2fcc7659 : unify TEST_CUDA (#106685)
2b560d3c3a : Add `_foreach_clamp` (#106574)
3495f0c999 : Generate mypy hints for torch.Tag, add a couple of pointwise ops (#106910)
606e3c297b : conv-bn folding in low precision (#106576)
4afab40b56 : [quant][pt2e] Removed mean/hardtanh annotations and refactored adaptive_avg_pool annotation (#106805)
dfd441a12c : [BE] Use nested namespaces in `torch/csrc/cuda` (#106928)
e34a05b960 : [ez][inductor][fx pass] strengthen numerical check for batch fusion (#106744)
83b5027027 : Enable Mypy Check in torch/_inductor/select_algorithm.py (#106701)
8093349d42 : Enable mypy check in torch/_inductor/fx_passes/post_grad.py (#106839)
e93a90bdd5 : [ONNX] Refactor perfect/nearest match criteria to allow optional inputs and disallow mismatch attributes (#106478)
4c1d8ab272 : [vision hash update] update the pinned vision hash (#106926)
9891c6aa15 : [export] cleanup pass base. [2/n] (#106905)
08704f96f0 : Add initial support for FP8 ONNX export (#106379)
526d93bba3 : Add `_onnx.pyi` to ONNX merge rules (#106927)
b9ad7bc533 : Don't run test/autograd/test_fallback.py in parallel (#106866)
0b57581dec : [pytorch] Disable fast path in MultiheadAttention in Export (#106824)
7f9d1cacca : [export] Minor fixes to constrain_as_size (#106737)
99a10da295 : [Dynamo] a dynamo backend based on ONNXRuntime (#106589)
4dc66a4b5c : [BE] fix type iteration typo in test_lrscheduler.py (#106908)
7b3d50e4cc : [Pytorch][Vulkan] Set global and local sizes for image->bool copy (#106752)
eefe06ef56 : [BE] Move common logic into `cublasCommonArgs` (#106842)
4bc846c101 : [FSDP] Ignore buffer type casting in ignored modules (#106766)
97ce979e5d : [quant][pt2e] Add reference representation for quantized conv2d (#105784)
02e4415315 : Attempt to pin opencv-python version (#106900)
787d5259fa : Include fused nodes' debug_str in FusedSchedulerNode::debug_str_extra (#106356)
77acb04a00 : [dynamo] Readability - Rename name to get_frame_name (#106880)
8aca724312 : [dynamo] use cache size to detect recompilation (#106878)
c2ddb71aba : Add F8 BLAS data types conversion (#106843)
0b88007540 : Adding release compatibility matrix for release 2.1 (#106891)
861ae39938 : [aarch64] Add PT Docker build image for aarch64 (#106881)
7dfab082be : Reordering tests experiment (#106347)
a44c072c89 : Make InternalModel and Resnet work with rexportable flow (#106676)
8ea13a955a : Avoid subtracting by sys.maxsize when something is bounded below by -sys.maxsize - 1 (#106716)
1b32ac3cab : Update torchbench.txt (#106761)
47014883a7 : Remove unused _add_runtime_assertions (#106759)
e1a1780626 : [quant][pt2e] Move annotate functions in XNNPACKQuantizer to utils (#106642)
467a2e63f0 : [pt2] add Python meta for `triangular_solve` (#106682)
318fcc5eb9 : Change dropout of device Privateuse1 to fused kernel (#106774)
6f036c9637 : [FSDP][Easy] `zeros` -> `empty` for immediately freed tensors (#106857)
a0c0666fca : Add some const to `IndexKernel.cu` (#106809)
387e3b04fa : Reenable `torch._int_mm` testing on newer CUDAs (#106840)
046d6178c5 : [BE] Add optional `t` param to `CuBlasLtMatrixLayout` (#106841)
fe594ab323 : Revert "[core][pruning][feature] cuSPARSELt kernels and ops (#102133)"
387f1ab104 : [inductor] Switch inductor_prims._bucketize over to aten.bucketize (#106658)
40a15b50a8 : Enable mypy checking in compile_fx.py (#105830)
088e316659 : add xpu support for foreach kernels (#106021)
dc7ec4c843 : Revert "conv-bn folding in low precision (#106576)"
0ce103a0f8 : Revert "inductor: remove conv_bn folding from pre_grad pass (#106686)"
c876afea2d : [vision hash update] update the pinned vision hash (#106832)
6691413145 : export torch/csrc/dynamo/*.h (#106757)
e1e6bbd889 : Update opset version warning text (#106830)
cce2c52b0b : [pt2] support vmap (#101707)
c379d6283a : Don't suppress ModuleNotFoundError if the failure is for an unrelated module (#106807)
69ecad6f2b : [quant][pt2e] Add reference representation for quantize_per_channel and dequantize_per_channel (#105783)
c14cf312c9 : expandable_segments fix possible assert (#106818)
44448754c1 : [CI] Fix sccaching of nvcc builds (#106811)
9e35df4adc : [pytorch][ao] force weight observer/fake_quant to be on the same device as the weight tensor (#106755)
cbcd9083be : [DCP] Modify tensor saving logic in DCP (#106415)
c913f3857f : Remove dynamo+nvfuser (#105789)
32d8de23d4 : Enable mypy check for torch/_inductor/codegen/common.py (#106199)
2a138d7f1d : [ONNX] Turn on batch norm related unittest (#105769)
3c52c6fd53 : [pytorch] Disable CUDA sync events by default (#106723)
d4bc27191a : [exir] Update exir.pass_base to use export.pass_base (#106647)
2764ead429 : Add missing quantize per tensor vulkan backend function (#106641)
3a300ed84e : [export] refactor and add same_signature flag to dynamo.export (#106569)
bd3b6f1ab4 : add a debug api to extract cache entry from code (#106673)
bc88028e8e : Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
891bb259f8 : Revert "Remove dynamo+nvfuser (#105789)"
16b6873885 : [custom_ops] extend impl_abstract to work with existing torch.library ops (#106088)
cebff39fad : [custom_ops] make custom_ops.impl work on existing operators (#106076)
60a4ac3068 : [custom_ops] Block overload names (#106075)
6030151d37 : Remove dynamo+nvfuser (#105789)
ad22f0ffb4 : [core][pruning][feature] cuSPARSELt kernels and ops (#102133)
03c9321722 : [CUDA][CUDA Graphs] Fix potential race with autograd thread during a graph capture 2 (#106570)
cc21fa75a3 : Enable dynamic shapes of torch.nn.Parameter (#105855)
0d57e87000 : Fix test_div in caffe2/caffe2/python:hypothesis_test (#106694)
cdfd0ea162 : [MPS] Introduce torch.mps.Event() APIs (#102121)
5f551133dc : [NCCL][AVOID_RECORD_STREAMS] Initialize `stashed_for_allocator_safety_` in `endCoalescing` if `TORCH_NCCL_AVOID_RECORD_STREAMS=1` (#106166)
9e4e0ecdd9 : Add 0-dim `Tensor` overload to `_foreach_mul` (#106677)
90c264c276 : sd flaky on cpu skip (#106726)
2a16457976 : inductor: remove conv_bn folding from pre_grad pass (#106686)
8ef7512dc4 : create API jit::Module::deepcopy(device) (#106521)
a25eee1d77 : _force_original_view_tracking to work as both context manager and function (#106706)
3cda19c10a : [inductor] Fix xpasses being impossible (#106631)
ab6efb1649 : [pt2] Add reference implementations of torch.{stft,istft} (#106400)
d4d090e2da : [FakeTensor] Workaround FFT ops with incorrect meta strides (#106319)
66d90e8054 : [inductor][fx passes] add tensor size limit for group fusion and enable batch fusion (#106627)
45c03b1ad4 : Better dynamo dict support via SetVariable keys (#106559)
4639ceb3fd : [BE] Use convenience function (#106709)
e02a3d4ad5 : [BE] Use nested namespace in `ATen/cuda` (#106708)
1317dbf176 : Reland "Add nn.CircularPad{*}d for consistency + fix no_batch_dim support (#106148)" (#106632)
0208574db9 : [NAdam] Add capturable API and tests + fix differentiable (#106615)
3dd8cb12b5 : [ROCM] enable test_aten cpp tests (#106476)
3a07dfde48 : Fix lifetime of JITException binding (#106401)
af78e139a8 : [functorch] fix dynamo support for functorch.grad (#106610)
2d4b1ae434 : [Fix Bug] Cannot assign index like x[[1,2], :] = 2 when torch.use_deterministic_algorithms(True) to main (#105833)
070eb88a96 : Handle `Rational` divisors in `FloorDiv`. (#106644)
33e70e34a3 : More readable Z3 expressions printer. (#106643)
26846546e8 : export tools/autograd to torchgen package (#106663)
273ad1dd23 : Fix typo in jit_opt_limit.h (#106684)
1cc002621d : [xla hash update] update the pinned xla hash (#106695)
416bf4e3e7 : [Inductor][FX passes] Pre grad batch linear LHS fusion (#106497)
e35cb480f4 : [DCP][Test]Remove broken 2d checkpoint test (#106640)
aa1b2f16c5 : fix `upsample_nearest` decompositions for `uint8` tensors (#106675)
c21df02ec0 : conv-bn folding in low precision (#106576)
7215007f01 : [pt2] add Python meta for `polygamma` (#106681)
12041d8e1f : Use default dispatch table for `tensordot.out` (#106669)
f694bcc9a8 : [pt2] add meta for `_cdist_backward` (#106680)
05e1a50723 : [pt2] remove meta skips for `aminmax`, decomp exists (#106670)
26e98040da : Improve AOTAutograd tests to do something when inputs don't require grad (#106558)
8b8f576f56 : Minor update to ROCm triton commit pin (#106616)
68cb854d73 : Fix CPUFallback Mechanism on TensorList Type (#105209)
19621a73c0 : [pt2] add metas for `grid_sampler_3d` ops (#106261)
bd34f85fe5 : [pt2] meta for `searchsorted.Scalar`, tests, and out support (#106283)
0a4e5e07db : [inductor][easy] log the number of fused nodes for each graph (#106653)
6540f92507 : Compile AOTInductor in Meta prod env (#106636)
7e55dd7a15 : Make NCCL default logging more friendly. (#105695)
1819fe1324 : Revert "Extend Inductor to support the third-party backend (#100706)" (#106652)
dc22b4fdb1 : [vision hash update] update the pinned vision hash (#106654)
4be6b6b673 : Add quantization support to reshape and size for the ONNX exporter (#106629)
5eb429ac30 : Add test support for dynamic shapes for torch.onnx.dynamo_export (#106495)
aa7824867f : [ONNX] Remove legacy diagnostic printing (#106498)
136bda2568 : fix issue with checking counters in binary folding (#106575)
02b9119105 : [Pytorch][Vulkan] aten::flip (#106628)
c287262b02 : enable missing-prototypes warnings on MPS backend (#105831)
e190afb829 : [Dynamo] Allow users to patch custom builtin functions and inline them (#106595)
e61558b5fe : Test fixes (#106586)
786977c647 : [easy] Add reset_parameters for nn.PRelu (#106507)
45f6ef2597 : Expose intended public constraints. Fixes #106386 (#106458)
578969ca61 : skip maml (#106471)
a01e795a6d : [Compiled Autograd] Fix bug with multithreading check (#106621)
b782beb18e : [ONNX] Expose OnnxRegistry publicly (#106140)
5dcc85d663 : [dynamo, logging] add default pt2 logging group (#106417)
2156f0434c : [quant][pt2e] Add reference representation for quantized adaptive_avg_pool2d (#105709)
3e6da46aff : err on dot product for tensors of different sizes (#106572)
df8abaaf5f : [Dynamo] Revert 'Enable torch._dynamo.config.suppress_errors by default' (#106562)
d67f4d4e9f : Revert "[DCP][Test]Remove broken 2d checkpoint test (#106367)"
ae4b2d272f : Fix the Test of duplicate registration on generator (#106536)
8e76c01043 : Fix the api of privateuse1 in comment (#106537)
91afefb55b : Fix some fake mode confusion between inner/outer fake mode in export (#106515)
5b13c779d4 : [AOTInductor] Remove call to aot_autograd when receiving ExportedProgram (#105977)
63d45275f4 : is causal hints for transformer (#106143)
e421edf377 : Add utility to test if autograd was registered correctly (#106561)
5a9e82fa02 : let torch.device be overrideable by TorchFunctionMode (#106514)
d4d086ce7b : [MPS] Fix Clamp with strided outputs/inputs (#97858)
9e301949ec : [quant][pt2e] Add reference representation for quantized max_pool2d (#105708)
b2d3a2f433 : [inductor] Remove ReinterpretView copy_ for AOT Inductor outputs (#106564)
a899333ffc : fix: nll_loss batch rule with negative ignore_idx (#106118)
f8817d8ac8 : Remove deepcopy override from ExportedProgram (#106578)
0ae7afd14e : [MPS] Adding renorm implementation (#106059)
aaa989c244 : inductor: support linear fusion when multi linear using same input (#106300)
3c7331742a : test_fused_sdp_choice in test_transformers.py fix (#106587)
d4271b16ca : [vision hash update] update the pinned vision hash (#106588)
8fe5fa8613 : Update mobile build docker image to pytorch-linux-jammy-py3-clang12-asan (#106582)
b283e93158 : Add missing hpu check to is_any_autocast_enabled (#106539)
93f538db35 : Fix nullable-to-nonnull-conversion warnings (#106232)
c0b8b7b90c : [inductor] Enable mypy checking in torch/_inductor/metrics.py (#105793)
ce608712cb : [inductor] don't cache non-static content (#106502)
60121e391b : [caffe2] Replace `CAFFE_` prefixes in `static_tracepoint.h` macros with `TORCH_` (#106380)
1642daeaaa : [inductor] codegen dynamic shapes tests: reset inductor metrics (#106481)
424dc238f4 : Fix split module interaction with dead code (#104554)
239578beff : [ROCm] Enable a few bfloat16 unit tests (#105177)
236eda4d51 : remove jit from torchbench (#106071)
b03505eca8 : update expected pass for torchbench dynamic (#106573)
c9eb95cca4 : Update XLA dynamo backend name (#106489)
a8e3bd97cf : [export] cleanup pass base. [1/n] (#106480)
c27e15359a : use no_grad() consistently for testing transformer trace construction (#106523)
3200f63ee6 : Make mocked function return the proper result structure (tuple for native MHA for attn result and attn weights) (#106526)
d1a99a083f : Reland Simplify handle indexing (#105006) (#106357)
578d9fee42 : [DTensor][EZ] op schema comparison so that no redistribute is called (#106158)
4c46ea583f : [Export] Support re-exportability (#106531)
3db255020b : Clarify the clarification (#106358)
2f281949a5 : [dynamo] resolve InlinedClosureVariable in InstructionTranslator stack (#106491)
6268ab2c2d : torchbench pin upd: hf auth token, clip, whisper, llamav2, sd (#106009)
0dc7f6df9d : [inductor] Make AOT CPU Inductor work in fbcode (#106225)
1f734e03df : [pt2] add metas for `mode` ops (#106273)
70469e6f04 : [pt2] add metas for `median` ops (#106272)
57fba6fd86 : [FSDP][9/N] Introduce `CustomPolicy` (#104986)
15953fdf35 : [FSDP][8/N] Replace `_FSDPPolicy.policy` with `_Policy._run_policy` (#104969)
697893568d : Improve error message when export encounters non-local input (#106403)
83e36fe127 : Fix vscode test discovery (#106490)
3d165dc3f3 : Upgrade expecttest to 0.1.6 (#106314)
bcc0f4bcab : Move ASAN to clang12 and Ubuntu-22.04 (Jammy) (#106355)
97396cdbb2 : Fix undefined behavior detected by clang-12 (#106354)
6e2a2849f0 : [Typo]Fix a typo for index. (#106447)
a6f7dd4707 : Catch cuda driver shutdown error in NCCLWatchdog (#106503)
c9c2b14c53 : Fix copy_ broadcast behavior on mps (#105617)
1c2918647a : Revert PR #106442 (#106492)
77e369b363 : Run minification for TorchDynamo benchmark models that fail evaluation (#106201)
4b1872f1e1 : [vision hash update] update the pinned vision hash (#106500)
e1a0543dac : [logs] Share same formatter between trace_source and other Dynamo loggers (#106493)
3322bfb66e : [jit] test_complexity.py - don't set default dtype in global scope (#106486)
14266b4955 : make sure log tests are run in non-verbose mode (#106496)
410bc558e6 : Assert that `args` is of tuple type. (#106352)
fd6e052a8a : Some minor improvements to FakeTensor testing (#106311)
60237ccbdf : fix bf16 constant accuracy (#105827)
f82e6ff29e : add channel last 3d support for batch_norm on CPU (#97774)
719c493f0b : MemoryViz: print stream 0 if other streams exist (#106483)
6f07c57416 : MemoryViz.js: format, move style (#106482)
820e68b58a : [quant][pt2e] Add reference representation for quantized add - relu (#105707)
ba387b8830 : [easy][be] operator_config -> quantization_config renaming (#106479)
f533791cd0 : [SDPA] Mirror c++ implementation in FlashAttention meta func (#106477)
b3c29cd1ec : Remove unused workflow.py (#106340)
cebc11ae8f : Register ONNX exporter under PT2 logging (#105989)
640a96dfbb : [FSDP][Easy] Allow `ModuleWrapPolicy` to take `Iterable` (#104999)
031ce0fadc : [FSDP][7/N] Add warning about frozen params (#104967)
bdcc454be4 : [dynamo] Add missing fields for THPPyInterpreterFrame. (#103227)
a8c52863dd : [FSDP][6/N] Check valid param freezing for `ModuleWrapPolicy` (#104427)
aec8418bd9 : Pin conda to 23.5.2 for Docker builds (#106473)
1f1dfa9be9 : Fix grad higher order handling TupleVariable (#106425)
f998869160 : AOTInductor compile in prod env (#106442)
fb6652b56e : [profiler] add profiler parsing support for custom device. (#106142)
6339f57fae : Update export/export-aot-inductor benchmark code (#106323)
3143d81f6c : Add support for edge dialect ops in exir/serde (#106371)
cc38d40cec : Document `f` parameter of `torch.save` (#106248)
1534af2a5c : Add type annotations to torch/__init__.py (#106214)
bd84651e19 : Replace `sympy.solve` with a new simplified one. (#105877)
bfed2da2e4 : [Quant][PT2E] Re-enable test case of conv add/add_relu recipe for x86inductorquantizer (#105638)
7e47343d64 : [BE] document more of FSDP checkpointing logic with a sprinkle of cleaning (#106069)
ae1c0f42a3 : update tf32 thresholds for H100 (#105879)
7820bd8404 : Disable TV if Z3 is not found. (#106399)
57f2a8d3a8 : freezing w aot (#105497)
63b7be5a6f : Use ProcessPoolExecutor in the ufmt adapter (#106123)
e7b2430818 : add pruning method: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (#95689)
aa0b4dac46 : Check that mypy is installed (#106212)
dfcfd5cedb : Revert "Add nn.CircularPad{*}d for consistency + fix no_batch_dim support (#106148)"
b37a50afda : [ROCm] fix ROCm 5.5 nightly build after hipblas change (#106438)
f81f9093ec : [core][pruning][feature] cuSPARSELt build integration (#103700)
d83b887f2a : Revert "Add error checking for padding modules (#106147)"
fdd4b3aaa8 : Revert "faketensor: prevent deepcopy from cloning FakeTensorMode (#104476)"
d528a137e0 : [quant][pt2e][quantizer] Support set_module_type in XNNPACKQuantizer (#106094)
936333fd5f : Fix the Requirement of CMake Version (#106254)
8ee0b17990 : Fix reference cycle in our test suite (#106328)
30442c039c : fix torch.norm for custom device (#106198)
05bd24bb35 : Extend Inductor to support the third-party backend (#100706)
850ad54139 : correct spelling mistake (#106309)
04f20bb285 : Use `isinstance(foreach_arg.type, ListType)` for correctness (#106428)
92cac6bf32 : InductorCpp: Fix "call to constructor is ambiguous" error (#106418)
17a3141696 : Support is_mtia() (#106396)
4c3e137157 : [vision hash update] update the pinned vision hash (#106434)
d1a2aa1909 : [MPS] Fix MPS clamp issue with different dtypes between input and min/max tensors (#105747)
af37608276 : Remove duplicate ops tests in test_quantized_op.py (#106398)
dd12c4c2cb : Fix wrong class name in comments (#106419)
c29f8ccc02 : [inductor][easy] Improved warning message for missing OMP on mac (#106241)
e7115dbecf : [pytorch] Suppress C4624 warnings on Windows (#106348)
92a22a8098 : [quant][pt2e][quantizer] Support set_module_name in XNNPACKQuantizer (#106087)
9ba0558d48 : Add sequence_nr to aot_autograd to map forward ops to their corresponding backward ops (#103129)
0cba33e176 : [DTensor]Minor Docstring Update (#106250)
5ebb18c5c6 : exclude internal folders from lint (#106291)
76163a56c0 : Refactor stack handling to always use TracingContext to populate real stack on exception (#106277)
753991b8c5 : aot_inductor: properly split code gen compilation command (#105367)
5e3aca6c5c : [BE] Input check for torch.nn.MultiheadAttention (#106363)
ef0576f203 : [Benchmarks] Updated CSVs for improvement to visformer_small (#106414)
40184b28eb : [ROCm] enabling miopen_batch_norm lowering in inductor (#105740)
7a3503dfd8 : Add `_foreach_sign` (#106343)
b46a89bcfb : Remove skipIfROCm from test_cuda_repro.py (#106416)
3ce7abe111 : Fix whenRegisteringAutogradKernelWithCatchAllKernel_thenCanCallAutogradKernel (#106023)
97e5055a69 : Add cumprod support for device mps (#104688)
fadd0859ca : Expose module method in ExportedProgram (#105575)
60e65a70e5 : [ROCm] enable cudagraph inductor UTs on ROCm (#105662)
506b55fc29 : [FSDP][Easy] Move `_FSDPState` attrs to avoid comment confusion (#106392)
5c3aae8385 : [ONNX] Support type promoting sym number representing scalar output (#106178)
449f481de0 : [memory snaphots] record for all devices (#106346)
d2a9b256f0 : [DCP][Test]Remove broken 2d checkpoint test (#106367)
4b2c6282e0 : Modify signature for tensor.tile in doc (#106295)
05b2a6c8db : [ONNX] Do not run 'deduplicate_initializers' when 'keep_initializers_as_inputs' is True (#96320)
cfa4edcde0 : [SDPA] Update dispatch checks to catch last_dim_stride != 1. Also update mask padding logic (#106102)
1a6f1d816d : [Doc] Add proj_size < hidden_size in LSTM (#106364)
6d2162e644 : Remove fake_mode arg from torch._dynamo.export API (#106345)
596491f1f5 : Propagate dynamic int on `__setitem__`. (#105923)
cf012c43f4 : Do not call decref if python runtime is already dead (#106334)
6d86a255e6 : Revert "Add scalar conversion using avx instructions for half (#102140)"
aaaafa1bcf : [Export] remove unused flags in export (#106336)
e35950cd0d : [caffe2] Move CAFFE SDT macros' definitions to `c10/util/` (#105856)
4dc063821d : [inductor] Fix lowerings that create unexpected aliases (#105173)
e514386315 : Normalize builtin types to dtypes. (#106074)
87d2536971 : Add nn.CircularPad{*}d for consistency + fix no_batch_dim support (#106148)
0547b6279d : Add error checking for padding modules (#106147)
c9be60cd0e : Add error inputs to ModuleInfo (mirroring OpInfo) (#106325)
16df54239f : remove tensorpipe code which forgot to delete (#106301)
f23d755e1f : [pt2] add meta for `ormqr` (#106278)
780b90ba6c : [opinfo] Fix logic in sample_inputs_linspace (#106353)
59d0dea90f : Only make a shallow copy when loading optimizer state_dict (#106082)
ceea08a986 : [vision hash update] update the pinned vision hash (#106350)
aa2cee44b7 : [Pytorch][Vulkan] Reuse broadcast checks (#105960)
f456c504b9 : Update kineto submodule to 465ff4cd7 (#106293)
f11090288c : create benchmark example tensors with correct sizes (#106238)
03e85be9b0 : [Inductor][FX passes] New group/batch fusion pattern searching algorithm + group mm fusion + preserve memory layout (#106279)
a2e8ac1d34 : Update Anaconda download link (#106335)
e075f91dcc : Extend workflow sync to more workflow (#106331)
55f9359d36 : fix sdpa math accuracy issue when scale is negative (#105202)
788c825837 : Higher order operator util for raising if inputs require grads (#106078)
57d0bec306 : aot_inductor_interface: surface previously eaten error messages (#105366)
186352a625 : [inductor] Make autotune_process.py pass mypy (#105791)
018ac76362 : fix x.numpy() breaks in #106211 (#106327)
2f35715f0d : [onnx] Fix output shape mismatch issue of max_pool (#106270)
2138aaa978 : [FSDP] Validate buffer dtype in pre-forward hook for FSDP mixed precision tests (#106231)
23f47f746b : [optim][rprop] Minimize intermediates=1 for foreach to save memory (#105193)
4eeda6616c : Correct URL Link for torchDynamo (#105903)
5379b5f927 : [ROCm] use hipblas instead of rocblas (#105881)
c9c66819a1 : Move more TCPStore state from BackgroundThread to TCPStoreMasterDaemon as it won't be used by the libuv backend. (#105674)
57a47ed905 : [ONNX] Log instead of crash when 'tabulate' is not installed (#106228)
196372415a : Use nodejs-16 for docs builds (#106312)
1b757fb60b : Enable distributed cpp test for rocm (#106132)
4a549dd57a : AOTAutograd: correctness fix when tracing custom autograd functions that alias inputs (#102992)
a0b6c0d1da : Remove @penguinwu from distributed codeowners (#106322)
0010a8f753 : Deallocate constant when it is no longer needed in constant folding (#106216)
cdc9127733 : [ONNX] Perform Shape inference on added "Cast" node (#106093)
66c537429e : [export] Move attrs to properties and add BC decorator (#106170)
0dc251323d : `torch::nn::functional::batch_norm()`: add a shape check of input tensor (#105930)
1cebfef8a4 : sm90 efficient attention test fixes (#105978)
d8e5f2aa6d : Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
50e3f9cbbb : [ROCm] HIP stream priority fix post #101956 (#106157)
2b427ae3a7 : Revert "Reland "Add forward mode AD to out-place foreach functions (#102409) (#106043)"
c5b9dc1f40 : Optimize stack frame inspection in `torch._custom_op.impl:CustomOp._register_impl` (#105940)
c54afea6ee : faketensor: prevent deepcopy from cloning FakeTensorMode (#104476)
d3b508d068 : Fix typo which suppresses user exception reporting (#106289)
0af3203c72 : fix torchrun script for custom device (#105443)
0a0abd0ecf : Revert "Update kineto submodule to 465ff4cd7 (#106154)"
3c70d4bda7 : UFMT torch/jit/frontend.py, manually fix mypy suppression (#106268)
af88e6d09d : UFMT torch/jit/_script.py, manually move mypy suppressions (#106267)
b581e03850 : Apply UFMT to torch/distributions/distribution.py, manually resolve fstrings (#106266)
eab3b2637a : only collect fx node for user_visible_outputs when doing output stride conversion (#106194)
888bdddb1e : Add scalar conversion using avx instructions for half (#102140)
21fd2bc32e : Allow setting TORCH_LINALG_PREFER_CUSOLVER=1 to prefer cusolver as linear algebra library globally (#106226)
858ca65c8a : [vision hash update] update the pinned vision hash (#106262)
8549abc347 : Grab bag of DTensor enablement stuff (Enable whole graph capture for DTensor) (#105787)
3bf922a6ce : Apply UFMT to low traffic torch modules (#106249)
a4ebc61f15 : Ignore UFMT revs in blame (#106246)
d2aa3f5fa9 : [GHF][mergebot] record ghstack dependencies in the commit message (#105251)
0ee3b84021 : [pt2] add meta for `cholesky_inverse` (#106120)
80755884be : [pt2] add meta for `cholesky` (#106115)
329a9a90c0 : fix some typos (#106253)
3ecd05d9f3 : Fix FakeTensor issues with copy_ between devices (#106172)
7047d132fd : add context support for custom device (#105056)
1da4115702 : Make _dynamo.export return a NamedTuple (#106062)
df50f91571 : Support fx_pytree in dynamo (#105574)
f160a972aa : [inductor][easy] "unhandled ValueRange op" - log at debug level (#106215)
e6ec0efaf8 : Apply UFMT to all non test/torch files (#106205)
1163800d0f : Upgraded triton pin to allow PTX ISA 8.2 (#106195)
7b14a14e27 : [Inductor] Optimize finding users of buffers for mutation (#105882)
9b94dcf2ac : [Compiled Autograd] Remove duplicate code from double-merge (#106233)
c11412b4a8 : [DDP] Support optim in backward after DDP init (#105995)
5d4e170d58 : [Optim in backward] API to retrieve in-backward optimizers (#105991)
86237dc59b : fix typo in serialization.md (#106191)
bd669d52d2 : Print env var name instead of flag name for commandline repros (#106223)
52d4b1ae31 : [BE]: Enable ruff rules PIE807 and PIE810 (#106218)
f3d165bf61 : [fake_tensor] Don't run fallback for fbgemm ops (#106210)
505dd319ef : [caffe2] Don't evaluate message in CAFFE_ENFORCE_THAT unless the check fails (#106145)
26d29d9639 : [Compiled Autograd] Support CopySlices and CopyBackwards (#105809)
099345f1e5 : [Compiled Autograd] Handle aten.sym_size/aten.sym_stride (#105814)
6da8825f20 : [Pytorch][Vulkan] sum.dim_IntList with keepdim (#106159)
2ec7cd2db2 : [CheckpointWrapper] Test for kwarg propagation, remove checkpoint_fn_arg support (#102679)
4d3ea5df65 : Restructure torch.compile docs (#105376)
0cf918947d : [inductor] Support using the 'stream' param in AOT mode (#105589)
035124774a : Enable registering fallthroughs to (op, dk) from torch.library (#106086)
ad3af0aead : Change phrasing on optim state hook docs (#106209)
800287fb56 : [FSDP] Optimize away intermediate `div_` for HSDP (#106034)
c7b122b2b5 : [FSDP] Add HSDP parity unit test (#106131)
3841be80de : [FSDP] Improve `test_fsdp_hybrid_shard_basic_setup` (#106072)
b3e24c53eb : use performance-unnecessary-value-param in clang-tidy (#102615)
8f4d8b3773 : More descriptive graph diagram names in svg (#106146)
5237ed55e6 : [export] allow register dataclass as pytree node (#106160)
37cfe944bb : add support for mutated params (#106098)
db2239706e : Fix TORCH_COMPILE_DEBUG incompatibility with aot inductor (#106169)
76a2ec49d7 : [Dynamo] Ignore no-op tensor assignment (#106092)
7c8efc9049 : [PT][FSDP] Combine _utils.py into _common_utils.py [2/2] (#106181)
e5bd63a7f3 : run freezing in nightly (#106097)
c2e948edca : Fix lint in test/inductor/test_compiled_autograd.py
57f23ca58b : Bot message changes for -f and rebase (#106150)
f15b6ec6d6 : [Compiled Autograd] Add eager autograd tests (#105808)
2e02dfae9a : [Compiled Autograd] Fix handling of undefined gradients in hooks (#105813)
ac6d8fb16e : [Compiled Autograd] Add eager autograd tests (#105808)
23a1eda890 : [Compiled Autograd] Inplace updates of gradients (#105713)
7b73b1e8a7 : Fixed test_get_classifications_pending_unstable (#106203)
bb0b283e5a : Do not force -Werror on Pooling.cpp
cb6c3cbc91 : inductor: enable weight prepack for LSTM (#103071)
dad65d09f2 : Update custom op API (#105947)
6d553a42fe : Move most custom op related tests to test_custom_ops.py (#106036)
db365e1fb5 : Create test/test_custom_ops.py, move test_custom_op_testing to it (#106035)
0b8fbfe9de : automatic_dynamic_shapes is on by default (#106188)
2636751fb9 : [C10d] Add skeleton of LibUV backend. (#105672)
dffa4e14b9 : Add Optimizer state_dict hooks (#105953)
4fe407ad73 : Add details about ic, broken, flaky, and unstable checks to merge records (#106162)
6366ed6edd : inductor: using dummy input to pack the linear weight for bfloat16 dynamic shape path (#106122)
359aa17125 : [inductor] realize boundaries in bucketize() lowering (#106107)
32175d794a : No need to wait for pending unstable jobs when merging (#106095)
3b5fb7c0d4 : Support regex flaky rules in trymerge (#106103)
eebfb921c6 : [ONNX] Support complex in FX exporter (#100554)
3e5a52cedd : [memory snapshot] track context for segments (#106113)
45b564766d : [memory snapshots] removed chained history (#106079)
73a8544d8a : [vision hash update] update the pinned vision hash (#106182)
32844be3cf : [JIT] Fix getting type for subscript assignments. (#106041)
efeb46e507 : Update kineto submodule to 465ff4cd7 (#106154)
01069ad4be : sparse.mm.backward: fix for non-contiguous grad values on CPU (#106127)
93b2036bef : Revert "[quant][pt2e] store scale/zero_point as tensor attributes to support serialization (#105894)"
cb14ff294b : [inductor] Pass to remove pointless clones (#105994)
e31855d0d6 : [pytorch profiler] fix profiler test for windows (#106156)
88c400e03b : Add @penguinwu to distributed codeowners (#105945)
ce63389246 : Allow graph breaks in inductor opinfo tests (#105480)
ec0ffac33b : [BE] Document optimizer state_dict better, use example (#105958)
723bc136a1 : Add context for warning about batch_first (#106139)
7b9d250f06 : Change _dynamo.export to be export(f)(*args, **kwargs) (#106109)
5cbd3fc412 : [Inductor] Fuse non-foreach ops with foreach ops without iterating over all subnodes (#106008)
d4136c9088 : Add pull request target to bc lint (#106065)
ca7ece9b50 : [easy] improve hint on error message in nn.Module.load_state_dict (#106042)
70bc1b0f48 : Tag functions to core IR in native_functions.yaml (#105849)
d960664842 : Lower batch on cait_m36_384 (#106091)
27ece5fad4 : [Easy] remove unneeded sort (#106090)
10f55a2a94 : [export] Handle the case for no placeholders during in runtime assertion pass. (#106134)
9ff36c2b3f : [Pytorch][Vulkan] sum.dim_IntList (#105612)
78fffe8906 : Bump certifi from 2023.5.7 to 2023.7.22 in /tools/build/bazel (#105983)
487ebcac3b : Clean up unused MHA code to avoid confusion (#105956)
1a59be2c9e : [BE] Use `C10_CLANG_DIAGNOSTIC` macros (#106084)
977df45a0f : [inductor] Call render() once for templates (#105987)
377f306b4c : [inductor] Add has_mkl check (#106049)
6f1042c049 : Make sure that little endian is default case when __BYTE_ORDER__ is not defined (#104249)
7c97c943fb : inductor: always convert weight to channels_last for cpu conv (#105517)
a1d0db1c60 : [pytorch] Fix MSVC unexpected tokens following preprocessor directive (#105922)
b435bff53a : [PyTorch] Add tests for empty tensors w/storage null data_ptr (#101426)
5d8596292b : fix atomic add for bf16/fp16 (#104592)
707aadeedd : Track global Numpy variables as side-effect. (#105959)
b812e35a75 : [pt2] add meta for `argsort.stable`, use `sort` samples in `OpInfo` (#106025)
edebdaf182 : Change _dynamo.explain to be explain(f)(*args, **kwargs) (#106066)
e773f28ee3 : Reland "Add forward mode AD to out-place foreach functions (#102409) (#106043)
49e047e0f9 : Delete dead summarize_dim_constraints (#106053)
076781ba9b : Revert "fix building errors on FreeBSD (#105897)"
c5f6c2de15 : Revert "update kineto submodule to a94f97b (#105866)"
457d01bcfd : [Compiled Autograd] Remove TORCH_API from generated autograd nodes (#105286)
952021934f : inductor: legalize fp16 (#100857)
2d41fa9d38 : Revise err msgs for weight param of Multimarginloss (#106047)
fd4f8e194e : [dashboard] Replace cpp_wrapper with aot_inductor on the perf dashboard (#106077)
f026b32008 : [device_mesh][BE] reduce_scatter fallback to funcol and remove from DM (#105642)
2fa063e1e0 : [device_mesh][BE] remove allgather from DM (#105614)
4a49f1f46e : [device mesh][BE] remove allreduce from DM (#105605)
06dd850dd5 : Simplify check (#106044)
6847c965f5 : Turn on capture_dynamic_output_shape_ops/capture_scalar_outputs by default for export (#105962)
f70844bec7 : Enable UFMT on a bunch of low traffic Python files outside of main files (#106052)
5a114f72bf : [Compiled Autograd] Move to torch::dynamo::autograd namespace (#105854)
f20ead0aea : [pt2][inductor] guard for __package__ is None (#106056)
3959695fbd : Fix typo ; Update grad_mode.py (#106045)
c05eb77f09 : Increase ignorable failures limit (#105998)
5a2b9ca754 : [ONNX] Limit number of elements to display for list/tuple/dict in diagnostics (#106048)
64843993a4 : [mypy] autotune_process.py (#105732)
cb9a4fbbf2 : [BE] Improve test_transformers test structure (#105938)
646fa36875 : Add const reference in opportunities detected by clang-tidy (#105931)
b69e5302b5 : add skip if sm < 80 check (#105888)
38861ba39f : Fixes netName assignment for NCCL Config (#105776)
43b3632215 : [Composable] Add hybrid shard AC compile test (#105207)
4137d6e499 : [Composable FSDP] Enable HSDP (#105206)
dc19f8a6b5 : Fix cuSparse CSR SPMM for using nullptr in csrRowOffsets (#105957)
3ca71ed735 : [quant][pt2e] store scale/zero_point as tensor attributes to support serialization (#105894)
841b4acf1e : [FSDP][Easy] Rename to `_comm_hook`, `_comm_hook_state` (#106033)
035704e88d : [FSDP][Easy] Move post-bwd hook logging to own func (#106032)
f725e6374d : doc: fix fake quantize per channel doc (#105955)
60ad46f49d : [ONNX] Clean up outdated skip ort < 1.15 decorator in tests (#105951)
9a1cdcb8a0 : Format: fixing multiple string concatenation in single line (#106013)
3a77f9aaaf : [quant][api] Move torch.ao.quantization.pt2e.quantizer to torch.ao.quantization.quantizer (#105885)
70b0f1b248 : fix some typos (#106018)
21ede4547a : remove duplicated code in optimizer (#106022)
716f37cef8 : If we can't statically prove 32-bit indexing OK, only add guard if hint exists (#106004)
c0c208516b : [Doc] Add `Tensor.Shape` (#104750)
28a4fc8d8a : Fix some typos (#105869)
2dbadd1eae : [export] Remove experimental runtime assertion configs from export API. (#105043)
8af25cfc24 : update kineto submodule to a94f97b (#105866)
c4b7311fc2 : Meff Attn Bias (#104310)
45322fafd6 : [ONNX] Add comment on test_view_dynamic_zero_dim (#105950)
72f2c87a5a : [foreach] Set `SavedVariable.is_output` to `true` for `grad_fn->result_` (#105504)
9d2e15882e : Add torch.utils to the docs page, remove dead code and fix docstrings (#105142)
6b6702f506 : Enhance `no_grad`-context FSDP backward handling (#105374)
66b73b08df : Allow disabling bias for Transformer (#101687)
76fb72e24a : [profiler] Fix profiling shapes with PT2 + lists of dynamic shapes (#105893)
6938494d03 : [jit] move get_annotations out of infer_concrete_type_builder (#105197)
22f93852a2 : Fix error message about enabling InferenceMode in Python (#105948)
c099b80073 : [FSDP] Add `record_function` for explicit prefetching (#105985)
a9a3c45649 : Revert "Simplify handle indexing (#105006)" (#105984)
854fe470cd : fix check issue for replace_params_with_constants (#105909)
0616952d13 : Merge and improve torch optim optimizer type stubs (#102593)
1b9faf22ef : [vision hash update] update the pinned vision hash (#105988)
46b74ab9cf : Bump pygments from 2.12.0 to 2.15.0 in /.github/requirements (#105669)
d767cff7c7 : [quant][fx] Fix docs for prepare_fx/prepare_qat_fx (#105979)
0c65a2d58f : [pt2] add meta for `_adaptive_avg_pool3d_backward` (#105816)
36ae359655 : Update matmul decomp to match eager (#105850)
9bde7f4e27 : Fix the docs for cosine_similarity (#104772)
fff4a9db8a : Fuse ops in eager cosine_similarity while keeping the stability and the gradients (#104771)
a61a0fe490 : test_linalg: triangular_solve - make well_conditioned well conditioned (#105919)
aabdd2b7a1 : Add support for tensor.tolist() for static sized int tensors (#105976)
5c5eece6d8 : fix building errors on FreeBSD (#105897)
afd621ddde : inductor: fix CSE issue when have symbolic shape input at the freezing path (#105651)
9c1802f8e3 : inductor: using binary folding path to do conv+bn folding (#105650)
920b446da9 : dynamo: support disable_saved_tensors_hooks (#104869)
7b31732a6f : Delete unused experimental export (#105873)
03e2ca9d9c : [Composable] Add more sharding strategies to runtime test (#105205)
a326f5621e : composable fsdp, checkpoint, + compile test (#105180)
5d70fe0165 : [Composable] Use non-reentrant generator, remove reentrant (#105176)
c76c84bde4 : [dynamo] make ProcessGroupVariable a DistributedVariable (#105593)
15442915cf : [ONNX] Fix the warnings of `aten overload fallback to default` in onnx dispatcher (#105972)
8d9c8897ed : [profiler] add option for kineto synchronization events in the trace (#105187)
a770295af4 : Don't alter original node's meta in Interpreter (#105880)
6dd4b99ec2 : Revert "Disable torchrec/sparse from top-level Dynamo tracing (#105733)"
884cd53e49 : Unconditionally record when FakeTensorMode is allocated and report it on inconsistency (#105927)
523100a2f1 : Make _CURRENT_TRACING_CONTEXT thread local (#105942)
0003d5135d : [TP] Enable partial tensor add without redistribute (#105939)
e18d53e2df : Added ModuleInfo test for meta device ctx init (#105871)
837363c72f : inductor: support conv+binary folding for freezing path (#105048)
78b28e884a : Fix error formatting string (#105935)
4af9a914ab : Improve FakeTensor to work with mixed meta-cpu embedding bag arguments (#105924)
dd3a77bc96 : Apply UFMT to all files in benchmarks/ (#105928)
a361fceef3 : [C10d] Move TCPStoreMasterDaemon to TCPStoreBackend. (#105184)
1880852830 : [C10d] Move protocol constants to TCPStoreBackend.hpp (#105164)
e60af5c8e4 : Revert "[Compiled Autograd] Move to torch::dynamo::autograd namespace (#105854)"
a4cffaae67 : [pt2] add metas for `_cholesky_solve_helper` and `cholesky_solve` (#105867)
48cd8e29c1 : Revert "Slightly improve AOTAutograd logging with ViewAndMutationMeta (#105702)"
3eef86dbf4 : Only do TLS access when necessary in basicAutogradNotImplementedFallback (#105845)
da4f3fdca1 : Fix bug in basicAutogradNotImplementedFallback (#105660)
e7142700ed : Update expected inference for torchbench sam (#105891)
fe284b0d97 : [C10D] Extract some bits of TCPStore into TCPStoreBackend. (#105163)
b65b9e6ff4 : [PT][FSDP] Combine _utils.py into _common_utils.py [1/3] (#105857)
340ec1f460 : Revert "Meff Attn Bias (#104310)"
3bc047fb9a : [ONNX] Detailed diagnostics for 'perfect_match_inputs' (#105892)
8282c53789 : [ONNX] Add primitives formatting for diagnostics (#105889)
00c6a2ecd5 : [ONNX] Diagnostic option 'warnings_as_errors' (#105886)
c9edf11073 : [FSDP][Docs] Make model/optim state dict configs visible in docs (#105848)
71d18f6105 : [DocString] Fix incorrect api Examples (#105911)
1157b4393b : Add const reference and std::move in opportunities detected by clang-tidy (#105815)
5f6c6ff4cf : [inductor] Make OpenMP work in fbcode (#105777)
b2b1f2194b : [inductor] Enable vectorization in fbcode (#105756)
487a33e38a : [FSDP x dynamo] simplify registry keys (#104209)
da8de0455b : [ONNX] Support ONNXFakeContext with op_level_debug (#105874)
1032a2541e : Add option to disable rewriting index hints in default global save plan (#105861)
8bf253ecce : [export] Remove eliminate_dead_code (#105875)
c89aec207a : [ROCm] reduce tolerance for triangular solve with well_conditioned set to True (#104425)
9c458942ae : [easy] Minor torch.load docs fix (#105876)
8b34fa5e9b : add basic cuda support for float8 dtypes (#105807)
3a01c056f5 : [PyTorch][ET] Collect Process Groups Mapping Info (#104373)
00ee38c661 : [ONNX] Export module as function (#105618)
ec33733701 : [ONNX] Improve shape inference for Slice (#105755)
98956c5320 : Support dynamic shapes in TritonTemplates (#105295)
26e3b4020f : [Compiled Autograd] Move to torch::dynamo::autograd namespace (#105854)
5403c7770c : Provide a refined upper bound for nonzero when original numel is static (#105843)
cc137342d0 : Slightly improve AOTAutograd logging with ViewAndMutationMeta (#105702)
5fec1f93dc : Add meta registration for foreach_maximum_.List (#105864)
6655b6527a : [FSDP][Docs] Tidy up FSDP ctor/api docs (#105847)
65bce811a6 : [ONNX] Passes to reuse existing fake mode if possible (#105764)
8047ce05dd : Cleanup calculate-docker-image (#105752)
becb8dc91a : [inductor] triton_utils.config_of: check for divisibility by 16, even when expr is not an Integer (#105743)
8b94280008 : [functional collective] parameterize allreduce tests (#105604)
5453508115 : Meff Attn Bias (#104310)
9d62c5faf6 : [exir] Add deepcopy to ExportedProgram (#105852)
c902b84e0b : Compiled autograd (#103822)
14304afd76 : Remove unnecessary simplification in guard_lt (#105842)
b78341dda9 : Use hipsolver for default svd case on ROCm (#103540)
bf693f2000 : Strengthen ConstantVariable invariants (#105796)
d2ee3d0675 : Add a version to this signpost so I can tell if packages have taken updates (#105735)
b0708654c0 : Implement NAdamW optimizer (#103881)
a54043516f : Add SparseCsrCPU and SparseCsrCUDA dispatch to sum.dim_IntList (#99292)
fb0ffeece3 : Use the newer g5.12xlarge instead of g3.16xlarge for multigpu tests (#105759)
3045e84e67 : Tweak dynamic=False behavior (#105715)
0ab74044c2 : [BE] remove deprecated attributes from distributed_c10d (#105753)
b8eb827d93 : use UBSAN on some tests (#103655)
68dce23722 : [ROCm] Skip test_jit_cudnn_extension on ROCm (#105805)
1600585219 : Revert "Fix test failure in TestCudaMultiGPU.test_cuda_device_memory_allocated (#105501)"
33b855e906 : [xla hash update] update the pinned xla hash (#105828)
80144d9cf9 : Implement NEON accelerated implementation of ERF() (#105610)
54a673bdcf : Initial sourceless builder (#104734)
b0816e4714 : [inductor] Fix AOTInductor output issues (#105773)
32b67b5a6b : Fix RRef Future type annotation (#105296)
c44ae5544f : Skip the source info in the error report if the source code is too large (#105608)
e3539a0e54 : [dtensor] forward fix for dynamo import with deploy (#105760)
66fbffce1f : Fix unstable CI related to dynamo tests (#105797)
45e4706aff : [pt2] add decomps for `multilabel_margin_loss_forward` ops (#105302)
944db0357d : Unify `multilabel_margin_loss_shape_check` on CPU and CUDA (#105645)
eac9e1b35f : [OpInfo] add reference and error inputs for `multilabel_margin_loss` (#105523)
bba06ad751 : [MPS] aten::erfinv metal kernel ops (#101507)
12ea12d659 : [MPS] Fix upsample output size tensor (incorrect result in MacOS 14.0) (#105677)
6d43c89f37 : [BE]: Update Ruff to 0.0.280 (#105724)
53a4b262d2 : Add missing evaluate_expr for slice_scatter, slight refactor (#105714)
f5def50461 : Suppress eager fallback suggestions when exporting (#105767)
afd955f3de : [dynamo][constant] Kwargs already supported for str methods (#105785)
20fb2ba68d : [ONNX] Register list/tuple/dict to format_argument and refactor fx.Node format_argument in diagnostics (#105263)
0ad93a3d56 : Fix aten.logspace decomposition (#105201)
5afc2f5069 : Documentation for `torch.autocast` (#95760)
09b5c35911 : Support torch.onnx.dynamo_export within FakeTensorMode (#105477)
0b11da0ccb : [partitioners][ac][dynamic] Fix output signature of fwd with symints (#105771)
0148db6765 : [ONNX] Support torch.device in FX exporter (#105757)
60d5efdb15 : Disable torchrec/sparse from top-level Dynamo tracing (#105733)
45e0193174 : Add telemetry for number of nodes being compiled (#105741)
7b211ff8dd : doc: fix fake_quantize_per_channel_affine (#105241)
a6b8c30726 : [dynamo][higher order ops] Bugfix for kwargs support (#105699)
1959802548 : [AdamW] Fix complex x amsgrad support (#104990)
e1296a7f8d : [Adam] Fix complex x amsgrad support (#104989)
a44f8894fa : [Inductor] Provenance tracking for wrapper code (#105717)
050d3de07d : Revert "Correct dynamo logging docs (#105658)"
4d5d4d8b02 : [pytorch] Disable new autograd fallback for mobile builds (#105750)
221853af23 : [FSDP][Easy] nit follow-ups to handle refactor (#105738)
f3a261e096 : Correct dynamo logging docs (#105658)
174b0c22cb : [C10D] Remove watchKey functionality from the Store. (#105014)
9d2f56fd22 : Bump pygments from 2.12.0 to 2.15.0 in /.ci/docker (#105654)
999ca07ef8 : Improve fake mode support by adding fake_context to ExportOutput (#105247)
803d42e457 : add lerp cpu support for half (#105607)
d5d6eb2d46 : [ONNX] Refactor AvgPool to support dynamic shapes (#105683)
4cc1745b13 : [BE] f-stringify torch/ and scripts (#105538)
4c73016ff2 : [Dynamo] Enable torch._dynamo.config.suppress_errors by default (#105307)
de8bd108b4 : [BE] Enable ruff's UP rules in pyproject.toml (#105437)
6b2d48e78c : [8/n][FSDP] make use_dtensor=True work with offload_to_cpu=True for optim.load_state_dict() (#105690)
72b223cd1b : [Inductor] Optimize read write merging in FusedSchedulerNode ctor (#105693)
842616bcba : Allow (temporarily?) non-fake input during ONNX export with fake mode (#105246)
04da0c76a0 : Improve basicAutogradNotImplementedFallback + new tests (#105591)
ed6de45563 : Fix Tensor::register_hook behavior on undefined tensors (#105587)
29f856e3e0 : Kill process in `wait_for_process` if `SIGINT` fails to terminate it (#105625)
ec26947c58 : [Inductor] Replace functools.reduce union calls with set unions (#105720)
79c5e33349 : [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
322dff475c : Skip test_cudnn_rnn when cudnn not available (#105701)
429d45f91a : Simplify handle indexing (#105006)
b0a04331b4 : [dynamo] Fix import if numpy is not installed (#105711)
e40f8acef2 : [inductor][fx passes] batch layernorm (#105492)
a01a732954 : Rename some sizevars methods for clarity (#105585)
cce2b7e3c9 : [dynamo][numpy] Add support for builtin len() on numpy ndarray (#105691)
fed8d3608d : Update core aten decomp table (#105673)
c759a57003 : Skip deterministic mode for SAM (#105615)
117325862c : Revert "Add torch.utils to the docs page, remove dead code and fix docstrings (#105142)"
6ed96b9ed8 : inductor: fix bug in nn.Linear when in_feature size is zero (#105449)
cb9abf725c : Update torch.compile docstring (#105652)
143c83d637 : [quant][pt2e][be] Remove unneeded code (#105676)
a8f568e99b : Make recompiles log print stack traces (#105663)
e985719e98 : Add torch.utils to the docs page, remove dead code and fix docstrings (#105142)
1e87778552 : [inductor] refactor wrapper benchmark code out of utils.py (#105584)
07ea344dcf : Fix docs not showing error, remove circleci docs scripts (#105678)
e47fad68a0 : [caffe2] Update tracepoint USDT macros (#105232)
024d26208c : Add Freezing Option to Benchmarking (#105616)
8399cf9bfe : Rnn base hidden size type check (#105659)
18d8961d91 : [Pytorch][Vulkan] aten::pow (#105550)
795885d947 : [docs] Fix docstring. (#105689)
450c22c311 : mypy index propagation (#105622)
fe7187b903 : mypy _inductor/cuda_properties (#105620)
75a8c8a538 : softshrink lowering (#105603)
6560750d08 : [Dynamo] Support list indexed by constant tensor (#105509)
e6fd8ca3ee : Fix test failure in TestCudaMultiGPU.test_cuda_device_memory_allocated (#105501)
6abb8c382c : [export] add kwargs support for export. (#105337)
9584d614a1 : [inductor] add decompositions for aten.angle (#105609)
9760ea58a3 : fix lint (#105675)
3464cd6e62 : Close non existent disable issues (#105096)
777fc0bb58 : [dynamo] fine-grained bytecode-source attribution in python 3.11 (#104676)
b5d3d58497 : Fixed cmake mkl lib path in caffe2 public (#105525)
25d80c69ce : [foreach] super minor BE: remove unnecessary cast (#105601)
e855348cdf : [foreach][SGD] minimize intermediates=1 to decrease peak memory (#105599)
585ce32ca1 : Heap buffer overflow in `distributed/rpc` module (#105537)
0af18f2234 : Unify TEST_CUDNN definition (#105594)
b64bd4a5dd : Add torch.float8_e5m2 and torch.float8_e4m3 data types (#104242)
803d58a408 : Add TensorPipe header files to Python package (#105521)
154d89b224 : Revert "Unify TEST_CUDNN definition (#105594)"
f2b15772ff : Revert "Add torch.float8_e5m2 and torch.float8_e4m3 data types (#104242)"
02cd971e95 : [C10D] Improve MTPG autograd test. Fixes #105106 (#105356)
ded9b94207 : Improved error messages for deprecated linalg functions. (#105506)
ca126880d9 : Enable intellisense for _dynamo, _inductor and onnx by importing under type_checking guard (#105361)
a9804130e5 : Add torch.float8_e5m2 and torch.float8_e4m3 data types (#104242)
1ea153a11d : Unify TEST_CUDNN definition (#105594)
692e0566d6 : Rely on is_expr_static_and_true to test gcd (#105578)
71067631c2 : [inductor] Fix an AOTInductor missing output issue (#105496)
0e4c12157c : inductor: add support for 0 repeats (#105446)
0578732bc3 : [inductor] fix duplicate arg handling in triton templates (#105315)
a5317ae857 : Remove unnecessary left == right test. (#105576)
980589b04d : [ONNX] Suppress ORT warnings in unit tests (#105624)
8daed86e4e : [Inductor] aten.dist decomposition (#105586)
dfc9874740 : Revert "inductor: promote half/bfloat16 constant to float for cpu vectorization path (#105440)"
dff4e034b8 : [quant][pt2e][be] Rename qnnpack quantizer to xnnpack quantizer (#105551)
c6653b65d8 : Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
2e81cdc1dd : Remove dead sizevars.__getitem__ (#105579)
43540a1cab : Tighten size_hint invariant (#105580)
690ea933ca : Enable more e2e foreach optimizer compilation tests (#105438)
0cd51b3df0 : Reland: Value range refinement using multi-variate expressions (#105491)
3dacc8e847 : [PyTorch] [Memory profiler] Early return if qualified name is invalid (#105495)
86076abeff : Update slow CI jobs to rocm5.6 (#105516)
93e6fc54fa : [PyTorch] Remove device transfers from JNI (#105583)
0b524343be : Reenable UFMT on pyi (#105577)
5abc5ab55d : [inductor] Disable cudagraphs if index_put_ fallback is encountered (#105439)
bc6bca9d42 : [XNNPACK][QS8] torch.slice (#105252)
fa6be2fa6f : [Quant][PT2E] Remove x86 inductor pt2e backend config (#105039)
af9a4e08fa : [dynamo][rewrite_asserts] Insert assertion msg in bytecode only when needed (#105549)
6c432381f5 : [Quant][Inductor] Use truncate instead of default rounding round when convert float to uint8 (#105109)
a832967627 : Migrate tuple(handle) -> handle (#104488)
c54f630201 : [7/n][FSDP] make use_dtensor=True work with offload_to_cpu=True for load_state_dict (#105378)
5ce5372d70 : Create tensor from Numpy in current device. (#105546)
73e1455327 : [BE] Enable ruff's UP rules and autoformat test/ (#105434)
7b56238551 : fix typo (#105507)
801fb93b0c : Update pybind11 submodule to 2.11.0 (#105245)
70b5264ec5 : [EZ][BE] Fix the massively annoying strict-weak-ordering issue. (#105189)
4448c78a5d : [ONNX] Add missing spaces between sentences in warning text (#105527)
218b5477ea : switching NNC as default for TorchScript support (#105185)
a10f93f606 : [composable API] Fix the replicate_device_id test case to avoid copy replicated models. (#105503)
f139aab2f4 : [dynamo] add initial dynamo support for DTensor (#103146)
a788365d14 : Switch UFMT to denylist rather than allowlist (#105536)
232b96b6e2 : [BE] Enable ruff's UP rules and autoformat distributed/ (#105433)
2125794c12 : [MPS][BE] Use `Tensor::copy_` (#105475)
8a688277a2 : [BE] Enable ruff's UP rules and autoformat dynamo / functorch and refs (#105432)
88f119775d : Upgrade nightly wheels to rocm5.6 (#105076)
cb7a30f656 : [BE] Enable ruff's UP rules and autoformat inductor/ (#105431)
c0d8a4af0a : [BE] Enable ruff's UP rules and autoformat ao/ (#105430)
0b6de0eb1c : Improve validator module behavior if Z3 is not installed. (#105168)
e137ac6c59 : [dynamo][torch_np] support linalg, random and fft module (#105320)
18bcf62bbc : inductor: promote half/bfloat16 constant to float for cpu vectorization path (#105440)
7ddb66e334 : Fix for "AttributeError when attempting to remove inductor buffers twice" (#104901)
167eab1cec : [inductor] Support OMP on MacOS (#105136)
0e85c224f8 : Use shareable calculate-docker-image GHA (#105372)
554052f321 : [quant][pt2e][be] Rename prepare_pt2e_quantizer to prepare_pt2e (#105484)
5ef023b05a : [BE] Enable ruff's UP rules and autoformat benchmarks/ (#105429)
9c225c9b9a : [pytorch] Change autograd fallback mode to Nothing (#105505)
d2fa3f608b : Produce more logs from TCPStore init (#105350)
d2c24eca8a : Fix mps unary op issue on non densely stored tensors (#105512)
133a5e9a7a : Upgrade triton pin (#105463)
64c39ece65 : Fix a docstring of resolve_neg (#104151)
fe04c6c371 : [inductor] Allow specify a subdir to store .so and .cubin files (#105466)
1597dd7a54 : Report guard failures with recompiles logging (#105500)
11b753af01 : Refactor causal mask generation and detection for nn.transformer (#105265)
14d87bb5ff : [BE] Enable ruff's UP rules and autoformat tools and scripts (#105428)
5666d20bb8 : Add unlifting pass under private config (#104897)
fbd7e74c92 : [inductor] Enable mypy checking in lowering.py (#105317)
88f1885ec9 : [XNNPACK][QS8] torch.cat (#104800)
39477f7ca9 : Remove unnecessary seen check in get_current_graph_task_execution_order (#105487)
78a7684b5b : [Pytorch] Unary Ops (#104994)
e983625f22 : [FSDP] Fix skip-sharded-views + mixed precision (#105346)
e645f2adaf : [DTensor] Fix device detection logic for TestDTensorPlacementTypes::test_split_tensor. (#105357)
242a7743f3 : [BE] Enable ruff's UP rules and autoformat onnx/ (#105427)
f508d3564c : [Pytorch][Vulkan] Templatize BinaryOps (#105380)
78829d6e07 : Fix `isinstance` check in `quat_utils` (#105476)
3721fa5612 : [BE] Enable ruff's UP rules and autoformat optim/ (#105426)
be03a56955 : [BE] Enable ruff's UP rules and autoformat testing/ (#105425)
9e1b07e692 : [C10d] Handle bool tensors in gloo. Fixes #103585. (#105354)
abc1cadddb : [BE] Enable ruff's UP rules and autoformat utils/ (#105424)
91ab32e4b1 : [pt2][inductor] fix LoweringException: TypeError: '<' not supported between instances of 'ExternKernelCaller' and 'ExternKernelCaller' (#105469)
8cd94e1eab : [MPS] Add lerp implementation (#105470)
cb23373264 : [dynamo] allow tensor subclass fakification in dynamo (#105308)
bcb9ca4e5a : [dtensor] canonicalize detach callsites and use `view_as` when appropriate (#105239)
1aba399138 : allow set_multithreading_enabled to act as function and context manager (#105291)
ed2b9f1af1 : [quant][pt2e] rename _quantize_pt2e to quantize_pt2e (#105377)
8364a5116c : Simplify Dispatcher case for zero arguments (#104613)
133c5ec997 : Add torch.ops.out_dtype (#103333)
1b78f23a1a : Allow nn.ChannelShuffle to run without erroring on CUDA tensors (#105351)
dc1186b0f9 : Add peterbell10 to core reviewers (#105461)
b10de43c0a : Add aot_inductor as a test backend for benchmarking (#105221)
671a21926f : [torch_np] update test to use ones_like instead of empty_like (#105453)
5e942ac5ec : add bfloat16 support for reflection and replication padding (#102949)
1a4ee2a6bb : Add XPU support for storage resize_ (#105262)
d09195ce82 : inductor: fix fx tracing error for freezing pass (#105133)
38c1e86ee2 : inductor: make sure as_strided ops' input layout is not changed after converting conv's weight format (#105122)
964d29f312 : [BE] Enable ruff's UP rules and autoformat torchgen/ (#105423)
6ca3d7e1a2 : [pt2][inductor] only use global cache on MAST (#105375)
8010f6bf48 : [dynamo][inductor] Provide public API to get compiler options/configs (#105026)
4b3c261a2e : inductor: fix issue of vectorization when the store's index is constant value (#105314)
3201a90428 : inductor: don't force convert channels last if one op's user is as_strided ops (#105111)
5d473a950f : Make conversions from/to sparse semi-structured always @torch.compile-d (#105272)
ad6dad810e : [dynamo][profiler] More verbose profiler warning (#105362)
2ba9b56449 : [ONNX] Fix aten::cat export when arg include parameters (#105373)
dc58259746 : [Inductor] [FX passes] Group linear fusion (#105116)
ba00b0939e : Inductor cpp wrapper: support torch.complex64 (#105305)
fcb7d4b358 : Mark `bincount` CUDA deterministic if `weights` are not given (#105244)
e9fd815226 : Misc visibility changes for compiled autograd (#105298)
cf404a8ce4 : Fix get_current_graph_task_execution_order accumulate_grads ordering (#105353)
750b9b359f : fix aot_inductor+dynamo fail on IG_CTR (#105250)
a4021af42e : [Pytorch] General broadcast for arithmetic operators (#104718)
196f2ab014 : Log triton random warning to INFO (#105343)
88aa51fe85 : [dynamo] Support defaults for namedtuples (#105341)
03a4aecf60 : Make libtorch CUDA12 builds actually build on CUDA-12 (#105364)
a6758cb304 : Revert "Revert "SetVariable in dynamo (#103205)"" + Fix for improved graph breaks (#105345)
b2150b4795 : [pt2][inductor] move gemm local cache to `cache_dir()/cache/{hash}` (#105334)
5e6c124649 : upgrade multipy to latest master there (#105344)
d623f22b8b : Skip frame if the graph is empty (#105228)
0af287cef2 : Update batch_norm_backward_elemt args in native_functions.yaml (#104529)
37f5d7866c : Remove redundant setuptools in pyproject.toml requires (#105303)
a26afb9848 : Better comparisons for np.ndarrays in dynamo (#105333)
3fdf365397 : Move TypeAndSize out of /generated/ (#105195)
28d018dafd : [inductor] Implement bucketize() for dependencies.py (#105102)
74a08db12b : [BE] Speedup bazel builds (#105347)
32d422f335 : Make adding buffers more like adding parameters (#104069)
4fc47b4156 : Allow disabling bias for `LayerNorm` (#101683)
e0d2ad1a21 : [Profiler][Memory] Export raw timestamped events in export_memory_timeline_raw (#105094)
95232c216b : [dynamo] Bugfix for enums (#105306)
ca47205783 : Revert "create mergability check (#105086)"
07a1c3f7ff : Exercise subclass of TransformerEncoderLayer (#105297)
e5f5bcf6d4 : [inductor] include global cache dir in inductor resources (#102130)
05854212dd : add syncBN support for custom device (#104250)
2fa7d11b64 : Immediately compile backwards graph in AOTAutograd if dynamic shapes (#104971)
94b3f9f646 : Revert "SetVariable in dynamo (#103205)"
07108ff1e8 : Fix typos under _decomp directory (#105210)
3152feab07 : Assert that we can compute the bounds for guards using rational numbers (#105139)
34c91a7051 : Prefer bound_sympy over sympy_interp (#105138)
eae99b0f51 : Bound just size variables in bound_sympy (#105155)
b5beced299 : [xla hash update] update the pinned xla hash (#105312)
7f84d55e58 : [BE] Add actual dtype to RuntimeError in ADDMM_META() (#105309)
8c479d32da : [inductor][easy] avoid duplicate kernel definitions (#105099)
93f852f201 : Add PyObject_CallMethodNoArgs to pythoncapi_compat.h (#105285)
e68cf02420 : Revert "[inductor] Implement bucketize() for dependencies.py (#105102)"
9adfaf8807 : [inductor] Add lowering for aten.unfold (#105165)
b47d91a537 : [dynamo] Reland Move decorators into decorators.py (#105273)
cbe0254dc4 : Optimize sparse.mm reduce in BFloat16 data type in CPU backend (#103239)
e3c4f2fb59 : [vision hash update] update the pinned vision hash (#105282)
efdabbff06 : Assert that evaluate_expr matches hint (#97792)
5837e95d30 : [Reland] Update mypy to 1.4.1 (#105227)
5cd861fcf7 : Add empty/empty_like to core aten decomps (#105158)
1152e86da1 : Transmute refined SymInt into int (#104828)
66d3729388 : Add THPVariable_WrapList helper (#105194)
7b4d080496 : [quant][pt2e] Rename _pt2e to pt2e (#104668)
a63f3f4335 : [dynamo] Disable fused adam op compile (#105256)
922a98e693 : [ONNX] Enable attribute type checking in onnx dispatcher (#105104)
0285366464 : Revert "[dynamo] Maintainable code - Move export impl to a different file (#105071)"
1fdc88f877 : Inductor cpp wrapper: fix codegen of FallbackKernel with kwargs (#104575)
ffce2492af : Remove set_default_dtype calls from jit and ops tests (#105072)
82fb5edfc7 : SetVariable in dynamo (#103205)
a155c68e6d : [MPI] Allow previously initialized (#105023)
15411b8afd : [ONNX] Make unsupported node analysis result deterministic (#105231)
d438b99823 : [reland][inductor] fix a custom_op test problem (#105234)
15fd1ea118 : Revert "[Reland] Update mypy to 1.4.1 (#105227)"
fb376f80a2 : [retry][dynamo][numpy] Add support for np.dtype (#105034)
7e72126487 : [pt2] add decomps for `multi_margin_loss` ops (#104578)
0a6888243b : `multi_margin_loss`: check `weight` shape, make contiguous on CPU, add tests (#104852)
de67b52a88 : Unify `multi_margin_loss_shape_check` on CPU and CUDA (#104851)
0c89596e4f : [OpInfo] add reference and error inputs for `multi_margin_loss` (#104850)
d06e1df1aa : [torchgen] Rename executorch's RuntimeContext to KernelRuntimeContext (#104892)
99ab2ad677 : [nightly] Fix macos nightly conda builds due to miniconda update (#105226)
c9c4f8efc3 : [Reland] Update mypy to 1.4.1 (#105227)
1518d5eec4 : Update Documentation for TripletMarginLoss (#105115)
cff5d6a22c : [inductor] Implement bucketize() for dependencies.py (#105102)
4fc408ded2 : [ROCm] Add AMD devs as owners for hipify code (#105080)
bf46b6653f : [export] Allow optional call-spec (#105179)
d3b96969a0 : Upgrade CI to ROCm5.6 (#103092)
fcbe1be8f9 : Update torchbench.txt to include SAM (#105222)
d855c6c7de : [PyTorch-TB] Write full tensor as tensor proto (#105186)
233f917c83 : Revert "Add torch.ops.out_dtype (#103333)"
0401ffa83f : s390x: fix special_hermite_polynomial_h for '+/-inf' and '+/-nan' (#104705)
17250976f3 : correct empty tensor mps all operation (#105218)
15ea0a00cb : Fix RRef type annotations (#104876)
c0a278d6f0 : Upload all test stats only if the workflow is from pytorch/pytorch main (#105087)
7c10b58c5f : Add torch.ops.out_dtype (#103333)
10cbc9a063 : Enable cuda graphs for dynamic shapes (#105064)
9fc3a22731 : Turn on typechecking in cudagraph_trees (#105151)
1646d6f939 : Revert "Merge and improve torch optim optimizer type stubs (#102593)"
3c5a494d7a : Revert "Update mypy to 1.4.1 (#91983)"
1c69f363c4 : Revert "Transmute refined SymInt into int (#104828)"
f03a8f0589 : [reland] Deprecate registering autograd kernels at not an autograd key (#105078)
b4d91b1c5b : Revert "[Typing] Fix PEP 484 Violation (#105022)"
528ab477ce : [reland][inductor] Register an op for mm_plus_mm (#105153)
c099b7e07a : ValueRange analysis for indirect indexing (#102611)
87a3ed58cb : Fix ranges for range vars (#104987)
88dcecdf54 : Remove unnecessary casting in triton (#104975)
dc4a0582fb : fix `hash_storage`'s padding calculation (#105036)
8e01f75b1b : New `Mod` class for SymPy expressions. (#104968)
068f163dd3 : [dynamo] Maintainable code - Move export impl to a different file (#105071)
b7b44e766b : [Checkpoint] Separate implementation into generator (#105101)
6871cf6e1e : refactor GeneratorForPrivateuseone (#105038)
90b50f0303 : [quant][pt2e] change internal code to only import from _quantize_pt2e (#105162)
893983e59f : Use GitHub REST API to get the merge base commit SHA (#105098)
9942a14e96 : Fix torch.compile g++ flag error on ppc64le (#104956)
a66f08d626 : enable channels last for replication padding on CPU (#102597)
c1877c741c : [vision hash update] update the pinned vision hash (#105191)
4328138c1e : AOT inductor: error: ‘c10::Dispatcher’ has not been declared (#104742)
46104882d7 : inductor: enable cpu fusion for dynamic shapes path (#104945)
8af8e1fe36 : Add torch._dynamo.maybe_mark_dynamic (#105145)
8a6e5d7cc7 : CUDAGraph trees real inputs should never be SymInt (#105148)
d7e6040efa : Update sparse semi-structured linear operator (#104608)
b88b742db8 : fixed torch.manual_seed note (#105175)
85745cd3d9 : Fix bug in fuse_modules (#105069)
b33d63d97b : [BE] Use ValueError for input.dim check in torch.nn.modules (#105127)
cd15229950 : [foreach][RMSprop] Minimize intermediates=2 to decrease peak memory (#105161)
219cf2a1c8 : [foreach][ASGD] Minimize intermediates=1 to decrease peak memory (#105146)
3a7d77f704 : Serialize empty pytree cases (#105159)
6c10edcb2d : move kineto submodule commit (#105144)
485cad4a86 : Dynamo tensor aliasing guards, dedup graphargs (#104921)
f987d11fa7 : Reland: Make `torch.empty*` deterministic by filling with NaN or max int (#104995)
42530c17fc : [ONNX] Fix UnsupportedFxNodesAnalysis after onnx dispatcher changes (#105156)
15c1e44d64 : [ONNX] Apply param_manipulation.py from onnx-script to op validation and dispatcher (#104679)
fc2f87b281 : Add semi-structured sparse conversions (#103830)
15478a50ef : Revert "[export] Allow optional call-spec (#105041)"
df3a64fb3e : [Dockerfile] Add `make triton` to the `build` target (#105114)
ef05c5f202 : Use plain power operator in Adam/Adamw when capturing (#104254)
194fe1d12f : [export] Allow optional call-spec (#105041)
b06a426390 : Fix typo (#105130)
44c8515d0d : SDPA: frontend for BSR masks (#104042)
05eea20eb9 : [dynamo] Simulate torch function enablement state (#105091)
87cf51cc7f : Switch automatic_dynamic_shapes to True by default in fbcode (#104883)
c36dca7bc5 : Revert "[inductor] Register an op for mm_plus_mm (#104835)" (#105150)
91c64f55ab : Revert "[inductor] fix a custom_op test problem (#104972)" (#105149)
d1fedad080 : Perform value range analysis with rationals when possible (#105137)
634659e262 : Update mypy to 1.4.1 (#91983)
f73757d551 : enable channels last for reflection padding on CPU (#102518)
d35137cc37 : Revert "[PyTorch TB] Write raw tensor as tensor_proto (#104908)"
e1502c0cdb : Add some more files to ciflow/inductor (#105112)
c6b9c31a2c : [inductor] fix incorrect strides in copy() decomp, fix hf_LongFormer + hf_BigBird errors (#100115)
053654b9cf : Optimize scatter_add/scatter_reduce in BFloat16/Half data type in CPU backend (#103427)
735e6ae801 : [dynamo] Maintainable code - Move decorators in a separate file (#105070)
4a0d773a08 : Update attention.cpp to remove warning about preferring torch.bool type (#103362)
0f322a300e : Transmute refined SymInt into int (#104828)
242fc29c96 : [FSDP] Refactor optimizer in backward (#104813)
f2eed129c4 : FSDP optimizer overlap (#98667)
1d02106e03 : Preserve source_fn or nn_module_stack in the lifted params (#105017)
dceae41c29 : [PyTorch TB] Write raw tensor as tensor_proto (#104908)
b99d605a30 : Add meta registration for foreach_mul_ (#105107)
0faf8ed49f : Skip TS backend in FBCODE (#104354)
0a20233e5b : create mergeability check (#105086)
2c85f28c71 : [CUDA][cudaMallocAsync] Reduce record-stream warning spam (#105015)
7030403048 : Fix initializer naming at torch.onnx.ExportOutput.save_model_with_external_data (#105002)
bf40561ab4 : [ONNX] Support 'aten::randint' in torchscript onnx exporter (#105089)
9647a251cb : [dynamo] Dataclass variables with default field (#104840)
601db856d1 : elevated cudagraphs failure to warning, added lineno to recompiles (#105081)
3fe2b73416 : Update use_mkldnn in LSTM op to avoid input and parameter not in the same device (#102050)
5b4aacd691 : Revert "[DCP] Add FsspecReader and FsspecWriter to checkpoint __init__.py (#105088)"
954bae8e53 : [FSDP][Easy] Rename streams; add back stream sharing test (#104966)
59bb07ca46 : Update vendored version of pythoncapi_compat (#105083)
4f8ba6f8f6 : [DeviceMesh]Add validate mesh flag to DeviceMesh (#104807)
76a053d55c : [DCP] Add FsspecReader and FsspecWriter to checkpoint __init__.py (#105088)
15c67ca95c : Update troubleshooting.rst (#105018)
cf9d784e32 : Skip test_indirect_device_assert in fbcode (#105065)
398606e1c4 : Fix bug when an index appears in two expressions (#104886)
2563079d59 : [ONNX] Allow None as operator argument (#105040)
f0ed71273e : Make ops functional (#105000)
d77a2d8fe3 : Remove shard ID and unstable suffix when comparing failed job names with the base commit (#104821)
64bbe61600 : Fix lint: [PyTorch] Add Vulkan support for at::softmax 1,2,3 dimension tensors (#105082)
fc012d716d : [core] Bring cpu device module closer to cuda's. (#103172)
66fb83293e : [inductor] Add min/max to index propagation pass (#105020)
06a5df8d31 : Revert "Transmute refined SymInt into int (#104828)"
246dc0d9f2 : [MTPG] Use TLS propagation to enable MTPG from bwd. (#104735)
43c94360e2 : [PyTorch] Add Vulkan support for at::softmax 1,2,3 dimension tensors (#105012)
08cbfb2a58 : Avoid tensor creation and use scalar overload (#104264)
16d3638c11 : Add best practices for CPU backend doc (#105051)
be76bfb743 : [inductor] fix a custom_op test problem (#104972)
5c7e826f4d : [ONNX][TypePromo] Introduce ReductionTypePromotionRule (#104491)
0e7529940d : Revert "Switch automatic_dynamic_shapes to True by default in fbcode (#104883)"
4694f54356 : Transmute refined SymInt into int (#104828)
1ecef7d805 : Remove unused private code from ATEN (#104751)
979f826015 : Read out real strides from compilation result, rather than real args (#105010)
4148b7bada : [Typing] Fix PEP 484 Violation (#105022)
603a777b09 : [PyTorch TB] Refactor formatting (#105027)
c7a76d9be5 : Replace use of `first_layer` in init with `encoder_layer` argument to init (#104058)
c03558fa8d : [doc] apply `weight` after `p` in `MultiMarginLoss` (#104844)
0bc382ea55 : [foreach][Adamax] Minimize intermediates=1 to decrease peak memory (#104991)
ea6a563a8c : [foreach][Adagrad] Minimize intermediates=2 to decrease peak memory (#104988)
455f495f04 : [foreach][Adadelta] Minimize intermediates=3 to decrease peak memory (#104983)
9c46a1620c : [inductor] Register an op for mm_plus_mm (#104835)
5f2a76ddf7 : inductor: fix LoweringException of AdaptiveAvgPool with output_size 0 (#104691)
9d1f5a35df : Move more stuff into ViewAndMutationMeta (#105009)
5913437a40 : aot inductor: opportunistically fix check_output -> check_call (#104743)
980fb94f9c : [Doc] Specify output parameters for FractionalMaxPool2d and FractionalMaxPool3d (#104941)
73e179a5ca : Follow file move for functorch bits for ciflow/inductor (#105019)
1a6619a830 : Added missing whitespace when reporting invalid gradient type (#104992)
96b91ab248 : Fix merged lintrunner error (#105005)
ece19bf018 : Update run_test.py to use TEST_WITH_SLOW_GRADCHECK flag (#104819)
24aa8b9b9a : Revert "Deprecate registering autograd kernels at not an autograd key (#104481)"
2f95a3d0fc : [BE]: Apply ruff PERF fixes to torch (#104917)
c9a806be28 : [ROCm] enable additional inductor/dynamo UTs (#104624)
6f27c5185f : Fix broken link to torch.compile docs (#104982)
c60cb91700 : [dynamo] fix bug where trace_source and graph_sizes artifacts were not being printed with TORCH_LOGS='+dynamo' (#104912)
a2f04e9841 : Force multi-line messages to still get log format prefix (#104932)
515e3f2bb9 : Add [rankN]: to log messages when distributed is initialized (#104929)
5e4ee15e85 : [MPS] Fix unique flatten logic (#104938)
ad37dd5155 : Make unspecified ints to range over negative and positive. (#104658)
4b29829ece : [quant][pt2] Fix QAT convert for mobilenetv2 (#104110)
eb03af44ee : Fixes to the torch.compile doc and doctest (#104911)
6abe0b2ee8 : Disable translation validation on performance runs. (#104887)
5d4b2fcc6f : Updated pillow version to 9.3.0 for Python version <= 3.8 (#104958)
f01deb23d5 : Revert "[dynamo][numpy] Add support for np.dtype (#103546)"
49a2b72927 : [inductor] handle `Min` and `Max` in `TritonPrinter` (#104944)
15aa401baa : [foreach][NAdam] Minimize use of intermediates to decrease peak memory (#104910)
6878d3a157 : [foreach][RAdam] Minimize use of intermediates to decrease peak memory (#104904)
ed13ab6664 : Deprecate registering autograd kernels at not an autograd key (#104481)
e095716161 : Add a note for Incorrect signature in nn.Module.register_full_backwar… (#104964)
231364fd06 : [optim] use lerp whenever possible (#104796)
999abd56a7 : [BE] Make ONNX imports lazy (#104843)
26f7f470df : Handle empty PR body in filter_test_configs (#104914)
db4aed6a03 : Include nn.ParameterDict in dynamo __getitem__ (#99771)
ba167e6578 : Inductor cpp wrapper: fix codegen of ScatterFallback (#104524)
0710791929 : [dynamo][numpy] Add support for np.dtype (#103546)
90eaa98d13 : dynamo : kwarg support for wrap (higher order op) (#104180)
ed5ea15714 : [Easy] remove debug code (#104915)
f1bff6601c : [ONNX] Add fake tensor support to torch.onnx.dynamo_export (#103865)
ca8c56ff5d : fix QuantizeAvx512 (#104400)
dbb69f78fe : Add assert + test for artifact log booleans (#104907)
d184c81166 : Add -fstandalone-debug debug flag (#104475)
63d1fb21f5 : [FSDP] Default `limit_all_gathers=True` (#104900)
7c3c3dd7ca : [C10D] Reimplement TCPStore wait timeout logic. (#100594)
332f2057df : [XNNPACK][QS8] torch.nn.ELU (#104307)
c4e084e3c7 : [XNNPACK][QS8] torch.nn.ConstantPad2d (#104306)
2c960c73a3 : [XNNPACK][QS8] torch.permute (#104305)
d41c4a8338 : [XNNPACK][QS8] torch.clamp (#104304)
66c41e1c5e : Avoid generating core dumps when CONTINUE_THROUGH_ERROR is set (#104905)
e940d5d3c3 : Disable cudagraphs by default when dynamic shape is enabled. (#104448)
3279f06410 : Merge and improve torch optim optimizer type stubs (#102593)
6059fea760 : Make perf_hint_log report at info level (#104873)
4063158df9 : Enable running compiled optimizers in CI (#104888)
7e9c891056 : [foreach][AdamW] Minimize intermediates to save peak memory (#104898)
d5dbe77629 : Fix mod semantics for `Z3Ops`. (#104827)
951b9a6a14 : Update torchbench pin (#104829)
0300be5b7b : Fix AttributeError("'constexpr' object has no attribute 'type'") (#104831)
aa84078c6c : [PTD][TP] Add BWD support for colwise embedding sharding (#104820)
fd378db6a8 : Fix lint after 104902 (#104909)
9861c4a3f8 : Add lerp decomps + meta registrations (#104866)
dff42857bd : [inductor] update triton pin (#104303)
c2e286daf9 : Testing: Print test reproduction command on failure (#104537)
912a6a1b5a : [pt2][test] Loosen stack trace check in test (#104902)
86680a6c0b : [dynamo] handle calls to typing.cast (#104799)
2ee440054b : Small tweaks to SDPA docs (#104749)
d1ca98665f : Switch automatic_dynamic_shapes to True by default in fbcode (#104883)
bcdd4130b4 : [inductor] Fix float64 constants in triton codegen (#104830)
7b538d8987 : [DCP][fsspec] Consolidate OSS FsspecWriter/Reader and internal FsspecWriter/Reader (#104724)
48a49b2683 : use more informative error message for ConstantPad2d/3d (#104762)
1ad435772b : Added option to always call nn.Module global/non-global forward hooks (#104278)
0433cb0596 : [dynamo] simulate tracing tree_map_only (#104815)
b93590b692 : Copy debug artifacts instead of renaming (#104561)
35f0e35529 : [foreach][Adam] Minimize use of intermediates to decrease peak memory (#104780)
e25f5732c8 : Add meta registrations and distributed decomps: _foreach_div_.Scalar, sqrt_.default (#104779)
038cb4075a : Add capturable/maximize tests to Adam(W) optim configs (#104669)
af52f6b928 : [DCP] Add documentation for HSDP saving using DCP (#104810)
e695b397e1 : Fix broken ROCm quick start link (#104527)
4911b80b8e : [inductor] addmm + ReLU / GELU fusion pass (#104132)
7166df8094 : Add big doc to wrap_fx_proxy_cls (#103407)
4b8378967a : Fix pytest test discovery for vscode (#104864)
af34123caf : Consolidate example_value int cases in wrap_fx_proxy_cls (#104836)
e7fe2a797c : Revert "[optim] use lerp whenever possible (#104796)"
46154c4c35 : [FSDP][optim_state_dict] The correct way to initialize optimizer states if the corresponding param is empty (#104765)
54f33265db : inductor(re-land): support cpu fusion path for bfloat16 amp (#104399)
1b24a75175 : Generalize sympy.Rel test to sympy.logic.boolalg.Boolean (#104833)
26ff7a7e2a : Allow for torch.sym_int to return int while tracing (#104837)
dfe7a1e089 : [dynamo] Support wrapping + returning tensor subclasses (#104802)
51e246affc : Update cuDNN frontend submodule to v9.1 (#104847)
546db2e36e : [fx] make fx.wrap idempotent (#104838)
87e6b19ee0 : [export] Make serializer more composable (#104816)
98d48709fe : update cudnn==8.9.2.26 in .ci/docker (#104795)
395a0ba303 : Training skip list should not be applied on inference bench (#104738)
a860b965f1 : [inductor] Relax custom op schema checking for cpp_wrapper (#104349)
dd6c38cb59 : [vision hash update] update the pinned vision hash (#104834)
3179c21286 : remove aot_inductor_lib from deeplearning (#104730)
dffcf999bd : Misc changes from compiled autograd branch (#104316)
e80787c8e1 : [inductor] Split ops.reduction into reduction and store_reduction (#102737)
0ceca92f80 : [inductor] Add single pass "var_unnormalized" reduction_type (#102486)
26108d5d2b : Add --check-str support to after_aot minifier (#104758)
85cbe7e6fd : Add timeout for translation validation instances. (#104654)
91dcc3b272 : Fix activation checkpoint for mps (#104787)
c85468a94c : [autograd Function] Add private API to not materialize grads for non-differentiable outputs (#104291)
e600505e32 : [FSDP][5/N] Unblock `ignored_states` + auto wrap (for now) (#104418)
610f74627e : [FSDP][4/N] Remove `_get_fully_sharded_module_to_states` (#104409)
d9be0366d3 : [FSDP][3/N] Unify `fully_shard` auto wrap (#104408)
6d71b4f9f1 : [FSDP][2/N][Easy] Prepare `_auto_wrap` for `fully_shard` (#104407)
d58f75be8b : [FSDP][1/N] Move wrapper `ModuleWrapPolicy` to new path (#104346)
f334b54d7f : Handle the list of skipped messages when uploading disabled test stats (#104803)
fbe2a7e50a : [optim] use lerp whenever possible (#104796)
5da4745c24 : [ONNX] Fix exported onnx initializer name (#104741)
012561ff39 : [ONNX] Restore readable names for parameters and buffers (#104493)
3d51c2e06d : [ONNX] Refactor FX Registry and Support Custom Operator in FX Exporter (#103943)
f45629d6ed : Pin pillow (#104760)
dbc2216800 : Add autograd modes table to docs (#104774)
2df939aaca : [inductor] Update ops.bucketize to take offsets_size as a sympy.Expr (#104756)
3d07184930 : Move optimize indexing to use the class Bounds (#104558)
710abc41cc : Implement bound_sympy (#104559)
ff05f81e1d : Simplify and extend ValueRanges (#104557)
2f04aab140 : [SPMD] Disable all SPMD tests (#104784)
ae12081e70 : [ONNX] Remove unnecessary deepcopy on args in 'DynamoExport' (#104736)
c68fac9c25 : [pt2][inductor] include `allow_tf32` in system information (#104129)
ed4a8869af : [ONNX] Fix third party custom operator support in torchscript exporter (#104785)
d8cb80e382 : [inductor] If a kernel contains bucketize, try using config with num_elements_per_warp=32 (#104456)
1a661639f7 : [quant] Support integer implementations for adaptive_avg_pool2d (#104226)
98e14ac37e : [ONNX][TypePromo] Simplify API `_run_node_and_set_meta` (#104720)
fa262eb46e : [ONNX][TypePromo] aten.div (#104229)
29c30b1db8 : [export] Fix serialize nn_module_stack (#104721)
6a3d5f1986 : [HigherOrderOp] Remove _deprecated_global_ns from cond (#104380)
d5a83a5f27 : [export] Fix deserialization of symint (#104722)
199e93a0da : [export] Serialize optional tensors (#104723)
78734a76ad : Revert "Add libxml2 and libxslt in docker image (#104663)"
2fdf1175cd : [ONNX][TypePromo] Explicit type promotion pass (#104064)
eb4a1a07af : Upgrade HuggingFace to v4.30.2 (#104657)
c500f1d13b : [CMake] Fix TORCH_CUDA_ARCH_LIST warning (#104680)
6970ffbbc7 : [HigherOrderOps] Clean up side effect handling (#104685)
4ad5081794 : [HigherOrderOp] Fix returning captured value (#104371)
8d65635378 : Prefixing DeviceType with c10 namespace to avoid name collisions (#104364)
296b45f9d3 : Cleanup scatter-related code (#103074)
63dc24b4a6 : Expose some APIs in FunctionsManual.h (#104684)
0bf39d5663 : [FSDP] Option for eval in fp32/bf16 (#104682)
e517b3651a : [pytorch] put more to pytorch_fmha namespace (#104628)
348dfc1cf3 : Update cuDNN to 8.9.2.26 (#104757)
8ca63ff9a8 : Revert "[inductor] Add single pass "var_unnormalized" reduction_type (#102486)"
1280b19827 : Revert "[inductor] Split ops.reduction into reduction and store_reduction (#102737)"
a2fe6953bc : Generate `nearbyint` for Round in tensorexpr llvm codegen, match `torch.round` result (#104430)
8ce3a18b6a : inductor: reduce compile time by reducing repr calls of quantize or Opaque tensor (#104696)
0ccdbbe233 : Add deterministic path for `Tensor.resize_` (#104300)
d64bada876 : Refactor funcol for readability and dynamo tracing (#104387)
456ecefd52 : [BE] Fix warning in top-level CMakeLists.txt (#104726)
8c13e96be2 : [dynamo] add logging artifact for traced graph tensor sizes (#104672)
c7c9aa797f : [dynamo] New logging artifacts for source code attribution (#104013)
8c0b9a2d69 : [ONNX] Export dynamic step size for aten::slice() (#104385)
c42fd73cf9 : Add functions to get and set default endianness in load() functions (#101973)
2efe4d809f : [hotfix inductor test] disable cpp vectorization codegen in fbcode for inductor (#104560)
b190f46514 : Allow NumPy code in torch.compile to run on cuda (#104699)
b073f6a5e8 : Revert "inductor: support cpu fusion path for bfloat16 amp (#104399)"
a358a9262e : [inductor] coordesc tuner bug fix with no_x_dim kernel (#104692)
c42de84708 : [quant] Skip some x86 quantizer tests for now due to time out (#104666)
202fb95c68 : [benchmark][export] Add torch.export passrate for TB/TIMM benchmarks (#104382)
f8aedf1efe : Revert "Optimize scatter_add/scatter_reduce in BFloat16/Half data type in CPU backend (#103427)"
c4cf90aad1 : inductor: fix assert error when load a bfloat16 inf constant (#104614)
4fafe0b74c : [export][serde] Hookup export upgrader with TorchScript upgrader entries (#104227)
6c1d959889 : [FSDP] Annotate modules for `fully_shard` (#104363)
7c8dded9db : [BE] QNNPACK Test - FC, use ASSERT_NEAR (#104651)
cbad55f6c4 : [BE] QNNPACK Test - Sparsegemm tests, use ASSERT_NEAR (#104650)
ce1a40519f : [BE] QNNPACK - Q8[g]avg, loosen threshold to allow fp compare to pass (#104649)
833faccce2 : [BE] QNNPACK Test - DQgemm tests, use ASSERT_NEAR (#104648)
5c2dc9b0b2 : Label for mem leak check (#104643)
315a77a02d : Add libxml2 and libxslt in docker image (#104663)
59b8d5be74 : [inductor] Split ops.reduction into reduction and store_reduction (#102737)
def7b3ed60 : Enable bitwise shift operations tests (#97150)
17ab4f85e9 : [c10d] Adopt allgather_into_tensor_coalesced for NCCL. (#103086)
0aa6486441 : inductor: reduce compile time for cpu backend by reducing weight conversion (#104402)
adf1405909 : [HigherOrderOp] Simplify design by removing reliance on name match (#104350)
69c4314945 : Add more child links to benchmark readme (#104627)
db1ac4e29b : fix functional collective's allgather for gloo (#104681)
b1ea0d90fe : [MPS] Set the default optimization level (#104661)
917ac30aeb : Revert "inductor: reduce compile time for cpu backend by reducing weight conversion (#104402)"
8e2e2d730e : [Quant][PT2E]Accelerate test of conv2d_add and conv2d_add_relu by reducing test configs (#104686)
ac9c2aa6ee : Use random inputs for mps extension tests (#104597)
6bfd507c15 : inductor: reduce compile time for cpu backend by reducing weight conversion (#104402)
434fcffa21 : [6/n][FSDP] Update _sharded_pre_load_state_dict_hook to use DTensor when use_dtensor=True in ShardedStateDictConfig (#104087)
a956b1c849 : optimize mimalloc build options. (#104497)
c3f29ed16e : Update cutlass submodule to stable 3.1 from RC (#104638)
22520964ae : inductor: convert view to reshape before doing fake_tensor_prop at freezing step (#104612)
13763f58ad : [vision hash update] update the pinned vision hash (#104677)
df281bf788 : Refactor unwrap_proxy() for proxy tensor tracing. (#104667)
d0e5c681f5 : [dynamo][ddp][ac] Fallback to single bucket when higher order op (#104639)
da7675621e : Optimize scatter_add/scatter_reduce in BFloat16/Half data type in CPU backend (#103427)
bf127d236a : Update xla.txt (#104671)
c46869a941 : inductor: support cpu fusion path for bfloat16 amp (#104399)
e802900bdc : inductor: move the CPU weight packing path after of AOTAutograd (#103851)
8c191d8eef : [dynamo][ac] Reland #104397 - Remove disable monkeypatching of utils.checkpoint (#104665)
0444f9f85b : [dynamo] Reland #104317 - Lazy disable_dynamo API out-of-dynamo (#104664)
d3589c9456 : reduce computation of batch_norm when weight or bias is none (#104616)
13ea3d8530 : [jit] Fix inspect.get_annotations usage in python >= 3.10 (#104485)
7e098f9559 : [inductor] Add single pass "var_unnormalized" reduction_type (#102486)
63755efb90 : Disable git fsmonitor daemon on Windows (#104662)
611febf6cf : [quant] Support integer implementations for max_pool2d (#104225)
a290cbf32b : Enable fused foreach Adam compilation (#104121)
01e6d64dd2 : [MPS] Fix unary ops over sparse-mapped tensors (#100765)
4005152b92 : [dynamo] Organize higherorderops variable trackers (#104565)
666aeaa313 : Preserve original co_filename when FX symbolic_trace (#103885)
4baac20117 : [BE] switch fprintf to fmt::print (#104640)
f00f1d4cfb : add fused support for xpu devices (#104517)
b5c2404116 : Expose TorchDispatchUtils Api for Extensions (#104619)
5b600dee19 : Properly preserve --tracing-mode when isolated minify (#104101)
3dc4adc7a6 : Don't build CUDA with debug info by default. (#102617)
0cee4e3c32 : Turn translation validation off on timeouts. (#104464)
40b8d10d5e : Re-land: Turn translation validation on for tests and accuracy runs by default. (#104467)
5ab2b27353 : Revert "Re-enable low memory dropout (#103330)"
fb1ad02833 : Support bit shifting `SymInt`s (#104318)
d3ba8901d8 : Adding precision issue note docs for `functional.interpolate` (#104622)
05eaf5ab51 : optimized the backward of index_select when dim = 0 on CPU (#102961)
3834582327 : [ONNX] Add autograd_inlining flag to torch.onnx.export (#104067)
c00dd43e43 : [pt2] add metas for `multilabel_margin_loss` ops (#104388)
a3aa4da154 : [pt2] add metas for `multi_margin_loss` ops (#104236)
ad58aba932 : [pt2] add metas for `adaptive_max_pool` ops (#104167)
54e320d4d1 : Revert "[dynamo] Lazy disable_dynamo API out-of-dynamo (#104317)"
40f53912cf : Revert "[dynamo][ac] Remove disable monkeypatching of utils.checkpoint (#104397)"
0c8323e4a4 : cmake: allow USE_SYSTEM_ZSTD (#104611)
ea4d5c4538 : [Quant][PT2E] Enable vec code gen for pair of quant/dequant (#104503)
12ca224662 : Add hacked_twin overloads for _unsafe indexing functions (#104127)
2385dad4b3 : Enable automatic_dynamic_shapes by default (#103623)
2abbed42ee : correct the generated code and corresponding text to make them consistent (#104596)
bfd995f0d6 : Revert "Specialize storage_offset - Does not cover automatic dynamic (#104204)"
e8174faa02 : cmake: respect USE_SYSTEM_LIBS when USE_NCCL is set (#104511)
52094a3454 : Correct warning message info in fork_rng (#104525)
5c580a9846 : [decomp] Add test tracking core ATen operators (#104262)
d62a80adc3 : remove ipex backend (#104329)
7ae100628e : Move most SymPy functions to their own file (#104556)
985cb5055c : [vision hash update] update the pinned vision hash (#104562)
2a21469a77 : [Quant][PT2E] Enable conv2d unary and binary recipe for x86 inductor quantizer (#98826)
8780bd6a01 : [ONNX] Use load_model_from_string (#104533)
07c60d11b3 : replace `AT_ERROR(...)` with `TORCH_CHECK(false, ...)` (#104534)
709c9b5c93 : Fix tabulate import error (#104468)
d7b5cd7d0b : Fix `mH()` to `mH` in Python examples (#104532)
e9d2d74f0a : [inductor] Add prims._inductor_bucketize and add lowerings (#104007)
0ac2666d72 : Advance docker builds to cuda 11.8 (#104528)
d6b1f12846 : Add onnx to common_methods_invocations.py approvers (#104530)
437bc5b1b7 : sparse_mask: backward support for sparse lhs (take 2) (#104341)
f353d17755 : Revert "[ROCm] reduce tolerance for triangular solve with well_conditioned set to True (#104425)"
9f7ad25c98 : [PyTorch][Dispatcher] Fix destruction order fiasco crash (#104393)
707d265db2 : [Inductor][Quant]Refactor load and store vectorization code generation with uint8 data type (#104075)
fcb53c1394 : Revert "[6/n][FSDP] Update _sharded_pre_load_state_dict_hook to use DTensor when use_dtensor=True in ShardedStateDictConfig (#104087)"
bd0f0f40a1 : [PT2][Quant] Enable symbolic shape in linear quantization (#104473)
4e27e6c160 : [vision hash update] update the pinned vision hash (#104490)
004ff536e8 : [ROCm] Fix circular recursion issue in hipification (#104085)
e865bc7da4 : add SM80OrLater checks to bfloat16 torchinductor tests (#104436)
b3e60ee052 : Fix broken torch._inductor.config import (#104477)
d6f1827181 : [Inductor][Quant] Add UT to combine dynamo export and inductor constant folding (#104245)
b1c31b1d26 : [pt2] metas and `SymInt` support for `max_pool` ops (#103951)
c4a6f86062 : [pt2] add metas for `max_unpool2d` and `max_unpool3d` (#103821)
f9aa004d39 : [ONNX][TypePromo] Materialize type promotion rules (#104063)
828b275740 : [exportdb] Setup website (#104288)
49af83cf44 : [6/n][FSDP] Update _sharded_pre_load_state_dict_hook to use DTensor when use_dtensor=True in ShardedStateDictConfig (#104087)
1de1bea60d : Back out "[Inductor][FX passes] Remove config.split_cat_fx_passes & A… (#104370)
9626604bdd : [inductor] Fix squeeze normalization pattern (#104434)
d982fdb5d5 : [FSDP] Rework meta device init (#104189)
93f5a82e37 : Add detailed requirement of compiler in README.md (#103819)
3ff111a4b4 : doc: fix fake_quantize_per_tensor_affine docs (#104453)
a5ca445f79 : Check for corrupted ivalues. (#104243)
f20fe674f9 : [easy][cuda] Removed the warp size hardcode on layer norm backward CUDA kernel (#104441)
8958f041be : Revert "Add forward mode AD to out-place foreach functions (#102409)"
c178257b40 : Don't limit fusions with foreach scheduler nodes (#104471)
ef7bc3e23d : [ROCm] reduce tolerance for triangular solve with well_conditioned set to True (#104425)
6929e9e947 : Use `int64_t` accordingly in `cunn_SoftMaxBackward` to avoid `int` overflow (#104270)
4de1ee6ba4 : Revert "Value range refinement using multi-variate expressions. (#97964)"
7acc4a2e86 : add generic func to get function defined in custom device module (#99048)
b5980c0b86 : [PyTorch Vulkan] add Vulkan support for `aten::masked_fill` (#104444)
d901dd94cb : [logging] add custom format option to logging artifacts (#104443)
53919d4bf8 : add named tensor support for custom device (#104401)
28720ad585 : Fix argmax and argmin clamp value on MPS (#104374)
36c4dad197 : [ET][XNNPACK] Add support for quantized LeakyReLU (#104309)
ddd7da7546 : Enable more tests (#104437)
032ea6a61e : [ONNX] Create stand alone diagnostic rule on nearest match finding in dispatcher (#104267)
a2a8b4d415 : Revert "Turn translation validation on for tests and accuracy runs by default. (#103611)"
b1e4378b05 : Migrate jobs from windows.8xlarge.nvidia.gpu to nonephemeral (#104404)
624d20c3de : kill inductor.config.disable_cpp_codegen in internal (#104351)
e799f565eb : [DTensor][TP][Random] Introduce TensorParallelRNGTracker to integrate parallel RNG state with Tensor Parallel (#103910)
7bc181d374 : [Xcode 15][caffe2] Avoid redundant redeclaration of 'constexpr' static data member (#104049)
da06920f47 : Replace all_gather in device mesh with functional collective equivalent (#104056)
77642da3b8 : Fix broken meta registration for torch.full (#104451)
0b62aca726 : Don't decompose aten.bucketize (#104396)
958bd3a549 : [fake_pg] remove init barrier env var (#104428)
56ef8ca054 : Fix recursive call error in `lift_tracked_freevar_to_input` (#104378)
e2ec0ba404 : Add forward mode AD to out-place foreach functions (#102409)
8457703e8d : lazy init device mesh in fsdp (#104447)
0ff9a82a4d : [profiler] Fix profiling PT2 w/ dynamic shapes & record_shapes (#104320)
ecca9591d5 : [quant][pt2e] Add reference representation for quantize/dequantize operators (#104395)
a704251628 : inductor: fix compile error of bfloat16 broadcast operation (#104319)
89decc3a10 : [vision hash update] update the pinned vision hash (#104449)
537a6c0651 : [dynamo][ac] Remove disable monkeypatching of utils.checkpoint (#104397)
2642412207 : Value range refinement using multi-variate expressions. (#97964)
ffb526a2e4 : Value range refinement using uni-variate expressions. (#97963)
e311bed2a8 : Turn translation validation on for tests and accuracy runs by default. (#103611)
d0509fe32d : Document how functional collectives work under eager/dynamo (#104386)
ffb1b4c462 : [inductor] Install guards on both cases of View.handle_negative_index (#103780)
d455d48744 : Add back in reduce_scatter_tensor_coalesced (#104345)
a993319a4b : [export] Dont run export guard hook when there is no graph (#104383)
76a91075ea : propagate pred guards in TorchHigherOrderOperatorVariable call_function for cond (#104379)
12f19b5dd9 : consider `CALL_FINALLY` non-jumping in `stacksize_analysis` (#103621)
a78bddac01 : Revert D46920584: Multisect successfully blamed D46920584 for test or build failures (#104269) (#104302)
a6b9a61a6a : Added a note to torch.round doc to indicate the return type (#97227)
4ab140902b : [docs] Fixed typo in grid_sample doctring (#104406)
ec85ab6157 : Adding aarch64 wheel CI workflows (#104109)
082832b0f8 : Revert "Add DSA to IndexKernel.cu (#104054)"
cbb9683e3b : [ONNX] Speed up export of large models (#103278)
193adde5f4 : Fix `UnboundLocalError` in `test_segment_reductions.py` (#104353)
f32593630b : Re-enable low memory dropout (#103330)
60e2a4a4a0 : [2D parallel] workaround for FSDP init issue (#104398)
8cad411d3d : Fix UntypedStorage pin error (#104355)
9dda446505 : Pin pytest linux dependencies in docker (#104281)
408cb45e14 : [Dynamo] Support threading.local getattr (#104292)
875f60399e : pre_dispatch tracing: support autocast and no_grad/enable_grad ctx managers, add a pre_dispatch_eager dynamo backend (#103024)
ebb8aa9c0b : Correct output_padding for quantized tconv (torch->onnx) (#104207)
04c0d85caf : [ONNX] Add op_level_debugging rule on validate_op_between_ort_torch (#104268)
5c12a810ac : [dynamo] Lazy disable_dynamo API out-of-dynamo (#104317)
2bb83cd45c : [dynamo][ac] Minor refactor for better code organization and a bugfix (#104276)
9154bbc999 : Fix CUDA Bazel build to optionally include gmock after #104255 (#104308)
f78b92f490 : fix an ASAN error in MKLDNN (#104331)
d4e51511a0 : Inductor cpp wrapper: add -ffast-math in linking flag (#104332)
732067e5c3 : s390x SIMD: propagate NaN value in clamp functions (#102978)
fea683491e : Make `torch._dynamo` lazy-importable (#104368)
d0a72ec5e4 : Translation validator for dynamo guards. (#102563)
3707fbf63b : [RFC]: Add test for graph partition after assertion ops functionalization. (#104287)
ede1f99904 : Add gelu vulkan function (#102762)
f2ea27e4a0 : Replace torch.has_cuda() call with torch.backends.cuda.built() (#104338)
e140c9cc92 : Fixes ROCM_HOME detection in case no hipcc is found in path (#95634)
8464a6a165 : [GHF] Better check for internal diffs (#104344)
998c07799f : [dynamo] fix deep nested closure cell KeyError (#104222)
98f00f881f : [inductor] convert layout of conv weight ahead of time for inference (#103642)
044a8e3305 : [skip ci] Fix the deprecating link to **our office hours** (#104339)
b81f1d1bee : Speed up cpp extensions re-compilation (#104280)
42b0bdd0c5 : [onnx] Convert aten::flatten with 0d input to onnx Reshape and 1d to Identity (#104089)
aaada2c4fc : Add DSA to IndexKernel.cu (#104054)
c866446d6c : [FSDP] Check module.training for _root_cast_forward_inputs (#104223)
ee19121931 : Change nn.Module.__getattr__ return type to Any (#104321)
2504af5ec9 : [cuDNN][cuDNN V8 API] Improve safety of `ParamsHash` keys (#104122)
35a8242226 : [Doc] Add sum reduction for CTCLoss (#100235)
0a7b6dd97d : [BE] Fix test_trymerge.py (#104343)
fde024b32d : [HigherOrderOp] Fall back on all new side effects in speculate_subgraph (#104077)
c06bb82ba1 : fix specialization when you pass an unspec int into slicing on a Python list. (#104142)
6493519fff : [Easy][FSDP] Remove misleading asserts (#104274)
ba9f6e6e92 : [FSDP] Validate `ignored_modules`, `ignored_states` (#104273)
cc27e6c0f9 : [FSDP] Fix `ignored_states` doc (#104253)
9db8ad7f1d : [FSDP] Support unfreezing params for reshard-only hook (#104186)
89fcfc1b8c : [Doc] linalg.ldl_factor: render the Shape of tensor A (#99777)
5cf3a99013 : sampled_addmm: backward performance improvements (#103544)
148960b8cc : [BE] Modernize C++ in MetalPrepackOpContext (#104312)
c2095af3f8 : make funcs argument type from torch.cuda.stream as torch.Stream (#104156)
f7fdaf8191 : Revert "Re-enable low memory dropout (#103330)"
2d14395f17 : Re-enable low memory dropout (#103330)
a8b63d4d1b : [dynamo] If UserDefinedObjectVariable.var_getattr() is a callable, try handling as a TorchVariable (#104231)
28d42e66e4 : [CI] Add DALLE2_pytorch to FORCE_AMP_FOR_FP16_BF16_MODELS (#104283)
54cb61f7d9 : enable ASAN on some tests (#103647)
27eecf32bd : Remove redundant dummy overrides (#103992)
361ef824ea : Handle custom higher order ops (#104285)
05ebd538d4 : Inference Horizontal Fuse Addmm (#100746)
9165d46b89 : DDP + C10D sparse all_reduce changes (#103916) (#104256)
c0aa442cb5 : [dynamo][higher order op] Relaxing too restrictive check for output to be a list/tuple of tensors (#104221)
945a257277 : [Quant][PT2E] Supported customized _EQUIVALENT_TYPES in Module Partition API (#102516)
298ff41a38 : [inductor] fix a bug in coordinate descent tuner (#104293)
280df5dc2e : [HigherOrderOp] Remove `_deprecated_global_ns` from some ops (#104105)
de7b6e55eb : Fix bad cudagraph interaction (#104196)
7bb40be143 : Fix fake tensor for private use backends (#103090)
1a8af1503f : Upgrade Pybind submodule to 2.10.4 (#103989)
c98896b76f : [quant][pt2e] Add more precise representation for quantized add (#104130)
7bf27cf163 : [Inductor][FX passes] Remove config.split_cat_fx_passes & Add config.experimental_patterns (#104208)
2da6cae43c : [core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135)
39868b0578 : [codemod][third-party][gtest] Migrate all fbcode gtest from tp2 to fbsource/third-party (#104255)
a66107a30c : [DTensor][Random] Introduce CudaRNGStateTracker to maintain parallel RNG state for DTensor (#103235)
84f578dcc2 : [ONNX] Cache AutoTokenizer in CI for test (#104233)
93b6b17dd0 : CUDA_HOST_COMPILER spelling fix in cmake build files generate method (#104126)
a73ad82c8f : conditional CMAKE_CUDA_STANDARD (#104240)
bf34ecd0c8 : [RFC]: Integrate assertions functionalization to export (after AOT export) (#103887)
936cd4f2f5 : Migrate exportdb to torch.export (#104260)
ab9577087a : Update accuracy for dynamo/torchbench CI - vision_maskrcnn, hf_T5_generate and dlrm (#104263)
ef285faeba : [ET][XNNPACK] Add support for quantized Multiply (#104134)
80ea3422f0 : [ROCm] Enable tl.reduce usage on ROCm (#104099)
99e87bb6a0 : [MPS] Dispatch outer bin edges selection function (#101792)
217a8b4697 : [MPS] Add MPSProfiler to histogram kernel (#101692)
c40f5edf7b : Change tools search order (#104214)
4d613b9a5f : [doc] Improve `mps` package description (#104184)
ad2905ad27 : Make _test_autograd_multiple_dispatch_view a view operation (#104149)
567b5e5b28 : Multioutput backward formula: allow conditional guards against saving (#103750)
18dacf7e79 : [Specialized Kernel] Update yaml syntax to use kernel instead of dispatch (#104070)
95707ac964 : [fake_pg] allow fake_pg allgather to do some simple validation (#104213)
6c1ccccf21 : Enable mimalloc on pytorch Windows (#102595)
803c14490b : Specialize storage_offset - Does not cover automatic dynamic (#104204)
c3e4a67905 : Refactor multigpu tests to `test_cuda_multigpu` (#104059)
572ff2779b : [RESUBMIT] Ensure ncclCommAbort can abort stuck ncclCommInitRank (#103925)
b76a040b18 : Revert "[core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135)"
7157dfdd4a : [jit] fix duplicated module input and output values in tracing module (#102510)
aea771de30 : [core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135)
968b7b5e0f : Initial commit of collective_utils (#101037)
41866a2ead : Fix missing mandatory device_type argument in autocast docstring (#97223)
6d2da6106d : Raise AttributeError in _OpsNamespace if __self__ attribute is requested (#104096)
f8ac569365 : [Inductor][Quant]Fix tile2d code generation issue with uint8 data type (#104074)
d2281e38ae : Adds the initial support for AOTInductor model and interface (#104202)
d8a2e7461b : Fix incorrect distribution of randperm with device mps (#104171)
994b98b78b : Add language server support for vscode (#104160)
981f24e806 : Add docstring to torch.serialization.register_package (#104046)
4a008d268a : REDO of dropout support for mem eff #102038 (#103704)
bfa08a1c67 : Revert "[core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135)"
d4a98280a8 : [Reland] Use missing-prototypes in torch_cpu (#104138)
436d035dc7 : Revert "DDP + C10D sparse all_reduce changes (#103916)"
a69f427f95 : aten: Ensure dim is size_t (#104201)
b93ed8164e : Add non-recursive module.to_empty option (#104197)
cf5262a84f : [core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135)
f7f415eb2d : [inductor] add cpp randint implementation to ir.py (#103079) (#104124)
fed5fba6e4 : DDP + C10D sparse all_reduce changes (#103916)
8a08733218 : update test_higher_order_op: grad test (#104179)
adf9595c2f : Update CODEOWNERS (#103934)
fb8aa721e2 : [Pytorch Edge][BE] Delete Sparse Qnnpack test failing since 2022 jul (#104073)
100aff9d4f : [export] Deserialize subgraphs. (#103991)
dd4f4bb47d : [exir] Initial serialization (#103763)
618cc82e77 : Stop Dynamo from peeking into wrap's body (#104076)
5364366f8c : Sparse Compressed mm avoid creating temp sparse (#104062)
bd8841101b : [ET][XNNPACK] Add support for quantized Sub (#104090)
edc9c0df7e : Fold Conv-Bn (#100653)
c1fffdcd5b : Change how AOTInductor's fx input is produced (#104123)
b2277075b0 : Fixed benchmark_utils.Fuzzer (#101553)
3c34a00d1b : Preserve all submodules/parameters/buffers when unpickle graph module (#104115)
58feefa4ed : add custom device support for special nn.modules (#103419)
7cef7195f6 : [draft] Update Multiprocessing best practices with CPU device (#103229)
86e0eda18d : Add partial derivative unit tests (#103809)
a9efbef716 : Add support for unique overload of foreach_pow (#104137)
e4d8504ebc : Unify GELU tanh approximation in _addmm_activation GPU back-end (#104061)
925f0a01c7 : Do not pass `stepcurrent` option unless in CI (#104135)
454f4e98a2 : [Pytorch] aten::expand (#103930)
466efccc8a : [Pytorch] aten::zeros (#103703)
6f78390607 : [vision hash update] update the pinned vision hash (#104133)
63f66d19ea : [Tests] Make `run_test.py` usable without boto3 (#104111)
483f748dd5 : [BE] Enforce missing `override` keyword (#104032)
202a9108f7 : Disable core dump when rerunning disabled tests (#104131)
75dab587ef : [dynamo] FSDP + AC + torch.compile (#103953)
b3ace213f2 : Heap buffer overflow at `source_range_serialization.cpp:73` (#103969)
344bab2669 : [RFC]: Functionalize assertions (#103757)
98d513cabf : [BE][Test] Remove `--pytest` option from `run_test.py` (#104125)
9f11ad6f86 : Extend torch->onnx export for quantized convolutional ops (#102759)
75108b2096 : Normal and Uniform return earlier without entering kernel<RNG> (#103507)
bd5b1788cd : Support printing inequality in ExprPrinter (#104104)
3e674b75b1 : Allow fusion of epilogue copies with upstream foreach ops (#104018)
47ff90fde5 : [pt2][inductor] update local caching and create `get_system` method (#104050)
ef3d1cfa16 : [cuDNN][cuDNN V8 API] Thread safety fixes for cuDNN V8 API (#103939)
933166e5c0 : Fix null pointer dereference on ROCm (#95810)
a45132e049 : Remove CUDA 11.7 Docker image build (#104116)
6ff4548b6e : [AMP] Support XLA:TPU (#96370)
c17bdb3247 : [C10D] Add functional collective reduce_scatter_into_tensor_coalesced. (#101023)
93e63fc0f6 : [Core] Drop GIL in THPVariable_item around aten op (#104103)
b5d1b42f99 : [bfloat16] adaptive_{max, avg}_pool3d (#89754)
7274582390 : Revert "sparse_mask: backward support for sparse lhs (#95165)"
3a823e4617 : [BE][CMake] Do not pass `-mfpu=neon` on Apple (#104078)
d1c367470b : [Specialized Kernel] Remove requirement for type_alias and dim_order_alias to be present (#104006)
8176cd8c0f : [ao] fixing quantized prelu workflow (#103455)
8a500f0be6 : Update triton commit pin for ROCm (#104035)
7320ef5651 : [quant][pt2] Add prepare QAT test for mobilenetv2 (#104068)
fd40abb706 : Minor bugfix for int inputs in minifier (#104100)
fb04b59fa2 : [functorch] Remove test_functionalize (#103748)
ce845dfe49 : [Reland][ET] Select used et_kernel_metadata only (#104005)
afc788a99c : Re-land _cycleviz.py: visualize reference cycles holding cuda memory (#104051)
f090fdf3b4 : sparse_mask: backward support for sparse lhs (#95165)
fcb7a47f8b : [Quant][PT2E]Fix the maxpool2d input observer didn't insert after QuantizationAnnotation API (#101941)
47894bb165 : [functorch] disable C++ Function under functorch transforms (#103957)
ec24f1e4cc : Simulate treespec flattening/unflattening (#101896)
92c0e49419 : Add num_elements_per_warp as an triton_config (#103702)
09d093b47b : Update foreach realization rules (#104017)
a152b3e3b8 : [RFC] Create functional aten assertion ops (#103751)
3c28431a0f : Feature: Dump compile_times when TORCH_LOGS=dynamo is enabled. (#104057)
23b7035b3c : [TP] Add an input resharding wrapper for TP and unit test for 2D + AC (#103334)
8c3958eddc : Fix lr_scheduler serialization contains bound methods issue (#102627)
c805b81fef : [vision hash update] update the pinned vision hash (#104082)
29e3fddb08 : Revert "Preserve original co_filename when FX symbolic_trace (#103885)"
5a97c947c6 : Fix optimizer grad mode state interaction with dynamo (#103952)
22eaedacd3 : [nccl] Do no skip send/recv 0 byte tensor (#103140)
0330f67b22 : Remove ExportGraphModuleMixin. (#103786)
4624afaa30 : use reset_running_stats in swa_utils.update_bn (#103801)
75716fb060 : [export][serde] Add opset version check and upgrader API (#103238)
6463c55ef8 : [inductor] Limit window for horizontal fusion (#104024)
6bda97e2c1 : Raise type error message for `interpolate` if `size` contains non-integer elements (#99243)
51664489ba : fix upload alerts to rockset (#103995)
4e204ff87b : Added is_xla (#103100)
49dc26435f : [BE]Fix @parametrize not working when using @with_comms in DTensorTestBase (#104065)
a3ac258291 : Pass in .so name via lower setting (#103968) (#104015)
8d9581a390 : Remove foreach triton workaround that is no longer needed (#104016)
1f1fb58b8a : [dynamo] Fix TimmRunner typo in benchmarks (#104052)
0d5f1cb666 : [quant] Add torch.flatten to executorch backend_config (#103988)
f044613f78 : Back out "Revert "[DDP] multiple forward support for static graph (#103487)" (#103873)" (#103938)
10ad74cbec : Update SavedVariable to support saving non-input leafs (#104039)
d7994dfd07 : [inductor] Add triton_helpers.any instead of reusing max (#103974)
303ff84b04 : [quant][pt2] Update special qspecs after QAT rewrite (#103970)
c16a28860f : Reenable disabled tests by pr body (#103790)
7ac1c64bc4 : Exclude _nvfuser from test collection (#104003)
5847cb55e4 : [PyPer][ET] Refactor EG to ET (#99694)
ec922efe3b : [inductor] fix a failed test for layout optimization (#103984)
b5594f7df0 : Revert "Use missing-prototypes in torch_cpu (#103725)"
4aee0fef11 : Heap buffer overflow due to wrong loop condition in torch::jit::unpickler (#103667)
f27a9129e7 : XFAIL test_MaxUnpool_index_errors CUDA slow tests (#103905)
abd4ee8150 : Specific namespace for mha (#104001)
d2d3394c21 : [pytorch/tensorexpr] Create LLJIT instance with an ObjectLinkingLayer (#103824)
f818036f85 : Fix test_addmm_gelu assertion on Windows CUDA (#104031)
7b3b6dd426 : Revert "_cycleviz.py: visualize reference cycles holding cuda memory (#102656)"
ab9ea0d0f2 : Bump numpy from 1.21.6 to 1.22.0 in /benchmarks/dynamo/_onnx (#104014)
1c33c398c7 : [FSDP][state_dict] Add a summary log when finishing state_dict (#103784)
ab8fc41e2f : Support bfloat16 dtype for CUTLASS-based semi-structured sparsity (#103978)
5eb7325bc7 : Add autocast support for IPU (#103890)
0d653730ce : Refactor bits for the codegen cache (#103452)
b663a41b51 : add onlyprivateuse1 decorator for test framework (#103664)
4143b6b89b : Add torch_dispatch and modes to extending.rst note (#102087)
e9705c52ac : [pt2] add metas for `_pdist_forward` and `_pdist_backward` (#103817)
e48851033a : [pt2] add metas for `pad` ops (#103815)
f9c64a1156 : [debugging] aot_eager backend to use the min-cut partitioner (#103555)
613970eb05 : [5/n][FSDP] Update _sharded_post_state_dict_hook to use DTensor when use_dtensor=True in state_dict_config (#103921)
34336bd625 : [PyTorch Vulkan] fix the position computation with the consideration of channel padding (#103908)
2d528625d7 : Make PyTorch compilable without XNNPACK (#104004)
b689128db3 : Fix an UBSAN error (#103900)
bffcfa9628 : [ONNX] Separate fx _type_utils from torchscript exporter (#103942)
c575f748ab : [MPS] Remove unnecessary PSO checks (#103244)
dba67f71c9 : _cycleviz.py: visualize reference cycles holding cuda memory (#102656)
73c927f901 : Improve debuggability of activation checkpoint (#103859)
dc15b4c838 : add workflow dispatch to upload-alerts.yml (#103972)
518abe8b7e : Revert "Migrate exportdb to torch.export from torchdynamo.export (#103861)"
5f88dd3e47 : Link new PyTorch Contributing Guidelines from CONTRIBUTING.md (#103986)
c40fa8b614 : [inductor] remove `fft` and `svd` ops from `fake_incorrect_kernels` (#103616)
fb6173a4ac : Migrate exportdb to torch.export from torchdynamo.export (#103861)
430cb3e160 : [PyTorch] add Vulkan support for `aten::tile` (#103944)
41cc526b19 : Avoid unwanted type promotion in `tensordot` (#103917)
3535c634d1 : Eliminate c10/util/array from PyTorch (#103893)
58d11159bd : Revert "Reenable disabled tests by pr body (#103790)"
c1a49823cd : [ONNX] Bench torch.onnx.dynamo_export and torch.onnx.export under dynamo bench (#103135)
ba6b1ae43a : Fix group norm mixed type error (#103360)
2237b4ad75 : Reenable disabled tests by pr body (#103790)
ede1965f5d : Enable additional inductor test suites on ROCm (#102270)
cd05c3b98c : [BE] Use `TEST_MULTIGPU` from `common_cuda.py` (#103982)
eed287ec19 : [android] Only pass -mfpu to armv7 (#103929)
626d8548df : Revert "add override to Caffe2 (#103795)"
13664bb535 : Revert "add oncall info individual info to failing alert job alert (#103915)"
08a7d60a46 : Revert "[Reland][ET] Select used et_kernel_metadata only (#103705)"
b7ae40f4c8 : [min-cut partitioner] Disable a heuristic if graph has recomputable ops (#103635)
3912b722f3 : Upgrade LoggingTensor mode and add traceback collection (#103734)
09fdea8564 : Fix autograd issue with identity conversions (#92022)
7fb2a928cf : fix hpu storage serialization (#101680)
9590228303 : Fix device of lengths in pack_padded_sequence when the default device is GPU (#103967)
c3c03e7cb8 : Reland of https://github.com/pytorch/pytorch/pull/101818 (#103888)
8b418f197c : [decomp] Add decomposition for torch.renorm (#103858)
c0596ffe85 : improve repr for pytrees (#103945)
ec8aa6e592 : [Easy][FSDP] Fix "column" -> "row" in PG example (#103975)
a2d001d4dd : [FSDP][state_dict] Use _get_module_fsdp_state_if_fully_sharded_module for state_dict (#103783)
591981c5e2 : [inductor] Lower diagonal, diagonal_copy and diagonal_scatter (#103755)
a61096fb94 : [decomp] Decompose logaddexp2 (#103765)
1c79003b3c : Enable addmm + GELU epilogue fusion via cuBLASLt (#103811)
1b0d23708b : add oncall info individual info to failing alert job alert (#103915)
0beec88c93 : Inductor support for all_gather_into_tensor_coalesced. (#98643)
2adfd1315a : [export] Serialize subgraphs. (#103901)
6fd358e7f7 : [ONNX] FX Dispatcher Test (#103971)
61cd605813 : [decomp] Don't call .item() in aten.fill.Tensor decomp (#103880)
785d472861 : Skip Tensor-Tensor ops which have a Scalar input (#103928)
ae1ed27756 : [codemod][numpy] replace np.str with str (#103931)
72f09faf10 : remove CUDA 11.7 builds (#103904)
17ef983516 : skip torchinductor test_data_type_propogation if C++ compiler is not available (#103920)
223f232928 : Fix shape function for transpose convolution (#102139)
678ce61cdb : s390x simd: update abs() functions for vectors of complex numbers (#103850)
dbbf24decd : Fix counter resetting in pad mm (#103918)
873f772df2 : [quant][pt2] Fix QAT convert for resnet18 (#103759)
f73ff54f9a : Use torch._foreach_lerp for SWA update (#103550)
3cfd677b1f : fix inference mode / PyDispatcher / Functionalize interaction (#103275)
106d3f0115 : [AOTAutograd] make _unsafe_view() logic happen during the runtime epilogue (#103919)
7ce932a92c : Add signpost_event to dynamic_shapes (#103882)
716b3b893d : Use missing-prototypes in torch_cpu (#103725)
d552c271db : [pt2] grad support (#102264)
6d2887cc06 : Reland "Move tensor grouping to ATen" (#103912)
b9f81a483a : Preserve original co_filename when FX symbolic_trace (#103885)
6b1d6750b9 : [FSDP][state_dict][BE] Remove outdated and fixed TODOs (#103782)
1192f5ac46 : [FSDP][optim_state_dict] Cleanup the unused optimizer state_dict APIs (#103781)
e737a8486f : Revert "[pt2] grad support (#102264)"
2642f31e4c : Make `torch.empty*` deterministic by filling with NaN or max int value (#101849)
d8352312f9 : tf32 threshold fixes for various tests (#103138)
85b83954c8 : [pt2] grad support (#102264)
02f28de408 : [dynamo x fsdp] Simplify stream logic handling (#103902)
39a22e2791 : softmax: Triton kernel for BSR inputs (#102095)
ee83c646bb : Replace `_prims_common.check` with `torch._check*` (#103240)
f3c3d12efb : [vision hash update] update the pinned vision hash (#103869)
e5e9d563c2 : Lift user defined attributes into inputs for certain cases (user defined types and tensors) (#103386)
8c2effcaf7 : Fix bug for buffer reuse (#103720)
c9256ac609 : add branch and sha info to alerting schema (#103631)
a4b9872187 : [PyTorch] add Vulkan support for `aten::repeat` (#103255)
0ae4c4d417 : [FSDP][optim_state_dict] Avoid calling optim.state_dict() to get the initial empty states (#103609)
8ce4fee68d : [BE] Use C++17 features in ParamsHash.h (#103911)
a475ea4542 : [fx] change from #users to num_users in graph printout (#101140)
f83ebfe1bb : [FSDP] Improve support for CPU tensors. (#103171)
8b37821813 : make balance check in DP only for cuda (#103311)
4bd14d97f8 : s390x simd: switch clamp min and max order (#103849)
f7737bb96b : Revert "Ensure ncclCommAbort can abort stuck ncclCommInitRank (#103264)"
d06fc1bfda : [PyTorch] Add Vulkan support and tests for at::softmax along all dimensions for 4-dim Tensors (#102988)
8391618b99 : [Inductor][FX passes] Pre grad pass modified graph should be topological sorted (#103794)
974525c053 : De-register forward hooks upon exiting flop counter context (#103744)
54ff8ffedd : Add Thiago Crepaldi (ONNX) to CODEOWNERS (#103894)
3a53dbae2a : Update viable/strict script to ignore unstable jobs (#103899)
036cda415f : Change HigherOrderOperator default namespace from global to 'higher_order' (#103870)
3ca8542dff : Fix _saved_tensors argument issue in test (#103594)
d52d1fd5ba : add description for unexpected case (#103500)
f730e22b5b : [cpp] remove redundant code (#103808)
e031dd23b0 : Revert "To add brief intro for CPU backend optimization (#103666)"
2722c52e52 : Allow Unequality in top level IR too (#103746)
50d8cf27e1 : Fix annotations on `torch` function signatures (#103807)
013ffe457e : To add brief intro for CPU backend optimization (#103666)
b1ddd5a293 : Revert "[DDP] multiple forward support for static graph (#103487)" (#103873)
7b6dc72ffa : Revert "[decomp] Decompose logaddexp2 (#103765)"
a39466c934 : Modify DeviceThreadHandles.h file for device generic. (#95133)
bab21d20eb : [decomp] Decompose logaddexp2 (#103765)
d4b85f3031 : Support params/buffers inside cond and map (#102310)
1be1f5090e : [Dynamo] Fix broken NNModule comparison (#103812)
1dba81f56d : Abstract FX->ONNX translation through FxOnnxInterpreter (#102810)
724a1ba2de : Tidy __all__ under torch._refs (#103712)
5d34656fd7 : Update dynamo sum dtype handling to match eager (#103037)
13ef0ec186 : Add "slow" tests to list of disable conditions (#103856)
def1b57151 : Update datapipe.py (#103834)
55814bb46e : [CI] Limit service jobs to Pytorch org (#103853)
3e42854caa : [xla hash update] update the pinned xla hash (#103827)
9832cfbbfe : Quantization oneDNN backend only support VNNI CPU (#103653)
7b3242d5f7 : [PyTorch Vulkan] fix bug of `aten::cat` for concatenation of 3D tensors at channel dim with channels as multiple of 4 (#103718)
79fe3aef2f : inductor: fix issue of computing index_expr range (#103147)
01abccf63f : inductor: fix CppTile2D bf16 store compiler error for cpp backend (#103659)
adeb63de95 : [CI] Fix a bug that bfloat16 is also used for dashboard training run (#103816)
15eed5b73e : [Oncall][MTPG] Fix flaky test multi_threaded - test_broadcast_object_list (#103568)
59a01c49ee : [Reland][ET] Select used et_kernel_metadata only (#103705)
f5f020adb0 : add override to Caffe2 (#103795)
0a7351e9ee : [Doc] Fix torch.UntypedStorage.mps() doc (#103797)
1b16ac7481 : Add A Pass to Fold Tensors With a Uniform Value, match sdpa on a few models (#103600)
dbc8eb2a8f : [Quant][PT2E]Enable x86 inductor quantizer (#98730)
2357498a8c : s390x simd: ensure that vectorized complex constructor behaves same to x86 (#103426)
a2988c9e6a : [CI] Switch inference accuracy and performance tests to bfloat16 (#103535)
918fe519a0 : Use the new analytics ID (#103766)
9541053cca : [dynamo] support FakeTensor for SYM_INT/SYM_INT_LIST/INT_LIST param in python-to-cpp argument parsing (#103448)
b34ac35b77 : Revert "Use hipsolver for default svd case on ROCm (#103540)"
750cbb299b : [RPC] Check stack for emptiness in interpreter (#103327)
f1b367c418 : [BE] Nested namespace in `ATen/native` headers (#103753)
fd4beb7a05 : Better function annotations for `nn.functional` (#102918)
36ff9879de : update multipy pin to not use install options (#103758)
d80174e2db : Do not materialize entire randperm in RandomSampler (#103339)
67babf7a45 : Enhance decorator _use_grad_for_differentiable (#103567)
5875a2fb3c : [Inductor][FX passes] Forward fix an internal unit test failure. (#103739)
8fc687f7ee : Add activation functions (ReLU and SiLU for now) for structured sparse linear operator (#101339)
0da38409a0 : [gloo] Make it possible for gloo TCPStore to take over an existing socket fd (#103478)
2bc56bec07 : [quant][pt2] Handle literal conv args in convert QAT (#103731)
08a054649c : [operator_compile_check] Add FakeTensor testing (#103595)
23c143400e : use mutable_data_ptr for grad_input in backward passes (#98999)
0a4a7d4b26 : Use hipsolver for default svd case on ROCm (#103540)
b27c3558a4 : [RFC]: Create aten native op for constrain_range (#103346)
df814484f4 : remove dynamo fake param/buf check (#103574)
ae78e80123 : [memory_viz] fix javascript url (#103741)
ad4ee297ed : allow cpu scalar to be moved to xpu in masked_fill (#103645)
d3971f2d15 : [ONNX] Support aten::hstack and aten::vstack (#102872)
f889c886d4 : [export] Make pass base composable (#103701)
0411fc6ab6 : [ONNX] Support aten::atleast_1d and aten::atleast_2d and aten::atleast_3d (#103061)
703875e364 : [Reland][Dynamo] VariableTracker.recursively_contains should be updated correctly when mutation happens (#103564) (#103717)
b287cb816c : inductor: make the vec_transpose tiling stride independent of out_idx and tiling_idx (#103651)
19b3e07fe0 : [memory_viz] Unified viewer (#103565)
346feb6b56 : [memory_viz] profile_plot generates snapshot objects (#103497)
efc3bcceb1 : Move memory viz templates into separate javascript files (#103474)
69969e52c3 : Cast computation_node_input_size to int (#103677)
bcf2becaf2 : [vision hash update] update the pinned vision hash (#103721)
d1effcd4a9 : Don't apply automatic_dynamic_shapes if we force tensor to be static (#103673)
39ba2e6226 : Allow for sympy.Expr in tensor lowering in inductor (#103678)
dad29f906b : [quant][pt2] Fix no conv bias in convert QAT (#103298)
a52b6f086d : [export][serde] Add validator to compare deserializer opset version with model opset version (#103691)
1f5ee39c6c : [reland][inductor] Make clone_graph copy node name as well (#103688)
806a642eb1 : update README.md to reflect current build from source status on master (#92729)
38f35b4fc3 : Add some missing disabled functions (#103662)
ecf4ce7a0e : Silence CUDA graph tree cuda warning (#103636)
03881b0c92 : Ensure ncclCommAbort can abort stuck ncclCommInitRank (#103264)
1985c490fe : [inductor] Fix tags for inductor random ops (#103648)
8c54cd434f : [inductor] Fix allow_buffer_reuse=False (#103630)
7c152376b7 : [Easy] Dont truncate cudagraph error msg (#103693)
5f979d400c : [inductor] let coordinate descent tuning respect max block size (#103660)
155691a7d9 : Implement meta functions for rshift and lshift (#103637)
6f655d4195 : Add symbolic tracing support to torch._dynamo.export (fake input + weights) (#100017)
f61b248d5b : [BE][Functorch] Use nested namespace (#103685)
def01eafc5 : [BE] Remove unused `dim_plane` from `reflection_pad2d_backward_out_template` (#103680)
8553f9c896 : Revert "[ET] Select used et_kernel_metadata only (#103658)"
22e8a61d9b : Implement coalesced reduce_scatter_tensor (#103561)
da7ca82121 : [inductor] Store real inputs to be used for cpp wrapper codegen (#103289)
ed3a61afcc : Add automatic_dynamic_shapes test configuration (#103598)
480d20cac1 : [ET] Select used et_kernel_metadata only (#103658)
0cd6ebd704 : optimize replication padding performance on CPU (#102255)
d1cecd9c32 : Add assign kwarg to module.load_state_dict (#102212)
73be9842be : Revert "[Dynamo] VariableTracker.recursively_contains should be updated correctly when mutation happens (#103564)"
9f39123d18 : Allow to continue when fail to configure Windows Defender (#103454)
3e9eaa1a12 : [GHF] Fix regression
e6108e8533 : [caffe2] Create deterministic zip archives (#102903)
90ef8d58cf : [export] Serialize metadata (#103274)
7b5f8988a2 : [GHF] Auth when trying to fetch labels (#103679)
f2900420da : fix missing-prototypes warnings in torch_cpu (Part 6) (#101845)
e75f7994e1 : Fix `Dirichlet.log_prob()` when x=0 and alpha=1 (#103605)
2f893d04c8 : Implement adding bias vector into structured sparse linear operator (#100881)
e56cdfd74b : [MPS] Handle deserialization more permissively (#98834)
bc6ec97e02 : Switch dynamic_shapes to True by default (#103597)
5642b5a36f : enable performance-noexcept-move-constructor in clang-tidy (#103593)
f0360c99ca : Properly account for empty lists in symbol_to_source (#103633)
96c23fe212 : [dynamo][numpy] Add support for builtin functions (#103457)
da21273ad5 : inductor: support rsqrt for dynamic shape (#103579)
5efdcd5802 : Handle long Docker image name when building Docker image (#103562)
1e108d9c21 : enable more ASAN tests (#101483)
17217d367f : Inductor cpp wrapper: support Constant in input (#103496)
90ee6a7354 : [PT2][Quant] Update op names for decomposed quantized lib (#103251)
5211fad738 : cm3leon_generate is at edge of timeout, so bump it up (#103607)
b4056ba744 : chore: Update ModelReportObserver variables to buffers (#97971)
00546333a5 : Register more foreach op lowerings (#102654)
6d570ccd59 : tf32 context fixes for various tests (#103137)
2e65354880 : Fix inductor-perf-compare (#103538)
3d6fd07c46 : Revert "[inductor] Make clone_graph copy node name as well (#103409)"
d6da649a1b : [benchmark] hf_T5_base - torchbench original batchsize too large (#103442)
16c2090b2d : [benchmark][compile] Limit number of bounding boxes to 5 (#103413)
2087d32811 : Revert "Support params/buffers inside cond and map (#102310)"
ddf4cd69ec : Delete ifdyn and ifunspec combinators (#103596)
e82616d900 : Add `generator` argument in `torch.randn` signature (#102075)
a0885dff98 : Link torch.cat in docstring of torch.stack and vice versa (#103421)
766f236bad : Support params/buffers inside cond and map (#102310)
600f7dc211 : add instruction to compile with new C++ ABI (#95177)
55cf5c00fa : Improve DDPOptimizer Logging (#103489)
9152d0e5be : Silence `has_cuda` deprecation in optim (#103610)
d0ff640ec8 : [Pytorch] aten::stack (#103344)
2eea3cb19d : Fix composable `checkpoint(use_reentrant=True)` with multi args (#103590)
c2952e8be9 : [inductor] Fix an expression printer issue during generate_return (#103557)
dc3fa9e52f : Update optimizer tests to compile with fullgraph (#103559)
7dd0f525b5 : [FSDP][4/n]Update use_dtensor option for _optim_utils.py (#103599)
bd0ed940b7 : [activation checkpoint][dynamo] Wrap AC into Tag based higher order op (#102935)
df0505743f : [activation checkpointing] Tagging based min cut partitioner (#103357)
aece6705d1 : Move locals/globals to output graph, make it easier to access them anywhere (#103456)
d27bc34f4b : Simple Source traversal util (#103450)
6db21a9cf8 : Update clang-tidy install in CONTRIBUTING.md (#101247)
9946499228 : Continue simplifying dynamic shapes tests (#103592)
49dcf48e66 : [PT2][Quant] Change qat conv bn fusion code (#103556)
a60f6dbe69 : Revert "Add groups to dynamo benchmarking output data (#103268)"
69b09eca5a : optimize reflection padding performance on CPU (#102254)
717e63b7bd : [inductor] use aten.kernel.OVERLOAD_NAME instead of aten.kernel in python wrapper (#103576)
5c3556da94 : [Dynamo] VariableTracker.recursively_contains should be updated correctly when mutation happens (#103564)
0ca3c6f7d7 : [_memory_viz.py] Fix bug when using profile_plot (#103384)
6ff6b49039 : Revert "Register more foreach op lowerings (#102654)"
b1adaa8777 : [inductor] Fix no-xdim reductions (#103527)
80139fc2db : [DDP] multiple forward support for static graph (#103487)
780b24b27c : [DDP] Refactor _DDPSink to take DDP weakref (#103304)
a3a32c1be0 : [DDP] Rename num_iterations -> num_forward_calls (#103283)
2076a2ffa7 : [DDP] Rename state_dict var to ddp_state (#103282)
2d745b95d7 : [inductor] Make clone_graph copy node name as well (#103409)
7a2a006c9e : Remove dynamic_shapes test for inductor static weights (#103377)
45401ef745 : Enable float16 and complex32 support for sparse CSR elementwise multiplication operation. (#100394)
a980b19be7 : Revert "Remove dynamic_shapes test for inductor static weights (#103377)"
339007fe65 : operator_compile_check v0 (#103198)
149cd09221 : Refactor and improve AOTAutograd tests (#103197)
27a67d8699 : Refactor and improve make_fx testing (#103196)
53cb1a7d15 : Remove dynamic_shapes test for inductor static weights (#103377)
ccf56eca84 : [inductor] Fix is_broadcasted (#103514)
e9674d146c : [Specialized Kernel] Propagate Specialized Kernel Support through ComputeCodegenUnboxedKernels (#103113)
e3ee5b00be : Enable test sparse allreduce basics Windows (#103317)
8b015c166c : Don't test dynamic_shapes in tensor_always_has_static_shape (#103517)
593642d1d8 : Use CUDA DSA in caffe2/operators (#95299)
d991ce6da3 : [FSDP][3/N]_shard_utils update for dtensor state_dict support (#103479)
3c5ac4baa4 : [CI] Enable inductor dynamic accuracy test on cpu device (#103387)
f0832914ee : [Dynamo] Fix lineinfo generation on PY3.11+ (#103525)
193d8412e7 : [vision hash update] update the pinned vision hash (#103560)
674d18c124 : inductor: using int64 as index dtype for slice_scatter (#103511)
2e1369d7ad : [inductor] fix benchmark call for inplace update (#103547)
876161983d : `default` should be used as default value in `boolean_dispatch` (#103463)
cbea85b416 : [Pytorch] aten::zero_ (#103042)
8340762211 : Update lr_scheduler.py to check the type of eta_min (#97003)
2f5fef5912 : Refactor tests for dynamic shapes (#103542)
b7777c812e : extend serialization for tensor metadata (#99808)
ce0a511993 : Using dynamic allocation buffer and dynamic threads on scan with index (#103502)
fee01640df : Make DDPOptimizer handle subgraphs without outputs (#103488)
93b0410eef : Use CUDA DSA in ATen (#95300)
6cc0f1c20c : Checking for nullptr in get_model_bytecode_version (#97149)
0cd155b042 : [reland][quant][pt2e] Annotate GRU module (#103358) (#103526)
0254880015 : NCCL process group: avoid workEnqueue when capturing cuda graph (#103503)
25b6b95b2e : Fix freezing tests (#103531)
056bf951bf : Strengthen partially supported invariant of base for chained sources (#103445)
bc2caa7fdf : Add type hint for retains_grad (#103528)
d38b651d51 : [pt2] add `SymInt` support for `cosine_similarity` (#103400)
c07634436e : [pt2] add `SymInt` support for `bilinear` (#103396)
4a76fb49f3 : [pt2] add metas for `avg_pool3d` and `avg_pool3d_backward` (#103392)
8dc6001057 : [export] Serialize symbolic values (#103273)
876695d4ec : [ONNX] Add constant folding for Softmax op (#102861)
3804eb109a : Always register SHAPE_ENV guard (#103521)
ea384cd377 : torch.compiler public namespace (#102182)
3596a853b4 : Always apply new_empty special case in Dynamo (#103378)
51d21ffd8a : [FSDP][2/n] add use_dtensor flag to both StateDictConfig and OptimStateDictConfig (#103477)
72931759fd : Unified aa_filter and aa_filter_075 for bicubic upsampling (#103510)
71b560208c : [FSDP] Fix `device_id` when buffer-only module (#103504)
1628bbecb6 : Use free_symbols to determine if convolutions involve dynamic shapes (#103486)
38890e1d2b : Stop disabling ShapeProp with dynamic_shapes for mkldnn (#103381)
1506acebaf : Detect symbolic tracing_mode with free_symbols (#103515)
ddb682f616 : Enable Python dispatcher when ShapeProp with fake mode (#103512)
af7bd409be : Don't test dynamic_shapes in profiler (#103516)
05c01b9bfc : Register more foreach op lowerings (#102654)
5b33d39114 : [FSDP] Workaround for GLOO's lack of all_gather_into_tensor. (#103170)
b77f1b0f27 : Wrong type when exporting {zeros, ones, full, empty, rand, randn}_like ops to onnx (#103048)
e9f2921bff : Fix rerun disabled test uploading logic (#103476)
3ffac08271 : Fix bug in SplitCatSimplifier when next_user is an output node (#103338)
9591e52880 : Add vfdev-5 as reviewer for CPU Aten backend (#103524)
b00d388ada : Update test_misc.cpp (#97768)
cbe270d233 : Fix zeros_like for sparse tensors with batch dimensions. Add opinfo-based tests to like-functions. (#101215)
597e2a11a3 : indexing_dtype_strength_reduction more aggressive free_symbols tests (#103470)
63fe26809d : Implement all_gather_into_tensor_coalesced. (#98642)
4081e924a8 : Dynamically assign number of threads in innerdim scan (#103435)
f6b4106554 : [export] Automatically add label for export (#103458)
13777e3391 : Revert "[quant][pt2e] Annotate GRU module (#103358)"
b0a93c851c : Fix BUCK build after #103185 (#103446)
db07ba3a9b : Use size_t in THManagedMapAllocator (#103331)
23892d8ee4 : [quant][pt2e] Annotate GRU module (#103358)
6ed3c4499a : Fix fuse_custom_config_dict arg from being None (#102154)
45104cb67f : Different csv headers by bench mode on infra error (#103134)
5f77be8bbe : Refactor OptimizeIndexing (#100549)
88ebb2e321 : Windows FileStore skip timeout if the file path is invalid (#103247)
4c3799447f : Back out "Dropout support for memory efficient attention (#102038)" & "Two small mem_eff bug fixes (#103201)" (#103464)
7360d0f904 : Upgraded nightly wheels to rocm5.5 (#102242)
9bc0b79369 : [dynamo][numpy] Install numpy_pytorch_interop in ci jobs (#103447)
fa893f3f58 : Fix optim state_dict casting to allow step to cast to CPU (#102619)
666ec8160c : Skip test suite (#103472)
4a52694b08 : [torch.compile] Add explain as a backend #102053 (#103259)
2abad0c184 : Add dtype check baddbmm (#102659)
a18048d982 : Remove redundant fallback for view_as_complex (#103261)
2c313e7b99 : Revert "Record view stacks if running anomaly mode (#103185)"
c3d3165f16 : Enable uploading metrics and upload Test Reordering metrics to dynamodb (#102691)
72b7c4efe5 : [Profiler] Fix flaky test_memory_timeline_no_id (#103441)
58d2c66a70 : [activation checkpointing] Higher order functional rng op wrappers (#102934)
31ee1512d3 : [inductor] Update triton pin (#102736)
455f542ed9 : Add groups to dynamo benchmarking output data (#103268)
4935b3e0e7 : Make specialized attributes on Tensor mandatory (#103434)
056d92e2a0 : sparse.mm backward: performance improvements (#94991)
d083d444ff : Inductor Freezing (#100652)
54daf870bc : CUDA graphs overrides dynamic shapes and forces specialization (#103290)
6c6c897d6b : Add graph break logging option instead of config flag (#103202)
50c972bfd2 : [c10d] Add xpu to the default device supported by user specified backend (#103410)
49754f44ee : Rewrite size/stride/numel TensorVariable handling (#103438)
141828498c : [CI] Update inference accuracy test (#103361)
f22d99c784 : Update C++ frontend docs (#103451)
d997969b8b : [Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#103107)
0cb5bc3b04 : Revert "Move tensor grouping to ATen (#100007)"
3766c04736 : Add uint8 support for CPU images in interpolate(mode='bicubic') (#103252)
5ed618132f : Revert "change pre_autograd to pre_dispatch tracing (#101818)"
1b31665e78 : Make all CI commit pin changes trigger ciflow/inductor (#103443)
fc46f01b55 : Revert "Cleanup scatter-related code (#103074)"
1ef7d6790d : [ONNX] Fix onnx constant folding (#101329)
48e3ee29ff : enable missing-prototypes in functorch (#103391)
de354bf53e : Replace CUDA 11.7 small pip wheels with 12.1 (#103091)
ac3ce0a57a : Remove dynamic_shapes special case in SizeVariable getitem (#103380)
2eac8bd2b8 : [dynamo][numpy] Support ndarray methods (#97537)
18f203a567 : Clean up op BC check list (#103363)
df83fe5bf7 : [dynamo] graph break on nn.Parameter construction (#103262)
08f90b3481 : Revert "Update torchbench pin - torchrec_dlrm moved to canary (#103383)"
caecb55223 : Revert "Log functional_collectives apis to distributed logger (#103288)"
c3fdfca5da : Always create ShapeEnv, always apply unspec logic (#103302)
f4228e7037 : [xla hash update] update the pinned xla hash (#103416)
37359c36fd : Log functional_collectives apis to distributed logger (#103288)
f37be77813 : [Quant][XNNPACK] Delegate add_relu fusion (#103266)
8a744c31d3 : Up to 48% speed up using Sklansky method for innermost prefix scan algorithm (#103314)
0863e5503a : Handle nonzero via its meta registration (#103379)
114f99bba1 : Update torchbench pin - torchrec_dlrm moved to canary (#103383)
03101a227f : Remove not dynamic_shapes case from wrap_listlike (#103301)
900226f20a : add multi swa support for custom device (#103297)
daf75c0759 : [AOTAutograd] compare with stride hints (#103342)
4cfa06f706 : [BE] Deprecate `has_XYZ` attributes (#103279)
0496d70aa0 : [Profiler][Easy] Add log msg to assertEqual for flaky test_memory_timeline_no_id (#103326)
919c567c38 : Simplify has_unpack_var_sequence (#103324)
d61cd03b97 : Inductor cpp wrapper: support ConvTranspose and fix Convolution ir (#103308)
d67b676c51 : Remove config.dynamic_shapes test for tracing size calls (#103325)
cf8af57c4a : Make torch.compile(dynamic=True) not assume static by default (#99469)
f474497cd3 : [Docker] Update cc/c++ to point t clang/clang++ (#103350)
347463fddf : [cpp-extensions] Add clang to the list of supported Linux compilers (#103349)
00e16179f0 : [LibTorch] Fix `append_whole_archive` macro (#103348)
5c252f2c7c : [Inductor/cpp] Fix reduction on pre clang-10 (#103347)
414ec6ce97 : Turn off automatic_dynamic_shapes in prep for dynamic-by-default (#103320)
a8549357d2 : Add distributed category to TORCH_LOGS (#103351)
59ee6cd864 : fix soundness bug with unsupported constraints (#102897)
1b398297dd : Rely on repeat meta reporting dynamic shapes (#103294)
1d40b394e6 : Remove getitem dynamic shapes special case (#103296)
5987c52082 : Delete is_dynamic_shapes test (#103291)
7be2a6228d : Delete non-dynamic shapes export special case in guard creation (#103295)
1eb762c919 : [Inductor][FX passes] Normalize torch.cat for pre_grad fusion (#102951)
443edb9015 : [DOCS][DDP]Fix the simple of saving and reloading PowerSGD state and hook. (#102721)
fff5daf3ee : [Dynamo] Support methods of NamedTuple (#103217)
d84b63c4f4 : Properly respect automatic dynamic config for unspec int (#103321)
2b3d955ffd : [pt2] add meta and `SymInt` support for `linalg_matrix_exp` (#102945)
3a0f37735c : [pt2] bug fix: invert condition in `checkFloatingOrComplex` (#102944)
34ccd1dde6 : [Reland2] fix missing-prototypes warnings in torch_cpu (Part 5) (#102931)
90110b0e4f : Revert "Add distributed category to TORCH_LOGS (#103287)"
cde4657284 : [inductor] Support complex fallback for convert_element_type, _fft_c2c, view_as_real to support GoogleFnet with cpp wrapper (#103183)
f49b2f114a : [Pytorch] Add Vulkan support for aten::unsqueeze, 1d->2d, 3d->4d (#102987)
89632b56ff : Revert "NCCL process group: avoid workEnqueue when capturing cuda graph (#102542)" (#103341)
7550ec16a4 : Add support for dictionary with torch object keys. (#103158)
d1f24f73da : Revert "Make HigherOrderOperator stop appearing like torch.ops.* in FX (#103108)"
0b252aebb2 : Add distributed category to TORCH_LOGS (#103287)
d89dd05e4d : Revert "CUDA graphs overrides dynamic shapes and forces specialization (#103290)"
5aefa61d2f : Fix calls to unqualified format_to to not clash with C++20's std::format_to (#103130)
74a5d62d7c : NCCL process group: avoid workEnqueue when capturing cuda graph (#102542)
88aea179e3 : Cleanup scatter-related code (#103074)
c760f0e4dd : CUDA graphs overrides dynamic shapes and forces specialization (#103290)
b0392de2c3 : change pre_autograd to pre_dispatch tracing (#101818)
1c3a7d9a7e : Resolve TODO by deleting assert sparse cannot be meta on SymInt (#103299)
a02c573a89 : Record view stacks if running anomaly mode (#103185)
79e0a1eacb : Revert "Make torch.compile(dynamic=True) not assume static by default (#99469)"
2e21cb095a : Remove capture_scalar_outputs sanity check prepping for dynamic by default (#103292)
4a5d56b74c : Disable dynamo'd test_optim entirely (#103323)
6fa2d41dc7 : Add mmap option to `torch.load` (#102549)
74b7a6c75e : Move tensor grouping to ATen (#100007)
7108c035bc : Make torch.compile(dynamic=True) not assume static by default (#99469)
96fd283640 : Preserve CreationMeta when metafying views. (#103152)
c24b61bc20 : Enable torch._C._get_privateuse1_backend_name in Dynamo tracing (#103141)
6095a22cff : [inductor] add the ability to do heavier search for coordinate descent tuning (#99403)
2961ea80f5 : Deprecate "Type" and support more devices for save_on_cpu (#103245)
c037088ac4 : Debug Windows locked files (#103237)
4cc474dec4 : [dtensor] support torch.save/load with DTensor (#103106)
d31707a257 : Get rid of dim_groups attribute from DeviceMesh (#103105)
81b704eab3 : numpy1.25 deprecation: np.product -> np.prod (#103263)
f3553c508c : ImportLib py3.10 bug in AOTInductor (#103277)
4c03adc1f4 : [dashboard] Allocate 4 shards for torchbench (#103280)
8c584028a7 : add github action to upload alerts to rockset / aws (#102995)
bb8278731e : [FSDP][Easy] Remove redundant var def in test (#103270)
8e5b7ce5db : inductor: fix bf16 legalization issue for fp32 load with to bf16 case (#103080)
40dbbcab6c : Update error message with torch logging instructions (#102892)
d0c0e13b69 : [Specialized Kernel] Translate Kernel Assignment Logic from function.yaml to native_functions.yaml (#102576)
98a1e3a3e9 : Put back cuda 11.8 distributed tests (#103265)
481023fb6c : add huggingface to inductor docker images (#102881)
89d57f269f : [quant][pt2] Fix convert in Conv + BN + ReLU QAT fusion (#102993)
606fb882c4 : Dropout support for memory efficient attention (#102038)
05e91a50d9 : Manually generate guards for optimizer (#103121)
48056b168f : [FSDP] Reshard frozen params in backward (#101982)
b52ee80cdc : Revert "Add print statements to debug sharding error (#102713)"
cea899cd57 : Add early validation logic to dynamic_dim (#102982)
f1f13a35b0 : Fix GELU-related docstring formatting (#102845)
1d857586f1 : [ROCM] enable hipSOLVER backend for linalg.ldl_factor (#102665)
b4f3a6f58f : [Dynamo Hackathon] Add support for hasattr on TorchVariable (#103177)
c62fcedc44 : [cuda] Limit grid size for torch.cat kernel on aligned16 contig tensors (#103233)
39201ce025 : Make dynamo bench conditionally import DDP/FSDP (#103163)
591134f2a5 : [CI] Enable UCC in CI (#100395)
a1c26ba77c : Rename READEME.md to README.md (#103230)
4a72708d2b : [dynamo] Fix Autograd Function Classmethod bug (#103175)
a667b2ad1d : [codemod] Use C++17 [[fallthrough]] in caffe2/torch/csrc/utils/python_arg_parser.cpp (#103039)
40d70ba7ed : Remove a number of fixed skips (#103162)
3c896a5adb : [dynamo] fix torch.distributions lazy_attribute failure (#103208)
57c63aad10 : [c10d] Remove test for init barrier (#103223)
2a4fa25109 : [Profiler] Include more uncategorized events in memory profile (#101200)
675f2597fa : [reland][DTensor][3/N] add DTensor constructor function: full (#101436) (#103165)
fdca7f7c2f : Revert "export helper funcs in foreach ops (#102928)"
4833dc10b8 : [DCP] Rewrite read slicing to use a wrapper. (#99167)
39bf86ae90 : [dynamo] Support OrderedDict constructor with kwargs (#103192)
580958a338 : Revert "add github action to upload alerts to rockset / aws (#102995)"
a49aefdce2 : [PT2][Quant] In linear partition include functional.linear (#103186)
c9681613b2 : [export] Unskip non supported tests. (#103168)
978a2f2b27 : export helper funcs in foreach ops (#102928)
91e82ba0a6 : [PT2 Dynamo Hackathon] Fix simple bug in inline dict (#103187)
49450fe021 : add github action to upload alerts to rockset / aws (#102995)
d2d03f0f44 : Make index_add_ error if input source shape is wrong (#100321)
52e310f7a8 : Enable torch.nn.init._calculate_correct_fan in dynamo tracing (#103182)
676210a139 : GHA setup-linux should always be pair with teardown-linux (#103216)
2868a5d0d1 : Two small mem_eff bug fixes (#103201)
9508e60c1e : [quant][pt2] Add prepare QAT test for resnet18 (#103020)
18e4a466db : fix amp in inference in benchmarking suite (#103220)
8585784a34 : [dtensor] fix allgather unpadding logic (#103219)
d5142c52d3 : [FSDP]Remove dim_group from device_mesh init (#103218)
6acb8d3d1c : [data_loader] Extra signal handlers in DataLoader.cpp should be added on top rather than replacing defaults (#103164)
194262ee49 : Make HigherOrderOperator stop appearing like torch.ops.* in FX (#103108)
47cfcf566a : Add selector.is_et_kernel_key_selected (#103184)
c37f02f61c : [PyTorch][HAM]: Deprecate functionalize (#103053)
0900782f0c : [inductor][easy] raise register spill threshold (#103190)
8c5d97d353 : [inductor] Fix correctness issues with pre_grad and context managers (#103051)
17737f9d0e : [DTensor] Allow DTensor support cuda-like device (#102468)
790f5732f6 : Fix Graph Break on builtin comparison on NNModule (#103176)
95fced4483 : Pretty dataclass dynamo explain (#102869)
2baadc2ade : Small operatorbench improvements (#103110)
e936277cc2 : [ROCm] force HIP context initialization for inductor UTs (#103149)
376cf7965f : Use gcc9 in linux-bionic-cuda12_1-py3_10-gcc9-build workflows (#103075)
c454534d25 : Enable torch.get_autocast_gpu_dtype in Dynamo tracing (#103166)
b5021ba981 : Enable torch.is_complex in Dynamo tracing (#103154)
2e8d2a2e69 : [quant][pt2] Add test for inplace add (#102867)
28f43c767c : Fix outdated log settings in doc (#102285) (#102286)
471407cf78 : [PT2][Quant] Use composble quantizer for embedding + static conv + dynamic (#103116)
3c0072e7c0 : [MPS] Prerequisite for MPS C++ extension (#102483)
0c9117a61f : [dashboard] Bring back inference perf measurement as nightly (#103151)
686d7e4c48 : [Inductor] Fix x.view(dtype) decomp and make inductor support it (#102920)
b8caa2b08f : Fix regressions caused by https://github.com/pytorch/pytorch/pull/103128
e930c0fc35 : [export] Initial deserialization v2 (#102716)
adcefcb378 : insert to dtype for fused mem copy scheduler node (#101042)
605a85249c : Fix graph break on boolean mask better (#103052)
2dafa70d61 : Add a little more error checking to minifier (#103057)
e4a42bcf56 : add foreach support for custom device (#102047)
07104ca99c : [c10d] Make it default that PG do not perform barrier after init (#103033)
3e988316b5 : update argument checks from padding layers (#102253)
5acf7e266b : [vision hash update] update the pinned vision hash (#103120)
a02a58d862 : [FSDP][1/N]Add device_mesh to FSDPstate (#102317) (#102551)
0769a50a5f : Disable dynamo on some opt methods and differentiable optimizer tests (#103066)
f760899864 : Teach Triton codegen to generate sqrt (#103084)
3f6f508646 : [PT-D] Update torch.distributed code owners (#103114)
821493715c : Back out "Remove `check` from `_prims_common`, replace with `torch._check*` (#102219)", Back out "Forwatd fix for D46427687" (#103128)
428bff842d : [benchmarks] Torchbench llama is not suitable for training (#103094)
2800a04a17 : Add device range helper and remove sm86 specific check for memory efficient attention (#102985)
6596cfa4d7 : [export] Remove example custom_object_type to type_reflection_method. (#103015)
27f4dc6c0a : [ONNX] Add FX exporter MaxPool tests (#102773)
5b700fc914 : Disable fallback for custom kernels (#101131)
8e0837cf84 : [PT2][Quant] Move embedding quantization to osss (#103088)
bf312f2d9d : [inductor] add a few tests to verify view_to_reshape pass is safe (#103034)
61736679cd : [Dynamo] No graph break for super(MyConv{1/2/3}d, self).forward and super(MyConvTranspose, self).forward (#102509)
038955f489 : torch.compile docs: "Profiling to understand torch.compile performance (#102862)
6261055471 : dst_bin_of_end_center is defined twice (#102755)
0279d0b611 : [Profiler] Update Kineto Submodule (#103031)
dfa64fddeb : [FSDP] Fix for optim state dict (#102901)
2405c59c75 : [BE] Use `value_or` (#103065)
08c4a442fd : Dont run test files that are already run in test_optim (#103017)
90fd90dd94 : Fix rocm sharding (#102871)
a867e6db85 : Add newline before minified repro path (#103083)
fbbde8df69 : [inductor] fix a numel expr codegen issue (#103005)
49577c7e47 : [inductor] Turn off autotune_cublasLt for cpp_wrapper (#103004)
44fdfd3222 : [inductor] Support select_algorithm with cpp_wrapper (#103003)
8824101fb6 : [PT2][Quant] Introduce composable quantizer (#102846)
eeb3c62117 : Add Wav2Vec2 HuggingFace support (#103009)
ba962fefea : Add parametrization version of weight_norm (#103001)
3a38acf18f : Move CUDA 11.8 CI jobs to CUDA 12.1, CUDA 11.7 jobs to CUDA 11.8 (#102178)
1fcc67fd8c : [pt2] add `SymInt` support for `linalg.tensorsolve` (#102466)
ec0aa965da : [pt2] add meta for `_linalg_solve_ex` (#102454)
4bda4a7e4d : [pt2] add meta for `lu_unpack` (#102937)
39f3514fa3 : Add an env PYTORCH_TEST_SKIP_CUDAGRAPH to skip all cuda graph-related unit tests (#103032)
b592e67516 : Use C++17 [[fallthrough]]; (#102849)
30e2764221 : remove c10::guts::{max,min} (#102952)
3a385656b5 : [export] Initial serialization v2 (#102707)
d7035ffde3 : Enable uint8/int8 mkldnn/dense tensor conversion (#102965)
7a42a03547 : fix use-after-free in test (#102734)
5fbbae4283 : [quant][pt2e][be] Cleanup prepare function in _pt2e (#103022)
872fdb329b : This extra message would have helped with Wav2Vec2 debugging. (#103002)
6408b85d88 : [vision hash update] update the pinned vision hash (#103038)
dda59162f1 : Native `rearrange` in `functorch` (#101957)
367b0ad062 : enforce `dtype` (reland) (#102996)
e26f5b2ac7 : docs: Render bullet points correctly (#103021)
9567aaebe5 : Package `torch/*.pyi` type hints (#103016)
258525093e : Exclude clang-format diff from git-blame (#103000)
117f9bb847 : [BE] Explain how to get consistent linter behavior locally (#102990)
12cd1dbba0 : Handle recursive tuple in clone_inputs (#102979)
4479e2fa19 : fix profiling ref in side panel (#103014)
6cb1455857 : [export] Change equality constraints to list of tuples (#102998)
3cb0ba2263 : [ROCm] MIOpen supports bias with bfloat16 (#95080)
1943bd0d7e : [Release] Add FAQ explaining release terms (#102618)
1c2dfdf30c : Add renorm forward-ad (#100798)
d89c719160 : Fix torch.compile side panels refs (#102407)
76a98abcb2 : Rework Inductor support for collectives. (#99765)
cca7b38564 : Don't allow skipping deepcopy (#102973)
7112880cc1 : Preserve leaf-ness and requires_grad-ness in minified repros (#102899)
719584600b : Merge original module attributes with attributes assigned by __setattr__ (#102910)
515c427941 : Enable clang-format on foreach / multi_tensor_apply files (#102887)
604a414bfc : [quant][pt2] Fix convert in Conv + BN QAT fusion (#102224)
4bb2b65ea4 : Turn on add_runtime_assertion by default (#102671)
ecb191683e : Revert "enforece dtype (#102802)"
9cabdff8bd : Update documentation to read FileSystemReader instead of FileSystemLoader (#102795)
f1f57e1e54 : trigger tracing for MTIA events (#102288)
2c2e4d5228 : Populate the eviction_policy field for load/store properly (#91316)
ee77d2b660 : Create public interface for torch.jit (#101678)
f79d2b45fb : Revert "Replace _dynamo.config with an object instead of module (#96455)"
258d398eec : Revert "torch.compiler public namespace (#102182)"
6ac3352a37 : [pt2] add meta for `_linalg_slogdet` (#102464)
ca18053913 : inductor: add fake mode tracing for cumsum graph pattern (#102820)
8e2a86c2a5 : enforece dtype (#102802)
881307abcf : [inductor] Fix a cpp_wrapper issue when fx_passes modified fx graph (#102851)
26bf8894b6 : [export] Replicate exportdb examples and tests in oss. (#102769)
a748be93df : [CheckpointWrapper] Warn on reentrant use (#102890)
5b623d6c6a : [Composable] fully_shard load_optim test (#102692)
88ce6215f5 : [FSDP/DDP] Unify _cast_forward_inputs (#102680)
957ea485c4 : [FSDP/AC] checkpoint_wrapper acccept auto_wrap_policy (#102672)
df40ec82dc : [FSDP][Docs] Document get_state_dict_type (#102658)
c6d0fe39ec : [FSDP] Document optim_state_dict_config in method (#102657)
beb7131c64 : [FSDP] Use INFO instead of DETAIL for warning logs (#102639)
4d516f44a1 : [FSDP][ez] Type optimizer correctly (#102637)
e66c498d2d : Log modules FSDP hooks fire for (#102508)
757791d1e3 : [pt2] add `SymInt` support for `linalg.vander` (#102469)
87cbfe957a : increase clang-tidy coverage to more c10 source files (#102902)
992bffe5a3 : [vision hash update] update the pinned vision hash (#102919)
9d20b47e47 : make device normalization more generic in faketensor (#102519)
85efacee07 : Add a new UNSTABLE category in trymerge (#102784)
3864207c2a : Replace _dynamo.config with an object instead of module (#96455)
eebe0ee141 : [Executorch][codegen] Add ETKernelIndex for aggregating all kernels for kernel (#102874)
0f672e8c67 : Revert "[DTensor][3/N] add DTensor constructor function: full (#101436)"
c46af25bb3 : Initialize optimizer in dynamo to avoid graph break and tracing slowness (#102640)
eb0971cfe9 : [quant][pt2e][be] Remove _input_output_share_observers and _reuse_input_obs_or_fq from QuantizationAnnotation (#102854)
8215468870 : Feature:To add --tolerance option to benchmark scripts (#102218)
1237502213 : Introduce fast path for cuda_equal (#102714)
4254e052fb : [BE] Fix `lintrunner init` on python 3.11 (#102889)
00f1bb0963 : Fix optimizer cuda health check graph break (can be done in the compiler) (#102765)
d92bb036a4 : [Dynamo] Fix if condition on UnspecializedNNModuleVariable (#102583)
a84bb2709a : Remove `check` from `_prims_common`, replace with `torch._check*` (#102219)
1035e33b38 : [dynamo] test attaching attributes to an OptimizedModule (#102781)
2fb182e054 : speeds up will_fusion_create_cycle (#102770)
39b04370db : Preserve coalesce state in sparse COO tensor serialization (#102647)
ec4a107f87 : [LLVM] Make changes needed for opaque pointers (#101396)
c304fddf68 : [dynamo][numpy] Support graph break for numpy ndarray (#100839)
2491aa53a8 : Make DataParallel generic (#102455)
ed113332e5 : [jit] Try to mitigate bad_weak_ptr error from type ptrs and print more error message. (#102822)
af50efca24 : add nested/sprase/quantized tensor key for privateuse1 (#102696)
a1142053f0 : [reland][quant][test] Fix broken PT2 import, add warnings (#102819)
5d57a348cd : Graph break on differentiable boolean mask setitem (#102843)
02dd1f38f2 : [pytorch] CUDA kernel for torch.cat on contiguous tensors with wide loads (#102815)
896d997dd0 : Remove incorrect THP{Cpp,}Function_traverse PyObject traversals (#102860)
9866408167 : Multihooks should not keep tensor alive in closure (#102859)
77f2883c41 : [Reland2] fix missing-prototypes warnings in torch_cpu (Part 4) (#102228)
86c7652503 : [inductor] layout optimization for conv (#99773)
4da88447ea : Disable grouping by dtype and device if compiling (#102771)
a8c1967cee : fix an asan warning of container overflow (#102735)
a6a030a8eb : [data_loader] Enable overriding signal handler in DataLoader.cpp (#101816)
a7efa0ce35 : Revert "Remove `check` from `_prims_common`, replace with `torch._check*` (#102219)"
c36d235db0 : Revert "implement __dir__ for dynamo (#102480)" (#102766)
fc218a8a13 : Fix typos in README of DTensor (#102813)
659f947583 : Try to use a bigger runner for android-emulator-build-test (#102855)
fb79d43649 : Remove `check` from `_prims_common`, replace with `torch._check*` (#102219)
2296ee08fa : [PT2][Quant][BE] Test refactor to be organize them better (#102704)
9978850cc0 : Update list of bots in upload_external_contrib_stats.py (#102786)
fdd6375a80 : Revert "fix alert upload action (#102840)"
624257890e : Reenable hf_T5_generate (#102818)
a53acafd2b : [PT2][Quant] Enable dynamic quantization (#102703)
7af47f139d : fix alert upload action (#102840)
b740d3b014 : Add comptime.breakpoint (#102758)
2301b624ae : [PT2][Quant] Update quconfig to contain input/qoutput activation qspec (#102702)
6a24cfd74c : Fix merge rules for XLA pin updates (#102844)
6492b7d22e : [PT2][Quant][BE] Refactor qnnpack_quantizer.py (#102701)
c64aae4287 : Move ROCm distributed jobs back to periodic (#102790)
8bbef821c3 : Add some unit tests from cm3leon involving repeat_interleave (#102733)
7c00d45312 : Reenable cm3leon_generate (#102793)
09b5b73b90 : [xla hash update] update the pinned xla hash (#101388)
8a52b5440e : Revert "upload alerts to rockset/aws through github workflow (#102646)"
b5840f99c3 : torch.compiler public namespace (#102182)
b76af5f9a6 : Fix broken link in Dynamo's guards doc (#102183) (#102185)
f22148f0ed : aotautograd: fix mutation bug when input is noncontiguous (#102767)
80f59cc61a : Change some py_context_manager_DEPRECATED to py_context_manager (#102643)
51e0f9e858 : Add missing decompositons/lowerings for logical/bitwise operators (#102566)
3897c479af : Add API to construct the functional variant of an op (#102293)
eaeea62ee4 : Make TestPythonRegistration clean up after itself (#102292)
72cdbf6a3f : Fix spurious "missing return" error in irange.h (#102785)
2e8ce910bb : [Profiler][1/N] add profiler support for custom device. (#101554)
1204463bd0 : inductor: fix bfloat16 reduction crash issue which store float value to bfloat16 (#102719)
c537acf46f : Make 1D integer sorting work in parallel (#100081)
c75e064dd6 : Disallow _foreach_utils.py, but allow it to be inlined (#102221)
1ca2e993af : [ONNX] Support aten::logit (#102377)
683753fb0f : upload external pr kpi for 10 days in the past (#102780)
ddd741f385 : upload alerts to rockset/aws through github workflow (#102646)
4d055ee5a1 : RelaxUnspecConstraint some more (#102729)
9fbfaaa57f : [c10d] Add flag value for direct teardown without comm abort (#102599)
5be1088ed6 : [c10d] Bridge c10d and gloo stores. (#102641)
4c9992d5ed : Inductor cpp wrapper: cache the wrapper (#89743)
0b7320315a : [CI] Move libtorch-debug CUDA build to CUDA-12.1 (#102756)
da963d793b : Fix aten.copy device mismatch bug in FakeTensor (#102664)
c7873522c2 : Add print statements to debug sharding error (#102713)
cf0aa38005 : Allow ORT backend for DTensor (#101914)
72ed22e806 : Revert "[Pytorch] Add Vulkan support for aten::unsqueeze, 1d->2d, 3d->4d (#102042)"
8b03a59e4d : Revert "[quant][test] Fix broken PT2 import, add warnings (#102644)"
f15af19877 : initialize max_stream_priorities in getStreamFromPool(bool) (#102739)
67792e175c : Add `-debug` suffix to trunk libtorch builds (#102764)
401109a243 : Use int64_t for indexing in `multi_tensor_apply` (#101760)
b8e2e0e907 : check users are actually recieved in upload to s3 (#102760)
6340aa5d58 : Skip test test_triton_bsr_dense_bmm if not TEST_WITH_TORCHINDUCTOR [v2] (#102660)
ca470fc59f : [BE] Make `test_no_triton_on_import` simple (#102674)
90b1b17c9f : Fix string concatenation with non-string (#102728)
ca1c1fdc91 : [C10D] Implement Store fallbacks for append, multi_get and multi_set. (#100768)
59532bd6f1 : [inductor] Fix a cpp wrapper codegen issue for _scaled_dot_product_efficient_attention (#102624)
bd0a4e2d83 : Serialize pytree to string v2 (#102708)
fb0729054b : Revert "[Executorch][codegen] Add ETKernelIndex for aggregating all kernels for kernel (#102565)"
9d9ce19d12 : [split cat fx passes] Normalize squeeze (#102294)
f18b9f86ba : [quant][test] Fix broken PT2 import, add warnings (#102644)
87c976b69d : Remove deprecated HIP flags (#102271)
30558c2896 : [functorch] Get test_functionalize to run on FB infra (#102695)
08150ee020 : Mark job as unstable dynamically (#102426)
ce8d31551b : [quant][be] Change return type for zero_point to be int32 Tensor (#102234)
c9ae705a22 : [Pytorch] Add Vulkan support for aten::unsqueeze, 1d->2d, 3d->4d (#102042)
2f96981e5a : [inductor] Reduce duplication of reduction combine functions (#99661)
d930bfc419 : [quant][pt2e][be] Add QuantizationSpecBase (#102582)
685505353a : Back out "Add PyObject preservation for UntypedStorage (#97470)" (#102553)
32360b48e8 : [C10D] Rewrite TCPStore client send path to minimize amount of syscalls. (#100742)
9d77949b9e : Revert "add foreach support for custom device (#102047)"
74f10b9ea5 : Switch most Python RAII guard usages to context manager (#102642)
dcf0c5fb6e : Use safe_is_leaf to test leafness (#102706)
d9c8f9a00d : add storage dtype for custom device (#102481)
e59db08699 : inductor: eliminate meaningless copy (#102089)
ce9923a1cb : [Quant][PT2E][Inductor] Lower quantized conv to Inductor (#101164)
b088ff4677 : add foreach support for custom device (#102047)
9fa82c90f7 : [Dynamo] Correct UserDefinedObjectVariable.var_getattr on function/method type (#102580)
92923aca61 : [TP] Use Stride inferred from local tensor in to_local bwd (#102630)
7a569f86a0 : [export] Cleanup constraints (#102666)
bebb8b7c1e : [inductor] use native fetch_add function for trivial types (#101931)
a548fab8a8 : Add size info to collective logs (#100413)
c5d4ee2d73 : [dtensor][simple] fix some comments (#102661)
49cd184f89 : inductor: improve the index range check for index_expr vec check (#102263)
49d0d1d79f : Update XLA pin (#102446)
b9294c7ca2 : Allow more inserts before reIndexTopology (#102312)
6b8e68ce7e : [pytorch-vulkan] aten::uniform (#102431)
20ca994a3e : Use size in python list (#102538)
0d2e7a1888 : support ConvBinaryInplace in Inductor cpp wrapper (#101394)
cdfba6fca7 : Add ngimel to Core Reviewers (#102668)
c84f246c83 : Improve time savings calculation math for test reordering (#102411)
693114c0a2 : Adds script to generate alerts for failing jobs (#102002)
398a5f4d4a : Clean up mypy (#102555)
8d7e082300 : [c10d] Add is_backend_available for c10d backend. (#101945)
e03800a93a : Add torch._utils.render_call, improve printoptions (#102623)
cba4004983 : Run libtorch in 2 shards (manual sharding) (#102554)
d9f75dded1 : [export] Add aot_export 1/N (#101490)
04c1c2b791 : Try to build the Docker image if it doesn't exist (#102562)
9a2df0a5af : [RFC] Add method to DDP to check for backward finalization. (#100773)
fc31b3a106 : Allow existing "Python RAII guards" to be used as context managers (#102579)
65631d4515 : [benchmarks] Use train mode for accuracy checks for HF models (#102578)
213e10dc3d : fix bug in trace model when out-operator has more than one output (#101563)
17166c2511 : python_arg_parser to allow fake tensor element in symint_list when in dynamo mode #95424 (#97508)
3ae42cb7db : adjust header inclusions in C10 as suggested by IWYU (#102467)
0ecca122e7 : [Replicate] Add unit test with replicate param names (#102401)
9331b7fa05 : Run slow gradcheck on the newer G5 runner (#102496)
c27cefccd3 : Faketensor hpu device normalization (#102512)
eaffd98880 : Enable hipSOLVER in ROCm builds (#97370)
46a925795e : S390x clang fixes for SIMD (#100874)
850b37cc3b : merge identical branches in cpu_index_kernel (#102601)
f47ee87765 : Fix ignored_states when they are passed as generators (#102575)
9f97b7c43b : Add integer overflow checks for large compressed tensor dimensions and nnz (#102530)
9fd14fcd09 : Improve repeat_interleave with scalar repeat value (#102570)
47b884a74c : [inductor] Revert a CI remedy for Triton compilation error (#102541)
d80d3b18d0 : nn.Linear with BSR inputs: spare the user from explicit Triton kernel registrations (#98403)
019c38624c : [Executorch][codegen] Add ETKernelIndex for aggregating all kernels for kernel (#102565)
7c2641d5f1 : apply constexpr and if constexpr when possible (#102471)
a5ddb72aec : Quick fix for keep-going + reruns (#102569)
3c0251a100 : [inductor] Fix issue with 0D reductions (#102568)
46691d4369 : [inductor][pattern matcher] Retain meta tags (#102462)
e7cc41772d : Add dynamo collections.deque support (#102412)
cdca25cdc7 : Fix warning couldn't find split args (#102561)
b4a49124c8 : [ONNX] Reduce exporter memory usage by removing intermediate values (#101148)
5324124eac : [profiler] Reintroduce forward-backward links (#102424)
73fd7235ad : add function specializations for the case of parameters in BFloat16 data type (#100233)
9edf65a821 : [build] fix compilation error on s390x (#101923)
cce58a43c9 : [MPS] Fix softplus with f16 input (#101948)
c3c1496143 : [dynamo][higher order op] Bugfixes to pass graph.lint (#102448)
8d5b14e907 : [ONNX] Don't duplicate model weights in ONNX export (#101134)
33a49eeae7 : [benchmark] Flag to switch on activation checkpointing for HF models (#102557)
6ac8a11746 : Switch cuda 12.1 docker images to gcc9 (#102380)
9ff1932d2b : [Dynamo] Save global autocast state to restore on graph break (#102415)
1a6ab8a5dc : Revert "Quick fix for keep-going + reruns (#102569)"
4f468646d9 : [PT2][Quant][BE] refactor test code to reduce duplication and standardize (#102497)
7f6edcf422 : Quick fix for keep-going + reruns (#102569)
f14ac74fce : [quant][pt2e] Add support for FixedQParamsQuantizationSpec (#102439)
168ae806d0 : [fx] Fix repr when arg is an OpOverload (#102547)
68e55bff62 : [minifier] add missing import (#102521)
95cdd58c8f : Revert "[pt2] add `SymInt` support for `linalg.tensorsolve` (#102466)"
463df86ce8 : Revert "[pt2] add `SymInt` support for `linalg.vander` (#102469)"
c28f8e314d : Add type hints in torch/distributed/utils.py (#102262)
05717895aa : [pt2] add `SymInt` support for `linalg.vander` (#102469)
b1b76f614d : [pt2] add `SymInt` support for `linalg.tensorsolve` (#102466)
0ba81ce8fe : [pt2] add `SymInt` support for `linalg.tensorinv` (#102465)
7378b6b9e3 : Add devcontainer support to PyTorch Project (#98252)
4d89489df5 : Move static checks of layers[0] (e.g., isinstance check) to model build time (#102045)
ff58d19c89 : DeviceMesh use dispatchable PG to support custom backend (#102336)
3ef4d697df : [c10d] default backend need to check for nccl availability (#102470)
b02f48b181 : implement __dir__ for dynamo (#102480)
704283d61f : Improve `clip_grad_norm` to use torch.linalg.vector_norm (#102429)
e71ab21422 : update triton pin (#101919)
5fa273c870 : ASAN: fix heap-buffer-overflow (#101970)
fcbdbd6682 : Fix silent nnz overflow for large sparse compressed tensors. (#102523)
77f97019b7 : Dynamo remaps legacy allgather to traceable one (#102232)
c58264c3e9 : [inductor] Support multiple symbolic numel expr in CudaWrapperCodeGen (#102093)
7042e10215 : Fixed issue with bicubic interpolation on uint8 input and antialising (#102296)
0f1621df1a : [pt2] fix typos in `checkFloatingOrComplex` errors (#102456)
e380d692dc : [pt2] skip `linalg.householder_product` tests on x86 macOS (#102460)
076f84c46f : [pt2] update tolerance for `linalg.pinv` `singular` tests (#102458)
999bae0f54 : Add padding check for use_nnpack (#92238)
00992ffa2f : [profiler] Global function for controlling fwd-bwd connection behavior (#102492)
0e72ada9bb : [vision hash update] update the pinned vision hash (#102495)
2cc6ae1926 : squash xblock for persistent inner reduction (#102444)
3c2519ab5e : Revert "apply constexpr and if constexpr when possible (#102471)"
461c03a93c : apply constexpr and if constexpr when possible (#102471)
319a1cb4e5 : [inductor] Replaced refs.op by torch.op in _refs/* (#102176)
fc0fed36d9 : [inductor] fix issue with ops.lookup_seed (#102485)
c6d9a0b9dd : [inductor] Handle floordiv and remainder in IndexPropagation (#102277)
e4e151d669 : [inductor] Inline ComputedBuffer computation when there are no reads (#102000)
b1bc8aecf5 : [inductor] erfinv: CPU/CUDA lowering (#101863)
0803b91867 : Revert "Replace int64_t with a size type in python_list.h when applicable (#101922)"
af1d437654 : Improve precision and performance for BFloat16 upsampling (#91169)
040d2cc969 : [dynamo] Some torchrec_dlrm related fixes (#101953)
53d1d301c6 : Enable CuDNN v8 frontend in RL (#102284)
81ac076bce : Revert "[FSDP]Add device_mesh to FSDPstate (#102317)"
af70fe9f3e : [PT2][Quant] Enable test_qnnpack_quantizer_conv_linear test (#102399)
0d876f7d43 : [PT2][Quant] Move observer sharing ops to use module partitions (#102398)
9fac5afbcc : [PT2][Quant] Move add/add relu pattern via module partitioner (#102397)
3d8f405022 : [PT2][Quant] Move maxpool_2d quant to use module partitioners (#102396)
d997e3aac6 : [PT2][Quant] Use module partitions for conv2d and conv2d + relu (#102395)
4cb6add471 : [PT2][Quant] Use module partition for fused patterns (#102394)
4c584acc5d : [FSDP]Add device_mesh to FSDPstate (#102317)
c3ea8cc58b : [pt2] convert `out` params in `register_meta` (#101344)
44e7f07ed4 : Replace int64_t with a size type in python_list.h when applicable (#101922)
3f4fee735a : add Half support for logsigmoid, threshold, elu, gelu, hardtanh, hardsigmoid, hardswish, hardshrink, softshrink, leakyrelu, softplus, glu, silu, mish, and prelu on CPU (#98745)
eda5abf5e0 : [quant][pt2e] Fix propagate_annotation after recent refactors (#102422)
6e3e3dd477 : Do not collect and skip non-disabled tests when rerunning disabled tests (#102107)
995ac703cd : [pt2] add `SymInt` support for `linalg.pinv` (#102367)
c9f4f01981 : Add security guards to avoid crashes in torch::jit module (#102156)
d7eec5628d : Fix some move warnings by gcc13 (#102353)
26f53bb8b0 : Deallocate workspace on thread exit (#102276)
5ee46afc05 : perf hint logging in inductor (#102250)
25058d5f66 : Modified logging threshold for memory profiling (#102243)
e344ff4113 : Support dynamo tracing collectives with processgroup arg (#102222)
ecd79b1fef : add additional stream priority for cuda streams (#101956)
88961e6d30 : Revert "[inductor] Inline ComputedBuffer computation when there are no reads (#102000)"
20e6ff375a : support ConvBinary in Inductor cpp wrapper (#101393)
f162ab0423 : Revert "[inductor] Handle floordiv and remainder in IndexPropagation (#102277)"
da3aba1e46 : Revert "[pt2] add `SymInt` support for `linalg.pinv` (#102367)"
23223402eb : [quant][pt2e] Add Support for DerivedQuantizationSpec (#102282)
267a181beb : [inductor] Handle floordiv and remainder in IndexPropagation (#102277)
f2dfcb8778 : [inductor] Inline ComputedBuffer computation when there are no reads (#102000)
1e4292a1e8 : [export] Rename graph_module.py to exported_program.py (#102260)
c4028de462 : [export] ExportedProgram (#102259)
80b916a586 : fix sm86 cuda 12.1 conv threshold issues (#102361)
c06d33ce43 : Add dynamo itertools.combinations support (#102379)
76a36159f7 : Replace full_like lowerings with decomps (#101963)
9c4fd72b53 : [aot_autograd][functional_rng] Change calling convention (#102344)
bcaa93e80c : s390x simd: disable functions with out-of-bounds reads (#102266)
0ed22fce97 : Merge type stubs torch nn parallel (#102194)
7b6438da9e : [Dynamo] Fix if condition on NNModuleVariable (#102335)
3469f100f3 : support ConvUnary in Inductor cpp wrapper (#101392)
0d5b74da0c : [pt2] add `SymInt` support for `linalg.pinv` (#102367)
8751002215 : equality assertions (#102256)
9b5e4c308c : [PT2][Quant][BE] Apply formatting to test_quantize_pt2e (#102275)
efd774a295 : Document faster builds for C++ changes (#102316)
c05a317371 : Bump requests from 2.30.0 to 2.31.0 in /tools/build/bazel (#102059)
6c9b94dcda : Revert "add additional stream priority for cuda streams (#101956)"
3dfa755a1f : [MTPG] Enable for some tests in test_fsdp_misc (#102043)
ce41faa2ae : Add cpp.max_horizontal_fusion_size to control the granularity of horizontal fusion (#99828)
e1dc793ef0 : [vision hash update] update the pinned vision hash (#102318)
fb468b6792 : [ONNX] Support aten::scatter_reduce (#102048)
ef13fde290 : Increase mem eff backward performance (#101847)
6f464e0cf8 : Invoke the bf16 load w/o #elements to bypass the temporary buffer allocation from the performance perspective. (#99822)
c3550d8376 : Add fast path for BF16 kernel if all the operations within the kernel support bf16 (#99814)
68816e4fa9 : Remove inplace buffers when original and mutation are both removed (#102289)
0db704d240 : [OpInfo] Add multi_head_attention_forward (#100153)
8aa48315de : Revert "Disallow _foreach_utils.py, but allow it to be inlined (#102221)"
54f38381a0 : [CUDA][DLPack] Try ~~bumping sleep interval~~ running on explicit side-stream for Windows `dlpack` test (#102283)
b469ed72d0 : Integrating new API usage metadata logger (#101762)
ae5606bb2f : Make test_inductor_collectives use self.assert* (#102274)
b628eb524b : simplify BinaryDivFloorKernel.cu code (#102168)
552299c42c : Disallow _foreach_utils.py, but allow it to be inlined (#102221)
de7ec2ddd7 : [MPS] Allow saved models to be loaded directly to MPS through torch.jit.load (#102204)
836798e0f3 : [inductor] Support precomputed_sizes in CppWrapperCodeGen (#102083)
053dff1111 : [ONNX] Bump ORT version to 1.15.0 (#102248)
3c77310752 : fix benchmarks/dynamo/runner.py (#102311)
0d17bd5fa4 : DOC Fixes unpacking issue in dynamo explain docs (#101761)
5b01c8dc6a : fix functorch/test_ops.py test_vjp flash attention unexpected success (#102131)
184d4f1ba3 : [ez] add `docs/source/compile/generated/` to .gitignore (#101094)
80f7264804 : Foreach kernel codegen in inductor (#99975)
1f80b972a6 : [CUDAGraph Trees] Fix empty storages handling (#102273)
c1db235040 : [dynamo] fix module buffers call (#102251)
d40f4f12f6 : [dynamo] add itertools.chain support (#102247)
c2498d3deb : Fixed indentation error in test_binary_ufuncs.py (#102244)
080d86acfb : [DCP] Add API logging for checkpoint high level API (#102278)
bd39767408 : Bump requests from 2.26 to 2.31.0 in /.github (#102057)
870880236b : Enables configuration of NCCL communicators (#97394)
3cae6d2493 : Make exir passes work with map_impl HigherOrderOperator. (#102009)
ee33bae5c7 : Fix an issue where checking sameness throw an exception (#102279)
d64ec82d15 : Turn on padding (#101915)
0833f475ce : Cache mm padding decision (#102200)
375446a0ea : [fix opinfo] empty_strided (#102088)
ed87508b32 : [quant][pt2e] Add support for SharedQuantizationSpec (#102184)
fab49823a5 : Skip bandwidth bound mms (#102199)
9aaa12e328 : Move mm padding to pattern matcher (#101913)
0bb2b01541 : Add forward mode AD to in-place foreach functions (#100695)
6c7410ddc3 : sampled_addmm: BSR support (#101163)
4882cd0801 : inductor: align cpp floordiv with python floordiv for dynamic shape path (#102068)
a896962f0a : [fx][2/n] Add metadata to placeholders (#102195)
7b47cd0a6c : [c10d] add fake pg necessary collectives (#102238)
9a19262556 : [c10d] consolidate barrier after init logic (#102237)
aa83a52742 : Profiling doc (#101895)
818d92f58c : Support resize on meta storage (#101988)
3ca068bc44 : Location-shift MKL Exponential Distribution (#101720)
d4380edb9b : [TP] Add API logging for TP high level API (#102209)
d4f711b0b5 : do not raise when constraint locals are not in signature (#102198)
69c7f710ba : Add meta registrations for some foreach ops (#102225)
2f08f9a66f : [vision hash update] update the pinned vision hash (#102230)
2434a205de : Support unary not on lists (#102210)
a0e44284de : [pytorch] add Vulkan support for the `aten::cat` operator for 1d, 2d, 3d and 4d (#102128)
23dbdd900f : Full default dict support in dynamo (#102202)
f3e42f15e9 : [FSDP] Start to generalize modules to ignore for mixed precision (#102010)
c2093de5d9 : [partitioner] fix for rng ops (#102123)
2763b50803 : update thresholds for various ops in functorch/test_ops.py (#102016)
e274c2e4fd : [MPS] Restride output strides to contiguous format for inverse op (#102122)
11d1cd899a : Replace require_backend with require_backend_is_available (#101891)
3e08988cd3 : Fix redundant kernel generations (#102104)
9ce95ce157 : [pytorch] add Vulkan support for the `t` and `transpose` operators for 2d, 3d and 4d tensors (#101808)
c903b12cb8 : Add fake process group (#102180)
5da497cabb : add additional stream priority for cuda streams (#101956)
f8896b7b0e : update tf32 thresholds in nn/test_convolution.py (#102015)
dedcf8f70f : No need to run non-CUDA jobs in memory leak check mode (#102188)
ce42010722 : [inductor][decomp] Add aten._unsafe_index_put for unchecked indexing (#101812)
dbf6912be6 : Populate all args with fake tensor value (#102129)
210fc28d5e : Revert "Support resize on meta storage (#101988)"
06f656c5d1 : [distributed] implemented find_all_descendants (#102138)
5d6810a4ee : [dynamo][higher order op] Support nn.Module calls (#102022)
e6af31a5a2 : [dynamo] Add astunparse dependency (#102120)
e6fc7d814d : Segmentation fault in flatbuffers when parsing malformed modules (#95221)
2e2a74670d : torch.sparse.softmax: allow negative dim (#102172)
424c930f76 : Add quantization lowering for nn.PixelShuffle and nn.PixelUnshuffle (#101926)
956bd03808 : add ignored_states to FSDP/fully_shard (#102056)
023bc30b17 : Revert "Merge type stubs for torch.nn.parallel (#101528)"
d316a2dd5c : [spmd] Enable data parallel to work with non 0 batch dim (#100073)
d378837039 : [spmd] add more decomp and fix a sharding bug (#100938)
dd1f295201 : [spmd] Improve activation handling, factory ops and batch dim reduction (#100853)
4d55ea8548 : [spmd] enhance batch dim analysis of data parallel (#100852)
b2eaba6b62 : [spmd] by default average gradients for nccl backend (#99964)
942cd12d55 : [spmd] add option to preserve node types (#100072)
2232cce69c : No cpp + step current (#102001)
fcf812c35a : Unbind Cat pattern (#101767)
6cabc105bb : Merge type stubs for torch.nn.parallel (#101528)
fdd28399dc : Replace unsqueeze transform with stack (#101766)
c0d0a9f7a0 : Replace split-squeeze pattern (#101765)
0eb4f07282 : [ONNX] Introduce FX-ONNX dispatcher (#100660)
47b4136439 : Refactor normalize passes to use @register_graph_pattern (#101764)
66f6e0e605 : [CUDA][DLPack] Handle legacy default streams for DLPack conversion (#101318)
3baa67caee : [quant][pt2e][be] Move annotate helper function to quantizer/utils.py (#102127)
5f0463a6d7 : [inductor] Move two cpu tests to test_cpu_repro.py (#101887)
e3d97b6213 : [inductor] Added `smooth_l1_loss` refs (#102077)
ddf4f7bc89 : fix inference_mode with torch.compile (#101219)
98ab11a2c3 : separate out dynamo .requires_grad and .is_grad_enabled guards (#100570)
32643bc926 : Remove vsx suffix in sleef calls (#100149)
d08066a438 : [Reland][functorch] test for compiling functorch transforms (#100718)
08fb648fe1 : Add mechanism to turn any RAII guard into a Python Context Manager (#102037)
8b7bd81902 : determined collective device by _get_pg_default_device rather than explicit cuda (#101533)
fd1d442185 : [inductor] Add more dynamic shapes support for CudaWrapperCodeGen (#102019)
ee95e37a69 : [c10d] Record time spent for init_process_group, new_group, _store_based_barrier (#101912)
d6afa7d003 : add Half support for sinh, cosh, polygamma, entr and i0e on CPU (#99002)
8aea9dad8f : Bump mpmath from 1.2.1 to 1.3.0 in /.github/requirements (#102058)
faa7eb81c6 : change error_message for XPU Autocast data type check (#102073)
d06802778e : No need to run C++ tests under rerun disabled tests mode (#102132)
29da75cc55 : Enable mypy allow redefinition (#102046)
bf059e3925 : [Typing] Export `torch.backends` as subpackage (#102099)
d26c8f26d1 : Lower xdist processes from auto to NUM_PROCS (#102124)
3318a832b3 : Tighten FakeTensor reentrancy asserts, add debugging (#102091)
38f8f756bf : group constraints by arg (#102096)
907cc6c11c : [vision hash update] update the pinned vision hash (#102136)
2e18dd2bdc : Improve bf16 neg by bypassing the conversion between BF16 and FP32 (#99711)
45843c7f41 : test_memory_format fix for test_modules.py (#102006)
47e9dba765 : move tf32_on_and_off fix for test_convolution.py (#102007)
d805a53f1f : disable tf32 for rnn tests and norm tests (#102005)
ea5eaa8692 : Remove config check in specialize (#102098)
d55aad1f3e : Disable (Broken) CUDAStreamVariable in dynamo (#100766)
cc233f4e23 : integrate the new event with pytorch (#101025)
e79d9b9938 : [pt2] add `SymInt` support for `linalg.matrix_power` (#101940)
69f7b40949 : [pt2] add `SymInt` support for `eye` (#101955)
42b974e8f7 : [pt2] add meta for `linalg_lu_solve` (#101836)
6c68116643 : [MPS] Calculate nonzero count first before running nonzero op (#102052)
be5e77ca4c : Make _StorageBase.byteswap faster ( > 10000x) (#101925)
94ed26d177 : [quant][pt2e] prepare_pt2e use quantization spec directly (#102054)
99f68d56ee : [PyTorch] Delete c10::guts::if_constexpr (#101991)
f65732552e : Support FakeTensor with FlatParameter (#101987)
5147fe4969 : Revert "[inductor][decomp] Add aten._unsafe_index_put for unchecked indexing (#101812)"
2e5e53b718 : Do not upload MacOS conda environment to GitHub when job fails (#102108)
32ce06a5ab : Revert "[Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)"
45a8f691ec : Revert "[Reland] fix missing-prototypes warnings in torch_cpu (Part 5) (#101976)"
0759e1d132 : Revert "add Half support for sinh, cosh, polygamma, entr and i0e on CPU (#99002)"
7e58891ca0 : Support list output for HigherOrderOperators (#101986)
e7a6818e97 : Register top level logger for torch (#102090)
149237415f : Using deterministic hashing instead of GUID for pytorch serialization id generation (#101964)
76af22103b : Fixed type hints for CosineAnnealingWarmRestarts (#102067)
4692ea76a0 : Fine grained apis docs (#101897)
723f111545 : [custom_op] explicit autograd API (#101824)
8487105fae : [custom_op] Create a new torch._custom_op namespace (#101823)
73d1be8e99 : [custom_op] Add a test for symints (#101822)
6e0c741105 : [dtensor] hide mesh validation check under init_process_group flag (#101996)
70eccdbf92 : [dtensor] add necessary logging to APIs and components (#101994)
eda7efe662 : Fix ProfilerTree Test (#101983)
02a7318a5b : [MPS] Add aminmax op (#101691)
80dd847b62 : Fix fragile code in `torch.__init__.py` related to `torch._inductor` import (#102021)
7d1ba0a92a : Support resize on meta storage (#101988)
51ff408f77 : Add retry when cleaning up Windows workspace (#102051)
431344f2d0 : [inductor] Refactor generate_kernel_call (#102018)
e132f09e88 : [Dynamo] Fix test_cuda_set_device to restore device (#102049)
b91eb97d34 : [transformer benchmark] relax tolerance in sdp.py (#101965)
e9246b290f : Initialize cuda tensor in fake tensor (#102027)
9bbee245fe : update rules_python and let bazel install its own pip dependencies (#101405)
2ca75d49a8 : [DTensor][3/N] add DTensor constructor function: full (#101436)
5c3cf76eb2 : add Half support for sinh, cosh, polygamma, entr and i0e on CPU (#99002)
f7c736e1e7 : [quant][pt2e] Add observer_or_fake_quant_ctr to QuantizationSpec (#101920)
8cab7994a6 : [inductor] Move cpp wrapper dynamic shapes test to test_cpp_wrapper (#102017)
2bce7c8f46 : CUDAGraph trees doc (#101902)
4a1e9230ba : [vision hash update] update the pinned vision hash (#102028)
9121f5ca84 : Use the symint version of computeStorageNbytes within get_nbytes. (#101634)
f216fea44f : Remove commented out pdb (#101993)
6d0079b12b : [BE] Do not expose `torch.functional.opt_einsum` (#102004)
a2fd2c2b83 : [Pytorch] Add Vulkan support for aten::unsqueeze for 2d to 3d (#101719)
5fe629e314 : Add PyObject preservation for UntypedStorage (#97470)
488a4303a5 : Enable quantized_max_pool3d (#101654)
8243abc84a : [1/n] instanceof instead of singleton for ph check (#102008)
81b0f72e16 : Fix xnnpack link errors (#101630)
2ae87a1f87 : missed StackDataset documentation (#101927)
b9721bd705 : [inductor][decomp] Add aten._unsafe_index_put for unchecked indexing (#101812)
e07c04f48a : [inductor] Update qualname and module for wrapped testcases (#101975)
c618093681 : [vulkan] Fix concat op in feature dimension (#101721)
be94ff976d : Have irange use `if constexpr` (#94050)
38e73b30b7 : bring quantized_backward.cpp in sync with intern (#101990)
4de5ee43bf : [torch.library] Change Library.__del__ into weakref.finalize (#101829)
5e635e17da : Add documentation for a catching invalid index type (#96451)
c9f8f4cf2d : Fix device normalization of automatically generated methods for custom backends. (#101796)
4db2dade25 : [Reland] fix missing-prototypes warnings in torch_cpu (Part 5) (#101976)
bdb3fb49bc : [c10d] Fix the check message of unsupported collectives ops. (#101775)
5ba16011d7 : Suppress profiler spam in dynamo benchmarks (#101942)
38a29324b0 : [dtensor][2/N] more tensor ops to use strategy propagation (#101203)
496212f408 : Revert "group constraints by arg (#101815)"
a630328695 : Fix Backend docs search items (#101214)
a6f4088c21 : Hint Tensor._make_subclass as a staticmethod (#101961)
19af5c0b69 : Explain how fastAtomicAdd works (#101951)
4f2c007a1b : [Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)
d0bb8fdc64 : Revert "[dynamo] Minor refactor to use is_allowed to decide inlining of NNModule methods (#101910)"
e9a7115605 : Update Kineto submodule (#101952)
0a694dba2b : [inductor] fix avg_pool2d accuracy problem in lowering (#101789)
3004d40439 : torch.unique with dim: NumPy compatible sorting (#101693)
dcffd5c646 : show errors on bazel test failure (#101928)
2a62b59e04 : improve diagnostics from bazel_linter.py (#101445)
b54cdaf9fb : use bazelisk as the bazel binary for lintrunner (#101744)
807d81155f : [CUDA][CUBLAS] Fix BF16 reduced precision reduction note in Numerical accuracy docs (#101884)
351c2ea2fb : [export] Prototype on serialization schema. (#101899)
330c907301 : [MPS] Fix embedding cache key (#101857)
22ca1a1124 : Partially fix shape mismatch in vision_maskrcnn (#101477)
9e8da7fb44 : [vision hash update] update the pinned vision hash (#101938)
66a2600b6a : [T153220354] Fix header inclusions in c10 (#1541) (#101846)
dde6d56101 : Prevent pattern matches across mutation ops in inductor pre-grad FX passes (#101144)
13640bf925 : disabling quantizing gradient in 8bw (#101739)
f0dc41a768 : [ONNX] Bump onnx submodule to release 1.14.0 (#101809)
03de15806e : group constraints by arg (#101815)
b5ee34e5f2 : Disallow module forward input mutation in aot_export (#101834)
0c6f409cda : [inductor] Refactor RNG operators (#100064)
8b2a9f81cc : [dynamo] Minor refactor to use is_allowed to decide inlining of NNModule methods (#101910)
2886b3e692 : [vision hash update] update the pinned vision hash (#101917)
bb62a3734e : inductor: fix name 'inf' is not defined issue when calling external_call function (#101865)
350f0cd78c : inductor: fix bfloat16 store compiler issue (#101856)
029c6a9934 : [accuracy minifier] cast copied model rather than update the original model (#101901)
73e887b5c7 : [easy] refactor signature flattening transform (#101886)
7a17e9d0b6 : [dynamo] Bugfix for unspecialized nn module variable (#101859)
48346a4648 : [inductor] Test indirect indexing asserts with dimension of size 1 (#101811)
89bd5d3dab : [inductor] Implement magic methods on IR values (#101076)
15495f2d96 : [quant][pt2e] Introduce QuantizationAnnotation API (#101708)
03f50fcc02 : [codemod][3.10][NamedTuple] Use typing_extensions to get NamedTuple Generics (#101830)
b07e97c084 : Fix finding existing Needs label comments (#101889)
c8fd1cfad1 : [pt2] Turn off lazy reinit when cuda graph is on (#101848)
fa7ad77ac9 : [Profiler] Workaround CUPTI Lazy Reinit and CUDA Graphs crash in CUDA 11 (#101879)
3666ca9d97 : Dynamic Shape Doc (#101885)
ff5b9428aa : Fake Tensor Docs (#101882)
581d13a069 : Add Logging Doc to compile index (#101888)
7f3fed125e : Revert "separate out dynamo .requires_grad and .is_grad_enabled guards (#100570)"
2dd33c71c1 : Docs for torchcompile and functorch (#101881)
81c181dc01 : Update BCEWithLogitsLoss pos_weight description in documentation (#101567)
5ea7096ebc : match sdpa patterns from HF (#100609)
96ee23e198 : Print restarting analysis at INFO level with a exception breadcrumb (#101573)
e5e451a9db : Update batch size for a couple models (#101837)
498c34e8e8 : Revert " fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)"
083f304d27 : Revert "fix inference_mode with torch.compile (#101219)"
e760a968c8 : Revert " fix missing-prototypes warnings in torch_cpu (Part 5) (#101788)"
4f9aa7cb0f : [export] Error when constraining on static values (#101655)
3e2ea32dab : [BE]: Enable ruff rule TRY302 and apply fixes (#101874)
1ac663d9f1 : `collect_env`: parse HIP version exception free (#101844)
0df691df4e : [ONNX] Support aten::broadcast_to (#101833)
113b67059f : Fix specify_constraints signature for exporting module (#101831)
11f7ae19cd : fix inference_mode with torch.compile (#101219)
1fabee399d : separate out dynamo .requires_grad and .is_grad_enabled guards (#100570)
f99eeb5bdf : Check devices on meta functions that return inputs (#101807)
61b6b038b0 : inductor: fix FloorDiv issue for dynamic shape path (#101793)
e06bd8f3b1 : fsdp support create hybrid-sharded process group for custom backend (#100622)
4441ce21dc : Add missing conversion functions between half and float for ppc64le (#100168)
4486a1d09a : Improve the functionality of untyped storage for privateuse1. (#100868)
f66d5dd788 : SymIntify functorch vmap (#101409)
1aaf0396eb : [reland][opinfo] empty_strided (#101782)
e5b7c7a04f : Fix torchinductor uint8 bug (#101468)
4c1bc91f42 : Support autograd.Function w/ grad (#99483)
7776a41bd6 : [ONNX] Detect None constant during jit scalar type analysis (#101608)
eb470ab2fb : Add cooperative_groups header to cuda_to_hip_mappings.py (#100721)
bcb4444cec : PyTorch -> C++17 (#98209) (#100557)
6f13d6892a : Add meta support for multinomial (#101324)
6f46716ee2 : Fix/skip CSE tests on Python-3.8 without `astunparse` (#101805)
0a0acce515 : [vision hash update] update the pinned vision hash (#101821)
60547fcbee : Autoformat torch/utils/checkpoint (#101649)
d7f6bfe651 : Fix require_backends_available to reenable distributed tests (#101704)
b5217d0898 : Revert "match sdpa patterns from HF (#100609)"
a76c1af351 : Revert "Implement adding bias vector into structured sparse linear operator (#100881)"
eb9ac9c156 : Revert "Add activation functions (ReLU and SiLU for now) for structured sparse linear operator (#101339)"
2c0d607882 : [bazel] add build for functorch (#101475)
7ffdd4fedc : Update release related information (#101819)
686b12c93d : Reduce log output when no tests are prioritized (#101803)
f95d42b1b7 : [DataPipe] Update docstring for functional form of DataPipes (#100446)
556bb691fd : [AO]Fix observed LSTM layer setup individually observed LSTM (#101299)
2fa1b563da : [dynamo] Activation checkpoint higher order ops - Reland 101028 (#101790)
a33ac44540 : Better needs label error message (#101747)
8b751b41c0 : Do not trigger lint and pull workflows when sync nightly #26921 (#101746)
1930428d89 : Minor improvement on the decomposition of upsample_bilinear (#101682)
ac1cf00085 : fix missing-prototypes warnings in torch_cpu (Part 5) (#101788)
c9ba967c21 : Upstream xformers code (#100583)
794cc3952e : adding moco to CI (#101098)
b315c9b5ab : [CI] Enlarge memory for OOM models in inductor cpu HF accuracy test (#101395)
72a73ef67b : Add aten.searchsorted.Tensor meta kernel (#101637)
c2f28d1c1d : fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)
900ca4df59 : inductor: skip weight packing when it has zero shape (#101355)
e48a052e7b : Fix link error on s390x (#101000)
bfb3941ad8 : Add activation functions (ReLU and SiLU for now) for structured sparse linear operator (#101339)
a0e6f82087 : [inductor] send max_pool2d_with_indices and its backward to fallback if dilation is not 1 (#100531)
dafa009c3c : [dynamo][moco] Save global torch state to restore on graph break (#101201)
28098cae6b : [DataLoader] Adding `StackDataset` (#101338)
f0f0f70904 : Fix check-labels workflow commenting on forked PRs (#101467)
c73923473d : match sdpa patterns from HF (#100609)
18f6f30d7c : Make HUD link https (#101461)
124d812f38 : [BE] Fix rule not found error message (#101745)
66e398951a : [inductor/decomp] Add aten._unsafe_index to disable range checks (#101602)
b256091c7b : [inductor] Generate indirect_indexing checks even if optimized out (#100895)
ef512db0f8 : [inductor] Constant and index_expr propagation pass (#101077)
df6acf27fc : update gloo submodule (#101472)
8c0b148926 : [CI] Distribute bot workload (#101723)
29de581764 : [Dynamo] Graph break on torch.cuda.set_device() (#101668)
5f07c589b0 : Revert "[inductor] Refactor RNG operators (#100064)"
3135bec4a0 : [docs] Clarify when to use SparseAdam (#101465)
5d3cfda1ed : Revert "match sdpa patterns from HF (#100609)"
b429a4de13 : Update public_api to remove duplicated `randn_like` (#101302)
2236d5ef83 : [Security] Move mergebot workflows in its own env (#101718)
f3fc531eee : Check for pytest extensions in run_test (#100916)
e3c9a1e5c4 : Run dynamo tests in parallel (#101432)
e3c66ded86 : remove default lower bound in dynamic_dim suggestions (#101636)
c8579b7374 : Run `test_cpp_memory_snapshot_pickle` only when linux and x86_64 (#101366)
dfac4364c4 : Revert "[opinfo] empty_strided (#100890)"
f33725b82b : match sdpa patterns from HF (#100609)
8e51521cee : [quant][pt2] Handle maxpool + conv + bn case in prepare QAT (#100941)
3ed1569e86 : Adding serialization ID to inline container (#100994)
326a4cc815 : Support map autograd and pytree in/out. (#101633)
38e537db55 : Handle multi-user case in split-cat simplification (#101473)
e17d9f2c64 : Fix determenistic typos (#101631)
07e759eca2 : [PT2][Quant] Move to module partitioner for linear pattern quantization (#101122)
ebae77e891 : [transformer benchmark] sort by cuda time (#101349)
403ce1a1c9 : Fix benchmark model names printouts with tqdm (#101627)
bec655f826 : [PT] Update module partitioner to return parameter node (#101121)
75375b410d : inductor(CPU): fix issue when padding/stride/dilation size is one for cpu weight packing pass(reland) (#101353)
2c807a4acf : [PT2][Quant] Remove None annotations (#101120)
783a46adee : [functorch] fix UB in interpreter stack (#101568)
cb6fa890d4 : s390x SIMD: Propagate NaN in minimum and maximum operations (#99716)
a85f6aa4ca : s390x zvector: implement expm1 for complex vectorized types (#99872)
f72f0119ec : Implement CSE for dynamo guards. (#98488)
f994d0b619 : [dynamo] Change dimension constraint summary to log.info (#101584)
39f52c0218 : Switch AOT Inductor test to export, add dynamic, fix invocation bug (#101585)
c3a893c659 : Implement adding bias vector into structured sparse linear operator (#100881)
97180aca5e : Enables barrier to support the specified device (#99589)
6261aa5c8d : [inductor][cpp] support non contiguous vectorization codegen (#99966)
47f43ed84a : Actually functionalize torch.export (#101433)
0c470b17e3 : Extend storage create for custom storageImpl (#100237)
d1a472a366 : Fix Buck OSS build after flatbuffers update in #100716 (#101626)
41d668c9dc : work around precision error in constraint solver (#101607)
3c4f97c213 : [vision hash update] update the pinned vision hash (#101635)
ba2bc7df8f : Enable `backward` on `_foreach_zero_` (#101149)
3bbf0683a1 : [inductor] Refactor RNG operators (#100064)
bb3558961f : [MPS] Add histogram ops (#96652)
20cf42de2c : Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.… (#100749)"
9a17989b63 : Prioritize modified tests when running on `main` (#101618)
7ca5e68c00 : Reorganize foreach ops more logically in native_functions.yaml (#101583)
cde597efa1 : [docs] Warn that GradScaler can scale under 1 (#101569)
e69198b043 : Revert "Support map autograd and pytree in/out (#100494)"
b8fa41be9d : Support map autograd and pytree in/out (#100494)
552b712f80 : Run C++ testcases in parallel with pytest-xdist (#101440)
b998ec96ac : Don't run libtorch tests on slow test shard (#101429)
a26516b78b : Add inductor as a test disable group (#101448)
e0fc24cdc5 : add retries to inductor benchmark suite (#101019)
01da732691 : Fix type annotation of `torch.split` (#100655)
41468833fb : vision_maskrcnn is now deterministic (#101116)
2e08c68564 : Avoid cond prefix when naming subgraph of HigherOrderOperators (#101439)
0585944eac : Revert "match sdpa patterns from HF (#100609)"
42e65a2587 : [pt2] add meta for `linalg_lu_factor_ex` (#101375)
cb734123e2 : [GHF] Ignore flaky classification for pin updates (#101587)
b1474019a4 : Test Reordering: Run previously failing tests first (#101123)
b5ed606a8b : use Bazelisk to fetch Bazel in CI (#101424)
e7681b53e3 : Fix typing for `setup_context` in `autograd` (#101464)
eac5f2a8e4 : Revert "Actually functionalize torch.export (#101433)"
b94f143ace : SymIntify convNd and conv_transposeNd, fix inductor symint handling (#101488)
411ba1c8bf : [pt2] skip flaky `linalg_householder_product` tests (#101551)
afea1a9fe9 : [meta] error checking for inplace ops (#101532)
54fe828cd0 : Improve rebase message when PR is uptodate (#101504)
20deccf8a1 : BE changes for tryrebase.py (#101503)
1272cd73da : Revert "extend serialization for tensor metadata (#99808)"
3f87c04cf8 : fix a typo in common_device_type.py (#101485)
2a3e45a2a8 : Docs: update default device description (#101283)
010763be9a : [DTensor][2/N] add DTensor constructor function: empty (#101022)
5cc361c736 : [DTensor][1/N] add DTensor constructor function: ones (#100933)
2af7df62a5 : log inductor compilation time to scuba (#101317)
eec752ed05 : Actually functionalize torch.export (#101433)
59dff01319 : Add top level function to check if running with deploy (#101420)
05f6250815 : Add missing `torch.distributed.ReduceOp.AVG` in type stubs (#101534)
47d31364d7 : run buildifier on WORKSPACE (#101411)
ff3f19615f : Type conversion between float/complex dtypes (#97935)
2b2a717f19 : [inductor] erfc: lowering (#101416)
23d1cc3811 : Update llama to failing (#101565)
9e023e1818 : [fx] Better replacements finder in subgraph rewriter (#100556)
6bc0f4a4ee : [reland][CustomOp] Add Dispatcher error callback (#101452)
c8be493dac : [reland][custom_op] Change the python type that maps to ListType in schema (#101451)
4f8cbaa10a : [reland] Cleanup custom op library after each custom_op test (#101450)
c2e16d8b2c : buck1 can't properly handle '/' on rule names, so fixing 'impl/cow/context' and 'core/impl/cow/context_test' build rules (#101552)
c51dfbf5b4 : triu/tril: complete dtype support for CPU/CUDA. (#101414)
88b6a4577b : inductor: fix sign gets wrong result dtype issue (#101377)
935100cbde : [profiler] When record_inputs=True, record scalar lists of length <= 30 (#100593)
e389bfa01a : inductor: add dtype check before doing cpu binary fusion (#101376)
6f7ebcdcd8 : [inductor] enable descriptive name for cpp kernels (#101330)
86869475ff : [inductor] move dtype propagation log to schedule artifact (#101351)
dfc46153a7 : [inductor] add graph id prefix to inductor_wrapper_call profile info (#101350)
d9d34b3e18 : [vision hash update] update the pinned vision hash (#101471)
c03555a303 : add retries to external contribution data upload (#100889)
773f6b626d : [ONNX] Diagnostic to show all unsupported call_functions (#100451)
45d080e0ac : [ONNX] Diagnostic 'log' and 'log_and_raise_if_error' (#100407)
e4eaf33346 : Re-enable detectron2_maskrcnn on CI (#100791)
af4248b9ad : Update the torchbench pin to include timm upgrade (#101466)
964e61ee95 : [quant][pt2] Handle no conv bias in prepare QAT fusion (#100610)
7052fb37bd : [Dynamo] Improve handling UnspecializedNNModuleVariable side effect (#101141)
6f8a71aa3d : [c10d][Fix] Start gloo sequence numbers at 0. (#101422)
4b849744d1 : [IValue] Only coalesce once (#101447)
13056ca229 : Revert "[fx] Better replacements finder in subgraph rewriter (#100556)"
194d360329 : Add more canonical way of adding runtime pass (#100956)
f0786ad776 : Use %zu instead of %ld when formatting size_t (#101412)
52363de2ec : Clean up grad check in sdp_utils.h (#101435)
c3f7db3f52 : Use python3 instead of /usr/bin/env python3 on Windows (#101437)
24cc7fe020 : Fix Wishart distribution documentation (#95816)
7f3b00bfe0 : [Inductor] Improve view/reshape on tensors with shape 0 (#101051)
d198033661 : Revert torch.fx.interpreter error printing change (#101462)
799ef7e501 : [caffe2/tools/autograd] Fix non-determinism in code gen (#101425)
a8376099f9 : fix print tensor in cpp for privateuse1 (#100797)
788ff0623b : [decomp] fix decomp of batch_norm when weight/bias is not flattened (#101059)
1faef895ca : Inductor cpp wrapper: support sympy.Expr as input (#101257)
187eb7ca88 : Enable default workflow PyT 2.0 UTs on ROCm stack (#100981)
01c7106580 : [opinfo] empty_strided (#100890)
3920ec1442 : Apply the same fix to cleanup process on Windows CPU build job (#101460)
0577043d94 : Rename minpybind namespace from py to mpy (#101410)
a206e8b027 : [small BE] update NcclTest dim size (#101127)
59a3759d97 : Update cpp_extension.py (#101285)
1732077758 : Bump up flatbuffer submodule version to the latest release (v23.3.3) (#100716)
9d858642af : [PTD] Make input contiguous for _ReduceScatter (#101373)
0e811044bd : [dynamo 3.11] enable other torch 3.11 dynamo-related tests (#99180)
3b7c6b21d7 : Disable locality reodering in training (#101423)
70ef0bb45a : Fix checkpoint doc small formatting issue (#101419)
af841f38bd : [SPMD] Allow Override.replacement to have a global view (#101427)
9ffad5b62b : Remove input tracker from runtime assertion pass (#100955)
563d8058f4 : Fix inconsistent torch.nn.MaxPool1d output on cpu and gpu (#99843)
9eb1748b2b : [pt2] add meta and `SymInt` support for `linalg_lu` (#101372)
ac4cc63ae2 : [pt2] add meta for `linalg_ldl_solve` (#101367)
0a7ea9627f : match sdpa patterns from HF (#100609)
9842d1ef94 : [fx] Better replacements finder in subgraph rewriter (#100556)
7912b34789 : Revert "[CustomOp] Add Dispatcher error callback (#101015)"
4b9bc6f2a6 : extend serialization for tensor metadata (#99808)
3b82298265 : [caffe2/torchgen] Fix codegen non-determinism (#101286)
349a2b3871 : Revert "Cleanup custom op library after each custom_op test (#100980)"
22b9bef3d0 : Add device extensions to the test framework for supporting custom device (#99960)
b50595702b : Revert "[custom_op] Change the python type that maps to ListType in schema (#101190)"
ee40cce475 : [AOTAutograd] add export entrypoints (#100587)
bba12a4668 : aot_autograd: factor out runtime epilogue from aot_dispatch_base (#100586)
a4830bd86b : fix sign return type (#101346)
d0db7d624d : Revert "[dynamo] Activation checkpointing as higher order op (#101028)"
13383f45c5 : Revert "[c10d] Bridge c10d and gloo stores. (#100384)"
2341bd69e9 : Revert "[caffe2/tools/autograd] Fix non-determinism in code gen (#101287)"
3e1c8168f8 : Add pattern to merge/simplify split-cat (#100713)
721b144f0f : [MPS] Add support for Custom Kernels (#100661)
f48718f749 : Update torchbench pin (#101365)
e35323d6a7 : [Profiler] Fix HTML plot output for profiler export_memory_timeline (#101316)
a8ea4178ab : Fixed bug in interpolate when interpolation size is larger than max (#101403)
a94135641c : Fix some NVCC warnings (Part 2) (#101383)
effe1425dd : ASAN: fix use-after-free (#101400)
66eef31444 : Revert "[fx] change from #users to num_users in graph printout (#101140)"
616208b4fe : [BE]: Cleanup deprecated stdlib imports (UP006,UP035) (#101361)
1b7d875083 : put third_party/ittapi/ in .bazelignore (#101364)
cca31f1797 : Revert "implement a function to convert a storage to copy-on-write (#100819)"
bfb2888b51 : Re enable AutogradNotImplementedFallback on Windows (#101062)
9b6ccde0e6 : fix precision error in constraint solver (#101307)
87f9160b67 : Revert "[inductor] fix incorrect strides in copy() decomp, fix hf_LongFormer + hf_BigBird errors (#100115)"
7dd8e08817 : [pt2] add meta for `linalg_ldl_factor_ex` (#101362)
a8964d6377 : [pt2] add meta and `SymInt` support for `linalg_householder_product` (#101315)
cc54da4877 : Inductor cpp wrapper: fix FallbackKernel support (#100788)
72908e768e : Fix Math Typesetting for torch.linalg.matrix_exp (#101363)
fcf2fb273c : Make missing model import error marginally better (#101221)
96487d0d1f : Refactor after_dynamo to have a CLI interface too. (#101220)
9ba64cba55 : Fix torch.utils._traceback on Python 3.11 (#101277)
dfe484a3b3 : [BE]: Bugfix functorch and some generic typing improvements (#101337)
65412f95f0 : [dynamo] Graph break on ops having inplace_view tag (#100787)
568db1b464 : [dtensor] Relax condition for _split_tensor() (#101218)
674e52b0b9 : [vision hash update] update the pinned vision hash (#101347)
8876c0b282 : [transformer benchmark] fix in sdp_bwd for scaled_dot_product_attention return type (#101341)
2361f7f0ce : Update doc strings to make description of is_causal consistent for nn.Transformer and nn.MHA (#101089)
f6c2859ee3 : Print the path to the code with `TORCH_LOGS=output_code` (#99038)
07d3772eff : fix typo in comments under torch/distributions/mixture_same_family.py (#101290)
ab74744522 : add inplace_view tag to resize_as_() (#100786)
76b72bd80d : Rewrite frame state to use a struct for shapes, splitting scalar and size, prep for stride (#101250)
e406125b6b : [profiler] replace record_concrete_inputs_enabled interface with callback instead of boolean (#101292)
44fb7fcb83 : [vision hash update] update the pinned vision hash (#101323)
387b369ee4 : [CI] Fix a dashboard command line string formatting bug (#101325)
4414160453 : Factor automatic dynamic into a private helper function (#101114)
9e089db32e : [MPS] Enable `arange` for `int8` and `uint8` dtypes (#101303)
ceecccc09e : Bugfix: Correctly detect test changes in PRs (#101304)
d75f93603a : Flatten exceptions in dynamo (#100779)
cc0a271935 : [GHF] Use `baseRefOid` to get PRs merge base (#101232)
c498b1ad95 : [C10D] Implement extended store api in HashStore. (#100633)
2a14652879 : [CI] Introduce dashboard-tag to pass dashboard run configs (#101320)
bb454891ed : [Reland] Add sym_size/stride/numel/storage_offset to native_function.… (#100749)
4dbab17edb : [c10d] Use macro to deduplicate codes (#101243)
0be53d83fc : [MPS] Add support for MPSProfiler Python bindings (#101002)
816400a294 : Add branch change name for composite action (#101309)
47c99e3a1c : Update PyTorch docker base image to Ubuntu-20.04 (take 2) (#101310)
5fe834afc1 : [inductor] Insert triton barrier before storing to inplace buffers (#100769)
05077f2ac3 : [PyTorch] Avoid extra refcounting in vector variant of VariableType::unpack (#95835)
6afa9a4a69 : [CI] Change dashboard workflow inputs type to boolean (#101308)
a12b640dc9 : Fix typos in troubleshooting.rst (#101305)
6ac0542747 : Cpp Reduce LR on plateau scheduler (#100311)
c772d56966 : Use 20.04 as base image (#101301)
066175d69c : [CI] Add workflow_dispatch.inputs to control dashboard runs (#101279)
a8c32eb78e : [PyTorch] add test for numel slow path affecting data_ptr (#100993)
568bac7961 : [BE][GHF] Add `retries_decorator` (#101227)
2fcc2002fa : Handle tail 0-size tensor appropriately in `MultiTensorApply` (#100811)
52f526cfc0 : [caffe2/tools/autograd] Fix non-determinism in code gen (#101287)
630593d3cc : [bazel] add python targets (#101003)
4434b9af6a : [quant][pt2] Handle constant conv args in prepare QAT fusion (#100525)
3f734c584e : Revert "Mark Windows CPU jobs as unstable (#100581)" (#100676)
7e333fe502 : Fix cuda graphs & sdpa for dropout==0 (#101280)
a8ff647e42 : Disable conv cache emptying (#101038)
5ac48eb353 : [FSDP]Skip unshard call during checkpointing for NO_SHARD sharding strategy (#101095)
aec11b8c80 : implement a function to convert a storage to copy-on-write (#100819)
f0f700e8d2 : ASAN: fix use-after-free (#101064)
a3700571e1 : Fixed a bug in interpolate uint8 AVX2 on non-contig input (#101136)
4a7ee79bf9 : [BE] super small comment update to gradcheck.py (#101103)
a53cda1ddc : [optim][BE] split test file into logical parts: SWA, LR, optim (#101100)
a64e97b62c : Revert "[dynamo 3.11] enable other torch 3.11 dynamo-related tests (#99180)"
8e54218024 : [ROCM] Add build ROCM support to build-triton-wheel.yml (#95142)
dfd822d756 : Fix deserialization for UpsamplingBilinear2d (#101248)
fa40195fac : Don't set_current_node in DDP. (#101046)
d54fcd571a : [dynamo] Skip tests that are broken in fbcode (#101217)
74b2c04aa1 : [c10d] Bridge c10d and gloo stores. (#100384)
c0e5d7e7fe : [CustomOp] Add Dispatcher error callback (#101015)
de6470e28e : [custom_op] Change the python type that maps to ListType in schema (#101190)
d0d8165230 : Cleanup custom op library after each custom_op test (#100980)
3ffeab7f80 : [custom_op] Make repeated registrations error gracefully (#100979)
b3b333205f : Fix `asarray` doc examples. (#100971)
b5c8d0359c : Update autograd.rst (#101007)
aa8dcab1ce : [dynamo 3.11] enable other torch 3.11 dynamo-related tests (#99180)
d56e1b2f67 : add Half support for unary ops on CPU (#98493)
98f6b815b7 : [BE] Make some simplifications to torch.utils.checkpoint logic (#101193)
e568c5a18d : [fx] change from #users to num_users in graph printout (#101140)
2c29149109 : Enhance Composable FSDP cast forward input tests (#100349)
49578913fb : update timm commit (#100931)
3ae612ba7f : [dtensor] remove assertions about submesh checks (#101229)
bf50180b4a : enable dispatch stub for backend PrivateUse1 (#99611)
e98d762f21 : update requirements.txt in /docs (#101092)
de15e740a1 : [dynamo] Activation checkpointing as higher order op (#101028)
c5c75aa06d : [vision hash update] update the pinned vision hash (#101230)
ce76670c6f : [GHF][BE] Add `__repr__` to `FlakyRule` (#101234)
f0cc535c28 : [GHF][BE] Memoize `read_flaky_rule (#101239)
47ec9cc26d : Improve error messages in `THPVariable_set_grad` (#100683)
02f152626c : Fix typos in error message (#101231)
d9cfa0461a : use const_data_ptr in get_device_pointers (#100997)
b9bfc2b2d9 : Warn on failure to end warmup, add explicit api for start of model invocation (#101129)
74dc2a53f6 : Thread generator through trunc_normal_ (#100810)
4c8ee583c3 : [inductor] fix incorrect strides in copy() decomp, fix hf_LongFormer + hf_BigBird errors (#100115)
a6b8e69d36 : [aot autograd] fix de-dupping metadata computation bug (#100431)
5651006b9d : [aot_autograd] proper handling for when outputs are aliased but have identical size/stride/offset metadata (#100430)
2c786961b7 : Towards making torch._inductor.ir typed (#100712)
380054ebb2 : Add IRNode.realize stub with docs (#100710)
738ba13b35 : [BE]: enable PLE error codes in ruff and fix bugs (#101079)
b7bf953bbc : [MPS] Fix bernoulli for int types (#100946)
599ae95d1a : [dtensor] use stack to manage mesh resources (#101202)
6d6abba0d8 : [IValue] Better handle sparseTensors in extractStorages (#100783)
cb94ea6044 : [BE] Simplify tests, elaborate testnames in test_optim.py (#101004)
49c8a0cad0 : [SPMD][BE] Remove the legacy tracing code (#100858)
c567748e16 : Make interpolate_bilinear deterministic using decomposition (#101115)
daed3bf8f9 : Implement coalesced all_gather_into_tensor (#101157)
e47cdd0ca4 : [BE] Testing docs: clarify test instantiation function usage (#100905)
ae23328625 : Remove obsolete upsample_bilinear2d lowerings (#101111)
346e1f512f : sparse compressed validation: allow empty-batched inputs (#101180)
65b15be04c : Fix incorrect sparse_dim in COO.zero_() and in binary operations with zero-sized COO operands (#98292)
41a4e22015 : Update torchbench pin (#101071)
f7571507e0 : Add global boolean for controlling whether to record concrete shapes or not (#101043)
14964b3aa5 : Add is_xpu to torch type (#101072)
1e6002bef6 : [pt2] Skip if curr_size is None (#101170)
0ec4646588 : CUDA Graph Trees - error on deallocated access (#100927)
369a256381 : [Dynamo] Remove cross import in dynamo unit tests (#100851)
502e791241 : Update cpuinfo submodule to include AVX512-FP16 detection (#100865)
4a4854f6b2 : [inductor] Test for shape padding (#100493)
d283075282 : Reduce fake_tensor create_mode logging (#101074)
ad070b6dfa : Check canary_models for models too in torchbench.py (#101081)
4eaaa08623 : Revert "Fix header inclusions in c10 by iwyu (#100304)"
683adb2091 : Add dummy CUDA kernel for assert_async.msg (#101130)
cbfed470bd : Revert "CUDA Graph Trees - error on deallocated access (#100927)"
dd2c22f4bb : bsr_dense_bmm(): enable more precise float32 support with float64 accumulators (#100882)
979f55d3bc : implementation of DataPtr context for copy-on-write tensors (#100818)
87084643e5 : [CI][MPS] Actually make grid_sampler_2d available (#101108)
c4752b1a91 : [MPS] Rename metalIndexingFunction to metalIndexingPSO (#101156)
8b4e28d65d : Fix microbenchmarks (#101065)
036a8d6b4a : Remove NullContext() from benchmark runners (#100309)
c25fdc20c2 : [cuBLAS][cuBLASLt] Allow user-specified cuBLASLt workspace size via `CUBLASLT_WORKSPACE_SIZE` (#101145)
b46553f652 : [inductor] simplify test_cpu_repro with self.common (#101050)
ea86eb3197 : inductor: fallback ConvTranspose when output_padding is big (#100846)
a66de845de : [Quant][PT2E]Fix pt2e quantization maxpool input observer issue (#100961)
6037ee8cc9 : Fix header inclusions in c10 by iwyu (#100304)
da02ccc60e : Revert "PyTorch -> C++17 (#98209) (#100557)"
2621fbda7d : Turn on anomaly detection for AOTAutograd backward tracing (#101047)
15a51e2012 : simplify sdpa backward meta registration (#101128)
5f89d89ada : [vision hash update] update the pinned vision hash (#101142)
075d36d37f : [Dynamo] Fix nested function resume execution (#100426)
c84627c2ee : benchmarks: make --amp works for cpu path (#101057)
a1aa32e204 : [dtensor] tensor ops to use strategy based sharding prop (#100607)
d1f0c8e2d0 : Run C++ test_api binary directly in CI slow jobs (#101088)
0848ed21b8 : [c10d] Figure out device to use for object collectives (#100954)
a0e6ae2c01 : Restore Vulkan tests to periodic (#101026)
13d445c2c2 : Move periodic dynamo benchmarks to inductor workflow (#100915)
b1a8a10a73 : inductor(CPU): fix masked_fill issue when filled value is nan (#101058)
3271413e74 : Revert "Fix header inclusions in c10 by iwyu (#100304)"
bb7d9886fb : [efficiency_camp] Vector Realloc Optimize caffe2::BinaryElementwiseWithArgsOp::DoRunWithType (#100631)
7110060cff : Enable reordering pass (#100747)
91ca9a276f : Revert "Enable reordering pass (#100747)"
6c3af6a966 : Revert "inductor(CPU): fix issue when padding/stride/dilation size is one for cpu weight packing pass (#100951)"
176dabf88c : [MPS] Export check for 13.3 to `is_macos13_or_newer` (#101119)
c650b12e0b : [pt2] Add some helper function for SymIntVector (#101056)
8a20ea0a1f : [Dynamo] Fix torch.{cuda/cpu}.amp.autocast arguments binding bug (#101052)
08ef92e711 : Delete Python-2 checks from setup.py (#101112)
96f46316c9 : Preserve PyTest Cache across job runs (#100522)
2dc93c20ac : [ROCm]Fixed ut test_memory_timeline (#96752)
e762cce61f : Allow cmake vars in docker build (#100867)
058d740f59 : [reland][quant][pt2e] Change input act annotation to a map and allow dynamic quantization for non zeroth argument (#101005) (#101041)
3941bbc5ba : CUDA Graph Trees - error on deallocated access (#100927)
3b2a93a3b5 : [inductor] Make codecache file permissions less restrictive (#100870)
32c9e7d377 : [CI] Run test_multi_gpu in test_inductor_distributed (#100135)
000368b092 : Allow C++ custom class to define __repr__ and use it from Python (#100724)
c0d33f66c9 : [pt2] remove unused `meta_linalg_eigh` (#100965)
6abde61f8e : [pt2] add meta function for `_linalg_eigh` (#100964)
39ec5fa722 : Fix header inclusions in c10 by iwyu (#100304)
c658732950 : [RFC] Add tqdm to benchmarking script (#100969)
0fbe55ea8f : [FSDP][state_dict] Make sharded_state_dict work with composable fully_shard (#100856)
9ba2bfea9c : [PG Wrapper] Add diff capability (#100214)
9ff547a57f : Revert "Fix ordered dict loading with LibTorch (#100743)"
cb668b1291 : Optimize split-split pass (#100983)
f542b31c9d : [export] More robust view->view_copy pass (#100908)
8a193c6dc5 : [DataPipe] Add generated docstring to functional form DataPipe (#100503)
51fe53e619 : [opinfo] item (#100313)
55844dfdbc : [FSDP][state_dict] Restore the state_dict_config for NO_SHARD (#100855)
a723f1f2b9 : fix _privateuse1_tag problem (#100632)
5a933d044f : [opinfo prims] equal (#100663)
33f3dca6b5 : [CUDA][CUBLAS] Fix BF16 reduced precision reduction note in docs (#101044)
6e2efd16d8 : [CUDA][CUBLAS] Add cuBLAS workspace allocation behavior to docs (#100919)
1e89a56a5b : Apply static policy correctly to unspec (#98983)
dfa951171a : Fix typo in RELEASE.md and README.md (#100536)
2b250e1921 : inductor(CPU): fix issue when padding/stride/dilation size is one for cpu weight packing pass (#100951)
083f88e126 : PyTorch -> C++17 (#98209) (#100557)
30cecc0e11 : [MPS] Fix build regressions introduced by #92868 (#101036)
b004c0b3c6 : [inductor] default cache_dir in torch._inductor.codecache should be lazily evaluated (#100824)
b06c180a32 : CUBLAS Flag (`CUBLAS_GEMM_DFALT_TENSOR_OP` -> `CUBLAS_GEMM_DEFAULT_TENSOR_OP`) (#100976)
535368f00e : [vision hash update] update the pinned vision hash (#101032)
649e609667 : [c10d] make ProcessGroupNCCL work.wait() respect timeout (#100162)
b33c9c7c9f : [inductor] support vec type conversion between float and bool (#100950)
44e73da444 : Extend assert statement to include ListVariable (#100841)
27d5019e39 : STFT: correct stft definition and better document tensor shapes (#100427)
2241aaa60c : Revert "[quant][pt2e] Change input act annotation to a map and allow dynamic quantization for non zeroth argument (#101005)"
76cc3ab4f3 : [CI] Delete skips from https://github.com/pytorch/pytorch/issues/93847 (#96049)
bf214f40d4 : explicitly check or discard cudaGetLastError return value (#100488)
f08ddae888 : [quant][pt2e] Change input act annotation to a map and allow dynamic quantization for non zeroth argument (#101005)
20a231b55b : [BE] Prevent pytest from thinking this class defines any tests (#100949)
6308563a39 : Enable reordering pass (#100747)
7da8705f18 : [dynamo 3.11] fix segfault when printing stack trace (#99934)
d57544d39a : Revert "fix specify_constraints's signature when exporting model (#100739)"
4b8127b90e : Revert "[Dynamo] Fix nested function resume execution (#100426)"
fa6df34d30 : [ET selective build] add kernel metadata section to selective_build.yaml (#100665)
35834a405c : Run C++ tests on CI with run_test.py (#99956)
a8c2cd1039 : Add CUTLASS-based MM for structured sparse linear operator (#100485)
d63e0b1578 : [optim] More cleanup and reorg of test_optim.py (#100917)
d0dab772df : [BE][optim] Remove objects from being globals and comment to clarify (#100899)
e353013aa4 : [Vulkan] Ensure non-zero divisors in Vulkan API Tests (#100909)
17fec516fe : [Vulkan] Test conv2d after division (#100910)
9035b6a651 : Allow disable binary build jobs on CI (#100754)
c3f3cb5b0f : [quant][pt2e] Support conv bn fusion in convert step for QAT flow (#100442)
f92b3e1477 : [MPS][BE] `std::is_same::value` -> `std::is_same_v` (#100975)
858657090b : Make sure we get full file path for filtering in pr-sanity-check (#100978)
5970fb402e : C++ CustomClass in Python: indicate which methods are not implemented (#100171)
0073d4cd27 : Update FBGEMM submodule (#100236)
d98d95fb9f : Revert "[Dynamo] Remove cross import in dynamo unit tests (#100851)"
6aa80beca1 : [c10d] Implement new Store methods in TCPStore. (#100383)
8769fb854d : [BE] Fix flake8 B027 errors - missing abstractmethod decorator (#100715)
bd18225c04 : [functorch] Remove internal assert in index_put batch rule (#100516)
19be2bb875 : Revert "[MPS] Add support for Custom Kernels (#100661)"
f558af2a55 : [adam] Use the right params in weight_decay, rename for clarity, fixes #100707 (#100973)
ba47a2b227 : [export] Pickle of ExportGraphModule (#100924)
b71ec6bdf3 : Revert "Forward fix lint failure from #100661 (#100907)"
0e08a9b057 : Wrap more constraint violation cases to UserError (#100897)
b179d34a19 : Handle negative padding in reflect_pad_backward (#100923)
92a7640b76 : Add mul tests with sparse sample inputs (#100393)
0141a242fd : bsr_dense_bmm(): remove sparse_rowspace kernel and some dead code (#100876)
793bd6993a : Work around torchdynamo import error with functional collectives (#100901)
93aac15d82 : make torch/csrc/jit/backends/coreml/objc/PTMCoreMLFeatureProvider.mm data_ptr-correct (#100886)
c5d7226ab9 : Upgrade torchbench pin (#100937)
1ea224c2a4 : make torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp data_ptr-correct (#100888)
bc3108c2e2 : make torch/csrc/jit/runtime/register_prim_ops.cpp data_ptr-correct (#100832)
de02c8bed4 : Revert "Rename DispatchKey.PrivateUse1 to custom device in torchgen. (#99406)"
01476465dd : Revert "add a cast function that suppresses -Wcast-function-type-strict (#100170)"
d371a890a2 : Fix ordered dict loading with LibTorch (#100743)
a3f656cc6c : use const_data_ptr as source of std::copy (#100885)
36d91b5513 : Add differentiable mkldnn_rnn_layer_backward to support double backward of LSTM (#100627)
d261e43c37 : [fix] cat_slice_cat : slice with negative size (#100828)
622e582a2b : Register get_cpu_capability for jit (#100723)
c4bc259f00 : bsr_dense_mm(): better test coverage (#100543)
43127f19f1 : Revert "Allow disable binary build jobs on CI (#100754)"
4c3b52a5a9 : Allow disable binary build jobs on CI (#100754)
e72385af20 : [Reducer] Move require_finalize_ (#100782)
d90f71ea0b : [PG NCCL] Provide work obj in postProcess (#100781)
a0752b68e7 : [BE] Remove empty pre and post proc functions (#100780)
7012600abe : fix cpu autocast check in rnn (#100621)
26cd958718 : Support runtime assertion for inline constraints (#100763)
75e4214f92 : Fix `recursive_store` for smaller elementSize (#100902)
cecfcf1e17 : [MPS] Handle MPS failures of test_modules.py in common_modules.py (#95334)
97bb4c2538 : [vision hash update] update the pinned vision hash (#100926)
9eab13fc90 : Reenable llama benchmark (#100877)
5ef50ef2d8 : [caffe2] Remove inline keyword of function CUDACachingAllocator::format_size (#100734)
4447dfa673 : Remove MacOS workflow step to disable XProtect (#100692)
660a0d8622 : [Functorch] Skip docs setup if called in optimize mode (#100750)
16a4075327 : Throw if 'dropout' argument name but func does not have nondeterministic_seeded (#100771)
9a811d1df2 : [BE] Update notes linkage in common_device_type, fix very minor grammar (#100898)
812cadf90a : [3/n] loading meta to device (#100495)
bde7b81f34 : [S337714] Back out "[PyTorch] Don't do extra numel() check in TensorImpl::data() (#98090)" (#100893)
2d2f716ddc : [export] Fix cond for pass_base (#100836)
b0a372e1fa : fix specify_constraints's signature when exporting model (#100739)
fb69aa1592 : Forward fix lint failure from #100661 (#100907)
95f191a248 : Always run prioritized tests first, even if they're expected to run serially (#100748)
c4bbeb5b8a : [Dynamo] Remove cross import in dynamo unit tests (#100851)
5079bf3df6 : [inductor] Add variable names to MemoryDep (#100308)
651b5b0f5f : Fix nightly build of C++ docs (#100845)
f39cda83d1 : [MPS] Add support for Custom Kernels (#100661)
d9d98b4d54 : Skip DNS host fix on ROCm runners (#100861)
aaa1323c97 : remove double doc upload after CloudFront fix (#99032)
116e04be29 : Initialize view_replay_enabled_ in the AutogradState ctor (#100822)
ec144b9412 : handle new param from torch.compile (Inductor pattern matcher), enable_log (#100814)
ccd060abd8 : [stronghold][bc-linter] Switch to reusable action, enable for everyone (#100737)
176ef97fc1 : [inductor] Fix bug where a node gets erased twice (#100848)
0731420645 : [PyTorch/Distributed]Only sync buffers when broadcast_buffers is True (#100729)
bfe5f5bbe1 : [WIP] enable cuda graphs support for flash attention with dropout (#100196)
bb28f3f519 : `USE_PRECOMPILED_HEADERS` is not supported on Apple M1 (#92868)
86ddfc7f68 : [inductor] Move cpp wrapper trigger logic to inner_compile (#100611)
62c53aabdb : Revert "[xla hash update] update the pinned xla hash (#100369)"
6eb0d7541d : [pt2] add `SymInt` support for `linalg_qr_backward` (#100833)
1e591a8b64 : [pt2] add meta function for `solve_triangular` (#100829)
cd8b82e5c6 : bsr_dense_mm(): code refactoring (#100634)
41bafb0b7b : [xla hash update] update the pinned xla hash (#100369)
333de1fdb0 : Fix some NVCC warnings (#100823)
46affcb004 : inductor(CPU): skip weight packing when autocast is enabled (#100844)
92cecb8e3c : inductor(CPU): don't do binary fusion if binary's inputs are same tensor (#100843)
970c60b336 : inductor: disable lowmem_dropout on CPU (#100702)
7d0e4e2aa8 : Fix AT_USE_JITERATOR checks (#100464)
3b6a7f4d51 : [MPS] Fix index_put with deterministic algorithm enabled (#97660)
4154c8ea15 : [quant][pt2] Add Conv + BN + ReLU fusion for prepare QAT (#100283)
05e355022f : [inductor] track provenance of masks from indices (#100816)
953aa6d90e : [TP] Enable more generic attn in Tensor Parallelism (#100508)
03433080e6 : [inductor] Support FallbackKernel in cpp wrapper codegen (#100553)
5293dee920 : fix missing-prototypes warnings in torch_cpu (Part 3) (#100245)
358fe95088 : [fix] check for histogramdd when bins is int[] (#100624)
ca9f55f79d : misc. fixes to constraints warnings and errors (#100745)
0bf9722a3a : modify ipex backend (#99499)
82091d666c : [ONNX] Refactor Input/Output Adapter (#100490)
aa081d8f27 : [CI] Update torchtext commit (#100767)
00d4890218 : [c10d] Apply EFA workaround to Store tests. (#100382)
266c84e3ab : [pt2] add meta function for `linalg_qr` (#100714)
8d56b0a5b5 : remove unused tuple_cat utility (#100731)
44caa395cb : inductor: fix mm_plus_mm fusion pattern issue when has broadcast add (#100679)
d719f0276d : [Dynamo] Fix nested function resume execution (#100426)
f73973d789 : Expose function to retrieve list of registered loggers (#100776)
6af509860e : Add logcumsumexp forward-ad (#100629)
71c4becda7 : [inductor] Track operator counts (#100329)
8360b6c2a8 : [c10d] Expose new Store methods. (#100381)
19d8d31c94 : [fbcode/caffe2] Make fmt formatter methods const (#100616)
a5cb888013 : [inductor] Do not try to shape-pad symbolic-sized tensors (#100738)
e55f02f4d0 : lint test/inductor/test_cuda_repro.py (#100777)
850556ed6e : Add "all" option to logging (#100664)
f1b2e00700 : graph break when calling resize_as_() on graph input (#100148)
ce50674f85 : [inductor] TARGETS for all inductor tests (#100744)
3362c1d240 : [ONNX] add cast operator after reduce to match desired dtype (#100700)
fee6d46940 : Revert "Bump up flatbuffer submodule version to the latest release (v23.3.3) (#100716)"
e5b065525b : Add unit test for nested_tensor input to nn.TransformerEncoder (#100650)
aad017183d : Introduce aggressive merge to `CapabilityPartitioner` (#100195)
9790f9174a : skip lcnet (#100726)
db5430fd25 : fix bash math for pr-sanity-check? (#100735)
e3d783c013 : [inductor] Cleanup strip_last_size logic (#100305)
bd9d50a3fc : Remove future deprecation warning from kl_div docs (#96541)
e20c94bda9 : [MPS] Add the test for 5D in test_mps which is skipped. (#99271)
a1f318daba : Fix get_reordered_tests in run_test.py (#100752)
c9593bc0e1 : [ONNX] Refactor diagnose_call message_formatter signature (#100299)
3f025c607c : summarize graph breaks (#100696)
f76d0e1b82 : remove unused extract_arg_by_filtered_index utility (#100649)
4a90deb137 : [Doc] Add GRU new gate calculation difference (#100646)
e0a3d014e9 : [CI] Do not auto-label nightly builds PR (#100740)
8d31b81edc : Bump up flatbuffer submodule version to the latest release (v23.3.3) (#100716)
5c14eea1de : Revert "extend serialization for tensor metadata (#99808)"
7961812c4d : Rename ForceInPlace to InPlaceHint. (#99764)
036d2f6593 : Add unstable-periodic to upload test stats (#100751)
d41134e2f2 : dynamic equality constraint (#99993)
2f5e9b60f9 : [ROCm] Limiting the NUM_PROCS to 8 while UT testing (#100133)
67e3dd86b5 : Update Multipy CI pin (#100640)
59cb02db54 : Symintify TensorFactories.empty_like (#100668)
447a20fdb1 : [profiler] provide torch.profiler._utils._init_for_cuda_graphs() as a workaround (#100441)
41dc25d5fc : [inductor] Pattern to replace cumsum with arange (#100673)
f42eae4755 : Revert "[export] Pickle of ExportGraphModule (#100620)"
bf2258f582 : Fix frequent "warning C4141: 'dllimport': used more than once" (#100708)
d4975a5fe0 : [export] Pickle of ExportGraphModule (#100620)
4ca26d183a : [CI] update hf version for ci (#100666)
b89b5716a9 : ROCm fixes for PyT2.0 (#100089)
3f725db4a6 : [inductor] Run dead_node_elimination to a fixed point (#100672)
d66add688f : Revert "Add logcumsumexp forward-ad (#100629)"
40df6e1647 : [ONNX] Simplify repeat_interleave export for scalar-valued 'repeat' (#100575)
3f2336d3fe : Revert "[EZ] move test decorator up in the class def (#100719)"
6b20ac3bc4 : make torch/csrc/jit/passes/onnx/unpack_quantized_weights.cpp data_ptr-correct (#100681)
1fe91f5922 : make torch/csrc/distributed/c10d/quantization/quantization.cpp data_ptr-correct (#100688)
c676aa8bee : make torch/csrc/distributed/c10d/ProcessGroupGloo.cpp data_ptr-correct (#100689)
543e9c6517 : use const_ and mutable_ data_ptr for much of torch/csrc/jit/runtime/static/ops.cpp (#100678)
4101de342b : Type torch._inductor.codegen.wrapper (#100657)
642f4ed606 : add a cast function that suppresses -Wcast-function-type-strict (#100170)
e53b288679 : remove unused filter_map utility (#100647)
bf08b072a7 : Add functionalization pass in TorchDynamo (#99461)
31fdd19b5b : Add support for list copy in dynamo export (#100669)
0e017af35b : make torch/csrc/jit/python/pybind_utils.cpp data_ptr-correct (#100682)
a2e81a8004 : [DataLoader] `__getitems__` added to description of Dataset API and better supported within `Subset` (#100375)
6064c4c64c : Disable core dumping on ROCm UT workflows (#100532)
daf5100656 : [EZ] move test decorator up in the class def (#100719)
7a15e82388 : Fix tensor registration to work with coalescing collectives. (#99763)
54f27c7d5c : make torch/csrc/distributed/c10d/quantization/quantization_gpu.cu data_ptr-correct (#100685)
57e19ad86d : Add pattern to merge consecutive splits (#100107)
c91a41fd68 : [Inductor][Quant]Enable the decomposed dequant maxpooling2d loop fusion (#99132)
675029aabf : inductor: add params check before doing sfdp fusion (#100619)
37f1be041a : [pt2] enable `svd` in `fake_tensor` (#100130)
bb6b24c622 : [BE] Dockerize PyTorch docs jobs (#100601)
05adf4d49d : inductor(cpu): skip ConvTranspose2d packing if has output_size input (#100612)
63f2f9fb0d : [BE] Remove unused CircleCI checks (#100630)
ee4cb4b1e7 : Add --offload-to-disk support to minifier (#100546)
ce1ad1c143 : Add load_storage (#100519)
d4dad36cf1 : [quant][pt2] Improve prepare_qat Conv + BN numerics test (#100271)
bf52d570d9 : torch.save/load torch.compiled models (#97565)
2f9538006e : [vision hash update] update the pinned vision hash (#100671)
d658c62677 : Add logcumsumexp forward-ad (#100629)
2f41bc5465 : [DataLoader] Add context to NotImplementedErrors in dataset.py (#100667)
a3989b2802 : remove unused concat_iseq (#100648)
35a6b04419 : Set assume_static_by_default to True in Dynamo config (#100458)
73eab18ac8 : set lowmem_dropout and fallback_random configs for all tests in test_… (#100506)
b6d318291b : [FSDP] Do not `sys.exit(0)` explicitly at end of unit test (#100645)
6d2f8114be : Revert "[BE] Dockerize PyTorch docs jobs (#100601)"
da0993280d : use const_data_ptr in torch/csrc/lazy/core/hash.h (#100644)
fe3c83d349 : Have testing overhead dashboard only use successful workflows (#100580)
1d5577b601 : No need to run Windows binary build for every PR (#100638)
c525440ba3 : Logging documentation updates (#100595)
24e9b8f5f4 : [PT2E][Quant] Use subgraph matcher to annotate linear pattern (#100566)
8869897ebe : [replicate] support simpler device_id (#100217)
2703684acf : [BE] Dockerize PyTorch docs jobs (#100601)
73dd6f04c9 : extend serialization for tensor metadata (#99808)
25b42aef67 : [Inductor] Using PythonPrinter for SymInt arguments codegen for FallbackKernel (#100606)
4bad3f62f7 : [MPS] Add support for MPSProfiler (#100635)
8994d9e610 : [dynamo] Hide guard_fail_hook behind a flag to improve cache lookup time (+10% DebertaV2) (#100590)
edebad81a9 : Add a rst doc for the performance dashboard (#100592)
23a095ca5f : Chunked inplace weight loading API (#100615)
04d67e20a7 : Revert "torch.save/load torch.compiled models (#97565)"
67fc9bbb9b : Rename percentiles to quantiles in triton.testing.do_bench (#100477)
6370ac0251 : [codemod] Replace hasattr with getattr in caffe2/torch/ao/quantization/stubs.py (#100597)
9c185b6b46 : [codemod] Replace hasattr with getattr in caffe2/docs/source/notes/extending.rst (#100598)
87f08d717e : torch.save/load torch.compiled models (#97565)
4b2f496eab : [c10d] Implement new Store methods in PrefixStore. (#100380)
97245a06e1 : Turn on TORCH_CHECK for NT wrap_buffer (#100596)
26533349a7 : [codemod] Replace hasattr with getattr in caffe2/torch/jit/_trace.py (#100362)
6120c5842c : [codemod] Replace hasattr with getattr in caffe2/torch/ao/quantization/utils.py (#100361)
ff974cd962 : Fixing interpolate on uint8 unsqueezed 3D CL tensor (#100258)
9b3552eb2c : Add runtime assertions for input shape constraints (#100247)
8f1122ce7b : [inductor] Enable conditional use of tl.reduce (#100569)
94d306fd45 : [inductor] Stop using `x + tl.zeros(...)` in generated triton (#100163)
06fbd5dc9c : [inductor] Fix argmin/max with duplicate values (#100573)
4918940184 : [inductor] Fix nan-handling of max and min reductions (#100572)
19a57870a3 : Fix a number of issues with divs in ValueRangeAnalysis (#100547)
a204f7f518 : [c10d] Fix subprocess group handling in scatter_object_list. (#100552)
aecbaa5d45 : [vmap] bucketize (#95783)
c4fd76e7b4 : Revert "[export] Pickle result of export (#100423)"
7226dbcbce : [export] Pickle result of export (#100423)
8d598f2f25 : [exportdb] Change case ids to case names for UserErrors. (#100600)
c58d9642d0 : Don't build Triton from source in benchmarks/dynamo/Makefile (#100613)
8eb82135d1 : [docs] Docs for writing ATen IR passes + FX Pattern matching (#100577)
fe3ecfe0cf : Add AotAutogradFallbackTests to dynamic suite (#100454)
2dca418112 : Reland basic dynamo support for traceable collectives (#100476)
9f3c6b1b63 : Fix graph break in a common `func(self, *args)` pattern (Faster stable diffusion) (#100444)
c2556c034d : Improve minifier printing to be more chatty when it makes sense (#100486)
c7e9f40653 : Misc accuracy improvements on minifier (#100447)
7f997aa393 : [codemod] Replace hasattr with getattr in caffe2/test/distributed/fsdp/test_fsdp_optim_state.py (#100360)
8df748f3be : [vision hash update] update the pinned vision hash (#100510)
c29ab84115 : Fix bug in process_group_name when there are duplicate pgs (#100518)
253b9d3247 : [replicate] input casting support (#100216)
e87ed2a88d : [primTorch] add ref for `polar` (#100345)
2892c06e82 : Ensure device arg is passed to test_transformers (#100260)
f558bb6f76 : inplace PyTorchStreamReader getRecord() (#100418)
e6c0164f1c : Use Boxed Calling Convention for AOT Eager (#100417)
d67e4db8ff : Require contiguous for view_as_complex (#100428)
d25c93f919 : Remove speech_transformer workaround, torchbench handles it correctly now (#100558)
fd841763e1 : [dynamo] Minor fixes and clean-up in eval_frame.c (#100496)
6aeb85add8 : add checkpoint support for custom device (#99626)
3c1dd0a4b1 : [cuDNN][CUDA] Fix for `install_cudnn.sh` following 12.1 CI update (#100501)
fa2bfab93e : [C10D] Drop the GIL when creating a TCPStore to avoid deadlocks. (#100555)
c3bcf5f628 : Support multiple separator characters, '/' and '\\', on Windows. (#98146)
f82756335d : [ONNX] Update 'Functionalize' pass to support pre-decomp graph; Drop 'aten_graph' arg for 'DynamoExporter' (#99667)
9bc68fcd25 : [pytorch] Accelerate indexing_backward_kernel with duplicates (#99441 attempt 2) (#100505)
61813a8e62 : [reland][CI] Start to collect inference perf with cpp_wrapper ON (#100187) (#100502)
1a6f613b8f : Check uppercase when checking for merge blocking SEVs (#100559)
0a6a0ac49b : [MPS] Add dot input check (#100099)
3c5ec6af14 : Partition modules (#98628)
75945d54f7 : Properly propagates checkpoint wrapper args and kwargs (#99791)
8f6951cf55 : [cuDNN][cuDNN V8 frontend API] Clean up `time_sorted_plan` workaround for cuDNN v8 API (#100287)
478a5ddd8a : Mark Windows CPU jobs as unstable (#100581)
f04bb519f5 : [DataPipe] Change DataPipe display name in profiler (#100042)
72c68704d7 : Revert "Temporarily move ROCm to unstable (#99579)" (#100564)
a304b2a45f : Activate TracingContext earlier (#100043)
3d10e748e7 : [Reland] Initial version of Dynamo capture for HigherOrderOperator (#100544)
e552b91286 : torch.utils.checkpoint warns if user does not pass use_reentrant explicitly (#100551)
0595ecf00c : [ONNX] Add symbolic for _convolution_mode (#89107)
d419ad17b2 : [dynamo] Disable pytest AST rewriting in test_export (#100484)
2f13a7a7a7 : Prevent GraphArg from keeping real tensors live (#100515)
16d268e280 : Fix comment error in TensorIterator.cpp (#100227)
6a12f10b08 : Publicly exposing `torch.backends.cpu.get_cpu_capability()` (#100164)
3e18d3958b : [DataLoader] Follow-up Fix: TypeVars of Sampler (#100409)
db4572dbf1 : Revert tl.reduce usage (#100521)
287f74c4fc : Revert D45387167: Multisect successfully blamed D45387167 for test or build failures (#100424)
2ac6ee7f12 : Migrate jobs: `windows.4xlarge`->`windows.4xlarge.nonephemeral` (#100548)
843ead134c : [ONNX] Add supported ops into test_fx_op_consistency - 1st batch (#100265)
2ebb48ff28 : [SPMD] add FQN argument to Override.replacement (#100473)
6cc0158311 : Use `maybe_unused` attr in VariableType (#100498)
58f796ff5d : Revert "Initial version of Dynamo capture for HigherOrderOperator (#99988)"
b2d703e2d7 : Stop loading functorch._C unless torchdim is needed (#100491)
8b64dee5d2 : [fix] torch_compile_debug don't log with 0 (#100462)
896eb1db26 : [Dynamo] Skip TB Background_Matting model eager accuracy check because of nondeterminism (#100513)
9e2808aa47 : Retry resolving download.pytorch.org with Google DNS (#100509)
771a9debbe : [PT2E][Quant] Refactor quantizer and qnnpack quantizer code to support dqlinear config (#99399)
1bbca4fbc0 : Relax after_aot restriction on no buffers, serialize small constants (#100472)
2089a9bd48 : Refactor minifier tests to be more compact (#100471)
409fc7a4c7 : Make hash_storage work with size 0/1 storage (#100467)
4b9ba3fad5 : Allow discontiguous NestedTensors to empty_like (#98383)
419387f66f : Run periodic jobs only twice a day on weekends (#100489)
6b2ecb12b6 : OpInfo: specifying sparse sample input function implies the corresponding layout support (#100392)
3ae0e23b90 : Fix sum OpInfo for sparse sample inputs and assert coverage for sparse-enabled operators (#100391)
ffcbd1c2de : Move tracked nn_modules from OutputGraph to TracingContext (#100457)
2439090bef : Remove special casing for stride/size setup for guards (#100456)
9439cb0e11 : Avoid using einsum for torch.cat DTensor propagation (#100251)
d23dbfff60 : [ONNX] Add RemoveConstantInputStep to adapt torch inputs to ONNX inputs (#100252)
6b5f50004d : [inductor] Change the default value of layout (#100254)
c3aa59c8f5 : Revert "[WIP] enable cuda graphs support for flash attention with dropout (#100196)"
dc4a25312f : Fix hosts update for binary build (#100507)
e4ad67f9c2 : Remove ci: sev label and details from ci-sev.md template (#100504)
34e90b8df1 : Revert "[inductor] Cleanup strip_last_size logic (#100305)"
8ec0a939a2 : [PT2E][Quant] Fix bug in quant spec of symmetric static quant (#99398)
8430430e94 : Handle trailing masked column behavior for nested tensor (#100113)
0acfe2ce09 : [dashboard] higher tolerance for AlbertForQuestionAnswering (#100277)
de7793d577 : [inductor] Cleanup strip_last_size logic (#100305)
32615618e4 : [WIP] enable cuda graphs support for flash attention with dropout (#100196)
a587f1ff0a : [CI] Change the dashboard run to once a day (#100499)
7ff71a3a48 : Populate download.pytorch.org IP to container (#100475)
2ec6eb3d09 : Revert "PyTorch -> C++17 (#98209)" (#100497)
543b7ebb50 : Revert "Migrate jobs from windows.4xlarge to windows.4xlarge.nonephemeral instances (#100377)"
7caac545b1 : Migrate jobs from windows.4xlarge to windows.4xlarge.nonephemeral instances (#100377)
311c2bb7ec : Move pattern match for foreach before bulky if-else in `save_variables` (#100445)
e8a1d0be3e : Revert "Mount /etc/hosts into container (#100475)"
5fbb40669f : [dynamo][moco] Disallow_in_graph distributed APIs (#100071)
0dc671c247 : [c10d] Add new Store methods: append, multi_get, multi_set. (#100379)
8f0c825d36 : PyTorch -> C++17 (#98209)
50b0fff060 : ci: win cpu test -> trunk, cuda test -> periodic (#100478)
0efab60401 : [BE] Update cutlass with NVIDIA upstream changes to 3.1 (#100333)
06bf5d4de7 : enable headdims > 64 for flash attention on sm90 (#99776)
279f3cd0a6 : [pt2] add `SymInt` support for `dsplit`, `hsplit`, `vsplit` (#100352)
794e3971ab : Add size check before calling stack_.at(dict_pos) in unpickler.cpp (#94300)
ab65bac3ce : Use yaml.SafeLoader instead of legacy yaml.Loader (#100443)
02a0fb8df4 : Add error_inputs_sparse method to OpInfo (#100389)
d425da8bf3 : Replace master with main in links and docs/conf.py (#100176)
0aac244680 : Support expandable_segments:True in fbcode for caching allocator
99ded8bbce : Mount /etc/hosts into container (#100475)
af92fc1cd7 : Revert "[functorch] test for compiling functorch transforms (#100151)"
4c99f9cdf2 : Initial version of Dynamo capture for HigherOrderOperator (#99988)
984a2397ba : Refactor OutputGraph (#99987)
1114673c90 : Revert "[pytorch] Accelerate indexing_backward_kernel with duplicates (#99441)"
ec3c8abb54 : [inductor] Remove redundant model copy when running with cpp_wrapper (#100275)
af62d098fe : [export] Migrate internal verifier to subclass export/verifier
41361538a9 : [pt2] add `SymInt` support for `tensordot` and `inner` (#100356)
4582ceb2c4 : [distributed][sharded_tensor] Move local_shards check from ShardedTensorBase to ShardedTensor (#100197)
8556cf208a : Make backend_accuracy_fails suppress errors in same_two_models (#100324)
054a254b06 : Run minifier tests same process when possible (#100416)
f093ee1722 : Prevent Triton from getting eagerly imported when importing torch._inductor (#100374)
74cc377162 : Speed up minifier tests by applying some configs that speed things up. (#100387)
0a479d9b9c : Simplify minifier testing by incorporating fault injection in prod code (#100357)
17be65381d : Do not use pickle to output config entries in repro scripts (#100354)
0093df78df : Manually resolve download.pytorch.org to IPv4 (#100436)
52a36a98d9 : [dynamo] Graph break on a list referencing self (#100296)
090ec55f8d : Only skip in torch inductor test
d5169e7141 : Use a stable ordering for saved values in functorch.default_partition (#100111)
ea5f6d7312 : [functorch] test for compiling functorch transforms (#100151)
ff29722364 : [inductor] Prevent reusing aliased buffers if aliases still have uses (#100332)
3fd46e1f0d : [vision hash update] update the pinned vision hash (#100437)
fdc853b14c : Add --baseline option to benchmark runners (#100266)
c6c9258357 : Delete @property support at module level, it is unused (#100353)
e918fd18e7 : Disable densenet121 as it is flaky (#100371)
123be4b694 : [dtensor] add debug tool to track op coverage (#100124)
13da6585b6 : [MPS] Skip all empty ops tests (#100368)
a50fb50c51 : [MPS] Fix exception regex not compared (#100367)
5daef13883 : Fix `merging` label removal (#100433)
f143c92739 : [docs] Fix typo in get-started.rst (#100355)
7b684310c8 : [BE][GHF] Do not call GraphQL twice (#100434)
dc9c79d3cf : Allow each fully_shard unit to cast forward inputs for mixed precision config (#100290)
429155b3c8 : Disable some checks to get the test to pass
66fde107e2 : [codemod] Replace hasattr with getattr in caffe2/torch/testing/_internal/common_device_type.py
3fb0bf4d96 : Automatic pulling ExtraFileMaps without explicit mapping.
a1d041728b : Back out "[aarch64][tools/build_defs/third_party/fbcode_defs.bzl] Fix dep handling in cross-builds"
c3ccdc0125 : Add store.wait() tests (#99577)
97afbcbc80 : [pytorch] Accelerate indexing_backward_kernel with duplicates (#99441)
dc27b842ba : Ensure optimizer state references are cleared (#100282)
e88e92e7a2 : Update to reruns + timeouts in run_test.py (#100412)
29b2745285 : Add message about need_weights=False performance profile. (#100396)
940662c4dc : Remove some dyn shape flags (#100317)
aafc6ce8cc : Produce constant variables in cases where a SymNode is created with a constant (#100144)
0cf6e74fa9 : add users to external contrib stat upload (#100403)
0bcb9dac4f : [ONNX] Non-global diagnostic context (#100219)
8e084cbfaa : [ONNX] Remove 'diagnose_step' (#99944)
c94b6a6712 : [ONNX] Introduce 'diagnostics' to 'dynamo_export' api (#99668)
85bd6bc010 : Cache pretrained mobilenet_v2 and mobilenet_v3_large models in Docker (#100302)
fd82f11882 : [lite interpreter][hack] Add batch_norm_update_stats if batchnorm and training are present (#100134)
d5bd23684d : Pin scikit-image and tb-nightly CI requirements (#100399)
2a6a159c0c : Modify repeat_interleave docs to highlight potential overloading (#99650)
73dac48464 : Add bertmaher to triton pin CODEOWNERS (#100390)
5f92909faf : Use correct standard when compiling NVCC on Windows (#100031)
73645a8412 : Add CUDA 12.1 CI workflows (#98832)
3edff6b6ec : Improve detection of workspace/non-output allocations in cudagraphs (#99985)
5d93265cce : Report timeout/infra_error instead of 0.0000 on infra error (#100372)
a014d1b18c : [Easy][FSDP] Clarify `_use_unsharded_grad_views` comment (#100359)
2d8deffc1e : Refactor repro/minifier into CLI; add analyze (#100226)
89c43f4108 : Revert "Produce constant variables in cases where a SymNode is created with a constant (#100144)"
83b803c2b5 : [FSDP] Fix `use_orig_params=True`, CPU offload, `no_sync()` (#100180)
e779a30d50 : [BE] Fix SIM109 `compare-with-tuple` (#100337)
01abbfbaae : [BE] Fix all B022 `useless-contextlib-suppress` (#100335)
d7bdfd3454 : Produce constant variables in cases where a SymNode is created with a constant (#100144)
cc3ed8ae53 : [inductor] avoid zero division error for dropout (#100222)
beb7f79517 : Fix intermediate hooks on inplace buffers, enable it in testing (#100322)
155fa4e714 : Use sympy.And instead of bitwise operator, for better promotion (#100328)
6c934a89a7 : Skip invalid grads in outplace foreachs' backward (#100256)
76bcc87277 : fix TIMM mobilevit_s compiler issue for dynamic CPU path (#100230)
e1021ec535 : [decomp] Bad accuracy for elu_backward (#100284)
3d55bce3bf : Revert "Move win-vs2019 build and test to unstable (#100281)"
2442858f52 : [MPS] Fix `layer_norm_backward_mps` key (#100295)
03806eddbf : [dynamo] Compile torchvision augmentations (#100292)
6647e61a59 : Update docstring for dynamo.export tracing_mode (#100205)
9075e3c2c6 : Revert "Run test_fx_to_onnx_with_onnxruntime serially (#100298)"
1267897d67 : [ONNX] Skip flaky dynamic test in CI (#100297)
3a3f781f6c : Run test_fx_to_onnx_with_onnxruntime serially (#100298)
7684044b71 : Add size check before calling .back() in rpc/script_call.cpp (#94297)
35991df5d6 : fix(docs): torch.autograd.graph.Node.register_hook can override grad_inputs, not grad_outputs (#100272)
2b79d6c425 : Update testing aggregate data (#100070)
6a02342131 : Check inputs have same dtype in addmm_impl_cpu_ even if input has zero numel (#100274)
d7fa7fa8cf : Introduce fast path in the CPU equal op
331ed5bee7 : Add comment link to revert message (#100276)
999e17d80a : Move win-vs2019 build and test to unstable (#100281)
ccce7a2de0 : follow up PR for test_c10d_ucc.py in response to Xiang's review of #88110 (#99654)
8714fc7a2b : [ONNX] Set tracing_mode through options.dynamic_shapes and enable dynamic tests in test_fx_to_onnx_runtime.py (#100212)
0a5c930499 : Re-enable CUDA 12.1 builds for Windows (#100268)
5b98910139 : [inductor] Stop using `x + tl.zeros(...)` in generated triton (#100163)
270a33165b : [inductor] Move reduction_type special cases out of make_reduction (#99660)
6ab9453ea9 : File level rerun changes (#100200)
43dea76305 : [CUDA] Switch to `at::empty_like` in `adaptive_avg_pool3d_backward_cuda` (#100202)
380ccfd442 : Revert "Added round_with_scale_factor arg to ATen (#97868)"
5022143f88 : Bump cuDNN frontend submodule to 0.9 (#99674)
3f656ad7bb : [CUDA] Do accumulation for Adaptive Average Pooling in `opmath_t` (#99378)
b66d7007d8 : Add aten.smooth_l1_loss_backward to core_aten_decompositions (#100267)
9e1f46d55b : Use `[[maybe_unused]]` in `VariableType_[0-4].cpp` (#100250)
c4bed869d1 : [PG Wrapper] Enhance error msg (#100213)
e0a2b49f0b : [SPMD] Introduce prerequisites to graph_optimization_pass (#99970)
61dffa61c3 : [fix] masked_scatter_: non-contiguous self (#100232)
9cd48b0575 : Add warning information for dtypetensor. (#99521)
56e235ad8c : Pin functorch docs requirements (#100257)
628a8df1c9 : [c10d] Comment out ddp_hook_with_optimizer_parity tests (#100215)
efed5e1c47 : Fix triton auto update pin workflow (#100211)
1b84be551a : Improved CustomOp API with schema inference (#100127)
7ebb60c9f4 : [CustomOp] Fix lifetime semantics (#100114)
d176e3ff69 : [quant][pt2] Add test for prepare_qat Conv + BN numerics (#99846)
23de2e0620 : [Dynamo] Fix staticmethods for FSDP (#100117)
e6f9bc500b : CustomOp simple abstract implementation registration (#99439)
4135295a76 : Excise yaml dependency in torchgen.model (#100203)
55b661137f : [inductor] Use decomposition for smooth_l1_loss_backward (#100242)
2504089329 : Enable test_linalg_solve_triangular_large (#96182)
90c44b134a : Revert "[CI] Start to collect inference perf with cpp_wrapper ON (#100187)"
07c02b9e92 : Add vmap support for `smooth_l1_loss_backward` (#99429)
d4bf76c2a4 : Persist torch.assert in aten graph (#100101)
cef15ecc2e : [ROCm] Also look for 'Cijk' (rocblas kernel) to verify gemm in test_kineto (#92889)
751c54b546 : Add experimental export() API (#100034)
a23365885f : [FSDP] Make set_state_type to SHARDED_STATE_DICT compatible with NO_SHARD sharding_strategy (#100208)
7220201a2c : fix missing-prototypes warnings in torch_cpu (Part 2) (#100147)
54c0edf6da : Track exact origin_node on best effort basis (#100110)
89b1e67d0a : [Tensor Parallel] Add a new Colwise Parallel style when Pairwise cannot directly used (#100137)
56a93ed56d : Store constraints and example inputs in the graph module as metadata in export (#99961)
c8877e6080 : enable some cuda warnings (#95568)
cba07ffe0c : [ONNX] Add xfail into subtests of op consistency and retire fixme (#100173)
6168bed663 : Remove unnecessary <execinfo.h> include (#99800)
a8ad0dc333 : [philox_rand] Add decomps (#100206)
9cda7b9e47 : [hotfix] Do not import torch.ao.quantization._pt2e from dynamo (#100194)
9609aeefbb : Revert "[c10d] Comment out ddp_hook_with_optimizer_parity tests (#100215)"
0692bdd95f : Improved message to suppress errors in _dynamo/exc.py (#97345)
b51f92ebda : [Docs] Fix docstring format (#99396)
64efd88845 : Add directly referenced header files for "ceil_div.h" (#99607)
3e87fc521b : [CI] Start to collect inference perf with cpp_wrapper ON (#100187)
674018903d : per-Tensor `grad_fn` for in-place foreach functions (#96405)
9a3e411a41 : More rigorous mixed overloads on SymInt (#100008)
9dcabe293a : Delete pytorch/caffe2/contrib/docker-ubuntu-14.04 (#100155)
d1fbd33c70 : [FSDP] Remove unneeded disable of tf32 (#100179)
1f4183e275 : [FSDP] Subtest sharding strategy in test_fsdp_grad_acc.py (#100178)
ae40a6c735 : [c10d] Comment out ddp_hook_with_optimizer_parity tests (#100215)
a145a3332c : Add tensor to fake clone snapshot for immutable source of truth (#100128)
ca1cf434e7 : Not flatten states when use_orig_param is True and sharding is NO_SHARD (#100189)
3241fbd627 : Inductor cpp wrapper: support LinearBinary (#99957)
0221198790 : Added Typechecking to input tensor in RNN (#100100)
b8d7a28e1a : refactor test_sdpa into two test classes to account for failure modes (#100121)
477ca1789c : Avoid elementwise dispatch of gradient unscaling/validation ops in `_foreach_non_finite_check_and_unscale_cpu_` (#100108)
1dba53cbab : [ONNX] Refactor test_op_consistency.py and test_fx_op_consistency.py (#100172)
61917a006d : Make DimConstraints create actionable message (#100103)
d5f15d3515 : Check for debug mode (#92707)
b02aa5e71d : [Feature] storage resize_ support custom device. (#99882)
9834358e0f : Get SchemaCheckMode to error on ops that return inputs directly. Expose as a dynamo backend, eager_debug (#99744)
1f2d00e537 : move SchemaCheckMode to torch/_subclasses (#99743)
884c5c86f1 : Pass torch.compile mode/options to all backends (#99645)
7295ab6746 : [ONNX] Add test_fx_op_consistency.py (#99465)
d06b93b0c7 : Decompose arange.default to arange.start_step (#99739)
a67fa845bd : [vmap] Fix searchsorted batch rule (#99698)
991b1c0286 : Do not use `--extra-index-url` in testing wheels (#100183)
151d76cc23 : [quant][pt2e] remove dropout from fx quant
089b085c32 : Optimize periodic jobs (#100182)
01de8ee845 : [SPMD][Easy] Add time counter in graph_optimization_pass (#99969)
87db02ea38 : [DDP] Perform input casting in pre forward (#100131)
ae0eb2342d : [Experimental] Remove store barrier after PG init (#99937)
7bece142a9 : [export] Port over const prop pass (#100102)
fad2f6edab : [PTD][Checkpoint] Upstream fsspec storage read/write to PT (#98387)
b94a0ba5bb : [SPMD] Add embedding dense backward prop rule for postional embedding (#100038)
8fe91d16b0 : Remove CUDA 11.6 note from complex docs (#100118)
02f059c2b7 : Add private _export API (#99992)
f5853342ea : [dynamo][numpy] Handle return value being numpy ndarray (#99560)
687afeb686 : [dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes (#95849)
d855b6aed6 : [Dynamo] Add unit test for explicitly calling __call__ (#100146)
cb569dbccd : Fix cat forward-AD tests (#99596)
659dcc5e71 : [inductor] Fix argmin/max with duplicate values (#99920)
f9c3fcd1df : [inductor] Fix nan-handling of max and min reductions (#99881)
ed2eb13d76 : [inductor] Create triton_helpers module for helper functions (#99880)
ad21890f8f : [c10d] Scalable PG initiation. (#99931)
2eab5abb50 : sparse.sum backward: short circuit on zero/empty grad (#98838)
67e0913de9 : Add support for serializing real tensor data in after aot minifier (#99834)
5cfaea15c4 : relu/threshold backward for sparse: enable 0-nnz grads (#98935)
c2402a9257 : Change caffe2 branch links to main (#100129)
77a37a54ce : Include all mkl/mkldnn related test files to CPU ATen backend (#99592)
100a25d021 : Basic dynamo support for traceable collectives (#94440)
925a3788ec : [CUDA] Switch to `at::empty` in `max_pool3d_with_indices_backward_cuda` (#100138)
859e82a7a9 : Making fsdp device-agnostic for custom backends which implement cuda semantics (#99024)
4456e932f8 : [inductor] fix _print_Pow given reciprocal of dynamic dim with float exponent (#100090)
569eff85a0 : inductor: enhance conv+binary fusion path test for cpu path (#100058)
e248016472 : fix missing-prototypes warnings in torch_cpu (Part 1) (#100053)
e0bf51d3bf : [dynamo] Add ddp_graphs artifact (#100021)
1504bdf9e7 : [inductor] logger message fix in split_cat (#100088)
c0ecd98958 : Rename DispatchKey.PrivateUse1 to custom device in torchgen. (#99406)
3588688ade : inductor: simplify the test_mkldnn_pattern_matcher.py code (#100057)
13259fe8f0 : [ONNX] Fix type annotation for 'fx_to_onnxscript' (#100050)
3f5d768b56 : Refactors/improvements in _inductor/fx_passes (#100063)
be8c7c06b6 : [Tensor Parallel] Simplify distribute for MHA (#100046)
97f4af3f4f : add sm80orlater check for bfloat test in test_torchinductor (#98034)
bc0c74bcd5 : Don't apply _Py_OPCODE twice (#97986)
32a67e42c4 : Introduce FXGraphExtractor into torch.onnx.dynamo_export (#99940)
763e0a9027 : [inductor] fix inconsistent behaviours when padding size is zero (#100082)
19e81b7b19 : [BE][DTensor] add DeviceMesh test to periodic testing list (#100029)
4c6f7cbc86 : Fix prims unbind if given dimension size is 0 (#100122)
2989d6c93d : [Dynamo] Fix constructing lazy submodule inside of lazy module's initialize_parameters (#100047)
fab2e3971f : enable -Werror=sign-compare in our Bazel build (#98671)
6789342a56 : [dynamo] Make bytecode logging off-by-default (#100093)
c523d7d899 : Add a new hook (#99854)
eaa00017c8 : S390x tests (#99871)
45337e20bb : Fix byteswapping (#99869)
5f138a6b65 : [minifier][after dynamo] clone inputs while retaining gradness (#100066)
3d39bd5976 : [dynamo] Remove redundant recompile call (#100084)
b9146d8b0b : Remove inclusion of non-existent header on s390x (#99870)
dc10004553 : Add asan slow test shard (#99925)
9bbd3d6489 : [export] ExportPassBase + view_copy pass (#100000)
9bf2dfbbb0 : migrate memcpy src to const_data_ptr (#98781)
e5291e633f : Revert "Migrate jobs from windows.4xlarge to windows.4xlarge.nonephemeral instances (#100091)"
006785cd46 : [dynamo][hf_bigbird] Actually graph break on tensor.unsqueeze_/resize_ (#99986)
aa99c5b4ed : Added round_with_scale_factor arg to ATen (#97868)
cc628293bf : simplify method_def generation (#100059)
c778980fb8 : remove casts to `getter` in python_cpp_function.h (#100065)
a337c42dfc : make ATen/native/cuda/AdaptiveAveragePooling.cu data_ptr-correct (#100030)
6170be9012 : make ATen/native/cuda/EmbeddingBag.cu data_ptr-correct (#99083)
9b3862cd02 : make im2col calls data_ptr-correct (#99111)
06ca9bb915 : make col2im calls data_ptr-correct (#99112)
0bc02d3805 : [pt2] remove unnecessary if expr (#99865)
004f3d71aa : [export] Move verifier over to export from torch/fx (#100019)
6c550bb4d5 : [quant][be] Easier way to override default in QConfigMapping (#99888)
9f0092c4b7 : [CI] Replace timm_efficientdet with timm_vision_transformer in smoketest (#100106)
3a5427baf4 : Add torch.utils._content_store (#99809)
45bf3f6216 : Optimized EMA implementation (#94820)
c6ab4ff35c : convert to mutable_data_ptr data_ptr calls immediately after at::empty() (#98734)
65823619c0 : convert trivial data reads to const_data_ptr (#98751)
5b4a523583 : Add all_reduce_coalesced to functional collectives (#98640)
9bc03db670 : Move nn.module state dict pre hook (#98964)
bb4e9e9124 : functionalization: error during mutations on mem overlap (#99919)
1183eecbf1 : Migrate jobs from windows.4xlarge to windows.4xlarge.nonephemeral instances (#100091)
33fba6ef07 : [SPMD] Add arange and zeros to default factory ops (#100037)
afa9d10ed6 : [inductor] Support mixed device in cpp wrapper (#99950)
e789de952f : Make sizevar addition work properly (#100015)
7ec4392068 : Remove in-place operations in NegativeBinomial (#96748)
81978120ec : [MPS] Fix trace exceptions not raised for error inputs (#99239)
f4a37c9a5d : [MPS] Fix max_pool2d exceptions not raised for error inputs (#99238)
f4cf744380 : [MPS] Fix gelu exceptions not raised for error inputs (#99237)
aaa3eb059a : add some missing includes (#100049)
4b1310bfa4 : suppress `-Wcast-function-type-strict` when casting to PyCFunction (#100068)
69bf0241b1 : Allow calling functorch transforms when their DispatchKeys are disabled (#100011)
62f9189d9d : make ATen/native/cuda/AveragePool2d.cu data_ptr-correct (#99336)
a0934f8bad : Replace maybe_guard with statically_known (#99383)
400dbde8a0 : make ATen/native/cuda/ScanUtils.cuh data_ptr-correct (#99080)
1fcf40da63 : [MPS] Add linear inputs check (#99228)
c11441fda3 : Update torch.arange doc. (#99963)
08c49eee5e : [ONNX] Support aten::atan2 in torchscript exporter (#100040)
9d99d8879c : add missing include on <stdexcept> from Registry.h (#100036)
0b1b063158 : [buckbuild.bzl] Fix dep handling in cross-builds
8fd866c666 : Add frame summary to for/while loop backedge log message (#100045)
1ded73f909 : Remove little endian asserts (#99713)
c680f2b8ea : relax restriction on cond branches calling closed functions (#100013)
488e0effe3 : Fix test_multiple_devices_randint_cuda (#99775)
89baa1a74c : [MPS] Add support for linalg.vector_norm (#99811)
539363a873 : [inductor] Lowering of rngprims philox_rand (#99289)
111358de19 : Support non-ASCII characters in model file paths (#99453)
efded3f3e9 : [inductor] Add cpp_wrapper support for FallbackKernel (#99887)
d3143d0be6 : Skip timm_vision_transformer in Inductor torchbench smoketest (#99766)
79f8ac14d5 : Add pass to normalize torch.ops.fb.equally_split
785676ccb0 : [dynamo 3.11] refactor cpython function defs out of eval_frame.c (#99947)
bafa2c4724 : Change 'w.r.t.' to 'wrt' in function docstrings to fix doc rendering (#100028)
676a23f452 : [RFC] Allow elastic agent to fail fast (#99051)
eddb3a060e : Rename master -> main in docs workflow (#100022)
1c110652a8 : [ONNX] Support aten::tile in torchscript exporter (#99927)
6bc4651193 : [philox_rand] Dynamic shape support (#99290)
dfba65be8b : Update Cutlass to v3.1 (#94188)
15e1bee269 : change torch._dynamo.export(aten_graph=...) to allow pre_autograd tracing (#98031)
62fad315c1 : fix per-dispatchkey-mode caching bug (#98030)
d976df49c5 : [dynamo] don't use LazyModuleMixin.cls_to_become if it is None (#99943)
9e012fd401 : [export] Associate one cond() error case with exportdb. (#99844)
5c16dfd708 : Add half to `real` param description in `torch.complex` docs (#99938)
cf21240f67 : [MPS] Squeeze last dimensions if possible for 5D (or bigger) reductions (#99856)
87a2af6d4a : Fix loading data on different encoding (#94503)
8cc57593b9 : remove redundant trailing semicolons in StorageImpl.h (#97658)
808267767c : Prevent grad scale from overflowing (#98876)
ae5e1819a5 : stepcurrent (#98035)
3e57d49e5b : Unblock fbcode build issues with torch/testing IS_CI issue (#99997)
3a630d9e3a : [stronghold][bc-linter] Use merge to find the changes in the PR (#99958)
0ebd3a78f4 : Make sdp_utils more c++ style guide compliant (#99948)
865d30a3dd : [ONNX] Add new ops and enable the MNIST test (#99284)
fc6f2f6e4e : [spmd] simplify data parallel tests (#99901)
0901b41a5e : [spmd] Add a few more loss ops to the reduction op list (#99900)
932ed333f7 : [spmd] expose input_batch_dim to DataParallelMode (#99899)
c6949db481 : [spmd] enable fully_shard fused_adam test (#99898)
ad882c5210 : [spmd] Use TupleStrategy and enable replicate fused_adam (#99374)
f274c4ecf6 : Don't change filelock log level by default (#99991)
650ba57236 : Remove Anjali from CODEOWNERS (#99955)
dbc7e919b8 : add Wmissing-prototypes to clang-tidy (#96805)
39ff87c6a4 : [ROCM] Extend try-catch mechanism for ROCM detection (#99980)
df3455b716 : [reland][quant][pt2e][refactor] Cleanup the logic for deciding whether to insert observer/fq or not (#99220) (#99767)
14c3eb7fb6 : [Testing] flip switch, remove slow path assertions (#99101)
e2a3817dfd : [BE] Enable C419 rule for any all shortcircuiting (#99890)
e43918b93a : [inductor] Fix AOTInductor (#99203)
3afa60bf0f : Get OutOfMemoryError to inherit from RuntimeError (#99786)
e7157bd048 : [inductor] Fix shape padding (#99917)
cc01568efd : [pt2] Register meta func to randperm.default (#99593)
8a0badfff1 : [ROCM] Do not send "debug"=True down to triton.compile (#99756)
9a69634b28 : Skip some failing dynamic shape models on periodic (#99895)
df1ff0925e : inductor: fix conv+binary issue for binary scalar path (#99860)
ed3957795c : inductor: add fallback test case for hardtanh and leakyrelu fusion pattern (#99859)
e2cb6bcc91 : Fix typos and clarify text in tags.yaml (#99954)
4cf654625c : [ONNX] Bump onnx-script version with imported module renaming (#99926)
04e8df4dd7 : Return full accuracy status for printing, not abbreviated version (#99894)
bfa63bf45f : div16 changes for dyn shapes (#99930)
e5c9a0fcf5 : [dynamo] avoid graph break on repeat_interleave.self_int (#99528)
ecd2c71871 : Implement the get_device method in the storage base class. (#99818)
e51453298b : [ONNX] Improve diagnostics performance (#99936)
466adab7c4 : Add fsspec to PT setup.py (#99768)
4be0aa1382 : Allow persistent reductions in dynamic shapes if last numel is static (#99789)
cd61707167 : yolov3 dynamic training accuracy is fixed (#99896)
41069f2faa : inductor: align inductor behavior with eager mode for split_with_sizes (#99702)
96ceae3a7f : Use memoized only mode for guard size/stride extraction (#99742)
0b545bc667 : Stop marking sequence length as dynamic (#99889)
42921fc801 : [torchgen] accept scalars for unary `SymInt` arrays (#99921)
1dbecbf913 : make ATen/native/cuda/NaiveConvolutionTranspose3d.cu data_ptr-correct (#99347)
4ca44d32d3 : make ATen/native/cuda/SortStable.cu data_ptr-correct (#99340)
1b30f588e6 : make ATen/native/cuda/RreluWithNoise.cu data_ptr-correct (#99341)
fbb0ff10a4 : [pt2] add `SymInt` support for trapezoid ops (#99281)
36e1ae6778 : De-select odd numbered heads from nn.MHA fastpath (#99672)
3de7fd461a : [FSDP][Reland] Include duplicate parameters and modules when calling named_parameters and named_modules (#99448)
0eb59ad093 : Change export tracing_mode default to symbolic (#99877)
73f7459a90 : Do not assume static by default when exporting (#99554) (#99876)
08a8a37ffe : [FSDP] Set `NCCL_DESYNC_DEBUG=0` for FSDP unit tests (#99916)
855f611baf : [spmd] skip gradient copying for fused adam (#99489)
ca24a96216 : minor fix to fused adam meta registration (#99436)
ff7d5b62d4 : Improve ProxyTensor tensor_tree list/tuple handling (#99897)
78c2e3374d : [fx] Remove replace_literals_with_placeholders (#99728)
862d658059 : [inductor][non determinism] Disable autotuning when deterministic mode is ON (#99851)
7398b5650d : [Lint] Fix wrong docstring for dcp save_state_dict() (#99778)
33fe2dbb23 : Fix a minor bug about method generation. (#99704)
baf092b82d : Update pt2-bug-report.yml (#99928)
3a09aa5977 : [c10d] Faster coalescing (#98793)
3dcc7b396c : [easy] iterate dict with sorted keys for accuracy checking (#99793)
2cea2edc27 : [easy] Fix upload test stats after master -> main switch (#99924)
367d3afd7c : Update MacOS deployment target to 11.0 (#99857)
4c9d660733 : fix gather issue when index has shape n by 1 (#99709)
e9e5ffe83e : Re-enable dynamic shapes test in dynamo benchmark (#99816)
d881b2978c : Make autocast cache and buffer stealing aware of cudagraph static output tensors (#99368)
3009c42e7d : [CI Testing] Re-enable timm_efficientdet training (#99787)
a1633b1776 : Add support for call_method patterns (#99782)
41280a0791 : Don't detach to create parameters in MetaConverter (#99618)
39590d06c5 : Make new_subgroups available for non-cuda depend backend (#99706)
f0e28b1cb9 : Adding the maintainers approved in 2023Q1 Core Maintainers meeting (#98520)
7d2a18da0b : Enable ruff in lintrunner (#99785)
dcd686f478 : [MPS] Add PSO caching for advanced indexing kernels (#99855)
09b189edc3 : Improve the precision of abs() and sign() for large values (#99550)
5ee5afb82c : Update channel shuffle to return alias instead of self as-is (#99745)
ab0a8215bb : [xla hash update] update the pinned xla hash (#99863)
466877b692 : Nicer logs for dynamic shapes (#99277)
d0886f686e : Revert "Do not assume static by default when exporting (#99554)"
c83e1f517d : Revert "Delete tracing_mode argument to export (#99555)"
1e8cf6ad7f : Add documentation for `torch._logging.set_logs` (#99219)
6e3cdcad08 : Fix flake8 lint errors - part 2 - manual fixes (#99799)
48d112c431 : Fix fake tracing of cross entropy with label smoothing and weight (#99830)
7a6c650b81 : [inductor] Lower aten.prod (#99484)
79c9e82e27 : Fix flake8 lint errors reported by ruff - take 2 (#99798)
dc1c0924ec : Properly parenthesize dynamo_dynamic_indices test (#99823)
6d5040a1ac : [BE] Update python versions for black formatter config (#99827)
f8c6861120 : [MPS][BE] Introduce `LookUpOrCreateCachedGraph` (#99422)
d29cf18442 : [CI] Pause perf data collection for max-autotune (#99829)
a89d6b0a82 : [MPS] Add encoder coalescing support for native kernels (#99810)
2d3456167d : [Typo] Fix a typo in comments (#99824)
716ef2f5ad : Improve code to make it more pythonic. (#99720)
72daadef2c : [dynamo] Explicitly fall back to eager with GraphModule with no output for onnx&tvm backends (#99805)
9b0b31a5e3 : fix conv+bn folding issue for mixed dtype (#99696)
1fc4d58f43 : inductor: fix split+cat issue when cat order does not align with the split output's order (#99700)
ebd47b0eec : Propagate mark_dynamic in Dynamo compiled outputs. (#99634)
efed5a4969 : Allow data size equal to 0 for SegmentReduce (#99733)
7a8d0ccddf : Correct LBFGS tolerance_grad doc string (#99792)
f602b3a6ae : Preserve mark_dynamic when cloning inputs (#99617)
5e73569ab4 : Add memoized_only mode to tensor conversion (#99741)
4c2892944f : Guard static shapes alongside tensors, instead of from shape_env, in dynamic_shapes=True (#99566)
220712f4de : Fix torch.compile() on a skipped module (#98894)
d192729cfd : inductor: fix AllenaiLongformerBase dynamic shape error on CPU (#98842)
31eb9949e4 : [dynamo] disallow_in_graph bugfix (#99600)
e63c502baa : [Executorch][XNNPACK] Quantized Max Pool 2d (#99587)
7749eec8df : Remove deprecated declaration suppression (#99749)
a964a3dbed : [quant][pt2e] add all convs-relu fusion qat configs (#99586)
c139dfd71e : [quant][pt2e] add dropout to executorch backend config (#99585)
9db6920635 : [spmd] Add list handling to data parallel and add foreach tests (#99373)
c1e2fa8189 : [dtensor] add StrategyType and TupleStrategy (#99435)
82a54513ac : [fx] Add a function to allow adding more functions to the side effect function set (#97288)
87b71e570e : [Profiler] Support HTML plot output for profiler export_memory_timeline API (#99751)
ca8625f456 : [BE][1/N]Add sharding spec logger for ShardedTensor (#99748)
bd7191111f : [ONNX] Add additional_test_kwargs into test_fx_to_onnx_with_onnxruntime.py (#99434)
e9bf94149e : [spmd] Introduce Compile Mode FSDP with DTensor (#99062)
be62a80787 : [vision hash update] update the pinned vision hash (#99486)
e5664c652a : [ONNX] Support aten::scaled_dot_product_attention in torchscript exporter (#99658)
6585d76f0f : [docs] nn.functional.embedding: Note expected discrepancy between numerical and analytical gradients (#99181)
2b7161e2bf : lower cmake version requirement in FindSanitizer.cmake (#97073)
93d0a9c1b5 : Add pattern to normalize split (#99588)
79ec91a943 : Add pass to remove redundant conversions (#99697)
4637c5ae5b : Revert "Simplify _use_grad_for_differentiable (#98706)"
872319d393 : [ONNX] Cover 'undiscoverable' ops 'torch.ops.aten' (#99682)
96d3f3dee3 : Discover and run C++ tests with run_test.py (#99559)
bfbc4e74ab : adjust batch sizes for hf suite (#99691)
ce60997376 : [BE][DTensor] validate the mesh argument in DeviceMesh construction (#99094)
cf357adc7e : Allow torch.fx to take Modules that return dataclass (#99576)
547bef11ee : tweak heuristic for sdpa selection based off of *data* (and a decision tree) (#99644)
bb830224e3 : Remove extra space (#99750)
4f62e7cb10 : [FSDP][BE] Remove unused code (#99731)
363d530035 : Fix decision logic for `should_cast_forward_inputs` in `_root_pre_forward()` and `_pre_forward()` (#99546)
10c938abef : Handle meta['val'] for tuple of lists. (#99724)
6c899999f4 : Fix wrong path when reinstalling MacOS pip requirements (#99758)
db46d9dc49 : [CI] Change max-autotune's output file name (#99754)
8548cb3dd5 : Improve OOM error message (#99699)
c39aff1084 : Disable XProtect on MacOS runner (#99506)
18fd6394dc : Give distinct names to __unknown_tensor (#99729)
b9da79d280 : Simplify _use_grad_for_differentiable (#98706)
08376cc546 : [Inductor] Fix rand_like with kwargs device of str type (#99673)
7876c503b7 : [FSDP][optim_state_dict] Consolidate rank0_only load logic (#99647)
dd07dab1c7 : [FSDP][optim_state_dict] Support rank0_only when use_orig_params is on (#99624)
fc63d710fe : Revert "Disable XProtect on MacOS runner (#99506)"
ee5f09ab80 : [Feature] storage pin memory support custom device. (#99712)
400075f733 : [stacktraces] Keep addr2line procs around (#99670)
e09f785a72 : [CI] Remove inductor skip list for Huggingface (#99375)
75e754800f : Revert "[quant][pt2e][refactor] Cleanup the logic for deciding whether to insert observer/fq or not (#99220)"
b96bb2f1a6 : [spmd] Introduce ParallelMode and add DTensorExpandMode (#98452)
9244264f46 : [Inductor] Fix view/reshape on tensors with shape 0 in any dimension (#99671)
d56adb1b54 : [quant][pt2e][refactor] Cleanup the logic for deciding whether to insert observer/fq or not (#99220)
06081ac8f3 : Update docstring of torch.nn.functional.normalize() (#99512)
e9786149ab : Delete tracing_mode argument to export (#99555)
881c57230d : Move more stuff to after_aot (#99557)
d3bb762f1e : Do not assume static by default when exporting (#99554)
6b8ef8ea4c : [BE] Build PyTorch with `-Wnewline-eof` (#99687)
dbf0db958f : Fix torch.nn.FractionalMaxPool2d output_size error (#99507)
9861ec9785 : Revert "[c10d] Faster coalescing (#98793)"
da57d597e1 : Revert "fix onednn ConvTranspose2d channels last issue when ic=1 (#99539)"
9df8b1b594 : Init comm_nonblocking_ when creating AutoNcclGroup (#99679)
24bf15fe8d : Support record_stream in dispatch mode (#99529)
0ac0d9d224 : Pass locals to enum_repr to correctly make the guard str for enums (#99680)
8ee59280d7 : Bug - check config for dynamic (#99676)
a421c54753 : [exir][delegate] torch.ops.call_delegate (#92562)
9def799097 : [combined tracebacks] missing gil acquire (#99685)
daff040886 : [inductor] skip triton.Config that spills (#99385)
9bece55a7e : Disable XProtect on MacOS runner (#99506)
63690afc6c : Make CI error on inductor fallback when decomp is available (#99473)
deaf983bdb : [Inductor][quant]Enable decomposed.quant/dequant lowering and vec code gen (#99131)
2a47f68586 : inductor: fix onednn conv2d(transpose) packed issue when input size is three (#99601)
51742a467d : [ONNX] Fix missing import numpy for docs example (#99663)
16a4dc0f93 : Correct typo for NCCL_MAJOR (#99482)
6427b849a3 : Allow in graph einops operators (#99631)
716ba6851e : Make testing._internal.common_utils safe to import (#99659)
d168161cd3 : [dynamo] Fix example_inputs with unsqueeze_ (#98696)
0d2b55c459 : [DTensor] Change Sharding algorithm to be in line with ``torch.chunk()`` (#98722)
27f8eb8c2b : add storage serialization methods for privateuse1 (#98920)
907f2dad7d : [inductor] Update triton pin (#99209)
fdeee43650 : Disable SDPA FlashAttention backward and mem eff attention on sm86+ for head_dim above 64 (#99105)
fc8fa6c356 : Require at least one tensor to be marked dynamic with --dynamic-batch-only (#99620)
abdd1f4a38 : Reuse tracing context and fake tensors from backwards in forwards (#99619)
bbfd577b7c : bug-report.yml fix broken link (#99425)
9f95032101 : Fix broken links in contribution_guide.rst (#99295)
c9b08a087d : [Dynamo] Merge symbolic_converter SETUP_WITH & BEFORE_WITH (#99651)
c412056921 : Temporarily move ROCm to unstable (#99579)
37bcdb98f6 : Fix buck parsing in OSS build (#99648)
22af604e1b : [quant][pt2] Add Conv + BN fusion for prepare QAT (#98568)
418a9fb9d8 : [reland][inductor] coordinate descent tuning upon max-autotune (#99594)
b87c7ab6d6 : Remove redundant `found_inf` recompute from `_step_supports_amp_unscaling` path (#98620)
a8e1893b7c : Clarify error message of torch.nn.functional.embedding_bag (#99471)
e68e84ef8a : [dynamo] Support BUILD_MAP_UNPACK (#98664)
c19d19f6ff : [profiler] support cuLaunchKernel (for triton kernel launches) & update kineto submodule (#99571)
5315317b7b : Skip some detectron2_maskrcnn models with KeyError _ignore_torch_cuda_oom (#99599)
7c3fa5c70d : Revert "Build Windows binaries with Visual Studio 2022 Build Tools (#90855)" (#99591)
0a98289af3 : Stop testing if CUDA is initialized on teardown (#99627)
aa4ed332c3 : Improve torch.cond useability: Return UserError with actionable error messages (#98909)
e47e8c9d98 : Guard on default device (#99551)
88c45a1954 : [SPMD] Allow users to dynamically pass the last_iter to IterGraphModule (#99575)
7acb0bdd22 : Fallback for Complex Dtypes in Inductor (#97198)
638feec4e3 : Turn on meta converter for complex (#98869)
df84d74058 : Allow getting type of ScriptObject (#99542)
971df458db : Reland of "Python binding to set/get CUDA rng state offset" (#99565)
f4354b2a5e : [dynamo] Support dict kwargs constructor (#98660)
c17ff0ed36 : Print AOT Autograd graph name when accuracy failed (#99366)
4721553431 : [vmap] Fix searchsorted batch rule for self_logical_rank == 0 (#99526)
2ad02d00b9 : [BE] `stdint.h`->`cstdint` (#99570)
35ad5122d2 : Revert "[vmap] Fix searchsorted batch rule for self_logical_rank == 0 (#99526)"
ccd5ad816e : inductor(CPU): add ISA check before do cpu fx packed weight (#99502)
4d8906885e : Revert "Temporarily move ROCm to unstable (#99579)"
21e88a543b : Revert "Fix trailing spaces lint (#99581)"
96a262d666 : Revert "Allow in graph einops operators (#99478)"
edd2507c73 : [functorch] Prevent using for-loop for out-of-place index_fill batch rule (#99229)
a2a4144256 : [FSDP]Make param_groups optional for FSDP optim state dict (#99117)
68bc0fc012 : [inductor] a script to benchmark the perf impact from tensor layout (#99583)
da322ea874 : Enable torch.jit.load for custom device (#99535)
6580b160d3 : [vmap] Fix searchsorted batch rule for self_logical_rank == 0 (#99526)
c0674c439c : [vmap] Add max_pool3d batch rule (#99522)
d31a00e713 : [vmap] Add max_pool1d batch_rule (#99517)
233cc34d3b : fix onednn ConvTranspose2d channels last issue when ic=1 (#99539)
3af467eff4 : inductor: support sqrt for dynamic shape (#99514)
27a43c0242 : inductor: add input type check for fuse_attention (#99296)
309b7edfe1 : Allow in graph einops operators (#99478)
95ca8e589d : [ONNX] Update install_onnx.sh: onnx-script -> onnxscript (#99572)
789070986c : [Dynamo] Implementing generic context manager by inlining __enter__ and __exit__ (#98725)
fbdb86c174 : Fix trailing spaces lint (#99581)
d06624d3c4 : Temporarily move ROCm to unstable (#99579)
805a6dc8d2 : Add an expect test for test_save_graph_repro (#99538)
36acad58b6 : [quant][pt2e][refactor] Move the annotation for observer sharing ops into separate util (#99384)
1b9edb680f : increment generation in run only context (#99099)
c660db2074 : Adding vmap support for special bessel functions (#99543)
19788002e7 : Remove a couple of additional places where we would construct tensors - aliases of params, inputs (#98950)
3233450d07 : Add TorchXLA option to benchmark runner (#99505)
6026caed1e : Make HAS_CPU boolean lazy, speed up import time (#99537)
cf354a0491 : Don't eagerly initialize CUDA when importing common_cuda (#99536)
32cd05ae60 : Package `torch.fx` type hints (#99541)
f6f35135a4 : suggest constraints to specify for export based on generated shape guards (#98463)
04f7a2a5e1 : Support module dict iter (#99503)
c75ac11fb5 : [cond] error on closed over variables (#99367)
237f917f5b : [Profiler][Easy] Fix typo in Profiler report input shapes (#99430)
af7fed1d92 : fix osd rank0_only in fsdp (#99136)
2402fe5210 : [memory allocator] fix ifdef typo (#99553)
495e1b4d0e : Add device_asserts before indirect loads and stores (#98590)
9ac2b041c9 : Make opacus xfail instead of skip (#99380)
cfacb5eaaa : Revert "Use correct standard when compiling NVCC on Windows (#99492)"
ca89e7942a : [SPMD][Easy] switch to tree_map_only to simplify code (#99547)
db6944562e : Use correct standard when compiling NVCC on Windows (#99492)
db456ab83d : [c10d] Faster coalescing (#98793)
bc9eaa7abf : Run post-aot compiler at compilation time, not at runtime. (#99457)
7546972565 : [BE] Refactoring test execution and improving comments (#99467)
6ca991cacf : [Composable API] Add fully_shard debug function to print sharded tree structure, module names and managed param fqns (#99133)
6b6dc4418d : Warn if guards are added to ShapeEnv after we produced guards (#97820)
2aa35e6cc1 : Fix Tensor.uniform_ documentation to mention generator argument (#99510)
d6d55f8590 : [fx] Variadic arg matching (#99431)
e21f648cde : improve mkldnn matmul performance when one input is a contiguous tensor but its strides are not the default contiguous strides (#99511)
efa16c20c3 : make ATen/native/cuda/UpSampleNearest3d.cu data_ptr-correct (#99328)
5cb788a9a5 : Revert "[cuda rng] Making offset calculation independent of device properties (#98988)"
5d395769a6 : Skip vision_maskrcnn after #98923 (#99394)
6b8bab8e39 : Fix (4 device) multi-gpu `ShardedGradScaler` Tests in `ciflow/periodic` (#99485)
b0df0cd7cc : [reland][quant][fix] Compare resnet with quantizer api with the prepare_fx and decomposed convert flow (#99355)
391a3add54 : make ATen/native/cuda/TensorModeKernel.cu data_ptr-correct (#99330)
8eb7743401 : make ATen/native/cuda/ReflectionPad.cu data_ptr-correct (#99325)
4d3011b600 : make ATen/native/cuda/Col2Im.cu data_ptr-correct (#99348)
121edd2161 : make ATen/native/cuda/Shape.cu data_ptr-correct (#99343)
b01edf45f8 : Add typing to debug_utils and repro (#99452)
2e25fb5d55 : Refactor debug_utils into after_aot and after_dynamo modules (#99450)
a3ee9800ba : Codegen fixed size for static sympy values (#99362)
e605b5df74 : [SPMD] Add sym_stride to DSymInt (#99504)
2cb8a8d4cc : [SPMD] Support DSymInt for slice_backward in SPMD expansion (#99501)
292296141a : [SPMD] Support SymInt with non-op call_function nodes (#99420)
7c0c663a4c : [SPMD] Add aten.stack and aten.select to DTensor prop (#99417)
301be37091 : Avoid import * from experimental_ops (#99363)
8d3dc2131d : make ATen/native/cuda/TensorTransformations.cu data_ptr-correct (#99350)
98907589ee : Make GetItemSource(*, slice) hashable (#99379)
9b909cbe9a : Update README.md to explain installing triton (#99464)
0ae9d15543 : make ATen/native/cuda/Bucketization.cu data_ptr-correct (#99334)
367d3657a4 : make ATen/native/cuda/TensorFactories.cu data_ptr-correct (#99342)
3ace394d43 : make ATen/native/cuda/RangeFactories.cu data_ptr-correct (#99344)
67db44694a : make ATen/native/cuda/Nonzero.cu data_ptr-correct (#99333)
441ac80988 : make ATen/native/cuda/UpSampleNearest1d.cu data_ptr-correct (#99329)
c67c16bcd2 : Switch calling convention back to real tensors (#99320)
1eb1911012 : migrate cuda files to const_data_ptr (#99357)
1aa52fc041 : make ATen/native/cuda/NaiveDilatedConvolution.cu data_ptr-correct (#99346)
f17119d10c : make ATen/native/cuda/AdaptiveAveragePooling3d.cu data_ptr-correct (#99324)
bb2cd4a107 : Revert "Python binding to set/get CUDA rng state offset (#98965)"
33483b0be4 : make ATen/native/cuda/UpSampleTrilinear3d.cu data_ptr-correct (#99349)
ea50d4f146 : Revert "Switch calling convention back to real tensors (#99320)"
6467495900 : Allow split_reduction if all dyn values are static (#99475)
113bd11cf4 : Skip levit (#99491)
41d7969590 : [SPMD] Upstream iter_move_grads_and_optimizers (#98785)
fcd2e8cbf4 : Support bf16 searchsort op (#99426)
b3b0fbca11 : [ONNX] Export Relu6 without using Relu (#99022)
d41aa448b8 : [ONNX] Run ONNX tests as part of standard run_test script (#99215)
8e69879209 : [inductor] adjust sliceView limits (#99447)
4aedb8e116 : Revert "[inductor] coordinate descent tuning upon max-autotune (#97203)"
e60557793f : Make hash update script more robust and run it (#99370)
96cad5cf95 : Revert "[inductor] adjust sliceView limits (#99447)"
26f318574f : [cuda rng] Making offset calculation independent of device properties (#98988)
bb017d7671 : Revert "Codegen fixed size for static sympy values (#99362)"
48463f687a : refactor macro with AMP (#99285)
52ecc3274b : [inductor] coordinate descent tuning upon max-autotune (#97203)
3fef633333 : Add CUDA-12.1 manywheel build to trunk (#99458)
8009891be6 : [inductor] adjust sliceView limits (#99447)
44b09bf673 : Reland "Simple Custom Operator API, V0 (#98440)" (#99416)
840431fa59 : Fix test/test_proxy_tensor (#99415)
8a89eec2f8 : [BE] Do not use unicode quotes (#99446)
2b49a7330b : Make lintrunner work with new main branch (#99466)
5ff2ad6fc1 : torch._int_mm: fix triton kernel caching (#99283)
6c5fdde881 : Codegen fixed size for static sympy values (#99362)
60c8a75a7e : [EASY] Turn on slow path assertions but only on first run (#98945)
93347dde22 : make ATen/native/cuda/DistanceKernel.cu data_ptr-correct (#99327)
cf97b820c1 : make magmaLdlHermitian data_ptr-correct (#99361)
472f46635e : Cache output tensors on execution (#98944)
93b64f0ad3 : [Easy] Remove C++ call now that it wont be on hot path (#98943)
bce21ee06a : Revert "Fix bug in check required output size in _as_strided_scatter_meta (#98483)"
19501b254f : Revert "Codegen fixed size for static sympy values (#99362)"
d8d479c854 : Codegen fixed size for static sympy values (#99362)
deec8bd522 : [fix] Disable EMA if ema_alpha is set to None (#98992)
24f882369a : [EdgeML] Remove dependency on all_mobile_model_configs.yaml from pt_operator_library BUCK rule (#99122)
c0be06667f : [PT2E][Quant] Support for embedding op quantization via ExecuTorchNativeQuantizer (#99106)
06f19fdbe5 : Turn off Windows Defender in temp folder on binary build workflow (#99389)
00f76dbaaf : add comment indicating that `maskPrefixSum` is mutated (#99309)
e51c6c19c0 : make ATen/native/cuda/DilatedMaxPool3d.cu data_ptr-correct (#99319)
a387ac41fb : make ATen/native/cuda/DilatedMaxPool2d.cu data_ptr-correct (#99321)
d69a1a4491 : In detect_fake_mode, assert that all detected fake modes are consistent (#99392)
a8a1c57664 : Reset joint graph fake mode earlier, and more comprehensively (#99391)
38e964056b : Reland python ops (#99170)
5327dad617 : make ATen/native/cuda/ForeachReduceOp.cu data_ptr-correct (#99318)
e7a5cb99e2 : [CI] Fix test failures at TestTensorCreationCPU.test_float_to_int_conversion_finite_cpu_uint8 (#98916)
24d20ea194 : make ATen/native/cuda/ConvolutionMM2d.cu data_ptr-correct (#99323)
7880f9e7e3 : Fix isinstance on SymInt in dynamo (#99393)
57e1a50da3 : Fix FakeTensor printing (#99205)
20a90a1f80 : make ATen/native/cuda/UpSampleBilinear2d.cu data_ptr-correct (#99313)
cee9d86d20 : make ATen/native/cuda/AmpKernels.cu data_ptr-correct (#99312)
46b9377190 : [CI] Collect inductor max-autotune performance every Sunday (#99387)
ce7c4ba11d : Revert "Mark doctr_det_predictor as broken on master (#99370)"
f497031df9 : Revert "Simple Custom Operator API, V0 (#98440)"
1c042a2137 : Revert "Reland python ops (#99170)"
c97dd8e134 : Fix the pt2e UT path after refactor (#99402)
88c8c2b71b : [dynamo 3.11] implement 3.11 exceptiontable (#96511)
8214fe07e8 : Python binding to set/get CUDA rng state offset (#98965)
b290381e09 : Mark doctr_det_predictor as broken on master (#99370)
5c5ad53517 : [CUBLAS] Specify alignment for `cuBlasLt` `addmm` (#98975)
5b692fd819 : Fix bug in check required output size in _as_strided_scatter_meta (#98483)
2611fccfed : [Inductor] Unify Inductor CUDA & CPU unit tests input clone function (#99118)
964c7e3e85 : [BE][DTensor] fix DTensor equal op (#99014)
e6aa8e0729 : Test and document dynamo backward hooks support (#99382)
cde35b4069 : [JIT] clarify errors due to non-literal indexing into ModuleList, ModuleDict (#98606)
a763d948d7 : [CI] Move last iOS job to periodic (#99388)
4ffd407d12 : [CI] Update torchbench pin (#99386)
780922c24e : Switch calling convention back to real tensors (#99320)
a109453df4 : Delete use_functionalize feature flag (#99317)
17d7be68ee : Delete functorch use_fake_tensor and debug_fake_cross_ref (#99314)
2471eac618 : Make run_fwd_maybe_bwd work with int inputs (#99365)
3d8498f926 : [DataLoader] Add missing documentation for arg in DataLoader (#99371)
436edc5ac3 : [ONNX] Retire 'DynamoOptimizeExporter' (#99202)
694ed70e01 : [inductor][easy] create a wrap for triton do_bench function (#99216)
063173cb46 : Skip sccache initialization on MacOS (#99121)
6df87b2e9b : Rename sym_shapes logger to dynamic (#99335)
6212a85af8 : Revert "Skip sccache initialization on MacOS (#99121)"
59e343b12c : enable data type propagation (#98065)
01fdcbdcc9 : [FSDP][optim_state_dict][Easy] Temporarily disable rank0_only=True for use_orig_params case (#99354)
7ff1f3f3f6 : Revert "Revert "Expandable blocks in allocator (#96995)"" (#99275)
99c6d46cf7 : fix typo in gen_functionalization_type.py (#99303)
b003000f41 : Updates NCCL to 2.17.1 (#97843)
c2fd198caf : Skip sccache initialization on MacOS (#99121)
bdaf32261f : [FSDP] Ensure that customized non tensor optimizer state can be saved (#99214)
d4de64ae8d : Reland python ops (#99170)
106ccf4a2a : [pt2] add meta function for `linalg.cross` (#99279)
6f7b434f7b : [pt2] add `SymInt` support for `column_stack` (#99276)
ccc5d1daec : Revert D44897935: Multisect successfully blamed D44897935 for test or build failures (#99353)
a6a90eaf28 : Remove unnecessary check when logging artifacts (#99260)
ab08284225 : Revert "Disable dynamo tracing torchrec.distributed (#97824)"
08dd4ad0b9 : Revert "[pt2] add `SymInt` support for `column_stack` (#99276)"
62a6d81143 : [SPMD][Easy] Making typing consistent by replacing object with Any (#99332)
19c2804614 : [SPMD][EASY] Remove unnecessary torch.ops prefix (#99331)
f957334c2b : Revert "[pt2] add meta function for `linalg.cross` (#99279)"
2b54d673fc : Add custom backend case for storage and automatically generate storage attributes. (#98478)
8efc965e05 : Update FBGEMM submodule (#99315)
80eab63587 : [Quant][pt2e] torch.mean and ReLU6 (#98984)
444a9769ae : [quant][pt2e] QAT Linear (#98897)
568935caca : [quant][pt2e] QAT conv + bn + relu (#98896)
7401f0f8ce : Add unbacked symbool support (#98877)
08ffe34621 : Revert "Skip sccache initialization on MacOS (#99121)"
cdab6c8df9 : [PT2E][Quant] Support specifying None for obs_or_fq_ctr in target_dtype_info (#99071)
36a95625da : [PT2E][Quant][BE] Refactor observer code (#99066)
4f4e0db5bd : [PT2E][Quant][BE] Split short term and long term tests in different files (#99065)
bcf6393024 : [PT2E][Quant][BE] Move pt2e quantization test to separate folder (#99064)
0711bff9aa : [ROCm] add skipCUDAIfVersionLessThan to unskip test_jiterator for ROCm (#99197)
e549ad0046 : Add log_sigmoid_backward forward-AD (#99288)
dede0bb065 : [NCCL] Use OptionalCUDAGuard in ProcessGroupNCCL::WorkNCCL::synchronizeInternal (#98895)
ed5395dbef : make aten/src/ATen/native/cuda/Indexing.cu data_ptr-correct (#99154)
d44c5e3639 : make ATen/native/cuda/IndexKernel.cu data_ptr-safe (#99158)
63a09a588d : make ATen/native/cuda/UniqueCub.cu data_ptr-correct (#99150)
55ed2b8a32 : inductor: add device and dtype check before doing cpu fx packed weight (#99028)
0157b2d722 : Simple Custom Operator API, V0 (#98440)
503104ce31 : make ATen/native/cuda/LegacyThrustHelpers.cu data_ptr-correct (#99273)
e9fef4a70c : make gemv calls data_ptr-correct (#99161)
46c0912ca7 : make ATen/native/cuda/Blas.cpp data_ptr-correct (#99274)
148d49260a : [SPMD] Implement split_fused_optimizer to split one fused_optimizer node to two (#98784)
2fc7f984e5 : make vol2col data_ptr-correct (#99152)
ecf4400b3a : make radix_sort_pairs data_ptr-correct (#99153)
98b62f7c12 : make remaining gemm calls data_ptr-correct (#99156)
306594b2b0 : make ATen/native/cuda/AdaptiveMaxPooling2d.cu data_ptr-correct (#99164)
314cba9641 : make ATen/native/cuda/SegmentReduce.cu data_ptr-correct (#99163)
777a666a60 : make ATen/native/cuda/Unique.cu data_ptr-correct (#99240)
31393ea457 : make ATen/native/cuda/MultinomialKernel.cu data_ptr-correct (#99241)
a8429342df : fix mul/div overflow issue on CPU float16 (#98820)
efc3887ea5 : [pt2] add meta function for `linalg.cross` (#99279)
775dd869d0 : [pt2] add `SymInt` support for `column_stack` (#99276)
2418b94576 : Rename default branch to `main` (#99210)
31f311a816 : [PT2E][Quantization] Refactor Quantizer and QNNPACKQuantizer (#99063)
888c65b6a4 : fix fake tensor propagation for cross_entropy with smoothing (#99255)
fa502ab034 : simplify codegen for integer min/max since they don't need to propaga… (#99249)
9ab5fdff81 : Remove obsolete HAS_PRIMS_REFS (#99252)
20a1788136 : Revert "[quant][fix] Compare resnet with quantizer api with the prepare_fx and decomposed convert flow (#98905)"
be0b12ece5 : make untemplated gemm calls data_ptr-correct (#99184)
29ff5a0c91 : make ATen/native/cuda/Embedding.cu data_ptr-correct (#99183)
b08c384106 : Add parameter for pin memory of storage to support other devices. (#98692)
851e89c8e8 : Revert "Expandable blocks in allocator (#96995)"
6f181aae7c : [vmap] Register decomposition for huber_loss_backward (#99236)
e2923b521b : Further improve symbolic shapes logging (#99159)
fdbc8625a1 : Functionalization of torch.rand/rand_like ops (#97377)
6e1e27fc4e : [inductor] Refactor pre-grad passes into inductor.fx_passes (#99130)
91279f9471 : [inductor][quant]Enable inductor vec codegen for quantization (#98489)
039faf0dbf : Add invariant that all symbolic shapes must be bound in graph (#99089)
c69d54885a : [SPMD][BE] Generalize factory ops support in SPMD expansion (#99233)
6bb20822f5 : [SPMD][BE] Remove deprecated aten.sym_numel branch (#99232)
39be994913 : [SPMD][BE] Consolidate DSymInt Branches (#99231)
544cd8e134 : [SPMD] Refactor DSize to DSymInt to enable sym_numel (#99206)
bafb984022 : [SPMD] Enable aten.full.default with SymInt on sharded dims (#99190)
d350646ff6 : SymIntify randint and randperm (#98968)
756a86d52c : Support large negative SymInt (#99157)
5c062e8bb4 : [vmap] Add vmap support for nn.functional.huber_loss (#99235)
c9403f128b : fix breakage from #99027 (#99245)
3cde50e3fa : Update NT error message (#99166)
47c685def3 : [dynamo] Support DELETE_ATTR (#98698)
15fe5a0798 : [Dynamo] Fix benchmark --verbose error (#99224)
21681f36f4 : [pt2] add `SymInt` support for fft ops (#99115)
f89b7c2bec : [pt2] add `SymInt` support for `roll` (#99114)
d5f7ec8a31 : Apply dynamic shapes policy correctly to _base tensor (#99211)
85f38b8a33 : [BE] Update flake8-comprehensions and adapt to rule C418 (#99178)
506bd05752 : make ATen/native/cuda/NLLLoss2d.cu data_ptr-correct (#99179)
e9201ab690 : make ATen/native/cuda/AdaptiveMaxPooling3d.cu data_ptr-correct (#99185)
34f681c13b : [CI] Remove inductor skip list for timm_models (#98840)
a595a50653 : [CI] Use expected accuracy csv files to check benchmark test status (#98839)
1adb6fa922 : nn.Linear: dispatch to bsr_dense_mm for half and bfloat16 (#94825)
b69f0480a5 : make ATen/native/cuda/MaxUnpooling.cu data_ptr-correct (#99189)
60f914e77e : make ATen/native/cuda/UpSampleNearest2d.cu data_ptr-correct (#99186)
05809c7d3b : [Dynamo] No graph break for explicit calling Conv{1/2/3}d.forward & ConvTranspose{1/2/3}d.forward (#99015)
157c869026 : Enable FSDP ``use_orig_params=True`` mixed precision training when some ranks have no (non-zero sized) parameter shards (#99175)
e9be0b0fb9 : [dynamo] Support functools.wraps (#98699)
b9426ded8d : [vision hash update] update the pinned vision hash (#99212)
3c4622c0ec : Patch failing slow-test logic for inductor-dynamic (#99182)
6eab5e88c8 : Graph-break on allowed modules if they have hooks (#97184)
55c71cf91f : [ONNX] Support aten.stack for dynamo_export (#99191)
606ce5b653 : [ONNX] Introduce Input/Output adapter; Switch to 'DynamoExporter' (#98421)
ef11966aff : [composable] Enable replicate + trec_shard overall (#98890)
e45fa1a581 : Back out "[core][pruning][be] rename BaseSparsifier to BasePruner (#98747)" (#99171)
c130b8a716 : Reintroduce s390x SIMD support (#99057)
7cb581d42f : aot_autograd: more logging on metadata asserts (#99177)
06ad8d6d5f : Remove filter step (#98969)
cb23191523 : [Vulkan] rewrite quantized add, mul, conv2d and conv2d_relu ops (#97468)
a910045add : [PATCH] Back out "Move functional collectives implementation to python. (#98595)" (#99168)
20019f7c56 : Fix bug in symbolic shape messages (#99169)
70ec347f06 : Skip sccache initialization on MacOS (#99121)
c0d9a0268d : [inductor] Use FakeTensorMode() when creating patterns (#99128)
bd07f8d2e0 : DDP forward support custom stream accelerated copy. (#98723)
a1074ddf51 : Enable cadd_sparse for BFloat16 on CPU (#96767)
2d542d36a8 : [dynamo] FakeTensor comparison with "is" instead of "==" (#99134)
b9d691040a : Update InternalMatch in subgraph_rewriter after repeated replacements (#99039)
651c1be885 : Recompute flat_arg_fake_tensors after fakification (#98769)
df43fef87f : Support >4GB strings in the TorchScript model (#99104)
6b9a52d1a4 : [inductor] Refactor post_grad.py (#99127)
ae55619a2b : Add check for same dtype in tensordot implementation (#98938)
9e0df2379b : [quant][fix] Compare resnet with quantizer api with the prepare_fx and decomposed convert flow (#98905)
baa06790f8 : Unbreak torch.compile on macos (#99119)
70072c926e : Fix MHA doc string (#99146)
286b618b7d : [SPMD] Move some functions to IterGraphModule.setup() (#99076)
d863876545 : [SPMD] Remove the unused code (#99075)
9642eb59ad : make ATen/native/cuda/Normalization.cuh data_ptr-correct (#99044)
46cfde4645 : make ATen/native/cuda/MultiLabelMarginCriterion.cu data_ptr-correct (#99077)
4d1297cae8 : trivially convert memcpy sources to use const_data_ptr (#98754)
3a510e3911 : trivially convert all memcpy destinations to mutable_data_ptr (#98753)
40aaacd4fa : Respect sharded dimensions when aten expaned/view consumes SymInt values (#99058)
d365d9ed29 : [torch package][easy] Make all the save/load tests use buffers (#98798)
210354620c : [MPS] Fix macOS build (#99139)
05a55b96d2 : make ATen/native/cuda/group_norm_kernel.cu data_ptr-correct (#99041)
298cc5c611 : Add vmap support for `torch.nn.functional.smooth_l1_loss` (#98357)
1e78a2edcc : Make summarize_perf.py work with perf-compare (#99095)
ca735ac856 : Don't specialize when indexing by SymInt (#99123)
0963e1187a : Put GraphArg on the node meta. (#99068)
6a50b83b73 : Expandable blocks in allocator (#96995)
2494e62599 : [MPS] Add ASSERT_ONLY_METHOD_OPERATORS Finish (#99021)
7ddeb8d320 : [MPS] Add ASSERT_ONLY_METHOD_OPERATORS Part 5 (#99020)
70120f2f92 : [MPS] Add ASSERT_ONLY_METHOD_OPERATORS Part 4 (#99019)
f043ff2cec : [MPS] Add ASSERT_ONLY_METHOD_OPERATORS Part 3 (#99018)
be50c1c48e : [MPS] Add ASSERT_ONLY_METHOD_OPERATORS Part 2 (#99017)
4ddaab845c : [MPS] Add ASSERT_ONLY_METHOD_OPERATORS Part 1 (#99016)
27049f3ff2 : make native/cuda/EmbeddingBackwardKernel.cu data_ptr-correct (#99027)
bce7308881 : [SPMD] Upstream partial_lower (#99069)
10fbdcf72c : Re-PR of 90269 - Force all nn_module associated tensors to be static (#99108)
b3e63baf58 : [spmd][easy] delete unused optim states during compile (#99061)
55a1dc7f88 : [dtensor] redistributed by default take self mesh instead (#99060)
cdef4f073c : inductor: fix timeout in ModularIndexing (#98841)
0a7baabafb : [torch.compile] Add sympy.core.relational.Relational to inductor.ir (#98971)
9d62f771eb : [ONNX] Remove duplicated code from previous rebase (#99072)
cd078d376e : GraphArg is always length one, adjust APIs accordingly (#99059)
e613a419ed : Remove dead wrap_sym (#99049)
21ed07ceb9 : Delete dead orig_graphargs (#99047)
cc345d181a : Change unspec ints to not be duck-sized (#99010)
a6a3df08e6 : make ATen/native/cuda/Loss.cu data_ptr-correct (#99073)
8382e91b9c : make ATen/native/cuda/MultiTensorApply.cuh data_ptr-correct (#99081)
13e4cc962a : [vision hash update] update the pinned vision hash (#99109)
d5abc7bfee : [Vulkan] Move convert_qconv2d_context to custom ops (#98548)
ece497b681 : make cublasSgemmStridedBatched data_ptr-const (#99085)
64a61fc7c3 : make at::cuda::blas::gemm calls data_ptr-correct (#99087)
979c5b4cf8 : Move torchdynamo start tracing message earlier (#98990)
ee1f28cd15 : Fix the bug of comm headers. (#98658)
c4f81cb6f4 : [NCCL] Add experimental Nonblocking NCCL Fault Tolerance/Checking (#95715)
006e6f1a05 : Fix CPU vectorized eq and ne operations for complex types (#97374)
5e1ac1bb83 : Fix visual studio generator (#98605)
06d8e231d5 : Make sure that while caching values we don't invoke any Aten operator (#99050)
0a98d94357 : [FSDP] Auto-pad for no `pad()` in post-bwd hook (`use_orig_params=True`) (#99054)
49cd650e2b : [BE][DTensor] merge random init test to test_random_ops.py (#98874)
36f52cc099 : [BuildSpeed] Limit `Logcumsumexp` complex to OSS builds only (#98957)
e778bcec05 : Revert "fix allgather func collective to use maybe_wrap_tensor (#98866)"
f84078b40b : [dynamo] Remove pointless graphs from with no_grad() (#98956)
02d1cf51b6 : [Easy] Clean up args remap for DTensor expansion (#99040)
fd7eaf79de : cmake will only run properly named c10 tests (#98710)
93d75568c7 : [ONNX] Refactor ShapeInferenceWithFakeTensor to fill metavalue into the original gm (#98760)
d5aa4cec57 : Delay torch.onnx import to after all dynamo [sub]components (#99070)
8062735f78 : [ONNX] Support aten::unflatten in torchscript exporter (#99056)
7b91bd2a7b : [primTorch] Add count_nonzero (#98995)
7d74dca780 : [primTorch] Add rad2deg and deg2rad (#98994)
668c578083 : Automatically generate attributes and methods for custom backends. (#98066)
09ebdf44fa : [quant][pt2e] Fix a bug in reference quantized module (decomposed mode) (#98903)
6f07ad6cbf : more trivial mutable_data_ptr from at::empty (#98750)
b8580b0897 : Fix lazy_modules while enabling Unspecialized '__call__' tracing (#98516)
1d077f28ed : [export] Constraints API (#98433)
4d3d3317eb : make ATen/native/cuda/LossCTC.cu data_ptr-correct (#99030)
9a04482a74 : make ATen/native/cuda/SoftMax.cu data_ptr-const (#99029)
5d758ea952 : make ATen/native/cuda/MultiMarginLoss.cu data_ptr-correct (#99008)
8e328762ff : [FSDP] Include duplicate parameters and modules when calling named_parameters and named_modules (#98912)
a44813e6d7 : trivial data reads to const_data_ptr (#99004)
35c6547f02 : Adds 3D attn_mask support to merge_masks() for Multihead Attention fast path (#98991)
bae304a5fc : make ATen/native/cuda/WeightNorm.cu data_ptr-correct (#99006)
bba2090831 : Enable fused optimizer for DP (#98270)
079452ea0f : Enable test_matmul_cuda UTs for ROCm (#98797)
fc53472ce4 : Move/Fix FakeTensor logic for detecting multiple fake modes (#97186)
82b8764b75 : [unwind] clarify warnings (#99005)
d8b09b0139 : [FSDP] Full precision in eval mode (#97645)
c74310616d : _mm_prefetch is for Intel, changed to __prefetch for Arm64 (#96638)
7a77961d63 : trivially migrate std::transform output to mutable_data_ptr (#98756)
e20981bda9 : [Dynamo] Fix Lazy Module initialization with constant arg (#98996)
7692243e40 : [functorch] typo in merge rule's github handle (#99052)
dda7ce4bb3 : Revert "[core][pruning][be] Rename sparsifier folder to pruner (#98758)"
e5501a967e : [inductor] Support IndexPutFallback in cpp_wrapper (#98972)
670c5cf962 : AOTAutograd: fix 'Trying to backward through the graph a second time' error (#98960)
39438c6803 : trivially convert std::copy output to mutable_data_ptr (#98755)
3a400a5adc : Enable passing a dict of module names: log level to set_logs python api (#98989)
6a568779b6 : [quant][pt2e][improvement] Remove the need to annotate all nodes with default annotation (#99001)
f501234be0 : Add test for squeeze.dims shape function (#98144)
2c337dd934 : [fix] update the condition for aliveness of TensorWrapper (#98748)
0ff3059ad0 : [pt2] recursive IR check (#98887)
1854e8ac5f : convert trivial assignments to use mutable_data_ptr (#98752)
388d269234 : Use the same python version in MacOS workflows and add more debug messages (#98902)
8155b72c15 : [ROCm] Sync updates in hipify_torch to Pytorch hipify utils for ROCm. (#93169)
8a6dd0dc97 : Disable logging in pattern matcher calls to AotAutograd (#98936)
8a3f1be809 : inductor: relax the dynamic variable check for cpu dynamic test case (#98815)
a408ed24ba : Support module hooks in UnspecializedNNModuleVar (#98540)
731590bae5 : Revert "[quant][fix] Compare resnet with quantizer api with the prepare_fx and decomposed convert flow (#98905)"
296822c475 : Make update_expected not fail on one missing file (#98982)
43146bd490 : [quant][fix] Compare resnet with quantizer api with the prepare_fx and decomposed convert flow (#98905)
51ff9ce997 : [Replicate] Simplify code a bit (#98889)
cfd1b4df94 : [Composable] add checking key for check_fqn function (#98961)
ccc9a3d726 : Automatic Dynamic Shapes (#98923)
46a31e9bab : Revert "[quant][pt2e] Fix a bug in reference quantized module (decomposed mode) (#98903)"
c80592ff9c : [ONNX] Remove torch dependencies in _beartype (#98958)
75f55ca63b : Support FQN as SPMD module override key (#98966)
6ebeefb4b0 : remove merging label when merge is cancelled (#98967)
9a2a6fcfa5 : add get_device_index for custom device (#98804)
c3186dc85e : [inductor] Support integer pow (#88938)
efc90c797d : improvements to torch.gradient docs (#98824)
a8f5d72edf : Guard color diagnostics opts by compiler type (#98952)
ab761605ae : Revert "[export] Constraints API (#98433)"
99aacf5c68 : [SPMD] Expedite the allreduce call before doing comm_fusion (#98922)
78ff7ca24a : [Dynamo] Fix Sequential nn module with duplicated submodule (#98880)
8db04e080c : [pt2] add `SymInt` support for `cdist` (#98881)
a2e809f29b : [quant][pt2e] Fix a bug in reference quantized module (decomposed mode) (#98903)
3c5a825f3c : [AOTAutograd] Fix is-duplicate check in de-dup guard logic (#98932)
bb4998b531 : Add shape function for `aten::cross_entropy_loss` (#97875)
5c38c4cfa4 : Improve symbolic shapes guard logging (#98941)
1149ba5553 : Revert "[NCCL] Add experimental Nonblocking NCCL Fault Tolerance/Checking (#95715)"
c650d7b67f : [inductor] add `cumprod` to `make_fallback` (#98898)
a38ff4cfd1 : documentation update (#98782)
4828585019 : Revert "Move/Fix FakeTensor logic for detecting multiple fake modes (#97186)"
dc52ba2906 : Fix test_mps for macos 13.3 (#98739)
419ad49e65 : Make Tensor.__contains__ accept SymInt/Float/Bool. (#98933)
ada7dfff71 : fix allgather func collective to use maybe_wrap_tensor (#98866)
e99549526e : [spmd] move the param_buffers to the front (#98437)
65070e1f0a : Allow set pred with ConstantVariable (#98900)
a33eac3988 : [NCCL] Add experimental Nonblocking NCCL Fault Tolerance/Checking (#95715)
09458a2bf1 : introduce TensorBase::mutable_data_ptr() (#98163)
be8a4eb8e3 : [MPS] Add index_fill op (#98694)
01e011b07c : [MPS] Move bitwise ops registration to native_functions.yaml (#98908)
c47464ed95 : [PyTorch] Further reduce cost of TypeMeta::_typeMetaData (by 10x!) (#98105)
8a057c445d : Move/Fix FakeTensor logic for detecting multiple fake modes (#97186)
8654699c54 : [dynamo] Remove _dynamo.skip and fold it in _dynamo.disable (#98899)
71aea7f56e : [MPS] Add error inputs check (#98167)
286212080f : introduce TensorBase::mutable_data_ptr<T> (#97874)
629377ea8b : Revert "Replace _dynamo.config with an object instead of module (#96455)"
0c0e5c574e : [inductor] Consolidate constant_args and cpp_constant_args (#98742)
ff9e34fb35 : [inductor] Consolidate kernel and cpp_kernel for wrapper codegen (#98741)
439a716785 : remove unused TensorImpl::unsafe_data<T>() (#98720)
951df11af8 : [dynamo] Raise exception on incorrect usage of disallow_in_graph (#98892)
ee0143bf65 : distinguish mutability of TensorImpl::data<T>() (#98719)
9c98f2ceb7 : inductor: rewrite mkldnn fx fusion using pattern_matcher(binary) (#97141)
d3a1a772b5 : inductor: rewrite mkldnn fx fusion using pattern_matcher(conv_transpose_unary) (#97140)
73c3cb717d : inductor: fix the issue of cat missing dim argument for sink_cat_after_pointwise (#98901)
562e5d4942 : inductor: rewrite mkldnn fx fusion using pattern_matcher(linear_unary) (#97139)
c214c50355 : inductor: rewrite mkldnn fx fusion using pattern_matcher(conv_unary) (#97007)
0be65069d3 : [BE] Use `Literal` from `typing` (#98846)
6ff32b5575 : [MPS] Expose mps package in torch (#98837)
d3a35956de : Skip dtensor ops on CPU-only runner due to flaky timeout (#98868)
60ebb2f116 : [Gloo][BE] Print stacktrace on collectFullMesh (#98810)
39fd7f945f : Add Symbool support in python to C++ translation (#98453)
bc8cb62bcb : torch.compile benchmark utility (#97699)
455795c799 : Enable fake_crossref unit tests on rocm (#97368)
9c5473b79c : [BE] Move mobile builds to python-3.8 (#98886)
1510eb4072 : [export] Constraints API (#98433)
ac5025cdad : [llvm-17][ORC] Fix for move most ORC APIs to ExecutorAddr, introduce ExecutorSymbolDef. (#98811)
f3080997e5 : [SPMD] Introduce remove_copy_for_optimizer optimization (#98580)
401320690b : [SPMD] Add optimizer states and steps to the return (#98579)
07a1378f52 : [SPMD] Introduce schedule_comm_wait (#98578)
dd3e2ddc0a : [SPMD] Introduce graph_optimization_pass and comm_fusion_with_cat (#98285)
78ad800a2a : [nccl] Remove lock for nccl collective launch for 2.0+ (#97904)
e37986d48f : [memory viz] support larger visualizations (#98865)
a2e0f5128c : [dynamo] Fix bug with torch._dynamo.skip (#98862)
2de67eaaee : [SPMD] Add a dump_graphs_to_files utils to facilitate graph transformation debug (#98284)
0962114802 : Fix 'fully_shard' may determine compute device incorrectly (#98831)
c93ff384c3 : [Easy] Reuse `source` variable in `wrap_tensor` (#98845)
ad373efe6d : [ONNX] Skip flaky dynamic tests before ORT==1.15 in fx exporter (#98856)
6cbe5c5ef7 : Fix Lint (#98873)
89894115ab : [MTPG] add all_to_all collective to MTPG (#98791)
420104a886 : Replace _dynamo.config with an object instead of module (#96455)
06c206cea3 : [SPMD] Add the default graph module transformation that is applied after tracing and expansion (#98182)
367051e47e : [docs] Add missing functions to autograd.rst (#98854)
3b6a78ea87 : [Dynamo] Lazy Module support list/tuple input (#98809)
def50d2534 : Create a new unstable workflow for periodic jobs (#98858)
88dae230d0 : dynamic range constraint API (#98779)
1e807f1189 : Log PT2 compile to Scuba (#98790)
97889fa199 : simplify indexing expression before trying to determine strides (#98783)
4130e4f284 : [hypothesis==6.70.1] Fix more test errors (#98685)
16beb636b8 : Generalize summary script to work with more CSV names (#98500)
6361c3debc : Return zero_point from determine_qparams as an int64 (#98746)
abafb1e6dc : [fx] Minor bug fix for SubgraphMatcher when ignoring literals (#98458)
c9adc4c376 : [Dynamo] De-dup graph inputs (#98775)
ca791b6909 : [MPS] Add higher order derivatives warning to max_pool2d (#98582)
e2cfdf177b : Remove un-used part of cuda rng state (#98787)
778fd1922a : [core][pruning][be] Rename sparsifier folder to pruner (#98758)
583193e1d9 : [MPS] Fix batch_norm_backwards key (#98794)
2b38bd5bba : [ONNX] Safely set node name for 'replace_placeholder_name_and_target' (#98633)
ad1d842234 : [Dynamo] Make python random calls real random (#98812)
abe96654de : [reland][BE][autograd Function] Raise an error if input is returned a… (#98051)
97a756f57d : Enable G004 lint check (#98843)
15686950b7 : [spmd] quick fix on batch input view issue (#98813)
760967a284 : Update _store_based_barrier implementation to reduce load on rank 0 (#98000)
b8b840be3d : Convert logging f-strings to use % format, part five (#98765)
5a7aad9681 : Convert logging f-strings to use % format, part four (#98705)
5a458a9df4 : Convert logging f-strings to use % format, part three (#98704)
5ca3afd1bf : torch.hub: add safe weights_only option to load_state_dict_from_url (#98479)
5907173022 : Updated upsampling test to use parametrize_test decorator (#97769)
6145964ec9 : distinguish implementation of data() and mutable_data() on TensorImpl (#98732)
34961d416c : Remove unused log config settings (#98795)
ce4df4cc59 : Enable triton build in CI docker image for ROCm (#98096)
7117c87489 : torch.library.Library.impl: add missing param in docstring example (#98619)
0c162adfa8 : [dynamo] Support callable() on user defined functions (#98662)
c377a8590b : Add `nonzero_static()` op to pytorch to unblock export (#97417)
d4ce045cfc : [Add] storage support for custom backend. (#98469)
1ff0a03e3f : Fix misuse of active mask (#98157) (#98159)
a7892802b9 : [dynamo] Add einops to skipfiles (#98661)
910d9224b5 : [spmd compile api] use fake tensor for DTensor propagation (#98789)
5a2de506fc : [spmd compile api] run gm_transforms before running the first iteration (#98788)
ec1d6580f1 : [stronghold][bc-linter] correctly determine the base commit of the PR (#98538)
ab385bd49e : docs: Linking ResNeXt PyTorch Hub Pipeline (#98689)
85e1d74c52 : [FSDP] Clarify CPU offload implicitly in reshard_doc (#98666)
c00fd71a95 : Workaround for CuDNN-8.7+ load bug (#98644)
fa077377ea : [PtE][CoreML] Create modelID as value not reference (#98655)
ef3ea30eed : Add CUDA 12.1 workflows (#98492)
dda95236c9 : Add fast path in our type checks and argparser (#98764)
7ecbce374e : [DTensor][3/N] enable aten.native_dropout (#98577)
e686a1e1b3 : [DTensor][2/N] add Philox offset adjustment logic in operator_dispatch (#98199)
67963c32bd : [DTensor][1/N] add DTensor RNG state APIs (#98198)
3c2bc0760b : [EdgeML] Switch from BZL to BUCK for model resource testing (#98450)
803a1a041a : [torch.package][easy] Add another opcode for matching pickle protocol 4+ correctly (#98674)
76ac454146 : Index expanded dims before checking memory overlap (#98656)
f011db345f : Fix typos under torch/_inductor directory (#97592)
822464567f : Lazily format graphs for debug printing (#98776)
f25f85546f : add rng_state support for custom device (#98069)
a13a63ae9a : Fix typos under torch/ao directory (#97679)
a531a464fd : Fix typos under torch/nn directory (#97594)
105ef68f72 : Fix typos under torch/fx directory (#97596)
4584851da5 : [core][pruning][be] rename BaseSparsifier to BasePruner (#98747)
bd83b205cc : Skip test test_triton_bsr_dense_bmm if not TEST_WITH_TORCHINDUCTOR (#98462)
5bcbb9bca7 : Skip testing distributed backend if the backend (UCC, NCCL, Gloo) is not available (#98576)
117da58b65 : [dynamo 3.11] enable dynamo unittests in 3.11 (#98104)
457afe48fd : [caffe2] Micro-optimizations in BlobGetMutableTensor (#98103)
02cff64784 : Assert that there are not duplicate sources for distinct arguments (#98738)
b663f7e887 : [better_engineering][multiplatform] Replace host_info() check with separate cmd and cmd_exe commands for protos (#98426)
d5120ff18a : [torch.library] Add ability to create library fragments (#98439)
618ea6fac3 : Fix test_python_dispatch under debug mode (#98609)
01b2c45659 : [autograd_function_db] Add NumpyTake as OpInfo (#98438)
c139df407b : Skip failing test_torchinductor_codegen_dynamic_shapes tests on CPU (#98621)
9abae6ae32 : Make all Source subclasses frozen. (#98737)
69eef5a4be : [CUDA12] set_device change (#94864)
3fcc5ff0d6 : Avoid passing buffers to optimizers during spmd rematerialization (#98714)
a3701b6740 : fix backward bug for custom device (#98586)
537c346117 : feat(add method is_private_use1() in class Device) (#98123)
b09722f540 : Convert logging f-strings to use % format, part two (#98700)
9a8f71f23e : Convert logging f-strings to use % format (#98697)
ad88afcff8 : [xla hash update] update the pinned xla hash (#98195)
95621b3c2e : [aot] fix disable amp for runtime wrapper (#97864)
96fb64a159 : Turn off cudagraph trees (#98709)
fdfd370c10 : [vision hash update] update the pinned vision hash (#98654)
584244460b : use float as accumulate type for reduce Ops: min, max, minmax on CPU (#96079)
8fee46693c : Fused attention patterns (#97741)
f4858fa8ef : Improve dynamo support for autograd.Function (#98158)
7e0c26d4d8 : [JIT] Allow `tuple` and `list` generics (#98703)
2400cb1d57 : distinguish mutability of TensorImpl::data() (#97776)
6b9a1cf858 : Removed hip call hipDeviceSynchronize (#97209)
ff825de442 : [primTorch] add ref for `cumprod` (#98670)
9d36361601 : make TensorImpl::data_ptr_impl() non-const and have mutable in the name (#97744)
54b168484d : Support LayerNorm without weight or bias parameters (#98687)
1be3549a27 : Enable replicated embedding in SPMD for NLP models (#98686)
fdb04c6a86 : Add overflow check for stride calculation (#94900)
3925f6edb2 : add Half to cat fast path on CPU (#96078)
d95ee64b58 : ddp forward support custom backend. (#98283)
a2e7910dfd : [pt2] remove skip for `masked.logsumexp` in `test_proxy_tensor.py` (#98676)
b411238d76 : [pt2] add meta function for `logcumsumexp` (#98683)
387feaa131 : add mutable to name of non-const Storage::data_ptr (#97694)
2edfcafd4b : [inductor] remove RBLOCK from persistent reduction kernel's parameter list (#98653)
d77d2f03a5 : [ONNX] Fix scalar elements in op.Concat (#98509)
70535d60fc : Restore CPU distributed tests (#97424)
0fa25cbd57 : Fix broken MacOS build due to #97690 (#98665)
cb3c478069 : Revert "refactor(add privateuseone floder in aten/src/ATen): add a PrivateUse… (#98127)"
526d9bbc65 : [ONNX] Refactor op level debugging (#97494)
5375e78b50 : [Inductor] turn on vectorization with fallback for indirect indexing etc. (#98138)
584a7ef35c : [Inductor] cpp further code cleanup (#98135)
85a90d9181 : Rename assert options, turn off by default (#98616)
a5f3468618 : [Dynamo] Fix bug when dynamo generate guards for enum type (#98652)
0dbdc8a380 : reenable lowmem dropout (#98631)
c68a94c5ea : distinguish mutability of untyped Storage::data (#97690)
d255c8e1ad : Add NLLLoss to DTensor prop rule (#98512)
a6155f34f6 : Set up automated hash pinning for triton (#97568)
f959a0d56c : Modify 'fake_tensor_unsupported' function (#98585)
b7ff717232 : [inductor] Use 64-bit indexing for large tensors in triton code (#97447)
48397cddd7 : [inductor] Fix benchmark_compiled_module codegen with CppWrapperCodeGen (#98608)
917e9f1157 : Fix pytest config (#98607)
4563adacc5 : Update the use of nvidia-smi for GPU healthcheck (#98036)
112dfa1415 : Back out "[kineto] add SOFT_ASSERT when logging metadata" (#98630)
5ceae85f1c : [Dynamo] Include UserDict in clone_inputs (#97725)
cf10fd827e : Add comments about maybe_guard usage in Inductor (#98563)
ebd4c165ff : Back out "`GradScaler` recomputes `optimizer_state["found_inf_per_device"]` before `optimizer.step` (#97415)" (#98613)
2d9f482d88 : [fx] Subgraph rewriter matching on attributes (#98604)
4adae2d1ae : Enable flatbuffer tests properly. (#98363)
0a0f107b50 : Retry ONNX tests (the quick way) (#98627)
4f9dbc17a4 : [ONNX] Enable xdoctests in CI (#98546)
b2b783ea3c : Fix wrong SPMD test target in test_log_softmax (#98610)
61c74ab0f8 : Fix MPI rank and world size pg initialization (#98545)
24d9001527 : Move functional collectives implementation to python. (#98595)
c75dd7c413 : grab bag of changes (#98572)
9667f261c6 : Remove MERGE_IN_PROGRESS when exiting merge (#98611)
55724a5ec9 : Revert "[experiment] More procs in CI (#98098)"
5210d7c423 : [CI] Mark vision_maskrcnn as NONDETERMINISTIC (#98570)
c5269ad6c6 : [quant][pt2e] Add support for a few ops in QNNPackQuantizer to enable quantizing internal model (#98560)
89e5774482 : Work around CI worker gpu issue for inductor_distributed (#98601)
1c226f5aad : [pt2] add meta functions for `cummax` and `cummin` (#98552)
483fd3351a : [Quant] Add get_symmetric_qnnpack_qat_qconfig_mapping (#98569)
e016dec66e : Clean up compile reason logic, report only graph break compiles (#98574)
f55e72c0f6 : Add option to log recomps (#98564)
9fd3eba6ce : [experiment] More procs in CI (#98098)
e302f083bb : Flip Switch Redux (#98341)
16ec7efa49 : Don't use f-strings in logging calls (1/X) (#98591)
79e14f8fd6 : [better_engineering][multiplatform] Replace host_info() check with select for default_compiler_flags (#98306)
390c51bf87 : Skip nnmodule hook guards by default (#98371)
46d765c15e : [devX] make labels only count their own occurrences (#98551)
d06662fb57 : Add ephemeral merging label (#98543)
d643a00efc : inductor(CPU): support dynamic shape for onednn fusion path (#97230)
77d9742c24 : [Inductor] Fix bug in lowering.slice_ when negative start out of range (#98517)
45a2f6b70f : Revert "Reduce includes of CUDACachingAllocator.h (#97072)"
5c8fea5647 : Reduce overhead in CUDAGraph Trees (#98529)
616f50da3a : [quant][pt2e] QNNPackQuantizer support annotation for resnet18 (#98507)
5a537e291d : refactor(add privateuseone folder in aten/src/ATen): add a PrivateUse… (#98127)
29608fd28d : [pt2][inductor] hardcode autotuning names (#98351)
3d8ead7ee1 : [vision hash update] update the pinned vision hash (#98367)
1fb8428d70 : Fix off-by-1 error in dynamo coverage stats (#98558)
2161be08c4 : Disable test_torchinductor_dynamic_shapes on ASAN (#98544)
152d65ae1d : [reland][inductor] Enable CudaWrapperCodeGen for non-AOT mode (#98534)
d4dbdee528 : Update _linux-test.yml (#98317)
a0a0b0c701 : Dont decompose dropout so it can be pattern matched (#97931)
482f87a7bc : [quantized] Fix return values of _get_name() in quantized ConvTranspose (#97678)
88208c6fdf : [inductor][cpp] fix mul for uint8 (#98473)
06eaa0970b : [Resubmit] Don't crash on retrieveDesyncReport (#98470)
4adba70cc6 : [inductor][easy] use num_stages=1 for reduction (#98524)
86cb7f40a9 : Fix the missing PATH in mps workflow after #98522 (#98559)
22411b6f02 : Revert "[dynamo 3.11] enable dynamo unittests in 3.11 (#98104)"
481ecffb5e : Add test c10d ucc tests (#88110)
8a29afe98a : [RFC] Add warning about object-based collectives for GPU tensors to docs. (#97702)
eb5da4df8a : Speed up LossCTC.cu (#97269)
a2bb2fae1b : Add Autocast support to MatMult through explicit cast (#98346)
0066f3405f : [dynamo 3.11] enable dynamo unittests in 3.11 (#98104)
dbfc4df075 : Add $CONDA_ENV/bin to PATH on MacOS (#98522)
531b8e8f1e : stop using caffe2/core/logging.h forwarding header in serialize lib (#98168)
fdb9441e7e : Stop recursion on trivial replacement (#97903)
ca1fe9bae5 : remove no-op C10_DISABLE_NUMA preprocessor flag (#98243)
e4c8c75583 : [PG NCCL] Add TDD, NCCL_DEBUG log (#97692)
03a428a5b2 : [ONNX] Introduce 'Functionalization' for fx exporter (#98245)
edebe413d3 : [inductor] fix scatter fallback and fallback in deterministic mode (#98339)
68cb06c752 : Make gen_annotated_args support kwargs (#98396)
fe99d39fbd : migrate PyTorch to c10::bit_cast (#98418)
213cec3c45 : Revert "Add typing_extensions as MacOS ci dependency (#98522)"
12f340dcd9 : Add round as UserError (#98376)
e0b958f975 : [SPMD] Allow IterGraph support a more general subgraph movement (#98360)
f228b3977b : Revert "[inductor] Enable CudaWrapperCodeGen for non-AOT mode (#98264)"
3b6e94cb8c : [small] replace .format() with f-strings (#98514)
0210481dcb : Fix _like meta registrations (#98160)
dcb9440af9 : [kineto] add SOFT_ASSERT when logging metadata (#98442)
e394f6db5a : Revert "Improve dynamo support for autograd.Function (#98158)"
e6e33488d3 : Add typing_extensions as MacOS ci dependency (#98522)
49b80c3ea2 : [reland] remove typed StorageImpl::data() and StorageImpl::unsafe_data() (#98411)
e663143871 : [dynamo 3.11] fix 3.11.2 issues (#98364)
1bcb880894 : Reduce includes of CUDACachingAllocator.h (#97072)
e085acc9f3 : Cleanup Copy.cu logic (#97071)
938c5da61e : [inductor] do not generate loops when the condition doesn't hold (#98185)
bb33173962 : Add max-autotune compilers to benchmarks (#98464)
67d1a77086 : Revert "Move functional collectives implementation to python. (#98315)"
ce797795e1 : Support `getattr` for ConstantVariable when compiling with Dynamo (#98153)
4716fa2411 : Improve dynamo support for autograd.Function (#98158)
0c5389b401 : Remove unnecessary schema_map from spmd API (#98444)
77f32eb6cc : [inductor] Enable CudaWrapperCodeGen for non-AOT mode (#98264)
348dcf51e5 : [inductor] Combine CppWrapperCodeGen and CppAotWrapperCodeGen (#98088)
7b25976323 : [pt2] add meta function for `take` (#98451)
019914095e : [Easy] remove unnecessary get_rank() in tests (#98445)
bbf180af9f : Add new aten::device variant to TorchScript (#97023)
d1e7434bcf : Improved configuration naming for repetitive workflows (#98496)
fa4cab8925 : [Sparse] Raise exception when expand is called on sparse tensor (#98365)
8b0374f83c : Move functional collectives implementation to python. (#98315)
f98c1809a4 : Add mark_static (#98427)
bdb79a8f52 : Turn off divisible_by_16 for dynamic shapes; support ablation (#98471)
3142ce208f : [quant][pt2e] Support quantizer API in prepare_pt2e_quantizer (#97994)
ccc27bc361 : [Inductor] Fix convolution lowering if stride or padding or dilation is 1 element list (#98448)
b8cf010139 : Print collective (#97544)
dab1a7e6a1 : [PG Wrapper] Add sequence number (#97462)
428c531d00 : [FSDP] records for composable (#98428)
eadd84d065 : [ROCm] Enable FSDP BF16 comm hooks unit tests (#97517)
37dc47a1ac : Make calling type on user defined class UserError (#98366)
1cd1d9c24a : [SPMD] Dedup collectives from DTensor expansion (#98216)
11b0a84f3e : Enable LogSoftmax for SPMD tracing (#98380)
e2c81e44db : backport std::bit_cast from c++20 to c10 (#98417)
ab95b7a05f : Support neg calls to dyn shapes (#94068)
11890156e7 : fix grain size setting for baddbmm_cpu_kernel (#98297)
cc5f64957b : Add PrivateUse1 for dispatching PyTorch Distributed Collectives. (#98137)
d3adbbf44b : Clean up CUDA 11.6 Docker images (#98395)
bd78532020 : [BE] Fix `collect_env` for python-path-with-space (#98415)
680bf14a40 : [EASY] Fix some more places where we incorrectly assume only Tensor (#98310)
478df47fab : Disable persistent reductions with dynamic shapes (#98405)
007587aa00 : [CI] Update update_expected.py to make it generate a combined csv file (#98407)
a76114832a : [quant][pt2e][fix] Fix the internal test failures caused by refactor (#98378)
2a48f43fe2 : Add check for 0 to 1 inclusive for elements of target tensor in BCE loss (#97814)
3112d2a2b6 : Export function symbols to enable Windows build of Intel Extension for PyTorch (#98054)
013c7f5ba4 : [inductor] Move `tl.broadcast` call out codegen.common (#98304)
bb4174d2a3 : [inductor] Enable CSE on masked loads (#98303)
aa7850c214 : rewrite at::vec::*::convert_to_int_of_same_size (#98429)
29d2e4b7fa : Forward fix for DataLoader to accept custom Sharding DataPipe (#97287)
d01ee10b25 : Add detect_fake_mode (#98321)
5854923c17 : Extract ExtraMeta symbolic shape fields into a dedicated SymbolicShap… (#98399)
5af47dbb23 : Add slow workflow to upload test stats workflow (#98447)
d0eafed7fb : [Easy] Fix minor errors in DTensor examples (#98430)
b1c2925493 : [Dynamo] Support typing.Union and typing.Optional (#98384)
846415f6ea : Add HPU to the storage tensor backends (#98404)
29cde00701 : [MPS] Add `random_` overload (#98333)
c9b1e09958 : [c10d] delete lengths offset checks (#98368)
9c7b03d51e : [Dynamo] Fix bug of torch.is_floating_point & is_complex (#98393)
ebeaf8adf1 : Add hacky example inputs to dynamo produced graph (#96561)
3d36f6f18d : Fix default argument of `parse_ir` stub (#98397)
3c8e9e38a1 : [pt2][inductor] retry add `triton.__verison__` as cache key, update cache layout (#98369)
f21a176c03 : Python Dispatcher should respect FuncTorchBatchedDecomposition key (#98328)
78e991e575 : Patch release process description (#98425)
37b9143206 : Require sequence length in huggingface to be dynamic (#98335)
cf1bfca2ba : Require batch dimensions to be compiled dynamically (#98334)
69f9bd2323 : Don't error if we mark_dynamic without dynamic_shapes on (#98324)
2c6c7deeb3 : Added ModuleInfos for Pooling ops (#98358)
3a0ad3c194 : [easy] Remove large LayerNorm sample input causing OOM from ModuleInfo (#98424)
3ed66f94b5 : Add more debug logs to evaluate_expr (#98344)
f557402e8d : remove //c10:headers (#98420)
937ba248eb : Make the Index Rounding Mode Consistent Between the 2D and 3D GridSample Nearest Neighbor Interpolations (#97000)
dcec2100b1 : [dtensor] add placement strategy and einsum strategy (#98227)
93063768da : [pruning][core][feature] Implement convert for pruner (#97545)
3657b37d6b : Add forward_prefetch flag to fully_shard (#98277)
49c130256d : Clarify `Tensor.is_sparse` doc (#98408)
d1de5f5f0d : Change daily aggregates upload job to use sum and occurence counter instead of averages (#98359)
762a81cb7d : [spmd compile api] pre-flatten state container and pass the flattened state container to transforms (#98392)
37dbd5bf76 : [spmd compile API] add a (temporary) mechanism for overriding input tensors' placements (#98391)
970c08f92f : [spmd expansion] support scalar_tensor (#98390)
0830808dde : [spmd expansion] speed up expansion by ~5x (#98389)
161f7c0b28 : [spmd expansion] support torch.ops.aten.sym_numel (#98388)
3344d79e3f : Pattern matcher improvements (#97740)
279ca5f9db : Revert "[CUDA12] set_device change (#94864)"
981f9f0408 : Better Handling of Storage Cache (#98254)
f1b901b040 : Make sure we dealloc on recording, not just replay (#97440)
c18be2b2ec : [CUDA12] set_device change (#94864)
7b08889074 : Fix GridSample Activation Quantization (#98278)
3da7e83250 : Add test for pickle_module (#98373)
ea00f850e9 : add new() method identifier to _StorageBase (#98201)
2a32bc50c6 : Only print guard code when printing guards (#98347)
555ab310dc : Add itemsize and nbytes properties to Tensor (#98322)
14ccad73b4 : fix _slice_meta's shape calculation (#98326)
b4420f0fd5 : Fix complex variable notation for division operator to be consistent. (#98057)
526b564fa0 : Uniformly use elem when checking ListType (#97873)
c4de7fdef5 : [CI] Mark sebotnet33ts_256 as nondeterministic (#98356)
a05d787eb6 : [inductor] Fix slow tests not being run in CI (#97841)
45edc58e4f : Revert "remove typed StorageImpl::data() and StorageImpl::unsafe_data() (#98219)"
752e43c301 : Move android-emulator-build-test to periodic (#98370)
2987bc0758 : Inductor cpp wrapper: support dynamic shapes (#97965)
601e7dc0bb : Fix typos under caffe2/operators directory (#98235)
feb9ec4282 : Account for forwards whose corresponding backwards are not invoked (#98112)
ae0d06b42c : Fix saving and loading pickle files on Big Endian systems (#95881)
1e3abda31a : Revert "[spmd expansion] support torch.ops.aten.sym_numel (#98229)" (#98382)
e943b120a3 : Fix incorrectly getting the name of OrderedDict's index in dynamo (#96940)
30d47e4520 : Do not track parameters, do not generate guards (#98350)
144d5268a1 : remove typed StorageImpl::data() and StorageImpl::unsafe_data() (#98219)
6887333cf9 : [inductor] Fix a perf regression caused by https://github.com/pytorch/pytorch/pull/98214 (#98343)
b923f84805 : Switch accuracy CI to dynamic batch only (#98307)
6514d71add : Fix typos under torch/distributed directory (#98225)
3af0228338 : remove typed StorageImpl::unsafe_data() (#98218)
a3365e1d0d : Increment pending forwards after invocation (#98101)
3686416a57 : [SyncBatchNorm] Support running with low precision parameters (#98332)
2d9b2bcfba : Extend TensorImpl with BackendMeta (#97429)
dd503376bd : Revert "[pt2][inductor] add `triton.__verison__` as cache key, update cache layout (#98010)"
bd6db54285 : [CI] Mark mobilenet_v3_large as nondeterministic (#98314)
ecf08a0f8b : [ROCm] Enable test_filtering_env_var (#84100)
51a978fe7b : Set number of threads to be 1 for ARM (#97482) (#98267)
aaae588727 : [FSDP][Docs] Add warning about forward saving param refs (#98320)
66d07e3b19 : [FSDP] Only move current FSDP's states to GPU during init (#98319)
d7156175fe : [FSDP] Add skip writeback check gated by env var (#98300)
96595617b9 : Support Modules with custom __getitem__ method through fallback (#97932)
057911741a : [EASY] Teach requires_bwd_pass how to interpret int. (#98312)
fd0be80dd1 : [Dynamo] graph break when calling resize_() on graph input (#98279)
3c36f82fa2 : [EASY] Handle new inference csv from CI (#98294)
75ac6fdcdd : Propagate dynamo shape_env to make_fx (#96437)
0eab3ab51e : [pt2][inductor] add `triton.__verison__` as cache key, update cache layout (#98010)
9ddd97e1eb : [kineto] make input shape collection opt-in for on-demand tracing (#746) (#97917)
b04f86363f : Fix ideep submodule (#98305)
34c7adf1d7 : add Half support for sigmoid on CPU (#96077)
89dc87a225 : Deduplicate pointers to manually free (#98097)
a52cf3398c : Revert "Add arm tests to mps workflow (#97279)"
558e5a240e : Introduce torch.onnx.dynamo_export API (#97920)
fe9da29842 : [FSDP][Easy] Remove unused `requires_grad_mask` (#98299)
4934dde310 : Cleanup redundant CI jobs (#98044)
10271a60a8 : [FSDP] Skip `_use_sharded_views()` for `SHARD_GRAD_OP` (#98250)
69f1131178 : Bring the fix to flaky missing libzstd on MacOS M1 to its build job (#98236)
ba6bc5080f : Fix fused_8bit_rowwise_conversion_ops_test (#98183)
23a9e08d0d : [bazel] Move torch/csrc/distributed/c10d/quantization/quantization_gpu.cu (#98188)
42cbf7120a : Add parentheses to `FloorDiv`. (#98290)
0b31f87c18 : [FSDP] Use correct handle training state when prefetching (#98249)
950431c334 : extract out a caffe2 macros library (#98156)
f6272ce79d : [FSDP] Allow non-uniform `requires_grad` for `use_orig_params=True` (#98221)
301f00f350 : generate caffe2/core/macros.h in shared build structure (#98131)
d47a4bf53f : Align settings for new device key. (#98224)
86505c692f : Disable inductor/test_minifier on ASAN (#98263)
e7874eea7a : fix the use of incomplete vector<T> for C++20 compatibilities (#93978)
a9c7e882ac : [Dynamo] Support skip fbcode modules (#98192)
d16a9b7676 : [inductor] be able to enable max-autotune and cudagraphs independently (#98255)
7eaaefafb3 : Revert "Extend TensorImpl with BackendMeta (#97429)"
8f2f1a0b32 : [torch/fx] add torch/utils/_stats.py to stack frame skiplist (#98117)
1fae179ee1 : add support for SymNodeVariable in getitem_const (#97756)
b109083098 : [quant][pt2e][refactor] Remove `backend_config` from `_maybe_insert_input_observers_for_node` (#98094)
bc38b278bf : Extend TensorImpl with BackendMeta (#97429)
c5963b7792 : [vision hash update] update the pinned vision hash (#98261)
4431509a54 : introduce c10::DataPtr::mutable_get() and use it in c10 (#98217)
fa08e546f3 : Revert "Add all_reduce_coalesced functional collective (#97157)"
177994eb54 : [inductor] [cpp] fix bitwise codegen (#98056)
0f151ad2ed : Inductor cpp wrapper: support LinearUnary (#97655)
0e2bde3000 : Create script to upload test aggregation data (#97954)
4cf3e7c255 : [dynamo benchmarks] Fix inference benchmark runs (#98248)
96ad739ddc : Added ModuleInfos for {*}Norm modules (#97919)
a3fc3531f5 : Add all_reduce_coalesced functional collective (#97157)
69ff39d2e7 : Skip gat, gcn and sage for TorchBench CUDA test (#98244)
f386312ec9 : [PyTorch] Don't do extra numel() check in TensorImpl::data() (#98090)
9ad66dd588 : Switch reduce_scatter and all_gather in DeviceMesh to use functional collectives (#96226)
2ac9086987 : run buildifier on unified build files (#98141)
b1e60bfb6a : Pass f_locals as a dict rather than kwargs (#98107)
b96fe9b61c : Fix issues related to ClassInstantier in HF models (#97997)
4d13fcddef : [spmd expansion] support torch.ops.aten.sym_numel (#98229)
a6bd21d935 : [Dynamo] Eagerly initializing Lazy Module to reduce graph breaks (#97946)
96f548a1ac : [inductor] Add an AOT mode for the Triton backend (#98214)
73b06a0268 : Fix rendering of arguments for nn.functional ops that use boolean_dispatch (#98092)
eeb18d1e54 : Fix dynamo tests and re-enable internally (#97937)
3654552b8c : add deterministic impl for scatter and scatter_reduction sum/mean mode (#98060)
13f169c9da : Per Channel in back-propagation function (#97475)
8e5f57a2b1 : add users to external contribution metrics (#97928)
1ea528ef24 : [bf16] bf16 support for conv_depthwise3d (#97819)
55afaa46a4 : Support functools.partial and itertools.product (#98120)
2c905f2152 : Extend Pattern Matcher to allow handling split-cat style patterns (#97726)
095c129bd3 : [CI] Add inference run for the performance dashboard (#98174)
ba7ee00f00 : Add a --inference flag to dynamo benchmark script (#98173)
5a54eb0b15 : [caffe2] miniz fix -Wstrict-prototypes (#98027)
0f0c1b6516 : Flip back switch (#98099)
55daa835e9 : Added allowed_workflows to pytorch probot (#98082)
ced5c89b6f : add explicit vectorization for Half dtype on CPU (#96076)
c99895ca6f : Move pull and trunk slow tests to periodic (#98040)
c597d9c1f2 : Revert "Inductor cpp wrapper: support LinearUnary (#97655)"
d03003ab8e : Inductor cpp wrapper: support LinearUnary (#97655)
0c1f524b92 : Inductor cpp wrapper: support MKLPackedLinear (#90755)
5d62d12557 : [Inductor] support transpose vertical reduction in cpp (#97781)
76074dc0a3 : Improve support for dict subclasses (#98154)
bf22ecba2a : [Inductor] support vertical reduction in cpp (#97644)
35b3309539 : Fix graph break from inline patched init (#98150)
8e5f491623 : [Inductor] simplify CPP backend Tile2D code and support non-contiguous load/store (#97626)
71d850a100 : [inductor] Fallback on complex64 kernels (#98155)
bc9dd969e1 : Support inlining no_grad() decorator (#98121)
96403cfcec : [Easy] Fix lint error on DTensor math_ops.py (#98170)
02179827cb : [Easy] Include SPMD and DTensor files in UFMT checks (#98148)
38609cc47d : TensorExpr eval: fix copying variables from pointers on big endian systems (#96951)
2ab18a23e1 : Update ideep submodule (#97430)
347c67d4a2 : [Easy] Consolidate string startswith checks (#98147)
7fcff01b50 : [reland] switch mean to use reduction linear (#97996)
d9e5ab4606 : Fix graph break from 'hasattr: HFPretrainedConfigVariable()' (#98119)
b9d3b3f595 : Improve support for contextlib.nullcontext (#98111)
92b46202ef : Add --stats option to benchmark scripts (#98109)
e402259b8a : avoid warning in irange for unsigned types (#97973)
2af09393f9 : `masked_scatter` should accept only bool masks (#97999)
bbc4e911c8 : Move CPUReproTests to its own file (#97943)
db8abde9b6 : [MPS] Enable conditional indexing tests (#97871)
e8d39606eb : [SPMD] Enable fused Adam in full train step tracing (#98113)
bccf2ef0ce : Format DTensor dispatch.py and _meta_registrations.py (#98114)
64077ce511 : remove redundant typed StorageImpl::data() member (#97650)
13461e9767 : [inductor] more cuda metrics in wrapper (#97723)
553bb01df9 : [quant][pt2e][refactor] Remove extra arguments of _maybe_insert_observers_before_graph_output (#98029)
2630144786 : Call to mkldnn_matmul from aten::addmm on AArch64 (#91763)
57c6f3fe90 : [vision hash update] update the pinned vision hash (#98108)
5df59f957f : Fix G001,G002,G003 in logs to % syntax (#97812)
7f9533e224 : [Dynamo] Add UserError type (#97705)
ee9a9b7add : Remove old logging callsites (#98095)
7c60d7a24d : Move CudaReproTests to its own file (#97942)
df216b5736 : Disable dynamo tracing torchrec.distributed (#97824)
b89f74aa35 : Mark Vulkan test as unstable (#98106)
7aa010dcc9 : [stronghold][bc-linter] add BC linter suppression by `suppress-api-compatibility-check` PR label (#97727)
6b319d1525 : [dynamo][graph break fix] inplace add for empty tuple (#97923)
7dde61ce46 : [quant][pt2e][refactor] Remove extra arguments of `_maybe_insert_output_observer_for_node` (#97959)
8313b852cb : Fallback getitem fix (#98041)
091177516e : Rearrange the fields in at::OperandInfo to reduce padding. (#98037)
9be9592f28 : [Dynamo] Code refactor: move context managers out of misc.py (#97958)
3c7b2b730f : use libcusolver_lapack_static.a for CUDA>=12 (#98072)
5810f5ad1a : Fix `aten::squeeze.dims` shape function (#98078)
d158545b16 : [pruning] Add gelu to list of supported activation functions (#95618)
8564ed24a8 : do not need to check if element in dict input is Tensor. (#97866)
794f6e50a1 : [PyTorch] Accept string_view in Pickler::pushGlobal (#96402)
fb7b398479 : [FSDP] Do not `_unshard` if already prefetched (#97981)
195b92ab01 : [FSDP][Easy] Minor cleanups to `_runtime_utils.py` (#97980)
adee9423bd : [FSDP][Docs] Tidy up FSDP ctor docs (#97979)
3226ad21cf : Revert "[Reland] fix some MKL detection issues of CMake (#94924)"
0d73cfb3e9 : Retry at test file level (#97506)
3b188c5883 : Don't use subclass when tracing and call wait_tensor immediately. (#98001)
f2127bbf47 : [PyTorch] Add Vulkan support and tests for at::upsample_bilinear2d (#98022)
64b8d20a5c : Fix typos under c10 directory (#98079)
762a2079c7 : [dynamo 3.11] make create_instruction kwarg mandatory (#98032)
089134bf66 : [dynamo 3.11] implement 3.11 linetable (#96509)
14ef91cea6 : [dynamo 3.11] small bug fixes (#96508)
cb4bc8e0f5 : [dynamo 3.11] support prefix instructions MAKE_CELL, COPY_FREE_VARS, RETURN_GENERATOR, RESUME (#96506)
05641b81e5 : [dynamo 3.11] fix jump if (not) none (#96505)
27e06e1a28 : Print test times for pytest in verbose mode (#98028)
d03799f9a5 : optimize the AMP func name in custom_device_mod (#98052)
c699ac17df : [CI] Bump up torchbench version to fix dynamo graph breaks in transformers (#98003)
9e3b34775b : Revert "[dtensor] switch mean to use reduction linear (#97996)"
87f5e92916 : [dynamo] Add guards for deterministic algos (#96695)
864ab93656 : aot_autograd: avoid using intermediate_base logic unnecessarily (#97786)
4e26ad786d : fix load_sharded_optimizer_state_dict error on multi node (#98063)
cb8c0be54d : add StorageImpl::mutable_unsafe_data (#97648)
f4f1a5b5b3 : Revert "Move functional collectives to the right namespace (#97793)"
fa1a8b9f96 : Fix device handling in `nn.utils.rnn.unpad_sequence` (#98042)
1c21cd2213 : [quant][pt2e][refactor] Add input_output_share_observers to node.meta["target_dtype_info"] (#97949)
6b9e22f3f6 : Clarify the saving of intermediates in the "extending torch.func" docs (#98020)
91ad5984d8 : Add script to summarize performance from CI performance run (#97977)
e073979794 : [Quant][FX] Add test case for lowering conv_transpose with kwargs (#97311)
efdd08a8d0 : [MPS] Move impl functions to mps namespace (#97238)
e61b842001 : [Quant][FX] lower functional conv_transpose ops (#97126)
c797c7bc8b : Clean up duplicate function run_test.py (#97914)
675dfd2c1f : Revert "Retry at test file level (#97506)"
3a5ca4bdd4 : [quant][pt2e] Add support for conv bn fusion in et backend config (#97389)
c091aa9a2c : [vision hash update] update the pinned vision hash (#98043)
fe2bdfb2cd : [Executorch][XNNPACK] Quantized mean (#97388)
f78b44b2d9 : [quant][pt2e][refactor] Refactor prepare to remove the use of qconfig in `_maybe_insert_input_observer_for_arg_or_kwarg` (#97948)
f9ca48ddb5 : [Executorch][XNNPACK] Quantized hardtanh (#97387)
ae5b044ccb : [XNNPACK] Enable S8 Operators (#97386)
4befb84d49 : [XNNPACK] Allow VCVT Operators (#97385)
9c3fbe7475 : [BE] Enable flake8-simplify checks (#97984)
3dc4405278 : Add a unit test for negative torch.arange() incorrect numerical behavior with dynamic shapes (#97926)
dc2b7aa955 : [Reland] fix some MKL detection issues of CMake (#94924)
a1dc2b1774 : [BE] Remove bool dtype from `masked_scatter` (#98015)
26a90fb9c2 : using accumulate type to do the computation of mean reduce(CPU) (#97351)
a5b6f10c5d : Fix format bug in NT docs (#97998)
fae28fcdf5 : separate deterministic scatter_add as a helper function (#97922)
99f25c2920 : [Vulkan] Fix divide-by-zero with padded tensors (#97698)
38207a9e53 : [ci][easy] Only print remaining logs if test step ran (#97713)
1b323b313c : [dtensor] switch mean to use reduction linear (#97996)
184bfbc3d7 : Move functional collectives to the right namespace (#97793)
45acfc8574 : Revert "[BE][autograd Function] Raise an error if input is returned as-is and saved for forward or backward in setup_context (#97212)"
c218309f88 : [dynamo] profiler.record_function on all dynamo_timed functions (#96495)
ca135ed6b5 : [PyTorch] Optimize TupleType::annotation_str_impl for small tuples (#97910)
cadccf0daf : Flip switch (#97993)
f2e6b0837a : make triton uses the wheel script now (#97995)
1f85390eb2 : Skip test_batch_norm in test_jit_fuser_te for asan (#98016)
7bb5fb3c6d : [vmap] Fix index_select support when dim is negative (#97916)
7868e4b45b : Revert "Disable dynamo tracing torchrec.distributed (#97824)"
ee1c539ecf : Fix module backward pre-hooks to actually update gradient (#97983)
06d677f41d : [dynamo 3.11] fix push null timing in resume functions (#96504)
5b6e4c48b1 : [dynamo 3.11] properly determine cell/freevar index in bytecode_transformation.py (#96503)
ba52268da5 : [dynamo 3.11] properly copy free/cell vars in eval_frame.c (#96501)
c681c52e01 : [inductor] fix TritonTemplateCaller.__str__ (#97578)
c905251f9f : [dynamo 3.11] fix eval_frame.c debug prints for 3.11 (#96500)
848bf8103b : fix functional collective to not generate getattr node (#97924)
2fddcf0fc0 : [CUDA][CUDA 11] Remove more CUDA 11 version checks (#92934)
90f69cad9a : [inductor] test codegen with dynamic shapes (#96934)
da28af3286 : distinguish mutability of StorageImpl::data_ptr() member (#97651)
35090b869d : set num_warps to at least 4 (#97950)
19706356b5 : Fix TorchScript support in `as_nested_tensor` (#97960)
b235e1f737 : Compare `len(fw_derivatives)` with 0 w/o using `not` (#97953)
97fc8ea5f4 : Run the benchmark suite with dynamic batch only (#97912)
4cce60751b : Move TestIndexingSimplification to its own file (#97941)
94bae36a1f : Fix strip_function_call in GuardBuilder (#97810)
ffd76d11c9 : [fix] take : backward batching rule (#95772)
7d5d5beba2 : Retry at test file level (#97506)
24a5d006f2 : [dynamo 3.11] Refactor create_instruction (#96499)
e6888697c4 : Revisit `torch._six.string_classes` removal (#94709) (#97863)
9ec6fdb29b : Enable adam foreach in full train step tracing (#97897)
19dcf55a6f : [functorch] .data should not work for grad, jvp, vjp (#94817)
96dbca69e6 : Add unstable workflow to upload test stats (#97918)
65e8c14948 : Corrected batch norm docs with the exact computations of the standard deviation (#97974)
cdb32dad3d : [minifier] cuda.synchronize to better detect IMA (#97962)
0e4ddc2b40 : NT: Refactor for lazy computation of opt_sizes (#97895)
47dca20d80 : [BE] Enable flake8-comprehension rule C417 (#97880)
1d08b5b103 : [fx] Replace literals with placeholder helper (#97683)
19162083f8 : Improved perfs for vectorized bilinear interpolate cpu uint8 RGB-case (channels last) (#96848)
379fb47654 : [SPMD] Support foreach optimizers with functionalization (#97853)
0f3ffaf798 : extract torch.proto to its own library (#97614)
428cb3a868 : distinguish mutability of untyped StorageImpl::data() member (#97647)
0770ad3cae : extract caffe2.proto to its own library (#97613)
5ab50cf048 : Fix shoud/shoudl typos (#97930)
7554c10899 : Fix typos under tools directory (#97779)
5a81508bb6 : Add NestedTensor ops: logical_not, logical_not_, masked_fill (#97934)
f92cae4849 : Fix a grep-itself bug when checking for GPU healthcheck (#97929)
b093dfaefa : Revert "Fix a grep-itself bug when checking for GPU healthcheck (#97929)"
7776653a0c : Add linear gradgrad (#97151)
15271d353a : [quant][pt2e] Support convtranspose + bn fusion (#97933)
f7fe6e148e : [test] Make environment variable name better (#97356)
53c9bc8c68 : Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack … (#94968)
88234540e7 : Fix typo under torch/csrc/jit/tensorexpr directory (#97218)
721260e966 : [3/n] Consolidate `replicate` and `DDP`: update `replicate` to reuse functions in `DDP` (#96660)
af0264ae08 : [BE] Pass `-faligned-new` if supported by compiler (#97887)
a95815c6b7 : fix compiler version detection on MacOS (#97883)
1432a893ef : Fix issue with single input cat (#97822)
7fc100a290 : support random for custom device (#97420)
3eecca764a : Skip test_cpp_wrapper on mac (#97911)
f40b2ed59c : Fix a grep-itself bug when checking for GPU healthcheck (#97929)
b23cfe5465 : [Inductor] Remove fb custom ops dependency (#97907)
2f6c18d1a2 : improve memory footprint of torch.testing.assert_close (#96131)
8e5c5d2023 : Revert "Propagate dynamo shape_env to make_fx (#96437)"
3460b2b7d3 : Add support for pin memory on custom device. (#97621)
f603873c1b : add various NT ops needed for testing (#97837)
2b56da139c : [kineto] init kineto_activity for each event (#97550)
47ce41e732 : [dtensor] remove DeviceMesh typing hack guard type imports (#97889)
aa4ea6e1f3 : [cuDNN][cuDNN V8 API] Fix incorrect use of `emplace` in the benchmark cache (#97838)
35be579701 : Refactor TENSOR_MATCH guards to check dim (for NT support) (#97896)
04ca3a289d : Disable modes in preserve_rng_state (#97738)
3a22916c7a : Propagate dynamo shape_env to make_fx (#96437)
7257de6eac : Fix typos in torch/fx/_compatibility.py (#97618)
2f86c9bc0b : Update query version for update_expected.py (#97898)
099b2801db : Stop runner service when its GPU crashes (#97585)
2806fa4470 : Use the latest NVIDIA driver from setup-nvidia (#97840)
b93e1f377e : [dynamo, benchmarks] Add inductor-mode (for max-autotune) and warm start options to dynamo benchmarks (#97719)
942e587d40 : [SPMD] Make compile cache the compilation result and add option to perform transformation (#97836)
d70f9c7888 : Fix typo under torch/csrc/jit/runtime directory (#97243)
1f71ac785c : [RFC][inductor][index_put] fallback to aten in torch deterministic mode (#96898)
e6909f6ccc : [Dynamo] Fix for tuple construction from tuple iterators (#97862)
477f3f555f : Simplify by using yield from (#97831)
22b723132b : Update ufmt to v2.1.0 (#97900)
e626be79a4 : Add config setting to error on recompile (#97829)
bb40b62501 : Delete fusions_possible counter (#97881)
313db584f3 : [BE][autograd Function] Raise an error if input is returned as-is and saved for forward or backward in setup_context (#97212)
4114c1ea02 : Revert "[dtensor] remove typing hack of DeviceMesh (#94526)"
8372c5dc68 : Refactor dynamic dims api, stateless internals, higher level export API (#96699)
2c16b73a1b : Remove comma from parametrized test name (#97844)
44e73db3c2 : [2/n] Consolidate `replicate` and `DDP`: split `forward` function (#96658)
170a1c3ace : [ONNX] Fix typo "scipt" -> "script" (#97850)
2ce6ad9aa9 : [inductor] make `run_and_get_cpp_code` signature match `run_and_get_triton_code` (#97826)
4ae4c6f68a : Fix typo when setting FSDP state dict config (#97110)
004bb34f42 : inductor: fix vision_maskrcnn dynamic_shapes error on CPU (#97312)
2f06fc2422 : prepare doc preview s3-prefix for future change (#97433)
faccd87658 : [NNC] Fix the issue that the void** could not store a scalar if the bit width of the scalar is greater than 32bit on a 32bit platform (#97669)
6871665a97 : Avoid copies in matmul (no ghstack) (#97355)
46faa79e09 : Simplify by using yield from in torch/utils/data (#97839)
f388bec985 : [Dynamo] torch.Generator state should have a source and be reconstructed properly (#97403)
9d1d95099b : Disable dynamo tracing torchrec.distributed (#97824)
f4ac8e0052 : Add dynamo config skip_nnmodule_hook_guards (#97830)
91166ef7e7 : Remove rocm python 3.11 restriction (#97818)
f754be897a : Disable speedup_experiment_ds (#97806)
60631aefe5 : Disable test_variable_sharing on ASAN due to non-deterministically hang (#97742)
9e2e345af7 : [inductor] avoid kernel cache miss because of different arg name (#97755)
5949d86bec : [Easy] Remove unnecessary graph lint (#97815)
70b063db0e : [dtensor] remove typing hack of DeviceMesh (#94526)
8a45befcec : [reland] add numpy typing plugin to mypy config (#94525)
2490ac561f : Propagate inductor guards to ShapeEnv (#97777)
597b558c51 : [BE]: Update flake8 and plugins and fix bugs (#97795)
7282be3d91 : Patch for nvfuser build (#97404)
e0a647d8b5 : new pin (#97278)
bc86af0d37 : Remove DeferredIndentedBuffer (#97616)
c92dfe2694 : [Vulkan] Add convert_qconv2d_context op (#97714)
662a8cf74d : [FSDP][8/N] Simplify addr padding internals (#97796)
aee96e2cb3 : Revert "[inductor] Refactor cpp_wrapper to be an attribute of GraphLowering (#97709)"
dc3d6fe6b0 : [jit][easy] add missing quotes in namedtuple forwardref tests (#97736)
79d2a8dd9e : [PyTorch] Second try: use c10::FastMap for memoizing in Pickler (#96688)
d4829bd6c7 : Use remote master as the linter merge base (#97800)
c39f1c1490 : Allow DTensor to trigger collecives before inplace ops (#97787)
35a13a593e : Revert "Updates NCCL to 2.17.1 (#97407)"
b443198966 : Fix sparse addmv ref impl for non-contig tensors (#97730)
bb42104fe8 : [DataLoader] Fix collation logic (#97789)
ae3316c16e : Update CODEOWNERS for torch data (#97797)
fee1407c8d : [xla hash update] update the pinned xla hash (#91874)
fb7f983357 : Graph break on operators that fake tensor doesn't support (#97708)
08f125bcac : [ROCm] Remove usage of deprecated ROCm component header includes (#97620)
4afef85dda : [MPS] Fix index_select_scalar test (#97773)
196acc84b1 : [UX] Advise users to rebase-and-merge for stale PRs (#97808)
b113a09ef9 : Updates NCCL to 2.17.1 (#97407)
8289120ef0 : Revert "test/test_torch.py: fix TestTorch::test_from_buffer test (#96952)" (#97759)
2ef6ffdfa1 : Revert "[BE][autograd Function] Raise an error if input is returned as-is and saved for forward or backward in setup_context (#97212)"
6854fd7189 : Add Config to Skip Cpp Codegen, Enable in FBCode (#97204)
c0e0fbb6e1 : inductor: fix _dynamic_reshape_indexer issue when tail index is sym (#97502)
b895a0a675 : [BE] Move flatbuffer related python C bindings to script_init (#97476)
d8cc8ffebc : [DataLoader] Short circuit pin_memory recursion when operating on bytes (#97737)
1a2dcff127 : Added ModuleInfos for remaining activation functions (#97704)
dbd41cfa91 : Add arm tests to mps workflow (#97279)
9eea9d21a4 : Update ONNX submodule from ONNX 1.13.1 with Protobuf 4.21 updates (#96138)
22e3f67cd2 : Update vision pinned hash (#97706)
8710dc8d5a : [inductor] Refactor cpp_wrapper to be an attribute of GraphLowering (#97709)
2b369eb3c2 : [fix] jacrev and jacfwd : support non-tensor args again (#97746)
1c83888be8 : [memory profiling] show pre-existing memory in trace_plot (#97590)
b1a83c4da4 : [memory history] cleanup recording API (#97406)
9e029f44b5 : [EASY] Fix test that does nothing (#97722)
0176fb4cd6 : Remove fast_nvcc entry in README.md (#97624)
26c5e34b47 : Re-enable ProcessGroupMPITest in CI (#97687)
428540001d : Add shape function for squeeze.dims op (#93919)
b2f1edabfe : Renaming all_known_overloads to all_py_loaded_overloads and add comment (#97672)
bb85b43c0b : Move test_cpp_wrapper to its own file (#97634)
c785f1903a : Add dynamic shapes to perf dashboard (#97673)
08766b23de : [Quant][FX] lower ConvTranspose3d (#97125)
a8065cc61f : [dynamo] simplify `get_item_dyn` (#97637)
867b07b424 : Sampler API described for customization. (#97338)
100b396b9b : [Pytorch][coreml]Pass backend and modelid by value (#97566)
5aa4046743 : [ONNX] Remove the `_jit_pass_onnx_scalar_type_analysis` pass (#97729)
8624a2e88a : Include missing header (#97453)
6df18260fc : Fix typo in error message (#97716)
c1a6dde79e : Make dynamo-FSDP skip guards (#97463)
e9050ef74e : explicitly list out caffe2 protos (#97612)
1726c6f7a7 : [fix] vmap: fix segfault on data access (#97237)
403905a37b : cleanup caffe2/proto package (#97601)
5e6e984835 : flake8 version reporting in collect_env (#94573)
f3aca45a16 : [BE][autograd Function] Raise an error if input is returned as-is and saved for forward or backward in setup_context (#97212)
c7fa648ea1 : move caffe2/proto/ to its own Bazel package (#97600)
e1f44ee3b3 : [inductor] correctly setup constant in the wrapper (#97571)
b756fd98bb : Fix NumPy scalar arrays to tensor conversion (#97696)
b2be14bcca : Fix missing extra_traceback in InterpreterShim (#97615)
08c1d1a871 : [dtensor] set cuda device automatically, and refactor error handling (#97583)
e9c4904915 : [dtensor] remove custom dispatch op (#95629)
342ed0372f : [DCP] Expose create_read_items_for_chunk_list helper. (#97570)
1c15cd48e2 : [FSDP][7/N] Add alignment padding for `use_orig_params=True` (#97667)
b9049a7f11 : [FSDP][6/N] Rename param/module name helpers for clarity (#97666)
30a6ed34a0 : [FSDP][5/N] Lift `FSDPParamInfo` to use `FlatParamHandle` (#97665)
5d554ca26f : [FSDP][4/N] Document `use_orig_params: bool` (#97664)
c622559968 : [FSDP][3/N] Minor fixes (rename, assert message) (#97663)
a27882ecd1 : [FSDP][2/N] Rename "flattened parameter" -> "flat parameter" (pt. 2) (#97662)
bd979737cd : [FSDP][1/N] Rename "flattened parameter" to "flat parameter" (#97661)
2bca64ae28 : [Vulkan] Merge upsample_nearest2d and quantized_upsample_nearest2d (#97467)
a9a81ab7e3 : [CI] Run benchmark test with dynamo_eager in periodic (#97543)
82592f7e53 : remove dead torch_pb.h library (#97599)
a283c15e34 : Added ModuleInfos for {*}LU modules (#97375)
1c3ec7c4c5 : [eazy][inductor] fix typo in mm max-autotune log (#97486)
32fdd44577 : SymIntify maybe_multiply (#97675)
35c9ea89fa : dont bake in defaults when tracing *_like factories (#97564)
2ca911f2ac : make_fx, make pre_autograd a kwarg (#97559)
6c450c7880 : Allow -ic when no pending jobs (#97707)
652592efa9 : [inductor] use torch.prifiler in the triton wrapper (#97405)
6c43e9fdbd : Run _calculate-docker-image on 2xlarge with a larger disk space (#97551)
7d94493392 : [easy] Update xla hash pin merge rule (#97700)
5d33596c5f : remove dead proto_convert library (#97598)
35fd5c548e : Fix typos under torch/distributed directory (#95638)
8313becefa : With Chillee's permission, add me to all Chillee's diffs (#97632)
6430dad700 : Apparently aot_function doesn't cache anymore (#97610)
b24052b1d9 : Make test_binary_shape_functions actually test the ops (#90566)
b66a121c5e : [Vulkan] Fix broadcasting in quantized elementwise ops (#97554)
5da86bbb68 : Add decomposition for aten.squeeze.dims op (#97020)
91cce4c09a : Sort: Use cub::WarpMergeSort for small sorts (32 < n <= 128) (#96223)
236bac811a : Add ModuleInfos for Adaptive{Max/Avg}Pool ops (#97291)
8177081848 : Add gather to MTPG (#97555)
759e527ea1 : Use internal symbolizer for FBCODE (#97172)
008be795ce : run buildifier on the root BUILD.bazel file (#97611)
bbc7c79b20 : add device checks for sparse csr (#97520)
96e3b3ac72 : [BE] Cleanup CMake flag suppressions (#97584)
345714e372 : Upload merge records to Rockset (#97471)
2ea097071a : fix device type bug for custom device (#97213)
fcc312e945 : [BE] Update flake8-comprehensions to 3.11.1 (#97671)
4d2611375b : Fix typo in throughput_benchmark. (#97619)
97711ac6db : [CI] Reduce perf nightly run frequency and bump up its timeout limit (#97682)
6db196b744 : Specify the head branch when upload perf stats to Rockset (#97643)
9d37cefcb0 : Resubmit _int_mm (#96685)
5f88d86142 : Remove hacky python dispatcher fallthrough (#96635)
a6bc1f3a9f : Dynamo size dim kwargs (#97450)
8275e5d2a8 : [cpp_extension.py] fix bogus `_check_cuda_version` (#97602)
a1ada050f8 : do not insert to_dtype for memory copy only buffers (#97147)
e1f153f3b1 : Add support for copysign operator in functorch (#96018)
d0abc31428 : Remove unnecessary retain_grad call from gradcheck (#96923)
51c3fd39a5 : Modify all calls to checkpoint pass use_reentrant explicitly (#97376)
38da54e9c9 : Split rnn primitive for inference and training (#96736)
e3df6a7c8a : [Dynamo] Unspec int list if enabling dynamic_shapes (#97557)
542fb0b1fa : Specify file encoding in test_torch.py (#97628)
b73e8cd4fa : [BE] Use nested namespaces in sparse (#97581)
461f088c96 : add -std=c++17 to windows cuda compilations (#97515)
4c0dce50fd : [BE] Apply ufmt to run_test and GitHub Python util scripts (#97588)
f09347a9f1 : [inductor] Fix broadcast of random seed in mm epilogue (#97591)
4f2ac8abac : Fixes double printing of code in debug mode (#97608)
dc45ad7024 : [inductor] support SymPy exprs in `reflection_pad2d_backward` lowering (#97604)
9585a7ffd3 : [inductor] support non-tensor ops with dynamic shapes (#97519)
13dcf635e0 : Dynamo stride dim kwargs (#97444)
233742cb2f : Add accuracy tests for traced optimizers (#97577)
1b08a01361 : Default to aot_eager for torch.compile on MPS (#96980)
75fb0b6c9f : Enable full train_step tracing and customizable dist graph expansion (#97416)
e67b58105a : Enable lowering to inductor (#96927)
3b1b585a59 : [FSDP] Fix bug in determining whether parameters need to be materialized (#97488)
14177f0d3d : [BE] Make `USE_FLASH_ATTENTION` private (#97579)
5e014bfbbd : [vmap] ldl_factor: batch rule (#97518)
f89af60183 : Rewrite NCCL watchdog to more reliably throw timeout (#97066)
ee934fd633 : Use unordered NEQ comparison for vec512 operator!= implementations (#97466)
c757647dd8 : [Better Transformer] make is_causal a hint and force attn_mask to be set on `is_causal=True` in F.MHA (#97214)
2e8086b0a1 : Add myself to nn codeowners (#97277)
0781188e64 : [NCCL] Cleanup NCCL-no-record streams, move to `TORCH_NCCL_AVOID_RECORD_STREAMS` (#97053)
021de486ff : [Easy] Apply black to format _spmd files (#97534)
a8f7e0b213 : [Easy] Improve error message for meta_mm (#97533)
b32afbbdb6 : [Kineto] Improve Config Options Part 2 - update to new Kineto Submodule (#97556)
129e03905d : disallow invalid value ranges in torch.testing.make_tensor (#96334)
47bfb192a7 : deprecate low==high in torch.testing.make_tensor (#96333)
76fb9a1c7f : fix low and high in torch.testing.make_tensor for integral inputs (#96124)
779cd1f15b : only apply domain eps for floating and complex types (#97010)
9029361f24 : honor low and high for torch.bool in torch.testing.make_tensor (#96332)
7602aade0f : fix random mask creation in test_maskedtensor (#97017)
303eb37e38 : QoL improvements for torch.testing.make_tensor (#96125)
090af4aa71 : add proper tests for torch.testing.make_tensor (#96331)
dbe6da797a : Revert "Sort: Use cub::WarpMergeSort for small sorts (32 < n <= 128) (#96223)"
85885301fd : fix ignored qualifiers errors (#97443)
39c8188194 : Inductor: fall back bernoulli on cpu (#97002)
2b75955c9f : [CI] Add missing --cold-start-latency for the dashboard run (#97547)
95c166cd3d : Add `is_causal` API for `TransformerDecoder` (#97166)
92605ee776 : Support per channel tensor with unpacking in QNNPACK (#96268)
c5135ff2a6 : [DataPipe] Fix missing imports in DataPipe interface file (#97458)
827b2aee97 : Warn once on dynamo module w/ hooks (#97535)
197434df96 : [Kineto] Improve Config Options for Input Shapes, Memory, Stack, Flops, and Modules - Part 1 (#97380)
cf0ba1b9c0 : Use L1 loss for Smooth L1 loss with beta=0 (#97022)
17567e5b29 : [pytorch@arvr/windows] Fix pytorch build/import on Windows @ ovrsource (#97193)
baf71a8aad : [ROCm] Update clock intrinsic handling for AMD gfx11 family (#97005)
5170995b2a : Revert "Upgrade NVTX to NVTX3 (#90689)"
a96ccaa362 : Code update for vectorized interpolate cpu uint8 (#96847)
4ff71c91d3 : backport std::ssize to c10 (#97442)
b5edf18334 : `GradScaler` recomputes `optimizer_state["found_inf_per_device"]` before `optimizer.step` (#97415)
6fcd671574 : Complex support for expm1 (#96644)
1b8b82f835 : [ROCm] Update magma commit for ROCm (#97491)
c55d1a6049 : [CI] Experiment with a newer CUDA driver (#96904)
622a11d512 : Fix typos under torch/utils directory (#97516)
d305d4a57f : [Dynamo] Fix TIMM benchmark compute_loss (#97423)
5f5d675587 : remove unused CAFFE2_VERSION macros (#97337)
605a77fd59 : Log FSDP mixed precision (#97367)
51ce02232b : [ONNX] Support converting fx graph with symbolic shape to ONNX (#96350)
bcff4773da : add /std:c++17 to windows compilations when not using Ninja (#97445)
6e46f47227 : [inductor] xfail tests by default (#97331)
36d64760d9 : Disable inductor developer warnings in official releases (#97451)
73fadd523b : Use a single stream for cuda graph pool (#97419)
b11ce4bbca : Bring back tensor_has_compatible_shallow_copy_type (#97455)
f25cdf8aeb : Revert "Rewrite NCCL watchdog to more reliably throw timeout (#97066)"
ad5d81adda : [Sparse] Add reference implementation for addmv (#97353)
31e858e8fc : Add missing aot_autograd_arg_pos_to_source (#97487)
9320cae1da : Add GPU frequency lock option to inductor workflows running on A100 (#97465)
fa4c77e39b : Rename PyOperator to HigherOrderOperator (#97493)
763c5a33e7 : [Vulkan] Fix quantized cpu to vulkan broken by padding (#97372)
a66625da3b : [PyTorch] Optimize DictType::annotation_str_impl (#96498)
000cfeb848 : [PyTorch] Optimize TupleType::annotation_str_impl (#96497)
33dfdedb28 : CUDAGraph Trees - Warn on dealloc (#97171)
24e280d5e2 : clean up triton mathlib (#97460)
bb74d04353 : Remove inductor-perf-test-nightly label (#97290)
63e1f12b49 : Speedup bincount and histc on CUDA (#97090)
f3cf3d7620 : [DTensor] Fix the default PG condition for DeviceMesh (#97384)
e4b365a9a0 : Use a equal operator that don't depend on nonzero for flatbuffer_serializer (#97298)
12da0c7037 : Revert "remove dead torch_pb.h library (#97323)"
b531eb974a : Revert "move caffe2/proto/ to its own Bazel package (#97324)"
91a3040b4b : Revert "cleanup caffe2 cc_proto_library (#97325)"
0d66db1b2a : Implement last dim split_with_sizes for NT (forward only, non-SymInt-ified) (#97446)
37f7c13b7b : [ci] disable some dtensor tests (#97358)
13fbf93238 : Revert "remove dead proto_convert library (#97322)"
95e8d0c39e : Rewrite NCCL watchdog to more reliably throw timeout (#97066)
416bac5b81 : [Vulkan] Fix static analysis errors in vulkan_quantized_api_test.cpp (#97400)
c2d7508276 : [DTensor] default value for DTensor ops on non-participating devices (#95852)
103f4c99f0 : [DTensor] implement aten.equal sharding prop (#97170)
5f57b36318 : Rename torch._inductor.triton_ops.autotune to torch._inductor.triton_heuristics (#95558)
f0649d4723 : update flatten.py docstring (#97276)
a3b30c5025 : update internal triton (#97422)
29c061bb90 : Remove non existent files in multigpu tests (#97393)
4a88f71f65 : Fix potential naming clash when writing traces with tensorboard_trace_handler (#97392)
d499b7d750 : [inductor] Fix a multi-gpu context error (#97398)
7711d24717 : vmap support for linalg.lu_factor (#94328)
bdaf402565 : build C++ extensions on windows with /std:c++17 (#97413)
feace5d66f : [inductor] handle integer `Symbol`s in `is_integer_type` (#97217)
a331cd4314 : [inductor] fix cpp legalize bf16 reduction (#97228)
580b4702bc : [FSDP][optim_state_dict] Consolidate the arguments and logic of optim_state_dict and optim_state_dict_to_load (#96534)
1fb1c6e135 : Retry download and install NDK when testing Android (#97067)
37faa48844 : DCE inference graphs too (#97275)
fbc803df0c : Only warn once for TypedStorage deprecation (#97379)
b507d7d798 : Fix Device Idx Setting (#97399)
5d8c7e7ea4 : Sort: Use cub::WarpMergeSort for small sorts (32 < n <= 128) (#96223)
3b54592050 : [PyTorch] Add annotation_str benchmark (#96496)
a34d35d569 : [vision hash update] update the pinned vision hash (#97396)
62ecfa8b79 : Fix typo under torch/csrc/jit/passes directory (#97222)
603a32c964 : cleanup caffe2 cc_proto_library (#97325)
35439e8610 : [Inductor] add guards to guarantee vector int32 only used by comparison ops (for masked load) (#97144)
c5b65032ac : Restore ROCm trunk jobs (#97354)
a74ecaf0f6 : Revert "Retry download and install NDK when testing Android (#97067)"
da7c42f89a : Uninstall PyTorch after testing on non-ephemeral Windows runners (#97285)
e64ddd1ab9 : Upgrade NVTX to NVTX3 (#90689)
4b75583052 : Add autocast_test_lists.py to the merge patterns (#94381)
4610ce49f6 : Fix typo under torch/testing directory (#97254)
788300cc2a : [cudnn] Support v8 API in fbcode (#96512)
fe0afc5852 : use accumulate type in BF16 gemm(include dot, mv) ref path (#96074)
b45880c537 : Optionally ignore utf-8 decoding error when converting std::string to python str. (#97282)
a524123c91 : [torchgen] Bump native function max namespace levels due for internal use case (#97381)
13ca08435c : [test_foreach] add cases of zero size tensors (#95028)
116a4f2301 : linemaps for inductor: python 3.9 and lower doesn't have bisect key argument (#97369)
3303f5447a : [inductor] use real data for cudagraphify (#97363)
a1edf5f63c : [EASY] Do hook sizes check with SymInt (#97362)
5425191f57 : Update xla pin merge rule for python3.8 (#97371)
bc268284de : [ci] Onnx test 3->2 shards (#97383)
191a2322f0 : [WIP][Stronghold] Integrate python API BC-linter from test-infra (#96977)
712bd9ae88 : Upload failed and rerun tests (#97304)
545abc292b : [aot autograd] refactor to make functionalization self-contained (#96341)
e8a722b9cb : Fix missing dynamo cache lookup registration in profiler.profiler (#97305)
ec54f186fe : Add an issue template to disable CI jobs (#97045)
5cc2e4d7c9 : [10/N] Remove ST init ops (#96985)
11114ab8be : rename to need_attn_weights to match elsewhere (#97102)
7a8b691388 : Make early stop the default for checkpoint and expose a way to disable (#96866)
546835c45a : [9/N] Remove ST multiple ops (#96989)
5d5f43abea : [prims] Fix schema of minimum_value for a primitive operation (#97327)
726fc366a2 : Add missing __main__ in two unittests (#97302)
28929b1205 : Add `as_strided_` to tensor docs (#97300)
a7856e18a7 : Revert "DCE inference graphs too (#97275)"
d779dadda1 : Remove stack trace captures from import (#97274)
9c144bc4fe : Dont increment generation if forward of backward exists, and warning on deallocation of live tensors (#97168)
9370f253e3 : [inductor] Rewrite convolution triton templates (#95556)
da96ae230b : [CI] Add a missing dtype flag in nightly perf run (#97357)
73b7702b7e : Revert "FIX make sure we import the correct object from multiprocessing (#81862)"
6273c0af95 : move caffe2/proto/ to its own Bazel package (#97324)
364d92f9b6 : remove dead torch_pb.h library (#97323)
89d116d961 : [BE][docs]Improve and update checkpoint documentation (#96862)
0f424f7f05 : Fixed broken link to troubleshooting.html docs page (#97330)
a133b5081c : [JIT] Partially support ForwardRef type annotations for NamedTuple attributes (#96933)
d850c33bfe : remove dead proto_convert library (#97322)
5537792307 : [dynamo] handle dim in size kwargs (#96992) (#97098)
9d5ac03b9a : Deprecate gradcheck check_sparse_nnz argument as duplicate of masked argument (#97187)
cff4826f28 : pytorch_unet is now passing (#97309)
be49d3b170 : [CI] Turn on debug logging for dla102 and gernet_l (#97307)
c37ab85d96 : Improve TORCH_LOGS settings error msg (#97264)
aab34a476f : inductor(cpu): support mkldnn packed linear to improve bfloat16 performance (#96954)
e49b4d3827 : Changed logging in aotautograd a little (#97289)
4ab1588d99 : Enhance error message for dependency check (#96642)
f6bafcde6f : Added current buck target as minifier dep (#97183)
a6d8c70933 : Init quantization backend config for inductor (#96476)
517a432d6e : [Inductor] Enable CppWrapper to support BF16 (#97089)
573b2deb4b : [Inductor] Fix the issue that cannot pass lint check for debug mode (#97249)
37e1d85848 : [Inductor] Load a BF16 scalar and broadcast it as a float vector (#97070)
c5d7ed9423 : [Inductor] Fix the issue that cannot pass lint check for debug mode (#97249)
b72bddabe9 : Move empty check to the start of _pack_padded_sequence (#94885)
f9a9a88812 : Remove chhillee from autoreview (#97293)
db15d191b6 : Update NestedTensor add to support non identical striding for NT+NT (#97195)
4733de18fd : [Inductor] Add debug logging to explain reasons of disabling vectorization (#97108)
c1025af012 : [Dynamo] throw better error message if assert with non-string message (#97297)
57c13fde18 : Test and fix guard fail message in CompileProfiler (#97055)
1e4e256790 : Mention pytorchbot command on label error (#97267)
688427b5ae : Add sympy to binary linux test - fix conda nightly (#97281)
c7fad13310 : [Dynamo] Support nn.Module.named_children (#97216)
aa3a57b80d : DCE inference graphs too (#97275)
3282030fa4 : [inductor] reusing autotuning sub-processes (#97219)
0b094ca37f : Add gradcheck_nondet_tol to a few padding moduleinfos (#97265)
af440c427b : [draft for discussion] add per-dispatch key modes (#97052)
793cf0cbb0 : Fix dispatching issue of the new device type. (#97273)
2b32a74ab0 : moving nvfuser benchmark to third_party/nvfuser (#96725)
a1ef0be30c : [BE] Remove spurious semicolon in XPUHooksInterface.h (#97296)
6dded5d63e : Fixes warning to refer to SMs instead of Cuda Cores (#97224)
47f18b78ec : leave libdevice name for fbcode (#97257)
9a18968253 : Fix kDefaultTimeout multiple definition build failure (#97270)
e7d9331688 : [inductor] hoist symbolic padding expressions (#97099)
b615b7ef9e : use a proper cc_library for the miniz library (#96957)
d9b289b747 : Retry download and install NDK when testing Android (#97067)
19b5b67bc5 : exclude all generated files from torch_headers (#96956)
d785d0c0a1 : [reland][inductor] do benchmark in sub processes for max autotuning (#97215)
b759134152 : update Bazel to the latest release 6.1.1 (#96955)
ea9194a4f2 : [inductor] Make the original ATen info dumped in alphabetical order (#97261)
01885cea43 : [Typo] mulithreading_enabled => multithreading_enabled (#97054)
b04363ead4 : [easy] Expose documentation for a few global nn.Module hooks (#97185)
7a93865c46 : Fix regression on loading jit module from flatbuffer (#97190)
de2230baa7 : [dynamo] Improve error message for missing backend (#97255)
ec3894ec0a : Fix typo in settings regex logging (#97245)
77e73b9b7a : Refactor NT offsets metadata to be a Tensor (#96909)
22ea21da3d : Change 1D Tensor of 1 element to 0D Tensor (#96994)
c47cf9bc7f : Update parallel_apply.py for assertion error when len(modules) != len(inputs) (#94671)
a6bbeec2e1 : Fix required template (#97247)
dbb31672b2 : Fix the compatible issue of the Dynamo and the PyDev.Debugger. (#96721)
b95896c578 : [CI] Fix perf_nightly output file naming error (#97263)
acd9df8a72 : [inductor] Add scaled_dot_product_attention to fallback kernels (#93339)
0a2b527abe : Update mkl_verbose return value check due to API change in mkl (#96283)
244736a5a5 : Mark ROCm tests as flaky (#97259)
5d3c347bf6 : Make split reduction warning only emit once (#97112)
701cdbb6a5 : FIX make sure we import the correct object from multiprocessing (#81862)
4e054175d6 : Fix uniform returning end point for BFloat16 and Half (#96962)
5acf403088 : Run functorch tests in default shards; delete functorch-specific shards (#96464)
b004819f91 : Re-enable TestJit.test_profiler (#94391)
2c588b3ad5 : Allow new_full's fill_value argument type to be complex (#91345)
38b687ed4d : [PTD][Checkpoint] Add checkpointing support for DTensor submesh (#96802)
a9b9fd90a2 : [Inductor] index_put - unsqueeze indices[0] if self and indices[0] are not broadcastable (#97105)
141a2ebcf1 : Clean up Compilation Profiler (#97029)
f9ce593267 : Extend aot autograd dedup guards to params, stop using positions (#96774)
e8be6d813b : [Quant][FX] Fix issue of lowering weighted functional ops with kwargs (#95865)
7beac103ee : [PyTorch] Remove unnecessary unpickler.h #include in jit/serialization/import.h (#96687)
d2f5722996 : [ONNX] 'Transform' as base class for passes (#95935)
45296f87ec : Fix for verify_dynamo on ROCm (#97013)
ee6b19bd4c : Error only if autocast actually enabled (#96097)
cc0701e5b3 : [inductor] Move fx-fusion tests to a separate file (#97028)
695d98b0bc : [inductor] Allow `tensors` kwarg in sink_cat_after_pointwise (#97019)
e20e5f5578 : [RFC] Add an API to remove autograd hooks from DDP (#96490)
fa82080016 : Don't run fallback if symbolic sizes in fake tensor (#97148)
adcd1b3077 : inductor: support profiler_mark_wrapper_call in cpp wrapper (#97119)
50ed38a7eb : Fix typo under docs directory (#97202)
793cb3f424 : [FSDP][optim_state_dict] Print out more useful error message for optim_state_dict (#96860)
f5612758d8 : [SPMD] Make the IterGraphModule less verbose and more profiling friendly (#96969)
9c288b992b : minor spelling fixes NestedTensorImpl.h (#97103)
a269e5fa04 : Add forward and backward support for silu to NestedTensors (#97181)
9a5fed1bd0 : Harmonize BCELoss example to F.binary_cross_entropy (#95178)
252c6f25e0 : Update vec256_complex_float_vsx.h (#95658)
c089c6bf15 : update triton pin (#96730)
485cc7515d : [Inductor CI] Fix concurrency cancellation rule of inductor-perf-compare job (#97197)
ea6113ea20 : Update loss.py (#95367)
b1e8f2fc11 : Update torch.fx docs (#97058)
663e7c9eeb : Fix TestBufferProtocolCPU::test_byte_to_int_cpu test on Big Endian (#96424)
270b42d279 : Fix test_schema_check CUDA illegal memory access (#97062)
c848a777e8 : DOC: Various typo fixes (#97095)
8a6e28ccd3 : Fix typo for generator. (#97136)
13398d8b95 : [inductor] improve bandwidth computation (#97057)
6b691b99da : add amp support for custom backend (#96188)
a37b4fa03a : [mergebot] An ignore current flag (#96756)
aacbf091db : Allow fused optimizers to call _foreach_zero_ in zero_grad (#97159)
1c40ce4f19 : handle SymInt shape/input when debugging in dynamic shape (#96645)
100641aadf : [MPS] Fix torch.eye unsupported bool constant on macOS 12 (#97027)
16e7e5a24b : [dtensor] lazy init process groups in device mesh (#96700)
ead5186462 : [CI] Change tests used by the new dashboard (#96986)
bda9d7ba73 : [pytorch][2/3] Pytorch profiler permits CPU events with CUPTI Range profiler mode (#97048)
16d85160d5 : Fix standalone compile for op with multiple outputs (#96936)
4a99b4f12b : enable Half for cat serial kernel (#96021)
dba9487324 : Add helpful pretty pretting summaries to torch for lldb debugging (#97101)
5471621497 : [BE] Remove unnecessary dict comprehensions (#97116)
be0b415a5a : [ONNX] Set shape/type into torchscript (#96349)
722c4e59a4 : Replace source check with assert (#95640)
c8030b5406 : Revert "Update mkl_verbose return value check due to API change in mkl (#96283)"
e74c5e5637 : rexnet_100 is disabled for static, does not need dynamic listing (#97100)
5d33f9cddb : Revert "Fix standalone compile for op with multiple outputs (#96936)"
90537a779c : Update FlashAttention to work with sm90 Gpus (#97051)
37cde56658 : Fix standalone compile for op with multiple outputs (#96936)
c1214ce5c2 : Update mkl_verbose return value check due to API change in mkl (#96283)
5ee5a164ff : [aot] disable inference view tracking (#96478)
4805441b4a : [dtensor] remove unused tests and fix ci (#97064)
a5923ab3f3 : Revert "[inductor] do benchmark in sub processes for max autotuning (#96410)" (#97075)
a1c46e5f8f : component-level configurable logging for dynamo, inductor, aot (#94858)
086ce765a5 : Add new parameter `materialize_grads` to torch.autograd.grad() (#97015)
34256bc730 : [inductor] do benchmark in sub processes for max autotuning (#96410)
b132220309 : Update MHA doc string (#97046)
915cbf8208 : [Inductor] Eliminate redundant to_dtype node (#96650)
679dec847e : Use is_available instead of device_count to check for CUDA availability (#97043)
c62fc81cc5 : Increase the timeout value for linter calculate-docker-image (#96993)
b390e7037e : [docs] passing LogSoftmax into NLLLoss (#97001)
410210b351 : Remove obsolete "merge -g" flag from update_commit_hashes.py (#97033)
db2c1ea8c8 : Re-enable test_ops_jit on Windows (#96859) (#96931)
a4c706bcbc : [dynamo][dashboard] fix triton clone step in dashboard (#96623)
4a90aca60d : Make keep-going work for more than linux (#96974)
b59a60ddff : Fix CPU bitwise shifts for out-of-limit shift values (#96659)
dd9ade6377 : Remove unnecessary items() call in zero_grad (#97040)
98a5cf090d : [SDPA] Remove the chunk_grad from mem-eff attention (#96880)
d4b8ed2b11 : Fail fast when dynamo attempts to add unspecialized int/float as additional graph inputs (#96786)
cea13ad9fa : Improve size mismatch error messaging referencing mat/vet sizes (#96863)
985fc66b30 : Bind increment_version to python (#96852)
1983b31711 : Fixed print tensor.type() issue. (#96381)
57bb5b159d : [static-runtime] one more attempt to improve crash log readability (#96903)
44d7bbfe22 : [cpp extension] Allow setting PYTORCH_NVCC to a customized nvcc in torch cpp extension build (#96987)
8ce296ae2c : [ez][inductor] show kernel category in kernel benchmark result (#96991)
46eaf4be7d : Fix Typo in pytorch/torch/autograd/__init__.py (#97024)
95575f0a5f : [DTensor] Fix _get_or_create_default_group() (#96961)
ffddb2219a : Change `THPStorage::cdata` to be a `MaybeOwned<Storage>`, add unpack func (#96801)
7f94ea8492 : test/test_torch.py: fix TestTorch::test_from_buffer test (#96952)
18cf30fb2a : [Inductor] preserve AliasedLayout on View (#96948)
92eb9d363a : Decoder native functions join the dead code society (#96025)
b5ecf727be : Revert "[aot autograd] refactor to make functionalization self-contained (#96341)"
238b06086f : inductor: fix cpp wrapper ExternKernel check (#96799)
13538c88b3 : [1/n] Consolidate `replicate` and `DDP`: setup ufmt for `distributed.py` (#96597)
24ce3a7c34 : Move `hasPrimaryContext` to `c10::cuda` (#96800)
cbd3df93c4 : [vision hash update] update the pinned vision hash (#96990)
4de1bc16e3 : [PyTorch][XNNPACK] Update wrappers for internal only x86 SSE2 kernels (#96896)
f865e23abc : [MPS] Introduce MPSUnaryGradCachedGraph & MPSBinaryGradCachedGraph (#95289)
571f96bf59 : cudagraph trees (#89146)
cf732053e4 : nn.EmbeddingBag bound check (#96022)
50beab2978 : [MPS] Fix the failure with ReplicatePad3D (#96988)
417e7bc09f : Revert "[PTD][Checkpoint] Add checkpointing support for DTensor submesh (#96802)"
c9135e4408 : Stop using my channel for 3.11 builds (#96973)
e4e761b277 : record caller frame instead of function frame (#96882)
ea7415087a : Expose Stream Recording Apis in python (#96384)
b01d6f2cdb : addmv decomp #2 (#96264)
5842e5c175 : vmap support for torch.tril and torch.triu (#94287)
cfa6b52e02 : [PTD][Checkpoint] Add checkpointing support for DTensor submesh (#96802)
2abcafcfd8 : Add masked_grad kw argument to to_dense (#96095)
9d80969fa4 : Retry brew and gem installation in trunk ios workflow (#96970)
b02fd701fb : [FSDP] Reduce CPU overhead (#96958)
931a4913b1 : [inductor] Refactor memory management code in wrapper codegen (#96768)
3f4090652c : Passing LinearPackedParamBase Capsule as a saved_data to backward stage (#96269)
397fb2762e : [DTensor] Fix DeviceMesh (#96861)
6718e3ca7c : Cache the transformers model used in ONNX test (#96793)
aeb3db8aa0 : Back out "Fixing a bug where allocating a 4GB block results in using 8GB of memory (#95827)" (#96796)
0eb9e01cbd : Enable TestTorchbind on Windows (#96507)
62eb7a2e97 : [MPS] LSTM grad_y missing fix (#96601)
b249b44bc1 : Turn off split reductions for dynamic shapes (#96850)
bf08d1387c : [primTorch] handle out in `sort` meta function (#96719)
577d930c39 : [CI] Revert https://github.com/pytorch/pytorch/pull/96195 (#96897)
8187c0de88 : Add xpu device type to torch dispatch overload (#96849)
0a53c9624a : Back out "Add _int_mm to expose cuBLAS int8@int8 -> int32 matmul (#94339)" (#96885)
06054d7df0 : fix random output issue on index_select when src is scalar and index is empty (#96408)
3405ac8a08 : [TP][DTensor Op] Enable Embedding op for DTensor (#96702)
44c9ecad8d : fix flop formulas for sdpa (#96690)
8c2341c1b9 : Remove pytest block list (#96698)
3162f71787 : [memory debugging] Extract frame information from inductor (#95753)
e74f70d212 : Revert "Revert "[memory profiling] add a facility to gather combined C++/Python/TorchScript stack traces. (#95541)"" (#96878)
1f340df33c : [vision hash update] update the pinned vision hash (#96906)
3606f59366 : Default specialize_int to False (#96624)
308a58ebca : [FSDP] Rename to _get_orig_buffer_dtypes (#96790)
71adb32ddc : [DDP] API to get data parallel parameters (#95097)
3ce9aac786 : Add environment variable to force flattening of 3D input tensor (#96761)
dcafe3f271 : Updates to the release notes scripts and documentation (#94560)
731bb6e61b : Fix periodic job by excluding check_graph_breaks (try 2) (#96803)
6d62134f2c : fix aminmax output resize issue when input is a zero dimension tensor (#96171)
7c525823c7 : Remove un-used list. And disable pytest for public binding test. (#96684)
f3db2a6341 : Expose API to specify custom context manager for checkpoint (#96783)
ac7329b323 : Add exceptionhandler to more distributed_c10d APIs (#96770)
1716709d46 : [CUDA] Use accumulate type to improve accuracy of grid_sample on half precision inputs [v2] (#96586)
54cd4a67d0 : Output peak memory stats from dynamo torchbench perf CI (#95666)
445863128b : Use .to instead of contiguous to generate channels last tensor (#96791)
24c49dbf14 : [functorch] batch rule : few decomposition ops (#96744)
9b1b3fdd2d : add to functorch codeowner (#96746)
11e708dd6b : [doc] fix `torch.cuda.mem_get_info` doc (#96621)
6339ee5d23 : Temporarily disable test_ops_jit on Windows (#96859)
aa09f8891c : add 2d tests to ci (#96711)
5612aa6acd : Fixes a layer_norm_nested backwards edge case. (#96788)
80e8e41ca7 : Fix type hint for `torch.Tensor.grad_fn` (#96804)
a7d2e451fd : Fix build, shadowed variable (#96778)
e9d9151eec : [aot autograd] avoid cloning some inputs unnecessarily when they dont require grad (#96342)
3cd9c7a16d : [aot autograd] refactor to make functionalization self-contained (#96341)
8e6287264d : [Optim in backward] register_hook=False API (#95096)
8ce8d49cc4 : aot autograd: consolidate metadata (#96340)
070cefaef9 : aot_autograd: dont requires_grad on tangents (#96339)
a269469982 : aot autograd refactor: make all synthetic base logic layered in a single location (#96235)
7a076b7b93 : [aot_autograd] only performance functionalization analysis pass once (#95992)
e1ea584b1c : Revert "[memory profiling] add a facility to gather combined C++/Python/TorchScript stack traces. (#95541)"
33c7be360f : [reland][CI] switch torchbench to a pinned version (#96782)
3fd24fb608 : COO intersection: allow external hash + hash reuse in sparse_mask (#94596)
93f7996995 : [MPS] Fix the MacOS 13.3 selector check. (#96733)
60a68477a6 : Bump black version to 23.1.0 (#96578)
a229e78544 : [BE] Enforce sign-compare (#96723)
96c745dfdc : Fix int() casting in torch.nn.RNN to have correctly traced JIT and ONNX graph. (#92970)
d3a38bdd47 : Revert "Update xnnpack to the latest commit (#95884)"
6110effa86 : Rework torch.compile docs (#96706)
2795233668 : Revert "Fix periodic job by excluding check_graph_breaks (#96780)"
85639c1a88 : [allocator] Generalize recording to a pool (#96542)
e01b092705 : inductor: don't remember pre-loop order if pre loop has loop collapse (#96640)
c6a82e4339 : [vision hash update] update the pinned vision hash (#96787)
8ec9beacec : Fix periodic job by excluding check_graph_breaks (#96780)
6d4d9840cd : Stop including of PassManagerBuilder for llvm >= 15 (#96762)
a8f36dd646 : Revert "add amp support for custom backend (#96188)"
5396f85c91 : Propagate dynamo dynamic_shapes config to backwards (#96771)
0da89664cc : Update xnnpack to the latest commit (#95884)
707d892564 : Debug logging around DDP mixed precision copies (#96438)
b60d6e246e : [inductor] Consolidate codegen functions in sizevars.py into wrapper.py (#96654)
037acd5a22 : Update CI skips (#96745)
a198ce6d76 : [PyTorch][XNNPACK] Update build files for newly added kernels (#95911)
dd5e6e8553 : [BE]: Merge startswith calls - rule PIE810 (#96754)
be4eaa69c2 : Revert "[CI] switch torchbench to a pinned version (#96553)"
2951a75c3a : Revert "Update perf smoke test threshold in check_hf_bert_perf_csv.py (#96772)"
e7d795dccd : [Inductor] aten.{avg_pool2d/max_pool2d_with_indices} arguments can be 1 element tuple (#96727)
784dd583a6 : Automatically register/clear dynamo profiler hooks while profiling (#96199)
2eed44933b : Update perf smoke test threshold in check_hf_bert_perf_csv.py (#96772)
159145a19e : Add support for torch.complex in functorch (#96032)
06b7285163 : Add `torch._check*` functions analogous to C++ `TORCH_CHECK*` (#88725)
cf12edee02 : add amp support for custom backend (#96188)
d30db9a251 : Replace non-reentrant checkpoint with a rewrite that can be nested and contain grad (#90105)
234df29901 : [MPS] Add C++ API support for MPS backend (#96668)
ba4fb9b6ad : Revert "Default specialize_int to False (#96624)"
da1489e405 : Fix signed-unsigned compare in FlattenIndicesCommon.h (#96765)
66871d61bb : One line print for check_graph_breaks (#96750)
6ea790c5b6 : Make share_memory_ call thread safe within itself. (#96664)
799521fae5 : Fixes 96676 (#96714)
61d6ccd29a : [CI] switch torchbench to a pinned version (#96553)
1ac8782db2 : Default specialize_int to False (#96624)
02f6d14b97 : Only allow SymInt across partitioner boundaries, and fixes (#96653)
9cb02b2e72 : Mark empty, rand, randn as core aten op (#96661)
4e1060c609 : [memory profiling] add a facility to gather combined C++/Python/TorchScript stack traces. (#95541)
6e3d51b08a : [inductor][CI] also skip rexnet_100 on non-dynamic shapes (#96691)
a916d64900 : [FSDP] Relax `sharded_grad` assert to allow IDLE (#96584)
05dda7ff65 : bsr_dense_mm Triton kernel: fix out kwarg (#96648)
40df3b41aa : [AO] Update qLSTM implementation to remove unsupported backend ops (#96436)
7ec0d6f006 : Moves SDPA backward helper native function to functionsmanual.cpp (#95821)
152c1529ca : Add tests for all padding layers to `module_db` in `common_modules.py` (#96641)
4562898ad1 : Disable flaky linalg.det.singular tests on ROCm (#96707)
0d3bf2fdca : Add missing ceil for libdevice in triton (#96709)
1ec655565d : [fix] resize_, resize_as_ : version bump in ADInplaceOrView (#96598)
f03db8d6cb : [reland2][inductor] Add an AOT compilation mode for Inductor CPP backend (#96520)
178d2a38e0 : debug shape guards (#95848)
a37197df99 : [Inductor] Enable Inductor to support BF16 atomic_add (#96620)
ff7e510d1e : Correctly use PythonPrinter for generating wrapper code referencing sympy (#96710)
f1d4d291b0 : update_expected.py to parse artifacts and update graph break stats (#96480)
9239279cc0 : Support tensor type for XPU (#96656)
a07817ad8f : Revert "[MPS] Add C++ API support for MPS backend (#96668)"
bdd09e68e4 : [Inductor] Legalize BF16 (#96183)
190e284bd3 : [Inductor] apply vec float mask on logical comparison ops in cpp (#96502)
3f7235463a : [Inductor] Fix the incorrect fusion if a Conv/Linear module is called from multiple places (#96485)
3cad8d23d0 : [Inductor] Skip the hf_T5_base due to intermittent failure on CI (#96649)
166117e050 : control_flow.{cond/map} allows tracked_fakes divergence (#96546)
ec536232a3 : [primTorch] add meta implementation for `upsample_nearest2d_backward` (#96612)
6a2dcfd738 : Move all ONNX test dependencies to Docker (#96590)
70090b4daf : [CUDA] Abate spurious resize warnings in MultiMarginLoss backward (#96382)
906a1952c6 : [DDP] Enable delayed all reduce in DDP (#96673)
d0a4881d95 : [vision hash update] update the pinned vision hash (#96703)
2a08a62777 : Add extra metadata (as comments) to Inductor generated code (#96581)
f56cb41c2e : Fix calls to sizes to enable dynamic shapes with sdpa (#96674)
218eeacacd : Check dynamo graph-breaks in CI (#96346)
2cc8368af3 : Clean up duplicated retry function in common.sh (#96682)
62c1e33fc9 : [BE] Remove fast_nvcc tool (#96665)
82daf98151 : [Sparse] Move `SparseTensorUtils.*` to `native/` (#96696)
c31f5cc26a : Update functional_bfloat16.h (#96027)
a66474b411 : Update vml.h (#96028)
5a8a4030a2 : [BE] Add regression test for aten shared build (#96697)
a22b92d8ba : Revert "Enable thp(transparent huge pages) for buffer sizes >=2MB (#95963)"
86a9fe8abc : Update Exceptions.cpp (#96031)
507feb805f : Don't specialize torch.Size with specialize_int = False (#96419)
da265652d6 : Return Live Data Pointers from Checkpoint, swap onto tensors (#95020)
1cc32aedb0 : Handle additional live allocations not in checkpointed state (#94943)
d798de2b05 : Checkpoint CUDA Allocator Private Pool State (#94653)
c95bcb6694 : [MPS] Fix flip where no dims need to be flipped (#96605)
ca7e53324f : [Quant][fx] Remove unused is_qat args in prepare_fx (#96631)
6e3e22d58c : [CUDA][cuFFT] Minor fix for cuFFT plan cache docs (#96373)
6a492908cc : Update conv_fused.py (#95551)
ae4d690931 : Make linter image available on ECR when rebuilding (#96671)
f330281fb2 : Add torch.nn.LayerNorm() to documented list of supported nested tensor ops (#96434)
069ace131c : [MPS] Add C++ API support for MPS backend (#96668)
c28b224e0f : Update CUDAGraph.cpp (#96029)
2ea0cb1207 : Fix the typo for the docstring of args in the observer (#95887)
9159599cd5 : Grammatically updated the tech docs (#92896)
1d792288a5 : [dynamo][dashboard] Clear local changes before pulling git repos (#96667)
16a16d1803 : Incorrect links #96515 (#96536)
a48d518e45 : test_foreach: remove `skipMeta` (#96599)
f5a0b31a95 : [FSDP][optim_state_dict] Make FSDP optim_state_dict aware of DDP prefix (#96415)
b992199487 : [pytorch][coreml] Use from_blob instead of empty in pack_outputs (#96564)
c69b3b8d4f : [CUDA12] Autograd engine use current device only (#92354)
31137a63a7 : Changed flop formulas for flop counter to take in shapes directly (#96565)
3f1efadea5 : [inductor] fixes addmm pattern matcher to exclude non-conformant patt… (#96634)
30d56dd8c1 : Support randn_like() for NT (#96528)
f673ad6d5c : Add a new knob to separately enable the autotuning in Triton. (#96440)
4454655a4c : Add triton to relevant packages (#96663)
a8d1eb1961 : Convenience script for getting correct Triton nightly binary (#96669)
120c6f6637 : Revert all_reduce workaround as it might be causing issues on other parts of the codebase (#96460)
19833486dc : Autorun binary builds when a commit pin is updated (#96526)
7eef469793 : Add merge_rule for "functorch" module (#96657)
55a1bd3fc6 : [PT-D] Update CODEOWNERS, merge_rules, and Persons-of-Interest for to… (#96321)
bb8dc7f7d9 : Dockerize torch deploy setup (#96593)
0b5040b329 : sparse_mask: remove syncs by removing calls to coalesce (#94406)
13011afb87 : Fix vmap registration for t, t_ (#96539)
024ea1a21e : Support zeros_like() for NT (#96527)
3cdf18cb4f : Corrected `HingeEmbeddingLoss` documentation (#95140)
32f11f58c9 : DDP native mixed precision (#92882)
c7f39c0820 : Update CI skips (#96554)
279ada515a : inductor(cpu): make the number of variables used in the masked vectorization path align with the scalar path (#96510)
2cbce06fee : Enable test_inverse_errors_large (#94727)
760ad90518 : [Dynamo] User defined functions support torch & builtin functions as default arguments (#96563)
6eca391e83 : inductor(cpu): remove __restrict__ keyword to avoid generating wrong results when two pointers point to the same memory (#96492)
be220690d9 : Revert "[primTorch] add meta implementation for `upsample_nearest2d_backward` (#96612)"
2b9d9bcb85 : Deprecate non-bool masks in masked_fill (#96594)
fe180596b8 : [primTorch] add meta implementation for `upsample_nearest2d_backward` (#96612)
99efe3ef5a : Generate type match guard for torch.Size input (#96421)
1ab883797a : [BE] Dedup hardcoded triton versions (#96580)
30b968f60d : Revert "[BE] Dedup hardcoded triton versions (#96580)"
c131e51e62 : [BE] Dedup hardcoded triton versions (#96580)
4b372e3958 : [memory profiling] C++ tracing support (#95357)
48490cec28 : [memory profiling] Move Context object to c10 (#96280)
266089a3fe : [memory snapshots] record scripted stack traces (#95356)
e8b0f504e2 : Fix unpicklable object in AveragedModel (#95979)
82d3d053b9 : Properly capturing argument names for decorated/wrapped functions (#96557)
a7a09adb86 : Add location information for assertions in `torch.jit.annotations.try_ann_to_type` (#96423)
12735952a0 : Symintify `_gather_sparse_backward` (#96591)
cb7c796b4b : Enable `min.unary_out` (#96441)
31a6730411 : [pt2][inductor] Ignore trace.upload_tar when pickling config (#96519)
0d7c44096a : Add `baddbmm` meta function (#96548)
8e0d5bf538 : [primTorch] add meta implementation for `aten.min.dim` (#96442)
ab148da66c : Add fsspec to requirements.txt (#96532)
f3fc4d035d : add timeout and retry to metric upload job (#96582)
b4f434a731 : [JIT] mark _exchange_device op as having side effects (#96364)
f89bd26fe4 : update options (#96551)
362958125a : [vision hash update] update the pinned vision hash (#96570)
c3614c7a61 : Add a flag to benchmarks script to keep the test report directory (#96398)
bdecf50b47 : [fix] reshape_dim_outof to handle 0 sized dim (#96493)
1be04be3b2 : Remove fetch-depth from _calc_docker_img (#96588)
61cb544397 : Align mask formatting of both masks more closely (#96286)
1e6961586b : [Profiler] Memory timeline to show actual timestamps (#96535)
51b8ab7879 : Clean up references to test_megatron_prototype (#96431)
4242e698a3 : [BE][MPS] Add MPS to clang format (#96562)
a7689e73f6 : [Docs] Document RReLU's different behavior between training and evaluation (#95624)
0bf2ed2eb4 : Remove duplicate windows job (#96552)
80ce1a934e : Fix flaky Dynamo export tests (#96488)
7fcf8b1829 : [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
d05f2ae476 : Require DOCTEST_SHOW environ to run plt.show (#96522)
384545bf84 : [ONNX] Preserve stacktrace info for decomp (#95929)
b97ce3774a : [ONNX] Move graph transform functions to 'passes' (#95664)
41991710b2 : Revert "[PyTorch] Use c10::FastMap for memoizing in Pickler (#96360)" (#96547)
429091140e : [BE][MPS] Use convenience functions (#96521)
85961f5728 : Fix broken anchor in RELEASE.md (#96525)
4519228f60 : Reduce pytest blocklist part 2 (#96397)
49eed50d19 : [Inductor Perf CI] Lower the threshold of performance smoke test speedup. (#96531)
e948ba07d4 : [Profiler] Add export_memory_timeline to save memory timeline plot to file (#96137)
29cd60dfb7 : [CI] handle more dynamo benchmark models that are not expected to be deterministic (#96324)
481582eb11 : Remove land checks in trymerge (#96401)
6894bb0a85 : Remove on_green and mandatory_only (#96400)
219d5eb4f1 : [QOL] Raise a NameError when accessing non-existent variable (#96418)
55cf7eef86 : add/add_ for sparse compressed formats: fix silent index downcast int64 -> int32 (#95294)
a651e6253a : [CI] Change compile_threads to 1 when running benchmark accuracy test on CI (#96195)
939c4ae6cd : [DataPipe] Add `copy` option to `fork` DataPipe (#96030)
e35f020418 : Retry XLA dependencies installation step (#96352)
55d4842a48 : [SPMD] Add defunctionalize_optimizer feature (#96323)
c7bd9b9490 : Switch AsyncCollectiveTensor to be a wrapper subclass. (#96105)
3bb16a0842 : Enable thp(transparent huge pages) for buffer sizes >=2MB (#95963)
b053a0f2ba : [XPU][Profiler] Add API support for XPU profiler to Kineto path (#94502)
d0f4d62961 : flatten_indices: remove syncs (#94401)
1b59c3feb5 : Add PyObjectSlot member to StorageImpl (#93342)
987eade3f3 : [fix] resize_ and resize_as_ : version bump (#96403)
8bce88d9de : [caffe2] dont call cudnnDestroy on thread exit (crashes on windows with cuda 11/12) (#95382)
76cac70939 : new triton main pin (#95896)
9aa216cb46 : reland #96249: [inductor] show more kernel specific metrics in the benchmark result (#96461)
d0731271cd : Revert "new triton main pin (#95896)"
076792a3e1 : [ONNX][Diagnostics] Speed up 'python_call_stack' by 'traceback' (#96348)
15e58c19ec : [FSDP][optim_state_dict] Copy step tensor so that each parameter has its own step (#96313)
cdab1d676c : pt2e short term quant: respect qmin/qmax for linear weight (#96232)
bea6b1d29a : [vision hash update] update the pinned vision hash (#96369)
f3b8638074 : Adding nn.ZeroPad1d and nn.ZeroPad3d (#96295)
d0e4ca233e : some reference and move fixes (#95942)
6e0359dd42 : new triton main pin (#95896)
065de43012 : Fixing a bug where allocating a 4GB block results in using 8GB of memory (#95827)
a87f3f612e : [MPS] Fall back multi-layer LSTM on macOS 12 (#90909)
b0a580a21d : [ONNX] Export logical_not (#96315)
5f89d147a1 : [ONNX] STFT Support (#92087)
69d3fa2e4d : [PyTorch] Use c10::FastMap for memoizing in Pickler (#96360)
cc798f1a4f : [PyTorch] add c10/util/FbcodeMaps.h (#96359)
cc699c56dc : reland #96248 [inductor] show performance for each autotune config for a kernel (#96458)
cf3d3a583e : Add env PYTORCH_TEST_DO_NOT_USE_PYTEST as an option to not use pytest in unit testing (#96444)
ff2e14f200 : Skip rexnet_100 in dynamic CI (#96474)
79313345e8 : Fix missing items() typo (#96417)
8ef8bd023d : [CI] Use different subdirectories for amp and float32 nightly perf run (#96470)
384d3ec2b6 : Extra CR comments from #95621 (#96043)
2f6a371ae9 : Revert "Optimize nn.Module __call__ fast path for dynamo (#95931)" (#96242)
6154be1dd1 : [ONNX] Fix circular padding to support dynamic axes (#95647)
faa4cb29b2 : [Quant][fx] Create new FX-based LSTM reference module (#96343)
05b679ce6a : [inductor] don't match indirect indexing in fusion (#96273)
1bde36ba41 : test only smaller block_k for mm_plus_mm (#96385)
090b3b95b8 : [PyTorch] Add Vulkan support and tests for at::select.int operator, 4 dim/rank tensor case (#96228)
6a675f7cac : Correctly resolve dispatch keys for PyOperator (#96306)
30c4ea138f : Assert that there are no None arguments to backwards (#96300)
bbe1b9bbd4 : Fix https://github.com/pytorch/pytorch/issues/96278 (#96299)
075a49442d : [MPS] Allow `float16` input to float32 `LayerNorm` (#96430)
457396fcdc : [Autograd] `expand_as` instead of `clone` to get `AccumulateGrad` (#96356)
cb42bc2cf8 : [FSDP] Add unsafe setattr gated by env var (#96326)
fe05266fda : Revert "[reland][inductor] Add an AOT compilation mode for Inductor CPP backend (#95985)"
44d8e6c2aa : Retry CI Android emulator test (#96163)
df0ff34bcb : [ONNX] Bump onnx submodule to release 1.13.1 from rc2 (#96325)
32ffd70644 : Rewrite fallthrough to more closely match how C++ works (#96304)
67c329bc9b : Refactor to reduce duplicate logic in torch._ops (#96302)
4662ae5b62 : Add missing types to inductor IR assert (#96221)
038e838e7b : Make setup linux action be more friendly with gcp linux runners (#96289)
78e04f8272 : Update nvfuser_executor.py (#96218)
7863efbd76 : [BE][8/N] Remove ShardedTensor from TP FSDP integration test and other tests depending on Sharded Linear (#96254)
f5c39b7ba2 : [inductor] fix typos in test_torchinductor.py (#96233)
0f4652f498 : [ONNX] Merge 'initializers' into 'TorchScriptGraph' (#95676)
e9e6b3b6c5 : [EASY] Add complex dtypes to partitioner (#96297)
a7fe11dec0 : --subprocess for pytest (#96210)
8921b22297 : Set ref for linux_job checkout in lint (#96317)
c8216e558b : Add basic Module serialization BC test (#96238)
5bbec680d7 : Fix usages of contextmanager without finally (#96170)
34d18c8bee : Remove unimported expecttest deps and usage (#96314)
0f6d6d6124 : [TorchScript] Fix torch.cuda._exchange_device (#95306)
deaf9e5e65 : [reland][inductor] Add an AOT compilation mode for Inductor CPP backend (#95985)
b9c25f186c : Ignore shape inference exception from Caffe2 ATen fallback (#90408)
c988de1040 : [EASY] Update inductor training dynamic skips (#96298)
b3a079810e : [CI] Add a workflow for quick perf comparison (#96166)
4a1b971748 : Move MacOS x86_64 build and test jobs to periodic (#96279)
9137f53ec2 : Revert "Error when jit.trace/script is used with torch.compile (#91681)"
7362e22f8b : Notify on outdated lintrunner (#96241)
11aab72dc9 : [SDPA] Add an optional scale kwarg (#95259)
3f840cc627 : Revert "Ignore shape inference exception from Caffe2 ATen fallback (#90408)"
9c5a24b9df : [BE] Delete `pre-cxx-11-abi` MacOS libtorch builds (#96301)
e7dd9b1138 : [Quant][test] Add test for mixed dtypes in the same model (#96104)
1d4e872370 : Ignore shape inference exception from Caffe2 ATen fallback (#90408)
98ece75043 : [aot autograd] merge all outputs of functionalization analysis into single metadata (#95991)
29b216acd5 : aot autograd: handle detach() and no_grad() mutations on input (#95980)
bb650b34c4 : [inductor] do not handle `int` in `placeholder` (#96230)
f96bd52841 : aot autograd: dont allow symint outputs to get tangents in the bw graph (#96219)
6bbae86253 : Revert "Fix hooks handling for unpickled nnmodule (#96224)"
a1d7014c0f : Hooking backward for QNNPACK (#94432)
92edac72aa : [FSDP][optim_state_dict] Fix a memory leakage in optim_state_dict (#96263)
2bb022e902 : [MPS] Adding xfaillist with all categories of failures. (#96176)
b90a9c7db2 : [static-runtime] fix one forwarding usage (#96271)
3ce1e15cf7 : Revert "[Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)"
941ff109d3 : `dl_open_guard` should restore flag even after exception (#96231)
8ca264ef36 : Fix hooks handling for unpickled nnmodule (#96224)
08fb13db65 : [Quant] Add lowering for pixel_unshuffle/narrow (#96160)
9e3f173636 : [1/n] Add verifier for EXIR Aten dialect (#94783)
3a4275278b : Use GH cache for sccache on GH mac runners (#96142)
d7db5b05b4 : Context manager to push/pop frame summaries (#96054)
bb8645acda : [vision hash update] update the pinned vision hash (#96243)
664381b293 : [CI] Avoid calling torch.use_deterministic_algorithms for some models (#96245)
93ff71ec37 : [ET] Add RuntimeContext to ET Aten mode (#96084)
c88aa336aa : [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
b8f7bd593c : [Dynamo] Guard name should be valid Python identifier (#96174)
738cc5e644 : Fix validate_input_col for nn.Module or Callable (#96213)
fdd7e76b95 : [PyTorch][easy] Don't call IValue::type twice in Pickler::endTypeTag (#96214)
3623cfb790 : [FSDP] Speed up first iter order check (part 2) (#96220)
7324aef9a8 : Add torch.empty_like() to documented list of supported nested tensor ops (#96211)
b0b5f3c6c6 : Fix gumbel cdf (#91698)
203890e1e0 : Properly show buck target to run (#96089)
79d49c60c1 : [ONNX] Fix expand_as (#95962)
bdb076ab43 : [ONNX] Skip doctest `torch.onnx._internal.fx` if ImportError (#95686)
82dba844bb : [ONNX] Move symbolic export to separate file (#95650)
d06729746c : [RFC] Add _abort method to ProcessGroupNCCL (#96017)
d6d8d3484e : _memory_viz.py: Visualize how blocks fit into segments. (#91336)
71f369092d : Revert "Revert "memory viz: Add colors for categories and a legend (#90587)"" (#96133)
32eb3ab7e8 : [FSDP] Speed up first iter order check (#96146)
3d5eba811a : Add shape function for stack op (#92205)
5e73cc9310 : Update lintrunner to version that uses as default mergebase (#95938)
a5aceba61f : [static-runtime] a pass through checks throwing exceptions (#95983)
576762d9d2 : Clean up duplicate line in logit sample inputs (#95163)
eea0733045 : Reduce pytest blocklist (#96016)
30237e7aec : Provide more informative kernel names in Inductor (#95940)
c74f09403b : [inductor] make `philox_rand_like` work with dynamic shapes (#95461)
02a18b1a97 : Properly avoid wrapping numbers as tensors before backend (#96193)
2f66b57a7a : [MPS] Fix in-place add and sub with alpha == 0.0 (#96184)
d4f5f9fdb4 : Profile dynamo guards (#96119)
d0641ed247 : [TEST] Turn on unspecialize int dynamic training inductor CI (#96058)
9a575e77ca : inductor: align baddbmm behavior with eager mode for beta=0 and input has nan value (#96087)
ac77883e48 : fix issue of baddbmm when out has nan value for beta=0 (#96086)
666efd8d5d : Improve ASAN and TSAN handling in cmake (#93147)
8dbc549517 : Correctly pass $@ to the runner benchmark script (#96190)
40ca20bb7b : [Easy] Fix typo "steams" -> "streams" (#95706)
803e30441f : [FSDP][Docs] Per-device NCCL stream is per PG (#95705)
98a4d74a68 : COO intersection primitives: performance improvement (#96094)
98ff841a75 : Use maxint to bound integers. (#96121)
a6e3e7905e : Turn on unspecialize int dynamic inductor CI (#96034)
3326c14e86 : Add a sample for index_fill to test framework (#91534)
12ab4f08b7 : [Dynamo] No graph break on namedtuple and potential other functions (#96122)
8ca3c881db : Dynamo.export to preserve names of args & kwargs (#95851)
e38f48ff11 : Refactor unittest around dynamo.export wrt function signature (#95850)
c596504292 : Type annotate `dynamo.export` (#95742)
18e8aa95f1 : Restore _graph_executor_optimize flag after JIT test_profiler (#96135)
769cc8a614 : [MPS] Add type promotion to `torch.addcmul` (#96164)
7038458c5b : Add Generator register for the privateuse1 backend (#93920)
e9ca902894 : fix typo under aten/src/ATen/cudnn (#96063)
19ee61f7fa : Upload torch dynamo performance stats to S3 before Rockset (#96165)
2973994259 : fix typo in comments under torch/csrc/distributed (#96062)
fe4fec37a4 : [inductor] Refactor IR printing (#96024)
4ab4d88147 : Stop printing data dependent variable stacks by default (#96120)
1cd0929bf7 : [BC] Allow only `bool` tensors as mask in `masked_select` (#96112)
e70ea8d58d : enable taskset core pinning in addition to numactl (#96011)
e168dbb90a : [inductor] improve cpp vec implementation of square (#96072)
bf01caf27b : Fix broken contribution stats upload job (#96003)
e6b361bd47 : Refactor dynamo benchmark test script to reduce duplication (#96096)
969586c373 : Handle GitHub request failure when filter test config (#96145)
847d6520ed : Don't guard on the exact int value on conversion to bool (#96008)
680214ac11 : SymIntify a few more relatively non-controversial schemas (#96100)
95d17dc93d : [inductor] Reland #95567 part 1 (#96023)
39e8311a29 : unwrap sizevars passed as args (#96051)
8c8148c887 : Revert D43643526: Multisect successfully blamed D43643526 for test or build failures (#96126)
962b3f78bd : [inductor] run all kernel benchmarks individually in a compiled module (#95845)
6c9a51cdc9 : Install NVIDIA driver in bazel workflow (#96020)
893aa5df3f : Promote "Skipping frame" message to INFO log level (#95968)
28aa2efd14 : [7/N][BE] Remove Partial Tensor and its dependency (#95949)
6dddc0d689 : [6/N][BE] Remove Sharded Linear Op for ShardedTensor (#95948)
4e396a54e8 : [5/N][BE] Remove Replicated Tensor class (#95947)
b38b39c441 : Revert "memory viz: Add colors for categories and a legend (#90587)"
22c9896ea4 : Use original arg names if possible (#95898)
6fff232280 : Delete torch._functorch.config.use_dynamic_shapes (#96102)
483e193d0e : [CI] Use CUDA 11.8 to run inductor benchmark tests (#96059)
000d9ec189 : Use working pre-built sccache binary (#95997)
69bfdcd244 : Only print bytecode of inlined function if output_code is True (#95969)
69aa6b4bb9 : fix typo in comments under torch/csrc/autograd (#96061)
301a28bf8c : [primTorch] move diagonal & add linalg.diagonal refs (#95774)
1fd7ea1ba8 : Update skips for RecursionError (#96109)
5b2ab0dd4f : Multiple fixes for functional collectives. (#95897)
3beafc91d1 : USE_FAST_NVCC Windows (#95206)
7a192cc51c : dynamo: wrap graph break inst in try except block - with context manager setup/teardown (#94758)
18d6ce9622 : sparse compressed tensor validation without syncs for low-(batch)dim tensors. (#94048)
ebaf9af76e : use float to init reduction value (#95452)
dcc159d3b6 : inductor: pre-convert a TensorBox's layout to FixedLayout at FX side if one user of it is a CPU external customer kernel (#95873)
cc775fb8c4 : Upload torch dynamo perf stats to Rockset (#95675)
02792ff16f : [CI] Make inductor-perf-test-nightly produce data for dashboard (#95685)
fa92b6a7b0 : Error when jit.trace/script is used with torch.compile (#91681)
e8cd173aae : Fix node provenance tracking (#95901)
36a6e2c54b : Automated submodule update: kineto (#95798)
5dd52e250f : [inductor] Add some simple decomps (#96039)
60cf95610d : [CI] Skip xcit_large_24_p8_224 in TIMM (#96048)
1359d16fe8 : [CI] Further tighten the checking of two eager runs (#95902)
43e71cddb0 : [inductor] use triu ref instead of lowering (#96040)
789fc4c292 : [dtensor] refactor shape/offset calculation (#95923)
af8dbe7ec2 : Fix training enablement in AOTAutograd (#95975)
b7a3f331f1 : Add doc test in graph_drawer.py (#95919)
5da6da659a : [inductor] Enable some decomps (#96038)
03b6e6979c : Transformers: fix src and key padding mask bool regression (#96009)
78da315afd : [MPS] Fix bidirectional LSTM & small one-direction LSTM fix (#95563)
c7c4a20321 : Update dynamic skips (#95966)
49f6849f58 : Fix codegen logic for foreach derivatives (#95263)
a10897a344 : [Dynamo] Fix number of inputs in onnxrt and tvm backend (#95429)
26045336ca : Optimize nn.Module __call__ fast path for dynamo (#95931)
6ca286df69 : [Dynamo] Support call dict with list/tuple as input (#95928)
43dd043ea7 : Revert "[inductor] Improve error messages (#95567)" (#96014)
dc70e8175f : Add various uninterpreted bit tensor data types (try 2) (#95860)
5e1067bcc2 : [vision hash update] update the pinned vision hash (#95932)
7ff9612e34 : Improve error message for instance norm when channels is incorrect (#94624)
436993d52b : [MPS] Error on unsupported types (#95982)
f4b33791fd : [BE] Remind people there are rsts to update in docs/source (#95914)
d303665d33 : Make int unspecialization actually work (#95621)
7d765cdc66 : Fix wrong handling of `grad_scale` & `found_inf` in fused optimizers (#95847)
d214d82acd : Prettify assert expr in self.symbol_to_source failure (#95972)
4d9c499dc2 : [SPMD] Introduce the cross-iteration graph optimization framework (#94803)
5a07c3d3d1 : Remove fake inputs from control flow (#95988)
9a781ce3e1 : Add flop formulas for sdpa (#95854)
7db5f8c765 : Improve Discoverability of Inductor Optimizations (#95824)
7d02ecfabb : Fix typo in RELEASE.md and CONTRIBUTING.md (#95965)
ac07de4a61 : Add export docs, improve asserts (#94961)
027ebca4d7 : Don't use guardless contiguity/stride-like implementations (#95733)
f8b57ba635 : [EASY] Unindent some blocks (#95967)
e6f3e16d89 : Fix: `validate_input_col` for partial functions (#95067)
ee43842505 : memory viz: Add colors for categories and a legend (#90587)
c72fbf2e5a : [inductor] do not use `ceil` in `arange` ref (#95773)
feffcafe09 : [inductor] use FP div in CPP expr printer (#95698)
6c061e5145 : [DTensor][Shampoo] add _tensor.zero function (#95863)
1d3c394d5e : [MTPG] Improve all_reduce and handle bwd thread support (#95524)
a7698a8260 : [DCP] Add DCP FSDP sharded_state_dict checkpoint example to DCP .rst file (#95517)
4026c62174 : Revert "Don't use guardless contiguity/stride-like implementations (#95733)"
7f5f0b3665 : Run _nvfuser/test_torchscript serially (#95951)
879400e4e8 : Revert "[inductor] Add an AOT compilation mode for Inductor CPP backend (#94822)"
d21577f28c : Run more tests through pytest (#95844)
004bcffc6a : Fix formatting (#95906)
f3c25cd348 : [Quant][PT2.0] fix issues for rearranging weight observer for decomposed linear (#94296)
d809020fc8 : Triton kernel for bsr @ dense (#94823)
88e554b513 : Move label check failure to mergebot (#94707)
73b66098b2 : [inductor] Add an AOT compilation mode for Inductor CPP backend (#94822)
3eb8eaa177 : Inductor: fix crash when indexing an empty tensor by invalid index (#95046)
d4e0d895e9 : inductor: fix permute_linear_fusion/linear_permute_fusion has no 'bias' KeyError issue (#95930)
35bf5bac26 : Fix "sandcastle_skip_if decorator name is confusing" (#95649)
0147a408c3 : Refactor inductor collectives around base class (#95920)
92a2107375 : Support Inductor collectives with wait or collective outside graph (#95893)
7206b5e9e5 : Remove pydispatcher from test since no longer needed (#95890)
738beaa6b8 : [dtensor] fix experimental_op slice_scatter (#95894)
304a95435d : [MPS] Disallow reshape in slice (#95905)
a32be76a53 : Disable more warnings on Windows CI test (#95933)
22d3ac79d2 : [torchgen] Prettify generated type annotations (#95877)
3bb76e6ced : [static-runtime] increase verbosity for schema check (#95937)
76ade51307 : [pt2][inductor] turn off cache search by default (#95662)
53b4f6c0f6 : Revert "[jit] Add c++ stacktraces for jit::ErrorReport (#94842)" (#95886)
d1a168f176 : equal_quantized_cpu requires both inputs to be quantized tensors (#95875)
4e02ad7538 : Rename inductor collectives test (#95889)
ce4cbac914 : Change linux.gcp.a100 to linux.gcp.a100.large (#95913)
2ee85f9e8b : Extend filter logic to remaining CI workflows (#95437)
53c9866ffa : Print the actual values with scalar mismatch. (#95879)
c22e7c7bf3 : Revert "sparse compressed tensor validation without syncs for low-(batch)dim tensors. (#94048)"
d7637801d3 : Revert "COO intersection primitives: performance improvement (#92976)"
1b1b9c8706 : Add flop counter utility (#95751)
3095c95828 : Fixes for PyTorch/XLA functionalization integration (#94537)
f397d1700f : Inductor reduce_scatter_tensor (#95764)
3df1a9baca : Upload external contribution data to s3 (#95747)
02fa2291f7 : Add support for custom backend (#95072)
b2875268c9 : [bazel] use GPU machine and run GPU tests (#95721)
61fa43a1f2 : [GHF] Add submodule updates check (#95885)
7ebd816aab : Switch DTensor to use funcol::all_reduce. (#95804)
00ebbba623 : Remove torch._inductor.config.triton.convolution (#95842)
b033594943 : COO intersection primitives: performance improvement (#92976)
06562529d2 : Revert "Upload external contribution data to s3 (#95747)"
1712a18170 : Fix typos under torch/_C directory (#95710)
e83d0a1893 : Improve unittest class printing for generated classes (#95806)
f418e1f8b6 : Upload external contribution data to s3 (#95747)
5309c44210 : [inductor] enable `test_alexnet_prefix_dynamic_shapes` on CUDA (#95766)
db8e91ef73 : [CUDA] Split out compute capability 8.7 and 7.2 from others (#95803)
d0dd898943 : [MPS] Remove remaining casts from 13.3 (#95870)
3a7fd20108 : fix nll loss decomposition to properly ignore ignore_index (#95833)
c86d23a1ef : Allow point-ranges on floating point inf (#95799)
7bdfdbbd5f : [MPS] Add macOS 13.3 selector check (#95866)
d9f822b566 : Add dimension check in tensordot (#94497)
75cb99e549 : [optim] Widen the cases for defaulting to foreach (#95820)
2bcf863fad : [optim] include nn.Parameter as foreach supported (#95811)
45fd1b390e : [vision hash update] update the pinned vision hash (#95843)
4973ca5e3e : [sdpa] Add broadcasting for batch and num_heads dimensions to fused kernel nested preproc (#95657)
62b775583f : [inductor] Improve error messages (#95567)
d1ec9a51e9 : Bump version 2.0.0 -> 2.1.0 (#95790)
4d3352ed90 : [MPS] Remove casts from reduction/cumsum/sort ops starting with macOS 13.3 (#95817)
184fb9f11d : Small doc update for torch_compile_debug (#95809)
1fd119948e : [3/3] Update `.pyi` Python stub files and enable `'UFMT'` linter (#95268)
b3d8fae042 : Fix typos in documents under torch directory (#95709)
b35e67142c : [JIT] Improve source attribution for NamedTuple type inference (#95761)
053205aab5 : [dynamo] Fix OrderedDict reconstruction bytecode (#95800)
6786a24fd2 : fix some tiny code issues (#95757)
f7b26bdd22 : Remove mention of dynamo.optimize() in docs (#95802)
deaf077de8 : Don't use guardless contiguity/stride-like implementations (#95733)
a9a3a1bd14 : Apply peephole for eval mode when constant folding is enabled only (#95801)
8093abce3e : Always get attr static out (#95771)
34a7c79eac : Rename func (#95639)
de86538f55 : [ROCM] Restrict pytorch rocm to only use triton 2.0.x (#95793)
2936c8b9ce : Revert "Enable thp(transparent huge pages) for buffer sizes >=2MB (#93888)"
13340638f4 : Update inductor-perf-test-nightly.yml (#95807)
63796d35ef : [sdpa] move seq_len_1 check and replace with seq_len_0 check in sdp_utils (#95486)
72b9d45e76 : Clean up install_triton and install_filelock in CI (#95754)
dd88954511 : Preserve specialize_int_float during export (#95741)
5d9d8c6154 : [MPS] Add fixes for div with floor and raise error for div_trunc (#95769)
5ba4dafccd : Retry Merge: extract utils from check labels ptr (#94899)
975333d80c : Logaddexp for complex in CPU (#95717)
97fbceead4 : [EASY] Make has_hint work on more things than just SymInt. (#95792)
879f0c3fee : [CI] Increase the timeout limit for benchmark test (#95787)
ef731cdaf0 : [2/3] Update `.pyi` Python stub files: Prettify `rnn.py` by using type annotated `NamedTuple` (#95267)
a46e550d06 : [1/3] Recognize `.py.in` and `.pyi.in` files as Python in VS Code (#95200)
e096bca5f9 : adding symbolic link to get CI to run tests where cmake is not run on CI node (#95402)
5d29b68bbc : [inductor] generate triton kernel benchmark (#95506)
e9c70b0b20 : Fix typo and grammatical errors in community docs and dynamo docs (#95692)
3e8eedd78e : Round of fixes for functional collectives (#95714)
46f092dc66 : Add jinja2 as mandatory dependency (#95691)
2bcc0e9e18 : Expand sparse.softmax zero nnz tests to cover cases of previously reported FPE. (#95646)
c5f6092591 : Use FindCUDAToolkit to find cuda dependencies (#82695)
7901f2d156 : sparse compressed tensor validation without syncs for low-(batch)dim tensors. (#94048)
e5a959a2d4 : [MPS] Fix views with 3 or more sliced dimensions (#95762)
7d097e3695 : [CI] Reduce the frequency of running inductor-perf-test-nightly (#95778)
9835c93aba : [CI] Change the way tests are triggered with dynamo and inductor (#94539)
e3892fd16b : [inductor] correctly infer dtype of `full` (#95593)
9da903f180 : [Inductor] Fix the logical_and/logical_or vectorization issue (#95609)
c1f5e50fd1 : [Inductor] Vectorize channels-last adaptive_avg_pool2d (#95608)
074ae720f4 : [Inductor] Fix the issue that at::vec does not support indexing (#95459)
7a772bfff9 : [dtensor] add submesh example to checkpoint_example (#95655)
3fa939625b : Rearrange some transformer tests (#95745)
1e2e149570 : Dynamic dim guards (#95584)
e628a3e724 : Don't generate guards that refer to unbacked SymInts (#95732)
9b86b53285 : allow privateuse1 key to be used with legacy constructor (#95748)
93f1aa5511 : raw_values is dead (#95703)
9227fd741c : Avoid recursion in graph traversal (#95723)
e970dd9dcf : [CI] Compile on M1 natively (#95719)
e79b2b7792 : [CI] Force clear triton cache between running each test (#95729)
d3d75a5cd8 : [vision hash update] update the pinned vision hash (#95665)
21b1134be6 : [inductor] fix type promotion for comparison operations (#95736)
6930f30ccd : Small bugfix in nested matmul bmm path head_dim acquisition (#95744)
e50ff3fcdb : Fix kernel name bug (#95739)
65f49ab663 : [Inductor Perf Test Workflow] Remove pull request trigger and rely on ciflow/ label only (#95755)
1c526664d5 : feat(dockerfile): shrink layers & build cleaner (#95375)
60a1d29585 : Correct OneCycleLR doc example code to explicitly call optimizer.step() (#95730)
ed1957dc19 : [MPS] Add support for masked_scatter (#95743)
d9cd9a13bc : [BE][DDPOptimizer] De-dup `p` and `param` (#95654)
94bec94f5a : Initial minifier smoke test + runbook (#95670)
7ea3aab45d : Remove dead ZeroGuard (#95701)
cf3638a9cc : [dynamo] Clear cache on dynamo dashboard accuracy tests (#95726)
40d54cf8bf : Apply filter logic to disabled jobs dynamically (#95442)
2fbbc3362b : [ONNX] Support 'dtype' argument for 'aten::norm' (#95637)
88a31f4be6 : hoist precomputed exprs from indices (#95690)
dc10ab15b7 : Warn on modification of OptimizedModule.forward (#95673)
6bdef7a5ff : Warn on dynamo OptimizedModule.forward() (#95672)
20dfce591c : Add support for Inductor + symbolic shapes + training (#93059)
70029214f3 : [jit] Add c++ stacktraces for jit::ErrorReport (#94842)
e3c5c369ba : Run tests in USE_PYTEST_LIST through run_tests (#95659)
e5b9d98752 : Rephrase zero_grad docs (#95643)
ba43d908f9 : Build Triton in Docker image (#95233)
b55d0d2aef : Fix trymerge changed files count (#95720)
2cc845eb1a : Enable thp (transparent huge pages) for buffer sizes >=2MB (#93888)
f1dbfe2f2a : [ao][fx] Enable observed -> quantized float for static quantized MultiheadAttention (#95636)
fafb410985 : Clean up unused `fill_` sample inputs (#95117)
835122c89f : Add missing f-string specifiers (#95707)
e13b804105 : Add standalone torch._inductor.compile() API (#95594)
fc324d3485 : [quant][pt2e] Add support for dynamic quantization with symmetric quant for input (#94854)
f8ad64d5eb : [dynamo] avoid truncation of python pointers (#95619)
1e15a272ff : [dtensor][BE] remove redundant tests (#94838)
2a1cb9640c : [dtensor] support creating DTensor in submesh (#95458)
261eb46ddd : [dtensor] refactor get_coordiniate (#95457)
bb9a05b116 : [dtensor] use tracing for metadata prop (#95456)
80614783e3 : Enabling FlashAttention for SDPA when given NestedTensor (#95438)
57f2c5888f : Update skip message to reflect why test is being skipped (#95127)
4fada6eb95 : MHA torch.jit.script fix for in_proj_weight = None (#95653)
1a72712645 : Add dynamo graph break stats to CI (#95635)
f33180fb7f : [MPS] Add pow.Scalar (#95201)
71ad1005f6 : Add prelu into Autocast CPU whitelist (#95366)
b87229f19d : Reland #94719 - Update ideep to add primitive cache for ARM (#95688)
05943712a4 : [MTA] Skip size-0 tensors in `multi_tensor_apply` (#94655)
9e16f1281f : [MPS] Add copysign op. (#95552)
b7c2a65139 : [MPS] Fix type casting copy with storage offset (#95573)
7c66333c08 : [pt] add share_memory_ to aten TensorBase (#95557)
58648822b6 : Handle int/float arguments for cpp codegen in inductor (#95533)
447f5b5e2d : [bazel] enable sccache+nvcc in CI (#95528)
49ba11962e : Update Dispatcher.cpp (#95589)
3944e7c3e8 : Fix grammatical errors in contribution guide (#95454)
46385b3e48 : Fix typos under torch/_dynamo directory (#95599)
38c32e19c8 : fix DeprecationWarning (#95545)
3762e801ba : Update dynamic skips (#95587)
8b0543381b : [Inductor] Support sparse_grad for torch.gather (#95490)
454c48b987 : Add experimental torch.export prototype (#95070)
801b3f8fc7 : Revert "Use FindCUDAToolkit to find cuda dependencies (#82695)"
f8692dcc4a : Node.stack_trace should have innermost frame last (#95592)
b818b3fe1c : better error message when functionalization can't handle op (#95392)
ddd6b53d80 : fix embedding_backward_dense decomp with broadcasting (#95499)
84e2d957a1 : fix primtorch handling for sub.scalar with alpha and float64 arg (#95421)
eff5ae8746 : Better mark_dynamic assertions (#95566)
4e926db1f8 : Add super().setUp() in test_symbolic_shape_analysis (#95336)
d7146e7870 : Update copyright (#95652)
10bf019b71 : [jit] Add shapes info to the output type of CallFunction nodes after tracing, if the output is a tensor (#95544)
5272d6e6e5 : Remove mentions of distributed/_shard/test_replicated_tensor (#95632)
38fdd28db4 : [4/N][Deprecate ST][BE] Move warnings of Partial Tensor to functions (#95631)
33cf62359d : Revert "Convert operator.not_ to torch.logical_not (#94626)"
cc6da7b901 : Inductor allgather_into_tensor (#95530)
68eec90cfd : Support elementwise add / mul for [B, *] nested, [B, 1] dense (CUDA only) (#95620)
1fe2a9d122 : Add _int_mm to expose cuBLAS int8@int8 -> int32 matmul (#94339)
32558910f3 : make overriding operator warning message only print once (#95179)
29f9a702cc : [NCCL] (re-open) Optionally avoid `recordStream` calls in `ProcessGroupNCCL` (#89880)
f43ce9553b : [meta_tensor] polish error strings in meta registrations (#95052)
fa5a4b0dfc : [CI] Do not compare two eager run results against fp64 result (#95616)
34617d7eb8 : dynamo export should be able to export identity function (#94962)
868640e094 : Re-enable a FX-to-ONNX kwargs Test (#94763)
8dfac7b887 : Update `fx.pass.graph_drawer` usage doc to draw fx graph (#95534)
f27e09de04 : Cleanup Windows warning suppression in CMake and fix some warnings in the source code (#94927)
d950f45577 : Revert "[Functional Collectives] Migrate DeviceMesh::all_reduce to use functional all_reduce. (#95009)"
1cf11c1c86 : Add bfloat16 support to upsample (#95500)
c44a733018 : Fix split_module bug (#95493)
a3b505c55e : [Quant] Fix setting fixed qparams for inner LSTM ops (#95537)
31ce32b03d : Fix typos in documents under torch (#95597)
3beb644578 : [dynamo] Fix keyword argument name of all_dim (#95600)
4f84c57c87 : Fix potential deadlock when recording memory traces (#95273)
9a4cb9bcaf : Fix typos under torch/_inductor directory (#95601)
5d70ee93fa : Expose more headers for extensions. (#95447)
c1fa403e57 : suppress nvfuser loading warning when we disable nvfuser (#95603)
97ec340fe9 : Fix double-a typo (#95470)
4930ae7f82 : [MPS] Add roll op (#95168)
448c97ca10 : Revert "Disable MacOS M1 test jobs (#95509)"
b89fda51cd : Implement sparse semantics support in gradcheck (2nd try) (#95405)
ea367347c0 : [inductor] Allow list of decompositions to be overridden (#95468)
325b43661e : add/add_ for compressed sparse inputs: bypass BLAS in some trivial cases (#95293)
d301caa890 : Deepcopy output node metadata (#95426)
b3175ae95f : Avoid copies in matmul (#76828)
d83a14e7f6 : [inductor] enable `test_grid_sampler_2d_dynamic_shapes` (#95575)
03cc0f587c : Don't create large intermediary tensors in the backward of matmul (#95261)
fd8367a7b1 : [MPS][BE] Introduce xfail (#95045)
11f293a74e : Comment about Meta-internal usage of trymerge.py (#95536)
fb10e66d35 : Bulk convert numel() to sym_numel() in FunctionsManual (#95543)
21f680e8ad : Follow up on CUDA 12 support for PyTorch/Caffe2 (#95582)
5265170029 : [inductor] enable `test_recompile_on_index_dynamic_shapes` (#95581)
6624a73837 : Move istype and object identity tests into a dispatching dictionary. (#95476)
d6dd67a248 : Dynamo: Use out-of-place binary ops instead of in-place (#95446)
7dd95ad7f3 : Add a convenience shortcut for accessing size on ComptimeVar (#95404)
56c3e4ce35 : [inductor] Shrink mm configs for small sizes (#95555)
6e61629f10 : [inductor] Refactors/improvements to max-autotune (#95554)
d3e1f165b3 : Copy helper next_power_of_2 from triton (#95436)
261b019a64 : Copy nn_module_stack metadata when tracer creates a node (#95358)
bc51ee4ed7 : fix spurious aot autograd warning (#95521)
6c30dc6cee : [FSDP] Save `_all_handles`; `_all_fsdp_states` to root (#95465)
ac9b305afe : Back out "cherry-picking autodiff support for gather/index_select (#93333)" (#95565)
3064bc4060 : [dynamo] Reserve the tensorrt backend name for torch-tensorrt (#94632)
fa7f17799a : [3/N][BE][ST Deprecate] Remove Replicated Tensor (#95453)
a88bfc60c7 : [2/N][ST deprecate][BE] Remove Replicate Tensor convert from DDP and PTD (#95450)
9b7abc4fac : Run slow gradcheck tests sequentially (#95494)
9bca9df42b : [BE] Fix TORCH_WARN_ONCE (#95559)
407b0f3214 : fix for debug crash build (#95464)
d78274b759 : Automatically guard when SymInt is converted to int (#95479)
a530446f57 : Manual submodule update: kineto and libfmt bazel issue (#94756) (#95535)
02d44e5de4 : [Dynamo] Support CUDA stream passed from outside of torch.compile decorator (#94627)
ab1ab3ab19 : [CI] Specify more torch.backends.cudnn options to reduce non-determinism (#95478)
4dca9bde05 : [MPS] Add fmax fmin op (#95191)
057bc7191d : [Dynamo] Remove torch.autograd.profiler.profile workaround in UserDefined (#95504)
f5cf1a8b43 : Update triton hash (#95540)
ee6610ddf6 : [vision hash update] update the pinned vision hash (#95532)
b8151d2ba9 : Utility for running delta comparisons between two flag configs (#95411)
69d62373aa : Move multi-line wrap functions to helper (#95472)
a33d8133a5 : Slight cleanup of VariableBuilder giant if condition (#95471)
8693604bc6 : coreml - Wrap Core ML execute and forward calls in autorelease pool (#95384)
ca59b2d375 : Fix co-dev regression in github-exports-check job (#95345)
acb81c1c5a : [pytorch] Bump SoLoader version to 0.10.5 (#95498)
afece1992a : Disable MacOS M1 test jobs (#95509)
cc39cd6938 : [CUDA][CUBLAS] Explicitly link against `cuBLASLt` (#95094)
b215af2db8 : [optim] Add general documentation on our algorithm defaults (#95391)
0520a680c0 : Rebuild LICENSES_BUNDLED.txt (#95505)
b855b5eaac : SymIntify topk (#95015)
f53671e46e : [inductor] Bugfix in autotuning cache handling (#95435)
76cbe5797d : [MPS] Add TORCH_CHECK for Conv (#95480)
4c8ad93a7c : [Inductor][CI] Remove hf_GPT2_large from CPU inference test (#95473)
01c861af14 : Added utilities to instrument kernel bandwidth numbers (#95355)
d677432b70 : Remove non-existing third_party/catch from CMake (#95420)
9ded087bac : During export, generate Python TENSOR_MATCH guards (#94970)
80a6b24ee1 : [pt] move csrc shm logic to aten storage utils (#95228)
a12e92d8e4 : Support nn.Module forward hooks in torchdynamo (#92125)
d89bfa16e7 : [quant] add serialization method for quantized hardswish (#94486)
9d04d376d8 : docs: Match open bracket with close bracket in unsqueeze (#95215)
6665fe9e65 : [vision hash update] update the pinned vision hash (#95427)
a641d60757 : hotfix for memory leak in aot autograd induced by saving tensors for backward (#95101)
4846d52212 : inductor: fix compiler error when trying to vectorize logit_and and logit_or (#95361)
0765dbc25e : [Functional Collectives] Migrate DeviceMesh::all_reduce to use functional all_reduce. (#95009)
5cad542e43 : [MPS] Add log_sigmoid op (#95280)
9f707f164e : Add more GPU metric instrumentation (#91717)
8efe4fd590 : Memoize repeated nonzero calls to the same fake tensor (#95399)
4833e47feb : Add support for nonzero, some improvements to reduce guards (#95387)
627282fa6c : Corrected grammar in contribution guide (#93014)
3bafecf719 : Revert "Add various uninterpreted bit tensor data types (#94992)"
f172c7c60a : Improve retries when ECR login is flaky (#95398)
6dc81f7bdd : Update docs that Parameters are immune to no_grad mode (#95232)
98c5921ed5 : Upload artifacts from inductor-A100-perf to S3 (#95401)
24dd37ef51 : Add BOOL_FALSE guard to optimize empty container case (#95248)
9c45f47bbe : [FSDP] Save `_fsdp_states` on root (#95343)
cba8b12fa7 : [quant][bug fix] Fix qrange_len in `torch.ao.quantization.utils.py` (#95297)
0eeb04652a : [vulkan] Pad channels when using texture storage instead of "tight packing" (#95251)
d4882a9445 : Make the cuda device assert error message clearer (#95360)
ec10d23c51 : [dynamo] Fix list contains check (#95092)
0c0694495b : Fix a bug in nesting check_sparse_tensor_invariants context managers (#95372)
808879ec8b : Revert "Implement sparse semantics support in gradcheck (#94714)" (#95386)
fb3ff77438 : [mergebot] Fix for pagination error (#95333)
254b161def : Revert "During export, generate Python TENSOR_MATCH guards (#94970)"
cb6e38d89d : Revert "Update docs that Parameters are immune to no_grad mode (#95232)"
b9e95158d5 : [MPS] Fix LSTM backward and forward pass (#95137)
86efa104f5 : [MPS] Fix view op slicing for 2nd dim in case of 0 offset (#95381)
5783cee2a3 : Update docs that Parameters are immune to no_grad mode (#95232)
af202aea34 : Add knobs for globally turning off 0/1 specialization and duck shaping (#95352)
94fd063f3f : Stop subclassing sympy Symbol (#95313)
cece63f197 : Add warn-once deprecation warning to legacy sparse constructors (#94850)
3b966a6ce3 : [autograd] disable backward/grad for complex scalar output (#92753)
b5ff41a47a : [Dynamo] No graph break on calling dict & collections.OrderedDict() (#95250)
bc438af6fe : std/var: support floating point correction value (#94073)
56aed2a6bb : SymFloat: Expose comparison operators in C++ API (#94812)
5730cabdd0 : Use float type to compute norm reduction for CPU half and bfloat16 dtypes (#95166)
6912cf4053 : [DCP] Update DCP to use the updated FSDP optim state_dict APIs (#95303)
c97275acf6 : Fix OOMing periodic shards (#95246)
bdb78e529e : [PTD][DCP] Add fsdp checkpoint example (#95258)
c594a32f60 : [vision hash update] update the pinned vision hash (#95340)
29c235e555 : [SDPA] Fix bug in parsing scaled_dot_product_attention arguments (#95311)
8e391c735f : use 4 warps for small block config in mm (#95339)
ba8ff4be4d : [inductor] enable `test_nll_loss_forward_dynamic_shapes` (#95176)
f98733e976 : Fix disbale typos (#95322)
586ac98cde : Bugfix nested mem_efficient path in SDPA when E_qk != E_v (#95330)
78175ceeab : [FSDP][Docs] Re-add why reg. post-bwd hook on 1st forward (#95326)
f247129f23 : Avoid FPE when running batch norm with zero batch size. (#95324)
a257486bdd : coreml_delegate - Add input shape in error when throwing from predicting (#95249)
3ebab9aeff : [pt2][inductor] switch dinfo representation (#95302)
ca7eb1bab2 : Preserve meta["val"] on export (#95314)
f6f413c6b6 : Second part of splitting #91254 in two (#92749)
cbac56e244 : [BE] Simplify `Source.is_nn_module`; add some types (#95292)
674ef1f9be : Make fx.Transformer.get_attr call tracer to preserve node.meta (#95245)
c0fa0669f6 : Update isend/irecv warning messages for nccl (#95236)
7ac511c29a : Implement sparse semantics support in gradcheck (#94714)
b6a1c238bd : [MPS] Remove mps specialized path in BCE backward (#95220)
69c76ff05e : [MPS] Add xlogy op (#95213)
5fa937886c : [DCP][nit] Rename variables + minor documentation fix for optimizer.py (#95264)
3758559a58 : Reland "Introduce constrain_range; remove old expr_subs (#95063)" (#95209)
d6a8d397da : Fix formatting for merge failed message (#95234)
d88d4145c3 : [MPS] Fix Float16 issue with Reduction ops for macOS 12 (#94952)
5e47571a13 : [MPS] Convolution cleanup; remove unnecessary contiguous calls (#95078)
02a6d4334b : [MPS] Handle broadcasting by expanding src tensor in Copy.mm (#95272)
5a8092f058 : During export, generate Python TENSOR_MATCH guards (#94970)
8475af7761 : [MPS] Cast int64 to int32 for reduction ops (#95231)
6ae60b19b7 : Revert "During export, generate Python TENSOR_MATCH guards (#94970)"
5f24b2b1f0 : [pt2][inductor] search caches by default (#95134)
8de4238a31 : Add dynamo bench arg --per_process_memory_fraction (#95260)
a4b02a15d3 : Support registering op returning symint in python (#95240)
097679478e : [optim] Set defaults to foreach, NOT fused (#95241)
2f547ae613 : Remove SHA checksum for bazel http_archive from GitHub (#95039)
8d22eb61aa : Upgrade setuptools before building wheels (#95265)
a4d866b1eb : Update triton hash (#95247)
e769371781 : [vision hash update] update the pinned vision hash (#95252)
cf6e078c34 : Revert "Reland "Introduce constrain_range; remove old expr_subs (#95063)" (#95209)"
f67d2df933 : [ONNX] Refactor validation op-level (#94920)
c399ee09fe : Use PyTorch wheel in Windows CI (#94958)
f70a3430aa : [MPS] Add hypot op (#95196)
7289d22d67 : Use FindCUDAToolkit to find cuda dependencies (#82695)
5d1fec80e3 : [BE][CI] remove .jenkins entirely (#92625)
f20c4d2345 : Stop printing giant container in test failure message (#95226)
ed4b6d2113 : [profiler] update docs with repeat=1 (#95085)
640b9c80f9 : [primTorch] Redefine prim.collapse{,_view} end point to be inclusive (#92017)
2622adb980 : [primTorch] Make `prims.collapse` a real prim (#91748)
0d2e91573e : Reorder the Fx execution order to in-time get_attr rather than putting all get_attr ahead (#95014)
e5785f1e34 : If the input is contiguous, short-circuit infer_size_dv in reshape (#95216)
7b403c8c75 : Nvfuser moving python tests and files under nvfuser (#95155)
5d2eb6d636 : During export, generate Python TENSOR_MATCH guards (#94970)
307ebacf94 : [dynamo 3.11] fix to eval_frame.c (#94102)
1123ab8647 : [dynamo 3.11] changes to with contexts (#94101)
04d931d979 : [dynamo 3.11] changes to MAKE_FUNCTION and MATCH_KEYS (#94100)
d5aaf54261 : [dynamo 3.11] fix cell/freevar offsets (#94099)
055a9e45aa : [dynamo 3.11] changes to LOAD_GLOBAL and function calls (#94098)
da98053c6d : Fix bug where a github api failure would prevent the label check from failing (#95098)
311b20aae1 : [fix] torch.pow handle real negative base and complex exponent (#95198)
976d289e86 : Fix `update_pytorch_labels` workflow (#95227)
b0f22f8d2b : Use `run_subtests` utility in FSDP `test_state_dict_save_load_flow` test (#95090)
bef3c02330 : try triton with remat fix (#94882)
f7bf31fff1 : Reland "Introduce constrain_range; remove old expr_subs (#95063)" (#95209)
ce950b412f : Reland "Add torch.empty_permuted (#95069)" (#95208)
8aa34602f7 : Jetson Update for CI Redo (#94549)
c6d8d10b3e : Fix warning if backend registers timer (#91702)
7ca623c2e1 : Fix convit_base (#95174)
92e03cd583 : Revert "Add torch.empty_permuted (#95069)"
079476c6b2 : Add a check for n<0 and a test for it (#95144)
4e88547c95 : Revert "Introduce constrain_range; remove old expr_subs (#95063)"
1ab112cfab : code is clean enough that some warnings can be enabled (#95139)
e0a0329a67 : [MPS] Add hardsigmoid op (#95164)
d96aac8d2a : [MPS] Add logit op (#95162)
062380db91 : Fix Typo (#95173)
aa042a57cd : [inductor] fix max_pool2d with ceil mode (#94887)
1aea2d2ec3 : for SymInt nodes in fx graph, get result from node meta in inductor GraphLowering (#95152)
77dae43767 : Don't truncate leading 1s if they are unbacked (#95141)
f54233e273 : [foreach] bump tensor's version and define backward via torchgen (as possible) (#93901)
83b5eb4e16 : [sympy] fix ValueRanges.pow error when b.lower is float (#95151)
679e5dbfa1 : [executorch] Always generate CustomOpsNativeFunctions.h if custom_ops.yaml is present (#95084)
da41003b5f : [MPS] Fix the uint8 type issue with View ops kernels (#95145)
08370ddad8 : Update model skips (#95089)
4d753b5045 : [WIP][dynamo] simplify module_key creation logic (#94945)
954c767bc6 : [Inductor] Enable accuracy test for CPPBackend (#94898)
3dcf8b6140 : [Fix] Inbound check of sorter indices in searchsorted (#95109)
286d821e61 : Don't replace FloorDiv with floor in simplify, do simplifications for divisible exprs (#95076)
bedeb1f014 : Add torch.empty_permuted (#95069)
50ec4ddb70 : fix 'sympy.core.logic' has no attribute 'boolalg' (#95130)
567362cedb : [inductor] move dynamic shapes tests into a new file (#94971)
3711f7c59f : Introduce constrain_range; remove old expr_subs (#95063)
f89ae0a7f4 : Revert "Only truncate leading 1s if the value is too big. (#94521)"
06489a3c1c : [functorch] roll : fix batching rule for scalar tensor (#95048)
039b4c8809 : Add meta function for _upsample_bilinear2d_aa (#94982)
17d0b7f532 : [pt2][inductor]global autotuning cache (#94922)
3f381473cd : [blob inspector] free memory from workspace for di blobs post stats (#95064)
a17a7ccc92 : [MPS] LogSoftmax numerical stability (#95091)
9511b9fad2 : [MPS] Fix copy_cast_mps() on tensors with storage offset (#95093)
25ee6dd335 : [MPS] Fix fill_ where input tensor has a storage offset (#95113)
57830a9655 : [vision hash update] update the pinned vision hash (#95106)
9bb2fe3eae : fix numpy1.24 deprecations in unittests (#93997)
9dbfca7840 : Add various uninterpreted bit tensor data types (#94992)
e44737e619 : Revert "Update error messages to reflect why test is skipped (#95049)"
8928e7bdb8 : Raise error on 3.11 dynamo export (#95088)
4fc277c338 : [Quant] Add lowering for pixel_shuffle (#94769)
c16b2916f1 : Back out "fix: make sure `sorter` indices are inbound in `searchsorted` (#94863)" (#95086)
22e797a878 : Update error messages to reflect why test is skipped (#95049)
500ebb2cd6 : Fine grained dynamic shape controls (#94787)
30c07722d1 : Revert "Inductor: fix incorrect result of inplace unsqueeze (#94797)"
17c149ad9e : Revert "[CI] Use prebuilt triton from nightly repo (#94732)"
0205ffb8d9 : Fix expired deprecation of comparison dtype for NumPy 1.24+ (#91517)
d5d55363d9 : Add broadcastable check to index_put (#94849)
e0ede1cc30 : Revert "Fine grained dynamic shape controls (#94787)"
0a9c608461 : [MPS] Fix tensor with non-zero storage offset graph gathering (#91071)
5de3ead712 : [MPS] Add optional `minor` argument to `is_macos13_or_newer` (#95065)
c43e88665a : [Resubmit] helpers to torch.dist.utils (#95025)
2aa806608b : Fine grained dynamic shape controls (#94787)
766d51b496 : [export] Add a data type for representing export workflow information. (#95013)
c137d3d688 : inductor: enable lowering for bitwise_right_shift (#94997)
d978395f55 : Deprecate Caffe2 ONNX exporter (#94994)
2f9ffe7b0a : Add torch.utils._sympy.interp (#94985)
ccef485221 : Add boolean/comparison operator support to ValueRanges (#94944)
08ef83f07c : Add exhaustive testing to ValueRanges, fix bugs (#94939)
12c9a932ca : Assert more invariants on ValueRanges (#94906)
950a9efcc3 : [Dynamo] Enable test_autocast_sdpa (#95011)
2cf1a7d79b : Fix clang warnings and other minor issues (#94975)
45d775cedb : [BE] Cleanup triton builds (#95026)
a2afc657da : [MPS] Fix upsample for NHWC output (#94963)
a8cbf70ffc : Inductor support for aten::all_reduce (#93111)
5d1e9fd214 : [MPS] Fix prelu backward pass (#94933)
acc1dfe670 : [vision hash update] update the pinned vision hash (#95017)
16a4579335 : [FSDP] [composable] [BE] warning should read TorchRec, not DMP (#95010)
e5496ebcac : [torch] [composable] [analytics] add analytics logging to PT-D composable APIs (#95016)
13ebffe088 : [CUDA] `sm_87` / Jetson Orin support (#95008)
0dffbcd4fa : Remove unnecessary TensorMeta rewrap (#95004)
d9950c5215 : Hard code known true contiguity settings for unbacked SymInts (#95003)
a2f44d82f8 : Flag guard unbacked SymInt/SymFloat support (#94987)
30d0112bf3 : fix performance issue in torch.sparse.mm reduce mode (#94969)
bb347dc3c3 : [PTD][DCP] Add 1D DTensor based DCP (#94868)
5cdedab0cc : Raise error if torch.compile is called from windows or py 3.11 (#94940)
8126bb5529 : Mark linux-focal-py3.8-gcc7 / test (distributed) as unstable temporarily (#95002)
b45ec156a8 : Revert "Temporarily disable ROCm trunk tests (#94995)"
e0106e1850 : Use the run_subtests utility instead of self.subTest (#94983)
ee0e7f0529 : [dtensor] add checkpointing example (#94743)
59005bb998 : Fix segmentation fault in script_type_parser.cpp and unpickler.cpp (#94815)
03f4a63fd8 : Only truncate leading 1s if the value is too big. (#94521)
4f257a507c : [Dynamo] Support Python builtin sorted function (#94949)
5747a51657 : Fix flaky StaticRuntime.Nonzero test (#94418)
b209d8fa0d : [PT-D][Sequence Parallelism] Enable DTensor based Naive sequence parallelism (#94369)
29fdb354ff : [MPS] Fix embedding_backward() issue with Float16 (#94950)
21eb7f70f1 : Nvfuser python API import fix (#94036)
7aaebe00ee : Fail dynamic_aot_eager AllenaiLongformerBase model (#94986)
920ad2415c : Temporarily disable ROCm trunk tests (#94995)
641cb4243c : Fix c10d regression during cleanup. (#94988)
b652577d8e : Change test_torchinductor_opinfo.py to mark skips/xfails in a better way (#94813)
981511d0fe : Upload coredump from ROCm and print the stacktrace (#94938)
ef5de0a4cf : Don't use PrimTorch decomposition for empty (#94512)
2f32fd7762 : Introduce branchless implementations of TensorImpl bools (#94473)
e22d791287 : [PTD] Introduce tracing friendly collectives. (#93990)
d0fbed76c6 : Test inductor with stock g++ (#90710)
89e16c4f18 : Assume sympy is always installed (#94903)
23b1af0399 : Inductor cache clear (#94918)
68600fc7c6 : avoid extra copies in batchnorm inference by introducing a new op, _native_batch_norm_legit_no_training (#94946)
2ef6659107 : [Dynamo] Raise warning if user has hooks installed on the module (#94848)
bfec4965a1 : [inductor] Get compiler from environment variable if exists (#94926)
28e69954a1 : [ONNX] Support aten::bit_wise_not in fx-onnx exporter (#94919)
a0389681c2 : [complex] nansum & nanmean (#93199)
6ae06e49ac : Inductor: fix incorrect result of inplace unsqueeze (#94797)
aa9e481e0c : Revert "Re-enable a FX-to-ONNX kwargs Test (#94763)"
a049bbb100 : Revert "Change test_torchinductor_opinfo.py to mark skips/xfails in a better way (#94813)"
e751553848 : [vision hash update] update the pinned vision hash (#94866)
753c33bf86 : Enable half type support for unique cpu (#91666)
04b4704a0b : Re-enable a FX-to-ONNX kwargs Test (#94763)
4b2d1beca2 : [dynamo] keep submodule's name for nn.Sequential when unrolling (#94913)
8c44ae2f5d : [inductor] enable `test_lowmem_dropout1_dynamic_shapes` (#94884)
5e1de31548 : fix: make sure `sorter` indices are inbound in `searchsorted` (#94863)
a863d5e37c : Hide failing merge rule's name in the internal debugging section (#94932)
a4085ab837 : [dynamo] support custom __getattr__ on torch.nn.Modules (#94658)
bfc0d5e22c : Change test_torchinductor_opinfo.py to mark skips/xfails in a better way (#94813)
3d40a86acd : [ONNX] Enable skipped gpt2 test (#94930)
b4c8186774 : [BE][1/N] Add deprecate msg to Sharded Partial and Replicate Tensor (#94928)
07bc6b9587 : [SDPA] Update dispatch logic to check for sm86 and head_size == 128 for flash attention (#94921)
41865bd8ed : [executorch] Add RuntimeContext to generated C++ API Signature (#94570)
e5c2a35d83 : Add check that embedding_bag's weight is 2D (#94931)
3e9df622fb : [mta] implement `_foreach_pow` (#92303)
e28ba6813d : Enable persistent reductions (#94847)
0d7913c9c1 : add backwards for layer norm nested (#94781)
904d549ca4 : Add some simple sanity tests to ValueRanges (#94905)
e8dc34eaeb : [MPS] Move max_pool2d to mps dispatch key (#90772)
250c054bdd : [SPMD] Pull the minimal working distribute API and SPMD module to PyTorch (#94802)
bc361fdfdf : [MPS] Fix bilinear backward pass (#94892)
dd7e2b7c0e : [pt2][inductor] update choice caller hashes (#94853)
0698af67c7 : Revert "Fix XNNPACK OSS Buck build (#94935)"
c01f5118a6 : Add float to list of allowed ops (#94910)
9d2fddf820 : Fix XNNPACK OSS Buck build (#94935)
a005dd1c01 : [MPS] Fix nn.functional.conv_transpose2d grad (#94871)
cd9ca4c73f : [tp] additional doc fixes (#94786)
b6df987671 : [Inductor] Added aten.normal_ decomp (#91207)
092e28f17f : Make the glue compute short circuit only if possible (#94437)
ff7772317b : Stub all TensorImpl bools; do not go to Python if not hinted. (#94431)
6da88bc966 : try to fix OSS CI error (#94785)
dea05cdbf0 : [MPS] Fix the crash in elu_backward() (#94923)
66bea59538 : Clarify meaning of `pin_memory_device` argument (#94349)
f2c26420f2 : [pytorch] Add support for "height" and "width" dimension for the "select" operator on pytorch vulkan backend (#94612)
fa1ea9f9bc : Revert "Re-enable a FX-to-ONNX kwargs Test (#94763)"
b46b2e35d4 : [BE] Add flake8-logging-format linter (#94840)
dc4f2af6f6 : Take `CUDA_VISIBLE_DEVICES` into account for nvml calls (#94568)
ea657726d9 : Re-enable a FX-to-ONNX kwargs Test (#94763)
1d7133c542 : inductor(cpu): fix C++ compile error when sigmoid's post op is a reduction op (#94890)
7dd7dde033 : [MPS] Convert output back to ChannelsLast for MaxPool2D (#94877)
54ebf255ab : [MPS] Fixes for LSTM. (#94889)
799df90d0e : [ONNX] Add bloom ops (#94878)
3ace14eb8b : [Bug fix] sparse_mask: wrong intersection on CUDA (#94829)
0c3ba78568 : [FSDP] Fix `clip_grad_norm_()` when rank has no local gradients (#94835)
8da776e3a7 : [FSDP] Fix "use-after-free" in reshard logic (#94859)
5a54537918 : Add further info to `masked_scatter` and `masked_scatter_` documention (#94545)
5705199fb1 : Update smoke test threshold (#94888)
77d1135566 : [ROCm] Pyt 2.0 rocm staging (#94660)
71ec2617d2 : [MPS] Block uint8 data type for unary and binary ops on macOS 12 (#94876)
8261c600b7 : Update ideep to add primitive cache for ARM (#94719)
c10acb834d : Revert "Temporarily disable inductor torchbench test (#94873)"
e0a954f531 : call `zero_grad` in foreach/fused optimizers tests (#94724)
afadc3697a : [ONNX] Fix assert in cat (#94870)
3d5f4dcc4d : Update vision commit pin (#94874)
117fafc260 : [CI] Install `pytorch-cuda` for conda testing (#94852)
79b7c697a4 : Temporarily disable inductor torchbench test (#94873)
abf59f5703 : Make _simplified kwarg private (#94782)
ae57bd6630 : PT2/TorchScript interoperability fix (#94678)
b6443fca86 : [ONNX] Wrap op validation inputs and add export_options.py and function_dispatcher.py (#94721)
5bc72bd019 : sym_int simplification for integer args, attempt 3 (#94799)
65b998325c : [inductor] Disable developer warnings for "2.0.0" version (#94845)
7f7f91e36f : add reproducibility notes to nn.UnpoolND operations (#94629)
7c44823a4e : Fix layout/device checks in sparse-dense addmm (#94843)
40cb494b1a : Switch Docker release to CUDA 11.7 (#94818)
98012e4a59 : [ROCm] hipGraph support for pytorch mainline (#88202)
79783a51da : [torchgen] Loosen the restriction for only allowing 2 nested namespaces for kernels (#94834)
7ef76ce6c3 : Preloads more nvidia pypi library for multi arch distributions (#94355)
97510c6d50 : Convert operator.not_ to torch.logical_not (#94626)
69bcefceec : [ROCm] Added MIOpen header files to installation package for ROCm. (#92969)
989299802c : Use s3 for some test infra files (#94642)
63bf7674fa : add backwards for gelu and relu on nested tensors. (#94776)
b7e1477e9b : Improve leaky relu doc (#94090)
33f13fc959 : Fix XNNPACK missing symbol from post-operation.c (#94768)
4a5ce921a0 : Add HPU to compatible shallow copy list and remove lazy HPU changes (#94673)
5c64d2141f : [ONNX] Add ExportOptions and op_level_debug mode (#94720)
3fc4bc115f : [functorch] jacrev, jacfwd error for complex input or output (#94805)
18d93cdc5d : [CI] Use prebuilt triton from nightly repo (#94732)
57b22bc6d8 : [Dynamo] Backend registration with ``entry_points`` (#93873)
94f0808629 : [MPS] Add fmod op. (#94722)
d1d5d16df3 : dynamo: handle straight-line graph breaks for autocast context manager with constant args (#94137)
73ee4964d3 : Add new checks in CI system to verify the built linux pip wheel with cpu-cxx11-abi (#79409)
22e2fd554c : OpInfo for aten.exponential, Add check for dtype, parameter in decomp ref (#92709)
1dbaa5c290 : Use decompositions for some fallbacks introduced in #94039 (#94206)
b005ec62b9 : [BE] Remove dependency on `six` and `future` (#94709)
39511697d4 : [PT-D][BE] Update 2D parallelism API name and docs (#94771)
53062e1fe4 : inductor: fix size and stride comparison (#94481)
28ed0bdb37 : Revert "[tp] additional doc fixes (#94786)"
bafc4e377b : [vision hash update] update the pinned vision hash (#94784)
5cd2b65816 : [inductor] fix sympy.core.numbers.Expr (#94780)
7522ca55f1 : [tp] additional doc fixes (#94786)
1f06a71797 : [MPS] Error out for square int64 input (#94766)
d567df9f36 : [dynamo 3.11] remap dup/rotate to copy/swap (#93988)
751bab094a : [dynamo 3.11] support new binary ops (#93987)
d4d13d99e4 : [dynamo 3.11] support new jump opcodes (#93986)
3faa636196 : Clarify the instructions for setting up dev environment [skip ci] (#94155)
055dc72dba : [ONNX] Bump onnx to 1.13.1, onnxruntime to 1.14.0 (#94767)
7e3f79914c : Support functionalization for torch.map (#94558)
3ea59b68af : [c10d] Enhance broadcastUniqueNCCLID error reporting (#94752)
ce474bc643 : fix view + detach graph case for inductor (#94744)
9fb9219478 : Make DDPOptimizer work with torch._dynamo.explain() (#94749)
fb55f12cb0 : [cpu][inductor] improve cpu vec implementations of cos & sin (#94577)
cedb7e3d77 : [MPS] Fix remainder op for integral dtypes (#94757)
84a5aec8c6 : [ONNX] Add bloom ops (#94761)
5ed7c701a3 : [ONNX] Remove the deprecated monkey patches to torch.Graph (#94747)
92f3feabaa : fix torch.var backward when n==correction (#94546)
86240898de : Improve profiling and stack traces for SymNode method calls (#94410)
f1f26fe8ec : Streamlining guard expect tests (#94404)
9d5fcd37a2 : sym_max/sym_min introduce guard if hinted (#94400)
4acdc446b2 : [MPS] Fix batch norm for NHWC (#94760)
840fb74ec8 : 86990 range mps support (#91075)
f2aee8b8d5 : small fixes for mlir backend (#94717)
a0d1dbc446 : Fix pytest arguments when --save-xml is not passed (#94589)
e743d316e2 : Revert "fix some MKL detection issues of CMake (#94402)"
2db12e3844 : [tp] minor update to TP docs (#94748)
8b3e3f937d : Update documentation init_process_group optional backend (#94543)
25820b69f6 : Revert "[BE] Use data() method when possible as it's safer and more readable (#92755)"
5ee230face : [FSDP][1/N] Refactor module materialization (#94196)
6cef200af9 : [ONNX] Wrap symbolic method calls with graph context (#94746)
a6a433aecd : Add stack emptiness checks inside interpreter.cpp (#94298)
c0e7077674 : Fix link in docs (#94686)
d82c2b14c7 : jit trace will fail for parameter check if it contains param whose ki… (#94032)
4d6a4401f8 : Raise warning if torch.compile options change without reset (#94680)
7c3fc2c7f0 : Revert "Issue-88098: extract utils from check labels (#94597)"
1f7448eeda : Add missing super().setUp() to test_freezing and test_tensorboard (#94553)
bdf9963e57 : Cache linter S3 dependencies (#94745)
36dfbb08f3 : Revert "Update Cutlass to v2.11 (#94188)"
f70ba23415 : [inductor] enable `test_upsample_cat_conv_dynamic_shapes` (#94715)
0444a6c90a : [BE] Remove deprecated logging warn method (#94708)
ae7a628b03 : Dynamic shapes CI updates (#94690)
e355a5c1d6 : inductor: fix the CPP issue of flag_to_float (#94730)
b57e6fdb50 : [MPS] Enable Memory Leak Detection for test_mps.py (#94646)
ceb0f1576b : turn functionalization on in aot_autograd inference (#92857)
5ce1fad711 : Add rnn.unpad_sequence and rnn.unpack_sequence to documentation (#94316)
701412a4ec : Update gradcheck docs to mention non-differentiability (#94618)
a064ce1939 : Pin setup-buildx-action version. Fix Docker build (#94734)
216f88d084 : ao migration: remove package test as this behavior is tested by other things (#94422)
f6adbf4d97 : ao migration: delete unused test class (#94420)
2acac8a83a : Logcumsumexp for CUDA (build-time optimized) (#94310)
4869929f32 : Update Triton hash (#94249)
e61d5b9588 : Revert "Dynamo Export use fake tensor (#94276)"
641dc0b844 : Revert "[quant] Add quantize and dequantize operators to decomposition table (#93312)"
2628901033 : [Executorch][Quant] Add Choose_qparams_symmetric (#94685)
ab261ff514 : Tweak config for mode=max-autotune/reduce-overhead (#94659)
e7e51b3a5c : Fix NVML visible device parsing (#92315)
6fadd5e94a : Checkout torchbench with only needed models (#94578)
18587cb31f : [MPS] Add sort and argSort Op. (#94697)
046e88a291 : [BE] [3/3] Rewrite `super()` calls in test (#94592)
bdd8f518d7 : [MPS] Add Python Module Bindings for the MPS backend (#94417)
a0f9abdcb6 : Update Cutlass to v2.11 (#94188)
7ef46d40a1 : fix some MKL detection issues of CMake (#94402)
a8fdfb4ba8 : [inductor] Persistent reductions (#92267)
eb81e7ec22 : [FSDP] Avoid printing incorrect warning for _get_param_to_fqns (#94494)
963d8f547e : [FSDP][state_dict] Return tensors instead of FlatParameters to avoid pickling errors (#94637)
2c76838d7f : Issue-88098: extract utils from check labels (#94597)
d04fd6b808 : inductor: fix customer op _convolution_pointwise_.binary functional error at AOTAutograd (#94581)
fe0c7fbcf8 : [MPS] Add repeat_interleave to MPS (#88649)
b794fd19c5 : [MPS] Add scatter gather kernels (support up to 5 dimensions) (#94663)
e3c4cea668 : [functorch] Add support on CUDA keys for control flow ops. (#94465)
989fb7c921 : [vision hash update] update the pinned vision hash (#94557)
67d9790985 : [BE] Apply almost all remaining flake8-comprehension checks (#94676)
54c0f37646 : [MPS] Add support for TopK k>16 (#94639)
ed54a5d06b : enable bf16 emb (#94163)
020a0fbf62 : [MPS] Perf update to convolutions. (#94661)
4a762cb622 : [MPS] Fix channels last copies in ELU,ReLU and Hardswish (#94664)
371f587c92 : Dockerize lint jobs (#94255)
abfd293c39 : functionalization: fix x.is_contiguous(channels_last) (#94195)
aba4fb9a16 : fix functionalization resize stride compute (#94018)
2b36d35b9c : add torch.autograd._unsafe_set_version_counter API (#92924)
c74f438c01 : [MPS] Fix the cat op for NHWC case (#94662)
8ad10eab4d : [Dynamo] Fix bug of calling super from class extended from metaclass (#94547)
d09cd15216 : [Profiler] Defer recording startup python events (take 2) (#91684)
8d45f555d7 : [BE] [1/3] Rewrite `super()` calls in caffe2 and benchmarks (#94587)
aa6f0ace2f : Remove API declarations in Ops.hpp (#94532)
a27bd42bb9 : [ONNX] Use onnxruntime to run fx tests (#94638)
9dd7e83676 : update xnnpack to newer version and update API usage in pytorch (#94330)
e7a8af9376 : don't warn on explicit fallback in inductor (#94643)
4fe365774a : Revert "[MPS] Add Python Module Bindings for the MPS backend (#94417)"
77d9e36b0a : [ONNX] Reduce 'find_mismatch' memory footprint by promptly freeing past sessions. (#94648)
7f068b7978 : [MPS] Add APIs to query current and driver allocated memory in MPSAllocator (#94649)
6d1a9d7323 : Revert "Mark ROCm trunk job as unstable (#94550)" (#94631)
50bc25baa0 : Move ValueRanges into its own module (#94528)
bae397ec63 : Add filelock to MacOS dependencies (#94647)
07cdea7cda : inductor: fix guard_equals (#94506)
c1c7eaf52b : Prevent sym_int from showing up in FX graph (#94595)
030209088f : [MPS] Fix the regression with test_index_select_scalar() (#94645)
ceab30775b : [Inductor] Enable fusion of mutation ops in narrow cases (#94110)
7ce785b50b : [MPS] Fix gelu forward and backward ops (#94529)
507b8c3423 : [MPS] Native implementation for addr (#94538)
d51ca38ef0 : Run test_serialization serially (for 2xlarge runners) (#94613)
680fc84e7b : [dtensor] group public APIs together (#94524)
3d82d8d0ed : [BE] Enable more flake8-comprehensions checks (#94601)
0b31ebf9e4 : [MPS] Added zero check to inverse & fix for any op to avoid segfault issue (#94551)
45edf9a2ea : Reland: [Autograd] Use in-place input accumulation fast path for dense Tensors. (#90217)
beb4f5bf39 : [MPS] Add Python Module Bindings for the MPS backend (#94417)
d0cff06bcb : Call MPSAllocator callbacks when allocation fails. (#94133)
948cd61afc : add fallthrough kernel for AutogradMeta key (#94603)
0176405c69 : fix: check if double to i64 is in well-formed range (#94290)
3fb08199f6 : Remove unnecessary replace on self.expr (#94408)
480e0c0198 : Remove anaconda-prune yml files as these have been moved to test-infra (#94610)
c53bd0dd30 : Mitigate broken test_coalesce_reference_cycle test on dynamo (#94622)
728dfeee48 : [MPS] Fix ops with bool issues in macOS Monterey (#94464)
5b1cedacde : [BE] [2/3] Rewrite `super()` calls in functorch and torch (#94588)
d14a59b63c : [MPS] Update merge rule list. (#94619)
25619bdeb6 : [ONNX][Experimental] FX Exporter w/ ONNX Script and ATen Lib (#94566)
8d8fb7efe7 : [ONNX] Update diagnostics system (#94565)
88d0235b73 : [ONNX] Update CI test environment; Add symbolic functions (#94564)
c5c7687b74 : Allow FakeTensorProp to run on graphs traced with some None inputs (#94569)
534db77e73 : Autotune pointwise/reduction in max_autotune mode (#94556)
111c86bfe5 : Revert "[CI] Move M1 testing to periodic (#94608)"
7c4acdad4a : [MPS] Fix the crash in huberloss with Float16 (#94567)
d8f4026ebf : Continue support sharding pipes in `tud.datapipes.iter.grouping` as deprecated (#94527)
5c16788e5f : [CI] Move M1 testing to periodic (#94608)
e116ca93e1 : Run test_torchinductor*.py with implicit_fallbacks=False (#94039)
e44586a78f : Pass input tensor __dict__ along to placeholder nodes (#94080)
9171f7d4cd : [BE] Modernize PyTorch even more for 3.8 with pyupgrade (#94520)
70026aaad6 : [SDPA] update type hint for scaled_dot_product_attention and documentation (#94008)
9bef1ebb9e : Fix div by fp64 scalar issue on xla device (#94459)
67513aee6d : Cleaning up some logic in tools/shared/cwrap_common.py (#94475)
51cec7bf52 : add compile reason in InstructionTranslator RETURN_VALUE (#94176) (#94367)
92d8c4b37c : [MPS] Fix cumsum for integral data types (#94530)
d990ddadd5 : [fx] Fix matching args (#94375)
db6cfff827 : fix: forbid multi-index for index_select over scalar (#94347)
0d0ebcdfe5 : feature: adding the ability to restore shapes after loading a traced model (#90744)
c7c7238976 : Fix bug in unsqueeze_nested stride calculation (#88688)
889a4640a0 : [ONNX] Skip import test for experimental files (#94552)
c620ece726 : port sparse_mm.reduce to pytorch and optimize it on CPU (#83727)
24ae50bcc7 : Add config option to reduce warnings in inductor (#94413)
1d3980656c : [MPS] Fix min/max_reduction_with_dim ops (#94386)
0fe11589df : [MPS] Add im2col and col2im to Fallback (#94491)
a21bddcc90 : WelfordOps: Remove combine_t and use acc_scalar_t instead (#94522)
e22e323bea : [decomp] Use var_mean in native_batch_norm decomposition (#94140)
e844120b2f : Fix embedding_dense_backward to not cast indices to floats (#94572)
1770ccf6c8 : Don't throw tf32 warning if no nodes in graph are matmuls + fp32 + cuda (#94561)
f152a79be9 : Revert "update aten op overload to not use `from` to avoid compile errors (#89797)"
a5daea69fb : teach inductor to handle floor (#94341)
02b8a7f473 : inductor: don't do transpose vectorization if input ld depends on most inner var (#94493)
3a12b16fb0 : Renamed passes to options in torch.compile (#94500)
59e8756676 : [MPS] Fix the Channels last bug with GradientWithInput. (#94384)
8dbe63c99e : [MPS] Casting int64 to int32 for reduction ops and raise warning. (#94484)
715f3733ef : don't call floor for symint unless necessary (#94365)
89df0e4253 : Enable Python-3.11 binary builds across the board (#94430)
a1f15fb987 : [MPS] Fix batchnorm forward and backward pass (#94351)
2ad29009bf : [MPS] Fix addmm calculation (#94534)
10c430ba0a : Revert "Set torch.backends.cudnn.enabled to false when testing accuracy (#94363)"
a1d210de44 : Add exception handlers for stoll in jit/frontend/schema_type_parser.cpp (#94295)
d21a7e7193 : Assert TensorBox produced by lowering and add [Note: Inductor IR] (#94361)
01de5ddafc : add mixed data type support for LayerNorm backward on CPU (#88064)
54fa980186 : Dynamo Export use fake tensor (#94276)
2af89e96ec : Lower libtorch build parallelization to avoid OOM (#94548)
544c04f2df : Add uint8 support for interpolate for CPU images (#90771)
782e4f5c02 : [quant] Add quantize and dequantize operators to decomposition table (#93312)
df13247e67 : small bugfixes to release notes script (#94536)
93ee1bf168 : [inductor] Fix a conv stride assertion (#94405)
f5ccbc1704 : Ignore 7z locked usage log error on Windows non-ephemeral runners (#94483)
016f0b2f62 : [MPS] Calculate nonzero count inside nonzero op (#94442)
4c6a7faec5 : [Profiler] Use RAII wrapper to manage refcounts during python tracer startup. (#91646)
336d9354d6 : [MPS] Enable index add for TestConsistency (#94356)
299ada9cff : [MPS] Add the floor_divide fixes. (#94488)
93d7d546ff : Fix saved tensor hooks to propagate errors back to python as-is (#94456)
2a5851735a : Set torch.backends.cudnn.enabled to false when testing accuracy (#94363)
79ed6b246c : Mark ROCm trunk job as unstable (#94550)
2394e6baa9 : [quant][fx] Change prepare_fx and convert_fx to preserve the GraphModule type of input (#94412)
09598b603f : [dtensor] update readme for prototype release (#94517)
66bfcd32fd : [ROCm] Remove PYTORCH_MIOPEN_SUGGEST_NHWC flag (#90725)
c1e2704656 : ao migration: fix broken import, try 2 (#94458)
bebe58bd71 : [DCP] Set single_file_per_rank default to True (#94501)
54b7c7d5e9 : Added requested_bytes to CUDA Caching Allocator Stats (#88575)
dddc0b41db : [ROCm] centos update endpoint repo and fix sudo (#92034)
dd315e5c06 : Dynamo: Support ConstantVariable (comparison_op) SymNodeVariable (#94519)
88e16849db : [pt2] Fix multiple races in log folder (#93407)
444829fa21 : [nn] Remove deprecated `torch.nn.utils._stateless` (#94498)
f45c196653 : Update backend config to be under _World (#94191)
98d3612e48 : [Profiler] Enable SOFT_ASSERT to log Invariant Violation to Kineto (#92872)
92620aface : [DCP]Update optimizer.py docstring (#94379)
760836f738 : Add back in registration (#94452)
a229b4526f : [BE] Prefer dash over underscore in command-line options (#94505)
a63524684d : [ONNX] Add col2im for opset 18 (#84594)
ea98ba02e2 : Prevent duplicate symbol for dsa_add_new_assertion_failure (#94064)
6007874bbb : Revert "teach inductor to handle floor (#94341)"
f35f12320a : [MPS] Fixes for arange_mps for empty tensor. (#94485)
105f7205bd : [MPS] Fix and unblock TestConsistency for median (#94489)
69e0bda999 : [BE] Import `Literal`, `Protocol`, and `Final` from standard library `typing` as of Python 3.8+ (#94490)
527b646f4b : Refactor to extract label_utils from export_pytorch_labels (#94179)
4f691d2e2f : [MPS] Fix correctness issue with fill_scalar_mps() (#94479)
75545798c6 : test_inductor test.sh fix (#92833)
81853354c3 : added aten.log_normal_ decomp (#91674)
b2ea1d06aa : Collective dispatching from Process Group (#91257)
31c30134bb : [MPS] Raise error for Conv3D as currently we don't have support. (#94492)
1dd6c8176c : Doc Fix: Update _symbolic_trace.py (#94510)
490c8f67c5 : Revert "WIP: don't call floor for symint unless necessary (#94365)"
e7df9aaec8 : teach inductor to handle floor (#94341)
685108b201 : [docs] Fix incorrect wrapping of function (#94446)
47efbd5719 : [pytorch] [hygiene] remove legacy buck rules (#94053)
4f3858c6d8 : [functorch] linearize (#94173)
a5b052259b : Add MPS support for aten::remainder.Tensor_out (#92139)
4e1bd4abe7 : Fix scalar type resolution for optional tensor (#94427)
76ed1a81d1 : Revert "COO intersection kernel: respect value intersection order (#92242)"
f165be5a49 : tuned best BS with inductor on cpu for E2E models (#94181)
a81cf49d97 : Remove dead functions (#94415)
e4fe11eecb : [MPS] Fix torch.topk for empty tensors and k=0 on mps (#91884)
19264b50bb : [MPS] Add support for nansum on mps (#93845)
8a9ea44985 : WIP: don't call floor for symint unless necessary (#94365)
8b37eff69f : remove abi uncertainty and potential abi conflict (#94306)
02ca2253cc : [MPS] Fixes for Binary ops with casting issues from FP to uint8 (#94382)
e0e4f1a890 : Revert "[functorch] linearize (#94173)"
b6b9e1e6e0 : [functorch] linearize (#94173)
81e318353f : Align input memory format and grad memory format for GroupNorm backward (#92668)
81bbee7d7e : [SDPA] Adds basic correctness checks (#94274)
92f569fe11 : [Inductor] added aten.geometric_ decomp (#91672)
c028fc4e25 : Decouple PT2 dynamic shapes from the functorch setting (#94469)
c82bb28759 : Update autocast policy list on CPU (#92527)
2180a0dc0c : [FSDP][optim_state_dict] Remove the dead code (#94448)
af5b09182a : [PT-D] Update torch.distributed code owners (#94362)
11f51e798f : Upgrade nightly wheels to ROCm5.4.2 (#93090)
cb715c26e2 : [MPS] Replace the explicit commit in View ops with adaptive commit (#94218)
6d722dba0f : [ONNX] Update CI onnx and ORT version (#94439)
03b9569d2c : [vision hash update] update the pinned vision hash (#94455)
bc26890bbe : [inductor] Fix args in sink_cat_after_pointwise (#94416)
fe00722539 : Revert "feat(fx): `make_fx` should be aware of functions wrapped with `@fx.wrap` (#93273)"
41e3189222 : [PT-D][Tensor parallelism] Add documentations for TP (#94421)
5b8e485a34 : [MPS] Add 2d grid sampler (#94273)
6c80d0a5a5 : [MPS] Fix correctness issues with Pool2D ops (#94348)
ca63040d2b : Revert "Set torch.backends.cudnn.enabled to false when testing accuracy (#94363)"
bb48d90b00 : [Executorch][Quant][BE] Refactor Choose_Qparams (#94338)
1e2d82b8e4 : [BE] Merge isinstance calls together (#94419)
f9cc12eebd : Remove duplicate CI jobs between pull and trunk (#94426)
5ea6f59875 : Update xla image tag (#94377)
66ae3aa096 : [Inductor] added aten.cauchy_ decomp (#92047)
0ce95c3a17 : Dynamo: Support min / max over iterables (#94350)
53a5c8c7cb : Avoid guarding on zero-ness with meta tensors. (#94399)
dc70b00d0b : Track and record hint on SymNode and use when possible (#94201)
b5ef37b9a4 : Dynamo: Fix graph break when iterating over tensor (#94326)
7bfc59993d : Set torch.backends.cudnn.enabled to false when testing accuracy (#94363)
04b06c9627 : [ONNX] Use optional op to keep None in results for ONNX internal tests (#84789)
b27ac6dc56 : [ONNX] Add full checker mode in torch.onnx.export (#83186)
4e984cb614 : [dynamo 3.11] changes to python code object (#93985)
021d267694 : update aten op overload to not use `from` to avoid compile errors (#89797)
f2156ef42b : Make triton debug util reusable (#94225)
22e1698cf7 : [MPS] Add triangular solve op through MPSMatrixSolveTriangular (#94345)
82401c6a69 : [BE] Set PYTORCH_TEST_WITH_INDUCTOR only once (#94411)
0bf78b57c0 : fix: max_unpool3d buffer overflow (#94372)
3a5a762443 : Revert "[quant] Add quantize and dequantize operators to decomposition table (#93312)"
6ac0198c02 : [CI] Add known ciflow labels to probot (#94368)
c0fe5fb987 : [BE] Doc Update: Python 3.7 is past End of Life (#94314)
b8de1cf007 : [functorch][nn] Refactor NN stateless APIs by swapping module tensors (#92536)
3fd46a2f9c : [quant] Add quantize and dequantize operators to decomposition table (#93312)
a405c6993f : [submodule] update libfmt to tag 9.1.0 (#93219)
8ba87fa525 : [dynamo] fix general attr on tensor for user-provided attributes (#94332)
f65a206433 : Revert "sparse compressed tensor validation without syncs for low-(batch)dim tensors. (#94048)"
e44cd942e3 : [MPS] Fix the crash with hardswish_backward() (#94342)
eb1aca162e : Re-enable cudagraphs for benchmark scripts (#94192)
fe0e28ab87 : [decompositions] GRU decompositon with and without packed sequence (#91466)
5a7c1b7894 : [decompositions] LSTM with packed input (#91465)
bef61225c3 : [decompositions] add decomposition for RNN with packed sequence (#91281)
e5f6e1f660 : [decompositions] add LSTM decomp (#91124)
20d01d2dc9 : [expanded weights] add RNN support via decomp (#91807)
c2a92687e0 : [decompositions] add RNN decomp and testing (#91123)
768e547543 : Fix SIGFPE in slow_conv3d_forward_out_cpu (#94325)
73bf32cb57 : Bump to stable ONNX 1.13.0 (#90332)
6f543e0d0a : add not_close_error_metas for internal comparison machinery (#90004)
566eb49ed2 : minor internal cleanup in assert_close (#90003)
bbe33532ae : Rename DynamicShapeVariable to SymNodeVariable cause thats what it is (#94152)
cd057390b5 : [quant][fx][pt2e] cleanup the args for some helper functions (#94352)
1767026d1e : Abstract the optimization context information as a dedicated class to better organize the code (#92057)
e0c24ec2a5 : Print fqn in the warning message (#94313)
e16daa78a0 : [PT-D][Checkpoint] Turn on all default planner flags (#92933)
230c4fe93d : [GHF] Fix pushDate handling (#94364)
5fe72b8716 : [Dynamo] modify dynamo ipex backend (#94169)
877482ebc4 : [MPS] Fix crashes in several backward ops (#94343)
61ecaf1dd4 : [vision hash update] update the pinned vision hash (#94358)
5f25c0831c : Cleanup hung Windows processes (#94357)
68b35017a9 : Tiny unimplemented improvements (#94150)
b191a5f75f : Remove overly strict assert, add test (#94151)
88ef4739b2 : Check the semantic of loading the mask value (#91755)
83275d8cdf : add torch.autograd._set_view_replay_enabled, use in aot autograd (#92588)
333e771394 : Add benchmarks.py to run all benchmarks, add new file with all torchbench model names (#94146)
5fa7120722 : Simplify CMake CUDNN code (#91676)
9291f9b9e2 : Simplify cmake code (#91546)
c981b7e572 : [MPS] Add MPSAllocatorInterface to access methods of MPSAllocator (#94327)
51b487bf51 : [inductor] fix cpu implementation of argmax / argmin (#94165)
94394e568e : change the dynamo benchmark timeout as a parameter (#94284)
f48b4d8842 : Handle sympy in split (#94285)
3ce1ebb6fb : Apply some safe comprehension optimizations (#94323)
bef2483ed8 : [NestedTensor] Call contiguous in linear backward (#94317)
ab4fe01e72 : [FSDP][optim_state_dict] Returns the initial states of the empty parameters for KeyedOptimizer/NamedOptimizer (#94130)
ec25db7741 : torch.inference_mode: add type hints (#94223)
75e04f6dad : Test enabling full testing on 3.11 for linux (#94056)
34bbd7af87 : Use the right run_test for inductor opinfo tests (#94312)
d16c2c36ad : Add another missing decomp (#94113)
6b8eb0eb04 : [vulkan] Add core graph components (#94222)
8fce9a09cd : [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
567e6152da : Revert "[inductor] fix crash issue when input is a view tensor (#90150)" (#94329)
7b3217e6a2 : Add deprecation warning to reduce flag of scatter for Tensor src and redirect to scatter_reduce (#94282)
748bac8757 : [BE]: Apply pyupgrade yield from and unit test alias upgrades (#94309)
895d4781b8 : [easy] Add NestedTensorMeta to parseDispatchKey (#94279)
8c835a9e52 : Factor out SYMPY_INTERP (#94307)
e1f17b3530 : Add CSR->BSC and CSC->BSR conversions (#93301)
d690a596dc : Fast path binary ops in fake tensor (#94047)
0603f4ff14 : temp fix for segment reduce undocumented FC window (#94242)
a88c15a849 : Build Windows binaries with Visual Studio 2022 Build Tools (#90855)
e0950fccfa : [SDPA] Add expanded autograd testing for fused kernels and disable head_dim128 sm86 mem-efficient (#94009)
7bba87ed06 : add rsub decomposition with alpha (#94144)
e9533767af : trymerge to ignore certain failures (#91134)
b07c839b70 : COO intersection kernel: respect value intersection order (#92242)
0b2dc3b3ac : [Py-3.11] Skip dynamo related tests (#94187)
5d48392abb : [MPS] Skip gather/blit calls in case of strided output (#94260)
86ae14deaa : [MPS] Fix MPSGraph casting issue to MPSDataTypeBool in masked_fill op (#94263)
e3ac109618 : [MPS] Fallback on gather code to solve view tensors when a slice is followed by a reshape (#94278)
4cd086b14c : [MPS] Raise error for int64 inputs of dot operator. (#94270)
b654d1494b : [MPS] Fix the argument error for tensor_split() test (#94234)
a3ca66c69e : [MPS] Remove the unused code for view lists in OperationUtils.h (#94265)
a0a3728069 : [MPS] Don't reset the Graph state (#94283)
36062dd2b4 : [MPS] Fix the crash in View ops when slicing wrong lengths (#94259)
bf4fe5dddd : General in-place binary op support in dynamo (#94203)
f954498edf : Dynamo: Fix to unpack ConstantVariable in call_range() (#94202)
c4544bc169 : Fix thread-allocation in `_vec_log_softmax_lastdim` (#85398)
a2ac25f63e : update test fixture (#89796)
513b5da357 : sparse compressed tensor validation without syncs for low-(batch)dim tensors. (#94048)
42b6bcdb13 : [BE] Add empty tensor check to _compute_linear_combination (#94245)
a28a062938 : [Inductor] Fix CPU vectorized implementation of mask calculation that breaks torch.where (#93922)
0e94fbc0c8 : [inductor] bug fix: use `create_symbolic_sizes_strides_storage_offset` (#94031)
900e09c872 : [Dynamo] Support torch.Tensor.fn as TorchVariable, not UserDefinedObjectVariable, preventing graph break (#93243)
d6dec1a5cf : Refactor sharding data pipe into a separate file (#94095)
59c1b5025f : [quant][fx][pt2e] Refactor prepare so it's aligned better with the new API plan in pt2e (#94011)
ffb3561caa : [Docs] Add pointer to FlashAttention paper (#94253)
f92348e13d : Clean up mentions of removed torch/csrc/generic/*.cpp (#94107)
bc8a378333 : [MPS] Unregister put_() op due to lack of implementation (#94231)
bc6d54f6d8 : [FSDP][optim_state_dict] Let optim_state_dict ignore the non-FSDP managed parameters that do not reside on the rank (#94129)
f04106f1c2 : [FSDP][state_dict] Fix incorrect valid_data_size for local_state_dict when some ranks have zero data. (#94109)
605b661805 : FakeTensor should constant propagate through ops that allow numbers as scalars (#94145)
579ae64d81 : [mobile] List all missing ops at once (#94205)
4b0e2e2cc6 : Use official NVML Python bindings (#93925)
1063394898 : Revert "Add fabi-version=11 to ensure compatibility between gcc7 and gcc9 binaries for _GLIBCXX_USE_CXX11_ABI=1 (#93835)"
f1c435d7b4 : [vision hash update] update the pinned vision hash (#94241)
b562be793a : Add fabi-version=11 to ensure compatibility between gcc7 and gcc9 binaries for _GLIBCXX_USE_CXX11_ABI=1 (#93835)
ca74105377 : [MPS] Add scalar params to the softplus key. (#94256)
9358726a06 : [MPS] Handle empty input in layer norm (#94212)
d493bc8a76 : [MPS] Return input in addcmul/div if value is zero (#94214)
fa2b99f402 : [MPS] Fix the crash in nan_to_num() with Float16 data type (#94220)
f15ab8a7f2 : AO migration: replace torch internal callsites (#94170)
a9f57db607 : AO migration: migrate .rst files to new locations (#94211)
368e364c19 : [MPS] Fix gradient issues with NLL and Smooth_L1 loss ops (#94226)
bf9be50bb8 : Some more fixes (#94049)
53e4fe076a : Revert "enable bf16 emb (#94163)"
6ba041fcae : Look up `group["capturable"]`, not `defaults["capturable"]` in Adam(W) (#94149)
0dfc3e1340 : Cleanup all leftover processes in MacOS pet runner (#94127)
a595d06c12 : [inductor] Avoid re-computing mean in lowering for aten.var_mean (#94139)
719f78d311 : [inductor] Count bytes can't read from buffers that are never written (#94142)
43f6ed4abd : Extend torch-triton conda to 3.11 (#93117)
3c6bc58f63 : use C10_API in libc10.so (#94171)
a07d1291cf : Re-enable compilation tests (#92333)
180adf8c18 : Fix bug in generic_list_compare (#94156)
fdebc06242 : Point to scatter_reduce for reduce argument in scatter_ docs (#94081)
05397b1250 : Make linter quick-checks setup steps retryable (#94199)
496c0a207b : Make segment_reduce properly private. (#93166)
9b3277c095 : Make sure to properly pull the right submodule in BC test (#94182)
0444b8f560 : Revert "Support neg calls to dyn shapes (#94068)"
9b2e7d3b4f : [Inductor] Performance smoke test - hf bert performance increased (#94088)
d2b82feb41 : Don't compare ids of temporary python objects (#94097)
25a6e0fd79 : Fix serialization (#94096)
db011e11ea : Skip sebotnet33ts_256 on CI (#94067)
16387bee4a : [DCP] Fix test_file_system_checkpoint.py and test_file_system_checkpoint_cpu.py (#94069)
819990f595 : [decomp] Decompose std/std_mean into aten.var/var_mean (#94072)
26cba842ad : Optimize ConvTransposed2D with mkldnn float32 and bfloat16 on CPU (#92530)
f3bf46e801 : enable bf16 emb (#94163)
ea4cda5268 : fix inductor clamp decomp to correctly type promote and avoid wrappin… (#94157)
9350bcf6ae : Support neg calls to dyn shapes (#94068)
7b6e948812 : Add missing move to torch_dispatch_mode.h (#94154)
10a1efb49f : [MPS] Fix `cumsum` for negative indexes (#94119)
60a3b7425d : Small refactor of shape guards to allow for 1:1 code_parts (#93894)
8a88852d5f : [MPS] Fix `index_select` for empty input (#94117)
8ecda19607 : fix upsampling decompositions to have integer output sizes (#94123)
2362b5fca3 : [Dynamo] Put torch.cuda.stream into Dynamo FX graph (#93808)
25c0737adc : dont graph break on list[SymInt] comparisons (#94054)
1d53123f44 : Report graph breaks separately from graph count (#94143)
a2db70b3c7 : Add graphs/ops to parse_logs.py (#94138)
9895c19a7a : To vectorize long datatype as mask index (#91076)
834e8f0464 : Hack SymInt.__iadd__ to be working. (#94136)
c1da35af5e : Update dynamic benchmark skips (#94114)
3693039bb7 : perf: fix missing noexcepts on minpybind in functorch (#94135)
f54fd6fb28 : [c10d] Update get_backend() in exception_handler (#94063)
8c26ed5f5e : Add lowerings for all symbolic shape operators (#94121)
afd7b581aa : Simplify OpenMP detection in CMake (#91576)
d4a93eadee : tools: Add lint for CONSTEXPR (#94089)
996cc1c0d0 : Fix Win+CUDA builds using VS2017 (#94091)
2064fa9f10 : Clean-up removed TH from BUCK (#94022)
7fb2ac2bd5 : Revert "trymerge to ignore certain failures (#91134)"
170a3e0257 : Enable Python dispatcher on inference-only aot_dispatch_base (#94118)
4207d3c330 : `FusedAdam(W)` should take `OptState` into account before unscaling grads (#94060)
adde6fd25e : [dynamo 3.11] update instruction sizes (#93984)
11de399447 : [inductor] fix cpu implement of torch.neg (#94035)
1a32db15e7 : Some performance fixes (#94034)
fa65ae8f56 : cleanup unused include (#93359)
27efdc5eed : fix writable-strings warnings (#93246)
59a81b695a : Fix flaky linter clang-tidy relative path (#94093)
e071d72f3c : Tag dynamo backends as debug/experimental (#93878)
5c7f4534e9 : [small] multithreaded-pg guard attr (#93883)
6d597c532e : [ROCm] Add diskspace check for rocm CI nodes (#93032)
ef156f9136 : Enable retry support for MPS tests (#94070)
3c79ea2607 : Removes stray print (#94079)
dfac113cfc : Remove torch/_dynamo/optimizations (#93871)
5f4fec7459 : Fix/refactor dynamo tvm backend (#93870)
0a93e6db5a : Fix/refactor dynamo ipex backend (#93863)
5197496799 : Add a private API banner (#93996)
1c30268ff1 : Update rockset version (#94005)
5be57d51f9 : Fix testing now that random.sample() arg must be a sequence (#94052)
8051f8a6ee : Fix Storage destruction GC tracking (#94051)
203b2cad3e : Remove fx2trt/torch2trt backends (#93822)
5d709af59a : Rename aot_cudagraphs to cudagraphs (#93821)
8b7bd5dffc : trymerge to ignore certain failures (#91134)
a5ff40032d : Fix/refactor dynamo onnxrt backend (#93818)
d9870d70c1 : Exempt `_foreach_norm` from autograd_not_implemented_fallback check (#93995)
dc7bf1a7ea : General reversible binary op support (e.g. __add__ / __radd__) in dynamo (#93271)
e52786f3d1 : Silence profiler error (#94013)
a0fc90b07f : Add TorchData for regular cleanup of anaconda pytorch-nightly channel (#94014)
3b7140d938 : Add the new submission form (#94000)
6650aac8ce : move more operators to BatchRulesDecompositions (#93164)
6e1e212c39 : [platform010] remove more ovr_config//runtime:platform009 usage (#93008)
6c555b29a8 : MHA optimizations (#93234)
162e3ca58e : [fx] fix type promotion in `binary_magic_impl` (#91376)
34bcbfbd6a : [fx] throw exceptions on invalid input in `FloorDiv` (#93143)
ba614f3a32 : [fx] test `FloorDiv` against Python impl (#93142)
e7c63b962b : [fx] add SymPy assumptions to `FloorDiv` (#93185)
2481fc0df4 : Add count to FakeTensorMode.__torch_dispatch__ (#93936)
12f22655b1 : Short circuit device property access on FakeTensor (#93946)
77acb556e6 : [primTorch] Rewrite nan_to_num ref in terms of aten functions (#93952)
72385bbd03 : [primTorch] Rewrite is{,pos,neg}inf refs in terms of aten functions (#93951)
6c4dc98b9d : [CI][BE] Move docker folder to `.ci` (#93104)
6e1cfcdf4b : cauchy_ few fixes (1) check gamma > 0 (2) better dtype error log (#93314)
d7c71a95b6 : [Dynamo] modify IPEX backend (#92067)
aaa27a6b6d : Vectorized more stable complex division (#93277)
b41e2779f2 : cumsum, cumprod, logcumsumexp: adjust grain size (#94025)
ca8450849b : compute dynamic tensor shapes for indexing on the host (#93872)
e4f11e01bd : [Fake Tensor] Allow fake meta by default, delete unused ctor args (#93993)
be364c0cda : [Inductor] Fix OpenMP discovery on MacOS (#93895)
e98a942399 : [PTD] Land 'to_std' utility parser fix #93209 (#94023)
63115b70f0 : Fixed issue with --diff-branch arg in dynamo benchmarks (#93989)
3df0e26e20 : [SDPA] Remove private version and only utilize public version (#94004)
d996acfbc2 : [XNNPACK] disable ARM_BF16 and ARM_FP16_VECTOR (#94020)
dd7d47c4ac : abstract vectorized reduction utils on CPU (#92284)
79243516f6 : collect CPU info with collect_env.py for new issues reporting (#93899)
a71395dd88 : [inductor] fix crash issue when input is a view tensor (#90150)
732a865c1b : [vision hash update] update the pinned vision hash (#94016)
d05ec0efeb : [dtensor] add split_with_sizes op (#93957)
bfe5e1258b : avoid unnecessary static_cast (#93898)
dbbcefcd78 : remove std::iterator (#93924)
f7bd5d0ccb : Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91… (#92402)"
60e8c766b5 : Refactor dynamo training backends (#93409)
f84f89b1c3 : ns: add compare_weights API with a single model (#92058)
660bea10ba : add add_loggers implementation using PNP (#91639)
a719bb0e37 : Readme: Fix for outdated build-from-source documentation (#91861)
0f5b6caa16 : [FSDP][optim_state_dict] Ignore the state check on rank that does not own the corresponding parameter (#93318)
0844213f7d : Improve Windows CI logic to cleanup leftover processes (#93914)
5817695bfa : [pt2] Fix arange to match ATen behavior (#93353)
264c89658b : Move in backward opt setup to helper (#92059)
e32d99ae19 : [FSDP][optim_state_dict] Make FSDP.optim_state_dict compatbile with DMP (#93285)
989722cd19 : Use global PIC flag for XNNPACK (#93896)
7db4d813c3 : [dynamo 3.11] fix opmap key error (#93983)
37a28255cb : [dynamo, benchmarks] Fix dashboard update location (#94006)
c2fb1f8ee4 : Add is_integer assumption to ModularIndexing (#93903)
b7a5c79399 : [inductor] Fix type inference in CPU masked operations (#93842)
fde220ca44 : [BE] Get rid of `six` in caffe2 code (#93956)
37fcc53096 : Remove import cycle from torch._refs.nn.functional (#93948)
4e4293f15f : Add meta registration for bucketize (#93893)
2b0d7e63f0 : Move dynamo.optimizations.distributed to backends (#93408)
2910695942 : Remove cuda 11.6 from nightly (#93979)
ee2729890c : Refactor dynamo register_backend/BACKENDS (#93389)
6e285c479d : Remove cuda 11.6 from CI replace with 11.7 (#93406)
f9d2600ce2 : [Dynamo] Rename `GuardBuilder.guarded_code` -> `check_fn_manager` (#93934)
f5e9c8ce54 : Revert "Remove CUDA 11.6 from nightly builds (#93404)"
5d259425fc : Revert "[inductor] fix crash issue when input is a view tensor (#90150)"
769eca6f97 : Basic Validation for FSDP `state_dict` transformations of modules with persistent buffers (#93396)
98e1b3e93a : Merge Inductor perf smoke test with other inductor CI tests (#93395)
9ff7ddb241 : [inductor] Don't import torchvision (#93027)
481a334b7a : [FSDP][3/N] Refactor `summon_full_params` unit tests (#92298)
10990734ce : [FSDP][2/N] `_summon_full_params` -> `_unshard_params` (#92297)
c76ac8eef2 : Remove CUDA 11.6 from nightly builds (#93404)
a14e3190e3 : Mark buffers that reuse other buffers (#93329)
d69876b2f1 : Refactor to allow reuse of SchedulerNode.allocate (#93328)
84187399fc : retire sparse_mask_helper (#91714)
a2fded3001 : update fbgemm third party (#93907)
b11ec270ba : [inductor] fix crash issue when input is a view tensor (#90150)
a672fd1dba : [Inductor] add config for weight prepacking (#93811)
59ccc786df : Check for none for NNModuleVariable.__module__ (#93326)
f4db47b176 : inductor: don't assert error when do cpu fx fusion for training mode (#93837)
3d020b6903 : inductor: separate bias from PackeLinear for better performance (#93348)
4b0f1cc1ee : [FSDP][optim_state_dict][10/N] Make optim_state_dict and optim_state_dict_to_load public (#92118)
84ee50a28a : inductor: add conv+hardsigmoid fusion for cpu path(reland) (#93341)
6f3018d50b : [DTensor] implement dist_split as a sharding prop rule (#93306)
966030f7c7 : [DTensor][fix] MultiThreadedTestCase misses _tls object and it won't reflect in CI (#93832)
b82f93d561 : [DTensor] fix DTensorSpec dim_map description (#93160)
db87396474 : inductor: align the decomposition output stride with none-decomposition path for torch.lerp (#93336)
cff4d3bb22 : inductor: fix convert_shape_to_symint (#93349)
e7ace1ff93 : [PT-D][NamedOptimizer][6/N] Upstream init_state from keyed to NamedOptimizer (#93887)
f58ba553b7 : [ROCm] Fix distributed tests failure and enable ROCm distributed CI (#92932)
569f2e3228 : Remove many untested dynamo backends (#93382)
653dc73df0 : [SDPA] Wire up FlashAttention's backward (#92917)
b6367c8aa4 : Remove torch/_dynamo/optimizations/inference.py (#93381)
68b06ee4d4 : Add `torch_compile_debug/` to .gitignore (#93889)
61d3589e07 : [vision hash update] update the pinned vision hash (#93892)
489e74cf73 : Fix lint after #93278 (#93902)
6c93c3b58a : Save and restore functorch configuration in minified scripts (#93853)
caf1b27196 : Fix Upsample/EmbeddingBag module printing (#93850)
306dc2ed1a : Make ShapeEnv deepcopy'able (#93403)
54eedf6fa6 : Fix test_jit_cuda_archflags on Windows (#93332)
d7b39b17ab : Remove torch/_dynamo/optimizations/{analysis,log_args}.py (#93279)
d37bc6d04e : Revert "[fx] add SymPy assumptions to `FloorDiv` (#93185)"
57d74aae55 : Remove torch/_dynamo/optimizations/normalize.py (#93278)
6a4bf3b71b : feat(fx): `make_fx` should be aware of functions wrapped with `@fx.wrap` (#93273)
dd8662d5c8 : [BE] Migrate Anaconda Prune jobs from CircleCI to GHA (#93876)
ca9ebf9e2b : Delete dynamo_import and inductor_import (#93851)
74592a43d0 : Update tests to use ConfigModule.patch (#93254)
31d466f925 : [BE][ez] Move hardcoded constants to function args (#93874)
23d58fedb1 : Use ConfigModule for _functorch.config (#93375)
0485bf5398 : Avoid saving pointwise intermediate to global memory if followed by a reduction (#93810)
8594529c2e : Run ASAN in 4xlarge in all shards (#93879)
3e6978172e : [dynamo] Handle general tensor attributes with a getattr proxy node (#91840)
8c1ee89f19 : Added super init to Module (#91819)
207399cf5f : Add repro_forward_only for inference debugging (#93856)
03b465a6d0 : Add --iterations to benchmark script (#93858)
3fb6e119e2 : [PT-D][TP] Fix the module registration in TP API (#93412)
498c6ed8d8 : Add missing format string (#93866)
87b9ab4870 : [CI] Add Py-3.11 wheels for all platforms (#93400)
2ea3036d8b : Disable cudagraphs by default (#93253)
45eadc2c4d : ConfigModule for _{dynamo,inductor}.config (#93252)
a23ed38f9a : [mta][foreach] Implement fused adamw (#88015)
86ab4d49d4 : [pruning][core][feature] LSTM Structured Pruning prune_functions + pattern (#90801)
f577a5279b : Enable `USE_CUDA` (#92640)
e80af53bf0 : Move bazel back to pull (#93867)
6fe234ecc4 : pnp: move shadow loggers to parent module (#91428)
56f9475625 : ns: change PNP testing to use QNNPACK (#91421)
1dcd2609b5 : Add retries for get_workflow_job_id and try catch in upload_test_stats (#93401)
eb987abd24 : Clean up leftover processes on non-ephemeral Windows runner (#93414)
77cbaedd5c : [docs] Add section about tensor hooks on in-place in autograd note (#93116)
76b999803a : add filelock as a dependency (#91607)
d5901fcc80 : fix(fx): make all `make_fx` invocations isolated (opaque to higher `make_fx` invocations) by default (#93290)
2fc2ca7652 : [BE]: Fix CMake LTO policy on pytorch (#93388)
bf2e2fea41 : [dynamo] getattr for EnumVariables (#93397)
37f7c00a8a : More fixes and improved clang-tidy checkers (#93213)
679e869af0 : [inductor] only check mutations attr for TritonKernel (#92277)
c4ccf7e121 : [fx] add SymPy assumptions to `FloorDiv` (#93185)
f1030dcc6d : [Re-open 90267] [inductor] weight prepack for single conv_transpose2d (#91956)
66fd99cc09 : Use symbolic tracing_mode for aot repro with dynamic_shapes (#93393)
298075e183 : use aten parallel on lu factor (#93037)
bdca5fcd43 : cherry-picking autodiff support for gather/index_select (#93333)
b484d17c24 : _sparse_coo_tensor_with_dims_and_tensors backward: simplify and optimize (#91704)
6a2838eec5 : [jit] jit._drop fun modifier to allow in jit class non-jit decl funs (#93012)
994f85d639 : sparse_mask: extend lhs to sparse COO tensors (#92248)
6a7d6cc30d : Introduce core_aten_decompositions (#93131)
f77f88fbc7 : [Quant] X86 qengine always uses fbgemm kernels on OS other than Linux (#93218)
776079b5bc : Fix test_file_system_checkpoint_cpu.py temp directory usage (#93302)
eea752f853 : [Quant][ONEDNN] Fix weight reorder issue for grouped convolution (#91934)
2457d0ef4f : [Dynamo][Easy] Remove duplicated code in builder.py (#93809)
9daca46dc4 : [jit][await] Apply review comments (#93284)
feb6c9ae9b : Partial revert of autogen view_copy ops which return lists (#93411)
9d1263a88d : [ONNX] Fix Gather replacement in RNN peephole (#93120)
2cd8cb02a1 : [inductor] Don't skip realize heuristics with dynamic shapes (#93814)
ac791bddce : Refactor dynamo distributed test helpers to be reusable (#93187)
60e503d468 : [dtensor][6/N] change to a better/safer op registration (#90735)
42633cf5f9 : Inductor cpp wrapper: cache the loading of the kernel (#89742)
9a56997fe1 : [dtensor][5/N] add cached propagator for TP (#90734)
b072245178 : [dtensor][4/N] refactor dispatching logic and add propagator (#90733)
965f4ea3ba : [Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91… (#92402)
79db5bcc9d : [vision hash update] update the pinned vision hash (#93323)
e752ec6dea : Re-enable xla workflow (#93334)
10910758f4 : Make dynamo tests work under pytest (#93251)
08041c5264 : Configurable repro_tolerance for same_two_models (#93398)
3bae5484d0 : Typofix (#93402)
0f802eedc2 : [Quant][FX] Lower QConvAddReLU2d for onednn backend (#91155)
e77f28a03d : [Quant] Add fused ConvAddReLU2d module for onednn backend (#91154)
ef4118e435 : [Quant][FX] Lower QConvAdd2d for onednn backend (#91153)
eb9c4c8929 : [ONNX] Properly skip tests by onnx version via 'unittest.skipIf' (#93316)
53c3555a6a : [Quant] Add fused ConvAdd2d module for onednn backend (#91152)
7bcc446ede : [Vulkan][Optimize for Mobile] Avoid dereferencing element [0] if the vector is empty (#92918)
e83f473bb7 : [BE] Don't use `six` in torch.utils.tensorboard (#93383)
218d4eac56 : Remove submission form (#93287)
8dfcb59d66 : Update version of Python to 3.8 in the prerequisites (#93399)
129a1bc715 : Minor error in docs regarding execution time (#93258)
7d7c4d9c1f : [inductor] Minor fix of addmm shape padding (#93320)
b179a097ea : Add platform markers for linux x86_64 only extra_install_requires (#93066)
18c6ca1ee1 : Add release matrix to release.md (#93392)
902b4dba75 : Change capture_scalar_outputs to use SymInt/SymFloat rather than Tensor to model scalars (#93150)
76b683b008 : Correctly propagate compiler kwargs to aot minifier (#93308)
295fd20eb5 : [CI] Add Python-3.11 Linux conda builds (#93186)
811e95a15e : --dynamic-ci-skips now works for all backends (#93369)
4d504a9ce8 : Fix Windows python3 path (#93387)
2a31c3589b : Report suppressed exception in minifier (#93368)
e5235fb62c : Convert GuardOnDataDependentSymNode into graph break (#93373)
44a948c820 : Fix MSVC compiler error in basic_ops.h (#93322)
5b2afaaca8 : Fix Vulkan compiling issues on Windows (#92207)
438f12d91a : Rewrite some decomps to allow producing aten ops (#93099)
332d55d3df : [Dynamo] UserDefinedClassVariable supports python type (#93310)
7b426e8da2 : Remove fake tensor cache clearing in dynamo (#93304)
cfff440614 : [inductor] Lower fallback kernel warnings from WARNING to INFO (#93330)
46c05a7ae3 : [ez] Update base branch when updating python docs (#93305)
d72db37c4a : Remove a redundant check from code. (#93025)
bb6af061a0 : `torch.triangular_solve` for CSR: materialize diagonal elements when `unitriangular=True`. (#93352)
d9117b93fb : unsqueeze only when dim = 3 (#91052)
bd4a5b400a : [Re-open 90266] [inductor] weight prepack for _convolution_transpose_pointwise (#91955)
cc49f5abd3 : [Re-land 90265] [inductor] add conv_transpose2d unary fusion for cpu in inference mode (#91954)
3870fdabfb : [Re-land 90264] add conv_transpose2d pointwise(unary) fusion kernel (#91953)
fba13d94a1 : Remove deprecated torch.symeig (#70988)
ec2461bbd8 : Remove proxy tensor's check for data dependent output (#93265)
d7a3f2128f : pass `None` instead of `False` inside `Adam.__setstate__` (#93289)
af5b01294e : [Dynamo] Fix bug if module calls module with static forward function (#93299)
91a4947e28 : Populate extern_kernels on import (#93282)
8c09a005c5 : [inductor] Pattern matching engine (copy) (#93291)
aee5f84ac3 : [c++] use constexpr instead of const (#93267)
f9c08e25a1 : Fix MacOS nightly builds (#93331)
888771dc5d : [FSDP][optim_state_dict] Fix `_is_named_optimizer` when the state is empty (#93303)
441b09d1b7 : [CI][ez] Rename some jobs (#93327)
524ee07143 : Fix https://github.com/pytorch/pytorch/issues/92377 (#92379)
782b9a9cde : Use _exchange_device to reduce torch.cuda.device overhead (#91127)
fc4e9931da : [fx.GraphModule] Populate memo in deepcopy BEFORE copying children. (#93295)
21c7c7c72f : [Quant] Use the true src zero point to query and create conv pd (#90818)
a71d9a928f : [Quant] Add fused conv2d_add_relu op for onednn backend (#90364)
01687a6bad : Revert "add numpy typing plugin to mypy config (#92930)"
1a454310b9 : Update SECURITY.MD (#93313)
aeac7f4203 : [bazel] Fix gloo.BUILD (#92858)
5f1ac188f8 : add numpy typing plugin to mypy config (#92930)
2a6e085704 : Update custom backend docs (#92721)
c499e760f5 : [XNNPACK] Enable Memopt for OSS (#93097)
24b501903c : Minor sympy usage fix in fbcode (#93171)
36fe31f537 : [Reland] Refactor stack_trace preservation for node meta preservation (#90803) (#92400)
1fa68d40b8 : [pytorch] fix backend_type for backend/PG plugin (#93129)
2e9107ec1e : [Pytorch][Executorch] Handwritten view copy out ops should resize out (#91194)
7dabb8b53b : [vulkan] Enable command buffer reuse and add keys to Tensor/StorageBuffer objects (#92993)
ae79f95cb8 : [quant][fx][pt2e][refactor] Refactor prepare.py for upcoming quantize_pt2e changes (#92641)
dd0ba2076a : return clone in case of 1 input cat (#93294)
286cca8929 : Add cudnn install 8.7.0.84 for CUDA 11.8 (#93086)
0ecb071fc4 : [BE][CI] change references from .jenkins to .ci (#92624)
2b267fa7f2 : [inductor] Check memory compression ratio in model tests (#89305)
53a669869c : Remove checks for refs/prims (#93250)
e17bfde622 : [vulkan] Create separate BUCK target for command buffer recording (#92157)
710fe40597 : [Export] Introduce as_none in ex.Argument union type (#93210)
1d25070949 : [Export] Refine design around TensorValue (renamed IValue) (#93217)
845e4b8a47 : [fix] legacybatching: getPhysicalDims (#93261)
7a621c443b : [GHF] Fix ghstack branches in sync logic (#93298)
54056c1705 : Update cudnn_frontend to 0.7.3 (#93272)
c516e5488e : Move bazel and xla to unstable (#93296)
4fc19e1a71 : [optim][adam] use fastest impl whenever possible, add util (#93184)
efee879695 : Don't suppress warnings in CI. (#93269)
5d9902cbcd : Beef up error when converting sympy expr to int/float/bool fails (#93198)
2fc73622f8 : [jit] Support Awaitable type (#90863)
53f7fb9a22 : Add CSC->BSC conversion (#92307)
434eb16deb : Correctly restore pybind11 error_already_set (#93238)
3e4d0e8d82 : [Reland][FSDP] Do not clean FQNs for `use_orig_params=True` (#92662)
c7b03010ec : Split the aot/dynamo TORCHDYNAMO_REPRO_AFTER cases (#93226)
9eb402d18e : Update dynamic benchmark skips (#93228)
04082fc042 : [inductor] enable more dynamic shapes tests (#93216)
5112f44dc4 : Add vmap support for torch.index_fill (#91364)
08035b1eb9 : inductor: support more conv+unary fusion (#92518)
4d51c8532c : Some simple fixes (#93221)
e790281a85 : SymInt'ify view_as (#93242)
3c570a2be3 : SymInt'ify reshape_as (#93241)
0247ed27cc : Apply Clang-Tidy readability-container-size-empty (#93236)
239afa0e43 : Revert accidental change to libkineto version (#93237)
b3e422948d : [Dynamo] Support out variants of ops mutate the tensors out of the function frame (#93177)
129f136179 : Move Sherlock to snooping dynamic shapes (#93239)
5976f0bdfe : Set min supported Python version to 3.8 (#93155)
0dceaf07cd : Add two decomps for optimizer fusion (#93193)
878f4f09d2 : Warn about deprecation of private decoder builtins (#93181)
304d8dd6c8 : [Dynamo] Support enum.Enum type as dict key (#93026)
9a2becf60a : inductor: fix inplace op's wrong lowering issue when preop is NopKernel (#92247)
900f8886e2 : inductor: make as_strided support non-contiguous input and always fix it's input layout using eager stride (#92063)
cac1912bfb : Add some more missing moves to aten functorch (#93098)
61fd1188ba : [Export] Remove the concept of Scalar in export schema (#93211)
68a1065bd7 : [Export] Remove op filed from ex.Node schema (#93208)
7cc91f4002 : [vision hash update] update the pinned vision hash (#93189)
cb817d6176 : Fix endian handling in THPStorage_fromBuffer (#92834)
1e0c57b645 : More fixes found in tidy and libc++ (#93138)
4ca511c69e : Fix positional issues in dedup guards (#93137)
ef988c2b37 : Add post cleanup step for MacOS (#93126)
cfb160185e : Update ROCm CI builds to 5.4.2 (#93163)
648202ceb9 : Improve DDPOptimizer by avoiding small preamble graph (#93162)
f40183d374 : Fix C10_CUDA_CHECK for failing to capture last cuda error occasionally (#93192)
aac9e5288f : Increase test multiprocessing waiting time (#93183)
72502b94f3 : correct use of torch.backends.cudnn.flags() (#93182)
a62fc09a1f : [Quant] Add fused conv2d_add op for onednn backend (#90262)
00b3f22210 : Add missing scalar example in docs of `torch.where` (#93145)
ca8f5e177a : Use the old aten underscored function for Predictor (#93096)
189ae948d3 : [CI] Move XLA to Python-3.8 (#93178)
2f0b0c5dd7 : exponential_ few fixes (1) lambda > 0 (2) mkl kernel to continuous (3) better error log on dtype (#92891)
42d4eca796 : Update submodule kineto fix bazel1 (#92318)
b74a0fc486 : Mark aten.flip and aten.alias as core aten op (#93130)
4d107e3426 : torch.export Logical Schema V1 (#93135)
1ff292abe0 : Make CPU inductor work with dynamic shapes (#93077)
a0ca9dc8ca : [torchgen] Small fix for empty yaml file edge case (#92938)
75cfc0be21 : Logcumsumexp for CPU (#93153)
61457671a5 : [quant][fx][be] Remove _input_output_observed from backend_config (#92589)
58acab4616 : [dynamo] support [tensor].type(torch.FloatTensor) (#93043)
35ea82541b : Send float32 to a different GitHub issue (#93168)
65d6802e2f : Improve error messages for sparse methods on tensors with unsupported backends/layouts. (#93149)
27ab1dfc28 : Remove print_test_stats, test_history, s3_stat_parser (#92841)
975feb606e : [DDP][Easy] Remove unused var (#93128)
4eb69af5af : Upgrade CI to ROCm 5.4.2 (#92972)
00f3e0d8c9 : [ci] Set step level timeout (#93084)
62aa4e096b : Revert "Add cudnn install 8.7.0.84 for CUDA 11.8 (#93086)"
d3049378be : Repair the path to jni.h for libtorch windows build (#93057)
64d0624cee : Explicit Name needed to run with buck test (#93035)
3a10bf791f : Add cudnn install 8.7.0.84 for CUDA 11.8 (#93086)
68a98537d5 : [fix] nn c++ : segfault in modulelist and moduledict (#93074)
219e9533f0 : Improve autograd doc on complex numbers (#93065)
5105a8d3fc : Enable Kineto in OSS builds by fixing build condition (resubmit) (#93033)
070163fb53 : [inductor] Clean up TRITON_CACHE_DIR (#92879)
6fa84fdea2 : [FX][Quant] Enable FX quant for patterns like x.view(x.size(...), ...) (#90001)
a4238976a8 : [FSDP][optim_state_dict] Ensure correct devices for tensors when doing all_gather (#92992)
8b1b47c36a : [FSDP][optim_state_dict] Use all_gather to deal with uneven size tensors (#92991)
f172feae0d : More tidy fixes (#93069)
5bae580502 : Don't graph break on patched module methods (#93115)
a2e0f8e529 : [ FL-gradient quantization] Adding QNN unpack feature (#92714)
661800a2cf : Fix BC-breaking change introduced by #91499 (#93091)
7fade4f771 : fixing flag to skip nvfuser_tests build (#93080)
e2739372eb : [vision hash update] update the pinned vision hash (#93114)
074f5ce0b7 : Install Torchvision in all Linux shards (#93108)
025ef99ddf : Get rid of dedicated inductor dynamic_shapes config (#93076)
f3fcc80622 : [dtensor][7/N] remove backend in with_comms (#93040)
8b3e01cd30 : [DTensor] implement dist_cat as a sharding prop rule (#92677)
24172eebac : [ONNX] Export 'aten::index_put(self, mask, v)' when rank(mask) < rank(self) (#92862)
95dfad9d93 : Add kwargs support to torch.export() API (#92013)
ae171cf623 : [ci] Move sm86 from trunk to pull (#93085)
d1807dc1f4 : Fix topk IMA (#93095)
8d7f9e2f79 : Make __deepcopy__ of GraphModule able to handle circular reference. (#93038)
ceb44350cf : [CI] Move parallel native builds to 3.8 (#93103)
f6f46ba3bb : [Reland] aot autograd explicitly errors on double backward (#92893)
913cf2908e : Revert "Disable torch_jit_fuser_te for dynamo CI (#92945)"
340811bf8d : Torchinductor randn_like lowering (#93005)
1b5bfe9dd1 : Properly compute device for elementwise operations with CPU scalar tensor (#93073)
1f352f7c1f : Update flatbuffer test models to match pkl models (#93022)
68a49322e7 : [MacOS] Explicitly use cmake from cloned conda environment (#92737)
15c46eb89b : Remove try catch in test_torchinductor (#93004)
17803fb36e : Make meshgrid support symbolic shapes (#93075)
5de19dd348 : Don't copy name_to_input in OutputGraph (#93034)
f30787e52d : Update XLA docker image to v0.8 (#93041)
d9f0d14835 : Update RELEASE.md with pinning xla and builder PRs (#93079)
0e92bbe5b1 : Add sparse COO tensor support to torch.sum(dim=..., keepdim=...) (#92979)
ca2a23c243 : [BE][CI] Move more builds from 3.7 to 3.8 (#92928)
729f1a8ef2 : Setup shebang and set -x on generated runner script (#93007)
7012d985fa : Revert "Improve `bsr @ strided` performance in `baddmm` for `bfloat16/half` with Triton kernels. (#88078)"
3888555fa1 : Apply some more missing moves in aten native (#92983)
7e449e8ba7 : Fix some silly Inductor bugs (#92997)
abcaa05f55 : Revert spurious submodule change from #92107 (#93067)
5e9fa0a8fc : Mark crossvit_9_240 as passing dynamic=True (#92981)
1d03a6a901 : [Quant][Fx] Fix issue: qconfig_mappings of onednn backend are not correctly set for fused modules (#91297)
913866efbf : [PT-D][TP] Fix TP API for FQN path based parallelization (#93029)
46f16b9363 : Improve `bsr @ strided` performance in `baddmm` for `bfloat16/half` with Triton kernels. (#88078)
4c074ddfd2 : [functorch][reland] vmap: bitwise operators (#92836)
ccad2e5000 : Include cublasLt as an option in max_autotune mode (#92915)
d88bc38b0c : [functorch] fix batching rule for dropout (#92975)
77f336600a : [PT-D] Enable Meta Tensor Support for DTensor (#92652)
e714e37a06 : [optim][sgd] default to foreach when CUDA + differentiable=False (#92730)
8c9f745af1 : [foreach] guard default support on native tensors only (#92923)
c9ce0e63e8 : [Dynamo] Support context wrapping(e.g, torch.no_grad) on nested functions w/o closure (#92922)
a6b51448f5 : [Dynamo] Supports if condition on user defined object (#90892)
819bd5b77a : [nn] add set_to_none flag for C++ optim endpoint (#92989)
dbeb513192 : [vision hash update] update the pinned vision hash (#92937)
68f198913a : Revert "Mark XLA Linux jobs as unstable temporarily (#92634)"
f646126ecd : Running timm benchmarks no longer silently retries (#93030)
d322f82b05 : Add @count util to torch, use it to track benchmark stats (#93013)
c11b301bcd : [NVFUSER] refactor nvfuser build (#89621)
0a57a20c02 : [caffe2] Fix pybind11 native python link error (#92325)
341613fc14 : Move the pin to latest to unbreak the xla CI (#93000)
32bcb97c7a : [package] Add better debugging for torch.package (#92939)
22b6a5fda9 : Update base docker image tags for ROCm CI (#90694)
cee5174d44 : Add test tracking operators without decompositions (#90887)
345695e8f7 : Remove PY37 from binary build matrix (#92919)
1af9231c98 : Replace IndexingDiv with FloorDiv in test_torchinductor (#93003)
1f55f3b0de : Solving the under/overflow for complex division (#92539)
b90496eef5 : [nn] zero_grad() set_to_none default True (#92731)
5441f2c067 : Fix DDPOptimizer fake_mode execution (#92986)
e7b7e8dc3d : [SDPA] Remove unused rng_engine_inputs (#93024)
dd05f028e2 : [PT-D][Checkpoint] Rename DCP storage layer init() (#92869)
b0f3736fa2 : [BE][CI] symlink .jenkins to .ci (#92846)
b453adc945 : [BE][CI] rename .jenkins (#92845)
67689c823f : refactor: move dynamo/TorchXLA bridge to pytorch/xla repo (#92601)
b2f3ff6183 : [Py3.11] Remove skip logic from vmap and forward_ad (#91825)
f2f42e54ca : Apply some std::move and param value fixups to aten (#92901)
b073c09f7a : Added keep_key option to Grouper (#92532)
63331a5fac : Add --timing and --explain to CI runs (#92980)
63e47c68a6 : [cpp] remove checks from embedding bag impl (#92982)
99ced6482a : Disable vml's abs and log1p (#92113)
d4c8e37b85 : Improve performance for unary kernels using vml (#91963)
0de81906cc : Add get-job-id in get-workflow-job-id action (#93001)
d354499faf : adding some more missing ops to vmap (#92110)
92fbb35bff : Upload failures shouldn't fail a CI that passed tests (#92996)
e292ddff4e : More clang-tidy fixes (#92944)
4e67332677 : Add few more tests to 3.11 smokechecks (#92946)
b399007a07 : Make TensorIterator give better error message for symbolic tensors (#92914)
c0ed0f22cd : [FSDP] Fix `no_sync()`, `use_orig_params=True`, mixed precision, sharded (#92874)
077e135ed6 : add number of cuda retries into tracker (#92557)
a6ac922eab : Rename Canonical Aten IR to Core Aten IR (#92904)
e5fd7e6d8f : Fix to use upsample_bicubic2d.vec decomp for dynamic shape support (#92854)
0fc2f9febb : Disable torch_jit_fuser_te for dynamo CI (#92945)
2ee94633a1 : Change ciflow/inductor to test inductor inference with dynamic shapes (#92771)
f724ecbd52 : Add dynamic shapes aot_eager to periodic (#92770)
9c487a4b91 : Fix #92814: assertion error when explicitly provide out=None (#92873)
f180873fd5 : Revert "[CI] Disable regularly failing CUDA 11.8 windows periodic tests (#92902)"
e45b566018 : [inductor] skip CUDA tests under ASAN (#92883)
a3715efd8b : Remove windows check for cmake to build Fused kernels (#91909)
f0d09572b0 : [CI] Rename TSAN job (#92929)
01f1097770 : Revert "Fix to use upsample_bicubic2d.vec decomp for dynamic shape support (#92854)"
54bbb446ca : lru_cache shape expansion (20-25% speedup on local bench) (#92860)
78caa7921c : [dynamo] Allow DynamicShapeVariable as predicate to cond() op. (#92864)
2503a4a7c6 : Fix MPI backend PG initialization (#92847)
18d5288010 : Add support for Generator=None in inductor (#92851)
f3266015a4 : Add `_StorageMeta` metaclass for `StorageBase` (#92648)
4d9920fa9c : Move PyInterpreter code in `python_variable.cpp` to its own files (#92647)
4bc0491752 : Add USE_FLASH_ATTENTION flag to setup.py (#92903)
bf1ff4918f : Fix Dockerfile conda install error for some shells (#92702)
b0f5e15c4c : [CI] Enable Python-3.11 in smoke CPU testing (#92787)
6c7e6d9689 : Make `torch.fx` compatible with Python-3.11 (#92895)
a2da0a0b02 : Revert "Add test tracking operators without decompositions (#90887)"
e665f03ad8 : Fix dynamo func defaults handling for torch.device, size, dtype (#92880)
d49187bf88 : Fix to use upsample_bicubic2d.vec decomp for dynamic shape support (#92854)
9b23fd378f : Revert "Logcumsumexp for complex in CPU and CUDA (#90847)"
acdd462b1a : Revert "Remove deprecated torch.symeig (#70988)"
16f7db5287 : Don't fail-fast for docs, only push on schedule and some tags (#92853)
d4a35e21c0 : Revert "[MacOS] Explicitly use cmake from cloned conda environment (#92737)"
550f98332b : [fix] vmap and anomaly mode interaction (#92672)
fb46d3e138 : Run all of the timm models shards in the periodic (#92900)
2740daf701 : Add test tracking operators without decompositions (#90887)
5f09f76b5d : Revert "Revert 61cdae0ce58bcbe048b143356fd9ded821225657 to fix CI (#92631)"
a817008bb3 : Fix #92108 (#92870)
9e56378ef2 : Add documentation for DCP. (#92813)
bcbc522d1f : [CI] Disable regularly failing CUDA 11.8 windows periodic tests (#92902)
68a40a47a0 : [Inductor] Lower aten.tan (#92837)
19c9b09449 : Replace IndexingDiv with FloorDiv in Inductor (#92878)
c0327eb463 : Some more inductor fixes for symbolic shapes (#92867)
0fe5367058 : [Vulkan] implement abs (#87414)
7265f60ad0 : Regularize mask handling for attn_mask and key_padding_mask (#92733)
a2e1365248 : [functorch] Remove not needed named member polyfill functions (#92613)
d8aa68c683 : make sure that our error handling runs with the GIL enabled (#92848)
abe64889b8 : [inductor] make `conv2d` tests pass (#91952)
045d1de02d : Fix some code issues (#92760)
3f64c96655 : `asarray`: Add support for NumPy scalars (#90914)
cc4fbd1077 : remove default implementation for RoIAlignRotatedOp::RunOnDevice (#92885)
70f4b3551c : Add Hook to store arbitrary python objects that are copied over in tls (#89169)
118a6dd1f1 : [vision hash update] update the pinned vision hash (#92875)
b6f41e2bcd : [MacOS] Explicitly use cmake from cloned conda environment (#92737)
0bf7506051 : [CUDA] Drop CUDA < 11.0 test flags (#92605)
a799acec8b : Allow cublas an cudnn to be in different nvidia folders (#92122)
eb32bb2ca6 : [Executorch][Quantization] Backend Config for functional embedding (#92700)
9613395e2f : [SDPA] Integrating the main branch of flash_attn instead of cutlass (#91994)
1c30844eaa : where() function added as a Tensor method as well (#92849)
fb980581a7 : Revert #92688 and #92348 (aot autograd explicitly errors on double backward) (#92863)
397b1a3da0 : Remove unnecessary includes from `python_variable.cpp` (#92839)
8c8cd9539d : Add missing moves to torch autograd (#92772)
2a8669c54c : ci: Increase timeout for linux binary builds (#92859)
402c6d4299 : Add Meta backend into tensor type strings (#92697)
dd4b46e010 : [PT-D][Checkpoint]rename init() (#92829)
7560660bd3 : Update XLA pin (#92806)
57fe33403d : [lint] clang-format register_prim_ops_fulljit.cpp (#92150)
2cf03bbbab : Revert "Run all of the timm models shards in the periodic (#92743)"
d70ed68162 : Remove deprecated torch.symeig (#70988)
dd25111250 : [caffe2] Remove OperatorBase::newstyle_outputs_ (#67093)
e137dcc2c8 : Splitting #91254 into two PRs (#92748)
f7e1f3e8bb : [PT-D][Checkpoint]Resolve issue #89501: Rename _nested_tensor.py to (#92705)
9bfd1357d5 : Add CUDA 11.8 CI workflows (#92137)
f333885704 : Create pt2_bug_report.yml (#92773)
3643d5deed : Move ASAN and ONNX to Python 3.9 and 3.8 (#92712)
4e9539e002 : [ONNX] Support ListConstruct in quantized_args (#92009)
df14650f0b : [SDPA] Update SDPA API and make function Public (#92189)
1237cf6b6c : Allow direct Tensor constructor to return preexisting PyObject (#92754)
e994e78397 : Added vectorized horizontal flip path for channels last for NcHW (#91806)
a112814a7f : Simplify retains grad hook implementation (#92604)
71b1051230 : [Docker] Factor GHCR push into its own step (#92832)
9f381c9b7f : sparse_sparse_matmul: simplify backward (#91712)
36ba2ce546 : [BE]: remove old dataclasses install from CI (#92763)
a43b55e135 : A few usability improvements for the dynamo benchmarks. (#92713)
d40a4540d6 : Fix typo under docs directory (#92762)
8f294f785f : [FSDP][optim_state_dict] Fix the conditions to check non-parameter associated states (#92744)
d90d92e733 : Don't fail-fast Docker builds (#92816)
c0dd9b3b67 : Revert "[Executorch][Quantization][BE] Refactor Choose Qparams (#92592)"
9c6433ce48 : Revert "Move ASAN and ONNX to Python 3.9 and 3.8 (#92712)"
2037746e8d : [inductor] Rename aot_inductor_debug to aot_eager_decomp_partition (#92314)
63d6ee7d02 : [FSDP][Easy] Remove outdated comment (#92739)
b88340ac72 : [PT-D][Lint] Include nested directories to ufmt (#92779)
afe6ea884f : Revert "[BE][CI] rename .jenkins to .ci, add symlink (#92621)"
5d66a418de : Swap file size on BE platform (#92810)
4a3fb7bcbc : Make CI_SKIPS into a consolidated dict (#92769)
3cfd2fa1c7 : Make --inductor imply --backend inductor (#92764)
7ddcf4e0c3 : Revert "[functorch] vmap: bitwise operators (#91971)"
fa5be78de1 : Cleanup get-workflow-job-id action (#92193)
b5f614c4cd : Move ASAN and ONNX to Python 3.9 and 3.8 (#92712)
8f3600b966 : [RELAND] Add metadata coverage for unsafe_split and unsafe_split_with_sizes (#92802)
53ef803705 : Make torch.cond work with retracing (#92646)
e54f7b3edd : [functorch] vmap: bitwise operators (#91971)
53bfba0d72 : [inductor] run CPU and CUDA tests with dynamic shapes (#92667)
30876229a7 : [mta] Backward of unary foreach functions (#89591)
32b2d8009a : check if `multi_tensor_apply_kernel` was called (#92077)
b985c2ef4a : [PT-D] Enable init ops for DTensor (#92651)
20bf77f9bd : Fixed virtualized import and typing rule (#92774)
387d769156 : [BE]: Replace string compares with more efficient cpp comparisons (#92765)
582485bf0f : [BE] Use data() method when possible as it's safer and more readable (#92755)
b847ac227f : Fix typo in buckbuild.bzl (#92751)
c52567ec18 : Switch CI exclusions to use exact match. (#92761)
e57a694d77 : Add some missing moves to torch jit passes (#92317)
cfaa1bace3 : A bunch of fixes for Inductor + dynamic shapes enablement (#92609)
2f6a975f25 : Remove cffi dependency as it doesn't look like we're using it (#92738)
0d9de46d9c : Revert "Add meta kernel coverage for aten.unsafe_split, aten.unsafe_chunk (#92608)"
36e1f7bc2b : Add meta kernel coverage for aten.unsafe_split, aten.unsafe_chunk (#92608)
6016e4c707 : [quant][fx][refactor] Rename modules to named_modules (#92575)
ed07070a11 : Restore lint after PR 92637 (#92759)
6bc62a6392 : Revert "[inductor] run CPU and CUDA tests with dynamic shapes (#92667)"
93e71cc2f5 : Add helpers for running tests and then putting them in a CSV (#92642)
756acd3fa1 : Guard solve behind mod for symbolic shapes (#92597)
363ca57d02 : Remove is_aot_autograd_safe_to_run (#91927)
fb776a2df1 : Fix mistaken script merge (by me) (#92756)
425e506ffe : [inductor] run CPU and CUDA tests with dynamic shapes (#92667)
5c4f0fd72c : Change convolution to use symbolic shapes for propagation (#92397)
97342ae04b : Fix python tensor hooks behavior on inplace (#92734)
de69cedf98 : Run all of the timm models shards in the periodic (#92743)
bea0b5ba73 : [BE] Delete unused docker configs (#92711)
020c0d5895 : Add debugability comments to DDPOptimizer (#89802)
5778c04a15 : Add `--timing` flag, phase timing to @dynamo_timed (#92637)
27bf879b8c : Forward fix: restore sebotnet33ts_256 aot_eager skip (#92741)
3cc1031322 : Mark XLA Linux jobs as unstable temporarily (#92634)
e4d81a9ec9 : fix various pointer issues (#90651)
0ab4ab9f8d : [Dynamo] Fix calling UserDefinedObject.func should pass self object (#92050)
0d870b50d3 : [optim][nadam] group tensors in foreach, make it default (#92715)
9ccf9362c2 : [optim][rprop] default to foreach when CUDA + differentiable=False (#92728)
c628654724 : [optim][rmsprop] default to foreach when CUDA + differentiable=False (#92727)
7277247a8c : [optim][radam] default to foreach when CUDA + differentiable=False (#92726)
9f356568ab : [optim][asgd] default to foreach when CUDA + differentiable=False (#92724)
30bda6b12b : [optim][adamax] default to foreach when CUDA + differentiable=False (#92723)
9b4a778420 : [optim][adagrad] default to foreach when CUDA + differentiable=False (#92716)
6f1727b288 : Print aot graphs if user specifies aot graph env vars (#92720)
c0fe41f983 : Use SymBool for is_contiguous computation (#92229)
011df6630c : [vision hash update] update the pinned vision hash (#92732)
d2728bb6a7 : [functorch] add is_any_true (#92686)
e6a8267cf5 : [pt2.0/inductor] Fix race in cache dir across ranks on the same host (#92664)
8972a9fe6a : [BE][CI] rename .jenkins to .ci, add symlink (#92621)
09eb4c2a70 : Revert "Update Module.__setattr__ to respect property setters (#92044)"
85851b1e8f : remove useless clang-tidy suppression (#92287)
5489b32337 : Add periodic job to test aot_eager on benchmarks suite. (#92695)
9ad0aca6e5 : Update aot_eager CI failures (#92696)
1bf512017e : Refactor test_inductor_benchmark into test_single_dynamo_benchmark helper (#92665)
85a1f0223a : Add a warning about performance cost of set_default_device (#92703)
5c6f5439b7 : Implement SymBool (#92149)
34e8eb229d : Dispatch the auxiliary frobenius_norm and nuclear_norm to better implementations and deprecate them (#81763)
1af40d5108 : [cublas][cublasLt] Fall back to unfused `addmm` for 2-byte-aligned inputs (#92201)
a74c8df7cd : [quant][fx][pt2e][be] Store node_name_to_target_dtype to node.meta["target_dtype_info"] (#92574)
de0375e79d : [optim][foreach] Do NOT inplace modify gradients (#92706)
2b885e1f6c : [optim][NAdam] Fix discrepancy between mt vs st impl (#92699)
896b6d8768 : fix the formatting of runtime error msg in prims _cat_meta (#92124)
703265e599 : Shard mac to 3 (#91277)
d6c3468f70 : Don't allow recomputing a node that *must* be materialized in the backwards pass (#90896)
97b7e4cdd5 : Fix GroupNorm backward prop on CUDA (#92671)
8c0289a61c : [CUDA][CUBLAS][BFloat16] Tentatively disable reduced precision reductions for some matmul tests (#92599)
5644059489 : [inductor] Lower torch.exp2 and use it for torch.pow(2, x) (#92632)
5a1344407a : Add GHA side support for ciflow/inductor-perf-test-nightly (#92693)
a3efa9d740 : Create autograd Function for aot_autograd backward only when needed (#92688)
eee2869ea7 : [PT-D][checkpoint] Resolve no such file or directory issue when checkpointing on multi hosts (#92553)
e4d83d54a6 : Foreach gradient clipping (#91846)
44b7a0b7ef : Clean up argparser help (benchmarks/dynamo/distributed.py) (#92687)
9db4323e4c : Deprecate capture hooks except distributed use case (#92653)
c4501593c3 : Delete get_pyobj() entirely (#92638)
5610766044 : Mark test monitoring as an optional process (#92658)
8b3e35ea4a : Revert "Run dynamo/test_dynamic_shapes serially (#92215)"
fb3d9f39cc : update vmap to accept nones (#91644)
2fb328eb46 : [Dynamo] Preserve source_fn in node.meta (#92399)
dd760c98f8 : [decomp] Use new squeeze.dims overload in decompositions (#91602)
2af2952c66 : logaddexp2: Use log1p and exp2 (#92116)
67bb5236da : lint fix (#92685)
2891cecd8d : Revert "Add meta kernel coverage for aten.unsafe_split, aten.unsafe_chunk (#92608)"
215f4fc355 : Update android/README.md, how to build pytorch android from source (#92356)
b2ca2c8662 : [optim][adagrad] group tensors in foreach to maximize perf (#92362)
44132cc4b0 : Revert "Add `--timing` flag, phase timing to @dynamo_timed (#92637)"
5ac22782d1 : Optimized vertical flip using memcpy (#89414)
387357539f : Log accuracy failure in more cases (#92645)
64985123e4 : Logcumsumexp for complex in CPU and CUDA (#90847)
4386f317b9 : Add meta kernel coverage for aten.unsafe_split, aten.unsafe_chunk (#92608)
274958ef43 : [vmap] unsafe_split : batching rule and OpInfo (#92291)
f6acd95ae5 : Fix performance smoke test script bug (#92660)
2a3954372a : [Dynamo] Make torch.autograd.Function.forward support graph break and no re-compilation (#91295)
119d5e425c : [Inductor] decompose expm1 for CPP vec (#92289)
38a4cb765b : Torch package support in dynamo (#91821)
773b513435 : Add `--timing` flag, phase timing to @dynamo_timed (#92637)
663bf4ba15 : [vision hash update] update the pinned vision hash (#92270)
1464db08b4 : [quant][pt2e] Support setting qconfig by module_type (#92355)
620846c8b4 : Remove reference in dynamo benchmark makefile to triton master branch (#92663)
e9bc82f54b : Vectorize torch.exp2 on CPU and add complex support (#92115)
52e8af57a6 : [3/N] Update ema_teacher_arch in the backward call (#92080)
f659452009 : [FSDP][1/N] Split `fully_shard` unit tests (#92296)
59071ab1e7 : [Executorch][Quantization][BE] Refactor Choose Qparams (#92592)
cf5495ac3a : Add perf check for inductor smoke test (#92358)
493a6ced74 : [fx] Throw error when symbolically tracing control flow ops (#92313)
4110900b22 : let inductor generate broadcast when loading a single value (#92595)
f0e3c4929b : only copy meta if available (#92623)
60bf851931 : Revert "Improve `bsr @ strided` performance in `baddmm` for `bfloat16/half` with Triton kernels. (#88078)"
550983e39d : Revert "Move check_label ci to mergebot (#92309)"
190f7803f5 : Move check_label ci to mergebot (#92309)
b33d9e2c87 : Point to README.md#from-source instead of duplicate instructions in CONTRIBUTING.md#developing-pytorc (#91850)
706aa51628 : [dynamo] Support control flow map() operator. (#91939)
647b8f8e3e : Add TORCH_CHECK_TENSOR_ALL (#89097)
25e530083e : [ci] Run test_decomp parallel (#92566)
0998ec1e27 : Revert 61cdae0ce58bcbe048b143356fd9ded821225657 to fix CI (#92631)
a20c678c72 : Rename Makefile_dashboard to Makefile (#92584)
90024436e7 : Do not specialize int/float with dynamic=True (#92570)
0bc875ac1d : [dtensor] disable gpu tests in op db first (#92611)
a2b8e891f6 : Fix/modernize dynamo docs (#92572)
ce43fc586f : Register sccache epilogue before starting sccache (#92587)
44e52ea514 : Reenable mobilevit_s in CI, seems to pass (#92585)
b6cfd62285 : vmap support for torch.linalg.vander (#91749)
3ba5eae72a : [optim][radam] fix eps discrepancy for foreach (#92551)
97f34e367d : Run CI in a new environment (#92378)
ccbdf49582 : [MPS] Fix index_select scalar input with multiple indices (#91064)
827e22ec2d : Revert "[vmap] unsafe_split : batching rule and OpInfo (#92291)"
a9f4462847 : [primTorch] Remove prims.to_dtype (#92380)
1906eaf22f : [BE] Get rid of `future` (#92596)
1bc60c6b31 : [reland] Improve hooks ordering behavior (#92559)
0510ae59b3 : [vmap] unsafe_split : batching rule and OpInfo (#92291)
0a404fdd82 : Follow up comments of PR #91531 (#92359)
2066523508 : Fix `ShardedTensorMetadata.tensor_properties` for Python 3.11 (#91795)
06d54b4061 : [threaded_pg] fix the comments of MultiThreadTestCase (#92373)
997de44100 : [dtensor] delete lagging op db and update op db tests (#92290)
8383b5c488 : Improve `bsr @ strided` performance in `baddmm` for `bfloat16/half` with Triton kernels. (#88078)
4f4b62e4a2 : some fixes to get symbolic shapes working through inductor (#92320)
cac217c80a : Fix key error formatting and move exc code to exc.py (#92593)
ba6820574c : Make run_dynamic_ci_skips_only.sh more generic (#92581)
2a7a859d00 : [CI] move parallelnative to periodic (experimental) (#92567)
28cb3141e8 : Remove temporary export skip hack (#92160)
ef2586422c : fix promote_constants with ExpandView (#92403)
bdbd3ed312 : When nopython=True, Dynamo can't allow graph breaks. (#90970)
eb39d990ce : Guard on at::Tensor device index (#91779)
388d79ccda : [CI] valgrind 3.16.1->3.20.0 (#92552)
bb7790781f : Make aot_autograd explicitly error when double backward (#92348)
62eeb7d60f : [PTD][Oncall] Sync Reorder structure for compatibility with linux-6.0 and gloo submodule for PT (#92568)
34353a402e : [mergebot] Flatten workflows into jobs, fix bugs (#92097)
8b861544f9 : Remove lowering and decompositions of zero_, zero, zeros_like... in favour of their references (#92071)
b5c3b4a36c : Fix dynamo.export(aten=True) for condition op (#92361)
c5cb46ecdb : [optim][asgd] group tensors in foreach to maximize perf (#92364)
5fdddbbfe8 : Fix checking of current mode in PyOperator dispatch (#92357)
f8a07ca422 : Reland 2nd attempt "Add hierarchical module names to torchFX graph.node" (#91721)
76cb2d0ede : fix incorrect _embedding_bag meta (#92549)
5aa3740d63 : Change references to pytorch/functorch to the torch.func APIs (#92543)
fbafcecf8d : [optim][radam] group tensors in foreach to maximize perf (#92365)
de459bdfaa : [optim][rmsprop] group tensors in foreach to maximize perf (#92369)
07800c52af : [optim][adam] group tensors in foreach to maximize perf (#92349)
e2433e420c : [optim][adamax] group tensors in foreach to maximize perf (#92363)
92d412d684 : [FSDP][optim_state_dict][11/N] Let FSDP support NamedOptimizer/KeyedOptimizer when use_orig_params is False (#92184)
befe3b68de : Revert "Clean up C++14 code (#92216)"
4450424b8e : Reduce some ambiguity in Tensor (#92266)
8770a7ed6f : Decompose more inplace ops (#90967)
0d65a10a2d : [inductor] run CPU tests when CUDA is available (#92220)
dc1c0f78e2 : Remove dead TORCHDYNAMO_DYNAMIC_SHAPES print (#92547)
3481ad3365 : Make log parser work on inference runs too (#92546)
6420fecdc4 : Introduce sym_min and sym_max (#92107)
b26efd0dd2 : Run bazel jobs on 4xlarge (#92340)
bb34461f00 : [optim][rprop] group tensors in foreach to maximize perf (#92372)
b92a7afed9 : Reclassify some dynamic aot_eager failures as static failures (#92376)
ae4ec7de1e : Fix and update type hints for `make_functional.py` (#91579)
bcd9f189f4 : Remove setup-python on Windows CI and use Conda instead (#92183)
65056845d3 : Update clang-tidy to 15.0.6 (#92195)
74bc894ede : [BE] Delete unused args during docker build (#92396)
e525f433e1 : Revert "Improve hooks ordering behavior (#85849)"
7f0d321d2e : Add missing gc untrack for cpp autograd Nodes (#92351)
0070c546b5 : [BE][optim] abstract out docstrings, add differentiable docs (#92336)
0035340488 : Allow DDP to handle custom dataclass forward outputs (#92334)
5d01277fea : Deprecate torch.nn.utils.stateless.functional_call (#92280)
a8a44a1aa2 : Add deprecation messages for functorch.* function transforms (#92279)
21d2bd782b : stack_module_state should return unrelated parameters (#92278)
3aa6cec18c : [dynamo] exclude reset_rng_state when measure timing (#92237)
f0b592dae7 : Make masked_fill reference traceable (#90972)
368c737603 : [PT-D][5/N] Enable add_param_group for named optimizer (#91928)
61a7618f3c : [Quant][Eager] Copy MHA's batch_first attribute in prepare() (#91680)
206f4e47bb : Replace exp(x) - 1 with expm1(x) (#92154)
4058dedf21 : Replace log(1 + x) with log1p(x) (#92114)
5a2ae8805c : [Quant] onednn backend switch to ideep new api without affecting performance (#91056)
fb50a4b4ce : [Inductor] added aten.exponential_ decomp (#91673)
4a4520e74b : Retire unsafe sparse tensor constructors in Python API (#91331)
dfbdfb276e : Clean up C++14 code (#92216)
c55f6973e4 : [dtensor][3/N] move OpSchema and types to a separate file (#90732)
dc95ef25e5 : [dtensor][2/N] add __repr__ to placements (#91785)
a1186d6af9 : [dtensor][1/N] add __hash__ to device_mesh and dtensor_spec (#90731)
bc9af74c99 : Clear references to user tensors after compilation is finished (#92353)
387ca598a1 : [nn] full_backward{_pre}_hook: warning for Module returning dict, list, etc (#87547)
3868eeb75f : fix biasadd OMP perf issue for the packed MKL SGEMM (#92300)
bb11e072ae : Squash and merge linalg meta kernels (#92335)
0d4bbd1996 : [Lint] Add FSDP/composable API files to ufmt include (#90873)
9b173b87b2 : Refactor away leftover import indirection (#92188)
a414b7f367 : Make clone-deps checkout correct Triton hash (#92345)
6fa86d7402 : Add @chillee to codeowners for functorch tests (#92337)
94a7c01159 : Enable oneDNN implementation in LSTM op (#91158)
a41f00ed70 : [optim][sgd] group tensors in foreach to maximize perf (#92338)
98b78aa11c : [autograd.Function] setup_context always appears on the Function (#92312)
00fe63d1d8 : fx Graph should copy meta on deepcopy (#92062)
60fe2f4420 : Revert "Torch package support in dynamo (#91821)"
cf5a40c2b4 : Only warn about fallbacks once per graph (#92211)
30f2026863 : [inductor] Promote half-precision CPU constants to float (#91224)
764f79f680 : [Microbenchmark] microbench fix for triton template (#92282)
88366a9075 : Document hooks ordering behavior in the autograd note (#91667)
388b245d54 : Expose autograd.graph.Node as an abstract base class (#91475)
0157e2ef4e : [optim][adamw] default to foreach when CUDA + differentiable=False (#92306)
fcde6dbbac : [onnx] Add mse_loss symbolic (#90717)
40d6f2a020 : Update sdp_utils to check gradmode and subclassed tensors (#92323)
68f8042064 : Bypass filament2 for new pytorch random distribution method (#92190)
b31905c727 : Fix Windows cpu_profiling_allocator_test same pointer check flakiness (#92264)
16f9d1bb83 : [torch.func] Add migration guide from functorch (#91811)
89f1ad08b4 : Revert "Improve `bsr @ strided` performance in `baddmm` for `bfloat16/half` with Triton kernels. (#88078)"
7f256fff77 : Improve `bsr @ strided` performance in `baddmm` for `bfloat16/half` with Triton kernels. (#88078)
befe815466 : Revert "Add sym_size/stride/numel/storage_offset to native_function.yaml (#91919)"
88942a3199 : Revert "[FSDP] Do not clean FQNs even for `use_orig_params=True` (#91767)"
0c8f4b5893 : Update Module.__setattr__ to respect property setters (#92044)
4fc796daf9 : [optim] abstract out _default_to_foreach_util (#92305)
5c9c39a83f : Revert "[fx] rewrite `FloorDiv` to match Python better (#90906)"
013afc5abe : Revert "[fx] fix type promotion in `binary_magic_impl` (#91376)"
933cc67e7e : [pytorch] [compososable] make contract() pickle-able through functools wraps (#92120)
ea1007b89c : Run dynamo/test_dynamic_shapes serially (#92215)
2eaa7a25d0 : Fix model accuracy issue caused by vectorized transpose (#92299)
d29f0ba74d : Fix philox randn to follow standard normal distribution (#91945)
d6f3265e1a : [FSDP] Do not clean FQNs even for `use_orig_params=True` (#91767)
1439cb0314 : [FSDP][optim_state_dict][9/N] Rewrite the all-gather flow of optimizer state to support older GPUs (#91343)
46a81c8db7 : Deprecate .mT,.T,.mH,.H on 0D tensors (#92143)
66e498626c : Perform first the decomposition and then the ATen function to catch in-place modifications (#92243)
77b8aa6e43 : Wrap a few more functions to ease their tracking during debugging (#92004)
ea8b14f27e : Add a test for decompositions that decomposes all the operations as much as possible (#87182)
d162c8f92b : Assorted decomposition fixes (#87183)
da58f9eb8f : Rewrite out-of-place decompositions in terms of out-of-place ops (#92003)
1d47c59384 : Check in some utility scripts for running dynamic shapes sweeps (#92256)
049838f249 : Improve hooks ordering behavior (#85849)
fb1427ea8f : squeeze: allow squeezing multiple dimensions at once (#89017)
fbf9e379e1 : [autograd.Function] update error messages for vmap to point to docs (#92030)
81cc9bba5e : [autograd.Function] Kill the extension feature flag (#92026)
7aaad0b832 : Rename flag that enables/disables _SingleLevelFunction for functorch (#92025)
14ff58d4fa : [generate_vmap_rule] Delete unused output_shapes (#92024)
f5af97ef06 : [autograd.Function] add nice error message for incorrect usage of vmap (#92023)
2f9166ef89 : [autograd.Function] Cleanup asymmetry in generate_vmap_rule and vmap (#91787)
88b3810c94 : [fx] fix type promotion in `binary_magic_impl` (#91376)
d13207c7ad : [fx] rewrite `FloorDiv` to match Python better (#90906)
5e0d3458eb : Move XLA test job to 4xlarge (#92269)
ded2b47bde : Fix AOTAutograd 2.0 perf regression involving as_strided (#92255)
9b716a0682 : Clean up more clang-tidy suppression (#92203)
bbce4184be : Refactor inductor to use standard BACKENDS dict (#92187)
0388400f3f : Add sym_size/stride/numel/storage_offset to native_function.yaml (#91919)
801d831d7a : [dtensor] enable op db tests by using multithreaded test case (#92198)
2ce63ef26c : [dtensor] switch pointwise op tests to use DTensorOpsTestBase (#92197)
e16979c9a0 : [threaded_pg] full rewrite of MultiThreadedTestCase to enable device_type tests (#91650)
9942ddd5b3 : [threaded_pg] enable subpg creation and concurrent collective (#91649)
85edb58179 : Fix oneDNN double checkout issue and Upgrade oneDNN to v2.7.3 (#92239)
d62eff56bd : Fix typos introduced by 014ac7fda2d1e59796b1147221fb92f4377ca2f1
014ac7fda2 : Add ROCm merge rules (#85762)
eadbf762fc : Fix CUDA error not getting captured by handler (#92227)
32937f39f4 : Don't raise error if job_id can't be fetched (#92192)
301644d3cb : [ROCm] disable NVFuser (#92182)
0b90ddacd9 : Unit test for is_causal Better Transformers (#91900) (#92102)
b05f509601 : Add missing conversion for to_sparse.sparse_dim (#92006)
523d4f2562 : Revert "[cuDNN][cuDNN V8 API] Always build assuming cuDNN >= 8.0 (#91527)"
1a98c3e36c : Revert "Add kwargs support to torch.export() API (#92013)"
76c88364ed : [inductor] Respect dtype argument in ops.constant (#92093)
5a0fa04a49 : Add MTIA DeviceType for Meta training and inference devices (#92232)
9cf8434776 : [ONNX] Raise Unsupported for Grid Sample with volumetric 5D input (#92212)
85e0fd0280 : [FSDP][BE] Improve `device_id` + CPU offload test (#92031)
5a3b4dacad : [FSDP][BE] Rename `prefixed_param_names` -> `fqns` for consolidation (#92028)
b0888cce0f : [FSDP][BE] Better error msg for incorrect device for training (#92027)
b5d8fef9a5 : [DTensor] remove redundant device mesh test code (#92069)
513c1e71e2 : [DTensor] check DeviceMesh ranks contiguity (#91802)
2293a6b95e : [BE] Refactor get_workflow_job_id (#92191)
1da0ac2c93 : Enable -Werror=bool-operation (#92221)
bc4c324807 : Remove variable_excluded_from_dispatch() assertion from mkldnncommon (#92168)
d41b5d7c14 : [adam] Add not torch.jit.is_scripting() as a requirement for switching to fused (#92181)
da43584bef : [Reland] Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#92081)
55f0ed6dcd : [inductor] Fix an issue causing "Could not generate fp64 outputs" (#92036)
353e9f883f : Add name attribute to ValueRangeAnalysis (#92121)
a0626c356d : Cleanup std::move (#91987)
1490dc6421 : Revert "[BE] meow (#92174)"
dfabb91614 : [LTC] Use DataCache in GetIrValueForScalarFromCodegen (#92066)
3debb97084 : [BE] meow (#92174)
421f40e051 : Use binary units for CUDA memory summary (#91854)
b8057aa16d : Remove unnecessary copies of Scalars for TensorBody template (#92162)
3a0053abd6 : Move `PyObject` code out of `TensorImpl` into new `PyObjectSlot` class (#92169)
7568484d54 : [torchgen] Add CI job to cover custom ops registration for Executorch (#91291)
66b324cf06 : Revert "In inductor triton generated code, avoid masking when numel=1 (#91254)"
d3765509df : [optim][adadelta] default to foreach when CUDA + differentiable=False (#91896)
cb67d9460b : [PT-D] Fix `send`, `recv` return type (#92152)
4af5939d7a : [optim] Improve adadelta foreach, group tensors to maximize fast path (#92048)
3779a75fc9 : Apply noexcept to relevant move methods to improve performance (#92156)
901a34ccb5 : Add the new unstable workflow (#92106)
3794b4643f : [GHF] Record how many times PR is reverted (#92180)
70b3ea59ae : [ROCM] Modify transcoding: absolute path -> relative path (#91845)
214c0fdc4b : MYPYNOFOLLOW for test_utils (#92136)
04689ae209 : [CI][ROCm] skip multiprocessing tests that trigger hangs (#92101)
4d07ad74f1 : [cuDNN][cuDNN V8 API] Always build assuming cuDNN >= 8.0 (#91527)
4d26903739 : Revert "Pytorch-bot test (#92163)"
7fe3c64bdb : Pytorch-bot test (#92163)
f4b804eeaa : Call profiler step via optimizer post hook (#90101)
6783db13ef : Update CMakeLists.txt since MacOS linker doesn't support whole-archive (#91736)
745fe35df5 : [follow-up] Python Attr Serialization (#88913)
a72bcb3388 : Do not leak SkipFrame exception to parent frames (#91059)
a60125e298 : add docstring for adam differentiable parameter (#91881)
8f1c3c68d3 : [BE] Use nested namespaces in .cpp/.cu files (#92100)
a4a0195c6c : Fix torch.where signature mismatch that was caused by torchgen (#91627)
accecd7b04 : [torchdim] Fix Python 3.11 bytecode decoding in dims (#91290)
60e37a6e08 : Update sgd doc to insist on momentum buffer initial value (#92111)
a26e5e21b5 : Improve type hints for Module forward hooks (#92061)
890b68281a : Add kwargs support to torch.export() API (#92013)
b3e4f5029b : Add check-sparse-tensor-invariants flag to Context - 2nd try. (#92094)
a111dd9014 : [dynamo] support comparing numpy ndarray (#91870)
fa3841ffd4 : [ONNX] Fix potential flaky test in test_verification.py (#92105)
ec3941ada6 : [quant][fx] Add support for GRU in fx graph mode quantization (#91976)
0bd3fa3d22 : [Quant][docs] Move parts of BackendConfig tutorial (#91999)
a617d031ff : [Inductor Perf CI] Enable perf CI smoke test (#92051)
eb7b89771e : unify reduction types from different operators: scatter, scatter_reduce, segment_reduce (#91499)
a70387f0fa : [vision hash update] update the pinned vision hash (#92119)
fbbb19599a : Update dynamic skips after #92076 (#92103)
9412778d51 : Fix OneCycleLR error log (#92040)
61cdae0ce5 : Switch Windows CI jobs to G5 runners (#91727)
b7cad020b5 : [DTensor] require DeviceMesh size equals world size (#91801)
3dd9dbd942 : [DTensor] create default process group when absent (#91756)
f8e641bad4 : Revert "Make ModuleList derive from Sequence[T] and type it appropriately (#89135)"
8fa66a6337 : [quant][pt2e] Add a test to confirm we can set qconfig according to module_name (#91977)
6f749fd171 : Fixes to DSA infra (#91835)
4636fe701c : Limit the memory and CPU of Bazel build to avoid crashing the runner (#92056)
7078ad5b8c : Reland "AOT Autograd refactor + cleanup, handle intermediate views of bases, use view replay, fix non-tensor input handling" (#92076)
da77b10b41 : fix in-place geometric pmf (#92049)
5f55335c2e : Fixed output memory format mismatch for bicubic2d (#90470)
c4a6f21b50 : [JIT] Add tests for pow() with different dtype inputs (#91946)
515dff7811 : [functorch] move batch_norm_replacement to torch.func (#91412)
7bdcf6d4f0 : Revert "[FSDP] Do not clean FQNs even for `use_orig_params=True` (#91767)"
91920ee6da : sparse_mask: remove redundant mask.coalesce() in to_dense_backward (#92001)
b9182cbbd8 : Fixup torch jit with some initializers and moves (#92037)
5625f521a4 : generate set_device call to ensure context existence (#92055)
7c641eaaf0 : [Inductor] Support vectorized transpose in CPP backend (#91532)
eece6da162 : [inductor] Reduce device context manager overhead (#91045)
db466ae057 : Revert "[Modes] Add assert that the mode isn't already on the stack (#90770)"
a383789f4d : [FSDP] Do not clean FQNs even for `use_orig_params=True` (#91767)
7f50ff1685 : [FSDP] Test `use_orig_params=True`, `no_sync()`, mixed precision (#91193)
e5503aceae : [FSDP] Re-support model dtype change after FSDP init (#91192)
e096d2db5a : [BC-Breaking] Separate `stream_id`, `device_index`, and `device_type` in `pack` and `unpack` for `Streams` (#81596)
a2368a7c13 : [dynamo] delegate handling of len() of TensorVariable to size(0) (#92016)
3ab58fd5ed : optimize sampled_addmm performance on CPU (SparseCSR) (#90978)
81f7c40612 : Cleanup some unused includes (#91961)
8acf0e62d0 : Use c10 math constants consistently in Math.h (#91967)
c7a22bb7c7 : Revert "Add check-sparse-tensor-invariants flag to Context. (#90849)"
05d0c4cee3 : [functorch] Fix proxy unwrapping for cond(). (#91907)
a76bc410df : Fix `_foreach_norm` on some tensor sizes (#91844)
44413f2525 : properly convert fill value to x dtype in constant_pad (#92045)
fb38b9ff2a : [cuBLAS][TF32] Fix TF32 get/set test when `TORCH_ALLOW_TF32_CUBLAS_OVERRIDE` is set (#92052)
ffbd13b654 : Fix for swap_custom_module_to_observer doing duplicate swaps on the same node.target (#91905)
ccd8b66b0a : [testing] add ErrorInputs for `adaptive_{avg, max}_poolnd` (#90924)
6cfaa92239 : Handle tensor default func args when inlining (#90575)
8e2e648f84 : Propagate sources in VariableBuilder and add SuperSource (#91729)
07e595e88a : Add `device_idx` to `free_fn` in `CUDAPluggableAllocator` (#91398)
723d7641e2 : [vision hash update] update the pinned vision hash (#91744)
18677d5249 : sparse_mask: faster, with support for uncoalesced mask (#91964)
3305265962 : [FSDP] Clarify `MixedPrecision` docs (#91974)
8612ec5b90 : Implement hybrid sparse to/from dense conversions. (#90177)
e1bcbbf18c : [Quant] make x86 the default quantization backend (qengine) (#91235)
5766764d6c : [functorch] Fix map() operator behavior. (#91906)
b8252e07c7 : [Reland] add DisableTorchFunction that matches DisableTorchDispatch (#88219) (#92012)
6676193b5e : [frontend] Expose real_type getter for torch.Argument (#91938)
dc6916b341 : optimize gather performance for gnn usage on CPU (#87586)
f8026413f5 : Fix `CUDA_MAX_THREADS_PER_SM` for `sm_89` (#91972)
3613ff06b1 : [MKLDNN] Rename pooling_avg to pooling_avg_exclude_padding (#90247)
c537f5bee8 : [ONNX] Documentation for `torch.onnx.find_mismatch` (#90728)
ed7885c254 : [utils][foreach] Add group tensor by device and dtype util (#92014)
af242eedfb : [Inductor] Added aten.uniform_ decomp (#90869)
f40777e4ad : [Dynamo] Fix guard bug when np.float used in control flow (#91991)
8007c2d96a : Python Script Object to IValue (#91776)
8b00c54425 : Add utility report_compile_source_on_error (#91069)
4e21fc2075 : In inductor triton generated code, avoid masking when numel=1 (#91254)
33e3c9ac67 : Not explicitly set the manifest filename in Windows (#91988)
a155f64957 : Update _optim_utils.py (#91935)
28c736a424 : Third batch of canonical aten ops (#91995)
d0bfd79f3d : Make ModuleList derive from Sequence[T] and type it appropriately (#89135)
c5836153f5 : Revert "optimize sampled_addmm performance on CPU (SparseCSR) (#90978)"
74cbf058a5 : Support --dynamic-ci-skips (#91893)
83e6e9dde3 : Disable NVFuser in internal (Meta) build (#91836)
4806a9e7f6 : Remove DL_RUNTIME_BUG (#91960)
6287bb78dc : [static-runtime] clamp fast_sigmoid result into (0,1) range (#91993)
d8e795ecd5 : [modes] make python arg parser also check for python key (#91573)
702838637d : [Modes] Add assert that the mode isn't already on the stack (#90770)
8b3c4bc481 : [stateless] add weight tying support (#90477)
e03ac0ee8c : Add bf16 and change header file include path (#91838)
d24324bf1d : s/INDCUTOR/INDUCTOR/ (#91885)
84b819d083 : Preventing crashing incase of no network by loading from cache (#91569)
850cf8949a : enable `conj()` for sparse compressed tensors (#91695)
56ed976edf : hrnet_w18, tts_angular works with dynamic shapes (#91891)
d7dc1c2fd5 : Support zero dimensions in softmax decompositions (#91322)
afd8dd085f : replace vec::vec_scalar_t with at::opmath_type (#91086)
3790b50505 : inductor: fix .to(memory_format) issue which doesn't generate right stride (#91948)
92855a215b : [SDPA] Guard mem efficient attention in deterministic mode (#91979)
d540442e36 : [ONNX] Fix 'prim::PackPadded' shape inference (#91829)
812d774cc9 : Easy: add instructions for testing pytorch/builder (#91923)
8f5f15a64b : optimize scatter_add performance for gnn usage on CPU (#82703)
364f526b9c : [Inductor] assert generator for random, dropout (#91833)
554a796aef : Implement `torch._foreach_lerp` (#87562)
7c907bd829 : Minor doc updates for S3 update procedure (#91978)
19723d754d : [CUBLAS][TF32] Change cuBLAS TF32 environment variable to be initialization only (#85859)
de4e4c785a : [mergebot] Fix mergebot allow revert of codev diff (#91975)
6b542147a3 : Make job names match BUILD_ENVIRONMENT (#91512)
43050b8301 : Revert "[Inductor] Added aten.uniform_ decomp (#90869)"
4dcb10e027 : Add missing clang-tidy fixes for modernize-use-equals-(default|delete) (#91857)
b9a035c1c5 : Add check-sparse-tensor-invariants flag to Context. (#90849)
949f25be0c : [vmap] all, any : batching rule (#91966)
7c1c239db1 : [inductor] Rewrite Triton templates + epilogue fusion (retry) (#91575)
6912f7c564 : Update references to 1.14 to 2.0 (#91769)
fd0030fe74 : Fix indexing_dtype_strength_reduction (#91601)
c55293d640 : [Inductor] Added aten.uniform_ decomp (#90869)
0a677f2335 : [MPS] Add testcase for copying cpu tensors into strided mps tensors (#91784)
09c2b2af53 : [MPS] Solve contiguous view tensors using arrayViews instead of blits (#146) (#91743)
4f91b8e0ee : Fix typo under docs directory (#91871)
645fb217c0 : optimize sampled_addmm performance on CPU (SparseCSR) (#90978)
3aeb7127b4 : Revert "Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#91600)"
323e0143d6 : [Op Benchmark] Add Pointwise Conv2d Op Benchmark (#91918)
a6e2d76bb9 : [Vulkan] Add Override Mechanism to Shader Registry (#91917)
3d6f85c936 : [Vulkan] Enable Codegen ShaderInfo Registry from GLSLT + Params YAML files (conv2d_pw) (#91916)
67d401d1be : [Vulkan] Enable Codegen ShaderInfo Registry from GLSL files (conv2d) (#91915)
0c3ed2ed22 : [dynamo] Support dynamic slicing (#91341)
3139e687db : [Vulkan] Add Basic Shader Registry (#91914)
cd62ad5f88 : [Vulkan] Enable including GLSL files from custom locations in gen_vulkan_spv (#91913)
ec94cbc66a : [Vulkan] Remove GLSL Code Gen (#91912)
28eb3c8faf : [Vulkan] Generate ShaderInfos Directly via Codegen in gen_vulkan_spv (#91911)
776fef9ecc : [Vulkan] Merge ShaderSource into ShaderInfo (#91910)
370df963e0 : Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#91600)
e9cd7e0869 : [dynamo] Fix rst syntax for list (#90390)
7f2b5ea1e1 : Revert "Avoid device casting for all singleton tensors in optimizer states (#91454)"
e0b82d7d1f : [MPS] Fix convolution `Source and weight input channels mismatch' crash (#91822)
cdc30048e5 : Fix numel() result after resizing a sparse compressed tensor. (#91831)
ce50a8de75 : [CI][ROCm] add test_dataloader to CI_SERIAL_LIST (#91895)
1892c75a45 : fix narrow_copy correctness issue for non-contiguous input for cpu path (reland) (#91883)
d1cc64b2ac : [primTorch] Fix masking in `logsumexp` ref (#91941)
498be7ed25 : Revert "Refactor stack_trace preservation for node meta preservation (#90803)"
c887837ec3 : Reland "Fix dynamo handling for tensor attributes: T, H, mT, mH (#90463)" (#91897)
3726d23219 : Torch package support in dynamo (#91821)
ae2e755f15 : RM (unused?) has_mutation (#91931)
32e9b29ce9 : [pruning][core][feature] Add in SaliencyPruner to pruner._experimental (#91814)
42a63a7ed9 : Dynamo.export uses dynamic=True for symbolic tracing (#91899)
4919e11900 : fixing test_batch_norm_implicit_dtype_promotion (__main__.TestNvFuserDynamo) (#91541)
67f965b15a : Add Skylion007 as a core reviewer (#91890)
b0f359a3c9 : Disable win vs2019 cpu build+test until we figure out the linker crash (#91932)
138a0188e0 : Add support for logaddexp(float16) in CUDA and implement its reference (#91869)
df3adbd521 : Pin onnx-script to a version before they bumped numpy (#91929)
0f1302eeae : Refactor stack_trace preservation for node meta preservation (#90803)
1e768c63c1 : Add merged label to ghstack prs (#90238)
32356aaee6 : [4/N] Add test for partial training for NamedOptimizer (#91344)
26beb46da4 : Reduce #iters to make test run always (#91837)
95e3e339a8 : Add log_once to fused attention kernels (#91858)
333540a458 : Reland "Add torch.utils.device_mode" (#91796)
9d20d6d5ec : Foreach clamp_min clamp_max (#91384)
b2646dcb65 : Consider updating pin to CMake version 3.23.1 on run_torchbench CI (#91739)
d4aa807ba9 : Enable bfloat16 for hardtanh_backward_cuda (#91511)
630ef6c711 : Fix Dynamo+DDP documentation (#91832)
e67f5ab6cc : Print and zip remaining test logs (#91510)
00e5f3a9c5 : [primTorch] Move `logsumexp` decomp to refs (#91860)
84266ae670 : Revert "Fix dynamo handling for tensor attributes: T, H, mT, mH (#90463)"
f6c7cf1bf5 : Revert "Torch package support in dynamo (#91821)"
39524f20de : [functorch] excise remaining functorch imports from examples (#91282)
071756c9cf : [functorch] rewrite examples that use make_functional to use functional_call (#88851)
0ec3c5bc72 : [MPS] Reduce ops multi axes support (#91734)
fd213c3231 : Match get_attr when compare node (#91657)
fe80f190df : use context manager for path extension in torch.hub (#75786)
d85f3c8237 : Revert "fix norrow_copy correctness issue for non-contiguous input for cpu path (#91789)"
9b415240d4 : Revert "Reland "Add torch.utils.device_mode" (#91796)"
9945a78a94 : Fix dynamo handling for tensor attributes: T, H, mT, mH (#90463)
3643b4ee4a : fix sort crash when the input is expanded scalar (#91752)
136dadd689 : fix norrow_copy correctness issue for non-contiguous input for cpu path (#91789)
8cec433cf2 : Apply clang-tidy fixes to api/csrc/api/include/torch/nn (#91766)
f59845db40 : Symintify pytorch slicing logic (#91340)
81b5eff3c3 : Reland "Add torch.utils.device_mode" (#91796)
eeb3e49ed4 : Torch package support in dynamo (#91821)
73e5379fab : Apply clang-tidy perf fixes to aten (#91772)
2c00064113 : remove unnecessary decomps (#91828)
e3ed55d483 : [ONNX] Add aten::zero support (#91731)
0c1777acec : Dynamo benchmark: add CPU specific changes (#88477)
75c652821c : Assert valid base source for derivative sources (#91711)
edaba335b9 : [primTorch] Use torch.fill to implement prims.fill (#91747)
faed4db497 : [CI][ROCm] prune all stopped containers (#91815)
b32b81a0c5 : Make torch.split take symint as arg (#91724)
08a378a286 : Revert "[ONNX] Add aten::zero support (#91731)"
a2c5efaf0f : Un fold squeeze permute (#91656)
5fabd96f3c : [PT-D][3/N] Add FSDP hook with Named Optimizer (#91321)
acab0edfab : [ROCm] fix hipify mapping for cuDeviceGet (#90726)
53ef96faae : [MPS] Add support for randperm (#91708)
ff23508c0d : [ONNX] Add aten::zero support (#91731)
766ebf4441 : Remove hard numpy dependency introduced by inductor (#90796)
7cd951c21e : Properly guard all numpy usage within dynamo and remove UnspecializedNumpyVariable (#90795)
f44946289b : [CI][ROCm] fix device visibility, again (#91813)
4f1f14e38b : [JIT] Skip builtins while enumerating class methods (#91805)
69acc34083 : Automatically convert real tensors to fake in dynamo export (#91742)
ef495b7d64 : make sure mutated args are iterated in the same order (#91792)
b3603f8129 : Revert "Deduplicate c10 error and PyTorchError hierarchy (#87855)"
f219970990 : Return empty attention weights when need_atten_weights = False (#91782)
f77a9a585c : Add shape function for movedim op (#91696)
f556c5b979 : Revert "[dynamo] Support dynamic slicing (#91341)"
f4b3b577d8 : Docs push fix .netrc sometimes a directory (#91745)
87164ace51 : [MPS] Fix the ChannelsLast memory format in cat_out_mps() (#91786)
eeba9d5ab4 : Preserve node's meta during fx.transformation (#90737)
8e7dcd140a : [dynamo] Support dynamic slicing (#91341)
de99bc39e8 : [MPS] Remap the view ops to existing graph APIs. (#89436)
2354ff5fab : [functorch] test: try using reference_inputs in vmap tests (#91355)
eb8547e939 : Add a NestedTensor Readme (#91472)
859ac58c54 : [Inductor] Support loop split at given depth in CPP codegen (#91397)
2555971b76 : [inductor] fix output_stride of cat (#91233)
c99a2a43ad : [inductor] decompose tanh in CPP backend (#91687)
ad70a70171 : Revert "[functorch] test: try using reference_inputs in vmap tests (#91355)"
a51090d4b1 : [functorch] test: try using reference_inputs in vmap tests (#91355)
d0a4e2e782 : Don't remove files across the whole OS on clean (#91503)
e3bd38d224 : [DTensor] fix test_device_mesh failure on GPU (#91783)
66745831d7 : [ONNX] Support constant 'aten::__contains__' (#91660)
2f0e4839ee : [MPS] Fix correctness issues with Pooling ops (#91519)
33547bb587 : inductor: Move graph.lint() in Intel's FX Passes to the End of Loop to Reduce Compile Time(part 2) (#91677)
25ff10caa7 : inductor:enable conv+unary fusion for torch unary function (#91609)
2175c9414e : [cpu] implement erf based on oneDNN algorithm for aten::Vec (#91613)
745dc3a13c : [inductor] optimize lowering for empty-related operators (#91350)
e1a2b0d34f : Fix `test_math_ops` for python-3.11 (#91774)
de9c82f41a : [Meta] Register aten.pixel_shuffle.default for meta (#91605)
b2c68c1dea : [Quant] Update IDeep to support oneDNN conv add fusion (#90605)
aab55d6d0d : [Quant] Remove all the dequant nodes when the ref module has multi input args (#90157)
ae0c4c4c29 : Update version numbers in torch.{stft,istft} deprecations (#91761)
2a64365a29 : Fix rendering of std/var docs (#91730)
f571ae4fdb : Revert "Make torch.device usable as a context manager (#91525)"
c73147f741 : Revert "[decomp] Use new squeeze.dims overload in decompositions (#91602)"
0100293a7b : feat: adding greater_equal Scalar variant (#91324)
3b4e4d2b62 : Make `requirements-ci.txt` reading cwd independent (#91771)
a5f32f8978 : training support for dynamo+torchxla integration (#88449)
df4b3b13bc : Revert "squeeze: allow squeezing multiple dimensions at once (#89017)"
f11dc26ed5 : [ROCm] tools/stats/monitor.py support (#91732)
9262ffc692 : [decomp] Use new squeeze.dims overload in decompositions (#91602)
3bb63aa387 : Revert "Symintify pytorch slicing logic (#91340)"
9ca37d6527 : [MPS] Improve the performance of torch.linear() (#91114)
c775eb2879 : [CI][ROCm] always stop all docker containers (#91740)
1a0738f599 : [MPS] Add support for torch.linalg.cross (#91642)
8c172fa98a : Symintify pytorch slicing logic (#91340)
18b37bbff9 : Clang-Tidy: Improve tensorexpr headers with additional std::moves (#91572)
3d1772857e : Apply clang-tidy perf improvements to aten and torch/jit/passes/onnx (#91726)
bac33ea8b6 : [CUDA] Drop CUDA 10 support (#89582)
13b3d862dd : [vulkan] Move Tensor.* from `ops/` folder to `api/` folder (#91033)
aa562f94b3 : [vulkan] Remove dependencies from op/ in vTensor and move it to higher level namespace (#91023)
c7f32613ec : Find other temp directory for code cache if no /tmp (#91701)
229f12bf6a : [MPS] Implement nan_to_num() for MPS backend (#91110)
197e57ee68 : Use indexing instead of reshape for broadcasting (#91722)
ca62ed9067 : [vulkan] Remove ATen dependencies in vTensor class (#91022)
f630294f59 : Optimize GELU BFloat16 Impl in CPU path (#79378)
ad7aefb608 : Fix Meta tests for FFT functions (#91628)
b44d46702a : [MPS] Fix correctness issues with Upsample 1D and 2D (#91669)
7ff97d2e95 : update .circleci/docker/common/install_cmake.sh for centos (#91647)
64a3738fcd : [vulkan] Remove external dependencies in core API and introduce torch_vulkan_api target (#91021)
700399e3f1 : Make sure the ends of linspace are correct regardless of the precision (#91625)
223d1aa692 : Improve linspace decomposition and remove its lowering (#91621)
6790a558dd : Simplify macOS build instruction (#91561)
d6bd67f2eb : vmap support for torch.trace (#91679)
56db21aec1 : [Checkpoint][Test] Add test for optimizer state_dict and resharding to 2d checkpoint test (#91092)
7dd28e9e83 : [MPS] Fix data type and shape issues in Scatter and Gather ops (#91514)
fc59664ef4 : [MPS] Add Unique and unique_consecutive ops. (#88532)
13de5a0150 : [MPS] Fix the right padding bug in Monterey (#91522)
1effabe257 : Support per-parameter test decoration (#91658)
0e60bef516 : [Lint] Update clang-tidy to 11.1.0 (#91709)
d4713b4c7d : [dynamo] Fix bug in tensor.item fake tensor propagation (#91668)
4bad40f559 : Revert "inductor: add conv+hardsigmoid fusion for cpu path (#91433)"
c18e8c68d8 : [ROCm] fix parallel test runners and device visibility (#91137)
5a6019033f : [bazel] change visibility for //c10:headers (#91422)
17bc40c19d : add __hash__ to FunctionSchema (#90730)
a7749ae177 : [reland] rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218) (#89221)
a5e2309f5e : [bazel] Add @pytorch in tools/bazel.bzl (#91424)
1e725c9747 : Avoid device casting for all singleton tensors in optimizer states (#91454)
979255067d : [MPS] Fix the crash in max_out() caused by cached key conflict (#91520)
ce9963e6ba : Fix typo in _lobpcg.py (#91641)
66b3325304 : Adds more nvidia pypi dependencies (#89944)
e26cb06681 : squeeze: allow squeezing multiple dimensions at once (#89017)
3120054c15 : Vectorize norm(double, p=2) on cpu (#91502)
2004df9097 : Remove python ddp (#91663)
ebb7f20afc : quant: make various configs printable (#91419)
316ba9e6fc : Run jit legacy tests sequentially (#91518)
80394bb734 : [MPS] Register norm_dtype_out_mps and cdist (#91643)
619d52a5d2 : Make torch.device usable as a context manager (#91525)
aa0ca994ca : [Inductor] add missing ops for cpp vectorization overrides (#90750)
1d2bfea33e : inductor: add conv+hardsigmoid fusion for cpu path (#91433)
6f9a4ae5c9 : Revert "Populate the eviction_policy field for load/store properly (#91316)"
e116f1a3ff : Add an env variable to disable addmm_cuda_lt kernel (#91436)
162474d7fd : [functorch] add new ensembling api, demonstrate in example (#88850)
c5e5916fff : [functorch] add functorch functional_call, update tests to test this (#89213)
264f5ed516 : [autograd.Function] Add docs on the functorch interaction (#91452)
31a699934b : Remove CircleCI ios PR jobs (#91638)
38de981e16 : [MPS] Add nonzero mps support (#91616)
97ff20d722 : [cuBLAS] (re-open) Fix default cuBLAS workspace size and parsing for multiple workspaces (#91564)
0a6053e9b5 : Revert "Avoid copies in matmul (#76828)"
6bf0e3b697 : [inductor] Check for BackendCompilerFailed on CI (#91634)
3a60debe9d : implement ordering (#91362)
743c385543 : refactor show_traces in memory_tracker (#90145)
b6bb726cc3 : Revert "Dispatch the auxiliary frobenius_norm and nuclear_norm to better implementations and deprecate them (#81763)"
57b7f33ba8 : [Inductor] Move graph.lint() in Intel's FX Passes to the End of Loop to Reduce Compile Time (#91179)
818079dc4e : disabled flaky c2 test (#91640)
7ef7c57ae7 : CSC/BSC -> COO coalesce fix (#91440)
4709523722 : Revert D42051833: Multisect successfully blamed D42051833 for test or build failures (#91458)
2965d7e11a : [CI] Disable rocm distributed tests (#91632)
688e351970 : [MPS] Implement MPSGenerator to enable manual random seeding (#91348)
dfb651452a : inductor: meta registration for mkldnn ops (#91299)
8c2e82b487 : Avoid copies in matmul (#76828)
db2a237763 : Revert "Avoid copies in matmul (#76828)"
2b0abd4ce3 : symbolic shapes: add parenthesis around FloorDiv expression (#91554)
f7939b21e1 : [MPS] Add bincount support for mps (#91267)
cb3204823e : adding test to audit CompositeImplicitAutograd ops that do not have a batching rule (#91367)
6e236553f5 : implemented test and Changed assert to TORCH_CHECK #88808 (#91273)
cce577b391 : Revert D42257039: Multisect successfully blamed D42257039 for test or build failures (#91548)
fae821c2f1 : fix inductor linspace when steps=1 (#91578)
0c3659586d : Avoid copies in matmul (#76828)
122245985a : Dispatch the auxiliary frobenius_norm and nuclear_norm to better implementations and deprecate them (#81763)
b797a24259 : Support indices contiguity per batch and non-contiguous values in sparse compressed tensors (#91243)
dbf96164be : [MPS] Add support for casting updatesTensor directly in scatter (#91197)
34f2d3e6ae : Deduplicate c10 error and PyTorchError hierarchy (#87855)
2b52db9c95 : [xla hash update] update the pinned xla hash (#91087)
39d49dbe45 : Revert "[cuBLAS] Fix default cuBLAS workspace size and parsing for multiple workspaces (#89027)"
77c2a8a11f : Clang-Tidy: Improve ctors by removing unnecessary copies and initializations (#91538)
b407d98dbe : [cuBLAS] Fix default cuBLAS workspace size and parsing for multiple workspaces (#89027)
f613633124 : Remove _ignored_param_names (#91530)
6cef59487a : [BE] Move internal only non-globbed lists to OSS (#91513)
73436af43f : [cuDNN][cuDNN V8 API] Improve hot path heuristics performance in V8 (#90811)
bc92444b34 : Rename `torchtriton` (#91539)
62713636d8 : Bump protobuf from 3.20.1 to 3.20.2 in /.github/requirements (#91540)
fdbbd20f32 : Cache conda and pip for IOS CI (#91359)
af589b3d1f : switch causal mask for is_causal flag (#91171)
9710ac6531 : Some CMake and CUDA cleanup given recent update to C++17 (#90599)
d5163f5206 : Fix NumPy broadcasting in lstsq_backward (#91460)
051d16a2f7 : Fix NumPy-compat broadcasting in the derivative of linalg.solve (#91456)
484dd40022 : Implement PReLU in a compositional way (#91238)
0e8565d1d5 : [FSDP][optim_state_dict][8/N] Enable fully_shard optim state_dict save and load (#91234)
f8740db410 : Properly resolve source_ref when constructing shape guards (#91058)
bcf15cd93b : Store source, not sname, in Symbol (#91057)
2edf589e66 : [Profiler] Fix SOFT_ASSERT test to not raise on debug builds (#91464)
946e57704e : Drop compute capability < 5.0 in CUDA 12 (#91213)
31e66ca4ef : [torch.func] Add docs (#91319)
6f034dc0b0 : (non-batch) BSR/BSC to COO performance improvement. (#91389)
b1bdec83c9 : Clang-Tidy: Prevent implicit promotion in math functions (#91450)
1c3bb2fdb0 : Chore: fix clang warning - mismatched tags (#91455)
a34a9c3471 : Perf: Apply more clang-tidy fixups to torch headers (#91445)
553b592824 : Clang-Tidy: use modern for each loops and transparent functors (#91449)
b8ba4802fe : Add an option to skip loading of debug traces (#91430)
6ec3d65b0c : Automated submodule update: FBGEMM (#90489)
3ac6106523 : Add out of bounds checks inside irparser.cpp and unpickler.cpp (#91401)
0417da2288 : Set a timeout value when testing multiprocess DataLoader (#91476)
bc764f453d : Fix sharded_tensor test_sharded_tensor_to_cpu (#91453)
5030929c5d : add channels last with mixed data type support for GroupNorm backward (#89485)
ad782ff7df : Enable xdoctest runner in CI for real this time (#83816)
fb4fc0dabe : [CUDA] Bump version requirement for CUDA Graphs debug dump function (#91429)
9b144ddbe4 : Make input casting in root module only in default (#91365)
3d8834bdbf : SymIntify F.interpolate() with recompute_scale_factor=True (#91318)
dbd0d76515 : Disable test_fs family for dynamo (#91459)
f012d0ea5b : [autograd.Function] enable the extended Function feature flag by default (#91441)
ae52750d91 : Reduce hook registration code duplication (#91418)
8191c49f82 : Update links in `writing_batching_rules.md` (#91354)
08a47549af : Rename `Tensor._storage` to `Tensor.untyped_storage` and update docs (#91414)
5b223c43ec : Avoid calling allclose in the backward if there are tensor subclasses (#91444)
4444138fae : Add backward for complex numbers for diagonal_scatter (#91443)
f969834f68 : [functorch] vmap: nansum & nanmean (#91372)
d7674e70f4 : Fix for tryrebase after PR was merged (#91337)
cc11edb084 : [aot_autograd] symintify `logsumexp` (#91442)
f5e20d6060 : Make the state dict of CyclicLR scheduler pickleable (#91400)
896aa72359 : check_forward_backward_compatibility C10D APIs (#91409)
8b55b86dbd : Move sym_int and sym_float alongside SymInt / SymFloat in base torch package (#91317)
1c40ec46ff : Decomps and meta registrations for upsample_nearest 1D / 2D / 3D (#91260)
f1d8fef4d4 : Softmax added to tensor, torch and docs (#91292)
af7132302a : Revert "Softmax added to tensor, torch and docs (#91292)"
3066edbc60 : [Inductor] fix undefined `MockHandler` use (#91434)
9f91e94080 : Workaround for NumPy builds that ship with a broken Dlpack deleter (#89759)
41a0318f2d : Remove overload at::frobenius_norm(const Tensor&) (#81762)
274d3b24c3 : use scatter_add for index_add when dim is the most inner dim (#88729)
700941f683 : Fixup c10 headers with clang-tidy (#91407)
c470ad4f4a : Add missing overload for ivalue toSym(Int|Float) (#91405)
b416d50502 : [inductor] Fix "RuntimeError: Tried to erase Node permute but it still had 3 users in the graph" (#91327)
22a718b40b : [LTC] Restore LazyTensor() = delete (#91426)
3fdbf824ae : [functorch] jacrev: chunk_size=1 without vmap (#91326)
878719a2db : initialise the members boolean_ and integer_ of at::indexing::TensorIndex (#91399)
1b2ee4d0e1 : Update functorch supported autograd.Function to allow mark_dirty (#91222)
ca39c5b04e : Fix conda install on distributions with strict POSIX sh (#91371)
2e79d46708 : Revise error reporting when TorchInductor cannot access /tmp folder (#91385)
0b709b4816 : [FSDP][Easy] Fix context manager syntax (#91410)
e8393131ee : [generate_vmap_rule] support for jvp (#91211)
48e63bf69f : [functorch] composition of three transform tests with jvp (#91206)
1768a28a20 : `COO @ COO`: fix to always produce coalesced outputs. (#91094)
67c53d50e5 : Revert "Fix conda install on distributions with strict POSIX sh (#91371)"
81b3df4fb0 : Fix dtype mismatch for unallocated storage deserialization (#91285)
93a810b045 : Add dim checks for internal `embedding_bag` functions (#85433)
467d269ad1 : Minor fix in package exporter (#90306)
06bdd491fb : [vmap] fix reduction boxed batching rules (#91109)
255d14947d : Fix resource consumption in reductions (#89144)
1c681f4bd8 : Fix distutils.LooseVersion DeprecationWarning (#88524)
97db9fde69 : Fix header-filter for clang-tidy c10 and apply some fixes to c10 and … (#91178)
bb24185ff4 : Fix _check_no_differentiable_outputs for forward ad (#91391)
a061f139dc : [optim] Adam defaults to fused when CUDA + differentiable=False (#90865)
0b255b3f80 : Better __repr__ for ModuleList (#90452)
57dcd93c41 : Fix conda install on distributions with strict POSIX sh (#91371)
3f4e87beaf : Populate the eviction_policy field for load/store properly (#91316)
772684c9ce : Do not generate default value when it's zero (#91315)
f8b28799f8 : Softmax added to tensor, torch and docs (#91292)
789b1437e9 : Fix meta registration for aten._cudnn_rnn (#91333)
df46ba4026 : Use python 3.9 for iOS build and test (#91366)
a188e6ddc0 : Fix typo in troubleshooting.rst (#91301)
5725a44080 : Remove Windows compilation dependencies installation from CI/CD scripts (#89909)
bdbf188c80 : [MPS] Exclude int64 dtype from reduction ops (#91272)
0745242ca5 : Fix wrong committer when rebase and merge (#91330)
69cca4f3ae : Update xla base tag v06 (#90939)
6485d2609a : [MPS] Fix data type issues in Binary Ops (#91151)
d08e3d2304 : [Composable API] Apply ufmt to _composable and the corresponding test folders (#91255)
99aec69f58 : [BE] remove Backend.TCP (#91314)
f62a3cabfc : [ROCm] enable CI after host upgrades to ROCm 5.3 and ubuntu 22.04 (#91339)
f471770fd4 : Add bad status management for get_workflow_job_id (#91145)
94a6d72032 : Update doc of clip grad (#91312)
76a3869fc6 : Support functionalization on torch.cond (#89966)
d1123c94a7 : [pytorch] Update troubleshooting_url (#91298)
4477a5b691 : [MPS] Register unfold key for MPS (#91266)
e8e3980e65 : [Checkpoint] Update DCP init to include DefaultSavePlanner/DefaultLoadPlanner (#91269)
0149467677 : [Checkpoint] Update docstring for DCP ``save_state_dict`` and ``load_state_dict`` (#91209)
8b3d31cfc5 : Add A ValueRange Analysis Pass to convert int64 indexing to int32 (#91028)
b95e1d76a8 : [CUDA12] Conditionally set device in autograd engine (#91191)
4437d0d161 : [functorch] vmap: chunk_size support (#91157)
c47bdd7522 : *_scatter ops should preserve input stride/storage_offset (#91029)
a32916190d : buck-related minifier work (#91215)
d397f414bd : [BE] Reformat ReduceOps (#91221)
c7302075f3 : Fix passing frame to callback (#91170)
eadd557266 : Revert "use scatter_add for index_add when dim is the most inner dim (#88729)"
bacd2ced4f : [CUDA12] Clean up deprecated APIs (#91050)
e40e4d36c9 : Fix test_profiler_seq_nr flakiness (on macos) (#91019)
07c61685c8 : [inductor] CI improvements (#91283)
55749b9c41 : [dynamo] Write full code of how to enable `output_code` (#91230)
4c5928e387 : Fix for `mul(compressed, wrapped scalar)` (#91239)
bc843682dd : [inductor] New approach for computing triton load/store masks (#91241)
fd3a7264ae : [MPS] Add `group_norm[fwd+backward]` and `mean_var` (take 2) (#91190)
9b42e4ef73 : [Composable API] Make _StateKey as a str subclass (#91279)
39306c1dfb : Use `@pytorch//` in bazel build files (#89660)
6cea4f3d57 : [FSDP][optim_state_dict][7/N] Make FSDP support NamedOptimizer (#91160)
71318742f9 : [vision hash update] update the pinned vision hash (#91284)
a0554261a1 : Restore RNG states for composable reentrant activation checkpointing (#91265)
8f16524598 : Run test_spectral_ops serially to fix CUDA illegal memory access (#91264)
365071c73c : Fix non-existing parameters in docstrings in torch/distributed (#91116)
b50f379cec : Remove inductor performance from ciflow/nightly as infra is not ready to handle these jobs… (#91271)
68e9da68cb : use scatter_add for index_add when dim is the most inner dim (#88729)
59a5be3b45 : add mixed data type support for GroupNorm backward on CPU (#88663)
8e55d5831a : add cu118 workflows for Windows (#91216)
014d7802c8 : [MPS] Fix the error with high watermark value on x86 (#91268)
85e393bade : Fix for RNN/LSTM/GRU modules to work with stateless.functional_call() (#91111)
300f777796 : [Vulkan] Use EXPECT_EQ instead of ASSERT_TRUE in vulkan_api_test querypool_flushed_shader_log (#91259)
2a23dfe8ed : [quant] Support lowering for quantized embedding byte operator (#91159)
b68fd7e319 : Revert "Store source, not sname, in Symbol (#91057)"
6e0cd8b91e : [Resubmit] Require inductor to match stride order (#91185)
1ab6ac4682 : [FSDP][optim_state_dict][6/N] Refactor the optim_state_dict APIs to support hooks (#90798)
d19988093d : [autograd Function] Return input as-is if marked dirty even when requires_grad=False (#91214)
fb2e1878cb : [torch.func] alias torch.func.vmap as torch.vmap (#91026)
e803d336eb : Fix missing indentation in serialization.rst (#91253)
48511eca82 : [pruning][docs] Update README.md for structured pruning (#90403)
6a3ddd0171 : Revert "Don't graph break on patched module methods or aliased methods (#91018)"
81a9a0ac07 : [MPS] Fix gather for uint8 dtype in index_select (#91047)
b285f1080f : Fix small typo in comment (#91247)
97f514f38e : Fix two typos in `torch.distributed.distributed_c10d.py::broadcast_object_list` (#91237)
e3383d296f : [optim][fix] test_fused_optimizers did not test fused before (#91228)
c7f1974cf1 : Fix FastToLocals call by copy pasting (#91168)
5e77971a6e : Fix all simple compilation issues in eval_frame.c (#91166)
b7f48d71fe : Upgrade lintrunner numpy to a version supported by 3.11 (#91164)
c0e7d8f84c : Use python compat from python/pythoncapi_compat (#91163)
645eda0a00 : Revert "[MPS] Add `group_norm[fwd+backward]` and `mean_var` (#91190)"
8b617f813d : [cuBLAS] Add an option to disable reduced precision reductions for BF16 GEMM (#89172)
1c7e81576a : Temporarily disable ROCm periodic tests (#91256)
eeb9154b27 : [MPS] Add MPSHooks interface to enable accessing MPS functions globally (#91104)
371716eb36 : [MPS] Add `group_norm[fwd+backward]` and `mean_var` (#91190)
d6fc2d82ca : Don't graph break on patched module methods or aliased methods (#91018)
15af4b1cee : Dynamo, FX, Inductor Progress Bars (#88384)
bfdc0358dc : Compile fix for Clang + libc++ (#91212)
6d2b0cbb40 : [Re-landing 86706] [JIT] Frozen Graph Linear-BatchNormNd Folding (#91020)
e8bf7c21e4 : Integrate apply_optim_in_backward with DDP (#89194)
8992eec781 : [inductor] Update how REQUIRE_HIGHER_TOLERANCE is handled (#91227)
b7f35e4104 : [MPS] Fix index_add with non-f32 inputs (#88542)
0476201482 : Update debug option for torch._dynamo (#91223)
b66862ba87 : [autograd Function] Don't materialize forward grad for non-differentiable types (#91183)
88c581be87 : Store source, not sname, in Symbol (#91057)
5d37890b8e : Update torchrun and TorchElastic to take optional `local_addr` param to allow skip local IP lookup if specified (#88922)
57390116e0 : Restructure ShapeEnv so it uses GuardBuilder.SHAPE_ENV directly (#91055)
2f154f68ea : [torchgen] Add CI job to make sure torchgen works for Executorch op registration (#89596)
37ea99cd25 : [QNNPACK] Add more unaligned attributes (#91208)
a274b5b99e : [MPS] Fix data type issues in Unary ops (#91120)
c8546c930f : [BE] Use `aten` global in `torch._refs` (#91189)
46f64117db : [BE] Use `aten` global var (#91188)
dd735b96df : [MPS] Fix `torch.std`/`torch.var` default/correction handling (#91203)
e670c261c5 : Decompose fill, zero, and zeros_like (#90968)
eeacb6ae04 : Temporarily disable ROCm tests (#91217)
2f37804cae : [generate_vmap_rule] Add generate_vmap_rule to autograd.Function (#90966)
2a55984139 : [generate_vmap_rule] reductify_leaf helper function (#90965)
53c94ef1bb : [generate_vmap_rule] Add mechanism to override ctx.saved_tensors (CtxWithSavedTensors) (#90964)
31981d0139 : [generate_vmap_rule] add restore_vmap helper function (#90963)
94262efc7d : Revert "[inductor] Rewrite Triton templates + epilogue fusion (retry) (#91105)"
e932c3e547 : Delete dead intermediary_symbols (#91070)
e5a748fef8 : [Nested Tensor] do not use at::cuda::getDefaultCUDAStream(), again (#91180)
1c46a32b67 : Minor typing improvements (#91068)
7fecba7bdb : Doc improvement in LKJCholesky distribution (#91091)
dafd0432ee : Update __init__.py (#91196)
712170e929 : [threaded pg] adapt test_pointwise_ops.py (#90713)
a6dcebf997 : [threaded pg] make exception handling consistent with MultiProcessTestCase (#90712)
34da446072 : [threaded pg] add assertion util to MultiThreadedTestCase (#90595)
c7e7ea92e2 : [NamedOptimizer][2/N] Prepare the enablement of state_dict for FSDP (#91147)
c248f2f379 : [ROCm] Modify GPUs visibility code when starting docker container (#91031)
f460893cec : Update optim.rst (#91195)
c43209db4d : use libdevice for tanh (#90889)
192a11d49c : refactor the dfs cyclic search from recursion to iterative approach (#91042)
e6fcf7ad9d : Remove breakpoint (#91128)
cdbca3563e : Small operatorbench changes (#91027)
83f4e30ea7 : Use deque instead of list for BFS (#91139)
649d0b6ae7 : Add env var PYTORCH_TEST_RUN_EVERYTHING_IN_SERIAL=1 that allows running unit test suites in serial (#90981)
2f5759eaba : Disable non-deterministic models for optimizers (#91149)
f8b348c1fc : Update ProcessGroupRoundRobin (#91172)
5ed5dfd915 : Don't run ios jobs on forks (#91112)
34717b3ea8 : nn/test_convolution to run in serial (#91113)
dabf515c18 : [cuDNN][cuDNN V8 API] (re-re-re-open) cuDNN V8 API on by default (#91117)
28ceccec21 : cleanup old python_compat code (#91162)
346fd04076 : Set cmake PATH on macos to address libzstd flakiness (#91142)
84e73e1269 : [inductor] small CI improvements (#91140)
6a757f1cbb : Cleanup Windows pip dependencies (#88862)
b63f0311a5 : [MPS] Add floor_divide() op and its test case (#91126)
aec09eeb3a : [FSDP][7/N] Support `replicate` in `fully_shard` (#91044)
e81ccfd1ed : [FSDP][6/N] Add note explaining idioms for `_FSDPState` traversal (#90959)
32fde53713 : [FSDP][5/N] Add manual "wrapping" support for `fully_shard` (#90874)
da9af9868e : [FSDP][4/N] Refactor func to share state/init handle attrs (#90871)
3194281ca7 : Revert "use scatter_add for index_add when dim is the most inner dim (#88729)"
07c340bb2a : Remove debug code (#91148)
2d68cc4bc2 : Add cu118 workflows (#90826)
289f06434c : [dynamo] check buffers when checking accuracy (#91037)
17b80bfaf3 : Update patch release cherry pick condition (#90220)
0eb45d546c : Bind autograd current Node for debugging purposes (#90867)
13dbad6369 : use scatter_add for index_add when dim is the most inner dim (#88729)
63b8ecc415 : [CUDA12] Make PyTorch compatible with CUDA 12 (#91118)
7c58f1d4e8 : Update dynamo xla test to make it part of the xla CI (#91130)
29b119d04d : [Checkpoint] Add test for fsdp model state saving and loading with/without resharding (#90950)
7330eabe36 : fully_shard load state_dict (#90945)
95a115dd07 : Revert "use libdevice for tanh (#90889)"
511fbad830 : [Dynamo] Fix builder for class with metaclass (#90807)
ef2bb9ca04 : Revert "When nopython=True, Dynamo can't allow graph breaks. (#90970)"
0f57e7f2d9 : Do not run inductor perf test with postnightly branch (#91133)
857ed2d7dd : [Inductor] Replace graph.eliminate_dead_code() with graph.erase_node() in Permute Fusion (#91014)
d6dd2e97da : [inductor] Rewrite Triton templates + epilogue fusion (retry) (#91105)
3bd37ff2d5 : Removing invalid git option when updating submodules (#91132)
0148809131 : use libdevice for tanh (#90889)
30edd39bdc : Fix non-existing parameters in docstrings in benchmarks (#91115)
99bd8d12e1 : Fix non-existing parameters in docstrings in misc places (#91121)
0210d508cc : Fix terminology within `linalg.slogdet` docs (#91129)
a5eb564ba4 : [Quant] lower fused LinearTanh for onednn backend (#89188)
666d218055 : TorchDynamo: set output stride using eager output for cat (#89477)
b309599d1b : Add catch socket.gaierror for _matches_machine_hostname (#91119)
ebea45fe41 : [MPS] Fix the assert in Garbage Collector (#91106)
1d3e7fcc3b : [pytorch profiler] Add step tracker logic to handle multiple sources of step increments (#90880)
41846e205e : [torch.func] Setup torch.func, populate it with all transforms (#91016)
cad1ce6158 : Stop using :attr: in functorch docs (#91015)
7e9bf2ed86 : When nopython=True, Dynamo can't allow graph breaks. (#90970)
d1772aff60 : Autocast support for scaled_dot_product_attention (#91066)
fadf222661 : Propagate guard failures to userland (#91053)
7bc3467fff : Delete dynamic_propagation config (#91040)
7ebc45eadd : [dynamo] Better error message for bad timm model name (#91049)
322e4b4c8a : set -Wsuggest-override for builds (#89852)
8ecb49b8fb : [MPS] Add Inverse op. (#90428)
58b5a9df00 : Update to sdp benchmark to take into account pt2.0 stack (#90096)
909b7ca92a : [torchgen] Move Executorch codegen logic into torchgen (#90806)
679da8bd89 : [torchgen] Move Executorch custom ops logic into torchgen (#90099)
ca52f63fc0 : [torchgen] Move Executorch unboxing logic into torchgen (#90098)
f02e93b584 : jacrev : Support chunked computation (#89376)
e2dc60c6cb : [Vulkan + Profiler] Add Timestamp Adjustment Algorithm (#90672)
0428de06ee : [Vulkan + Profiler] Use 0 as Vulkan Event Durations During Tree Building (#90671)
8c80a4684b : [Vulkan + Profiler] Report Vulkan Events to Profiler in QueryPool (#90670)
193068cbcf : [Vulkan + Profiler] Enable Processing Vulkan Events in Profiler (#90852)
7badd0b9e6 : [Vulkan] Store entries in a separate queue after resetting query pool (#90668)
c345755013 : [FSDP] Fix `_mp_shard` `record_stream()` (#91096)
2a37ba8e81 : [inductor] Add retry after benchmark test fails on CI (#90808)
50ab2b702f : move inputs to device on root module only (#91078)
d6efd25d1e : functionalization: check for undefined tensors in advanced indexing (#90791)
440a3f2398 : fix set_() with functionalization (#90722)
548960f68e : Replace TORCHINDUCTOR_TRACE with TORCH_COMPILE_DEBUG in documentation (#91011)
e5a48da664 : Allow FSDP to have ignored modules out of wrapped root (#91079)
6686e9bc07 : [Quant] Add fused LinearTanh module for onednn backend (#88923)
731f417f60 : Use scalar implementation to keep the precision in linspace of integral types (#89048)
f833880b2e : Fix torch.distributed.run init connect timeout by comparing `host` with the current IP list (#90221)
dfe916ca88 : Dynamo comptime, with public ComptimeContext API (#90983)
ec748cbecd : inductor: separate onednn fx fusion from overriders.py (#90890)
4bf22fcfe2 : add mixed data type support for GroupNorm (#81852)
ea49e769f6 : [Quant] Add fused linear-tanh op for onednn backend (#88879)
17d860d03e : Type torch._inductor.graph (#90987)
3916d7a575 : Apply modernize-use-emplace to aten, c10, torch (#91077)
944519a468 : Switch use_fake_tensor to True by default (#89663)
ce4900f3bb : [cuDNN][cuDNN V8 API] Fix `benchmark_limit` ignoring failed kernels in FIND (#91032)
856651dd55 : Vectorize expm1 and log1p (#91074)
490c1cf650 : [Dynamo] Support torch.get_default_dtype (#89790)
1accd915a4 : Re-enable optimizers (#90709)
9ca41a986c : [Quant][FX] Lower QLinearLeakyReLU for onednn backend (#88668)
8004f934cd : Fix CSR with int32 indices to CSC conversion (#91061)
6be1e43367 : [Checkpoint][Test] Add 2d DCP model state checkpoint test (save/load) (#91046)
b72caf311d : Introduce guardexpr, aot autograd guarding of duplicates into torch._guards (#90955)
212873c615 : Add dynamic shapes benchmark accuracy to CI (#90444)
a1a2f548a9 : [Composable API] Enable composable `fully_shard` submodules in `replicate` parent module (#90711)
3229713cf2 : [Checkpoint][nit] Fix test_fsdp_optim_state.py test name (#90943)
e2377c8300 : Revert "Add dynamic shapes benchmark accuracy to CI (#90444)"
85db031e60 : Add dynamic shapes benchmark accuracy to CI (#90444)
7c524221ba : [reland3][dynamo] Revert "Revert "[reland][dynamo] use optimizers correctly in benchmar… (#90956)
78efde920e : Revert "[inductor] add conv_transpose2d unary fusion for cpu in inference mode (#90265)"
7b0ec67e34 : [Quant][FX] Add backend config for onednn backend and fuse Linear-LeakyReLU (#88665)
bfa223aaa6 : [Checkpoint] Fix checkpoint test test_fsdp_optim_state.py (#91036)
1d948787b7 : Remove duplicate line (#91006)
f7b384cc46 : [reland][quant][pt2e] Add early prototype top level quantize_pt2e APIs (#91035)
4ab81ae80d : fix default partitioner: save sizes instead of tensor for backward when possible (#91012)
1609b954f8 : Save and restore tracked_fakes (#90995)
ed589dd8e4 : [functorch] add composition-of-3-transform tests for autograd_function (#90962)
e1c799ff82 : Fix comment about get_fw_grad_mode() only being used in custom Function (#90790)
ffa37c9fca : Add VmapInterpreter.randomness (in pyfunctorch) provide it in info object (#90789)
8bd959e462 : set -Winconsistent-missing-override for builds (#89851)
93cb580677 : lint transformer.py (#91048)
5d70d12812 : [dynamo] turn torch.backends.cudnn.is_acceptable into a constant (#90323)
7d3f2b7902 : Revert "add conv_transpose2d pointwise(unary) fusion kernel (#90264)"
7a0f29b776 : Allow Process Group to support multiple backends (#88330) (#90997)
93ac8c4aeb : [dynamo] Refactor how autocast parameters are bound (#90953)
4fa8d774b8 : Add macro C10_AS_INTARRAYREF_SLOW (#90675)
ba7aeac37b : Revert "[cuDNN][cuDNN V8 API] (re-re-open) cuDNN V8 API on by default (#89022)"
4438b019a8 : Fix non-existing parameters in docstrings in torch/ao (#90875)
ee2475869c : ModuleInfo-based tests for AOTAutograd (#90980)
3226209636 : LSTM SymInt-aware changes & meta registration (cuDNN) (#90944)
512ec181ec : Introduce causal mask (#90508)
e689c50922 : Don't recompute var in bn decomp (#90984)
e4de6ed6bb : functorch: non-contig samples for test_grad (#90990)
5ea418bf63 : [FSDP][3/N] Move `fsdp_modules(root_only=True)` -> `_get_fsdp_root_states()` (#90862)
67ef88af37 : Revert "[Quant] onednn backend switch to ideep new api without affecting performance (#90354)"
7a683eaeb8 : aot_autograd: add assert for functional-only graph (#88816)
c83ff1ea08 : [GHA][BE] Update to newer checkout action (#90969)
bd94ee66ea : [quantized] [executorch] typo (#89960)
68805b565a : Include dispatch key in wrapper symbol name (#90674)
6bc6fb21db : Revert "[reland2][dynamo] Revert "Revert "[reland][dynamo] use optimizers correctly in benchmar… (#90956)"
8cd1808dbf : [FSDP] Introduce "fully sharded module"; remove comm. module (#90933)
b0cda0b38c : LSTM SymInt-aware changes & meta registration (non-cuDNN CUDA) (#90701)
a10b3ce876 : generate device context managers in inductor code (#90934)
9d8fa78d2c : tools: Update clang-tidy hash (#91008)
01e7f46215 : Ensure sorted indices from the CSR->BSR conversion (#90918)
634555d981 : [ONNX] Auto test based on OpInfo (#86182)
8bc38ae4e2 : [reland2][dynamo] Revert "Revert "[reland][dynamo] use optimizers correctly in benchmar… (#90956)
c2c14f9597 : Sparse compressed mm: fix for orthogonal inputs (#90917)
4dd3de23dd : Sparse compressed mm: fix for empty inputs (#90763)
3e44fcee2f : [FSDP][2/N] Move `fsdp_modules(root_only=False)` -> `_get_fsdp_states()` (#90861)
673c25d45a : [FSDP][Easy] Rename `entry` -> `fsdp_module` to be more descriptive (#90864)
95ee5fecb1 : [FSDP][1/N] Add `_get_fsdp_states()` (#90860)
06533a2eb7 : [Inductor] actually check `replacements` in `AutogradMonkeypatch` (#90901)
9d79d09b6e : Make it easier to find troubleshooting steps (#90927)
ad1b04c4a9 : Revert "[reland][quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90971)"
ddf5b68dcb : Nuttall window (#90103)
53e71fad8f : Add shape_env guards to tracing context (#90876)
a01c1ee594 : [ao] making _is_activation_post_process private with BC (#90554)
6ea93b2295 : [Quant] Add fused LinearLeakyReLU module for onednn backend (#88661)
ffd0b15a49 : Add support for keep-going label (#90902)
c6cba1865f : [Docker] Install Triton deps (#90841)
7dd5e55497 : [reland][quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90971)
e48c91688b : DebugInterpreter works with symbolic shapes now, plus test (#90913)
67436f621a : Add utility for binding symbols based on arguments passed to placeholders (#90912)
bbea58d500 : Stop using GraphArgs for shape env guard source tracking (#90911)
eef019c14a : Lint rule to forbid direct use of logging.info/etc APIs (#90907)
82a191313e : Revert "Add support for keep-going label (#90902)"
2f6ada84b4 : [inductor] Remove flag of bmm's dim m and n in shape padding (#90937)
5e3bc1975b : Add `any_chain()` in upstream (#90949)
855f4b7d24 : Add support for keep-going label (#90902)
4372dbb89f : use pytree to allow any input format for cuda graph (#90941)
d9d263efb9 : Revert "[Quant] Add fused LinearLeakyReLU module for onednn backend (#88661)"
d3e0bcc796 : pin multipy (#90942)
d8c1872cc3 : Make it easier to find troubleshooting steps (#90948)
9d523616b3 : fix segfault for EmbeddingBag on CPU slow path when include_last_offset is true (#90358)
353c2e7d39 : [Quant] Add fused LinearLeakyReLU module for onednn backend (#88661)
750576a50a : Revert "Include dispatch key in wrapper symbol name (#90674)"
f660d62ddc : Make dynamo.export preserve user input/output format (#90884)
31b8dc7542 : Revert "[JIT] Frozen Graph Linear-BatchNormNd Folding (#86706)"
535b0e37dd : Suppress RecursionError in sympy; fix logging (#90904)
140a3139d6 : Revert "Add macro C10_AS_INTARRAYREF_SLOW (#90675)"
9259933edd : [ao][fx] fixing public v private prepare.py (#88398)
f3da157ce3 : Reset rng in hf before loading a model (#90936)
d04e3c994f : [FSDP] Fix input grad propagation when using param mixed precision (#90921)
9c912c7dd0 : Revert "[quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90802)"
fdc973308b : [inductor] Use --continue_on_fail when installing torchbench (#90922)
57e2090e21 : [Dynamo][TIMM][Benchmarks] Fix TIMM `0.8.0dev` breaking the `timm_models.py` script's data config (#90404)
e686a442b4 : If a torch.* returns non-Tensor, make this unimplemented rather than assert. (#89918)
a66af1feba : [quant][pt2e] Add early prototype top level quantize_pt2e APIs (#90802)
201c36d81a : Hack get_nbytes() to return 0 for sparse tensors as workaround for functionalization (#90702)
15c9df7756 : Error messages for kernel selection (#90783)
173accd1c1 : [ao][fx] fixing public v private qconfig_mapping_utils.py (#88399)
abc54f9314 : Revert "Revert "[functorch] Refactor life handle storage (#90317)"" (#90856)
81f351acd7 : [inductor] Prevent blowup in inner_fn_str and extract_read_writes (#88933)
c4718e9b09 : [FSDP] Enable mixed hybrid/non-hybrid sharding strategies (#90846)
2f8c0cb2a4 : [FSDP][Easy] Use `run_subtests` for hybrid shard test (#90859)
b92975a6f3 : replicate state_dict tests (#90868)
d6fe9838d1 : [inductor] add conv_transpose2d unary fusion for cpu in inference mode (#90265)
85698d0ac4 : add conv_transpose2d pointwise(unary) fusion kernel (#90264)
9b89ff0923 : [Quant] onednn backend switch to ideep new api without affecting performance (#90354)
79009cbc53 : [CUDA 12] Fix the endif guard position for cusparse const descriptors (#90897)
98799ca0f4 : [Composable API] `replicate`: cleanup _ddp.py (#90257)
0b22f5ae9f : Deeply rework WeakIdKeyDictionary (#90825)
54563e6288 : Don't put tracing state on Tensor (#90628)
103029e035 : inductor: sort the reads buf by name (#89744)
9fe050f39c : fix cudnn RNN reproducibility problem (#90522)
dcfe7ff7e2 : fix a memory leak on return without free (#90372)
0ac0af02d5 : Reland Fix issue 38095 TODO in test_multiprocessing.py (#90741)
4e6455163f : Fix unittest rerun logic when checking for skipped tests (#90888)
2ba5c1d7c4 : Inductor cpp wrapper: change inputs args from tuple to vector (#90754)
39d9dd135a : [FSDP][Easy] ufmt files (#90858)
670efb974a : [CUDA] Use accumulate type to improve accuracy of grid_sample on half precision inputs (#90427)
eecd621f06 : [cuDNN][cuDNN V8 API] (re-re-open) cuDNN V8 API on by default (#89022)
6a866c3ed1 : [ao] fixing public v private for torch.ao.nn.X (#87883)
edc5bb5fbe : Only populate real_value_cache during export (#90468)
f286cbebce : [ao][fx] fixing public v private graph_module.py (#88395)
283cf718ed : Fix _fix_weakref memory leak (#90823)
d19791e4cd : add autocast keys to pybind11 DispatchKey object (#90821)
86269852de : Serialize dynamo/inductor config for minifier (#90501)
e585156c59 : [JIT] Frozen Graph Linear-BatchNormNd Folding (#86706)
1ca9d43d4e : [ao] quantize.py fixing public v private (#87521)
691a44f403 : [Quant][fx][bc-breaking] Add simpler BackendConfig pattern format (#90698)
1e347b737b : Run MPS PR tests on both Ventura and Monterey (#89312)
7a112c43c1 : [DataLoader2] Fix apply_sharding to accept one sharding_filter per branch (#90769)
1ba4e3c711 : [FSDP][BE] Remove `_module_to_handles`, `HandleConfig`; use term "fqn"; clarify docs (#90840)
8090cb5386 : Add macro C10_AS_INTARRAYREF_SLOW (#90675)
cdf4a80cc1 : replace skipIf with xfailif (#90368)
fb18c29486 : [BE] Tweak Meta copyright headers (#90805)
f3393b7ea7 : [torchgen] Introduce Executorch types and signatures (#90781)
4adffe6d51 : [torchgen] Let native function declaration generation logic take a callable (#90780)
df58020bb6 : Align max_pool1d Error Checking between CPU and CUDA/CPU requires_grad (#90211)
3859aace20 : [MPS] Skip tests broken on Ventura (#90843)
8a21cac3c3 : Improve interpolate() speed for channels_last CPU videos (#90302)
0cd69d7cda : Revert "[functorch] Refactor life handle storage (#90317)"
3c637e8007 : fix aot autograd for None fw inputs (#89975)
e9dc8cc19b : Add torch.compile support to minifier (#90308)
fde5646f3d : Inductor cpp wrapper: support bmm, mm, addmm extern call (#88667)
51c6c5e156 : [SDPA] Standardizes the return shape for dense tensor of SDPA regardless of fused kernel called (#90776)
caa05e6f87 : Give linting steps a unique prefix (#90705)
f21cb7d77e : [pyfunctorch] Generate a more meaningful name for _SingleLevelAutogradFunction (#90418)
da42eab48b : Fix circular import in torch/autograd/function.py (#90415)
4809e838c1 : functorch.jvp support for autograd.Function (#90077)
dcb73aa291 : Run inductor benchmark test for every PR (#90773)
cc4131a815 : Inductor cpp wrapper: support more dtypes of input (#88666)
ba77afbce1 : Move _test_inductor_realize into python (#90517)
d35aa2f65a : Inductor cpp wrapper: support Reduction (#88561)
7963dbf3db : symbolic-shapes: -anjali411, +jbschlosser (#90816)
1aab755320 : Fakify params and weights under private config (#90417)
3870a9e28d : to_sparse_XXX: backward support (#90281)
708108a9d3 : Optimized vertical flip using memcpy (#89414)
e54c6c2870 : Fix non-existing parameters in docstrings in torch/onnx (#90593)
37cd96a6fe : inductor: using pre-existing fake mode to fallback kernels (#90814)
6c8ef6a4c2 : Add tracing context, Integrate dynamo guards into torch._guards (#90647)
f4099af1e9 : Fix gradcheck for BSR and BSC inputs. (#90719)
a60d712010 : Support (non-batch) BSR/BSC to COO sparse tensor conversions (#90718)
cc504ce292 : Restore test_warn_types (#90810)
e8e591b72f : Upgrade CI to ROCm5.3 (#88297)
258860fa3a : [ao][fx] fixing public v private for pattern_utils.py (#88397)
769392178a : [vision hash update] update the pinned vision hash (#90727)
e87370133c : Include dispatch key in wrapper symbol name (#90674)
6c605e9c3d : [FSDP] Skip param check for pure FP16 (#90785)
e2e4a80cdb : Inductor cpp wrapper: support None as output (#88560)
93aee0cdc9 : [FSDP][Easy] ufmt files (#90548)
e90169d174 : Fix missing return statement for test_it_returns_empty_list_when_model_contains_supported_inplace_ops in #89299 (#90797)
510339c07b : [FSDP][2/N] Refactor state dict hook registration (#90777)
ed050e7a18 : Small fixes for better channels last performance (#89616)
dbe85265a8 : Automated submodule update: kineto (#89846)
d52f121dba : [Composable API]Common _State parent class for composable and wrapper FSDP (#89147)
b66cedd906 : [FSDP] Fix `use_orig_params=True` + `no_sync()` (#90546)
6d425a7ce9 : Fix forward AD custom Function non-differentiable outputs (#90787)
9575f2ca83 : [LTC] Make some LazyTensor interfaces virtual (#90686)
bf2668a899 : Add support for kineto in memory viz (#90567)
b4b8a56589 : Doc for Canonical Aten and Prims IR (#90644)
65e762acc8 : [FSDP][optim_state_dict][5/N] Remove optim_inputs for sharded state_dict. (#89981)
4a2d64994c : [FSDP][optim_state_dict][4/N] Remove the unused _get_flat_param_to_fsdp_module API (#89980)
7cd900eb97 : [fix] `adaptive_{avg, max}_pool` variants : cuda & cpu (#88906)
043de8d1b1 : [FSDP][optim_state_dict][3/N] Support use_orig_param optim_state_dict (non-broadcast version) (#89900)
0a4e4de525 : [ROCm] add case for FP32MatMulPattern skip property (#84077)
79156c11c3 : [ao][fx] fixing public v private match_utils.py (#88396)
a856557b3a : [ao][fx] public v private convert.py (#88394)
b3d49c2fb8 : [FSDP][1/N] `fully_shard` state dict (#90767)
ad4189c8db : [reland][inductor] Update TIMM skip list (#90762)
5c133c5744 : [Dynamo] Supports two torch.distributed.* functions (#90683)
21fc28285e : [stateless] fix functional call docs (#90476)
4a5f4416d0 : Make at::outer SymInt-aware (#90714)
3f14c70576 : Make functional inverse for squeeze_copy SymInt-aware (#90697)
1119d2fa54 : Revert "Reland "Add hierarchical module names to torchFX graph.node" (#90205)"
1439ebd899 : Enable inductor perf test on GCP A100 (#90322)
544756ae5e : Fix mps constant pad (#89864)
7035bcdd0f : [inductor] Enable test_torch (#90518)
0d5c849d48 : Update cuSPARSE usage for CUDA 12.0 (#90765)
d4dda519c9 : Fix FSDP checkpoint tests (#90745)
a76032d8f4 : [inductor] Pattern match cat->view*->pointwise and hoist pointwise (#90743)
da8f539e84 : [Fix]: Add missing std::vector reserve in aten and torch/csrc (#90627)
4d494986af : [functorch] Refactor life handle storage (#90317)
24c3ad7851 : Move private forward grad mode helpers to torch.autograd.forward_ad (#90240)
3049d99027 : autograd.Function supports vmap staticmethod (#90037)
4dc7d87421 : [LTC] Make LazyGraphExecutor::RunPostOrder() virtual (#90680)
af4735d3ad : Revert "Upgrade CI to ROCm5.3 (#88297)"
96a36c9a3b : Fix: Apply clang-tidy to c10/core (#90699)
ff1bbc2773 : Revert "[reland][dynamo] use optimizers correctly in benchmarking (#87492)" (#90746)
eae0f3f5e3 : Add mkl implementation for exponential on CPU (#69967)
a50fe978f8 : [LTC] Make even more LazyGraphExecutor interfaces virtual (#90650)
fc429512d5 : [FSDP] Clean up `FlatParamHandle` dtypes, post-backward hook (#90660)
ffa89033c5 : TorchDynamo: always convert tensor to fake tensor at fake_mode path for ShapeProp (#90685)
7a7f29704f : Remove hard numpy dep introduced by _inductor/utils.py (#90716)
181a82ffd2 : Upgrade CI to ROCm5.3 (#88297)
7498e23bd5 : Re-enabled 2 Metaprogramming tests on Windows (#87284)
dc4d18d47d : Remove hack to hard code test times (#90720)
1f86a1447b : [c10d] remove some outdated bc checks for c10d op (#90681)
7da504508d : [c10d] update alltoall signature to be more consistent (#90569)
f30694c700 : Add allgather_into_tensor to CommTensor (#90565)
b782927ed4 : Add reduce_scatter_tensor to CommTensor (#90564)
3ba9e4cd55 : Add alltoall_ to CommTensor (#90512)
6165a1807d : [LTC] Make DeviceContextArena protected (#90531)
b8f35ec6a5 : Guard Symbol and ShapeGuardPrinter behind HAS_SYMPY (#90704)
ea64c8c6ad : Revert "[torchgen] Let native function declaration generation logic take a callable (#90590)"
b3e6a6dc0b : Revert "[torchgen] Introduce Executorch types and signatures (#90591)"
42a5f6ee5d : Create stub function for doing SDPA cpp and cuda dispatch (#90576)
df569367ef : Fix non-existing parameters in docstrings in torch/fx (#90594)
94b9bb324f : [quant] Add example for lowering quantized dynamic linear pattern through delegation (#90640)
b6f114c208 : Fix a minor typo in documentation (#90667)
98a9235dce : Fix prelu ref when a.ndim < 2 (#89809)
34dc34e8a0 : Add comment to output_code in dynamo config (#90333)
7bb97c4ca4 : move TypedStorage handling to assertEqual (#89557)
17941b12e0 : Fix a typo in some torch.load error message. (#90662)
e2674aafed : [Dynamo] Supports calling parent class's non classmethod from child class (#90682)
e11650887e : [ao] fix incorrect integer cast on histogram observer bounds (#90355)
60e196c241 : Better url in trymerge (#90583)
f258753799 : [ONNX] Add repro export from `GraphInfo` (#89947)
525c33c09f : [ONNX] Verification tool to find mismatch in model export (#89946)
4ed175bfb7 : fix with statement in test_fsdp_hybrid_shard.py (#90580)
06326a7721 : [optim] skip .item calls in all optimizers when compiling with dynamo (#88173)
7541c9f8be : [Fix]: remove unnecessary copies in aten, c10, and torch bindings (#90629)
27932ff8c9 : [Inductor] Add note that stride_vars result may be inaccurate (#90184)
dcce5677fd : Adding test when registering a batching rule for a CompositeImplicitAutograd operation (#89465)
e37c8c8436 : Revert "[inductor] Update TIMM skip list (#90188)"
0b3316ad2c : Don't enable debug_fake_crossref for TORCH_COMPILE_DEBUG (#90666)
f7365eca90 : Add unbacked symints support; item works now (#90624)
6702345416 : [xla hash update] update the pinned xla hash (#90161)
5adc18dcbc : Shape guard structure (#90679)
2e0ce24890 : [Dynamo] Support access nn.Module keys (#90502)
c8ed84ad06 : Fix a static initialization order fiasco in c10d (#90149)
4ca2fc485c : inductor(CPU): add Conv+binary+unary fusion filter (#90259)
c318de4274 : [dynamo] Get GPU names without calling nvidia-smi (#90474)
b95ea4f149 : [pt2] Reset dynamo log level when exiting inductor debug context (#90473)
d3d85e1c3b : Emit torch.cuda.synchronize() after every kernel call in inductor (#90472)
8fd31ac4da : Preserve original GraphArgs for shape guard codegen (#90665)
9447005ae3 : Improve dynamo debug logging (#90664)
450bd282e0 : Slightly improve error messages on sympy failure (#90655)
8127724c3b : Skip some unittests (#90609)
11442accc6 : Make torch._guards, shuffle structures around for migration (#90636)
e1ed5ad5a5 : Add a timeout to benchmark script (#90634)
5d8618dfbd : Some memory saving in large unittests (#90148)
995d39c221 : [Fix]: Add some missing moves in 90442 (#90661)
e33f1eeeb7 : SymIntify resize_ and deduplicate memory format logic (#90442)
181d37475d : Simple fix: add missing positional arg in init_optimizer() call (#90641)
15a4c60383 : Revert "Make torch._guards, shuffle structures around for migration (#90636)"
7ec1cb8553 : [FSDP] Fix _pre_forward type annotation (#90621)
80542add73 : [FSDP] Allow MixedPrecision to skip inputs (#90620)
31351c61dd : [FSDP] Tighten post-bwd cast to `reduce_dtype` (#90615)
933b6c4eed : Make torch._guards, shuffle structures around for migration (#90636)
c7d2fb7f86 : Adopt state_dict_pre_hook in FSDP (#90436)
746c773d7c : [FSDP][Easy] Move to `_storage()` in test file (#90622)
6845598617 : [FSDP] Uncomment test for `use_orig_params=True` (#90610)
e7efeb5282 : [FSDP] Save `_stream_to_name` for debugging (#90611)
184f6b5787 : Fix perf bug in #90528 (#90630)
9eccfedca2 : [Reland][FSDP] Another fix for `DTensor`, `use_orig_params=True` (#90562)
a69cdd9cf8 : Add global registry to composable API contract (#90579)
12671fe620 : Reserve space for std::vector output in extract_tensors for nccl python bindings (#88203)
583d216c1a : Fix: [ATen] add more missing moves - part 2 (#89000)
9ef1d55e6b : Fix non-existing parameters in docstrings in torch/nn (#90596)
45109ec30a : Completely redo how ShapeEnv guards are generated (#90528)
49c674e155 : Revert guaranteed symint allocation (#90381)
b68dead20c : Keep track of source name on all allocated SymInts (#90295)
f9aa099074 : [Inductor] fix issue: redeclaration of float g_tmp_buffer_xxx (#90270)
5a665a39d1 : [LTC] Make some LazyGraphExecutor private data structures protected (#90598)
ddf00c803b : [torchgen] Introduce Executorch types and signatures (#90591)
de6beca838 : [torchgen] Let native function declaration generation logic take a callable (#90590)
453ff96029 : [torchgen] Refactor types (#90589)
0457020d2c : [dims] Fix large array inputs (#88596)
bb9fc32fe0 : [vision hash update] update the pinned vision hash (#90586)
d3a3604581 : [pthreadpool] Don't recreate threadpool if the counts are same (#90478)
3b3ed25109 : Add a way to visualize memory snapshot traces (#90348)
2bac4d1fae : [reland] add save and load stats in memory_tracker (#90510)
1b2c59ad24 : [ONNX] Introduce ONNX reference evaluator for verification (#89808)
7afba50508 : [dtensor] delete unused torch_function (#90449)
45b64e8c61 : Populate Canonical Aten Ops (Batch 2) (#90456)
79f9672249 : [ONNX] Use `VerificationOptions` to wrap option arguments (#89807)
6de216a2e8 : [fx] Have replace_pattern return replaced nodes (#90244)
4a1633ca69 : [Inductor] GEMM Shape Padding Optimization (#90425)
b7dfbf876f : Revert "[LTC] Make some LazyGraphExecutor private data structures protected (#90457)"
02eb0bdbc1 : [fx] Added better tests to pass infra (#90432)
f51f6aa387 : Fix non-existing parameters in docstrings (#90505)
fd3f5d7bf7 : [inductor] Update TIMM skip list (#90188)
1a735a8094 : [FSDP] Subtest `CPUOffload` for `test_fsdp_grad_acc.py` (#90545)
912748e3b7 : [SDP] Fix alignment check for efficient_attention (#90413)
669f7461ac : Use some `if constexpr` in the code (#90483)
d91d7a3221 : [reland][dynamo] use optimizers correctly in benchmarking (#87492)
9c4189f82d : [dynamo] Add is_compiling for dynamo (#90329)
082450609c : [FSDP] Allow nested FSDP wrapper to use different mixed precision (#90523)
eedf7a4989 : Log1p complex for CUDA (#90422)
b2795d3c4e : Revert "[inductor] New approach for computing triton load/store masks (#89566)"
4e1881b8b7 : use proper temp directories in test_tensorboard.py (#89826)
09ccda0d94 : Fix: Make `__len__` of datapipes dynamic (#88302)
93aa6e3e36 : [LTC] Make some LazyGraphExecutor private data structures protected (#90457)
bcf7036be5 : Disable BUILD_CAFFE2 from ONNX builds (#90475)
730e44bbc7 : Add logging for aot autograd and unified debug flag (#88987)
983d4f6fbb : [Vulkan] Enable QInt8 weights and test quantized convolution with QInt8 weights and QInt32 bias (#90441)
282dfe8ba4 : [inductor][Reland] Use decomposition for _to_copy (#90494)
6581063583 : Revert "Dynamo, FX, Inductor Progress Bars (#88384)"
eeb3f8aa54 : Add missing infer_size_symdimvector implementation. (#90405)
c6c2de586d : [inductor] New approach for computing triton load/store masks (#89566)
c8954a8907 : simplify implementation of c10::isIntegralType (#90193)
6b7efac3c9 : Reland "Add hierarchical module names to torchFX graph.node" (#90205)
0a00858095 : Implement checks for vmap escaped errors (#89585)
c71b12851d : [ao] public vs private for ao.quantization._X (#88392)
6050a7a3d9 : [ao] backend_config moving all to top (#88391)
3759777edc : [threaded PG] fix long hang issue in testing (#90515)
db0ce4acf3 : Dynamo, FX, Inductor Progress Bars (#88384)
b4c27c86b7 : [vision hash update] update the pinned vision hash (#90513)
aacafd2cba : Fixed a couple of mistakes in type annotations in optim package (#90216)
78da18345e : [ONNX] Extend PR approver list (#90490)
797544f1c4 : [dynamo][ez] Change module type to str for easier downstream parsing (#90429)
f978a8b026 : [quant][be] Remove special casing for getitem in prepare (#90393)
6fb79b7004 : Bump version: 1.14.0->2.0.0 (#90491)
ff5a3592e7 : Fix static initialization issue for static build (#90133)
c8f5c194ca : Fix bug in dynamic shapes multiply (#90336)
2cf703214b : [Composable API][Easy] Fix some follow-ups (#90471)
eb5b4c21e1 : Deepcopy GraphModule in minifier (#90401)
80150788bc : [21/N] Add alltoall_base custom op with CPU/CUDA implementations (#89813)
e65ee3975f : [20/N] Add recv_any_source custom op with CPU/CUDA implementations (#89505)
43660051d8 : [Ez] Omit HSDP Z2 from doc (#90503)
912a1f7b27 : Fix issue 38095 TODOs in test_quantized_tensor.py (#90344)
fec39f6310 : Don't update vision hash on push (#90498)
9bb16cd3ca : Track torch.compile calls (#90310)
76f440f20a : [dynamo] Rewrite inplace addcdiv and inplace add (#90330)
0c972fb5c7 : [rfc][pkg] check spec for module source before falling back to file in package exporter (#90258)
e1674d7dc0 : avoid fork in torch/__init__.py for deploy/multipy (#90492)
b651e06049 : Add Pointwise Tag from pointwise set in DTensor, use in aot_autograd partitioner (#90029)
8ca1c910fb : Refactor test_inductor_XXX to reduce code duplication (#90443)
7342251281 : functorch.grad support for autograd.Function (#89860)
eb314f9b1a : Add setup_context staticmethod to autograd.Function (#89859)
103be1f164 : Add feature flag for the autograd.Function extension (#89858)
1ba5c55992 : skip flaky tests (rather than expectedFailure) (#90233)
e89685b0b5 : Revert "[inductor] Use decomposition for _to_copy (#90314)"
b738da8c8e : [LTC] Tweak LazyTensor Class for XLATensor (#90363)
b71c710db1 : Add additional tests for view slice tensors (#86282)
465005c1e0 : Revert "Fix issue 38095 TODO in test_multiprocessing.py (#90335)"
8ea90d926f : Add support to foreach torch empty for bfloat16s (#90437)
d2ee94231e : [inductor] Fallback for index with None in the middle of indices (#90022)
b62cfbca84 : Remove TORCH_API from inline at::internal::lazy_init_num_thread (#89511)
793a999ce0 : Hybrid Sharded Data Parallel (#89915)
454361435c : Implement correction argument in torch.masked.{std,var} (#87118)
a6593d6622 : [Composable API][Easy] Use `policy=None` since that is supported (#90400)
21a0e809c2 : [Composable API] Match `fully_shard()` comm. schedule with wrapper FSDP (#90387)
4011597dd4 : [Composable API] Refactor `test_fully_shard.py` to use common models (#90386)
5ca4e95f6c : [Composable API] Move test models to common file (#90385)
3fdb5f2dda : [inductor] Use decomposition for _to_copy (#90314)
dc40b6d043 : Upgrade oneDNN to v2.7.2 (#90051)
b485781440 : Add a transform for positive-definite matrices. (#76777)
c00b135adf : Remove deprecated call to tf.io.gfile.get_filesystem (#89832)
ecd784667c : Avoid overflow in tensorboard image summary (#90423)
1978773399 : [LTC] Overlap data creation and ir_value setting (#90438)
9c80f13692 : [Resubmit] state_dict_pre_hook (#90435)
de016b3799 : [pruning][core][feature] Implement prune for structured pruning (#89777)
c20d41253f : [LTC] Tweak LazyGraphExecutor for XLA (#90420)
1a48ae96ba : [PT-D][Easy] Reformat the optim code within PTD code base (#90399)
cbb2d5af81 : Fix issue 38095 TODO in test_multiprocessing.py (#90335)
06c98e673f : [ONNX] Fix ignored small eps in layer normalization in fp16 (#89869)
5f3ca208c5 : Revert "add save and load stats in memory_tracker (#90144)"
22a249e44e : Revert "[Inductor] More robust stride and offset extraction from index expressions (#90184)"
25eb7c3ae3 : Clean up dependency for flatbuffer_loader (#86041)
37892041a1 : Always compile tiny graphs with AOTAutograd (#89775)
b8b7480065 : [Checkpoint][2D][6/N] Add optimizer and update default_planner to core distributed (#90212)
36ac095ff8 : Migrate PyTorch to C++17 (#85969)
f2d95765e4 : [pthreadpool] Set max threadlimit to tsan limit (#89453)
772b726068 : Revert "Disable dynamo tracing torchrec.distributed (#90087)" (#90416)
00118f5c30 : Fix issue 38095 TODO in test_jit_fuser_te.py (#90246)
ad188a227e : Introduce CUDA Device Assertions Infrastructure (#84609)
f99f239531 : Fix issue 38095 TODOs in gloo tests (#89985)
1ba94b3882 : Support pickle version 4 by adding missing ops (#90223)
d5c6a74699 : Rewrite dynamo cond() handling to not recursively call export (#90286)
54d344b0b7 : Type torch._dynamo.side_effects (#90202)
ca5f69ef19 : Convert InstructionTranslatorGraphState and OutputGraphState to NamedTuple (#90186)
1119aac485 : Type torch._dynamo.symbolic_convert (#90185)
7abd035b2f : Add missing mypy-nofollow.ini (#90179)
47071c3d47 : [quant] Add support for symmetric quant in executorch (#90304)
9f7bc7bc24 : Revert "[Quant][fx][bc-breaking] Make convert.py smaller (#90189)"
d7c30e11c6 : [inductor] Remove .to from lowering (#90280)
b8b439aede : C++17 friendly iterator implementation (#90379)
5351176caa : Kineto activity fix (#89785)
79406378ae : [primTorch] Add prim and ref for as_strided_scatter (#88426)
1f137c1e2f : add save and load stats in memory_tracker (#90144)
bc93454e4a : correctly set strides for expanded/unsqueezed dimensions (#90341)
50ec416599 : Fix C2 Ambiguous namespace (#89534)
56ab94d6e4 : [Vulkan][TCC] Add tests for quantized convolution with QUInt8 activation, weights and bias (#90012)
e0f681aa85 : Add manual cuda deps search logic (#90411)
3ef4fc2012 : Automated submodule update: FBGEMM (#74729)
ecd418673b : [FSDP][Easy] ufmt files (#90384)
32973651e6 : [Vulkan] Enable copying QInt8 and QInt32 tensors from cpu to vulkan. (#90357)
a076bdb357 : [fx] Copy codegen in legalize_graph (#90023)
6dcc214ac2 : Fix AssertionError fake_mode is not None in distributed (#90392)
2ad6ed8ac9 : Fix some typed storage is deprecated warnings. (#89867)
1b1301f16a : Revert "[pruning][core][feature] Implement prune for structured pruning (#89777)"
44779d9bc6 : [FSDP][optim_state_dict][2/N] Add _get_fqn_to_fsdp_param_info to map from original FQN to flat_param (#89899)
f7cdd3a7a0 : [inductor] Use a large tolerance for botnet26t_256 (#90383)
2b0b4bb6fd : [Dynamo] Fix llvm target for meta schedule & add torch to tvm ndarray helper func (#90214)
6a7659f304 : Fix issue 38095 TODO in test_autograd.py (#90031)
4b1053497c : [vmap] Prepend "legacy" to files for old vmap implementation (#90324)
94d800ffd1 : Make Transformers compilable by C++17 (#90389)
3531e44307 : [pruning][core][feature] Implement prune for structured pruning (#89777)
d680ea7e36 : [quant]Fix public bindings for DTypeWithConstraint (#90315)
4cdc96fb4f : Add hooks structure for passing around user provided hooks, add a new guard_failure_fn (#90371)
c92cf6bee3 : [BE][CI] Add windows test run instructions (#90388)
824641b083 : [Quant][fx][bc-breaking] Make convert.py smaller (#90189)
99fb39f508 : reland #89243: [Composable API] replicate: add support for DDP args (#90255)
e6a7278753 : Give std/var correction overloads proper defaults (#56398)
b0bd5c4508 : [MPS] Fix median_out_mps caching (#90326)
85ae28b454 : Reformat optim import (#90294)
15949fc248 : [ROCm] Enable few test_prim UTs for ROCm (#88983)
26d1dbc4f8 : [inductor] More correct check for fbcode environment (#90312)
351d73b97f : Fix exception causes all over the codebase (#90271)
8f079b895b : [Dynamo+FSDP] Update benchmarks with use_orig_params=True (#90100)
898b46d6cc : [Dynamo][Easy] capture more exceptions when import skip modules (#90338)
71f27f7688 : [Inductor] More robust stride and offset extraction from index expressions (#90184)
4f44877983 : [Inductor] Add test for Scheduler fusions (#90014)
13fcc412be : [Quant][fx][bc-breaking] Remove unused functions in fx/utils.py (#90025)
f28927e9c4 : Revert "[MPS] Fix median_out_mps caching (#90326)"
887249b2bb : [quant] Add fused "q - qlinear - dq" operator with skipped quant op for output of linear (#89882)
22e363348c : [Vulkan] Partially fix and then disable copying of vulkan quantized tensors to cpu (#90275)
23c192c3df : [MPS] Fix median_out_mps caching (#90326)
b769005924 : [fx][passes] Implement annotate getitem node FX passes (#90237)
0e182c9441 : [quant][fx] Add support for matching constant in the custom matcher code in quantization (#90092)
5caa27a3fd : as_strided: Fix default storage_offset for reference implementation (#89513)
3d4b92b171 : Ensure that we fakeify tensor subclasses when they are initially tracked (#90009)
f09e7b5ce7 : Replace assertEqualIgnoreType in test_nn.py (#90242)
6c195881b1 : [CI] Relax CMake requirements (#90307)
3b9a386d48 : Add `TORCH_FAKE_TENSOR_DEBUG` and use it to enable storage of traces on fake tensors at init time (#90215)
d224ac7f77 : Remove logging.CODE (#90234)
14894a7311 : Remove non-existing parameter from docstring (#90163)
7e9a8a1361 : Disable dynamo tracing torchrec.distributed (#90087)
27ad2605c8 : Hotfix to unblock TRT unit tests internally (#90313)
62e450d55f : [CUDA Graphs] Add option to dump a captured graph for debugging (#85519)
1abe264ef0 : [Upstream _NamedOptimizer] Reland PR (89480) (#90293)
7436b19eb2 : [FSDP] Clarify loss dtype check in `_test_fsdp_parity` (#90251)
919e09f26a : [FSDP][BE] Clean up dead code from `clip_grad_norm_()` testing (#90250)
3b578edd04 : [FSDP] Test `use_orig_params=True` in `test_fsdp_ignored_modules.py` (#90290)
25f39c1bce : Fix uniform ref implementation (#90094)
a1ab06ab65 : ShapeEnv.create_symbolic_sizes_strides_storage_offset (#89962)
e818c36647 : reland #89222: [Composable API] replicate: change to per module call, remove mark_root_module() (#90254)
bd9ad89a6d : [FSDP] Fix accidental change in `_test_fsdp_parity` (#90252)
ce21262808 : Log1p for complex in CPU (#89691)
9e314bd822 : [dtensor] handle the case where output of op is Optional[Tensor] (#90241)
eace084815 : Use Sized not Iterable to test for len (#90182)
c6942dbbfb : add shape check for random_samples in fractional_max_pool{2d|3d} (#89992)
be5108d5f9 : replace memset with value-initialization (#90048)
97e47a52b8 : [Quant] Add fused linear-leaky_relu op for onednn backend (#88478)
41bfa49db9 : [ONNX] Add src/index dynamic axes support for aten::scatter_add (#90090)
176b962f4b : Revert "[PT-D][Composability][1/N] Upstream NamedOptimizer from TorchRec (KeyedOptimizer in TR) (#89480)"
3c9431f505 : Add factory functions to python frontend (#89230)
e645771e95 : Revert "as_strided: Fix default storage_offset for reference implementation (#89513)"
44dac51c36 : Improve Autograd Documentation Clarity (#89401)
49ccc41d57 : [Vulkan] Enable QInt8 and QInt32 quantization (#89788)
45b40be078 : [FSDP()] Fix `fully_shard` fwd hook registration (#90201)
2b7fcfa399 : fix: Moving operators to FuncTorchBatchedDecomposition (#89762)
bb673fb1d9 : fix: update error when tensor escapes vmap (#89077)
2c2cce73d4 : [dtensor] remove torchgen function schema and parse manually (#90106)
a0c7b88861 : remove backward hook in memory_tracker (#90143)
6bbcd025bd : Fix issue 38095 TODO in onnx/test_utility_funs.py (#90085)
508916128d : [ReduceOp] ameliorate custom `__eq__` (#90088)
2d9267ba30 : [dynamo] Rewrite addcdiv in dynamo to its constituent ops (#90227)
77f9b2e8bf : Fix exception causes in fx, nn and onnx packages (#90134)
31ec1a1ef7 : [PT-D][Composability][1/N] Upstream NamedOptimizer from TorchRec (KeyedOptimizer in TR) (#89480)
cee396fa07 : [ao][ns] PNP demo for exposing arbitrary model transforms (#90153)
42705bd7b3 : Disallow registering meta function for CompositeImplicitAutograd ops (#90222)
a88400e0cc : pad low precision matmuls when requested (#90235)
ba70a8be03 : as_strided: Fix default storage_offset for reference implementation (#89513)
05ccbd6d94 : Functionalization: skip meta block computation if compute_reference_meta is false (#90219)
962ebe88a2 : Assert there are no outstanding side effects before calling cond (#90208)
0d8e53dfe7 : Revert "[Composable API] `replicate`: change to per module call, remove `mark_root_module()` (#89222)"
73565ce320 : [vision hash update] update the pinned vision hash (#90239)
3749b9dc73 : Revert "[Composable API] `replicate`: add support for DDP args (#89243)"
2597d5d722 : TorchDynamo: always convert FlexibleLayout to FixedLayout when given a stride_order (#89904)
29233a18c7 : [inductor] Add test_ops_gradients running with inductor (#89792)
ebeecbf833 : Dynamo FX graph stack traceback fix (#87136)
a268b9e53c : Fix yet another C++17 Windows build issue (#90228)
55b10e6b1d : [Pytorch][Vulkan] Use specialized shader for 3x3 depthwise conv (#89953)
a17765a127 : [Pytorch][Vulkan] Templatize depth wise convolution and specialize for 3x3 and 5x5 (#89952)
bd456fb549 : [Pytorch][Vulkan] shader codegen use ordered dictionary (#89951)
cb68dcbd6b : [Pytorch][vulkan] Simplify depthwise conv to remove bounds compute (#89950)
876b70245a : [Vulkan] output benchmark numbers for aibench parsing (#89949)
841eba6382 : [pytorch][vulkan] realistic benchmark size for depthwise (#89948)
564905c8e1 : [Caffe2] Fix the assert message (#89816)
2ea32f41f4 : Fix XLA dynamo CI (#90229)
5d6aa99c45 : Add sharding strategy to fully_shard (#90192)
e4670885b9 : Add a repro for fully_shard _unshard error (#90190)
0f274ed385 : [Composable API] `replicate`: add support for DDP args (#89243)
72fdfad4ad : [FSDP][optim_state_dict][1/N] Restructure _optim_state_dict to prepare the support of use_orig_param (#89898)
2b20a3d3ef : Simplify by using yield from (#90160)
54858cce4e : Fix issue 38095 TODOs in NCCL tests (#90033)
7571134f69 : [NNC] Use New PassManager for LLVM >= 15 (#89978)
5de5c5e462 : Assume that co_firstlineno is always defined (#90180)
1ea20cdb33 : workaround for indexing formulas with negative terms (#89933)
368a1cbd02 : fix c10::detail::integer_iterator for C++17 (#90174)
5423c2f0e2 : Light refactor to how we get shape_env for graph lowering (#90139)
32639a822c : Fix missing line in XLA backend after mergebot + ghstack gap (#90197)
7e034193bb : [LTC] Restore default ctor for LazyTensor (#90086)
65a0dcffd8 : [Composable API] `replicate`: change to per module call, remove `mark_root_module()` (#89222)
8845a8f899 : Revert "as_strided: Fix default storage_offset for reference implementation (#89513)"
6d794f6a4a : [ONNX] Fix concat with empty tensors (#87620)
301d9c0556 : Remove deprecated usage of is_pod/is_pod_v (#88918)
b1eb42bcfd : [4/4][DataPipe] Remove iterator depletion in Zipper (#89974)
eded97ac72 : as_strided: Fix default storage_offset for reference implementation (#89513)
199b8b6025 : Remove deprecated flatten_params_wrapper.py from lintrunner config (#90154)
7a08261a9c : Fix fully_shard error when policy is not provided (#90151)
777ac632fb : Added vectorized flip for uint8 (#90013)
226e803ecb : [Inductor] handle non-positive exponents in `Pow` (#90146)
41c3b41b92 : Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039)
4648baa911 : Revert "Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039)"
a580a63448 : [codemod][llvm15] LLVM-15 fixes for caffe2/test/cpp/jit/test_module_api.cpp (#89938)
d6c8603b98 : Fix warning: use of bitwise '&' with boolean operands (#90131)
57bb4cd046 : [Doc][Distributed] Add missing functions to distributed.rst (#89905)
f3aeed4960 : Add generator argument to torch.rand docstring (#90071)
1a25e6f3c3 : Fix indentation (#90110)
7322f73c8f : Fix exception cause in storage.py (#90118)
c00d395f05 : Revert D41682843: Multisect successfully blamed D41682843 for test or build failures (#90132)
bda6ff0990 : [1/4][DataPipe] Properly cleanup unclosed files within generator function (#89973)
2bca280a31 : Revert D41683102: Multisect successfully blamed D41683102 for test or build failures (#90117)
e47af44eb8 : [FSDP][Easy] Remove unused methods (#89229)
1ee189ce8e : [FSDP] Issue warning when clamping to `NO_SHARD` (#90060)
4068c5467d : [Reland] Move functorch/_src to torch/_functorch (#88756) (#90091)
f7520cb51e : Reduce memory usage requirement of `test_pdist_norm_large` in `test_torch.py` (#90075)
61bd7fbacb : [vision hash update] update the pinned vision hash (#90095)
e53a0e391b : [Easy] Remove unused parametrization (#90079)
dd060f359e : Test composable checkpoint wrapping FSDP submodules (#90078)
a775204499 : Fix issue 38095 TODO in test_dataloader.py (#90084)
ef0c7ec958 : Use dynamo fake tensor mode in aot_autograd, move aot_autograd compilation to lowering time [Merger of 89672 and 89773] (#90039)
9a1c6fd506 : [pruning][core][feature] Align BaseStructuredPruner with existing pruning flow (#88436)
d3f20a20b8 : [reland][quant] Explicitly set default quantized engine instead of relying on the order of supported_qengines (#89804) (#90036)
65f38160f0 : Fix issue 38095 TODOs in test_quantized_op.py (#89883)
29d1d8f3ef : [Quant] Remove explicitly default QConfigMapping settings (#90066)
a306f85ea7 : Update Persons of Interest (#90069)
9d54d3bec2 : [NVFuser] undo v100 OOM skips (#90070)
74a090a744 : Add integration test for composable fully_shard and checkpoint (#90041)
cba96366a2 : Revert "remove torch.equal usages (#89527)"
e1532af0bb : Fix meta registration for aten._cdist_forward (#90042)
eb56b08f96 : [FSDP] Fix `clip_grad_norm_()` for low prec grads (#90028)
688b767265 : [FSDP] Fix `keep_low_precision_grads=True` for `use_orig_params=True` (#90027)
f5fbb5001f : Revert "[follow-up] Python Attr Serialization (#88913)"
78bdb858f9 : Call _sdp_attention in nn.functional.mha (#89470)
3916d729c8 : [Dynamo] tensor.type() should return tensor types with CPU and GPU variants (#90021)
538f6279db : Fix access to uninitialized memory in VSX vector functions (#89833)
acd68f9097 : [Reland] dont clone args (#89766)
59101b6fe4 : Fix binary iOS uploads (#90058)
f62e54df8f : Reland "Dynamo, FX, Inductor Progress Bars (#88384)" … (#90055)
b87682f555 : Fix gradcheck for CSR and CSC inputs. (#89786)
526e4aa5f8 : Update to_sparse docs regarding the layout and blocksize kw arguments. (#89912)
cf3c3f2280 : Revert "Revert "Dynamo, FX, Inductor Progress Bars (#88384)" (#90018)"
0bde810572 : Add more debug information for Inductor (#90008)
6f4dea562d : Implement post and pre hooks for optimizer (#89176)
adc1a94ef4 : Add tests for custom pybind type_casters (#89897)
b703e4b3c2 : Add hierarchical module names to torchFX graph.node #87659 (#87742)
9dffc56008 : Intel compiler support in c10/util/TypeIndex.h (#89610)
9013c92a9f : [ao] making QConfigMapping print in a user friendly way (#89932)
5f881ac2d1 : Adding dispatch alias 'FuncTorchBatchedDecomposition' (#88771)
6addc8d923 : [Inductor] add expm1 lowering (#89961)
42f27c322b : TorchDynamo: don't compute index for max_pooling when return_index is false (#89838)
f623b123f0 : [Inductor] Do not install g++12 by default (#90038)
b058a02786 : TorchDynamo: enable convolution bn folding for functional bn (#89746)
3162a48a77 : [dynamo][benchmarks] Call zero grad (#90026)
63e57280fc : [Profiler] Memory profiler part 13: Add sizes to timeline. (#89356)
6727e537a7 : [Profiler] Memory profiler part 12: Emit timeline of memory events. (#89355)
342139589c : [quant][fx] Add support for matching multiple arguments in patterns (#89986)
4176102407 : [vision hash update] update the pinned vision hash (#90035)
39937b84cd : Change periodic concurrency group (#89850)
d09c52e4fd : [inductor] Deterministic kernel names (#89713)
8b2f9887bf : update quantization doc: add x86 backend as default backend of server inference (#86794)
69d7afc799 : [LTC] Remove noop_execution_mode_ (#89989)
342d78d1a2 : Cache guards once per variable tracker, rather than re-propagating them repeatedly (#89827)
6efedfd774 : Revert D41609017: Multisect successfully blamed D41609017 for test or build failures (#90034)
c63afb283c : Disable dynamo on optimizer lazy initialization (#89902)
d94f5c784c : Fix binary testing if torchtriton is mandatory (#90017)
f628f2ed73 : [QNNPACK] Fix Memory Leak in QNNPACK QSoftmax Op (#89544)
7bd284495a : Add non-reentrant checkpoint to composable APIs (#90015)
a5430e1067 : [UCC] Properly finalize unsuccessful collective posts (#89306)
063bbeb3ba : Revert "[quant] Explicitly set default quantized engine instead of relying on the order of supported_qengines (#89804)"
29ea1c9c8e : [doc] update dtensor readme (#89991)
6f5945e4bb : triton supports devices < 7.0, not 6.0 (#90020)
607ff6f4c1 : [quant] Explicitly set default quantized engine instead of relying on the order of supported_qengines (#89804)
d04480a6b5 : [Vulkan][TCC] Add tests for quantized add, sub, mul and div (#89578)
8aee768025 : [quant][be] Merge qconfig_mapping_utils.py in quantization and fx folders (#89979)
0ad6715b7b : [aarch64] add sleef_arm dependency (#89988)
07be48de37 : [chalf] relax tolerance : conv_transpose2d (#89993)
ca5526cf1f : [tp] ufmt test/distributed/tensor (#89970)
9b5e6b029f : [tp] ufmt distributed.tensor.parallel (#89969)
c37c5163da : [dtensor] ufmt test/distributed/_tensor (#89968)
bf23e0bdbd : [dtensor] ufmt distributed._tensor (#89967)
768bd3fb4a : Add `torch.compile` implementation (#89607)
bcf4292f04 : Revert "Dynamo, FX, Inductor Progress Bars (#88384)" (#90018)
015b05af18 : Editorial pass on Dynamo docs (#89921)
b2f340557a : [ONNX] Supports scatter_add with different static shape of src and index (#89787)
d80056312a : [Quant][fx][bc-breaking] Rename fx/*patterns.py (#89872)
314e7c37c3 : fix citation file in MANIFEST (#89994)
a5532929da : Remove DDP import (#89982)
5a36d99845 : Add error repro test for FSDP ignored modules with mixed precision (#89971)
dfb533ca5b : add vjp test with non-contig inputs (#89375)
99dac4dd48 : Type torch._dynamo.guards (#89919)
e03cde07e4 : Guarantee symbol allocation for all sizes/strides/storage offset (#89879)
74bcf2b604 : Add definitely_not_01 set to ShapeEnv. (#89871)
8d333761a9 : When dealing with dupe arguments, prefer leafifying if possible (#89896)
808cb2e86d : [FSDP][Dynamo] Define annotation attributes as globals (#89913)
4095ef8b80 : remove torch.equal usages (#89527)
0acbcef4ab : fix assert_close docstring (#89620)
d72cd4c4e5 : document torch.testing.assert_allclose (#89526)
4baa78bb1f : enable ufmt for torch/testing/*.py (#89525)
850b53bbee : add more error info for cublasLtMatmul (#89983)
a747326423 : Add manual meta implementations to quantize_per_tensor.tensor and co (#89958)
f1978b18f9 : add mixed data type support for LayerNorm (#81851)
b6d6c6933e : [vision hash update] update the pinned vision hash (#89749)
b399acd2dd : [codemod][llvm15] LLVM-15 fixes for caffe2/caffe2/video/video_decoder.cc (#89937)
2f5532a90e : [codemod][llvm15] LLVM-15 fixes for caffe2/caffe2/video/video_decoder.h (#89940)
cc01614186 : [codemod][llvm15] LLVM-15 fixes for caffe2/test/cpp/jit/test_graph_executor.cpp (#89936)
6317311e61 : [inductor] Disable parallel compilation inside fbcode (#89926)
8d8a215d4c : [Vulkan][TCC] Helper functions for vulkan quantized tests (#89922)
a61450726f : Minor fix for dynamo xla integration test (#89891)
4bae860813 : quantization: make x86 the default backend (#88799)
0e7918b931 : fix mkldnn quantization issue for weight reorder error (#86876)
6372f11d8d : RowwiseMoments: use float as acc type for bfloat16 inputs (#84405)
ad1585b4a4 : [Checkpoint] Minor update to checkpoint utils (#89964)
a43e09c064 : Implement gamma cdf (#89955)
5167108c1a : Add device note to the docs of sparse tensor factory functions (#89910)
11db12bd94 : Issue 68576 prefetch factor docstring changes (#89874)
7cf0913909 : Correct the label for quantization PRs (#89888)
1ccaa2a5f7 : [EASY] Replace direct use of Guard ctor with make_guard (#89945)
4451eb24e6 : Move tensor_parallel out to distributed.tensor folder (#89878)
8a760ea922 : Subscribing janeyx99 to optimizer PRs (#89943)
5a82c79024 : Small fix for `torch._C.Graph` type hint (#89821)
dfbc4e5473 : [Easy][FSDP] Fix pyre error (#89930)
0c3537a3c3 : Add dynamo smoke tests to CI (#89302)
9e4a25c731 : [quant][decomposed] Add support for int32 for decomposed q/dq ops (#89881)
62f01e2b26 : [FIX][QAT] Switch to use `kwargs` when `args` is empty (#89778)
0bc19e77d2 : [quant][be] Simplify `insert_observers_for_model` in fx/prepare.py (#89887)
76e869c911 : [BE] Beef up test_functionalization to test functionalizing multi-parameter functions (#89798)
4144ad16af : add XPU backend to support torch.save and torch.load (#89679)
6fb8423904 : [FSDP] Slightly refactor fx symbolic tracer (#89917)
89769d84eb : [FSDP][BE] Move dynamo annotation to separate file (#89890)
76c6dfeaa6 : Add layout and blocksize arguments to Tensor.to_sparse method (#89502)
f2308b1da6 : [MPS] Enable fp16 for linear backward (#89774)
b5ad90932a : [jiterator, complex32] lerp : cuda (#75584)
26054c1607 : beef up inplace/view note on copy slices (#89856)
b7c42b4066 : [FSDP][Easy] ufmt `test_fsdp_checkpoint.py` (#89916)
6e8e7b9407 : Fix binary ios builds (#89929)
1207b0e474 : Update Reviewers for PyTorch Distributed team (#89889)
09f2373ec0 : Fix TODOs related to #38095 in test_mps.py (#89815)
f1415b8cb6 : Revert "Call _sdp_attention in nn.functional.mha (#89470)"
618a585f6c : Revert "replace double transpose with single permute in nn.f.mha (#89847)"
a6caa9c54b : Add a cpp wrapper for Inductor (#88167)
5949d5fed5 : [FSDP][Easy] Remove internal default arg (#89227)
7cd6e6acad : add bf16 in fp32 out fast path for embeddingbag in caffe2 perfkernel (#89198)
68805b08d1 : [benchmarks][dynamo] Trying CI - Set train() for TIMM models accuracy tests (#89780)
969a7d09f6 : Revert "[aarch64] add SLEEF dependency for aten_cpu (#89475)"
4cc5be3a06 : Revert "Add bits tensor types (#88594)"
296e1ba4d0 : Row and column select support for block compressed sparse tensors (#88733)
0cc0e5ef65 : [PT-D][Checkpoint]Add MultiThreaded FileSystemWriter for distributed checkpointing and Update tests (#87987)
87d18cf0e7 : fix RowwiseMoments vectorization issue on CPU (#84404)
92f08f09d8 : Vectorize erf (#89837)
009dd3c4af : [PT-D][Tensor Parallel] Add more test cases when we use use_orig_params for FSDP wrapping (#89779)
011452a2a1 : Dynamo, FX, Inductor Progress Bars (#88384)
d88b555577 : [Dynamo] Fix source/reconstruction bugs in NNModule named_* calls (#89729)
447283752c : Update DDP docs for Dynamo/DDPOptimizer (#89096)
12f98f85bc : [dtensor] update README (#89800)
b09efae3bc : update subscriber list (#89799)
f4707ae004 : Add arguments to collect_results (#89611)
ce17bb95fc : [FSDP] Include module classes in `ModuleWrapPolicy.__repr__` (#89058)
c8aaad040e : [FSDP] Limit all gather after pre-unshard (#89057)
56b3ad78e1 : [Checkpoint][2D][5/N] Add checkpoint_utils for distributed checkpoint to testing/_internal/distributed/ (#89873)
be80b72add : [FSDP] Remove unneeded stream sync from `clip_grad_norm_()` (#89308)
90bed8874f : Generator of tensor inputs with variable layout and structure (batch/non-batch, hybrid/non-hybrid, block/non-block) (#88914)
275ade6371 : Enable rsqrt (#89771)
2d32e5dd09 : add env/config flag to disable dynamo (#89828)
a70082a863 : [functorch] Move `cond.py` to `_cond.py` and expose `cond()` under functorch.experimental.control_flow. (#89819)
d1760d7a42 : [FSDP][Easy] Remove outdated TODO (#89217)
1a33b7cbfa : Make fake tensors preserve dense strides in type conversion (#89803)
9c8a94bf90 : [checkpoint] Improve test (test_nested_dict.py) (#89854)
cefece3726 : Fix typo in filesystem.py (#89849)
5a79144a79 : [dashboard] Fix flag compilers (#89853)
59a2fe74d4 : [CI] Add TorchTriton conda packages (#89841)
24b3b73c98 : [Caffe2] Fix merge logic bug (#89551)
55789b40ef : Remove beauby and dzdang from CODEOWNERS (#89811)
693135a9b8 : [inductor] Add aten._native_batch_norm_legit to decomposition (#89843)
3d47c74cfe : Update code style for optimizer code (#89862)
8ca09dda42 : [quant][docs] Move some of the descriptions out of codeblock (#89795)
fcb5d6e771 : Enable instance norm running mean test (#89793)
c599cf24ad : [FSDP] Another fix for `DTensor`, `use_orig_params=True` (#89845)
b9afa92827 : replace double transpose with single permute in nn.f.mha (#89847)
8713119c89 : Stream actually overrides __new__ so we need to patch it as well (#89592)
a029ec2c88 : Move gpu slow tests to sm86 (#87880)
991028cd9f : Deprecating DataPipes (#89794)
6c1fb3f21d : Don't unsafely clone autograd meta (#89720)
02e2eaa9c6 : Fix CopySlices logic to ensure wrapped node runs properly. (#89812)
8314d403a6 : [test_nn] split multihead_attention from test_nn (#89748)
fb47a66989 : [Quant][docs] Use get_default_qconfig_mapping (#87299)
2bce6d09ee : [Quant][fx][bc-breaking] Remove backend_config_utils.py (#89810)
e1dbd9a288 : Revert "[GHA] Decrease Windows test timeout to 120 minutes (#89694)"
6e2da426f0 : [FSDP] Relax post-backward assert (#89791)
218d9c6e09 : Revert "Move functorch/_src to torch/_functorch (#88756)"
086b251f9a : [follow-up] Python Attr Serialization (#88913)
2f9ec226e4 : don't run input mutation analysis in dynamo (#89760)
3cef87f9fd : [aarch64] add SLEEF dependency for aten_cpu (#89475)
c6ede0bdfc : [Quant][docs] Fix BackendConfig example in docstring/README (#89319)
52bc5c1cfe : Move functorch/_src to torch/_functorch (#88756)
620994cd7a : Guard the boundary of index computed in compute_source_index_and_lambda (#89252)
93772305d9 : [PyTorch Edge] Set training for module only (#89488)
a78467f3df : Refactoring to share vectorization code for int8/uint8. (#89650)
8226a5d383 : [minifier] Continue on assertion for accuracy minification (#89739)
40dd03eeaa : [dynamo] Don't copy the graph during checkpointing (copy_graphstate) (#89232)
91899a9ebd : add memory_tracker tool to help profiling memory usages (#88825)
7ec7a82082 : Test FSDP with submodule non-reentrant checkpointing (#89781)
705ad36cc5 : Dynamo asserts FSDP wrapped modules use_orig_param (#89523)
7860fcc245 : Enable DDPOptimizer by default in dynamo (#88523)
9048cf16fe : Move Dynamo docs back to core (#89769)
2b522670d2 : [dynamo] Minifier fixes for reproducing segfault (#89712)
c1950620c5 : [decomp] Fix native_batch_norm_backward dtype of dweight and dbias (#89740)
4d7ec30220 : Call _sdp_attention in nn.functional.mha (#89470)
e20ec44544 : fixes for inductor <> batch norm (#89603)
740860d414 : Add type hint to torch.norm and Tensor.norm (#89728)
908daa8ae5 : [nvfuser] avoid out of bounds error (#89584)
77df2ca9b6 : Special-case fsdp wrapped modules to be Unspecialized (#89330)
c75434ed4f : [Inductor] Add an option to mark wrapper call in PyTorch profiler (#89674)
4b11119cc3 : [functorch] fix possible overflow (#83389)
63843401f5 : Fix archive issue impacting summary stat diff (#89789)
943acd4d27 : [FSDP] Fix `nn.Parameter` usage for 2D and `use_orig_params=True` (#89782)
23ee6757fc : [Checkpoint][2D][4/N] Add nested_dict for distributed checkpoint to core distributed (#89537)
a378ba2123 : Re-enabled 3 reductions tests on Windows (#89567)
f3b1315eee : Add bits tensor types (#88594)
22e7514a15 : [Checkpoint][2D][3/N] Add nested_tensors for distributed checkpoint to core distributed (#89501)
0057be3361 : [CUDA graphs] Add warning if captured graph is empty (#88754)
c18da597e0 : [skip ci] documentation update for the kwargs defaults section of fun… (#89719)
13d2af2a9b : [LTC] Metrics can be reset too (#89606)
5abe454d6c : [Vulkan][TCC] Fix conv2d pack biases (#89568)
2e0cd7c8bd : Add meta implementation for _efficientzerotensor (#88936)
69a8c92d53 : Fix comparison of batched_prob vs unbatched_prob in test_distributions (#87977)
47cca5e444 : Revert "Move Dynamo docs back to core (#89769)"
8321066031 : Tweak formatting of note on macros (#89598)
be2816db18 : Move Dynamo docs back to core (#89769)
465ee7bc09 : [inductor] skip dm_nfnet_f0 in TIMM model test (#89768)
cdf4087597 : [benchmarks] Disabling gradscaler (#89741)
e8643ded6d : Revert "Don't allow recomputing a node that *must* be materialized in the backwards pass (#89171)" (#89770)
2a2c07ae37 : [multipy] Address GetPythonFramesFunction() and multipy incompatibility. (#267) (#89315)
95563b3eda : Reland "Add single process version of dynamo distributed hf_Bert tests (#89721)" (#89756)
6ef702490d : Revert "Support set_rng_state with fake tensor (#89642)"
ed41a7fb68 : Update minor release acceptance criteria (#89767)
ed9cd47e31 : Add AOTAutograd and partitioner to ciflow/inductor (#89772)
cf91e3641a : Use isinstance test rather than exact type test for wrap to fake (#89671)
b87c45d5a7 : Make aot_module_simplified accept fake tensors (#89670)
abf91562bd : Change aot_module_simplified to take arguments directly (#89669)
b589e726d9 : Refactor how AOTAutograd backends are defined (#89736)
cf4969d9d6 : [ROCm] Replace layer_norm_grad_input_kernel with cuComputeGradInput for ROCm (#87726)
098cbe23c3 : Update masked.rst (#89758)
faa032c5e5 : [GHA] Decrease Windows test timeout to 120 minutes (#89694)
a37072170d : [FSDP()] Require args as kwargs for `fully_shard()` (#89573)
090fc62b24 : [FSDP()] Register root pre-forward hook (#89572)
8721448544 : Add statement about minor releases, in the release.md document (#89698)
6ba6b64a79 : CI android cache conda (#89554)
2661ff10a9 : Include test/distributed/test_dynamo_distributed.py for ciflow/inductor (#89755)
0d9a615af4 : Revert "Add single process version of dynamo distributed hf_Bert tests (#89721)"
2f8769d680 : Support set_rng_state with fake tensor (#89642)
856e2fa59c : Guard traceable_tensor_subclasses patching with finally (#89689)
49eb43fc45 : Don't modify log level in dynamo distributed test (#89655)
d089fbdc33 : suppress Werror introduced by lack of override by #86786 on `bool initialized()` (#89687)
f45fe7de33 : Add mypy checking for a few files in torch/_dynamo (#89731)
55e8b5c126 : [xla hash update] update the pinned xla hash (#89405)
b5616cd5f4 : Add simple assert to detect fake tensors on modules (#89723)
db1f1144f1 : Beef up AOTAutograd logging with aot_id and input descriptions (#89710)
5f8848f329 : Don't suppress log messages for dynamo CI config (#89653)
1a2dd6b15e : Add single process version of dynamo distributed hf_Bert tests (#89721)
0e7c100c9b : Add debug asserts to AOTAutograd for input consistency with compilation (#89702)
1f95f24d30 : Factor input deduplication into a separate function (#89701)
dcefc8f90f : Implement guard_source on RandomValueSource (#89711)
1da633f98a : Access named parameters/buffers/etc via getattr rather than index (#89625)
e36d68af88 : Don't allow recomputing a node that *must* be materialized in the backwards pass (#89171)
b709078dc6 : [Profiler] Memory profiler part 11: Mark tensors created in the backward pass which don't correspond to parameters. (#88926)
143d2881a8 : [Profiler] Memory profiler part 10: Mark optimizer state (#88925)
ae725d501e : [Profiler] Memory profiler part 9: Mark activations (#88924)
56e40fe054 : Let SyncBatchNorm fallback to BN if not using distributed training (#89706)
39449ea61d : [vision hash update] update the pinned vision hash (#89692)
483d3a3d07 : [Profiler] E2E expecttests for category assignment (#88653)
0435894bb3 : [Profiler] Memory profiler part 8: Mark parameters. (#87568)
17fa6bf1f5 : [Profiler] Memory profiler part 7: Mark inputs (#87567)
64c5c77cd4 : [Profiler] Memory profiler part 6: Mark gradients and temporary intermediates. (#87566)
5f09a6d573 : [Profiler] Memory profiler part 5: Data flow graph (#87006)
c3116dd78b : [Profiler] Memory profiler part 4: Select top level torch ops (#86880)
bb77accb4c : [Inductor] Record cpp kernel in PyTorch Profiler (#89367)
36018a6ee6 : Don't suppress exceptions from backends (#89656)
3e20d023b1 : put descriptive kernel names behind config (#89697)
591dfffa38 : update docstring for torch.linalg.lstsq (#89383)
c9a0cc8640 : Simplify aot_module_simplified by removing top_args/top_kwargs (#89666)
6168f22fae : Don't support kwargs at runtime in aot_module_simplified (#89664)
b04dda4291 : Delay verify correctness wrapping to call site. (#89662)
61a3fe4b64 : make inductor correctly propagate nans for maximum and minimum (#89612)
70c0a3006e : Fix typo in segment_reduction_op_gpu.cu (#89647)
2c0bd85c75 : complex: register c10::complex with py::cast (#89680)
abb446af8c : Implement old windows in Python (#87082)
95ea47ef0c : torchdynamo to torch._dynamo in aot_autograd.py (#89385)
6904324781 : Remove fake_tensor_propagation (#89646)
1aa1014b26 : xfail maml test, instead of running it without fake tensor prop (#89645)
a048913e25 : [vision hash update] update the pinned vision hash (#89667)
3b3ebcd031 : TorchDynamo: weight prepack for single conv (#89209)
0c4f3db7bf : TorchDynamo: weight prepack for mkl linear (#89109)
07151a6bd6 : TorchDynamo: weight prepack for onednn convolution external call (#88988)
0884fdaba0 : Revert "Dont clone unmutated args in triton autotuning (#89519)" (#89652)
4a16f8cdb2 : Reenable fake_tensor_propagation on test_cudnn_rnn (#89644)
fc7dcb684a : Run optimizer tests with fake tensors (#89643)
9b13508ef3 : Force test_rng_state to run with fake tensor prop (#89641)
c6be06d93a : Easy: These tests work with fake_tensor_propagation on (#89640)
6fb6eb0a74 : Support unspecialized integers with dynamic shapes (#89639)
0c96841a20 : Cond capture with fake tensors actually works; don't raise in this case (#89638)
d3c012f409 : [test_nn] split pruning tests from test_nn (#89590)
83666f167d : Added vectorized CPU code for uint8_t datatype. (#89284)
9497552771 : Update SyncBatchNorm _all_gather_base to all_gather_into_tensor (#89521)
94a88b53ed : Remove fake_tensors_available (#89637)
1c8b0779de : Fix segfault when swapping custom allocator (#89613)
fd279fe85b : Make pytest work again on test/dynamo (#89631)
c3e85d879c : Mention discrepency between original impl and our impl of RAdam (#89575)
860bae49e4 : Suppress guards on as_strided call only. (#89569)
1588ea0dbf : Added log1p for complex in c10 (#89214)
4f5c4c022a : [LTC] Refine MetricsArena::Reset (#89608)
a8629a1c18 : Upgrade nightly wheels to ROCm5.3 (#89101)
c0d81aa70c : Use fx.replace_pattern for removing empty_like+fill in nvFuser+PrimTorch execution (#89132)
b515c1d960 : [QAT] Check the value of numel to avoid segfault (#81547)
22a1b5e243 : quantization: deprecate observer compute_dtype and replace with is_dynamic (#85431)
e4ccec6eca : [Dynamo] Fix bug of using customized torch.autograd.Function (#89397)
903ae4570e : Disable optimizer tracing, enable for tests only (#89500)
c79489c8e6 : Expose to python the backward AD view_func (#89586)
4cb6bbbe27 : Symintify `embedding` (#89327)
9c867eae1a : nnc: fix Store if value is fp32 while buf is bf16 (#86788)
f0e5bc4b9f : Symintified layer_norm (#89466)
fdb2dd113d : Install missing VSX headers (POWER) (#85547)
e922bd4e52 : [ONNX] Move two headers from .h to .cc (#86852)
23fe2ff910 : verify the number of outputs of xla graph (#89536)
0bde514981 : Add `c10::` namespace in front of `optional` (#89605)
e19a7165fd : [nn] Remove deprecation warning from nn.functional.{tanh, sigmoid} (#86905)
a00bd6f686 : Don't run auto request review on forked PRs (#89583)
0a1a53083e : [primTorch] Enable regex error testing for some refs (#87765)
3ad2a032f4 : Update default cmake to 3.18 (#89570)
8695f0cced : Rectify `native_batch_norm` schema by splitting it into two legit schemas (#88697)
a00efe55c3 : Fix CheckOutputStreamSetting on JitLoggingTest as it failed if logging wasn't enabled. (#82722)
b8d3afd886 : Skip upload test stats for test reports from rerun disabled tests workflow (#89548)
f18f0c70ab : Dont clone unmutated args in triton autotuning (#89519)
ac19c5be82 : FFT: disable dimension wrapping for scalar tensors (#89234)
50e2e4faf3 : Sparse CSC/BSR/BSC serialization and pickle support (#89553)
a8d6b82167 : Fix norm decomp when dtype is passed in (#89508)
72110d7833 : Fix Upsample Decomp Striding For Small Channels (#89528)
b7483be06a : [quant][docs] Add docstrings for operators defined in torch.ops.quantized_decomposed namespace (#89547)
a188f05e8c : Reland #89031 Added conv constraint that infers layouts (#89530)
e800d27b10 : [dashboard] Add graphs for all summary metrics, add additional testing flags (#89580)
953f39578a : Mark IPU device as not supports_as_strided (#89130)
37e46a5035 : [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
91dcef41ae : Thread PG: add allreduce to threaded pg (#89043)
27db806888 : Handle Tensor.__deepcopy__ via clone(), on IPU (#89129)
fa7a963f65 : Remove BaseException TODO (#89540)
9eed6b7f9a : [Dynamo] Several fixes on TensorVariable & TorchVariable (#89486)
f03e6672fb : [Checkpoint][2D] Minor update for dedup_tensors.py (#89542)
74703eb502 : [Checkpoint] Add a logger to dedup_tensors (#89503)
57353c9608 : first draft of input mutation handling for aot autograd (#88817)
902e4e3926 : Revert "Fix the kineto daemon build condition (#89174)"
049a0f2cd5 : [inductor] Update CI model tests (#89499)
95474e00a9 : [quant][be] Remove unused util code (#89272)
128faf2b69 : [quant][be] Refactor the error checking code for quantize_per_channel op (#89271)
71c0e84914 : Gate leak check and reruns on schedule (#89504)
c9d4390d13 : Add Pluggable CUDA allocator backend (#86786)
1333fdcff1 : [test_nn] split parametrization test from test_nn (#89552)
347a7d97a5 : Deprecate decorating classes with torch.no_grad and similar (#89522)
2de38a0714 : Add `torch._dynamo` to docs (#89510)
de0dee30d0 : [PT-D][3/N] Sync TP API change to Pytorch (#89535)
795473ff5e : Call `symint::sizes()` instead of `sizes()` on convolution error messages. (#89549)
39772a6a01 : [quant] Add support for quantize_per_channel in the reference flow with decomposed tensor (#89270)
c651944f92 : [test_nn] split hooks test from test_nn (#89201)
dd140fc351 : [test_nn] move init tests from test_nn (#89202)
7594e043b8 : Fix Use-after-Free in qembeddingbag_byte_prepack_out (#84750)
07dd2fe6c3 : Symintify `select` (#89326)
29742786f3 : [quant] Add dequantize_per_channel in quantized_decomposed op library (#89269)
5266953443 : Add crossref debug mode for functionalization, catches stride errors (#89498)
fe990c8db9 : [BE] Add more `ssh` instructions (#89516)
5b51ca6808 : Update CUDA compiler matrix (#86360)
504570d577 : Delete unused variable assignment in _refs/__init__.py (#89538)
ed32511974 : Don't use explain() for --explain; instead read it off the counters (#89518)
f5d18574a3 : Allow Module forward-pre and forward hooks to take kwargs (#89389)
4935b597ac : Added implementation and tests for MPS Hardswish (#87952)
1cfd3858ac : [inductor] Use dense masks for indirect indexing (#89524)
26322544b8 : Add limited FSDP correctness to torchdynamo benchmark (#89469)
7f4b4d2827 : [Inductor] Limit g++12 installation to Linux (#89472)
b50699f247 : Fix inductor fallback_random for dropout/rand_like (#89515)
8bf8e4d71e : [dashboard] Add metric graphs back to dashboard (#89531)
ce856cee7e : [test_nn] fix missing class attributes for NNTestCase (#89200)
391b593ca2 : [quant] Add quantize_per_channel in quantized_decomposed op library (#89268)
5bba783d21 : [dashboard] Remove aot_cudagraphs and nvprims_nvfuser (#89514)
ea920a1115 : [Vulkan][TCC] Add tests for quantize_per_tensor and dequantize (#89496)
74e62a1fef : [ROCm] Optimize layer norm backward kernel for ROCm (#87635)
00b7d8ef23 : Shard windows periodic job more (#89455)
77d7f2c659 : [dashboard] Add commit date & fix date related issues (#89517)
177baf366a : Fix vectorized trigonometric functions for VSX (#86453)
ac3004757e : Relax tolerance for test_out_addbmm_cpu_float32 (#86365)
d053d51343 : (Further) limit world size in test_fsdp_pure_fp16 (#86280)
c2ce79f06e : Fix dev-discuss link in the maintainer docs (#89493)
ef8b91fec7 : enable previously failing UCC distributed_test.py tests (#89023)
f281f435a8 : Fix benchmarks - xla tensor test (#89509)
7c0bb61291 : Force numpy prod to use 64 bit integers on Windows in some tests (#88089)
f4898daaee : Add cached conda env file for Buck CI workflow (#89422)
9c0bf9387c : Meta impl for linalg_cholesky and linalg_cholesky_ex (#89430)
c4e08387c1 : [quant][fx] Support producing reference quantized patterns for dynamic quantization (#89248)
2823fc5e4c : [inductor] generate nan in the cpp backend (#89289)
5797f74924 : [19/N] Add monitored_barrier custom op with CPU implementation (#89318)
be22b5d39f : [18/N] Add allgather_coalesced custom op with CPU/CUDA implementations (#89317)
d9cbe7764e : Make aten.copy preserve strides (hf_Longformer) (#89464)
2d94fd3b19 : [Vulkan][TCC] Fix quantized shaders (#89456)
0f7dca1733 : Vectorized CPU code implementing right shift operator. (#88990)
1d6a188d08 : Reland Dispatch torch.norm to linalg.vector_norm and linalg.matrix_norm (#81761) (#84624)
6b085d5cad : [Checkpoint][2D][2/N] Add traverse for distributed checkpoint to core distributed (#89398)
7b0650d5cf : Back out "[static-runtime] change the backend for permute_copy" (#89463)
f2cf1b0f5e : Revert submodule updates introduced by #89157 (#89449)
40cf214f2d : Support masked_fill to address the GPT2 performance issue (#89274)
e545caa50f : dynamo/torchxla integration: trace on xla rather than eager (#88904)
1dae59ba16 : [Checkpoint][2D][1/N] Add dedup_tensors for distributed checkpoint to core distributed (#89399)
ce342ed2d3 : Fix retrying logic for successful unittest tests under --rerun-disabled-tests mode (#89454)
338f619044 : [vision hash update] update the pinned vision hash (#89471)
00b9473ad6 : [PT-D][Tensor Parallelism][2/N] Sync TP API change to PT prod (#89467)
82713a1cc4 : [inductor][compilation time] Fallback when kernel size for avg/max pool is large (#89448)
496c8ae760 : [xnnpack][lite-int] Handle Constant Data (#89445)
120d200620 : Revert "Added conv constraint that infers layouts (#89031)" (#89451)
06dffb3319 : dont clone symints, dont clobber symint proxies (#88230)
58a74f34f9 : [17/N] Add _reduce_scatter_base custom op with CPU/CUDA implementation (#88903)
7174572b1e : Add torchvis support to dist bench (#89324)
57ed94804e : Bind DispatchKey.Functionalonalize in pybind11 (#89452)
b189a7444d : [fix] tril & tril : out of bound check (#89384)
dbc354b262 : Mitigate flaky test_ops_fwd_gradients on macOS (#89410)
ea50549ce6 : Suppress guards when creating fake tensors (#89349)
fa4980cd5e : Add commit hash to dynamo dashboard (#89462)
186192bb26 : [Dynamo] Fix bugs when calling tensor.data and tensor.layout (#89257)
821ba6b51b : [4/n] Thread PG: add reduce_scatter to threaded pg (#89442)
3e99d4db76 : [3/n] Thread PG: add scatter to threaded pg (#89441)
3876f94c3d : [2/n] Thread PG: add test for broadcast (#89440)
deae450899 : [1/n] Thread PG: add test for allgather (#89439)
047e542a1a : [tools] expose selective build library (#89351)
c068fa900f : [inductor] Misc division lowering fixes (#88603)
1267dcf297 : [inductor] Fix nan handling for aten.sign (#88937)
3d247a8bcd : Fix unconvertible_ops as per #89261 (#89299)
1d9e1fca97 : Update sdp dispatch logic to enable fused backward (#89154)
cf9476554f : update kineto pinned commit (#89435)
e4d9dbd7d2 : Port torchdynamo's torchbench script to userbenchmark (#89239)
9d209e7834 : Revert "[ao] making _is_activation_post_process private (#87520)"
f3db03612f : Revert "[ao] maintain BC for is_activation_post_process (#89260)"
6796979ee1 : [Inductor] Limit the number of compile threads to the available cpu cores (#89377)
c2cf0bde1f : Move the OpInfo same-storage error to the autograd test (#88306)
a80e5e7813 : Update ideep for future performance improvement (#87966)
31708a7310 : TorchDynamo: enable conv+silu fusion (#89278)
bc716383a6 : Redefine the simdlen semantic (#89263)
79770d3636 : TorchDynamo: enable conv+relu6 fusion (#89265)
e0251de42f : [Easy] Use prepend arg to register forward hooks in quantize.py (#89391)
1db5ce095f : [vision hash update] update the pinned vision hash (#89287)
51e961dd7b : use std/libdevice erf in inductor (#89388)
1856fa5df7 : Temporary increase ASAN shard 5 to 4xlarge (#89387)
e1d58b1928 : Revert "Update sdp dispatch logic to enable fused backward (#89154)"
c09929659c : Also include MKL_THREAD_LIB in link libraries for caffe2::mkl (#89378)
7b0d577c22 : Set INTERFACE_LINK_DIRECTORIES on caffe2::mkl (#89359)
dbeacf1182 : Fix cat striding in PrimTorch (#89332)
caf3d5319f : Symintify numel(), infer_size, prims.elementwise_meta (#88956)
7c811efab7 : Add support for dynamic kwarg to torch._dynamo.optimize (#89290)
8ad39536d7 : Revert "Symintify numel(), infer_size, prims.elementwise_meta (#88956)"
8ac58bc2e3 : Add nullptr_t overload to c10::intrusive_ptr (#89196)
5582001bd5 : Reland 2 "Towards unifying symbolic and non symbolic fake tensor (#89038) (#89143)" (#89346)
6afe341276 : [PT-D][1/N] Sync TP Beta change to prod (#89242)
6b8c1b19b5 : RM expectedFailure UnspecReproTests.test_batch_norm_act_unspec (#89340)
6daf60be5a : [ONNX] Add setType from user into InferredType and Reliable in ConstantValueMap (#88622)
940959ebbf : [quant][fix] Add quant_min/quant_max for default dynamic quantization observer (#89267)
808bdbab89 : Fix try/except flow where DataDependentOutputException is getting wrapped in a RuntimeError (#89314)
419ef2cdcf : Added utility to count memory reads/written in Inductor (#89203)
7a2930b357 : add jvp test with non-contig inputs (#89131)
631baecbcd : Add --explain flag to bench (#89316)
e6996ea172 : Don't redefine __STDC_FORMAT_MACROS (#89310)
8c0515dbff : cast C++ py-bound SymNode to SymInt correctly (#89295)
2e72ec7982 : Update sdp dispatch logic to enable fused backward (#89154)
85a87e635c : [dynamo] mutable local caching to make dynamo faster at tracing mutation (#89170)
ea58955dda : Move bazel to c++17 (#89297)
cad5772c2c : [dashboard][huggingface] skip accuracy checks for really large models… (#89273)
ee907375fa : [small] Update error message (#89294)
c3938bb97a : [functorch] introduce an experimental map() op. (#88767)
94b5c807fd : Detach fake tensors into val, so they aren't affected by metadata mutation (#89140)
885f8a56d4 : [BE] Print backtraces from coredumps (#89309)
0e1fcc8aa8 : [FX] Add type annotation to `getitem` node before `split_module` (#88510)
ecfb4e064c : [Inductor CI] Use string format for cuda-arch-list input to prevent 8.0/9.0/10.0 etc from being interpreted as 8/9/10 (#89279)
7551136b81 : Add NVTX markers that dump additional information for nvprim_nvfuser Dynamo graphs (#88259)
35d5fc52f0 : [Profiler] Don't raise SOFT_ASSERT in debug builds. (#89240)
bfffc8d8ef : [DDP][Docs] Add warning that `no_sync()` should include forward (#89244)
304b5de1b0 : Re-enable test_hf_bert_fsdp (#89223)
ba605c3b04 : Don't trace when we track_tensor_tree (#89139)
e04dc35a6a : Symintify obeys_layout_contract (#89138)
837ca8f344 : Remove --retry-all-errors from environment with old curl (#89298)
ee2ce3fef6 : Set make max load when building libtorch (#89237)
7ec8a4d2a2 : Vectorized horizontal flip implementation (#88989)
81a4aeabdf : [Dynamo] Support Tensor.nelement & torch.cuda.is_available (#89164)
8a419cbffb : Added partial decomposition of conv_backward and grad_bias computation (#89128)
38ccd08f9b : [quant][fx][be] Refactor replace observer with q/dq op code (#89247)
c219b55b5f : Use standard __func__ macro in symbolic shape. (#89264)
12a97444c3 : [xplat] remove -weak_framework (#89233)
19e66fcec2 : [Quant] Allow setting fixed qparams for inner LSTM ops (#88456)
19fcb80551 : [inductor] Skip DALLE2_pytorch in torchbench (#89288)
1f7c0ff6e7 : [inductor] Temporarily disable functorch_dp_cifar10 test in TorchBench (#89281)
55e55d95ea : Update torch.distributed.DistBackendError type (#89235)
154e58c032 : Add most in-place references/decompositions (#88117)
6741443c7c : Simplify maybe_resize_out (#88116)
ce0e22a81a : Fix names of some reference functions (#88115)
2e358cc98f : Add platform markers for linux only extra_install_requires (#88826)
5654fed23e : Export c10/[macros|util] headers to be used by internal inductor builds (#89249)
4c6724985d : [PT-D][Checkpoint] Update import and update docstring for distributed checkpoint (#89256)
2dcacc6b99 : [LTC] Upstream short_metrics (#89186)
c5fafb4e16 : [ao] maintain BC for is_activation_post_process (#89260)
30c3e5afb0 : Disable tracing `zero_grad()` (#88731)
afdc48f843 : Gate CUDA-only inductor tests by HAS_CUDA (#89251)
6a964c16e5 : [flaky] relax tolerance conv1d_vs_scipy (#89193)
fc1c0cd3ef : Add support trace on MPS backend (#87910)
7beb151889 : [xnnpack][executorch] remove unordered_set from xnn_compiler (#89231)
ab75982d3a : Always retry curl downloads (#89157)
3bc78295c2 : Fix consistentcy of histc on CPU and CUDA (#87832)
f1fb586bc6 : Symintify repeat_interleave.self_int (#89111)
ba5e39e106 : Fix tol for test_nvfuser_correctness__softmax_backward_data_cuda (#89178)
6f609dd0e0 : docs: conv2d `padding` attribute- add `int` option (#85004)
6f4f69f54d : [Executorch] [Quantization] New pattern for dynamic dequant (#89236)
f4efc5e821 : [quant][be] Move some helper functions to the top level to reduce function length (#89246)
6ed14c7dcf : [vision hash update] update the pinned vision hash (#89102)
3c2676de3d : [LTC] Restore GetPythonFrames (#89122)
65bcd1f880 : Add previously deleted circleci readme back to repo (#85598)
92f9214a31 : add -Wnarrowing as error to cmake builds (#89207)
fd0efb01a7 : [MPS] Support for median with dim (#88807)
9fd00f194a : Fix the kineto daemon build condition (#89174)
b652fbc57a : Fix torch.nn.functional.gelu docstring formatting (#89061)
177621a0b2 : Use pytest-flakefinder to rerun tests multiple times (#89106)
57e05e822d : Issue 68576 prefetch factor (#88972)
2b3ac879a7 : feat: adding view_copy_batch_rule and opinfo for view_copy (#88150)
31b10e7d40 : Enable inductor CI for TorchBench (#87465)
3d8a853a87 : [DataPipe] Add container template for _Fork and _Demux (#89216)
e2229a89b0 : Fix typo in aten/src/README.md (#89175)
a695fcf201 : Add tests for replicate multiple modules (#89099)
767f6aa49f : [JIT][Security] Do not blindly eval input string (#89189)
fbbf368745 : Fix distributed test paths when running periodic multigpu job (#89225)
f057a45faf : reland "support running test_mobile_profiler with buck1/buck2 and OSS (#89001)" (#89091)
e856a4d66b : Add an env var to skip cudnn version compatibility check (#89184)
04169c5b6e : Rewrite assert statement with torch._assert under config (#88246)
af448e84eb : Fix bug in dynamo dashboard summary stats diff (#89226)
706f791a19 : Revert "Support masked_fill (#88736)"
8e4c9828f4 : Revert "Reland "Towards unifying symbolic and non symbolic fake tensor (#89038)" (#89143)"
cd81a700ec : Fix buffer overflow from AddressSanitizer checks due to inaccurate bfloat16 representation of large integer (#89210)
2b131b1d43 : Support masked_fill (#88736)
e686b8c3ba : Reland "Towards unifying symbolic and non symbolic fake tensor (#89038)" (#89143)
bdc9911575 : Fix typo in dist_util.py (#89167)
3beccbc299 : Add BFloat16 support and optimization for mish, hardtanh backward, and silu on CPU (#82460)
37c85cf5f2 : Add warning if tensor cores are not used (#88844)
b72f5b9ae3 : [Dynamo] Support typing.Mapping & Support function as argument (#88963)
126e44173d : [ONNX] Add onnx-script into ONNX docs (#89078)
74610a1ced : [dynamo][benchmarks] HF - Fix seq len and batch sizes (#89165)
a41f70603a : Round out rad2deg sparse support (#88442)
70fb673e51 : Use software approach to catch overflow ( `c10/utils/safe_numerics.h` ) on ARM devices (#89042)
54fca6a9da : Fix: prefer .is_none() over .is(py::none()) for pybind11 in caffe2 (#88199)
4e1d19c5a5 : Revert "Redefine the simdlen semantic: (#88482)"
81a8fdc40d : [MPS] Add binary operations dtype precedence test case (#87545)
44c9185f91 : Fix empty input issue of convolution for channels last memory format (#86521)
637e764ec5 : [xnnpack][executorch] Pass xnnexecutor pointer to compileModel() (#89090)
24b9890f03 : [torchrec] [composable] update ShardedEmbeddingBagCollection to be use registered EBCs with shardedTensors as registered modules (#758) (#88026)
1cd6ebe095 : Fix typos in messages under torch (#89049)
d1f48f05ce : [xnnpack][Bug Fix] Pass serialized model by reference (#89089)
366f1b2c2f : [xnnpack][lite-int] Freeze/Inline module to remove reference to self (#88863)
1adb7b9b84 : [nn][utils] Preserve requires_grad from original weight and bias in fuse conv/linear bn weights (#89100)
a5f04e9a91 : Fix typos in .md and .rst files (#88962)
573eaf1225 : Analyze and upload disabled tests rerun to S3 (#89083)
fce6d6b3dc : Redefine the simdlen semantic: (#88482)
c3acb9c885 : [ONNX] Add Internal Utils: onnx_proto_utils.py for onnx/onnx-script/onnx_proto (#88376)
f3af5ba48e : [WIP] Composable API: `replicate` and `DistributedState` (#87649)
f73d9a79fe : [torch][fx] Fix PassManager to not use a class variable mutable list (#89108)
ac0a6f381d : [dtensor] disable op db tests for now (#89162)
30d9fb9157 : [dynamo][reland] API Support for nn.Module (#89113)
f5e2cb5249 : Add comprehensive minifier tests (#88022)
088f2fa567 : Fix typos in messages under test (#89121)
716f70f19a : Added conv constraint that infers layouts (#89031)
251fdda77b : Add pytest-flakefinder as a test dependency (#89103)
0d87a4fec8 : Fix typo in Dispatcher.h (#89045)
80b6761863 : Update README.md (#85534)
3af5cf4de1 : doc(typo): memroy -> memory (#89126)
cfd552547f : Use the Python frame safely in _pythonCallstack (#88993)
8506b305df : handle scatter(Scalar) overload in inductor (#88894)
0c835e25bb : Fix nightly build binary errors (#89153)
98379a3949 : [ONNX] Add onnx-script test cases (#86907)
f920bfaf2a : Use torchrun for dynamo/distributed.py (#89149)
8ba62bdff5 : add test_c10d_spawn_ucc.py (#86508)
ec61951f07 : Fix inaccuracy in nt constructor documentation + broken rendering (#89152)
5848704ef8 : Removed unecessary check in `select_nested` (#89150)
ee1d375bf9 : [FSDP] Add fast path for `NO_SHARD` `clip_grad_norm_()` (#89137)
e70f446a16 : [Dynamo] Fix bug in NamedTupleVariable (#89110)
640af8d70a : More dynamo dashboard improvements (#89155)
305b9b1f0e : Fix XLASymNode.str() no str() attribute error (#89093)
4908a12542 : Reland "SymIntify convolution backend calculation (#89069)"" (#89142)
45c62a3377 : [ao] making _is_activation_post_process private (#87520)
aee96bbf5a : [PT-D][Checkpointing] Move distributed checkpointing from torch.distributed._shard.checkpoint to torch.distributed.checkpoint (#88698)
6b521bbf35 : Prevent module full_backward_hook from erroring in double backward (#88357)
0581331963 : [ONNX] Document ONNX diagnostics (#88371)
848e7240a1 : [Dynamo] Add a dummy profiler to avoid activating real profiler (#88930)
61801799a0 : [Quant][bc-breaking] Remove overwrite_output_observer (#88620)
a6ef2c7634 : Support test-config filter logic for rocm (#89046)
7b0adc290a : Run tests from test/inductor in inductor CI job (#88957)
58ebf92cf0 : Add bfloat16 support to torch.prod to align with torch.cumprod (#87205)
3320915303 : Fix decomp for embedding_backward and simplify the decomposition of embedding_dense and embedding_dense_backward (#87204)
e1ecf53d84 : Simplify linspace decomp and increase its tolerance (#87203)
d2d22d89d9 : test_unary_ufuncs few tests enabled on rocm which are passing (#89007)
7f55db4fb0 : add quantize_decomposed_dynamic to op lib (#88855)
cf6003f046 : Revert "Towards unifying symbolic and non symbolic fake tensor (#89038)"
fe276ea0f9 : [UCC] Add pre & post processing for CPU collectives (#89030)
90db86be10 : Revert "SymIntify convolution backend calculation (#89069)"
cf4b4b1b06 : Fix python types in pybind function signatures (#89115)
abe41aee77 : [ONNX] Support custom Op with onnx-script local function (#86906)
9fe36a0214 : [ONNX] Extra support for bernoulli export (#88655)
37d54239c7 : Towards unifying symbolic and non symbolic fake tensor (#89038)
09ed8b67e2 : SymIntify convolution backend calculation (#89069)
5e0c01330c : SymIntArrayRef type caster (#89074)
57af0c8245 : Bug fix: make sure `copy_impl` doesn't read out of bounds (#88544)
dc40d3f93f : Add meta impl for grid_sampler_2d_backward (#88745)
5270122773 : [Inductor] Build FX Linear + Permute Vertical Fusion in Inductor (#89118)
9d28775c1d : Revert "Rewrite assert statement with torch._assert under config (#88246)"
9d2f5a2784 : [dynamo] Support if cond on NNModuleVariable (#89095)
f20b3f2e57 : [dtensor] PART 8: move tensor parallel api and tests to core distributed (#88180)
0230e52b54 : [dtensor] PART 7: move remaining DTensor tests to core distributed (#88179)
550a019fb8 : [dtensor] PART 6: move DTensor op tests to core distributed (#88551)
527c5bdb45 : [dtensor] PART 5: move DTensor basic tests to core distributed (#88178)
1b88476320 : [dtensor] PART 4: move remaining DTensor ops to core distributed (#88550)
2dcf0978a2 : [dtensor] PART 3: move most DTensor ops to core distributed (#88177)
4b945967de : [dtensor] PART 2: move DTensor abstraction and APIs to core distributed (#88176)
370fc5cb42 : [dtensor] PART 1: move DeviceMesh and placement to core distributed (#88549)
59ba15f374 : Upload CSV test reports from inductor (#89112)
7e66d1d6cd : [Inductor] Support Shape Padding for aten.mm in Inductor (#89086)
e2f0648750 : Add an option to include actual license terms to the output (#85624)
8ebbd5a89a : Easier to understand event_dim computation (#81396)
ce2f8700ba : Symintify numel(), infer_size, prims.elementwise_meta (#88956)
b291c1213a : Create native function for determining which implementation of SDP to call (#89029)
397f100672 : [FSDP] Test `named_parameters()` in forward (`use_orig_params=True`) (#89066)
46ba0150cb : Increase slow grad check timeout (#89079)
9f0b2c73f3 : Revert "[Inductor] Build FX Linear + Permute Vertical Fusion in Inductor (#88859)"
d96dd8ff09 : Add int64_t, SymInt overloads for all binary operators in C++ (#89063)
431642111f : Move ConvParams methods directly on struct (#89062)
49f0be0762 : Hide ConvParams struct from ConvUtils.h (#89059)
19cacecf34 : Fix and Re-enable test_quantize_fx_lite_script_module.py (#88897)
3bc327993f : PyDispatcher integration with functorch (#88785)
2268a3215c : [functorch] add switch to enable autograd.Function (#88784)
0ce22574b1 : Revert "Enable correct supported activities for kineto on rocm (#88207)"
a13433940c : allow loading model from a path in torchbench (#89028)
60ffeb9866 : Don't iterate over graph when adding graph input (#89084)
ee05f47bdd : Rebase and re-land thread PG (#88795)
35093fc1ab : Enable correct supported activities for kineto on rocm (#88207)
d0130cd21e : Enable test_ops for inductor (#88994)
67af734ade : skip test that is broken in head (#88759)
175b7e1cde : print xpass (#89020)
8dc3353b0b : add `to(dtype)` support for all sparse compressed formats (#89055)
da2afcb1e0 : Add test for out-of-bounds Tensor access on GPU (#39211)
d47b94fa8e : [inductor] Added bucketize to decomp table (#88348)
9262d18e1b : [inductor] Introduce CSEVariable type and use it to track if Triton variables are scalar (#88347)
edd2dea859 : [torch] [analytics] add dynamo to analytics (#88915)
3e2ba60ac0 : [torch] [analytics] add pytorch event logger callsites to torch.save and torch.load (#89003)
d8466964b3 : Add range check to multi margin loss target (#89008)
18c1f2f82e : [torch] [analytics] add pytorch event logger callsites to transformers and encoder/decoders (#88896)
ff6d2a6d1b : Add mem efficient backward (#88856)
d60abe4b95 : [Inductor] Build FX Linear + Permute Vertical Fusion in Inductor (#88859)
f5df685090 : Enable channels_last_3d on SyncBatchNorm (#88401)
8023c9dc64 : [Profiler] Memory profiler part 3: Schema parsing and mutable arguments (#86854)
2439bc1e9b : [Profiler] Memory profiler part 2: Config validation (#86853)
279dcce702 : disable test that fails in fbcode (#88786)
1db0f735e8 : [Profiler] Account for caching when assigning IDs (#88917)
ee4412381e : Allow ROCm runners to have 2 or more gpus (#89011)
2819df9a19 : [ROCm] Enable python ref executor UTs for ROCm (#88981)
62ba15e10e : Rewrite assert statement with torch._assert under config (#88246)
b815f1fc50 : Symintify view_as_complex and view_as_real (#89052)
b9029fc449 : [ao] quant_type.py fixing public v private (#87519)
5faa2792fa : Symintify decomps for split and upsample_bilinear; Fix decomp for _softmax_backward_data and native_dropout_backward (#88761)
63e16216d8 : [c10d] Implement `__instancecheck__` for `c10d::ReduceOp` (#88275)
2452e3f99a : Update xnnpack graph schema to use xnode and xvalue (#89036)
8c46a5de3a : Add debug handle to xnnpack schema (#89033)
50c18217a3 : Revert "Add mem efficient backward (#88856)"
5314af5383 : Set correct size of `attr::output_layouts` when the graph has multiple outputs in JIT oneDNN fuser (#88496)
60e59c0755 : Fix get_default_qat_qconfig for PT 1.13 (#88876)
5ed90c40f8 : enable index_put test (#89019)
68fd8f3706 : [BE] [c10d][send] Improve error message on dist.send() with destination rank as itself (#89004)
21dd311077 : Add a mode to rerun all disabled tests (without running anything else) (#88646)
73d71ae3d6 : [WIP] Unwrap View in Reinterpret View (#89016)
dd6beca854 : Changing the use from ASSERT_EQ to ASSERT_FLOAT_EQ on nn_utils test. (#83693)
ce8a45c282 : [vision hash update] update the pinned vision hash (#89026)
55b88cde0a : [Inductor] Build Shape Padding in Inductor (#88709)
cbdb683dc8 : Add test that bias gradient is properly tested in same_two_models (#88995)
45d2daaf85 : Fix lookup file update in dashboard (#89024)
1f88b208ac : Fix cuda/cpu check on NoneType (Unit test) (#88970)
35e668b5ce : Add mem efficient backward (#88856)
f3462833bd : Use same retry logic as macos binary builds (#89014)
7a37bbed15 : Take input striding for conv fusion op based on eager output (#88864)
0544a32ba3 : [inductor] fix could not find as_strided with config.triton.mm=triton (#88946)
92c78f37af : improving torch.linalg.lstsq documentation formatting (#89013)
8df64abc6d : Fix some naughty uses of reshape/flatten (#88999)
c53a5ac6cc : Revert "support running test_mobile_profiler with buck1/buck2 and OSS (#89001)"
3c3bd55bea : [testing] fix a key in parse_namespace() (#88969)
911a1349dd : [Dynamo] Fix torch.is_tensor and torch.overrides.is_tensor_like (#88704)
3b33a2794e : support running test_mobile_profiler with buck1/buck2 and OSS (#89001)
074278f393 : [CI] Push `latest` and hash+CUDAver tags (#88971)
b2082833c6 : Revert "woof (#89010)"
4570bd6030 : woof (#89010)
f80992217d : Remove skip (#88979)
540b42a1a8 : [quant][executorch] Support quant fusion for cat in quant in executorch stack (#88960)
e0c194f10b : Fix typos in messages under torch (#88961)
3d79ced8cf : wrap_pybind_function: support member function pointers (#88932)
36d87465fb : Fix long comment error on dashboard (#89002)
cdb798faef : _get_nested_attr should return a value in the general case (#88822)
f1a5044de0 : [primTorch] _refs & opinfo alpha_dropout (#87989)
b0c86caa1d : Remove cpu path from lobpcg's basis helper (#88984)
06f1b52705 : don't use prims.unsqueeze in group_norm (#88927)
c8f3d1c134 : Run test_torchinductor_opinfo CPU tests if triton not installed (#88934)
ec4eadac5b : reland "Do not use unsafe restriding for subclasses (#87610)" (#88343)
9943d46aab : TorchDynamo: skip convolution fusion when convolution's padding is string (#88794)
15ef0660c5 : Fake Tensor For (ConvFusion) Propagation (#88414)
5e6cefd258 : Revert "Run test_torchinductor_opinfo CPU tests if triton not installed (#88934)"
8371bb8a3d : Run test_torchinductor_opinfo CPU tests if triton not installed (#88934)
072920c281 : TorchDynamo: Add convolution binary+unary fusion for cpu in inference mode (#88412)
cb4842c949 : [xla hash update] update the pinned xla hash (#88982)
03296844aa : Fix typos in messages under aten (#88964)
4ad7b17fab : TorchDynamo: Add convolution binary(inplace) fusion for cpu in inference mode (#88403)
06486cd008 : fix typo: AT_MKLDNN_EBABLED => AT_MKLDNN_ENABLED (#88952)
eea506aee1 : Revert "Symintify decomps for split and upsample_bilinear; Fix decomp for _softmax_backward_data and native_dropout_backward (#88761)"
48dc24ddce : Fix: [ATen] Add some missing moves (#88514)
9eabcc370f : Symintify decomps for split and upsample_bilinear; Fix decomp for _softmax_backward_data and native_dropout_backward (#88761)
76af71444a : [primTorch] Add ref for `complex` (#88562)
8f7e519f12 : Skip dynamo benchmark tests under TSAN (#88895)
52be0c42ab : meta function for max_pool2d_with_indices_backward (#88743)
98bcb4acb6 : Revert "[reland][dynamo] Better support for nn.Module (#88959)"
897d029a73 : [reland][dynamo] fixes dict changed during runtime error (#88877)
4284862db6 : [Dynamo][FSDP] Migrate to `ModuleWrapPolicy` (#88453)
bca75fd2d3 : Move xnnpack taget to fb code base (#88909)
2b12bfce88 : [dynamo] Skip frame when graph break in a loop (#88857)
e950afc395 : [reland][dynamo] Better support for nn.Module (#88959)
06ce1338bc : [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
4f2639e56a : [FSDP] Fix `FSDP.clip_grad_norm_()` for `NO_SHARD` (#88955)
46796fe5e9 : Fix XLA symbolic shapes binding (#88928)
2aca97cc9a : Vectorized CPU code implementing left shift operator. (#88607)
df1df9d10a : [16/N] Add _allgather_base custom op with CPU/CUDA implementation (#88889)
3765621356 : torchdynamo support self.modules() for nn_module (#88695)
27dc03e09b : Turn internal assert when saved tensor is detached inplace into torch check (#88860)
4270bb37da : [primTorch] Improve `narrow` and `narrow_copy`: refs, tests, docs (#87045)
6e5f736d86 : [15/N] Add allreduce_coalesced custom op with CPU/CUDA implementations (#88846)
ae2c668cc0 : Revert "[dynamo][api] Better support of torch.nn.Module (#88629)"
6b775c42dd : [quant][executorch] Support quant fusion for reshape in quant in executorch stack (#88858)
34641c4384 : Revert "Add comprehensive minifier tests (#88022)"
c83348597b : [dynamo][api] Better support of torch.nn.Module (#88629)
d01bf1d1f1 : [FSDP] Introduce `ModuleWrapPolicy` for simplicity (#88450)
b2b0a0d3ba : [vision hash update] update the pinned vision hash (#88920)
ae4074669e : [FSDP][state_dict][6/N] Remove most FSDP module dependency from _optim_utils (#88638)
4108367123 : Exclude poolformer_m36 from the inductor model test (#88908)
1e2327baf7 : fix fx tests (#88886)
66736ff425 : Fix bug in OptionalTensorList (#88887)
2b166532f7 : Remove incorrect assert about hermetic state. (#88885)
2cd05a2818 : Support torch.qint32 in Convert (#88871)
a3f3ec8fac : [FSDP+dynamo]: forward treats parameter-views as params (#88781)
5ff600aa6e : Add comprehensive minifier tests (#88022)
37c5b42fa6 : Fix matmul decomp to use reshape instead of contiguous().view() (#88832)
7c3adddd6c : [functorch] delete some unused files (#88763)
a7fa423f48 : copy_: Short-circuit when self and src view the same data (#88884)
6fe47b682f : [Dynamo] Fix str(Guard.obj_weakref) bug to re-ennable support overriding __getattr__ (#88564)
be8d88f8d0 : [DataLoader] Removing DataLoader2 related code (#88848)
f39cad50b7 : Make InductorCPU usable in internally (#88870)
fbc1878265 : [ONNX] Pretty print diagnostic logging (#88261)
ea0ec9d71c : [tourch] BatchBoxCox - fix numerical issue in vectorized code (#88875)
dfb4b73e45 : Fix unused variable 'options' warning in RNN.cpp (#88753)
7aa144ac54 : [FSDP][state_dict][5/N] Remove the FSDP module dependency from _state_dict_utils (#88637)
575e02df53 : Fix CUDNN_PATH handling on Windows (#88898)
f74946324e : [fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
ba4d5aae06 : Revert "rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)"
4e5d7afe84 : Revert "add DisableTorchFunction that matches DisableTorchDispatch (#88219)"
9d7d21f569 : [ONNX] Add stack info to diagnostics (#87258)
3d1c5c89ed : [FSDP][state_dict][4/N] Move the core logic of summon full parameters to _unshard_params_utils.py (#88636)
5f0783bd6d : Fix ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops (#88504)
8ff2e34ca6 : Take input striding for conv forward based on eager output (#88706)
adfbd831cf : Revert "[Autograd] Use in-place input accumulation fast path for dense Tensors. (#88339)"
89a326ff7e : Explicitly check filelike arg of `torch.save` (#88867)
a6832b08a3 : Regularize bernouilli_ with bernouilli decomp (#88349)
1e8f95ace1 : Symintify `broadcast_to` (#88776)
d615d12289 : Add meta impl for topk (#88694)
3c7f96665e : [FSDP][state_dict][3/N] Change how state_dict utils access attributes in _FSDPState (#88635)
b92acee8f8 : Add context manager to allow mutation on saved tensors (#79056)
91b71cdbe4 : [dynamo] Add torch.device to is_safe_constant (#88766)
324ac93a43 : [FSDP][state_dict][2/N] Move state_dict related enums/dataclasses/states to state_dict_utils.py, api.py and init_state_dict() (#88481)
ee91c328da : Fix cuda/cpu check on NoneType (#88854)
d15a6b0c97 : Error on ZeroTensor serialization (#88803)
b843f4db0a : [ONNX] Add test case for onnx::Max scalar type (#88751)
396c3b1d88 : Use `atomicAdd` for `bfloat16` in Ampere and above (#84981)
a6d72f44a4 : [ONNX] Add onnx::Max into standard Op for scalar type alignment (#88750)
0de8f047c1 : Revert "[dynamo] fixes dict changed during runtime error (#87526)"
310335de48 : Update lr_scheduler.pyi to match lr_scheduler.py (#88818)
86b7aa26f0 : Fix FakeTensorProp on Module with Parameters or Buffers (#88700)
c4fc5d372f : [FSDP][state_dict][1/N] Moving state_dict logic to pre_state_dict_hook (#87900)
9d09968bbe : Disable check for dropout in MultiheadAttention fast_path (#88831)
3082378701 : [vision hash update] update the pinned vision hash (#88853)
495e7b1c72 : Ref for aten.full; symint changes in prim (#88762)
3fbf748f21 : Assert we have triton before scheduling on triton (#88849)
fc9e36dd42 : Add meta support for scalar_tensor and argmax (#88590)
c961e45ee5 : handle zero dims in reductions (#88280)
534ae6ae47 : [primTorch] Implement group norm reference (#87054)
072834d56d : [ao] qconfig_mapping.py fixing public v private (#87518)
f9221bf53b : [pytorch] Enable memory map file support for Android, Apple, and CXX (#88545)
8441443132 : Revert "Add nondeterministic error for `scatter` (#88244)"
62ef15e320 : [MPS] Fix `test_embedding_dense_backward` (#88847)
b30222e0c4 : [Dynamo] Add complete support for Tensor.is_contiguous (#88407)
ae01615d75 : Fix cupti search path in CMake (#88657)
d9ad08ce8a : Symbolic shape: sym_floor , sym_sqrt, sym_int (#88760)
cc04cf50bf : [Inductor] Fix lowmem_dropout() missing 1 required positional argument: 'p' (#88716)
500fd65531 : [ONNX] Create common ExportTestCase base class (#88145)
20ae19aa1d : [ONNX] Improve diagnostic message formatting (#87830)
a6610faa93 : [ao] qconfig_mapping_utils.py fixing public v private (#87517)
c1553880de : Have kernel names include fused ops (#88624)
ad2eba802c : [ao] fuser_method_mappings.py fixing public v private (#87516)
37b468ac77 : [xnnpack][lite-int][on-device] rebuild serialized modules at runtime (#88780)
de38c87698 : Use run_test in MPS (#88829)
1ae772a663 : [inductor] Remove import check for fast_flush (#88812)
3a4e8736ad : [xnnpack][on-device] compiler --> executor object (#88779)
394b998de2 : sub setup.py install -> develop (#88507)
d5e1e2f0fc : [xnnpack][on-device] executor class (#88778)
29550e2c1d : Revert "[Inductor] Build FX Linear + Permute Vertical Fusion in Inductor (#88566)"
90cf14ddf6 : [DataPipe] Deprecating drop_empty_batches from Filter and other functional APIs (#88693)
98ecd06580 : Bring Unfold/Fold param doc order in line with code (#88819)
1d54ce9d5d : [14/N] Refactor _new_process_group_helper() to remove repeated code (#88351)
4bcf2c53e5 : Add warnings & regressions info text (#88837)
3b8245ab12 : [LTC] Make ComputePostOrder accept const T pointers (#88773)
48b58930cb : [Inductor] Build FX Linear + Permute Vertical Fusion in Inductor (#88566)
d157fca59c : Revert "Symintify `broadcast_to` (#88776)"
6bf2776ac1 : [FSDP][Perf] Do not call `pad` in no-padding case (#88769)
d3178465ee : [dynamo] `VariableTracker.call_method` requires a name (#88311)
1e4079a476 : [nnc] Disable opaque pointers mode in LLVM backend to allow getPointerElementType (#88798)
656d0de6c5 : Change TORCH_INTERNAL_ASSERT to TORCH_CHECK and add a nice error message (#88804)
79b049af5e : Switch to setup-nvidia action (#88757)
f98edfcc48 : Make TorchElastic timer importable on Windows (#88522)
4b898a7304 : Symintify `adaptive_avg_pool3d` (#88783)
3a09d9a129 : Symintify `broadcast_to` (#88776)
c0ecce15b5 : add DisableTorchFunction that matches DisableTorchDispatch (#88219)
7f28be10e5 : rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)
3e43ff2794 : torchdynamo: add convolution add(relu) inplace fusion kernel (#88048)
e6561291b8 : add hack to allow hybrid compressed sparse comparison in assertEqual (#88749)
7c353eb395 : [MPS] Fix softplus (#88555)
7ad87f63e2 : Support src_mask and src_key_padding_mask for Better Transformer (#88488)
dcefea2706 : [caffe2][tourch] Optimize BatchBoxCox (#87585)
e87c79ca0c : [vision hash update] update the pinned vision hash (#88742)
cf04b36ce8 : [dynamo] fixes dict changed during runtime error (#87526)
0b8889c724 : Do not flag models in dashboard due to NaN values (#88792)
6e3555edea : Add absolute latency to dashboard (#88790)
2381548071 : add stride constraints to fallbacks (#88534)
fb5c6ae61f : [cuDNN][cuDNN V8 API] Match V7 API behavior for `channels_last` stride coercion for cuDNN (#88699)
59115e6139 : disable test that times out in fbcode (#88758)
16bd363863 : Fix dynamo dashboard passrate denominator (#88777)
4f18739bf0 : Fix Docker image generation (#88741)
7006ac6ee5 : [Dynamo] Fix Tensor.T trace (#88642)
c7fc710459 : Revert "[3/n] Thread PG: add threaded PG implementation (#88627)"
6fe4ccc7cb : [ao] qconfig.py fix public v private (#87515)
3a3500fa08 : [13/N] Update gather with CPU/CUDA implementations (#86409)
1af9b38a90 : Symintify embedding_sparse_backward (#88746)
b7aa22d6db : [fx] Fix GraphModule.print_readable() (#88730)
6dd081846e : [3/n] Thread PG: add threaded PG implementation (#88627)
93d3bd626e : Revert "[primTorch] Improve `narrow` and `narrow_copy`: refs, tests, docs (#87045)"
8523c45717 : Delete stub file to enable mypy check (#4649) (#88701)
133e61af7a : OpOverload is_view (#88722)
55df18e3da : [12/N] Update scatter with CPU/CUDA implementations (#86408)
3a1bdfee67 : skip environment collection test in fbcode (#88744)
de53d4143a : Fix TorchInductor benchmarking in fbcode (#88689)
c4a3aa8fe7 : [vulkan] Add option for buffer representations in vTensor (#87622)
d81797e845 : Meta function for aten.sort and aten.scatter* (#88705)
100b55637b : Mark dynamo torchbench dlrm as unsupported (#88712)
eb9b156019 : [fix] MathBits: serialization (#88182)
525fe53aa4 : [BE] Delete push_nightly_docker_ghcr (#88748)
f11f0e4a03 : [inductor] Handle nested tuple/list output in fallback kernel (#88495)
3150c9dc6f : extract out the clean workspace test to its own file (#88682)
c19bae9f84 : Add SherlockNoMad to symbolic-shapes reviewer list (#88739)
44de7cdbc4 : Add voznesenskym to symbolic-shapes group, move wconstab to listener (#88593)
c86cc68d23 : Mark diag.out composite (#88670)
69b2352236 : Add min cut partitioner for AOT+nvFuser (#88204)
ff7c5b0df8 : Changing as_strided_scatter to deterministic inputs (#85583)
fca6ed02b9 : [Inductor] fix c++ compile error with masked float value init (#88298)
652af5ec15 : upsample_*.vec ops are now CompositeImplicit (#85638)
aa8279bcb8 : [primTorch] Improve `narrow` and `narrow_copy`: refs, tests, docs (#87045)
f6192b75c6 : [Quant] Support lowering of channel shuffle in FX (#83731)
ab9a19a95b : [BE] Move `setup-ssh` step ahead of clone PyTorch (#88715)
a7420d2ccb : Hopper (`sm90`) support (#87736)
19d7941e37 : Fix Python-bound function signature (torch._C.Graph.addInput) (#88528)
f0e6cea2ed : Meta registrations for inplace operators (#88678)
a880ddc164 : Meta implementation for unsqueeze_ (#88675)
1dab35ca1b : Meta implementation for bernoulli (#88676)
6be426ca1a : Update gloo submodule (#88530)
08b2a251e1 : [export] Preserve meta["val"] on placeholders in dynamo.export(). (#88651)
5f876bfdc5 : Reduce the number of shards inductor uses for model tests (#88610)
9f58e027a9 : Add implementation for irregular dimension selection for nested tensors. (#88585)
87238e6491 : [nn] add remove_duplicate flag to named_parameters (#759) (#88090)
cef13ebea0 : [Profiler] Memory profiler part 1: Gradient identification (#86802)
c0e6b4329f : [dynamo] only error out on nested fx trace if dynamo is optimizing (#88640)
a02ea655b5 : Slight fix in error message for check_for_seq_len_1_nested_tensor (#88690)
6e6f929b2c : [Profiler] Restructure inputs and capture TensorLists. (#87825)
e132c45fd0 : [Profiler] Handle ABA for TensorImpl* when assigning IDs (#87133)
078c25df13 : [MPS][BE] Code cleanup (#88529)
1d82eba98b : PatternMatcher supports matching list-typed args (#88656)
8e2627d42f : [inductor] Fix aten.fmod lowering (#88602)
f556d73574 : [torch] Implement aten::native_batch_norm.out for CPU (#88604)
3e30a9ea1c : Fix `CUDA_MAX_THREADS_PER_SM` for `sm_87` (#88644)
6bb7f4f29f : Minor error message improvements on meta functions (#88677)
d98a884b33 : Revert "[cuDNN] (re-open) Enable cuDNN Frontend v8 API by Default (#87669)"
5eecfcf5f3 : Run libtorch trunk build on linux.4xlarge (#88683)
eaf4fe3d2b : Most recently used cache management for TorchDynamo (#88076)
1b5373fc83 : Mark as_strided_ as supporting SymInt in C++ (#88674)
dba887766b : Revert "torchdynamo support modules() for nn_module (#88023)"
860e354d1c : Support diag_embed.out decomposition (#88671)
3f6a560184 : Correctly test that dtype/device match in generated .out kernels for composites (#88672)
245144a636 : Propagate layout and pin memory in randint to inner constructor (#88673)
96104c7b7e : torchdynamo support modules() for nn_module (#88023)
ee28b865ee : Deprecate TypedStorage, its derived classes, and all of their public methods (#85303)
53ca5ad347 : enable scalar reduction with dim=-1 (#88628)
89c5819626 : Dynamo DDP accuracy bench uses find_unused_parameters (#88645)
fcc2883476 : Clean up SymFloat binding to cover all functions (#88370)
6abaa5946d : Fix categorization of sym_int method (#88369)
bc66ddb5cb : Add torch.distributed.DistBackendError exception type, thrown from C10D_NCCL_CHECK (#88134)
1a7c4b0de7 : Create _make_alias to preserve the name of a function when creating an alias (#88114)
af09270e10 : nvprims bookend non compute (#88457)
8cb5c5543e : Revive static_runtime_benchmark build and test (#87660)
02c1a304fa : [ci] increase timeout time of ios test app build (#88611)
8f66ae413f : [Autograd] Use in-place input accumulation fast path for dense Tensors. (#88339)
ffb6e68962 : Add missing args to DDP constructor in distributed.pyi (#88209)
ced71e8e82 : [Pytorch] add an option to disable TORCH_WARN and TORCH_WARN_ONCE log (#87188)
ed97e0aa29 : [vision hash update] update the pinned vision hash (#88465)
9f11ce7f67 : Setting pickle_module isn't working (#88570)
825f4e602b : Add support for symbolic shapes to sparse tensor (#88573)
c29502dd2f : [LTC] Remove view (#88445)
f2000842a8 : Do not use double for single-prec upsample (#88277)
4ea2310f1e : Fix typos used in documents under torch directory (#88483)
d25be63c05 : [Reland] Use sudo when reset NVIDIA devices (#88605)
c77368d416 : Implement a constructor for nested_tensor that is similar to torch.tensor() (#88213)
72a7351993 : Pin linux ninja dep to 1.10.2 (#88548)
fdf2865108 : Use test/test-reports for inductor (#88533)
eb3f975c6e : Fix segfault in has_torch_function (#88559)
4796e23bbb : Fix pull docs build running with a schedule and increase cpp doc timeout to 4h (#88589)
d453b3c4d4 : Add a note on the stability of linalg functions. (#88313)
b00c43b310 : Revert "fallback for scatter_(scalar) (#88210)"
0e67b2f7dd : Dynamo Dashboard Improvements (#88516)
b14e06503a : (fix): Add some missing std::moves to C10 (#88512)
d8506ff42b : Generalize gesvdjBatched to run whith full_matrices==false (#88502)
9dadf8fcc2 : [DataPipes] Add group support to the sharding_filter (#88424)
23a3eb37cf : SymIntify _copy functionalization kernels (and _copy_out too) (#88572)
896fa8c5c9 : fallback for scatter_(scalar) (#88210)
0a69c50a46 : Publicly expose _LRScheduler to LRScheduler (#88503)
05b9e8ec00 : Upload test stats for inductor workflow (#88535)
a37524085d : [torchdynamo] support torch.autograd._profiler_enabled (#88378)
95d57b54e0 : Handle pin_memory in refs.randn (#88473)
bf49dada1e : [nvfuser] skip extremal tests on rocm (#88587)
7bf9db81c5 : Revert "Use sudo when reset NVIDIA devices (#88531)"
78a0ca29d9 : Revert "[fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)"
91a4039842 : [exir][fx] PassManager error handling (#88520)
bd1ffc6501 : [Dynamo] Fix bug: GradMode doesn't carry grad state correctly after graph break (#88537)
6663ae5537 : [2/n] Thread PG: add class _World to distributed_c10d.py (#781) (#88471)
fc8f2f66fe : Clarify rules for which commit is used in CI (#88425)
c407a7b203 : Upgrade Linux NVIDIA driver to the latest prod version (#88517)
505486ce93 : Use sudo when reset NVIDIA devices (#88531)
cec4bd99b0 : allow XLA folks update the pin (#88527)
a16ced03c9 : reland "fix as_strided_scatter_backward (#87646)" (#88342)
dd43903fa9 : [Static Runtime] Fix tensor_split sections overload (#88113)
7076a6481d : [xla hash update] update the pinned xla hash (#88070)
ad27d762a7 : Support sign for HF models like ElectraForQuestionAnswering (#88160)
a9d37ce8f5 : Support reduction vectorization (#87356)
6541e51ffd : Explicit vectorization support for TorchInductor (#87068)
a95419b47e : use faster cache flush in triton benchmarking (#88557)
eda247ee6c : [Dynamo] fix torchdynamo's TVM meta schedule backend (#88249)
791d9ee253 : [inductor] Add lowering for as_strided_scatter (#88379)
81042d3a53 : Revert "Reenable optimizer overlap tests (#88439)"
bbaa0637df : Add error inputs to `gaussian_nll_loss` `OpInfo` (#88486)
404f254e20 : Upstream apply_optim_in_backward from TorchRec (#87397) (#88539)
da452bcadb : Reenable optimizer overlap tests (#88439)
d1ee073041 : Handle case when candidate is empty (#88359)
46730aec35 : [Reland] Fix primTorch compute_elementwise_output_strides (#88525)
0e3031f7e7 : Functionalize and compute joint simultaneously. (#88063)
957a9b63c5 : fx.replace_pattern accepts pattern/replacement as GraphModule (#88479)
4bb5c2c205 : Add docstring to DDPOptimizer (#88521)
1f32c3c087 : Add single-process DDP accuracy support to dynamo benchmark suite (#88511)
3fd0729bb6 : DDPOptimizer replace debug=True/False with using torchdynamo logger (#88480)
52375a0fd2 : nvprims native batch norm patch (#88455)
b1116a5117 : [Dynamo] Improve BuiltinVariable log when incorrect arg count happens (#88409)
5220d07d2c : Fix minifier accuracy msg (#88515)
dde9affeaa : Populate self.export in InstructionTranslatorBase (#88508)
afdc2283ef : [QNNPACK] Add unaligned attributes where asan fails (#88276)
7560a7b27c : [Quant] Respect non_leaf_module_list for activation modules (#88498)
5af3feefab : [BE] Update native_functions.yaml README; we do not support Tensor! (#88513)
678d038001 : Support DDP ignored parameters in DDPOptimizer (#88460)
ff6770a9a1 : enable backward for log1p (sparse layouts) (#88155)
6938dd0b2c : Support sparse inputs to deg2rad (#88156)
1964d8c34f : Enable sparse_csr autograd testing for relu (#88154)
f03302ba49 : Add sparse layout support for torch.frac (#88153)
d632d94cc7 : Disable mem leak check (#88373)
093e220836 : Re-enable inductor models tests as periodical jobs (#88509)
3e6579b8f6 : Don't print fatal:... in generate_torch_version.py (#88335)
955cbe610b : [inductor] Handle the case where kwargs contains tensor (#88417)
e940a2f8e2 : Add nondeterministic error for `scatter` (#88244)
6575174dcb : [fx2ait] fixes for AITSplitter (#87805)
7b419e8513 : [NVFuser] Upstream push 1026 (#87779)
15e54293ef : [MPS] Fix embedding backward with scalar index (#82809)
5b767d404e : Modified roundup_power2_divisions to specify the number of divisions for each power of two interval (#87290)
b78b8727ff : [vulkan] enable prepacking for Batchnorm op (#88433)
53eac1d482 : Revert "Revert "Put Python Dispatcher cache in dict, clear it on new registrations. (#88329)"" (#88489)
79abea5683 : nvprim python runtime dtype correctness patch (#88452)
8c1c6759b2 : Revert "remove assert_allclose from torch.testing (#87974)"
bda688c186 : Fix typo in clones (#88501)
633f0d620d : [torch package] Treat builtins as default extern module (#88385)
ead36e5a90 : Add dep on Accelerate framework to torch podspecs (#88422)
dc00bb51b8 : [Vulkan][TCC] Add tests for conv2d prepack context (#88316)
a171b0636a : Add use_lazy_shape flag to GenLazyIr class (#88444)
b3206268ac : TorchDynamo: enable convolution and batchnorm folding for inference path (#87435)
fbd08fb358 : Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)
70b00b1383 : Add hf_bert + DDP multigpu test (#88435)
71f793d312 : TorchDynamo: Add linear binary fusion for cpu in BF16 inference mode (#87066)
7d95b1e344 : Run all fallback kernels with FakeTensor (#88248)
e4efea4f14 : TorchDynamo: Add linear unary fusion for cpu in BF16 inference mode (#87065)
657f2e12f0 : [MPS] Add native `cumsum` implementation (#88319)
52173188ef : TorchDynamo: Add convolution binary fusion for cpu in inference mode (#87064)
2ce2fc133d : Disable Current Modes when printing Tensor (#88344)
e804c72294 : [LTC] Update merge_rules.yaml (#88291)
a84d68cdfd : [FSDP][Docs] Reword `sharding_strategy` docs and other minor doc changes (#88431)
ff23e07b2e : [FSDP][Docs] Simplify CPU offload docs (#88430)
4de50b2521 : [FSDP] Allow to use TorchDispatch with FSDP (#88014)
31ebd3cc2f : Reset NVIDIA devices stuck in failed mode (#88459)
ab8f3333ff : [FSDP][Docs] Simplify `mixed_precision` ctor docs (#88429)
36582574f3 : [dynamo] Skip mutation detection for inference mode (#88406)
410ce96a23 : Revert "Put Python Dispatcher cache in dict, clear it on new registrations. (#88329)"
9946041a3e : [functorch] make hessian docs actually use hessian function (#88451)
ce961b3443 : Dont hold onto references of saved tensors in backward (#88247)
65de9a2b81 : Fix fuse_func method overwrite (#87791) (#88193)
433746300d : [pytorch] Expose EmbeddingPackedParamsBase::unpack to Python (#88362)
23a6e15321 : [ONNX] Remove the INT64_MAX magic numbers (#88341)
6d7eee04b8 : [FSDP] Default to `BACKWARD_PRE` (#88428)
c28022d96c : [profiler] Add an option initialize kineto profiler on start up (#87226) (#88020)
826b4a9c2d : [coreml] delegate multiple outputs (#88345)
9533fe9031 : [pytorch][vulkan] Add bias storage type to template (#88324)
893f8e3790 : [PyTorch][Vulkan] Add template based codegen for shader generation (#88323)
60925fcb7e : Dont clone inputs if using fake tensor (#88208)
192e806c26 : [Pytorch][vulkan] Generate shader with parameters (#88322)
fe3a226d74 : [minor] use set_default_dtype instead of try and finally (#88295)
f8b73340c8 : [dashboard] Replace aot_nvfuser with nvprims_nvfuser (#88437)
2bda2baad7 : [Dynamo][Easy] Fix config.suppress_errors error log (#88402)
4d62ee1b36 : Verbose exc printing fix (#88387)
0a274c4b6c : [ONNX] Default runtime type checking to raising errors (#86555)
d70bc222d8 : add parameters check for mkldnn_transpose (#85318)
c1dd13fb2f : [dynamo] Support compare op for userfunctionvariable (#88372)
2c46d5725e : Disallow module attribute mutation (#88354)
2b117c8436 : Revert "Fix primTorch compute_elementwise_output_strides (#88175)"
0f6304ef1e : disable the out variants in test_cumprod test for inductor (#88328)
529ba076c6 : add an exclude for test_constructor for inductor (#88143)
002dad35f4 : better error message for out= ops (#88367)
b4fcfe77b2 : reduce the number of autotuning iterations, don't autotune simple til… (#88386)
5e6ceebccb : Add support for neg to NestedTensor (#88131)
35be73df09 : [FSDP()][Easy] Make `fully_shard()` only `FULL_SHARD` (#88260)
fc743ec059 : [FSDP()] Have `fully_shard()` abide by `@contract`! (#88235)
63cd5d7e27 : Add a shortcut in Makefile for updating triton (#88318)
f884e817d4 : Make Python op registration work with torchdeploy/multipy (#87162)
2f296cfdbb : Add a reshape_copy operator. (#88314)
86c7cd287c : Put Python Dispatcher cache in dict, clear it on new registrations. (#88329)
97d3b200ca : Unconditionally enable python dispatcher in AOTAutograd (#88365)
a689502275 : [FSDP] Do not include empty state in `_flatten_optim_state_dict()` (#88353)
95a9721a15 : [FSDP()][Easy] Rename `_State` to `_FSDPState` (#88234)
0520131ed6 : [FSDP()] Rename to `fully_shard()` and move to `_composable/` (#88233)
54b6188cc6 : [fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
1c8a0656d6 : Fix primTorch compute_elementwise_output_strides (#88175)
0efd4e92b5 : Make GenLazyNativeFuncDefinition generator to be customizable in lazy codegen (#87823)
a8f40b39ce : Update all ONNX symbolics with new JitScalarType API (#87245)
b013825c7d : [vision hash update] update the pinned vision hash (#88382)
5fb9c113ae : Update pybind11 to v2.10.1 (#88332)
e59d307e2f : Improve perf by avoiding implicit string creation in c10_cuda_check_implementation (#88350)
a0fb234b45 : [codegen] using TORCH_LIBRARY_FRAGMENT for some namespaces (#88229)
7b8cc063ac : Not run inductor test in trunk (#88374)
d979caa87c : Added add/mul for nested dense [B, *, D], [B, 1, D] case (CUDA-only) (#88289)
4c20c0509d : Split out forward AD tests from test_ops_gradients and reenable slow gradcheck CI (#88216)
a8561c4571 : Revert "[inductor] Handle the case where kwargs contains tensor (#88215)"
7354368fd5 : [LTC] Remove non-native view ops (#88031)
72f3688029 : [Pytorch][Vulkan] Update spv generation script to embed shader parameters (#88321)
6c858e3727 : [FSDP][Easy] Remove unneeded `TrainingState` transition (#88232)
73de44fc56 : [FSDP] Rename `unflat_param_name` -> `fqn` for consistency (#88123)
f35d5145a1 : [FSDP] Simplify `_get_buffer_names()` (#88122)
572a3d2d6e : [FSDP] Remove unneeded `torch.no_grad()` context when offloading to CPU (#88121)
c87f0501ab : [FSDP][Docs] Add note mentioning rate limiter for backward prefetch (#88120)
32d22edc67 : [FSDP()][27/N] Add forward hook registration (#88040)
6fd416650a : Add _foreach_addc(div/mul)(_).Tensor (#88157)
91a51fe9f4 : [ONNX] Produce comprehensive assertion errors for quantized outputs (#87242)
ca2dc8b4e7 : [1/n] Thread PG: fix pyre error of class ProcessGroup (#88281)
d1ba4c3a6d : Update Reviewers for CPU-related Modules (#87591)
b325c3fc25 : [nvFuser] patches profiling on scalar arguments for std/var (#88165)
bf7c996dcb : Revert "torchdynamo support modules() for nn_module (#88023)"
7dfa75546c : Print only the driver version from the first GPU (#88364)
943b20e7ae : Use tensor cores for NT bmm (#86856)
1c0d47cb17 : [PyTorch] Make c10::irange(x) generate the same assembly as for loop (#86841)
ef4ce6d4c6 : Add [[noreturn]] attribute to operator() in DispatchKeyExtractor.h (#88333)
983c0e7f31 : [inductor] Handle the case where kwargs contains tensor (#88215)
98f09c9ab3 : [WIP] Add symnode magic method testing (#88119)
99c07735e4 : Revert "Add support for neg to NestedTensor (#88131)"
0fa23663cc : Revert "Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)"
4a84d69f50 : [functorch.dims] Fix corner cases with permute (#88226)
84a302e534 : Remove wrong internal assert in handle_view_on_rebase (#88243)
30dc6cee3a : [FSDP()][26/N] Move `_lazy_init()` into `_fsdp_root_pre_forward()` (#87941)
1e2c4a6e0e : Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)
b18d0f1dc9 : Add more debug information when installing NVIDIA driver (#88168)
923a5e9685 : [dynamo] Error when user nests FX with dynamo (#87797)
c503398828 : Ignore macos usage log upload artifact failure (#88288)
5b882a34c4 : Consolidate macos pip dependencies (#88071)
f132c171ac : [FSDP()][25/N] Add `_post_forward_reshard()` (#87940)
5b75b19f51 : Revert "Do not use unsafe restriding for subclasses (#87610)"
c00c34fb69 : Fix meta for aten.upsample_bilinear2d.vec (#88158)
71fb763e54 : Revert "fix as_strided_scatter_backward (#87646)"
bf2819a836 : [FSDP()][24/N] Refactor `_lazy_init()` (#87939)
bd5b4e6504 : [Easy] Unused var in functional_adam (#88292)
7382c88df2 : [BE][MPS] Do not use malloc/free in 2022 (#88307)
4e6f5f22fd : Run asan's shard 4 on `linux.4xlarge` (#88310)
3d90788a58 : [ONNX] Add 0d-tensor test case in runtime check (#87212)
2aed670710 : Fix ONNX operator_export_type on the new registry (#87735)
b2679dc61c : Remove Krovatkin from dynamic shapes auto request review (#88315)
dcbcf5b90e : [profiler] Expose experimental performance events to python (#87905)
47a542dc06 : Nested profiling support for Linux-perf Profiler (#87904)
ebdaeaaa8c : [edge profiler] Add e2e test for profiler event and chrometrace (#87877)
03346296db : [edge profiler] Add support for performance events counting (#87876)
bc1e9a07a3 : [profiler] Add Performance events support in Kineto profiler (#87874)
70782981f0 : aot_dispatch test fix: always use functionalization in symbolic tests (#87647)
f9d7985851 : fix as_strided_scatter_backward (#87646)
b5a925ff2e : propagate .meta info when replacing subgraphs in fx (#87255)
5669e10d37 : remove assert_allclose from torch.testing (#87974)
b9c617838a : remove make_non_contiguous from torch.testing (#87973)
8893c6cd07 : remove deprecated dtype getters from torch.testing (#87972)
a360be50b5 : remove deprecated device getter from torch.testing (#87971)
554cdc9a63 : remove deprecated rand and randn from torch.testing (#87970)
bc73affdad : prepare removal of deprecated functionality in torch.testing (#87969)
0fc7de3986 : [profiler] Add Linux Perf support (#87866)
d6b58d6924 : [FSDP()][23/N] Refactor handle attr initialization (#87938)
d172dcf316 : [FSDP()][21/N] Refactor and fix `_cast_buffers()` (#87935)
b0b1e78e2d : [FSDP] Rename `dtype` to `buffer_name_to_dtype` (#87934)
d14fc0bc36 : [FSDP] Remove `device` arg from `_cast_buffers()` (#87933)
19c7df89fb : [FSDP()][20/N][Easy] Move functions in file (#87932)
4635f56da1 : [FSDP()][18/N] Refactor `pre_forward_unshard()` (#87931)
0a752688bd : [FSDP()][17/N] Refactor `_fsdp_root_pre_forward()` (#87930)
39d9d2ed70 : Implement reference for lerp (#87424)
6b5d7fccc6 : Add a basic test for "nvprims_nvfuser" Dynamo backend (#88186)
9ebb8d5232 : Add ops.broadcast for nvFuser (#88080)
2ddefbdc3c : Fix typos used in documents under torch directory (#88300)
4a8382b58e : Update caching of tensor arguments for nvFuser's fusion creation (#87860)
ccf6b558a4 : [Dynamo] UserFunctionVariable supports type & ABCMeta as arguments (#88257)
e763b7abeb : [complex] conv_transpose3d : complex support (#87967)
7674af9ce7 : [vision hash update] update the pinned vision hash (#88162)
4ab5d79b28 : [inductor] Updated some triton.libdevice calls (#88242)
a51da28551 : Support multi-gpu CI for inductor-distributed (#87996)
95fc0bcaad : Disable torchdynamo in backwards compiler harder (#88132)
3c6bddc3f6 : [cuDNN] (re-open) Enable cuDNN Frontend v8 API by Default (#87669)
dfa9475755 : Check SM version before calling flash attention with BFloat16 (#86600)
bc9caafc78 : record_function: update to use custom_class API (#76420)
0131a66ab6 : Fix typos under torch directory (#88172)
72958b9665 : [Dynamo] Update Dynamo benchmarks running commands (#87844)
a56beb2a82 : [nvfuser] merge rule update (#88228)
fb1586fbcb : Make a copy of the submodule inputs (#87899)
73492645cf : Copy DDP code to be reused in composable API (#87836)
b2dfd20260 : Remove BSC conversion skip from TestSparseCompressed.test_consistency (#88152)
d044b4cc58 : Update torch.abs and torch.positive opinfos to reflect sparse support (#88151)
ffd54def8f : [GHF] Remove CC line from commit message (#88252)
ba643b4ddf : feature: adding batch support for narrow_copy operator (#88130)
c40033be16 : [Vulkan][TCC] Implement tests for cat_batch, cat_width and normalize_dim (#87633)
e6ea0a4a4b : Don't Require contiguous For Extern Kernels (#87650)
8ef9bda1bf : Fix nvFuser Fusion Definition printing of Squeeze and Permute (#88041)
68f9f256a3 : [reland][fx][subgraph_rewriter] Change match_filter to be a List in replace_pattern_with_filters (#87998)
2c7de4a144 : Add meta implementation for aten.max.dim (#88005)
97b3eeac90 : remove assert on tensor inputs to FusionGroup (#88018)
e1c123d29a : Add UBSAN to ASAN (#88055)
81f74eed75 : [11/N] Update all_to_all with CPU/CUDA implementations (#86407)
90fa25705c : Rename 'nvfuser' to 'ts_nvfuser' indicating TorchScript usage (#88188)
bed8102741 : [10/N] Update barrier with CPU/CUDA implementations (#86368)
1f34067e9d : [FSDP()][16/N] Refactor post-forward/pre-backward (#87929)
5a53f024e4 : [FSDP()][15/N] Refactor `_init_streams()` (#87928)
90c5f856b2 : [FSDP()][14/N] Refactor pre-forward/post-backward (#87927)
eb91e8a534 : torchdynamo support modules() for nn_module (#88023)
de1f641f11 : Fix meta function for aten.addmm (#88068)
fdc419786d : Add unit test for torch_geometric library (#85937)
5c3666cb81 : [codev] Make backport work with flatbuffer models (#88127)
bb7e6254e4 : Add ability to freeze storages inside functionalization (#88141)
61f955dd83 : Inline Alias into FunctionalStorageImpl (#88140)
73c9911fc0 : always realize output regardless of the number of reads (#88046)
c368c0faf0 : Fix meta for aten.fill, constant_pad_nd, _adaptive_avg_pool2d (#88069)
82a9de16d4 : Change dynamo/distributed tests to use cuda/nccl (#88133)
44f8efd5c1 : [BE]fix DDP when the number of output features is zero (#87793)
20d849b982 : [9/N] [Dispatchable Collectives] Update reduce_scatter with CPU / CUDA implementations (#86166)
1e5d33b6df : Reenable assert sanity testing with ADInplaceOrView reenable (#88102)
bdb14238ec : [Reland][ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)
62988e4fe6 : Update _distributed_c10d.pyi (#88088)
b1750d0440 : [FSDP()][13/N] Refactor unshard/reshard/grads (#87926)
8039317c07 : [FSDP()][12/N] Easy cleanup (#87925)
c1e28731b3 : [FSDP()][10/N][11/N] Introduce composable (ctor only) (#87924)
78170701a3 : [FSDP()][9/N] Refactor ctor (continued) (#87923)
23fe6c8ca1 : [Static Runtime] Fix ReplaceWithMaybeCopy test in OSS (#88099)
7c6fe21a38 : Fix monitoring script for macos (#88159)
323c646ca9 : Cleaned up the nvFuser Python Frontend Batch Norm printing (#88057)
a6acbad5c3 : [BE] Use default constructor in `LoggerVoidify` (#88054)
560786ac20 : call contiguous on BMM inputs for NT on CUDA (#88108)
0eea05b11e : Remove "prims_nvfuser" backend for TorchDynamo (#88083)
a8aaee77be : [torch::deploy] add gpu unit tests to CI (#88107)
6a75a0d1a1 : Add support for neg to NestedTensor (#88131)
708c050af9 : Add labeler with cpu, mkldnn, amp, NNC and quantization paths to start (#87690)
3aa7a52855 : [xnnpack][lite-int][4/n] introduce serialization to delegate (#87908)
8287c1d964 : [xnnpack][lite-int][3/n] flatbuffer serializer class (#87907)
7bf819b181 : [xnnpack]lite-int][2/n] flatbuffer xnn_value schema (#87906)
905d532d39 : [xnnpack][lite-int][1/n] flatbuffer buck rules (#87826)
aa1f9a1bd7 : [xnnpack][lite-int][graph-build] torchscript -> xnnpack graph (#87824)
d596b048e5 : Also skip large models for normal --accuracy runs (#88086)
afd00673b6 : Change Nested Tensor logging copy (#88104)
c0761a835b : Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
caaf37a111 : Fix `PyTorchStreamWriter` exception handling (#88128)
ea8a5b09a9 : [IOS] Update Cocoapods for 1.13 release (#88075)
bc03aa6013 : Store `autocast_gpu_dtype` in `custom_fwd` and `custom_bwd` for BFloat16 autocast (#88029)
f2b247f0d8 : Remove stale comment (#88135)
139afc50ec : Fix links to tutorial in torch masked docs (#88129)
9fed04ba33 : fix for auto labeler (#88100)
ba26bc0fc2 : Fix random "C1041: cannot open program database" errors when compiling on Windows (#88084)
73379acaf3 : Do not use unsafe restriding for subclasses (#87610)
6fe41e76a9 : Create separate files for NT Unary, Binary and Matmul ops (#88091)
1a9edc8136 : Changing from sample_inputs to reference_inputs in test_compare_cpu (#86462)
4c78c7c82a : Enable `src_mask` in fast path of `TransformerEncoderLayer` (#87377)
e9599724fa : Revert "[ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)"
e9cabef663 : enable xpu group norm channels last support (#87680)
7d2f1cd211 : Fix typos under docs directory (#88033)
c7ac333430 : Fix args for meta__fused_moving_avg_obs_fq_helper (#88058)
3eb379052d : unfold_backward: Remove stride >= size kernel in favour of copy_ (#88061)
ceddcf5434 : istft: Use unfold_backward instead of col2im (#88060)
ff94494644 : Revert "Revert "Unify meta tensor and fake tensor converter conversion (#87943)"" (#88045)
2e1199d171 : [quant][fx] Fix a typo in utils.py (#88024)
0a4ca9d083 : Fix meta for aten.angle and aten.index_copy (#88066)
a3f8495b84 : [primTorch fix] use _maybe_convert_to_dtype (#85163)
2702aaffc0 : remove old label check functionality (#88007)
83f31ffdfe : Move check labels to separate workflow (#87999)
5723fd503c : Fix meta function for aten.flip and aten.rot90 (#88065)
9308cefbdf : [FSDP()][8/N] Refactor limiter's `_FreeEventQueue` (#87922)
d89cf2fdc9 : [FSDP()][7/N] Refactor most of ctor (#87921)
9d9267c6f7 : [FSDP()][3/N] Refactor public APIs (#87917)
59fe272c1e : Fix: prefer .is_none() over .is(py::none()) for pybind11 (#88051)
75dbe37909 : make autocast cache global instead of thread-local (#86492)
34f523b221 : [FSDP] Enable `use_orig_params=True` test (#88034)
df1cc0ef47 : [Vulkan] Add Vulkan Rewrite to Transfer Inputs and Outputs to Vulkan and CPU Backends Respectively (#87432)
bc68625151 : [Vulkan] Add support for Optimization Blocklist to Vulkan Rewrite (#87431)
f717986f93 : .gitignore log files (#88085)
8ea19c802e : Make IValue::unsafeToTensorImpl a little less unsafe. (#88043)
e238752e20 : Simplify magic method definition code. (#88017)
2a47b10780 : Get the magic method try reverse protocol correct (#88030)
12dd877395 : Fix all references to torchdynamo from the merge (#87731)
496acb6602 : Add fake tensor files to ciflow/inductor (#88052)
6735bf21c7 : [test_nn] split convolution tests from test_nn (#87474)
46ce92713d : fix github bug issue 87552 (#88059)
e24ce484ed : Use scaled_dot_product_attention within attention.cpp (#87312)
d13f1e6ab4 : Add sequence number support for UCC (#85047)
9642a7c2f6 : [ONNX] Fix get wrong summary of the docstring in `torch.onnx._deprecation.deprecated` (#87194)
d67b2edec3 : [dynamo][dashboard] minor fixes for a clean Dashboard (#88056)
9109ecf914 : Even "nvcc not found" should be commented out (#87959)
1b575782a0 : [dynamo][benchmarks] use fresh inductor cache and raise batch size wherever possible (#88044)
e7b854fae9 : [BE] Do not package caffe2 in wheel (#87986)
65e7719599 : [vision hash update] update the pinned vision hash (#87948)
621158cd7f : [BE] Do not assign string literal to `char *` (#87949)
59001d05b4 : [Inductor] Enable Inductor unspec inputs test for different dtypes (#87809)
bc64999b83 : Revert "Unify meta tensor and fake tensor converter conversion (#87943)"
e4a8661ab8 : torchdynamo and xla integration (#87741)
6cd25eb6de : Use TORCH_CHECK instead of inappropriate CUDA_KERNEL_ASSERT (#87714)
384b84d6a6 : [BE] Upload GHA artifacts to S3 (#87827)
d9b6e41da9 : Add composable activation checkpointing (#87664)
19171a21ee : Make barrier blocking in UCC (#86961)
baa715e790 : Unify meta tensor and fake tensor converter conversion (#87943)
4210cebc16 : [ONNX] Add internal node kind parsing (#87638)
cb05a4da39 : [ONNX] Parametrized Avgpool2D test to have all test combinations (#87893)
f2ae459311 : [ONNX] Disable ONNX ceil_mode and count_include_pad to align torch ceil_mode results in corner case (#87892)
c810489dd9 : Cleanup macos common conda installation (#87816)
53fea90547 : Store usage log on GitHub when S3 is not available (#87947)
d3c01c722d : Fix pybind11 problems with c10::SymInt unregistered (#88011)
e667c00656 : [FSDP()][2/N] Refactor training state (#87916)
cbc9faebfe : [FSDP()][1/N] Start refactoring FSDP root pre-forward (#87915)
edd6cf9996 : Revert "[ONNX] Deprecate operators.py (#87798)"
e3e84830aa : [ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)
1dad051b05 : Move workspace related functions to separate file (#87651)
0cf572ff6c : [C10D][BE] Add exception handlers to c10d collectives function (#87643) (#87988)
20e16c013f : Allow caffe2 to build with fbcode/mode/mac (#87293)
9835413009 : Fake Tensor For (Conv) Propagation (#87641)
14d5f139d2 : Fix typos under benchmarks, test, and tools directories (#87975)
18f3db2963 : Fix functorch tests (#87914)
af0c339f00 : Disable slow-gradcheck tests (#88008)
785054d3a9 : [CI] Report build errors in Windows build step (#88001)
1eba3f220e : Fix bugs found by static analysis (#85705)
376acf7625 : Add 'shared_from_this' to 'torch::jit::Graph' (#87343)
ecf277abec : [quant][improvement] Check the fixedqparam op qconfig based on backend_config (#87425)
c3c817c972 : Revert "ci: Switch merge / revert flow to our own infra" (#88016)
a2ffc3be97 : [AC] Add trailing "." to `_CHECKPOINT_PREFIX` like FSDP (#87951)
4faf086e5f : Update build scripts for ninja and ROCm5.3 install (#87505)
349ad23ffb : ci: Switch merge / revert flow to our own infra (#88009)
9691ba2dbd : Remove excess exception logging for minifier, cleanup backend failure exception format (#87537)
1c37119a1f : [FSDP] New fix for composing with other module wrappers (#87950)
c2c269c10a : Convert MetaConverter's tensor memo into a weak value dictionary. (#87911)
e72962a34d : Force people to call from_meta_and_device directly (#87903)
ab8fbd26f8 : Advance nightly docker to 11.6 (#87858)
c5cb6ec066 : Allow 64bit indexing for channels-last upsample2d on CUDA (#87901)
fb64f7b804 : [Profiler][Trivial] Move ID assignment code to `data_flow.cpp` (#87670)
8d395ec6bc : [Profiler][Trivial] Add hashing struct for pairs and tuples. (#87668)
d13b6781d8 : Revert "[fx][subgraph_rewriter] Change match_filter to be a List in replace_pattern_with_filters (#87257)"
fc21b9db23 : Use Eager Code To Determine Conv Layout (#87305)
1bc0e923bb : add special case for power of 0.5 (#87912)
35c611d30f : Add mem efficient backend flag (#87946)
89fd451934 : Fix codeowner errors (#87954)
8a9aca7b8d : Reland 2 Many symintifications (#87604) (#87980)
ce3e0e9856 : Add state to distributed composable API (#87838)
b192e7e415 : Support non-contiguous NestedTensors for elementwise ops (#87888)
f150e70ca2 : add the function specialization for promote with ITensorListRef (#87756)
166b5d3e7c : Revert "[EZ] Fix simple bug in torchdynamo (#87821)"
78b406932f : Add me to reviewers of composable API changes (#87891)
1da5aeb97b : [dynamo] Error when user nests FX with dynamo (#87797)
07f7c4615b : [MKLDNN] Replace pooling algorithm `pooling_avg` with `pooling_avg_exclude_padding` for future oneDNN upgrades (#87851)
23b79e6f48 : Update CMakeLists.txt (#87030)
daff5d3556 : Fix typos under caffe2 directory (#87840)
e8a97a3721 : FakeTensorMode and Prims.add/sub/mul/div support scalar only inputs (#87759)
d47ffecbe4 : [dynamo] relax fake tensor restriction with `assume_constant_result` (#87895)
2e48b478e0 : [ROCm] Use -rpath-link to fix libtinfo conflict (#83552)
9c793b366f : Move incorrectly placed closing curly brace of `extern "C"` block (#87853)
13de4d2137 : Meta OpInfo Test for stride correctness (#87849)
8b4d95759c : Revert "Many symintifications (#87604)"
2cb7c3f865 : [dynamo][benchmarks] Prepone Cold start setup (#87913)
641d8e0e69 : Revert "Enable mypy check for distributed.py, and fix type errors (#87543)"
f967918411 : [AC] Return `None` from `apply_activation_checkpointing()` (#87871)
81c4049f4d : [Static Runtime] Move PrepackWeights to internal-only graph passes (#87799)
ce7fcab9bd : [EZ] Fix simple bug in torchdynamo (#87821)
fd27246c16 : Fix decomposition for std (#87181)
f21d0b310c : Add decomposition for diagonal_scatter (#87282)
9225f26176 : [FSDP] Fix wrapped module changing after ctor (#87837)
7a3afe61d2 : Check all CUDA API calls for errors in caffe2/ (#81816)
3ece9fb45d : Check all CUDA API calls for errors in torch/ (#81560)
4e3a0ff92e : Update how inductor cpu tests are skipped on fbcode (#87867)
6cc4ae3d2d : Revert "[Inductor] Enable Inductor unspec inputs test for different dtypes (#87809)"
cda0d5a57b : Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
6ad3543a1b : BE: Improve test_will_engine_execute_node unittest (#87806)
0f7df16c71 : [doc] Add out-kwarg documentation to torch.where (#87870)
46b16977d9 : Reimplement Kaiser window (#87330)
369755f8ce : [Inductor] Enable Inductor unspec inputs test for different dtypes (#87809)
1ff52225f1 : Unify SymIntNode and SymFloatNode into SymNode (#87817)
2205f56f46 : [LTC] Remove lazy::View (#87822)
83b381d34d : [dynamo] add inductor runs w/o cudagraphs (#87847)
d2d0be9a76 : fix typo in per sample grad test (#87790)
b8b1d7be24 : [dynamo] Add ao.nn to skipfiles inline allowlist (#87820)
a485528a7e : [dynamo] Error when user nests FX with dynamo (#87797)
f1b78224ca : Fix type promotion for 2 wrapped scalar args (#87845)
03d6af4db3 : add nesting to TORCH_SHOW_DISPATCH_TRACE (#87751)
23ff47ccc5 : functionalization: fix detach() (#87750)
e2bbc0a134 : [BE] Move remaining workflows off Xenial (#87834)
1e1b045128 : [ROCM] Enable Sparse Pickle Test (#82729)
aaba0bd306 : [JIT] Fix torch.jit.script for functions with many decorators (#87804)
1780e0ef7f : [complex] conv_transpose2d (#81805)
c36db82e12 : TorchDynamo: Add convolution unary fusion for cpu in inference mode (#87063)
b16b5fb802 : [Profiler] Hold weak reference to prevent TensorImpl address reuse during profiling. (#87244)
4b23905172 : [torch] Add torch cpp cpu target for torch/csrc/api/src files (#87327)
bf113e38fa : use nv_diag_suppress (#87712)
107f92a683 : [FSDP] ufmt FSDP test (#87812)
e3cf81e0a7 : [FSDP] ufmt /fsdp (#87811)
49ce3ed14c : [vision hash update] update the pinned vision hash (#87831)
21bef8e944 : fix sym_storage conversion and some cleanup (#87718)
58650835bb : [fx][subgraph_rewriter] Change match_filter to be a List in replace_pattern_with_filters (#87257)
195a13f48c : [quant][be] Remove unused function `quantize_node` (#87153)
30ea8f5c20 : Limit ROCM option to Linux only (#87833)
0e3b5ea026 : [quant][fx] Add _convert_to_reference_decomposed (#87094)
a12d3d6b49 : [profiler] Standard performance event names for the profiler (#87538)
2cc624cd43 : Enable mypy check for distributed.py, and fix type errors (#87543)
5dbd80a605 : [pytorch] Layer norm backward speed gain with warp shuffles (#87814)
449778a939 : Fix typos under .github directory (#87828)
2c66889f90 : Synchronize before change cuda stream (#82050) (#82056)
59b9d29260 : [primTorch] Check `error_regex` in `test_python_ref_errors` (#86987)
5ee5f5ac1b : [BE] Don't build CUDA-10.2 docker images (#87819)
3208c2f6bd : Add logging for nested tensor usage tracking (#87632)
536474e823 : [LTC] Remove tensor.storage_ (#87645)
5edbc92683 : print stderr for ghstack rebase (#87795)
91c95ff7c5 : Enable graph_split_inductor test as it runs now (#87762)
53c640a528 : [CI] Delete `nnpack` installation from conda (#87813)
1522946882 : Simplify installation instruction in contributing file (#87460)
adb76ef510 : Expose API for backward execution order (#87507)
926827b89c : Revert "Disable linux-bionic-py3_7-clang8-xla-test (#87737)"
71933d381b : [ao] Fixing tests for block pruning shapes (#87326)
1168f42790 : Update XLA hash (#87818)
bbcd4b2f2f : Clean up CPU test in test_torchinductor.py for fbcode (#87783)
88eff10722 : [ONNX] Deprecate operators.py (#87798)
b21fe312c0 : Fix meta for index_add and index_put (#87775)
8016fd9eb1 : Set check-latest to false when setup python and pip cache in CI (#87621)
5f4329134e : Revert "Set check-latest to false when setup python and pip cache in CI (#87621)"
38dd4cbdf1 : ROCm enable sparse_sampled_addmm (#86401)
123b103bf1 : Add dynamo_optimize_ddp arg to dist bench (#87768)
aa66c6e01e : Fix missing weight init and clean up helper (#87760)
58dc95b321 : Fix typos under aten directory (#87754)
4080b1db28 : Set check-latest to false when setup python and pip cache in CI (#87621)
2c1efe7472 : Enable some PyTorch core tests with inductor (#87490)
f7a04f310b : [ao][ns] Replacing List[QConfigMapping] in PNP (#86922)
9639cb83eb : Revert "[pytorch] Layer norm backward speed gain with warp shuffles (#87445)"
585d71513d : Add type annotations to distribution.py (#87577)
16e35bd179 : Adding expm1 to MPS (#87147)
493ff6ac5b : Install py for pytest-sugar (#87803)
e2e428b03c : Remove custom Ceil in favor of sympy.ceiling (#87294)
777e6a2c51 : Many symintifications (#87604)
ae4fbac819 : Enable nvprims.transpose fusions for nvFuser (#86967)
ac0c13f665 : Revert "[ROCm] Use -rpath-link to fix libtinfo conflict (#83552)"
701b3dd773 : optim utils all_gather_into_tensor (#87769)
642b63e1e7 : Add test that `import torch` doesn't modify global logging state (#87629)
422f946b8c : [FSDP][BE] Improve the assert message of sharded load_state_dict (#87486)
c2ef5c4f7e : [ROCm] Move ROCm CI build to python 3.8 version (#86677)
775fef51b7 : Implement copy_, fill_, and ones_like for Nested Tensors backends (#87728)
a10446c4d8 : [ROCm] Use -rpath-link to fix libtinfo conflict (#83552)
ed7a8ab436 : [Static Runtime] Make canEnableStaticRuntime examine sub-blocks (#87396)
72f446b9bc : Remove getitem special handling in the partitioner (#87073)
59aacc40ca : Couple fixes for argmax/argmin (#87758)
0294787bd6 : Format distributed.py (#87667)
a24635208b : [Inductor] update triton commit pin (#87732)
02797db24f : [vision hash update] update the pinned vision hash (#87744)
0d13ffbbae : [inductor] Fix finalization issues when using multiprocessing (#87725)
8a6a126182 : [FSDP][BE] Split state_dict related hooks to a separate file to reduce development conflicts (#87421)
82c8365c16 : [BE] Delete `TH_DISALLOW_COPY_AND_ASSIGN` (#87743)
354549e033 : [MPS] Use `bandPartWithTensor:numLowerTensor:...` (#87752)
de65f156ed : Add distributed composable API contract (#87580)
9c2555f018 : Upgrade CI binary build runner from 4x to 12xlarge (#87727)
85a79a7f50 : [ONNX] Expand `_cast_` symbolic functions (#87666)
63397ac3f9 : Disable ossf-scorecard (#87740)
c600ce39ed : [ONNX] Refactor UnsupportedOperatorError arguments (#85349)
57b36bf353 : Bring back TIMM model inductor CI test (#87730)
85ffbedfb2 : Strip GCC5 stuff from PyTorch (#85914)
21f7e7d040 : Disable linux-bionic-py3_7-clang8-xla-test (#87737)
7ab6f56ca7 : [quant][core] Add quantize/dequantize ops for decomposed quantized Tensor representation (#87093)
4a168e9941 : [static-runtime] run codegen (#87534)
dd82d936e1 : [cuDNN][cuDNN V8 API] Use suggest memory format for cuDNN V8 API (#87617)
882a4f4528 : Update xla.txt (#87739)
20c08f299f : [FSDP][BE] Skip asan (#87729)
bd4c4537dc : aten cpu and xnnpack to be compatible with arvr mode build (#87125)
a605a30732 : Fix CODE level usage in dynamo config.py (#87522)
e150a6212b : Added gm.print_readable to torchinductor_trace output (#87717)
b013eb5447 : [xnnpack][lite-int][graph-build] graph passes and op checking (#87128)
44d7ba7efb : Fix debug dir bugs and minifier output directories (#87682)
ff2569bc8c : Intercept aten._reshape_alias for nvFuser (#87072)
a3d495bd4e : Fix typos under functorch directory (#87663)
0b162f5b49 : Fix stride for prims.where (#87563)
bc19494814 : [Dynamo] Symbolic shape guards (#87570)
d0e12d1cc8 : [ao] Adding FAQ to docs (#87322)
ece3758afc : Fix _refs for aten.zeros/ones/empty/randn (#87569)
ebe5aad466 : [inductor] Revert channels-last support (#87588)
312628d299 : Fixed minor typos in torch.flip and torch.rot90 (#87724)
52ac8adc20 : [ONNX] Fix pad Circular Mode (#86984)
e532fb9a95 : Use setup_instance script to enable conda and load cuda libraries (#87296)
7a6808c5f6 : build: support DNNL_GRAPH_CPU_RUNTIME=TBB (#87512)
82698b8954 : Add prepend argument to nn.Module hooks (#87370)
82dff8ee09 : [ONNX] replace AT_ASSERT with TORCH_INTERNAL_ASSERT take 2 (#86405)
65b4a633bb : [ONNX] Support quantized::conv1d_relu (#85997)
15370d32b9 : Disable test_inductor_timm_shard (#87710)
874625e039 : Graph-break on FSDP in dynamo (#87420)
b6f28334bc : [pytorch] Layer norm backward speed gain with warp shuffles (#87445)
7b5978254f : Add named_buffers to torchdynamo nn_module (#87644)
8a2a4ed488 : consider numel args when identifying aligned args (#87394)
569eebb43c : Add get_guard_expr to symbolic_shapes which returns all guards in a single expression (#87665)
eb99c1efce : Prefer python meta function over c++ meta function (#87426)
65601f5ef3 : [ONNX] Add Support on 0d tensor Broadcast (#87211)
5308886ec3 : Revert "Intercept aten._reshape_alias for nvFuser (#87072)"
0cba7888c5 : Performance improvement to cumulative seq len (#87530)
87163fe8df : [inductor] Trivial smoke-test (#87598)
9efca7c085 : [ROCm] [FakeTensorTest] Enable test_fallback_memory_prop (#85760)
e818574e78 : Support `signbit` in MPS. (#87214)
163a829caa : Intercept aten._reshape_alias for nvFuser (#87072)
9bbdc7ab34 : [vision hash update] update the pinned vision hash (#87639)
e85230b819 : [JIT] Fix return types of inputs/outputs method in Graph (#86349)
0367c12bce : Fix torch.testing.assert_close not exported from module (#87619)
ec15942916 : remove unnecessary __syncthreads() in conv_depthwise2d_grad_weight_kernel (#84854)
874a94ce94 : Fix `tensor.stride()` type hint (#84177)
4ef5f5dec7 : Fix use after free in tensorpipe agent (#87627)
fd60b818b9 : [Python] refactor slices on sorted (#86995)
98f40af7e3 : [Inductor] Truncate function expr str if it's too long at RecordLoadStore (#87248)
0fab8df0b6 : Fix incorrect param names in get_testing_overrides (#87625)
d4aa811593 : Defer importing meta_table (#87630)
ea30002a60 : Add cached conda env files for macos (arm64, x86) (#87541)
63138fbec3 : [DataLoader2] Change serialization wrapper to iterator (#87459)
3f94adc105 : [Kineto][Profiler] Rename Profiler post processing Index Key (#87477)
a3c5a80a25 : Fix TensorShape.cpp compilation (#87654)
28593a8339 : [docs] `batch_isend_irecv` and `P2POp` of torch.distributed (#86438)
cf895bac15 : Fix typo in secrets name (#87655)
b085c80126 : Add /= to c10::SymInt (#87603)
5ce9993dce : Fix a PyObject leak (#87608)
3263bd24be : Improve argument printing (#87601)
72ec1b5fc1 : Fix typo under docs directory (#87583)
8ff3566aab : Make me codeowner of test_aotdispatch.py (#87624)
72064c456f : Fix bernoulli functionalization. (#87573)
be925df25d : ATen/native (6/6): Use per-operator headers (#75576)
630fcdadcf : ATen/native (5/6): Use per-operator headers (#75575)
482f6419ee : ATen/native (4/6): Use per-operator headers (#75574)
4abd3e299d : ATen/native (3/6): Use per-operator headers (#75573)
f1440e77e7 : [CI] Fix triton wheel build (#87461)
1655b47a38 : Add some common tools to docker base (#86993)
96aac51717 : [functorch] dont compute expected output multiple times (#86202)
bad64bdd93 : Upgrade actions/upload-artifact to v3 (#87553)
c4fecff97d : [inductor] Prevent aggressive fusion during inductor lowering (#87447)
e5ceab173a : [dynamo] fix `explain` (#87640)
71fe069d98 : ada lovelace (arch 8.9) support (#87436)
4105ef9a6b : small improvement to error message in fx interpreter (#87599)
8d37e51931 : [ONNX] Enable test_fill script test (#79555)
fbe256cb1e : cpp docs push fix (#87614)
2abe9c464e : Add codeowners for functorch (#86213)
00b8c7e63b : New feature for issue #85575. (#86514)
17509d1ec4 : [Vulkan][TCC] Implement tests for hardtanh, hardtanh_, relu and relu_ (#87506)
4f2d869095 : Fix distributed issue by including distributed files (#87615)
e46a8971e6 : [dynamo] Support class members in nn modules (#87531)
272747db36 : attempted fix for nvrtc with lovelace (#87611)
4b4aff774f : [FSDP] Fix `use_orig_params=True` + AC (#87413)
7a4d91cac4 : Add distributed dynamo benchmarking utils (#87419)
181b615b4e : Fix accuracy minifier (#87606)
512a3a48e3 : sync AveragedModel buffers when use_buffers=False (#84054)
1bcd63d5e1 : [BE][einsum] add small comment explaining an invariant (#87264)
a06e235eda : [FSDP] `summon_full_params()` in computation stream (#86836)
eafc910d16 : [Quant][docs] Add README for BackendConfig (#86523)
084e773663 : [FSDP][2/N] Remove `params_with_grad` (#87480)
edac0d22af : [FSDP][1/N] Rework `clip_grad_norm_()` and tests (#87479)
3528b1fc9a : [FSDP][Docs] Clarify warnings to mention collectives (#87478)
573c8b6b07 : [FSDP] Rename streams (#86833)
04ad0134ae : [FSDP] Use `reduce_scatter_tensor()` (#87240)
cdb63a77d5 : [xla hash update] update the pinned xla hash (#87590)
faf9c47abb : Simplify a few diagonal-related functions (#87180)
08c2314d98 : [PrimTorch] Add maker for *_copy variants of view functions (#87278)
5e4bcb049e : Improve readability of the extra message errors in assertEqual (#87202)
233305a852 : Improvements for DDP Optimizer (#87549)
4c8e1a9829 : Fix 64bit indexing in `vol2col` (#87527)
2e4c89eba9 : [torch] Unify batch_box_cox implementations into perfkernels folder (#86569)
0d2baed45e : [Profiler] Regularize `AccumulateGrad` name (#86909)
5ec03fc17a : [Profiler][Trivial] Add Module cls and self bindings and type_caster macro (#86755)
b0e10292fa : [Profiler] Tensor IDs for Module and Optimizer variables (#86754)
be2d647ea6 : [Profiler] Use parameter as key for optimizer state recording. (#86753)
fc3beef5ac : Fix stupid N^2 naming behavior in FX and removed assert that slows things a lot sometimes (#87533)
efdd43d519 : [vision hash update] update the pinned vision hash (#87528)
9bb4926de0 : Add xlogy and xlog1py references (#77712)
f3f1b44778 : Fix meta for meta_fill_ (#87493)
2f9fc160a4 : [CI] Run all MacOS builds on MacOS-12 (#87496)
c28cdb53ea : [BE] Delete BUILD_SPLIT_CUDA option (#87502)
f047dadab9 : Enable inductor CI for TIMM (#87462)
0ef0a78196 : Revert "Improvements for DDP Optimizer (#87525)"
cf693a02e0 : Improvements for DDP Optimizer (#87525)
8461460d55 : Unified debug directory for dynamo/inductor tools (#87438)
b18fadae88 : Re-enable dynamo ddp tests (#87524)
707218f125 : Reland #87025 and fix periodic tests (#87084)
5c4a2e679b : fix docs push (#87498)
838b699e10 : as_strided_scatter storage offset defaults to None not 0 (#87481)
c55b332517 : Delete unused static runtime experiment (#87473)
dfc65f43f9 : Delete unused ts experiment (#87472)
7baf4b1969 : Delete unused ltc experiments (#87471)
62d30f5a8a : Remove unused cold_start experiment (#87470)
ee231671c0 : Make torchbench setup a function (#87469)
169ec120ef : [Modes] refactor modes to only use a stack in cpp (#86458)
13cad7e120 : [BE] Remove pip and conda installation in Linux build workflow (#87256)
620dbc43d8 : Slowly introduce ops to be tested by test_numpy_ref on MPS backend (#87342)
7bd04fb09f : [1/N][C10D] Add a customized ScubaLogHandler implementation for internal FB use (#86699) (#87123)
100beb2099 : Only label checks against pull requests (#87488)
2a6079d588 : fix for dynamo xml reporting (#87378)
6e1764d806 : ci: Allow nvidia-smi to continue with non-0 exit (#87464)
9ad1659b17 : functionalization: make view_copy outputs always contiguous (#85747)
294bfb8e80 : Create workflow to make sure PRs have valid labels (#86829)
fbcd4fe2d2 : Skip auto request review on forked PR (#87482)
5b7f027d91 : Remove redundant zeroing in col2im/im2col (#87375)
4fc72b0f4e : Grammatical update of the tech docs. (#87357)
6efdcb0788 : Add dynamo smoke test (#87400)
db83a0578c : [inductor] force 'fork' method for processes, cleanup (#87411)
96691865b9 : [dynamo] Unify raise_on_* config to suppress_errors and raise by default (#87440)
1133682c46 : [FSDP][2/N] Fix grad zero vs. `None` edge case (#87308)
4ee13a5925 : [FSDP][1/N] Update `summon_full_params(with_grads)` `None` gradient (#87314)
4caddac534 : [quant][api] Add assert for backend in get_default_qconfig related apis (#86259) (#87331)
4cc5d6644f : [FSDP][6/N] Remove FPW! (#87114)
f8dd27420b : [FSDP][5/N] Update `FlatParamHandle` after FPW deprecation (#87113)
214d51756a : [FSDP][4/N] Rework FPW test to not use FPW (#87112)
277e37f945 : [FSDP][3/N] Register `flat_param` to wrapped module (#87086)
9f8ef8eaff : [FSDP][2/N] Remove `_fsdp_wrapped_module.flat_param` (#86122)
ce0c6e828e : Reland "add an API for external backends to register custom device names (#86992)" (#87453)
70c46d32e2 : Fix input dimension issue in RNN, LSTM, GRU error message (#87442)
0c1dec375f : Revert "Back out "Revert D40198461: [pytorch][PR] Backport currently dont work with some models if:" (#87124)"
d73d4aa7de : Audit for error prone isinstance int/float and add lint (#87345)
1285542f9b : OpInfo: Add test that sample_inputs_func returns a generator (#84567)
aa8248cc9a : Reenable `isinstance` with `torch.distributed.ReduceOp` (#87303)
d37dc6f698 : Make LazyGraphExecutor extensible (#87218)
d80a5f9a96 : Fix typo under torch directory (#87274)
ae62cf7c02 : [MPS] Revamp copy_to_mps_ implementation (#86956)
435e78e523 : [dynamo] [easy] RM spurious `)` (#87439)
ab901b4817 : Python binding for dispatcher getAllOpNames (#87422)
7caeac1718 : [inductor] Fix channels_last conv2d propagation when CuDNN is not found (#87266)
6b59d9b566 : Fix registration hooks (#87369)
ff43288d31 : [AOT][CUDAGraphs] torchdynamo -> torch._dynamo (#87243)
13ab819356 : [functorch] fix AOTAutograd tutorial (#87415)
b1cf377cce : Enable inductor CI for huggingface (#86792)
9ba632253a : [Inductor] Convert 0d CPU tensor to scalar during triton codegen (#87329)
961ebca225 : Add `weights_only` option to `torch.load` (#86812)
e3d73bbb07 : Remove jansel/voz from dynamo CODEOWNERS (#87430)
bd1e95ce30 : Improve the performance of validate_non_overlapping_shards_metadata (#85639)
a42fbfa0cb : Back out "Revert D40198461: [pytorch][PR] Backport currently dont work with some models if:" (#87124)
f38a88c4dd : Revert "[dynamo] use optimizers correctly in benchmarking (#87311)"
a91abedf0d : [Inductor] TorchInductor tracing fx_graph.py should import overrides (#87271)
1801b57cf6 : set ci in mps (#87325)
f7da9db9c1 : Unify decomp registries into global_decomposition_table (#86857)
7e83f65ad5 : Add General Project Policies (#87385)
17202b3637 : [maskedtensor] fix docs formatting (#87387)
bc8cf33244 : add deprecation warning to nn stateless functional_call (#87367)
9b88dcf248 : [ci] handle libomp upgrade on github (#87382)
0826863962 : [functorch][docs] Downgrade the warning about forward-mode AD coverage (#87383)
2fd008ed43 : [dynamo] Add support for invoking nn sequential (#87156)
68e946b0c3 : Fixed tune_layout to not do anything for non-2d convolutions (#87328)
b805e1abef : [functorch] Fix torch.cat batching rule (#86932)
c16b7b41f7 : [Profiler][Trivial] Small style and safety fixes (#86752)
1e4a274248 : [dynamo] avoid popen.communicate() (#87335)
75a5a46aa0 : Retry sccache downloads (#87306)
4b757f4633 : Assert if padding mask type is unexpected (#86353) (#87106)
38543d8da0 : [torch] Add fmsub to vectorization primitives (#86568)
a895af9250 : Revert "add an API for external backends to register custom device names (#86992)"
9199f9188c : Add inplace function testing to test_proxy_tensor (#87324)
254b681dc6 : Convert torch.Size() argument to sym size in test_proxy_tensor (#87304)
9bd6ea5d76 : Add meta inplace testing (#87291)
2e08ac8696 : Add randint OpInfo (#87231)
8b704eddcd : Update the pinned triton hash (#87300)
c4cf701889 : Revert "[complex] conv_transpose2d (#81805)"
05ad7bd743 : Revert "Advance nightly docker to 11.6 (#86941)"
1b8af28fe8 : [primTorch] Add refs for `softmax`, `softmin`, `log_softmax` (#84956)
703c19008d : [dynamo] use optimizers correctly in benchmarking (#87311)
8349bf1cd1 : Added special printing to FloorDiv so it's printed out with // instead of as a name (#87263)
b90db4a78f : [DataPipe] Fix type checking to accept both Iter and Map DataPipe (#87285)
d94e33f041 : Add support for .to() for NestedTensor backends (#87146)
472bdb3aa8 : [vision hash update] update the pinned vision hash (#87339)
c18eead2df : Update saved variable hooks to no longer trigger on wrapped numbers (#87316)
0cae309069 : [Quant] Add get_symmetric_qnnpack_qconfig_mapping (#87002)
e6bc8f415b : [BE] Move conda cmake installation to Docker (#87309)
0d2c2110f1 : [allocator] Introduce the abstract class CUDACachingAllocator (#87251)
888e15408e : Fix wrong lintrunner version (#87295)
bd757b364c : Ensure that symbolic variables incorporate fresh constraints before they're used (#87254)
bcde75427e : run torch::deploy test using pip install (#86507)
07bd053a7e : [rpc] Wrap exception creation with try/catch (#87224)
c97ffcff46 : [discussion] fix for aot autograd outputs that don't require grad (#86838)
c9b618447d : Fix line numbers bug (#87247)
c8889f4e10 : `cuda._is_in_bad_fork`->`_C._cuda_isInBadFork` (#87317)
56b150ac63 : [Dynamo] Support optimizing over any Tensor with requires_grad = True (#87141)
12b2f70a89 : Symintify pad ops (#87046)
c5de535bc0 : Advance nightly docker to 11.6 (#86941)
6eeeb88172 : OpInfo: Sample input cleanup (4/n) (#86324)
c141f28b64 : Fix compilation warning and spurious print (#87297)
4a533f1215 : Tweak several test serialization to store models state_dict (#87143)
cf2be34ff5 : [maskedtensor] add docs (#84887)
cd21613526 : Revert "[primTorch] Add refs for `softmax`, `softmin`, `log_softmax` (#84956)"
c08c799750 : [FSDP] Add set_state_dict_type API to setup state_dict_type without using context manager (#86243)
f3cc588d09 : Revert "Dynamo FX graph stack traceback fix (#87136)"
c09ca93e47 : [primTorch] Add refs for `softmax`, `softmin`, `log_softmax` (#84956)
00c91f4446 : [allocator] disable tests that don't work for cudaMallocAsyncAllocator (#87250)
15ca68526c : [functorch] Get rid of defunct functorch/setup.py (#87235)
ac80da2293 : [functorch] add test for torch.manual_seed inside grad transform (#87233)
f56ce8dbad : [allocator] Move getFreeMutex (#87237)
89e6078bc3 : Dynamo FX graph stack traceback fix (#87136)
40d0fa5314 : Reenable aot tests on windows for cuda 11.7 and up (#87193)
86a581928a : Pin ios conda dependencies (#87229)
a79e034d89 : [MPS] Do not dispatch empty job in `bitwise_not` (#87286)
6775c3e19d : fix 0d cpu tensor handling when it's the first arg (#87273)
fb6826bfd8 : add an API for external backends to register custom device names (#86992)
cc64863d71 : Clean Inductor compilation cache during dynamo dashboard run (#87246)
b3071e2eb6 : functionalization: skip meta reference compute for aot autograd (#87108)
4801397b6e : ban .sizes() and .strides() calls in derivatives.yaml (#86611)
182ee87996 : symintify nll loss fns (#86915) (#87095)
c6187ea326 : add support for pin memory on xpu device (#86545)
528dd05108 : [complex] conv_transpose2d (#81805)
232fbd90ff : [TorchDynamo]: fused bias for cpu convolution path (#87050)
5e23074f0d : Fixed FakeTensor not calling CompositeImplicitAutograd decomps sometimes (#87252)
b5bdc34541 : [inductor] Sympy compability fix (#87249)
6faa6c68e8 : fsdp lazy_init typo (#87184)
2418ddb1ec : Unified symbolic shape variables between Inductor and AOTDispatcher (#87161)
48df4b7a1d : [vision hash update] update the pinned vision hash (#87100)
dfe3fc028c : [CI] Add triton wheels build workflow (#87234)
c413a32135 : Release note script: match topics with spaces or underscores (#87011)
c471c29fdc : Update sdp guards for performance (#87241)
6d0d7afe8d : [GHA][BE] Delete unused macros from `common.yml.j2` (#87253)
31e731e5ae : [dynamo] fix logging (#87239)
7ff1ca4e33 : Add type annotation to get_worker_info (#87017)
4dc579838b : Allow fx.Graph.owning_module to be used as attribute. (#86822)
3eb7429385 : [Profiler][trivial] Add profiler options to trace metadata (#87102)
f6c6048b10 : Use CUTLASS GEMM for NT bmm (#85894)
80790ecee4 : [einsum] Call view instead of sum to remediate MPS regression (#87135)
c4a03e4da1 : [einsum] keep the promise that we contract left to right (#87199)
d06d569e90 : Update the sdp benchmark to work with nested tensors (#87215)
e8c4adf3c3 : Add torch.sparse overview section (#85265)
31edccf6c7 : Revert "Temporarily disable ios jobs (#87186)"
223ad9bc9e : [ci] remove circleci mac jobs (#87225)
9a786202b7 : [ci] fix log printing (#87223)
afa5086078 : Revert "Install blas from conda-forge (#87150)"
e7cefff058 : [Kineto][Profiler] Guard event metadata python thread via verbose flag (#87096)
c54bcea793 : Improve complex_memory_overlap check for Inductor CUDA graph (#87177)
ef1844a151 : [CI] Move sm86 tests from periodic to trunk (#87228)
1dbc8ad3b7 : Add `Warning` class and refactor C++ warnings to use it (#84101)
db65909255 : [Docs] Update mm family ops and F.linear to note limited sparse support. (#86220)
a73ca6f58c : Revert "Improve readability of the extra message errors in assertEqual (#87202)"
e4285f09b9 : [inductor] new way to compile f64 libdevice calls (#87189)
c56be31d2e : Upgrade oneDNN to v2.7 (#87061)
2485498294 : [FSDP] Use `all_gather_into_tensor()` (#87077)
56c28ee32a : Improve readability of the extra message errors in assertEqual (#87202)
48f0231223 : Fix Scalar(bool) handling in toIValue (#87179)
4540330f97 : Revert "Use conda-forge in mac mps test (#87155)"
adc7ee09dc : Added upsample_nearest3d/1d lowering to inductor (#87158)
d7801a6042 : Add voznesenskym to CODEOWNERS (#87227)
88b76ae9ea : Store type(module) in the module stack (#87149)
d01eea6027 : Do not run triton tests on sm86 (#87198)
2b03a941f7 : [dynamo] graph capture for calls to arbitrary self. methods on nn module (#87040)
09a967d6c9 : Make nested TreeSpec printing nicer (#46538) (#86546)
440f734169 : [inductor] Minifier fixes (#87062)
c30cfb07ab : [dynamo][dashboard] Run 2 iterations for the correctness runs (#87104)
d29dc2b72a : Temporarily disable ios jobs (#87186)
ecd25df313 : Add prototype warning to MaskedTensor constructor (#87107)
240bba7ac8 : add sym_int (#86916)
157310c85d : [inductor][triton] if device is a torch.device, then make cuda_properties index it correctly (#87174)
dbccccb7a2 : [BE] Get rid of deprecation warnings in workflows (take 3) (#87152)
9ac2a06acf : istft: require complex input (#86628)
b886cd15f5 : [primTorch] Add a ref for NumPy-style `T` (#86850)
f2ec9fbd03 : `torch.ormqr`: backward support (#86800)
841995d53b : [primTorch] Add refs for data conversion ops (#86561)
731b4bf0f1 : Revert "Check all CUDA API calls in aten/src/ATen/test for errors (#74919) (#83556)"
8b0cc9c752 : [inductor] Fix copysign issue in old msvc build (#87117)
11915b3196 : Revert "[BE] Get rid of deprecation warnings in workflows (#87152)"
d36c284d14 : [triton] allow cuda properties to be queried from workers (#87101)
9da032ecee : [BE] Get rid of deprecation warnings in workflows (#87152)
66658e1da7 : Revert "[BE] Get rid of deprecation warnings in workflows (#87152)"
8ca7820e45 : [Inductor] Lift the maximum depth of the Python interpreter stack to adapt large/deep models (#87130)
acaf484f0a : [BE] Get rid of deprecation warnings in workflows (#87152)
5fb687182d : Enable sdp_forward for NestedTensors (#86720)
74138a8daa : Use conda-forge in mac mps test (#87155)
9d1a8edc0e : [vulkan] Use 2D texture types for convolution weights and biases (#86972)
5b588036aa : [vulkan] Enable 2D texture types (#86971)
a7ed398cf6 : Check all CUDA API calls in aten/src/ATen/test for errors (#74919) (#83556)
f02f0e3ad1 : Install blas from conda-forge (#87150)
9db7270ee7 : Small update to Module note (#87142)
fb614b1871 : Enable UBSAN mode for test_jit (#85735)
18cc00d399 : [ci] put more logs in a folded group (#86138)
e3b84f6c9d : remove dynamo hash updates (#87092)
4fd98dfe69 : Don't only apply DDP optimizer on forward frames (#87097)
09d720919e : Add venv to gitignore (#86702)
0cb273b5d9 : [DataPipe] Fixing interface generation in setup.py (#87081)
f5ee2d8840 : [ci] fix bot comment (#87127)
f552eee427 : [Docs] Remove outdated comment for sparse all-reduce (#87018)
d023e83933 : handle libomp update on circleci (#86979)
5acf6e0e80 : Use 12xlarge for nightly cpp doc generation job (#86859)
4814270708 : [dynamo] Introduce `get_real_value` API to TensorVariable (#87091)
e85dbcc9b0 : [docs] Fix ScalarTensor __repr__ in Extending PyTorch example (#86330)
b8007742c2 : [Dynamo] More robust pyop support, module properties as args (#87020)
1167949b2d : [ONNX] Ignore print(Tensor) during tracing (#86223)
31931515bc : Workarounds for cudnn_batch_norm with TorchRefsNvfuserCapabilityMode (#86796)
33343def0b : add XLA backend into tensor type strings (#86881)
317eeb81c3 : Revert "OpInfo: Sample input cleanup (4/n) (#86324)"
8f85831fdf : Give more clear error message when gscope is non-empty (#87005)
c01c7a5e2c : [DataPipe] Fix missing functional name for FileLister (#86497)
c27a5171b8 : Update action lint with missing new runners from scale-config (#87009)
1704256b10 : Enables `where` to have cpu scalar args (#87022)
f3969bd8b5 : [functorch] Fix cross to match unbatched behavior (#86926)
e271e823c7 : Avoid calling logging.basicConfig (#86959)
6351220573 : Add meta support for _adaptive_avg_pool2d_backward (#86359) (#87074)
66715767ff : Revert "[Dynamo] More robust pyop support, module properties as args (#87020)"
8617f5f481 : fix cudagraphify for inplace parameter change (#87060)
2c6167c4bb : Revert "[inductor] Use decomps for unfold (#87025)"
2b558138cf : [inductor] Set correct strides in fallback example run (#87049)
4e5357faf5 : ATen/native (2/6): Use per-operator headers (#75572)
b40f4434ac : conv backward impl (#87047)
1463013c85 : autograd clone_obey_contract() symint support (#87044)
86c2e44cb6 : meta funcs for avg_pool2d and avg_pool2d_backward (#87043)
c21dcffc00 : Very limited pow support (#87042)
37e9e89afb : [xla hash update] update the pinned xla hash (#87067)
91b3cd0b5a : [primTorch] Add a ref for `narrow_copy` (#86748)
847ded6db3 : [primTorch] Implement NLL loss reference (#81128)
78e2289005 : [TorchInductor] enable inplace buffers by default (#87037)
1b43883fd6 : Make `AdamW`, `NAdam` & `RAdam` differentiable (#86183)
364a9973ca : [vision hash update] update the pinned vision hash (#87021)
3a4c0900c7 : Reland 3 of Merge more symbolic meta kernels and symint changes from branch (#86795)
0379af681b : [inductor] Disable parallel compile (#87048)
3007efda08 : stft: Require return_complex to be passed explicitly for real input (#86724)
2b7236a0e1 : [torchdynamo] Use ProcessPoolExecutor for triton compiles (#87032)
945d333ae4 : Migrate dynamo CI test shards to torch._dynamo (#87039)
30f6f6903c : [inductor] Move size asserts to C++, fix bug (#87028)
d45e99acf5 : [dynamo] Put printing graph breaks behind a config option (#87026)
2a6d37d23d : OpInfo: Sample input cleanup (4/n) (#86324)
5099883f05 : [inductor] Use decomps for unfold (#87025)
8a8cd092c8 : Add labeler with dynamo/inductor paths to start (#87024)
a0c2a7f2ed : Add triton to CI (#86988)
3c320a5613 : [Dynamo] More robust pyop support, module properties as args (#87020)
5d6e831563 : OpInfo: Sample input cleanup (3/n) (#86380)
054a2fd6c2 : Sync changes from `pytorch/torchdynamo` (#87013)
2c1bc216b8 : Fixed partitioner issue with getitem and made metadata a storage more consistent (#87012)
91c7015426 : [einsum] Fix opt_einsum defaults to be more reasonable (#86985)
7980ed95bd : Support unpacking python dictionary in torch.jit.trace() (#81623)
bdefa260b2 : [RFC] Separate CPU offload activation to its own wrapper (#85459)
100113b877 : [quant][docs] Formatting fixes for fx graph mode quantization README (#86914)
f6f1aefb8f : [vision hash update] update the pinned vision hash (#86758)
46aaae98c5 : torchdynamo: add linear pointwise(binary) fusion kernel (#86583)
5210fab64d : torchdynamo: add convolution pointwise(binary) fusion kernel (#86582)
9a7a49b254 : torchdynamo: add convolution pointwise(unary) fusion kernel (#86581)
d5a7e6db38 : ATen/native (1/6): Use per-operator headers (#75571)
4584d06e76 : [data] add autocompletion to datapipes (#86960)
3924aa75b1 : [BE] Extend linter to detect DOS newlines (#86973)
b8aa1767cd : [quant][be] Remove unused helper functions in convert.py (#86913)
761ca20dd8 : [quant][be] Rename qconfig_map to node_name_to_qconfig (#86861)
8f71e8de7e : Sync changes from pytorch/torchdynamo, enable tests (#86950)
78ef40973c : Set -Werror=braced-scalar-init (#86911)
155b885806 : [xnnpack][lite-int] preprocess (#86980)
7c73b45621 : [onnx] Add support for autograd function inlining in ONNX_ATEN_FALLBACK mode (#85736)
d29c8c0ffa : enable optim tests on dynamo to test flaky bot (#86976)
1a7409c771 : [CoreML][ios_crash] Use special throw macro when encountering CoreML API errors (#86938)
34c86adec4 : symintify all of derivatives.yaml (#86610)
d7bbb61f6b : min/max support for SymInt/Floats, finish as_strided/scatter/squeeze() backward symint support (#86609)
1bb609ad47 : Added new test test_compare_cpu that checks if cpu and gpu results are consistent (#85011)
e027740e77 : Chore: Add 'mps' to the docs of tensor_attributes (#86585)
fc3afc8407 : Remove empty_like+fill from AOT Autograd graphs for nvFuser (#86908)
56a744bf47 : [ONNX] Reland: Update training state logic to support ScriptedModule (#86745)
527ebedbff : Sparse support for ReLU (#86749)
ef045695e0 : Fix decomp for huber_loss_backward (#86955)
7da018b2f8 : [functorch] fix fbcode tests (#86936)
f17b3e9b7a : Vectorize tensor lerp kernel (#84845)
13cff2ee8e : [MPS] Copy from CPU always add storageOffset (#86958)
1ece1ab6c2 : [ci] print rerun stacktraces for pytest (#86831)
d393a463ff : Fix functorch test selection logic (#86944)
bbd7b38d55 : Revert "symintify nll loss fns (#86915)"
0ece7c86d8 : symintify nll loss fns (#86915)
a86278b08c : [FSDP] Consolidate FSDP state_dict offload_to_cpu settings (#86211)
c9a8d309bd : add super setup to test to enable disabling in test_dims.py (#86953)
8eb579e362 : Revert "[Profiler] Move legacy profiler out of `torch/csrc/autograd` (#85512)"
4460e40db4 : [primTorch] Add a ref for `addcmul` (#86731)
746500d58d : Revert "[cuDNN] Enable cuDNN Frontend v8 API by Default (#84948)"
2cfc4cb367 : Add optional recomputable_ops argument for the min cut partitioner (#86686)
fd80684784 : Add nvFuser support for torch.Tensor.view (#84634)
b48deedb77 : Set up new module torch.signal.windows (#85599)
056cfb0464 : Revert "[ONNX] Update training state logic to support ScriptedModule (#86745)"
157a3d2a7c : [Profiler] Move legacy profiler out of `torch/csrc/autograd` (#85512)
35fb007749 : [Profiler][Minor] Separate standalone profilers from the main PyTorch profiler. (#85511)
b8f14b7877 : [Profiler][Minor] Group and consolidate stub APIs (#85510)
bc4ca4c2c4 : [FSDP] Fix load_sharded_state_dict FQN mismatches for shared parameters (#86524)
960b98128e : [ONNX] Update training state logic to support ScriptedModule (#86745)
f451e824f3 : Revert " C10D extension to enable per-thread PG (#86348)"
c16c4a37ab : Remove functorch copy of conftest.py (#86927)
b3b9786fdd : Unified symbolic shape variables between AOTAutograd and Inductor (#86659)
c7c09722ad : Move TorchDynamo into PyTorch core (#86461)
97abc21f2b : C10D extension to enable per-thread PG (#86348)
66979fbfaa : Improve complex lerp performance (#84844)
ae45dab57e : disable failing circleci test jobs (#86940)
974ad8fa6c : Add BFloat16 dtype support for oneDNN Graph JIT fuser (#85591)
14dd5db2f5 : [fsdp] Fix test for 2d parallel integration to trigger the load hooks. (#86272)
18f58e2df1 : [quant][be] Rename node_name_to_target_dtype to node_name_to_target_dtype_info (#86860)
158a071034 : add _freeze for embedding op (#86769)
e737f2d81c : set the correct size of aten tensor in presence of mkldnn padding (#86767)
860ad04990 : [ONNX] Fix FindCommonAncestor in function_extraction (#86650)
af1dcef79c : [ONNX] Fix triu/tril export with diagonal input (#86843)
dbdfb8dd8b : Skip test_nvfuser_extremal_values for native_batch_norm (#86897)
2ce6150d23 : [ONNX] Fix scalar_type_analysis metadata for copied constant (#86716)
4839f73f32 : Fix incorrect tensor storage check (#86845)
afc9963865 : Fix path to nested_tensor in example (#86891)
54ee95c8ec : [nn] module: full_backward_pre_hook (#86700)
7dcfbedce0 : Fix LinearLR scheduler start_factor (#86695)
6ee94b572a : [functorch] Add shard to run functorch tests with asan (#82164)
427e0a6b4e : [cuDNN] Enable cuDNN Frontend v8 API by Default (#84948)
b0d80f4355 : [ONNX] Clarify phrasing of skipScriptTest/skipTraceTest decorators (#86216)
0ee0999608 : [ONNX] Renable assert diagnostic test (#85999)
cff333bdb5 : Enable max.unary_out (#86855)
25811663af : [FSDP] restricts meta model check to non ignored modules in FSDP (#86766)
ab69550678 : Add nested squeeze.dim and unsqueeze (#86813)
e531cf7b2e : [ao] fixing public v private for fx.backend_config_utils.py (#86037)
d169f950da : Revert "Use CUTLASS GEMM for NT bmm [OSS-only] (#85894)"
b97ae59e29 : Change legacy wrap_dim to work with symint == (#86842)
3d9fd060f4 : [functorch] Add more details to the functorch install page (#86823)
cbc01c4344 : OpInfo: Sample input cleanup (2/n) (#86379)
2efc56d9d7 : OpInfo: Sample input cleanup (1/n) (#86231)
45274c56a4 : [ONNX] Partially re-enable RoiAlign and RoiPool unit tests (#86169)
e17732b234 : [test] add cross-ref tests for python meta kernels (#86228)
0feccda7d7 : fix aliasing bug in pixel shuffle/unshuffle (#86608)
3376050543 : fix type promotion for group_norm composite C++ kernel (#86607)
6907db3f95 : fix aliasing for primtorch view meta kernels (#86285)
77e68b16cc : suggest rebasing through @pytorchbot if PR is stale (#86898)
8fffb79771 : Add vmap support for slogdet; fix regression from functorch 0.2.1 (#86815)
77d94ac5ab : Sets CUDA_MODULE_LOADING to LAZY when not set by the user (#85692)
30a8a87c80 : Fix autogen for _ctc_loss.Tensor (#86871)
dc6ce1485e : Use Variable Size Indices in Sparse Qlinear Code (#85247)
d3afd49c85 : Enable 16bit and 8bit Row/Col Indices in Qnnpack Fully Connected Sparse Op (#85246)
6c6e06619f : Add 16bit and 8bit row/col indices q8gemm sparse kernels (#85245)
6c6a32c223 : Enable Running Variable Size Row/Col Indices q8gemm Sparse Kernels in QNNPACK (#85244)
4c0e1dc980 : Update Qnnpack Fully Connected Sparse Op to Store Variable Size Indices (#85243)
1a87c25fe1 : Add functorch shard to sm86-periodic workflow (#86820)
cb4867a71a : Make `ASGD` & `RProp` differentiable (#86258)
5224906749 : Spread distributed backends among all distributed shards (#86837)
48c648d75d : Fix typo TORCH_ONLY_METHOD_OPERATORS -> TORCH_ASSERT_ONLY_... (#86661)
67fbd940ba : [ao] fixing public v private for fx.quantization_types (#86036)
b00cdb5b34 : [ao] fixing public v private for quantization_patterns.py (#86034)
77d29bcee2 : [primTorch] special: ndtr, ndtri, log_ndtr, erfcx (#86077)
ea586c0579 : Fix up cond a bit to make it work w/ fake tensor (#86727)
2a75152537 : [easy] Add nested tanh (#86826)
b79bac0e4d : Make the data types of output and input consistent for batchnorm (#84410)
c2f29e75cd : [flakybot] add dynamo as platform (#86701)
9470059766 : Allow viable/strict promotion even if periodic or docker-release-builds jobs are failing (#86827)
66cab5245f : Reland 2 min/max support for SymInt/Floats, finish as_strided/scatter/squeeze() backward symint support (#86797)
894c4218dd : ci: Just use regular checkout (#86824)
aacb9f3ac6 : Make `Adadelta`,`Adagrad` & `Adamax` differentiable (#86096)
e552cf1050 : [DOC] Use type hints to show annotation in the docs (#79086)
a77f2a95a7 : Improve NestedTensor documentation (#85186)
be81f3d8d4 : Revert distributed test parallelization (#86756)
09a676f639 : Add hooks for register_buffer/module/parameter (#86148)
c08cbfccd9 : Let retried jobs advance viable/strict (#86821)
3b26680222 : Update _torch_docs / ldexp (#86721)
363b108e39 : [quant][fx] Fix weight_dtype and bias_dtype backend_config checks (#86719)
d6bfbdf50c : [ao] fixing public v private for fx.pattern_utils.py (#86033)
bf0116d1f0 : [ao] fixing public v private for fx.graph_module.py (#86032)
25476f2e4b : [ao] fixing public v private for quantization_types (#86031)
ef58a132f2 : Use CUTLASS GEMM for NT bmm [OSS-only] (#85894)
73c43ce2e2 : Display unexpected exceptions raised from test_dtypes (#86599)
6be9d9a630 : Add AutocastHPU support (#84927)
553eaaba7c : Disable tf32 in functorch transform tests (#86799)
d56017a14f : [primTorch] Add ref for `triplet_margin_loss`, improve `triplet_margin_with_distance_loss` (#85614)
ce56ee11fd : Extend torch.cuda.is_available() to attempt an NVML-based CUDA availability assessment when explicitly requested by the user (#85951)
cd7c86eaa4 : Add prims.clone (#86705)
3356d0385f : [BE] Store helper functions C++ for python API parity (#82136)
cc7ea93c2c : [ONNX] Support device().type() string comparison with constant (#86168)
58542eb256 : [ao] fixing public v private for backend_config.native.py (#86030)
409efebab8 : Added define to fix issue with compatibility with latest Windows SDK (#85408)
f24d174fff : Allow PrivateUse1 backends to not have Storage (#86557)
61a5898675 : use cff standard for citation information (#86200)
493ded249e : [primTorch] decomposition for bucketize (#86366)
f903f1ab34 : Patching getitem in partitioner (#86713)
2344135179 : [primTorch] special: entr, expit (#86592)
a47f93b6c9 : Add type and shape annotation for gm.print_readable() (#86562)
e0d6898cbd : Revert "Backport currently dont work with some models if: (#86510)"
25725fd624 : (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682)
a216f4700c : Add testing on A10G GPU to periodic workflow (#85524)
c4f0b93f86 : Disable autocast in aot autograd (#86515)
d598290baa : Basic SDP benchmark harness (#86729)
4bfb734181 : Backport currently dont work with some models if: (#86510)
ce48df9e93 : Re-enable torchdynamo unit tests (#86658)
692b525b71 : [MPS] Extend unary ops to int64 (#86615)
f912b58544 : Revert "Enable max.unary_out (#85926)"
2aa981ab74 : Revert "Reland 2 of Merge more symbolic meta kernels and symint changes from branch (#86334) (#86488)"
9eb4f9dd17 : Tweak test tolerances to be compatible with A10G (#86538)
7fa601b1a7 : Skip chalf.mean in test_reductions_large_half_tensors (#86747)
811b8e012b : Revert "min/max support for SymInt/Floats, finish as_strided/scatter/squeeze() backward symint support (#86643)"
f1fdb6efbd : Manual changes for moving dynamo to core (#86621)
09364f4298 : Compile C10 with `Wshadow` (#86666)
0337f0ad47 : Add error checking to flaky test bot platform parser (#86632)
42bd275233 : [doc] LR scheduler example fix (#86629)
32152ce328 : Add original sources/references to Wishart.py in distributions (#86543)
50af1ace5e : Mark aten ops as canonical (#86215)
8db30255c3 : [ROCm] set nvfuser default to disabled, keep CI (#86369)
5ffe24fca4 : [vulkan][ez] fix always printing out a warning when retrieving the global context (#86697)
f32aeeae00 : Set interface_call to true be default (#86668)
7f02f2ac0c : [Experimentation] Add TSAN build and test (#85313)
92562046e9 : Optimize __dlpack_device__ performance (#86665)
c12f829cce : [nn] Add remove_duplicate flag to named_buffers (#674) (#85903)
693250ac85 : Docs: fx.Node docs incorrectly state that the self argument is included in args for module calls (#86685)
160118d72a : Add test case for matrix multiply-add with large inputs (#85550)
212fa874ce : Fix torch histogramdd docstring (#86593)
f26292d91e : [BE] Fix python docs typos up till torch.chunk (#86642)
86f914e996 : min/max support for SymInt/Floats, finish as_strided/scatter/squeeze() backward symint support (#86643)
6923dc3b59 : Add module: decompositions as an owner to test_decomp.py (#86703)
109f4d4453 : Move functorch tests from functorch/test/* to test/functorch/* (#86623)
51ea441862 : Upcast to fp32 in test_addmm_block ref_half_bfloat16 (#86682)
3edf79dc03 : Revert "Add meta support for _adaptive_avg_pool2d_backward (#86359)"
97de281176 : Improve interpolate() speed for channels_last CPU images and masks (#86361)
a4ee6956ff : Pin numpy version during MPS tests (#86691)
352d926482 : [CUBLAS][CUDA GRAPHS] (re-re-re-re-open of #83461) Explicitly set the workspace for cuBLAS handles (#86645)
937d677d9f : Add version selector back to functorch docs (#86602)
a56a8c0fc0 : Add meta support for _adaptive_avg_pool2d_backward (#86359)
03d8ab4dec : Skip forward AD tests for torch.native_batch_norm (#86206)
6ab07febce : [FSDP][Easy] Rename `_prefixed_param_names` -> `_fqns` for consistency (#86653)
2fe5808590 : Symintify NLL loss, copy and squeeze (#86606)
be8627827e : More symintification of get/set item (#86605)
f841442252 : symintify autograd view chaining (#86604)
49c9b0a154 : symintify einsum (#86603)
3a2cfbb813 : Revert "Improve interpolate() speed for channels_last images and masks (#86361)"
17074389de : index op with int32 support (#86318)
88a8a900b9 : fix: half reduction with multiple sub-iterators (#85596)
55479fe80e : Enable capturing of comm collective parameters (#98) (#85368)
ad2b04c39c : [torchdynamo hash update] update the pinned torchdynamo hash (#86651)
bd381121b9 : [vision hash update] update the pinned vision hash (#86652)
deb414a43f : Revert "Use FindCUDAToolkit to find cuda dependencies (#82695)"
577070ff96 : update fbgemm commit ID in PyTorch (#86577)
d8b971ed25 : Fixes for partitioner with symbolic shapes (#86425)
16f65f178a : Nested tensor forward only chunk operations (#85645)
4fc0d5341c : [PyTorch][Fix] Improve numerical stability of HistogramObserver (#86522)
8a47a49d5e : [quant] Move the order of x86 engine to avoid changing the default qengine (#86631)
224ae0da10 : [BE] Fix variable shadowing in CUDACachingAllocator.cpp (#86646)
2cb330ab15 : Acyclic partition patch (#86511)
dd6dd03ff2 : Enable output allocation cache (#86100)
82ed5ca340 : [Vulkan] Don't crash immediately if Vulkan context could not be retrieved (#86485)
b409d1f65b : Turn on Data Dependent Throwing (#86480)
ce7751188a : [DDP] Add `PackedSequence` support when `device_ids` is specified (#86614)
b7b5bd47ae : [MPS] Implement `frac` operator (#86625)
885122b7dc : Move PadNd from ATen/native to ATen (#82379)
e2a4dfa468 : Add correct __all__ for torch.distributed and torch.cuda submodules (#85702)
d93b1b9c4e : Address feedback from previous PR (#86622)
d792d75091 : [quant][fix] Fix the call to get_executorch_backend_config (#86338)
2288a1c806 : Added new option any_common_cpu_cuda_one to OpDTypes (#86286)
8f2dda5bf2 : [CI] Build MacOS M1 binaries without distributed support (#86451)
dcc3ae98b7 : [NestedTensor] Add a contiguous checks to get_buffer (#86496)
ad449b338f : [8/N] [Dispatchable Collectives] Update allgather with CPU / CUDA implementations  (#84423)
9eb771583c : symintify rand and randint functions and meta suport for randint (#86358)
67358ee124 : MaxPool: correct pooling description (#86559)
16a0fa1204 : Enable max.unary_out (#85926)
e18d466f35 : [test_nn] split lazy_modules from test_nn (#86526)
8a1fc5d2f8 : [7/N] [Dispatchable Collectives] Update reduce with CPU / CUDA implementations (#83916)
978b46d7c9 : Reland 2 of Merge more symbolic meta kernels and symint changes from branch (#86334) (#86488)
55663b7f81 : Reland 3 of Symintify getitem and add the required helper functions (#86207) (#86487)
4a5fdc56ec : fix some composite compliance ops for functionalization (#86470)
5102f0cffc : [FSDP][1/N] Retire `FlattenParamsWrapper` (#86117)
bf7c46facf : [xla hash update] update the pinned xla hash (#86099)
5844f00bbf : [FSDP] Add `low_prec` prefix to param and reduce dtype varnames (#86512)
cc5de7f1ac : [FSDP] Remove `utils.py` (moved to `_utils.py`) (#86528)
c6b7c33885 : torchdynamo: add linear eltwise fusion kernel (#85622)
ec2d22ece0 : [torchdynamo hash update] update the pinned torchdynamo hash (#86567)
753536b7a5 : BlasKernel: Improve gemm's inner dot product when a is transposed (#80977)
a45fead623 : mkl: Use per-operator headers (#75570)
c89d286af6 : symintify unbind_backward and tensor_split (#86357)
a6c0442cce : Add __all__ to torch.{autograd, fx, cuda} submodules (#85343)
6aec0d3ddb : [BE] Remove remaining cuda-11.3 builds (#86540)
7134b9bc7b : Quantized: Use per-operator headers (#75569)
67434c70df : [MPS] Fix printTensor() for MPS (#86534)
9998f9100b : [vision hash update] update the pinned vision hash (#86490)
92ac84c98a : [torchdynamo hash update] update the pinned torchdynamo hash (#86489)
492d1be5d2 : QuantizedCPU: Use per-operator headers (#71217)
4bfe2a2450 : cuDNN/miopen: Use per-operator headers (#71216)
33f0e98a49 : Re-land*4 "SymIntify cat and narrow" (#86468)
8ea2ed0fc7 : Revert "Re-enable torchdynamo tests (#86297)"
d3f7c34cb3 : Enable aten-aten decomps (#85921)
af9c6bc851 : [FSDP] Add `keep_low_precision_grads` support when CPU offloading (#86495)
7ec12a559c : Revert "Enable aten-aten decomps (#85921)"
b0ceb8ea1c : [vulkan] Add buffer to buffer copies (#86424)
511d81cd2a : [vulkan] Clean up convolution code (#86423)
b645c237bc : make g2p ~30% faster on mobile by suppressing a log (#85907)
bac26155e7 : [JIT] Allow freezing modules that contain mutable interfaces (#86039)
04490e90ea : better error message fix (#86422)
3a02873183 : [quant][ao_migration] nn.intrinsic.quantized migration to ao (#86172)
91b1bae1df : Caching allocator tracing (#86241)
8a3a54e012 : Fix index_select decomp (#86469)
a079dad7cf : Skip dynamo for all optim test as they are all flaky otherwise (#86482)
ba3fde6aa0 : Add multi-grad hooks (#86260)
97e56c176d : Try to fix shutdown test in edge cases (#86464)
62e4f51efd : Enable aten-aten decomps (#85921)
a95889ba7c : [FSDP] Add initial `summon_full_params(with_grads=True)` (#85738)
82229d1e33 : [optim] fix: empty grad support for SparseAdam (#86459)
66d480d314 : Revert "Disable mac m1 jobs (#86463)"
ac632b4374 : Disable mac m1 jobs (#86463)
ac74976a56 : [ao] fixing public v private for fuser_method_mappings.py (#86029)
be682befbc : [FSDP] Add `use_orig_params` (#84911)
b43ae1c411 : Add reference counter in FileStore (#85601)
efccb6401c : [quant][ao_migration] nn.intrinsic.qat migration to ao (#86171)
e610288130 : Re-enable torchdynamo tests (#86297)
e8d3b7201c : [ao] fixing public v private for fuse_modules.py (#86028)
d29912cc06 : [ao] fixing public v private for torch/ao/quantization (#86027)
65b408074f : Revert "Relandx3 "SymIntify cat and narrow" (#86289)"
5b69b87d5a : Revert "Symintify getitem and add the required helper functions (#86207)"
75df4b5e3d : Revert "Merge more symbolic meta kernels and symint changes from branch (#86334)"
b3fdb02fb2 : Fix memory leak in _LRScheduler.step() (#85602)
0e639ff45c : Revert "Cleanup PT-D imports (#85781)"
9b2ea41f48 : COO intersection primitives : fusing value selection with value intersection. (#86269)
e125baf90b : [autocast] Clean up registrations using new macros (#86403)
9b74267eb6 : [autocast] Make it easier to register rules (#86402)
55f5e0de8d : remove unused arg from `impl_func_cum_ops` (#86364)
a00f8489df : Relandx3 "SymIntify cat and narrow" (#86289)
cc9183eb4c : Update distributed.rst backend collective support chart (#86406)
b74ca31bf6 : [fix] sum_to_size: MathBits test - don't reuse same input tensor (#86378)
facbddb9ff : Override Quantized Backend to use Fbgemm in Qlinear Packed Params Test (#86236)
dbea07b6aa : [Profiler] record gradient from nnModule (#86355)
28a0b3fb18 : Fix col2im and im2col decompositions (#86426)
93b2d99158 : Improve interpolate() speed for channels_last images and masks (#86361)
70c6a988d6 : Fix the performance issue that the for-loop before ExternalCall could not be parallelized. (#85056)
2110c89443 : Revert "Revert "Revert "SymIntify cat and narrow (#86191)"" (#86289)"
6c604c9262 : [CuDNN v8 API][Quantization]fix alignment function in quantized cuDNN V8 path (#86253)
455b873919 : Introduce a match filter for SubgraphRewriter (#86430)
b5fd845fdf : [torchdynamo hash update] update the pinned torchdynamo hash (#86399)
10aead9adc : [MPS] Cache multinomial_with_replacement graph (#86437)
9ceadcadb2 : Fix unfold backward decomp aliasing for 0 dim input (#86428)
b14f1d7bb8 : Add Skip List for Aten Ops that are fused in nvFuser. (#86101)
c5a4844085 : Xformer SDP forward/backward kernel (#86157)
ca39e3679f : [vision hash update] update the pinned vision hash (#86173)
2fec853c87 : Fix SubgraphMatcher for case of no anchor found (#86421)
b73f0e98d5 : Fix cond tests after CI was disabled for a bit (#86321)
ca69ddb4f7 : Fix broadcasting to implicit leading dimensions in `torch.where` on MPS (#86240)
0e30da3f2f : [refactor] Renaming ao.sparsity to ao.pruning (#84867)
9a170b24f6 : Cleanup PT-D imports (#85781)
a241963837 : [nll_loss] Avoid unnecessary type casts (#86086)
2232db7fc1 : Replacement is irrelevant for 1-sample multinomial (#86342)
5a8b07de75 : Declare public dependencies on libshm (#82694)
08e3999fa4 : Merge more symbolic meta kernels and symint changes from branch (#86334)
3af0eafea6 : Release 1.13: Bump nightly version 1.13->1.14 (#86296)
5ed75ec1d7 : Fix SparseAdam consuming iterator (#86210)
f0977c4658 : [FSDP] Doc to explain running submodules (#86343)
3db8ddcac1 : [FSDP] Fix clip_grad_norm for CPU offload (#86337)
adfd8f3823 : [FSDP] assert to runtime error (#86336)
7a411952fb : CheckpointSequential support non-reentrant (#86331)
3037f3d710 : Docs: fix typo (#86273)
233d6f195a : Revert "Fix memory leak in _LRScheduler.step() (#85602)"
bf74679884 : Fix for binary upload step, use bash shell rather then default sh (#86382)
facf210f9a : [ao] fixing public v private for qconfig.py (#86026)
7c5e07f87b : [kineto] guard global observer init against Edge profiler (#86347)
bc919ac796 : [torch.ao.quantization] include torch.qint32 for static quant (#86345)
08780229df : Two small improvements to references (#86371)
795906f207 : Add total GPU memory utilization (#86250)
1059d3b52d : Make mergebot message clearer when starting a new merge (#86311)
6b295cd046 : Enable autograd on Linear with sparse COO weight (#86302)
8f2c2167d4 : Support autograd on sparse_mm in full. (#86301)
88b882cd1c : Support sum on a sparse COO tensor. (#86300)
f104490d63 : Support autograd on Linear with sparse compressed weight. (#86137)
fc21cc82fc : Enable sparse_dim() and dense_dim() methods for Strided tensors (#86203)
bed1ece9c5 : [torchdynamo hash update] update the pinned torchdynamo hash (#86306)
eb32330d6b : Fix memory leak in _LRScheduler.step() (#85602)
b8b564c908 : Ensure the minimum NVIDIA driver version to be 515.57 for CUDA 11.7 (#86344)
0c148a4b5f : Remove extra bracket, update header definition (#86317)
fb9b96593c : Use FindCUDAToolkit to find cuda dependencies (#82695)
fa799132d8 : [MPS] Better error message for `slow_conv2d_forward` (#86303)
4d7728890b : Inline asIntArrayRef (#86350)
cebf08afb2 : [Quant] Remove weight from DTypeConfig for non-weighted ops (#86335)
cdbffa7f66 : 🦊 [AI Accelerators] Consolidate native_layer_norm for nested tensor (#86295)
85c3b745f6 : Conditionally build the TestApp benchmark based on lite interpreter (#86314)
936e93058b : Delete torch::deploy from pytorch core (#85953)
27c3fb0386 : [Profiler] trace verbose=false by default (#86263)
a117fde86f : [Profiler] Apply TensorMetadata for Optimizer and nnModule (#86047)
fd5085c445 : Symintify getitem and add the required helper functions (#86207)
0a75c42f36 : Workaround MSVC ICE due to constexpr char* template argument (#86288)
45f03d6948 : Add at::symint:: namespace for ease of templated functions (#86329)
ea21a982f2 : Reduce warning suppression by just disabling pytest warnings plugin (#86255)
adf5919720 : Add option to record C++ backtraces in _record_memory_history (#86145)
97d6b5bbf8 : Refactor _cuda_recordMemoryHistory to use pybind11 (#86139)
d04889323e : Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd (#86245)
237316aa1d : PNP: early FX numeric suite tool to quantize each layer N times (#80521)
b233d83471 : make torch.histc ignore NaNs on CPU (#85870)
ddec1eea05 : [Static Runtime] Block linalg_svdvals codegen & run codegen script (#85983)
bebd162249 : Fix doc of DDP (#86244) (#86256)
020f2b2c0b : add myself for dynamic shapes PR review (#86292)
dc9c507d24 : add nominal support for int32 indices in index/index_put ops (#86309)
e8b0bea677 : Rename fromIntArrayRef to fromIntArrayRefSlow, audit call sites (#86235)
168ba066e3 : Revert "Symintify getitem and add the required helper functions (#86207)"
be4e43c7d0 : Remove DataParallel remnants from DDP doc (#86221)
9e1a431220 : Mark ctc_loss with dynamic_output_shape (#86293)
0e5a27fb8d : Fix horrible double truncation bug in Scalar (#86304)
73777d8a2b : [ao] fixing public v private for quantization_mappings.py (#86025)
28a5cd9480 : [ao] fixing public v private for quantize_jit.py (#86024)
17addb307e : Symintify getitem and add the required helper functions (#86207)
b8895df8db : Fix black binary again for debug python (#86275)
e778fbf519 : Revert "Revert "SymIntify cat and narrow (#86191)"" (#86289)
089a64e99e : Install c10d headers with absolute path (#86257)
b67e022833 : Fix ref / decomposition index_add (#86266)
14db44ad72 : [ao] fixing public v private for quantize.py (#86023)
c21caff876 : [ao] correctly set public v private for fake_quantize.py (#86022)
3b1ec7511e : Optimize is_symbolic test and some refactor (#86230)
8c6d352bcf : Log a new "timer expired" event to Scuba in file_based_local_timer (#85861)
fc94a2115b : Revert "SymIntify cat and narrow (#86191)"
3ec71fce79 : Improve make_tensor performance for float and complex types (#85473)
7f607e8cb5 : [torchdynamo hash update] update the pinned torchdynamo hash (#85774)
97d2e1df55 : [MPS] Fix GELU for `torch.half` (#86218)
63d8d4f6ec : SymIntify cat and narrow (#86191)
0e03dc5f1e : Remove softmax from recomputable ops (#86268)
c609768896 : Add refs for torch.unfold and a decomposition for its backward. (#85629)
67eb2d5952 : [FSDP] Dequeue one instead of flush (#86165)
1c5ca724f4 : PixelShuffle check that output is not null before applying kernel (#85155) (#86262)
9d6109c4b0 : improve annotations (#86105)
736adc0808 : Memory snapshots from C++ (#86190)
a348975e00 : Add opteinsum backend to give users control (#86219)
db13049b88 : [allocator tracing] missing GIL acquire (#86254)
d07b85393a : SymInt fixes from symbolic-shapes branch (#86242)
ac25c210e5 : [jit][easy] remove deprecated escape sequence (#85987)
2355b6256b : Remove `std::cout` from `multinomial_out_mps` (#86246)
4f95f7ae9b : Remove unnecessary header (#86212)
6d7235e3d3 : enable cpu meta testing (#86226)
1432b9978b : Add ref for cumsum (#86229)
b317736c39 : Fix default correction value in std/var decompositions (#85839)
adb12438c1 : [AO] Cubic sparsity level scheduler (#85232)
248796987e : [FSDP] Expose internal prefetch limits (#86198)
f20e4eab7b : Fix ITT unit-tests if PyTorch is compiled with `USE_ITT=OFF` (#86199)
d39e9c1e90 : [6/N] [Dispatchable Collectives] Update recv with CPU / CUDA implementations (#83876)
d447eff146 : [kineto] make ProfilerKineto the only option (#84714)
d724a91935 : Adding Wunused-local-typedef build flag (#86154)
8da704cdb7 : [MPS] Remove incorrect asserts from `Copy.mm` (#86184)
9da5646cdb : Add device logic handling for functions which allow scalar inputs as tensors (#86149)
d6b030856b : [primTorch] special: j0, j1, spherical_j0 (#86049)
8bce2f3d22 : [easy] Add spaces to vmap over as_strided error message (#86150)
e1859c0707 : delete special fake tensor new handling (#86144)
3f2e7d5c9a : [5/N] [Dispatchable Collectives] Update send with CPU / CUDA implementations (#83859)
a75edfa97c : Move ITT testing to its own test case (#86174)
b95e0fcc2c : Forward fix land race (unexpected successes) (#86186)
79dd621f76 : Symbolic shapes mega merge PR (Oct 3) (#86160)
de75274883 : Symintified factory functions (#86067)
82d9592f1b : Batch of symintifications to allow more models to pass in inference (#86104)
a4ff07f197 : Stop modifying the global logger on `import functorch` (#86147)
fe190078aa : Require bias to be contiguous for depthwise3x3_winograd backend (#85711)
bc1d884061 : [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
0db9419e28 : .github: Improve sanity check for generated files (#86143)
5ca0f9e1d4 : [GHF] Make EasyCLA unskippable (#86161)
f3d7ab5438 : Unconditionally register Python decomps to Meta key in Python Dispatcher (#85750)
06ddb1c07e : Revert "Disable XLA test (#86123)" (#86151)
cda815dc23 : Switch to checking EasyCLA on merge (#86127)
dfde7cf3e2 : ANTIALIAS updated to Resampling.LANCZOS in torch/utils/tensorboard/summary.py (#85679)
2494c318c4 : [easy] fix nested view call taking in more than one -1 (#86134)
6a842e33c6 : MPS: Add multinomial op (#80760)
37013bb443 : Added _unsafe_view decomp (#86103)
40a8cc28e7 : [MPS] Cast dot inputs to int32 when needed (#86140)
954660a308 : Correctly error if you pass in tensors where size arguments expected (#86126)
2aa9e0750a : Symintified all functions, not including factory functions (#86078)
cb87983cb8 : Decay integer-only (Optional)SymIntArrayRef to IntList in IValue (#86094)
146db41eb9 : Preserve/strip OptionalSymIntArrayRef when finding real schema (#86114)
1da74929d9 : Fix compile error for vs2022 #79358 (#85958)
36634d78da : [ONNX] Remove registration in __init__ (#86130)
e01d616ba9 : Re-introduce the functorch docs build (#85838) (#86125)
75c0e3a471 : [MPS] Improve memory usage and performance utilizing garbage collector and adaptive commit (#86119)
8860e48994 : [MPS] Handle compatible inputs to where (#85946)
2f692236fe : [GHF] Add commit statuses to checkruns conclusions (#86129)
cd6477617c : Custom sdp implementations dense (#85984)
8d9472d7d4 : [skip-ci] Fixed bad link in build_ci_governance.rst (#85933)
85d520d448 : [docs] Add `torch.channels_last_3d` (#85888)
2067b768fc : [FSDP] Delay moving tensor to CPU until necessary for optim_state_dict() (#85761)
e23cede0aa : Revert "Require bias to be contiguous for depthwise3x3_winograd backend (#85711)"
c670bad72f : Update dist.scatter() documentation (#86069)
2403d0c258 : implementation of qmul using xnnpack (#86040)
7941b042a7 : parallelize at file granularity (#85770)
d401732baa : Added roundup_bypass_threshold_mb knobs to the PyTorch Caching Allocator (#85940)
bc993e39cc : Unwrap SymInt => Proxy when being returned from the wrapped function make_fx traces (#86098)
470f8fb9e5 : Fix functorch/test/test_control_flow (#85981)
a262ccea58 : Change torch.autograd.graph.disable_saved_tensors_hooks to be public API (#85994)
6d06be89fe : Disable XLA test (#86123)
5fa840103b : Revert "Re-introduce the functorch docs build (#85838)"
9a126702ce : Require bias to be contiguous for depthwise3x3_winograd backend (#85711)
d253d6ec0c : [Metal][BE] Fix signed/unsigned compare (#86068)
4a528bc16f : remove vkuzo from CODEOWNERS for AO (#86038)
68a6113248 : Add nvFuser support for torch.native_batch_norm (#85562)
d28a882319 : [ONNX] Remove excessive deprecation messages (#86065)
6cd9c447da : Make test_api compile on DEBUG mode with some compiler versions (#86092)
368e8e7520 : Skip, don't xfail, nondeterministic as_strided_scatter test (#86091)
1f157099fa : Teach remove_symint to handle OptionalSymIntArrayRef (#86088)
bd32f9a833 : Correct ownership of OptionalSymIntArrayRef in backwards (#86087)
6fd5d6397a : [Docs] Updated torchvision people (#85931)
5322f00151 : Re-add benchmarking files to ios TestApp (#85539)
2b5625a726 : Update hierarchical_model_averager.py (#85648)
6a1e3f2f37 : Update fbgemm submodule (#86054)
acd2f21ea1 : [Profiler] Update python binding type annotations (#85722)
5ed338a55b : [Profiler] Add dtype to `_TensorMetadata` (#85721)
ba95984588 : [Profiler] Make `name` a property. (#85720)
dcac4dd58e : Add int32_t range check in packed_accessor32 in PyTorch TensorBase (#86085)
aabf3e234b : Allow functionalize_aten_op to work with non-SymInt signature. (#86080)
21e00d5acc : Fix type of as_float_unchecked (#86075)
8753703b68 : Fix some bugs in SymFloat IValue and toPyObject handling (#86072)
a66506b136 : Revert "Revert "Build and run Metal tests in CI (#86062)"" (#86073)
07ce0b435b : Remove backward for im2col and col2im (#85542)
99ca25e6eb : Misspelling Correction PR common_methods_invocations.py (#86081)
e6dd2965af : A bunch of coverage improvements (re for models in inference snext50, BERT_pytorch, mobilenet_v3_large, pytorch_CycleGAN_and_pix2pix, dcgan, resnet18, mnasnet1_0) (#86050)
b8bf604459 : Ported linear to symints (#86021)
b9b24c31fd : [MPS] Fix non-contig to contig tensor copy (#86056)
007e12a3e9 : OpInfo: Extend natural syntax to allow adding metadata (#85890)
ed5f95048e : OpInfo: Add natural syntax for SampleInput creation (#85723)
195184e69c : Revert "Build and run Metal tests in CI (#86062)"
3638089755 : Ported reshape to symints and added a shim for BC (#85998)
f88bf8de2c : Build and run Metal tests in CI (#86062)
cd5ac15d5d : Fix internal/external desync for Metal hotfix (#86061)
b26eafec07 : [MPS] Specialized memory pool for scalar values. (#85817)
481def752c : Update fbgemm submodule (#86054)
f183a989a2 : Fix fake tensor kernel nesting (#85920)
365498f673 : Add rmod support to SymIntNode (#86053)
c857b3e73e : More fixes for LTC symint interlock. (#86043)
0060d871df : Add a bunch of extra functionality to SymFloat (#86046)
833edeb020 : Register py metas to py dispatcher so they are used by functionalization (#86057)
b562987c28 : Revert "Fix fake tensor kernel nesting (#85920)"
fe89cd6c57 : [BE] Use reusable workflows from test-infra (#86035)
92c2295ab4 : Remove dead ts_native_functions.yaml entries (#86045)
2f703c5956 : SymInt-ify TypeAndSize (#86044)
07800c9c81 : Miscellaneous fixes from symbolic-shapes branch (#86042)
d9273e8b6b : [functorch] refactor: get_exhaustive_batched_inputs (#85965)
a5a2f576a7 : [vision hash update] update the pinned vision hash (#85776)
05d1128106 : [c10d] Start deprecating *_multigpu APIs (#85961)
463283e016 : [c10d] Start deprecating *_coalesced APIs (#85959)
bf667c63e7 : Fix the error with constant_pad_nd for 4D+ padding (#85991)
be29ca9716 : [FSDP] Ignore buffers that are non-persistent. (#85740)
db4c6fe54f : Revert "[maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)"
9bf9db57be : Refactored recomputable ops a bit and added a bunch more ops (#85993)
e09a84a184 : Removed debug output that doesn't work with faketensors (#85992)
4b86a9359a : [Quant] Make x86 backend default when querying qconfig (#85461)
fd553c46f4 : nvprim op support runtime checks on dtype compatibility on prims.convert_element_type (#85566)
01292cc9e4 : [BE] Get rid of `std::result_of` in `c10` (#85977)
c2d9ea7f4b : Fix fake tensor kernel nesting (#85920)
28061d50e6 : Lazily load decompositions for jvp (#85989)
334686bde7 : Fix the dimension of padding to match the input's dimension (#85990)
24fc680ee4 : [Quant] Enable XNNPACK ops in QNNPACK BackendConfig (#85863)
d9421f8158 : added fix for WorkUCC (#84368)
a4cc63991a : [MPS] Enable caching for random ops with Philox engine (#85833)
071f875046 : [quant] Fix per channel weight observer (#85883)
6a5550fca4 : [test_nn] split embedding tests from test_nn (#85892)
2037b7cb60 : Make FunctionalTensorWrapper correctly handle symbolic shapes (#85975)
3b6588ab74 : Consistent compute numel/contiguous strategy with SymInts (#85858)
84a06d7193 : Enable convolution_backward with bias and symints (#85970)
a4d10342e9 : [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845)
1c97084685 : [BE] Generate names of known device from array (#85982)
71eb04403c : Revert "[CUBLAS][CUDA GRAPHS] (re-re-open of #83461) Explicitly set the workspace for cuBLAS handles (#85447)"
401a358817 : [ci] two procs for parallelization (#85985)
e73e3e3523 : [functorch] test no warning on `import functorch` (#85980)
eed5f0464c : [functorch] fix whirlwind tour ipynb (#85974)
4c04fa9587 : Remove `optim_mt` from `test/test_optim.py` (#83549)
94da90e41f : LU solve/unpack fix to prevent bad memory usage on CPU (#85922)
7238ca4c2e : Disallow saved tensor hooks in functorch transforms (#85972)
7c72bc48d8 : Add mechanism to disable the "saved tensors hooks" feature (#85971)
69b927701a : [ONNX] Update user documentation (#85819)
1fae890a07 : fix grad silent correctness issue from view fn followed by an inplace fn (#85374)
8d99d6127e : Add torch_lazy_all_numbers_special_scalars flag (#85902)
be327ec08f : [MPS] Fix base shape size for view ops in case of multiple slices (#85934)
8669f6d426 : [ONNX] Fix layer_norm return type (#85979)
7ddf167ba5 : Move the asserts in shape functions upsample_nearest_2d op. (#85801)
b60ad2e529 : [maskedtensor] negative testing (#85938)
0a7d8b40b6 : Create a quantized in-place version CUDA ReLU function, relu_quantized_cuda_. (#85670)
eb650abc2c : Add OpenSSF Scorecard Action (#85412)
7e5105dd11 : [ci] expand log file if failed (#85927)
9ba1630bd7 : Limit world size in test_fsdp_pure_fp16 (#85957)
3a13c8493a : [1.13] Mention optim_input future BC breakage (#85963)
d003757a84 : Clone symint on set_sizes_and_strides (#85878)
24adadd4db : Revert "Disallow saved tensor hooks in functorch transforms (#85829)"
801818f9e6 : Revert "Add mechanism to disable the "saved tensors hooks" feature (#85553)"
b13b10a8fa : Extend collate function that can register collate functions to handle specific types (#85748)
b00a5359f7 : Add a way to skip lowering to nvprims (#85811)
787028cadb : Implement col2im decomposition and fix im2col and add a few preconditions (#85541)
1f38abb5d2 : Adopt ncclRemoteError (#85887)
8f4edf1e1d : [ONNX] Initial version of diagnostics infrastructure. (#85107)
dab1c7c379 : Update trunk CUDA-10.2 to CUDA-11.7 (#85943)
ade1c19612 : Add reduce_scatter_tensor in place of _reduce_scatter_base (#85867)
33401ee81f : [ONNX] Rename 'sarif_om' to 'sarif' (#85918)
6bb0a36d0e : [ONNX] Add type annotation for SARIF attributes (#85898)
e9b254a025 : [ONNX] Migrate SARIF from attr to dataclasses (#85651)
91667d1d21 : [ONNX] Introduce SARIF (#85428)
1ad0048b64 : Refactor distributed to use absolute header path (#85780)
0b0ce72b25 : [Profiler] Extend ID assignment to allocations and frees (#85719)
95681929e4 : Hotfix for S298125 (#85814)
a50d8864fc : Revert "Refactor distribuetd to use absolute header path (#85780)"
668082718a : Refactor distributed to use absolute header path (#85780)
81b366a9dd : [MPS] Handle scalar input for scatter and gather (#85842)
62a4fd7907 : [MPS] Handle output shape for empty input in binary ops (#85836)
ae93a4dc43 : Make launch check exit code depend on results (#85886)
dde43d083b : [c10d] Reorder macros so they are defined before getting used (#85850)
9009393f46 : [ONNX] Remove protocol dataclass (#85916)
6a14fcb922 : [MPS] Add support for aten::masked_select on mps (#119) (#85818)
85258ec17e : Add mask_type=2 to masked_softmax for when mask.size() == input.size() (#85915)
6004c65af8 : Fix rand_like nvprim meta function. (#85882)
103a21f480 : Update _torch_docs.py (#85924)
c036fb3e7d : assert lambda >= 0 in poisson distribution cuda kernel (#85906)
bc57306bdd : Fix typo under docs directory and RELEASE.md (#85896)
11224f34b8 : assert weights being 1-d tensor in bincount (#85881)
6db3539e70 : Revert "Improve make_tensor performance for float and complex types (#85473)"
50000f3cdc : Align functorch docs with PyTorch's (#85856)
19c7a6b54b : [functorch] Update notebooks for latest release (#85855)
48b3582e28 : [functorch] Update install instructions in docs (#85854)
06868004b7 : Remove codesigning from ios circleci workflows (#85630)
a9183c0f9e : Fix bug in PythonFallBack (#85795)
fe87ae692f : Fix `check_compiler_ok_for_platform` on non-English locales (#85891)
0449cf0c9e : Re-introduce the functorch docs build (#85838)
941d7a31f6 : Pass group ranks and options to third party distributed backends (#73164)
e15a48def7 : (bsr/csr) x dense mm (#85551)
ef0baba23f : Use `int64_t` for nll_loss with cuda inputs (#85395)
5f26df0345 : resubmit: "resubmit: [mta] APEX style Fused Adam (#81705) (#85507)" (#85739)
44eefb1376 : Update debug flag for vulkan (#85715)
ad3bea58da : Add vulkan qualifier to the kernel name (#85714)
e0af12a076 : [Pytorch][benchmark vulkan] Fix vulkan profiling (#85713)
e0170c7cde : Remove torch/extension.h dependency in torch/csrc/functorch/init.cpp (#85659)
8fb470e81a : [fix] max_pool1d: shape check (#85594)
cab6ffa0f7 : catches failure on nvprim speculative lowering (#85580)
a807f1987a : Stop cuda-10.2 binary builds (#85873)
3cdf621fe5 : Add opt-einsum to CI (#85574)
b25a1ce22d : Release GIL when doing shared memory copies on Tensors (#85389)
6fae62b35f : Revert "C10D extension to enable per-thread PG (#84153)"
976e2a3502 : Separate magma installation for ROCm into its own file (#85567)
9fb72ca494 : Treat layout / pin_memory consistently across creation refs (#85333)
a76995e584 : Improve make_tensor performance for float and complex types (#85473)
ad87365e54 : [qat] A more stable conv_bn fusion for qat training. (#85744)
3cfc61b846 : [Profiler][trivial] Optimizer states (part 4 of Record Optimizer) (#85840)
72b32f1644 : [c10d] move ncclgetlasterror directive definition upfront (#85825)
dc63948dc9 : [ONNX] Update behavior for `register_custom_op_symbolic` (#85636)
3c9e8cd8df : Create a quantized non-in-place version CUDA ReLU function (#85669)
7628603aee : [Profiler] bug fix: python object reference counting (#85847)
edb99df2e0 : [ONNX] Fix reduce node shape inference (#85765)
7e4684009c : Improve codegen for jvp decomposition (#84894)
f23f362c5d : [Profiler] Use strong typedef for Tensor ID (#85718)
282d8dfa68 : [Profiler] Fix traversal utility (#85717)
dfdfaec3fc : [Profiler] Don't assign in AppendOnlyList::emplace_back (#85716)
bd65adf4e9 : Properly fix log_sigmoid vmapjvp and remove hack (#84892)
cca909645f : Add bfloat16 support for lerp on CPU (#84327)
7cdd39b393 : [ONNX] Update `unconvertible_ops` (#85595)
ada6e5b53a : Implement duck shaping on SymInts (#85808)
3a3e2002d8 : [Quant] Add unified x86 quant backend (#84329)
d542aab5c1 : [quant][ao_migration] nn.intrinsic migration to ao (#84842)
6a2b12dd65 : Turn on aliasing tests for fake backwards, Fix Batch norm running mean/var decomp aliasing (#85471)
a67621a6ca : Update functorch README to reflect move into PyTorch (#85832)
498591467b : Excise functorch/version.txt (#85830)
d8277d9075 : Disallow saved tensor hooks in functorch transforms (#85829)
5aa183d2bc : Add mechanism to disable the "saved tensors hooks" feature (#85553)
85d8441fba : [ONNX] Deprecate setter functions for global variables (#85165)
5deeb09d4e : [ONNX] Annotate all g as GraphContext (#85491)
c42a408baa : [ONNX] Create decorator to handle symbolic context (#84776)
723193ec16 : [cuDNN][cuDNN v8 API] Fix 3d convolution_add_relu in V8 (#85055)
01add6e288 : Allow only one -1 in nested view/reshape (#85691)
1418a663b1 : Fix upload condition pypi-cudnn build (#85799)
3d2316670f : [ONNX] Create GraphContext and load `g.op` method to the class (#84728)
75db0225ad : Handle fake tensor in intlist (#85759)
913f5784d7 : move functionalize out of experimental namespace (#85742)
796da4df4d : Return contiguous tensor from softmax decomposition (#85788)
8bb69a007f : reenable circleci mac jobs (#85824)
879ae45230 : Increase timeout and retry count conda upload (#85802)
afaee00fec : Add python `nested_tensor` and `as_nested_tensor` constructors in `torch.nested` (#85593)
a876432aea : Expose torch._will_engine_execute_node (#84773)
8dd45424ea : [primTorch] Add ref for `huber_loss` and error inputs (#85041)
0b93afb112 : add amp tests (#85434)
29c78266c0 : test_decomp.py: Skip tests for embedding_backward bf16 (#84554)
c2e9b9ec4a : TestModule: Don't assume sample inputs version counter is zero (#85734)
5b476e68af : Slightly beefed up dynamic shapes tests for storage_offset (#85806)
d776693701 : [Profiler] Optimizer param_groups (part 3 of Record Optimizer) (#85784)
2f8cfb74af : Fix gelu repr (#85790)
ff71f45788 : [FSDP] Add `FSDPExtensions` for TP support (#85039)
a4bd89b267 : Revert "Revert "Symintified mmm/addmm derivative formulas (#85794)"" (#85820)
39130ccf73 : Registered _like metas (#85793)
fc8ba3a92d : Revert "Allow only one -1 in nested view/reshape (#85691)"
b44a4a8b51 : Revert "Registered _like metas (#85793)"
4c6dc6a1a4 : [BE] Do not use VLA (#85800)
424aad7f82 : [JIT] support freezing modules that don't have a forward method (#85779)
a0b1693996 : Revert "Update `amax/amin/norm/count_nonzero` signatures with `int[*]? dim` (#83300)"
224b689cf1 : Handling for getitem with boolean in meta, and other improvements (#85807)
b6885f7d4a : Don't make parameters have symbolic shapes (#85809)
e63d3a8aa6 : Augment errors raised in fx.Interpreter with Node info (#85810)
b04b2fa9aa : [CUBLAS][CUDA GRAPHS] (re-re-open of #83461) Explicitly set the workspace for cuBLAS handles (#85447)
823dc33b00 : Revert "Symintified mmm/addmm derivative formulas (#85794)"
5709c67f1f : [ROCm] Retry loop implemented to avoid transient memory leak errors (#82607)
b2311192e6 : [NN module] speed up _load_from_state_dict (#85743)
cef8dfc8ba : [BE] small typo+lint fixes for einsum/sumproduct_pair (#85709)
230edd2515 : Symintified mmm/addmm derivative formulas (#85794)
a4e75ccf85 : Registered _like metas (#85793)
35fe4abdc7 : Added symbolic shape testing for AOTAutograd (#85789)
0b251d985d : skip test TestCompositeComplianceCUDA::test_forward_ad_nn_functional_max_unpool2d_cuda_float32 (#85767)
06e0583fb0 : [4/N] [Dispatchable Collectives] Update all_reduce_ with CPU / CUDA implementations (#83810)
0e256c2550 : removed compile cache and static argnums (#85783)
614d6f19e3 : Fix Use obj1.is(obj2) warnings (#85688)
793488cda2 : Revert "Revert "Symintifying slice ops (#85196)"" (#85746)
049be5ac10 : Remove some dead code from fake tensor (#85764)
795028a3ce : Make Python reference for permute accept varargs (#85460)
ccac8d13d5 : [3/N] [Dispatchable Collectives] Update broadcast_ with CPU and CUDA implementations (#83735)
01f946d766 : Rename test file from test_pythonkey to test_aotdispatch (#85769)
fc2e7ebaac : Added floordiv simplification rule needed for models (#85768)
a8be59545d : [aarch64] Use the correct binary when cross building //caffe2/torch/csrc/deploy:remove_dt_needed (#85632)
3276b51243 : Add environment parse function that supports default value (#85563)
f80ef73d1c : [Profiler] tracking Optimizer (part 2 of Record Optimizer) (#84920)
1c0f0b33a0 : Update `amax/amin/norm/count_nonzero` signatures with `int[*]? dim` (#83300)
80b8886223 : add itt unit test and docstrings (#84848)
572dd862c4 : Revert "Update `amax/amin/norm/count_nonzero` signatures with `int[*]? dim` (#83300)"
1c1f3a99dc : [FSDP] Handle the state_dict on CPU cases (#85640)
ce4f187f15 : [MPS] Add tensor::index_put op (#85672)
9858f04350 : [quant][docs] Add types for scale and zero_point tensor for torch.fake_quantize_per_channel_affine docs (#85733)
7ff6a00a9a : [MPS] Handle 1D weight in linear layer (#85752)
4ca125a9e1 : [Quant][fx] Add quant and scale ranges to BackendConfig (#85200)
24a268143d : Directly access has_symbolic_sizes_strides, avoid expensive test (#85754)
8c7c7ed322 : Update `amax/amin/norm/count_nonzero` signatures with `int[*]? dim` (#83300)
f1f6cb07e2 : [UT] Fix random failure of test_qconv_transpose1d by skip using hypothesis (#85463)
ea50e7f262 : fix ovrsource pytorch build from D39769513 (#85708)
7934596b70 : [ucc] Remove internal tracing (#85730)
f98109795f : Stop using restore() in ProxyTorchDispatchMode (#85756)
0a64e73d12 : 52189: refactor unreachable Runtime Error (#85725)
5bfcf1f01a : [Docs] Update sparse Maintainers (#85126)
775a22c7c6 : Add all_gather_into_tensor in place of _all_gather_base (#85686)
34cee3e82b : Auto tag milad for symbolic-shapes PRs (#85751)
16543f6878 : Revisit python tracing OD flow (#85326)
fc99705f22 : Add functorch M1 shard (#85565)
5cbffbbac9 : C10D extension to enable per-thread PG (#84153)
b6bee3c491 : Upload to different path for pypi cudnn (#85753)
6cfe555f4f : [ONNX] Apply Common Subexpression Elimination pass to ONNX export (#85665)
c719ec9c11 : [DataPipe] Fix MapDataPipe spawn lambda test (#85668)
64a526d4af : [DataLoader] Replacing `traverse` function with `traverse_datapipes` (#85667)
8a926b3187 : Enable CSC @ CSC addmm (#85379)
bb5001ce3d : Enable dense x bsc mm/addmm (#85308)
e59120ab51 : C++20 compatible changes (#85703)
e746fff8ba : [MPS] Enable adaptive avg pool 2d with larger output size (#85726)
c8776dca6a : Remove extra `with` in value error exception statement (#84713)
a106611055 : [Modes] fix handle_torch_function logic (#85707)
f4251525de : Adding Wunused-lambda-capture to Clang build flags (#85655)
d51f6de9b8 : [quant][core][feature] Implement index_put for quantized CUDA tensors (#85685)
3a171dfb0c : Revert "Symintifying slice ops (#85196)"
9f1468ae6c : CyclicLR memory leak fix (#85462)
4c4e5f6106 : Allow only one -1 in nested view/reshape (#85691)
7167996346 : Revert "resubmit: [mta] APEX style Fused Adam (#81705) (#85507)"
f8e71ca338 : Designate device to generate_square_subsequent_mask (#85609)
aaef5d8f2c : sparse mm/addmm enable dense x csc, csc x dense and simplify layout check logic. (#85307)
b656ba0b11 : Use hexfloat for threshold OpInfo tests (#85676)
fdef507897 : Simplify noncontiguous_like (#85518)
101f10d7ca : Cherry pick sorting patch (#85620)
1367f2409f : [MPS] Fix placeholder behavior for transposed view (#85689)
15c52ffc4f : Disallow auto_element_wise for in-place and fix some in-place gradients (#85634)
01dbbeeeb5 : Expose cpp_backtrace to python binding (#84896)
54e03cdda9 : Don't use a fixed name to avoid race conditions. (#84952)
0183c1e336 : Add __all__ to torch.utils submodules (#85331)
f64857189d : resize_as_sparse support all compressed layouts (#85378)
45be74cc63 : Optimize `to` if the datatype of the source tensor is the same as the dest datatype (#85140)
83261ff9a8 : Use high precision accumulate buffer for bf16 accumulation (#84402)
cf5699f2fc : [vulkan] Rewrite prepacking functions using aten functions + some code cleanup (#84973)
b360d66391 : Revert "Add environment parse function that supports default value (#85563)"
e1e056ac44 : [torchdynamo hash update] update the pinned torchdynamo hash (#85683)
8125d2e188 : [vision hash update] update the pinned vision hash (#85684)
7a5449f148 : [MPS] Clamp op - fix shape issues (#114) (#85673)
cce6d8d641 : Fix warning in kineto_shim.h (#85653)
18d8c548f4 : [Modes] remove enable and rewrite mode stack (squashed) (#84774)
a0be0ca161 : [MPS] Fix test consistency error 'mlir module expected element type ui8 but received si8' (#85666)
b8d2ab3dd5 : [MPS] Fix memory leaks that cause the buffers not to be released and cause OOM (#85661)
755b39ba66 : [LRD] Allowing using dedicated iteration counter for learning rate (#85195)
784f4ba1ce : Add environment parse function that supports default value (#85563)
686555b663 : [maskedtensor] port torch/_masked into torch/masked (#85515)
90261945b7 : Copy over non parameter grad (#85658)
4a2d2e5e40 : Change API type `Tensor[]` for structured kernels. (#73350)
1a2734e015 : Fix broken periodic workflow after removing ios secret (#85664)
e4471032da : Enforce non-virtual-dtor everywhere (#85586)
f325c29b05 : [fx] Make NormalizeArgs preserve node type (#85637)
5547c6aa4e : Match kwargs in SubgrpahMatcher (#85617)
e38b3424c3 : Clean up the functorch test skip mechanism; add a new decorator (#85564)
6a04df3ac8 : Get flash_attn to compile for CUDA 11.6 linux nightly build (#84941)
15435325eb : Configure PyTorch Testing ArgumentParser Instance To Avoid Unnecessary Conflicts with System Args (#85616)
d5ce2bbed2 : [primTorch] decompositions for upsample_bicubic2d (#85403)
70cce9f8d1 : [torchdynamo hash update] update the pinned torchdynamo hash (#85225)
89896b8778 : Fix typo in comment (#85635)
291b080e8c : CODEOWNERS: [ONNX] remove @shubhambhokare1; add @abock (#85476)
a8ca0d4849 : fix segmentation fault in QTensor.choose_qparams_optimized (#85552)
bcc544e9d7 : Add FakeCrossRef tests for backwards, Fix Layer Norm Backward Decomp (#85417)
0f561f0bd2 : Log Watchdog events to scuba (#85391)
60d98821c5 : Remove unnecessary skips in test_dispatch.py (#85557)
b0eeffdf6f : Fix printing regular tensors inside functorch transforms (#85556)
30fccd03a6 : Make profiler table column widths changeable via arguments (#85203)
b32020e937 : make vulkan codegen windows-compatible (#85241)
ef95baf2ec : Add `IListRefTag::Materialized` to `IListRefIterator` destructor. (#85467)
9c036aa112 : Add SymInt to Scalar (#84958)
33404436aa : [doc] Add pin_memory and layout to new_{zeros, ones, full} (#85605)
6e50f8e395 : Allow External Scripts (e.g. vscode) To Discover and Execute unittest Tests (#85584)
f0570354dd : [MPS] Fix memory error in var (#85571)
e29f2483a6 : Remove codesigning from github actions ios build workflow (#85597)
1a0e1db763 : [Profiler] Compute unique IDs for Tensors (#85162)
e1f9125e61 : [doc] add argument default values in rot90 (#85610)
0d86dfccf8 : Bump protobuf from 3.20.1 to 3.20.2 in /.circleci/docker (#85572)
bd5efbb7ee : [doc] add pin_memory argument to rand (#85221)
a8add2b92f : Support matching Args for SubgraphMatcher (#85456)
db40fbdee0 : Add deprecation warning to ProcessGroupRoundRobin (#85158)
41be45f0f4 : Revert "Create a quantized version ReLU function for CUDA (#85502)"
a531a604a0 : Support BF16ImmPtr (#84041)
ffaff8896a : Removed None arg check in test/test_decomp.py (#85402)
d3be4245bb : Fix the issue that cat result would be incorrect for channels-last (#85076)
2efea21c52 : [mkldnn_matmul] enable mkldnn matmul for aarch64 bf16 devices (#83671) (#85546)
93a53ff4d9 : Create a quantized version ReLU function for CUDA (#85502)
e7e1cd945f : Add path optimize kwarg to einsum (#84890)
e78e00f4d9 : [vision hash update] update the pinned vision hash (#85581)
2b6d2cad29 : Remove @saketh-are from CODEOWNERS (#85521)
4d3acf1203 : Enable pytest-shard for functorch (#85321)
70b27e91c7 : [pytorch] Skip linalg tests that fail on Meta infra (#85577)
a554a546b3 : Update PyTorch Distributed CODEOWNERS (#85560)
7d8ee38a5c : [Static Runtime] Fix prim::If tuple corner case (#85446)
18685b7fe1 : Update PT maintainers list for AO (#85125)
a38e43e936 : [perf][1/5] Replace IValue::toString()->string() with IValue::toStringRef() (#85437)
ea81138bd6 : [quant][improvement][better-engineering] Refactored get_supported_device_types into common_quantization.py (#79607)
12ae3bea43 : Faster mul(sparse, sparse) with broadcasting in dense dims. (#85336)
40d3e55b7d : Temporary fix to skip NVIDIA driver installation from RHEL repo (#85569)
4befe45084 : [FX] Add one option to maintain the FX graph execution order after splitting_module (#85188)
4f5f2c1a9e : Add torch.nested to ovrsource (#85384)
4c01c51266 : Symintifying slice ops (#85196)
604487f239 : OpInfo for Slice (#85554)
bc6dc8d271 : [fix] composite compliance: cumprod, _masked.cumprod, linalg.vander (#85330)
2e81710366 : [Quant] Add initial Executorch BackendConfig (#85527)
a8074a1a0b : [Checkpoint] rename apply_ac_wrapper (#85449)
cc64f64670 : [Docs] Minor fix to apply_ac doc (#85448)
a4c94f0739 : Fix cuda issue with sparse.sampled_addmm (#85194)
49e10c1598 : [ci] test_ops in parallel, ci tests log to file (#85528)
0e582fbfcc : [NVFuser] Upstream push 0907 (#84626)
52a8be523c : Adjust retry time for conda upload (#85545)
3007093007 : Add new cudnn build for linux only (#85549)
d83ca9ebff : [CI] Make `cuda-arch-list` a parameter to linux-build (#85523)
108b25db25 : Let antoniojkim snoop all symbolic shapes PRs (#85555)
4dfaca6fb1 : [Profiler] Clean up Tensor representation (#85161)
e296a82f23 : [Profiler] Capture storage data pointer (#84276)
4615d1bcfa : resubmit: [mta] APEX style Fused Adam (#81705) (#85507)
f1a6f32b72 : [DataLoader] Make distributed lazily initialized & share seed via PG (#85279)
e3766e9855 : [maskedtensor] move __torch_function/dispatch__ functions to a map (#85529)
7893748900 : Add instructions on how to merge a PR (#85280)
c7b17d7eb1 : Add nvprims `rand_like` support for Dropout (#85077)
1e4c88518c : Fake tensor refactorings (#85498)
d10de31cc8 : Revert "Add FakeCrossRef tests for backwards, Fix Layer Norm Backward Decomp (#85417)"
eb570ab7d0 : Revert "add amp tests (#85434)"
3b195fd33e : Revert "Turn on aliasing tests for fake backwards, Fix Batch norm running mean/var decomp aliasing (#85471)"
f371b5267d : Made max_pool2d_with_indices_backward_cuda contiguify `indices` (#85493)
ea72a0991c : Add support to traverse all python collection objects (#84079)
1e92eb8068 : Turn on aliasing tests for fake backwards, Fix Batch norm running mean/var decomp aliasing (#85471)
c2f4bbe669 : add amp tests (#85434)
78afa0cf0c : Add FakeCrossRef tests for backwards, Fix Layer Norm Backward Decomp (#85417)
2e883d4655 : Revert "[mkldnn_matmul] enable mkldnn matmul for aarch64 bf16 devices (#83671)"
034f2b4d23 : [Quant][fx] Enable FX static quantization for LSTM (#85068)
71dddec6ea : Cast grad_input to half when input_dtype is half in _softmax_backward_data aten decomposition (#85497)
b4f9b68225 : should_check_strides (#85416)
d5cabf7946 : Make functorch compilable with Py-3.11 (#85054)
56c0c0af5b : [ShardedTensor] Add `is_floating_point` (#85483)
c8f78d417b : [ShardedTensor] Add `is_meta` (#85482)
05d0eb2aee : [FSDP] Make `_ran_pre_backward_hook` check more robust (#85481)
8602873a12 : [vision hash update] update the pinned vision hash (#85522)
cf0de77c2c : [Easy][FSDP] Simplify `assert` to `p_assert` (#85479)
8bd4724f04 : Adding a unit test that would gate PRs and prevent reverts, e.g. #83327 (#85442)
5f6735ea97 : [FSDP] Address comments on previous PR (#85490)
539076e2c2 : Remove deprecated torch.lstsq (#70980)
6380016bdd : Disable decomposition registration on Python-3.11 (#85509)
f0869cc8d0 : Make CUDA exceptions unlikely and isolate C10_CUDA_CHECK body (#85256)
f0a084f3db : Revert "[Profiler] Make `LibKinetoClient::stop()` directly call `ProfilerStateBase::pop` (#83965)"
46a6a50f4e : Skip MPS test from generic M1 testsuite (#85500)
92a942100a : disable circleci jobs b/c they are flaky (#85508)
5e700803c2 : Use fallback approach for nested matmul (#85311)
63c1f2fef9 : [Static Runtime] Fold linear prepack ops (#85289)
e4899764b2 : [Static Runtime] Fix aten::index_put list conversions (#85298)
bd854588fb : Increase timeout for ProcessGroupGlooTest (#85474)
e505360eb8 : Revert "[mta] APEX style Fused Adam (#81705)"
848437590f : Delete functorch's monkeypatching (#85430)
5e5c319549 : Move functorch python bindings to torch/csrc (#85426)
bcf93181a0 : Remove deprecated torch.matrix_rank (#70981)
e342976907 : Adding conda retry upload to mitigate connection reset errors (#85407)
253ffbf28b : Exposing native _scaled_dot_product_attention to torch.nn (#85044)
0d04e54898 : [GHF] Create "Core Reviewers" group (#85477)
e227e3ec48 : Disabling the pypi cudnn wheel from uploading temporarily (#85470)
5043457a8e : Revert "Add FakeCrossRef tests for backwards, Fix Layer Norm Backward Decomp (#85417)"
9baf6770bc : Apply new symbolic shape strategy to make_fx symbolic mode (#85260)
2f50d2f685 : [ONNX] Update docs on symbolic registration (#85290)
9c77083965 : Add FakeCrossRef tests for backwards, Fix Layer Norm Backward Decomp (#85417)
61b4e8a7bf : More SymFloat support (#85411)
0c46e3ec66 : [maskedtensor] add basic tests and unary/binary/reduction tests from common_method_invocations (#82841)
2bc82163eb : Reduce memory usage requirement of test_warp_softmax_64bit_indexing in test_nn.py (re-open of #85037) (#85373)
76d60778eb : [ONNX] Use decorators for symbolic function registration (#84448)
c7c2578f93 : Skip NVIDIA driver installation if it's already there (#85435)
99ad8a3048 : [vision hash update] update the pinned vision hash (#85451)
34296e2f4c : SubgraphMatcher remove invalid matches (#85444)
4523ac7aa1 : [quant][docs][ez] Fix formatting for qconfig_mapping (#85306)
f21e77d9a6 : [mkldnn_matmul] enable mkldnn matmul for aarch64 bf16 devices (#83671)
26a861cb27 : [quant] Check if cuda quantizing while on qnnpack engine (#85423)
56a41b5998 : [composite compliance] ctc_loss (#84752)
1910c5847e : rebase and merge - add sleep (#85420)
caa0ab557d : Revert "Use fallback approach for nested matmul (#85311)"
0336308be5 : [AO] Callable norm function for sparsifier (#85236)
6fb182c86b : [doc] document pin_memory for randn (#85219)
7c31f6e672 : Use fallback approach for nested matmul (#85311)
5aa84c16db : [pytorch] cuBLAS addmm malfunction test (#85432)
9c127986bf : Fix labeling detection bug (#85429)
3dce26635f : Revert "test in parallel at file granularity (#84961)"
0278a141fc : csr <-> csc, csc <-> csc, bsr <-> bsc, bsc <-> bsc, bsr <-> bsr conversions (#85091)
a2cbbbd46f : Improve SymInt print and move to correct file (#85413)
3a09a1e8f0 : [doc] remove out argument from squeeze (#85222)
9a81da7ad1 : Update NCCL to current master and remove patch step (#85367)
d5adf8151a : [PolishTypo] inherentely->inherently, intentially->intentionally (#85325)
764cba6848 : add Python ref for isreal (#85361)
77f1f98479 : Re-introduce `torch.Tensor.to_padded_tensor` (#85293)
8e1ae1c19d : Add Krovatkin to symbolic-shapes team (#85422)
25a5ada426 : Typofix (#85421)
35943f30cb : Reference implementation for torch.Tensor.sum_to_size (#85338)
85073b8ddc : Add __all__ to fx, distributed and cuda submodules (#85080)
0217a8d049 : Revert "[fix] composite compliance: cumprod, _masked.cumprod, linalg.vander (#85330)"
0ac6311356 : Revert "[CUBLAS][CUDA GRAPHS] (re-open of #83461) Explicitly set the workspace for cuBLAS handles (#85292)"
0e194b3219 : Add Auto Request Review for reviewer assignment (#85414)
2a88f1b2d8 : Land "Make ceil,floor,round,trunc handle integers" (#85144)
6f2b390fc1 : Fix import of instruction count benchmark (#85359)
d9aa6dfe88 : Add Fake Cross Ref Mode, migrate sparse to it (#85382)
8107666c6a : test in parallel at file granularity (#84961)
2fb820455c : Revert "[pytorch] cuBLAS addmm malfunction test (#85084)"
eb94df28c7 : Use pip install cu117 (#85097)
90b64e231e : Update hipification logic for all ROCm headers (#85320)
2c285f3e9b : [quant][docs] README for FX Graph Mode Quantization (#85070)
5fa104a76c : Move functorch C++ into aten/src/ATen/functorch (#85381)
125e9256f4 : [FSDP] Add back `forward_prefetch` (#85177)
977f8fce3c : [FSDP] Simplify backward prefetch implementation (#85176)
563b065f5a : [fix] rrelu, rrelu_, & RReLU when lower bound > upper bound (#84996)
de0f3c4200 : Change Lezcano to lezcano (#85396)
0297c75c14 : [pytorch] cuBLAS addmm malfunction test (#85084)
b70c254ebb : Rework printing tensor aliases in CSAN error message (#85008)
3eb27229dd : as_strided symbolic support (#85264)
793deeefb4 : [quant][core][feature] Implement masked_fill for CUDA tensors (#85108)
308b26fe4d : Add nvFuser support for transpose (#84629)
2f4a517d67 : Ported matmul compositeimplicitautograd impl into core (#85239)
a3dc338ee1 : Revert "Exposing native _scaled_dot_product_attention to torch.nn (#85044)"
09965957cd : quantization: align observer dtype with reference model spec (#85345)
08f413bd6a : [vision hash update] update the pinned vision hash (#85380)
75451d3c81 : Address eellison's CR comments on AOTAutograd (#85370)
3c870dadc3 : [BE] Mark unused range-loop vars with `C10_UNUSED` (#85383)
3122a96ee4 : Revert "Improve and expose cpp_backtrace to python binding (#84896)"
9fdd8a8b7f : Exposing native _scaled_dot_product_attention to torch.nn (#85044)
d7029fea51 : Remove TS compatibility transition code (#85003)
73fbca1ea6 : Improve and expose cpp_backtrace to python binding (#84896)
52fd7e491b : Update torch.ops.aten.all ref to be symbolic-trace friendly (#85352)
f6a18d3d37 : [PyTorch] StorageImpl: cache size_bytes.is_symbolic() (#85309)
90fa744c09 : Fixed memory issues in linalg_lstsq (#85357)
cb8e73bb71 : fx quant: fix bug in custom module test (#85344)
62786a09d3 : Fix indentation in functorch limitations docs (#85346)
e1f634753c : Setup fake tensor and symbolic shapes once at beginning of AOTAutograd (#85233)
5f623f5c4c : Correctly handle duplicate arguments to AOTAutograd (#85301)
b9b27f7664 : Added `Tensor.to` overloads to `torch._refs.to` (#84802)
d3dec8097b : [fix] composite compliance: cumprod, _masked.cumprod, linalg.vander (#85330)
e24e17916f : Remove errant semicolon (#85356)
a3afb2c2f6 : Fake: fix conv_transpose2d striding (#82846)
e1ed485c65 : [MPS] Handle reduction of scalars in edge-cases (#83743)
d17b144e65 : Adding multigammaln ref and fix arange (#85153)
7a6c4d0c50 : [mta] APEX style Fused Adam (#81705)
00a1065286 : [pytorch] Inline std::forward definition (#85255)
9c1a6a522d : Make ones and zeros's ref accepts variadic size argument (#85117)
5e8f16b877 : Fix fake_tensor to_copy meta dispatch (#85337)
4012e623e8 : [CUBLAS][CUDA GRAPHS] (re-open of #83461) Explicitly set the workspace for cuBLAS handles (#85292)
39f482acdf : Add a reset() method to nvFuser FusionCache to enable proper resetting during tests. (#85319)
86d8c61c7c : Revert D39583438: Multisect successfully blamed D39583438 for test or build failures (#85277)
cf2f552cd8 : Add __all__ to torch.{fx, distributed, backends} submodules (#85079)
a4dca9822d : [composite compliance] prod (#81969)
077db3de92 : [MPS] Fix conv1d backwards crash for channels last case (#85283)
bcdef58a55 : [MPS] Fix the crash in bitwise ops on x86 platforms. (#85285)
6c48a01cef : [Quant] Improve performance of ONEDNN backend (#84470)
35088f283e : Revert "Python stack tracing OD flow (part 1) (#84362)"
8c7e20976e : [vision hash update] update the pinned vision hash (#85315)
c05ca0dbf2 : [torch.futures] Fix nullptr deref (#85304)
66907e7262 : [functorch] Fix dangling impls (#85299)
53fdd60635 : Revert "Reduce memory usage requirement of `test_warp_softmax_64bit_indexing` in `test_nn.py` (#85037)"
a998a8eb10 : Fix segfault for `out` with a large number of dims (#85294)
d9024ea284 : Setup torch/csrc/functorch/*; move CompileCache.{h, cpp} there (#85263)
1f4f05e59c : Python stack tracing OD flow (part 1) (#84362)
66a9cba221 : Reduce memory usage requirement of `test_warp_softmax_64bit_indexing` in `test_nn.py` (#85037)
e41d758e26 : Handle implicit real->complex casting for backward of stack (#84993)
cd7408e950 : Add aten _assert_tensor_metadata op (#84617)
6ed90379a8 : Revert "Legalize BFloat16 in NNC. (#83988)"
1456cca1fc : Fix exception handling, improve overheads and avoid constructing storage for element size (#84612)
cbe5469e88 : [PolishComment] Polish code comment, revelant->relevant (#85238)
8c952db13a : Fix segfault case for torch.ormqr (#85278)
555bb6cdb8 : Check that groups is > 0 in _convolution op (#85111) (#85248)
7234eb06f7 : Revert "Land "Make ceil,floor,round,trunc handle integers" (#85144)"
f0b06c64c8 : Fix bugs in sparse compressed tensor shape and device inference (#85240)
6a18616296 : Support for sym_strides() in backwards formulas (#85210)
f38f9dfbfa : When tracing SymInts, peephole optimize multiply by one (#85261)
ebf45a0785 : [NNC] support aten::_convolution when it is 2D conv (#84038)
b049493ed5 : Legalize BFloat16 in NNC. (#83988)
b27eb8d377 : Land "Make ceil,floor,round,trunc handle integers" (#85144)
cd32a86bf2 : Stop monkeypatching Tensor.backward() on `import functorch` (#85152)
5ce56d9377 : Stop loading jit decomps in eager_transforms.py (#85151)
6fd8e28a99 : Make addmm meta kernel consistent with mm (#84960)
3a51b557ef : Added docs and opinfo for narrow_copy (#84493)
b0c447e954 : [functorch] add batch rule for linalg.lu_solve (#85175)
d561aa944b : Adds normal prim, randn reference, and randn OpInfo (#85128)
17aefce0aa : [xla hash update] update the pinned xla hash (#85242)
7df0878b99 : [FSDP] Option to keep grads in lower prec (#85223)
9024015adf : [BE] Small improvements to device_count (#85192)
dadd89a8a6 : Add a flag to trigger inductor testing (#85183)
1378561d03 : [vision hash update] update the pinned vision hash (#85199)
b8bf11bbf4 : Add braces around single line conditional (#85207)
68929f4768 : Remove improper asserts. (#85206)
9d84db3b72 : Templatize checkInBoundsForStorage and setStrided for SymInt (#85205)
280e2f9283 : Fix bug in computeStorageNbytes (#85204)
12a19a4846 : Made tracing of proxy symints lazy (#85185)
5dd9610e9d : Refs and decompositions for index_{add,copy,select,fill} (#85002)
45a9dcd4dd : [BE] Add explicit `__all__` to torch.cuda (#85193)
8c9d7fabd6 : Add SymInt::guard_int (#85139)
b0a631cd14 : Add nondeterministic alert for `MaxUnpool1d/2d/3d` (#84766)
b8418e02eb : Create Cache for Fusion Reuse in NVFuser in Python Frontend for Primtorch (#85045)
d23ce29761 : allow changing the cuda allocator settings even after the process started (#84970)
81620c3360 : Revert "Faster mul(sparse, sparse) with broadcasting in dense dims. (#83428)"
98b8ef99e1 : Add refs for sinc and sgn (#85142)
e33b464ffc : Revert "Refs and decompositions for index_{add,copy,select,fill} (#85002)"
1838957e6f : fix external codegen kernel error checking (#85029)
652707abc0 : Don't cache model specs within PTMCoreMLCompiler (#85136)
2dbd2673b6 : remove symintnode bits in LTC (#85171)
02f654abca : Disable torch.library.Library with PYTORCH_DISABLE_LIBRARY (#85190)
dca42ec20c : [torchdynamo hash update] update the pinned torchdynamo hash (#85198)
21f2d55974 : [Profiler][Trivial] Make `test/profiler` folder. (#84273)
4a5edbf076 : Make param 'option' const& to prevent unnecessary copy at call-site (#84747)
32fc0b958e : Expose get_active_ddp_module api for torchdynamo DDP (#83333)
0a6f32619e : CoreML .mlmodel export support (#84784)
ca419c3338 : [NNC] add eltwise OPs: mish and elu (#80586)
377b5d6f8b : Added additional simplifications/caching for replacements and divisibility (#84918)
9d1155235b : [ONNX] Create decorators for symbolic function registration (#84709)
05ff3f8960 : Add symlink resolution in benchmark timer interface (#82734)
2f0b3de443 : Refs and decompositions for index_{add,copy,select,fill} (#85002)
d6c2080eb4 : [ONNX] Update ONNX documentation to include unsupported operators (#84496)
46843be1e6 : [ONNX] Update error messages (#85179)
a4c7cadca6 : Retry installing lintrunner if download fails (#85189)
14b3bdc025 : Revert "[FSDP] Option to keep grads in lower prec (#85134)"
4382da5d5e : Remove assertEqualIgnoreType from test_pooling (#85112)
cd7e6d4ad1 : [ONNX] New symbolic function registry (#84382)
735154354b : update torch.narrow doc (#85180)
5877cc9a9f : fix for rebase and merge (#85168)
17593f15bd : [functorch] Document DynamicLayer.{h, cpp} a bit more (#85178)
d559299ccf : [QNNPACK] Export cpuinfo-targets in clog CMakeLists (#84876)
c6c3346d5a : [FSDP] Short-term fix to remove `optim_input` (#84201)
a9258eba8e : [Testing] Port `bernoulli` and `multinomial` to ErrorInputs. (#74683)
a5d9d2aaa2 : [functorch] remove argnums partial helper function, rewrite test to use slice argnum (#84951)
776e0fe756 : Revert "Make ones and zeros's ref accepts variadic size argument (#85117)"
490727a35f : New calling convention for Python dispatcher (#85133)
e5fac7f5dc : Optimize torch.ops.ns.opname.overload accessor in torch dispatch (#85132)
607eccb13c : [FSDP] Option to keep grads in lower prec (#85134)
7e5616c9ff : Make ones and zeros's ref accepts variadic size argument (#85117)
38778add8d : flash_attention_helper mitigation: pass contiguous inputs (#85135)
7b3e177b87 : Increase docker build timeout (#85156)
29eba319b4 : Use alias for nop decomp (#84727)
d8eae6283d : Rename 'torch/ao/nn/quantized._reference' to 'torch/ao/nn/quantized/reference'. (#84974)
d710c95cc0 : Implement forward AD for scatter_reduce (#85000)
6162a04364 : fix half_to_float arg in *softmax decomp (#85120)
f37069aac7 : Re-enable fixed dynamo tests (#84969)
54c9c4e73d : Flip fake tensors on in aot autograd (#84968)
61ba125064 : Add warning about installing functorch via setup.py (#85095)
2e1ec5d18c : Re-enables some tests for linalg.det (#85141)
8b29b7953a : Fix behaviour of index_add / atomicAdd(bool,bool) (#85100)
4bdc0af53d : Added support for symbolic is_contiguous (#84829)
5652ab22f6 : [FSDP] Add `_set_flattened()`; `_is_flattened()` (#85038)
0ec19db7ac : [vision hash update] update the pinned vision hash (#85130)
b363e9874a : [torchdynamo hash update] update the pinned torchdynamo hash (#85129)
647aeb831f : torch/jit/_trace.py in compare_outputs(original, reference, match_wha… (#84850)
54bccbb22f : [mergebot] rebase + merge (#85028)
89525cbd69 : Add variable_list support to ExtractVariables struct (#84583)
50733c8bba : TorchDynamo Remove context manager (#85124)
95a2c3df31 : Replace `expectedAlertNondeterministic` with simpler check function (#84808)
1275e2df1f : Remove getattr magic method from OpOverload (#85090)
00ce302c07 : Performance optimizations to proxy tensor (#85049)
d49943bda8 : Faster mul(sparse, sparse) with broadcasting in dense dims. (#83428)
abaf99d37f : Enable unary elementwise inplace ops for all sparse compressed layouts. (#85031)
27ec195a81 : [functorch] fix jacfwd so all inputs get wrappers (#84915)
64899c5d10 : change the type of storage_offset to SymInt (#85102)
7f88934a8f : [reland 2] Call jit decomp in VariableType to improve forward AD coverage (#84976)
7dcc723d35 : [c10d] Ensure collectives are called with the same dtype for all tensor params. (#84664)
5558deac59 : [ONNX] Add `caffe2/python/onnx/**` to merge rule (#85118)
b1f5644fad : [frontend] Print real type for Argument (#85103)
52a2b61203 : Fix fetch function which breaks user code (#85099)
2386cd2945 : [reland] [numpy] add torch.concatenate, alias of torch.cat (#85073)
25ecc4889d : [FSDP] Fix memory regression! (#85087)
4306a18826 : Fix tied params with Fake Tensor (#85065)
2e41fbc211 : [ONNX] Enable test_custom_opsets_inverse (#85013)
a225f3cfce : torch.zero_ on a sparse compressed tensor resets nnz to 0 (#85030)
21e656b020 : [ONNX] Add `third_party/onnx` to merge rule (#84715)
6bd7d0f856 : doc string fixed in torch.distributed.reduce_scatter (#84983)
d52452b3d1 : [Functorch] Set rpath for Mac builds (#85086)
4db1588ca0 : [functorch] follow-up vmapjvpvjp (#84992)
5014282792 : Removing cuda 11.3 nightly builds (#84866)
ebd4e90ff7 : [Profiler] add config option to remove 'Call stack' field from trace file (#84982)
a22f4f535b : Add xpu path for GRUCell (#83723)
17925122d0 : Rewrite new_zeros, new_ones, new_full decomp with aten.full (#84946)
65158b8876 : empty strided symint (#84830)
d05f07494a : Use angle brackets in include for internal clangtidy (#85032)
be800cd6ea : [vision hash update] update the pinned vision hash (#85061)
625e44c1df : [torchdynamo hash update] update the pinned torchdynamo hash (#85060)
62af1c9eed : [Easy][FSDP] Change `assert` -> `p_assert` (#85052)
cdd625ba70 : [Easy][FSDP] Remove outdated comment (#85051)
cc62ad79c7 : [FSDP] Fix `pin_memory()` for CPU offloading (#85048)
e7ad699be0 : Resubmit bfloat support for im2col,col2im (#84372)
8ca1839d32 : Python Dispatcher integration with C++ dispatcher (#85050)
3a107bc9be : [functorch] fix vmapvjpvjp test for prelu (#84939)
8badb00ff4 : [functorch] fix conv_transpose with groups batching rule (#84938)
8cb7826889 : [CheckpointWrapper] Reentrant kwarg support (#84908)
55ca6901a7 : [CheckpointWrapper] Decouple CPU offload (#84907)
166ea7e6b1 : [functorch] fix jacrev so all inputs get wrappers (#84914)
1a6cf6ea88 : [MPS] Fix int rounding div crash on M1 (#85016)
976f8bee94 : [c10d] add ncclGetLastError to NCCL pg (#83724)
ccade9410f : Don't detach when making views; force caller to detach (#84893)
2711b9fa63 : Revert "[CUBLAS][CUDA GRAPHS] Explicitly set the workspace for cuBLAS handles (#83461)"
a1a95d402d : Fix inheritance in TestDataLoaderUtil (#85018)
713d8b8552 : [CUBLAS][CUDA GRAPHS] Explicitly set the workspace for cuBLAS handles (#83461)
7b64c885d5 : Enable manual test config label selection on GHA macos (#84895)
fa7bf3e2dc : Revert "[numpy] add `torch.concatenate`, alias of torch.cat (#82946)"
23b7a5fc7a : Shard distributed tests on non CUDA focal (#84891)
3e57c9550e : Ensure as_strided_tensorimpl is never called with MPS (#85020)
5271494ef2 : [CUDA graphs] Fixes errors in RNG seed (#84967)
270e5e519d : [numpy] add `torch.concatenate`, alias of torch.cat (#82946)
94b67f4cd8 : Revert "Create Cache for Fusion Reuse in NVFuser in Python Frontend for Primtorch (#83267)"
4247cc98a2 : [MPS] Fix mps to cpu casting from a smaller dtype to a bigger dtype (#84928)
1a81ab3ba5 : Test tracing consecutive comms on the same input tensor (#84980)
0f30059227 : Remove eager mode support form CommTensor (#84978)
b6d6a78c12 : [ROCM] test_batchnorm_cudnn_nhwc (#84603)
706b990306 : Revert "Python Dispatcher integration with C++ dispatcher (#84826)"
74ead61944 : [2/N] [Dispatchable Collectives] Extract ProcessGroup::Work into a separate class and update references (#83680)
54c46e4f90 : Upgrade to CUDNN version for cuda 11.7 (#84964)
6750946b82 : Skip validate_view_consistency for nvFuser tests (#84858)
35f6a69191 : Python Dispatcher integration with C++ dispatcher (#84826)
44c30c5d1c : [quant][docs] Add example for the error message for fixed qparam ops (#84666)
55ca297d4e : Remove enable_recursive_torch_dispatch (#84945)
922560b872 : Removes unnecessary namespace of functions used only in einsum (#84955)
d26e9cd9b2 : [vision hash update] update the pinned vision hash (#84975)
b28d82cb1d : [torchdynamo hash update] update the pinned torchdynamo hash (#84912)
d00cabae7b : Fix `expectedFailureMeta` to avoid skipping tests (#84875)
8cbbd3a25f : Avoid nested CommTensor wrapping (#84963)
8ca057eb71 : Revert "Don't detach when making views; force caller to detach (#84893)"
a185dc2e63 : [ROCm] re-enable tensorexpr and test_openmp (#81367)
cb9ef4668e : Updated library level maintainers for torchtext (#84950)
d05a11337c : [CMake] Add functorch target (#83464)
26b5986297 : `ReflectionPad` supports `BFloat16` (#84949)
fdd3665413 : [Profiler] Make `LibKinetoClient::stop()` directly call `ProfilerStateBase::pop` (#83965)
3bb8d6a93c : Don't detach when making views; force caller to detach (#84893)
ec916bf6af : Create Cache for Fusion Reuse in NVFuser in Python Frontend for Primtorch (#83267)
59bb5c933b : Adds mruberry as superuser (#84869)
c61e89545e : disable onednn gelu for empty input (#84926)
25d91e0a9d : Updating cudnn_frontend to 0.7.1 (#84943)
36d79143ce : Revert "[reland] Call jit decomposition in VariableType to increase forward AD coverage (#84151) (#84675)"
38192f63cd : Add __all__ for a few distributed modules plus a little typing (reland) (#84872)
53c71e2142 : [functorch] test - vmapjvpvjp (#83375)
b4a881afac : [ROCm] Remove gfx900 from base docker build and Pytorch build scripts (#80015)
0e8c5cf847 : Revert D34636039: Multisect successfully blamed D34636039 for test or build failures (#84942)
81da50a972 : Return device count using nvml (#84879)
94f20c3514 : Memoize `torch.cuda.device_count` (#84878)
bda8a5729b : [Nested Tensor] Create differentiable nt to tensor view functions (#83371)
fa86874bbd : Fix intermittent link errors in NCCL build (#84245)
74d0c64708 : Don't use reentrant dispatch for composite compliance (#84909)
b4799736ee : autograd: fix non-deterministic output in codegen comments (#84695)
2e65f187cd : [Functorch] Delete unused files (#83777)
33352336b4 : [FSDP] Add rate limiter (#83917)
39676a977f : [FSDP][Easy] Save unpadded/padded unsharded sizes as attributes (#84366)
afcc7c7f5c : [FSDP] Generalize prefetching; lower unshard/reshard to handle (#83665)
a2acead002 : [FSDP][Easy] Minor cleanup (#84761)
8c2da0616c : Revert "Upgrade to CUDNN version for cuda 11.7 (#84859)"
351ac63cdd : coo binary_op intersection primitives (#83427)
3f047b2a90 : SymInt support for computeStride (#84905)
8b8141e971 : SymInt support for multiply_integers (#84904)
ecee6c742f : SymInt support for InferSize (#84903)
7e900f204f : Avoid throwing an exception when ScriptList doesn't match. (#84921)
33bb8ae350 : Set shuffle to DataPipes with set_shuffle API (#83741)
7a9ab5c232 : Move Python argument related functions to cpp file (#84919)
99cfaf9eee : [FSDP] Subtest prefetching for `test_fsdp_grad_acc.py` (#84601)
dbd38f63f5 : Include CoreML error description in exception thrown when inference fails (#84804)
e980ff8eb9 : Remove unused method_assignments (#84917)
d951165bd8 : [C++ API] Added missing antialiasing path in interpolation C++ api (#84599)
2fbc0fab20 : Setup sccache for linux test (#84916)
9d5b3e4da8 : [FSDP] Remove `forward_prefetch` (#84600)
8f92140c40 : [vision hash update] update the pinned vision hash (#84913)
dc865bff4e : [Profiler] set_class util (part 1 of Record Optimizer) (#84779)
6d222116a1 : [Documentation] Minor rendering issue (#84856)
964fde7d7c : Raise AttributeError for __origin__ to avoid C++ exception raise (#84880)
260b716c65 : [Mobile Tracer] Allow tracing multiple input models at once (#84833)
5a29db142e : Use int64_t index type in multiplications to avoid integer overflow in max_pool2d and avg_pool2d on CUDA (#68682)
918cd8b9ba : [torch::deploy] Ignore return value of function declared with 'warn_unused_result' (#84862)
9b16bf04af : Fix MPS test sanity (#84889)
d09e8b23bf : [primTorch] Add repeat and unfold_copy references (#81374)
d673332782 : Forward fix for FB internal breakage (manual export of internal diff D39421802) (#84871)
5238404f4d : Increment `version_range_max` (#84815)
c85e47b368 : [BE][PT-D] Fix race on checkpoint file (#84881)
3baa363f71 : [Functorch] Make minpybind less likely to crash (#84788)
ccb1ff2233 : Updated invalid type error message to explicitly say only float types… (#83170)
cfeb531700 : Fix failing test_model_dump due to empty file (#84744)
cd3731bd17 : [BE] Refactor `_is_compiled()` function (#84877)
31cc03cc13 : fixing English typo in MPSFallback error message (#84834)
bb4e96c964 : [reland] Call jit decomposition in VariableType to increase forward AD coverage (#84151) (#84675)
a2cccb2d6b : add oneDNN graph fuser context API and unittest (#82491)
c304a1206b : [FSDP][Easy] Remove unused functions (#84598)
9e5563dbb1 : Delete SymIntArrayRef wrapper struct (#84837)
7e43c6f28e : [ONNX] replace AT_ASSERT with TORCH_INTERNAL_ASSERT (#84790)
034f2db1fd : Revert "Delete SymIntArrayRef wrapper struct (#84837)"
c3df78f436 : TARGETs changes for flash attention and cutlass (#84781)
9064bf2c72 : Upgrade to CUDNN version for cuda 11.7 (#84859)
4f6027b78a : [opinfo] narrow: add new sample for Tensor overload (#84785)
a06f2edab6 : [Build] Replace message() in caffe2/CMakeLists.txt with message in cmake/Summary.cmake (#84814)
d6b2f5c643 : [Quant][fx] Remove `remove_quant_dequant_pairs` and fix tests (#84203)
e217b30b0f : Add `torch.nested` namespace (#84102)
9c78f599e4 : Delete SymIntArrayRef wrapper struct (#84837)
8cdc0679b9 : [ROCm][jiterator] unskip additional tests (#84371)
2698f99dc7 : fixing form link for governance (#84861)
d2d145a400 : [xla hash update] update the pinned xla hash (#84853)
5ea2eb304e : Converted batch norm over to use symints (#84113)
caf034a9a2 : Fix bugs in how LTC decides whether or not to symint op or not (#84832)
bfc6db0a5a : [torchdynamo hash update] update the pinned torchdynamo hash (#84828)
5f960db0e0 : [Mobile] Add support for dtypes and custom classes in model tracer (#84795)
0455c9b036 : [torchdynamo hash update] update the pinned torchdynamo hash (#84797)
b5e921b89e : [vision hash update] update the pinned vision hash (#84798)
21bf9a467e : [jiterator] logical_{or, xor} : complex (#75947)
08c4f8c7a7 : ProcessGroupUCC tests (#83285)
2765243cd5 : [torchgen] Refactor static_dispatch to take in source signature (#84384)
c5a8946e40 : Revert "Revert "Redo how custom/python_custom methods on TensorImpl work (#84796)" (#84806)
bccc26f365 : [MPS] Handle casting for div operation (#84742)
ddc56732ae : [GHF][BE] Delete land checks branch when done (#84767)
b7d2818598 : Return contiguous tensor from native_layer_norm reference (#84799)
5e25c2b4cc : Add missing spaces to error messages in PG (#84780)
ca3b2bfbe3 : Revert "Redo how custom/python_custom methods on TensorImpl work (#84796)
96e4bd9500 : [docs] Person of interest update: sparse, torchrec and smaller tweaks (#84772)
f598b5be18 : Remove last bit or torch.eig from functorch/test/test_ops.py (#84787)
cd50512d41 : Upload the benchmark result to S3 and post the URL (#84726)
01c54ad6de : Remove deprecated torch.eig (#70982)
c4a5255df7 : [Mobile Tracer] Use unified source file list for BUCK build (#84770)
1dabb51a16 : quant: add `extra_repr` to HistogramObserver (#84760)
0fc02dbba4 : flash_attention integration (#81434)
219ff26172 : Revert "Add __all__ for a few distributed modules plus a little typing (#84119)"
2614079f89 : OpInfo: Prevent clamp sample inputs from sharing tensors (#84696)
5c0c8f2ce3 : [coreml][bug] coreml gpu flag not set (#84725)
fe47e61425 : [QNNPack] Update GoogleTest SHA256 Hash (#84754)
daffff9986 : [Profiler] Make `RecordQueue` manage the lifetime of `PythonTracer`. (#83964)
328538700a : [Profiler][Trivial] Move `PythonTracerBase` to `torch/csrc/profiler/orchestration` (#83895)
e8b9501861 : test: adding uniform (#84292)
a3855cc611 : Make xnnpack based convs thread safe (#84602)
2c4eaddb28 : Use exclude_zero in i0e sample inputs function (#84453)
93aef3a010 : Use presence of _symint in kernel name to generate symint sig or not (#84579)
18a31cc044 : [Mobile] Fix The Build For Model Tracer (#84755)
52224139b8 : Revert "Convert NoopPyInterpreterVTable into a Meyer singleton (#84656)"
ac364f8ba1 : Removing .github/scale-config.yml, now this repo is using the config in test-infra (#84753)
28c830ac07 : [FSDP] Optimizer states may be on CPU, copy them to GPU before gathering (#84708)
0fd8f6b93c : Missed one CHECK_NOTNULL in #82032's find-replace (#84720)
d12f3524b7 : Add user facing documentation for CSAN (#84689)
a8198a0955 : remove c10_defs.bzl and embed its logic directly where it is used (#84595)
214a6500e3 : [quant][docs] Additonal fixes for quantize_fx docs (#84587)
0a89bdf989 : Set up aten/src/ATen/functorch directory; move some files there (#84648)
8e57ce63a1 : Add CSAN support for CPU synchronizations (#84428)
7702ca4993 : [functorch] Simplify BatchedTensorImpl (#84642)
0d46bfac5b : [Mobile] Use -ffunction-sections and -fdata-sections for Mobile builds (#84704)
747f27a9ad : [Mobile] Update build_mobile.sh to allow lite interpreter and tracing based builds (#84647)
27e5299ee3 : [DataPipe] Fix mishandling of exception message when error is not iterable (#84676)
df3377fb64 : [functorch] delete functorch/csrc/Constants.h (#84639)
09bcc006e9 : ROCm support for test_lazy_init (#84333)
67d6f7160c : Add synchronize hooks (#84427)
591b75bf98 : Redo how custom/python_custom methods on TensorImpl work (#84641)
d802fcfcd8 : Add config to PrimTorch's nvFuser executor (#84482)
6f72c13f9b : test mkldnn conv2d channels last when weight is nchw format (#77348)
1840f24df7 : [FSDP] Ensure that all ranks use the same order to iterate through optimizer states (#84654)
2211949513 : Moving CommTensor from tests to private _spmd folder (#84719)
b00a4b7cf1 : [torch.fx.wrap] Use callable / function.__name__ instead of function.__code__.co_name (#84373)
1c8cb08770 : [vision hash update] update the pinned vision hash (#84731)
1c8f02d406 : [torchdynamo hash update] update the pinned torchdynamo hash (#84730)
e6ee8e613d : Return x.alias() when transpose is an nop (#84674)
dbdc1cd590 : [ONNX] Fix node attributes when namespace is aten (#84211)
2fa8142cf9 : [ONNX] Rename constants for clarity (#84645)
bc3683de83 : [quant] remove mkldnn headers in OnednnUtils.h (#84195)
7a5d5a0020 : Disable Transformer/MHA fast path when autocast is enabled (#84722)
f23a1cf805 : Fix conda cmake setup for macos x86-64 (#84682)
6f21680563 : Add __all__ for a few distributed modules plus a little typing (#84119)
9b8e0a38a6 : [functorch] excise older custom_vjp prototype (#84638)
8c91dd2677 : [functorch] Add some C++ documentation (#84637)
05b778f958 : Revert "Add mkl implementation for exponential on CPU (#69967)"
6bedb7a75e : [aarch64] Fix aarch64 build so that quantize_val_arm is defined (#84564)
a6e6276c8b : Revert "Moving CommTensor from tests to private _spmd folder (#84655)"
c5cf8f6b28 : fix [rpc] Wrong usage of RRefContext::handleException #71458 (#83166)
1459a909b4 : Added mv, mm, and binary_cross_entropy_with_logits decomps (#84451)
fe353e1413 : Enable manual test config label selection on Windows (#84669)
07dad15583 : Moving CommTensor from tests to private _spmd folder (#84655)
9beb0c0b87 : Reland: [Profiler] Unify global and thread local profiler lookup. (#83894) (#84668)
bea0184033 : Reland: [Profiler][Trivial] Create orchestration folder and move observer management there. (#83893)" (#84667)
b9793a66b5 : Fix linalg.norm sample inputs function and related failures (#84452)
335033f718 : asyncio increase throughput (pytorch change) (#84301)
368742d059 : Dispatch for xpu in adaptive_avg_pooling (#84541)
eddc2370ec : [functorch] vmapvjpvjp (re-enable test with skips and xfails) (#83999)
76fc690522 : Revert "[functorch] vmapvjpvjp (re-enable test with skips and xfails) (#83999)"
9addeccb6b : [functorch] vmapvjpvjp (re-enable test with skips and xfails) (#83999)
8bd9fe3f49 : Changes to prepare for fake tensors on in functorch by default (#84432)
b288cfd328 : [Profiler] Add quoted metadata API to remove empty trace cpu_op metadata (#84128)
942c0f31df : [ONNX] Align Optional Type in block (#83599)
49ec8d32c7 : Suggest draft PRs in contribution_guide.rst (#84658)
0945074a8e : Preserve stacktrace over functionalization (#84662)
cb6ba27db3 : [vision hash update] update the pinned vision hash (#84679)
889540d091 : [torchdynamo hash update] update the pinned torchdynamo hash (#84678)
e0229d6517 : Remove caffe2 mobile (#84338)
9669e3c6ec : Ignore UB on multiply (#84665)
1a1bcc7361 : Actually chown artifacts (#84672)
93359bf9b3 : Convert ConcretePyInterpreterVTable into Meyer singleton (#84657)
9162bc0252 : Convert NoopPyInterpreterVTable into a Meyer singleton (#84656)
29672b2136 : [functorch] add pinv batch rule (#83761)
586832ce65 : Add underlying_store property for PrefixStore (#84640)
e68df8e4a1 : Turn on functionalization by default in functorch (#84435)
6b2111619e : check rate limits of other tokens too (#83632)
9532c7e267 : [functorch] add matrix_rank rule (#83760)
e14f46f9dd : Add host and port to TCPStore pyi definition (#84636)
c7f6deb667 : [PyTorch] Guard against self assignment in SymInt (#84375)
fc4acd4425 : Fix error in the index range math expression in the docstring of MultiMarginLoss (#84513)
d892d5d682 : [CUBLAS][TF32][CUDNN] Update numerical_accuracy.rst (#79537)
acb4a09628 : Revert "Call jit decomposition in VariableType to increase forward AD coverage (#84151)"
31ef8ddb8c : add option to remove passes (#84425)
2feb31cb26 : Improve torch::jit::as_{module,object} performance (#84399)
2b2e0fddf8 : Add CUDA Sanitizer (#83984)
19e27b1556 : Make dispatcher registrations of SymInt functions backwards compatible (#84557)
ed46b9670e : add randomness kwarg to jacfwd (#84220)
50ae5c9141 : set workspace size to 4M (#74159)
87738f2073 : Remove expired c10d::broadcast backward compatibility check (#84107)
99b7eb4dfb : move internal only PyTorch test defs into fb/ subdirectories (#84605)
d3d163af80 : Add xla/ folder to gitignore (#84632)
42d99e6f19 : Call jit decomposition in VariableType to increase forward AD coverage (#84151)
e31ad1c2d3 : [reland] Move decompositions and helpers for jvp from functorch into core (#84581)
3eb16509c7 : optimize householder product backward to be more memory-efficient (#84627)
e96fb5d58c : [c10d] Fix docstring of scatter_object_list (#84596)
a47bc96fb7 : [composite compliance] fix linalg.eigvals (#84137)
89c4654ba9 : Add scatter_ to CommTensor (#84606)
f43c38bdc8 : Add broadcast_ to CommTensor (#84604)
a24d7a8565 : Add reduce_scatter_ to CommTensor (#84592)
e4519548a5 : Supported nested lists in CommTensor and enable tracing allgather_ (#84585)
189768ed64 : Add mkl implementation for exponential on CPU (#69967)
9e7af4e8d4 : Add alias info to torch._C (#84580)
ec3939a62f : Detect `__code__` a bit more reliably. (#84610)
07d398fb26 : [composite compliance] linalg_householder_product (#84180)
045ebc771d : [BE] Use `teardown-linux`/`chown` actions for binary builds (#84449)
1a33e944b5 : nvfuser torchbench patch (#84411)
7c3102f3f0 : Add ShouldSyncTensor interface (#84418)
c4e0c927e3 : [c10d] Add a soft error handling mode (#84386)
5b58140d1a : Add deterministic impl of `scatter_add` CUDA for all input sizes (#79466)
039b0146f9 : [vision hash update] update the pinned vision hash (#83900)
15c5baf878 : Throw on data dependent ops (#83567)
0be77d5415 : [torchdynamo hash update] update the pinned torchdynamo hash (#84613)
b168c4faa2 : Make CommTensor Generic to arguments and outputs structures (#84576)
00e0228050 : [BE] Delete "Check for new workflow" check (#84608)
06ebe2d5bc : Add watchdog to TorchElastic agent and trainers (#84081)
d9ceda49c4 : ONNX: fix default function value in _optimize_graph (#83996)
16f8dc00f0 : [nnapi] remove unused field 'order_' in nnapi.h (#84067)
166dec74b5 : Revert "Dispatch torch.norm to linalg.vector_norm and linalg.matrix_norm (#81761)"
0e49bcfd41 : [aarch64] Use cross build ld/ar/objcopy when creating libraries for cross building etc (#84558)
1cad744694 : Enable select.int when NestedTensor requires grad (#83875)
752c3bcb47 : Enable nvfuser tests for refs.broadcast_to and refs.broadcast_tensors (#84337)
aec76e391f : circleci - add master back, retry checkout for ios (#84443)
7a7b05802a : Add col2im_batched kernel (#84543)
bab1304f59 : Add step closures (#84300)
02da9437b0 : Store SymInt out of line (#84390)
7f90606309 : [static-runtime] update generator for the modified tests; re-run autogen script (#84437)
6363b1b358 : Add nvFuser support for aten.native_batch_norm_backward (#84546)
7243264c61 : fix: Allowed optimizers with more than 2 betas (#84486)
e20f217295 : Remove unnecessary decomposition_table= from test/test_prims.py (#84188)
88b1cc885c : Removed tri[lu]* tests, superseded by OpInfos (#84256)
92a6b970ba : Be compatible with SYCL 2020 and SYCL1.2.1 for sycl.hpp (#83259)
c4e8d6282b : Improve getitem syntax for TensorType (#84555)
fa99b7b8f7 : [bazel] fix integration test (#79843)
4f0b9f3c31 : move PyTorch internal-only starlark files into fb/ subdirectories (#84548)
c794ee5cc1 : Reenable TestCppExtensionJIT on M1 (#84552)
c771d73461 : [composite compliance] fix max_pool1d (#84127)
139599ba95 : Contiguify bias in slow_conv_transpose3d kernel (#84125)
ee228ad949 : Revert "[BE] Use `teardown-linux`/`chown` actions for binary builds (#84449)"
faac3dbce2 : [optim] asgd : handle complex params as independent real params (#84472)
f725009a48 : as_strided supports SymInt; codegen supports optional SymInt (#84393)
ee57f5c6c8 : fix skipIfTorchDynamo on classes (#84392)
5e9c26c8e2 : [maskedtensor] adding reductions (#82839)
f125bd2cbb : Support torch.ScriptObject in torch::jit::as_object (#84398)
207a5a8fa9 : [torchdynamo hash update] update the pinned torchdynamo hash (#84383)
d2b8b8f291 : [aarch64] Unused variable (#84549)
26c136a135 : Use TensorBase in Shuffle and WeightNorm cpu kernels (#84499)
6f29642b6f : Remove Tensor.h includes from spdiags cpu kernel (#84500)
1a16b2576f : [BE] Use `teardown-linux`/`chown` actions for binary builds (#84449)
91a5f52f51 : Decomp for nn.functional.grid_sampler_2d (#84350)
acb11da556 : Increase default test timeout for distributed tests (#80330)
da99008d37 : fix typo in torch/package/_mock.py (#84508)
e79d0ebfa6 : Fix typo in core.py (#84534)
1896d80191 : [PyTorch][Profiler] Increase max number of elements to record in execution graph (#84285)
7e05879b46 : Fix fx test for S3D (#84526)
437b066e26 : [xla hash update] update the pinned xla hash (#84533)
edab44f6dd : Support a few corner cases for nvFuser executor (#84416)
9a6aa9053f : Don't convert INT64_MAX start index into zero (#84509)
e91c1e65b6 : [aarch64] Fix _mm_pause() on aarch64 (#84505)
7c4c7dafbd : [ONNX] Add onnx::LayerNorm support for version 17 (#84293)
6d6e04d6cc : [test_nn] move dropout tests to test/nn/test_dropout.py (#84165)
e46c1c7931 : [aarch64] Cast to signed char to fix aarch64 build (#84429)
388368b699 : [ONNX] Fix type annotations and enable type checking for all apis (#84091)
2a332afbf4 : Add SymFloat, support SymInt to SymFloat conversion (#84284)
7f5da70ef0 : Avoid hitting the fused path in Linear for xla backend. (#84503)
3dfbf09afe : Optimise the decomposition for `adaptive_avg_pool2d` wrt. TorchInductor (#84483)
ab6c57217a : Add NCCL PreMul Sum to c10d `reduce` ops (#84243)
0b363c5c5c : don't synchronize single element any/all reductions (#84465)
5ffda02388 : Fix alertCuBLASConfigNotDeterministic to respect warn_only=True (#84215)
65beff5acb : Dispatch torch.norm to linalg.vector_norm and linalg.matrix_norm (#81761)
72f0f24a76 : remove unneeded _to_copy meta (#84460)
9b115c7bd3 : Sparse Compressed Transpose add support for Batch dims and BSR/BSC layouts (#82122)
0192a34910 : Dense -> CSC support batch dimensions (#83086)
a5a01e443c : Dense->BSR performance improvement (#83085)
f0e5b73364 : Dense -> CSR support batch dimensions (#83084)
2d969dc2ca : Revert "Support a few corner cases for nvFuser executor (#84416)"
f803fa9fc9 : [Nested Tensor] Add a NestedTensorUtils header and cpp file for organization (#84385)
ae67099e88 : Fix type annotation in `_ConvNd` for in_channels (#84302)
3db3845f5f : Support a few corner cases for nvFuser executor (#84416)
0fd173b097 : Revert "Support a few corner cases for nvFuser executor (#84416)"
3ac9f6683d : Support a few corner cases for nvFuser executor (#84416)
cb4421b19c : [Proof of Concept] Use labels to select the test configs to run (#83690)
97b2dff600 : Add Initial Support For Fake Tensor Constant Tracking (#84387)
832ce5f8fa : Adding codeowners to quantization, sparsity, ns, etc. (#79505)
f6ce2a442e : Refactor PyInterpreter to use normal vtables (#84388)
241c99232e : Fix typo (#84439)
edec9698ab : Fix ScripModule typo (#84444)
375d6cd5b7 : Revert "Move decompositions and helpers for jvp from functorch into core (#84358)"
6ef85dc990 : Fix minor typo in rpc_test.py (#84431)
a65b88d516 : Import forgotten pack_weight_bias in rnn.py (#84315)
73cb6cf8ae : Fixing back invariant on offsets (#84433)
a3c60a4db4 : Move decompositions and helpers for jvp from functorch into core (#84358)
eaab653376 : Read via FileAdapter when loading files in torch if not flatbuffer - Part 2 (#84296)
a563a4880f : [Edge] Add an option to avoid adding base ops to static op library (#84360)
ff56f1c30d : Define the SYCL device version assertion used in the other backend, like XPU (#84106)
1463c6f3de : Increase distributed shards (#84430)
ce1b727e77 : Disable autocast cache in torch.cuda.make_graphed_callables (#84289)
d39490a711 : Add meta function for repeat (#84349)
0fb1495512 : [aarch64] Fix ATen-cpu aarch64 builds (#84294)
5e5c610a58 : Move slow-grad checks to CUDA-11.6 (#84313)
673b35c847 : Better reshape with autograd support (#82754) (#84154)
9bcad063d8 : disable ios on circleci b/c failing (#84438)
88802719b6 : [FSDP][Easy] Move utils to `_utils.py` (#84212)
e71370064c : Improvements to FX Minimizer (#83833)
dd82b31e55 : [fx] Add metadata to fx.GraphModule (#84378)
8b578849b4 : Revert "[Profiler][Trivial] Create orchestration folder and move observer management there. (#83893)"
5a73a0291d : re-enable ATen packedtensoraccessor_test (#84397)
fd756caa36 : [ONNX] Support nn.init.normal (#84149)
5d39e8de57 : add matrix rank op info tests with non-default kwargs (#84074)
041edeeecb : Fix several typos (#83823)
7a348a1d4a : Fix internal breakage caused by #82134 (#84363)
7ffa10036c : Revert "[Profiler] Unify global and thread local profiler lookup. (#83894)"
6dc9223c8b : Sparse_coo: Be more aggressive in setting coalesced True to avoid surprising behaviors (#82426)
2e0f5bce39 : Revert "Fix several typos (#83823)"
bf62ece536 : [static-runtime] add schema checks to most of the ops where these checks are missing (#84163)
d648375f13 : [GHF] Changing the ordering in merge rules to allow more appropriate messages to be raised first (#84359)
bfdfeecd15 : Add per-op MPS gradient tests and update skips (#84242)
f1ee162193 : Use SymInt signature to compute saved variables (#84354)
5e2c23377a : LTC codegen appears to be hardcoded to only support tensors (#84355)
7d9e546738 : Replace assertEqualIgnoreTypes in common_nn.py (#84210)
5cfe769387 : [primTorch] Add refs for `reshape_as`, `view_as`, unify tests (#84222)
8778f33744 : Dense <-> bsc conversions (#80781)
0909639c90 : fix dispatch declaration bug about quantized op (#83649)
70ef06cc19 : fix and enable ATen ExclusivelyOwned_test (#84395)
521d1071f8 : [quant] Subpackage import in nn.quantized (#84141)
546e5fa0c5 : register skipped ATen tests in CMake (#84345)
65e887c041 : Remove unnecessary copy from torch._refs.to, add OpInfo for torch.Tensor.to (#84270)
90d6112a94 : Test distributed backends in parallel (#84034)
693ed8b147 : [1/N] [Dispatchable Collectives] Create Backend class (#83679)
ece0002c4b : [ONNX] Disable autocast cache in exporter (#84219)
18264432f7 : [ONNX] replace all _C._flatten to torch.jit._flatten (#83598)
f701cb04fb : Test Dynamo CI w Fake Tensors (#84282)
ef3ab31f1c : Decomp for aten.im2col (#84303)
cd96f3f676 : Use register_meta for everything in meta_registrations (#84297)
305c6a6c35 : [FSDP] Fix the FQN not found issue for load sharded_state_dict when using activation checkpoint (#84253)
e8885a872c : [CI] Move bazel from 11.3 to 11.6 (#84314)
fddfc4488a : Further improve mergebot messages (#84283)
c585e149e2 : Process for maintaining Build + CI contributors list (#83869)
4b8ae04788 : [BE] Delete torch._dl extension (#84361)
cfb9d0d233 : [DataPipe] Fixing `map` function signature validation (#84279)
744019ece7 : [AIBench] Pass Vulkan Profiling Data to Kineto Profiler in lite_predictor_benchmark (#84185)
a0ccfe0847 : Temporary fix to not fail concurrent viable/strict updates (#84324)
84ceebebf9 : [FSDP] ufmt `flat_param.py`, `flatten_params_wrapper.py` (#83664)
040263d7dc : sort ATen tests in CMake (#84344)
65f98eb47d : Revert "Add meta function for repeat (#84349)"
6efadf7e7e : [ROCm] guard ROCm-only files in NVFUSER_RUNTIME_FILES (#84312)
762890d11e : [FSDP] Retire `self.device_id`; clean up ctor (#83663)
85931eaa6b : Rename fake_result to val (#84331)
85b889fa5f : [primTorch] Add ref for `poisson_nll_loss` (#83805)
71ce9cd072 : [primTorch] Add decomp for `soft_margin_loss` (#83804)
305af90d0f : [primTorch] Add docstring and promotion for `l1_loss` ref (#83803)
44bc6db8f8 : Add meta function for repeat (#84349)
7834f557d7 : Add dynamo_timed to aot autograd (#84307)
14093b5979 : Revert "Use register_meta for everything in meta_registrations (#84297)"
bf67589915 : Escape curly brackets in FxGraphDrawer _typename (#83604)
b170db8554 : build/test MaybeOwned_test in OSS and fix it (#84342)
a27a4a02fe : Refactored proxytensor to clean up separate branches (#84325)
8843f5b986 : remove data-dependent shapes from some distributions (#84322)
6a3ecda5a2 : Started storing faketensor/symbolic shape metadata on FX nodes in make_fx (#84114)
79e3a39f95 : [BE] Remove unused `export.h` include (#84305)
abaf8112e6 : ci: Replace setup-miniconda with test-infra version (#84236)
b343febe61 : [torchdynamo hash update] update the pinned torchdynamo hash (#84317)
8cd296f680 : Use register_meta for everything in meta_registrations (#84297)
71d99662a0 : add nvidia-smi to run_torchbench (#83857)
9c452abcf1 : Use reentrant mode when invoking prims, delete global prim_fake_mode (#84090)
db7784e722 : [Static Runtime] Schema checks for index_put (#84152)
7532d5b125 : [Modes] remove inner constructor kwarg (#83925)
e23d159bc5 : [PyTorch][caffe2] Add CAFFE2_{DECLARE,DEFINE}_KNOWN_TYPE (#83707)
af741e821b : no ios arm builds on circleci (#84299)
e014bd8e4e : Upgrade default cuda version of torchbench (#84248)
7acdb2d564 : Don't start land checks if the PR hasn't been approved yet (#84239)
eabe34cc40 : [Quant] Remove warnings from using torch.tensor(value) (#84277)
eda217ab67 : Reland symint_numel (#84281)
d09486ab23 : [ROCm] enable nvfuser (#82498)
f9609d8203 : Fix several typos (#83823)
c06a5586f5 : [Profiler] Unify global and thread local profiler lookup. (#83894)
48a596ad3f : [Profiler][Trivial] Create orchestration folder and move observer management there. (#83893)
c26b53f6a4 : [Profiler] Encapsulate callback handle management. (#83892)
ddd841b316 : Removing multigpu 10.2 . Using 11.6 cuda for multigpu tests instead (#84286)
772721a4b7 : Revert "Test distributed backends in parallel (#84034)"
20018aa766 : modify split_by_tags to retain output order (#84136)
90161c23cf : Add nvfuser support for squeeze (#84117)
174c3c6859 : [Nested Tensor]Clean up offsets (#84145)
3ae5be74ac : Test distributed backends in parallel (#84034)
641c395251 : [ONNX] refactor test_pytorch_onnx_onnxruntime_cuda.py (#84218)
b8ee810144 : [Easy][FSDP] Update `StateDictType` doc (#84200)
7f58db7424 : [Easy][FSDP] ufmt `_optim_utils.py` (#84199)
5bceaadb70 : [ONNX] Add script/trace different flatten and move optional type tests to runtime (#83184)
b106a04d76 : Fix the edge case when y = 0 in kl_div (#82714)
b182f08135 : Fix issue in softmax.cu with transformer error when mask seqlen > 1024 (#83639)
897907d42c : Fix split torch_function handling (#83866)
65dc5dd3f3 : [c10d] Introduce dist.get_local_rank, dist.get_global_rank and dist.get_global_ranks (#82134)
56a37ea1a6 : Set default value for nccl make MAX_JOBS if ProcessorCount returns 0 (#84231)
f0efc1c2d1 : [Easy][FSDP] Fix sharded optim state dict doc formatting (#84198)
546d68226c : Update README.md (#84263)
44a975335e : Revert "Re-land sym_numel (#82374) (#82726) (#82731) (#82855)" (#84207)
60f47cb002 : Revert "Use self-hosted runner for viable/strict update (#84249)"
acd6ca8cfa : Use self-hosted runner for viable/strict update (#84249)
ec714e33a3 : [PT] Allowing deepcopy in uninitialized parameter (#83809)
856a7d9411 : Vectorize conversions to BFloat16 on CPU (#80906)
7a14c56bee : only run the circleci mac/ios jobs on prs (#84227)
71369051ee : [Nested Tensor] fix from_padded bug (#84217)
df98c52948 : [fx] Make get_isolated_graphmodule accept tracing mode. (#84238)
399b1eb84b : [functorch] fix multinomial (#83838)
34e5b0997e : [reland] Make allreduce compatible with make_fx (#84221)
a402e100be : [fx] Make wrapped_fn also work for non-mutating passes. (#84232)
8aba2535e4 : Fix typo (#83802)
7371761d9c : Add Lazy backend type string (#84228)
adc54dc219 : Give better error message when merge fails to find any rules (#84160)
54d8661266 : [vulkan] Add vulkan_api_test as an instrumentation test (#83978)
e7635c06ce : Fix typos in docs (#80602)
372a19d2c6 : Update start_index and end_index for adaptive pooling (#84010)
95863f2ccc : Make mergebot failure messages more readable (#84214)
d62a6ca521 : Link to instructions on submitting an RFC (#83990)
724b63d694 : [ci] move XLA pin update to weekly (#84208)
806878518f : [ONNX][Reland] Export node and value with scope name (#82040)
d144594512 : [Quant][fx] Remove WEIGHT_INDEX_DICT and BIAS_INDEX_DICT (Part 2) (#83853)
ad44670fa1 : Back out "Revert D38984222: Don't introduce new overload for SymInt (#83628)" (#84173)
cfd18e105f : [Pytorch][Ondevice quantization] Add device side API to convert model (#83807)
eebdcb5a2e : [Pytorch][quantization][ondevice] Add a wrapper API for server side prep (#83742)
5c7e801c50 : [pytorch][on device quant] Finalize method for ondevice quant (#83571)
446afb5f9f : [On Device Quantization][pytorch]Make insert_quant_dequant support ondevice ptq (#83570)
6a5d9f1be0 : Replace "_scalar_type" string with constant (#83569)
9189edb3b3 : [Quantization][Pytorch] On device quantization support part 1 (#83568)
8acc92eb00 : [FSDP] Print exec order only in debug mode (#83868)
352da6de6b : [fx][pass] Fix type of exception (#84094)
7088a98fba : conv2d: require bias to have the same dtype as input and weight on cpu (#83686)
1945d28f58 : Revert "[fx][pass] Fix type of exception (#84094)"
c29b7865d0 : Revert "[xla hash update] update the pinned xla hash (#84164)"
eff312f07b : nit fixes in modes (#83924)
1a53e35b9d : Enforce explicit ProcessGroup passed into DefaultState (#84105)
f66be71d77 : [checkpoint] Adopt Planner interface across the board. (#83781)
fbf5a3f9f4 : [xla hash update] update the pinned xla hash (#84164)
b8e1c54f53 : [Prim] Implement group_norm_backward (#84037)
2436cf8aa8 : [Nested Tensor] detach (#84078)
0095571135 : [AOT Autograd] Redirect named_parameters to original mod (#84157)
3f94726453 : [DataPipe] Convert MapDataPipe.shuffle to IterDataPipe (#83202)
7480e83338 : [Profiler] Add `disabled` and `global` methods to ProfilerConfig. (#83891)
8e6207bcd8 : Revert "[ONNX] Export node and value with scope name (#82040)"
d50aa517b5 : Revert "Add support to traverse all python collection objects (#84079)"
0ac2986d33 : Fixes softmax indexing for large tensors (#84182)
533203f5aa : _to_copy decomp (#84108)
9fc02f6bc5 : Decomposition for adaptive_avg_pool2d (#84062)
3aae6ff1e1 : Add nvprims.var_mean (#83508)
261be8e5c2 : Revert "[Profiler] Add `disabled` and `global` methods to ProfilerConfig. (#83891)"
7244a3737c : Revert "[DataPipe] Convert MapDataPipe.shuffle to IterDataPipe (#83202)"
33db5da4c1 : Revert "[Prim] Implement group_norm_backward (#84037)"
df523a6eee : Revert "[AOT Autograd] Redirect named_parameters to original mod (#84157)"
f4f54c7ce1 : Revert "[Nested Tensor] detach (#84078)"
5cf4542f86 : Revert "Enforce explicit ProcessGroup passed into DefaultState (#84105)"
ff23f3ac1c : Revert "_to_copy decomp (#84108)"
d8cc8368ab : Revert "[ONNX] Fix type annotations and enable type checking for all apis (#84091)"
b159a5230f : Revert "Add nvprims.var_mean (#83508)"
71cd3fa2d5 : Revert "[xla hash update] update the pinned xla hash (#84164)"
b078d242c4 : Nvfuser to copy decomp to prim (#83782)
c9b144ff47 : Replace assertEqualIgnoreTypes from common_methods_invocations.py (#84076)
b8fe0edcf5 : Revert "Make allreduce compatible with fx ProxyTensor (#84126)"
c032b097e3 : [xla hash update] update the pinned xla hash (#84164)
7e7694b661 : Add nvprims.var_mean (#83508)
6446da1730 : [ONNX] Fix type annotations and enable type checking for all apis (#84091)
e33897cb99 : _to_copy decomp (#84108)
adc9a1e2fb : Enforce explicit ProcessGroup passed into DefaultState (#84105)
092fe71f33 : [Nested Tensor] detach (#84078)
43620b7e8d : [AOT Autograd] Redirect named_parameters to original mod (#84157)
c7edcd6968 : Revert "Don't introduce new overload for SymInt (#83628)"
38e5e4a85f : Revert "[xla hash update] update the pinned xla hash (#84043)"
bed85cce8b : [Prim] Implement group_norm_backward (#84037)
a423c966a7 : [DataPipe] Convert MapDataPipe.shuffle to IterDataPipe (#83202)
69e9f905b7 : [Profiler] Add `disabled` and `global` methods to ProfilerConfig. (#83891)
f4dc7b3a8a : [Profiler][Trivial] Cleanup ExperimentalConfig (#83890)
eb2fa2e042 : [fx][pass] Fix type of exception (#84094)
aa4be48b58 : [Nested Tensor] do not use at::cuda::getDefaultCUDAStream() (#84134)
82efb0e196 : Enable cache action for windows and other minor workflows (#84093)
3fae89d4a4 : Read via FileAdapter when loading files in torch if not flatbuffer (#84028)
e0f0c8e7b9 : Add support to traverse all python collection objects (#84079)
6a3666282d : [ONNX] Export node and value with scope name (#82040)
b5c2b0b200 : make job pass even if monitoring script fails (#84068)
6a58603956 : Update Dynamo pin (#83829)
61b9d8fccd : [Profiler][Trivial] Add null handling to `AppendOnlyList::copy` memcpy path. (#83963)
014a333df3 : [Profiler][Minor] Extend Python bindings (#83622)
681c38704e : [ONNX] Clean up patch functions (#83136)
ec5b83f768 : Make allreduce compatible with fx ProxyTensor (#84126)
f93446adc2 : Update proxy_tensor.py to support List input/output (#83302)
527a160169 : Expose ProcessGroup::Work.wait() API to TorchScript (#83303)
c6348a7109 : Add type hints to torch.save, torch.load (#83937)
582c0833d5 : mac circleci workflows (#82780)
e9dff858c3 : [functorch] add lstsq batch rule (#82325)
a08911400e : Use C10_HAS_CPP_ATTRIBUTE to simplify nodiscard definition (#83976)
b429a17545 : Enable -Wunused-local-typedefs (#83708)
65ea3d0621 : [composite compliance] cov, corrcoef (#82954)
cddf96c4ba : Fix preconditions of adaptive_avg_pooling2d (#84061)
9a236c7ab4 : Made some minor cleanups to decompositions (#83814)
ddedc294fb : [xla hash update] update the pinned xla hash (#84043)
d54fad5675 : Remove unreachable except block (#84070)
f03ab28b97 : Use an unused variable (#84073)
993d8bb77e : Use size to check same tensor sizes in reduce_scatter and allgather (#84099)
089101fc82 : Fix small typo in cuda.rst (#84012)
15b560a5c4 : Fix missing include for size_t (#84088)
9790d90e4b : Don't introduce new overload for SymInt (#83628)
d2f37401b8 : Silence namedtuple warning in dist (#84072)
b35e7c5da7 : Fix FSDP not all outputs used in loss (#83195)
7e5c76da47 : Make graph_module.print_readable() discoverable (#83960)
a4a55f5ea6 : New TORCH_UCC_BLOCKING_WAIT env variable (#81791)
85f82f7311 : example program for paper intro (#83945)
bf25a140f9 : [ONNX] Add runtime type checking to `export` (#83673)
562a021cf3 : [GHF] Land validation should not change default branch (#84084)
9f4626ea1b : [vulkan] use VMA at third-party (#83934)
ced2ca8f86 : Torch cond operator, python dispatch, pyoperator (#83154)
3c2a0780b8 : [ONNX] Assign ONNXScopeName during function substituion (#82039)
4c19981316 : [DataPipe] Reset Shuffler's iterator when NotStarted (#83535)
b82c74da07 : functionalization: support inplace views on inputs (#83993)
0c6a616af0 : run functorch decomps after functionalization when enabled (#83992)
caaa723ae2 : [GHF][BE] Move merge rules to yaml (#84065)
86e134ddf7 : disable c10::SymIntNode tests on mobile (#84066)
2f04ba2c7c : [quant][ao_migration] `torch.nn.qat` → `torch.ao.nn.qat` (#78716)
29e83b6599 : [quant][ao_migration] `torch.nn.quantizable` → `torch.ao.nn.quantizable`. (#78717)
b1455f9424 : [quant][ao_migration] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference` (#78715)
d32a762147 : [quant][ao_migration] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic` (#78714)
c92e5ac95b : [quant][ao_migration] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules` (#78713)
a83d7d8b65 : enable qlinear dynamic parallelization with fbgemm (#84033)
e2f75d63d4 : Decomposition - batch_norm, save_mean and save_variance always float32 (#84013)
56fef4e6ee : fix `NoneType` object has no attribute `python_exit_status` (#83985)
00cb184512 : [functorch] add batching rule for fill_.Tensor (#84015)
31f151767b : add qscheme check for quantization observer (#80126)
f5a3515083 : Make linalg.inv composite of linalg.solve (#80074)
e3c89d0778 : Disable autocast cache during aotdispatch (#84035)
63cbdc92a7 : switching the exact check to isinstance check (#84023)
02c3781332 : Enable cache action for lint workflow (#84026)
c00f0c80c0 : [fx] add deferred weights (xl_weight) and tracing for xl_embedding_bag (#84016)
8b8942b114 : Fix dumb make_fx issue (#84011)
c03f8abb21 : [fx+scripting] Adding num_iter_1 and num_iter_2 params LearningRate op (#83691)
2000eba454 : NCCL: Re-enable parallel builds (#83696)
d5af2a70ba : Revert "[TorchTidy] Adding support for unique tensor identifiers (#80266)"
1f61c39ac4 : Revert "Support NCCL Premul Sum (#81272)"
460636ab94 : [caffe2] Remove last clang-for-cuda sources (#84021)
a013597b32 : fix oneDNN channels_last path issue (#83653)
b6ba41921d : [TorchTidy] Adding support for unique tensor identifiers (#80266)
b21a6ff639 : [NVFuser] Upstream push 0811 (#83239)
e90db17565 : Increase timeout for linux binary builds (#84008)
6b597595b2 : [Quant] Vectorize scalar remainder in quantized kernel for normalization (#79673)
a7edf71360 : Revert "Don't introduce new overload for SymInt (#83628)"
7a02ee55db : Revert "[xla hash update] update the pinned xla hash (#83967)"
5321bf52f2 : Revert "Make linalg.inv composite of linalg.solve (#80074)"
4a6726a840 : use condensed disabled tests file (#84017)
cef522a8a9 : Add docstring type guidelines for list & tuple to `CONTRIBUTING.md` (#83634)
101709f43b : Add comments for block_reduce.cuh (#83825)
bf8d5e8328 : Pretty print stack trace with gm.print_readable() (#83706)
e72256604f : Enhance add_out_dense_sparse_cpu for hybrid sparse tensor (#23057)
3b11b80fc3 : Named pipe based watchdog timer (#83695)
37d3db7579 : Deletes CCACHE_DISABLE and SCCACHE_DISABLE from nccl.cmake (#84007)
1eff853fdc : Pin conda to 4.13.0 (#83991)
f5bfa4d088 : [ROCm] Enable test_multiprocessing tests (#82356)
d56577284a : Set python build-docs timeout to 30 minutes and cpp build-docs timeout to 180 minutes (#83957)
f38a32c905 : remove duplicate WarpReduceSum (#83757)
67f0940cdd : Check all CUDA API calls for errors in test/ (#74921) (#83954)
a741927e61 : Improve Normalization.cuh (#83871)
7b1a056b88 : Map new CUDA error handling to HIP (#75032) (#83953)
ef782e730d : Support BF16 for fast layernorm (#83971)
a5564c4bd0 : Suppress Anomaly mode warning message (#83966)
a8a36c45a6 : [frontend] Fix tensor list alias annotation (#84005)
b745e5f115 : Check all CUDA API calls for errors in benchmarks/cpp/nvfuser (#74920) (#81817)
f7e668b7b5 : add hud link to merge failure message (#83946)
3a9ae518f2 : Skip NCCL slimming for cxx11 libtorch builds (#83959)
d79ccb7b45 : [pthreadpool] Cap max thread count to fix TSAN issues (#83950)
5e01fb995c : strip SymIntNodes off in the mobile builds (#83938)
b842670aa5 : logical ops (#83879)
2b805e3520 : add arithmetic ops (#83878)
0831813e26 : support more symintnode operations (#83877)
5c49c7bbba : [WIP] Validating input_col for certain datapipes (#80267)
30a5583d75 : [TorchTidy Fix] Don't try to collect strides for non-strided tensors (#83935)
3f88171240 : [ONNX] Remove static None graph output (#82623)
7a8152530d : move pooling test from test_nn to test/nn/test_pooling (#83915)
4eb02e8637 : [LTC] Add custom lazy tensor save function (#83294)
3e6e0a1d10 : Support a stable double backward on linalg.det for real inputs (#80217)
4737b33614 : Make linalg.inv composite of linalg.solve (#80074)
0bdcfcb840 : Strengthen preconditions of linalg.cross (#83798)
4a18d0a972 : Fix LTC build warnings (#83955)
ce7a9f92e3 : [xla hash update] update the pinned xla hash (#83967)
fa241fd50e : [Profiler] record nn.Module's parameters (#83209)
0ae298f869 : Test type promotion assertignoretypes (#83867)
432c508e71 : Support NCCL Premul Sum (#81272)
67aed39319 : Support the XPU backend untyped storage (#83952)
0491e1a13a : Support returning symbolic strides from t.stride() in Python (#83842)
df70714e76 : [BE][CUDA] Use packed_accessor64 (#83949)
754d7f05b6 : Remove conj kernels for real dtypes (#80374)
2c76d05b8f : [Nested Tensor] Make offset copy and move assignment more explicit. (#83488)
7fdc2f70c6 : Task: T129772171 remove assertEqualIgnoreTypes from test/test_nn.py (#83870)
6edcf8e18c : Move nnapi code from ATen common code to specific library (#83748)
84f0411f4f : add merge blocking to ci: sev template (#83940)
c47e0450f8 : [fbia] Keep Track of full qualified name before and after remote sharding (#83889)
58f61d50a4 : Add hypothesis to requirements.txt (#83740)
84e45e7e90 : Revert "Optimize transpose copy on CPU using fbgemm transpose (#83327)"
591222f5d9 : Fix use-dict-literal lint (#83718)
fc470cf980 : Back out "Support regex-style matching for Any and Oneof (#82853)" (#83922)
89072177e1 : [fx][pass infra] Adding error catching (#83933)
7c8d265822 : ci: Remove dead code related to android uploads (#83930)
21bc77ca96 : Remove CoreMLMemoryObserver (#83703)
8fae7027b3 : Don't introduce new overload for SymInt (#83628)
4808bda796 : Prefer signal from land checks over PR signals (#83715)
25dd2a0422 : Fix load_extra_only api for flatbuffers and enable flatbuffers in mobile for OSS properly (#83855)
bbe803cb35 : Revert "Strengthen preconditions of linalg.cross (#83798)"
a802603ef7 : [complex] conv_transpose1d (#79694)
9095030239 : [fix] edge case in `MaxPool1d` and add ErrorInputs (#83553)
8f9ae35648 : remove assertEqualIgnoreTypes from test/distributions/test_distributions.py (#83709)
5204b8e4f9 : [torchgen] Add documentation for `autogen` keyword (#83610)
732255f031 : [vulkan] Add VMA as a third_party subrepo (#83906)
81843596cb : Fix view_func replay in no-grad mode (#83872)
7f0198e739 : Strengthen preconditions of linalg.cross (#83798)
9beddde1d7 : Enable NCCL_DESYNC_DEBUG when TORCH_DISTRIBUTED_DEBUG=DETAIL (#83881)
cb488e6d2f : Allow None arguments for elementwise type promotion wrapper and fix clamp with None arguments (#83586)
8793cd2fd3 : Move ATenNVRTC.h include from `jit_utils.h` to `jit_utils.cpp` (#83886)
8db04c1113 : reinplace pass: special handling for view_scatter ops (#83846)
75ec7b7547 : reinplace pass: bugfix for output node replacement (#83845)
01434c2d20 : Improve DistanceKernel.cu (#83811)
df048414e0 : [functorch] add linalg cross batch rule (#83759)
e4af53c1a1 : [PyTorch] Remove unused sstream/string includes from c10/macros/Macros.h (#83353)
7e386845a4 : Update retry action to latest version (#83911)
a315a2c79b : [ROCm] restore MIOpen benchmark flag default to true (#82656)
0270a707e5 : Fix stride issue with faketensors (#83822)
7ebdb4c72f : Refactored ops on size to be dispatcher ops (#83719)
58170fb8aa : Remove DBR quantization from the codebase (#83642)
4dfa6d28a1 : Normalize DLPack stride to 1 where shape < 2 (#83158)
247468baf0 : [ROCm] More Sparse UTs enablement and more hipification mappings. (#78939)
ed949e2258 : [xla hash update] update the pinned xla hash (#83899)
7c20ad3dfa : [optim] rprop: handle complex params as independent real params (#83858)
dd67d52b57 : [nn] split rnn_utils test from test_nn.py (#83675)
a419e483b2 : [quant][fx] Add support for quantized matmul (#83885)
3dfb8dfcf3 : [ONNX] Use `errors.SymbolicValueError` for more context (#83332)
04d8da88a6 : Optimize transpose copy on CPU using fbgemm transpose (#83327)
b29a074882 : [BE] Revert distributed change in https://github.com/pytorch/pytorch/pull/68779 (#83181)
4e90526a4f : [FSDP] Remove unneeded checks (#83150)
8e074f4557 : hash update - bug fix for branches (#83865)
7cfc8b7820 : [MPS] Move mps_linear to mps dispatch key (#80068)
b18f984307 : [cmake] Change COLORIZE_OUTPUT option to USE_COLORIZE_OUTPUT (#83716)
b7afee8a27 : Lazy deprecation import function in torch.nn (#83834)
658f958bc4 : fix upsample bf16 issue for channels last path by using high precision to compute index (#83847)
80cfafc385 : [ONNX] Add quantization support to more single output ops (#83008)
1e4383f756 : Add lazy shape inference for cholesky op (#83720)
36f6d91a2d : Migrate last workflows from 18.04 to 22.04 (#83861)
09331c947c : [optim] rmsprop: handle complex params as independent real params (#83860)
62d9f1559e : Fix model type CNN->MLP in functorch ensembling notebook intro (#83603)
dc557b94ec : Used generator for "any" and "all" (#83844)
e10c47a7d0 : [maskedtensor] adding unary and binary operations (#82837)
daca0ee5e2 : [ONNX] Introduce ONNXScopeName (#82038)
91766360b1 : [mergebot] Post PR Comment on cancel (#82744)
b136f3f310 : More doctest refinements. (#83317)
9c9f424817 : modify the signature of method `__getitem__` from `ModuleList` (#83799)
91eb1b9bb9 : Move _masked opinfos to opinfo/definitions/_masked.py (#83763)
7656ef73f1 : Move `torch.special` OpInfos into opinfo/definitions/special.py (#83762)
35d4fa444b : Fix for transposed convolution shape functions (#83557)
eff28d61c9 : [JIT SSA] Allow updating shape functions without recompilation (#83629)
53cda905be : Revert "Optimize transpose copy on CPU using fbgemm transpose (#83327)"
d1be36ceab : [MPS] Fix the index error in constant_pad_nd() with single-dimension input (#83745)
a6b75bb099 : [MPS] Fix placeholder case for missing gather graph (#83744)
b8496eb411 : [Quant] Separate FBGEMM/QNNPACK BackendConfigs (#83566)
07d0c9ec75 : make sym sizes be computed lazily (#82233)
f56720ea7c : Optimize transpose copy on CPU using fbgemm transpose (#83327)
fcb124406b : release the current symintnode in the move c-tor (#83789)
b47f712b7b : Fix uninitialized member if the default c-tor is called (#83788)
09157c76c0 : [Static Runtime] Add schema checks for aten::list (#83753)
d46dba18f7 : Simplify reshape and fix _refs.unflatten (#83827)
473b733bae : Replace .new_zeros(()) with 0.0 in torch/_decomp/decompositions (#83734)
6a9c02339d : Revert "[quant][ao_migration] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules` (#78713)"
b1a7b67529 : Revert "[quant][ao_migration] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic` (#78714)"
355d343fa8 : Revert "[quant][ao_migration] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference` (#78715)"
e9dd4d5adf : Revert "[quant][ao_migration] `torch.nn.quantizable` → `torch.ao.nn.quantizable`. (#78717)"
4cbb1986fe : Revert "[quant][ao_migration] `torch.nn.qat` → `torch.ao.nn.qat` (#78716)"
3c6c39e66e : [fx] refactor fba_passes into FBAPassManagerBuilder (#83268)
7cd2fa1d38 : [quant][ao_migration] `torch.nn.qat` → `torch.ao.nn.qat` (#78716)
e0876feb49 : [quant][ao_migration] `torch.nn.quantizable` → `torch.ao.nn.quantizable`. (#78717)
a7344e52b9 : [quant][ao_migration] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference` (#78715)
08126c8967 : Minifier fixes (#83754)
e6fb97d8ae : [quant][ao_migration] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic` (#78714)
8948fdc525 : Switch mobile targets to flatbuffers_mobile (#82829)
f0eb841d20 : Make `torch.optim.RMSprop` differentiable (#83578)
ac39d2bd6e : Make negative integer test always done for Int to SymInt (#83815)
4902254b9b : fix torch._C._nn.linear bug (#83682)
da6cd12173 : gt constraint heuristic (#83334)
432f037498 : [quant][ao_migration] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules` (#78713)
765fd77d9a : ci: Switch binary builds to github artifacting (#83778)
91e754b268 : [BE] setup.py refactors (#83635)
5c5a5f1505 : Add HIP libs into torch deploy init list & corresponding dependency for CURE benchmark running on AMD (#83434)
09e837634b : [Profiler][Minor] Set end time on python events when profiling stops. (#83621)
37f91d700b : [Profiler] Break metadata generation into multiple visitors (#83033)
f295dd0735 : [Profiler][Minor] Add typed visit method to Result. (#82993)
294f9d1282 : [Profiler][Minor] Organize collection.h/.cpp (#82992)
c9475fa927 : Create flatbuffers_mobile (#82828)
f45cd00d7a : Added inference to context when only compiling forwards (#83783)
08c03c91d7 : guard include of x64 intrinsics headers (#83793)
e0f2eba93d : Move odd num_head in TransformerEncoder to slow_path (#83483)
5a1f6d50a9 : Skip pr-sanity-checks with skip-pr-sanity-checks label (#83751)
f0ee21fe0a : Update cpuinfo to the latest commit (#83620)
b2ddef28d7 : Freeze the rest of python docs requirement (#83785)
9732a7d84e : torch.cartesian_prod: add type hints (#81377)
0e0af73ba2 : Add support for partial decompositions in make_fx (#83770)
d5a74efc82 : Don't extract tensor metadata from sparse tensors (#83669)
329deb9757 : Refactor is_X_like, better invariant checking for SymInt overload (#83668)
7fe19c03e4 : fix functionalization <> fake tensor mode (#83701)
e9e7363854 : reinplacing pass fixes for torchbench + huggingface (#83626)
cce32c6fa1 : functionalization: handle models that resize their program inputs (#83542)
0c24af4985 : Always allow tensor metadata changes (#83590)
a7d8863c7a : [vulkan][ez] lock cache mutex when purging for ShaderCache (#83738)
155343ef2d : Pin sphinxcontrib.katex to 0.8.6 (#83774)
2efbdbfcc4 : Make some optimizations to minifier (#83641)
13f42069a8 : [quant][fx][refactor] Rename qconfig_utils.py to qconfig_mapping_utils.py in torch/ao/quantization/fx (#83369)
1f38225b56 : [primTorch] Add ref for `new_empty_strided` (#82466)
307421930a : Enable pg_nccl to perform vector AllGather for uneven output splits (#83713)
1fa9a377d0 : [Profiler] Start moving python bindings out of autograd (#82584)
7453019e79 : Remove duplicate_dequantize_node and remove_extra_dequantize (#83611)
da520a43f2 : [Vulkan] Fix issues in GRU and LSTM (#83722)
108a1fb173 : Avoid using fx.Interpreter in nvfuser executor function (#83607)
e0d26ee092 : [vulkan] Throw std::runtime_error instead of using TORCH_CHECK when creating Vulkan context/runtime fails (#83627)
1407e6728c : Nvfuser python api patch take 2 (#83684)
0ec7fc13d6 : Refactor CppSignatureGroup to collect signatures as list. (#83667)
03e322c8d6 : Switch fx.replace_pattern to use new SubgraphMatcher (#83717)
73652dd1c4 : Avoid unnecessary copy of pointeeSet in MemoryDAG::setWildcards (#83681)
93eedc51a5 : [functorch] re-classify linalg.eigh in vmap testing (#83614)
8788e92f0f : Move `torch.linalg` opinfos to opinfo.definitions (2/2) (#83554)
8dbb0990bc : Move `torch.linalg` opinfos to opinfo.definitions (1/2) (#83547)
4aeb98dee9 : Move RefInfo classes into opinfo.refs (#83563)
f4caeb25e9 : Move gradcheck_wrapper and clone_sample funcs into opinfo.core (#83560)
ae68e455be : Enable formatting in all of testing/_internal/opinfo (#83559)
b4bc0d249f : [composite compliance] batch_norm (#79990)
a6f777c80d : Ensure cuda_primary_ctx test is run on multigpu CI (#83252)
ca9919e3e8 : [vision hash update] update the pinned vision hash (#83729)
b8d647e1d5 : Revert "Manually shard slow-gradcheck CI job to prevent timeout #83354" (#83704)
5bc85fcceb : Remove assertEqualIgnoreTypes from test_unary_ufuncs (#83711)
0ff929f487 : Add lazy shape inference for take op (#82679)
76d5699e13 : Fix use-generator lint warnings in module.py (#83700)
61b2cde527 : Revert "Enable formatting in all of testing/_internal/opinfo (#83559)"
107465af2c : Revert "Move gradcheck_wrapper and clone_sample funcs into opinfo.core (#83560)"
0ddabe56ad : Revert "Move RefInfo classes into opinfo.refs (#83563)"
c8730d0a2f : Revert "Move `torch.linalg` opinfos to opinfo.definitions (1/2) (#83547)"
88e0165d08 : [ao] Added Equalization QConfig generation to ModelReport class (#83698)
393137e13f : Revert "Move `torch.linalg` opinfos to opinfo.definitions (2/2) (#83554)"
05849eafb9 : [ONNX] Create empty opset 17 symbolic file (#83287)
1f2efdce15 : Move `torch.linalg` opinfos to opinfo.definitions (2/2) (#83554)
bb86c31e26 : Move `torch.linalg` opinfos to opinfo.definitions (1/2) (#83547)
03ce36e3c1 : Move RefInfo classes into opinfo.refs (#83563)
5120263703 : Move gradcheck_wrapper and clone_sample funcs into opinfo.core (#83560)
a7e6196909 : Enable formatting in all of testing/_internal/opinfo (#83559)
7aba6f8e7b : Rename flatbuffer_serializer to *_mobile or *_full_jit (#82827)
b02e620fa3 : [PyTorch] Bypass dispatch for narrow() calls within split_with_sizes (#83213)
784c47fbee : [quant][fx][refactor] Move ObservationType to backend_config.py (#83368)
82507ce334 : Minifier fix for non tensor inputs (#83644)
1f3ef5a2c8 : [ROCm] unskip test_jit TestBackendsWithCompiler (#81281)
f094113ebf : [MPS] Add native bitwise-not implementation (#83678)
b14df5334d : CMake: List python source files as codegen dependencies (#83683)
5e715be17e : [ao] Added Quantization QConfig generation to ModelReport class (#83688)
72963bbae9 : Update isDynamic api to align with is_symbolic API (#83415)
04353f7837 : Check existence of the array ref when tracing resize_ (#81422)
9152144944 : Coverage for nondeterministic_seeded, respect it in constant prop (#83650)
24acc3155f : Be more conservative about propagating constants. (#83648)
02581f053b : Address CR comments for "Delete ProxyTensor wrapper subclass" (#83646)
a7baad04f6 : Preserve stack trace for backward nodes over AOTAutograd (#83558)
e2e71c1f4c : [functorch] add linalg solve batch rule (#82814)
ff533b1efa : [MPS] Fix torch.full for uint8 (#83697)
88d3acd6b1 : Fix and improve the efficiency of the backward of xlog* functions. (#82713)
9e1560821e : [primTorch] Refs for pdist, triu and related ops (#82819)
3834836260 : [DataLoader] Move loop content into a function to ensure we don't preserve anything (#83595)
23d2272473 : Add remaining device types in the pybinded DeviceType enum (#83676)
46ba9f2e52 : Revert "Remove conj kernels for real dtypes (#80374)"
d11d3dd036 : [dist.cp] Introduce LoadPlanner and SavePlanner extensibility API. (#83419)
4a033be448 : [functorch] reclassify svd as an allowed failure; add test (#83612)
601aca2a2d : [functorch] add some vmap+jvp inplace+view tests (#83178)
d84dc589c2 : [functorch] relax as_strided batching rule (#83597)
69728d7dd9 : [functorch] annotate test_jvpvjp (#83530)
e4f74f0891 : [ONNX] Update the default opset to version 14 (#83284)
f204afc2bb : Added communication hook for sharded cases (#83254)
78c8a0d752 : [quant][ao_migration] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional` (#78712)
3e1fc85b23 : [FSDP] Implement sharded_optim_state_dict and flatten_sharded_optim_state_dict. (#77628)
cd0ab154b5 : Handle python frame is empty in GetPythonFrames (#83643)
abcf01196c : Release the GIL when munmap'ing tensors - fixes #77139 (#83623)
f84e087d5e : Revert "fixing define_constant pybind signature to match std::complex scalar (#83645)"
3f612b58be : fix quantization/core/test_docs for Buck2 (#83341)
aad89bb771 : Make the derivative of masked_fill more efficient (#83515)
4b3f1bdb0c : [vision hash update] update the pinned vision hash (#83582)
eb6004146a : [xla hash update] update the pinned xla hash (#83581)
ce7177f88a : [MPS] Register index.Tensor_out (#82507)
6dc8673b1b : Update ideep for NNC post-op (#82705)
278c726458 : fixing define_constant pybind signature to match std::complex scalar (#83645)
badbdb0330 : [torchgen] Relax the restriction on number of custom namespaces (#83580)
7263450c30 : Revert "[primTorch] Add ref for `new_empty_strided` (#82466)"
d6a30e213e : Enable pg_nccl.reduce_scatter to perform vector ReduceScatter for uneven input splits (#82924)
52be908225 : Delete unnecessary sum.SymInt overload (#83591)
6679d238fd : SymInt'ify schemas for prims (#83528)
817a82704f : Delete ProxyTensor wrapper subclass (#83330)
0a48cdfb3b : re-enable aotautograd tests (#83485)
e154f5ae3b : [primTorch] Add ref for `new_empty_strided` (#82466)
b3c99bef0c : Support nested dropout autograd (#83338)
451c6296af : [kineto] deprecate USE_KINETO_UPDATED (#83305)
79534b7f25 : Adding XLA folks to reviewer/approvers (#83555)
cf2c94e6de : NestedTensor Softmax (#83435)
71141c3023 : extend torch.ones to handle tuple inputs (#83194)
7536ac7125 : prevent graph mutation in constraint generation (#83109)
ea2183f0ea : removed duplicate_quantize_dynamic_node (#83459)
cf5330977d : [CI] Move torch-deploy to cuda-11.6 (#83572)
af8e34cca9 : [vulkan] Do not populate unpacked args of PackedContexts when deserializing (#83587)
cf52680d40 : [primTorch] Add OpInfo and ref for eye (#82323)
1a49eea301 : [primTorch] Add ref for diag_embed (#82322)
ea037344e8 : Reset compile cache to fix flaky test (#83608)
ad44079952 : Remove conj kernels for real dtypes (#80374)
652fb03355 : Symbolic Shape Analysis: Add Generalized List of Tensor Shape Support (#78679)
b1e02ae8fc : Move PythonRefInfos for `torch.fft` into opinfo.definitions (#83277)
5f50289b39 : Move OpInfos for torch.fft into `opinfo.definitions` (#83276)
85ef1a1cd1 : [primTorch] added ref for nn.functional.glu (#82214)
bd0ad7a84f : Add backward support for rudimentary NestedTensor.sum(dim) (#82625)
68d2d7866d : [static-runtime] change the backend for permute_copy (#83532)
30af17cea7 : [HIP] Add extra exception handling for non-ROCM builds (#83009)
244690205f : [FSDP] Use _init_from_local_tensor to create ShardedTensor to avoid communication overhead (#82911)
5e8b4c64aa : Delayed compilation of backwards pass to when backwards runs (#83367)
1f7153bee8 : [quant] Optionally clamp weights post quantization (#83438)
ab02b89811 : expand torch.full to reason about integers (#83087)
1a38724ed3 : fix bug in a linear constraint (#82938)
0061e67629 : Revert "NestedTensor Softmax (#83435)"
eb4e03ddf8 : made some minor tweaks to minifier to reduce outputs more often (#83565)
84c4b07932 : Make sure that we can load old optimizer checkpoint (#83588)
dcda907693 : Add docstring type formatting guidelines to `CONTRIBUTING.md` (#83536)
9f03444f70 : Add torch.ops.aten -> torch._refs mapping to TorchRefsMode using decomposition_table (#82657)
7af3208412 : [ROCm] Enable test_ddp_profiling_torch_profiler (#82749)
c8ec4ceb9b : Delete checked_dense_tensor_unwrap (#83543)
822a8e057f : Use opmath_type for CUDA logcumsumexp (#83425)
2a096e940d : [primTorch] support for a few magic methods (#83524)
5aab57e112 : Make Adam optimizer differentiable (#82205)
11d4d91bdc : [torchgen] Add logic in annotation parser to accept alias set (#83501)
a09c3fcb8d : Add loss operators to fp32 cast policy of AutocastCPU (#81689)
d3a176a156 : [PT-D][BE][TP perf 1/N] Get rid of unnecessary collectives in Embedding/EmbeddingBag and use autograd-enabled collectives (#81853)
e09821f784 : Avoid using true division in split_dim (#83527)
d7fc76a1ed : NestedTensor Softmax (#83435)
343b5f8651 : [TorchTidy] Adding support for accessing strides and scalars (#80072)
1a09b05c94 : Fix `torch.equal` on CPU (#83350)
df62ea76d1 : add the necessary constraints for the next 5 benchmarks (#82923)
aac622ad55 : Optionally run fbgemm in tracer (#83531)
31d4b6f52a : [MPS] Fix conv1D and conv2D with non-matching strides/paddings (#83522)
0e2efaf9cc : use global var for disabled and slow test dicts (#83487)
1ee9eb52b6 : fix native_layer_norm meta kernel parity w cuda (#83457)
f4b7c10e14 : fix resnet50_quantized_qat and mobilenet_v2_quantized_qat <> functionalization (#83339)
785f7f6298 : Revert "Use opmath_type for CUDA logcumsumexp (#83425)"
3586af8adc : [quant] Remove unused quantize handler definitions (#83360)
059321469e : [vulkan] Use aliases when retrieving from packed/unpacked lists in OpContexts (#83526)
31fad3926a : Add option to run anomaly mode without nan checking (#83481)
1b437718a3 : ci: Add workflow to build official docker images with multiarch (#83437)
3a511e8354 : [Expanded Weights] add 'same' and 'valid' padding support (#83345)
cd68f08992 : [ONNX] Update the script for version updates (#83283)
d52d2bd5a9 : [ROCm] MIOpen fused convolution relu (#82002)
79356311f5 : update merge failed msg (#83462)
4b597019b7 : [Nested Tensor] Created Nested Tensor to Nested Tensor Views (#82658)
94ba085ce0 : [maskedtensor] first commit, core and creation (#82836)
84146f3d0d : Vectorize cpu tensor conversions (#80905)
559c8b8992 : Fix _refs.lcm using floating point maths (#82950)
9745edf971 : [ROCM] Enable test_memory_format_nn_BatchNorm tests on ROCM (#82512)
06a64f7eaa : Use opmath_type for CUDA logcumsumexp (#83425)
0faf10b0f4 : Split ScanKernels.cu (#83422)
8473e69684 : [ROCm] Fixes the kernel asserts API declaration mismatch error (#81790)
b156f3329e : [primTorch] Add ref for movedim (#83278)
2c79b9c638 : module names are made more consistent with POI page (#83219)
92a005883a : [easy] Fix .sizes() call in saved_variable.cpp for nested tensor (#83356)
7e7afcabe7 : [functorch] classify some more test failures (#83520)
52b8a58197 : [functorch] audit skips and xfails for vjp tests (#83518)
64a3fbae5e : [functorch] Classify some vmap failures with comments (#83517)
a3e3cbfbbe : [primTorch] Add ref for diagonal and more test inputs (#82321)
4010f96121 : [primTorch] Fix off by 1 in `canonicalize_dim` (#83198)
6a5ca409da : Revert "reverted diff: Add python stack tracing option on on-demand flow" (#82378)
bb94a13d03 : [vulkan][fix] Fix unsafe direct array access (#83432)
08d38bbcfb : [vulkan] Replace *_size() functions with get_dim<N>() (#83423)
cd86d25515 : [primTorch] Move addcdiv from decompositions -> refs (#80842)
59fccab857 : [Shape Fns] Fix handling of empty dim list in sum_mean_dim shape fn (#83357)
d589aa531f : TS jit 2 week compatibility window for new TEL forward() (#83467)
cf4fb5a631 : Make test_jvpvjp_as_strided_scatter skipped due to flaky (#83516)
f9a3d82220 : Fix typo in MPS allocator (#83465)
4c8cfb57aa : Convert SymInt tracing to mode based tracing (#83380)
a3907ca92d : Respect TorchDispatchMode for shallow_copy_and_detach (#83372)
1665715cb0 : add sym_strides() function, use in fake/proxy tensors (#81300)
2e8e386d6f : Add refs for real and imag to __all__ (#83057)
3500df7983 : [composite compliance] istft (#82955)
a9ba3fe1db : [vision hash update] update the pinned vision hash (#83503)
445b55682a : [xla hash update] update the pinned xla hash (#83502)
f77adb71cb : made some minor refactoring of minifier (#83439)
ff75562cff : Adding maximize to rprop (#81864)
a8941aa996 : [BE] Better test stats errors (#83484)
03f9c7922e : [FuncTorch] Fix compilation with -Werror (#83463)
a5f688ad0a : Remove unused var from ProcessGroupGloo (#83286)
43a94daca0 : Revert "Add a workflow to cache third party dependencies on S3 (#83306)"
641d75d0ba : Revert "S3 third-party deps sync workflow: specify correct secrets (#83489)"
7ec49810cc : S3 third-party deps sync workflow: specify correct secrets (#83489)
794ae64174 : [FSDP] Pass kwargs to load_state_dict (#83309)
0961dd6e99 : Add a workflow to cache third party dependencies on S3 (#83306)
c177a7124c : Adding additional debug logging and documentation for shape functions (#77115)
9e1daf7644 : skip flaky tests for now (#83482)
cb64b558ee : Add spaces so example is flake8 compatible (#83420)
b75a214b36 : Fix windows flaky test env var (#83466)
a234774096 : Revert "Fix flaky tests env variable length on Windows (#83426)"
6266003d71 : Revert "Check if IMPORT_DISABLED_TESTS is set (#83436)"
dffa5d309a : shard `trunk / linux-bionic-cuda10.2-py3.9-gcc7 / test (default` from 2 -> 4 (#83424)
43f950af20 : Manually shard slow-gradcheck CI job to prevent timeout (#83354)
13e2a0a048 : Add `getDynamicValue` to `dynamic_ir` (#82188)
ca4f353451 : Updated the build process for PyTorch/XLA CI testing (#82497)
60295e3abd : [functorch] Delete functorch_lagging_op_db (#83418)
759c37a4f4 : make sure arguments are tuples otherwise they won't be hashable (#83342)
a65825116a : clear cache in-between each test (#83431)
1187dedd33 : Check if IMPORT_DISABLED_TESTS is set (#83436)
2d8f091f6a : Move TorchDispatchModeTLS to c10/core (#83370)
beb83d7419 : Fix flaky tests env variable length on Windows (#83426)
0306147276 : Fix issue with compiling under with_grad (#83395)
ff5fe9e622 : [ROCm] enable jiterator (#77982)
316cb8a06a : embedded_interpreter_hip (#83329)
1bf2371365 : Rename path on Windows from lib/x64 to lib\x64 (#83417)
50b1ecc28f : [fix] cat : support different dtype tensor with 0-dim like before (#83391)
d4bd88b64b : [Quant][fx] Remove WEIGHT_INDEX_DICT and BIAS_INDEX_DICT (#83263)
684a404def : Rename flatbuffer_all to flatbuffers_jit (#82826)
fbe8c77427 : Implemented basic version of AOTDispatcher that only chooses between autograd or no autograd (#83248)
86de9e7291 : Added some additional symbolic tracing tests (#82209)
37ef61ccee : [PyTorch] Profiler execution graph record tensor device (#82895)
016fcca243 : format some aotautograd-related files in functorch with black (#83240)
408fa38f33 : [GHF] Validate graphql output (#83366)
f02f304657 : Added nll_loss_forward decomposition + some other minor decomps (#83235)
097951a967 : [vision hash update] update the pinned vision hash (#83374)
d0d6b1f222 : [torchgen] Generate out variant for functional operator (#81437)
bb1e3d8008 : Enable lint for test_module_interface.py (#83359)
c2808571bf : Removed trace_factory_functions=False option (#83215)
b99f972e07 : [functorch] update functorch lagging db (#83346)
a7e7fbab82 : Add shape functions for conv_transpose2d.input and convolution (#80860)
8d81bfc512 : Refactor PyFunctionHooks (#83331)
b567742038 : Add ability to register prehooks to grad_fn (#83226)
02cfefb48c : [MPS] Fix for matmul errors in test consistency (#83124)
cb2cb94074 : [ONNX] Look at owningBlock instead of graph when recording autograd subgraph (#82852)
ea51e87b52 : Added list clearing codegen to AOTAutograd (hidden behind config.aot_clear_list (#83137)
d8d9ecbfd0 : Fix building with Werror (#83275)
ccb7d56a18 : Rename PyFunctionPreHook to PyFunctionTensorPreHook (#83225)
67e0067832 : Ensure full fetch before running git commands (#83352)
88d7322b07 : fix a comment since the options in arg parser no longer require Declarations.yaml (#83337)
404c1c04ff : [ONNX] Add `acceptable_error_percentage` to backend tests (#82622)
7896621f94 : [ONNX] Move caffe2 import into `exportTest` in `test_models.py` (#82621)
8598a8662b : Switch to the new Windows AMI with conda preinstalled (#83188)
2c089290b6 : [ONNX] Fix float point detection for optional tensor (with unknown rank) within a list (#81386)
bf21ebce38 : Use merge base to compute number of lines in PR (#83344)
4f00c7589d : Fix and unskip dataloader tests on ARM (#83125)
833413291c : [PyTorch] Minor improvement in split_with_sizes (#83208)
39e6238788 : Support regex-style matching for Any and Oneof (#82853)
0cd8526b07 : assert that ProxyTensorMode does not accidentally bake in constants (#83297)
e327cc5e44 : [ONNX] Enable test_uninitialized_optional (#83183)
b8b54eccd2 : Add *_only and all/any pytree utilities (#83316)
d423722607 : Add data_dependent_output tag; generalize proxy tensor to test it (#83312)
d07a9ba11b : Don't build nvfuser benchmarks by default (#67857)
42b572bb60 : Fix internal type conversions in floor_divide and trunc_divide (#83288)
f81b4ae55c : [Quant] Make quantizable LSTM scriptable (#83304)
d5fb4e2a27 : [vision hash update] update the pinned vision hash (#82560)
86759ad402 : [xla hash update] update the pinned xla hash (#83319)
0e0f8fd03e : Implement QAT for APoT (#83282)
2ca721cda5 : An improved version of subgraph matcher (#82090)
59b1c4e55f : Temporarily disable test_fx for dynamo (#83307)
4618371da5 : Integrate xdoctest - Rebased (#82797)
ba90c9f229 : fix functionalization <> resnet18, make ProxyTensor work with tensor-less decomps (#83207)
568c6fb9a5 : [WIP] Add pr sanity check workflow (#83295)
5b621205f4 : Revert "Revert "adding a custom caster for c10::SymInt (#82692)"" (#83223)
99d8eb7bd7 : [JIT Test] Add more debugging information for JIT opinfo tests (#83269)
aac317535c : [JIT SSA] Handle other Tuple outputs (#83222)
dfc97df64d : Add fastpath test for mask check flag (#82999)
b60dc2eb43 : `mul`: sparse-dense + sparse-sparse with 0-dims support take 2. (#82962)
bf19e4eab6 : [ONNX] Bump ONNX Runtime version to 1.12.1 in CI (#81147)
27108d9434 : [ONNX] Update typing and error messages in symbolic_helper (#83007)
799620178c : Revert "fix RowwiseMoments vectorization issue on CPU (#81849)"
7e6da2fb10 : Revert "RowwiseMoments: use float as acc type for bfloat16 inputs (#81850)"
b6156fd481 : Add mac-mps workflow (#83296)
7f18ef14c1 : Register nested matmul as an addition to CompositeImplicit (#82786)
4128712397 : Propagate CUDAOutOfMemoryError to Python. (#83146)
cf6a91b7f9 : [BE] Fix lint in trymerge.py (#83293)
cd18b78daa : [ROCm] Enable bf16-related tests in test_c10d_nccl.py and test_grad_layout_1devicemodule_1replicaperprocess (#82020)
b18962552e : Fix and unskip cpp extension tests for ARM (#83115)
958651327f : Set default qengine to QNNPACK on ARM for quantization tests (#83097)
ba53efa6e7 : Unskip CompositeCompliance tests for ARM (#83089)
382ef1fda7 : Autograd graphtask trim unnecessary edges (#82544)
d438e86719 : Add assertions to fix torch::jit::load bugs (#79192)
2ae1afd6ae : When encountering dynamic types, one should cast it recursively. (#83218)
e88dea0c89 : Fix issues with Werror=range-loop-construct (#83273)
bce1540f1f : [quant][fx] Add more detailed docs for prepare_fx/prepare_qat_fx/convert_fx (#83132)
a395f6e842 : Limits constant chunk propagation for pw-node-only (#83083)
abb2204f6a : Fix TORCH_CHECK macros when glog is used (#83216)
cff55682d8 : Change the input of `mvit_v2_s` on the FX test (#83242)
63f35f1a0b : Hack up make_fx to natively support varargs (#83210)
23b5fcaab9 : [mergebot] Update merge bot messages to explain flags (#82907)
fa54021a0c : [functorch] Add some more view+inplace grad+vmap tests (#83176)
f8c408b79a : [functorch] vjpvjp inplace testing (#83119)
ffc4a50259 : [functorch] in-place testing for test_vjp (#83114)
3dc402fd1e : [functorch] in-place jvp testing (#83077)
beceb8b92f : Propose code owners from Intel (#80218)
7191ae58a7 : Add nvfuser support for prims.sign and refs.sign (#83167)
c35db96785 : [xla hash update] update the pinned xla hash (#83236)
916def84d4 : CUDA trace Python hooks (#82824)
6a09847c42 : Fix broken FX tests (#83187)
ccfbbc0f50 : [PyTorch] Solved two syntax issues when dumping execution graph result to json file. (#81854)
6915676448 : Preserve node's stack trace during retrace (#83050)
3aeb5e4ff9 : [functorch] remove some testing hacks (#83079)
6700a78504 : Move vmap's OrderedDict pytree support to torch.utils._pytree (#83073)
7d89c3b01a : Prefer contiguous output from mkldnn_bf16_gemm (#82968)
663967777b : Handle redispatch correctly with tensor subclasses in ProxyTensor mode (#83122)
22830c7c77 : add 3 operators (max_pool2d, linear, conv2d) in xnnPack test (#83131)
2dae93b212 : [ROCm] update nightly builds to rocm5.2 (#82353)
9590cf6d79 : [caffe2] Make ListIterator compatible with libc++ 15 (#83189)
45e7d0268a : [fx] Implement __deepcopy__ for fx.Tracer (#83130)
a2ca89331f : [ao] Create framework for ModelReport Qconfig Generation (#83091)
9213751970 : Add exception handler for stoull in caffe2 (#77557)
1a51efd8bb : dispatch API for checking computed table, use it in prim decomps (#82358)
8a6b076196 : lift numpy tensor, add randperm support (#83191)
76953beee3 : Update state_dict docs (#83104)
2fe3ea65c2 : RowwiseMoments: use float as acc type for bfloat16 inputs (#81850)
4e9b969baa : fix RowwiseMoments vectorization issue on CPU (#81849)
017ecb782d : [ONNX] Update legacy code, initialize onnx_shape_inference=True by default (#82767)
693a8dd04c : [NNC] enable fusion of conv with elementwise OP (#77157)
1c83ec8f61 : Build nccl single-threaded (#83173)
2801f5cf5c : Fix test_cuda_primary_ctx being skipped on CUDA (non-ROCm) (#83182)
fd3fdf6b79 : Reland "[functorch] add error inputs check to vmap test" (#83106)
c322fc03a1 : [torchgen] Fix selective build error on custom namespace (#83141)
df741c589f : [NVFuser] Upstream push 0809 (#83067)
ce8716f59a : [PyTorch] Allow const T& access to ListElementReference when possible (#83177)
d58ced3db7 : Revert "disable some of the periodic ios jobs b/c flaky (#83103)"
c5c0dd9b62 : Update shallow_copy_and_detach for nested tensor impls (#83002)
0e8beb7d0d : Deleted cuda graph files that were moved to torchdynamo (#83128)
8a7b4b8559 : Turn off keychain timeout in CircleCI builds of ios/TestApp (#83162)
f534b2c627 : Revert "Remove split functional wrapper (#74727)"
651c13166c : Fix typo in norm_first description, respectivaly - > respectively (#83139)
e3e33cfae0 : Enable codegen of per-dispatch key derivative formulas in derivatives.yaml (#82801)
e4ea751810 : Fix hash for Tensor subclasses (#83174)
1f99bdfcc4 : [JIT] Retry - Support scripting torch.is_autocast_enabled() (#82394)
f8a10a7f79 : feat: add PolynomialLR scheduler (#82769)
ab76862b7a : Editing gen_jit_shape_functions to make lintrunner happy (#79571)
c8eae2de52 : [Shape Fns] Fix optional None for the actual shape functions (#83092)
a58876ace7 : Remove split functional wrapper (#74727)
fc65b2becb : Fix error_inputs for linalg.lstsq; assert SampleInput args are tuple, take2 (#83105)
1b2a17b8f9 : Build MacOS binaries with `-Werror` (#83049)
5e477714fa : [BE] Fix MPS build warnings (#83048)
888c1a143f : [ao] Added some additional / future tasks for ModelReport API to README (#83088)
be5b3df6cc : Update `std_mean/var_mean/nanmean/nansum` signatures with `int[1]? dim` (#82912)
03abcf2317 : [ao][sparsity] Data Sparsity with Post Training Quantization (#82759)
9690fbf9a8 : FSDP namedtuple support (#83055)
8163af7c30 : Hide flatbuffer build dependencies (#82953)
3d61d93ea7 : Revert "merge_rules, person_of_interst and CODEOWNERS now better aligned (#83127)"
fb833aabac : merge_rules, person_of_interst and CODEOWNERS now better aligned (#83127)
62c8d30f9f : [BE] Add `append_cxx_flag_if_supported` macro (#82883)
b3dea3e413 : Add the Conv1D support for NHWC format. (#83121)
c25220f05f : [xla hash update] update the pinned xla hash (#83155)
d3a1f17fc7 : Revert "[BE] Add `append_cxx_flag_if_supported` macro (#82883)"
2e1929709d : Back out "[Quant][fx] Remove dequant-quant around getitem" (#83147)
d9a7e93aaf : [ONNX] Add dtype check in onnx verification (#79263)
7b39406526 : [LTC] Pass a BackendDevice parameter into GetIrValueForScalarFromCodegen (#82970)
e100795048 : docker.Makefile: fix phony targets (#82941)
90821aab10 : Add SOFT_ASSERT to gracefully recover from invariant violations (#82689)
cda210e23b : UCC PG build in CI (#81583)
b4f7e22640 : Enable periodic builds for CUDA 11.7 (#81688)
b236352036 : Add mask identifier for multiplexed src_mask/src_key_padding_mask in BT (#81947)
5b2c03823d : Generalize CheckpointWrapper (#83035)
2b6905413e : [JIT] Add SchemaCheckMode OpInfo test (#82442)
a0b3854548 : Change seperate -> separate (#83056)
daeea7d2c3 : Revert "adding a custom caster for c10::SymInt (#82692)"
3953764620 : Fix get_workflow_job_id to dynamically get the repo name (#83036)
9c5c045f63 : disable some of the periodic ios jobs b/c flaky (#83103)
cd5efc6f08 : Remove unbalanced `pragma diagnostic pop` (#83095)
b55f9047e1 : Add forward AD support for elu_, celu_, selu_ (#83080)
f5701a1f9a : [ONNX] Remove unused patching methods (#83006)
08d54b5cd5 : Correct DDP example (#83034)
988bd0173c : Add OpOverload.decompose API (#83075)
9b4dc56c83 : [ONNX] Fix quantization outputs' dtype (#79690)
ffe2e0687d : [BE] Fix unused variable in QuantizedOpKernels (#83047)
10be80de40 : [BE] Fix unused but set variable (#83046)
4d95111aa6 : Revert "Fix error_inputs for linalg.lstsq; assert SampleInput args are tuple (#83004)"
b8cd727ad9 : Revert "[functorch] add error inputs check to vmap test (#83017)"
726d040692 : annotated allocator snapshots (#82146)
aad3b8e4d3 : move strip_overloads, add conj_physical (#83040)
e77d4ec5eb : fix where backward to use scalar 0 (#83043)
111214ac71 : [SSA] Clear Shape Cache on Load Error (#82850)
24d3ea61ab : [functorch] add error inputs check to vmap test (#83017)
da8fcc422b : [functorch] better error when vmap over no Tensor inputs (#83016)
571b930114 : [functorch] `squeeze_` batch rule, fix `squeeze_.dim` batch rule (#82972)
21ec288475 : Fix error_inputs for linalg.lstsq; assert SampleInput args are tuple (#83004)
6c60a656b0 : [xla hash update] update the pinned xla hash (#83058)
f9533560cc : Use flatbuffer of alternate namespace (#82952)
b4b60c2a2e : Get rid of ENABLE_UPGRADERS macro (#77574)
4a1783ee4c : [MPS] Make softplus beta a placeholder (#81169)
fc0d5d2bd3 : Use UnaryUfuncInfo for type conversion functions (#82349)
5e3d1ef49f : Allow ufunc OpInfos to have no reference (#82348)
a17211f79b : Fix prims.div to return the correct dtype (#82949)
9e65e93c39 : Fix IMA for topk (#83042)
e816644495 : Add nested tensor contiguous (#82147)
0918154967 : Supports symbolic diff for silu (#81724)
c6bc766184 : Remove unnecessary copy constructor (#83030)
9ba1631c67 : Governance process has been actualized. (#82736)
c6cdca5c68 : [ONNX] Reland #81953 Type utility for converting among JIT, torch and ONNX data types (#82995)
943553965e : support custom class in torchgen schema parser (#82925)
1fedd40424 : Update cross entropy documentation to mention logits clearly (#82538)
dc51fc71fc : Revert "Update the pull request template (#81991)" (#83025)
dee63f4f7b : adding a custom caster for c10::SymInt (#82692)
35b4ac4eeb : remove unused/debug header (#82845)
84079f3125 : [PyTorch] Proper reset execution graph data in remove callback registration (#82910)
8d1ff9fc5d : [MPS] Remove checks that incur unnecessary syncs on GPU with tensor.item() (#82505)
d7e6aaa59b : [BE] Add `append_cxx_flag_if_supported` macro (#82883)
86d5262e87 : ci: Bump max_available for windows gpu to 100 (#83018)
7e3c3fd37b : Fix typos in `torch.package` documentation (#82994)
b9c8db435b : Allow map location to meta device (#82603)
27f790bbd7 : Fix error_inputs for diag (#82984)
810884411d : [functorch] `transpose_`, `t_` and aliases thereof batching rules (#82903)
16f5e119b6 : [TorchTidy] Adding support for Device (#82787)
782f3489c6 : [Quant][fx][bc-breaking] Integrate BackendConfig with quantization flow (part 2) (#82557)
263c05c918 : [ROCm] work-around missing hipProfilerStart/Stop (#82778)
6098a7c5a6 : [MPS][BE] Use same convenience dispatch method (#82982)
3646e05acc : [temp mitigation][ios] add timeout to Run Build Test step (#83001)
24a084eda6 : [c10d] Fix async error in batch_isend_irecv (#82450)
88e43ca409 : properly compute batch_element_count (#82927)
d7cebab7d8 : Merge torchdim into functorch build (#82454)
ea39146507 : Add a common wrapper for make_fx to handle args and kwargs (#82965)
0f4f79fb9c : add the operations needed for electra model (#82856)
1cd1f48d93 : [TorchTidy] Adding support for storing scalar values in profiling (#81843)
10e7a25488 : [composite compliance] eig_backward (#82957)
a3d37f1114 : [composite compliance] quantile and nanquantile (#81767)
4f61aa79dd : Update nan_to_num prims (#82908)
c3f3b86fa9 : [functorch] fix unsqueeze_ batching rule (#82899)
62e095ab57 : [functorch] Add vmap in-place testing (#82898)
7c993d7f03 : [functorch] refactor get_fallback_and_vmap_exhaustive (#82897)
d5eea7f1a7 : [functorch] eliminate erroneous warning on `import functorch` (#82905)
73ddd41247 : [Profiler] Make KinetoEvent a view of Result (Part 4 (final), stragglers) (#81322)
c8db7102e2 : [Profiler] Make KinetoEvent a view of Result (Part 3, forwarded from `result_`) (#81321)
7a726a4f2d : [Profiler] Make KinetoEvent a view of Result (Part 2, python and stacks) (#81320)
63873ab770 : [Profiler] Make KinetoEvent a view of Result (Part 1: trivial fields) (#81319)
b91ff5e361 : [quant] Remove unneeded lines from APoT linear (#82909)
780a303a2a : [torchdynamo hash update] update the pinned torchdynamo hash (#82959)
4d3f7df33f : Properly compare floats in gemm-block-sparse-microkernel-tester (#82947)
51d12548b0 : [Refactoring] making the code more Pythonic (#82929)
0377615b68 : [MPS] Add native bitwise_[and|or|xor] (#82307)
18db7f36d7 : Fix possible out-of-range error of 'as_nd' util function (#82393)
1d56ea5e92 : Remove flatbuffer types/headers from flatbuffer_loader.h (#82893)
0c7ca2d97b : Revert "Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack (#82867)"
b170a52a09 : Revert "[ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)"
ab3c039910 : Fix FSDP device_id when CPU offloading (#82892)
7388913085 : [torchdynamo hash update] update the pinned torchdynamo hash (#82943)
a61c96492b : Add EnableTorchFunction (#82647)
51bbf6329a : Improved legalize_graph pass in FX (#82874)
4f255dbfb3 : Remove manual bindings for arange (#81380)
adc5e7d32e : Remove manual bindings for linspace, logspace and full (#81378)
2c2278a960 : Make python TensorOption signatures consistent with JIT schemas (#82241)
6e86e9bcb6 : Add CPU-only (for now) philox randn impl to aten (#82383)
dd3f60232d : Revert "Vectorize cpu tensor conversions (#80905)"
814c19b266 : Revert "Allow ufunc OpInfos to have no reference (#82348)"
785b13af66 : Revert "Use UnaryUfuncInfo for type conversion functions (#82349)"
13e78a2cb9 : Use UnaryUfuncInfo for type conversion functions (#82349)
566d734396 : Allow ufunc OpInfos to have no reference (#82348)
948cc542bf : Vectorize cpu tensor conversions (#80905)
5b51849b48 : Increase size limit on calling CublasLt in addmm by 32x (#82922)
082e6506b8 : Fix buick-build-and-test (#82935)
8d091ca434 : [docs] fix inconsistent default `rcond` value (#82887)
d24724499b : Parameterize TestGenericProxyTensor on tracing_mode (#82746)
b361f70347 : Reorganize test_proxy_tensor.py per tracing mode (#82739)
9ea0a203b9 : [xla hash update] update the pinned xla hash (#82931)
e66da90f50 : [torchdynamo hash update] update the pinned torchdynamo hash (#82930)
8a6c104ce9 : pytest - print message re skip info (#82901)
86437b8631 : [ao] Updated ModelReportVisualizer per-channel line plot (#82918)
da5272ef3b : [ao] Fix per-channel histogram visualization in ModelReportVisualizer (#82917)
eebcb9117a : Fix BSR->Dense Batched Bug (#82120)
0e0dfaa057 : Add support for `select` of batch dims for all sparse compressed formats. (#82119)
737fa85dd2 : Update CUDA compiler matrix (#82860)
7dd795cbed : Prevent ref cycle creation in inner hook (#82776)
3d5e49e91d : Better error message for torch.library (#82904)
6ddf4c6f58 : [ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)
c54d18dbc7 : Handle complex optimization in Adamax by treating complex numbers as 2D real numbers (#80319)
a4264da6c1 : [PyTorch] Refactor GlobalStateManager as a templated singleton class (#82152)
22e865a785 : Revert "Add CPU-only (for now) philox randn impl to aten (#82383)"
45291c7ec8 : Revert "Implement `mul(dense, sparse), mul(sparse, dense)` for sparse COO tensors. (#81556)"
796fba02fe : Revert "Implement and extend `mul(sparse, sparse)` to work with 0-dim arguments on either side. (#82717)"
efd8e083bf : Add pre-compiled headers to one of the CI runners (#77351)
2e74b51a4e : [ao] Added ModelReportVisualizer info to README for ModelReport (#82796)
561f1568c6 : [ONNX] Make einsum_helper in opset12 private (#82402)
e7ff9d44ad : [fsdp] add ability to iterate through dataclasses in fsdp.utils (#82638)
6bdf89b0c7 : [ONNX] Fix `argmin` and `argmax` test cases (#79503)
6d76b2c19b : Add CPU-only (for now) philox randn impl to aten (#82383)
5ca098fe38 : [ao] Changed ratio of channels needed for input-weight rec (#82795)
95c7fc395b : [ao] Fix punctuation issue with Dynamic Static Report (#82794)
7520f8758e : [CI] Disable Libtorch Linux Bionic Cuda for Infra Flakiness (#82862)
cafdc52cdc : Skip TestNNAPI tests if QNNPACK is not supported (#82882)
2255911f8a : Make M1 tests green (#82213)
1cafb1027f : Fix leak when create_graph and full backward hook registered (#82788)
d8ae83ba79 : Move OpInfo subclasses into opinfo.core (#82830)
4d405517e4 : Move OpInfo class into new opinfo folder (#82540)
8f38f6773a : [Quant][fx] Remove dequant-quant around getitem (#82675)
1885876530 : [vulkan] Pass correct output_padding_arg when creating conv2d contexts (#82861)
dcbe9ce2ad : Handle complex optimization in AdamW by treating complex numbers as 2D real numbers (#80280)
0f32d71b40 : [xla hash update] update the pinned xla hash (#82877)
7b4866673d : [torchdynamo hash update] update the pinned torchdynamo hash (#82876)
de0e03001d : Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack (#82867)
bfebf254dd : Re-land sym_numel (#82374) (#82726) (#82731) (#82855)
bdb0abb234 : Fix profiling with record_shapes=True and nested tensor (#82854)
e47637aabc : fix matching against MemcpyAsync (#82782)
6e712823c5 : Migrate remaining pytorch code to use new flatbuffer_loader.h APIs (#82620)
0810961d5f : Remove flatbuffer types/headers from flatbuffer_serializer[_jit].h (#82619)
802a4fd286 : New flatbuffer_loader functions that do not depend on flatbuffers.h (#82618)
4ae40d74ac : Back out "Add an op_lowering_disallow_list in fx splitter base class. (#82288)" (#82750)
0e16340f92 : Revert "Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack. (#81021)"
dc788f6d9e : Revert "Fix possible out-of-range error of 'as_nd' util function (#82393)"
b18498a636 : [ONNX] Add RReLU eval mode behavior (#82678)
c434bc2f91 : Add a utility function for quantizing bias (#82848)
1ac0285cd4 : Fix possible out-of-range error of 'as_nd' util function (#82393)
b721ea9b82 : [NNC] Skip buildShapeExpressions if ConstantChunk input shapes are unknown (#82698)
2f9d046d67 : [NestedTensor] Remove tensor buffer, replace with Storage type (#82757)
b1922e03ab : Test that multi_tensor optimizer state buffers match with single_tensor state buffers (#81894)
78bd95b13a : Revert "Re-land sym_numel (#82374) (#82726) (#82731)"
ae399d009f : [functorch] Add a bunch of low hanging fruit linalg batch rules (#82177)
34103a3033 : Refactor quant levels visualization (#82790)
f6749e7653 : Fix a segfault in `new_empty_strided` (#82422)
c1b9c95307 : Skip tests using uninitialized data (#82728)
0d22d6844a : [Vulkan] Implement arithmetic ops where one of the arguments is a tensor wrapping a scalar. (#81726)
9958cbeed7 : making sure we don't accept pointers in int64_t c-tor (#82743)
c90e00cf85 : Re-land sym_numel (#82374) (#82726) (#82731)
5ca9b2b6fa : Enable `dim=None` for `torch.var` (#82765)
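Enabling `dim=None` makes `torch.var` reduce over every element, i.e. it computes the unbiased variance of the flattened tensor. A stdlib-only sketch of that full reduction (the helper name `var_all` and its `correction` parameter are illustrative, not PyTorch API):

```python
def var_all(values, correction=1):
    """Variance over all elements of a flattened sequence, with
    Bessel's correction (n - 1 denominator) by default, as torch.var uses."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - correction)

flat = [1.0, 2.0, 3.0, 4.0]  # e.g. a 2x2 tensor flattened to one sequence
result = var_all(flat)
```

Passing `correction=0` instead would give the population variance.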
e3dd424265 : Adjust schemas of a couple prims (#82821)
8b20e47974 : add integer divison for symints (#82791)
c08092fdf2 : Update NCCL to v2.13.4-1 (#82775)
61b1301ed8 : dump ios test simulator logs on a failure (#82822)
8e33396cf4 : [vulkan] Add buffer to texture and texture to buffer copies (#82799)
26d50ff1be : [ONNX] Update merge rules and persons of interest (#82673)
a1af0d1bec : add saketh-are to torch.nn codeowners (#79431)
82f558feee : Allow user to assert no mask contiguous check is necessary (#82533)
3ab54b971f : Implement and extend `mul(sparse, sparse)` to work with 0-dim arguments on either side. (#82717)
8b4fee5912 : Remove unnecessary `import warnings` (#82760)
ff753cbc12 : [primTorch] Added unbind OpInfo and ref (#81776)
ec67c6abbe : Add torch.ops.nvprims namespace for nvFuser-specific prims (#82155)
7c298b8244 : Fix objcopy version detection (#82774)
83086b7f45 : Fix NCCL detection by Gloo (#82773)
112ec24f09 : Fix device behavior for masked_fill (#82737)
bbe8d019f2 : Fix Windows builds with _DEBUG flag
8be853025c : Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack. (#81021)
bcc8f592ba : [xla hash update] update the pinned xla hash (#82805)
3687792cb9 : Add a line about module owners to labels mapping in CODEOWNERS (#82748)
406ce692ca : [torchgen] Generate wrapper functions under custom namespaces (#81744)
cda8635a5e : [_shard] only check shard metadata for copy_ (#82655)
0a919e8bd6 : [torchdynamo hash update] update the pinned torchdynamo hash (#82458)
7d5db63076 : Store compilation os version on a per model basis (#82661)
dd838cee0d : Register Triton scaled dot product in aten (#82509) (#82758)
1164c83c3c : Revert "Revert "Added zero.symint and modified aten::trapz to use symbolic ints (#82054)"" (#82779)
95ed516d8d : Made another tweak to inductor heuristic + made it enableable from an arg (#82772)
d852dce720 : Revert "Bug fix in ExtraCUDACopy and remove unstable lints for release (#82693)"
c379915969 : Add nondeterministic alert to CUDA cumsum (#75693)
eb0e30e0bc : Enable `dim=None` for `torch.std` (#81845)
ed6d2b562e : Add ref for meshgrid (#82284)
26d4e53166 : Fix embedding quantization issue when memory format is not `contiguous` (#82605)
9887d51c8e : Properly compare floats in fully-connected-sparse-operator-tester (#82768)
0f52794ce7 : Revert "Added zero.symint and modified aten::trapz to use symbolic ints (#82054)"
9ec8d64d0c : [Vulkan] Implement packed contexts (#82730)
95d873855e : [ONNX] Inline prim::PythonOp for Autograd Function Export (#74765)
58d1cf7e39 : Fix issue 38095 TODOs in test_jit (#82629)
6d7b7615b1 : store parameter values as static shapes during constraint generation (#82742)
1f29a5f675 : linear constraints (#82614)
b858abd27c : add constraints for layer_norm function (#82597)
6c94eb0e4a : rule for bypassing scalars (#82590)
f6c2a7578b : Embedding rule for TorchDynamo (#82163)
50a1124fd0 : Bug fix in ExtraCUDACopy and remove unstable lints for release (#82693)
7922bbef73 : [TorchTidy] Add option to generate json report (#82261)
1a74fd166d : [Static Runtime] implementation of variadic grouped_accessor_async operation (#82680)
07088df37b : Bumping doc build instance type (#82740)
f4a808b582 : [functorch] share code between test_vmap_exhaustive and test_op_has_batch_rule (#82659)
657c97a060 : [functorch] Delete hacks for composite compliance (#82654)
cd73fc9456 : Added zero.symint and modified aten::trapz to use symbolic ints (#82054)
752579a373 : Preserve stack trace in nodes during fx.Transform (#82670)
7390ae837c : Resolve TODO for GroupNorm numerical issues (#82423)
baef03a0a1 : Add profiler paths to CODEOWNERS (#82741)
3921ed31f3 : Use multipy.package in `multipy/runtime` (#111) (#82690)
14b660fcc0 : [DataPipe] Correct the type of exception that is being raised by ShufflerMapDataPipe (#82666)
786a9d095a : Update backends.rst (#82525)
54064ad198 : [TorchTidy] Add pattern to detect matrix alignment in fp16 AND reorganize benchmark structure (#82248)
684ce1b0bc : add inplace_view tag to resize_() (#82667)
aa40503954 : Add Custom Module Support List (#82606)
9d228fe517 : [Small] Remove using c10d::ProcessGroup directive from c10d test (#82681)
b750c10fbe : [FSDP] Move the sharded_state_dict logic to the post hook to avoid OOM (#82613)
f4ee37453c : [dist.checkpoint] Change metadata format and improve error reporting (#82078)
fbbd036871 : [Reland] [functorch] Fix linalg batch rules to error on non-matrix inputs (#82176)
a3909d958a : Golf the function contiguous_strides (#82709)
39ffad392c : Fix faulty, vectorized `pow` function on VSX (#82646)
d0e6e5a5bb : Revert "sym_numel (#82374)" (#82726)
22fea8f654 : [primTorch] Added reference for unflatten (#81231)
d6303cd860 : Added OpInfo for unflatten and ref for flatten OpInfo (#81230)
dcf5188561 : [MPS] Fix scatter for bool type (#82685)
42fefd4403 : Sparse fake tensor support (#82172)
8092cf60c6 : Use enable_tracing flag for ProxyTorchDispatchMode instead of modifying torch dispatch mode stack inner attributes (#82643)
8da2b204e1 : [xla hash update] update the pinned xla hash (#82706)
d693478789 : Support variable arguments for refs.reshape and refs.view (#82651)
e63e97c59d : Update over-cautious heuristic in AOTAutograd (#82697)
fd68b0931f : sym_numel (#82374)
3139722679 : [foreach][mta] Inplace `maximum` and `minimum` (#82523)
9647bec0ec : Properly compare floats in fully-connected-operator-tester (#82688)
d3e7d09ebe : Revert "Merge torchdim into functorch build (#82454)"
b57188760b : [ROCm] torch.cuda.is_bf16_supported() returns True (#80410)
8d69349208 : [ao][sparsity] Refactor and bugfix SparseQuantizedLayer tests (#81802)
686a0941cd : [ao][sparsity][fx] making fx qat and sparsity compose (#82204)
15a284b09e : optimize softmax backward and logsoftmax backward (#80114)
916a565151 : Upgrade fbgemm in OSS PyTorch (#82676)
23b90044da : Merge torchdim into functorch build (#82454)
da0a3fe058 : [Re-land] [CUDA graphs] Clear autocast amp cache (#81896)
0fd990ef8b : Fix nightly docs (#82672)
420c576809 : [MPS] Unary ops over empty tensors should be no-op (#82650)
9e81c0c3f4 : Update model_ops.yaml (#82444)
149f0eb51e : [JIT] refactor SchemaInfo to lazily initialize (#82596)
428e02461f : Revert "Revert "Add a lint rule for PYBIND11_DECLARE_HOLDER_TYPE (#82556)"" (#82600)
df69660832 : Revert "Revert "Add a lint rule for torch/csrc/util/pybind.h include (#82552)"" (#82599)
31292599eb : Revert "[functorch] Fix linalg batch rules to error on non-matrix inputs (#82176)"
dba287ed11 : Revert "Remove unnecessary copy constructor (#82626)"
d326a55bff : [JIT] Refactor SchemaInfo training ops to account for rrelu_with_noise case (#82441)
19e3ea5e1c : [MPS] Add dispatch stub code for MPS backend. (#82612)
d362b8e9e6 : reland "add a reinplacing FX pass (#80897)" (#82407)
95c3e2370a : [mergebot] Amend messages for land checks (#82649)
8f86e36191 : Revert #75195 (#82504)
c05c3952cd : [FSDP] Implement _param_fqns() to return all parameter FQNs for the FSDP module (#82595)
9f5f6ba683 : check params shape for mkldnn_convolution (#76526)
ae2e303de0 : [TorchTidy] Reland #81941 Add pattern to detect if bias is enabled in conv2d followed by batchnorm2d (#82421)
c7dde7da6d : [tensorboard] Remove dependence on torch.onnx (#82628)
714669e22d : Remove unnecessary copy constructor (#82626)
f1f231c4dd : Pin windows numpy (#82652)
5aef03513f : Bumping nvidia docker version and using python 3.10 for cuda11.7 (#82472)
b2bb2251f3 : Replace SymIntTable with directly storing pointer in SymInt (#82468)
a3316cb3c7 : Attach ProxyTorchDispatchMode to ProxyTensor and use the mode in __torch_dispatch__ (#82549)
1dfcad84aa : [functorch] Fix linalg batch rules to error on non-matrix inputs (#82176)
900e93d351 : Add context manager for conditional rewrites of torch.* to torch._refs.* calls (#81764)
54c5f5a535 : [xla hash update] update the pinned xla hash (#82640)
87a11f112b : Abstract away git repo during tests (#82594)
7d490dba72 : CoreML backend should log on failure not success (#82604)
09a6d0b5bb : Fake: copy over grad attribute (#82593)
c6b1e096af : guard against issues where we try to access tensor_meta on non-tensor input (#82611)
62de55ed6b : Fix: memory cross-border access on the ROCM platform (#76100)
41b54c303d : Revert "Fix crash on unload torch cpu dll (#67632)"
6ada85eff0 : [Profiler] Clean up visit logic (#80822)
09059d9148 : integrate plugin (#82395)
2ffb23616d : Fix false positive AVX, AVX2 and AVX512 detection with MSVC (#82554)
48a34acf13 : [_shard] add copy_ to shardedtensor (#82508)
b8c1aa813c : Add check() to FileCheck type stub. (#82516)
f1a1356907 : [ROCm] Enable/fix unit tests test_stream_args and test_event_args (#82346)
6ca95547ac : Initial private SDP interface and naive composite impl (#81956)
c041b2f158 : [mergebot] Add Land Check Troubleshooting Message (#82580)
9b46737fca : Add tests for fake tensor striding (#82571)
b2f6aa666e : Add tests for aliasing in fake tensor (#82337)
642aed8b99 : Add Autocast Support for FakeTensors / use fake device dispatch keys (#82449)
a64c981d09 : Fixed QuantizedMeta creation via empty (#82587)
bf387e894f : Fix a NotImplemented mode bug and improve Parameter handling for fake tensor (#82574)
98215923ad : Correctly unpack constants when used in multi-return output (#82568)
532b8a9e00 : Revert "Add a lint rule for torch/csrc/util/pybind.h include (#82552)"
ab8e5e6fb5 : Revert "Add a lint rule for PYBIND11_DECLARE_HOLDER_TYPE (#82556)"
e06d1029f7 : [fx] Minor modifications to pass infra (#82485)
207ac1c507 : Move doxygen generation to nightly only (#82573)
bdd0a4a84c : [MPS] Fix `torch.full` for boolean types (#82575)
afafd16671 : Lintrunner: Run mypy-strict on torchgen (#82576)
9f0fccadd7 : Store non-symbolic SymInt as int in IValue (#82555)
ba84e9662e : Use OrderedSet in ufunc codegen (#82567)
e93b5210ec : [composite compliance] allclose, linalg_eig (#82437)
71d50f4f89 : Change docstring type callable to Callable for consistency (#82487)
c2ac3e6831 : Typo thrid->third (#82578)
246068604f : Add missing TORCH_ASSERT_*_OPERATORS defines in native/cuda (#82529)
6937c8083c : Add a lint rule for PYBIND11_DECLARE_HOLDER_TYPE (#82556)
9465c0e0b5 : Add a lint rule for torch/csrc/util/pybind.h include (#82552)
d08157d516 : directly init a zero immediate buffer to reduce overhead for batch_norm cpu path (#82558)
3141379f1e : Make SizesAndStrides able to handle nontrivial objects (#82467)
a9320e6d96 : Delete SymInt::data() in favor of as_int_unchecked() (#82477)
50e8abbcad : Change SymIntNode into an intrusive pointer (#82548)
36120ce5c0 : Adding 3.11 nightlies for linux PyPi (#82302)
c94706c011 : Fix docker build error related to libc6 (#82476)
6592259ea5 : [HPU] Enable torch.jit.load for HPU (#81759)
a54c9a419e : Fix crash on unload torch cpu dll (#67632)
53f56894ae : Fix nondeterminism in torchgen (#82536)
4bb7e148c4 : add nested tensor matmul support (#81957)
32cf6c6fb0 : Remove `THPTensor` defs, override macros, and `GenerateByteType.h` (#82503)
14d0296e5c : Rename `_Typed/_UntypedStorage` to `Typed/UntypedStorage` and update docs (#82438)
28304dd494 : [vision hash update] update the pinned vision hash (#82522)
d74d79a880 : [xla hash update] update the pinned xla hash (#82521)
7496937e3f : [MPS] Add prelu (#82401)
5f9939f65e : Introduce discontinuity to nested tensor (#80981)
dc53cb4536 : Added AOT_PARTITIONER_DEBUG and added list of ops that inductor supports fusing (#82283)
0b2566456f : [CUDNN] Update tests and dispatching for CUDNN V8 API behavior for `bfloat16` convs (#81139)
b9cdd6d0ac : Do not assume what is in `os.environ` (#82495)
301fe8c27d : [torchgen] Fix multiple backends with custom namespace (#82133)
3ca78a4c75 : redispatch sym overloads to sym versions correctly (#82380)
6830573c5a : [JIT] Revert SchemaInfo usage in AliasDb (#82475)
ccd30a12a2 : [PyTorch][Kineto] add ActivityType.h when USE_KINETO is not set (#82028)
3b9cbb1738 : Revert "Change SymIntNode into an intrusive pointer (#82432)"
ff5399e528 : Revise sparse docs regarding Sparse Compressed tensors (#82108)
8def154e00 : Fix multiple docstring type mistakes (#82474)
7be44f8158 : Change SymIntNode into an intrusive pointer (#82432)
7eed83e016 : fix functionalization handling for mixed functional/nonfunctional tensorlists (#82326)
ba4727d4e5 : Codegen: Parse deprecated signatures as a full FunctionSchema (#82179)
2f95b61cea : Revert "Revert "Make factory functions CompositeExplicitAutograd (#82251)"" (#82470)
61b21f28d8 : [Profiler] Handle events from Kineto in the unified result class. (#80797)
38b4114278 : [MPS] Add MPS implementation for constant_pad_nd() (#75) (#82366)
386b398317 : Update MPS POI (#81757)
19bdc4f007 : updated file names for viable-strict script (#82414)
e0f708d4d3 : [quant] Remove resnet18 model file, instead fetch from remote (#82447)
edd2f6daa7 : Implement `mul(dense, sparse), mul(sparse, dense)` for sparse COO tensors. (#81556)
fd84c458f4 : Add torch.unflatten and improve its docs (#81399)
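`torch.unflatten` splits one dimension into several whose sizes multiply back to the original extent; one entry of `sizes` may be -1 and is inferred. A small sketch of just the shape arithmetic (the helper `unflatten_shape` is illustrative, not the real implementation):

```python
def unflatten_shape(shape, dim, sizes):
    """Compute the result shape of unflattening shape[dim] into `sizes`.
    One entry of `sizes` may be -1, in which case it is inferred."""
    sizes = list(sizes)
    n = shape[dim]
    if -1 in sizes:
        known = 1
        for s in sizes:
            if s != -1:
                known *= s
        sizes[sizes.index(-1)] = n // known  # infer the missing factor
    prod = 1
    for s in sizes:
        prod *= s
    if prod != n:
        raise ValueError(f"sizes {sizes} do not multiply to {n}")
    return tuple(shape[:dim]) + tuple(sizes) + tuple(shape[dim + 1:])
```

For example, unflattening dim 1 of a (2, 12) tensor with sizes (3, 4) yields shape (2, 3, 4); this is the inverse of flattening dims 1 and 2 of a (2, 3, 4) tensor.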
5257d1d64b : A Launch script with Best Recipe of Deep Learning on Intel Xeon CPU (#63932)
d3cf482c81 : [vision hash update] update the pinned vision hash (#82459)
4f2a475f7f : refactor add_conclusions in trymerge.py (#82371)
69a32e6acb : Enable xla build caching (#82342)
af7dc23124 : [Profiler] Add `tag` property to `Result` (#81965)
8d14b06c05 : [Profiler] Improve tree testing (#81895)
7edd947178 : [Profiler][Python tracer] Add ephemeral inputs to the value cache. (#81958)
7bb1d44a55 : [cuDNN] Work-around 32-bit indexing failures in cuDNN batchnorm (#81486)
1df307f334 : Revert "Make factory functions CompositeExplicitAutograd (#82251)"
1c523f75a9 : Allow PR collaborators revert PRs (#82360)
0b132602f0 : [ONNX] Test and address autograd.func (FusedLayerNorm) shape inference (#81931)
16093a1d81 : Fix primtorch out_wrapper semantics for factory functions (#82375)
6e56629efa : [JIT] JIT script init verbose assert (#80495)
e3243203b0 : Revert "Add Python to CompositeImplicitAutograd (#82333)"
a7299647d3 : Re-enable non-contig in qarithmetic_test.py (#82345)
0bb467de69 : Add shapefn for selu and adaptive_avgpool3d (#82297)
fb54ddfe66 : map_nested_tensor (#82240)
4630b9f44e : [Easy][FSDP] Remove variable shadowing (#82386)
727a327162 : Back out "Back out "[profiling] Adding targets file for test_mobile_profiler"" (#82243)
89c0123ba0 : Add rudimentary NestedTensor.sum(dim) (#82387)
2bfae07a79 : Enable `dim=None` for `torch.mean` (#81286)
357b7d589c : Fix docstring inconsistencies: string -> str, boolean -> bool (#82410)
8324cdda35 : [ONNX] Add quantized model tests to CI (#80398)
938765e0b6 : Add all supported Jetson platforms (#82404)
ee7c98ade3 : DDP: Get static graph to print unused parameters in debug mode. (#81929)
4680047001 : Modify LinearAPoT matrix multiplication bitshift to support all k (#82409)
46eeb78c56 : add graph_name to functorch (#82368)
34bdd46e6e : Rename shared_ptr<SymIntNodeImpl> to SymIntNode (#82355)
fd5ac1e6b5 : Rename SymbolicIntNode to SymIntNodeImpl (#82350)
1a20c69385 : Add Python to CompositeImplicitAutograd (#82333)
9943ca3ce6 : Make factory functions CompositeExplicitAutograd (#82251)
40a0150f8b : Revert "libtorch: exclude from libomnibus to support multipy usage from pybind (#81672)"
8707aabe9a : Bundle lazy ts backend (#82384)
11c7e7ae08 : [GHF] Refactor + test for revert checks (#82381)
86f038dd56 : download test times during build to avoid race conditions (#81915)
2fe73164b6 : Revert "[TorchTidy] Add pattern to detect if bias is enabled in conv2d followed by batchnorm2d (#81941)"
688b971876 : Extend fake tensor tests to cuda, add support for index put (#82281)
925ebbe215 : [mergebot] dont run land checks when trunk label is there (#82338)
cd28b3cd5c : Re-enable some functionalize tests (#82363)
c6c0366999 : [ao][sparsity][fx] adding unit tests for sparsity+reference quant (#81994)
1ebe98220c : Optimize the copy of BFloat16 to Float and Float to BFloat16 (#79685)
bea5c08dd1 : Less confusing way of denoting dimension in symint name (#82317)
b019a41674 : fix bug for thnn_conv2d when input's C is 1 and weight is channels last (#82392)
a1e6325149 : Implement linear module for APoT quantization (#82105)
69eecdbc9c : Introduce MetadataIndex and helper to use it. (#81909)
2d8ca1820a : [vision hash update] update the pinned vision hash (#82193)
3749c09fde : [torchdynamo hash update] update the pinned torchdynamo hash (#82192)
615f2fda4f : [TorchTidy] Add pattern to detect if bias is enabled in conv2d followed by batchnorm2d (#81941)
b472852778 : ci: Use SCCACHE_S3_KEY_PREFIX in CI builds (#82103)
863176a1c7 : Remove `torch/csrc/generic` (#82373)
ec5c85ca01 : fix syntax for killing the monitoring script (#82390)
13ad4739a6 : [quant] Implement PTQ for APoT FakeQuant (#81040)
f445c220be : [TorchTidy] Add pattern to detect if set_to_none is set in zero_grad() (#81921)
a71d0e882c : Add an op_lowering_disallow_list in fx splitter base class. (#82288)
677908e9e7 : Make pytest ending output less chatty (#82262)
98b9dfa129 : Add decompositions for zero_, fill_, new_full, new_zeros, new_ones (#82332)
4a000ff03e : [NVFuser] Upstream push 0714 (#81861)
47b03ad42e : [MPS] Workaround for uint8 gather bug (#82315)
9e40be3b78 : [vulkan] Fix GenericList constructor (#82365)
28c43190b8 : [_shard] Add ShardedTensorBase (#82291)
34bb3714f0 : Remove obsolete onnx_c2 scripts (#82285)
554b4060aa : Revert "[JIT] Support scripting torch.is_autocast_enabled() (#81305)"
0e95746580 : [RFC] enable oneMKL & oneDNN on-demand verbose functionality (#63212)
de9b3fb3e5 : Minor comment updates on DispatchKey.h (#81923)
bcc9084bc4 : [JIT] Support scripting torch.is_autocast_enabled() (#81305)
df36ccbd81 : Revert "add a reinplacing FX pass (#80897)"
1c0f7bd6d2 : Enable complex for meta tensors (#79975)
82712b7985 : [PyTorch] Support ExclusivelyOwned<caffe2::Tensor> (#81964)
703cfb0074 : Fix floating point exception when nnz is zero for softmax backward cuda (#82341)
8d82367f52 : [ao][sparsity][fx] make sparse prepare->quant prepare compose (#81993)
d537f868f3 : [TorchTidy] Add Pattern to detect Synchronous Data Loader (#81740)
df1b7c2978 : Enable distributed tests for ROCm (#81751)
22bc497656 : [GHF] Ignore cancelled workflows (#82359)
e0e3a98555 : [ao][sparsity] README for base data scheduler class (#82131)
2d5318434e : Generate vmap plumbing on all native_functions, not just ones in allowlist (#82352)
5c92777307 : Stop checking in VmapGeneratedPlumbing.h (#82351)
d80fe49de0 : [Reland] Add py-3.10 config (#82329)
3b6b27e9d7 : Add a miniature backend select implementation for prims (#82311)
c2ccf6e625 : [JIT] Add backwards compatibility test for old NonDeterminism ops list in ir.cpp (#82257)
7ab8b83ba9 : Change autogradNotImplementedFallback to utilize is_mutable and is_aliasing (#82256)
8d5951e7e8 : [JIT] Add is_aliasing method to FunctionSchema (#82255)
b82f2c6159 : [JIT] Fix AliasDB edge case for multi non existing alias sets in output (#82254)
67c22b6c07 : [JIT] Modify is_nondeterministic to utilize tags in SchemaInfo for non-mobile contexts and integrate with ir.cpp (#82253)
842f05f014 : new doc/tutorial module has been added, with the first maintainer svekars… (#82274)
1231194dd3 : Add hardshrink op to metal backend (#82224)
3a4343afc5 : [ROCm] parse rocm version during hipify (#82258)
3ef7a6921d : add a reinplacing FX pass (#80897)
46b83f66ec : functionalization: fix striding for out-of-place copy() (#82009)
286b2e1b2b : fix lift() when using functionalization with fake tensors (#82008)
7f7c81c5f9 : Add empty_like support for sparse_csc/bsr/bsc (#82310)
617e90db22 : Add meta support for eye (#82309)
f9ef363982 : Modifying Adam to support complex numbers as 2d real numbers (#80279)
26776d628c : Revert "Initial private SDP interface and naive composite impl (#81956)"
d38ffa6a4c : Make all of new_/_like factory functions composite explicit autograd (#82238)
487c0e1181 : [pytorch] Bump SoLoader version to 0.10.4 (#81946)
f67b293bfa : Fix some mypy issues in functorch/codegen/gen_vmap_plumbing.py (#82314)
3f740f6d7f : Move test_dtypes so it runs later (#82169)
0933c037e7 : libtorch: exclude from libomnibus to support multipy usage from pybind (#81672)
68ed092cc6 : Increase atol for test_forward_ad_nn_functional_conv_transpose3d_cuda… (#81889)
4a77bee661 : prevent python view impls from getting registered to the meta key (#82007)
8533951f09 : [ao][sparsity][fx] make quant prepare -> sparse prepare compose (#81992)
e0faa02fe7 : simplify a check in python_arg_parser for varargs. (#82271)
ad788662b1 : [ao][sparsity] README for activation sparsifier (#81814)
7af2baffce : [ao][sparsity] README for data sparsifier lightning callbacks (#81813)
8580f9c3f8 : add comment explaining empty dispatch entry (#82298)
0d0bd0e3c6 : [ao][sparsity] README for BaseDataSparsifier (#82130)
7391dec96a : [ao][sparsity] Bug Fix: Retain mask and config while replacing data in data sparsifier (#82129)
78412824ef : enable native 1d spatial input for Intel xpu (#82301)
f15c5bf133 : Initial private SDP interface and naive composite impl (#81956)
01e46fda01 : Add functorch shard for mac x86 tests, linux cu102 tests (#82000)
2cd037e739 : Scalar multiplication for nested tensors (#82265)
629ad91b67 : Add CppFunction::makeFromBoxedKernel (#82268)
9abf60fab4 : Create BoxedKernel as a subset of KernelFunction (#82267)
d1a68ddf92 : [vulkan][test] allow for skipping of tests in vulkan_api_test (#82244)
d750f42827 : Autoformat functorch/codegen/gen_vmap_plumbing.py (#82246)
ec99a8003a : [ROCM] Improvements of incremental hipification and build (#82190)
a721d27c51 : Make TORCH_SHOW_DISPATCH_TRACE actually work (#82277)
d2078fac11 : [dist.checkpoint] Cleanup usage of collectives and introduce narrow helper (#81828)
52aae5aa19 : [Sparse Adam] Fix error in loading serialized models due to introduction of new parameter (#82273)
18d0e533da : fix silent type promition for sparse COO tensors with `select` (#82215)
4b7de26556 : Fix C API to be compatible with latest 3.11 beta (#81242)
9f6e3b0fe4 : [vulkan][test] benchmark div op (#82225)
8c519b1090 : [vulkan][test] benchmark mul op (#82222)
1ab72cf5fb : [vulkan][test] benchmark sub op (#82221)
0dc54e1c87 : [vulkan][test] benchmark conv2d op (depthwise) (#82128)
18be9015e2 : [vulkan][test] benchmark conv2d op (pointwise) (#82127)
11deae92b0 : [vulkan][test] benchmark conv2d op (#82125)
6dc7ca55b5 : Resolve old Jenkins->GHA TODO (#82280)
68530c11e2 : [vulkan][test] benchmark add op (#82124)
560a19b2e2 : [vulkan][test] perf test demonstration (#82123)
9088757cc6 : move aten.native_batch_norm_backward decomposition to core (#81522)
c1b564607d : add graph dumping utilities (#82184)
80e2d5704b : Add OpInfo and ref for linspace and logspace (#81826)
24d702d38e : Fix invalid read in masked softmax (#82272)
a42616e0bf : Revert "Revert "Ported aten::cross to work with symints (#82052)"" (#82287)
c81b802ead : Fix a typo in JIT overview.md (#82269)
3cf9c3d876 : Remove obsolete Python < 3.3 TODO (#82278)
200a27d8be : redispatch expand.SymInt to its int version properly (#82264)
8e926ff49e : Change ADD to COPY in Dockerfiles (#82151)
63891227d1 : [MPS] Handle int inputs of matmul ops by returning error for unsupported data types (#82183)
e519dd37e1 : Revert "Ported aten::cross to work with symints (#82052)"
30ed427d2e : Ported aten::cross to work with symints (#82052)
91b4648633 : Did some cleanup of symbolic shapes (#82051)
fc389cc0a0 : Added new_empty.symint overload and a new_empty ref (#82049)
4a34cbc7cd : [ONNX] exporter native_layer_norm (#81754)
b6b8803a5c : [ONNX] Add skip decorator for related tests (#82037)
85a9e7367c : [ao][sparsity] Store mask as sparse coo during serialization for the activation sparsifier (#82181)
18e8bc9b72 : [ao][sparsity] Store mask as sparse coo during serialization for the data sparsifier (#82180)
7d031db4a5 : move ROCmBackwardPassGuard from autograd engine.cpp to function.h (#82187)
767d740aab : [mergebot] fix up land message (#82223)
452af0bc44 : Call lift fresh in valueToTensor (#81927)
b7f9315eac : [ONNX] Export aten::native_dropout (#81743)
d8b03b1b8d : being_migrated modules use the original allowlist (#82022)
500be5998d : Revert "Introduce discontinuity to nested tensor (#80981)"
7a4a8f327a : Add new NVFuser Python Frontend Record Keeping for Cache enablement. (#81578)
0b0dbc59e6 : Revert "Update shallow_copy_and_detach for nested tensor impls to enable nested tensor softmax backward (#81838)"
6c10a598ca : Revert "add nested tensor matmul support (#81957)"
d2c47d559c : Revert "Revert "Enabling SymInt in autograd; take 3 (#81145)"" ; make sure is_intlist checks for symintnodes (#82189)
30e74be784 : a new section for ir generation (#81847)
801f0d24bb : [primTorch] Add rsub reference (#80421)
4594555123 : [vulkan] allow for skipping of tests (#81905)
a01fb5392f : Modify APoT dequantize method (#82126)
c8002614bf : only update pinned hash if new hash is newer than old hash (#82173)
04fc3e4c04 : [ao] Add histogram visualization capability to ModelReportVisualizer (#81975)
aaf13cd589 : fix upload stats when workflow_run conclusion is null (#82186)
76923c513a : Replace black with ufmt linter (#82199)
d7d04fd38c : [ao] Add line plot visualization capability to ModelReportVisualizer (#81974)
eac20a45fa : [ao] Added table visualization capability to ModelReportVisualizer (#81973)
b9ed224a1b : [ao] Added filtered table generation capability to ModelReportVisualizer (#81673)
2bae67fcf8 : Fix "Failed to fetch https://github.com/rust-lang/crates.io-index" (#82171)
6b0ca72b61 : Add prim, ref, and OpInfo for arange (#81734)
650afa83d2 : Make eye composite compliant (#82114)
e1bd244a14 : Revert "[JIT] Modify is_nondeterministic to utilize tags in schemaInfo and integrate with ir.cpp (#81836)"
dae51b55ff : Revert "[JIT] Fix AliasDB edge case for multi non existing alias sets in output (#81837)"
cbeef2c541 : Revert "[JIT] Add is_aliasing method to FunctionSchema (#81916)"
69acd1f505 : Revert "Change autogradNotImplementedFallback to utilize is_mutable and is_aliasing from FunctionSchema (#81917)"
18dd7e55c9 : Revert "[JIT] Add backwards compatibility test for old NonDeterminism ops list in ir.cpp (#82029)"
c15924969e : Add shape fn for isnan (#82162)
0b3037cbf6 : [mergebot] set on green to true if it has ciflow (#82110)
7bdafed4f1 : add nested tensor matmul support (#81957)
2c2f122674 : Update outdated nondeterministic error examples (#82003)
fc78976921 : [ao][sparsity] README for data sparsifier benchmarking (#81781)
312ece7f65 : fix sgd maximize when momentum is involved (#81859)
58c330fcbb : [ao][sparsity] Data Sparsifier Benchmarking: Forward time evaluation of the sparse dlrm model with torch.sparse (#81780)
4791ab6d4f : [vulkan] implement quantized upsample (#81720)
645a9235f0 : Add functorch shards for windows CI (#82161)
5e28c05cd8 : [vulkan] implement quantized div (#81641)
026ef78b22 : [DOC] fix a mistake in torch.dist document (#82104)
4bf2f140b3 : [vulkan] implement quantized mul (#81640)
d15f742f47 : [vulkan] implement quantized sub (#81632)
1d9462be7e : [vulkan] fix broadcasting (#81631)
dc57624622 : Revert "Add prim, ref, and OpInfo for arange (#81734)"
9d45243e24 : Move empty_like to DONT_REQUIRE_DERIVATIVE list (#82178)
5df1ce46f0 : Revert "[resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)"
4f34cd6d1e : Replace all CHECK_ and DCHECK_ with TORCH_* macros (#82032)
cab819222a : Exclude sparse from non-functional alias (#82064)
a6c792df5c : svekars added as a maintainer to the docs module (#82157)
6ab1fe19ee : torch.sparse.softmax avoid div by zero and invalid kernel launch parameters (#82149)
67dc9abbd8 : Add prim, ref, and OpInfo for arange (#81734)
355d5da24b : [MPS] Perf fixes. (#81951)
6ea422dd0b : Format torch/onnx with ufmt (#82137)
0c8a1c4d85 : Really check GRADLE_OFFLINE (#81954)
6691ed7884 : [ONNX] Export aten::_log_softmax (#81804)
e849ed3d19 : Redirect print messages to `stderr` in `torch.utils.cpp_extension` (#82097)
e32691dc7a : [ONNX] extend add and sub exporter to cover graph non-tensor inputs (#81736)
0fcdf936e7 : Skip tests that don't call gradcheck in slow gradcheck CI (#82117)
ce92c1cfe9 : [resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)
eca21fbd17 : [ao][sparsity] Data Sparsifier Benchmarking: Model quality evaluation of the sparsified DLRM model (#81779)
1223e94469 : [ao][sparsity] Data Sparsifier Benchmarking: Evaluating disk savings of DLRM model (#81778)
6697f1e467 : Update shallow_copy_and_detach for nested tensor impls to enable nested tensor softmax backward (#81838)
11fe277b62 : [PrimTorch] Add reference for torch.norm (#81765)
f430e0b459 : [Vulkan] Implement Stack operator (#81064)
050aec1805 : [nn] add `pop` to sequential and ModuleList (#81601)
7ff121e75a : [reland] make ShardedTensor be a Tensor and nn.Parameter (#82089)
40e99fe6db : Re-enable iOS simulator tests (#82027)
e7920c03ae : Fix capitalization (#82112)
563f6c7a9e : Pass stride overload, not overload packet; add aten.stride.default (#82083)
c88fbae480 : All zero sparse tensors are always coalesced (#82085)
57a566234f : [FSDP] Refactor casting of grad to full param dtype (#81574)
1a9317ca64 : [ONNX] Export aten::convolution (#81815)
6f2a88dd50 : script to monitor memory + cpu utilization (#82006)
0e995f88ea : Add some more SparseMeta support (#82086)
110cd724fc : [nn] Add support for +=, * and *= operations for nn.Sequential objects (#81279)
28262e1ef0 : change rocblas.h -> rocblas/rocblas.h (#80849)
7288ea4e1d : [JIT] Add backwards compatibility test for old NonDeterminism ops list in ir.cpp (#82029)
3c20b5be86 : Change autogradNotImplementedFallback to utilize is_mutable and is_aliasing from FunctionSchema (#81917)
eb2ea9a581 : [JIT] Add is_aliasing method to FunctionSchema (#81916)
5237677974 : [JIT] Fix AliasDB edge case for multi non existing alias sets in output (#81837)
fc3555ce4d : [JIT] Modify is_nondeterministic to utilize tags in schemaInfo and integrate with ir.cpp (#81836)
14968d59f2 : s/Compiance/Compliance (#82087)
db0e121b46 : [composite compliance] put, take (#81094)
d30784be31 : [ONNX] Fix ONNX aten::mul exporter with boolean inputs (#81671)
767942dacd : Add Pull Request template for functorch (#81995)
f42ed3f98f : Turn on linting for functorch (#81987)
5cb802a63b : Delete some no-longer-neccessary debugging output (#81990)
68bd687297 : Add more functorch shards to PR CI (#82013)
5f4e8c0a4d : Add ability to functorch tests via run_test.py (#82012)
49b4f45781 : Add initial support for differentiable optimizers (#80938)
0894c4967d : Add test_make_fx_model_train example (#980) (#82011)
02550bc1f0 : Support non-standard bools in CUDA mode (#79393)
5880a66758 : [composite compliance] matrix_exp (#81225)
83c6113712 : [torchdynamo hash update] update the pinned torchdynamo hash (#82080)
594652f0e4 : [ROCm]: Enable test_grad_layout_1devicemodule_1replicaperprocess (#82005)
64094d81fe : Remove unused line (#82019)
5810391be5 : Do not suppress fallback exception in fake tensor (#82066)
17730ace5a : [torchdynamo hash update] update the pinned torchdynamo hash (#82069)
51c3054ff5 : [xla hash update] update the pinned xla hash (#81676)
230f217d6e : fix a warning uint to int changes a sign (#82030)
88ee79e582 : Spelling (#82067)
194255bb56 : [Quant][fx] Implement BackendConfig (part 1) (#81469)
1a18ff3247 : Revert "Revert "Added dynamic shape POC (#81093)"" (#82063)
40feeea500 : Fix typo in dirichlet.py example (#82062)
0985188c8f : Fixes wrong link in CONTRIBUTING.md (#81809)
ae5c166035 : Fix two small typos in ddp_comm_hooks.rst (#82047)
76ca5af216 : [Vulkan] Fix view op to infer dim size when -1 is given (#81668)
0888a4844c : Revert "Added dynamic shape POC (#81093)"
a3fc076f22 : Copy black config to ufmt and run lintrunner -a (#82043)
47b4ec8aa7 : Conv2d shape calculation for meta tensors (#79834)
d576a7dc97 : [JIT] Fix python binding error with empty containers in init.cpp (#81786)
9a5fa15ea8 : [JIT] Remove BatchNorm and InstanceNorm special cases from AliasDB and replace with SchemaInfo is_mutable checks (#81785)
1e8ef8cb20 : [ROCm] Update CI docker images and jobs to rocm5.2 (#81168)
8169a85dc6 : Added dynamic shape POC (#81093)
c3fb47af59 : [vision hash update] update the pinned vision hash (#82035)
c0017ad0c2 : [torchdynamo hash update] update the pinned torchdynamo hash (#82036)
03ebe6d1d9 : add workflow name to conclusions dictionary in trymerge (#82026)
35d97e21c8 : [DataPipe] Simple graph snapshotting (#79479)
cb63ffc553 : Add decomposition for `aten.upsample_bilinear2d.vec` (#80964)
e07ae941f0 : [Vulkan] Patch Linear op to support higher dimensional input. (#81773)
aeb97d9559 : [TorchTidy] Add pattern to detect if single-tensor implementation optimizer is used (#81733)
64c6387c0f : [Profiler] Add speedup estimate for FP32 pattern and Extra CUDA Copy Pattern (#81501)
10a47c533d : [FSDP] Update `ShardingStrategy` and `_free_full_params()` docs (#80894)
2c57b27c0f : add all missing constraint for HFmodel (#81714)
1ba63e5a56 : [ao][sparsity] Serialization support (#80890)
aa23447904 : [ao][sparsity] Implementation of squash_mask() (#80889)
6b3bf3d6d9 : [ao][sparsity] Implementation of step() and update_mask() (#80888)
5fe3a1669c : [ao][sparsity] Implementation of register_layer() and get_mask() (#80887)
4c0000a98e : [ONNX] Remove duplicated test run (#81146)
df4226b48a : [ONNX] Fix bug using std::copy_if (#80999)
f87d8c2f62 : [ao][sparsity] Basic implementation of activation sparsifier (#80886)
f114468fd2 : Allow user to disable built-in fuser when using TorchDynamo (#81731)
98cad3d305 : [Checkpoint] Fix autocasting (#81766)
75aab6540e : [ao] Update DynamicStatic Detector to account for Conv (#81972)
0cacaf070f : [ao] Fix to InputWeightEqualization detector to handle Conv groups (#81971)
f51cf774c6 : Revert "[_shard] make ShardedTensor be a Tensor and nn.Parameter (#79825)"
dc3c1ade4b : Some fixes for FX pass with nvFuser backend (#81911)
89a6680036 : Update the pull request template (#81991)
9c94e10bba : [FSDP] Move `_post_backward_called` to `_init_param_attributes` (#81243)
777ff539f2 : [FSDP] Clean up `_lazy_init()` (#80185)
4bf33ab808 : [Easy][FSDP] Add `zero_grad()` to unit test train loop (#80087)
3059b13791 : [FSDP] Remove `self.numel_padded_per_param` (unused) (#80002)
790b122901 : [FSDP] Move tensor sharding logic to `FlatParamHandle` (#80000)
b069120d9c : [FSDP] Deduplicate `_orig_size` and `_unsharded_size` (#79984)
be656f55b1 : [FSDP] Introduce `FlatParamHandle` (#79652)
2ac24675cc : get rid of push_torch_{dispatch, function}_mode (#78215)
3bd08e3410 : Revert "Add more functorch shards to PR CI (#81919)"
6381cd61ec : Revert "Add python stack tracing option on on-demand flow (#80919)"
1a6329209c : Lazy shape symbolic const (#81718)
50c655d5e3 : Adding maximize to ASGD (#81875)
9c32439a77 : [_shard] make ShardedTensor be a Tensor and nn.Parameter (#79825)
38988a8d14 : [rpc/distributed] eliminate code duplication in distributed/rendezvou… (#81577)
9f9cff2bf2 : [vulkan] implement quantized depthwise conv2d (#81497)
a86aa478d0 : [vulkan] implement quantized pointwise conv2d (#81496)
0481cb8114 : [vulkan] implemented quantized reg conv2d (#81495)
054d4c12e0 : [vulkan] implement quantized add (#81494)
74c6a754a5 : [ONNX] Deprecation utility (#81901)
1bdb1a0a35 : Add `scalarType` to JitType pyi (#81112)
3c2c2cc947 : cudagraphs dynamo backend (#80566)
38023ff371 : fix LTC usage of torch.tensor ctr, add test (#81928)
008bff1e03 : Fix missing symbol issue when USE_NUMPY=False (#81967)
6e9b0dcdc4 : Revert "Implement `mul(dense, sparse), mul(sparse, dense)` for sparse COO tensors. (#81556)"
c078476eb0 : Revert "Enabling SymInt in autograd; take 3 (#81145)"
0d1710ade5 : Revert "[FX] Fix PyTree unpacking carrying forward type annotations (#81906)"
f50a248a5e : Add python stack tracing option on on-demand flow (#80919)
bf6481553a : Ensure `Transform` is pickleable. (#81707)
cc5b01651f : Implement `mul(dense, sparse), mul(sparse, dense)` for sparse COO tensors. (#81556)
b133a0454c : [torchdynamo hash update] update the pinned torchdynamo hash (#81961)
c07bcc3cbc : [vision hash update] update the pinned vision hash (#81960)
e0d83a0bdc : [FX] Fix PyTree unpacking carrying forward type annotations (#81906)
c52ee6dc0a : CSE Pass and common pass Tests (#81742)
737dbfe3b2 : Remove functorch pin mechanism (#81924)
68d18f217f : Add more functorch shards to PR CI (#81919)
5b359c5369 : Change functorch pin mechanism to test functorch in pytorch/pytorch (#81918)
a7d7f6d856 : Change ADD to COPY (#81944)
12cb26509a : Apply ufmt to torch internal (#81643)
f595467e5c : Reenable slow gradcheck and make it pass (#80514)
d28e667159 : Update actionlint (#81922)
b2f09ad161 : Revert "Support non-standard bools in CUDA mode (#79393)"
7592d3f646 : Support non-standard bools in CUDA mode (#79393)
1d88635b1f : Fix cuda-mode and add more tests (#81898)
521d5ae1ce : Revert "Enable reentrant dispatch for decompositions (#81598)"
032facd6e6 : Enabling SymInt in autograd; take 3 (#81145)
18e2aeae01 : [T124977342] Clean up duplicated code in vulkan_api_test (#81834)
31c3dcb20b : Small optimisation to linalg.cholesky (#81316)
96dfee4ce7 : [PrimTorch] Reference for linalg.norm (#81241)
c5330183ca : [PrimTorch] Reference for linalg.matrix_norm (#81113)
5fa64ed9dc : Add shape fn for hardswish and backward (#81827)
3574329a76 : Cleaning Indexing.cu (#81477)
ec77d35bda : remove backend keys from FunctionalTensorWrapper, update TensorImpl::is_device methods (#81471)
2529ff4bd9 : Registered python meta functions to a table (#81092)
031ec66311 : Add warning about DCE in FX being unsound with mutation (#81818)
44193f6b5d : Add basic support for sparse meta tensors (#81800)
104cbbf115 : Generalize sparse construction to work with arbitrary device types (#81796)
6f0c253956 : Add sparse, quantized and nested tensor meta support to codegen (#81793)
a33c5aceb5 : Get TensorOptions.h using the FORALL macros to reduce manual code (#81789)
ec91f42d79 : Sync up DispatchKey in model with C++ (#81770)
1724e9f21f : Refactor functionality and backend keys to reduce duplication (#81752)
635f670dbb : Support non-standard bools in CUDA unique (#79392)
43523f4602 : [functorch] remove kl_div skips (pytorch/functorch#975)
c6600e2513 : [functorch] Use python3 in shebang
e2361bcc0f : [functorch] Add exhaustive testing of vmap autograd composability (pytorch/functorch#851)
cac8662625 : [functorch] disable tests to land symint autograd changes (pytorch/functorch#973)
aa94afa225 : [functorch] [batch-rule] householder_product (pytorch/functorch#972)
cc07aaf315 : [functorch] Use fake tensor for primal computation in AOTAutograd (pytorch/functorch#935)
cb62788990 : [functorch] Allow batch norm with all variations of batching when training=False (pytorch/functorch#958)
f926b0a4dc : [functorch] Replacing iterator with tuple for ops (pytorch/functorch#971)
a425011fe7 : [functorch] sum.SymInt (pytorch/functorch#969)
254f1ca1e0 : [functorch] Remove nn.Module from aot_module_simplified (pytorch/functorch#970)
63820e9b7d : [functorch] Align functorch lint with PyTorch, part II (pytorch/functorch#968)
bca7f7c0cc : [functorch] searchsorted.Tensor batch rule (pytorch/functorch#966)
268c81dd7b : [functorch] add view.SymInt kernel (pytorch/functorch#965)
8073fba6e5 : [functorch] add functionalize index tests (pytorch/functorch#820)
cca627d51c : [functorch] functionalize test hardening (pytorch/functorch#732)
bc147a3d3d : [functorch] masked_fill.Scalar batch rule (pytorch/functorch#964)
b33c1f7dd4 : [functorch] forward-fix flake
cf208394cc : [functorch] make sync logic more accurate for functionalization (pytorch/functorch#795)
d02f085f70 : [functorch] Align functorch's flake8 config with pytorch's (pytorch/functorch#963)
67b104af02 : [functorch] Add skips to coordinate land (pytorch/functorch#962)
d95d9af43f : [functorch] Minor improvements for _autograd_grad (pytorch/functorch#750)
9ba2266156 : [functorch] Reduce flaky tests in test_vmap via skips and tolerances
a60ff6985d : [functorch] Generate n^2 not n^3 inputs for batch and instance norm; small batch norm fix (pytorch/functorch#951)
d546e857c2 : [functorch] Revert "remove kl_div skips"
4704d865a6 : [functorch] remove kl_div skips
73acfe7efe : [functorch] Remove removable randomness skips (pytorch/functorch#953)
c4eafb0569 : [functorch] Bump functorch mininum PyTorch version
65e8d2209c : [functorch] (fix CI) Add batch rule for split.sizes (pytorch/functorch#952)
8445222264 : [functorch] refactor code, add docstring (pytorch/functorch#908)
f6dea84f51 : [functorch] Use Functionalization pass (pytorch/functorch#810)
1afb604c22 : [functorch] Generate 2^n tests, not 3^n tests for vmap (pytorch/functorch#937)
cded2e29c1 : [functorch] Add ninja to circleci builds (pytorch/functorch#938)
f34c6f78e3 : [functorch] Use the smallest batch size we can in vmap testing (pytorch/functorch#936)
0d9fd3d521 : [functorch] Fix CI (pytorch/functorch#931)
c58932ea79 : [functorch] Fix link for colab (pytorch/functorch#929)
5aa51c76b5 : [functorch] Add _unsafe_set_level for torchdim (pytorch/functorch#924)
7527eac49d : [functorch] fix CI issues with AOTAutograd
2ab972f6e7 : [functorch] Fix CI
62868fa3d5 : [functorch] Added chunks arg to vmap (pytorch/functorch#774)
acd63940f2 : [functorch] Fix functionalize failures (pytorch/functorch#911)
582bcda977 : [functorch] Remove some xfails for transform coverage (pytorch/functorch#910)
e642a34627 : [functorch] Add complexities and references for NTK implementations. (pytorch/functorch#907)
1b5fbf872f : [functorch] Utilities to get GPU Utilization from chrome trace dumps (pytorch/functorch#868)
cdd7693650 : [functorch] Util dump chrome trace (pytorch/functorch#869)
8f31dbc68a : [functorch] accounted for l1_loss now being a composite op
16ddcbb6c8 : [functorch] Did some dead code elimination before partitioning
58cda84f34 : [functorch] made partitioner strip overloads
51bb9862a3 : [functorch] add a pass in ts_compile to prepare for jit.script (pytorch/functorch#899)
88a6be46f1 : [functorch] Update vmap codegen based on changes to torchgen (pytorch/functorch#898)
1f715123bd : [functorch] Fix expand.SymInt issues by adding a decomp (pytorch/functorch#903)
f30f177ebf : [functorch] Remove cross_entropy monkey patch (pytorch/functorch#901)
002227ab27 : [functorch] Clearly mark the old batching rule API as "legacy" (pytorch/functorch#897)
4b8543489f : [functorch] fixed compilecache problem
1c689b2839 : [functorch] Add note on test deps
2e5c837abf : [functorch] We support windows
e25af56f2d : [functorch] Update eager_transform_test (pytorch/functorch#891)
c01d3a486f : [functorch] Cleanup jvp testing (pytorch/functorch#890)
a2a6171859 : [functorch] update discover_coverage
c896cc8e68 : [functorch] update discover coverage
bfeb6db6de : [functorch] Remove some xfails; add xfail for test_grad[linalg.eig]
97a5d9b703 : [functorch] Disable autocast (pytorch/functorch#794)
a178624b4f : [functorch] batch norm forward over reverse coverage with decomposition (pytorch/functorch#877)
6050b42388 : [functorch] Present Random state (pytorch/functorch#887)
3182642b2c : [functorch] Beef up transform limitations doc (pytorch/functorch#879)
512a22c55f : [functorch] docs for functionalize (pytorch/functorch#874)
a88aecb9e4 : [functorch] fixed some issues with setting leaves and made partitioning deterministic (pytorch/functorch#880)
09ec30fbb0 : [functorch] Removed some obsolete workarounds in ts_compile and added a new one (pytorch/functorch#875)
b148bfe7d6 : [functorch] fix ci
33f2cd0c44 : [functorch] update discover coverage
8993e7a772 : [functorch] fix discover_coverage
b9829fd397 : [functorch] fix dp-sgd example (pytorch/functorch#873)
c550da16ca : [functorch] "set_inplace_requires_grad_allowed" should be a context manager (pytorch/functorch#870)
f0889545e1 : [functorch] Install torchvision in CUDA CI
6c8874f1a9 : [functorch] unify PythonKey (i.e. ProxyTensor) tracer with the one in core (pytorch/functorch#853)
8caeea7eb7 : [functorch] make mse loss backward use decomp (pytorch/functorch#866)
ae24147211 : [functorch] Fix cuda tests, skip mac unexpected successes (pytorch/functorch#864)
7820bab01a : [functorch] Fix index.Tensor, index_put batching rules (pytorch/functorch#862)
bf14e42a07 : [functorch] Fix MSE forward, use decomposition for MSE backward (pytorch/functorch#860)
4a541649ae : [functorch] remove xfail for hinge embedding forward mode
fcd66defc5 : [functorch] Implement a Common Subexpression Elimination (CSE) pass in AOTAutograd (pytorch/functorch#852)
dd109e6a4e : [functorch] Fix CI by using pip instead of conda to install torch (pytorch/functorch#863)
d3f7057534 : [functorch] update discover_coverage
c811db5032 : [functorch] Fix CI (pytorch/functorch#854)
e476645024 : [functorch] Run CI
f59a85de7f : [functorch] updated op analysis script
d636450ac9 : [functorch] Disable calling Tensor.requires_grad_() inside a functorch transform (pytorch/functorch#849)
9b69051601 : [functorch] fix ci
abdf82f302 : [functorch] Update discover_coverage
3255e42c88 : [functorch] Fix CI (pytorch/functorch#844)
04bae8ff8c : [functorch] Remove test/test_functorch_lagging_op_db.py (pytorch/functorch#845)
e5d326b9b8 : [functorch] Allow people to arbitrarily add dispatch keys between DynamicLayer{Front,Back} (pytorch/functorch#843)
60218730dd : [functorch] [block_diag] add xfails
ab7f5fd236 : [functorch] add batching rule for block_diag, kill DECOMPOSE_FUNCTIONAL (pytorch/functorch#814)
e4da6e15aa : [functorch] Revert "[release] Set up CI for release/0.2 (pytorch/functorch#830)"
8f18d6da34 : [functorch] [release] Set up CI for release/0.2 (pytorch/functorch#830)
1feff6bb69 : [functorch] update compile example imports (pytorch/functorch#834)
36fa9d8295 : [functorch] Replace addr_decomp with the one in PyTorch Core, issue pytorch/functorch#833 (pytorch/functorch#836)
5870902ff1 : [functorch] bump version
f312dbc2af : [functorch] remove 'fill_' op info
ff11f39193 : [functorch] [test_jvpvjp] check if custom formula is available (pytorch/functorch#821)
3f85e1fd92 : [functorch] Add extremal testing for forward over reverse using decompositions (pytorch/functorch#818)
229fce0dc2 : [functorch] [discover_coverage] add ability to compute percentages
86e1ba376e : [functorch] fix unexpected successes for binary cross entropy, linalg.norm, linalg.matrix_norm
88aff34255 : [functorch] fixed some static argnums stuff
236565f461 : [functorch] add layer norm support, clean up some binary cross entropy support (pytorch/functorch#807)
c048d19772 : [functorch] fix CI (pytorch/functorch#816)
c9ac6d6246 : [functorch] Update environment.yml (pytorch/functorch#811)
ea1a3477cc : [functorch] fix functioanlization tests (pytorch/functorch#800)
40845473ff : [functorch] make jvpvjp actually run all the tests for decompositions (pytorch/functorch#792)
5f99ec5ea4 : [functorch] fix ci (pytorch/functorch#804)
8c5a5a7c4d : [functorch] Add wraps to jacfwd, fix partitioning heurisitc
39d4e87465 : [functorch] Revert "fix functioanlization tests"
879c443fa0 : [functorch] fix functioanlization tests
28a908d7ba : [functorch] typo
bff07eb7ad : [functorch] glu failing now that test_ops runs again, switch functionalization failures to use expectedfailure
1eae4ae5f8 : [functorch] Minor fix2 (pytorch/functorch#797)
f67ca07a09 : [functorch] fix ci
7693f08ee3 : [functorch] fix distance heuristic
46099fb32c : [functorch] retrigger CI
debc326f61 : [functorch] Update for consolidated contiguous/sizes policy (pytorch/functorch#785)
e4fb39fe37 : [functorch] add binary cross entropy (pytorch/functorch#791)
d917b4a6a6 : [functorch] Fix advanced indexing (pytorch/functorch#777)
ff9558a2ea : [functorch] Update neural_tangent_kernels.ipynb (pytorch/functorch#788)
126fd93c21 : [functorch] windows being dumb
d9b25b1a2a : [functorch] fix ci (pytorch/functorch#789)
a5454bfc6c : [functorch] fixed stupid issue
fa7f58f1c2 : [functorch] fix jvp decomp testing
5d6ac147ff : [functorch] fix linear numeric accuracy for functorch, add test (pytorch/functorch#781)
9c077bb4fc : [functorch] add lintrunner support (pytorch/functorch#783)
2f6c93e2e8 : [functorch] fix flake8 issues
0967838c6b : [functorch] fix issue
090b2260e4 : [functorch] Try adding log_sigmoid_forward decomp (pytorch/functorch#778)
5f8c69d414 : [functorch] remove decompositions from functorch (but keep shims) (pytorch/functorch#771)
4e8fbf61fc : [functorch] fix tests
7204c1c02d : [functorch] added option to simply use torchscript to call back into python lol (pytorch/functorch#775)
07e000478f : [functorch] fix flake
1b2f01e712 : [functorch] skip unpool tests on python key too
e1a88a3b39 : [functorch] fix dtype casting issue with softmax_backward_data decomps
9a6b91b6fd : [functorch] Port jvp decompositions to unify on python one (pytorch/functorch#772)
c0707c6f43 : [functorch] Revert "Port nll_loss_backward decomposition to unify on python one"
b208b96030 : [functorch] Port nll_loss_backward decomposition to unify on python one
3cd45f7c1b : [functorch] Add trace decomposition, invoke from C++ (pytorch/functorch#740)
57cf01dfc9 : [functorch] Minor fix (pytorch/functorch#769)
5318b61649 : [functorch] [jvp x vjp] remove conv skips
fe203ad112 : [functorch] [jvp x vjp] cross entropy
7314fd2487 : [functorch] fixup jvpvjp testing, add log_softmax and softmax jvpvjp coverage
f10e9d66c4 : [functorch] jvp x vjp support for l1_loss, mse_loss
b51e9b50c5 : [functorch] fix thing I thought I already fixed
ce6a80573d : [functorch] fixed torchscriptable error
babc862b91 : [functorch] Use nll_loss_backward decomposition for jvp transform (pytorch/functorch#764)
41869d9ace : [functorch] Update discover_coverage, add jvp support for linear
0f1332b09e : [functorch] Fix a bunch of linalg coverage issuez (pytorch/functorch#765)
42286432b2 : [functorch] Fix some minor issues (pytorch/functorch#768)
ec21c90887 : [functorch] fixed rsub
99abac4b25 : [functorch] fix flake
855f939ce3 : [functorch] fixed complex number decompositions and added isnan
aba4a67b04 : [functorch] fix CI
3dccff7846 : [functorch] Added batch norm decomposition and fixed some other stuff
6f7ea6d814 : [functorch] Added clone implement for ProxyTensor __deepcopy__
01164988b3 : [functorch] Some more test annotations
e86d8ebfb6 : [functorch] re-annotated some tests
3bf448936c : [functorch] Refactor DynamicLayer so the logic for each transform is separate (pytorch/functorch#756)
8a6491c87d : [functorch] Add xfails for nansum, nanmean
2788574f29 : [functorch] Only generate vmap plumbing for operations in the allowlist (pytorch/functorch#758)
60214759c7 : [functorch] nll loss decomp (pytorch/functorch#755)
b2b1a00df8 : [functorch] L1 loss decomposition (pytorch/functorch#753)
ac9a47bf5b : [functorch] fix binary_cross_entropy_backward decomposition and fix _tensor_str
fd05b533fc : [functorch] xfail binary cross entropy decomposition on cuda, bfloat16
8aaea274ff : [functorch] DynamicLayerBack refactor (pytorch/functorch#754)
851b4e727c : [functorch] Fix pad failures
927e2a6b45 : [functorch] Refactor dynamicLayerFrontFallback (pytorch/functorch#751)
9eb10d033c : [functorch] updated some decompositions and cleaned some stuff
151b1f4ae1 : [functorch] Add l1 loss forward decompositions
da3eb084d4 : [functorch] Add l1 forward decomposition
1204b54294 : [functorch] fix issue
8625aaee60 : [functorch] Add embedding decomposition
669b0113c9 : [functorch] missed one unexpected success
577656365b : [functorch] fix unexpected successes (pytorch/functorch#752)
ab280e5a21 : [functorch] Fix some of the block_diag failures
c71dfa6eba : [functorch] add diag_embed batch rule
99279f46b2 : [functorch] DynamicLayer uses TransformType instead of DispatchKey to identify transforms (pytorch/functorch#748)
4b2ded615d : [functorch] Add expand_copy and masked_select_backward to reenable tests from upstream breakages (pytorch/functorch#736)
34b2094170 : [functorch] Added params_require_grad arg to make_functional* (pytorch/functorch#701)
d5ec0bfb83 : [functorch] Generalize get_decompositions handling for OpOverloadPackets (pytorch/functorch#743)
47c75a876e : [functorch] Fixed flake8 error on CI (pytorch/functorch#744)
c5c4f74609 : [functorch] functionalize(): properly handle global return tensors (pytorch/functorch#735)
e7293ee5ae : [functorch] skip flaky(?) test
9c65dd571e : [functorch] fix random stuff
46142ab452 : [functorch] Add a decomposition and remove op_analysis from .gitignore
fc24004a9f : [functorch] Updates to the min-cut partitioner to improve it (pytorch/functorch#653)
a46fbe8d79 : [functorch] skip flaky test
244197a2a6 : [functorch] Update lagging op db (pytorch/functorch#737)
8cedd1ae2e : [functorch] Make printing work again for PythonTensor (pytorch/functorch#738)
55fd0ac636 : [functorch] Add functorch-only conversion opinfo variants (pytorch/functorch#734)
ce096e1adf : [functorch] Change the xfail/skip mechanism (pytorch/functorch#731)
7bf45014ce : [functorch] Cleanup return_types handling now that it has been upstreamed (pytorch/functorch#730)
a5da52b138 : [functorch] fix test (pytorch/functorch#722)
0b8752b0f7 : [functorch] Fix test following tls update in core (pytorch/functorch#718)
36ce1b34ee : [functorch] fix tests (pytorch/functorch#721)
468375824b : [functorch] fix_ci; not with codegen (pytorch/functorch#720)
9f700c9a01 : [functorch] functionalize(): make "additionally removing views" toggleable (pytorch/functorch#678)
d8c6b4bad0 : [functorch] Add some unimportant xfails/skips
352c07c2f5 : [functorch] fixed some minor issues
1a85653108 : [functorch] fix issue where default partitioner might recompute things
4bb3ed2b8f : [functorch] Trigger CI
f8943966dd : [functorch] nuclear norm hacks (pytorch/functorch#708)
cf2de77dd3 : [functorch] Fix tolerance override
75d379c8c0 : [functorch] fix lint
45e8381c9b : [functorch] fix ci
88d863245c : [functorch] fix build
480f12da1c : [functorch] Update docs.yml (pytorch/functorch#700)
4b2b9da501 : [functorch] Windows circleci build (pytorch/functorch#696)
505f8ef06d : [functorch] Remove erroneous -g from opt build
b7243f7c2a : [functorch] Macro-guard the custom_vjp prototype from windows (pytorch/functorch#699)
962b566370 : [functorch] Fix visibility macros (pytorch/functorch#698)
db4d0500f9 : [functorch] Fix strict-prototypes handling for Windows
649d494387 : [functorch] Refactor DynamicLayer TLS logic (pytorch/functorch#683)
5ac57fe516 : [functorch] Add mac config to CircleCI (pytorch/functorch#694)
f76488caeb : [functorch] Fix CI (pytorch/functorch#695)
a4f0fb9044 : [functorch] Make functorch codegen more robust (pytorch/functorch#693)
b22bbc49d4 : [functorch] Add Python 3.10 config to CI (pytorch/functorch#691)
647882bf19 : [functorch] In ux_limitations: use "elements" instead of "memory"
b8ae949131 : [functorch] Fix link to nightly binary
b2f03d7f7d : [functorch] Better intro to functorch (pytorch/functorch#688)
f61d825462 : [functorch] Fix CI (pytorch/functorch#686)
bdc306569b : [functorch] Doc fix
6299538fc0 : [functorch] Fix to batching rule (pytorch/functorch#685)
38cbf390e2 : [functorch] Skip extracting meta tensor info for sparse tensors (pytorch/functorch#676)
40e1c33b45 : [functorch] Glu batching rule (forward + backward) (pytorch/functorch#665)
e75f817e4b : [functorch] Clean up grad state handling (pytorch/functorch#681)
84a105f44f : [functorch] delete failing tests (pytorch/functorch#679)
f3ca8bce1c : [functorch] Fix normal_ and bernoulli (pytorch/functorch#670)
ce8a587672 : [functorch] silu batch rule
ec2b0fb5f8 : [functorch] fix lint
670fa3efd5 : [functorch] Fix the decomposition of native_layer_norm_backward operation. (pytorch/functorch#674)
b849ba4f7c : [functorch] forward deriv for backwards of celu, elu, selu added
28d413c87e : [functorch] update discover_coverage
cd007e3b8b : [functorch] Handle -inf for Fx to TS (pytorch/functorch#671)
1abeb6a541 : [functorch] fix multinomial (pytorch/functorch#664)
3a1cb5be10 : [functorch] Add cudnn_batch_norm decomposition to default nvfuser decompositions (pytorch/functorch#661)
ed3f44a54f : [functorch] Removing the hack to fix the avg_pool2d backward (pytorch/functorch#619)
af89eef016 : [functorch] Disable torchdynamo on AOT Autograd generated graphs (pytorch/functorch#662)
604540c47d : [functorch] update discover coverage
cb931b1649 : [functorch] Reduce overhead of AOT Module (pytorch/functorch#660)
9bbd8682c4 : [functorch] Update coverage
f65b073fbe : [functorch] Prelu batching rule (forward + backward) (pytorch/functorch#609)
f4ed73b3d1 : [functorch] Trace the backward pass assuming contiguous tensors (pytorch/functorch#536)
4af9cc9a42 : [functorch] fix functionalize(): properly propagate input mutations (pytorch/functorch#654)
bf4f323e65 : [functorch] fix gelu_backward (pytorch/functorch#640)
ee6a824a61 : [functorch] add nonzero batch dim tests (pytorch/functorch#646)
a4dddef366 : [functorch] fix tests
bda21ebc90 : [functorch] excise missing opinfo
977a58de56 : [functorch] Fix build
5472f5ebfd : [functorch] made some minor updates to the notebook
561ba2ebbd : [functorch] switch decompositions over to use public API instead of aten ops
1cc3add0e7 : [functorch] Update functorch lagging op db (pytorch/functorch#652)
34d7bf6865 : [functorch] Beef up jvpvjp testing (pytorch/functorch#648)
5e23cc20af : [functorch] fix group norm, add scaffolding for autograd.grad tests (pytorch/functorch#630)
0786542266 : [functorch] jvp x vjp testing (pytorch/functorch#343)
236e1dff81 : [functorch] newline at eof
4056021e6c : [functorch] _nested_tensor -> nested_tensor (pytorch/functorch#637)
5214a286e2 : [functorch] Apply functorch-side fix for PythonTLSSnapshot change (pytorch/functorch#633)
d342ec7fe3 : [functorch] fix None consistency
4994478228 : [functorch] changed from long to int64_t
5fd680867d : [functorch] Fixing docs building issues with jinja and sphinx (pytorch/functorch#625)
ceb05e7848 : [functorch] fixed some minor nits with decomposition
8a8491fd6b : [functorch] Add decomposition for aten.native_layer_norm_backward op. (pytorch/functorch#525)
562c91b133 : [functorch] removed special.zeta grad
1e5160f926 : [functorch] Fix multiply warning
408b6c14a5 : [functorch] Removed operator authoring
32e63f8acc : [functorch] fixed optionalarrayref breakages (pytorch/functorch#623)
f2e5e93144 : [functorch] Fix CI, for real
97c0385089 : [functorch] make_functional parameter sharing fix (pytorch/functorch#620)
ac090de70b : [functorch] add a functionalize() transform (pytorch/functorch#236)
43ae23f5d4 : [functorch] Remove erroneous files
4e4a9f56dd : [functorch] Fix CI
71d9a94abf : [functorch] Make default_decompositions visible in functorch.compile namespace (pytorch/functorch#613)
c64bf40b46 : [functorch] New coverage calculator (pytorch/functorch#610)
eae0cf279d : [functorch] Added docs deployment step in GHA (pytorch/functorch#530)
0eb5aa5c7d : [functorch] fixed some op packet issues and added some more tests for partition optimality
7e98aa8706 : [functorch] Enable calling jvp inside autograd.Function (pytorch/functorch#607)
7095647db6 : [functorch] Fix some lint issues (pytorch/functorch#606)
0b5385c8cc : [functorch] Refactor randomness tests (pytorch/functorch#593)
cd68083cd9 : [functorch] Fixes _new_zeros_with_same_feature_meta batch rule (pytorch/functorch#603)
cc32a1e655 : [functorch] Add tests for vmapjvpall batch rule coverage (pytorch/functorch#602)
f23ff680a4 : [functorch] vmap codegen, for review (pytorch/functorch#546)
5b916c2c87 : [functorch] Fixing deprecated circle-ci images (pytorch/functorch#599)
f20c1ed115 : [functorch] fixed docs issue
bc95698767 : [functorch] Make a number of small changes to randomness (pytorch/functorch#594)
3245652427 : [functorch] Add website note on randomness (pytorch/functorch#587)
aab0785d6b : [functorch] [skip ci] Fix typo in notebook name (pytorch/functorch#549)
efe29a9a8d : [functorch] update typo in README.md (pytorch/functorch#596)
6ef169a337 : [functorch] fix flake errors
ee61e3cc76 : [functorch] Added minifier example notebook
a202c91fe5 : [functorch] merged
3a26772c53 : [functorch] Actually made some modifications to fix OpOverload changes (pytorch/functorch#579)
8abfc81705 : [functorch] Update colab install instructions (pytorch/functorch#590)
1682996616 : [functorch] Update functorch install instructions
2c28438d41 : [functorch] Add code highlighting to the README (pytorch/functorch#588)
33aead5f7b : [functorch] Update readme (pytorch/functorch#589)
d3461c6dc2 : [functorch] beef random docstring
d3493dc117 : [functorch] xfail rrelu
911458776a : [functorch] Sphinx and docstrings for AOT Autograd (pytorch/functorch#580)
0156248877 : [functorch] unsupported random errors
34643a6c87 : [functorch] Update batch norm error message to be more informative (pytorch/functorch#567)
4005aa5f0d : [functorch] Add randint_like support (should be final randomness function! 🥳) (pytorch/functorch#566)
593955ec6d : [functorch] Compile readme (pytorch/functorch#585)
1ac79567fc : [functorch] Setting tensor_meta attr for inplace ops (pytorch/functorch#565)
e5fab4d890 : [functorch] Fix unbatched tensor with different randomness behavior for dropout variants (pytorch/functorch#583)
bd2b450b15 : [functorch] Fix not cleaning up DynamicLayer dispatch keys after transforms. (pytorch/functorch#584)
ad2ce1cd3d : [functorch] fix cuda breaks
824eeb96e9 : [functorch] remove s_where, use where.self (pytorch/functorch#581)
2c6b97276b : [functorch] Fix the tutorial index (pytorch/functorch#577)
ccc6b33cbf : [functorch] Fix + test dropout without batch dimension (pytorch/functorch#575)
dd1327f801 : [functorch] Make TestTransformFailure test names deterministic (pytorch/functorch#576)
87b5cd2e9c : [functorch] made some modifications to work with new changes (pytorch/functorch#574)
abaa147566 : [functorch] combine_state_for_ensemble doc update, add validation (pytorch/functorch#571)
0828438d71 : [functorch] Updated logo alignment (pytorch/functorch#573)
26c137a5df : [functorch] Cleanup namespace (pytorch/functorch#570)
0af3636257 : [functorch] Add Batch Norm module utilities to not track running stats (pytorch/functorch#505)
664cd284c5 : [functorch] Docs website modification (pytorch/functorch#563)
38e3bc1ef8 : [functorch] Update codegen
b4fa6decf0 : [functorch] update gen (pytorch/functorch#550)
2fc65d886e : [functorch] remaining out of place ops using randomness (pytorch/functorch#564)
3110183992 : [functorch] Neural Tangent Kernels tutorial (pytorch/functorch#540)
98985f327e : [functorch] in place batching rules (pytorch/functorch#559)
14e308e06f : [functorch] Fix erroneous print
b7bc52b618 : [functorch] Cleanup warnings (pytorch/functorch#562)
d3552a53d1 : [functorch] Add way to override op precision; manually override conv_transpose precision (pytorch/functorch#560)
3e2f309177 : [functorch] bernoulli_ only taking in float, includes dropout (pytorch/functorch#523)
3823b4b197 : [functorch] error on any use of autograd function (pytorch/functorch#558)
57e78c6654 : [functorch] Don't unnecessarily wrap the elem in PythonTensor (pytorch/functorch#554)
454e9d6e93 : [functorch] Added some more decompositions
23620c5b31 : [functorch] Added ability to enforce dynamic shapes in decompositions
9e8145566e : [functorch] Add some more decompositions
4aa14f1fe3 : [functorch] Convolution double backward batch rule (pytorch/functorch#557)
9ae520640e : [functorch] Update ensembling and per-sample-grad example (pytorch/functorch#556)
ea1e2906cc : [functorch] Synchronize jacobians_hessians tutorial (pytorch/functorch#555)
97eafce682 : [functorch] bernoulli_ support (pytorch/functorch#521)
612df22e7d : [functorch] Add a short-term parameter tying error message (pytorch/functorch#552)
29d8a62da5 : [functorch] Fix jvp testing (pytorch/functorch#548)
6648a05b25 : [functorch] Skip networkx if not installed
917d54be39 : [functorch] fix problem from getitem (pytorch/functorch#551)
72891a3404 : [functorch] Updated plumbing (manually)
9832ecd9cd : [functorch] Added temporary odict pytree registration (pytorch/functorch#544)
f6c0cd8cad : [functorch] Explicit error message when calling torch.where(x) (pytorch/functorch#547)
f53d4caa3a : [functorch] normal support (pytorch/functorch#420)
5351c456c0 : [functorch] reset skips now that torch dispatch works again
e555100a30 : [functorch] Mention how to compute jacobian of f and output of f at the same time (pytorch/functorch#539)
f21e573c99 : [functorch] Workaround to avoid Torchscript bug for new_empty (pytorch/functorch#538)
b7a7e935c6 : [functorch] Updated tests to output failed arguments
9b5d63ea76 : [functorch] Fix vmap level skipping bug (pytorch/functorch#535)
69a8cc1920 : [functorch] normal with one tensor (pytorch/functorch#511)
397388ba95 : [functorch] in-place plumbing now goes through vmap codegen (pytorch/functorch#532)
b0ca9a4a94 : [functorch] Quick fix for torch.trace backward (pytorch/functorch#527)
4c0b402ad3 : [functorch] Fixed issues with native_layer_norm precision
aeeb259812 : [functorch] normal, floats only (pytorch/functorch#510)
298129dd1d : [functorch] Rewrite codegen based on PyTorch's codegen library (pytorch/functorch#528)
3c21e0818b : [functorch] Made some minor modifications to decompositions to make them torchscriptable
3adc8167c9 : [functorch] Updated css to reduce badge size and code-block bottom padding (pytorch/functorch#526)
d85f6d8208 : [functorch] random_ support (pytorch/functorch#412)
b1b26a19cd : [functorch] Fix potentially recursive import
bb89b1c50d : [functorch] Added binary_cross_entropy and native_layer_norm decompositions
ce644bb80a : [functorch] Added a hack for _s_where issue
67948ebd9a : [functorch] Fix docs build
90da897750 : [functorch] Made codegen 1:1 instead of many:1 (pytorch/functorch#522)
aec7a219aa : [functorch] reskip decomposition tests
cfe8ff227f : [functorch] Colab ready tutorials (3 total) with updated colab badge links (pytorch/functorch#519)
1a748aeadc : [functorch] Aot Autograd tutorial (pytorch/functorch#476)
b27a01adae : [functorch] Added gelu_backward decomposition with new kwarg
0f6aaa5a6f : [functorch] made some updates to the minifier
d095055afb : [functorch] randperm support (pytorch/functorch#411)
565365baaf : [functorch] GHA docs build; nbsphinx -> myst-nb (pytorch/functorch#517)
08449c2a7c : [functorch] gelu skips to cpu only
ad35082061 : [functorch] flake8
12724711b7 : [functorch] skip decomps, fix ci
e989340c21 : [functorch] fix the obvious issue
28f47ad6ee : [functorch] fix flake errors
db8cfb0aff : [functorch] Purge some licenses
e504faba8b : [functorch] fixed prelu_backward decomposition
eeb3f14d0b : [functorch] Added rrelu_with_noise_backward decomposition
5bbb1b039d : [functorch] Added softshrink_backward, prelu_backward, and col2im_backward decompositions
ae24d77e36 : [functorch] Cleanup from adding rand and randn PR (pytorch/functorch#503)
01fcb030b8 : [functorch] fixed issue with incorrect device querying on cuda
dfe32fe541 : [functorch] Fixed flake8 issues
9e72d9b192 : [functorch] Modified decomposition testing and added embedding_dense_backward
c560263d63 : [functorch] functorch manylinux builds (pytorch/functorch#494)
20b43152ab : [functorch] Added has_aux argument to jacfwd and jvp (pytorch/functorch#419)
1be9042dc3 : [functorch] Removed pytorch footer in docs (pytorch/functorch#499)
cc68964c06 : [functorch] Improve version rendering on docs website
e4d20b58c4 : [functorch] Change docs name
f0d4bb647d : [functorch] randint support (pytorch/functorch#410)
11c9263d38 : [functorch] fix unexpected success not showing up (pytorch/functorch#492)
8926f51aa7 : [functorch] Add support for randn and rand (pytorch/functorch#409)
39228e7d1d : [functorch] instance norm
8b8d1c970f : [functorch] fix ci, remove xfail for jvp of linalg.eigvals
46fa2e21d8 : [functorch] fix ci, xfail linalg.cond and linalg.svdvals
56b5905f07 : [functorch] gelu fix because of new kwarg
234092a31b : [functorch] Fix typo (pytorch/functorch#485)
b2a56755fa : [functorch] Add functorch.__version__ (pytorch/functorch#481)
10e383b907 : [functorch] Excise prototype mentions (pytorch/functorch#478)
58e6ca7a5e : [functorch] reverted local scalar dense guard
634fc36fe6 : [functorch] add check for footgun if output contains non-registered pytree
33cb1967c2 : [functorch] fix small issues with examples (pytorch/functorch#464)
5626d5c165 : [functorch] [Compile Cache] Dont pass None args to Compile Cache (pytorch/functorch#470)
07df6ae9e6 : [functorch] fix ci
01a5e55a4d : [functorch] Revert "Enabled any type of aux output (pytorch/functorch#423)" (pytorch/functorch#471)
f3f5a47af0 : [functorch] Enabled any type of aux output (pytorch/functorch#423)
897d1adb92 : [functorch] Fix CI (pytorch/functorch#466)
cffca98ea9 : [functorch] fix some issues
d88fa3d69e : [functorch] fix stupid flake
d3406d9cd0 : [functorch] Fix some CI fails for AOTAutograd and such
278d6ca48c : [functorch] Doc strings (pytorch/functorch#463)
7ba1e4303b : [functorch] Fixed minor bug with partitioner and other minor changes
dda66b2cae : [functorch] Switched docs theme to pytorch-theme (pytorch/functorch#453)
75b08c9bef : [functorch] update differential privacy example
928e1dc72c : [functorch] Excise more legacy batching rules (pytorch/functorch#457)
f476263a7d : [functorch] Split VMAP_SUPPORT macro into VMAP_SUPPORT/VMAP_SUPPORT2 (pytorch/functorch#462)
81f3bd4d58 : [functorch] Fix CI (pytorch/functorch#461)
3d21dd4376 : [functorch] possible fix (pytorch/functorch#455)
105f7acd7d : [functorch] Excise a bunch of legacy batch rules that could be decompositions (pytorch/functorch#456)
78b2450e25 : [functorch] Fix a bunch of tests (pytorch/functorch#452)
5a3e0217fc : [functorch] Quick fix build (pytorch/functorch#450)
ed937e455e : [functorch] Bump functorch main version to 0.2.0 (pytorch/functorch#448)
4b72e62178 : [functorch] Added Grad context to AOTAutograd (pytorch/functorch#441)
d22886ae4e : [functorch] Don't trace static_args (pytorch/functorch#435)
793cb12c42 : [functorch] Decompose aten std op (pytorch/functorch#399)
92f552cd56 : [functorch] index_put : batch-rule (pytorch/functorch#426)
b13db682ec : [functorch] some minor fixes to AOTAutograd
dbd2080d54 : [functorch] cleaned up partitioning code a little
f5f1dbd3c8 : [functorch] Better error message for static argnums (pytorch/functorch#431)
53db0bb63b : [functorch] Fix more fixable things in the CI (pytorch/functorch#427)
13d569ab1d : [functorch] Fix CI (pytorch/functorch#425)
88c722d84c : [functorch] fix vmap : index_put_ (handle broadcasting of value) (pytorch/functorch#401)
4c78fd7f97 : [functorch] Added static_argnums to nnc_jit
5b6b1711ec : [functorch] Enabled pytree of tensors output for jvp, jacfwd (pytorch/functorch#404)
89443f4ac1 : [functorch] Fixed some test cases/added some easy batching rules (pytorch/functorch#416)
d9125de7b4 : [functorch] Fixes flake8
07679ac656 : [functorch] [requirements] missing networkx (pytorch/functorch#403)
5d821c1a9c : [functorch] [Compile Cache] Caching the forward and backward compiler ids (pytorch/functorch#413)
5582fab406 : [functorch] Updated README (pytorch/functorch#414)
a052366e58 : [functorch] fixed some misc things with AOTAutograd
dbe13adf9d : [functorch] fixing test that missed because jvp != vjp
f4632d942a : [functorch] hopefully last test fix
3c8b626b95 : [functorch] Use empty tensor instead of efficient zeros tensor (pytorch/functorch#406)
23ed94181a : [functorch] fix unexpected successes
30355ad6a8 : [functorch] fix failing tests
2ffb5a9719 : [functorch] Fixes printing issue with TensorWrapper (pytorch/functorch#397)
9157be2b10 : [functorch] Cleaning up TVM compiler integration (pytorch/functorch#405)
781d932e13 : [functorch] Revert "Minor refactor for default partition. (pytorch/functorch#390)"
3ba43c26a8 : [functorch] Minor refactor for default partition. (pytorch/functorch#390)
3cdbf2c43c : [functorch] Fixed unexpected success for cartesian_prod (pytorch/functorch#402)
faf642a803 : [functorch] tragically, yet another flake8 fix (I thought I'd got them all already this time :( )
fc272929a6 : [functorch] Added support for outputs that don't require gradients + fixed some other tests
9ac203cd4e : [functorch] Added cartesian_prod decomposition
bc6f6d6980 : [functorch] Fix failing tests and added comment about contiguity assumptions
2037792863 : [functorch] fix another minor thing
afba1985ea : [functorch] some minor fixes
a8d54b4029 : [functorch] [pytree] register return_types (pytorch/functorch#350)
524002be34 : [functorch] xfail failing jvp tests
21bd4ba561 : [functorch] skip to solve nasty seg fault
633d1f02b1 : [functorch] Use num_bytes instead of numel in min-cut partitioning (pytorch/functorch#398)
f0f65be54c : [functorch] group norm + backwards running (pytorch/functorch#387)
e5980eab0d : [functorch] fix some partitioning issues
183e88cdf2 : [functorch] Cleanup for memory efficient fusion (pytorch/functorch#388)
18a374c230 : [functorch] fix remaining failing cuda test
6e5c26804c : [functorch] cleaned up code and fixed failures
a5c7dade81 : [functorch] made some updates to min-cut solver
feb2073a44 : [functorch] fix unexpected success
83bb9033bb : [functorch] [Compile Cache] Handle non tensor args (pytorch/functorch#383)
1c666edd68 : [functorch] fix flake8 issues?
c6a7b94ae7 : [functorch] binary_cross_entropy batch rule (pytorch/functorch#380)
bdcc5286b1 : [functorch] fix test failures
bbf2e80eff : [functorch] fix flake8 issue
de4b1e2fc1 : [functorch] fix CI test for an op I don't have locally yet :)
e47b8f333f : [functorch] fix some random minifier and add some random decompositions
e735de5707 : [functorch] switch per sample grad test to use group norm
ace198ee8d : [functorch] Add Batch Norm (+ backward) batching rules (pytorch/functorch#371)
33110e8a10 : [functorch] Fixed errors
747c2fa2b2 : [functorch] fix flake issues
02ae449c7b : [functorch] refactored decompositions a bit
3b8b72a322 : [functorch] max-flow based algorithm for activation checkpointing (pytorch/functorch#344)
a8e01d4bc3 : [functorch] made tests that work on either cpu or cuda but not both skips
dd0fe72064 : [functorch] fix ci, forgotten commas and use xfail
761c382550 : [functorch] fix ci by disabling certain tests
0d37b12f01 : [functorch] Dropped 3.6 and added 3.9
47f69917f4 : [functorch] Remove make_nnc from test_pythonkey import
67b8a097e7 : [functorch] Fix flake8
647b09bf19 : [functorch] README.md: don't include prompt in example code snippets (pytorch/functorch#375)
2c1ed6956b : [functorch] fix flake errors
16f34150bd : [functorch] port nnc_jit to use AOTAutograd
e0987b6241 : [functorch] fix flake rules
2aa246e5a9 : [functorch] fix some test failures
89d30b2987 : [functorch] Added better handling for output device inference (i.e. only use when metatracing)
a64602af4c : [functorch] don't use sum for loss, arbitrary args in torch.Timer benchmarks (pytorch/functorch#369)
e73417d2ac : [functorch] Add Tutorials to docs build (pytorch/functorch#367)
fd4d82ee0d : [functorch] quick cleanup
649ffb026b : [functorch] Transcribed tutorials to ipynb
51432e3353 : [functorch] Wrote tutorials (pytorch/functorch#366)
a1b35d830a : [functorch] Fix getitem (pytorch/functorch#364)
99b842d62a : [functorch] Added cdist forward/backward batching rules (pytorch/functorch#306)
ae7db8eca8 : [functorch] More docs for jacfwd, hessian, jvp (pytorch/functorch#362)
08c71c4306 : [functorch] Add wheel builds (pytorch/functorch#358)
62963125f5 : [functorch] Fix parameter declarator cannot be qualified error (pytorch/functorch#361)
e3c1ec684f : [functorch] convolution_backward batch rule (pytorch/functorch#359)
2182acf16f : [functorch] Support vmapvjp through F.pad (pytorch/functorch#357)
5bc902d1c2 : [functorch] Embedding backward batch rule (pytorch/functorch#355)
3339da30df : [functorch] Fix CI
1eb454f597 : [functorch] Fix vmapvjp tests (pytorch/functorch#354)
a809297d63 : [functorch] Remove -l from pytest.ini
8135ab66ed : [functorch] Probably fix CI
c94446a04d : [functorch] Fixed python code formatting and added flake8 setup (pytorch/functorch#346)
dc3eb6af00 : [functorch] Convolution refactor
4d29a060af : [functorch] Fix functorch build
dc7a640c81 : [functorch] Embedding batch rule (pytorch/functorch#351)
da79c69d51 : [functorch] discover_coverage update
68652e8d07 : [functorch] add option for meta tensor tracing (pytorch/functorch#349)
00dfb3b2a2 : [functorch] Added support for output pytrees in AOTAutograd (pytorch/functorch#332)
d7e7fe0141 : [functorch] More batch rule fixes (pytorch/functorch#348)
6311baf030 : [functorch] fix discover_coverage bug
152e00f310 : [functorch] Random assortment of batch rules and cleanups (pytorch/functorch#347)
476d21bc0b : [functorch] Fix convolution batch rule in the transpose case (pytorch/functorch#345)
6722ee7797 : [functorch] added syntax highlighting
6dd6b06100 : [functorch] added compilation readme
3d9eee4a88 : [functorch] hopefully fixed CI - started skipping conv1d tests
db07b2492c : [functorch] Added subprocess launcher for NVFuser minifying
967da4b639 : [functorch] added some tests and stuff to minimizer
2ade222642 : [functorch] jvp transform fully composes with vmap (pytorch/functorch#340)
695990e7c3 : [functorch] Another fix
953c5d50d4 : [functorch] Fix CPU CI
d61573272a : [functorch] Fix CI hopefully
9052153fec : [functorch] add conditional functionalization (pytorch/functorch#235)
f0301db1fb : [functorch] Fix some "unexpected successes"
24c93d8e34 : [functorch] Fix undefined bias for conv batching rule
56961ad1ed : [functorch] Fix CI
c0fdee06f3 : [functorch] Revert "Fix one composability problem"
73d5d73294 : [functorch] Fix one composability problem
3a7e2a9876 : [functorch] Some more utilities
c952118aeb : [functorch] Fix flaky tests
a7bf7831ab : [functorch] Added autominifier for FX graphs (pytorch/functorch#330)
f12896b408 : [functorch] Add `has_aux=False` to vjp, jacrev (pytorch/functorch#335)
742d4c9e56 : [functorch] Kill dead code
f30ff85599 : [functorch] Added batch rule for LU op (pytorch/functorch#326)
291cbf0926 : [functorch] Introduce ForceLocalDispatchKeySet
808306fb36 : [functorch] Refactor DynamicLayerFront dispatch key selection
9b9980520e : [functorch] Some more xfails
042f83eb3c : [functorch] Probably fix CI
c7c15eed41 : [functorch] Add some utilities for writing batching rules in python
2396dc35ee : [functorch] Move DynamicLayer implementation out of header
9ed57423e1 : [functorch] DynamicLayerStackHolder -> FuncTorchTLS
e0ace21b2e : [functorch] Deduplicate pushDynamicLayer
1e7c7ca5c8 : [functorch] Add setDynamicLayerFrontBackKeysIncluded helper function
ccb277f40f : [functorch] Move all_dynlayer_keyset location
99c712e95c : [functorch] Fix CI again
23a115047d : [functorch] Revert "Refactor DynamicLayer.cpp"
9ad55a212b : [functorch] Fix setup.py, hopefully to work on windows
7378880172 : [functorch] Refactor DynamicLayer.cpp
a9f69501af : [functorch] Debugging improvements
0ade178c6e : [functorch] Fix CI
fbe01d27e2 : [functorch] actually fixed things
a7479002da : [functorch] fixed test failures
c6b7ffab09 : [functorch] Custom Function prototype
b1460ea208 : [functorch] fixed test failures
7f76d5c96f : [functorch] Added aot_function/aot_module which alias compiled_function/compiled_module
b2d691843a : [functorch] layer norm backward batch rule
ca41c5f08d : [functorch] Added tests for vmap and autocast (pytorch/functorch#327)
ba04c326b6 : [functorch] fix int64 issue
5dd3998936 : [functorch] Update discover_coverage.py
935d96f024 : [functorch] Fixed typo in docstring code-snippet (pytorch/functorch#316)
495fa98992 : [functorch] Quick hack
0c8edf2571 : [functorch] modified tanh_backward to avoid rsub
6a52a485b1 : [functorch] renamed API
30c7c337a2 : [functorch] Single cache (pytorch/functorch#319)
f5ac3641c4 : [functorch] Added addr op decomposition (pytorch/functorch#323)
c4390fb854 : [functorch] Added cholesky_solve op batch rule (pytorch/functorch#324)
fdafa34fd9 : [functorch] [batching] diagonal_scatter (pytorch/functorch#315)
affb4d86e7 : [functorch] Fix CI
cad8b76544 : [functorch] Add some quick short-term hacks (pytorch/functorch#317)
d091220ae2 : [functorch] diag_backward batch rule
1f7822f650 : [functorch] Update lagging op db (pytorch/functorch#318)
d7fb500252 : [functorch] fixed CI
6812607e77 : [functorch] made recompute partitioning prune unused saved values
ba25a7ec23 : [functorch] Use FuncTorchTLSBase instead of ThreadLocalDebugInfo (pytorch/functorch#314)
73a024769f : [functorch] Fix CI
ac4f79617a : [functorch] fixed draw_graph
98bbf3d5d4 : [functorch] [Benchmark] Layer norm patterns (pytorch/functorch#311)
85576dbb2d : [functorch] update op analysis script
421aeeb04d : [functorch] Added a bunch of __foo__ ops
60cdeb6654 : [functorch] [Benchmarking] Adding scripts for lightseq benchmarking (pytorch/functorch#310)
306eda0845 : [functorch] Fix CI
6f882bb4aa : [functorch] Fix CI
211b2148b1 : [functorch] Fix CI
6f4330e6f8 : [functorch] Fix CI
20997b4917 : [functorch] Fix some linting
1d956a0aaf : [functorch] Added grid_sample backward batch rule (pytorch/functorch#284)
d9c63e2dc1 : [functorch] replace reshape with view (pytorch/functorch#304)
f03ac2ec86 : [functorch] Added logit decompositions
73443c14a5 : [functorch] added some more decompositions
211e09586c : [functorch] Add note about potential exponential blowup
9720b5b1db : [functorch] fix decomposition tests to work on GPU
f6baad47af : [functorch] [batching] pixel_un|shuffle (pytorch/functorch#295)
2553c1a29c : [functorch] added some more decompositions and rejiggered the testing infra
5eb9a1d109 : [functorch] modified gitignore
ab64065e5e : [functorch] updated with some more decompositions
c51aceb877 : [functorch] jacobian / hessian correctness tests (pytorch/functorch#301)
036b0d873f : [functorch] fix unexpected success with hessian test
d1f8127043 : [functorch] moved decompositions to their own file
653e56b6b0 : [functorch] moved some stuff around
f3946010cd : [functorch] [batching] slice_scatter (pytorch/functorch#290)
7f4fc89b33 : [functorch] Quick fix
b9500431f9 : [functorch] quick fix
18ac89a86c : [functorch] Update README.md (pytorch/functorch#302)
476252c0be : [functorch] Fix CI
9573737a83 : [functorch] Fix CI
30350bfcaf : [functorch] jacfwd accepts pytree inputs (pytorch/functorch#300)
057aaa3b51 : [functorch] jvp with pytree inputs (pytorch/functorch#299)
1501f81a8e : [functorch] functorch jvp quick fixes (pytorch/functorch#298)
fe9aac72d1 : [functorch] Added decomposition testing + display infra (pytorch/functorch#281)
0fa9f7af83 : [functorch] Fix CI
63ca57247d : [functorch] Add per-op tests for jvp and vmap + jvp (pytorch/functorch#232)
9323734e0e : [functorch] Fix CI
b912da127d : [functorch] Update jacrev to accept functions with pytree inputs and outputs (pytorch/functorch#297)
3deb38e68e : [functorch] [batching] select_scatter (pytorch/functorch#289)
70e052a87e : [functorch] [batching] isinf, isreal (pytorch/functorch#294)
7f0d0fe53b : [functorch] layer_norm batch rule (pytorch/functorch#293)
5daececc91 : [functorch] Fix CI
e7b95307e4 : [functorch] Fix functorch additional ops db
1b33ea0d13 : [functorch] OutOfPlacePlumbing now gets annotated with aten ops
fada7dd563 : [functorch] [batching] index_copy (pytorch/functorch#282)
1a6d3cd0d4 : [functorch] add missing undef (pytorch/functorch#291)
7506e8b8aa : [functorch] Added logical_and/or/xor vmaps to enable torch.pow (pytorch/functorch#286)
ecf5817ac9 : [functorch] Fix method rename (pytorch/functorch#288)
99ed1b27e1 : [functorch] Added adaptive_max_poolNd batch rule (pytorch/functorch#263)
1f9c78cce7 : [functorch] Removed some xfail for nn functional pad circular (pytorch/functorch#285)
9e6a6f5d27 : [functorch] copy batch rule (pytorch/functorch#278)
907fc8822c : [functorch] new_blah batch rules (pytorch/functorch#277)
461e779c6e : [functorch] cross_entropy_loss decomposition, fixes to coverage detection (pytorch/functorch#276)
4e3b150345 : [functorch] improve coverage testing
cdb56a0e62 : [functorch] hessian API
4e75634a82 : [functorch] exported register_decomposition to be public
f1c9426c2d : [functorch] Fix edge case with in-place captures and grad transforms (pytorch/functorch#272)
21a4c9fbfe : [functorch] Revert "Making some things kwarg only"
37123cd274 : [functorch] Making some things kwarg only
30f787c6de : [functorch] Strict mode for jvp
b8e25d2599 : [functorch] Fix unrelated output behavior for jvp
1b10fb4f81 : [functorch] jacfwd supports multiple inputs and multiple outputs (pytorch/functorch#270)
e19546fbdb : [functorch] jvp transform accepts functions that take multiple inputs and outputs (pytorch/functorch#267)
403e296284 : [functorch] jacrev, multiple outputs support (pytorch/functorch#265)
5a58d5f70d : [functorch] Added im2col batch rule and enabled vmap for nn.functional.unfold op (pytorch/functorch#262)
8769c9a233 : [functorch] changed naming from aot autograd to eager compilation
1eaf24c9ce : [functorch] updated some eager compilation code
9129569580 : [functorch] [CompileCache] Adding compilation cache (pytorch/functorch#250)
e6a1dfbd72 : [functorch] Document some silently incorrect behavior before I forget
752d0ed847 : [functorch] Skip testing empty_like instead of xfail (pytorch/functorch#259)
7142acbf4d : [functorch] quick fix ci
41699bb9f5 : [functorch] Update functorch lagging op db (pytorch/functorch#261)
9962b90857 : [functorch] Better error on item() and data dependent control flow
1321ed0446 : [functorch] [AOT Autograd] refactor partitioner to extract arbitrary graph given inputs and outputs (pytorch/functorch#258)
bef4603620 : [functorch] Added batch rule for (adaptive_)avg_poolNd (pytorch/functorch#248)
b485247d68 : [functorch] Link to docs site
d70f21b685 : [functorch] fix softplus_backward decomposition
eb9ba23888 : [functorch] Fixed nn.functional.pad constant mode (pytorch/functorch#249)
82f8f532eb : [functorch] Added backward batch rule for pad replicate/reflect modes (pytorch/functorch#251)
f3f8d8bdf8 : [functorch] Enables tests for index_put_ on CUDA (pytorch/functorch#252)
126b4b91b4 : [functorch] update op analysis
d9c4c43ec2 : [functorch] Refactored BatchedTensorImpl to have single bdim and level (pytorch/functorch#239)
5cd8af24ee : [functorch] Add instructions on how to do the functorch docs deploy
cc735edf23 : [functorch] Link readme from install.rst
1ecbe4063b : [functorch] Try collapsible sections in README
f811613e2d : [functorch] Update README with install instructions
18256d111d : [functorch] FunctionalModulesWithBuffers should be exposed
35b75ec118 : [functorch] fix small typo
cb010e9139 : [functorch] fix nested example
75fde90441 : [functorch] sphinx cleanup. Making sure everything renders as expected
f5246a935d : [functorch] Fixed unused device in some test_eager_transforms tests (pytorch/functorch#238)
957525de64 : [functorch] Enabled half and bfloat16 and fixed issue in tests with missing dtype (pytorch/functorch#242)
33e4b3e52d : [functorch] Rename pointwise operator CompileCache to PointwiseOperatorCompileCache (pytorch/functorch#243)
1af5d3b0ef : [functorch] add new vmap examples, cleanup existing vmap docstring
beccd7c1ae : [functorch] Added nll loss backward decomposition (pytorch/functorch#237)
8ba8030282 : [functorch] fix ci--missed xfails in test_vmap
0f29f84c72 : [functorch] s/auxiliairy/auxiliary/g
1cda613719 : [functorch] cleanup ci and tiny fix of grad_and_value docstring
cbcce607d5 : [functorch] add grad_and_value docstring
9059aa2772 : [functorch] switched over to using the pythonkey decomposition
1aa9098362 : [functorch] [Operator Authoring] Memory efficient pointwise fusion (pytorch/functorch#233)
535c4c2976 : [functorch] Added docstring for `grad` (pytorch/functorch#231)
b25d781d80 : [functorch] cleanup unnecessary functions, examples
8e5bebe1f6 : [functorch] vjp doc
a01a995422 : [functorch] add jacrev docstring (pytorch/functorch#234)
dc52f44b38 : [functorch] added decomposition option for pythonkey tracing with a couple decompositions
1c43c1fdc6 : [functorch] pairwise_distance
331611acd3 : [functorch] some printing fixes
bfb19e7921 : [functorch] updated gen_data script
855649c25d : [functorch] Namespace cleanup (pytorch/functorch#229)
44e835b62b : [functorch] Quick doc fixes
e264c959e7 : [functorch] beef up make_functional docs
03e3b59cbc : [functorch] Typo (pytorch/functorch#230)
62643c1e51 : [functorch] Updated nll loss decomposition rule with `ignore_index` (pytorch/functorch#218)
26ead7753f : [functorch] Update op db and fix failing tests
7ad39add01 : [functorch] Remove deepcopy from FunctionalModule (pytorch/functorch#228)
e442037852 : [functorch] Audited vmap docs
31e272f06c : [functorch] Fix missing spaces in some error messages. (pytorch/functorch#226)
2db6c2b9ec : [functorch] Excise functional_init_*
258b7850be : [functorch] Excise vjpfull and deprecated make_functional APIs
24a91e4115 : [functorch] Docs build (pytorch/functorch#227)
a301dc7c9f : [functorch] clean up eager compilation code a bit
86506a21a6 : [functorch] moved operator authoring out
9e559f793b : [functorch] Kill make_functional deprecation warning
f0d496ed22 : [functorch] Update install instructions
7d4cead9bc : [functorch] Fixed unsupported op error message
acbf2f685f : [functorch] fix failing test on CI
11a363e3b7 : [functorch] add atleast_nd decompositions + tests
dc5311cb7e : [functorch] moved movedim decomposition to batchrulesstopdecomposition
2a9538e6a6 : [functorch] readd movedim (pytorch/functorch#221)
1cd009aef8 : [functorch] stopped decomposing by default (pytorch/functorch#215)
8823ad8401 : [functorch] Update setup.py (pytorch/functorch#223)
ffb2fe0512 : [functorch] cleanup ci (pytorch/functorch#216)
36cc7fb11d : [functorch] Fix NDEBUG build (pytorch/functorch#217)
feada9603a : [functorch] update discover_coverage
5942764658 : [functorch] Fix CI
93416ece72 : [functorch] Implemented nll loss through decomposition (pytorch/functorch#208)
cbf900b369 : [functorch] [Partitioning] Recompute forward in the backward pass (pytorch/functorch#213)
f50fee6c2c : [functorch] [Op-Authoring] Adding mapping from torch ops to ExprHandles (pytorch/functorch#205)
f46adc90cb : [functorch] convert movedim to new api (pytorch/functorch#211)
4f7269f9ef : [functorch] fix CI
4174156aa5 : [functorch] Clean up perf scorecard and add barplot generation script (pytorch/functorch#212)
7f7e9dc9e7 : [functorch] Excise onlyOnCPUAndCUDA
4e431704cf : [functorch] port and fix unfold batching rule (pytorch/functorch#206)
f220892dab : [functorch] Make it so that vmap tests generate with bdim=-1 as well as 0 (pytorch/functorch#204)
fabbb9b391 : [functorch] Fixed code-snippet with jacrev (pytorch/functorch#197)
18c37d3bfd : [functorch] Fix PointwiseCompiler on CUDA (pytorch/functorch#203)
51e5e36f48 : [functorch] "Scorecard" benchmarks for pointwise op authoring (pytorch/functorch#193)
c3df3e13e4 : [functorch] Fix max_pool2d batch rule (pytorch/functorch#202)
39f05e050c : [functorch] Fix CI (pytorch/functorch#198)
543f6b2daa : [functorch] Remove xfails
cbd20344d4 : [functorch] max_pool2d_backward batch rule
768be7a30e : [functorch] Make one of the boxed fallbacks faster
4ed8ddaa83 : [functorch] Fix batch rule for max_pool2d
51be1e02f5 : [functorch] resolve_neg, resolve_conj
a475ae8ea1 : [functorch] add scatter.reduce batching rule (pytorch/functorch#188)
13bd75217a : [functorch] Support conj bit, neg bit
f07936ed4a : [functorch] Ravel fix, pt3
0e66c857a3 : [functorch] fix ravel part 2
f95bc12c80 : [functorch] Fix ravel
01b84b497d : [functorch] Update script
fde8d1ed07 : [functorch] temporary xfail for nll_loss
37810948d1 : [functorch] broadcast_to just works thanks to expand (pytorch/functorch#191)
fea84d48b1 : [functorch] Add some minor comments to scatter plumbing
3b2b73675a : [functorch] removes existing_bdim_boxed (which did nothing sensible...)
f92a5f8746 : [functorch] cleaned up some eager compilation stuff
206b236447 : [functorch] Batch rule for avg_pool2d_backward
2e774b12a6 : [functorch] Get rid of new_blah_hacks; diagflat batch rule
d6d36eac53 : [functorch] Added index_select batching rule (pytorch/functorch#183)
e0b1271681 : [functorch] Update install.sh (pytorch/functorch#189)
e26a33a703 : [functorch] Fix CI by unpinning the binary version (pytorch/functorch#187)
0b93c18d2e : [functorch] add scatter_add batch rule (pytorch/functorch#182)
51501b4e2f : [functorch] Re-land the compile cache (pytorch/functorch#169)
0d3a029bd6 : [functorch] [port] `view` to new api (pytorch/functorch#164)
14a304a0ff : [functorch] add aminmax batching rule (pytorch/functorch#180)
660b181e7d : [functorch] no_grad fix (pytorch/functorch#179)
4c417e1be8 : [functorch] add cosine_similarity batching rule (pytorch/functorch#171)
1c0efffc2e : [functorch] Ensure WithoutTop doesn't get rid of mode metadata
c5c89b3868 : [functorch] Some test cases for no_grad
4d6abe9763 : [functorch] [port] permute to new api (pytorch/functorch#172)
78d8ed15f5 : [functorch] Fix arg type of roll_batch_rule
9a5105de42 : [functorch] Added tests for op aliases (pytorch/functorch#173)
854a0fd6b7 : [functorch] [port] `expand` to new api (pytorch/functorch#161)
15f24f0fe5 : [functorch] Update README.md (pytorch/functorch#176)
a0df7823fb : [functorch] temporary fix pth version to make CI pass (pytorch/functorch#174)
58aaddb217 : [functorch] Revert the compile cache (pytorch/functorch#168)
00c286220d : [functorch] Python pointwise compiler implementation (pytorch/functorch#163)
a80fe5ee00 : [functorch] [port] diagonal to new api (pytorch/functorch#165)
8d6984b566 : [functorch] Python bindings for compilation cache
726e317a08 : [functorch] Complete compile cache, with in-out specialization
ee21ae22b8 : [functorch] Num arg specialized cache
f4c5322b85 : [functorch] Num arg-and-dim specialized cache for generated kernels
8f68de2e44 : [functorch] Proxies for binding compilation results to python objects
2305561af4 : [functorch] Class for caching compilation results
cdea3fb034 : [functorch] Helper to convert SpecializationKey to python object
f4ae9bb345 : [functorch] Shape-specialization key for op caching
a14f16e5b5 : [functorch] Added argsort batching rule (pytorch/functorch#166)
4e3b50e672 : [functorch] {log|softmax}_backward update as per the new signature (pytorch/functorch#160)
033e7109a4 : [functorch] Support buffers in compiled_module (pytorch/functorch#147)
8bb907b8b4 : [functorch] Added roll batching rule (pytorch/functorch#145)
69db88ab38 : [functorch] made python key turn None into empty tensors
af5c0c1e4f : [functorch] fix multi-output tests
fecaf533ce : [functorch] fix issue with multiple outputs
90ad67843b : [functorch] plumbed partition_fn through compiled_module
aa6b5a3954 : [functorch] Exposed default partition to users
d9de3842d6 : [functorch] renamed some variables in gen_data
cdf866da96 : [functorch] Added cross batching rule (pytorch/functorch#144)
903313741d : [functorch] List of method only pytorch operators
92158729be : [functorch] Check in top operators list to do analysis on
ef534dac54 : [functorch] Fixes pytorch nightly cpu installation (pytorch/functorch#157)
b4daef77a2 : [functorch] Revert "Ported `view` op to the new API (pytorch/functorch#137)" (pytorch/functorch#156)
2f64f5fa45 : [functorch] Ported `view` op to the new API (pytorch/functorch#137)
acf0318fa3 : [functorch] Remove some commented code (pytorch/functorch#146)
cee92d4e2a : [functorch] Fixed functorch to build with PyTorch master
321ebb585d : [functorch] Update run_test.sh (pytorch/functorch#143)
53af441b6c : [functorch] Added rot90 batch rule (pytorch/functorch#138)
b2f72d10ba : [functorch] Fixed unexpected failures with log_softmax (pytorch/functorch#142)
ab71bc7b81 : [functorch] made compile_module faster
4b80ff5acf : [functorch] merged
53c9fdd24b : [functorch] Introduce compiled_module for eager compilation of modules (pytorch/functorch#133)
510089fce6 : [functorch] try vfdev's suggestion for CI
0d9117d74c : [functorch] Fix CI
3c0d7f08c6 : [functorch] Move to PyTorch core's parametrize testing mechanism
dc7f8163a1 : [functorch] Fix CI; update lagging op db
9e0005fca3 : [functorch] updated gen_data.py
9c833de30f : [functorch] Fix CI; temporarily disable some index_put tests
43336f8949 : [functorch] Ported `_log_softmax` to the new API (pytorch/functorch#135)
8f9d697337 : [functorch] fix CI
6a8b18b1c8 : [functorch] Added support for negative in_dims (pytorch/functorch#136)
7b4451bd74 : [functorch] updated batching rules guide
bafdc5d9f9 : [functorch] Turn -Werror off
8ea093d3af : [functorch] refactored eager compilation a bit
59643a5903 : [functorch] fix master
03fff794a0 : [functorch] update coverage script
5237b85e3d : [functorch] Ported `_reshape_alias` to the new API (pytorch/functorch#130)
9804cc1eb0 : [functorch] Grid sample batch rule registration (pytorch/functorch#128)
fd82e4eed0 : [functorch] apparently argmax/argmin are covered by our boxed reductions?
2d2d0cae7b : [functorch] Support functions with multiple outputs in `compiled_function` (pytorch/functorch#127)
43ed96f678 : [functorch] Kill some more dead code; the CI doesn't use gcc?
051b24d478 : [functorch] Turn on -Werror
f70f562c6b : [functorch] Update OutOfPlacePlumbing; killed some UB
beecfbda74 : [functorch] Clean up some dead code; regenerate OutOfPlaceBatching
70246bdd23 : [functorch] Added guide to writing batching rules
edf79c4be7 : [functorch] updated opinfo db
33f05ca58f : [functorch] Added batching rule for index_put_.
050bf89769 : [functorch] Added Index.Tensor batching rule (pytorch/functorch#123)
349dc2f31b : [functorch] NNC Compile: use BufHandle instead of Placeholder which is being deleted upstream.
1c96132a3d : [functorch] Nested jvp seems to work
0753d0c44c : [functorch] Some view backward batch rules
839517a368 : [functorch] Revert "some view backward batch rules"
ae1c20475f : [functorch] Fix README typo. (pytorch/functorch#120)
b6412da10c : [functorch] some view backward batch rules
0db3962308 : [functorch] Kill SkipInfo
b47bc8f0c1 : [functorch] remove scatter expected failure
975a2df062 : [functorch] move select to new API
9cba69268b : [functorch] Added VMAP_SUPPORT for returning vectors of tensors
3a49b986f3 : [functorch] Added wraps(f) to jacrev
dc9cbe1d34 : [functorch] removed some unnecessary batching rules from batchingregistrations.cpp
f20150a3bb : [functorch] Adding jacrev numargs argument (pytorch/functorch#111)
b23c9b907c : [functorch] removed unnecessary note
fcd08e1adc : [functorch] removed stupid line
ae5a12908d : [functorch] Fix location of file
8bd71e0a0c : [functorch] Fix bug in discover_coverage
818487a5e1 : [functorch] Handle label smoothing in cross_entropy
7af7eddf4e : [functorch] Added dlpack conversion to/from for TVM
9168d834f9 : [functorch] Added handling of parameters so that requires_grad is propagated correctly
b1390744ea : [functorch] merged
7e13e3c6f5 : [functorch] added option to skip specialization cache
c0afd464d1 : [functorch] Cleaned up the tests
0562725c32 : [functorch] fix some tests
d314e2f78b : [functorch] stored some stuff
cb22fefebe : [functorch] moved function to eager_compilation
09b23abf45 : [functorch] Added start of compile_function opinfo tests
4ff4501d1a : [functorch] Added static_argnums to nnc_jit
dcea426112 : [functorch] actually fixed tests?
66586e95a5 : [functorch] fix warnings
a528be7d85 : [functorch] moved a bunch of inplace ops to use boxed fallback
13fab14fe8 : [functorch] Added special handling for svd correctness
b5ae3dcd02 : [functorch] Add svd to test exclusions for now
b2deb179d5 : [functorch] fix unnecessary thing that breaks custom ops
17bbac74ed : [functorch] Added some decompositions and augmented boxed_reduction to support optional single dim args
6077a0b0f4 : [functorch] Fix tests failing since I added batching rules
1cc47c29e6 : [functorch] Cleaned up some things, added some batching rules, etc. part 2
7671b9f7eb : [functorch] Script to list out OpInfo coverage on our @ops tests
87180080f3 : [functorch] Batch rule coverage for vmap({grad, vjp}) testing
9707431d84 : [functorch] Test for batch rule coverage
a98f43e9f0 : [functorch] Cleaned up some things, added some batching rules, etc.
3fa1792c00 : [functorch] Fixed handling for boxed fallback for optional dimarrays + scalar inputs, added linalg_eig, var_mean.correction, and std_mean.correction
ad29ecb7f3 : [functorch] Added linalg_cholesky/inv_ex batching rules and removed STOP_DECOMPOSE entries on non-composite ops
19005eb19b : [functorch] Added svd/qr batching rules and cleaned up some warnings
6ce0325f58 : [functorch] Added boxed variadic bdims batch rule and some linalg rules (linalg_slogdet, geqrf, logdet, matrix_exp, solve, symeig)
3c47e87589 : [functorch] Added a grab bag of batching rules (bitwise_or/xor, logaddexp, logaddexp2, kthvalue, isposinf, isneginf, nan_to_num, signbit)
486679ba23 : [functorch] Updated gen_data script to use new APIs
fddaff44f6 : [functorch] Added pointwise backward batching rules (elu_backward, hardshrink_backward, logit_backward, log_sigmoid_backward, gelu_backward, softplus_backward)
70d3b16041 : [functorch] Added median/nanmedian batching rules
d02e227cbd : [functorch] Added some fft batching rules
d741c68d86 : [functorch] Added any.dim, count_nonzero.dim_IntList, cummax, cummin, linalg_vector_norm, logcumsumexp, nansum, nansum.dim_IntList, and mvlgamma
854d538005 : [functorch] fix some errors in reduction batching rules
11bd7e33a5 : [functorch] Switched over to boxed implementations for reductions :)))))))
2fb4a5973e : [functorch] whups
6a7a022f9d : [functorch] Added boxed pointwise batching rule and almost finished pointwise (addcdiv, addcmul, clamp.Tensor, frexp.Tensor, lerp.Scalar, lerp.Tensor, log_sigmoid_forward, polygamma)
f9144e9409 : [functorch] Specialization cache (pytorch/functorch#99)
38803ae05b : [functorch] Run python key tests in CI (pytorch/functorch#98)
a0a3369706 : [functorch] removed OP_DECOMPOSE
4dd2f09f65 : [functorch] Started decomposing all ops that showed up in tests and also didn't throw errors
e736a68301 : [functorch] Added decomposition stops for all of the composite ops we haven't implemented yet (won't do anything until we decompose by default)
3dd4196d1a : [functorch] fix embarrassing bug causing builds to fail (I blame Copilot)
0884b8a533 : [functorch] Added all of the upsample no overload functions too (not sure if these ever even show up...)
eb26b8b97a : [functorch] Fixed pythonkey tests
b178cf6867 : [functorch] Updated some op annotations
f91c37dbb2 : [functorch] fix CI hopefully
b04e50d421 : [functorch] Finished rest of easy pointwise operators (16 of them)- bitwise_and, heaviside, hypot, gcd, igamma, igammac, lcm, nextafter, polar, rrelu_with_noise, bitwise_not, glu, logit, mish, threshold, copysign (kinda)
6136d66b58 : [functorch] updated the op DB
eae7765bb7 : [functorch] Added operator analysis generator
ac22e4617d : [functorch] Remove the nondeterministic tests from xfail...
4afc9a53ed : [functorch] Use new xfail mechanism in test_vmap_exhaustive; turn on CUDA for that test
4d91cead10 : [functorch] New expected failure mechanism for test_ops.py
82a7a6a8cd : [functorch] Fix matmul batch rule; use decomposition from core instead of our own
49b157f044 : [functorch] Fix softmax_backward batch rule
fa50f1e237 : [functorch] Fix CI
da8b20fc8e : [functorch] Adds one_hot decomposition, scatter batch rule (pytorch/functorch#93)
f7b8cc6402 : [functorch] Improve dynamic op error message
1a44e9982a : [functorch] Add newline to eof
d9c3351183 : [functorch] Fix nll_loss_batch_rule (the decomposition is a negative gather)
5672317b29 : [functorch] gather_backward batch rule
07079d8a4f : [functorch] gather batch_rule, nll_loss batch rule improvement
f17fb50a5e : [functorch] masked_fill_.Scalar batch rule
897f0e94dd : [functorch] Add true_divide/floor_divide/cholesky batching rules
1201fc28a1 : [functorch] Some upsample backward batch rules
c4e5ce3dfe : [functorch] Quick fix
f904936be6 : [functorch] fix log_softmax_backward batch rule
14f02a9407 : [functorch] softmax_backward batch rule
d80ce0b618 : [functorch] Added Pythonkey Opinfo tests for tracing a single op
0805733521 : [functorch] Some easy backward batching rules
6cea76a28b : [functorch] Batch rule for mse_loss
f08e17f246 : [functorch] Fix vmapvjp testing; it used to not do anything
2f490672ef : [functorch] remove nn.functional.linear from test_vmap
9907915986 : [functorch] added rest of scalar batching rules (xlogy, float_power, special_xlog1py/xlogy/zeta)
2c4b5968cf : [functorch] Add xlogy batching rule + UNARY_SCALAR macro
1b5cd8e227 : [functorch] Fixed some more vjp failures
5de1f0c0a1 : [functorch] Mention that users should install ninja to speed up the build
227d0c9c2b : [functorch] Fix bug in softmax/log_softmax related to scalars + refactored code to be nicer
e1a342f0ba : [functorch] Fix op db + nnc_compile failing tests
a3eb2ef978 : [functorch] Added nn.functional.mse_loss batching rule
5466ac5da3 : [functorch] Update eager_fusion.py
02713fd157 : [functorch] Update README.md
93fb561a04 : [functorch] _to_copy batch rule
d4ada7c7c3 : [functorch] vmapvjpvjp testing
e8a5b3725b : [functorch] fix example to work well :)
1a2e538580 : [functorch] Moved to Keops example
9ca5ae86d0 : [functorch] Added eager-mode fusion prototype
e632a46ff0 : [functorch] Added initial NNC example of eager-mode fusion
7c405f44b1 : [functorch] Fix "vmap: inplace arithmetic(self, *extra_args) is not possible" for linear
a9327ea80e : [functorch] Revert TensorWrapper changes to make things more sane
fd7e524b4e : [functorch] update binary_pointwise_batch_rule
4ce294d25c : [functorch] Quick grab bag of batching rules
0a7954b9d4 : [functorch] Update EXISTING_BDIM_BATCH_RULE
3a1a59a74b : [functorch] Updated variadic_bdims_batch_rule
745c633687 : [functorch] replace basic_unary_batch_rule with BASIC_UNARY_BATCH_RULE
7e1d730a4f : [functorch] Updated squeeze and squeeze.dim batching rules to new style and added scalar handling (pytorch/functorch#81)
53155d3ba0 : [functorch] remove old python key implementation :)
707c26d3db : [functorch] Added vjpfull
a92ca843d2 : [functorch] updated nnc_compile to work with new python key
a11bdcc411 : [functorch] pythonkey refactor
8107b13b1e : [functorch] Tensor printing
b3e39c1968 : [functorch] Fix reshape failure by adding _reshape_alias batch rule
f6bb9acdbc : [functorch] Add interpolate/upsample batching rules
308f477598 : [functorch] Finished most of the rest of the pad batching rules (also added an existing_batch_dim_template)
a5a7245e46 : [functorch] Added batching rules for constant_pad_nd
266d2ced94 : [functorch] updated opinfo db and fixed failing tests
ed65f0a83a : [functorch] Revert "We actually fixed a lot of the vjp problems"
22ad9d473a : [functorch] We actually fixed a lot of the vjp problems
2ceba07ea5 : [functorch] Resolved some vjp failures
5cfa5728eb : [functorch] Normalize, cross_entropy OpInfos
f8caad7fb1 : [functorch] OpInfo for pad
4028de1d45 : [functorch] additional OpInfo for Conv2d
f28e199609 : [functorch] More OpInfos
6b59f1ad78 : [functorch] Add `functorch_additional_op_db`
d506951937 : [functorch] Some more batch rules for pointwise ops
983a43cfc9 : [functorch] batch rules for torch.special unary ops
1b78cae7b6 : [functorch] Quick grab bag of batch rules
6ca3e96eef : [functorch] Fix CI; for real this time
2e7ddb7a86 : [functorch] Fix ci
6d39fa335b : [functorch] Added some make_fx+vjp/jac/vmap tests
8e62e271be : [functorch] Add make_fx(grad(..)) test
236d2f20b6 : [functorch] Update README.md
046453c66b : [functorch] Added a quick benchmark
8d127816d3 : [functorch] Added citation section
57f48cc691 : [functorch] Fix parametrized testing
fea6254d71 : [functorch] Fix grad wrapper behavior when wrapping Python tensor
e836870e07 : [functorch] removed comment that previous commit fixed :)
d71bb37414 : [functorch] ported random registrations to boxedfallback
b28268c8f4 : [functorch] Added errors for dynamic ops
899fea5eab : [functorch] Batch rule for resize_ when batch dim is at front
0bba49c8a7 : [functorch] Added dropout (tested by opinfo to be upstreamed)
a598ed6be6 : [functorch] Added softmax batching rule (tested through local OpInfo, to be upstreamed)
abd8c710b2 : [functorch] Added dist decomposition
23fc2e0f6e : [functorch] moved some decompositions from batchingregistrations.cpp out
71042b1b16 : [functorch] added remove-inplace pass to nnc_jit (kinda works, but won't work with aliasing)
abc520f804 : [functorch] Update dependency (pytorch/functorch#58)
a9ac8e814c : [functorch] (Grad)TensorWrapper sometimes has storage (and data_ptr) (pytorch/functorch#65)
26503f21e1 : [functorch] Install expecttest as CI step (pytorch/functorch#66)
74f773192f : [functorch] added ability for module attrs to be directly referenced in output
f10845eeaa : [functorch] Added binary_cross_entropy lowerings
379ae35ef2 : [functorch] fixed device issues in tests
f561f0f665 : [functorch] add no-dim for batchrulesreduce and move trace from old batching rule to new decomposition
0a8ef33ca8 : [functorch] Removed no-dim rules for batchrulespooling and batchrulesviews
8f06e91546 : [functorch] removed no bdim batching rules from batchrulesbinaryops
cb286b9b49 : [functorch] removed no bdim cases from batchruleslinearalgebra
8cd80e0b16 : [functorch] Added a test for case where we would previously have dispatched to a batching rule despite having no bdims
446d0b4e4d : [functorch] Selectively enable dispatch on kBatchedKey (pytorch/functorch#63)
bbfdcdbd79 : [functorch] Added einsum batching rule
495112f550 : [functorch] updated opinfo DB and added meshgrid batching rule
f6667347a2 : [functorch] Exclude failing tests, add xfail test for torch.tensor
c02cc07c96 : [functorch] Add flip batching rule
b20e4decc4 : [functorch] Added batching rule for logsumexp
9c138786b7 : [functorch] Added norm.Scalar and norm.ScalarOpt_dim
65c54e282c : [functorch] Revert "Exclude List[Optional[Tensor]] from the batched fallback"
6916cc5d5b : [functorch] Added full_like and refactored factory macros a bit
b815b5bc6b : [functorch] Added triu/tril
0aedd9e8c1 : [functorch] finished batching rules for div
345cf3ebf2 : [functorch] Added sort batching rules
94a55b7ead : [functorch] finished off where batching rules + added OP_DECOMPOSE macro
7b8478332b : [functorch] Added where/_s_where, changed flatten (a composite op) to decomposition, and added (var/std).correction
60fbba9d0e : [functorch] Added erf/inverse/isinf/isnan batching rules
58319030bd : [functorch] Added isnan + topk batching rules
7bb63ff2ef : [functorch] Added clamp.tensor batching rule
f787f0be9d : [functorch] Added mode batching rule
4f87a0694e : [functorch] Added max/min/prod (no dim)
85587011a8 : [functorch] Exclude List[Optional[Tensor]] from the batched fallback
9094e757a0 : [functorch] Added cumprod/cumsum and ported over log_softmax
ec25616e0c : [functorch] Added maximum/minimum/clamp batching rules
9b00f55a46 : [functorch] Added bmm batch rule
e9c196e4f7 : [functorch] Add deg2rad/rad2deg/radian batching rules
ede0952ee4 : [functorch] add acosh/atanh/asinh batching rules
ed2f58242f : [functorch] fix some bugs with reduction batching rule + add amax/amin
43de84e088 : [functorch] Added sigmoid_backward and replaced a bunch of typedefs with decltypes
5af36052cf : [functorch] Added atan2 and zeros_ batching rules
56378e986c : [functorch] Added batching rules for convolution, conv1d, conv2d, conv3d, etc.
1666d90161 : [functorch] _unsafe_view batch rule
bea0df36c2 : [functorch] Clarify the vmap fallback warning message
cfcd328ff7 : [functorch] Added note about reaching out
0b60292122 : [functorch] Quick optimization
5acd1b995d : [functorch] linalg_eigh batch rule
75a64bb029 : [functorch] Fix some headers
79e42fc15b : [functorch] Implement more batch rules for resnet18 per-sample-grads
a054adeb74 : [functorch] Fix build/test
dc5e0c0f58 : [functorch] cudnn_convolution decomposition
eb788ae986 : [functorch] maxpool_2d_with_indices_backward batch rule for specific case
1262034bb7 : [functorch] Implement per sample grad rule for cudnn_convolution_backward
cfa9d98499 : [functorch] quick fix
4a045e9659 : [functorch] Introduce gen_plumbing.py
03fbb542fc : [functorch] Very experimental PyTorch forward-mode AD support
e8c5f67cd8 : [functorch] Setup circleci (pytorch/functorch#53)
4aea806404 : [functorch] Added enough things into denylists so that tests pass
3524505d7e : [functorch] Create "lagging op database", use it in our OpInfo tests
fee49501db : [functorch] Fix a lot of warnings; use c10::irange
e3429647d4 : [functorch] Change initializer list tuple return to std::make_tuple
b29e666ade : [functorch] [BC-breaking] Update make_functional* (pytorch/functorch#52)
fdcc680c9d : [functorch] Update tests and examples to use make_functional*_v2
ba1952c176 : [functorch] Added some notes about pythonkey tracing to readme
58e7df77d3 : [functorch] Added std batching rule
710d06c815 : [functorch] templated out sum/mean/var batching rules and added nansum
1b90b429d7 : [functorch] make_functional*_v2
4a20c215ce : [functorch] vmap-of-vjp and vjp-of-vmap OpInfo testing
e58d7dde62 : [functorch] Add newlines to eof
324f4d5e51 : [functorch] Implement batch norm batch rule for one case (where everything is batched)
7a1ee75ff3 : [functorch] adaptive_avg_pool2d batch rule
804a901abc : [functorch] Fix nightly binary links
c91ef2f13e : [functorch] refactored argmax batching rule to support argmin too
039f98d0ea : [functorch] Fixed argmax batching rules
9644436866 : [functorch] Added a bunch of unary activation functions
f536c20b05 : [functorch] fix bug with repeat batching rules, fixes pytorch/functorch#9
51408b2b32 : [functorch] move some test info around
180214ddb9 : [functorch] Fixed oversight in vmap tests, fixed bug in sum batching rule, and added var batching rule
1279ee4282 : [functorch] add lowering for triangular_solve + getitem
bfcaed043d : [functorch] Added get_ops and made lowering work for tuple outputs
6877541cc1 : [functorch] Add some citations
b627d3516e : [functorch] Citations
c7759445e5 : [functorch] Add vjpvjp tests, fix pytorch/functorch#44
5ead88c7dc : [functorch] Added code of conduct + contributing
06a722867e : [functorch] Added a reference to the license in the README
c446037aa4 : [functorch] Add LICENSE headers to code files
1a3a2cf1ca : [functorch] Refactor vjp testing
dd2d217a3e : [functorch] Added error checking to vjp, also added opinfo tests for vjp
a49af32a6e : [functorch] Fix incorrect vjp semantics
92a4886afa : [functorch] Beef up grad testing, eliminate some false errors
7ddbbc392f : [functorch] Enable colors in build log
66ee9e96a3 : [functorch] Add opinfo based testing for grad transform
74c7c73672 : [functorch] Kwarg support for grad
2f3f44c302 : [functorch] cleaned up logic
a92f492b9c : [functorch] removed extraneous pdb
0e3a2b2d5c : [functorch] made tensors const references - fixes pytorch/functorch#38
ea85c20c35 : [functorch] Make the tensor wrap/unwrap logic aware of Tensor?[]
fc5c632ab5 : [functorch] fix some compile warnings
a24930ba93 : [functorch] Fix being unable to call .backward() after vmap
bf1df4f3af : [functorch] ported sumdim batch rule over and added argmax
74d0250734 : [functorch] fix vmap tests
fbc76eb6da : [functorch] Add mode dispatch stack (pytorch/functorch#34)
45d4228334 : [functorch] Added wraps to grad_and_value
470ecce6e5 : [functorch] nll_loss_backward batch rule for some cases
221bdfba33 : [functorch] Added a MKLDNN decomposition and new_ones overload
6b1cc7f499 : [functorch] added reshape lowerings to nnc
a7f406ce58 : [functorch] Quick attempt at hiding functional module init
7f344c5a0b : [functorch] Add batch rule for nll_loss_forward for most common cases
1cd0dd00ce : [functorch] Linear batch rule (which is just a decomposition)
9496ea3e8a : [functorch] fix build failure
2910423017 : [functorch] Update some version numbers
9e6201db9a : [functorch] lennard-jones example and test
d65bb48b46 : [functorch] Added a way to call the slow fallback from plumbing
03173fad44 : [functorch] Inplace +-*/ batch rules
8e82d5afd1 : [functorch] fix issues with passthrough variables
7644e62d11 : [functorch] Added statement if there's an empty NNC compute expression
f2204f6045 : [functorch] Added some tests + fix some stupid stuff
e1281fca60 : [functorch] Fix some failing tests
29b90b4f4f : [functorch] Added conv2d batching rule (pytorch/functorch#10)
354e79afc5 : [functorch] log softmax backward batch rule
8e0e341076 : [functorch] Fix some of the more important lint errors
69e7dccc25 : [functorch] switch from TORCH_INTERNAL_ASSERT to runtime_error
f7662a2101 : [functorch] test/common.py -> test/common_utils.py
31a3d45c88 : [functorch] Added aten::diag and aten::rsub.Scalar batching rules
2ddf7b0f70 : [functorch] Switch PythonTensorImpl to using custom contiguity policy
45a4dda3ad : [functorch] Switch to overriding is_contiguous_custom
38eb311988 : [functorch] Fix clamp_min / clamp_max
bb701ef563 : [functorch] Roll back clamp_min / clamp_max change
256099c1a6 : [functorch] Switched over to pytorch core's tree_map
66081519da : [functorch] Removed WrapModule
f0f15bc84a : [functorch] Batching rules for: threshold_backward, clamp_min, clamp_max
d9d5a52a14 : [functorch] Update README's torch nightly version
5e90f1e61b : [functorch] Parameterized testing
647ce0ab8d : [functorch] removed unnecessary code left in
69ae2b6dd0 : [functorch] fix oversight with reusing the same storage
26d2ab7e35 : [functorch] updated README
ac017463e0 : [functorch] Added a simple function example
25453d611c : [functorch] Added nnc compilation stuff to functorch
4b0d62a2af : [functorch] Migrate mm and mv batch rules from old style to new style
9bdd9cee5d : [functorch] reshape_dim_into and reshape_dim_outof helpers
0c5755fcaa : [functorch] Delete some redundant code
58ee527d46 : [functorch] Added batching rules for some in-place pointwise ops
66918c4a3e : [functorch] Refactory UnaryOps
cb51b16e59 : [functorch] Batching rules for comparison ops
f92fbeef74 : [functorch] vjp now supports pytree inputs and outputs
9490aa0b65 : [functorch] Fix vjp and grad for unrelated outputs
ec99d21d1e : [functorch] link colab with install instructions
9a81203259 : [functorch] Use functorch._C._set_vmap_fallback_warning_enabled
15ab42ce7c : [functorch] Beef up make_functional docstring, update some examples
6ecf169a07 : [functorch] fixed some python key tracing issues
7224611cd9 : [functorch] Removed some stupid stuff
80131b937d : [functorch] adds support for grad if inputs don't depend on output (pytorch/functorch#6)
86e49cf0d7 : [functorch] fix vmap tests and added dot shape check
73f57a7192 : [functorch] removed nnc_compile
98e995c467 : [functorch] Added python key stuff that builds with master
9d36895a83 : [functorch] merged
98df806b95 : [functorch] Fix up the cifar10_transforms example
dfeb7898f2 : [functorch] pytree output support for vmap
c6773c67d6 : [functorch] Add pytree support for grad and grad_and_value
8c8685ca39 : [functorch] Fix build
ba91512ce9 : [functorch] Installation instructions
32240466a4 : [functorch] Added ones_like/zeros_like/... batching rules
03550ccd61 : [functorch] Added debug commands
033e6e30af : [functorch] removed python key stuff
2467773967 : [functorch] Quick temporary fix for python key
c521cd1f45 : [functorch] fixed dot batching rule with no batch dims
0d94ae66a7 : [functorch] fixed dot batching rule and added vmap tests using opinfo
918ede7a85 : [functorch] Added dot/add.Scalar/mul.Scalar/etc. batching rules and added functools.wraps to grad
ac9be17a87 : [functorch] modified example
c24314c09b : [functorch] More batch rules
d7d266f51e : [functorch] Grab bag of batch rules
0abba43aa3 : [functorch] fix testing warning, update readme
42843306ab : [functorch] Add codegen file; beef up testing
e8abf483ea : [functorch] Run all correctness tests
20fac9da6e : [functorch] testing, version.txt
8277d74e42 : [functorch] Update readme, fix unsqueeze batch rule
7001a2f1e4 : [functorch] Update README, setup.py
70925e47c7 : [functorch] Update
b7096ab83a : [functorch] misc cleanup
ce453d449e : [functorch] Updated some batching rules to use new API
985b35c23d : [functorch] new_blah hack
8201dfc2d5 : [functorch] Various bug fixes; new BR api
a39cb7a80f : [functorch] beef testing
93888a3779 : [functorch] a lot of files
608a932c1a : [functorch] Initial commit
c84791b458 : Change get module info to NOT parse flatbuffer as module. (#81819)
320cc1940b : [Vulkan] Implement select.int operator (#81771)
e60f8f4f60 : Improve autograd custom function docs (#81340)
15fef3dc7e : normalize cuda device (#81739)
745739d8f3 : reland of #80545 w skip rocm (#81738)
729077b461 : Move away from deprecated 10.15 runners for ios tests (#81903)
445ee5620e : Simplify torch.nn.grad by calling into aten::convolution_backward (#81839)
d4043f0d95 : Added generator check for parametrize and ops (#81263)
dfb2a1d1ca : bmm and contiguous constraints (#81527)
428e44ffa1 : [DataPipe] Fixes various warnings, exceptions, and clean up testing (#81833)
e02f7f42b0 : remove myself from native_functions.yaml codeowners (#81890)
f126cd5ffa : add an env var for dispatcher debug logging (#81846)
74c7958717 : [TorchTidy] Adding Layout support and exposing TensorMetadata (#81155)
d3f27f9312 : [JIT][Autocast] document that scripted autocast context cannot disable eager-enabled autocast (#81747)
a7c1f74426 : Revert "Revert "Call lift_fresh after scalar_to_tensor in composite derivative formulas (#81609)"" (#81885)
b492f7c485 : Introduce discontinuity to nested tensor (#80981)
43e7fee764 : [Reland] Recursively print graph module and its submodule (#81639)
8d0cbce069 : Lower randint default dtype to the C++ API (#81410)
5f2e31797a : Replace _dtype_default_type_hack (#81479)
e66986421d : [ao][sparsity] Training-aware data sparsity callback for lightning (#80371)
eecf34fbe7 : [ao][sparsity] Post training data sparsifier callback for lightning (#80370)
a9433dffc2 : ci: bump lite interpreter to macOS 12 (#81882)
86bd888bc6 : ci: Fix mac names, bump builds to bigger runner (#81874)
edf1868e67 : Fix test_doc_template regex (#81755)
6e9a201986 : [Vulkan] Generalize GLU along channel dim to support input of any even channel size. (#81729)
f5b460b200 : Revert "[CUDA graphs] Clear autocast amp cache (#81558)"
fdc2af0090 : Revert "Call lift_fresh after scalar_to_tensor in composite derivative formulas (#81609)"
da87fa684c : Revert "[fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)"
927e9f150a : Revert "[complex] conv_tranpose1d (#79694)"
92d0700520 : ci: Switch MPS tests to self hosted runners on AWS (#81772)
e68583b4d1 : Fix deserialization of TransformerEncoderLayer (#81832)
aad7a1c06c : Call lift_fresh after scalar_to_tensor in composite derivative formulas (#81609)
a16639e20a : [torchdynamo hash update] update the pinned torchdynamo hash (#81842)
a31aab71b4 : Fix windows workflow RCE (#81835)
0b3a239e85 : [pocket fft] turning on pocketfft flag (#81670)
5b88a2078b : Follow GitHub relabeling of oncall: fx for test owners (#81821)
e9d07bd4f0 : [CUDA graphs] Clear autocast amp cache (#81558)
c179597753 : Add native impl for group norm on quantized CPU for channels-last inputs (#70520)
9409015094 : [fix] build failure in python 3.10 (#81812)
2fb2740ef9 : corrects typo in quantization docs (#81687)
a5fb41e3d3 : Revert "Revert "Refactored prim utils into _prims_utils folder (#81746)
75aa049a81 : Revert "[Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)"
b8f9751f11 : Add cpu/gloo tests for sharded tensor distributed checkpoint (#80997)
1bf5ce47d2 : [GHA] Re-enable backwards compatibility test! (#81795)
c9497886fd : [JIT] Modify is_mutable in FunctionSchema and SchemaInfo to have SchemaArgument parameter instead of index (#81784)
41d054ab11 : [JIT] Modify SchemaCheckMode to utilize SchemaInfo and fix training op tests accordingly (#81783)
21a4be34cd : [JIT] Enchance training ops check to be more inclusive and account for possible pybind exceptions (#81782)
08b9544e1c : Enable reentrant dispatch for decompositions (#81598)
8b5685da12 : [composite compliance] test_operator correctness (#81600)
d681ed567b : Upgrade mypy in lintrunner to 0.960 (#81775)
3c9479dc30 : Revert "FIX make sure we import the correct object from multiprocessing (#53282)"
66cf1b6459 : correct argument name in docs (#81485)
8573da59c3 : Re-enable C++ doc generation (#81719)
a3d5d2ddf1 : Add partitioned nvFuser executor with ATen fallbacks (#81043)
767facdb18 : Expose Lint for AliasDb (#81579)
f3f8d96ea6 : [fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
26ecd67d38 : [Profiler] Add pattern to detect if TF32 is available but not used (#81273)
6e621d0287 : Add pattern to detect for loop indexing into tensor (#81056)
706b420a52 : [composite compliance] check output of forward-ad with subclass args against regular tensor (#81464)
b24c102209 : Add test that all BackendComponents are covered by toString (#81713)
41d8c24a85 : Fix printing of DispatchKey in operator not found message (#81637)
e103d6af3d : FIX make sure we import the correct object from multiprocessing (#53282)
0aedda25bc : [PyTorch] Reporting OOM events to the Pytorch Profiler. (#80050)
2785818f5a : Choose test affinity based on current affinity (#80327)
d7210e6129 : [MPS] Fixes for MPS testConsistency (#81735)
0ad702339d : constraints for transpose (#81516)
abc13520be : api for user-defined constraints (#81445)
fca695edbe : modify get_attr to work with HF model and fix a bug in multiplication (#81376)
ee1d79a6df : [Reusable workflows] Linux - Move codegened binary builds + upload to a reusable workflow (take 2) (#81564)
dced803339 : [nn] add `insert` method to sequential class (#81402)
596bb41163 : [MPS] Get the correct size of the view tensor when copying from cpu to mps (#81730)
92c6690b9c : Fix linspace dtype replacement in docs (#81371)
d4ddc5bf63 : [complex] conv_tranpose1d (#79694)
2edd6aaeaa : Add prelu op and module for quantized CPU backend (#73491)
7cea41fcf2 : Revert "[ROCm] Temporarily disabling ROCm CI job (#81646)"
2c0b11b43b : [nn] implement `extend` method to sequential class (#81179)
0f164d342f : [functorch hash update] update the pinned functorch hash (#81748)
24010a609e : [torchdynamo hash update] update the pinned torchdynamo hash (#81675)
c09d84d325 : [Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)
eaaaafdb37 : Remove unused approved_by (#81741)
589e8a1da5 : [ao] Get feature and module names from ModelReportVisualizer class (#81647)
1d3935a77d : [ao] Add method in ModelReport to generate visualizer (#81589)
d0ce1fbbe2 : [ao] Created Skeleton for ModelReportVisualizer class (#81523)
e85bdd5435 : [vulkan] Use automatically generated descriptor set layouts (#81716)
96958be6be : [vulkan] Automatically generate shader layout from GLSL (#81715)
04838696b0 : parallel_apply should forward current streams to worker threads (#78824)
7c5dac5228 : Dialect agnostic CSE Pass (#81530)
99cb5fde12 : [PTE] Fix module level information in profiling (#81727)
a60907ec11 : Adding fsdp fp16 and bf16 hooks (#81711)
1ddbc5a7dc : [JIT] Remove has_side_effects functionality from SchemaInfo (#81575)
8e454cc702 : [JIT] Add SchemaInfo python bindings to init.cpp (#81518)
dadfe1c7bf : Add nondeterministic tags in tags.yaml and add the nondeterministic_seeded tag to all functions in native_functions.yaml defined as nondeterministic by alias_analysis.cpp (#81440)
6cf0d9249f : Add prelu and relu6 refs missing from __all__ and decomp db (#81420)
081eedca1e : Revert "[complex] conv_tranpose1d (#79694)"
3a6306b9af : Remove remaining `eval` calls from `torch/storage.py` (#81701)
e43a02c314 : Revert "Refactored prim utils into _prims_utils folder (#81088)"
06a0cfc0ea : pytest to run test_ops, test_ops_gradients, test_ops_jit in non linux cuda environments (#79898)
9f9dd4f072 : [ROCm] Temporarily disabling ROCm CI job (#81646)
87cdb52cc4 : [FSDP] Stricten `_update_p_data()` in `_summon_full_params()` (#81573)
f82b19f15b : Revert "Disable use_mkldnn when input is not contiguous for oneDNN (#80864)"
5bd7abf281 : functionalization: fix for mutable ops with different type promotion rules (#81702)
ba54165392 : Make sure that exit code is propagated from Child to parent process (#81408)
1a6b97b8a3 : Clean-up error checking in linalg. (#80767)
0d25915836 : Modify quantizer test for k=1 to use uniform observer for scale, zero point (#81696)
aa1466d542 : Raise proper timeout when sharing the distributed shared seed (#81666)
0f61414579 : [complex] conv_tranpose1d (#79694)
2a3fd6fdaf : Update sample inputs for fft.hfftn (#81416)
e1cd2506fa : ci: Move macOS binary builds to bigger runners (#81699)
7e60e315da : Add support for Generator conversion to/from IValue (#81697)
8a6d1289d8 : [ao] Revised ModelReport API to take in model at initialization (#81588)
c67a4d8a65 : [GH1] replace globs with patterns that mergebot can process (#81414)
270069cfb9 : Fix DataLoader flaky tests that run out of shared memory (#81660)
ed0091f8db : fix view_copy kernel striding check logic (#81553)
3d74fd4870 : [Expanded Weights] add ability to not specify batch size (#80944)
5973cbe657 : [Expanded Weights] fix conv3d (#80943)
2bcbea1ff6 : [Expanded Weights] fix layer norm (#80895)
7408004454 : Revert "[Codemod][Format buck files with arc lint] caffe2/third_party (#81441)"
fde1107fe8 : Revert "Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)"
e907a8d966 : [ao] Updated dict keys of detectors to have consistent naming scheme (#81587)
9f873ed7c8 : [torchgen] support codegen'd C++ API for a mixture of namespaces (#81581)
8367fd9d6b : Remove `eval` from `torch.storage._TypedStorage.__new__` (#81679)
a6e716cfed : [JIT] Add may_contains_alias function in SchemaInfo class (#81444)
47cdab6601 : [JIT] Fix double wildcard edge case for may_alias in SchemaInfo and improve formatting (#81439)
18a0737f7d : [functorch hash update] update the pinned functorch hash (#81674)
80231d0a72 : Refactored prim utils into _prims_utils folder (#81088)
a8f4011e90 : Revert "Adding fsdp fp16 and bf16 hooks (#80557)"
368018530e : [quant] Implement forward and backward autograd functions for fake quantize (#81438)
4aac42cc98 : [LT] Add a new backend interface [DUP of the original] (#81662)
29a9928767 : torch.distribution examples rendering issue (#81611)
8a3f88b5e0 : [ao] Standardized InputWeightEqualizationDetector output to single level (#81586)
72de816f5c : GIL acquire needed in ValueCache::trimPrefixes (#81061)
d80bce7afc : shard `trunk / linux-bionic-cuda10.2-py3.9-gcc7 / test (slow` to 2 (#81569)
af33c2444c : [vulkan] Update allocation parameters for uniform buffers (#81636)
7230b67cde : [PyTorch] Support norm_first in nn.TransformerEncoderLayer fast path (#78269)
f7d6828467 : Adding fsdp fp16 and bf16 hooks (#80557)
2ddb722bc6 : [ao] Standardize PerChannelDetector Output to be single level (#81585)
799bc645d9 : [Expanded Weights] fix loss reduction (#80892)
71e16f9eef : [build] Add `fbsource` keyword to `pt_operator_libarary` (#81298)
84c8a9f88e : Use slow but safe formula for prod_backward (#81617)
c96485804f : Revert "[CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)"
1233c3c256 : [Codemod][Format buck files with arc lint] caffe2/third_party (#81441)
ccbf04dd5f : [DataPipe] Fix fork/unzip with a single child (#81502)
50d205c551 : make clamp decomps use torch.* calls, move clamp_min/clamp_max to refs (#81619)
19a296486b : Remove unused local variables from generator.py (#81505)
e3a569384a : Correct `super` class name (#81507)
6884865009 : Remove unnecessary assigment (#81498)
547e499731 : Enable Zero1's ddp_with_overlap for hpu backend (#80438)
ec1b3a45ad : Revert "[Py3.10] Allow floats to be imported as Long (#81372)"
4035a53cca : Revert "Recursively print graph module and its submodule (#81080)"
37474a54de : create a concated LICENSE file for wheels (#81500)
d82d2083e2 : Excluded future functorch path from linters (#81563)
99c464ae26 : Add CUDA 11.7 workflows (#81095)
35563f4fcd : [torchdynamo hash update] update the pinned torchdynamo hash (#81621)
471397d0ee : Remove spurious assert (#81604)
fe7262329c : Recursively print graph module and its submodule (#81080)
86b86202b5 : fix torch.config can't respect USE_MKLDNN flag issue (#75001)
4655c3bace : Disable use_mkldnn when input is not contiguous for oneDNN (#80864)
95c148e502 : [BE] Turn `_check_cuda_version` into a function (#81603)
7e274964d3 : [BE] Disamntle pyramid of doom in _check_cuda_version (#81602)
446833d11f : [CI] Disable ios-12-5-1-x86-64 (#81612)
3289f72069 : [Profiler] Move Kineto activity generation into `collection.cpp` (#80796)
a5b5a001d5 : [Profiler] Use parent time for implicitly finished Torch ops (#80810)
5392a7ecef : Revert "[Profiler] Use parent time for implicitly finished Torch ops (#80810)"
869b3ed94e : [torchdynamo hash update] update the pinned torchdynamo hash (#81605)
8feccedd0b : [functorch hash update] update the pinned functorch hash (#81606)
037792f448 : Revert "[Profiler] Move Kineto activity generation into `collection.cpp` (#80796)"
97938d872e : Added a couple more symint magic methods + symbolic shape infra (#81086)
7ccf693cf6 : [CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)
24e6b60be2 : [Profiler] Move Kineto activity generation into `collection.cpp` (#80796)
eaf92d7883 : [Profiler] Use parent time for implicitly finished Torch ops (#80810)
0b5b10002a : Reduce the boilerplate needed to bind properties (#81576)
efe2c0422d : [Pytorch][ferraris] Move native/utils/Factory.cpp to aten_cpu lib (#81356)
5fbf22efbd : [vision hash update] update the pinned vision hash (#81594)
d656578d62 : [functorch hash update] update the pinned functorch hash (#81592)
5f30ea05c7 : [torchdynamo hash update] update the pinned torchdynamo hash (#81593)
3d1e9d6998 : Clean up xenial py3-clang7-asan and py3.7-gcc7 (#81591)
a0af1d73ed : Checked if symbolic shapes are present before using fallback for sizes, and also checks for custom size policy in shallow_copy_and_detach (#81078)
a4647cc1fa : Apply ufmt linter to all py files under torchgen (#81570)
d625637c7c : Include aten.where.self in NvFuserOperatorSupport (#81436)
00359ff886 : Fix docstring on FileOpenerIterDataPipe (#81407)
a46ac94677 : [BE] Fix CMake dev warning (#81580)
b22166fd62 : Add a small fastpath test for native mha (#81432)
381355630a : disable flaky test VulkanAPITest.cat_dim1_mult4ch_nonmult4ch_success (#81582)
4c9eae331b : Use standard mechanism for stdlib names (#81520)
69d73345a2 : [Py3.10] Allow floats to be imported as Long (#81372)
e100e6fd4f : Added note about katex install for building docs (#81550)
e71f4e7958 : [JIT] Implement may_contain_alias in FunctionSchema (#81352)
23088fcfdf : disable src mask for transformer and multiheadattention fastpath (#81277)
3ea1b9be94 : [FSDP] Construct FQN in _full_post_state_dict_hook (#81253)
88e1c5c1d8 : Apply ufmt linter to all py files under test/onnx (#81335)
9d3c35d1e1 : Back out "Revert D37720837: Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto"" (#81450)
e345138591 : [retake2][mobile] Fix lightweight dispatch OOM error by introducing selective build (#80791)
5139053e02 : Fixed the decomposition for `embedding_dense_backward` (#81528)
4c728a7581 : [ONNX] Add tests to quantized cat (#81484)
6b82601bbc : Stop attempting to install networkx (it's already installed) (#81555)
69608fc598 : [ONNX] remove outdated ImplicitCastType QA in onnx.rst (#81268)
d68fed56ef : [MPS] Handle 1D bias for addmm (#81519)
d7d23ffcfb : [vulkan] implement dequantize (#81493)
3a62f9a585 : [vulkan] implement quantize per tensor (#81492)
7988c53169 : [vulkan] add support for quantized tensors (#81491)
a8f53e80c4 : Make convolution_backward_overrideable give better error (#81538)
cc62603938 : CPUBlas: Use mkldnn optimized BFloat16 matmul for gemm (#65840)
d61ae1a773 : Remove unused variables from state_dict_loader (#81513)
fe34bf1201 : Remove unused storage_size (#81514)
d083b44818 : Remove unused rank from _AllGatherBase backward (#81515)
443b13fa23 : [primTorch] Implement fftshift and ifftshift (#80737)
c0ff72b3ab : [primTorch] Implement two-dimensional fft transforms (#80736)
353180e1bf : [primTorch] Implement n-dimensional fft transforms (#80571)
bf36d8b987 : [primTorch] Implement one-dimensional fft transforms (#80570)
00459c2c87 : [primTorch] Implement constant_pad_nd (#80182)
d52f8c2533 : Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)
7af0200a46 : Add deepcopy functionality to parametrized modules (#80811)
b95ae2909e : Convert toOpMathType to an inline function, to fix test breakages (#81463)
dc849632af : [vulkan] Upgrade vulkan memory allocator to 3.0.1 (#81472)
0b9eb93fe9 : Make type_resolver_ null error have more useful info (#81466)
42ee1608d3 : [JIT] Add special cases batch_norm, instance_norm and dropout for SchemaInfo (#81007)
e49780050f : [xla hash update] update the pinned xla hash (#81524)
32895aa319 : [vision hash update] update the pinned vision hash (#81525)
fca03eeec1 : Make proxy tensor support item() calls on torch.tensor constants (#81192)
c09617f98f : Revert "Revert "Remove python key when setting functional tensor metadata (#81401)"" (#81456)
2f0172bfc4 : Remove unused dtype assignment (#81503)
9004a10e57 : Automatically update the functorch pinned commit (#81404)
b294efc99b : Install networkx when testing functorch (#81403)
978ebf615e : [composite compliance] fix masked_ops item call (#81475)
75fdebde62 : [JIT] Fix annotation extraction for named tuple (#81506)
938643b8bc : CSE_Pass (#81512)
557fbf4261 : Disable some flaky vulkan_api_test (#81509)
3dea7fe6f3 : Remove unused local variables from gen.py (#81508)
3d0b0b2f9b : [fx] PassManager changes (#80531)
bebe74001a : [Bootcamp T124004534] Better Transformer fastpath diagnostics (#81013)
e5a504d45d : [GHF] Report workflow startup failures (#81521)
845792db3c : [ao] Fix for extra lines after return in Outlier Detector (#81499)
1caa25ebcb : [MPS] Add `aten::index_add.out` (#79935)
62af121fc7 : Revert "[Reusable workflows] Move codegened linux binary build/test/upload to a reusable workflow (#81044)"
ce92755d79 : Revert "[Reusable workflows] MacOS - Move codegened binary builds to reusable workflows (#81447)"
f482820596 : Revert "[Reusable workflows] Windows - Move codegened binary builds to reusable workflows (#81442)"
2f55050fb5 : Bump nvfuser executor lru cache max size (#81461)
e3b98ba7a4 : [MPS] Handle Boolean inputs in the cat op (#81480)
e7e835e50a : Fix to folder by adding custom_builtins to dump (#81433)
cf6499e5e8 : Update docs to say that wildcard only aliases other wildcards (#81341)
15845f98fb : [vulkan] Use busy polling when waiting for VkFence (#81470)
8011405a7b : Adding docs on how to run backwards compatability test (#81431)
ae565b27c8 : Fix the undesirable warnings in torch.nn.parallel APIs (#81476)
880b972841 : More efficient indices validations for compressed sparse formats. (#81108)
51cc614cb9 : [pytorch] add missing -fexceptions flags (#81394)
0adc2e35f2 : Add testcase for MPS issue #80856 (#81455)
4963adcc8d : Revert "[composite compliance] matrix_exp (#81225)"
0f3c8c939f : [ao] Added README for ModelReport functionality (#81369)
8f743d7a70 : [ao] Updated detector observer insert args to be vars not strings (#81382)
367c695237 : [composite compliance] matrix_exp (#81225)
446edadd95 : [quant][fx] Follow up fixes for qconfig validations for fixedqparams ops (#81010)
6997ac79d6 : Revert "[primTorch] Implement constant_pad_nd (#80182)"
30a5f2a910 : [Reusable workflows] Windows - Move codegened binary builds to reusable workflows (#81442)
7ec6a6bab5 : [Reusable workflows] MacOS - Move codegened binary builds to reusable workflows (#81447)
cce2f0d0e4 : Disable test_functionalization.py under torchdynamo (#81458)
cc67a92e74 : fixing call_module on subscripting into generator (#81258)
dd73c97ea2 : [FSDP] Remove the dependency of ``_symbolic_trace`` in ``wrap`` (#81443)
c88cda7fce : [PyTorch Edge] Fix ao::sparse::BCSR missing in qlinear serialize and deserialize when USE_FBGEMM and USE_PYTORCH_QNNPACK are not set (#81256)
e8fe11faf9 : [merge bot] Migrate Rest API to GraphQL (#81264)
77cfa9f7a1 : [primTorch] Implement constant_pad_nd (#80182)
924b7951aa : [primTorch] Implement conj and conj_physical (#80358)
8175a8bcb5 : [torchdynamo hash update] update the pinned torchdynamo hash (#81299)
62f1ff23fb : Make functional tensors printable (#81454)
6f6e47aba4 : Suppress virtual-dtor check on llvm_jit (#81449)
05ce013338 : [composite compliance] check output of backward with subclass args against regular tensor (#81400)
bdf5abd6f0 : fixed return type for cuda.memory.mem_get_info() (#81073)
3a87b47de9 : docs: Fix a few typos (#81435)
da247ea1d9 : [vision hash update] update the pinned vision hash (#79256)
340ae3ca43 : [ROCm] unskip test_fx tests (#81125)
ce2ce3ae96 : Cudnn conv cache key patch (#81418)
17fe7ce0e4 : [BE] Delete Win specific case for CMake older than 3.1 (#81411)
0d7e44b874 : re-enable vision hash updates (#81425)
65d03b1024 : Add missing LTC headers to setup.py (#81424)
c657c3d3ab : [Quant][fx] Rename convert_to_reference to convert_to_reference_fx (#81326)
6866c0532d : [PyTorch Edge] Remove LinearPackedParamsBase __getstate__/__setstate__ from check_forward_backward_compatibility.py Allowlist (#81048)
f2bb25a758 : Revert "Remove python key when setting functional tensor metadata (#81401)"
623e041122 : Xla npm hotfix (#81430)
067c8067a3 : [MPS]: Added op upsample_nearest1d (#81303)
85efdec060 : detach and dropout constraints (#81360)
8f2e29fe99 : IndexSelect constraints, add a missing constraint in constraint tranformations and modify tests accordingly (#81344)
c811478915 : refactor mul and add constraints for fields (#81274)
33ccdea419 : constraints for type_as, long, int (#81265)
fb13847db3 : get_attr constraints (#81190)
9f90b19ec2 : combine ne and add in one inference rule (#81189)
1a7be42a62 : [Easy][FSDP] Delete dead code (#81158)
c87828c146 : [BE][FSDP] Subtest prefetching in `test_mixed_precision_e2e_full_shard()` (#80915)
423aa29ca4 : [BE][FSDP] Subtest prefetching in `test_fsdp_core.py` (#80908)
a25df29cc4 : [ao] Updated ModelReport function calls to show not dependent on Fx GraphMode (#81252)
5eec908700 : [ao] Update ModelReport class with class usage in description. (#81251)
6366c99e5b : [ao] Added Collab link for Outlier Detector ratio val choice (#81250)
9c298fff2e : [ao] Added constant channel check to Outlier Detector (#81249)
229762dcd9 : [ao] Added statistical threshold arg in Outlier Detector (#81174)
20e6b3f3b0 : Re-enable vulkan test (#81368)
b5b9db9f84 : Make `kl_div` a composite function. (#80334)
b0199c06f6 : Remove python key when setting functional tensor metadata (#81401)
8d753c8062 : [WIP] Upstream push 0627 (#80355)
87ba203ac7 : Modify D2H copy with a different dtype (#80607)
a272ff39fd : [Reusable workflows] Move codegened linux binary build/test/upload to a reusable workflow (#81044)
8926b5b9c2 : Fix typos in docs: Profiler and CUDA semantics (#80406)
7ce92d7fac : [mergebot] combine land check and job started comments (#80965)
2cb75e1579 : [BE][FSDP] Introduce `FSDPTestModel` interface (#80873)
12c30a8250 : Add linux cuda 11.7 workflows (#81089)
31142f57fc : Generalize the broadcasting interface for generating constraints and add constraints for the expand operation (#81175)
dbee37510c : constraints for to and extensions for addition and getitem (#81159)
22dc22c8a0 : constraints and tests for masked fill (#80976)
814291dd3b : extend lt and gt to handle tensors (#80925)
27db2750ba : layernorm and ne constraints + tests (#80909)
a85d1f0bcd : Fix backwards compat test by building twice (#80969)
2f5d4cf90c : Fix mypy for IterDataPipe.collate (#81275)
9506f9ec97 : [small] Formatting Changes (#81331)
dff70a5e1a : Make language std configurable. (#75519)
3c7044728b : Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)
937ca69f15 : [TorchArrow][efficiency][3/n] variadic versions of op fused /unfused inference_wrapper_run_flat (#81133)
e423354b91 : Enforce linter in vulkan code (#81390)
347b036350 : Apply ufmt linter to all py files under tools (#81285)
d3acbc821e : Nvfuser opt in for decomposition (#81134)
667607b3ab : [MPS] Reduce the number of command_buf created and improve performance (#81338)
94a8a8aa32 : [di] avoid copying optional input for get_real_inputs_from_optional_inputs_v2 when possible (#81137)
92c891448c : [xla hash update] update the pinned xla hash (#80752)
8f07b7a069 : Fix circular import error in torchgen (#81355)
ed8a830da8 : [FSDP] import ``_symbolic_trace`` only when ``torch.fx`` is enabled (#81339)
07f4dc9b2c : [torchgen] Add documentations for custom namespace support (#81362)
a4daa89861 : Serialize memory_format (#81332)
0b3ed9d57c : constraints and tests for cumsum (#80847)
614779f975 : [fx] PassResult (#81366)
3b4964230e : [JIT] Add side effects checks for ops in SchemaInfo subclass (#81002)
14c28caed9 : [JIT] Add determinism checks for ops in SchemaInfo subclass (#81000)
50ba94f5cc : [JIT] Add aliasing checks in SchemaInfo with associated tests (#80984)
aa61fdb667 : [JIT] Add argumentValue functions and is_mutable checks to SchemaInfo (#80972)
ee8261c066 : [Static Runtime] documentation update for StaticRuntime (#81066)
1810c876df : Remove noexcept from dtype (#80991)
3b00b17f64 : [docs] Updated quantization docs to show per channel support for conv1d (#81349)
d4f065d261 : Return mode object from __enter__ (#80998)
2b8c5605b6 : extend view function to accomodate node/number args, generate constraints for scalar addition and lt (#80823)
0a1cd68df7 : constraints for arange, full (#80799)
cf345fa2ed : expose gradual type inferface to HFtracer and generate constraints for gt nodes (#80744)
80f6d2e9e6 : [torchgen] Extract out schema registration logic into a function (#80780)
80a700bd1b : [caffe2] Don't copy Tensor dims during deserialization (#79471)
85144e63a9 : `matrix_exp`: Make sure `_compute_linear_combinations` result preserves dim of the input. (#81330)
6b280e880a : Update NvFuserOperatorSupport (#81311)
0b8c383089 : Modules under migration in the public binding test (#81314)
47718d54ac : Add `native_functions.yaml` to MPS rule (#81351)
893d763276 : [ao] Implemented Outlier Detection Report Generation (#80937)
ae83e44c5f : [MPS] Handle 1D inputs for NLL (#81290)
478081c698 : Revert D34113898: Multisect successfully blamed D34113898 for test or build failures (#81266)
55d1b376ea : [ao][sparsity] Vectorized WeightNormSparsifier (#80059)
3c4c7d3e6b : [Release Notes] fix bug with categorize call (#81284)
36d2c44cce : Revert "Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto" (#81122)"
b256ff6a8f : Allow torch._C to be recognized a module in torch.package (#80917)
e3a870986e : [JIT] Add may_alias in FunctionSchema with associated tests (#80918)
772999ebae : Remove libtorch-linux-xenial-cuda10_2-py3_7-gcc7-build (#81289)
d552ba3b4f : Use fabi-version=11 to ensure compatibility between gcc7 and gcc9 binaries (#81058)
d5bda29207 : [JIT] Tweak annotation extraction for py3.10 (#81334)
ed1da2a9df : [ONNX] Quantization support for quantized::cat (#79826)
1e3c6f2263 : [primTorch] Add a ref for allclose (#81003)
52a538868b : Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto" (#81122)
e608befae4 : Revert "[c10] move fexceptions to compiler_flags (#80387)"
3e1ac21c3b : [c10] move fexceptions to compiler_flags (#80387)
782f18e9b5 : [DLv2] Make graph `traverse` working with unhashable `DataPipe` (#80509)
54bdaf76d6 : [PFC] Native UCC process group for Pytorch (#79918)
268d910170 : [TorchTidy] Adding testing for size and dtype capture (#80071)
cef057038f : [Static Runtime] Update README (#81072)
e505796a2c : [Array API] Add linalg.vecdot (#70542)
56d1c75518 : Make nn.stateless correctly reset parameters if the forward pass fails (#81262)
575cdd1112 : [composite compliance] masked ops (#81199)
d321be61c0 : [ci] remove dead code related to test selection (#81163)
9f58d5d7ce : [test stats] use published test stats for sharding (#81116)
fb93c3988a : [build] Split `.cu` to improve compile times (#81193)
282de5539d : add open device registration test with cpp extensions (#80477)
f2dcb11bac : basic SymInt test for functionalization (#80418)
f84b30f790 : fix functionalization regression introduced by ProxyTorchDispatchMode, migrate testing to make_fx (#80416)
5f8c2076df : [CI] Add functorch testing shard (#81283)
773d80747c : [ONNX] Clean up unit tests, rename files and improve import style (#81141)
06710ec1b9 : [ONNX] Reland: Add quantization support to _avg_pool opset 9 and clean up (#81267)
28776c45e3 : Add ref for relu6, fixes hardshrink and improves testing of related ops (#81142)
9ee312023d : [Composite compliance testing] Refactor check_forward_ad_formula to accept Callable (#81239)
9ed76c8c89 : Add 3.10 stdlib to torch.package (#81261)
ef035d083e : Add ufmt to unify black and usort (#81157)
747b3b311d : Fix links in `torch.testing` docs (#80353)
b4e342928b : [JIT] Add mutability checks in FunctionSchema and create SchemaInfo subclass (#80734)
528ee0fa75 : Fix composite compliance testing to check for .item() calls (#81060)
d253cdd8ff : [composite compliance testing] Refactor check_backward_formula to accept Callable (#81059)
6b0651209e : [composite compliance testing] remove tree_flatten hack (#81057)
7f3677d723 : Revert "Remove split functional wrapper (#74727)"
b946e7a7f2 : [TorchArrow][efficiency][2/n] added inference_wrapper_run_flat_out fused version registration call (#81131)
3da8c909da : [nn] add `+` operator for torch.nn.Sequential to concatenate (#81170)
8740c68c41 : [primTorch] Adds contiguous and expand references (#79820)
c144a09961 : disable backwards_compat test config (#81246)
13451e393b : CPUBlas: Use opmath_type for alpha/beta scalars (#65839)
0f37d34815 : ci: Add push trigger for docker-builds (#81228)
e8bd9b531d : [viable strict] don't try to push nonexistent commit (#81226)
56dea92d97 : Fix set_requires_cuda_init (#81183)
f69768fed4 : [forward ad] Fix codegen to ignore undefined outputs (#81114)
b69a2546f4 : [forward ad] Skip some metadata checks for 0 numel tensor (#81055)
1b17444c75 : Fix formatting on cumsum.cpp (#81107)
1afb804f26 : Improve wrapper subclass detection for serialization (#81105)
33a419dbd0 : [CI][BE] Move repeated deps to `CONDA_COMMON_DEPS` variable (#77923)
af0160ce43 : [torchdynamo hash update] update the pinned torchdynamo hash (#81156)
793621984b : Disable nnc/test_aot_compile.sh (#81196)
93912b1a73 : Add __all__ to torch.distributed submodules (#80523)
4503c45553 : Remove registration of NNC backend. (#81160)
3c2199b159 : Update CUDA version in CI to 11.6.2 (#80378)
caee732aa1 : Revert "[quant][fx] Support keyword arguments for functional linear (#79095)"
d71fb40d98 : [quant][fx] Support keyword arguments for functional linear (#79095)
24af7948ca : Add prim.svd, refs.linalg.svd, and refs.linalg.svdvals (#78616)
45b67b65de : Fix handling of device (#78615)
e9a9b50f48 : Reference for linalg.vector_norm (#78350)
23c8244fb2 : Add shortcuts for refs.pow (#80219)
f3008be900 : [FSDP] Getting the parameter execution information using torch.fx (#80294)
c1c796446c : [CI] Fix concurrency for tagged pushes (#81154)
a879cb5865 : Update poi based on recent activity (#81097)
e14941ef79 : Add kwarg support for no_reentrant checkpoint (#80987)
d26516fd1b : [primTorch] Implement loss function references (#80573)
68ec793cfd : [ao] Moving the sparsity/experimental to sparsity/_experimental (#81149)
b98b9eaae5 : Revert "[ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)"
0097be4f4e : [PyTorch Edge] Extend LinearPackedParamsBase __getstate__/__setstate__ deadline in check_forward_backward_compatibility.py Allowlist (#81135)
8f562a8210 : Annotated all endif in caffe2/aten/src/ATen/native/vulkan (#81091)
23bdb570cf : Reland: Enable `dim=None` for `torch.sum` (#79881)
3f56a1b8c0 : [lint] fix actionlint linter (#81119)
4906a95618 : Fix istft default output length (#80031)
356341a3ec : [ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)
39f659c3ba : Revert "[Array API] Add linalg.vecdot (#70542)"
80bf2ea3d9 : [CI] Install vision without pep517 (#81074)
0d124fc696 : Revert "CPUBlas: Use opmath_type for alpha/beta scalars (#65839)"
d087b32149 : [BE][FSDP] Retire `_get_full_detached_param()` (#80871)
2ea215fd59 : [BE][FSDP] Sort `common_fsdp.py` imports (#80870)
f583f81b6e : [BE][FSDP] Fix that MP config not being passed to FSDP (#80869)
4d9b96a01a : [BE][FSDP] Remove unneeded `torch.cuda.synchronize()` (#80868)
5c8a9803c8 : [torchgen] Support multiple namespace in NativeFunctions.h (#79733)
ff6655defb : [ROCm] unskip external streams tests (#80922)
c9a0204ef4 : Disable functorch modes in testing's freeze_rng_state(), part 2 (#81109)
fc10a63727 : Prims+NvFuser Backend Prototype (#80591)
d3dba3c42a : Fix ModuleInfo skip logic (#80471)
8fab682e47 : [Quant][fx][bc-breaking] Do not move models to CPU in convert (#80555)
cc3126083e : Remove split functional wrapper (#74727)
516f3198d6 : Fix retains grad behavior after in-place (#79996)
e9b3bc2ead : [DataLoader] Locking lower ranks seed recipients (#81071)
bcab5257de : Expanding DataPipe to support DataFrames (#71931)
79a502f4db : Add doc string for Library.impl (#81047)
4c57cf9a8b : Register unregistered refs and add a test to check registration (#80497)
b26d664810 : Switch on TorchDynamo for PyTorch tests (#81083)
8a5d9843ff : Update ROCm base docker images to focal (ubuntu20.04) (attempt #2) (#81031)
d50f4a3c24 : Support sparse/dense_dim for Compressed Sparse tensors (#80901)
65a768f647 : [mergebot] Remove Comment Validator for Reverting (#80866)
74208a9c68 : [Array API] Add linalg.vecdot (#70542)
6a7ed56d79 : [ao] Added OutlierDetector observer insert implementation (#80880)
04ef236c0d : [primTorch] Elementwise unary ops vi (#79526)
278958686b : Added OpInfo for nn.functional.dropout3d (#81045)
4bf076e964 : Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520)
81ca2ff353 : Prevent automatic cuda init in init_rpc (#80180)
57c6bbd274 : Make TensorImpl::check_pyobj const (#81001)
135af0fe30 : Revert "Adding maximize to ASGD (#80323)"
0b8a5ca01b : Revert "Adding maximize to rprop (#80335)"
7ced3921d7 : Manually update XLA hash to the latest (#81070)
f24c94d7ae : Adding maximize to SparseAdam (#80336)
495aa9bc3a : Adding maximize to rprop (#80335)
a1fd5b4273 : Adding maximize to RMSprop (#80326)
14bd5bd6ee : Adding maximize to ASGD (#80323)
da7f7cea38 : allow contiguous inputs run into qcat_nhwc_stub when dim is last dimension (#72575)
ab8dd268a0 : Define rules for additional operations needed for HF model (#80147)
b8f268708d : [vulkan] Refactor QueryPool (#80729)
6ff28f0a9a : [vulkan] Clean up deprecated code (#80728)
a965a67492 : Revert "[Profiler] Include ActivityType from Kineto (#80750)"
2f6f7391ef : [Profiler] Include ActivityType from Kineto (#80750)
3b78c5682b : Don't implicitly convert to channels-first in MaxPool3D on CUDA (#80748)
01a0dfb7f8 : [torchdynamo hash update] update the pinned torchdynamo hash (#81076)
26582056fa : Disable functorch modes in testing's freeze_rng_state() (#81006)
b0aaefb50f : Build example_allreduce only for GLOO (#81062)
8389ccbcd8 : reinstate size and shape returning symints (#79560)
1263247395 : [torchdynamo hash update] update the pinned torchdynamo hash (#80927)
25449292a0 : Run mask test with and without nested tensor (#81008)
e454be5f0f : [MPS][BE] Introduce MPSUnaryCachedGraph (#81033)
d2c726d43c : torch.jit doc link for nvfuser readme.md (#77780)
ae6dd20ba7 : [cuDNN V8 API] (reopen 2) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (Cudnnv8 benchmark limit) (#78299)
7fd0cf5581 : [Release Notes] Add way to export result from Google Sheets to Markdown (#79911)
6d4410b8c6 : [Release Notes] Simple script to merge categories (#79910)
755861063d : Adding additional topics to align with github topics list (#79909)
770fc74e33 : [Release Notes] Add Github PR link to csv export (#79908)
ad6328ea51 : [Release Notes] Adding CSV Category Export (#78212)
62bf807113 : Always use the CommitCache, and make it a singleton (#78203)
da549f58d5 : Adding Author and Accepters information into pytorch release notes gen (#78190)
8549fafd36 : Refactoring release not script to use dataclasses and have a shorter test. (#78189)
c97ff3d51e : Update NestedTensor docs (#80963)
712b3e76ef : [onnx] Add argsort support (#80234)
31f98766c4 : [vulkan] Improve mutex management when syncing with GPU (#80959)
d2018f887c : Fixing the torch.jit.freeze docs (#81020)
39db8b3823 : [Profiler] Add Pattern that detects extra cuda copy (#80572)
69a77f77b7 : [Profiler] Define pattern matcher structure (#80108)
4cd848ecce : [composite compliance] as_strided (#80276)
1d90d6ee60 : Setup for running PyTorch tests with TorchDynamo and skips for known failing tests (#80106)
bd6bea35f8 : Update package.rst to not include hermetic claim (#81019)
e4eb95d42f : nvfuser opinfo patch test_nvfuser_extremal_values_native_layer_norm_cuda (#80440)
a2ee1a92d6 : Change cudnn incompatibility message wording (#80877)
2abdb7d390 : [jiterator] De-template launch_jitted_reduce_kernel (#80968)
15cf4c2496 : [jiterator] Reduce templating in jitted_gpu_kernel_impl (#80967)
dad071d8fe : [nvFuser] Add real and imag to nvfuser and its python frontend (#79824)
6f1d99b79f : update nn.init doc to reflect the no_grad (#80882)
bd75b2fea1 : Add ref for nn.functional.prelu (#79768)
1a71b83e18 : Increase stack level for get_attr warning (#81041)
8684fdb4b3 : [Vulkan] Implement GLU operator (#80910)
1ffde76790 : [Vulkan] Implement Layernorm operator (#80980)
7eb1a6a965 : [Vulkan] Implement Batchnorm operator (#80510)
d266256621 : Support compressed sparse tensors with dense dimensions (#80565)
beb98676ba : Correct cbrt implementation (#80443)
c0e15fe3f4 : [c10d] Change pg/process_group to self (#80978)
7591444cf1 : [caffe2] Fix shadowed variable warnings in `clang` (#80902)
7d0975f453 : Change AllReduceCommHook to accept intrusive_ptr (#80975)
71ee384924 : [ROCm] Use torch._C._cuda_getArchFlags to get list of gfx archs pytorch was built for (#80498)
98e4524dcc : [PyTorch Edge] Add OwnedOrBorrowedVector for QNNPack BCSR Indices/Values (#80476)
5c12cd224f : [PyTorch Edge] Add qnnpack bcsr matrix unpacking and use unpacking in Linear module (#80475)
eaf817df3a : [PyTorch Edge] Add serialization/deserialization of Sparse Quantize Linear Packed Params (#80474)
ab09f34622 : [FSDP] Fix `full_optim_state_dict()` hang (#80712)
523b081a64 : [PyTorch Edge] Remove Original Weight Tensor from QNNPack Sparse Quantized Linear Packed Params (#80473)
d96d186537 : [Expanded Weights] fix unbatched support issue (#80891)
e51c63da65 : [composite compliance] preserve stride correctly for non-contiguous tensor with requires_grad=True (#81035)
120987ffeb : Fix macos public bindings failures (#80970)
fcf027c49d : ci: Remove extra docker pull (#81038)
91220799e8 : Added logic for pushing docker images to ghcr.io (#80950)
b0b24b4285 : [MPS] Fix LSTM batch_first output transposed (#80597)
2beb57a823 : Add `-Werror=non-virtual-dtor` (reland) (#81012)
07e41652c4 : [ci] simplify sccache stats uploading (#80806)
b7046e9b7f : Stopped ProxyTensor from turning aten::lift tensors into proxy objects (#81024)
6ee54a8780 : fix weight norm backward bug on CPU when OMP_NUM_THREADS <= 2 (#80930)
7234b0271b : Revert "Test xla llvm_build_cache patch (#80178)"
3ca309c4b8 : Correctly setup ancestors on explicit push mode. (#80995)
b42c50da26 : [Profiler] Eliminate transitive include of profiler implementation headers. (#80564)
f98bfce6a1 : Add const modifier for lazy shape inference for boolean logic ops (#79999)
665fd30a5a : Load CPU MLModel first, and configured MLModel async (#80941)
2458b3cd5f : [MPS] Add argmin (#80828)
457e400ac3 : Test xla llvm_build_cache patch (#80178)
74877943b8 : Don't invoke mode as overloaded argument in torch dispatch (#80992)
74b41af2ad : [caffe2] Use `arch_deps` instead of host info for arch-specific deps (#80814)
e08026d4d4 : Use miopen_LIBRARIES and rccl_LIBRARIES directly, when they are valid target (#80446)
91b0250606 : Remove dead code from torch.ops torch function handling (#80993)
13de7275ac : [caffe2/perfkernels] Avoid `native.host_info()` in build files (#80812)
e89b1991c4 : Detect ProxyTensor layering violations (#80994)
0491c10a63 : Revert "Add -Werror=non-virtual-dtor (#80584)"
0c5fdfd95f : Revert "Revert "[FSDP Optim State] Remove checkpoint prefix (#80480)"" (#80936)
814cccc968 : Revert "Automated submodule update: kineto (#79925)"
6b5bab17d6 : Revert "Update ROCm base docker images to focal (ubuntu20.04) (#79596)"
394aca9a26 : [vulkan] fix some broken tests in vulkan_api_test (#80962)
8302bbe408 : Update ROCm base docker images to focal (ubuntu20.04) (#79596)
daf00e843a : [ao][sparsity] Bug Fix: data norm sparsifier not working with 1D tensors/parameters (#80465)
cb630c775e : Add multithreading test to model compare binary (#80958)
ec594dd305 : [ao][sparsity] Bug fix: data not correctly attached to the sparsifier (#80394)
45cb8199df : [FX] Fix typo in user stack walk (#80830)
8b419f8918 : Properly import Parameter and DataParallel in torch/nn/__init__.py (#80955)
be14dca481 : Remove unnecessary debug asserts (#80971)
1d4dbdcefe : [mergebot] Fix Land Checks validation (#80907)
d9ff56ccc0 : python bindings for create_metric_report (#79679)
4b54a946a8 : add MetricReport API (#79678)
5e9136c24c : fix two corner cases of torch.arange (#80758)
9d20af5060 : remove overly restrictive checks for cudagraph (#80881)
393f7f6ad7 : add layout to slow path (#80429)
f2c8557521 : Revert "Make `kl_div` a composite function. (#80334)"
7705175f83 : CPUBlas: Use opmath_type for alpha/beta scalars (#65839)
cc0f1cc3d3 : Automated submodule update: kineto (#79925)
38169c2287 : [Static Runtime] Fix precision error in test cases (#80935)
255649984c : Revert "[MPS] Add argmin (#80828)"
46aecea516 : [JIT] Document runtime flags (#80900)
c48e964a04 : [MPS] Add huber loss (#80163)
37ebcc8a80 : [MPS] Add argmin (#80828)
7f37b1b3e2 : Disallow creating a library with prim namespace (#80913)
960758b0b7 : fix overload ambiguity with functional ops; fix _foreach op grouping (#80556)
ce0786add2 : fx quant: fix warning in util function when cloning tensors (#80883)
12382f0a38 : Make addcmul and addcdiv support different dtypes (#74234)
736fb7d22c : Add refs::{isposinf, isneginf} (#78655)
8745118ccb : [MPS] Add softplus backward (#79873)
25e5a3e684 : Revert "[jiterator] Reduce templating in jitted_gpu_kernel_impl (#80103)"
b46f03577f : Revert "[jiterator] De-template launch_jitted_reduce_kernel (#80138)"
f608db5ff0 : [ONNX] Moved uninitialized_optional test cases into runtime test and parameterized left test cases (#80145)
5436134f32 : [MPS] Move the View ops to a separate file and reduce the number of graphs created (#80491)
6305a85c0c : Enable ROCm CI for trunk (#80920)
fe361dede4 : Revert "[FSDP Optim State] Remove checkpoint prefix (#80480)"
b349d15907 : [build] fix compiling with clang13 (#80916)
45f77189c1 : fix the invalid configuration argument error when running layer norm backward (#80893)
1ad7ef3f21 : Add check for cuda lazy init (#80912)
04c50fec1c : [FSDP Optim State] Remove checkpoint prefix (#80480)
fefdad6137 : [Static Runtime] test case for staticRuntime::runAsync() API (#80407)
9402219a36 : Move serialize_module() out of OSS graph_manipulation.py to internal (#80785)
6d03ea65ba : improve sort multi-core perf by adjusting grain_size w.r.t. dim_size (#74897)
f36a5d23ce : Prevent potential memory leaks in CoreML backend (#79928)
b603860c1d : Back out "[profiling] Adding targets file for test_mobile_profiler" (#80789)
280f4704b7 : [torch.fx] Check node type before fetching .users (#80166)
7af5a27dfe : [vulkan] Refactor Descriptor::Pool (#80727)
861dfeec68 : [vulkan] Refactor Descriptor::Set (#80726)
f70c346bf1 : [vulkan] Add synchronization to Adapter and Context (#80725)
ed931a4d8d : [vulkan] Remove references to old command struct (#80724)
c013639734 : [vulkan] Add back reshape using new APIs (#80723)
ab8cc6ce51 : [vulkan] Migrate to new command dispatch interface for all ops (#80722)
5c0e80144f : [vulkan] Add alternative command dispatch interface to Context (#80721)
caa14951bb : [vulkan] Refactor Command (#80720)
a2846135e7 : [vulkan] Refactor vTensor to only maintain texture representation (#80719)
b4cdefdbef : [vulkan] vTensor class cleanup (#80718)
50bbd9c042 : [vulkan] Remove old Resource Struct (#80717)
5a0b4717e7 : [vulkan] Remove references to old Resource struct (#80716)
a49dfd8378 : [vulkan] Replace `resource().pool.uniform(block).object` with `UniformParamsBuffer` (#80708)
9ca561cbd4 : Add prims::{real, imag}, refs::{real, imag}, remove prims::is_infinite (#80148)
3f5b33a4f8 : [vulkan] Integrate refactored memory mapping, remove Tensor Future pattern (#80707)
b6076471c9 : [vulkan] Integrate refactored fence (#80706)
b52d51c56e : [vulkan] Integrate refactored buffer and image (#80705)
47fc108bc3 : [vulkan] Add refactored Resource (#80704)
d275c3f4af : [vulkan] Remove ThreadContext (#80703)
d80a31ed2c : [vulkan] Move ownership of Shader and Pipeline caches to Adapter (#80702)
e6672c2eb9 : [vulkan] Refactor Adapter to have PhysicalDevice hold physical device properties (#80701)
d6def7a8e1 : [vulkan] Refactor Shader and Pipeline (#80700)
3a295e9220 : [vulkan] Remove unused code for Winograd convolutions (#80699)
e0eeb06ec6 : Consolidate the naming of named_parameter and state_dict for CheckpointWrapper (#80089)
0071008927 : [Profiler] Add heuristic to rank events base on computed metrics (#80094)
b62209f047 : [Prims] Unbreak CUDA lazy init (#80899)
d03f989d53 : [ROCm] Load ROCm if Torch is used as a dependency (#80469)
6bf4c662c8 : Handling the error codes of hipGetDeviceCount (#80405)
5b493ba18b : [quant] Refactor quantize clamping into float_to_apot util function (#80885)
be10787988 : Fix the derivative of `acosh` for complex numbers. (#80841)
769df7430d : [lint] create a workflow consistency linter (#80200)
769b446f18 : [test stats] record invoking file in test cases (#80792)
9a12aa6cad : Add cached nvFuser's fusion creation for torch._prims.executor (#80525)
56f4b69c4b : [Expanded Weights] Fix instance norm (#79800)
8fc8713f78 : Fixed docstring typo (#80879)
54217e695d : [Quant] Add fast path of qmean/qstd for quantized CPU (reopen #70172) (#80579)
4c279994fd : Fix `Module.share_memory` error (#80843)
f76bb88205 : fix docstring of PostLocalSGDOptimizer (#80855)
f767805503 : Disable ROCM builds/tests (#80862)
c42c594315 : Fix rrelu on CUDA (#80434)
eb0889cf7d : Add support for multiple inputs to out_wrapper and strict dtype checking (#80601)
877180e1af : Revert "Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack. (#78154)"
c78bdd7ba0 : [torchdynamo hash update] update the pinned torchdynamo hash (#80852)
fce5560e06 : [Convolution] Fix derivatives of convolution overrideable backward (#80840)
5af48581b5 : In order to make pytorch headers consumable from cpp20 code bases, … (#79985)
0a5123a752 : Revert "Revert "Add support for directly passing symint to empty"" (#79954)
3a6c1dc7c7 : Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack. (#78154)
828c787ea9 : Make `kl_div` a composite function. (#80334)
7670035862 : Add -Werror=non-virtual-dtor (#80584)
081b56fd41 : Improve readability of cuda_lazy_init (#80788)
c54aabf3eb : Exclude Fake dispatch key during tensor construction (#80782)
76cff18242 : [MPS] Add test consistency from OpInfo based tests from PR 78504 (#79532)
b4ed13ea0f : Update docstring for scale_factor in torch.nn.functional.interpolate. (#80807)
482e53c655 : [torchdynamo hash update] update the pinned torchdynamo hash (#80825)
0e3953fc52 : MPS: Fix handling of 1D tensors in linear backward (#80759)
caa6ef15a2 : [torchdynamo hash update] update the pinned torchdynamo hash (#80801)
b00e4e8017 : [ONNX] Convert aten numpy_T to ONNX transpose (#79269)
88807bcf40 : [torchdynamo hash update] update the pinned torchdynamo hash (#80790)
667fdd72c8 : Revert "Revert [ONNX] Enable data propagation from ONNX to simplify current shape inference" (#80730)
8f9e3a2030 : Do not use `thrust::lower_bound` on device (#80746)
e1cfac42b2 : [jiterator] De-template launch_jitted_reduce_kernel (#80138)
df665b1a9d : [jiterator] Reduce templating in jitted_gpu_kernel_impl (#80103)
19f3d4d795 : Expose linalg.solve_ex (#80073)
6fb6dc42ff : Improve heuristics for linalg_lu_solve when B is a matrix (#79838)
37a5819665 : Make slogdet, linalg.sloget and logdet support metatensors (#79742)
b744e1c8ef : Add scatter support for view operations (#79939)
31952b56eb : Transform constraints to z3 constraints which is the final step (#80110)
c93ceef658 : Wrap static initializers in ifdef (#80590)
7846953316 : [ONNX] Constant folding to not add folded constant as initializer (#79552)
9e867a7f3b : [torchdynamo hash update] update the pinned torchdynamo hash (#80751)
fe73528a90 : minor fix for shared build (#80739)
3964b9aebc : rebase via comment - ghstack commit author fix (#80747)
ffae7308c9 : Enable test: distributed/algorithms/quantization/test_quantization (#80097)
d8cd15b00c : Revert "Allow returned tensor impls in fallback (#80545)"
796a41a00b : automate hash updates - remove vision hash update (#80740)
5a4c9e8394 : Add spdiags sparse matrix initialization (#78439)
e5162dcfa7 : [ao] Added framework for ModelReport Outlier Detector (#80743)
805120ab57 : See if we can elide TORCH_API from inline functions. (#80609)
bc8524eb33 : [torchdynamo hash update] update the pinned torchdynamo hash (#80713)
e266bea793 : Add DTYPE to GPU_ATOMIC_INTEGER macro (#80547)
7002621a4f : [chalf, jiterate] acos, acosh, asinh, atanh (#80030)
d7e4520d1d : Allow returned tensor impls in fallback (#80545)
bdee679d3a : [JIT] Refactor SchemaCheckMode to check for mutated metadata (#79991)
e40d95416f : [LT] fix lazy tensor tutorial and test for cpu only run (#79744)
56e3bc5215 : Revert "Add spdiags sparse matrix initialization (#78439)"
a620512d4b : Support non-standard bools in CUDA masked_scatter (#79391)
ae0c521b3b : Fix remaining CPU operators for non-standard bools (#79390)
fc82870d92 : [xla hash update] update the pinned xla hash (#80414)
d80d0a1693 : first step of constraint transformation (#80102)
ea071e56b1 : gradual type constraint generation (#80095)
cfb2034b65 : Add spdiags sparse matrix initialization (#78439)
45ae244086 : [torch.package][doc] PackageExporter does not have file_structure (#79948)
421f04dd02 : Only allow numbers as tensors if operator was explicitly allowlisted so (#80587)
d6cd130e0b : [ci] remove errant comment (#80711)
c9aa74a37f : [profiling] Adding targets file for test_mobile_profiler (#80351)
e72dfd992e : Revert "[jiterator] Reduce templating in jitted_gpu_kernel_impl (#80103)"
02e0c7caa6 : Revert "[jiterator] De-template launch_jitted_reduce_kernel (#80138)"
1947175495 : ci: Skip docker push for most builds (#80402)
4dd646d036 : [test stats] flush stdout before uploading to rockset (#80487)
b1943e01e2 : Revert "[MPS] Add test consistency from OpInfo based tests from PR 78504 (#79532)"
4e12300e4e : [ci] upload test stats to s3 instead of rockset directly (#80593)
682c0d2615 : Use segment/scatter_reduce to support masked reductions on sparse CSR tensors (mean, amax, amin) (fp only) (#78918)
c8d64ba5ec : Allow register float16 weight_norm on cpu and speed up test (#80600)
adcf9647dc : Extend `torch.[arange|linspace]` to float16 type (#80492)
ae6f07e7d5 : [MPS] Fix std/var cache issue (#80502)
f668b7ecb0 : Add integer support to index_reduce (#80464)
b64096a264 : Revert "Add prelu op and module for quantized CPU backend (#73491)"
1454515253 : Revert "Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)"
c980fc3d3c : [MPS] Add glu (#79866)
3a6d6bc3cc : Add prelu op and module for quantized CPU backend (#73491)
127ac09455 : Revert "[ONNX] Enable data propagation from ONNX to simplify current shape inference (#75307)"
bee2d485d9 : [ONNX] Enable data propagation from ONNX to simplify current shape inference (#75307)
f988aa2b3f : Enable Intel® VTune™ Profiler's Instrumentation and Tracing Technology APIs (ITT) to PyTorch (#63289)
4c04f6da74 : [jit] fix python enumerate with start kwarg (#80585)
184a065ba7 : Revert "Add support for multiple inputs to out_wrapper and strict dtype checking (#79941)"
cce4766c80 : [torchdynamo hash update] update the pinned torchdynamo hash (#80578)
dc7066a8f0 : Add support for multiple inputs to out_wrapper and strict dtype checking (#79941)
2d100eaa40 : Correct meta behaviour of prims.resize (#80516)
c71886e048 : [MPS] Add test consistency from OpInfo based tests from PR 78504 (#79532)
f667aaed1d : Revert "Added serialization to postlocal_SGD. (#80435)"
1a7e560ade : [quant] Refactor quantization tracer to a separate file (#80268)
b4e491798c : Avoid temporary buffers for tensors with torch.save. (#80404)
fa6b6842e1 : [ao][sparsity] removing leading '.' from fqn in utils (#79774)
3f1dc7ec00 : [quant] Create default custom modules for LSTM and MHA (#79960)
dfdf4e79df : Added serialization to postlocal_SGD. (#80435)
560a8cc530 : Revert "Add fast path of qmean/qstd for quantized CPU (#70172)"
39bd81a11f : Add clang -Wconstant-conversion (#80461)
1058b47562 : Weak-ref-ify MetaConverter and FakeTensorConverter (#80544)
8ea033a009 : [profiling] API For profiling backend memory events (#80350)
ced2f2965c : [Static Runtime] support forked subgraph execution on parent graph's executor (#80381)
b62d39eda0 : Consolidate all python targets in the tools folder (#80408)
70e86b4562 : [test_shape_ops] Increase system memory requirement (#80369)
c44317704a : [Quant][fx] Add default configs for fixed qparams ops (#80184)
6dc32a93e9 : [GHA] Remove new lines from PR_BODY too to appease batch env var copying (#80548)
cfe8dce814 : [Bootcamp] Use Apple's Accelerate framework for blas acceleration (#80449)
4a1309035e : [AutoAccept][Codemod][FBSourceBuckFormatLinter] Daily `arc lint --take BUCKFORMAT` (#80468)
17104d3d7f : [Quant][fx][bc-breaking] Replace is_reference with convert_to_reference (#80091)
d579838eb5 : [torch][fx] Add ignore_parameters_and_buffers kwarg to FxGraphDrawer (#79982)
edf76cd9c2 : Move qnnpack to shared BUCK build (#80260)
c1fa9fdff9 : Add fast path of qmean/qstd for quantized CPU (#70172)
ac5a94789f : Refactor lift_subgraph_as_module as a fx.passes.util function (#80292)
da61ec2a4a : [CI] improve ios simulator test (#80459)
00f651811a : Interpreter for decomposing aten -> prims (#79989)
92e1710dc0 : Add ComplexDouble scalar creation bindings to nvFuser's Python API (#80522)
d7847ed23e : Add integer support to scatter_reduce (#80324)
dc7d3fd4bb : functionalization: fix _unsafe_view debug asserts (#80526)
331c0c1803 : [DataLoader] Close open in DataPipe streams on best effort basis (#78952)
2f146f1d39 : [jiterator] De-template launch_jitted_reduce_kernel (#80138)
74fb6ee4c5 : [primTorch] support one tensor and two scalars in _prims.where (#80146)
66f66faccf : [jiterator] Reduce templating in jitted_gpu_kernel_impl (#80103)
e3599b0344 : Revert "Add objective-c language support in CMake (#80432)"
725de4fb94 : Increase atol for test_noncontiguous_samples_nn_functional_conv_transpose3d_cuda_float32 (#80518)
de0150d898 : Revert "Extract setting up ec2 linux and checkout to be composite actions (#80462)"
655fc51f07 : [GH1] Relanding #80064 to erase double messaging as it was overwritten by mistake (#80550)
fe8bfef8a6 : Extract setting up ec2 linux and checkout to be composite actions (#80462)
a34301064a : Add integer support for gpuAtomicMin/Max/Mul (#80320)
d1d2687d34 : [ONNX] Fix potentially unbound variables (#79789)
35d97f7b03 : [ci] temporarily disable ROCm distributed tests (#80530)
58532256e9 : Revert "Add __all__ for torch.distributed and fx modules (#80460)"
08795f9afc : Add _reduce_scatter_base to ProcessGroupWrapper. (#79633)
63ef2a03e5 : torch.special.scaled_modified_bessel_k0 (#78900)
c28315eab8 : [primtorch] add reference for clamp_min/clamp_max (#79821)
0922cc024e : Added support for `expand` in LazyTensor shape inference (#77830)
853247e585 : [torchdynamo hash update] update the pinned torchdynamo hash (#80485)
3d9cef8c98 : Clone tensor to write in ShardedTensor checkpoint (#79400)
57f001f35a : Don't error if _warned_capturable_if_run_uncaptured not set (#80345)
5d40c3d5c8 : Add __all__ for torch.distributed and fx modules (#80460)
5943aaa0c4 : [MPS] Add logical ops (#80216)
182870f4a7 : Add objective-c language support in CMake (#80432)
d32ab80c32 : Update buck_setup.sh (#80467)
5fc2d45a3a : Remove unneeded TODO (#80453)
7e34edf12d : adding sym_size override (#80357)
7f8e852dff : [Static Runtime] Support Futures in Static Runtime Engine (#80162)
11f7463309 : [torch] Add more functions to __init__.pyi.in for torch._C for Node and Value (#79654)
e487ba7333 : Add nlohmann/json submodule (#80322)
33fecf057f : [IOS] Update Cocoapods for 1.12 release (#80472)
b9d516138b : [PyTorch] Add test_modules test for TransformerEncoderLayer fast path (#78268)
3dec9fd09f : [ONNX] Fix `hardshrink` and `softshrink` output's shape (#79695)
c4da23ed1b : [MPS] Add flip (#80214)
a48f3059b7 : Corrected comments in fsdp (#80456)
f62d8b2a0f : ProcessGroupWrapper log full rank fingerprint mismatches (#79901)
5070f5d18f : [quant] Implement APoT fake quantization (#79845)
fca1523604 : implement numel and tests for nested tensor (#80424)
f70bf13c6e : Disable doxygen / breathe / exhale generation of C++ API docs (#80451)
b8e50f512f : [DataPipe] Count number of successful yields for IterDataPipe (#79657)
7850a328b4 : Revert "Revert "parse pysymints to IValues (#80066)"" (#80419)
f6a45f7984 : [distributed] Make DDP work with python process group (#79176)
3cc48184a5 : Revert "[ci] disable docs build (#80313)"
14a7cf79c1 : Add __all__ to torch.distributed and tensorboard submodules (#80444)
c8943f831e : ci: Move ios-12-5-1-x86-64-coreml to periodic (#80455)
0ca9888000 : Correct the math of repeat_backward in the function comment (#80286)
4a245d4a78 : [BE] Rectify `C10_UNUSED_DISPATCH_CUDA_WORKAROUND` (#80423)
9e86796fe3 : simple c10 implementation for std::call_once (#78051)
e8d3c77a78 : [torchdynamo hash update] update the pinned torchdynamo hash (#80309)
e1b15b7a04 : [MPS] add `aten::normal.Tensor_float` `aten::normal.float_Tensor` `aten::normal.Tensor_Tensor` (#80297)
db52e4b7d9 : Bugfix/weakref (#80139)
ea987086fc : Fix test_gradcheck_forward_ad_respects_requires_grad for slow gradcheck (#80401)
12dc410ff2 : Fix nvFuser's where(tensor, python_scalar, tensor) type promotion (#80347)
5fc209ed11 : FSDP communication hook interface for NO_SHARD strategy (#79833)
2a09e95169 : Register nested tensor linear kernel (#80397)
2afbc998cb : Use AutogradNestedTensor key for nested tensors (#80396)
e88dadd5eb : Fix FSDP when not all outputs get gradient in backward (#80245)
fd8a15bd75 : [ci] disable docs build (#80313)
cb5ef130b6 : [ao][sparsity] Fixing failing internal pruner tests (#80111)
615dd25088 : Made Proxy Tensor Mode also trace overloads (#80403)
8aedd8fb25 : [Quant][fx] Hide equalization_config from prepare APIs (#80164)
8a45ef23f5 : [5] move XNNPACK to shared BUCK build (#80209)
17ee5de070 : Mark underlying type as C10_UNUSED (#80415)
d4f8a6d05b : [ao] Implemented report generation for InputWeightEqualization Detector (#80191)
08c32d7f49 : [ci] fix viable/strict promotion (#80399)
602c38ff63 : Revert "torch.special.gamma (#78904)"
5da776dd08 : [Resubmission] fix mul_out CUDA config for COO tensors (#80254)
7549f901a5 : `__launch_bounds__` for `torch.mode` with CUDA 11.7 (#79710)
ed7464f935 : Update README.md (#80190)
4d80b2dcde : [GHA] remove newlines from COMMIT_MESSAGES to avoid batch annoyance (#80385)
2727d88569 : [quant] Modify APoT global methods to align with uniform API (#80364)
4f94c8e039 : [ao] Implemented InputWeightEqualization Detector observer insertion (#79962)
2acd2317b8 : [mergebot] Create land time check options (#77943)
79c2dfcd8e : [c10d] Make send/recv as custom ops (#79779)
3ec9d34f21 : Fix distributed store to use add for the counter of DL shared seed (#80348)
3d218e1c87 : Raise warning for unpicklable local function (#547) (#80232)
f68f77610a : Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376)
7394de4e1e : Add a note on CUDA 11.6 (#80363)
27fc9fcd13 : More stable computation of KL between two Bernoulli distributions (#79944)
be71f663f6 : StringUtils: Avoid unnecessary allocation in ReplaceAll (#79915)
3bcc19b29a : Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367)
02730c1e79 : Add merge rules for distributions per https://pytorch.org/docs/stable/community/persons_of_interest.html (#80344)
9e3677f85d : Add support for BSR <-> Strided Conversion (#80354)
8f7d4fa157 : [fix] primTorch test: atanh (#80365)
7a8d6c9b1d : [ONNX] Fix onnx logical functions' dtype (#79339)
f72d867b70 : Add torch.nn.functional.threshold ref (#79808)
ab8797d69b : torch.special.spherical_bessel_j0 (#78912)
0468dafeaa : Fix Tensor.scatter_add_ doc (#80223)
4e33c8c6bb : switched over to using faketensor in proxytensor (#79634)
f563f25efd : torch.special.gamma (#78904)
d67ce755ad : [ci] only upload summarized test stats for PRs (#80295)
8878bc7bfe : Revert "[ci] only upload summarized test stats for PRs (#80295)"
f3a5e364a0 : Add dynamo test configuration (#80342)
5d595182c7 : [ci] only upload summarized test stats for PRs (#80295)
f2587849db : [chalf] roll, fftshift, ifftshift (#79970)
471c833043 : Fix div(scalar, tensor) case for nvFuser (#80341)
cde365a7cd : Validate Sparse Compressed tensor inputs (#79385)
9db3c517de : Add __all__ for torch.nn.modules, torch.distributed.elastic, torch.nn.utils submodules (#80240)
2258db5da3 : TensorImpl::size_custom to support NestedTensor.size (#80236)
4300f64ad0 : [composite compliance] embedding_bag (#79986)
590d3e5774 : [ci] skip slow gradcheck jobs (#80311)
7794254851 : [Checkpoint Wrapper] Fix assert (#80283)
1b18c2e93c : [composite compliance] fill (#80277)
777c12f2df : [quant] Modify APoT nonuniform quantization workflow (#80075)
71d9592a72 : Only sync CUDA if the operation is run on GPU (#80328)
3b8589ac44 : Copy Tensor for tests to avoid in-place transform modifying the original tensor (#80331)
1d0d506e97 : Add Div reference (#77936)
09f79e94ac : support nested_tensor * scalar (#80284)
0eee81aaad : [quant] Modify APoT qparam quantization levels calculation (#80303)
044c9a650b : Avoid unnecessary copy of rvalue ref shared ptr in SharedPtrWrapper (#79697)
14813536a7 : Disable AVX512 CPU dispatch by default (#80253)
238eaf2094 : [c10d] Make barrier as a custom op (#79777)
dcd17357a4 : [c10d] Make alltoall as a custom op (#79691)
799d71378c : cmake: Fix variable typo for USE_SYSTEM_PYBIND11. (#80272)
c2d395cf8e : functionalization <> LTC integration (take 3) (#80251)
33761c80d2 : [ONNX] Move no runtime tests out of runtime test cases (#80078)
8a5cc940e3 : [composite compliance] quantile, nanquantile (#80024)
4331bc436e : Ensure torch._refs registrations also get triggered on import torch (#80270)
443db9b58e : Introduce Z3 types and utility functions for constraint generation (#80084)
f11cce309b : [MPS] Add equal operator (#80195)
ee0e836189 : [FX] OSS is_fx_tracing (#80255)
417677bf62 : `permute` for COO sparse tensors (#79707)
3764ffef12 : [torchdynamo hash update] update the pinned torchdynamo hash (#80262)
2c43876f64 : AT_DISPATCH: Expose switch-case like macro syntax (#79978)
0d5bc54114 : Fix interpretation torch -> torch._refs in case of nested torch calls under TorchRefsMode (#80135)
fc0faa2cf6 : Support nested_tensor.bmm (#80224)
06f874e276 : [MPS] Fix binary ops between int32 tensor with int64 scalar (#80220)
b35c033c91 : Update Dockerfile (#80258)
972a209284 : [bazel] Add --config=shell for easier debugging (#79350)
02093da36c : Autogen native_batch_norm and native_batch_norm_backward (#79637)
fdd3e20935 : Revert "[MPS] Fix binary ops between int32 tensor with int64 scalar (#80220)"
e98e7fe428 : [4] move pt_operator_library to shared BUCK file (#80170)
bda04e9f5e : Add __all__ for torch.optim and torch.nn.modules modules (#80237)
84c0a308a1 : [c10d] Make scatter as a custom op (#79688)
9d6e3f81d4 : [c10d] Make gather as a custom op (#79687)
3367e632b2 : [c10d] Make reduce as a custom op (#79686)
b868cd48e4 : move run_android_tests.yaml to a reusable workflow (#80227)
80b50dfa3a : [c10d] Make reduce_scatter as a custom op (#79683)
b3ca3638be : torch.special.scaled_modified_bessel_k1 (#78901)
e33f5c5eb2 : Add gradual typing constraint definitions (#79912)
3a1e3e67c5 : [Profiler] Add queue depth computation (#79993)
8cfb8f94a0 : Remove flaky test and replace with Mocking test (#80159)
fc6b645fe2 : Prevent out of bounds access to null LTC operands (#80060)
3359af9390 : [c10d] Make allgather as a custom op (#79669)
bfc2a4355a : Remove obsolete code from build.sh (#80199)
0322ecc3fd : Revert "parse pysymints to IValues (#80066)"
c1d11f40c9 : [FakeTensor] Use the device of the meta tensor for fallback kernel (#80193)
a6556efd5c : [MPS] Fix binary ops between int32 tensor with int64 scalar (#80220)
752c06e0e1 : FX graph partitioner and fuser (#79439)
86682caf93 : [ONNX Export] Use `half_pixel` instead of `pytorch_half_pixel`. (#80003)
9dd16c1885 : [GHA] make commit hash pins merge rules more specific (#80233)
a47cc18254 : remove unnecessary tuple check on tensor types (#79896)
de7219e8a7 : Use generators with all/any in torch/optim (#78142)
ff44bfa1ea : [MPS] Add L1 loss test (#80010)
fcdaf35114 : Revert "Add validation for mapper function in datapipes with `input_col` (#79344)"
375668cd96 : Remove overly restrictive assert in adam (#80222)
0308609b41 : [quant] Quantizable documentation (#79957)
524d181267 : [ao][sparsity] Implemented state_dict() and load_state_dict() functions (#79883)
3bcec850e5 : [quant] Add QuantizedMHA class (#79956)
af4e2b2c42 : [ao][sparsity] Implemented the step() function (#79822)
70b7bca423 : [ao][sparsity] Base scheduler class for Data Schedulers (#79817)
b00448df6b : [primTorch] asinh, atanh (#80210)
2d7d5a75aa : Cleanup buck_setup.sh a bit (#80198)
35268bdc2a : Revert "[FakeTensor] Use the device of the meta tensor for fallback kernel (#80193)"
845021db2c : [ao] Adds framework for InputWeightEqualization Detector (#79916)
54a1cc5246 : Support softmax(nested tensor) (#80179)
79ba65c0f2 : Revert "Raise warning for unpickable local function (#80140)"
42a2359612 : Add forward AD for linalg.det and simplify its backward (#79487)
4b75b7d3c1 : Raise warning for unpickable local function (#80140)
3afc802c5a : [Static Runtime] Add Metadata to ProcessedNode depending upon the op type (#79961)
bab1ea8592 : [Vulkan] Optimize LSTM operator with pre-packing (#79702)
6f9460bd54 : Setting validation flag for Distributions tests to work with TorchDynamo (#80081)
addeb1ed5e : [GHA] Add warning when S3 stats for sharding aren't found (#80176)
93e70c5973 : [FakeTensor] Use the device of the meta tensor for fallback kernel (#80193)
18b6ce38b0 : [torchdynamo hash update] update the pinned torchdynamo hash (#80112)
219692d2a4 : [xla hash update] update the pinned xla hash (#80196)
55aa0b1a84 : sync pt_ops.bzl (#80194)
ff5a588e6e : Port cholesky to structured kernels (#79300)
7bf1b1dc31 : switch computation to computationPtr (#79491)
ea5299fca3 : Make linalg_det function structured (#79486)
7e54959a8a : Add support for indexing cuda tensors with cpu (#80115)
648224dd80 : Use streams for import_ir_module for pickle case (#80131)
72e40d2bc7 : BUG: Evade segfault by throwing a RuntimeError for `nn.ChannelShuffle` and empty input tensors (#77029)
5029a91f7b : [ci] delete JOB_BASE_NAME (#80046)
8c0796e57f : Use cub::BlockRadixSort to improve medium length sort performance (#79628)
ca395bc099 : [ci] don't mutate BUILD_ENVIRONMENT in test.sh (#80144)
ffdc5eebc7 : [ao][docs] tests for quantization docs (#79923)
da33c93169 : [ONNX] Clean up `onnx_supported_ops` (#79424)
a732887a2a : [JIT] Add checks for outputs aliasing unexpectedly, implement logging and flesh out test suite (#79904)
f749f86fee : Add nested tensor metadata nested_stride then use it in unbind, select (#79831)
efd24ddc89 : [ci] add rocm build to pull (#80149)
f532b3a619 : parse pysymints to IValues (#80066)
b3308e21bf : torch.special.airy_ai (#78902)
787ac4edf8 : Add validation for mapper function in datapipes with `input_col` (#79344)
848af37209 : Debug small ACC subgraphs elimination (#80117)
438142a599 : [ONNX] Update onnx submodule to 1.12 (#79585)
817eb94ff4 : Speed up module constructor by avoiding module.__setattr__ (#77098)
0a06bf89db : fix up trymerge (#80100)
bcc02769be : Check for contiguous well-formed mask (#79927)
e282f5fe97 : [GHF] Update commit body generation for GHF PRs (#80105)
736bbe1ec7 : [3] move aten targets to shared buck file (#79966)
0ee31ce8c8 : Revert "Use cub::BlockRadixSort to improve medium length sort performance"
67a5d0bf40 : Use cub::BlockRadixSort to improve medium length sort performance
072311bb28 : Enable torch._prims.amax/amin for nvFuser executor (#80070)
577e90ae9b : Improve error message for missing ops (#80005)
e5841bafbd : [c10d] Make allreduce as a custom op
08820cb030 : [xla hash update] update the pinned xla hash (#80113)
3507bee7d1 : Update buck_setup.sh (#80116)
9cbc692ba8 : [ao][sparsity] adding type hints to sparsifier and utils
5a559a547d : [FX] fix split_util by using getattr_recursive instead of getattr (#80011)
9f29d09b3b : Revert "[ci] delete JOB_BASE_NAME"
44ff6be35a : Fix backward of binary_cross_entropy_with_logits
f54e7b4ad6 : More forward AD formulas
38ee99b382 : Strengthening unit test cases for queue depth calculation
14eadf937b : Enable test: test/distributed/algorithms/ddp_comm_hooks/test_ddp_hooks.py
655419fc59 : [ao][sparsity] removing old sparsity API and updating tests
f2573944e0 : [quant] Add QuantizedLSTM class
a4080240e9 : [ci] delete JOB_BASE_NAME
c2e9f7a8eb : update sccache_bucket in binary build (#80014)
afdd83efcb : Disable XLA builds (#80099)
491625b3da : [Profiler][Trivial] Update test_profiler_tree
58d751155a : [caffe] remove some unnecessary exported_preprocessor_flags (#80063)
70be6f8470 : [ao] Added generate report capability to ModelReport class
f714d8f574 : [ao] Added insertion of observer capability to ModelReport class
01720ae3b6 : [ao] Added ModelReport class outline for Fx Graph Modules
d43e6c9f4a : Revert "Revert "formatted _decomp folder with black""
964eb47881 : [GHF] Do not notify on merges (#80064)
59ab55522a : [JIT] Handle aliasing cases where two arguments alias in SchemaCheckMode
27a86204fc : [ci] remove old USER_LAND code from ci scripts
9f866b8dbb : stream pkl (#79931)
e3d0a3ca88 : Revert "More forward AD formulas"
581e846d9d : Migrate pull off linux-xenial-py3_7-clang7-onnx (#79489)
418514726a : [ci] fix bug in win build artifact uploading
4a75ba6606 : Upgrade pytorch nightly docker python version to 3.8 (#80051)
159d459c50 : Switched to tracing overloads by default (#80013)
5eb98887fb : Update merge rules to pull for pinned hashes (#80044)
4390546f86 : [MPS] Fix torch.uint8 support (#80049)
82a1961129 : [quant] Implement APoT_tensor class
4193252de9 : Revert "Revert "Added kl_div_backward decomp""
94dda03c78 : [fx2trt] move common_fx2trt.py into fx folder (#79924)
942c371bbc : Revert "Fix backward of binary_cross_entropy_with_logits"
e1caf3465b : [ONNX] Refactor `TestONNXRuntime` with parameterized class
af9b8a691f : Refactoring the AO experimental sparsity tests
8155753f8c : Added PassBase implementation
a622e3e14d : [static runtime] Fix linalg_solve test case (#79971)
184443f1b4 : [quant] Modify APoT utility function comments
c4d558e2da : ci: Move buck-build-test to periodic
7ea723b8f6 : Updating miniz library from version 2.0.8 -> 2.1.0 (#79636)
28a7ee8cec : Fix backward of binary_cross_entropy_with_logits
6b20ef6b91 : More forward AD formulas
ea6fa8dc95 : Revert "[ao] Added ModelReport class outline for Fx Graph Modules"
4bed8a23e2 : Revert "[ao] Added insertion of observer capability to ModelReport class"
4e8e817dde : Revert "[ao] Added generate report capability to ModelReport class"
ba4f780fde : fix submodule imports by importing functions directly (#79368)
7751ed41a6 : [ao] Added generate report capability to ModelReport class
1cff414784 : [ao] Added insertion of observer capability to ModelReport class
bf75708ce4 : [static-runtime] add nnc codegen for aten::div (#76903)
0b349f7e69 : [quant] Dequantize apot tensor
d6ec8398a9 : [quant] Implement quantize APoT method
f89e640810 : [quant] Add quantizer class skeleton
2148e6b4a4 : [torchdynamo hash update] update the pinned torchdynamo hash (#79149)
b99a1653ed : [AI Accelerators] softmax kernel for Nested Tensor (CPU) (#79756)
c9cbdb411d : [2] move more pytorch buck targets to shared build (#79330)
0f95e1846c : [ao] Added ModelReport class outline for Fx Graph Modules
50ce30520b : [GHF][BE] Comment on reverted commit (#79913)
bd7cd7c356 : Revert "[GHF][BE] Comment on reverted commit (#79913)"
efbee797b8 : [ONNX] Fix `prelu` output's shape (#79846)
4b52babcd9 : [ONNX] Fix `any` and `all` outputs' shape (#79371)
c47f78d25f : [ONNX] Fix linalg norm output's shapes and dtypes (#79506)
18e19c6b0b : Move CI to cuda-11.6 (#79921)
4843b58f4b : [GHF][BE] Comment on reverted commit (#79913)
61305cd638 : Improve small sort performance on CUDA
9244547a1b : small cleanup of executor (#79973)
ec4be38ba9 : Revert "To add hipify_torch as a submodule in pytorch/third_party (#74704)"
a8b0988596 : Fix //:module_test Conversion_MultiCUDA (#79926)
6c049e62af : Support for parsing hpu as device within JIT IR (#79947)
268ced5104 : Retry - [JIT] Propagate profiled information to DifferentiableGraph outputs
e3c9cb675a : Explicitly convert to double for comparison (#79964)
0b0e65516d : [FSDP] Fix param name prefixes for ignored modules (#79955)
018d071a48 : [composite compliance] frobenius norm (#78515)
163fc0e752 : Create a new Ubuntu 22.04 (jammy) build for platform010 (take 2) (#79945)
4f08d56183 : [BE][CI] remove flakiness calculation in print_test_stats.py (#79905)
e7ed01720a : [Static Runtime] Fix MemoryPlanner dtor crash in debug mode (#79942)
02d01707a6 : ci: Move multigpu to periodic
45e3afdef2 : add heuristic and idle time computation
9dcf304634 : [ci] reenable xla (#79949)
b592c62129 : [Vulkan] Fix mm op null pointer dereference on Android (#79701)
344c55cdd9 : Update release.md with rc validation steps (#79889)
765b6a8fab : Fix SequentialLR initialization (#72856)
63c0bcf887 : [static-runtime] modify ProcessedInputNodeWrapper to enable view start from the middle (#75763)
ca845462a0 : Fixes maybe_broadcast to actually broadcast only when needed (#79298)
290e95d35a : ci: Move deploy prefix to job suffix
f71999d221 : ci: Remove pytorch- prefix from android jobs
0b42e21519 : ci: Remove pytorch- prefix from xla job name
93b0fec39d : To add hipify_torch as a submodule in pytorch/third_party (#74704)
c0ce4b0de9 : make refs executor handle kwargs (#79858)
7195b86bbd : [vulkan] fixed calling of threshold op (#79717)
b84e099695 : don't call native namespace directly in autograd formulas (#79946)
0c78821408 : Compilation fix to access pretty_print_onnx function (#79864)
84564f2fab : Enhance negative operator for SYCL half conversion (#79850)
604f8d2ed5 : Update llvm deps for Buck build (#79919)
b58afc926d : [mergebot] post comment on finishing (#79448)
ed71f88531 : Fix composite compliance tensor for inplace views
9ad91cc6e0 : optimize `to_dense` for CSC (#79635)
2ede28724d : [CheckpointWrapper] Replace generic mod prefix (#79830)
4b6ba340e2 : [JIT] Imbue stringbuf with C locale (#79929)
fdcb22efef : Revert "Revert "Add support for multiply on direct SymInt"" (#79681)
efc7343743 : Revert "Revert "Put symint overloads on a different name"" (#79680)
d8b68069a8 : add kineto and flatbuffers to OSS BUCK (#79860)
f1c0280e7b : [xla hash update] update the pinned xla hash (#79587)
e89676f76c : fix logical_not reland issues
79507d2a9d : error when registering meta kernels to composite ops in core
c18a18cae9 : Separate faulty agent RPC tests (#79810)
cb2b7b1e57 : Fix code that triggers BytesWarning (#79868)
a098937c20 : Add factory function derivatives (#79872)
6d6e77eb6b : Fix some links in torch/distributed/CONTRIBUTING.md (#79855)
24f550b0cb : Minor fix in jit tests to pass TorchDynamo (#79903)
0656e9e595 : [ao] Adding model report detector base class and implemented detectors
56c98a2b7b : [FSDP] First implementation of ParamExecOrderWrapPolicy (non-recursive wrap policy) (#79238)
de9fd07093 : Relax the thread count assertion, that is modify EXPECT_EQ -> EXPECT_GE (#79806)
71c24a6a2e : Reduce string formatting overhead in PyWarningHandler
1cf3b24d42 : Remove unnecessary allocations in `processErrorMsg`
4749e86dd0 : Avoid temp string if standard string_view is available
9bf52f4be8 : Add OpInfo for torch.equal and fix support for non-standard bools
aa911efdeb : [ONNX] Fix `_graph_op` type annotation (#79882)
cb82224930 : Enable index_add for ComplexHalf (#79897)
f7ee061638 : Wconstab/reland pysymint (#79795)
a6b783e714 : Refine conditions under which index.Tensor has a dynamic shape
e675c33c51 : [onnx] have all classes that call setUp inherit from common_utils.setUp (#79543)
9705fb03b3 : Add support for a couple ops
d43dbeafdd : [ROCm] fix for multiple runners and docker commands (#79759)
f5eb05f107 : Revert "Reland #2 of "Added {logical_not, trace} refs, moved logical ops to use method overloads""
f3665dd237 : Reland #2 of "Added {logical_not, trace} refs, moved logical ops to use method overloads"
26b51290e5 : [BE] Update ProcessGroupWrapper to add deserializer and improve logs
ccccd0efec : [DataLoader] Share seed via Distributed Store to get rid of CUDA dependency (#79829)
16f30b494c : Make l1_loss composite
8adec19230 : Specify "Generic" BLAS library name. (#74269)
ddf582da9d : Modify nccl_dependency to take dev mode (#79169)
5b44ba834f : add check for ci sev (#79745)
f6d9a9a952 : [JIT] Bind AliasInfo to decrease differences in interfaces across languages
ee715e0a65 : [small changes] add closing parentheses to print out
d2e18606e7 : Fix view issue in embedding_dense_backward decomp (#79857)
8edaf388e5 : Fix fx decomposition example
aa0292122a : Move IPU tensors to the CPU for printing. (#79287)
fed12ff680 : [BE][flatbuffer] Remove code duplications and refactor (#79184)
7de231a813 : Fixing AO tests
0d909d3cff : add a new FunctionSchema kind called scratch
49368d9848 : [Static Runtime] Added nested prim::fork and aten::wait test case (#79746)
75f15c5b70 : [PyTorch] Record Sequence Number to Match Forward and Backward Operators (2nd try) (#79753)
2bfba84084 : Fix release doc builds (#79865)
644c3cfa0a : [ao][sparsity] add option for tensor_fqn to sparsity API
584e21bf7b : automate hash updates - add new lint to appease lint (#79832)
228e082ca9 : [quant] Refactor nonuniform quantization mapping functions
660d9ddef4 : Fix `SWALR` doc string (#79836)
e2a2ecac0e : finalized GHA for promoting to viable/strict (#79649)
0fa1920d45 : Small typo fix in nn.utils.parametrizations (#79854)
de842cac69 : [GH1] Make pull required for merge (#79704)
802efc90c6 : [primTorch] Implements refs for gcd, lcm and remainder (#78747)
9be97ea4b1 : Automated submodule update: kineto (#79662)
332e43ed1a : [Profiler] Expose extra fields to Python
24a0467149 : Add opset16 onnx support for `torch.scatter_add` (#79103)
5ca9253fa8 : [FSDP] Fix a small bug of pre_backward_hook params prefetch (#78851)
d4bbebbb97 : Back out "Back out "[const_fold] Set requires_grad based on the folded tensor; add device_for_folding option"" (#79696)
0545c85f74 : [static runtime] Add JIT prim ops: aten::cpu, aten::list, aten::numel, aten::__range_length (#79111)
7d17e3b884 : Windows nvtx for CUDA 11.6 require _wgetenv decl (#79798)
e10cbe3880 : Revert "Fix BytesWarning in torch.load() (#74813)"
73f6601cfc : [ONNX] Refactor heavy memory usage tests
074dc7465e : MPS: Add amax and amin Ops with tests (#79682)
dee43798d7 : Revert "Create a new Ubuntu 22.04 (jammy) build for platform010 (#77591)"
399b3dcac6 : Do not bind `c10::irange` output by reference (#79815)
6c2e8119dd : Fix BytesWarning in torch.load() (#74813)
706fa49ca1 : Exploit symmetry in comparison operators to reduce no. of kernels
24243659e4 : disable modes during constructor
cd3a0471b6 : [primTorch] refs: nn.functional.{hard|soft}shrink (#78427)
e627dd542d : Revert "Back out "[const_fold] Set requires_grad based on the folded tensor; add device_for_folding option" (#79655)"
bef2fecbbc : [shard] make state_dict hook be consistent
06274d7a48 : Add test for torchscripting nn.TransformerEncoder, including fast path (#79796)
eff74ed7bd : [AMP] Use generic autocast in example, specify dtype (#79579)
71d82917f4 : Create a new Ubuntu 22.04 (jammy) build for platform010 (#77591)
f5d7e5a192 : started using mode-based tracing
1432a3d6ac : [JIT] Add basic aliasing checks for tensor inputs
e8727994eb : Add symmetric version of gpu_kernel_with_scalars
268bbecf1c : Add option for allowing non-fake inputs, add deepcopy impl
e213c6addf : Fix forward AD add.Scalar to not set tangent as-is
a2d159e6e2 : Fix forward AD copy_ into same-sized tensor without fw grad
648a6658ec : Remove python implementation for eigh meta
7b4e92acef : fx quant: refactor qconfig setting out of find_matches
1b25aa6786 : Support dropout(nested tensor) (#79318)
62ba548cac : [DOC] Missing line in serialization notes (#79454)
514bf9fecb : automate hash pinning (#79470)
d4a9438786 : Revert "Make l1_loss composite"
321cd14aa8 : Update build targets to include generated enum_tag.cpp
e8ed16f3c0 : [DataPipe] Enable profiler record context in __next__ branch
25ca006707 : [DataPipe] Refactor _hook_iterator for readability
fb9d8de379 : Make `LR scheduler` stub complete, including `OneCycleLR` and class attributes. (#59476)
459090e3ce : [NVFuser] add "canBeEnabled" interface
7360b53ff3 : reland Add offsets-based reduction to segment_reduce (CPU, CUDA)
64f3742b2b : Use cuda instead of cudatoolkit for cuda 11.6 (#77164)
bcc4dba439 : [fix] composite compliance: isclose (#79700)
9acbaaaf05 : Fix typo in ChainedScheduler docstring (#79775)
04b98df87a : [fix] composite compliance: eig, eigh, symeig (#79698)
bc1fef96af : Reference implementations for rsqrt and native_layer_norm (#79413)
4a4890cfb2 : [BE] Use CamelCase for enum class members (#79772)
8a7a5def1d : Revert "Support dropout(nested tensor) (#79318)"
e85dfb6203 : Add pyproject.toml for black configuration (#79399)
9aad40e8eb : Capitalized first letters in the contents of readme table (#77953)
1c4fe2a4cd : numpy_T: typo fixed in warning message (#75859)
43aff84c84 : [JIT] Nested fix (#79480)
ac89ff56ce : [BE] Test trymerge against hardcoded set of rules (#79764)
4df76d1df3 : Adjust wording for consistency (#79758)
500fb24715 : Ensure tensors are contiguous in functional all_gather.
4661c75f0d : [BE][TryMerge] Do not re-import trymerge multiple times (#79760)
6e90572bb9 : [PyTorch] Don't create a new Storage in FreeMemory unnecessarily
1211ab679c : Support dropout(nested tensor) (#79318)
91c5fc323b : fx quant: unbreak Mac OS tests when MKLDNN is available
9955fff4da : add utility to compute self time of events
8a6d83079c : Functionality/pickling for commhooks (#79334)
ee6ebfc06b : Revert "Enable `dim=None` for `torch.sum` (#75845)"
f9656817df : Add nested tensor support to autograd (#79446)
4b342b30ad : CI Disable xla for infra issues (#79726)
ca7ab1708b : [Static runtime] Pass parent graph metadata to forked subgraphs (#79578)
4615f6aa97 : [MPS]: Add fix for squeezed input axes handling in BCE loss (#79676)
e79a51f7db : Enable `dim=None` for `torch.sum` (#75845)
c8b073c5c5 : [BE][FSDP] Enhance summon full params test (#79642)
9d419e81e2 : true up c10::FunctionSchema::operator<< to print native_functions.yaml syntax
44436947bc : Revert "Reland PySymInt (#79617)"
226a5e87f3 : reenable buck test (#79721)
e3135946b2 : reorder cpuinfo and clog deps in TorchConfig.cmake (#79551)
c9c402eae9 : [nvfuser_upstream_push] Reland: nvfuser code base bump 060822 (#79406)
78144b9f35 : [Quant][fx][bc-breaking] Replace *custom_config_dict with config objects
920ac70e8a : [pytorch][gh1] Fix failing test (#79632)
acd072967a : canonicalize includes of form <aten/src/ATen/...>
751fbc4ce4 : [ao][sparsity] Support for L2 norm based block data sparsifier
419b82e0fa : [ao][sparsity] L1 norm based block data sparsifier
70fc865237 : [ao][sparsity] Support for embeddings and embedding bags in BaseDataSparsifier
490631aaf3 : [ao][sparsity] Support for the nn.Parameter in BaseDataSparsifier
69347e9f64 : [ao][sparsity] Implemented state dict and serialization functionalities
596095dc82 : [ao][sparsity] Support for sparsifying data operations on raw torch tensors.
15828bcfd7 : [ao][sparsity] Base class for Data Sparsifier
2539efb842 : Enable embedding forward over reverse support (#79699)
df1ef27e6a : [AutoAccept][Codemod][FBSourceBuckFormatLinter] Daily `arc lint --take BUCKFORMAT` (#79692)
61a1eef7fc : [Quant][fx] Add get_default_qconfig_mapping
355a1c8c3f : MPS: TopK raise an error if K>16 (#79677)
270c518be0 : [checkpoint] Implement interop between Tensor and Sharded Tensor (#78120)
c6c207f620 : use ufunc_defs.bzl in BUILD.bazel
24c2aff1b2 : Back out "[const_fold] Set requires_grad based on the folded tensor; add device_for_folding option" (#79655)
afc037ae38 : [quant] Add quantized levels visualization
0088172e38 : [tensorboard] update assertion error for `scalar()` and fix docs (#76859)
f1fb575b9e : Remove -Wno-unused-but-set-variable for clang 13.0.0 (#79666)
8ef6356f26 : Reland PySymInt (#79617)
c8fb02b452 : Use amax instead of max for softmax decomps (#79667)
ef1e5d0e1e : [Profiler] Prepare for change to unique_ptr in Kineto
c348980b6c : [CI] Disable Buck Build Test (#79675)
ac0d540db2 : [ci] don't upload sccache artifacts with the wrong extension
4b817f5816 : [ONNX] Improve docstrings and typing annotations (#78231)
9681c75fc7 : [FSDP] Extend ignored modules test to not pass to root
e21c0ac9a5 : use exe/exepath in our genrules
86606fbe22 : fix generate-code caching by indicating that the binary is an executable
18fcd4826f : [FSDP] Fix exec order validation for diff ignored modules across ranks
3064982fb8 : Support percentages in random_split (#78877)
2f43784708 : fix at::from_blob_quantized_per_tensor_affine strides calculation (#79314)
44e5593b17 : Revert "automate hash pinning (#79470)"
61a5c779bf : Make l1_loss composite
42fd58eaa0 : [JIT] Change SchemaCheckTensor into SchemaCheckMode and fix global variable issues
c8b9b6266b : [ONNX] Fix arg type in `_set_training_mode` (#78583)
81f277002e : [quant] Add param calcs and tests for APoT observers
adf8060600 : add a new alias key for functional to view op decompositions
6114b0f921 : [quant][core][improvement][feature] Enabled support for quantized fill of nhwc tensors
89bae0665c : Update ideep for ideep conv changes -v2 (#79258)
55e3d7a483 : update tensor impl support_as_strided for nested tensors (#79616)
0ac2289a05 : Create decoder class (#79438)
110c730676 : Refactor out QKV in projection (#79437)
70446c25d7 : [FSDP] Add forward prefetching option in FSDP API (#78841)
26e1afe9f2 : Made improvements to isGreen (#79619)
aee9762a51 : [static runtime] Disable unit test for linalg_svdvals (#79574)
8f8a661ddc : automate hash pinning (#79470)
b9f83cb737 : use is_same_size in autograd init (#79553)
8d7fcfa8f1 : [static runtime] Add native ops: aten::index_put, aten::item, aten::tensor_split (#79065)
0bc1b9e039 : [static runtime] Add logging info for unsupported ops (#79064)
eb5751d84b : move gen_aten and gen_aten_hip into shared build structure
b9bb52d97b : Revert "Put symint overloads on a different name"
e20c6a89f8 : [quant][core][improvement] Added warnings to quantized dynamic conv and linear ops when reduce_range=true
09df27fe45 : Revert "Revert "[distributed] Handle object collectives and NCCL. (#79034)""
d79f99c4b4 : Revert "Revert "[ci] convert empty s3 artifacts from error to warning""
a083199d2e : Revert "Revert "[ci] remove remaining RDS dependency""
f2f4cdc9e5 : [bc test] pull nightly from before the base commit (#79570)
bc82a5f79c : [ci] turn sccache stats error into warning
21e32d5a0b : Revert "[ci] remove remaining RDS dependency"
5de9f42486 : Revert "[ci] convert empty s3 artifacts from error to warning"
2579b3ed77 : Revert "[ci] turn sccache stats error into warning"
279634f384 : Revert "[distributed] Handle object collectives and NCCL. (#79034)"
dcf381e982 : Fix BC Test for SymInt (#79612)
6015987dc3 : Added edge case checking in isGreen (#79565)
d2fbfe7fce : [ONNX] subscribe onnx to our custom test infra (#79546)
6a96bda445 : [BE] Add clang-format changes to blame-ignore-revs (#79593)
5953fd9133 : Revert behavior of Dropout2d on 3D inputs to 1D channel-wise dropout behavior & warn
2d73c8e6e0 : Add Dropout1d module
081ff9602a : Correct torch.nn.CrossEntropyLoss output shape specification (#79568)
69971dd111 : [fixed] Make GHA workflow to retrieve latest promote-able SHA from master (#79559)
25e3331d7c : Revert "Add support for multiply on direct SymInt"
b8db0a0475 : Revert "Python Bindings for SymInts (#78135)"
aa9d25efc0 : Revert "Add support for directly passing symint to empty"
83e575c510 : have a common interface to extract metadata from SizeNodes (#78088)
e81ab046bd : Skip stale check when facebook-github-bot is merging (#79572)
38952d9350 : [ao] Added function to inform dynamic vs static appropriate
e10b762537 : Enable torch._refs.var for nvFuser executor (#79517)
20675977bc : [Static Runtime] Performance optimization for fork operation (#79482)
35b130ec0f : [Vulkan] added vulkan threshold op (#78654)
22c7b1ddb5 : [DataPipe] Fix error message coming from single iterator constraint
bad7720dde : [xla hash update] update the pinned xla hash (#79172)
6e2f9ece4c : [ci/docs] add some documentation about the stats uploading process
aff7eef476 : [ROCm] Enable some sparse tests on ROCm (#77877)
20d56d2b32 : increase sleep for TestCuda.test_caching_pinned_memory_multi_gpu (#76601)
adaafeedb1 : do not write sccache stats to json if missing OUR_GITHUB_JOB_ID (#79541)
05664a957e : Add support for directly passing symint to empty
6b015af729 : Add support for multiply on direct SymInt
28d1216bd5 : Skip Flaky ONNX Test (#79556)
e895672b35 : Followup fix after #78828 (#79554)
549a597c00 : Port linalg_eigh and linalg_eigvalsh to structured
4fc7832d72 : Reference implementations for softmax, log_softmax, logsumexp (#79423)
8895862744 : Enable torch._refs.mean for nvFuser executor (#79444)
0005ad8801 : Skip extremely long chebyshev/legendre tests introduced in #78304 (#79529)
dde81c20f9 : Revert "Make GHA workflow to retrieve latest promote-able SHA from master (#79548)"
8e05513152 : [ao] Added ModelReportObserver to inform on dynamic vs static
04f87f2ab9 : [DataLoader] Fix the world_size when distributed sharding MapDataPipe (#79524)
e479daed78 : Make GHA workflow to retrieve latest promote-able SHA from master (#79548)
f614f66acf : [const_fold] Set requires_grad based on the folded tensor; add device_for_folding option (#79067)
577f87bbff : Make flatbuffer loads faster if loading as mobile module. (#78998)
81cd276d61 : [MPS] Support stride of stride
51b65cd765 : Fix warning: cast from type `const char*` to type `char*` casts away qualifiers (#79520)
95f9ca4931 : Symbolic storage size
0a651a231d : Add full support for serialization of MPS Tensors (#79465)
f4edbaa62f : [PT-D] Use process group of the partial tensor so sub pg comm will be enabled during reshard
ce6ce74703 : Revert "Add full support for serialization of MPS Tensors (#79465)"
31ada133cb : [meta] nansum, nanmedian (and few minor clean-ups) (#79411)
6bce0f4ace : [ci] remove PR_LABELS env var
48505356f5 : Propagate map_location arg to torch.jit.load in torch.load (#78733)
9d351b3ddd : Update the governance page (#78850)
5399fef644 : Update persons of interest (#79076)
6cf085e8fc : [MPS][BE]Do not use `new/delete[]` in `chainViewOperation`
4dd3e3f409 : [MPS] Fix getDefaultGenerator and copy_kernel_mps
2f2fa9948d : [MPS] Delete unused vars from OperationUtils.mm
64c2a275c4 : Add full support for serialization of MPS Tensors (#79465)
d7fc864f0d : Revert "[primTorch] refs: lerp (#78473)"
44764f131b : [ONNX] Move tests in test_onnx_export.py to test_pytorch_onnx_no_runtime.py (#78310)
d9fca126fc : [ci] fix upload test stats job
a9f6a35a33 : [primTorch] refs: lerp (#78473)
d3ef5c3fa3 : [ONNX] Clean up `__init__` in torch.onnx (#78446)
ae8e5c702a : hook XPU device in _get_available_device_type (#76167)
8279567c46 : Link LazyLinalg with cusolver statically when needed (#79324)
07a528cac7 : Adding isDynamic Support to SizeNodes
498a34224b : Adding helper function for getting DimensionNodes
28668b4850 : Slicing Bug that was affecting Nested Tensor (#79445)
f754d2501d : Use custom AppendOnlyList for op_events to reduce the number of atomic operations (#78643)
d9a6f76a9e : Variable Name Clarity (#78478)
1a845579b6 : [ONNX] Fix inconsistent rand dtype (#79193)
4824222472 : Migrate pull off linux-xenial-py3_7-clang7-asan (#79087)
18305e30a7 : [BE][FSDP] Enable multigpu unittests (#77947)
1900b8acb2 : Remove linux-xenial-py3_7-clang7-asan from merge rules (#79088)
d332724071 : Python Bindings for SymInts (#78135)
e757cf40cc : [c10d] Make broadcast as a custom op
f31f4a3ac2 : Remove the construction of unused tensors (#79183)
46fac97bff : fix Python parent id
05624bcf7b : add sizes to slowpath
543919cfc8 : Forward attributes to wrapped module
44fe851feb : [WIP] Fix non-reentrant hooks based checkpointing
b8914aab1c : Fix backward compat (#79484)
38e717dc87 : Add docs for Python Registration
134459161d : [ONNX] Improve shape inference supporting inferred values and unspecified optionals (#78999)
3b194fd532 : Revert "Add offsets-based reduction to segment_reduce (CPU, CUDA)"
88e2229a20 : Use C++17 for RocksDB 7 header. (#75741)
34b0285185 : turn on -Werror=extra in Bazel
3db1092da6 : disable warnings for external repositories
b370959da1 : [MPS] Make it compilable with either Xcode or CLI (#79430)
1d47e0df5a : Updates TF32 docs (#79401)
bfb46ebb66 : [mergebot] Add return on force (#79450)
70810a3691 : turn on -Wall with a few exceptions in Bazel build
52a5266aab : turn on -Werror=unused-but-set-variable
c77dc60143 : [ci] set -e in get-workflow-job-id
c10908cd41 : [jit] fix indexing into a tensor with a tuple
91a2e953e5 : [JIT] Use signed integers in CalculatedNecessaryArgs
c727ec6129 : [mergebot] add additional checks (#79317)
4ebb326b75 : [distributed] Handle object collectives and NCCL. (#79034)
c31398653c : [ci] turn sccache stats error into warning
d42d5fe778 : [ci] convert empty s3 artifacts from error to warning
964d505958 : [ci] remove remaining RDS dependency
71b00a049a : Refactored print_latest_commits.py to return the latest viable commit from master (#79422)
2f7ed05f22 : Retry - [JIT] Add mutation checks for tensor inputs
dcd8900c39 : Use python3 -m to fix tools not being found (#79432)
d05fb78685 : [chalf] enable skipped tests (#79376)
5889eae906 : [retake][mobile] Fix lightweight dispatch OOM error by introducing selective build. See #78983 for more details. This PR adds a new option argument to avoid changing the existing one and adds unit tests to cover the arguments.
9c6579f8f7 : move CUDA flags out of --per_file_copts into the cu_library macro
9cca8ae41a : precisely specify the regexps of our unused-function exemptions
4b5647f068 : [deploy] move deps installation to docker image
8b03f2a93a : quote all of the regular expressions
3ac27bb6ad : avoid multiline strings in our warning flag overrides
3771331b2a : [ci] move sccache stats off RDS
ea934a580c : [ci] Pull out generic upload_test_stats functionality
842da8a5de : [ci] remove TD + test specification code from run_test.py
d9ffc37cea : [ci] unpin get-workflow-job-id
03a1bb61fc : [ci] improvements to test stats uploading
943c09a53e : [ci] clean up dead code related to PR test selection
6bc0302e05 : [ci] don't explicitly set `CUSTOM_TEST_ARTIFACT_BUILD_DIR`
91659dea83 : [ci] minor refactorings to common ci scripts
edb424fd9f : [ci] remove old sev mitigation
5680588193 : [complex32] triu, tril, diag, trace : `cuda` (#78062)
a6103cee26 : Automated submodule update: kineto (#78232)
a7dde90173 : [CI] further shard linux debug jobs to 4 (#79412)
436396eddd : fix GRU document string (#79380)
0e25a9490b : Removing cublas static linking (#79280)
ba27ee9e8f : [CUDA graphs] Allows Adam and AdamW to be capture-safe (#77862)
a732bbea23 : [meta] Add meta support for fft ops (#79311)
bd1a35dfc8 : [meta] diag ops, trace (#79341)
aa681e401d : [ci] fix env var propagation
213a8fc992 : Put symint overloads on a different name
f23685f008 : [transformer] BT enablement on fairseq - pytorch change (#79186)
c321dfe1b5 : move tree tests to the start of test_profiler.py
a81be44410 : Fix `shard_module` to appropriately deal with sub process groups.
5413580f9e : Revert "[JIT] Propagate profiled information to DifferentiableGraph outputs"
66460c4a6a : Revert "Fixes maybe_broadcast to actually broadcast only when needed (#79298)"
1cb1c2c08c : Fixes maybe_broadcast to actually broadcast only when needed (#79298)
30fb2c4aba : [lint] autoformat test/cpp and torch/csrc
1ec30a6647 : Add offsets-based reduction to segment_reduce (CPU, CUDA)
c978b609f7 : [ci] remove IN_CI env var
f51d5233f2 : [ci] fix GITHUB_ACTIONS env var checks
a8bafd2659 : [ci] do not pass GITHUB_RUN_ID explicitly to docker image
2edcc9e600 : Update ideep for ideep conv changes -v2 (#79258)
164029f783 : masked logsumexp/logaddexp
9fc2518a8a : Update and improve the heuristics for linalg.lu_solve
1880d293b6 : Expose lu_factor_batched_cublas
54949a5abc : Simplify and optimize linalg.solve
65a37923f9 : [Static Runtime] Exception handling during fork subgraph execution (#79292)
ab2ca95dd1 : turn on -Werror=unused-variable in our Bazel CPU build
e727539c29 : Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CUDA)
38350acf8f : Autogen Tags enum, and allow specifying tags while defining an op
bdcee8f995 : update is_same_size to work with nested tensor dispatch
10dfdf2066 : engine changes
50e1301293 : qcat: use direct memcpy when all the inputs and output share the same scale and zero_point
77d5eb7d1c : Remove linux-xenial-py3.7-gcc7-no-ops from distributed checks (#79320)
606b234336 : turn on -Werror=unused-function in our Bazel CPU build
4aca751921 : remove spurious warning in amp (#79203)
68136828e0 : Add Design Philosophy (#79248)
24050a5801 : [RFC][Codegen] Add custom namespace support (#78015)
d28e9e145b : Revert "[nvfuser_upstream_push] nvfuser code base bump 060822 (#79147)"
bcd7a20953 : Revert "turn on -Werror=unused-function in our Bazel CPU build"
2e8312c704 : [vulkan] replication_pad2d.glsl: use clamp() instead of min(max()) (#79291)
5e656eaae5 : [refs] ravel (#78421)
3d77017674 : [primTorch] refs: masked_fill (#78132)
b712467cd1 : Revert "Add mutation checks for tensor inputs"
7b307e5fca : [meta] angle, angle.out (#79278)
49c41b87a2 : [nvfuser_upstream_push] nvfuser code base bump 060822 (#79147)
35eda5f959 : [DataPipe] Correcting deprecation version
5e926aafab : add utils for checking that all modes are in the same scope and finding the outermost mode
fff1948b02 : [PyTorch] intrusive_ptr: don't guarantee release_resources will be called
cfd697c979 : [PyTorch] add intrusive_ptr exclusive ownership benchmark
26f2376b78 : [Static Runtime] support fork and wait operations on Static Runtime (#79211)
ec86070922 : Checkpoint util
67d313a032 : turn on -Werror=unused-function in our Bazel CPU build
3734fcc8f8 : add ability to push a mode if the current mode is an ancestor
83c0a2bc38 : Add mutation checks for tensor inputs
6bea742c10 : Revised test cases for isGreen (#79223)
7b3a0ff87a : Port `index.Tensor` to structured kernels.
a90f006fe5 : add strides to slow path
1d7627955b : Add instructions for iOS test (#79100)
cde0cefa1c : [CI] Remove broken upload binary size step (#79282)
eaaa34daef : [ci] write test suites to rockset
cec251fc4b : [lint] Don't invoke lintrunner with --verbose
b26c5b4638 : [ci] refactor upload_test_stats + add unit test
0117fb7600 : [ci] remove IS_GHA env var
ebc936d608 : [mergebot] Default on green (#79242)
adaecb2cbb : [chalf] index_select: cpu support (#79217)
7843a5e882 : Move Tensor.grad back into C++
dd620c4575 : add type annotation to distributions.kl_divergence (#78432)
77b6885a22 : MPS: add layer_norm_backward (#79189)
83239351c5 : MPS: add exponential op (#79188)
c99ea0db46 : Revert "[PyTorch] Record Sequence Number to Match Forward and Backward Operators (#78795)"
d2200e38f7 : Revert "fix _unsafe_view schema to work with functionalization"
f96d96a7fc : turn on -Werror=type-limits in our Bazel CPU build
28c541776c : [ao] Added fx model report per_channel detector
b9e3d722c4 : Use appropriate dtype for sharded linear implementation.
a299a2fa26 : [PyTorch] Record Sequence Number to Match Forward and Backward Operators (#78795)
43ad2833b9 : [pytorch][papaya] Replace parseObject with dummy function in FlatBufferLoader during load parameters (#79167)
d837443a6f : [fix] composite compliance: matrix_rank (#78968)
fefff54cad : Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads"""
a2d2981e8e : Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads""
87a5ecced2 : Revert "Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CUDA)"
46234df5f1 : fix _unsafe_view schema to work with functionalization
1d2a6c2e94 : [JIT] Propagate profiled information to DifferentiableGraph outputs
40f7ef1f3d : Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CUDA)
c0a7c1d02e : Expose _export_data from C++ to Python (#79207)
338bfe6315 : Revert "[ci] remove IS_GHA env var"
01929b7374 : Delete syncbranches script/workflow (#79245)
1a2d95c68a : [ci] remove IS_GHA env var
1e2890ab51 : [ci] remove COMPACT_JOB_NAME env var
0a4d646529 : [ci] remove ppc64ie build env
18c3314e47 : [ci] fix USE_DEPLOY condition
21be4d40ba : Fix how we handle host memory in CUDA getDeviceFromPtr
1bc8c87322 : [CI] Turn flaky test signal to green (#79220)
13a8867c01 : Add Dynamic Output Shape Tag for data-dependent ops, handle in FakeTensor
f221b16b2c : Remove Unused Nervana GPU submodule (#79168)
54ea6bbc6f : Add onnx / test to required merge rules (#78790)
39bfdf4175 : Revert "[mergebot] Make merge on green default behavior (#79199)"
ddeeca0bb5 : [mergebot] Make merge on green default behavior (#79199)
70d6446a3d : Support both train / eval modes for ModuleInfo
79f18c1aee : Minor FX test fix for TorchDynamo (#79206)
b18ba7e036 : Properly setup __name__ on refs functions.
84b9e5ba84 : Move test_profiler tests to tree rather than icicle format
9f2e2aa28b : Revert "Revert "[Profiler] Move python tracing to unified event type (Part 2)""
2bafb42a0a : Add onnx support for `movedim` and `moveaxis` (#78931)
c1b831f9cd : Fix jit schema_matching ignoring self resulting in wrong operator schema
e289a18e78 : Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CPU-only)
e40e378100 : [quant][core][feature] Implemented index_put for QuantizedCPU tensors
397f222457 : implemented isGreen functionality to verify whether a commit SHA is promote-able (#79061)
3255ddeec9 : Make `Wunused-local-typedef` a hard error (#77918)
db08eb88e6 : [GHF] Make wait-on-green abort on FAILED checks (#79209)
2384916b7a : [c10]Delete non-existent paths (#79187)
e7495702b0 : Avoid creating extra tensor when matching conj and neg
b1ae519df9 : Added functionality for post_local SGD (#78988)
50f7b40ad9 : MPS: Binary cast fix by proper type promotion and remove spurious copy warning (#79185)
6fa202847e : Add TODO comment
462874f418 : adding a quick link to nvfuser README.md in jit doc for 1.12 release (#78160)
221755cc71 : Link BLAS privately (#78883)
82504985d5 : [PyTorch][easy] Tie DimVector inline size to SizesAndStrides inline size
e99c8ec3c2 : [ci] use TemporaryDirectory in upload_test_stats.py
a374254b68 : [GHF] Make `-g` wait for all (#79196)
5ca24c60a1 : Add Meta backend
af6321f3d8 : Port linalg_qr to structured
2b6a04270f : [quant][core][feature] Implemented masked_fill for QuantizedCPU tensors
fe2fbb947d : Run torchdynamo tests on PyTorch Linux CI
d67309aefb : Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads"
60a13f4ec9 : Revert "Added kl_div_backward decomp"
97594a24b4 : Print output during MPS test import tests (#79163)
3556457dd2 : Revert "`kl_div`: fix for grads wrt `target`, double backward, forward-over-reverse AD support. (#79007)"
2027eae67c : Revert "formatted _decomp folder with black"
04407431ff : MAINT: Harmonize argsort params with array_api (#75162)
fb6749d977 : Support CSC/BSR/BSC inputs to unary zero-preserving functions.
2a0e4322e6 : Support ComplexHalf in nonzero and add of sparse_csr input.
72ad222cff : `kl_div`: fix for grads wrt `target`, double backward, forward-over-reverse AD support. (#79007)
cd9e158007 : Accept non-standard bools in more CUDA kernels
4945c72151 : formatted _decomp folder with black
a08685ebc9 : Added kl_div_backward decomp
64b6bd8c1e : Added {logical_not, trace} refs, moved logical ops to use method overloads
28ed0f5686 : Minor fix in FX tests with TorchDynamo (#79174)
c3e089a047 : Revert "[mobile] Fix lightweight dispatch OOM error by introducing selective build"
cb04cc0aa7 : [ci] don't upload test stats for cancelled workflows
95b15c266b : Apply clang-format to ATen headers (#78406)
5ea3e657d9 : [vulkan] Implement replication_pad2d op (#79057)
854c833f81 : Revert "Support both train / eval modes for ModuleInfo"
dc11a5642d : Improved stack ref and added more decomposition annotations
c3531c9bce : Ported roll to use torch ops and added as a decomposition
1b75a2e458 : Add PReLU to MKLDNN convertible Ops (#79011)
225bf132ab : Black torch._meta_registrations
eb856daf0f : Do not treat all dense tensors as isTensorSubclassLike
8cd23eb40b : Add vision onto the commit pins train
004b63876f : Centralize commit pins in a folder
3c5a3ca9e8 : Make FakeTensors return meta within kernel invocation, add FakeTensor op tests
7b42ecada5 : [Pytorch] Disable use of qnnpack with ceil_mode avgpool (#79028)
7541b07f62 : Prevent invariant assert failure on win+debug (#78636)
9dd8008ae4 : commit hash update - fix error when no change (#79162)
12658fcd5b : Support both train / eval modes for ModuleInfo
17862ae823 : [JIT] Improve error message for InferredType
bfaa187fb0 : move substitute lib to open source (#79093)
d6ecdf1605 : refactor op handling to use register pattern
fe7a13496e : Add CPU Fallback
290d0979f1 : Migrate FakeTensors to always call into FakeTensorMode and have them hold a reference
430955b3a8 : [Test] create shared targets for xplat aten (#78345)
50f2af84da : Add embedding_bag meta functions
7710d872fc : [DDP] Fix broadcast for channels-last tensors (#79060)
a8ea58afee : Add randomness case to the autograd notes
7cb4a76844 : Add detailed error message for iOS test (#79140)
6ad51c9422 : Support indexing of the underlying tensors for nested tensors (#78934)
e85f3b58ab : [fix] composite compliance: margin_ranking_loss, hinge_embedding_loss (#78935)
abe55562cc : [Vulkan] Replace Conv2dOpContext by VulkanOpContext (#78818)
95806fce8e : [Vulkan] Replace TransposeConvolution2dContext by VulkanOpContext (#78817)
65ecc02a39 : [Vulkan] Replace GruOpContext by VulkanOpContext (#78816)
96445b907e : [Vulkan] Replace LinearOpContext by VulkanOpContext (#78815)
4b82ef7928 : Revert "Port `index.Tensor` to structured kernels."
c19cf34f81 : Move test/jit/test_onnx_export.py to test/onnx (#78116)
79ee0c0dd5 : Swap fx2trt_oss to torch_tensorrt (#950) (#79115)
cfd84125bd : Port `index.Tensor` to structured kernels.
b12d951f86 : [quant][core][improvement] Added earlier termination and improved error message for calling min and max ops on per channel quantized tensors.
d26c575ff5 : [ONNX] Enable script tests based on recent exporter changes (#77254)
b45d303dac : Add onnx support for `torch.lerp` (#78891)
5f7b92024f : torch/package: add fix for implicit numpy dependency (#78979)
68ef29b4ca : Revert "[hot fix] Disable MPS tests as machines are down (#79119)"
5158a6b41a : Forward fix sharding bug for DL (#79124)
541b9d2593 : [quant][improvement] Added quantized fill test case for quantized cuda tensors
c1516e0c8d : Fix LeakyReLU spelling (#79102)
d4aa204f11 : ns for fx: further fixes for kwargs-only
5b32c34450 : [reland][complex32, jiterator] cos, sinh, cosh, tanh (#78718)
ff39e3493a : Test torch._refs with aten and nvfuser executors (#78926)
446d5bae93 : [Vulkan] Add generic VulkanOpContext
f2d67c547f : [hot fix] Disable MPS tests as machines are down (#79119)
32593ef2dd : move MPS compat into common comparison machinery (#77836)
496e294e31 : Revert "add some instructions for ios test (#79097)"
e4c3e98a48 : Add onnx support for `torch.tensor_split` (#77437)
9a0bb4b646 : [Vulkan] Fix bug in GRU op (#78945)
272bdb1442 : [mobile] Fix lightweight dispatch OOM error by introducing selective build
99ffeff949 : [forward ad] Sync conj for between primal and tangent on set forward grad
cf3ce329b5 : [PyTorch] Avoid initializing storage for empty Optionals
65fd0cdddb : Install torchdynamo as part of most CI jobs
bbd49bcaec : Generalize update-xla-commit-hash and setup torchdynamo hash
e1534e3fe7 : add some instructions for ios test (#79097)
530ee1c383 : [GHF] Workaround GH GQL limitation (#79090)
4305f8e9bd : Revert "[Profiler] Move python tracing to unified event type (Part 2)"
acecd4acc1 : Make torch.library decorators return function
41bd5b85fd : cdist meta function
8d9e5d0b5f : [pytorch] Update Gloo submodule to include rocm fixes (#77207)
8ce310b943 : Revert "Revert "moved logit to use torch ops instead of refs + added …a couple more decompositions"" (#79082)
9e52ad28c9 : [nvfuser_upstream_push] nvfuser code base bump 052422 (#78244)
c2a3c8186c : [Profiler] Move python tracing to unified event type (Part 2)
a173613f6d : [Profiler] Move python tracing to unified event type (Part 1)
fa1aa836ce : Copy rollbear/strong_type to `c10/util`
d09e3674d8 : addbmm meta function
c6215b343c : Deprecate torch.lu_solve
f7b9a46880 : Deprecate torch.lu
f091b3fb4b : Update torch.lu_unpack docs
c8a5f28fde : Revert "Test torch._refs with aten and nvfuser executors (#78926)"
c7d6cec078 : Add linalg.lu_solve
7d192d48d2 : Revert "moved logit to use torch ops instead of refs + added a couple more decompositions"
13dff3b2c2 : Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#74353) (#76771)
a07f57d44b : [fx2trt] support for new_ones, new_empty, as_strided, einsum (#79047)
b3ed65343d : Fix sharding strategy for distributed DL (#79041)
a711ce4a2a : add non-kwarg device and _like constructors
d4eebca7bc : Test torch._refs with aten and nvfuser executors (#78926)
0be9df4e85 : Install NDK 21 after GitHub update (#79024)
5bf298123e : [XLA hash update] update the pinned xla hash (#78600)
32461ed319 : Pool cudaEvents in CUDACachingAllocator (#78279)
a6347f5467 : MPS: Fixes (#78930)
7a23dd343e : Wrote stubbed out test cases for isGreen function to verify if a commit SHA is promote-able (#78932)
a9a9b6c8ab : [Vulkan] Implement LSTM operator (#78943)
814ff74460 : Add prod reduce option to segment_reduce + opinfo
3f14ca8f02 : [ONNX] Compare types for DeviceObjType instead of strings (#78114)
f7da97101c : [PyTorch][easy] Remove double hash table lookup from Registry::Create
243dd7e74f : Fix LeakyReLU image (#78508)
c936396af2 : Always convert truthy booleans to 1
a56f4e23b9 : [quant][core][better-engineering] Rename files in quantized directory to conform with non-quantized counterpart filenames
0d176befab : [quant][improvement] Added quantized fill test for per channel quantized tensors
9da5defff6 : Package config/template files with torchgen (#78942)
67badf0d5c : Add missing QSCheme IValue conversion logic (#78862)
1d9f445b5d : moved logit to use torch ops instead of refs + added a couple more decompositions
bbbfbbeddc : Added "dump ops" API to return ops instead of print (#78995)
69778ee4eb : Ported nn.functional functions to use torch calls instead of ref calls
d06bdaac50 : Revert "Wrote stubbed out test cases for isGreen function to verify if a commit SHA is promote-able (#78932)"
0df77f320f : Revert "add non-kwarg device and _like constructors"
45f5e6db92 : Remove mentions of non-existing test_jit_py3 (#78977)
4750f745bf : [ONNX] Disable parallel run for custom op related tests in CI (#78944)
5a4325955c : Correct var_mean in _refs/__init__.py (#78971)
bd08d085b0 : Update argmin docs to reflect the code behavior (#78888)
44937da6db : add non-kwarg device and _like constructors
4d88affb5d : Ported proxy tensor tests over to core (#78890)
e806d13214 : [Privacy][Codemod][TransferMegarepoThirdPartyLibraryOwnership] Asset Ownership Update For asset://code.third_party_library/fbsource/xplat%2Fcaffe2%2Fthird_party (#78937)
42fac176eb : [DataPipe] Add function for deprecation of functional DataPipe names
f551c22a20 : [lint] preparatory changes for mass clang-format
f625bb4bc9 : [codemod][usort] apply import merging for fbcode (1 of 11) (#78973)
da805953fb : Subscribe onnx _BaseTestCase to TestCase setUp() (#78933)
fc3a5d8117 : Wrote stubbed out test cases for isGreen function to verify if a commit SHA is promote-able (#78932)
949dc0588b : Add methods to torch._C pyi (#78757)
e675dbadc4 : Ported gelu decomp to ref (#78697)
b40c751c33 : c10 mathematical constants (#78910)
80f2c175be : Follow up on CR for "Replace TensorMeta with FakeTensor"
d0431b09ec : [ci] Only upload test stats when the test step actually runs (#78960)
c44472c5b1 : [DataPipe] Disable profiler for IterDataPipe by default
bf80f6c7b0 : [Static Runtime] prim::fork asynchronous execution on JIT interpreter (#78858)
91a43a03f1 : [CUBLAS][TF32] Fix broken docstring for `set_float32_matmul_precision` (#78949)
ebb7f424b8 : Add Tensor.is_cpu (#78887)
a0a23c6ef8 : [bazel] make it possible to build the whole world, update CI (#78870)
97762d3196 : [flaky-test] fix duplication check after introducing skips (#78958)
be25566d13 : tools: Ensure compat for collect_env with python 3.5
bb3e1f30a8 : [Pytorch NNAPI] Add compilation_preference & relax_f32_to_f16 APIs (#78758)
e7b96ad078 : [complex32] sqrt-rsqrt : cuda (#77490)
530dcc2b94 : [ONNX] Tool to verify exported model discrepancy between sets of inputs
48c3d85739 : Update codeowners for MPS (#78727)
6fe6902f97 : [DataLoader] Apply sharding settings in dist when num_workers is 0
ea3c4d0c75 : Added glu_backward decomp (#78919)
4e27863739 : [fbsync] Apply changes from D36937789 & D36937774 (#78956)
c461d8a977 : [primTorch] refs: hsplit, vsplit (#78418)
c81c0b6d42 : Support clone for NestedTensor (#78826)
71e1992b0d : quantization: remove most fp16 configs from fbgemm/qnnpack
491c2ed281 : [ONNX] Use TORCH_WARN for warnings (#78441)
bbe150ba7b : Migrate pull off pytorch-linux-xenial-py3.7-gcc7 (#78859)
a886bd387c : ci: Add clickable debug messages for trymerge
0fdc1caf02 : Cleanup some Python2-related code (#78864)
7adf929a99 : Skip flaky ONNX test: test_custom_op_fallthrough (#78936)
f2b56dd6ee : Revert "Default on green (#78811)"
9fca008809 : [DataPipe] Adding functional API for FileLister (#78419)
9b6cb83b0c : Make ShufflerDataPipe deterministic for persistent DL and distributed DL (#78765)
c175fac2e7 : [JIT] Autodiff - use more accurate requires_grad info
ab6c7b4b3f : fix __torch_function__ bug in getindex that causes an error not set exception
03847808a0 : Add all bzl files per D36874458
1f53d036d2 : Build a __torch_dispatch__ class that records torch operator names
ee933c3346 : [Vulkan] Fix cat registration (#78806)
1d9c54ec20 : Clean up after gcc5.4 build removal (#78860)
67b27a7bae : generate kernels for codegend out= operators
a4403c17c7 : Improve reproducibility docs for RNG (#78849)
129d9dbb15 : Revert "Make ShufflerDataPipe deterministic for persistent DL and distributed DL (#78765)"
3f58dd18dc : ENH: Add a force argument to `numpy()` (#78564)
5a95b20d0f : DOC: Harmonize ELU documentation with the module doc (#78909)
19228205ae : functionalization: update wrapper to propagate symints
c0f0cd7676 : functionalization: allow mixed functional/nonfunctional TensorList inputs
17f0c3be2e : [primtorch] refs: {h, v}stack (#78614)
030f721b51 : [primTorch] refs: isclose - throw error (#78922)
57c117d556 : update signbit docs and add -0. to reference testing for unary and binary functions. (#78349)
b769a0e18b : Make ShufflerDataPipe deterministic for persistent DL and distributed DL (#78765)
184e0065b3 : add better error message for class method
d6be779ee0 : [RPC] Fix init_rpc_twice test for MacOS
080cf84bed : Reland hardtanh ref (again) (#78914)
2d354cdc2a : [ROCm] Enable test_instantiator, test_type_hints (#78633)
ddf1930734 : Revert "reland Hardtanh ref (#78894)"
823ddb6e87 : reland Hardtanh ref (#78894)
157d478a30 : Fix omission of shape in size check in index.
99882fc492 : Make check() strongly typed, fix erroneous call sites
4075972c26 : [BE] Fix FSDP flaky checkpoint test
e6cc2e8d38 : Revert "Ported hardtanh decomposition to ref (#78689)"
587efdb5fa : Replace TensorMeta with FakeTensor
484282a6fd : Ported hardtanh decomposition to ref (#78689)
ac8c6d09d1 : Don't check overflow for scalar arg in comparison ops (#78881)
df748b60f7 : Allow pytrees as output for make_traced and nvfuser executor (#78802)
b7caf402aa : Revert "[PyTorch] Avoid initializing storage for empty Optionals"
a3468b7d4a : [ao][sparsity] Added the Nearly Diagonal Sparsifier
afe8a25eb0 : [primTorch] Adds dsplit/dstack reference (#78696)
0990a1c627 : [FSDP] Profiling range for FSDP.backward (#78479)
519347df49 : fix gather sparse_grad backward crash with empty index tensor
54c99a9e1d : relu ref
17bd683aad : [PyTorch] Avoid initializing storage for empty Optionals
c6ca4a4038 : Fuse matmul in row-wise sharded linear to have a single matmul.
7860ce5b79 : Fix tests that were never running, add a new test
bcb424c8cf : Fix #78675
720cb5023a : [Static Runtime] Implement prim::Fork and aten::wait (#78780)
83d40a4dba : linalg_cholesky_ex meta function
6120a8e05d : Implement meta function for aten::index.Tensor
789687fe12 : ci: suppress output to reduce confusion
bc84143152 : Orthogonal Polynomials (#78304)
fb02acef1f : [Checkpoint] Test file for checkpoint_wrapper
55cac22cdf : [MPS] Add `arange_mps_out` implementation (#78789)
8c8527233f : Update the Contribution Guide (#78779)
7d0bbb853e : Add userbenchmark support to TorchBench CI (#78794)
b6378103c0 : Add comment link for Job Run URL (#78823)
1bd21dd152 : _linalg_qr_helper meta function
e0d78950f0 : Upgrade mypy to 0.960
def778527e : [ONNX] Quantization support for five ops (#78103)
1f819ee965 : Add check for no grad in transformer encoder nestedtensor conversion (#78832)
4cfd09d7bc : Reland: Add index value checking to MaxUnpool2d and MaxUnpool3d (#78280)
bb20296190 : rebase via comment - better ghstack comment (#78833)
4dc5992a79 : [composite compliance] _make_wrapper_subclass respect strides (#78520)
6d7eddbb75 : Make allocator check C10_UNLIKELY
2fce7483a5 : Back out "[shard] make ShardedTensor a torch.Tensor subclass" (#78796)
b7cb4eae6b : Fix embedding jvp support by making embedding_renorm ignore forward mode AD (#78560)
14b0e9e75f : [cuDNN] Don't enforce bitwise exact results in `test_conv_transposed_large_cuda` (#78147)
0f05e39870 : [ONNX] Fix shape inconsistency when exporting scalar log2 (#78701)
fb7a761ffd : [ONNX] reduce log spam when exporting dropout in training mode (#78309)
e8b8a9f1ae : Add Distributed merge rules (#78751)
40e2aadf47 : Create __init__.py (#78629)
eb49dde9cf : Disable TracerWarnings on NNC opinfo tests
c5a0d8dccc : Default on green (#78811)
86c42d63c8 : rebase via comment - rebase to any branch (#78772)
79931889f7 : Fix distributed_test.py flakiness (#78797)
a4509f5b72 : More forward-over-reverse implementations. (#78740)
298b9ad708 : [ci] harden test stats reporting
f7ac389e71 : Run MPS tests (#78723)
6ba1d05fa4 : to_padded_tensor doc v0 (#78657)
26d273959c : Add Caching of Conversion to Fake/Meta tensors in FakeTensorMode
4615738a3d : [FSDP] Allow different `optim_input` orders across ranks
d4d8aaf7cb : [FSDP][Docs] Fix typo in `full_optim_state_dict()`
1683a2618d : rename BUILD.buck to BUCK.oss (#78792)
fc66521ebd : [cuDNN] [cuDNN v8 API] Support cuDNN Errata Filter (#73934)
c29df68f95 : [FSDP] Return original module when fsdp wrapped model call .module (#78671)
1884d7fbe9 : Avoid CPU Sync in SyncBatchNorm When Capturing CUDA Graphs
1eab34d173 : Remove non-existing code_template.py glob (#78773)
dd45e8d3cd : Add linux-focal-py3.7-gcc7 to merge_rules.json (#78785)
954522a485 : Revert "Autogen Tags enum, and allow specifying tags while defining an op"
c0814bff87 : [ONNX] Variable length argument support for quantized_args (#78775)
1ea4075bda : Ported t decomp to become a ref (#78686)
9476a78f37 : Autogen Tags enum, and allow specifying tags while defining an op
063c93665c : [quant] follow up fixes for prepare_fx/prepare_qat_fx calls in classyvision (#105) (#78660)
ad1bff1bff : [TF32] Fix typo in tf32 wrapper function (#78438)
b740a99b9e : [cuDNN][TF32] Threshold adjustments for TF32 on `>=sm80` (#78437)
416f581eb1 : Updating torch.log example
8047d2a564 : Revert "Reenable assert after test update"
76392c67ed : Migrate off Xenial gcc5.4 for trunk jobs (#78734)
4ab62ecfae : [JIT] enable autocasting + freezing test
88f4a12402 : using new rockset commit_jobs_batch_query to print check statuses (#78750)
ef0332e36d : Allow relocatable device code linking in pytorch CUDA extensions (#78225)
9446f9678a : repeat_interleaves meta function
cc6a51c9f3 : added shape checking to WeightedRandomSampler (#78585)
28f87b9cf9 : [Static Runtime] Fix aten::clone out variant (#78297) (#78322)
ebfc70f37a : [static-runtime] out variant for aten::mean (#78161)
22b10873f3 : Allow torchdispatch to customize dim()
b30b1f3dec : update mps note with more details (#78669)
3e0f1a8a32 : Add option to skip binaries when doing pip install for lintrunner (#78668)
d8093105d1 : Update linter to validate file names are compatible across OSes (#78736)
3354b31e9a : Fix nightly docs push: don't use nonexistent 5.4 image (#78730)
501d0729cb : move build_variables.bzl and ufunc_defs.bzl from pytorch-root/tools/ to the root
5ef378a30f : Fix out of date documentation & remove friction points (#78682)
4220799ea7 : scripts: Fix dry run for cut-release-branch.sh
844368c032 : remove Bazel globs that don't match any files
7d12eecba1 : move GENERATED_CPP_CUDA to caffe2/build.bzl
f1132c2c3c : Remove mentions of deleted TH and friends (#78683)
9c8eb2cf1b : Leaky relu in metal shader (#78544)
849b08f14b : [reland][chalf] where(cpu and cuda), pow(cuda) (#78665)
d578197747 : Revert "Fix embedding jvp support by making embedding_renorm ignore forward mode AD (#78560)"
883f8ef62e : [DataLoader] DataLoader now automatically apply sharding to DataPipes
642fc94501 : Update extending.rst (#78707)
79ddc32b6a : Add a check to ensure input func to Library.impl is callable
ebc4cfe3aa : Add __all__ definition in torch.profiler to fix Pylance type check er… (#78553)
b0814b63df : Reenable assert after test update
308d813d45 : Add nonuniform observer class and tests
eb88ea01b5 : Cleanup impl_nvfuser for unary ops (#78670)
7fc73285da : Added a function that prints the check statuses run on a given commit SHA (#78663)
4a5381ab40 : Bessel functions (#78451)
78824a7d54 : Revert "Always convert truthy booleans to 1"
ce7c7bb2a9 : Fix embedding jvp support by making embedding_renorm ignore forward mode AD (#78560)
0be4672a9d : [primTorch] Use the same error message as in ATen for canonicalize_dim (#78541)
48256f3cbb : Reference implementations for rot90, roll, atleast_1d,2d,3d (#78080)
fea909b43e : [primTorch] Adds broadcast_shapes reference (#78612)
4858c56334 : MPS: Fix issues with view tensors and linspace. (#78690)
3c3c6cd982 : Always convert truthy booleans to 1
388d44314d : Fix docs for torch.real (#78644)
b651148fc3 : remove prims::square (#78627)
876c359347 : Generalize sizes and strides policy on _make_wrapper_subclass
64a01f12ad : Revert "[complex32, jiterator] cos, sinh, cosh, tanh (#78458)"
02273f056b : Norm decomposition (#78582)
cfc968956c : [ONNX] Update CI test script to run parallel by default (#78200)
bf629642ff : remove math kernels that have derivative formulas in core
575c420287 : [DataPipe] Lazily generate exception message for performance
7dc5b5bf10 : move generated_srcs_list.bzl into caffe2/build.bzl
d90652db65 : Docs: build with Sphinx 5 (#70309)
22fd2f2e05 : [quant] Factor out common operator configs from native.py (#78407)
634954c55c : [MPS] Do not pass linker command to a compiler (#78630)
ca7f948806 : Don't include libiomp with conda install on MacOS (#78632)
6671b504f7 : Modernize FakeTensorMode, throw on non-fake inputs
24b7142d7a : Update distributed/CONTRIBUTING.md to remove ProcessGroupAgent references and add test instructions
5874a31169 : [quant][core][better-engineering] Rename files in quantized/cpu directory to conform with non-quantized counterpart filenames
aa06d05297 : enable with semantics
9b81e81771 : [PyTorchEdge] Extend Flatbuffer to get mobile_info for NMLML workflows
5fbec86fae : [complex32, jiterator] cos, sinh, cosh, tanh (#78458)
4bb8db85e9 : Revert "[chalf] where(cpu and cuda), pow(cuda) (#77640)"
272193d026 : Move THPStorage definitions out of `torch/csrc/generic` (#78032)
6a4997e66a : [Profiler] Weaken ordering check during post processing.
5aa2ed1922 : Remove call to `.contiguous()` for `local_shard_t`.
497ae27050 : [chalf] warn once on creating a chalf tensor (#78245)
3697cf7f76 : [chalf] where(cpu and cuda), pow(cuda) (#77640)
cd4ffc865b : Skip test_fn_gradgrad_linalg_pinv_singular_cuda_complex128
7390658e80 : Add APoT tensor class and tests
d990277908 : Make lintrunner compatible with M1 (#78628)
03cf01bdc0 : `index_select` for COO CUDA tensors. (#77551)
de5a2320f2 : Mark more methods of DispatchKeySet as constexpr
ffaee6619c : tools: Add ability to grab release versions
44aa4ad894 : Use `_all_gather_base` and fuse matmul for sharded linear.
effd270986 : Fuse row-wise sharded linear matmul to increase perf.
93d5a722b1 : [coreml] Introducing Quantization (#78108)
2d5eac48d5 : Revert "Reference implementations for rot90, roll, atleast_1d,2d,3d (#78080)"
6dafefe3d4 : Revert "[primTorch] Use the same error message as in ATen for canonicalize_dim (#78541)"
40ada91161 : [PyTorch][easy] Fix borrowing from optional in binary_cross_entropy_with_logits
c944cf8745 : [PyTorch] Clean up native transformer implementation
bfc3b955a3 : [DOCS] Add docstring to _get_async_or_non_blocking in _utils.py (#78036)
c054993b53 : [primTorch] Use the same error message as in ATen for canonicalize_dim (#78541)
efdb4192bc : set data permits requires_grad=True on integer tensor
e41389f84b : [Quant][docs] Replace qconfig_dict with QConfigMapping in docs
43c09b5cef : Support saving Bfloat16 tensors for XLA/HPU (#77534)
e78a7835bb : Move dynamic RPC tests from common rpc tests to tensorpipe tests
20c32503eb : Add more impl_nvfuser for prims (#78493)
a3bdafece3 : MPS: add linspace op (#78570)
619db91165 : Make linalg.vector_norm a proper reduction
9b97d5d625 : Add complex_to_float option in ReductionOpInfo
c7e9eea915 : Expose is_out to python
9229749f62 : Cleanup unused CircleCI binary jobs (#78596)
e8bf3a9cd4 : Remove Python 2-related code from dataloader (#78594)
0d8c9406de : [S271837] alias torch._utils_internal (#78597)
99244435f6 : Resolve TODO after Python 2 for custom_fwd (#78592)
5bcbad7614 : Fix perf regression introduced in #70943 (#78588)
7ea9c6edc2 : [primTorch] Adds broadcast_to, column_stack references (#78416)
0520810461 : [Vulkan] Add cumsum op (#78554)
1f680a2752 : Temp fix for upgrader (#78589)
41e2611e6a : Fix left nav (#78552)
c20969c40c : Fix ParameterList printing meta tensor
fd37d1d870 : Minor updates for upcast_tensor
d7dd0df22b : Add ability to compare skips/xfails against a text file of operators
fca1f495c2 : Revert "Port `index.Tensor` to structured kernels."
1705be8ff7 : Fix `_free_weak_ref` error (#78575)
6548f8335d : ci: Only pull image if not available locally
a171ea6b47 : rebase bot - add ghstack config (#78561)
ceb93afe3f : Revert "Fix bug in flatbuffer deserialization"
b6c53f90b9 : Add nn.module activation support in BetterTransformer (#78394)
dc169bb5fc : Assert that the autodiff implementation of backward() returns the correct number of values
5b922c29e9 : [ci] disable flaky mobile-lightweight-dispatch-build
d0baf5792a : [Testing] Turn instantiate_parametrized_tests into a decorator (#78228)
9fe6f1baf5 : Port `index.Tensor` to structured kernels.
f733fa0b13 : Remove gcc5.4 from docker/build.sh (#78405)
1b4b5acafd : Fix -Werror=type-limits in UnarySignKernels.cu
7f12b0c5b2 : Remove gcc5.4 jobs from CircleCI config (#78555)
96deba836a : [DataLoader] Fix unraised exception in eventloop
288b23bc52 : fix MetadataTensor example (#78073)
a88f155f4b : [XLA hash update] update the pinned xla hash (#78330)
c88367442d : [forward ad] forbid non-float non-complex tangent and primal
96c134854d : Reference implementations for rot90, roll, atleast_1d,2d,3d (#78080)
d71816a51b : Revert "[ci] disable flaky mobile-lightweight-dispatch-build"
12f911d5e2 : ci: Re-build when docker images aren't available
3524428fad : DOC Corrects default value for storage_offset in as_strided (#78202)
b27f0fea2c : [ONNX] Fix case in type annotation in docs (#78388)
2bb4fce8b9 : [ROCm] TestGradients: Enable grad and gradgrad (#78401)
aa62b3e003 : Add test case for issue: https://github.com/pytorch/pytorch/issues/77851 (#78547)
f12ef908a8 : added stub script that prints out latest commits (#78397)
01d20491ff : [ci] disable flaky mobile-lightweight-dispatch-build
7313a7a987 : Make Meta into a backend component
eee2aa14a6 : Register std_mean ref as a decomposition
523c9c2ac2 : [ci] some improvements to buck workflow
6883b0ce9f : [ONNX][WIP] Refactor verification.py
050e416f40 : Save a heap allocation in embedding_bag_{byte,nbit}_impl
7e72c96b10 : Fix bug in flatbuffer deserialization
f42b42d3eb : MPS: Implement aten::count_nonzero.dim_IntList (#78169)
fe67dff82a : Deprecate `TSNodeLoweringInterface` (#78273)
032f8d0aa2 : Register the extension device module as a native module under torch namespace (#78329)
cea7dd1646 : Add FakeTensorMode
4c18f362a9 : add support for type_as/_to_copy
98e0816986 : Extend __new__ on subclasses to set custom_device and custom_strides
678213ead2 : Fake Tensor Part 1
d136852bda : [CUDA][Linalg] Add a `driver=` kwarg to `torch.linalg.svd` and `svdvals`; add cusolver gesvdaStridedBatched driver to svd (#74521)
235f8db3d8 : [quant][core][better-engineering] Rename files in quantized/cudnn directory to conform with non-quantized counterpart filenames
51ecc366e1 : [DataLoader] Minor documentation improvement
2ac79c9e45 : Remove dataclasses from requirements (#78501)
e387fb4df7 : [FSDP][BE][Docs] Improve auto wrap policy doc (#78400)
789115e05e : Don't check for linalg errors on meta tensors
59fdb627a3 : Reenable TestMeta native_batch_norm and native_layer_norm
be0629e925 : Reenable TestMeta slice
4bbc3e2809 : Some helper code for determining missing meta coverage for XLA ops
0865df4b87 : Reenable TestMeta testing for isnan
1e11fc894c : Reenable tensor_split meta testing
b7215de32f : prod ref
e562ed0964 : Register PrimTorch sum as a decomposition.
5620ebad5f : Unconditionally transform dtype arguments to double for upcast
f09f6aadb4 : Update ideep for ideep conv changes (#78238)
8f7e3791ef : Make PyTorch importable on python-3.7.0 (#78500)
dabf8f0569 : Populate the torch._decomp table on import (#78476)
a468941355 : Fix jiterator doc format (#78471)
017b0ae943 : MPS: Fix crashes in view tensors due to buffer size mismatch (#78496)
b6672b10e1 : Fix incorrect decomposition for native_dropout (#77933)
3f334f0dfd : Fix `asarray` documentation formatting (#78485)
11b9a81e02 : [NNC] channels last propagation within NNC fusion group (#76948)
c7b4eec233 : [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452)
bde246fcc6 : Speed up test_mps from 9min to 25s
02551a0025 : Remove prints and add proper asserts
472d67a727 : Revert "move XNNPACK buck build to shared build file (#77941)"
64e0d0c4fe : Laguerre polynomial (#78366)
089203f8bc : Updates floor_divide to perform floor division (#78411)
3ee863cb7c : [ROCm] enable test_lobpcg_ortho_cuda_float64 (#78385)
e9afb43676 : Add meta device support to `_UntypedStorage` and `_TypedStorage` (#78008)
d63db52349 : MPS: Fixes the as_strided_mps implementation for contiguous view operations (#78440)
9dc6d42c18 : Probabilist’s Hermite polynomial (#78357)
b8b46f932b : move XNNPACK buck build to shared build file (#77941)
18273c39da : Physicist’s Hermite polynomial (#78352)
2df1da09e1 : Add Elementwise unary ops 4 references (#78216)
437ecfc461 : [MPS] Fix `copy_kernel_mps` (#78428)
40a6cc6cc6 : Chebyshev polynomial of the second kind (#78293)
4963d41f9d : Add logsumexp to AMP autocast (#76330)
1a9a1b8b5e : fixing typo (#78417)
8552acbd74 : MPS: Eye op (#78408)
aefb4c9fba : [mps] Do not use `malloc`/`free` in Indexing.mm (#78409)
ee86af17fb : [CI]Preserve `.ninja_log` for Mac builds (#78387)
8c88a55d44 : Fix sparse BSR tensor validation.
2e32d5fcd8 : MPS: Add adaptive max pool2d op (#78410)
8ad305f375 : default argument handling for mobile unboxing codegen
85f308275e : [fx2trt] Fix dummy weight initialization in conv1d converter (#78402)
299fbbccec : [ONNX] Fix `check_training_mode` in symbolic_helper (#78376)
dfd78bf4ab : Generate CUDAConfig.h only for CUDA builds (#78218)
2679755bdc : [static-runtime] out variant for aten::max (#78271)
ac031e1326 : [GH1] Switch trymerge concurrency to be PR-based (#78296)
26852d6fe1 : Remove mentions of py3.5-3.6 (#78318)
6ee072a324 : fix missing dim out of range check for logcumsumexp_cuda with empty source tensor
029bbe4995 : Chebyshev polynomial of the first kind (#78196)
8bd8f62812 : [primTorch] refs: margin_ranking_loss, hinge_embedding_loss (#78057)
4e1f41f66a : [docs][nn] conv: complex support note (#78351)
31016eb81e : [primTorch] Elementwise Binary Ops I (#78023)
e840b82b35 : Migrate off xenial gcc5.4 (#78137)
d2d1d78f81 : [GHF] Add spaces between words in merge fail reasons (#78378)
483bb4f0cb : [ONNX] Extract export verification as standalone api from unittest
6b4ffa14df : [Docker] Pin protobuf to 3.20.1 (#78369)
1a41cd8f97 : Conv BN folding data type issue when conv has no bias (#78241)
dde56ca329 : fixing invalid upstream for rebase to viable/strict (#78371)
45462baf7e : MPS: add ranked tensors for addcmul ops instead of constants and update version_check (#78354)
d98a8148b6 : [ci] remove ciflow/all (#78317)
5ecd30e857 : [primTorch] Rename is_finite->isfinite (#78211)
92229adf0c : add special handling for resize_() in functionalization pass
e9c54ae1c2 : functionalization: remove some unnecessary view_copies in inplace views
7ff091fc4e : move Functionalize dispatch key closer to backends
5cc258ec9e : make block_diag composite compliant
29189d2ba8 : [LT] Add IR resuing support for manually-implemented ops
91a4fe0777 : [docs] Move a sentence from `nn.Transformer` to `nn.TransformerEncoder` (#78337)
a4723d5a5f : Fix coreml ios workflow (#78356)
3924d56fae : `BCE` loss: forward-over-reverse AD support (#77852)
5ef9108ad2 : Revert "MPS: add ranked tensors for addcmul ops instead of constants and update version_check (#78312)"
50930604cf : Hackily use addSkip to track flakiness in common_utils.py (#78292)
fa54eb1fb6 : Revert "Revert MPS changes (#78335)"
fa7117c64a : Update PeachPy submodule (#78326)
18d46ea9fd : [PyTorch] Integrate Execution Graph Observer into PyTorch Profiler (#75358)
59d29bfd52 : Remove code for circleci from binary_populate_env.sh (#78321)
f4e493e1b6 : Remove hardcoded -pthread (#78324)
ffb3101484 : Revert MPS changes (#78335)
032d1ace1d : [ci] disable flaky MobileProfiler.Backend test
d12bf9fd75 : [static_runtime] Add auto-generated view ops (#77106)
1803a592f4 : [static_runtime] Add script to auto-generate view ops (#77105)
f69c990ecc : fix index_select when source tensor is empty
d6db5ea50d : Back out "add mixed data type mode for LayerNorm forward path"
a0b3814433 : Clean prefixes when searching for params / buffers to ignore (#78278)
52034a0fbd : Disable XLA cc test on PyTorch CI (#78327)
b6920405da : reorder checks to shave 1 us off no-op dispatch time
59b6052dad : MPS: add ranked tensors for addcmul ops instead of constants and update version_check (#78312)
8225f42a8a : [quant][fx][equalization] Fix example_inputs follow ups in test_equalize_fx
c1cbe3bad3 : Enhance the _rebuild_qtensor to support device types other than CPU (#78234)
46b0306c13 : Update Release.md with latest details (#78285)
e9d0f5fb17 : Eliminate Named tensor warnings in XNNPACK and QNNPACK
7ea5fa3dd4 : [reland][quant] Add utility function get_fqn_to_example_inputs
56c23f5633 : [SR] Out variant for embedding_bag_byte_unpack
3b70a7c294 : [primTorch] impl_nvfuser for unary ops - 1 (#78220)
a2ef1edb1f : MNT Check that torch._refs are in python_ref_db (#78222)
13e444bfa5 : Fix internal build
5e03dfd36d : [ONNX] Wrap test decorators with functools.wraps (#78254)
716f76716a : [quant] Skip some broken tests due to hypothesis
c8ab55b293 : Fix the MPS Heap volatility (#78230)
cd41c8f032 : fix set item to scalar tensor missing gradient info
e01fb9cd07 : Set block dim and grid dim macros within NestedTensorTransformerFunctions.cu (#77199)
bbb2964bd8 : Add functionality for rebasebot to rebase onto viable/strict branch (#78276)
49979c4021 : [symint] Make TensorImpl::sizes_and_strides_ contain SymInt
7e4730d017 : [PyTorch] Round T up to next multiple of 8 in NestedTensor case
51c4c79e3d : Use random seed in normal_mps_out (#78010)
fc2aef5562 : Add MPS merge rules (#78259)
8449ac770c : Build Python module and tests CPU mode with Bazel in CI
141238a889 : [PT-D] Enable nan_to_num op for sharded tensor
a9b68009b6 : [quant][core][better-engineering] Rename files in quantized/cuda directory to conform with non-quantized counterpart filenames #77431
593d66e1b3 : Add lazy shape inference for logical boolean ops (#78004)
ebba4219ae : torch/distributed: move WorkerInfo registration into libtorch instead of libtorch_python (#78028)
8412f209f0 : [FSDP] Remove unneeded padding logic for optim state dict
cdb009c860 : Add from_blob with storage_offset arg
53e05ad4b2 : ns for fx: remove restriction on nodes with no args and only kwargs
705082656a : Fix typo in testname (#78258)
e0a071a47e : [Profiler] Abstract interface for Python tracer
34d160b1fa : [Profiler] Build call tree in `collection.cpp`
9f9e8ff232 : [XLA hash update] update the pinned xla hash (#78144)
7e76772992 : ci: Remove verbose windows compilation commands (#78197)
b9fb940dec : Conversion between SparseBsr and Strided (#78025)
2679aa4789 : [MPS] Lazy initialize allocators (#78227)
ba952d8308 : [quant][core][improvement][bug fix] Added channel axis bound checking in fused_moving_avg_obs_fake_quant_*
a98b1a8b6a : [quant][core][gpu][improvement] Made plan and run for quantized linear op conform with Conv_v8.cpp
b447114873 : [quant][core][gpu][improvement] Removed linear_output and set output tensors as virtual in quantized cudnn linear op
d5f99581b5 : Pass WITH_BLAS option from environment to CMake (#78037)
589f40e3ad : [quant][core][gpu][improvement] Made plan and run for quantized cudnn conv op conform with Conv_v8.cpp
87148f2b59 : Revert "[quant] Add utility function get_fqn_to_example_inputs"
82cb7210e8 : [PyTorch] Fix record function inputs_valid_ check (#78002)
17c1aed2b5 : remove torch.no_grad from sample_inputs (#78076)
bd5ec6c8b7 : [quant][core][gpu][improvement] Removed conv_output and set output tensors as virtual in quantized cudnn conv2d op
50a44fe461 : [quant] Add utility function get_fqn_to_example_inputs
b291752685 : [quant][core][gpu][improvement] Enabled broadcasting multiplication support for requantize_multiplier_tensor in quantized cudnn add, linear, and conv2d ops
a1765f0176 : addr ref
72a4f6773d : Add an argument to specify warmup iterations (#78124)
357707b9f9 : [GHF] Better "Reviews missing" error message (#78219)
d450034f24 : Revert "Beta function (#78031)"
161e931156 : [ONNX] Modernize python syntax (#77935)
f3af51069d : Modernize LoggingTensorMode
588826b389 : Fix gradcheck when outputs that don't require grad precede those that do
07e4533403 : reland of as_strided support for functionalization; introduce as_strided_scatter
2aad28a539 : [quant][core][gpu][feature] Implemented quantized cuda gelu
b7bb34d762 : [MPS] Add version check (#78192)
2d93e1fada : Add slow path for device
6db8440f35 : Python Jiterator supports multiple outputs (#78139)
b994ce359e : Revert "[cuDNN V8 API] (reopen) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (#77002)"
da16450360 : Beta function (#78031)
f37ce948ff : add bfloat16 support for kl_div_backward_cuda (#77676)
08dafb761e : Automated submodule update: kineto (#75687)
a95f1edd85 : Revert "as_strided support for functionalization; introduce as_strided_scatter"
a52bfe2c5d : Convert MPS Tensor data using MPSGraph API (#78092)
6185478edb : make logsumexp composite compliant
2eea5eff62 : functionalization: fix bug with multiple views of same base
26d9386f67 : Make string serialization of C++ FunctionSchema consistent with torchgen.model.FunctionSchema
c083489f46 : [kineto] Optimize getStepCallbacks for common case of no active callbacks
02c4d877b4 : Codegen Non-Native IR Nodes (#76535)
13dcba8c07 : ci: Remove runAttempt from s3 artifact upload
ab5e6f0915 : [chalf] enable testing for multiple ops (#78171)
331629046d : update xla commit - check for exist pr (#78156)
e4f5203386 : print available modules in predictor error message (#78101)
3a921f2d26 : as_strided support for functionalization; introduce as_strided_scatter
7ddc1425ff : functionalization fix for inplace comparison ops
22d566acda : functionalization fix for inplace_view ops
70c511b5c8 : [GHA] broaden upload test stats criteria (#78177)
a1f0e69519 : [ci] improve upload test stats logging
0c8c39fa71 : Fix derivatives of norm(p=inf)
664bb4de49 : [composite compliance] backward: cummin, cummax (#77872)
80c4919bec : [PyTorch] Stack-allocate boxed args for RecordFunction
e451259a60 : Reorganize Community Section v1 (#77912)
821c711baf : Revert "Move THPStorage definitions out of `torch/csrc/generic` (#78032)"
ee4034ed0d : Revert "masked logsumexp/logaddexp"
6f4d200725 : [complex32, jiterator] sin, asin (#77606)
4ea176ea57 : expose fast get_current_stream (#78165)
49e15b578a : masked logsumexp/logaddexp
851885aedf : .github - add pytorch dev infra team as codeowner (#78134)
9b8abff4ac : fix typo in docstring of `Transformer.forward()` (#78167)
f9e346d5ac : [opinfo] transpose_conv, conv, adaptive_{max, avg}_pool unbatched samples (#73002)
2ac09cc6ce : changed install_katex.sh to install_docs_reqs.sh, added install doxygen (#77907)
f012152836 : Move THPStorage definitions out of `torch/csrc/generic` (#78032)
9e806619cc : [Codegen] Remove view operator check in NativeFunctionGroups and allow skipping native function generation (#78145)
6244daa6a9 : [MPS] Fix torch.mps.is_available() (#78121)
c7ce4fcc61 : [MPS] Initialize `MPSDevice::_mtl_device` property to `nil` (#78136)
8eb62bd7ba : [shard] make ShardedTensor a torch.Tensor subclass
fb84f2223c : Revert "[symint] Make TensorImpl::sizes_and_strides_ contain SymInt"
c274f2ad52 : [cuDNN V8 API] (reopen) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (#77002)
c50089712c : Revert "Add index value checking to MaxUnpool2d and MaxUnpool3d (#70545)"
2ae3c59e4b : [SR] Remove linear/relu fusion
bb4653e736 : Add i0, i1, zeta refs (#78111)
0bad87a932 : xla hash auto update - add token to trigger workflows (#78094)
53ef66bb59 : Add index value checking to MaxUnpool2d and MaxUnpool3d (#70545)
a136408ada : [complex32, jiterator] tan, atan (#77802)
623cfb2596 : [skip ci] delete triage.yml
c186250d95 : raise error when groups is not positive in Conv modules
018982318c : [GH1] Add builds as required status checks (#78086)
17ff484412 : [ci] don't run distributed tests on mac
2c4312c5ba : [ci] add retry to clean before checkout step
317d601e8d : Fix docstring for nn.Hardswish (#70993)
ea5d01e629 : [Primtorch] Tried porting leaky_relu into a ref (#78041)
652ecc9ad9 : [ONNX] Fix typo when comparing DeviceObjType (#78085)
9aed30d3ad : [ROCm] support benchmark flag for MIOpen (#77438)
cac16e2ee2 : Minor typo in contributing doc fixed (#70284)
b2d1104471 : Fixed numpy bool check (#77857)
793ee6b7ed : prims shouldn't be checked for BC checks (#78079)
e2eb7a1edc : [GHA] attempt to re-enable mac test workflows (#78000)
2738405a76 : [primTorch] Adds any, all, equal, item references (#78072)
88fca3be59 : [reland][complex32] conv1d, conv2d : enable test (#77999)
2676931d3e : [composite compliance] forward_ad: linear (#77950)
141ea86c33 : reduce overhead of get_current_stream (#78066)
d4345ed0a6 : [primTorch] Adds random operations (#78026)
735ab79168 : Static initializer update (#78052)
d7680cb7f0 : [Profiler][Trivial] Switch to nanoseconds for Result's internal representation
e17f14fab2 : [Profiler] Propagate metadata into `Engine::evaluate_function` event.
71b94b09ae : [Profiler][Trivial] Force Result to be a shared_ptr
33dc5d1a39 : [Profiler] Move Allocation into EventType.
acfbc16b1c : Revert "[primTorch] Adds random operations (#78026)"
59be76c6cf : [Profiler] Introduce `torch::profiler::impl::EventType`
043cf1f9c7 : [primTorch] Adds random operations (#78026)
192aa3ad5f : adds std and var refs and var prim (#77948)
5f1b0a4f48 : [primTorch] add exp2 (prim and ref), log10 (prim and ref), frac (ref) (#78046)
8da4993557 : [fix] use `opmath_t` for activation functions in Activation.cu (#77949)
57fab66fdc : [primTorch] add refs fliplr, flipud (#78049)
37eb31599c : [reland] Add sharding tests to multigpu-test.sh and fix custom operator decorator (#77987)
416899d1a9 : [quant][fx][bc-breaking] Add required example_args argument to prepare_fx and prepare_qat_fx (#249) (#77608)
774b4ff83e : Reenable mul tests
6ffa9a8ad2 : `linalg_ldl_solve`: change to new `MetaBase` API.
6fa7efaafe : `linalg_ldl_factor`: change to new `MetaBase` API.
9a941ec2c2 : `_linalg_svd`: change to new `MetaBase` API.
59bc76f97a : `linalg_lu_factor_ex`: change to new `MetaBase` API.
cc9eae13a8 : Drop `set_output` function from `MetaBase` API.
e0ff6a21d8 : Replace `set_output` by `set_output_raw_strided` in the codebase.
cbdb694f15 : MPS: Fix the memory growing issue and BERT_pytorch network crash fix. (#78006)
4428218945 : [primtorch] Added `native_group_norm` decomp (#78029)
c0abd83482 : Prepare python jiterator for multiple outputs (#77921)
580e6583d5 : [Profiler] Fix segfault in AppendOnlyList
673346b350 : [Profiler] Pop `KinetoThreadLocalState` at the start of post processing.
47834679ba : Disable complex32 meta conversion, which removes a few skips
6b273444c4 : Add logit ref; allow non-refs to be called in refs.
45eab670ac : Add test_imports (#77728)
1d845253d8 : [ci] move rocm jobs from pull to trunk workflow
50cadfae10 : Add strictness check and made tensors into leaves if input tensors were leaves (#77474)
ffa3cce100 : [Codegen] Expose namespace argument for static dispatch (#77710)
c82fb7a67f : Adding support for upper and lower bound functions in SSA
9432be9b8c : [flatbuffer] Move saving storage to the last step. (#78024)
64b4bb4b01 : Fix meta tests on norm (and relanding norm fixes) (#77930)
c7e8de6e86 : Revert "[mergebot] Add all-green option (#77660)"
340c4120d5 : [mergebot] Add all-green option (#77660)
1b878b4eaa : Add a consistent check API step so we can track our usage (#77998)
a9a99a901e : Add ignore for -Wunsupported-availability-guard
04ac80c73a : Fix a few issues on assert/double error/legacy constructor (#77966)
a7a818d9e2 : [symint] Make TensorImpl::sizes_and_strides_ contain SymInt
86a0b9621b : Added note about Dev Infra Office Hours in CONTRIBUTING.md (#76667) (#77883)
bb7fd1fcfb : [caffe2] fix type annotations for workspace.SwitchWorkspace() (#77464)
53b30579b7 : Revert "[complex32] conv1d, conv2d : enable test (#77239)"
44c91383d3 : [quant][ao_migration] Base package in tests
dbee7e5499 : Adding SSA support for convolution_backward
417373337f : Put imports in correct order so clang-format doesn't get mad every time
855c4eb051 : [symint] Change SizesAndStrides test back to using negative ints
a645abd5aa : Update nightlies from 1.12 -> 1.13
0f74b44f1a : Revert "Add sharding tests to multigpu-test.sh and fix custom operator decorator (#77825)"
9d44b3d110 : [quant][refactor] Remove the base class from __all__
19701267f3 : MPS: Add back the memory leak fixes. (#77964)
8d4c8df33a : Add sharding tests to multigpu-test.sh and fix custom operator decorator (#77825)
bee26da294 : [GHF][BE] Use GraphQL fragments (#77945)
64c741e202 : Add `torch/package` to list of blacklinted folders (#77937)
a8467de6fa : Guard test_sparse_csr.test_mm on CUDA11+ (#77965)
ae9b17ad14 : auto make pr for xla pinned hash (#77824)
b9087b664f : rebase via comment - ghstack (#77911)
2d3a6d7274 : [complex32] conv1d, conv2d : enable test (#77239)
53e0d7a3ba : Revert "MPS: Fix some memory leak issues in release pools (#77934)"
38bc10ae25 : retry - enable NVFuser by default
2bc2adf2ba : MPS: Fix some memory leak issues in release pools (#77934)
734a97a7c8 : Revert "Revert "Switch to use nested tensor by-default in Transformer… (#77924)
dd313d7338 : support TestCase.longMessage in TestCase.assertEqual
63e9fdd92f : re-add dynamic error messages to assert_close
efb2c093fc : [fix] complex type promotion (#77524)
6dae1e419e : remove unnecessary ATen/core/Macros.h
83d875d431 : re-enable kernel launch checks in CI (#77841)
df1f9b9840 : Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#77756)
a8c929b0a6 : [quant] Reordering the imports in the torch/__init__.py
68e22aa9fc : [symint] add support for negative integers
f54098cd3e : Create JSON from new FX IR and lower to LLVM (#77765)
3d83321b44 : MPS Fixes: copy operations, addmm and baddmm (#77791)
c60d2ef4eb : [StaticRuntime] Replace Permute with copy version only when it's followed by reshape or flatten (#77832)
03546e9c07 : Revert "Fixed type promotion semantics for native_batch_norm and native_layer_norm (#77407)"
cecb2ad95e : Restore old names for private funcs in legacy storages (#77861)
6583c0384b : fixing trivial reduction & broadcast scheduling (#77884)
0d76299ff7 : [ONNX] Clean up module imports (#77423)
e67284d9ee : Added support for `slogdet` in LazyTensor shape inference (#77904)
d6ae650738 : Added support for `inverse` in LazyTensor shape inference (#77888)
67c30a04f1 : [GHF] Add checkruns pagination (#77922)
1bec7f8468 : torch: Fix black linter
a5766c8649 : fix another missing c10::
c915fbe201 : ENH: Convert finfo.tiny to finfo.smallest_normal (#76292)
7892a45741 : Add missing decref to `createStorageGetType` (#77860)
11daf200e8 : Adding activation references for celu, mish, selu, softplus, and tanh (#77473)
e69d13b8b3 : [FSDP][Easy] Update `state_dict()` docstring
d9b3feb27d : [FSDP][Easy] Reword device placement warning
36bf8007f7 : [FSDP][Easy] Fix `state_dict_type()` docstring example
96e674a0c9 : [FSDP][Easy] Doc fixes
0bc4b2af56 : Populate bytecode version and operator version (#77685)
94eba341f8 : Revert RPC Meta device support
82682aab0b : Prepare Jiterator code template for multiple outputs (#77902)
0d6fa91d1b : Revert "Switch to use nested tensor by-default in TransformerEncoder (#77217)"
294fff16ec : add slow path for is_contiguous (#77906)
009c14b014 : aten: Removed unused-local-typedef
b3ca7fc88d : Support saving extra files in pytorch flat buffer format via python (#77870)
fd121dfeec : Move x86 binaries builder to macos-12 to enable MPS build
8cacb199ba : Switch to use nested tensor by-default in TransformerEncoder (#77217)
de646c06d4 : fix jit List[Optional[Tensor]] type singleton bug
e0295f55b5 : Fix derivatives for linalg.vector_norm(..., dtype=)
5984bc8233 : Allow specifying alias analysis while registering new ops
9e0e949484 : Fix bugs, increase meta coverage
7a4e3f329f : Revert "Fix derivatives for linalg.vector_norm(..., dtype=)"
b65a44d7b9 : ci: Have revert use hosted runners
8bffb87735 : fix missing c10::
17fbb85734 : [nvfuser] prevent spamming warning message (#77777)
5e0589ca20 : [CI] Do not use conda-forge for Python-3.9 configs (#77873)
b4a6730ce1 : [DataPipe] Refactor 'mux' to have buffer as an instance variable
ba0ca0f591 : Add torch dispatch mode to ProxyTensor tracing (#77174)
327d313705 : Refactor operator dispatch framework across different Tensors.
0161e9eb00 : [test] attempt to functionalize ops with mutable positional-only args
b8639cf6e1 : masked median
b333a752c0 : Validate that tensors are contiguous in ProcessGroupNCCL
282951da20 : Add knobs for XLA build options (#77729)
99f6e614e8 : Seed `Shuffler` for MP DataLoader without explicit `manual_seed`. (#77855)
2ac35e2ed1 : [GHF] Preserve revert reason in commit message (#77798)
70d80fb424 : Fixed type promotion semantics for native_batch_norm and native_layer_norm (#77407)
00a187c373 : Revert "add slow path for is_contiguous"
2d2b9f9980 : [GH1] Increase checkruns to 60 as we have over 50 in pull (#77859)
97fa1d317f : [DataPipe] Preventing automatic reset call after state is restored
0d9e42408b : Revert "[GH1] Add builds as required status checks (#77596)"
007cc731ce : Move pull linux-docs job to Ubuntu 20.04 (#77700)
3c4af1c496 : [WIP] Add support for elementwise unary ops (#77807)
5cdf79fddc : Bump minimum CMake version to 3.13
00b3b4dc75 : [vulkan] Fix sign mismatch warning
50c60c770e : Remove commented out code in test file (#77810)
eb0ff991f7 : [FSDP] Dont move if on CPU (#77720)
e69e9b0ed8 : Don't test std values if tensor is meta; fixes normal meta
88c89c9eb9 : log_sigmoid_forward out support; out_wrapper_multi
baeffdbc6c : reflection_pad2d support
e526099824 : [GH1] Add builds as required status checks (#77596)
fdd5f7214e : Revert "[DataPipe] Preventing automatic reset call after state is restored"
ec290949aa : Change transpose to return CSC when given CSR, adjust addmm, addmv, mm (#77615)
2d4291fb81 : [torch] Fixed a few test for Windows & Linux GPUs (#77531)
aea6e2c396 : Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459)
ac1837ddd3 : [DataPipe] Preventing automatic reset call after state is restored
7945fa6ce2 : `BCE` loss: forward ad support (#77755)
d1bb420fb1 : std_mean, var_mean : update skip message (#77150)
b3e7230efa : [symint] Fix SizesAndStridesTest to not use negative sizes/strides
c2ff413622 : move generated-autograd-headers to the shared build structure
0c91efb64e : [fx/graph_manipulation] Fix _update_weight_fused_dtypes (#77702)
f9db8b72ac : MHA forward pass bug fix
e3403ff4ab : square support
4a57321a93 : [FSDP] Use post load_state_dict hooks (#76912)
ea27244383 : masked cumsum/cumprod
365ce350cb : Make ShufflerDataPipe deterministic for SP & MP DataLoader (#77741)
4124307fae : [shard] fix failed tests in sharded tensor
929f1d5317 : [RELAND] Adds torch.cuda.is_current_stream_capturing (#77789)
0f328f3532 : Update scale-config.yml (#77803)
dac3fba274 : Add testing workaround for EFA and TensorPipe (#77363)
d40a24005b : [GHF] Add URL for pending/failed mandatory checks (#77763)
5e0f559d23 : Revert "Add sharding tests to multigpu-test.sh (#77708)"
c9570e4b88 : [checkpoint] Synchronize error handling across all ranks (#77091)
8b5f11c61e : Support copy_ for Sparse Compressed tensors.
dcd2ba3538 : improve mps note to describe the different functions available (#77767)
1f7d243e36 : Add USE_MPS option to cmake summary
0794d59d76 : Throw a nice error when SubTensor.__torch_dispatch__() returns the wrong type for detach()
8881d7ac6c : Support no-batch-dim for CrossEntropyLoss with prob target
de86146c61 : rocblas alt impl during backward pass only (#71881)
0d8a0f186b : Revert "Adds torch.cuda.is_current_stream_capturing (#77673)"
a2802ad0b9 : Upstream master bump 0513 (#77471)
4941e72e40 : Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)""
befa4e371e : Fix typo
55be35ae39 : Fix 'Code below assumes there is at least one tensor arg' assumption (#76917)
a7cf95a609 : Add sharding tests to multigpu-test.sh (#77708)
2a99018147 : Adding a way to register both upper and lower bound functions
73480bcbe0 : Adding support for nonzero in LazyTensor shape functions
23b2e8821f : [fix] composite compliance : nuclear_norm (#77734)
4eec865f58 : [nvFuser] Improving bitwise ops support (#77158)
8571007017 : [chalf] div, eq, masked_fill, index_put (#77479)
d03d43df52 : Adds torch.cuda.is_current_stream_capturing (#77673)
ddb2eb7aee : Micro-optimisations for matmul 2.0: Electric boogaloo
c446f78ffd : Use any_type in test_out
4d1ead6dff : [DataPipe] Update `mux` data pipe (#76384) (#77145)
4c34343216 : [FSDP] Warning for shared params, small doc fixes (#77726)
05ce0f9be6 : Add option to disable autocast pass
e10a002e52 : 2D Strided to/from CSC, COO to CSC, CSC to CSC conversion. (#77521)
687ab97338 : [reland][chalf] enable testing for multiple ops (#77656)
edc904d6ba : add native view_copy.out ops, teach codegen about tensorlist out=
a325fa94b9 : [flaky test reporting] print stack trace for flaky reruns (#77664)
580a053832 : [primTorch] Enforces stride metadata (#77542)
93b20b0232 : [FSDP][Easy] Remove extraneous print
ccc991ba29 : Support str for Sparse Compressed tensors
6436fba319 : Migrate cross compilation trunk test to use macos12 to build MPS
7c3d3b759b : Migrate x86 trunk build/test to macos12
13d8fb93bb : Fix derivatives for linalg.vector_norm(..., dtype=)
ff7b6d6b5f : Update linalg.*norm
c15fca1137 : quant doc: improve rendered documentation for backend_config_dict
e7cb44b6c4 : Guard distributed imports (#77727)
a760dc2687 : `binary_cross_entropy`: double backward wrt target (#77416)
1af47a3a3e : ROCM: Enable few more tests for ROCM (#77669)
bfb1206d09 : make `to` properly support permutation (#77610)
e9d660c331 : Revert "Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"""
7b8cf1f736 : [pytorch][PR] [Profiler][Trivial] Format profiler_python.cpp
bd34636b13 : [pytorch][PR] [Profiler] Add EventFieldsVisitor
acf7136a52 : Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)""
3b2375291a : [PT-D][Sharding] Fix view op and matrix ops unit test
6e3391a7c3 : Re-enable ios-12-5 builds (#77611)
48581d74ad : Revert "Add dispatch mode testing for meta tensors and other stuff"
c35bd8d423 : Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"
f6beda89c6 : add slow path for is_contiguous
ee080918df : [DataPipe] Moving DataPipe buffers from __iter__ to instance (self)
bbaefdf6b5 : [DataPipe] Enforcing single valid iterator for IterDataPipes multiple DataPipes as outputs
7c52f204e0 : [DataPipe] Enforcing single valid iterator for IterDataPipes without multiple outputs
e0451d8022 : [DataPipe] refactor to separate _IterDataPipeMeta
4e2f5507d0 : Add support for TxT mask layout for masked_softmax in BetterTransformer (#77607)
b64845eb18 : Revert "[GH1] Add builds as required status checks (#77596)"
3822a472ef : Python function to extract information on mobile::Module from flatbuffer (#77624)
fc4c3c9bc7 : Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)
c1cdb1216b : Add dispatch mode testing for meta tensors and other stuff
7f1e331b34 : Make SymInt constructor explicit
c673696b17 : [skip ci] fix comment spacing in SymIntArrayRef.h
76b952bb35 : [CUBLAS][TF32] Skip `test_cublas_allow_tf32_get_set` if `TORCH_ALLOW_TF32_CUBLAS_OVERRIDE` is set (#77298)
14ab3ff484 : [cuDNN V8 API] Enable cuDNN v8 API by default (#75466)
068d35a648 : Make PartialTensor a torch.Tensor subclass
b87e5f383b : Revert "[flaky test reporting] print stack trace for flaky reruns (#77664)"
090eddf1c7 : Fix MPS interaction with autograd engine
f274558018 : Bitwise ops improvements (#77621)
059c439ed9 : fix complex abs/angle output format (#77585)
54bc94bb51 : [GH1] Add builds as required status checks (#77596)
8f20ea19fd : change sparse_mask to support Half
6f954d7bbb : FSDP parameter sync
8ae0b275f5 : Fix device_id
a580e07d5d : [BE] remove unused RDS pipeline from print_test_stats.py (#77654)
646a20c8fe : [flaky test reporting] print stack trace for flaky reruns (#77664)
d8b80edade : Revert "Use weakref.proxy when saving module to internal dictionaries to not increase refcount (#76435)"
71d61bb78b : Fix typo in torch.package code and docs (#77604)
c003494754 : add channels last support for PixelShuffle and PixelUnshuffle
cfc87cad02 : fix grad(torch.tensor()) using lift() operator
541a378914 : Remove operators that support BFloat16 in the fp32 cast policy list of AutocastCPU (#77623)
dc882ed33d : Add Sparse Compressed tensor support to torch.clone
98a20ebb0f : updating tolerance override (#77402)
369d9f4137 : A few forward AD formulas
4d3930fe8a : Fix mypy lint failure
387cccb1e9 : [complex32, jiterator] angle (#76692)
5a16f84911 : [chalf] enable testing for multiple ops (#77404)
0d1329c4ea : Revert "Add Sparse Compressed tensor support to torch.clone"
47809fe6c0 : [composite compliance] backward: index_select, trace, repeat_interleave (#77578)
f9f4896a07 : fix torch.jit.tracing for at::lift (#77588)
3ccf35e003 : `nn.functional.glu`: forward over reverse AD support (#77309)
d0dc7cb774 : Reland "[JIT] during freezing, cast optional bias to half if weight is half"
9cc92d5358 : quant docs: best practices for quantization accuracy debugging
9d44250760 : Reduce structured kernels' `set_output` boilerplate with new overloads.
c83f8ee46a : Fix partial_tensor ops.
18e36a6295 : [graph_manipulation] Set fused dtypes for all constant params/buffers (#77401)
942f04172a : Add Sparse Compressed tensor support to torch.clone
0e351c7df9 : Added setattr to `functional_call`. (#77137)
e517fc8b28 : eliminate Bazel's libtorch_cpp_generated_sources
a013d83bf9 : eliminate Bazel's libtorch_python_generated_sources
f9f8127414 : CheckpointWrapper state_dict fix (#77224)
4b4a6a0b19 : Use TensorPipe libuv in Gloo (#77312)
6aea0b1073 : [CI] Make `install_user.sh` compatible with Focal (#77622)
668599a673 : Rewrite ShardedTensor.gather to use dist.gather instead of gather_object (#77272)
5e3e5a5403 : Revert "Python function to extract information on mobile::Module from flatbuffer (#77328)"
efcbbb177e : [Re-submit] Make tracer be able to trace different forward functions
246078e251 : Revert "[JIT] during freezing, cast optional bias to half if weight is half"
f1c8e8fa4e : Revert "Add Sparse Compressed tensor support to torch.clone"
4596ecb4d2 : [BE] Move numba pinned version to requirements-ci.txt
1f8049566f : Re-land BUCK build for pytorch mobile (#77612)
0975174652 : Fix doc about type promotion of lshift and rshift (#77613)
89e32f52c7 : Change test_sparse_csr test signatures (#77595)
31d9f7c303 : Move other div variants to upgraders map
20ba6e6935 : Add Sparse Compressed tensor support to torch.clone
2547be5135 : [JIT] during freezing, cast optional bias to half if weight is half
9a608af828 : Support comparing Sparse Compressed tensors
f9aaf9d388 : [GH1] Allow conclusions to include workflows and checks (#77597)
25fa964d96 : [shard] add clone/detach and set requires_grad for ShardedTensor
25c6ebd12c : Revert "Revert "[LT] Codegen ReuseNode for supported ops""
375e21b2c6 : check that flip doesn't accept repeating dimensions (#77500)
fe1232a494 : Use FastAtomicAdd for index_add/_index_reduce
906fc77745 : [mergebot cli] add new revert regex (#77459)
d76efed578 : Add Sparse CSC support to torch.empty
77f1ca0065 : Replace SparseCsrTensorImpl crow/col_indices with compressed/plain_indices
1b1dec4529 : stride_properties fix (#77460)
7ba4e124e6 : Bugfix gradient formula for index_reduce('prod') + separate out sample_inputs for index_reduce
8c608a79b4 : Compressed sparse layout conversion stubs (#77489)
2a905aef09 : Revert "enable NVFuser by default"
8875453d8b : skip primTorch nvfuser tests on rocm (#77468)
5ab8afe487 : [Model Averaging] Support disabling post-local gradient sync (#76723)
93d84c0fcf : [coreml] Use special throw macro when encountering CoreML API errors (#77429)
c34be4eb04 : [metal] Use special throw macro when encountering Metal API errors (#77430)
46e16737d4 : [GHA] Use silent checkout (#77547)
a8261b4b7b : [chalf] exp, log: cuda (#77483)
9c94b760dd : [metal] fix op registration for cat (#77451)
2a496e2f80 : Adding maximize to Adamax (#77409)
c3c92ca10c : [Profiler] Remove `addMemoryUsageActivity` from kineto_shim (#76678)
1db1337473 : [BE][CI] Add `pip_install` macro
69fa49f123 : Python function to extract information on mobile::Module from flatbuffer (#77328)
b5bc954a71 : Fix optional dtype/layout/memory_format pycall; fix memory format
14e59edd02 : Saving JIT to flatbuffer should respect options. (#77456)
3e89a1d6b7 : Disable GC if fraction is not set (#76648)
8c50414233 : add BFloat16 support for BatchNorm on CPU
530481ed69 : Revert "[mobile] add buck build for mobile targets (#76480)"
9d3ffed327 : [FSDP] Sharded Grad Scaler (#76918)
beb405035c : Update forward AD metadata check to skip stride check when size is 0
c218263207 : [LTC] Mark Step Indicator (#76840)
3feb038421 : [chalf] abs, sgn (#77446)
3c804989ec : missing header include in SortImpl.cu (#77297)
563c2719bf : [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
dca416b578 : Pretty-print dataclasses (#76810)
6fa20bdfe8 : add native kernel for weight_norm on CPU
a0cc38eaca : [FSDP] Add unittests to test distributed checkpointing
88205886d7 : Add ccol_indices and row_indices methods.
edffd595c2 : [DataLoader] Adding ability to use dill to pass DataPipes in multiprocessing
58c9d521a1 : [FSDP] Implement sharded_state_dict and load_sharded_state_dict
a2cb94d21a : [PT-D][Sharding] Enable more ops needed in the transformer model training
e5a5cd149f : Simplify IfThenElse and CompareSelect within for-loop (#76793)
168dc70faf : [mobile] add buck build for mobile targets (#76480)
e5a752a6ca : Discover and check operator variants
39d3a7ffe5 : Connect Tensor.__ipow__ to pow_ method
459e10f446 : Fix `mean` bug for integral tensors. (#76584)
e175065c4e : [NVFuser] fix force-disable flag
77ac82c7d0 : [fix] logsumexp: type promotion (integral to float) (#77480)
93a969221d : Revert "add BFloat16 support for BatchNorm on CPU"
978304fc9c : MPS: fixes (#77462)
7c8911ca7a : add BFloat16 support for BatchNorm on CPU
a275491c6f : [Reland] load_state_dict post hook (#77392)
2166fc55fc : improve softmax lastdim performance on bfloat16 by adding more fusion
59b56ba785 : improve group_norm channels last performance on CPU
2f602abf14 : Register more decomps for meta.
3cade9d454 : Revert "[LT] Codegen ReuseNode for supported ops"
d0ce792145 : Revert "[chalf] enable testing for multiple ops (#77405)"
ab1b4aa237 : Make s3_init linter adapter work in absence of git (#77466)
e9de3a69cf : [GHF] Skip internal checks for bot comments (#77465)
981719fe5a : Revert "[quant][core][gpu][feature] Implemented quantized cuda gelu"
ed51b930f7 : torch/distributed: move quantization ops to libtorch instead of libtorch_python (#77444)
36f7a6cc4a : [NVFuser] don't decompose conv2d if we don't have shape info
25a6aabe71 : Expose permute inputs (#77391)
aaf5c32992 : device_id
bdacc0856c : [ONNX] handle equality checks on devices (#77203)
cf975dde0d : Make sure that we can build without xcode on mac (#77450)
b892b85b88 : [quant][core][gpu][feature] Implemented quantized cuda gelu
fe6aa08444 : [PyTorch] IValue(const c10::Scalar&) improvements
649bd82acc : c10/SymInt.h: Fix "integer conversion resulted in a change of sign" (#77398)
f1b16e5f32 : Remove c2_aten_srcs.bzl (#77314)
b49a4be7ac : ns for fx: remove duplicated BNReLU mappings
d8479098a6 : ns for fx: remove quantized ReLU6 from mapping
6a33b80191 : ns for fx: remove GroupNorm from mapping
fff560cb6e : [chalf] enable testing for multiple ops (#77405)
6e05f76089 : ns for fx: clean up linear in relationship mapping
9265cc2097 : ns for fx: make torch.ops.quantized.dropout mapping dynamic
cc59641acb : ns for fx: remove torch.ops.quantized.cat
20b75e3e5f : ns for fx: clean up convtranspose mappings
0e067e4cc9 : ns for fx to backend_config_dict [2/x]: native lowering mappings
289192199a : Add to_sparse_bsr (#77366)
5419236946 : ns for fx to backend_config_dict [1/x]: fused and qat modules
17653a53d5 : Forward fix failing TestDispatch tests
9728368f42 : Revert "tools: add ability to grab git tag (#77279)"
fa018ef989 : Revert "Make tracer be able to trace different forward functions (#77109)"
841c65f499 : Unprivate _index_reduce and add documentation
6b366dd3c1 : Revert "[ONNX] Refactor to remove inline imports (#77142)"
6066e5929f : [LT] Codegen ReuseNode for supported ops
e011a8e18b : Enable PyTorch operations on MPS Backend. (#77343)
399169fc92 : add BFloat16 operators on CPU: diag, fmod, cumsum, cumprod (#61897)
058be5f162 : CUDA RPC Meta device support
1136965aa1 : Upstream remaining shape functions from Torch-MLIR. (#76889)
2c1de3aa47 : Back out Dispatcher change that makes Messenger Desktop crash on M1 devices (#77414)
2fc1725938 : [GHF] Retry push unconditionally (#77426)
3077e9059c : ci: Add single concurrency to merge / revert
78f291bc70 : fix lint
8d34a8325d : TorchScript to support capability to rethrow the original python exception (#77093)
e867831b84 : extend replaceConvolutionWithAtenConv to handle conv_transpose3d (#76888)
dcc255d10b : add simd horizontal reduce to improve log_softmax and softmax performance on CPU
bf4b6d0dce : Make tracer be able to trace different forward functions (#77109)
abb6fab0f4 : Add new PrivateUse1 DeviceType for non-public devices (#77208)
fa44e165ff : Retry "[JIT] parse prim::Constant[value=annotate()] and prim::Constant[value={0}]"
1141b45e7a : Index reduction CUDA support
5762c7b25b : fix StridesPolicy logic for FunctionalTensorWrapper
9fcf75e534 : [BE] Fix shadowed variable warning in `c10::TensorType::contiguousStridesOf` (#77387)
64c6a89bd6 : [primTorch] reshape and view (#77220)
2b7943c47c : fix torchvision failed case test_classification_model on slow_conv2d
39bd37f34f : [complex32] sum, prod : cuda only (disable jiterator reduction on windows) (#77192)
8626f76555 : Add trace and log_sigmoid_forward decomps (#77329)
c08b8f0967 : [ONNX] Refactor to remove inline imports (#77142)
5c7d916c3d : make scalar clamp overloads propagate nan (#77371)
0925597707 : [JIT] Support for ParameterDict getattr
1dd7336441 : [ONNX] Add quantization support for maxpool (#77393)
a812c4cd96 : [ONNX] Relax node constraint for onnx shape inference (#77379)
b250759242 : mul(dense, csr), mul(csr, dense) via sparse_mask_csr (#77177)
78d3798181 : [ONNX] Fix type comparison in utils._need_symbolic_context (#77365)
65f71c0cbe : ci: Add pr number to job name for trymerge
d5ed73badd : Make it possible to register decompositions to Meta key
1a9197566f : exposing more CUDA driver API (#77296)
69347dc564 : ci: Switch lint to use full custom runners
b1214babbb : ci: switch trymerge to custom runner
86e923beec : ci: Remove unused azure_pipelines ci infra
4ef6407906 : Remove unnecessary ifdef, fixes fbcode build
d92b0a51aa : Revert "Load state dict post hook"
3e92cae0dd : ci: Set create_release.yml to run on nightly
35af5f3c4c : ci: Cleanup create_release.yml workflow
14f84a8732 : reenable filtered op tests (#77330)
81e2708d58 : Support operator list in LightWeightDispatch
4f04f77b90 : Revert "ci: Remove unused azure_pipelines ci infra"
caeefb4f8a : Fix forward AD for `torch.angle`
f1016d45d7 : ci: Remove unused azure_pipelines ci infra
96fb63bc75 : [Bootcamp] Set default value of TCPStore world_size to None in pybind definition (#77277)
43f6d79e51 : update release notes script to automatically grab labels from the PR
5ed7312081 : release notes script changes
9cb4e1cbf9 : Add a .git-blame-ignore-revs file (#77357)
286d788029 : Properly capitalize PyTorch (#77308)
cdf9572c52 : add BFloat16 operators on CPU: histc, atan2 (#72695)
37c6017831 : Add BFloat16 support for GLU, and randperm operators on CPU (#61944)
97deda4f28 : add BFloat16 support for logcumsumexp on CPU (#72694)
e36a8c1f13 : Lazy codegen change for xla (#76717)
4f82f439d1 : Enable BFloat16 ELU, SELU and CELU in CPU path (#62546)
bdbb7fe37a : Use _process_group in ReplicatedTensor and ShardedTensor.
29e615f7e5 : Fix pre-compiled header
9e3eb329df : [chalf] getitem (#77339)
47dd092bae : add a new at::lift operator, fix torch.tensor for functionalization
e06400e730 : Fix docs "quantization" instead of "quantiztion" (#77300)
09be44de7b : Sparse BSR: Enable addmm, addmv, triangular_solve for BSR layout (#77255)
d1beda53e8 : Sparse CSR CUDA: add batched support for torch.sparse.sampled_addmm
689df63904 : [RecordFunction] Don't lazily construct the guts of RecordFunction. (#76016)
b010c3451c : nvfuser opinfo test fixes masked_var/std (#77273)
ada65fdd67 : [complex32] fft support (cuda only) (#74857)
b825e1d472 : Revert autoformat of tools/fast_nvcc/fast_nvcc.py
257c55f422 : make clamp_min/max use minimum/maximum kernels, make clamp* correctly propagate nans (#77306)
3b56efd4e1 : add mixed data type mode for LayerNorm forward path
af80329ca9 : [quant][core][gpu][feature] Implemented quantized conv1d cudnn op
d973ece80f : [black][codemod] formatting changes from black 22.3.0
c25bdeea26 : Added logsumexp decomposition (#77219)
f6eb811786 : Add RefineTypes JIT pass for Tuple (#76919)
2881e0ea17 : torch/deadlockdetection: add TORCH_DISABLE_DEADLOCK_DETECTION env for use with torch deploy (#77270)
e912d24303 : [FSDP] Do not check fwd order in eval mode
e8f53ad1f1 : docs: Add section for Hardware / Software support (#77304)
cc9d0f309e : lshift and rshift stop support floating types (#77146)
166a466e7f : improve LayerNorm bfloat16 performance on CPU
355ca32b1d : Remove old grouped_accessor operator in static runtime (#77193)
0303647083 : [shard] Add deepcopy for ShardedTensor
440bc8152b : ci: Move lint to be run on linux.20_04.16x runners
bcbc1a3e13 : [static-runtime] plug in caching logic (#76210)
bbb1f106c7 : Separate input moving to utils file
ba55296504 : tools: add ability to grab git tag (#77279)
8a5c34195d : [static-runtime] add (noop) usage of cache at call sites (#76208)
00758061ff : [static-runtime] add placeholder for kernel cache (#76209)
fdf6ef15bf : Using macos 12 as required for MPS feature in nightlies (#77263)
e7c0607d35 : [lint] update lintrunner, don't run lint twice
c656e61b67 : [GHF] Add pagination support for reviews (#77274)
225b037df8 : port clamp.Tensor to structured (#77149)
d694cf60fe : add decomposition for nll_loss2d_backward (#77198)
2fcd5808a3 : ci: bump linux.4xlarge.nvidia.gpu to 175
24f7dcd816 : enable NVFuser by default
c7ab1d2103 : [pytorch/tensorexpr] Update use of LLJIT::lookup for LLVM 15
545d90f032 : Sparse CSR: enable autograd for torch.sparse.addmm and torch.sparse.mm
21d4281b1d : Simplify the OpInfos for norm / linalg_norm
85bd65a880 : Revert "[test] try to fix torch.tensor for functionalization"
1aa3cbb83b : Use weakref.proxy when saving module to internal dictionaries to not increase refcount (#76435)
68e1e2d9c1 : [ci] shard distributed on cuda 10.2
a41d4f27d7 : [static-runtime] refactor out variant for `aten::embedding_bag` (#76207)
40576bceaf : Add mode property to distributions. (#76690)
feaa64f7e0 : Apply black lint CI to PrimTorch
bbb1a55d9f : Revert "Apply black lint CI to PrimTorch"
b0bd5926c9 : Fix prims lint broken on trunk due to land race (#77271)
5993cc0b3d : Update operator list for AutocastCPU (#68725)
9edee09ed6 : [test] try to fix torch.tensor for functionalization
b91a14900e : General fixes for ShardedTensor op framework.
420b49c3ef : [complex32] add, sub, neg (#77179)
f348b1b2b5 : Add the Runtime components for MPS backend. (#76725)
93953a48b7 : [ONNX] Bug fix: opset_version checked before set (#76928)
fa2bf5ab66 : [GHF] Improve authorship detection (#77266)
eaee4ff9ed : [ci] don't run cpu tests on win-cuda builds
4f4ebc5491 : [FSDP] Fix local_state_dict and state_dict_type bugs
0a14a4c280 : Register prims as operators.
188854eeaf : Apply black lint CI to PrimTorch
0f4b1ebca3 : Update Metal for "Consolidate customization contiguous/sizes policy into unified policy"
e9c681d23e : linux focal builds install cmake from conda (#76964)
d9351ed520 : Speedup segmented sort with large nsort
99339fddd9 : move SymInt and SymIntArrayRef to c10/core (#77009)
6fd14ba9db : [NVFuser] Add environment variable to force disable NVFuser
3c2e0dc657 : [NVFuser] assert that vectors are the same size in translateSingleWelford
f2d9fc18f1 : Update amp document with CPU Training/Inference Examples (#77244)
3d0e6f169c : add channels last support for slow_conv_dilated2d
9493900876 : [Reland] Mixed precision batchnorm fix (#77234)
140c8168c4 : [composite compliance] backward, forward: nn.linear (#77151)
4a45b88d6d : [composite compliance] backward: gather, take_along_dim (#77152)
299ebf1ec8 : [composite compliance] backward: masked_select, combinations (#76794)
dc5fe2b3f2 : expand the coverage of conv folding (#75724)
1d7b294574 : [quant][better-engineering][bc-breaking] Removed quant_min/quant_max from fake_quant modules
2083b16f68 : Revert "[JIT] parse prim::Constant[value=annotate()] and prim::Constant[value={0}]"
c5792583f1 : Fix TBB build (#77022)
8f5cdc6d5d : Revert "Revert "[LT] Store OpKind for each IR subclass in a static field""
6cbe9d1f58 : [ci] delete old linter stuff
533b44a280 : Add _native nested_tensor_from_mask (#76942)
92b8e5aa67 : [chalf] rand, randn, full, empty, ones, zeros (#77183)
091f8915ae : Revert "Mixed Precision batchnorm fix (#77089)"
9ba3b8305c : expose torch_python_obj as a static library
bf61b79503 : Mixed Precision batchnorm fix (#77089)
ecde870d4e : Move `ATen/core/DimVector.h` to `c10/util/DimVector.h`.
e94ba981b9 : [JIT] correct torch.jit.Attribute docs
9555b0b20d : Refactor test/onnx/test_onnx_export.py to reuse code (#76851)
a5c9e88632 : [lint] don't run differently on PR vs. master
2896f81dd4 : Consolidate customization contiguous/sizes policy into unified policy
4bd5b1614b : Move legacy Caffe2 TensorImpl methods out of header
7311390d35 : [WIP] Make constructor calls in experimental MetaTracer serializable
3d561ee926 : add channels last support for thnn_conv2d (non-dilated)
57b54dfec5 : Fix Optimizer.zero_grad type annotation (#76998)
81528d4b21 : [shard] add more tensor creation ops (#77185)
afd8bd772c : `nn.functional.glu`: forward AD support (#77186)
023aafbcd7 : Fix for normalizing signature for op overloads (#77182)
02713221e3 : [SR] Fuse clamp/nan_to_num
27ea79b8a5 : Use a more reliable signaling mechanism to stop TCPStore background threads (#76973)
12f67ed2c0 : Revert "[ci] delete old linter stuff"
872f5dafed : ci: Move rocm distributed tests to periodic
87e543da9b : Add `load_state_dict` error message for non-dicts (#77197)
3a68155ce0 : [ci] delete old linter stuff
ec33f0ee83 : Fix case of PyTorch in issue templates (#77180)
58bb1f747d : Fix bincount to use acc scalar for the bounds (#76979)
61dcde88a6 : Jiterator with Python Registration (#77121)
00fb828276 : [chalf] update type promotion table (#76893)
2e6ed5d0cc : Move `contiguous_strides` to `c10/util/strides.h`.
635aaa3d9d : replace "grep" with Python processing in `collect_env.py` (#77148)
4d30ebc82a : torch/deploy,package: log usage for InterpreterManager, PackageExporter, PackageImporter (#77097)
64b543434d : [ROCm] update cmake package DIR paths (#77087)
ecab4fb057 : shard `linux-xenial-py3.7-clang7-asan / test (default` from 4 to 5 (#77084)
54dbfc1e4b : Revert "speed up 1d sort (#77100)"
d22d749a0e : faster batch sampler (#76951)
752d496c91 : Fix `broadcast_in_dim` support in NVFuser Frontend (#76790)
8560fa730d : Parallelize `gatherTopK` on multiple blocks (#74267)
767af8e335 : Add meta tensor support for some operations using python registration
e31b6213ac : Profiling range for FSDP.forward (#76899)
a127c584a0 : Fix max pool forward nhwc (#76597)
65a8f8f62e : Add __all__ for torch.autograd.{anomaly_mode, gradcheck, forward_ad}
fc4f8b5ede : Add a ZT fastpath for linalg_cross (#76940)
cd33e412a2 : Enable fp32/bf16 PRelu forward and backward in MkldnnCPU path (#60427)
8d4e069e66 : add BFloat16 support for UpSample on CPU
e383b135ef : [fix] reflection_pad3d: correct dispatch name (#77153)
182b28fe65 : Fix math notation for linear projections in nn.RNN docs (#77082)
00a1fb64bb : Faster `index_select` for sparse COO tensors on CPU. (#72710)
d7035c1cbb : [FX qconfig] add weighted_op_qint8_dtype_config for int8 TRT and import linear config to get_tensorrt_backend_config_dict (#76877)
ae25d14f23 : speed up 1d sort (#77100)
a008d19ff7 : [DataPipe] Revamp serialization logic of DataPipes
cd916feae7 : [CUBLAS][TF32] Add environment variable to allow override of `allow_tf32_cublas` (#77114)
8d67972b14 : Revert "Faster `index_select` for sparse COO tensors on CPU. (#72710)"
6ae047b0a9 : Remove misleading statement in optim.Optimizer docs (#76967)
614e045921 : [ROCm] rccl performance improvement via env var (#76985)
890bdf13e1 : Remove deprecated torch.solve (#70986)
f94abd59f7 : Revert "Sparse CSR: enable autograd for torch.sparse.addmm and torch.sparse.mm"
9903f1ae4a : [FSDP] Do not clone buffers; offload buffers to CPU if needed
3621462ebb : [FSDP] Change default auto wrap policy name (#76858)
75f316f14e : [FSDP] Move param/buffer name comp. to ctor for `ignored_modules`
da3b8309cb : Disable ios 12.5.1 job as its logs don't reveal why it's failing (#77162)
da3ebaebee : Minor torchhub docs
721a8ca697 : Sparse CSR: enable autograd for torch.sparse.addmm and torch.sparse.mm
417676720e : [JIT] fix opinfo utils to handle tensor kwargs
949cbf1d65 : [NVFuser] Opinfos for extremal values in binary ufuncs
2e2200d76c : RPC Meta device support
24372eb5ac : ROCM: Increase timeout for flaky test_with_kwargs (#76706)
0272aa1f1b : Latest linalg_lu revert caused bc to fail, add exceptions (#77112)
8a6856ae3a : Fix docker builds, cleanup cuda 115 (#77086)
41ff6f8c49 : make has_bundled_input work for flatbuffer (#76854)
b4fa1e88be : Skip lu_solve meta tests (#77110)
489818e7c6 : disabling squeeze/unsqueeze; disabling BN/BN_BWD for perf concern (#77017)
84fe6021b9 : [distributions] Apply weaker test for `torch.distributions.wishart.Wishart` to match the level in other libraries. (#76525)
eb0bc5b1c9 : [sr][pyper] add fusion broadcast_concat_batch_matmul_batch_gather (#76839)
a28e436677 : shard `linux-xenial-cuda11.3-py3.7-gcc7 / test (default` from 2 to 4 (#77083)
26e2936edc : [JIT SSA] Added testing for the Cat Op in LazyTensor
8355c66065 : [RecordFunction] Store a c10::variant of name and schema rather then both. (#76017)
0c7c50972b : Revert "Move Tensor.grad back into C++"
7eaf4780ba : Revert "[LT] Store OpKind for each IR subclass in a static field"
da565c07e1 : [composite compliance] backward: value selecting reduction ops (#76731)
b9e919fed7 : [fx] fix merge_matmul tests making invalid torch.split calls
2c5bf12584 : Revert "stft: remove non-center overload and python functional wrapper"
ce3857e73c : Faster `index_select` for sparse COO tensors on CPU. (#72710)
3e4bff7285 : Move Tensor.grad back into C++
4ebc4890dd : Revert "Add linalg.lu_solve"
4ceac49425 : Revert "Update torch.lu_unpack docs"
1467e0dd5d : Revert "Deprecate torch.lu"
b042cc7f4d : Revert "Deprecate torch.lu_solve"
13a8471792 : Revert "Move `contiguous_strides` to `c10/util/strides.h`."
e5915a2216 : [PyTorch] Don't enter MHA fast path when bias & query dtypes don't match
daf8c48a87 : Revert "Revert "[WIP] customize the C++ class for valueT"" (#77003)
a6341d2ce5 : LT tutorial [WIP] (#76392)
36150c63a7 : [LT] Move device lock in LazyGraphExecutor to a later place
bd573389f6 : [Bootcamp]Add option for flatbuffer loader to copy memory to individual tensors (#76986)
0b0611c223 : [PyTorch] Replace IValue::intrusive_ptr with IValue::isIntrusivePtr()
c3e5c12058 : [PyTorch] Add corner case tests for IValue::use_count()
b4f3f9c651 : Torchvision patch (#77001)
232dd59cb1 : Add failure on torch.use_deterministic_algorithms(True) for scatter_reduce
dc4f12d9cc : shard `win-vs2019-cuda11.3-py3 / test` from 2 shards to 5 shards (#76867)
bcd2167bb5 : fix linalg inv doc (#77079)
7e20d389d2 : New generated conv_bn folding should use same weight and bias dtype as original conv module (#77042)
3fb0b098ae : [disable bot] Handle empty bodies (#77078)
6d9dbd3391 : Manually skip test_sparse_addmm as disable code is not working for now (#77076)
4ded63e2ac : updates and encodes clamp xfails (#77077)
cd6363de36 : ldl_factor fix magma version check (#77019)
bb8baea932 : [primTorch] flatten, squeeze, unsqueeze... (#77043)
078a9eedc4 : use opmath and expm1 for elu (#77062)
362525724b : type promote clamp (#77035)
a238bab17c : Typo fix in generated module name (#76880)
2ec60944c7 : Move `contiguous_strides` to `c10/util/strides.h`.
95adb322e0 : Revert "Move `contiguous_strides` to `c10/util/strides.h`."
84316cc389 : Move `contiguous_strides` to `c10/util/strides.h`.
a585df6664 : xfails bool add nvfuser test (#77031)
104f0bf09e : [Reland] Add atan2 isfinite isinf isnan isneginf isposinf isreal to nvfuser and its frontend (#76769)
60f131fb6c : Add OpInfo based meta tensor tests [RELAND]
29b702144b : Fix issue in s_addmm_out_sparse_dense_cpu only supporting CUDA device checking (#77018)
c031643e39 : Adds decorators for Python References and extends Python Reference testing (#76945)
e3dcd175f7 : Support Tensor source for x.set_(storage, offset, size, strides)
d0dcebe0a7 : [xplat] Move BatchLinearAlgebraKernel.cpp to aten_native_source_non_codegen_list
7eb7b090b0 : [lint] make grep_linter.py portable
901cb7c2e4 : Skip TestCudaFuserOpInfo for Jiterator (#76995)
1ac434f673 : [GHF] Add PR number to commit title
3202398fc7 : [GHF] Add "merged" label to GHF merged PRs
9f3a497c62 : [GHF] Small cleanup
828fb8c620 : Revert "Add OpInfo based meta tensor tests"
ec841b0346 : Revert "[WIP] customize the C++ class for valueT"
70b4746329 : port clamp_min/max tensor overloads to structured
01e9ed7d08 : Merge On Green Bot
2c0268c41f : [ci] fix bugs in test stats upload
d5210a4269 : Add gradient choice detail to autograd doc
c152817926 : [WIP] customize the C++ class for valueT
d9fda18c4b : Add OpInfo based meta tensor tests
f2eed9400d : Register PrimTorch refs as decompositions.
103881ec0c : clean up windows sharding batch files
31d3ce7000 : [JIT] parse prim::Constant[value=annotate()] and prim::Constant[value={0}]
b08633917d : Revert D29463782: optimize ConvTransposedND with mkldnn float32 and bfloat16 on CPU
ac37ddc795 : [LT] Store OpKind for each IR subclass in a static field
8b6a78f39f : Python Interface for Jiterator
6c615a21a0 : [NVFuser] prep for on-by-default
d21154b098 : [be] ci: Remove unused promote workflow
479e0d64e6 : optimize ConvTransposedND with mkldnn float32 and bfloat16 on CPU (#58348)
9e52b50e34 : Additional ops for ShardedTensor, ReplicatedTensor and PartialTensor.
c2f362d36c : move win cuda tests from pr to trunk
0adf070574 : Use scatter_reduce to support masked reductions on sparse COO tensors (sum, prod, amin, amax)
2a9779adbf : Bugfix NAN and Inf handling for scatter_reduce (amin and amax)
cc685bcccd : Allow for custom sharding specs to register their own ops.
fd6991e714 : add trunc_normal_ function to doc of torch.nn.init
fcf38a5812 : Add support to `Tensor[]?` for structured kernel codegen.
f23b629196 : Change no_torch_function_mode to StashTorchFunctionModeGuard
621ff0f973 : Add linalg.vander
ba818991af : Fix typing for load_state_dict_from_url()
d20b8397b0 : tracer compare_outputs should compare in cdouble for complex
e9f34931ef : Add some shape decomps (t, transpose, rot90, stack)
849984a2cd : [SR] Sigmoid out variant calls fast_sigmoid (#75661)
040e2e04dd : Fix Context.cpp compilation on older Windows
37fb636b7f : fix package violation caused by D35587412 (#76808)
8ac6b0a010 : Revert "Contribution- Grammatical Corrections in the documentation"
a0ebf1d386 : Contribution- Grammatical Corrections in the documentation
ce9a477fdf : Support torch.Tensor.to for CSR
52af4fc5ba : [PyTorch] Make RecordFunction store inputs as ArrayRef (#72484)
710246ea99 : Introduce distributed checkpoint with ShardedTensor.
465e0ae266 : Bugfix scatter_reduce backward formulas
56bed0dcfe : Load state dict post hook
f84d4d9cf5 : Deprecate torch.lu_solve
a5bbfd94fb : Deprecate torch.lu
9dc8f2562f : Update torch.lu_unpack docs
fc5b4a5a33 : Add linalg.lu_solve
31d03c2f63 : [RPC small change] Improving logging for store.wait error
0a770f804d : Fix rpc_test.py MyConvNetForMNIST location and sparse tests
55e90d17fc : Handle torch.memory_format serialization in TensorProperties.
bc3c7a6cbd : Fix issue with _checkpoint_without_reentrant
9e32cdeda6 : [SR] Use at::DimVector in reshape_copy_out (#76473)
61aec161f0 : [ROCm][GHA] split 4 GPU hosts across two runners
4ee29d6033 : [Reland take-2] Add JIT graph fuser for oneDNN Graph API (v0.5)
94fc92f288 : exclude distributed tests from windows
1c776d209c : Adds amax and amin references
122798916f : Port clamp_min and clamp_max to structured
b30c027abf : Fix FSDP CI
3d28ab0709 : Minor follow up fixes for python registration
3a8752db86 : ns for fx: skip shadowing ops if copy subgraph is not implemented (#76663)
d3e338935a : ns for fx: skip shadowing for torch.cat, and also for nodes with only kwargs (#76561)
73b33de989 : [FSDP] Include buffers in `ignored_modules`
33fabe9a2e : `functional.max_unpool`: OpInfo tests + simpler backward + forward ad + fwad over backward ad
2b3b205f18 : Go through the dispatcher in matmul_out for the 1D-1D case
7cb7cd5802 : Add linalg.lu
1a4eea57be : Improve derivative of QR decomposition
3df0140cbd : Sparse CSR: Fix sampled_addmm for noncontiguous inputs and fix block sparse triangular solve
ad8386c132 : Lint fix
6a44e0667b : [lint] properly suggest lintrunner when lint fails
da15d76e83 : Revert "Add ZT fastpath for remainder.Tensor JVP"
0f1618ef76 : [complex32] real and imag (also remove unused real and imag kernels)
4c504dd3ef : [PT-D][Sharding] Clean up sharded tensor code by leverage handle_torch_function (#76824)
7471f614ae : Add ZT fastpath for remainder.Tensor JVP
07f766df54 : Allow creating new libraries and defining new operators from Python
55f55a4cf6 : Allow users to override kernels for existing C++ ops through Python
32ae584008 : ci: Add upload credentials for rocm periodic test
5dd1c67776 : [ONNX] Format ONNX python with black
3a6da16a5a : Return all overloads for an operator in _jit_get_operation
4a11678368 : Change TensorMeta to use tensor subclass.
48eb8d6aad : Use TorchFunctionMode to implement PrimTorch tracing context
f05710dd40 : [LT] Add a trie data structure for caching IR nodes
4baf7c0899 : Dispatch to mv rather than mm in the case that tensor1.ndim == 1 and tensor2.ndim == 2
02b5b92c65 : Fix mv/addmv on CUDA when dealing with vectors of size=1 and stride=0
35ae9f68c5 : Refactor the API of the matmul implementation
1fed6b7559 : [SR] Eliminate extra permutes around softmax calls (#76391)
cac2733af1 : [SR] Codegen for aten::clamp (#76340)
c59d5f17d9 : Remove pow and float_power TestGradient Skips
381e08309f : Revert "Use scatter_reduce to support masked reductions on sparse COO tensors (sum, prod, amin, amax)"
8a2f207de8 : [complex32] enable testing for multiple ops
1335512056 : Sparse CSR: Add CPU fallback for sampled_addmm
6917034afb : Added logit/reciprocal decomps, fixed var for complex, moved type promotion logic to standardize on primtorch's
4aa6b6b9de : Revert "Bugfix NAN and Inf handling for scatter_reduce (amin and amax)"
6df5a53127 : Fix unmarked fstring
ff94c9dee4 : DOC fix momentum equation for nesterov
e838137b3e : Add high level control of fp32 matmul precision; disable TF32 for matmuls by default
679fc90cdb : [ONNX] Support optional type (#68793) (#73284)
b8776e143f : Fix false DeprecationWarning in `Module.state_dict`
429a80dded : [NNC] Lowering function generates the output buffer with the specified stride (#76529)
0878ba4640 : [ROCM] remove rocfft workaround
bcddd4ab3e : [Static Runtime] Add auto generated unstructured ops (#76398)
ca0f267022 : [Static Runtime] [RFC] Codegen support for ops with unstructured kernels (#76203)
fc64dbdc01 : [SR] Fuse quantized linear/relu (#75775)
7dc1383101 : aten::_validate_sparse_compressed_tensor_args ops added to the ALLOW_LIST
94d65b05e9 : [FSDP] Optim state: ignore params if not in dict
b557e102d8 : Fixes prim type promotion and updates type promotion testing
fc1f290e84 : Correct _validate_sparse_compressed_tensor_args signature
ac45fb9b93 : switch Bazel to the shared generate-code genrule (#75790)
096ff0ecca : introduce new --gen-dir flag to generate_code and use it in fbcode (#75800)
dfde877c0b : Add type hints for a few random functions/classes
6779366f27 : add nested mode to python mode
68fa6d8fec : Revert "Make addcmul and addcdiv support different dtypes"
ce63c53c9b : Revert "Add binary_cross_entropy and trace decomp - fixed _log_softmax/_softmax dtype promotion semantics"
fa3e0d5f4c : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT` (#76802)
ef9f56eb0b : [primTorch] slice and transpose & etc.
9b97313cc7 : Revert "Adjust the stubs for PyCharm autocompletion of the Tensor methods."
5adf97d492 : Add docstrings to sparse compressed tensor factory functions
2834726edb : Support str for compressed sparse tensors
436a7be059 : Factory functions for sparse CSC, BSR, and BSC tensors
c51b53d4ef : [WIP] sum reference
e2aa28a2d0 : [quant][fx][improvement] Renamed default_affine_fixed_qparams_observer and default_symmetric_fixed_qparams_observer (#76637)
4cdec1a79e : Make addcmul and addcdiv support different dtypes
f8b0f6984a : Adjust the stubs for PyCharm autocompletion of the Tensor methods.
68e012b023 : Optimize half conversion for SYCL kernel
ed18181d83 : Added gelu decomposition
fc2a2e8b72 : Use scatter_reduce to support masked reductions on sparse COO tensors (sum, prod, amin, amax)
e33f3229a2 : [NVFuser] environment variable to turn nvfuser on or off (#76485)
56ea57de61 : shard `pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (distributed` 1->2
3a2fc312be : [xplat] add static_cast where missing (#76756)
20e4d6c4dc : [PyTorch][AMD] fix hipify_python (#76720)
fc95eda285 : Added proxy tensor
4fbbbed674 : Support no sharding config
fb0f285638 : [lint] upgrade mypy to latest version
8473173c36 : Remove breakpad dependency
4441582f80 : Bugfix NAN and Inf handling for scatter_reduce (amin and amax)
3d7428d9ac : Revert "[lint] upgrade mypy to latest version"
4bb5944133 : Revert "Add atan2 isfinite isinf isnan isneginf isposinf isreal to nvfuser and its frontend"
9bf18aab94 : [lint] upgrade mypy to latest version
84641d0dba : [lint] fix pip init for lint when user has a global install
b02b3f25db : [SR] Quick hack to eliminate no-op slice (#75774)
4537ac11db : [lint] register formatter linters
aca5594818 : Turn on memory efficient format for jit pickle files.
76abbbe317 : Adding output_size to to_padded_tensor (#76640)
4d6b145bb2 : Nested Tensor Elementwise (#76470)
8a3e9255ea : Add binary_cross_entropy and trace decomp - fixed _log_softmax/_softmax dtype promotion semantics
db21e22b4b : [EASY] Quick Fix for broken shape function autogen.
ccd5fa506f : [iOS][coreml] Add CoreML memory observer Round 2
dcdc7b2ffc : [RF][scuba] add pytorch_operator_stats column for Static Runtime out variant (#76566)
7c44d560ba : [PT-D][Sharding] Enable ops needed in the transformer model training (#75374)
47e7b12d39 : Update isfinite for complex to avoid overflow
2e975fb2ed : Kill dead code in ScanUtils.cuh
9c902f4749 : Add `TORCH_CPP_LOG_LEVEL` to the docs
543eaac415 : [PyTorch] Remove dead store in intrusive_ptr dtor
6b6c63ce5e : Upstream `argmax` shape function.
92d10decc4 : Add atan2 isfinite isinf isnan isneginf isposinf isreal to nvfuser and its frontend
9e34a8241b : Improved matmul tests
f1f29ac8b3 : [ROCm] tests set PYTORCH_TESTING_DEVICE_ONLY_FOR="cuda"
f8a4780eb2 : [LT] Move MakeNode into ir_builder.h
138c944a6b : [ROCm] default tests use 1 GPU, distributed tests use 2 GPUs
65b9778d30 : [LT] Add a flag to control IR reusing
9bfa7e9791 : [ROCm][GHA] relax docker purge conditions
d23ecbfc9a : stft: remove non-center overload and python functional wrapper
3e08b18167 : Back out "Back out "[torch deploy] Update deploy.rst with working simple example"" (#76713)
a4c14caffc : [FSDP] Faster dict inversion
7772f702b2 : [FSDP] Validate exec order using `compute_device`
401179f263 : disable the //:generate-code target in Bazel (#76174)
596c54c699 : add support for filtering out Bazel targets from common structure (#76173)
9bcb4de168 : check parameter k and l
eb27c85160 : move generate-code into shared build structure (#75699)
28dfed962a : Remove deprecated string torch::lazy::BackendDevice constructor (#76506)
e155e2584a : ns for fx: skip operator.add and operator.mul when shadowing (#76504)
385e5ba561 : ns for fx: more meaningful error message when creating shadow model (#76468)
04369f637c : quant: rename _ObserverBase to UniformQuantizationObserverBase (#76461)
31d5a300ac : quant: make RecordingObserver inherit from ObserverBase (#76460)
ce76244200 : fix where type promotion
e68686bb05 : Add optional timeout argument for RpcAgent join() (#76194)
b34739fbef : [composite compliance] fix index_copy backward
71ae190b87 : [composite compliance] Fix a bunch of fft backwards
b074bffa41 : Revert D28836788: add BFloat16 support for UpSample on CPU
7a06e90b24 : [ci] bump lintrunner version
91f5056ffc : [JIT][Autocast] Don't cast softmax on CPU
1399d83bc0 : add BFloat16 support for UpSample on CPU (#58297)
dbfb9a823d : enable BFloat16 mkldnn_convolution on both contiguous and channels last memory format (#55864)
d23619b030 : Permutation extended
0ae3aa648e : [torch.onnx] support `torch.nn.functional.grid_sample`
e14f5336c0 : Fix syncbranches (#76693)
58d773ad29 : Upgrade oneDNN to v2.6.0 (#75398)
01c1560d10 : Back out "[shard] ShardedTensor Interface"
d16ce8a2f6 : Back out "[torch deploy] Update deploy.rst with working simple example"
461cc0a960 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
9bcd60f06a : [shard] ShardedTensor Interface (#74695)
a240d45277 : [torch deploy] Update deploy.rst with working simple example (#76538)
598e7e5f19 : [Reland] Change 'python mode' to 'torch dispatch mode'
47969553f5 : Update lint.yml
be424efb2e : [Vulkan] Added Lerp op
bc5307347f : Revert "Add linalg.vander"
150c307140 : Remove c10::getClassConverter
4734e7e6c8 : Make getCustomClassTypeImpl a plain function
5f98145fd9 : [scuba] log to pytorch_model_stats when we've tried and failed to enable static runtime
bf3dcb9599 : fix anon-struct usage that's a warning/error -Wnon-c-typedef-for-linkage (#137)
b6bc5b325a : [torch deploy] remove torch deploy being added to "torch libraries" (doesn't work)
d7db6a7b02 : Sparse CSR: Add backward for torch.sparse.sampled_addmm
e6b4d77c3e : Sparse Compressed tensor factory function 2
9fae0762b0 : fix typing in `Module.state_dict` and `load_state_dict`
9445609ee3 : Better error message for android-tests workflow
b182c22e15 : [PyTorch] Exercise MHA fast path in JIT
1ea49c68d0 : Add linalg.vander
2e480fc2db : Cleanup ATen-core forward declarations
395a620a4f : Revert "Change 'python mode' to 'torch dispatch mode'"
4e4c80d539 : Nit in `TripletMarginLoss`
c9bd73878a : adds elementwise opinfos and unary references, extends to out testing
f92cddd890 : Removed direct doc formatting
76916ccf81 : Fix lists in the docstring
23dcbe3fed : Fix failing test when SciPy is not available for test_ldl_factor
7203a73986 : Change 'python mode' to 'torch dispatch mode'
5d966e9d3f : [FSDP] Relax exec order valid. to only fwd
f6bbecf8b5 : Adds python ref consistency test, elementwise unary reference inputs, and formats test files
33be4c94c0 : [Nvfuser] Add cast support between double and half types
10bf20da8b : Change mixed precision API
e36d25fbae : [complex32] support printing the tensor
fb24614011 : Port functorch decomps over and fix some tests
786903ea29 : Provide an auto wrap policy for common transformer models
a3f10ec281 : Move functorch decompositions to PyTorch
1f1d0b30ce : [fix] mul chalf: cpu scalar and cuda tensor case
100e72f54b : Nvfuser faster fallback
201ddafc22 : Make Opinfo a dataclass to get a more useful repr
2540f866ff : Add better error message for FX when using concrete_args
b0c5fba967 : [CUDA Graphs] Fix OOM inside graph capture_begin
3dcd67a1b3 : Revert "[Re-landing 68111] Add JIT graph fuser for oneDNN Graph API (Preview4.1)"
0708630d9f : Allow sharding for distributed tests
fe1968dea0 : [primTorch] Prototype nvFuser integration and test_prims.py
87a05fa8b4 : Enable xla test
8b11d81058 : [Re-landing 68111] Add JIT graph fuser for oneDNN Graph API (Preview4.1)
8c47e9dc81 : [quant][core][gpu][improvement] Added support for padding quantized cudnn conv2d operator
ac31e5d4a3 : Add a matching lerp implementation to eager mode. (#1612)
ff10e45993 : Unsafe Sparse Compressed tensor factory function
7afe4afd86 : Export aten::to("cpu") and aten::to(device="cpu")
fd3cfb683a : Fix string repr for nested tensors & run nested tensor tests
4c014a7373 : [GH1] Fix pending/not run conditional and add a way to test new merge_rules
e5a55af305 : Reland reland
e52dc9888b : Retry - [NVFuser] always use fallback if fusion fails
8bb7203049 : Add torch.linalg.ldl_factor_ex and torch.linalg.ldl_solve
2e841e68b2 : [ONNX] Documentation and typing annotations in registry
8880e3a3c7 : Readme update to remove old python version
5b6efb824d : Add test_jit_cuda_fuser.py to NVFuser merge rules
e9f17da2cf : Nvfuser - Type Promotion Fix
4d527cd235 : [NVFuser] Reduce accuracy threshold for fp16 nvfuser opinfos
ad4f51d7a1 : Shard rocm distributed tests
833d65aecb : Disable XLA test workflow
9bf2c87e2b : Add __all__ for torch.cuda.memory
7422ccea8b : Hipify fixes for a successful DeepSpeed build
bbc263eb5d : [quant][core][gpu][feature] Implemented quantized cuda adaptive average pool2d op (#76081)
f51516df1c : Adding `broadcast_in_dim` and non-contiguous Tensor support NVFuser Python Frontend
c1037d0d4c : [PT-D][Sharding] Move Partial Tensor to the _shard folder and add logic to remove padding (#76199)
b69d44daa5 : [quant] Fix tensorrt config after the backend_config_dict refactor (#76414)
a37141addd : [shard] Extensible Sharder and ShardingPlanner (#75844)
a5bc02aeb2 : Revert "[JIT] Register decomp reland"
2af0c42ac0 : Revert "[JIT] Add another decomposition api"
9e73e89029 : [quant][core][gpu][improvement] Converted reinterpret_cast<T *>(some_int8_tensor.data_ptr()) calls to some_int8_tensor.data_ptr<int8_t> in quantized cudnn operator files (#75980)
d0cb31d5bc : Make lazy tensor ptr class customizable (#76476)
4cae57080a : Make lazy tensor creation and value strings customizable (#76472)
cfc90cf3eb : Fix GenLazyIR.node_base_ctor_call (#76471)
ec420764ec : [RPC small change] Improve logging from 'unknown destination worker'
15884418e2 : [quant][core][gpu][improvement] Made exception throwing message clearer in Quantized Cudnn Conv2d, Linear, and Add ops (#76102)
ad88816c86 : [quant][core][gpu][feature] Added support for float->quantized cuda tensor copying (#76177)
32cb037532 : [JIT] Add another decomposition api
f3f327e103 : Decouple LTC from TS Backend using Lazy IR Builder
b204ad863f : Revert "Revert "Allow specifying tags for aten operators in native_functions.yaml""
1c5a66c2aa : [FX] Fix operator_schemas normalize_function to consider OpOverloads (#76469)
b7bd677eae : [quant][gpu][core][bug fix] Added memset to CacheKey for quantized cudnn linear op (#76447)
b5b8139d46 : [quant][gpu][core][bug fix] Added memset to CacheKey for quantized cudnn add op (#76445)
6c78dc4661 : [quant][gpu][core][bug fix] Added memset to CacheKey for quantized cudnn conv2d op (#76436)
7c0ccb8a9d : black formatting for utils/tensorboard (#76396)
bcee215d2b : [Testing CI] test exact layout on nvfuser tests
ef63408853 : Revert [DataPipe] Update mux data pipe
e549e97484 : Upgrade CI to ROCm 5.1
363025029a : shard asan 3->4
4120cfd018 : chore: remove git.io
d39b18ed99 : shard crossref 1->2
676a4a3969 : Prototype _index_reduce (CPU-only)
a997046017 : [DataPipe] Update `mux` data pipe (#76384)
177ea46332 : [ROCm] persist test results even on failure
6d1bacaf8c : [ROCm][GHA] keep docker images for at most 1 day
ea1901693e : Port to_padded_tensor CUDA kernel from pytorch/nestedtensor
1721137fb9 : [PyTorch] Use native serial stack when there is only 1 thread (#76399)
bc34cf5fe4 : Support for tensor subclasses as parameters
54c75e1e8f : Add "mps" device to PyTorch framework.
a0bf0f5611 : Add new dispatch keys for Fake Tensor and Deferred Module Initialization
a28b132bc2 : Revert D35860266: [pytorch][PR] Update torch::lazy::BackendDevice to have a new default ordinal
2469525c4c : [ROCm] Skipping few multiprocess test
a39a2c3969 : Enable LTC Input/Output Mapping (#75828)
3fa77fa51a : [SR] Fix quantized linear tests not managing outputs (#75776)
04b3313379 : remove unneeded overload for nansum
122999919c : Revert D35511873: [iOS][coreml] Add CoreML memory observer
4048d4cdd2 : [primTorch] Prototype tracer and elementwise unary reference opinfo class
aae7b00f7c : fix nested grad(functionalize(f)) transforms
3ac27e78ca : Fix typehint of multi_head_attention_forward
6e292f1a21 : [quant][core][gpu][improvement] Integrated quantized cudnn max pool2d with existing quantized_max_pool2d (#76129)
6e959dec69 : add buck generated files to ignore list
0bd8e237b5 : rebase via comment
f323a8ab3f : [fx2trt] support ops for hf_T5 (#62)
48ea440b15 : ci: Unblock syncbranches, add a58c6ae and 7106d21 to block list (#76417)
5932c37198 : [caffe2] drop XROS ports (#76366)
22fb929405 : [iOS][coreml] Add CoreML memory observer (#76251)
09a5b075fe : [libkineto] Re-enable user-annotations in PyTorch (#75601)
ec62901a2c : Disable RPC profiling for kineto profilers
5cd880f4c0 : Add mixed precision doc
3c327f3f01 : Reenable XLA workflow/test
6eee1e9b27 : Fix TensorImpl use count assert for _thnn_fused_lstm_cell_backward
81b9cb741c : [JIT] Register decomp reland
4d1d1b3179 : [mobile test] Update and rename README to README.md
ffb0946504 : Generalize param verification and broadcast
305a9cc00a : [jiterator, complex32] tanh_backward : complex
887a93e5ac : support PackedSequence type for apply_for_tensors
9de2beb86b : Update README.md
2291960d3f : Back out "record_function: update to use custom_class API" (#76253)
5ed7dc1eee : Add README for mobile model test
40d96f0afd : Revert "functionalization: add support for zero_()"
368430036e : Make binary_cross_entropy_with_logits composite compliant
3d438f7189 : Make tolist correctly work for 0 element tensors
4bf5380ec7 : remove references to ort_test
75be4f9cdb : check tensor has storage before refer to tensor data ptr
e816e17655 : [PyTorch] Add native fast path for transformer encoder inference (#76333)
68a9057aa6 : [PyTorch Edge] Add Optimized QInt8 Quantize Tensor Arm (#76245)
bfb39e577c : Revert "[NVFuser] always use fallback if fusion fails"
25fa6235f4 : [Model Averaging] Make an error message more clear in hierarchical_model_averager.py
20543221f4 : [lint] url encode lint message
5b65361a57 : Fixes black lint
ccd7233fdd : [DataPipe] clearing buffer for DataPipes during __del__
6b21a33795 : [ROCM] Enable custom tests on rocm
811ccde41a : [Dynamic RPC] Add graceful shutdown for dynamic RPC members
f02b7a9c36 : Pad: don't error when unused fill value is zero
03e85e5700 : remove unused --ninja-global from codegen (#75869)
f010b15db9 : remove unused all_generator_source from generate_code (#75868)
8b1cf8ed6b : move version_h to shared build structure in Buck (#75964)
b17b2b1cc7 : Add NVFuser Python Frontend
2d72cb3373 : Revert "[JIT] Allow registering Decompositions"
c2ae0b01c0 : Reapply black for torchgen, this time with lint to fix!
30342f6ba6 : [quant][docs] Fix formatting for quantization.rst (#76223)
d9f0774f98 : [JIT] Allow registering Decompositions
6d0f2be31c : Fix memory use after free: buffer need to be attached to the module. (#76350)
b26df43f15 : Fix bug where __getstate__ of DDP looks for self._replicated_tensor_module
bb60cac25a : E2E SymInt example narrow_copy
d286197e81 : Add black linter
bbc6fcd730 : deploy: add dummy metadata for builtin packages (#76211)
a9deda5469 : Fix issue in sparse_coo_tensor only supporting CUDA device.
f9d07ae644 : Update torch::lazy::BackendDevice to have a new default ordinal (#76264)
1e1118957f : [lint] correctly display annotations for all severities
1f3aa53545 : [reland] use scatter in shard_tensor API (#75991)
9cb2871f31 : Fix forward-mode AD formula for binary_cross_entropy_with_logits
e48b29b1fb : patching 11.1 ptxas issue
640ce6bc9b : functionalization bugfix: using owning type when unwrapping tensors
f36d348f75 : [NVFuser] multithreading nvfuser test
a4ce113649 : Avoid compiling ts backend bindings in fbcode build
40f3e85005 : [POC] fix static init issue with JIT container types
74e93f727a : remove _is_foreach_op codegen special cases, clean up mutable return type checks
ea5209c9fd : functionalization: add native fill() op
5da76acd1d : functionalization: add a copy() native function
7d44b3675b : functionalization: add support for zero_()
c8fd1bbc11 : [complex32] mul
f954c0a774 : [Pytorch][4/4 Static dispatch] Support multiple backends with multiple kernels (#76059)
1df2d6a959 : [metal] Fix error reporting on failure to flush command buffer (#76263)
a9a17ce626 : third_party: Fix build_bundled script
da984c507c : [NVFuser] always use fallback if fusion fails
5109d81fc5 : Distribute torchgen as part of PyTorch package
1d55518198 : Revert "[nnc] Strides to Tensor (#72962)"
c55b425de5 : [flatbuffer] Bugfix: some class dont have __getstate__ (#76197)
b4b1b0d5ee : [FSDP] Support initialization of modules on meta device
e4d5801e36 : Make sure requires_grad is propagated for all backend
920876d693 : Re-land D35572928: [kineto] implement kineto client interface (#76239)
4f3dd80c1a : [Vulkan] VK Timestamp Queries for op profiling (#75829)
f4200600e4 : move Bazel version header generation to shared build structure (#75332)
111b2bf9da : [cmake] Use list(APPEND instead of string(APPEND for vulkan codegen args
bc9bba9b43 : delete ${GEN_VULKAN_FLAGS}
7a27accbd6 : add comment for lerp cuda implementation
d78dd825ba : define the caffe2_serialize target in Bazel (#75942)
0d7be81c9c : [JIT] Add Context Manager to force strict fusion
fdaf393ec1 : [EASY] [JIT]Cleaning up the device analysis temp test code
2387efd356 : Revert "[PyTorch] Add native fast path for transformer encoder inference"
1c35f37c9f : remove bucket_size_limit property from bucket struct
c1ced8ff72 : [composite compliance] add test for fwd AD
81586a6a5e : ROCm: Enable test_distributed_spawn
60e2ee3937 : ROCm: unskip c10 gloo tests
0ff05b1e97 : [DataPipe] Add functional API docstring and fix typo in test
3d7abc0e55 : Make -h work with run_test.py
438cc79f5a : Improve more the error message with explicit recommendation
78ea86a445 : [shard] Sharder and ShardingPlan prototype (#73873)
f7e4b77add : [FSDP] Fix exec order validation (static variable issue)
28c3e0f77c : Initial prims, references, and test architecture for them (#75095)
b369b89f23 : [PyTorch] Add native fast path for transformer encoder inference
36420b5e8c : Rename tools/codegen to torchgen (#76275)
8d31706b9e : [ONNX] Support restricted quantized range for activation.
cada2cd3ae : [ONNX] Support per channel quantization
6ca8272d46 : [Distributed tests] Add skip for odd world_size condition
d65ab9a689 : Improving typing and typing-related performance in rendezvous.py
f980c3c193 : Fixed typo in documentation of `LazyModuleMixin`
89370e008a : Fix minor typos
a5ffdaf064 : bypassed cublasLtMatMul bug after slicing (#76205)
e90580390d : [Model Averaging] Make the error message more informative in hierarchical_model_averager.py
77f23d6460 : Revert "stft: remove non-center overload and python functional wrapper"
90d31cb311 : Emit ATen ops when symbolics raise + minor fixes
939060925f : [nnc] Strides to Tensor (#72962)
1a7e43be14 : Remove incorrect xfail
407e8eba8c : Enable simple indexing into CSR tensor, add torch.select for CSR
5622c8b445 : [Pytorch][3/4 Static dispatch] Move static dispatch logic to Operators.cpp (#76058)
9d8ff02e27 : Make LTC codegen customizable enough for XLA migration (#76180)
31a6e6cabc : Remove cuda 11.5 builds since we have 11.6
6b7d89c4f1 : stft: remove non-center overload and python functional wrapper
cb37e7a080 : Remove F.pad python implementation
7a80fc2ce7 : [fx] Don't use __module__ to test if a function is bound from C++
2f2158ae45 : [ONNX] Add typing annotations to onnx symbolic gelu
a71fabab33 : Revert "Don't CSE across context managers"
1324410f2e : [JIT] Reuse traced fn for jit opinfos
7557407653 : Added directory check before saving in C++ API
79891abf40 : fix UndefinedBehaviorSanitizer
44bbb247a6 : [ROCm] enable fsdp tests
88b61d132c : Make GenLazyNativeFuncDefinition abstract and extensible (#75343)
041e6e750a : Fix to support no-batch-dim inputs in ConvTransposeNd._output_padding
ea8a0184b7 : Fix fuse_parallel_linear (#76202)
394b4d853c : Fix deterministic indexing with non-contiguous tensor
97fbe6f0a4 : [ROCm] use ncclAllToAll for rocm
be3ad8c637 : [PyTorch][2/4] Support static dispatch with multiple backends (#75605)
383f026791 : [DataPipe] Enabling graph traversal for MapDataPipe
ec591087fb : [DataPipe] Add input_col to filter and add deprecation warning for DataPipe arguments
b8cce8847f : [DataPipe] Add functional API to StreamReader and FileOpener
e846ef8818 : add rocm ciflow/slow workflow
24d06182f6 : [ROCm] unskip test_fmod_remainder_by_zero_integral
ed8e498c70 : [DataPipe] Improving debug message when argument is a tuple/list of DataPipes
2c748b7573 : [ONNX] Trace model if quantization is detected
11f4dcd016 : [ONNX] Stabilize eraseTupleConstruct
3326fa60cc : move setup_helpers programs to shared build structure (#74849)
b167897317 : disable xla test job
9e49ab2389 : [FSDP] Add exec order validation
ee636e2fd1 : [sr] remove max_indices argument of embedding_bag when unnecessary (#75993)
ecd5567980 : Tentative fix for CUDA-10.2 windows build failures (#76204)
4b311a9633 : [complex32] conj
25d5b63acf : [GHF] Skip b3aa2de (#76231)
adc920905a : Revert D35572928: [kineto] implement kineto client interface
da764f9224 : Update clip_grad_norm_ documentation
6593d293f7 : Added functorch to functional_autograd_benchmark
b3aa2de5be : Sync `c2_aten_srcs.bzl` with fbsync
056627ddce : [quant][docs] Add more docs for quantization.rst (#75998)
018dee502a : [kineto] implement kineto client interface (#76076)
cd7895e64f : [kineto] global callback support in ProfilerKineto (#76078)
ef0873327e : [NNC] Add utility functions to check channels-last contiguous (#75938)
9e137ee583 : more numerically stable cosine_similarity
333da3eaef : Handle simple tuple type inside Dict (#76164)
0981b01af6 : Don't CSE across context managers
7047b94fe5 : [ROCM] Enable miopen for RNNs with dropout.
653892e288 : Kineto: Don't search for CUPTI in default paths
441aea4127 : Update Cholesky's forward and backward derivative
b447fa3912 : [GHF] Manual fix syncbranches (#76200)
e28ac60dd7 : Back out "[easy][PTE] Remove GetMutableSizePrefixed* functions" (#76187)
0289ab2cec : Fix data-related public API (#368)
2c2c13d21b : Decouple Lazy Node Shape Cache (#75324)
2ab77acbfa : Create UCC ProcessGroup when ucc_lib available (#69564)
8326af0117 : [quant] Fix TensorRT tests (#76148)
0bbcac58e3 : Monkey patch Variable module to fix FX codegen
4766b6824c : Add out variants for softmax and log_softmax
6b6d09ce71 : Fix hacked twin
80fe96c860 : Revert "Add type hints for a few random functions/classes"
e0c1786587 : [onnx] Add support for torch.cross and torch.cdist
5b4d110f51 : fix lint
cdb40eb528 : Add type hints for a few random functions/classes
82421b0fb8 : [JIT] support parameterlist iteration
272890998e : [JIT] pass more exception info through the JIT interpreter
e5282c3cb8 : Again add first version of Buck build workflow
547ac879f4 : [GHF] Add pagination to commits_with_authors
eb69e8a3ed : Revert "Revert "record_function: update to use custom_class API""
3f9f35b9f8 : Revert "record_function: update to use custom_class API"
2dd1b9cc81 : Add code for more error message
a6a5e6cecf : move the stateless util to public API!
66502bb231 : small cleanup for public bindings test. no logic change
c6abda27b8 : [FSDP] Full state_dict rank0 only and CPU offload
e07134092f : Add warning when importing caffe2 on build without BUILD_CAFFE2=1
bf730e5039 : Fix unnecessary recursion in GraphModule.__call__
7c8c8cc248 : Use batched operations for PowerSGD
3e10fe3231 : Port `sort` to structured kernels.
45bbc4c028 : Update Dataloader with default parameter device (#65402)
77665e9a53 : [kineto] ClientInterface stub for ProfilerKineto (#75525)
aa51704ce5 : [complex32] add chalf alias for complex32 and chalf method
128dd6b150 : [pytorch] Relax the check that makes sure number of outs equals number of returns (#76049)
a8ed69502e : [NVFuser] document _jit_set_nvfuser_skip_node_kind
92a9c0e3e0 : add channels last (2d) support for mkldnn_convolution (#55584)
cebdca4191 : Add more nvfuser merge_rules.json
8385e06b0b : [pytorch][cupti profiler 6/n] Changes to configure Kineto cupti profiler from pytorch profiler interface (#75616)
35545d85dc : fx quant: add quantized Softmax workflow integration (#75106)
91e9fcf5b0 : support torch script ParameterList
81722f6630 : Fix autograd.functional tests to not fail with logging tensor
27014a860f : clarify hardtanh's definition
f31d518283 : [GHF] Improve failures debugability
90d6e25b6f : Explicitly import functional into the torch.nn namespace
7f66b4ea5a : remove inverse from LOBPCG
b6a4234090 : [SR] Fix broken unit test build (#76111)
8646e0dc28 : [Dynamic RPC] Allow existing ranks to communicate with newly joined ranks
690bc1c54d : [ONNX] Raise exception for unimplemented ops for non-caffe2 builds
f6c275f55d : Remove `-Wno-unused-variable` from `utils.cmake` (take 2) (#75538)
29b004be7a : Corrected documentation for supported padding
638055df2d : fix VF import into torch's all field
cd0591dff3 : Change default TLS behavior in dispatch to favor is-a style
a11c1bbdd0 : Run Black on all of tools/
ae864d4fb9 : Remove 11.5 periodic
5477f0ae60 : back to fetch depth 0
2625dab11e : Modify unique to return sizes when dim is zero-length
116d0bec5d : [DataPipe] Improving debug message when exceptions are raised within IterDataPipe
e20793b054 : [quant][core][gpu][cudnn] Added support for nhwc tensors in quantized cudnn add_relu op (#75806)
e5ee6f5cf7 : Fix `CosineAnnealingLR` on restart
da3c848dfa : Make distributed raise ImportError when not available
ee955b8bb9 : Cannibalize noarch CI job into crossref CI job
d9219d2944 : Add torch.nn.init to list of overridable functions
6642e88ad2 : Adding maximize flag to Adagrad
d938867f91 : Export NamedTuple when it's nested in first type layer Dict (#75996)
317b8fa7ae : ROCm: Enable TestUnaryUfuncsCUDA tests
e5b6a7666d : [torch deploy] Re-define D_GLIBCXX_USE_CXX11_ABI in deploy CMakeLists.txt
2f9a239ae9 : do intermediate math in opmath_t in lerp
69e048b090 : List of SymInt rebase on master
15e36f03ad : Experimental MetaTensorTracer
f1f99ab310 : Fix conv1d with explicit precision (#75824)
d0af05f931 : [FX] Modified __deepcopy__ to also copy _codegen
c358c5d7d8 : [PyTorch Edge] Using Qnnpack in Quantized Softmax Op (#75799)
911b2f2beb : [SR] Mark create_owned_ref with AliasAnalysisKind::CONSERVATIVE (#75381)
839109f689 : [GH1] Add sparse related changes to merge rules
b1a369b423 : [GH1] Add FFT related changes to merge rules
45a7ae1929 : [ONNX] Add verbose option to onnx unittests
6f991fc5fc : add XPU support for autocast
f65eb09d6b : [JIT] Move Shape Function definition to python
0c671c15ec : [JIT] Remove CSE Hoisting
9562aedb58 : ROCm: add HIP_HOME/include,lib in cpp_extensions (#75548)
a5e338a826 : [RecordFunction] More efficient machinery to determine which callbacks to run. (#75807)
6ac2ce9abc : [RecordFunction][Trivial] Reorder `record_function.h` (#75036)
f80d0f4e7c : Revert "exclude slow tests from sharding calc for linux-bionic-py3.7-clang9-test"
5364752b7d : exclude slow tests from sharding calc for linux-bionic-py3.7-clang9-test
7478ce187a : ROCM: Unskip more tests for ROCM5.0
0a7d6f34b0 : expanded weights: instance norm faster rule
0dc860dbd6 : [pytorch][require export] Skip internal checks in Meta service (#75837)
3d4136dc44 : ReReReland Fix public binding check for modules with `__all__`
7e0c4abf69 : Removing static definition of namespace scoped method
f4aa27a9a3 : Removing unused variables
1000aaf855 : Add some assertions to python_variable and check for resurrection in tp_clear
bba4780232 : Enable autograd wrt sparse CSR tensors
0a5e788ab2 : [PyTorch] Add NestedTensorCPU and NestedTensorCUDA dispatch keys (#75808)
a5cb0d6be4 : [GH1] Add linalg related changes to merge rules
347ea626aa : Revert "ReReland Fix public binding check for modules with `__all__`"
982be19638 : [quant][core][gpu][improvement] Supported int8 matmul for quantized linear cudnn op
5c56b2286b : Revert "Remove `-Wno-unused-variable` from utils.cmake"
5635af45f7 : Update allowlist
5878215133 : Revert "Port `sort` to structured kernels."
4787adb5c4 : Remove extra conv OpInfos
c3e67d8a8c : [easy][PTE] Remove GetMutableSizePrefixed* functions
166568d49f : Enhance exporting torch.minimum() function to ONNX so it can handle parameters with different dtypes.
af13797c8f : [GHF] Exclude 5f37e5c2a39c3acb776756a17730b865f0953432 from sync
e23cbd633f : [complex32] jiterator support
eab3f42883 : Update symbolics policy to emit aten::ATen for Caffe2 build only
d2517a43db : ReReland Fix public binding check for modules with `__all__`
8c37a056df : Port `sort` to structured kernels.
74454bdb46 : [quant][fx] Move backend_config folder to torch.ao.quantization
cbb9b33c85 : Revert "Reland Fix public binding check for modules with `__all__`"
018cbe1f5c : Remove `-Wno-unused-variable` from utils.cmake
f5517761aa : add operator header
2e5e4be761 : Reland Fix public binding check for modules with `__all__`
0f98fc05a7 : Reformat tools/codegen with black
7541068254 : Annotate some long lines with noqa: B950
cc56fac213 : Fix complex to real casting warning in _to_copy backward
58fb3f018e : Fix conjugate bit discrepancy in composite compliance
e587c8bc57 : [ROCm] enable composite compliance backward tests
18c74d10bb : Register torch.return_types.* as pytree nodes
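Registering `torch.return_types.*` as pytree nodes means functions like `tree_map` can recurse into those namedtuple results and rebuild them around transformed leaves. A toy registry sketching the mechanism (`SortResult` is a hypothetical stand-in for `torch.return_types.sort`, not the real type):

```python
from collections import namedtuple

# toy pytree registry: maps a container type to (flatten, unflatten) functions
REGISTRY = {}

def register_node(cls):
    REGISTRY[cls] = (lambda x: list(x), lambda leaves: cls(*leaves))

def tree_map(fn, x):
    # apply fn to every leaf, rebuilding registered containers around results
    if type(x) in REGISTRY:
        flatten, unflatten = REGISTRY[type(x)]
        return unflatten([tree_map(fn, leaf) for leaf in flatten(x)])
    return fn(x)

# hypothetical stand-in for torch.return_types.sort
SortResult = namedtuple("SortResult", ["values", "indices"])
register_node(SortResult)
```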
45ac08b319 : torch.autograd.grad needs an extra tuple when handling single outputs and is_grads_batched=True
cd0fdccaef : Enable windows tests for nvfuser
bd7e99cbb9 : Fix doc build
736f0d0e46 : [quant][core][gpu] Added additional uid-data_ptr pair for broadcasted_bias in quantized cudnn linear op
c52290bad1 : Revert "Fix public binding check for modules with `__all__`"
a2392b000c : [quant][core][gpu][improvement] Set tensors as virtual in quantized cudnn linear op
28a3654668 : Make PYTORCH_TEST_WITH_SLOW_GRADCHECK consistent with other test envvars
30943d1610 : Remove noarchTest decorator
725aad1432 : Fix public binding check for modules with `__all__`
86ea57805b : Add back SHARD_NUMBER and TEST_CONFIG to upload test stats step
6b72357b14 : Modify GraphQL PR info query to adjust for workflow consolidation
b34b192d6b : Reland "Make debug_pkl smaller by only emitting unique traces." (#73368)
de949a0e59 : Various OpInfo architecture improvements
cdbd39ba57 : fix fetch-depth: 1
ce7feeadc0 : Disallow calling tolist on tensors with nullptr storage
3577ff6382 : [FSDP] Fix `no_sync()` + `FULL_SHARD` root all-gather behavior
fc2778cd33 : [FSDP][Easy] Minor simplifications
7ca03dcdfc : avoid some unnecessary view_copy calls
e70fea8e76 : ci: Add credentials to upload test stats for rocm
7be1b2961b : fix unfold for meta tensors
204df13d42 : teach ivalue about List[Optional[Tensor]], fix fallbacks
4c7b4b5770 : fix out= op handling for functionalization
cb17973a2b : split out functionalization codegen to use view_copy operators
2602a5e76f : fix local_thread:pytorch embeddingbag
381e725911 : [quant][core][gpu][bug fix] Added clone and contiguous() to broadcasted_bias tensor in quantized cudnn linear op
6dc71461e1 : [quant][core][gpu][bug-fix] Added additional caching support in quantized cudnn add_relu op
c4cf51de99 : Revert D35679120: Add first version of Buck build workflow
9e864f475c : [FSDP] Separate shared code into hooks
b582472ac3 : [GHF] Fix sync-branches
f120d5be94 : remove fp16 support from cpu linalg functions
ebb60a8b2f : [NVFuser] don't decompose linear if we don't have shape info
019259a66f : [FSDP] Add `scatter_full_optim_state_dict()`
c5d57e7be9 : Revert "Use batched operations for PowerSGD"
5654e63398 : Use batched operations for PowerSGD
3a38f175dd : Convert DDP parameters to ReplicatedTensor during forward pass.
f4d89aa28e : Fixes combinations throwing a meshgrid warning
e9791cd8c9 : Validate Sparse Compressed tensor arguments
5d0450d4b8 : Updates codeowners for OpInfo files
ef50186a7d : remove unused //:tools_jit target and dependency from generate-code
92a5815502 : stop creating jit/generated/ directory
c132b9fd71 : Add first version of Buck build workflow (#75815)
7d5c07830d : Add upgrader related logic to flatbuffer (#71451)
b5a25180f1 : Revert "Add first version of Buck build workflow"
fe8eff3711 : Revert "Add upgrader related logic to flatbuffer"
d79d9fa283 : Revert "Remove breakpad dependency"
9aa3c7fd83 : Remove breakpad dependency
0df2e863fb : [Profiler] Expose `profilerType` in Python
1c5c739993 : Adds opinfo for pdist
9663886009 : remove dead code from generate_code program
d8374efb53 : [lint] fix spurious annotations on formatting linters
977a66fe88 : switch DimensionNode's base from TsNode to Node
dfae96171a : Add upgrader related logic to flatbuffer
2d14e42186 : Adds multilabel_margin_loss OpInfo
051802e4f4 : [quant][core][gpu][bug fix] Fixed off by one index issue in broadcasted_bias
17fbb617ad : Adds soft_margin_loss opinfo
9a0d1c5446 : Adds binary_cross_entropy opinfo
452ebd03a9 : OpInfos for triplet_margin_loss and triplet_margin_with_distance_loss
c45f1add97 : [quant][core][gpu][improvement] Modified quantized cudnn linear caching
702c7f00e2 : Exclude mobile TorchScript models from linter checks
b0081e7642 : [LT] Support narrow
9d17157fae : Adds OpInfos for l1_loss and smooth_l1_loss
1c0a01e709 : [Distributed] Add a guard for non CPU/CUDA devices
7d8b366223 : [quant][improvement][gpu] Fixed errors in test_qlinear_cudnn
2da43ec01a : Adds margin_ranking_loss opinfo
5dcbcc6de8 : [Quant][fx] Fix get_default_qconfig_dict for fused modules
41c59987d9 : Adds OpInfos for max_unpool{1, 2, 3}d
a7dc893df4 : Adds multi_margin_loss opinfo
aeb6cf356f : Adds multilabel_soft_margin_loss opinfo
18b9d6b20a : ci: Change with-ssh to be on by default
dc2e630341 : Optimize PReLU (float32) and enable PReLU BFloat16 support in CPU path (#63634)
e8ed042043 : Revert "Optimize PReLU (float32) and enable PReLU BFloat16 support in CPU path"
263c4c2a95 : Optimize PReLU (float32) and enable PReLU BFloat16 support in CPU path
e54c1f6c90 : [torch][elastic] Make final agent barrier to shutdown properly
bc72add504 : [2] remove caffe2 math.h from maskrcnn ops
85235c6f8e : [lint] Use a problem matcher for GitHub annotations
3e0e137555 : [lint] add test ownership lint to lintrunner
991c89b2d1 : set fetch-depth: 1
3c238c6c5a : Revert "Add support to `Tensor[]?` for structured kernel codegen."
15892481ab : [NVFuser] fix incorrect vector size in chunk move
9555f3b3c1 : [lint] improve retries in stale job
fa2c7e2ede : [jiterator] logical_and: complex
a5b4839f35 : Move //xplat/caffe2:caffe2_serialize to shared build structure
3467f3fa80 : Remove spurious warning when using disabled torch function
ac2de5a03a : Add support to `Tensor[]?` for structured kernel codegen.
0ab03af566 : Utilities for handling Sparse Compressed tensors
337e3932aa : Fix data race on owns_pyobj_ accesses with non-GIL protected threads
533f0cb28a : Set correct module for APIs in torch module
2772870860 : Preserve Python dispatch keys upon copy_tensor_metadata_except_version_counter
b09769992f : Improves the OpInfo out= tests
123297a8c0 : [lint] use python to run flake8 and mypy in linter
356f1478d8 : [lint] add actionlint to lintrunner
cbbb96c271 : [lint] add shellcheck to lintrunner
63b0e1faa8 : Add layout member to SparseCsrTensorImpl
1cd46b309b : Introduce sparse compressed layouts: SparseCsr, SparseBsr, SparseBsc
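The sparse compressed layouts introduced above all share the compressed-row/column idea. For the CSR case, a minimal pure-Python decoder showing how `crow_indices`, `col_indices`, and `values` encode a matrix (a sketch of the layout, not PyTorch code):

```python
def csr_to_dense(crow_indices, col_indices, values, shape):
    # CSR layout: crow_indices[r] .. crow_indices[r + 1] delimit the slice of
    # col_indices/values that holds row r's nonzero entries
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for k in range(crow_indices[r], crow_indices[r + 1]):
            dense[r][col_indices[k]] = values[k]
    return dense
```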
4c87283593 : Introduce virtual layout_impl method to TensorImpl
ef640355de : [quant][core][gpu][bug fix] Changed at::contiguous call to at::to for quantized cudnn operators
515d61f2fc : [quant][core][bug fix] Corrected at::to(memory_format=...) support for quantized tensors
cf735d899a : [quant][core][bug fix][gpu] Added kReluFused to quantized cudnn conv operator's caching
03cbdaee20 : [lint] use new merge-base-with feature in lintrunner ci
990d155c9c : Update Index.rst to add TorchRec to domain list.
752ab799bf : Support noncontiguous inputs for torch.distributed.nn.functional.all_gather/reducescatter/gather
d2e7e4e2e4 : [lint] add back some omitted clangtidy folders
32aeb1e95e : [ci] fix make setup_lint
e843b6667e : [ci] add retries to stale action
cc1902a5ed : Revert "Add warning when importing caffe2 on build without BUILD_CAFFE2=1"
1c60b9aaa5 : [ci] use lintrunner in CI
54bd9e0402 : [PyTorch] Add a warning that NestedTensor is in prototype stage
dfcaedeb1a : Fix formatting issues
db6165215e : Revert "[ci] use lintrunner in CI"
9bbe1d632e : Fix ONNX ATen fallback for non-caffe2 engines
b142a224c6 : Add warning when importing caffe2 on build without BUILD_CAFFE2=1
950dc1b457 : Fix use of ONNX optimizer by Caffe2 backend
d21082c982 : [NVFuser] make comparators obey strict weak ordering
56f801e788 : [PyTorch] Add test for all-masked case for native softmax
d4c527e738 : [PyTorch] Run test_transformerencoderlayer_gelu on CUDA
96cf8a450a : [PyTorch] Run test_transformerencoderlayer on CUDA
9312ee8cd6 : Revert "remove fp16 support from cpu linalg functions"
39717d3034 : Remove histogramdd functional wrapper
c2d5f6a5a4 : [nnc] Update bounds overlap analysis to identify non-overlaps even with symbolic bounds
d8ad1a579f : [nnc] Fuse loops that have variable bounds
de66304aa5 : Add first version of Buck build workflow
29af58db51 : remove fp16 support from cpu linalg functions
5564663778 : [CUDA] Add fastAtomicAdd to scatter_add [v2]
496d4bb7ca : Revert "Add first version of Buck build workflow"
24fd220be3 : Fix compilation on macos
22a10ce513 : Port `cat` kernel to structured kernels.
4c3ee53522 : [ci] use lintrunner in CI
213ada853f : [pyper] to + lengths_to_offsets with nnpi shape inference support (#5931)
97c993ca7a : [PyTorch] Add NestedTensor support functions for transformers
332086c08d : Add BFloat16 support for multinomial and poisson on CPU
1f4828002d : improve qcat_nhwc performance on both multi-core and single-core
34be6401c3 : improve multi-core performance of qupsample_bilinear2d
36d622f53f : improve multi-core performance of qupsample_nearest2d
cb6bee8f01 : improve multi-core performance of qbatch_norm2d
09e9580731 : improve multi-core performance of qmax_pool2d
5ab1a5fa6c : improve multi-core performance of qavg_pool2d
b2894de0fd : Indent allowlist_for_publicAPI.json entries
0c08fcff32 : [quant][fx] Cleanup some unused states and args
4766314de1 : Disable GPU tests for the PiecewiseLinearTransform operator. (#75738)
8a1a93a923 : port Bazel //tools/autograd to shared build structure
ab0d9b18e9 : [LT] Support Tensor.is_alias_of
b311f255d8 : Disable GPU tests for the Dropout operator. (#75739)
715e07b97f : Revert "Remove histogramdd functional wrapper"
b46f3a49b3 : [tensorboard][writer] Add missing 'dataformats' argument to 'add_image' docs.
7a243ddd19 : Add import to `importlib.abc`
8cc7221a65 : move generate_code resources into tools/autograd library
8cc338e5c2 : Remove histogramdd functional wrapper
1118b157bc : Revert "Remove 11.5 experimental builds now that we have 11.6"
045228bad1 : Add first version of Buck build workflow
d65414d145 : Add test for FC/BC for torchscript file.
7545e2a4d6 : [ONNX] Add constant fold for onnx::ReduceProd
aa51ee2345 : Enable numel tracing
db20a3b014 : [SR][easy] Fix README diagram
277c8fe646 : [DataPipe] Make sure the profiler wrapper can delegate API for iterator
2197a9caaf : [vulkan] Refactor Vulkan Runtime and Adapter
5d059d20ad : Remove 11.5 experimental builds now that we have 11.6
fa6e3cf0be : [FSDP][Easy] `named_parameters()`, `named_buffers()` refactor
01c8ac3bd2 : [pytorch][require export] Don't require the repo to be cloned
98b4a4100d : [SR] Add a copy variant for fused_split_and_squeeze
63c6209d09 : ns for fx: reenable tests disabled by #62608
f1f185f6f9 : ns for fx: fix bug to enable again on torchvision models
ae3210420e : ns for fx: fix issue with shadowing nodes of unknown dtype
f78e0fc956 : [ONNX] Support aminmax
6305e572ed : [ONNX] Support dynamic scale & zero_point for fake_quantize_per_tensor_affine
d4cce30573 : update codeowner for public API
495c5aebb1 : Revert "remove fp16 support from cpu linalg functions"
de18c28a4c : remove fp16 support from cpu linalg functions
ce842f43f2 : Relanding shape cache (75400) (#75710)
db1801099b : Revert "Relanding shape cache (75400)"
fe1e6de73a : [lint] fixes to mypy linter
d6e6061b98 : Add checks for public and private API
692ebc8d8b : baby steps on patching inf/nan behavior & aten::amin support in nvfuser
b5b296c4cf : Fix: Make `nn.init.orthogonal_` no-op for empty input
790cc8f259 : [JIT] nvfuser test - use only major & minor versions
d7b29f3ee6 : Add forward AD codegen support for single formula returning multiple outputs
8721abc429 : Add forward AD support for norm, dist, F.pairwise_dist, F.normalize
3471b0eb3d : Revert "Remove histogramdd functional wrapper"
51666ff123 : Do not ignore lazy/generated/README.me
08f3b95857 : fix PostLocalSGDOptimizer and ModelAverager average bug
89486821ed : Relanding shape cache (75400)
d6f22abbcc : [PyTorch Distributed] Fix batch_isend_irecv
3c505fb4bb : Expose some functions out of ENABLE_FLATBUFFER.
ac8d220188 : Add `__torch_function__` override protocol supporting to some factory functions
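The `__torch_function__` entry above extends the override protocol to factory functions. A heavily simplified sketch of the dispatch idea — if any argument's type defines the hook, the call is handed to it (the real protocol passes `(func, types, args, kwargs)` and resolves subclass priority; `LoggingNumber` is a hypothetical wrapper type):

```python
def dispatch(func, args):
    # simplified __torch_function__-style dispatch: defer to the first
    # argument whose type defines the hook, else call func directly
    for a in args:
        hook = getattr(type(a), "__torch_function__", None)
        if hook is not None:
            return hook(func, args)
    return func(*args)

class LoggingNumber:
    # hypothetical wrapper that records which functions were dispatched
    calls = []

    def __init__(self, value):
        self.value = value

    @staticmethod
    def __torch_function__(func, args):
        LoggingNumber.calls.append(func.__name__)
        return func(*(getattr(a, "value", a) for a in args))
```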
7c9017127f : Remove histogramdd functional wrapper
4afe2db641 : [shard] fix sharded_tensor tests
bc371a2cd0 : [quant][fx][fix] Add additional checks when tracing back during maybe share output observer function
348881deaf : Update doc copyrights to 2022
bdf5a87714 : Extend sign-compare warnings to gcc (take 2)
1601a4dc9f : stop lint from getting submodules
c274f66268 : Revert "Adding Caching of calculated Symbolic Shapes"
1aeea24567 : Revert "Add checks for public and private API"
76614b3a33 : Test linalg vector norm subgradient
692f1b9a88 : [ci] custom stale action
a4126e5936 : Add test to make sure submodule hooks fire
a305c078da : [JIT] Prevent nvfuser registration on ROCm
31ed4827fe : Add checks for public and private API
648823b087 : [FSDP] Add `ignored_modules` ctor arg
c2124f5c66 : Turn on -Wsign-compare
f5f64526da : Adding opinfo tests for binary cross entropy with logits
04db1b874f : prevent overriding shuffle settings in DataLoader for datapipes
58f5fd9d32 : Turn on sanitize-address-use-after-scope/detect_stack_use_after_retur…
80e05b7df4 : Revert "Extend sign-compare warnings to gcc"
761bb06292 : [quant][fx] Use native backend_config_dict in convert
42b4d0e934 : [caffe2] remove unnecessary RCCL dependency
34446653c7 : Extend sign-compare warnings to gcc
71587e0514 : modify the check condition of Conv-> Add/Sub/Mul/Div folding
f83d047338 : [quant][fx] Use native backend_config_dict in prepare
af9203868f : Revert "Add checks for public and private API"
577c9ff854 : [FSDP] Implement reshard_flatten_tensor
d38d950f22 : [ROCm] libtorch nightly now correctly uses rocm runners
b1310b463f : [quant][core][improvement] Added support for data_ptr<T> for quantized tensors to return pointer to underlying int type (e.g., int8* instead of qint*)
d7e23286c5 : Add checks for public and private API
9a7bfaa929 : Adding Caching of calculated Symbolic Shapes
be354d8139 : [shard] Add basic math ops to ShardedTensor and add ReplicatedTensor inter-op
0203341bbd : patching clamp for one sided clamp
c2d4e201a3 : [FSDP] Avoid using the legacy _sharded_tensor package
f7e7af80e0 : disabling reshape
25aa251f37 : updated cudnn frontend to v0.6.1
143f7cca5d : [FSDP] summon full params staticmethod
8d3e3ebc58 : Add note about testing inconsistency
6d9210be68 : Update docker-builds to add CUDA 11.6
6402e62454 : Refactor flatbuffer jit code
c1f0e6e763 : [ONNX] Make Non-Float Op Exportation Compatible to Avoid Invalid ONNX Models
ca0ef52382 : [PyTorch Edge] Add Quantized Softmax Op (Naive Implementation) (Re-land)
2debdfdce6 : ci: Add lts for s3 index updating workflow
f281d83d77 : Moving Remove Tensor Type Specializations to after custom passes
04224e18f5 : Docs: Detail 3D tensor shape for transformer masks
3b18bc36f3 : Docs: Add missing zero-ing step in Rprop algorithm
484534f3be : Warn once about sparse CSR tensors beta support
ac2d2e3a3d : Fix some typos.
8e365fabc9 : [FSDP] Fix `_get_param_to_unflat_param_names()` for shared params
20bf0d4502 : ci: add repo-token for stale workflow
23b8414391 : code-generate non-aliasing {view}_copy kernels (#73442)
dfcb7035a0 : Add retry to Run Actionlint shellcheck step
1793ab0bd2 : [ONNX] Initial test suite for Torch IR to ONNX
26d22b7fcf : [DataPipe] Change interface generation process to revert back to original working process
91d134093e : Add fastpath for stack and cat JVP computation
1aaf3ee6ec : Add ZT fastpath for sub
e21e4b70f4 : Support masked prod on CSR tensors
a98b4666e0 : Enable test_sparse_mask for Windows
e580db0588 : [ci] fix codegen lint
80ea6955af : Add cuda-11.3+clang9 build workflow (take 2)
0a1bc5f501 : Miscellaneous __torch_function__ fixes
4209609037 : Adds missing AT_CUDA_CHECK in CUDAGraph.cpp
25ee52570e : [ao][sparsity] composability for sparsity and QAT convert
a68b1f388f : ignore exception in Add-MpPreference
6cdc6dd8e5 : [pytorch][require export] Create smartplatform configuration for trymerge (#75542)
8a053e387e : remove special casing for sparse CSR shape comparison
8fe43d76d5 : Revert "Add cuda-11.3+clang9 build workflow"
1a85699c03 : [complex32] enable test view, view_as
709fcc862e : Add cuda-11.3+clang9 build workflow
e7f4f5dd9b : CUDA 11.6 workflows (#75518)
466100295d : fix external backend device guard codegen for factory ops
ca056cc918 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
0b9eb19d04 : Handle real paths that have other invalid Python identifiers
e02584cd95 : [FSDP] Add `rank0_only` to `full_optim_state_dict()`
72d3d160fb : [quant][fx] Remove additional_object_mapping from the docs (#75389)
bcf6974c20 : [quant][fx] Remove "additional_fuser_method_mapping" key from prepare_custom_config_dict (#75388)
0389f99c49 : make apply_to_tensors support OrderedDict type (#75560)
55d479aca5 : [quant][fx][bc-breaking] Remove "additional_qat_mapping" key from prepare_custom_config_dict (#75387)
9855f1271c : [jiterator] neg: complex
caa28ff495 : refining regex of .gitignore core.*
e177d2cc44 : [complex] conv3d
9b639f263d : Fix tsan issue (#75528)
f42bdff016 : [quant][fx][bc-breaking] Remove "additional_quant_pattern" key from prepare_custom_config_dict (#75386)
f26891c8b7 : [quant][fx] Using native backend_config_dict in fusion (#75378)
689cec9493 : [quant][fx] Remove "additional_fusion_pattern" from prepare_custom_config_dict (#75377)
dd667b6e97 : [quant][fx] Move all fusion registrations to backend_config_dict (#75318)
0f7e60d6a2 : [shard] add ShardedTensor.cpu() (#74941)
e6842c0ed4 : Optimize size(dim) and stride(dim)
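For the `size(dim)`/`stride(dim)` entry above, a quick reminder of what `stride(dim)` reports for a contiguous tensor — the product of all trailing sizes (a pure-Python sketch of row-major strides, not the optimized accessor itself):

```python
def contiguous_strides(sizes):
    # row-major strides for a contiguous tensor: stride(d) is the product
    # of all sizes after dimension d
    strides = [1] * len(sizes)
    for d in range(len(sizes) - 2, -1, -1):
        strides[d] = strides[d + 1] * sizes[d + 1]
    return strides
```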
03dd22a281 : [fx2trt] improve to_dtype (#48)
caa1177440 : [c10d] fix nccl gather outputs on non-root ranks (#75535)
3f1351d1cf : Disable strides and contiguity for CSR tensors
771527fc6b : Revert D35489740: [pytorch][PR] Updated cudnn_frontend submodule to v0.6
1fc2f4cc31 : [ci] Fix bug in get_workflow_job_id.py
6ba29d715e : [BE] Fix deprecated usages of `isIntegral`
58a44523c1 : Add maximize flag to Adadelta
c02bdfae5d : Docs: Fix `log_target` example in kl divergence
43cc726c22 : updated _forward_unim. to include descriptive error
e61b2e12e1 : Support masked sum on CSR tensors [CPU, CUDA]
0bd3354547 : Update onnx.rst
dc37090ec5 : [LT] Support diagonal op (#75230)
b10d151745 : Ensure convolution_backward respects output_mask
70cab5ebb1 : [PyTorch] NestedTensor kernels for {r,g}elu{,_}
5c5a1e0a57 : [PyTorch] Add & use in-place gelu
48147675f2 : [PyTorch] _addm_activation native function for matmul/bias/activation fusion
d88a116015 : Fix exporting models to ONNX without allow_tf32 in _convolution call
4a85145bbd : Ansley's rebase of DimensionNode onto master (#75352)
dfa1d953c6 : [complex32] enable testing for {h,v,d}stack
8c1004e103 : Make split_with_sizes an overload of split
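Making `split_with_sizes` an overload of `split` reflects that `torch.split` already accepts either an int chunk size or a list of section sizes. A list-based sketch of those two semantics (illustration only):

```python
def split(seq, split_size_or_sections):
    # int: equal chunks of that size, with the last chunk possibly smaller;
    # list of ints: chunks of exactly those lengths, in order
    if isinstance(split_size_or_sections, int):
        n = split_size_or_sections
        return [seq[i:i + n] for i in range(0, len(seq), n)]
    out, i = [], 0
    for n in split_size_or_sections:
        out.append(seq[i:i + n])
        i += n
    return out
```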
fe799374de : [complex] conv2d
ff8705b374 : improve datapipe deprecation warnings (#74685)
1d6007fad8 : Add edge case tests and update tril code to cover
26ba7a9297 : ROCm: Enable test_masked_scatter_large_tensor
0d4d193ac7 : Enable atomicAddNoRet() for all gfx targets.
7a915a1576 : fix 'pytorch/tools/code_coverage/README.md' for renamed options
1bb6b3bd96 : remove from generate_ci_workflows.py, add tags back in .github/templates
0bdf9a9833 : [Quant][fx] Decouple prepare_*fx from training/eval modes (#75401)
e4817e6c13 : [quant][fx] Move embedding ops to backend_config_dict (#75317)
68b18666a9 : forward-mode AD formula for F.dropout
cbabd8f9f8 : [ONNX] Raise exception for mixed precision input for BatchNormalization
6e11435c41 : Updated cudnn_frontend submodule to v0.6 (#75481)
d92213f741 : move setup_helpers programs to their own package (#74838)
9905b1f29a : [quant][fx] Move rnn ops to backend_config_dict (#75316)
c755d22045 : [mobile] Update test model generation script to count op occurrences
38a758e251 : Add forward AD for rsub, polar, and FFT
37dea0454d : [quant] add checking number of args when checking observer in same graph (#75460)
8e95a2cda6 : [wip] Correctly use pip3 and python3 for upload-test-stats
f9407fdb86 : Dynamo+LTC: handle inplace ops (#75359)
9c56cc4755 : [torch deploy] Add -rdynamic option explicitly to CMakeLists.txt (#75461)
8ac4729105 : [ao][sparsity] Composability of fusion and sparsity (#74847)
31ed77b769 : Revert "Support masked sum on CSR tensors [CPU, CUDA]"
35d4a805eb : CUDA Kernels: Use per-operator headers (4/4) (#71215)
cd62bbf756 : CUDA Kernels: Use per-operator headers (3/4) (#71214)
9d4a76dcc7 : CUDA Kernels: Use per-operator headers (2/4) (#71213)
622cff3e95 : Cuda 11.6 Disable failing tests (#75420)
9d05ce602e : [JIT] Move log_extract.py helper functions to torch.utils
b7682d351a : [SR] Refactor memory planner to prepare for new algorithm (#74730)
00d11de564 : [JIT] Add support for closed over inf
8bf8b64b54 : [easy][PTE] Ensure the stream points to the beginning before calling getFileFormat (#75437)
0b1f8b0319 : [fx2trt] support for ne, logical_not, logical_and (#75444)
7f0d79625b : [quant][fx] Move output share qparam with input ops to backend_config_dict (#75315)
ad07b7c338 : fix to map an undefined tensor back to a tensor list
2f98fa9147 : [SR] Do not manage tensors that escape scope via container (#74966)
f1db3e465a : Adding integration of SSA into LazyTensor
3001bda304 : [PyTorchEdge] Backport from v9 flatbuffer to v8 pickle (#75201)
d264824bc8 : [quant] adding const qualifier to Quantizer::equalTo (#75355)
c1bd33cfff : [ci] run upload-test-stats on self-hosted runner
e3f546af13 : Remove flake8 pre-commit hook
f98881b1bf : update eigen submodule to latest release (3.4.0) with rocm fixes
e60c403b2f : [ONNX] Use fixed test input for flaky test
c1de6b191b : [SR] Update README (#75283)
e48ecfda9f : [FSDP][Easy] Fix return in docstrings
339e45ba39 : Revert "[CUDA] Add fastAtomicAdd to scatter_add"
8a6f0a25b3 : [ci] add retry to getting job id
ce9e27a0fc : Add new keys for Graphcore IPU (DispatchKey / Backend / DeviceType)
c7ae23b50e : Extend CSR constructor to support batched indices and values
5c28216aea : Support masked sum on CSR tensors [CPU, CUDA]
e0f9c69fcf : Fix addmm_cpu for int64
11f1fef981 : Update documentation for scatter_reduce
11606f5c8f : [ci] add env vars to upload test stats
3f1633fa86 : [quant][core][gpu] Implemented max pooling 2D using cudnn (#74673)
d031ce90c1 : leaky_relu forward-over-reverse rule
791321e44d : Fix minimum, maximum forward-ad formula for float32
20be31de90 : Revert D35423079: [pkg] add generic ZipFile Reader/Writer
a1d59bbca1 : Revert D35423078: [pkg] add zipfile unit tests
9a403db6af : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a0a2b75565 : [jiterator] sigmoid_backward: complex (#74948)
53c2fc65b3 : Abate spurious resize warnings in `MultiMarginLoss` on CUDA
252e1ccce6 : Enable TE fuser to support user defined operator (#73073)
6e59067c0e : [quant][core][improvements] Removed reflection_pad1d_quantized_cpu, dimension and output resizing code in reflection_pad1d_out_template and implemented reflection_pad1d_out_quantized_cpu (#74755)
a9d43d6f6e : Dynamo+LTC: add pybind to set force fallback config and use that in test_extract_compiled_graph.py (#75292)
31c86625cc : __torch_function__ mode
4055d1f653 : [SR] Fix StaticRuntime move ctor (#74927)
5dbf39b823 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
00c1e01ad0 : Remove internal logic to handle bytecode version 3 (#57775)
7e9bb1c273 : use Timer for cuda benchmarks
2d5e4cff85 : disabling view
aac4d6cd63 : updating nvfuser tests
17ad0f2607 : Update upload-test-stats req
401b25bb56 : [ROCM] Navi21 Enablement 9: Range and Multinomial Kernels (#73550)
dc28b2ba21 : [ONNX] Fix error comparing tensors on different devices in DeduplicateInitializers
50b6959c0f : [ONNX] Support torch.amax and torch.amin
89e79f844d : Add list of supported ATen ops by ONNX converter into torch.onnx page
ca374773b4 : [ONNX] update default opset_version to 13 (#73898)
0dae42f5bd : [ci] temporarily pin get-workflow-job-id to master
3b1c085613 : Fix duplicate commit logic in syncbranches (#75384)
782fd7496a : [PT-D] Fix Sharding spec inference to avoid invalid chunk sharding to be inferred as chunkshardingspec (#75296)
e167244aa4 : [quant][fx] Move the remaining fixed qparam ops to backend_config_dict (#75314)
7adf59a3bd : [quant][fx] Add BatchNorm ops to backend_config_dict (#75260)
2f3a94996c : [quant][fx] Add cat to backend_config_dict (#75259)
86485f61c5 : [quant][fx] Remove the remaining registrations in BinaryOpQuantizeHandler (#75258)
53f7233004 : [quant][fx] Move all binary op configs to backend_config_dict (#75241)
99ce996474 : [pkg] add zipfile unit tests (#74929)
d4a709be3d : [pkg] add generic ZipFile Reader/Writer (#72237)
7dca0af299 : [ci] Only upload test stats on workflow success/failure
14ae8784f3 : [ci] fixes to upload test stats workflow
899326ad5e : [kineto] submodule update and fixes
be4a604442 : [CUDA] Add fastAtomicAdd to scatter_add
9b563d6468 : initial test stats work
ef41201d4a : [ONNX] Add bucketize symbolic
10023eeb65 : elu, selu, celu fwd-over-rev rules
9a8e605565 : Add support for legacy tensor constructors in JIT (#74785)
e50dd5ba97 : [JIT] Allow empty temporary list literals to be matched to arbitrary types (#74768)
b72b5b2833 : Add support for nested var names in parser (#75124)
43b56b3814 : Add Parsing of tensor constants (#75119)
24c255ee7c : Small repro improvements (#75108)
f984e50f39 : Extend jit::load to work on flatbuffer file; Take 2 (#75256)
ff7051781f : [quant][fx] Remove Standalone and CustomModule QuantizeHandler type checks in prepare (#75202)
936e7eabca : add pytorch directory into Exclusions of Windows Defender
14baca38c5 : [WIP] enable cu116 builds
eb43e60a92 : [ROCm] upgrade CI distributed test to ROCm 5.0
b780e8a640 : [complex32] {d,v,h}split
1b3313f2cd : [complex32] enable complex32 testing for atleast_nd
706b9e8b8d : [reland] [complex] conv1d
0d7aad822e : move Bazel //:tools_autograd to the //tools/autograd package (#74745)
6ccccfcc12 : [FSDP] Enhance test for checkpoint
a8e45b5969 : Make forced eager fallback optional in codegen (#75274)
5c964e38b0 : Make default codegen behavior skip Lower function (#75267)
78ba87ec4b : [fx][ShapeProp] make shapes and args/kwargs concrete for minimizer (#75291)
a90bcd2066 : [quant][fx] Support override observers and fake quantize module in backend_config_dict (#75135)
9817875729 : [quant][fx] Add support for BinarOpQuantizeHandler in backend_config_dict (#74882)
5994d68484 : Reland NVFuser guard changes
722e9e3403 : Wconstab/doc codegen (#74850)
00e2c14b78 : Revert D33970688: [pkg] add generic ZipFile Reader/Writer
20266f054b : Revert D35254715: [pkg] add zipfile unit tests
9bb21fac95 : [ao][sparsity] make sparsity compose with PTQ convert (#74846)
1b4e5a8c68 : Make LazyIr.h use provided backend namespace (#75264)
1ab03a0f6f : Deprecate `__torch_function__` as instance method in C++
383e56eb3f : jiterator reland (#75231)
cd80ef6cdb : Add TORCH_CHECK for floating point exception in native_group_norm
26dcec152c : Added support for SSA for ops not in a JIT graph
e1b4117e30 : Move shape and operand definitions to base node (#75223)
b8a4708ac0 : [pt] Add half precision support for nn.EmbeddingBag (CPU) (#74844)
57ba615676 : Back out "[shard] use scatter in shard_parameter API" (#75295)
76e9730d02 : [FSDP] Code simplification
93e650b8f7 : [pkg] add zipfile unit tests (#74929)
85e163c56b : [Static Runtime] Fix a bug that `aten::full_like` reuses a tensor that does not match arguments (#74255)
c2c260bfc3 : [pkg] add generic ZipFile Reader/Writer (#72237)
6d832a7a20 : Revert "Extend CSR constructor to support batched indices and values"
3a0b393d49 : Back out "Revert D35000703: [WIP][FSDP] Mixed precision enablement" (#75024)
189e72babe : [Model Averaging] Fix post_localSGD_optimizer
bfbe4b60aa : use links to re-enable tests
6a37e0df97 : Split SILU OpInfo
862f67454f : Revert "[complex] conv1d"
3c10987692 : don't add extra shuffle in DataLoader2 if one is present
e5a1a78045 : masked argmin/argmax
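Masked argmin/argmax reduce only over unmasked positions. A minimal pure-Python sketch of the argmax side (masked-out entries are skipped entirely; the all-masked result here is `None`, an assumption of this sketch):

```python
def masked_argmax(values, mask):
    # index of the largest value among positions where mask is True
    best = None
    for i, (v, keep) in enumerate(zip(values, mask)):
        if keep and (best is None or v > values[best]):
            best = i
    return best
```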
2ecc59086a : .github: Update s3 actions to include runAttempt
5870e84407 : add DispatchKeySet function to get highest backend key
b64e7dee51 : [complex] conv1d
f2a4d49174 : torch.mm(dense, sparse_csr)
e9a8e6f74a : Add include_self flag to scatter_reduce
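The `include_self` flag added to `scatter_reduce` controls whether the destination's existing values participate in the reduction. A 1-D, sum-only sketch of the semantics (not the PyTorch kernel):

```python
def scatter_reduce_sum(self_vals, index, src, include_self=True):
    # accumulate src[i] into out[index[i]]; with include_self=False, any
    # position that receives at least one src element starts from 0 instead
    # of its existing value
    out = list(self_vals)
    if not include_self:
        for i in set(index):
            out[i] = 0
    for i, s in zip(index, src):
        out[i] += s
    return out
```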
45d5e2b2cb : [FSDP][Easy] Fix 0-dim tensor optim state device (#75243)
79cae53fdd : Add quantized::softmax to fc list
d88c454e8b : [FSDP] Warning when fail to clone (#74946)
90a56fc515 : Add `-Wsign-compare` to list of clang flags
640c1be900 : Shape functions: Use friendlier clamping pattern
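The "friendlier clamping pattern" referenced above is the standard branch-free nested min/max idiom:

```python
def clamp(x, lo, hi):
    # clamp x into [lo, hi] with one nested min/max instead of explicit
    # branching; assumes lo <= hi
    return max(lo, min(x, hi))
```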
32e58c73c4 : Back out "Extend jit::load to work on flatbuffer file" (#75244)
41992791e8 : torch.hub security improvement: add new trust_repo parameter
1bcae0d10e : Back out "[PyTorch Edge] Add Quantized Softmax Op (Naive Implementation)"
23bcab19a9 : [quant][refactor] Refactor find_matches for easier future extension (#74878)
7172d8918e : Towards supporting quantized structured kernels (#74560)
02e30a09f7 : [ao][sparsity] make sparsity and PTQ compose (#74845)
10bb0ffe69 : Fix casting bug in state_step for optimizers when loading state dict
22d227fd29 : Fix lazy ts backend build flags (#75237)
f6e7a2ab64 : Fix sign-compare in caffe2 cpp tests
6d85e7dafa : Fix sign-compare in caffe2
c593c220ff : Fix sign-compare violations in torch_python
78305ad2b7 : Fix sign-compare in nnapi backend
fd8f2e4018 : Typofix
88edc21828 : [quant][fx] Fix lowering pass for cases when `to` is not called with positional args (#75146)
dd63658a2b : [GHA] Add note about updating cached GQL queries
b0e047b59d : ci: Bump linux runner availability to 750
81d765ef1f : Fix sign-compare violations in cpp tests
ef56497ea0 : Fix sign-compare in `c10d/Utils.hpp`
ff404cecdd : Add "pytorch/metamates" to merge_rules.json (#82)
74b23b2066 : quantization: autogenerate quantization backend configs for documentation (#75126)
83400e836e : [JIT] nvfuser CI fixes
60bda4d06b : [Static Runtime] Fix handling relu in quantized linear relu dynamic op
211c6bc050 : Fix Team Tagging in CODEOWNERS
eead599039 : Extend CSR constructor to support batched indices and values
f6b9a1d4fb : Revert "Support masked sum on CSR tensors [CPU, CUDA]"
7d2f36b8b1 : Revert "WIP Jiterator reduction"
4fb7fa081e : [Model Averaging] Code simplification for _find_process_group function (#75007)
87f40ee6d6 : [PyTorch] Existing MHA: fuse the attn_mask addition (#73219)
77c7a50d46 : Add BFloat16 support for logsigmoid, hardsigmoid, hardshrink, softshrink, hardswish and softplus on CPU (#63134)
63189698ec : Go through codebase and consolidate checkout steps
29de7924a9 : Fix parameterlist dir func error (#74404)
841a7f5187 : [DataPipe] apply dill serialization for _Demux and add cache to traverse
cda3f586d0 : Support masked sum on CSR tensors [CPU, CUDA]
caa403083b : [GHA] Small updates to syncbranches.yml
6a58651256 : changed documentation from cosine distance to cosine similarity; fixe…
0765a80491 : Typo in Dockerfile
9429dbb434 : make functionalization work better with subclasses
936a65056e : Use the same checks in all `grid_sampler` functions
e3848d75df : Dedupe no parsing __torch_function__ handler
de6353ba88 : Introduce SafePyObject, make TorchDispatchTypeObject use it
1faf1cdf12 : Split PyInterpreter into its own file.
ee9335a608 : [Quant][fx] Define native backend_config_dict for linear and conv (#74636)
7b506e889c : Revert D35352705: fix: buck2 build fbcode//caffe2:torch_types_gen
152489a8cf : fix: buck2 build fbcode//caffe2:torch_types_gen (#75176)
c5872e6d6d : Add BFloat16 support for smooth_l1_loss on CPU (#62558)
96050ee05b : Deprecate bytecode v3 and bump kMinSupportedBytecodeVersion to 4 (#75149)
bf16552617 : Restore TestTorchFunctionOverride
0509022450 : torch.tensor: add tests for list of numpy arrays case
631f035131 : Update forward AD not supported error message
ad028e5e09 : WIP Jiterator reduction
bd032cd8d6 : [quant][fx] Remove is_output_quantized from QuantizeHandler (#74843)
dca42dc925 : [WIP] Add support to `Tensor[]` for structured kernel codegen.
8b7e2bf7a6 : Skip TorchScript backend for OVRSource as well (#75138)
3f108a5cc1 : Save disable_torch_function in ThreadLocalState
6efc5c1acf : Rewrite upgrader bytecode version from 3 to 4 (content unchanged) (#75120)
d9d34922a0 : Extend jit::load to work on flatbuffer file (#75022)
3ebdd4388c : [Easy][FSDP] Update full osd warning (#75109)
19747cbbe6 : Dynamo+LTC: merging related code from staging branch to master (#75046)
8f4f1638bb : [PyTorch] Flip polarity of masked_softmax mask (#78)
87ab665ba6 : Fix SyncBatchNorm for empty inputs (#74944)
c5b3727e5e : [JIT] OpInfo tests for nvfuser (#71299)
27deefb5e1 : [JIT] Enable NVFuser tests in OSS CI (#73322)
e9e75215e2 : [JIT] Optionally validate nvfuser outputs after execution (#74361)
1352c6417a : Revert "Nvfuser guard patch"
637273da4e : [ROCM] unskip test_fn_grad
623f939704 : [GHA] Do not chown for linux CI
f888dc5842 : [ROCm] re-enable test_Conv2d_groups_nobias tests
4757ae4b9e : Update SSH part of contributing. Fixes #75112
711bb4a159 : GIT_DEFAULT_BRANCH can be empty
5b142ce5ce : `cholesky_inverse`: complex autograd, forward AD and correct tests.
58d4f59e87 : [mobile] enable ios tests for on-the-fly models
2b40e6f991 : [BE] add readme for .github, located in .github/scripts
54bd433879 : [mobile] display uncovered ops and pytorch version
ea2e0e773b : Fix LTC tests on Windows (#74960)
adee867c8d : irangefy ONNX
b9ba9c621c : irangefy autograd codegen
9bb12beda1 : Fix sign-compare violations in python_list.h
a48fe4620c : Fix c10 sign-compare violations
ea833a5dc1 : [quant][gpu][core] Added quantized linear operator in cudnn (#73959)
8a7c9a5e01 : [quant] Always match the first matchable pattern in fuse (#75047)
0ce02ea52d : Revert D35284563: Use the same checks in all `grid_sampler` functions
a82a78b9ef : Removed python dispatch keys from dispatch key extraction
40d70a6fcb : re-enable disabled test via commit message
f7829812b4 : scatter_reduce CUDA support
65b65af236 : [complex32] cat, fill_(partial), item
7aaa75af05 : Extending _get_bytecode_version to support flatbuffers format (#75021)
835cc66e5d : Use the same checks in all `grid_sampler` functions (#74635)
d86181f745 : Nvfuser guard patch
00607e75e5 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
fa241e5951 : [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
fd2050900b : [fx][1/2] add PassManager and refactor AFG/AGM (#74972)
e1ac97030a : [PT] Make error message from jit.trace more meaningful. (#75056)
2bfa018462 : [BC-breaking] Use ScatterGatherKernel for scatter_reduce (CPU-only) (#74226)
11c412a8ec : [static-runtime] optimize empty if blocks at runtime (#74987)
65ed1e3526 : Add forward AD for torch.atan2
cf66d42012 : Add mapping for unsqueeze_n_times (#75043)
dfce8e2539 : Revert "masked argmin and argmax"
b9e535a64a : Add non-eager registration to dispatch autogen (#74557)
c5023ea1d5 : stft: Implement center padding in ATen
7f051b4d2b : Implement F.pad in ATen
2e8b9c7785 : [TorchArrow][AIBench] Add AIBench Metrics for TorchArrow Inference Benchmark Test (#75035)
62c6801edc : Pin cmake version to workaround pytorch build issues
cb687773fb : masked argmin and argmax
4568daf55d : [torch.package] add utility for determining where bad modules may come from (#74998)
14affba799 : Fix ir_metadata Python frames func and remove dead code (#74979)
8d69254dfd : Disallow functions that are in submodules to also be methods
c0a6add7ee : Changes to support input sequence ID tracking (#70264)
fd4ad5d72c : [FSDP] Register state_dict hooks for FlatParamsWrapper even if params_list is empty (#74860)
5177f95d21 : Introducing SymInt to Pytorch (for tracing size arithmetic) (master rebase) (#74861)
1b7d7d9327 : Reland: "free up dispatch key space (in C++)" (#74963)
3d34afa9ae : Revert "Removed python dispatch keys from dispatch key extraction"
6905feea1a : Adding versions to flatbuffer schema (#74989)
72f7193f4d : expanded weights: group norm faster rule
8d7242a18b : [PyTorch Edge] Add Quantized Softmax Op (Naive Implementation) (#75017)
8b8f3e836b : expanded weights: layer norm faster rule
c4321e1396 : [ROCm] unskip FFT tests
60729d02f1 : remove unused nn_path from generate_code (#74563)
1a1f0d4bcb : Use ONNX Runtime 1.11 in CI
a98d1a5ff4 : Revert D35000703: [WIP][FSDP] Mixed precision enablement
13a3e5c70c : Catch overflows in calculating storage byte size
40bf3cfeb7 : use the shared //tools/codegen:gen in OSS Bazel (#74471)
785972b4eb : move codegen binary to the common build system (#74470)
88096253ef : Add Hpu to the rebuild component list
6b0b088c6c : [WIP][FSDP] Mixed precision enablement (#74452)
79307fbde0 : use the //tools/codegen target in Bazel (#74465)
76eabe9ef0 : move //tools/codegen:codegen into shared build structure (#74386)
a78eb243ad : Make all `.pyi.in` files exportable from torch/_C/ folder (#74962)
fa1a41ca71 : Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)"
0b845bb645 : Revert D35258695: [quant][fx] Cleanup unused to_fp16 check code in lowering
873ced7cd0 : Nvfuser code bump 030122 (#73627)
b82df92c33 : [quant] Fix qmin/qmax when using customized qrange (#74717)
ce700da7f4 : [Deploy] Change `numModules` type to `unsigned` (#74978)
8c05f44fe2 : [quant] fix int16 quantization scale in conv weight (#74665)
c40a009d66 : Revert D35194935: Check all CUDA API calls for errors in torch/
3c701468dc : [quant][ns] Fix ns tool bug for mobilenetv2/v3 (#74149)
79e5b053b6 : Check all CUDA API calls for errors in torch/ (#74923)
43313cbde3 : Revert D34647822: [tensorexpr] Add support for aten::stack
320e5a8268 : Revert D34808051: [tensorexpr] Enabled aten::stack in the fuser pass with static shapes
ec6f767097 : [quant][fx] Cleanup unused to_fp16 check code in lowering (#74969)
90c3699cc8 : [tensorexpr] Enabled aten::stack in the fuser pass with static shapes (#74077)
317b1a0ed9 : [JIT] fix common_expression_hoisting (#74794)
2ef5611f31 : Add comments for adding shape function and linting (#73570)
f9ccf7ab80 : [skip ci] Set pytree tests to module: pytree owner (#74686)
3036a0309d : [skip ci]Revert "Add comments for adding shape function and linting"
b10d7498bc : ci: move nvidia repo disable to common.sh
3c1dd4e752 : Removed python dispatch keys from dispatch key extraction
5547741960 : Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)
eb3fa2b0f1 : [ios] Update _ios-build-test.yml
954c7e2a77 : [tensorexpr] Add support for aten::stack (#73801)
9d9a0f5027 : ci: Comment out nvidia repositories
bfac65dfe5 : [testing] Update dispatch macros (#74977)
28a4b4759a : Add models test for android and iOS
f82b2d4a82 : [PyTorchEdge] Make _load_parameters() handle flatbuffer inputs (#74580)
1659a267f9 : [PyTorchEdge] Export flatbuffers from _save_parameters() (#74579)
4c5d532728 : [DataPipe] only apply special serialization when dill is installed
be7177751e : Add binary to benchmark model load speed (#74700)
cc23725e89 : Revert "Extend CSR constructor to support batched indices and values"
2e4152b118 : Revert "[testing] Update dispatch macros"
c6102048b8 : [ci] delete unused templates
b0d7cd0111 : [BE] Fix bug in flaky test uploading
9233af181f : [Model Averaging] Add a unit test that launches hierarchical SGD by PostLocalSGDOptimizer (#74668)
2793cf85ec : Check all CUDA API calls for errors in caffe2/c10/ (#74918)
273c2f0124 : EmbeddingBagCUDA: remove oob check for perf
577eca5237 : [jiterator] kaiser_window
ef71046f9c : masked std
eed19a0f38 : [testing] Update dispatch macros
5630c5ac75 : record_function: update to use custom_class API
3eea311c0c : [Lint] Clang-format ios folder
bf091f78a6 : [AO][bugfix] Fixing FX QAT but for untraceable modules (#74277)
1e37c20844 : Pin cmake on windows
9f4e7dec9c : [FSDP] Add re-key btw param names/IDs for optim state dict (#74912)
f28e3f6837 : [FSDP] Optim state chkpt: key by param name, not ID (#74879)
522041a0fd : [FSDP] Add full optim state dict (#74215)
40bb880009 : Remove use of force_reload parameter from torchhub tests
d0387ad285 : Move torchhub tests into separate test_hub.py file
5fcc867a40 : Add torchhub devs to GHF superusers
f5c0739bf8 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
dfe6e88adc : Upgrade CI to ROCm5.0
3269729c68 : [complex32] make_tensor
9cc848c3f8 : fix typo torch.functional.py
541c296728 : [GHF] Print better merge rule match reason
0d16189287 : [quant][gpu][core] Implemented quantized add operator using cudnn [reland PR74463] (#74463)
bbf7e159e0 : Implement torch.special.log_ndtr
6d36bbde7e : Add comments for adding shape function and linting
e505f06a79 : More folders in clang-tidy (#74908)
2462f0d6d5 : [Easy][FSDP] (Reland) Doc fixes (#74834)
33b9726e6b : Revert "add model test for Android"
fc47257b30 : expanded weights: embedding faster rule
c074a53002 : Extend CSR constructor to support batched indices and values
92f01e19f3 : [ROCm] libtorch nightly builds
dc9a7f1f1f : Fix cusparse sync issue in bsrsv2 and bsrsm2 [v2]
86c817cfa0 : Requires grad guard
2ca66ffb7d : [SR] Force split_and_squeeze usage via graph transformation (#74274)
f9d0bc5338 : [PyTorch] Delete NestedTensor Python wrapper (#74691)
a235d3ff59 : [GHF] Add pytorch/pytorch-dev-infra as OSS CI approver
6a81e68e4a : [GHF] Add ability to specify team as the approver
568e02dcd7 : Support sum(sparse_csr)
ba14e70b25 : get_bazel: Add download path for Mac
c90be037b4 : Extend Graph Export to NNC, extend script to support CPU (#74076)
9c4a63787b : Add api for changing function executor settings, hook up execution with decomposition registry (#74186)
0ecf1add1b : Introduce function-local settings for executor, expose in c++ (#74012)
aacdf291e0 : [JIT] Make aot autograd decompositions usable in JIT, add script for serializing the decompositions (#73938)
a6ed689173 : Remove bailout logic (#73876)
6694fdaccd : Clean up profiling mode and profiling executor strategy (#73875)
ab57876420 : fix docs error in Autograd Mechanics
91a72e9021 : Back out D34696255 "[pyper] to + lengths_to_offsets" (#74906)
1249d490de : Add additional CUDA error handling macros (#74865)
bdf468b94d : [FX] Fix type of argument min_acc_module_size (#74891)
d92fd2dd03 : [Profiler] Limit calls to `recordThreadInfo` (#74888)
f17ad06caa : Fix docstring for torch.roll
81d994d5c1 : Combine android and ios merge rule
ddf8ffa15b : [GHF] Fix force merge handling
00f2962e06 : [fx2trt] Enable enum type with lower_precision (#74841)
9872a06d77 : Back out "free up dispatch key space (in C++)" (#74859)
a9216cde6c : Back out "DispatchKeySet perf improvements" (#74858)
7cd78ea307 : [JIT][easy] comment about where nvfuser is called in profiling_graph_executor_impl.cpp
ffd9608963 : [ci] add timeout to rocm test
87590b6259 : [ci] add an easier-to-understand error for workflow consolidation FC breaks
ca75b578e0 : [ci] unpin workflows from master
fde1d0605a : [fix] fix op name in dispatch
ff206ed09e : Add lazy tensor python bindings (#74508)
0928da10e4 : [FSDP] exclude from typing (#74833)
3cbf308a1f : [vulkan] Remove unnecessary include in vulkan_api_test (#74699)
0e2d5f145e : Cleanup C10::Scalar stringification (#73462)
91ef3c8261 : add model test for Android
51e50a2ddb : [GHF] Adding PyTorch Compilers Devs to merge_rules
c7f9da5752 : Add C++ implementation of histogramdd
71003c74f8 : Add typing for torch.return_type
e55b73d65a : Add strided layout support for to_dense
f8b0f003c0 : [ci] make wait-ssh the default behavior
8e12d2bf25 : fixes torch.jit.script lp_pool bug. (#73287)
8ed6cb42ba : [PT-D] Update dist code owners (#74840)
6675f1e697 : [ROCm] Enable topk operator for bfloat16 dtype
e832eedd29 : Composite Compliance testing for backward formulas (#74646)
80d64b365a : Test case where some inputs are Tensor Subclasses in CompositeCompiance (#74645)
c96f321804 : Move CompositeCompliance tests to their own TestCase (#74644)
edea59acf8 : [Easy][PyTorchEdge] Fix unused variable build error (#74117)
58f78ff4e0 : Revert D35124731: Automatically extern C extension modules in torch.package
0428364cbf : Add missing LTC headers, re-enble xla configuration
421f66a29f : [PyTorch] Add fused addmm path in linear for contiguous 3D input (#72728)
7bb0133b8c : Revert D35009111: [quant][gpu][core] Implemented quantized add operator using cudnn
ea44645c9a : Revert "Allow specifying tags for aten operators in native_functions.yaml"
cce831c805 : Fix misleading DataLoader docstring
2aebece625 : [Model Averaging] Remove unused variable world_size in post_localSGD_hook.py (#74803)
3f37337ed0 : [SR] Native implementation for reshape_as (#74585)
9f2344aa40 : [SR] Native implementation for select (#74568)
facdbe6d72 : [SR] Native implementation for IntImplicit (#74562)
3e307c2562 : [quant][gpu][core] Implemented quantized add operator using cudnn (#74463)
923a922b1b : Grammatically updated quantization tech doc
f1af4dbed0 : [fix] Contiguity of `torch.ravel`!
5f94eea495 : [quant][fx] Remove input_output_observed from BinaryOpQuantizeHandler (#74776)
0f27b4b7fc : Automatically extern C extension modules in torch.package (#74702)
6707d67131 : Expose GetMetaDataIfDebugging API (#74784)
550d50ed0a : [quant][fx] Remove should_insert_output_observers (#74775)
9ba553b0fa : [jiterator] sgn: complex
77495a7b29 : [jiterator] addcdiv : complex
47acdfaa1e : Revert D35045122: [Easy][FSDP] Minor doc fixes
c96ced864d : [jiterator] reduce kernel code duplication (#73908)
56e5340947 : [complex32] support complex operator
7104f39721 : [FSDP] named_buffers fix (#74517)
ea2d58a3df : [Quant][fx] Refactor lowering code (part 2) (#74619)
116d879b83 : Fix `asarray` docs + add test case.
15275eb5a9 : [Easy][FSDP] Minor doc fixes (#74214)
5667c4ea21 : Remove default parameter of ShufflerIterDataPipe (#74370)
1c5a812579 : Better type checking in disable_torch_function/dispatch
5375b2e994 : Resolve `int[]?` arguments to new OptionalIntArrayRef class
90459ba9dc : Fix full Android builds (take 2)
56e0537e4e : [ROCM] Navi21 Enablement 8: Index, Repeat and Sort kernels
214951bc6b : [FX] Make split_module preserve proper placeholder names (#74736)
a2d2610ec9 : [FX] Assert None concrete_args and improve error messages (#74662)
1c4eb3a266 : [android] improve unsupported scalar type error message for android
edf2deb81e : Add private conversion function from CSR to block CSR
1dab71ab25 : Allow specifying tags for aten operators in native_functions.yaml
79f91e6ef4 : ci: Move ssh setup to its own action
85abc328b9 : Adds dependencies on lazy codegen sources to invocation of generate_code (#74750)
a75c718d7c : [reland] Update tls logic to work better with guarded call (#73925)
d014772b9f : [Profiler] Store Input shapes, dtypes, and metadata into flat AppendOnlyList (#74241)
e8c4926e75 : [GHF] Adding James Reed to Merge Rules superusers (#74758)
ebeea9e2ea : Support masked sum on sparse COO tensors.
3b29bd00eb : Make ProcessGroupNCCL load torch_ucc.so when TORCH_UCC_LIBRARY_PATH is set (#69552)
f36ceefd71 : [GHF] Speedup default PR query
3b3bdfd51c : Revert D34808842: Reland "[pytorch][PR] Support dataclasses in TorchScript"
7fe0b6a5cd : mul(sparse_csr, sparse_csr) using mul(sparse, sparse)
cd929f403f : [ROCM] Navi21 Enablement 7: Sparse kernels
c0491c9179 : DispatchKeySet perf improvements (#72828)
2cbddc0e9b : free up dispatch key space (in C++) (#72827)
7c747c7907 : Add Sherlock to superusers
0747bdbf11 : [quant][fx] Removing more unused code (#74603)
cdcd1ac121 : [PyTorch Edge] Make contexts thread local for quantized matmul (#74676)
96c8f64459 : Remove with_traceback(None) in wrapped_call to show the root cause error
7df0d9fda4 : Call super().setUp() and super().tearDown() in torchhub tests
ca96d1d447 : Use nvidia cuda image without cudnn for cudnn 8 and up
66e07f2aef : [quant][fx] Merge is_general_tensor_shape_op into is_general_tensor_value_op in QuantizeHandler (#74601)
7235ebc5e2 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
b57cc9c752 : Reland "[pytorch][PR] Support dataclasses in TorchScript" (#74353)
c7a6be4b9c : qlinear: Remove legacy cpp_custom_type_hack support (#72680)
3466c1b690 : [PyTorch][deploy] Work around missing libdl (#74705)
eaae62fed9 : Make args work in the uru10x10_to_trt_eval script (#74707)
5079321b71 : Fix issue with prim::Print() and torch::deploy (#74513)
b347b8c191 : [quant][fx] Support some default ops in the native backend config (#74600)
797fa26f60 : [PyTorch] Only select root ops in codegen unboxing (#74663)
4d82e5bf44 : [PyTorch] Avoid registering ops into dispatcher in lightweight dispatch (#74664)
51e7a3406c : Fix formatting of scalar tensors (don't call item)
f86bb2d6e4 : Implement _pad_circular in ATen
f7317d3c51 : Jinja2 for docs/cpp build set to version 3.0
75d6cbe605 : [4/5]Testing jit module in flatbuffer in Python. (#74387)
11894db9ea : Add Python Version to Torch.Package metadata (#74610)
7f996b855c : Jinja2 version pinned to 3.0.* (#74690)
13ebcf3723 : Add support for backend to register reducer timer
5fbe8b1966 : [Model Averaging] Make HierarchicalModelAverager a subclass of averagers.ModelAverager
fc2cf3d26f : Back out "Revert D34805092: Extend _save_for_mobile and _load_for_mobile to support flatbuffer format; Default format is pickle + Change buck targets to support `only pickle` and `pickle + flatbuffer` for migration" (#74594)
d64e7634ff : [quant] Remove assert for weight since it could be non-Tensor (#74365)
f2ca4341c9 : [pyper] to + lengths_to_offsets (#73879)
5b915e844c : c10d: retry dns lookup failures (#74641)
d0adb5ff26 : Automated submodule update: FBGEMM (#74633)
2ecf743757 : [Profiler] Pay for what you use (v2) (#74484)
3f164e0395 : [reland] Process inputs and outputs in fx interpreter (#74637)
7c1f3cc89e : [quant] Populate FakeQuantize quant_min/quant_max to observer (#74581)
ff58899b5e : Pull request to run CI for #72556 (#73404)
23383b1e9f : Add meta support for [un]squeeze(), fix bug with set_()
4025ca8cf4 : Adding gpuAtomicMin and gpuAtomicMax for non-integer types
6dd6be6351 : register meta allocator
3f9115dc7a : Decorate test_pdist_large for requiring large memory (#74574)
3024bcfff5 : Add cuda_atomic_ops_test to run_tests.sh
9270bccaf6 : [Dynamic RPC] Allow newly joined ranks to communicate with existing ranks (#73373)
f76d1c022e : [Dynamic RPC] Allow for optional world_size argument in init_rpc (#73372)
09f32eba7a : [quant] Add default symmetric qat qconfig for qnnpack (#74507)
ca4ba2ee92 : Skip specifying rcond for gelsy driver in tests
a4c81b13f3 : Add forward AD support for clamp when bounds are tensors
a1e284d9c8 : Remove high priority as an owner for tests (#74555)
0524b2829a : [shard] Add ReplicatedTensor (#73529)
c9612cddb7 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a7b6b1f061 : [FSDP] Fix summon_full_params test (#74456)
2e22c660d9 : [jiterator] abs : complex
df8d9204e1 : [GHF] Add `Lint` to the list of mandatory checks
238d01ec90 : Allow torch/csrc/deploy/interpreter/Optional.hpp to be allowed into the wheel distribution (#74643)
d7a857b0c0 : [GHF] Pass message id to revert command
8a4f504ab8 : Revert "Enable XLA CI tests"
2ba496bc23 : [ONNX] Fix 1d case flatten export
f4a0da8695 : Supports super().__torch_dispatch__ with arguments list
981baadf47 : [JIT] Add autocasting to freezing pass & enable autocast pass by default (#74178)
f5a9c36d0b : [SR] Eliminate extra permute ops before `aten::sum` (#74481)
d9f2cf58ee : Create jiterator cache dirs recursively (reland) (#74592)
85d8647c7a : [jiterator] addcmul : complex
f7ee308dfb : [complex-half] support casting (by updating copy_)
d583f9c9d2 : Enable XLA CI tests
13f28df460 : disable contiguity on cross dimensional overlapped tensor
15c98700ed : Add CPU slow test job (#73748)
e4e19d5beb : nvfuser parser skip api (#74520)
a7866ada1c : Revert "disable contiguity on cross dimensional overlapped tensor"
670e4d9808 : set_dir expanding "~"
93a1068d09 : [quant][fx] Relax the constraint for input of custom module nodes (#74510)
e9776fe58c : [quant][fx] Support conv1d and its fusion variants in QAT (#74506)
d8c31a819d : [ONNX] Modify int[] listConstruct to support tensor arg
6c383dede5 : disable contiguity on cross dimensional overlapped tensor
903dd87eb7 : [GHF] Add option to fetch all comments for PR
075a633359 : [GHF] Add `--force` option
807b2e190b : Move to_sparse_csr to C++
105e58a552 : [Foreach Reduction] Use `OpMathType` tensor for intermediate results
cfc86a309f : [quant][core][gpu] Wrapped CacheKey in Conv.cpp with anonymous namespace (#74543)
c25bf596f2 : [quant][core][gpu][refactor] Refactored auxiliary functions in cudnn Conv.cpp to an utilities file (#73957)
98207aabf6 : [quant][core] Refactor qat conv implementation to use the same _ConvNd as base class (#74505)
78cb1abc97 : Class name
a15f78347b : Back out "[PyTorch Distributed] Consolidate NCCL_DESYNC_DEBUG and TORCH_DISTRIBUTED_DEBUG=INFO" (#74586)
c24bc056c8 : [mobile] add test model generation script and ios tests
f167a3f95b : Automated submodule update: FBGEMM (#74447)
02ba0fa8e8 : [pytorch profiler] enable iteration tracking for kineto (#72292)
c28df6934e : skip HPU tensor in TensorIterator (#73343)
6a664481d5 : Print reason for test skipped in CI (#74451)
79ddc72b85 : Virtualize `<type>Storage` classes (#66970)
2e2cfa2ef7 : Make bitwise_left_shift/right_shift scalar variants composite
3547f20872 : Land remaining parts of Torchscript Lazy Tensor backend (#74111)
93be0e2053 : [SR] Avoid boxing inputs in DictConstruct/ListUnpack (#74250)
c53b3ed20f : Revert D34805092: Extend _save_for_mobile and _load_for_mobile to support flatbuffer format; Default format is pickle + Change buck targets to support `only pickle` and `pickle + flatbuffer` for migration
144b7de9dd : [ONNX] Adjust `is_train` flag for onnx pass deduplicate initializers
e13229e682 : [PyTorch] Fix quantized linear_unpack ops not being registered issue (#74526)
273e22b6f1 : [PyTorch] Fix flatbuffer build error on Android (#74518)
8ccc484a8f : Prelu OpInfo: change to mostly use positional arg
879be6abd9 : Revert "Create jiterator cache dirs recursively"
3941b1ab05 : [NNC] call super().setUp() & tearDown() in test_tensorexpr.py (#74504)
284b2b7135 : Extend _save_for_mobile and _load_for_mobile to support flatbuffer format; Default format is pickle + Change buck targets to support `only pickle` and `pickle + flatbuffer` for migration (#74209)
8f55b1d87e : docs: expose at::native::unfold (#74224)
bf5e25f3a9 : Revert D34898108: Process inputs and outputs in fx interpreter
ca500f32da : [GHF] Add test for `GitHubPR.get_last_comment`
9a6630e28d : Fixes build of CUDAHooks.cpp
ea56b9d92d : Passing explicit pretrained_backbone (#74372)
1f12f26f7f : [GHF] Force dry-run to be passed as named argument
e4beaadc36 : [GHF] Refactor comment fetching logic
f65594fc9f : Process inputs and outputs in fx interpreter (#74242)
65329f4fac : Disable meta device tests.
93f7f58856 : Make lazy codegen honor per-operator-headers flag (#74450)
45da320092 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a079cbca0a : Support coalesce/to_dense on boolean sparse tensors.
37c5f11c16 : [hotfix] hotfix a bug of shard tensor
3157b1abd7 : CrossEntropyLoss triggers floating point exception
95bae3cd1b : Torchhub: automatically disambiguate ref if it's both a branch and a tag (#74418)
6fceada3f3 : Adjust binary_linux_test.sh to support reruns
6da9a79e28 : disable android workflows
08590b4159 : Cosmetic changes to torchhub tests (#74431)
e0ecdb5cba : Properly catch warning in torchhub tests (#74430)
d38dd74981 : Remove private _download_url_to_file (#74429)
8fd1043dda : Remove deprecated import_module hub function (#74428)
bcc77c470b : Cosmetic changes to torchhub tests (#74431)
b4ab7608e4 : added handling for r=0 edge case in torch.combinations(tensor, r)
4de870f604 : Enable faster cuBLAS path for torch.linalg.lstsq for batch of small matrices
6f05edd2ed : Create jiterator cache dirs recursively
56f218edb0 : [quant][fx] Remove unused method from QuantizeHandler (#74408)
3a112ebb57 : add autocast cpu doc
4917f8ce0a : Throw python_error if the call returns nullptr.
dfb2c3c134 : [GHA] Fix Android-full workflow
e5bf87963d : Revert D34584878: [pytorch][PR] Add JIT graph fuser for oneDNN Graph API (Preview4)
979f8d8ccc : Update RELEASE.md with steps to prepare before cutting RC
7ccfc114b5 : skip flaky onnx quantized test
46a88036af : Refactor error input tests in test_torch.py to OpInfos (#73981)
8a9d481bc6 : [ROCm] Update the magma commit
7dd0823011 : Add JIT graph fuser for oneDNN Graph API (Preview4) (#68111)
c7e5fb78ca : [GHA] Use `GITHUB_DIR` in path generate_ci_workflow.py
b544c006b7 : adjust conditions to enable jit decomposition pass only for GPU device (#73637)
fde282fc23 : supporting complex with requires_grad in autodiff (#74339)
06ff4f570c : [Core ML] Support enumerated input shapes (#74441)
327029b080 : torch.where variant that supports sparse tensor inputs
ba280b59c8 : Revert "[ROCm] Update the magma commit"
288de548fd : [shard] use scatter in shard_parameter API (#72160)
2d3c220c8d : Support RRefs that contain threading.Thread
956a028b55 : [ROCm] enable HIP IPC
be1eac0803 : [structured kernels] Port `amin` to structured kernels. (#73581)
14a891f38e : [ROCM] Navi21 Enablement 6: Tensor kernels
9425eba784 : [ROCm] Update the magma commit
a5187759f0 : Enable test_lazy binary test in oss CI (#74449)
11ea09effc : [CUDACachingAlloc/GPUInference] Implement garbage collection without GPU sync (#74261)
ae23ad19f8 : [quant][fx] Cleanup quantization_patterns.py (#74407)
9a407560d6 : [PyTorch] Make NestedTensorImpl::get_nested_size_tensor() const
3d6317cbe0 : [GHF] Add remaining PyTorch Distributed Devs to GHF merge rules
11c63d9323 : [ci] move test_tools into lint workflow
14bf20cd92 : [ROCm] revert cat operator performance work-around
1e08448435 : [ROCm] enable foreach fastpath
5f82566310 : skip failing jobs
aff74cd654 : [jiterator] log2 : complex
4ca56b7c2c : Workflow consolidation for GitHub actions
2f2f153526 : Inplace forward AD formula optimization
a6c402a08e : [FSDP] Override `named_parameters()` for clean names in `summon_full_params()` (#74333)
0aa3c39e5f : Extends OpInfo architecture with reference inputs, adds them for elementwise binary operators
7c2103ad5f : [jiterator] log, log10 : complex
38512d90c1 : Automated submodule update: FBGEMM (#74423)
c064ea07f6 : Automated submodule update: FBGEMM (#74409)
b86554abed : [quant][fx] Fix dynamic weighted op lowering when input is used multiple times (#74364)
dbf43d621d : [quant][fx] Only do reference module swapping for floating point fused modules (#74231)
0471da58ff : [Quant][core][refactorization] Refactored qlinear_unpack.cpp into an implementation file and higher level call registration and definition file (#73956)
ec071a0815 : [PG NCCL] catch cuda lib runtime error - driver shutting down (#74258)
d6edb8473e : [AutoAccept][Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
cfe1a41b01 : [quant] Add default symmetric qconfig for qnnpack (#74396)
e813eaee22 : [ci] inline display_ec2 script
2c51c8c817 : [quant][core][performance] Changed cudnn quantized conv2d impl to use inplace operations (#73857)
caed2a1bc2 : [quant][core][performance] Removed int_repr calls in quantized conv2d cudnn implementation (#73849)
dc1209448f : Revert D34929680: Multisect successfully blamed D34929680 for test failures (#74381)
e8023c94aa : Pin unittest-xml-reporting to freeze printing test summary logic
6ffe16662f : Change ShardedTensor torch_function to be a classmethod. (#74273)
70588ccddc : [GHF] Rebase and retry push if fails
dc0c94910f : [quant] Don't regard MatchAllNode as node matched (#74198)
0a0502edf6 : Revert D34957139: [pytorch][PR] Automated submodule update: FBGEMM
54a6942f8d : [ONNX] ONNX Exporter logging (#71342)
975c9f15bd : [quant] Rename _convert_do_not_use.py to convert.py (#74322)
a6bed4deaa : [quant][fx] Remove convert.py since it is not used now (#74276)
4b4f652f79 : [3/5] Put JIT source inside flatbuffer (#74245)
96ed3320f8 : [pkg] Improve mocked module detection by combining mocked object errors with the rest of the errors in PackageExporter (#74315)
119df90ead : use gmock 1.10 instead of 1.8 (#74150)
d8c0f2f35d : Automated submodule update: FBGEMM (#74369)
40a51edca7 : [PyTorch] Hook CUDA LayerNormKernel up for dispatch (#74259)
f7fe75c766 : [PyTorch] Call self._impl.unbind directly in _nestedtensor wrapper (#74001)
79c3bbceb2 : [PyTorch] Move NestedTensor printing to _tensor_str.py (#74000)
14dcb5a1a0 : Fix asmjit compilation with clang-13
abc868dd64 : ci: Add workflow to test collect_env.py
fafa54a940 : Update LSTM documentation
d67a265881 : Sync lazy_tensor_staging to master (#74311)
6e30d1c512 : [Vulkan] Optimize GRU operator with pre-packing (#73599)
c170d395de : utils: Only check for xnnpack if torch installed (#74342)
44a8d4d998 : Add lazy tensor unit tests, disabled (#74309)
72b1194464 : Run lazy tensor codegen in generate_code.py (#73996)
80e0d8a8fb : Move AndroidNightly to GHA
4a0f6e6c53 : report an error if num_channels is not divisible by num_groups for nn.GroupNorm
3a261f8bf1 : [Quant][core][refactorization] Refactored qconv_unpack.cpp into an implementation file and higher level call registration and definition file (Reland #73773) (#74319)
2a20a1f3ff : [Quant][core][gpu][improvement] Refactored implementation for conv2d_cudnn to use packed parameters (Reland PR#73510) (#74318)
0bfa2f8255 : Move torch::deploy tests to their own workflow job (#73676)
3c7fc945d1 : Update Release.md with release day steps
4f32bdd802 : [PyTorch Distributed] Move NCCL_DEBUG print to after NCCL init (#74287)
b3210d6d2a : [WIP][fx2trt] Replacing fp16 and int8 mode with enum type (#74338)
a705486915 : [Quant][fx] Refactor lowering code (part 1) (#74128)
a5b848aec1 : Use has_torch_function_unary instead of manual type test.
ebca80ed08 : Move test ops gradients and test ops jit to separate files
577bf04872 : Revert D34932200: [pkg] Improve mocked module detection by combining mocked object errors with the rest of the errors in PackageExporter
2775701e4b : [pkg] Improve mocked module detection by combining mocked object errors with the rest of the errors in PackageExporter (#74315)
495e69eaff : Revert "Update Release.md with release day steps"
2f24a85cd5 : Update Release.md with release day steps
d7c5acef34 : final fixes
688039859f : [PyTorch][Static Runtime] out variant for where.self (#73438)
f690a559ea : [torch::deploy] Replace c10::optional with boost implementation (#74286)
06605c6772 : [torch::deploy] Remove `c10::errors` from torch::deploy (#74283)
6294a2eb7f : [Static Runtime] Add out variant wrapper for aten::index_select (#74321)
350aded47e : [fix] `torch.amax` and `torch.amin` for empty tensors if dim arg not provided. (#73914)
0a6ac31797 : [PT-D][DDP][BE] Add unit tests for Forward and Backward Hook (#74063)
fc95e9617e : fix upload-test-artifacts step
e1d2d3f480 : fix bug in print test stats
b604314aa6 : Prepatory changes for GHA workflow consolidations.
6ecd13dfef : Add super() calls for Fx TestCases (#74216)
493bbdc4fe : Use shared CUPTI by default
6f13024c8c : add no-sudo argument to checkout-pytorch
5165efd7d3 : Add checkout-pytorch action
232faeacf8 : Revert "Move test ops gradients and test ops jit to separate files"
7906ac40dd : Advance fbgemm submodule
7cf9b942da : Move test ops gradients and test ops jit to separate files
e084e266da : [jiterator] sqrt-rsqrt : complex
ef8995ff93 : [GHA][BE] Remove unneeded shellcheck suppressions (#74308)
5de5dd2d06 : First step of refactor lower passes (#74219)
daf959c851 : [Profiler] Switch to thread local subqueues to reduce lock contention. (#74151)
11dc158129 : Remove sync in embedding (#70943)
52c1095212 : remove _lazy_init() in rebuild full params (#74263)
5e39d94908 : make sharding strategy configurable and support zero2 algorithm (#73819)
3fe1606092 : Revert D34886988: [Quant][core][gpu][improvement] Refactored implementation for conv2d_cudnn to use packed parameters (Reland PR#73510)
5853c8a97b : Revert D34886987: [Quant][core][refactorization] Refactored qconv_unpack.cpp into an implementation file and higher level call registration and definition file (Reland #73773)
bbf4a99807 : [DDP][Tests] Fix weight sharing test (#74252)
a5ea3b7fd9 : [DDP] Generalize activation checkpoint tests (#74130)
d4a4430059 : [PyTorch] Add Tensor.is_nested (#73999)
a9d9f91f31 : Revert "Update tls logic to work better with guarded call (#73925)"
102add6675 : [ATen][AMD] revert a change of USE_DIRECT_NVRTC (#74194)
f98b316f13 : Preserve codegen on fx graph in transformer (#74189)
4de9cb9a86 : Dispatch from torch.Tensor.to_sparse_coo to to_sparse
606c26d3e9 : [PyTorch] Add unit test for c10::Synchronized<T> (#74110)
977cc480a9 : [PyTorch] Use c10::Synchronized<T> in RWSafeLeftRightWrapper (#74109)
e1eb876ade : [PyTorch] Update Synchronized<T>::withLock() to return the type/value from the aceepted callable (#74108)
44ef979d7d : [PyTorch] [Model Tracer] Use c10::Synchronized<T> for build features tracer (#74107)
3674f92df9 : [PyTorch] [Model Tracer] Use c10::Synchronized<T> for custom classes tracer (#74106)
cf461a8bbd : [PyTorch] [Model Tracer] Use c10::Synchronized<T> for kernel dtype tracer (#74105)
a346a18150 : Use assertEqual consistently in test_sparse_csr.py
2e15e16f8f : Excluding ASAN and periodic jobs from slow job calculation (#74253)
0effac2b6a : Support running pipelines on main in .jenkins and tools
ce7041d257 : [Quant][core][refactorization] Refactored qconv_unpack.cpp into an implementation file and higher level call registration and definition file (Reland #73773) (#74227)
f8c1acea1e : [Quant] qconv: fix xnnpack operator caching (#74217)
9275149896 : Add operator selection ability to gen_unboxing (#74271)
d5744f4760 : Improve error message of loading saved TS module out of support window (#74228)
a2f701d4ee : [Quant][core][gpu][improvement] Refactored implementation for conv2d_cudnn to use packed parameters (Reland PR#73510) (#74220)
ac3effd150 : [PyTorch GPU Allocator] Better use of blocks with rounding of allocation sizes (#74213)
c1d070d0f0 : [ao] Fixing obs insertion through dtype propagation (#73274)
3c837a8b05 : Automated submodule update: FBGEMM (#74104)
d94f440c48 : [GHA][BE] Delete unused definitions (#74270)
ca4348f628 : [quant][fx] Allow incrementally remove the items in quantization_patterns.py (#74210)
9a0b7b4723 : [quant] Fix implementation for `output_quantized_idxs` in convert (#74140) (#74229)
e109d66f84 : Revert "Add facebook-github-bot to superuser approver list"
6bd4376c60 : [caffe2] Fix alias analysis for quantization compression ops (#74169)
ddb34e7b6a : release: Add convenience script for branch cutting
6b92abe00f : Add facebook-github-bot to superuser approver list
ef066f0832 : Revert D34856571: [pytorch][PR] Replace `get_all_` type macros with the ATen dispatch macros.
f14a0be302 : [SR] Avoid allocating rstd/mean in layer_norm (#73606)
064206df03 : Performance and memory improvements to batched torch.linalg.solve (2nd attempt) (#71756)
c5bc9e122c : ci: Migrate metrics credentials to managed IAM
f9a63d0a97 : Implement checksuites pagination
770da3012a : [ROCm] Update the handling of hipRuntimeGetVersion()
3ded7b1da3 : Replace `get_all_` type macros with the ATen dispatch macros. (#71561)
0120ff759c : fixing assert condition (#74239)
0988dc481a : [Codemod][Codemod deprecated unittest asserts] fbcode//caffe2/test (#71708)
90be8fa279 : [PyTorch] Make TensorImpl::sizes() customizable and disable it for NestedTensorImpl (#73817)
4646caede9 : Update circleci pipelines to support both master and main branches.
ded82ad7c7 : Create method to map JIT module to (source, constant) and back. (#74119)
cc5f8aea5c : Revert D34868005: [torch::deploy] remove asserts from deploy
c77857cb83 : Improve numerical stability of `torch.distributions.wishart.Wishart` (#72993)
039b62cf03 : Install ffmpeg=4.4.1 because torchvision doesn't compile on ffmpeg-5
48b165405d : [torch::deploy] remove asserts from deploy (#73456)
b0cab0f4ed : [Static Runtime] Use composite op for TE fusion (#74126)
381c0c080f : [Static Runtime] Fix a bug that `aten::full` reuses a tensor that does not match requested one (#73990)
7ab482f2df : check if owner is pytorch in jobs
3b421af85a : [Reducer] small fix (#74127)
017be3497d : [ddp] parameter verification (#74113)
3f140c5b32 : Parametrize remaining tests in TestAutogradFunctional to use logging_tensor
ff3688f07a : [BE Hackathon][DataPipe] Automatically generate datapipe.pyi via CMake (#73991)
eec994fc16 : [DataPipe] Separating DataPipes from Dataset into different files (#73396)
3399876306 : [PyTorch] Avoid schema parsing in lightweight dispatch (#74069)
8bc647b3d5 : [fix] kaiser_window : meta for window_length > 1 (#73733)
38b127326f : [Easy] Remove erroneous comment (#74195)
4bb5e6e830 : Fix `test_reduce_add_coalesced ` failure (#74027)
21a82fb519 : Make `torch.nn` importable on Python-3.7.0
1e64c8a8e3 : Revert D34846005: [quant] Fix implementation for `output_quantized_idxs` in convert
c2aba0f1b4 : [FSDP] summon offload to CPU (#73904)
a5d472010d : [FSDP] Option to summon on rank 0 only (#73903)
9bb47f000e : [Quant][fx] Reenable serialization test after convert refactor (#74204)
a7f9fb997a : [quant] Fix implementation for `output_quantized_idxs` in convert (#74140)
bb82a700f0 : Remove inaccurate confusing signal box from README.md
060f1b822a : Add onednn quant backend (#74137)
deae5950ba : Mention milestones as way of tracking issues/PRs
b68f227709 : [FX] Disable buffer tracing test due to SEV remediation
6c18a9951b : [PyTorchEdge] Start writing magic to flatbuffer output (#74084)
6a44efa888 : [FX] Fix bare generic type annotations (#74135)
11759491b9 : [Kineto] Manual Submodule Update (#73858)
72f21a25b0 : Fix Android full CI build
9e8bda0e93 : [Static Runtime] Use IValue::toListRef for aten::len to address comment on D34705231 (#74192)
2337d4e503 : Disable all buffer tracing
08f1352cf2 : [RFC] release: Formalize patch release process
0111065c8f : [ci] add a script to get the workflow job id
dff02851d1 : Update tls logic to work better with guarded call (#73925)
f42e994933 : Add names to lint jobs so we could require them pass
57c7bf7fee : [ONNX] Remove redundant warning for reshape
f810d96806 : remove redundant index check for index_select_out_cpu_dim1_ (#74093)
890b1e8f9e : [JIT] C10_EXPORT -> TORCH_API (#73818)
1b80f609b0 : [Static Runtime] Add out variant wrapper for aten::ones_like (#73945)
727e24313b : Fix math formatting, misc edit
b7307687e9 : [jiterator] exp: complex
0bc595df61 : .github: Add PyTorch Core team as superusers
39470387cf : [JIT] enable NNC cpu fusion with torch.jit.fuser("fuser1") (#74078)
97ade8c64c : Back out "fix: nn.Module allowing for expected Mixin MRO"
3349782191 : [BE] Unify CI workflow dispatch
097ae0ecf5 : [GH][BE] Fix typo
e1e932d76d : Add atalman to OSS CI reviewers
fb47cff737 : [PyTorch Edge] Use Parallelization in Internal Quantized Matmul (#73247)
95fe0a40c3 : [PyTorch Edge] Allow >2 dimensional tensors in internal Quantized Matmul (#73246)
730b65f1b0 : [PyTorch Edge] Perform QMatMul Requantization within Ruy (#73245)
c6f991e7f1 : [PyTorch Edge] Internal Optimized Quantized Matmul (#73244)
0b1f3bd158 : [Profiler] Prefer TSC to wall clock when available (#73855)
198d727d01 : Remove trailing semicolon. (#74031)
22b876782f : Revert D34803275: [Quant][core][gpu][improvement] Refactored implementation for conv2d_cudnn to use packed parameters
dab5659d74 : Revert D34641680: [Quant][core][refactorization] Refactored qconv_unpack.cpp into an implementation file and higher level call registration and definition file
a1e9a0cb41 : [Quant][core][refactorization] Refactored qconv_unpack.cpp into an implementation file and higher level call registration and definition file (#73773)
85c790ed9f : [Quant][core][gpu][improvement] Refactored implementation for conv2d_cudnn to use packed parameters (#73510)
ef9023e93d : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
247e4cca1e : [BC-Breaking] Remove redundant fsdp prefix (#73791)
74a3b9c661 : [fx][acc_tracer] fix defaulted placeholder normalization (#73406)
bbdb758423 : [BE] Add optional step to upload coredumps
1f04a00ccf : [PyTorch Distributed] Update documentation about NCCL environment variables (#74006)
56aa1ab010 : [ONNX] Remove dangling print in repeat_interleave
b5244b8470 : [JIT] add keep_unique_names arg to canonicalize python bindings (#74074)
5a897536f3 : Revert D33716039: [pytorch][PR] Add ONEDNN quantization backend
44997efd80 : [PyTorch] Move get_nested_tensor_impl to header (#73998)
34247626c3 : [PyTorch] Add stub NestedTensor_is_contiguous function (#73997)
989b24855e : Add ONEDNN quantization backend (#69820)
3e556efc29 : regenerate flatbuffer header (#73810)
5a58820f01 : [Profiler] Specialized AppendOnlyQueue (#73409)
66356130f1 : [jiterator] sigmoid : complex dtypes (#73643)
7ddf212f33 : [quant][fx] Fully align convert with the reference model design and simplify the implementation (#73863)
7070fe4d15 : Automated submodule update: FBGEMM (#74088)
8d5cdb559c : linalg_solve_triangular should not be a method
f12703cbe9 : Revert D34604068: [PyTorch] [Model Tracer] Use c10::Synchronized<T> for kernel dtype tracer
689cd22413 : Revert D34604067: [PyTorch] [Model Tracer] Use c10::Synchronized<T> for custom classes tracer
747c6fddc4 : Revert D34604066: [PyTorch] [Model Tracer] Use c10::Synchronized<T> for build features tracer
a554fc6836 : Revert D34645509: [PyTorch] Update Synchronized<T>::withLock() to return the type/value from the aceepted callable
b65adf84f3 : Revert D34645508: [PyTorch] Use c10::Synchronized<T> in RWSafeLeftRightWrapper
1ac519e6b5 : fix: nn.Module allowing for expected Mixin MRO
4adfe0647b : Revert D34800969: [PyTorch] Add unit test for c10::Synchronized<T>
acd3f3705f : Revert D34814800: [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
abfaef0aec : [Quant][core] Merged conv packed params and linear packed params (#73486)
89d6f3e609 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
9b203f667a : [PyTorch] Add unit test for c10::Synchronized<T> (#74062)
62ff23db6b : [PyTorch] Use c10::Synchronized<T> in RWSafeLeftRightWrapper (#74061)
de4a87fb48 : [PyTorch] Update Synchronized<T>::withLock() to return the type/value from the aceepted callable (#74060)
54cc1d465c : [PyTorch] [Model Tracer] Use c10::Synchronized<T> for build features tracer (#73725)
372378b18b : [PyTorch] [Model Tracer] Use c10::Synchronized<T> for custom classes tracer (#73724)
6fd6fe0a23 : [PyTorch] [Model Tracer] Use c10::Synchronized<T> for kernel dtype tracer (#73723)
9a5c3ca23f : [iOS] Fix the TestApp (#74090)
7cd8857b8a : [iOS] Update Cocoapods for 1.11 (#74089)
cfc389f496 : [vulkan] Enable Pytorch Vulkan to build in FBCode (#73872)
cb4aeff7d8 : [easy][PyTorchEdge] Add magic number to flatbuffer schema (#74048)
ee67ba0e6a : Automated submodule update: FBGEMM (#73895)
75ad6fea66 : [jit][edge] Pass through dynamic type for DictType. (#74025)
794f813522 : Fix test_binary_ufuncs.py for Python 3.10
fdd12a9f4c : Support tensor.__getitem__() in TorchScript compilation (#73952)
766eba60f7 : (torchx/elastic) honor NCCL_ASYNC_ERROR_HANDLING set from the env var (#73982)
f685dfaac1 : [JIT] call super().setUp() in test_jit_fuser_te.py (#73762)
60f22a40ef : [Static Runtime] Add out variant wrapper for aten::zeros (#73946)
616b36e437 : [PT-D][FSDP] Implement _clip_grad_norm_ for FSDP (#73405)
39605a5632 : [ao] Removing memoryless observer args for MovingAverage (#73947)
00b01ec056 : disable LT interface (#74021)
734281c3d6 : Cleanup all module references in doc (#73983)
6656c71049 : docs: code examples running successfully
35ce68bbf3 : [PyTorch] Fix `lkj_cholesky` device error (#73980)
62e032e328 : [ci] move rocm distributed jobs to master-only
238f7d9cbf : rename config module file to work with gh pages better
8b2ae86f02 : [shard] disable rocm and windows for sharding_spec test (#74040)
99db53eaa7 : Jit save/load meta tensors (#73435)
b553e73c01 : [ONNX] Enable topk export with non-int64 k
d0f9556c4a : [shard] use gather_object in gather API (#71624)
5b805a6eec : Disable TF32 in some linalg tests; Disable TF32 in svd_lowrank forward (#73614)
967606124a : port torch cov tests to error inputs (#73977)
122f8648ab : [PyTorch Distributed] Add debug hint for NCCL async system error (#73897)
420385eb60 : Adding a step to start docker if it is not running.
6eb94f16c7 : [GH1] Refuse to merge PRs with internal changes
bd13bc667e : Add run_android_tests workflow
a64ba135ad : Report the names of unsupported operators in flatbuffer_loader.cpp (#73865)
38b94f4159 : Don't do math with null pointers in SortingKernel.cpp (#73986)
4bf84170ce : update script to calculate operator coverage
78e17eaadc : expanded weights: conv faster rule (#73692)
6cf2cafe60 : Fix distributions/test_distributions.py for Python 3.10
87564a1bd7 : [Static Runtime] Add native op support for `aten::len` (#73899)
d8dbf9d8b2 : [Quant] Qlinear Add qint8 support backed by xnnpack (#73672)
bac6b8c894 : [Quant] Qconv: Add qint8 support backed by xnnpack (#73669)
b0a327d1f4 : [fx/operator_schemas] Bring back check for OpOverload (#73978)
f71eede85a : [GHF] Specify multiple mandatory checks
519e226b66 : [tensorexp] ExternalCall2 without memcpy (#72225)
bc512253d5 : [Quant][test] Added test to check if fp16 packing->unpacking yields the same result as to(torch.float16).to(torch.float32) [reland] (#73808)
b2a5507654 : Fix deadlock in some edge case in autograd (#73961)
adae0d35d2 : RNN args renaming in memonger.
4802449836 : ci: Fix cudatoolkit issue, make docker builds testable
aed71885cd : [GHF][BE] Add match_rules test
63a1e7bd09 : Revert "Check that PR base is default branch in trymerge.py"
69cdcd2da2 : [OSS] add script to generate test models for mobile (#73746)
012829eb36 : [Lazy][JIT] Do not crash when target device is unsupported by fuser (#73820)
2f90c82265 : Get rid of TorchScript sparse tensor is experimental warning. (#73874)
0239284313 : Relax dtype restrictions on torch.Tensor (#73850)
8811d217ed : [DataPipe] Slight refactoring IterDataPipe serialization test (#73922)
0821154072 : [DataPipe] Adding serialization test for all MapDataPipe (#73921)
7534525735 : Reset worker cycle iterator for determinism across runs (#73675)
3c82e422d9 : Print system info as part of EC2 info step
97ae431e3e : [ONNX] Add symbolic support for torch.nn.cosinesimilarity (#72128) (#73283)
95b1232752 : [ONNX] use onnxruntime 1.10 in CI (#69271) (#73282)
341e20a1b6 : [ONNX] Add module name as PythonOp attribute (#67193) (#73281)
9210e8f540 : [ONNX] Adds overload_name to Aten op (#69378) (#73280)
343a973349 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a267c8e710 : add script to aggregate production ops
7ebab9247d : FX graph module - prevent infinite recursion (#73866)
b9c59ddcaf : [PyTorch] Move is_nested_tensor_impl & add get_nested_tensor_impl_or_null (#73928)
ad6290f4ce : [pytorch] flatten_indices function should use vector::resize instead of reserve (#73831)
97b20b9b50 : [SR][easy] Stack/concat out variants do not segfault on empty inputs (#73704)
18ed747915 : Add test/mobile to merge_rules.json
5b011fc6eb : Fix Undefined variable in QInterpolateBenchmark
dc38326f1d : Check that PR base is default branch in trymerge.py
1c152f800b : Parametrize some TestAutogradFunctional tests to use logging_tensor (#73854)
a149a4dda5 : Clean up some tests to use common_utils.parametrize (#73853)
15df909d34 : Move autograd functional tests to separate file (#73852)
0723639b60 : Revert D34455360: Multisect successfully blamed D34455360 for test failures
2e039b653e : [Quant] Qadd: Add qint8 support backed by xnnpack (#73663)
41b86f4099 : [CircleCI] Delete MacOS binary smoke tests
ee8d7d807a : Use cub::DeviceSelect::UniqueByKey for EmbeddingBackward (#68376)
600a01a082 : Add test with multiple ops (#73888)
2ad30037f4 : [codemod][type-comments] Convert type comments in _lobpcg.py (#73088)
a482fd7b2e : [ONNX] Fix onnx gather shape inference
1fbc08c70c : Add Autocast support for Einsum (#71916)
beda4e8b2f : Fix fx tracing for OpOverload (#73940)
11231b0f93 : ci: Migrate windows conda to GHA
be535e24ec : [FSDP] Provide a utility API to allow users easily to set state_dict_type (#73716)
e47a5a64bb : Back out "Revert D34524207: [pytorch][PR] remove _s_where" (#73579)
37e0d2e361 : Fix segfault while real and imaginary attributes are set to a number (#73867)
5dfbe52a3b : [ROCM] Navi21 Enablement 5: Softmax kernels (#73545)
299dec1ca7 : [codemod][type-comments] Convert type comments in examples.py (#73085)
3b30c8da88 : Add logging for ProcessGroup backends. (#73702)
f982d6a632 : Fix nightly docker publish build
31b64fc3e6 : [JIT] log extract tool - dump NVFuser fallbacks instead of fusion groups (#73881)
56164c07c4 : Fix libtorch_cuda_linalg builds (#73896)
9012e8d65a : [ZeRO][BE] Clean up ZeRO tests (#73842)
5372dcd9bb : [FSDP] Generalize fsdp_modules() (#73553)
4e6aefaf72 : [Qunat] Refactor reference module mapping (#72755)
5993f48711 : [Model Averaging] Add a reference to hierarchical SGD (#73823)
270c27efeb : [FSDP] Add always_wrap policy (#73687)
076619d7c2 : Fix docstring hiding due to #45689 (#73747)
9ef5c679ef : record_function: add torchbind alternative API (#72301)
e99e3fa580 : [fx/graph_drawer] Add skip_node_names_in_args option, default to True (#73815)
2c3509606d : [SR] Make sigrid_transforms fusion work on graph outputs (#73091)
be4bcf8fdf : Revert "ci: Migrate windows conda to GHA"
a3cff18149 : add tools/onnx to merge rules
d6c29b1d30 : Deduplicate legacy _ctor and _new Python bindings (#73822)
de73f9a558 : Add forward AD support for logsumexp, log_softmax, softmax, nll_loss, and cross_entropy (#73741)
943085feaf : [RELAND] [cuDNN] Add a new optimized cuDNN RNN algorithm for small RNN hidden_size (#73211)
086645ad77 : Update __torch_dispatch__ to return op overload instead of the opoverload packet function (#72673)
a3d099ea18 : [JIT] make RegisterCudaFuseGraph use TORCH_API instead of C10_EXPORT (#73742)
6c26bf0e72 : [ONNX] Fix repeat interleave when repeats and dim is 1
15a799e312 : [FSDP][BE] Change assert to assertEqual (#73787)
89b16f1a2c : [Easy][FSDP] Fix warning render (#73786)
4a06b8d36c : [FSDP] Add grad accumulation without `no_sync()` (#73535)
71961d37bb : [Static Runtime] Add out variant wrapper for aten::ones (#73851)
2acf9c74f3 : ci: Migrate windows conda to GHA
07410207c4 : ci: Enable long paths for windows
7ed73b2803 : CMake option for using static MKL libraries
f3c6e8f720 : [Quant][fx] Add lowering for functional conv (#73708)
cedce3be20 : [Quant][fx] Add lowering for Linear-Bn1d in QAT mode (#73509)
9929a9fc8f : [GHA] Migrate win/linux binary-smoke workflows from CircleCI
979a78f8b2 : Sphinx panel
1d497114e7 : [Metal] Use MPSNNNeuronDescriptor for Mac Catalyst Target (#73718)
e118b1dfff : [Metal] Remove unused shader code (#73636)
ea698c148a : Revert "[GHA] Migrate win/linux binary-smoke workflows from CircleCI"
e7730eaeaa : Revert D34230284: [pytorch][PR] [WIP][Kineto] Manual Submodule Update
486bd9f306 : [GHA] Migrate win/linux binary-smoke workflows from CircleCI
c088c8fe92 : [WIP][Kineto] Manual Submodule Update (#73090)
d047e475f8 : Automated submodule update: FBGEMM (#73814)
4b8106f92c : [XNNPACK] Expose available() to xnnpack namespace (#73617)
5dbec7c07c : [Metal] Raise the minimum support version of iOS to 11.0 (#73635)
723ba4e31d : CUDA Kernels: Use per-operator headers (1/4) (#71212)
5816a95ca9 : CPU Kernel: Use per-operator headers (#71137)
6f2dad24d3 : ns for fx: add ability for fp16 model to shadow fp32 model (#73785)
bebfdca093 : Re-enable Windows debug libtorch
5167e9d59d : [quant][fix] Fix bug for ave pooling in FX quant (#73054)
aca4d02d12 : Use higher timeout for test_tensorpipe_set_default_timeout (#73771)
52ccbf4494 : Lock thread/block computation (#73800)
4a74285e97 : [ONNX] Rewrite linspace symbolic
55525632ab : Revert D34554432: Back out "Revert D34524207: [pytorch][PR] remove _s_where"
7b51629c53 : [PyTorchEdge] Add getFileFormat() so we can differentiate Zip/Pickle from Flatbuffer (#73707)
0cbb458be9 : [FSDP][BE] s/ConfigAutoWrap/_ConfigAutoWrap (#73759)
034848280e : Fix typo in error message (THCGenerateByteType.h) (#73670)
9c03c6163f : Back out "Revert D34524207: [pytorch][PR] remove _s_where" (#73579)
88b19da49a : [PyTorch] Make NestedTensor::dim() work (#73679)
a8d9fbb021 : [FX] Make `immutable_list` and `immutable_dict` work with pytrees (#73766)
5332d8705b : [FX lowering] Modify replace_all_uses_with to allowing filtering of nodes to update; use it to (#73763)
452c26bbeb : Fix `functional.max_poolNd` warning spam in the CI
8b8fac91bf : [quant][fx] Refactor _convert_fx_do_not_use (#73777)
0bb3b0652c : [Model Averaging] Support hierarchical model averaging (#73285)
bcd0843bec : [torch.distributed][DDP] Disable DDP bucketing for the first iteration (#72843)
727debb18e : dbr quant: enable reference module support for torch.qint32 (#73493)
5787a36e30 : dbr quant: insert activation obs explicitly, instead of relying on hooks (#73492)
350ecc9d1f : Automated submodule update: FBGEMM (#73277)
6067054cdd : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
8a47d9cf86 : Revert D34599476: [Quant][test] Added test to check if fp16 packing->unpacking yields the same result as to(torch.float16).to(torch.float32)
3042f0ce22 : [NS] Mark output logger impure to avoid being removed in acc tracer (#73745)
bd3db019a0 : Update fbcode symlinks for mkl-dnn ideep 2.5.2
f5c7e5406b : [quant][fx] Add lowering support for qat and fused convs (#73527)
a39e8e8f5e : [Quant][fx] Added explicit entries for for functional and module conv&linear support into get_default_qconfig_dict&get_default_qat_qconfig_dict (#73528)
72e4aab74b : Eliminate unused parameters in PyTorch (#73749)
a7d2f39bde : Eliminate unused parameters in PyTorch (#73751)
c24783fbd4 : Don't discard stacktrace when rewriting AttributeError (#73720)
0642285e70 : Add `.jenkins/caffe2/*` to ONNX merge rules
a4160223fd : [Quant][core][devs] Removed support for non-quantized tensors in reflection_pad1d_cpu (#72485)
0b79715d49 : GHF: Add clickable links for Approved By
37d1ec9dca : [Quant][test] Added test to check if fp16 packing->unpacking yields the same result as to(torch.float16).to(torch.float32) (#73685)
818bf361b6 : [SR] Fix a kwargs API default value bug (#73681)
75f5ceaae6 : [static runtime] Fix aten::relu TE lowering to enable vectorization (#73713)
cf143bdc57 : Refactor tools/actions_local_runner.py to allow custom remote/branch names
b955a046cd : [shard] fix init_from_local_shards issue with deepcopy (#73400)
22a3a17ea0 : Improve Python 3.10 compatibility
e6f8e521ca : [Core ML] Attemp to fix the OOM issue (#73750)
35cfa74f97 : Add a default implementation of __torch_dispatch__
7c0f166b26 : [ci] don't build pytorch twice on windows
4267698ad4 : Add Ed to superusers
fc3622904f : Add Gflags for fusion strategy and make it local to executor (#73668)
905efa82ff : [fix] `torch.broadcast_shapes` should not handle shapes with negative dimensions. (#72999)
9ad0578c59 : Remove istft python functional wrapper (#71993)
6bb13f77ce : [FSDP] s/local_state_dict/_local_state_dict (#73532)
5ae5f00a58 : retry step for Build and upload nightly docker
fc914346d7 : Fix alignment, make sure release labels are included (#73739)
3186e366d1 : Support `0`s in `out_size` of `FractionalMaxPoolNd`
bf896a2988 : dbr quant: add torchscript pass to remove redundant aliases (#71230)
eb8d06591c : quantization: fix bug in QuantWrapper with DeQuant qconfig (#73671)
71ed388905 : [GHA] Increase ASAN test timeout to 5h (#73740)
a37d54b6d1 : [Easy][c10d][DDP] (Reland) Minor fixes (#73569)
7cce2d9dbb : [DDP][BE] (Reland) Remove bucket replicas (#73567)
63932edcc7 : Back out "[pytorch][PR] Support dataclasses in TorchScript"
1a8bd1a7eb : (torch/elastic) add documentation clarifying that torchrun is a console script to torch.distributed.run (#73598)
b4173b80b7 : Use intrusive_ptr for LazyTensor (#73445)
dae7ed179f : [FX] Make module getattr wrapper proxy buffers (#73612)
ba205a7eae : [_shard] use absolute imports for ShardMetadata instead (#73678)
e7a786ff34 : [FSDP] Add the unittests for the _replace_by_prefix (#73530)
71aa3ab020 : Add note in RPC docs about retries. (#73601)
2ab9702955 : [quant][core] Add Embedding and EmbeddingBag reference module (#73436)
d14de3139a : [PyTorch FX] Return mapping of qualified names from split_module() (#73564)
4168c87ed3 : Support CSR to COO conversion in to_sparse(2). (#73642)
bbc59ff2bf : [Static Runtime] Introduce StaticNodeInfo to store ProcessedNode's data independent from runtime instances (#73536)
e8b10b6e34 : fix wrong indexing of class names in docs
e6afa4f771 : batch_norm_jvp: improve error message when running_{mean,var} have forward grad defined (#73655)
715c000cf1 : setup system includes for generated files on MacOS (#73591)
f087f1a3e6 : static_cast value to microsecond type (#73595)
5dcc24963b : only pass --std=c++14 to C++ compilation actions (#73593)
26463aa879 : [PyTorch] [Model Tracer] Use c10::Synchronized<T> abstraction for mutex protected data (#73408)
154f0ace72 : [RFC] Implement c10::Synchronized<T>, a basic error-avoiding synchronization wrapper for data (#73407)
74fe57079f : [SR] Add new `fb::split_and_squeeze` op (#73252)
201b51078a : Update nightly wheels to ROCm5.0 for GHA
7adc76acae : Only process stale pull requests
b27ec57331 : [JIT] script & logging for extracting IR from logs (#72889)
b7a7cdd00a : [Quant][fx] Add lowering for functional linear (#72855)
d7682d31f9 : [GH1] Append "Reviewed by:" to commit message
a4ad332a66 : Remove myself from native_functions.yaml (#73626)
484c0de670 : Minimal NestedTensor (#72881)
bbf4bc9f8e : [GHA][BE] Delete unused only_on_pr attribute
5eb3664c66 : [GHA] Enable XLA tests on PR
ba8dd48d8a : use cross-platform sed command in cmake_configure_file (#73589)
0a94f108eb : split typeid into its own test since it is its own library (#71909)
9956965369 : extract out tests for //c10/util:base (#71908)
2efee542fd : create a c10 test suite (#71907)
026c0af479 : move intrusive_ptr benchmark to shared build structure (#71413)
4bf8a9b259 : add benchmark to Bazel build (#71412)
e9dfc61938 : extract //c10 to common build system (#71411)
81437e66c1 : [quant][fx] Add RNN reference module (#73386)
61d6c43864 : Make debug_pkl smaller by only emitting unique traces. (#73368)
07df88767d : Eliminate unused parameters in PyTorch (#73625)
bea075f305 : [quant] Add support for multiple inputs in fusion pattern (#73572)
539acb29cd : [Static Runtime] Fix a broken test & Add an out variant wrapper for `mse_loss` (#73574)
dab5e2a23e : [cuDNN v8 API] cuDNN benchmark, convolution bwd / transposed convolution fwd, `bfloat16`, conv-bias-activation fusion (#60755)
e7051939fb : [pkg] improve error message for module detection on saving pass (#73106)
7b8fc74897 : [ROCM] Navi21 Enablement 4: Normalization kernels (#73543)
2f957f513e : Deletes unused line in test_autocast_rnn (#73195)
701fa16eed : only run complex autograd tests once
f275b3f9a1 : simplify run_test for distributed tests
dc81ba1f9f : parse TernaryIf as right associative, fix #68221 (#68416)
eb0d370f14 : Write explicit meta-kernels for `normal` (#70089)
dad0e0c37f : Unify checks for `normal` (#70087)
08493e03fc : Fix a typo: add a missing space (#70086)
e9c1ccee22 : Bug fix: allow std 0 in the meta definition of `normal_` (#70085)
19572950a1 : [Quant][core][gpu] Implemented support for relu in quantized conv operator using cudnn (#73337)
4e54c1cc2d : [Quant][core][gpu] Implemented support for bias in quantized conv operator in cudnn (#73035)
9ce9803abe : [PyTorch] Add codegen unboxing ability (#69881)
6396547f9e : [FSDP] Make summon_full_params a public method (#73116)
6302cdb9bc : [Reland] Add BUILD_LAZY_CUDA_LINALG option (#73447)
ebf0ca3307 : ci: Migrate aws creds to managed IAM
554169fc7b : Disable forward/backward correlation to workaround the profiler crash (#72904)
f42202d26c : 'typename Base' is checked repeatedly (#72842)
6643522db9 : Improve logging on ONNX symbolic helpers
b08309ee0a : (torch/elastic) skip logging structured error info if error_file is not set (#73477)
9e60b00316 : Remove AutoHeaders.RECURSIVE_GLOB from caffe2/ (#73227)
1c621a7bfe : Reserve the memory for vector to save cost in `gather_ranges_to_dense_op.h` (#73540)
8ac7393565 : Revert D33767740: [pytorch][PR] Sparse CSR CPU: cuSolverSP backend for `linalg.solve`
97cf4c6bfe : Use set -ex instead of many &&s
dd9517cc4a : Revert D34524207: [pytorch][PR] remove _s_where
fb2fe11ce4 : [Quant][improvement] Rename ReferenceableQuantizedModule (#72717)
f89fadd467 : change stale processing order to ascending
197764b35d : Remove cuda 11.1 references (#73514)
1cf6b34c0e : [Easy][Tests] Rename module in test (#73551)
95204c4e2b : Revert D34503882: Support CSR to COO conversion in to_sparse.
7506042e69 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
d39ad0543a : [quant][fx] Remove Fuser class in fusion implementation (#73470)
5723c03bad : [quant][core] Refactor implementations for reference module (#73385)
4eb2482568 : remove _s_where (#73468)
84f4e9c10a : Support CSR to COO conversion in to_sparse. (#73471)
be253b8ee8 : Fix overflow check in `geometry_is_contiguous` (#73162)
1ac7bb0365 : Remove native_functions.yaml dependency from CUDA distributions (#67875)
199d9a992c : Sparse CSR CPU: cuSolverSP backend for `linalg.solve` (#71399)
1b99996119 : [nnc] Make run methods in TensorExprKernel const (#73240)
2cefbb71cf : [FSDP] generic argument forward for load_local_state_dict (#73325)
6dfaa71529 : [FSDP] Improve the documentation of state_dict (#73453)
6b424de338 : [FSDP] Add state_dict() save/reload in parity test (#73366)
7dc2cfa249 : [c10][rocm] fix __assert_fail() declaration mismatch error (#73040)
6b883d9933 : Back out "[BE][DDP] enable rebuilt bucket when find_unused_parameters=True" (#73524)
a9c1c205d3 : Back out "[DDP][BE] Remove bucket replicas" (#73523)
fb6977cbd5 : Back out "[DDP][BE] Fix clang-tidy" (#73522)
2d45a3d7cf : Back out "[Easy][c10d] Minor fixes" (#73521)
52b707b537 : [dnnlowp] fix qparam handling during bias quantization (#73478)
3ec1dd9989 : [torch] set workspace size for cublas lt interface 1M (#73439)
c62de0ac15 : [Static Runtime] [Code Cleanup] Use `SROperator` for operators' function type (#73450)
5e86505693 : Move util functions to a more common place (#73519)
ad1078a21e : [quant] Enable reference path by default for CopyNodeQuantizeHandler (#73233)
31d742b645 : Disable cublasLt when batch is too large. (#73533)
8280919fe6 : [ONNX] Export bias requantize steps to ONNX
ce7910ba81 : ufunc codegen (#65851)
89b4cfb49f : Disable TF32 in some linalg functions (#73460)
78914b3082 : codegen/templates: disable clang-format check (#73057)
01bd6f4357 : pytorch: fix typo in quantization docs (#73511)
1e7a4d6bbe : [PyTorch Distributed] Fix NCCL version string (#73333)
20a037d80f : [Core] Update Exception.h (#64553)
a5dcc0c378 : Enable test_coalesce_cuda_bfloat16 (#73158)
d00de0d435 : Support dataclasses in TorchScript (#73066)
b3cfc74f0f : [ONNX] Capture annotated attributes for local function
cfd92f2d59 : [Static Runtime] Add test that runs NNC fused kernels in parallel (#73256)
ab6395fc65 : Add api for recursively analyzing function calls (#73329)
d3d74e9040 : Allow custom registration of shape functions (#73270)
c4ff49f4c7 : Enable win-arm64
a1d5b5d2b3 : [BE][GHA] Use `timeout_after` for win templates
1436507960 : fused int8 static (#73452)
6c8e516a80 : Add pickling support for WorkerInfo (#73371)
f437ca6e8e : Remove legacy tensor constructors for complex dtypes
d51103e79e : Refactor set_default_tensor_type to avoid legacy tensor types
b213041df3 : Also install c10d headers with .h extension (#73422)
fe7e1bd1ce : [Static Runtime] Add auto-generated out variant dispatchers (#72603)
94501ff91e : [GHA] Increase win test timeout to 5h (#73490)
a22d079abd : Update XNNPACK (#73467)
3c932c345b : Fix test_Sparse_to_Sparse_copy__cuda_bfloat16 failure (#73157)
540361fa53 : [FSDP] full_state_dict impl (#73324)
7aec6d78b0 : [FSDP] Generic arguments for state_dict (#73323)
09697c6df5 : [test] Disable TestXNNPACK on ROCM machines (#73476)
5613527ef9 : [quant][fx] Add lowering support for functional ops using DefaultNodeQuantizeHandler (#73120)
b196e016a6 : [fx/graph_drawer] Add args/kwargs and users (#73464)
45a042037f : [quant][fx] Add root_node_getter in backend_config_dict (#73345)
05c86c2be1 : T112685841: Use irange in PyTorch (#73378)
16cd6853e1 : Fix test_sparse_addmm_...float16 and test_sparse_matmul_...float16 test failures (#73155)
186ef8b22d : Fix test missing target (#73415)
0ba3498248 : Sparse CSR CPU: implement addmm(dense, sparse, sparse) -> dense (#73076)
4c522643e7 : Fix CUDA error when multiplying sparse hybrid tensors with zero dense dimensions (#73428)
d398d4d32c : [SR] Disable aten::where out variant (#73367)
2aade49a14 : [PyTorch Distributed] Consolidate NCCL_DESYNC_DEBUG and TORCH_DISTRIBUTED_DEBUG=INFO (#73257)
11224db67f : [PT-D][Sharding] Make Partial separate file and enable padding for reshard when size not divisible by world_size (#73392)
33ca944f29 : Don't use vector accessor methods to do pointer math; unblock platform010 (#73414)
12890abcb4 : [Easy][c10d] Minor fixes (#73318)
9a8000fbf1 : ci: Bump CUDA 11.1 -> 11.3
16554bec1b : [quant][fx][fix] Fix get_module_type for fusion (#72735)
31271284bc : Revert D33992795: Add BUILD_LAZY_CUDA_LINALG option
dc1bd9711e : [FSDP][Reland] Implement local_state_dict and load_local_state_dict (#73300)
fe820b3899 : Fix more binary search overflow issues (#73239)
ee5b8f0c64 : [quant][fx] Move MatchAllNode from match_utils.py to utils.py under quantization (#73344)
9db0e0e76e : [quant][graphmode] produce reference pattern for binary ops and then rewrite to quantized op (#72953)
b3c435be7f : Revert "Improve logging on ONNX symbolic helpers"
86b6cd2a98 : Update github workflows to support the main branch
3c45fc8e20 : Fix URL for creating github issues
590685dc6e : [DDP][BE] Fix clang-tidy (#73299)
9ac27e4e3c : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
7fcf6c5c02 : [DDP][BE] Remove bucket replicas (#73237)
285272f399 : Fix undefined variable errors (#72838)
a6c6f42c25 : [BE][DDP] enable rebuilt bucket when find_unused_parameters=True (#73276)
c6f1bbc0ac : promote torch.testing to stable (#73348)
14bcd3f681 : cleanup torch.testing namespace (#72708)
0415a64f3e : deprecate torch.testing.make_non_contiguous (#72705)
0973c5a1cc : align signature of make_tensor with other creation ops (#72702)
5f310c5e27 : Testing of masked reductions on mixed layout inputs. (#72398)
c021824128 : Clean up bisect_percentile_op (#73148)
91261feb7b : Add SoftplusTransform (#52300)
c5bf63aeca : Improve logging on ONNX symbolic helpers
881a574c47 : [ROCm] fix for break on nightly whls
fd999442f9 : Support nested templates in generated code via helper (#73271)
8bc28e9c9c : [JIT] Add more python ir utilities (#69871)
b2054d3025 : Prepare for an update to the XNNPACK submodule (#72642)
abb55c53b3 : [ONNX] Make graph name spec-compliant (#71961)
331df00785 : Do not set the `logtostderr` GLOG flag just to be on the safe side (#73360)
91ce1080c5 : Print ref on win
17d0951c2e : Fix torchbench CI issue.
6471104c11 : Generate pep503 automatically
0e7a7a5fe7 : Add documentation for c10d log levels (#73361)
e59403fe2a : Make TS recognize input arg name (#73253)
615ecac638 : [DataPipe] Adding examples for MapDataPipes with small fixes for others (#73250)
67cb0f2a03 : ci: Remove CUDA 11.1 binary builds
c3d79ac422 : Manual skip sparse tests
199d1cb9dd : [FSDP][BE] remove get_full_params() from test code (#73242)
e10cd88648 : [FSDP] summon_full_params fix (#73314)
cafd0f3304 : [jit][edge] Fix array index checking in mobile interpreter. (#73241)
62eb7d64cf : [PyTorchEdge] Extend flatbuffer to support extra files map (#72951)
f51a497f92 : [JIT] add builtin functions for (complex, Tensor) (#73286)
faacb8ab36 : [Pytorch Edge] Lean Runtime Test
3f7c17a2b9 : set SCCACHE_IGNORE_SERVER_IO_ERROR=1
9b27160478 : Try retryable steps for installing nvidia tools
30653d164d : Fix serialization and deepcopying for wrapper subclasses
8af23ea04e : [JIT] Add GRAPH_DEBUG for conv_add_relu fusion (#73138)
3bd1507ff2 : Revert D33994011: Make debug_pkl smaller by only emitting unique traces.
5772b1afbc : [Static Runtime] Avoid checks during op execution for TupleConstruct & ListConstruct (#69029)
49444bb501 : Revert D34400588: [pytorch][PR] super setUp call missing in TestSparse
7366724e07 : Introduce an environment variable to change c10 log level (#71746)
3d37f5b052 : Make debug_pkl smaller by only emitting unique traces. (#72596)
86deecd7be : Check clang++/g++ version when compiling CUDA extensions (#63230)
3ac7828195 : Update ONNX submodule to 1.11.0 (#73111)
d56d530e5c : Move Lazy Shape Inference functions to pytorch core (#72756)
fffb97f3cb : [torch] do not fold bmm into mm when tensor1 dim==3 but not contiguous (#73115)
f4611ee989 : [PyTorch] [Model Tracer] Protect writes to set/map for capturing used operators, dtypes, custom classes, and build features (#73327)
c2279d48f7 : [PyTorch] [Model Tracer] Refactor method bodies in various record function handlers into source (.cpp files) (#73321)
ebd93f69db : Enable CSR inputs for torch.sparse.mm (#73075)
ac97e953b4 : Add dynamic shape support to AOT driver & compiler (#72995)
5a7778c9a6 : Squeeze p2: hook up Squeeze to LazyView (#73067)
6c6ae0e756 : [shard] fix some imports in tests (#73309)
d6c5295ec8 : [shard] Extensible ShardingSpec (#72130)
b3e3eb9935 : [vulkan] Remove dead code from previous Vulkan backend (#73243)
82130758f0 : Add BUILD_LAZY_CUDA_LINALG option (#72306)
fc3c7fb756 : Make "server socket not listening" warning logs less noisy (#73149)
e143f98010 : Introduce debug and trace log levels in c10d (#73167)
e1db2f13ce : Refactor TORCH_DISTRIBUTED_DEBUG implementation (#73166)
3d9706c464 : Prefix c10d log messages with `[c10d]` for easier troubleshooting (#73144)
ec49373044 : [GH1] Support merging PRs with more than 100 files
e3019a963a : [GH1][BE] Move GQL mocked data to json db
4838c6dca0 : [Static Runtime] Enable all tests to run with TensorExpr fuser (#73263)
9d6639abcd : Fix `nn.Module.state_dict()` (#72780)
2321f26fa3 : Move vectorized CPU codegen to after ATen codegen (#72869)
eabb0a086b : [pytorch][nn] torch.nn.MultiheadAttention fix (#73053)
5eb5b61221 : [tensorexpr] Add typecast when src and dest buf types are different in PlacementAllocate (#71934)
555b215a90 : super setUp call missing in TestSparse (#73217)
353615897d : Remove PR/trunk distinction as it isn't accurate + fix win sharding
1dd3f950ba : Optimize grid sample 3d
6688487f3e : Fix get_author detection
75db05c3fd : Check if the iterator is valid before dereferencing it (#72405)
a7f9e610af : [Static Runtime] Adding out-variant support for quantized::linear_relu_dynamic_fp16 (#73238)
46123236db : [ONNX] Relax sequence tensor dim_param serialization
50efa3a6e8 : Skip optimizer overlap tests that have issues with NCCL async error handling (#73261)
d8efa05386 : [caffe2][c10] Fix signed/unsigned int64 mismatch (#71861)
f85309e478 : [DataPipe] Adding serialization test at different stages of reading for IterDataPipes (#73119)
38944a3c96 : [DataPipe] Enable serialization of ForkerIterDataPipe (#73118)
cd4ecce1bb : [DataPipe] Fix issue with DataPipe serialization with `dill` (#72896)
595a51b951 : [ROCm] Enable sort operator BF16 support (#72854)
1d6b156c3a : Reland fix dispatch (#73231)
987f146185 : [fx] Improve support for tuple subclasses such as NamedTuple (#73198)
715a0dc5c0 : [PyTorch/d2go] fix optim _multi_tensor (#73215)
97898e5144 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
3b1b4875f1 : [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
bd4902d81f : [ONNX] Add Squeeze/Unsqueeze dynamic dimensions support when opset >= 13 (#71158)
80291dff43 : [ONNX] Add torch.nan_to_num and torch.maximum/minimum symbolic (#72090)
40de6b80ee : [ONNX] Add infra for quantized model export and support quantized mobilenet v3 (#72215)
785ebb9d6d : [ROCM] Navi21 Enablement 3: Embedding kernels (#72809)
299b40de50 : [jiterator] stricter static_assert (#72576)
9ea6db4aca : fft: Fix invalid shape error for complex-to-real transforms (#73012)
16e2f5d291 : [quant] Add ConvTranspose reference module - Reland #73031 (#73094)
2051068233 : Change how cuda available memory is calculated in largeTensorTest decorator (#72207)
491ee70e6e : Avoid `collections` deprecation warning (#72239)
facd6f0bea : Unpin librosa and update SciPy pin (#72834)
0947521268 : Update stft tests to support latest librosa (#72833)
1ef244e003 : Fix tensor.__deepcopy__ for lazy device (#73197)
af902102e0 : Fix discrete sampler test to correctly run Chi2 test (#73251)
3d9ec11fea : Quantized LSTM/GRU: Remove legacy API support (#72522)
4267e6e55e : Fix formatting issues for onnx
cc2aad2ef2 : [ONNX] Add symbolic for torch.addcmul (#72126)
28bf2f80cf : Don't call _jit_pass_onnx_function_extraction if export_modules_as_functions is False (#69742)
cbb2df541a : Added check for unsupported dispatch key in codegen (#67961)
7a5b0efc64 : [caffe2] fix build failures in optimized builds under clang
600f4bf20c : Clean up some unused variable warnings (#73151)
e9c64168d9 : Import packaging.version in torch_version, if available (#71902)
7e919bd3c6 : add dry run option and improve test list printing
53faf78143 : expanded weights without fast rules (#70140)
7807a83f6e : Fix error handling TestSetDefaultMobileCPUAllocator
cfb6c942fe : `scatter_reduce` documentation (#73125)
e12c57a35b : [ONNX] Apply clang-format changes (#73220)
28339ddc25 : [PyTorch] Hit fused addmm path in linear() for existing MHA (#72871)
8625623e86 : Update clang-format hash
0bcf190c7a : .github: Create superuser group for GHF
9a96604800 : Revert D34318185: [pytorch][PR] Ensure that call before redispatch work well for PythonTLSSnapshot
932adf26e4 : [easy][PyTorch][CleanUp] Removing unused function def (missing function implementation) (#73019)
6d86dc5390 : dbr quant: store auto_quant_state on the top level model (#72934)
c30659ffcc : [ZeRO] (Reland) Add ctor support for multiple param groups (#72932)
1d404727c5 : Automated submodule update: FBGEMM (#73061)
04c9e52ecc : Ensure that call before redispatch work well for PythonTLSSnapshot (#73045)
849c6a526e : Extrapolated on equiv between linalg @ and solve (#71769)
99bcadced4 : improve android instrumentation test and update README
c2255c36ec : Fix binary search in bisect_percentile_op (#73146)
5dad19fef0 : Back out "[pytorch][PR] add BFloat16 sparse operators on CPU: copy, coalesce, sparse_mask, ad…"
9f541aa3ac : [Profiler] Optimize `reportMemoryUsage` (#71538)
24c91e23d3 : Fix nasty bug in bisect_percentile_op (#73147)
bf03d93496 : Revert D33919683: [FSDP] Implement local_state_dict and load_local_state_dict
2a7f9f0600 : Revert D34284271: [TLC][checkpoint] Add unit test for StatefulComponentCheckpointAgent
d50643adcd : [FSDP] Implement local_state_dict and load_local_state_dict (#72469)
f49a93ba56 : [TLC][checkpoint] Add unit test for StatefulComponentCheckpointAgent
3aecce7015 : [pytorch] use cublas lt interface for bias fusion (#72148)
237574db19 : add assert to make sure expected number of LTC roots matches what TS … (#73112)
c837caf5c5 : Adding details to kl.py (#72845)
46f9e16afe : Documenting cuda 11.5 windows issue (#73013)
d059c0821c : [Easy] Update the bytecode version comment (#73097)
52175307e2 : [vulkan] Allow benchmark binary to handle non-single tensor inputs/outputs for Vulkan models (#73109)
bdc8b3f3e8 : [vulkan] Re-route arithmetic ops to scalar versions when second arg is zero-dim (#73108)
564f99226a : [vulkan] Clamp tanh activation op input to preserve numerical stability (#73107)
c6f56599bb : add BFloat16 sparse operators on CPU: copy, coalesce, sparse_mask, ad… (#72846)
f41db99a56 : Add simple correctness check for native MHA (#72941)
b1bd2268f8 : [Vulkan] Add performance test for GRU operator (#73126)
534d5cf91d : Fix backward compat (#73127)
906d26fb9b : [codemod][type-comments] Convert type comments in api.py (#73084)
671c8a459a : [ONNX] Add pixel_unshuffle support in opset 9
0d66748948 : [jit] Add tests for JIT with dynamic shape fusion (#72201)
374de33655 : [codemod][type-comments] Convert type comments in workspace_test.py (#73086)
79a216ce57 : Move native MHA code out of PyTorch core (#72944)
1646a0033d : Use irange in PyTorch (#72836)
7fe3f334fb : Remove call into python API without GIL being held in c10d (#72928)
9c8fb2ee2d : .github: Consolidate binary checkout logic
956bafef8b : [onnx export] Add broadcast to matmul shape inference (#70534)
98f9ff9026 : [ONNX] Fix an assertion failure involving Slice (#71965)
2791725a84 : Integrate full ONNX check into ONNX export API (#71125)
2724e4c039 : [Static Runtime] Do not replace with copy variants if TE fuser is enabled (#72946)
02afdd54b9 : [Static Runtime] Handle fallback graphs that are generated as part of the TE Fuser (#72945)
87f882b056 : Move magma utils to its own header (#73058)
5843fea94d : [ONNX] Add export support for linalg norm (#66575)
32f6a1e2a2 : [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
a6517c20cf : [ONNX] Improve Expand shape inference (#69264)
08510ba5e4 : Disable test history as it's fragile
38d37436f2 : Add all 4 android ABIs to android build script by default
99427654aa : Use "large" macos for binary builds
0951cb513a : Revert D34342689: Revert D34250357: Sync lazy_tensor_staging back to master
477d1bd6cf : Revert D34313425: [quant] Add ConvTranspose reference module
7e35922a2c : Fix test tools
86a961af87 : Revert D34250357: Sync lazy_tensor_staging back to master
f5e201e4e9 : [DataPipe] Adding usage examples for IterDataPipes (#73033)
1c0df26597 : eager quant: convert mapping for fused QAT Linear-Bn1d (#72796)
e73eaffd3b : quant: add QAT fused Linear-Bn1d [1/x]: prepared module (#72431)
710f12f58e : [quant] Add ConvTranspose reference module (#73031)
51b04f27c7 : [ci] do not run distributed jobs for windows (#73064)
69389fb542 : Sync lazy_tensor_staging back to master (#72875)
056b6260f7 : [ONNX] Mergerule: add onnx pass registration file
39fb771423 : [Static Runtime] Report static op statistics from graph when input size is zero (#73032)
b0c3b36943 : Don't print so much in Display and Upload Test Stats
bac7feb76e : Remove smoke test functionality to simplify infra
4d642d0dd4 : add android and ios folder to merge rule
6d33852685 : [NNC] TensorExprKernel state should not be modified on calls to run methods (#73028)
f670179c0a : Fix doc regressions for various modules and functional forms (#73014)
af3ca50291 : Fixed docstring typo for nn.Module.get_submodule (#73018)
1522912602 : Port `mse_loss` to structured (#72294)
84680423b5 : Move implementation of CUDA error handling to Exceptions.cpp (#72958)
209a948896 : [Reland][FSDP] Implement apply() (#72925)
2cd0667928 : [ci] delete generate-test-matrix
2c916ef198 : More update on the guidance (#72818)
d4f3d07ae2 : [ci] don't compute ignored issues in generate-test-matrix
36fa50be60 : Add `tools/` to OSS CI merge rules
0942af7c4b : [ci] switch arm mac jobs to periodic, delete fulljit jobs
6448f1bcee : Revert D34283475: ci: Add documentation for github actions
d1c5f9e439 : [JIT][SR] Introduce prim::IfThenElse (#72587)
ec8d677725 : [Static Runtime] Add a script to auto-generate out variant dispatchers (#72602)
fc832d476d : gitignore tools/bazel executable (#72878)
e0e1e0b114 : Fix empty tensor handling in RReLU (#70496)
c22b8a42e6 : Automated submodule update: FBGEMM (#72805)
cee84f4051 : fix model dump for the lowered module (#72866)
540cb5fee2 : [graph_manipulation] Unpack list of outputs (#72940)
5ea74b4996 : [Static Runtime] Remove ProcessedNode::num_outputs_ (#72592)
e785c0a1ab : Enable Half/BFloat16 support for to_dense and coalesce methods. (#72397)
456d96d544 : Generate static docstrings for torch._masked functions. (#72865)
1f74e082e2 : only compare attributes for meta tensors (#72508)
b5f2574f36 : no longer coalesce sparse COO tensors before comparison (#69751)
81fbeea760 : Add docstrings to native_channel_shuffle (#72919)
e2c1533c7b : [quant][core][gpu][eager] Improved quantized conv operator in cudnn (#72770)
2f222fc88c : Mild refactor of native_functions.yaml dispatch parsing (#66109)
6a2624f7c4 : ci: Add credentials to upload_test_statistics
352eeb2ef9 : doc fix `nn.Module`: docstring should come after class variable (#72912)
bbac8c9c48 : [ONNX] List of files to consider for mergebot onnx rule (#72297)
dbac0f5cdc : Update persons of interest for ONNX (#72072)
5cf2228405 : ci: Add documentation for github actions (#72943)
3d8b6d3361 : fix: onnx PReLU unidirectional broadcasting
e6fd28fb05 : Revert D34126542: [Quant] Add ConvTranspose reference module
486572223b : Fix command example (#72847)
0b117a3956 : Revert D34245091: [pytorch][PR] Improve numerical stability of `torch.distributions.wishart.Wishart`
bb736fac33 : fix workflow lint
3fcdff0615 : Set pull_request checkout to head sha
889f3f48b2 : Revert D34178476: Update lazy_ir.py from lazy_tensor_staging
c19255840f : codegen: do not generate code for dispatch_namespaced_definitions (#69074)
c32b74cecb : [nnc][aot_compiler] Memory formats args to aot_compiler (#72873)
41ad221751 : [PyTorch] MHA: fix contiguity assumption in transform_bias_rescale_qkv (#72465)
dadbf43eff : Fix asserts in tests (#72864)
3842140fd5 : Update lazy_ir.py from lazy_tensor_staging (#72730)
ae8198121c : [PyTorch] Handle non-vectorizable parameters for native MHA CUDA rescale kernel (#72671)
ad623fdecf : [PyTorch] MHA: add test for transform_bias_rescale_qkv (#72464)
443a337e14 : Create a CI workflow for XLA tests using the XLA test image (#72496)
b02c514764 : [PT-D][Sharded Tensor] new init api for local tensor and sharding spec auto inference (#72733)
3493646f76 : [CircleCI] Re-enable nightly android builds
c1032bf0d1 : [.github] Fix typo in job name
76df91215f : [Pytorch Edge] Caffe2 Serialize files into independent target. Clean up function.cpp deps
d79aec91f7 : [easy][PTE] Reduce unnecessary ref count bumps in callstack debug (#72547)
5343cfe949 : Improve numerical stability of `torch.distributions.wishart.Wishart` (#72059)
87975d895c : [DataPipe] Improve .pyi generation (#72829)
f395a75c67 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
ccdff4c480 : Revert D34111109: [FSDP] Implement apply()
17b3ba148d : Set `BLAS_LIBRARIES` to `${MKL_LIBRARIES}` for MKL case (#72806)
59dd84cab6 : [Join][BE] Fix typo; remove obsolete method (#72886)
1f29b3130a : [FSDP] Implement apply() (#72600)
e4214929c5 : Port `amax` to structured kernel (#72124)
85d7e73a8a : [Perf] Reduce unnecessary ref count bumps (#72523)
84cb810b3f : Revert D34106940: [ZeRO] Add ctor support for multiple param groups
763ad1bf25 : (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#72899)
f8a2efc190 : Make fusion strategy api public (#72639)
1750c0177e : Move dyn fusion api to jit/api/module/ (#72638)
f67cf03526 : [Quant] Add qint32 quantization support (#72472)
7a031ec17f : [Quant] Add ConvTranspose reference module (#72473)
ad38b92f5d : [Quant][core][devs] Separated implementations for quantized & non-quantized tensors in reflection_pad2d_cpu (#72442)
ec3a5ca6d3 : [monitored barrier] Slight logging enhancement (#72754)
aeacf910b5 : [Checkpoint] Rename file (#72748)
08889b24df : [FSDP] Improved shape unflattening test (#72573)
b01d1ad171 : [FSDP] Fix summon_full_params when not sharded (#72572)
8e7fe87630 : Rename `Typed/UntypedStorage` to `_Typed/_UntypedStorage` (#72540)
8b08478115 : Fix the doc of PostLocalSGDState (#72792)
8a43aa9538 : [Kineto][Bug Fix] Avoid picking up old CUPTI headers (#72761)
277c4c9dec : Fix vjpvmap for linalg.svdvals (#72811)
67adc0cb11 : Remove xfail for trapz and trapezoid on meta device (#72677)
93a8bbbcdb : be: Remove unused docker folder (#72884)
3d377fb4a3 : [quant][fx][improvement] Add lowering support for BatchNormQuantizeHandler (#72490)
961bbe1c6a : `linalg_det_singular`: modify samples such that CUDA IMA dissapears. (#72585)
67cd98fad4 : [tensorexpr] Fix isNLC segfault (#72786)
d2c0c0b638 : [SR] Apply all graph passes to sub-blocks (#72598)
cb00d9601c : Revert D33800694: [pytorch][PR] `scatter_reduce` documentation
12a1df27c7 : `scatter_reduce` documentation (#68580)
313557a613 : Add missing import (#72840)
da07d1cda2 : Add GH1 merge rule to merge documentation changes
ca0ac3a74b : [caffe2] allow dropout to take 1.0 as dropout ratio to zero-out a layer (#72741)
a7cac05ca6 : Add new tls snapshot feature (#72832)
7db4a48d92 : Revert D33342569: (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change
41782a4542 : [quant][core][devs] Refactor the implementation for quantized batchnorm module (#72489)
03662b32d5 : Uncomment step no-op test in test_optim (#70953)
2a5aaf1c49 : Optim foreach cleanup for AdamW (#70484)
dff58d519f : Optim foreach cleanup for Rprop (#70483)
ce3094f5f6 : Optim foreach cleanup for Rmsprop (#70482)
2cb03e926f : Optim foreach cleanup for SGD (#70481)
5f9590681d : Optim foreach cleanup for Adam (#70295)
5dd0732457 : [ZeRO] Add ctor support for multiple param groups (#72578)
1b089292df : Fix test failure when compiled without LAPACK support (#70671)
32dd4a8639 : move fx_acc out of pytorch core (#72803)
f43165a75f : Remove duplicate call to objective function in strong wolfe line search in L-BFGS optimizer. (#72773)
80f23469dd : Revert D34152115: [pytorch][PR] [ROCm] Enable sort operator BF16 support
dc169d53aa : Remove native_functions.yaml dependency from DistributionNormal.cu (#67874)
23b98202b5 : Remove native_functions.yaml dependency from DistributionBernoulli.cu (#67721)
111e52c5d7 : Remove native_functions.yaml dependency from Sorting.cpp (#66980)
28388b4b43 : Remove native_functions.yaml dependency from GridSample.{cpp,cu} (#66979)
5b82e4f72b : stop sccache server after building (#72794)
b9ccbe4ff2 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
856157fcee : (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#70471)
511ec7f366 : Fix `sequence_ops_test` (#72844)
a482aeb0ce : [PyTorchEdge] backport v8 to v7 to support promoted ops as instruction (#71662)
d2d982c739 : [PyTorch] Fix MHA grain size computation (#72463)
00769060bc : [PyTorch] MHA: just use existing softmax on CPU (#72462)
ee79e4c6b2 : [PyTorch] MHA: guard epilogue TODOs w/checks & implement 1 (#72461)
49611a3329 : [PyTorch] MHA: simplify gemm_nt (#72460)
f5d078088b : [PyTorch] MHA: fix dim_per_head / V bug (#72459)
d707114a2f : [PyTorch] Fix ASAN errors in CPU MHA impl (#72458)
f2f4847e16 : [PyTorch] MHA: add debug shape checks (#72457)
6f29313e38 : [PyTorch] Use DimVector in at::matmul (#72230)
2d110d514f : Nvfuser code bump 2_1_2022 (#72127)
52c516ecb8 : [Pytorch Edge] Minor improve documentation in test_backend_with_compiler
fb4504da2f : DOC: release documentation version should be major.minor (#72706)
22ccf448e8 : Revert D34034848: free up dispatch key space (in C++)
7f560fb3e0 : Revert D34034847: DispatchKeySet perf improvements
f1a9650e4f : Revert D34214953: Add new tls snapshot feature
5bb851dcaa : Fix ConvTranspose2D dilation type annotation (#72789)
801abc0cdd : MAINT, DOC: Trivial spellings and warnings (#72745)
831cb4b94d : ci: Update macOS binary workflows with new templates (#72727)
6199b5231f : Add new tls snapshot feature (#72623)
454e2ec7bc : [test_fx_const_fold] Remove dependencies on acc_* (#72810)
8e8c15cf6e : Operator developer guidance (#72470)
584f13967b : Add wrapped Tensor autograd test (#72622)
3c33f0bdcd : Clean up LoggingTensor semantic (#72620)
78e481d07d : add optional encoding argument to fileopener (#72715)
f87f753bb9 : avoiding adding some functions to the public python API before 1.11 release (#72543)
aa44480b40 : [ROCm] Enable sort operator BF16 support (#71226)
9981aadee1 : [Quant][core][devs] Separated implementations for quantized & non-quantized tensors in index_select_cuda (#72407)
e7985e3c60 : Properly initialize `grad_weight` in `raw_cudnn_convolution_backward_weight_out` (#72157)
7d542a4f2b : Fix type annotation for `torch.backends.cudnn.allow_tf32` (#72757)
89c934f4b8 : [ROCM] Navi21 Enablement 2: Depthwise kernels (#72682)
8aa3620d73 : DispatchKeySet perf improvements (#72403)
6690256021 : free up dispatch key space (in C++) (#72402)
4f8b986e28 : Implement Tanh Gelu Approximation (#61439)
64faa043f7 : Mobile upgrader: add GELU test modules
47c6993355 : Update from_dlpack tests and documentation (#70543)
1776caf361 : Automated submodule update: FBGEMM (#72769)
c9ccb850ab : Back out "[SR] Make fused_sigrid_transforms work on graph outputs" (#72767)
5a95ffbdc5 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
c73cc92eff : [FSDP] Use unflatten_parameter in _summon_full_parameters (#72467)
81a1330b91 : Automated submodule update: FBGEMM (#72764)
4d04ef62a1 : Allow forking until a worker thread is created in autograd engine (#72689)
a877441494 : Clean up use of cpu ready queue in autograd engine (#72688)
1312524759 : Remove un-used function in autograd engine (#72687)
444191de56 : Use default value on empty llvm_code_path (#72758)
dce84b9c83 : Deprecate torch.jit.quantized (#72690)
fb7c4780f9 : Add autograd tests for addmm, addmv, mm, mv and CSR matrix input (#71949)
6b24d7e4e5 : [caffe2] Allow LpNorm to accept empty tensor (#72660)
ef92b5490b : [GH1] Ignore "COMMENTED" reviews
0c95209fe7 : [GH1] Enable cross-repo merges
340fae4363 : [Doc] Better formatting in autograd.rst (#72586)
11359e14fc : Release documentation update to include latest GHA changes (#72740)
28549b618a : [ROCm] Enable skipped ROCm unit tests (#67706)
9257de7efa : [ONNX] Minor doc update (#69501) (#69550)
7884c2bbe2 : [ONNX] Add Concat to Scalar type analysis JIT pass (#69227) (#69548)
cc792746d2 : [ONNX] De-duplicate initializers (#68202) (#69547)
ce5b155ccb : [ONNX] Link to the wiki (#68505) (#72663)
308de30abc : [ONNX] Support embedding_renorm ONNX export
1ed4653e89 : Stop writing logs to root logger (#72649)
c5f904aeb3 : Convert type comments to annotations in caffe2/torch/util (#72667)
25fba4a019 : [DOC] Add link to "double backward" from "extending pytorch" page (#72584)
757286ff35 : incorporate direct conv into pytorch split (#72685)
03afd86295 : [ONNX] Fix lstm reshape shape inference regression
ca23ca7637 : Fix for builder repo not pinned in release branch (#72719)
04c5d978b9 : [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
d635d0f86e : Refactor FX codegen into extensible Codegen object (#72566)
8b67b83c6e : [quant][fx][improvement] Add lowering support for FixedQParamsOpQuantizeHandler (#72488)
e593816554 : [Vulkan] Implement GRU operator (#72692)
328cfd50e7 : Move debug_util and python_util to torch/csrc/lazy (#72607)
50770b9e19 : Add torch.ops per overload API (#72206)
2b0f334443 : Automated submodule update: FBGEMM (#72701)
30d2e37469 : remove some spurious warnings fixing take 2 (#72542)
678c08bb55 : [PG Wrapper] Small fix (#72657)
4feef6c970 : Log static graph in constructor if it is set (#72456)
37651894f9 : [Easy] Small DDP fixes (#72455)
6942fccf60 : Skip superfluous storage allocations while constructing meta tensors (#65331)
4737ae7a16 : [tools_common] Don't remove underscores from call_module targets in get_acc_ops_name (#72664)
33b7e6ff23 : Convert type comments to annotations in torch/nn (#72662)
e4629c600f : Automated submodule update: FBGEMM (#72665)
5f2eed6109 : Don't do BC check on ops with valid upgraders (#72313)
426f50e5b2 : [FSDP] Add no_sync() context manager (#72446)
2fa34fb7b9 : Revert D34154832: [pytorch][PR] Add `multi_head_attention_forward` to functional rst docs
4dae85c385 : [ATen][rocm] fix type issue (#72676)
bafaf0d610 : Add `multi_head_attention_forward` to functional rst docs (#72675)
1855b14922 : [TensorExpr] Delete `DimArg` class. (#72390)
9123e9b3b5 : [TensorExpr] Switch from `ExprPtr` to `ExprHandle` in Compute impl. (#72389)
730fef25c7 : Convert type comments to annotations in caffe2/test/onnx/ (#72632)
eb4e6ca30c : [ROCM] Add ROCM version api within cmake (#69481)
a2e545e6c5 : pad_sequence: fix regression - support tensor (#72436)
c963dedcd3 : ci: Migrate macOS x86_64 binary builds to GHA (#71888)
a34d2849cd : Revert D33353157: add native quantization support for pixel_shuffle
6f84c5f0b9 : [SR] Generalize VarStackNodeWrapper (#71573)
b05d8b95b1 : improve CPU performance for log_softmax when dim != -1 on both float32 and bfloat16 (#72163)
75e769449d : optimize dim reduce performance on norm, argmax and argmin (#72083)
4d01789f69 : Remove fx2trt from oss CI (#72595)
c975b928ab : [SR][easy] CPU fuser uses native control flow (#72544)
2e04295790 : [tensorexpr] support for fusing autocasting ops (#72478)
dbbdabc340 : Automated submodule update: FBGEMM (#72658)
3e1eff9a0e : [DataPipe] Add docstrings for IterDataPipe and MapDataPipe, along with small doc changes for consistency (#72618)
634427d65c : Make test_multiprocessing_spawn.py compatible with pytest (#50408)
0f8a1fc9ba : add native quantization support for pixel_shuffle (#68328)
b0dd2c2ef5 : [DataPipe] Adding a note about FileLister behavior (#72619)
364055b277 : Automated submodule update: FBGEMM (#72647)
d891b730c5 : small optimizations of a view function (#72626)
b6df02bbbb : Fix tagged build detection for binary builds (#72628)
ccfafb6ee1 : Fix refcounting in access of saved for forward attribute (#72627)
91e4f7788c : Gradcheck forward AD respects requires grad but run with requires_grad=False (#72309)
eb4238fc26 : Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)
828e36c75d : [JIT] Cat shape analysis fix for -1 dim (#72616)
0672cf9ed5 : Use opmath_t in cusparseSpMM (#72559)
9a3b86f6d7 : Use cub 1.15's latest scan-by-key algorithm to replace thrust for Embedding.cu and EmbeddingBag.cu (#66580)
239531fb31 : Ensure all modules in the dict are instance instead of class (#72615)
c1a4714bf1 : Fix doc build for release branches (#72567)
c76c4912bb : [SR] Make fused_sigrid_transforms work on graph outputs (#71507)
2539b6a984 : [DistributedInference] Relax the assertion for uniqueness of blob name across external inputs and outputs (#72492)
3b1ef1fde8 : [CircleCI] Deprecate `gpu.medium` class (#72613)
e235e437ac : [jiterator] Move jitted_gpu_kernel into into its own header (#71960)
078247304a : fix minor issues for ATen/ROCm (#71925)
933e0e8991 : Opinfo test for mvlgamma: add epsilon (#72491)
bbd42c605a : [JIT] Opinfo tests for nnc fusion - retry (#72486)
7035738b50 : Change ParameterList and ParameterDict to be able to contain any kind of objects (#70499)
decc79e541 : fx quant: add workflow support for torch.matmul quantization (#72444)
979744b66b : Automated submodule update: FBGEMM (#72583)
84729cef70 : [Static Runtime] Fix a bug in aten::slice to honor optional arguments (#72530)
0972db5b7d : Optim foreach cleanup for ASGD (#70231)
5948522e9c : Optim foreach cleanup for RAdam (#70230)
3653f07c7c : Optim foreach cleanup for NAdam (#70229)
d9acfef831 : Optim foreach cleanup for Adamax (#69982)
dabfea8363 : Optim foreach cleanup for Adagrad (#69981)
8e8d170674 : Optim foreach cleanup for Adadelta (#69980)
7767f47e12 : [JIT] Remove warning in conv-add-relu fusion (#72441)
b014d4ddb9 : Add transformation using cdf of distribution. (#72495)
bf233aa049 : [quant][core][docs] Add docs for torch.quantize_per_tensor_dynamic (#72311)
cdc9b1160e : Port `linalg_cross` to structured kernels (#72413)
9d1dec4897 : Move fx2trt test out of PyTorch repo (#72570)
ac0cac7724 : [quant][fx][devs] Add lowering support for torch.cat (#72487)
3670466201 : Move fx2trt out of PyTorch core (#72499)
60aea2e4d2 : [jiterator] Make USE_JITERATOR error if jit_macros.h isn't included (#71959)
1af820018d : Automated submodule update: FBGEMM (#72483)
4b69a2373f : [quant][fx] Add lowering support for ops in GeneralTensorShapeOpQuantizeHandler (#72387)
6a1147d059 : [package] fix orderedimporter dummy package check (#72533)
ad5a5a9794 : Beta value is ignored for sparse torch.addmm with non-MKL build (#72430)
8bb1d06702 : [optim] ASGD fold state updates into functional and pass list of vars rather than states (#71335)
ccc1a01dcb : [optim] NAdam fold state updates into functional (#71334)
41aba71c0d : [SR] Make fused_gather_ranges_to_dense work on graph outputs (#71498)
6c0521b919 : [SR] Add native implementations for converted prim ops (#71474)
8d525d4760 : Correcting a minor typo: "Users should pay" instead of "Users should be pay" (#72500)
bc03c1d000 : Structured Kernels for `index_copy`, add `out` variant (#67329)
7d917a5220 : [JIT][easy] Fix off-by-one error in tupleIndex (#72447)
a88c5e6a03 : [ROCM] Navi21 Enablement 1 (#69942)
8886ed2dd5 : [DataPipe] Fixing MapDataPipe docstrings (#72476)
b4e5b4d92e : [DataPipe] Fixing IterDataPipe docstrings (#72475)
9d08318aa3 : DBR Quantization: Add support for functional conv variants (#71795)
fed0ec30c8 : add fx2trt diagnostics (and a framework) (#72374)
c4af6ba173 : Show friendly error message when forgetting `init` in `torch.cuda` (#72404)
e578a80184 : [Caffe2] Use more irange()s in loops. (#72273)
fd3ed2e4f7 : [Caffe2] Use more irange()s in loops. (#72262)
2ea5031991 : [Caffe2] Use more irange()s in loops. (#72265)
27f3ce85c9 : [Caffe2] Use more irange()s in loops. (#72264)
4cca3935d9 : [Caffe2] Use more irange()s in loops. (#72270)
c314750401 : [JIT] enable profiling optional tensors (#70532)
7176c92687 : [optim] update step in functional and pass state_steps instead of state (#71333)
5e6f296612 : Structured Kernel Precompute codegen handle fields without replacement (#71368)
8bf3179f6e : #71946 Remove Python 3.6 references (#72211)
2afed243b5 : [fx2trt] remove split.py (#71933)
d51d2bd608 : [SR] Add a flag to disable copy variants (#71102)
765908708b : [nnc] Adding a test with dynamic shapes from a model (#72198)
cedf37d933 : Back out "Revert D34043182: [pytorch][PR] Added missing antialias argument to functional.pyi.in"
bb101ec78d : Revert D33595240: [JIT] Opinfo tests for nnc fusion
58f25678bd : Revert D33780905: Opinfo test for mvlgamma: add epsilon
4d4b94b3cb : gen_backend_stubs.py: fix typo for supported_autograd (#68562)
127bf42ee7 : Revert D34043182: [pytorch][PR] Added missing antialias argument to functional.pyi.in
8cdcc1181c : Add missing entry for sampled_addmm in sparse.rst (#72312)
896703d3d7 : Do not push binaries generated by ciflow
9ab71f5ac8 : [pytorch/aten] Avoid temporary array reconstruction (#72391)
72cedba655 : Opinfo test for mvlgamma: add epsilon (#71794)
5654b68731 : Revert D34011981: [pytorch][PR] remove some spurious warnings fixing
d50211860a : Use SLEEF functions for NEON vectors on macOS ARM64 (#70354)
f0f49a1153 : [torch.package] add test case for repackaging parent module (#72367)
29c81bbff5 : Fix SVD error code handling for OpenBLAS 0.3.15+ and MKL 2022+ (again) (#72357)
bc1fb7a618 : CMake: Limit python include directories to only python libraries (#69085)
bec2ed05e8 : [BE] Move upload logic to shared template (#72426)
b74c2de46a : Set `DRY_RUN` to disabled for Win binary builds (#72425)
25f9fe22a9 : [PowerSGD] Add orthogonalization with QR factorization (#72043)
0b57bd4c66 : [JIT] Opinfo tests for nnc fusion (#70465)
8315c9b885 : Added missing antialias argument to functional.pyi.in (#72420)
1bad3c4a84 : remove some spurious warnings fixing (#72352)
ff71429906 : [nnc] Add stride args while running with allocated outputs (#72223)
224093db11 : [FSDP] Add FlatParameter to track the information of a flat parameter (#69241)
09e2fb8f6e : Make LinearPackedParams works with both torchscript and torch.package (#71656)
717d8c6224 : [BE] Fix pybind deprecation warnings (#72376)
5da6de5dc2 : Fix unused variable warnings (#72410)
805dff354e : Avoid type qualifier specified more than once (#72411)
2b702b43c5 : Fix unused variable warning (#72412)
b047963983 : [PT-D][BE] Fix DDP no_sync() test logic (#72348)
133461e5d6 : Move CUDA linalg code to its own subfolder (#72304)
d8c3ab11ae : Fix BC by adding aten::_native_multi_head_self_attention (#72429)
83b3b5fb00 : [PyTorch] Support NVTX range_start and range_end (#70030)
9f9b9c48e5 : Tensorimpl cleanup try 2 (#72336)
9d8f0c7842 : Add ZT fastpath for torch.{dot, vdot} (#71129)
4e98a4b6e3 : Update release note bot to actually ping people
a004f13567 : Pin librosa
9ac28cbfd2 : Added prod op to FX2TRT (#72284)
bebf8dd543 : Define TORCH_ASSERT_ONLY_METHOD_OPERATORS in ATen/core (#72344)
998a5adf8a : dbr quant function fusion [2/x]: use fusion for observation and inference (#71781)
d672bbd0a9 : fx quant: add fusion matching for operator.add and torch.relu (#71780)
5937c48f4e : dbr quant function fusion [1/x]: record matches for functions (#71764)
7e2a7728dd : dbr quant: record dag of non-quantizeable ops (#71551)
1580856455 : dbr quant: refactor first_call hooks to be standalone (#71324)
0ff9cf39f4 : dbr quant: rename seen_op_info to seen_q_op_info (#71312)
4eb277ac61 : [bench] Adding a cpp benchmark to compare performance of nnc with static and symbolic shapes (#72197)
237e960ec9 : [bench] Fix build issues with TensorExpr cpp benchmarks (#72196)
8559d39cf0 : Automated submodule update: FBGEMM (#72158)
1edf6f5647 : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
b9d4c0c7a9 : Add num_head param to native multihead attention to evade dim_per_head always 64 bug (#72375)
5c6b897516 : enables module modification during repackaging (#71520)
745d8a0501 : [Static Runtime] Fix build issues (#72310)
2e82c1e597 : [Static Runtime] Fix printing graphs in debug mode during fusion (#72222)
3ad971798f : [PyTorch][JIT] use a better hash table in alias analysis (#69854)
26b8a009a5 : [PyTorch Edge] Add Quantized Matmul Op (Naive Implementation) (#71783)
791e7df7d9 : Back out "free up dispatch key space (in C++)"
888e3fbcb5 : Back out "DispatchKeySet perf improvements"
57f039b41f : Fixing few bugs in torch flatbuffer (#72349)
f2f40ce870 : Add TORCH_CUDA_CU_API to CUDABlas functions (take 2) (#72340)
f5c7f81548 : [quant][fx][devs] Delete unused code (#72244)
706f2b30fc : Update test-infra branch for test stats. (#72233)
90c8623c2e : [ONNX] Resolve attribute error in CI (#72350)
b33831dcd8 : [Quant][core][improvement] Enabled slicing on per-channel quantized tensors (support is limited to a contiguous sliced tensor); added corresponding test case (#71269)
d6d6d4cc1e : [Quant][devs] Separated implementations for quantized & non-quantized tensors in empty_like (#71990)
6451e525e4 : Revert D31316086: [fx-acc] PassManager
aa4f048de9 : [fx-acc] PassManager (#67261)
b2116f5847 : Port FSDP::summon_full_params from fairscale to pytorch. (#71225)
dbabc9122f : Add nan_to_num plugin (#72144)
2b53121ddd : Reenable tests that now run fine after the IMA fix (#72288)
53acd2fad3 : Back out "Revert D33994546: [Quant][fx][improvement] Added test for quint4x2 for fx graph mode quantization (reland PR 69846)"
8f2e7bba83 : Fix binary bitwise converter for when lhs is scalar (#72101)
18cbe80f23 : DispatchKeySet perf improvements (#70364)
20b8653dfa : free up dispatch key space (in C++) (#69633)
1cec719448 : [fx][graph opts] port FoldLayerNormArithmetic from glow to FX (#69715)
99d490e911 : Document forward AD interaction with grad mode (#72216)
d832d93e69 : Revert D33992798: Add TORCH_CUDA_CU_API to CUDABlas functions
07e9f86a4b : Add TORCH_CUDA_CU_API to CUDABlas functions (#72305)
cd5ed54989 : Revert D33994546: [Quant][fx][improvement] Added test for quint4x2 for fx graph mode quantization (reland PR 69846)
1e7d20eaea : Remove forcing CUDNN_STATIC when CAFFE2_STATIC_LINK_CUDA (#72290)
a5dad85c4f : [Quant][fx][improvement] Added test for quint4x2 for fx graph mode quantization (reland PR 69846) (#72278)
ab1e88e392 : [Quant][Eager][improvement] Added 4 bit support for eager mode quantization flow (reland PR 69806) (#72277)
bfdf45cc89 : [Quant][improvement] Added 4 bit support for embedding quantized module (reland PR 69769) (#72276)
e81bfffbe1 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
6297aa114f : [DataPipe] Extend FileLister to support load multiple directories (#72260)
03260f85ff : Update torch flatbuffer usage to OSS version (#71957)
3f6643e661 : [FX] Fix default argument handling for Interpreter (#72272)
dbf09bc088 : Sparse: Use per-operator headers (#71115)
e90f5586d6 : Add support for include-what-you-use (#71114)
36385bee78 : Remove CUDA Foreach... files dependency on function operators (#68462)
3c266620a4 : [quant][gpu] Adding quantized conv operator in cudnn (#70622)
aa04c8edc7 : Combine install miniconda routes in Windows GHA
8c505bbc86 : Make ShardedTensor ctor more inline with torch.Tensor ctor (#72164)
defde3bb04 : [NNC] Use index for stride mapping in kernel.cpp (#72266)
9d4d782e42 : remove alwayslink/link_whole from //c10 library (#70997)
0ca0e02685 : Bump torch version to 1.12 (#72221)
e970160c19 : remove //c10:headers dependency from //c10 (#70996)
4fc6ab5e81 : [DataPipe] Fix OOM when traverse IterDataPipe due to pickling (#72209)
90458004cb : move //c10/cuda/test to shared build structure (#71429)
2d4513e4c4 : [PyTorch] MHA: remove TODO about transposing second arg to linear() (#72229)
286f5a51f9 : move //c10:tests target to the shared //c10/test package (#70928)
c965b47995 : add ZeroTensor specialization to div.Tensor (#71862)
69c9bc9d16 : [PTE] Adding `prim::to_list` to be emitted (#72238)
6d9c0073a8 : create //c10/cuda library (#70863)
38f696c0cd : [nnc] Add a API to unroll loops by a given factor (#72071)
8f4f0ba7e5 : use random number in shared file name (#72232)
38ebb776a4 : Fail with unexpected success for fatal errors (#72016)
7c182f785c : [Quant][devs] Separated implementations for quantized & non-quantized tensors for the unsqueeze function (#71648)
85591dc85d : Test 0->0 correspondence for Unary Ops with Sparse CSR inputs (#70302)
f03734d337 : [Quant][devs] Separated implementations for quantized & non-quantized tensors for the squeeze function (#71639)
bafe440d14 : [Quant][devs] Added early return statement in squeeze_qtensor when input tensor has zero dimensions or the input dim is not size 1 (#71876)
35e25258ca : Fix lint (#72261)
bae40bc764 : [Quant][devs] Separated implementations for quantized & non-quantized tensors in index_select_cpu_ (#71900)
b62827b81a : [Quant][devs] Separated implementations for quantized & non-quantized tensors in fill_ (#71939)
5b7c72101c : [Quant][devs] Removed check for is_quantized in dequantize_cpu_or_cuda (#71958)
6474195ec8 : [Quant][devs] Changed empty_quantized call for quantized tensor to resize_output (#71899)
2291981eea : Fix backwardcompat definitions after #72200 (#72255)
b26801276f : Fix quicklints (#72256)
0bb3158eae : [SR] Implement prim::CreateObject (#71854)
cff5e22a72 : [SR] Relax aten::__is__ constraint for SR enablement (#71807)
7cdbbfaee2 : Revert D33716716: [pytorch][PR] Added remove_duplicate parameter to `nn.Module`
88547396eb : [PT-D] Enable megatron-lm style MLP layers (Changes mainly on sharded linear op) (#69735)
19d0de8a57 : [PT-D][RFC] Resharding related API implement for ShardedTensor and Partial Tensor (#70079)
541773d268 : Make native MHA private for release 1.11 (#72200)
2a391284fc : Revert D33851316: ci: Migrate macOS x86_64 binary builds to GHA
14538fa7bf : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
bf09ece782 : Make svd / svdvals fully functorch compatible (#72181)
e03c3dd150 : Add leaf module code example (#72100)
c83eaf5c26 : add the default destructor of TensorImpl (#72190)
c585d35463 : CUDACachingAllocator: Keep one event queue per stream (#71745)
d23231fd8c : Fix upgrader codegen when constant list is 0 (#72199)
4a7e07e53e : Fix torch.save and detach for CSR Tensor (#71963)
c2e63b43ce : ci: Migrate macOS x86_64 binary builds to GHA (#71888)
4c62ffa11e : Improve fx2trt benchmark (#72145)
1ad53b51d0 : Fix unused variable warning in LostCTC.cu (#72155)
774e0847c9 : Add hook for functorch to error out with unoverridable autograd operations (#72176)
f607af126e : Set correct device id on efficientzerotensors (#71611)
889a62ddb2 : Update approvers for ONNX (#71659)
2d5296b0e7 : [SR] Implement prim::Loop (#69838)
2aa699505d : [SR] Implement prim::If (#69837)
d2599701fd : [SR] Force sub-blocks to return at least one output (#69836)
238dded10f : [SR] Graph pass to create owned refs of special IValues (#69835)
b0518b2705 : Codegen: Do less work in dry-runs for sharded files (#69805)
cb0d7f0d96 : [Profiler] Defer KinetoEvent and GenericTraceActivity creation to post processing. (#71539)
e19f2e52ad : enable GHA workflow defaults for ROCm (#72142)
31b348411a : fix typos in aten/src/ATen/native/mkldnn (#71853)
02f6226bff : [fix] Dropout2d-3d no-batch-dim (#69885)
39483b5918 : Updated CONTRIBUTING.md (#72044)
a1383a9cfa : Reland torch.ops API change machinery with the core functionality disabled (#71785)
1fdbe9aa76 : Make `asarray` behavior consistent with Python Array API. (#71757)
2336571cb7 : make fsdp folder to be public (#72084)
ed435e903f : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
8757e21c6a : Update logspace and bump the version number to 9 (#72051)
230186f9f7 : Jiterator cache for Windows (#72048)
64670e414e : [reland] Create torch.distributed._shard package. (#72141)
7b014cc645 : [DataPipe] Disable Typing for DataPipe before branch cut (#72123)
aa99df5cc3 : Check for grad mode enabled in dynamic shape fusion check (#72161)
0c49800c1c : change default grain size in gather scatter kernel to improve CPU performance (#72082)
7c2eda3829 : Fix fx docs (#72108)
f99147dec0 : Targeted documentation updates in autograd.functional (#72111)
a60e2ae037 : [TensorExpr] Move AOT compilation logic from aot_compiler.cpp to NNC's to_backend (#70375)
64668e61b8 : [TensorExpr] AOT Compiler: support symbolic shape arguments. (#70374)
a7cebda955 : Revert D33892443: use irange for caffe2/aten directory
a34c17bfaa : [Pytorch Edge] Fix Custom Class Parser (#72153)
aa5dab02b2 : [fix] EmbeddingBag segfault for out-of-bounds idx (#71904)
67a275c293 : Fix persistent worker exits before pin_memory thread (#71579)
10cc66bc78 : use irange for caffe2/aten directory (#72067)
e39bf13316 : Fix internal assert custom function when input does not require grad (#72008)
26f88eb0e6 : Revert D32053748: [pytorch] use cublas lt interface for bias fusion
b9f9d78a8c : Update flatbuffers to v2.0.5 (#72132)
3dce68fdf4 : [SR] Eliminate op_name_ in ProcessedNode (#71986)
b1897d6d99 : fixing stride order for expanded tensor (#71665)
b8a4ee5e35 : Clean up old warnings in F.interpolate (#72093)
29d9100277 : Process commit update2
d0f397ae61 : Avoid unnecessary copy of padding/dilation vectors in check_shape_forward (#72019)
9e8334e3ae : [tensorexpr][quant] Enable tensorexpr for quant,dequant (#71243)
34e4418dfa : [nnc] tensorexpr for quantized/aten::upsample_nearest2d (#71236)
e118d6e59f : Add lowering path for LinearReLU module (#71427)
be7ee92669 : Update process_commit.py
e0a0f37a11 : Add docs for fusion strategy (#72036)
a55ef69e68 : update default fusion strategy (#72038)
b44f724aef : [nnc] Update cuda codegen to use llvm for thread and block extent computations (#72040)
27a4d39756 : NNC Dynamic Channels last fixes (#72032)
59a6375639 : [NNC] Add Tests for Dynamic Shape Fusion Change default fusion strategy (#71651)
f1499d6c18 : Refactor PE so fusion specializations are configurable (#71650)
cf1833df70 : [WIP] add explicit dynamic fusion arg (#71173)
e305248a33 : Add logspace test modules (#72052)
7e8217549f : Added remove_duplicate parameter to `nn.Module` (#39)
4567d5ded4 : Upgrade oneDNN to v2.5.2 (#71546)
c61be5fb22 : Add split_with_sizes converter (#71953)
b28e696516 : Update linspace and bump version number to 8 (#71486)
5024c1bc7b : Make `get_file_pathnames_from_root` output order deterministic (#70435)
2c3ecb435e : Automated submodule update: FBGEMM (#72116)
702d375df5 : [pytorch] use cublas lt interface for bias fusion (#71200)
1a30954f44 : CUDA TopK Optimization: use multiple block per slice (#71081)
4aade95029 : [PyTorch] Rework stat collection in CUDACachingAllocator (#71669)
ca2ff12ea3 : [PyTorch] Remove call_once from CUDACachingAllocator (#71668)
da0423aa0b : [PyTorch] Use a better hash table in CUDACachingAllocator (#71667)
4b789df68b : [SR] Add BlockRunner and handle sub-blocks (#69834)
7bb614fc71 : Simplify TensorImpl size check and fix error message (#72070)
ba8d5f6f75 : [JIT] FuseLinear pass now handles CallFunction("linear", ...) (#61646)
e8d226cd9a : Remove some unnecessary python functional wrappers (#61608)
7ea96a7293 : [quant][fx] Don't assume bias is a keyword-argument (#71426)
a5e27c45dc : Use new_empty in dropout (#72078)
44e2b8da28 : Automated submodule update: FBGEMM (#72068)
58dabebcd7 : improve quantized error checking for structured kernels (#71928)
f20fa66f70 : Revert "[fix] max_pool1d: composite compliance (#70900)" (#71992)
1cc824ef59 : Fix old GCC ABI check in CMake package config (#72081)
c93d6f90c9 : Revert #62143, the new CUDNN_RNN_ALGO_PERSIST_STATIC_SMALL_H algorithm (#72089)
bb456d2bf7 : Split cuda: list cpp files that go in _cu library explicitly (#69082)
e784808bc6 : DOC: create 1.12 docs from a tag like v1.12.2rc1 (#71985)
a319bce58d : Make sure we set GITHUB token in the header for pr-label GHA (#72085)
cf70466970 : [ONNX] Improve scope inference in function extraction
a83cf17807 : Composite compliance for gather_backward (#71766)
25f5f2cd06 : Composite compliance for index_put (#71765)
1f2751cdea : Composite compliance for index_copy, index_fill, masked_scatter, masked_fill (#71751)
c62b515691 : Make diag_embed a primitive w.r.t. autograd (#71750)
184b78c4c1 : [acc_ops] Move slice_tensor to consider single dim at a time (#5906)
082ff25f37 : [reland][bc-breaking][quant][be] Refactor fuser_method to include `is_qat` argument" (#71956)
847dbb8684 : CMake: Clean up unused definitions (#69216)
d693739248 : CMake: Clean up unused definitions (#69216)
d46256bd7c : [skip ci] Remove unused outdated .circleci bazel_definitions file (#71943)
a1b4410964 : Add owners to custom test infra (#72080)
871e240e63 : Improved error message for interpolation (#72066)
6714d039a1 : [bug fix] for add_activation layer, mobilenetv2 is fixed (#71979)
689c218c36 : [caffe2] Rename c10d::detail::vformat to resolve conflict with fmt (#72039)
dbd090d610 : .github: Change binary build workflow trigger (#71890)
34494e6252 : Back out "Create torch.distributed.shard package." (#72062)
bb6b501aa0 : Back out "[pytorch][PR] Fix SVD error code handling for OpenBLAS 0.3.15+ and MKL 2022+" (#72063)
95d71ed212 : Run the pr-label check on PR closed action and validate closed_by (#71917)
74c44ba9d6 : Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
5045c18bd1 : Error if pocketfft is not found (#67909)
23d03025dc : Implement Tanh Gelu Approximation (#61439)
ca61292465 : Add append method for nn.Sequential (#71326)
72c972e1e1 : Fix bug in linspace model generation (#72027)
1e4aefaa2f : Revert D33834916: Set correct device id on efficientzerotensors
6208c2800e : torch/monitor: merge Interval and FixedCount stats (#72009)
a18cfb790d : Set correct device id on efficientzerotensors (#71611)
784bd92340 : Use upgrader_mobile.cpp as the reference for codegen unittest (#71930)
af65634d1c : Move generated keyword out of gen_mobile_upgraders.py (#71938)
815532d40c : Unsqueeze ops to reduce the number of reshapes we use in LTC (#72011)
7a69752c27 : Make upgrader test model generation more robust (#72030)
87bbcf70f7 : Create torch.distributed.shard package. (#71742)
db370b7a1e : [warnings][caffe2] Fix broken asserts (never trigger) (#72014)
8ca7484ce7 : [FIX] Enable TORCH_CHECK again (#71971)
d68c314b13 : [warnings][caffe2] Fix asserts yielding -Wstring-conversion warnings (#72013)
2017b404ec : Fix SVD error code handling for OpenBLAS 0.3.15+ and MKL 2022+ (#68812)
bc9d1e709a : [EASY] Adding virtual to the isUnionType op (#69554)
8fa5cde3a9 : Fix hooks (#71970)
3e1e02595a : Avoid unnecessary copy of ExecutionPlan in operator() (#71982)
2821574eea : [caffe2] Fix compilation with fmt 8.x (#71966)
726cc39242 : Rename inplace variant of freeze_module (#71437)
4cd7819854 : [caffe2][torch] Remove unreferenced local variable e (#71856)
57a9b499dc : torch/monitor: update pyi definitions (#71950)
65d3adc65d : Add linspace test modules (#71850)
bc0e216d1f : [jit][edge] Print correct type strings in code file for mobile models. (#71968)
63429bf4b3 : Removed JIT FC tweaks for interpolation options (#71937)
09e54ffec3 : .github: Ensure we're using correct build matrix (#72010)
b2b63209e1 : simplify code in get buffers and parameters (#70399)
3d2d466fc0 : [Quant] Fixed errors in test_embedding introduced by https://github.com/pytorch/pytorch/pull/69768 (#71387)
99bc978b78 : [JIT] Propagate requires_grad to autodiff subgraphs (#71666)
765669e1b9 : Update docs for torch.real to indicate that it's supported for real tensors (#71962)
cb823d9f07 : Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
0c3bc426a8 : LTC move squeeze to master (#71677)
c5df294940 : Fix bug in upgrader generation in mobile (#71578)
f499ab9cef : Implement Tanh Gelu Approximation (#61439)
e58d5b718a : Remove code for using our own build cudnn image, use nvidia image (#71952)
de44a50f14 : index_backward: use out-of-place index_put if any input is subclass (#71779)
5735f2f875 : Make detach redispatch like a regular PyTorch operator (#71707)
fa38e93fe9 : Add lightweight reparametrization for `_stateless` calls (#68969)
9413c0cd3e : Revert D32626563: [pytorch][PR] Fix SVD error code handling for OpenBLAS 0.3.15+ and MKL 2022+
e88c999da3 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
be2dc8f294 : Sparse CSR CUDA: Add torch.baddbmm and torch.bmm (#68711)
e06eb286da : Fix SVD error code handling for OpenBLAS 0.3.15+ and MKL 2022+ (#68812)
60997be85c : Replace LOG by LOG_EVERY_N to avoid log spamming (#71755)
fb0e27d38a : Add mechanism for functorch to error out on autograd.Function (#71866)
5bd19ba846 : Expect test_fn_fwgrad_bwgrad to fail because forward AD is not implemented (#71944)
746e702104 : Revert D33827835: Try setting pull_request checkout to head ref
0c2b1b8bcf : Update docs for forward AD and make them public (#71643)
e849c8b0f2 : Move bytecode generation to python (#71681)
7bc5962329 : Trace asserts with fx by looking at byte code (#70960)
1aa2257cac : Error message update: use proper name of custom c++ classes (#71922)
7d613ab1d6 : Fix indentation typo in test_fx_experimental.py (#71885)
8548657ddb : TransformedDistribution.icdf: Fix erroneous icdf ValueError (#71393)
d0ff1f0013 : [FSDP] Backward prefetch in recursive call (#71804)
a30b0cf52a : [FSDP] Add/refactor unit test for wrap (#71803)
99a9929254 : [Easy] Format DDP error (#71802)
f115a42362 : Revert D33805315: [pytorch][PR] Automated submodule update: FBGEMM
51ae9ccba4 : Fix forward AD for cudnn batch norm (#71901)
3b9f2e2cca : [GHF] More verbose failures messages (#71941)
c85965600c : Fix bug where frozen mod not used for OFI #68903 (#71436)
b486797864 : [jit][edge] Make flatbuffer_serailzer print correct type strings. (#71935)
1407939f69 : Remove unnecessary non_contiguous and gradient tests from test_linalg (#68188)
6cb128c8dd : Generalize noncontiguous tests to several outputs (#67996)
171cf153d2 : Make repeat_interleave respect the conj and neg bits. (#68523)
a675770adc : Deactivate the tracking of gradients in sampling functions within OpInfos (#68522)
e2011b29aa : Add OpInfo test to check that floating point inputs in OpInfos have requires_grad set to True (#69909)
dcc6aed52c : Implement derivatives for torch.remainder and torch.fmod wrt the second argument and update the docs (#69908)
b62780fc4f : [warnings] Disable broken TORCH_CHECK (#71947)
75cc2184e1 : Automated submodule update: FBGEMM (#65595)
81d1ce05fd : Add complex support for Jiterator, port sinc to Jiterator (#71577)
8551989bff : [c10d] Enable gather_object on nccl (#71623)
e27271d05a : Try setting pull_request checkout to head ref (#71734)
e755a4f124 : Update the operator version check logic when generating models for testing upgraders (#71894)
0cae3c0481 : Improved error messages for `max_unpool{}d` operators (#67328)
eeda31fa08 : Added antialias flag to interpolate (CUDA, bilinear and bicubic) (#70930)
567c2bb8e9 : Support printing inplace operators in FX (#71887)
5a14eca191 : Revert D33820822: [pytorch][PR] Run the pr-label check on PR closed action and validate closed_by
6feba4bc7e : Implement scatter primitive for ProcessGroupNCCL (#70029)
9b53d3194c : Implement gather primitive for ProcessGroupNCCL (#66745)
0a8b391936 : ci: Enable tests for iOS on GHA
e020414cb2 : Run the pr-label check on PR closed action and validate closed_by (#71917)
8ff1a8fdca : Implement forward AD for linalg.svd and improve svd_backward (#70253)
84f1685397 : Rewrite svd and linalg.svd as structured kernels (#69827)
2a8b91548e : Add `scripts` to OSS merge rules
a49f2412e4 : [SR] Add static runtime scopes to record function (#70944)
09c417ae65 : Add new reduce options and autograd support for scatter_reduce (#71788)
a432b9a7c6 : Clean repo after checkout
fdec94504f : Rename _scatter_reduce to scatter_reduce and make it unstructured (#71787)
21d307cd22 : CUDNN changes for cuda 11.5 (#71869)
b66f1bc80f : fx quant: make forked subgraph rewriter preserve stack trace (#71858)
a6d9dd9370 : [c10d] Use the term "errno" instead of "generic error" in logs and error messages (#71865)
7aa4a1f63e : torch/monitor: TensorboardEventHandler (#71658)
d4d0ab71b3 : use `torch.testing.assert_equal` in `TestCase.assertEqual` (#67796)
de58a27769 : define //c10/core:CPUAllocator target (#70862)
41690d7804 : define //c10/mobile targets (#70861)
844a4b47df : extract out //c10/core:alloc_cpu (#70859)
fc6a488e9a : extract out //c10/core:alignment (#70858)
4523a73288 : Fix usages of TORCH_CHECK/_INTERNAL_ASSERT without condition (#71879)
56511f859a : Revert D33178977: [bc-breaking][quant][be] Refactor fuser_method to include `is_qat` argument
bf69a61293 : (1/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: backend change
027c0d7f8e : fixed compilations on xla tensor print (#71147)
76a2c22341 : [c10d] Improve the "not yet listening" warning message of `socket` (#71864)
0099796978 : [CUDA Pinned Memory] [Retry] Alternative implementation of pinned memory allocator focusing on multi-threaded scalability (#69299)
ebeeee7b2b : [warnings][caffe2] Fix -Wstring-conversion warnings (#71874)
cf3ef23713 : Propagate full autocast state to CheckpointFunction's forward-inside-backward (#71169)
804f13289e : [ONNX] Update opset_version restriction for local function
ef501e8fed : [bc-breaking][quant][be] Refactor fuser_method to include `is_qat` argument (#70009)
b066931106 : fix usage of rel_tol for test adadelta (#71880)
c224f82ed3 : Revert "Disable XLA config"
bbe6144b45 : Revert "Fix lint"
7beb030e11 : .github: Exclude rocm from ciflow/all,ciflow/trunk
5bd33247ec : [GHF] Add revert workflow
5ee629e50d : .github: Enable windows binary builds (#71484)
9f4bdf7811 : Refactor flatbuffer loader to allow overriding how IValues are parsed. (#71661)
666ff0ae22 : Update _create_c10d_store to check port value (#71863)
d7e5870b9e : Fixes pr-labels workflow trigger (#71871)
d73dc9b7d1 : [GHF] Small cleanups
bdcdf94bdd : [Opt Overlap] Clean up code in _OptimizerHookState (#71620)
1c8fcc44cb : [Opt Overlap] Support optimizing partial set of parameters (#71608)
c44d0ac181 : Implement labelling for release notes and topics check (#71726)
46817895bd : [Profiler] Split observer implementations based on ProfilerState (#71135)
d3bbb281f3 : [numpy] add decimals argument to round (#66195)
7e6312a5df : [SR] Reverse iteration order in resetMemory (#71705)
e04ade92ae : Skip compiledWithCuDNN() call for mobile to avoid segfault (#71775)
0891c908bb : Revert D33768645: Set correct device id on efficientzerotensors
adcf34f65a : Revert D33778917: Disable some forward mode AD tests
25e84fa4e5 : Add forward AD formulas for some losses (#71026)
c6d885e489 : extract out //c10/core:base library (#70857)
130ca58601 : extract final two libraries out of //c10/util (#70856)
bfc481cf67 : extract //c10/core:ScalarType to its own library (#70855)
f37d2046f8 : Implements allreduce_coalesced for ProcessGroupNCCL (#62140)
942a084c46 : Remove state_dict from AveragedModel and use buffers instead (#71763)
40e88b75c4 : extract out //c10/util:base library (#70854)
108b37db84 : [Array API] Add linalg.diagonal (#70599)
fe277b8717 : [jit][edge] Migrate to TypeFactory for jit types on mobile (#71516)
e5794974cb : [acc_tracer] Do not rewrite the leaf modules (#71790)
d3354602fc : [Easy] DDP typo fix (#71607)
10ca760c0a : [Opt Overlap] Implement register_fused_optim in DDP (#71606)
8273912a8c : [Opt Overlap] Implement _OverlappedOptimizer (#71605)
bd6ec4efb4 : [TensorExpr] Add lowerings for scalar binary ops (+,-,*,/,&,|,^,<<,>>,cmp). (#71298)
1dbcde2ade : [TensorExpr] Support scalar intermediate and output values. (#71186)
530e7f6195 : Define check_sizes_nonnegative as inline (#71640)
88c298c28f : Fix symbolic shape function for `flatten` (silvasean's) (#71762)
66939e3b94 : [acc_tracer] Add test coverage for retracing (#71752)
b36b11cbc1 : Separating CaptureDataFrame out of DFIterDataPipe (#71776)
dfcbe059ec : Obliviate ALL_TENSORTYPES and ALL_TENSORTYPES2. (#71153)
166d4e4201 : Change `test_conv_large` parameter initialization (#71521)
965b9f483e : [cuDNN] Add a new optimized cuDNN RNN algorithm for small RNN hidden_size (#62143)
358b5078ec : update missing ops message (#71294)
24f577dcb2 : Disable some forward mode AD tests (#71791)
e4500306c8 : [Quant] Enable default reference path for CopyNodeQuantizeHandler (#71168)
5dd6cd55ba : Set correct device id on efficientzerotensors (#71611)
ce6e6812b1 : use legacy unrolled kernel for non-trivial offset calc cases (#71710)
f3ebf06e98 : Release GIL when assigning to real or imag components (#71747)
dba42056d8 : Release GIL in Tensor indexing functions (#71728)
de8d0203e9 : Allow torch.Tensor.real on real-valued tensors (#71718)
03f1f0cfe4 : Check the availability of MAGMA / cuSOLVER when setting the Linalg backend. (#69826)
332d67b065 : Add hascuSOLVER flag to Context (#69825)
12e01f7825 : `linalg.matrix_rank`: fix cpp interface + add more overloads (#70575)
33403f4848 : edge_order check in torch.gradient only applies to dim argument (#67926)
f3e81f3eed : Remove copies in jit_log.cpp (#67841)
07ca1fc88b : remove hasPrimaryContext workaround on ROCm (#71146)
22a77d7b92 : [warning] Disable broken assert (#71778)
456a4dc6bb : [warning] Fix TORCH_INTERNAL_ASSERT calls missing condition to check 2/x (#71767)
f866e8b5aa : [fx2trt] Add trt splitter setting (#71717)
9a2b43085d : Improve docs for `from_dlpack` and `to_dlpack` (#70437)
f5a71ec2d6 : [Opt Overlap] Implement as_functional_optim and create_functional_optim (#71604)
541817628b : [Easy] Add comment explaining DistributedOptimizer gating (#71603)
281663955f : [Opt Overlap] Create Optimizer Hook State directly from functional optim (#71602)
6848e0dae5 : Fix RNN modules with input shapes containing 0 in CUDA (#71696)
211deb0364 : Fix CI quick-checks (#71773)
d32b7d9585 : Logic to auto-categorize commits (#64929)
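The entry above adds tooling that buckets commits by topic for release notes. A minimal hypothetical sketch of prefix-based categorization follows; the category names and rules here are illustrative stand-ins, not the ones implemented in the PR:

```python
import re

# Hypothetical keyword rules; the real PR's categories and matching logic differ.
RULES = [
    (re.compile(r"^\[?(quant|fx quant|dbr quant)", re.I), "quantization"),
    (re.compile(r"^\[?(onnx)", re.I), "onnx"),
    (re.compile(r"^revert", re.I), "reverts"),
]

def categorize(title: str) -> str:
    """Return the first matching category for a commit title."""
    for pattern, category in RULES:
        if pattern.search(title):
            return category
    return "uncategorized"
```

Rules are checked in order, so more specific patterns should come first.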
0fdb90da5e : [warning] Fix TORCH_INTERNAL_ASSERT calls missing condition to check 1/x (#71711)
bb157dd4eb : Make methods of internal file_obj visible from StreamWrapper (#71653)
16a9ffba4b : Allow specifying num_samples to RandomSampler even when replacement=False (#71568)
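Before the change above, `RandomSampler` only accepted a custom `num_samples` with `replacement=True`. The relaxed behavior can be sketched in plain Python; this is a simplified stand-in for the torch sampler, not its actual implementation:

```python
import random

def sample_without_replacement(n: int, num_samples: int, seed: int = 0):
    """Draw num_samples indices from range(n) without replacement,
    starting a fresh permutation whenever the previous one is exhausted
    (so num_samples may exceed n)."""
    rng = random.Random(seed)
    drawn = []
    while len(drawn) < num_samples:
        perm = list(range(n))
        rng.shuffle(perm)
        drawn.extend(perm[: num_samples - len(drawn)])
    return drawn
```

With `num_samples <= n` this is a plain partial permutation; with `num_samples > n` every index is seen before any repeats from the next pass.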
133c213415 : updated the docs for BatchNorm1d and InstanceNorm1d (#71371)
1df4eca6d7 : [Operator Versioning][Test] Automate model generating process (#70629)
e33cd8f382 : Drop unused variables (#71685)
7a0c97195f : Add save_for_forward to custom function (#71569)
09aeadf4ab : Fix custom function forward AD internal assert (#71531)
1cc3291716 : Fix custom function when non tensor argument precedes tensor argument (#71530)
1295d2699f : don't include Loops.cuh from Reduce.cuh (#71730)
70f3078dd6 : [Pytorch Edge] Wrap lowered module in to_backend (#71597)
b82c4a890d : Fix aten's native's folder docs. (#71395)
35e7ac3fa1 : Fix bug in singleCheckErrors (#71706)
9d47652bee : Fix lint
09f7b42f5c : Add @suo to the list of CI GHF approvers (#71737)
506d41d659 : Improve disable name match (#71499)
7bc220e060 : Update distributed.rst for ProcessGroup Extensions (#71482)
cda6f40151 : Disable XLA config
8ba1ee6aa7 : [tensorexpr][easy] add missing comma to test_jit_fuser_te.py (#71642)
f75e92a936 : Fix for retracing documentation which would break for n-ary operators (#71599)
edcd4a20ea : Exit once there's an environment error (#71693)
b372be4211 : [nn] lstm : no batch dim support (#71056)
99d9883a22 : dbr quant: make SeenOpInfo a dataclass (#71267)
41afeea791 : dbr quant: split observer insertion to a separate pass (#71253)
e12cc227a2 : dbr quant: make QTensorInfo a dataclass and add orig_dtype (#71245)
8fe82b855e : dbr quant: do not crash on unsupported qconfig_dict keys if they are empty (#71233)
c3570fd945 : fx quant: preserve node stack trace throughout prepare and convert (#70757)
e0d829a266 : Kill the test_torch.py mixin and creates test_scatter_gather_ops (#71691)
3a03af2f50 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
9b3a56eecf : [Optimizer Overlap] Move hooks to own file (#71601)
ba08440e88 : [Opt Overlap] Remove redundant tests (#71600)
3e55fa6385 : Fix unused variable warning in FractionalMaxPool3d (#71591)
27308642a0 : Fix unused variable warning in layer_norm_kernel.cu (#71587)
6ed46b08ab : Fix unused variable warning in LossCTC.cu (#71588)
6edb06daa6 : Fix unused variable warning in DilatedMaxPool3d.cu (#71590)
b9a7dd79f9 : Fix unused variable warning in DepthwiseConv2d.cu (#71584)
f269f990f2 : [jiterator] polygamma (#71162)
c7c864bbd1 : Fix unused variable warning in AveragePool2d (#71585)
8d5d875ac7 : Fix unused variable warning in ConvolutionMM2d.cu (#71593)
70532e32d9 : Fix unused variable warning in EmbeddingBag.cu (#71589)
1794fdd154 : Fix unused variable warning in MultiLabelMarginCriterion.cu (#71594)
4f498f11cd : Fix unused variable warning in DistanceKernel.cu (#71586)
9950f3b7e6 : [BE][GHA] Further refactor `checkout_pytorch`
dcc1e1cd87 : [BE] Use `!{{ common.checkout_pytorch("recursive") }}` in binary builds workflows
c9bd1c60ed : Move upgraders from python to cpp (#70593)
47cf0dbf8b : Prefer at::detail::empty_cuda to the native function (#70618)
86aefdc082 : Revert D33694867: Fix persistent worker exits before pin_memory thread
ce3215db70 : Fix nnq.dropout in vision mobilenetv3 pretrain model (#71438)
91b43b7820 : Add clean workspace step to clang-tidy workflow (#71655)
ae285d837e : [1/n][caffe2] Add session based margin loss function in caffe2 operator
26d54b4076 : monitor: add docstrings to pybind interface (#71481)
84fe4279db : Structured Kernels: Use at::detail::empty functions (#70617)
0a2cdd18f3 : nice error msg from load_state_dict for non-tensor value (#70596)
71a41323bb : BackendSelect: Use at::_ops API and per-operator headers (#69840)
b09d6224e2 : Register{Schema,BackendSelect}.cpp: cleanup header includes (#70021)
7dd6ead0ac : Update actions/stale to latest version
d8abe813bc : [LocalSGD] Move feature to Beta, clean up some docs (#71621)
29a7cb41d8 : [BE] Fix FSDP flaky test (#71525)
4e031419aa : Skip broken svd tests (#71646)
e2191e7084 : Fix persistent worker exits before pin_memory thread (#71579)
8b3f58d311 : Labels more elementwise binary operators correctly as BinaryUfuncInfos (#71622)
b40dbdc49f : Fix test ownership lint (#71554)
3a77fb244b : [PyTorch][Static Runtime] Delete cleanup_activations option (#71501)
8d880b06a1 : stochastic_depth support (#71536)
9ada2b0768 : add dumb retry to installing miniconda (#71558)
1a917e637c : Bump dlpack.h to latest version (#65047)
13ea2cb330 : [DataPipe] Make GroupBy serializable with lambda function (#71497)
36b4c95e74 : [DataPipe] adding serialization test for all core IterDataPipes (#71456)
40d1f77384 : Codegen: python_torch_functions only include relevant operators (#68693)
7680a0ae9d : Deprecates _aminmax (#71576)
401e755354 : Fix hsplit vsplit dsplit crash when section is 0 (#69342)
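The fix above replaces a division-by-zero crash with a proper error when `sections` is 0. The validation it adds amounts to something like the following hypothetical pure-Python stand-in for the tensor check:

```python
def split_columns(n_cols: int, sections: int):
    """Return column boundaries for splitting n_cols into equal sections,
    rejecting sections <= 0 instead of dividing by zero."""
    if sections <= 0:
        raise ValueError(f"number of sections must be larger than 0, got {sections}")
    if n_cols % sections != 0:
        raise ValueError(f"{n_cols} columns cannot be split evenly into {sections} sections")
    step = n_cols // sections
    return [(i * step, (i + 1) * step) for i in range(sections)]
```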
dea61e7e6c : [Docs] Fixed missing format common args (#70439)
c5fe70021c : Fix version strings in CI (#71564)
3a963d5621 : [fx2trt][torchbench] enable shufflenet lowering (#71562)
26c123efbd : empty_cuda: Add functions that don't depend on Tensor (#70616)
abe361754e : [fix] isin : non-contiguous input on cpu (#70659)
7ee0712642 : Fix torch.{unique, unique_consecutive} out of bound (#71540)
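For reference, the semantics of `unique_consecutive` (deduplicating only adjacent repeats) can be reproduced with the stdlib's `itertools.groupby`; this is an illustrative equivalent for 1-D input, not the fixed kernel:

```python
from itertools import groupby

def unique_consecutive(values):
    """Collapse runs of equal adjacent elements, like torch.unique_consecutive
    on a 1-D input; non-adjacent duplicates are kept."""
    return [key for key, _group in groupby(values)]
```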
9f0227a0eb : Revert "[ONNX] Minor doc update (#69501)" (#71615)
76fd3cfd38 : fix python version error (#71021)
e2dc2aca93 : Export ONNX models with readable input/output names (#68976)
64a3827d4e : <Quant> remove inplace hardtanh in test (#71519)
114c13d020 : [ONNX] Minor doc update (#69501)
9adee84a3f : .github: Improve syncbranch debugability (#71596)
64d221ffbf : Add onnx.rst to the list of mergeable files
c92ff47afd : Use == operator to test type equivalence in pytorch_jni_common.cpp (#71508)
0df607ce00 : Separate title and body of commit by 2 lines (#71598)
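The tooling change above formats merged commit messages so the title is followed by a blank line before the body, which git needs in order to treat the first line as the subject. A minimal sketch, with a hypothetical helper name:

```python
def format_commit_message(title: str, body: str) -> str:
    """Join a commit title and body with a blank line so git tools
    (log --oneline, rebase) recognize the title as the subject line."""
    title = title.strip()
    body = body.strip()
    return title if not body else f"{title}\n\n{body}"
```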
dc0a8a6587 : Improve storage assertion of Tensor's enforce_invariants (#70380)
db2fc82a54 : Generalize IValue's aliased hash handling for opaque tensors (#70371)
2eb4b05b94 : torch/monitor: make tests more robust on windows (#71581)
bc608b6e16 : Add gitutils tests (#71580)
08b389fc36 : .github: Set DRY_RUN for refs/heads/nightly (#71570)
e43769e8ab : remove ciflow_should_run (#70321)
67385918ab : move header inclusion (#71307)
8b5775a30f : Fix unused variable warning in Sorting.cu (#71555)
9f1ad2d3e4 : Fix unused variable warning in lp_pool_op.cu (#71557)
deb1c2f837 : Include act_rewriter_allow_list and leaf_module in lower (#71289)
10da8726ef : Fix unused variable warning in adagrad_fused_op_gpu.cu (#71556)
add774ddbd : .github: Set rocm workflows to only run on PRs (#71567)
53b3904115 : Fix memory leak in ShardedTensor. (#71445)
4b3cf1eaf7 : [BE] Clarify how to check memory saving if using gradient_as_bucket_view (#71483)
e926360cb8 : [Pytorch Edge] Refactor Compatibility Stuff into own directory (#71432)
1c61d8c43f : [PT1.11] make static graph stable (#71459)
11d8fe59fd : Revert "Move syncbranches and trymerge to 3.9"
c7c767726b : Move syncbranches and trymerge to 3.9
4868907cf3 : [binaries] fix dump_operator_name binary (#71246)
89c844db9b : [torch.distributions] Implement positive-semidefinite constraint (#71375)
640bfa7e6f : Refactor convolution_backward's cudnn cases (#71491)
06f14c2d63 : Refactor convolution_backward's CudaDepthwise3d case (#71490)
17d2a5167e : Refactor convolution_backward's CudaDepthwise2d case (#71489)
42f7afc4cd : [BE] Improve gitutils Inherit `PeekableIterator` from `collections.abc.Iterator`
a9f44b22c0 : Fix composite compliance problems for linalg.{matrix_power, inv, cholesky} (#69437)
011fd1d933 : [DataPipe] improving DataPipe unit tests (#70215)
9f0c808593 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
06838ce8b1 : fix: do_constant_folding arg when exporting ONNX (#71348)
21b697b646 : add flatbuffer_loader and flatbuffer_serializer as BUCK target (#71463)
99df96d800 : Add silu and hardsigmoid converter (#71453)
80b19c4c8c : Enable Python bindings for UntypedStorage (#68945)
f5b19ba683 : Additional unit test for sharded linear. (#70476)
a5d5b11252 : Add GitHub merge rules (#71514)
c59942ac73 : [PyTorch] Fix a bunch of structured kernel refcounting (#71140)
b98e955b24 : [flatbuffer] Fix forward flatbuffer type handling with dynamic type. (#71500)
565f78f571 : [Pytorch] Speed up LayerNorm 4-5% (#71423)
958f9cf5ff : [PyTorch][Static Runtime] Fix extra refcount bumps in layer_norm (#71237)
811af25963 : Fix trivial typo at the doc of `torch.lobpcg` (#71464)
dc5cda0cca : Update min python version to 3.7 in setup.py and mypy configs (#71494)
06bc6748a1 : [acc_ops] Remove usage of kwarg expansion via **locals() for jit scripting support (#71425)
ef4bc3fa2f : [distributed] Make rref_proxy._invoke_rpc truly async when needed. (#70206)
70c9146c40 : [nnc] Update block and thread extents in cuda_codegen to use int64_t (#71428)
2dbbb1a921 : [fx2trt] Issue warnings instead of error if there's possible const folding opportunities (#71031)
61713acb07 : Add trymerge workflow (#71488)
f45e217c01 : Consolidate the overloads of TensorImpl::shallow_copy_and_detach (#68953)
805b7575db : test //c10/... without Google libraries in OSS (#70853)
78e1f9db34 : port //c10/macros to common build structure (#70852)
661d10aab4 : use c10/macros/cmake_macros.h in fbcode build (#70851)
bdeec0c7b6 : [fx] add documentation to AccOpProperties (#71450)
7ce6db48e5 : add rocm GHA workflow (#68552)
15e7d18124 : [jit][edge] Create convenience wrapper for dynamic type constructors. (#71457)
ac26f8237c : Allow disabling nvfuser without CUDA (#71358)
214f4bf2ff : Support sparse.sum on empty sparse tensor (#71091)
3b589c3497 : [DDP Checkpointing] non-reentrant checkpoint tests (#69060)
75aaa9f92b : Remove simd qualifier for pragma omp loop in upsample_nearest_op.h (#71462)
908fd3d78b : [fix] composite compliance: quantile and nanquantile (#70894)
a0ada2d22b : Back out "[pytorch][PR] Performance and memory improvements to batched torch.linalg.solve" (#71421)
8a9243996c : Lazy load `pandas` when importing pytorch (#71316)
671a0b5376 : Move sccache compilation log to its own group (#71444)
7ed2a43d26 : Adding wheels with py3.10 (#71419)
b56ba296b1 : Support multiple input dims for sharded linear. (#70266)
fbc3b8c1bb : [RPC] Fix a few flaky RPC tsan tests (#71460)
9515213070 : [Operator Versioning] Remove version compare as they are decoupled now (#71461)
677fab6d1d : Support broadcast_to on sparse COO tensors (#71073)
9b9b878c89 : Fixes jiterator cache macro include + updates CUDA note with cache variables (#71452)
125bdb6d51 : empty_meta: Add functions that don't depend on Tensor (#70615)
b4a75af758 : [fx2trt] Export some options out (#71315)
87215ed526 : empty_strided: Factor out generic implementation (#70614)
d5e9a276ea : Adapt to llvm marking SmallVector::set_size private (#71434)
30739f5329 : ci: Change binary trigger to be nightly push (#71447)
6f4c491c6b : empty_cpu: Add functions that don't depend on Tensor (#70613)
6964aa2ced : backout D33469839 (#71443)
4fd1992a60 : [Docs][BE] DDP doc fix (#71363)
322f13d914 : [Profiler] Fix memory profile type from recent refactor (#71417)
ff8fb717db : Fix `get_git_repo_dir` (#71448)
b8679ee1fc : fix conv+bn folding issue when bn has no running stats (#71259)
a986154950 : Lazy import `packaging` in `torch_version` (#71345)
efd274bbcb : Fix for windows builds with python 3.10 , getting rid of ssize_t (ssize_t is not a C++ defined type) (#71390)
ea0524dbc3 : [FIX LOG] Complete a '\n' in GRAPH_DEBUG (#70421)
02ac73a973 : ci: Add PR trigger for binary builds workflows (#71431)
5243986df6 : Update `syncbranches` workflow (#71420)
1eb6146d96 : Add manual simple retry to ECR login (#71287)
2bb6a4f437 : Generate aten_interned_strings.h automatically (#69407)
d665097cad : allow Bazel to build without glog and gflags (#70850)
ffdc6b4994 : extract //c10/macros to its own package (#70849)
8d0e354191 : fix CAFFE2_BUILD_MAIN_LIB to the correct C10_BUILD_MAIN_LIB (#70848)
fd9e08df5d : Make Demux serializable with lambda function (#71311)
f0db15122f : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
d17f340a2e : The Cacherator (#71350)
7b9fff90d2 : empty_generic: Remove redundant device argument (#70612)
f93ffc9ea8 : Sparse CSR: Handle zero matrix consistently for triangular_solve (#71304)
17540c5c80 : [warnings][Caffe2] Suppress warnings in non-c10 headers (#71370)
cf47338191 : [Caffe2][warnings] Suppress -Wimplicit-int-float-conversion in TypeSafeSignMath.h for clang (#71369)
ddf97a59ca : Remove the dependency of pytorch nightly. (#71323)
a383d01774 : [fbcode][warnings] Suppress warnings in caffe2/c10 (#71356)
1ecfa1d61a : Load zip file in deploy interpreter (#71072)
08d8f81704 : [quant][fix][fx][graphmode] Fix qconfig setting for fused modules (#71254)
bb49352354 : caffe2/torch/csrc/jit/frontend/tree_views: workaround nvcc compiler error
4bf1be898d : caffe: fix warning: overloaded virtual function "torch::jit::Function::call" is only partially overridden in class "torch::jit::GraphFunction"
3ed27a96ed : [BE] Refactor repetitions into TorchVersion._cmp_wrapper` (#71344)
c43e0286a9 : [PyTorch][Lazy] Make hashing null optionals cheap (#71290)
a138aad6e6 : [jit][edge] Return a no-op nullptr for UnionType on mobile for backward compatibility. (#71341)
b7222e15b6 : [fix] max_pool1d: composite compliance (#70900)
fcbc34a5eb : [PyTorch][Static Runtime] Avoid recomputing input size in dict_unpack (#71252)
bf82d2012e : [PyTorch] Add IValue::toDimVector & mostly replace toIntVector with it (#71247)
94ed61eb5c : Pin numba to 0.54.1 (#71327)
d74bb42f7a : Add a missing precondition to `DistributedSampler` docstring (#70104)
2faccc2f5d : [quant] Remove some redundant entries in backend_config_dict for TensorRT (#70971)
d793cc1993 : Revert "Pin numba to 0.54.1"
ac7f188c64 : Pin numba to 0.54.1
680d61daab : [LT] Remove torch::lazy::convertShapes (#71291)
c7d1501e4d : fractional_maxpool3d: port to structured kernel (#70414)
a4196a9abf : Remove unused `optimizers` variable in test (#70668)
054b90f0d6 : add channels last support for ChannelShuffle (#50247)
e531646955 : Fix docstring for nn.MultiHeadAttention (#71100)
17bb68618f : Copy: Fix CPU transpose path ignoring neg and conj bits (#69026)
84b1c9798c : add BFloat16 support for AvgPool2d on CPU (#66927)
88012c7daf : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
3a0c680a14 : Jiterates exp2, erfc, erfinv and entr and refactors code_template.h to ATen (#71295)
d068849cc0 : Fixed memory leak in ir_simplifier.cpp (#71285)
910c01020e : add BFloat16 support for AdaptiveMaxPool2d on CPU (#66929)
9e45c89891 : remove skips from determinant tests (#70034)
356af8f857 : Do not use `ssize_t` in `python_arg_parser.[cpp|h]` (#71250)
675acfc1f4 : Remove unwanted comma (#71193)
558622642b : Fix `torch.dsplit` docs dim specification (#70557)
5f2b4be3b9 : [jit] Split DynamicType conformance test into smaller pieces. (#71275)
81f693d509 : [ONNX] minor clarifications of docstrings (#69260) (#69549)
d555d3f0d0 : Update generated header to use flatbuffer v1.12; (#71279)
e47771cca0 : [ao] Removing unused allow list arguments from propagate_qconfig and helper (#71104)
e7c87e8b44 : [quant] fix dropout in FX graph mode quantization (#71043)
eac3decf93 : ModuleList concatenation (#70887)
2981534f54 : [nn] cross_entropy: no batch dim support (#71055)
e4d522a3cf : More informative messages for None types comparisons (#69802)
ed9804088a : Adding support for loops (#70209)
18d91a97e4 : Adding custom device type change rules (#69051)
03c4d2b9e3 : Adding support for Ifs in Device Type Analysis (#69050)
4a8aa971cc : Building a TensorProperty AbstractBaseClass (#71184)
dabcbb2726 : Testing for Default Inference for Device Type (#69052)
ade83ed90c : Building Default Inference for Device Type (#69049)
b64946cbc1 : [acc_normalizer] Delete is_wrapped after normalization (#71046)
71b274d34d : [pytorch] move ATen/CUDAGeneratorImpl.h to ATen/cuda (#71224)
1de830a985 : Use `ptrdiff_t` rather than `ssize_t` (#71271)
83b45fe166 : [ao] disabling dynamic conv/convT ops (#71110)
37eaf7640f : Revert "Revert D33480077: .github: Re-enable xla test config" (#71202)
40eb004da5 : Use nightly-binary instead of nightly to deduplicate refs for nightlies (#71270)
003c94c790 : [Quant] Templatize activationLimits function (#71220)
4a26624670 : [Quant] Add a guard against shapes for qnnpack qadd (#71219)
e1b9d5854a : [Quant] Add quantized input tensor data type checks (#71218)
188b744390 : Make docker build cron once a week and not every hour on Wed (#71255)
1e3893ecbb : [DataPipe] Removing deprecated DataPipes (#71161)
60632a00fe : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
ff78c73286 : [ONNX] Remove f arg from export_to_pretty_string (#69045) (#69546)
3cc34a4502 : [PyTorch][Static Runtime] s/toObject/toObjectRef/ in native ops (#71238)
ffdc0e23af : [SR] Add various missing native ops (#71113)
f6b804ba9f : Fallback to server JIT type for type checking.
84d4087874 : Fix trt const_fold as output use case (#71194)
1bbea3c3a2 : [PyTorch][JIT] Support mayContainAlias(Value*, ArrayRef<Value*>) (#69853)
cd253938a9 : [PyTorch][SR][easy] s/input_or_constant_aliases/external_aliases/ (#69852)
1bc3571078 : [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#70201)
7a93d8bb2d : Revert D32374542: Implement the patterns module for the multi subgraph rewriter.
9ca367d48b : [nnc] Use given kernel function name while emitting code (#67781)
67941c8a94 : Document `torch.cuda.ExternalStream`, `torch.cuda.caching_allocator_alloc` and `torch.cuda.caching_allocator_delete` (#70126)
ad803936d1 : Codegen: ADInplaceOrViewType only include operators registered (#68692)
cc55da8a9b : [caffe2/server quant] use new depthwise conv fbgemm interface (#71166)
de62bcac66 : Implement the patterns module for the multi subgraph rewriter. (#71181)
3c0c5bde0e : [cmake] Uncomment binaries (#71157)
e1f01d2c01 : .ci: Add nightly trigger, remove CircleCI linux binary builds (#70957)
6c1be299c1 : caffe2/c10/core/TensorImpl.h: adapt to clang 12 (#70973)
385773cb77 : add BFloat16 support for MaxPool2d on CPU (#56903)
de902b5d02 : [FX] Add a default_value arg to Graph.placeholder and fix split_module (#71016)
5749be4678 : Fix the shape inconsistency of `out` and `elem` tensor (#71065)
2290976880 : ci: Comment out pull_request trigger for binary builds (#71244)
bfe1abd3b5 : torch/monitor: add pybind (#69567)
90ef54f8ea : [PyTorch] Remove buggy ExclusivelyOwnedTraits<intrusive_ptr<T>> (#70647)
479ce1c3a0 : [PyTorch] Add isUndefined to ExclusivelyOwnedTraits<TensorBase> debug msg (#70638)
4d28cef03a : Added AutocastCPU string (#70013)
7884143dff : Legacy support for embedded interpreter (#71197)
a71b4dc164 : Update nightly wheels to ROCm4.5.2 (#71064)
fd0d4bef03 : Edit cron to make the docker jobs run hopefully (#71232)
70951884d4 : Add option to load historic operators in IR when the operator is deprecated (#71148)
8f4cec2231 : [warnings][Caffe2] Suppress warnings in caffe2 headers (#71196)
149f5ffa36 : Fix inconsistency between new and old upgrader design (#71185)
54fe2741a1 : [fx2trt] break down div (#71172)
6a40bb0fdf : [DataPipe] Update deprecation warning (#71171)
706777bf56 : Disable the output invocation in jit (#71138)
5480deb183 : Add support for permutting dynamic fusion group outputs to channels last format (#70656)
39be20f259 : [JIT][NNC] Add handling of strides to dynamic shape support. (#70464)
975e7d246e : Remove ignore shapes arg (#71144)
97585ae1e7 : Simplify forward / backward AD for linalg.eigh and add checks (#70528)
061be8d600 : Correct forward AD for linalg.eig and add checks (#70527)
e1aea9b968 : Add retry to disabled tests file download (#71030)
928ca95ff0 : fix TensorLikePair origination (#70304)
49a5b33a74 : add an equality comparison helper for assert_close internals (#69750)
b0a10a709f : add explanation of quantized comparison strategy in assert_close (#68911)
802dd2b725 : change sparse COO comparison strategy in assert_close (#68728)
8d05174def : make meta tensor data access error message more expressive in assert_close (#68802)
b652887ad7 : improve documentation of comparison internals (#68977)
523d448968 : Remove deprecated cuDNN convolution ops (#71128)
93b2399c6c : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
4a8d4cde65 : Fix for tensor in list return added to wildcard set (#71170)
9bccb31306 : Remove precise tuple construct flag (#71121)
47ad6628f1 : add optional refining (#69776)
772b3e92bf : Parse symbolic shapes (#69775)
97e8dcba5e : Fix mis-specified device arg name (#69645)
9465c24245 : [jit][edge] Use dynamic type instead of union types for schema parsers. (#70509)
40121456af : Sparse CSR: Add `torch.randn_like` (#68083)
831c129e85 : fx quant: fix test_fx_acc_tracer::test_quantized_batch_norm2d (#71175)
410e91adee : Performance and memory improvements to batched torch.linalg.solve (#69752)
786f946098 : [Profiler] Add glue layer to reduce the use of `#ifdef USE_KINETO` in the profiler code. (#69798)
a3b7dd7b78 : Enable nested default hooks (#70932)
433cf44b79 : delete ecr_gc_docker job (#71178)
e7634f83ce : [jit][edge] Migrate base types to DynamicType on mobile. (#70233)
ecb6defa36 : Fixed docs for forward_ad.make_dual (#71159)
2c8cb8a964 : Speed up quantized upsampling for channels last (#70903)
edf15ebbc2 : Adding python 3.10 binary workflows (#71132)
7d6535cab3 : Make Kineto + distributed a warning rather than an error (#71120)
45b0bafb38 : Drop more unused variables (#71123)
6c03f8d9e5 : Drop unused variables and add some const (#71106)
1c8b167327 : Move implementation of empty_like for sparse COO (#71103)
a8612cd72a : Skip failing tests in test_nn if compiled without LAPACK (#70913)
14922a136f : Revert D33480077: .github: Re-enable xla test config
940b89b03f : Disable Python-3.6 binary builds (#71163)
4f35b9144c : [jit][edge] Migrate ListType to DynamicType on mobile. (#70212)
18e1e1d4d3 : .github: Re-enable xla test config (#71008)
85c6489cdc : ci: unquote env variables (#71139)
cf61738097 : Drop unused variables; make things const; use some auto (#71107)
3c2ae2b47c : Revert D32994274: [ONNX] Link to the wiki (#68505)
1b496cf158 : Fixes doc errors in `Tensor.triu()`, `Tensor.tril()`, `Tensor.ravel()`. (#71057)
ac0d131291 : Deprecating routed decoder (#70990)
d6b7d69d8b : Python3.10 migration adding to binary linux tests (#71130)
fb8a9732d9 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
fdda7b5e8a : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
40b80aa490 : [jit][edge] Migrate TupleType to DynamicType on mobile. (#70205)
5cae40c169 : [pytorch][aten][cuda] move CUDAGeneratorImpl.h to ATen/cuda (#70650)
33a5905cc6 : [quant] fix reduce_range warning (#71027)
59e166feb2 : [Quant][DBR] Add test for serialization (#70078)
043e84b3d2 : Per-overload torch.ops API (#67254)
b12ca69179 : [jit][edge] Migrate DictType to DynamicType on mobile. (#70202)
a606ea73d6 : [ONNX] Link to the wiki (#68505) (#69544)
7397683b57 : Add forward AD formulas for mv, scatter_add, _s_where (#70468)
78994d13c0 : Add forward AD formulas for {batch,layer,group}_norm (#70355)
7a08030903 : Fix fx2trt CI test trigger condition (#71014)
80659b71a5 : Hoisting common expressions out of If blocks [retry] (#65645)
569aeec1bc : fix typo in debugging_hooks.py (#70956)
49ed097ebe : Add documentation for lowering (#71116)
3fbff80bea : ci: Move MAX_JOBS to not set on Darwin (#71122)
cfc1117591 : Update sparse.rst to warn about _values() (#71088)
30699cbfd5 : Reland D33284352: [jit][edge] Do not reuse mobile type parser for all unpicklers. (#71048)
fb66f561b1 : Add copy out to the fallback path in SR invocation of composed op (#70871)
c8332256ee : [JIT] Refactor SR invocation of fusion (#70508)
0adc7cc546 : Inline Fallback Functions For Debugging (#70463)
840459a269 : [ONNX] Relax constant_fold gather with indices rank > 1 (#68140) (#68493)
4b47047dae : [ONNX] Add support for shrink ops (#66969) (#68492)
62441157e3 : Have getFilesToLevels return a reference (#71047)
87484d67e3 : .github: Enable linux binary builds (#68388)
e9a8bb59b4 : Move the apply_tensor_props into its own function for more public use (#67786)
3ef10da97d : add support for pickle v4 (#70642)
118bd82dde : detect mocked module on saving pass (#70641)
c4400fc431 : Retire repeat_test_for_types (#71033)
e1b84e1b6b : fix loading of older models that don't have maximize (#71023)
b27dfa70c4 : caffe2: disable TensorImpl static_assert (temporary)
fca8a0acaa : Prevent import race condition that leaves torch.package.PackagePickler with unwanted dispatch table entries. (#71025)
2bed616e0f : [Dist tests] Make event_listener work for all dist tests (#70628)
9267fd8d73 : [WIP] [ATen] Add native_multi_attention_self_attention CPU + GPU implementation (#70649)
785b6905de : reduce plan generation log spam (#70880)
49a07c8922 : Suppress some unused variable warnings in Sorting.cu and TensorTopK.cu (#70999)
d1e049c306 : Fix some unused variable warnings and make some stuff const in ReplicationPadding.cu (#70998)
11aa1961c1 : Use (void)error_unused to avoid unused warning (#71000)
704af23ee4 : Use a reference in GetSingleArgument (#71007)
9762aa0fdc : Revert D33284352: [jit][edge] Do not reuse mobile type parser for all unpicklers.
f626bef598 : Fix docstring for nn.Hardshrink (#71012)
0a921ba0d0 : [jit][edge] Do not reuse mobile type parser for all unpicklers. (#70338)
3f3eae6737 : [jit] Split Tensor type implementations to separate file. (#70121)
53b9c0f12d : [jit] Polymorphic IValue::type() for DynamicType. (#70120)
62909facb3 : [jit] Decouple ivalue.h from jit_type.h (#70119)
0eb2fc608c : [fx_acc] ensure all acc ops args to be keyword arguments (#70952)
0cd474b2ce : fix op not scriptable
d26e5ced72 : Add missing docstrings for ONNX converter API. Fixes #67393 (#67640) (#68489)
c59c86706e : [quant] Add back README.md for backend_config (#70964)
00e5610914 : FX quant: allow duplicate named_modules during fbgemm lowering (#70927)
ad88354e25 : torch.futures doc formatting (#70630)
d583eca8c3 : Add workflow to sync `fbsync`->`master` (#71013)
d7db5fb462 : ctc loss no batch dim support (#70092)
9032d73f3b : Disable cpp tests in multigpu job (#71015)
0721fc6474 : Decouple MapDataPipe from Dataset (#70991)
3febe0d986 : Remove backward op for 3d depthwise convolution (#70462)
704fbc29ae : Remove backward op for 2d depthwise convolution (#70461)
a70297e7cb : NNAPI: quant logistic fix (#70847)
ed50a35cf8 : [Model Averaging] Update the documentation of PeriodicModelAverager (#70974)
c8b897333c : [rnn/gru] no batch dim (#70977)
338eb1b2b3 : [LTC] Export torch::lazy::GetBackendDevice() (#70963)
0a002f879e : Actually clean on clean workspace, including hidden files (#71018)
bc026c0577 : [jit] Split Union type and Optional type to separate impl file. (#69483)
1011ac188f : [jit][edge] Create DynamicType for OptionalType in mobile. (#68137)
0517e719ac : [jit] Add conformance test for DynamicType with server JIT types. (#69482)
649dda9fee : [jit] Implement DynamicType for TorchScript runtime. (#68136)
0408449244 : [jit] Reclaim some binary size. (#68038)
dd1121435b : SequentialLR update _last_lr on step (#70558)
195181d4df : Revert "add very dumb retry to ecr gc"
c6e727d05b : Fix adamw formula doc (#68587)
08074c8f2d : Update gradcheck.py (#70950)
8dfff8b2e2 : Fix scatter for empty indexes (#70662)
4e7e8f2826 : [PyTorch] Outline destructor of CppFunction (#63688)
40c512f52c : split cuda for all 11.X (#70899)
2378421340 : Implement torch.allclose for sharded tensor. (#70331)
997fa8671d : Fix docstring for nn.Hardsigmoid (#70987)
f135438d3b : Dispatch to at::convolution instead of at::_convolution in _convolution_double_backward (#70661)
9ad21091dd : [SR] Give VarStackNodeWrapper an iterator (#69922)
6e16c9bb1d : Add support for deleteKey for FileStore (#69953)
d697bb4220 : Adapt llvm_codegen.cpp to LLVM TOT (#70810)
87139d8532 : [LTC] Sync LazyGraphExecutor and LazyTensor with the staging branch (#70867)
1cdc643714 : [TensorExpr] Add a pass for trimming JIT graphs. (#66847)
8223ef1cd8 : [TensorExpr] Clean-up logic for copying input tensors and remove some dead code. (#70535)
5d7cc8f22a : [TensorExpr] Add some graph-rewrite passes to prepare models for AOT compilation. (#66515)
cdbf83b0c3 : [TensorExpr] Add helper passes for AOT pipeline. (#66514)
a311cfa800 : Revert D33460427: [pytorch][PR] [rnn/gru] : no batch dim
1622546050 : use irange for loops (#70248)
36d9e03ab7 : Reserve vector in gather_ranges_to_dense_op.h (#70478)
df6eb9bbab : Fixed to_folder not saving dtype (#69983)
23f902f7e4 : Fix incorrect variable in autograd docs (#70884)
22f5280433 : add very dumb retry to ecr gc
c18e6b790e : Adding elu, selu, softsign support for fx2trt (#70811)
70b18b9511 : Fix comment indentation issue (#70227)
32bf5e0ef9 : Add native impl of gelu for QuantizedCPU (#69968)
6eba936082 : [rnn/gru] no batch dim (#70442)
880a5b9ea6 : [PyTorch] Move prim string ops to JIT op registry (#70501)
ddea6980fe : [PyTorch][JIT] Don't refcount Type singletons (#69579)
e6befbe85c : Add flag to optionally average output attention weights across heads (#70055)
cc7382dd92 : Enable upgraders in TS server (#70539)
7b8f73dd32 : No-batch-dim support for ConvNd (#70506)
6896b2d734 : [NNC Testing] Randomized loop nest infrastructure (#70410)
b7742b437a : Allow RNN hidden_size to be 0 (#70556)
e7602a1e30 : Fix multiplication of 0-D sparse tensors (#70749)
4fa70a2483 : [pytorch] fix hipify_python (#70619)
9c455d7086 : dbr quant: add limited support for `torch.nn.ModuleList` (#70372)
c3f0c77b64 : dbr quant support for custom leaf modules, part 3/x (#70349)
423d8aabbd : dbr quant: support for custom leaf modules, part 2/x (#70335)
b12852eb41 : dbr quant: support for custom leaf modules, part 1/x (#70330)
a8929c3278 : dbr quant: unbreak case when child module not returning any outputs (#70329)
f742853838 : dbr quant: support functional linear without bias (#70328)
c21a540866 : dbr quant: support dynamic linear (#70257)
dfb807d65e : dbr quant: do not attach auto_quant_state to observers (#70256)
524bbb1442 : [LTC] Sync gen_lazy_tensor.py from the staging branch (#70385)
81b52c290f : Adding leaky_relu support for fx2trt (#70799)
19f04da21e : GHA: Make WORKFLOW_ID not a concatenation of run_id and run_num (#70938)
10b55648f5 : CI: remove unused yaml and make upload_binary_size_to_scuba script work with GHA (#70643)
578fe11673 : [pytorch][aten][cuda] fix LpNormFunctor (#70601)
c00d33033c : Remove repeat test for types in test nn (#70872)
bc514cb425 : Skip distributed tests if built with USE_DISTRIBUTED=0 (#70677)
ff408fca7f : Forward AD formulas for activation backwards (#70460)
3051aabd0e : Add forward AD formulas for convolution and some others (#69956)
4916a21f10 : quantization: fix scale+zp serialization of quantized BatchNorm{2|3}d (#70432)
6773589a06 : Drop some unused variables (#70879)
748790588c : Upgrading the loop to use irange (#70326)
b0fdca8855 : Bump version number to 7 and compile old operators with old schema (#68358)
8bdbe94344 : Add forward compatibility tests in CI (#64139)
402f2934bf : Revert D33262228: Per-overload torch.ops API
884aa2baad : ci: Make linux.*xlarge non-ephemeral (#70869)
2367face24 : Prefer maybe_multiply when multiplying by a constant (#68185)
1a061c7fe1 : Merge index_{add,fill,copy,select} sampling (#68184)
baeca11a21 : Remove random_fullrank_matrix_distinc_singular_value (#68183)
08ef4ae0bc : Remove unnecessary sync in linalg.det (#67014)
4d4e81d869 : Make linalg.lu_factor structured (#66934)
012c38e04d : Add contiguous_strides as a correct replacement of defaultStride (#67789)
a35b4b49d2 : Add linalg.lu_factor (#66933)
3f53365086 : define `get_dot_graph` (#70541)
917d56a7e4 : Copy: Fix conj bit being ignored on type mismatch (#68963)
cfc5519661 : Support Sparse CSR transpose. Fix clang-tidy warnings. (#70582)
3a21f38a2e : Integrate multi_tensor zero_grad into Optimizer base class (#69936)
8e6d1738a4 : Per-overload torch.ops API (#67254)
f9e1a1c97f : Increase tolerance for test_adadelta (#69919)
ce409d8f50 : docs: clarify smooth l1 == l1 when beta == 0 (#70673)
2431218ee4 : Jiterates more ops (#70663)
a5bc44422a : [PyTorch] Remove the List/Dict move operations (#69370)
b283b1de39 : Cleaning code in fbcode/caffe2/c10/core/TensorImpl.h (#70588)
395f853770 : Parallelize docker dependency builds (#70866)
be298212a6 : reduce igamma instantiations (#70666)
6c4437118b : Deprecating Python 3.6 (#70493)
025cd69a86 : [AMD] Fix some legacy hipify script (#70594)
34c49d3d3b : Document torch.quantile interpolation kwarg (#70637)
616afcf981 : [jit] [shape analysis] Move constant tensors out of fused subgraphs during generalization (#70320)
b60b1b100f : Set cuDNN deterministic flag for test_conv_double_backward_cuda (#69941)
93c7504438 : [PyTorch] Improve StorageImpl::set_data_ptr (#65432)
70d3b2700f : [LTC] Fix stride accessors in LTCTensorImpl (#70623)
6f473c80a5 : Enable fx2trt CI test (#70658)
4cbe140ec5 : Add CI config to test USE_PER_OPERATOR_HEADERS=0 (#69907)
e1e43c4e71 : Prevent sum overflow in broadcast_object_list (#70605)
8ba27c576c : Upgrade CI to ROCm4.5.2 (#69886)
20489ebdc9 : Increase tensor size for mem check tests (#70603)
1aa98c7540 : [docs] multi_head_attention_forward no-batch dim support (#70590)
e228b71dae : remove unnecessary skips in rsub OpInfo (#69973)
216ae7bc91 : [docs] Transformer: no batch dim support doc update (#70597)
5543b7ce16 : Fix docstring for nn.Softplus (#70576)
657a7e74ed : Fix docstring for nn.Tanh (#70577)
adceb13da1 : Copy: Avoid extra dispatch in type-mismatch case (#68950)
e1aa5db108 : Bazel: Only run ATen codegen once (#70147)
1681323ddc : DOC: Merge extraheader block from theme instead of override (#70187)
aea3d3ced7 : dbr quant: stop calling eager quant convert (#70247)
4e90fa6a8c : dbr quant: break up test class into multiple classes (#70246)
5b20052857 : dbr quant: start recording ops which are not quantizeable (#70200)
80e685e2c0 : dbr quant: start reusing static quant module mappings (#70196)
45f5a3ceb8 : Fix generating files for Vulkan on Windows (#69696)
c468e35d83 : [caffe2] don't use __FUNCSIG__ when building for Windows with clang (#70561)
12653be434 : [PyTorch] Optimize no input NVTX collection (#70133)
44283c2766 : NNAPI: Add qint16 support via int16 (#70621)
10b40acbdb : [PyTorch][Static Runtime] Fast aliasing in select_tensor by manual borrowing (#68122)
4d8fc8693c : [PyTorch][Static Runtime] Support memory planning for torch.to() w/o requiring copying (#67223)
1507ce90b2 : [PyTorch][Static Runtime] Avoid managed output tensor DCHECK (#67221)
99a10c371f : [PyTorch][Static Runtime] Fix dtype changing between iterations for to() (#67394)
ab7d0df449 : Support cloning CSR tensors (#70581)
d1dbcb1780 : Change to use current LLVM APIs (#70625)
f8eaebc978 : Avoid adding torch::deploy interpreter library to the data section (#70208)
2292520bdc : Fix genSparseCSRTensor: generate non-trivial values for uint8 dtype. (#70580)
29ff596dca : [CUDA graphs] Changes batchnorm to increment num_batches_tracked in place for improved graph safety (#70444)
14457bb8cb : Remove backward op for slow 3d transposed convolution (#69933)
1adb70c6f0 : Revert D33409880: [pytorch][PR] Deprecating Python 3.6
8369a46417 : [maskrcnn] use stable sort in mask rcnn caffe2 ops (#70510)
b16b444828 : don't unsqueeze every stack arg if possible (#70288)
f8f96d4858 : Copy: Re-use existing neg and conj kernel implementations (#68949)
95a1952633 : add SparseXPU to dispatch key set autogradother_backends (#70443)
a60adc7f8a : fractional_max_pool2d_backward: port to structured kernel (#68245)
7e58b1dd7b : Sets device guard in _cudnn_impl functions (#70406)
6089a0f14a : Extend checkout for pytorch/builder (#70644)
7b8c43cd7c : Revert "Revert D32498570: make codegen'd device guards not cuda-specific. Allow them to be used in external codegen" (#69951)
bb5b4cceb6 : Revert "Revert D32498569: allow external backend codegen to toggle whether to generate out= and inplace kernels" (#69950)
d95be99561 : Deprecating Python 3.6 (#70493)
4d08db0cb2 : Flaky tests reporting: use GITHUB_RUN_ID instead of concatenated value (#70604)
0ece9a49d7 : Revert D33198155: Bump version number to 7 and compile old operators with old schema
61b562206b : Fix docstring for nn.ELU (#70574)
9cf0de509f : DispatchStub: Improve type mismatch errors (#67880)
f64906f470 : ibm z14/15 SIMD support (#66407)
8dcfdf39e7 : [DataPipe] Renaming FileLoader to FileOpener with deprecation warning for FileLoader (#70367)
7c7eb351c3 : Populate __name__ for torch.nn.modules.utils.{_single,_pair,...} (#70459)
1150046d29 : NNAPI: Add runtime flexible shapes & return shapes (#70334)
a825351c13 : GHA Windows: Propagate exit code from .bat to calling bash script (#70011)
d35fc409ad : Bump version number to 7 and compile old operators with old schema (#68358)
d9106116aa : nnapi: Add int32 type torchscript expressions (#70197)
1b66915f39 : Have type_parser return const reference (#70477)
bc3246453b : Added explicit build command for Windows and clarification on obtaining (#70190)
1e67570f3a : Drop omp simd from batch_permutation_op.cc (#70579)
ab49d41bb5 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
fa09099ba3 : Codegen: TraceType only includes operators being registered (#68691)
779f41a78a : [quant] Add an e2e test for standalone module + custom backend_config_dict (#70152)
ce86881afa : [quant][graphmode][fx] Add qat module mapping support in backend_config_dict (#70287)
65faf1a7eb : [fx2trt] Add version check for ProfilingVerbosity builder config (#70286)
6bc06ec3c2 : [PyTorch Edge][QNNPack] Tighten Step Height for Indirection Buffers (#70530)
7bfaa230be : [nn] adaptive_avg_pool{1/2/3}d : Error on negative `output_size` (#70488)
e6c3aa3880 : Remove backward ops for mkldnn convolution (#70467)
cfc71f56e4 : [quant][fx][graphmode] Support standalone module in _convert_do_not_use (#70151)
401a6b682b : add BFloat16 support for AdaptiveAvgPool2d on CPU (#56902)
bc40fb5639 : [Reinstate] Wishart distribution (#70377)
14d3d29b16 : make ProcessException pickleable (#70118)
9c742bea59 : [PyTorch Edge][QNNPack] Enable Depthwise Specific Conv3d Kernel for Kernel Size 3x3x3 (#69315)
3d4590d16f : [PyTorch Edge][QNNPack] Depthwise Conv3d mp8x27 (per-channel) Sse2 Kernel (#69314)
821c085c9b : [PyTorch Edge][QNNPack] Depthwise Conv3d mp8x27 (per channel) Neon Kernel (#69313)
15d443326c : [PyTorch Edge][QNNPack] Depthwise Conv3d Weight Packing (#69312)
db37fd3865 : [PyTorch Edge][QNNPack] Depthwise Conv3d Indirection Buffer Setup (#69311)
9863cd5741 : [PyTorch Edge][QNNPack] Refactor Computing Step Dimensions (#69310)
cea3eba617 : [PyTorch Edge][QNNPack] Operator-Level Conv3d Tests (#69309)
35251a5528 : [PyTorch] Add Enum to IValue Deepcopy (#69937)
36db501736 : softplus_backward: remove output arg (#70296)
18dd5cdba5 : [Operator Versioning][Test] Use hypothesis for better test input data and broader coverage (#70263)
c627211651 : [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959)
fb78a31916 : Add testing across mem_formats to ModuleInfos (#69317)
14f4b91f6e : Add Nondeterministic Tol to gradient test in test_modules (#69402)
d2abf3f981 : Added antialias flag to interpolate (CPU only, bicubic) (#68819)
2b00dbbbbc : fix typos in torch/csrc/deploy/README.md (#70494)
8af39b7668 : AdaptiveLogSoftmaxWithLoss no_batch_dim support (#69054)
0460324b9b : Fix docs rendering for nn.Module.named_modules() (#70491)
fb736c77a4 : Remove backward op for slow dilated 3d convolution (#70068)
2c67621a19 : [rnn,gru,lstm]cell : no batch dim (#70236)
9266b2af73 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
103fc5f9a5 : Remove unused variable (#70261)
066c9ff08f : Deprecating python 3.6 (#70325)
a0c99a8d3b : [Operator Versioning][Edge] Update upgrader codegen with latest change (#70293)
a6eadf9b50 : Remove backward op for slow 3d convolution (#69978)
5e113eb24d : .github: Add linux.4xlarge executor (#70474)
0fb73035f7 : [Bootcamp Task] Replace string concatenation by fmt::format (#70366)
e96dda15e5 : Remove backward op for slow 2d transposed convolution (#70333)
c732a26e59 : Add macro to register CPU kernel for all arch types (#70332)
244730eeea : .github: Add needs build for generate-test-matrix (#70456)
4ed02748be : fix typo in the docs of multiprocessing (#70448)
73b5b6792f : Adds reduction args to signature of F.multilabel_soft_margin_loss docs (#70420)
6f83841582 : .github: Temporarily disable xla test config (#70453)
15f14ce0dc : fix typo in adam docs (#70387)
574dbb584d : quant tests: fix log spew for HistogramObserver (#70107)
00df885d4e : quant tests: clean up logs about incorrect tensor copy (#70106)
b7b32b56f1 : Revert D33281300: Prevent sum overflow in broadcast_object_list
807f9a828c : Prevent sum overflow in broadcast_object_list (#70336)
5a9ea9e386 : Automated submodule update: tensorpipe (#70438)
bf610f08b0 : Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions"
4ae71c8d34 : Add graph op replacement pass (#69915)
63e58d262a : Extend Graph, CompilationUnit, and schema matching to accept optional operator version number (#69914)
df3cbcff28 : Add utility methods to find an upgrader (#68355)
911d527b87 : Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions (#70339)
ab4f9862a3 : [Compiled Mobilenetv3 Demo] Integrate Compiled Mobilenetv3 into FB4A Playground app (#70370)
0ee663d2fa : Revert D33234529: [NNC Testing] Randomized loop nest infrastructure
e429a68478 : Allow single node fusion for nvfuser (#70000)
5ccf28d066 : Do not use ZeroTensor for inplace ops (#69998)
3116d87024 : Add forward AD formulas for `{adaptive_,fractional_,}max_pool{2,3}d_{backward,}` (#69884)
6925576e88 : [acc_ops] No longer mark acc_ops.cat as unary (#70365)
133c7f2cf9 : Revert D33301254: [pytorch][PR] GHA Windows: Propagate exit code from .bat to calling bash script
6431ac6c7a : GHA Windows: Propagate exit code from .bat to calling bash script (#70011)
ab57f6d12c : [LTC] Upstream utils to extract BackendDevice from at::Tensor (#70069)
16e6e1a59e : [Easy] Lint wrap.py file (#70341)
3c231e9bd7 : [FSDP] Remove module.wrapper_config support (#70340)
d100d98db8 : `torch.linalg` routines return `torch.linalg.LinAlgError` when a numerical error in the computation is found. (#68571)
6a84449290 : [SR] Fast path for VarStack on scalars (#70210)
cc8b916395 : Transformer{DecoderLayer} : no batch dim (#70322)
4d49af863f : GaussianNLLLoss no_batch_dim docs and testing (#69783)
a9c7d626e1 : Add the `maximize` flag to AdamW (#70146)
b15212c62b : enable backward pass computation and communication overlap by prefetching all gather (#70235)
1d094587ea : [NNC Testing] Randomized loop nest infrastructure (#70174)
656d2a7bf6 : [quant][fx][graphmode] Add backend_config_dict for standalone module (#70150)
795af1578c : Revert D33172665: [LTC] Upstream utils to extract BackendDevice from at::Tensor
12afe2bb84 : update poisson_nll_loss opinfo samples (#70300)
681e78bace : [Profiler] Address issues from profiler bifurcation. (#70327)
121d067999 : [LTC] Upstream utils to extract BackendDevice from at::Tensor (#70069)
bd8e8e3aaf : [GHA] Clean after checkout (#70337)
a421ee0e52 : [nn] InstanceNorm : no batch dim for modules (#65323)
c06b3208d4 : Revert D33141012: test //c10/... in CI
23ab6ce723 : Revert D33141011: extract //c10/macros into its own package
f126501d37 : Revert D33141010: allow Bazel to build without glog and gflags
682fab19d4 : [SR] verify_and_correct_memory_overlap handles tensor lists (#69774)
385c12852e : [LTC] Upstream LazyTensor <=> at::Tensor utils (#70066)
2e94a0d282 : Remove backward ops for NNPACK spatial convolution (#70305)
7cdfd86a72 : TestMathBits: test with neg and conj bit set (#68948)
7c690ef1c2 : FractionalMaxPool3d with no_batch_dim support (#69732)
8c41f258f4 : allow Bazel to build without glog and gflags (#69995)
8f4c724bb6 : extract //c10/macros into its own package (#69994)
0ccccf4ed5 : test //c10/... in CI (#69993)
1bd147b61a : Fix masked_softmax's perf when element_size is not 8 (#70271)
c34aa715fa : AT_MKL_SEQUENTIAL and build changes (#70259)
b37de0a4bb : Update flags in nnc lowering (#70306)
f36b44bb9e : Remove ciflow_should_run job (#70204)
276253b164 : Fixed wrong return type in ModuleList getitem (#69083)
ce9a2f8ba9 : [C++ API] Added missing nearest-exact mode and anti-alias flag (#69318)
da63f3f92b : Corrected typo in Cross entropy formula (#70220)
b7259b8660 : [quant][be] Add a check in prepare_qat to make sure the model is in training mode (#69879)
2806d821b0 : Add conversion of torch.permute to acc_ops.permute (#70294)
56969bf88a : make inverse call linalg_inv (#70276)
4db3a8fc0a : [nn] TransformerEncoderLayer: no-batch-dim (#69291)
69b37a16f3 : Remove unused CUDASolver.h from SparseCUDABlas (#70281)
31c7e5d629 : Install TensorRT lib on oss docker and enable fx2trt unit test (#70203)
b5f71375f5 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
29f1ccc8f0 : Fix some Composite Compliance problems with binary_cross_entropy backward (#70198)
75dbe88b05 : [DataPipe] removing unbatch_level from .groupby (#70249)
e02d836cb2 : [LTC] Upstream LTCTensorImpl (#70062)
633f770c3c : [StaticRuntime] Add out-variant support for TensorExprDynamicGroup op (#69479)
7d4db93a7d : [jit] Handle output tensor being passed in as inputs to TensorExprDynamicGroup (#69478)
4dec15e6d8 : [nnc] Add a run method to TensorExprKernel that takes in output tensors (#69477)
0bdf4702f6 : [jit] Add a new op that composes all of the dynamic shape logic (#69476)
b613fbdbf2 : Back out "[Quant] Added 4 bit support for embedding quantized module" (#70273)
47ba28f3b5 : Back out "[Quant][Eager] Added 4 bit support for eager mode quantization flow" (#70272)
a86f9806bc : Back out "[Quant][fx] Added test for quint4x2 for fx graph mode quantization" (#70274)
6217fee96b : Revert D33246843: [pytorch][PR] Implementation of Wishart distribution
2d509ff31b : [GHA] Fix doc push jobs (#70269)
591ca4d6bc : [Operator Versioning][Edge] Reorganize upgrader initialization logic for thread safety (#70225)
21c6de9fdc : Extend autograd functional benchmarking to run vectorized tasks (#67045)
82c5f298ed : [shard] fix named_params_with_sharded_tensor (#70228)
74c834e0dc : [DataPipe] adding a finally statement to ensure hook is reset (#70214)
23902fb895 : Fixed typo in torch check for cdist (#70178)
a217a62e73 : Implementation of Wishart distribution (#68588)
0544f975e1 : [reland] Support torch.equal for ShardedTensor. (#70145)
c321d4c1ca : [Operator Versioning] Split the upgrader test to a separate file and cover mobile part (#70090)
a6f953156e : [StaticRuntime] Add TensorExpr fusion with dynamic shapes in SR (#69475)
c6d1162325 : [jit] Add support for dynamic shape fusion in JIT. (#69474)
c5333cdfba : [nnc] tensorexpr for quantized::add (#70188)
bb51519937 : bug fix FractionalMaxPool2d (random_samples dimensions) (#70031)
91da2d5fa1 : [StaticRuntime] Refactor StaticModule to pass in sample inputs (#69473)
c4a6c7a436 : fix cpu binary size increase for clamp (#70168)
5504e4ae5c : [nnc] Move DispatchParallel to external_functions (#70221)
304efd8e9a : Change TH_BLAS_MKL into AT_MKL_ENABLED() (#70219)
a197f3fe52 : [FSDP/Checkpoint] Activation offload support in checkpoint_wrapper (#70165)
e428a90553 : Android build migrated to GHA. (#68843)
5e222d08a1 : Revert "Revert D32498572: allow external backend codegen to be used without autograd kernels" (#69949)
8e763cd735 : Add explicit OperatorHandle destructor (#70033)
adaf383837 : dbr quant: better fix for bug with recursion on dequantize (#70128)
cce9c9aa45 : dbr quant: stop overriding tensor getters (#70115)
f291708058 : dbr quant: clean up logging format (#70114)
fb2a6747b8 : dbr quant: add test for qconfig_dict and methods (#70109)
78bea1bb66 : update example in classification losses (#69816)
19f898402d : Revert D33241684: [pytorch][PR] Install TensorRT lib on oss docker and enable fx2trt unit test
b376d82caf : Remove backward op for slow dilated 2d convolution (#70067)
dab3d3132b : Install TensorRT lib on oss docker and enable fx2trt unit test (#70203)
123be0e5b7 : [fusion] Add ConvTranspose+BN fusion support (#70022)
24f16de987 : [Static Runtime] Support native op split_with_sizes (#69999)
6623c4838e : Handle the corner case when min == max in L2 search (#70207)
f17e76b0f2 : Expand description of bias_sizes arg for convolution_backward (#70195)
3e8ef9a272 : Add return type annotation for ShardedTensor (#69945)
c555b7bacb : GHA: Remove caffe2 check in Windows shard 1 smoke tests (#70010)
e6d9bb8d57 : reduce the number of instantiations for bernoulli tensor tensor kernel (#70169)
79a40b22aa : [Checkpoint] Make checkpoint_wrapper an nn.Module (#70164)
fcaecd718a : Write flaky tests to rockset (#70136)
5651e1e3ad : Add auto_linear formulas and some others (#69727)
65f54bc000 : [SR] Optimize VarStack (#68750)
a799ffebd2 : Create lower code example (#70142)
423ce416d8 : Prune osx-arm64 binaries from nightly channel (#70132)
41959ce77f : [JIT] scripting, freezing, serialization for sparse csr (#69555)
bcb6076099 : Sparse CSR tensors: storage access should throw (#70072)
bcc7dbdf37 : Change open source unit test deps (#70167)
dd02af6283 : Bilinear no_batch_dim (#69539)
978089c381 : Prevent divide-by-zero errors in Timer (#70050)
ad0cd8a76e : [DataPipe] Improve inline doc and testing for CollatorIterDataPipe (#70139)
8a912014b1 : [Operator Versioning][Edge] Initialize upgrader thread safe (#70161)
7ea86dfdb1 : [Profiler] Factor common logic into `torch/csrc/profiler/api.h` (#69459)
181120f7d7 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
60191196d4 : [AutoAccept][Codemod][FBSourceBuckFormatLinter] Daily `arc lint --take BUCKFORMAT`
ef70174f2e : Separate c10::Symbol header from list of interned strings (#69406)
06d0536dad : Low precision support for jiterator (#70157)
78f06e0690 : fixing conv2d decomposition and tests (#70127)
de4e7dece9 : [Quant][fx] Added test for quint4x2 for fx graph mode quantization (#69846)
75718e5059 : [Quant][Eager] Added 4 bit support for eager mode quantization flow (#69806)
9f512e129b : [Quant] Added 4 bit support for embedding quantized module (#69769)
b331752314 : [Quant] Implemented 4 bit embedding op support; added corresponding test case (#69768)
94abf120c8 : [quant][fx][graphmode][be] Use is_qat instead of model.training as a flag for qat (#69878)
fb34af1b21 : [nnc][quantization] Optimize constructTensors in ext functions (#69856)
84b7832010 : Updates CUDA memory leak check to verify against driver API and print more diagnostic information (#69556)
6c68045f60 : [quant][graphmode][fx][be] Fix a typo in quantization/fx/graph_module (#69877)
9d3a6fa623 : [quant][bc-breaking] Remove QConfigDynamic from quantization api (#69875)
5db711f9d3 : [quant][be] Replace QConfigDynamic with QConfig in code (#69864)
c463d50098 : [fx2trt] Convert to tuple if output_size of adaptive avg pool is an integer (#70144)
9ee3006d58 : [fx-acc][graph-opts] bug fixes for transpose_to_reshape, optimize_quantization, finalize_kwargs_to_concrete
bd9983366b : [fx2trt] Add support for torch.mean (#70052)
9fb199bc12 : Add convolution_backward to aten_interned_strings.h (#70112)
9b14d93d78 : Fix bazel workflows (#70137)
70ed4f3ffc : Try dropping Torch from typeshed_internal (#69926)
e35bf56461 : [Bazel] Add CUDA build to CI (#66241)
e0f4e28c69 : Skip forward-over-reverse gradgrad check for pinv singular on CUDA fo… (#70123)
38e026c14d : Add tanh_backward to AT symbols (#70071)
a6b7521428 : always use newest cmake when both cmake3 and cmake exist (#69355)
254360e182 : [ROCm] Skip test_fn_fwgrad_bwgrad_* unexpected success tests (#70124)
26e32988bd : Revert D32596264: Codegen: TraceType only includes operators being registered
2f622e87bd : Revert D32596274: Codegen: ADInplaceOrViewType only include operators registered
60eb1e53b2 : Sparse CSR CPU: Add block sparse support for MKL path (#68710)
0cfff65395 : Apply contiguous on inputs of cdist backward (#70016)
bc95e5a196 : [ROCm] Skip test_fn_fwgrad_bwgrad_gradient_cuda_complex128 (#70061)
de992c6b21 : Specify ij indexing when cartesian_prod calls meshgrid (#68753)
9ad940d982 : Codegen: ADInplaceOrViewType only include operators registered (#68692)
e66a8ab4f5 : Codegen: TraceType only includes operators being registered (#68691)
0d06616c47 : Add `dict` methods to `ParameterDict` (#69403)
35519428a2 : Remove backward ops for miopen depthwise convolution (#70064)
ab2a739851 : Remove backward ops for miopen transposed convolution (#70063)
ec577300d7 : OpInfo: Convert more sample_input_funcs to generators (#69976)
950957f857 : Fix jit tests assuming sample_inputs is a list (#69975)
ad79d0dd4b : Add `ciflow/trunk` label (#69575)
de296d526f : move torch.testing from prototype to beta (#69668)
de2d9e2966 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
1065739781 : Fix build on latest main branch of thrust - SoftMax.cu (#70039)
92463573d8 : Sanitize string before passing it as shell argument (#70070)
54406314cc : Update PULL_REQUEST_TEMPLATE.md (#70105)
b1d5948b34 : Remove backward ops for miopen convolution (#69987)
f045618dab : dbr quant: extend qconfig_dict support to functionals, part 2 (#69766)
a4173fc887 : dbr quant: extend qconfig_dict support to functions, part 1 (#69758)
c186773d92 : dbr quant: make fqn during prepare op hook required (#69726)
b999f87503 : fx quant: move _parent_name to common utils (#69720)
4f450f44bf : dbr quant: initial support of qconfig_dict for modules (#69719)
0f1ceb34ec : fx quant: refactor qconfig_dict utils to separate file (#69636)
7abb7667a6 : [tensorexpr] Add memory planning to reuse intermediate buffers (#66452)
ac92f7cc75 : [tensorexpr] Remove the optional argument in LoopNest::prepareForCodeGen (#67144)
bbfd7b75ca : [tensorexpr] Move the allocation of intermediate buffers from TEK to CodeGen (#67143)
6075ec15b1 : [tensorexpr] Add BufMap instruction to reuse the memory of dest buf for src buf (#66451)
c7e0951524 : [tensorexpr] Add a stmt recorder to obtain stmt PCs (#66450)
043098ef7f : [quant][graphmode] Rename backend_config_dict folder to backend (#69882)
3d51c88032 : [DataPipe] Unifying API - removing options to have fn_args and fn_kwargs from MapDataPipes (#69561)
b89c283c80 : [DataPipe] Unifying API - removing options to have fn_args and fn_kwargs from IterDataPipes (#69560)
4a6a5d1630 : OpInfos for torch.{flatten, column_stack} (#69237)
ef6f776e82 : [quant][be] Cleanup test cases for eager mode workflow (#69880)
92320dfe6e : [shard] remove set device for nccl (#69946)
9813629500 : [reland][quant][fx][graphmode] Add support for conv add pattern in backend_config_dict (#70007)
62809dc062 : .github: Volume mount netrc to home directory (#70057)
a73c6a45b6 : [reland][quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#70006)
fa582045fc : Fix lint/mypy violations (#70059)
02c63c3006 : extract out c10 targets to the c10 package (#69992)
d459e79500 : [jit][edge] Remove usage of shared_ptr<mobile::Code>. (#68037)
39f65fee47 : [jit] Split ClassType into a separate header. (#68036)
243e135eb4 : Sparse CSR CUDA: Add block sparse support for torch.triangular_solve (#68709)
5f3f327a9d : update `SequentialLR` signature (#69817)
15b9e5f8a4 : Revert D33136054: Remove backward ops for miopen convolution
b199e3c842 : Provide functionality to write custom ShardedTensor ops. (#69874)
1f86e0ee2a : don't compile pow kernels for non-existent case (#70017)
8b9b819d22 : Remove backward ops for miopen convolution (#69987)
b4c4a015d6 : Revert D33163841: Revert D33102715: Back out "Revert D32606547: torch/monitor: add C++ events and handlers"
96fe82ac3c : HANDLE_TH_ERRORS: Move exception translation out of line (#69974)
9ff8c49ed9 : Enable cpu scalar arguments for jiterator (#69861)
ff53ed24d2 : fix NameError of docstring in broadcast_object_list (#69810)
c9e898fef8 : delete TH (#69929)
7f7966a888 : [Docs] Fix the syntax of documentation (#69958)
ebc66bfeea : [Profiler] Pull helper methods into dedicated file (and start `torch/csrc/profiler` folder) (#69255)
b23890177f : [Operator Versioning][Edge] Codegen upgrader_mobile.cpp (#69194)
c4281cc92d : Prototype checkpoint_wrapper (#69955)
c80b5b8c8f : Revert D33102715: Back out "Revert D32606547: torch/monitor: add C++ events and handlers"
8c7f4a0d0b : [tensorexpr] check for index out of bounds in ir_eval (#68858)
76d282d447 : Nvfuser code bump 12 5 (#69964)
a6a1c709ff : Fixed libtorch at::Tensor::print() linking error (#69615)
531da0c43b : change asan test shard to 3 (#69843)
fe7b6446d5 : [LTC] Upstream LazyTensor and LazyGraphExecutor (#69815)
28243769f9 : [LTC] Upstream several internal ops (#69716)
e6a4988b2d : [LTC] Upstream utils in computation_client (#69621)
73a6c36f1b : Add more details to the known limitations section of torchhub docs (#69970)
eb374de3f5 : Back out "Revert D32606547: torch/monitor: add C++ events and handlers" (#69923)
5cc4037369 : [PyTorch][Distributed] Integrate with ShardedOptimizer in the unit test of ShardedLinear (#69569)
dc18048dd8 : [PT-D][Fix] Broken sharded embedding and embedding bag test fix (#69725)
4d5dd00e61 : Remove backward ops for cuDNN transposed convolution (#69902)
3dc3651e0e : Remove backward ops for cuDNN convolution (#69901)
bf15dc22bc : Fix build on latest main branch of thrust (#69985)
98c0fb8b42 : [sparsity] More descriptive error message for missing parameters (#69895)
46ace4ac33 : Add support for masked_softmax when softmax_elements > 1024 & corresponding unit tests (#69924)
32ffad17a9 : [PyTorch][Easy] make GlobalRecordFunctionCallbacks smallvector (#70002)
65ab63310b : [PyTorch] use div instead of mul when calculating sampling probability (#70001)
66406ee0f7 : [PyTorch][Static Runtime] Fix to() w/dtype bool (#69935)
b28a4100ff : scripts: Fix manylinux2014 promotion to pypi (#70003)
38cfacd817 : Tensor: Define operators override functions in TensorBody.h (#68697)
9c7c1b769a : Functionalization: Only include headers for required ops (#68690)
7bb4b683b5 : Codegen: Registration now only includes the functions used (#68689)
6ba18ba87e : Codegen: Generate static dispatch headers per operator (#68714)
303d60b8da : Add TORCH_ASSERT_ONLY_METHOD_OPERATORS macro (#68688)
bab61be43b : Codegen: Add root_name property to NativeFunction{,sGroup} (#68687)
a406a427ae : Revert D33004315: Support torch.equal for ShardedTensor.
1c4c81622c : Support torch.equal for ShardedTensor. (#69734)
8a08e70bf4 : Revert D32596676: Avoid adding torch::deploy interpreter library to the data section
24bc3be146 : [Profiler] Clean up profiler includes. (#69421)
587f8d9924 : OperatorEntry: Avoid unnecessarily templated code (#67986)
986d19c0a7 : Avoid adding torch::deploy interpreter library to the data section (#69245)
c6bcfb152d : [PyTorch][easy] Move GlobalRecordFunctionCallbacks{,Entry} to cpp file (#68483)
873585da2b : [SR] Improve set_inputs (#69087)
aeedd89d4e : [PyTorch] RecordFunction: use SmallVector for ObserverContextList (#68412)
29914f55bf : Skip print_test_stats checks for tests that use repeat_test_for_types (#69872)
d71b8e1a8d : More distutils.version.LooseVersion changes (#69947)
6f9844693f : Revert D32974907: [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops
87bc1f4ed8 : Revert D33024528: [quant][fx][graphmode] Add support for conv add pattern in backend_config_dict
43b8e833e9 : Fix bug in aten::full signature in version_map.h to accurately reflect the current schema (#69860)
5c7817fd43 : Add test operator in upgrader entry (#69427)
47f11730ec : Add testing for forward over reverse gradgrad (#69740)
d0fe7db1f6 : Add formulas for distributions (#69690)
b399a4d7b9 : Add some reduction forward AD formulas (#69661)
3b7fc0243c : [PyTorch] Make TypePrinter take const Type& (#69412)
7a12b5063e : [AutoAccept][Codemod][FBSourceBuckFormatLinter] Daily `arc lint --take BUCKFORMAT`
59000cff91 : [quant][fx][graphmode] Add support for conv add pattern in backend_config_dict (#69778)
408283319a : [Operator Versioning][Edge] Change OP to CALL when there is a valid upgrader (#67731)
9e4d60a552 : [Operator Versioning][Edge] Use check in cpp source file for upgrader (#67728)
bf089840ac : [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#69658)
102684b252 : [SR] Fix stack/concat bug (#68777)
ebc35a7ead : [JIT] Enable freezing for sparse COO tensors (#69614)
33363cea64 : Revert D32498572: allow external backend codegen to be used without autograd kernels
f6cad53443 : Revert D32498569: allow external backend codegen to toggle whether to generate out= and inplace kernels
0ef523633f : Revert D32498570: make codegen'd device guards not cuda-specific. Allow them to be used in external codegen
24ee1d13f6 : Another attempt to fix version comparison check (#69939)
d4f8313497 : Add low level torch.profiler.kineto_profile base class (#63302)
e8d5c7cf7f : [nn] mha : no-batch-dim support (python) (#67176)
37ec99c0e4 : Open source trt lowering workflow (#69381)
930067d129 : Build clang builds with -Werror (#69712)
c76c6e9bd3 : [ONNX] Add BFloat16 type support when export to ONNX (#66788)
800a457b6f : [shard] add ShardedOptimizer (#68607)
457ba1dd3e : Porting index_add to structured kernels, add an out variant (#65993)
9594a94d80 : fix CompositeImplicitAutograd ops improperly labeled (#69863)
269e92669a : [c2] Remove unused private fields (#69709)
fef9981998 : Update run_test.py (#69920)
3e43c478a8 : [Quant][fx] Lower reference conv[1-3]d module (#69228)
b67eaec853 : [DataLoader] more clearly expose 'default_collate' and 'default_convert' to users (#69862)
1188d89a1d : TestMathBits: Call functions with original sample input values (#68947)
1a299d8f1b : Add support for transformer layout of masked_softmax (#69272)
2e7a91c45f : make codegen'd device guards not cuda-specific. Allow them to be used in external codegen (#68531)
aa0cf68c17 : allow external backend codegen to toggle whether to generate out= and inplace kernels (#68530)
b83b6f7424 : allow external backend codegen to be used without autograd kernels (#68529)
8acd0a8b2f : Allow row sizes to support int64/size_t. (#69303)
2c9dd886af : Modify torch.movedim to handle scalar as no-op (#69537)
7503ec58b2 : [nnc][fix] xnnpack ifdef (#69870)
f7294cd865 : [Static Runtime] Skip ReplaceWithCopy when inputs have writers (#69819)
07767569c9 : Properly import LooseVersion (#69904)
fdcb78df38 : `print` fix in `lr_scheduler` (#68338)
f7210f8d90 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
4f81b2adbb : Remove if conditioning from some MacOS workflow steps (#69788)
fa615b332d : added set_printoptions examples (#68324)
d90012689f : [DataPipe] Control shuffle settings from DataLoader2 (#65756)
620a1fcb55 : OpInfos for: normal, bernoulli, multinomial (#66358)
4829dcea09 : Codegen: Generate separate headers per operator (#68247)
badf7b0210 : fix typo changing the generated code (#69899)
51033ec840 : Add forward AD layout check for storage numel (#68631)
6078e12ad6 : Add forward AD support for as_strided (#68629)
fed9b90ed4 : fixing removeProfilingNodes duplicated functions (#1282) (#68804)
82075c0a19 : Create trt plugin base (#69487)
77a4b89411 : Adding windows cuda 11.5 workflows (#69377)
b1ef56d646 : [quant][docs] quantized model save/load instructions (#69789)
2b81ea4f9a : [DataPipe] Export ShardingFilter (#69844)
603a1de871 : Fix inefficient recursive update in ShardedTensor.state_dict hook (#68806)
b08d64202a : Remove THGeneral (#69041)
8dfdc3df82 : [ROCm] Refactor how to specify AMD gpu targets using PYTORCH_ROCM_ARCH (#61706)
c6c3b43498 : [SR][easy] Accessors for value array offsets (#69755)
3d358a7678 : Adds a `maximize` flag to Adam (#68164)
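The `maximize` flag added to Adam above turns the optimizer into an ascent method. A toy plain-SGD sketch of the same idea, assuming the flag simply flips the gradient's sign (not `torch.optim` code):

```python
def sgd_step(param, grad, lr, maximize=False):
    """One plain-SGD update; with maximize=True the step ascends the
    objective by negating the gradient, the same semantics the Adam
    `maximize` flag exposes."""
    g = -grad if maximize else grad
    return param - lr * g
```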
fc37e5b3ed : Hook up general convolution to convolution_backward (#69584)
0420de3539 : [SR] Log SR options (#69809)
f0e98dcbd3 : General convolution_backward function (#69044)
a5b5152d7a : Fix typo in aten::full in version_map (#69807)
af7ee9fc01 : Forward AD for inplace comparison operators (#69597)
0dcbd73eee : Add some forward AD formulas (#69384)
baf92f9d5a : Fix copy_ forward AD to handle broadcasting (#69592)
db32daf4b2 : Do not test batched forward grad for inplace ops (#69558)
f565167fbd : Revert D32606547: torch/monitor: add C++ events and handlers
f575179953 : [quant][fx][graphmode] Move more patterns to use ModuleReLU fuse handler (#69644)
e61fc1c03b : torch/monitor: add C++ events and handlers (#68783)
20f7c893c1 : Populate runtime with upgrader graph (#68773)
17f3179d60 : Back out "[pytorch][PR] Add ability for a mobile::Module to save as flatbuffer" (#69796)
3906f8247a : clear predict_net field from PredictorExporterMeta stored in the exporter to save memory (#68485)
19fecc63e4 : [PyTorch][kineto] Remove heap-allocated vectors in saveExtraArgs (#69737)
731c8255b7 : Fix the TorchBench CI when running with a benchmark branch. (#69795)
59deee8308 : Make c10 tests compilable with -Werror (#69711)
e305e4d4d8 : Suppress common warnings when building by clang (#69710)
41c344d460 : Revert D32739976: fix CompositeImplicitAutograd ops improperly labeled
77213fa4d3 : Fix docker builds for Python-3.6 (#69785)
a5a7e30943 : [DataPipe] Adding interface for MapDataPipes (#69648)
81a60b9813 : [DataPipe] Adding output types to DataPipe interface file (#69647)
d026057bb3 : [PyTorch] Update SmallVector from LLVM (#69110)
1d269e8c15 : [PyTorch] Simple refcount bump fixes in standardizeVectorForUnion & callees (#66695)
5374d5d8c9 : [shard] fix with_comms wrapper (#69493)
e1c583a691 : [JIT] simplify logic for merging types during profiling (#69096)
3219f6a487 : Make vec512 bfloat16 map function clang-Wall clean (#69707)
a5ad2cdab5 : Cleanup ProcessGroup.cpp (#69706)
7ea5926130 : Make blend operations clang-Wall clean (#69705)
195b0d0645 : fix CompositeImplicitAutograd ops improperly labeled (#69169)
29d759948e : use irange for loops 2 (#66746)
91d16cb633 : [Jit] Fix schema of aten::split int[] version (#69745)
9962bfb3c9 : Remove THTensor (#69040)
531b045446 : [tensorexpr] Fix the buf size of discontiguous tensors (#69657)
aab67c6dff : Add native masked_softmax (#69268)
a5996a6857 : [SR] Wrap check_for_memory_leak with DCHECK (#69588)
3bb20ae49f : Make c10d tests -Werror clean (#69703)
be757addfa : Do not use `std::labs` (#69704)
3f02ad09ec : [ONNX] shapeValueMap: Represent symbolic shape as value (#68203) (#69545)
3d32a0c139 : Back out "[wip][quant][graphmode] produce reference pattern for binary ops and then rewrite to quantized op" (#69713)
7dba88dfdb : [nnc][quant] Fix quantized concat (#69596)
b2e79ed5ec : Remove WindowsTorchApiMacro.h in favor of Export.h (#69585)
f87f1d08e8 : [SR] assignStorageToManagedTensors returns a vector (#69568)
9aa1b3e396 : [Static Runtime] [Code Cleanup] Encapsulate function objects within ProcessedFunction (#69595)
41e1ab0785 : Introduce isTensorSubclassLike; add special cases to backwards formulas (#69534)
d3649309e6 : [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#69306)
193e3c484e : .github: Add fbsync to push triggers (#69718)
3e20a74b55 : [SR] Update memory planner docs (#69559)
e963b43691 : Extend explanation of `torch.cholesky_inverse` to consider batched inputs. (#69069)
9ad05f2c3a : Upgrade oneDNN to v2.3.3 and package oneDNN Graph API together (#63748)
17641fed2a : Revert D32942007: OpInfo: Convert more sample_input_funcs to generators
0ccb1dcdbb : Fix inference_mode decorator (#68617)
afb742382a : use irange for loops 10 (#69394)
2d5b3101c1 : Added ScriptFunction pkl exception for issue #61210 #61381 (#67076)
d21646c432 : OpInfo: Convert more sample_input_funcs to generators (#69257)
6de9f0fc94 : OpInfo: Allow sample_inputs_func to be any iterable (#69256)
d2917f705a : Fix errors in `common_utils.py` (#69578)
07932e2735 : [sparsity] Convert function for sparse kernels without a context manager (#66778)
b957b82db7 : Replace issue templates with new issue forms - v2 (#69361)
e948856ce7 : [sparsity] Add ability to keep sparsity parameters in modules (#66777)
13faaff54c : [Operator Versioning][Edge] Implement register function for upgrader (#67730)
4f5806dee7 : [AO] Clear the contents of the torch/ao/__init__.py (#69415)
015e481a41 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
dc87cf5fe1 : Fixes mem_get_info when querying on a device other than the current device (#69640)
24d885f5f8 : [Vulkan] Thread-safe Vulkan backend for OSS (#69576)
ecf9c82f24 : Reduce binary size of TensorCompare.cu (#68835)
3e560239e2 : [Vulkan] Implement clone operator (#69551)
eb2a803406 : Run test_embedding_bag_with_no_grad_tensors only for TensorPipe (#69626)
b61c532f96 : Make make_dual redispatch (#68630)
7956a405ef : Make make_dual also return namedtuple when level less than zero (#68628)
1c43b1602c : [SR] Scope exit guard for memory planner deallocation (#68795)
3b27304d20 : Fix typos in ATen README (#69170)
b10381f42d : Port smooth_l1_loss to structured kernels (#67404)
497ec9d9b8 : Getting NS to work with Ferraris (#68908)
51b6981c36 : [PyTorch Tests] Split out skip logic, make changes for plugins (#67256)
e279963eef : Remove remaining THC code (#69039)
7407e3d6fd : [fix] cross_entropy : fix weight with ignore_index and label_smoothing (#69511)
d44d59aa70 : [BE] Enable C++ stacktraces for MultiProcessTestCase (#69175)
adb619a193 : Adding hardswish, opinfo tests to custom rules (#69399)
a0efa48c7b : [Operator Versioning][Edge] Have operator version number available at the loading stage (#67729)
2808563e69 : Forward fix for failing master (#69625)
3e6164449f : Add efficient zero tensors (#64837)
30bb4e0071 : Add nvidia-smi memory and utilization as native Python API (#69104)
ee60b5ddf3 : Improve efficiency of shape hash by not using tostring (#69496)
2cb385dd6e : OpInfo for `nn.functional.dropout2d`, revise sample inputs for `dropout` (#67891)
f54745a6ff : add `OpInfo` for `torch.diagflat` (#65680)
7e49f4638c : add `OpInfo` for `torch.nn.functional.kl_div` (#65469)
8b20dde932 : add python dispatch test back to CI and fix typo in test (#69565)
afaa184b44 : [Static Runtime] Avoid evaluating expressions of `Node*` for interpreter fallback op (#69489)
fc2614537b : Updating quantization documentation (#68907)
39fb855d91 : [DataLoader] Implementing communication processes for Map-style DataPipes (#68549)
f3983f9c47 : [quant][embedding qat] Re-land Add FX support for QAT EmbeddingBag (#69334)
93aa3603ee : [quant][embedding qat] Re-Land Support Embedding QAT via FX API (#69333)
fc8404b5bc : histc: Avoid dispatch in parallel region (#68520)
2a38e1a76a : Fix TSAN issue in TCPStore (#69590)
0ce49000db : Release GIL during RPC shutdown. (#69586)
c236247826 : OpInfo tests for `(svd|pca)_lowrank` (#69107)
e06af79136 : Fix sign op converter (#69580)
6b950eea27 : Remove finput and fgrad_input from slow3d transpose signatures (#68899)
05946051f8 : [quant][graphmode] initial support for fusion pattern in backend_config_dict (#69335)
2d38d37f5f : use irange for loops (#69533)
8a975c0106 : [LT] Sync with the lazy_tensor_staging branch (#69527)
049debd97d : [Reland][Autograd/Checkpoint] Checkpoint implementation without reentrant autograd (#69508)
3456c2cbc8 : Allow build_android.sh to forward Vulkan args (#69332)
fa39754e11 : [vulkan] Disable shader optimization to avoid Validation Errors (#69331)
bede33e3f5 : [vulkan] Add image format qualifier to glsl files (#69330)
e5a1ee0e5a : [quant][graphmode] Refactor fusion to use the new Pattern format (#68770)
1433160a36 : use irange for loops 6 (#66742)
9a7732e852 : CMake: Support dynamic codegen outputs (#68246)
cd9da3267c : Rationalize API exports in torch_python (#68095)
829b49b867 : Output UnionType str rep with () instead of [] (#69502)
a8232ee1bc : Sparse CSR CUDA: Add block torch.addmv when mat is sparse (#68708)
6df7b75186 : skip ORT tensor in TensorIterator because it doesn't have storage (#68705)
008469c5e2 : [SR] Simplify memory re-use algorithm (#68302)
c309637923 : Making cuda 11.5 workflows periodic (#69323)
baac51ff4a : Add conda-forge dependency for cuda-11.5 (#69541)
358e908162 : Add Union type to TorchScript Language Ref (#69514)
c21169ea41 : [JIT] optimize_for_inference on methods other than forward (#69367)
60ca6776e2 : [JIT] run frozen optimizations on methods other than forward (#68668)
63470f9449 : Sparse CSR: Implement unary ufuncs (with 0->0 correspondence) (#69292)
1a202b0c39 : Docs: Fix broken code syntax in autograd.rst (#69362)
10229e156b : trt engine inspector demo (#66683)
aa9fbb9ae9 : [JIT] check stack size after calling operator (#68788)
bd8d4195a6 : [DataPipe] Small change to generation script and update to DataPipe .pyi file (#69392)
fdfdafd1e6 : [DataPipe] Removing usage of unbatch_level from .batch interface and DataFrame (#69393)
357160e68e : [DataPipe] Unifying API - removing nesting_level argument from FilterIterDataPipe (#69391)
4478b14e4c : [DataPipe] Unifying API - removing nesting_level argument from MapperIterDataPipe (#69390)
9cb52327a8 : [quant][refactor] Move pattern type definition to ao/quantization/utils.py (#68769)
976b076715 : [iOS] Add LibTorch nightly build (#69341)
3edf1b6cee : [PyTorch] Avoid no-op shared_ptr dtor when constructing tuple (#69337)
617a3bd944 : GHA: Re enable mac json uploads (#69387)
945d2e380c : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
4670f0f2c5 : Set non-default backend names to lower case (#69400)
2dd46d3aa9 : FX: ensure node stack trace survives copying (#69368)
ca945d989a : [quant][graphmode][fx] Add default_replay_qconfig for ops like reshape (#69249)
8b1e49635a : [JIT] Separate GPU implementation of frozen_conv_add_relu_fusion.cpp (#68149)
e55b939732 : Enable build-split for all CUDA-11.x version (#69494)
bd8a4a9372 : [wip][quant][graphmode] produce reference pattern for binary ops and then rewrite to quantized op (#68229)
bcd0303834 : [fx2trt][easy] add sparse flag to TRTInterpreter (#69495)
3211588308 : [fx2trt] Separate sign from `trunc_div` and use it for acc_ops.sign (#69486)
e23827e6d6 : [fx2trt] [Prep for release] Add type hints to converters and separate main files (#69458)
a2d1cadfdb : [fx2trt] Add a helper function to generate specs for dynamic batch size (#69405)
cfe3cbb392 : [fx2trt] Use weights shape as normalize shape in layer norm (#69401)
59e98b66ac : Revert D32704467: [Autograd/Checkpoint] Checkpoint implementation without reentrant autograd
bc89528931 : Initialize upgrader and operator version files (#68772)
9e678446a2 : [Pytorch Edge] Add new_empty_strided to tracer (#69492)
65b0f389d2 : [PyTorch][Distributed] Use auto-grad enabled collections for the shared linear op to enable backward grad calculation (#68096)
7c2489bdae : [PyTorch][Distributed] Enable Reduce Scatter and modify all_to_all for sharded linear with more test cases. (#68786)
e032dae329 : [Autograd/Checkpoint] Checkpoint implementation without reentrant autograd (#69027)
4d81175a07 : add VSX dispatch for fft_fill_with_conjugate_symmetry_stub (#68914)
f87faf3c29 : .github: Volume mount local netrc for docs push (#69472)
1859e5f000 : [FSDP] Enforce wrapper_cls as a mandatory kwarg in enable_wrap. (#69358)
00245fed96 : [FSDP] Kill config_auto_wrap_policy, remove policy from enable_wrap, (#69357)
c95277e92a : [FSDP] Remove auto_wrap() (#69356)
f333cde14e : [FSDP] Make recursive_wrap, wrap APIs independent of ConfigAutoWrap. (#68776)
456139d0ae : FX pass: fuse_sparse_matmul_add (#69340)
68b5c86e65 : [Vulkan] Implement slice operator (#69382)
a84ed8be6d : unify compare kernels (#69111)
38c576cfef : Clean up CODEOWNERS for .github/ (#69395)
bf01cd5228 : Move THC_sleep to ATen (#69038)
a974699633 : Skips failing ROCm test (#69456)
b737e09f60 : expose return_types in Python (#66614)
78b7a419b2 : Enable native_dropout/backward for lazy (#69374)
b6f41bb848 : The Jiterator (#69439)
3202028ed1 : [Core ML] Avoid recompiling models when the OS version is not changed (#69438)
c97dc9286d : Revert D32780415: [Static Runtime] Move implementation details from impl.h into internal.h
29a45f0009 : Revert D32743881: [Core ML] Avoid recompiling models when the OS version is not changed
999e93e6a8 : [Static Runtime] Move implementation details from impl.h into internal.h (#69274)
b97903abb8 : [Core ML] Avoid recompiling models when the OS version is not changed (#69234)
e8f4c9cc40 : [LT] Upstream LazyView and view ops IR Nodes (#69277)
0bbe21b172 : [LT] Upstream more util functions (#69098)
bfe5ad28e6 : [Linalg] Add a runtime switch to let pytorch prefer a backend impl in linalg functions on GPU (#67980)
9663e08674 : [Static Runtime] Fix a bug where aten::embedding_bag cannot handle resized input tensors (#69219)
6a4fa86026 : Add OpInfos for misc nn.functional operators (#68922)
da023611d7 : [CUDA graphs] Fixes make_graphed_callables example typos (#69379)
e92b14bf1f : Update CUDA version to 11.3 and setup proper environment variables. (#69383)
a3ca4c83a6 : [PyTorch] Add torch::jit::toString(const Type&) (#66689)
855365e9c4 : Clean up dead code (#69296)
a813ddf5ec : CUDACachingAllocator: make an error message more accurate. (#69174)
088a4feb41 : Update the documentation for AMP with DataParallel (#69218)
80a67cd33c : Limit uploading JSONs to trunk (#69385)
a20b9f8d5c : add HPU case for backend_to_string function (#69225)
6f7a5ddffc : [SR] Use std::vector::reserve in GetLivenessMap (#68884)
ae11264583 : Fixed type checking errors in node.py (#68124)
6baaec30cd : [DataPipe] Adding ShufflerMapDataPipe (#68606)
3e45739543 : [PyTorch][JIT] Use stack.pop_back() instead of pop(stack) for DROP (#69326)
2c84b010e6 : [PyTorch] Use toObjectRef in JIT interpreter (#69324)
5a480831e6 : .github: Propagate WITH_PUSH to docs jobs (#69372)
8f8524a447 : Expose is_metal_available in header (#68942)
77ca153d3e : Remove columns and ones from slow2d transpose signatures (#68898)
7ca2da14e9 : Remove finput and fgrad_input from slow3d signatures (#68897)
73d2ca20e0 : .github: Add credentials for macos test jobs (#69371)
6ed7354435 : [SR][Code cleanup] Typedef/default for kwargs (#69164)
b761172406 : Revert D32786909: [C10D] [Easy] Use pinned memory for HtoD copies in Reducer:: sync_bucket_indices
e0fb228e03 : Revert of adding windows CUDA 11.5 workflow (#69365)
21919be96b : CMake: Update precompiled header and fix support (#67851)
cc46dc45e1 : [SR] Factor logic that determines managed tensors out of MemoryPlanner (#68295)
276cb8f501 : [Pytorch Edge] Make Tracer support xirp metal segmentation (#69328)
a07ffe8d0e : Add OpInfos for combinations, cartesian_prod, sum_to_size, ldexp, as_strided (#68853)
834bd3134e : Back out "Add efficient zero tensors" (#69327)
c572a603a6 : fix for python 3.10 for gradient opinfo (#68113)
572c3e3118 : Fix some usages of CUDA_VERSION (#69092)
dbc8d9c947 : [C10D] [Easy] Use pinned memory for HtoD copies in Reducer:: sync_bucket_indices (#69298)
e2c7bd08b9 : Add operator div (#68528)
bede18b061 : Add support for C++ frontend wrapper on Linux (#69094)
33c3c539b6 : THPStorage: Prefer intrusive_ptr over owning raw pointers (#69248)
9f39a2de0a : [fix] add range check for `k` kthvalue_cpu (#68863)
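The `kthvalue_cpu` fix above adds a bounds check on the 1-indexed `k`. A toy sketch over a plain list showing the validation being added (hypothetical helper, not the kernel itself):

```python
def kthvalue(values, k):
    """Return the k-th smallest element, with k 1-indexed as in
    torch.kthvalue; the range check mirrors the one the commit adds."""
    if not 1 <= k <= len(values):
        raise ValueError(f"kthvalue(): k={k} out of range for {len(values)} elements")
    return sorted(values)[k - 1]
```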
cc85b68984 : .github: Fix ci workflows generation (#69329)
f786b03f98 : ci: Migrate docs push to GHA (#69172)
db5425bcd2 : re-enable layer_norm in autodiff (#69007)
5b2586fe09 : [testing] Ignore expected_regex in assertRaisesRegex for non-native device (#68723)
36ba1b6b3a : Remove unused _convolution_nogroup op (#68829)
791d5087ed : [TensorExpr] Add lowerings for quantized ops: cat, mul, conv1d, relu. (#69055)
83c4451f60 : [TensorExpr] Add a pass to symbolize an input dimension. (#68857)
1e9dcdd2a0 : [TensorExpr] TensorExprKernel: support custom-class constants. (#68856)
48d7d585c8 : [TensorExpr] IR Eval: add more logging. (#68855)
b6bcf5a0f1 : [TensorExpr] Un-const TEK::kernel_func_name to allow recompilation. (#68854)
a0367f8980 : Revert D32404517: [quant][embedding qat] Support Embedding QAT via FX API
ec4c749024 : Revert D32318435: [quant][embedding qat] Add FX support for QAT EmbeddingBag
8dafe6e147 : Forward fix merge conflict (#69319)
52219b1017 : Fix `ChainedScheduler.get_last_lr()` (#69112)
db30696be8 : [pytorch][PR] bug fix for D32374003 (#69278)
915c26f588 : GHA: preserve downloaded JSONs as artifacts (#69258)
cafcf599d0 : Deprecate torch.triangular_solve (#63570)
dde801686b : Expose MobileCode to python (#66592)
bb522c9d7a : Enabling CUDA 11.5 for binary builds, Adding windows workflows for CUDA 11.5 (#69262)
f587267dc7 : Revert D31705359: use irange for loops 8
97750e03a4 : [torch][edge] Add int to the copy kernel. (#69297)
7142b0b033 : .github: Add linux.large to actionlint.yaml (#69304)
4056251a18 : Add missing spaces to an error message (#69289)
2ea70a6462 : Allow Union of scalars to be NumberType (#66591)
d673b1ec59 : .github: Switch ciflow-should-run to self hosted (#69166)
14ed4df305 : [PyTorch][Static Runtime][easy] give to_copy_functor a name (#67701)
21686923e8 : [PyTorch][SR] More debug logging (#67220)
b22e4d4aea : [PyTorch][SR] Add more to() tests & extend debug logging in testStaticRuntime (#67219)
84aa1ddedd : [quant] Remove warning for quantized Tensor in `__dir__` (#69265)
17e5200441 : use irange for loops 8 (#66743)
ff3fc37267 : [BE] rewrite ProcessGroupNCCLTest to be MultiProcess (#67705)
5c816520b3 : ns for fx: fix bug in graph matcher (#69238)
698c35e743 : Add functorch TLS to ATen/ThreadLocalState (#69181)
0de7a618a3 : functionalization: update is_aliased() logic (#68881)
4484c04513 : [quant][embedding qat] Add FX support for QAT EmbeddingBag (#68121)
78ab3cde4a : Do not modify type map from getCustomClassTypeImpl() (#69261)
113684cf81 : Fix crash in `checkCustomClassType` if arg is null (#69259)
668574af4a : Add efficient zero tensors (#64837)
abda069ce2 : [quant][embedding qat] Support Embedding QAT via FX API (#68296)
3157371bb4 : [quant][embedding qat] Fix bug enforcing quant_min <= zero_point <= quant_max for float zeropoint (#68852)
397183f44c : Add Lazy Tensor codegen infra (#69020)
28c519961f : Follow the undefined Tensor <-> None rule better in torch dispatch (#67793)
0465f64bb8 : [DataPipe] Adding BatcherMapDataPipe (#68197)
00ebbd5ef6 : Revert D32010095: [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer
ed3b73fd4d : [Static Runtime] Skip ProcessedNode:: verify_no_memory_overlap() for out variants (#68639)
c60232d89a : [shard] add back init_from_local_shard_and_global_metadata API (#69226)
12621c3a39 : support pure fp16 training in FSDP (#68417)
41d35dc201 : Add ability for a mobile::Module to save as flatbuffer (#67351)
40fb28ea87 : [JIT] Compute input sym shapes in partial eval graph (#68281)
d8a44270d6 : [DataPipe] Simplify BatcherIterDataPipe by removing 'unbatch_level' argument and functionality (#68594)
ad182479b0 : [deploy] docs (#69251)
cbe0a38d8c : Back out "[CUDA Pinned Memory] Event recording with non-blocking copies should track the storage context, not the tensor data pointer" (#69193)
929f2a750a : Back out "[CUDA Pinned Memory] Alternative implementation of pinned memory allocator focusing on multi-threaded scalability" (#69191)
370d0afc1b : Strided masked var. (#68738)
291e56eda4 : [Pytorch Edge] Update Black Box Api with operator versioning (#68678)
b9738e923e : [Operator Versioning][Edge] Add old models and unittest (#67726)
124bb6a19d : RegisterDispatchKey.cpp: remove redundant code (#68983)
fced51eaf7 : [torch][distributed] Check for file existence before invoking cleanup logic in FileStore destructor (#68603)
3c1e2ff9eb : fixing layer_norm cuda bug (#69210)
d72d476875 : [pyper] add flag to disable clip_ranges_gather fusions (#69198)
263125a962 : Fix RAdam docstring on LR default value (#69186)
3bf4080fd9 : Change misleading MaxUnpool2d example to better demonstrate output_size usage (#68936)
2eef5e76db : add `extra_repr` for `nn.ZeroPad2d` (#69206)
cd043c335f : Revert D32329330: [JIT] Separate GPU implementation of frozen_conv_add_relu_fusion.cpp
e6c435bf96 : [LTC] Upstream helpers for c10::Device <=> BackendDevice (#69064)
92f168941e : remove accidentally committed redundant debug print (#68510)
1842364b30 : Strided masked normalize. (#68694)
23633bdb5c : record the datapipe for each pieces of Dataset (#67613)
deaf745aee : Add kl divergence between normal and laplace distribution. (#68807)
486ae5c733 : Dataset & IterableDataset attribute errors prints attribute (#69021)
d507fd63f3 : Check that block height and width are positive in `nn.Fold` (#69048)
c08e95dd9c : Introduce `IS_LINUX` and `IS_MACOS` global vars (#69093)
840fe8e4e6 : Fix MacOS artifact upload (#69188)
f9e69af22e : Modify LU_backward and lu_solve_backward to use linalg_solve_triangular (#63569)
478069d6f2 : Remove duplicate .DS_Store in gitignore (#68981)
e5e0c19882 : OpInfo : embedding_bag (#67252)
1da1707568 : Sparse: Implement simple unary ufuncs operators (#68887)
afff381824 : Automated submodule update: tensorpipe (#69089)
a23d1036ab : Add ops for BI (mean) (#68826)
19b87292fc : Add TE fuser ops (#68825)
7fad758e02 : [FSDP] AutoWrap Main API (#68155)
999e52a795 : [FileStore] log timeout in err msg (#69167)
845a82b635 : Debug positive definite constraints (#68720)
8586f374bc : [Pytorch Edge] Get Operator Version from model file (#68677)
219db3b4e1 : Add OpInfo for torch.linalg.tensorsolve (#68810)
b05237f5e4 : [Pytorch Edge] Add bool to copy kernel (#69106)
e534c5efd7 : CMake: Include instead of copying cpu kernel files (#67656)
f6f1b580f8 : Fix mypy in cpp_extension.py (#69101)
6953b7e269 : [BE] Fix mypy local run on MacOS (#69097)
aa2163eba5 : .github: Add linux.large instance type (#69165)
e60fd10659 : [fbgemm] remove assumption number of rows is in 32 bit (#69066)
ef7ed082ec : [PyTorch] Remove StringView from RecordFunction implementation [2/2] (#68411)
1d84d8c5d8 : [PyTorch] Remove StringView from RecordFunction interface (1/2) (#68410)
22690c2cb6 : Use `cub::FutureValue` to simplify 64bit indexing split of cub scan (#66711)
c48e6f014a : [vulkan] Update VMA settings to reduce memory usage (#69088)
fcd1375b2b : [DDP][BE][Docs] Clarify checkpoint support (#68827)
994f110a6f : Refactor DDP checkpoint tests (#68792)
49abda208b : [JIT] internal build bug fix (#69061)
5e0302e1d0 : [quant][embedding qat] Set FakeQuant zeropoint dtype matches observer (#68390)
8f9f559453 : amend tensors.rst and torch.rst for doc generation (#69030)
0aa9d177fe : [fx] remove CPatcher (#69032)
81246ed01c : Markdown was linking to repo rather than pytorch.org website (#68937)
251686fc4c : Revert D32706197: Sparse: Implement simple unary ufuncs operators
8fef7c09f5 : Remove finput from slow2d signatures (#68896)
cd3e37cbe4 : [Static Runtime] [Code Cleanup] Reduce indentation depth in ops.cpp (#69028)
cfc75c2137 : [JIT] Separate GPU implementation of frozen_conv_add_relu_fusion.cpp (#68149)
7342b654a1 : [static runtime] dequantize out variant (#68664)
d3de3546d9 : Revert D32099294: Split cuda: list cpp files that go in _cu library explicitly
6fea7499c2 : CompositeImplicitAutograd compliance testing (#65819)
b83e8d560b : [LT] Sync LTC branch changes on torch/csrc/lazy/core (#69012)
39ab417107 : [Static Runtime] Fix fb::expand_dims schema (#68636)
5b37ac54cb : dbr quant overhead [14/x]: cache whether an op is a module (#68877)
b47ae9810c : Split cuda: list cpp files that go in _cu library explicitly (#67216)
174eea8a05 : Remove native_functions.yaml dependency from IndexKernel.{cpp,cu} (#66914)
f7d598948a : Remove native_functions.yaml dependency from TensorModeKernel.cu (#66913)
ec1339a48b : [CUDA Pinned Memory] Alternative implementation of pinned memory allocator focusing on multi-threaded scalability (#68906)
0cdeb586ae : [LTC] Upstream some utilities (#69046)
fbaa19a6fa : Sparse: Implement simple unary ufuncs operators (#68887)
3186d36972 : [TensorExpr] Suppress TracerWarnings in test_unsupported in test_jit_fuser_te.py. (#68757)
75ce040620 : [TensorExpr] Allow for 'keepdim' argument in aten::mean in NNC's external call. (#68756)
a93f505ee5 : [TensorExpr] IRPrinter: print sizes and name when visiting a Buf. (#68755)
8cc9ec2f6b : Add option to get input dtype from user (#68751)
ac1fe91dc9 : Clean up some THC includes (#69024)
ce53baf573 : Merging the implementations of ClearProfiling (#67575)
e6a8d15a4c : cpu_kernel_vec: Hoist stride checks out of loop (#68962)
61ea2fc35e : Fix device type / dtype handling for parametrized test names (#65217)
933d5b561f : Fixed links to RNN docs in comments (#68828)
863f321c6d : Fix typo in AdaptiveLogSoftmaxWithLoss docs (#68926)
b8c3693281 : Remove autograd-enabled collective APIs from distributed docs (#69011)
178010455d : Vectorized: Use inline namespace instead of anonymous (#67655)
1d0416397a : [PyTorch] Change from unique_ptr to optional for RecordFunction state (#68397)
7194faed7f : [PyTorch] Optimize mergeRunCallbacks for RecordFunction (#68387)
f1a3512b78 : Adding Linux cuda 11.5 workflows (#68745)
27228656e6 : [FX][docs] Document gotcha about training flag (#68915)
f253370bb9 : dbr quant overhead [13/x]: cache results of get_module_hook_type (#68841)
2ad4727ad9 : dbr quant: fix debugging fqn info for converted model (#68840)
a03fe9ba61 : dbr quant overhead[12/x]: turn off overrides for module convert output hook (#68839)
515db56755 : dbr quant: remove unnecessary outputs hook (#68838)
e3af582f92 : dbr quant overhead[11/x]: speed up module convert hook (#68837)
be70477a7b : dbr quant overhead[10/x]: disable torch_function overrides for leaf nodes (#68836)
1342f19a8c : Add ModuleInfo-based device transfer tests (#68092)
89a145fd91 : Sparse CSR CUDA: Add torch.sparse.sampled_addmm (#68007)
af49805a73 : Port lerp to structured kernels (#68924)
62847a2b9c : Fix bug on empty GLOO_SOCKET_IFNAME_ENV (#68933)
b468566208 : Add ModuleInfo-based CPU / GPU parity tests (#68097)
fb63bb60ec : Strided masked norm. (#68584)
f776f30780 : Keep the sequence or mapping type in `default_collate` (#68779)
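The `default_collate` change above preserves the incoming container type when transposing a batch. A simplified sketch of that behavior for lists, tuples, and dicts, with leaves gathered into plain lists instead of being stacked into tensors (toy code, not the DataLoader implementation):

```python
def collate_keep_type(batch):
    """Transpose a batch of samples while keeping the container type
    (list vs tuple vs dict), mirroring the behavior the commit adds."""
    elem = batch[0]
    if isinstance(elem, dict):
        return type(elem)({k: collate_keep_type([d[k] for d in batch]) for k in elem})
    if isinstance(elem, (list, tuple)):
        return type(elem)(collate_keep_type(list(group)) for group in zip(*batch))
    return list(batch)  # leaf: just gather values
```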
d9e7d85390 : Remove TH/THC Storage (#68556)
f5fa91ba2e : Sparse: Add additional opinfo tests (#68886)
3bd7dbf119 : [Dist CI][BE] Remainder of c10d/store tests run in subprocess (#68822)
250d0bd20b : [RPC][Dist CI][BE] RPC tests run in subprocess (#68821)
51f4ac40fd : ci: Use default blank if no TEST_CONFIG (#69008)
ee59a09772 : Implement sharding for MacOS jobs (#68784)
61a4204d80 : Sparse CSR CUDA: Add block torch.addmm when mat1 is sparse (#68707)
9ee5db490b : neg_sparse: Fix output dtype (#68885)
7b701ce2d4 : Add set_to_none option to C++ API (#68801)
787ded5103 : Add lazy::Shape::numel() (#68314)
3d504ae1b4 : [RELAND] Fix Dispatching not considering List[Optional[Tensor]] for dispatch (#68073)
17ba936da0 : .github: Migrate XLA tests to GHA (#64320)
f398320e0d : packaging: Include lazy headers in package_data (#68817)
871cd7c5b9 : Forward-mode AD support for torch.split, torch.split_with_sizes (#68566)
3315c4b31e : add instructions for unhandled exceptions in assert_close (#68722)
d095f498a0 : Tensor docs (#63308)
6ae34ea6f8 : Revert D32521980: Add linalg.lu_factor
b10929a14a : Add linalg.lu_factor (#66933)
01ddd5dde6 : [opinfo] use dtypes instead of dtypesIfCPU (#68732)
cffad597ea : Tune test_reference_numerics_normal (#68019)
5fdcc20d8d : [JIT][Symbolic Shape Analysis] expose op shape functions (#68748)
f14c16e509 : Revert D32599540: [pytorch][PR] implemented 'torch.distributions.constraints.symmetric' checking if the tensor is symmetric at last 2 dimension.
c2e3b92db4 : partial revert of D32522826 (#68889)
4afa5ea0ab : native_functions.yaml: remove SparseXPU which is added by accident (#68791)
c5f63f859e : Add slow path to `getCustomClassTypeImpl` (#68717)
14dc9759f2 : Revert D32650384: OpInfos for torch.{flatten, column_stack}
96929ea995 : Update empty and empty_like examples in docs (#68874)
d44e610efa : [CUDA Pinned Memory] Event recording with non-blocking copies should track the storage context, not the tensor data pointer (#68749)
bc3bdbc8f4 : implemented 'torch.distributions.constraints.symmetric' checking if the tensor is symmetric at last 2 dimension. (#68644)
1940cc028e : [quant][graphmode][fx] Fork subgraph_rewriter from torch.fx to quantization (#68228)
aceb46e4ce : OpInfos for torch.{flatten, column_stack} (#67555)
cf54416925 : Add docs entry for `adjoint`. (#68869)
c7d5e0f53f : OpInfos for torch.atleast_{1d, 2d, 3d} (#67355)
b69155f754 : Avoid dtype mismatch error in `torch.save` if storages are unallocated (#68787)
208e109dbf : Revert D32633806: Sparse CSR CUDA: Add block torch.addmm when mat1 is sparse
7802953dd5 : [nnc][quantization] quantized ops for BI bytedoc via aten (#68790)
31d36fd35d : fix sccache issue on Windows CPU (#68870)
be7e159e71 : Remove extraneous logging (#68830)
7d8a79b6f3 : [nnc] llvm_codegen quantization types for vectype (#68736)
b28ddd72d3 : Sparse CSR CUDA: Add block torch.addmm when mat1 is sparse (#68707)
b5b62b3408 : Cleanup old TD logic (#68842)
d9f3feb5a2 : [SR] Use std::vector::reserve for StaticModule constants (#68834)
8fb9ce4927 : Update Documentation to Make CUDA Call Explicit (#67973)
79b67d9a4a : [Quant] Refactor handling of FixedQParams operators (#68143)
998daf44d6 : Allow get_attr node to be int64 type (#68818)
78dce417a1 : [BE] Simplify magma installation logic (#68778)
2cd48d14ef : Fix `test_svd_errors_and_warnings` warning message when cuda >= 11.5 (#68683)
8e343ba5db : Revert D32611368: [pytorch][PR] Initial version of general convolution_backward
84047ff342 : Add API usage logging to ShardedTensor and fix a few tests. (#68771)
959cb03132 : Populate operator_input_sizes_ (#68542)
c0e6dc9ac7 : [pytorch] Fix loading from checkpoint after "maximize" flag was introduced in SGD (#68733)
73f494d690 : .circleci: Remove migrated CUDA 10.2 build (#68782)
23288fdacc : Making norms inputs independent (#68526)
e7e1b76106 : Require CMake 3.13 when building with Ninja (#68731)
3282386aa4 : Added additional string to search cpu flags for vnni detection (#67686)
98e51895ef : [dist_quant] change op registration to each file instead (#68797)
445b31abff : Initial version of general convolution_backward (#65219)
a31aea8eaa : [quant][graphmode][fx] Add support for specifying reference quantized module mapping in backend_config_dict (#68227)
b845b9876b : [sparsity] Fix for the failing pruner test (#68794)
d6a68e0b8d : [PyTorch][3/N] Enable the rest forward spec options for ShardedEmbedding and ShardedEmbeddingBag. (#67799)
5d300e761d : Add OpInfos for parcel Activation Functions I (#68521)
74e6d2ce67 : fix typos in jit_language_reference.rst (#68706)
e7d8f096c9 : [sparsity] Fix GPU training for sparsity (#66412)
0b0674121a : Fix strict aliasing rule violation in bitwise_binary_op (#66194)
d176c82bd5 : [sparsity] Fix and enable the pruning tests (#66411)
b46c89d950 : Add linalg.solve_triangular (#63568)
a2e35e167b : refactor: update f-string for swa.utils.py (#68718)
9554ebe44e : [Dist CI][BE] c10d gloo tests run in subprocess (#68504)
ddc22ea3b2 : [Dist CI][BE] test_c10d_nccl run in subprocess (#68503)
39ec0f321b : GHA: add print_tests_stats step to MacOS workflow (#68669)
a66ff81837 : [DataPipe] Optimize Grouper from N^2 to N (#68647)
148f323856 : Revert D32541986: [pytorch][PR] [opinfo] use dtypes instead of dtypesIfCPU
7c6a8a47db : [BE] minor improvement to dist quantization (#67401)
fb556c91ce : [BE] delete frontend.cpp (#67400)
d2a90f91bc : [opinfo] use dtypes instead of dtypesIfCPU (#67619)
2d06c081ca : Fix test issue with householder_product for non-contiguous inputs. (#68231)
3b3dc1ade8 : Sparse CSR CPU: add `triangular_solve_out` (#62180)
e1c449ff34 : dbr quant overhead[9/x]: precalculate when to skip op_convert_after_hook (#68432)
ba230de118 : dbr quant: remove more asserts from hot paths (#68431)
95c00cf029 : speed up quantized relu6 inplace kernel (#68404)
592053f115 : dbr quant: simplify relatedness logic (#68374)
f1021bcf38 : dbr quant overhead[8/x]: small speedup in op_needs_quantization (#68373)
74ba1067a6 : dbr quant overhead[7/x]: speed up AutoQuantizationState.reset_to_new_call (#68372)
b7d58745c8 : dbr quant overhead[6/x]: remove unneeded isinstance checks in `op_convert_before_hook` (#68371)
b3a7d696b3 : dbr quant overhead[5/x]: remove unnecessary asserts (#68370)
16a6e0612d : dbr quant: clean up key types in AutoQuantizationState mappings (#68369)
3fc9bc43c6 : dbr quant overhead[4/x]: speed up hook type calculations (#68351)
c72ffee497 : dbr quant overhead[3/x]: speed up AutoQuantizationState.mark_cur_op_complete (#68350)
c7ecf1498d : dbr quant overhead[2/x]: precalculate op_convert_info (#68347)
9fba8971a7 : dbr quant: move model level utils into own file (#68346)
629f9a5532 : dbr quant: clean up AutoQuantizationState.get_op_convert_info flag (#68345)
52cc9cb0ee : dbr quant: refactor AutoQuantizationState._get_packed_param_name (#68344)
2755cf457c : dbr quant: refactor AutoQuantizationState._get_input_args_quant_dequant_info (#68343)
57472ec414 : dbr quant: refactor `get_quantized_op` to only use `seen_op_info` (#68342)
9cf4779ec9 : dbr quant: refactor `get_func_output_obs_type` to only use `seen_op_info` (#68341)
f8b084c563 : dbr quant overhead[1/x]: remove expensive calls to named_modules (#68309)
ed6ef0eec4 : dbr quantization: inline scale and zp (#68251)
ca499567d2 : barebones numeric suite for quantization with dynamic tracing (#67776)
d0eff8d846 : Strided masked softmin. (#68463)
75955e4ef8 : [clone][sparse] Add `torch._C._sparse` namespace (#68672)
95f4cd0ba9 : Implement topk with sort for some cases (#68632)
e554d8b89c : Fix retry on connect failure decorator (#68600)
8e51381bac : Make AOT compiler generic (#68637)
c41d8290b3 : Rename shard_lengths to shard_sizes to be more in line with Tensor sizes. (#66464)
af564e73b8 : Strided masked log_softmax. (#68461)
578507cb7b : Fix nanmedian result using more CUDA memory than necessary (#68591)
6cca14d02f : [fx2trt][easy] Replace all network.add_activation() call with helper function (#68676)
37edb7483a : [torchelastic][1/n] Fix `caffe2.test.distributed.launcher.api_test` flaky tests (#68624)
a545a409f8 : [quant][graphmode][fx] Support input_quantized_idxs and output_quantized_idxs in the new convert (#68042)
993b7a2052 : Remove doubly nested anonymous namespace (#68555)
5456d8c8f3 : Add vectorized Jacobian and Hessian computation with forward AD (#67041)
7bb401a4c9 : Add forward AD support for miscellanous operators (#67820)
e358c49a5b : Add OpInfo test and fix a couple cases (#66294)
21d203b5ca : Add internal assert for tangent layout mismatch for view ops (#66293)
2455cc2adf : Address case when layout of tangent is not same as base (#66292)
bbe2aae84c : Support cuda 11.5: install magma for cuda in conda (#68665)
183dcdf551 : [reland] Fix flaky test_nccl_timeout (#68544)
875ba3dddb : [quant][trt] Add support for torch.addmm in TensorRT (#67537)
ee4cfaa286 : [SR] Add utility class to determine tensor ranges (#68284)
a6d862c50a : [quant][graphmode][fx] Add support for weight and bias dtype in backend_config_dict (#68602)
da4a95c79a : [ROCm] Use hipCUB/rocPRIM scan algorithms for large index support (#68487)
5880a2f1ef : Allow fuse unsqueeze cat sum with multiple input (#68650)
2cab77f810 : Masked normalization infrastructure and strided masked softmax (#68333)
f99f5ee088 : add support for None in assert_close (#67795)
0809553cf0 : refactor assert_close to be more modular (#67794)
f74779e403 : [android] Lite interpreter naming for android nightly publishing (#68651)
4bcff4733d : Add OpInfos for parcel Elementwise Binary II (#68085)
c2c859bdf2 : [quant][embedding qat] Add benchmarks for QAT Embedding+EmbeddingBag (#66560)
f82f14de17 : [libkineto] Refactor 4/n: Simplify activity logger step 2/3 (#68329)
18312313c4 : [Profiler] Add missing guards (#65812)
343723e2ad : [PyTorch][JIT][easy] Delete unnecessary overload of MemoryDAG::mayAlias (#66966)
ced57eb490 : [PyTorch][Static Runtime] Delete incorrect alias analysis code (#67075)
833dcaf2d6 : Sparse CSR: Add `torch.sin` (#68123)
758d7dea9c : torch.monitor - Initial C++ Stats (#68074)
68d8ab0cc6 : [const_fold] Fix call_module const folding (#68614)
39747dc456 : [nnc] Lowerings for flatten, xnnpack prepack op (#68470)
ca92111758 : Add native_dropout (#63937)
a39060c001 : textray demo for unity
ff125a3624 : Minor changes in documentation (#68557)
9ce3c630ba : [Docs] Mention `torch.bfloat16` in `torch.finfo` (#68496)
913ac27112 : Fixes forward AD codegen for multiple formulas (#68535)
e7002c62ae : [nnc] External functions quantized via Dispatch (#68572)
a990a7ac31 : [torchelastic] Remove stale `test_get_default_executable` test (#68609)
003f6ccec6 : [BE] rename some tests in test_c10d_common (#67828)
3757a16c7a : Adding custom testing based on opinfos input for ops with custom rules. (#67500)
71a031e954 : Adding Custom Rules to Device Propagation (#66973)
77db720c65 : Moving parts of the Shape Registry into a common file (#66948)
244691db98 : surface ncclUniqueId store broadcast error (#68597)
ab1d879b33 : [WIP] forbid aliasing between the outputs of a differentiable graph (#67732)
9f4e004abd : Revert D32283178: Add linalg.solve_triangular
48771d1c7f : [BC-breaking] Change dtype of softmax to support TorchScript and MyPy (#68336)
748d9d2494 : Revert D32187063: [static runtime] dequantize out variant
0706607abc : Add linalg.solve_triangular (#63568)
f120335643 : [static runtime] dequantize out variant (#67873)
7d38768d84 : Rename splitter result (#68303)
533e72e0a4 : Fix DLPack CUDA stream convention (#67618)
d5d2096dab : [testing] make @dtypes mandatory when using @dtypesIf (#68186)
857fed1f42 : torch.linalg.qr: forward AD support (#67268)
a2d187a672 : [BE] MapAllocator: report map error on Linux (#68545)
b1aa45a8a7 : Fix `_make_wrapper_subclass`'s storage_offset handling (#68268)
f0e2ad5037 : Stop warning spamming about vmap in gradcheck (#68586)
f9ef807f4d : Replace empty with new_empty in nn.functional.pad (#68565)
6c9cf5e6ea : [quant][embedding qat] eager mode QAT for Embeddings (#66429)
dbbb02474b : [GPU host alloc] Fast path for size 0 malloc (#68532)
4635f5711f : [static runtime][dper] multi_env tests for static runtime: selective enable (#67467)
35712a8eb4 : [reland] simplify init_from_local_shards API (#68021)
952ca25daa : Sparse CSR: add `convert_indices_from_csr_to_coo` (#66774)
96ba2099d1 : Fix c10d TCP store with mutex (#68499)
146a7f68e2 : Enable desync root cause analysis for NCCL (#68310)
9807787135 : `scatter_reduce` (#68115)
e72b9db48e : [fx2trt] add converter for acc_ops.hardtanh (#68550)
9d9ca88f5c : [predictor][trt] Expose more CUDA/CuDNN info to at::Context and BC stage 1 (#68146)
d71092f668 : [android][fbjni] Update fbjni to 0.2.2 (#68400)
53bfb00ee1 : [bugfix] TensorList args in functionalization pass (#68395)
b0bdf588ea : [ONNX] Release values cached in global object (#68210)
4eb772fde6 : Refactor saving jit::Module to mobile .pt in 2 steps: (#66494)
e2aeb4a7af : Improve native layer norm backward perf (#68238)
f3e2fefe09 : Actually enable PYTORCH_RETRY_TEST_CASES for linux tests (#68486)
2f37a39a5c : [quant][graphmode][fx] Refactor node_name_to_target_dtype to make it more clear (#68317)
3b4f072383 : Remove TH/THC Storage data and copy functions (#68127)
4e21d77dbb : Use TORCH_CHECK in MapAllocator (#68424)
693fe2fd9b : docs: Added Union to supported types in documentation (#68435)
61206ba4db : [SR] Add StorageGroup abstraction (#68279)
cac3cd1433 : add torch.diff support for n greater than 1 (#67260)
3da2e09c9b : Added antialias flag to interpolate (CPU only, bilinear) (#65142)
143491e0ad : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
3e3bf40b0a : Revert D32452012: [pytorch][PR] Fix flaky test_nccl_timeout
54ac64f035 : Revert D32477989: [pytorch][PR] Actually enable PYTORCH_RETRY_TEST_CASES for linux tests
0dc3f829d9 : Nvfuser code bump 11 5 (#67943)
01b30922dd : [static runtime] fuse gather+to+lengths_to_offsets (#64075)
faa1e8b7cf : Fix flaky test_nccl_timeout (#68403)
6186b90c53 : [Contrib][Fakelowp] Change Lut Size for Tanh (#68334)
f6696c5a85 : export CPUOffload in _fsdp package (#68308)
9c15523793 : Attach unused parameter info to static graph error message (#68413)
9de730ebba : q_avgpool: Loop over batch dimension inside operators (#66819)
1cade067e3 : [Vulkan] Vulkan backend is now thread-safe (#67733)
2317e28e9e : Enable complex autograd for col2im / im2col (#68199)
fea2bb64c8 : OpInfos for stft, istft, fftshift, ifftshift (#68198)
6e640a0acf : Revise the socket implementation of c10d (#68226)
4c346bd073 : Added forward derivatives for neg, diag, inverse, linalg_eig (#67837)
aa9ee8d02a : [Static Runtime] Avoid copying function objects per StaticRuntime instance (#68368)
fd85d925b0 : Fix some sign issues (#68361)
173c0f8a98 : Actually enable PYTORCH_RETRY_TEST_CASES for linux tests (#68486)
affa3f846c : Sparse CSR CPU: add `torch.addmm` (#65606)
5cfca5524c : [JIT] clear GraphFunction.optimized_graphs_ after freezing a module (#68316)
75ccb07b26 : [SR] LOG->VLOG (#68477)
515d9fb2a9 : Add OpInfo for torch.histc (#67452)
a8bcfc90f5 : fix fsdp overlap flaky test (#68415)
27eca2c6fd : Revert D32467139: [pytorch][PR] [android][fbjni] Update fbjni to 0.2.2
284758b585 : correct NLLLoss parameters default value (#68426)
76e9dbb0f4 : [torch.fx] add code-gen customizability and support for setting breakpoint in code-gen'd forward() call (#67139)
8954c92529 : [PyTorch][Static Runtime] Borrow outputs in static_runtime::VarTupleUnpack (#68161)
755be54c77 : [PyTorch][Static Runtime] Borrow outputs in static_runtime::dict_unpack (#68160)
bbc24222d2 : [PyTorch][Static Runtime] Refcount bump pass in native_ops (#68159)
86399d8e0c : Add histogramdd to torch.rst (#68273)
ed00a763a2 : [PyTorch] Don't force refcount bump when accessing DictEntryRef key/value (#68158)
04056df475 : [android][fbjni] Update fbjni to 0.2.2 (#68400)
df129fa8d6 : [PyTorch] Support MaybeOwned<IValue> (#68157)
030ee34216 : Add OpInfo for torch.nonzero (#67459)
10e9d80ad1 : [PyTorch][Static Runtime] Don't track scalar ivalues (#67702)
391be39575 : Use reduced precision switch in `test_addmm_baddbmm_overflow` (#68399)
5c3529a86d : [lint] small pass to make lint clean (#68367)
639258499f : [PyTorch][Static Runtime] Add & use "small array" for ProcessedNodeInputs (#67935)
6acde23bec : [PyTorch][Static Runtime] Switch input/output repr to 2-byte offsets (#67934)
8678472ec8 : [PyTorch][Static Runtime] Save 2 pointers in ProcessedNode (#67860)
45b2f41c3e : [package] fix torchscript classes in package (#68028)
ba16b1eca7 : [numpy] Alias `arctan2` to `atan2` (#67010)
6226a3cf74 : [Vulkan] Implement permute operator (#68274)
bc3d380ed1 : Throw error when saving storages that view same data with different type (#66949)
bf60c6e71b : [JIT] remove prim::SetAttr from list of ops with side effects (#68311)
add79722dd : Correct `householder_product` docs. (#68335)
01a8862582 : OpInfo tests for `nn.functional.max_pool{n}d`. (#68075)
33e9a0b5f6 : [Reland] Python tracer. (#68325)
438ca7603f : Fix sign comparison issue in Histogram.cpp (#68294)
ec742c65d5 : Fix a sign comparison issue in BatchLinearAlgebraLib.cpp (#68293)
d541aa8cbe : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
27cc11226d : make broadcast fastpath the default for currently rolled-out ops (#68365)
7ee84ad321 : Refactoring quantized op tests to combine test classes (#68282)
065018d812 : [pytorch][xros] Ensure all pytorch mobile operators build ok in XROS mode (#68266)
86c1368611 : [fx][const fold] Add test/example for skipping quant/dequant pattern (#68378)
722af775c3 : [ONNX] ConstantMap setters to update existing value instead of emplace (#67630) (#67812)
d32efe8bc2 : [ONNX] Remove the argument use_external_data_format of export() method entirely. (#67080) (#67811)
9d25554d45 : [ONNX] Allow registration of custom symbolics for aten namespace (#66481) (#67810)
09615cd0b0 : Adding Dynamic Conv and ConvT ops/modules (#68176)
529ebae0ac : Bugfix for TorchScript RNN RELU and TANH (#61274)
2fd468e5f8 : [jit] Set the graph input types before interpreting the graph during tracing (#68242)
9ed49449b3 : [SR] Add net level record functions (#68091)
0823d18fcd : make TSComputation ctor explicit (#68286)
7b958fbec4 : ci: Build periodic jobs with DEBUG=1 (#67192)
ea0a558487 : GHA CI: make the default config use only one GPU (#68382)
6adbe044e3 : Added nearest-exact interpolation mode (#64501)
e3bcf64ff8 : [qnnpack] Remove redundant fp16 dependency (#68011)
0cf46fb0de : [fx2trt] fix a bug in conversion from negative dim to positive dim (#68360)
549e014963 : [docs] fix torch.histc's min/max arg types (#64191)
ccd9675569 : [lint] Disable modernize-use-nodiscard (#68354)
c697eeba72 : [JIT] Combine concat nodes where possible (#67000)
30cda0b28c : [bugfix] functionalization pass for view ops without a 'self' first argument (#68339)
5b05983497 : [bugfix] fix two edge cases in functionalization (#68269)
12026124cc : Avoid the view for mkldnn case in 1D convolution (#68166)
56024e91c9 : GHA: Enable flaky test reporting by setting PYTORCH_RETRY_TEST_CASES=1 (#68300)
24b60b2cbf : [lint] lintrunner fixes/improvements (#68292)
43874d79e7 : Fix failing test due to a bug in NumPy when using OpenBLAS (#67679)
d1c529bd0b : replace platform specific CI environment variables with generic ones (#68133)
1c0d6ff835 : [fx][const fold] Allow to set up a function to modify const_nodes for split_const_subgraphs (#67784)
4c87aa77d1 : [DataPipe] Traverse DataPipe graph excluding primitive and callable (#67783)
1adeeabdc0 : Fix trt tuple(Dims) throwing issue (#68318)
be281fc597 : Check for None in torch.jit.Graph.create (#68253)
6fb8ebcd92 : [tensorexp] Add strides to Buf (#68018)
f7366ca51b : implemented quantize_per_tensor_dynamic and added a corresponding test script (#68004)
cb14a258a2 : [c10d] Fix object-based collectives for debug mode (#68223)
ec94bb787a : [TensorExpr] Add a way to define target triple/cpu/attrs for llvm codegen and turn on the AOT workflow. (#66527)
52e93fca2c : [TensorExpr] Fix some TE python bindings. (#68232)
e511a7a5b4 : [TensorExpr] Remove non-determinism in iterating over unordered_set of intermediate buffers. (#68277)
80339e85c5 : Fix disabling bot with subprocessing (#68290)
282221c5d6 : Fuse unsqueeze, cat, sum for inline_cvr (#68289)
48c8de45b0 : [ONNX] Remove the argument example_outpus of export() method entirely. (#67082) (#67809)
a8b93cb3ec : More aggressively market functorch.vmap when torch.vmap gets called (#67347)
da5ffe752a : Add reporting for flaky tests in CI (#68150)
8bf150f21b : Revert D32178667: [pytorch][PR] Python tracer for profiler
a82e51a7ae : Move some cub templates out of the header file (#67650)
6ddaf3bd37 : [LT] Upstream TsNode, TsNodeLowering, TsLoweringContext (#68154)
f6e45102d2 : [quant][embedding qat] Support non-partial functions in qconfig comparison (#68067)
66b52d5b49 : [TensorExpr] Convert linear_clamp_run to using schema in NNC lowerings. (#66523)
06e8cb9e04 : Manually Disabling two TestDistBackendWithSpawn tests on ROCm, test_ddp_profiling_torch_profiler and test_ddp_sync_bn_training_vs_eval (#68255)
33353fb828 : Python tracer for profiler (#67407)
96d116fec2 : [JIT] Add additional debug output when op cannot be found in AliasDb (#68099)
98bab78e11 : Revert D32039318: [pytorch][PR] Bump dlpack.h to latest version
5c3a9f3fdc : adding opinfo for torch.nn.bilinear and torch.nn.glu (#67478)
dc24503a89 : Fix Hash(c10::Scalar), account for garbage data in union (#68201)
0bd0a67c4f : [lint][fbcode/caffe2] CLANGFORMAT
e795315c63 : Changes and fixes to prepare for dynamic conv (#68175)
1181628d85 : BE: Use TORCH_CHECK instead of explicit c10::Error (#68187)
799ebce3aa : Add algo recorder/replayer to lower.py (#68194)
613c1aca6d : Adds support for automated error and warning testing (#67354)
89d556f648 : add VS extension in doc (#63944)
9cb65df79f : [Static Runtime] Fallback to disabling manage_output_tensors instead of crashing when wrong API is used (#67939)
3dc0754c53 : [pytorch][mobile] deprecate the LLVM-based static analyzer (#68180)
301369a774 : [PyTorch][Fix] Pass the arguments of embedding as named arguments (#67574)
9571eb599c : [lint] fix up clangtidy lintrunner integration (#68192)
6afb414c21 : Nan in linalg eig (#67544)
d049772538 : Bump dlpack.h to latest version (#65047)
0420545639 : Enable all dtype combinations in `torch.Tensor.view(dtype)` (#66493)
f9ea41f257 : Fixes spelling error writeable to writable, improves warning, and documentation (#67664)
1e8f836c44 : Remove OpInfo non-contig inputs (#67677)
4fe3965b3a : Fix dtype arg typing for Tensor.type doc string (#67019)
b07a11929d : Array API: Add torch.linalg.cross (#63285)
40bedf6206 : Fix test_triangular_solve testcase enumeration (#67635)
db014b8529 : Add `set_deterministic_debug_mode` and `get_deterministic_debug_mode` (#67778)
cd4e31ff21 : [LTC] Add some comments to BackendDevice() (#68156)
7b376bf844 : Remove ProcessGroup from TensorPipeAgent initialization (#68128)
b473ca999b : [lint] add cmakelint to lintrunner (#68191)
6cade3362b : [fx-acc] add optimize_noop graph opt (#68131)
fe90313d02 : Avoid index_put_ overhead in histogram kernel's inner loop (#67815)
61a94495d9 : [DataPipe] adding ZipperMapDataPipe (#68032)
bd5f33f91e : demo backend decoupled from operators (#66100)
97a386805e : [Pytorch Edge] Add selective macros to metal ops (#68134)
c2642b6465 : Sparse CSR CPU: add `torch.add` with all inputs sparse (#64391)
84d3df8027 : Fast cuda layer norm (#67977)
a1ace029e2 : Add host-side memory requirement for `test_softmax_64bit_indexing` (#67922)
9e7b314318 : OpInfo for `nn.functional.conv1d` (#67747)
35f1617001 : Implement Entropy methods for Binomial and Multinomial distributions (#67609)
864c6b3794 : [nnc] aotCompiler outputSpec support quantized outputs (#67711)
362c6069b9 : [nnc] Lazy lowerings registration; custom classes network params (#67623)
f89572f417 : Add feature: zeros_like() from a dense tensor to a sparse tensor (#68108)
5efe5e243a : Ease constrain for fuse path in trt lower (#68148)
d4ae789655 : OpInfos for new_blah functions and some _like functions (#67357)
4466ba8f30 : Working POC of define-by-run quantization (#64676)
f02efc749a : [Dist CI][BE] Run each test in its own process for test_distributed_spawn (#67901)
aea4e61ec3 : skip test_jit_legacy (#68129)
a6a2616558 : Automated submodule update: kineto (#67445)
a229c3e51a : Add complete type name in error message when fail to export model (#67750)
1f07efd0f2 : [SR] Fix aten::split schema (#68135)
47bc47f2b9 : [SR] Add runtime check to correct bad schema alias info (#67825)
ca7d0062ad : [PyTorch Edge] Better error message when training attribute is not found (#68103)
0e366b8e5f : Make `torch.fx.experimental.fx2trt.passes` a package (#68139)
f171c78c04 : add unpack_sequence and unpad_sequence functions (#66550)
a510f4139b : Fix lambda function broke torch.save
22e73f616c : Update unpack_dual to return named tuple (#68062)
d6e6064efc : [LT] Upstream backend interfaces (#67927)
c075f0f633 : Update rpc testing to include USE_TENSORPIPE directive (#68080)
a3bb95c1b5 : don't include label in ci: sev issue (#68093)
ecd5b1a8d4 : [SR] Native implementation for aten::split (#67476)
746a31b290 : Logger integration format (#67962)
8dfbc620d4 : don't hardcode mask type in mha (#68077)
ae5864498d : torch.allclose opinfo (#68023)
9a2db6f091 : Factor backend routing logic out of convolution forward (#67790)
147de8243b : Fixed deprecation warnings with `.data<T>()` in SpectalOps.cpp (#67993)
6011c35a79 : [LTC] Upstream class BackendDevice (#68027)
a6c0edff1a : fix gradcheck to generate valid input for forward AD complex (#68001)
94b6fa6f8b : Adds an optimizer instance variable to ChainedScheduler (#68010)
cb2a41e508 : [PyTorch Edge] Don't use LeftRight in mobile (#66064)
b0817e19e0 : [PyTorch] Avoid reading file from stream for 0 byte Tensor storage (#67787)
bf31d4b2b5 : [PyTorch] Replace copy_ with data_ptr<float>() since input Tensor's dtype is guaranteed to be float (#67788)
6b44e75f6b : aliasing fixes (#66977)
3f1a3f7b18 : Fix ads dense arch regression (#68071)
91af74c934 : remove Generate* macro files (#67940)
790763b0fe : Add an option to disable reduced precision reductions for FP16 GEMM (#67946)
078c655985 : [nnc][mobile] temporarily disable quantization external functions (#68029)
b1a42298a4 : Simplify example for nn.Flatten (#67472)
d8f0087e08 : .github: Fix sccache for macOS workflows on push (#68094)
1b2a366932 : [SR] Enforce checks for resizing of the internal buffer in MemoryPlanner in unit tests (#67941)
8d025bbc2d : .github: Migrate macOS workflows to GHA (#67717)
55e3b23abe : [Pytorch Edge] Generic Build Features for Selective Build (#67817)
43ef6816f2 : OpInfo for `nn.functional.cross_entropy` (#63547)
eaf0457eef : [distributed][docs] Delete distributed optimizer section from RPC and add reference to namespace docs page (#68068)
7c90bd77ec : Test functionalization pass in python (#66101)
fe46d6c68f : functionalization: map copy_() -> to().expand_as() (#67878)
be4150139a : bugfix for conditional functionalization (#67715)
4100a5cc48 : Revert D32286934: [pytorch][PR] replace platform specific CI environment variables with generic ones
273f7ae9b3 : fx: Update fx.rst (#68043)
c7eaec86f0 : [NCCL] Patch bfloat16 support (#67843)
45ac6f2b65 : [quant] Fix comparison against reference for test_qat_functional_linear (#68061)
a9c2f11d2a : Update Freezing Logic and add new passes (#68024)
d2438a8901 : [qnnpack] Lock before weightpacking in qlinear (#68012)
e86058559a : Op info for activation functions 2 (softsign, tanh, tanhshrink, threshold, celu, sigmoid, mish, hardsigmoid) (#67492)
726e2ed715 : [lint] add more lints to lintrunner (#68069)
cbf596bf8e : Sparse CSR CPU: add `addmv_out` (#61536)
7d931fb082 : replace platform specific CI environment variables with generic ones (#68022)
a027551358 : [LT] Merge cache.h (#67929)
a473417076 : [LT] Merge permutation_util into master (#67766)
442d7d72de : fixed type checking errors in options.py (#68056)
acb035f513 : Revert D31609714: Fix Dispatching not considering List[Optional[Tensor]] for dispatch
6e53d6df83 : [SR] Introduce StaticMethod (#67981)
5e19fb61fd : [SR] Release reference to JIT module if possible (#67911)
9ae3f3945b : Add remote_module logging to the __new__ method. (#68035)
96b4f2296e : CppSignature: Compare types by their mangled names (#67987)
114ef8c5ea : Add SiLU backward Aten symbol (#67665)
c581f56c74 : Fix Dispatching not considering List[Optional[Tensor]] for dispatch (#66506)
803e88d418 : [DataPipe] Fixing pickling issues with fork and demux (#67930)
577a4d34a7 : making import_module private and deprecating public method (#67990)
0a9cd6d461 : Removes unnecessary `no_pretrained_model` from test_quantize_fx.py (#67836)
f9422e1c6b : Fix deadlock for multi-output forward AD (#67995)
f8297d40fc : Adds a `maximize` flag to SGD. (#67847)
c5e5264be2 : Disable TF32 in `pinv_jvp` and `pinv_backward` (#67948)
417dc7f86c : Revert D32007691: [pytorch][PR] Op info for activation functions 2 (softsign, tanh, tanhshrink, threshold, celu, sigmoid, mish, hardsigmoid)
36d9a74bc6 : Enforce that test cases extend from correct TestCase (#67819)
25cd81876d : Fix typo grid_sampler_3d_cuda (#67752)
4b1d044498 : [WIP][resubmit] Don't #define NUM_THREADS (#68008)
a2ab06514b : Fixes CUDA vs CPU consistency for index_put_ when accumulating (part 2) (#67189)
3f048c637f : [distributed] Render `torch.distributed.optim` members (#67885)
fd198a2fea : [fx2trt] fix import in oss tests (#68016)
0d8a8a2e41 : [fx2trt]organize converter utils (#68015)
5b036d5f2b : [Doc] [ONNX]Fix a broken url for ONNXRuntime custom op (#67944)
82398e38ab : Upgrade and fix boto3 version to 1.19.12 (#68025)
9094947b0a : use better secrets for upload labels workflow (#68013)
db9b4f1a37 : [ROCm] Bump magma source to pickup memory leak fix (#67225)
0b09d62cf3 : [hackathon][DataPipe] adding .pyi file generation for torch.utils.data.datapipes (#67374)
2e523ed229 : [JIT] additional support for CallMethod with autocasting (#67925)
f57c63032e : [ONNX] Fix reciprocal when input is not floating point (#67471) (#67808)
eb22d06e5e : [ONNX] Use human readable enum for dtype scalars (#66822) (#67807)
958d517643 : [ONNX] Fix new_full and full_like for Python 3.9 (#67124) (#67806)
37688148ae : [ONNX] Support opset 15 (#67121) (#67805)
ead59b5ff3 : [ONNX] Suppress ort warnings in onnx related test (#67054) (#67804)
ea60e7d559 : Op info for activation functions 2 (softsign, tanh, tanhshrink, threshold, celu, sigmoid, mish, hardsigmoid) (#67492)
a1d733ae8c : Avoid convert trt.Dims to tuple in hot path (#67960)
4a8f27445d : [Quant] Add dynamic QAT Linear module (#67325)
db456d16ee : `torch.lobpcg.backward`: do not save non-Variable types with `ctx.save_for_backward`. (#67994)
8e2528132b : [lint] small updates to .lintrunner.toml (#67942)
d201102d36 : [lint] Add the rest of the grep linters (#67932)
53f118c800 : [lint] improve mypy lintrunner config (#67936)
419c58ea9c : [lint] add newlines linter to lintrunner (#67894)
4b021280ad : [lint] add nativefunctions to lintrunner (#67890)
5bb5bfccf7 : [lint] add lintrunner support for circleci_linter (#67872)
b3770766c4 : Fixes deprecation warnings in `test_optim.py` (#67954)
b546cdf401 : [SR] Out variant for prim::NumToTensor (#67856)
0dc99dcf59 : Update __init__.py (#67900)
5bc89275dd : [SR] Eliminate no-ops (#67437)
191b48b12f : [torch.fx] Fix replace pattern mechanism (#66442)
9fb3ba9d7b : Revert D31762735 (#67924)
9cacf2b718 : Add custom zipper script to zip python modules for torch.deploy (#67006)
ae501a9727 : [PyTorch Edge] Update bytecode version compatibility check (#67417)
80178d6152 : [DDP] Fix some issues with code example in DDP docstring (#67883)
22afe82ce3 : [rpc] Switch RPC agent check to TORCH_CHECK and add more descriptive error (#67882)
efdb17b984 : Add meta support to tensor range factories (#67032)
9e8016d8c4 : Revert D31932215: [pytorch][PR] Don't #define NUM_THREADS
10411e3561 : [quant][fusion] Fix additional_fuser_method for fuse_fx (#67876)
f70e8064f4 : Don't #define NUM_THREADS (#67258)
b1ecfc6d45 : Add timeouts for GHA jobs for pytorch/pytorch (#67912)
f6402c469e : (torch/elastic) fix scale down bug caused by calling rdzv_handler.shutdown() on premature agent failures (#67749)
240e8d5cc5 : Updated searchsorted functionality (#66818)
f6a4c80a5a : Refactor cuDNN Convolution memory format and Conv-Bias-Relu code (#65594)
cdd5d16489 : [Foreach] Implement L1&L2 norm (#62646)
e7a3bbce89 : [nnc] Add support for dynamic shapes in TensorExprKernel (#67861)
a4a6d056e6 : Add ownership to more edge tests (#67859)
9dafb6434b : remove use of THGenerateAllTypes, clean up (#67867)
ee7412dd29 : autodiff fix for autocast_to_xxx (#67648)
9269080b47 : [PyTorchEdge] backport test (#67824)
02e35ce17b : [ONNX] Update onnx function export with comments and clean up (#66817) (#67803)
ace2183195 : [FSDP] Address follow up comments for CPU offload (#67813)
823ae3a4ff : [forward ad] Also check layout of grad matches that of self for inplace over view (#67816)
13a69d23b1 : Add retry logic for test_multitenancy and documentation for find_free_port (#67775)
33b7790907 : Fix conv_transpose3d backward with non-contiguous grad_out (#67829)
07a08fb95f : Fix typo in LinearLR docs (#67840)
53ebccbe78 : Fix warnings produced when running test_optim.py (#67756)
b098264f22 : Revert D32063662: [pytorch][PR] TST Adds device transfer into module info tests
bb8978f605 : Revert D32175963: Converting hardswish to structured kernels with metatensor support
4d5338228f : Revert D32175960: Moving parts of the Shape Registry into a common file
38af37f409 : Revert D32175958: Adding Custom Rules to Device Propagation
b1ac7f51a1 : Revert D32175957: Adding custom testing based on opinfos input for ops with custom rules.
0c8569bec9 : Revert D32175959: Merging the implementations of ClearProfiling
2f68878a05 : [Static Runtime] Add a comment on clients taking ownership of managed output tensors (#67554)
ba9d9d488e : Implement padding with slice layer (#67888)
daaad47d9c : Allow torch::deploy unity embed xar file of any size (#67814)
5a48868d39 : [qnnpack] fix benchmarks after an API update (#67768)
f1754319e3 : Merging the implementations of ClearProfiling (#67575)
b8e165e841 : Adding custom testing based on opinfos input for ops with custom rules. (#67500)
853298481b : Adding Custom Rules to Device Propagation (#66973)
d04389e6f0 : Moving parts of the Shape Registry into a common file (#66948)
57335a9ee3 : Converting hardswish to structured kernels with metatensor support (#66899)
ec8a71f9ac : Dtype Analysis for Unary and Binary ops with Metatensors (#66898)
4b084bc832 : Benchmarks for various fusers (#67622)
31fc9d6539 : Introduce version control for tensorrt converter decorator (#67886)
f5daa9f76b : [iOS] Enable ARC for CMake build (#67884)
c2ceba8ada : [PyTorchEdge] Move all serialize/deserialize files to a separate target (#66805)
b0c05297f9 : [Static Runtime] Arena allocate StorageImpls for managed tensors (#66130)
01809731bc : [Static Runtime] Cache managed tensor Storages (#66638)
56dda833ff : Small updates to RELEASE.md (#65489)
d5d342b237 : Sparse CSR CUDA: Support mixed memory format input for triangular_solve (#66401)
a20a64af4e : Increased tolerance for test_zero_model_parallel tests (#67765)
c541c69e89 : Fix minor typo in contributing.md (#67855)
8bed46ef38 : [WIP][LTC] Upstream class Shape (#67672)
e8ac8c005d : [NOOP][clangformat][codemod] Enable CLANGFORMAT (#67854)
938bab0bfd : [PyTorch] Add int version of vectorized PrefixSum to Benchmark (#67865)
641ba36a4e : fix annotation for Demultiplexer (#65998)
da59bd1d13 : TST Adds device transfer into module info tests (#65488)
3d4a6ff15d : Revert D32154788: Move Concat Linear out of Optimize Numerics
86aea79217 : Revert D32154786: Fix Freezing Docs Parameters
279af1a668 : Revert D32154787: Formatted with Black
08d630b9a6 : Formatted with Black (#67792)
db15a7c0b3 : Fix Freezing Docs Parameters (#67201)
ea94dde573 : Move Concat Linear out of Optimize Numerics (#67196)
6f0a1f2b8d : Only set sccache_epilogue to run on build job exits (#67798)
618bab593c : .github: Output expected vs. actual (#67703)
90d311b268 : [RPC] Add exception logging to constValue() (#67802)
7c739e1ab9 : Resubmit #67161 (#67735)
8b0c2c18eb : Fix pretrained=True for test_pt_onnx_trt (#67818)
af1bd88fc4 : Allow scalars for aliased binary ops {`multiply`, `subtract`, `divide`} (#65937)
bd8feb33d4 : Update distributed contributing guide to show how to run one test in test_distributed_spawn (#67801)
4262c8913c : Remove native_functions.yaml dependency from TensorTopK.cu (#66794)
927da4d32f : Remove native_functions.yaml dependency from Sort.cu (#66793)
61ed9285dd : Automated submodule update: tensorpipe (#67845)
cfd998c197 : Remove ProcessGroup RPC backend placeholder as part of 1.11 (#67363)
8e1ead8e4d : Fix the kl_div docs (#67443)
04fe4382ec : Automated submodule update: tensorpipe (#67769)
b8d365ca3a : ci fix (#67826)
1baed45c6b : [fbcode][static runtime] out-variant for quantized::linear_dynamic_fp16 (#67663)
99c7a9f09d : fix bfloat16 autocast skip (#67822)
2486061c72 : [JIT] make x (+ or -) 0 and x (* or /) 1 peepholes type promotion aware (#67688)
88d86de7d8 : Add lint to ensure all test files have headers with ownership info (#66826)
2766662ca9 : [PyTorch][2/N] Basic implementation of ShardedEmbeddingBag using ShardedTensor. (#67188)
fd77fff0b1 : [FSDP] customizable backend in test (#67135)
83e8612d11 : Clean up test autograd (#67413)
ca445645f9 : Revert D31902471: [nnc] Add support for dynamic shapes in TensorExprKernel
603116a6ae : [Core ML][easy] Assign missing properties to the executor (#67737)
fddfb81dd0 : Add BF16 type to _autocast_to_full_precision (#67707)
05e17e7ff6 : Add API usage logging for several other RPC APIs. (#67722)
5fd93fb5f8 : broaden retries on TestHub (#67779)
89b02fc70b : [StaticRuntime][Easy] Correct typos in test_static_runtime (#67739)
4d601a1c36 : codegen: Split up source, header and Declarations.yaml generation (#67497)
fe91906ad7 : Remove Declarations.yaml dependency from gen_autograd (#67496)
9b1caca185 : [SR] Macro to clean up c10::Symbol maps in passes (#67484)
0eaa01ead1 : [SR] Add EliminateTrivialEquallySplit graph pass (#67166)
6cc6a5fd9d : Fix a bug in TorchBench ondemand CI. (#67743)
f455030931 : Adding a docstring for memoryless in observer args (#67690)
98be5216e2 : Revert D32104006: [pytorch][PR] Added forward derivatives for neg, diag, inverse, linalg_eig
6df0d7d502 : [lint] add basic lintrunner compatibility (#67110)
89c4e8c22b : [NOOP][clangformat][codemod] Enable CLANGFORMAT for some folders in caffe2/* (#67746)
a5b57c9433 : Avoid prematurely casting GEMM parameters `alpha`, `beta` to `scalar_t` (#67633)
3f33ada8d5 : .github: Forward fix generating GHA workflows (#67777)
15a3c374e2 : [nnc] Add support for dynamic shapes in TensorExprKernel (#67197)
88c61b8d06 : Added forward derivatives for neg, diag, inverse, linalg_eig (#67339)
a23814577b : Overload TestCase not vanilla TestCase for some elastic tests (#67700)
201f7d330a : Remove duplicate check in distributions arg validation (#67741)
1ffd43cf0c : generated-pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit migrated to GHA (#67695)
4a106e41e9 : [fx2trt] Add torch.nn.function.pad support for fx2trt (#67498)
383c1f51b1 : [nnc] Fixed handling of 0-sized tensors in cat (#67734)
31cf3d6aad : Fix adaptive_max_pool2d for channels-last on CUDA (#67697)
ff5c61a74e : [TensorExpr] Add lowering for aten::max (reduction). (#66519)
00afe9ba7b : [TensorExpr] Add lowering for aten::embedding. (#66518)
008a58d226 : [TensorExpr] Add lowering for aten::conv1d. (#66517)
d58ef2bbff : [TensorExpr] Fix lowering for aten::softmax for the case when dtype parameter is None. (#66516)
ea4d983885 : Modify "gemm" code to enable access to "sbgemm_" routine in OpenBLAS (#58831)
05d1dcc14c : Split channels_last test cases for tensor conversion OpInfos (#67368)
92a85ecbab : add a quantized hardsigmoid inplace variant (#65740)
e32d7f7525 : ATen | Fix potential crash if `MTLCreateSystemDefaultDevice` return nil (#66859)
510336499b : [PyTorch][Static Runtime] Separate overlap checks for easier debugging (#66637)
3db536e55e : add jit_trace_module python binding (#67425)
a8757cdd70 : type inputs (#67424)
d352587210 : add a few convenience helpers to removeAllXXX to Block and Node (#67423)
7f3326a6d2 : [FSDP] CPU offload resubmit (#67249)
06d1be2447 : [NOOP][clangformat][codemod] Enable CLANGFORMAT for caffe2/caffe2/* (#67624)
e86a5a3a1a : [Static Runtime] Add PyTorchPredictor::predict_managed_result to return managed output tensors (#65598)
18955d3564 : Raise warning when calling collectives on non-member group objects (#67639)
54241a9cfa : [quant][fx] Add support for fused modules in _convert_do_not_use (#67245)
91971dfc2a : [BE] [GHA] Use `aws ecr get-login-password` (#67709)
16ee6409ee : Changed value constraint of exponential dist (#67184)
885da61d7d : [PG NCCL] Disable NCCL health check (#67668)
0b2f68eadf : Remove special FX OpInfo list (#67520)
96e3d1a76c : Remove native_functions.yaml dependency from Sorting.cu (#66621)
7deb1726ea : Remove native_functions.yaml dependency from ScanKernels.cu (#66620)
9e97ccbd7a : .github: Migrate iOS workflows to GHA (#67645)
a831713786 : [PyTorch Edge] Use Integer Subtraction (Instead of Float) in Non-FBGEMM Dequantization (#67115)
23bd3cf5b2 : [PyTorch Edge] Parallelize Quantize and Dequantize Tensor (#65845)
92cfda1785 : [PyTorch Edge] Clean up Quantize Tensor code (#66220)
16c62a6dc9 : [PyTorch Edge] Optimize Dequantize Tensor with Intrinsics (#65844)
9cef2033f3 : Modify decorator for acc op converters (#67636)
5ad169b7cc : Adding in Wrap functions for FSDP from Fairscale (#67292)
8f63cfda14 : [LiteInterpreter] Specify `Loader` to `yaml.load` (#67694)
b00206d473 : [vulkan] Use 3D textures for everything (#67647)
0ee8473af7 : [SR][easy] Fix FuseListUnpack 0-use corner case (#67165)
6b1d8e5bb2 : Revert D31861962: [qnnpack] Remove redundant fp16 dependency
3e218dbd27 : [PyTorch] Capture function args from schema by reference (#65951)
33d62266f2 : [PyTorch][easy] Avoid allocating OperatorName strings in append_operator (#66134)
2644725937 : [SR] Migrate gather_ranges_to_dense to new FuseListUnpack (#67164)
82f7f8d471 : [PyTorch] Adopt IValue::toTupleRef() where obvious (#65505)
eb1b8a2160 : pytorch_android_gradle_custom_build_single migrated from Circle to GHA. (#67577)
d9bac7c316 : [PyTorch] Add IValue::toTupleRef() (#65504)
7cd62621fb : [PyTorch] Adopt faster Tuple::create (#65381)
9e71ea292d : Fix test_init_pg_and_rpc_with_same_socket by retrying on addr in use error (#67638)
4061239fdd : [qnnpack] Remove redundant fp16 dependency (#67281)
cd51d2a3ec : Adding OpInfo for `logical_or`, `logical_and`, `logical_xor` (#67178)
c65f332da4 : torch::deploy unity and its demo (#67134)
ec6b472e0a : [vulkan] Add prepacking for conv2d_transpose (#67358)
152f665dee : Inserted check for PyObject_IsInstance in THPVariableCheck (#67588)
c4bf196334 : Strided masked reduction: mean (2nd try) (#67088)
53e6aca8b3 : [Pytorch Edge] Make More Classes Selective (#67397)
45d5b3248b : Fixed C++ BatchNorm pretty_print() with optional momentum (#67335)
234bd6dc56 : [quantized] Add bilinear quantized grid_sample (#66879)
0cbfd466d2 : Remove ProcessGroup from TensorPipeAgent initialization (#66708)
ba369ea053 : check to ensure profiler_edge is only added when use_kineto is on (#67494)
76f57cd442 : [CODEOWNERS] Remove @neginraoof (#67631)
e80cb08cc8 : [jit][shape_prop] Fix jit registration of unpack_sizes ops for prepacked (#66737)
251278d385 : [skip ci] set more tests with owners for distributed and elastic (#67583)
4d99bc839b : Remove TH/THC Storage functions for unused dtypes (#67480)
a122ba776a : Fix less_than_lowest warnings (#67422)
da29655797 : Disable miopen test for convolution on mobile (#66564)
885a8e53ba : replace onlyOnCPUAndCUDA with onlyNativeDeviceTypes (#65201)
39ad7b670e : [SR] Native implementation for aten::squeeze (#67441)
00da7b9a3b : Set test owner for vmap (#67582)
9cdd1d7e48 : Docs module check (#67440)
0d7cf825fc : [SR] Drop support for aten::__is__ and aten::__isnot__ (#67550)
7fbcf79684 : [tensorexpr][nnc] Support quantization (#66676)
97f29bda59 : Relaxes tolerance on ROCm test_noncontiguous_samples_matmul (#67593)
d0662f2f76 : Add adaptive_max_pool OpInfo (#67405)
e01279cc2e : Disable reduced precision reductions for fp16 GEMMs (#67578)
510e3026a9 : [numpy] add torch.argwhere (#64257)
a95c94f075 : [fx2trt] fix acc_tracer when run against module that contains ScriptModule submodules (#67567)
b24c34426f : Add OpInfo for torch.unique and torch.unique_consecutive (#67529)
aa16de517d : Revert D31984694: [pytorch][PR] make `TORCH_(CUDABLAS|CUSOLVER)_CHECK` usable in custom extensions
4a2bbc619d : move functionalize fallback out of aten/core (#67564)
c00806beda : Add skipXLA and expectedFailureXLA decorator (#66857)
69adbc8778 : Fix splitter_base and add unit test for trt splitter (#67569)
d4493b27ee : make `TORCH_(CUDABLAS|CUSOLVER)_CHECK` usable in custom extensions (#67161)
ad89d994c9 : [Static Runtime] Support recordio format input for benchmark (#67530)
2cac92f470 : [SR] Migrate sigrid_transforms_torch_bind to new FuseListUnpack (#67163)
289b0f7b04 : Resent the reverted PR: Add register_frozenpython.cpp to the torch::deploy interpreter library in the OSS build (#67303)
ba74b03b0d : Back out "[sharded_tensor] simplify init_from_local_shards API"
5c77ccefe0 : Resolves #67227 documentation issue (#67379)
66202b7f8d : [Pytorch Edge] Expose runtime operators versioning (#67385)
60a80c5bbd : [jit] Move ModuleIndex operator to selective build. (#67483)
12ede84dbb : [jit][edge] Enable lite interpreter to correctly handle INTERFACE_CALL instruction. (#65972)
d6b15bfcbd : [jit][edge] Load interface methods to corresponding ClassTypes. (#65971)
6259601c8a : Set test owners for tests with unknown owners (#67552)
c19cda5782 : [skip ci] Add test owners for a special hi-pri class of tests (#67553)
fcba8018c2 : Update codeowners for sphinx conf (#67548)
69f86ecd3a : Sparse CSR CUDA: add `torch.add` with all inputs sparse (#63948)
285d5a55b9 : Add API usage to torch.RPC (#67515)
ddc9bd335b : Adds reference vs. noncontiguous OpInfo test (#67434)
16d937b0df : Fix strided _conv_double_backward() with 3D input / weight (#67283)
bf31995194 : Add OpInfo for `nn.functional.cosine_embedding_loss` (#67465)
bcd301a457 : Add OpInfo for `nn.functional.ctc_loss` (#67464)
e2e20e79fb : Add OpInfo for `nn.functional.poisson_nll_loss` (#67371)
8b8fb4f4e6 : Add OpInfo for `nn.functional.gaussian_nll_loss` (#67376)
1d900ee22f : Add OpInfo for `nn.functional.hinge_embedding_loss` (#67381)
c6a6c09383 : Add OpInfo for `torch.nn.functional.gaussian_nll_loss` (#67356)
2e156f649e : Sort output of *NativeFunctions.h (#67046)
f95ed474ac : Norms Op Info (#67442)
d58f209326 : add dequantize support for fp16 + cuda (#67234)
99282126dc : pytorch quantization: document the custom module APIs (#67449)
acdc754918 : [quant][graphmode][fx] Add support for ObservationType.OUTPUT_SHARE_OBSERVE_WITH_INPUT in backend_config_dict (#67210)
2bb20c0e48 : [quant] Move test file to fx2trt folder (#67129)
5e46a4f6bd : Fixes to make trt timing_cache really work (#67524)
96c868217c : [deploy] fix TypedStorage serialization (#67499)
4052393af8 : Revert D31450501: Wextra caffe2/
18807273cb : Fix Ads build broken due to comparison type mismatch (#67526)
26241994b2 : Remove the argument strip_doc_string of export() method entirely. (#66615) (#67278)
43d51254bf : Deprecate the argument _retain_param_name of export() method entirely. (#66617) (#67277)
40920185ac : [ONNX] Remove the argument enable_onnx_checker of export() method entirely. (#66611) (#67276)
609da98154 : [ONNX] Update value name copying logic for onnx (#66170) (#67275)
7c2d3e6d32 : Wextra caffe2/ (#67319)
d8bde98f36 : Workaround the bug of TRT which creates extra outputs (#67327)
fc82ad186a : Add Initial NNC Dynamic Shapes Flow (#66136)
2661507488 : Adding support for Symbolic Shapes in Inplace Ops #65642 (#65729)
d0bc01fac2 : ci: Migrate hardcoded docker builds to GHA (#67455)
6696c59af4 : Adding `optimizer` attribute to SequentialLR (#67406)
354363b57a : [SR] Native implementation for aten::size (#67346)
9f01937caf : [PyTorch][easy] Deduplicate memory planner creation code (#67265)
82c356505f : Revert D31894777: [pytorch][PR] Replace issue templates with new issue forms
afb8434440 : [SR] Native implementation for aten::view (#67341)
60472594e1 : [jit][edge] Implement torch::jit::Function for mobile function. (#65970)
5ef62c88a9 : [jit] Replace get_executor() with call() in abstract Function interface. (#65969)
8363da3f92 : [SR][C2][easy] Benchmarks report # of ops (#67436)
b8f07689f2 : [ROCm] Enable frexp support for ROCm builds (#67226)
0795735351 : [jit] Clean up unneeded virtual methods from Function interface. (#65968)
bd5e6fe5ac : Skip complex128 dtype for test_addmm_sizes_all_sparse_csr Windows test (#67453)
5b8b2382d1 : Mark mv as CompositeExplicitAutograd (#67373)
f3aae62942 : Port `tril` and `triu` to structured kernels (#67055)
4a1f73ccb3 : [qnnpack] Remove asymmetrical padding parameters in qnnpack (#67102)
4e873d6799 : Formatting changes (#66257)
cee4e8f35d : Add FlexiBLAS build support per #64752 (#64815)
55b7387e45 : Timing cache for TensorRT (#67214)
0032fa7725 : Add a Functionalization pass in core (#64432)
b0a8ca2cb5 : add tags for inplace view ops in native_functions.yaml (#65412)
03f3a0331b : add slice/select/diagonal_scatter variants as primitive ops (#64430)
665c148e42 : move some codegen utilities into utils.py (#63094)
b100a9ea82 : Back out "Make fb::sigrid_hash_compute_multipler_shift return a std::tuple<int64_t, int64_t>" (#67456)
a8f85300da : [quant][graphmode][fx][test] Refactor test code for quant-fx2trt unit tests (#67070)
325b15039c : Add FSDP tests to verify forward overlap and memory usage (#67117)
938afa37a3 : Remove process group barrier and all_reduce function calls from tensorpipe agent (#65946)
0c93c8e39a : Disable linux-xenial-cuda10.2 config (#67344)
6ed68f3f84 : Document `torch.jit.is_tracing()` (#67326)
b27b1ff809 : Fix deadlock when forward and backward AD are used at the same time (#67360)
d3f03af496 : Fix indentation in forward_grad.h (#67359)
6900aacf54 : [fbcode] Fix operator_benchmark with jit mode (#67382)
eb8b80b76f : Add test owners for elastic tests (#67293)
2366948085 : [LT] Add ir_util for ComputePostOrder (#67282)
6293e0ad61 : update coverage ignore to not skip whole modules (#67395)
961fd76a9a : [ONNX] Relax check on Prim::PythonOp nodes for ONNX_FALLTHROUGH (#66172) (#67273)
02a78bdba7 : [ONNX] Support conv-bn fusion in blocks (#66152) (#67272)
9deb602726 : [ONNX] Use Reciprocal operator instead of Div(1, x). (#65382) (#67271)
eea20bfa15 : fixed type checking errors in fuse.py (#66799)
7da9c4ed2e : [SR] NNC out variant for aten::where (#67255)
3aadff651c : [quant][embedding qat][bugfix] Fix and test QAT EmbeddingBag from_float error message (#66989)
62feadd76f : Replace issue templates with new issue forms (#65917)
6827d36c1a : [Static Runtime][DI] Fuse list unpack and variadic_grouped_accessor_op (#66585)
90b722c544 : specializeGradSumToSize patch - propagate profile_none through profile_ivalue (#63941)
fc664ac272 : [sharded_tensor] easier initialization for Shard (#66351)
71a67d0ce9 : [sharded_tensor] simplify init_from_local_shards API (#64481)
0117ada47c : [quant][graphmode][fx] Add input_idx_to_dtype and output_idx_to_dtype to backend_config_dict (#67067)
e332d80299 : [iOS][CoreML] Remove shape information from TensorSpec (#67412)
04aba42ed7 : [Core ML] Assign Core ML computationUnit to executor (#67411)
7e1a53cd5c : [Core ML] Fix error messages (#67410)
fae1c0a434 : [PyTorch] Reduce refcount bumps in ClassType (#66724)
c8dd90c858 : [PyTorch] Fix extra refcount bumps in ClassAttribute (#66723)
1cfdb6f4c6 : [quant][fx] add pass to duplicate dequant nodes with multi use (#67118)
9e175400ac : Moving python binding to _C and its decl to the right pyi file (#67365)
882446c1d2 : add frozen_numpy to :builtin_registry_cuda target (#67396)
9ebc6357b3 : [SR] Vectorize int version of fmod (#67313)
dea8b27433 : [Pytorch Edge] Make some torchbind classes selective (#67340)
f20614af21 : [jit] Allow custom class functions to be traced in invokeScriptMethodFromPython(). (#67380)
2267a984eb : [ROCm] Add sparse mappings for CUDA->HIP translation (#67323)
708f7b1209 : Update extending doc to cover forward mode AD (#66962)
d9a5668983 : [ONNX] Add dim argument to all symbolic (#66093) (#67270)
cb15df76ad : [ONNX] Update onnxruntime to 1.9 for CI (#65029) (#67269)
9900310133 : Fix sign warnings in CUDA kernels (#66753)
3a1aa31a2f : Add dummy bfloat16 VSX implementation (#67331)
7484941eaa : Wrap TRTInterpreter result with wrapper (#67307)
fa70d72e95 : Set kernel func name from AOT Compiler (#67229)
5347dab851 : Set test owners for onnx tests (#66860)
72e25c9f4e : [Static Runtime][DI] Add variadic grouped_accessor_op (#66289)
1ec732bc46 : Add fp16/fp32 autocasting to JIT/TorchScript (#63939)
0101b1ea2b : [skip-ci] .github: Set linux gpu instances to be non-ephemeral (#67345)
b55a2500d2 : [jit] Remove graph() call from abstract Function interface. (#65967)
7c48b9ee25 : Sparse CSR CUDA: add `triangular_solve_out` (#61858)
4b9464f4b9 : [fx]Early return if a node tries prepend self (#67068)
2669e4ed4e : Revert D31945507: .github: Switch 8xlarge to 4xlarge instance_type
7d1c0992e1 : GHA: add back runner type for distributed tests (#67336)
f2f7b02b4c : Add support for vmap+fwdAD for basic out-of-place op (#66291)
a3aa9df59f : Add barrier to ProcessGroup trampoline (#67236)
e52d0e773b : [tensorexpr][ir][quant] Adding qscale and qzero to tensorexpr IR Buf (#66675)
632719c214 : Enable c10d trampoline tests on MacOS (#67205)
c88da701e2 : [hpc][inference] enable cuda graph in engine holder (#66738)
28570664d5 : [Vulkan] Add vulkan_perf_test with google benchmark (#67230)
cdc9b26281 : [Vulkan] Optimize cat operator for channel dimension (#67207)
d691bc1207 : Revert D31937065: [pytorch][PR] fix binding to the wrong python module
dfa7225a38 : [Pytorch][Bootcamp] Add fix and testing for non-vectorized Adadelta optimizer to handle complex numbers (#66587)
fcefed9517 : Revert D31935958: Add register_frozenpython.cpp to the torch::deploy interpreter library in the OSS build
1541bb823a : .github: Switch 8xlarge to 4xlarge instance_type (#67299)
7ac8ed741d : fix binding to the wrong python module (#67246)
0e8bd0c8d6 : [Pytorch Delegated Backend] Add macro to define sentinel value of debug handle. (#66584)
00b0d4eeed : Add register_frozenpython.cpp to the torch::deploy interpreter library in the OSS build (#67280)
f510193e22 : [jit][edge] Export maybe-used interface methods from modules. (#65966)
a72a6365c9 : disallow requires_grad=True in make_tensor for integral inputs (#67149)
81d188101f : .github: Use 4xlarge instances for linux gpu (#67264)
fdc74e2373 : Port triangular_solve to structured kernel (#61857)
6ce14e7b51 : [PyTorch][Static Runtime] Cleanup: add valueVecFromFastSet (#66996)
066a980e7b : [PyTorch][Static Runtime][easy] Fix ValueGroup comment (#66965)
1926156752 : Prevent TCPServer get deleted too early (#67204)
273ab55fc4 : Revert D31914868: Strided masked reduction: mean (2nd try)
2ca552160b : [DDP] logging improvements (#67059)
197dec14b3 : .github: Change periodic docker jobs to always_rebuild (#67267)
99b34b320b : Make fb::sigrid_hash_compute_multipler_shift return a std::tuple<int64_t, int64_t> (#67123)
1ce500f56f : [easy][PyTorch] Use `at::native::is_nonzero` (#67195)
a33d3d84df : Strided masked reduction: mean (2nd try) (#67088)
6c22b96082 : [Pytorch Edge] Extend Tracer to Custom Classes (#67004)
34ee5b11ff : .github: Add 4xlarge nvidia gpu to scale-config (#67262)
7052c41899 : .github: Add workflow to build all docker images (#67215)
d7ac6e977a : Fix test_create_store_multi flaky test (#66953)
49bf24fc83 : Updated error message for nn.functional.interpolate (#66417)
d47a9004c8 : [skip ci] Set test owner for mobile tests (#66829)
204ffd33ee : [CUDA][Linalg] Add gesvd as SVD fallback; optimize SVD gesvdj performance (#64533)
828a9dcc04 : [nn] MarginRankingLoss : no batch dim (#64975)
129e99fbce : __getitem__: Ensure Tensor subclasses are not treated as tuples (#67202)
3c61700cf7 : `torch.linalg.householder_product`: forward AD support (#67043)
5b345e767e : QNNPACK: Update to use pytorch/cpuinfo.git repo as a third party dependency (#67106)
2abffaf050 : Consolidate c10d and dist imports in test_c10d_common.py (#67203)
71b7182ee2 : [skip ci] Set test owner for deploy/package tests (#66830)
49251d05ec : [skip ci] Set test owners for NNC tests (#66833)
a6d702a3ee : add support for ubuntu 20.04 to CI docker images (#66942)
83355f9537 : [SR][easy] Alias for c10::Symbol::fromQualString (#67162)
38cbaeb8a4 : Update deprecated import paths. (#67250)
0c1b7545b6 : [Static Runtime] Add more debug info to verify_no_memory_overlap() (#67206)
31bcfa3760 : [sharded_tensor] refactor sharded_tensor file structure (#67199)
b96337cf47 : add frozen_pyyaml as a builtin library to torch::deploy (#67127)
0e371e413d : [fx-acc] add automated graph opt testing using AccOpProperty (#67228)
3596e13d45 : Add torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops to ShardedTensor (#67057)
bfcde08612 : [trt] Algorithm recorder/replayer (#4)
ecf7e96969 : [Light] Remove ambiguity from compile_spec names, use actual output type (#67209)
ad5731cacc : [PyTorch] Add flop count for bmm and baddbmm (#66636)
7acf0c6d4b : [PyTorch Edge][type] Add type support for NamedTuple custom class (export) (#62612)
0d7d446154 : Disallow annotations on instance attributes outside __init__ (#67051)
1f55dd83ac : [WIP] wrap XLATensors into Python XLA wrapper class (#65841)
fa7fb7b4d9 : [skip ci] Set test owner for test_profiler.py (#66831)
0acc21b412 : [vulkan] Add 2D transposed convolutions (#67104)
059ae96007 : [jit] Factor findAllNodes into one place. (#65965)
239b38268b : [fx2trt] Better trt layer name (#67200)
4ac8d06911 : [quant] Remove unused print in quantization_patterns.py (#67191)
12daa4f663 : [jit][edge] Enable CALL instruction in lite interpreter. (#65964)
b8dfb45ac2 : Refactor cub namespace handling (#66219)
700b39a3df : Sparse CSR CUDA: add `torch.addmm` with all inputs sparse (#63511)
333717eaf0 : Improve assert failure message in test_get_torch_func_signature_exhaustive (#67039)
a6d0339492 : [Pytorch Edge] Extend runtime compatibility to custom classes (#66972)
f4dd88489a : Better and more consistent error messages in torch.linalg (#62734)
4dce051cb0 : [jit][edge] Add control stack frame to lite interpreter (#65963)
ac948f4f35 : .github: Migrate linux-xenial-py3.6-gcc7 to GHA (#67072)
9de0888891 : Move the registration of CPython builtin modules to BuiltinRegistry (#67085)
d68bb50ef3 : Disable SVE when cross-compiling for M1 (#67114)
5d9ff8f30e : [Static Runtime] Add static_runtime::fused_sigrid_transforms (#66659)
8d164a36fb : Use `at::native::is_nonzero` in promoted ops to improve portability (#67097)
acb340de75 : [Pytorch][Bootcamp] Add fixes and vanilla testing for Adagrad non-vectorized and vectorized optimizers to handle complex numbers (#66671)
a0495b3cdb : [SR] Remove unused operator() overload (#67001)
364645cd9d : [SR] Factor operator() implementation into separate function (#67125)
edd4d246c3 : Accept 0-dim channel inputs in convolution layer (#66256)
6c985b57ff : OpInfo : nn.functional.embedding (#66997)
adc21f1966 : [quant] Fix docs build (#67169)
dd81fa9027 : [JIT] Freeze allows preservation of submodule attributes (#66102)
09c7771e9c : Set test owners for jit tests (#66808)
364c4959c3 : [quant] Fix docs error in convert_fx (#67152)
a7ebf76a15 : jit trace (#59949)
f1b5f1898b : Automated submodule update: kineto (#67133)
b51731527d : [ez] [Docs] Missing import in example for post_local_sgd (#67047)
0000c88e10 : [FSDP] No need for list() in _get_shard (#66957)
580efb35a5 : [FSDP] Add some comments after reading the code. (#66956)
b6fa998892 : Revert D31514095: Use kernel_func_name from aotCompiler
313939c9c6 : [quant] Fix lint errors (#67138)
7b55dc8340 : Use kernel_func_name from aotCompiler (#66337)
64c68edaf3 : [pt] Add Half precision support for bucketize and searchsorted op (#67077)
2d81d5ab0a : [quant][graphmode][fx] Remove fbgemm_backend_config_dict for now (#67066)
8460fa5707 : [quant][fx] Add an option in convert_fx to accept qconfig_dict to skip quantization (#66878)
d13829e6be : [quant][[fx] update observer_fqn to not depend on node.name (#66767)
83f70db95c : Fix common device computation for comparison ops. (#66245)
3f5adf4f9c : [quant][graphmode][fx] Use the new convert function instead of the old one in quant-fx2trt tests (#67065)
af1a2df825 : enable better depthwise conv perf on cudnn 8.2+ (#58749)
cf3a5160f8 : [BE] move init_multigpu_helper to common_distributed (#67050)
df3f82a1ef : Add more FSDP unit tests to cover core logic, freezing weights and flatten parameter wrapper (#66904)
f6c88fa99d : Revert D31627107: [BE] delete frontend.cpp
f50bf16c04 : Revert D31663043: [BE] minor improvement to dist quantization
7b0408684b : Fix linter (#67122)
018e06edca : [torchelastic] Skip tests in tsan mode (#67103)
7e5aa0d35a : fixed unique arguments documentation (#66132)
a7bbf8814c : [quant][graphmode][fx] Move quant-fx2trt unittests to test_quantize_fx.py (#67064)
7379d4db20 : [BE] minor improvement to dist quantization (#66649)
1da628bdb7 : [ONNX] Update slice process shape to support rank only inference (#65782) (#66149)
0bc9928f31 : [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147)
2c0fe338da : [ONNX] Modify softplus symbolic to support beta!=1 (#65001) (#66146)
6f3f302d9f : [ONNX] Deprecate fold_if pass (#65697) (#66145)
a0fc14c20f : [ONNX] Add diagonal symbolic (#64454) (#66144)
b18c298f24 : ONNX: Delete or document skipped ORT tests (#64470) (#66143)
7a78f715a6 : [ONNX] Add warning for inplace updates on tensor.shape in tracing mode (#63170) (#66142)
136abf5aff : [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
53a163a015 : [ONNX] Export nn.Module call as ONNX local function (#63589) (#66140)
d1986a1cf5 : [BE] delete frontend.cpp (#66581)
e8742f15cf : [quant][graphmode][fx] Add observation_type.py (#67063)
f2582a59d0 : [SR] Add rvalue overload for operator() (#66648)
40a8a50913 : Add static_runtime::fused_equally_split (#2)
391eb1dbe3 : [JIT] UseVariadicOp handles multiple lists (#66288)
c7121ae77f : fix formatting CIRCLE_TAG when building docs (#67026)
d9c4b3feab : Do rowwisemoments computation in `float` for `half` `LayerNorm` (#66920)
6e6ede2e70 : [JIT] Re-enable alias sensitive peepholes (#65860)
051ea5ccbf : [Static Runtime] Bundle function & function_kind to carry them together (#66974)
3d7a344c5e : Fix ArchiveReader to keep archive path (#67035)
d1a5612a3e : remove accscalar from i0 and i0e (#67048)
5f58764d1d : [PyTorch Edge][type] Add type support for NamedTuple custom class (import) (#63130)
d3fc3c4ded : Implement forward AD for linalg.matrix_exp (#62716)
fe102b9888 : diff tool (#66854)
8ea985f240 : [quant][fx][graphmode] Rename files and functions for convert and add do_not_use suffix (#66955)
01ced45217 : [iOS] Bump up iOS CocoaPods version to 1.10.0 (#67058)
77beccaedb : Do not build PyTorch with caffe2 by default (#66658)
4fe8055b9f : made functorch not decompose by default (#66945)
28fac23409 : Fixes CUDA vs CPU consistency for index_put_ when accumulating (#66790)
35965869cf : Enroll bowangbj@ to PyTorch distributed package (#67062)
20f08d23a0 : Revert D31838513: Strided masked reduction: mean.
2578de4851 : [skip ci] Set test owner for test_cuda* tests (#66838)
b40a940192 : Strided masked reduction: mean. (#66784)
b696d64ef4 : Binaries without AVX512 kernels shouldn't report CPU Capability as AVX512 on machines with AVX512 support (#66703)
33790c4e06 : Implement histogramdd on CPU (#65318)
6a224b3370 : Set test owners for quantization tests (#66832)
f29e5220a6 : Revert D31474901: [pytorch][PR] [numpy] add torch.argwhere
fcfa06586d : Wextra fix for NamedTensor.cpp (#66897)
462f333c01 : [numpy] add torch.argwhere (#64257)
892ac08a02 : Do not generate not_implemented error for forward AD when input with tangent passed to non-differentiable function (#66926)
062ae8df0e : Automated submodule update: tensorpipe (#65353)
b07371f19c : [skip ci] Set test owners for serialization tests (#66862)
6f1ba16d6d : [skip ci] Set test owners for cpp test (#66836)
00a871c5c9 : [skip ci] Set test owner for multiprocessing tests (#66848)
78f970568c : Add dummy op to use instead of searchsorted (#66964)
94f4e9a995 : Enable warning tests for nondeterministic backward functions (#66736)
ce6f4b3a02 : Setup c10d extension Backend class attr the same way as builtin ones (#66991)
40e5d31a52 : Add OpInfo for torch.bincount (#65796)
9d4549295d : ONNX export: propagate node metadata across passes (#45256)
a33f341cee : [ci] try setting MAX_JOBS on windows builds to reduce OOMs (#66986)
53cf7e844f : [SR] Fix bug in FuseListUnpackV2 (#67021)
a7ec4b53d2 : Splitter: Transformer_encoder (#66952)
d73b88b473 : Unsqueeze bug fix (#66889)
23321ba7a3 : Fix bug [#66780]: wrong input to torch.is_floating_point (#66783)
13b8599831 : [skip ci] Set test owner for test_dispatch.py (#66840)
8cbdf49dce : [qnnpack] Remove conv_utils.h (#66605)
960e3216a4 : [skip ci] Set test owner for named tensor tests (#66849)
f5c5ab2868 : [skip ci] Set test owner for cpp-extensions tests (#66837)
32e790997b : [Rocm]Reduce severity of detected possible memory leak from assertion to warning (#65973)
70a5113e03 : [ROCm] update Magma for 4.3 release (#65203)
b6df043f1f : Add torch.nn.init.uniform_ operator to ShardedTensor. (#63997)
bdb889aca1 : [nnc] Use a descriptive name for fused kernels when profiling (#66990)
8beabffac3 : [PyTorchEdge] Make aten function common to aten and torch_common (#66663)
f8f04d5424 : [quant][graphmode][fx] Add support for single linear and conv2d (#66950)
a89851a0d9 : [quant][fx][graphmode] Adding a new convert function that produces reference pattern by default (#66925)
db4165892b : [SmartCompose][OnDevice] fix function name bug in mobile export & Script to convert mobile model (#66915)
ab1e4eac42 : [Static Runtime] Add FuseListUnpackV2 (#66509)
17889ad26e : Add support for cat in output stitching (#66098)
2dd23ebfdb : Add support for multi output nodes in partial eval graph stitching (#66097)
0196b984f3 : Add Handling of Cat in Shape Analysis (#65575)
eaba976d49 : Add x + 0 optimization (#65574)
b059f035be : Fix bug preventing optimization from firing (#65573)
63b41e1f4d : [JIT] Add partial evaluation graph stitching logic (#65377)
4ad6c144f6 : [JIT][Easy] Shape cleanups (#65148)
e046386be8 : Avoid inlining error reporting in checked_convert (#66721)
18bbc4c2b7 : [Static Runtime] Fix a bug in aten::index (#66940)
08cb31a03e : [PyTorch][1/N] Basic implementation of ShardedEmbedding using ShardedTensor. (#66604)
257239972c : Fix attr_to_scope's key in `torch/utils/tensorboard/_pytorch_graph.py` (#65692)
450221c534 : Sparse CSR: Add tensor.resize_ and tensor.copy_ (#63510)
f56a1a59a3 : Add simple backwards compatibility check for torch.package (#66739)
6e67150f57 : [skip ci] Set test owner for test_mkldnn.py (#66845)
5569d5824c : Fix documentation of arguments for torch.nn.functional.Linear (#66884)
e86d8323cb : [JIT] Add special cases for batch_norm, instance_norm in alias_analysis (#66554)
cf77bd4cf4 : Fix python version in test tools CI job (#66947)
793f366e34 : [skip ci] Set test owners for sparse tests (#66863)
a015964cf8 : Strided masked reduction: prod. (#66386)
822277f302 : [skip ci] Set test owners for test_type_promotion.py (#66866)
409364e597 : [skip ci] Set test owners for test_typing.py (#66869)
452b359c3f : [skip ci] Set test owners for tensor creation tests (#66864)
8a65047acc : [skip ci] Set test owners for everything considered with module: tests (#66865)
94f4b22df9 : Revert D31761594: [pytorch][PR] opinfo : nn.functional.embedding
f95fef7897 : Add prim::TensorExprDynamicGuard to bc allowlist (#66939)
3fe2ff800c : Module docs update (#66909)
62ca5a81c0 : Exposed `recompute_scale_factor` into nn.Upsample (#66419)
867ccc9987 : Strided masked reduction: amin. (#66385)
c69e33bb11 : Fix doc string for torch.acosh (#66814)
ed5633d0c5 : opinfo : nn.functional.embedding (#66622)
79803b199f : [Static Runtime] Make sure ProcessedNode::function_kind_ is copied over (#66917)
14ee608791 : [PyTorch] Make rearrangement in sharded linear work as expected. (#66603)
ef15691a1e : Revert D31732421: [JIT][Easy] Shape cleanups
70c9eb130d : Revert D31732419: [JIT] Add partial evaluation graph stitching logic
90b42452e2 : Revert D31732417: Fix bug preventing optimization from firing
b8d58129bb : Revert D31732420: Add x + 0 optimization
e730752610 : Revert D31732416: Add Handling of Cat in Shape Analysis
57fcea9e88 : Revert D31732418: Add support for multi output nodes in partial eval graph stitching
4187d870df : Revert D31732415: Add support for cat in output stitching
1bf0e1acb4 : Revert D31732414: Add Initial NNC Dynamic Shapes Flow
9c4d7d96db : Address feedback from #66673 (#66905)
deb6989880 : [fx-acc] add optimize_quantization to FX graph opts (#65929)
32e3003726 : Have test classes extend from common_utils.TestCase, not unittest.TestCase (#66900)
de4fe7a38c : Add Initial NNC Dynamic Shapes Flow (#66136)
b4db5174fe : Add support for cat in output stitching (#66098)
0fdc9b77a3 : Add support for multi output nodes in partial eval graph stitching (#66097)
cc7de1df3b : Add Handling of Cat in Shape Analysis (#65575)
66543f88de : Add x + 0 optimization (#65574)
853fc25fb0 : Fix bug preventing optimization from firing (#65573)
5db7db667f : [JIT] Add partial evaluation graph stitching logic (#65377)
16d0896b69 : [JIT][Easy] Shape cleanups (#65148)
b3bb234e16 : Remove THCGeneral.cpp (#66766)
bd4d5cb14c : Sparse CSR: Add torch.empty (#63509)
b1a6129e09 : Add repr to StreamWrapper (#66880)
e70b5d64f4 : Change README getting started link to explicit instructions (#66828)
cbd7bac914 : Migrate clang5-mobile build to GHA (#66673)
15f21eef5e : [fx2trt]fix softmax test (#66885)
a1afb692f3 : Fix metal issues with irange (#66877)
66f241230d : [PyTorch] Take const Type& in {tryS,s}calarTypeFromJitType (#66717)
9a00910bf3 : [skip ci] Set test owner for test_linalg.py (#66844)
57c596eb9e : add interactive_embedded_interpreter.cpp to the OSS build (#66352)
3488a85a76 : Sparse CSR CUDA: fix input checks for `addmm` and `mm` (#66485)
690c2a7076 : masked_scatter: fuse mask count check into one kernel (#66871)
552af8bdef : [PyTorch] Fix missing move in OptionalType::createWithContained (#66697)
7e81a89e13 : [PyTorch] Fix performance-no-automatic-move clang tidy warnings in matchTypeVariables (#66720)
50f5689d60 : Set test owner for distributions tests (#66842)
c37f413e75 : [skip ci] Change pretrained to false for quantization tests (#66795)
c9d9244166 : [skip ci] Set test owner for test_spectral_ops.py (#66843)
34051d74da : Add test owner to distributed files starting with test_ (#66797)
94afbd158c : [skip ci] Set test owner for test_numpy_interop.py (#66851)
17f07c310b : Fix type checking errors in torch/ao/quantization/quantize_fx.py (#66804)
a2e94b80fa : Create linalg.matrix_exp (#62715)
fd608cd313 : [skip ci] Set test owners for optim tests (#66861)
c806bb1022 : [skip ci] Set test owner for test_complex.py (#66835)
299a6a65b2 : [skip ci] Set test owners for autograd tests (#66834)
39215ddf84 : [skip ci] Set test owners for dataloader tests (#66839)
9eab6da887 : [skip ci] Set test owner for nn tests (#66850)
05b6dc9d75 : Fix BatchMatMul test and shape inference (#66733)
9f782f8b35 : add `OpInfo` for `torch.nn.pixel_unshuffle` (#65468)
1164118fc2 : add `OpInfo` for `torch.nn.pixel_shuffle` (#65467)
8f09292c5e : add `OpInfo` for `torch.nn.functional.pairwise_distance` (#65460)
0036e41143 : [quant][embedding qat] Add eager QAT test for EmbeddingBag+Linear model (#66334)
0a07488ed2 : use irange for loops 1 (#66741)
72803dbcfd : [caffe2] Fix invalid vector accesses and polar() call (#66757)
147f7559b1 : Add `SourceView` which doesn't own source text as base class of `Source` (#65309)
bff64e84cd : [DDP] Track models with sync bn (#66680)
e0643fa3fc : use irange for loops 5 (#66744)
bceb1db885 : use irange for loops 3 (#66747)
061baf02bf : Skip failing tests when LAPACK and MAGMA are not available (#64930)
08a464a9f3 : [PyTorch] Pass c10::optional<bool> to Stride ctor by value (#66698)
c9c52b760b : test addr type promotion in a single test (#66812)
d05c1ec007 : Add lazy Node base and associated infra (#66601)
a17a4e93ce : [PyTorch][easy] Fix missing move in UnionType::createWithContained (#66691)
c9c447f4be : [PyTorch] Fix missing moves in ListType (#66701)
d0a63c978b : [PyTorch][easy] Don't copy string in TensorType::repr_str unnecessarily (#66699)
f65b4b7a4c : [PyTorch] Avoid refcount bump in UnionType::canHoldType (#66693)
1db50505d5 : [nn] MultiLabelSoftMarginLoss : no batch dim support (#65690)
8173d4df69 : move get_cycles_per_ms() to common_utils (#66798)
d024f1134d : ci: Move bazel download from github -> s3 (#66815)
06e49ea088 : [not4land][quant][fx][graphmode] lower reference linear module example (#65723)
c994a7fc2d : Update documentation of torch.nn.Upsample (#66756)
0974215c4d : Prefer mT and mH over transpose(-2, -1) and transpose(-2, -1).conj() (#64181)
44fd312604 : [PyTorch] Use intrusive_ptr to save space in KernelFunction (#65618)
622e19b859 : [PyTorch] Take const Type& in TensorType::fromNumberType (#66716)
6a7296be9c : [PyTorch] Use castRaw in InterfaceType (#66728)
9ea3424747 : Set test owner for fx (#66807)
8637556d23 : Migrate THCState to ATen (#66765)
1fcbd8fa15 : [PyTorch] Fix extra refcount bumps in tryEvalTypeVariables (#66722)
393299b124 : [PyTorch] Fix unnecessary shared_ptr copies in RRefType (#66706)
d5a25faf7a : [PyTorch] Fix unnecessary shared_ptr copies in EnumType (#66714)
9b729ebc88 : [jit] shape propagation for quantization (#66343)
1cf317b85f : [ONNX] Support exporting with Apex O2 (#65374) (#66700)
624ce95201 : Run sparse tests only for TensorPipe agent. (#66661)
7fad47e522 : `torch.linalg.lstsq`: forward/backward AD support (#65054)
6bde474066 : [PyTorch] Fix extra refcount bumps in matchTypeVariables (#66719)
c373e188d8 : [PyTorch] Fix extra refcount bumps in unifyTypes (#66718)
472a6f2787 : Strided masked reductions: sum, amax. Testing of masked reductions. (#65990)
d777e490a5 : [bc-breaking][quant][graphmode][fx] Produce reference patterns for GeneralShapeOps (#66647)
eb1eefc399 : [PyTorch] Fix unnecessary shared_ptr copies in DictType (#66702)
09c4e73c95 : [PyTorch] Fix unnecessary shared_ptr copies in FutureType (#66704)
62e89f692f : [doc] typo (#66754)
f4a7273b5c : Set test owners for module: ci (#66796)
8532061bce : [sharded_tensor] support gloo/mpi backend in tests (#65855)
d549c8de78 : fx quant: enable linear-bn1d fusion for PTQ (#66484)
9d287d0b63 : [fx2trt]Add support for negative dim in softmax (#66760)
aa7da7b09c : [quant][embedding qat] Enable quint4 in EmbeddingBag QAT workflow (#66348)
909694fd88 : Fix `nn.functional.max_poolNd` dispatch (for arg: `return_indices`) (#62544)
e4a9ee8d42 : Deduplicate codegenOutputQuery to query maximum CUDA compute capabilities (#55901)
811f5a2b94 : Adding StreamWrapper to ensure file object will be closed (#66715)
0d203a16fe : Add relative and absolute tolerances for matrix_rank, pinv (#63102)
53aac4b6f3 : [PyTorch] Allow override for macro `HAS_DEMANGLE` (#66540)
3b4cb9ddca : Revert D31577488: Migrate THCState to ATen
719d43a2a2 : Revert D31547709: Remove THCGeneral.cpp
8854817f44 : Implement Python Array API `asarray` function. (#60627)
9e3a2babfa : Make aotCompile support multiple input sizes (#66727)
962c6476da : Refactor: move method to func compilation work to compileMethod, add option to specify method name (#66726)
aa0c31876b : Remove THCGeneral.cpp (#66391)
8c5928bd78 : add frozen_numpy as a builtin library to torch::deploy (#66297)
42f138469a : [TS] Return early if device doesn't match (#66694)
32ac001e4d : Suppress deprecated copy in vec256_qint.h (#66646)
65adf1dfa2 : Migrate THCState to ATen (#66480)
2f099c7555 : Revert D30652629: use irange for loops
1e2b2ee5ff : sort_out_cuda: Use custom kernels to fill index tensors (#66668)
9ba39d2008 : Clean up test running scripts (#65508)
2c761caaaa : [Vulkan] cat operator for channel dimension (#66669)
06cfdfae0e : Promote integral inputs to floating for `torch.logsumexp` (#63393)
67e003f09b : [Static Runtime] Determine function for `ProcessedNode::run()` statically (#66692)
d1b6121935 : Revert D31656999: Add meta support to tensor range factories
a25648953c : Add `warn_only` kwarg to `use_deterministic_algorithms` (#66233)
687c2267d4 : use irange for loops (#66234)
b5b7d6a3a6 : EmbeddingBackward exclusive_scan thrust->cub (#66566)
bd25f92e81 : Fix Wextra issues in Half.h (#66643)
abc022f9c8 : Fix torch.cholesky deprecation warning (#66645)
0b8dc0f04a : add BFloat16 operators on CPU: logaddexp, logaddexp2, remainder (#63621)
a58852fd44 : Fix fx2trt broken unit test (#66696)
e48a4cbf64 : Make several methods of SharedParserData private (#66670)
e88d1c4f10 : [PyTorch] Add tuple inline storage (#64066)
f8f9a47b02 : PR3: add a workaround for reference path (#66535)
7400f34b8e : Add meta support to tensor range factories (#66630)
6436bd3d5d : Clarify topk doc (#65938)
2506baf9c2 : [ONNX] move CheckerError from torch.onnx.utils to torch.onnx (#66644)
3a9259f6cf : [TensorExpr] Add missing schema for aten::where and aten::pow lowerings. (#66688)
06c37876b8 : `torch.linalg.householder_product` faster backward (#63880)
65e25256c3 : [ROCm] enable test_distributed() in test.sh (#66657)
8a01bbd64a : add flatten parameter module (#66578)
a3d12bcdf9 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
76efbccc3b : [PyTorch Edge][tracing-based] Unify tracer between internal and external (#64152)
1e47181c47 : [DDP Logging] Add iteration in error reporting (#65772)
3740a06712 : [MonitoredBarrier] Fix some logging (#65771)
06fa6c15c0 : Back out "Revert D31299350: Back out "Revert D31005792: [NCCL] Init dummy NCCL comms in constructor"" (#66393)
59b28063b4 : [NNC] Adding more python bindings for missing operators (#66612)
8dcf84069e : [PyTorch] Implement improved version of gather_ranges_to_dense (#66677)
70fc60b9d1 : Revert D31325860: [PyTorch] Implement improved version of gather_ranges_to_dense
b60050e96a : [qat]Make sure the bn statistics are the same in the unit test. (#66244)
23710e2d80 : [PyTorch] Implement improved version of gather_ranges_to_dense (#66664)
583217fe37 : changes for pytorch issue 55577 (#66571)
a1084401b0 : Clean up `DictLiteral` and `DictComprehension` emission logic (#64953)
a7b79033ea : Clean up `ListLiteral` and `ListComprehension` emission logic (#64952)
22ec625028 : fx2trt example: run all submodules (#66590)
20aa417e38 : [PyTorch] [Quantization] Speed up PackedEmbeddingBagWeight::prepack() (#66632)
871a31b9c4 : [TensorExpr] Add missing schemas for lshift/rshift lowerings. (#66653)
f8348ce9c8 : graceful failure for draw_graph() in acc_utils.py (#66631)
1d90f29f14 : [DOC] Improve Transformer documentation (#66574)
3097755e7a : [DOC] Fix typo in KLDivLoss (#66583)
914796a69c : Fix for prim::BroadcastMKLDNNTensors issue (#66628)
833ede33ed : Fix ubsan in concat_split_op.h (#66283)
76f3b07caf : quantization docs: remove erroneous rebase artifact (#66577)
016362e2d7 : Run sparse tests only for TensorPipe agent. (#66600)
543b7fb942 : [JIT] Fix type annotations of pooling modules (#65847)
51b67f2bca : [qat]Removed outdated context manager in unit test. (#66274)
49a1d7bfcb : [opinfo] elemwise parcel : isfinite, isinf, isposinf, isneginf, isnan, isreal (#66400)
d810e738b9 : OpInfo for `*_like` functions (#65941)
5d4452937d : OpInfos for some Tensor dtype conversion methods (#64282)
77f98ea5e0 : assert no duplicate yaml keys in codegen (#66238)
fe41df3601 : Deprecate x.T on tensors of dimension other than 0 or 2 (#64180)
d802877dfa : speed up quantized interpolate for channels last (#66525)
a40812de53 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
6310eb30d1 : [SR] Clean up GetLivenessMap (#66606)
e1348973ac : Add common_fx2trt.py (#66579)
74849d9188 : [acc_shape_inference] add shape inference for quantize_per_channel (#66562)
7d9bbd3596 : Revert D31580382: [pytorch][PR] dropout update in autodiff
c1c985a282 : Rename tensorexpr::Value so that it can coexist with torch::jit::Value (#66467)
6634570aef : [SR] Fix bug in ValueGroup (#66470)
d30397d42a : [PyTorch][Static Runtime] Don't use vector in ProcessedNode (#65429)
c6f0dde3ca : Cumsum Converter (#66376)
160946e3f3 : Use `torch.empty()` instead of `torch.tensor()` in `torch.nn.Parameter` (#66486)
30d9fd9cf3 : Migrate USE_MAGMA config macro to ATen (#66390)
e75de4f307 : remove a few unused THCTensor/Storage methods (#66555)
4e1c075542 : log_sigmoid: Use log1p for improved precision (#66441)
24202f7fb4 : Remove native_functions.yaml dependency from Activation.cu (#64499)
eb8138d886 : dropout update in autodiff (#66273)
5f45927d15 : Autograd: Delay warnings until the end of backward execution (#66235)
42328090cb : [GHA] Hardcode doc build target to `master` (#66567)
0aab34c26c : [jit] Refcounting spot fixes in alias_analysis (#66295)
9767282643 : [jit] Add MutableTypePtrHelper::mapTypeToBorrowedAliasTypeSet (#65344)
75d98fa0ae : [jit] Implement one-element MemoryDAG::mayContainAlias more efficiently (#65178)
9e8281fd2f : [fx2trt][code quality] Add type annotation and docstring to utils functions in acc_ops_converters.py (#66496)
37db650c9c : [Static Runtime] Clone test does not use uninitialized memory (#66557)
82986a17a6 : fix lint (#66572)
a82fcd3560 : Disable .numpy() and .tolist() for tensor subclasses subclasses and fix .tolist() for conjugated and negated tensors (#66082)
675ba6cd53 : [qnnpack] Remove usage of conv_param_t in deconv-run.cc (#66465)
86cf22cb1c : Add OpInfo for torch.bucketize (#65821)
035310c574 : Handle shared memory cases in MathBithFallback (#63602)
c04bcde245 : Make empty* and *_like factory functions respect tensor subclasses (#65677)
b792a77895 : Skip `interactive_embedded_interpreter.cpp` for clang-tidy (#66569)
09b90612c4 : .github: Enable onnx tests (#66513)
f48f20e154 : Make ContainerHash compatible with const& types (#66497)
fdd9f49cf5 : add a note on numerical accuracy (#65947)
a453ebc8ac : Use interactive_embedded_interpreter to dynamicly loading various third-party libraries (#66512)
a8815d557a : [vulkan] Remove the persistent resource pool (#66478)
cebaf21c5a : [vulkan] Release GPU resources when vTensor::View is destroyed (#66477)
5e34ac6c43 : [FX] Fix cases when we should not fuse due to more than one users of intermediate node (#66472)
9d13ae450a : [oss/ci] skip all dataloader tests with asan (#66561)
713e025c9f : Add no-input-grad-needed cases to test_grid_sample (#66071)
8a40bb62f9 : Compute input gradient only if required (CUDA) (#66070)
f8d98b5a6d : Compute input gradient only if required (CPU) (#66069)
84385c40e4 : Add output_mask (#66068)
6401658b08 : fix type error in hipify_python.py (#66164)
d85948896c : Add softplus support to autodiff (#63942)
82a216c45b : Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179)
87df043f63 : [Bootcamp][Pytorch]Add testing for complex parameters in Adagrad optimizer (#66501)
ecb7b38c00 : [PyTorch] Support additional arguments in Python record function (#65736)
9918fd8305 : [fx2trt] open source tests for converters (#66361)
80a3619823 : Remove THCTensorMathReduce.cuh (#66389)
bc6935ddf5 : [PyTorch][Distributed][Easy] Make ShardedTensor.size() equivalent to torch.Tensor.size() (#65087) (#66012)
8eb85b5027 : Remove THCNumerics (#66388)
2d3b23190c : Revert D31591512: .github: Enable onnx tests
08f3823647 : Sparse CSR CUDA: add `addmv_out` (#61407)
8492e6bc6a : .github: scheduled -> schedule, fix periodic (#66531)
06a156efc7 : .github: Enable onnx tests (#66513)
93d326c868 : Add InplaceOrView boxed kernel (#63878)
40794dbb25 : add backend_config_dict to checkGraphModeFxOp (#66499)
d32736e317 : Make permission errors more human readable (#66492)
d921891f57 : GHA: Stop skipping periodic jobs (#66264)
3ac2c74896 : Revert D31082208: Use shared CUPTI by default
9984f4bb8b : Remove native_functions.yaml dependency from some reduction operators (#64173)
ee38a467ea : fix normal with empty std (#66463)
8b0eae5aa8 : Use shared CUPTI by default (#65401)
c6216b2a43 : Back out "Revert D30710710: [Pytorch Edge] Support profiling kineto events from external source" (#66421)
d7916e3734 : [jit] Eliminate malloc & recursive refcount bumps in HashType::operator() (#65348)
47c531b6e8 : [jit] Compare object identity first in ClassType::operator== (#65347)
17e79bc76c : remove is_reference from all is_output_quantized (#66456)
702fb1de72 : [fx2trt] open source tests for acc tracer (#66302)
a6eec0c60f : Upgrade onnx submodule to 85546f8c44e627f8ff1181725d03cc49f675e44f (#66427)
e6261083f9 : [FX] fuse permute021 linear pass for trt lowering (#66362)
8818dda237 : Fix lstsq to work with inputs that require grad (#66426)
213ac4e59c : Remove native_functions.yaml dependency from PointwiseOps (#64172)
8674a3c6e3 : Remove native_functions.yaml dependency from PowKernel (#64171)
1841f76cc0 : Remove native_functions.yaml dependency from unary ops (#64170)
71e17d9827 : [DataPipe] Fix HttpReader IterDataPipe Issue with streaming (#66432)
5f1518609b : [TensorExpr] Fix lowering for aten::t. (#65859)
6864146f2b : [TensorExpr] Fix lowerings for aten::view and aten::reshape. (#65852)
60a2a295ce : [TensorExpr] Use schema instead of op name in NNC lowerings. (#65843)
24b9b304d9 : [TensorExpr] Nuke TE shape inference. (#65554)
18e4688199 : [Pytorch Edge] Improve bundled inputs name error handling (#65856)
2d1552824a : Revert D31386275: Migrate THCState to ATen
d8532e3524 : [PyTorch] Split c10 Type.cpp into two files to allow targets to include one of them (#66445)
07ec250fd7 : [deploy] fix oss build (#66347)
9a85167d22 : Fix batch_isend_irecv tests for err case (#63112)
3eb9443619 : [FX] Fix issue where GraphModule.delete_all_unused_submodules deletes submodules from called leaf modules (#66430)
a6774d6e1f : Migrate THCState to ATen (#65948)
e7b5712c21 : Call `PyArray_Check` only if NumPy is available (#66433)
565cf47abf : Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380)
8b1258698e : Improve quantization API docs (#66379)
88ed93c2ca : Fix type checking errors in torch/quantization/fx/qconfig_utils.py (#66428)
25965619dd : Back out "Revert D31495086: open source engine_layer_visualize.py" (#66431)
ae5a9a451f : Do not enforce unused vars rule for torch_deploy (#66447)
7baf4f6b12 : Chunk: Converter (#66028)
cc24e4e5d0 : [NNC] Normalize loops in SplitWithTail (#66242)
49f1605392 : [RFC] Reduce logging noise from AdagradOptimizer (#66443)
c03f851750 : [torchelastic] Fix failing tests (#66440)
1d14fbdad7 : [TensorExpr] Adding missing python binding for operators (#66336)
08fab7ae13 : Wextra fix for Integration.cpp (#66321)
8c468ce00b : [PyTorch][JIT] Return a reference from caching specializations of getTypePtr (#66342)
998cb98844 : [PyTorch][jit] Cache TupleType objects in getTypePtr (#66340)
acb0157a3d : Specialization for `c10::util:get_type_index<std::string>` (#66290)
901df0cc22 : Skip test_nccl_errors_nonblocking (#66394)
221c308389 : Wextra fix for LossCTC.cpp (#66381)
736fa09a9a : [Static Runtime] Manage output tensors (#65515)
3b4b1b2d23 : .github: Remove confusing ciflow_config.enabled variable (#66260)
c66847afbe : Add workaround for nvcc header dependecies bug (#62550)
c373387709 : Update CMake and use native CUDA language support (#62445)
d3b29afbb6 : Remove old code that is unused in test/ (#66331)
4775419850 : [BE] Address feedback from #66296 (#66315)
822c0850cb : fix pybind issue for get_autocast_cpu_dtype and get_autocast_gpu_dtype (#66396)
1b40daac74 : pinv: forward/backward AD which is Frechet-defined in a rank-preserving neighborhood. (#66092)
7c2f53b363 : [BE] set pretrained=False for onnx tests (#66312)
1d9a6862cd : fx quant: add a BC test for loading old torch.package models (#65538)
0348148725 : Update link to qnnpack in quantization doc. (#66226)
58fefa6516 : Add pybind trampoline for ProcessGroup and Work (#66338)
bc06eefebe : [reland] Allow external CUDA streams to be set as current (#66324)
355acfdebc : [PyTorch Edge][tracing-based] use operator.yaml to build libtorch library (#66237)
9971113340 : Revert D31447612: Create a documentation page for FX graph mode quantization APIs
b85fd4c54f : Revert D31447613: Create separate documentation pages for quantization observers and fake_quants
10633460ce : Revert D31447614: Create a documentation page for `torch.ao.quantization.QConfig`
037ac2330e : Revert D31447616: Quantization docs: consilidate all API references on a single page
09c3e6002b : Revert D31447615: Quantization docs: rewrite API reference to be more automated
df1858bea5 : Revert D31447611: Quantization documentation: move backend section down
ad0accdecd : Revert D31447610: Quantization docs: add pages for Numeric Suite (Eager and FX)
291d463cf9 : Revert D31495086: open source engine_layer_visualize.py
0e0c98077f : [quantized] Implement 3d convolution in qnnpack (#66350)
150b7c7410 : open source engine_layer_visualize.py (#66301)
27f193af64 : Automated submodule update: kineto (#59674)
84326ef059 : Remove native_functions.yaml dependency from binary ops (#64169)
9539e6216b : Quantization docs: add pages for Numeric Suite (Eager and FX) (#66222)
309a8cf46c : Quantization documentation: move backend section down (#66210)
7d2526ab20 : Quantization docs: rewrite API reference to be more automated (#66201)
fe86f0e068 : Quantization docs: consilidate all API references on a single page (#66198)
7332ed13ed : Create a documentation page for `torch.ao.quantization.QConfig` (#66129)
f0fa3d1110 : Create separate documentation pages for quantization observers and fake_quants (#66125)
a89ac3138e : Create a documentation page for FX graph mode quantization APIs (#66122)
b96c7aea73 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
109aa135e6 : Remove apparently unnecessary std::remove_cv_t (#66254)
4cb4d11e0b : Disable "-Wignored-qualifiers" for vec256_bfloat16.h (#66279)
3fe5895a00 : Back out "Revert D30599136: [Pytorch Edge][tracing-based] build tracer in OSS" (#66267)
1763c25414 : [PyTorch][jit] Fix excess refcounting in TupleType::compare (#66286)
fb5a80ffd8 : [jit] Don't force refcount bumps from getTypePtr (#66282)
85b562dd2b : Fix type checking errors in fx/utils.py (#66311)
e5f6f356da : [hpc infer] fix bench perf number
904fbadaff : Fix merge conflict in bc tests (#66356)
5a67ffe0ad : [PyTorch][Static Runtime] Combine ProcessedNode::{native_,}fn_ (#65414)
566922bbcd : clean up mypy nit in torch/jit/_recursive.py (#66253)
4a302a3074 : Wextra fix for CUDAApplyUtils.cuh (#66323)
0a48f56318 : Revert D31299350: Back out "Revert D31005792: [NCCL] Init dummy NCCL comms in constructor"
c62ed96496 : Revert D30710710: [Pytorch Edge] Support profiling kineto events from external source
c957d9fdf6 : Replace _baddbmm_mkl_ with cpublas::gemm_batched (#66165)
51835bec07 : Wextra fix 1 for caffe2 (#66272)
a28b038af4 : [ao_migration] torch/nn/intrinsic: torch.quantization -> torch.ao.quantization (#65903)
2daae532bd : [ao_migration] torch/nn/qat: torch.quantization -> torch.ao.quantization (#65902)
1a6482ee2a : [ao_migration] torch/nn/quantizable: torch.quantization -> torch.ao.quantization (#65901)
b23709df03 : [ao_migration] torch/nn/quantized: torch.quantization -> torch.ao.quantization (#65900)
f1f3bd8c36 : Back out "Revert D31005792: [NCCL] Init dummy NCCL comms in constructor" (#65883)
c1343ff706 : [Pytorch Edge] Support profiling kineto events from external source (#64397)
8a02d3e5d0 : Wextra fix for Tensorshape.cpp (#66320)
731cf494f2 : Remove cuda/Loops.cuh dependency on native_functions.yaml (#64168)
92ce188510 : Revert D31445799: [nnc] Use given kernel function name while emitting code
2e6fa0261f : Revert D31445797: [nnc] Added a cache to use singleton instances of PytorchLLVMJIT for every triple,cpu,attrs combination
097fdcdf0c : Revert D31445798: [Static Runtime] Cleanup LLVMCodeGen memory after code gen completes
0be36d798b : Remove Tensor.h include from TensorIterator.h (#64167)
bc1dec9b81 : Migrate THCStorage_resizeBytes to ATen (#65944)
3bad54069b : Concatting multiple linear layers with same input Tensor (different weight/bias) (#63198)
94845fc44e : [jit] Implement one-argument AliasDb::mayContainAlias more efficiently (#65177)
c80693f7e6 : [jit] Add cache for MemoryDAG::collectAllContainedMemoryLocations (#65122)
3ef69a4598 : [static runtime] Pre-allocate hash tables (#65343)
0020a151c6 : slow_conv3d grad_weight: call gemm directly (#65759)
dfb64b3287 : log API usage for fsdp API in PyTorch (#64964)
201174cb91 : Revert D31389480: [pytorch][PR] Allow external CUDA streams to be set as current
b72a1782d8 : [PG Wrapper][BE] Add collective information when monitored barrier error is (#66167)
b5b1d49a66 : [PG Wrapper][BE] Make some methods private (#66166)
0cad2c0615 : Move intraop_launch_future from Parallel.h (#64166)
2d885ab73d : [jit] Reduce refcounting of Types (#65345)
1ae468a484 : [jit] Refcounting spot fixes (#65346)
8ebe1a924d : [DataPipe] moving mux IterDataPipe test to the right location (#66277)
ed17851642 : [DataPipe] adding test for IterableWrapperIterDataPipe (#66276)
e808e3d3d6 : [DataPipe] adding SequenceWrapperMapDataPipe (#66275)
a7cc07f109 : quantized embedding: make error message clearer (#66051)
c9aba3b128 : make error message when trying to quantize non floats more specific (#66050)
81660c08f0 : quantized add: enable broadcasting (#66049)
ece0221854 : Rename int to long, add more C++ types. (#66108)
11bc435622 : Allow registration of custom symbolics for prim namespace (#64460) (#66139)
9b09a5f7ba : [ONNX] Enable scripting tests (#64780) (#66138)
53fefaa916 : [ONNX] Fix duplicated output same name case (#64190) (#66137)
4af47eb3a7 : [ONNX] Update slice process shape to support rank only inference (#65782) (#66149)
dc37547c44 : Opinfos for avg_pooling (#64214)
8d6d448238 : Add HPU for Autograd Fallback (#65605)
4af913a7cf : fixed minor issues for index_add in docs (#65806)
61f0bb70c1 : Allow external CUDA streams to be set as current (#65914)
60fe854f9f : [fx2trt] save and load TRTModule for OSS (#65958)
321345d7c9 : Revert "Revert D31227448: [pytorch][PR] fixing sorting in stride indices" (#66176)
74477ba243 : [fx2trt] More controls over output dtypes (#65959)
227f91e72d : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a58ff186e8 : [quant][embedding qat] Add basic EmbeddingBag QAT fakeQuant workflow (#65443)
64caee1356 : [PyTorch Edge] Leave out field for debug_handle if not being built with eager symbolication support (#66131)
ebe530a9cd : Periodic jobs should not have CIFLOW_DEFAULT label (#66300)
bd9eee4e65 : TBB: Use static partitioner to match OpenMP scheduling (#65327)
d5033410b1 : Parallel: Deduplicate parallel functions in different backends (#65326)
e1817d895f : [BE] Cleanup python_function.cpp (#66296)
ca363d1e22 : docker: Ensure libgnutls30 for all docker builds (#66258)
38f5144eae : Fix https://github.com/pytorch/pytorch/issues/61982 (#66015)
20f2e55d4f : Rename cuda/Resize.cu to cuda/Resize.cpp (#65943)
86de09e49a : Upgrade to ubuntu:trusty-20190515 (#63468)
416f593080 : [Static Runtime] Group graph nodes into input aliases & output aliases (#65517)
0e2d1b221a : [Bootcamp][Pytorch Core] Add testing for complex non-vanilla SGD
5e7d8ec846 : Support Registering a Variable Length List of Builtin Modules for torch::deploy Builtin Libraries (#66021)
40dd2711b6 : [Static Runtime] Cleanup LLVMCodeGen memory after code gen completes (#66218)
7e5ef5e517 : [nnc] Added a cache to use singleton instances of PytorchLLVMJIT for every triple,cpu,attrs combination (#66217)
c30dc52739 : [nnc] Use given kernel function name while emitting code (#66216)
3cc40253d9 : add gather to ShardedTensor (#65671)
f445ed19b2 : OpInfo for 2d fft functions (#66128)
2213c463ba : C++ API and docs for hfftn (#66127)
e6a4f746c2 : slow_conv3d: Use at::sum for grad_bias accumulation (#65758)
2e4e5b0264 : Add inplace_variant for resize_ OpInfo (#66135)
361b34eb81 : Chunk: acc_ops (#66010)
9fb6ba24e7 : Update `torch.fx.passes.split_module` docstring (#65542)
d5f64afc38 : [Static Runtime] Support aten::to.prim_dtype overload (#64928)
a8c0b362ce : [pytorch][PR] Add hash and int128 utils for Lazy Tensor Core" (#66181)
61fca037d6 : [Part 1] upstreaming fairscale fsdp to PyTorch -- sharding, core data flow and hooks (#63881)
88f8944ef1 : Revert D30599136: [Pytorch Edge][tracing-based] build tracer in OSS
2f1ab477f1 : Speed up DataTypeToTypeMeta (#66113)
1e4bcbdddb : [Bootcamp][Pytorch Core] Add test for complex numbers for vanilla SGD (#66230)
057a01556c : [Static Runtime] Do not use variadic_sigrid_transforms_torch_bind if out variant is disabled (#66221)
dcf39f9bb9 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
df11e2d6f9 : (torch/elastic) add fqdn hostname to error printout (#66182)
8a974a482c : [quant] Add support for quantization of Embedding{Bag} in dynamic quant APIs (#65674)
115526cc88 : GELU Converter (#66008)
ac0dbd6eec : Promote missing ops for delegated models (#66052)
3f30526ff2 : Remove THCAllocator (#65942)
eeaf527feb : [Pytorch Edge][tracing-based] build tracer in OSS (#64087)
0cab25468d : [Pytorch Edge][tracing-based] reorganize model tracer dependency (#63421)
300613dc60 : make FX symbolic tracing reuse buffers if they're the same (#66211)
67970e8c9b : Add CI tests for AOT Compile (#65441)
6c54971cd9 : Open Registration for torch::deploy Builtins (#65953)
213c3f45da : [oss/ci] skip TestDataLoaderPersistentWorkers on ASAN (#66236)
4937218611 : [torch][launch] Add ability to override sys.executable for `torch.distributed.run` (#66179)
e8837d741e : [Vulkan] cat operator for height dimension (#66103)
1d586e78c6 : `*_solve` methods: implements forward AD (#65546)
78209b93b3 : Don't build shared library for AOT Compiler (#66227)
4a50b6c490 : fix cosine similarity dimensionality check (#66191)
05e1476d49 : [jit] Fix list copy in MemoryDAG (#65176)
fc4836f400 : [Fix] Use full name to look for the promoted prim operator table (#66081)
7cc121dbcd : slow_conv3d grad_input: Avoid dispatch in parallel region (#65757)
480a1a88d6 : [DDP] Log iteration in debug mode (#65770)
722f1ccfb8 : [DDP][Instrumentation] Profiling range for bucket copy (#65769)
84c5970a77 : ci: Migrate slow_gradcheck to GHA (#65730)
e2be087207 : [oss][pytorch] Add quint2x4 dtype (#65545)
252b6f2cba : [PyTorch][easy] Remove dead std::set in parseAliasAnnotation (#65712)
90db214d4b : support counter-based fused rowwise adagrad (#66177)
6d7fab5929 : [Static Runtime][easy] Clone scripts do not use aten::add (#66161)
9285981de1 : Clean up unused model instantiation (#65487)
8548928950 : Cumsum: acc_ops (#66189)
623ac7eabb : slow_conv3d: Avoid dispatch in parallel region (#65737)
9a0b2acd76 : [quant] Remove hypothesis from qtopk (#66158)
6d4d636d66 : [GHA] Rectify `trigger_action_only` flag (#66209)
c4ea447eb5 : Use src size for memcpy in order to avoid fortify complaints (#65222)
bfaaac6392 : Ignore register_rds errors (#66185)
b8e1999253 : [quant] Add op benchmark for GPU FakeQuantizePerChannel with float zero_points (#66183)
9de9733390 : Add 1d to 2d conv transform during mobile optimization (#65850)
747a5782e3 : [quant][fx] Don't assume bias is a keyword argument (#61647)
ab25516054 : [PyTorch] Remove unused function in import (#65865)
a5895f85be : [PyTorch Edge][type] Add type check in compatibility api (#63129)
c75210face : [PyTorch Edge][type] Move TypeParser class definition to header file (#65976)
931352c68d : Make handle_torch_function_no_python_arg_parser public (#66054)
c0b1965f7c : Back out "[vulkan] Use push constants instead of SSBOs" (#66169)
8d435877d5 : Fix typos at ONNX docs (#66090)
cbc29acca3 : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
d1058df885 : fix clang-tidy error introduced by #64382 (#65977)
6cdea8239e : Precomputing Transposes for frozen linear layers (#65631)
43e26d0086 : [deploy] Improve error messaging for create_movable (#65955)
3bd26792c0 : Skip test_multiple_groups on windows (#66154)
eeabab03e7 : [DataParallel] Log API Usage for tracking (#66038)
dc26f5eb65 : [FX] Specifies a default value when possible for placeholders created from concrete_args (#59569)
83bac89d64 : [quant] Add fp32/fp16 zero_point support for GPU fakeQuant (#65836)
f062def486 : Revert D31260343: [pytorch][PR] Add hash and int128 utils for Lazy Tensor Core
5e6347ca64 : .circleci: Remove migrated distributed configs (#66174)
e94fea08d0 : Add hash and int128 utils for Lazy Tensor Core (#65635)
143c957c2d : [nnc] Reduced memory usage of LLVMCodeGen object after code generation is complete (#65373)
68555339d7 : test_utils.py: Add another retry to test_download_url_to_file (#66159)
d2021e5e68 : ci: Migrate vulkan builds to GHA (#66044)
7452b65144 : Remove unused `dump` method from VSX vec256 methods (#66085)
6e06cb76ff : [JIT] Initialize CUDA context before launching fused kernel (#65064)
a5e6b2b2e3 : [Static Runtime] Add variadic sigrid_transforms_torch_bind (#63960)
e7747795c9 : [PyTorch Edge] Reduce dispatch table size further for a trimmed build (#66112)
a3bbaf227c : Revert D31227448: [pytorch][PR] fixing sorting in stride indices
89b56d630d : Create CI sev template (#66163)
5883523c1d : Remove dtype from torch.Storage and use only torch.ByteStorage (#62030)
588c1787ba : Update link to example pytorch/examples (#66095)
da0e29edd4 : fixing sorting in stride indices (#63940)
0d020effab : [quant] Fix the parts that were missing after initial migration (#66058)
727576e501 : [quant] Fixing the hypothesis test for topk (#66057)
92d0b7e99c : [deploy] fix typo in `registerModuleSource` (#66107)
458a00bacb : Back out "[quant] update fused_obs_fake_quant op to accept output_fake_quant argument" (#66063)
2b39b80971 : [quantized] Replace conv_p with convolution_op in qnnpack (#65783)
bda3230b62 : slow_conv2d grad_weight: call gemm directly (#65726)
1db78c30c9 : Fix LLVM-12 concat_split_op.h error (#66060)
9c3eb50b7b : [PyTorch] Use std::move() in a couple places in function_schema_parser.cpp (#66114)
aa80f05d2d : Remove sync in Embedding caused by unique (#66091)
1932bc69e9 : Move GHA to ONNX (#65975)
df475aa1dc : Update Vulkan runner in benchmark binary to handle non-tensor inputs (#66123)
2a5116e159 : [quant][fx2trt] Add quantize_per_channel in acc_ops and acc_ops_converter (#65287)
d609957c95 : patching graph_for (#55139)
ed50fa2513 : [Static Runtime] Test isOptimizableContainerType and getAlwaysAliveValues (#65849)
4c4525fa5c : Compile without -Wno-unused-variable (take 2) (#66041)
6b0aa2958d : [FX] Support torch.layout as arg (#66048)
6ea4902cf4 : [ao_migration] torch.quantization --> torch.ao.quantization in caffe2/torch/fx (#66096)
de24faec5f : Binary building without python fix (#66031)
6eb3a1c831 : Run master clang-tidy on PRs (#66104)
7c758759e3 : [PyTorch Edge] Avoid string copying in TypeParser (#64278)
69da4b4381 : GHA: make obvious when we are running smoke tests to user (#66011)
4cdfceddd2 : [Reland] Avoid saving self for `softmax` and `log_softmax` (#66018)
8f5631b859 : Refactor functional api vectorized jacobian to use batched grad parameter (#65566)
73901b099d : Add batched_grad parameter to `autograd.grad` (#65564)
b6d5f1ee70 : Allow None to pass through for vmap (#65565)
89ed9bdaee : [Static Runtime] Fix bug of creating output aliases in aten::embedding_bag (#65516)
40948a935d : Fix LLVM-12 UB in generate_proposals_op.cc (#66009)
c7748fc172 : Added validation of mode parameter in AveragedModel (#65921)
0fc6bd2e47 : [gpu ne eval] disable adam decay unit test for gpu (#66056)
29c0725e8a : Back out "[caffe2] fix LLVM-12 nullptr-with-nonzero-offset UBSAN error" (#66055)
7c52963350 : [WIP] skip constant folding dequant node (#63991)
8a307640db : selective trt import based whether we have gpu or not (#66045)
8b8012a165 : [PyTorch Edge] Skip writing version during backport (#65842)
7941590a51 : [JIT] Selectively enable precise alias analysis for TupleConstruct (#66025)
e4ee5ca698 : Revert D31326599: [pytorch][PR] Compile without -Wno-unused-variable
5ef350d7cc : Revert D31359010: [pytorch][PR] Fix clang-tidy regressions caused by #65954
c269f471f4 : Fix clang-tidy regressions caused by #65954 (#66040)
ca76e193a3 : Fix nll_backward for negative weights (#64572)
eb3b9fe719 : [XROS][ML] System specific adjustments for UTs to work. (#65245)
363ccb257d : GELU acc OP (#65957)
a6280ab653 : Compile without -Wno-unused-variable (#65954)
10f6294281 : Fix shape inference dim_type for Clip, Mean, Div (#65996)
e1d963e8fc : model_dump: Fix memory computation when both constants and data tensors are present (#66006)
23caeb3f71 : model_dump: Add a helper to produce html with a single call (#66005)
d9a95e66f0 : Upload test failures to RDS (#65873)
f85d7422bb : [fx2trt]add support for torch.tile (#66016)
060e41eafa : Forward fix type hint for DataLoader (#66001)
ad889d0b5e : Revert D30634700: [pytorch][PR] Fix typo in tensor docs
7d22007902 : [fx-acc] add acc_op optimization flags and decorator (#65928)
d937473709 : Fix typo in tensor docs (#64160)
8e8695285f : Re-generate workflows (#66027)
894d296bae : Remove usage of GitHub's artifact store in linux jobs (#65875)
6e8ffd191e : Fix typo in name of LayerNormBackwardCUDAKernel (#66000)
ffede499b2 : [PyTorch][Static Runtime] Fast path for contiguous to_copy (#65499)
7b10a76e05 : [PyTorch] Try removing Android strtod implementation (#65713)
176d3c6fb4 : [PyTorch] Fix many Tuple::elements() callsites (#64065)
f14e5e636d : [fx2trt]fix slice tensor converter (#65960)
21eebc9fd6 : [PyTorch][easy] Use copy-and-move instead of copy-and-swap in IValue::operator= (#65826)
592481a5cc : [fx][const_fold] Refactor to use base split module to simplify, and correctly handle non-single-Tensor outputs (#65933)
34682377b9 : [iOS][CI] Update dev certs (#66004)
ccf8d48f16 : Revert D31317680: [pytorch][PR] Avoid saving self for `softmax` and `log_softmax`
21da6ae9ce : suppress mypy error (#66003)
eac218dbc6 : Revert "Port `sort` kernel to structured kernels. (#62391)" (#65876)
5f7cadc7aa : Avoid saving self for `softmax` and `log_softmax` (#65242)
383c0a3858 : Fix internal assert failure for torch.all and torch.any with requires_grad=True (#65714)
53c0d91db9 : Make autograd codegen for differentiable outputs safer to use (#65823)
bff8d8fd28 : [nnc] Add BufHandle.store to python API (#65213)
8cf047afac : [nnc] Add call_with_numel interface for fast CUDA calls (#65213)
8595b6eeed : Avoid UB when indexing into size-0 tensors (#65878)
fc52f1293e : Improve pytorch type hints (Dataloader, trig functions)
982ef8837b : [Static Runtime] Fuse ListUnpack + gather_ranges_to_dense (#65116)
227e37dd39 : pytorch quantization ao migration phase 2: caffe2/test (#65832)
dac35b3592 : pytorch quantization ao migration phase 2: torch/jit (#65829)
e3af4be963 : pytorch quantization ao migration phase 2: caffe2/benchmark (#65833)
c1447f06a8 : [special] special alias for softmax (#62251)
c27b427cd9 : [sparsity] Add m-out-of-n support in the WeightNormSparsifier (#65295)
8b1aa85388 : [sparsity] Change API to take FQNs as configuration (#65296)
ea0de37d2e : [PyTorch] Avoid string construction from const char* and speedup empty string creation if error messages are suppressed (#65939)
2828ce53fd : Added jit log stream changing function and some refactor (#65768)
33c03cb61a : [deploy][1/n] Make deploy code conform to PyTorch style. (#65861)
765b6a90f3 : [TensorExpr] Move lowerings registration from kernel.cpp to lowerings.cpp. (#65553)
015e0079e3 : [TensorExpr] Move 'compute*' functions to operators/... (#65552)
3a0165da49 : [TensorExpr] Port NNC lowerings to the new registry mechanism. (#65551)
eee9ad0fdd : [TensorExpr] Add a skeleton for a registry of NNC lowerings. (#65550)
d84191fcc6 : [TensorExpr] Kernel: make prim::ConstantChunk handled like other ops. (#65549)
a6ad2b41ac : [Static Runtime] Make module_ optional in StaticModule (#65882)
08df4c2b3c : slow_conv2d grad_input: avoid dispatch in parallel region (#65725)
6502fb89dd : Make JIT Aliasing Test Less Brittle (#65493)
4f5ea5983a : [QPL] move metadata logging to markerEnd for model run QPL (#65451)
2481c06496 : [caffe2] fix LLVM-12 nullptr-with-nonzero-offset UBSAN error (#65506)
f6dfac6974 : Migrate THCCachingHostAllocator to ATen (#65746)
d39790340d : [ONNX] Enable export of __xor_ (#64042) (#64581)
e598ba2ef3 : [ONNX] Fix inplace fill_ dtype export mismatch (#64233) (#64580)
89cbe6229d : [ONNX] Update doc and error message for indexing export (#64290) (#64579)
d4ff344fae : [ONNX] Fix remainder export (#64230) (#64578)
0f0ef4fe64 : Add onnx test for batched_nms (#53175) (#64381)
7e15f2ddaa : [ONNX] Fix gather squeeze axis in constant folding (#63588) (#64379)
41bdfe3919 : [ONNX] Fix cuda test case (#63597) (#64378)
2d61009f4a : [ONNX] Fix input sequence for pad op (#60554) (#64377)
f17ee368b3 : Fix empty size constant creation (#63607) (#64376)
84190dafa8 : [ONNX] Update instance_norm implementation and support training (#60538) (#64375)
3d6d4f4322 : [fx2trt][quant] Add lowering support for per channel quantization in fx2trt (#64787)
207fefc988 : Delete rogue cu102 windows builds (#65961)
b3da2afebe : Clarified difference in behavior of `empty_strided` and `as_strided` (#64568)
22f36353dc : Revert D31137652: [pytorch][PR] Skip failing tests when LAPACK and MAGMA are not available
6285348f06 : Implement n-dimensional hermitian FFTs (#63890)
70f9f58a71 : Add __module__ to torch.dtype.__dict__ (#65182)
38c77539e8 : [PyTorch][Edge] Fix inefficiency in objLoaderMobile (#65710)
8f3983254b : [MicroBench] Added a micro benchmark for prefix sum (#65790)
24f59fa20b : [ci] fix softmax bc check (#65952)
d4d3bb91f9 : Refactor `OperatorSupport` related code and fix TRT not supporting int64 dtype (#65848)
9ae63bd87c : Revert D31238123: [pytorch][PR] Avoid saving self for `softmax` and `log_softmax`
541eb1db63 : Add cuSPARSE descriptors and update CSR addmm (#60838)
be00f0207a : Update git version for CentOS base dockers (#65703)
8297a16cc0 : [ci] try installing libgnutls to fix cert error (#65934)
6a30d83596 : Move ASAN to GHA (#65846)
cdbfb2b689 : .github: Bump linux and windows gpu max available (#65923)
928a4bbafb : [JIT] Fix compilation unit reference link in constant object upon load (#65784)
8130157504 : [DataPipe] Fixes an issue where TarArchiveReader closes stream when read into a buffer (#65877)
7f87ff183d : [RFC] [Modular] Include less headers in vararg_functions.cpp (#65672)
ea776fa034 : Update CODEOWNERS for optim (#65773)
b777d790ea : Convert Sampler back to lazily construction (#63646)
4666e3f192 : [quant] update fused_obs_fake_quant op to accept output_fake_quant argument (#65621)
6d4b93bd96 : [quant] adding memoryless observers for embeddingbag QAT work (#65699)
de80aff72d : Revert D31132861: Make JIT Aliasing Test Less Brittle
4176afc4a0 : [Static Runtime] Disable SigridTransform + ListUnpack fusion when outputs reachable from graph output (#62697)
edab202a30 : [DataPipe] add deprecation warnings for DataPipes that will solely exist in TorchData (#65827)
cd458fe092 : [JIT] Make output of prim::TupleConstruct alias only with its inputs (#64879)
dd354117ef : Skip failing tests when LAPACK and MAGMA are not available (#64930)
2c29ec2a41 : Remove "SciPioneer" from PT Distributed code owners (#65862)
91f8755b0e : Revert D31005792: [NCCL] Init dummy NCCL comms in constructor
5349ea921b : Migrate THCIntegerDivider.cuh to ATen (#65745)
3900509b7d : (torchelastic) make --max_restarts explicit in the quickstart and runner docs (#65838)
c7ef620a14 : [quant] Add imports to the torch/ao/quantization/__init__.py (#64911)
fb412bdd80 : Avoid saving self for `softmax` and `log_softmax` (#65242)
768cfaa8f8 : fix typo in _sharded_tensor (#65511)
9f97c66a7a : Make JIT Aliasing Test Less Brittle (#65493)
91611fe1d1 : Decouple forward AD checks from backward AD in OpInfo tests and gradcheck (#65040)
5950240bdf : Stop Win+CUDA-10.2 builds (#65649)
2b22a5dde2 : [NCCL] Init dummy NCCL comms in constructor (#65173)
ad85b582da : Remove THCDeviceTensor (#65744)
20374c991b : slow_conv2d_forward: avoid calling dispatcher in parallel region (#65724)
7191dd2613 : Update Module docstring for Python 3 (#65748)
8bf0ba546e : ns for fx: add basic testing on cuda (#65593)
0dd1b74a5b : Migrate THCScanUtils to ATen (#65743)
a84feeeade : [PyTorch Edge] Conditionally trim dispatch key set to save heap memory at runtime (#65732)
7b5d676fa1 : .github: Bump linux gpu max limit to 100 (#65831)
c975ca4337 : [Static Runtime] Simplify out variant overload implementations (#65384)
2f712c452e : .github: Remove confusing on_pull_request variable (#65731)
6c2f235d36 : common_utils.py: Add ASAN as a platform for which you can disable tests (#65791)
911d01c1de : type annotate operator_support (#65136)
085e2f7bdd : [ROCm] Changes not to rely on CUDA_VERSION or HIP_VERSION (#65610)
9b40eaaaab : Revert D31193205: [pytorch][PR] CMake: Limit python include directories to only python libraries
2670cacfc2 : LLVM-12 fix for tensor_new.cpp (#65785)
09eb3e661c : don't check 0 elements for cat symbolic diff (#65751)
1d681c1ab2 : Migrate THCThrustAllocator to ATen (#65492)
971c57f1d0 : CMake: Limit python include directories to only python libraries (#65654)
5f7ab7be6f : [Static Runtime] concat_add_mul_replacenan_clip retains axis arg (#65741)
f63150fd1d : [PyTorch Edge] Reduce the cost of computing isIncludedInAlias() (#65735)
aebde1bc2b : deprecate device getter from `torch.testing` namespace (#63844)
07d5d7b5cc : move kernel launch checks from `torch.testing` to `torch.testing._internal.check_kernel_launches` (#60862)
0a0564a347 : Revert D31206837: [pytorch][PR] `*_solve` methods: implements forward AD
f9c2dc860d : make layout check optional in torch.testing.assert_close() (#65419)
8a247fb418 : LLVM-12 fix for shm_mutex (#65781)
4a7a0ea42e : Skip flaky ASAN tests (#65792)
d528c7f3c0 : .github: Move windows back to default directory (#64962)
ed4491be6f : Fix error code checking for Windows build scripts (#57331)
0d7036fdaf : don't leak build time path name to runtime for frozen python modules (#65715)
72b27bde83 : [CIFlow] Modify workflow trigger logic (#65733)
b3c32ad32f : .github: Move calculate-docker-image into build (#65789)
609384c056 : [sparsity][doc] Docstring for WeightNormSparsifier (#65294)
92ee5cc2e2 : [sparsity] Fix for accumulation bug in WeightNormSparsifier (#65293)
a90912ecc5 : [sparsity] Remove the pack_param from the sparsifier state_dict (#65292)
c829cb6840 : Port `min` kernel to structured kernels. (#61450)
c2252b3aa6 : Port `max` kernel to structured kernels. (#61449)
51f1569c77 : Add checks for structured in-place operations. (#65686)
93852bb2d4 : Port `sort` kernel to structured kernels. (#62391)
57529d48c4 : [quant] Fix applying non-zero offset 1 to null pointer in quantized interpolation (#65570)
4752453d27 : [Structured Kernels] Port for `baddbmm` and `bmm` (#64805)
278edb5626 : .circleci: Only generate docker configs we need (#65728)
145202c45b : Define timeout in TestIndividualWorkerQueue (#65742)
50edc2679d : onnx/test.sh: Run test/onnx in only shard 1 (#65722)
87cd658c27 : Add override to virtual destructor in derived class (#65476)
57e5ae5306 : [vulkan] Use push constants instead of SSBOs (#65716)
e155e7520f : MaxUnpooling: parallel_for not always backed by OMP (#65655)
26e31f76b0 : `*_solve` methods: implements forward AD (#65546)
2ea724b1fd : Added option to update parameters using state_dict in AveragedModel (#65495)
3324bae5f1 : Remove THCTensor.cu and THCTensorCopy.cu copy (#65491)
6a99053515 : Added sparse-tensor copy logic to dispatcher (#65304)
43d47bdcca : [tensorexpr] conv2d handle optional bias (#64750)
31ea4358d8 : [tensorexpr] Add Op handling for mobilenetv3 large (#64741)
c28e3ffb4b : [jit] Shape propagation batch_norm, dropout, quantize, hardswish (#64740)
46b3fc032a : Migrate remainder of THCDeviceUtils.cuh to ATen (#65472)
12137db5e3 : Fix the slowdown of _object_to_tensor since 1.9 (#65721)
002ff19836 : [acc_utils] Fix off by one for model info getter (#65708)
63bb7c6dba : Refactor AotCompile to return a pair (#65707)
e9327ed2ce : Add nn.function.hardtanh in acc_tracer (#65639)
6a6ee92e36 : [quant] Add op benchmark for CPU FakeQuantizePerChannel with float zero_points (#65241)
7c62b6e973 : add deepcopy support to subclasses (#65584)
f5b4e369f6 : Sparse SoftMax: Remove unused variables (#65539)
e1340d4282 : [GHA] Small refactors (#65647)
fea32be964 : Add HPU type for check_base_legacy_new (#65410)
82e0bf44c0 : Apply linter suggestions to #65137 (#65459)
811601e19a : Upload sccache stats (#65582)
ea546e20fd : [Reland] nn.functional.linear OpInfo (#65498)
b91375f741 : upgrade windows cuda installer: cu11.1.0 to cu11.1.1 (#65669)
cd2656a2e5 : [package] add some docs describing how to debug dependencies (#65704)
10d0dbc6d9 : Avoid storage access for HPU tensors (#65409)
aa5d2a8d86 : Remove confusing SHARD_NUMBER resetting logic (#65701)
facff2ec65 : Update ProcessGroup collective C++ APIs to be non-pure virtual functions (#64943)
cd80bbe5f5 : Bug fixes in dataframe_wrapper (#65629)
1c8949c51a : [BE] Run Zero test internally (#65519)
f70147b426 : [BE] Enable ZeRO test on windows (#65385)
4fe66d962d : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
146817c9d0 : Add all_paths utility function (#65602)
0256c3be50 : [TensorExpr] Delete dtype_ field from Let - it should use its var's dtype. (#65634)
399214efd6 : Revert D31172530: [pytorch][PR] Enable CUPTI for kineto by default on windows
cda2ee9016 : Add nn.function.hardswish in acc_tracer (#65590)
1de8976e85 : Add quantized::convtranspose2d (#63914)
ab5eb56983 : add qmul (#63913)
ece25c453f : [PyTorch] Store Argument::alias_info_ on the heap (#64824)
af7238f214 : Rocm4.3.1 nightly (#65624)
15724bcc03 : [TensorExpr] Re-enable a float16 test. (#65632)
0d3bf97fd0 : TST Adds test for non-contiguous tensors (#64954)
a839cec0ad : .github: GHA retry docker pull (#65103)
68e5935498 : Remove fgrad_input from slow_conv2d (#64280)
71d1d16acb : Moving the constant parameter check to a more common file (#64251)
640a615150 : [easy] [PyTorch Edge] Remove double pragma once directive in the generated code (#65620)
57e066e188 : TST Adds gradcheck and gradgradcheck to module info (#64444)
6b60884f12 : Enable CUPTI for kineto by default on windows (#65608)
eca4f14b6c : [PyTorch] Add C10_ prefix to MPARK_* macros in variant.h (#65589)
7f25c3e666 : Update distributed.rst to show that CUDA send/recv on GPU is supported (#65601)
760aefd34d : Fix nullptr addition (#65548)
c3b09e977a : [fx2trt] Refresh execution context across save/load for TRTModule. (#65592)
1682722152 : keep output type after calling SubgraphRewriter (#65453)
f3587f6bfa : Remove THC ScalarConvert (#65471)
5b2a7eaa03 : [codemod][fbcode/caffe2] Apply all buildifier fixes
b858993c97 : Fix engine check for case where grad is a subclass (#65568)
e742839f0e : Fix autograd engine test in python_dispatch (#65567)
ef9e560796 : [Static Runtime] Add aten::remainder out variant (#64967)
b003b2a9c0 : [Static Runtime] Add record functions (#64698)
fd24e1b61f : add `OpInfo` for `torch.repeat_interleave` (#65455)
d85e12a5bf : add OpInfo for `torch.argsort` (#65454)
ca66698202 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
cc4db35205 : [TensorExpr] Break circular dependency of shared pointers in MemDependencyChecker. (#65600)
01720d6a23 : [JIT] constant object compilation unit ref fix (#65442)
f83250fd4e : Revert logic in `mobile/type_parser.cpp` (#65556)
20143bf07f : [ONNX] Deprecate use_external_data_format param from torch.onnx.export() function. (#62257) (#64382)
478d4cf883 : [ONNX] Deprecated the example_outputs param from torch.onnx.export() function. (#62815) (#64380)
9323ea2195 : [ONNX] minor doc improvements and cleanup (#62514) (#64373)
9965163751 : [ONNX] Add supplementary tests and description for custom_opsets param from torch.onnx.export() function. (#62085) (#64372)
fb71ccf0f1 : [ONNX] Remove strip_doc_string param from torch.onnx.export() function. (#61712) (#64371)
47d1ed60e1 : [ONNX] Remove argument _retain_param_name from torch.onnx.export() function. (#61702) (#64370)
bc02255d5e : Revert D30721329: [pytorch][PR] Enable CUPTI for kineto by default on windows.
8c7caedbb8 : avoid re-allocation of view_shape for every tensor in `torch.meshgrid` (#62908)
963ae25e41 : Migrate THCAtomics to ATen (#65470)
c73f0e457e : Tensor and device is_hpu methods (#65408)
d78b3909e8 : Explicitly destroy ProcessGroup in allgather_coalesced_async test (#65513)
b77c979102 : [quant][fx][graphmode] Make FixedQParam ops work for dtypes other than quint8 (#65484)
a2e631b874 : Windows GHA: Only upload artifacts if prev steps pass (#65561)
7dbc21bc2b : Enable CUPTI for kineto by default on windows. (#62175)
f850d7ef2e : [CoreML][OSS] Add Simulator tests (#65076)
2a0208f4dc : fixed comments referring fairscale master branch (#65531)
c015cbabf9 : [codemod][fbcode/caffe2] Apply all buildifier fixes
d07b2cb4ec : [fx2trt] update the oss fx2trt exmaple (#65544)
71704349aa : [DDP] Allow await of custom buffer reduction in backward (#64515)
36485d36b6 : Docathon: Add docs for nn.functional.*d_max_pool (#63264)
1f0f246fe2 : Automated submodule update: FBGEMM (#65360)
65fbd2c12b : [ci] do not continue through error on trunk (#65503)
7e772e7685 : Update link to tutorial on defining NN modules (#65534)
cac7c1a192 : [ci] remove auto-label-rocm workflow (#65558)
c731be8066 : [BE] Use `DispatchKeySet` in `check_base_legacy_new` (#65535)
da166d4f12 : Add a timeout argument to RPC shutdown() (#65425)
97b535dabd : [PyTorch] add fastToString for infer_schema (#64823)
eb949464d6 : [PyTorch] Fix missing moves in SchemaParser::parseArgument (#64839)
14307f7a56 : [Static Runtime] Added logging to dump the model graphs (#65509)
767a104698 : [quant] change observer FQNs generated in prepare step (#65420)
a012216b96 : [nn] Fold : no batch dim (#64909)
2a4d5e4c6d : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
9668a8a82d : [DataPipe] Update Docstrings for Tar and ZipArchiveReader (#65500)
7e7be526c9 : Add TORCH_SHOW_CPP_STACKTRACES to Contributing.md (#64052)
14949d2922 : Add nn.function.hardsigmoid in acc_tracer (#65422)
5525e9a591 : Lock unpickling of source ranges
228141f939 : [pytorch] more informative error msg from fbgemm embedding spmdm call (#65186)
0ca1102609 : [fx2trt] fuse permute + matmul using a pass instead of hardcoding it as a leaf module (#65482)
fccaa4a3c8 : [fx2trt] fix transpose unittest (#65481)
2f67579864 : [ddp] use named_params and named_buffers explicitly (#65181)
0eaf081018 : [JIT] canonicalize aten::rsub (#65014)
32f0387ee8 : Bug in CosineAnnealingWarmRestarts in optim/lr_scheduler.py (#64758)
b80bdcc73b : Add register_module alias to nn.Module (#65174)
31584d065e : [Static Runtime] Added NNC implementation for signed log1p kernel. (#65387)
1c20b98b4b : [iOS][CoreML] Check backend availability at runtime. (#65315)
2898ef7549 : Minor ScanKernels.cu cleanup (#65350)
5739f77775 : [DDP] Refactor and remove sync_params (#64514)
ce5981e431 : [DDP] Custom buffer reduction (#64513)
923f06621c : Fix Windows ninja builds when MAX_JOBS is specified (#65444)
cbc3db8274 : Create test for builtin tensorrt module in torch deploy (#63819)
72fc53ff27 : .github: Add timeout for test step (#65486)
f24bd43375 : Changing type and name of local_used_maps to reflect that it is only one map (#65380)
0fe86ac6c6 : Fix torch.any documentation (#65310)
a0dea074b2 : Remove `.data` from benchmarks and tensorboard (#65389)
70a545b21e : Add Tensor._make_wrapper_subclass (#65340)
11ca641491 : [docs] Add images to some activation functions (#65415)
158393e1a1 : Fix autograd engine checks and update InputMetadata (#65235)
db4b68b3ac : Back out "Eagerly populate python_error::what() when TORCH_SHOW_CPP_STACKTRACES=1"
b3ec88f41f : ugh (#65477)
152f0236c3 : Revert D31082693: Fix autograd engine checks and update InputMetadata
7c9a278895 : fix trailing newlines (#65474)
508845f2b5 : [quant] AO migration of the `torch/quantization/quantize_fx.py` and `torch/quantization/fx/*` (#65033)
762c2276e1 : feed model merge net lower benchmark (#65191)
bcc6e3ab5e : add python API to print all operators that have kernels registered to a particular DispatchKey (#63575)
9324d682fd : Fix autograd engine checks and update InputMetadata (#65235)
f90d9b48db : test_neg_view: preseve sign of sample input (#63010)
9d17f21e46 : Added PandasDataframeWrapper (#65411)
3c6d9fd124 : Eagerly populate python_error::what() when TORCH_SHOW_CPP_STACKTRACES=1 (#65376)
2c7df1360a : Bump torch version to 1.11 (#65435)
96383ca704 : Unify the output pathname of archive reader and extractor (#65424)
e331beef20 : Delete code coverage jobs from CI (#65362)
127c9402d0 : Revert "Revert D30752939: [pytorch][PR] nvfuser update" (#65137)
feefc94573 : [fx2trt] Use itensor_to_tensor_meta to track the TensorMeta info for ITensor node (#65427)
64d3c7388f : [RELAND] Enable ncclAvg for reductions (#62835)
3f5f721ab3 : Pass through allow-list from prepare_qat into propagate_qconfig_ to allow custom mapping and custom QAT module (#65119)
158b8bdc8a : Cleaning up DDP SPMD in reducer.cpp (#64113)
27faa7a560 : [ONNX] Support torch.isfinite export (#64759)
5aa33770f5 : .circleci: Remove Windows workflows from Circle (#64959)
a1216061c1 : [DataPipe] Fix deepcopy filehandle for Mapper and in-place modification for IterableWrapper (#65220)
73c4bfc30a : [ONNX] Add log10 symbolic (#63418) (#64374)
1fec9cd76b : [Fixed] Enable Half, BFloat16, and Complex dtypes for coo-coo sparse matmul [CUDA] (#59980)
8bab468943 : Reduce test size for max_pool (#65336)
cd813f16bf : Add functional api for `nn.Module` (#61447)
c245632e2e : Use higher timeout for TSAN tests. (#65391)
28bfdbb066 : OpInfo for `nn.functional.batch_norm` (#63218)
9afdf017dc : Add force_on_cpu test to win cuda10.2 on GHA (#65094)
00b732e98b : Remove orphan from cuDNN persistent note (#65160)
c0eb266c02 : [Static runtime] Micro-optimization pass on GetLivenessMap (#65175)
6d7bc34b67 : Make new_empty/new_ones/new_zeros/new_full respect subclass (#65169)
04a5e45aeb : [PyTorch] Compare Type pointers before calling operator== in EqualNode (#65352)
88232b4cee : Fix ENABLE_RECORD_KERNEL_FUNCTION_DTYPE build (#65370)
eb4fb1ed81 : THCTensor cleanup (#65369)
600df80296 : [PT/ShardedTensor]Allow zero size local shard (#65007)
7f6580a868 : OpInfo: nn.functional.conv2d (#65233)
9324181d0a : [JIT] Re-land "Add aten::slice optimization" (#65341)
9c23f6eb7d : [nn] TripletMarginLoss and PairwiseDistance : no batch dim (#64882)
d35ee431d8 : correlate forward and backward op (#62553)
f0ada4bd54 : [docs] Remove .data from some docs (#65358)
daa50f1e9f : Adds keyword only args to gradcheck (#65290)
880098a7e3 : [PyTorch Edge] Backport function for defaults args with out args, flag on (#63651)
5826d207ad : [JIT] Delete obsolete message: or if you absolutely have to, use c10::impl::GenericDict(c10::impl::deprecatedUntypedDict()) (#65164)
19a1063888 : [JIT] Support device as Dict key (#65079)
512834b61d : Reduce PyTorch warnings: Cast fix xplat/caffe2/aten/src/ATen/core/DeprecatedTypeProperties.h (#65031)
0dc98728bc : Basic implementation of ShardedLinear using ShardedTensor. (#64128)
257a18d951 : Track peak memory usage (#65157)
58909395ab : Fix logic to determine master vs PR (#65155)
60915eb810 : [quant] Add fp32/fp16 zero_point support for CPU fakeQuant (#65055)
ce101fed02 : [PyPer] copy-free freeze_module (#65118)
ca649851c6 : Reduce PyTorch warnings: Cast fix xplat/caffe2/c10/core/TensorOptions.h (#65030)
2465a103b8 : [iOS] Zero out NSError to avoid heap corruptions for the OSS builds (#65355)
b7adb3350a : Add crow_/col_indices to view types (#63176)
31f61122da : Creating a helper function to generate an unique name for an attr in a module (#64970)
b45ec16310 : Add support to lower acc_ops.transpose (#65036)
e33a1fa680 : [fx] give warning instead of fatal the program when submod not found during adding get_attr (#65225)
8fb253757d : Remove @balioglu from PyTorch Distributed code owners (#65239)
e3210ca184 : [CUDA graphs] Beta, not prototype (#65247)
b71f01f70d : Fix full backward hook when grad is disabled (#65335)
2abf3594d5 : Fix unassigned ciflow trigger (#65354)
378949b83c : fix typo missing f string (#65226)
0430d1da12 : [iOS] Fix the TestApp (#65319)
3e64c9e176 : [Pipe] Add a `WithDevice` wrapper to specify device execution for a module. (#65190)
0a3cf8886a : Torchhub: More robust assumption regarding main or master branch (#64364)
99e4ab5d44 : [Static Runtime] Implement and enable variadic tuple unpack (#64934)
14347d0dd5 : [quant][fx][graphmode] Fix a bug for sub (#65109)
c562ebca23 : Revert "Revert D30558877: Ported std/var to ReductionOpInfo" (#65262)
fb1e6835cc : simplify `torch.meshgrid`'s shape computation (#62905)
cf60d24028 : [DataPipe] Unlimited buffer for Forker and Demultiplexer (#64994)
88032d8943 : Automated submodule update: FBGEMM (#64640)
d8189db80f : [quant][fx2trt] Generate engine graph for explicit quant/implicit quant and fp16 graph (#65289)
7f8d622d70 : [Static Runtime] Add perf metrics for number of managed tensors & unmanaged values (#64992)
4a128ed811 : Remove incorrect stride assert in Reduce.cuh (#65227)
543185a0fd : support using gradients named for outputs in derivatives (#63947)
926a3d2e85 : clarify implementation of check_grad_usage (#64439)
d3e36fade2 : [quant][fx2trt] Enable comparison with implicit quant mode (#65043)
4150b672aa : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
6707dfeefb : Remove 9.2 related macros for CONSTEXPR (#65066)
1cd9018b6f : Make github.com in noproxy list (#65256)
50c29fef3e : remove utils.cpp (#65184)
19471c54a6 : [fx const fold] fix a case when some inputs are unused (#65223)
992dad1855 : [Profiler] Update kineto submodule (#65236)
4408b755bc : [fx2trt] re-enable profiler and some miscs for TRTModule (#65072)
afa25c77f1 : [package] Make it possible to re-save a PackageImporter module (#65101)
487c771593 : [FX] Fix tracing of bitwise and/or (#65196)
6596173811 : Revert D30731191: [pytorch][PR] Torchhub: rewrite commit hash check to avoid using unnecessary GitHub API credits
3d32dec5ba : [ONNX] Deprecate enable_onnx_checker argument in torch.onnx.export() (#61708) (#64369)
ae00075ac7 : [Static Runtime] Move MemoryPlanner out into memory_planner.cpp (#65123)
eaf85fad62 : [PyTorch] Extract parseOperator() into a standalone source file (#65179)
35084ee451 : [PyTorch] Improve OperatorEntry::getKernelForDispatchKey (#64838)
fcaf526815 : avoid moving Argument in infer_schema (#64822)
79cbcd3e7c : [PyTorch] Fix missing move in Argument ctor (#64821)
5a3475df21 : [PyTorch] shrink Argument (#64820)
132d65ed25 : [PyTorch] Compare pointers before calling expensive Type comparison (#64784)
cf5c00f155 : CI: Consolidate Build and Test naming for better stats collection (#65232)
45bd0f6181 : Back out "Revert D30745960: [DDP] Remove SPMD from self.modules_buffers" (#64778)
70f286c1e2 : Back out "Revert D30745961: [DDP] Remove self.modules_params" (#64777)
61dfcbf4bc : Back out "Revert D30745921: [DDP] Fix when buffers are reassigned in module" (#64776)
cce5381238 : [xplat][pytorch]: Disable excessive logging. (#65170)
047e68235f : delegate parallelism to Ninja when possible (#64733)
b936a10074 : add test for number of jobs when building (#65162)
1ee66a5278 : Remove CUDA 9.2 references conditionals and workarounds (#65070)
51e12f0071 : fix torch.distributed.elastic event docs (#64974)
bbe25af0df : [nnc] Updated inlining to handle cases when producer indices are constants after eval (#65044)
03fc636d5c : [nnc] Updated inliner to remove assertions and exception (#64719)
340531f2e0 : [ONNX] Do not use `numpy` in ONNX opsets (#65188)
7ced25eee3 : [CoreML][OSS] Include Core ML in iOS/MacOS nightlies (#65075)
f9c0a39ad9 : add a test case for const fold (#65224)
3c003aa6ae : [PyTorchEdge] promote prim ops by using ops table for mobile runtime (#64816)
ecfc784e67 : Revert D30993855: [pytorch][PR] OpInfo: nn.functional.conv2d
18fa58c4e9 : [CoreML][OSS] Integrate with CMake (#64523)
c1415a0a72 : [Reland] [Model Averaging] Simplify PostLocalSGD Optimizer API (#65197)
752a820230 : Bf16 matmul (#64619)
f9bf144a0c : Torchhub: rewrite commit hash check to avoid using unnecessary GitHub API credits (#64362)
0559cb37cd : [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
cf7409e184 : [FX] Move graph_manipulation and param_fetch out of experimental and into passes (#65183)
6aa04b6843 : [fx2trt] make gpu trace better (#65168)
a8d7b885c5 : [CoreML][iOS/MacOS] Add the CoreML executor (#64522)
aafeea3a6c : Allow extra unused arguments in symbolic shape function (#65095)
6eafe7f15e : Actually deprecate __torch_function__ as plain methods (#64843)
1ed9c33d08 : Update fx proxy to use classmethod for __torch_function__ (#64842)
473e55d5b2 : Use classmethods for overrides (#64841)
a95fabfecb : Fix port allocation race condition for elastic test (#65149)
f101070587 : Small improvements to compare_models_torch binary (#65171)
9601deb1b3 : Disable autograd fallback tests on Windows (#65147)
aaffcfe9cd : implement "xy" indexing for torch.meshgrid (#62724)
d37c02be08 : Allow parametrization to be nested (#65167)
9157a2889f : Pass GITHUB_TOKEN to linux CI jobs and avoid skipping torchhub tests (#64807)
7dc3858deb : [CoreML][fbcode] Add the `preprocess` python APIs (#64521)
8241193d76 : [Static Runtime] Introduce static_runtime::dict_unpack (#64771)
e6c39a521b : [ONNX] Update submodule to 1.10.1 (#63716) (#64576)
9117eed6ed : [FX] Add torch.ops.profiler._record_function_{enter,exit} as stateful ops for DCE (#65180)
02dec91212 : [quant] AO migration of the `torch/quantization/utils.py` (phase 1) (#64919)
64641eaee6 : [acc_utils] Add print_model_info (#65045)
8c38d141df : Add back the owning_module fix (#65159)
c886406ce0 : Add dropout shape inference as no-op in acc_tracer (#65113)
6f120ada50 : Pin SciPy to 1.6.2 on Windows (#65017)
0a5149019f : Added logging for the Reducer's non-member functions. (#65023)
873255c6d9 : OpInfo: nn.functional.conv2d (#63517)
4c4c03124b : Remove old references to 9.2 in documentation (#65059)
4c15f8e8b4 : Provide function interface for `remove_duplicate_output_args` (#65134)
f9c341fdf2 : Add type annotation for `TRTInterpreter.run` (#65135)
8a094e3270 : [quant]ao migration for quantization mappings and fuser method mappings hg mv (#64985)
9af6fe991c : Remove CUDA 9.2 and older references from our cmake (#65065)
67570a60ba : Disable ParallelTBB (#65092)
96cb05b49a : Introduce tensorRT as builtin module for torch::deploy. (#63818)
8eb21488fd : [JIT] Improve BatchMM mutability handling (#65097)
f309f8fbd4 : [quant] ao migration of observer and qconfig (#64982)
97e86cf319 : [Fix] Raise error when empty index tensor is passed (gather) (#65006)
874f9bd509 : [FX] Gate FXGraphDrawer on whether pydot is installed (#65088)
2c57bbf521 : add support for indexing to meshgrid (#62722)
67bd2a31b5 : [Reland] Add python mode (#64360)
8800a8b428 : Revert D30888794: [Model Averaging] Simplify PostLocalSGD Optimizer API
83878e19ff : Improve LSTM documentation for proj_size > 0 (#65102)
f69cf3cf2f : [Static Runtime] Use FastSet instead of std::set everywhere (#65114)
0bda7476cf : Reduce PyToch Warnings - Cast fixes from D26624430 (#65015)
db601434ef : Bug fix (#65105)
2bb898e039 : [acc_ops] Add support for torch variants of squeeze and mul (#65037)
206646d6ed : Add NNC AOT Compiler executable (#63994)
e0ecd09011 : [quant] AO migration of the `_correct_bias.py`, `_equalize.py`, and `_learnable_fake_quantize.py` (#64917)
3ceecebed0 : .circleci/.jenkins: Remove 9.2 references in CI (#65024)
d9d8250e3f : .github: GHA add retry for docker run in chown workspace step (#65104)
03389dc851 : Revert D30752939: [pytorch][PR] nvfuser update
c151d62f45 : [quant] AO migration of the `quant_types.py` (phase 1) (#64916)
a42996f16e : [quant] AO migration of the `fuse_modules.py` (phase 1) (#64913)
7e9c599784 : [TensorExpr] Add a method for sanitizing Var and Buf names in Stmt. (#65010)
3d5923366d : .github: Enable only specific workflows for canary (#65099)
59c486f2f3 : ci: Disable jit legacy on circleci, enable on gha (#65106)
b75d3cae4c : CI: Upgrade windows 10.1 jobs to 10.2 (#65080)
3f27c1ae78 : Replace windows 10.2 smoke tests on PRs to be 11.3 (#65090)
ec1af11c2e : Revert D30883290: [Static Runtime] Move MemoryPlanner out into memory_planner.cpp
37bcefa248 : [quant] Removing hardcoded "torch.quantization.observer" for migration (#64981)
fe0f9d1daf : [Caffe2][easy] Avoid spurious vector copy in TransposeOp (#64403)
208cf051d4 : [Caffe2] Don't pass vector by value in SqueezeOp (#64400)
177ebea4c5 : Use RDS for build size tracking (#64303)
cfaecaf40b : nvfuser update (#63745)
59988f81bd : Add embedding shape analysis (#64323)
29514bfcdb : Max Pool with indices (#64121)
2626cd3ba4 : Add Maxpool to shape analysis / Opinfo (#63530)
425f173f9d : [quant][refactor] Change the structure of the ao migration tests (#64912)
2967a48b78 : Add retries to ECR login step (#65013)
df3d649380 : To add state dict and load_dict for Chained Scheduler (#65034)
6512838fab : [ONNX] Enhance shape (two changes merged) (#64585)
0e11454d19 : [Static Runtime] Move MemoryPlanner out into memory_planner.cpp (#65011)
db134a6843 : (torch.distributed.elastic) properly format traceback on error (#65041)
4bf7959de2 : Remove `run_functional_checks` from `test_autograd` and create necessary OpInfos (#64993)
21017ad1a1 : Dispatch.h: Avoid including ivalue (#64165)
211ad231dc : To add state_dict and load_state_dict to SequentialLR (#65035)
8a652e0e91 : [CircleCI] Disable pytorch_linux_xenial_cuda10_2 test jobs (#65071)
f1ce64a58e : Starter Task 1 (#64927)
dab6496dbe : [ROCm] Update CI images for ROCm 4.3.1 (#64610)
54d060a8c9 : Port `all` and `any` full reductions to structured kernels. (#64642)
54cdf651fd : [PyTorch] remove string_view::operator[] bounds check (#64670)
57420a6063 : [PyTorch][easy] Add cbegin/cend to SmallVector (#64682)
bdbc622988 : [PyTorch] Avoid extra std::vector in parseSchemaOrName (#64678)
0f1bccb692 : [quant] Removing unnecessary import from torch/quantization/quantize.py (#64910)
3fb33b38b9 : [Static Runtime] Check if outputs of a node do not overlap with each other (#63013)
26e43fe9f3 : Forward fix SkipInfo missing mypy (#65063)
fb8bdb8039 : When testing set_affinity, don't hardcode the CPU ID (#65042)
c625f971d3 : [DataPipe] Make TarArchiveReader and ZipArchiveReader accept FileStream with attempt to close and additional warning (#64788)
32c5da8cd2 : add `OpInfo` for `torch.nn.functional.dropout` (#62315)
d6d286f651 : [dnnlowp] reduce num of test cases to avoid time out (#64935)
b7ec7d760d : Generic test parametrization functionality (#60753)
6ab97fbc28 : [vulkan] Use volk to load vulkan libraries and fix Windows build errors (#64988)
ff6b475d4a : [fix] don't expose unique_dim in torch (#63080)
36cac2be4d : [CUDA graphs] moves memory sharing intro paragraph (#64996)
36a0d97281 : Revert D30558877: Ported std/var to ReductionOpInfo and minimum/maximum to BinaryUfuncInfo
3d312b3b8e : [Model Averaging] Simplify PostLocalSGD Optimizer API (#64885)
382e008fbf : Ported std/var to ReductionOpInfo and minimum/maximum to BinaryUfuncInfo (#63978)
c65128679b : [DataPipe] Improve Mapper to accept input/output index when apply fn (#64951)
670853295a : [quant][tensorrt] Add tensorrt backend config (#64623)
85222c050f : [PyTorch] Add c10::hash<c10::ArrayRef<T>> (#64277)
5d4efed83e : [PyTorch] Add OpCode cache in ByteCodeDeserializer (#64110)
a9121df09c : [PyTorch] Remove implicit conversion from Tuple to vector reference (#63993)
452402b984 : [PyTorch] Fix SourceRangeDeserializer vector copy (#64031)
57eda69219 : [fx2trt] fix elementwise op converter with one operand being a literal and has different type (#65004)
3727baea6f : [PyTorch Edge][Model Loading] Operator Call De-dup at TorchScript Serialization Level [2/2] (#64269)
86e6bed0d4 : [PyTorch Edge][Model Loading] Operator Call De-dup at TorchScript Serialization Level [1/2] (#64268)
97df69eac6 : .github: Add render test results step (#64937)
d188204323 : remove SkipInfo class (#64972)
eedc234e33 : [PyTorch] Don't store multiple kernels per key on mobile (#64447)
446d95a7f6 : [fx const fold] fix some cases with deep model hierarchy (#64945)
00e6e0c593 : [Model Averaging] Revert #63895 (#64903)
882b67dff4 : Drop incremental linking on Windows with REL_WITH_DEB_INFO=1. (#64892)
01cfea9485 : Disable target determination for now (#64921)
4e225da363 : print_test_stats.py: dedup test report upload name with TEST_CONFIG (#64948)
e884554008 : Make {select,slice,diagonal}_backward primitives wrt autograd (#64933)
2853c7da22 : Replace composite dispatch with `CompositeExplicitAutograd` (#64641)
09d221e8d4 : Revert D30711934: [pytorch][PR] Use RDS for build size tracking
f23f21dafe : [TensorExpr] Remove 'Placeholder' class. (#64887)
199031c48e : [TensorExpr] PyBinds: improve QoL of pybind users. (#64886)
caaa6efc1a : Fix use of deprecated tensor.type() in SegmentReduce.cpp (#64151)
d4b4d83521 : [quant] handle empty input in fused_moving_avg_obs_fake_quant op (#64829)
0aef44cb3d : Add forward AD for torch.linalg.eigh (#62163)
35c82dbf5c : [THC] remove TensorTypeUtils and TensorInfo (#64965)
816048e7e6 : EmbeddingBag sort thrust->cub (#64498)
ed30afd480 : Speed up torch.unique_consecutive() (#64835)
ab5e1c69a7 : [WIP] Example of DataPipes and DataFrames integration (#60840)
ee554e2e96 : Re-land Fix test report uploading (#64958)
f159f12fee : [iOS][OSS][BE] Add Simulator tests for full JIT (#64851)
fd09e564d6 : add acc_ops.max, acc_ops.maximum, consolidate acc_ops.min and acc_ops.minimum
3855c24639 : Add BFloat16 support for cross, tril, triu, tril_indices, triu_indices and cumsum operators on CPU (#62454)
1cd0252eed : Use RDS for build size tracking (#64303)
c4073af61d : Add `skipIfTBB` decorator (#64942)
8131bc85d0 : Raise TypeError on assigned grad with wrong type (#64876)
1e25a84993 : kill SkipInfo (#64878)
3710edc86b : Fix TRTOperatorSupport (#64873)
914e3a861a : Revert D30878101: [pytorch][PR] Fix test report uploading
6101cbcedb : torch.ao migration: fake_quantize.py, phase 1 (#64814)
e4314dac57 : [PyTorch] Reduce heap allocations in OperatorName::setNamespaceIfNotSet (#64673)
000f3310d7 : [PyTorch] Add test for operator_name (#64672)
c99277e177 : handle the case in acc_ops.sum when dim == 0, differentiating it from the case when dim is None (#64869)
0561e104d9 : fix build error when system cmake3 version >=3.5 but <=3.10 (#64914)
fba40bfc1a : Fix test report uploading (#64846)
af984c78a9 : Pin SciPy to 1.6.3 on Mac (take 2) (#64922)
1bea49c716 : [Deploy] Avoid use-after-free during autograd shutdown (#64620)
fd716fcda2 : [Pytorch Edge] Quantized Ops Dtype Selective (#63680)
4ca40aeb83 : Disable more of the pragma warning stuff (#64899)
8cfc74400a : [PyTorch] Gate tls_local_dispatch_key_set off on iOS too (#64753)
d4b031b31e : typo fix (#64615)
01e92f2a56 : [nn] no batch dim support: CosineEmbeddingLoss (#64590)
2ae938e15e : Fixes failure in test_dataloader.py that occurs on jetson boards (#64757)
8e63199c7c : .github: Always run chown workspace (#64854)
70e64feda7 : Reland .circleci: Skip cuda /cudnn install if existing (#64880)
3d976d9ceb : torch.ao migration: quantize_jit.py phase1 (#64860)
9d52651d4e : torch.ao migration: stubs.py phase 1 (#64861)
c08b2491cc : add BFloat16 operators on CPU: cummax, cummin (#63307)
d932ddd24b : fix quantization.rst doc (#64802)
9c73a48ecf : ND Embeddings benchmark - Standardize randomized inputs (#64707)
b37503e452 : Initial implementation of nanmean (#62671)
8535418a06 : [Reland] Added reference tests to ReductionOpInfo (#64273)
1cb3507ed3 : Adds DLPack support (#57110)
d46ea03871 : [fix] fix test_python_dispatch with pytest (#64574)
be79da3303 : Revert D30876591: [pytorch][PR] Pin scipy to 1.6.3 on Windows and Mac
1577c106dc : torch.ao migration: numeric suite, eager and fx (#64817)
39f2b9de2a : Pin scipy to 1.6.3 on Windows and Mac (#64844)
47144de473 : Revert D30867266: [pytorch][PR] TST Adds gradcheck and gradgradcheck to module info
30a7c768d7 : [RFC] Modularize functions of parsing bytecode (#61862)
dd2d48df07 : Revert D30875977: [caffe2] [aten] Remove loose (unpaired) #pragma warning ( pop ) in TensorBase.h
d13e0c9c39 : [iOS][OSS][BE] Update XCode to use 12.5.1 (#64850)
c9eb312ce9 : [iOS][OSS][BE] Remove unused files (#64849)
82ac3f108d : [TensorExpr] Move 2 graph passes from kernel.cpp to graph_opt.cpp (#64828)
ff65f637df : [TensorExpr] Add debug logging (store/load tracing) to IREval. (#64848)
180e4fbfae : [TensorExpr] LLVMCodegen: fix lowering for UInt->Float casts. (#64862)
1f35d20a89 : [caffe2] [aten] Remove loose (unpaired) #pragma warning ( pop ) in TensorBase.h (#64870)
d4a86c1f3b : [quant][fx2trt] Add lowering support for reference linear/conv modules (#64368)
4481c87ac4 : [tensorexpr] Simplify x/100 -> 0 if x is a non-negative integer less than 100. (#64763)
5836a116d0 : Revert D30869803: .circleci: Skip cuda /cudnn install if existing
67ebde5645 : TST Adds gradcheck and gradgradcheck to module info (#64444)
c60075d4b5 : Preserve types during empty container assignment (#58911)
b4855619d1 : Always upload stats to S3 (#64853)
f3f410880a : [DataPipe] Remove ZipArchiveReader's dependency on FileLoader (#64786)
717d267e19 : .circleci: Skip cuda /cudnn install if existing (#64825)
dafa0a5a3b : [doc][hackathon] To add Adadelta Optimizer to the documentation (#63255)
d8ae3cc318 : Add more error checking in subclass creation (#64746)
89f94fc15f : Move THPVariable_NewWithVar around (#64550)
2cc9778495 : [MicroBench] Added a log_vml version of the signed log1p kernel (#64205)
cad7a4b0ea : [nnc] Added an implementation of sign op (#64033)
3fbb49e75d : Extend 2Dim embedding bag benchmarking to include 3Dim benchmarks (#64647)
227aafd1d9 : Revert D30846958: [caffe2/aten] Remove loose #pragma warning ( pop ) in TensorBase.h
5060b69d62 : [DataPipe] fixing tests related fork() to remove warnings (#64827)
ade4bf3e82 : [tensorexpr] Add 'pre_alloc' argument in python API of tensorexpr kernel (#64718)
92cd5ab1cb : Skip conjugate and negate fallback for view ops and their in-place versions (#64392)
54b72a99ef : To add Rprop documentation (#63866)
c7b03e2b83 : [ROCm] define C10_WARP_SIZE to warpSize HIP constant (#64302)
db3fcf0af3 : fix typo in torch/onnx/utils.py (#63396)
c12df2dc23 : build: bump bazel to 4.2.1 (#64455)
63b180beed : ROCm MIOpen NHWC Convolution support (#63617)
2a81e8b8f1 : Let all_reduce_coalesced and all_gather_coalesced return Future objects (#64722)
88fff22023 : `torch.lu`: forward AD support (#64742)
be091950d0 : [const_fold] Keep around node.meta for replaced folded ops (#64782)
40098f48a1 : [caffe2/aten] Remove loose #pragma warning ( pop ) in TensorBase.h (#64773)
95d98dfeec : Add TRTSplitter (#64762)
88c0ea9131 : [PyTorch] Fix missing move in torch::jit::Lexer::next (#64653)
b7b4f63bbc : [PyTorch] Use std::find in the JIT lexer (#64652)
a17d6c7f80 : [TensorExpr] Simplify TE IR before applying any transformations. (#64717)
ef2c9d7d8a : [quant][fix] Fix quantization for sub_scalar (#64603)
1b5b210f2c : [Android] print type name for IValues (#64602)
11ef68938c : [caffe2][tiny] Add logging to report what the current lengths are when mismatched lengths are detected (#64768)
d4b09dbab3 : [doc][hackathon] To add Adagrad Optimizer to the documentation (#63254)
9ad75281f6 : [Static Runtime] Fix resize_output_check warning coming from prim::VarConcat (#64765)
7f1932e1b9 : Rename profiler metadata key (#63743)
6cc8cc6e56 : Add support for lowering info during serialize_module, and add padding/partial to it (#5810)
d43fb75a21 : cat_shape_check: Fixes dimension in the error message for CUDA cat shape check and removes unnecessary offending index information (#64556)
2c243ed112 : Enable the on-demand performance PR testing to run on a specified TB branch (#64701)
517033916c : .github: Remove add_annotations workflow (#64449)
9797a32faf : [Dist/CI] Remove dist from target determinator (#64721)
46c886e8a6 : fix acc topk's handling of the case when dim=0, fix tests as well (#64727)
3d3ff4a9e7 : Fix a shadowed variable (#64695)
8deaa476ac : Added more version comparison operations (#63848)
cfa6162e5e : Reverts cat and stack warning when out= is not the expected shape (#64714)
2b41bf40c5 : To add SequentialLR to PyTorch Core Schedulers (#64037)
c3203efe80 : [pytorch] Make qlinear weight packing thread safe (#63804)
dc53546655 : `torch.lu_solve`: forward AD support (#64646)
b7c86365d1 : [nnc] Handled cast in index expression during inlining (#64716)
652a8bf7d0 : [nnc] Updated indices during broadcast to use int64_t (#64627)
459653a0f6 : Revert D30745921: [DDP] Fix when buffers are reassigned in module
5bc53ac5ef : Revert D30745961: [DDP] Remove self.modules_params
f1aaf8afcd : Revert D30745960: [DDP] Remove SPMD from self.modules_buffers
3bf93d769c : [JIT] Add gradient check in constants (#64613)
d4b1016850 : Filter out _disabled_torch_function_impl from handle_torch_function (#64689)
239366c9c2 : To add Rectified Adam Description to Documentation (#63772)
5b21f172a4 : [doc][hackathon] To add AdamW Optimizer to the documentation (#63252)
39ce801d1f : To add Adamax algorithm to documentation (#63903)
15c21fa45f : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
233e3e5bb4 : Fix lop1p lowering bug (#64724)
d0b207e68b : Migrate uses of THCReduceApplyUtils to cuda_utils::BlockReduce (#64713)
1553259520 : [DDP] Remove SPMD from self.modules_buffers (#64474)
8c09510294 : [DDP] Remove self.modules_params (#64473)
d59ecc02df : [DDP] Fix when buffers are reassigned in module (#64472)
b6544ef815 : [PyTorch] Fix MobileDebugInfo vector copy (#64030)
0d0d2f2ac5 : [PyTorch] move from input ivalues in ByteCodeDeserializer (#64029)
f5e76b4e38 : [PyTorch] Copy vectors less in Function::append_operator (#63977)
0ef32625a8 : [FX] make visualizer produce different formatted output (#64699)
86e3b2727e : Re-enable nightly doc pushes (#64708)
9a6c2a75b8 : [acc_tracer] Enable check_mutable_operations (#64456)
5c27a580ec : [tensorexpr] Allocate intermediate buffers at compile time (#64227)
527348a6fe : [tensorexpr] Add 'is_allocated' flag for buffers and use it to insert 'Alloc/Free' stmts (#64226)
f90153cda3 : [acc_normalizer] Improve error when kwarg normalization fails (#64408)
4533e76e7c : Update breakpad to an existing commit: 7d188f6 (#64666)
149f1114fe : To add Stochastic Gradient Descent to Documentation (#63805)
ff18195df9 : .github: Upgrade windows CUDA 10.1 -> 10.2 (#64658)
cc0565326c : Add plugin for linalg norm operation (#64611)
a97015f22c : Revert D30735341: Migrate uses of THCReduceApplyUtils to cuda_utils::BlockReduce
b12150608e : [fx] make const fold code more pythonic (#64451)
24e1315d4b : [quant] Enable jit tracing on quantizable LSTM (resubmission) (#64638)
d701357d92 : Factor out TensorBase that doesn't depend on native operators (#63612)
92318a9116 : Make doc previews use its own S3 bucket (#64594)
43c0f033fc : TST Adds inplace checks to module_info (#63739)
a5ad08ec70 : Migrate uses of THCReduceApplyUtils to cuda_utils::BlockReduce (#64442)
deb9775c07 : .github: Run docker containers in detach mode (#64459)
18d24bb537 : [NNC] Add Softplus operator (#64589)
35413a16f7 : Add `__matmul__` to the magic methods for FX tracing (#64512)
195cb4efa8 : update scatter formula (#64546)
1409492fdb : fixing trapezoid() comments for clarity (#64592)
dd8f6ac597 : Add forward mode differentiation for torch.linalg.cholesky and transpose (#62159)
a2934b38f8 : Fix typo embedding_renorm_cuda_ (#64542)
e0e832c2ba : [c10d] Provide failure reason from ProcessGroup when aborting NCCL comm (#64241)
7205ca0210 : Change MaxUnpool to accept tensors with 0-dim batch sizes. (#64082)
ba8c1fc648 : Add Half conversion of bit cast for SYCL kernel (#64340)
7f0feafa55 : [nnc] Provide helpful error messages about turning off the fuser (#64516)
768014b3e6 : Allow disabling cache in autocast (automatic mixed precision) (#63552)
b616132403 : Adding support for lowering 4Bit EmbeddingBag Operator (#5806)
2223737da9 : restore test_inplace_comparison_ops_require_inputs_have_same_dtype Expected behavior (#64267)
9cc44aad21 : [quant] AO migration of the `quantize.py` (resubmission) (#64445)
72274e2a2f : [TensorExpr] Don't rely on exceptions in Vectorizer. (#64609)
2341ec9ef1 : [fx_const_fold] Fix constant folding for attrs in submodule hierarchies (#64342)
5721205417 : Add __ge__ to TorchVersion (#64565)
81fe2c5e49 : add out variant of linear (#61801)
71ba76b1b5 : Fix building docs instructions (#64508)
4e98304eb9 : Fix quicklint (#64612)
e777e1b01c : Revert D29998114: [pytorch][PR] enable bf16 mkldnn path for gemm
1a033b45dd : [JIT] Fix a bug of rejecting ops with AliasAnalysisKind::CONSERVATIVE (#64336)
8e1fdd4cd3 : Add symbolic shape comparison optimization (#64300)
474a51b6bf : Refactor to use shape arguments (#64299)
bccbe310ef : Add view with negative dim (#63516)
5a1f8b8573 : Generalize expand logic (#63615)
5eb8cec663 : Add permute, arange (#63407)
cf2d15bf84 : Add support for slice, select with int, index_select (#63365)
c8a608b197 : Add squeeze, unsqueeze, transpose shape functions (#63099)
a39f3c68b7 : Add batch of unary functions (#63050)
c1b701bc3e : Back out "update rpc tensorpipe logic for sparse tensors" (#64575)
566ee1217f : Use trsm for triangular_solve in CPU (#63567)
52ff9bc639 : [iOS][Metal] Add aten:hardswish (#64588)
2c351c76e0 : [special] Alias igamma, igammac to special.gammainc, special.gammaincc (#61902)
b01d2d1d3e : Disables four failing distributions tests on windows (#64596)
a22c936b63 : Add lint to ensure .github/ pypi dependencies are pinned (#64463)
7e88d0b370 : Update explicit_ci_jobs to work with GHA (#64598)
a48d83a575 : Move ParallelTBB to GHA (take 2) (#64193)
369db8924f : [Static Runtime] Add first iter metric (#64457)
3bd69d3020 : add bundle input into AIBench (#64557)
3c87f55752 : Automated submodule update: FBGEMM (#64582)
acc9f9afc8 : enable bf16 mkldnn path for gemm (#61891)
337c71be05 : Array API: Add `torch.linalg.matmul` alias to `torch.matmul` (#63227)
8407ce7e38 : [small BE] .github: refactor concurrency into a common macro (#64587)
7e4ebe06ca : Fixes issue related to torch.trapezoid broadcasting behavior and documentation (#64054)
c9d6ca4c54 : Add space in Feature Request issue template (#64563)
85eeb4d682 : Clean up op BC check list (#64584)
43248d9112 : [doc][hackathon] To add Adam Optimizer to the documentation (#63251)
adb85b32d3 : minor fix for elastic doc (#64531)
26b7ff5aea : deprecate dtype getters from `torch.testing` namespace (#63554)
f767cf6683 : To change WarmUp Scheduler with ConstantLR and LinearLR (#64395)
75b9e4a128 : [JIT] Freeze unrolls constant loops (#63614)
adbcc819cd : Fix fx2trt SplitterBase non_tensor_input logic (#64286)
32fbeb170d : Update error messages that use LAPACK error codes (#63864)
1a1fb31cfa : Support `torch.concat` alias, add `cat` OpInfo & remove OpInfo test_out skips {cat, stack, hstack, vstack, dstack} (#62560)
0a1aaff0de : Remove dead code from THC (THCApply.cuh) (#64559)
571a2becf3 : Move ParallelNative and PureTorch to GHA (#64452)
544c8e6a5d : Mark functions in backend header as inline to suppress warning (#64098)
bcc7e82371 : Revert D30745610: [nnc] Make our exceptions c10::Errors, get C++ stacktraces
49fe829cae : [Vulkan] Code Quality: Remove duplicate code for hardshrink and leaky_relu functions (#64405)
1901c675e1 : Back out "nn.functional.linear OpInfo" (#64517)
008bf6689b : Back out "D30740897 Add fusion enabled apis" (#64500)
18b2751ea1 : [nnc] Make our exceptions c10::Errors, get C++ stacktraces (#64332)
6cac7ca980 : Ensure num_threads is initialized in get_num_threads (#64486)
604e885925 : Automated submodule update: FBGEMM (#64338)
a91a278d60 : Fix `copy_transpose_valid` condition for `copy_same_type_transpose_` (#64425)
e4ff14ad59 : [CUDA graphs] Error if attempting to capture uncapturable nccl (#64440)
0e3b45eaef : Fix logical typo in _compare_trilu_indices (#64468)
6831d8e379 : Support Union in TorchScript (#64234)
91b926fab3 : Add fx2trt pass for removing duplicate output args (#64461)
39aeb3bf63 : Add fusion enabled apis (#64429)
7031fbdc63 : update optimize_for_inference docs (#64428)
e1c3e5f830 : [resubmit][FX] Prototype for guarding against mutable operations in tracing (#64467)
cd82bc1af9 : Skips layer norm OpInfo on tbb platform (#64469)
c19bd05e84 : THC: Cleanup dead code (#64441)
db692ec0b3 : Regenerate generated github workflows (#64465)
e161872aab : Revert D30732630: [quant] Enable jit tracing on quantizable LSTM
046ed57a4d : Revert D30055886: [quant] AO migration of the `quantize.py`
4968d0b34f : [POC] .github: Add event name to concurrency (#64402)
b12f34e8c2 : update rpc tensorpipe logic for sparse tensors (#62960)
32a93c2424 : Revert D30675780: [FX] Prototype for guarding against mutable operations in tracing
116142143c : [quant] Enable jit tracing on quantizable LSTM (#64438)
795387477f : [FX] Prototype for guarding against mutable operations in tracing (#64295)
3c79e0b314 : .github: Migrate pytorch_linux_bionic_py_3_6_clang9 to GHA (#64218)
257623da39 : Switch Shuffler to use iter-local buffer (#64195)
f555348aaa : Disable CircleCI ROCm build (#64434)
4ce9c530d6 : [DataPipe] removing filter's inheritance from map (#64404)
4f43480186 : [DataPipe] adding/removing __len__ for different DataPipe (#64398)
3cd0a4ac15 : Fix test_ind_worker_queue by setting max_num_worker based on system resource (#63779)
7d010539c9 : ENH Adds test and docs for modules that already support no batch dims (#62729)
d0cb26ba57 : [DDP] Fix logging iterations (#64411)
22f3bcd164 : .github: Move squid vars to common vars (#64436)
c932afe39b : .github: Move upload-artifact-s3 to common var (#64435)
1519b6084f : nn.functional.linear OpInfo (#61971)
c0cdbb1cc5 : Revert D30468409: Add fx2trt pass for removing duplicate output args
9214450b7f : [tensorexpr] Wrap error msgs with buildErrorMessages for internal asserts (#64409)
6da7552a8e : Add fx2trt pass for removing duplicate output args (#64433)
aeafcde087 : CI: Enable using labels to control GHA workflows (#64314)
66ddc6ef9e : Fixes and details to torchhub docs (#63783)
50067c020a : TST Adds __repr__ and str to module info (#63737)
2c258d91cc : Fix torch.istft length mismatch and window runtime error (#63469)
616fd9219d : [Static Runtime] Add sign/abs/lop1p/mul fusion pass (#64209)
cd3be4675f : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
f04e6594ed : Fix broken caffe2 test: PlanExecutorTest.BlockingErrorPlan (#64401)
b737629ff0 : simplify op name determination into a single forward pass (#64261)
b2c7c1dfcf : fix copy.deepcopy on LinearPackedParams (#64367)
99b064fac4 : [jit] shape propagation for prepack (#63585)
cdb46f4c6e : extract TestAutogradComplex into its own test file (#63400)
be5b05c1dc : require that `TARGET_DET_LIST` is sorted (and sort it here) (#64102)
aedd70fcfe : Fix list() and help() torchhub functions for Windows (#63773)
030154e241 : Remove outdated comment in hub.py (#63757)
1c735768ed : Update hub.load() signature to avoid polluting kwargs param (#63755)
6db8f7a709 : Fix TRTModule not adding outputs in order (#64418)
76e187aa08 : Port `gather` to structured kernel (#63312)
ee8a6c1d14 : Replace std::unordered_map<c10::Device, c10::Device> with DeviceMap (#64393)
8d5b95019d : [PyTorch Edge] Support default args with out arg, flag off (#63540)
0addd75be9 : Remove unnecessary resize_output (#64272)
69e1207084 : Move graph util to fx2trt (#64064)
71e149834b : Add a warning about DataLoader num_workers > 0 "memory leak" (#64337)
d067f15622 : [Dist CI] Move rest of distributed tests to their own CI job (#64253)
4d6314a16e : [DDP] Log num threads (#64072)
59c6ceb6a8 : add documentation to shape inference algorithm (#64312)
778af56504 : [DDP Comm Hook] Add debugging communication hooks to ddp_comm_hooks.rst (#64352)
bf9d66586c : [DDP Comm Hook] Create a noop hook for performance debugging (#64344)
baceea4426 : [DDP] Add more logging iterations (#64071)
59fcbd172b : Fix incorrect DDP test (#64074)
9b8f9d5a25 : [c10d] Prefer use of torch_check (#63928)
5d80a48cef : Add fast path for addmm when the inputs are conjugate (#59380)
a8f9aab840 : [DDP Comm Hook] Add bf16 gradient compression to ddp_comm_hooks.rst (#64346)
ed89937d2c : [quant][graphmode][fx] Add fbgemm backend_config_dict (#64288)
69f4401b7b : Make datasets in `ConcatDataset` not need to be sized (#64114)
535526b95c : Restore LayerNorm numerics test (#64385)
7ffcf15503 : [quant][graphmode][api] Add backend_config_dict to prepare_fx api (#64135)
93bc03622e : Silent rm error for sccache log file (#64388)
9495674905 : [xplat][metal] Add getters and setters for ivars in Conv2dOpContext (#57395)
968d7ee46a : [structured] Preserve computed elements from meta func to impl (#61746)
4aad366111 : [Static Runtime] Make per-op latency readable by FAI-PEP (#64315)
86c9654291 : Update optimize_for_mobile to preserve node's debug information (#63106)
15ff25d1fc : Break up "@generated" string so Phabricator shows changes
e322547fe6 : Add forward AD support for custom Functions (#64061)
25e2578967 : Fix bytes_written and bytes_read (#64244)
03a58a2ba0 : [Caffe2] Create fewer strings during argument fetching (#64285)
468001600c : Back out "Revert D30327514: [Pytorch lite predictor] Use KinetoEdgeCPUProfiler for operator profiling." (#64307)
421d8f86b6 : Add a record scope around autograd::engine::evaluate_function (#63619)
0b48d96895 : [Bootcamp] Include both python unittest and parser parameters in --help and -h flag (#64297)
c6505cc383 : [FX] Fix python code generation for wrapped getattr() with default value (#64271)
87d8ab6e50 : [nnc] Updated generic error message with info about turning off the fuser (#64316)
c4f3f6e62d : Fixes reduction launch config (#64304)
d5bfdd3dac : OpInfo for `nn.functional.layer_norm` (#63276)
d1f3d85fd8 : fix GradBucket.is_last() logic (#63768)
92b31b59af : Revert D29699456: [pytorch][PR] Enable Half, BFloat16, and Complex dtypes for coo-coo sparse matmul [CUDA]
0c4e4e588e : [FX] Rename reduce functions back to their old, public names (#64324)
05ecaefbbf : [Metal][GPU] Enable metal for simulators and fix test failures if possible (#64322)
24e50b8453 : [CUDA graphs] hotfix for test_graph_ (#64339)
479fc4e412 : Remove outdated warning about RecursiveScriptModule not being copiable (#64085)
8337a3fb3f : [TensorExpr] Wrap error messages with buildErrorMessage call. (#64330)
a87808de93 : Fix bug in ShardedTensorMetadata serde. (#63902)
fa5676a41b : Delete some dead code from RRefMessageBase (#64298)
6bb4b5d150 : disallow empty named dims list to flatten(names, name) (#61953)
c59970db6b : [caffe2][easy] Save heap allocation in ConcatOp (#63529)
b23e4f6086 : Convert mul to use opmath_gpu_kernel_with_scalars (#64019)
0733582087 : Use the correct overloaded name to skip boxed autograd not implemented kernel registration (#64182)
09e610e36d : [Static Runtime] Out version for softmax (#64243)
0b9cdeb295 : .circleci: Remove already migrated CUDA configs (#64231)
23da90ab84 : .github: Consolidate linux setup / teardown (#64229)
5ecb966e0f : Add ciflow-tracking issue to pytorch-probot (#64125)
9e25634833 : [TensorExpr] Move declaration of buildErrorMessage to exception.h (#64301)
44fcb00a56 : Fix redundant class definition in GraphModule singleton constructor (#64274)
c2da103fe6 : Discover new tests in run_tests.py (#64246)
0457a85d45 : Revert D30543236: Add python mode
6c8cb9bd76 : [DataPipe] export fork, mux, demux for public usage (#64279)
491bf7cb74 : [DataPipe] adding description, __len__, tests for mux() (#64224)
9a0456939b : Try the forked checkout action with retry (#64120)
13484084a6 : fix syntax error in bfloat16 PR (#64122)
8d08b103be : [CUDA graphs] Prototype API and documentation (#63269)
1c2b5e59ae : Remove ref to test_distributed_fork (#64197)
555171a273 : .circleci: Remove migrated jobs, move docs builds (#64222)
347ef69529 : [ao][docs] Clarify operator support for quantization (#63270)
3a46edb8d8 : ns for fx: make layer types more readable (#64270)
845bc89811 : [fx2trt] Add acc_ops.sign and converter for it (#63876)
83e28a7d28 : Use stacklevel for floordiv deprecation warnings (#64034)
b9275a4003 : [ao][docs] Add description of qconfig and qengine to quantization page (#63582)
ca8dd296ee : Add OpInfo for `nn.functional.cosine_similarity` (#62959)
0ef8760bf6 : [DataPipe] implementing __len__ for fork (no valid length for demux) (#64215)
0deb7a0bc0 : [DataPipe] implementing demux() (#63650)
eee054e6ea : [DataPipe] implementing fork() (#63649)
67cb131458 : Revert D30327514: [Pytorch lite predictor] Use KinetoEdgeCPUProfiler for operator profiling.
3c15822f5f : [Static Runtime] Implement aten::nonzero out variant (#64126)
a3d6dae319 : Automated submodule update: FBGEMM (#64213)
bc9277dca3 : [Pytorch lite predictor] Use KinetoEdgeCPUProfiler for operator profiling. (#63367)
7ca4728e6d : Compile BatchLinearAlgebra without nvcc (#64146)
e7fb35021a : [nnc] Enable fusion of bfloat16 ops (#64196)
538647fe1f : [WIP][FX] BC guarantees for 1.10 (#63888)
09dfaa0339 : add operation list for AutocastCPU (#63534)
93f1090267 : Update contribution_guide.rst (#64142)
6b85c99ce5 : Avoid an unnecessary list creation in `DataChunk` (#64111)
c7c711bfb8 : Add optional tensor arguments to (#63967)
cb7cf823b3 : add BFloat16 support for fold and unfold on CPU (#62880)
ffc2612087 : Add acc_gpu_kernel_with_scalars and port add to use it (#63884)
a49907f984 : Modify inline doc for DataPipe (#64221)
af85bc5ffd : Replace group_by_key by group_by IterDataPipe (#64220)
4bd03b0242 : Add python mode (#63496)
ebc0aacf83 : [nnc] Fix half2float conversion and re-enable float16 (#64199)
1f16c22dc8 : [Static Runtime] Implement aten::cumsum out variant (#64159)
5401159b8f : OpInfo for nn.functional.interpolate (#61956)
a7ae73a238 : BUG Fixes regression for nllloss gradcheck (#64203)
ad4848565e : Enable Half, BFloat16, and Complex dtypes for coo-coo sparse matmul [CUDA] (#59980)
c3464e78a4 : Revert D30561459: Fix bytes_written and bytes_read
e4fd2ab59c : Back out "Added reference tests to ReductionOpInfo" (#64183)
8f88f797db : [quant][graphmode][fx] Add reference quantized conv module (#63828)
65050ec924 : Back out "[JIT] Add aten::slice optimization"
09e53c0cfe : .github: Adding configuration for backwards_compat (#64204)
9035a1cb4d : .github: Adding configuration for docs_test (#64201)
85df73658c : Make name() part of IMethod interface (#63995)
b9933f08b9 : Fix type annotation in tools/nightly.py (#64202)
f3e329cbec : Implements the orthogonal parametrization (#62089)
e98173ff34 : Fix bytes_written and bytes_read (#64040)
eafe33c995 : remove componentwise comparison of complex values in torch.testing.assert_close (#63841)
401bbb2aa0 : remove componentwise comparison of complex values in TestCase.assertEqual (#63572)
a8ffe81b2c : Bring back old algorithm for sorting on small number of segments (#64127)
d37636901e : [Doc] `make_tensor` to `torch.testing` module (#63925)
5b0dfd0f8a : Fix bad use of channels last kernel in sync batch norm backward (#64100)
ac99d63f83 : [jit] Make operation call accept Stack& instead Stack* (#63414)
93d2e5090f : Improve performance of index_select by avoiding item (#63008)
e24c3644d8 : [Static Runtime] aten::cat out version when it is not being replaced by prim::VarConcat (#64157)
16ecdbbaa2 : [PyTorch] Fix missing move in unpickler (#63974)
9777887f0e : [PyTorch] Reduce copies/refcount bumps in BytecodeDeserializer::parseMethods (#63961)
dc4fd3bdda : [MicroBench] Added a micro benchmark for a signed log1p kernel. (#64032)
f79df24859 : Automated submodule update: FBGEMM (#64149)
82174330d0 : [DataLoader2] Adding Messages, Protocols, Loop wrappers (#63882)
7701ea48be : remove one more distributed test (#64108)
093a12aaa9 : [nnc] Updated internal asserts to include more detailed error messages (#64118)
a836d83957 : [nnc] Fixed warning due to implicit parameter conversion (#64117)
d3bcba5f85 : ENH Adds label_smoothing to cross entropy loss (#63122)
8af1407eab : [Static Runtime] Out version for torch.linalg.norm (#64070)
44e3ed88c9 : [quant] AO migration of the `quantize.py` (#64086)
29ad84f252 : Removes beta warning from the special module documentation (#64148)
c5ed31e4a7 : add channel last support for MaxUnpool2d (#49984)
9db56531f7 : Revert D30620966: [pytorch][PR] Move Parallel[Native|TBB] to GHA
710a2e933f : [DOC] Add doc for maybe_wrap_dim (#63161)
7ebdbf82dc : add support for sending cpu sparse tensors over rpc (#62794)
52d7dd7398 : [DOC] improve docstring for Optimizer.state_dict (#63153)
371c6612b3 : Automated submodule update: FBGEMM (#64141)
2e6221a232 : [nnc] Make 64-bit dimensions work (#64077)
405c15516c : Parse int64 sizes/strides (#64076)
4f969db325 : [nnc] Fix batchnorm implementation (#64112)
aefa2f3e64 : To add RMSProp algorithm documentation (#63721)
8b6266fe4f : Automated submodule update: FBGEMM (#64129)
223f886032 : Move Parallel[Native|TBB] to GHA (#64123)
d0c63e857d : Enhancement for smart serialization for out schemas (#63096)
f4496528e3 : [Light] Fix error message (#64010)
0d0605eaa9 : [quant][graphmode][fx] Add reference quantized linear module (#63627)
a3a7a67048 : [iOS][GPU] Consolidate array and non-array kernel for hardswish (#63369)
9ccb9299e0 : To add Nesterov Adam algorithm description to documentation (#63793)
07c5cb8c48 : [Static Runtime] Optimize memory planner initialization (#64101)
2d75ab0c8f : [TensorExpr] Update tutorial. (#64109)
3abbcf079d : .github: Add cpp_docs job to current gcc5 workflow (#64044)
6ccb74b837 : Update codegen to use boxed kernel (#63459)
90a6498a12 : Add autograd not implemented boxed fallback (#63458)
8406dba65a : Removing references to ProcessGroupAgent in comments (#64051)
bdde898d9c : Add README to datapipes (#63982)
358c46f99e : Implement leaky relu op
18cb3fc910 : [FX] Validate data type of target on Node Construction (#64050)
ff4569ae29 : Sparse CUDA: rename files *.cu -> *.cpp (#63894)
8fc1064b7f : [PyTorch] Reduce code size of register_prim_ops.cpp (#61494)
6a76ee04de : Adding alltoall_single collective to collective quantization API (#63154)
04108592a3 : New TLS to disable forward mode AD (#63117)
6257f5b168 : [pruner] add README to repo (#64099)
101a626330 : Improve `distributed.get_rank()` API docstring (#63296)
196fd3ee7a : Modules note v2 (#63963)
19c1b45f25 : Detect out argument in the schema (#62755)
9f1f22b9bc : [Static Runtime] Add out variant of quantized::embedding_bag_byte_prepack (#64081)
6ab3a21098 : fix resize bug (#61166)
538c30a713 : [caffe2] fixes to allow stricter compilation flag (#64016)
eca87f729d : Added reference tests to ReductionOpInfo (#62900)
babd449978 : [JIT] Add aten::slice optimization (#63049)
3abb606091 : Add doc for nn.MultiMarginLoss (shape, example) (#63760)
a9983ac09c : Refactor structured set_output in Register{DispatchKey}.cpp (#62188)
f922b58b5f : [bazel] GPU-support: add @local_config_cuda and @cuda (#63604)
22d38bd10d : [OSS] Enable Metal in PyTorch MacOS nightly builds (#63718)
a43e7a51d7 : Adds return type annotation for fork_rng function (#63724)
ad8eddbd80 : More robust check of whether a class is defined in torch (#64083)
f2c47cf4db : [Static Runtime] Out version for fmod (#64046)
c90b3cb1da : [Static Runtime] Manage temporary Tensors for aten::layer_norm (#64078)
3c3bba4169 : [Static Runtime] Use F14FastMap/F14FastSet (#63999)
3f1c809470 : [static runtime] port c2 argmin kernel (#63632)
294db0603f : [quant] Add support for linear_relu fusion for FP16 dynamic quant (#63826)
cec44aa574 : [quant] Add op support for linear_relu_dynamic_fp16 (#63824)
975f4ccad6 : [quant] support linear_relu_dynamic for qnnpack backend (#63820)
c7027f19ef : [quant][fx] Add support for dynamic linear + relu fusion (INT8) (#63799)
63c90ec3bf : [torch/deploy] add torch.distributed to build (#63918)
65e6194aeb : Introduce the torchrun entrypoint (#64049)
510d2ece81 : Merge script and _script_pdt API (#62420)
0e8c3c51d9 : port glu to use structured kernel approach (#61800)
a5f35ac7cd : Run through failures on trunk (#64063)
0c9dce90ed : [pytorch] add per_sample_weights support for embedding_bag_4bit_rowwise_offsets (#63605)
81764d1153 : document that `torch.triangular_solve` has optional out= parameter (#63253)
ed573a8e08 : Enable test_api IMethodTest in OSS (#63345)
0bd8d0951d : [Static Runtime] Remove unnecessary fb::equally_split nodes (#64022)
dfa35ab3e7 : [pytorch][quant][oss] Support 2-bit embedding_bag op "embedding_bag_2bit_rowwise_offsets" (#63658)
92a154aa29 : Move variabletype functions around (#63330)
49353e319c : More sharded_tensor creation ops: sharded_tensor.zeros, sharded_tensor.full, sharded_tensor.rand (#63732)
49b782b2cb : Add shard number to print_test_stats.py upload name (#64055)
085278f8b1 : Derivatives of relu (#63027) (#63089)
7861dba7f6 : Automated submodule update: FBGEMM (#62879)
aeec177833 : [JIT] UseVariadicOp takes list_idx parameter (#63915)
d8d8e4902a : [torch/elastic] Pretty print the failure message captured by @record (#64036)
5a12cb611f : To add Chained Scheduler to the list of PyTorch schedulers. (#63491)
7cfbc85821 : [fx_acc] [fx2trt] add acc op mapper for argmin and converter for topk (#63823)
cbfec02007 : [Static Runtime] Add native op for aten::expand_as (#64024)
95d0b3199b : Back out "[ONNX] Fix an issue that optimizations might adjust graph inputs unexpectedly. (#61280)" (#64004)
c5cc185b6d : Allow uncompiled strings as input to `checkScriptRaisesRegex` (#63901)
48c57b9b2e : Leverage TensorPipe's automatic SHM address selection (#63028)
ad47fb8858 : Rename IterableAsDataPipe to IterableWrapper (#63981)
0f6b524665 : [NNC] Add C++ codegen backend to NNC (#62869)
6d31ba6ddc : [nnc] Sanitized the names of constants in the input graph. (#63990)
ba5f1b1076 : [nnc] Fix dtype promotion involving scalars (#64002)
1354ee417a : run_test.py: add option to run only core tests (#63976)
fbe7133b58 : [Static Runtime] Disable out variant of aten::clone (#63980)
7ccc4b5cc8 : [CI] move distributed test into its own CI job (#62896)
733755f72c : remove special grad_mode tls handling (#63116)
950f7c0237 : Added API tests to ReductionOpInfo and ported amax/amin/nansum tests (#62899)
10da1fc3f8 : Deify opmath_t into its own header, align with accscalar_t (#63986)
774ae0851d : [OpInfo] Added ReductionOpInfo subclass of OpInfo and ported sum test (#62737)
c02eda8166 : Update TensorPipe submodule
61d88cdd1c : use `const auto&` as type for grad alias (#63949)
5757d03145 : Add logging for _MinimizerBase
a6f767ed3d : Fix issue re: DDP and create_graph=True (#63831)
3b284ab024 : Adding BFP16 quantization/dequantization support to OSS (#63059)
9d95d48567 : (torch.distributed) Add torch.distributed.is_torchelastic_launched() util method + make init_method=tcp:// compatible with torchelastic (#63910)
b629ea4620 : Update persons_of_interest.rst (#63907)
b1154cc774 : enable equal_nan for complex values in isclose (#63571)
49c8fbc92f : Clean up related to type refinements (#62444)
80a61142e4 : inference for algebraic expressions (#63822)
124ae597fb : [quant] Fixing the conversion of the quantizable RNN (#63879)
2ea2711501 : Make frozen symbol name customizable in torch deploy. (#63817)
f4bc28990f : Compute cuda reduction buffer size in elements (#63969)
01b8162d00 : Back out "Revert D30384746: [fx2trt] Add a test for quantized resnet18" (#63973)
57d4c6cf42 : replace `self.assertTrue(torch.allclose(..))` with `self.assertEqual(…)` (#63637)
1be1c901aa : Remove render_test_results job (#63877)
ba0e6a1e03 : [EASY] Update the clang-tidy error message (#63370)
44ede71751 : Shard python_torch_functions.cpp (#62187)
730ce29baf : Add note on ifdefing based on CUDA_VERSION for ROCm path (#62850)
b5b9ce146f : Small fixes to the Contributing.txt (#63385)
52ebe7e14e : Back out "Temporary fix for remote gpu execution issue" (#63983)
5b548f6f64 : Shape Propagation Pass: Fix AdaptiveAveragePooling2d (#63629)
ab5cf5a1eb : Move existing target determinator to tools (#63809)
7edeead796 : Add a comment on the potential implicit type up-casting (#63905)
b0782f0f32 : add BFloat16 support for bernoulli and Dropout on CPU (#56372)
7299565768 : Update torch.distributed.run OMP_NUM_THREADS message to log.warning (#63953)
3d4aabfc48 : Fix ciflow/all label generation (#63954)
67d8e7b659 : Reformat run_test.py (#63808)
64d605bab8 : [Static Runtime] Added caching for the NNC code generated for Logit. (#63840)
dde07cad6f : [Static Runtime] Added a variable for clamp in the NNC code for Logit. (#63839)
a2399a76e1 : [Static Runtime] Moved NNC operator definitions to separate files. (#63838)
8a22d4fa5c : [Reland] Replacing the p.data access in utils with tensor.set_ . Passes both test_post_localSGD_optimizer_parity and test_periodic_model_averager tests (#63895)
ab954cb0d1 : clean up engine.cpp thread state (#63115)
c06dfd7c26 : [fx2trt] Check input device in TRTModule (#63893)
6324d98e9e : bf16 Error message cleanup as well as addition of is_bf16_supported (#63798)
eebac46282 : [pruner] add getter for pruned outputs in base pruner (#63520)
83b132b112 : [pruner] add support for pruning BatchNorm2d (#63519)
c1dfd58715 : Minor OptionalTensorRef updates (#63611)
5ab356ffe6 : Update CMake minimum version to 3.10 (#63660)
34ed16ffef : Temporary fix for remote gpu execution issue (#63899)
01c35115d8 : Fix bug in `check_empty_containers` (#63492)
8c897d254d : Swap CUDA 11.1 and 11.3 in CI to make 11.1 periodic (#63900)
3926fdbaa4 : [skip ci] Add generated comment to ruleset json (#63896)
87a661c79f : Revert D30526034: [pytorch][PR] compute reduction intermediate buffer size in elements
839eaa2e91 : Revert D30384746: [fx2trt] Add a test for quantized resnet18
10dfa58eba : [fx2trt] Add a test for quantized resnet18 (#63446)
0301c3bc01 : [quant][graphmode][fx] Make maxpool and flatten produce the reference pattern (#63501)
d388a1a5df : [TensorExpr] LLVMCodegen: Use addFnAttr instead of addAttribute which was deleted. (#63886)
c8527bc398 : [quant][graphmode][fx] Add a separate lower_to_native_backend function for relu (#62861)
e69a1398cb : compute reduction intermediate buffer size in elements (#63885)
ba126df614 : TST Adds more modules into common module tests (#62999)
544af391b5 : Allow arbitrary objects in state_dicts (#62976)
58ef99bd5a : TST Adds pickle testing for ModuleInfo (#63736)
8dda299d96 : Re-apply: [nnc] Support thread level parallelism in fused kernels (#63776)
1787b905c4 : Don't switch executors mid test (#63830)
543130511a : [nnc] Disable erf and erfc (#63775)
d454c9e76e : Migrate THCTensor_copyIgnoringOverlaps to ATen (#63505)
5b28e3c183 : [quant][graphmode][fx] Add reference option support for binary ops (#62698)
6fa646ad54 : [StaticRuntime] Fix bug in HasInplaceOp (#63842)
956c8fa01e : Microbenchmarking matrix mult (einsum, torch.mult, torch.mm) (#63654)
6d58c83007 : Turn off layer norm in jit symbolic differentiation (#63816)
41ffec07ce : Add a common autograd TLS state (#63860)
865d127a66 : .github: Enable with-ssh for Windows (#63440)
4e37a015c7 : [FX] Fix _replicate_for_data_parallel (#63821)
5be17ec1fc : Do not modify saved variables in-place for spectral norm during power iteration (#62293)
4a0776100e : Migrate legacy lstsq from THC to ATen (CUDA) (#63504)
699c764d2e : Revert D30513613: Removing tensor.data usage in utils with tensor set_ method
835dac0869 : Merge common fields from TensorInitParams and ShardedTensorMetadata into TensorProperties (#63731)
d08a36f831 : Removing tensor.data usage in utils with tensor set_ method (#63867)
73431449b3 : update readme and contributing.md (#63843)
e6dc7bc61b : Subprocess encoding fixes for cpp extension (#63756)
14d4723abd : add bf16 support for bucketize (#55588)
1256dcd509 : [pruner] modify base pruner to prune bias by default (#63202)
16ba20507a : [pruner] amend base pruner API to match base sparsifier (#63178)
5dee15401c : [pruner] refactor `ActivationReconstruction` forward hooks (#63158)
7774a4e95b : [Static Runtime] Implement prim::VarStack out variant (#63579)
227cb268bc : [Reland] Embedding thrust->cub migration (#63806)
94d621584a : optimize BFloat16 elemwise operators CPU: sigmoid, sigmoid_backward, tanh_backward, addcmul, addcdiv (#55221)
33a163d886 : Enable BFloat16 LeakyReLU and RReLU in CPU path (#61514)
2ca2761f3c : ENH Adds no_batch_dim for NLLLoss (#62651)
d3be02d100 : fix batchnorm2d issue when input is non contiguous (#63392)
1385f9fb12 : [JIT] Add variadic stack op (#63578)
f4aff3a346 : [BE] add distributed run_test options (#63147)
688f06cac3 : Revert D30388099: Add a common autograd TLS state
9914fb6615 : ENH Adds no_batch_dim tests/docs for LPPool1d and Identity (#62190)
83d9bad44a : Add a common autograd TLS state (#63114)
c545b099aa : Separating quantization test from distributed_test (#63058)
f0d274294d : [TensorExpr] Nuke KernelArena and KernelScope. (#63587)
62d02f2b57 : [TensorExpr] Make 'Tensor' a value type. (#63586)
4e15a6f495 : [TensorExpr] Switch Exprs and Stmt from kernel-arena to shared_ptr. (#63216)
dd96c26066 : [TensorExpr] More NFC changes like Expr* -> ExprPtr. (#63778)
5b7cdc5a3d : add channels last for GroupNorm (#49821)
f5d585391d : Add ROCm as a platform for which tests can be disabled (#63813)
d96ef8c1b1 : [Static Runtime] SR clones graph input (#63704)
195c60d844 : [fx2trt] Add acc op and converter for torch.pow (#63795)
e1bdebf685 : Adding DataLoader2 class as future replacement of DataLoader (#63742)
fc07489ec5 : [BE] Enable PostLocalSGD tests on windows (#63463)
16a4434422 : [BE] Enable functional optim tests for windows (#63462)
630ec2e190 : [fx_acc] Add mapper for torch.log1p (#63792)
e4f44bec27 : Fix pocketfft include path in mobile build (#63714)
fc47497905 : Simplify ccache instructions in CONTRIBUTING.md (#62549)
d9231dc3df : Skip archiving useless build artifacts (#63785)
172e5c76ab : Fix some memory bugs in onnx passes (#63754)
fc6dd0bc00 : [JIT] Move UseVariadicCat internals (#63577)
130549d61b : Fix typo in NNAPI tests (#63797)
84890aae35 : [Static Runtime] Add an out variant op for aten::abs (#63675)
55f8f95ad4 : fix git diff issue (#63408)
49be16d50a : .github: Add ec2 information as a step (#63784)
7946f8a9f6 : Rename DataPipe to Op-er (#63325)
a781340bf7 : Add equality constraints for some acc operations for symbolic inference (#63689)
0bc7fef406 : [Static Runtime] Remove unused fusion patterns (#63636)
a709ab34a8 : [nnc] Re-enable CPU fusion (#63665)
560cd88195 : Kill THCUNN (#63429)
db1b27fa8d : fix mpi ssh runtime error (#63580)
98449f5bba : hotfix clone issue (#63770)
f1d865346f : [ONNX] add test images to repo (#63717)
bafd875f74 : Allow implementing either backward or vjp for Function (#63434)
726fd26b3e : Update ROCm PyTorch persons of interest (#55206)
d6133b2fe6 : Remove `_fork_processes` from common_distributed.py (#63711)
2289a12f21 : Made FuncTorchBatched decompose CompositeImplicitAutograd (#63616)
e926f75b0b : BatchNorm autodiff re-enabled (#57321)
37d60c08e5 : Revert D30360382: [nnc] Support thread level parallelism in fused kernels
76da46ccdc : Revert D30417127: Remove flag to toggle CPU fusion in the presence of parallelism
8871ff29b7 : [sharded_tensor] add readonly tensor properties (#63679)
b2a601ffe5 : [Static Runtime] Implement out variant for fb::quantized_linear (#63635)
2d58f3f56d : NNAPI: Support const values in binary ops
b4f5809db8 : Migrate thnn_conv2d from THC to ATen (#63428)
3ee1f81dce : Extend _sharded_tensor constructor to support other ops like torch.ones (#63378)
7c0f5b9aa4 : [clang-tidy] Enable more folders (#63380)
e0fe5699c4 : enable increment build for build_libtorch (#63074)
efe01c59e3 : [Doc] Deprecation notice for only_inputs argument (#63631)
bcf8e2f57e : Remove breakpad from docker image (#63598)
da0820e553 : add BFloat16 operators on CPU: range, sinh, cosh, frexp, nan_to_num (#61826)
a8de0d83fe : empty caching allocator before test_avg_pool2d large subtest (#63528)
b008bb4443 : Include iostream in ProcessGroupMPI.cpp (#63656)
07e41cf2d7 : [easy]Unbreak caffe2benchmarking build (#63655)
1dd648f1c4 : [ONNX] Suppport torch.dot and torch.nn.utils.spectral_norm (#62596) (#62765)
db0771b05d : [ONNX] Update repeat_interleave for dynamic repeats (#59979) (#62764)
8760254911 : [ONNX] Fix an issue that optimizations might adjust graph inputs unexpectedly. (#61280) (#62763)
a65d1ae7cc : [ONNX] Fix controlflow shape inference with contrib op (#60707) (#62762)
125e2d02e5 : Revert D30417370: [nnc] Enable CPU fusion
2d671ca41b : [8/N] Remove c10d/ddp fork tests. (#63454)
71da114412 : Revert D30426527: Adding DataLoader2 class as future replacement of DataLoader
70a3210eca : Add `BinaryUfuncOpInfo` and broadcasting tests (#61964)
b9fc656cf2 : [nnc] Enable CPU fusion (#63545)
6600bc9651 : Remove flag to toggle CPU fusion in the presence of parallelism (#63514)
d6d86efb1c : [nnc] Support thread level parallelism in fused kernels (#63386)
c78ab28441 : Add support for the ONNX Runtime Eager Mode backend (#58248)
b95ce1591d : Add docs describing saved tensor hooks (#62362)
03cc46a0ac : [fx2trt] Add layernorm plugin for dynamic shape (#63620)
5f997a7d2f : [PyTorch][Edge] Improve InflatableArgs for Bundled Inputs (#62368)
5a7133b87f : Adding DataLoader2 class as future replacement of DataLoader (#63523)
99e28baeba : Small custom function refactor which doesn't change anything (#63433)
0f2c60f0e3 : Adding IterableAsDataPipe IterDataPipe (#63522)
ae901e372e : [Static Runtime] Enable RemoveListMutation (#63536)
913c1f83f4 : [Static Runtime] Add native op for aten::detach (#63625)
bec75daa77 : Update protobuf to 3.13.1 (#62571)
d82667f7e2 : [nnc] Updated sliceTail to do inplace mutation (#63532)
5e31a3b904 : [nnc] Updated sliceHead to do inplace mutation (#63531)
0a66d5b325 : [PyTorch] Remove unnecessary iostream includes in headers (#61500)
b99a299c60 : [PyTorch] Remove unused dump() methods in vec headers (#63533)
0b6cc8daf2 : [PyTorch][Edge] Support backtrace symbolication for Android builds (#63339)
f2bf0f229f : Revert D30359218: [pytorch][PR] [doc] pre-commit fix instructions
d0d27f6971 : Add concurrency group for more workflows (#63606)
71ab48ed3b : acc type inference (#63119)
ccca66597a : Replace hardcoded values in IndexKernel.cu (#63372)
e5ab0d1013 : DataLoader: allow non-integer Samplers (#63500)
11a40ad915 : [Pytorch] Fix callstack pointer serialization bug (#63576)
6c3ebccc00 : Updating the names of these functions (#63513)
ce6fe50158 : Revert embedding thrust->cub migration (#63451)
99203580a9 : Updates internal `assert_allclose` callsites in favor of `assert_close` (#61841)
efd70b7ce6 : Modernizes add and mul documentation (#63309)
d986d4bf63 : [special] use __all__ to hide internal imports (#63135)
0c3904d180 : [BF16] Add a missing thread local specifier to autocast_gpu_dtype (#63416)
535d44141b : [7/N] Remove fork tests for RPC. (#63443)
bd8608cd5c : Use CMake for breakpad (#63186)
e030b81356 : [easy] Fix missing move in TupleType::createNamed (#61572)
3aa4521fe8 : [hpc] use fx2trt for exploration track (#63535)
885e312ce0 : Add permute021 fx2trt converter (#63238)
e7831fe5de : [PyTorch] Test IValue move/copy/assign/swap more (#54717)
79693bb86a : Use linecache.lazycache to cache generated code. (#63453)
e1334512a3 : Add fastpath for dot and vdot when the inputs have conj bit set to True (#62915)
f596aa8b77 : Poisson zero rate (#61511)
be9be9bfdd : add distributed/_sharded_tensor/test_sharded_tensor to ROCM_BLOCKLIST (#63508)
e7c4988b52 : To fix the chainability at epoch zero for some schedulers (#63457)
2d5b19f62b : Update full backward hook doc with not-same-object note (#63245)
47a9e8ff32 : [Static Runtime] Support __getitem__ for lists (#63398)
ce61100923 : Revert D29399533: Hoisting common expressions out of If blocks
6bb68ba507 : Fix interpreter debug logging message (#63499)
5254e3adb8 : layernorm inplace (#63437)
531262fe2e : layernorm (#63436)
6e00b31b15 : [TensorExpr] Make CacheReplacer and IndexFlattener mutate stmts/exprs inplace. (#63527)
1d62fb8a63 : [TensorExpr] Speedup ExternalCall.ComputeInterop test by reducing tensor sizes. (#63526)
773c8b6440 : support optional comparisons with different but comparable types (#62890)
2544664e54 : Beef up comment in AccumulateType (#63503)
0d437fe6d0 : BF16 allreduce hook (#63260)
9477211e7d : Hoisting common expressions out of If blocks (#59492)
d9547b9bb2 : Nnapi Delegation: Quick improvements (#63489)
4dcc2197ce : [fix] tensor_split : non-contiguous indices tensor (#63390)
1f4e019d8e : [Vulkan] Fix incorrect input range for Hardshrink tests (#63515)
15eec8e1d1 : using PR number instead of IN_PULL_REQUEST (#63360)
779a3d47b0 : [Static Runtime] Benchmark reports native nodes (#63346)
139413078f : [FX] make ASTRewriter patch wrapped functions properly (#62987)
9bbf80969e : [PyTorch] Avoid using std::regex for device string parsing in Device.cpp (#63464)
7fdba4564a : [TensorExpr] IRSimplifier: sort terms in polynomials, terms, minterms, maxterms. (#63197)
8bdd542417 : [TensorExpr] Add debug logging to LoopNest::computeInline. (#63196)
feba6806c9 : clarify that `torch.finfo.tiny` is the smallest normal number (#63241)
9253dc1e58 : Fix segmentation fault due to access to destroyed CudaIPCGlobalEntities instance (#56141)
877e6f2be3 : Bugfix for fuse qconfig comparison (#63384)
2aa19f33c6 : [ONNX] Fix for batchnorm training op mode (#52758) (#62760)
e182401062 : [ONNX] Remove aten parameter (#61652) (#62759)
3a7bbf5fb7 : [ONNX] Add support for opset14 in PT-ONNX exporter (#59486) (#62758)
99b154b8be : [ONNX] Support lstm_cell symbolic (#61476) (#62757)
d661e646ad : [FX] Fix GraphModule deepcopy to use deepcopied graph (#63090)
11fbd3958c : MaybeOwned page for dev wiki (#63450)
9bb1371cc2 : Disable RDYNAMIC check with MSVC (#62949)
d4593d9d08 : document why wrappers exist in `torch.functional` (#62847)
f0f5cffde9 : [DDP] Add a debug check in cpp fp16 compress (#63379)
ac1ece054b : [DDP][Grad compression] Fix fp16 cpp hook (#63375)
4e1d84ae8f : [doc] pre-commit fix instructions (#61717)
50a3b6a6a8 : Make SkipInfo with expected_failure an XFAIL (#63481)
2f615f6313 : Improve custom function docs (#60312)
d565a7bd68 : [6/N] Enable opt-asan for elastic and launcher tests. (#63442)
af3cbfed95 : Add validation check in fx2trt interpreter (#63424)
7df2324120 : [pytorch] Make qconv forward() thread safe (#63432)
565578cdab : Use `fastAtomicAdd` in EmbeddingBag (mode "max") backward (#63298)
e2ddaec5cf : Reverting launch bounds change in topK that induced a regression in perf (#63431)
383a33a0eb : Make DataChunk support list in-place ops (#63422)
93582e3bba : A tiny fix in MT19937RNGEngine (#63219)
c508433617 : Implement subclass priority for __torch_dispatch__ (#63411)
061b36e2f5 : [fx2trt] Add dequantize support (#63448)
a00d587849 : add `OpInfo` for `torch.linalg.tensorinv` (#62326)
30e1c74dc1 : Update cuda amp to also check xla device (#63413)
4a390a56c4 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
2b303f3f31 : enhance comparison tests for c10::optional (#62887)
0f2f6a79cb : clarify the documentation of `torch.meshgrid` (#62977)
f8a84a80cd : [5/N] Run opt-asan with detect_leaks=0 (#63361)
d431c77d76 : [sharded_tensor] fix typing issue for placement (#63426)
2fd14735d6 : [easy][PyTorchEdge] print error message when failing to load model file (#63404)
15144ade25 : [fx2trt] Add quantize_per_tensor support (#63447)
3fd8e09102 : Fix RPC Python User Function Error Handling (#63406)
f12f667e12 : [torch] Set default log level for torch elastic (#63214)
dcf90b797c : [BE] remove _SUPPORTED_OPTIM_MAP from tests (#63383)
5b8862abf1 : [DDP] Support step_param for AdamW (#63382)
cd5e9dcc1d : [quant][graphmode][fx][fix] Fix quantization for tuple arguments (#63376)
975542c314 : Add more ciflow labels for more workflows (#63410)
da87d648b3 : `F.avg_pool3d` CUDA backward: gpuAtomicAddNoReturn -> fastAtomicAdd (#63387)
6e5d065b2b : Add pocketfft as submodule (#62841)
078dcc4e97 : [wip] Move smallest bucket to end after rebuild buckets (#62279)
e0e2796fa9 : adding a note to the documentation of polar (#63259)
bcddc71f26 : [quant][graphmode][fx][bc-breaking] Support for reference pattern for fixqparam ops in eval mode (#62608)
9cd24e12a1 : Revert D30281388: [PyTorch] Avoid using std::regex for device string parsing in Device.cpp
495e7e4815 : Fix zero-dim handling in torch.matmul (#63359)
1dc2b52764 : [TensorExpr] Add a wrapper for all expr and stmt pointers. (#63195)
a2db5d34a5 : OpInfo fix: `conv_transpose2d` (#63389)
9d9e7a8d72 : [Static Runtime] Implement aten::append (#63350)
6621df9a6a : [vulkan] Add log_softmax (#63193)
b0396e39f4 : [quant][fx] Ensure qconfig works for QAT with multiple modules (#63343)
e000dfcf97 : Add return type hint and improve the docstring of consume_prefix_in_state_dict_if_present method (#63388)
fcc840eae0 : Add handling of ifs to shape propagation (#62914)
3975c08e5d : Small shape analysis changes (#62911)
e2227e86e4 : Add a few peepholes (#62910)
9a60759453 : Propagate symbolic dimensions through idioms like x.view(y.size()) (#61975)
60cadd0bd1 : [fx2trt] Refactor linear op to use mm + add
517aa8965a : Updates set_default_dtype documentation (#63233)
63554cfb3d : Remove backend_debug from torch_core srcs and replace with library dependency (#63111)
3aecec609f : Move Android Nnapi srcs from aten_native_cpu to aten_cpu (#62919)
c982f13a80 : [android][vulkan] Fix model loading for Vulkan backend (#63402)
f70b9ee5de : Advertise USE_PRECOMPILED_HEADERS in CONTRIBUTING.md (#62827)
011fdc3b7e : [fx] persist `tracer_cls` on `fx.Graph` when deep copying (#63353)
4d6f98ecad : [PyTorch] Avoid using std::regex for device string parsing in Device.cpp (#63204)
013a42bdb1 : [PyTorch] Add Device_test.cpp (#63203)
336aa9cd85 : change with_callable_args to return a fresh _PartialWrapper (#63374)
7bad9ac78a : Fix flaky test for dp saved tensor hooks (#63324)
2992d92b5a : Add mode to TarArchiveReader (#63332)
cae5ddc427 : add torch.meshgrid() OpInfo (#62720)
22f78144c7 : Extends warning on norm docs (#63310)
ad94248b57 : Cleanup dead code (#63328)
877b649bc3 : Workaround for cuFFT bug (#63327)
794b04c6c8 : Add step to report code coverage from GHA (#63373)
548c717cbd : [TensorExpr] Remove test_train from tensorexpr tests. (#63194)
e7724bb100 : [JIT] Set future's error to current exception as is when `--torch_jit_enable_rethrow_caught_exception=true` (#63348)
075024b9a3 : [Static Runtime] Fix a bug that assigns multiple outputs to single storage (#63012)
068d6fec5c : [Model Averaging] Add a few member methods of PostLocalSGDOptimizer (#63340)
aa63c0d9df : [PyPer] Skip printing out per node time when do_profile is on (#63256)
b2069e7d01 : Refactor NnapiCompilation registration into its own file (#63183)
da36bbcd35 : Add section to CONTRIBUTING.md explaining developer docs (#63228)
4982fc4ecf : test: Add ability to set CONTINUE_THROUGH_ERROR (#63357)
6acd87fe6a : Add driver function to run test_sharded_tensor.py and test_sharding_spec.py (#63189)
f4f2c1231a : [fx2trt] add unsqueeze converter (#63355)
078b8004a6 : [Static Runtime] Implement prim::TupleUnpack (#63243)
a12b371f7c : [fx2trt] Factor out add_matrix_multiply_layer
dc5ce22a1a : A re-open PR: Avoid re-creating the random number generator in RandomSampler (#63026)
3f06f29577 : Improve pip package determination (#63321)
4a59f0b9d9 : [Profiler] Change FLOP/s to Total FLOPs (#62779)
d2e8359971 : Fix triage workflow when the card already exists in project (#63347)
3ce67efea2 : [opinfo] nn.functional.pad (#62814)
1e8de64c66 : Add expecttest to requirements.txt (#63320)
e75ed4a4b5 : add comma to prevent syntax errors (#62492)
0074a099a8 : Retry apt-get during setup_ci_workspace (#63319)
dbcfd7739f : Make `torch.lu` differentiable for wide/tall inputs + jit (#61564)
979180cd01 : [Model Averaging] Allow subgroup to be None in PostLocalSGDState (#63277)
d5d5f42ea9 : Revert "[docs] Update docs for NegativeBinomial (#45693)" (#63192)
d1cbee7b2b : Refactor BucketBatch (#63185)
56d609d93e : Replace str by repr for DataChunk (#63184)
e50e8b07d8 : [nnc] Updated IRMutator and IRSimplifier to perform in-place mutations. (#63246)
a421cba325 : [docs][ao] Add overload information for fake_quantize_per_tensor_affine (#63258)
0831b59cf5 : [docs][ao] Add missing docstrings for quantized_max_pool1d and quantized_max_pool2d (#63242)
a090073fe4 : [docs][ao] Add missing documentation for torch.quantized_batch_norm (#63240)
50fc8e8250 : [OpInfo] Add expected_failure kwarg to SkipInfo (#62963)
8987726cc6 : Small refactor for OpInfo decorators (#62713)
3ca3349555 : [Pytorch Edge] Fix broken test post changes in error reporting format. (#63287)
cec08e7032 : To add warm-up scheduler to optim (#60836)
8e0998ca70 : Move fx2trt and oss_acc_tracer to oss (#63101)
0ce4d30c44 : Hide all symbols in llvm namespace (#63272)
045c4cb82f : Add copy button to code snippets in docs (#63149)
38c185189c : [Pytorch Edge] Enable kineto profiler on mobile via EdgeKinetoProfiler (#62419)
77a6436cac : [Pytorch Mobile] Combining instructions and debug handles in a single struct (#62418)
1b04d99f55 : [Pytorch Profiler] Introduce scopes to enableProfiler (#62417)
b00afe135d : [Pytorch Profiler] Add debug_handles to KinetoEvent (#62228)
44b12ba862 : [Pytorch Profiler] Move start timestamp to end of start callback (#62191)
54f2eb6e7e : [Pytorch Profiler] Add support for adding module hierarchy to (#61792)
385b082854 : add subtract of max and testcase (#63132)
baedb559e3 : OpInfo: `nn.functional.conv_transpose2d` (#62882)
f8e217a17e : refactor fx2trt example script so it can be imported as a library (#63262)
3f43a8b9a3 : [iOS] Add `LibTorch-Lite-Nightly` pod (#63239)
809e1e7457 : Allow TransformerEncoder and TransformerDecoder to accept 0-dim batch sized tensors. (#62800)
ab7a472980 : [ROCm] Update HIP_VERSION to TORCH_HIP_VERSION (#62786)
e711b5ce6c : Respect user-set CMAKE_PREFIX_PATH (#61904)
90a96e0642 : Remove left-over print in test_diff_graph_inline_threshold (#63231)
cc6b023cba : Add CostInferenceFunction for SplitOp (#63133)
acdad8bc63 : [docs] Merge note block in `torch.lu` documentation (#63156)
e5c32cdde7 : [docs] Remove `input` parameter from `Tensor.flatten` docs (#63180)
548fe682e2 : [docs] Add cross references to `torch.transpose` and `torch.t` (#63177)
7107c367b5 : [docs] Mention `vsplit`, `hsplit` and `tensor_split` in Tensor views doc (#63191)
38a825c648 : Allow Average Pooling modules to accept tensors with 0-dim batch sizes. (#62025)
de7ae9e9b6 : [skip ci] fix workflow code generation (#63235)
000e3a0881 : [Static Runtime] Add pass to eliminate __getitem__/DictConstruct calls (#62429)
fcc1f87b6a : Fixing user inputs for low, high in `make_tensor` (#61108)
720a7a0d81 : [hackathon] fix benchmarking script in CONTRIBUTING (#63199)
bd9fad25c2 : [codemod][lint][caffe2] Extend BLACK coverage
c5f3ab6982 : ENH Adds no_batch_dim to FractionalMaxPool2d (#62490)
61b49c8e41 : [JIT] Add a flag to rethrow caught exception in jit interpreter (#63073)
32b6104f37 : Port `norm` kernel to structured kernels. (#62711)
07bb6e4fd0 : Port `prod` kernel to structured kernels. (#62024)
1280363bad : Port `mean` kernel to structured kernels. (#61643)
2d75703c6a : Remove req to call step() in training loop (#63164)
28f9e108b1 : Pass `_allow_empty_param_list` into func opt ctor (#63163)
bd81c9178a : Simplify data structures, add uniform approximation, fix mem leak (#63162)
75f198d48d : [docs][ao] update quantize_per_tensor to mention overloads (#63165)
5abeac3ef7 : Make saved tensors default hooks thread local (#62909)
cb23976f9f : Allow 0-dim batch sizes for AdaptiveMaxPool and MaxPool. (#62088)
72bc6dc8c3 : DOC Improve documentation for LayerNorm (#63144)
aa665e1ab8 : Revert D30090760: [iOS] Add podspec for libTorch-lite nightly build
dcb5eb8d9b : OpInfo for `torch.nn.functional.normalize` (#62635)
741accb11e : Implements backward for `torch.lu_solve` (#61681)
126ff6222e : Moving getattr_from_fqn to torch.quantization.utils (#63107)
07b00fc324 : ENH Migrate nll_loss2d from THC to ATen (#62826)
219ba6575b : add autowrap_functions kwarg to fx.Tracer (#62106)
7a1ab9f5d7 : [fx] store Tracer class on Graph and GraphModule for package deserialization [v2, the re-do] (#63121)
988ef190e3 : Show warning in eager mode for empty containers (#62978)
e182b459d9 : [iOS] Add podspec for libTorch-lite nightly build (#62691)
0b89e69e7c : [BE] delete GHA generated workflow files before regen (#63148)
ba25527ffc : [iOS][GPU] Fix the clamp shader function for x86_64 (#63062)
ed7ece389d : Forbid inplace modification of a saved tensor's pack_hook input (#62717)
aa5141f204 : Update CONTRIBUTING.md to remove ProcessGroupAgent (#63160)
96fb1a56ea : add use_strict_trace to tensorboard add_graph method (#63120)
1022443168 : Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
ed0b8a3e83 : LayerNorm Support in autodiff: (#50467)
b004307252 : [codemod][lint][fbcode/c*] Enable BLACK by default
aac3c7bd06 : [reland] OpInfo: `adaptive_avg_pool2d` (#62935)
daba551922 : [BE] shorten CI name part2 (#63030)
eea52b7d47 : Skip zero test on windows (#63087)
4d7a12f68b : BatchNorm: Use resize_output and empty, instead of empty_like (#63084)
d5a7579597 : [quant] Make version 1 the default for get_default_qat_qconfig (#63043)
91525d42d9 : Fix sharded tensor tests. (#63054)
bf7d03ff1f : Port `log_softmax_backward_data` to structured kernel (#62372)
ba603594fd : Port `log_softmax` to structured kernel (#57374)
d2eda7f2f3 : Add ciflow_ruleset.json generator along with gha ci (#63097)
04caef8e1d : Improve IMethod::getArgumentNames to deal with empty argument names list (#62947)
5cf32c1d09 : Fix Nnapi backend execute's dangling pointer (#63092)
709ac6853a : Fix warnings (#62930)
855e8f2b17 : [iOS][GPU] Consolidate array and non-array kernel for upsampling_nearest2d (#63061)
456364729e : irange-ify 13b (#62476)
31c1983603 : Add BFloat16 support for unique and unique_consecutive on CPU (#62559)
51a67d3168 : Add Github action to upload full source releases (#63022)
821c1edea9 : Embedding thrust->cub: unique (#63042)
fa22f6303f : [PyTorch] Add flop count for addmm (#61895)
fb4ba9e664 : XNNPack Input Pointer Caching Comment (#62818)
82123758ba : `_convert_coo_to_csr` CPP and CUDA functionality (#61838)
b8e6144e0a : Add a _RemoteDevice structure for ShardedTensor/ShardingSpec. (#62927)
b746fed164 : [Pytorch Edge] Move RuntimeCompatibilityInfo Factory Method (#63005)
3d3ad0a52f : [easy] add an `inplace` argument to MutableNetProto.to_net() and core.Net() constructor (#63068)
c090ae291e : Fix gha render-test-result mixed failure passthrough (#63056)
4ea6a3aa74 : Fix issues with printing certain torch modules (#62447)
5c00091f02 : Shard python_functions.cpp (#62186)
c5de83adca : Fix inconsistency between Python and JIT power operation (#62842)
f446e835ee : Fix CUDA_KERNEL_ASSERT ambiguous symbol in NDEBUG mode (#62527)
f7611b31aa : [4/N] Enable opt-asan for distributed unit tests. (#62051)
847a7cfa10 : Back out "[fx] store Tracer class on Graph and GraphModule for package deserialization" (#63053)
324673a537 : rebase for autocast updates to include device_type and dtype flags (#61002)
a55cae3d37 : Fix missing element types and shapes when autograd.Function has multiple tensor outputs (#57966)
390c0ac403 : remove dead code (#63031)
94c5309369 : Revert D30199482: [pytorch][PR] Add BFloat16 support for unique and unique_consecutive on CPU
d1f9c03cef : Use `const auto` with irange (#62990)
d893b44cd8 : change nccl version reporting (#62916)
f307120df4 : Update test_torch_deploy (#62838)
af6ed084b4 : update test_libtorch (#62797)
2f5ac9c0ba : update test distributed (#62796)
dfe8445cd7 : update test_vulkan (#62795)
25c3b9dc10 : update test_rpc (#62781)
f807229fd4 : [ONNX] add support for prim::Uninitialized in lower_tuples pass (#56912)
4d0497034c : Remove process_group_agent and faulty_process_group_agent files (#62985)
790553811c : fix sort and topk with discontiguous out (#63029)
500b24e303 : [iOS] enable Metal in the nightly build (#62855)
3beb65d45d : test_cudnn_convolution_relu skipCUDAIfRocm
557047eb4c : Add docstring for saved tensors default hooks (#62361)
dbb7be2e79 : [iOS][CI] Store every version of nightlies in S3 (#63039)
990c2190d1 : [quant][graphmode] Reference pattern support for elu (#62607)
f836c4f8bd : [fix] TestMultiThreadAutograd: propagate exception from child thread to main thread (#63018)
bfa67264d1 : [1/N] Nnapi backend execute and compile (#62272)
fc0b8e6033 : Add BFloat16 support for unique and unique_consecutive on CPU (#62559)
cb7f35d47a : [quant][refactor] Checking activation_dtype instead of activation_post_process (#62489)
6d21e36f21 : LU solve uses cuBLAS and cuSOLVER for matrices with dim > 1024 (#61815)
0c39cea3d2 : [sharded_tensor] add default fields to ShardedTensorMetadata (#62867)
5fb79f61a8 : [DDP] Dont set thread local state in reducer autograd hook. (#62996)
6915bc0781 : [typing] suppress errors in `fbcode/caffe2` - batch 2
ea808df25d : Test shape analysis with opinfos (#59814)
7312bd953c : add support for a few more opinfos in jit (#59812)
9cbdc90d73 : Don't substitute in symbolic shapes to shape compute graph (#59811)
7db0bcfb40 : small cleanups (#59810)
9cd990de0d : Only optimize after change (redo) (#59809)
4c630773e8 : [jit] warn if _check_overload_body fails to find source
aa89d5f7f6 : [quant] Update get_default_qat_qconfig to return the fused observer+fake_quant module (#62702)
08d1a12d69 : [quant] add reduce_range option to FusedMovingAvgFakeQuantize module (#62863)
978490d7c7 : Codegen: Fix operator::name on windows (#62278)
cdf702b60c : Reject kwonly arguments passed positionally in torch.ops (#62981)
9e7b6bb69f : Allow LocalResponseNorm to accept 0 dim batch sizes (#62801)
061062ae2a : Update TensorPipe submodule
3df4870343 : [Reland][DDP] Support not all outputs used in loss calculation (#61753)
5ed6e4429e : To fix variance computation for complex Adam (#62946)
3c1d1170a4 : [quant][graphmode][fx] Attach a weight qparam dict to linear and conv in reference quantized model (#62488)
59ac451ba3 : Simplify the logic of running ci workflow codegen (#62853)
8720369a48 : irange-ify 12b (#62484)
93e0f3a330 : Shard Operators.cpp (#62185)
4b9ca72c7c : irange-ify 13d (#62477)
d16587f84d : Enable rebuilds for Ninja on Windows (#62948)
a82b9ef1ff : BFP16 quantization/dequantization (#62974)
c4aeecac75 : Migrate Embedding thrust sort to cub sort (#62495)
084e92bb76 : Use output memory format based on input for cudnn_convolution_relu (#62482)
4fdb9579fa : irange-ify 12 (#62120)
da9958c899 : irange-ify 1 (#62193)
161fb31893 : Fix render_test_results if condition on always() (#62997)
39ec1da935 : [reland] Gate DistributedOptimizers on RPC availability (#62937)
5b8389e536 : irange-ify 8d (#62505)
6286d33878 : [fx] store Tracer class on Graph and GraphModule for package deserialization (#62497)
f82d4b8957 : Mark unused functions with `C10_UNUSED` (#62929)
08f6bc1da6 : Stop exporting symbols in anonymous namespaces (#62952)
3dcd785cac : [Static Runtime] Add tests for all aten ops (#62347)
a01f832329 : handle get_attr operations in typechecker (#62682)
3eeaffc7c5 : Linker version script to hide LLVM symbols (#62906)
1b1f1e36b4 : Add ``allow_empty_param_list`` to functional optimizers (#62522)
710c419f11 : [Vulkan] Added Hardshrink op (#62870)
922710f9b9 : Change output node handling for typechecker to deal with tuples (#62582)
e55f271859 : __torch_dispatch__: Populate kwargs dictionary with keyword-only arguments (#62822)
2b83007ae2 : Modify GHA CI to use PYTORCH_IGNORE_DISABLED_ISSUES based on PR body (#62851)
8b54b14f92 : [Static Runtime] Added a cache for NNC generated code across different calls to the same ops (#62921)
3782f3eced : Enable upper for torch.linalg.cholesky (#62434)
e54ee9bac1 : [nnc] Updated IR cloning to create clones of expressions in addition to statements (#62833)
5deeaab36a : minor fixes in c10d for Windows (#62953)
fff83f3f66 : Add handling of list write to remove mutation (#62904)
254148ec7d : Add tensor-scalar op (#62903)
4c4c5b14e4 : Port `sum.dim_IntList` kernel to structured kernels. (#61642)
c7db642a72 : Adding collective quantization API (#62142)
6ccedc7c1f : Set mkl thread locally (#62891)
30214aef2d : [BE] irangefy (#62928)
9f7aba737b : Make IMethod cache mutable so getArgument works on const IMethod (#62834)
b80dffd911 : [TensorExpr] Remove more 'const' from IRVisitor methods for *Imm types. (#62932)
b45cf9b81b : Revert D30117838: [WIP] Gate DistributedOptimizers on RPC availability
e6a3154519 : Allow broadcasting along non-reduction dimension for cosine similarity (#62912)
6630d98ae5 : Refactor codegen file sharding (#62184)
44fad84bca : [DDP] Add host-side time to CUDATimer (#62770)
22e3cc21e5 : Back out "Enable test_api IMethodTest in OSS" (#62893)
bbe2c8e6d2 : Fix reshape for the Lazy key (#62846)
6e24ce7a46 : Revert D30138788: [pytorch][PR] OpInfo for `adaptive_avg_pool2d`
d9154b9b26 : [quant] Input-Weight Equalization - allow logical evaluation (#61603)
43b087791c : .github: Make sure to deep clone on windows (#62907)
e3944ab00e : Revert D30038175: Improve IMethod::getArgumentNames to deal with empty argument names list
7a3f1386ae : Add GradBucket::parameters() to ddp_comm_hooks.rst (#62877)
6d24a075cb : Check contiguous to dispatch to NHWC cuda template (#62839)
e6e579ce74 : [FX] Add torch.memory_format as a BaseArgumentType (#62593)
97dc43beeb : use test environment for test phase (#62824)
786934902c : Adds JOB_BASE_NAME to steps of CircleCI mac workflows (#62892)
c9b5d79d40 : [hotfix] fix BC checker direction (#62901)
59d09b148c : BUG Fixes bug in no_batch_dim tests (#62726)
a03604c610 : Set JOB_BASE_NAME consistently for bazel (#62886)
3f09485d7e : [WIP] Gate DistributedOptimizers on RPC availability (#62774)
1dba329d20 : Enable step_param for Adam functional optimizer (#62611)
836b2431dc : [quant] Input-Weight Equalization - selective equalization (#61916)
e6ef87001c : [BF16] Add BF16 support to _aminmax and _anminmax_all operators (#62767)
56ff996386 : [vulkan] Add _reshape_alias (#62858)
5f4207eb91 : [vulkan] Throw an exception if device does not support Vulkan (#62859)
d3bdf345cb : Introducing DataChunk for DataPipes batching (#62768)
5e5de75f4d : Add getPyInterpreter() API (#62659)
27135f86fd : fix docstring default value of `last_epoch` for SWALR in torch/optim/… (#62799)
9573e7a644 : rename namespace f4d to velox (#61)
e1f81c9321 : [torchelastic][multiprocessing] Print warning message only when child processes are stuck (#62823)
f6c7081a16 : Allow FractionalMaxPool 2D and 3D layers to accept 0 dim batch size tensors. (#62083)
8aa12cbf86 : Add tutorial link (#62785)
64c54f92ca : [opinfo] nn.functional.unfold (#62705)
9ac56ef0fc : [DDP] log gradient ready order and bucket indices (#62751)
80091cb0f7 : [DDP] Allow tuning of first bucket (#62748)
5c431981b5 : OpInfo for `adaptive_avg_pool2d` (#62704)
eaaceea8d4 : Bump protobuf version in CircleCI docker images (#62441)
e62189ad69 : [jit] Better checking for overload function declarations. (#59956)
63fa53d37a : Add batched model to torchdeploy examples (#62836)
c8eda919a4 : test, fix sparse * dense exceptions and corner case (#61723)
8d7786ada6 : Simplify hardswish ONNX export graph. (#60080)
7630f407cc : add `OpInfo` for `torch.nn.functional.grid_sample` (#62311)
5dbcd5638b : OpInfo for `nn.functional.avg_pool2d` (#62455)
878943c64f : Preserve memory layout when aten batchnorm is used (#62773)
d45291613c : [pruner] generalize bias hook for conv2d (#62430)
b524a1101a : ns for fx: add ref_node_target_type (#62685)
b96acb7591 : Allow disabled tests to be re-enabled with IGNORE_DISABLED_ISSUES (#62686)
24a2681358 : Revert D30094460: [profiler] Re-enable test on Windows
0c8ed042f2 : Revert D30095246: [pytorch][PR] Enable ncclAvg for reductions
6d896cb545 : Update faq.rst so OOM section mentions checkpoint (#62709)
b84885cc8b : Add support for boxed functors (#62658)
e6a227465b : Add serialization support for slots and subclass getstate/setstate (#62745)
056b147e10 : clean torch_function handling in serialization (#62744)
ee82e7a14e : [DDP Communication Hook] Renaming C++ calls to match python API closer (#62735)
64b3ab6407 : Improve IMethod::getArgumentNames to deal with empty argument names list (#62782)
019048b3b6 : [PyTorch Edge] Simplify Exception Handling (Take-2) (module.cpp) (#62634)
4b68801c69 : Enable test_api IMethodTest in OSS (#62521)
a749180e4e : Enable ncclAvg for reductions (#62303)
4bd54cebe0 : Refinement types and unification for symbolic shape inference (#61776)
a27a0b1ef5 : [SR] Disable NNC temporarily (#62746)
afc1d1b3d6 : Fix lint errors in cuda_ReportMemoryUsage tests (#62778)
658540f43f : remove deprecated is_deterministic and set_deterministic (#62158)
a705b8f08f : OpInfo for `nn.functional.relu` (#62076)
123be6b261 : Port `addcdiv` to structured kernels. (#62319)
693b0af996 : Port `addcmul` kernels to structured kernels. (#62318)
8bbcef5096 : Report more information for memory profiling (#61282)
0aee9c0ef8 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
aed01a991d : Add hasattr to torch::deploy interface and hasMethod to PredictorContainer (#62669)
281737ea6f : [DDP Communication Hook] Rename 4 Methods of GradBucket Class
7f1b672b7a : Revert D29952381: [Static Runtime] Ensure that unittests only use out variants or native ops
491d89da1b : .github: Fix --no-build-suffix (#62739)
de94034328 : Fixes #62636 (#62670)
8e35df0bf3 : det_backward: return svd path for double backward (so that all ci tests pass) (#62570)
6f0abba04c : [fix] manual_seed{_all}: mem leak (#62534)
89f898ebb5 : Fix wrong command in README.md (#62472)
b454275f47 : Support eager mode use of `torch.jit.isinstance` with multiple types (#60465)
5a1017be97 : [profiler] Re-enable test on Windows (#62703)
8737e17af2 : [Static Runtime] Ensure that unittests only use out variants or native ops (#62335)
de77c6a0eb : [BE] fix bc check (#62687)
0a66416767 : Rename master to main for test-infra references (#62728)
90ba71f841 : Automated submodule update: FBGEMM (#62688)
8bcf01631a : [ROCm] update magma (#62502)
dfdc3069e7 : Revert D30072994: [pytorch][PR] [6/n] Update test rpc path
34c9f5a8da : [DDP Communication Hook] Update get_tensor and set_tensor to be cleaner naming conventions (buffer() and set_buffer()) (#62662)
4b47ea9446 : adding a skip for ROCm for a flaky test (#62664)
d1c85d2c06 : Move ASAN tests to clang-7 (#62663)
773a8eede4 : [profiler][refactor] Refactor the usage of legacy profiler implementation (#61931)
5830f122f1 : Add docstrings for save_on_cpu hooks (#62410)
5542d590d4 : [EZ] Fix type of functional.pad default value (#62095)
d7d399f3df : Exposes _aminmax as aminmax and makes it structured (#62401)
92f470da08 : Revert D30070707: [pytorch][PR] [5/n] Update test distribute path
18eeccc7e8 : [mypy] Fix Optional type check (#62668)
5a49abfaf1 : Revert "Revert D29940705: [fx2trt] Dynamic shape inference support" (#62667)
34f50c6e35 : [Static Runtime] testStaticRuntime verifies that # of nodes is at least 2 (#62622)
2bddaf6149 : Revert D30072859: [pytorch][PR] [4/n] Update vulkan test path
ad4e1f1132 : [6/n] Update test rpc path (#62526)
c48dfe0d9f : .github: Enable SSH to linux runners (#62280)
9beb279d84 : Add context manager to save tensors on CPU (#61928)
91ef19309e : [quant] Input-weight equalization - branch support (#62366)
62a90c227f : Make _Join, _Joinable, _JoinHook public (#62605)
053e11f1b3 : Revert D29940705: [fx2trt] Dynamic shape inference support
ff31389c21 : Cast a few vars to void that are otherwise unused
59dd12042e : [nnc] Removed const from all fields in IR. (#62336)
474d7ec43b : [Pytorch Edge] Black Box Compatibility API (#61477)
b7391f44df : cast return of cudaGetLastError() to void when discarding (#62518)
d6048ecd6b : Enable bazel builds on `ciflow/default` (#62649)
4d5607bb25 : [Reland][DDP] log bucket sizes (#62625)
1630b86dd6 : [4/n] Update vulkan test path (#62519)
ddd916c210 : [quant][refactor] Return the models in checkGraphModeFxOp for further checking (#62487)
76c447a730 : Remove CUDA10.2 + gcc 9 in CI (#62609)
d8849bdb03 : [5/n] Update test distribute path (#62520)
6b02ad5f82 : [fx2trt] Dynamic shape inference support
b7ac286d0e : CMake: Add optional precompiled header support (#61940)
2cf4d8128d : add `OpInfo` for `torch.nn.functional.mse_loss` (#62254)
ab8af15545 : [Static Runtime] Enabled building Static Runtime tests and benchmarks in OSS CI (#62226)
43327cc197 : Refactor commonalities between two approaches (#62624)
e6a3967c2a : Add invariant check (bucket indices: 0, 1, ..., k-1) (#62623)
87465a6e68 : adding operator cumulative_trapezoid (#61615)
b37578b3c0 : Make bazel output less verbose in CI (#62601)
3bda4ea842 : Avoid unnecessary copying data in Saved Variable (#61927)
7edb4f8761 : Port `cumprod` kernel to structured kernels. (#61899)
e52325655a : Port `cumprod` kernel to structured kernels. (#61899)
c7a7c2b62f : Enable Gelu fp32/bf16 in CPU path using Mkldnn implementation (#58525)
fd8004b42e : add bfloat16 impl for nextafter (#61829)
2888b7fec5 : Fix sign comparison (#62483)
a77be16538 : TensorAccessor::bounds_check should be a CPU-only function (#62628)
e0364ccc33 : [caffe2] break one circular dependency between Caffe2 and ATen-cpu (#62632)
88af4d8441 : Initialize RRefs only when explicitly asked for. (#62618)
b58e04f156 : Make sure FindLAPACK finds the same BLAS library (#49647)
2d038b5dc8 : Cast a var to void that is unused
c4196bee93 : Save some memory in scatter (#62516)
10d3a2c13a : [tensorexpr] Added logging info for SimplifierUnderContext (#62138)
3a592730d5 : [nnc] Simplify i%100 to i if i is less than 100; fixed #52580 (#60693)
8f7ae77040 : [nnc] Add context-sensitive simplification for div/mod (#60688)
c07a123b26 : Support saving and loading ShardedTensor. (#62242)
dd23372aa5 : .circleci: Prefix intermediate build image tags (#62610)
525fa2f0b6 : [reland] Catch saved tensors default hooks race condition (#62564)
f5cf24a224 : Fix lint in test_deploy_from_python.py (#62626)
615ac8e573 : Added logic for notifying PTE webapp for Nightly and PR builds (#62512)
db071ef005 : [Reland][DDP Communication Hook] Rename 4 Methods of GradBucket Class (#62592)
d228a8fc94 : [Vulkan] Softmax Along Channel Dim (#62239)
940cbbce76 : Add BFloat16 support to CPU nansum (#61083)
27d3d3a7d7 : deploy in python fix to work in @opt mode
a4af91b2fe : Cleanup CUDA 10.1 and 10.0 support on CI (#62597)
305d5fcc05 : [Pytorch Edge] get_model_bytecode int -> uint (#62201)
0c4c37b01e : Disable libtorch testing on MacOS (#62599)
093495d3f0 : [fx] prevent implicit submodule inlining when submodule is a GraphModule (#62436)
dc1bd6acee : Remove PROCESS GROUP rpc backend (#62411)
2ec4f69b48 : [DDP Comm Hook] Do not expose hook_then_optimizer as a public method (#62532)
b161ac541d : [reland] Add default Saved Variable hooks (#62563)
6f95850127 : Revert D30024161: [DDP Communication Hook] Rename 4 Methods of GradBucket Class
2e4f566d30 : add `OpInfo` for `torch.nn.functional.softplus` (#62317)
cb626da145 : [fix] mark non-differentiable ops (#62529)
562b555a2b : [CUDA] Fix typo in Normalization.cu (#62515)
29c8b1db57 : [DDP Communication Hook] Rename 4 Methods of GradBucket Class (#62510)
34cb2b5d04 : Update SobolEngine docstring w/ correct behavior (#62548)
2445d5c60a : Removed the hypothesis tests for adaptive_avg_pool (#60886)
3dc588d577 : Fix: no enough space for cu102 debug nightly build (#62465)
51f687fd4b : Add overlap with DDP to ZeRO (two approaches) (#62157)
ee482edf0a : Callable activation function support for Transformer modules (C++) (#62342)
c9d5325c52 : [BE] shorten the name part 1 (#62402)
7565039ee9 : Support system-provided Intel TBB (#61934)
bbf6131159 : Add factory kwargs test to test_modules (#62340)
46b18aa294 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
aa5e3ad705 : [quant] Support PerChannel quantization in FusedMovingAvgObsFakeQuantize (#62346)
7adb78017a : [countbuild][xplat/caffe2] contbuild with sanitizers (#61724)
32b37ba246 : [DDP Communication Hook] Update the typing info of comm hook output as well as some docstring (#62457)
72295da6c3 : Reformat (#62456)
c506769f19 : irange-ify 8 (#62422)
bd9f35313a : Revert D29922299: [DDP] log bucket sizes
9df7ac7a94 : Port `nll_loss_backward` to structured (#62144)
5429f68f00 : [DDP] log bucket sizes (#62232)
63d3da7961 : Fix sign comparison (#62194)
2006dc6316 : [3/N] Remove unittest.skip from torch/testing/_internal distributed files. (#61991)
7521addede : [deploy] loader cleanup (#62223)
174433267c : [dte] fastpath implementation for broadcast utility function (4/x) (#62493)
08539ca047 : Add non-context manager usage support for profiler (#61690)
6441caeaa7 : Use multi-dimensional cuFFT transforms to improve FFT performance (#61203)
73c46092f1 : [pytorch] sort the output of the model_dump util (#62485)
49060aa81a : Revert D29999785: Reland D29943356: .github: Migrate ecr_gc to github actions
43d4fe68cd : [Foreach] support implicit broadcasting in slow path (#62167)
70f57bcb1e : [PyTorch] Fix quantized Conv1d module parameters (#62356)
eac288ea77 : [Pytorch Backend Delegation] Annotate function args with type information (#62433)
f16c73b9f3 : Improve error messages of `torch.testing.assert_close` for sparse inputs (#61583)
8a9dfa52e9 : Delete an unused variable
73ba166e2a : fix(elastic-docs): Fix elastic launch doc (#62378)
635e63c53d : irange-ify 15 (#62123)
3c0c1c4ecb : Fix incorrectly sized tensors for svd when full_matrices=False (#62022)
26d2f4acb2 : Quick fix to make torch.tensor work with functorch (#62423)
8c4d8c29e4 : [2/n] add test ATen to wheel test (#62341)
d08165dfdf : [fx2trt] Add op converters for ads 23x dense arch
d783617216 : enable warnings on cuda synchronization (#62092)
273188549f : pass through *EXITCODE *EXITCODE__TRYRUN_OUTPUT variables (#49646)
b3781f0244 : Remove faulty process group agent logic (#62409)
ee7d19ac29 : add `OpInfo` for `torch.nn.functional.one_hot` (#62253)
09d10c4329 : OpInfo for nn.functional.softmax (#62077)
9fdf7ec6a2 : [docs] Update sphinx to 3.5.4 (#61601)
e352585f67 : Clean up running smoke tests logic for Windows GHA (#62344)
329426c249 : Fix cppdoc example syntax (#62385)
d57ce8cf89 : [Linalg] Add cusolver syevjBatched path for torch.linalg.eigh when cuda >= 11.3 U1 (#62003)
956c22b1f9 : [dte] fastpath implementations for mulgrad / divgrad (3/x) (#62437)
607d720be1 : Remove an unused variable
cfd0f5ebc9 : [quant] update per-channel observer min/max_val attribute names (#62345)
d92301dd02 : [sharded_tensor] add new init_from_local_shards API (#60479)
bc787f2402 : Fix setArgumentNames and make Script/Python consistent (#62442)
725d98bab6 : [Prototype] [PyTorch Edge] Speed up model loading by 12% by directly calling the C file API from FileAdapter (#61997)
693d8f2f07 : [PyTorch Edge] Cache operator lambda during model loading [7% faster model loading] (#61996)
0b3f42fa4f : [PyTorch Edge] Add test for lite interpreter operator caching (#62306)
0bbdf0e1e3 : [PyTorch Edge] Add test_lite_interpreter to fbsource xplat BUCK files (#62305)
90977e10ed : Remove an unused variable
74291c8347 : [quant][graphmode][fx] Fix the calls to load_arg in quantization_patterns.py (#62376)
eef85f89b9 : [dte] broadcast fastpath implementations for reduce utility functions (2/x) (#62428)
219917706e : [quant][graphmode] Add support for reference pattern for default ops (#62375)
acba9b3104 : [DDP Communication Hook] Simplify the implementation of parseHookResult of PythonCommHook (#62389)
554daef820 : Reformat test_c10d_nccl.py and distributed_test.py (#62388)
9fee176be3 : [Model Averaging] Fix docstring of PeriodicModelAverager (#62392)
8f519c5e07 : [quant][graphmode] Add support for reference pattern for torch.cat (#62374)
502823c201 : Change torch::Tensor to at::Tensor to fix build failure (#62425)
49dc827712 : Reland D29943356: .github: Migrate ecr_gc to github actions (#62438)
dc8b5db5f8 : [quant][graphmode] relax the constraint for supported_dtypes for reference option (Linear and Conv) (#62348)
9f9244aabe : [dte] scaffolding for c2 operator broadcasting fastpath (1/x) (#62369)
5c47038d12 : Back out D29792193 "Add default Saved Variable hooks" (#62415)
dcfcefcd0b : Back out D29848525 "Catch saved tensors default hooks race condition" (#62414)
389380ffcc : [reland] Refactor Tensor::to to call a primitive that is not copy_. (#62262)
7b6d569a2b : [jit] Renamed prim::Concat as prim::VarConcat (#61983)
5ede826178 : Fix alpine ecr image pull (#62413)
a42345adee : Support for target with class probs in CrossEntropyLoss (#61044)
dd0ef23a85 : Delete .clang-tidy-oss (#62373)
7157ad44bc : Fix windows ci squid env (#62353)
80a662e773 : ENH Updates docs and tests for classification modules that already support no batch dims (#61874)
b9f02778b2 : Forward fix mypy for #61820 (#62398)
2d103025a5 : Adding warning on isend about modifying after send (#61875)
945d40dca6 : Also disable inplace fw AD for acos on windows (#62360)
1b147a52f5 : Allow FX tracer to trace control flow (if/while) statements when parameter shapes are in the conditionals (#61820)
4ed8858817 : Exclude time of waiting in queue from gloo communication prof… (#61342)
35307b131d : Callable activation function support for Transformer modules (Python) (#61355)
1f2b96e7c4 : [DDP] Make compute_bucket_assignment_by_size return per bucket sizes (#62231)
c76daa6de3 : [DDP][ez] Remove misleading comment (#62230)
842228fd0d : [DDP] Save bucket size limits (#62229)
cac4aa71ca : Provide option to pass module instance to _load_state_dict_pre_hooks. (#62070)
2eaf71d749 : [Model Averaging] Update model averager API to avoid the redundant `params` arg needed by post-localSGD optimizer (#62132)
55bee44951 : [Model Averaging] Post-localSGD optimizer (#62131)
58d45d950b : [DDP] Log unused param names under DETAIL debug mode. (#62209)
24ed6e6b16 : Add actionlint (#62364)
fcc7fbe15f : Split zeta_kernel out of BinaryMiscOpsKernel.cu (#62261)
f6e137598d : ns for fx: fix nit in default qlinear weight extraction function (#62334)
72c943a2ac : ns for fx: fix bug for user function in weight extraction (#62333)
d98b1c400d : [pruner] add cuda tests for pruner (#61993)
b39b28ced3 : irange-ify 10 (#62122)
88f8f2ab94 : irange-ify 6 (#62115)
9e77113e85 : irange-ify 11 (#62121)
b5867a1b34 : irange-ify 7 (#62117)
59bb4f2dab : Revert D29928698: [pytorch][PR] Use private squid proxy
3a2603bc68 : Port `slow_conv_transpose2d` to structured (#55503)
05b802d4e0 : [pytorch] Bring back RemoveInplaceOps() (#62200)
b91a917616 : [Static Runtime] Fixed another build failure in OSS due to test_utils.h (#62338)
7c588d5d00 : ENH Adds no_batch_dim support for pad 2d and 3d (#62183)
6da4a25509 : Use private squid proxy (#62244)
2581dfc249 : [Model Averaging] Create a base class for model averaging (#62111)
a15fff0a7f : Revert D29794666: Remove faulty process group code
71a6ef17a5 : ENH Adds no_batch_dim tests/docs for Maxpool1d & MaxUnpool1d (#62206)
cdf85a82ed : [quant][graphmode][fx] Add reference pattern support for BatchNorm (#62215)
7443c90f15 : optimize non lastdim softmax bf16 (#60371)
68efa186cc : [static runtime] Implement aten::full (#62227)
10c6811a6b : [DDP] Run test_ddp_new_tensor_in_fwd with static graph (#61992)
acf8907e94 : These should be equivalent per the previous formula but breaks xla (#62329)
f4baa83eae : [bc-breaking] reference option for conv produce a pattern instead of reference conv module (#61942)
52d1ffb789 : Teach pytrees about namedtuple (#62292)
c06b6e445f : Build M1 binaries with PocketFFT (#62222)
cb2b5f06c9 : Revert D29816592: [pytorch][PR] [fix] polygamma n>=1
73f1e2d1dc : [8/N] Nnapi backend delegation preprocess: New refactored design (#62225)
7aabda6d5d : Update nccl to v2.10.3-1 (#62276)
1f1d01df3e : Revert D29943356: .github: Migrate ecr_gc to github actions
af0f083d42 : [dist_optim] fix the bug of none grads on functional optimizers (#62249)
c0b806694f : Do not use deprecated data accessor in IndexKernel.cu (#62268)
e3be185069 : [PyTorch] Add KWargs support to script module forward (#62224)
9776e1ff2f : Migrate thnn_conv_depthwise2d from THC to ATen (#62281)
ba9423aa93 : Fix forward ad for matrix power land race (#62291)
171e13fde9 : Rework PowKernel.cu (#62260)
7507aeded5 : [reland][bc-breaking] reference option for linear produce a pattern instead of reference linear module (#61892) (#62277)
24d94f5102 : Limit smoke tests on PRs to just one config (#62288)
8e0622abf1 : .github: Migrate ecr_gc to github actions (#62284)
d0e5ef5eba : .circleci: Remove conda-package-handling pin (#62290)
8fe32c9c13 : fix test-report uploading uniqueness issue (#62217)
190cdcb08c : remove print for status on scribe sending (#62285)
e1bee3eb30 : [Static Runtime] Add missing unit tests for static runtime ops (#62238)
4a15f4a902 : Allow 0-dim batch sizes in Bilinear NN layer. (#47106)
ab0354b650 : All remaining linear/element-wise formulas (#59993)
4c3eea26bd : Fix out= variant forward grad detection (#60499)
4a36e2a223 : Add forward AD inplace check and fix codegen (#60498)
df18d05429 : Make bytes_read available for OperatorCost (#62059)
bba7800933 : Add logical op symbol (#62063)
3bdee2bbed : [jit] Rewrote DFS graph iterator to remove unnecessary local state (#61326) (#61980)
fa52b4b922 : .github: chown workspace for render_test_results (#62207)
acaac70f63 : Revert D29883676: Migrate thnn_conv_depthwise2d from THC to ATen
82d81455ae : [2/N] Remove unittest.skip across all of torch.distributed. (#61887)
7fc96db45d : fix typo errors in quantization-support.rst Line320 (#44447)
5f7f08f498 : Reenable AMP on XLA (#61861)
a0c1c7e5d4 : Fixing the case when starter nodes depend on get_attr node (#62234)
8cdf16d1de : Revert D29810657: [bc-breaking] reference option for linear produce a pattern instead of reference linear module
d7ddae8e4f : det_backward: correct, more robust and with complex support [clone] (#61905)
de3a4eb583 : Migrate thnn_conv_depthwise2d from THC to ATen (#62006)
9df605133e : [bc-breaking] reference option for linear produce a pattern instead of reference linear module (#61892)
6c6a9c73f2 : [7/N] Nnapi backend delegation preprocess: compile_spec sanity check (#62213)
2cbc0ede7d : [DDP] Log if graph is static at end of training (#61871)
79eb8bb299 : [Static Runtime] Enforce proper output dtype for many ops (re-land) (#62267)
2eef1f27f8 : Disable ccache for nccl builds (#62208)
dc55d511d9 : Forward fix mypy (#62263)
3cd12448b4 : Add forward mode differentiation for inverse and solve (#62160)
a0309f89f4 : Initial ModuleInfo implementation (#61935)
afe3644321 : Remove faulty process group code (#61907)
a3be2ecc3a : Revert D29887367: [Static Runtime] Enforce proper output dtype for many ops
b599c1e794 : Create linalg and parametrizations codeowners (#62086)
228b50e053 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
2d7c1e3fa8 : [bc-breaking] Produce quantization pattern for add_scalar and mul_scalar (#61859)
b176feec1e : Add device and key for lazy tensors (#61621)
2945a73d90 : Add option to skip GH validation for torch.hub (#62139)
64283fe146 : [DDP/Functional Optim] Support kwarg arguments (#62079)
c0ebeca1a8 : [Functional Optim] Test kwargs parity for SGD (#62078)
478098aaac : Revert D29801652: Refactor Tensor::to to call a primitive that is not copy_.
69adb21940 : Parity tests for functional optimizer step_param (#61756)
b6d10a3a27 : Fix infinite loop in `_validate_not_a_forked_repo()` (#62072)
d0f430927b : [PyTorch][Edge] Serializing sub modules with same names (#61933)
a13f714b6d : DOC: remove git stamp from release documentation version (#58486)
60070982d2 : [Static Runtime] Fixed build failure in OSS due to test_utils (#62216)
962841b532 : Fix subnet counting and re-enable check for multiple onnxifi ops in AOT (#62033)
037c4aa1d1 : [fx2trt] flatten converter (#62202)
f883ed9095 : irange-ify 8b (#62195)
f7743e92bf : irange-ify 9 (#62118)
026cfe85b4 : Fix InlinedCallStack annotation to account for module calling its own (#61791)
f16102f72a : Revert D29892919: Add squid proxy as egress cache
cf1f59452b : Hacky support for meta tensor serialization. (#62192)
f0140a8c5f : Disable cppcoreguidelines-non-private-member-variables-in-classes (#62212)
1343eea037 : Fix clang-tidy line filtering logic (#62210)
2a83f24027 : Enable macos clang-tidy installs (#62214)
f4136c5efc : [Static Runtime] Enforce proper output dtype for many ops
29bb3f4647 : Refactor Tensor::to to call a primitive that is not copy_. (#61458)
e63160d735 : Add squid proxy as egress cache (#62103)
d2594fa538 : irange-ify 3 (#62112)
f5c6c3947e : Remove Input Pointer Caching for XNNPack (#61959)
7ec6d1e857 : irange-ify 2 (#62113)
6dc2c07304 : [Reland] [DDP] Implement a hook which performs FunctionalSGD step. (#62177)
1dfb687f3c : Fixed off-by-one bug in Adam Smart Decay (#62135)
dcb3eadc1f : [quant][fix] Update quantization c++ tests to not run if CPU_STATIC_DISPATCH is specified (#62197)
0ca5dc7f03 : irange-ify 5 (#62114)
8e71f48f0a : Handle simple NNAPI flatten NHWC case (#61796)
b73d759708 : [fix] polygamma n>=1 (#61641)
ef7d572afa : Ensure ShardedTensor handles list/tuple appropriately as `size` parameter. (#62109)
f9dce598a5 : Add some missing cuda guards (#62100)
200b6ccdc0 : Catch saved tensors default hooks race condition (#61957)
f2369f12f9 : Add logging for dynamic rendezvous (#61822)
6007ad3529 : [Static Runtime] Refactor fb op tests to use testStaticRuntime (#62064)
be17d6eadf : Add default Saved Variable hooks (#61834)
89ca638c18 : ENH Adds no batch dim support for AdativeMaxPool*D (#61847)
394dd391dd : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
e6e8745bea : [nnc] Add simplifierUnderContext for simplification that needs context info: currently added for-stmt index var bounds info as context (#60687)
2299d6a013 : Revert D29701447: [DDP] Implement a hook which performs FunctionalSGD step.
457a3fb6d1 : [bc-breaking][quant][graphmode][fx] Produce dequant - fp_op - quant pattern for copy nodes (#61763)
76d3cdf9df : [quant] Add from_blob_quantized_per_channel API (#62049)
7195b78a59 : [quant] Add from_blob_quantized_per_tensor API (#61986)
bd95cf4473 : [DDP] Implement a hook which performs FunctionalSGD step. (#61678)
8152433de2 : [1/n] Update testing lib*.so path (#61960)
956f1c981e : fix a typo (#61061)
ee44d73e59 : Modernize override (#61744)
d2e03dc484 : [fx2trt] Add support for explicit batch dimension (#62110)
cc263ef795 : [bc-breaking][quant][graphmode][fx] Add observer/fake_quant for copy nodes (#61687)
78f7d8ccfa : [Static Runtime] Remove wrappers for aten::cat (#62067)
7c09de8384 : [torch deploy] add support for Python C extension modules (#58117)
e856a45283 : [Model Averaging] Refactor averagers to accept parameters instead of a module (#62105)
41f7a9dac0 : [profiler][refactor] Avoid using legacy event in profiler (#61721)
06a3b23971 : [android] Lite interpreter module to load from assets (#61609)
643e58466e : [nnc] Rename IRSimplifierBase with PolynomialBase (#60686)
046272f3e5 : [6/N] Nnapi Backend Delegate: Comprehensive OSS Tests (#61782)
f03e7170f0 : ENH Updates docs and tests for regression modules that already support no-batch-dims (#61461)
1ec6205bd0 : ENH Adds no_batch_dim support for maxpool and unpool for 2d and 3d (#61984)
f4ffaf0cde : Fix type promotion for cosine_similarity() (#62054)
e408af083f : Improve MHA docs (#61977)
cf3cc01f1d : [Static Runtime] Add is_frozen to StaticModule ctor (#62020)
fa11103c6a : [clang-tidy] Fix unknown GNU flag error (#62128)
9730d91abd : MAINT Migrates multilabel_margin_loss from THC to ATen (CUDA) (#60708)
a6c6fd923e : [profiler] Nvtx support (#61634)
812bc1dde6 : Smart Decay for Adam - DPER3 (#62058)
5224490ae9 : Implement NumPy-like `frombuffer` tensor constructor. (#59077)
ec4e6181e6 : [Static Runtime] Fix broken test_static_runtime build (#62098)
b820493cf1 : [skip ci] Refactor CIFlow init logic (#62102)
71cfbc45b4 : Remove redundant `torch.cuda.set_device(self.rank)` (#62097)
5ef667a8b8 : Remove duplicated movedim implementation (#61939)
10ccc5a81c : remove `randn?` from `torch.testing` namespace (#61840)
cb47d1f9c8 : OpInfo Ref: fmod, remainder (#61527)
c9b71549f2 : don't allow alias dispatch keys to go in the DispatchKeySet (#61771)
143ef016ee : Throw RuntimeError when numpy() is called on a tensor with conjugate or negative bit set (#61925)
943ca5f6f7 : [special] alias for mvlgamma (#61633)
0c55f1bdec : [torchelastic] Improve process termination logic (#61602)
e42360d56f : Remove default arguments before calling to __torch_dispatch__ (#61123)
32d0c3e8ee : Support for reference convert_fx working on gpu
0df1679e5c : BatchNorm: fix mixed precision usage with affine=False (#61962)
e318058ffe : Ignore LNK4099 for debug binary libtorch builds (#62060)
04c95a0638 : ns for fx: expose hook to define custom weight extraction functions (#62047)
07c6a12008 : ns for fx: fix typing issue in weight extraction (#62041)
eaba16d665 : ns for fx: change weight extraction to direct mapping (#62038)
8a2c525d3b : Fix some sign comparisons (#61849)
9d4056468e : Migrate scheduled jobs debuggability to GHA (#62056)
b03b45afd9 : [DDP Comm Hook] Use a single tensor instead of a tensor list as the comm hook result (#62074)
1d2ea76afb : `clamp`: port to structured kernel (#61361)
b106b958eb : preserve residual in transformer norm_first (#61692)
53222c59f0 : Reformat (#62073)
3687bbb1ed : [pruner] add Conv2d support (#61778)
a9b0a921d5 : Disable `avoid-non-const-global-variables` lint check (#62008)
260198d42c : Disable bazel in CircleCI (#62055)
a91be24e2d : Modernize make pointers (#61741)
f98fa5ea13 : [skip ci] minor typo link fix (#62042)
1a64a5c0ba : .github: Only run workflows on pytorch/pytorch (#62044)
414537ac99 : DOC Fixes link in register_module_backward_hook (#61999)
b522f3be4c : Svd docfix (#62028)
d6e776d961 : Add build/.ninja_log to artifacts for Windows (#62035)
0309c5780d : ENH Adds no batch dim support for AvgPool1d (#61860)
5a00152a3d : Warn about poor performance creating Tensor from list of numpy.array's (#51680)
2b0eddb0aa : [Static Runtime] Implement prim::isinstance and prim::TypeCheck (#61783)
e6339ee336 : optimize imports (#61908)
554e04090f : Add 11.3 conda nightly binaries (#61873)
e858f6eed9 : torch.nn.utils.clip_grad_norm_: remove device syncs (#61042)
9e53c823b8 : Add AVX512 support in ATen & remove AVX support (#61903)
59d6e07ada : fix forward_idx check (#59911)
b60d1b713e : Revert D26007050: add channels last support for thnn_conv2d (non-dilated)
171598f0e3 : [Refactoring] Fix imports order for torch/utils/data/dataset.py (#61328)
1b02641bb1 : add BFloat16 operators on CPU: arange, acosh, asinh, atanh, exp2, digamma, trigamma, polygamma (#60444)
f3f7e92be5 : Manually call lazyInitCUDA in structured CUDA calls (#61882)
196679d3aa : [Refactoring] Reordering imports in torch/utils/data/datapipes/iter/__init__.py (#61325)
25be031c6e : Add missing docker build to slow gradcheck label-triggered build (#61941)
5186fa2831 : Fix `c10d` -> `dist` in `test_ddp_hooks.py` (#61864)
109bd5e78a : OpInfo: bitwise_and (#61349)
2f3300f25f : [docs] Correct torch.permute (#61833)
5801431c9b : OpInfo Ref: addbmm (#61832)
31beef009d : Fix IMethodTest.GetArgumentNames after D29648756 (#61985)
07a91f1cfd : fix graph deepcopy to propagate output type (#61747)
8a2063e58a : Foreach Test Refactor: Pointwise, Min/Max-imum (#61327)
d6899fe492 : [Refactoring] Reordering imports in utils/data/__init__.py (#61324)
06efced177 : .github: Specify directory to pull reports from (#61990)
cc18654d66 : [fx_acc] Refactoring acc_tracer (#61963)
6284d2a82b : wrap cudaStreamSynchronize calls (#61889)
3d6aa3a2f6 : Enable `torch.isclose` to suppport bool tensors (#61271)
243c7079a1 : add 3d input and output shapes to maxpool documentation (#61310)
d00bb45846 : [typing] suppress errors in `fbcode/caffe2` - batch 2
a0e381641b : Remove relative paths for clang-tidy annotations (#62004)
e731a63e63 : Silence clang-tidy linter for TorchpyTest.FxModule test (#62001)
b6ff0fa8dd : Enable dynamically ciflow/slow so that we can run GHA slow tests on PR (#61987)
9d6cdf34a4 : Annotate generated files in .gitattributes (#61995)
ae58a4c45d : [Static Runtime] Added a variadic cat operator (#61302)
b145889192 : Modernize use make_unique (#61739)
2c0ecfbb20 : [PyTorch] Expose bias() and unpack() API of LinearPackedParamsBase to Python layer (#61855)
a02ccd6080 : [ONNX] add supplement for standardOps low precision cast (#60731) (#61561)
6f08ddfc28 : [ONNX] Enable aten:normal op and add tests for aten:uniform op. (#60441) (#61560)
f0054e1a6e : [ONNX] Update expand_as for dynamic shape (#61084) (#61559)
34075e2c8b : [ONNX] Fix the issue of converting empty list to sequence. (#58651) (#61558)
22e60d77e7 : [ONNX] Support tensor list as module attribute (#59685) (#61557)
a8f6b5a80a : [1/N] Avoid skipping tests in sandcastle. (#61876)
adb73d3dcf : Removed overhead from reshape() call if tensor doesn't need to be changed (#61466)
a8d99a28d7 : Modernize avoid a C array (#61740)
d7b31fe95d : Add ciflow config and change jinja2 templates (#61886)
2dab368d26 : Refactor generate_ci_workflows (#61879)
e2acce373f : Run Windows smoke tests with gflags in test dir (#61967)
a03466cb07 : Back out "Revert D29687143: [3/N] Nnapi Backend Delegate Preprocess: Basic OSS Test" (#61878)
4532b3c4a9 : Fix _C public bindings test (#61088)
8880f3d450 : [fx] introduce `__fx_create_arg__` dunder method for controlling custom classes are handled as node args (#61780)
3c7bfa632a : reland D29801875: .github: Clone pytorch to separate directory (#61972)
810e19979d : Torch deploy for fx.grapm_module with non-torch dependencie (#61680)
f41d3341b1 : [pytorch] Support embedding_bag_4bit_rowwise_offsets in cuda (#61728)
716567504c : Revert D29801875: .github: Clone pytorch to separate directory
ea8abcf76e : [quant] Remove calls to .item() for fake_quant_on (#61921)
b8386f5d72 : [quant] Create FusedMovingAvgObsFakeQuantize for QAT (#61691)
afdca41bab : [quant] Add a new fused MovingAvg Obs + FakeQuant operator (GPU) (#61589)
92d3391fb1 : [quant] Add a new fused MovingAvg Obs + FakeQuant operator(CPU) (#61570)
403f59701c : Changes default DDP behavior to divide sparse grad by world size before allreduce, not after (#61814)
17d743ff04 : ENH Adds test and docs for dropout for no batch dims (#61911)
06df33857b : fix adapative_avg_pool (#61851)
33db828e52 : Revert D29647586: [jit] Renamed prim::Concat as prim::VarConcat
48af9de92f : ENH Enables No-batch for *Pad1d Modules (#61060)
bdf439a958 : Adds _LazyInstanceNorm and LazyInstanceNormXd (#60982)
db11619901 : [jit] Renamed prim::Concat as prim::VarConcat (#61498)
7fbdc86aec : [jit] Removed a local function to check for dominators and used the one added to Node class (#60909)
429908e540 : [jit] Updated the concat common inputs elimination pass to use the variadic cat op instead of aten::cat (#60908)
53668f8bf6 : [jit] Added an API to remove list mutations and replace with variadic cat until fixed point (#60776)
0cfcf68aa5 : [jit] Added special handling for prim::ListConstruct while checking for may alias inputs (#60775)
4dd04a8bbe : [jit] Handled cases when input list to cat is mutated after cat using AliasDb (#60774)
604f503d30 : Revert D29794958 + compilation fix (#61937)
a152c12d7b : .github: Clone pytorch to separate directory (#61932)
7cbb7c6d2e : [vulkan] Make vulkan ops selective (#58332)
73fbf43684 : [vulkan] Fix asserts (#61495)
22fff61f06 : Revert D29794958: [pytorch][PR] changing trapz to trapezoid
e067960243 : lint_setup should not require elevated privileges (#61798)
994434ad16 : Adding complex number support for all_to_all/scatter (#61299)
457a0b63bf : use `torch.bucketize` in`to_sparse_csr` implementation (+ additional tests) (#61340)
95cec8f4fa : changing trapz to trapezoid (#61475)
86715623dd : Adding super calls to JIT test case setUp and tearDown (#61922)
7acb8b71e1 : Remove AVX detection code that duplicates FindAVX.cmake (#61748)
e8d2916b84 : Add faulty tensorpipe implementation (#61421)
d856914c57 : Fix missing braces (#61745)
f78142b68d : Modernize emplace (#61742)
2c2a084012 : approx 100x acceleration for parse_kineto_results (#60432)
4567a50b2a : Enable clang-tidy on master (#61689)
8b88c24670 : add channels last support for thnn_conv2d (non-dilated) (#49582)
91bc285084 : Fix clang-tidy error in pre-commit script (#61918)
f6446802c7 : Revert D29783943: [pytorch][PR] add BFloat16 operators on CPU: arange, acosh, asinh, atanh, exp2, digamma, trigamma, polygamma
c2cc6a9396 : Add generic join unit tests (#61786)
1c80b5220b : `nll_loss_forward`: port to structured kernel (#61443)
f0df0207ec : [jit] Arithmetic simplification for integers. (#61444)
d2abfc547b : Add ShardedTensorMetadata for ShardedTensor. (#61683)
87334c40a7 : Remove torch._bmm and remove torch.bmm deterministic arg documentation (#61629)
513c40cb1a : add BFloat16 operators on CPU: arange, acosh, asinh, atanh, exp2, digamma, trigamma, polygamma (#60444)
45751e0b34 : Support integral target for the backward of nn.SmoothL1Loss (#61112)
59a5312ce6 : Modernize fix deprecated header (#61736)
5a04bd8723 : Modernize some loops in torch (#61737)
65616184bc : [Docs] Bundle of errata and small corrections / improvements for torch.linalg docs (#61578)
a0c9d70fba : bitwise_and: Port to structured (#60813)
875d63ed04 : bitwise_xor: Port to structured (#60812)
ce8aeefbf4 : bitwise_or: Port to strucutred (#60811)
f59ac5abc8 : Add thread local state guards in autograd engine hooks. (#60067)
641f6ef8a7 : Implement IMethod::getArgumentNames() (#61856)
42d6543c7b : [bc-breaking] Dispatch index_put with boolean mask argument to masked_fill (#61612)
018dc4193e : Factor vector intrinsics out of SumKernel.cpp (#61483)
c44d9d9f70 : Use cascade-summation to improve nansum accuracy (#61082)
bf1c9aaa79 : logit_backward: Port to structured (#60817)
b8686b42d8 : tanh_backward: Port to structured (#60816)
8c42d7ad07 : sigmoid_backward: Port to structured (#60815)
11cc179366 : xlogy: Port to structured (#60814)
9fb6b40f3e : Makes a streaming backward test try gradient stealing more directly (#60065)
873cc7a46d : Support 3 argument variant of the getattr() call where the third arg is the default return value (#61599)
ffd2e602f4 : [CUDA graphs] Make sure graph mempool cudaMalloc_count decrement pairs with cudaFree for all allocations (#61567)
208d06ca8c : Port other comparison ops: `ne`, `lt`, `gt`, `le`, `ge` to structured kernels. (#60942)
97327137ba : Port `eq` kernel to structured kernels. (#60177)
64ac428889 : [vulkan] Adaptive local work group size (#61170)
f324421d34 : [vulkan] Calculate a 4x4 output tile for each invocation in conv2d_pw (#60760)
a1b5025ecd : [vulkan] Convolution op cleanup (#60759)
cacab7e9d6 : [vulkan] Reduce submission rate to save CPU cycles (#60758)
554038c2a2 : [package] merge test_torchscript into test_package_script (#61807)
f02cfcc802 : ban PyTorchStreamWriter from writing the same file twice (#61805)
04043d681e : [package] fix storage serialization collision (#61806)
c30048fccf : add BFloat16 support for topk on CPU (#59547)
15210f3b82 : ignore and clear not ready errors (#61554)
e68c016871 : Regenerate libtorch workflow files that got lost in merge conflict (#61872)
0a6d88244b : Fix grammatical errors on the PyTorch Contribution Guide (#61818)
43c5dc40c5 : Port `signbit` to structured kernel (#57936)
44d3267103 : Remote whitespace introduced by #61438 (#61863)
26d17ddc9f : Exclude wrapper tensors from functorch in the native::resize_output fastpath (#61846)
f912889726 : Remove unnecessary Ubuntu version checks (#61738)
1b0a7f3887 : Always use fast gradcheck for LayerNorm 3d_no_affine_large_feature (#61848)
094abf5fd0 : [BE] Include a unit test for Save Operator with db_options
e389650f10 : Upgrade CPUFallback for loops (#61722)
04bd9d7577 : [DDP] Add API to get model parameters in hook (#61637)
66c8d21d7b : Update progress and error reporting in clang-tidy (#61672)
24a6eb3fda : ENH Adds tests and docs for 2d & 3d modules that already support no batch (#61262)
4f46943e3d : enable check trace when tracing a mkldnn model (#61241)
75b68def63 : fmin has been ported to the structured kernel, removing the old implementation (#60810)
b526080d89 : fmod: Port to structured (#60809)
b65ddef000 : for shared-memory handles, use an atomic counter, instead of potentially colliding random numbers (#60978)
ac5a40e068 : Fix benchmark's import module and remove its usage of tools.stats.scribe (#61808)
9c3346c8aa : reduce max_num_threads for complex double ops in reduce_kernel (#61438)
d565b3e9ea : Migrate libtorch to GHA (#61774)
3e3acf8a9a : Minor documentation fixes (#61785)
813b887dad : Fix indent (#61784)
a26a9f8b75 : zero initialize some members and other fixes (#59915)
0263865bfe : [Docs] Fix docs for torch.chunk (#61097)
552eab7935 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
593e8f41ca : [jit] Fixed a bug in the pass that replaces cat with the variadic op (#61795)
ff82394fc0 : Apply saved tensor hooks (#60975)
eefbff773b : ns for fx: add utils for l2 error and cosine similarity (#61380)
2a2bc1fc8a : ns for fx: add fqn to results, when present (#61377)
7449f49a4c : ns for fx: return results in execution order (#61360)
2b2928c5ca : ns for fx: improve error messages for graph matching (#61359)
ddf6d6cc14 : ns for fx: clean up override_qengines and copy TODO in tests (#61358)
cf6f5efb39 : ns for fx: test case for comparing fp32 vs fp32_prepared shadow (#61357)
4acd14da02 : ns for fx: preserve observers and fake_quants through passes (#61323)
a70505cdbd : ns for fx: support comparing fp32 vs fp32_prepared, except shadowed (#61129)
e117d94e21 : Wrapped create_type_hint in try/except block so that NormalizeArgs doesn't fail if create_type_hint fails (#61524)
59ca89dca8 : Fix scribe logs again (#61768)
311f1f275a : Update clang-tidy-linux64 (#61797)
4337650c91 : Fixing a bug in .to for qtensors so scale/zp move too (#61576)
cb6841b263 : Fix ConnectionError in download_mnist (#61789)
4e2fe9718d : flatten operation (resnet50) (#61265)
4479aa8838 : Remove all the code that constructs metadata.pkl file (#61760)
7ac8054d5a : Use better defaults in the clang-tidy wrapper script (#61651)
dc0d1612e1 : ENH Updates docs and tests for activation modules for no-batch dims (#61300)
6a085648d8 : add aten symbols for amin and amax (#61550)
4e94e84f65 : Type annotate `torch.nn.Module` ctor (#61334)
ee2f2ec9a5 : Revert D29687143: [3/N] Nnapi Backend Delegate Preprocess: Basic OSS Test
a07d3dc34c : Pin macos mkl conda version to fix the cmake build (#61773)
8ad584823f : add shortcircuit in isclose for zero tolerances (#61529)
612632556d : Fix `torch.median` crash on empty tensor (#61698)
3fd9dcf934 : Move non-libtorch scheduled linux CI to GHA (#61732)
287603f51c : Revert D29698486: [pytorch][PR] Remove torch._bmm and remove torch.bmm deterministic arg documentation
5798a00aa4 : [3/N] Nnapi Backend Delegate Preprocess: Basic OSS Test (#61594)
349f2f767c : Modernize to default constructor and nullptr in torch (#61735)
736bb26746 : use `rand` over `empty` in flaky test (#61710)
efeacc0779 : [Static Runtime] Fixed visibility of ProcessedNode class and a newly added function (#61729)
6fa80f7f9f : Refactor embedded_interpreter registration to be friendly to python case (#59991)
6349bde572 : [4/N] Nnapi backend delegation preprocess: List Tensors & Comment Updates (#61752)
328606699f : Remove torch._bmm and remove torch.bmm deterministic arg documentation (#61629)
28150fd0c8 : [static_runtime] Implement aten::linear (#61595)
3624d75864 : Revert D29703523: [pytorch][PR] Fix scribe logs
b963607d50 : [nnc] Insert alloc/free at global scope (#61725)
4c3d9cfe03 : [BE] Fix flaky test_ddp_model_diff_across_ranks test (#61546)
f1114364ad : [DDP] Enhance comm hook docs (#61677)
39ce29efe0 : Refactor metadata_map with flattened key/value pair (#61731)
00a7f55b6e : Apply for MOBILE_MODULE_STATS Logging (#61600)
fc710eecc0 : Apply for MOBILE_MODULE_LOAD_STATS Logging (#61480)
56d562e790 : [DDP] fix test_ddp_inference (#61666)
7e1f01d4c0 : Alias for `polygamma` (#59691)
f008e8d32d : Remove test_out, test_variant_consistency_eager skips for `addmv`; fixed before (#61579)
843c42ffd8 : [nnc] Refactored test macros and updated compress buffer tests to use them (#61716)
d01837081d : [nnc] Cleaned up compress buffer tests to use BufHandle instead of Buf (#61715)
eb5a56fb74 : Fix scribe logs (#61675)
127562a0ed : Fix some sign comparisons (#61618)
e6860ba508 : Fix some sign comparisons and a loop (#61663)
9d955abcdb : Fix test_reductions when no SciPy is installed (#61699)
968a01a94a : [special] migrate xlogy (#60641)
1ce3281a6d : Revert D29361872: [pytorch][PR] det_backward: more robust and with complex support
3a0801f960 : [skip ci] Fix "arugment" typos (#61459)
e5fcc903d6 : torch: Make __version__ better with comparisons (#61556)
0ea29a6ccb : Analysing time taken by gradgrad checks for Spectral Functions (#60435)
4ff121f58d : Add `complex64` dtype for OpInfo Reference testing (#61627)
e2c3049e2a : Delete stable-sort-only-works-on-cpu warning (#61685)
e098e9000b : Compare DDP static graph (C++ core) with legacy DDP forward and backward delay. (#61507)
7a3b05ea6d : Fix hardswish inplace op for strided tensor with skipped elements (#61622)
fce85480b9 : det_backward: more robust and with complex support (#58195)
bd360ebe6f : [nnc] Added a new API to distribute loop and all its parents (#61293)
76f097466e : [nnc] Added a new API to compress all buffers in a given statement (#61087)
2908d3eb45 : [nnc] Modified the semantics of reorder in using permutation (#61085)
7177509380 : Revert [DDP] Support not all outputs used in loss calculation (#61497)
25f9c35dd7 : Revert [DDP] Support for multiple backwards (#61401)
38ac9e69aa : Back out "[DDP] Disable reducer hooks from running outside of DDP backwards." (#61399)
a50a389ca6 : Revert D29701479: [pytorch][PR] Remove `_broadcast_object()` from `ZeroRedundancyOptimizer`
aa01a7a61c : Fix for get_buffer(): check buffers by name instead of value (#61429)
5407108533 : CopyBackward: Remove redundant src_device and unnecessary copy=True (#60025)
da667e2d5f : Add .github for CODEOWNERS (#61598)
8afb65b6c5 : changed launch bounds for upsample_linear1d fwd, bwd from 1024 to 512 (#61307)
ee5a97de11 : Register Saved Tensors hooks (#60663)
94965212e5 : [static runtime] Use at::allclose to test NNC sigmoid (#61566)
9b5d9b4049 : Remove `_broadcast_object()` from `ZeroRedundancyOptimizer` (#61539)
e3d5619ff0 : [pytorch][profiler] Fix division by 0 in computeFlops (#61676)
70e94bb1dd : Avoid redefining `__BYTE_ORDER` (#60346)
a9c3580080 : Grammatical update of tech docs (#61547)
5a5c7f563d : add trainer hook functions (#60785)
304c02ee44 : refactor ps benchmark (#60784)
7d2ea9a8f7 : Release GIL as much as possible in dist_autograd pybind. (#61593)
5ebc7c9f97 : Avoid holding GIL while calling retrieveContext. (#61588)
f2adbff36e : [Metal] Do not use read/write textures in concat shaders (#61074)
80bdfd64c5 : Skip Bfloat16 support when building for VSX (#61630)
43a2f7c26a : [TensorExpr] Do not fuse float16 values. (#61569)
ab27399566 : Make broadcast_object_list accept a device parameter. (#61305)
9b3cbeaf7d : [pruner] fix activation handles logic (#61592)
343cb276b0 : [pytorch] Add broadcasting support to add_relu kernel (#61584)
c23db9327a : Smart Decay for Adam - Caffe2 (#61548)
58adaaba60 : Enable C2 load rate limiter [2/n] (#61551)
57feb35474 : Refactor non-joined process computation (#61555)
03a79f43e3 : adding support for index_select on quantized tensors (#61406)
a07b08136f : [Static Runtime] Check unsupported up when enabling static runtime (#61613)
ac64a41e8a : [FX][docs] Add note about python set pitfall (#61597)
9ade039593 : fix test file not found issue (#61610)
2ab8126e36 : Add NewLib support (#60345)
8e6d8991b2 : [torch/elastic] Fix the agent store key prefix used by workers (#61590)
523d6fe27c : [PyTorch] Remove unnecessary std::string in Device.cpp (#61502)
72394aaf68 : Bump addressable from 2.7.0 to 2.8.0 in /ios/TestApp (#61573)
0751a41ab1 : [quant] Input-Weight Equalization - ConvReLU support (#61350)
b3e4dab45a : [quant] Input-Weight Equalization - Conv convert support (#61287)
77d36b657a : [quant] Input-Weight Equalization - Conv prepare support (#61286)
ce9cedd119 : [quant] Input-Weight Equalization - Conv observer support (#61285)
30e48bbeae : Add neg bit (#56058)
60382de455 : [torch] Set `nproc_per_node` to 1 (#61552)
437e7d9fc9 : codegen_backend_module() now passes correct type designators to isinstance in the generated script
b42cc19c88 : Fix broken assertion error test in NNAPI convertor (#61586)
2ade4d2a92 : .github: Ensure clean workspaces before checkout (#61565)
d5204064dc : [BE] Fix flaky ProcessGroupGloo tests (#61396)
3e5d2b539d : Replace deprecated comment with C10_DEPRECATED in linalg.h (#60374)
9679fa7f30 : Update cpp_extension.py (#61484)
0afbb9e81e : `PYTHON_LIBRARY` may be set to empty or NOTFOUND. (#61230)
ac6ec0efa1 : [ROCM] fix bug in #60313 (#61073)
2e49c5dc37 : Move GetArgumentNamesModule registration to InterpreterManager() (#61549)
5144381b1d : [pytorch][JIT] Widen exception caught by ScriptList casting (#61520)
94840969e4 : SGX can not read from /dev/urandom (#60368)
8a2c7d902f : [static runtime] Add DCHECK to ensure that outputs do not overlap with immutable inputs (#61301)
4ef640d6f6 : Sort imports of test_datapipe.py (#61312)
fd13e925ec : Adding backward compatibility for sharding support in old DataLoader (#61237)
d3cb065b2f : Implement usage of `is_shardable` and `apply_sharding` (#61236)
4d842d909b : Revert FC workaround for ReflectionPad3d (#61308)
2fd37a830e : Revert D29642893: .github: Add force_on_cpu tests for windows
7fdce39a4b : Revert D29642891: .circleci: Remove force_on_cpu jobs from circleci
58df01c3b8 : clarify default value of requires_grad for tensors (#61038)
5897a60480 : warn about SVD outputs not supporting backprop (#61037)
65ab861ec6 : fix mm not correctly report TORCH_CHECK failure issue (#61394)
68f9819df4 : Typo fix (#41121)
255a324258 : add nesting_level as attribute to pickle for map datapipe (#61534)
5144cc029e : Bump docker image tag for clang-tidy (#61545)
a5a10fe353 : Move all downloading logic out of common_utils.py (#61479)
2aedd17661 : .circleci: Remove force_on_cpu jobs from circleci (#61473)
a52de0dfec : .github: Add force_on_cpu tests for windows (#61472)
51d18369c3 : [1/N] Nnapi backend delegation preprocess (#61499)
3faf6a715d : [special] migrate log_softmax (#60512)
f2857883c4 : Add DataPipes Graph Functions (#61235)
25a705610f : ENH Adds support for no-batch dim in AdaptiveAvgPool1d (#61264)
583b045fc3 : Make .contiguous(memory_format) call .clone(memory_format) (#61456)
5a20c56ebc : [static runtime] Remove hasOperation() check (#61496)
99959fe3f5 : [DataLoader] Adding demux and mux DataPipe-s (#61234)
d46689a201 : OpInfo reference tests for `add` and `sub` (#61169)
c18017190b : Relax some linalg test tolerances (#61101)
bacf8ecbd1 : Make pin_memory/is_pinned use BackendSelect (#60547)
7136a62b56 : Add `expecttest` to CONTRIBUTING.md (#61163)
8754238410 : torch._utils.ExceptionWrapper: fix for Exceptions with multiple args (#58131)
93d98ecef7 : update the pytorch-gdb example so that it works on current master (#61175)
0de35fe039 : fix return local reference (#59913)
d4549ba5dc : Add VS_VERSION to Circle (#61532)
00c4897c51 : use make_unique (#61272)
ac086ca15b : Update version.txt file path (#61177)
09679af260 : Delete dead code in Tensor::to implementation (#61435)
60086ab39b : Remove export PYTHONPATH hacks (#61487)
5c1505076b : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
666dff381d : add AdaptiveAvgPooling2D (#61239)
93ef40bd83 : add linear operation and modify one of the tests (#61238)
292ee65261 : add maxpool2D, add more tests, handle integer parameters for maxpool2D (#61188)
7a15576a65 : [quant] update FakeQuant modules to use tensor qparams (#61318)
99848c7269 : [quant] Add tensor_qparam variant to fake_quantize_per_tensor (#61317)
57676ce128 : Migrate multi_margin_loss to ATen (CUDA) (#61426)
5a17cb6f44 : Add channels-last support for bilinear and nearest 2d interpolation on CUDA (#56322)
df00c636d2 : [Model Averaging] Skip model averaging for the first K steps (#61207)
0f6876d721 : [Model Averaging] Create a post-localSGD communication hook (#61206)
a46d4212bf : Allow dims=0 in torch.tensordot call (#61331)
7d7b7abb3b : [Static Runtime] Separate function for getting always_alive values (#61506)
7fdc5f9e08 : model_dump: Fix non-counting and double-counting bugs in tensor memory (#60702)
158d351517 : model_dump: Add webdriver test (#60701)
cc78c463c0 : model_dump: Render constants.pkl similar to data.pkl (#60700)
e292f34def : model_dump: Make stdout argument for main a keyword-only argument (#60699)
2942e9aa80 : model_dump: update maintainer comment (#60698)
f5c10fdbd3 : Allow for heterogenous List and Dict values + Improve container typing algorithm (#57137)
ccd0977060 : [Static Runtime] Support prim::GetAttr/SetAttr (#61505)
f291b1899f : Revert D27978269: Smart Decay for Adam - Caffe2
8bcf24b37a : [TCPStore] enhance connect timeout error message (#61390)
336970c03e : Add note on torch.distributed backends on ROCm (#58975)
73b86c9f9c : Add getMethod to PytorchPredictorContainer (#61052)
677313b670 : ReLU (#61150)
a556c1c4dc : [profiler] Update Kineto submodule (ci-all) (#61478)
06166a13e0 : Remove VS install step unless necessary from GHA Windows workflows (#60791)
9b2b45919a : Revert D29639797: [package] error if we try to mock a module in 3.6
aaa1e07609 : Smart Decay for Adam - Caffe2 (#61488)
b52909d861 : [TensorExpr] Add python bindings for ArgValue class and TensorExprKernel constructor accepting custom lowerings. (#61385)
dec5aa2260 : [JIT] clean up (#60390)
54ea7d33ba : [package] error if we try to mock a module in 3.6 (#61469)
a3670ba377 : Add option to specify custom NNAPI serializer (#61025)
cbb6ab6d88 : [package] ignore dunder import errors (#61148)
12772c8dd8 : [package] PackageExporter visualization methods (#61147)
b5f0576278 : [package] Modify Digraph to track predecessors (#61146)
ae65f63971 : Make nnapi flatten converter accept flex inputs (#61024)
028e438d6c : [torchelastic] Make sure `rdzv_configs[timeout]` is not getting overwritten (#61471)
1f4bba77b6 : [fx] fix subgraph API call_module warning about no owning module (#61463)
76c0f223d3 : Make nnapi cat converter accept flex inputs
9e81d3d869 : Make NNAPI linear converter accept flex inputs (#61022)
35b950ea98 : [package] properly handle case where we are re-packaging mocked modules (#61434)
4f4beb8286 : Add Model Parallel Support to ZeRO (#61370)
fb7ed24f6e : [PyTorch] Try using ExclusivelyOwned in LinearAlgebra (#59420)
a5c5b56cf5 : gen ExclusivelyOwned in structured kernels (#59827)
711ded688d : Add a script to codemod max_tokens_total pragmas to C/C++ files (#61369)
3b004aed3a : Enable local clang-tidy lint (#61121)
8296cb37c7 : [torchelastic] Set the correct maximum border width
6bb33d93ab : disable the format library in C10 (#60052)
b01329b164 : [xplat] Update XNNPACK to github revision 79cd5f9 (#61400)
86463a8d02 : Save some little memory in `default_collate` (#61424)
c830db0265 : Raise error in CMake for CUDA <9.2 (#61462)
b5c464d5ef : Make Future store weak pointers to storages (#60943)
962c9fbf85 : [pruner] add handles for hooks (#61425)
682ebc1dd1 : remove UsageError in favor of ValueError (#61031)
5401dd2f9a : change language from array to tensor (#60639)
09c90b3589 : relax type equality constraint (#60638)
24a8915534 : Relax use-count check to allow for 0 (#61414)
9e533a62f6 : Make conv2d nnapi converter accept flexible batch (#61021)
64d61901eb : [ROCm] Skip test_masked_scatter_large_tensor_cuda (#61313)
ee2dd35ef4 : Resolving native dependency and try_run for cross compile (#59764)
8bd3e52e00 : Add conv2d transpose NNAPI converter (#59529)
c19adfff54 : [DataLoader] Introduce ConcatMapDataPipe functional datapipe (#61010)
2bbcc80de3 : Enable disabling test cases on specific platforms (#61427)
e9a40de1af : Add other Linux GPU auxiliary test jobs (#61055)
c966ce6933 : Fix several test_ops cuda dtypes tests (#60922)
5e9bcf9101 : fix: support removing hook in the hook (#61250)
179249084b : Refactor DDP join() API, adding hooks (#60757)
8423ab4f99 : Fix `CosineAnnealingWarmRestart` annotation (#61106)
9b908ab0d0 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
819bac63ff : [Codemod][FBSourceBlackLinter] Daily `arc lint --take BLACK`
14f63763c1 : Avoid using mp.Manager to report #GPUs needed in dist tests (#61409)
905cd6733e : [DDP Comm Hook] Re-enable the optimization of fusing copy and division when no comm hook is specified (#61379)
8f61d94610 : Fix a variable initialization (#60896)
15010bf223 : Make some downcast issues explicit (#60412)
6a3170dba1 : [package] minor cleanups to internal APIs (#61428)
d52ebf2b1b : conv2d (#61093)
5fbc853c5f : [package] PackageExporter remove verbose mode (#61145)
a74516d699 : [static runtime] implement aten::log (#61393)
06dfaadfc6 : update internal function names that apply to both cpu and cuda (#59701)
8726f08e15 : [ONNX] Update documentation (#58712) (#60249)
00b0d826a1 : [ONNX] shape type inference fixes for control flow (#59319) (#60248)
81f95cce59 : [ONNX] Extend chunk for dynamic chunk values (#59644) (#60247)
d9dc94406f : [ONNX] Add linspace symbolic (#58854) (#60246)
4ccfa3ffeb : [ONNX] Fix sum export with attribute keepdims (#59316) (#60245)
95a7f3ccfe : [ONNX] Fix shape inference for large model (#59320) (#60244)
9636c077c3 : [ONNX] Handle onnx::Size in ComputeConstant folding (#59122) (#60243)
38c48e42c6 : [Reland][BE] add test wall time report (#61389)
7481c6fc02 : Bump googletest version to v1.11.0 (#61395)
13658b10bb : [torch] Various improvements to `torch.distributed.launch` and `torch.distributed.run` (#61294)
10f372601d : Support RRefs that contain torch.cuda.Event (#61354)
8bc2ba3fe3 : detect missing kernels from external backends in codegen (#60737)
7318747a3b : move all external kernels into a class for better compiler error messages (#59839)
86eac5b456 : [caffe2] Check for number of created subnets and optionally throw an error (#57366)
0fc110cdd1 : [CUDA graphs] Don't sync between replays for cuda driver version 11.4+ (#61063)
80797d03e0 : Simplify lambda syntax in SegmentReduce.cpp (#61416)
cdc027679b : Add compare_set in distributed docs (#61351)
f01a4e3b02 : .github: Ensure build-results per job is unique (#61005)
4beb5f9ad6 : [DDP Comm Hook] Fix some comments (#61376)
dfe25069a8 : [ROCm] Skip test_*_stress_cuda test for ROCm (#60490)
9310f6bac1 : Use our own statically stored vs_buildtools.exe (#61372)
ac5b910600 : clang-tidy patch (#60714)
074c776011 : Force mypy colors in CI (#61391)
c76eba650a : [bootcamp][pytorch][WIP] Support embedding_bag_byte_rowwise_offsets in cuda (#61075)
9ef1c64907 : [PyTorch][Edge] Tests for QuantizationFx API on lite modules (#60476)
179b3ab88c : [cuDNN] Enable cudnn_batchnorm_spatial_persistent for BatchNorm3d channels_last_3d (#59129)
0222291544 : Fix docs for ShardMetadata. (#61388)
7011513d23 : Enable sparse_csr.to_dense() for bool, float16, bfloat16 and complex (#60657)
5054cb8934 : fix torch.cat bug with boxed CPUFallback (#60993)
141bfbef86 : [iOS GPU] Add tanh and clamp to support GAN (#61383)
4937d9fd6f : Fix Dispatching not considering List[Optional[Tensor]] for dispatch (#60787)
426c42ba45 : [package] ensure we don't write files twice to the archive. (#61371)
1d1d5acbb0 : [RPC] Ensure _wait_all_workers doesn't swallow exception. (#61094)
7b6ddb6793 : [nnapi] add log_softmax (#61378)
eb82a88d85 : Add a type for test fixture world_size (#61363)
d51b437b74 : Cuda quantized tensors, support for quantize per channel (#58245)
b1dc9c3946 : Skip _cudnn_rnn_backward in codegen check (#61386)
b25c65b4f3 : Revert D29589020: [pytorch][PR] adding a build_start_time_epoch to build meta info
9dd1824741 : Fix dispatch keys for eigh, lu_solve (#60945)
fb00194030 : Fix typo in common_utils.py (#61365)
6107cf3750 : Add --jobs 0 for git submodule update (#61311)
d33066ab3f : adding a build_start_time_epoch to build meta info (#61322)
429436edbd : Avoid complex-to-real cast warning in CopyBackward (#60021)
10b2a24508 : Migrate log_sigmoid (forward and backward) to ATen (CUDA) (#60881)
f86460a352 : Add coverage files to .gitignore (#61144)
5e83fefdf8 : [sparsity] sparsifier `step` tests (#60107)
8881b9d852 : [sparsity] sparsifier `convert` tests (#60105)
ec200a60bd : [sparsity] sparsifier `prepare` tests (#60042)
21ad978d4f : [sparsity] rename `sparsity_pattern` to `sparse_block_shape` (#59898)
aa6a8a6d21 : [nnc] Add LoopNest::unsafe_fuseLoops to let users apply fusion on stmts that may violate our correctness checks (#60601)
8fd90f7cfd : Implementing transpose for PackedTensorAccessor (#61114)
39a76fe73c : BatchNorm2D (#61012)
357c4d9cc4 : Add a test case for findDanglingImpls (#61104)
4d9fd8958b : Support `__rand__`, `__ror__` and `__rxor__` (#59240)
9547e57643 : Create SECURITY.md (#61356)
f84a441718 : [torch][segment_reduce] Update default values when initial value is not set (#61266)
a78ad5dc4c : [torch][segment_reduce] Add support for int lengths as well (#61141)
423523d8bb : Alias for logsumexp to special namespace (#58838)
c03f99f3ef : Remove pyproject.toml (#61367)
994ce7dbd9 : Cuda quantized tensors, support for quantize per tensor (#59700)
baa518e2f6 : Add Int32 support for NNAPI (#59365)
cf285d8eea : Add aten::slice NNAPI converter (#59364)
d26372794a : Add aten::detach NNAPI converter (#58543)
0be228dd5f : Add aten::flatten NNAPI converter (#60885)
b297f65b66 : Add aten::div NNAPI converter (#58541)
eab18a9a40 : Add aten::to NNAPI converter (#58540)
14d604a13e : Add aten::softmax NNAPI converter (#58539)
45ce26c397 : Port `isposinf` & `isneginf` kernel to structured kernels (#60633)
c2b0af2560 : [static runtime] Implement aten::sign (#61154)
1262b2c4c6 : fix `torch.futures` docstring examples (#61029)
376dc500a9 : Minor bug fix in the warning message (#61127)
90241d254f : Automated submodule update: FBGEMM (#59968)
29ecb9f90b : Don't check stride by default (#60637)
e2a3f4b560 : Use maximum of tolerances in case of mismatching dtypes (#60636)
5f18ba7075 : upcast to most precise dtype within their category before the comparison (#60536)
5ac87cde30 : tests for diagnostics in callable `msg` in `torch.testing.assert_close` (#60254)
76d9e680d7 : update docstring examples of `torch.testing.assert_close` (#60163)
9979289037 : Improve error messages of `torch.testing.assert_close` in case of mismatching values (#60091)
e1338016dd : cuSOLVER path for LU factorization in CUDA. (#56887)
4a544df00d : Implement and benchmark a torch.optim.multi_tensor.adagrad implementation (#59155)
8bec478a9e : MaxPool2d: use channels_last format for both output and indice when input is channels_last (#61245)
66158a6e90 : Enable AutogradXPU DispatchKey for Intel heterogeneous computation platform. (#61105)
a69e947ffd : avg_pool3d_backward: Port to structured (#59084)
e4c450a4e8 : The dispatch order for custom function (#60251)
a6fea03a8a : Skip codegen checks for `dequantize_self`, `lu_unpack`, `_cudnn_rnn`, and `.*conv.*_backward.*` (#61139)
6f1455440b : task 3: typecheck (#60805)
9813b9bc0d : Fix mypy.ini (#61333)
f0316ec0b6 : Revert D24068202: [pytorch][PR] Add typing return value to init in nn.Module
98119bfce9 : task 2: ast rewrite (#60622)
0dc40474fe : Migrate glu from the THC to ATen (CUDA) (#61153)
7a4ffbd1da : [FX] s/IS_SANDCASTLE/IS_FBCODE/ in tests (#61304)
506397a809 : Add typing return value to init in nn.Module (#45654)
9f3167ebdf : task 1: annotate (#60621)
a1ad28da10 : Refactor clang_tidy.py (#61119)
81e36d02a6 : Improve error message on invalid values to Distribution methods (#61056)
45cc207a88 : Fix breakpad build + add test canary (#60990)
b6024b9d12 : More loop transforms 2
c74c0c5718 : add thrust/host_vector.h header for cuda 11.4 build (#61004)
5da507b57b : Add bazel actions workflow (#61039)
fac744e116 : Foreach Binary Test Refactor (#59907)
5503a4ac6e : DOC Improves shape documentation for *Flatten (#60980)
95cada8810 : Make breakpad dependencies private (#61183)
635d864b26 : Fix modernize-use-equals-default nolint failures in torch/csrcs (#61142)
718db968b8 : move CI related functions out of run_test.py (#61124)
864dcbb2cc : Set sccache bucket on test runs to save some run minutes (#61140)
05c1e5b655 : [sparsity] Lambda Scheduler (#59771)
37ebf2e3cd : [sparsity] Base sparsity level scheduler class (#59770)
ed63fb5225 : Fix some more loops (#60895)
43fb39c3eb : [DDP] Make uneven inputs work with comm. hook (#61020)
94b730681f : [DDP] Refactor uneven inputs to take GradBucket (#61019)
512448a425 : CTCLoss: Remove dispatching in parallel region (#60599)
d42f1751d4 : [sparsity] WeightNormSparsifier (#58955)
7ab2729481 : [sparsity][refactor] Import factoring out (#58707)
973e9266ff : [sparsity] Sparsifier class (#58704)
80cab10534 : [sparsity] Sparsity parametrization (#58705)
5d34b7955b : [sparsity][refactor] Changing linear row/col control (#60850)
509b1ef9d5 : [sparsity] Add sparsity tests to run_test.py (#60887)
54673fc944 : Sparse: Remove dispatch in parallel region (#60598)
11b722c063 : [DDP] Refactor hook running logic (#61018)
b21df03f3b : [DDP] Remove SPMD from get_bucket_tensors (#61017)
4a2e8b53bb : [JIT] Add `torch._C.ScriptList` (#52832)
6e9e30cc1d : Ignore notebooks when checking for newlines (#61156)
a4d86e0d53 : [quant][fx][perf] improve runtime of prepare step for large models (#61132)
277b310edb : [DataLoader] Add notebook with DataPipes API example (#60680)
ca2702a776 : [pruner] Make bias hook stateless (#61077)
0a7875231b : [pruner] Add bias support (#60970)
87dbdef65d : MAINT Adds test and docs for Linear with no batch dims (#60992)
369802a504 : Add aten::avgpool2d NNAPI converter (#58538)
19b6ee4d4e : model_dump working with delegate models (#61043)
374278f431 : Improved sparse CSR tensor sampling method (#60283)
6ecc1a4c4f : Make pytorch clang-tidy clean (#60649)
a0a9ea6598 : Fix documentation preview instructions (#61080)
60509f8921 : Update DDP documentation to mention outputs not used in loss is supported (#60275)
0128eb9a85 : Fix TSAN issue in distributed tests (#59238)
5b44d817fb : Expose raw saved tensors for codegen functions (#60565)
3f0f860a1c : Condense JIT/Quantization triage into one workflow (#61130)
6f92f10c94 : Use a leaky singleton for CublasHandlePool. (#60987)
d2fef350f2 : add embedding bag skeleton take 2 (#61126)
e5ae0e652d : [jit] Allow instance overrides of ignored methods (#61076)
ccfdb30644 : Revert D29413019: [torch] Various improvements to `torch.distributed.launch` and `torch.distributed.run`
48bfc0e51c : [DataLoader] Add Example Only `fork` DataPipe (#60679)
62b2dc2059 : [DataLoader] Decorate ZipDataPipe as `zip` (#60678)
8e21ff91e2 : [DataLoader] Add simple `groupby` DataPipe (#60675)
cb7d813275 : Revert D28836794: SumKernel (BFloat16): use float as accumulation type
11dca2e5f3 : Fix some integer comparisons (#60894)
7017dc101f : Revert D29313058: add an embedding bag skeleton operators
d6521c2249 : [pyper][emb][quantization] Support emb trained in FP16 (#60736)
d42aa176e4 : Bump docker image tag for clang-tidy (#61115)
46595a9623 : [Static Runtime] Add gflag to disable nnc and caffe2 math library (#61090)
c1499a9933 : Enable jit tracing to parametrization and add jit tests (#60969)
4e181dfc35 : [torch] Various improvements to `torch.distributed.launch` and `torch.distributed.run` (#60925)
ae21357ada : add an embedding bag skeleton operators (#60491)
db1dd9e7e0 : add support for quantized tensors in `torch.testing.assert_close` (#58926)
06fc637b41 : Check native_function's outputs' TensorImpl and StorageImpl (#60286)
03b5a225a7 : Test parametrization for instantiated device-specific tests (#60233)
6643df2680 : [jit] Use computed loop to dispatch to next instruction in interpreter. (#60211)
357a21bc92 : Fix numerical issue of rowwise normalization in Caffe2 and internal tests. (#60880)
0824b919ec : [BE] move general script out of .circleci/ into tools/ (#60973)
4036820506 : Add PocketFFT support (#60976)
2d0c6e60a7 : going back to use packaging.version.parse instead (#61053)
a2ad84afbb : Send test reports to S3 (#61071)
812ed47caa : [Static runtime] Add unit tests to ops bmm and addmm (#61000)
4ff81ab112 : escape backslash in stack trace in Windows to slash (#60842)
6c1c1111de : [JIT] Add reference semantics to TorchScript classes (#44324)
aa728dc335 : Fix fx patch module name (#61062)
dabadd7e20 : [quant] Added reset_min_max_vals() function to observers (#60883)
1a0195db49 : [quant] Input-Weight Equalization - support for LinearReLU layers (#60653)
546102e161 : Fix overflow in quantize_val_arm (#60079)
cef0851223 : Make torch.utils.benchmark numpy free (#60564)
d1a4c9e682 : [ROCm] allow user to override PYTORCH_ROCM_ARCH (#60602)
14cc234a8a : Fix some comparison warnings (#60875)
74692f3ada : Loop transformation (#60874)
a8b56ea58b : Remove another for-loop in SoftMax (#60873)
850ff82edc : Remove for-loop for getting number of elements in favour of abstraction (#60872)
95e77e0af2 : [Delegate] A more specific prefix for lowered module name. (#61007)
f32f85e6da : Implemented torch.corrcoef (#60420)
d5be67a338 : Expose findDanglingImpls to Python (#60827)
3cf267bfa6 : Embedding: Remove dispatch in parallel region (#60597)
4f5c68857f : SumKernel (BFloat16): use float as accumulation type (#55217)
4d5edef8d4 : Python composite module execution unit tests on delegation of backend_with_compiler_demo (#60801)
3957ed41a9 : [DDP] Disable reducer hooks from running outside of DDP backwards. (#60921)
5a4282d06b : fix typo in binary_build_script (#61016)
d44515c418 : Fix lint (#61058)
a25e6370e5 : Add IMethod interface
dace860008 : Migrate pytorch-linux-bionic-py3.8-gcc9-coverage to GHA (#61050)
b4496df7d3 : mkl_scsrmm needs to be disabled when MKL is not used (#60051)
5644c31ec0 : Move windows periodic jobs to GHA (#61003)
9b5e1e0734 : [DataLoader] Make `batch` DataPipe sensitive to unbatch_level argument (#60672)
66de50cc11 : [DataLoader] Make `shuffle` DataPipe sensitive to unbatch_level argument (#60671)
a652398465 : [DataLoader] Rename transform DataPipe to legacy_transform (#60670)
abb4ed7412 : Move clang-format to lint.yml (#60918)
0b8a7daa2a : Enable multigpu_test in GHA (#60221)
5576c7bdd1 : ns for fx: initial support for int8 shadows fp32 (#60419)
a5e2ea4345 : Add noop register hook (#60685)
1fd65967e5 : Revert D29312809: add quantized_resize and dequantize for some cuda backends
bfe03120ee : [PyPer] Fix schema of fb::equally_split (#60852)
af5a0df1d0 : Prefer linalg::qr over qr in the C++ API (#60529)
b39770c461 : Fix degenerate shape behavior for ord=+/-2 (#60273)
10fc58620e : [PyTorch][NASProfiler] Add moduleHierarchy Python API to print out hierarchical information about a Node (#60384)
44b3dc4eac : resolve conjugate bit in `torch.testing.assert_close` (#60522)
c4cc26f26a : add quantized_resize and dequantize for some cuda backends (#60489)
4adc5eb6c5 : [Caffe2][Testing] Check for equality first in assertTensorEqualsWithType<float> (#61006)
287c0ab170 : [FX] Add requires_grad to TensorMetadata (#60972)
ce232e7847 : [ROCM] enable fft tests (#60313)
e2b42c6f52 : [ROCm] Update the magma build to new commit (#60900)
93772792e3 : [nnc] Get rid of fuser trigger counters (#57334)
c4f718cb72 : [nnc] Serialize initialization of LLVM targets (#60996)
5bc28c897e : fixed launch bounds for gamma_cuda_kernel (#60393)
b3ec92cf66 : BatchNorm: Remove dispatch in parallel region (#60596)
28dc02fe9f : Accumulate 16-bit float sums in 32-bit accumulators (#60387)
f54290fd72 : Expose raw saved tensors for custom functions (#60551)
a469298707 : Free space in windows libtorch build (#60849)
af66356d47 : [skip-ci] Bump docker image tag (#60988)
8780f8fc3c : Remove extraneous process group agent test code (#60903)
d3de37609f : Support fused_dropout with XPU backend (#60231)
b4a4a8434d : [1/n]support double for Caffe2 ScatterWeightedSum (#60402)
5f51406a51 : Modify error message when atol=0 and rtol=0 (#60897)
6d952dbaf0 : [nnc] Fixed checking for loop carried dependence while fusing 2D reduction loops (#60609)
b099f5429c : Port `argmin` kernel to structured kernels. (#60364)
3e2233841f : Port `argmax` to structured kernels. (#60363)
df47fa5bdc : Using meta checks for unary `torch.all` and `torch.any`. (#60362)
0dd90cceaf : [package] track storages across lifetime of PackageExporter (#59735)
eb2f535689 : c10::Storage python to cpp converter and typecast (#59734)
93eba7471b : Remove fetch in clang-tidy setup (#60974)
91c076eadc : Add TorchVitals for DataLoader (#60959)
652d911f81 : add BFloat16 support for LayerNorm CPU (#55210)
89d0e31fe5 : [torch][repeat_interleave] Remove stream sync when output_size is given for scalar repeats (#60965)
086f6e557e : Fix divide by zero error in the ASAN test (#60723)
ec9c03c234 : Implemented torch.cov (#58311)
8f658d537d : Improved JIT support for torch.einsum (#59265)
d46eb77b04 : Improve CUDA extension building error/warning messages (#59665)
12b63f4046 : [DDP] Fix case where new tensors with no grad_fn are returned in DDP forward. (#60882)
1db2d9b0a8 : [ProcessGroupNCCL] change WARNING to INFO (#60901)
150c828803 : Add lint rule to keep collect_env.py python2 compliant (#60946)
808d0e3353 : [caffe2] update make_mnist_db and make_image_db to move strings into DB::Put() (#60919)
fab1b6cc70 : .github: Increase test shards for linux GPU (#60914)
5fbca0d281 : Use cpu docker image for cpu builds (#60920)
10b929bbfb : Make Jeff and Jithun .circleci/docker code owners (#60958)
53489bc385 : fix for #60319 , forcing to use fork as start method in test/test_dat… (#60868)
4310044fec : update `unsafe` flag documentation (#60899)
5b6818f08a : [Model Averaging] Enforce a synchronization before allreduce parameters (#60891)
fbd4cb1cd7 : Fix error logging in common_distributed. (#60917)
d71e7ae740 : [PyTorch][vulkan] Unify vtensor_from_vulkan to always return non-const ref (#59996)
7eef78597e : fixed launch bounds for grid sampler 3d (#60385)
d36ce61a5e : use explicitly non-returning GPU atomics (#60607)
d62c3ea354 : [skip ci] Add GitHub Actions label for g3.16xlarge (#60888)
d5a44f9f12 : Use expecttest from PyPI (#60658)
ddb1f293b6 : Fix the NNC-disabled path in static runtime for perf comparisons
9b94aa5356 : [quant][fx][fix] Fused modules with object_type in qconfig (#60779)
cadce14e02 : don't return in __init__ functions (#60830)
9af8aecd00 : [caffe2/libtorch] Remove already-owned source
eeea696c02 : [caffe2] Fix include of corresponding header
c3977bf3da : [caffe2/utils] Add some fine-grained rules to avoid package boundary violations
03de807d81 : [caffe2/utils] Add explicit rule to avoid package boundary violation (#60677)
41c380e649 : Enable bionic-cuda10.2-cudnn7-py3.9-gcc7 in GHA (#60204)
971cdafd15 : Upgrade benchmark to v1.5.5 (#60750)
007ba37c9a : [pruning] Speedup activation reconstruction (#60683)
f302e0c781 : [pruning] Additional pruning tests (#60681)
8d4a6ef962 : [pruning] Activation reconstruction (#60292)
965dad25a5 : Allow resizing of parametrized tensors (#60418)
956faea585 : [fix] cauchy sampling inf on cuda (#60186)
70e205a2ab : Use the new URL for docs preview link (#60893)
f5e5ced202 : Enable parallel clang-tidy on ec2 runner (#60870)
c8fb785857 : Print stdout and stderr to console on parallel runs (#60869)
a8057e7ef1 : docs: add `permute` in torch docs (#60821)
d7c58e5a04 : [vulkan] Implement tanh activation function (#60695)
da70dd199d : [quant] Input-Weight Equalization - tests (#60378)
dfb9c0bae8 : [quant] Input-Weight Equalization - support for connected F.linear layer (#60272)
ddf2ce03bb : [quant] Input-Weight Equalization - support for connected linear layers (#60034)
7917318917 : [quant] Input-Weight Equalization - support for F.linear layers (#59964)
387289d4a5 : support non-contiguous tensor in bilinear (#38409)
f118d20bea : Make requires grad check run only when grad mode is enabled (#60740)
3ad3f20bff : Add an optional Device parameter to pin_memory/is_pinned that does nothing (#60201)
85af24f52b : Remove some unnecessary functions from CUDAHooks (#59655)
b52849b589 : Port silu_backward to structured (#58661)
66f01db36c : Make some comparisons explicit (#60505)
f5341bd5e6 : Enhance ProcessGroupWrapper with additional checks + refactor (#60237)
aaea81e3fb : [torch/distributed] remove outdated FutureWarning in distributed/elastic/util/store.py (#60807)
94cdbbf48d : Paren-matching kernel launch check without external deps (#60778)
88b0518a83 : Python error unit tests on delegation of backend_with_compiler_demo (#60689)
e63db3ae46 : ENH Adds byte support for nll_loss (CUDA) (#60650)
7f6b2bc2d0 : Add -I<directory> option to tools/linter/clang_tidy.py (#60745)
5b118a7f23 : Don't reference reflection_pad3d in functional.py (#60837)
f0e972a481 : To add Nesterov Adam algorithm for multi-tensor optimizers API (#59165)
3bfe15085d : [TensorExpr] Add a mechanism to register custom TS->NNC lowerings in TensorExprKernel. (#60804)
5563f4bda0 : To add Rectified Adam algorithm for multi-tensor optimizers API (#59161)
0fbc471d10 : Support default values on NamedTuple fields (#54682)
6b53792f18 : fix cuda mem leak check not properly run on master_builds (#60742)
e3abccec8a : [Static Runtime] Remove output type constraints (#60669)
dae25c2002 : Fix missing spaces in error of constant_pad_nd (#60729)
9a08e87d8b : Modernize for-loops in aten (#59598)
7e3a694b23 : supports non-leaf inputs for autograd.backward() function (#60521)
056a8e0d5c : Remove un-used parameter in _trilinear backward (#60673)
f262217101 : [Model Averaging] Move step out of model averaging API (#60632)
c5f0692b6e : Sparse CSR: increase dtype test coverage (#60656)
dd045ab540 : add channels last for AdaptiveMaxPool2d (#48920)
367aff91d8 : Fix missing #pragma once in jit/method.h
8b6487c650 : Add CUDA Vital (#58059)
9134b0e42f : add a boxed CPU fallback kernel (#58065)
ad69e2fd11 : [torch] Module fix on the support of LazyModule on bug #60132 (#60517)
cab926b2c0 : faster generate_square_subsequent_mask in nn.Transformer (#60631)
7585783b8d : Remove `Optional[None]` annotations (#60704)
5ed7400b75 : Fix doc preview source directory (#60792)
7b933cd9ea : configurable pre/post LayerNorm in nn.Transformer (#60593)
e13a9587b4 : Revert "Revert D29135358: [quant] Input-Weight Equaliaztion - convert modifications" (#60646)
7188d84ccf : [Tools] Update path in clang_format_utils after #60473 (#60782)
394f60b0fc : [caffe2] update make_cifar_db to move the string into DB::Put() (#60692)
e1bd4963e2 : To introduce Functional API for multi-tensor (#60735)
8f16a38067 : Add missing kernel checks (#60635)
dfc8247d33 : Faster cumsum and cumprod backwards (#60642)
d3bec9f4d2 : Use S3 for documentation previews (#60711)
aacc722aec : Dispatch to Python via __torch_dispatch__ (#59760)
a53d7f8f7c : Remove test linalg test skips from MAGMA integration (#58232)
8216da1f23 : Use python3.6 compatible APIs in clang_tidy.py (#60659)
6322f66878 : Add python version and cuda-specific folder to store extensions (#60592)
a404cc9a7b : CUDA `addcmul` and `addcdiv` do math in float for 16 bits I/O (#60715)
0be65cd52a : [c10d] Fix test_collective_hang flakiness (#60662)
474bdaf54d : Add --print-include-paths option to tools/linter/clang_tidy.py (#60744)
608f12b818 : Fix --dry-run option in tools/linter/clang_tidy.py (#60744)
3a838e4ce3 : Parametrizations depending on several inputs (#60530)
8cba365378 : Fix incorrect doc about the dtype for `torch.randint` described in issue #56347 (#60507)
d8c3d555e4 : [Delegate] Support composite of lowered sub modules of the same backend (#59921)
7c2938bf67 : To refactor Sparse Adam algorithm for functional form (#59171)
963c983366 : Improve numerical stability of LayerNorm (#59987)
5b1f5c8f17 : When creating a single parition skip the output nodes, but process possible nodes after it. (#60370)
2b51a8a935 : [BackwardCompatibility] Remove aten::to from allow_list (#60147)
3ca28656fa : [special] erfcx cuda support (#60519)
46d27a53fe : cuda rpc backward sparse tensor fix (#59609)
561132f902 : Revert D29330585: [pytorch][PR] add BFloat16 support for arange on CPU
d63c236fb3 : Introduce quantized convolution serialization format 3 (#60241)
42c8439b6e : TH: Clean up dead code (#60655)
4a7d281119 : Migrate THAllocator to ATen (#60325)
d586248544 : Migrate THStorage_resizeBytes to ATen (CPU) (#60324)
ddec2e0ef4 : tentative fix for adaptiveavgpool gradient computation (#60630)
40a7c317bc : Run BLAS F2C checks on host architecture (#60703)
7bc86458e1 : Revert "Revert D28833086: beef up at::_ops API" (#60214)
9c4eec2a2d : Adjust path to distributed cpp tests (#60705)
8395fdde46 : Increase tolerance for some distributed tests to 5e-5 (#60462)
2fa6c7627e : [CUDA graphs][BC-breaking] Removes post-backward syncs on default stream (#60421)
d90aefe380 : Improve error message for non-differentiable inputs (#60610)
4ed2d5d9bb : ps sparse rpc (#58003)
fadaa52f64 : [caffe2] add an EstimateAllBlobSizes operator (#59775)
fe4ded01f7 : [package] typing.io/re edge case hack (#60666)
375d201086 : add BFloat16 support for arange on CPU (#60444)
7fc4e67771 : ns for fx: fix shadow logger error for resnet18 (#60559)
4ddb2b43b7 : ns for fx: expose function to add comparisons between logged values (#60311)
31fe1c1323 : ns for fx: rekey results by model node names (#60305)
0ba4044b9d : Increase some tolerances for tf32 for Conv3d tests (#60451)
a3ebc40bab : Update intro doc for derivatives.yaml (#60614)
48509b1a9b : Add exclusion list to _check_kernel_launches.py (#60562)
a016150163 : Move torch/lib/c10d to torch/csrc/distributed/c10d (#60543)
b8d7db3b31 : Turn default kernels into Meyer singletons (#60568)
4c00df12ec : Include full Python version in collect_env.py output (#59632)
d52ef2497a : Python basic module execution unit test on delegation of backend_with_compiler_demo (#60468)
b7298f499d : Annotate NoneType as Optional[type] (#60383)
5a077bb10b : Optimize some reduction operators on CPU BFloat16 (#55202)
4aff267072 : Fix Windows error in distributed (#60167)
f2f2f5bf20 : .github: Zip test reports before uploading (#60475)
7e619b9588 : First step to rearrange files in tools folder (#60473)
40d2fe1053 : correct filename issue for test_cpp_extensions_aot (#60604)
9cab894367 : Fix build_only for libtorch (#60615)
eddc5f40f9 : Added GLU and FeatureAlphaDropout to nn docs (#60590)
204da12592 : Reduce number of CEX when passing Tensors to Python (#60546)
bdb964f89f : Support RRefs that contain threading.Locks (#57943)
4e347f1242 : [docs] Fix backticks in docs (#60474)
bb9e1150ea : Revert D29342234: [pytorch][PR] [CUDA graphs][BC-breaking] Removes post-backward syncs on default stream
2b72068a68 : Make Future store Storages instead of references to DataPtrs (#60470)
06e6d63187 : Use a no-warning registry for TensorPipe backends (#60457)
d3a8505ee1 : [jit] Added a pass to transform aten::cat ops to prim::Concat op with variable number of inputs (#59881)
c35a3dd6f2 : [jit] Added a new operator for concat that takes in variadic parameters (#59880)
dfd2edc025 : [special] add zeta (#59623)
26cdec6ce4 : Support `torch.bitwise_{left/right}_shift` and `__rlshift__`, `__rrshift__` (#59544)
b82453cbd4 : Run dist_autograd backward RPCs on appropriate CUDA streams. (#60606)
675cea1adb : [CUDA graphs][BC-breaking] Removes post-backward syncs on default stream (#60421)
00896cb9ed : [caffe2] update db::Transaction::Put() to accept the value by rvalue reference (#60208)
b09c0b6550 : [caffe2] update the BlobSerializer acceptor to allow moving in the data (#60207)
6ea22672c4 : add support for sparse tensors in `torch.testing.assert_close` (#58844)
80f40b172f : [Model Averaging] Periodic model averager (#60320)
4e51503b1f : DOC Improves input and target docstring for loss functions (#60553)
6d1b4642f0 : DOC Describes parameters/buffers registered as None in load_state_dict (#60549)
1e31d26b1d : [Static Runtime] Fix bugs in static_runtime::to_copy (#60503)
d200e9de26 : [Static Runtime] Test for dynamic shapes in SR unit tests (#60579)
99b641169b : Migrates nll_loss_forward from TH to Aten (CUDA) (#60097)
ef84bcfee6 : Convert floating-point constants to T in Bessel functions (#59416)
08020220f3 : [Testing] Adding reference tests to `OpInfo` class (#59369)
236d3afd82 : manual revert of 57575 (#60572)
9e773ea7d5 : Use `accscalar_t` for CUDA add/sub with Tensor and Scalar (#60454)
af66824c1f : [torch][segment_reduce] Add support for sum and min reductions (#60379)
63219f1f9f : To add Rectified Adam Algorithm to Optimizers (#58968)
5a2f41a2db : [torch/distributed.elastic] Fix utils.distributed_test.test_create_store_timeout_on_server to be dual-stack ip compatible (#60558)
1a0058f593 : [nnc] Merge inconsistent profiling information (#60510)
b5b42d4ce2 : [iOS GPU] Add tests for RoIAlign (#60595)
1120a1b92e : [quant][fx][fix] QAT with object_type in qconfig (#60555)
d867340c7b : [nnc] Add LoopNest::getLoopAt to retrieve a specified inner For-stmt (#60569)
c0d08dc10f : [NNC] Add tile transformation in loopnest (fixed #52785) (#57758)
aeea5bf4a1 : [Model Averaging] Provide a util function for model averaging (#60303)
b770c4b61a : Fix ZeRO sort to be by numel (#60556)
1054ad5af3 : Add back smoke tests for windows shard 1 for CircleCI (#60571)
555c154df5 : Use asyncio in tools/clang_tidy.py (#60495)
2dedd96dd2 : cmake: Prefer CMAKE_CURRENT_SOURCE_DIR to TORCH_SRC_DIR (#60493)
ad1041576a : Fix loop types (#60504)
da030c59e7 : ENH Adds Byte support for nll_loss (CPU) (#60308)
7bf195f360 : fix kernel launch check in cross kernel
308d238377 : add SequenceMask op (#60235)
e60f9cfc58 : Revert D29135358: [quant] Input-Weight Equalization - convert modifications
03ab5b72c9 : Fix parallel tbb build (#60532)
bea83e2e46 : Add `NoChunk` wrapper for pipeline args. (#57325)
6385621003 : Use JOB_BASE_NAME throughout code--consolidate CIRCLE_JOB (#60425)
ff3678eec2 : Disable group backend rpc tests from running on CI (#60407)
109f831409 : Support non-Tensor args in the Pipe API (#57226)
10e11dbdcd : Reland D29190420: [nnc][tests] Tests and benchmarks for computeSum (#60550)
5fd45b8089 : Port `any` kernel to structured kernels. (#60361)
a5aa940f5e : Port `all` kernel to structured kernels. (#60360)
7b2d375148 : Fix convolution_depthwise3x3_winograd for multichannel output (#60460)
c63a0d0cfe : Adding windows CUDA smoke tests on PRs (#59686)
8162439cbd : [DDP] Remove python GradBucket construction (#60301)
e8690dacb2 : To add Nesterov Adam Algorithm to Optimizers (#59009)
a2525b035c : Remove unused sample input argument from functions to resolve issue #55737 (#60486)
265f0e5321 : Add device runtime API for the plug-in to register platform python module into torch (#59857)
c97d4d5a34 : Fix test failures with some glibc libraries (#60450)
f0e4e4be72 : Clean Up ZeRO (#60285)
56481f9762 : Ensure proper syncs for out-of-place grad creation (torch.autograd.grad) when backward ops run on side streams (#60127)
b14f19b6fe : Revert D29190420: [nnc][tests] Tests and benchmarks for computeSum
90cd57ee16 : To add edge_order=2 and documentation for gradient operator (#58165)
7ed07e2a7d : [NormalizeArgs] Retain node.meta (#60449)
66452e0a8c : Ensure num_threads is initialized before calling omp_get_max_threads (#60185)
19553438ed : OpenMP: Refactor parallel_reduce to share code with parallel_for (#60184)
c75714e594 : Ensure thread id is valid in nested parallel regions (#60183)
3f3fd57044 : Migrate crossKernel from THC to ATen (CUDA) (#60039)
f590cceacb : [BE] Fix Convolution.cpp build warnings (#60463)
3846cef2d7 : Increase tolerance for test_grad_scaling_clipping (#60458)
40de03fc55 : `topk` on CUDA supports `bfloat16` (#59977)
21479ad20c : [nnc][tests] Tests and benchmarks for computeSum (#60160)
fbeb8b4992 : [nnc] Speed up batchnorm benchmark
b0c9762e2d : [pytorch][nnc] external function call to xnnpack ops (#59525)
79dc500a99 : Add error message for sequence length to be equal to 0 case for RNNs (#60269)
dc9aa7b960 : Add custom code filter for TS (#60309)
3de79b7757 : [quant] Input-Weight Equalization - convert modifications (#59963)
7589d9c58b : Enable rcb lookup for typing (#60413)
135e203e5e : avoid unnecessary copies in MultiDispatchKeySet (#60093)
4887c6e401 : [quant] avoid resize calls in observer/fake_quant (#60386)
d3ae3e07aa : parse_reports() should include hidden files (#60404)
986a88056c : Remove some unused variables (#60411)
36d4062a62 : Fix some variable types (#60414)
7d779f84a3 : Fix some loop types (#60415)
6e926f1303 : Fix lint (#60472)
0c916c8a4e : up the priority of numpy array comparisons in self.assertEqual (#59067)
82c52fd417 : Do not wrap Tensor.{grad,_base} by default (#60464)
f42140cb8a : Disable warn_unused_ignores again (#60480)
6a87e8d087 : Implement erfcx() (#58194)
b34965435d : Improve testing of inplace views (#59891)
20bda0057e : [caffe2/utils] Add explicit rule to avoid package boundary violation
7c1bca9e94 : [caffe2/utils] Add explicit rule to avoid package boundary violation
7f2592195d : Adds stream recording for cross-stream uses of gradients in streaming backward (#60230)
c7d0e9da0a : Add pyproject.toml (#60408)
1abf45e37f : Revert D29241736: [pytorch][PR] To add Rectified Adam Algorithm to Optimizers
99ca2c5b4b : Migrates nll_loss_backward from TH to Aten (CUDA) (#60299)
fca931d181 : List striding with arbitrary step size (#58537)
df8a8fbc1b : Improve code and documentation clarity for DataPipes APIs (#60423)
71b83c27e2 : [pruning] Move pruning directory into experimental folder (#60395)
f75ea51e67 : [pruning] Move pruning files to their own directory (#60293)
b25db5251a : [pruning] Base pruner class (#60278)
31a884987d : Remove some TH includes from ATen (#60323)
0d2a936176 : To add Rectified Adam Algorithm to Optimizers (#58968)
0126f42841 : [complex] `torch.sigmoid`: CUDA support and complex autograd support (#48647)
567e6d3a87 : Remove Caffe2 thread-pool leak warning (#60318)
91451369ed : require non-empty inputs to grad() calls in the API (#52016)
729f7cd52f : Implement histogram operator on CPU (#58780)
3a56758e1f : changed launch bound to fix col2im kernel (#60315)
926bb5d6be : changed launch bounds, unrolled for loop for grid sampler 2d fwd and bwd (#60405)
23bb2ed00a : Improve documentation for torch.set_rng_state (#60422)
700df82881 : [PyTorch Edge] Update iOS readme to use lite interpreter (#59841)
15dc320cae : Fix lint build (#60438)
0585daae83 : fixed launch bounds for gathertopk kernel (#60314)
45ae2e7863 : Set TORCH_WARN_ONCE to always warn inside of assertNotWarn (#60020)
5d476f5b95 : Fix FFT documentation examples and run doctests in the test suite (#60304)
5921b5480a : ensure xml report paths are relative to */pytorch/test (#60380)
9b30fb8528 : add support for constant (#60166)
1764aa79b9 : restore JOB_BASE_NAME for test1 and test2 in test.sh (#60409)
7d39608a29 : split TestAsserts by functionality (#58919)
14b0191d1f : make assert_equal an example how to partial `torch.testing.assert_close` (#58918)
583f072778 : introduce TestingErrorMeta for internal use (#58917)
cf789b9941 : remove pytest.UsageError (#58916)
9fffd05e54 : hide top-level test functions from pytest's traceback (#58915)
18d45b960b : remove rogue raise in helper function (#58914)
dca97b4394 : Weighted decay with frequency (count-based) (#60382)
8f03018980 : [pytorch] Move signal handler test to internal codebase (#60394)
af3f7a210a : add BFloat16 support for kthvalue and median on CPU (#60074)
2606022d01 : [package] fix for edge case `os` and `os.path` importing (#60276)
25e077bce1 : [Issue 59296] added VE device (#59620)
9d1d799034 : Added API to change logging levels for JIT (#58821)
82a6574d89 : cmake: Use BUILD_INTERFACE with TORCH_SRC_DIR (#60403)
8dd1dc89cb : [PyTorch][Edge] Adding tests for lite quantized models (#60226)
5bd49c3396 : fix workflow id usage in GHA (#60376)
1f50dc6e46 : Fix ignoring Tensor properties in torch.overrides (#60050)
65f33ec85c : Follow-up fix for compilation error on CUDA92 (#60287)
01e0296eb7 : [special] migrate log1p, sinc, round to special namespace (#55878)
769c299dcf : [caffe2] add tests for inplace elementwise ops (#60106)
f66b53e8b2 : Ignore unsupported attribute checker pass for torch.jit.trace (#60200)
b505adbb09 : Fix typo in ChainDataset docs (#60336)
2f3be2735f : Don't split oversize cached blocks (#44742)
eaa36ee679 : Enable sharding for Windows GHA CI (#59970)
023907a6fe : Allow Docker build on macOS (#60375)
27e34f731a : Re-enable clang-tidy on PRs (#60297)
c16f87949f : ENH Adds nn.ReflectionPad3d (#59791)
f89ae9cb8d : Moves grid_sampler to autocast promote list (#58618)
61e0bc1955 : [nnc] Remove check on initializer in compressBuffer (#60194)
f2bb0932da : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
5ff407df67 : Skips failing MacOS tests (#60348)
1dee99c973 : LU Solve using cublas and cusolver (#59148)
4a3eea9a6a : [quant][graphmode][fx] Produce reference linear module in convert (#60152)
510334f34b : [BE] clean up IS_PYTORCH_CI and IN_CI (#60279)
2293ab4e53 : [quant][graphmode][fx] Refactor convert for linear to use get_static_module_mapping and get_dynamic_module_mapping (#60151)
a516424a70 : Update internal code for torch.linalg.solve (#56613)
47d727fe1b : [quant][graphmode][fx] Produce conv reference static quant modules (#60138)
b298013cd5 : [add/sub] Cast `alpha` to `acc_type` (#60227)
0131a5972d : [DDP] Test inference works with eval() and no_grad() (#59666)
69b2bf70f9 : [pytorch] fix tools/code_analyzer for llvm 11 (#60322)
c19acf816f : Replace TensorRT's deprecated API in `caffe2/python/trt/test_pt_onnx_trt.py` (#60236)
5ec4ad7f54 : [special] Add special.ndtri (#58650)
5824a866b7 : [pytorch][nnc] support custom class parameters (#59466)
cac9ae1506 : [iOS GPU][BE][3/n] Give MPSImage objects a label for better debugging experience (#60282)
b9cd97c94b : [iOS GPU][BE][2/n] Remove unused APIs (#60281)
80e6e3f1da : [iOS GPU][BE][1/n] Rename MPSCNNContext to MetalContext (#60280)
319890b1b2 : Support *args in Pipe.forward API. (#55441)
a8430f1076 : Remove PlacementSpec from ShardingSpecs. (#59990)
1c97c3e3a4 : DOC Adds LSTM docs for defined variables when bidirectional=True (#60120)
aae2a3c95e : Clarify ConvTransposeNd + reference links (#60291)
e8e3394ea8 : Recognize transposed dense tensors as a form of partial overlap (#59014)
47bbc01e0b : [nnc] Added micro-benchmark to show perf improvement with cat subgraph optimization (#59581)
d0c4ace00f : [jit] Added a transformation to move consumers of aten::cat to its inputs, in the fused subgraphs (#59580)
d4c626a346 : [jit] Exported a method to get the supported list of elementwise ops (#60162)
55755edc60 : [jit] Made a list for element-wise ops. (#59579)
a029422cae : [quant][graphmode][fx][refactor] Change the env map to add dtype as a key (#60054)
c0f8cad0f0 : BE fix shard imbalance (#60206)
d9e7df707b : [TensorExpr] Add NNC lowerings for `aten::mean`, `aten::addmm`, and `aten::adaptive_avg_pool2d`. (#59347)
c6bb9409b8 : [TensorExpr] Handle not-specified dtypes and strides. (#59346)
f042455a8d : [JIT] ShapeProp: add missing ops from mobilenet v3. (#59163)
3870e68644 : TF32 threshold twiddling for tests (#60209)
5f010c066f : [package] Bring back save_source_file (#59962)
5a45103139 : ns for fx: add API usage logging (#60103)
0baad214b0 : [static runtime][fix] resize to the input tensor size for full_like (#60229)
d5df274ea5 : [DDP] Support for multiple backwards (#59359)
3815a013ed : Enable xenial-cuda11.1-cudnn8-py3.6-gcc7 in GHA (#60196)
d5988c5eca : remove unused `type: ignore` directives (#60006)
7c29ca7f2b : Fix Subset of a Subset not sliceable issue (#59513)
08ce5eedf5 : [reland] Move RPC agents to libtorch (#60170)
958b881d70 : [reland] Add some TORCH_API annotations to RPC (#60169)
83fde5d981 : [reland] Pass RequestCallback to FaultyPG RPC agent (#60168)
8a839c5478 : Fix saved variable unpacking version counter (#60195)
5609c2e59c : Adds an OpInfo note (#57428)
ecc37184a5 : Fix clang-tidy path filtering (#60225)
38c3116813 : [hierarchical sharding 5/n] enable table-wise -> col-wise sharding in embedding table lookup
8b55e9feaf : removed cat, equal, and stack from autocast promote list (#59497)
faf459f13e : [Profiler] Fix memory profiler merge issue (#60037)
bcf8752fb2 : updated launch bounds for trilinear 3d (#59999)
7e032f18cf : DOC Describes behavior for None in module.register_* (#60125)
047925dac1 : .github: Run Windows CUDA build on pull requests (#60215)
6af5d00e4b : [torch][segment_reduce] Add support for multi-dimensional input (cuda) (#60018)
a727f655c8 : [torch][segment_reduce] Support for multi dimension (cpu only) (#59951)
8e67981995 : .github: Disable clang-tidy for now (#60219)
acf04cdedf : Fix default DEFAULT_FILE_PATTERN in clang-tidy (#60212)
9c03de1dde : Use mirrors for ubuntu apt source (#60216)
3995fb1840 : Add new_ones symbolic (#59255) (#59539)
ef1c107be5 : [vulkan] Do not use memcmp to compare structs (#60199)
6d0fb85a62 : Revert D28833086: beef up at::_ops API
0cbb5e15d7 : Correct backend in pipe_with_ddp_test (#60123)
acd914f039 : Fix Pipe + DDP for unused parameters, static graph (#60118)
2062cafaa5 : [iOS GPU][MaskRCNN] Implement RoIAlign in Metal shaders using Sampler (#56075)
e2129d1c06 : beef up at::_ops API (#59115)
462448f07a : Enable GHA sharding on linux (#60124)
bbedfd913d : Run a dummy rpc._all_gather in init_rpc to avoid shutdown timeout (#59801)
ebafd2aadf : Stop warning on .names() access in max_pool2d and max_pool2d_backward (#60059)
ef09428804 : Revert D29104399: Port `all` kernel to structured kernels.
3ff5507fb0 : Revert D29104395: Port `any` kernel to structured kernels.
81baa7fb0d : Revert D29104398: Using meta checks for unary `torch.all` and `torch.any`.
873dac4b5a : Revert D29104397: Port `argmax` to structured kernels.
6b5e77904f : Revert D29104396: Port `argmin` kernel to structured kernels.
3dc8112187 : [NNC] Handle int64 indices and loop bounds (#59769)
96b3537e71 : [NNC] Add a dtypeToCppString virtual method in IRPrinter (#59449)
ed1da5be21 : PG NCCL cleanup: remove usage of completed_ in WorkNCCL copies (#59899)
010f4b6f2d : Add .isort.cfg (#60119)
226d745a0b : Port `argmin` kernel to structured kernels. (#59938)
6f3da4f4bf : Port `argmax` to structured kernels. (#59937)
c078cefa7d : Using meta checks for unary `torch.all` and `torch.any`. (#59373)
519698362d : Port `any` kernel to structured kernels. (#59372)
7809494c68 : Port `all` kernel to structured kernels. (#59371)
b8ab98626b : only runs mem leak check on master (#60023)
59b10036d5 : Unifies OpInfo dtype tests (#60157)
4caca7a15b : Improved torch.einsum testing and fixed bug (#59731)
eb36f67dcc : [TensorExpr] Minor cleanup in TensorExprKernel::computeValue (#60041)
6b1712019a : Revert D29132955: Pass RequestCallback to FaultyPG RPC agent
3c3bb91103 : Revert D29132956: Add some TORCH_API annotations to RPC
f233274f30 : Revert D28875276: Move RPC agents to libtorch
e5c99d9908 : Revert D29147009: [pytorch][PR] refine disabled test
a0ad4c24d1 : MAINT Migrates rrelu_with_noise from THC to ATen on Cuda (#57864)
9e79a8a54f : [iOS GPU][MaskRCNN] Force the temporaryImage to become static when doing synchronization (#60155)
0e7b5ea6c0 : nonzero: Default to transposed output strides (#59370)
c0b7c59e55 : [quant] Equalization Observer modifications (#59953)
45c31cabb5 : [quant] Input Weight Equalization - prepare modifications (#59747)
7ce74f3339 : [quant] EqualizationQConfig to distinguish input/output activations (#59739)
c6cdb4f113 : Refactor ZeroRedundancyOptimizer Assuming SPSD (#59834)
85517a2b70 : [TensorExpr] More python binding cleanups (#60058)
c01939a9b1 : [JIT] Handle modules that already have __constants__ (#60003)
d99a8a31b1 : Fix version comparison for defining CUDA11OrLater (#60010)
c458bb985e : make it easier to grep for unary/binary op kernels (#60128)
3288c9d304 : [numpy] mvlgamma: int -> float promotion (#59934)
f65793507d : [fx][Transformer] Add override for call_function (#60057)
5f017e91b8 : don't use moved field in the second lambda (#59914)
64aec8d2ca : [testing] OpInfoHelper tool (#58698)
0bf1260795 : Fix Python 3.8 expecttest machinery again, this time for good. (#60044)
dab1e59652 : Remove dead code in SavedVariable (#59838)
1efa863837 : Avoid un-necessary unwrapping of Tensor in SavedVariable (#59837)
5948e6f653 : removed gelu from autocast fp32 list (#59639)
a95207dad4 : [quant] Add a quantize_per_tensor overload that takes Tensor quantization parameters (#59773)
5686fe5817 : Revert D29154971: Training resnext with msuru_suru_union and ig_msuru_suru_union datasets
4c8c61f200 : Some fixes to vec256_bfloat16.h (#59957)
8ce6d0c42f : [torch deploy] add register_module_source (#58290)
fd1e9253ff : [Profiler] Fix timestamp discrepancy in profiler_kineto.cpp (#60070)
9d7764642b : Use GitHub's diff directly in clang-tidy (#60048)
b2fc6de2c4 : support parsing of PR stats in run_test.py (#60026)
691183bb74 : Fix compile failure on CUDA92 (#60017)
15dbc566c5 : [torch][segment_reduce] Add missing cuda kernel launch check (#60114)
2c5db9a40a : Add c10d filestore functionality to the current c10d_rendezvous_backend (#59719)
84688b0c40 : ci: Add note about file_diff_from_base for GHA (#60110)
15f236f3e3 : [package] fix tutorial link (#60113)
9f68f93aca : Training resnext with msuru_suru_union and ig_msuru_suru_union datasets
8c4e78129e : .circleci: Disable Windows GPU jobs (#60024)
74ea1f23b4 : Revert D29148233: [pytorch][PR] Add GITHUB_HEAD_REF in check for IN_PULL_REQUEST
bac6bcd6d8 : Update call site for FBGemm quantization util functions. (#624)
d88fbf0fbc : fix minor typo in run_test.py (#60055)
241aac3ef8 : Add GITHUB_HEAD_REF in check for IN_PULL_REQUEST (#60047)
a6ecfb3296 : Update lint.yml to use custom clang-tidy build (#59967)
842a831f53 : [nnc] Move batchnorm to operators library (#59992)
bda40639c5 : [nnc] Move operator implementations into a subdirectory (#59988)
f43ff754ca : [docs] Correct errata in linalg.eigh and add a bit more information (#59784)
36a5647e30 : Handle exceptions from THPModule_setQEngine (#60073)
9fbbab88da : [fx-acc] Saturate host by replicating partitions onto idle devices (#60064)
a344b09db2 : [quant][fx][graphmode] Remove Quantizer class (#59606)
78011bc0ce : typofix (torch.zero to torch.zeros) in docstring (#59703)
e50f264b51 : [caffe2] make MulGradient implementation in-place compatible (#60035)
eda2ddb5b0 : [ATen] Fix aten::to schema (#60001)
95257e8a62 : [fx-acc] Fix wrong device assignment in find_single_partition (#60056)
469f0e42d6 : [nnc] Handle more cases of excessive # of cat args (#60043)
1207745e98 : fixing illegal memory access on NHWC BN kernel (#59981)
27a3204982 : generate C++ API for meta functions using at::meta:: (#58570)
e341bab8ae : bugfix: ensure that at::{dispatch_key}:: API gets external linkage (#58569)
5fd6ead097 : refine disabled test (#60040)
fc50f91929 : Move RPC agents to libtorch (#59939)
04ec122868 : Add some TORCH_API annotations to RPC
cbbb7e145e : Pass RequestCallback to FaultyPG RPC agent
f232b052a6 : [fx-acc][easy] Format FX experimental partitioner code (#60030)
50229b5250 : Fix some typing issues (#59952)
1d5a577f04 : Fix some items identified as problematic by Wextra and other clean-up (#59909)
dc1f60a9a2 : [sparsity][refactor] Restructure the tests folders (#60032)
8dd0570b34 : Reuse build_torch_xla from pytorch/xla repo. (#59989)
b162d95e46 : Fix a number of lint perf and safety issues in torch (#59897)
a0e62c4da4 : Reuse run_torch_xla_tests from pytorch/xla (#59888)
c23624351a : disable test_sparse_allreduce_basics (#60029)
044b519a80 : Symbolic for ReLu6 (#58560) (#59538)
5d00c374dd : [ONNX] Sum empty tensor could not be exported to ONNX successfully. (#58141) (#59537)
83450aa11d : [ONNX] Add support for torch.bernoulli() export (#57003) (#59536)
cd5f142af4 : fix error message for type_as (#57948) (#59535)
55530e2276 : Update Autograd Export Docs (#56594) (#59534)
a120a12ab4 : [Bootcamp][pytorch]Add WebIterDataPipe and ToBytesIterDataPipe to the datapipes. (#59816)
79d7c15dc5 : [PyTorch] Add ExclusivelyOwned (#59419)
d7eb5836bb : Add RRef support to ShardedTensor. (#59776)
20460b0c05 : [nnc] Removed setBufferMap method from LoopNest (#59496)
b822928e33 : [nnc] Removed setGPUBlockIndex and setGPUThreadIndex methods from LoopNest (#59495)
aa163aeff5 : [nnc] Made several LoopNest APIs static (#59494)
4afd0b7952 : .github: Add Windows CUDA 11.1 workflow (#59960)
1c502d1f8e : Don't run_build when run_binary_tests (#59982)
90cf76dde5 : Support torch.nn.parameter type for PDT (#59249)
f9445c8a6b : [torch][segment_reduce] Add cuda support for mean reduction (#59543)
f4f7950812 : Prepare for TensorPipe separating its CUDA-specific headers (#59788)
5e5ca0682b : Move CUDA-related stuff of TP agent to separate file (#59377)
83ba71aa0e : Make CUDA serde support for TP agent pluggable (#59376)
cf63893211 : Enable implicit operator versioning via number of arguments (#58852)
a1780432fa : Move c10d to libtorch(_cuda) (#59563)
8d50a4e326 : Add support for embeddingBagBytewise in FXGlow
cbd1e8c335 : [Static Runtime] Fix bug in aten::to (#59995)
087ac75b26 : Fix quantized mean operator in QNNPACK backend (#59761)
5b9fced70a : add output_process_fn_grad before sum().backward() (#59971)
117b7ae38a : Remove update-disabled-tests workflow as it is migrated to test-infra (#59986)
c2098487e8 : [c10d] Move pg wrapper tests to their own file. (#59840)
5c1d17e697 : Revert D29100708: [pytorch][PR] Parametrizations depending on several inputs
5e993e6c81 : [fx2trt] Make TRTInterpreter don't need concrete tensor as arg (#59948)
c645d39a77 : Implementation of torch.isin() (#53125)
f9ec86a6c6 : External stream (#59527)
8e92a3a8b0 : [docs] Add pickle security warning to package docs (#59959)
ef13341a8d : upgrade onednn to v2.2.3 (#57928)
061e71b199 : Parametrizations depending on several inputs (#58488)
ab70e1e984 : [TensorExpr] Add error checking in mem_arena (#59922)
9ad0de3c6f : Rework requires_grad on DifferentiableGraphOp (#57575)
1f7251df90 : fixing DifferentiableGraphOp updating requires_grad on input tensor list; python test added to verify the fix (#57574)
c50c77b444 : remove unused variables (#59912)
580a20f33b : [reland] torch/lib/c10d: Use torch_check instead of throwing runtime_error (#59918)
3d90c82a5c : [TensorExpr] Python binding improvements (#59920)
68d690ffbd : Vectorize the softmax calculation when not along the last dim (#59195)
d60d81b5a7 : Make PyObject_FastGetAttrString accept const char* (#59758)
700add0737 : Fix expecttest accept on Python 3.8 and later (#59709)
cf38b20c61 : Alias for `digamma` as `psi` to `special` namespace (#59143)
ff15d93b88 : Improve numerical stability of GroupNorm (#54921)
095cd6a0da : MemoryOverlap: Avoid has_storage calls (#59013)
be038d8989 : [CUDA graphs] Make stream semantics of backward calls consistent with other cuda ops (ci-all edition) (#57833)
92513038e8 : Revert D28994140: [pytorch][PR] Implemented torch.cov
0ceea7faf4 : Refactor SavedVariable (#59836)
d03ff1a17d : pre compute regex and match simple signature autograd codegen 15s -> 12s (#59852)
30a18fe318 : refactor yaml loader import, no runtime change (#59850)
c60d1ac9cf : Use C dumper if possible aten codegen 23s -> 13s (#59849)
504ec30109 : avoid error string formatting aten codegen 28s -> 23s (#59848)
7143a6a189 : Avoid unnecessary re-computation autograd codegen 21s -> 15s (#59847)
1f6e39336f : Simplify parametrizations.SpectralNorm and improve its initialization (#59564)
10a3a3d363 : Fix bad change in a CUDACachingAllocator loop (#59903)
e49f0f4ffd : Automated submodule update: FBGEMM (#59874)
3529a48ebb : Revert D28981326: torch/lib/c10d: Use torch_check instead of throwing runtime_error
f3218568ad : optimize channels last for BatchNorm2d on CPU (#59286)
864d129bae : [quant][fx] Remove extra q-dq for weight bias in normalization ops (#59882)
60eb22e45e : Build an -Wextra around c10 (#59853)
e41bc31eb2 : make --run-specified-test-case use --include (#59704)
cf0c4ac258 : Fix some issues in CUDACachingAllocator (#59819)
b83ac0cc4e : [nnc] Added a check to vectorize only those loops that are normalized. (#59423)
30e24b2d2b : [nnc] Modified vectorize API to return bool (#59422)
a9e136a61e : Remove ci/no-build (#59889)
f4fdc49957 : [NNC] Add python bindings for loopnest.compress_buffer (#59681)
ee3025f734 : Give clearer lint error messages (#59876)
6ea6075002 : torch/lib/c10d: Use torch_check instead of throwing runtime_error (#59684)
d433a55c94 : Replace throw std::runtime_error with torch_check in torch/csrc/distributed (#59683)
9cdbddb3f7 : Fix `Vectorize<float>::trunc` on ARM platform (#59858)
2ce21b2e61 : [Pytorch backend delegation] Preprocess to accept (#58873)
23c232554b : Implemented torch.cov (#58311)
ba09355b12 : Upgrade Windows CI Python to 3.8 (#59729)
d75e99b709 : fx quant: enable qconfig_dict to target function invocations by order (#59605)
e6110d4d5d : Fix input_buffer check if inplace update is valid (#59817)
c9e4d1372f : Add guards for USE_C10D_FOO in relevant c10d files (#59697)
773b56e719 : Fix Windows guards in c10d (#59696)
cbcae46fa5 : Remove USE_CUDA from c10d reducer/logger (#59562)
b4c35d7ae7 : Remove USE_CUDA from ProcessGroupGloo (#59561)
b5e832111e : [nnc] Limit the number of inputs to a fusion group.
df759a3d9e : [nnc] Do not fuse matmul/conv2d if inputs are discontiguous. (#59754)
4b91355232 : [ONNX] remove raw export type (#59160)
2112074f25 : [Static Runtime] Add schema check to several aten ops (#59603)
6eabbea47c : Disable cuDNN persistent RNN on A30 (#59830)
455afdf974 : Automated submodule update: FBGEMM (#59715)
c7890b4a8e : [package] doc string cleanup extravaganza (#59843)
54bfd41a2e : Fix torch.angle on aarch64 (#59832)
4025f95a20 : [docs] Add table of contents to torch.package docs (#59842)
0e222db087 : [docs] Add explanation section to torch.package docs (#59833)
062dde7285 : [docs] Add "how do I" section to torch.package docs (#59503)
6a18ca7a07 : [docs] Add tutorials section to torch.package docs (#59499)
a3db8e0a26 : [docs] Add torch.package documentation preamble (#59491)
a524ee00ca : Forward AD formulas batch 3 (#59711)
8a7c0d082f : ger is an alias to outer, not the other way around (#59710)
c2c35c0170 : [Binary] Link whole CuDNN for CUDA-11.1 (#59802)
60ba451731 : [torch] Remove using directive from header (#59728)
e9e9291dc1 : [After fix] Reuse constant and bump bytecode to v5 (#59722)
ac6b5beade : [torch][segment_reduce] Add support for mean reduction (cpu) (#59521)
e71db0bb82 : .jenkins: Ignore exit code of nvidia-smi (#59826)
e7ad82eb2f : [DataLoader] Add option to refine type during runtime validation for DP instance (#56066)
e2c784d940 : [reland] .github: Add Windows GPU workflow (#58782) (#59752)
54cc477ea3 : .github: Ensure cleaner windows workspace (#59742)
0099c25b85 : fx quant: remove some dead code in observer insertion (redo) (#59799)
fb620a27d0 : [WIP] Add slow gradcheck build for the ci/slow-gradcheck label (#59020)
cc32dcadd9 : Fix error when running python setup.py install again on Windows (#59689)
1fc3576d97 : Fixing and enabling tests that check fake_quant matches quant+dequant (#59095)
c90260905f : [fix] torch.{lin, log}space(): properly examine passed dtype (#53685)
9bcef86d18 : Split slow gradcheck periodic CI job so that it does not time out (#59736)
f240624080 : displays graph node's info (#59679)
7af9252ed7 : [skip ci] export_slow_tests.py - Add option to ignore small differences (#59759)
51d954e8e4 : Link ATEN tests with OpenMP runtime (#59733)
4f79270b89 : [PyTorch ] Thread parallel bmm across batch dim (#59596)
3176f16691 : [Pytorch benchmark] Add BMM benchmark (#59595)
58412740ae : Added doc for torch.einsum sublist format (#57038)
5e3e504728 : Update TensorPipe submodule (#59789)
96651458eb : Automated submodule update: tensorpipe (#59374)
0d7d316dc1 : [fx ir] Support lists and dicts in FX IR GraphDrawer (#58775)
e7cccc23b9 : Add query and synchronize to c10::Stream (#59560)
f11120967e : Support EnumerableShardingSpec in ShardedTensor. (#59061)
48ea7c808d : [C10d] Support subgroups (#59111)
fc0582ee95 : [c10d] Use TORCH_CHECK for monitored barrier error (#59667)
12b9e99e0d : Bump the bytecode reading version kMaxSupportedBytecodeVersion to 6 (#59714)
3c6ae6f181 : [OSS CI][iOS] Use LibTorch-Lite.h for nightly builds (#59762)
a62f6b6d04 : ci: Add skipIfOnGHA util (#59748)
1ea5c19c19 : Add USE_WHOLE_CUDNN option (#59744)
bb19dc14cc : add channels last support for AvgPool2d on CPU (#58725)
52b2ed65c0 : Revert D29007258: Revert D28926135: [pytorch][PR] Refactor Foreach Tests: Unary Functions
827e00c914 : Update Kineto to fix fd leak (#59755)
a4e0368c99 : Comment on tests reliance on ZeRO's partitioning algo (#59713)
25179ecb63 : [caffe2] Fix verbose templated signed/unsigned comparison warning (#59578)
b0fd3ca542 : [sparse] Add the AO namespace to torch (#58703)
3dfb94c17c : Construct a -Wall around Torch (#59668)
fa030d1213 : [DataPipes] Add simple unbatch to DataPipe (#59610)
2f395f3b54 : [reland] Document debugability features in torch.distributed (#59726)
c5bee1ec4f : [PyTorch] Parallelize gelu via tensoriterator (#58950)
8b63573c31 : [PyTorch Operator Benchmark] gelu benchmark (#59334)
874e7b889d : [PyTorch] Expose interface to set grain size on tensor iterator (#58949)
1735775662 : [Torch] Cast timestamp type to int (#59712)
44c442293f : [torch/elastic] Fix the edge case when no node is alive (#59663)
0fa3db5594 : Fix subgradient for element-wise max and min (#59669)
e3d75b8475 : irange for PyTorch sans jit (#59481)
804f924504 : Fix accuracy failures when running test_nn on A100s (#59624)
47e286d024 : Merge c10d elastic agent tests into local_elastic_agent_test.py file (#59657)
13a2025469 : Delete empty caffe2/quantization/CMakeLists.txt (#59717)
171142f9cc : Revert D28926135: [pytorch][PR] Refactor Foreach Tests: Unary Functions
9bb5663979 : Use commit stats from viable/strict instead of nightlies for sharding (#59727)
8845cbabf0 : [CMake] Split caffe2::cudnn into public and private (#59721)
c738c13304 : Fix typo in checkpoint docs (#59646)
51af772937 : [jit] Set debug name for value coming out of GetAttr nodes. (#59123)
bbd58d5c32 : fix :attr: rendering in F.kl_div (#59636)
e385be7611 : .circleci: Disable pytorch_windows_test_multigpu (#59725)
f8bb7e2f7c : Magma isn't needed in cpu build (#59619)
ed3884c3e9 : Fix timeout with ZeRO test_step() and test_step_with_closure() (#59648)
61965abad7 : Move _PartialWrapper to module scope (#59660)
0f6bd550a4 : Revert D28981443: reland D28645531: .github: Add Windows GPU workflow
167477329d : [Reland] adding base commit to scribe report (#59677)
d42e6c7f70 : Clang format distributed_test.py (#59693)
68f74966fc : [ttk] Store float64 in tensorboard instead of float32 (#59435)
3271853912 : hold references to storages during TorchScript serialization (#59642)
21121675b3 : reland D28645531: .github: Add Windows GPU workflow (#59678)
0897df18a3 : Refactor Foreach Tests: Unary Functions (#58960)
62583e51a5 : [reland] Add a ci/no-build label (#58778)
b844fd11ee : Allow tools/test_history.py to be piped to head (#59676)
26beda8ed5 : [BE] unsupported backward failing on single sample (#59455)
12b4e8996f : [DataLoader] Add nesting_level argument to map and filter (#59498)
2693b0bef3 : Fix compile error when debugging (#59616)
f1786b293d : Revert D28972444: [pytorch][PR] Document debugability features in torch.distributed
a56c89a160 : Revert D28918331: [pytorch][PR] Automated submodule update: FBGEMM
a9d2810817 : Document debugability features in torch.distributed (#59604)
daa35141e8 : Reland: "[TensorExpr] Fix handling of 0-dim tensors." (#59508)
9f9904969f : Reland: "[TensorExpr] Fix printing of Bool dtype." (#59507)
0b6ec32004 : Reland: "[TensorExpr] Improve debug messages." (#59506)
04986b909f : [package] Add docstring for PackageExporter.intern (#59602)
f52e202840 : Add warning when accessing Tensor::grad() in the C++ API (#59362)
90303157ab : Enable complex dtypes for coo_sparse-coo_sparse matmul [CPU] (#59554)
b386ed6f9b : Fix some compiler warnings (#59643)
02d380450d : [FX][docs][EZ] Fix link to fuser example (#59670)
1733d10399 : Warn when backward() is called with create_graph=True (#59412)
82466e0605 : Revert D28900487: ger is an alias to outer, not the other way around
cc840cf544 : Automated submodule update: FBGEMM (#59505)
2956bbaf23 : Revert D28645531: .github: Add Windows GPU workflow
97dfc7e300 : [Reland] Adding run specified tests option to run_test.py (#59649)
51884c6479 : .github: Add Windows GPU workflow (#58782)
6104ac5aaf : [libkineto] Refactor trace activities (#59360)
acc47357b5 : Fix torch.conj for zero-dimensional sparse coo matrix (#59553)
894aaa3997 : Revert D28943928: [pytorch][PR] adding base commit to scribe report
6ca141fe6c : Make detach return an alias even under inference mode (#59633)
14f4c8d333 : Revert D28387762: Forward AD formulas batch 3
528d82d6a6 : [torch] Add debug name to assert message for useOf
9d533ef3ac : Renorm fix (#59615)
67b8e6410d : [OSS] Add podspec for libtorch-lite (#59638)
1bb1a9e22b : [ROCm] enable test_cufft_plan_cache test (#57520)
43274ca145 : test_store multiworker remove multiprocessing (#59599)
40cbf342d3 : Fix vectorized calculations on POWER (#59382)
ea3b2fd0fa : Throw RuntimeError using TORCH_CHECK (#59485)
5fc105b323 : Raise NotImplementedError on forward passes (#59483)
c268eefe96 : Use TORCH_CHECK_NOT_IMPLEMENTED for AD not implemented (#59482)
84061dadad : Add reduce variants for `scatter` operation. (#57015)
9de0c214bd : [quant] Fix dimension for output of batchnorm 1d (#59264)
58348bea06 : Forward AD formulas batch 3 (#58094)
4512d75063 : ger is an alias to outer, not the other way around (#59448)
d0e84c2f23 : Revert D28961233: [pytorch][PR] Adding run-specified-test-cases option in run_test.py
0208e604e3 : seems os.environ.get() not working well on windows (#59634)
1242dd1357 : Remove cancel_redundant_workflows job (#59608)
7949fdd2b6 : ninja 1.9.0 couldn't be installed, CI might be broken (#59625)
13917bab7f : [Torch] Correct launcher tests (#59635)
3b0c6a7b50 : fix AddPadding tensor shape inference (#59572)
7dac2987ce : [quant][eager][fix] Fix a typo in convert function in eager mode quantization (#59571)
31d136c81f : [DDP] Rename the member divFactor_ as div_factor for naming consistency in reducer (#59523)
b7ee164456 : [DDP] Remove the duplicate parseHookResult in reducer (#59510)
2b398d0537 : [Reland][Gradient Compression] Apply division first to avoid overflow (#59576)
92ed70a048 : adding base commit to scribe report (#59570)
a6c9483c2f : Adding run-specified-test-cases option in run_test.py (#59487)
ea1de87f4b : Sort params by size (decreasing)
935057fc74 : [package] turn MockZipReader into DirectoryReader and add test coverage (#59107)
693b2696f8 : add dispatch for bitwise_and (#59388)
4920d5a05a : Temporarily add skip to fix slow gradcheck failure on master (#59585)
5c7e14d2bc : [DataLoader] Switch NotImplementedError to TypeError for len (#59464)
1b578c4bf5 : [DataLoader] Close byte stream explicitly (#58938)
90c5b74e47 : Back out "[PyTorch Edge] bytecode version bump to v5 and enable share constant table" (#59432)
5d6a10a765 : Revert D28913223: [pytorch][PR] Adding run-specified-test-cases option in run_test.py
010bcb4c2d : Fix xnnpack hardswish memory issue (#59577)
1faba1e4cc : [Pytorch Edge] Make RegisterBackendSelect Selective (#59096)
501320ed81 : [pytorch] deprecate default_op_deps.yaml (#59573)
c436426be8 : [fbgemm] fix gconv + acc16 (#59541)
57d8bccd00 : only reorder tests based on git diff if IN_CI (#59565)
dafa4b3517 : quantization: improve documentation on natively supported backends (#58925)
6575975da9 : [Reland2][DDP] Merge work and future_work in reducer (#59574)
fbe65b16ae : Use irange in torch/csrc/jit (#55716)
ff553e5b09 : enable upload test stats on PR (#59567)
24432eaa29 : Adding run-specified-test-cases option in run_test.py (#59487)
caf76c2445 : Move sharding to after all tests have been excluded (#59583)
93140a31e2 : Use irange in a few places (#55325)
737d920b21 : Strictly type everything in .github and tools (#59117)
6ff001c125 : DOC Improve documentation for LayerNorm (#59178)
a30b359590 : fix double backward for `binary_cross_entropy` loss function when `reduction=sum`. (#59479)
77dde35f1a : Fix error message formatting in _make_grads (#59532)
24e27af683 : [ROCm] enable kernel asserts (#49624)
05b571ee8e : fix name of 'dims' kwarg in torch.tile docs (#59471)
b0ac9bfb2b : Add warning about should_drop for JIT coverage plug-in (#57961)
8693e288d7 : DOC Small rewrite of interpolate recompute_scale_factor docstring (#58989)
1798ff02e4 : [PyTorch] Optimize c10::optional<ArrayRef<T>> for size (#59333)
cc03ea2c47 : [quant] Implemented InputWeightObserver for Linear inputs
c51abf8fca : Make `binary_cross_entropy` differentiable wrt `target` (#59447)
94cc681fc2 : Revert D28922305: [Reland][DDP] Merge work and future_work in reducer
f998e63dca : Revert D28922548: [Gradient Compression] Apply division first to avoid overflow
459270ac01 : [Gradient Compression] Apply division first to avoid overflow (#59522)
a2e56fa0dc : Adding users of a node to the serialized JSON. (#59357)
de40c8e495 : Adds remaining OpInfos and removes redundant test generators (#55558)
8c852de54d : [PyTorch Edge] Remove legacy and kineto profilers from mobile build (#58730)
3137bbeb1a : [Reland][DDP] Merge work and future_work in reducer (#59520)
390fe74944 : Migrate `torch.lstsq` to ATen (#59400)
da972afdcd : OpInfo: to_sparse (#59445)
96ac0e0340 : OpInfo: t (#59442)
0a5bfa9919 : Support `__rmod__` (#58476)
344ecb2e71 : flip via TI (#59509)
1be7ca71ee : OpInfo: log_softmax (#59336)
1dcc034fba : [caffe2] Avoid attempt to use undefined preprocessor directive
1d9c1cc00a : [4/n] [c10d] Introduce the multi-tenancy feature in TCPStore (#58331)
844a98758a : [3/n] [c10d] Revise the implementation of TCPStore (#58330)
4ee761c2c5 : [2/n] [c10d] Introduce the 'multiTenant' constructor parameter in TCPStore (#58329)
cf408c3743 : [1/n] [c10d] Introduce a new TCPStore constructor (#58328)
91eb831422 : Revert D28698997: [Static Runtime] Add schema check to aten ops
c88a0b55b3 : Revert D28677383: [DDP] Merge work and future_work in reducer
f8bebade47 : [DDP] Merge work and future_work in reducer (#58937)
5117ac3bb4 : Revert D28877076: [pytorch][PR] torch.flip via TI
10345010f7 : [Static Runtime] Add schema check to aten ops (#59426)
d82bc3feb8 : torch.flip via TI (#58747)
bca25d97ad : [itemwise-dropout][1/x][low-level module] Implement Itemwise Sparse Feature Dropout in Dper3 (#59322)
68df4d40d2 : show_pickle/model_dump: Handle invalid UTF-8 in pickles (#57661)
ba3a90b55e : Revert D28819780: [TensorExpr] Fix handling of 0-dim tensors.
88fb5ee84c : Revert D28819779: [TensorExpr] Improve debug messages.
aa66990ef1 : Automated submodule update: kineto (#54604)
18848d55b7 : Do not use gold linker for CUDA builds (#59490)
a682ff7ef1 : Add kMaxSupportedBytecodeVersion for Lite Interpreter (#59472)
d125694d0b : Move CUDA async warning to suffix (#59467)
f23c45bd04 : Revert D28841011: [TensorExpr] Fix printing of Bool dtype.
6309b342c3 : [nnc] Enable CPU fuser inside FB, take 5 (#59461)
f5e3eae82a : [nnc] Infer device type from nodes if inputs are all scalars (#59430)
a776072de6 : .github: Switch windows instance types (#59473)
bbf7eceaf0 : Refactor c10d and dist aliases for torch.distributed (#59456)
1183fa3817 : Switch PG::Work to Future in default_comm_hooks.cpp (#59398)
aa27136e3c : Fix test_randperm_device_compatibility for 1 GPU (#59484)
a7c8c56b7f : torchdeploy allow embedded cuda interp use without cuda (#59459)
aeb55225e0 : [caffe2] add a basic implementation of run-time feature rollout checks (#59355)
90ad0f316f : try fixing checkout dirty issue (#59450)
c4349bfa84 : [GHA] add upload binary size step (#58341)
3607478ecd : Conjugate View (#54987)
19985d6f84 : [TensorExpr] Fix printing of Bool dtype. (#59328)
285b8a5252 : [TensorExpr] Improve debug messages. (#59280)
d60efd8207 : [TensorExpr] Fix handling of 0-dim tensors. (#59279)
dce8697aea : [PyTorch][vulkan] Unify convert as `vTensor& convert(const Tensor&)` (#59268)
c99d6254fb : remove THCReduce.cuh (#59431)
780faf52ca : [profile] Clarify record_shapes=True docstring (#59469)
b3ee645cbf : Migrate `_th_std_var` to ATen (#59258)
689a5edd0a : Revert D28326365: [pytorch][PR] Add `torch.cuda.streams.ExternalStream`
3472f0c94d : Enable torch::deploy GPU tests in sandcastle (#59460)
ed993f3243 : [CODEOWNERS] spandantiwari -> shubhambhokare1 (#59427)
e90caac676 : Port gelu_backward to structured (#58665)
153a96054b : Port gelu to structured (#58664)
5f824ef437 : Port hardshrink to structured (#58663)
b4fa4c86f7 : Port hardshrink_backward and softshrink_backward to structured (#58662)
2119efd234 : `reflection_pad1d_backward`: Port to structured (#59103)
a6bd6b9ca5 : [NNC] Fix the uninitialized pointer in loopnest.fuse_loops (#59411)
aa06bc0731 : OpInfo: minor fix in sample_inputs_diff (#59181)
b99523832b : Remove use_env from torch.distributed.run, clarify bc around that parameter in comment. (#59409)
4ae5764d47 : Add is_inference to native functions (#58729)
fa597ee17f : Fix torch.randperm for CUDA (#59352)
202b2c9fc2 : Remove many unnecessary constructor calls of Vectorized<T> (#58875)
d7ef9b73fb : Add `torch.cuda.streams.ExternalStream` (#57781)
c769300301 : Fix MaxPool default pad documentation (#59404)
6d51a89778 : Fix broken hyperlinks (#59425)
63956610a7 : Search for static OpenBLAS compiled with OpenMP (#59428)
c7a3a13bab : .circleci: Disable USE_GOLD_LINKER for CUDA 10.2 (#59413)
06ed658358 : Merge TensorPipe's CPU and CUDA channel registry (#59375)
c09beaaf4a : Remove LazyStreamContext (2 out of 2) (#59299)
03a5c6ea99 : Remove LazyStreamContext (1 out of 2) (#59298)
3e7396f99d : Fix CUDA sync when switching streams in RPC tests (#59297)
8f4cfaa9db : Fix race condition in TP agent (#58753)
c0acffa6ef : Ensure async_execution works with CUDAFuture (#56863)
7bcd8f94a5 : Avoid re-doing CUDA stream sync in OwnerRRef (#57355)
d009c9c129 : [RPC Framework] Separate initialize_from_module_rref method out of RemoteModule constructor (#59292)
c3bf42e0d8 : Fix symbolic derivative of hardswish (#59405)
9ac954789d : [nnc] Add hardsigmoid (#59069)
c717ce6771 : [NNC] Add python bindings for Compute2 (#59350)
db90533b9e : Make JIT not assume that the device is CUDA. (#54238)
7c4ac9e3ee : [NNC] Fix loopnest.cache_accesses for reduce ops (fixed #59002) (#59136)
d9d7d5e24a : [torch] Remove migration warning for ScriptDict
6627c00e63 : [Static Runtime] Fix bug in quantized::linear wrapper (#59407)
7d38901e7c : [NNC] Fix BufHandle arguments in loopnest python API (#59348)
77de640f4b : [torch distributed] Implementing reduce_scatter_base (#57567)
46d724c919 : Revert D28859795: [nnc] Enable CPU fusion inside Facebook, take 4
526445dfa8 : Update reviewer list for the distributed package (#59417)
aa4f27c12a : Prefer accurate reciprocal on ARMv8 (#59361)
3416b8dd70 : Automated submodule update: FBGEMM (#59337)
1aa14fcb14 : Fix the "tensors to be on the same device" error in HistogramObserver (#59234)
2aa463d931 : Support switching RemoteModule between train/eval (#59026)
c1c9774acb : Revert D28538996: Enable torch::deploy GPU tests in sandcastle
e66015dadf : Add build support for kineto + rocm (#58401)
332b01e93f : [DDP] log usage of torch_distributed_debug (#59351)
6408cbd918 : Migrate renorm to ATen (CPU and CUDA) (#59250)
2ad4b8e58c : Extract c10d Store tests to dedicated test file (#59271)
f05d5bec48 : Preserve PyObject even when it goes dead (#56017)
fa72d9a379 : [quant] Fix use after free (#59267)
6baa66ece9 : [nnc] Enable CPU fusion inside Facebook, take 4
57e452ff5d : Revert D28856713: [PyTorch Edge] Add proper error message when loading incompatible model with lite interpreter
6620d7d688 : OpInfo: norm (#59259)
4b74c848aa : Enable torch::deploy GPU tests in sandcastle
f1ce7f4b7f : Update PyTorch version to 0.10.0a (#59345)
c829095590 : Revert D28802058: [pytorch][PR] add dispatch for bitwise_and
d095ec75a1 : Forward AD formulas batch 2 (#57863)
add291cf66 : [JIT] Add a phase to perform inplace<->functional conversion for activation operators (#57477)
91b7bcf4c0 : [PyTorch Edge] Add proper error message when loading incompatible model with lite interpreter (#59354)
3979cb0656 : irange for size_t (#55320)
f914ab193e : Use irange in a few places in torch/csrc (#55100)
18642e664a : [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353)
8b4784a9c6 : Revert D28821216: [pytorch][PR] Migrate `_th_std_var` to ATen
eb55b086b7 : [DDP] Log some python-side errors (#59284)
79aeca0b00 : [DDP] Log when errors happen (#59281)
d2e03051e0 : Fix fetcher continuing to next item after StopIteration (#59313)
1fb5cf5a71 : Migrate `_th_std_var` to ATen (#59258)
c03cae49fc : [DDP] Remove unused initialize_buckets (#59066)
2a78e896a0 : [DDP] use work.result() in _check_global_requires_backward_grad_sync (#59065)
517ea26eee : [deploy] Make load_library a no-op inside a package (#58933)
dfe85d6fd7 : Revert D28840199: [pytorch][PR] Update version to 1.10
2ce23136d0 : Use irange in torch/csrc utils (#55556)
e6c8e9497c : Small fix type hints in mobile optimizer (#59282)
318c858eb5 : [fx2trt] Organize converters and add unittests (#59261)
0eafef5031 : Fix internal assert location in custom Function binding (#59301)
c3745dc580 : Small change for torch.distributed launcher (#59152)
3453aa44c1 : Update version to 1.10 (#59325)
7ee68363a8 : Add new rpc.barrier API (#53423)
1765f51618 : [iOS GPU] [BE] use channel-last to transform the weights (#59113)
1968efa2dd : [c10d] Remove verbose log (#59070)
7f2e620105 : FIX Validates that weights are 2d in embedding (#59314)
fb709a8ca5 : Build with USE_GLOO_WITH_OPENSSL=1 (#59274) (#59323)
f7097b0c0b : Make unary tests runnable if SCIPY is not installed (#59304)
eae84f0d5d : Fix ONNX forward compatibility (#59327)
c22ac14969 : [Error-reporting] Set upper boundary on border element (#59311)
99f2000a99 : Migrate nonzero from TH to ATen (CPU) (#59149)
b4d30bb583 : [PyTorch] Use expect_contiguous in CPU matmul (#58895)
0528325b5f : [iOS GPU] Raise the minimum OS support version to 11.0 (#59310)
f8f06e7099 : [iOS GPU] Fix the OSS macos build (#59102)
874f287c52 : add dispatch for bitwise_and (#59125)
484d53f4a0 : [torch][JIT] Warn only once when using unscripted dictionary (#59287)
82052b0a76 : [vulkan] Remove constant duplication for Vulkan optimize_for_mobile (#59276)
3ec0904718 : docs: Add note about nightly versions bump (#59324)
5386f6935a : avg_pool3d: port to structured (#59083)
5dc426a6f6 : avg_pool2d_backward: Port to structured (#59082)
eb1adc4c5e : cmake: Add USE_GLOO_WITH_OPENSSL to Summary.cmake (#59321)
afd5237a4f : Revert D28800692: [nnc] Enable CPU fusion inside Facebook, take 3
a7aeaaf99e : Added missing namespaces for C++ API (#45736)
87a25e09f4 : [quant][graphmode][fx][refactor] Remove _convert from Quantizer class (#59042)
580831bfbb : Add support for MatMul to BatchMatMulFP16Acc{16,32}Fake Op Mapping
599f5058cf : [ONNX] Update ONNX to rel-1.9 (#55889) (#57080)
f87aa23125 : .github: Remove windows dependency installs (#59283)
3a2149a4ce : [reland] Make TP agent use streams from Future when sending response (#59212)
258a991027 : [reland] Set and propagate devices in RRef completion future (#59211)
a3392cafe0 : [reland] Set streams when invoking UDFs (#59210)
f8a3fd4e34 : [reland] Create CUDA-aware futures in RequestCallback (#59209)
3af6ff98ff : [reland] Provide pre-extracted DataPtrs when completing a Future with a Message (#59208)
1adc289e10 : [reland] Allow Future::then to return pre-extracted DataPtrs (#59207)
b07d68e24c : [reland] Always use intrusive_ptr for Message (2 out of 2) (#59206)
5ec169b4c3 : [reland] Always use intrusive_ptr for Message (1 out of 2) (#59205)
44c20ce676 : Alias for `i0` to `special` namespace (#59141)
059a717c9e : Fix breakpad build and add to more images (#59236)
dbe629c51d : [RPC Framework] Support creating a RemoteModule by RRef (#59242)
3218d890dd : [quant][graphmode][fx][fix] Fix support for custom module (#59041)
06af7618e7 : [quant][graphmode][fx][refactor] Remove Quantizer class from convert (QuantizeHandler) (#59040)
0a26781966 : fix numpy compatibility in test for `torch.kthvalue` (#59214)
e9e1bb1a4e : Fix device of info tensor for torch.linalg.inv_ex with MAGMA backend (#59223)
50e6ee3ca2 : [quant][graphmode][fx][refactor] Remove Quantizer class from quantize_node (#59039)
2d8f0d966f : CUDA support in the CSR layout: CUDA addmm/matvec (#59012)
3efefc4016 : [CUDA graphs] Makes sure all graphs tests call empty_cache() at some point before capture (#59233)
1d37f41567 : [quant][graphmode][fx][refactor] Remove _prepare from Quantizer class (#59038)
970096b624 : [Reland] Adds an aten::_ops namespace with unambiguous function names (#59018)
8805093ec5 : use long index type for index_add_cuda deterministic path (#59254)
20348fb32e : [quant][graphmode][fx][refactor] Remove find_matches from Quantizer class (#59037)
7d64fc675b : [quant][graphmode][fx][refactor] Remove fold_weights from Quantizer class (#59036)
8af6281201 : DOC Adds register_module_full_backward_hook into docs (#58954)
6e7dae9cec : [nnc] Enable CPU fusion inside Facebook, take 3 (#59253)
cc4891804c : [quant][graphmode][fx][refactor] Remove save_state and restore_state from Quantizer class (#59035)
336ac9496f : Fix mismatch in README.md Docker Image section (#59199)
95c26b2806 : [ROCm] disable test test_Conv2d_groups_nobias for ROCm (#59158)
3d521e8b40 : [quant][graphmode][fx][refactor] Remove prepare_custom_config from Quantizer class (#59034)
a5dcd3c4b7 : Revert D28240105: [pytorch][PR] Fix DistributedSampler mem usage on large datasets
a0ce8da26e : Fix DistributedSampler mem usage on large datasets (#51841)
5a42a97c49 : Add NCCL_ASYNC_ERROR_HANDLING as an environment variable (#59109)
5f1117226f : DOC Update register_buffer/parameter docstring explaining None (#59015)
e4b2684331 : [quant][graphmode][fx][refactor] Remove patterns from Quantizer class (#59033)
83892c1861 : [quant][graphmode][fx][refactor] Remove node_name_to_scope from Quantizer (#59032)
3826f7e8e0 : [quant][graphmode][fx][refactor] Remove quantized_graph from Quantizer (#59031)
1b4586ee20 : [quant][fx][graphmode][refactor] Remove modules from Quantizer (#59030)
aa857850bb : Add check_env, getenv api (#59052)
fd2a36369a : Fixed torch.nn.MultiMarginLoss equation format error (#59188)
06399d441d : Create EngineHolder for serializing and running TRT Engines with PyTorch
e9e5588588 : Improve Tensor traverse to traverse its grad_fn when possible (#58271)
65748f81c9 : Un-verbose the build (#59235)
7523728368 : [quant][graphmode][fx] Factor out run_weight_observer (#59029)
10fc42eacc : [quant][graphmode][fx] Merge quant_env and env (#59028)
afdfd2288a : Revert D28767060: [pytorch][PR] Migrate renorm to ATen (CPU and CUDA)
0b040e17e5 : More user-friendly error messages (#59106)
cab4849463 : [caffe2][glow] Share info about current batch_size (#58902)
7fb3385f4b : Automated submodule update: FBGEMM (#59170)
74ec50893d : Migrate renorm to ATen (CPU and CUDA) (#59108)
223725cfb0 : OpInfo: div - port pending method_tests entry (#59173)
6d45d7a6c3 : Enables previously "slow" `gradgrad` checks on CUDA (#57802)
ef40757de3 : OpInfo: `zero_` (#58731)
2aeb16c13a : [fix] i1-i1e ROCm failure: mark array as const so that it is available for host and device (#59187)
fea7a79e0b : [special] Add ndtr (#58126)
2a78f6376c : TensorIterator: Reduce serial_for_each static overhead (#58909)
445e838210 : OpInfo: resize_, resize_as_ (#59176)
ea465f7378 : OpInfo: true_divide and minor fix (#59154)
aaccdc3996 : SparseCsr: Fix some uses of deprecated Tensor methods (#58990)
6ee9466d3a : OpInfo: tensor_split: port remaining method_test entries (#59133)
96c549d1c6 : Replace `dim_apply` with `TensorIterator` (#58656)
cab65ea3b9 : OpInfo: renorm (#59079)
5c18994674 : [special] Add `i1` and `i1e` (#56352)
27009d6129 : [TensorExpr] Add NNC lowerings for `aten::view`, `aten::reshape` and `aten::expand_as`. (#59157)
355b24438c : make vector_norm backward call norm_backward (#59135)
9fc0c5a54a : OpInfo: tril, triu (#59145)
1871d4e604 : avoid explicitly casting low precision inputs to fp32 in norm (#59134)
d68df54269 : OpInfo: fill_ (#59138)
a427820350 : [NNC] Added triangular_solve external call + fixed permute (#59131)
c9af4c2636 : OpInfo: where (#58349)
b977a3b66d : [c10d] Split custom class bindings out of python binding code (#58992)
ab372ba510 : [iOS GPU] Add debug information to track memory allocation exception (#59112)
41054f2ab5 : CUDA support in the CSR layout: sparse_to_dense/add_sparse_csr (#59011)
9c83e4160d : Use some c10::ThreadLocal to avoid crashes on old Android toolchains (#59017)
4b3d17c0a2 : Include Macros.h in ThreadLocal
0c1420aa3c : OpInfo: `fmod` and `remainder` (#57941)
657b75d155 : Revert D28700259: [pytorch][PR] Migrate nonzero from TH to ATen (CPU)
4e543d017f : Move remaining \*Sort\* in `THC` to `ATen` (#58953)
f3aa61b9ed : Add peephole for len(x.size()) (#59051)
b9dc51863c : Add more shape functions for mobilenet (#58932)
0ebc665305 : Switch symbolic shape registry to operator map (#58890)
d8cbba3ee2 : [JIT] Disable Complete Shape Inlining For Testing Purposes (#56966)
f66fbb1e2e : Add unary/binary ops necessary for mobilenet (#56828)
40f851c53e : Use dataclasses to simplify ShardingSpec (#58893)
89d78851e6 : [quant][refactor tests] Move qtensor serialization tests from test_deprecated_jit (#59089)
886a2ddc83 : [quant][refactor tests] Clean up test_quantization.py (#59088)
f993ceffb5 : TensorIteratorReduce: Avoid tensor operations in parallel_for (#58655)
ef32a29c97 : Back out "[pytorch][PR] ENH Adds dtype to nn.functional.one_hot" (#59080)
3d2b55553b : Retiring _module_copies field in DDP reducer. (#59094)
c6c563fc26 : Added minor fixes to Az DevOps Build Logic (#59016)
61f946bba6 : don't copy indices to the self device in dispatch_index (#59059)
16ae6cad3d : Revert D28615349: [pytorch][PR] Add a ci/no-build label
bae06a0293 : Add a ci/no-build label (#58778)
3e2db56dcf : [docs] document dim argument to tensor.size() (#58777)
18302bcdf3 : Add script to cancel workflows (#59019)
920619dc2b : [PyTorch] Save a refcount bump in meta functions for addmm and mm (#59063)
2c17b6a0fe : [ONNX] Enable support for roll() op. (#58389) (#58697)
1aabb8f98c : [ONNX] handle aten::_set_item on Dict in convertInplaceOpsAndTrackAlias (#58317) (#58696)
0a6828a306 : [ONNX] use consistent quoting for string literals (#57757) (#58695)
b27fc0ff85 : [ONNX] Improve lower tuples and handle control flow (#57650) (#58694)
57c9355a0d : [ONNX] Update special post process for SequenceInsert after SequenceEmpty (#56965) (#58693)
b8c96e6b08 : Support symbolic for conv_tbc (#58359) (#58692)
d101816fdd : [ONNX] RNN scripting (#57564) (#58691)
51d14b6859 : [ONNX] Update instance_norm2 symbolic to handle track_running_stats=True (#55051) (#58690)
ba694520e5 : [ROCm] fix JIT codegen (#57400)
7e4e648c2a : Enable NNC fusion for relu6 (#58773)
0106fe3934 : avg_pool2d: port to structured (#58987)
d935259171 : Remove redundant code from LayerNorm Fake Op. (#59054)
b14c3205fd : [JIT] Add torch._C.ScriptDict (#52659)
95b1bc1009 : Migrate nonzero from TH to ATen (CPU) (#58811)
934f6dca65 : Fix pthreadpool guard test (#58977)
e89b150a39 : [typing] Pyre fixes for remote_module (#59046)
11aa5e4f66 : Add underscores to some internal names (#59027)
617b74aa35 : [nnc] LLVMCodeGen for any target (#58713)
a1806134a7 : [QAT] Fix the runtime run `cannot resize variables that require grad` (#57068)
25ac647f64 : [QAT] Auto format torch/quantization/observer.py (#57067)
9baf75c86e : add test_filename field in scribe upload (#59024)
7461792c4a : adding upload step on all build jobs (#58998)
3d70ab08ae : bump out repeat_interleave BC allow date (#59057)
74089a0d34 : [quant][refactor tests] Move quantization tests into subfolders (#59007)
e146ed21fb : [quant][refactor tests] Move TestModelNumerics to a separate file (#59000)
b6c5c5d90e : [quant][refactor tests] Rename test_numeric_suite and equalization tests (#58999)
82d587f434 : [quant][refactor tests] split test_workflow_module into test_workflow_ops and test_workflow_module (#58963)
0e9a295b41 : Refactor GlooDeviceFactory::makeDeviceFor... (#58996)
9e60c7dee3 : Add docstring for is_inference_mode_enabled (#59047)
1bd22e28b3 : BFloat16 support for torch.sort (#58196)
f4a890d7c6 : fix unique for discontiguous inputs (#59003)
b435a27fb7 : CUDA support in the CSR layout: constructors (#59010)
7c17e1dd90 : [fx2trt] Quantized uru on gpu (#58965)
58d1b3639b : fix nn.MHA scriptability (#58727)
ac67cda272 : [PyTorch] Rename TI::add_borrowed_{in,out}put to TI::add_{in,out}put (#58608)
7db36c0792 : [PyTorch] Add temporary guardrail to borrowing_ op variants on TensorIterator (#58607)
bed0eb5428 : [PyTorch] Add TI::add_owned_{in,out}put for clarity & use them (#58606)
4f390eb6b6 : Document factory_kwargs in nn.Quantize + remove Attributes section (#59025)
a749e8edf5 : Add UninitializedBuffer to nn docs (#59021)
de22657e1c : [PyTorch] Replace RecordFunction shouldRun callback with atomic bools (#56504)
ac07c6451e : [nnc] Use BufHandle in loopnest.cache_accesses python API (#59006)
b93e7a7602 : concurrency fixes (#58961)
97c1179c9d : Revert D28549240: [typing] Pyre fixes for batch_distributed_inference
0d5527de7a : Back out "Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"" (#58923)
b420ded66f : ShardedTensor framework for ChunkedShardingSpec (#58517)
671c224b0a : [typing] Pyre fixes for batch_distributed_inference
be47060af9 : [remove xla from codegen] rename aten_xla_type.h -> DispatchKeyNativeFunctions.h (#58568)
86ce2950f6 : remove xla-specific stuff from codegen (minus CPU fallback) (#58064)
511979df85 : Define the SYCL device version of __assert_fail when NDEBUG is defined. (#58906)
2e2a75720b : [structured] remainder (#58732)
29487ac7ff : Add 11.3 binaries without conda (#58877)
24508337f4 : Revert D28643215: Adds an aten::_ops namespace with unambiguous function names
12418a4f86 : Back out "Revert D28664514: [pytorch][PR] various TensorIterator speed improvements"
17fb651a3b : Make torch.Tensor(torch.tensor(1.0)) work (#58885)
e24362746a : [nnc] Concat input shapes must be known to fuse (#58974)
8398ebaa86 : Revert D28664514: [pytorch][PR] various TensorIterator speed improvements
c06d2afa99 : [caffe2] Add support for int32 lengths in BatchSparseToDense (#58062)
444e195b6d : Use docker base for clang-lint in CI (#58964)
b8d56572a1 : Open json config file in context manager (#58077)
8130f2f67a : DOC Adds code comment for _ConvNd.reset_parameters (#58931)
950e67fa43 : [quant][refactor tests] Move test_qat_module into test_quantize_eager_qat (#58928)
cc07825a21 : [quant][refactor tests] Split test_quantize into test_quantize_eager_ptq, test_quantize_eager_qat and test_fusion (#58927)
28740869a1 : Adds an aten::_ops namespace with unambiguous function names (#58092)
032d6b0643 : Revert D28112689: CUDA support in the CSR layout: constructors
bbdc428db2 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
9ba9a16700 : [PyTorch Edge] Use stream as backport_vi_to_vi-1 interface (#58790)
1416e57465 : CUDA support in the CSR layout: constructors (#57274)
be4ba29d49 : Detect overflow in numel of sparse COO tensor (#57492)
948df6c7a9 : [numpy] torch.i0: promote integer inputs to float (#52735)
49c2da0ee0 : [testing] improve broadcasts_input error message (#58295)
083d3bb93b : [torch][repeat_interleave] Add to exception list in backward compat check (#58966)
26c1f0f72e : [skip ci] Skip debug info on PRs (#58897)
32273e806a : Ensure NativeFunctions.h codegen output is deterministic (#58889)
db5e5781ad : replace all remaining occurrences of deadline=1000, to prevent test flakiness
60af6e928a : [PyTorch Edge][Version] Fix torchscript model after backport (#58892)
fb120493b1 : Make Scalar.to<> for invalid types a compile-time error (#58726)
36a77580f5 : [docs] Clarify batch_first behavior for nn.LSTM, nn.RNN, and nn.GRU (#58809)
7179e7ea7b : [CMake] Prefer third_party/pybind11 by default (#58951)
45aa54d83c : relax test deadlines
b4b95fc87a : Expose `cudaMemGetInfo` (#58635)
133133afa8 : [PyTorch] Extract non-template parts of torch::class_ (#54548)
ec89bf6535 : .github: Ensure 7zip install for windows (#58924)
ede3f5421f : [Pytorch Delegated Backend] Save function name in debug info (#57481)
813adf1076 : [Pytorch Delegated Backend] Save operator name and function name in (#57441)
a7a5992d7d : Add no-grad inference mode note (#58513)
5268b5a29a : Add parsing logic for `Tuple[()]` annotation (#58340)
b9d1ad9c78 : OpInfo: `diag_embed`, `diagonal` (#58642)
f976275858 : Run pthreadpool with _NoPThreadPoolGuard on the same thread (#58759)
b703f1e02d : [NNC] Add documentation for splitWith APIs (#58270)
dd7bbe1a63 : [NNC] Make splitWithMask transform in-place (#58269)
e2467cc43e : [NNC] Make splitWithTail transform in-place (#58268)
6b6a27e430 : [jit] Add Python API for ScriptProfile (#57398)
c88333484f : [resubmit] masked_scatter thrust->cub (#58865)
fedf6f2db2 : Check memory overlap in sort for large input sizes (#58327)
7eade660c6 : [PyTorch] Reduce errors of `foreach` functions (#56993)
8a28bbeeb9 : various TensorIterator speed improvements (#58810)
09a8f22bf9 : Add mish activation function (#58648)
bf269fdc98 : Re-enable torchdeploy oss tests and move to per-PR cuda11 job (#58872)
19bcbfc5cf : [c10d] Use pg wrapper in detailed debug mode (#58281)
aad2ad883a : Disable test_nccl_errors_blocking_abort (#58921)
470160ad78 : [Pytorch] Update FuseLinear to map source range information (#58492)
e067675167 : [Pytorch] Provide API to preserve source range and callstack information during graph rewrite (#58300)
2ef9a1df22 : Increase minimum number of warmup runs to 2 (#58801)
09a1b1cf87 : Forward AD formulas batch 1 (#57768)
b4f3a989da : [torch][repeat_interleave] Fix ambiguous function call (#58881)
3dbfaddfa1 : Port elu_backward to structured (#58660)
5850553bc0 : Port hardsigmoid_backward to structured (#58484)
3f0b7e0feb : Port leaky_relu_backward to structured (#58483)
ad27513430 : Port softplus to structured (#58482)
0b8931fe4b : [torch][JIT] Predicate uses of RPC APIs on `torch.distributed.rpc.is_available()` (#58887)
c502f49535 : Fix failing torch deploy tests and reenable. (#58871)
cf395c0718 : [c10d] Introduce ProcessGroupWrapper (#58224)
c00eefb6c7 : [Static Runtime] Clean up and fix bugs in Static Runtime (#58829)
de845020a0 : fix docstring for fusing functions (#58638)
2b0ec9c3cf : Reapply "[jit] Implement ScriptProfile to collect instruction profiles." (#58783)
705dd9ffac : [PyTorch] Migrate remaining stray uses of TI:add_output to borrowing (#58605)
12bb1e86ed : Make c10::ThreadPool::available_ atomic. (#58457)
a5250425e0 : [quant] Eager mode equalization support for ConvReLU and LinearReLU (#58792)
b593dd2027 : [Gradient Compression] Re-enable test_ddp_hook_parity_powerSGD on Gloo backend (#58882)
a566005679 : [skip ci] Update readme to use hud.pytorch.org (#58835)
f29e75c4dc : [reland][quant][fx][graphmode][refactor] Remove qconfig_map from Quantizer (#58455) (#58756)
76f03bc42f : Fix `torch.finfo.bits` typo in stub (#58819)
bc2ee078d1 : Update Gloo submodule (#58853)
51b7224f8f : [vulkan] Add max_pool2d op (#58806)
a679bb5ecf : Refactor local lint (#58798)
a7f4f80903 : ENH Adds dtype to nn.functional.one_hot (#58090)
e4be80c1b8 : simplify cpu_kernel to not have contiguous special case (#58830)
1c5f63d86d : [Pytorch Edge] Model Ops compatibility api (#57501)
2a456e4f49 : [skip ci] Move debug wheels out of package dir before test (#58685)
2733555ed1 : replace all_gather with more efficient collective api _all_gather_base (#57769)
c58709b7bb : Helper function for skipping module parameter / buffer initialization (#57555)
277f587496 : rename benchmark_cpp_extension (#58708)
a083933d2a : .github: Add windows.8xlarge.nvidia.gpu (#58781)
8ae4d07dac : .circleci: Disable windows CPU builds for CircleCI (#58855)
1fca1545d4 : fixing csr addmm bug (#58768)
2dda8d7571 : Move cublas dependency after CuDNN (#58287)
bb4770462f : .github: Enable Windows workflow for pull_request (#58418)
007fe949aa : Adding a new include directory in BLIS search path (#58166)
0e16087064 : [DataLoader] Fix bugs for typing (#58450)
5c7dace309 : Automated submodule update: FBGEMM (#58161)
74c12da451 : add deterministic path for scatter_add_cuda for 1D tensors (#58761)
50ded095e4 : [deploy] temporarily disable deploy tests (#58832)
a7fdd487e5 : Port `kthvalue` tests to `OpInfo` (#58654)
4709fdb117 : Add GenericShardingSpec for generic tensor sharding. (#57409)
0d6fa1adc5 : Introduce ChunkShardingSpec as a model sharding specification. (#55728)
c5a1f04367 : Enabled BFloat16 support for cumsum, logcumsumexp, cumprod, cummin & cummax on CUDA (#57904)
ee3ea31f12 : OpInfo: split, split_with_sizes (#58184)
52a8031e8c : [ROCm] disable test test_Conv2d_groups_nobias_v2 for ROCm (#58701)
fa0b89bbf7 : Change list striding kernel implementation to handle optional integers (#58536)
28840b9a44 : [Gradient Compression] Disable test_ddp_hook_parity_powerSGD on Gloo backend (#58802)
4ca4640bae : [torch][repeat_interleave] remove stream synchronization if output size is given (#58417)
c1c9be16c4 : port mm to structure kernel (#57755)
f9e8dc005a : OpInfo: clone, contiguous (#58390)
a70020465b : adding test_sparse_csr to run_test (#58666)
22776f0857 : [PyTorch] Remove device check from a few indexing methods (#58800)
796c97a88f : [Pytorch Delegated Backend] Add python binding for (#57156)
d6d726f781 : [Pytorch Backend delegation] Add api for backend lowering to query debug (#55462)
e7c35a3363 : Revert D28617214: [Gradient Compression] Do not skip the comm hook tests on Gloo backend
6093161158 : Separated out working tests from not working tests for NNC OpInfo (#58788)
dc8bc6ba4b : [PyTorch Edge] Check if open paren ( occurs in an operator name string (#58687)
4c961beacb : Revert D28474878: Always use intrusive_ptr for Message (1 out of 2)
a6b9268f31 : Revert D28474879: Always use intrusive_ptr for Message (2 out of 2)
c1a9befba2 : Revert D28474880: Allow Future::then to return pre-extracted DataPtrs
a1719be07f : Revert D28474877: Provide pre-extracted DataPtrs when completing a Future with a Message
341f83d6a2 : Revert D28474981: Create CUDA-aware futures in RequestCallback
7a8336a5a7 : Revert D28474983: Set streams when invoking UDFs
89c81c5bba : Revert D28574083: Set and propagate devices in RRef completion future
b8a04e25ec : Revert D28474982: Make TP agent use streams from Future when sending response
dceaf98e79 : [torch][package] Fix importlib.resources.path for python <3.8.8 (#58718)
071d49a970 : Document monitored barrier (#58322)
84b6c629d3 : [lint] Move shellcheck to its own step (#58623)
b842351b4f : Skip SVE acceleration on M1 (#58785)
3e88acbf05 : [Gradient Compression] Do not skip the comm hook tests on Gloo backend (#58784)
041bff77b6 : Make tools/actions_local_runner.py PY-3.X compatible (#58787)
829a096cd7 : Fix arange functions for VSX specializations of Vec256 (#58553)
e094980060 : Makefile should use python3 instead of python alias (#58786)
1d885fbd0e : Update GraphTask::owner_ in a single thread for DistEngine. (#58625)
d9aa0b53eb : [PyTorch] Migrate TI usage in ATen/native/quantized to borrowing (#58307)
3ddb4b3e68 : [PyTorch] Migrate TI usage in ATen/native to borrowing (#58305)
e574c2c025 : [quant][fx] Validate qconfig_dict keys (#58566)
ed4cda0183 : [pkg] opt into autoformat
e5ba9307b7 : catch exception when running print regression (#58751)
378b2af93d : T90561249: Enforce kernel launch checks for OSS CI (#58465)
19a7472702 : Make TP agent use streams from Future when sending response (#58428)
23df70359a : Set and propagate devices in RRef completion future (#58674)
ab1e958d20 : Set streams when invoking UDFs (#58427)
027c68ef00 : Create CUDA-aware futures in RequestCallback (#58426)
bdf6a4bffd : Provide pre-extracted DataPtrs when completing a Future with a Message (#58425)
a0ee299d92 : Allow Future::then to return pre-extracted DataPtrs (#58424)
ebf55a7d13 : Always use intrusive_ptr for Message (2 out of 2) (#58423)
4d704e607d : Always use intrusive_ptr for Message (1 out of 2) (#58422)
35ea8779da : Prevent using anything other than intrusive_ptr for Future (#58421)
44daf1930b : Migrate remaining shared_ptr<Future> to intrusive_ptr (#58420)
59454ce36e : Make remaining autograd methods return futures (#57861)
d6d2fb3323 : Make remaining RRef methods return futures (#57860)
797dff55b5 : Unify fetching RRefs (#57859)
b9b41f6d1b : Deduplicate Python object serialization (#57858)
cd9dbbd93a : Simplify process(Script|Python)(Remote)?Call (#57857)
c96a05d148 : Unify assignment of OwnerRRef result (#57856)
e220a1bbcd : Make processPythonExecution return a future (#57855)
20d02cb7dd : Remove getScriptRemoteCallType (#57854)
60fc37393e : Simplify OwnerRRef completion (#57853)
ea2f5bbb4c : Unify async execution for JIT functions (#57852)
bfdc279134 : Unify invoking JIT functions (#57851)
77428159f5 : Unify invoking JIT operands (#57850)
f94f1db938 : Make some methods of RequestCallback return void instead of bool (#57849)
4ac18f6710 : Centralize setting messageId in RequestCallback (#57848)
f6844eafce : Make RequestCallback collect Futures from methods, rather than providing them (#57847)
7e1f2b33ce : Add helpers to manipulate futures (#57846)
1d7cf4b248 : Reduce overhead when Future invokes callbacks inline (#57638)
ce2f1c29f9 : Introduce thenAsync method on Future (#57637)
d7d0fa2069 : Fix typo. (#58728)
13c975684a : c10/util/thread_name.cpp: pthread_setname_np requires Glibc 2.12 (#55063)
76ce925257 : [c10d] Fix monitored_barrier with wait_all_ranks (#58702)
9e261de630 : Revert D28547564: [pytorch][PR] masked_scatter thrust->cub
5313bafd31 : [JIT] integer value refinement (#56438)
483ea176b3 : Factor out isDominatedBy (#56437)
0d9f1c1ec6 : Add Value * == Value * peephole (#55978)
391603d883 : Factor out non tensor peephole (#55977)
5cebf29b4e : Add list len refinement (#55926)
9fd2306036 : Add handling of symbolic shapes (#55925)
f39471a171 : Initial Symbolic Shape Analysis (#54809)
72ae924fad : Added sublist support for torch.einsum (#56625)
fc804b5def : Revert D28133579: [jit] Implement ScriptProfile to collect instruction profiles.
e56d3b0238 : Added OpInfo tests for NNC (#58719)
d88d321ee3 : More robust slicing logic for nn.ModuleList (#58361)
b301558410 : [Reducer] Remove replica size == 1 checks (#58603)
1d67c6d639 : [DDP] Remove train call to module copies (#58595)
88c76b43fb : [Reducer] move comment to the right place (#58594)
d83c5a5c7f : Format reducer.cpp, hpp (#58593)
6d97a80dd2 : [fx][graph_drawer] Improve graph drawer coloring and tensor_meta handling (#58699)
5455df2b99 : [codemod][dirsync] Apply clang-format
21a9334034 : Revert D28497967: [quant][fx][graphmode][refactor] Remove qconfig_map from Quantizer
1cf8f7a439 : [quant][fx][graphmode][refactor] Remove qconfig_map from Quantizer (#58455)
62adf9e1c9 : [Reducer] Completely remove VariableIndex (#58592)
8e4fc0063a : [Try] [PyTorch Edge] Trim unused code related to CUDA and HIP Interfaces (#58689)
773cfae93b : Tag PyObject on TensorImpl per torchdeploy interpreter (#57985)
fe8e5eb260 : Change native functions to take `c10::string_view` args instead of `std::string` (#57680)
d1d24304ee : [Caffe2] [Easy] Fix comment on caffe2_serialize_using_bytes_as_holder to reflect correct types
db67699ae6 : [Pytorch Edge] NAME -> SCHEMA (#58604)
0ede83db7a : enable torch.cpu.amp.autocast (#57386)
b6dcdeacc9 : [quant][graphmode][fx] Move qat_swap_modules outside of Quantizer (#58454)
fdc5dfdd50 : [PyTorch] Migrate TI usage in ATen/native/cpu to borrowing (#58303)
7c15d3206d : [PyTorch] Add TI::borrowing_nullary_op and use it (#58280)
618be18a41 : Enable the quantization on XPU devices (#54857)
ce3788d6a5 : Add `#pragma once` to CUDA foreach headers (#58209)
f879e70fc1 : [quant][fx][graphmode][refactor] Factor out generate_qconfig_map to qconfig_utils.py (#58453)
bf1c936e06 : [static runtime] out variant for full_like (#58079)
5211eeb22b : Support aten::leaky_relu for TE (#58464)
4668d09ca6 : [quant][graphmode][fx] Quantize the output of statically quantized fp16 op in QuantizeHandler (#58445)
6edd49a8e8 : [Android]Removed dependency with AppCompat. (#58527)
d84121421e : [third-party] Update nccl to 2.9.8 (#58667)
bbf92e6176 : Add missing .to_sparse(ndim) gradient (#58413)
8a3d9962e0 : Enable `ceil`, `floor`, `frac`, `round` & `trunc` for BFloat16 on CUDA (#57910)
034a238bab : [jit] Implement ScriptProfile to collect instruction profiles. (#57397)
e8c6a65074 : Adds grid_sampler to autocast fp32 list for 1.9 (#58679)
691c139144 : Do not use TF32 matmul in linalg and DDP tests (#56114)
a7f06e1e55 : Added statistic related to out variant nodes
056287aec4 : turn off deadline for adagrad test
9db64e6e56 : Revert "Striding for lists Part 2 (#49352)" (#58523)
9123229684 : Cleanup functional.py after lu_unpack was removed (#58669)
0e1bed364d : [nnc] Use int64 to compute matmul flops heuristic (#58676)
a60ce98a2e : Remove opinfo warning from floor_divide (#58682)
1981904c8d : [Static Runtime] Check input container type in aten::__getitem__ (#58639)
84500d03d2 : .github: Upload /download large artifacts to s3 (#58506)
151ec56311 : ENH Adds check for input sizes in cosine_similarity (#58559)
3c55db8065 : Add Deploy to PredictorContainer (#58503)
1fc3e1e1fb : Abladawood patch 1 (#58496)
5152cf8647 : masked_scatter thrust->cub (#56750)
4942fe0290 : [DataLoader] Introduce MapMapDataPipe functional datapipe (#58258)
faa7d3793d : [DDP] Support not all outputs used in loss calculation (#57081)
abb215e229 : Fix dtype inference in sparse_csr_tensor_ctor (#58631)
9ac0bd23a2 : Fix bug in test_fx_experimental codegen (#58587)
bf00d26deb : Enables builds with Compute Library backend for oneDNN (#55913)
145a6f7985 : DOC Adds code comment to clarify nn.Linear.reset_parameters (#58487)
5caccbe39e : [pkg] Catch exceptions where dependency resolution gets invalid imports (#58573)
703f24397b : [pkg] simplifications to broken dependency handling (#58572)
c4f0c5ee50 : Quote in setup-ci-env (#58637)
8615fd65e3 : Fix GIL issue when acquiring multiple sessions. (#58584)
24786bd6ef : Make torch::deploy work with or without cuda (#58493)
fbc235c226 : port `sgn` to structured (#58197)
b5e39bceec : Port fmax & fmin to structured kernel (#58458)
e179a56839 : [FX Splitter] dump final graph and print operator stats via to_glow API
9a622f4cd9 : refactor ASGD to use functional API (#58410)
208b36f109 : remove redundant getDispatchKeySetUnboxed(eligibleKeys) (#58535)
47c566ebb1 : Rename namespace `vec256` to `vec`, struct `Vec256` to `Vectorized` (and other related classes/structs) (#58438)
a6b358d53b : Revert D28461013: [nnc] Enable CPU fusion inside Facebook, take 2
36adc3f04d : [FX] Add APIs to mutate specific args/kwargs (#58571)
296d2a4399 : [THC] Rename THCTensorMathMagma from cu to cpp (#58521)
ae99640a78 : Added publishing of test results and minor fixes to Az DevOps Build Logic (#58436)
b9b8522e00 : [profile] fix recorded data type (#58531)
8de8b492f7 : Revert "Move Azure MultiGPU tests back to nightly (#58242)" (#58451)
3113a1de4a : Fix some tensor operators to return `NotImplemented` for invalid inputs (#58216)
6c70cbedb6 : step 0 of cuDNN v8 convolution API integration (#51390)
954d39ba38 : [ATen][Quant] Pass at::Tensor by reference (#58284)
a91375432a : model_dump: Accept variable-length debug info (#57660)
ab1fdbefe1 : model_dump: Use DumpUnpickler.load instead of .dump (#57659)
53078924ad : model_dump: Add a section that summarizes tensor memory usage (#57658)
ef4e6036bc : model_dump: Handle dict rendering (#57657)
72ff3163bd : model_dump: Handle torch.device objects (#57656)
a380575f5b : model_dump: Refactor renderTensor into a helper method (#57655)
3ff76af23c : model_dump: Implement "Hider" properly (#57654)
3f0b081636 : move code to Blas.cpp, clean up THC magma (#58526)
703cfdc9ed : [JIT] improve documentation (#57991)
79a258f448 : s/foward/forward/g (#58497)
ccad77aa22 : Added OperatorMap for mapping Operator to any template <T> (#58060)
1ba05efd26 : [Reducer] Remove some unused variables (#58524)
4cf9b11022 : Fix issues regarding binary_checkout (#58558)
baf05c3f5e : Split CUDA SpectralOp (#58459)
029bec4505 : [lint] Fix uninitialized variable lint error in `Module.cpp` (#58499)
b45a105acb : Automated submodule update: tensorpipe (#58477)
4d7abdbdad : [Quant] Add out variant for int8 quantized::linear (#58282)
c76405d3b1 : [nnc] Enable CPU fusion inside Facebook, take 2 (#58347)
dcfc2050bd : VaryingShape<Strides>::isComplete() needs to consider whether each Stride is complete (#58510)
3d20ddfe92 : [nnc] Do not fuse unsqueeze with variable dim (#58346)
2ddd841635 : [nnc] Make the pretty printer prettier (#57874)
3a3959d253 : [jit] Add a utility class SourceRef to represent Source as keys (#57396)
0362b753db : [BE] Use __func__ as checkAllSameGPU() 1st arg (#58502)
ea0f7c4720 : move unused parameters to end of bucket orders when rebuild buckets for static graph (#58097)
a7b62abeb0 : [PyTorch Edge] bytecode version bump to v5 and enable share constant table (#57888)
9eee782cb6 : [nnc][scripts] Add a script for bisecting the TE fuser pass (#58357)
7d78d72d7b : removing old comment (#56430)
a07cd22efb : Comment why render_test_results is its own step (#58505)
8efaab1b83 : Add long tensor type to AddFakeFp16 Op (#58504)
4b859cbca1 : [NNC] Do not optimize conditionals when the corresponding loop is not normalized (#57675)
a71b99b50d : [NNC] Add a method to check if a loop is normalized (#57674)
3fe72d30dc : [NNC] Optimize conditionals that correspond to the form generated for aten::cat op. (#57673)
db42ec4297 : [Pytorch Sparsity] Add sparse sources to build target
ad97fd8031 : Support symbolic diff for leaky_relu (#58337)
e1551f1678 : Clarify .github/scripts/generate_ci_workflows.py (#58498)
5fcf49f596 : [PyTorch] Add a guard rail to TensorIterator::add_borrowed_{in,out}put (#58279)
03f2f0f88f : [PyTorch] Migrate remaining CUDA TI usage to borrowing where possible (#58278)
1fd256dc3b : [PyTorch] Migrate CUDA indexing TI usage to borrowing (#58277)
029289bd6c : [PyTorch] Migrate TensorAdvancedIndexing TI usage to borrowing where possible (#58276)
439ba27dea : [PyTorch] Migrate all extant uses of build_binary_float_op to build_borrowing_binary_float_op (#58273)
8a4a511ff5 : [PyTorch] Migrate all extant uses of build_binary_op to build_borrowing_binary_op (#58272)
07da584dbd : Fix KeyError returned by _maybe_get_last_node_only_observer (#58443)
46484e8dfe : Simplify .github/scripts/generate_ci_workflows.py (#58491)
f7c15610aa : Collect kernel version (#58485)
92e36240f5 : fix nonzero perf regression (#58468)
4ce8378ec5 : [local lint] Remove success checks in tests (#58490)
afe23b8f8b : Fix alpine image (#58462)
821a97595b : fx quant: improve performance of all_node_args_have_no_tensors (#58461)
e059fd40a8 : Remove master documentation from being indexable by search engines (#58056)
52b45b7655 : Revert D28494073: [Gradient Compression] Do not skip the comm hook tests for Gloo/MPI backends
34d6618386 : [NNC] Fixing a bug in simplifier (#58291)
df44f015fe : [Gradient Compression] Do not skip the comm hook tests for Gloo/MPI backends (#58444)
c38616491f : Conservatively move all suitable prim ops from full-jit to mobile, and make them selective. (#58353)
b5a834a739 : [Pytorch] Build lite interpreter as default for iOS
8a3fb2689f : Wrap torch::deploy API functions in safe rethrow macros (#58412)
7b73fdf597 : [FX] Fix retracing wrapped functions (#58061)
5fa4541c65 : Make new_ones an operator (#58405)
0547a3be63 : Change link order for BUILD_SPLIT_CUDA option (#58437)
af463d2235 : Add shape documentation for CosineEmbeddingLoss (#58403)
e24dee00d4 : add kernel launch checks after each kernel launch to silence the check (#58432)
7dd08504f6 : [package] fix persistent_load error (#58439)
314a578154 : Clang format distributed_c10d.py (#58435)
b6d3929b51 : [ATen] Use MaybeOwned<T> in at::argmin/argmax (#58338)
6989eb60e5 : Remove timeouts for C2 tests
4310decfbf : .github: Add initial Windows CPU GHA workflow (#58199)
c156a4ffaa : fx quant: fix crash on output dicts and lists (#58416)
a1cacf3b5d : fx quant: remove test debug logs (#58415)
3d12ab452e : [ONNX] Fix split export in opset13 (#56277) (#57605)
0c3db1cb33 : [Pytorch] Build lite interpreter as default for Android
d645088f2f : [torch] Format repeat_interleave op files (#58313)
06c1094ea0 : Merge CreationMeta MULTI_OUTPUT_SAFE with MULTI_OUTPUT_NODE (#58285)
3507ca320b : Remove unused python2 shebang (#58409)
98cc0aa6b0 : Use torch.allclose to check tensor equality (#58429)
50f9a1812e : Enable NNAPI in internal build (#58324)
532632ca26 : Don't bind Android NNAPI on Apple platforms (#58323)
1891e4bf1e : [Pytorch] Remove run_on_bundled_input (#58344)
443ce1e8a1 : Improve error message when Proxy object is iterated (#58302)
a4ce85ad68 : Chown workspace in calculate-docker-image (#58398)
e8981e7c5d : Improve `CONTRIBUTING.md` (#58396)
9afe9fba29 : Reland OpInfo support for forward AD (#58304)
1a9efbbc92 : generate inplace/out kernels for xla (#57510)
9354a68e7d : [codegen] split out backend-specific information from NativeFunction in the model (#57361)
0db33eda2a : remove bridge API from codegen (#55796)
3d9f10f530 : [external codegen] better yaml error messaging, added explicit error message tests (#56597)
4dc1b8e06b : add _to_cpu() operator (#55795)
cce156ac94 : .github: Make on_pull_request a conditional block (#58363)
c29e6d37e8 : [Vulkan] Switch to Image2D for Convolution biases (#57201)
2879f0f780 : [Vulkan] Use 2D tensor views when possible (#57198)
95fd1e9045 : reduce number of randperm template instantiations (#58362)
a3b33139da : [Pytorch] Add non mutator bundled inputs method (#58408)
ae9b66dd94 : Fix TP agent not recording outgoing tensors with caching allocator (#58384)
affed3b04d : Prevent lock inversions with GIL in Future (#58391)
5a238eb96e : Fix deadlock in Future due to lock inversion with GIL (#58382)
eab59bae15 : Fix cmake_minimum_require in libshm (#58306)
bef0e07e09 : Remove unused Dockerfile_runtime (#58333)
4454b18e14 : Revert D28371127: Wrap torch::deploy API functions in safe rethrow macros
432676599c : Stop installing libuv on Windows (#51936)
1ad06ba3f5 : Wrap torch::deploy API functions in safe rethrow macros (#58192)
b1b9fb0147 : Specify the exact commit when triggering multi-gpu pipeline (#58219)
ee93a348de : ENH Raises nicer error when calling module.train with invalid modes (#58247)
9c7d5ed9b0 : Clarifies cholesky_ex role and makes batched support a common string (#58217)
6060684609 : Automated submodule update: tensorpipe (#57613)
71f4c5c1f4 : Fix "ci/master" workflow (#58335)
1a91892f90 : Added fix for missing ops aten::sorted.str (#58339)
211bac53ef : [JIT] Add optimize_for_inference API (#58193)
fad2ce439e : [nnc] Link all available LLVM targets (#58312)
4f50fdc2a3 : fx quant: refactor observer insertion
2436377a7d : Remove the list for the attributes that will be ignored for pickling (#58345)
9def776cd6 : [fx_acc] e2e quantized resnet18 (#58204)
bcacf91a71 : [fx_glow]Add Support for importing quantized linear in FXIRImporter (#57483)
998374a702 : [tsm] add support for jetter to Role (base_image) for mast launches (#58252)
b0819b0b73 : [CircleCI] s/ubuntu-1604:202007-01/ubuntu-2004:202104-01/ (#58308)
67583122f0 : Use pip3 instead of pip when building ECR GC image (#58334)
00a46a5eb4 : Fix incorrect inplace sort in `topk` (#58314) (#58318)
c4c2039fb2 : Revert D27652484: [nnc] Enable CPU fusion inside Facebook
a4075fca9a : extract dispatch keys from optional Tensors (unboxed) (#58296)
3dc70d8f78 : [iOS][Metal] Add target for testing metal ops (#57832)
84d8e3b0f6 : [FX] Finished prepare_for_inference API for release (#58293)
00156d4845 : [FX][WIP] Proxyable classes (#56737)
d3fbb41c61 : [PyTorch Edge] share tensors in mobile with new api (#58182)
c034bce979 : Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"
0a561f83ca : [PyTorch Mobile]Fix unit test (#58202)
34ac28b5ff : Bump Ubuntu/Python versions in ECR GC Docker image (#58309)
623d63d9da : [reland] Build and push Docker images in GitHub Actions (#58299)
73d51406fa : [PyTorch Mobile]Move train related files to their own folder (#58205)
49a8942a77 : Revert D25399466: add channels last support for AvgPool2d on CPU
0caec739a3 : Revert D25399468: optimize channels last for BatchNorm2d on CPU
94ef2b9b48 : [Pytorch] Better doc strings for bundled inputs (#56591)
0be334a1ba : optimize channels last for BatchNorm2d on CPU (#48919)
0d11dbf511 : [ONNX] Support index_add_ function. (#56867) (#57830)
520f90f359 : [ONNX] Handling incorrect format for example_outputs (#55802) (#57829)
52bb8120b8 : Mention distributed profiling in documentation (#58286)
064923e635 : Improve native_batch_norm_backward performance (CUDA) (#58240)
c711c30c74 : Revert "Revert D28387764: Codegen inplace forward AD formula from out of place one if needed" (#58231)
6e1718277c : Make GHA test-reports upload regex more permissive (#58250)
4bcaa5ae20 : Revert D28412496: Revert "Revert D28387767: Add forward AD test for op info"
2e26976ad3 : Disallow versionless Python shebangs (#58275)
e6adc06221 : Revert D28425179: Build and push Docker images in GitHub Actions
76d2cb3b8e : [torch.package/TorchScript] flag to gate allowance of TS serialization in torch.package (#57678)
e27f861db7 : [torch.Package/TorchScript] TS into torch.Package test cases (#54894)
9403fe17ce : [torch.package/TorchScript] logic to enable sharing of tensors on load (#57573)
307375a88e : [torch.Package/TorchScript] torch.Package python logic to save TorchScript (#54893)
3ad11803f7 : [torch.Package/TorchScript] ScriptModuleSerializer add unified format (#56299)
8ab3aa464a : [torch.Package/TorchScript] refactor ScriptModuleSerializer Exporter (#55958)
07de11c26d : [torch.Package/TorchScript] TS serialization importer to handle unified format (#54891)
2ead01f796 : Build and push Docker images in GitHub Actions (#58174)
1f5ed1ff69 : [metal] Fix binary elementwise ops to handle inputs with mismatched dim() (#58262)
72a90c3ea5 : [metal] Add reflection_pad2d for metal (#58263)
4f28c0b590 : Revert "Revert D28387767: Add forward AD test for op info" (#58230)
ccd7141919 : Modify DispatchKeyExtractor to also work for optional Tensors (#58283)
88ff651e90 : torch.jit.ignore as a context manager (#55172)
cf1daf571d : Port silu to structured (#58050)
f23e10f27b : Port softshrink to structured (#57623)
d65dff463a : Port hardsigmoid to structured (#57622)
401d0fe8c5 : Port leaky_relu to structured (#57621)
9dba26eed1 : Port softplus to structured (#57620)
03398b7edb : Port elu to structured (#57619)
ac04cc775b : [nnc] Enable CPU fusion inside Facebook (#58029)
6b8b591a84 : [nnc] Fix output restriding of size-1 dimensions (#58256)
cb7c6a536b : [doc] update distributed optimizer doc (#58084)
a8122062c0 : [PyTorch Mobile]Add light version of RandomSampler (#58201)
38e606d056 : [RFC] Add method torch.jit._clone_module_with_class (#56152)
452569dffb : cfloat and cdouble functions (#58137)
f6408c0dc1 : [ATen][quant] Use expect_contiguous in quantized::linear fbgemm version (#58221)
31607ad41d : [nnc] Started codegenning some external calls (#58118)
04970057d8 : Code-dedup in PowKernel (#57873)
64d23cc040 : Revert D28379394: Update internal code for torch.linalg.solve
3072c97017 : Gelu Backward, Contribution from Kevin Stephano (#58249)
f3ead05d77 : hardtanh (#57750)
c524448dd1 : init hardshrink (#57749)
047ae6b713 : [profiler][small] CUDA synchronize guard, minor fix (#58254)
8ac0917cc7 : add channels last support for AvgPool2d on CPU (#48918)
fd3d3ef900 : [RPC Framework] Add _script_module_reducer unconditionally for RecursiveScriptModule in RPC pickler (#58020)
993a35a8cb : [Static Runtime] Support clamp.Tensor (#58191)
1f3807ce5d : More stable and faster implementation of the gradient of torch.linalg.eigh (#55049)
b0833533a7 : Update internal code for torch.linalg.solve (#56613)
d304bb070a : Gelu Backward, Contribution from Kevin Stephano (#58249)
3a898c26c0 : Print stderrs in tools/mypy_wrapper.py (#58265)
7756cb6a5b : Migrate pytorch_python_doc_build to github action (#57371)
3f9126f399 : Only quicklint files that exist (#58261)
f6532468c8 : Make norm and vector_norm use the same kernels. (#58214)
26aeec35a1 : Disable more of quicklint test (#58257)
d833caaf6b : [PyTorch Mobile][Forward/backward compatibility] Number of arguments for operators (#56845)
e1bb9d2d99 : Reimplement spectral_norm using new parametrization functionality (#57784)
51cd89ecc6 : [ONNX] Handle mixed mask, index input for index_put (#57604)
01374d69e4 : [ONNX] ListUnpack on dynamic tensor list (#56592) (#57603)
8e29863a0d : [ONNX] Handle NoneType in Assign Output Shapes (#54623) (#57602)
bfe7728f18 : [ONNX] Process const folding progressively when converts to ONNX (#54569) (#57601)
346dc88bfa : [ONNX] Support registering custom export for prim::PythonOp from torch.autograd.Function (#55630) (#57600)
2b0f481d3f : Add support to to(device) op. (#56857) (#57599)
9e56314d2c : onnx.symbolic_helper.parse_args: document and clean up (#56956) (#57598)
dc0071dfa5 : [ONNX] Special post process for onnx::Cast and onnx::ConstantOfShape shape type inference (#55962) (#57597)
ac9e79e561 : Add a new operator for fill_() function. (#56859) (#57596)
6d7fe76317 : [ONNX] Warning when using __len__ to calculate tensor shape (#55151) (#57595)
3bc8a2264d : [ONNX] Support .item() export & NumberType to tensor conversion (#55697) (#57594)
061c7a1e17 : Overwrite with `ln` if libc10.so already exists (#58243)
9b95568dc3 : update abs forward ad formula (#58235)
3c4a90ce38 : Revert "Revert D28387764: Codegen inplace forward AD formula from out of place one if needed" (#58231)
098d9975a7 : Port heaviside to structured kernel (#57933)
770f8cea2d : Add support for real and imag tensor attributes (#54692)
8888565597 : T90561249: Enforce kernel launch checks (#58178)
1de9f51782 : [Pytorch Edge] Runtime ops compatibility api (#57570)
2294fd61c6 : .github: Add windows.4xlarge to scale-config.yml (#58198)
d8c6b74b0b : Deprecate torch.solve (#57741)
020e2ff115 : Add tests for PDT (#58211)
5e65428503 : Fix NumPy compatibility issue for torch.linalg.cond (#58041)
a49406b331 : Fixed batched version of torch.linalg.cond for singular inputs (#58040)
c1430c3425 : Add torch.linalg.inv_ex without checking for errors by default (#58039)
9e156b01e5 : linalg.eig backwards and linalg.eigvals (#57276)
2afcb7e8fd : Move Azure MultiGPU tests back to nightly (#58242)
e507771294 : [RPC Framework] Replace Python Pickler with internal RPC pickler for RemoteModule (#58019)
470cd64514 : [TensorExpr] Remove disabled tests that we do not plan to re-enable. (#58207)
a0f4b7cd48 : [TensorExpr] Re-enable skipped tests, they seem to be working now. (#58206)
dd3bd0286b : T89509943 - Improve error message during Glow ONNXIFI (#58069)
e71b526e7e : Add inference mode python bindings and tests (#58045)
002ce5c1df : port addmm to structure kernel (#57417)
52e9a192b1 : [ROCm] add 4.2 to nightly builds (#58143)
e8574b84bf : Fix legacy tensor constructor/new matching incorrect signature with d… (#58108)
ab5c273950 : Remove the matmul complex backward skip (#58138)
cf7d56d8f2 : Gradgradcheck runs successfully with unrelated inputs (#58049)
6997e7bd39 : Update Kineto submodule (#58179)
2b99bce1d7 : [profiler] CUDA event fallback (#58133)
fee7e8b91d : Striding for lists Part 2 (#49352)
82d714935e : [TS] Add complex support for more ops (#54541)
7a95cccbc7 : Revert D28393469: [pytorch][PR] Enable `ceil`, `floor`, `frac`, `round` & `trunc` for BFloat16 on CUDA
c8644326a7 : Revert D28177553: [tsm] add support for jetter to Role (base_image) for mast launches
e554731b32 : Hide set_enabled since it's not public facing. (#58078)
8a1dab3d26 : [tsm] add support for jetter to Role (base_image) for mast launches
ad4b2571b6 : Fix multi gpu test break on Windows (#58213)
6b1eeef601 : OpInfo: squeeze (#58080)
a31daf381f : Move libtorch builds to be master-only (#58183)
2d7d6922b6 : Revert D28387765: Add forward AD gradcheck
f88297c66b : Revert D28387767: Add forward AD test for op info
87f7fdfd5c : Allow instruction counting to use shared memory as a staging ground. (And a couple other tweaks.) (#56711)
066e7699eb : Revert D28387764: Codegen inplace forward AD formula from out of place one if needed
ce1a8620d9 : Enabled `roll` & `diag` for BFloat16 dtype on CUDA (#57916)
f9aa6b2432 : Enable lerp for BFloat16 on CUDA (#57907)
e6d8f45523 : Enable `ceil`, `floor`, `frac`, `round` & `trunc` for BFloat16 on CUDA (#57910)
c4a486f4b1 : Enable atan2 & hypot for BFloat16 on CUDA (#57905)
f4a5730a6b : Add LowerSimpleTuples for freeze tuples (#57915)
f0a5500722 : [torch/elastic] Add logging to the sanitize function of RendezvousStateHolder (#58169)
2279962162 : Codegen inplace forward AD formula from out of place one if needed (#57767)
26b6d044cd : Add forward AD test for op info (#57701)
647282cb0c : Add forward AD gradcheck (#57633)
bc30c3165c : Update docs for get_future support (#58107)
645a5f706a : move `flatten_dense_tensors` and `unflatten_dense_tensors` to Native (#58006)
f1ac9b6598 : fix lint (#58203)
028f2f62ac : [torch/elastic] Update the rendezvous docs (#58160)
ae63b1d1c6 : [torch/elastic] Revise distributed run script (#58159)
166a8df65f : [reland] make ddp logging api to be private (#58089)
8a45006765 : enable deterministic path for index_copy_cuda with index_put (#58144)
01d0eb9dac : [package] Add an intern keyword (#57341)
d230045fde : Combine backtrace print into one string to avoid interleaving. (#56961)
d09abf004c : OpInfo: narrow (#58082)
9148f19e85 : enable support for nested containers in `torch.testing.assert(equal|close)` (#57270)
9063cb0a3c : Infer types for arguments of methods not invoked directly by monkeytype (#57202)
1de3525ca8 : [ONNX] Handle PackedParams inputs for _propagate_and_assign_input_shapes (#56449) (#57079)
3d5bb71020 : Back out "[PyTorch Edge] Reuse constant table from ts in bytecode" (#58099)
85d64648d3 : Port threshold to structure (#57810)
82b2013eac : Delete move constructor on TensorImpl (#58048)
9bfc1c4e0e : [Gradient Compression] Update the docstring of fp16_compress_hook (#58168)
2073e866ad : Switch GHA test stats S3 upload token (#58156)
581bf01074 : [Gradient Compression] Remove unnecessary warning on the rst file and the check on C++ version (#58170)
4c24d820ff : [TensorExpr] Implement 'call_raw' in CUDA codegen. (#57901)
c751e53800 : [TensorExpr] Implement 'call_raw' in IREval. (#57882)
cbba3db21b : [TensorExpr] Minor cleanup in IREval. (#57881)
5e83c62a9e : Revert D28351931: [pytorch][PR] Fix some tensor operators to return `NotImplemented` for invalid inputs
46e4b2dbda : Convert assert -> cast. (#57458)
614437751f : make remote model instantiation async when possible (#58052)
0bfcc3e3f4 : fix topk with k=0 on cuda (#58086)
cbd1227809 : Add a note in the parametrize doc about the naming choice (#58142)
3c973de543 : HABANA Device registration key and Autograd key addition (#57094)
c9eb381aac : Allow zero jobs in tools/explicit_ci_jobs.py (#58176)
6955d4d0f7 : [nnc] Handle only the first argument of aten::to (#58028)
a88673e93e : Enable cat wo conditionals iff cpu (#58026)
ab6b5fa036 : Add HIP (ROCm) semantics doc (#57871)
af36d084fd : reland [ROCm] ubuntu version check in install_rocm.sh (#58164)
53bc6f79f3 : Added DevOps PR and Nightly Build logic (#58007)
7156168f71 : Port max_pool2d_with_indices_backward to structure (#57797)
3b977b3b4d : [DataLoader] Add context manager for runtime type validation (#55936)
5c696443c7 : [DataLoader] Modify construct_time_validation to argument_validation (#55836)
a0ac80ec76 : [DDP] Don't find tensors if static graph (#58105)
87afcea0cc : T90561249: Enforce kernel launch checks (#58116)
35521a2629 : Fix some tensor operators to return `NotImplemented` for invalid inputs (#57934)
6404184700 : Revert D28385479: [pytorch][PR] [ROCm] ubuntu version check in install_rocm.sh
9d56176034 : Fix splitter and add a unittest (#58075)
bfd0a46156 : [fx] Arg normalization not save output node in the node_map (#58058)
3603ba24d5 : Trigger Windows multi gpu tests on master (#57817)
8f83bfeb98 : Update CI images for rocm4.2 (#58017)
94bb1150a7 : [ROCm] ubuntu version check in install_rocm.sh (#57751)
16d617c3e5 : test experiment script (#57925)
d212bf1863 : Enable `BFloat16` for `nan_to_num` on CUDA (#58063)
c52700dbcd : [wip] enhance DDPSink to work for general outputs (#57073)
4faa427383 : Remove printout from distributed tests (#58095)
30f26c5893 : Reimplement torch::flip based on advanced indexing (#56713)
5ea87f9c24 : Grammatically updated the tech docs (complex_numbers.rst) (#57540)
ff982ef73d : OpInfo: reshape, reshape_as and minor clean-up (#57460)
c911c30520 : Revert D28291041: enable deterministic path for index_copy_cuda with index_put
c7fb0a0e82 : Remove beta warning for use_deterministic_algorithms (#58074)
e1078d42f0 : std/var: Return real results for complex input (#58066)
db13119fc4 : Deprecate symeig (#57732)
e18f5f1d13 : [profiler][small] Add skip_first parameter to the default schedule (#58025)
cdf161c382 : [profiler][small] Speed up postprocessing (#58021)
bf2ebfc9f6 : [profiler][small] Handle empty trace (#58013)
f1defeaea4 : [profiler][resend] Add cuda memory and distributed metadata (#58010)
14badd9929 : enable deterministic path for index_copy_cuda with index_put (#57870)
a07a0190f9 : enable deterministic path for index_put with accumulate=False on CPU and CUDA (#57839)
d623fb7e04 : Add a disclaimer about limited CUDA support in RPC (#58023)
c3d40fdf56 : [ATen] Use expect_contiguous in layer_norm (#58067)
c790fd2bf8 : ATen lu_unpack. Required for making `torch.lu_solve` differentiable. (#46913)
32acc96f78 : [Static Runtime] Fix bug in aten::clone (#58100)
8c91acc161 : port topk to structured (#57790)
e9e125475e : [Static Runtime] Add schema check to aten::repeat and fb::fast_gather (#58106)
8824f49e68 : Split `test_testing.py::TestAsserts` for multiple devices (#56365)
8b816e9010 : To implement gradient for Pytorch (#54617)
0d4dc6cb39 : Let submodules be collected as args/kwargs (#57840)
b7d674eb21 : Revert D28331386: [pytorch][PR] [torch/elastic] Update the rendezvous docs
aaca12bcc2 : Deprecate in docs torch.svd and change svd -> linalg_svd (#57981)
e573987bea : remove syncs in one_hot (#57902)
7a23a5e8e9 : Shut up sccache couldn't connect error (#58047)
29cfcf70be : [package] add mock/extern hooks (#58000)
d9ea93181b : Some types for remote_module (#58012)
1f83d8eec2 : [Static Runtime] Return nullptr if the number of input args doesn't match (#58018)
a90c229900 : Remove the BETA status for torch.linalg (#58043)
a1f9a3c643 : Fix UB in library.h (#57962)
c36055bb42 : Make mypy_wrapper.py accept multiple filenames (#57998)
f9c8b7f1a8 : [FX][docs] minor fixes (#58085)
a13718b69f : [FX] Make stack trace testing less strict (#58088)
e4418b67c7 : [torch/elastic] Update the rendezvous docs (#57973)
8b12c8e8b3 : Fixes: register_full_backward_hook crash if first argument doesn't require a gradient (#57944) (#57945)
4ef94265e9 : Add Futures to ProcessGroupGloo (#57818)
111c99cdfd : [vulkan] Fix glslc path for desktop build (#56507)
d49f6d556b : [DataLoader] Fix tempfile binding and removing for torch_shm_manager (#57566)
1d4d9ffca0 : [torch/elastic] Refactor rendezvous store initialization logic (#58057)
b58a7c95aa : [DataLoader] Raise detailed Error for ForwardRef type (#57824)
dd876120f9 : Out version for aten::repeat (#57683)
86b7ae181a : Automated submodule update: FBGEMM (#57983)
eb1ffa91d8 : [pyper] allow static runtime on and glow on simultaneously (#57972)
698be31262 : Adding support for normalization of __is__ op (#57862)
ad4cd6ef89 : Revert D28338485: make ddp logging api to be private
a02305925c : [local lint] Force color output on mypy (#58071)
0da5421837 : Doc deprecate norm and add seealso to linalg.norm (#57986)
e385aa863a : Add tools/ script to limit circleci to a set of jobs (#58001)
18edb77a28 : Add `pad_sequence` as a native function (#57868)
ac44569b0d : make ddp logging api to be private (#57999)
e0539b0ba6 : [DataLoader] Remove redundant len >= 0 (#57951)
7faac089ca : Enable cusolver potrf batched for Cholesky decomposition when cuda >= 11.3 (#57788)
ea421fb249 : enable static graph training in DDP (#55248)
502eb664ae : OpInfo: chunk (#57935)
90f05c005d : refactor multi_head_attention_forward (#56674)
4fb8676cea : Add dot implementation for BFloat16 on CUDA (#57903)
067147ac7d : Enable BFloat16 for `logaddexp` & `logaddexp2` on CUDA (#57908)
fa318911be : Enable geometric ops, exp2, expm1, rsqrt & erfc for BFloat16 on CUDA (#57913)
dbedb1fa1c : [CUDA graphs] Sync after replay (#57556)
565550d89a : [iOS GPU][perf][5/n] Replace std:vector with IntArrayRef and SmallVector (#57668)
dc55ab3f77 : [fbgemm] fix bug handling bias in rowwise quantization of FC (#58022)
3e46d6c9e4 : Update docs to mention CUDA support for Future (#50048)
9e94921a55 : combine consecutive layers on the same device (#55973)
cf7a0e5af4 : Use RPC context streams to cover serde ops (#57926)
0d564904b5 : [iOS GPU][Perf][4/n] Reuse the same command buffer when copying results to CPU (#57667)
43f6deb6e4 : Deprecate chain_matmul (#57735)
7707efed8f : Deprecate matrix_rank (#57734)
415ae54c31 : Deprecate torch.eig (#57727)
ee48bd089c : Support mix of int32 and int64 offsets/indices for EmbeddingBag and its variants (#55189)
3ec16035f2 : TST Migrates some of test_nn.py from assertEqualIgnoreTypes to assertEqual (#57642)
24087d07ca : Deprecate QR (#57745)
4fef1c1d74 : Deprecate torch.cholesky (#57725)
f3e014f37b : [iOS GPU][Perf][3/n] Cache the computation pipeline state object (#57666)
36a22967b7 : [fx ir] Handle the case when output consumes get_attr directly (#57844)
a93314dec3 : Alias det, slogdet, matrix_power, inverse, pinverse (#57821)
ba84c91197 : Deprecate torch.lstsq (#57743)
5840c8cfd8 : [nccl] log rank when communicator is aborted (#57974)
a0d686c9cd : OpInfo: select (#57731)
e90fcffb65 : [c10d] Log when store based barrier succeeds (#57711)
71ca3e99af : Only use actually mismatched elements for reporting in `torch.testing` (#57923)
c714596027 : [kineto] Update Kineto submodule, cupti library paths (#57789)
f97650e70b : [nnc] Fix float->bool conversion on cpu (#57798)
b8ca1219de : Add tests for custom state_dict save/load methods in TorchScript (#57886)
fc9c486044 : Add enabling default instructions flag for mobile (#57778)
38500d5d7b : [RPC Framework] Move the annotation w/ bold effect out of the quotes (#57965)
747312bf61 : Support for accumulate nodes traversal and to access op names in the compare function (#57685)
036167111d : Revert D28294662: [pytorch][PR] add cuda memory and distributed metadata
36172f347a : [iOS GPU][Perf][2/n] Prepack linear + Fuse relu/hardtanh (#57665)
f1d01b9488 : Disable test for quicklint (#57968)
f0f69c5dc1 : torch.where is now mentioning Bool rather than Byte when given wrong dtype mask (#57942)
ebb1b74f65 : Fix json parse error for profiler call stack (#57099)
98fcdb8005 : add cuda memory and distributed metadata (#57252)
ba07aaf211 : Fix typo in warning for spawn method (#57927)
19706d91cd : [vulkan] Add sigmoid activation functions (#57867)
481806be97 : Fix creation_meta for multi view outputs in NoGradMode/InferenceMode. (#57842)
478f639779 : [Vulkan] Fix seg fault during descriptor set allocation on some platforms (#57825)
e1cbc43f50 : Use tools/print_test_stats.py in GHA (#57647)
bf053a1296 : Fix hasattr support type (#57950)
fea3824214 : Ensure torch.save() deterministic output (#57536)
fe3c63d9d3 : [DDP] fix param to name mapping (#57771)
b587354e4c : Add Python-3.9 CI testing (#50992)
29753339b7 : Do not download slow test when on sandcastle (#57953)
710a83d09f : Remove code and logic for old style custom autograd Function (#57357)
d115e81a32 : Fix document around DDP uneven inputs (#57448)
4d181ba51c : Port maximum and minimum to structured (#57630)
727c1d69d7 : Remove unnecessary indirection through torch::autograd::impl::pyobj/set_pyobj (#57733)
807bea1c4e : [JIT] initial support for PEP-585 types (#57363)
bc798cdc1d : Add run_master_build workflow (#57899)
ece15f6902 : [DataLoader] Change Decoder signature and add MatHandler (#57391)
cbfce376a8 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
b84a28b50a : tweak sync note wording for linalg docs (#57924)
3c87fe9b14 : Revert D28117714: [pytorch][PR] ATen lu_unpack. Required for making `torch.lu_solve` differentiable.
259d19a733 : [JIT] Adding a concat optimization pass (#55474)
e7e73192f6 : Added cuBLAS path for torch.linalg.lstsq (#54725)
d11cce4f5e : Add cuSOLVER path for torch.linalg.lstsq (#57317)
300363b54f : CLN Removes unused RReLU code (#57672)
50e22e1e08 : Remove tmp folder when run unit test (#57800)
5c67d8dfd3 : ATen lu_unpack. Required for making `torch.lu_solve` differentiable. (#46913)
fc55290e5b : Fix distributed autograd gradients synchronization (#57792)
14282232d9 : Fix `generate_not_implemented_tests` not testing unknown types correctly (#56997)
4cf2c646c2 : Added torch.linalg.matrix_norm (#57127)
9ad19af935 : [TensorExpr] Fix a condition when we use a native depthwise conv2d lowering. (#57906)
0c2d38264a : Improve BatchNorm1d performance (CUDA) (#57786)
e8fb167b17 : [PyTorch Edge] Reuse constant table from ts in bytecode (#56002)
737f48dfc5 : Remove _save_data() and _load_data() from mobile (#57879)
88a1e8eb01 : Add EMA to DecayAdagrad (#57866)
a46e927b1a : [torch] handle embedding bag with empty bag (#57446)
f51798d0dc : [TensorExpr] Fix UB in LoopNest::distribute. (#57883)
8639fd104e : [profiler][kineto] Support for memory allocs/deallocs in the traces (#57835)
3a66a1cb99 : [clang-tidy] Exclude cppcoreguidelines-avoid-magic-numbers (#57841)
bc2540f0be : benchmark rpc ps (#57454)
94080f45ab : [RPC Framework] Update rpc.rst (#57876)
4db88307d9 : [RPC Framework] Add a link to the tutorial in RemoteModule docstring (#57875)
74d493cc07 : [RPC Framework] Support passing RemoteModule as an arg (#57695)
8c04593c0a : [PyTorch Edge] Add backport to export old bytecode models (#56802)
e9c3ce30d4 : Fix flaky test_barrier_timeout_global. (#57523)
73f22bcbf9 : [fx ir] Handle cases in GraphDrawer when shape, type or stride are not present (#57845)
ee4be5322b : Fix lint in test_tensorexpr_pybind (#57869)
4fad8d1a2c : Update the default detach semantic for forward mode AD (#57820)
bc0965ac85 : [Vulkan] Use more optimal command buffer submission rate (#57196)
b0c27b44cf : Enable backward/forward compatibility for TS runtime (#57498)
b38f153d91 : [nnc] Added NNC lowerings for t/transpose/permute/expand + other cleaning (#57426)
c88167d2ed : Respect .ini for flake8 and mypy (#57752)
18fed3dfbe : Change name for namedtuple return of torch.linalg.svd (#57181)
58f32fa5fd : Remove compute_uv flag from torch.linalg.svd (#57180)
db412a6885 : Avoid 2 extra copies when reducing sparse tensors and fix result() vs inplace output discrepancy (#57822)
2043093217 : Add correction parameter to std/var (#50903)
3d2ce60539 : [PyTorch] Remove dead get/setTLSCallbacks APIs (#56492)
9234d7fc27 : [PyTorch] Use MaybeOwned and avoid resize in bmm_cuda (#56115)
96e1a83fb2 : Add Gloo TCP_TLS transport (#56442)
96fce78ac4 : [Vulkan] Add -Os flag to shader compilation (#57199)
731dcd75f5 : [torch/elastic] Revise the note section of RendezvousHandler doc (#57723)
64dc10e268 : [JIT] Also fold NaiveSyncBatchNorm when folding batch norm (#57823)
0503105bc2 : Port logaddexp and logaddexp2 to structured (#57629)
223a362f63 : Port lcm to structured (#57628)
470c7af749 : Port hypot to structured (#57627)
3dd88d6792 : Port igamma and igammac to structured (#57626)
3a1dc60da5 : Port nextafter to structured (#57625)
7e51ac5ea7 : Port gcd to structured (#57624)
5044d9dc51 : Fixing quantize_per_tensor on cuda (#57703)
c07babbcf1 : [Gradient Compression] Divide by world size before all_reduce to avoid overflow (#57410)
626ae7f036 : Copy edit of TorchScript Language Reference (#57694)
b5b158a6c6 : Be more lenient with network exceptions in trigger_azure_pipeline.py (#57714)
161ea537f0 : [reland] Remove unused code in windows_build_definition.py (#57107)
0dd0151c64 : add `torch.testing` to docs (#57247)
27f672a0fc : Fix test reporting regression (#57795)
2901d2e694 : make quantizeable MHA work with torch.jit.script (#57774)
023ecc40ad : Revert D28248766: Update internal code for torch.linalg.solve
6f2c0cccdd : New: sparse complex: add linear algebra, addmm (#57129)
a911c4fc1c : New: Initial support for sparse complex tensors constructors for CPU/CUDA (#57125)
8d363d37da : [FX] Adds PyTree support to FX through `concrete_args` (#55888)
45012da298 : Migrate from shared_ptr to intrusive_ptr for Future (#57636)
36e47af58b : Pass reference to parent future in callbacks (#57635)
9aa1461a68 : Make wrapPropagateTLSState more generic (#57634)
5f2925074b : Update internal code for torch.linalg.solve (#56613)
adaf80bcbe : Update internal code for at::_lu_with_info (#56612)
9e6b7e6e6e : OpInfo: expand and expand_as (#57606)
1f7309dfe3 : [testing] clean-up test_unary_ufuncs.py (#57615)
4cb3c60c20 : OpInfo: float_power (#57648)
6eec730a73 : [testing] atan2: Enable cases where self broadcasts (#57608)
159a2404bd : fft: Increase tolerance for nd-fft tests (#57576)
ee79413b6a : [testing] change unaryufunc default dtypes (#57616)
319b08be59 : Add call_kwargs(args, kwargs) method to torch::deploy api (#57748)
f2fdb61e2d : [iOS GPU][Perf][1/n] Use aten::contiguous instead of permuting weights manually (#57664)
ca8090f81b : [Pytorch Edge] Enable eager symbolication in benchmarking binary (#57705)
e5e095cbe4 : [torch/elastic] Rename etcd-/c10d-experimental to etcd-v2 and c10d (#57764)
cb95c9db9f : Automated submodule update: FBGEMM (#57485)
1f1e2dab6b : Remove optional type for ord parameter in vector_norm (#57662)
cb1272a846 : update doc in build section (#56686)
8b38458011 : [jit] Break interpreter.cpp into smaller files. (#56546)
2787f01455 : Catch KeyboardInterrupt in tools/test_history.py (#57780)
78fb9c2f5b : Reorder gc.py imports (#57779)
241c2f4496 : Add Gelu To NNC (#57753)
aedcff7275 : fix codegen for lite_interpreter (#57761)
52d1b91d38 : Give Python sub-version in GHA CUDA workflow name (#57770)
2992ff3fb8 : Revert D28142447: Improve BatchNorm1d performance (CUDA)
3948ce2fd9 : [Caffe2] Introduce c10::CudaError for CUDA Exceptions (#57609)
cb234e606d : [package] fix corner case in PackageImporter.whichmodule (#57651)
2370d8c41f : [profiler] Add profiler fallback (#57612)
da06ae73a3 : [c2] Fix flaky test_spatial_bn_multi_batch_grad
eb6445a92a : [JIT] Lazily initialize aliasDb in DCE (#56649)
b2936ad8fa : Improve BatchNorm1d performance (CUDA) (#57034)
1101a5f6e9 : [paramcomms] support for in and out split sizes (#57709)
ebd2c0a4ed : Port ceil to structured (#57589)
ccbaa5fbe5 : Port sign to structured (#57588)
3c0d22fab3 : Port floor to structured (#57587)
d83d1d3741 : TensorIterator: documentation on the order of creation (#57550)
72ebdd68e1 : Revert D28242069: Add cuSOLVER path for torch.linalg.lstsq
dc06f52480 : Add result() to ProcessGroupGloo::AsyncWork's (#57565)
a7ba0f08f3 : Update internal code for torch.lu_solve (#56611)
cb7197ce3f : Torchelastic: populate __init__.py with failover documentation
ad31aa652c : Fixed the error in conv1d example (#57356)
52ac015d76 : Add note about improved vmap prototype to vmap docstring (#57677)
f1a62264f3 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
40cb55f978 : Revert D28154522: Add call_kwargs(args, kwargs) method to torch::deploy api
7b31d4262b : Add cuSOLVER path for torch.linalg.lstsq (#57317)
35fab44eaf : Add CUDA support for torch.ormqr (#57316)
59d794b2c3 : Port CPU torch.ormqr to ATen (#57315)
b4a098f1fb : [pytorch][nnc] mobile nnc backend skeleton (#56852)
d82333e92a : [pytorch][nnc] protocol classes to persist the context for compiled functions (#56851)
db7b31358f : Fix internal assert in CUDA caching allocator when trying to allocate ~2^64 memory (#57571)
7d4121d1d2 : Make RRefContext get devices from RPC agent when creating OwnerRRef (#57443)
7ffadf6e46 : Replace DeviceIndexes with Devices in RRefs (#57442)
8e9bbd3113 : Make DataPtr extraction in CUDAFuture faster for Python values (#56918)
69de4940f3 : Ensure devices are preserved when forwarding between futures (#57432)
1292602375 : Avoid re-extracting DataPtrs when forwarding values between Futures (#57433)
1f178de800 : [NNC] Add support for computing conv with dynamic shapes (#57514)
eef72f3f8a : [NNC] Update Buf on mutation instead of creating new ones (#57513)
95fbc158d4 : [NNC] Add a method to compute conv without bias (#57512)
3fb5be05ba : [iOS GPU] Add Metal API availability check (#57663)
7870450706 : [PyTorch] Use c10::ThreadLocal instead thread_local in record_function.cpp for specific __GLIBCXX__ on Android (#57689)
fc657b547a : [kineto] set the correct device id for GenericTraceActivity
8bbe383877 : [Static Runtime] Fix bugs in logit (#57578)
126ea1ccad : relax type equality constraint for scalars (#57532)
ba78bf1363 : [standaloneRunner] fix another GIL mutithreading issue exposed by torch::jit::toIValue() (#57688)
ccbbb2d6f8 : Revert D28052211: [paramcomms] support for in and out split sizes
86b061c80e : [FX] Changes in order to move python key out of tree (#57427)
c27428b5e9 : [nnc] ported conv2d lowering over (#56875)
866b19e95d : [paramcomms] support for in and out split sizes
27af9b0462 : Fix flaky test_rref_context_debug_info (#57526)
ba500c5c90 : Add call_kwargs(args, kwargs) method to torch::deploy api (#57484)
8df9b88042 : [kineto] Update Kineto submodule (#57700)
0d813bbca5 : Revert D28177176: [iOS GPU] Add Metal API availability check
44b021d21b : [package] remove save_source_file API (#57340)
a3cba770b5 : [package] remove PackageExporter.file_structure (#57339)
f326f7dda8 : [package] use digraph to back dependency visualization (#57338)
53c21172c0 : [package] add simple graph data structure (#57337)
a39c685ace : [package] make extern a dict (#57336)
dedf9fbe81 : [package] factor out `PackageExporter._get_dependencies` (#57335)
7627dd568a : hardswish reland (#57652)
56211524a7 : [NNC] ported over sum and softmax to new scheme (#56775)
0b51ee311d : Add missing return statement from 57057 (#57669)
cd22bdf236 : [PyTorch] Autoformat c10, round 2 (#57645)
e5179e960e : Share VS Code settings/extensions nicely (#57671)
65fad0ebd2 : Expand Kineto platform support (ci-all) (#56323)
30c96c9419 : [iOS GPU] Add Metal API availability check (#57663)
69e64b2632 : [Flaky tests] Fix flaky rpc profiling tests (#57517)
c4bb6a5781 : NNAPI: flex size support for upsample_nearest2d op (#57563)
4c609a9782 : NNAPI: Add qadd flexible size support (#57562)
28cd04ea64 : NNAPI: add flexible size support for conv2d (#57561)
049152faa9 : Make torch.linalg.eigvalsh differentiable (#57189)
babae61f2f : Make torch.linalg.svdvals differentiable (#57188)
534c457d3d : add a standalone extra file loader for pytorch model (#57591)
15c092b888 : Revert "Make grad mode error just a warning (#56401)" (#57640)
7115a4b870 : Clang format ProcessGroupNCCL.cpp (#56840)
a948e279ac : [c10d] Profiler support for nccl p2p collectives (#56427)
17035f6aab : Speedup render_junit (#57641)
fb9a32b7b4 : [PyTorch][Edge] Add api to get bytecode model version (#56801)
dedaf4fad7 : Reland: [TensorExpr] Add methods for inspecting generated code in `TensorExprKernel`. (#57560)
9e7814d539 : Reland: [StaticRuntime] Use NNC's call_raw API to reduce call overheads. (#57553)
e686c66fe7 : Reland: [TensorExpr] Add `TensorExprKernel::runFast` method. (#57552)
0bf69278f7 : Reland: [TensorExpr] Add `CodeGen::call_raw` method. (#57551)
da8cc355a3 : Relax tp_new so that it is OK to call (#57544)
c65a1da90a : Fixed C++ linalg API (#57464)
887d0e5657 : Revert D28197820: [JIT][NNC] add hardswish symbolic gradient and NNC lowering
0787d781c5 : Fix compatibility problem with LSTMs and torch.save (#57558)
49adac65c4 : ns for fx: clean up manual string names of related ops (#57210)
76f29d53bf : ns for fx: change matching to only match known types (#57186)
44bb15cfd3 : ns for fx: add more type to relationship mapping (#57184)
a9dc9535f6 : ns for fx: move relatedness mapping to mappings file (#57171)
9ec6883442 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
aeaa91bff6 : mkldnn gelu (#53615)
0142fd0b57 : [JIT][NNC] add hardswish symbolic gradient and NNC lowering (#57383)
133d8abbfc : Compute nvrtc during libtorch build (#57579)
cd9995ae14 : Update Gloo submodule (#57586)
45a3231bb8 : [codemod] Enforce proper use of emplacy functions
d728491fc1 : [RFC] [PyTorch Edge] Simplify error logging in mobile/import.cpp (#55711)
eb39da6b52 : Always run as many quick-checks steps as possible (#57572)
7175d49122 : [Dist profiling] Add is_async field (#57253)
151e81b7bc : [nnc][tests] Skip long running tests when using TE interpreter (#57568)
7c3a30fd79 : fx quant: remove matching hack for binary qhandler (#57470)
2b6c09c11e : Add futures to ProcessGroupMPI work (but not including Send/Recv) and python DDP comm hook testing (#57214)
8c9e42baaf : .github: Add render_test_results job (#57472)
00d6472b4d : tools: Add render_junit script (#57327)
9c5478588e : [iOS GPU] [easy] Rename APIs in MPSImageWrapper
76d9070d10 : Replace windows CUDA 11.2 CI with 11.3 (#57223)
1fc89d9ffc : Use proper Google Analytics id (#56578)
383e451036 : Implement torch.sort with cub::DeviceSegmentedRadixSort (#56821)
bca1949dc9 : [typing] suppress errors in `fbcode/caffe2` - batch 2
28c24ec3e8 : [numpy] polygamma: int -> float promotion (#57462)
1461859fde : Revert D28048289: [TensorExpr] Add methods for inspecting generated code in `TensorExprKernel`.
b3c0ef4a40 : Revert back to old assert behavior in as_view (#57499)
42d073a7e9 : Look for unqualified ignore in .pyi, not just .py (#57468)
34d853a524 : [fx2trt] example for lowering model to trt with FX based tooling (#57298)
5326ec60e6 : [Inlined Callstack Fix] Fix inlined callstack for blocks of the node. (#56562)
bb3c6699a5 : [Pytorch Mobile DebugInfo Serialization] Save debug handles for all instructions. (#55252)
e0fc473e47 : [Pytorch, Mobile] Serialize inlined callstack pointer with debug handle. (#55062)
f4a921600a : [PyTorch, Mobile] Serialization format change for source range (#54284)
aa5ff7cc91 : irange for Indexing.cu (#57479)
01e4444211 : Tiny typo fix (#57113)
03b5d87980 : fix(docs): `torch.add` and `torch.mul` (#54672)
dc49299078 : Allow passing cpu to CUDA RPC device maps (#57019)
5439977352 : [Static Runtime] Revamp op schema check (#57521)
a80b215a9a : [1/n][torch/elastic] Move torchelastic docs *.rst (#148)
3db45bcb91 : Compilation error fix for torch/csrc/distributed/rpc/init.cpp (#57500)
3cc733e451 : fix for nvtxstring not printing name for aten kernels (#57407)
67f874de8f : [resubmit] Remove sync for randperm on small tensors. (#54113) (#57364)
5c7e35c689 : [RPC Framework] Clang-format remote_module.py and instantiator.py (#57414)
6b2cb939c5 : [TensorExpr] Add methods for inspecting generated code in `TensorExprKernel`. (#57074)
030692cf9e : [TensorExpr] Remove `dtype_` and add `buf_` fields to `CodeGen::BufferArg`. (#57382)
839d549f8f : [JIT] Add a pass for removing a first (self) argument from a graph if it is unused. (#57169)
3ad3d8bd3f : [JIT] Add a pass for annotating graph with input types derived from sample inputs. (#57076)
74a4868d9a : Add docs for c10::InferenceMode. (#57480)
75f6dcf8b5 : protect destructors of python bindings that can be kept alive by c++ objects (#57488)
1d3a9bff3c : Swap CUDA 10.1 and CPU CI for windows (#57493)
4143483d95 : [RPC Framework] Create a separate remote module template when moving CPU tensors to a cuda device is not enabled (#57413)
15975cf6a6 : To add priority of int/int? over int[] on signature matching and adding {h,v,d}split methods (#57346)
c0309af1f3 : Actually report mac stats (#57511)
bf6e3425b0 : [23/n] [torch/elastic] Introduce the implementation of DynamicRendezvousHandler (#57151)
a357fc8a4b : [22/n] [torch/elastic] Introduce a new from_backend static constructor for DynamicRendezvousHandler (#57150)
4a10bd3b58 : [21/n] [torch/elastic] Introduce _RendezvousJoinOp (#57149)
81ef683cb3 : [20/n] [torch/elastic] Introduce _RendezvousExitOp (#57148)
baf8f4c0a6 : [19/n] [torch/elastic] Introduce _RendezvousKeepAliveOp (#57147)
3e024fcfc9 : [18/n] [torch/elastic] Introduce _RendezvousCloseOp (#57146)
aa5d35e1d7 : [17/n] [torch/elastic] Introduce _DistributedRendezvousOpExecutor (#57145)
2a178d34cd : [Redo] Add pybind interface to caffe2 quantization server (#57378)
6d3bb01b1a : Sequence Blob NVM Reader to Selectively NVMify Ads Embeddings in A*
589072afa1 : Fix return type of getDeviceMap (#57487)
d68ad3cb1e : Add a shortcut to test all torchbench models. (#57311)
33eea146ee : torch.clamp with tensor min and max (#52695)
c328bb6d79 : Port trunc to structured (#57350)
1a6f827ae6 : [16/n] [torch/elastic] Introduce _RendezvousOpExecutor (#57144)
76bccfb2e0 : [15/n] [torch/elastic] Introduce _RendezvousStateHolder (#56538)
160304a81d : fix comments in ATenNVRTC.h (#57318)
e841f335aa : [RELAND] [CUDA graphs] Avoid sync errors when graph capturing cudnn rnn calls that use cudnn dropout (#57373)
1b745efbe8 : [14/n] Introduce a name attribute to _PeriodicTimer (#57143)
233004b4c8 : [13/n] Extend the return type of RendezvousBackend's set_state method (#57142)
a6f60cf4f0 : [12/n] Rename last_keep_alives to last_heartbeats in _RendezvousState (#57141)
3209364724 : [11/n] [torch/elastic] Add heartbeat timeout to RendezvousTimeout (#57140)
6876e15dbe : [10/n] [torch/elastic] Add comparison operators to _NodeDesc (#57139)
6bf8df6b3b : [9/n] [torch/elastic] Introduce RendezvousSettings (#56537)
ac71432c54 : [PyTorch][Edge] Add api to get bytecode version from runtime (#56948)
945c93b8bd : [quant][graphmode][fx] Skip observing boolean Tensors (#57375)
264d87985a : Use ld.gold by default to link in CI (#57061)
c0d39ba680 : Replace 11.2 linux CI with 11.3 (#57222)
375c8a81dc : [DDP] Profile search_unused_parameters (#57376)
52b389259c : Port max_pool2d_with_indices to structured kernel (#56459)
6bc3ad28a3 : Revert D28143091: [pytorch][PR] Add cross OpInfo
c7d8d8f925 : [BE] Improve has_bf16_support (#57408)
f332a8bdff : Implement result() function in MPI Work classes (#57168)
0a0e024648 : use importlib instead of imp as it supports python 3.5+ (#57160)
7e12c3e10a : Automated submodule update: tensorpipe (#56916)
87242d2393 : Eliminate global usage of torch.set_default_dtype in test_autograd (#56446)
154eca0309 : OpInfo: ravel, view, view_as (#56910)
e845158b1a : Assert that GIL is not held in blocking destructors (#57030)
da51fd31a5 : fx quant: remove `find_quants` from convert (#57402)
d6563bc153 : fx quant: remove unnecessary quants arguments (#57399)
643f41be61 : fx quant: remove FixedQParamsOpQuantizeHandler from quantize.py (#57393)
2bd158386a : fx quant: move input_output_observed to qhandler (#57388)
1b20eeb138 : fx quant: move output obs logic to QuantizeHandler (#57377)
fe23881e76 : fx quant: readability improvements on observer functions (#57368)
db6cd42434 : fx quant: clean up nit in insert_observer (#57367)
46a32e075c : Improve BatchNorm1d training performance (CPU) (#57033)
4a872f8539 : Add cross OpInfo (#55483)
5c68072ee8 : add support for complex input to `torch.testing.assert_(equal|close)` (#57162)
eaf00bf7d4 : Skip linalg.qr saved mode check if compiled without LAPACK (#56284)
ce4449918a : Port reverse binary ops to `OpInfo` (#56471)
57f72b8433 : [DDP] Uneven inputs: option to throw early (#56755)
7fe4c1d0e7 : Torchelastic: add multiprocessing tests to ci/cd (#56842)
bb640efa40 : ns for fx: add missing add_relu and mul_relu patterns (#56927)
0ecdbfebff : s/InplaceOrView/ADInplaceOrView/g (#57372)
41099ef71c : OpInfo: mvlgamma (#56907)
05b255c543 : Revert D27487549: [TensorExpr] Add `CodeGen::call_raw` method.
75a2a92b02 : Add torch.linalg.cholesky_ex without checking for errors by default (#56724)
afe6b4c8ee : [NNC] Add logical Operators '&&' and '||' (#56947)
2be115336b : Fix torch.ormqr for non Fortran-contiguous inputs (#57314)
7c8d0069c4 : grad_fn getter for optional strings (#55225)
a5288a0244 : Sparse support for division rounding_mode argument (#51989)
6d681d064f : ROCM: Re-enable test_norm_fro_2_equivalence_old (#57170)
4350d4af77 : Immediately mark DLPack capsule as used after stealing the ownership (#56789)
3018093066 : Revert D28110359: [TensorExpr] Add `TensorExprKernel::runFast` method.
82d245faef : Inline hooks in ivalue::Future (#57354)
fb7469fb7f : Use Devices instead of DeviceIndexes in Future (#57353)
0422e67336 : Use Devices instead of DeviceIndexes in TensorPipe agent (#57294)
0c3e79b5b9 : Rename DeviceGuardImplInteface's getStreamFromPool method (#57345)
6697ef51b2 : Add device() method to c10::Event (#57293)
58bc003487 : Add pybind type caster for c10::Device (#57292)
2dffa8cdf8 : Fix CUDA Stream synchronization when arguments contains RRefs (#57394)
d536e6c684 : Fix variable names in torch.fft examples (#57290)
3315f14280 : Revert D28110358: [StaticRuntime] Use NNC's call_raw API to reduce call overheads.
20085f6d23 : Support auto generation of device check (#56872)
22ecb8885f : Disable device check for foreach kernels (#56871)
183320df96 : Add device_check place holder for functions (#56870)
f7f8540794 : Fix tensor device in test_kthvalue_overlap (#56869)
44cc873fba : [PyTorch] Autoformat c10 (#56830)
3c4d57c18b : [pytorch][nnc] update external functions for mobile build (#56850)
b11a24209f : [PyTorch] Take advantage of string literals in TORCH_WARN (#54032)
13dbb77b7a : [RPC Framework] Enable RemoteModule to directly send GPU tensors over the wire on TensorPipe RPC backend if a device map is provided (#57288)
20eac093a7 : [torch][segment_reduce] Add support for initial value (#56923)
bd347012ec : Added sm_75 support for CI Xenial CUDA 11.1 cuDNN 8 builds (#57320)
2b54cec7e8 : Clean up naming and comments (#56964)
bbdadab306 : Refactor fast gradcheck (#55871)
47e9ec401a : [nnc] ported some more ops + added vectors to argvalue (#56766)
233f2cd29f : Maintain submodule references during subgraph rewriting (#55463)
3a5f85465b : [pytorch] fewer cuda sync in unique by using cub instead of thrust (#57323)
208f81b787 : [PyTorch] ifdef out ATen tests that fail with static dispatch (#57379)
293830bc19 : Fix min() and max() for empty tensors (#52565)
c1a442248b : [JIT] Enable conv-add-relu fusion as a part of frozen graph optimization (#56580)
400ca7677c : [StaticRuntime] Use NNC's call_raw API to reduce call overheads. (#57329)
f219ed6627 : [TensorExpr] Add `TensorExprKernel::runFast` method. (#57328)
c9ab384af7 : [TensorExpr] Add `CodeGen::call_raw` method. (#55113)
4c3283da0d : Fix binary_checkout to use master (#57389)
5e422fa170 : per_channel fake quant fp16 and fp64 support (#56894)
42b3fc29f4 : Fix NVRTC versioning for CUDA 11.X (X>=3), CUDA 12 and later (#57204)
72b1faa2d2 : [8/n] [torch/elastic] Add unit tests for _RendezvousState (#56536)
bbc3cc6718 : [CUDA graphs] [BC-breaking] Makes torch.cuda.amp.GradScaler scale updates in-place for better composability with graph capture (#55562)
3a777b6792 : [PyTorch] Optimize intrusive_ptr(TTarget*) ctor (pybind) (#57053)
b9b768c0e7 : Revert D28011862: Add pybind interface to caffe2 quantization server
f54aa85a6c : Fix MAGMA qr for empty batched inputs (#56257)
ff59039a24 : Add cuSOLVER path for torch.linalg.qr (#56256)
6cb9abfd20 : Remove size arguments for internal orgqr and geqrf calls (#56255)
d5e1cac6e1 : Add non-allocating helper function for torch.linalg.qr (#56254)
e68c46bb3a : Propagate information on torch_shm_manager execl failure to parent process (#57310)
2c2aa9e030 : Address temp file/bind race condition in torch_shm_manager (#57309)
7eed5410cd : Make c10::TempFile non-copyable but movable (#57308)
788aefd7cc : Propagate information on torch_shm_manager failures to parent process (#57307)
3f81912885 : static graph api skeleton (#54995)
5f2b9b1df9 : refactor autograd_hook (#54981)
81ef82e5f4 : Add pybind interface to caffe2 quantization server (#57330)
e62cdae469 : Static Runtime support for aten::matmul (#57291)
0a9c9cc674 : Update DLPack to 0.4 (#55365)
b87d3fa432 : [PyTorch][jit] Don't allow create() on singleton types (#56807)
d896d1f4ce : [fx splitter] Fix fusion group utility (#57280)
7c8a7efe3f : [nnc] Enable all fuser tests for cpu (#57332)
d50a969f2a : reduce inline autodiff threshold so we can capture smaller fusions (#57062)
e795f88d6b : [NNC] Make flatten transform in-place (#56629)
b49e079a2a : Fix string_view::equals_ compilation by CUDA-11.3 (#57322)
52805a0f4f : [PyTorch] Include hip_runtime.h in macros.h (#57070)
c971401696 : [JIT] Disable conv-add-relu fusion for cuDNN7 when model uses fp16 (#56579)
731cc472c5 : refactor autocast to be extensible for devices (#57104)
095c328d9f : Add supported backward_dtype to OpInfo (#56156)
e08303c740 : Revert D27582224: [pytorch][PR] Automated submodule update: FBGEMM
0dddfbf346 : Revert D28114231: [pytorch][PR] Automated submodule update: FBGEMM
95dc2b6e9b : Remove unused forward AD flag (#57058)
83f186717b : Improve perf for forward AD view handling (#57057)
b016bc1c91 : fix InplaceOrView implementation for manual functions (#57152)
c91bd25e90 : Fix use of allow_tensor_metadata in view variable creation (#57069)
6fa1d880b6 : make external codegen aware of autogen'd composite kernels (#56960)
d4ddb47719 : [special] Add `xlog1py` (#55138)
5b3e7638ca : Expand Kineto profiler support (part 1) (#57333)
db32b69591 : quote str kwarg values in `test_ops.py::TestCommon::test_jit_alias_remapping` (#57120)
df69b0d060 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
264db1959e : Automated submodule update: FBGEMM (#57342)
54469e157b : Automated submodule update: FBGEMM (#55347)
b3e1802439 : Static runtime support for fb::expand_dims (#57282)
e31b67f550 : [torch/deploy] opt torch/csrc/deploy into autoformatting
ac72881f3f : Fix a numerical issue of CUDA channels-last SyncBatchNorm (#57077)
c44cbc63cc : Ignore more compiler warnings, unify WERROR options (#56630)
65968ab817 : Revert "Remove sync for randperm on small tensors. (#54113)" (#57299)
49dbe1798f : [kineto] Deprecate ClientTraceActivity and merge it with GenericTraceActivity (#56743)
16fc18bf82 : port neg to structure kernel (#57212)
995161203b : Fix sort for slow gradcheck (#57192)
e27740b38e : [torch] Add backward support for segment reduce (CPU only)
d1def93166 : [torch/debuggability] use log.info() in addition to print() in timeoutguard (#57296)
c2fbd96735 : [RPC Framework] Expose a Python API for device map getter (#57179)
2c6f5e8a12 : [package] PackageExporter `__import__` logic to not parse dynamic cases (#57283)
6ed90ed1ac : Added OpInfos for sub & mul (#56227)
149000c3f0 : Update compare_set docs (#57203)
e31265dfb3 : Fix path handling on Win32 in rendezvous.py (#57000)
a6fa6a6cda : [fx minimizer] Add an option to minimizer to allow return all intermediate results (#57279)
95f393f212 : Add compare_set to trampoline class, add typing and formatting (#57191)
be0ca00c5c : [torch/deploy] Minor housekeeping in interpreter_impl
4b96fc060b : Remove distutils (#57040)
21be40b390 : Add torch_cpu specific flag for debug info (#57190)
d3ffe9ab6b : [PyTorch] Allocate correctly-sized output tensor in addmm_cuda (#56033)
dd9f4c8cc9 : [PyTorch] Reduce move overhead in inferExpandGeometry (#56032)
fb2f3cd172 : [PyTorch] Migrate copy_ to borrow input/output (#56031)
a1d2bd56a0 : [PyTorch] Make as_strided_ use_const_ref_for_mutable_tensors (#55875)
ac86e0a0e5 : fix: index_fill_ formula to support duplicate indices (#57101)
ec86f96e91 : Fix for derivative of sinc(x) when x is positive but very very small (#56986)
fd67088a57 : [Distributed test]Enable ddp_control_flow tests for ROCm (#57159)
2e2c0099eb : Support type inference of nn.Module methods using PDT (#57165)
8a949f9e51 : [23/n][torch/elastic][upstream] Rename torch.distributed.elastic_launch to torch.distributed.run (#56831)
c72f01ab6b : Add CI workflow and script to test torchbench. (#56957)
ee71584236 : Update compare_set implementation for FileStore and HashStore (#57175)
ecacb8c78b : [quant][graphmode][fx] Fix getitem for unmatched nodes (#57173)
9486fc3229 : [PyTorch][Edge] share readArchiveAndTensors between mobile and jit (#57098)
2c8ea63cbb : add a test for grad view with torch amp (#56730)
e96667175e : .circleci: Switch libtorch builds to use smaller image (#56937)
311ad5e3af : Merge CUDAFuture into ivalue::Future (#57052)
71c2f88b90 : Make CUDAFuture handle any kind of device type (#57051)
cf1595c48b : Use only generic helpers in CUDAFuture (#57050)
682476022f : Introduce generic MultiStreamGuard (#57049)
381698f900 : Simplify CUDAMultiStreamGuard (#57048)
ea64c90ecc : Add recordDataPtrOnStream to DeviceGuardImplInterface (#57047)
6fdf092cad : Add getStreamFromPool to DeviceGuardImplInterface (#57046)
63533478bd : Fix misleading messages in test_jit_c10d (#57256)
b232659765 : Replaced _lstsq_helper with internal dispatch (#54724)
03962bc7f1 : Updated linalg.lstsq with NumPy compatible kwarg rcond (#54723)
5a02f72fcf : Modified batched residuals return of torch.linalg.lstsq (#54722)
36ebd0f65d : Improve LeftRight documentation (#57164)
b8e1be1a13 : Revert D28041140: [pytorch][PR] Adding vector_norm to the C++ API
fda8561944 : Adding vector_norm to the C++ API (#57055)
82e50f4757 : Update test_overrides for gradcheck (#57155)
762b3aa7ba : Revert D28078846: [pytorch][PR] Enable clang-tidy on master
17b961b8bc : [PyTorch][Edge] Fix mypy error (#56999)
5c8ceefe46 : Pytorch add agent api tests (#56985)
3a923a555a : [NNC] moved lowerings out of the TensorExprKernel and into independent functions (#56679)
ca814904b4 : Handle error reporting when reply file already exists (#57217)
2aadeac0ff : Remove duplicate entry for filter in language ref v2 (#57154)
e903e16d40 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
eac02f85cf : Fix more clang-tidy errors (#57235)
565b034237 : changed parametric type error in normalize to a warning (#57183)
54eee04226 : support discontiguous tensors only for contiguous output format (#57177)
d0ea3183c1 : Remove debugging print in randperm (#57218)
1ee54cc7b4 : Add devices argument to RRef constructor (#57085)
dd6b9665bf : [profiler] Add sequenceNr and fwdThreadId to the trace (#57182)
2dc3dc2324 : Enhance error message for Future.setErrorIfNeeded. (#56631)
6ff0002b12 : Pytorch: enable many torchelastic tests (#56970)
4049732811 : Enable clang-tidy on master (#57213)
73453f1de1 : Swap CUDA-10.2 and CUDA-11.1 master-only status (#57207)
78736a72a5 : Fix default dtype for randperm, triu/tril_indices inside TorchScript (#57105)
63d54874e7 : [torch/deploy] smol cleanups to generate_packages
c69386ccee : [torch/deploy] remove usage of fbcode_dir (#57102)
3483049d58 : Add xnnpack global average pool op (#55791)
aac2e68515 : Add inplace hardswish xnnpack op (#56715)
28fc59d13d : Add xnnpack hardswish op (#56714)
0a30d64c83 : Revert D27966444: [pytorch][PR] [CUDA graphs] Avoid sync errors when graph capturing cudnn rnn calls that use cudnn dropout
4cb534f92e : Make PyTorch code-base clang-tidy compliant (#56892)
5a10ee71d6 : [Reland] TCPStore add watchKey method and new listener thread (#56217)
6ec01b1610 : [DataLoader] Add mode to LoadFilesFromDisk (#57056)
31e59c3869 : torch.package change `Folder` to `Directory` and add doc strings (#56925)
610c984d2e : [CUDA graphs] Avoid sync errors when graph capturing cudnn rnn calls that use cudnn dropout (#56433)
efd451385c : Add gzip format support for chrome tracing (#56554)
ce79bd255d : Fix doc issues (#57153)
911852ffe2 : .github: Only add @generated on generated workflows (#57063)
18337fec7e : Remove glaringlee from C++ frontend codeowners (#57130)
4b8ccc6a0f : .circleci: Add /opt/openssl to CI images (#57071)
ec0fa40f0f : Release GIL before destructing RPCAgent subclasses. (#57029)
fe09d54120 : [c10d] Add debug level field in ProcessGroup (#56530)
6ee5e490d4 : [BE][SyncBN] Avoid sync stats in eval mode (#56982)
e362ee6f8a : Make it illegal to directly construct _TensorBase (#56150)
4d72538f80 : Give Tensor a trivial (for now) metaclass _TensorMeta (#56147)
5d7e48c9fc : Disable one test in rocm (#56951)
ef2bb784da : Replace raw cudaMalloc calls with CUDACachingAllocator (#57083)
46321cb937 : [static runtime] binding for aten::norm_out (#56636)
4638bd0f0f : Fix ProcessGroupMPITest.cpp Gather, Scatter and SendRecv. Enable ProcessGroupMPITest (#56709)
89377e3e45 : model_dump tool for model inspection (#56868)
1e77ba36db : change ddpLoggingData struct to map or dict (#56641)
3115728cba : [profiler] Support for trace metadata (#56575)
5536cda19a : Update floor_divide behavior in line with NumPy 1.20 (#56893)
77721ee318 : [profiler] Add cuda synchronization point (ci-all) (#57036)
8134806e23 : [iOS GPU][Kernel] Implement channel split in Metal shaders (#56074)
0df574017d : Torchelastic: add support for the new error file format (#57084)
882e273663 : [caffe2] fix bug when weight_decay is used with fused rowwise + SLWS grad (#57090)
51e6ebb5b7 : Add missing vec256<>::isnan() for VSX float and double vectors (#56658)
c91ea7d488 : [PyTorch][Edge] Add binaries for unittests (#57039)
786b0a8091 : [FX] fix normalization issues with lists of tensors (#57004)
1c0617bb54 : Fix clang-tidy for native CPU ops (#57037)
808850b6de : [ARM] Do not use depthwise3x3 conv in grad mode (#56889)
6e826cac67 : To fix inconsistency of digamma with SciPy (#56689)
0319b64ea0 : [aten][simple] Optimize at::repeat (#56994)
e8c268746b : Remove sync for randperm on small tensors. (#54113)
9fe2673d1c : ns for fx: additional bugfix for user defined functions (#57028)
da2cef6a40 : ns for fx: allow comparing int8 to int8 for functionals (#57027)
a359cfac22 : ns for fx: add option to skip matching classes and functions (#57026)
e8a5490c0a : ns for fx: support binary ops when adding unshadowed loggers for inputs (#57025)
ddedeab66d : ns for fx: bug fix for shadowing fp16 emulation patterns (#57024)
2acc19eca1 : ns for fx: add fp16 function shadowing (#57023)
782a0a1469 : ns for fx: allow user functions in shadowing (#57022)
c4bec76bec : ns for fx: move node I/O dtype mapping to be local instead of global (#57021)
c307379170 : Output tensor specified via out= must be on the same device as inputs for dot & vdot (#56334)
7bcce2acb9 : Revert D27765618: Initial support for sparse complex tensors constructors for CPU/CUDA
fa57191b16 : fix #56822 (#56967)
0d41122e61 : Eliminate global usage of torch.set_default_dtype in sparse test (#56393)
18c89a904b : Modernize test-suite in sparse tensor CSR (#56392)
09feb5f579 : Delete grandfathered Caffe2 dispatch keys. (#56939)
60a5ebfac2 : [Pytorch Edge] Remove methods_to_optimize arg (#57045)
7b160e29a4 : [DDP] remove backend constraints on uneven input tests (#56754)
522dca4ab0 : Port `topk` from THC to ATen, migrate most of sort as well (#55392)
ecaa208fd6 : Fix: sparse_csr_tensor segfaults when crow_indices or col_indices are non-tensors (#56723)
4a899bb3c4 : Fix: Incorrect example output in sparse_csr_tensor doc-string (#56722)
daef60c3b7 : Initial support for sparse complex tensors constructors for CPU/CUDA (#54153)
d16ed1ee8a : Add first draft of gradcheck note (#55966)
dd84224edc : .github: Switch alpine to ECR image instead (#57060)
26ed4b4756 : OpInfo : index_fill (port remaining method_tests) (#57009)
092eeedcb7 : [profiler] Fix double printing of FLOPs (#56974)
9da0f2e95e : Support `__pos__` and `positive` (#55891)
5b3c0ae563 : Use a FutureFactoryRegistry to allow libtorch_cpu files to create CUDAFuture (#56984)
f9e7e2e20e : Remove unnecessary noCuda arg from AtomicJitFuture (#56973)
cea265b8d8 : Support layer_norm for static runtime (#56444)
3de86b951d : Migrate thrust->cub for index put (#55693)
6c602eb099 : Don't hold ThreadPool lock when destructing task (#56817)
a18f3aacee : Vectorize floating point floor_divide (#55380)
cf17fd6dd5 : Fix multinomial CUDA misalignment and non-deterministic behavior (#55364)
6e91e90b4d : Use OpInfo for unsqueeze test (#56924)
6c37788cb1 : [torch] Add cuda support for segment reduction 'max' (#56704)
d578e8cfa2 : Improved docs for `torch.linalg` (#56265)
9d54475032 : Hide module paths leaking in the documentation. (#54585)
c203c921bc : Revert D27926270: [pytorch][PR] [profiler] Add cuda synchronization points
a93ceb333d : Workaround intermittent gcc-7.5 ICE in cpp tests (#57016)
11d455fa8b : .github: Enable Linux CPU GHA on PRs (#56942)
ed617a61ce : Adjust computeLRWorkDim() to work with Accelerate.framework (#56847)
338a600e78 : Add dispatch keys for out-of-tree grad+vmap prototype (#56824)
cfbd06d7a1 : add all pools, Batchnorm and Tanh (i.e. all ideeped MKLDNN ops) to MKLDNNFuser (#56541)
8d29ac2033 : .github: Bump linux.2xlarge runners to 500 (#56945)
e138987818 : .github: Build test binaries in build/ directory (#56941)
6bbd8ba658 : [NNC] removed the second run of llvm passmanager - it is repeated and caused a slowdown in the generated code (#56837)
3b977a0d28 : [DataLoader] Add `generate_state` for NumPy seeding (#56797)
759cfb7495 : add missing comma to `run_test.py` (#57010)
201ad938b2 : Enable fixed fast_mode for complex (#55699)
7fe6e8e5a2 : Refactor C->C to C->R twice (#55692)
268cc117a8 : Add OpInfos for torch.{complex, view_as_real, view_as_complex} (#56524)
57e37080cd : Added OpInfo for torch.einsum (#56276)
ab1457ad14 : Remove C++17 only optional include (#56782)
0d777a808c : Make test_randperm work with meta device (#56976)
f7fba854bf : Implement module.to_empty() (#56610)
f2acdff73d : DOC: Add note to mutating methods (#56877)
1145e2c6e2 : Revert D27831996: ns for fx: move node I/O dtype mapping to be local instead of global
45e96b5410 : Revert D27833189: ns for fx: allow user functions in shadowing
982c72ac33 : Revert D27836064: ns for fx: add fp16 function shadowing
90d554bd86 : Revert D27857735: ns for fx: bug fix for shadowing fp16 emulation patterns
abb8b6c1c1 : Revert D27864296: ns for fx: support binary ops when adding unshadowed loggers for inputs
cc8c5c1447 : Revert D27886107: ns for fx: add option to skip matching classes and functions
5dc7a6b050 : Revert D27960767: ns for fx: allow comparing int8 to int8 for functionals
5db03b4109 : Revert D27960766: ns for fx: additional bugfix for user defined functions
a0483cd06b : Back out "fx: Fix type_matches for Optional[List[int]] arguments" (#56991)
780f454297 : Add some functions for manipulating mkldnn tensors to TORCH_API (#56954)
c42dd8b257 : Revert "Use at::cpu in bench_approx (#56563)" (#56816)
38bb0ac3e8 : [profiler] Add cuda synchronization points (#56651)
dc8a8cea79 : Move caffe2 signal_handler to c10. (#56717)
6ed5bbfb46 : [TensorPipe] Give higher priority to CPU-only channels. (#56908)
a09bbe73fd : static runtime support for fb::equally_split (#56812)
35f3feca28 : [RPC Framework] Supporting reading the input from the remote worker (#56943)
3721e01d60 : Port adaptive_max_pool3d_backward to structured kernel (#56800)
77e3f5d73d : Port adaptive_max_pool2d_backward to structured kernel (#56799)
e7c79cb158 : Add type annotations to nnapi (#48142)
8a0eb7fb2d : [TensorExpr] Docs: checkin 'Conditionals in TE' doc. (#56949)
e909ad2dc4 : [static runtime] binding for aten::argmin_out (#56638)
9bd14da6e4 : ns for fx: additional bugfix for user defined functions (#56762)
502c58ad84 : ns for fx: allow comparing int8 to int8 for functionals (#56742)
92c7aec5f5 : ns for fx: add option to skip matching classes and functions (#56493)
c004346c88 : ns for fx: support binary ops when adding unshadowed loggers for inputs (#56408)
f35540be38 : ns for fx: bug fix for shadowing fp16 emulation patterns (#56384)
96a9eafcfb : ns for fx: add fp16 function shadowing (#56311)
1917350977 : ns for fx: allow user functions in shadowing (#56301)
93de80203d : ns for fx: move node I/O dtype mapping to be local instead of global (#56296)
8dbf6ae8fa : ns for fx: handling for user functions in weight and unshadowed act APIs (#56292)
d405d41a7c : ns for fx: enable user defined functions for graph matching (#56283)
f5c24cc891 : add deterministic path for index_copy_cpu (#56900)
0888b8726a : [static runtime] binding for aten::clamp_min_out (#56635)
d221be6fb4 : [iOS GPU] Use thread buffer to store indices for transpose (#56706)
16710e5d93 : Add reasons in TODO for the unblocked AVNTM -> InferenceMode cases. (#56823)
e810bed63f : [Static Runtime] Clean up op implementations (#56841)
9b46b6b37a : Added sm_75 to CUDA Arch List for Linux CI GPU builds (#56619)
d1088de522 : Let RRef getValue() synchronize CUDA streams (#56895)
e1a7ec3c4f : [caffe2] fix -Wrange-loop-construct
72c3ee073f : add deterministic path for index_add_cuda (#56521)
cb1e78038f : .github: Add options to force unzip artifacts (#56929)
7989f2ac87 : Clang format dist_utils.py and rpc/__init__.py (#56853)
6155b0d9fa : [reland] Trigger azure pipeline for multi gpu tests (#56128)
2639c4e6b3 : fix bug in rocm device type (#56646)
2f598b53dd : catch xml parser error during report test result phase in CI (#56864)
28a9483e36 : fix ddp logging test (#56640)
5b1f0ef622 : Add cuBLAS path for batched torch.geqrf (#56253)
27a8ece805 : Add cuSOLVER path for torch.geqrf (#56252)
f84f2063b4 : Port CUDA torch.geqrf to ATen (#56251)
5854e93bc9 : Fix derivative of sinc at x=0 (#56763)
3e006fc57e : Adding hsplit,vsplit and dsplit methods (#53536)
6ba9fd5963 : Added "Tensor tol" overload of torch.linalg.matrix_rank (#54157)
a90a3acbee : Use JIT Plug-in for coverage to cover JIT'd functions and methods (#56310)
1e51c05b71 : Name .coverage.jit with timestamp to prevent loss of stats (#56829)
689d3a70aa : Fix broken link to fx graph quant guide in quantization.rst (#56776)
ed9c7e187b : Added OpInfo for addmm (#55920)
b3f56ec0e0 : Automated submodule update: tensorpipe (#56495)
f27513e951 : Fix bug in torch.sparse.addmm on CUDA when beta != 0 or 1 (#56160)
f3743f097f : [TensorExpr] Nuke tensorexpr::ScalarType and instead use c10::ScalarType directly. (#56825)
441c835733 : [TensorExpr] Remove unused field from TensorExprKernel. (#56761)
1faf1f96aa : [TensorExpr] Fuser: don't lift tensor constants from fusion groups. (#56756)
7b31ba4708 : Fix cudnn ctc loss backward (#56639)
9eee14704a : OpInfo: roll and rot90 (#56770)
9e027d7ea3 : [OpInfo] Add opinfo for `transpose` and its aliases (#56122)
298db67220 : [OpInfo] Add Function Variant and Opinfo for permute (#56125)
267b554b6f : fx: Fix type_matches for Optional[List[int]] arguments (#56790)
dde2bc4818 : Add OPENSSL_ROOT_DIR to cmake.py (#56846)
7b74c3c70a : Enable tests for dist profiling with torch.profiler (#56216)
2d2370bb61 : [Dist profiling] Fix ProcessGroupNCCL collective profiling (#55204)
70d9be0f42 : Replace duplicative s with alpha (#56804)
d4707e260b : Infer types (#56832)
e97c17afa0 : Update internal code for torch.geqrf (#56250)
d5ff432615 : Add torch.linalg.svdvals (#56684)
58fcf77712 : Port CPU torch.geqrf to ATen (#56249)
805129f957 : enable support for custom error messages in `torch.testing` (#55890)
edfbc989d1 : add support for equal_nan in torch.testing.assert_close (#55788)
27148db5df : Add support for scalars and numpy in torch.testing (#55786)
dbf3451c6e : Add support for checking tensor containers in `torch.testing` (#55385)
bcef7ebd60 : [NNC] Added matmul for NNC lowering/unified dtypes (#56456)
710288e413 : torch.fft: Document out argument (#56732)
6e5ce569bd : DOC: add note for torch.clamp() special case min > max See #45664 (#56367)
45692fbef0 : [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
51bca2ca4d : [caffe2] fix -Wrange-loop-construct in onnx_exporter.cc (#56759)
4ef8205104 : [fx][normalize] Allow for args to be left as args (#55995)
3fbc15410a : Revert D27967517: [pytorch][PR] Use JIT Plug-in for coverage to cover JIT'd functions and methods
c416167fb7 : Add tests for CUDAFuture (#56518)
a688b29750 : Support custom Python classes in CUDAFuture (#56516)
e4efc0c948 : [Static Runtime] Enable check_for_memory_leak in StaticRuntime::benchmark (#56839)
34eb6c8589 : [Caffe2] ScriptModuleOp support pass_inputs_as_tensor_list (#56813)
b2b9efb33a : .github: Add initial Linux CI for CUDA (#56494)
060e4c96ee : Torchelastic: forbid mp tests running with *san (#56827)
bd3dda95fd : Make old_gpu warning dynamic (#56621)
5d940e2fbc : [TSAN] Fix PythonEngine data-race-on-vptr. (#56808)
2041cd6707 : Enable forward/backward compatibility in TS mobile (#56079)
be7a943bb8 : s/AutoDispatchBelowAutograd/AutoDispatchBelowInplaceOrView. (#56657)
375ebd634a : [PyTorch] Break up generated tag in source (#56503)
5288d05cfd : Revert D27958477: [PyTorch][Edge] Add v4 and v5 models and remove unused model
c37095760d : [torch distributed] Implementing all_gather_base (#56315)
5b7317b562 : [NNC] API for Buffer Compression (#55853)
e098515b89 : Fix cdist backward for empty inputs (#56606)
0d7e780eff : Fix broadcasting of cdist backward (#56605)
3ddcc8d833 : Add more test cases for cdist OpInfo and TODOs (#56604)
10fd7d8be6 : Add option to OpInfo to skip gradgrad check and empty cdist OpInfo (#56603)
ed2104fe5c : Fixing MAGMA with HIP issues (#56448)
0424f6af93 : Local lint fixes - missing steps, pin to bash (#56752)
6de1d9b2d0 : Fix bug in emitUse to drop all values that are marked as drop (#56652)
2e4c68a727 : [PyTorch][Edge] Add v4 and v5 models and remove unused model (#56751)
798dd4665d : Add a new API replace_input_with to node.py (#55887)
7d2a9f2dc9 : Fix instance norm input size validation + test (#56659)
7e9f7fb980 : [Pytorch Edge] Prepack folding for functions besides forward (#56081)
7ff1990caf : [c10d] Increment sequence numbers on collectives. (#55718)
ed0a0c3578 : Revert D27902824: static runtime support for fb::equally_split
d1fe68e70b : To add single and chained learning schedulers to docs (#56705)
88bd0510ef : Use JIT Plug-in for coverage to cover JIT'd functions and methods (#56310)
22b151a3ba : Make sure full backward hook fire when no input requires grad (#56693)
acca89e25f : Add more RRef CUDA RPC tests (#56757)
369e8bc4bc : Added support for uppercase letters in torch.einsum (#56475)
15ca379bde : Add CUDA support to a user-created torch.futures.Future (#56517)
58d12eb75e : Allow to specify a set of device for CUDAFuture (#56515)
d6a25a58f5 : add hardtanh(0,6) to the set of MKLDNN fusible ops for mobilenetv2 (#56203)
7b7a4750a9 : [PyTorch] Migrate hacky wrapper removal to borrow_from_optional_tensor (#56648)
f2fd91ccfd : [PyTorch] Add & document borrow_from_optional_tensor (#56647)
02c3e6d98a : addmm CPU inplace implementation shouldn't resize an input tensor (#56452)
e5fda07e80 : Fix: Compare input against beta * threshold in softplus backwards (#56484)
83c23703b7 : Some simple optimizations (#51831)
0a72904ab4 : Torchelastic: make process failure init error non-fatal (#56739)
a4e47ea152 : static runtime support for fb::equally_split (#56565)
7c50852a60 : moved more lowerings over (#55372)
1f04494c0e : Consolidate nondeterministic error tests (#55631)
88deea4e29 : [torch.package] is_from_package check (#56729)
913f1f75b3 : Revert "Revert [ONNX] Redesign inplace conversion" (#56675)
461e887d92 : CPU Convolution benchmark harness for some popular models (#56455)
f84a50109f : Move windows testers to previous image (#56626)
29491f7954 : [NNC] Add unroll and flatten APIs which not require return stmt pointer (#56420)
2078836005 : Clean up raise exception logic (#55656)
d01302431c : Enable fast gradcheck for real inputs and outputs (#55237)
2ea3c24c06 : Disable flaky tests (#56279)
5c752ead3e : Print non-breaking space directly in lint.yml (#56726)
08ce2300bf : torch: Add cpython as a dependency for torch_python_obj (#56740)
bac4cfd54d : Fix mp serialization for integer nn.Parameter on CUDA (#56529)
febff45900 : Support factory kwargs in torch.nn modules (#54508)
3a4344a717 : Create helper function for RPC profiling in _invoke_rpc and remote (#56643)
1719cb82f3 : [quant][graphmode][fx] Support preserving attributes in deepcopy of observed/quantized graphmodule (#56550)
3a44d269ac : Add periodic_ prefix to all jobs run by cron (#56695)
375687839e : [sparsity] Moving the sparsity python files to OSS (#56617)
31fe2bbb30 : Remove extraneous variables in windows report stats step (#56596)
5b01b3e8e8 : Introducing JitPlugin (#56708)
2128a84a69 : Fix grad_fn bindings when saved variable freed (#56499)
679cc7eb13 : Re-enable fast winograd conv on IOS (#56021)
2ee3f5f812 : Copy over test reports before running "report results" for linux test jobs (#56725)
048087d942 : make bag_size output deterministic for EmbeddingBag (#56661)
8b3bf98cb8 : Tell codegen that SparseCsrCUDA is cuda (#56602)
b85b89d246 : Re-enable test_device_maps_gpu (#56415)
0c544ebd24 : Revert to ANVTM in jni_lite due to Oculus failure.
614dce54a6 : [iOS GPU] Fix Shader compilation errors for Metal 1.2 (iOS 12) (#56670)
187a524249 : Re-order tests based on changed files (#56666)
1dbbbbe904 : [doc] FX Graph Mode Quantization - fix preamble (#52192)
f0958f4748 : [c10d] Add requires_gloo decorator to test_logging_init (#56682)
036becf29c : Disable TestComplexity.test_nn_module_test in fbcode (#56677)
c6d004125e : Port all non-float unary operators to structured (and rsqrt) (#56151)
86ae22d85d : [torch.Package] Folder has_file() method (#56584)
dfb65146e5 : Add RELEASE.md (#56520)
8cf85a1152 : [DataLoader][doc] Randomness for base_seed generator and NumPy seed (#56528)
aec83ff45e : [DataLoader] Add Numpy seeding to worker of DataLoader (#56488)
bc3d892c20 : README: Minor improvements (#56193)
21fd5f4b79 : Document current deploy cpython build #56490 (#56600)
78022aa62c : Add more model symbolic tracing tests from torchvision (#55744)
9be2cabc45 : Pass contiguous weight to NNPACK convolution (#56569)
690c8b434f : [static runtime] binding for aten::sub_out (#56656)
3355c30f91 : Always run all the grep-based quick-checks steps (#56700)
47d2edd597 : Fix quick-checks for operator-schemas (#56692)
bdb421895a : Remove some wildcards from mypy configs (#56645)
1f0223d6bb : Fix bug in gaussian_nll_loss (#56469)
76214bb464 : Add OpInfo for torch.baddbmm (#56502)
49df8993c4 : Port `scatter` and `scatter_add` to `OpInfo` (#56140)
0df239e550 : [FX] Make arg normalization a method on Node and not a pass (also augment tests to be exhaustive) (#55992)
81b59211d4 : [static runtime] binding for aten::div_out (#56653)
57cba8e601 : Use at::cpu in bench_approx (#56563)
426852b4f0 : Split test_c10d_spawn.py to test_c10d_spawn_gloo.py,test_c10d_spawn_nccl.py (#56599)
5cc75e46fa : Split test_c10d.py to test_c10d_common.py, test_c10d_gloo.py, test_c10d_nccl.py (#56598)
d24314bd2c : Update Kineto submodule and use new metadata api (#56432)
1b87274460 : [iOS GPU][Design] Support multiple tensors as outputs (#56072)
36828aa0ff : Revert D27866138: [ONNX] Redesign inplace conversion (#55033)
a1299a2802 : Disable Windows GPU testing (#56655)
df1dfd879e : Fix errors when initializing Linear with 0 in_features (#56505)
76fbd755c1 : Reland of "D27708346: generate xla codegen in-tree" (#56601)
0cc42809ce : Enable skipped test for c10::complex on CUDA >= 11.2 (#50227)
24ff92f76d : [ONNX] Redesign inplace conversion (#55033) (#56173)
818ce1d0d2 : Add standardOps match more input type in ORT (#53813) (#56172)
43ad172c54 : make ProcessGroupDefaultTimeout the same as python (#56549)
a970e525fd : make ProcessGroup.Options.timeout argument private in python (#56531)
6d7d36d255 : s/“pad”/"pad"/ in files introduced by #56065 (#56618)
5dcc7ac35c : Add new scheduled job to circle-ci workflow (#55182)
73eaa0a5f5 : Fixing error in jit cuda on ROCm: non-constant-expression cannot be n… (#55243)
e0be76fb9b : [static_runtime] fix num args for to_copy (#56441)
d83ae5d1b7 : Add devices to TensorPipe options (#56405)
853112bbfc : [7/n] [torch/elastic] Rename _Rendezvous to _RendezvousState (#56535)
21d9bc246b : [6/n] [torch/elastic] Reorder type definitions in dynamic_rendezvous.py (#56534)
df91eb924c : [5/n] [torch/elastic] Introduce the delay utility function (#56533)
76ca1eeeb8 : [4/n] [torch/elastic] Fix the finalizer of PeriodicTimer (#56532)
c244d1c540 : [package] resolve `__import__` calls on export (#55153)
28f52649d8 : add dtype information for input (#55358)
6032ea0313 : [PyTorch] Migrate add operators to borrow in TensorIteratorBase (#55691)
01842d2bb0 : [PyTorch] Support borrowing in/out Tensors in TensorIterator (#55690)
7e8f078a3d : [PyTorch] Always update op.current_dtype in TensorIteratorBase::set_output (#55940)
b79901f932 : [PyTorch] Remove non-const TensorIterator::tensor() method (#55420)
26fc27cb4f : [PyTorch] Format generated structured kernels code better (#55258)
1211bccc65 : [PyTorch] Fix const correctness for resize native functions (#55351)
5e695b1271 : Use absolute path for local linter (#56633)
772ca1a2c3 : [vulkan] Add Vulkan registrar for internal build (#56620)
27a0d6f1df : AutoDispatchBelowAutograd takes no arguments. (#56424)
3ec6bf5d26 : Fix cuda launch error in reflection_pad2d (#56451)
eac082891f : [package] Massage exporter docstrings (#56547)
0911ee9108 : Split CUDAFuture into a .h and a .cpp file (#56514)
7dec14a491 : Avoid defining RpcCUDAFuture subclass in TensorPipe agent (#56513)
5ddc2691d0 : Merge ivalue::Future's markCompleted and markCompletedWithDataPtrs (#56512)
af23822112 : Gracefully handle failure of DataPtr extraction in CUDAFuture (#56511)
3e0c226eed : Raise TypeErrors when IValue::getSubValues fails (#56510)
5e4dfd0140 : Add quicklint make target (#56559)
12b2bc94d7 : Revert D27909732: [pytorch][PR] Support factory kwargs in torch.nn modules
284e735b3f : Set show_error_codes = True in mypy-strict.ini (#56616)
5a09def9b0 : Support factory kwargs in torch.nn modules (#54508)
11e26e7246 : [sparsity][refactor] Remove "Sparsity" from the function names (#56555)
8ee1347c3f : Changes to support strides in addition to shape and dtype. (#56567)
4230040470 : torch: Fix flake8 errors from leftover import (#56614)
7660cb880f : Rename job to be py2-setup-validate-errormsg (#56593)
8a81c4dc27 : Update padding_idx docs for EmbeddingBag to better match Embedding's (#56065)
e691f24079 : [sparsity] Moving only the C++ files from internal to OSS (#56553)
02c9d2dc90 : Release GIL before destructing ProcessGroup classes (#56381)
3e55fc91fd : [pet] Remove additional @record in elastic_launch to fix file existing error
90e532f3ef : Revert D27708346: generate xla codegen in-tree
b7d5a0cf10 : [c10d] sequence number in process group (#55319)
096089abcb : [quant][graphmode][fx] Produce torch.cat instead of torch.ops.quantized.cat (#54924)
2e8418025a : [vulkan] safe_downcast for buck build (#56540)
a583b9cd86 : Fixing "naive" `forward` of `ModuleList` and `ModuleDict` (#48785)
e51f73a03e : Report test stats for macos_10_13 tests (#56429)
d43d6593cd : [NNC] Handling conditionals in reorderAxis (#56063)
fe0e1c71a7 : Add type ignore lint to Makefile (#56587)
51d0212d0f : generate xla codegen in-tree (#55050)
744360ce52 : Fix missing definitions in Vec256 for VSX (#56486)
75024e228c : Add lint for unqualified `type: ignore` (#56290)
87a1ebc9cd : fix RegistrationDeclarations.yaml, now that we codegen composite kernels for structured functional/inplace ops (#56307)
46a1ac40d9 : fix meta() calls for non-storage tensors (i.e. xla) (#56306)
d168eae114 : make torch.testing error messages more expressive (#55145)
b66a1e00a6 : [NNC] added skeleton for refactoring (#55371)
7929bc76a0 : [shape inference] Fix dim type for Cast
4575028f6c : Update script API to take example inputs (#55376)
c91c4a081d : [NNC] Horizontally fuse all loops (#56324)
33f206b865 : [StaticRuntime] Replace StorageImpl with TensorImpl in MemoryPlanner (#56447)
88fbbb4165 : [ONNX] Fix ComputeShapeFromReshape when input_shape_size < reshape_size (#56171)
1e449694a3 : [ONNX] enable word_language_model GRU and LSTM scripting (#54310) (#56170)
0b0fca3c59 : [ONNX] Export mv op (#55470) (#56169)
90e63cc41f : [ONNX] Add support for prim::min (#55259) (#56168)
a31fd7f453 : Fix onnx/constant_fold.cpp compilation on Windows (#55770) (#56167)
5a455dc717 : [ONNX] Enable tensordot symbolic function. (#55654) (#56166)
f804b65d4e : [ONNX] Update repeat_interleave symbolic (#54312) (#56165)
9986b109d2 : [ONNX] Fix assign input shape for tuple inputs & primitive type inputs (#54112) (#56164)
75995e4bf6 : [ONNX] Add support for hann_window operator. (#54587) (#56163)
19943aafe9 : [caffe2] Speed up remote net loading
a2422cc243 : Add stricter check for function schemas with varargs (#56509)
a4626348bc : fix unqualified noqa lint (#56548)
594c546b69 : [PyTorch Edge] Eliminate non-determinism when generating build YAML file (#56539)
7fff71eb9a : Fix warnings in tensor_flatten.cpp (#55956)
3d904b56ec : s/AutoNonVariableTypeMode/AutoDispatchBelowAutograd/ (#56423)
13ac0019ae : [NNC] Update loop-carried dependence check to handle all known dependences (#56354)
1d8053655d : Rename AutoNonVariableTypeMode to AutoDispatchBelowAutograd and add a warning. (#56422)
3cc4dbb66d : Expose nbins and ratio (#50398)
af7775ba26 : Types for caffe2/torch/testing/_internal/common_distributed.py (#55338)
8ae8fb7dd1 : [iOS GPU][Stub] Move conv2d_prepack impl from MetalPrepackOpRegister.cpp to MetalConvolution.cpp (#56491)
15734f5b6f : Ignore warnings for record_function_ops (#56543)
20e88401db : Add monkey type config for JIT (#54513)
17b8a4db1c : [nnc] Support `pow` on CPU (#56308)
1e03a2505f : add channels last for MaxPool2d (#56361)
7d4e9bdba1 : Add type hint for SequentialSampler (#56374)
c65284aa07 : Remove caption for Lang Reference (#56526)
12b5e666b0 : add codegen subdirectories to mypy-strict.ini (#56523)
6e1fc5cef8 : [quant] added dq->op->q quantization patterns for GELU and softmax ops (#56004)
ea4af1511c : [Pytorch] Better error message for bundling inputs a second time (#56086)
43eb21bff3 : [skip ci] Add simple local actions runner (#56439)
ab20ba4427 : Fix issue with dispatch key: AutogradXPU (#56336)
8868f9c8e3 : [TensorPipe] Use targetDevice in tensorpipe_agent. (#56346)
a8ea490f67 : Revert caffe2 print stack traces flag (#56496)
5017c5fcad : [SPMD] Remove _specify_ddp_gpu_num method (#56425)
04de24d10a : Separate profiling tests from p2p tests (#56412)
59b61f912a : Switch assertWarnsOnceRegex logic to check any instead of all. (#56434)
75651e3cc4 : Add remaining ToCs to ToC lint (#56487)
062e70590c : Add OpInfo tests for torch.{dot, vdot, bmm, mv} (#56409)
e4faebca0d : Automated submodule update: tensorpipe (#56259)
f74a346213 : Fix torch.hub.load("pytorch/vision") fails to validate the master branch (#56138)
b2dae294b6 : Fix distributed.test_jit_c10d flaky tests (#56410)
0e0a5471ef : Remove an unused variable in SoftmaxWithLossOp (#56321)
4e0760f41a : Remove `is_variable` from tests (#56305)
eacf6f1b51 : Updated the tech docs to be consistent with other two descriptions (#56338)
c61778355c : Upgrade ShellCheck to v0.7.2 (#56445)
3d878dee45 : Added out= variant for torch.linalg.lstsq (#54721)
43c747859c : Use c10 backtrace generation in caffe2 (#56198)
63dac82444 : Make grad mode error just a warning (#56401)
0ea4eb745b : [opinfo] torch.lerp: move remaining cases from tensor_methods to opinfo (#55665)
df8bb5a42b : Add OpInfo for polygamma and remove torch_op_tests Infra (#51966)
a661e58731 : Removed infos vector in torch.linalg.qr (#56248)
c5c5230890 : Pytorch resolve bug around incorrect rdzv handler resolution (#56386)
7ae45403a1 : [static runtime] support aten::__getitem__ natively (#55310)
85f4025ad7 : Port adaptive_max_pool3d to structured kernel (#56320)
0d4394778e : Port adaptive_max_pool2d to structured kernel (#56317)
513e9e0927 : Fix cxx11 abi (#55984)
07653b7fe0 : [SPMD] Remove ddp_gpu_size field from SyncBatchNorm (#55946)
023231a2ac : [torch/distributed] Fix pydoc for torch.distributed.elastic.multiprocessing (replace Redirect with Std)
94406f77f6 : [quant][graphmode][fx] Add support for keeping output quantized for list and dict (#56391)
42f0fe1fe3 : fix misaligned access #56325 (#56403)
92d24e3060 : Revert D27855386: [pytorch][PR] Support factory kwargs in torch.nn modules
b1282bc109 : Use stack trace implementation in common/process on fbcode (#56400)
f096245610 : AutoNonVariableTypeMode->InferenceMode in OSS. (#56421)
5b4c3a9da1 : record Torch DP and DDP modules forward (#55578)
31677c5fcb : [reland] .github: Add initial linux CI workflow (#56280)
0917061f43 : [vulkan][jit_pass] Add optimized_for_vulkan attribute on vulkan pass (#56414)
7adc04d7b5 : Add more logging to debug test_reduce_sum_cuda_twice (#56406)
0d94c04247 : [NNC] Change fuseLoops API to return bool flag and not throw any exceptions (#56353)
fe3f6f2da2 : [iOS GPU][Kernel] Implement mean.dim using MPSReduce kernel (#56073)
fa7534788b : Fix typo in gradcheck.py (#56368)
34d0bd5b1d : Fix TestTypeHints.test_doc_examples (#56388)
2f5c352162 : Fix protobuf warnings in caffe2 (#56186)
638617f9f8 : Write mini dump on pybind exceptions (#55652)
a14178ed5c : Remove useless code (#56230)
04607a58f1 : [pytorch] Fix compiler warnings from conv.h (#56181)
2c9972facf : [iOS GPU][Kernel] Implement transpose in Metal shaders (#54522)
e3900d2ba5 : Add lint for unqualified `noqa` (#56272)
7bcf95bbb6 : [iOS GPU] Move the definition of `fp16_t` to MetalUtils.h (#54521)
40483acc51 : Support factory kwargs in torch.nn modules (#54508)
ca6e5c7fc9 : [NNC] added more python bindings for loopnest (#56213)
d1b6383d65 : Hide warnings for deprecated quantization APIs (#56291)
48aaea3359 : unified GlooStore and c10d store API (#56222)
5748cc0d11 : [Mobile GPU] Ban mutations in JIT passes (#56070)
98162cb0bb : Enable AutoGradMode in InferenceMode. (#56107)
8881f504f1 : Remove the unused maximum and minimum functions in vec256_base (#56313)
6409d34482 : Sort glob of files to ensure it is deterministic (#55850)
838d3079ad : Lazily initialize alias db in remove_mutation opt (#55949)
98ac6f7cbc : Increase default rendezvous timeout to 15 minutes
d806b06167 : Support int32 indices in torch.repeat_interleave (#55102)
b6b2fc7e3f : Added OpInfos of add & mm (#55915)
92991d9533 : Add OpInfo for (nan)quantile (#55548)
d05e7c163f : Revert D27600457: [pytorch][PR] Support factory kwargs in torch.nn modules
7d17559152 : [special] OpInfo `i0e`: fix missing check (#56232)
1077f87269 : Support factory kwargs in torch.nn modules (#54508)
7513455c74 : Make tensordot resize output tensor's size if out= argument is specified & make it safely cast & copy output (#56286)
0e106fce9c : add tests for torch.testing (#54784)
2219286de4 : Updated internal code for orgqr function (#56247)
b387f7ca47 : [NNC] Make normalization transformation in-place (#56158)
22d4d9f4a6 : [pytorch][PR] Automated submodule update: tensorpipe (#56348)
ffdecc1ac4 : [CUDA graphs] Allows DeviceCachingAllocator to capture cross-stream memory use (#55860)
3e42da09df : Porting logcumsumexp tests to OpInfo (#56135)
ce05b7a324 : [c10d] Remove deprecated use of torch.LongTensor, torch.ByteTensor (#55861)
a24b17248f : Short circuits DistributedDataParallel._recursive_to's copy and stream syncs if input is already on the right device (#55624)
29c5cb797d : [NNC] Fuse loops that have the same bounds as expressions (#55997)
b0e0841f98 : OpInfo porting for logsumexp operator (#55520)
8c74e1b840 : Vectorize copysign on CPU (#51792)
36b476ccdd : Added OpInfos for eq, ne, ge, gt, le, and lt (#55709)
85126629a5 : [TensorExpr] Add support for constant tensors in tensorexpr kernel. (#56319)
dd9ef529ba : [TensorExpr] TensorExprKernel: switch type of tensors_ from Tensor to Buf. (#56318)
50d4c63f46 : Allow inlining of more Tensor methods (#53905)
be2a0805d2 : [TensorPipe] Update tensorpipe subodule + remove TP_NEW_API switch. (#56260)
928a4733af : [nnc] Only lower float conv2d's (#56289)
04e7891aab : Add adaptive_avgpool2d to the set of fusible ops (#56180)
7636cb6bab : clean up unused reduction functions in THC (#56293)
a43483586d : A heuristic to avoid perf incompatible MKLDNN formats for binary ops (#56089)
d02919dd50 : [FX] Make shape_prop handle targets with aggregate outputs (#56221)
72a93a6337 : Fix warnings in ivalue test (#56303)
48e675ac75 : fx quant: fix subtle bug in BinaryOpQuantizeHanlder logic in matching (#56294)
5eadc243f3 : Preserve node meta info in split_module (#56212)
98933866a9 : [quant][graphmode][fx] Optimize cat (#54813)
cd780e1c6e : Move graph iterator to seperate utility file (#56211)
8176ab6ca0 : [JIT] Put explicit error message on class attribute accesses. (#55723)
d312aeb6ac : Implement faster gradcheck but not enabled for most things (#54480)
83cfaf1a12 : [kineto] deprecate pthreadid (#56209)
643dd26389 : Fix formatting for the new language reference (#56042)
ce1380f9b5 : fixing Optional[Tensor] type in autodiff (#55565)
d30e31cfe6 : [20/n][torch/elastic][upstream] Move torchelastic.distributed.tests to pytorch.distributed (#56215)
1360980659 : Remove duplicate test due to rebasing mistake (#56287)
d4fad109e8 : Add OpInfo tests for torch.inner (#55536)
a6940aae37 : [19/n][torch/elastic][upstream] Replace pytorch.distributed.launch with torchelastic launcher (#56214)
5a9b1ddf3b : fix the readme link (#56269)
164de39a11 : Fix build failure due to namespace change for log_out and tanh_out (#56278)
8d4e6c9570 : [package] make GlobGroup a public concept (#56238)
1ec12fd491 : Add minidump collection via breakpad (#55647)
5f19385588 : [TensorExpr] Add aten::matmuls to TE fuser. (#54605)
8d7faa2af8 : Update _torch_docs.py to close #56240. (#56242)
0dc6e7ae38 : Move grad_mode.h/cpp to c10. (#56204)
d79326ce7a : Revert D27812204: [sparsity] Moving only the C++ files from internal to OSS
eca98fedb5 : split out NamedCType from CType. Remove direct string comparison from autograd codegen (#55334)
947c7a8215 : add C++ namespacing logic to ctypes (#55047)
164bee1d09 : Return a CType instead of a string for returns, beef up CType (#55046)
26046b9110 : [caffe2][publish] Optimize metanetdef load
7629477ff7 : Filter out more expected errors from sccache log (#56281)
3e0744a1ae : [sparsity] Moving only the C++ files from internal to OSS
bb35b066af : Put `env` before `run` or `with` in GHA workflows (#56268)
03cc9fabd4 : Added complex datatype support to sigmoid on cuda (#55975)
dd8bfe2b93 : Finish deprecation cycle for inplace view error checks (#56093)
9f216b9499 : ns for fx: enable shadowing int8 to int8 (#56205)
ae0af8bb51 : ns for fx: move unmatchable mod/fun/meth mapping to mappings file (#56197)
6de5d13e0f : ns for fx: make `call_method` nodes work in NS APIs (#56196)
07f3eaa716 : ns for fx: remove deprecated code (#56195)
0fbc2be234 : ns for fx: enable `call_method` nodes in graph matching (#56194)
2380cc7d65 : ns for fx: fill out coverage for node I/O types (#55918)
430fc03e3f : ns for fx: add category for ops which accept fp32 or int8 input (#55859)
5ec6434945 : ns for fx: move op dtype category mapping to separate file (#55858)
fe18144618 : Generalize HIP-specific launch bounds to apply to CUDA as well (#56143)
48c6f0c25e : Add OpInfo for torch.mean (#55525)
119b3eccda : Revert "Revert D27598681: Add OpInfo tests for torch.addbmm" (#55908)
f9b3dcba0d : Store coverage.xml as artifact for windows test jobs (#56179)
c5e80d30bf : Harden "Add annotations" workflow (#56071)
e387bd780e : Ignore envrc files (#56199)
f236c27819 : Update Gloo submodule (#56189)
b96cc9ab20 : [FX][testing] Test tracing into all the standard torch.nn.functional (#55550)
1a1b23f00c : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
6b5ed5ec45 : Revert D27803529: [pytorch][PR] .github: Add initial linux CI workflow
0a541e23e1 : [nn] Add allow_duplicate option for named_modules (#54812)
b405e2ce12 : Implicit conversion from null tensor to NoneType (#55823)
d2d1112513 : Set ThreadLocalState correctly in the autograd engine (#56174)
8f68396462 : [package] fix error handling with allow_empty (#56190)
4611387608 : [optim] take kw-only argument for functional optim APIs (#56185)
bd3c63aeeb : [PyTorch Edge] Move torch::jit::mobile::_export_operator_list() from serialization/export_module.cpp to mobile/import.cpp (#56044)
94ce10f732 : [iOS GPU] Use setTexture() rather than copyTexture() (#56069)
42f5d66080 : [DDP] Fixes flaky tests caused by incorrect floating-point comparison (#56192)
7d410bc3c8 : .github: Add initial linux CI workflow (#55176)
400398006f : [PARAM] Param comms debug info (#55976)
bde53cfd9a : [tensorexpr] Add missing python bindings for NNC Stmts (#55570)
f59244ec16 : ns for fx: add test for op relationship coverage (#55837)
c8209a7336 : ns for fx: move pattern utils to separate file (#55805)
b461104554 : ns for fx: make get_reversed_fusions reuse quantization fusions (#55803)
84b5f67d9b : ns for fx: add qat tests cases for shadowed activations (#55614)
37fbc069f1 : ns for fx: qat test cases for unshadowed activations (#55508)
f6a3936ab3 : ns for fx: extend functional weight extraction testing to QAT (#55507)
1cbc4023e9 : ns for fx: add qat handling for weight extraction (#55506)
3786c2719d : ns for fx: make NSTracer inherit from QuantizationTracer (#55505)
5ad3bc715c : ns for fx: change node I/O determination to strict allowlist (#55434)
1ca51f0fba : [kineto] deprecate metdata args from ClientTraceActivity (#55988)
52f1a07b63 : Python API for Vitals (#53238)
f17c9ea2ed : Port all unary float functions to structured (#56082)
cfc9716246 : Change all unary functions stubs to use TensorIteratorBase& (#56078)
3c4e1cd141 : remove annoying warnings from common_nn.py (#55982)
ff1498e668 : Add cost inference for MulGradient operator
3fbca31be3 : port addmv to structured kernels (#55746)
8e82e932f3 : Reland: D27652485: [nnc] Enable CPU fusion only when num_threads == 1" (#56120)
e1752ffa04 : [reland][ROCm] use hiprtc precompiled header (#55965)
f02454f957 : Fix ChanelShuffle named tensor warnings (#55911)
dd090e72b2 : [dist_optim] add distributed functional rprop optimizer (#55834)
4e9e7200f2 : [dist_optim] Add distributed functional Adamax optimizer (#55833)
8ef13cf976 : [optim] refactor rprop to use functional API (#55832)
bb245b6444 : [optim] refactor adamax to use functional API (#55830)
f26a6cb372 : [quantization] Fix deepcopy on quantized ConvNd (#56154)
a3a75bd35e : Add complex autograd support for `torch.cross` (#55854)
90e103ddfe : Revert D27753803: [19/n][torch/elastic][upstream] Replace pytorch.distributed.launch with torchelastic launcher
512c744f2e : [torch/elastic] Introduce `PeriodicTimer` (#55919)
e2036ea342 : Revert D27758303: [20/n][torch/elastic][upstream] Move torchelastic.distributed.tests to pytorch.distributed
9bfe16a308 : should_check_autodiff is now should_autodiff_node (#56013)
aae1023bed : [caffe2] allow passing options to the DB in Save operations (#55935)
14d529a368 : Add support for refinement for torch.jit.Future (#56148)
33159b68a3 : Revert "Deprecate legacy constructor `torch.Tensor()` (#54414)" (#55831)
b940516061 : [nnc] Don't fuse fp16 on CPU (#56119)
16820bba5a : [nnc][trivial] Trailing underscore style for llvmCode, asmCode members (#56118)
d56f451820 : [nnc] Separate printing of optimized llvm bitcode from assembly (#56117)
06ea73942a : [easy] Rename fb::jpeg_decode_to_NCHW to fb::image_decode_to_NCHW (#55857)
63f83edcfb : OpInfo porting for torch.real & torch.imag (#55134)
5ed3be799d : skip test_filtering_env_var for rocm (#56178)
6c327ef9d4 : matches_jit_signatures is dead (#53637)
6366658fbf : Add OpInfo for torch.nansum (#55523)
9f6fed8a15 : [20/n][torch/elastic][upstream] Move torchelastic.distributed.tests to pytorch.distributed (#56077)
857d8264a7 : Skip RPC's CPU-only tests on CircleCI GPU jobs (#55778)
0a06d054d0 : Revert "Only allow hub.load() from original repo. (#54451)" (#56048)
71f9e99e29 : [torch/elastic] Introduce aux types required by `DynamicRendezvousHandler` (#55932)
7c708ef4ea : [19/n][torch/elastic][upstream] Replace pytorch.distributed.launch with torchelastic launcher (#56037)
728d2e4e0f : [BE] Speed up runtime of test_ddp_model_diff_across_ranks (#55659)
7eed077406 : [android] Fix headers publishing in aar (#56068)
49e5e284ea : Additional annotations in fbcode/caffe2/torch/_jit_internal.py (#55855)
a60dca8e80 : Make the script generate cancel_redundant_workflows.yml (#56092)
51e7a371f5 : [DDP] Param to name mapping in Reducer (#55075)
1934725875 : Use cascade summation in nll_loss on CPU (#55841)
6c65ce8ee1 : Use THPVariable_Unpack in python_nccl (#56016)
6ec71ed4f9 : Replace all direct cdata access with THPVariable_Unpack (#55799)
61418aa069 : Make THPVariable_Unpack work on THPVariable too (#55798)
82a7fff3cd : Modify a few APIs to take/return const Tensor& instead of Tensor& (#55797)
e8faf69739 : fix torch.pow type promotion issue (#54085)
9d3d169d2d : Implement hardswish/hardsigmoid on MKLDNN tensors (#55218)
71a5314591 : Fix ScriptMethod dispatch on __torch_function__ (#56103)
61725f15c0 : cleanup unused implicit argument of expand function (#56101)
6daa1760d7 : Skip geqrf test if compiled without LAPACK (#56105)
e0f9a5fed8 : [BE] add test selector to test_testing (#55931)
1d49fd31c4 : [reland] Add formulas and basic tests (#56083)
b383b63550 : [ROCm] Updating ROCM_HOME handling for >ROCm 4.0 (#55968)
5cab3b9cf6 : Revert D27709912: TCPStore add watchKey method and new listener thread
6350fcef83 : [testing] add `broadcasts_input` and verifies the behaviour for inplace_variant. (#55771)
50057e560b : [special] Add `i0e` (#54409)
2f895f790a : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
84e6580b5f : Use cusolver potrs as the backend of cholesky_inverse for batch_size == 1 on CUDA (#54676)
699b47cd2c : Update use_deterministic_algorithms docs (#55413)
3802e577fb : [TensorPipe] Use Descriptor::Tensor::sourceDevice in tensorpipe_agent. (#55821)
047164437e : [TensorPipe] Prepare for new Pipe API. (#55820)
6eeffc64f1 : Port NumPy typing testing style to PyTorch (#54234)
a128938a75 : [ROCm] add MAGMA_HOME env var hint to cmake, centos-rocm Dockerfile (#54511)
1e9c7ad4cb : Add a test to measure `import torch` time (#56041)
75b6644a4c : Add USE_NUMPY define only if PyTorch is compiled with Numpy (#56102)
81f181567a : Add `USE_MAGMA` build flag (#55994)
1995640d86 : Fix compiler warnings in mkldnn Pooling (#56095)
f5a7b2e641 : Put llvmMathExtras in c10 namespace (#55886)
556dfcb0db : [TensorExpr] Re-enable "LoopNest.VectorizeUse" test. (#56094)
ad17fadbfc : Revert D27652485: [nnc] Enable CPU fusion only when num_threads == 1
506eca24b9 : Revert D27752279: [nnc] Do not try to vectorize kernels that use float16
8f663170bd : [17/n][torch/elastic] Make torchelastic launcher compatible with the caffe2.distributed.launch (#55687)
c5f9e043e9 : Collect instruction counts (and wall times) for CI (#55428)
92a09fb87a : Manual revert of D27369251 (#56080)
f8d331b33b : PyTorch Execution Graph Observers (#55957)
55432982d2 : [OpInfo][take2] move matmul to OpInfo (#55947)
669a8acc54 : [package] Allow save_module to accept module as arg (#55996)
1a116a9332 : [Static runtime] Add optimize_graph_output_memory flag (#55811)
44e2c2cdfb : Add a lint for native_functions.yaml (#56059)
6b8696172f : Fixed some Clang-Tidy checks in Aten Context class (#55942)
817fd932ac : Revert D25607505: Add formulas and basic tests
ed03a0791e : Change MessageType values from decimals to hexadecimals for readability (#55985)
50bd6a3640 : ci: Remove CUDA 10.1 builds (#56056)
2e7e4d0795 : ci: Add job to ensure python2 setup.py compat (#56057)
70f5905565 : Add formulas and basic tests (#49098)
1e225a5187 : Add a few InferenceMode test cases to the wall. (#55993)
cc7fab6e9c : Update pthreadpool (#55950)
0b8bd22614 : Fix bug with rebuilding extensions every import (#56015)
f8f756efb2 : TCPStore add watchKey method and new listener thread (#54264)
bc86358cf5 : Make run_test.py work even if s3_stat_parser fails to import (#56039)
48a7d69946 : Catch and ignore tracebacks for compilation errors (#55986)
40d74e6f71 : breakup optim, cuda documentation (#55673)
fd15557ccc : breakup autograd documentation (#55672)
bbc4c775bb : [reland][c10d] monitored_barrier: ensure all ranks pass or none do (#55990)
752f5b1030 : [reland][c10d] Log API usage of monitored barrier (#55989)
c8cf9114bf : Include short test suites ln total_seconds stat (#56040)
8df5e61fd6 : [nnc] Do not try to vectorize kernels that use float16 (#55970)
087049000b : Make c10 clang-tidy clean (#55870)
416c18b7c9 : Add a batch_first arg to Transformer / MHA modules (#55285)
ba320cec6b : Prepare for Azure Pipeline for multi-gpu tests (#55600)
1127bab828 : Make GHA for consistency cancel_redundant_workflow return useful err msg (#55961)
3fe4718d16 : Add `padding_idx` argument to EmbeddingBag (#49237)
f94c95a2dd : Revert D23752058: [pytorch][PR] Don't split oversize cached blocks
e7e164f9e6 : [nnc] Enable CPU fusion only when num_threads == 1 (#55621)
88c06d9dfc : Add cuda device synchronization support in JIT (#55469)
1688a5d31a : Cleanup since FEATURE_TORCH_MOBILE is always true. (#55835)
8188d18f8d : ns for fx: add functional conv-relu fusion support (#55433)
1ea95fa5b2 : ns for fx: add test case for linear dynamic (#55432)
784ae23d43 : ns for fx: fix bug in weight extraction testing (#55431)
8b992ab0e4 : ns for fx: add conv1d weight extraction (#55327)
8fc1ca0d22 : fx quant: fix prepacking for F.conv1d (#55311)
457fac0a33 : ns for fx: move more weight matching logic to weight_utils.py (#55288)
13d7b40ea0 : ns for fx: add F.conv2d and F.conv3d weight extraction (#55287)
1fb2abc7ad : ns for fx: rename SugraphTypeRelationship to SubgraphTypeRelationship (#55155)
37a404610f : ns for fx: add allowlist for ops with same signature across dtypes (#55154)
444b318a90 : ns for fx: add linear-relu mod weight extraction (#55080)
2587a28bbd : Improve the instructions on how to build the docs (#56018)
b1d17bc55f : Added OpInfo for torch.sum (#55406)
67dcd62310 : Don't split oversize cached blocks (#44742)
09c0bb4fb9 : Make replication_pad2d structured (#55511)
7985753421 : [package] Add dependency tracing function (#55167)
9f89b53d7d : Synchronize RRef.to_here() CUDA Streams properly (#54932)
c96b5b2a20 : [quant][graphmode][fx][fix] Fix fp16 reference patterns for linear (#55727)
2236f43da0 : [FX] Put tensor metadata into a NamedTuple in ShapeProp (#55930)
48c73d24b8 : Revert D27523060: [c10d] monitored_barrier: ensure all ranks pass or none do
c7aa1026a8 : Revert D27548433: [c10d] Log API usage of monitored barrier
3646fa3621 : Fix tensorpipe test (#55979)
09231b5db1 : [c10d] Log API usage of monitored barrier (#55265)
a5290adea5 : [c10d] monitored_barrier: ensure all ranks pass or none do (#55197)
86368700e8 : [PyTorch] Change MaybeOwned tests to use intrusive_ptr and Tensor (#55684)
cf7c5dcae3 : [PyTorch] Avoid double indirection in MaybeOwned's borrowed state (#55685)
d398a705c6 : Clang-format batchnorm.py and distributed.py (#55971)
132f5c1f36 : Clang-format ProcessGroupMPI.cpp (#55969)
8bdea14cd3 : [FX] Add memory_format to shape_prop (#55815)
2bf26965e7 : Revert D27710107: [pytorch][PR] Update a `batch_first` arg for transformers like GRU and LSTM.
a61d91e803 : Port reflection_pad1d to structured kernel (#55531)
de5e3b5eb0 : Fix OSS flaky test_destroy_full_group on MPI backend in pytorch_linux_xenial_cuda10_2_cudnn7_py3_multigpu_test environment by adding a barrier and retrying MPI_Comm_create 3 times (#55921)
c218ac3bc0 : [NCCL] Join work clean up thread before aborting communicators (#55444)
8596ac186b : deterministic code path for gather_backward for dim = 1 (#55573)
2237754b13 : Update a `batch_first` arg for transformers like GRU and LSTM. (#55285)
b98f011cd4 : cmake: Enable (s)ccache for nccl builds (#55814)
c47cc30bf5 : Skip testing torch.float16 in test_isnan (#55906)
bf8b790ba7 : .github: Bump disk size for auto-scaled workers (#55955)
5a45b1b2f2 : Add nondeterministic alert for `index_put_` when `accumulate=False` (#55827)
5ffc4e3b0f : refactor prepare_for_backward (#54977)
6dd1978d4b : print average duration for caffe2 benchmark
d1fac54f13 : [Pytorch] Only print gradient of a tensor if it requires_grad (#54446)
aceceb3d5c : Reland #50999 (Added pow() on CPU for float16 & bfloat16) (#55280)
de53de39d7 : [PyTorch] Mark borrowed case as C10_LIKELY in MaybeOwned (#55553)
ea446ed600 : [PyTorch] Allow copy operations on MaybeOwned (#55419)
bbdb37b93d : [JIT] Use type cache in erasing shape information (#55828)
8f953ef544 : Increase token count threshold for calling thrust sort in embedding backward (#49913)
72b8864b34 : [caffe2] constexpr const
7ab654afd7 : [TensorExpr] Rename `Tensor::call` to `Tensor::load` to be consistent with `Buf` and `Placeholder`. (#55826)
1263448cb2 : [TensorExpr] Remove mask field from Load and Store classes. (#55825)
754b0d073a : [TensorExpr] Unbreak benchmarks. (#55824)
b01a15d3d3 : [TensorExpr] Redesign Rfactor loopnest transformation. (#55324)
57f795c27b : [TensorExpr] Remove unused `LoopNest::hasLoopBodyFor` method. (#55323)
f61556a7ce : Use autosummary on torch.fft, torch.linalg (#55748)
657b66e87d : [NCCL] Log when barrier guesses device to use (#54991)
0517222dc8 : [package] Correct usage of miniz API in PyTorchStreamReader (#55725)
c3a49cb30c : Better types in fbcode/caffe2/torch/jit/_script.py (#55856)
85b97e449d : [RFC]fix test_ddp_logging_data_cpu with tsan (#54465)
2eebd9fdce : fix ddp logging flaky test (#55414)
800fa5f369 : [ROCM] Enable more dtypes in common_method_invocations (#55808)
18662d4321 : [Static runtime] refactor MemoryPlanner codes to prepare for output tensor memory planning (#55809)
6269efde91 : Add stricter typing to caffe2/torch/distributed/elastic/multiprocessing/errors/__init__.py (#55848)
70a09d97d1 : Use nodes instead of node
2bb58a06ef : move logic to skip a redispatch directly inside of resize_output (#55162)
87fcf3072e : Fix overflow issue in quantized instance_norm/layer_norm/group_norm (#54872)
8c8f8829f0 : Factor out numerical logic (#54479)
381b3d8f4b : Refactor get numerical jacobian to calculate wrt all outputs at once (#54378)
fc6985eceb : [package] Minor fixes to PackageExporter docstrings (#55817)
6a738196af : [package] Create API reference (#55812)
5e625906e9 : Fix lint for redundant-workflows list (#55916)
4753100a3b : Un-ignore F403 in .flake8 (#55838)
75eb026e07 : migrate matrix_exp to opInfo tests (#55533)
99d77c55dd : Automated submodule update: tensorpipe (#55881)
24f9a446c9 : Fix wrong detection of depthwise conv on neon (#55794)
d7d7556f17 : Move tensor implicit conversions to test_builtins.py (#55532)
5dba4ff786 : move topk to use OpInfo (#55547)
192df16a4d : move logaddexp{2} to opinfo (#55535)
505f6f325f : port addcdiv to opinfo (#55518)
9ccae89102 : port addcmul to OpInfo (#55517)
00737efdb2 : [shape inference] Add shape inference func for Bucketize
4b09756d26 : [SPMD] Move a comment (#55877)
56212daf7e : allow tests to run locally without setting environment variables (#55880)
37ac271089 : [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
b4cb020c0f : [Gradient Compression] Make orthogonalization_epsilon configurable in PowerSGDState (#55738)
4cfbb2401f : [ROCM] Re-enable 3 previously faling tests in test_cuda.py (#55813)
5a4e5db9ad : docs: fix profiler docstring (#55750)
e61b4fa691 : [3/n] [torch/elastic] Introduce `EtcdRendezvousBackend`. (#55637)
339d3bf394 : [2/n] [torch/elastic] Introduce `C10dRendezvousBackend`. (#55636)
b3dd8cde61 : [1/n] [torch/elastic] Introduce `DynamicRendezvousHandler` and `RendezvousBackend`. (#55635)
da01f4398b : Add InferenceMode TLS to ThreadLocalState. (#55822)
8fc16da649 : [Hackathon]Move tests for slice to test_slice.py (#55524)
5cd73df8f8 : [Hackathon]Move complex tests to test_complex.py (#55514)
bbcb12614e : Sort slow tests json by test name (#55862)
a756a9e553 : Add device id to ConvolutionParams (#50892)
5ba4cfb7bf : Minor typo fixes in `_script.py` (#55818)
e7bb00cb49 : Add a warning message to retire ProcessGroup RPC backend (#55616)
d805908c34 : [NNC] API to reorder multiple loops (#55568)
48ddc9762b : Upgrade mypy to version 0.812 (#55712)
68e0796466 : [JIT][write path] Make NoneType annotation_str emit `NoneType` instead of `None` (#54746)
a3c06e69aa : [JIT][write path] Fix TupleType.annotation_str to conform to `typing` module syntax for empty tuple type (#54745)
d0cd16899f : rework device type filter rule (#55753)
dab1cdf7cb : Revert D27708944: [pytorch][PR] [OpInfo] move matmul to OpInfo
561b507843 : Eliminate device guard in generic dispatch key kernel wrappers (#55131)
69b7b011dc : [JIT] Add cond-add-relu matching pattern to cover in-place ops (#55458)
566e06eb9b : Use _WeakTensorRef over weakref in test_autograd.py (#55726)
af1a772876 : Disable overloading of std::max & std::min for inputs of distinct types (#55638)
c00b9dc599 : Small typo in comment (#55485)
f7a51b2ab9 : Don't set version_counter on inference tensor for unsafe_ ops. (#55819)
08561cad10 : [OpInfo] move matmul to OpInfo (#55543)
008ec544f4 : [p2c2][operators] Self binning histogram op error msg
01441af763 : Use mypy internals instead of fnmatch for mypy wrapper (#55702)
9593af305c : Automated submodule update: tensorpipe (#55137)
684589e8e0 : [codemod][fbcode][1/n] Apply buildifier
db394efbb9 : Support batched embeddings for 8Bit embedding bag quantization (#55343)
80d04f910c : fix typo in argmax docstring (#55239)
c91cf1e7a9 : Add support for multiple outputs in structured kernels, port fractional_max_pool2d (#55581)
8dd7e1528f : Port replication_pad1d_backward to structured (#55537)
3b96a7965a : Port replication_padding3d to structured (#55499)
b9b103ff94 : Port replication_padding1d to structured (#55481)
5fb1142702 : Add CSR (compressed sparse row) layout for sparse tensors (#50937)
c6d9ca0c2b : [reland]Replace AutoNonVariableTypeMode with InferenceMode in static runtime. (#55731)
211d31afc9 : symeig supports complex backward (#55085)
e05ca753bf : Fix nightly tool for python 3.6 (#55776)
13153924cc : OpInfo porting for msort operator (#55488)
1a8ec9c447 : Add breakpad to Docker image (#55439)
3c6b52ae62 : Cache slow/disabled test files (#55682)
ec9b20ddc0 : fx quant: fix edge case with copynode after user function (#55710)
3f8d476857 : Split out CUDA RPC tests (#55695)
399b66c813 : Ports logdet from method_tests() to op_db (#55743)
66289673f7 : patching requires_grad on DifferentiableGraph (#55701)
19f15317a0 : [BE][Docs] Improve dist.new_group doc (#55660)
a3c062d4f5 : docs: improve torch.matrix_exp() (#55626)
93bf0ae6fc : Remove legacy constructor calls from pytorch codebase. (#54142)
fa29a647db : [JIT] Allow unpacking tuple and assign their values to SELECT-type expressions (#55268)
b80c6f863f : Disambiguate error message for working with not fully refined tuple types (#55745)
facbcec298 : Make leak_corrupted_threadpool non-atomic (#55341)
84a7ab250b : Optimize constructing tensors from external data (#55705)
255494c2aa : torch.testing allclose -> close (#54781)
c9b94a85e9 : change torch.testing helper asserts to checks (#54780)
548765d9a5 : [PyTorch] Add & use inferExpandGeometry_dimvector (#55316)
151869aca6 : [PyTorch][easy] Use sizes()[x] instead of size(x) in addr (#55247)
12c19c398c : [PyTorch] Update expand_size API to match expand_inplace (#55246)
16a9141e2c : [PyTorch] Update expand_outplace API to match expand_inplace (#55245)
6fd875923e : [PyTorch] Add MaybeOwned::operator*() && (#55244)
e8dd65102b : [PyTorch] Use infer_size_dimvector in ExpandUtils (#55180)
fa19b6dd4d : [PyTorch] New expand_inplace API with MaybeOwned<Tensor> and no unary tuples (#55065)
2496a09314 : [Gradient Compression] Fix PowerSGD docstring by removing an extra whitespace (#55666)
5a8cdc2fdb : Revert D27691509: Replace AutoNonVariableTypeMode with InferenceMode in static runtime.
d695ba94f6 : Replace AutoNonVariableTypeMode with InferenceMode in static runtime.
263a15c5aa : [tensorexpr] Add PYTORCH_TENSOREXPR_DONT_FUSE env variable to disable fusion on specified operators - fixed #50757 (#55650)
3e8ebb17aa : [reland][quant][graphmode][fx][refactor] Factor out insert_observers_for_model to a separate function (#54733) (#55307)
d33829f844 : Fix type annotations for state_dict() override (#55704)
fc349cbcde : OpInfo for kron (#55546)
3e9cbe5ef7 : [SPMD] Remove the code branches only used in SPMD mode from distributed.py (#55353)
717d54bc2b : [Hackathon] Add source highlighting check to test_unsupported_ops (#55501)
7485818a3f : Revert D27670883: [pytorch][PR] Added an OpInfo for mm & ported its method_tests
846c8d94c7 : mark embedding backward non-deterministic for max mode rather than all reducing modes (#55574)
7671c15d4f : Make VariableVersion::DISABLED the default constructor for VariableVersion. (#55572)
6e4e3a1159 : Fix annotations in _autograd.pyi (#55706)
ee2de8ae3a : [android] Module load extraFiles (#55644)
9f519d2d2d : Simplify benchmark patterns in mypy-strict.ini (#55700)
6842da6251 : [WIP]Relax some limitations of InferenceMode. (#54403)
91ab0d9680 : [hackathon] port addmv to OpInfo (#55545)
162e1003c9 : [package] fix whichmodule for OrderedImporter (#55646)
6ee333cdb5 : modernize test_sparse (#54572)
fc1d7a85bb : Added an OpInfo for mm & ported its method_tests (#55446)
53f9fc1802 : Port hypot method_tests() to OpInfo (#55140)
f3367f917e : Translate annotation line numbers from merge to head (#55569)
11dd6d3dbb : Mycontrib Added Example for is_tensor API (#55052)
c0379ac83f : Simplify device guard code generation (#55112)
43ede4c2e3 : Add Per Tensor Quantization Support to FXIRImporter (#55405)
076961e8b5 : Add tuple add operator (#52292)
159e1100bf : [fix][tests] fix logic if env variables not present (#55664)
defc649eca : Update to short forms of splitWithTail / splitWithMask (#55542)
35a66db774 : Fix complex mean and reduction tests not being run (#55640)
2a24a2418a : common_utils.py use new file names for disabled/slow tests (#55620)
55d45458bd : [cuDNN] Enable Conv3d channels_last_3d (#48430)
c7312f5271 : Enabled xla device in CI. (#55658)
bbd2b1bd3c : [quant][graphmode][fx] Add shape to nontensor op list (#55529)
0910363e8f : adds data_ptr checks to in-place OpInfo variant tests and out OpInfo tests (#55527)
d2784c233b : Partially migrate sort from THC to ATen, replace the thrust path with cub (#54626)
5b149a0d4a : Migrate cos to structured kernel (#55564)
19e43eaaf4 : Migrate cosh to structured kernel (#55563)
a699cda846 : Migrate acosh to structured kernel (#55540)
6bdf7ef2a3 : Migrate sinh to structured kernel (#55538)
4d449f915f : [quant][graphmode][fx] Separate handling Copy operator to a helper function (#54644) (#55429)
42486963b2 : Integrate NNC conv2d with fuser (#55213)
cb4b3b04a8 : [nnc] Move device type checks from isSupported to typesAreSupported (#55025)
90f848572c : NNC depthwise conv2d implementation (#54920)
6a39613f35 : [BE] Make torch/csrc/jit/tensorexpr/ clang-tidy clean (#55628)
c998f3573c : [Hackathon]Move tests related to containers in typing to test_typing.py (#55504)
2ca45cb9e8 : [hackathon] ci: Only generate cuda tests for cuda configurations (#55522)
3498fde20e : Add AccumulateType in AdaptiveAveragePooling3d.cu (#53607)
cc11aaaa60 : Disallow non-breaking spaces (#55465)
bf882929f1 : [skip ci] Add explanation for why we split TORCH_CUDA_API (#55641)
fc45ff8177 : [skip ci] Document '[skip ci]' (#55418)
364639041f : Revert D27121170: [torch] Add cuda support for segment reduction 'max'
55db156229 : remove test_jit_py3.py entirely (#55560)
305abde976 : Fix nvcc warnings (#55367)
eb5e1fc713 : [torch] Add cuda support for segment reduction 'max' (#54175)
778f9eab6c : Don't switch streams when running Caffe2 ops from c10. (#55121)
adc65974b2 : Run ShellCheck on scripts in GitHub Actions workflows (#55486)
960b40156c : [6/n][torch/elastic][upstream] Move torchelastic/distributed/api to torch/distributed/elastic/launchers/api (#55471)
fd450ff1b9 : Revert D27598681: Add OpInfo tests for torch.addbmm
2564c0c889 : avoid CPU std::copysign segfault when compiling on arm64 (take-2) (#55608)
11add8f45f : Add --suppress-diagnostics option (#55612)
ad823888a1 : [FX] Speed up _Namespace.create_name (#55580)
60263e0f5a : OpInfo porting for torch.maximum / torch.minimum / torch.fmax / torch.fmin (#55129)
f665a7f8a1 : [pet] Set error code in reply file when child process is terminated by signals.
8b5da2f48d : rename .pytorch-disabled-tests to disabled-tests.json (#55618)
3f9492c8b3 : [Hackathon] Modernize API used in NNC C++ tests (1/3) (#55512)
432df40d83 : [Hackathon] Move python builtins to test_python_builtins.py (#55479)
7d56de1834 : DOC: use autosummary on tensors.rst (#55042)
d3d7f57c2c : Fix a problem when removing parametrizations (#55456)
473d193966 : Use mkldnn copy for copy_ when self and src are Mkldnn layout (#54248)
b5647dd52b : Add OpInfo tests for torch.addbmm (#55378)
f1a0b817f0 : [pthreadpool] Apply cap for macos builds (#55435)
f88a3fff65 : Set requires_gradient to help autodiff to prune unneeded gradients (#54374)
37d1b39413 : OpInfo: `atan2` (#55132)
902bf0bbbe : [special] Alias for sigmoid and logit & follow-up (#54759)
f4967d68f5 : make torch.testing asserts importable (#54769)
ffe301846b : [Hackathon] Add error source range highlighting check in test_hash and test_list_dict (#55490)
3517ee1bcb : Fix ordered_dict.h for CUDA on Windows (#55275)
0dff0d1537 : [ROCM] Disable few tests for Magma (#55534)
ec38dda1cc : Remove extra close bracket in extending.rst (#55409)
493a233c04 : [torch/elastic] Revise the rendezvous handler registry logic. (#55466)
8ac0619784 : Avoid infinite recursion in __torch_function__ example (#55391)
b39eeb07ed : Revert D27622277: [pytorch][PR] avoid CPU std::copysign segfault when compiling on arm64 with gcc 7.5 / 8 for CUDA
d6cbecbbb6 : [PyTorch] Reapply D27404164: Devirtualize is_contiguous (#55333)
e359842f23 : Strict typecheck all files in tools/codegen (#55227)
384daacd1e : [Hackathon] Add source range info for tests in test_module_containers (#55500)
b3f1fece1b : [Hackathon] add highlight to test_module_interface.py (#55530)
524dbe1fa1 : [Easy] Fix typo in package_exporter.py (#55551)
f0ce8593db : [Hackathon] Add source highlight check in test_torchbind (#55495)
0f1350055b : [Hackathon] Add source range highlight check to test_with (#55513)
94a3bad343 : [Hackathon] Add source highlighting check in test_type_sharing (#55498)
5f90ed550c : [Hackathon] Add source range highligh check to test_string_formatting (#55491)
b9326d418d : [Hackathon] Add error source range highlighting check in test_scriptmod_ann (#55482)
5d78c4f701 : Use assertRaisesRegexWithHighlight test_class_type.py (#55510)
11889a51ed : Use assertRaisesRegexWithHighlight test_enum.py (#55521)
b665298dc8 : Use assertRaisesRegexWithHighlight test_custom_operators.py (#55519)
469734ae54 : Replace assertRaisesRegex w/ assertRaisesRegexWithHighlight test_builtins (#55496)
b1bae01e0c : Replace raiseRegex with raiseRegexWithHighlight test_async.py (#55489)
a20a72d41b : Replace assertRaisesRegex w/ assertRaisesRegexWithHighlight test_backends.py (#55493)
e6bfff679d : [ONNX] Add hardsigmoid symbolic in opset 9 #49649 (#54193)
2dd7dafb62 : [Hackathon][take2] jit py3 move list dict tuple to jit/ (#55515)
afd549bb8f : [Doc] fix profiler doc (#55449)
1e70d217e7 : Add error message for complex alpha and non-complex inputs (#54964)
dd2bccafc5 : nnc hackathon - use new APIs in tests (#55497)
10abbb812a : Support tensor subclasses in Torchscript (#54817)
b91d48877d : Reland Fix reference cycle in sparse coalesce graph (#55404)
797d0c4c68 : [Hackathon] Add error source range highlighting check in test_recursive_script.py (#55475)
0d1058fbc7 : Revert D27625646: [pytorch][PR] move list dict and named tuple tests out of py3 and into test_list_dict.py
85fcadc059 : [lite-interpreter] speed_benchmark_torch support BUILD_LITE_INTERPRETER (#55402)
1c78a4a733 : move list dict and named tuple tests out of py3 and into test_list_dict.py (#55476)
bfee8d0464 : [Pytorch Edge] Dont cache inflated bundled inputs (#55181)
56cd1d366e : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
f9a0bbbeb8 : [DataPipe] Remove duplicate dataset (#54553)
f5675f8306 : [torchelastic] Make sure torchelastic mp wait for queue to be drained before finishing the process (#55412)
3bb1f59a9c : avoid CPU std::copysign segfault when compiling on arm64 with gcc 7.5 / 8 for CUDA (#51834)
af2beaf675 : [profiler] Fix time discrepancy between legacy and kineto events (#55226)
8e6e7dca09 : [ROCm] if TEST_WITH_ROCM, only instantiate GPU device tests (#55069)
17e5ba44f1 : [testing] Support input samples where `self` is broadcasted. (#53014)
2e9eb5afa2 : Use slow tests stats in common_utils (#55190)
b9a02128bc : split nn.functional (#55038)
263d8ef4ef : docs: fix formatting for embedding_bag (#54666)
6fd20a8dea : Back out "[pytorch][PR] [fix] torch.frac : Handle inf correctly"
c96f076248 : Fix typo in extending.rst (#55408)
ece075195d : [fix] torch.frac : Handle inf correctly (#52678)
bc05867618 : Separate TLS for InferenceMode (#55424)
82006ba460 : [PyTorch Edge] Implement fb::jpeg_decode_to_NCHW (#55251)
8c1a70a7c9 : [A*][Gen-1.5] Add shape inference func for PredictorCall.
87cf277bd7 : Don't allocate result Tensors in out overloads: _linalg_solve_out_helper_cuda (#55321)
acfb05ff43 : Boxing logic forwards arguments to stack (#53624)
34b46359e3 : Fix forwarding/move bug (#53556)
4757d4c077 : Don't allocate result Tensors in out overloads: at::kron_out() (#53640)
db75ebac4a : Don't allocate result Tensors in out overloads: Reduction Ops (#53218)
73aeea648e : Fix Scalar formatting (#53229)
35caae6045 : Fix boxing/unboxing for Scalar bool values (#53228)
add49e7e4e : Enforce PEP263 for PyTorch python codebase (#55346)
34a7b4aabb : [tools] Remove newline from clang-format reference hashes (#55328)
96655e2b81 : Re-enable disabled tests workflow with GHA (#55417)
79fe5b7897 : [Doc]fix torch.ceil formula issue(pytorch#54948) (#55039)
5c402d9026 : STFT: Clarify output shape in documentation (#54877)
933bbbbed6 : [PyTorch] Fix waste in unfold() (#55339)
4cf42fc62f : [PyTorch] Cache self.size(dim) in TensorShape functions (#55336)
8eaa4a97b7 : Back out "[quant][graphmode][fx] Separate handling Copy operator to a helper function" (#55388)
84d18727bd : Added linalg.eig, linalg.eigvals (#52491)
da7a27b847 : [NNAPI] Initial flexible size support (#54701)
1e3b3a4714 : [NNAPI] Create get_next_operand_id (#54700)
ca67c17e46 : [NNAPI] Add fixed-size assertions (#54699)
5936faee7e : [NNAPI] Rename local variable (#54698)
1f1d26137b : [NNAPI] Use code generation to better support list input/output (#54697)
d34d6244e7 : [NNAPI] Use array instead of struct for serializing ints (#54696)
1d1db42340 : Fix NNAPI for internal fbcode build (#48925)
476c597ae6 : [NNAPI] Handle binary ops combining NHWC+NCHW in some cases (#48812)
b057d27b0b : [NNAPI] Add support for unsqueeze, cat, and mean (#48811)
3802edd9ab : [NNAPI] Add unit test (#47521)
8fcf9ca341 : [NNAPI] Update support for Linear (#54695)
8d960f7043 : [NNAPI] Fix hardtanh (#47520)
beca1fdbec : [NNAPI] Fix MUL op (#47519)
38a3c28f17 : [NNAPI] Remove solid weights support (#47518)
1be909f074 : [NNAPI] Fix models with no weights (#47517)
0e7af36acd : Make bundled inputs work with quantized zero inputs (#47407)
ad5dc84ed3 : [vulkan] Add Winograd convolutions (#54639)
20d7916a6a : [Pytorch Mobile] Fold Conv BatchNorm for functions besides forward (#54619)
a9bcab46ff : Revert back changes in test_custom_ops.cpp. (#55350)
a0d9776104 : [JIT] Include conv3d in the conv-add-relu fusion (#54772)
ec80981d28 : Revert D27246997: [pytorch][PR] Fix reference cycle in sparse coalesce graph
ae3a876c9c : Revert D27572158: [torchelastic] Make sure torchelastic mp wait for queue to be drained before finishing the process
8e78a1b084 : [Resubmit] Fix for incorrect usage of logging in torch/distributed/distributed_c10d.py (#52757)
e9c6a51100 : [torchelastic] Make sure torchelastic mp wait for queue to be drained before finishing the process
f8788d5188 : Upgrade onednn to v2.1.2 (#54956)
8243ba7205 : Add MonkeyType dependency for testing on Linux (#55305)
158cdece65 : Correct many OpInfos "test_out" skips. (#55141)
815bfad28c : Fix reference cycle in sparse coalesce graph (#52874)
e5f66f0059 : Optimized generic interpolation using TensorIterator (keeps original 2d/3d channels last impl) (#54500)
87d55058f1 : Fix the clang-tidy diff SHA for using PR merge (#55318)
bf70fe69ae : Revert D27442325: [torch/elastic] Revise the rendezvous handler registry logic.
c3d0607ffa : [Static Runtime] Make sure the copy version of the op exist in ReplaceWithCopy (#55337)
1b4bb3691c : [Gradient Compression] Update _powerSGD_comm_hook_wrapper to only expose 2 most critical hyperparameters (#55295)
cc4036905c : [Gradient Compression] Update the default value of start_powerSGD_iter and update the docstring (#55272)
3551bd31be : [PyTorch] Lite interpreter with a backend delegate (#54462)
7d9a619796 : [PyTorch] Fix bin hash comparison failure in clang format script (#55281)
df299dbd7d : [torch/elastic] Revise the rendezvous handler registry logic.
359d0a0205 : [torch/elastic] Improve the implementation of `RendezvousParameters` and add its unit tests. (#146)
7f06c65a4c : [torch/elastic] Improve the implementation of the utility functions and add their unit tests. (#54804)
de7f05b9eb : [torch/elastic] Expose a `stderr` parameter in `EtcdServer`. (#54805)
bad8d34780 : [torch/elastic] Revise the rendezvous exception types. (#54803)
5584332180 : Wrap cub in its own namespace (#55292)
0e03a2978a : [DDP] Call ensure_prior_reduction_finished within lock (#55074)
697b130374 : Add some missing types to torch (#55184)
0521e420fd : [Static Runtime] Temporarily disable fusion tests (#55342)
e0c5d0ea15 : Add tutorials to pipeline docs. (#55209)
15b087cdd2 : [fx]Allow rewrite a symbolic traced module (#54011)
fd02fc5d71 : Port put_ and take from TH to ATen (#53356)
bf37bf7da4 : Make JSON files more human readable (#55335)
b986a76d91 : Clang-format distributed.py (#55254)
6a2f046504 : [SPMD] Restrict DDP communication hooks to SPSD mode (#55253)
d690973295 : irange on int64_t (#55148)
ef262575dd : [pytorch] Fix printing of optional string arguments in schemas (#55196)
2ee02b30b1 : Replace rounding_mode="true" with rounding_mode=None (#51988)
3acbaf834e : Make structured functions properly check device/dtype of explicit out args (#55150)
45aaaef22c : Fix timer overflow on small, fast snippets (#55200)
7613b1150b : [docs][quant] Add fx graph mode quant api doc (#55306)
e61f5b586b : Revert D27404164: [PyTorch] Devirtualize is_contiguous
62aa924368 : [PyTorch] Devirtualize is_contiguous (#54896)
d0ffada9ee : .github: Add scale-config.yml (#55315)
f4a618bb5a : [PyTorch] Don't create intermediate Tensor for at::result_type w/Scalar (#55232)
fffdc5fa2f : docs: Pin docutils to 0.16 (#55309)
5339d534a3 : Add runner for instruction count benchmarks. (#54652)
c5a1eb4156 : extend benchmarks (#54651)
c9b214f9fb : Add Python-3.9 PyTorch M1 nightly builds (#55278)
a102adb55e : Automated submodule update: FBGEMM (#54575)
7fd3c030ef : Write OpInfo for dist (#55092)
6c8270ea21 : fix bc breakage of #52043 (#55303)
ebf40e6ed2 : CI: Run test_lite_interpreter_runtime from built lib directly (#55291)
980d6f2589 : torch.linalg.det (#53119)
197f9f0826 : Merge CUDA Streams and Events (#53902)
5e72571df3 : Fix wrong changes from #54103 (#54610)
f3969d3db6 : Fix bug in self.assertExpectedInline (#55149)
edb919376d : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
c821b83ab3 : [typing] make mypy-protobuf output compatible with pyre for caffe2 type stubs (#55294)
3d492b0697 : Revert D27505153: [pytorch][PR] OpInfo: `atan2`
bcdcf347cb : Add cusolver potrs and potrsBatched to the backend of torch.cholesky_solve (#54315)
0a81034dd0 : Port atan2 to structured kernel (#55130)
d2a58bfe6f : Add mkldnn tanh operator (#54656)
19a0eb4cdb : [c10d] Monitored barrier: option to collect all failed ranks (#55010)
0ec1af4b7e : [c10d] Enforce order of waited ranks in monitored barrier. (#55009)
e309ab8510 : OpInfo: `atan2` (#55132)
c0ac0fef4e : Revert D27448156: irange for size_t
e3691be2d9 : Dump C++ stack traces of all threads for distributed tests. (#55003)
8ed20b3f65 : Leak Caffe2 threadpool in child processes right after fork to prevent segfault (#54895)
8377e6221a : Revert D27478225: [pytorch][PR] Added pow() on CPU for float16 & bfloat16
a84c92b78b : [package] populate a special attribute on imported modules (#55255)
041b4431b2 : irange for size_t (#55163)
322854d2f0 : [SPMD] Error out SPMD in C++ Reducer (#55212)
4170a6cc24 : Migrate `mode` from TH to ATen (#52043)
e8dbd0e1a0 : [TensorExpr] Minor cleanups in kernel.cpp. (#55257)
641d4ff160 : [FX] Add stride to shape_prop pass (#55108)
28531c97b2 : [caffe2] Shape inference for Transpose (#55188)
6d030c14cf : Added pow() on CPU for float16 & bfloat16 (#50999)
c549a147a9 : [DataLoader] Typing Doc (#54773)
0b1c3dfae4 : [DataLoader] Typing Enforcement for DataPipe at runtime (#54544)
1535520f08 : [DataLoader] Typing Enforcement for DataPipe at construct-time (#54066)
44edf8c421 : [DataLoader] Typing Enforcement for DataPipe at Compile-time (#54020)
560e3be587 : [DataLoader] Implement issubtype for type hints (#54299)
159fdde9ae : Support needsOutputs for RecordFunction and ObserverUtil improvements (#55012)
2452182e6c : [SPMD] Remove test_grad_layout_1devicemodule_2replicaperprocess (#54826)
e589247a19 : [SPMD] Change assertions to raising value errors in distributed.py (#54825)
6a40339920 : [SPMD] Error out SPMD mode (#54454)
6e33420436 : Add embedding bag support to fx_glow (#54909)
29916dbf1e : Clang-format _distributed_c10d.pyi (#55220)
91a809bbd7 : [c10] Adjust macro check that detects if glibc++ use c99 csqrt (#55177)
fb64caedb5 : Don't fail "Add annotations" if "Lint" is canceled (#55242)
38a08a49ea : Flip clip_grad_norm default for error_if_nonfinite to false (#55169)
6866c033d5 : [JIT] Add recursive scripting for class type module attributes (#55124)
6e2d020037 : Add interpolation kwarg to torch.quantile (#49267)
e593044748 : [Gradient Compression] Update a warning in ddp_comm_hooks.rst (#55031)
7ab53eb960 : [StaticRuntime] Unbreak benchmarks. (#55199)
a0bb0968d5 : [PyTorch] Don't bother with SmallVector in TensorMaker (#55125)
02af4b511d : Enhance Pipe docs to explicitly mention RPC initialization. (#55187)
24c904951c : Replace AutoNonVariableTypeMode with InferenceMode in fbcode. (#55114)
181de40688 : Split copy_ kernel to InplaceOrView. (#55133)
09670c7d43 : Don't globally disable any ShellCheck warnings (#55165)
978fca64a6 : Revert D25399470: add channels last for MaxPool2d
e406d4e6cb : Modified lstsq_helper to accept lapack error codes tensor (#54720)
8062545c63 : ns for fx: weight extaction for conv1d and conv3d (#55079)
80b1b7e4b1 : ns for fx: ensure kwargs are handled when graph matching (#55078)
a590fa7af4 : ns for fx: clean up debug print statements (#55077)
f6b25e758d : ns for fx: move it to top level file (#55060)
c6cb99a6c7 : ns for fx: weight extraction for nni.ConvReLU2d (#54335)
5319d17be4 : ns for fx: make input logging work for multi node subgraphs (#54327)
b8019cee0e : ns for fx: make input logging work for multi-node subgraphs (#54326)
cbcde79023 : ns for fx: refactor test cases (#54280)
757e3cbf82 : ns for fx: add support for shadowing linear fp16 patterns (#54275)
f43eb59a68 : add channels last for MaxPool2d (#48917)
bdb225e9f0 : Revert D27478436: Use tensorpipe::Buffer::device() instead of tensorpipe::Buffer::deviceType().
3e185253b6 : Use tensorpipe::Buffer::device() instead of tensorpipe::Buffer::deviceType().
61914cb2fa : [ATen][qembeddingbag] Avoid tensor refcount bumps (#55023)
93d0f636bb : [c10] Add default constructor to Maybeowned (#55128)
ec609e7420 : Adds torch.* API section for TorchScript Lang Ref (#53236)
271879fe67 : [PyTorch Edge] Provide a method ObservedOperators::getUnobservedOperatorList() so that model tracer can empty it out during tracing (#55017)
09f1f14569 : Transition to new tensorpipe::Pipe API. (#55193)
b074a24394 : Port torch.copysign method_tests() to OpInfo (#54945)
ed4a1d54a7 : [OpInfo] Enable jit tests for multi_dot (#55147)
ff6b3c76ab : [TensorExpr] Add TORCH_APIs to all expr classes. (#55002)
1ccaec0238 : [TensorExpr] Cleanup IRNodeType enum. (#55001)
f8f30a5e27 : [TensorExpr] Remove stale docs from DesignOverview.md. (#55000)
bdbfb2a035 : [TensorExpr] Nuke BaseCallNode. (#54999)
0b75f862c7 : [TensorExpr] Nuke FunctionCall. (#54998)
688e350725 : [TensorExpr] Nuke DepTracker and findAllNeededTensors. (#54997)
0d47374c54 : construct only necessary elements in OffsetCalculator (#55107)
5610e8271b : Fix skip_if_not_multigpu decorator (#54916)
8822c7e052 : Update TensorPipe submodule. (#55164)
047a487b07 : Fix accidental Flake8 excludes (#55178)
3575e71be8 : [DDP Logging] Log use of uneven inputs API (#54919)
057ec81b17 : [PyTorch] OperandInfo ctor should take rvalue reference (#54972)
dded5d72a4 : [PyTorch] Move Tensor::has_names inline (#54965)
22f3b4eaa8 : Tensor::register_hook: Avoid wrapping hook in two levels of std::function (#53917)
8dc29e8a4a : [PyTorch] Allow IValue to construct from Tuple with fewer copies (#54534)
faa4da49ff : Add code to ensure workflow consistency for autocanceling (#55171)
2962fee99a : Fix/suppress a type warning in PyTorch (#55142)
787854ce41 : [ZeroRedundancyOptimizer] bounding the multiple gpus unit test to 4 gpus, hardcoded values (#54788)
0a329c66bf : [PyTorch] Remove stray comments in TensorBody (#54985)
84ad5df8e3 : Correct the name of the label in auto-label-rocm (#55170)
dfa2daac1d : [PyTorch] Remove outdated C++11 note on C10_DEPRECATED (#55061)
070169e4d0 : [ATen] tensor.contiguous() -> tensor.expect_contiguous (#55022)
b74795c460 : [Pyper] resize_as_ -> resize_ (#55098)
f34de6a9f4 : Modified lstsq_helper to accept rank and singular_values (#54719)
cdd9911a12 : Revert D27470071: [pytorch][PR] Trigger azure pipeline for multi gpu tests
0eba63ec93 : Improve testing documentation in `CONTRIBUTING.md` (#54904)
1b2b3ca86d : Language Ref Python Builtin Functions and Values (#52830)
c64e006fc3 : Fix security of ROCm labeling workflow (#55157)
53609b4cac : fix typo in ReduceMinMaxKernel (#54984)
a4125876c9 : Move BackendSelect to default_included_set. (#55117)
2798f38c86 : Added checks for dtype and device of OpInfo's sample_inputs (#54949)
36c27fd0ac : SVD docs improved (#54002)
6b20046491 : Pin ShellCheck version to v0.7.1 (#55109)
8d5df95551 : Make TensorIterator, SparseTensorMath and UnaryOps clang-tidy clean (#55087)
f0dafeb0cb : Trigger azure pipeline for multi gpu tests (#52490)
69c5fd1e00 : SyncBatchNorm.forward() to handle optional weight (#54568)
f83668b4e5 : Update release notes scripts following runbook update (#54594)
967e59e557 : [tensorexpr] Add sliceHead/sliceTail APIs with short parameter list (#55115)
1324b0dd44 : [FX] Adds C-level monkeypatching of `torch.randn` so that we can capture it during tracing. (#54060)
0cfd9e881f : [static runtime] fix out variant for 4bit embedding bag (#55096)
b880854f31 : port copysign to structured kernel (#55040)
8b02d1207b : Fixed OpInfo jit tests failing for TensorList inputs (#54954)
9d6a81d1a6 : Avoid aggregate initialization for tensorpipe::{Cpu,Cuda}Buffer and tensorpipe::Message::Tensor. (#55136)
204ac21bf1 : Revert D27449031: [pytorch][PR] [ROCm] use hiprtc precompiled header
3036777305 : Replace torch.chain_matmul calls to torch.linalg.multi_dot (#55064)
d98072b027 : Deprecate torch.chain_matmul in favor of torch.linalg.multi_dot (#53453)
5d68b3695c : [Relanding] Implemented torch.linalg.multi_dot (#52859)
5a1191d050 : Check exception messages in embedding_bag_proxy unit test
50cb75edce : [tensorexpr] Add python bindings for TensorExprKernel (#54450)
ba95e08a95 : [PyTorch] Use DimVector for inputs to as_strided that don't grow dim (#55016)
6145ac07b5 : [caffe2] Reintroduce Log1p operator (#55073)
547346d663 : [caffe2] Fix -Wundef
058357a439 : [Gradient Compression] Report compression rate for batched PowerSGD hook (#55103)
d2aab53dc2 : [PyTorch] Check is_same instead of data_ptr in addmm_out_cuda_impl (#55111)
7824d8277a : [ONNX] Fix export of copy_ operator (#51938) (#54870)
40869884cd : Add outer export to onnx (#53603) (#54869)
c5f3d92816 : [ONNX] Update scripting docs (#54634) (#54868)
cb0cee4a3d : [ONNX] Replace decomposeLinear pre process pass with a symbolic (#53077) (#54866)
849dcb8b69 : [ONNX] Fix if output shape mismatch error & Fix graph input directly used as output (#53219) (#54865)
cd9dd653e9 : [ONNX] Support primitive type input/outputs and attributes (#53550) (#54864)
ce48b14060 : [ONNX] Improve index_put symbolic to handle singular Bool updates (#53690) (#54863)
6c235ef267 : Allow `std=0` in `torch.normal`, and error if `std<0` (#51317)
15f04e3466 : Revert D27408378: [quant][graphmode][fx][refactor] Factor out insert_observers_for_model to a separate function
8b8c4096ee : Added OpInfo gradcheck_wrapper to replace output_func (#54914)
790b69e096 : Language Ref for Statements in Torchscript (#52847)
c445f4ee93 : [quant][graphmode][fx][refactor] Factor out insert_observers_for_model to a separate function (#54733)
c57541ce06 : [quant][graphmode][fx] Separate handling Copy operator to a helper function (#54644)
c0d6dbdce4 : [quant][fx][graphmode][refactor] Change activation_post_process_map to track the observer name instead (#54643)
c2adedf6fe : [quant][graphmode][refactor] Remove reduandent code (#54073)
55544cb13a : [quant][graphmode][fx] Add support for one value being quantized with different qconfigs (#53586)
eb52e36460 : Revert D27469727: [pytorch][PR] [android] fbjni from prefab dependency 0.2.2
c85d3f501f : Move shape prop inside acc_tracer (#55091)
0d80f378f6 : fix boto3 resource not close (#55082)
f29039677d : Refactor get numerical jacobian (#54092)
70af5db7ca : Remove use_c10_dispatcher option (#54969)
908a74e8c1 : [Refactoring] make transformations return whether graph is modified (#54777)
a37fbf9b45 : [Futures] Bump log verbosity when ignoring cb errors in python future. (#54476)
28daa6b7dd : [Futures] enhance error handling in then() (#54475)
63c70ae032 : various overhead improvements to cuda addmm (#55026)
8eb9a934bc : Clarify tools/test_history.py output for re-runs (#55106)
2726de0119 : [Pytorch] Expose ops present in dispatcher (#54791)
5c3963373a : Handle 1D input for xnnpack::linear (#54986)
fb1c193eed : Simplify convolution double backward gradInput formulas (#54840)
26c1e2ee83 : [ROCM] enable miopen for rnn f16 (#52475)
7f87358840 : [android] bump nigtlies version to 1.9.0 (#55076)
bcb4583170 : [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
b64d775636 : Adding workflow to auto-label PRs with ROCm (#54989)
507b46f23e : [android] fbjni from prefab dependency 0.2.2 (#55066)
0bd96458ba : Revert D26820202: Support mix of int32 and int64 offsets/indices for EmbeddingBag and its variants
8ad32dbbd7 : update build tutorial - choose the correct VS version (#54933)
2a7df657fe : [ROCm] use hiprtc precompiled header (#54350)
07602bf7e1 : [caffe2] Use the CXX11 version of the USE_C99_COMPLEX macro
f71a0daeb7 : Use faulthandler to dump traceback of timed out processes in unit tests. (#54818)
bab8a1a81e : [reland] Add annotations to PRs from forks (#54779)
57519e705a : Link onnx_library when BUILD_TEST=0 for Windows (#51937)
cff266544a : Fix minor style/typos problems in comment_device_type.py (#54768)
43d4f3b8d0 : Implement public API InferenceMode and its error handling (#55008)
f1f3c8b0fa : Adding PyTorch + DNNL + AMD BLIS path (#54953)
a74b10def9 : Keep Markdown ToCs up to date (#54974)
7fc03dd7c9 : Back out "[pytorch][PR] Merge CUDA Streams and Events" (#54996)
24bfcd537e : [FX] Added FX prepare_for_inference for Intel CPUs (#53805)
a0ae3e520f : [Pytorch Mobile] 'fix' filter of named parameters for FL (#54633)
0dd873bdd5 : [ROCm] add 4.1 docker image (#54628)
aeedd5c7df : cmake: fix ONNX_NAMESPACE if USE_SYSTEM_ONNX (#54973)
449a9514d1 : Update Kineto submodule (#55011)
99a423f8fc : Automated submodule update: tensorpipe (#54970)
09756e7280 : Revert D27370295: [android] fbjni android use prefab dependency, version 0.2.2
25e07c6e91 : Revert D27422219: [caffe2] Support Log1p operator
6d87b3667f : Added support for TensorList inputs in OpInfo (#54922)
8a170fbacd : [package] fix mangling issues with TorchScript (#54915)
444e5f0b60 : Add Type System (I) (#53244)
4865195cf4 : [PyTorch] Add DimVector variant of infer_size (#54882)
2bee09a577 : [android] fbjni android use prefab dependency, version 0.2.2 (#54792)
854c92078a : Fixed the default size of the workspace array for MAGMA's SVD (#54875)
1dffbe759b : [ROCm] utilize PUBLIC vs PRIVATE linking to avoid incorrect dependencies (#54727)
d4c37b314e : Mark redispatch functions with TORCH_API (#54966)
b907d6e3b6 : [ROCm] skip some tests to enable 4.1 CI upgrade (#54536)
3baeeb3f57 : Added Azure Pipelines build steps for PyTorch (#54039)
f956b7524e : lazily init AliasDb and add `changed` status to CSE (#54776)
2df4acd025 : Remove hacky wrapper for about 70 kernels (#54898)
1bf57430f1 : Remove hacky wrappers for 21 operators (#54819)
d92e2520de : [caffe2] Support Log1p operator (#54968)
d490e0120f : [PyTorch] One less refcount bump in linear() (#54936)
dde7fff0e9 : [PyTorch] Avoid refcount bumps in addmm_out_cuda_impl (#54935)
ea37fe34ff : [PyTorch] Avoid refcount bump in TensorArg (#54934)
5b448cf21a : Revert D25966661: Support needsOutputs for RecordFunction and ObserverUtil improvements
23b15ef98a : test_c10d: use with_nccl_blocking_wait decorator (#54742)
3f1cd2e3a0 : test_c10d: Run tests with nccl_async_error_handling (#54741)
0e543b2b00 : Provide a decorator to set/unset nccl blocking wait for tests (#54740)
920eb01e2e : Add scatter_add to amp docs (#54908)
4694452d08 : [complex] `masked_fill`: Complex Autograd support and update masked_scatter skips. (#54244)
0e43a73f76 : Support needsOutputs for RecordFunction and ObserverUtil improvements (#54442)
85c056a508 : [JIT] Add EliminateExceptions pass. (#54730)
5bcbbf5373 : Lint trailing newlines (#54737)
eafa235582 : Clarify and document commit choice for CI jobs (#54967)
18e61d1ce9 : Improve placeholder matching in subgraph rewriter (#54958)
f5d6b90c35 : Add a missing sys import in test/distributed/rpc/test_tensorpipe_agent.py (#54925)
46c27ea84d : Enabling OneDNN for group conv (#54890)
d49beba071 : [pyper] out variant of sigrid_transforms_torch_bind + ListUnpack (#54761)
d60874354f : [docs] Add updated TorchScript language reference section for types (#53673)
9f93d82907 : OpInfo: Add opinfo for cum{min,max} and minor fixes (#54762)
4e110528bd : Added cuSOLVER path for torch.linalg.eigh/eigvalsh (#53040)
c9d0c855f7 : [special] Alias for special.expm1 and special.exp2 (#54670)
75ed6fbd91 : Fix CUDA 11.2 jobs for Windows (#54955)
728d18f976 : Enable USE_KINETO (#51273)
9b9e19a808 : Fix test_variant_consistency_jit_addmm for complex types (#54917)
6c8d783830 : Generate no-op meta functions for all inplace operations (#54901)
7c0941ee63 : Clang-format powerSGD_hook.py (#54839)
6c31f56bf4 : [Gradient Compression] Add cuda.syncrhonize back to batched powerSGD (#54838)
6f63126b5c : [quant][fx] Add pass in convert to fold quant-dequant sequence (#54860)
a7dc0ab845 : [quant][fx][pyper] Get first linear use of quantize_per_tensor for FQN (#54859)
c690ed0ae8 : Fix override for __iter__ (#54702)
2503028ff5 : Fix ConvTranspose with padding as a list of values (#54911)
0269a5f481 : Re-enable `cmath.sqrt(complex(-1,-0.0))` test (#54923)
46e7f6773f : [Static Runtime] Check for inplace ops explicitly in ReplaceWithCopy (#54657)
32bb5c3609 : [iOS GPU][Kernel] Fix the softmax kernels (#54519)
626bb3d310 : [iOS GPU][Design] Use function_constants to simply shader kernels (#54518)
f9097c43b9 : Support mix of int32 and int64 offsets/indices for EmbeddingBag and its variants (#53655)
5c12d97d96 : Add script to export a JSON of slow test case times (#54907)
a1bd7918cc : [docs][quant] Fix FX Graph Mode Quantization tutorial link (#54715)
fbaad8c0f9 : [PyTorch] TensorIterator::output should return const reference (#54811)
1267efce75 : [nnc] Add a default constructor for Placeholder
1bccd48465 : Allow creating SugaredValue for a complex valued IValue and deserialization logic for "infj" and "nanj" global constants (#54328)
f4dfa02c03 : Add documentation for torch.jit.Attribute and torch.jit.annotate (#54485)
1a0b77e7c4 : Suggest TORCH_LIBRARY_FRAGMENT in duplicate TORCH_LIBRARY error message (#54883)
e829754992 : [PyTorch] Inline Tensor keyset-checking methods & similar getters (#54806)
49b07ac5d1 : Enable complex autograd for `index`, add `index` and `index_put` OpInfos (#54562)
d5564618d0 : [NCCL][Blocking Wait] Log set exceptions when checking for exceptions in (#54558)
028d2d6e63 : [NCCL] Enhance watchdog to log exceptions (#54557)
8c13dde458 : [DDP] Remove redundant pass statement (#54219)
d185719455 : Expose dist.monitored_barrier() API (#53787)
4541f60390 : Gloo-only CPU-based monitored barrier (#53773)
8e89d30f09 : [nnc] Lower scalar constants as doubles/longs (#54824)
7c8b0f2600 : Test torch.chain_matmul for complex dtype (#54885)
8cf97cbb55 : [ROCm] add 4.1 to nightly builds (#54635)
ff537b77ff : [PyTorch][easy] Move more strings in torch::class_ (#54547)
51fa25443f : [PyTorch][easy] Move strings into class_::defineMethod (#54533)
67d44377e3 : Remove hacky wrapper for about 100 kernels (#54751)
ec1bbe130c : Revert D27364777: [pytorch][PR] Add annotations to PRs from forks
3187a71bbe : [test] vc toolchain modification (#54589)
263180d7fc : Revert D26973911: Implement public API InferenceMode and its error handling
1551bcc670 : change logging.warn to logging.warning (#51727)
9ef53f7e0f : docs: remove extra backticks in `narrow_copy` (#54669)
63997db6ec : [JIT] fix freezing with mkldnn tensors (#54632)
74e01c1dd9 : docs: change to FloatTensor for `requires_grad=True` (#54658)
6dedecc77c : docs: add `memory_format` in torch.empty (#54664)
02f5c50828 : docs: separate autosummary for flatten layers (#54663)
7eef0c3ab5 : docs: add functional group_norm (#54673)
475251631b : docs: reference links to serialization.html (#54659)
59d1f08b4c : docs: fix docstring signature of torch.{onnx,utils} (#54662)
84232b762b : docs: add `reset_peak_memory_stats` in cuda.rst (#54668)
12a454788b : docs: fix parameter in `torch.take` (#54667)
56f12e6199 : Add annotations to PRs from forks (#54779)
68af6d9565 : Use custom sqrt if stdc++ does not fall back to C99 csqrt (#54820)
717e70a824 : (BE) Refactor get-test-times-from-S3 into s3_stat_parser (#54808)
3ddc6174da : Raise error in clip_grad_norm_ if norm is non-finite (#53843)
1f36ce6e4d : Restore storage on meta tensors; increase meta coverage (#53973)
94efb48e16 : Adds the cfloat dtype to the eager and jit variant consistency tests (#54854)
2fd1eb3a9f : make all arguments in test_history.py optional kwarg (#54797)
6d2bf76bba : Using latest windows CUDA exe (#54596)
86b1f4e9f2 : fix silent correctness bug with channels_last usage of upsample cuda kernels (#54744)
4e5af53d29 : Deprecate legacy constructor `torch.Tensor()` (#54414)
a0a7a2d648 : [quant][fx] store dtype, axis as literals in the graph (#54624)
9e6877c5c5 : Port torch.outer method_tests() to OpInfos (#54798)
b7c5d57563 : [testing] support op with args/kwargs in test_unary_ufunc (#52194)
07350da3b4 : enable bf16 for cat serial kernel (#54674)
01b1557014 : enable bf16 vec copy (#54671)
0527d14248 : [numpy] Add torch.take_along_dim (#52833)
eec48303c0 : Make index_add take a scalar argument alpha (#54176)
695eef05a4 : optimizer exploration - v1 and v2 + fix position_weighted optimizer + decoupled weight decay (#54042)
5c3d80d8fa : [DDP] Mark a few variables as const in reducer (#54764)
671f80a313 : [c10d] s/torch::autograd::variable/at::Tensor/g (#54763)
7caa464631 : Implement public API InferenceMode and its error handling (#53343)
2309173143 : Compute Tensor::toString() without reference to backend (#54711)
f067972527 : Make memory overlap a little less precise so it works with null data ptr (#54710)
c782949e17 : Make the fuser raise NotImplementedError when unknown device is hit (#54709)
6445c9a1cb : Avoid testing device in cdist when called in a "Math" context (#54708)
c9e0aab2bf : Make convolution_overrideable default implementation raise NotImplementedError (#54707)
ed560cf2c6 : Disambiguate where 'Doesn't run' error message comes from (#54706)
b5ab348253 : Fix missing format string qualifier (#54705)
d9a7c758e1 : Rename linalg.det test so that it generates a valid method name (#54704)
05fa570bbc : Add empty_generic, which allocates an empty tensor in a device-generic way (#54703)
90e70ace9b : Fix some more native_functions.yaml mistakes (#54597)
e70f3d1189 : Nasty little hack to preserve NotImplementedError raised in interpreter (#54627)
e5634f5f25 : More types for torch (#54037)
d59fb7a2f6 : Add complex autograd support for `torch.unfold` (#52999)
6eaf96961d : [codemod] fix tautological imports
65781f94ad : Enable faulthandler for distributed tests. (#54531)
1d5cc6c53d : Move requires_grad_/backward out of VariableTypeManual. (#54543)
d63dd07f06 : Add JIT support for cmath unary ops (#54089)
8eb896ce99 : Improve error message while setting error twice. (#54464)
f612d4eb58 : Add 'remote_parameters' and 'get_module_rref' to RemoteModule docs. (#54645)
316804e373 : [test_c10d] Add wait in nccl high priority stream test (#54714)
e4d19798f3 : [nnc][tests] Convert a bunch of FileCheck to checkIR
24f589df44 : [nnc] Disabled test case for failure in implementing conv1d (#54756)
e542e67253 : [nnc] Test case for computeAt with reduction (#54755)
71201340c6 : Remove 13 hacky wrapper not required (#54793)
2620bce42a : [ROCM] load only hipfft separately past rocm4.1 (#54349)
0e320ddb36 : Lazily initialize alias db constant prop (#54640)
ba1f640928 : Optimize memory usage in logsumexp_out (#51239)
f22bad752d : Move some variable ops out of VariableTypeManual. (#54459)
394b720e38 : Fix raw_deleter() bug with PYTORCH_NO_CUDA_MEMORY_CACHING=1 (#54775)
416ba5c48f : Merge CUDA Streams and Events (#53902)
593295daac : Migrate kernels with TensorOptions to C10 full dispatcher (#54539)
ee73c752c6 : Delete unnecessary empty file (#54796)
14a2501786 : Update max-version in setup.py to 3.9 (#54690)
3ed6e0ce6c : Remove ops from the complex_list for which the method_tests have been ported (#54754)
3db2333d09 : [JIT] Make NoneType annotation_str emit `NoneType` instead of `None` (#54642)
1e9ad6e5cd : [JIT] Fix TupleType.annotation_str to conform to `typing` module syntax for empty tuple type (#54641)
df70e2fde5 : Refactor get analytical jacobian (#54049)
0435059ddf : docs: fix docstring signature in `all_reduce_multigpu` (#54665)
db3a9d7f8a : Fix __torch_function__ tests. (#54492)
13b1ca9466 : Rename DefaultBackend to CompositeExplicitAutograd (#54470)
70dd2a2bdd : Add myself on all native_functions.yaml code reviews (#54595)
5c6208abba : remove docker dir (#54729)
4399aadcc7 : add sndfile yum package to centos dockerfile (#54687)
20d8fe83cd : [TSAN] Suppress data races in caffe2/c10/util/Logging.cpp
2be1b486ce : Drop Python 2 support in common_device_type.py (#54691)
f6634be4c2 : Fix OpInfo failing without scipy (#54735)
645119eaef : Lowering NLLLoss/CrossEntropyLoss to ATen code (#53789)
d4045e9aa1 : initial commit to refactor all s3 access codes to s3_stats_parser (#54681)
1126d51de9 : Remove useless contiguous calls from torch.matmul (#54616)
5e62da2efd : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
b7b481bd07 : [PyTorch] Enable template build at aten operator level (#53801)
0a18211989 : ns for fx: add weight matching for linear fp16 emulation (#54257)
182d8c375c : ns for fx: add partial support for subgraphs with base_op_node (#54254)
454832e5fa : ns for fx: create subgraph type (#54253)
9e8e744efe : ns for fx: move shadow lstm test to new API (#53828)
cfe7364809 : ns for fx: move shadow activations linear test to new API (#53819)
3dc8ba27a5 : ns for fx: move shadow activations conv test to new API (#53818)
52a8075f16 : ns for fx: add support for lstm activation matching (#53779)
c656a5befa : [FX] Normalize Python operators to `torch.` ops when called with Tensors (#54236)
b81e10a291 : fx quant: fix bug with fusion patterns and disabling quantization (#54654)
a28c7db9f9 : [FX] Garbage collect values in Interpreter (#54726)
fd58ececab : Pin autocanceling GHA repo to specific commit (#54738)
9db4802184 : [fuser] Support bfloat16 (#54571)
6b7652e26c : [DDP logging] Prefer use of c10::Join (#54649)
dfc7fa03e5 : lu_backward: more numerically stable and with complex support. (#53994)
3bb0f1f343 : Automated submodule update: tensorpipe (#54686)
68bdeef2ce : [CMake] Simplify CPU architecture detection logic (#54637)
911b8b1bfc : [package] rename PackageExporter.external to PackageExporter.extern_modules (#54601)
9c60fc9cd9 : Fix broken javadoc URL in README (#54434)
a7c7fc96ff : Add doc warnings for default SELU gain (#54057)
f1edaabc35 : Simplify creation of unary structured kernels. (#54592)
71b9f2dd76 : Add GHA to cancel redundant GHA workflows except on master (#54689)
53596cdb73 : Remove hacky wrapper for about 100 kernels (#54367)
d12118c0aa : Handle stride > 1 with im2col in CUDA thnn conv2d (#54080)
0b0a5dd35f : Revert D27327999: [pytorch][PR] Cancel redundant GHA workflows
f251bb40c1 : Cancel redundant GHA workflows (#54685)
4bf90558e0 : [Gradient Compression] Add logging for gradient compression stats. (#54647)
267fc27d39 : Ensure torch.futures.wait_all exits early on error. (#53953)
93bbbeccf7 : Make SharedCache thread-safe (#53750)
9029d0d7d8 : Introduce a fluent API to construct tensors from external data. (#54530)
6cdabb2e40 : Update .gitignore to ignore NFS handle files (#54618)
55dfb4a575 : Update CODEOWNERS for distributed training (#54661)
c0bcd5a58f : Remove NestedTensor from DefaultBackend alias (#54559)
2662e34e92 : Add PyTorchDeploy predictor model type (#54120)
947ab84fd2 : enable_and_enhance_bf16_threshold (#54384)
9f336bdf10 : Fixes new tf32 failures in test_nn.py (#52871)
64d31e3f45 : Add double tensor type to DivFakeFp16 Op (#54636)
fe2c1268b7 : More name refactoring of memory planning codes to make it more readable (#54272)
1ceb90405b : [TensorExpr] Add plumbing for conv2d fusion. (#54439)
6f8328ef44 : [special] Add special.entr (#53500)
347ab5d8b8 : Update Kineto submodule (#54621)
1442a92741 : Ensure local_used_maps_tmp is distinct from local_used_maps_[i] (#54474)
ac33432606 : Fixed out= variants of non-symmetric eigendecomposition and QR decomposition (#54056)
7605ce4ed8 : [PyTorch] Enable test_lite_interpreter_runtime running in android (#54579)
673ed4623e : Gradcheck small fixes (#53916)
796be045bb : Refactor gradcheck (#53857)
d371a9f9c5 : Change ScatterGather kernel names on dtype dispatch. (#54498)
556fc8d418 : skip test_symeig if MAGMA not detected (#54526)
145bc5cd51 : Rename Math to CompositeImplicitAutograd (#54466)
87989a6cf9 : [caffe2] support serializing float data as bfloat16 (#53735)
b032316c41 : Improve `nn.Sequential` documentation (#53380)
2b07bcf9eb : [operator benchmarks] Added more interpolation test cases (#54584)
12a61a172e : Fix missing class in cpp tensor documentation (#54488)
f9ca0d87a7 : Teach Python TS frontend to parse complex literals (#52881)
2f5db68797 : Make nightly checkout work with generated testing py (#54477)
67e4618037 : Add arg layer_norm_eps to transformer layers (#54494)
732815af7a : Automated submodule update: tensorpipe (#54582)
05c8ddfe05 : [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
c371542efc : testing: dont skip test_ops suite for operators testing against scipy (#54186)
bac566bf61 : torch.square : OpInfo and minor fixes (#52551)
d3f784244e : fix comparison of narrow type with wide type in loop condition part2 (#54471)
0d81528a47 : Definition infrastructure for instruction count ubenchmarks (#53296)
a4ca394f8a : Revert "Revert D26907093: Add repeats to Timer.collect_callgrind(...)" (#54484)
afe339d7dd : [static runtime] support DictConstruct (#54438)
601e79200d : [NNC] Implementing LoopFusion (#54461)
5105250e16 : [FX] Add docs for shape propagation (#54554)
5cd8a77e01 : Skip inplace autograd test if inplace variant doesn't exist (#54460)
789dc6d445 : [NCCL] Add more details for checkForNCCLErrors (#54117)
b93ab10b7a : torch.lerp: cuda complex support (#54129)
5781aec74e : Automated submodule update: FBGEMM (#54509)
33b95c6bac : Add __torch_function__ support for torch.nn.functional.embedding (#54478)
91d37d7d2f : [CI] Install compatible cmath for Win binary builds (#54527)
66a3614b47 : Fix typo in .github/workflows/lint.yml (#54551)
5754816597 : fix SC2126 introduced error (#54545)
4a74b0f2dd : Fix logic in TestFX.test_get_torch_func_signature_exhaustive (#54510)
7e3cf1ee24 : [pytorch] Add native support for segment reduce step1: API definition (#53727)
591084abb8 : Deprecate torch.matrix_power in favor of torch.linalg.matrix_power (#53538)
f9e7f132fb : Added torch.linalg.matrix_power (#52608)
345b26ca08 : [android][utils] Support ChannelsLast in TensorImageUtils (#48990)
792f5ffb83 : Also strip slow_test (#54528)
c06d979731 : [Static Runtime] Name refactoring to make MemoryPlanning more readable (#54045)
35186eb983 : Update TensorPipe submodule (#54507)
1b792a7f15 : Fix Flake8 (#54540)
e5b97777e3 : [ROCm] allow PYTORCH_ROCM_ARCH in cpp_extension.py (#54341)
446e477d4f : [complex] torch.rsub(): complex autograd support (#53702)
21a9a93eb4 : gdb special command to print tensors (#54339)
583c4bf7d3 : [Pytorch Mobile] optimize_for_mobile: Fuse Add Relu on any function (#54441)
acffa604cc : disable cu112 test on windows (#54512)
f3c00047ce : Reset Optimizer counter while deserializing netWithBackwardOptions
ba9f12d235 : Fix minor whitespace typo in tools/test_history.py (#54504)
a4a21e7d8d : Automated submodule update: FBGEMM (#54486)
2a53897114 : [jit][tensorexpr] Added aten::batch_norm into fuser when in inference mode (#54204)
fee470d8ef : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
f2a38a0edd : Enabled BFloat16 support for argmax & argmin on both CPU & CUDA (#52582)
3519625a34 : Fix onnx warning message (#54371)
1041fdd069 : Grammatically update tech docs (#54370)
8518b0ee55 : [PyTorch] Update Bazel build for TensorPipe (#54416)
c22fc448cd : [Gradient Compression] Remove cuda.synchronize in batched powerSGD (#54482)
6d0027197c : Delete all unnecessary singular Math entries (#54436)
6e8c4ad7fd : s/StructuredNativeFunctions/NativeFunctionsGroup/ (#54427)
bf2ca35f35 : Rejigger to use NativeFunctionsGroup even without structured: True (#54426)
c00d66f73c : Move compute_native_function_declaration to its own dest module (#54419)
349a17f1c0 : Replace some tensor.device().is_cpu() calls with direct tensor.is_cpu() (#54397)
77ccd4f9a3 : [5/n][torch/elastic][upstream] Move torchelastic/agent to torch/distributed/elastic/agent (#54343)
5870346173 : Port index_copy from TH to ATen (#52203)
52abd3bd7b : [Static Runtime] Fix bug in reshape_copy (#54467)
c411017a41 : Only allow hub.load() from original repo. (#54451)
9be4c75fa0 : [JIT] Add Reinplacing to MKLDNN Subgraphs (#53908)
81c6e5fb38 : use reshape when possible in broadcasting (#53326)
18c04a3f0f : Avoid dispatch overhead in call to mkldnn convolution (#52614)
3959d393b8 : [PyTorch][JIT] Less shared_ptr use in dictConstruct (#54110)
7e33dc3498 : [PyTorch] Avoid extra intrusive_ptr copy in IValue::toIntrusivePtr (#54124)
568d43b935 : Automated submodule update: FBGEMM (#54447)
decbdf7b0b : Get rid of {Cpu,Cuda}{Channel,Context} in tensorpipe_agent. (#54432)
2668149b8c : Export torch::jit::toIValue (#54449)
92770d25cd : fix comparison of narrow type with wide type in loop condition (#53951)
edfc787df4 : Migrate kernels with Tensor? to C10 full dispatcher (#54263)
5a27199149 : Add device_of overload for optional<Tensor> (#54262)
2130f4ccc4 : Use c10::ArrayRef instead of std::vector for the jit::unpickle's tensor_table. (#54428)
4919fecf23 : Delete dead TensorOptions::key_set (#54004)
9fef25e579 : [Pytorch Mobile] optimize_for_mobile: Remove dropout from any function (#53846)
f1e72a7e18 : add uncommitted change detector (#54373)
0f628d1503 : [ROCm][doc] add ROCm section for building from source (#53845)
1e09880b45 : Add support for list insertion for mutation removal (#54271)
263cd5cf98 : Disable all cu92 in scheduled-ci (#54421)
7bda8b650c : [caffe2] Fix caffe2 build with TensorRT support (#54322)
17f9b5ff4a : [caffe2] increase deadline of test_dnnlowp_batch_matmul_int_constant_B (#54241)
8bb07c7e21 : [CI]Install older cmath during Windows build (#54431)
6e7a3c1fdd : Clang-format distributed.py (#54402)
a46d56f988 : Update tensorpipe submodule. (#54412)
19f77700ec : clean up typos in submodule (#54372)
c2c97cd290 : <tensorexpr> Add python bindings for missing loop transformations in LoopNest (#54355)
afb560065c : [testing] OpInfo for sgn and sign (#53885)
b6bbb41fd8 : Temporary disable TestNumbaIntegration.test_from_cuda_array_interface* (#54430)
ef472d5b31 : Back out "[PT QNNPACK] Temporarily disable input pointer caching" (#52917)
2355f61f19 : Add logging for debugging S223170
635595f706 : Change sharding in ci (#54228)
d226985257 : Read out layout from options directly, rather than via backend (#54074)
36ce673f16 : Disable the fusion group which is not supported by XPU device. (#54239)
4ffafbac40 : Make test_cpp_extensions_aot handle lack of pytest more gracefully (#53740)
7b939d934e : Simplifies OpInfo test matrix to reduce test time (#53255)
ab8e9188dc : add --gpu-max-threads-per-block=256 to hipMAGMA build (#54161)
80a4a50081 : Automated submodule update: FBGEMM (#54118)
fc58b3f146 : Skips failing pinv and pinverse test (#54392)
f48a9712b7 : Rewrite functional.tensordot to be TorchScript-able (#53672)
cffa70d36d : Merge channel hierarchies. (#54333)
8294bff20d : [StaticRuntime] Copy version of reshape/flatten (#54353)
08e4312559 : Tag distributed team for review for /torch/nn/parallel (#54221)
98baad5764 : [nnc] Remove cached argv from LLVMCodeGen to fix race condition (#54286)
4fa47e5e7d : Support non-tensor inputs and outputs for checkpointed functions. (#52422)
9d9986fd10 : Support for Half / bfloat16 / index_select and better testing (#53898)
d58c00a5d8 : [package] Make exporters write to buffer in fbcode (#54303)
41b1ea216f : Fix `torch.linalg.qr` example (#54342)
270d675f86 : update distributed doc table for alltoall nccl (#54277)
27048c1dfa : Remove legacy constructor calls from _torch_ folder. (#53889)
6a4d2c61d5 : Allow linking against vcomp on Windows (#54132)
6f7a5a47af : port div to structured (#53680)
fa482aa4ef : port sub to structured, fix sub.Scalar bug (#53679)
779cae9e42 : port at::pow to structured (#53669)
454dd7ba86 : Add codeowners for onnx export (#54287)
679f07a017 : Backup .circleci/config.yml before regenerating (#54345)
da18313de3 : [caffe2] expose whether FBGEMM is available to the Python code (#54274)
f1cbd10276 : [PyPer] Port c2 add to pt (#54229)
fa07d0c8eb : .github: Add workflow to build libtorch (#53292)
05a03a6c8c : [FX][EZ] Fix type correctness on GraphModule.graph (#54305)
bc4f521178 : port at::mul to structured (#52692)
61b074581c : `torch.prod` backward for complex types. (#48125)
cc7a28d727 : Refactor Unary Ops tests (#49712)
645a3e9a92 : Fix inaccurate dispatch tables (#54127)
49f1336106 : Add Tensor::is_cpu, genericize TensorIterator (#54079)
e0aebe241d : Refactor tensor_new.cpp to use TensorOptions instead of DispatchKey (#54034)
544a996f83 : Revert D27155845: [pytorch][PR] Fixed the size of the workspace array in functions calling MAGMA
887759c9b9 : Changes to autograd/custom functions to handle optional arguments (#54270)
f2b4b0e9eb : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a84afb3a7c : Use type-erased union for Buffer. (#54251)
8f755b9ed0 : initial draft for assert_tensors_(equal|allclose) in torch.testing (#53820)
acf03b13f1 : [Static Runtime] Check for number of uses of op inputs > 1 in ReplaceWithCopy (#54230)
bfd009836e : [torch.special] Add special.erf{c, inv} (#53260)
19792b45db : add a pytest.ini file (#53152)
bbb06c05a8 : remove type_hint_tests and convert the files to use the new test style (#53167)
53d8778b4d : Update clang-format linux hash and yaml import calls (#53932)
04e0cbf5a9 : Add padding='same' mode to conv{1,2,3}d (#45667)
a8a1090324 : Perform appropriate CUDA stream synchronization in distributed autograd. (#53929)
75498164fe : Remove nonexistent files (#54276)
8cd4dac78f : Move mypy wrapper to tools (#54268)
4626886f21 : [JIT] Add CUDNN Conv-Add-Relu fusion for Frozen Model Optimization (#52102)
90bbe0b38b : cmake: auto-detect ccache to speed up developer builds (#49389)
a95abc4648 : Test tools/test_history.py (#54259)
0645e2b490 : Use shard file if present, improve functions used for sharding (#54210)
3b1e3103ca : Remove usage of onEachDevice from legacy profiler (#54125)
d85faf8d8e : Cleanup mypy lint job (#54260)
04a2506091 : Fixed the size of the workspace array in functions calling MAGMA (#54009)
f0056f89a4 : Final kernel launch checks (#54214)
cc92117aad : cleanup static_cast of AutogradMeta (#54103)
004db37358 : properly make AutogradMeta/DifferentiableViewMeta attributes internal (#54102)
09b4af2f0f : Remove legacy from optional-related function names (#54101)
a425eb2135 : Add size check for forward grads (#54100)
cba8516b52 : make internal forwardAD methods on at::Tensor internal (#54099)
a52e295cbb : Add MyPY to lint GHA workflow (#54067)
4b2abc4b8e : [NNC] Adding API to distribute loops (#53865)
dc35848804 : [PyTorch] Rename XPLAT_MOBILE_BUILD to TEMPLATE_SELECTIVE_BUILD (#54217)
9f86b656ba : Resubmit: Adding parallel support for the LLVM backend. (#54122)
444552e7f9 : Optimize alias_analysis node lookup (#54115)
382a47b493 : Add torch.linalg.vector_norm function (#51099)
564456ac44 : Added autograd support for torch.orgqr (#52637)
2f3b194dc2 : Add cusolver potrf and potrfBatched to the backend of torch.cholesky decomposition (#53104)
8caa7889fc : Revert D27001339: Use type-erased union for Buffer.
c618dc13d2 : Use type-erased union for Buffer. (#322)
133000fe7a : [distributed] add processgroup options as argument (#53663)
2d8795c552 : [FX] Normalize torch. namespace ops (#53832)
72c7983f23 : Remove __get__ from Tensor stub. (#54208)
a27f46bbe3 : [FX] Experimental type annotation pass using Python signatures (#53831)
255b103c1b : [WIP] Function to retrieve inspect.Signature instances for PyTorch ops (#53830)
0dc5abfaa9 : Revert D26907093: Add repeats to Timer.collect_callgrind(...)
ca429fedd3 : [StaticRuntime] Fuse SigridTransforms + ListUnpack (#53920)
ef9ee46756 : Avoid modifying rebuild buckets state in no_grad context (#54159)
fef0219f7e : [ROCM] Fix hipfft transform type error (#53411)
f4a044ca1d : [distributed] add options field in ProcessGroupGloo/NCCL (#54090)
a4f0f8b1e9 : [distributed] add base processgroup::options (#53662)
ac78d05d05 : [Kineto] Update rev for fix to #53848 (#54226)
74993dcf7b : Add repeats to Timer.collect_callgrind(...) (#53295)
8ecb2d35bc : Add ability to override _reduce_ex_ function of DataPipe (#52858)
2eb3917629 : [Vulkan] Add reflection_pad2d to Vulkan (#53604)
06cb9293c5 : Add GitHub Actions workflow to test tools (#54207)
7d1e1c7e0d : Pyre-ify torch.jit.interface's (#54084)
94b22b5b3b : try catch test upload failures (#54194)
8f61b13e80 : [Pytorch Mobile] Optimize Non Forward for Mobile (#53314)
407d60ee91 : Upgrade actions/setup-python from v1 to v2 (#54202)
cd776560d0 : [vulkan] Add hardswish and hardsigmoid activations to Vulkan (#53362)
957700be7e : Improved aten::to performance from inline cvr remote_request_only (#53800)
e442d5c8a5 : Disallow CUDA RPC to use new devices in output tensors (#54024)
8cc06e3ca3 : Disable CUDA RPC tests that use new device in user-function outputs (#54023)
79534867ac : Migrate about 100 kernel to C10 full dispatcher (#54109)
fd5c1123e4 : wrap AliasDb in Python (#51336)
2e7311ef25 : First step to refactoring S3 reading logic (#53755)
ccdcfba5de : [caffe2] Refactor tensor serialization function (#53404)
a2a7179695 : Fix bug in assertRaises NotImplemented handling when no exception is thrown (#54126)
7e7533b2e2 : Delete denseTypeIdWithDefault and toDense (#54016)
f30a7a2739 : Add export-historic-test-times option to dump S3 test times into a JSON file (#54083)
7367bca066 : [nnc] Tests for proposed feature: loop bounds conditional simplification (#54121)
a852fdb6b5 : [nnc] Test for using int64 dimensions (#54094)
0806126aad : [fx][trivial] Add TestConstFold coverage to test_fx (#54072)
91747a5e93 : add tests for ddp with activation check points (#52894)
ce40ff5c64 : Avoid DDP race condition with find_unused_parameters=True when all params are used (#53160)
4fac72ee9d : [fix] Dimension out of range in pixel_shuffle / pixel_unshuffle (#54086)
4a24c552cc : [PyTorch] Fix string copy in WARN path for both interpreters (#54076)
8f1af02f35 : [PyTorch][mobile] Audit mobile interpreter for extra copies (#54031)
ce15f312a8 : [PyTorch] Align function parameters across declaration and definition for max pool 2d (#54105)
527c1e0e37 : [iOS GPU] Remove unnecessary texture size changing (#54108)
e579b39b9e : [iOS GPU] Implement view and reshape in metal shaders (#54107)
2e8a9d2bfe : [iOS GPU] Support multi-dimension tensors via MPSImage (#54106)
0c8f16622b : [Caffe2] Rework CAFFE_ENFORCE_THAT (#53303)
11a135ec82 : Remove _th_take (#52665)
04d5278cb6 : [Static Runtime] Only run ReplaceWithCopy pass when enable_out_variant is true (#54111)
fb7bab97c4 : Automated submodule update: FBGEMM (#53947)
b936abd840 : fix nest openmp performance bug in thnn_conv2d (#52577)
252916ab61 : Update TensorPipe submodule (#54070)
c4f50162be : [typing] suppress errors in `fbcode/caffe2` - batch 2
8533a485ea : Fix SIGSEGV in CudaIPCTypes.cpp. (#53080)
dc070605f1 : TST Replaces assertEqualIgnoreTypes with assertEqual in test_indexing (#53115)
4b00bce156 : [Gradient Compression] Introduce fp16_compress_wrapper in ddp_comm_hooks.rst (#54052)
524cb0a514 : [PyTorch Mobile] Dedup method names in bytecode serialization (#53677)
282eefebf3 : Delete defunct ComplexCPU/ComplexCUDA dispatch keys (#54013)
4878415688 : Make storage access error NotImplementedError (#53972)
d47fd3df81 : Compute type_equal() without reference to backend() (#53823)
3c457043fb : Also propagate storage_access_should_throw_ when copying tensor metadata (#53816)
665d5e2a4f : [PyTorch][JIT] Audit interpreter for extra copies (#54029)
ae154a8c2c : various doc building cleanups (#53851)
aa8714dfed : [complex] torch.lerp: complex autograd support (#53689)
c0fafcc766 : Don't actually print anomalies in TTRR (#54078)
1f5b9170aa : Faster backwards for cumsum and cumprod (#53711)
6332fd6255 : enable sc1090 and sc1091 (#54069)
2c4a64589b : fix mkldnn_add in-place behavior (#51687)
b27e678dfb : [RELAND] [CUDA graphs] Private mempools for CUDA graphs (#54038)
bea3cb7069 : remove aliasMultinomial decode from TH and THC (#52585)
e8e570e9c5 : [MacOS] Cross compile stub when building for M1 on x86 (#54046)
2ecb2c7931 : Pass Scalar by reference (#53583)
4dd1c72dde : Treat Scalar parameter as if it is constant (#53582)
603097be18 : OneDNN MaxPooling: reduce memory use for inference path (#52728)
2c5579702a : [PyTorch Mobile] Add module size to logged metadata (#53578)
08f04c0db2 : Test forward reference annotations (#53713)
ce2f71836c : Disabling dispatch to OneDNN for group convolutions when groups size = 24 * n (#53991)
f52a3bd634 : [DDP] remove dedupe check in reducer (#53919)
8c2c9450cc : [package] autoformat (#53783)
ee35060888 : Fix sharding algo + test it (#53942)
e91aeb0470 : [4/n][torch/elastic][upstream] Move torchelastic/metrics to torch/distributed/elastic/metrics (#53870)
b9fdf72174 : Fix doc (#53996)
e87ab2ac4d : [DataLoader] Switch to guaranteed determinism & add option to non_deterministic (#53532)
274b96b878 : Move as_view/increment_version to its separate key. (#53342)
8f98b87212 : Update Kineto revision (#53940)
a7ba3f3aa8 : Automated submodule update: tensorpipe (#53999)
65087dd1d4 : Fix broken link from load_inline to new test location (#53701)
67f765328b : scripts: Change promote pypi to be more flexible (#53774)
793a29a7d5 : add OneDNN batch_norm backward (#50460)
33e3deed4f : add OneDNN relu backward and reshape backward (#49455)
7f88840495 : Fix prefix store timeout bug (#53928)
7ff4955de5 : [doc] Fix documentation for tensorsolve (#53320)
b5cdb53af1 : Add division logic to a slow/fast path (#49250)
4bb34c2a75 : Update Binary Ops with scalar lists (#49249)
c1a39620b8 : [nn] nn.Embedding : `padding_idx` doc update (#53809)
5b62b0d9bc : [RPC] Fix typo in rref_context.cpp (#53978)
7e39a40300 : Fix typo in torchvision_models.py (#53968)
da10ccd35f : Implements cpu_kernel_multiple_outputs and torch.frexp (#51097)
ad8d1b2aaa : [ONNX] Update embedding export wrt padding_idx (#53931)
4f62c622b3 : Cleanup of unused list in adam.py (#53874)
8734e88f0b : delete has no more data after the key (#53886)
700c817a6a : Add install for libCaffe2_perfkernels_avx*.a (#53825)
2782126bfe : Automated submodule update: tensorpipe (#53892)
bb21aea37a : [iOS GPU] Add the rest of binary ops (#53950)
530dc828ae : [iOS GPU] Support element-wise broadcasting for binary ops in shaders (#53949)
df7c0a06d6 : [testing] assert no duplicate in method_tests for an OpInfo entry (#53492)
547f435763 : Fix restriding logic for structured kernels (#53759)
c2f41b6b84 : Add meta device to generic device testing framework, skip NotImplementedError (#53682)
d47d246206 : Add 'noarch' tests which only run in one CI config (#53747)
f6df18f6ca : Clean up future imports for Python 2 (#53349)
319ab58e27 : Skips test_linalg_lstsq on ROCm (#53977)
790326d49b : Fixed the size of the workspace array in functions calling LAPACK (#53909)
7df176b1f9 : Added OpInfo-based testing of some linalg functions (#51107)
d46978cc55 : Refines test_orgqr_* skip (#53975)
39f50f468d : matmul performance benchmarks (#51647)
142c6b0e55 : increase timeout for test_op_nnpi_fp16
84af0c7acd : Refactor ForeachUtils.h (#51131)
f2689b1e13 : Make ideep honor `torch.set_num_thread` changes (#53871)
de70cdb66b : Clang format default_hooks.py (#53956)
ca4aae85fa : [Gradient Compression] Update the docstring of fp16_compress_wrapper (#53955)
3ce51fd5f4 : remove th_fill and th_mul dead code (#52546)
317ff429d3 : [TB] Support writing new style scalar (#53496)
ef07a04072 : [NNC] New APIs to get loops corresponding to a Buf (#53778)
ce0fd095a8 : Implemented embedding_bag for SR (#52429)
3078233e9a : [Gradient Compression] Make FP16 compression as a wrapper that can be combined with other communication hooks (#53808)
8a5b946ff6 : [caffe2] Don't call TensorImpl::size() in dim32() (#53852)
5b648ef909 : Revert D26922420: [ONNX] fix export of embedding with padding_idx (#53053)
00771eff8e : [reland] Add OpInfo for `bitwise_not` (#53181)
fe08671756 : Added cuBLAS path for torch.triangular_solve (#53147)
afa1ff8e04 : Implements `torch.linalg.lstsq` (#49093)
4932342363 : [Static Runtime] Fix bug in ClipRangesGatherRangesX2SigridHash (#53799)
76129c7cdf : Revert D26993790: [pytorch][PR] [CUDA graphs] Private mempools for CUDA graphs
fe38027fc3 : [fix] torch.cat : cross-device check for out and input tensors (#53004)
fdbd667e31 : compareSet method for HashStore and FileStore (#53803)
d4c877b59b : Fix typo "informations" -> "information" (#53746)
f62e9156dc : Add missing decorators in test_spectral_ops (#53736)
89fce74d55 : fix for method_tests() random failures (#53854)
33aaea912a : [caffe2] Support deserializing tensors using alternate serialization formats (#53403)
91531d3047 : [caffe2] add a CAFFE2_NODISCARD macro to help support old compilers (#53754)
7763bb6cb3 : Use the conda channel defined in docker.Makefile to install cudatoolkit (#53316)
90dfdef226 : [CUDA graphs] Private mempools for CUDA graphs (#51436)
804f3f9879 : [PyTorch] Remove unnecessary assert in maybe_resize_storage_cpu (#53724)
34eb644e88 : Replace thrust with cub in randperm (#53841)
7f4aff8203 : Skip dispatch for is_signed (#53847)
2912ad1324 : ns for fx: move linear activation test case to new API (#53777)
57bf13409a : ns for fx: move compare activations for conv test to new API (#53776)
01c6e9360e : ns for fx: move lstm dynamic weight test case to new API (#53772)
a71cd135ae : ns for fx: move linear dynamic weight test case to new API (#53765)
9c8f112ada : ns for fx: move linear weight test case to new API (#53764)
19fe8a529e : ns for fx: move conv weight test case to new API (#53761)
986e3c0a00 : ns for fx: extract common code in tests to util functions (#53748)
7d27eb8068 : ns for fx: clean up API naming (#53729)
421e91dfd2 : ns for fx: add support for logging inputs
cc940f3580 : ns for fx: change dtype cast from once per N node to once per node
d73e36a44a : ns for fx: change API to take nn.Module instead of GraphModule (#53075)
b00cdfe136 : Fix run_test_module logic (#53884)
ae7984b1d6 : Do not use shards for single run tests (#53883)
a7ddd15d15 : fix static dispatch linker error (#53859)
924c15c962 : [doc] reorg dist init and non-init functions (#52976)
fff0a3f906 : [DataLoader] ZipIterDataPipe (#53554)
7297556d5d : Add support for single tensor in `inputs` argument for backward (#53827)
4884a6ab51 : fx quant: clean up names of quantize handlers (#53614)
279b5372ab : [not for land] fix fx quant for quant_layer -> stack -> sum (#53196)
93d5807c1e : [not for land yet]fix using size of quant layer in torch._assert (#53187)
ccab6680d5 : [not for land yet] hacky fix for x.ndim followed by sub (#53120)
4873641602 : Fix TCPStore wait() hang when key is previously set (#53860)
a51f130d37 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
ee4ce8e9d9 : [ONNX] fix export of embedding with padding_idx (#53053) (#53530)
a572f70f2f : [ONNX] Support torch.isinf, torch.any and torch.all export to ONNX (#53328) (#53529)
705131c5d3 : [ONNX] Update ONNX documentation (#51362) (#53313)
a6a811f23a : [ONNX] Add repeat_interleave symbolic (#52855) (#53312)
76147b897c : [ONNX] Update assign output shape for nested structure and dict output (#52893) (#53311)
4c1d9e58c2 : Fix copy_ export (#53046) (#53310)
8dab886d3b : [ONNX] enable several script unit tests using new jit passes (#51722) (#53309)
be344e9d88 : Update test cases generated by make_test() method to support running them in script mode. (#52748) (#53308)
7f17058894 : [ONNX] Symbolic shape inference (#51481) (#53307)
57d1df071f : [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
38414d29a1 : [ONNX] Remove the last Cast in pow symbolic_opset9 (#52646) (#53305)
1772e26f63 : [PyTorch] Move selected_mobile_ops.h codegen function to tools (#53786)
8737c2a1a2 : [TensorExpr] Reland: "Simplify index expressions constructed in loop flattening. Fixes #51173" (#53861)
aeb3e93351 : Move view handling logic to gen_inplace_or_view_type.py (#53341)
e09e97ebf9 : [DDP] add _distributed_rank helper function (#53795)
0c2fe02ec1 : [DDP] Fix wrong call to dist.get_rank() (#53793)
d4602b7e45 : [NNC] Fixes case where inlining wouldn't work because dim-size was 1. (#53254)
ce670238ba : Revert D26927500: [libkineto] Log CUPTI errors on libkineto initialization
9f75de278f : Move common autograd utils functions from gen_variable_type.py to api/autograd.py. (#53340)
cffe9aa617 : [libkineto] Log CUPTI errors on libkineto initialization
d726ce6668 : Support loading a non-DP/DDP model from a DP/DDP state_dict (#53224)
5c2b3d7784 : [ROCm] Enable RNN test in test_c10d_spawn.py for ROCm (#52707)
dfb5f029da : Disable TF32 on DDP tests (#52941)
06cf6d37b5 : [ROCm] Enable test cases in test_data_parallel.py for ROCm (#52708)
c15d943149 : [PyTorch] Fix broken build caused by keyword missing on Windows (#53562)
b69dd910e8 : [docs] Add starter content for new TorchScript language reference (#53837)
d57ae6c46d : Revert D26906509: Adding parallel support for the LLVM backend.
8d8a4a0624 : Remove the extra ":noindex:" in ddp_comm_hooks.rst (#53855)
5344c3ea9e : Remove `join_workers` from Pipeline destructor. (#53433)
6da0b94dd8 : Add note on forwarding arguments in the dispatcher (#53641)
13f63fda5f : Automated submodule update: FBGEMM (#53722)
ec6a7cace3 : [ROCm] Fix the flaky test test_stream_event_nogil (#53850)
b9e900ee52 : [ONNX] Update inputs/input_names formatting to avoid ValueError with scriptMethods (#53519) (#53548)
cdac61ecd4 : Prevent VS from emitting ambiguous symbol errors (third time) (#53490)
8016d28c0b : [Gradient Compression] Update the comment on fp16_compress_hook (#53780)
14d02517e1 : replace data with data_ptr (#53097)
fa980bb22a : [wip][Dist Profiling] Enable dist profiling for MPI backend (#52949)
7e5ffbfa94 : [caffe2] add a SerializationOptions field for the save operator (#53402)
1acced4eba : Implemented getCodeText(string attr) in llvm/cuda codegen and added python bindings for it - #52974 (#53664)
379f1f1ede : Automated submodule update: tensorpipe (#53810)
8b9e3e6fd4 : [complex] enable complex autograd cumsum (#53240)
ec484981c6 : [3/n][torch/elastic][upstream] Move torchelastic/events to torch/distributed/events (#53760)
bbce574ccf : Pass commit_sha to add-annotations-github-action again (#53834)
5cf4527c88 : Update repo name for add-annotations-github-action (#53826)
3f9c803fe8 : [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795) (#53304)
5648fe6093 : Make storage access throw for meta tensors (#53681)
ec713c0eb5 : [Pytorch] Improve scale and zero point extraction for per channel quantized (#53726)
d7b5a6faaa : Revert "Revert D26733731: [pytorch][PR] Skip dispatch for `is_floatin… (#53242)
7484c56fa3 : [quant][graphmode][fx] Fix a condition check for CopyNode (#53585)
4c1af249fb : [ROCM] load hipfft separately from rocfft (#53408)
5842d34fac : Call nvidia-smi.exe before running tests on Windows (#53422)
0a549f9412 : [ROCm] Disable flaky tests on ROCm (#53192)
05f137c765 : Remove GHA "Checkout PR tip" step (#53719)
f364e492df : Autograd functional API should enable_grad (#47543)
e185ec6c3d : Revert D26955317: Perform appropriate CUDA stream synchronization in distributed autograd.
ffac9b2ead : Revert D26965463: [pytorch][PR] [docs] Add starter content for new TorchScript language reference
07d315fce8 : Revert D26676150: Simplify index expressions constructed in loop flattening - #51173
95d2318510 : Adding parallel support for the LLVM backend. (#53243)
351f6f5e02 : [JIT] Update set_stream API to change the device (#53741)
cfaa0bf286 : [JIT] Update Namespace from cuda to _cuda (#53378)
0b84f45f03 : Perform appropriate CUDA stream synchronization in distributed autograd. (#53769)
1053c96693 : [GraphModule] Back out changes to module root version of __init__ (#53791)
37ab711822 : Adding learning rate schedulers to C++ API (#52268)
ebfa9276d8 : Move prim::layout for lite jit (#53781)
3bd250fd03 : [nnc] Test ability to vectorize reads from an intermediate tensor (#53752)
a5e19126b6 : [NNC] LoopNest cleanup (#53688)
d49c5c74f5 : [docs] Add starter content for new TorchScript language reference (#52494)
ced91bb713 : [deploy] namespace and rename (#53670)
14acf92b2b : [PyTorch] Speed up Tensor::data_ptr (#53723)
1f01899e4a : Simplify index expressions constructed in loop flattening - #51173 (#52882)
997f05cd34 : [nnc] Add an initialization expression to Reduce() (#53751)
49a5f99440 : skip dispatch in resize_ (#53575)
21f9a6da7d : Avoid of creating a copy of statusString every inference time (#53756)
0584fd9339 : [quant][fx][graphmode][fix] Only insert observers for fixed qparam ops (#53330)
f9185973d1 : [quantization] Add some support for 3d operations (#50003)
895735c69f : TensorIterator: Avoid nesting two levels of function_ref in for_each (#53613)
fe0810e2f8 : Add a section to introduce GradBucket class in ddp_comm_hooks.rst (#53253)
c988b78be2 : Add a description of GradBucket Python class (#53596)
741d0f41d6 : [package] split tests (#53749)
4351d09683 : Fix error message in setStorage (#53198)
fee263595c : Remove trailing whitespace introduced in #52175 (#53762)
023948e6d7 : [caffe2] update load_save_test.py to also verify the chunking behavior (#53401)
99d7c8ff94 : [caffe2] use AddNAlreadyReserved() when serializing blobs (#53400)
2cf90982e9 : [TestZeroRedundancyOptimizer] Add multi gpu checker (#53564)
d9fa957ecc : [quant][graphmode][fix] Handle the case when observed node has no users (#53210)
56f3cb7a99 : Add AST rewriter to acc_tracer (#53644)
5563248b58 : [JIT] [Remove Mutation] Add handling of normal_ (#52175)
c68cc24cee : update upsample tests in test_nn.py to test for memory_format (#53665)
669fcf3093 : Replace supports_tensor_out with supports_out (#53745)
76b58dd9ae : Fix distributions which don't properly honor validate_args=False (#53600)
b03c92a9c5 : [2/n][torch/elastic][upstream] Move torchelastic/timer torchelastic/multiprocessing to torch/distributed/elastic (#53574)
cb68039363 : Port NumPy typing testing style to PyTorch (#52408)
17bc70e6f7 : [package] make importer a little more obscure (#51676)
b4d8f4af82 : [package] implement `get_resource_reader` API (#51674)
bfc80b3566 : Give line numbers in git-grep-based lints (#53733)
70a43425e0 : Simplify init._calculate_fan_in_and_fan_out (#53522)
a76b4736db : clang format reducer and logger files (#53148)
d032287ec3 : fix data type logging (#53162)
7d4b229d61 : add is_multi_device_module logging field (#53149)
a08fc1a7fc : allow users to set sample rate and add per iteration latency breakdowns (#53145)
8f15a2f052 : eig_backward: faster and with complex support (#52875)
b99b6065f8 : Removes trailing whitespace (#53728)
5658ab5f77 : [andoid] publishing to maven central (#53568)
05e0ea9661 : [android] bump gradle version to 6.8.3 (#53567)
e13ef777a7 : Use native ctc loss for target length 256 (#53557)
e937db5dba : Added CUDA support for torch.orgqr (#51348)
45ddf113c9 : [fix] nn.Embedding: allow changing the padding vector (#53447)
bcbe07200c : Improve logic for S3 stats gathering. Uses automatic SLOW_TESTS. (#53549)
1c9fc38eb2 : Remove reference to 9.2 as it's been removed from nightlies (#53716)
70733f2e67 : Marginally improve pytest collection for top-level test files (#53617)
6e020a4844 : Fix inaccurate dispatch table for fill_ (#53611)
4dbd0b639d : Convert a few more checks to raise NotImplementedError (#53610)
e787872a47 : [RELAND] Deduplicate shared params before constructing Reducer in DDP (#53279)
039402b945 : If distributed module isn't available, don't run distributed/pipeline tests (#53547)
6aa5148df2 : Filter 0's returned by exponential distribution (#53480)
c5cd993add : Adds a bool is_available() method to the backend contract (#53068)
215950e2be : Convert type annotations in nn/functional.py to py3 syntax (#53656)
a20b36b03d : [JIT] Fix backward compatibility test broken by #53410 (#53683)
8a6df06a0e : Print onnxifi failed status code in readable format (#53648)
3b0e4a6ed4 : [GraphModule] Improve buffer registration during init (#53444)
c3f8d57c70 : use DimVector for sizes and strides (#53001)
0257eddc16 : Editing pass on native/README.md updates (#53638)
409a76f72c : [Static Runtime] Fix bug in static_runtime::to_copy (#53634)
60ed8fb244 : [JIT] Enable ModuleList non-literal indexing (#53410)
5dca8ff6de : [FX] Make TracerBase._find_user_frame private (#53654)
cff22f8794 : Port sin to structured. (#52277)
b9c3edd583 : Remove hacky wrapper from a lot of unary operators. (#52276)
233b9490c2 : fix channels_last bug in upsample kernels (#53535)
a3465214ba : move rnn cell size check to cpp (#51964)
0606057af3 : [PyTorch] Add c10::MaybeOwned and Tensor::expect_contiguous (#53317)
8acb74c405 : [PyTorch] Make IValue::toTensor() inlineable (#53213)
f8e7d8bb0d : [FX][docs] Render inherited methods in fx.Tracer API reference (#53630)
a8ecf306da : [Static Runtime] Remove dead code (#53588)
a9e4bb56e5 : Add more kernel launch checks (#53286)
b8546bde09 : ci: Remove special versioning privileges for cu102 (#53133)
c0c5f80f36 : Lazy Modules Documentation Clarifications (#53495)
4fa11147c5 : Automated submodule update: FBGEMM (#53632)
efb1895f81 : [caffe2] use snprintf() instead of sprintf() in the Checkpoint operator (#53434)
e87a686d21 : .circleci: Remove hardcoded tag for rocm (#53636)
bcd94e220d : [PyTorch] Fix typo in QNNPACK
a496520c1d : Automated submodule update: tensorpipe (#53599)
b0afe945a7 : Fix pylint error torch.tensor is not callable (#53424)
ef3765b992 : Fix a cuda max_pool3d issue, do multiplication in int64 (#52828)
9f2aea7b88 : [Pytorch] Fix embedding bag bug accessing unaligned memory (#53300)
7e6a84d238 : Add logic to auto-fetch submodules (#53461)
2f91cda37e : Modify error message (#53525)
02c0c7a32b : Add Meta support for empty_strided (#53397)
707fc354eb : Add debug only layout assert for empty_cpu (#53396)
28d6e01511 : Add TORCH_CHECK_NOT_IMPLEMENTED/c10::NotImplementedError; make dispatch use it (#53377)
2d36b30a8c : Expands OpInfo out= testing (#53259)
9df1b98bab : Quality of life improvements to Timer (#53294)
f4b344ad5c : Definition infrastructure for instruction count ubenchmarks (#53293)
0a97712326 : [caffe2] don't use static for template declarations in headers (#53602)
34d9278c19 : Remove notion of "level" from `Module::dump_to_str`. (#52539)
bf88a4dad5 : Support parsing Ellipsis in JIT frontend (#53576)
c2ccb3578e : Fix inport -> import typo in documentation (#53589)
cb36e503d8 : [iOS GPU][BE][5/n] Remove indirection calls from MPSCNNOps.mm and MetalAten.mm (#53432)
aa687bb6f4 : [iOS GPU][BE][4/n] - Convert Objective-C class methods to C functions (#53431)
2dffb4e38e : [Static Runtime] Back out D26659824 (#53570)
dc29604fd1 : [iOS GPU][BE][3/n] - Rename MetalTensor to MetalTensorImplStorage (#53430)
d521fd799d : [FX Acc] Add support for multi partitions in fx-glow (#53280)
5b52ff6c8e : [fx] Add DCE pass (#52658)
17d00319bc : Install GCC-9 into ROCm builders (#53459)
97460d3545 : [static runtime] Minimum fusion group size (#50217)
a947bfaa26 : [Pytorch] Remove assumption forward exists in freeze_module (#52918)
53c77e7d5d : Add mock.patch() to clear environment for test (#53537)
b0984f7925 : [pytorch] use correct warning type for tracer warnings (#53460)
0d04e51233 : [caffe2] Add an optimization to avoid extra fp32->fp16 conversions in Onnxifi (#53560)
d0b32156f0 : move test to CUDA only (#53561)
a0d425d38d : Automated submodule update: FBGEMM (#53509)
7c0a4e78ca : [static runtime] convert to->to_copy (#53524)
1e992810b5 : Revert D26811466: [pytorch][PR] [reland] Add OpInfo for `bitwise_not` and make ROCM and CUDA OpInfo tests consistent
067ad31210 : [NNC] Added some more external function bindings (#53420)
c72473fe2c : Adding print_test_stats.py job to Windows CI (#53387)
48ec939d39 : [iOS GPU][BE][2/n] - Use dispatcher in MPSCNNTests.mm (#53429)
e90e773445 : Fix to empty_like example (#53088)
64255294ba : [PyTorch][CI] Enable building test_lite_interpreter_runtime unittest in CI (macos) (#52566)
7b7775bec2 : feature_segmented_histogram_binning_calibration
b2758cdc77 : [PyTorch] Don't copy vector arguments to caffe2::Tensor::Resize (#53389)
b64acfa9ac : [PyTorch] Move non-template part of TensorImpl::Resize to cpp (#53388)
98943bb863 : [PyTorch] Enable explicit ATen level sources for lite interpreter (#52769)
25a9f45a5a : fix broken quantization_test in operator_benchmark (#53153)
1fc8831322 : Add missing tensor header (#53489)
117a49c4cb : .circleci: Restore docker builds for scheduled workflows (#53412)
1588df6b99 : Fix typo in tools/test_history.py (#53514)
36dc5d3b3a : [iOS GPU][BE][1/n] - Remove unused headers + improve error message (#53428)
1e306b9a71 : Disable failing distributed test (#53527)
2b359dd6dc : Automated submodule update: tensorpipe (#53504)
115df4fa28 : Fix set_device_map docs (#53508)
93f1b10f72 : Add missing attr in LazyModuleMixin doc (#53363)
656930df26 : [FX] Fix default to align with documentation in `fuser.py` (#53457)
c07a62b854 : [FX] change dynamic control flow example to a *more* dynamic version (#53250)
0ca029b22d : [caffe2] Fix DBFileReader (#53498)
d54be1a946 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
a5ada2127d : [reland] Add OpInfo for `bitwise_not` and make ROCM and CUDA OpInfo tests consistent (#53181)
54a2498919 : Modify tests to use assertWarnsOnceRegex instead of maybeWarnsRegex (#52387)
d3cde6c23c : [NNC] Implementation for aten::cat without conditionals. (#53128)
c7b1979b6b : Use Store collect and verify names in all RPC agents (#53209)
affdcce833 : Extract TensorPipeAgent's collectNames to be a standalone utility function (#53202)
e08aae2613 : Automated submodule update: FBGEMM (#53478)
b26c0bb2b9 : [PyTorch Mobile] Allow skipping operator exists check when bytecode model is loaded (#52814)
3236efa4de : [Static Runtime] Call native resize_/resize_as_ as much as possible (#53425)
dbbe0a2105 : [DataLoader] Introduce deterministic context (#53271)
1ba80264f4 : [DataLoader] ConcatDataPipe (#53301)
1fe6a6507e : [WIP][FX] Fix tracing support for torchbind (#52884)
a0d1e701db : Replace `internal::GRAIN_SIZE` by `grain_size` (parameter). (#53177)
369601355f : [caffe2] Use extended versions of cuDNN calls for SpatialBN
758fb94fcb : Prefix assert_async with underscore, fix some bugs in assert_async CUDA testing (#53276)
8c798e0622 : Forbid trailing whitespace (#53406)
cab2689eb1 : Revert D26849826: [pytorch][PR] Call nvidia-smi.exe before running tests Windows
1974969842 : Cleanup async execution for python RPC calls. (#53230)
7bfa9dc7de : Simplify async execution for script remote calls. (#53207)
6cbbef2fea : Modify assert order to correct the error message when nan appears in multinomial on cuda (#53288)
f595ba1bae : [caffe2] move the SaveOp implementation from a header to a .cc file (#53298)
474fe7d976 : docker: Update default cuda => 11.1 (#53299)
f58f7b786c : add distributed backend options in setup.py (#53214)
387d9a6bab : Simplify async execution for script calls. (#53204)
c0adabe172 : automate sharding using S3 test time stats (#53269)
00bd0e9862 : [caffe2] Fix shape inference for LpNorm (#53332)
efebc6524d : Call nvidia-smi.exe before running tests Windows (#53334)
c3405e5ba1 : Revert "Automated submodule update: tensorpipe (#53012)" (#53394)
ba75cedfc5 : [1/n][torch/elastic][upstream] Move torchelastic/rendezvous to torch/distributed/rendezvous (#53172)
14fa47631b : [DDP Logging] Log comm. hook in ddp logging (#52966)
5d9b7bee1a : [DDP Logging] Log nccl_async_error_handling (#52965)
bdbfc2582d : [Dist Debugality] Log key DDP metrics to stderr under debug mode. (#52957)
68134374cb : Refactor/fix DDP model check during init (#52887)
1b35b1a0c4 : Properly skip distributed tests when distributed module is not built (#52945)
c697e48023 : Refactor ForeachUnaryOp.cu (#51894)
56f8379802 : [static runtime] Move all heavy constructor logic into InferenceModule (renamed to StaticModule) (#51564)
5ebfabb310 : MAGMA: Initialize ipiv data to avoid internal memory access violation (#53064)
268b96f069 : Automated submodule update: tensorpipe (#53353)
e9d7137072 : fixes #38775 #38779: complex support for linspace and logspace (#38875)
42e0983230 : [NNC] Added some APIs for dealing directly with Bufs (instead of Tensors) (#53011)
854cc53594 : Automated submodule update: tensorpipe (#53265)
63e0e88ccc : [PyPer] More at::empty -> at::detail::empty_cpu (#53333)
69bb0e0285 : [caffe2] Avoid some double (and triple) lookups in workspace (#53319)
35364c3641 : [static runtime] Enable ClipRangesGatherRangesX2SigridHash fusion for SigridHashPrecompute (#53324)
dfd5331e9c : Skip tests on ROCm (#53339)
8bac382d9d : [TensorExpr] Remove unused classes from TensorExprKernel. (#53283)
cfd9360d09 : Revert D26837780: Revert D26819810: Revert D26815021: Revert D26744062: Add assert_async
51592a9e0a : [package] Add `deny` method to `PackageExporter` (#53233)
f1eedfa2c8 : [package] Add `allow_empty` flag to mock and extern (#53232)
842ba90739 : [iOS] Bump up the Cocoapods version (#53335)
fdd074e806 : [caffe2] Fix shape inference for Softmax (#53132)
795ed5ca3f : Enable Kineto in CPU builds (#53174)
17495e0318 : [PyTorch Mobile] Fix case when error messages are stripped, and stack value isn't popped off in lite-interpreter (#53201)
1accffe450 : Revert D26819810: Revert D26815021: Revert D26744062: Add assert_async
110a17a4d9 : Update foreach APIs to use scalar lists (#51893)
47dbdfcfe9 : [Static Runtime] remove redundant gather_ranges when fusing (#53323)
97d4ed3d2d : [torch.futures] Add note about error handling for non-chained futures. (#53212)
ac668c55e5 : [Static Runtime] Remove dead code in MemoryPlanner and rename unmanaged_value_set to unmanaged_ivalue_set
36180c1322 : [static runtime] aten::to copy out variant (#52343)
18277137ff : make torch.load() aware of import path changes: torch.tensor -> torch._tensor (#53139)
a558d3629f : Remove MNIST for XLA (#53274)
a3c3141dd2 : Fix gradfn attr bindings when saved variable is of an output (#53205)
6db2f012a5 : [PyTorch] Reduce size of register_symbols.cpp (#53278)
4739d15a67 : Skip some nodes during discovery using sequence number (#52180)
85109ce427 : Support submodule manipulation in GraphModule (#52358)
72ec718373 : Leak autograd threads after wait limit (#53170)
51718c2f3c : Update CODEOWNERS to be tagged as reviewer (#53277)
b0aa03b703 : fix tensorpipe_agent linked even when USE_TENSORPIPE is turned off (#53281)
b4395b046a : Edit SiLU documentation (#53239)
7aeee2849b : Parametrization Functionality (#33344)
3826a07a63 : [PyTorch] Don't inline Dispatcher::call on mobile (#53197)
8c54cd7f37 : Declare NamedTuple at top level (#53273)
9e5e5a7d96 : Revert D26815021: Revert D26744062: Add assert_async
6557ea0509 : Context manager for hiding source ranges (#53188)
6dce0cd0d4 : Optimize module path finding (#52990)
e698a634cc : Enabled amin & amax for float16 & bfloat16 (#52579)
5095332ab9 : Minor cleanup of interpolate microbenchmark
b864457743 : Revert D26744062: Add assert_async
bf5e5bf901 : [ROCm] Enable test in test_linalg.py, test_optim.py and test_vmap.py … (#52818)
c4c77e2001 : [special] add `torch.special` namespace (#52296)
c5b0c2fa8b : Support torch.complex (#53227)
d98839e53e : [static runtime] register pow out variant (#52454)
68810c1836 : Delete test_rand_quantization (#53234)
457b9f672c : [CI]Shard cuda11_1 tests (#53235)
d5507aa5b5 : fix output dtype test in compute_types (#52731)
fc7171badc : inline TensorIteratorConfig setters (#52661)
30a8a13a7d : Revert D26625807: [pytorch][PR] Deduplicate shared params before constructing Reducer in DDP
38a34887ac : [PyTorch] Fix missing move in {List,Tuple}Construct (#53206)
68b62493b8 : [Gradient Compression] Make GradBucket class public (#53099)
b59075eced : [Gradient Compression] Refactor tensor grouping in PowerSGD (#52981)
248e8b42fa : [Static Runtime] Use native version of at::empty (#53216)
9b7396e7e2 : [pyper] casted_batch_one_hot_lengths with 4-arg to (#53215)
12d63cc2f5 : Add assert_async (#53086)
14a2ef0932 : Deduplicate test cases in suites by taking the longer test case (#53154)
c94b8e13ec : Remove docker_config_defaults from CircleCI config (#53200)
79944f7ad9 : [fx] simple doc fix
ba36e32406 : [Gradient Compression] Correct the usage of min_compression_rate (#52979)
d30f4d1dfd : Migrate apex.parallel.SyncBatchNorm channels_last to pytorch (#46906)
9c2673df46 : Revert D26723384: [pytorch][PR] Implements `torch.linalg.lstsq`
a812175173 : Update Kineto revision (#53199)
096c66a99f : [sparsity][refactor] Rename row/col to out/in features
f7d65c5cd2 : Use .gv instead of .dot for Graphviz in fast_nvcc (#53208)
86166f2124 : [quant][fix] MHA tensor assignment fix (#53031)
4008df3507 : Add property binding in torchbind (#50670)
59c0c19be2 : Add RemoteModule to master RPC docs. (#53084)
e5ecd1ddf8 : [Vulkan]Fix build warnings-treated-as-error on Linux. (#52781)
f3190a77b2 : [Vulkan] Update VMA to VMA::e74dc79903f3e59b15a48f112b5c804fea2220b0. (#52938)
7cec4b3d4a : [quant][fx] add _remove_qconfig flag to convert_fx (#53166)
25a3732c8d : [vulkan] Add, sub, mul, and div ops with broadcasting for Vulkan (#52842)
8b5b7fa83d : [WIP][FX] Optionally record stack traces when symtracing (#53081)
510c03d922 : [Gradient Compression] Remove some low-level methods of GradBucket class (#53098)
f8238d7917 : [optim] bugfix when all parameters have no grad (#52944)
ecd8e4c1d5 : Add guard to run on current thread (#52361)
0f81a69a96 : Make meta a device (getting rid of empty_meta) (#53143)
fd3004d3ee : Add NoOpDeviceGuardImpl (#53142)
99098c1d70 : Delete dead Backend toSparse (#53116)
f5e725527d : [PyTorch] Save a single add instruction in the dispatcher (#52543)
43906f9b8b : [ZeroRedundancyOptimizer] Minor stub fix (#53165)
5c15a5bb46 : Deduplicate shared params before constructing Reducer in DDP (#51929)
20860ab01a : Revert D26727918: [pytorch][PR] Added CUDA support for torch.orgqr
fbf60b5aaf : Store only coverage info as artifacts (#53150)
c8cc2e2133 : Update CODEOWNERS for test_public_bindings (#53158)
a1d204807a : Add shape inference for SparseLengthsSumSparseLookup
1559fa6a5c : [operator benchmarks] Added more modes to interpolation tests (#53186)
85e5fdb919 : disable TCPStore multi_worker tests for windows (#53156)
b3c4ac6319 : Fix OpenBLAS discovery (#53168)
c957e2ab42 : Add more datapipe to functional API (#53123)
0aa9f22f1a : Move groupbykey to grouping (#53122)
59b2b8b091 : Revert D26727660: [pytorch][PR] Add OpInfo for `bitwise_not` and make ROCM and CUDA OpInfo tests consistent
d90d7245f4 : [PyPer] Optimize sigrid_hash (#53065)
30dd15e778 : [PyTorch] Add doc string for lite interpreter related api in Android (#53136)
a2a88990cd : [PyTorch] Remove extra RNN.cpp file (#53169)
70d0aab7bd : De-prioritise Dimname and DimnameList in python overload resolution (#51350)
816646bd6f : Add OpInfo for `bitwise_not` and make ROCM and CUDA OpInfo tests consistent (#51944)
926e011cde : Fixed out= variant of linalg.solve (#51968)
bd7ac755d8 : Fix loop type (#50484)
e29d8477a6 : Added CUDA support for torch.orgqr (#51348)
0819d5f9e9 : [FX] Added docstring for concrete_args (#53151)
e1e19a71ce : [shape inference] fix pruning
5c1c8cb93b : [caffe2] Fix shape inference for pruning ops (#53082)
0dac7d86ca : blas copy and axpy to aten (#52345)
565d8235e5 : [nnc] Test cases for uneven split + reorder (#53091)
d8730194e7 : use device methods (#52899)
aba33b0042 : [TensorExpr] IRVerifier: add index verifier for Store. (#53137)
0f7f600e01 : Fix constexpr __host__ warning (#52702)
3ac9013235 : Implements `torch.linalg.lstsq` (#49093)
c0b31a5ba7 : [StaticRuntime] Clean up (#53096)
870bac13bc : Fixed out= variant of linalg.inv (#51977)
fd582af06c : enable coverage test for dataloader on Windows (#52550)
e86476f736 : Huber loss (#50553)
2c8f9aec64 : avoid TLS in has_names (#53003)
e2ecfb60a6 : FIX Validates target in cosine_embedding (#53110)
593b0fbade : Revert D26720919: [Gradient Compression] Remove some low-level methods of GradBucket class
c4c20a5d2d : Suppress unsigned comparison warning (#52653)
6ab3a8b6f2 : Update torch.nn.quantizable.MultiHeadAttention docstring (#53106)
a3a2150409 : Codegen python bindings to access attributes of grad_fn (#52451)
baed2cfe01 : Back out "Revert D26753571: [pytorch][PR] add submodules to sys.modules so their attributes can be pickled" (#53127)
521e1e83ea : [Gradient Compression] Remove some low-level methods of GradBucket class (#53098)
b05dd931ee : [Gradient Compression] Add is_the_last_bucket_to_allreduce method to GradBucket class (#53010)
4997c38a15 : [Gradient Compression] Don't provide default values in GradBucket constructor (#53102)
ecb5ac90ed : [Gradient Compression] Add get_per_parameter_tensors method to GradBucket class (#53009)
ab7f6f3f5b : Add default arguments to cuda stream and events (#53025)
2444b4d122 : Add wait_for_worker param to TCPStore and fix port in use flaky test failures (#52888)
41765d4681 : Store coverage files as artifacts for better debugging (#53126)
d697090260 : Add a note in DDP doc to point to ZeroRedundancyOptimizer (#53113)
29034b9487 : [Reland] Update and expose ZeroRedundancyOptimizer docs (#53112)
66b20bb738 : [CUDA graphs] [JIT] improves readability and nvfuser convenience for graph-safe cuda RNG (#51580)
37bf6c134b : Register DefaultBackend implementations for functional/inplace structured operators (#53037)
c5a67f1675 : Fix minor inaccuracy in translate error reporting (#53032)
fbf2883d35 : Revert D26733731: [pytorch][PR] Skip dispatch for `is_floating_point`
890e051047 : Clang-format quantization_hooks.py (#53100)
cb1596a193 : [operator_benchmark] Added channels last 3d option to interpolate test (#53117)
62d1cdd725 : Automated submodule update: tensorpipe (#53012)
2d7119f943 : Revert D26753571: [pytorch][PR] add submodules to sys.modules so their attributes can be pickled
73a57246d9 : disable dill extension behavior (#53118)
43f810fa96 : Add streams boundary check to `torch::cuda::scatter`` (#53057)
e5e54ada61 : fix logcumsumexp functor to properly handle infs and nans (#52947)
d8ef3a4793 : [ROCm] Enable test cases in test_nn.py for ROCm (#52836)
2bf079d060 : Remove useless test_reference_numerics skip infos (#52890)
fbf9745c85 : add submodules to sys.modules so their attributes can be pickled (#53107)
aa603cb2ce : add OpInfo entry for signbit (#52198)
4fb82a8808 : Skip dispatch for `is_floating_point` (#52998)
d4e64dad15 : [static runtime] Register both TupleConstruct and ListConstruct as out variants (#52684)
2d67b76fa6 : [static runtime] Add Alias analysis to Memory Management/Planning (#50060)
b22df26361 : Explicitly export submodules and variables from torch module (#52339)
048e3917f9 : Add duplicate scheduled-ci to allow for debugging (#53109)
28f87bb734 : Don't run cpp tests a second time in the sharded ort_test2 job (#53067)
09ce9b5877 : Store test file in S3 as well for every TestSuite (#52869)
931100f829 : Revert D26696938: Update and expose ZeroRedundancyOptimizer docs
46bd76fdec : [quant][graphmode][fx][fp16] Add fp16 support for silu (#52865)
267aeb8a56 : [quant][graphmode][fx][fp16] Add fp16 support for tanh (#52864)
d40b501cfc : [quant][graphmode][fx][fp16] Add fp16 support for sigmoid (#52863)
3fb324f05b : [quant][graphmode][fx][fp16] Add fp16 support for layer_norm (#52862)
fc6fdade9f : [quant][graphmode][fx][fp16] Add fp16 support for torch.sum (#52811)
97c51d5d5d : [quant][graphmode][fx][fp16] Add fp16 support for div (#52810)
a6af93e921 : [quant][graphmode][fx][fp16] Add fp16 support for sub (#52809)
d382693263 : [NNC] Build aggregate stmt for kernel before LoopNest. (#53024)
f448c59a57 : Fix jit.trace mis-handling of InterfaceType (#53052)
aae188c529 : [NNC] Handle non literal constant bounds in Unroll. (#53029)
748285ccd7 : [complex] add autograd support for torch.polar (#52488)
87b6702833 : [distributed] make the pickler in distributed_c10d pluggable (#53060)
ac122a5a6d : [package] catch exceptions from calling reduce function. (#53061)
506f756a0a : Include max pool in fusion groups (#52613)
6149a26adb : Extend subgraph utils to cover merging a node following a subgraph (#52513)
dbbe21dfd7 : Remove unused subgraph vmap api (#52512)
b1284cfbfb : Only functionalize ops which we want to include in mkldnn group (#51924)
9a990dafd9 : Add a filter to remove mutation (#51923)
f41c80c267 : Dont error on 0-dim in convolution (#51922)
42bfda36e1 : Add 0-dim support for binary mkldnn ops (#51921)
32fed3f375 : Handle mkldnn broadcasting in mkldnn fuser (#51736)
a2f7e929ef : Add MKLDNN fuser (#51600)
43f56e19a6 : [NNC] Make NNC sanitize input names (#52786)
4b40141d2c : Add support for linear in mkldnn fusion (#51484)
bfae3789ba : Move conv to mkldnn (#51483)
7a60b7dc3e : Add support to compare devices (#53045)
a586c02962 : Update and expose ZeroRedundancyOptimizer docs (#52937)
a176c73ed5 : [TensorExpr] Reland: "PyBind: bind ExternalCalls." (#53063)
e22da0a5c4 : [TensorExpr] Add IRVerifier. (#52901)
3bd779cec6 : [rpc] make pickler/unpickler pluggable in RPC (#53050)
83a93ee145 : [package] Pull out _UnpicklerWrapper into PackageUnpickler (#53049)
ec128eadea : [package] _custom_import_pickler -> _package_pickler (#53048)
272dfc7bb9 : Add MANIFEST.in (#52908)
b5ae8e69a7 : [Lite Interpreter] Support features from to_backend (#52870)
8467e5cad3 : Remove ci-all and release branches running scheduled tests (#53069)
cfa41cea7e : [numpy] torch.logit: promote integer inputs to float (#52028)
c7c03dd388 : [PyTorch] Fix TORCH_CHECK_INDEX(false, ...) in IndexKernel (#53028)
07ae4e9309 : scripts: Add script to prep wheels for pypi (#53056)
fd4722949d : Fix the repeated entry in the Tensor Attributes doc (#52995)
e2462745ba : Update kineto submodule (#53039)
3993fb2bf9 : fix(docs): indent in docstring of key_averages (#53006)
b3bf08e67f : Log nccl debug level in ProcessGroupNCCL (#52803)
ec42c2d89c : [pyper] fuse clip_ranges+gather_ranges (#52461)
991160ebd9 : [quant][graphmode][fx] Add support for fp16 bmm pattern (#52808) (#53021)
b039dd15ce : Delete defunct LegacyTHFunctions templates (#53016)
812339ca3d : [ZeroRedundancyOptimizer] Buckets as tensor view + minimize public interface (#52987)
e10d2f477b : Clang-format c10d/init.cpp (#53008)
084839faa6 : Clang-format test_c10d.py (#52978)
e00e42dbab : [reland][quant][graphmode][fx][test][refactor] Refactoring binary op tests to split int8 and float16 tests (#52807) (#53020)
a9f7ae5357 : [ROCm] Enable test cases in test/test_dataloader.py for ROCm (#52766)
096bea5251 : [reland][quant][graphmode][fx][fp16] Add fp16 support for {add|mul}{_relu} (#52714) (#53019)
89b1053413 : [DataLoader] Move BufferedShuffle from Dataset to DataPipe (#52141)
f2657d2e4f : [ROCm] Enable test cases in test_cuda.py for ROCm (#52739)
0a70ec45d1 : [ROCm] Enable test cases in autocast_test_lists.py for ROCm (#52737)
4daa81e267 : Automated submodule update: FBGEMM (#52992)
e36576d153 : Probable fix for out of place BinaryOpScalar bad values and/or IMAs on 11.2 (ci-all edition) (#52634)
8870c391e9 : Update mkl to 2020.2.254 (#52964)
d4527b4e16 : add a full pipeline test for a TypeCheck (#52933)
7d060735ca : Back out "[TensorExpr] PyBind: bind ExternalCalls."
b22b082cc8 : Fixed the error of generator in the RandomSampler. (#52956)
3403babd94 : [doc] Fix documentations of torch functions (#52982)
6d29aa5486 : Make lambda supported by Map DataPipe (#52856)
66f07c0c12 : Optimized bilinear interpolation using TensorIterator (#51653)
0d46926c63 : ns for fx: remove subgraphs from user facing API (#52928)
87be8c1d7c : ns for fx: clean up duplicate code in get_matching_activations_a_shadows_b (#52927)
5b93cdace1 : ns for fx: remove model_name from get_matching_activations API (#52926)
907ee5b290 : ns for fx: docblock fixes (#52925)
0569f638fe : Update CODEOWNERS for torch.nn (#52942)
a06cf5d8a4 : [numpy] torch.{rad2deg, deg2rad}: promote integer inputs to float (#51853)
f5617b0932 : [testing] Add Opinfo for torch.frac and minor fixes (#52660)
312b297b82 : Revert D26626092: [quant][graphmode][fx][fp16] Add fp16 support for {add|mul}{_relu}
03693c7e4a : Revert D26655617: [quant][graphmode][fx][test][refactor] Refactoring binary op tests to split int8 and float16 tests
3a024a7ae2 : Revert D26655616: [quant][graphmode][fx] Add support for fp16 bmm pattern
e43ea227fe : Automated submodule update: tensorpipe (#52930)
57c7a61237 : [NNC] Added NNC IR specification (#52912)
b9e12a0e82 : [pytorch] Fix mkldnn heuristic for multithreaded convolution (#52909)
2c44b256d8 : [quant][graphmode][fx] Add support for fp16 bmm pattern (#52808)
4d94ee566e : Ge v1 (#52136)
f2f7fdba05 : [quant][graphmode][fx][test][refactor] Refactoring binary op tests to split int8 and float16 tests (#52807)
2962fbb03c : [quant][graphmode][fx][fp16] Add fp16 support for {add|mul}{_relu} (#52714)
729d88119a : Fix GradBucket Typing (#52943)
0818dbf49d : [quant][refactor] Merge add and mul handler (#52651)
a296fa36ac : [Caffe2] Implement BlackBoxPredictor::BenchmarkIndividualOps (#52903)
249c213462 : [ZeroRedundancyOptimizer] Pytorch compliant state (#52960)
b685864f50 : [quant][graphmode][fx] Add reference option support for linear_static_fp16 (#52650)
7f1693d95e : Fix type hints of the callable arguments for DataLoader (#52924)
177694681e : [quant][graphmode][fx] Add reference option support for linear_dynamic_fp16 (#52534)
e63ec556bf : [TensorExpr] PyBind: bind ExternalCalls. (#52905)
94e23e51c4 : [caffe2] EnforceFinite: log blobs finiteness in workspace on error (#52892)
10087337c7 : Exclude 'test' from codecoverage (#52935)
1d6bd15790 : [JIT] Add torch._C._jit submodule (#52910)
cb6b65699f : [quant][graphmode][fx] Add support for packed params in state_dict (#51639)
b8e6e2971c : Run distributed_test with NCCL_ASYNC_ERROR_HANDLING (#52619)
b2520ab3dc : Add a demo backend with compiler (#52603)
502a85990d : [PyTorch] Move Aten level source list to build_variable.bzl (#52792)
44b9fcfb55 : Fix local version generation (#52898)
155b19ef1a : [Pytorch Mobile] Remove useless line from bundled_inputs (#52824)
18ee39944a : .circleci: Change conda image to be cuda specific (#51494)
97568d7471 : Use --delta=0 by default for tools/test_history.py (#52877)
7a178a8a52 : [Static Runtime] Add memory alloc/dealloc time to benchmark (#52902)
7cfe140705 : Add distributed debug mode func to python (#52481)
a3cd881890 : Fix grammar in reducer warning (#52835)
af1fb4e4ee : Revert D26641600: [caffe2] move the SaveOp implementation from a header to a .cc file
21c3f6f415 : Revert D26617038: [caffe2] use AddNAlreadyReserved() when serializing blobs
69b2d5c7c3 : Revert D26641599: [caffe2] update load_save_test.py to also verify the chunking behavior
c423733967 : Add support for builtin sum (#52188)
25001a0148 : ns for fx: remove ".stats" suffix (#52799)
1d3172130d : ns for fx: add node name and type to results (#52798)
d2e88246d8 : ns for fx: make return type of ns APIs future proof (#52789)
fe068157de : ns for fx: unify return types of weight and activation APIs (#52779)
7094d970d1 : ns for fx: decouple subgraph names from node names (#52771)
cd9ac54ea7 : [caffe2] update load_save_test.py to also verify the chunking behavior
b4a8d98247 : [caffe2] use AddNAlreadyReserved() when serializing blobs
3969391c07 : [caffe2] move the SaveOp implementation from a header to a .cc file
fdd25f82c9 : Update to replace AT_ERROR with TORCH_CHECK (#52711)
a0a1bb074b : Make NumPy dependency dynamic (#52794)
9a03e65456 : Adding functional way of stacking DataPipes with fixed mypy (#52885)
f40c9db622 : [FX][EZ] Hoist custom class .so loading into setUp (#52883)
6514a47385 : [quant] Fix conv packed param serialization in state_dict (#52787)
a27aaa49aa : quant norm layers: move scale + zp to buffers (#52861)
51d8543ac7 : [FX] Use precompiled regex in graph name processing (#52853)
569d4fe3f9 : .github: Add workflow to build conda packages (#51243)
649760e5f1 : `maybe_resize_storage_cuda` new_size argument should be unsigned (#52672)
0f3a3f22af : Add sample validation for LKJCholesky.log_prob (#52763)
a52001f923 : Improve test_reference_numerics (#51604)
94da8b9816 : Fix resource leak bug in TCPStore constructor (#52860)
8ba7c4918a : [nnc] Test for direct usage of ramp/broadcast
0b93974075 : Fix incorrect runtime error in mul_() when the tensor layout is Mkldnn (#51758)
da732c76c4 : Revert D26644079: [pytorch][PR] Adding functional way of stacking DataPipes
c2558b4b61 : [vulkan] Add nonVarTypeModeGuard to vulkan tests and speed_benchmark_torch (#52535)
e94940b169 : Use touch() in pathlib for better compatibility on Windows (#52729)
19a8ada8d5 : quant: fix conv transpose with qconfig == None (#52844)
c871abecf5 : Added torch.no_grad() to update_bn (#52654)
0e86f14ec0 : Upgrade onednn to v.1.8.1 (#51184)
7972036bbb : Adding functional way of stacking DataPipes (#52507)
a11b601100 : Expose Store's timeout and TCPStore's host and port in Python API (#52784)
f974cf4688 : Test for distributed RL with RPC (#52393)
163a91bed3 : Fix TensorPipe agent trying to double-set error (#52837)
3ff6c9174a : Update TensorPipe submodule (#52677)
39fa0b5d0a : Add scatter_add to amp promote list (#52133)
316eabe9ba : fix(docs): remove redundant hardsigmoid() in docstring to show up `inplace` parameter (#52559)
1618dc2ac6 : ns for fx: update graph matching to handle dicts and tuples in node args (#52681)
608f44b24b : ns for fx: update graph matching to not match nodes with equal types (#52402)
4483c48eb1 : ns for fx: support linear_relu for weight matching (#52395)
64b4e37c26 : ns for fx: allow graph matching of parents of cat (#52368)
13121598ef : [Pytorch, sparsity] Bug fix to update requantization and zp parameters of input (#52797)
99a428ab22 : Lower ReLu6 to aten (#52723)
fa7575ea05 : Update backwards compatibility check to ignore reverted op (#52841)
914126901e : Fix typos in tools/test_history.py helpstring (#52840)
1ac59d9db3 : Fix RPC get_worker_info for rank=0 (#52804)
f71d9e28f9 : Store test filename in test report path (#52791)
92a4ee1cf6 : Revert D26375734: Implemented torch.linalg.multi_dot
0048d97eda : remove index_fill side-effect for scalar tensors (#52209)
57947c5d85 : [TensorExpr] Add `Placeholder::handle` method to get the corresponding `BufHandle`. (#52793)
d3b427a0e3 : [TensorExpr] Add an unmasked `Load` constructor. (#52790)
30cb6ac53c : Introduce `mlc` device (ML Compute device) to PyTorch's device list (#50634)
2bdf6305a0 : Drop unused variables (#52643)
a649d808e6 : Added fast path in the case of no hooks (#52576)
a6b7da7dfe : Add 64bit indexing support for softmax (#52713)
c140a5ec04 : Use finer-grained mutexes in TensorPipe RPC agent (#52749)
c954817696 : print matrix dims in torch cuda matrix multiply error (#52780)
29c4290a8d : Use c10::irange for great good (#52153)
373a20ad4a : Modernize for-loops in caffe2/torch (#52618)
0567988e74 : Kernel launch checks for aten/src/ATen (#52185)
98873b9258 : Update Gloo submodule (#52754)
0396f492b9 : Implemented torch.linalg.multi_dot (#51807)
964d47dfb9 : Add torch.linalg to generated annotated_args for test_overrides (#52464)
7b54a8fc23 : [quant] Reference option for conv module (#52316)
3cf08eaf15 : [Pytorch Mobile] Improve Bundled Inputs Error Checking (#52386)
88a160dc21 : [TensorExpr] LoopNest: Cleanup LoopNest constructors. (#52726)
08d266943d : structured kernels - error check when structured_delegate is not marked structured (#52227)
5e977d9c38 : Catch Flake8 error codes with multiple letters (#52750)
7ae7768617 : [ZeroRedundancyOptimizer] Remove pseudo futures handling, not needed (#52698)
27d04f291e : Clarify usage and output of tools/test_history.py (#52640)
b4b7db2f3b : [FX acc]Add fx_glow support for multi outputs (#52527)
59ac0ff037 : Change `maybe_resize_storage_cpu` `new_size` arg to unsigned (#52671)
08d7f29601 : Add discontiguous kwarg to make_tensor (#51985)
3489b4a7b8 : Fix the ordering of TCPStore's compare_set parameters (#52696)
97b6b3df51 : [Reland] Update XNNPACK (#52691)
8af648354f : [nnc] Benchmarks for concat (#52592)
b56f59ea20 : Revert D26599390: [pytorch][PR] Fix for incorrect usage of logging in torch/distributed/distributed_c10d.py
958d9a8364 : [fx/package] make GraphModules packageable (#51976)
075bbe0d6a : Fix for incorrect usage of logging in torch/distributed/distributed_c10d.py (#51739)
2d75346c25 : [Gradient Compression] Add a minimum compression rate threshold for PowerSGD communication hook (#52541)
755c60bffc : [PyTorch Mobile] Allow loading of all extra files using the extra_file argument (#52635)
0c455332e8 : docs: add link to Tensor.share_memory_ in Module.share_memory (#52561)
238b0bbb68 : Allow Transformer accept output result that is not Proxy (#52473)
75f7b22025 : Fix hipify_python (#52709)
26419815af : Modernize for-loops (#52330)
caa377f546 : replace type().backend() with device() (#52558)
b534466f01 : [DataLoader] TransformsIterDataPipe (#52604)
cabb1e7a94 : Fix wrong TORCH_CHECK usages (#52670)
420fc42eab : add OneDNN pooling backward (#49454)
df30cb78d2 : Remove unused variable (#52652)
d5ed57569b : Move cuda9 and cuda11.2 CI jobs to a scheduled workflow (#52693)
ecf3ca00d8 : [fx] Separate globals assignment from code generation (#51974)
1cddb27f39 : [FX acc]Store shape and dtype in serialized output node args (#52462)
e2afb269b8 : [caffe2] add a Python test for SaveOp chunking
1c63cb2c0f : Pass child error to parent in distributed tests. (#52632)
e3a805b9c5 : Fake Quantization support for f16 and f32 (#52612)
e658d7c37b : Ignore user annotated ignored attributes (#52367)
2680ff7759 : Revert D26598115: [pytorch][PR] Update XNNPACK
49b59e3472 : Add OpInfo entries for i0 and logical_not (#51956)
dc6fab4452 : Fix performance of CUDA trilinear interpolate backward (#52351)
3721962c33 : Update XNNPACK (#52645)
64847c7f0b : [TensorExpr] Properly handle ExternalCalls in LoadStore analysis and Inliner. (#52628)
b63a1e31d3 : [TensorExpr] Inlining: allow inlining into Load exprs. (#52627)
67794b14bb : Use `int8_t` instead of `char` in `[load|store]_scalar` (#52616)
7ecc1b603a : [TensorPipe] Update [Cpu|Cuda]Buffer fwd declarations (#52600)
fa8568184f : [caffe2] Delete some unused fields from TensorProto (#52521)
f111ec48c1 : docs: add fractional_max_pool in nn.functional (#52557)
6cfe55dea9 : Add psutil to requirements.txt (#52285)
a59c4039e0 : Fix undefined symbol for CUDA 11.1 Windows (#52506)
a0652c8f08 : [static runtime] Fix up deprecated exact equality in tests (#52617)
7f4dff5496 : docs: add FractionalMaxPool3d in pooling layers (#52556)
1865499d49 : [Pytorch Mobile] Improve export_opnames Documentation (#52333)
108ec77fa7 : [NNC] Added reductions to NNC python bindings. (#52492)
3309f034aa : remove pointless test (#52609)
fd5792f857 : docs: add :nosignatures: in torch.jit (#52555)
09fe753a33 : Enable TCPStore fixed slow test (#52511)
973e306c84 : changed TE 'Allocate' API to take one argument 'Buf' instead of three arguments 'Var', 'dtype', 'dims'. (#50167)
0bc57f47f0 : torch.Package zipfile debugging printer (#52176)
b72a72a477 : torch.Package extend PyTorchStreamWriter to track written records (#52218)
a39b1c42c1 : MHA: Fix regression and apply bias flag to both in/out proj (#52537)
bfc2645981 : [BE] force cmake to always generate version.py (#52477)
2eb9c0832e : Modernize for-loops in torch misc (#52452)
947225cd1b : update tracing codegen to use redispatch API (#52009)
80240d0888 : update autograd kernels to use redispatch (#51363)
6b8e670eb7 : [CI][IOS] Add lite interpreter ios build job (#52567)
067fd78f05 : add RECORD_FUNCTION to grad_sum_to_size (#52516)
09c56ef45e : Remove DepTracker from LoopNest (#52405)
847d1d4d53 : add debug_flush_compilation_cache to `Method` (#52317)
783b5c0c9f : op_whitelist -> op_allowlist (#52150)
03ae6d9903 : Remove useless _allgather_then_aggregate_hook (#52593)
ad3319cbc2 : fractional_max_pool{2/3}d : Fix segfaults for incorrect kernel_size and output_size (#51626)
116d402200 : Skip handle_r_to_c for dot & vdot (#52474)
4386a3803c : Replace all ASSERTM in serialization (#51756)
014d2123a3 : Replace all AT_ASSERTM in ATen (#51677)
d02a2bd5d1 : codegen'd API for redispatching (#52008)
ed71cbdd39 : Revert PR 52483 "[reland][complex] `masked_fill`" (#52587)
57637e0ab4 : port upsample_nearest3d and upsample_trilinear3d to structured (#52065)
d659477ae0 : port upsample_bilinear2d and upsample_bicubic2d to structured (#52012)
f3ea5ca672 : port upsample_linear1d to structured (#51917)
c78a4a52d2 : remove unnecessary/dead code in upsample_nearest1d cuda kernel (#51916)
ee04cd9587 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
c2b9283d4a : [PyTorch Mobile] Use real if constexpr behind macro in hot template (copy D26154964 in a different setting) (#52420)
d177654981 : [Take-2] [PyTorch Mobile] 15KiB size reduction by reducing MaxTypeIndex from 256 to 32 (#52466)
d491fc6d48 : [PyTorch] Add comment to unify macro and rename one macro (#52573)
e677b71056 : Add support for pow (#52374)
d819a21692 : Support any (#52360)
14f7bf0629 : [PyTorch] update CMake to build libtorch lite (#51419)
a935118c90 : Fix caffe2 to use MaybeAlign when using LLVM trunk
a61a8d059e : Restore fast path in OnnxifiOp::adjustOutputBatchSize (#52498)
65bfa1389d : [PyTorch Mobile] Do not create a static variable in Dispatcher::singleton() (#52447)
597c9f8b22 : fix zero_point rounding for _fake_quantize_learnable_per_channel_affine (#52290)
15892a651f : [PyTorch Mobile] Create compile time string for passing in to the exception message instead of 4 arguments that will be concatenated at runtime (#52303)
a62b0deae0 : [pytorch] make is_tracing scriptable (#49853)
d9161d6da3 : Optimize `setDebugName` time complexity (#52346)
53373a8e8c : remove deprecated function (#52426)
bb34fd6191 : [DataLoader] Fix util ImportError (#52459)
1c64f862f6 : Update vec_mergee operand specifiers (_vecb) (#52091)
72f9b3c8d5 : [StaticRuntime] Add function to check for memory leak (#52342)
ef8d17e112 : [DDP] Separate error messages for unused params in forward and not all outputs (#52391)
a3e693789f : [quant][graphmode][fx] Enable test for non quantized input for cat (#52414)
8fe6d17847 : Moving 11.2 CI to master only (#52536)
09516d2d0c : Reenables skipped tests for all CUDA versions except 11.2 (#52359)
626756ac39 : [quant][graphmode][api] debug --> reference (#52179)
941ebecc54 : [glow aot] Support --onnxifi_min_ops in AOT flow (#52380)
db33afbf9f : Change cmake to allow building with MLC kick-off build (#51326)
0c0de542be : [quant][graphmode][fx] Guard the supported quantization type for add/mul (#52413)
7cd9892f83 : [PyTorch] Sync TORCH_INTERNAL_ASSERT optis with TORCH_CHECK (#52226)
566f7c79d3 : [c10] Take advantage of c10::str optis for simple CAFFE_ENFORCE (#52223)
d6755934fa : [PyTorch] Make c10::str(const char*) return const char* (#52222)
b6cf17deee : [reland][complex] `masked_fill`: Complex Autograd support and update masked_scatter skips. (#52483)
44ff79d849 : Automatically set BUILD_SPLIT_CUDA for cpp exts (#52503)
b6ed05130e : Adding a flag to enable CPU fusion in benchmarks (#48612)
bfb007a438 : Example LSTMCell (#51983)
c9c4b871a5 : [pytorch] reintroduce static dispatch (#51957)
28e3dfdcca : [JIT] Allow __exit__ to have a return value (#52336)
bcd77cece4 : [JIT] Display an error message when with item is not an object (#52335)
338d2eca4a : [quant][graphmode][fx] Enable test for non quantized input for add/mul (#52412)
49a923c8b5 : [ONNX] Update LayerNorm symbolic to handle autocasting (#52199) (#52350)
26e8f8f223 : [ONNX] Update fuseLogSoftmaxNllLoss function to handle autocasting (#51729) (#52349)
12cbd6975a : [ONNX] Fix for sequence of mutations in blocks (#51577) (#52347)
08017f4598 : Add explicit cudart_static dependency for cublas_static (#52509)
752d808fa0 : Trace linear as aten::linear (#51897)
d5ac929b62 : [package] Introduce Importer to manage module namespace collisions. (#51975)
76e8324370 : [package] rename ex/importer.py to package_ex/importer.py (#52320)
bc6852c192 : Change TCPStore world_size and is_master to be optional (#51809)
9699c703c2 : Stable sort for the CPU take 2. (#51790)
5fda3b094c : Add conj OpInfo and fix out inconsistency (#52059)
8094e4844d : [ROCm] Enable test_jit_c10.py tests for ROCm (#52410)
dbeda994db : Update FindvecLib.cmake for macOS 10.14, 10.15 and Big Sur (#51288)
93c4067f25 : [BE] Cleanup UnaryOpsKernel.cpp (#52444)
b71215a909 : Revert D26515596: [pytorch][PR] Add support for pow
7ca9776874 : Fixed _out variants of linear algebra functions (#51560)
df3d1d9378 : [RPC] delete torch/csrc/utils/future.h (#51698)
3b11822825 : [RPC] Refactor rref_context to not use utils::Future (#51697)
d0795ab358 : log newly added construction and runtime stats at randomly selected iterations (#51394)
c75fa39b6c : add stats that can only be collected at runtime (#51386)
0c46b6b3f6 : [DDP] Enhance warning for find_unused_params (#52385)
c29e279f72 : [DDP] unittest for when params arent used in backward pass (#52384)
4ee5bc74d3 : [DataLoader] Change signature of Functional DataPipe (#52458)
3adc8f8cf7 : Enable min & max for Float16 & BFloat16 (#51244)
fb9f89507a : [quant][graphmode][fx] Fix fp16 dynamic quant for functional linear (#52369)
83feaebfc3 : Add support for pow (#52374)
d8b28579c3 : Add NNC support for aten::hardtanh (a hot operation in mobilenet v2/v3) (#52394)
f4c33edb45 : Add onnxifi interface for set/get options (#52388)
82548f3a00 : [ROCm] missing template declarations for complex blas (#52472)
65f6e665e6 : Improvements for FX tracer (#52232)
bb7e07ce8e : [glow] Extending AOT config with two more fields (#5359)
e0b6252de0 : [ROCm] Enable test_ddp_hooks.py test cases (#52403)
89bc9a58e2 : Add arm64 binary build (#52443)
22adea04df : Revert D26299594: [PyTorch Mobile] 15KiB size reduction by reducing MaxTypeIndex from 256 to 32
2d4354423e : Revert nightly docker build cuda version to 11.1.1. (#52234)
49c90648d3 : [iOS GPU] Fix max_pool_2d (#52431)
c7a70eec1b : Make LLVM the default backend for TE (#52314)
8f3ed60d3e : enable mkldnn conv2d backward to support mkldnn tensor input (#48994)
9e54532947 : [PyTorch Mobile] 15KiB size reduction by reducing MaxTypeIndex from 256 to 32 (#51881)
983347fa25 : Allow broadcasting against lerp weights. (#52319)
b52e2e6045 : [BE] _get_torch_cuda_version should return tuple (#52409)
f72b4b83fe : Fix upsample bicubic2d batching handling on CPU. (#52389)
c7b0005831 : Enhance Tensor.unflatten to support -1 as the inferred size (#51955)
ad9746456e : ns for fx: make unshadowed activation comparison work for N models (#52357)
a937d1cb16 : ns for fx: make weights comparison work on N models (#52356)
d903106bad : [wip] ns for fx: add support for subgraph matching (#52130)
3978ffb37a : NS for FX: add test for a simple sparsenn model (#52092)
efbb854ed8 : [PyTorch] Avoid std::string in TORCH_CHECK when possible (#52221)
ba77b8d84e : [PyTorch][easy] Make shared empty string static instead of thread_local (#52220)
c8b3686a3e : Make bias in lazy modules lazy and avoid create empty tensors (#52212)
758aa45563 : Revert D26369476: [pytorch][PR] [complex] `masked_fill`: Complex Autograd support and update masked_scatter skips.
60518d10f6 : [deploy] torch::deploy API (#51754)
9cf6be6b3e : Fix torch.nn.functional.interpolate microbenchmark for non-4D inputs
7a67a7a396 : [static runtime] Generate sigmoid with NNC (#52424)
8228086e64 : [static runtime] Use VML-inspired logarithm with NNC, tweak scheduling (#52423)
e1d927e552 : [JIT] Update freezing api (#52337)
ac121165e2 : Remove ReduceOp::accumulator (#52196)
a788c2d777 : [nnc] Remove output_args from ReduceOp (#52187)
62d5f60ad2 : Avoid using ReduceOp->output_args() in rfactor (#52177)
f6a6814a4f : Don't look at reduction output args when computing mem dependencies (#52170)
de9016007a : [PyTorch][easy] Coalesce string literals in data_ptr error message (#52379)
7a408c7290 : [complex] `masked_fill`: Complex Autograd support and update masked_scatter skips. (#52035)
f6321977e9 : Fix shape inference for multiple outputs with different output dtypes (#52417)
f1e004b954 : Fix compiler warning for MathConstants.h (#52123)
eaad002cf6 : [PyTorch] s/__attribute__((__noinline__))/__attribute__((noinline))/ (#52362)
f7aa88b400 : [caffe2] Explicitly define all DataTypes in python/core.py (#51768)
27d89057f8 : [caffe2] fix deserialization of unknown tensor data_type values (#52411)
e0d9d0f248 : update symeig backward note about similar eigenvalues (#52311)
08b95e3c48 : [Pytorch, Sparsity] Integrate sparse qnnpack operator in framework (#52377)
908ba05a06 : [Pytorch] Add python binding to use mobile cpu allocator. (#52376)
70bed6a55a : Removes deprecated preprocess method from the backend interface (#52258)
b9f051db9f : Add type hints for the _import_c_extension module (#51767)
76af821c36 : [PyTorch] "Fix" wrong-looking move in TensorImpl (#52344)
2b202667c1 : [1/N] CPU pointwise optimization: Add a benchmark for Relu
2775ff4a47 : [BE] Decorate unused functions with C10_UNUSED (#52378)
a11650b069 : .circleci: Downgrade CUDA 11.2 -> 11.1 for binaries (#52151)
79e10ce97b : [PyTorch] Construct IValue from List without copies in args (#52325)
7e2becb70f : [PyTorch] Reduce copy/move in c10::ivalue::from (#52324)
f7a3634466 : [WIP][FX] Normalize torch.nn.functional calls (#51816)
8bf846d2c8 : Skip OneDNN Convolution in case of groups = 24 #50042 (#52327)
f6e0f5b85a : [typing] ignore mypy false positives in aten_test.py (#52370)
5003d417d4 : [PyTorch Mobile] Outline DispatchStub::get_call_ptr() (#51908)
e7f28d4241 : [PyTorch Mobile] Restructure DispatchStub::operator() code to move template independent code into an external method (#51403)
6c875f17ca : Enable PyTorch_QNNPACK for Apple Silicon builds (#52308)
51c28e4d7e : [ROCm] enable fft tests (#51581)
edf8130e9e : [PyTorch] Add set_data_ptr_noswap & use where possible (#52244)
a07530e57f : [quant] Factoring out the list of no_observers (#50459)
b8584b884e : [quant] Quantizable MultiheadAttention (#49866)
440fddf07b : Remove unnecessary statement in `capture_stderr` (#52366)
6dabe0b291 : [Dist Profiling] Enable dist profiling for DDP (gloo only) (#52031)
059ee85ca4 : [PyTorch] Devirtualize TensorImpl::storage() (#51050)
4305609d66 : Fix complex acos edge cases (#52287)
72d1ccd3ca : Revert D26263480: [Pytorch, Sparsity] Integrate sparse qnnpack operator in framework
cbede834d4 : [JIT] Add support for default argument values to Torchbind (#51253)
324c6aada1 : BFloat16: enable prepacked weights's inference (#48922)
e36a900e89 : [tools] Use anonymous access to access S3 bucket (#52338)
0e2520baae : [PyTorch] Don't read 1 char per iteration in Unpickler::readString (#51901)
b2aa63f17c : [PyTorch] Fix return value of IValue::to for Tensor/String (#51463)
a9f5e7229e : [PyTorch] Remove reference_cast in make_boxed_from_unboxed_functor (#51319)
c442776f3c : [PyTorch] Debug-gate static_assert in KernelFunction::makeFromUnboxedFunctor (#51367)
975d9f2551 : Mypy fixes for pytorch master (#52090)
a8885ee7e6 : [BE][typing] add caffe2/torch proto stubs (1 of 2) (#52341)
99619ea3b7 : Automated submodule update: FBGEMM (#52354)
d8bb932245 : Support AST rewriting for submodules (#52297)
87ebaa4eb1 : [Pytorch, Sparsity] Integrate sparse qnnpack operator in framework
a6e94d274f : [Pytorch] Add python binding to use mobile cpu allocator. (#52323)
4501b52fe5 : Benchmark for torch.ops.quantized.linear_prepack_fp16 operator (#52229)
6e1a5b1196 : [PyTorch] Use real if constexpr behind macro in hot template (#51368)
680c4ce1dd : [PyTorch] Avoid some extra intrusive_ptr<Tuple> copies in Unpickler (#51902)
f235c65a2b : [TorchScript] C++ interface of to_<backend> (Re-land) (#52340)
8c185e62f9 : torchvision hipify revamp fix (#51453)
35b0560ea2 : Automated submodule update: FBGEMM (#52255)
bb9e0c625e : [nnc] Add dummy reference to llvm::cfg::Update<BasicBlock> (#52321)
bfc7e28188 : reland - ns for fx - stubs of the three APIs (compare weights, activations, activations with shadow) (#52302)
fa393b56e7 : [static runtime] use NNC to generate logit, relu and tanh (#52322)
4156588365 : [nnc] Allow 1 ulp tolerance in log approximation (#52165)
9409a3a39b : Check kernel launches in caffe2/operators (#52240)
059c564ba4 : [DataLoader] Fix module import (#52224)
4e36891e4f : Temporarily disable cat tests on MacOS due to Sandcastle failure
52af23b912 : Update PyBind to official v2.6.2 tag (#52304)
63206ada8f : Adding back CUDA 11.1 CI (#52171)
f3f72b5c6b : when BUILD_SPLIT_CUDA=ON, create dummy torch_cuda (#52305)
b887c30980 : Out version for sum (#52225)
71d5a8ea62 : [nnc] Benchmark inference batchnorm (#52251)
0019a20a2b : [WIP] Add a `_flush_compilation_cache` for testing (#52001)
b01b7ea4f3 : store artifacts for windows binary build (#52239)
4df8e774e6 : [ROCm] warn unsupported PYTORCH_CUDA_FUSER_DISABLE_FMA (#50508)
68e2a8c420 : Reenable test_nn tests for Windows (#52051)
df837d0384 : Use the libc++ detection instead of clang detection around std:isinfinite (#52164)
cd46ee6175 : Revert D26280518: [TorchScript] C++ interface of to_<backend>
1903b32c35 : Directly Return when Numel == 0 for WeightedSum and ScatterWeightedSum
eaddadd4f7 : Revert D26403094: ns for fx - stubs of the three APIs (compare weights, activations, activations with shadow)
4949eea0ff : [StaticRuntime] Clean up output references and remove dead code (#52237)
73de98204d : [JIT] Add static method support for TorchBind (#51177)
de4c9ecc35 : Fix libnvrtc discoverability in package patched by `auditwheel` (#52184)
357e5baf7e : Extend DynamicLibrary constructor to support alternative library name (#52183)
b8f3a658f9 : Do not include "DynamicLibrary.h" into a top-level header (#52182)
52e6ef8b53 : [TensorExpr] Add another test for ExternalCalls. (#52162)
bf841b25e4 : [cmake] Add explicit cublas->cudart dependency (#52243)
490eb3e735 : Add 3D depthwise separable convolution (#51027)
846755af2f : Remove unused include in TensorIteratorDynamicCasting.h (#51824)
8ff5a46c32 : [RPC] waitForThreadLocalRRefs returns jitFuture (#51696)
87c0b6bffc : [RPC] Move confirmation future in rref context to jit future (#51695)
96fd5d87f7 : Add `dict()` constructor (#51934)
a184ef8df5 : [TorchScript] C++ interface of to_<backend> (#51797)
4ab86c87a2 : [caffe2 and pytorch] replace temp name of new sparse adagrad JIT'ed function in fbgemm (#52193)
a86027ded3 : Use side-stream in CPU to GPU copies in DDP (#50180)
71d0b5632b : Add SqueezeNet to PyTorch Playground
388c38505c : [Metal] Add concat op for metal
4cc10563e7 : Customize traceback for calls to symbolically-traced code (#51648)
1657d59641 : Walk Python AST to check for unsupported attribute type annotations (#51805)
37622db76a : ns for fx - stubs of the three APIs (compare weights, activations, activations with shadow) (#51669)
bfe6e23209 : Early version of fx graph matcher for NS (#51588)
2900cf2b94 : Refactor autograd discovery code (#52057)
b2d8f0a431 : [pytorch][bot] update mobile op deps (#52110)
a8321855ad : Check kernel launches in caffe2/aten/src/THC (#52174)
7b21c6be67 : [Dist Profiling] Enable profiling for gloo send/recv (#52004)
49c8be516e : Add ARM64 cross-compilation build on OS X (#49751)
83fa713f2b : Fix test to use proper condition. (#52216)
0dc0cb1d8d : Enable FP16 sparse regularizer
fa0a049d4e : Add a make_tempdir() utility function to the TestCase base class (#51762)
05b60921ae : [iOS][PyTorch][OSS] fix iOS nightly build (#52197)
de54510f15 : Check kernel launches in caffe2/caffe2/image (#52173)
1795398c24 : Updates rounding_mode documentation to remove "true" (#52202)
e8ab58bfc7 : [reland] Early terminate CUDA on common_utils TestCases (#52126)
d22f700f9e : Link torch_global_deps to libtbb.so if USE_TBB is enabled (#51741)
992d251c39 : Revert D26333953: [StaticRuntime] Clean up output references and remove dead code
0c9d72b5e1 : [StaticRuntime] Clean up output references and remove dead code (#51991)
e4203c4306 : Automated submodule update: FBGEMM (#52129)
db6e0c7c0e : Replace a platform.system() check with sys.platform (#51766)
dc25c90cfc : Check kernel launches in caffe2/aten/src/THCUNN (#52172)
578f0a04c7 : fix torch.nn.parallel.scatter_gather.gather to handle NamedTuples and handle moving output to CPU (#51104)
ba7a2f6513 : Add debug helper function to check target property (#52093)
22b12179db : [PyTorch] Make TORCH_INTERNAL_ASSERT use torchCheckFail too (#52086)
f2b43ddbf4 : Update api doc for enabling TcpStore on Windows (#51847)
ac2bdf553e : update build_host_protoc command for macos cross compilation (#50922)
6385c13630 : [vulkan] Efficient gemm implementation (#49609)
70a805a286 : [ROCm] skip one more magma test that is flaky (#52064)
4c58be4573 : [StaticRuntime] Clean up input references (#51952)
deb74edb28 : Add script to display history for a single test across multiple jobs over time (#52000)
8908874003 : Gh/taylorrobie/import timer fbcode (#52124)
ea8aadf4b6 : Use self-hosted runner for nightly docker build CI. (#52148)
4c93a79a04 : [Dist Profiling] Support shape recording for profiling collectives (#51822)
76c6e12a5c : Minor spelling updates (#52149)
3d77529f5b : enable autocast for xla (#48570)
b6806308ac : typo in docs ddp_comm_hooks.rst (#51986)
517185f946 : test_lc_1d: Increase deadline to 5 seconds (#52013)
497b772547 : Add custom implementation for `csqrt` if libc++ is used (#52018)
0bc7b9843b : use sccache 2.15 over the outdated sccache (#52095)
81b9aa743b : [pytorch] Update caffe2/python to eliminate Pyre errors (#52083)
c4eb22009e : Drop some Python 2 compatibility code (#51769)
c931c29120 : [PyTorch][easy] Fix TODOs in CppFunction constructors (#51315)
10d407647f : [PyTorch] Reduce template expansion in call_functor_with_args_from_stack (#51313)
425a5dc3f7 : [DataLoader] Modify SamplerIDP signature (#52104)
aa2fede201 : Fix autograd when `inputs` contains tensors without materialized grad_fn (#51940)
0de7a4582e : Fix Pytorch docker image name by adding the registry prefix (#52089)
fb2693a632 : Use bool/float instead of np.bool/np.float (#52103)
7763c127cd : [PyTorch] move aten::dict to lite interpreter (#52032)
bc856b49d4 : Add support for constants to fx_glow (#52094)
fd41ed1cce : Fix flaky TestTrainingLoop - TestE2ETensorPipe (#51939)
4ab0ef36a4 : change back to multiple_outputs_gpu_kernel for learnable fake per-channel quantization (#52017)
39aa3db62b : use make_shared and make_unique and clean unneeded code (#51829)
9653161fb4 : bump nightlies to 1.9.0 (#51891)
faaff0cd9b : [caffe2 and pytorch] use new sparse adagrad JIT'ed function in fbgemm
d7ea0fe75a : [testing] Add OpInfo for rad2deg and deg2rad (#51283)
de334e6a2f : fast-path is_complex() in the dispatcher (#50054)
705fa7e964 : [Usability] Capture argument names for traced functions and modules (#51775)
4add8502c3 : inlining a function that I noticed was hot during previous benchmarking (#50848)
fa325d7c9f : Use `sum_integers` and `multiply_integers` (#51146)
bff8194522 : Replace 11.1 with 11.2 on CI for Windows (#51598)
5431d87c3e : [JIT] Use `is_buffer` in `BufferPolicy::valid` (#49588)
410ef1335a : [JIT] Add buffer/parameter metadata test to test_save_load.py (#49594)
9f1f5636d7 : Revert D26019289: [pytorch][PR] Early terminate CUDA on common_utils TestCases
d0fd41dcfe : Add size op in nnapi serializer (#52026)
a1b8f3d4b6 : Replace CUDA 11.1 Linux CI with CUDA 11.2 (#51905)
9b8d414a9c : update sccache wrapper to accommodate new sccache for macos build (#51357)
bd6248106b : Keep alive graph when creating iterators from it (#51951)
ce8ba5f3bc : Fix test time history report if no ancestor report (#52054)
a1c67b0763 : Silence harmless error logs of TensorPipe agent during shutdown (#51785)
b7b944a319 : Avoid TensorPipe agent spamming logs when unable to guess IP address (#51784)
03e82f7944 : Use CUDA 11.2 for nightly docker build. (#51990)
c4a8f0ceaa : [torch script] Add pure list producing ops to alias analysis (#51999)
50e6f0fdb6 : Add benchmark for torch.nn.functional.interpolate
c1b7ca8062 : Early terminate CUDA on common_utils TestCases (#50914)
8b0cb5ede3 : OpInfo: Added clamp and trunc tests with aliases (#51167)
3cf78395cb : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
594a66d778 : Warn about floor_divide performing incorrect rounding (#50281)
9c0caf0384 : Adding support for comparing two bool variables (#51844)
602434bcbe : [te] Benchmark vml-based logit (#51771)
2e35fe9535 : [te] Implement log approximation using the VML approach (#51752)
ff73be7e45 : [te] Introduce likely/unlikely CompareSelect hint (#51751)
74082f0d6f : [te][llvm] Generate arithmetic vs logical right shift as appropriate (#51749)
0620c96fd6 : Back out "Revert D26009829: Optimize relu on cpu using clamp_min" (#51819)
33afb5f19f : fake_quant cachemask: remove Python bindings (#51878)
5f9fb93c14 : [model loading] Add max_batch_size override for batch size exploration
768662913a : Migrate masked_fill__cuda to ATen (#51404)
929b91a24d : ns_eager: rename Logger I/O var names to logger_cls (#51359)
5a9bac58be : Automated submodule update: FBGEMM (#52014)
18e0a61388 : add more logging fields that can be set in construction time (#51260)
d23cb94098 : [FX] Generalize dict key check in `create-arg` (#51927)
256f93fb0f : [FX][EZ] Fix tuple type annotations (#52010)
d4e84b0c07 : [FX] Fix leaf modules in Transformer (#51998)
d5a9627f10 : [PyTorch] Re-order TensorImpl fields to save a word (#50920)
475278f1c0 : [FX] Make some modifications to limitation section (#51928)
3af7b673ef : Let child CUDAFuture wait for parent CUDAFuture's CUDAEvents (#51820)
c6b4fc8a90 : [ROCm] add 4.0.1 docker image (#51507)
1921b244f6 : [DataLoader] Rename files of functional datapipes (#51880)
9eb70c3c78 : [DataLoader] Rename Callable to Map IterDataPipe (#51879)
104371e1dc : [DataLoader] Implement FilterIterDataPipe (#51783)
e964d77fca : [pytorch] recast infer_type error and amend with name and item that failed inferring
12d85b536e : Fixing Softmax bench. (#51898)
7e54a64828 : [C2] Add shape inference logic for ColwiseMax operator. (#51914)
0410cba23e : [FX] make map_arg require a callable (#51907)
2f2b170068 : [Pytorch Mobile] Only preserve bundled input helpers for forward if they exist (#51884)
8fab33f942 : Fix the lifetime of PyTensorType (#51649)
0ec00c1292 : [docs] Add docs for storage and tensors for quantized Tensor (#51817)
fc314350ad : Make RebatchingBuffer compatible with auto shape inference
1e171f024b : Fix warnings in TensorShape (#51642)
141f615161 : Support torch.type (#51904)
b3fda95fe7 : Add LazyBatchNormXd (#51862)
5dd1568aa3 : [ROCm] skip more magma tests (#51915)
8c09cc6475 : Remove android toolchain in Windows CircleCI image (#51405)
20fe2e12d6 : typo (#48887)
c357f8b826 : [package] make torch.package produce unified format (#51826)
85b25257ff : [package] Use custom persistent_load in PackageImporter (#51595)
285e69a9cd : [package] more reliable method for determining standard library-ness (#51694)
42635c3e59 : Fix regex in collect_env.py for CUDA 11 (#51852)
35b3e16091 : [pytorch] Fix torch.nn.functional.normalize to be properly scriptable (#51909)
d61d8d886b : correct value argument name for Tensor.index_fill_ docs (#51763)
d5a2429c24 : Fix flake8 failures (#51963)
a1bfa5eed7 : Do not print warning if CUDA driver not found (#51806)
56034636b9 : Workaround arm64 gcc error in `std::copysign` (#51900)
015cabf82a : move GroupByFilename Dataset to DataPipe (#51709)
482b94ae51 : move RoutedDecoder Dataset to DataPipe (#51704)
8ab22a080b : Build pytorch_android using Gradle wrapper. (#51067)
034a007ad8 : Remind about AutoNonVariableTypeMode in error message. (#51655)
2303c244fc : skip a second call to shouldUseRecordFunction for BackendSelect ops (#50891)
7b9ca54ecf : Reset checkpoint_valid flag when error happens during function execution (#51746)
dac730af11 : Warn if mypy version doesn't match CI (#51799)
21ef248fb8 : [reland] Report test time regressions (#50171)
9e4f3b89c4 : [Gradient Compression] Add register_comm_hook API to DDP communication hooks documentation page (#51846)
1e70b4bb73 : Add GH Actions CI to build nightly Docker and push to GitHub Container Registry (#51755)
58eb23378f : Clean up usage of torch._six partially (#49785)
97e35858ec : [Resubmit] Add compare_set operation and test to TCPStore (#51815)
7363da7c57 : onnx export of per channel fake quantize functions (#42835)
159c48b19b : Fix triplet margin loss and reciprocal docs (#51650)
d90911adf9 : fix AdaptiveAveragePooling crash problem for non-supported input (#51443)
b9acfcddeb : Support mypy ignore annotation with particular rule specified (#51675)
41bab9a4b6 : Plumbing dispatch keys through the dispatcher (#49354)
6fa5e96f2e : remove unnecessary BoxedKernelWrapper specialization now that ops are all c10-full (#50963)
d9e6750759 : fix multi_output_kernel (#51827)
21dccbca62 : Revert D26232345: [pytorch][PR] Report test time regressions
1aaddd83a5 : don't set the same C++ and C standards twice (#51832)
649e683255 : Fix torch.nonzero type annotation (#51635)
0dd1d60d54 : [JIT] Remove Dropout during Frozen Optimization (#51589)
9cbefad83f : concatenate LICENSE files when building a wheel (#51634)
b97a040f71 : ENH: toggle TORCH_WARN_ONCE to TORCH_WARN for tests (#48560)
d454a84bab : derivatives.yaml cleanup + restore codegen code forgotten in refactor (#51721)
7467f90b13 : Report test time regressions (#50171)
c89f15ec6d : Reland nightlies 11.2 (#51874)
79832f3d77 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
bce4c82f0d : [C2] Add TypeAndShape Inference logic for ReduceMean (#51828)
fcf8b71234 : Disable unaliged-access test from TestVectorizedMemoryAccess.CopyKernel (#51864)
0c313564af : Backward through sparse_coo_tensor (#50361)
4b3c99ce4a : [Resubmission] Add a documentation page for DDP communication hooks (#51773)
4968227058 : add shape inference for Int8GenQuantParamsMinMax
6c0bf28da6 : [wip] doc_fix (#51825)
6488b2bc3a : Revert D26282829: [pytorch][PR] Adding support for CUDA 11.2 in our nightly build matrix
fa70168804 : Add metacompile of Ternary if (#51789)
8a9090219e : [pyper] register aten::index_out (#51742)
9a964ce89b : Enables backend preprocessing to take place outside of the backend interface (#51757)
215d9daceb : Refactor internal methods into debugging utilities (#51737)
19753af6ea : [QNNPACK Sparsity] Add aarch64 kernel of 8x1 sparsity (#51120)
6b2811f288 : [QNNPACK, Sparsity] Add 8x1 block sparse kernels for aarch32. (#51119)
c034e0750c : [QNNPACK, Sparsity] Code refactoring to allow for more generic block (#51118)
bc1b1e8253 : fixing mkldnn_linear & backward with silent error (#51713)
9112f4eded : [FX][docs] Indent forward (#51802)
8c48af822e : pytorch docs: add fake_quantize functions documentation (#51748)
ececbcfff2 : [Conda][Kineto] Define weak `acc_get_device_type` if kineto is used (#51818)
fb07aca7b0 : Adding support for CUDA 11.2 in our nightly build matrix (#51611)
5c3a054b12 : Add FLOPS support to the new profiler API. (#51734)
430329e875 : Revert D26009829: Optimize relu on cpu using clamp_min
50c9c08203 : Enable GPU/RE tags for caffe2/caffe2/python/TARGETS
2054cd56c5 : Optimize relu on cpu using clamp_min (#50924)
3cfbf6d3ac : [quick-checks] Allow `gradlew` to be executable (#51796)
029f857b22 : [Metal] Add hardswish and hardsigmoid to metal, fix broadcasting for binary elementwise ops
a930162c69 : Revert D26276903: [pytorch][PR] Add LazyBatchNormXd
33973d45a9 : Add `acc_get_device_type` weak symbol to `kineto_profler` (#51787)
59cb693c90 : [quant] add docs for embedding/embedding_bag (#51770)
9c2dd5775a : Fixed slight bug in FX docs (#51779)
aa1fd6b45a : Add LazyBatchNormXd (#51548)
5a962369e2 : [Gradient Compression] Check if the backend is NCCL when a DDP communication hook is registered (#51759)
105c3d2196 : Update CODEOWNERS (#51726)
a7ba051fa6 : [QNNPACK, Sparsity] Add dynamic linear sparse kernel for arm64 (#50591)
70830b5ac0 : [QNNPACK, Sparsity] Sparse kernel with 4x8 blocking (#50590)
e8ee35a666 : Add script to compare namespace content for release cleanup (#51685)
28c5d90b67 : [JIT] Allow implicit boolean conversion of containers (#51683)
d3023d86ba : Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks
1065c2d5b6 : Fix clang-tidy warnings in python_sugared_value.{h,cpp} (#51703)
c941730b96 : [JIT/Futures] support set_exception api (#50983)
8e78dd6de8 : [torch.futures] Fix doc inconsistency about callback args (#50979)
21afbba79b : [torch.futures] Clarify callback behavior when future is completed (#50978)
c3f2f3294e : [RPC] Add option to make rref.get_type not block. (#50977)
716a8c2153 : make forward AD API private (#51693)
e62aabac43 : [Gradient Compression] Add a documentation page for DDP communication hooks (#51715)
de7eeb7752 : Removes nonzero method warning (#51618)
e7ff0854c6 : [doc] Fix inconsistencies with torch.linalg.inv and deprecate torch.inverse (#51672)
ff4848aaa1 : [doc] Fix inconsistencies with linalg.pinv docs and deprecate pinverse (#51671)
e7d7256f2d : [doc] Fix inconsistencies with torch.linalg.matrix_rank doc (#51660)
0308261ddc : [doc] Fix inconsistencies with torch.linalg.eigvalsh (#51659)
87504c3265 : [doc] Fix inconsistencies with torch.linalg.eigh (#51658)
4835f203ec : [doc] Fix inconsistencies with torch.linalg.det docs (#51651)
7c12afb5e2 : [doc] Fix inconsistencies with torch.linalg.cond doc (#51641)
4d703d040b : Linear autodiff revert revert (#51613)
6dcbf396aa : [QNNPACK, Sparsity] Added prepacking base aarch32 kernels (#50589)
47a6703bdb : [QNNPACK, Sparsity] ARMV7, aarch32, kernels for dynamic linear (#50588)
3fec1e5025 : fix hardsigmoid_backward for boundary case (#51454)
8c737f732b : replacing ubuntu-latest with ubuntu-18.04 (#51744)
094d597679 : raise windows tol to 30% (#51733)
ab0cf3b6b5 : Add 'repeat' argument to profiler.schedule (#51630)
62aea33d7f : Revert D26237328: Add compare_set operation and test to TCPStore
ecfb73aaca : Update docs for torch.profiler.tensorboard_trace_handler (#51636)
d4d5f8569f : [FX] Fix mypy error in FX for rewriter (#51740)
b150f150ba : Add division overload with rounding_mode selection (#51706)
949ab213dd : Revert "Revert D26246231: [FX] Edits after comprehensive pass over docs" (#51728)
8c0da1f5e9 : [ONNX] Modifications in remove inplace ops passes to better handle binary inplace ops (#51318) (#51572)
c7f1595b19 : fix bug (#51222) (#51527)
25b18bb5d7 : [ONNX] Support list remove for onnx export (#51373) (#51526)
6d47e2cff8 : [ONNX] Fix opset 11 ConstantChunk with negative dim (#51396) (#51525)
ba824eb2d6 : [ONNX] Update unsafe_chunk() method to support new version 13 of Split operator. (#51415) (#51524)
8ae6b0c5f9 : [ONNX] Enable Constant Folding for ONNX Opset 13 (#51096) (#51523)
1c7d966432 : Update error message that displays when encountering an op unsupported for ONNX export. (#51387) (#51522)
586c2e8d62 : [ONNX] Fix graph sequence output from loop node (#51305) (#51521)
3cc46002a3 : [ONNX] Fix graph position to insert clone node for inplace op removal (#50123) (#51520)
0e7e4d4217 : [ONNX] Add silu operator support for onnx (#51193) (#51519)
9191b639ba : [ONNX] Enable remaining failed tests in opset13 (#50806) (#51518)
3f185ac18e : [ONNX] Export get/set attribute nodes (#50768) (#51517)
1829268e7f : [ONNX] Improve error message for parse_arg in symbolic functions (#50512) (#51516)
8dd9fefacb : [ONNX] Fix bug in unfold symbolic (#50504) (#51515)
7255b3f6b7 : [ONNX] Update constant-folding of Gather op (#50554) (#51514)
2d305b97e9 : [FX] Added partial concrete values for symbolic tracing (#51609)
2e8e560cdf : Fix anomaly mode memory leak (#51610)
0222966ecd : Fix several minor things in .circleci/README.md (#51724)
14273126d2 : Numeric Suite: Swap with shadow modules only for quantized part of model (#51052)
a0137808a7 : Note on Modules for 1.8 docs (#51536)
de9364aef2 : fixes clang-tidy-11 install by using ubuntu18.04 instead of 20.04 (#51725)
1e2df9e46d : [cuda] masked_scatter : static_cast init_value to circumvent cuda 11.2 issue (#51614)
7d00aec6bc : Add compare_set operation and test to TCPStore (#51593)
003a240e68 : [package] use WeakValueDictionary for global imported module registry (#51666)
6c80fd005f : Revert D26246231: [FX] Edits after comprehensive pass over docs
4d85e30133 : Support at::cpu on non-structured kernels (#51590)
668e0f3598 : Split anonymous and namespaced definitions in RegisterDispatchKey (#51585)
a626b78467 : Factor out structured generation into its own subclass. (#51583)
93c4f9f972 : Split out RegisterDispatchKey to its own file (#51508)
6045663f39 : Use Literal to model targets. (#51500)
c22bc4821d : [FX] Edits after comprehensive pass over docs (#51705)
9920ae665b : Make te a hidden package for now (#51690)
ecf8166522 : Support Union[NoneType, T] as input type (#51605)
f1f9b049d8 : [profiler] Support top-level memory events (#51421)
a9584f29c1 : Fix attribution of some CUDA events to CPU events (#51632)
d6452a1a0c : [profiler] Default activities value (#51561)
7abba67d8c : add dumping callstack to kineto (#51565)
8c3e0ddbc6 : [Usability] Tolerate `torch.jit.script` call to Enum classes (#51624)
86861095fa : Graceful invalidation of Python Node/Value/Block when C++ object is deleted (#50326)
c8af338407 : Expand benchmark utils docs (#51664)
1518aee639 : unbreak bc test (#51702)
6a945bfb5c : Fix memory leak in qnnpack ops (#51612)
e60f18c2ad : Generate header with version #defines for LibTorch (#50073)
23c50a4a50 : [PyTorch Mobile] Support torchbind custom classes in lite interpreter (#51432)
1ffd26f8d8 : [quant] Add reflection padding to conv (#49011)
c41678fd53 : Use deterministic impl of `index_put` and `index` backward CPU when `torch.are_deterministic_algorithms_enabled() == True` (#51388)
f1a63b7c10 : [FX] Added how to write transformations section (#51278)
bd3ae117fc : Fixes cat backward formula to return correct gradient values for R -> C case (#51681)
d8742eeed0 : [quant] Support 2 dim input in quantized batchnorm 1d (#51597)
5d123ecf2f : Fix caffe2 for LLVM trunk
0c60922fb0 : mem-efficient learnable fake quantization (#49315)
7918f37e8c : [FX] Move examples to pytorch/examples (#51686)
f2c4deabeb : Extend subgraph_rewriter logic (#51532)
627ec8badf : Type-annotate tools/generate_torch_version (#51637)
50d903f19f : [optim] make functional api be private (#51316) (#51665)
45e5562fcc : Beef up {jacobian, hessian} vectorize docs; eliminate a warning (#51638)
443a431ac3 : Revert D25074763: [WIP] Update foreach APIs to use scalar lists
d1bc1ab8ca : Revert D25502940: Refactor ForeachUnaryOps.cu
16cfe970e0 : Updates linalg documentation per feature review process (#51620)
1ee0c42d6d : move ZipDataset to Zip DataPipe (#51599)
34d4d79966 : Autograd doc note fix (#51661)
0d9ca21d74 : [Static Runtime] Native stack for contiguous inputs (#50863)
fe67438f32 : Replace AT_ASSERTM in ATen/core (#51579)
c60dacd4cf : Replace all AT_ASSERTM in ATen/native (#51147)
f38e1d2d60 : [quant][graphmode][fx] Enable inception_v3 and googlenet static quant test (#51402)
8e53bf010d : Use new TensorPipe functions to create channels (#51550)
56ef24bc0f : Use new TensorPipe functions to create transports (#51549)
47557b95ef : Removed typographical error from tech docs (#51286)
333a0c8b6f : Add support for generating faithful at::cpu signatures (#51499)
81c7c3bae5 : Add api.structured; switch structured kernels to use const Tensor& everywhere (#51490)
648cdb7d0a : Relax type signature for tools.codegen.api.translate (#51477)
43df03de13 : [Gradient Compression] Replace torch.sqrt(torch.sum(col ** 2)) by torch.norm() (#51629)
00675292ca : replace silufp16 with cubic interpolation (#51645)
cae4379826 : Enable FLOPS Computation for Experimental Kineto Profiler (#51503)
3361d365bd : [Gloo] Use TORCH_CHECK for ensuring tag is nonnegative (#51370)
a3f2fe0d52 : Prevent CUDAFuture from using uninitialized device index (#51505)
a651696ab4 : fix misspelling in swa_utils.pyi (#51608)
c639513378 : [TensorExpr] Resubmit: Introduce ExternalCall nodes to TE IR. (#51594)
18a7ec7d7d : Update the JIT complex type name to be consistent with Python (#51476)
896f82aa92 : [optim] make functional api be private (#51316)
550c965b2e : Re-enable test_standalone_load for Windows 11.1 (#51596)
727f163bea : caffe2 test.sh pip might not need sudo if pip is root (#50223)
5cf3278723 : Refactor ForeachUnaryOps.cu (#49248)
52de407b4b : [DataLoader] Rename Functional DataSet to DataPipe (#51488)
bea0519b0b : [WIP][DataLoader] Implement BucketBatchIterableDataset (#51126)
14ee63f7e6 : [WIP][DataLoader] Implement CallableIterableDataset (#50045)
c311b8961a : Revert D26113953: [pytorch][PR] [ZeroRedundancyOptimizer] Elastic and pytorch compatible checkpoints
75ee575671 : [Usability] Handle repeated jit.script calls on function gracefully (#51545)
7b556db69d : [PyTorch Mobile] Skip inferring function schema from the C++ function type (#50457)
62f6e55439 : Fix the missing parameter in get_sha function (#51290)
ab4623da16 : Document FX debugging (#51530)
f7313b3105 : Fix Python.h discovery logic on some MacOS platforms (#51586)
7360ce36e4 : [QNNPACK:Sparsity] Add A matrix pretransformed based sparse kernels for FC (#50587)
eb571b33fe : [QNNPACK Sparse] Create fc sparse operator (#50586)
520f96b8c7 : [QNNPACK] Block Sparse kernel. First commit. (#50585)
444203c52f : Fix torch.cdist backward CUDA error due to illegal gridDim setting (#51569)
b48ee75507 : Fix quantization doc issue (#50187)
b18eeaa80a : Implement `np.diff` for single order differences (#50569)
e54cbb8250 : Create PyTorch DDP logging APIs for applications to use (#50637)
26f9ac98e5 : Revert D26105797: [pytorch][PR] Exposing linear layer to fuser
5a402274d4 : [ROCm] add 4.0.1 to nightly builds (#51257)
b283ac6da4 : "whitelist" -> "allowlist" (#51375)
c791a30484 : Fix warnings in "ForeachOpsKernels" with c10::irange (#50783)
e488e3c443 : Exposing linear layer to fuser (#50856)
5499e839f1 : [Fuser] Do not attempt to use OpenMP if build without OpenMP support (#51504)
38eb836387 : [complex] Enable complex autograd and jit tests for `trace` (#51537)
209e27eaff : [FX] Add note about more use cases of FX (#51576)
37f1412965 : [Pytorch Mobile] Preserved all functions generated by bundled inputs (#51496)
cce84b5ca5 : [WIP] Update foreach APIs to use scalar lists (#48223)
506fdf9abf : [ROCm] disable tests for ROCm 4.0.1 (#51510)
bbe18e3527 : [ZeroRedundancyOptimizer] Elastic and pytorch compatible checkpoints (#50956)
a990ff7001 : [SobolEngine] Fix edge case of dtype of first sample (#51578)
4746b3d1fb : Added missing VSX dispatch for cholesky_inverse (#51562)
2565a33c98 : [Vulkan] Remove redundant qualifiers on writeonly images. (#51425)
0402df5427 : [Vulkan] Improve error handling in a few places. (#51423)
365986cfe0 : Add tensorboard_trace_handler for profiler (#50875)
cde7fa6e3c : update kineto submodule (#51566)
a38a648cb7 : Test if allocator is set only in DEBUG mode. (#51360)
0ff855efea : Make empty_cpu sanity test CPU only in DEBUG mode (#51358)
351ee1ece7 : Remove duplicate check for THPLayout in toSugaredValue (#51543)
ec378055c3 : add OneDNN linear backward (#49453)
4fdebdc0c9 : Improve PyTorch profiler flop computation formulas (#51377)
55a4aa79aa : [package] patch inspect.getfile to work with PackageImporter (#51568)
b6c6fb7252 : fix windows 11.1 test2 by disabling test (#51573)
751c30038f : [JIT] Properly convert Python strings implicitly to device (#51340)
74ec9e7ccf : compare_model_outputs_fx API implementation (#49266)
0118dec2e3 : [Pytorch] Expanded Bundled Inputs To Any Public Function (#51153)
6465793011 : Fix Dirichlet.arg_constraints event_dim (#51369)
a5b65ae40a : Fix small typo (#51542)
8f0968f899 : Fix: Bad autograd side effects from printing (#51364)
c39fb9771d : [complex] Enable complex autograd tests for `diag` (#51268)
43084d7aab : add type annotations to conv_fused/blas_compare/blas_compare_setup (#51235)
c6f37e50f2 : [doc] Add deprecation message to torch.slogdet in favor of torch.linalg.slogdet (#51354)
1caed167fb : [doc] Fix linalg.slogdet doc consistency issues (#51353)
c0d58bce0d : move Tar Dataset to Tar DataPipe (#51398)
a07a37e4fb : reenable BUILD_SPLIT_CUDA for windows and fixes Linux 11_1 tests (#51538)
4f37150f40 : Revert D26179083: [TensorExpr] Introduce ExternalCall nodes to TE IR.
41e4c55379 : Correct subgraph rewriter pattern containment rules (#51529)
8bb0dff7e2 : Write FX Subgraph Rewriter tutorial (#51531)
5c5db25cd5 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
79e7544cb4 : [Gradient Compression] Check start_PowerSGD_iter > 1 and add guidance on tuning PowerSGD configs. (#51427)
d555768e8f : [FX] Added invert example (#51478)
96a22123f4 : Automated submodule update: tensorpipe (#51469)
f4fc3e3920 : [TensorExpr] Introduce ExternalCall nodes to TE IR. (#51475)
b106250047 : Introduced AliasInfo for OpInfo (#50368)
7328710cbc : [PyTorch][codemod] Replace immediately-dereferenced cast calls w/castRaw (#50229)
f0006315a9 : Add support for complex valued keys for dict in TS (#51472)
9c474c97b7 : Disable BUILD_SPLIT_CUDA for now (#51533)
c354888e5d : compare_model_stub_fx API implementation (#48951)
d02ea9a141 : [ROCm] add hipMAGMA support (#51238)
5e09ec6518 : Fixed SVD ignoring "some/full_matrices" flag for empty inputs (#51109)
4b65a27a35 : [testing] Add OpInfo for round and logit (#51272)
205c971431 : [PyTorch] Remove always-empty string args to inferFunctionSchemaFromFunctor (#51307)
1416fb9877 : [PyTorch] IWYU in torch/csrc/utils/future.h (#51293)
a1c5eba4bd : [FX] Move some heavily used passes out of experimental (#51392)
a3353d1ec0 : [FX] Support ellipsis as arg (#51502)
88af2149e1 : Add build option to split torch_cuda library into torch_cuda_cu and torch_cuda_cpp (#49050)
87ad77eb4e : T66557700 Support default argument values of a method (#48863)
ec3aae8cdb : [JIT] Enable saving modules with hooks in FBCODE (#51241)
630ee57bc2 : [PyTorch] Provide overload of torchCheckFail taking `const char*` (#51389)
c77fc2ee06 : [nnc] Vectorize bitwise ops (#51492)
a23e82df10 : [nnc] Tweak log_nnc_sleef so vectorization kicks in (#51491)
5b0a6482c1 : Out variant for embedding_bag_4bit_rowwise_offsets (#51324)
b198cf4f1c : port `index_fill_` from TH to ATen. (#50578)
09bc58796e : Hashing logic for c10::complex (#51441)
8fa328f88e : [doc] Deprecate torch.cholesky in favor of torch.linalg.cholesky (#51460)
8583f7cbe2 : [doc] Fix linalg.cholesky doc consistency issues (#51459)
c08078031f : [Gradient Compression] Allow BatchedPowerSGD to run vanilla allreduce for the first K iterations (#51270)
718e4b110b : add git submodule troubleshoot to CONTRIBUTING.md (#51458)
109bc1047e : [NNC] Generate C++ code for Allocate and Free (#51070)
642afcb168 : Add sgn to torch.rst so that it appears in the built docs (#51479)
d1ddc5d65d : [PyTorch] Outline OperatorEntry::assertSignatureIsCorrect fail path (#51269)
9877777fee : [PyTorch] check isValidUnboxed() in the dispatcher (#51247)
4495b49ffa : [PyTorch] Pass TensorOptions by value (#51165)
341c76dcc1 : [PyTorch] Add C10_ALWAYS_INLINE to critical dispatcher paths (#51245)
673687e764 : [PyTorch] Refactor Dispatcher to inline less code in fast path (#51163)
ec611aca88 : [Pytorch Mobile] Expose _export_operator_list to python (#51312)
609f76f27a : [WIP][FX] Add Interpreter and Transformer (#50420)
0831984ed5 : [Resubmission][Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future (#51400)
6c24296795 : [PyTorch] Devirtualize TensorImpl::has_storage (#51049)
765062c085 : [PyTorch] Devirtualize TensorImpl::storage_offset (#51048)
50fa415a4d : [testing] Add OpInfo for ceil and floor (#51198)
449098c2d2 : [SobolEngine] Update direction numbers to 21201 dims (#49710)
b1907f5ebc : Fix pickling for Tensor subclasses (redo) (#47732)
508bab43e7 : Support complex number list in JIT (#51145)
40c0fffb4b : Fixes docs (#51439)
d1dcd5f287 : [fbgemm_gpu] Use the latest philox_cuda_state API for stochastic rounding (#51004)
0e1c5cb354 : fixing index clamping for upsample nearest kernel backward (#51240)
9cf62a4b5d : [1.8] Add additional tests for object-based APIs (#51341)
c255628134 : [Collective APIs] Make python object collective API args consistent (#50625)
721ba97eb6 : Create op benchmark for stack (#51263)
e26fccc22b : update profiler doc strings (#51395)
17b5683156 : Multi-GPU Kineto profiler test (#51391)
11cda929fb : [StaticRuntime] Fix bug in MemoryPlanner (#51342)
09e48dbd33 : Handle error during dict expansion (#51374)
7ab89f58be : expose memory_fraction and gpu_process docs (#51372)
7d30f67659 : remove LegacyDefinitions as it is empty now (#51251)
d5541c50a3 : add a c++ interface in processGroup to get its backend name (#51066)
662b6d2115 : [dist_optim] update the doc of DistributedOptimizer (#51314)
a88e1d3ddf : [complex] Complex support for masked_scatter and autograd support for masked_scatter and masked_select (#51281)
fe645fdfc7 : Update _torch_docs.py (#51212)
da920fa141 : Enable rocm tests in common nn (#51227)
52609c8c65 : .github: Up frequency of stale checks (#51365)
dbfaf966b0 : [android] turn on USE_VULKAN for android builds by default (#51291)
ebd2a82559 : Replace all AT_ASSERTM in RNN_miopen.cpp (#51072)
dfca1e48d3 : Replace all AT_ASSERTM under c10/ (except Exception.h) (#50843)
c41ca4ae5b : [doc]Fix autograd.detect_anomaly docs incorrectly formatted (#51335)
5021582fe6 : Fix benchmarks/distributed/ddp/benchmark.py (#51095)
1b089c1257 : Modernize for-loops (#50899)
edaa23c8ab : extend init_group_test timeout to 5s (#51330)
30675d0921 : Added OpInfo-based testing of triangular_solve (#50948)
1b479416b7 : Clarify logic in `ir_emitter` (#51299)
c0966914bc : Internal gradcheck wrapper in testing._internal that sets certain flags to True (#51133)
5a406c023e : Revert D26070147: [Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future
270111b7b6 : split quantization jit op (#51329)
3397919dcf : Rowwise Prune op (Add the test to OSS run_test), Make the op private. (#46131)
ebe26b81d2 : [PyTorch Mobile] Enable partial loading of GPU models on linux CPU machines (#51236)
534aabce14 : [nnc] Don't use sleef where it's slower (#51246)
0a9764ecc1 : [nnc] Expose vectorized math functions to jit fuser. (#51190)
d74a226daa : [nnc] Use sleef if its symbols are available (#51187)
0a065ebe86 : [nnc][trivial] Refactor llvm_jit so the wrapper class doesn't depend on ifdefs (#51186)
1114fd6b3a : [nnc] Refactor generation of intrinsics to reduce the amount of macro-hell (#51125)
43f0ccd1ec : torch.cuda.memory_allocated to return `{}` if not initialized (#51179)
916af892b3 : [quant][fx] Update name of packed weight attributes (#51259)
05c8cd748d : memory efficient per-channel fq: use it everywhere, delete old version (#51265)
267e243064 : fake_quant: more memory efficient per-channel backward (#51255)
f2e41257e4 : Back out "Revert D26077905: Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter"" (#51267)
e7b3496232 : [Gradient Compression] Refactor default_hooks.py and powerSGD_hook.py by creating a util function that make a vanilla allreduce future (#51094)
9d731e87de : [Gradient Compression] Explicitly specify the dtype of the error tensor (#50985)
b619d37bb4 : [Gradient Compression] Simplify the implementation of error feedback and warm-start (#50981)
00d4ec840e : clone pytorch.github.io with depth 1 (#48115)
8a8fac6681 : Remove debug-only assertion from vulkan::api::Command::Command as the buffer can legitimately be null. (#51160)
592a8ad1c8 : Define static constexpr variable in at::native::vulkan::api::Handle. (#51006)
5ed0ad4b6a : DataPipe naming convention update (#51262)
f9f22c8b5c : Add serialization logic for complex numbers (#51287)
6e4746c1ac : Port cholesky_inverse to ATen (#50269)
9f6e0de548 : Update third_party/build_bundled.py (#51161)
7097c0d4f3 : [quant][graphmode][fx] Add support for functional conv1d and conv3d (#51155) (#51254)
35990b5f56 : .github: Remove title from stale alert (#51306)
1379842f4a : Add private mechanism to toggle vmap fallback warnings (#51218)
f68e5f1dbf : .github: Update stale messaging add newlines (#51298)
b028653670 : Add missing -inf order for linalg.norm OpInfo (#51233)
8b27c2ccca : add mising VSX dispatches (#51217)
96cedefd8e : [Pipe] Refactor convert_to_balance under non-test package. (#50860)
cedfa4ccd8 : Make DeviceCachingAllocator's error handling more defensive and a bit easier to read (#51158)
33d5180684 : [fx] improve args mutation error (#51175)
4288f08d30 : Enable TensorPipe's CUDA GDR channel (#50763)
cc211bb43e : .github: Add workflow to stale pull requests (#51237)
c9cebaf9b8 : Enable TensorPipe's InfiniBand transport (#50761)
288b94a8ee : [quant][fx] Make scale, zero_point buffers in the model, use FQN (for quantize_per_tensor ops) (#51171)
4c3f59b70e : [quant][fx] Make scale, zero_point buffers in the model and use FQN (for quantized ops) (#51166)
096adf4b8b : [quant][fx] Scope support for call_function in QuantizationTracer (#51086)
b955da3310 : Adding correct error message for for..else (#51258)
7a8c64da4d : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
0e8e739a9f : Move AcceleratedGraphModule out of graph_manipulation. (#51220)
df07e1cea8 : Automated submodule update: tensorpipe (#51203)
392abde8e6 : patch nvrtc API for cuda TK >= 11.1 (#50319)
9fe7c0633f : Add centered FFT example to fftshift docs (#51223)
d035d56bfb : [StaticRuntime] Add out variant for reshape and flatten (#51249)
16132a4b1d : Make sure ConstantPadNd op preserves memory format (#50898)
52ab858f07 : STFT: Improve error message when window is on wrong device (#51128)
83287a6f2b : [pytorch] change codegen dispatch key from string to enum (#51115)
773c71cb3a : [aten] Fix type check bug in bmm_out_or_baddbmm_ (#51248)
88baf470d1 : [JIT] Provide more info when attribute fails to convert (#50870)
12a434abbc : Revert D26077905: Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter"
dfdb1547b9 : Revert D26094906: Add serialization logic for complex numbers
0335222a4a : memory efficient fq: use it everywhere, delete the old version (#51159)
983b8e6b62 : fake_quant: add a more memory efficient version (#50561)
d14d8c7f7f : Add convenience import (#51195)
ea0d304e2e : Rewrite "ProfilerStep#<num>" in profiler output (#51194)
4fb33f1d3a : Trim profiler file paths (#51192)
e2eb97dd76 : [ONNX] Fix param names (#50764) (#50955)
84e9bff85d : [ONNX] Replace optional parameters of Resize with placeholder for ops13. (#50574) (#50954)
68034197e8 : [ONNX] Support gelu for fp16 export (#50487) (#50911)
70dcfe2991 : [ONNX] Enable _jit_pass_onnx_fold_if only when dynamic_axes is None (#50582) (#50910)
e90a480d40 : [ONNX] Add logical_and, logical_or, logical_xor torch op support in pytorch exporter (#50570) (#50909)
b308fb78d1 : [ONNX] Add binary_cross_entropy_with_logits op to ONNX opset version 12 (#49675) (#50908)
1723ab53c4 : [ONNX] Update Reducesum operator for opset 13 (#50532) (#50907)
7e4c956955 : [ONNX] Support opset13 Squeeze and Unsqueeze (#50150) (#50906)
1c9347c666 : [ONNX] Use parameter values in onnx shape inference (#49706) (#50905)
dc2a44c4fc : Back out "Revert D25850783: Add torch::deploy, an embedded torch-python interpreter" (#51124)
e975169426 : [TensorExpr] Redesign `Tensor` class. (#50995)
b804084428 : [TensorExpr] Move 'lowerToStmt' method from 'LoopNest' to 'Tensor'. (#50994)
42aeb68128 : [TensorExpr] Move 'initializer' field from 'Tensor' to 'Buf'. (#50993)
3f23ad5bce : [Bug] fix for module_has_exports (#50680)
1321f2bfe6 : [PyTorch] Port Caffe2 opti for BatchMatMul batch size 1 to baddbmm (#51057)
98d9a6317d : Rename profile.next_step() to profile.step() to be consistent with optimizer.step() (#51032)
621198978a : Move USE_NUMPY to more appropriate targets (#51143)
2de4ecd4eb : Add serialization logic for complex numbers (#50885)
3b6f30824c : OpInfo JIT op.output_func handling support (#50775)
eaf5ca09dc : Migrate masked_scatter_ CUDA to ATen (#50039)
1c8d11c9e2 : [PyTorch] Save a refcount bump in make_variable (#51180)
f7e90cf311 : Revert D26089965: [quant][graphmode][fx] Add support for functional conv1d and conv3d
40eea6d9d1 : Support device map for distributed autograd while using TensorPipe. (#44859)
6d098095eb : [numpy] torch.lgamma: promote integer inputs to float (#50140)
dd1a97b3ae : [quant][graphmode][fx] Add support for functional conv1d and conv3d (#51155)
1b7a4f9cde : .github: Add GitHub Actions workflow to build wheels (#50633)
b77f72b5a0 : Enable TensorPipe's SHM transport (#50760)
d3ec204ef2 : [quant][graphmode][fx] Add functional conv2d + relu (#51079)
00adc7b07f : Fix more JIT tests under Python-3.9 (#51182)
9b6d463704 : Move std and var tests to OpInfos (#50901)
e9ffad088f : numeric suite: add types to eager (#51168)
16dd5ca8ab : Followup of kron PR (#51045)
4a2aa0f5f1 : index_put_ for complex tensors on CUDA (#51148)
0b5303e833 : Propagate CreationMeta when chaining views (#51061)
5ec2e26310 : DOC, BLD: make the python docs build failures print a nicer message (#50356)
22ac4f3c59 : Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian} (#50915)
fd9a85d21b : Doc update for complex numbers (#51129)
ada916675f : update HistogramObserver to be scriptable (#51081)
0a4bc72890 : [ROCm] work around compiler issue for IGammaKernel.cu (#50970)
b60494000b : DOC: update left navbar links for vision and text (#51103)
7b85adf20f : Add back pycuda.autoinit to test_pt_onnx_trt (#51106)
1935880860 : [PyTorch] Remove unnecessary dispatcher.h include in torch/library.h (#51162)
42929e573a : add missing return statement to inlined vec_signed (#51116)
ba316a7612 : Fix TF32 failures in test_linalg.py (#50453)
b6eaca9f1f : Add type annotation logic for complex numbers (#50884)
e2041ce354 : Fix docstring to clarify logits usage for multiclass case (#51053)
221d7d99e1 : [torch vitals] move into namespace and fix windows tests
3cc14a0dff : [p2c2] Add support for Int8FCPackWeight in model transformation
345844d9d8 : test, fix deepcopy of tensor with grad (#50663)
97ea95ddd7 : Delete tabs from bench_approx.cpp (#51157)
57484103be : Revert D25675618: Move AcceleratedGraphModule out of graph_manipulation.
24eab1d80d : BLD: create a LICENSE_BUNDLED.txt file from third_party licenses (#50745)
c4029444d1 : [nnc] Per-operator benchmarks (#51093)
f08464f31d : [nnc] Add benchmarks
6f3aa58d80 : Fix autograd thread crash with python-3.9 (#50998)
069602e028 : [torch vitals] Initial implementation (#51047)
83bfab2fb6 : toTensor cleanup on sparsenn & static runtime ops (#51113)
a949d7b1c8 : Workaround Python3.9 limitations in test_jit_py3 (#51088)
c8a24ebe54 : Move AcceleratedGraphModule out of graph_manipulation.
81ae8edf16 : Revert D26018916: [pytorch][PR] Automated submodule update: tensorpipe
afa79a4df5 : [quant][graphmode][fx] cleanup linear module test case (#50976)
b822aba8ec : Enable BFloat support for gemms on arch other than ampere (#50442)
3562ca2da2 : [dist_optim] add warning to distributed optimizer (#50630)
6dda0363bb : [reland] Refactor mypy configs list into editor-friendly wrapper (#50826)
31194750f2 : [jit] Fix ResolutionCallback definition (#51089)
5834b3b204 : Fix test_jit_cuda_archflags on machine with more than one arch (#50405)
5f297cc665 : Automated submodule update: tensorpipe (#50946)
95ae9a20e4 : Enable ROCM Skipped tests in test_ops.py (#50500)
233e4ebdb6 : Implement autograd functions for c10d communication operations (#40762)
83315965ab : Turn on batched grad testing for CriterionTest (#50744)
e843974a6e : Revert D25850783: Add torch::deploy, an embedded torch-python interpreter
a51b9a823c : Improve docs around Math/DefaultBackend & add PythonDispatcher class. (#50854)
9f19843d19 : [Gradient Compression] Typo fixes in PowerSGD (#50974)
ffaae32d60 : [Gradient Compression] Allow PowerSGD to run vanilla allreduce for the first K iterations (#50973)
880f007480 : Add torch.eig complex forward (CPU, CUDA) (#49168)
502ca0105d : Added cuda bindings for NNC (#51046)
6ef66213ee : [PT QNNPACK] Temporarily disable input pointer caching (#51051)
5adbace8e6 : Abort node in fast_nvcc if ancestor fails (#51043)
a347c747df : Fix TransformedDistribution shaping logic (#50581)
250c71121b : Create a DDPLoggingData and expose it to python interface (#50622)
3192f9e4fe : Add torch::deploy, an embedded torch-python interpreter (#50458)
ddf26816d3 : Make torch.svd return V, not V.conj() for complex inputs (#51012)
f8eefbdf7a : fake_quant: fix device affinity and buffer resizing for state_dict (#50868)
68c218547c : Add documentation page for pipeline parallelism. (#50791)
a7cf04ec40 : Workaround for MAGMA accessing illegal memory in batched cholesky (#50957)
9dfbfe9fca : Add type annotations to torch.overrides (#50824)
75cba9d0d1 : More about cudnn refactor (#50827)
28869d5a80 : [quant][graphmode][fx] Add support for quantizing functional linear + {functional relu/module relu} (#50975)
95a0a1a18f : Update docstring on return type of `jvp` and `vjp` (#51035)
09b896261c : Skip test_lc_1d for ROCM (#50964)
ac0a3cc5fd : Merge CompilationUnit from torch._C and torch.jit (#50614)
5e79b8e06d : Back out "Revert D25903846: [pytorch][PR] Structured kernel definition for upsample_nearest2d" (#50794)
f7b339d11c : Clarify wording around overrides subclasses. (#51031)
a6257b2fe2 : Fix #48903 (#50817)
806010b75e : [BE] move more unittest.main() to run_tests() (#50923)
8690819618 : OpInfo: Add DecorateInfo class similar to SkipInfo for decorators (#50501)
5a5bca8ef0 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
627a331257 : Port CPU torch.orgqr to ATen (#50502)
48b6b9221a : [BE] Make Vec256 header only library (#50708)
186c3da037 : Add cusolver gesvdj and gesvdjBatched to the backend of torch.svd (#48436)
1f40f2a172 : Add improved support for parallelization and related graph opts (#5257)
c9cae1446f : fix unflatten_dense_tensor when there is empty tensor inside (#50321)
e544d74c55 : [CPU] Add torch.trace for complex tensors (#50380)
2c3c2a4b7a : [dist_optim] add distributed functional AdamW optimizer (#50620)
3f982e56b1 : [dist_optim] add distributed functional RMSprop optimizer (#50619)
6c81b4d917 : [dist_optim] add distributed functional Adadelta optimizer (#50623)
cd2067539e : [dist_optim] add distributed functional sgd optimizer (#50618)
5cbe1e4933 : [dist_optim] add distributed functional Adam optimizer (#50624)
5a661e0171 : [WIP][Grad Compression] Unittest to verify allreduce_hook parity (#50851)
6aec1eba15 : [aten] Make aten::flatten call native::reshape (#50859)
069e68a2a4 : Fix ScriptModule docstring (#48608)
ce0f335515 : [PyTorch Mobile] Add an overload for deserialize() that doesn't accept the extra_files map. (#50932)
ab331da7ac : Rewrite kron with broadcasting at::mul (#50927)
789f6f1250 : [FX] Minor docs changes (#50966)
5c1c858ca8 : Revert D25977352: [pytorch][PR] Refactor mypy configs list into editor-friendly wrapper
ffc8a26991 : philox_engine_inputs should also round increment to a multiple of 4 (#50916)
63838b9330 : Turn on batched_grad testing for NewModuleTest (#50740)
de8cd6b201 : [BE] Replace M_PI with c10::pi constexpr variable (#50819)
a66851a2ad : [FX] torch.fx.symbolic_trace patching improvements and `math.*` support (#50793)
dd1c2a06b7 : refactor profiling optional (#47667)
f0e72e54cc : Fix CUDA RPC Stream Synchronization (#50949)
78f30386c5 : Implement Swish(SiLU) operator in FP16
ca3ce77746 : Dump torch::jit::AliasDb objects as Graphviz files (#50452)
73dffc8452 : Refactor mypy configs list into editor-friendly wrapper (#50826)
7e10fbfb71 : Add note about TCP init in RPC tests to contributing doc. (#50861)
2ab497012f : Add at::cpu namespace of functions for structured kernels (#49505)
7b12893155 : [BE] .gitignore adding test-reports/ folder (#50952)
a291b254ee : Migrate masked_scatter_ CPU to ATen (#49732)
db079a9877 : Padding: support complex dtypes (#50594)
c908ebd4a1 : [android] fix yuv conversion - remove define (#50951)
8ab1a1495d : Rename `set_deterministic` to `use_deterministic_algorithms` (#49904)
7cb4712b38 : count_nonzero with requires grad (#50866)
d5dc65a45c : Document example of Proxy use (#50583)
89cafde8a4 : Modernize for-loops (#50912)
156da22566 : [PyTorch] Eliminate static default_extra_files_mobile from header import.h (#50832)
d60d108280 : [nnc] Expose fast tanh/sigmoid (#50736)
47f0bda3ef : Improve complex support in common_nn test machinery (#50593)
9ac30d96aa : Add complex IValues (#50883)
002d978428 : Sparse benchmarking utils (#48397)
0436ea125b : OpInfo: Remove promotes_integers_to_float and infer it instead (#50279)
4bbff92014 : Refactor build targets for torch::deploy (#50288)
5f07b53ec2 : [TensorExpr] Add LoopNest::simplify. (#50850)
2ba2ab9e46 : [packaging] add support for BytesIO (#50838)
c7d348fea6 : Turn on batched grad testing for non-autogenerated tests in test_nn.py (#50739)
b2e5617553 : [ROCm] rename HIP_HCC_FLAGS to HIP_CLANG_FLAGS (#50917)
8eb90d4865 : Add Gaussian NLL Loss (#50886)
e34992ebee : Set USE_KINETO=1 (#49897)
7494f0233a : snake_case FX IR names (#50876)
7f22af13b9 : Add alternative prettyprinting method to `Graph` (#50878)
d33cc4c01b : Use quiet_NaN() in calc_digamma, not NAN (#50412)
bb909d27d5 : [PyTorch Mobile] Eliminate static default_extra_files_mobile from header import.h (#50795)
d46210958e : Remove `use_c10_dispatcher: full` lines added in the last couple days (#50769)
57fb2c0fcc : [PPC] Add missing vec_[signed|neg|sldw] definitions (#50640)
533cb9530e : Introducing TORCH_CUDA_CPP_API and TORCH_CUDA_CU_API to the code (#50627)
3aed177484 : [PyTorch] inline Dispatcher::singleton (#50644)
21c2542b6a : Independent constraint (#50547)
5016637955 : [FX] Update overview docstring (#50896)
eb0fe70680 : [distributed_test]Enable disabled ROCm tests. (#50421)
aa3c28a29e : [static runtime] Shortcut resize_({0})
8e9ed27a53 : install magma for cuda 11.2 in conda (#50559)
137f2a385a : [ONNX] Handle sequence output for models (#50599)
c082e2184d : Add autograd tests for complex matrix norm nuclear and +/-2 (#50746)
201f0c1fdf : Automated submodule update: tensorpipe (#50895)
3cd8ed972a : add and adjust kernel launch checks under fbcode/caffe2/caffe2/utils (#50862)
16691516a5 : Add batched grad testing to OpInfo (#50818)
1cce4c5eee : Update Kineto revision (#50855)
884fb48794 : Miscellaneous batched grad testing (#50738)
8ede828df7 : [te] Speed up relu on cpu
98e2914614 : [android] Fix YUV camera image to tensor (#50871)
b5242d66b6 : [quant][doc] Adding a table comparing eager and fx graph mode (#50413)
4d169258ef : Revert D25976245: [pytorch][PR] Enable Skipped ROCM Tests in common_nn.py
4cca08368b : Adds per-op microbenchmarks for NNC (#50845)
4ac489091a : Improve call provenance during GraphModule scripting (#50538)
df96344968 : [optimizer] refactor AdamW to use functional API (#50411)
ce1781d8db : [optimizer] refactor RMSProp to use functional API (#50410)
d6fb27ce72 : [optimizer] refactor Adadelta to use functional API (#50409)
a0cf5566d8 : [optimizer] refactor SGD to use functional API (#45597)
b96a6516a6 : Add CPP Full Reduction Benchmarks. (#50193)
88b36230f5 : Add full reduction benchmark. (#50057)
24a0272132 : Enable Skipped ROCM Tests in common_nn.py (#50753)
480bb7d356 : Automated submodule update: tensorpipe (#50807)
439afda090 : [Gradient Compression] Fix warm-start for PowerSGD layerwise compression (#50283)
d0e942f9a7 : [FX][docs] Add limitations of symbolic tracing (#50638)
c88eed97c7 : Make `split_module` results deterministic (#50470)
4954417163 : CONTRIBUTING.md: add instructions on how to remote desktop into Windows CI (#50841)
c945a5bb5e : fix typo of quantized README.md (#50681)
7fdc6a27b8 : Skip test_variant_consistency_eager_addr_cpu_bfloat16 (#50836)
c147aa306c : Use doctest directly to get docstring examples (#50596)
1bde5a216f : [TensorExpr] Use wider type for scalars (#50774)
24fd84313f : [pytorch] fix ConstRefCType usage in codegen/api/native.py (#50742)
44922f26f5 : Add support for NCCL alltoall (#44374)
87fb3707d9 : ZeroRedundancyOptimizer: an implementation of a standalone sharded optimizer wrapper (#46750)
c3e3e60657 : Add cloud-tpu-client to xla CI. (#50823)
be7e9845a1 : Remove gtest_prod.h from TP agent. (#50766)
ac8e90fa6d : quantization: Linear + BatchNorm1d fusion (#50748)
db86dd8ad7 : Fix replication_pad for cuda launch configuration (#50565)
cf1882adeb : Fix indexing for overrides. (#49324)
16faabe7f0 : [ROCm] re-enable tests (#50691)
fbf7eec86d : Update JIT_OPT macro for easier use (#50602)
112a583467 : Enable TensorPipe's CMA channel (#50759)
c18403a693 : [metal] Use MPSCNN kernels for binary elementwise ops
1e0809dbf9 : [PyTorch] Remove CAFFE2_FB_LIMITED_MOBILE_CAPABILITY (#50385)
4f3cdd971c : Fix test_dispatch.py when running with TORCH_SHOW_CPP_STACKTRACES=1 (#50509)
f1c578594b : JIT Testing: Improve assertAutodiffNode error message (#50626)
1cc8f8a750 : Add complex autograd support and OpInfo based test for torch.addr (#50667)
66adfcd258 : tools: Move sha check to else statement (#50773)
e1bb476980 : Issue #48724. Only set the CMake IMPORTED_LOCATION property in static… (#49173)
22902b9242 : [WIP] JIT Static Hooks: cpp tests (#49547)
3b88e1b0e7 : [WIP] JIT Static Hooks: python tests (#49546)
0eb41e67fe : [WIP] JIT Static Hooks: serialization logic (#49545)
9c49457233 : [WIP] JIT Static Hooks: schema checking logic (#49975)
a722d28ef0 : [WIP] JIT Static Hooks: adding hooks to class type and adding logic for hook running/compilation (#49544)
1f5c3b3aae : Revert D25958987: [pytorch][PR] Add type annotations to torch.overrides
4a8ef4525e : Add new backend type for Intel heterogeneous computation platform. (#49786)
a3b8cbcdfc : Let TensorPipe detect peer access (#50676)
4803eaf502 : Implement NumPy-like function torch.fmax() & torch.fmin() (#49312)
2ace4fc01e : Add type annotations to torch.overrides (#48493)
4aea007351 : [JIT] Fix archive file extension in examples and docs (#50649)
e00966501b : [quant] Add non-fbgemm fallback implementation for embedding lookup ops (#50706)
5205cc1c62 : [FX] Fix NoneType annotation in generated code (#50777)
8f5ad00e13 : [JIT] Print out CU address in `ClassType::repr_str()` (#50194)
dea9af5c06 : Cat benchmark: use mobile feed tensor shapes and torch.cat out-variant (#50778)
06c734d8c7 : Generalize `sum_intlist` and `prod_intlist`, clean up dimensionality functions (#50495)
47c57b8836 : Fix Native signature for optional Tensor arguments (#50767)
cebab83d3f : Fix USE_MKLDNN defaults (#50782)
4ff1823fac : Add Sparse support for torch.sqrt (#50088)
38c45bdd2d : [FX] Fix tracing a free function with embedded constant (#50639)
7526e38cd3 : Revert "Stable sort for CPU (#50052)" (#50752)
08c90d9e55 : Automated submodule update: tensorpipe (#50765)
327539ca79 : Fix bug in hipify if include_dirs is not specified in setup.py (#50703)
526659db20 : whitelist ops we can build shapes for (#49125)
4816bf62d6 : Fix nvcc function signature causing assert in TypeIndex.h (#49778)
a9e46f1413 : add type annotations to torch.nn.modules.container (#48969)
a1b1d0cdc0 : Better split of the windows test jobs (#50660)
ebd142e94b : initial commit to enable fast_nvcc (#49773)
f7b2b22b64 : Remove instance of blacklist (#50478)
0c9fb4aff0 : Disable tracer warning for slicing indices. (#50414)
3344f06130 : [FX] Fix using fx.wrap as a decorator (#50677)
05036564cf : Remove workaround for TensorPipe failing to get device of CUDA ptr (#50580)
5f33f22324 : Fix caffe2 import tools.codegen (#50353)
a9deaf3659 : Shouldn't need user local install for ROCm build (#50299)
cad4753115 : Update TensorPipe submodule (#50733)
4511f2cc9d : Clean up complex autograd test list (#50615)
937eff5853 : Consolidate mypy tests and args (#50631)
1a38fa9930 : Striding for lists Part 1 (#48719)
1154a8594e : Add instructional error message for cudnn RNN double backward workaround (#33884)
5d64658ce8 : Add complex support for `torch.{acosh, asinh, atanh}` (#50387)
1000403f66 : Adding missing decorator for test_device_map_gpu_mixed_self_4 (#50732)
f9a5ba7398 : Added linalg.slogdet (#49194)
f7a8bfd0a1 : Add batched grad testing to gradcheck, turn it on in test_autograd (#50592)
316f0b89c3 : [testing] Port `torch.{repeat, tile}` tests to use OpInfo machinery (#50199)
5f13cc861c : Automated submodule update: tensorpipe (#50684)
c458558334 : kill `multinomial_alias_setup/draw` (#50489)
5252e9857a : [pytorch] clean up unused util srcs under tools/autograd (#50611)
b75cdceb44 : [package] Properly demangle all accesses of `__name__` in importer.py (#50711)
d5e5c5455a : [ROCm] re-enable test_sparse.py tests (#50557)
e9b369c25f : Add SELU Activation to calculate_gain (#50664)
ce30dba36f : Enable TensorPipe CUDA fallback channel (#50675)
94d9a7e8ac : Enable TensorPipe CUDA sending to self (#50674)
8b501dfd98 : Fix memory leak in TensorPipeAgent. (#50564)
f32b10e564 : [BE] Fix the broken test caffe2/caffe2/python:lazy_dyndep_test - test_allcompare (#50696)
d140ca8b69 : Optimize implementation of torch.pow (#46830)
227acc2e51 : Complex autograd support for torch.{baddbmm, addbmm, addmm, addmv} (#50632)
7f3a407225 : Multi label margin loss (#50007)
eae1b40400 : Introduced operator variant to OpInfo (#50370)
3f052ba07b : Remove unnecessary dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops (#50465)
1fdc35da2c : [BE] Fix the broken test -- caffe2/caffe2/python:hypothesis_test - test_recurrent (#50668)
534c82153e : fix bn channels_last contiguity check (#50659)
7e05d07ca7 : [distributed_test_c10d]Enable disabled ROCm tests. (#50629)
2001f3a2c9 : Finished fleshing out the tensor expr bindings in expr.cpp (#50643)
a469336292 : Fix pytorch-doc build (#50651)
da5d4396c5 : remove duplicate newlines (#50648)
0ea1abe07b : [PyTorch] Add missing Dispatcher.h include in quantized_ops.cpp (#50646)
c99f356051 : Stable sort for CPU (#50052)
3df5f9c3b2 : Revert D25843351: [pytorch][PR] Clarify, make consistent, and test the behavior of logspace when dtype is integral
0291f35b37 : [FX] Make len traceable and scriptable with wrap (#50184)
585ee119cf : Updated codecov config settings (#50601)
b832604ffb : Fix caffe2 for llvm trunk
2569dc71e1 : Reapply D25859132: [te] Optimize allocation of kernel outputs (#50546)
8e60bf9034 : add RequiresGradCheck (#50392)
6e3e57095c : Add complex support for torch.nn.L1Loss (#49912)
d64184ef4c : [RPC] Support timeout for RRef proxy functions (#50499)
ab1ba8f433 : [RPC] Support timeout in rref._get_type() (#50498)
c78e7db7ee : [PyTorch] Remove unnecessary dispatcher.h include in mobile/interpreter.h (#50316)
60a1831e61 : [PyTorch] Remove unnecessary dispatcher.h include in op_registration.h (#50315)
687f6a513a : [PyTorch] Remove unnecessary dispatcher.h include in builtin_function.h (#50314)
0ae0fac1bb : Clarify, make consistent, and test the behavior of logspace when dtype is integral (#47647)
8e7402441d : Move irange to c10 (#46414)
296e4a0b7f : .circleci: Set +u for all conda install commands (#50505)
0d981eea6c : add type annotations to torch.nn.modules.conv (#49564)
00d432a1ed : Remove optional for veiw_fn during View Tracking (#50067)
070a30b265 : [BE] add warning message to cmake against env var "-std=c++xx" (#50491)
a9db2f8e7a : Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference
366b00ab7b : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
ffefa44e20 : Automated submodule update: tensorpipe (#50572)
d9f71b5868 : [WIP][FX] new sections in docs (#50562)
6882f9cc1c : [FX] Add wrap() docstring to docs and add decorator example (#50555)
adc65e7c8d : [ONNX] Handle sequence output shape and type inference (#46542)
e9dc8fc162 : [TensorExpr] Add python bindings. (#49698)
9efe15313a : Revert D25563542: Add batched grad testing to gradcheck, turn it on in test_autograd
be51de4047 : Minor doc improvement(?) on ArrayRef::slice (#50541)
4de9d04f03 : [TensorExpr] Hook Fuser Pass to JIT opt-limit utility. (#50518)
08baffa8aa : Drop blacklist from glow (#50480)
2ceaec704d : Fix warnings in TensorShape (#50486)
1908f56b3a : Fix warnings in "ForeachOpsKernels" (#50482)
171f265d80 : Back out "Revert D25717510: Clean up some type annotations in benchmarks/fastrnns" (#50556)
51157e802f : Use separate mypy caches for TestTypeHints cases (#50539)
468c99fba4 : Reapply D25856891: [te] Benchmark comparing fused overhead to unfused (#50543)
30e45bb133 : Enable GPU-to-GPU comm in TensorPipeAgent (#44418)
554a1a70c7 : [quant] update embedding module to not store qweight (#50418)
3dcf126c31 : Validate args in HalfCauchy and HalfNormal (#50492)
7fb935806d : enable CPU tests back (#50490)
1ea39094a8 : Link to mypy wiki page from CONTRIBUTING.md (#50540)
e05882d2a4 : Back out "reuse constant from jit" (#50521)
0be1a24b48 : Drop unused imports from caffe2/quantization (#50493)
ef6be0ec50 : Revert D25903846: [pytorch][PR] Structured kernel definition for upsample_nearest2d
443412e682 : Add batched grad testing to gradcheck, turn it on in test_autograd (#49120)
0abe7f5ef6 : [BE] fix subprocess wrapped test cases reported as failure (#50515)
d2c3733ca1 : Reorder torch.distributed.rpc.init_rpc docstring arguments (#50419)
2639f1d4a6 : Revert D25717510: Clean up some type annotations in benchmarks/fastrnns
934805bc49 : cleaned up ModuleAttributeError (#50298)
4ee631cdf0 : Revert D25856891: [te] Benchmark comparing fused overhead to unfused
269193f5f5 : Revert D25859132: [te] Optimize allocation of kernel outputs
19a8e68d8c : Structured kernel definition for upsample_nearest2d (#50189)
fc9f013cea : HalfCauchy should ValueError if _validate_args (#50403)
52ea372fcb : [tools] Update clang-format linux hash (#50520)
5ea9584400 : Assemble technical overview of FX (#50291)
a3f9cf9497 : Fix fastrnn benchmark regression introduced by 49946 (#50517)
0b49778666 : [package] mangle imported module names (#50049)
4a0d17ba2d : [PyTorch][codemod] Replace immediately-dereferenced expect calls w/expectRef (#50228)
c6cb632c63 : [PyTorch] Make SROpFunctor a raw function pointer (#50395)
50256710a0 : [PyTorch] Make TensorImpl::empty_tensor_restride non-virtual (#50301)
9ebea77299 : [PyTorch] Reapply D25687465: Devirtualize TensorImpl::dim() with macro (#50290)
21542b43a8 : [FX] Update docstring code/graph printout (#50396)
08b6b78c51 : [FX] Make FX stability warning reference beta (#50394)
aeefe2ce31 : [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
30a8ba93b1 : Remove a blacklist reference (#50477)
7426878981 : Exclude test/generated_type_hints_smoketest.py from flake8 (#50497)
b89827b73f : Drop unused imports (#49972)
62f676f543 : [te] Optimize allocation of kernel outputs (#50318)
36ae3feb22 : [te] Benchmark comparing fused overhead to unfused (#50305)
48318eba40 : Fix TestOpInfoCUDA.test_unsupported_dtypes_addmm_cuda_bfloat16 on ampere (#50440)
d2e96fcf17 : Update loss module doc (#48596)
fc5db4265b : [BE] replace unittest.main with run_tests (#50451)
a4383a69d4 : Clean up some type annotations in caffe2/test (#49943)
7d0eecc666 : Clean up some type annotations in benchmarks/fastrnns (#49946)
05542f6222 : EMA op (#50393)
4a2d3d1cfd : MAINT: char class regex simplify (#50294)
664126bab5 : Enables build with oneDNN (MKL-DNN) on AArch64 (#50400)
deba3bd1d0 : Fix TORCH_LIBRARIES variables when do static build (#49458)
2a603145d7 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
4a3a37886c : Fix fft slow tests (#50435)
057be23168 : [doc] Add note about `torch.flip` returning new tensor and not view. (#50041)
b54240d200 : [PyTorch] Gate tls_local_dispatch_key_set inlining off for Android (#50450)
ca5d9617ba : Fix remainder type promotion (#48668)
a0f7b18391 : Fix `fmod` type promotion (#48278)
dea529a779 : Add torch.cuda.can_device_access_peer (#50446)
4e248eb3f6 : Change watchdog timeout logging from INFO to ERROR. (#50455)
4e76616719 : [StaticRuntime][ATen] Add out variant for narrow_copy (#49502)
49896c48e0 : Caffe2 Concat operator benchmark (#50449)
af968cd672 : [Pytorch Mobile] Remove caching (in code) of interned strings (#50390)
8c25b9701b : Type annotations in test/jit (#50293)
4c97ef8d77 : Create subgraph rewriter (#49540)
374951d102 : Add type annotations to torch.nn.modules.padding (#49494)
cb37709bee : [te] Create TargetMachine only once with correct options to fix perf (#50406)
7d28f1c81d : [quant][refactor] Minor refactor of some typos (#50304)
39aac65430 : [quant][bug] Fixing the mapping getter to return a copy (#50297)
412e3f46e9 : Automated submodule update: tensorpipe (#50441)
50744cd0f7 : [package] better error message when unpickling a mocked obj (#50159)
6d947067c9 : fixing autodiff to support Optional[Tensor] on inputs (#49430)
c198e6c6fa : Stop moving scalars to GPU for one computation in leaky_rrelu_backward. (#50115)
cf45d65f1c : Clean up some type annotations in test/jit/...../test_class_type.py (#50156)
725640ed84 : Check CUDA kernel launches in caffe2/caffe2/utils/math (#50238)
5cdc32bf1c : [vmap] Add batching rules for comparisons ops (#50364)
b2f7ff7d29 : Fix MultiheadAttention docstring latex (#50430)
a389b30bfc : Add Post Freezing Optimizations, turn on by default in torch.jit.freeze (#50222)
30aeed7c2b : Peephole Optimize out conv(x).dim(), which prevents BN fusion (#50221)
a69f008cb7 : [JIT] Factor out peephole to own test file (#50220)
6971149326 : [JIT] Add Frozen Conv-> Add/Sub/Mul/Div fusion (#50075)
035229c945 : [JIT] Frozen Graph Conv-BN fusion (#50074)
b5d3826950 : [PyTorch] Devirtualize TensorImpl::sizes() with macro (#50176)
158c98ae49 : Add new patterns for ConcatAddMulReplaceNaNClip (#50249)
5834438090 : Enable fast pass tensor_fill for single element complex tensors (#50383)
6420071b43 : Disable complex dispatch on min/max functions (#50347)
4411b5ac57 : add type annotations to torch.nn.modules.normalization (#49035)
9384d31af5 : Added linalg.pinv (#48399)
314351d0ef : Fix Error with torch.flip() for cuda tensors when dims=() (#50325)
5546a12fe3 : remove redundant tests from tensor_op_tests (#50096)
53473985b8 : test_ops: Only run complex gradcheck when complex is supported (#49018)
d25c673dfc : Cleanup unnecessary SpectralFuncInfo logic (#48712)
fb73cc4dc4 : Migrate some torch.fft tests to use OpInfos (#48428)
4da9ceb743 : [doc] fix doc formatting for `torch.randperm` and `torch.repeat_interleave` (#50254)
78e71ce627 : warn user once for possible unnecessary find_unused_params (#50133)
8c5b0247a5 : Fix PyTorch NEON compilation with gcc-7 (#50389)
c3b4b20627 : [PyTorch] List::operator[] can return const ref for Tensor & string (#50083)
4fed585dfa : [MacOS] Add unit tests for Metal ops (#50312)
bee6b0be58 : Fix warning when running scripts/build_ios.sh (#49457)
72c1d9df75 : Minor Fix: Double ";" typo in transformerlayer.h (#50300)
09f4844c1f : Pytorch Distributed RPC Reinforcement Learning Benchmark (Throughput and Latency) (#46901)
2193544024 : [GPU] Clean up the operator tests (#50311)
a72c6fd6e0 : [GPU] Fix the broken strides value for 2d transpose (#50310)
5f8e1a1da9 : add type annotations to torch.nn.modules.module (#49045)
f39f258dfd : Ensure DDP + Pipe works with find_unused_parameters. (#49908)
b001c4cc32 : Stop using an unnecessary scalar_to_tensor(..., device) call. (#50114)
ba83aea5ee : [GPU] Calculate strides for metal tensors (#50309)
9a3305fdd5 : Automated submodule update: tensorpipe (#50369)
bb97503a26 : [fix] Indexing.cu: Move call to C10_CUDA_KERNEL_LAUNCH_CHECK to make it reachable (#49283)
d76176cc1f : Raise warning during validation when arg_constraints not defined (#50302)
e160362837 : Add range assert in autograd engine queue lookup (#50372)
7efc212f1f : Add link to tutorial in Timer doc (#50374)
fd0927035e : .circleci: Remove CUDA 9.2 binary build jobs (#50388)
a48640af92 : [JIT] Update clang-format hashes (#50399)
4d3c12d37c : [JIT] Print better error when class attribute IValue conversion fails (#50255)
080a097935 : Add docstring for Proxy (#50145)
3d263d1928 : Update op replacement tutorial (#50377)
ec51b67282 : Fix elu backward operation for negative alpha (#49272)
559e2d8816 : Implement optimization bisect (#49031)
55ac7e53ae : [quant][graphmode][fx] Support preserved_attributes in prepare_fx (#50306)
271240ae29 : [JIT] Ensure offset is a multiple of 4 to fix "Philox" RNG in jitted kernels (#50169)
d390e3d8b9 : [FX] Make graph target printouts more user-friendly (#50296)
a7e92f120c : [FX] Implement wrap() by patching module globals during symtrace (#50182)
f10e7aad06 : [quant][graphmode][fx] Scope support for call_method in QuantizationTracer (#50173)
6eb8e83c0b : [aten] embedding_bag_byte_rowwise_offsets_out (#49561)
0f412aa293 : Move scalar_to_tensor_default_dtype out of ScalarOps.h because it's only useful for torch.where. (#50111)
186fe48d6e : Format RPC files with clang-format (#50367)
acaf091302 : Vulkan convolution touchups. (#50329)
e29082b2a6 : Run mypy over test/test_utils.py (#50278)
eb87686511 : svd_backward: more memory and computationally efficient. (#50109)
9d8bd216f9 : Use Unicode friendly API in fused kernel related code (#49781)
6a3fc0c21c : Treat has_torch_function and object_has_torch_function as static False when scripting (#48966)
d31a760be4 : move has_torch_function to C++, and make a special case object_has_torch_function (#48965)
632a4401a6 : clean up imports for tensor.py (#48964)
839c2f235f : treat Parameter the same way as Tensor (#48963)
fd92bcfe39 : Use FileStore in TorchScript for store registry (#50248)
92fcb59feb : Automated submodule update: tensorpipe (#50267)
26cc630789 : Allow arbitrary docstrings to be inside torchscript interface methods (#50271)
4774c6800b : Added linalg.inv (#48261)
375c30a717 : Avg pool 0 dim acceptance. (#50008)
8530c65e25 : [codemod][fbcode/caffe2] Apply clang-format update fixes
d4c1684cf5 : reuse constant from jit (#49916)
ba1ce71cd1 : Document single op replacement (#50116)
ea087e2d92 : JIT: guard DifferentiableGraph node (#49433)
36ddb00240 : [fix] torch.cat: Don't resize out if it is already of the correct size. (#49937)
c2d37cd990 : Change CMake config to enable universal binary for Mac (#50243)
49bb0a30e8 : Support scripting classmethod called with object instances (#49967)
1c12cbea90 : Optimize Vulkan command buffer submission rate. (#49112)
aa18d17455 : add type annotations to torch.nn.modules.fold (#49479)
2c4b6ec457 : Unused exception variables (#50181)
8f31621f78 : Fix MKL builds on Ubuntu (#50212)
1bb7d8ff93 : Revert D25717504: Clean up some type annotations in test/jit
f9f758e349 : Apply clang-format to rpc cpp files (#50236)
0bb341daaa : Dump state when hitting ambiguous_autogradother_kernel. (#50246)
d78b638a31 : Convert string => raw strings so char classes can be represented in Python regex (#50239)
5d45140d68 : [numpy] torch.{all/any} : output dtype is always bool (#47878)
a4f30d48d8 : Clean up some type annotations in test/jit (#50158)
81778e2811 : [onnx] Do not deref nullptr in scalar type analysis (#50237)
b5ab0a7f78 : Improve torch.linalg.qr (#50046)
88bd69b488 : Stop using c10::scalar_to_tensor in float_power. (#50105)
55919a4758 : add type annotations to torch.nn.quantized.modules.conv (#49702)
54ce171f16 : Fix persistent_workers + pin_memory (#48543)
d00acebd14 : Add tensor.view(dtype) (#47951)
5c5abd591d : Implement torch.linalg.svd (#45562)
006cfebf3d : Update autograd related comments (#50166)
9f832c8d3e : [numpy] torch.exp: promote integer inputs to float (#50093)
fc2ead0944 : Autograd engine, only enqueue task when it is fully initialized (#50164)
c215ffb6a2 : Revert D25687465: [PyTorch] Devirtualize TensorImpl::dim() with macro
294b7867eb : Address clang-tidy warnings in ProcessGroupNCCL (#50131)
5a63c452e6 : Disable cuDNN persistent RNN on sm_86 devices (#49534)
b73c018598 : [PyTorch] Change representation of SizesAndStrides (#47508)
882ddb2f2d : [PyTorch] Introduce packed SizesAndStrides abstraction (#47507)
c480eebf95 : Completely remove FutureMessage type (#50029)
171648edaa : Completely Remove FutureMessage from RPC agents (#50028)
098751016e : Completely Remove FutureMessage from RPC cpp tests (#50027)
1f795e1a9b : Remove FutureMessage from RPC request callback logic (#50026)
2831af9837 : Completely remove FutureMessage from FaultyProcessGroupAgent (#50025)
0684d07425 : Remove FutureMessage from sender TensorPipeAgent (#50024)
1deb895074 : Remove FutureMessage from sender ProcessGroupAgent (#50023)
0c943931aa : Completely remove FutureMessage from distributed autograd (#50020)
b2da0b5afe : Completely remove FutureMessage from RPC TorchScript implementations (#50005)
2d5f57cf3b : Completely remove FutureMessage from RRef Implementations (#50004)
d730c7e261 : Replace FutureMessage with ivalue::Future in RpcAgent retry logic (#49995)
008206decc : Replace FutureMessage with ivalue::Future in RRefContext (#49960)
25ef605132 : Replace FutureMessage with ivalue::Future in distributed/autograd/utils.* (#49927)
84e3237a53 : Let RpcAgent::send() return JitFuture (#49906)
4de6b279c8 : [PyTorch] Devirtualize TensorImpl::dim() with macro (#49770)
1a1b665827 : [PyTorch] validate that SparseTensorImpl::dim needn't be overridden (#49767)
2e7c6cc9df : [PyTorch] Devirtualize TensorImpl::numel() with macro (#49766)
bf4fcab681 : Fix SyncBatchNorm usage without stats tracking (#50126)
870ab04b64 : add type annotations to torch._utils (#49705)
ce370398cc : [Gradient Compression] Remove the extra comma after "bucket" in PowerSGD hook signatures (#50197)
09eefec627 : Clean up some type annotations in android (#49944)
f83d57f99e : [Don't review] Clean up type annotations in caffe2/torch/nn (#50079)
2bceee785f : Clean up simple type annotations in nn/functional.py (#50106)
3b56e9d0ef : [pytorch] prune based on custom importance scores (#48378)
23cadb5d7b : [PyTorch] Specialize `list_element_from` for `IValue` to avoid extra move/copy (#50124)
7ce8f7e488 : [quant] Backend string for the quantized types (#49965)
0c3bae6a89 : docker: add environment variable PYTORCH_VERSION (#50154)
e12008d110 : [quant] Mapping for the `_LinearWithBias` (#49964)
160b4be60a : [PyTorch] typeid: ensmallen scalarTypeItemSizes (#50165)
0495180f6e : Fix deprecation warning in scalar_type_analysis (#50218)
7377bfb1bd : Fix compiler warnings pertaining to uniform_int() (#49914)
e096449360 : Adding MyPy daemon status file to gitignore (#50132)
ec6d29d6fa : Drop unused imports from test (#49973)
fbdb7822c6 : minor improvement: extract major version (#49393)
8706187523 : Fix #42271 (#50141)
45c0d64b33 : Skip test_functional_autograd_benchmark during code coverage (#50183)
ace1680b68 : [static runtime] Remove register concept by giving ownership to the nodes (#50050)
321b98830e : [script] Validator for unsupported ops on accelerator
968ad47b41 : Fix error messages thrown when the padding size is not valid (#50135)
11cdb910b4 : [fx] Add matrix multiplication fusion pass (#50151)
838e73de20 : enable alltoall_single torchscript support (#48345)
4e2ab2cd73 : Move generator state APIs to ATen (#49589)
b6b76a1055 : Mod lists to neutral+descriptive terms in caffe2/caffe2/opt (#49801)
ef1fa547ba : [PyTorch] Use expectRef() when calling listConstruct (#50062)
fa160d18e7 : [PyTorch][jit] Add Type::{castRaw,expectRef} (#50061)
6838ecefb6 : Clean up some type annotations in torch/jit (#49939)
e49372d460 : Bugfix nightly checkout tool to work on Windows (#49274)
eb8003d8e9 : [FX] Remove extraneous newlines at end of code (#50117)
dc41d17655 : .circleci: Add option to not run build workflow (#50162)
3270e661c3 : [PyTorch Mobile] Skip signature check when converting to typed operator handle (#49469)
dde5b6e177 : [PyTorch] Reapply D25547962: Make tls_local_dispatch_key_set inlineable (reapply) (#49763)
eef5eb05bf : Remove backward and requires_grad from Autograd backend key (#49613)
6643e9fbb3 : Remove `use_c10_dispatcher: full` lines (#49259)
249261ada7 : Remove generated_unboxing_wrappers and setManuallyBoxedKernel (#49251)
4a14020c0d : Remove .impl_UNBOXED() and functionalities associated with it (#49220)
e4c41b6936 : Remove codegen logic to support non-c10-full ops (#49164)
fcb69d2eba : Add android.permission.INTERNET permission to Android test_app. (#49996)
473e78c0fa : Remove redundant code for unsupported Python versions (#49486)
09eb468398 : [vulkan] 2D prepacking for conv2d (#48816)
9b519b4a3f : [PyTorch Mobile] Generate Kernel dtype selection code in selected_mobile_ops.h during the build (#49279)
ba691e1a42 : Remove incorrect links to zdevito/ATen (#50065)
6eee2a0a9f : [JIT] disable masked fill (#50147)
3ce539881a : Back out "Revert D25757721: [pytorch][PR] Run mypy on more test files" (#50142)
638086950d : Clean up type annotations in torch/nn/quantized/modules (#49941)
7d9eb6c680 : Implementation of torch::cuda::synchronize (#50072)
e606e60331 : [Needs Review] Convert some files to Python3 (#49351)
efe0533a24 : Clean up some type annotations in torch/testing/_internal (#50078)
74c055b240 : Fix mypy type hint for AdaptiveAvgPool2,3d, AdaptiveMaxPool2,3d (#49963)
68a6e46379 : Push anonymous namespace into codegen, not template (#49498)
480a756194 : [PyTorch] IValue::toTensor can now return const Tensor& (#48868)
1b31e13539 : [PyTorch] Store Tensor explicitly in IValue (#48824)
688992c775 : [PyTorch] Additional IValue tests (#49718)
5f2ec6293d : Unused variables in neural net classes and functions (#50100)
c517e15d79 : Add support for converting sparse bool tensors to dense (#50019)
2ac180a5dd : Fix cl.exe detection in cpu/fused_kernel.cpp (#50085)
45ec35827e : Set USE_RCCL cmake option (dependent on USE_NCCL) [REDUX] (#34683)
0ad6f06684 : drop an unneeded comma from cmakelist.txt (#50091)
ad7d208ba5 : Revert D25239967: [fx] Add matrix multiplication fusion pass
282552dde2 : [PyTorch] Reapply D25546409: Use .sizes() instead of .size() in cat_serial_kernel_impl (#49762)
57d489e43a : Fix for possible RNG offset calculation bug in cuda vectorized dropout with VEC=2 (#50110)
f6f0fde841 : [reland][quant][graphmode][fx] Standalone module support {input/output}_quantized_idxs (#49754) (#50058)
05358332b3 : Fix mypy typing check for test_dataset (#50108)
def8aa5499 : Remove cpu half and dead code from multinomial (#50063)
9b7f3fa146 : [fx] Add matrix multiplication fusion pass (#50120)
d80d38cf87 : Clean up type annotations in caffe2/torch/nn/modules (#49957)
75028f28e1 : [PyTorch] Reapply D25545777: Use .sizes() instead of .size() in _cat_out_cpu (#49761)
574a15b6cc : [PyTorch] Reapply D25544731: Avoid extra Tensor refcounting in _cat_out_cpu (#49760)
5f875965c6 : Fix doc for vmap levels (#50099)
70734f1260 : Kill AT_SKIP_BFLOAT16_IF_NOT_ROCM (#48810)
26391143b6 : Support out argument in torch.fft ops (#49335)
5d93e2b818 : torch.flip and torch.flip{lr, ud}: Half support for CPU and BFloat16 support for CPU & CUDA (#49895)
d1c375f071 : fix fork formatting (#49436)
7fe25af59d : Revert D25746115: [pytorch][PR] Improve documentation and warning message for creation of a tensor with from_numpy()
dcc83868c5 : [PyTorch Mobile] Mark xnnpack operators selective
5e1c8f24d4 : Make stft (temporarily) warn (#50102)
4a6c178f73 : Improve documentation and warning message for creation of a tensor with from_numpy() (#49516)
9529ae3776 : Revert D25757721: [pytorch][PR] Run mypy on more test files
d1a56fcd9d : [docs] add docstring in torch.cuda.get_device_properties (#49792)
abe1fa49e9 : [JIT] Add `__prepare_scriptable__` duck typing to allow replacing nn.modules with scriptable preparations (#45645) (#49242)
e71a13e8a3 : [pytorch][codegen] migrate gen_variable_type to new data model (#49735)
a272a7eeab : [PyTorch] Avoid heap allocations in inferUnsqueezeGeometry (#49497)
093aca082e : Enable distribution validation if __debug__ (#48743)
e3c56ddde6 : Revert D25757691: [pytorch][PR] Run mypy over test/test_utils.py
e442ac1e3f : Update MultiHeadAttention docstring (#49950)
9945fd7253 : Drop unused imports from caffe2/python (#49980)
eee849be8c : [caffe2][a10] Move down pragma pop to properly suppress warning 4522 (#49233)
16e5af41da : Fix store based barrier to only use 'add'. (#49930)
12ee7b61e7 : support building with conda installed libraries (#50080)
e868825eb6 : [RPC] Relax some profiling tests (#49983)
c115957df0 : [distributed] Provide parameter to pass GPU ID in barrier function (#49069)
3cd2f1f3a7 : Add an option to disable aten::cat in TE (re-revert) (#50101)
bbae6774c1 : [JIT] Remove buffer metadata serialization forward-compat gate (#49990)
04e86be1a2 : eager quant: fix error with removing forward hooks (#49813)
113b7623d6 : quant: throw a nice error message for allclose with quantized inputs (#49802)
44c17b28c6 : quant: nice error message on convtranspose with per-channel weight (#49899)
72306378b4 : quant: ensure observers do not crash for empty Tensors (#49800)
c86cfcd81d : Run mypy over test/test_utils.py (#49654)
b7bfc723d3 : Run mypy on more test files (#49658)
e35b822d7d : fixes indices computation for trilinear interpolate backwards (#50084)
52933b9923 : Patch death tests/fork use after D25292667 (part 3)
ace78ddb6a : Revert D25763758: [pytorch][PR] introduce a flag to disable aten::cat in TE
3845770349 : Fixing error in Readme.md. (#50033)
8c66aec435 : Fix grammar typo in readme.md (#50000)
e4d596c575 : Fix return value of _vmap_internals._get_name (#49951)
6e6231f9cd : unit test for fc parallelization aot (#50056)
ee80b45843 : [TensorExpr] Fix LLVM 10 build after LLVM API changes
c51455a7bb : [FX] fix Graph python_code return type annotation (#49931)
8fb5f16931 : Complex backward for indexing, slicing, joining, and mutating ops (#49552)
9e0b4a96e4 : introduce a flag to disable aten::cat in TE (#49579)
65122173ab : [ONNX] Modified var_mean symbolic to support more combinations of dims (#48949)
d0369aabe1 : Clean up some type annotations in caffe2/contrib/aten/gen_op (#49945)
a5339b9d7c : Drop unused imports from leftovers (#49953)
5acb1cc1df : Drop unused imports from scripts (#49956)
efe1fc21fc : Dont inlinine intermediates on cpu (#49565)
c439a6534d : [ONNX] Handle Sub-block index_put in _jit_pass_onnx_remove_inplace_ops_for_onnx (#48734)
240c0b318a : Suppress "statement is unreachable" warning (#49495)
f96ce3305c : prohibit assignment to a sparse tensor (#50040)
71766d89ea : [BE] unified run_process_no_exception code (#49774)
74dcb6d363 : torch.xlogy: Use wrapped_scalar_tensor / gpu_with_scalars to speed up GPU kernel. (#49926)
483670ff0f : [pytorch] add threshold_backward batching for vmap (#49881)
da790eca69 : Add trace batching forward/backward rule (#49979)
0216366f0d : Make use_c10_dispatcher: full mandatory for structured kernels (#49490)
6c833efd65 : Move default or no default logic into native.argument (#49489)
8eee8460f8 : codegen: Resolve overload ambiguities created by defaulted arguments (#49348)
7202c0ec50 : Tighten up error checking on manual_kernel_registration (#49341)
8e20594b38 : Construct CppSignatureGroup from NativeFunction (#49245)
f0945537af : .circleci: Ignore unbound variables for conda (#50053)
69ca5e1397 : Enforce c10-fullness for all ops (#49619)
6e84a018be : move to non-legacy magma v2 headers (#49978)
fdb81c538a : Improve `torch.flatten` docs and add tests to test_view_ops (#49501)
b76822eb49 : Update update_s3_htmls.yml (#49934)
22bd277891 : Run test_type_hints first (#49748)
211f35631f : Add type annotations to _tensorboard_vis.py and hipify_python.py (#49834)
c7e9abb66a : Making ops c10-full: list of optional tensors (#49138)
e44b2b72bd : Back out "[pytorch][PR] Preserve memory format in qconv op" (#49994)
8aad66a7bd : [c10/**] Fix typos (#49815)
749f8b7850 : Remove flops warnings from the default profiler use case (#49896)
de3d8f8c35 : Revert D25734450: [pytorch][PR] Improve `torch.flatten` docs and add tests to test_view_ops
4677fc69a2 : Fix inf norm grad (reland) (#48611)
730965c246 : Improve `torch.flatten` docs and add tests to test_view_ops (#49501)
cd608fe59b : Revert D25719980: [pytorch][PR] Accept input tensor with 0-dim batch size for MultiLabelMarginLoss
46afd7fc9f : [PyTorch] Decouple version numbers from c10 and caffe2 targets (#49905)
04a8412b86 : [quant] Quantizable LSTM (#49671)
ffbb68af8a : quant docs: add common errors section (#49902)
a7e1f4f37a : Remove incorrect usage of layout(std430) on uniform buffers, correctly now treated as error in the latest release of Vulkan SDK. (#49572)
6a951a6f4c : Fix a KaTeX crash and many docstring issues (#49684)
6b56b71e61 : Accept input tensor with 0-dim batch size for MultiLabelMarginLoss (#46975)
42d2e31cd6 : [numpy] `torch.rsqrt` : promote integer inputs to float (#47909)
b54ad08978 : Enable test_fusions TanhQuantize (#49970)
cfc3db0ca9 : Remove THPWrapper (#49871)
12b73fdbbf : Adding JIT support for cuda streams and events (#48020)
97c17b4772 : Fix auto exponent issue for torch.pow (#49809)
e482c70a3d : added List as an option to the unflattened_size (#49838)
01b57e1810 : Revert D25718705: Clean up type annotations in caffe2/torch/nn/modules
14edc726d9 : Clean up some type annotations in caffe2/torch/quantization (#49942)
4c5a4dbb8c : [Tensorexpr]Copying header files in tensorexpr dir (#49933)
891759f860 : Clean up type annotations in caffe2/torch/nn/modules (#49938)
a111a9291c : added fuse_op and list_construct - list_unpack pass
8d7338e820 : Enable tests using named temp files on Windows (#49640)
d434ac35e4 : Update gather documentation to allow index.shape[k] <= input.shape[k] rather than ==. (#41887)
c619892482 : Fix errata (#49903)
361f5ed91d : Implement torch.linalg.qr (#47764)
bc4ff7ba05 : fx quant: split linear test cases (#49740)
ea558b2135 : fx quant: hook up ConvTranspose{n}d (#49717)
fc559bd6dc : [JIT] Constant prop getattr (#49806)
268441c7d8 : [NNC] masked fill (#49627)
58fe67967c : Support the `in` operator with str (#47057)
e6779d4357 : [*.py] Rename "Arguments:" to "Args:" (#49736)
9c64b9ffba : early termination of CUDA tests (#49869)
963f7629b5 : [numpy] `torch.digamma` : promote integer inputs to float (#48302)
46cf6d332f : Revert D25684692: [quant][graphmode][fx] Standalone module support {input/output}_quantized_idxs
ec6de6a697 : Clip small scales to fp16 min
89b4899ea5 : [quant][graphmode][fx] Standalone module support {input/output}_quantized_idxs (#49754)
69b1373587 : Revert D25692616: [pytorch][PR] [reland] Early terminate when CUDA assert were thrown
9552cc65d4 : Creation of test framework for Sparse Operators (#48488)
5acc27c00a : Revert D25690129: [pytorch][PR] Added linalg.inv
1833009202 : Fix typo in complex autograd docs (#49755)
e6a215592e : [reland] Early terminate when CUDA assert were thrown (#49799)
3f4b98d568 : [numpy] `torch.erfinv`: promote integer inputs to float (#49155)
4d6110939a : [pt][quant] Make the CUDA fake quantize logic consistent with CPU fake quantize logic (#49808)
e163172904 : removes more unused THC functions (#49788)
d99a0c3b3e : Improve docs for scatter and gather functions (#49679)
b3387139b4 : Mod lists to neutral+descriptive terms in caffe2/docs (#49803)
8554b58fbd : Added linalg.inv (#48261)
370350c749 : Preserve memory format in qconv op (#49533)
5171bd94d7 : [lint doc] how to fix flake errors if pre-commit hook wasn't there (#49345)
55b431b17a : [Gradient Compression] Directly let world_size = group_to_use.size() (#49715)
88c33ff8ab : [Gradient Compression] Explicitly restrict the scope of torch.cuda.synchronize to the current device (#49711)
ee271047b5 : torch.utils.checkpoint.checkpoint + torch.cuda.amp (#49757)
f474ffa1a9 : [quant][graphmode][fx] Change standalone module api (#49719)
af1b636b89 : [Gradient Compression] Change wait() to value() in some callbacks of PowerSGD communication hook (#49709)
68d438c9da : Add PixelUnshuffle (#49334)
461aafe389 : [numpy] `torch.angle`: promote integer inputs to float (#49163)
46b83212d1 : Remove unused six code for Python 2/3 compatibility (#48077)
abacf27038 : Revert D25623219: [pytorch][PR] early terminate when CUDA assert were thrown
010b9c52f4 : Skip None submodule during JIT-tracing (#49765)
62f9b03b7c : [lint] Apply whitespace linter to all gradle files
27f0dd36d9 : add type annotations to torch.nn.parallel._functions (#49687)
de07d07600 : fx quant: improve types on convert (#49688)
19f972b696 : fx quant: do not observe bias on F.linear (#49628)
c3a7591cef : fx quant: do not observe bias on F.conv (#49623)
b414123264 : Update `is_floating_point()` docs to mention bfloat16 (#49611)
67d0c18241 : [FX] Try to make it more clear that _update_args_kwargs should not be called (#49745)
2780400904 : [numpy] Add `torch.xlogy` (#48777)
be091600ed : early terminate when CUDA assert were thrown (#49527)
9b6fb856e8 : Update NNPACK (#49749)
6f9532dd53 : only upload s3 stats on master, nightly, and release branch (#49645)
04e04abd06 : remove unused THCBlas (#49725)
1451d84766 : Minor doc fix: change truncating to rounding in TF32 docs (#49625)
21398fb6cb : Fix get_overlap_status for tensors without storage (#49638)
c23808d8e8 : Reland: Add base forward grad logic (#49734)
eabe05ab72 : [onnxifi] Get rid of class member (#49380)
7b4a7661d6 : Make PyTorch partially cross-compilable for Apple M1 (#49701)
42b5601f30 : [ROCm] add 4.0 to nightly builds (#49632)
4d9d03fe47 : Complex backward for torch.sqrt (#49461)
2df249f0ab : [fix] inplace remainder/% (#49390)
dfb7520c47 : NewModuleTest: Don't call both check_jacobian and gradcheck (#49566)
c348faedc4 : [Gradient Compression] Warm-start of PowerSGD (#49451)
590e7168ed : [PyTorch] Remove direct reference to native symbols in sparse related non-native codes (#49721)
d54cf2aa27 : [pt][ATen] Optimize bmm (#49506)
11598da229 : [FX] Fix python code having spurious newlines from placeholders (#49720)
edce6b138d : fx quant: fix types on _find_quants (#49616)
7c90b20f38 : fx quant: add types to observed_module.py (#49607)
9d5d193704 : fx quant: types for fusion_patterns.py (#49606)
ab2194f912 : unbreak mypy torch/quantization (#49549)
a5b27d7a31 : [TensorExpr] Move `SimpleIREval` implementation from .h to .cpp. (#49697)
e1f73ced1e : [TensorExpr] Change `LoopNest::vectorize` to accept `For*` instead of `Stmt*`. (#49696)
f5178bf151 : Revert D25607503: Add base forward grad logic
aa2782b9ec : replacing THC_CLASS and THC_API with TORCH_CUDA_API (#49690)
7eb392d73f : Fix TCPStore type coercion (#49685)
1043ecf68d : Use store based barrier only for certain store types. (#49694)
7e1356db7b : Move device guard from MultiTensorApply.cuh (#46664)
5b163e230a : [jit][tracer] allow traced modules to return dicts with tuple values when strict=False (#49568)
46c9a0e679 : Do not use negative values in GCD computation. (#49379)
fdf02eff3d : Add base forward grad logic (#49097)
befe337072 : Fix test_cuda_init_race skip rules (#49693)
983bfc79ed : Enable product for bool tensor (#48637)
49c9994fb7 : Clean up backward compatibility skip list (#49691)
92f37ae263 : change block codegen to handle new inlining in NNC (#47687)
476cabdfff : added macros in jit logging to check whether loggings are enabled; replaced similar checks in LLVM codegen with such macros (#49121)
aebb7d1836 : converted current debugging statements in LLVM codegen to jit-logging statements #48771 (#49040)
f7a085af98 : Dynamic GRU quantization support (#49448)
a84b93a6f8 : add close() method to tqdm mock (#46040)
12942ea52b : [BE] Introduce `set_cwd` context manager (#49657)
44ce0b8883 : Sparse-sparse matrix multiplication (CPU/CUDA) (#39526)
3779bdec56 : Implementing NumPy-like function torch.broadcast_to (#48997)
db2e9c1e7f : [NNC] Intermediate allocs flattened and dependency support (#49554)
a3aafea076 : Fixed a typo in dataloader.py. (#49437)
b1a1271f68 : Fix typo in add_pr_curve docstrings. (#49648)
b80a36614f : Fix return type Any for Ternary ops (#49165)
8be205ae13 : Added linalg.solve (#48456)
5ce94991eb : Fix sinc docs typo (#49667)
ef172e138c : [Mask R-CNN]Add Int8 AABB Generate proposals Op (#49574)
7ed140a1a0 : [WIP][DataLoader] Prototype of SamplerIterableDataset (#49363)
554f79acb9 : [WIP][DataLoader] Prototype of BatchIterableDataset (#49186)
1b6fc1fd42 : [WIP][DataLoader] CollateIterableDataset prototype (#48933)
bab732a3a3 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
5c3788d5d7 : Add support for torch.tensor_split to accept a tensor for `indices` argument (#49169)
96aed203bf : [Gradient Compression] Replace the assertions in PowerSGD comm hook by stream synchronization (#49435)
342bfd892f : [Gradient Compression] Add error feedback to layerwise PowerSGD (#49418)
5c25f8faf3 : stft: Change require_complex warning to an error (#49022)
f5ee619d2a : Updated derivative rules for complex svd and pinverse (#47761)
8b61fbdac9 : Resubmit: [Gradient Compression] Implement the original layerwise PowerSGD (#49639)
c0deb231db : disable kthvalue overlap (#48254)
1ac05cfe01 : Remove DataPtr extractor from CUDAFuture (#48840)
e0f60c9720 : Disable test on windows (#49636)
e2d2d9bb0c : [PyTorch Mobile] Preserve bundled input related methods when calling optimize_for_mobile (#49170)
ad9923e5d5 : Revert D25511543: [Gradient Compression] Implement the original layerwise PowerSGD
5cde23fdd4 : [quant][graphmode][fx] Allow user to specify qconfig for call_method (#49621)
e4eaa6de5f : Fix lint (#49629)
7278e3bd29 : Bump tensorpipe version (#49599)
159de1f1d6 : Add benchmark for torch.distributed.pipeline.sync.Pipe (#49577)
8c52fdf522 : Improve documentation for pipeline parallelism. (#48638)
71f3399e19 : [Gradient Compression] Implement the original layerwise PowerSGD (#49417)
6f381de006 : Inline coverage report combining/reporting (#49615)
e2e44bb10a : [Issue #46210] added torch.fx.len() to provide support for len(); added a test case for torch.fx.len() (#49532)
3659560fba : [NNC] Disable masked fill (#49622)
5ab9593098 : `torch.reciprocal`: promote integer inputs to float (#49102)
485aee7a22 : Output stacks (support for SVG visualization) (#48438)
d0a12c5a47 : Add sinc operator (#48740)
d088359e5a : [torchscript] Fix constant propagation schemas (#49605)
9d91360b5d : Cleanup APIs for pipeline parallelism. (#48630)
39d89e06e0 : Upload test times to S3 (#49190)
b361e33a66 : [package] implicitly extern stdlib before mocking (#49306)
fb755ad33e : [FX] Emit named tuple construction node when NamedTuple appears as an arg (#49553)
27f355f87e : Test pipeline parallelism works with DDP. (#48470)
e17f0fd676 : Adding support for bitwise augassignment operators (#44621)
daaf932a99 : New profiler API (#48280)
4a870f6518 : [PyTorch Mobile] Export Operator List from Mobile CompilationUnit instead of from TorchScript Model (#49385)
71ca600af9 : Renaming CAFFE2_API to TORCH_API (#49496)
c9e052130a : [FX] Enforce args is tuple and kwargs is dict (#49526)
faf6032945 : Remove deadlines for Caffe2 hypothesis_test when running on GPU. (#49591)
ccd646696b : Fix Module backward hooks for all Tensor inputs/outputs (#46163)
0b27d57062 : fixed the first line of torch.rst to match the __init__.py file's first line (#49584)
7545ff6619 : Refactor VmapPhysicalView::newLogicalToPhysical (#49482)
f975f99d1d : add checkout PR tip step for quick checks (#49590)
2de345d44d : Add op bench for caffe2 quantile op (#49598)
6568572712 : Support integral types for kAbs in SimpleIREvaluator (#49357)
72b00a8a52 : Revert D25480770: Set USE_KINETO=1
1a92802bde : Set USE_KINETO=1 (#49201)
020c443fd1 : Fix CustomAutogradTest.ReentrantPriority rerun failures (#49581)
43f6da787e : Use store based barrier in init_process_group. (#49419)
5fcfebd84a : Disables method variant grad and grad grad checks (#49576)
573f4aa352 : FLOPS Roofline Analysis Feature for PyTorch Profiler. (#46506)
5db12b6811 : Add type inference for dequantization.tensors (#49517)
ed0489c11a : disable concat nested namespace check (#49571)
9058040527 : Add more list peephole idioms (#48268)
39d3578e91 : [ddp launch] solve zombie problem (#49305)
1047957831 : [te][reapply] Add fast log approximation based on sleef (#49575)
c78fd76f18 : Revert D25542799: [PyTorch] Merge CoinflipTLS into RecordFunctionTLS
625bc40def : Revert D25544731: [PyTorch] Avoid extra Tensor refcounting in _cat_out_cpu
385f6b4807 : Revert D25545777: [PyTorch] Use .sizes() instead of .size() in _cat_out_cpu
52b3775914 : Revert D25546409: [PyTorch] Use .sizes() instead of .size() in cat_serial_kernel_impl
19dc5e94a6 : Revert D25547962: [PyTorch] Make tls_local_dispatch_key_set inlineable (reapply)
d17dc37112 : Add dict comprehension (#47774)
ea4ccc730e : Revert D25445815: [te] Add fast log approximation based on sleef
6db5e85726 : [FileStore] Updating Docs to Reflect FileStore changes (#49557)
31fcbbdf35 : [FileStore] Implemented numKeys and Added Tests (#49556)
ad4467b93c : .github: Add action workflow to update S3 HTMLS (#49509)
4b85239532 : [quant][eagermode][fix] Fix quantization for DeQuantStub (#49428)
1329066b69 : [te] Add fast log approximation based on sleef
2b61e4d84c : Revert D25152559: T66557700 Support default argument values of a method
0d411c4216 : Test distributed collectives profiling with Gloo on GPU (#49072)
20b90f3909 : Set is_non_overlapping_and_dense_ flag in OpaqueTensorImpl constructor (#49470)
eb131cf484 : Revert D25105217: [pytorch][PR] Fix bad error message when int overflow
a727bf2851 : Refactor RPC matchBuiltInOp to get rid of exception swallowing (#49009)
b8d98f05e7 : [reland][quant][docs] Add fx graph mode quantization to quantization docs (#49211) (#49515)
815d38395a : PyLong_{As/From}{Long/UnsignedLong} lint checks (#49280)
c20b916cbd : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
f5a26a554b : [C2] Revive unsafe CoalesceOp (#49402)
26974e6b28 : Remove set_quantizer_ from native_functions.yaml (#49463)
f5b68e74d7 : Revert D25574962: [pytorch][PR] Updated derivative rules for complex svd and pinverse
c18af03a41 : [pt] fuse ClipRangesGatherSigridHash (#49181)
26e076d19e : Adding fix for invalid annotation types for dictionary (#49425)
65876d3f51 : Change aten::native_layer_norm signature to match torch.layer_norm definition (#48971)
2ea1d97e3b : Add BFloat16 support for isinf and isfinite (#49356)
ede0b169ea : [quant][be] Add typing for quantization_mappings.py (#49179)
4edaf4d759 : Bring back math_silu_backward which works for all backends. (#49439)
6230e337d5 : Add torch._foreach_zero_ API (#47286)
4ce2b0b0ac : Set caffe2::pthreadpool() size in ParallelOpenMP (#45566)
db2ecefc01 : [reland] Support torch.distributed.irecv(src=None, ...) (#49383)
df2337097d : add files to SLOW_TESTS for target determinator (#49500)
82ac6c75af : fx quant: make sure observer is inserted before a quantized output (#49420)
84506e0316 : fx quant: fix fq when input is quantized and node does not need fq (#49382)
7542076097 : fx quant: do not insert observers at quantized inputs (#49239)
92df8706a0 : fx quant: move {input|output}_quantized_idxs cfg from convert to prepare (#49238)
36b20923ba : eager quant: remove fake_quant after add/mul nodes during QAT (#49213)
904586271b : Add fusion support of aten::to (#48976)
80b508f207 : [NNC] add support for masked_fill (#48974)
50386b9988 : [NNC] Add Support For is_nan (#48973)
60b4c40101 : [extensions] fix `is_ninja_available` during cuda extension building (#49443)
d409da0677 : Fix CUDA extension ninja build (#49344)
1c6e179b38 : Relax the atol/rtol of layernorm math kernel test. (#49507)
c675727adf : Fix bad error message when int overflow (#48250)
a5cc0a6f4c : .circleci: Only downgrade if we have conda (#49519)
872f6486b1 : Prevent accidentally writing old style ops (#49510)
9056173acc : [NNC] Dont inline outputs buffers on cpu (#49488)
47c65f8223 : Revert D25569586: stft: Change require_complex warning to an error
3efd5d8f01 : Introduce tools.codegen.api.translate (#49122)
f66147ebca : BFloat16: add explicit dtype support for to_mkldnn and to_dense (#48881)
6f928a4a53 : [PyTorch] Make tls_local_dispatch_key_set inlineable (reapply) (#49412)
953f9922ec : [PyTorch] Use .sizes() instead of .size() in cat_serial_kernel_impl (#49371)
c1879b573e : [PyTorch] Use .sizes() instead of .size() in _cat_out_cpu (#49368)
1a0510463a : [PyTorch] Avoid extra Tensor refcounting in _cat_out_cpu (#49364)
9ce1df079f : [PyTorch] Merge CoinflipTLS into RecordFunctionTLS (#49359)
6bde0ca6d3 : T66557700 Support default argument values of a method (#48863)
d0fb55454b : Refine `ConvParams::use_nnpack()` (#49464)
399b07a8f9 : Add note to torch docs for sinh/cosh (#49413)
f0217e2f52 : Fix link in distributed contributing doc and add link (#49141)
676bfa6dbd : Revert D25507480: [quant][docs] Add fx graph mode quantization to quantization docs
09173ae65e : Allow zero annealing epochs (#47579)
4431731c68 : Making ops c10-full: Storage arguments (#49146)
7767dcfc8d : Revert D25564477: [pytorch][PR] Add sinc operator
5874925b46 : stft: Change require_complex warning to an error (#49022)
7729581414 : [quant][docs] Add fx graph mode quantization to quantization docs (#49211)
9955355853 : Updated derivative rules for complex svd and pinverse (#47761)
39a23c797b : Add docs/README.md to make existing doc build info more discoverable (#49286)
6f814d45aa : Update TensorPipe submodule (#49467)
2ec3e803eb : Update accumulate_grad to support vmap (#49119)
f98d8c6237 : Move inplace_is_vmap_compatible to BatchedTensorImpl.h (#49118)
1b6d18aa7c : Adding support for CuDNN-based LSTM with projections (#47725)
48d1ad1ada : Reland "Add test for empty tensors for batch matmuls" (#48797)
afce5890ff : Revert D25421263: [pytorch][PR] [numpy] torch.{all/any} : output dtype is always bool
d7659be58d : [caffe2][autograd] Avoid extensive -Wunused-variable warnings on _any_requires_grad (#49167)
45b33c83f1 : Revert "Revert D24923679: Fixed einsum compatibility/performance issues (#46398)" (#49189)
bbc71435b7 : Add sinc operator (#48740)
09c741868c : [c10d Store] Store Python Docs Fixes (#49130)
4b3f05a471 : [Docs] Updating init_process_group docs to indicate correct rank range (#49131)
c52f1dc365 : .circleci: downgrade conda-package-handling to 1.6.0 (#49434)
f2ee8c6241 : Instantiate PackedConvWeight to avoid linking error (#49442)
86902f84bf : CUDA BFloat embedding (#44848)
001ff3acf6 : webdataset prototype - LoadFilesFromDiskIterableDataset (#48955)
6786b2b966 : webdataset prototype - ListDirFilesIterableDataset (#48944)
efc090652e : Enhanced generators with grad-mode decorators (#49017)
76d09ec33e : [PyTorch] Avoid move-constructing a List in listConstruct (#49355)
ec8e9d31cf : Making ops c10-full: optional lists (#49088)
d69d42db78 : Making ops c10 full: optional out arguments (#49083)
306bab220e : Revert D25554109: [StaticRuntime][ATen] Add out variant for narrow_copy
ed04b71651 : [StaticRuntime][ATen] Add out variant for narrow_copy (#49449)
40d7c1091f : Unescape string in RPC error message (#49373)
a9137aeb06 : quantized tensor: add preliminary support for advanced indexing, try 2 (#49346)
8954eb3f72 : [StaticRuntime] Fusion pass for ClipRanges/GatherRanges/LengthsToOffsets (#49113)
94e328c038 : fix optimizer.pyi typo 'statue'->'state' (#49388)
cbeb4c25e5 : [StaticRuntime] Permute_out (#49447)
acd72e79a3 : update breathe (#49407)
58551e52f0 : [CMake] Use libtorch_cuda list defined in bzl file (#49429)
22c6dafd33 : [PyTorch] Use plain old function pointer for RecordFunctionCallback (reapply) (#49408)
e9d7d37ad0 : [FX] Rename Node._uses and refactor Node.all_input_nodes (#49415)
46debe7f23 : [DPER] Introduce barrier operation to force synchronization of threads in async execution (#49322)
7518f54611 : Add flag torch_jit_disable_warning_prints to allow disabling all warnings.warn (#49313)
aff0b68a58 : Fix include files for out-of-tree compilation (#48827)
16f4b0ed6b : Replace THError() check in THCTensorMathReduce.cu with C10_CUDA_KERNEL_LAUNCH_CHECK() (#49424)
c508e5b1bf : [numpy] torch.{all/any} : output dtype is always bool (#47878)
38a59a67f3 : [JIT] Support multiple outputs in subgraph matcher. (#48992)
3ffe9e0f43 : [static runtime] refine fusion group (#49340)
f4e15c4a23 : [te] Fix bugs with shift operators (#49396)
5912316cf7 : Making ops c10-full: Generator arguments (#49013)
a6274c1278 : Making ops c10 full: out overloads with default arguments (#49012)
b47fa5e88b : Making ops c10-full: Dimname arguments (#49008)
c5f90a25c0 : Making ops c10-full: ops blocked by manual registrations (#49007)
e391dbc1b5 : Making ops c10 full: ops returning multiple out arguments (#49006)
40a02e2ded : Make out ops c10-full (with hacky-wrapper) (#48912)
11334280bf : Suppress `warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed` warning (#49197)
778006918c : [WIP][FX] Add FX page to docs (#48814)
9908b93dcf : fix test_dispatch tests to error on duplicate def (#49254)
8ae9b46e20 : Revert D25494735: Update TensorPipe submodule
9234f5026d : Make WorkNCCL use CUDAEvent::query() rather than re-implement it (#49343)
5a5e576ab9 : Update TensorPipe submodule (#49232)
98726119d9 : Do not return uninitialized qscheme from getQSchemeAndQParamVector (#49391)
39a10fb652 : Fix check_kernel_launches.py for macros and provide extended context (#49365)
25bc906281 : Revert D25135415: [PyTorch] Use plain old function pointer for RecordFunctionCallback
a419a3e25d : Add assertion on any NaN error on the error feedback (#49374)
7e23ee1598 : [PyTorch] Use plain old function pointer for RecordFunctionCallback (#48629)
900aa4ee97 : [PyTorch] remove convenience RecordFunctionCallback interface (#48620)
bbeee481c3 : Fix typo in torch.load docstring for the `f` parameter (#49350)
626b8c0cf2 : [te] Ban uint8 tensors from fusion groups (#49247)
50b361a821 : Enable BF16 for indexing on CUDA (#48801)
23e98e73f6 : Fix Windows CUDA-11.1 test jobs (#49376)
e2510a0b60 : Add Kernel Launch Checks to files under caffe2/aten/THC (#49358)
cb3169d7a8 : [aten] index_select dim 1 (#47077)
220b91660f : [pytorch] Expand PixelShuffle to support any number of batch dims (#49187)
3a943e9f82 : Use Unicode friendly API on Win32 in THAllocator (#47905)
1e2d1d7242 : Fixed cat transform to work with event_dim > 0 (#49111)
d5a971e193 : Add kernel launch checks in caffe2/aten/src/ATen/native/cuda/ (#49269)
86cf1e1358 : Add another way to verify ccache in CONTRIBUTING.md (#49337)
6820745e28 : Revert D25489030: [PyTorch] Make tls_local_dispatch_key_set inlineable
4188c374ce : Refactor: use version instead of major version in windows build (#49156)
6cfd7c3811 : Remove type annotations from signatures in html docs (#49294)
9e3c25ff1d : sls + layernorm test (#43799)
be849ed1fd : [PyTorch] Make tls_local_dispatch_key_set inlineable (#49264)
c068180a17 : [CUDA graphs] Cuda RNG-safe graph capture and replay bindings (#48875)
25833e5d1c : [CrashFix] Make the dst tensor contiguous when copying from metal
a0432a7020 : [AARCH64] Fix vst1q_f32_x2 implementation (#49273)
87636c07bb : CUDA BF16 sparse (#48807)
690eaf9c43 : add channels last for AdaptiveAvgPool2d (#48916)
8397a62a64 : Fix cvtfp32_bf16 (#41280)
bd322c8967 : Update docstrings of torch.nn.modules.activation.MultiheadAttention (#48775)
7d406b4a07 : [PyTorch] Make TORCH_CHECK less likely to interfere with inlining (#49263)
eb051afa78 : [PyTorch] native_cpp_binding for size() and stride() (#49262)
f54ab8fbfe : Revert "Revert D25003113: make validate debug-only in Device copy ctr" (#49123)
94a3d4b083 : Remove unused operator at::_fft_with_size (#48905)
fdadfb6e5d : Fix formatting error in `set_deterministic` documentation (#49136)
38ed398580 : [fx] Add constant folding pass (#48443)
f2ba3c1621 : Use group.WORLD appropriately in process group initialization. (#48767)
dc4db95540 : Update pipeline API to accept arbitrary sequence of Tensors and not just Tuple (#48467)
33b7970d9e : fix slow windows test (#49258)
cd927875e0 : [pt] Replace size(dim) with sizes()[dim] (#49255)
717f31d984 : Remove unused reconstruct_scopes function (#48822)
dc92f25b38 : [te] Use c10::ScalarType utility functions in te::Dtype (#49148)
eaac28192c : [te] Use Dtype::is_signed instead of an ad hoc local predicate. (#49147)
ae88d25c23 : [te] Fix clamp with uint8 args (#49143)
8999915a86 : Fix "Missing return statement" mypy error (#49276)
b5b8fe9876 : Revert D25434956: [JIT] Use `is_buffer` in `BufferPolicy::valid`
693e908656 : [shape inference] fix ConstantFill
8d58362f59 : [PyTorch] Remove native::zeros reference in TensorIndexing (#49117)
635f1cd1a5 : Enable LayerNorm test cases
76d41c801e : [JIT] Fix toIValue handling of AttributeError when casting ClassType (#49188)
29f0fa36b1 : [Gradient Compression] Minor update of the comments on PowerSGD. (#49246)
21c38e1799 : Additional validation for DistributedSampler. (#48865)
6b78644623 : [te] Add BitCast to the IR (#49184)
5716b7db72 : Enabled Scalar lists (#48222)
bfce69d620 : inline `has` function for DispatchKeySet (#49191)
53aa9b8c82 : [FX] Move none assignments to same line (#49209)
2f359e7d55 : Add tensorpipe agent tests to multigpu tests. (#49210)
df027bfd2c : Modify Pipe to return an RRef. (#47829)
c6147ae4c9 : [PyTorch] Fix getCustomClassType() perf (#48981)
6c1b405a3b : Updated derivative rules for complex QR decomposition (#48489)
e3542d2c12 : [PyTorch] avoid unnecessary call to empty_tensor_restride in empty() (#48211)
4bc4ec2686 : Reduce kineto logging (#49216)
15200e385a : Enable torch.where() to support Float16 & BFloat16 type inputs (#49004)
218eaf4bba : pyi codegen refactor - no need to group python signatures by overload name (#49057)
33a9b14da0 : pyi codegen - removing byte-for-byte-compatibility hacks (sorting overloads) (#49056)
b94ec8c9f7 : pyi codegen - removing byte-for-byte compatibility hacks (#49055)
9920adebfd : pyi cleanup (#49054)
db5e5b439c : Extra sampling of record function events [resend] (#49114)
1cb5aa6c60 : Fix structured kernel codegen (#49244)
2a3bb1cea0 : [quant][graphmode][fx][fix] Fix typo in fusion (#49183)
796b267763 : fix backwards compatibility for #48711 and its revert (#49240)
f965b0fcfb : Expose run_async function on torch::jit::Method (#48607)
42c78ed745 : Tuple Slice with both negative and positive stepped size (#48660)
c0a0845019 : Improve new_group example in the context of SyncBatchNorm (#48897)
f10b53d9ea : [PyTorch Mobile] Record dtypes for tensors used in kernel function implementations (#48826)
f204f77e6d : Drop FutureNCCL in favor of vanilla CUDAFuture (#49014)
dcd1e3d78d : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
2bb2f641c4 : Bring fast_nvcc.py to PyTorch OSS (#48934)
88b3d3371b : add additional arm64 checker in cmake files (#48952)
2f1d1eb7df : Revert D25428587: [pytorch][PR] add additional interpolation modes for torch.quantile
5ab90b2fda : Make CUDAFuture remember and restore current device in callback (#48789)
2b1057b0cf : [RPC Framework] Support retrieving the RRef to the remote module (#48983)
8669f02573 : Saves a copy of vector<Tensor> in view ops returning TensorList. (#49149)
fce059d4ff : [te] Don't throw when re-registering a CodeGen factory (#49174)
56a157fc79 : hacky_wrapper_for_legacy_signatures reorders out arguments (#48911)
da6f249a10 : [caffe2] DeserializeToNDArray (#49135)
59e822026c : Add manual_cpp_binding to native_functions.yaml (#49092)
743a4ef0ae : [PyTorch] Enable AutoNonVariableTypeMode in static runtime (#49199)
696e30af6e : Fix ProcessGroupNCCL profiling when profiler is not run with use_cuda (#48946)
cc3b59f6df : [package] use bazel-style glob matching for mock/extern (#49066)
159f258415 : Update Kineto revision (#49200)
5469aa5e7f : [NNC] Add a non functional Tensor kind (#48750)
9b0ffb9fb3 : Delete cpp.group_arguments (#49043)
267641a245 : Rename positional and kwarg_only to have flat prefix (#49042)
0dea76ecda : Delete some dead functions from tools.codegen.api.meta (#49041)
882eb0f646 : [quant][graphmode][fx] Add support for dynamic quant for RNN and RNNCell (#49126)
a47a087a43 : [NNC] Add missing data type support for abs and frac (#48679)
7feec06dfe : Only 1 TensorImpl allocation in differentiable views. (#48896)
5e8cfec332 : Add a newline before dependency graph output (#49127)
57145c910f : Revert D24711613: [pytorch][PR] Preserve submodule with __set_state__ in freezing
80f7510d92 : [FX] Fix create_arg for NamedTuple (#48986)
69522410fa : add user vs internal msg support in common_utils.TestCase (#48935)
84fce6d29a : [AARCH64] Fix HAS_VST1 check if compiled by clang (#49182)
f4226b5c90 : [static runtime] add static subgraph fusion pass (#49185)
95a1725a4a : Vsx initial support issue27678 (#41541)
a3e1bd1fb9 : Preserve submodule with __set_state__ in freezing (#47308)
a480ca5302 : [JIT] Use `is_buffer` in `BufferPolicy::valid` (#49053)
c892c3ac9a : remove hacky_wrapper from BackendSelect (#49079)
21dba8c1ad : Make aten::div.out c10-full (#47793)
e1c1a7e964 : [ONNX] Changes to export API to better handle named arguments (#47367)
0c70585505 : fix #49064 (invalid escape) by using raw strings (#49065)
3b57be176e : [NNC] Preserve strided output (#48264)
0b9d5e65e4 : Remove inferred from tensor type ctors (#48263)
71ddc0ba19 : [TensorExpr Fuser] Add support for nodes which have tensor constant inputs (#47814)
413caa7fd2 : [NNC] Compute Tensor Output Properties in initialization (#47813)
0e666a9f5a : [TensorExpr] Cache use of fallback in kernel invocation (#47812)
70853c5021 : Dont use symbolic shapes check (#47810)
18c03b9f00 : make duplicate def() calls an error in the dispatcher (#48098)
2519348f60 : [Binary Push] Update the awscli installation, use conda install rather than brew install (#49175)
edbf9263ad : [iOS] Bump up the cocoapods version (#49176)
909a9060e9 : [vmap] implement batching rule for fill_ and zero_ (#48516)
840e71f4e6 : Check CUDA kernel launches (/fbcode/caffe2/) (#49145)
524adfbffd : Use new FFT operators in stft (#47601)
54f0556ee4 : Add missing complex support for torch.norm and torch.linalg.norm (#48284)
25a8397bf3 : add additional interpolation modes for torch.quantile (#48711)
45473ffe23 : Refactor cudnn convolution (#49109)
d5c4a80cfd : Allow ROCm CI to use non-default stream. (#48424)
195b92bfa6 : Revert D25441716: [te] Add BitCast to the IR
3384145418 : [te] Add BitCast to the IR
21c04b4438 : make AT_FFTW_ENABLED available to fb internal
33bc7918e8 : fix some comments in accelerator_partitioner.py (#49104)
c7b8f3e2cd : Decouple direct access to native::scalar_tensor from TensorIndexing.h (#48761)
2255e68da8 : Revert D25433268: [PyTorch Mobile] Preserve bundled input related methods when calling optimize_for_mobile
b5a7e25059 : Cache the DataPtrs in CUDAFuture (#48788)
030fa6cfba : Split out reusable CUDAFuture from FutureNCCL (#48506)
4c425e8da0 : Merge common parts of FutureNCCL into at::ivalue::Future (#48505)
9078088edb : Split FutureNCCL's CUDA-specific parts from generic future logic (#48504)
a6778989d1 : Support wider range of types in FutureNCCL (#48502)
9fe3ac3650 : Don't store device indices separately on FutureNCCL (#48501)
e294c2d841 : Add multi-GPU support to FutureNCCL (#48500)
91ad3ed831 : Fix FutureNCCL not recording dataptrs with caching alloc in wait() (#48563)
003c30ba82 : Fix FutureNCCL's completed() disagreeing with wait() (#48503)
b91b0872a1 : Record CUDA events for "follow-up" FutureNCCL inside markCompleted (#48499)
6157f8aeb5 : Use fresh stream from pool for each FutureNCCL callback (#48498)
8fb52e7fa2 : Make FutureNCCL record events in current stream (#48497)
e4267eb424 : Have FutureNCCL record streams w/ allocator in addCallback (#48496)
868a1a48c6 : Add some safeguards to FutureNCCL (#48562)
b7f5aa9890 : Remove NCCL dependency from PythonFutureWrapper (#48495)
7f7f0fa335 : Avoid using FutureNCCL before it's ready (#48561)
eb9516eaa4 : [numpy] `torch.exp{2, m1}`: promote integer inputs to float (#48926)
27f7d1c286 : Port `eig` CPU from TH to ATen (#43215)
95233870f2 : [PyTorch Mobile] Preserve bundled input related methods when calling optimize_for_mobile
9417e92722 : op to gen quant params from min-max thresholds
c5bc6b40ab : [NNC] Dead Store Elimination (#49030)
7a2abbd8fd : Revert D25416620: [pytorch][PR] Add version_info tuple
3123f878dd : [PyTorch] Avoid storage refcount bump in copy_tensor_metadata (#48877)
e69c2f85f6 : Add version_info tuple (#48414)
5375a479aa : Add type annotations to conv-relu (#47680)
e9ef1fe309 : [PyTorch Mobile] Add continuous build config for xplat/caffe2
16b8e6ab01 : Class-based structured kernels, with migration of add to framework (#48718)
a6fa3b2682 : adding profile_ivalue (#47666)
f431e47a2e : [collect_env] Acquire windows encoding using OEMCP (#49020)
5765bbd78c : Review memory overlap checks for advanced indexing operations (#48651)
dfa3808704 : [PyTorch] Remove aten::native::empty usage in TensorIndexing (#49074)
c7cc8a48c0 : migrating some straggler pytorch ops in fbcode to the new registration API (#48954)
67d12c9582 : Pass shape hints for AOT case (#48989)
bfa95f90a0 : Revert D25325039: Check CUDA kernel launches (/fbcode/caffe2/)
7a4a2df225 : Revert D25003113: make validate debug-only in Device copy ctr
fc0a3a1787 : Improve torch.fft n-dimensional transforms (#46911)
f5e9ffbc27 : Check CUDA kernel launches (/fbcode/caffe2/) (#49105)
7584161dfa : Enhance `new_group` doc to mention using NCCL concurrently. (#48872)
c62f3fc40b : fix clang-tidy warning - make global TorchLibraryInit objects const (#48956)
b98e62f8eb : [te] Add gflag for fast intrinsic expansion (#49060)
44f33596d3 : [pe] Add gflags for num_profiled_runs and bailout_depth, lint (#49059)
e5a98c5ab0 : [ONNX] Remove usage of isCompleteTensor() in symbolic functions (#48162)
41fd51d7d8 : [PyTorch] Reference to c10::GetCPUAllocator() directly (#49068)
b3ab25aefa : [numpy] `torch.cosh`: promote integer inputs to float (#48923)
492580b855 : [te] Remove vestigial __init__.py from test/cpp/tensorexpr (#49061)
9f7fb54693 : Revert D25111515: Extra sampling of record function events
73f7178445 : remove redundant sccache wrappers from build.sh scripts (#47944)
4b26cafb8f : make validate debug-only in Device copy ctr (#47854)
71cfb73755 : Add complex support to broadcast_coalesced (#48686)
09b974c2d5 : Extra sampling of record function events (#48289)
a20d4511e4 : [PyTorch] TensorImpl::is_non_overlapping_and_dense_ should default to true (#48625)
a849f38222 : skip cuda test_cholesky_solve_batched_many_batches due to illegal memory access (#48999)
e8b00023b2 : [ROCm] restore autograd tests (#48431)
1c31f76297 : Add high level profiling trace for dataloading and optimizer (#47655)
2d9585a6a1 : [quant][graphmode][fx] Add test for ResnetBase (#48939)
59a3e76641 : [pt][quant] Remove contiguous calls in qembeddingbag (#48993)
7c0a3e3a06 : Annotate torch._tensor_str (#48584)
34cc77a811 : Torch onnx (#48980)
5450614cf6 : Correctly apply WIN32_LEAN_AND_MEAN to the whole repo (#49025)
4434c07a2c : [quant][fix] Support quantization of ops where input is quantizable (#49027)
993ce4b206 : [quant][graphmode][fx] Add MatchAllNode in pattern matching (#48979)
107c31f2f5 : Add a pass to fetch attributes of nn.Module to fx.node (#47935)
3f9ff48ebb : [JIT] Allow del statements with multiple targets (#48876)
d033e185ed : fx quant: move more functions to utils (#48908)
2668ea8087 : fx quant: move qconfig utils to utils file (#48907)
17e71509a6 : fx quant: quick cleanup for model_device (#48906)
e538bd6695 : [collect_env] Add candidate paths for nvidia-smi on Windows (#49021)
02b63858f2 : [CUDAExtension] support all visible cards when building a cudaextension (#48891)
6000481473 : add a unit test for large node error (#48938)
5960581148 : CUDA BFloat16 batchnorm (non-cuDNN) (#44994)
e8ec84864f : [StaticRuntime] Add aten::narrow (#48991)
d1fb4b4ffc : Put Flake8 requirements into their own file (#49032)
2b70bcd014 : [TensorExpr] Enable inlining for output tensors too. (#48967)
0fb9d36660 : Delete ATen mirror stuff (#49028)
dee82ef3ea : Add LKJCholesky distribution (#48798)
c92c8598a3 : [FX][2/2] Make docstrings pretty when rendered (#48871)
b89c328493 : Add fftw3 cmake as alternative for FFT/DFT (#48808)
b0e919cf60 : Avoid initializing gradInput twice in the backward phase of replication (#48890)
274ce26fd8 : [static runtime] Add Internal Ops to the registry (#48616)
ad3fed8b90 : [BE] Fix signed-unsigned warnings (#48848)
c29f51642e : Modify NEON check for ARM64 on OS X (#48982)
58c13cf685 : Back out "Revert D25375885: [pytorch][PR] Reenable some BF16 tests on CUDA"
e2befb84bc : minor README change to fix #25464 (#48970)
39445f718c : Revert D25375885: [pytorch][PR] Reenable some BF16 tests on CUDA
07978bd62e : [static runtime] fuse inference ops (1) (#48948)
b643dbb8a4 : VariableType calls faithful C++ API for c10-full out ops (#47792)
3ef36dca8e : Faithful out arguments (#47712)
046ea6696d : Enable faithful API for all ops (#47711)
32b098baf9 : Add and adjust kernel launch checks (#46727)
cb6233aa53 : Fix some convoluted(?) code (#48893)
c3a90bedd4 : Move aten::__contains__.int_list for lite jit (#48950)
881e9583b2 : docker: Add make variable to add docker build args (#48942)
5533be5170 : CUDA BF16 backwards (#48809)
3aeb9cc85d : [DOCS]Correct docs for torch.lu_solve (#47762)
bea88ee1d0 : Added entry for torch.linalg.cond to linalg.rst (#48941)
c876d4f477 : [Gradient Compression] Let the dtype of created low-rank tensors P and Q be the same type as the input tensor (#48902)
533c837833 : Register OpInfos for torch.fft transforms (#48427)
adbb74ded9 : [package] pre-emptively install submodules (#48799)
e3893b867f : Reenable some BF16 tests on CUDA (#48805)
7629612f9f : Update torch.randint documentation to include missing note (#48787)
f67259fe89 : Fix CI by removing gen_pyi from mypy-strict.ini (#48961)
b77ca9e829 : [Docs] Add examples for new object-based c10d APIs (#43932)
d6b5f3ad98 : Add object-based collective APIs to public docs (#48909)
88ebf6f894 : Revert D25304229: [pytorch][PR] Add type annotations to torch.onnx.* modules
d307601365 : Revert D24923679: Fixed einsum compatibility/performance issues (#46398)
924b001b71 : #48733 added logging statements to LLVM codegen using JIT logging (#48758)
dad74e58fc : [WIP] Added foreach_trunc, foreach_reciprocal, foreach_sigmoid APIs (#47385)
ba6511b304 : pyi codegen update - remove Declarations.yaml (#48754)
f2c3efd51f : Fix generator exhaustion in SparseAdam (#47724)
21ba48fe49 : [vulkan] test_app for mobilenetV2 on vulkan api (#48924)
36df25334f : Fix incorrect usage of CUDACachingAllocator [v2] (#48817)
8bc6023d7a : Add type annotations to torch.onnx.* modules (#48782)
1febd2225b : Add explicit cast to cuda_atomic_ops_test.cu (#48886)
00f01791a3 : [Caffe2] Add more error messages in ComputeBinaryBroadcastForwardDims
a39398b9e5 : CUDA BF16 norm (#48806)
19f4c5110e : Add another torch::jit::load API to load PyTorch model with shared_ptr PyTorchStreamReader input (#48802)
e429d05015 : Fixing error: "member may not be initialized" due to constexpr at Windows (#48836)
ea2a568cca : Fixed einsum compatibility/performance issues (#46398) (#47860)
17f53bffef : [Gradient Compression] Replace the key of error_dict in PowerSGD state with bucket index (#48867)
2e600feda9 : [numpy] `torch.sinh`: promote integer inputs to float (#48644)
195ab5e864 : remove non-default settings in fuser.py (#48862)
85121a7a0f : Added CUDA support for complex input for torch.cholesky_solve (#47047)
5de22d3f69 : Removes redundant method_test entries (#48828)
0185a05ceb : Revert D25338250: [pytorch][PR] [BE] Fix signed-unsigned warnings
ae9f39eb58 : [FX][1/2] Make docstrings pretty when rendered (#48738)
0fb58d76a1 : Support ArgMin in c2_pt_converter
251398acca : Force a sync on non-CPU tensors for the benchmark to reflect the timing accurately. (#48856)
0923d19601 : fx quant: add types to quantization_patterns (#48851)
fa5f7d87bf : fx quant: add typing for fuser (#48844)
63a71a82cf : [ROCm] add 3.10 to nightly builds (#48866)
799b700ada : add a unit test for lack of devices (#48858)
5180caeeb4 : Remove deprecated spectral ops from torch namespace (#48594)
7439bc4dd6 : [Gradient Compression] Add an index field to GradBucket for PowerSGD (#48757)
6317e0b2f1 : [BE] Fix signed-unsigned warnings (#48848)
55b93735ac : [PyTorch] Save refcount decrements in StaticRuntime::deallocate_registers (#48859)
af30a89068 : [caffe2][a10] Remove unreferenced local variable e (#48601)
f0f315c33b : [PyTorch] Inline RecordFunctionCallback::shouldRun (#48286)
02d89f9f1d : scatter_object_list API for c10d (#43930)
a3298c2f64 : Implement JIT serialization of ProcessGroup (#48544)
3f10518def : [PyTorch] Add VariableVersion&& overload for TensorImpl::shallow_copy_and_detach (#48681)
9e10e3b74f : [PyTorch] Move TensorImpl::shallow_copy_and_detach to .cpp file (#48680)
092e52a4da : [fx] added prototype of to_folder (#47544)
03abd81b8d : [ROCm] Enable skipped distributed global tests (#48023)
9bb87fa58b : [te] Fix spacing in graph dump (#48829)
2d07d5b50a : [te] Don't fuse integer fmod or remainder (#48700)
5654fc8edd : Revert D25293474: [pytorch][PR] Server connects to its listen socket addr
4b8d965f18 : Revert D25292656: [pytorch][PR] Support torch.distributed.irecv(src=None, ...)
212ec07cb7 : Support torchbind as attribute in torch.fx symbolic tracing (#48732)
b9cd774e29 : Get rid of printf in cuda fuser debugPrint() (#46994)
ca3ae7dc73 : [DI] create a new key for threadLocalDebugInfo (#48762)
0f9823d888 : [PyTorch] Save some space in ProcessedNode (#48861)
142b21fd44 : Add SparseLengthsSum4BitRowwiseSparse in c2_pt_converter (#48240)
4eb4db7c30 : Support torch.distributed.irecv(src=None, ...) (#47137)
e1f9542d00 : Revert D23898398: [Mask R-CNN]Add Int8 AABB Generate proposals Op
7c9ba62130 : Server connects to its listen socket addr (#46801)
42e6951e62 : Remove save_state_warning in LambdaLR (#46813)
714c7020ee : [Mask R-CNN] Add Int8 AABB Generate proposals Op
ba3962f5f0 : [Onnxifi] Warmup cache of output shapes (#48346)
0a42003f8f : [TensorExpr Fuser] Handle fusing values with un-profiled uses (#48689)
31808dcdd8 : [RELAND] [CUDA graphs] Make CUDAGeneratorImpl capturable (ci-all edition) (#48694)
9af627fda1 : fix some typos in the fx ir test_fx_experimental (#48847)
a5fb12d168 : RRef proxy support for ScriptModule methods (#48339)
fadec77c30 : [quant][fx][graphmode] Renable torchvision test (#48602)
07d185ef05 : [ROCm] add 3.10 docker image (#48791)
bc2352e8c3 : [NNC] Complete SimpleIREvaluator support for bitwise ops (#48053) (#48179)
3a0d4240c3 : Fix broadcast_all crashing on Tensor-likes (#48169)
eb43e12ee4 : Revert D25277886: [pytorch][PR] Replace constexpr with CONSTEXPR_EXCEPT_WIN_CUDA
6ab84ca0f3 : Implement NumPy-like function torch.msort() (#48440)
cb285080b0 : Added computing matrix condition numbers (linalg.cond) (#45832)
4cc163f8ec : Add deadline to fakelowp tests (#48823)
2181ff89bb : [vulkan][test] Not use non 1 dilation for conv2d (#48800)
5fd61de99e : [ONNX] Added hardswish symbolic in opset 9 (#48423)
15bc21c280 : [ONNX] Track and list model params for scripting (#47348)
f065087567 : [ONNX] Handle dynamic input axes for prim_ConstantChunk (#48176)
86540dbf41 : Fix jit doc model loading example (#48104)
c55d45f04b : [qnnpack] Fix unused var warning when building for different archs. (#48730)
f5d94244b2 : fx quant: more typehints, part 3 (#48794)
54da2dadd8 : fx quant: more typehints, part 2 (#48792)
f5bcf45e3b : fx quant: add more typehints (#48774)
c98c617b44 : fx quant: clean up functions in _prepare (#48773)
536352e86f : fx quant: clean up functions in _generate_qconfig_map (#48772)
16fd1c32c5 : [ONNX] Update batch_norm symbolic to handle track_running_stats=False (#47903)
cf1e5d7d2b : Ignore MSVC's pdb file (#47963)
cc1c3063c5 : Add test binary to compare torch model outputs (#47933)
b3ac628081 : [JIT] Fix bug in get_annotation_str for ast.Subscript (#48741)
e7038a7725 : Improve an autograd warning (#48765)
1eed54d17a : Upgrade oneDNN (mkl-dnn) to v1.7 (#47853)
47aa253632 : [Feature] Allow user to specify a fraction of the GPU memory. (#48172)
c134f32835 : Implemented torch.inner (#46716)
b726a1bbf8 : quantize bias of the quantization parameters (#48749)
dabc286ab3 : Remove output used only by sizes (#448) (#47665)
2cb9204159 : Add nondeterministic alert to index_copy, median CUDA and kthvalue CUDA (#46942)
c2ad3c4e6a : Add scary comment in cpp_custom_type_hack.h (#48737)
416dc68341 : [Pytorch][Annotation] Update inlined callstack with module instance info (#47416)
5c9cef9a6c : [numpy] Add `torch.moveaxis` (#48581)
befab0d9d4 : [ONNX] Cast Gather index to Long if needed (#47653)
92f376147c : Enable TCPStore on Windows (#47749)
93973ee699 : Header cleanup (#48728)
f9a0abfc43 : Fix code review from #48659 and #48116 (#48731)
d6f9e8562b : Generalize some TensorIterator consumers to take TensorIteratorBase (#48727)
c01e5b8827 : Simplify CachingAllocator. (#48752)
ef50c94e7c : reenabling MPI test (#48725)
0484b048d0 : Replace constexpr with CONSTEXPR_EXCEPT_WIN_CUDA (#48717)
5489a98cd3 : Add support for CorrCholeskyTransform (#48041)
313e77fc06 : Add broadcast_shapes() function and use it in MultivariateNormal (#43935)
c7746adbc6 : Revert D24874754: [pytorch][PR] Add test for empty tensors for batch matmuls
79b9c03465 : Optimize torch zeros (#45636)
1112773cf5 : Fix unintended error when worker force kill happens #43455 (#43462)
85c1e8acdc : Replace kernel resource strings with real .cu source files (#48283)
5f105e2aa6 : Add test for empty tensors for batch matmuls (#47700)
ea573ea944 : [quant][graphmode][fx] Standalone module takes float as input and output (#48671)
22c3ae8b57 : Disable autocast cache for tensor views as fix for #48049 (#48696)
0e4f9a7872 : Refactored OpInfo testing to support custom SampleInputs, added addmm to op_db to test (#48627)
90faf43151 : Support for OpInfo-based testing for operators in JIT (#47696)
9c35a68094 : Refactored assertAutodiff test to have better error message (#48567)
c465602d78 : Refactor existing JIT testing utils to enable new OpInfo test suite to reuse existing logic (#47695)
1195403915 : [NNC] Add cpu fusion gflag (#48682)
0d39bd47cf : only enable cudnn persistent RNN when batchsize % 8 == 0 (#48070)
a1daf1e678 : Use fastAtomicAdd in GPU upsampling trilinear (#48675)
5f62308739 : Hipify revamp [REDUX] (#48715)
780f2b9a9b : torch: Stop using _nt_quote_args from distutils (#48618)
95311add49 : Vulkan linear memory allocator. (#48569)
90a3049a9a : [fix] repr(torch.device) (#48655)
b006c7a132 : Add reparameterization support to `OneHotCategorical` (#46610)
de46369af7 : [vulkan] Distribute weight prepacking along y dimension for conv2d (#48266)
a4e13fcf3f : add type annotations to common_nn.py (#48190)
a49e2c5ce6 : Remove "-b" option from `pip install` command (#48742)
fc1153a8be : [JIT] Fix clang-tidy warnings in jit/runtime (#47992)
a25d52f4e6 : [JIT] Fix clang-tidy warnings in jit/serialization (#47991)
34b2304e34 : [JIT] Fix clang-tidy warnings in jit/testing (#47986)
18eccfbe42 : [JIT] Fix clang-tidy warnings in jit/python (#47985)
8746e1a1cc : [JIT] Fix clang-tidy warnings in jit/passes (#47984)
9b973eb275 : [JIT] Fix clang-tidy warnings jit/ir (#47983)
3039d24f4a : [JIT] Fix clang-tidy warnings for jit/frontend (#47982)
4aa5d68874 : [JIT] Fix clang-tidy warnings for jit/api (#47981)
83c76611d5 : [package] Support glob matching (#48633)
88735f2cc9 : [package] move importer logic into import pickler (#48632)
ce3484595e : [packaging] missing quotation in graphviz printout (#48344)
15fc66d6c8 : fix nvrtc PTX architecture cap for CUDA toolkit (#48455)
bdb68d9b0b : [reland] [ROCm] remove versions less than 3.8 (#48723)
4d26941a9b : Fix lite interpreter record function issue. (#47457)
4fcdbb824b : Updating all call-sites of the legacy dispatcher registration API in fbcode to the new API. (#48178)
022c929145 : Revert "Revert D25199264: Enable callgrind collection for C++ snippets" (#48720)
b2ec21a05a : [ROCm] Enable deterministic rocBLAS mode (#48654)
52f0af03f8 : [reland][quant][fix] Add bias once in conv_fused (#48593) (#48661)
0db73460db : [quantization] fix run_arg tiny bug (#48537)
f61de25dfa : Fix index_put doc. (#48673)
071344debe : Fix index parsing on Python-3.9 (#48676)
3c5db30eaa : Update magma to 2.5.4 for Windows (#48656)
c98c98d77d : Migrate `fmod` and `fmod_` from TH to ATen (CUDA) (#47323)
dc367e7903 : Delete "-b" flag from pip install command (#48722)
4abca9067b : Fix dataloader hang with large sampler (#48669)
3b25af02a4 : matrix_exp + matrix_exp.backward complex support (#48363)
e41e780f7a : Added support for complex input for torch.lu_solve #2 (#48028)
6d6e9abe49 : Delete NativeFunctions.h include from Functions.h (#48687)
e097f8898c : Move var and std overloads to Functions.cpp and remove native:: reference (#48683)
6ba7709415 : Refactor TensorIterator to do allocations via MetaBase::set_output (#48659)
742903c0df : Move argument grouping into FunctionSchema (#48195)
ba5686f8c5 : Refactor argument fields in FunctionSchema to Arguments (#48182)
b4f5efa7b2 : Structured kernels generate Meta registrations (#48116)
47db191f0c : Implement Kumaraswamy Distribution (#48285)
9c6979a266 : [Gradient Compression] Error feedback for PowerSGD (still need to fix the key in error_dict) (#48670)
463e5d2f12 : Disable pruning on embedding look up operators when compressed_indices_mapping = {0} (#48672)
74330e0497 : Added linalg.matrix_rank (#48206)
6646ff122d : Revert D25199264: Enable callgrind collection for C++ snippets
6299c870ee : Revert D25254920: [pytorch][PR] Add type annotations to torch.onnx.* modules
bcc85a363e : [numpy] `torch.sigmoid` : promote integer inputs to float (#47551)
44016e66c4 : Revert D25097324: [pytorch][PR] [ONNX] Cast Gather index to Long if needed
15abf18b67 : [Mask R-CNN] Add int8 aabb bbox_transform op
40a2dd7e1e : Add type annotations to torch.onnx.* modules (#45258)
ff097299ae : Enable callgrind collection for C++ snippets (#47865)
0225d3dc9d : Add support for timing C++ snippets. (#47864)
17ea11259a : Rework compat bindings. (#47863)
07f038aa9d : Add option for cpp_extensions to compile standalone executable (#47862)
27905dfe9c : Expose CXX_FLAGS through __config__ (#47861)
b824fc4de2 : [pytorch] [PR] Rename cuda kernel checks to C10 (#48615)
25e367ec48 : Revert D25246563: [pytorch][PR] [ROCm] remove builds for versions less than 3.8
8b2ca28c1d : Add an option to run RPC tests with TCP init (#48248)
d0e9523c4f : [TensorExpr] Add more operator tests. (#48677)
f7986969af : [FX] Delete values after their last use (#48631)
cff1ff7fb6 : Suppress unsigned warning (#48272)
18f1cb14d5 : Avoid resizing ones array when bias is not used (#48540)
f5788898a9 : TensorIteratorConfig is not used by reorder_dimensions (#48613)
75f38c2fa9 : ret is never reassigned, return 0 directly (#48609)
30324d1e71 : fix INTERNAL ASSERT FAILED for maximum (#48446)
1c02be1b6a : Fix AttributeError in _get_device_attr (#48406)
4fe583e248 : fix move default not compile correctly on cuda92 (#48257)
54022e4f9b : add new build settings to torch.__config__ (#48380)
d9c76360b2 : Add cuda_ipc channel to TensorPipe (#46791)
e3713ad706 : Let JIT unpickler to accept CUDA DataPtr from read_record_ (#46827)
5f181e2e6e : centos now installs cmake from conda (#48035)
3ceec73db9 : [PyTorch] Lazily construct guts of RecordFunction (#47550)
d1df4038ff : [PyTorch] Make RecordFunctionCallback::should_run_ a function pointer (#48274)
9342b97363 : change global_fp16_constants for test_fc_nnpi_fp16 (#48663)
c5f1117be2 : [ROCm] remove builds for versions less than 3.8 (#48118)
aaf6582d02 : fix issue by which pytorch_jni is not bundled in libtorch (#46466)
7c73fda501 : Remove `balance` and `devices` parameter from Pipe. (#48432)
74d6a6106c : Fuzzing benchmark for FFT operators (#47872)
df6fc3d83a : Fix complex tensors and missing data in benchmark utility (#47871)
f80aaadbae : fx quantization: add option to leave graph inputs and/or outputs quantized (#48624)
98fddc1f06 : Revert D25172740: [pytorch][PR] [CUDA graphs] Make CUDAGeneratorImpl capturable
0066b941f1 : Add CUDA kernel checks to fbcode/caffe2/caffe2/sgd (#48347)
736e8965e5 : Change the type hints of "pooling.py". (#48412)
c81f2d9a2f : Revert D25222215: [quant][fix] Add bias once in conv_fused
dc7ab46dcc : Fix incorrect warnings in ParameterList/Dict (#48315)
492683bd42 : Add LazyConvXd and LazyConvTransposeXd (#47350)
ccd20e995f : [vulkan] convolution old prepacking via cpu-shader (#48330)
55fc0e9e53 : [ONNX] Cast Gather index to Long if needed (#47653)
02e58aabe1 : [ONNX] Support nonzero(*, as_tuple=True) export (#47421)
acd4fca376 : [caffe2][torch] Clean up unused variable 'device' (#48600)
9500e8a081 : Testing: Improve interaction between dtypes and ops decorators (#48426)
d2e429864c : [quant][fix] Add bias once in conv_fused (#48593)
7a59a1b574 : add aot_based_partition (#48336)
ddb6594971 : [Gradient Compression] Add a random generator to PowerSGD state for initializing low-rank matrix Q (#48507)
61936cb11e : [PyTorch][JIT] Parameter passing & std::map API usage pass on ProfilingRecord::instrumentGraph (#47960)
dc7d8a889e : caffe2: refactor context to allow being typed (#48340)
adb4fd3f2f : [te] Fix comparison ops on booleans (#48384)
d9f5ac0805 : [TensorExpr] Add a envvar to disable LLVM backend and use IR Eval instead. (#48355)
a6f0c3c4f0 : [TensorExpr] IREval: fix div for Half dtype. (#48354)
671a959233 : Disable fast sigmoid since it causes divergence (#48623)
29f0e1e2ce : Fused8BitRowwiseQuantizedToFloat operator support (#48407)
c3bb3827f9 : remove unused params in scalar_tensor_static (#48550)
ea0ffbb6e6 : [vulkan] Fix Addmm prepacking to persist after GPU flush (#48313)
5b6b1495b9 : Update Windows CI to CUDA 11.1, cuDNN 8.0.5 (#48469)
7f869dca70 : [ROCm] update debug flags (#46717)
d6ddd78eb0 : Fix multiple spelling and grammar mistakes (#48592)
2200e72293 : [CUDA graphs] Make CUDAGeneratorImpl capturable (#47989)
4976208e73 : [caffe2] Register BlackBoxPredictor AllocationArenaPool as CPUCachingAllocator (#48161)
d386d3323f : [dper] supress excessive msg (#48404)
d74f2d28a1 : Fix bazel build after sleef update (#48614)
66440d1b29 : Tweak Vulkan memory use. (#47728)
8f8738ce5c : [vmap] implement batching rules for clamp, clamp_min and clamp_max (#48449)
b5149513ec : migrate export_caffe2_op_to_c10.h macros to the new dispatcher registration API, update code_analyzer regex (#48308)
032e4f81a8 : Fix test comparison ops check for scalar overflow (#48597)
b84d9b48d8 : Fix the typo errror in the line #953 of the docs of 'torch/nn/modules/activation.py' (#48577)
eba96b91cc : Back out "[pytorch][PR] [JIT] Add `__prepare_scriptable__` duck typing to allow replacing nn.modules with scriptable preparations"
fe80638212 : added docs to nn.rst (#48374)
4e15877d5c : Add documentation for torch.overrides submodule. (#48170)
42e7cdc50a : Improve libuv detection on Windows (#48571)
0213a3858a : .circleci: Add python 3.9 builds for windows (#48138)
af520d9d04 : [cmake] clean up blas discovery (#47940)
0b66cdadb6 : Pin the rest of flake8 dependencies. (#48590)
e41d8b3d3d : [JIT] adding missing test cases for test_isinstance.py (#47396)
3c9e71c9ad : fix BUILD_MOBILE_BENCHMARK typo (#48515)
5bb2a87a94 : Update sleef to fix build issues (#48529)
5cb688b714 : Merge all vec256 tests into one framework (#47294)
bdf360f9f2 : [ONNX] Update onnx submodule (#47366)
755b8158e2 : Fix __config__ docs (#48557)
0e5682d26b : Pruning codeowners who don't actual do code review. (#48109)
2fe382e931 : annotate torch._tensor_str (#48463)
36c87f1243 : Refactors test_torch.py to be fewer than 10k lines (#47356)
272f4db043 : Implement NumPy-like function torch.float_power() (#44937)
25ab39acd0 : [numpy] `torch.asin` : promote integer inputs to float (#48461)
344918576c : Migrate `eig` from the TH to Aten (CUDA) (#44105)
f95af7a79a : [numpy] `torch.erf{c}` : promote integer inputs to float (#48472)
7df8445242 : torch.fft: Remove complex gradcheck workaround (#48425)
5dfced3b0d : work around #47028 until a proper fix is identified (#48405)
84fafbe49c : [docs] docstring for no type checked meshgrid (#48471)
c5ce995834 : reintroduce deadline removal (#48481)
8b248af35d : Alias _size_N_t to BroadcastingListN[int] (#48297)
e7ca62be08 : Fix PyTorch compilation on Apple M1 (#48275)
18ae12a841 : Refactor mkl fft planning to not use Tensor objects (#46910)
6a37582162 : Fix misleading doc string in quint8.h (#48418)
e56e21b775 : Grammatically update the readme docs (#48328)
f1c985695c : Enabled gloo backend in test_distributed unit tests for ROCm (#40395)
db1b0b06c4 : Flake8 fixes (#48453)
55e225a2dc : Int8 FC fix to match NNPI ICE-REF step-C (#48459)
3858aaab37 : Fix syntax issue in c++ cuda api note (#48434)
4ab2055857 : Re-enable only cuda tests wrongly disabled before (#48429)
9ecaeb0962 : [numpy] Add unary-ufunc tests for `erf` variants (#47155)
33cc1d6a64 : [docs] fix torch.swap{dim/axes} to showup in docs (#48376)
bc2c1d7d59 : quant: make each line of fx/quantize.py <=80 chars (#48357)
1d984410fb : quant fx: fix typo (#48356)
8581c02a3f : quant: add type annotations on quantization.fx.Quantizer matches (#48350)
f7a8bf2855 : Use libkineto in profiler (#46470)
e9efd8df1b : [numpy] `torch.log1p` : promote integer inputs to float (#48002)
2e0a8b75d8 : An implementation of torch.tile as requested in pytorch/pytorch#38349 (#47974)
2dff0b3e91 : Fix typos in comments (#48316)
671ee71ad4 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
4ed7f36ed1 : Added linalg.eigh, linalg.eigvalsh (#45526)
b6654906c7 : Fix assertEqual's handling of numpy array inputs (#48217)
f2da18af14 : Add USE_KINETO build option (#45888)
c5e380bfcb : quant: add type annotations on quantization.fx.Quantizer class vars (#48343)
6b80b664bb : quant: enable mypy on torch/quantization/fx (#48331)
cac553cf34 : [Gradient Compression] clang-format test_c10d.py (#48349)
6400d27bbb : [Gradient Compression] Define a customized state for PowerSGD comm hook (#48348)
b967119906 : [TensorExpr] Fix lowering for aten::div. (#48329)
5e1faa1d41 : [TensorExpr] Fix aten::atan2 lowering and disable aten::pow lowering on CPU. (#48326)
f1d328633c : Fix mypy error (#48359)
a00ba63023 : Disable old fuser internally (#48322)
636fa8fda8 : [quant] Add backend_independent option for quantized linear module (#48192)
fdc62c74a6 : Add Kineto submodule (separate PR) (#48332)
6615edaf9a : [Pytorch Mobile] Disable OutOfPlace calls for mobile (#48255)
16d089733b : Enable creation and transfer of ScriptModule over RPC (#48293)
44def9ad71 : [quant][fix] Fix quantization for qat.ConvBnReLU1d (#48059)
50e42b9092 : Explicitly cast an implicit conversion from some macro defined type to a double (#48290)
0a3db1d460 : [FX] Prototype Conv/BN fuser in FX (#47657)
6d0947c8cf : Revert D25093315: [pytorch][PR] Fix inf norm grad
f8722825b5 : Compare Weights FX Implementation (#48056)
fefd56c4db : Remove an accidental copy in a range-based for loop (#48234)
286cdf3cda : [static runtime] add static registry (#48258)
0984d3123a : [static runtime] add more _out variants (#48260)
87bfb2ff08 : Automatically infer the type of the iterator in a range-based for loop (#48232)
8f1af0947c : [iOS] Fix the fbios armv7 pika build
dc843fe197 : Fix test_ldexp on Windows (#48335)
7be30d1883 : Move CUDA kernel check to c10 (#48277)
8177f63c91 : Reorganize and refine the Windows.h import in C++ files (#48009)
6d5d336a63 : Revert D25108971: [pytorch][PR] enable cuda11.1 and cudnn 8.0.5 in CI
d1b8da75e6 : [JIT] Metacompile boolean constants (#46721)
6eaf1e358c : caffe2/core.Net: is_external_input rebuild lookup tables when necessary
ca880d77b8 : Fix inf norm grad (#48122)
63b04dc11d : Update index.rst (#47282)
68a50a7152 : Replace `GatherRangesToDense` operator in Dper from c2 to pt.
55d5b27343 : Refactor request_callback_no_python.cpp processRpc function (#47816)
562d4c3bc5 : Add basic ldexp operator for numpy compatibility (#45370)
ec256ab2f2 : implement torch.addr using TensorIterator based kernels (#47664)
eb49dabe92 : [TensorExpr] Add even more operator tests. (#48292)
efd41db32c : [TensorExpr] Add more operator tests. (#48282)
56129bdea2 : remove having no deadline for the test (#48226)
de284b6d35 : [pytorch][codegen] add autograd data model (#48249)
fa41275899 : [Pytorch] Weaker memory ordering for c10::intrusive_ptr (#48221)
d6b374956f : [JIT] Resolve `torch.device` in recursive compilation of classes (#47734)
28580d3c0f : Add TorchBind-based Python and TorchScript binding for ProcessGroup (#47907)
7828a22094 : fix a bug in leakyReLU (#48265)
998c4cac9a : [FX] Add Node.all_input_nodes (#48270)
aa8aa30a0b : third_party: Update pybind to point to fork (#48117)
84d4e9c4fa : enable cuda11.1 and cudnn 8.0.5 in CI (#48242)
1a6666c967 : [Gradient Compression] Add a comment on _orthogonalize. (#48253)
3c936ecd3c : Revert D25056091: migrate export_caffe2_op_to_c10.h macros to the new dispatcher registration API
0ea4982cf3 : migrate export_caffe2_op_to_c10.h macros to the new dispatcher registration API (#48097)
4b56aef05d : add kl_based_partition (#48197)
c0723a0abf : Add MessageTypeFlags enum for RPC Messages (#48143)
feb6487acf : Dont skip NCCL backend when testing all_reduce_cuda (#48231)
685cd9686f : Refactor CuFFTConfig to not use tensor objects (#46909)
2039ff3fbb : [Caffe2] Optimize MishOp on CPU (#48212)
f98ab18445 : [pytorch][codegen] move is_abstract property to NativeFunction model (#48252)
9b19880c43 : Fix collect_env.py with older version of PyTorch (#48076)
343b3e5cae : Added linalg.tensorinv (#45969)
678fe9f077 : Add blas compare example (#47058)
008f840e7a : Implement in-place method torch.cumsum_ and torch.cumprod_ (#47651)
fe6bb2d287 : [PyTorch] Declare the instantiation of PackedConvWeightsQnnp<2>::prepack (#48256)
1dd4f4334c : docker: Make CUDA_VERSION configurable (#48199)
a7153a89a5 : Exclude docs/cpp/src from flake8 (#48201)
975ff6624b : DOC: backport doc build fix from 1.7, tweak link (#47349)
c542614e53 : Implement C++ ModuleDict (#47707)
c4a6df989c : Pass any verbosity from test/run_test.py to pytest (#48204)
370310bedb : batched grad for binary_cross_entropy, symeig (#48057)
db767b7862 : Add c10d new frontend to build (#48146)
daff3a81a1 : [Gradient Compression] PowerSGD comm hook (#48060)
0d8ddb5ec2 : Make softmax and log_softmax handle negative dims, add tests (#48156)
46d846f5bb : T78750158 Support varying size input in numeric suite at 10/30/2020, 3:55:01 PM (#47391)
8819bad86c : Implement igammac (3rd PR) (#48171)
c5dae335e4 : [PT][StaticRuntime] Move prim op impl to ops.cpp (#48210)
6da26fe79b : [te] Fix pow (#48213)
ed57f804fa : [quant][refactor] Move some util functions from torch/quantization/fx/utils.py to torch/quantization/utils.py (#48107)
4316bf98f5 : [FX] Refactor unique name handling (#48205)
bef460a803 : [PyTorch] Return raw ptr from ThreadLocalDebugInfo::get() (#47796)
5883e0b0e0 : [quant][fix][ez] Fix quant_type classification for fp16, fp16 (#48073)
773d1f3208 : [Person Seg] Compress the person seg model (#48008)
a97d059614 : Get TestTorch.test_empty_meta working again (#48113)
4c9eb57914 : [PyTorch] Narrow Device to 2 bytes by narrowing DeviceType and DeviceIndex (#47023)
72918e475e : [quant] FakeQuantize inherit from FakeQuantizeBase (#48072)
efeb988518 : Suppress "ioctl points to uninitialised" check (#48187)
576fa09157 : [quant][fix] Fix quant type classification for float_qparam qconfig (#48069)
f0f8b97d19 : Introducing winograd transformed fp16 nnpack to PT for unet 106 (#47925)
383abf1f0c : [PyTorch] Make RecordFunction::active private (#47549)
1bafff2366 : [PyTorch][JIT] Skip unnecessary refcounting in TensorType::merge (#47959)
0f89be616a : Removing non-thread-safe log statement from ReinitializeTensor (#48185)
4360486346 : pass strict_fuser_check for recursive fusion (#47221)
ea1e78a0c5 : Revert D24853669: [pytorch][PR] Migrate `eig` from the TH to Aten (CUDA)
2fbd70d336 : fft: Generalize fill with conjugate symmetry and use complex dtypes (#46908)
0639387ff1 : move Tensor comparisons back to C (#48018)
ed4dd86567 : move aten::round to lite interpreter (#45931)
a36e646878 : [pytorch][codegen] simplify python signature creation logic (#47977)
5eaf8562cd : [pytorch][codegen] simplify dunder method check in gen_python_functions.py (#47976)
5243456728 : [pytorch][codegen] remove dead code in gen_variable_type.py (#47975)
07657b6001 : [tensorexpr] Switch cpp tests to pure gtest (#48160)
464d23e6b4 : [te][benchmark] Add more optimized versions of gemm (#48159)
8a996dd139 : [te] Make BUILD_TENSOREXPR_BENCHMARK a real CMake option (#48158)
866f8591be : Migrate `eig` from the TH to Aten (CUDA) (#44105)
8af9f2cc23 : Revert D24924736: [pytorch][PR] Hipify revamp
68a3a3f3b5 : Add `torch.swapdims` and `torch.swapaxes` (#46041)
d256e38823 : [JIT] Pass TypePtr by reference in Argument::type() and Type::isSubtypeOfExt(). (#48061)
df88cc3f7f : Document that `remainder` does not support complex inputs (#48024)
0387f2a6fa : Fix default value of `num_replicas` in DistributedSampler docstring (#48135)
140e946fec : Disable distributed collectives profiling tests (#48129)
a6898cb5f4 : Small documentation changes for RRef and Dist Autograd (#48123)
81b1673a21 : Enable complex tests that depend on batched matmul on CUDA (#47910)
3ca4c656de : Install magma on CUDA 11.1 (#48164)
10b490a3e0 : Hipify revamp (#45451)
1454cbf087 : Make numpy optional dependency for torch.cuda.amp (#48154)
e2b4c63dd9 : Enable the faster combined weight branch in MHA when query/key/value is same object with nan (#48126)
9ead558899 : Add max supported SM for nvrtc-11.0 (#48151)
21c823970e : [ROCm] remove sccache wrappers post build (#47947)
98722ab8a7 : There should be a newline between BUILD WITH CUDA and NVTX (#48048)
2ff748a680 : Move kthvalue scalar test to separate method for XLA (#48042)
ca8b9437ab : Add type annotations for a few torch.nn.modules (#46013)
8c00221fe2 : Fix inconsistent environment variable naming for setting NVTOOLEXT_HOME in TorchConfig.cmake (#48012)
2832e325dd : [TensorPipe] Avoid using deprecated alias for error (#48168)
df0ae244a9 : [static runtime] Add out_ variant for aten::stack and aten::nan_to_num (#48150)
6049653c20 : [quant][graphmode][fx] Keep linear op unchanged when qconfig is not supported (#48067)
a1f494cb8b : Fix test_inverse_singular for cublas path; fix cusolver inverse multi-stream issue (#47026)
bc484cfed1 : [c10d][jit] initial torchbind bindings for ProcessGroupNCCL (#42944)
cc611280d3 : Revert D24862372: [PyTorch Mobile] Fix for messenger: avoid error with [-Werror,-Wglobal-constructors]
4883d39c6f : Avoid direct reference to at::native::tensor from TensorDataContainer (#47567)
c6c6a53ba0 : [JIT] Fix function schema subtype checking (#47965)
94cd048bda : Added foreach_frac API (#47384)
134bce7cd0 : Adding bunch of unary foreach APIs (#47875)
0adace3706 : fix calculate_extra_mem_bytes_needed_for (#48102)
9392137dbe : [PyTorch Mobile] Fix for messenger: avoid error with [-Werror,-Wglobal-constructors]
194ea076b2 : Update VMA. (#47727)
568a72bacc : Fix Vulkan empty (and family) breakage as a result of API update. (#47937)
cdc2d2843b : Structured kernel definitions (#45277)
d7e838467a : [qunat][graphmode][fx] Embedding/EmbeddingBag works in static quant qconfig (#48062)
3846e35a55 : [GPU] Enable Metal on macosx (#47635)
05dc9821be : .circleci: Add python 3.9 builds for macOS (#47689)
04545f4b46 : [quant] out-variant for the reflection pad (#48037)
e1a101676b : [quant] ReflectionPad2d (#48036)
cb046f7bd2 : [static runtime] Initial memonger (#47759)
06707a7ef8 : Fix flake8 failure (#48124)
b1c5f06f9e : Revert D24925955: Fix "pointless comparison" warning
d522cd15a3 : fix BC test, after removing __caffe2 ops (#48099)
b10d6c6089 : [caffe2] cache NextName indexes for faster name generation (#47768)
736deefc1f : [torch][te] aten::type_as is unary, not binary (#48085)
bbee0ecbd1 : [pytorch][te] Handle negative axis in chunk (#48084)
aabc87cd04 : [NNC] Fix HalfChecker when half present but unused (#48068)
0d6c900bdb : docker: Fix PYTHON_VERSION not propagating (#47877)
315122ce15 : Bump up the CUDA OOM test memory size (#48029)
9443150549 : Update Graph docstring to match `__init__.py` (#48100)
8aaca4b46a : [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038)
a03f05f2a2 : Fix "pointless comparison" warning (#47876)
49f0e5dfeb : Fix typing errors in torch.distributed.*, close issue #42967. (#47534)
7f66fa62ca : Fix typing errors in torch.distributed.nn.* directory. (#47533)
915050ed66 : Fix typing errors in torch.distributed.distributed_c10d.* (#47532)
49eb82a7b2 : Fix type annotation errors in torch.distributed.* directory (#47531)
af37f8f810 : [pytorch][te] Do not merge Tensor[] variant of aten::where into fusion group (#48063)
43a9d6fb6e : [TorchScript] Support user defined classes as constants (#5062)
3611d26a25 : [JIT] Optimize FunctionSchema::checkArg for the Tensor case. (#48034)
7b2c78f120 : Revert D24714803: make duplicate def() calls an error in the dispatcher. Updating all fb operators to use the new dispatcher registration API
549ef1d668 : [caffe][memonger] Extend operator schema check to dag memonger (#48021)
fa0acb73bd : fix node manipulation in partition class (#48016)
824f710694 : make duplicate def() calls an error in the dispatcher. Updating all fb operators to use the new dispatcher registration API (#47322)
cba26e40cf : migrate export_caffe2_op_to_c10.h macros to the new dispatcher registration API (#47321)
93d9837375 : rename macro. TORCH_LIBRARY_FRAGMENT_THIS_API_IS_FOR_PER_OP_REGISTRATION_ONLY to TORCH_LIBRARY_FRAGMENT (#47320)
95b9c2061b : update legacy dispatcher registration API tests to avoid duplicate def() calls (#47319)
6ec2a89e01 : remove ops in the __caffe2 namespace (#47318)
233192be73 : Make sure valid ParameterList/Dict don't warn on creation (#47772)
b12d645c2f : Test TORCH_LIBRARY in CUDA extension (#47524)
cf92b0f3a0 : add type annotations to multiprocessing module (#47756)
1e0ace7fdc : Fix docstring typo (#47545)
825ee7e7f8 : [caffe2] plan_executor_test: add test case for should_stop loops (#47613)
550f26c6d5 : Port math kernel for layer_norm from pytorch/xla. (#47882)
95ea778ac6 : Set proper output differentiability for unique function (#47930)
dea2337825 : torch.Assert: make it torch.jit.script'able (#47399) (#47973)
ee995d33bd : rename torch.Assert to torch._assert (#47763) (#47972)
d20483a999 : Skip dummy node creation for autograd engine when there is a single input and place on correct queue (#47592)
957e45a97c : [NNC] Support vectorization of reductions (#47924)
9aaf7fb398 : [CI] Fix additional CI jobs not launched when PR is created from fork repo (#47969)
3a2aad9314 : Fix documentation to point to torch.overrides instead of _overrides. (#47842)
f9552e6da4 : update windows build guide (#47840)
147a48fb27 : [cmake] clean up cmake/Utils.cmake (#47923)
cd4aa9c95c : Fix inplace check logic to be triggered when written to Tensor does not require gradients (#46296)
d032d22141 : Replacing CUDA11.0 config with CUDA11.1 in CI (#47942)
013e6a3d9d : Revert D24698027: Fix auto exponent issue for torch.pow
8ef7ccd669 : Fix auto exponent issue for torch.pow (#47024)
d293413b3e : Batched matmul dtypes (#47873)
db1f217d8d : Add complex support for torch.addcmul and torch.addcdiv (#46639)
5adf840259 : [pytorch][te][easy] Remove KernelScope from fusion pass tests (#47952)
0e98fdd389 : [ATen/CPU] Parallelize HalfToFloat + FloatToHalf operators in PT (#47777)
f8248543a1 : Pass in smaller timeout into init_process_group for distributed_test (#47896)
07e98d28cf : [pytorch][codegen] migrate gen_variable_factories.py to the new data model (#47818)
4779553921 : Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
c936b43f14 : [pytorch][codegen] add fully migrated scripts to mypy strict config (#47747)
4ff8cd8f3a : [pytorch][codegen] gen_python_functions.py loading native_functions.yaml / deprecated.yaml directly (#47746)
d91cefb0d8 : [pytorch][codegen] migrate gen_annotated_fn_args.py to new codegen model (#47745)
0dbff184e9 : change file name to snake style (#47914)
1606899dbe : distributed_test: Map rank to GPU accordingly (#47898)
982ae987d3 : Revert D24941350: [pytorch][PR] Reopen PR for 0 dim batch size for AvgPool2d.
c543b3b582 : Fix a downcast (#47919)
fe7d1d7d0e : Add LeakyReLU operator to static runtime (#47798)
17a6bc7c1b : Cleanup unused code for Python < 3.6 (#47822)
4f9d0757f3 : Add type informations to torch.cuda (#47134)
2eb1e866e8 : Update links in DDP note (#47663)
550973b675 : Missing curly bracket. (#47855)
1bdd3687b9 : Back out "[JIT] Fix function schema subtype checking"
11710598db : Preserve module parameters in freezing (#47094)
f8c559db8e : [resubmit] Providing more information while crashing process in async error handling (#47246)
a9b6fa9e46 : Fix multinomial when input has 0 prob (#47386)
f86ec08160 : [pytorch][quantization] adding jit state for QuantizedLeakyReLU (#47660)
4380934b9b : [JIT] Dont use specialized tensor type (#46130)
5c0dff836a : Improve dimensionality mismatch warning (#47874)
ceeab70da1 : Reopen PR for 0 dim batch size for AvgPool2d. (#47426)
260daf088d : Added linalg.cholesky (#46083)
e8fecd5caf : Add constructor for ArgumentDef (#47492)
0685773d8d : Automated submodule update: FBGEMM (#47929)
0125e14c9a : [OpBench] change relu entry point after D24747035
6e42b77be1 : Add '--allow-run-as-root' to mpiexec to allow running distributed test inside a container (#43794)
7b8bd91632 : fp16 -> fp32 EmbeddingBag moved into CPU impl (#47076)
6a4d55f23c : [ONNX] Enable onnx shape inference in export by default (#46629)
c0aa863c56 : [quant][graphmode][fx][refactor] insert_quantize_node (#47880)
5d51b63984 : Use Blocking Wait if both Blocking Wait and Async Error Handling Are Set (#47926)
f743b5639a : [caffe2][memonger] Add support for distributed inference predict nets in DAG memonger (#47718)
a3e08e5344 : Support ReduceSum in c2_pt_converter (#47889)
eccbd4df1c : Remove fbcode/caffe2/mode (#46454)
03d1978a1a : [JIT] Resolve string literal type annotations using `Resolver::resolveType` (#47731)
1915ae9510 : [quant][graphmode][fx][refactor] is_output_quantized (#47879)
6b8d20c023 : [pytorch][te] Don't start TE fusion groups with an unknown-typed result (#47884)
d54497fca7 : Try again to give hash in doc push scripts (#47922)
f1babb00f0 : [caffe2] Fix ListWithEvicted _pprint_impl wrongly printing _evicted_values (#47881)
d4db4718fa : Revert D24873991: Profiler benchmark fix
e5da3b6097 : Revert D24891767: rename torch.Assert to torch._assert
4cec19b56a : Revert D24740727: torch.Assert: make it torch.jit.script'able
1c7c612af0 : Revert D24543682: [pytorch][PR] Added support for complex input for torch.lu_solve
8855c4e12f : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
759a548d6e : add dependency check in cost_aware_partition (#47856)
ffd0003022 : Added support for complex input for torch.lu_solve (#46862)
2ed3430877 : [GPU] Make permuteWeights inline (#47634)
692726812b : [JIT] Fix function schema subtype checking (#47706)
1aeac97712 : [PyTorch] Remove unnecessary shared_ptr copies in ThreadLocalDebugInfo::get (#47791)
b787e748f0 : torch.Assert: make it torch.jit.script'able (#47399)
a8ca042ec0 : rename torch.Assert to torch._assert (#47763)
16d6af74e6 : [PyTorch] Optimize ~intrusive_ptr for the case of zero weak references (#47834)
ed20e327d7 : [quant] skip tests without fbgemm support (#47800)
9ee4f499f0 : [OpBench] add _consume_op.list for processing input with type of List[Tensor] (#47890)
0652d755d3 : Fix some flaky tests in test_torch.py and test_nn.py (#46941)
2712acbd53 : CUDA BFloat16 Dropout (#45005)
1589ede8dd : [quant][graphmode][fx] insert_observer_for_input_arg_of_observed_node (#47785)
dfd946871a : Move eq.device to lite interpreter
a97c7e2ef0 : Profiler benchmark fix (#47713)
1afdcbfbb3 : [quant][graphmode][fx][refactor] insert_observer_for_output_of_the_node (#47784)
59e96c55f7 : Support MatMul in c2_pt_converter
c4ecbcdcb3 : [quant][graphmode][fx][refactor] insert_observer_for_special_module (#47783)
9fa681c5e0 : [ONNX] Add export of prim::dtype, prim::tolist (#46019)
85c43c3da1 : [ONNX] Convert _len based on the first dimension length (#47538)
eab809377d : [NNC] Remove all deferred expansion from Reductions (#47709)
eb8331e759 : Revert D24524219: Remove `balance` and `devices` parameter from Pipe.
4f538a2ba4 : [pytorch][bot] update mobile op deps (#47825)
a376d3dd5d : [pytorch] strip out warning message ifdef STRIP_ERROR_MESSAGES (#47827)
8ff0b6fef8 : [OpBenchMobile] Enable operator_benchmark to run the benchmark on mobile through AiBench (#47767)
edf751ca2f : Make empty c10-full (#46092)
3649a2c170 : [numpy] `torch.sqrt` : promote integer inputs to float (#47293)
7391edb591 : [hotfix] fix misleadingly summary BLAS=MKL when there's no BLAS install (#47803)
9734c042b8 : [FX] Fix submodule naming for subgraph split (#47869)
21f447ee2c : Added serialization of parameters for leaf modules (#47729)
8da7576303 : Remove `balance` and `devices` parameter from Pipe. (#46804)
65d5004b09 : Update, appease, and enable fail-on for shellcheck (#47786)
8304c25c67 : Give hash in commit messages in doc push scripts (#47694)
b1a4170ab3 : [NNC] Fix lowering of aten::pow (#47795)
149190c014 : Added CUDA support for complex input for torch.solve (#47045)
275a89a7ee : [Docs] Store Docs fixes about HashStore API (#47643)
6aaf04616b : [Metal] Remove undefined tests
f51be328ae : [FX] Fix __tensor_constants not scriptable (#47817)
76ff557de7 : [NNC] add hazard analysis to Bounds Inference (#47684)
664d2f48cf : [NNC] Enable unary op cpu testing (#47374)
dcca712d3c : [NNC] refactor cuda half support to more general file (#47373)
346a71d29c : [NNC] More cpu tests (#47372)
450738441b : [NNC] Add more CPU Tests (#47371)
e618bd858e : [NNC] Fix llvm min lowering for int inputs (#47370)
fe81faee5f : Add more CPU tests (#47369)
b8a1070ec0 : [TensorExpr][CPU] Fix bool -> int casting (#46951)
ad5be26b2f : Small changes/cleanup (#46950)
f221a19a7f : Force LLVM Compilation for CPU Tests (#46949)
f42cdc2e43 : [NNC] Fix printing of integral doubles (#47799)
1478e5ec2a : [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
66f9b1de1b : [NCCL] enable p2p tests (#47797)
9ea7a6c7c5 : [ONNX] Update ONNX doc for writing pytorch model (#46961)
d7c8d3cccb : Remove references to `typing` module from setup.py (#47677)
809660ffa4 : ATen DerivedType is dead, long live ATen RegisterDispatchKey (#47011)
00a3add425 : [TorchBind] Support using lambda function as TorchBind constructor (#47819)
b6cb2caa68 : Revert "Fixed einsum compatibility/performance issues (#46398)" (#47821)
cfe3defd88 : [vulkan] Enable prepacked addmm/mm for linear layers (#47815)
e1ee3bfc0e : Port bmm and baddbmm from TH to ATen (#42553)
553ccccc54 : [c10d] switch ProcessGroup to be managed by intrusive_ptr (#47343)
859e054314 : skip test_all_reduce_sum_cuda_async test case for ROCM (#47630)
2df5600155 : [ROCm] add skipCUDAIfRocm to test_lingalg test_norm_fro_2_equivalence_old (#47809)
2907447c97 : Spurious numpy writable warning (#47271)
4b25d83e9b : torch.dropout: fix non-contiguous layout input (#47552)
a02baa0c7a : [reland][c10d] switch ProcessGroupNCCL:Options to be managed by intrusive_ptr (#47807)
665ac2f7b0 : [reland] [c10d] switch Store to be managed by intrusive_ptr (#47808)
70ae5685f9 : [reland][c10d] switch ProcessGroup::Work to be managed by intrusive_ptr (#47806)
89b371bc28 : [quant] Add support for 2D indices for quantized embedding operators (#47766)
47386722da : [quant][graphmode][fx][refactor] insert_observer (#47782)
dd77d5a1d4 : [quant][refactor] factor out get_combined_dict function (#47781)
b46787d6d7 : add cost_aware_partition (#47673)
c5834b6a23 : Look in named-buffers of module for tensors (#47641)
c9f6e70c09 : Refactor DDP uneven inputs control flags (#47394)
e8a73fbf34 : Workaround PyTorch debug build crash using old GCC (#47805)
52ec8b9340 : Added CUDA support for complex input for torch.triangular_solve (#46916)
a0c4aae3d5 : Free original weight after prepacking in XNNPACK based op (#46541)
545f624a4a : Mark overriden Tensor method `override` (#47198)
d4fa84bf5f : Properly serialize types that only appear at function input (#47775)
32b4b51254 : [Docs] Minor doc fixes for init_process_group (#47644)
0c54ea50bd : [PyTorch] Avoid atomic refcounting in intrusive_ptr::make (#47100)
f2b7c38735 : Automated submodule update: FBGEMM (#47605)
fcd44ce698 : Add instruction on how to handle the potential linker error on Linux (#47593)
7864ae9f98 : Improve error messages for operator registration API (#47636)
05a76ed705 : Batching rule for torch.squeeze(tensor) (#47632)
df887936a4 : Fix transpose batching rule (#47628)
f6ff6478cf : Make kwargs argument optional in _batched_grad_test (#47625)
fc24d0656a : Tensor.contiguous, Tensor.is_contiguous batch rule (#47621)
6c815c71b3 : Revert to use NCCL 2.7.8-1 (#47638)
1abe6e5ad4 : [ONNX] Bool inputs to index_put updated symbolic (#46866)
da2e2336b6 : [ONNX] Export and shape inference for prim uninitialized in If subblock (#46094)
4078f44668 : [TB][embedding supporting] Modify histogram to accept multipy types to skip Castop and avoid OOMing in Castop
513f62b45b : [hotfix] fix collect_env not working when torch compile/install fails (#47752)
a1db5b0f2b : Added CUDA support for complex input for torch.inverse #2 (#47595)
dbfee42a7d : [FX] Fix uses not updating when erasing a node (#47720)
d1351c66a8 : [FX] Add a bunch of docstrings (#47719)
dac0192148 : Revert D23632280: [c10d] switch ProcessGroup::Work to be managed by intrusive_ptr
1f946e942d : Revert D24667128: [c10d] switch Store to be managed by intrusive_ptr
2204374fd4 : Revert D24667127: [c10d] switch ProcessGroupNCCL:Options to be managed by intrusive_ptr
0c64f9f526 : Convert from higher order functions to classes in tools.codegen.gen (#47008)
d478605dec : Fix classmethod override argument passing. (#47114)
1239d067ae : [quant][graphmode][fx] Support standalone_module_class (#47705)
4cb73f5a4c : Allow for string literal return during symbolic tracing (#47618)
48ed577fbd : Stop including TypeDefault.h from MPSCNNTests.mm (#46998)
88ec72e1c2 : [fbcode][pytorch mobile] Create model reader utilities.
5647f0ca7c : Revert D24859919: [pytorch][PR] Grammatically updated the tech docs
ae5c2febb9 : [c10d] switch ProcessGroupNCCL:Options to be managed by intrusive_ptr (#47075)
0cfe3451d4 : [c10d] switch Store to be managed by intrusive_ptr (#47074)
0650a6166f : [c10d] switch ProcessGroup::Work to be managed by intrusive_ptr (#44046)
cbf439caf1 : Unbreak backward compatibility tests (#47726)
bfec376e9f : [vulkan] Apply new changes to vulkan api v1 (#47721)
d73a8db2d2 : Use local env for building CUDA extensions on Windows (#47150)
7908bf27d5 : Fix output type of torch.max for Tensor subclasses. (#47110)
a5c65b86ce : Fixed einsum compatibility/performance issues (#46398)
51a661c027 : [vulkan] tentative fix for conv2d_pw, and fix checks for addmm (#47723)
e914a1b976 : Support default args in symbolic tracing (#47615)
a5e9fa1b0d : Add max_src_column_width to autograd profiler (#46257)
1b954749d0 : Disable test_distributed_for for multigpu test env (#47703)
4de40dad5d : [ONNX] Improve stability of gemm export (#46570)
69532c4227 : Vulkan MobileNetv2 unit test. (#47616)
bf6a156f64 : Fix kthvalue error for scalar input (#47600)
6575e674ce : [numpy] torch.{all, any} : Extend Dtype Support (#44790)
c9d37675b2 : Back out "[pytorch][PR] The dimension being reduced should not be coalesced by TensorIterator" (#47642)
f692af209d : add unittest for operator benchmark (#47678)
a843d48ead : Grammatically updated the tech docs (#47345)
febc76a5c6 : fix assert_allclose doesn't check shape (#47580)
8e3af9faa8 : [pytorch] fix debug symbol flag for android clang (#46331)
baa2f777c8 : [complex] torch.sqrt: fix edge values (#47424)
7691cf175c : [ROCm] set ROCM_ARCH to gfx900 and gfx906 for CI builds (#47683)
ef5f54b2c6 : added rocm 3.9 docker image (#47473)
14f0675903 : [ONNX] Fix dtype for log_softmax export (#46627)
0fb1356a98 : [ONNX] Fix eye export (#47016)
5ce9c70631 : Revert D24735802: [pytorch][PR] [ONNX] Update batch_norm symbolic to handle track_running_stats=False
6b94830cdc : faithful signature support in BoxedKernelWrapper (#47267)
0a7ebf00f8 : [Reland] Add tests for DDP control flow models. (#47470)
17c58720fe : Revert D24346771: [caffe2][memonger] Add support for distributed inference predict nets in DAG memonger
163adb9fa7 : Add HalfToFloat + FloatToHalf operators to PyTorch (#45092)
497cd2506f : Add serialize GraphModule to JSON support (#47612)
5cba3cec5a : fix extensions build flags on newer GPUs (#47585)
1a55f5b3ea : [ONNX] Update batch_norm symbolic to handle track_running_stats=False (#47135)
ccc53901bd : Update CONTRIBUTING and gitignore for docs build (#47539)
cc337069e0 : .circleci: Add python 3.9 to linux binary build matrix (#47235)
22d56319ee : Moving hypothesis and other installations to Docker (#47451)
fa560ceb9c : [reland] make intrusive_ptr as a pybind holder type (#47586)
780f854135 : Clear Shape info in frozen modules (#47511)
1c45631f10 : Revert D24737050: [WIP] Adding bunch of unary foreach APIs
5882f2e540 : [caffe2][memonger] Add support for distributed inference predict nets in DAG memonger
1bf3dc51ae : [JIT] Add `__prepare_scriptable__` duck typing to allow replacing nn.modules with scriptable preparations (#45645)
6bb18b24fb : [quant][qat] Ensure observer respects device affinity (#47514)
abae12ba41 : only set ccbin flag if not provided by user (#47404)
65a72cae2c : Fix type promotion for trace on CPU. (#47305)
57dcb04239 : Batched gradient support for view+inplace operations (#47227)
22d21414d7 : Revert D24574649: [pytorch][PR] Utility that loads a DP/DDP model state dict into a non-DDP model with the same architecture.
f2eac5df18 : [NNC] Fix lowering of aten::remainder (#47611)
0b30a8d007 : [NNC] Simplify and fix some bugs in Bounds Inference (#47450)
c8a42c32a1 : Allow large inputs to svd_lowrank. Fix inaccuracy in torch.svd docs. (#47440)
52fe73a39e : Enable Python code coverage for onnx runs (#47387)
b631c872c9 : Utility that loads a DP/DDP model state dict into a non-DDP model with the same architecture. (#45643)
49d5b4d1e1 : move helper functions out of Partitioner class (#47515)
4841e9ef33 : Add Vulkan op Conv2D. (#46900)
ce11dbbb48 : Vulkan tweaks (#47261)
8aca85dbcd : Add diagflat complex support (#47564)
79f8582289 : [ONNX] Add export of aten::is_floating_point (#46442)
3dd266304c : Fix inaccurate note in DistributedDataParallel (#47156)
8b3f1d1288 : [caffe2] Add __slots__ to all classes in schema.py (#47541)
2f617c5104 : skip GPU test on sandcastle if sanitizer is enabled (#47626)
86bb413600 : Optimize backward for torch.repeat (#46726)
4c52a56c40 : [caffe2] Properly call super init in schema.py (#47542)
b6a2444eff : [WIP] Adding bunch of unary foreach APIs (#47383)
5686d2428c : [ONNX] Slightly improve indexing with ellipsis under scripting (#46571)
a49367e9c9 : Update the docs of torch.eig about derivative (#47598)
4159191f0e : [pytorch] split out trace type generator and migrate to new codegen model (#47438)
499d2fad98 : [pytorch] factor out return_names api (#47437)
8d1a6ae51d : [pytorch] TraceType codegen tweak - newline before redispatch call (#47436)
e26c1726cf : [ONNX] Fix scripting rand/randn/where (#45793)
a08e8dd70c : Fix python 3.9 builds on Windows (#47602)
6214d0ad88 : Update nccl commit tag to head of v2.8 branch (#47603)
ead86b2419 : Add batching rule for torch.clone(tensor, torch.contiguous_format) (#47365)
7bc8fdb6d7 : as_strided batching rule (#47364)
77c49e65d5 : [tensorexpr] Fix registration of intrinsics on llvm-fb (#47540)
70d34718b8 : [fx] add missing modules for type annotations (#47537)
fbffd959ca : Fix compiler warning: variable "num_ivalue_args" was declared but never referenced (#47494)
4a2fb34042 : check sparse sizes (#47148)
65e5bd23d8 : [quant] Add _FusedModule type to capture all fused modules for quantization (#47484)
8339f88353 : Add complex autograd support for torch.mean (#47566)
3d962430a9 : Make gen_op_registration flake8 compliant (#47604)
b80da89891 : Batching rule for Tensor.new_empty_strided (#47226)
59aca02224 : Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) (#47225)
4a58f35bef : [caffe2] Fix duplicate name bug in Net.AddExternalInput (#47530)
6248e0621c : Revert D24801481: [pytorch][PR] Add AcceleratedGraphModule and serialize GraphModule to JSON
9e0102c10f : Add AcceleratedGraphModule and serialize GraphModule to JSON (#47233)
8182558c22 : [PyTorch Mobile] Don't use __ROOT__ for inference only ops
16c72a5a6b : [pytorch] continue to rewrite gen_python_functions.py with typed models (#46978)
4a7de2746f : Add docs on how to toggle TF32 flags on C++ (#47331)
781e0ed835 : Support RRef.backward() for Owner RRefs. (#46641)
5a5258cb0d : Support the strided tensor on input for torch.cat (#46859)
6e69a24a1d : [ONNX] Reimplement _var_mean to ensure non-negative (#47240)
f23a2a1115 : The dimension being reduced should not be coalesced by TensorIterator (#47237)
29184f86b0 : Correctly print out sign of near-zero double values (#47081)
c19eb4ad73 : BoxWithNMSLimit support int `batch_splits` input (#47504)
9d0c6e9469 : Implement Complex tensor support in all reduce and all gather (#47523)
f90da88d8f : Add complex support for torch.mean [CUDA] (#47048)
451e7d3db4 : Enable diag for bool Tensors (#47455)
3253ccbd9f : Add bool tensor support for where (#47454)
a1fef453b6 : Support extra files in _load_for_mobile (#47425)
3f9697b10e : Correctly compare Stream IValues (#47303)
25d1fb519d : Build nightly binaries only for the latest ROCM (#47503)
e09ec8eefa : Update the error message for retain_grad (#47084)
7af9752fdc : Fix rounding error flakiness in quantized_test (#47468)
637787797b : [JIT] add support for torch.jit.Final in python 3.6 (#47393)
31d041c946 : Back out "[c10] make intrusive_ptr available as a pybind holder type"
8eb228a7f3 : Add support for log_softmax (#47409)
582e852fba : [caffe2] Add unittests for schema.Field init (#47512)
2572d7a671 : [quant][eagermode][qat][test] Add numerical test for qat convert (#47376)
24b549ba84 : [jit] better message for bad type annotation (#47464)
c26c4690fe : Add sub operator
47198e3208 : [caffe2] improve core.Net cloning/init performance (24x for large models!) (#47475)
90a90ab1d6 : Add type informations to torch/storage.py (#46876)
d0d673b043 : Improve reciprocal() and rsqrt() accuracy on arm64 (#47478)
5614f72534 : Suppress test issues in test_torch running in sandcastle (#47474)
611080a118 : [hot fix] cuda 11.0.x doesn't support sm86. (#47408)
160db3db4f : Adding profiling capability to c++ ddp collective functions (#46471)
1aeefcdaa6 : Revert D24730264: [pytorch][PR] Added CUDA support for complex input for torch.inverse
f3ad7b2919 : [JIT][Reland] add list() support (#42382)
eaa993a2e0 : Add type annotations to torch._C._distributed_rpc module. (#46624)
73a3e70b24 : Add type annotations for torch._C._distributed_c10d module. (#46623)
fe77ded48a : Add Python declaration of torch._C and torch._C._autograd modules. (#46622)
fccfe7bd1a : [Gradient Compression] Add unit tests that test default Python comm hook implementations (#47158)
873652d9ac : [TensorExpr] Fix LLVM 12 build after LLVM API changes (#47480)
fd72ec53d4 : [JIT] Optimize hot path in ProfilingGraphExecutorImpl::getPlanFor. (#47465)
9a9383ef2e : PyTorch NNAPI integration prototype (#46780)
ad8c0e57ef : Add a command-line flag for overriding pthreadpool size (#46781)
a63f391c6f : [JIT] fix documentation typo (#46926)
ceb16d8836 : [Bootcamp] add CUDA kernel checks to ATen/native/cuda (#47466)
e985503d80 : [NNC] Fix an issue with half-scalar vars coerced to float (Take 2) (#47448)
9c8f40516f : Batched grad for advanced indexing (index) (#47223)
65241e3681 : add remove_node in Partition class (#47452)
b4b0fa6371 : add get_device_to_partitions_mapping (#47361)
33acbedace : Added CUDA support for complex input for torch.inverse (#45034)
c2d4a5b137 : Disable unused docker-pytorch-linux-xenial-py3.6-gcc4.8 job (#47446)
373246733d : [FX] get the correct error message (#47108)
eed4a57d54 : Speedup copysign for half and bfloat16 types (#47413)
35491412d1 : Revert D24649817: [pytorch][PR] Fix pickling for Tensor subclasses.
7a599870b0 : [ONNX] Update peephole pass for prim::ListUnpack (#46264)
5977d1d864 : FixedQParamsFakeQuantize: adjust default quant_min and quant_max (#47423)
745899f926 : Revert D24706475: [pytorch][PR] [NNC] Fix an issue in Cuda fusion with fp16 scalar vars coerced to float
9c8078cdfb : Revert D24659901: Add tests for DDP control flow models.
1519c7145c : __noinline__ the top level igamma cuda kernel. (#47414)
e40a563050 : Fix sum batching rule, add simple clone batching rule (#47189)
9a9529aa84 : Batching rules for complex view functions (#47188)
ae374dc690 : Move igamma cuda specific code to kernel file. (#47410)
220b3bd667 : Add op benchmark for batch box cox as baseline (#47275)
68954fe897 : Add release note scripts (#47360)
a4ba018e57 : Updated docs/test for dot and vdot (#47242)
d8c3b2b10c : [quant][pyper] Add support for pruned weights in embedding_bag_byte lookup (#47329)
433b55bc7c : [quant] Add testing coverage for 4-bit embedding_bag sparse lookup op (#47328)
f19637e6ee : Expand the test of torch.addbmm and torch.baddbmm (#47079)
df5b4696cf : [Pytorch] Specialize guts of c10::optional for 32-bit scalars (#47015)
0edc6a39c8 : [NNC] Read/Write Dependency analysis (#46952)
c4209f1115 : Fix pickling for Tensor subclasses. (#47115)
60ae84754e : Add torch.overrides checks for submodules. (#47285)
6c5a1c50bf : Benchmark combining Distributed Data Parallel and Distributed RPC (#46993)
ca293ec4e7 : [TensorExpr] Run constant pooling in fusion groups to dedupe constants. (#47402)
5107a411cd : add partition_by_partition_cost (#47280)
878032d387 : [ONNX] Add export of prim::data (#45747)
192b2967a5 : [quant][graphmode][fx][test] Add test for nn.Sequential (#47411)
c8872051e6 : Validate number of GPUs in distributed_test. (#47259)
8a3728c819 : Make `torch.det()` support complex input. (#45980)
030caa190f : Expand the test of torch.bmm on CUDA (#47124)
32c76dbecc : Split IGamma cuda kernel into its own file to speed up compilation times. (#47401)
735f8cc6c2 : [DI] Allow explicit taskLauncher for torchscript interpreter (#46865)
b704cbeffe : [FX] Speed up non-parameter tensor lookup (#47325)
ff3e1de6d7 : Clean up some imports in cuda kernel code. (#47400)
848901f276 : Fix collect_env when pytorch is not installed (#47398)
da491d7535 : Split up BinaryMiscOpKernels.cu because it's slow to compile. (#47362)
33cf7fddd2 : [NNC] Fix an issue in Cuda fusion with fp16 scalar vars coerced to float (#47229)
31c9d2efcd : Add tests for DDP control flow models. (#47206)
2e5bfa9824 : Add `input` argument to `autograd.backward()` cpp api (#47214)
6f6025183f : Skip iomp5 embedding if torch_cpu could not be found (#47390)
ae7063788c : [Pytorch] Add basic c10::optional tests (#47014)
17be8ae11a : [pytorch] Remove c10::nullopt_t::init (#47013)
7ab843e78b : [JIT] add freeze to docs (#47120)
a11bc04997 : Expand GRADIENT_IMPLEMENTED_FOR_COMPLEX to allow named tensors (#47289)
5d82311f0d : Add vulkan reshape op (hack) (#47252)
6b3802a711 : [Gradient Compression] Export sizes, along with length and offset of each variable to GradBucket for PowerSGD (#47203)
2c55426610 : Renamed a TensorListMetaData property. Cleaned up a test (#46662)
f588ad6a35 : [quant][graphmode][fx] Test to make sure dequantize node are placed properly (#47332)
bba5a31176 : Revert D24481801: Optimize backward for torch.repeat
4189c3ca76 : Fix onnx test-reports path in CI (#47315)
01da0fe5ff : Including generator param in randperm documentation (#47231)
fe17269e75 : Revert "Revert D24335982: explicitly error out in comparison ops when the types don't match" (#47288)
e4bc785dd5 : randperm: add torch check to ensure generator device = tensor device (#47022)
07e8f48e6b : Removing caffe2 and third_party from our code coverage (#47310)
f1ac63d324 : Implement copysign (#46396)
996f444c00 : [pt][static_runtime] Memory model (#46896)
5c4bd9a38f : Move python-independent c10d implementations to torch/lib (#47309)
0ec717c830 : Support int32 indices and offsets in nn.EmbeddingBag (#46758)
a2f9c7d4e3 : Expose SparseLengthsSum8BitRowwiseSparse to C10 (#47306)
0cba3e3704 : [quant][graphmode][fx] Add support for qat convbn{relu}1d (#47248)
3a0024574d : Do not delete rpath from torch.dylib on Darwin (#47337)
53a5f08e0c : [quant][eagermode] Avoid inserting fakequant for sigmoid/hardsigmoid/tanh in eval mode (#47297)
c6fe65bf90 : [quant][graphmode][fx][fix] Fix error that DefaultQuantizer is not inserted after a module configured with None qconfig (#47316)
dec1c36487 : Create prototype for AST rewriter (#47216)
f91fcefc81 : [Gradient Compression] Surface C++ comm hooks to Python API as built-in comm hooks (#47270)
2652f2e334 : Optimize arguments checks (#46661)
2caa3bd453 : Inlining all non-output buffers, including intermediate buffers. (#47258)
464c569dbf : [vulkan] Add mean.dim op for vulkan (#47312)
9b168a1fed : [TensorExpr] Pick meaningful names for functions in TE codegen. (#47255)
a65e757057 : [TensorExpr] CudaCodegen: restart counter for function names unique ID inside each codegen instantiation. (#47254)
3161fe6d5a : [JIT] SubgraphUtils: add a function for generating a string name for a given graph. (#47253)
7a0f0d24d0 : Codegen - error when an argument that looks like an out argument isn't a kwarg (fix #43273) (#47284)
a8ef4d3f0b : Provide 'out' parameter for 'tensordot' (#47278)
31ebac3eb7 : [quant] Quantized flip dispatch (#46235)
f41f3e3cd1 : Implement bicubic grid sampler (#44780)
63978556fd : [numpy] `torch.a{cosh, sinh}` : promote integer inputs to float (#47152)
2b5433dee6 : [Pytorch][Annotation] Update inlined callstack with module instance info (#46729)
f730f2597e : [NNC] Implement Cond in LLVM codegen (#47256)
8b13ab9370 : Event Logging for NCCL Async Error Handling Process Crash (#47244)
ca61b061f3 : Update minimum supported Python version to 3.6.2 (#47314)
ea93bdc212 : Add comment explaining purpose of the accumulate_grad argument (#47266)
dc0d68a1ee : [JIT] Print out interface mismatch for prim::ModuleDictIndex (#47300)
14194e4f23 : Embed `libiomp5.dylib` into wheel package (#47262)
c424d9389e : [numpy] `torch.a{cos, tan}` : promote integer inputs to float (#47005)
0d00724e36 : [numpy] `torch.{a}tanh` : promote integer inputs to float (#47064)
c68c3d0a02 : [fix] nn.Embedding.from_pretrained : honour `padding_idx` argument (#47184)
f276ab55cd : Added Kronecker product of tensors (torch.kron) (#45358)
32b66b0851 : reorganize sparse_nn_partition (#47283)
774b638eb6 : Change largeCUDATensorTest to largeTensorTest+onlyCUDA; add a buffer to large cuda tensor test (#45332)
4e6f2440d8 : Optimize backward for torch.repeat (#46726)
9c3a75527b : Update doc to reflect current behavior (#46937)
782f92b569 : fix windows CI passed incorrectly (#47105)
8c865493c6 : Automated submodule update: FBGEMM (#47263)
9e58c85d08 : [ROCm] remove use of HIP_PLATFORM (#47241)
579cfc6641 : Moving test order to rebalance test1 and test2 times (#47290)
5c8896f8ad : Delete CUDA build rules from MacOS build (#47277)
c05ee86edd : Fix return-type-is-always-copy warning (#47279)
a341a4329a : Format error message for unmatched signature between _out and base functions (#47087)
73e121de1c : [GPU] Enable optimize_for_metal in fbcode (#47102)
ad3a3bd0d6 : [GPU] Add an attribute to the torchscript model exported by metal (#47174)
0ead9d545a : [quant][graphmode][fx] Add test for non quantized embedding and embeddingbag (#47092)
4df7eefa06 : [TensorExpr] Support LLVM versions 8 through 12 (#47033)
ac8a8185eb : expose Timer docs to PyTorch website. (#46880)
09a52676ad : Add NestedTensor specific dispatch key to PyTorch (#44668)
1fe273d798 : add node by node cost function (#47009)
084b71125f : Fix bug in toComplexWithDefault (#43841)
b1b77148ac : Back out "[Gradient Compression] Surface C++ comm hooks to Python API as built-in comm hooks" (#47234)
2cff3bba58 : [vulkan_api][ops] Mm, Pool, Upsample (#47063)
b0e954fff5 : quantize_tensor_per_channel ARM implementation (#46018)
ecfa7a27b8 : [jit] fix traced training attribute (#47211)
27f4a78bb8 : Add benchmark for per channel tensor quantization (#46017)
82b74bd929 : Port torch::jit::module's attr method to mobile::module (#47059)
b6685d3863 : [PT] optional -> c10::optional (#47144)
be2e3dd2a1 : [quant][graphmode][fx][fix] Linear work with float_qparam_dynamic_qconfig (#47068)
cedeee2cd4 : Add scalar.conj() and update backward formulas for add and sub (#46596)
86151da19e : Port CPU Trace from TH to ATen (#47126)
8054ae3e77 : Add test for trace (#47125)
f58842c214 : Enable inlining into reductions (#47020)
b5a1be02a0 : Add RAII DetectAnomalyGuard (#47164)
ebf36ad3da : Remove travis-python references as well as some unnecessary dependencies (#47209)
42b6f96764 : Make "Run flake8" step always succeed again (#47236)
f5073b0c5a : Add `inputs` argument to `autograd.backward()` (#46855)
18470f68bc : Fix max_pool1d on discontiguous tensor (#47065)
b3eb0c86cf : Revert D24335982: explicitly error out in comparison ops when the types don't match
7f125bca1c : [Metal] Add pin_memory check in empty_strided (#47228)
e03820651a : Make conversions explicit (#46835)
22b3d414de : Enhance the torch.pow testcase for the complex scalar base (#47101)
9b52654620 : annotate a few torch.nn.modules.* modules (#45772)
7178790381 : Add vulkan clamp op (#47196)
96b23f7db1 : add sandcastle device type test base discovery (#47119)
70d58031d7 : [c10] make intrusive_ptr available as a pybind holder type (#44492)
6852cbb952 : [RFC] Better error message in case operator could not be run (#46885)
c5ae875179 : Add bfloat support for torch.randn and torch.norm (#47143)
60fea510a1 : explicitly error out in comparison ops when the types don't match (#46399)
6e22b6008d : [MLF] Allow for computing prune quantile thresholds on absolute value of indicators in distributed-inference-compatible embedding LUT pruning (#46789)
6906701bde : [ROCm] enable stream priorities (#47136)
c2e123331a : Check CUDA kernel launches (fbcode/caffe2/aten/src/ATen/native/cuda/) (#47207)
0d6bf8864b : add rocm 3.9 to nightly builds (#47121)
da26858c9c : Add complex backward support for torch.exp (#47194)
c10aa44e33 : Back out "Providing more information while crashing process in async error handling" (#47185)
85e5b76f17 : Automated submodule update: FBGEMM (#47190)
1cc1da5411 : LayerNormInt8QuantizeFakeNNPI fix to match ICEREF. (#47140)
19ede75eb9 : [JIT] Enable ModuleDict non-literal indexing (#45716)
317b78d56e : Revert D24665950: Create prototype for AST rewriter
54feb00bbd : Create prototype for AST rewriter (#46410)
ee0033af9b : [Gradient Compression] Surface C++ comm hooks to Python API as built-in comm hooks (#46959)
e3f912e8b7 : Revert D24655999: [fbcode] Make model reader utilities.
7f056e99dd : [fbcode] Make model reader utilities.
1aa57bb761 : Moving coverage, xunit, pytest installation to Docker (#47082)
cb4b6336ba : [FX] Fix handling of attributes (#47030)
7eb427e931 : Providing more information while crashing process in async error handling (#46274)
d1d6dc2e3c : Add more specific error message (#46905)
a81572cdc5 : Add anomaly mode for C++ (#46981)
c86af4aa55 : Disable NEON acceleration on older compilers (#47099)
085193c291 : [quant][graphmode][fx][fusion] Add test for fuse_fx (#47085)
1dd220bd84 : Add C++ coverage for Ubuntu cpu tests (#46656)
edac4060d7 : Fix mul cuda for bool (#47031)
69fe10c127 : use bitfield to shrink TensorImpl (#45263)
99fed7bd87 : faster TensorOptions merging (#45046)
c7fc8cab3b : track Half/ComplexHalf default dtype (#45043)
f05b66b70d : pass TypeMeta by value (#45026)
2643800881 : Fix max_pool2d with ceil_mode bug (#46558)
7df0224cba : Automated submodule update: FBGEMM (#47071)
67b7e751e6 : add warning if DataLoader is going to create excessive number of threads (#46867)
eec201c138 : Add last_n_window_collector
6c34aa720c : add add_node function for partition to fix partition mem size calculation (#47083)
f9d32c4fa8 : [JIT] Add selective backend lowering API (#43613)
0dbd72935a : Split comm hooks into python-dependent hooks and others (#47019)
d95e1afad3 : [pytorch] add script to run all codegen (#46243)
707d271493 : Fix links in tools/build_variables.bzl (#47066)
366888a5e2 : [quant][graphmode][fx] Remove logging for standalone module api calls (#47032)
e3b55a8a65 : [pytorch/ops] Concat fast path w/ zero tensor (#46805)
2e2dc5874b : Fix lint (#47095)
f5477e3703 : Enable python code coverage on windows (#44548)
ddeacf1565 : Fix median bug on discontiguous tensors (#46917)
9bc8f071a3 : [WIP] Move torch.fx into its own target (#46658)
7190155408 : [Transposed Conv]add ConvTranspose3d with FBGEMM as backend (#46608)
3c643d112e : Pin destination memory for cuda_tensor.to("cpu", non_blocking=True) (#46878)
e17b8dea1d : fix calculation of number of elements to not overflow (#46997)
78de12f588 : Replace -f with -x for pytest tests. (#46967)
a4caa3f596 : [ONNX] bump CI ort to 1.5.2 rel for stability (#46595)
843cab3f2e : Delete TypeDefault.h and TypeDerived.h codegen entirely. (#47002)
c689b4d491 : Delete TypeDefault call code generation logic in VariableType (#47000)
41f8641f1e : Delete SchemaRegister.cpp, make flag operate on TypeDefault.cpp (#46991)
54d83296a9 : Desugar missing dispatch field into singleton Math entry (#46970)
87e86fa84c : Some miscellaneous cleanup in codegen (#46940)
dc6f723cb4 : Delete Vulkan from code generator. (#46938)
156c08b0d9 : view_as_real doesn't work for all backends since it relies on strides. (#47018)
71c0133e23 : enable PE everywhere but mobile (#47001)
377a09c8e8 : reland fast TypeMeta/ScalarType conversion (#45544)
1ea14e30f5 : [ONNX] Enable NoneType inputs to export API (#45792)
c556d4550c : fix_combine_two_partition_size (#47053)
129b41226e : [ONNX] Support nd mask index in opset >= 11 (#45252)
1d233d7d1f : [fix] torch.nn.functional.embedding -> padding_idx behavior (#46714)
3e499e490a : Bump up NCCL to v2.8 (#46742)
d850b5c98c : Fix DDP issue where parameters share same grad_accumulator (#46755)
680571533b : [RFC] Decouple fast pass functions (#46469)
74d730c0b5 : implement NumPy-like functionality column_stack, row_stack (#46313)
fee585b5a3 : Correctly mark unannotated NamedTuple field to be inferred TensorType (#46969)
1e275bc1a6 : Show Flake8 errors in GitHub CI again (#46990)
6eaa324c9f : Implement torch.igamma (#46183)
dd95bf65b6 : [caffe2/FC DNNLOWP] Shrink Y_int32_ vector capacity when appropriate
38265acfbe : Add Mul op for Vulkan (#47021)
2b6a720eb1 : Update pybind to 2.6.0 (#46415)
2249a293b7 : Fix segfault with torch.orgqr. (#46700)
f629fbe235 : Added torch.linalg.tensorsolve (#46142)
13b4127c95 : Fix implicit conversion (#46833)
ecdbea77bc : Fix DDP documentation (#46861)
262bd6437a : Show old kernel location when there are mismatches (#46850)
dfdc1dbee4 : Disable softmax tests on ROCm (#46793)
4a581ba6c2 : Implement LengthsToOffsets operator in Caffe2 (#46590)
18d273dc0e : [RFC][LocalSession] Fix workspace type
d0df29ac22 : [FX] Put inf and nan in globals instead of with an import string (#47035)
cab32d9cdf : [RPC Framework] Support remote device format "<workername>/<device>" (#46773)
b553c06abb : Throw an exception in the constructor of torchscript serialization to avoid double-exception (#44266)
9c1a41b724 : [RFC] Add OperatorHandle overload to the RecordFunction::before() method (#46401)
604e1b301a : Fix negative column numbers for the torch.eye (#46841)
5c8aad1141 : [numpy] `torch.cos`, `torch.tan` : promote integer inputs to float (#46706)
42a51148c1 : Use f-strings in torch.utils.cpp_extension (#47025)
9d23fd5c00 : [pytorch] get rid of cpp_type_str from pybind codegen (#46977)
79474a1928 : [pytorch] simplify tensor options logic in pybinding codegen (#46976)
a86b3438eb : add support for different memory sizes on size_based_partition (#46919)
c2a3951352 : [quant][graphmode][fx] Remove inplace option for convert_fx (#46955)
ad260ae7fd : Disable test_joing_running_workers for TSAN. (#46966)
9fefb40628 : Fix signed-to-unsigned conversion warning (#46834)
c7183c9878 : Fix object-based collectives API to use torch.cuda.current_device instead of (#46897)
dc8176356e : Various cleanups to ir_emitter and friends (#46686)
fc2bd991cc : [quant] Fix flaky test test_histogram_observer_against_reference (#46957)
cd26d027b3 : [doc] Fix info on the shape of pivots in `torch.lu` + more info on what and how they encode permutations. (#46844)
058f43fc51 : Fix torch.version.debug generation (#47006)
14d87ec5a3 : Add Vulkan op Add. (#44017)
ec600bc391 : Add Vulkan tensor copy. (#46481)
bf08814b73 : [FX] Kill functional transforms name (#47004)
23bce17baa : Add inputsSize to Python IR, like outputsSize (#46779)
179d2b288c : Fix interval midpoint calculation in vulkan (#46839)
98b3da8b13 : Revert D24452660: [pytorch][PR] Add CUDA 11.1 CI
61ee0242c0 : Fix backcompat in master following revert (#46984)
069232a574 : [FX] Fix corner case in name sanitization (#46958)
cbf90dafe1 : Fix CPUCaching allocator guard bug (#46922)
c3fc17b48e : Fix bit math (#46837)
c9222b7471 : Implement clip_ranges operator for PyTorch
c6858fd71a : Set up benchmarks for ClipRanges operator for Caffe2 and PyTorch
b75b961934 : Fix `requires_grad` arg for `new_full`, `new_empty`, `new_zeros` (#46486)
353e7f940f : Ensure kernel launches are checked (#46474)
50c9581de1 : AT_ERROR if mmap allocation has failed (#46934)
c886c7f6dd : fix: Fixed typing of bool in _ConvNd (#46828)
cd8ed93287 : [quant][graphmode][fx][api] Remove `inplace` option from prepare_fx (#46954)
46b252b83a : Revert D24262885: [pytorch][PR] Added foreach_zero_ API
ddbdbce623 : [jit] Prevent caching of `graph` attribute. (#46960)
d92bf921db : [quant][graphmode][fx] Remove `inplace` option for fuse_fx (#46953)
e299393fd5 : [Gradient Compression] Provide 2 default C++ comm hooks (#46701)
e077a2a238 : [Gradient Compression] Add CppCommHook subclass for supporting the C++ API of communication hook. (#46566)
998b9b9e68 : [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786)
5a8198eb3c : [quant][graphmode][fx][fix] scalar as first input for add/mul (#46751)
810c68fb1d : [OpBench] fix jit tracing with quantized op/tensor by enabling `_compare_tensors_internal` to compare quantized tensors (#46772)
8e37dcb1f3 : Added foreach_zero_ API (#46215)
67c1dc65a3 : [FX] Fix handling of `inf` and `nan` literals (#46894)
53839ac9d7 : Fix internal assert for torch.heaviside with cuda tensor and cpu scalar tensor (#46831)
115bbf9945 : [caffe2] Disable running full grad check in tests by default
8066e89f64 : quant: fix bug with copy.deepcopy of FX prepared quantization models (#46895)
1479ed91be : Add CUDA 11.1 CI (#46616)
c20c840c1b : Install sccache from source (#46672)
64d4b24a12 : Adding link to gcov depending on GCC_VERSION (#46928)
dc53eefd25 : Conditional requirement for py3.6 only (#46932)
79a1d2bd78 : [iOS] Bump up the cocoapods version (#46935)
717e6d8081 : add type annotations to comm.py (#46736)
151f31ba27 : remove event not ready assertion from TestCuda.test_copy_non_blocking (#46857)
8c39f198b4 : Fix typo in setup.py (#46921)
21e60643c0 : [numpy] `torch.log{2,10}` : promote integer inputs to float (#46810)
bbe5bfaa4f : Add GradMode::enabled check to max_pool1d (#46767)
daf2a6a29d : Increase no-output-timeout for OSX builds (#46891)
d5cd781cd3 : Update dper3 to use torch.nan_to_num and nan_to_num_ (#46873)
8640905088 : add sparse_nn_partition (#46390)
4b6e307191 : Replace flatten tensors with flatten loops. (#46737)
6b50ccc41c : [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738) (#46871)
60eded6c0f : Add single element tuple output from to_backend/to_glow (#5029)
bcbb6baccf : Add a warning message that torch.sign would not support complex numbers (#43280)
37da6d26ff : add fburl link to error message (#46795)
9858b012ec : Fix TripletMarginWithDistanceLoss example code (#46853)
4a35280ec2 : [c10] fix weak_intrusive_ptr lock() (#46007)
b3e64c86e0 : Remove loop_test mode (#46618)
af27da93de : Add Vulkan Tensor factory. (#44016)
c9bf03a6c4 : Add Vulkan Tensor. (#44015)
2397c8d1f7 : [pytorch] Improve/fix heuristics for using mkldnn vs native conv (#46675)
a602811da7 : [ROCm] fix bug in miopen findAlgorithm. (#46852)
a4adc1b6d7 : Fix unused variable warning (#46838)
a6cd294c9b : [Gradient Compression] Refactor CommHookInterface and PythonCommHook. (#46512)
adafd3d4b2 : Support RRef.backward() for local RRefs. (#46568)
7731370e71 : CUDA BFloat16 gelu, hardswish, hardsigmoid (#44997)
99cf3b1ce4 : CUDA BFloat16 signal windows (#45155)
13a5be571b : Enable complex backward for torch.take() and tensor.fill_() (#46860)
02dc52f25b : vmap fallback: gracefully error out when vmap over dim of size 0 (#46846)
5e2f17d77a : Add NCCL_ASYNC_ERROR_HANDLING to docs (#46856)
57bf0b596a : [docs] Changing the wording on quantization versioning and support (#46858)
58ed60c259 : Added context manager enabling all futures returned by rpc_async and custom build rpc functions to be automatically waited on (#41807)
25db74bf5e : Revert D24486972: [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat
0c74b43a3f : Update TensorPipe submodule (#46842)
56a3831bc6 : [NVFuser]Benchmark minor update (#46778)
e927b62e73 : [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738)
b5662ba0f0 : [uhm][0/n] add cuda Mod Op (#46732)
5a2b537b54 : Add error messages and workaround for RET failure of containers with a torch class type (#46543)
3e606da0af : Upgrading lcov install to install v1.15 to be compatible with GCC9 (#46847)
83d358da7c : Fix LAPACK functionality detection from static OpenBLAS (#46710)
b61671ccd2 : Enable dtype arg for torch.linalg.norm with order 'fro' and 'nuc' (#46637)
d94bd998ec : Update backward formulas (Re #44444) (#46275)
edbc84aa4a : Fix hash type (#46769)
fa8cd06a5c : Perform explicit cast (#46771)
9cbdd84e15 : Fix compiler warning
f9b9430152 : Support doc_string for TorchBind custom classes (#46576)
7d4c1a5ab0 : Fix type warning (#46770)
37dbc6117f : [quant][eagermode] Add additional_fuser_method_mapping to config (#46355)
13b7855f33 : Support hashing of various data types by implementing generic hashing for IValues (#46441)
789e935304 : Annotate torch.nn.cpp (#46490)
c4892c8efe : [pytorch][tensorexpr] Promote integer arguments to sin/cos/tan to float (#46776)
343260a1cc : [quant][graphmode][fx] Add support for additional_{fusion/quant}_pattern (#46346)
74d81080a0 : Use new_zeros in evenly_distribute_backward (#46674)
aa828bf084 : Support undefined grads in vmap fallback (#46671)
85954164a4 : fix minor bug, message variable does not exist (#46777)
89f368bef8 : Enable XNNPACK on Windows & Update XNNPACK (#45830)
999f7ed3a1 : Refactored ForeachFunctors.cuh (#46660)
822efb7275 : add workflow ID to report tags (#46725)
ccb79f3ac7 : Add option to log subprocess output to files in DDP launcher. (#33193)
e519fcd1aa : Remap net name inside arg.n for AsyncIf operator
3ea26b1424 : [WIP] Push rocm to slow path for foreach APIs (#46733)
c31ced4246 : make `torch.lu` differentiable. (#46284)
52f8d320b3 : [ONNX] Update ONNX doc for indexing export (#46349)
f230245c06 : Revert D24422354: [pytorch][PR] fix-process-group-counter
e0fd590ec9 : Fix incorrect usage of CUDACachingAllocator (#46605)
6c5f634657 : Fix grammar and spelling errors (#46713)
4fd2cce9fa : Check support_as_strided before using empty_strided. (#46746)
129279a374 : [FBGEMM][Transposed Conv] add transposed conv support for fbgemm backend for 1d, 2d, 3d (#46607)
8558c0e612 : Eliminate narrowing conversion (#46730)
511f89eaa9 : Add nvtx.range() context manager (#42925)
88e94da580 : Enable softmax and tiny norm FP16 tests on ROCm (#46363)
6ae0a7c919 : Add ReplaceNaN benchmark as baseline (#46685)
27e2ea4cea : Make add_relu an internal function (#46676)
870a5a0d6d : Enable DataParallel to run zero input Module (#46565)
842494af77 : [quant][fx] EmbeddingBag quantization support (#46678)
e34c825b77 : [quant][fx] Embedding quantization support (#46677)
fe6fb7753e : Clean up use of Flake8 in GitHub CI (#46740)
bf1ea14fbc : [CI][IOS] Add a arm64 ios job for Metal (#46646)
344abd56f9 : [CI][IOS] Rename the IOS_VERSION (#46645)
ce5bca5502 : ProcessGroupNCCL::alltoall_base needs to call recordStream (#46603)
bd90379df5 : [quant][graphmode][fx] Add support for additional_fuse_method_mapping (#46345)
d6519d4e9f : [pt][static_runtime] Add option enable_out_variant (#46690)
f326f6a8a0 : Remove dilation restriction on cuDNN ConvTranspose2d (#46290)
53dff784e2 : [caffe2] Fix inplace ops in onnx::SsaRewrite (#46134)
51bf7bed84 : [caffe2] Allow memonger to optimize nets with inplace(enforced) ops (#46560)
23fad9111e : [quant][graphmode][fx] Add additional_qat_module_mapping (#46344)
982fa07ccb : torch.nn.Unfold accepts 0-dim for batch size (#40689)
c57c560744 : Revert "Push rocm to slow path (#46216)" (#46728)
9ccf85b7b4 : [FX] Make wrapped functions traceable (#46692)
2700932ef2 : [FX] Fix recursion depth issue on Graph deepcopy (#46669)
18d80501a6 : Batching rules for: new_zeros, new_empty (#46606)
c44300884e : Clarify timing of GetDeviceProperty() (#46715)
920ec6651f : [OpBench] fix jit mode run of operator benchmark for ops with parameters (#46694)
06d50b5eb0 : Pull in fairscale.nn.Pipe into PyTorch. (#44090)
b63ddd6f57 : [OSS][Metal] Support Resnet models
93719440b8 : Replace map(lambda constructs (#46462)
25dc0056f2 : [RPC] print exception message on workers that run python functions (#46372)
3112e23428 : [py][vulkan][reland] Add is_vulkan to py api, add vulkan to device type parsing (#46655)
bc1ce58451 : Push rocm to slow path (#46216)
3526b604b1 : Add comment about running C++ executable lint locally (#46698)
52a970bac9 : Minor cleaning of `test_cuda.py` (#46617)
aa9ca85bd0 : Fix interval midpoint calculation (#46666)
7245d2c939 : Avoid scatter for single-device case in DDP (#46304)
e5a2ba2ea1 : Fix benchmark_caffe2
143d1fd9f5 : Namespace cleanup for 1.7 Part 2 (#46673)
16c5b7b3f2 : Avoid leaking has_torch_function and handle_torch_function in torch namespace (#46680)
905ed3c840 : Revised sparse tensor documentation. (#45400)
8e13fe6c44 : [numpy] `torch.sin` : support and promote integer inputs to float (#45733)
98aad933b6 : [pytorch][PR] Record FutureNCCL callback stream on CUDA caching allocator (#45318)
ab28bd528d : [quant][graphmode][fx] Support quantizing FloatFunctional (#46634)
9b5197b763 : [mlf][efficiency] add tensor inference function to last-n collector op (#46693)
fe4f90c40b : Cusolver inverse check info (#46625)
adffd8eb6b : Add const to the first arg 'grad' of Reducer::copy_grad_to_bucket (#46501)
db83ddcb86 : small doc fix (#46599)
adbb50ea67 : Enabling alias annotation checks for all operations during autograd tests (#46601)
33e82c0269 : Update error message to include link to readme. (#46613)
13decddae2 : [reland][quant] Add FixedQParamsFakeQuantize module (#45538) (#46657)
746febdeac : [quant][graphmode][fx] Add additional_object_mapping argument to convert (#46338)
8908f6ad8e : [op-bench] modify import path of configs (#46679)
6011b36080 : Fix `type qualifiers ignored on return type` warning (#46668)
e02a3e190e : DOC: Building libtorch using CMake (#44196)
ff0e20b384 : Config inheritance was added for pytorch project (#46584)
475b4e30e6 : Allow for source code comments at any level of indentation (#46548)
e3b2bfa2a3 : [pytorch] Early return in nn.EmbeddingBag when weight is empty (#46572)
caed29a069 : fix-process-group-counter (#46563)
ce04e527b4 : Bump up windows cudnn version (#46436)
c3c249aa0b : Workaround to pay attention for CUDA version (#46535)
09896eda14 : Fix version comparisons for Python 3.6, 3.10 and 4 (#32389)
65da50c099 : Apply hip vs hipcc compilation flags correctly for building extensions (#46273)
ac4ee0ef5d : Fix typo in docs for interpolate (#46589)
96bc7faa50 : [ONNX] Export var, var_mean and std_mean ops (#45678)
6de619e4a4 : Allow converting parameters of nn.Module to complex dtypes (#44788)
611f028168 : Add Batch-Updating Parameter Server Example to CI Tests (#46510)
cf3d7a2660 : first cut of adding a dangling impl test. fix #45165 (#46484)
62e714c9d9 : Delete CUDAUnaryOps.cpp (#46280)
cebe87fe3a : Revert D24379422: [py][vulkan] Add is_vulkan to py api, add vulkan to device type parsing
8328630315 : avoid inlining kernel lambdas on mobile (#46249)
8357e2edc3 : Back out "Revert D24269034: [fx] Refactor Tracer so that find_module and root args creation could be overridden by implementations" (#46573)
e8fbe54cf5 : [py][vulkan] Add is_vulkan to py api, add vulkan to device type parsing (#46511)
a651b876a7 : preserve non-dense or overlapping tensor's layout in *_like functions (#46046)
2181449068 : Revert D24004795: [quant] Add FixedQParamsFakeQuantize module
f47231bf0e : [caffe2][dnnlowp] Remove openmp usage in quantize dnnlowp op
6cd8b5e9a7 : Provide CMake option to enable Vulkan API. (#46503)
3e041b503f : Add Vulkan job dispatch and flush. (#46008)
cb3c1d17e4 : Promote -Wcast-function-type to an error in builds. (#46356)
42a70dc5a8 : Implement all communication APIs in DistributedC10d new frontend (#46053)
253918ec55 : [quant] Add FixedQParamsFakeQuantize module (#45538)
f83cf2dab3 : [JIT] adding torch.jit.isinstance support (#46062)
fdc5261a20 : Support %-based string formatting (#45976)
f9446cb15a : [quant][refactor] Remove register api and rename get_*_mapping to get_default_*_mapping (#46337)
4f5b55f722 : Revert D24395956: [pytorch][PR] Replace flatten tensors with flatten loops.
2b221a9599 : Remove PyCFunction casts as much as possible. (#46227)
1a3ea46dbf : [StaticRuntime] Threading model (#46219)
e18a8aba95 : Add CUDA 11.1 docker build (#46283)
187e23397c : Remove non-existent trusty image references (#46594)
2f51ddb81f : Replace flatten tensors with flatten loops. (#46539)
9c02e2112e : Automated submodule update: FBGEMM (#46578)
e6ed887908 : Add view test for tensor_split (#46427)
5003fd189c : Add an option to getWriteableTensorData to avoid copy CUDA tensor to CPU (#46524)
5e0bfd7455 : [Build] [CMake] [ROCm] find hsa-runtime64 properly (#45550)
35a35c3498 : Move Open MPI installation to Ubuntu CUDA Docker images (#46569)
0d4590c279 : renaming env var IN_CIRCLECI to a broader name of IN_CI (#46567)
1c8d0d8cc9 : Allow vmap to accept nested python data structures as inputs (#46289)
6025f8148a : Implement `_broadcast_to_and_flatten(pytree, spec)` (#46288)
0285618a11 : Add utilities to support handling of nested python data structures (#46287)
75322dbeb4 : [PyTorch] [BUCK] Replace pt_deps.bzl with a YAML operator dependency file which is generated by the code analyser (#46057)
e5ed037529 : [StaticRuntime] Add a 'speed of light' benchmark. (#46308)
17f8c329df : [NNC] IRSimplifier rules for Compare and Mod (#46412)
a06b95b2ba : [quant][graphmode][fx] Support non_traceable_module/module_class (#46298)
5b0f400488 : Replace list(map(...)) constructs by list comprehensions (#46461)
3d421b3137 : [pytorch] rewrite of the python binding codegen with the v2 API (#46244)
8f12c0e786 : Revert D24269034: [fx] Refactor Tracer so that find_module and root args creation could be overridden by implementations
cda88e8e4b : Fix interval midpoint calculation in register_op_utils
ac146c4820 : [nvFuser] Switching to `CudaFusionGuard` from `BailOut` for nvfuser - update 2 (#46452)
30d687522d : [reland][quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293) (#46364)
f0f10f82f4 : Automated submodule update: FBGEMM (#46443)
7b2e8bec85 : [fx] Refactor Tracer so that find_module and root args creation could be overridden by implementations (#46493)
6dc763df30 : PyTorch: add API usage logging to numeric suite (#46504)
d38a71d579 : `torch.nn.modules.LazyModuleMixin` and `torch.nn.LazyLinear` (Shape Inference II) (#44538)
7f8b02f5b7 : [ONNX] Add test for Batchnorm (#45633)
172ed51a17 : Mark parts of spectral tests as slow (#46509)
e7564b076c : Refactor scalar list APIs to use overloads (#45673)
f06ea4bcac : Add myself as owner of C++ APIs related folder (#46487)
eadc59df55 : Enable TP_USE_CUDA and TP_ENABLE_CUDA_IPC (#46523)
00c779a92b : detect inplace modifications of views earlier (fix #21875) (#46204)
0c5cd8c2b9 : [RFC] Switch PyTorch Selective Build (Custom Build) to use the SelectiveBuilder abstraction (#45722)
bcd68dfa5f : Change codecov comment style to be less verbose (#46499)
351670f004 : Delete libtorch test jobs (#46508)
c3466dabaa : Disable profiling when `getGraphExecutorOptimize` is unset (#46479)
6a2f40dc66 : Expose script_if_tracing as public API (#46494)
da95eec613 : torch.fft: Two dimensional FFT functions (#45164)
495070b388 : [Metal] Add the Python binding for optimize_for_mobile (#46456)
e8ff0f6c5c : [c10] add operator= of intrusive_ptr to weak_intrusive_ptr (#44045)
cc471c6daf : [Metal] Enable optimize_for_mobile on Linux (#46384)
5233ff9f15 : [TensorExpr] Re-enable test for torch.cat, add a test for torch.cat being a single node in a fusion group. (#46447)
d6de9d573a : [TensorExpr] Properly handle input types promotion and special case of empty inputs for aten::cat. (#46500)
0f668d95b6 : [TensorExpr] Fix shape inference logic for aten::cat. (#46482)
58f14d3a28 : Remove catchAllKernel_. (#46354)
04e5fcc0ed : [GPU] Introduce USE_PYTORCH_METAL (#46383)
fa108bd264 : Add flatten loops transformation (#46365)
5da4a08675 : [GPU] Add metal to DispatchKeySet (#46455)
8c629ecc9a : [WIP] Move catchAll to Math (#45939)
d1ca7ef33e : [Gradient Compression] Rename the first arg of pybinding of _register_comm_hook: ddp_model -> reducer. (#46498)
0c9787c758 : caffe2: use at::mt19937 instead of std::mt19937 (10x speedup) (#43987)
e6e83bf802 : [hotfix] remove test.pypi.org (#46492)
11cc7f143d : Run __setstate__ when cloning modules (#45858)
478fa180ee : Removing hack so that NCCL is not removed in base Docker commands (#46495)
89108ba6ea : type check for torch.quantization.stubs (#46475)
997e672a27 : Move NCCL installation for xenial-cuda10.1 to Docker image instead of for every job (#46476)
28f8372bf4 : Avoid mat1 references in mm_mat1_backward (#45777)
dd169ca17c : caffe2/plan_executor: propagate exceptions from reporter substeps (#46424)
24ca2763e1 : [fx] allow custom behavior for args, kwargs, and bool (#45193)
2e2fe8cf3b : [NCCL] Modularize ncclCommWatchdog (#46051)
be0c431874 : Fix implicit cast in custom_function (#46445)
9300a27702 : Make `torch.lu` support complex input on CUDA. (#45898)
5c5484c889 : [Flaky tests] Fix test_all_gather_timeout test (#45989)
c37baa9177 : [caffe2] add concat benchmark (#46457)
b5702e2350 : Fix out-of-bounds access for caching allocator calls (#46439)
d6e6073016 : install lcov in Docker image if coverage is specified (#46404)
7b788d113e : Fix deprecated warnings for nan_to_num (#46309)
ecf63351bc : [mlf][efficiency] modify equalization scale operator to return single output (#46449)
4359c5e036 : [TensorExpr] Correctly handle negative dimensions in aten::cat when lowering to tensor expressions. (#46446)
ec5f81f9d3 : Remove variable_excluded_from_dispatch() check for factory functions. (#46371)
d1745c36dc : fix type check for torch.quantization._numeric_suite (#46330)
92921c82bb : Add named tuple's error message and workaround for RET failure (#46347)
d278e83e69 : Update VariableTypeManual.cpp to not use catchAllKernel. (#46353)
dc7cd97402 : Fixes bug in sspaddmm (#45113) (#45963)
dda95e6914 : More Timer refinement (#46023)
757173a4da : Add Sigmoid operator from Caffe2 (#46286)
16c52d918b : [caffe2] Bypass memonger for in-place ops (#46378)
faf03bd226 : Update default output extension in optimize_for_mobile.cc (#45598)
f96cb9de79 : vmap: added fallback for in-place operators (#46191)
401c85b4d3 : Rename createLevelsBitset -> createVmapLevelsBitset; move it (#46190)
849bc77ee4 : Add quick fix for view/inplace issue with DDP (#46406)
ba92920a28 : Remove codegen for old RegistrationDeclarations.h (#46370)
03c7d5be6b : Add operator benchmark for 4bit/8bit embedding lookups
c99378af1b : Fixing pow for special case between cuda tensors and cpu tensors and reframed test cases a tiny bit (#46320)
7d6d5f4be0 : Migrate CPU_tensor_apply to TensorIterator (#44242)
8f61fa653f : use @mode/ndk_libcxx instead of mode/gnustl
e69f2f82ab : Automated submodule update: FBGEMM (#46395)
c1141b6f68 : Added support for complex torch.pinverse (#45819)
5ce46fbbca : BFloat16 support for torch.sign (#45244)
8c26111adb : Add fence. (#45148)
e3eef0cd7a : Add image sampler. (#45037)
50f833248d : Redo Vulkan command and descriptor pools. (#44496)
1e654a4b7f : Fix error message for scatter reduction (#46397)
528158af47 : Updated derivatives for complex mm, mv, ger, bmm, triangular_solve (#45737)
7f458e16ba : Allow Undefined to get kernel from Math/DefaultBackend. (#46352)
908c23579d : [JIT] Revert Freezing shared type PR (#46285)
b5479737d7 : Add windows JNI support (#44257)
bd449334b8 : Fix formatting issues in torch.tensor_split documentation (#46328)
75809626fb : Stop running clang-tidy on torch/csrc/generic/*.cpp. (#46335)
e366591dc8 : Fix incorrect signatures in get_testing_overrides, and add test for incorrect signatures (#45983)
2d6fd22e24 : Rationalize inlining of kernels into the unboxing wrapper (#42845)
053c252c66 : Update COMPILE_TIME_MAX_DEVICE_TYPES to 12 (#46327)
38c97fb6f0 : [shape inference] add shape inference support
a87a1c1103 : Fix performance issue of GroupNorm on CUDA when feature map is small. (#46170)
75bf5f2b59 : [JIT] Improve class type annotation inference (#45940)
86abc8cd48 : [JIT] Make InsertInstruction overflow check a warning instead of fatal (#46369)
5393588e11 : Add guideline about which dispatch keyword to use in native_functions.yaml. (#46126)
4aaad88790 : Bug fixes in profiling allocator (#45993)
419dafe791 : [Reland] Update native_functions.yaml to add DefaultBackend. (#46236)
22f4a58a45 : [pytorch] activation checkpointing: enable mixing tensor without requires_grad (#45934)
103b100ddc : Bazel build has warnings (#46233)
af8c75e211 : [PyTorch] Stringize kernel tag names consistently during macro expansion and require all tag names to be a compile time character array (#46074)
a69910868a : Fix possible padding length overflow in DistributedSampler (#45329)
ff0af7242b : Revert D24290811: [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict
a38eeeff5c : Make setup.py python 2 friendly (#46317)
e7e919fc34 : Add warning on ProcessGroup and ProcessGroup::Work APIs (#46220)
fc1d6bf135 : [fx] make sure args/kwargs are immutable (#46325)
2bc6caa9e4 : Add three-phase option to OneCycleLR (#42715)
635aebdfab : [quant] Refactoring the mappings files (#44847)
b28b5d3c68 : [ONNX] Update squeeze test for opset 9 (#45369)
6ca03aeb96 : [ONNX] Fix flatten operator (#45632)
d655341adb : [Distributed] General Function for Parsing Environment Variable Flags in PG (#46045)
3ad797c937 : [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293)
2ffb768607 : [Distributed] deleteKey support for HashStore (#46049)
74f13a8b8f : [Distributed] Adding getNumKeys support to the HashStore (#46048)
5500b62f28 : Enable zero batch conv tests for ROCm (#46305)
dec61f93f2 : [ROCm] update GPG key URL in circleci Dockerfile (#46256)
53316e8b97 : [quant] Remove prehook option (#46292)
9d389b1dcc : [ONNX] Preprocess index_put with bool inputs to masked_scatter/masked_fill (#45584)
49903a5cd5 : [quant][graphmode][fx] Move custom_module_class config to prepare/convert_custom_config_dict (#46251)
e7dbaa252e : Update optim.rst for better understanding (#45944)
1f791c06f0 : adding BAND/BOR/BXOR reduce ops to unsupported list for complex numbers. added tests (#46270)
8a074af929 : Added scalar lists APIs for addcdiv and addcmul (#45932)
f2e5ae4ba2 : Undefine bool and vector after including altivec.h (#46179)
45de2ee3ac : Remove Python version upper boundary check (#46315)
69e152e60b : Fix device guard for c10-full ops (#46091)
4534bf5799 : Fix NativeFunctions.h for c10-full ops (#46090)
84771fc64f : [caffe2] Add 10s deadline for all Caffe2 hypothesis fuzz tests
62d37b9f26 : add size_based_partition final (#46282)
b64cf93f05 : [jit] support tracing tensor __setitem__ with dynamic shape (#45828)
38e64cf949 : Revert D24232288: [fx] make sure args/kwargs are immutable
d790ec6de0 : [JIT] Update comment in jit_log.h. (#46301)
d22455128f : [dispatcher] avoid autograd fixup step on non-backend keys (#46135)
61df99b78e : [fx] make sure args/kwargs are immutable (#46121)
965046c445 : [NCCL] Provide additional information about NCCL error codes. (#45950)
f7398759b4 : Only populate grad accumulator to var mapping for find_unused_parameters=True in DDP (#45942)
31bcd96395 : Parallelize the quantization conversion operators (#45536)
d5ca53c955 : Performance fix for torch.cat operator on ROCm (#46097)
09842a44fa : [FX] Allow tracing free functions (#46268)
ac3f23deb0 : Fixed usage of std::move function (#46199)
173363f31a : Use tensor's quantized properties directly in pickler (#46267)
1fcec6e72b : [caffe2] Add operator schema for FP16SparseNorm (#46300)
f89498f3f8 : Allow RPC framework to use rank in addition to WorkerInfo and name. (#46221)
e1c9aa918a : Reformat ivalue_inl.h and ivalue.h (#46174)
952dc7ed87 : [NCCL] Fix Hang in Async Error Handling due to Work logging (#46265)
b1d24dded1 : make a way to disable callgrind (#46116)
6ef41953e6 : [RFC] Generate generated_unboxing_wrappers_everything.cpp for unboxing wrappers codegen to aid debugging (#45872)
5c67cc7a9e : [caffe2] Enable fp16 for SparseNormalize op (#45551)
2118d58d45 : Add some more docs to expecttest. (#46263)
1c3e335c4b : [pytorch][glow][NNPI] Using int32 as indices for embedding_bag operators (#45878)
a37f2749cd : Avoid computing AutogradKey if not needed. (#46252)
ac245f6b45 : Complex autograd doc fix (#46258)
67a0c0af27 : [quant][fx][graphmode] Add prepare_custom_config_dict and convert_custom_config_dict (#46223)
dac680721c : Automated submodule update: FBGEMM (#46271)
95ccf34fb9 : [quant][graph][fix] Set type for GetAttr nodes in remapTypes (#46250)
7b7f2519d9 : Use storage.cpu() for moving storage to CPU in serialization. (#46028)
fc846db667 : .circleci: Fix android publish snapshot job (#46266)
5604997b09 : [quant][refactor] Alphabetize the entries in the quantized import (#46218)
faa9c22a51 : Support pytest for distribution testing (#45648)
ad376f1a62 : trying to make pow work for tensor raised to the power of a scalar (#46185)
1a57b390e8 : Add torch._foreach_maximum(TensorList, TensorList) & torch._foreach_minimum(TensorList, TensorList) APIs (#45692)
5741de883a : Define the record_stream method in native_functions.yaml (#44301)
d705083c2b : Refactor dispatcher and native to use Signature structure. (#45990)
f086032676 : Remove unnecessary byte-for-byte compatibility code that is not needed. (#45975)
8d5c899b19 : Rename legacy_dispatcher to native. (#45974)
527a8bee02 : Reorder dispatcher/legacy_dispatcher types (#45973)
944eb0e31d : Add NativeFunctionGroup (#45918)
9079aea1ac : Rewrite implementation of faithful cpp signatures (#45890)
a3caa719af : fix #45552 - adding add_done_callback(fn) to torch.futures.Future (#45675)
282f4ab947 : Workaround for bug in DistributedDataParallel (#46186)
a277c097ac : [iOS][GPU] Add Metal/MPSCNN support on iOS (#46112)
7f6a1b2bd5 : [quant][fx][graphmode][api] Change API for custom module (#45920)
e6d30c89c1 : Revert D24165889: Update native_functions.yaml to add DefaultBackend.
1f9ddf64d2 : Update native_functions.yaml to add DefaultBackend. (#45938)
ba1e0a88bb : Use const-references in nodes_to_rewrite range loop
4ad4715643 : Fix JIT test config (#46230)
66505b64a5 : Fix incorrect CUDA `torch.nn.Embedding` result when max_norm is not None and indices are not sorted (#45248)
88dcb95e22 : [fx] use a linked list for nodes (#45708)
31ee5d8d8b : Adding information how to control randomness with DataLoader (#45749)
ee3d3e6dba : [pytorch][PR][Gradient Compression] Reduce the peak memory of fp16 compression provided by ddp comm hook (#46078)
87a4baf616 : [pt][quant] Support either min or max in qclamp (#45937)
bed3b40523 : Implement ravel (#46098)
b98e35948f : fix test_serialization not working with Windows. (#46120)
f3db68776c : [NNC] Fix two more bugs in Cuda Half support (#46129)
c02efdefa8 : adding complex support for distributed functions. fix #45760 (#45879)
8de9aa196a : clean up dataclasses installation to only <3.7 (#46182)
ba78eb80ff : including tensorexpr tests in CI for all configs (#46188)
85c3ba5588 : [caffe2] add PlanExecutorTest ErrorPlanWithCancellableStuckNet (#46110)
8d5256e6dd : Made exception message for torch.LongTensor() legacy constructor more readable (#46147)
2070834b9e : Improve error checking of Storage._writeFile. (#46036)
9202c44379 : Fix error in Binomial to retain lazy logit initialization (#46055)
146721f1df : Fix typing errors in the torch.distributions module (#45689)
6a001decf2 : [quant][test] Add mul_scalar test (#46106)
6ba6ecb048 : Only use hacky_wrapper_for_legacy_signatures if an op needs it (#45742)
e1f74b1813 : Fix mkldnn build on legacy x64 arch (#46082)
a814231616 : [fix] torch.kthvalue : handle non-contiguous CUDA tensor (#45802)
3883cdb87e : TensorInferenceFunction checks
1a99689d71 : [caffe2] Fix preprocessor checks for FMA
bbb3f09377 : Automated submodule update: FBGEMM (#46151)
a0a8bc8870 : Fix mistakes and increase clarity of norm documentation (#42696)
496d72d700 : [TensorExpr] Disable and/or fix some failing tests. (#46146)
4c87d337af : [Caffe2] use the real new fbgemm sparse adagrad interface (#46132)
9f743015bf : a few more comments on dispatch key computation methods (#46128)
b7261de0df : [pytorch][te] Add compilation time benchmark (#46124)
43fe45ab0f : [JIT] Add dynamic shape benchmark for NV Fuser (#46107)
689499ffa8 : remove duplicate autograd srcs (#46059)
5e4b3dd25a : Automated submodule update: FBGEMM (#46125)
34951e9adc : [shape inference] adding a new flag to the struct
138c22f8e3 : qnnpack quantized activations: fix memory format issues (#46077)
172036a565 : [NCCL] Add Error log when ProcessGroupNCCL takes down process upon (#44988)
7094c09ff7 : quantization: add API usage logging (#46095)
c73af6040e : [FX] Make `graph_copy` examine existing values in val_map (#46104)
d811d4d7ba : Support DefaultBackend keyword in native_functions.yaml. (#45719)
e33d455ef7 : [Distributed] Set smaller Store timeouts to make c10d tests run faster (#46067)
2fa91fa305 : [NNC] Fix crash when simplifying certain subtractions (#46108)
281463ba0b : [NCCL] Enable send/recv tests (#45994)
3ffd2af8cd : Add exception classification to torch.multiprocessing.spawn (#45174)
da033e0b2d : [Caffe2] use new fbgemm sparse adagrad interface with temp name (#46089)
0ddcc0ce35 : Add alias dispatch key DefaultBackend. (#45718)
f8b3af21f2 : Allow Tensor-likes in torch.autograd.gradcheck (#45732)
59414b359d : Document fix for logspace and linspace (#46056)
c83314e982 : [ci-all tests] Improve logging in ProcessGroupNCCL for debugging purposes. (#46010)
362d9a932e : Remove object-based collective APIs from public docs (#46075)
62554a3bd2 : Prioritize raising error message about unused parameters when rebuild_buckets fails (#45933)
9fb8e33a5b : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
9443033e71 : Automated submodule update: FBGEMM (#46079)
c734961e26 : [cpp-extensions] Ensure default extra_compile_args (#45956)
a5c0dbc519 : Add support for Softmax. (#45286)
87226f72d2 : [caffe2] temp remove ErrorPlanWithCancellableStuckNet (#46080)
0983ddbfd2 : add sharding option to test framework (#45988)
f363a2e106 : Mark top 3 slowest tests as slow (#46068)
487624e369 : [caffe2] plan executor error propagation test with blocking cancellable op (#45319)
8cd3857bc7 : [NCCL] Add torch::cuda::nccl::send/recv (#45926)
b7f7378b2d : [NCCL] support send/recv to/from self when communicator is created on demand (#45873)
96d48178c8 : Make pipeWrite and pipeRead noexcept (#45783)
c86ee082a2 : torch.fft: Add helper functions section to docs (#46032)
2b204e6db3 : [quant][fx][graphmode] Run symbolic_trace in quantization (#45919)
c6672a608b : caffe2 missing cctype header (#46052)
31888b2e77 : [quant][pyper] Rename the sparse argument for embedding_bag ops (#46003)
8c80ee8ba5 : [quant] Set sparse to False for embedding_bag ops in graph mode (#45997)
0cf0b5f2e8 : Minor refactor to normalize assignments (#45671)
64b0686986 : Expose ChannelShuffle (#46000)
89256611b5 : Doc note update for complex autograd (#45270)
e3112e3ed6 : aten::set_grad_enabled should not push as it does not return a value (#45559)
ddcacc736d : Do not rebase select nighly builds on top of master (#46038)
59e4803b94 : Recommit: caffe2/plan_executor: wait for 1 minute after exception and then abort (#45981)
402abdfdf4 : [NNC] cacheAccesses transform (cache_reads + cache_writes) (#45869)
8e8fb8542e : upgrade clang-tidy to 11 (#46043)
f010df35e5 : Added CUDA support for complex input for QR decomposition (#45032)
5f7545adf6 : Update randomtemp to v0.3 (#46025)
1197a38a63 : [JIT] Bind log1p and lgamma (#45791)
338283057b : [JIT] [3/3] Make sure fusion occurs in test_tensorexpr (#45790)
564296f051 : [2/3] [JIT] Make sure fusion occurs in test_tensorexpr (#45789)
1b97ffa07a : [1/3] [JIT] Make sure fusion occurs in test_tensorexpr file (#45788)
636eb18029 : Fixed median nan propagation and implemented nanmedian (#45847)
298e0e0d57 : Refactor gather_ranges_to_dense from Python to C++ (#46021)
d360402f34 : Use out variants of functions used by linalg.norm, where possible (#45641)
d3d8da7a8e : Enable CUDA Fuser for ROCm (#45965)
40828b68e1 : Revert D24099167: [HTE @ clang-tidy] Enable clang-tidy configs inheritance for caffe2 project
283ae1998c : Pin libuv to 1.39 in Windows CI in order to keep version alignment in read me document (#46015)
ea4fbb2e5e : [StaticRuntime] Replace hashtable based workspace with vector<IValue> (#45892)
735d5b8907 : Add complex32 dtype support to CPU/GPU implementation of (#45339)
7d4f5060ad : Fix doc about operator benchmark (#45853)
acca11b898 : [torchscript] Verbose logging of code location causing the error (#45908)
52f2db752d : unify reproducibility notes (#45748)
d93cae00f2 : [HTE @ clang-tidy] Enable clang-tidy configs inheritance for caffe2 project
9dc9a55bc4 : Fix TypeError when torch.jit.load is passed a pathlib.Path (#45825)
6e4de44501 : [TensorExpr] LoopNest: add a constructor that takes Stmt instead of list of Tensors. (#45949)
1036b77416 : [TensorExpr] LoopNest: replace output_tensors_ with output_bufs_. (#45948)
29da553dd9 : [TensorExpr] Loopnest: unify intermediate_tensors_ and temp_bufs_. (#45947)
598caddd93 : [TensorExpr] Add shorthand versions for `splitWith{Mask,Tail}` functions. (#45946)
b65ffa365c : [TensorExpr] Nuke `Function` class and directly use `Tensor` instead. (#45936)
c9caa828f5 : Throw special exception when backend compilation is met with fatal error (#45952)
a92b49f7c8 : [Onnxifi] Don't throw exception when we cannot write out debug files (#45979)
99d3f37bd4 : Run gradgradcheck on torch.fft transforms (#46004)
c19b9cd18d : Add torch::cuda::nccl::all2all (#45900)
ef4817fe5a : Add `tensor_split` function, based on `numpy.array_split` (#45168)
b2bff9e431 : Workaround for cublas bug for 45724 (#46001)
8d14b50e94 : codegen: Improve array default handing (#45163)
00b8ebe60c : [FX] Preserve type annotations on generated code in Graph (#45880)
81d40aaf96 : Add `[zc]heevd` to the list of MKL symbols exported from torch_cpu (#46002)
c59c4b0d77 : Fix cholesky TF32 tests (#45492)
903acc6b83 : CUDA BFloat16 support of clamp, remainder, lshift, rshift (#45247)
154347d82f : Fix distributed documentation for asynchronous collective Work objects (#45709)
19da1d22fe : [NNC] Registerizer V2, supporting partial and conditional replacement (#45574)
a36f11a3a5 : [FakeLowP] T76913842 Make AddFakeFp16 take int inputs (#45992)
c86655a815 : [JIT] Fix Dict bug in constant hashing (#45929)
72e4f51bc0 : [JIT] fix dict update (#45857)
de0d0bd5ee : Revert D24093032: Improve logging in ProcessGroupNCCL for debugging purposes.
505be08c75 : [dist_optim] serialize compilation when creating dist_optim (#45871)
ce82b522c8 : Define objects using classes instead of namedtuples in torch.utils.data._utils.worker (#45870)
0927e02a6a : [caffe2] Do not run RemoveOpsByType on recurrent networks (#45986)
c8d76ff7dc : Improve logging in ProcessGroupNCCL for debugging purposes. (#45780)
8fb32b9f55 : Parametrize # of longest tests in print_test_stats (#45941)
9679e1affc : annotate torch.autograd.* modules (#45004)
83d2c9a232 : [quant] Add quantized Sigmoid module (#45883)
30bf799f9c : `torch.matrix_exp` doc fix (#45909)
b186831c08 : Automatic update of fbcode/foxi to 6a4e19a2aaf7ae4b9fa9597526e65b395d5e79ad (#45951)
5a2773702f : add test sharding to CUDA on linux (#45972)
5ce31b6f3f : [ONNX] Improve error handling for adaptive_pool (#45874)
1bb2d41b68 : Revert D20850851: caffe2/plan_executor: wait for 1 minute after exception and then abort
5640b79bf8 : Allow consumer ops to sync on GraphRoot's gradient (#45787)
bb99bea774 : Compress NVCC flags for Windows (#45842)
be45c3401a : [JIT] Make objects throw Python AttributeError on nonexistent attr access (#45911)
8cdb638c62 : [FX] Track use nodes in Node (#45775)
205ab49612 : [packaging] simpler dependency plotting (#45686)
317b6516bc : [quant] Add quantized::sigmoid that take output_scale/output_zero_point as input (#45882)
ed1552a48f : Add note about in-place weight modification for nn.Embedding (#45595)
8b39498a23 : codegen: Allow string arguments to have defaults (#45665)
1b31ed3ad6 : [quant] Refactor qembeddingbag to remove duplicate code (#45881)
43dc7ef933 : [quant] Support for 4-bit quantized EmbeddingBag module (#45865)
11c32611d7 : [quant] Support 4-bit embedding_bag operators using the dtype quint4x2 (#45752)
5c283fa292 : [quant] Add 4-bit embedding_bag prepack/unpack support using quint4x2 (#45751)
e8d8de32b4 : [StaticRuntime] Implement StaticRuntime::benchmark (#45639)
275bb5e801 : Fix flakiness in caffe2/test:serialization - test_serialization_new_format_old_format_compat (#45915)
4fdba30500 : [JIT] Add API for ignoring arbitrary module attributes (#45262)
49af421143 : Embed callgrind headers (#45914)
f5e70a7504 : fix test flakiness caused by sys.getrefcount(None) (#45876)
624084e6d6 : [te][llvm] Enable fused multiply-add (fma) in code generation (#45906)
f2e569461b : [te] Tiled (m=32 x n=32) gemm benchmark (#45905)
50f89578dd : [te] Add a benchmark harness (#45875)
5ff31620b7 : [te] Add a 2D convolution example test (#45514)
14997f2125 : [quant][graphmode][fx] Add warning for unsupported case (#45714)
5072728d88 : Fix stride printing/parsing formatting (#45156)
255b0e839f : C++ APIs CUDA Stream Note (Set/Get part) (#45754)
a3662fa78c : Minor gradcheck update to reduce computations (#45757)
e154b36685 : Standardized clamp kernels to Numpy-like implementation (#43288)
a69a78daa2 : Use smaller N to speed up TestForeach (#45785)
c1af91a13a : [caffe2] SliceOp axes indexing fixes. (#45432)
3fbddb92b1 : caffe2/plan_executor: wait for 1 minute after exception and then abort (#45297)
64681d6bec : Add all remaining method declarations from torch.distributed Python API to C++ (#45768)
0da6730f02 : [quant][graphmode][fx][eagermode] Add leaky relu support in quantization workflows (#45712)
fb50fcaa82 : [C2] Add string equality operator (#45886)
fcc7f272de : maximum number of threads per block for sm_86 is 1536 (#45889)
ba642d36ff : ReplicationPad accepts 0-dim batch size. (#39137)
8b7ee33ee6 : [quant] Add quantized LeakyReLU module (#45711)
930bddd403 : Cleanup nccl.cpp (#45899)
d1fc1555d4 : [quant] Add quantized::leaky_relu that takes scale/zero_point as input (#45702)
001a7998b4 : Disabling XNNPACK integration test in tsan mode (#45850)
3510f19c5f : added some more details + debugging steps to CONTRIBUTING.md (#45903)
abedd9a274 : Reduce size of test_unsqueeze to resolve consistent timeout issue (#45877)
9728584cca : Replaced whitelist with allowlist (#45796)
a09e1098e7 : Profiling allocator for mobile. (#43951)
b1373a74e0 : Don't export enums for CUDA sources on Windows (#45829)
be137e45cd : reorganizing tests so that test1 and test2 are balanced in timing (#45778)
67889db8aa : Replaced BLACKLIST with BLOCKLIST (#45781)
8bc0c755be : adding option to move excluding to run_test.py instead of test.sh (#45868)
8a1e100466 : Stricter backward compatibility check (#45773)
2fbe5971b3 : [pytorch/cuda] apply 16-bit mask to the index for device guard registry (#45485)
d44eaf63d1 : torch.fft helper functions (#44877)
e4efc420ae : Correct `Categorical` docstring (#45804)
7eb0a71484 : update persons of interest (#45803)
bf85642c4c : Remove lock from GraphTask::set_exception_without_signal. (#45867)
10d86d1196 : [NCCL] create NCCL communicator for send/recv on demand (#44922)
59083d6176 : [NCCL] Support NCCL Send/Recv (#44921)
b04ae953b4 : [FX][WIP] Mutable Graph APIs (#45227)
1558a3657b : Add LazyNVRTC (#45674)
54aaffb7c7 : Avoid NaN values in torch.cdist backward for p<1 (#45720)
4ab73c1f74 : [docs] Fix EmbeddingBag docs (#45763)
78f055272c : [docs] Add 3D reduction example to tensordot docs (#45697)
26a9012f84 : [fx] import used modules for code gen (#45471)
5177f8de2b : Revert D23398534: [pytorch][PR] [ONNX] Improve error handling for adaptive_pool
f18cc9c57d : Change type inferred from empty annotation (#45360)
a9a9d0b181 : Rocm skip test cases (#45782)
519c086418 : Revert D24042344: [C2] Add string equality operator
9a668f94bb : [jit] allow slicing multiple dimensions with indices (#45239)
f11f9a8c1f : [pytorch][improvement] Improve torch logging to identify problematic key (#45766)
9f4abcad9d : Automated submodule update: FBGEMM (#45713)
a83696ad53 : quant docs: add API summary section (#45848)
c80ec91b00 : [iOS] Bump up the cocoapods version (#45862)
21fa877026 : [quant][test] Remove numeric equivalence test for debug and non-debug option (#45852)
14e6e50700 : Refactor computeLRWorkDim (#45812)
ffbffc0436 : fixed formatting in function docstrings in torch.autograd.functional (#45849)
615013edcb : setup: Dataclasses only when < 3.7 (#45844)
b5a2f04089 : Disallow creation of ProcessGroupNCCL without GPUs. (#45642)
45ddeb5ce6 : [ONNX] Improve error handling for adaptive_pool (#43032)
adc21c6db2 : Rename jobs and cli switches for testing GraphExecutor configurations to something a little bit more sensical. (#45715)
cf48872d28 : [C2] Add string equality operator
162717e527 : grammatically update index.rst (#45801)
3ab88c3903 : Enable TorchBind tests on ROCm (#45426)
e829d4fba9 : [op-bench] fix jit mode (#45774)
f65ab89edd : [numpy] Add torch.nan_to_num (#44592)
e1ff46b6e5 : CUDA BFloat16 TopK (#44755)
2ab74a4839 : [FX] Make Tracer.trace() just return a Graph (#45704)
8a6b919163 : [StaticRuntime] Fix broken tests (#45813)
24fa2daea6 : Revert D24100389: Revert D24072697: [te] Get llvm codegen to compile with llvm9 and llvm-fb
ff568a0e6b : Revert D24072697: [te] Get llvm codegen to compile with llvm9 and llvm-fb
3a27fc966a : Test torch.svd using complex float and double numbers (take 2) (#45795)
d8a9c2c27e : [iOS][CI] Fix the timeout for nightlies (#45798)
2b48dd168d : [StaticRuntime] Integrate Static Runtime into PyTorchPredictor (#45640)
546aab66c1 : Revert D24027761: Update backward definition for more operators and reenable tests in test_ops.py
31621c828d : Fix JIT tests when run locally in fbcode (#45776)
53aea60bce : [FX] Make output a non-special Node (#45599)
2fa062002e : CUDA BFloat16 infrastructure (#44925)
8cb7280242 : Revert "Remove device maps from TensorPipe for v1.7 release (#45353)" (#45762)
d150d3e276 : Make sure each warnings.warn only executes once inside TorchScript. (#45382)
73e9daa35f : [caffe2] Optimize Dedup version of RowWiseSparseAdagrad fused op by WarpReduce (#45649)
c31066ac9d : Torch Integration Test Formatting Changes (#45740)
7d809f5d8e : Update backward definition for more operators and reenable tests in test_ops.py (#44444)
e3d2defdc8 : [te] Get llvm codegen to compile with llvm9 and llvm-fb (#45726)
5a47a2126d : Revert D24018160: [pytorch][PR] Test torch.svd using complex float and double numbers
f8c1ca5dd8 : Enable NamedTuple data type to work with DDP (#44220)
8619de84f2 : Fix cuDNN error message when it's Conv2d (#45729)
322855e380 : type check for torch.quantization.observer (#45630)
db8b076272 : Change signature for torch.poisson (#45656)
7726754e70 : Add function signature for pixel_shuffle (#45661)
6acd7b686c : adding sharding option to run_test.py (#45583)
3799ba83e5 : [Docs] Adding Store API Docs (#45543)
a052597e6c : Bump nightlies to 1.8.0 (#45696)
6e43f0db8b : Use correct signatures for METH_NOARGS. (#45528)
cdf93b03de : Add string versions of argument funcs in jit Node (#45464)
b234acd414 : Exposes SparseToDenseMask Caffe2 Operator (#45670)
ad31068fe9 : Add a distributed package reviewer (#45744)
24187a0b42 : Enable type check for torch.quantization.fake_quantize (#45701)
888f3c12e7 : Test torch.svd using complex float and double numbers (#45572)
4d08930ccb : remove beta defaulting in smooth_l1_loss_backward. added to the bc whitelist (#45588)
869b2ca048 : some documentation and style fixes to smooth_l1_loss (#45587)
c703602e17 : make broadcasting explanation clearer in matmul doc: #22763 (#45699)
82cc86b64c : VariableKernel calls into scattered C++ api (#44158)
6e2eee2b9d : Add faithful C++ API (#44087)
9201c37d02 : Use addmm directly for 1x1 convolution (#45557)
1a2d3b6a75 : [quant] PerChannelFloatQParams support for quint4x2 dtype (#45594)
04526a49d3 : [quant] creating quint4x2 dtype for quantized tensors (#44678)
a0d08b2199 : Set the default bailout depth to 20 (#45710)
402caaeba5 : [docs] Update docs for NegativeBinomial (#45693)
36de05dbf6 : passing all arguments to sccache wrapper script should be quoted as "$@" (#45582)
f6dc256bc6 : example of splitting up an FX graph into smaller subgraphs with own submodules (#45404)
1552a926a3 : migrate cuda implementation of take() from TH to ATen (#45430)
a015ba8dd5 : migrating the take() fn from TH to ATen (#45283)
fc4209bd4f : Fix the bucketization wrong doc for right argument (#45684)
4c1e50eb5c : remove skip annotations since we already disabled the tests wholesale (#45698)
cbdba7cc1e : win job for the legacy executor (#45612)
0393a1e8b9 : add an indexer to SymbolicShape (#45450)
0de5824f36 : [iOS][CI] Upgrade xcode version to 12.0 (#45677)
e8e0fca99e : [iOS][CI] Update the dev cert (#45651)
de3a48013a : Use CAFFE2_USE_MSVC_STATIC_RUNTIME to determine when to avoid waiting for global destructors on Windows (#43532)
4f685ecc25 : [reland][quant][graphmode][fx] Merge all quantization mode (#45292) (#45672)
18253f4a48 : Fix BUILD_CAFFE2 if FBGEMM and NNPACK are not built (#45610)
5959de3aeb : setup: Only include dataclasses for py < 3.8 (#45611)
93be03cec0 : [quant] torch.mean add path for unsupported QNNPACK modes (#45533)
4564444c91 : [RFC][caffe2] TaskGroup.__repr__ shouldn't have side effects
03e4e94d24 : Find single partition (#45429)
dcda11c4d3 : Disable tcuda_fuser tests in Profiling Mode (#45638)
381f6d32a7 : [docs] Fix hyperlinks for nn.CrossEntropyLoss (#45660)
1efdbfabcc : [docs] Fix back quote rendering in loss modules docs (#45662)
77cd8e006b : Added support for complex torch.symeig (#45121)
4583edb5d6 : Add NativeFunction.signature and kind. (#45131)
41bd5a5ee0 : Switch all Sequences in tools.codegen.model to Tuple (#45127)
a242ac8c27 : Update torchvision version to current latest master (#45342)
72bc3d9de4 : Use MTA for amp grad unscaling, enforce op math type in MTA functors, and allow op lambdas (#44778)
84cf3372d1 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
592b398e82 : [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
c36b354072 : Revert D23913105: [quant][graphmode][fx] Merge all quantization mode
78b95b6204 : Revert "Revert D24024606: [FX] Shape propagation example" (#45637)
4339f5c076 : [PyTorch][QPL] Add instance_key into MOBILE_MODULE_LOAD_STATS logging. (#45518)
d306d0c2b1 : remove redundant PE(profiling executor) jobs in CI (#45397)
3da4cea658 : [ONNX] Add dim_param support in export with onnx shape inference (#44920)
ffcb0989e7 : [quant][graphmode][fx] Merge all quantization mode (#45292)
3f440d74fc : [PyTorch][QPL] Add instance_key into MOBILE_MODULE_STATS logging. (#45517)
75fc263579 : [TensorExpr] Add a tensor expressions tutorial. (#45527)
9d5607fcd9 : [quant] Use PlaceholderObserver as default dynamic quant observer (#45343)
2b13d9413e : Re-land: Add callgrind collection to Timer #44717 (#45586)
3a2d45304d : [Experimental][Partial] New implementation for torch.distributed APIs in C++ (#45547)
0b3ad5404a : [bot] Add quantization triage bot script (#45622)
869b05648d : Revert D24024606: [FX] Shape propagation example
f2c2b75e80 : flush the buffer when printing the IR (#45585)
6fde2df1b8 : [JIT] Update JIT triage project board workflow (#45613)
4be42034b6 : Clear shape information before finalizing graph-mode quantization (#45282)
85a70ce71f : Add multiline string dedent support (#45580)
56840f0a81 : Prevent overflow in bucketize binary search
2596113a79 : Add fuse support for batchnorm with affine=False (#45474)
6b42ca2d69 : [ONNX] Update embedding_bag export (#44693)
ac9a708ed0 : [FX] Shape propagation example (#45589)
ffd50b8220 : SET USE_DISTRIBUTED OFF when libuv is not installed (#45554)
c9bb990707 : [c++] Distance-agnostic triplet margin loss (#45377)
181afd5220 : Add an option to DDP to take a list of parameters to ignore upfront. (#44826)
c112e89cc6 : [quant] Make choose_qparams_optimized return Tensors to preserve dtype (#45530)
ce9df084d5 : [pytorch] Replace "blacklist" in test/test_mobile_optimizer.py (#45512)
a245dd4317 : add dllexport before template specialization functions for windows build (#45477)
5539066d12 : [quant][graphmode][fx] Support quantization for custom module (#44074)
51d0ae9207 : Revert D24010742: [pytorch][PR] Add callgrind collection to Timer
6c4aa2a79c : Revert D24002415: Some fixes to smooth_l1_loss
4f3920951e : type check for torch.quantization.quantize_jit (#45548)
939e0389de : Update test_multi_tensor_optimizers test (#45510)
415ed434aa : Add whitelist for complex backward (#45461)
7e863475d7 : Upgrade ReadMe document to guide user to install libuv(1.39) in conda env on Windows platform (#45553)
96540e918c : Add ShuffleDataset with buffer (#45290)
fdbed7118e : Some fixes to smooth_l1_loss (#45532)
e02868e12d : Unify Transformer coder Constructors (#45515)
7566823779 : Enable PE + TE (#45546)
9b27e0926b : Add callgrind collection to Timer (#44717)
f5c95d5cf1 : Source code level attribution in profiler (#43898)
c2c7099944 : Fix docs for kwargs, q-z (#43589)
b4ba66ae32 : Print tensor shapes and convolution parameters when cuDNN exception is thrown (#45023)
93650a82c9 : Move prim::tolist math.log and aten::cpu to lite interpreter for translation model (#45482)
4aca63d38a : [TensorExpr] Change API for creating Load and Store expressions. (#45520)
772ce9ac2c : Fix memory corruption when running torch.svd for complex.doubles (#45486)
ccad73ab41 : Fix D23995953 import.
c87ff2cb90 : Enable transposed tensor copy for complex types (#45487)
0a15646e15 : CUDA RTX30 series support (#45489)
c1e6592964 : Enable type-checking of torch.nn.quantized.* modules (#43110)
375a83e6c1 : Annotate torch.utils.(tensorboard/show_pickle/hipify) (#44216)
eb39542e67 : Add typing annotations for torch.utils.data.* modules (#44136)
33aba57e4c : Patch generate files for system protobuf (#44583)
22a34bcf4e : ROCm ❤ TensorExpr (#45506)
637570405b : Disable multi tensor tests on rocm (#45535)
06a566373a : [PyTorch/NCCL] Fix async error handling (#45456)
ef41472544 : Create experimental FX graph manipulation library (#44775)
d642992877 : Quantized operators template selective (#45509)
ab5cf16b6c : fix standard deviation gradient NaN behavior (#45468)
18876b5722 : Update backward formula for torch.dot and add backward definition for torch.vdot (#45074)
147c88ef2d : Add docs to a pytorch.github.io/doc/tag directory when repo is tagged (#45204)
b66ac1e928 : Updates nonzero's as_tuple behavior to no longer warn. (#45413)
0df99ad470 : Remove unnecessary __at_align32__ in int_elementwise_binary_256 (#45470)
6e55a26e10 : Move mobile specific CPUCachingAllocator to c10/mobile folder. (#45364)
b2925671b6 : Updates deterministic flag to throw a warning, makes docs consistent (#45410)
aa2bd7e1ae : Conservative-ish persistent RNN heuristics for compute capability 8.0+ (#43165)
f47fd0eb72 : Updated `cholesky_backward` for complex inputs (#45267)
15f85eea18 : Support bfloat16 and complex dtypes for logical_not (#43537)
ea59251f51 : Fix model_name not logged properly issue. (#45488)
09b3e16b40 : [JIT] Enable @unused syntax for ignoring properties (#45261)
5f49d14be2 : Add mobile_optimized tag to optimized model. (#45479)
17be7c6e5c : [vulkan][android][test_app] Add test_app variant that runs module on Vulkan (#44897)
2c300fd74c : [android][vulkan] Module load argument to specify device cpu/vulkan (#44896)
fe9019cbfe : Reorganized Sorting.cpp method order (#45083)
ab5edf21b0 : Revert D23789657: [wip] fast typeMeta/ScalarType conversion approach 2
b3135c2056 : Enable torch.cuda.amp typechecking (#45480)
df0de780c3 : Add cusolver guard for cuda >= 10.1.243 (#45452)
bb19a55429 : Improves fft doc consistency and makes deprecation warnings more prominent (#45409)
0a38aed025 : Auto set libuv_ROOT env var for Gloo submodule on Windows platform (#45484)
6d37126a10 : Makes rdiv consistent with div (#45407)
7cde662f08 : Add check for Complex Type to allow non integral alpha. (#45200)
0806c58e9f : Optimize view_as_complex and view_as_real (#44908)
87f98a5b54 : Updates torch.floor_divide documentation to clarify it's actually torch.trunc_divide (or torch.rtz_divide) (#45411)
37f9af7f29 : Missing tests about torch.xxx(out=...) (#44465)
56af122659 : Revert D23966878: [pytorch][PR] This PR flips a switch to enable PE + TE
1ed1a2f5b0 : [wip] fast typeMeta/ScalarType conversion approach 2 (#44965)
489af4ddcb : [quant] Add quant APIs to save/load observer state_dict (#44846)
bb478810e0 : [quant] torch.max_pool1d (#45152)
b86008ab75 : [TensorExpr] Remove buf_ field from class Tensor. (#45390)
3c33695a6d : [TensorExpr] Rename `Buffer` to `Placeholder`. (#45389)
92306b85d5 : [TensorExpr] Consolidate {buffer,function,tensor}.{h.cpp} in tensor.{h,cpp}. (#45388)
d2623da52c : replaced whitelist with allowlist (#45260)
8c309fc052 : Add more tests for mt optimizers (#45475)
6bdb871d47 : [FX] Lint pass for Graphs (#44973)
b0bdc82a00 : [FX][EZ] Fix bug where copying node made non-unique name (#45311)
417e3f85e5 : Support tuple inputs in NN Module test (#44853)
dddb685c11 : This PR flips a switch to enable PE + TE (#45396)
50b91103a9 : add self cuda time to avoid double/quadruple counting (#45209)
35596d39e9 : Coalesce TLS accesses in RecordFunction constructor (#44970)
5a6a31168f : add circle ci job name dimension to report test stats (#45457)
5be954b502 : Fix WorkerInfo link format (#45476)
8e47fcba5f : Update docs for RPC async_execution (#45458)
c5ade5f698 : Fix no_sync docs (#45455)
6967e6295e : Fix DDP docs (#45454)
534f2ae582 : Disable inplace abs for complex tensors (#45069)
208df1aeb8 : Use python 3.8 in pytorch docker image (#45466)
8c66cd120b : Disable complex inputs to torch.round (#45330)
0c8a6008ac : Fix torch.pow when the scalar base is a complex number (#45259)
a0f0cb1608 : [JIT] Add test for ignored class type property (#45233)
4af4b71fdc : [JIT] Update docs for recently added features (#45232)
52cbc9e4ec : [TensorExpr] Always inline and DCE in the LLVM backend (#45445)
7ac872b934 : [JIT] Modify to_backend API so that it accepts wrapped modules (#43612)
5855aa8dac : Type check quasirandom (#45434)
49b198c454 : type check for torch.testing._internal.common_utils (#45375)
96f8755034 : Fixed handling of nan for evenly_distribute_backward (#45280)
6a206df891 : 20000x faster audio conversion for SummaryWriter (#44201)
e54e1fe51e : [package] Add dependency viz (#45214)
331ebaf7cb : [Distributed] Adding Python tests for the TCPStore getNumKeys and deleteKey (#45402)
6b65b3cbd8 : [Distributed] DeleteKey API for c10d TCP Store (#45401)
190f91e3db : Adding Histogram Binning Calibration to DSNN and Adding Type Double to Caffe2 ParallelSumOp/SumReluOp
1097fe0088 : Remove CriterionTest.test_cuda code for dtype None. (#45316)
a4486fe7ba : [ROCm] Print name irrespective of seq number assignment for roctx traces (#45229)
c6b7eeb654 : Gh/taylorrobie/timer cleanup (#45361)
a77d633db1 : [ONNX] Fix view for dynamic input shape (#43558)
5d1fee23b3 : Remove convert_target from NN tests. (#45291)
986af53be2 : type check for torch.testing._internal.codegen:* (#45368)
7a4c417ed3 : Fix typo (#45379)
57c18127dc : [ONNX] Update div export to perform true divide (#44831)
9163e8171e : Adding Type Double to Caffe2 Mean Op
47debdca42 : Document change for DDP enabled on Windows platform (#45392)
722faeb2a4 : [RELAND] Added optimizers based on multi tensor apply (#45408)
87b356d093 : [static runtime] Split out graph preparation from runtime (#44131)
6ab1c0b1ca : Disable a few tests in preparation to enabling PE+TE (#44815)
36c3fbc9e3 : CUDA BFloat Conv (non-cuDNN) (#45007)
03342af3a3 : Add env variable to bypass CUDACachingAllocator for debugging (#45294)
993628c74a : Build shape expressions and remove outputs that are only used by `aten::size`s (#45080)
e5242aaf89 : Update TensorPipe submodule (#45433)
48d29c830d : [hotfix] disable problematic cuda tests on rocm builds (#45435)
e2ffdf467a : docker: Add torchelastic to docker image (#45438)
e4950a093a : Backward support for generalized eigenvalue solver with LOBPCG in forward [only k-rank SYMEIG case] (#43002)
6417a70465 : Updates linalg warning + docs (#45415)
7818a214c5 : [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
95a97e51b5 : [ONNX] Improve scripting inplace indexing ops (#44351)
13f76f2be4 : Fix preserve submodule attribute in freezing (#45143)
c3bf402cbb : handle onnx nll with default ignore index (#44816)
8bdbedd4ee : Revert "Updates and simplifies nonzero as_tuple behavior"
8b143771d0 : Updates and simplifies nonzero as_tuple behavior
5b839bca78 : [ONNX] Optimize export_onnx api to reduce string and model proto exchange (#44332)
4005afe94b : [ONNX] Update narrow for dynamic inputs (#44039)
78caa028b6 : Revert D23009117: [Distributed] DeleteKey API for c10d TCP Store
f84b2e865f : Revert D23878455: [Distributed] Adding Python tests for the TCPStore getNumKeys and deleteKey
bc5710f2f7 : Benchmarks: tweak PE config settings. (#45349)
a07d82982a : CI: Add a run of FastRNN benchmarks in default executor/fuser configuration. (#45348)
8cef7326f4 : Benchmarks: add 'default' options for fuser and executor. (#45347)
37a671abc7 : Revert D23828257: Quantization: add API summary section
110aa45387 : Revert D23842456: Quantization: combine previous summary with new summary
3da1061059 : Revert D23916669: quant docs: add reduce_range explanation to top level doc
54a253fded : Revert D23931987: Added optimizers based on multi tensor apply
e52762cbb7 : Revert D23917034: quant docs: document how to customize qconfigs in eager mode
23dfca8351 : Support record_shapes in RPC profiling (#44419)
19dda7c68a : Fallback to CPU when remote end does not have CUDA for profiling (#44967)
2b21e7767e : Added optimizers based on multi tensor apply (#45299)
0fa551f0ab : [c2] Fix int types for learning rate
cf808bed73 : [Distributed] Adding Python tests for the TCPStore getNumKeys and deleteKey (#45223)
addf94f2d6 : [Distributed] DeleteKey API for c10d TCP Store (#43963)
304e1d1e19 : [Distributed] getNumKeys API to c10d TCPStore (#43962)
d9af3d2fcd : [quant] ConvTranspose warnings (#45081)
92189b34b7 : Add get_all_users_of function to GraphManipulation (#45216)
7763e1d7b1 : quant docs: document how to customize qconfigs in eager mode (#45306)
eb39624394 : quant docs: add reduce_range explanation to top level doc (#45305)
278da57255 : Quantization: combine previous summary with new summary (#45135)
d2bd556e7d : Quantization: add API summary section (#45093)
958c208666 : [quant] conv_transpose graph patterns (#45078)
606b1a9a2e : Move xla codegen to aten. (#45241)
32c355af5b : [dist_optim] introduce distributed functional optimizer (#45221)
08caf15502 : [optimizer] refactor Adam to use functional API (#44791)
0444c372e1 : [optimizer] introduce optimizer functional API, refactor Adagrad (#44715)
8ab2ad306d : Enable `torch.cuda.nccl` typechecking (#45344)
5211fb97ac : Remove device maps from TensorPipe for v1.7 release (#45353)
439930c81b : adding a beta parameter to the smooth_l1 loss fn (#44433)
37513a1118 : Use explicit templates in CUDALoops kernels (#44286)
a2b4177c5b : Add barrier() at the end of init_process_group and new_group. (#45181)
3b7e4f89b2 : Add deprecation warning to PG backend and make TP backend stable. (#45356)
04be420549 : [static runtime] Remove ops in static from backwards compatibility checks (#45354)
eee7dad376 : Add torch.do_assert, which is symbolically traceable (#45188)
7c5436d557 : [RPC profiling] Add tests to ensure RPC profiling works on single threaded (#44923)
27ab9bc0f9 : [RPC profiling] Extend RPC profiling to support async function execution over RPC. (#44664)
d5748d9a1a : Enable binary ops with Scalar Lists with for foreach APIs (#45298)
f07ac6a004 : Fix Windows build failure after DDP PR merged (#45335)
c8166d4b58 : Add `torch.cuda.comm` to typechecking CI (#45350)
22401b850b : port all JIT tests to gtest (#45264)
5a0514e3e6 : [pytorch] Update fmt to 7.0.3 (#45304)
dc9e9c118e : CUDA BFloat16 neg (#45240)
e5f6e5af13 : Add Deep and wide to test and flatten/tranpose for good measure (#44129)
d1a11618f5 : [static runtime] Add _out variants and reuse memory (#44128)
d1d9017a66 : [NNC] fix Half conversion of immediates in Cuda backend (#45213)
536580e976 : Vectorize bitwise_not (#45103)
a117d968f6 : [quant][graph] Remove redundant aten::wait calls in the graph (#45257)
8b00c4c794 : [ONNX] Correct a minor typo in warning (#45187)
b70fac75ac : CMake: Fix python dependencies in codegen (#45275)
78fcde9c50 : Trace scattered tensor options arguments (#44071)
2ac7de7d53 : Remove hacky_wrapper from BackendSelect kernels (#44062)
043bd51b48 : Remove hacky_wrapper from VariableType and TraceType (#44005)
bf8cd21f2a : Py transformer coder test (#43976)
2739a7c599 : Byte-for-byte compatibility fixes in codegen (#44879)
00e704e757 : [fix] torch.repeat : dim-0 backward (#45212)
76ee58e2ec : [TensorExpr] Move inner loops vectorization logic to its own method (#45287)
241afc9188 : Migrate `addr` from the TH to Aten (CPU) (#44364)
99e0a87bbb : [nvFuser] Latency improvements for pointwise + reduction fusion (#45218)
95df8657c9 : Enables test linalg (#45278)
bdf329ef8a : SyncBN: preserve qconfig if it exists (#45317)
103fa3894a : Revert D23841786: [pytorch][PR] Enable distributed package on windows, Gloo backend supported only
bc3151dee0 : [quant] Remove unused qconfig argument in qat linear module (#45307)
31ae8117ba : [RFC] Remove per-op-registration related code in caffe2/tools/codegen/gen.py (#45134)
0122299f9b : Enable distributed package on windows, Gloo backend supported only (#42897)
c6500bcf14 : [reland] Make grad point to bucket buffer in DDP to save memory usage (#44344)
630bd85aae : [pytorch] refine dispatch keys in native_functions.yaml (2/N) (#45284)
7e5492e1be : [minor] Fix undefined variable (#45246)
0f2c648c97 : log metadata when model loading failed (#44430)
03dde4c62a : Resend diff D23858329 (#45315)
677a59dcaa : [aten] Call fbgemm functions for embedding prepack/unpack (#44845)
0b6e5ad4a9 : Resolve comments in #44354. (#45150)
92ebb04f92 : added check for NumberType (#44375)
bee1d448e7 : Fix test_rpc_profiling_remote_record_function (#45162)
5dd288eb06 : [JIT] Regularize tensorexpr fuser strategy with other fusers (#44972)
0137e3641d : Refactor subgraph merging (#44238)
1539d4a664 : Add operator to compute the equalization scale (#45096)
5a59330647 : Add architectural support for multi-GPU. (#44059)
6311c5a483 : Minor touchups. (#44317)
b84dd771e6 : Grammatically updated the tech docs (#45192)
cd7a682282 : [caffe2] adds hypothesis test for queue ops cancel (#45178)
71e6ce6616 : [JIT] Specialize AutogradZero: merge AutogradAnyNonZero and Not(AutogradAnyNonZero) checks into one. (#44987)
cbe1eac1f4 : [caffe2] adds Cancel to SafeDequeueBlobsOp and SafeEnqueueBlobsOp (#45177)
022ba5a78b : Make ddp_comm_hook_wrapper a private method. (#44643)
e2bcdc7b69 : [Caffe2] Fix LayerNormOp when batch_size == 0. (#45250)
c3a5aed5f7 : Run pytorch_core CUDA tests on GPU using TPX
c211a9102f : add rocm 3.8 to nightly builds (#45222)
26001a2334 : Revert D23753711: [pytorch][PR] Add foreach APIs for binary ops with ScalarList
c79d493096 : added rocm 3.8 docker image (#45205)
3f5eee666c : Adjust TF32 tests (#44240)
b8eab8cdbd : [hotfix] typo in NaiveConvolutionTranspose2d.cu (#45224)
e57a08119b : Add a warning log when there is high skew of uneven inputs in DDP training (#45238)
2b38c09f69 : Moves prim ops from C10 back to JIT (#45144)
8507ea22b2 : replace timer test with a mocked variant (#45173)
bfdf4323ac : Bump up NCCL to 2.7.8 (#45251)
5195d727b5 : adding a test for ddp save()/load() (#44906)
f9ae296a85 : renaming TestDdpCommHook class so it doesn't get picked up as a test by pytest (#44905)
bc591d76a1 : add skip_if_rocm to all requires_nccl tests (#45158)
71d1b5b0e2 : Add foreach APIs for binary ops with ScalarList (#44743)
bea7901e38 : Enable torch.tensor typechecks (#45077)
dc67b47bc9 : Deprecate old fft functions (#44876)
6d21d5f0b3 : gtest-ify JIT tests, through the letter c (#45249)
29dc3c5ec8 : Sparse softmax support (CUDA) (#42307)
b3d7c2f978 : [ONNX] Update ONNX docs for release (#45086)
3dd0e362db : [TensorExpr] Fix min and max for integral inputs in CUDA backend (#44984)
b470fa4500 : Add complex number support for binary logical operators (#43174)
0b6b735863 : [fix] type promotion atan2 (#43466)
6a2e9eb51c : torch.fft: Multi-dimensional transforms (#44550)
070fe15e4c : Add link to profiling recipe from rpc main docs (#45235)
956a25d061 : Revert D23858329: [PT Model Split] Support 2 operators in PT by C2 conversion
2d00ebd29f : Failing test demonstrating problems with mixed output shapes (#44455)
c760bc8fb1 : Add GlowLoadAOTModel flag (#45189)
60665ace17 : [quant] Add optimized approach to calculate qparams for qembedding_bag (#45149)
721cfbf842 : [PT Model Split] Support 2 operators in PT by C2 conversion (#45231)
27c7158166 : Remove __future__ imports for legacy Python2 support (#45033)
e9aa6898ab : Revert D23802296: gtest-ify JIT tests, through the letter c
89c570ed0a : Revert D23811085: gtestify dce and fuser tests
76c185dcca : [TensorExpr] When lanes differ, insert Broadcast instead of Cast (#45179)
f93ead6d37 : [quant][eagermode] Custom module support (#44835)
0495998862 : [TensorExpr] Disallow arithmetic binary operations on Bool (#44677)
8e0fc711f4 : [TensorExpr] Remove unused EvalConstExpr function (#45180)
2a1a51facb : Fix typos. (#45195)
246bd9422a : gtestify dce and fuser tests (#45055)
d2b045030e : gtest-ify JIT tests, through the letter c (#45020)
3f89b779c4 : [jit] allow submodule methods inference rule be different (#43872)
9e206ee9f1 : [NNC] Fix a bug in SplitWithMask when splitting multiple times (#45141)
adb2b380ba : [quant][graphmode][fx] qconfig_dict support more types of configurations (#44856)
21fabae47a : Remove expensive call to PyObject_GetAttrString in PyTorch_LookupSpecial (#44684)
99242eca1d : Dockerfile: Support CUDA 11 (#45071)
4d80c8c648 : Fix inlining interface call in fork subgraph (#43790)
da4033d32a : Make cudaHostRegister actually useful on cudart. (#45159)
a5a4924c27 : Warn if `import torch` is called from the source root. (#39995)
9db3871288 : Update true_divide_out to use at::. (#45079)
9e30a76697 : Filter `strtod_l` is undeclared errors from sccache log (#45183)
5b20bf4fd9 : Added support for complex input for Cholesky decomposition (#44895)
94c3cdd994 : Let rpc._all_gather use default RPC timeout (#44983)
e5bade7b2c : [PyTorch Mobile] Move string op registrations to prim and make them selective (#44960)
76dc50e9c8 : [RPC] Infer backend type if only options are given (#45065)
215679573e : [TensorExpr] Fix operator order in combineMultilane (#45157)
7fba30c2be : [quant][fx][bug] Fix error in convert step for QAT (#45050)
144dacd8d9 : CUDA BFloat16 batched gemm (#45167)
989d877c95 : [JIT] Do not allow creating generics with None types (#44958)
0a9ac98bed : [reland][pytorch] refine dispatch keys in native_functions.yaml (1/N) (#45137)
25ed739ac9 : [packaging] rstrip fix (#45166)
cb75addee4 : torch.package - a way to package models and code (#45015)
d4a634c209 : [RPC profiling] Don't wrap toHere() calls with profiling (#44655)
70d2e4d1f6 : [RPC profiling] Allow disableProfiler() to be called from another thread. (#44653)
1bd6533d60 : Remove thread_local RecordFunctionGuard from profiler. (#44646)
67a19fecef : CUDA BFloat16 pooling (#45151)
666223df46 : [jit] gtestify test_argument_spec.cpp (#45019)
f575df201f : [quant][graphmode][jit][api] Expose preserved_attrs from finalize to convert_jit (#44490)
e045119956 : [JIT] Add default arguments for class types (#45098)
ebde5a80bb : [tensorexpr] Add flag to fuse with unknown shapes (#44401)
c0267c6845 : [caffe2] Support data types in shape hints (#45110)
b98ac20849 : install ATen/native/cuda and hip headers (#45097)
2a37f3fd2f : Relax CUDA architecture check (#45130)
ccfbfe5eb5 : [quant][graphmode][fx] Custom module support (#44766)
7f4a27be3a : [resubmit][FX] s/get_param/get_attr/ (#45147)
35cdb01327 : [PyTorch] Enable type check for autocast_test_lists (#45107)
cddcfde81d : [JIT] Fix WithTest.test_with_exceptions (#45106)
d1c68a7069 : Clarify that 5-D 'bilinear' grid_sample is actually trilinear (#45090)
79fe794f87 : [FX] Make Graphs immutable and make GraphModule recompile after assigning graph (#44830)
def433bbb6 : .circleci: Upgrade all xcode 9 workers to xcode 11 (#45153)
a4ce3f4194 : Fix type hint warnings for common_methods_invocations.py (#44971)
c253b10154 : Fix incorrect EnumValue serialization issue (#44891)
2b1f25885e : [quant] Fix ConvTranspose mapping (#44844)
09aee06e82 : [caffe2] Replace embedding conversion ops with fbgemm functions (#44843)
e2b40ce793 : Support BFloat16 for binary logical operators on CUDA (#42485)
ef885c10d8 : [pytorch] Add triplet margin loss with custom distance (#43680)
10f287539f : Align casing in test_dispatch with dispatch keys. (#44933)
1fd48a9d1f : Revert D23798016: [FX] s/get_param/get_attr/
8501b89a87 : [ONNX] Update ort release (#45095)
4b42f0b613 : Support Math keyword in native_functions.yaml. (#44556)
ae286d81e0 : [JIT] improve alias analysis for list constructs (#39111)
9fc7a942f0 : Change from self to self.class() in _DecoratorManager to ensure a new object is every time a function is called recursively (#44633)
63fd257879 : Add `Ellipsis` constant to the list of recognized tokens (#44959)
e155fbe915 : add warning when ParameterList/Dict is used with DataParallel (#44405)
4a0aa69a66 : Fix undefined variable 'namedshape' in tensor.py (#45085)
36ec8f8fb8 : [dper3] Create dper LearningRate low-level module (#44639)
58b6ab69e5 : torch.sgn for complex tensors (#39955)
1b059f2c6d : Directly use work.result() to retrieve tensor rather than passing as a separate argument (#44914)
71aeb84ab4 : Revert D23803951: [pytorch] refine dispatch keys in native_functions.yaml (1/N)
339961187a : [pytorch] refine dispatch keys in native_functions.yaml (1/N) (#45010)
c947ab0bb9 : Added sparse support for asin and neg functions, updated log1p (#44028)
d126a0d4fd : [iOS] Disable the iOS nightly build until the cert issue has resolved (#45094)
5aed75b21b : [quant][graphmode][jit] Try to support append (#44641)
2111ec3bf3 : CUDA BFloat16 losses (#45011)
32c1a8c79f : adjust shape inference in sls tests (#44936)
0dda65ac77 : [ONNX] add jit pass for lists (#43820)
09e7f62ce2 : Fix RPC and ProcessGroup GIL deadlock (#45088)
dfc88d4fd0 : [vulkan] support dimensions negative indexing (#45068)
5621ba87a2 : [vulkan] reshape op to use infer_size to expand -1 (#45104)
8968030f19 : [WIP] Add vec256 test to linux CI (#44912)
4b3046ed28 : Vectorize int8_t on CPU (#44759)
f77ba0e48c : Change typo 'momemtum' to 'momentum' (#45045)
20f52cdd76 : [hpc]optimize the torch.cat cuda kernel (#44833)
81bb19c9f0 : [JIT] Prohibit subscripted assignments for tuple types (#44929)
9a31eee107 : [vulkan] Remove duplication of op registration and clean unused vars (#44932)
dfb8f2d51f : CUDA BFloat16 addmm, addmv (#44986)
581a364437 : CUDA BFloat16 unary ops part 1 (#44813)
1cab27d485 : Add a torch.hub.load_local() function that can load models from any local directory with a hubconf.py (#44204)
c941dd3492 : [FX] s/get_param/get_attr/ (#45000)
9dc2bcdc07 : Introducing (Const)StridedRandomAccessor + CompositeRandomAccessor + migrate `sort` to ATen (CPU) (#39744)
7118d53711 : add .cache to gitignore (#45017)
1a580c1021 : Adding test to quantized copy for 'from float' (#43681)
7de512ced8 : nightly robustness fixes for linking across devices (#43771)
42af2c7923 : [jit] gtest-ify test_alias_analysis.cpp (#45018)
92f8f75c59 : Add alias dispatch key Math. (#44354)
acc2a1e5fa : Update submodule gloo (#45025)
a4aba1d465 : fix compile error (#45052)
ac8c7c4e9f : Make Channel API accept buffer structs rather than raw pointers. (#45014)
4bbb6adff5 : [NNC] fix SyncThreads insertion and reenable CudaSharedMem test (#44909)
e2f49c8437 : skip im2col & vol2col in cpu/cuda convolution methods (#44600)
a6895d43b6 : Turn on gradgrad check for BCELoss Criterion Tests. (#44894)
4810365576 : Enabled torch.testing._internal.jit_utils.* typechecking. (#44985)
9f67176b82 : Complex gradcheck logic (#43208)
da7863f46b : Add one dimensional FFTs to torch.fft namespace (#43011)
49db7b59e0 : For logical tests, use the dtypes decorator (#42483)
60709ad1bf : Adds multiply and divide aliases (#44463)
faef89c89f : CUDA BFloat Pooling (#44836)
7ecfaef7ec : CUDA BFloat16 layernorm (#45002)
2163d31016 : histogram observer: ensure buffer shape consistency (#44956)
0714c003ee : [pytorch][tensorexpr] Make gtest-style macros in tests match actual gtest signatures (#44861)
9e5045e978 : [pytorch] clean up normalized_dynamic_type() hack (#44889)
d75c402755 : Add cusolver to build, rewrite MAGMA inverse with cusolver (#42403)
620c999979 : update gloo submodule (#45008)
21a1b9c7cf : skip more nccl tests that causes flaky timeouts on rocm build (#44996)
1c15452703 : Update Windows builders to latest VS2019 (#44746)
e9941a5dd4 : [vulkan][py] torch.utils.optimize_for_vulkan (#44903)
572f7e069c : Enable type check for torch.testing._internal.te_utils.* (#44927)
043466f978 : [FX] Pass module's qualname to is_leaf_module (#44966)
40c09cfe14 : [CircleCI] Fix CUDA test setup (#44982)
e255a4e1fd : Enable bfloat16 random kernels on Windows (#44918)
06389406bb : CUDA BFloat activations 1 (#44834)
76a109c930 : [caffe2/aten] Fix clang build (#44934)
fd4e21c91e : Add optional string support to native_functions schema (#43010)
2d884f2263 : Optimize Scale function (#44913)
374e9373b5 : [jit] Pull (most) tests out of libtorch_python (#44795)
af3fc9725d : Extract rpc/tensorpipe_utils.{cpp,h} from rpc/utils.{cpp,h} (#44803)
d22dd80128 : Enable type check for torch.testing._internal.common_device_type. (#44911)
a47e3697ab : Use iterator of DispatchKeySet. (#44682)
6d312132e1 : Beef up vmap docs and expose to master documentation (#44825)
c2cf6efd96 : Enable type check for torch.testing._internal.dist_utils.* (#44832)
7bd8a6913d : CUDA BFloat div, addcdiv, addcmul, mean, var (#44758)
f175830558 : [NNC] Fuse identical conditions in simplifier (#44886)
09f2c6a94c : Back out "Revert D23494065: Refactor CallbackManager as a friend class of RecordFunction." (#44699)
174cbff00a : Improve sugared value's error message (#42889)
0063512a4b : [ONNX] Updates to diagnostic tool to find missing ops (#44124)
c68cc78299 : Add a device parameter to RemoteModule (#44254)
cff0e57c31 : Remove Incorrect Comment in tools/build_libtorch and remove Python2 support in the module import (#44888)
07b7e44ed1 : Stop using check_criterion_jacobian. (#44786)
6d178f6b8e : Stop ignoring errors in cuda nn module tests. (#44783)
df39c40054 : Cleanup tracer handling of optional arguments (#43009)
caea1adc35 : Complex support for stft and istft (#43886)
e400150c3b : Fixed for caffe2/opt/tvm_transformer.cc (#44249)
f2b3480795 : CUDA BFloat softmax (#44837)
1694fde7eb : Fix a GroupNorm cuda bug when input does not require_grad (#44863)
5dbcbea265 : TorchScript with record_function (#44345)
4a9c80e82e : [pytorch][bot] update mobile op deps (#44854)
9a007ba4cb : [jit] stop parsing the block after seeing exit statements (#44870)
60ae6c9c18 : [FX] Fix GraphModule copy methods not regenerating forward (#44806)
e14b2080be : [reland] move rebuild buckets from end of first iteration to beginning of second iteration (#44798)
2043fbdfb6 : Enable torch.backends.cuda typechecking in CI (#44916)
18b77d7d17 : [TensorExpr] Add Mod support to the LLVM backend (#44823)
e535fb3f7d : [ONNX] Enable true_divide scripting export with ONNX shape inference (#43991)
1c996b7170 : Enable typechecking for torch.testing._internal.common_quantized.* (#44805)
f5b92332c1 : [TensorExpr] Fix order comparisons for unsigned types (#44857)
a153eafab7 : Let logspace support bfloat16 on both CPU and CUDA (#44675)
40e44c5f0a : Make nuclear and frobenius norm non-out depend on out variants (#44095)
086a2e7a4e : [caffe2] add cost inference for FusedFakeQuantFC and FusedFakeQuantFCGradient (#44840)
4066022146 : Do not use `PRId64` in torch/csrc (#44767)
5d57025206 : [TensorExpr] Add log1p support to the LLVM backend (#44839)
f5440a448a : CUDA BFloat16 i0 support (#44750)
bee97d5be0 : Document the default behavior for dist.new_group() when ranks=None (#44000)
2558e5769d : Implement sort for list of tuples (#43448)
c189328e5d : CUDA BFloat16 unary ops part 2 (#44824)
c1fa42497b : fix legacy GET_BLOCKS code from THCUNN/common.h (#44789)
24df3b7373 : torch.empty_like and torch.zeros_like raise error if any memory format is provided with sparse input (#43699) (#44058)
1fde54d531 : [quant][qat] Ensure fake_quant and observer can be disabled on scriptmodule (#44773)
361b38da19 : [quant][fx] Add node name as prefix to observer module name (#44765)
74c3dcd1d2 : Revert D23725053: [pytorch][PR] change self.generator to generator
d2b4534d4d : refactor intialize bucket views (#44330)
6006e45028 : .circleci: Switch to dynamic MAX_JOBS (#44729)
f605d7581e : Implement better caching allocator for segmentation usecase. (#44618)
4affbbd9f8 : minor style edits to torch/testing/_internal/common_quantized.py (#44807)
a40ef25e30 : [te] Disable flaky test CudaSharedMemReduce_1 (#44862)
503c74888f : Always use NewModuleTest instead of ModuleTest. (#44745)
28085cbd39 : Fixed quantile nan propagation and implemented nanquantile (#44393)
99093277c0 : Support Python Slice class in TorchScript (#44335)
b6f4bb0a70 : Revert D23236088: [pytorch][PR] [caffe2] adds Cancel to SafeDequeueBlobsOp and SafeEnqueueBlobsOp
e18a2219dd : Implement scatter reductions (CUDA), remove divide/subtract (#41977)
fdeee74590 : [pytorch][vulkan] Fix downcast warnings-errors, aten_vulkan buck target
b61d3d8be8 : Implement torch.kaiser_window (#44271)
34331b0e0f : CUDA BFloat16 and other improvements on abs (#44804)
ba6534ae2b : enable type check common_distributed (#44821)
e48201c5cf : Mention TF32 on related docs (#44690)
79108fc16c : [JIT] Improve Future subtype checking (#44570)
29664e6aa3 : [FX] Further sanitize generated names (#44808)
204f985fc3 : [NNC] Add simplification of Loop + Condition patterns. (#44764)
8ec6bc7292 : [pytorch][vulkan][jni] LiteModuleLoader load argument to use vulkan device
0ccc38b773 : [caffe2] adds Cancel to SafeDequeueBlobsOp and SafeEnqueueBlobsOp (#44495)
3fa7f515a5 : [pytorch][bot] update mobile op deps (#44700)
6befc09465 : Fix misuse of PyObject_IsSubclass (#44769)
43fe034514 : [JIT] Disallow plain Optional type annotation without arg (#44586)
574f9af160 : [NCCL] Add option to run NCCL on high priority cuda stream (#43796)
161490d441 : Move `torch/version.py` generation to cmake (#44577)
ffe127e4f1 : [JIT] Disallow plain Tuple type annotation without arg (#44585)
09a84071a3 : enable mypy check for jit_metaprogramming_utils (#44752)
3f5bb2bade : [quant] Support clone for per channel affine quantized tensor (#44573)
7b3432caff : [TensorExpr] Support boolean in simplifier (#44659)
ac0d13cc88 : Vectorize complex copy. (#44722)
78b806ab4a : [JIT] Disallow plain List type annotation without arg (#44584)
cb3b8a33f1 : [JIT] Disallow plain Dict type annotation without arg (#44334)
5027c161a9 : Add TORCH_SELECTIVE_NAME to AMP definitions (#44711)
82ab167cce : [NNC] Fix masking for all block and thread dimensions in CudaCodeGen (#44733)
a3835179a1 : [FakeLowP] Addressing FakeLowP OSS issues. (#44819)
07d9cc80a4 : Fix error code checks for triangular_solve (CPU) (#44720)
f3bd984e44 : Move the description comment of compute_bucket_assignment_by_size from cpp to the header file. (#44703)
20ac736200 : Remove py2 compatible future imports (#44735)
6debe825be : [vulkan] glsl shaders relaxed precision mode to cmake option (#43076)
e9c6449b46 : [FX][EZ] Allow constructing GraphModule with dict for root (#44679)
1718b16d15 : [Caffe2] gcs_cuda_only is trivial if CUDA not available (#44578)
c44e4878ae : Enable torch.backends.quantized typechecks (#44794)
1cd5ba49c6 : Add batching rule for "is_complex", "conj" (#44649)
cce7680a23 : Add bound method tests for async_execution with RRef helper (#44716)
257c6d0fde : Make async_execution compatible with RRef helpers (#44666)
924717bf51 : Add _get_type() API to RRef (#44663)
6954ae1278 : Vec256 Test cases (#42685)
e6101f5507 : fixes lda condition for blas functions, fixes bug with beta=0 in addmv slow path (#44681)
570102ce85 : Remove many unused THC pointwise math operators (#44230)
07d07e3c6c : Remove EXPERIMENTAL_ENUM_SUPPORT feature guard (#44243)
3e6bb5233f : Reference amp tutorial (recipe) from core amp docs (#44725)
a011b86115 : change self.generator to generator (#44461)
ee493e1a91 : CUDA bfloat compare ops (#44748)
eb75cfb9c0 : Back out "Revert D23323486: DPP Async Tracing" plus windows build fix. (#44702)
ced8727d88 : Fix a broken link in CONTRIBUTING.md (#44701)
5e717f0d5e : delete the space for the docs rendering (#44740)
a5cc151b8c : Build EigenBlas as static library (#44747)
b63b684394 : Consolidate CODEOWNERS file for distributed package. (#44763)
dbf17a1d4c : Fixing a few links in distributed CONTRIBUTING.md (#44753)
06036f76b6 : CUDA BFloat16 pow (#44760)
63469da3bb : Add a test to ensure DDP join works with RPC (#44439)
3f512b0de2 : [quant][qat] Ensure observers and fq modules are scriptable (#44749)
b85568a54a : [CI] Add profiling-te benchmarks. (#44756)
d66520ba08 : [TensorExpr] Fuser: try merging adjacent fusion groups. (#43671)
2efc618f19 : lr_schedule.py redundant code (#44613)
2c1b215b48 : [fx] remove delegate, replace with tracer (#44566)
993b4651fd : Convert num_kernels to int64 before calling into CUDA GET_BLOCKS (#44688)
fb085d90e3 : Revert D23583017: move rebuild buckets from end of first iteration to beginning of second iteration
26a91a9f04 : [WIP][JIT] Add benchmarking support of NV Fuser with FP16 dtype support (#44101)
2f4c31ce3a : [jit] Speed up saving in case of many classes (#44589)
285ba0d068 : Enable fp16 for UniformFill (#44540)
69839ea3f6 : [NNC] make inlining immediate (take 3) (#44231)
8df0400a50 : Fix fallback graph in specialize autogradzero (#44654)
4ce6af35c4 : Enable fp16 for CUDA SparseLengthsSum/Mean (#44089)
07cba8b1fc : Run vmap tests in CI (#44656)
d62994a94d : ci: Add anaconda pruning to CI pipeline (#44651)
1d733d660d : [docs] torch.min/max: remove incorrect warning from docs (#44615)
6bc77f4d35 : Use amax/maximum instead of max in optimizers (#43797)
9c364da9b9 : Fix doc builds for bool kwargs (#44686)
f5d231d593 : move rebuild buckets from end of first iteration to beginning of second iteration (#44326)
5f692a67db : qat conv_fused.py: one more patch for forward compatibility (#44671)
72b5665c4f : Upgrade oneDNN (mkl-dnn) to v1.6 (#44706)
7036e91abd : Revert D23323486: DPP Async Tracing
2435d941b1 : Fix FP16 fastAtomicAdd for one case where tensor start address is not 32 bit aligned (#44642)
2fd142a2ef : Small clarification to amp gradient penalty example (#44667)
aedce773ed : Deleted docker images for rocm 3.3 and rocm 3.5 (#44672)
c71ce10cfc : add dilation to transposeconv's _output_padding method (#43793)
ed862d3682 : Split CUDA_NVCC_FLAGS by space (#44603)
2c4b4aa81b : Revert D23494065: Refactor CallbackManager as a nested class of RecordFunction.
e7d782e724 : [JIT] Add property support for ScriptModules (#42390)
63105fd5b1 : Refactor CallbackManager as a nested class of RecordFunction. (#44645)
71673b31f9 : DPP Async Tracing (#44252)
e107ef5ca2 : Add type annotations for torch.nn.utils.* (#43080)
551494b01d : [JIT] Fix torch.tensor for empty multidimensional-typed lists (#44652)
2254e5d976 : Add note comments to enforce nondeterministic alert documentation (#44140)
a91c2be2a9 : Automated submodule update: FBGEMM (#44647)
686e281bcf : Updates div to perform true division (#42907)
e594c30bc2 : [quant][graphmode][fx] Support fp16 dynamic quantization for linear (#44582)
43406e218a : [ONNX] Update ONNX shape inference (#43929)
89aed1a933 : [vulkan][op] avg_pool2d (#42675)
8f327cd6c5 : [vulkan][op] add.Scalar, mul.Scalar (#42674)
f7cfbac89b : [ONNX] Update len symbolic (#43824)
da11d932bc : [ONNX] Update arange op to support out argument (#43777)
62ebad4ff9 : [ONNX] Export new_empty and new_zeros (#43506)
d0a56cab07 : [quant] Fixing the output shape for the linear (#44513)
742654d1b6 : [quant] ConvTranspose1d / ConvTranspose2d (#40371)
84949672bf : Fix exception chaining in `test/` (#44193)
a188dbdf3f : Check for index-rank consistency in FunctionInliner (#44561)
b5dd6e3e61 : split torch.testing._internal.* and add type checking for torch.testing._internal.common_cuda (#44575)
cfba33bde3 : Fix the ELU formula in the docs (#43764)
9d4943daaf : [quant] conv_transpose1d / conv_transpose2d (#40370)
ecac8294a6 : enable type checking for torch._classes (#44576)
ad7a2eb1c9 : Simplify nested Min and Max patterns. (#44142)
199435af90 : Update median doc to note return value of even-sized input (#44562)
a475613d1d : [static runtime] Swap to out-variant compatible nodes (#44127)
856510c96d : [JIT] Dont optimize shape info in batch_mm (#44565)
e261e0953e : Fix centos8 gcc (#44644)
ace81b6794 : Remove an extra empty line in the warning comments. (#44622)
21a09ba94d : Fix lerp.cu bug when given discontiguous out tensor (#44559)
95a69a7d09 : adds list_gpu_processes function (#44616)
105132b891 : Move ONNX circle ci build to torch and remove all caffe2 CI job/workflows (#44595)
bd257a17a1 : Add HIP/ROCm version to collect_env.py (#44106)
7040a070e3 : [torch] Minor: Avoid ostreamstring in Operator's canonicalSchemaString() (#44442)
c68a99bd61 : [numpy] Add `torch.exp2` (#44184)
870f647040 : Automated submodule update: FBGEMM (#44581)
68a5c361ae : Adding Adapative Autorange to benchmark utils. (#44607)
8daaa3bc7e : Fix latex error in heaviside docs (#44481)
fe26102a0e : Enable TE in test_jit.py (#44200)
7862827269 : [pytorch] Add variadic run_method for lite intepreter (#44337)
bcf97b8986 : [JIT] Cleanup some places where we log graphs in executors. (#44588)
82da6b3702 : [JIT] Fix jit-log verbosity selection logic. (#44587)
6d4a605ce9 : Fix bug simplifying if-then-else when it can be removed (#44462)
7e91728f68 : Deprecates calling linspace and logspace without setting steps explicitly (#43860)
e703c17967 : Revert D23584071: [dper3] Create dper LearningRate low-level module
a309355be3 : [dper3] Create dper LearningRate low-level module
0743d013a6 : fuse layernorm + quantize (#44232)
6f2c3c39d2 : Add SNPE deps for caffe2 benchmark android binary
05c1f1d974 : [ROCm] remove thrust workaround in ScanKernels (#44553)
d191caa3e7 : Cleanup workarounds for compiler bug of ROCm (#44579)
8641b55158 : fix dangling ptr in embedding_bag (#44571)
82b4477948 : Pass the input tensor vector by const reference. (#44340)
ab5fee2784 : Move the inline implementations of GradBucket class to the header. (#44339)
1f0dcf39fc : [JIT] dont optimize device dtype on inline (#43363)
d729e2965e : [TensorExpr] Do not inline autodiff graphs if they contain prim::TypeCheck nodes. (#44564)
64b4307d47 : [NNC] Cuda Codegen - mask loops bound to block/thread dimensions (#44325)
2ae74c0632 : Compile less legacy code when BUILD_CAFFE2 is set to False (take 2) (#44453)
566b8d0650 : handle missing NEON vst1_*_x2 intrinsics (#44198) (#44199)
db24c5c582 : Change code coverage option name (#43999)
b6f0ea0c71 : [quant][graphmode][fx][fix] Remove qconfig in convert (#44526)
42f9f2f38f : [fix] ReduceOps throw error if dim is repeated (#44281)
f3a79b881f : add `lcov` to oss for beautiful html report (#44568)
c2b40b056a : Filter default tests for `clang` coverage in oss
a82ea6a91f : [quant][graphmode][fx][fix] Support None qconfig in convert (#44524)
1fb5883072 : removing conv filters from conv pattern matching (#44512)
dd4bbe1a79 : Add iterator like functionality for DispatchKeySet (#44066)
e2bb34e860 : Batched grad support for: slice, select, diagonal (#44505)
7632484000 : Add some batched gradient tests (#44494)
ab6126b50e : [rpc][jit] support remote call in TorchScript (#43046)
3e5df5f216 : [rpc][jit] support rpc_sync in TorchScript (#43043)
8bec7cfa91 : [rpc] rename some functions (#43042)
70dfeb44bd : MinMax based observers: respect device affinity for state_dict (#44537)
192c4111a3 : Simplify target handling in nn gradcheck. (#44507)
8a574c7104 : [Cmake] Drop quotation marks around `$ENV{MAX_JOBS}` (#44557)
2b8f0b2023 : [caffe2] adds Cancel to OperatorBase and NetBase (#44145)
5579b53a7f : Fix SmoothL1Loss when target.requires_grad is True. (#44486)
b7ef4eec46 : [NNC] Add loop slicing transforms (#43854)
39bb455e36 : Update fallback kernel for Autograd keys. (#44349)
11fb51d093 : [quant][graphmode][fx][fix] Support dictionary output (#44508)
442957d8b6 : [pytorch] Remove mobile nonvariadic run_method (#44235)
a61318a535 : [pytorch] Replace mobile run_method with get_method and operator() (#44202)
cdf5e2ae86 : add typing annotations for a few torch.utils.* modules (#43806)
7d78a6fcdd : Update interpolate to use new upsample overloads (#43025)
df6ea62526 : Add nondeterministic check to new upsample overloads
3de2c0b42f : Fix L1Loss when target.requires_grad is True. (#44471)
ea55820606 : [dper3] Export PackSegments and UnpackSegments to Pytorch
b73b44f976 : [PyTorch Mobile] Move some string ops to register_prim_ops.cpp and make them selective (#44500)
567c51cce9 : In common_distributed, fix TEST_SKIPS multiprocessing manager (#44525)
d07d25a8c5 : Fix MSELoss when target.requires_grad is True. (#44437)
9a3b83cbf2 : Update submodule gloo to have latest commits to enable it can work on Windows (#44529)
b6b1c01adf : torch.view_as_complex fails with segfault for a zero dimensional tensor (#44175)
a9754fb860 : Use TP Tensor.metadata to carry device info (#44396)
f44de7cdc3 : Add missing rpc.shutdown() (#44417)
77cc7d1ecd : C++ APIs Transformer NN Module Top Layer (#44333)
09892de815 : Clarify track_running_stats docs; Make SyncBatchNorm track_running_stats behavior consistent (#44445)
30fccc53a9 : [NNC] Don't attempt to refactor conditional scalars (#44223)
c967e7724e : [quant] conv_transpose1d_prepack / conv_transpose1d_unpack (#40360)
8b8986662f : [JIT] Remove profiling nodes in autodiff forward graph (#44420)
c6febc6480 : [JIT] Add a python hook for a function to interpret JIT graphs. (#44493)
51ed31269e : Replace FutureMessage with c10::ivalue::Future in DistEngine. (#44239)
b5d75dddd9 : Enable lerp on half type; fix output memory format (#43541)
0c58a017bd : [quant][eagermode][refactor] Add set/get method for quantization and fusion mappings (#43990)
f7278473d3 : [NCCL] Fix NCCL_BLOCKING_WAIT functionality with Async Error Handling (#44411)
6ee41974e3 : Speedup Linux nightly builds (#44532)
69f6d94caa : Register diag_backward, diagonal_backward, infinitetely...gelu_backward as operators (#44422)
7ff7e6cfc8 : Register cummaxmin_backward, cumprod_backward as operators (#44410)
08b431f54c : Add trace_backward, masked_select_backward, and take_backward as ops (#44408)
41f62b17e7 : Fix DDP join() API in the case of model.no_sync() (#44427)
129d52aef2 : Fix uniqueness check in movedim (#44307)
c48f511c7e : Moves some of TestTorchMathOps to OpInfos (#44277)
2e744b1820 : Support work.result() to get result tensors for allreduce for Gloo, NCCL backends (#43970)
91b16bff1e : Disable PyTorch iOS ARM64 builds until cert problem is fixed (#44499)
1dd3fae3d2 : [pytorch] Add logging to mobile Method run (#44234)
a2a81e1335 : Add a CONTRIBUTING.md for the distributed package. (#44224)
4bead6438a : Enable torch.autograd typechecks (#44451)
cc5a1cf616 : [JIT] Erase shapes before fallback graph (#44434)
b3f0297a94 : ConvPackedParams: remove legacy format (#43651)
d232fec1f1 : Partly fix cuda builds of dper broken by caffe2 c++
38c10b4f30 : [NCCL] Fix the initialization of futureNCCLCallbackStreams (#44347)
cb90fef770 : Fix return value of PyErr_WarnEx ignored (SystemError) (#44371)
f9a0d0c21e : Allow Tensor-likes in torch.autograd.gradcheck (#43877)
c8914afdfa : Merge criterion_tests and new_criterion_tests. (#44398)
fa158c4ca6 : Combine criterion and new criterion tests in test_jit. (#43958)
af9cad761a : Stop ignoring NotImplementedErrors in cuda CriterionTests. (#44381)
208ad45b4b : fix scripts (#44464)
356aa54694 : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
6c98d904c0 : handle the case of -0.0 on tanh quantization (#44406)
28a23fce4c : Deprecate torch.norm and torch.functional.norm (#44321)
7b547f086f : To fix extra memory allocation when using circular padding (#39273)
65d4a6b7c0 : [ROCm] fix cub hipify mappings (#44431)
28bd4929bd : [NNC] Make it able to normalize loop with variable start (#44133)
c515881137 : Add reset_grad() function (#44423)
6324ef4ced : [caffe2] Speed up compilation of aten-op.cc (#44440)
89ac30afb8 : [JIT] Propagate type sharing setting to submodule compilation (#44226)
d3b6d5caf1 : [JIT] Add support for del to TS classes (#44352)
058d7228ec : Expose the interface of nesterov of SGD Optimizer from caffe2 to dper
5ee31308e6 : [caffe2] exposes Net cancellation through pybind state (#44043)
e028ad0762 : Fix HashStoreTests and move to Gtest (#43384)
69a3ff005d : Modularize FileStoreTest and move to Gtest (#43383)
a7fba7de22 : Convert StoreTestUtils to Gtest (#43382)
b69c28d02c : Improving ModuleList indexing error msg (#43361)
c010ef7f0c : use non-overflowing divide in cuda kernel util GET_BLOCKS (#44391)
ba6ddaf04c : [pyper] export caffe2 bucketize GPU operator to pytorch
e0c65abd38 : Revert D23568330: [pytorch][PR] Moves some of TestTorchMathOps to OpInfos
fc51047af5 : Small fixes in Dependency.cmake and run_test.py (#44414)
b0bcdbb1ab : [JIT] Support partially specified sizes/strides in IRParser (#44113)
3674264947 : [quant] quantized path for ConstantPadNd (#43304)
032480d365 : fix typo in embedding_bag_non_contiguous_weight test (#44382)
a00d36b0e7 : [PyTorch][Mobile] Insert the module name as `name()` to metadata dict if metadata doesn't contain "model_name" (#44400)
24efd29d19 : Check commutativity for computed dispatch table and add a test to check entries. (#44088)
48c47db8fe : [NCCL] Add Environment Variable to guard Async Error Handling feature (#44163)
211ece7267 : [NCCL] ProcessGroupNCCL Destructor Blocks on WorkNCCL Completion (#41054)
afbf2f140b : [NCCL] WorkNCCL Helper Functions (#41053)
f8f7b7840d : [NCCL] Abort Errored and Timed Out NCCL Communicators from Watchdog Thread (#41052)
4e5c55ef69 : [NCCL] Use cudaEventQuery to Poll for GPU operation errors (#41051)
1df24fd457 : [NCCL] Timeout Loop Thread for Async Error Handling (#41050)
15cbd1cf4b : Preserve .ninja_log in build artifacts (#44390)
ef4475f902 : [Reland] Optimize code path for adaptive_avg_pool2d when output size is (1, 1) (#44211)
37093f4d99 : Benchmarks: make fuser and executor configurable from command line. (#44291)
364d03a67c : Misc. FakeLowP OSS cleanup (#44331)
758c2b96f5 : BUG: make cholesky_solve_out do broadcast, error checking (#43137)
683380fc91 : Use compile time cudnn version if linking with it statically (#44402)
6ec8fabc29 : Fix frac in CUDA fuser (#44152)
350130a69d : Prevent the TE fuser from getting datatypes it can't handle (#44160)
960c088a58 : [te] Fix casting of unsigned char, and abs(int) (#44157)
7c464eed16 : Skipping CUDA tests in ProcessGroupGloo and logs (#42488)
2a87742ffa : Autocast wrappers for RNN cell apis (#44296)
a953a825cc : Moves some of TestTorchMathOps to OpInfos (#44277)
f044b17ae2 : Disable a test (#44348)
cfd3620b76 : Don't use VCOMP if Intel OMP is used (#44280)
d23f3170ef : Remove pybind11 from required submodules (#44278)
8acce55015 : Dump optimized graph when logging in already-optimized PE (#44315)
7a64b0c27a : Export Node::isBefore/isAfter for PythonAPI (#44162)
135ebbde6d : [Caffe2] Add RMSNormOp (#44338)
106459acac : Rename test_distributed to test_distributed_fork (#42932)
b22abbe381 : Enable test_distributed to work with spawn mode (#41769)
1d01fcdc24 : [quant] fill_ path for quantized tensors (#43303)
4aacfab221 : Resolve Autograd key for disable_variable_dispatch flag. (#44268)
ecc6358dbe : Port nonzero cuda from THC to ATen (#44259)
bd8e38cd88 : [TensorExpr] Fuser: check node inputs' device before merging the node into a fusion group. (#44241)
646ffd4886 : [quant] Move EmbeddingBag eager quantization to static (#44217)
57b87aaf59 : [quant] Add quantized Embedding module (#44208)
6013a29fc0 : [quant] Support quantization of embedding lookup operators (#44207)
f27be2f781 : [caffe2] fix wrong comment (#42735)
f9146b4598 : fix lint (#44346)
6269b6e0f0 : [quant][graphmode][fx][api] Call fuse in prepare (#43984)
be94dba429 : [NNC] fix support for FP16 in CudaCodgen (#44209)
9f54bcc522 : [quant][graphmode][fx] Support inplace option (#43983)
0351d31722 : add rocm nightly build (#44250)
40d138f7c1 : Added alpha overloads for add/sub ops with lists (#43413)
00b5bd536f : fx quant: add docblocks to _find_matches and _find_quants (#43928)
6dd53fb58d : [fix] output of `embedding_bag` with non-contiguous weight (#44032)
43e38d60d6 : [quant][graphmode][fx] Support quantize per channel in all cases (#44042)
49e979bfde : Set default compiler differently according to platform (#43890)
1fcccd6a18 : [FX] Minor fixups in Graph printout (#44214)
47ac9bb105 : Enable temp disabled tests in test_jit_fuser_te.py (#44222)
54931ebb7b : Release saved variable from DifferentiableGraphBackward (#42994)
63d62d3e44 : Skips test_addcmul_cuda if using ROCm (#44304)
de89261abe : Reduce `sccache` log levels for RocM to a default state (#44310)
477f489137 : Don't register a fallback for private use to let extensions do it themselves (#44149)
caf23d110f : [JIT] Unshare types for modules that define() in __init__ (#44233)
4e0ac120e9 : [FX] Only copy over training attr if it's there (#44314)
fd8e2064e0 : quant: switch observers to use min_max (#42957)
de980f937b : skip test_tanhquantize for now (#44312)
8d212d3f7a : add 'run_duration' stats for binary builds to scuba (#44251)
1130de790c : Automated submodule update: FBGEMM (#44177)
5de805d8a7 : [dper3] Export Caffe2 operator LearningRate to PyTorch
cce5982c4c : Add unary ops: exp and sqrt (#42537)
6134ac17ba : Revert D23561500: Benchmarks: re-enable profiling-te configuration (try 2).
7c61f57bec : test_ops: skipTest only takes a single argument (#44181)
0e64b02912 : FindCUDA error handling (#44236)
5d748e6d22 : [TensorExpr] Re-enable tests. (#44218)
589a2024c8 : Benchmarks: re-enable profiling-te configuration (try 2). (#44270)
10dd25dcd1 : Add binary ops for _foreach APIs (#42536)
626e410e1d : Revert D23544563: Benchmarks: re-enable profiling-te configuration.
1b2da9ed82 : Expose alias key info in dumpState and update test_dispatch. (#44081)
514f20ea51 : Histogram Binning Calibration
ac1f471fe2 : Benchmarks: re-enable profiling-te configuration. (#44212)
bb861e1d69 : Ports CUDA var and std reduce all (with no out argument) to ATen, fixes var docs (#43858)
83a6e7d342 : Adds inequality testing aliases for better NumPy compatibility (#43870)
671160a963 : Revert D23557576: Revert D23519521: [dper3] replace LengthsGather lowlevel module's PT implemetnatio to use caffe2 op
e358d516c8 : Revert D23549149: [PyTorch][Mobile] Insert the module name as `name()` to metadata dict if metadata doesn't contain "model_name"
70c8daf439 : Apply selective build on RNN operators (#44132)
68297eeb1a : Add support for integer dim arg in `torch.linalg.norm` (#43907)
719d29dab5 : Implement torch.i0 and torch.kaiser_window (#43132)
4fc29e9c43 : Revert D23519521: [dper3] replace LengthsGather lowlevel module's PT implemetnatio to use caffe2 op
396469f18c : Explicitly forbidden the other inherited methods of RemoteModule. (#43895)
199c73be0f : [quant][pyper] Support quantization of ops in fork-wait subgraph (#44048)
164b96c34c : [quant][pyper] make embedding_bag quantization static (#44008)
a0ae416d60 : [quant] Support aten::embedding_bag quantization in graph mode (#43989)
15a7368115 : Add const to getTensors method of GradBucket. (#44126)
5bd2902796 : [JIT] Remove references to no longer generated _tanh_backward and _sigmoid_backward (#44138)
df67f0beab : [TensorExpr fuser] Guard nodes that have tensor output properties determined by non-tensor inputs (#44137)
5a0d65b06b : Further expand coverage of addmm/addmv, fix 0 stride (#43980)
d07a36e0c1 : Revert D23490149: [pytorch][PR] Compile less legacy code when BUILD_CAFFE2 is set to False
618b4dd763 : fx quant prepare: clarify naming (#44125)
a940f5ea5d : torchscript graph mode quant: remove benchmark filter (#44165)
8c64bb4f47 : [dper3] replace LengthsGather lowlevel module's PT implemetnatio to use caffe2 op
398409f072 : [PyTorch][Mobile] Insert the module name as `name()` to metadata dict if metadata doesn't contain "model_name" (#44227)
15e99b6ff6 : Compile less legacy code when BUILD_CAFFE2 is set to False (#44079)
f3bf6a41ca : [ONNX] Update repeat op (#43430)
3699274ce2 : [DPER3] AOT integration
8b17fd2516 : Add remote_parameters() into RemoteModule class. (#43906)
8f37ad8290 : [BUILD] Guard '#pragma unroll' with COMPILING_FOR_MIN_SIZE
3d7c22a2ce : [ONNX] Enable new scripting passes for functionalization and remove_mutation (#43791)
70bbd08402 : [FX] Fix forward merge conflict breakage (#44221)
4562b212db : Fix potential divide by zero for CostInferenceForRowWiseSparseAdagrad
2ad5a82c43 : [fx] get rid of graph_module.root (#44092)
0c2bc4fe20 : Revert D23468286: [pytorch][PR] Optimize code path for adaptive_avg_pool2d when output size is (1, 1)
6474057c76 : Revert D23503636: [pytorch][PR] [NNC] make inlining immediate (take 2) and fix bugs
539d029d8c : [ONNX] Fix split export using slice (#43670)
af13faf18b : [FX] __str__ for GraphModule and Graph (#44166)
0e3cf6b8d2 : [pytorch] remove code analyzer build folder between builds (#44148)
f38e7aee71 : Updates to SCCACHE for ROCm case (#44155)
2a1fc56694 : replace the white list from default mappings (#41802)
4d431881d1 : Control NCCL build parallelism via MAX_JOBS environment var (#44167)
6aba58cfd3 : Limit MAX_JOBS to 18 for linux binary builds (#44168)
6cecf7ec68 : Enable test_cublas_config_deterministic_error for windows (#42796)
9a5a732866 : Register some backwards functions as operators (#44052)
0c01f136f3 : [BE] Use f-string in various Python functions (#44161)
28b1360d24 : [Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
f8f35fddd4 : Optimize code path for adaptive_avg_pool2d when output size is (1, 1) (#43986)
ef28ee50b0 : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
98ad5ff41f : [te] Disable reductions by default (#44122)
a37c199b8b : [c2][cuda] small improvement to dedup adagrad by avoiding recompute of x_ij (#44173)
2f8a43341d : Add API for onnxifi with AOT Glow ONNX (#44021)
d221256888 : [Message] Add what to do for missing operators.
addfd7a9b9 : Add tests against autograd precedence and multiple dispatch. (#44037)
b60ffcdfdd : Enable typechecks for torch.nn.quantized.modules.linear (#44154)
538d3bd364 : Enable CUDA 11 jobs for Windows nightly builds (#44086)
69e38828f5 : [quant] conv_transpose2d_prepack/conv_transpose2d_unpack (#40351)
c40e3f9f98 : [android][jni] Support Tensor MemoryFormat in java wrappers (#40785)
70aecd2a7f : [NNC] make inlining immediate (take 2) and fix bugs (#43885)
bc4a00c197 : [TVM] Support Fused8BitRowwiseQuantizedToFloat op (#44098)
3105d8a9b2 : [TensorExpr] Fuser: rely on input types when checking whether a device is supported. (#44139)
71510c60ad : fx qat: respect device affinity (#44115)
7816d53798 : [JIT] Add mypy type annotations for JIT (#43862)
9dd8670d7d : [jit] Better match behavior of loaded ScriptModules vs. freshly created ones (#43298)
74f18476a2 : [jit] fix segfault in attribute lookup on loaded ScriptModules (#43284)
e64879e180 : [tensorexpr] Alias analysis tests (#44110)
6868bf95c6 : [JIT] Fuser match on schemas not node kind (#44083)
9b3c72d46e : [pytorch] Make mobile find_method return an optional (#43965)
f91bdbeabd : Enable function calls in TEFuser and SpecializeAutogradZero (#43866)
e05fa2f553 : [quant] Prep for conv_transpose packing (#39714)
352a32e7f3 : [caffe2] fix clang build
f3da9e3b50 : Enable Enum pickling/unpickling. (#43188)
d0421ff1cc : Benchmarks: add scripts for FastRNNs results comparison. (#44134)
3806c939bd : Polish DDP join API docstrings (#43973)
442684cb25 : Enable typechecks for torch.nn.modules.[activation|upsampling] (#44093)
a153f69417 : Fix replaceAtenConvolution for BC. (#44036)
ba65cce2a2 : Fix transposed conv2d rewrite pattern to account for convolution api (#44035)
55ff9aa185 : Test TE fuser unary ops and fix sigmoid(half) (#44094)
bfa1fa5249 : Update rocm-3.5.1 build job to rocm-3.7 (#44123)
49215d7f26 : For CriterionTests, have check_gradgrad actually only affect gradgrad checks. (#44060)
42f9897983 : Mark bucketize as not subject to autograd (#44102)
91b0d1866a : add tanh + quantize unit test (#44076)
de672e874d : [JIT] Improve error message for unsupported Optional types (#44054)
d11603de38 : [TensorExpr] Benchmarks: set number of profiling runs to 2 for PE. (#44112)
b10c527a1f : [pytorch][bot] update mobile op deps (#44100)
f96b91332f : [caffe2.proto] Add AOTConfig (#44020)
c59e11bfbb : Add soft error reporting to capture all the inference runtime failure. (#44078)
5973b44d9e : Rename NewCriterionTest to CriterionTest. (#44056)
7d95eb8633 : [fbgemm] manual submodule update (#44082)
c10f30647f : Fix CUDA debug nightly build failure (#44085)
98320061ad : DDP Communication hook: (Patch) Fix the way we pass future result to buckets. (#43734)
768c2b0fb2 : Fix THPVariable_float_scalar (#43842)
b6e2b1eac7 : BatchedFallback: stop emitting the entire schema in the fallback warning (#44051)
cae52b4036 : Merge CriterionTest into NewCriterionTest. (#44055)
15643de941 : With fixes, Back out "Back out "Selective meta programming preparation for prim ops""
24ca6aab02 : Improves type-checking guards. (#43339)
b6d5973e13 : Delete THCStream.cpp (#43733)
68a1fbe308 : Allow criterion backwards test on modules requiring extra args (i.e. CTCLoss). (#44050)
5f89aa36cf : Actually run backward criterion tests. (#44030)
665feda15b : Adds opinfo-based autograd tests and (un)supported dtype tests (#43451)
ab7606702c : Rectified a few grammatical errors in documentation (#43695)
40fec4e739 : [TensorExpr] Fuser: do not fuse ops with 0-dim tensors. (#44073)
3da82aee03 : [JIT] Remove profile nodes before BatchMM. (#43961)
ae7699829c : Remove THC max and min, which are longer used (#43903)
32e0cedc53 : [ONNX] Move tests to test_pytorch_onnx_onnxruntime (#42684)
bc45c47aa3 : Expand the coverage of test_addmm and test_addmm_sizes (#43831)
f5ba489f93 : Move dependent configs to CUDA-10.2 (#44057)
a76a56d761 : Add "torch/testing/_internal/data/*.pt" to .gitignore (#43941)
37658b144b : Remove useless py2 compatibility import __future__, part 1 (#43808)
b2a9c3baa9 : [TVM] Support fp16 weights in c2_frontend (#44070)
b2aaf212aa : [TensorExpr] Add option to enforce TensorExprKernel fallbacks. (#43972)
6a6552576d : rename _min_max to _aminmax (#44001)
486a9fdab2 : _min_max.dim: CUDA implementation (#42943)
834279f4ab : _min_max_val.dim: CPU implementation (#42894)
78994d165f : min_max kernel: add CUDA (#42868)
33d51a9b32 : Respect canFuseOn{CPU,GPU} in TE fuser (#43967)
041573c8cd : Add Cost Inference for AdaGrad and RowWiseSparseAdagrad
2f044d4ee5 : Fix CI build (#44068)
129f406062 : Make torch.conj() a composite function and return self for real tensors (#43270)
f9efcb646b : fx quant: clarify state in Quantizer object (#43927)
f15e27265f : [torch.fx] Add support for custom op (#43248)
7a77d1c5c2 : [FX] Only copy over forward() from exec (#44006)
402e9953df : [pytorch][bot] update mobile op deps (#44018)
297c938729 : Add _foreach_add(TensorList tl1, TensorList tl2) and _foreach_add_(TensorList tl1, TensorList tl2) APIs (#42533)
f6f9d22228 : [ONNX] Export KLDivLoss (#41858)
4716284904 : Update persons_of_interest.rst (#44031)
b167402e2e : [redo] Fix SyncBatchNorm forward pass for non-default process group (#43861)
544a56ef69 : [JIT] Always map node output in vmap (#43988)
276158fd05 : .circleci: Remove un-needed steps from binary builds (#43974)
73f009a2aa : refactor manual function definitions (#43711)
a6789074fc : Implement ChannelShuffle op with XNNPACK (#43602)
df8da5cb5a : fx quant: make load_arg function more clear (#43923)
77ef77e5fa : fx quant: rename matches -> is_match (#43914)
6f5282adc8 : add quantization debug util to pretty print FX graphs (#43910)
b6b5ebc345 : Add `torch.vdot` (#43004)
14ebb2c67c : Allow no-bias MKLDNN Linear call (#43703)
c88ac25679 : Check for internal memory overlap in some indexing-type functions (#43423)
5807bb92d3 : TensorIteratorConfig: Check memory overlap by default (#43422)
cd58114c6c : Adjust level of verbosity of debug dumps in graph executor T74227880 (#43682)
8722952dbd : Add benchmark for channel_shuffle operator (#43509)
6512032699 : [Static Runtime] Add OSS build for static runtime benchmarks (#43881)
c61a16b237 : Kill dead code in common_nn as part of merging Criterion and NewCriterionTests. (#43956)
95f912ab13 : Use NewCriterionTest in test_cpp_api_parity.py. (#43954)
4bb5d33076 : is_numpy_scalar should also consider bool and complex types (#43644)
7000c2efb5 : [2/2][PyTorch][Mobile] Added mobile module metadata logging (#43853)
1dd658f28f : [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#43953)
c259146477 : add missing NEON {vld1,vst1}_*_x2 intrinsics (#43683)
137a4fcc3b : Back out "Selective meta programming preparation for prim ops"
263412e536 : Rename is_complex_t -> is_complex (#39906)
9db90fe1f3 : [TensorExpr] Remove unused functions in kernel.cpp (#43966)
8fd9fe93be : [quant][graphmode][fx] Support dynamic quantization without calibration (#43952)
fbea2ee917 : broadcast_object API for c10d (#43887)
4134b7abfa : Pass CC env variable as ccbin argument to nvcc (#43931)
0ffe3d84d5 : [quant][graphmode][fx] Support dynamic quantization without calibration (#43892)
d15b9d980c : [quant][graphmode][fx][refactor] Move patterns to separate files (#43891)
8d53df30ea : [FX] Better error when unpacking Proxy (#43740)
ec7f14943c : [OSS] Update README.md -- Explain more complex arguments and functionalities
e49dd9fa05 : Delete `raise_from` from `torch._six` (#43981)
5e97f251a8 : Enable TF32 support for cuDNN (#40737)
93fbbaab2a : Update `README.md` in oss (#43893)
24eea364f7 : Check SparseAdam params are dense on init (#41966) (#43668)
bacee6aa2e : Selective meta programming preparation for prim ops (#43540)
a1a23669f2 : [FX] Pickle serialization of GraphModule via forward source (#43674)
73f7d63bc9 : [FX] Support tensor-valued constants (#43666)
06c277f38e : [TVM] Support slice op (#43969)
5472426b9f : Reset `DataLoader` workers instead of creating new ones (#35795)
db6bd9d60b : rename input argunment `interested-folder` to `interest-only` -- be consistent with other arguments (#43889)
bc64efae48 : Back out "Revert D19987020: [pytorch][PR] Add the sls tensor train op" (#43938)
7035cd0f84 : Revert D23216393: Support work.result() to get result tensors for allreduce for Gloo, NCCL backends
63a0bb0ab9 : Add typing annotations for torch.nn.quantized.dynamic.modules.rnn (#43186)
8ca3913f47 : Introduce BUILD_CAFFE2 flag (#43673)
76ca365661 : [pytorch][bot] update mobile op deps (#43937)
e3cb582e05 : Error printing extension support for multiline errors (#43807)
224232032c : Move Autograd to an alias dispatch key (#43070)
13a48ac1f3 : MaxPool1d without indices optimization (#43745)
a044c039c0 : updated documentation to streamline setup (#42850)
b1f19c20d6 : Run function check and out check in TestTensorDeviceOps (#43830)
9b98bcecfa : torch.cat and torch.stack batching rules (#43798)
dbc4218f11 : Batching rules for: torch.bmm, torch.dot (#43781)
fa12e225d3 : Batching rule for torch.mv (#43780)
2789a4023b : TestVmapOperators: add structured tests that batching rules get invoked (#43731)
0b2694cd11 : Support work.result() to get result tensors for allreduce for Gloo, NCCL backends (#43386)
a67246b2d4 : Add reduction string test for ctc_loss. (#43884)
fab012aa28 : Revert "Added support for Huber Loss (#37599)" (#43351)
c14a3613a8 : Fix NaN propagation in TE fuser's min/max implementation (#43609)
820c4b05a9 : [ONNX] Update slice symbolic function (#42935)
f1624b82b5 : Preserve python backtrace in autograd engine errors. (#43684)
825c109eb7 : [reland][quant][graphmode][fx] Add support for weight prepack folding (#43728) (#43902)
6da26cf0d9 : Update torch.range warning message regarding the removal version number (#43569)
85d91a3230 : [TensorExpr] Check statements in test_kernel.cpp (#43911)
f229d2c07b : Revert D23335106: [quant][graphmode][fix] Fix insert quant dequant for observers without qparams
69080e9e7e : simplify profile text output by displaying only top-level ops statistics (#42262)
d7ee84c9b5 : Update determinism documentation (#41692)
69fbc705d8 : Remained changes of #43578 (#43921)
3c2f6d2ecf : [caffe2] Extend dedup SparseAdagrad fusion with stochastic rounding FP16 (#43124)
f17d7a5556 : Fix exception chaining in `torch/` (#43836)
da32bf4cc6 : Move type annotations for remaining torch.utils stub files inline (#43406)
602209751e : [quant][graphmode][fix] Fix insert quant dequant for observers without qparams (#43606)
7db7da7151 : [reland][quant][graphmode][fx] Add top level APIs (#43581) (#43901)
deb5fde51c : [TensorExpr] Make KernelSumMultipleAxes much faster (#43905)
ee53a335c0 : [ONNX] Floordiv (#43022)
f73ba88946 : Avoid resizing in MinMaxObserver (#43789)
98b846cd1d : [JIT] Remove loop peeling from the profiling executor pipeline. (#43847)
d69d603061 : [JIT] Specialize autograd zero: actually remove the original graph after we created its versioned copy. (#43900)
f150f924d3 : [JIT] Specialize autograd zero: fix the guarding condition. (#43846)
9b820fe904 : Fix ImportError in the OSS land. (#43912)
7137327646 : log message at per-test level for`perfpipe_pytorch_test_times` (#43752)
4c19a1e350 : Move torch/autograd/grad_mode.pyi stubs inline (#43415)
e941a462a3 : Enable gcc coverage in OSS (#43883)
da0e93a8c3 : Move `fbcode` related coverage code to `fb/` folder and add `TARGETS` (#43800)
3682df77db : Implementing NumPy-like function torch.heaviside() (#42523)
7680d87a76 : Let linspace support bfloat16 and complex dtypes (#43578)
3278beff44 : Skip target determination for codecov test (#43899)
ffca81e38b : [pytorch][bot] update mobile op deps (#43871)
4e4626a23d : Join-based API to support DDP uneven inputs (#42577)
2f52748515 : Publish all_gather_object and gather_object docs (#43772)
f7bae5b6b1 : Revert D23385091: [quant][graphmode][fx] Add top level APIs
68304c527a : Revert D23385090: [quant][graphmode][fx] Add support for weight prepack folding
0394c5a283 : [fix] torch.multinomial : fix for 0 size dim (#43775)
3c8b1d73c9 : Update aliasing in tensorexpr fuser (#43743)
5da8a7bf2d : use types in the IR instead of vmap (#43742)
259e5b7d71 : Add passes to profiling executor pipeline (#43636)
a7e7981c0b : Use prim::TensorExprGroup interned symbol (#43635)
1c0faa759e : Update requires grad property (#43634)
2bede78a05 : add qr_backward functionality for wide case (#42216)
69dd0bab90 : [RPC profiling] Add test to ensure using record_function works for RPC (#43657)
4ef12be900 : Add __complex__ (#43844)
c5d0f091b2 : addmm/addmv should accept complex alpha and beta (#43827)
89452a67de : [fx] GraphModule.src -> GraphModule.code (#43655)
1390cad2d8 : [NNC] Hook up registerizer to Cuda codegen [2/x] (#42878)
63dbef3038 : Better msg (#43848)
ef08f92076 : [quant][graphmode][fx] Add support for weight prepack folding (#43728)
eb4199b0a7 : [quant][graphmode][fx] Add top level APIs (#43581)
42c895de4d : Properly check that reduction strings are valid for l1_loss, smoothl1_loss, and mse_loss. (#43527)
b8d34547ee : [quant][graphmode][fx][fix] enable per channel quantization for functional ops (#43534)
6ea89166bd : Rewrite of ATen code generator (#42629)
576880febf : Print all traceback for nested backwards in detect_anomaly (#43626)
1cdb9d2ab5 : Test runner for batched gradient computation with vmap (#43664)
1dcc4fb6b7 : Kill unused _pointwise_loss function. (#43523)
a860be898e : [resubmit] Add amax/amin (#43819)
8fb7c50250 : Enable complex blas for ROCm. (#43744)
08126c9153 : [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628)
3aeb70db0b : Documents sub properly, adds subtract alias (#43850)
3dc9645430 : Disable RocM CircleCI jobs (#42630)
7b835eb887 : Update CUDA11 docker container (#42200)
5021ec826b : Fix docs for kwargs, f-p (#43586)
1830e4f08c : Remove unnamed namespace in headers (#43689)
ab3ea95e90 : #include <string> in loopnest.h (#43835)
628db9699f : Vulkan command buffer and pool. (#42930)
d1df098956 : Vulkan resource cache. (#42709)
87e8f50aae : Vulkan descriptor and descriptor layout cache. (#42642)
15aaeb8867 : Vulkan pipeline and pipeline layout cache. (#42395)
387dc24c92 : Vulkan memory allocator. (#42786)
287fb273cd : Vulkan (source and binary) shader and shader layout cache. (#42325)
6373063a98 : Generic Vulkan object cache. (#42394)
4e39c310eb : Move torch/csrc/utils/hash.h to c10/util/hash.h. (#42503)
7f967c08b8 : Document the beta=0 behavior of BLAS functions (#43823)
cc52386096 : Revert D19987020: [pytorch][PR] Add the sls tensor train op
45ba836876 : Revert "Revert D23252335: Refactor Vulkan context into its own files. Use RAII." (#43628)
f31b111a35 : Add the sls tensor train op (#33525)
550fb2fd52 : Expand the coverage of test_blas_empty (#43822)
60ad7e9c04 : [TensorExpr] Make sum available from Python (#43730)
8a41fa4718 : [Selective Build] Move register_prim_ops and register_special_ops to app level (#43539)
d10056652b : Enable `torch.half` for `lt` and `masked_select` (#43704)
931b8b4ac8 : Use ivalue::Future in autograd engine and DistEngine. (#43676)
000739c31a : Function calls for fallback paths (#43274)
8538a79bfe : [jit][static] Basic executor (#43647)
6aaae3b08b : [ONNX] Addition of diagnostic tool API (#43020)
58148c85f4 : Use template OperatorGenerator for prim and special operator registration (#43481)
8997a4b56b : [typing] Enable typing in torch.quantization.fuse_modules typechecks … (#43786)
eae92b7187 : Updated README.md by correcting grammatical errors (#43779)
13c7c6227e : Python/C++ API Parity: TransformerDecoder (#42886)
64906497cd : Revert D23391941: [pytorch][PR] Implementing NumPy-like function torch.heaviside()
47e489b135 : Make ExtraFilesMap return bytes instead of str (#43241)
1a79d7bb28 : DDP communication hook examples (#43310)
68b9daa9bf : Add `torch.linalg.norm` (#42749)
cd0bab8d8d : [ONNX] Where op (#41544)
a1eae6d158 : Implementing NumPy-like function torch.heaviside() (#42523)
633d239409 : [torch.fx] Pass placeholders through delegate too (#43432)
3f0120edb4 : Revert D23360705: [pytorch][PR] Add amax/amin
7d517cf96f : [NCCL] Dedicated stream to run all FutureNCCL callbacks. (#43447)
3f5ea2367e : Adding a version serialization type to ConvPackedParam (#43086)
af4ecb3c11 : quantized conv: add support for graph mode BC testing, and increase coverage (#43524)
4cb8d306e6 : Add _foreach_add_(TensorList tensors, Scalar scalar) API (#42531)
20abfc21e4 : Adds arctanh, arcsinh aliases, simplifies arc* alias dispatch (#43762)
0564d7a652 : Land code coverage tool for OSS (#43778)
89e2a3591e : Add 1% threshold to codecov (#43783)
b23e9cdd64 : .circleci: Add slash to end of s3 cp (#43792)
776c2d495f : [JIT] IRParser: store list attributes as generic ivalue lists. (#43785)
bcec8cc3f9 : Add amax/amin (#43092)
f4695203c2 : Fixes fft function calls for C++ API (#43749)
dc5d365514 : Fix bug in caching allocator. (#43719)
be3ec6ab3e : [caffe2][torch] correctly re-raise Manifold StorageException
b72da0cf28 : OneDNN: report error for dilation max_pooling and replace AT_ERROR with TORCH_CHECK in oneDNN codes (#43538)
1f7434d1ea : Fix 'module' to 'model' in quantize_dynamic doc (#43693)
a76184fe1e : grammatical error fix (#43697)
b630c1870d : Add stateful XNNPack deconvolution2d operator to torch. (#43233)
58a7e73a95 : [TensorExpr] Block Codegen (#40054)
9063bcee04 : Don't proceed into setup.py too far if Python version is unsupported (#42870)
c177d25edf : TensorIterator: Check for memory overlap in all `nullary_op`s (#43421)
dc0722e9b7 : TensorIterator: Check for memory overlap in all `compare_op`s (#43420)
065ebdb92f : TensorIterator: Check for memory overlap in all `binary_op`s (#43419)
bdee8e02c0 : TensorIterator: Check memory overlap in all `unary_op`s (#43418)
0ab83f7f9f : Fixed undefined behavior in BatchedFallback (#43705)
8e507ad00e : Update the div formula for numerical stability (#43627)
b29375840a : Revert D23379383: Land `code_coverage_tool` to `caffe2/tools` folder
c7787f7fbf : [numpy compatibility]Fix `argmin/argmax` when multiple max/min values (#42004)
26161e8ab6 : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
f06d3904f2 : Land `code_coverage_tool` to `caffe2/tools` folder
654ab209c6 : [JIT] Disable broken tests (#43750)
1a21c92364 : [ONNX] Update in scatter ONNX export when scalar src has different type (#43440)
87d7c362b1 : [JIT] Add JIT support for torch.no_grad (#41371)
8032dbc117 : Add Rowwise Prune PyTorch op (#42708)
3a0e35c9f2 : [pytorch] deprecate static dispatch (#43564)
3afd24d62c : [pytorch] check in default generated op dependency graph (#43570)
9a2d4d550e : update build flags for benchmark binaries
01f974eb1e : Specialize optionals for grad_sum_to_size (#43633)
a19fd3a388 : Add undefined specializations in backward (#43632)
a4cf4c2437 : refactor tests (#43631)
e189ef5577 : Refactor pass to class (#43630)
d1c4d75c14 : Add API for unexecuted op (#43629)
5da97a38d1 : Check if input is ChannelsLast or ChannelsLast3d for quantized AdaptivePool3d. (#42780)
cdc3e232e9 : Add `__str__` and `__repr__` bindings to SourceRange (#43601)
04ccd3ed77 : Fix bazel dependencies (#43688)
bff741a849 : Improve save_for_mobile cxx binary (#43721)
3830998ac3 : [fx] When generating names, avoid shadowing builtins (#43653)
5a1aa0e21e : [reland][quant][graphmode][fx] Add e2e test on torchvision (#43587)
73dcfc5e78 : Update RNN op registration format (#43599)
288a2effa0 : Operator generator based on templated selective build. (#43456)
c25d0015f0 : Autograd code clean up (#43167)
de84db2a9d : [TensorExpr] Add aten::sum lowering to the kernel (#43585)
48e08f884e : C++ APIs TransformerEncoder (#43187)
f63d06a57b : Fix docs for kwargs, a-e (#43583)
a070c619b9 : [FX] Native callables in FX lowering (#43426)
79e6aaeb4c : pull empty() out of use_c10_dispatcher: full (#43572)
01b5c06254 : [fix] handle empty args in chain_matmul (#43553)
28be3ef2f2 : Fix hipify script for pytorch extensions (#43528)
c4e5ab6ff2 : [TensorExpr] Disable a flaky test. (#43678)
00c1501bc0 : [JIT] Cast return values of functions returning Any (#42259)
f73e32cd04 : Reduce amount of work done within a global lock within ParallelLoadOp (#43508)
0bf27d64f4 : Fix NaN propagation in fuser's min/max implementation (#43590)
033b7ae3ef : implement NumPy-like functionality maximum, minimum (#42579)
9ca338a9d4 : [ONNX] Modified slice node in inplace ops pass (#43275)
1bda5e480c : Add Python code coverage (#43600)
88e35fb8bd : Skip SVD tests when no lapack (#43566)
cf26050e29 : [pytorch] Move TensorIteratorConfig method implementation to cpp file (#43554)
6c28df7ceb : [fx] add test for args/kwargs handling (#43640)
5a15f56668 : match batchmatmul on 1.0.0.6 (#43559)
769b9381fc : DDP Communication hook: Fix the way we pass future result to buckets. (#43307)
0521c71241 : [D23047144 Duplicate][2/3][lite interpreter] add metadata when saving and loading models for mobile (#43584)
306eb3def7 : Additional error checking for `torch.cuda.nccl` APIs. (#43247)
db1fbc5729 : [OACR][NLU] Add aten::str operator (#43573)
6459f0a077 : added rocm 3.7 docker image (#43576)
a91e1cedc5 : Reduce number of hypothesis tests in CI (#43591)
2a4d312027 : Allow GPU skip decorators to report the right number of GPUs required in (#43468)
25dcc28cd6 : [jit][static] Replace deepcopy with copy (#43182)
51861cc9b1 : .circleci: Add CUDA 11 to nightly binary builds (#43366)
42f6c3b1f4 : Raise error on device mismatch in addmm (#43505)
7beeef2c69 : .jenkins: Remove openssh installs (#43597)
573940f8d7 : Fix type annotation errors in torch.functional (#43446)
2b70f82737 : fix typo in test_dataloader test_multiprocessing_contexts (take 2) (#43588)
c1553ff94b : Benchmarks: temporarily disable profiling-te configuration. (#43603)
3ec24f02af : [TensorExpr] Start using typecheck in the fuser. (#43173)
b763666f9f : [JIT] Subgraph utils: add an optional vmap argument to the API to allow retrieving value mappings. (#43235)
d18566c617 : [TensorExpr] Fuser: disallow aten::slice nodes. (#43365)
8dc4b415eb : [TensorExpr] Fuser: only require input shapes to be known (output shapes can be inferred). (#43171)
f6b7c6da19 : [TensorExpr] Fuser: move canHandle and some other auxiliary functions into TensorExprFuser class. (#43170)
f35e069622 : Back out "Make grad point to bucket buffer in DDP to save memory usage" (#43557)
58666982fb : check in intel nnpi 1007 into fbcode/tp2
b3f8834033 : Batching rule for torch.pow, torch.result_type (#43515)
c9f125bf70 : Black to Block for various files (#42913)
348e78b086 : Evenly distribute output grad into all matching inputs for min/max/median (#43519)
be637fd5f6 : Revert D23306683: [quant][graphmode][fx] Testing torchvision
05f27b18fb : Back out D23047144 "[2/3][lite interpreter] add metadata when saving and loading models for mobile"
5ca6cbbd93 : Remove unnecessary copies in ProcessGroupGloo for multiple inputs allreduce (#43543)
9b05fbd92e : Correct the windows docs (#43479)
3df398a3a8 : Update the QR documentation to include a warning about when the QR.backward is well-defined. (#43547)
62dcd253e3 : [quant][graphmode][fx] Testing torchvision (#43526)
9420c773d0 : Revert D23299452: [pytorch][PR] fix typo in test_dataloader test_multiprocessing_contexts
ebc0fc4dfc : Polish the nightly.py docs in CONTRIBUTING a little (#43494)
3dcfe84861 : Grammatical corrections (#43473)
f32ca57c5e : Fix typo in LSTMCell document (#43395)
f8e9e7ad4a : Allocating warp to an input index in compute_cuda_kernel (#43354)
76894062dc : move wholearchive to link option (#43485)
1089ff404c : Refactored the duplicate code into a function in _ConvNd (#43525)
8ecfa9d9a2 : [cmake] End support for python3.5 for pytorch (#43105)
6a2d7a05c4 : fix typo in test_dataloader test_multiprocessing_contexts (#43343)
b430347a60 : Address JIT/Mypy issue with torch._VF (#43454)
f02753fabb : Support AMP in nn.parallel (#43102)
cbdaa20c88 : [serialize] Expose zip file alignment calculation functions (#43531)
d1d32003bb : force pytorch tensors to contiguous before calling c2 ops
675f3f0482 : Fix "save binary size" steps (#43529)
f80b695a75 : Properly format db.h and db.cc (#43027)
7b243a4d46 : [quant][graphmode[fx][test][refactor] Refactor tests for graph mode quantization on fx (#43445)
87905b5856 : [pytorch] add option to include autograd for code analyzer (#43155)
284ff04792 : [quant] Support set API for EmbeddingBag quantization (#43433)
e37f871e87 : [2/3][lite interpreter] add metadata when saving and loading models for mobile
ed8b08a3ba : Update quantize_jit to handle new upsample overloads (#43407)
e08e93f946 : Reland of benchmark code (#43428)
4cfac34075 : [ROCm] allow .jenkins/pytorch/test.sh to run on centos (#42197)
35a36c1280 : Implement JIT Enum type serialization and deserialization (#43460)
0fa99d50bc : Enable torch.cuda.memory typechecking (#43444)
7024ce8a2c : [quant] Add benchmarks for quantized embeddingbag module (#43296)
7cc1efec13 : Add lite SequentialSampler to torch mobile (#43299)
c972e6232a : Implement batching rules for basic arithmetic ops (#43362)
db78c07ced : Enable torch.cuda.nvtx typechecking (#43443)
2f9c9796f1 : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
c4e841654d : Add alias torch.negative to torch.neg. (#43400)
1f0cfbaaad : [fx] add type annotations (#43083)
b349f58c21 : [fx] enabling typechecking of fx files (#43082)
a97ca93c0e : remove prim::profile and special-casing (#43160)
d70b263e3a : [DPER3] Separate user embeddings and ad embeddings in blob reorder
4dc8f3be8c : Creates test_tensor_creation_ops.py test suite (#43104)
35351ff409 : Fix ToC Link (#43427)
e4af45f3aa : Fix bugs in vec256_float_neon.h (#43321)
b003f2cc28 : Enable input pointer caching in XNNPACK integration. (#42840)
b52e6d00f9 : Change quantizer to account for input tensor's memory format. (#42178)
b1d31428e7 : Reduce number of `prim::profile` (#43147)
8efa898349 : [ONNX] Export split_to_sequence as slice when output number is static (#42744)
ec9e6e07bc : [quant][graphmode][fx] Add support for general value ops (#43439)
47e1b7a8f1 : Set CONSTEXPR_EXCEPT_WIN_CUDA as const while it is not constexpr (#43380)
d94b10a832 : Revert D23223281: Add Enum TorchScript serialization and deserialization support
915fd1c8fc : centralize autograd dispatch key set (#43387)
88b564ce39 : [quant][graphmode][fx] Add support for general shape ops (#43438)
192c4b0050 : [quant][graphmode][fx] Add support for clamp (#43437)
40c77f926c : Add prim::TypeCheck operation (#43026)
98307a2821 : Fix bfloat16 erfinv get incorrect value problem for cpu path (#43399)
5e04bb2c1c : caffe2: expose CPUContext RandSeed for backwards compatibility with external RNG (#43239)
fb12992b5d : Call qnnpack's conv setup only if input pointer has changed. (#42008)
04aa42a073 : Refactor qconv to reduce allocations. (#42007)
2a08566b8f : Simple caching allocator for CPU. (#42006)
abe878ce96 : Allow Freezing of Module containing interface attribute (#41860)
490d41aaa6 : [quant][graphmode][fx] Add support for instance_norm (#43377)
a5a6a3e633 : add support for optional int list with scalar fill (#43262)
f269fb83c1 : Add Enum TorchScript serialization and deserialization support (#42963)
aa53b2d427 : Workaround bugs in user side embedding meta info and better msgs (#43355)
aec917a408 : [quant][graphmode][fx] Add support for layer_norm (#43376)
089bb1a8e4 : [quant][graphmode][fx] Add support for elu (#43375)
5a02c6b158 : [quant][graphmode][fx] Add support for hardswish (#43374)
93f1b5c8da : Mobile backward compatibility (#42413)
e96871ea46 : [quant][graphmode][fx] Add support for mul and mul relu (#43373)
6c772515ed : Revert D23252335: Refactor Vulkan context into its own files. Use RAII.
8eb3de76ba : Fix enum constant printing and add FileCheck to all Enum tests (#42874)
ff454cc429 : [quant][grapphmode][fx][test][refactor] Refactor quantized add test (#43372)
109ea59afc : [quant][graphmode][fx] Add support for batchnorm relu (#43335)
9e87a8ddf4 : [quant][graphmode][fx] Add support for batchnorm (#43334)
054073c60d : Refactor Vulkan context into its own files. Use RAII. (#42273)
3d76f7065e : [quant][graphmode][fx] Add support for cat (#43333)
26be4dcfa1 : [quant][graphmode][fx] Add support for add relu (#43332)
452a473729 : [quant][graphmode][fx] Add support for add (#43331)
6e48c88e09 : .circleci: Prefer using env-file for docker run (#43293)
100649d6a9 : Normalize loops with non-zero start. (#43179)
74781ab5b8 : Revert D23242101: [pytorch][PR] Implement first draft of autograd benchmark.
650590da0d : [quant][graphmode][fx] Add support for conv module + relu (#43287)
3293fdfa80 : [quant] Enable from_float for quantized Embedding_Bag (#43176)
b354b422ee : [quant] Make offsets an optional argument (#43090)
4db8ca1129 : [quant] Create nn.quantized.dynamic.EmbeddingBag (#43088)
f20a04fa2d : [TensorExpr] Simplify conditional select (#43350)
743cff4a1a : Fix PackedGemmMatrixFP16 repacking (#43320)
e57b89c8dc : Adds arccos, arcsin, arctan aliases (#43319)
3aec1185e0 : Enables bfloat16 x [float16, complex64, complex128] type promotion (#43324)
478fb925e6 : [jit] PyTorchStreamReader::getAllRecord should omit archive name prefix (#43317)
0bd35de30e : Add Enum convert back to Python object support (#43121)
f4b6ef9c56 : Do not define the macro "isnan" (#43242)
7b520297dc : Remove erroneous trailing backslashes (#43318)
c2511bdfa4 : Implement first draft of autograd benchmark. (#40586)
0cb52cb458 : Autograd better error (#43308)
da036250cd : Add benchmark for performance comparison (#43221)
da70976e66 : [ONNX] Add support for operator `add` between tensor list (#41888)
c64594f5cc : Extends test_unary_ufunc.py with numerics, contiguity, domain tests (#42965)
e31cd46278 : Add alias torch.fix for torch.trunc to be compatible with NumPy. (#43326)
17f9edda42 : Bias Correction Implementation (#41845)
665da61d2b : Replace Conv1d with Conv2d (#42867)
e8139624f2 : Search on system path for Vulkan headers and libraries as a last resort. (#43301)
217ddea93a : [quant] Make OP_LIST_TO_FUSER_METHOD public (#43286)
844d469ae7 : Remove proprietary notices
9984d33542 : [quant][graphmode][fx] Add support for conv module (#43285)
7c50c2f79e : Reimplement per-operator selective build (#39401)
e32d014f46 : remove empty override pretty_print (#43341)
ad8294d35b : [vulkan][ci] Vulkan tests running on linux build via swiftshader (added to docker) (#42614)
5cf8592663 : Fix backward compatibility test (#43371)
9a1f2b3617 : .circleci: Use dynamic docker image for android (#43356)
e10aa47615 : Fix `at::native::view_as_real()` for ComplexHalf Tensors (#43279)
b0ec336477 : [quant][graphmode][fx][test] Add per op test for graph mode quant on fx (#43229)
2b7108a96f : Update hardcoded pytorch_android_gradle_custom_build_single hash (#43340)
97d594b9f7 : Make grad point to bucket buffer in DDP to save memory usage (#41954)
51bab0877d : Fix torch.hub for new zipfile format. (#42333)
dae2973fae : [quant][graphmode][fx] Add graph mode quantization on fx (#43175)
c89d2c6bf2 : Replace black_list with block_list (#42088)
a12fe1a242 : Minor RPC doc fixes (#43337)
5006d24302 : Make TensorPipe the default backend for RPC (#43246)
d0a6819b0e : [ROCm] skip test_rpc in .jenkins/pytorch/test.sh (#43305)
c66ca7a48d : vmap: Fix bug with x * 0.1 (#43218)
0dc41ff465 : [pytorch] add flag for autograd ops to mobile builds (#43154)
4fc9e958c4 : [quant] Add benchmarks for embedding_bag conversion ops (#43291)
c8bc298d6c : streamline stride propagation logic in TensorIterator (#42922)
ca9d4401d4 : .circleci: Remove manual docker installation (#43277)
66a79bf114 : .circleci: Don't quote glob for conda upload (#43297)
397325a109 : Make _compute_linear_combination.out a true out function (#43272)
f9a766bb39 : Increase deadline time for load_save tests (#43205)
a2ae2d3203 : Nightly Pull (#43294)
6a09df99e1 : Fix ASAN error in QNNPACK's integration of qlinear_dynamic. (#41967)
60b524f271 : Update torch.Tensor.is_set_to documentation (#43052)
4e964f3b97 : Make Windows CUDA-11 tests master only (#43234)
3eb31325fc : refactor torch/cuda/nccl.h to remove direct dependency on NCCL in libtorch_python (#42687)
6e1127ea3f : [NCCL] Changed FutureNCCL's then callback logic for better efficiency. (#42869)
97d62bcd19 : Modify Circle CI script to upload test report for analysis. (#43180)
0617156f0e : [vulkan] fix invalid memory op and tests (#43312)
aad1ff9f18 : [quant][cleanup] test_qlinear_legacy should be under TestDynamicQuantizedLinear. (#40084)
410d5b95b2 : [jit] fix str -> Device implicit conversions (#43213)
018b4d7abb : Automated submodule update: FBGEMM (#43251)
eb7fc2e98f : .circleci: Simplify binary upload process (#43159)
d467ac8ff0 : [GLOO] handle empty split size (#43256)
7d10298067 : Implement Tensor.to batching rule (#43206)
1e248caba8 : [CircleCI] Use `canary` images until VC++ 14.27 issue is resolved (#43220)
bc0e1e8ed2 : Add dataclasses to base Docker images. (#43217)
06d43dc69a : default ice-ref to c-step (#4812)
fa6b34b54c : 2 Bit Embedding Conversion Operator support. (#43077)
ab366d0f5f : Fix some mistakes in native_functions.yaml (#43156)
27ec91b0c9 : remove thunk fix now that ROCm CI images are >= ROCm 3.5 (#43226)
8094228f26 : update path in CI script to access ninja (#43236)
7c923a1025 : Optimize linux CI build/test matrix (#43240)
e41ca2d9fa : In copy_weights_to_flat_buf_views() explicitly construct tuple (#43244)
d06f1818ad : Fix `codegen/cuda` gcc-5.4 compilation issues (#43223)
d5bc2a8058 : Remove std::complex from c10::Half (#39833)
6c99d5611d : [tensorexpr] Fix promotion of booleans (#43097)
da5df7e2d2 : Remove use of term "blacklist" from tools/autograd/gen_python_functions.py (#42047)
3951457ca5 : [FX] Add in resnet + quantization tests (#43157)
dd194c1612 : add _save_parameters to serialize map (#43163)
2e6e295ecc : refactor _save_parameters to _save_data (#43162)
888ae1b3d8 : Introducing Matrix exponential (#40161)
dfdd797723 : Replace all AT_ASSERTM under ATen CUDA kernels. (#42989)
493b3c2c7c : Replace all AT_ASSERTM under ATen CPU kernels. (#41876)
0744dd6166 : Fix shapes in the MarginRankingLoss docs (#43131)
fbf274f5a7 : Autocast support for cudnn RNNs (#42385)
0a9c35aba3 : maybe minor fix to dispatch/backend_fallback_test.cpp? (#42990)
e39b43fd76 : Issue 43057 (#43063)
5d608d45cf : Added Encoder Layer constructor with default parameters (#43130)
53bbf5a48b : Update README.md (#43100)
ee74c2e5be : Compress fatbin to fit into 32bit indexing (#43074)
b92b556a12 : Add shape inference to SparseLengthsSumSparse ops (#43181)
b3bda94393 : [NVFuser] Enable E2E BCast-PWise-Reduction fusions (#43129)
c44b1de54e : Pin VC++ version to 14.26 (#43184)
e8db0425b5 : remove dot from TH (#43148)
aef2890a75 : Improve zero sized input for addmv (#41824)
3c5e3966f4 : [ONNX] Squeeze operator should give an error when trying to apply to a dimension with shape > 1 (#38476)
cd96dfd44b : Delete accidentally committed file errors.txt. (#43164)
57af1ec145 : observers: use torch.all to check for valid min and max values (#43151)
3264ba065c : observers: use clamp instead of min/max in calculate_qparams (#43150)
a5dfba0a6e : observers: make eps a buffer (#43149)
5aa61afbfb : quant bench: update observer configs (#42956)
1f6e6a1166 : Remove unused variable vecVecStartIdx (#42257)
133e9f96e1 : Use c10 threadpool for GPU to CPU distributed autograd continuations. (#42511)
825ec18eed : [jit] better error message (#43093)
864f0cfb2d : Fix type annotations for torch.sparse, enable in CI (#43108)
6db0b8785d : Adds movedim method, fixes movedim docs, fixes view doc links (#43122)
37252e8f00 : Implement batching rules for some unary ops (#43059)
768c2a8c25 : vmap: fixed to work with functools.partial (#43028)
9c3f579528 : .circleci: Copy LLVM from pre-built image (#43038)
7cb8d68ae1 : Rename XLAPreAutograd to AutogradXLA. (#43047)
034e6727e7 : Set default ATen threading backend to native if USE_OPENMP is false (#43067)
aab66602c4 : Add torch.dot for complex tensors (#42745)
472f291375 : Fix freeze_module pass for sharedtype (#42457)
269fdb5bb2 : prepare to split transformer header file (#43069)
248b6a30f4 : add training mode to mobile::Module (#42880)
e2eb0cb1a9 : Adds arccosh alias for acosh and adds an alias consistency test (#43107)
4ae832e106 : Optimize SiLU (Swish) op in PyTorch (#42976)
d4c5f561ec : Updates torch.clone documentation to be consistent with other functions (#43098)
5bcf9b017a : Implement hstack, vstack, dstack (#42799)
8864148823 : [jit] DeepAndWide benchmark (#43096)
91f3114fc1 : [JIT] Represent profiled types as a node attribute (#43035)
19902f6c0e : Document unavailable reduction ops with NCCL backend (#42822)
06aaf8c20d : Add set_device_map to TensorPipeOptions to support GPU args (#42637)
c84f78470b : Fix type annotations for a number of torch.utils submodules (#42711)
bcf54f9438 : Stop treating ASAN as special case (#43048)
0cf4a5bccb : Add GCC codecoverage flags (#43066)
91b090ceaf : Add polygamma where n >= 2 (#42499)
4011685a8b : [fx] split Node into Node/Proxy (#42991)
a1a6e1bc91 : Fix warning: dynamic initialization in unreachable code. (#43065)
66b3382c5b : [quant] Add torchbind support for embedding_bag packed weights (#42881)
7632a9b090 : [quant] Add embeddingbag_prepack function that works on quantized tensor. (#42762)
450315198a : Fix a casting warning (#42451)
3d8c144400 : Implemented torch::nn::Unflatten in libtorch (#42613)
33c5fe3c1d : Enable test_logit FakeLowP test. (#43073)
5014cf4a4d : Export MergeIdLists Caffe2 Operator to PyTorch
c8e789e06e : add fake fp16 fusions to net transforms (#42927)
1c6ace87d1 : Embed torch.nn typing annotations (#43044)
fcc10d75e1 : [JIT] Add property support to TorchScript classes (#42389)
64a7684219 : Enable typechecking of `collect_env.py` during CI (#43062)
1f6d0985d7 : fix searchsorted output type (#42933)
059aa34b12 : Clip Binomial results for different endpoints in curand_uniform (#42702)
71bbd5f1d4 : Add back Tensor.nonzero type annotation (#43053)
75dfa5a459 : Remove `itruediv` because it's already defined in torch/tensor.py (#42962)
1c616c5ab7 : Add complex tensor dtypes for the __cuda_array_interface__ spec (#42918)
c3fb152274 : Test the type promotion between every two dtypes thoroughly (#42585)
ff6a2b0b7a : Add inplace option for torch.nn.Hardsigmoid and torch.nn.Hardswish layers (#42346)
2f9fd8ad29 : Build test_e2e_tensorpipe only if Gloo is enabled (#43041)
31788ae151 : Trim trailing whitespace
a2b86d95d1 : Make Mish support large inputs. (#43037)
c7d2774d20 : Fix typo in collect_env.py (#43050)
d60d6d0d7b : Automated submodule update: FBGEMM (#42834)
ed242cbec5 : Guard TensorPipe agent by USE_TENSORPIPE (#42682)
ccd9f3244b : Get, save, and load module information for each operator (#42133)
e182ec97b3 : Fix illegal memory access issue for CUDA version of SplitByLengths operator.
b8102b1550 : Implement torch.nextafter (#42580)
e4373083a2 : torch.complex and torch.polar (#39617)
b9a105bcc0 : [TensorExpr] Cleanup logic in the TensorExpr fuser pass. (#42938)
fc304bec9f : [TensorExpr] Remove redundant checks from canHandle in TE fuser. (#42937)
48c183af3d : [TensorExpr] Wrap fuser in a class. (#42936)
02c8ad70f2 : Reconstruct scopes (#41615)
3dc845319f : Add more verbose error message about PackedSequence lengths argument (#42891)
b992a927a9 : Clearer Semantics and Naming for Customized Quantization Range Initialization in Observer (#42602)
a55b7e2a6d : [reland][quant][fix] Remove activation_post_process in qat modules (#42343) (#43015)
8cf01c5c35 : Back out "change pt_defs.bzl to python file"
830423b80b : Python/C++ API Parity: TransformerDecoderLayer (#42717)
85752b989d : [quant][doc] Print more info for fake quantize module (#43031)
523b2ce9c6 : [jit][static runtime] Simplify the graph and add operator whitelist (#43024)
89b0b3bc8c : Allow RPC to be initialized again after shutdown. (#42723)
21823aa680 : Nightly checkout tool (#42635)
a6b69fdd33 : Add DDP+RPC tutorial to RPC docs page. (#42828)
3544f60f76 : make deadline=None for all numerics tests (#43014)
8b5642a786 : Fix to Learnable Fake Quantization Op Benchmarking (#43018)
6753157c5a : Enable torch.utils typechecks (#42960)
eb47940c0a : Add executor and fuser options to the fastrnn test fixture (#42946)
fd5ed4b6d6 : Update ort-nightly version to dev202008122 (#43019)
816d37b1d8 : [quant] Make PerChannel Observer work with float qparams (#42690)
6f8446840e : [quant] Create PerRowQuantizer for floating point scale and zero_point (#42612)
0ff51accd8 : collect_env.py: Print CPU architecture after Linux OS name (#42961)
ebc7ebc74e : Do not ignore `torch/__init__.pyi` (#42958)
6fb5ce5569 : [NNC] Fix some bugs in Round+Mod simplification (#42934)
f03f9ad621 : update clone doc (#42931)
ba9025bc1a : [tensorexpr] Autograd for testing (#42548)
607e49cc83 : Revert D22856816: [quant][fix] Remove activation_post_process in qat modules
8493b0d5d6 : Enroll TensorPipe agent in C++-only E2E test (#42680)
c88d3a5e76 : Remove Python dependency from TensorPipe RPC agent (#42678)
d39cb84f1f : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
c9dcc833bc : [quant][pyper] Make offsets an optional parameter in the qembedding_bag op (#42924)
8cb42fce17 : [quant][fix] Remove activation_post_process in qat modules (#42343)
7a7424bf91 : Remove impl_unboxedOnlyKernel (#42841)
20e0e54dbe : Allow Tensor& in the unboxing logic (#42712)
5d2e9b6ed9 : Add missing type annotation for Tensor.ndim (#42909)
b8ae563ce6 : Add a microbenchmark for LSTM elementwise portion (#42901)
33d209b5f4 : Fix TE microbenchmark harness to use appropriate fuser/executor (#42900)
1adeed2720 : Speed up CUDA kernel launch when block/thread extents are statically known (#42899)
f373cda021 : Revert D22994446: [pytorch][PR] CUDA reduction: allow outputs to have different strides
86841f5f61 : Update cuda init docstring to improve clarity (#42923)
0134deda0f : [FX] Add interface to reject nodes (#42865)
92885ebe16 : Implement hypot (#42291)
62bd2ddec7 : Implemented non-named version of unflatten (#42563)
7f3f5020e6 : CUDA reduction: allow outputs to have different strides (#42649)
ada8404f2d : [jit] Scaffold a static runtime (#42753)
59f8692350 : [pytorch] BUCK build for Vulkan backend
ea65a56854 : Use `string(APPEND FOO " bar")` instead of `set(FOO "${FOO} bar") (#42844)
3d3752d716 : Revert D22898051: [pytorch][PR] Fix freeze_module pass for sharedtype
bda0007620 : Improve calling backward() and grad() inside vmap error messages (#42876)
5c39146c34 : Fix get_writable_path (#42895)
5157afcf59 : fix int8 FC (#42691)
686705c98b : Optimize LayerNorm performance on CPU both forward and backward (#35750)
75a15d3d01 : Follow-up for pytorch/pytorch#37091. (#42806)
2878efb35d : Use `C10_API_ENUM` to fix invalid attribute warnings (#42464)
2f1baf6c25 : Fix coding style and safety issues in CuBLAS nondeterministic unit test (#42627)
77bd4d3426 : MAINT: speed up istft by using col2im (the original python code used … (#42826)
4665f3fc8d : Fix freeze_module pass for sharedtype (#42457)
ecb9e790ed : Remove excessive logging in plan_executor (#42888)
a346e90c49 : Update to NNP-I v1.0.0.5 (#4770)
ab0a04dc9c : Add `torch.nansum` (#38628)
38c7b9a168 : avoid redundant isCustomClassRegistered() checks (#42852)
bee174dc3f : Adds linalg.det alias, fixes outer alias, updates alias testing (#42802)
cd756ee3d4 : Support boolean key in dictionary (#42833)
ac93d45906 : [quant] Attach qconfig to all modules (#42576)
e845b0ab51 : [Resending] [ONNX] Add eliminate_unused_items pass (#42743)
a846ed5ce7 : [quant] Reduce number of variants of add/mul (#42769)
5edd9aa95a : Fix manual seed to unpack unsigned long (#42206)
b0b8340065 : Collect more data in collect_env (#42887)
7a9ae52550 : [hypothesis] Deadline followup (#42842)
eeb43ffab9 : format for readability (#42851)
3bf2978497 : remove deadline enforcement for hypothesis (#42871)
0ff0fea42b : [FX] fix lint (#42866)
43613b4236 : Fix incorrect aten::sorted.str return type (#42853)
71dbfc79b3 : Export BatchBucketOneHot Caffe2 Operator to PyTorch
4afbf39737 : Add nn.functional.adaptive_avg_pool size empty tests (#42857)
9c8f5cb61d : Ensure IDEEP transpose operator works correctly
c660d2a9ae : Initial quantile operator implementation (#42755)
6471b5dc66 : Correct the type of some floating point literals in calc_digamma (#42846)
4bafca1a69 : Adds list of operator-related information for testing (#41662)
aabdef51f9 : [NNC] Registerizer for GPU [1/x] (#42606)
57b056b5f2 : align qlinear benchmark to linear benchmark (#42767)
a7bdf575cb : align qconv benchmark to conv benchmark (#42761)
2c8cbd78bd : Fix orgqr input size conditions (#42825)
575e7497f6 : Introduce experimental FX library (#42741)
7524699d58 : Modify clang code coverage to CMakeList.txt (for MacOS) (#42837)
42114a0154 : Update the documentation for scatter to include streams parameter. (#42814)
1041bdebb0 : Fix a typo in EmbeddingBag.cu (#42742)
916235284c : [JIT] Fix typing.Final for python 3.8 (#39568)
d28639a080 : Optimization with Backward Implementation of Learnable Fake Quantize Per Channel Kernel (CPU and GPU) (#42810)
42b4a7132e : Raise error if `at::native::embedding` is given 0-D weight (#42550)
d396d135db : Added torch::cuda::manual_seed(_all) to mirror torch.cuda.manual_seed(_all) (#42638)
e8f4b04d9a : vmap: temporarily disable support for random functions (#42617)
ffc3da35f4 : Don't materialize output grads (#41821)
ddcf3ded3e : Revert D23002043: add net transforms for fusion
59b10f7929 : [quant] Sorting the list of dispatches (#42758)
dedcc30c84 : Fix ROCm CI by increasing test timeout (#42827)
a4b763bc2c : add net transforms for fusion (#42763)
103887892c : Fix "non-negative integer" error messages (#42734)
c14a7f6808 : adaptive_avg_pool[23]d: check output_size.size() (#42831)
c9e825640a : [c10d] Template computeLengthsAndOffsets() (#42706)
a414bd69de : Skip test_c10d.ProcessGroupNCCLTest under TSAN (#42750)
a2559652ab : Rename some BatchedTensorImpl APIs (#42700)
8f67c7a624 : BatchedTensor fallback: extended to support ops with multiple Tensor returns (#42628)
64a7939ee5 : test_cpp_rpc: Build test_e2e_process_group.cpp only if USE_GLOO is true (#42836)
8718524571 : [vulkan] cat op (concatenate) (#41434)
3cf2551f2f : Fix `torch.nn.functional.grid_sample` crashes if `grid` has NaNs (#42703)
e06b4be5ae : change pt_defs.bzl to python file (#42725)
752f433a24 : DDP communication hook: skip dividing grads by world_size if hook registered. (#42400)
d7aaa3327b : .circleci: Only do comparisons when available (#42816)
d83cc92948 : [ONNX] Add support for scalar src in torch.scatter ONNX export. (#42765)
e7b5a23607 : include missing settings import
77305c1e44 : Automated submodule update: FBGEMM (#42781)
e5adf45dde : Add python unittest target to `caffe2/test/TARGETS` (#42766)
bc779667d6 : generalize circleci docker build.sh and add centos support (#41255)
05f00532f5 : Fix TensorPipe submodule (#42789)
55ac240589 : [ONNX] Fix scalar type cast for comparison ops (#37787)
162972e980 : Fix op benchmark (#42757)
87970b70a7 : Adds 'clip' alias for clamp (#42770)
b6810c1064 : Include/ExcludeDispatchKeySetGuard API (#42658)
79b8328aaf : optimize_for_mobile: bring packed params to root module (#42740)
d8801f590c : fix asan failure for module freezing in conv bn folding (#42739)
5cd0f5e8ec : [PyFI] Update hypothesis and switch from tp2 (#41645)
b7a9bc0802 : Revert D22217029: Add fake quantize operator that works in backward pass
18ca999e1a : integrate int8 swish with net transformer
c889de7e25 : update DispatchKey::toString() (#42619)
5dd230d6a2 : [vulkan] inplace add_, relu_ (#41380)
6755e49cad : Set proper return type (#42454)
e95fbaaba3 : Adding Peter's Swish Op ULP analysis. (#42573)
0a804be47d : [NCCL] DDP communication hook: getFuture() without cudaStreamAddCallback (#42335)
d4a4c62df3 : [caffe2] Fix the timeout (stuck) issues of dedup SparseAdagrad C2 kernel
3fa0581cf2 : [fbgemm] use new more general depthwise 3d conv interface (#42697)
13bc542829 : Fix lite trainer unit test submodule registration (#42714)
48e978ba18 : Add fake quantize operator that works in backward pass (#40532)
2b04712205 : Exposing Percentile Caffe2 Operator in PyTorch
55b1706775 : Skips some complex tests on ROCm (#42759)
95f4f67552 : Restrict conversion to SmallVector (#42694)
faca3c43e6 : fix celu in quantized benchmark (#42756)
4eb66b814e : Automated submodule update: FBGEMM (#42713)
02f58bdbd7 : [caffe2] add type annotations for caffe2.distributed.python
6ebc0504ca : BAND, BOR and BXOR for NCCL (all_)reduce should throw runtime errors (#42669)
7332c21f7a : Speed up HistogramObserver by vectorizing critical path (#41041)
98de150381 : C++ API TransformerEncoderLayer (#42633)
eba35025e0 : [JIT] Exclude staticmethods from TS class compilation (#42611)
9f88bcb5a2 : Minor typo fix (#42731)
04c62d4a06 : [vulkan] Fix warnings: static_cast, remove unused (#42195)
586399c03f : Remove duplicate definitions of CppTypeToScalarType (#42640)
944ac133d0 : [NNC] Remove VarBinding and go back to Let stmts (#42634)
2971bc23a6 : Handle fused scale and bias in fake fp16 layernorm
dcee8933fb : Fix some linking rules to allow path with whitespaces (#42718)
9c8021c0b1 : Adds torch.linalg namespace (#42664)
c9346ad3b8 : [CPU] Added torch.bmm for complex tensors (#42383)
31ed468905 : Fix cmake warning (#42707)
3c66a3795a : [vulkan] Ops registration to TORCH_LIBRARY_IMPL (#42194)
4eb02add51 : Blacklist to Blocklist in onnxifi_transformer (#42590)
fb8aa0046c : Add use_glow_aot, and include ONNX again as a backend for onnxifiGlow (#4787)
73642d9425 : Updates alias pattern (and torch.absolute to use it) (#42586)
cb1ac94069 : [blob reorder] Separate user embeddings and ad embeddings in large model loading script
9597af01ca : Support iterating through an Enum class (#42661)
952526804c : Print TE CUDA kernel (#42692)
a6c8730045 : [ONNX] Add preprocess pass for onnx export (#41832)
9152f2f73a : Optimization of Backward Implementation for Learnable Fake Quantize Per Tensor Kernels (CPU and GPU) (#42384)
4959981cff : [ONNX] Export tensor (#41872)
40ac95dd3c : [ONNX] Update ONNX export of torch.where to support ByteTensor as input. (#42264)
f9a6c14364 : Fix sequence numbers in profiler output (#42565)
dab9bbfce7 : Move jit_profiling tests into test1 on Windows (#42650)
33519e19ab : Fix 64-bit indexing in GridSampler (#41923)
eaace3e10e : Skip CUDA benchmarks on nogpu configs (#42704)
6cb0807f88 : Fixes ROCm CI (#42701)
cc596ac3a8 : [JIT] Add debug dumps in between passes in graph executor. (#42688)
cdd7db1ffc : Bound shape inferencer: fix int8fc scale and bias
b44a10c179 : List[index]::toOptionalStringRef (#42263)
f22aa601ce : All Gather and gather APIs for Python Objects (#42189)
1f689b6ef9 : suppress all Autograd keys in AutoNonVariableTypeMode (#42610)
85a00c4c92 : Skips spectral tests to prevent ROCm build from timing out (#42667)
40b6dacb50 : Delete dead is_named_tensor_only (#42672)
5ca08b8891 : Add benchmark for calculate_qparams (#42138)
79de9c028a : Remove VS2017 workaround for autocasting (#42352)
e28a98a904 : Turn on non ASCII string literals serialization (#40719)
57854e7f08 : [JIT] Clone runOptimizations and similar functions for profiling executor. (#42656)
a4dbc64800 : Add documentation for PYTORCH_JIT_TYPE_VERBOSITY (#42241)
65066d779b : Add fastrnns benchmark to CI and upload data to scribe (#42030)
a5af2434fe : NVMified NE Eval
049c1b97be : pin numpy version to 1.18.5 (#42670)
bcab2d6848 : And type annotations for cpp_extension, utils.data, signal_handling (#42647)
608f99e4ea : Fix cudnn version on build_environment of Windows CI (#42615)
576aab5084 : Bump up NCCL to 2.7.6 (#42645)
0642d17efc : Enable C++ RPC tests (#42636)
c30bc6d4d7 : Update TensorPipe submodule (#42522)
bd458b7d02 : Don't reference TensorPipe headers in our headers (#42521)
a53fdaa23f : Remove ProfiledType (#42570)
ccfce9d4a9 : Adds fft namespace (#41911)
644d787cd8 : find rccl properly (#42072)
23607441c2 : Create CuBLAS PointerModeGuard (#42639)
eb9ae7c038 : Implement `gpu_kernel_multiple_outputs` (#37969)
1848b43c4d : [NNC] Add loop unroll transformation (#42465)
3d46e02ea1 : Add __torch_function__ for methods (#37091)
92b7347fd7 : Enforce counter value to double type in rowwise_counter
c14fbc36ed : Update docs about CUDA stream priority (#41364)
ddb8849ffc : Fix method stub used for fixing mypy issue to work with pylint (#42356)
04d7e1679d : [quant] Quantized Average Pool Refactoring (#42009)
9add11ffc1 : Fix IS_SPMM_AVAILABLE macro definition (#42643)
509fb77b70 : Adjust bound_shape_inferencer to take 4 inputs for FCs (#41934)
9ea9d1b52e : [fbs][2/n] Remove .python3 markers
5d7c3f92b9 : Issue warning instead of error when parsing Enum while enum support is not enabled (#42623)
50f0d2b97d : quant: add q_batchnorm_1d op (#42491)
54ffb05eff : better error message between C2 and glow (#41603)
aa4e91a6dc : Fix `TestSparse.test_bmm_windows_error` when CUDA is not available (#42626)
5023995292 : fix output size adjustment for onnxifi_op
102abb877c : Reland D22939119: "[TensorExpr] Fix a way we were createing np arrays in tests." (#42608)
2501e2b12d : [RPC tests] Run DdpUnderDistAutogradTest and DdpComparisonTest with fork too (#42528)
4da602b004 : [RPC tests] Generate test classes automatically (#42527)
d7516ccfac : [RPC tests] Enroll TensorPipe in missing test suites (#40823)
2e7b464c43 : [RPC tests] Remove global TEST_CONFIG (#40822)
e7c7eaab82 : [RPC tests] Move some functions to methods of fixture (#40821)
2acef69ce3 : [RPC tests] Make generic fixture an abstract base class (#40820)
a94039fce5 : [RPC tests] Avoid decorators to skip tests (#40819)
935fcc9580 : [RPC tests] Merge process group tests into single entry point (#40818)
b93c7c54eb : [RPC tests] Merge tests for faulty agent into single script (#40817)
edf6c4bc4d : [RPC tests] Merge TensorPipe tests into single entry point (#40816)
73351ee91d : [TensorExpr] Disallow fallback to JIT interpreter from TensorExprKernel (flip the default). (#42568)
ef50694d44 : [TensorExpr] Apply GenericIntrinsicExpander recursively. (#42567)
ea9053b86d : [TensorExpr] Handle constant nodes in shape inference. (#42566)
b9c49f0e69 : [TensorExpr] Support shape inference in TE for aten::cat. (#42387)
feeb515ad5 : add Quantizer support to IValue (#42438)
24e2a8a171 : Revert D22780307: Fix illegal memory access issue for CUDA version of SplitByLengths operator.
df7c059428 : Throw error if `torch.set_deterministic(True)` is called with nondeterministic CuBLAS config (#41377)
7221a3d1aa : enable torch.optim.swa_utils.SWALR (#42574)
18a32b807b : Add API to collect output_col_minmax_histogram
7c33225c72 : Add strict mypy type checking and update code_template.py (#42322)
5c5d7a9dca : Freeze dynamic (re)quantization ops into standard ones (#42591)
6d1e43c5a6 : Release the GIL before invokeOperator (#42341)
76905527fe : Fix illegal memory access issue for CUDA version of SplitByLengths operator.
06d978a9ad : [c10/cuda] Reorganize device_count() and robustly surface ASAN warnings (#42249)
27e8dc78ca : [vulkan] VulkanTensor lazy buffer allocation (#42569)
dae94ed022 : Keep manual_kernel_registration only effective in aten codegen. (#42386)
b08347fd7b : Add CUDA 11 builds for Windows CI (#42420)
db52cd7322 : .circleci: Hardcode rocm image to previous tag (#42603)
eb8a5fed38 : Automated submodule update: FBGEMM (#42584)
924a1dbe9b : Revert D22939119: [TensorExpr] Fix a way we were creating np arrays in tests.
0cf71eb547 : Unconditionally use typing extensions in jit_internal (#42538)
b85216887b : [vulkan] max_pool2d (#41379)
0f358fab6b : Hide cudnn symbols in libtorch_cuda.so when statically linking cudnn (#41986)
882ad117cf : [TensorExpr] Fix a way we were creating np arrays in tests. (#42575)
3c7fccc1c2 : Reenable cusparse SpMM on cuda 10.2 (#42556)
78f4cff8fe : handle multiple returns properly in boxing wrappers (#42437)
d45e2d3ef9 : Reduce the output overhead of OutputColumnMaxHistogramObserver by enabling changing bin_nums, Update the observer_test.py
61027a1a59 : Install typing_extensions in PyTorch CI (#42551)
29700c0092 : [JIT] Fix torch.jit.is_tracing() (#42486)
afa489dea9 : [ONNX] Enable lower_tuple pass for custom layer (#41548)
ccc831ae35 : test: Disable test_strided_grad_layout on ROCM (#42561)
c3e2ee725f : Automated submodule update: FBGEMM (#42496)
b9e68e03c4 : Fix the bug in THCTensor_(baddbmm) and ATen's addmm_cuda for strided views input (#42425)
317b9d3bfc : Implement sort for string in aten (#42398)
56fc7d0345 : Fix doc build (#42559)
e995c3d21e : Add private API to support tensor lists: _foreach_add(TensorList tensors, Scalar scalar) (#41554)
a0695b34cd : .circleci: Have python docs always push to site (#42552)
91d87292a6 : [vulkan][asan] Fix Invalid Memory ops (#41224)
0d1a689764 : [vulkan] reshape op (#41223)
e97e87368e : Clean up CUDA Sleep and Tensor Initialization in ProcessGroupNCCLTest (#42211)
3ca361791f : TearDown function for ProcessGroupNCCLTest Initializer (#42209)
2b8e7e2f2d : Moving ProcessGroupNCCLTest to Gtest (#42208)
b3ffebda7a : [TensorExpr] Properly handle all dtypes of the condition in evaluation of IfThenElse exprs. (#42495)
c334ebf1aa : [TensorExpr] Properly handle all dtypes in evaluation of Intrinsics exprs. (#42494)
38a9984451 : [TensorExpr] Properly handle all dtypes in evaluation of CompareSelect exprs. (#42493)
5939d8a3e0 : Revert "Revert D22360735: .circleci: Build docker images as part of C… (#40950)
4b42a5b5a1 : Remove redundant kernels calling TypeDefault in VariableType codegen. (#42031)
94e8676a70 : Initialize uninitialized variable (#42419)
d2a2ac4eea : Fix read/write bulk data (#42504)
ec898b1ab5 : fix discontiguous inputs/outputs for cummin/cummax (#42507)
ecb88c5d11 : Add NCCL Alltoall to PT NCCL process group (#42514)
b56db305cf : Improve the documentation of DistributedDataParallel (#42471)
f3e8fff0d2 : Batching rules for: chunk, split, unbind (#42480)
f1d7f001b9 : Batching rules for: torch.movedim, torch.narrow, Tensor.unfold (#42474)
01cd613e7e : Batching rules for: T, view, view_as, reshape, reshape_as (#42458)
0c48aa1e07 : Add typing annotations to hub.py and _jit_internal.py (#42252)
d21e345ef0 : Fix segfault in `THPGenerator_dealloc` (take 2) (#42510)
8850fd1952 : Add python inferface to create OfflineTensor (#42516)
ae67f4c8b8 : Revert D22845258: [pytorch][PR] [ONNX] Enable scripting tests and update jit passes
842759591d : [ONNX] Refactor ONNX fixup for Loop and If (#40943)
55d2a732cd : Skip part of test_figure[_list] if Matplotlib-3.3.0 is installed (#42500)
49e06e305f : [ONNX] Updating input node removal in ONNX function_substitution pass. (#42146)
0cb86afd72 : Revert D22908795: [pytorch][PR] Fix segfault in `THPGenerator_dealloc`
dc1f87c254 : Add typing_extensions as a dependency. (#42431)
c8cb5e5bcb : Relax cusparse windows guard on cuda 11 (#42412)
24199e0768 : tuple_map / tuple_concat (#42326)
1b18adb7e8 : [ONNX] Export static as_strided (#41569)
04e55d69f9 : [ONNX] Enable scripting tests and update jit passes (#41413)
c000b890a8 : [ONNX] Export torch.eye to ONNX::EyeLike (#41357)
fb56299d4a : Fix check highlight in filecheck. (#42417)
7a5708832f : fix masked_select for discontiguous outputs (#41841)
d707d4bf6d : Implement a light SGD optimizer (#42137)
934b68f866 : ecr_gc: Iterate through all tags, reduce prints (#42492)
d3acfe3ba8 : Fix segfault in `THPGenerator_dealloc` (#42490)
dbdd28207c : Expose a generic shape info struct for ONNXIFI Python interface (#42421)
f0fd1cc873 : Calculate inverse of output scale first. (#41342)
c3236b6649 : [quant] Expose register activation post process hook function to user (#42342)
1b9cd747cf : Revert "Conda build (#38796)" (#42472)
0eb513beef : Set a proper type for a variable (#42453)
34025eb826 : Vectorize arange (#38697)
fa6e900e8c : Let TensorIterator::nullary_op support check_mem_overlap option (#38693)
ed44269edc : Add missing space after -> for topk.values (#42321)
326d777e53 : Convert _wait_all_workers to _all_gather (#42276)
ebde590864 : Remove debug vestige (#42277)
4cdbe5c495 : Implement batching rules for some view ops (#42248)
2f8d5b68fa : vmap fallback kernel (#41943)
192487d716 : Update MAGMA to 2.5.3 for Windows (#42410)
ebfff31e19 : [distributedhogwild] Introducing new tags for distributed hogwild. (#42381)
bfa94487b9 : Remove register_mobile_autograd.cpp. (#42397)
91c80d122a : torch.gcd: Do not use std::abs() because it does not have an unsigned integer overload (#42254)
4cbf18ccc3 : Enables integer -> float type promotion in TensorIterator (#42359)
d403983695 : Support List[str].index (#39210) (#40348)
bdcf320bed : Support custom exception message (#41907)
5769b06ab5 : [Caffe2] Remove explicitly divide by zero in SpatialBN training mode (#42380)
115d226498 : Pin NumPy version on MacOS testers to 1.18.5 (#42409)
2912390662 : Limits cpu scalar error message to where it's appropriate (#42360)
206db5c127 : Improve `torch.norm` functionality, errors, and tests (#41956)
44b018ddeb : Convert ProcessGroupNCCLTest.cpp to gtest unittest (#42365)
f47e00bdc3 : [NNC] Bounds Inference: make inferred bounds respect gaps (#42185)
dcc4d11ffa : [TensorExpr] Make tensorOrConstant non-templatized function. (#42202)
2decccea2e : [TensorExpr] Implement shape inference for TE. (#41451)
f41bb1f92b : [TensorExpr] Explicitly cast to bool results of comparison ops in kernel.cpp. (#42201)
f8c5800bb5 : [TensorExpr] Add debug dumps to kernel.cpp. (#42196)
655f376460 : Implement Enum sugared value and Enum constant support (#42085)
ff91b169c7 : Changes to match Fused Op: Dequantize->Swish->Quantize (#42255)
1542c41a67 : Change C++ frontend to take optional<Tensor> arguments (#41947)
3a19af2427 : Make operators with optional Tensor? arguments c10-full (#41610)
f502290e91 : [JIT] Make create autodiff subgraphs do in place updates to aliasDb (#42141)
2285a2fc11 : refactor canonical ordering to also be able to do isAfter checks (#42140)
4fc525e729 : [Dper3] Implementation of squeezed input to DC++
a01e91e6b2 : [pytorch] include all overloads for OSS custom build
38bf5be24f : [quant] Use PlaceholderObserver instead of Fp16Observer and NoopObserver (#42348)
6bd46b583e : [quant][graph] Add support for FP16 dynamic quant (#42222)
8c5bf10264 : [quant] Add FP16Observer for fp16 quant support (#42221)
a9eebaf693 : [quant] Add saturate_to_fp16 op for FP16 quant support (#42147)
bdd9ef1981 : Support RowWiseSparseAdam on GPU (#35404)
a9e7e787f8 : [jit] make clone work for interface type (#42121)
352e15f1a2 : Revert D22812445: Update TensorPipe submodule
832b1659e7 : Fix missing attribute when loading model from older version (#42242) (#42290)
4c6878c97d : [gloo] change ProcessGroupGlooAsyncTest to use gtest (#42313)
0adb584376 : Make resize_ use normal device dispatch (#42240)
2f840b1662 : Warns when TensorIterator would resize its output (#42079)
e54f268a7a : Enables torch.full bool and integer type inference (#41912)
31d41f987a : torch.where : Scalar Support (#40336)
1c8217a7a6 : Abstract cuda calls made from `torch_python` (#42251)
fbb052c2cc : BlackList to BlockList (#42279)
27c22b9b3c : Modify function to take dtype as argument
b5fcd89479 : Add tests to `sigmoid_backward` and `fmod` (#42289)
7d6c4f62ef : Remove 4 unused variables in lp_pool_op.cc (#42329)
153673c33b : fix quantized elu benchmark (#42318)
5ff54ff4ff : import freeze (#42319)
344defc973 : Let bfloat16 support promotion with other types (#41698)
c489bbe122 : Add typing support to torch._six (#42232)
26d58503c2 : Implementing NumPy-like function torch.signbit() (#41589)
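The `torch.signbit` entry above follows NumPy semantics, where the sign bit is reported even for negative zero. A minimal stdlib sketch of those semantics (pure Python, not the PyTorch implementation):

```python
import math

def signbit(x: float) -> bool:
    # True when the IEEE-754 sign bit is set, including for -0.0
    # (a plain `x < 0` comparison would miss negative zero).
    return math.copysign(1.0, x) < 0

print(signbit(-3.0), signbit(0.0), signbit(-0.0))
```

Note that `signbit(-0.0)` is `True`, which is exactly the case a naive `x < 0` check gets wrong.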
c35faae10d : [pytorch][ci] install nightly instead of stable libtorch for mobile CIs (#42220)
ce546328a3 : Const-correctness, variable initialization, and error checking. (#42124)
d0ed1e303f : Add missing header guards. (#42272)
ee2150370e : Add Vulkan Test to ATen Mobile Tests. (#42123)
7cd92aaa6b : Disable validation layers in non-debug builds. (#42122)
8e3d1908b6 : Fix minor typo in comment (#42184)
86b2faeb53 : Automated submodule update: FBGEMM (#42302)
f15af2fe4f : Remove unused variable "schema" (#42245)
547bbdac86 : Add MSFT Owners to the Windows Maintainership (#42280)
269ec767ca : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
2335430086 : Update TensorPipe submodule (#42225)
4f163df41a : [caffe2] Special handling of If/AsyncIf op in RemoveOpsByType (#42286)
f30ac66e79 : [caffe2] Fix a performance bug in Dedup SparseAdagrad op (#42287)
0444bac940 : Add test to cross function
9ea7476d9c : Add test to lerp function (#42266)
7459da268e : Add typing annotations to torch.random (#42234)
872237c1f2 : Output to stderr in distributed tests. (#42139)
fe4f19e164 : [CUDA] max_pool2d NCHW performance improvement (#42182)
c18223f9ef : add Dimname support to IValue (#42054)
6c251f74b2 : replace black_list/blacklist with blocklist/block_list (#42089)
27b03d62de : [HT] Clear the device placement tag for the auto gen sum so that we could break the component for FC sharing the same input (#42219)
7cdf786a07 : fix typo in GradScaler docstring (#42236)
79cfd85987 : grad detach_ only when it has grad_fn in zero_grad call (#41283)
4b6e5f42a4 : Creates spectral ops test suite (#42157)
029007c8b6 : Improved coverage for unboxed->boxed kernel wrappers (#38999)
60f51542dc : [Caffe2] Fix spatial_bn bug for computing running_var on CPU or on CUDA without CuDNN (#42151)
91546a4b0f : Environment variable for controlling type verbosity in debug output (#41906)
01b794f169 : Operator-level Benchmark Test for Per Tensor and Per Channel Fake Quantization (#41974)
48acdfd505 : add tests to BinaryOpsKernel -- max/min kernel (#42198)
382781221d : Extending Learnable Fake Quantize module to support gradient scaling and factory (partial) construction (#41969)
0a64f99162 : [JIT] Dont include view ops in autodiff graphs (#42027)
b45b82b006 : Fix type annotation for DistributedDataParallel (#42231)
c8e15842aa : Automated submodule update: FBGEMM (#42205)
460970483d : Revert D22790718: [pytorch][PR] Enables torch.full bool and integer type inference
90074bbfa6 : implement numpy-like functionality isposinf, isneginf (#41588)
1c5c289b62 : [pt] Add incude_last_offset option to EmbeddingBag mean and max (#42215)
6b3f335641 : Enables torch.full bool and integer type inference (#41912)
8c653e05ff : DOC: fail to build if there are warnings (#41335)
4b108ca763 : refactor save_data as non member function (#42045)
8fc5adc88e : Remove dead named_tensors_unsupported_error definitions. (#42171)
8deb4fe809 : Fix flaky NCCL error handling tests. (#42149)
b6a9f42758 : Add appropriate error messages for ProcessGroupNCCLTest (#42143)
e4c3f526c8 : Fixed Skipping Logic in ProcessGroupNCCLErrors tests (#42192)
b2ef7fa359 : Add a flag to enforce fp32 to fp16 conversion for all inputs of the onnxifi net. (#39931)
8a644f0c13 : [Shape Inference] Fix InferFC
30eacb5fb6 : [quant][graphmode] Support stack (#42187)
deac621ae2 : Stop building PyTorch for VS2017 (#42144)
3c084fd358 : Dequant => Swish => Quant Test case. (#41976)
e2344db886 : Use Python3.7 when running OSX builds/tests (#42191)
4c7fb8c2b6 : make FusionCallback refer to specified GraphFuser context (#41560)
8ddd2c4e1b : [pytorch] fix code analyzer for LLVM 9 & 10 (#42135)
fd9205e14b : Enable caffe2 tests for RocM jobs (#41604)
4d17ecb071 : Changed Blacklisted to Blocklisted (#42100)
030ab2bda5 : Replaced whitelist reference with allowlist (#42071)
64965c4572 : Replaced blacklist with blocklist (#42097)
5ed7cd0025 : Allow drop_last option in DistributedSampler (#41171)
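With the `drop_last` option above, `DistributedSampler` can drop the dataset tail so every replica sees the same number of samples without padding. A hedged sketch of the index-partitioning arithmetic (illustrative only, not the actual implementation; `per_replica_indices` is a made-up helper name):

```python
def per_replica_indices(dataset_len, num_replicas, rank, drop_last):
    if drop_last:
        # Drop the tail so the length divides evenly across replicas.
        total = (dataset_len // num_replicas) * num_replicas
    else:
        # Pad (by wrapping around) up to the next multiple of num_replicas.
        total = -(-dataset_len // num_replicas) * num_replicas
    indices = [i % dataset_len for i in range(total)]
    return indices[rank:total:num_replicas]

print(per_replica_indices(10, 3, 0, True))   # 3 samples, tail dropped
print(per_replica_indices(10, 3, 0, False))  # 4 samples, wrapped padding
```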
48ae5945de : Skip TestExtractPredictorNet if compiled without OpenCV (#42168)
f666be7bc1 : [vulkan] support add for dim < 4 (#41222)
b3a9e21a29 : [vulkan] mm op through addmm (#41221)
b0424a895c : Raise RuntimeError for zero stride pooling (#41819)
5aa2b572ff : replace black list with block (#42091)
2f61aca17b : Skip DataIO tests relying on LevelDB if compiled without it (#42169)
73ff252913 : Back out "[NCCL] DDP communication hook: getFuture()" (#42152)
2de549518e : Make fmod work with zero divisors consistently (#41948)
e7ed0b3fae : Avoid zero division in _cubic_interpolate (#42093)
f0c46878c6 : Fix the GPU skip message issue (#41378) (#41973)
3acd6b7359 : Document formatting (#42065)
14e75fbdb9 : Remove py2 specific code from test_utils.py (#42105)
86492410bc : Don't run tests with custom arguments with pytest (#41397)
672ed3c06b : replace onnx producer_version when updating results (#41910)
b282297559 : Replace whitelist with allowlist (#42067)
1a8269a566 : Replace blacklist with blocklist in test/run_test.py file. (#42011)
e179966248 : [caffe2][tpx] log to stderr (#42162)
0571cfd875 : Implement `MultiBatchVmapTransform::logicalToPhysical(TensorList)` (#41942)
1994ab1473 : Optimize alignBatchDimsAtFront (#41941)
5124436af4 : Fix const correctness for VmapPhysicalView struct methods (#41940)
2bc7dae2fc : Use new sccache for RocM builds (#42134)
6bd88f581a : Revert D22790238: [caffe2][tpx] Use logger instead of print
3c6fae6567 : [caffe2][tpx] Use logger instead of print
5336ccc1b2 : [BugFix] Fix bug in onnx::SsaRewrite (#42148)
4f723825b4 : [vulkan] adaptive_avg_pool2d (#41220)
0a0960126c : If we don't collect tracing, always free the trace data (#42118)
83762844e5 : Make `run_binary_ops_test` function generic and add tests to add_kernel function (#42101)
c062cdbd90 : Log the net if blob doesn't exist when setting output record (#41971)
f805184165 : onnxifi: make it work with AsyncIf
c76fada4a8 : Let DDP.train() return self to stay consistent with nn.Module (#42131)
bcd75bd683 : [ModelLints] Refine dropout lint message. (#42046)
d5de616a4a : Enable c10d Store tests in CI (#42128)
509c18a096 : Documentation for `torch.optim.swa_utils` (#41228)
646042e0fb : Add suggestion to enumerate ModuleDict in error message (#41946)
1df35ba61e : Back out "Support aarch32 neon backend for Vec256"
d198fb3efe : changed white-allowlisted (#41796)
cb9c2049cd : replace blacklist in aten/src/ATen/native/cudnn/Conv.cpp (#41627)
6ca5421a8f : Enable non-synchronizing cub scan for cum* operations (#42036)
330a107199 : Refactor lite serializer dependencies from full jit (#42127)
f7d50f50b9 : .circleci: Prefer netrc for docs push (#42136)
ed822de0fc : change 2 instances of blacklist to blocklist in tools/pyi/gen_pyi.py (#41979)
5246bc4e87 : register parameters correctly in c++ MultiheadAttention (#42037)
e59db43313 : Find hip properly (#42064)
d6f1346c37 : Add a new op for converting the dense feature to sparse representation
4281240cb5 : Raise error for duplicate params in param group #40967 (#41597)
6367a9d2b0 : [vulkan] Shaders caching (#39384)
d4735ff490 : Avoid refcount bump in IValue::toStringRef() (#42019)
5a6d88d503 : Updates to Scale and Zero Point Gradient Calculation (#42034)
c261a894d1 : Updates to Python Module for Calculation of dX and Addition of Unit Tests (#42033)
e62bf89273 : Renaming variables from dX to dY in Learnable Fake Quantize kernels for Better Clarity (#42032)
3e121d9688 : Amend docstring and add test for Flatten module (#42084)
4290d0be60 : Remove settings for the logit test case. (#42114)
11e5174926 : Added support for Huber Loss (#37599)
fbdaa555a2 : Enable ProcessGroupGlooTest in CI (take 2) (#42086)
96aaa311c0 : Grammar Changes (#42076)
b7bda236d1 : DOC: split quantization.rst into smaller pieces (#41321)
6af659629a : DOC: fix two build warnings (#41334)
47e6d4b3c8 : Revert D22741514: [pytorch][PR] Enable ProcessGroupGlooTest in CI
b00c05c86c : update cub submodule (#42042)
c5b4f60fc2 : Move qconfig removal into convert() (#41930)
12cd083fd7 : Updates torch.tensor, torch.as_tensor, and sparse ctors to use the device of inputs tensors they're given, by default (#41984)
366c014a77 : [Resubmit #41318] NCCL backend support for torch bool (#41959)
38580422bb : Allow specifying PYTHON executable to build_android (#41927)
8e03c38a4f : Add prim::EnumName and prim::EnumValue ops (#41965)
6287f9ed65 : Remove AllGatherTestWithTimeout (#41945)
45e6f2d600 : Enable ProcessGroupGlooTest in CI (#41985)
cf7e7909d5 : NCCL must depend on librt (#41978)
dede71d6e3 : Support aarch32 neon backend for Vec256 (#41267)
976e614915 : caffe2: add PIPELINE tag (#41482)
0c0864c6be : update tests to run back-compat check using new binary (#41949)
42a0b51f71 : Easier english updated tech docs (#42016)
becc1b26dd : updated white list/allow list (#41789)
7e84913233 : .circleci: Make sure to install expect for docs push (#41964)
d4736ef95f : Add done() API to Future (#42013)
890b52e09f : Reduce instability in runCleanUpPasses by reordering passes. (#41891)
d904ea5972 : [NCCL] DDP communication hook: getFuture() (#41596)
2e95b29988 : restore at::Half support for caffe2 SumOp (#41952)
e9e6cc8c83 : Added Prehook option to prepare method (#41863)
1b55e2b043 : add prefetch_factor for multiprocessing prefetching process (#41130)
79cdd84c81 : Downloading different sccache binary in case of ROCm build (#41958)
c0bfa45f9d : Enable typechecking for `torch.futures` (#41675)
750d9dea49 : move min/max tests to TestTorchDeviceType (#41908)
6a8c9f601f : Removed whitelist references from test/backward_compatibility/check_b… (#41691)
e42eab4b1c : Update PULL_REQUEST_TEMPLATE.md (#41812)
2da69081d7 : Fix one error message format of torch.dot() (#41963)
f00a37dd71 : Make setup.py Python-2 syntactically correct (#41960)
97ab33d47c : Fix memory leak in XNNPACK/MaxPool2D. (#41874)
36fb14b68b : [quant] Add Graph Mode Passes to quantize EmbeddingBag operators (#41612)
401ac2dd39 : Replaced whitelisted with allowed (#41867)
a1cfcd4d22 : Change whitelist to another context in binary_smoketest.py (#41822)
b6690eb29a : Might be good for newcomers to read what N means (#41851)
7646f3c77f : Fix type annotation for CosineAnnealingLR (#41866)
c5fdcd85c7 : check pruned attributes before deleting (#41913)
183b43f323 : Clarify Python 3.5 is the minimum supported version in the installation section. (#41937)
a4b831a86a : Replace if(NOT ${var}) by if(NOT var) (#41924)
dbe6bfbd7e : Revert D22496604: NCCL Backend support for torch.bool
b898bdd4d3 : [JIT] Don't re run CSE on every block (#41479)
25b6e2e5ee : [JIT] optimize autodiff subgraph slicing (#41437)
da3ff5e473 : [JIT] dont count constants in subgraph size (#41436)
dfe7d27d0e : implement lite parameter serializer (#41403)
b85df3709a : Add __main__ entrypoint to test_futures.py (#41826)
3626473105 : NCCL Backend support for torch.bool (#41318)
01c406cc22 : [pytorch] bump up variable version regardless of differentiability (#41269)
1978188639 : Remove two "return"s that return "void" (#41811)
77db93228b : Temporary fix for determinant bug on CPU (#35136)
17f76f9a78 : Verbose param for schedulers that don't have it #38726 (#41580)
37e7f0caf6 : Fix docstring in Unflatten (#41835)
fab1795577 : move benchmark utils into torch namespace (#41506)
266657182a : Add `torch.movedim` (#41480)
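`torch.movedim` above mirrors NumPy's `moveaxis`. A small sketch of the permutation it applies (assumed semantics, shown with plain Python lists rather than tensors; `movedim_perm` is a hypothetical helper):

```python
def movedim_perm(ndim, source, destination):
    # Remove the source dim, then reinsert it at the destination slot.
    dims = [d for d in range(ndim) if d != source]
    dims.insert(destination, source)
    return dims

# Moving dim 0 of a 3-d tensor to position 2 permutes dims as (1, 2, 0),
# so a tensor of shape (2, 3, 4) would come out with shape (3, 4, 2).
print(movedim_perm(3, 0, 2))
```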
c0e3839845 : fix #36801 (#41607)
272fb3635f : Add regression test for ONNX exports of modules that embed an Embedding layer inside a Sequential (#32598)
e831299bae : Fix typing error of torch/optim/lr_scheduler.pyi (#41775)
4b4273a04e : Update Adam documentation (#41679)
30ce7b3740 : Fix bug when compiling with caffe2 (#41868)
0ec7ba4088 : [iOS] Bump up the cocoapods version (#41895)
2a3ab71f28 : [quant][graphmode][fix] Remove useQuantizable check for dynamic quant (#41892)
ca3ba1095e : Do not chown files inside docker for pytorch-job-tests (#41884)
586b7f991c : Enable skipped tests from test_torch on ROCm (#41611)
7fefa46820 : scatter/gather - check that inputs are of the same dimensionality (#41672)
b40ef422d3 : .circleci: Separate out docs build from push (#41871)
4e16be9073 : [MemLeak] Fix memory leak from releasing unique ptr (#41883)
dbc6a2904b : [quant][graphmode][fix] Remove assert for uses == 1 in remove dequantize pass (#41859)
dfa914a90c : Modify lazy_dyndep loading to trigger inside workspace. (#41687)
af5d0bff00 : [ONNX] Add pass that fuses Conv and BatchNormalization (#40547)
ad7133d3c1 : Patch for #40026 RandomSampler generates samples one at a time when replacement=True (#41682)
2d15b39745 : [Onnxifi] Support running with quantized int8 inputs (#41820)
47c57e8804 : rename TestFuser to TestTEFuser (#41542)
6ceb65f98c : Document default dim for cross being None (#41850)
b80ffd44b0 : Revert D20781624: Add NCCL Alltoall to PT NCCL process group
ec683299eb : Reland Add non-deterministic alert to CUDA operations that use `atomicAdd()` (#41538)
aa91a65b59 : [TensorExpr] Fix propagation of loop options when splitting loops (#40035)
9c7ca89ae6 : Conda build (#38796)
61511aa1d6 : Remove zmath_std.h (#39835)
ca68dc7fa2 : replace std::clamp with shim (#41855)
b87f0e5085 : Add NCCL Alltoall to PT NCCL process group (#39984)
2da8c8df08 : [quant] Rename from quantized... to ...quantized_cpu in the native_functions.yaml (#41071)
f03156f9df : replace blacklist in caffe2/python/onnx/frontend.py (#41777)
5152633258 : [ROCm] update hip library name (#41813)
9fbcfe848b : Automated submodule update: FBGEMM (#41814)
71aad6ea66 : Revert "port masked_select from TH to ATen and optimize perf on CPU (#33269)" (#41828)
fd62847eb2 : cross_layer_equalization (#41685)
fced54aa67 : [RPC tests] Fix test_init_(rpc|pg)_then_(rpc|pg) not shutting down RPC (#41558)
e17e55831d : [pytorch] disable per-op profiling for internal mobile build (#41825)
825a387ea2 : Fix bug on the backpropagation of LayerNorm when create_graph=True (#41595)
5c9918e757 : Fix row-wise sparse SparseLengthSum and sparse adagrad fused operator (#41818)
a0f2a5625f : [quant][graphmode][fix] Make it work with CallMethod on non-Module objects (#41576)
ce8c7185de : Add unittests to Comparison Operator Kernels in `BinaryOpsKernel.cpp` (#41809)
302e566205 : add max_and_min function and cpu kernel to speed up observers (#41570)
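The `max_and_min` change above speeds up observers by computing both extremes in a single pass over the data instead of two separate reductions. The idea in a stdlib sketch (illustrative, not the ATen kernel):

```python
def max_and_min(values):
    # Single traversal tracking both running extremes; quantization
    # observers need both bounds, so fusing the passes halves the reads.
    it = iter(values)
    lo = hi = next(it)
    for v in it:
        if v < lo:
            lo = v
        elif v > hi:
            hi = v
    return hi, lo

print(max_and_min([3, -1, 7, 0]))  # (7, -1)
```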
9e0c746b15 : Augmenting Concrete Observer Constructors to Support Dynamic Quantization Range; Modifying Utility Functions in _LearnableFakeQuantize Module for Better Logging and Baseline Construction. (#41815)
60e2baf5e0 : [doc] Add LSTM non-deterministic workaround (#40893)
941069ca09 : [tensorexpr][trivial] Remove debug printing from test (#41806)
7ffdd765c8 : [TensorExpr] more convenient outer Rfactor output (#40050)
dac393fa24 : [PT] enforce duplicate op name check on mobile
62f4f87914 : Removed whitelist reference from tools/clang_format_ci.sh (#41636)
1ad7160a59 : fix backward compat (#41810)
03186a86d9 : Add test dependencies to CONTRIBUTING.md (#41799)
341c4045df : replaced blacklist with blocklist in test/test_type_hints.py (#41644)
46808b49a8 : Change whitelist to allow in file test_quantized_op.py (#41771)
72a1146339 : Skip warning 4522 with MSVC (#41648)
2da2b5c081 : update CONTRIBUTING.md for ccache (#41619)
523f80e894 : .circleci: Remove docker_hub_index_job, wasn't used (#41800)
1f11e930d0 : [ROCm] skip test_streams on rocm. (#41697)
48569cc330 : Reland split (#41567)
c89c294ef9 : Add Unflatten Module (#41564)
fe415589a9 : disable mkl for expm1 (#41654)
65bd38127a : GLOO process group GPU alltoall (#41690)
5c50cb567c : Generalized Learnable Fake Quantizer Module (#41535)
3a9a64a4da : Add non zero offset test cases for Quantize and Dequantize Ops. (#41693)
1039bbf4eb : add named parameters to mobile module (#41376)
30551ea7b2 : Update NCCL from 2.4.8 to 2.7.3 (#41608)
f07816003a : [2/n][Compute Meta] support analysis for null flag features
897cabc081 : Add operators for smart keyboard to lite interpreter (#41539)
de400fa5ac : [JIT] handle specially mapped ops (#41503)
6161730174 : [JIT] move remove mutation to its own test file (#41502)
cfcee816f1 : .circleci: Prefix docker jobs with docker- (#41689)
cc3c18edbc : More LayerNorm Vectorization in calcMeanStd function. (#41618)
26bbbeaea4 : [DOCS] Fix the docs for the inputs arg of trace_module func (#41586)
ce443def01 : Grammar patch 1 (.md) (#41599)
6769b850b2 : Remove needless test duplication (#41583)
16dde6e3a0 : Augmenting Observers to Support Dynamic Quantization Range (#41113)
9600ed9af3 : typo fixes (#41632)
bd42e1a082 : Doc language fixes (#41643)
a69a262810 : workaround segfault in deviceGuard construction (#41621)
4a3aad354a : [1/N] Implement Enum JIT support (#41390)
46eb8d997c : Revert D22533824: [PT] add check for duplicated op names in JIT
c7bcb285f3 : Makes elementwise comparison docs more consistent (#41626)
e7a09b4d17 : RecordFunction in Dispatcher (#37587)
c6d0fdd215 : torch.isreal (#41298)
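`torch.isreal` above reports where the imaginary component is zero (real-dtype inputs are trivially real). A sketch of the assumed element-wise rule using Python complex numbers:

```python
def isreal(z) -> bool:
    # Complex values count as real when their imaginary part is exactly 0;
    # plain floats/ints are always real.
    return complex(z).imag == 0

print(isreal(3.0), isreal(2 + 0j), isreal(1 + 1j))
```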
581e9526bb : [GradualGating] support better k value change (#41557)
d72c9f4200 : [PT] add check for duplicated op names in JIT (#41549)
96ac12fdf4 : [PT] add overload name for int prim ops (#41578)
445e7eb01b : Add quantized CELU operator by adding additional parameters to quantized ELU (#39199)
1734f24276 : Revert D22525217: [pytorch][PR] Initial implementation of quantile operator
b774ce54f8 : remediation of S205607
8fdea489af : remediation of S205607
39b4701d31 : [caffe2][redo] Reimplement RemoveOpsByType with SSA (#41606)
349c40507c : Revert "[CircleCI] Delete docker image after testing" (#41601)
92b95e5243 : Fix NCCL version check when nccl.h in non-standard location. (#40982)
cf811d2fb3 : retain undefined tensors in backward pass (#41490)
a874c1e584 : Adds missing abs to lcm (#41552)
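The missing-`abs` fix above matters because lcm is defined as a non-negative value; without the absolute value, mixed-sign inputs would yield a negative result. A sketch of the identity in stdlib Python (assumed semantics, not the ATen code):

```python
import math

def lcm(a, b):
    if a == 0 or b == 0:
        return 0
    # abs() keeps the result non-negative even when inputs differ in sign.
    return abs(a * b) // math.gcd(a, b)

print(lcm(-4, 6), lcm(4, 6), lcm(0, 5))
```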
0f78e596ba : ROCm: Fix linking of custom ops in load_inline (#41257)
3c862c80cf : Move list size constants for profiler::Event and profiler::ProfilerConfig into (#40474)
fbd960801a : [JIT] Replace use of "whitelist" in lower_tuples pass (#41460)
c2c2c1c106 : [JIT] Remove use of "whitelist" in quantization/helper.cpp (#41459)
4f4e3a0f15 : [JIT] Replace uses of "whitelist" in jit/_script.py (#41458)
bf0d0900a7 : [JIT] Replace uses of "blacklist" in jit/_recursive.py (#41457)
758edcd7df : [JIT] Replace use of "blacklist" in python/init.cpp (#41456)
c9bdf474d7 : [JIT] Replace use of "blacklist" in xnnpack_rewrite (#41455)
3b7c05b11b : [JIT] Replace uses of "blacklist" in gen_unboxing_wrappers.py (#41454)
f85a27e100 : [JIT] Replace "blacklist" in test_jit.py (#41453)
43b1923d98 : Enable SLS FP32 accumulation SparseLengthsWeightedSumFused8BitRowwiseFakeFP32NNPI Op. (#41577)
319b20b7db : [ONNX] Update ORT version (#41372)
346c69a626 : [ONNX] Export embedding_bag (#41234)
7eb71b4beb : Profiler: Do not record zero duration kernel events (#41540)
324c18fcad : fix division by low precision scalar (#41446)
5d7046522b : [JIT] Teach IRPrinter and IRParser to handle 'requires_grad' and 'device' as a part of type info. (#41507)
241bc648c9 : Adding missing setting `state_.ptr()` and `hook_.ptr()` to `nullptr`. (#41537)
c7798ddf7b : Initial implementation of quantile operator (#39417)
71fdf748e5 : Add `torch.atleast_{1d/2d/3d}` (#41317)
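The `torch.atleast_{1d/2d/3d}` functions above mirror NumPy, promoting lower-dimensional inputs by prepending (and, for the 3d case, also appending) size-1 dims. A shape-level sketch of the assumed promotion rules (the `*_shape` helpers are hypothetical):

```python
def atleast_2d_shape(shape):
    # Prepend size-1 dims until ndim >= 2 (NumPy-style promotion).
    return (1,) * max(0, 2 - len(shape)) + shape

def atleast_3d_shape(shape):
    if len(shape) == 0:
        return (1, 1, 1)
    if len(shape) == 1:
        return (1,) + shape + (1,)   # (N,) -> (1, N, 1)
    if len(shape) == 2:
        return shape + (1,)          # (M, N) -> (M, N, 1)
    return shape

print(atleast_2d_shape((5,)), atleast_3d_shape((5,)))
```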
840ad94ef5 : Add reference documentation for torch/library.h (#41470)
1e230a5c52 : rewrite C++ __torch_function__ handling to work with TensorList operands (#41575)
cb9029df9d : Assert valid inner type for OptionalType creation (#41509)
e3e58e20cd : enable jit profiling tests on macos (#41550)
eb3bf96f95 : During inbatch broadcast, move Tile op after Fused8BitRowwiseQuantizedToFloat if applicable (#41464)
5376785a70 : Run NO_AVX jobs on CPU (#41565)
728fd37d92 : [JIT] make fastrnns runnable on cpu (#41483)
b1d4e33c8b : Revert D22552377: [pytorch][PR] Reland split unsafe version
415ff0bceb : Create lazy_dyndeps to avoid caffe2 import costs. (#41343)
9ed825746a : Use c10::cuda:: primitives rather than make CUDA runtime calls directly (#41405)
a0e58996fb : Makes the use of the term "module" consistent through the serialization note (#41563)
454cd3ea2e : Fix RocM resource class allocation (#41553)
e324ea85ea : Add tests to logical operation in BinaryOpsKernel.cpp (#41515)
f49d97a848 : Notes for lcm and gcd, formatting doc fixes (#41526)
86590f226e : Revert D22519869: [pytorch][PR] RandomSampler generates samples one at a time when replacement=True
ba6b235461 : [RocM] Switch to rocm-3.5.1 image (#41273)
09647e1287 : RandomSampler generates samples one at a time when replacement=True (#40026)
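The `RandomSampler` change above avoids materializing all indices up front when `replacement=True`; each sample is drawn lazily. A sketch of the lazy pattern using the stdlib `random` module (illustrative only):

```python
import random

def sample_with_replacement(n, num_samples, seed=0):
    rng = random.Random(seed)
    # Yield one index at a time instead of building the full list eagerly,
    # so very large num_samples values never allocate a huge intermediate.
    for _ in range(num_samples):
        yield rng.randrange(n)

drawn = list(sample_with_replacement(100, 5))
print(len(drawn), all(0 <= i < 100 for i in drawn))
```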
6f5f455c54 : [Gloo] alltoall to ProcessGroupGloo (#41424)
1ac4692489 : Remove unnecessary test in rpc_test.py (#41218)
b5e32528d0 : Fix flaky test_udf_remote_message_delay_timeout_to_self (#41217)
94e4248d80 : Split ASAN and ROCM tests into test1 and test2 (#41520)
81e964904e : [Gloo] Tests for Gloo Async Work Wait-level Timeouts (#41265)
b979129cba : [Gloo] Support work-level timeouts in ProcessGroupGloo (#40948)
01dcef2e15 : [NCCL] Tests for WorkNCCL::wait with Timeouts (#40947)
edf3dc73f2 : [NCCL] Support Wait Timeout in ProcessGroupNCCL (#40946)
9d92fa2679 : [NCCL] Add timeout to ProcessGroup Work Wait (#40944)
fef30220fd : Runs CUDA test_istft_of_sine on CUDA (#41523)
b2b8af9645 : Removes assertAlmostEqual (#41514)
58244a9586 : Automated submodule update: FBGEMM (#40332)
2b14f2d368 : [reland][DNNL]:enable max_pool3d and avg_pool3d (#40996)
45c5bac870 : [WIP] Fix cpp grad accessor API (#40887)
5bba973afd : Reland split unsafe version (#41484)
b9442bb03e : Doc note for complex (#41252)
d80e0c62be : fix dequantization to match nnpi (#41505)
26790fb26d : fix quantization mechanism to match nnpi (#41494)
e6859ec78f : resurrect single quantization op test (#41476)
04c0f2e3cc : enable TE on windows (#41501)
b2e52186b9 : Rename capacity to nbytes in ShareExternalPointer to avoid confusion in future (#41461)
702140758f : Move GLOG_ constants into c10 namespace (#41504)
f27e395a4a : [Gloo] update gloo submodule for PyTorch (#41462)
1fb2a7e5a2 : onnx export of fake quantize functions (#39738)
7a33d8b001 : [PyTorch Mobile] Modularize the autograd source files shared by mobile and full-jit (#41430)
23174ca71b : [reland] Enable TF32 support for cuBLAS (#41498)
200c343184 : Implement gcd, lcm (#40651)
e44f460079 : [jit] Fix jit not round to even if const is folded (#40897)
1770937c9c : Restore the contiguity preprocessing of linspace (#41286)
d90fb72b5a : remove use of the term "blacklist" from docs/cpp/source/Doxyfile (#41450)
404799d43f : Disable failed caffe2 tests for BoundShapeInference on Windows (#41472)
60f2fa6a84 : Updates serialization note to explain versioned symbols and dynamic versioning (#41395)
488ee3790e : Support @torch.jit.unused on a @torch.no_grad decorated function (#41496)
71c3b397a6 : Reduce Image Size (2) (#41301)
5bd71259ed : remove blacklist reference (#41447)
b7147fe6d7 : Learnable Fake Quantizer Benchmark Test (#41429)
2b8db35c7e : [reland][DNNL]:enable batchnorm3d (#40995)
b48ee175e6 : [reland][DNNL]:enable conv3d (#40691)
ff6e560301 : Add C++ end to end test for RPC and distributed autograd. (#36893)
8940a4e684 : Pull upstream select_compute_arch from cmake for Ampere (#41133)
c62550e3f4 : Cuda Support for Learnable Fake Quantize Per Channel (GPU) (#41262)
4367a73399 : Cuda Support for Learnable Fake Quantize Per Tensor (GPU) (#41127)
225289abc6 : Adding epsilon input argument to the Logit Op
954c260061 : Revert D22480638: [pytorch][PR] Add non-deterministic alert to CUDA operations that use `atomicAdd()`
008ab27b22 : [quant][pyper] Add embedding_bag weight quantize and dequantize ops (#41293)
d5ae4a07ef : DDP Communication Hook Main Structure (#40848)
c86699d425 : [cmake] Use PROJECT_SOURCE_DIR instead of CMAKE_* (#41387)
563b60b890 : Fix flaky test_stream_event_nogil due to missing event sync (#41398)
6ff306b8b5 : Add non-deterministic alert to CUDA operations that use `atomicAdd()` (#40056)
dddac948a3 : Add CUDA to pooling benchmark configs (#41438)
3971777ebb : Krovatkin/reenable test tensorexpr (#41445)
04320a47d7 : Add optimizer_for_mobile doc into python api root doc (#41211)
3a63a939d4 : Revert D22517785: [pytorch][PR] Enable TF32 support for cuBLAS
8548a21c00 : Revert D22543215: Adjust bound_shape_inferencer to take 4 inputs for FCs
f153b35b9b : Shape inference for SparseToDense in ExpertCombiner
86a2bdc35e : Adjust bound_shape_inferencer to take 4 inputs for FCs (#41452)
14f19ab833 : Port index_select to ATen (CUDA) (#39946)
9552ec787c : Revert D22516606: [pytorch][PR] Temporary fix for determinant bug on CPU
921d2a164f : SparseAdagrad/RowWiseSparseAdagrad mean fusion on CPU & GPU and dedup version for RowWiseSparse mean fusion on GPU
44b9306d0a : Export replaceAllUsesAfterNodeWith for PythonAPI (#41414)
20f3051f7d : [adaptive_]max_pool{1,2,3}d: handle edge case when input is filled with -inf (#40665)
fcd6d91045 : Temporary fix for determinant bug on CPU (#35136)
f074994a31 : vectorize rounding ops (#41439)
96f124e623 : remove template arguments of layernorm
0b73ea0ea2 : Change BCELoss size mismatch warning into an error (#41426)
fd0329029f : Fix flaky profiler and test_callback_simple RPC tests (#41287)
0d4a110c28 : [JIT] Fix dead stores in JIT (#41202)
4ddf27ba48 : [op-bench] check device attribute in user inputs
a0f110190c : clamp Categorical logit from -inf to min_fifo when calculating entropy (#41002)
359cdc20e2 : Revert D22432885: [pytorch][PR] unsafe_split, unsafe_split_with_sizes, unsafe_chunk operations
144f04e7ef : Fix qobserver test
c68c5ea0e6 : Upgrade cpp docs Sphinx/breathe/exhale to latest version (#41312)
05207b7371 : .circleci: Re-split postnightly into its own thing (#41354)
c17670ac50 : unsafe_split, unsafe_split_with_sizes, unsafe_chunk operations (#39299)
e2c4c2f102 : addmm: Reduce constant time overhead (#41374)
288ece89e1 : Enable TF32 support for cuBLAS (#40800)
c528faac7d : [ROCm] Skip problematic mgpu tests on ROCm3.5 (#41409)
5f146a4125 : fix include file path in unary ops
4972cf06a2 : [JIT] Add out-of-source-tree to_backend tests (#41145)
0e7b9d4ff8 : Fix logit doc (#41384)
87bf04fe12 : AvgPool: Ensure all cells are valid in ceil mode (#41368)
535e8814a4 : Add operators for LiteLMLSTM to Lite Interpreter (#41270)
befb22790f : Fix a number of deprecation warnings (#40179)
13dd53b3d2 : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
e888c3bca1 : Update torch.set_default_dtype doc (#41263)
c20426f86d : Fix torch.cuda.check_error type errors (#41330)
80d5b3785b : Add torch.logit function (#41062)
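`torch.logit` above is the inverse of the sigmoid, log(p / (1 - p)); an optional `eps` clamps inputs away from 0 and 1 so the log stays finite at the endpoints. A stdlib sketch of the math (assumed semantics):

```python
import math

def logit(p, eps=None):
    # Inverse of sigmoid; clamping p into [eps, 1 - eps] avoids
    # -inf/inf at p = 0 and p = 1.
    if eps is not None:
        p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

print(logit(0.5), round(logit(0.9), 4))
```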
34e11b45c9 : Remove thrust casting from static_cast_with_inter_type (#39905)
5f6c6ed157 : Fix FC issue (#41198)
ca1b8ebbcb : move misc implementation out of `jit/__init__.py` (#41154)
6392713584 : add spaces in .md annotation for python indent (#41260)
b6e1944d35 : .circleci: Explicitly remove nvidia apt repos (#41367)
d601325de4 : update operators in the mapping to fp16 emulation
4196605776 : helper function to print out all DDP-relevant env vars (#41297)
6e6931e234 : fix duplicate extern sdot and missing flags (#41195)
0c77bd7c0b : Quantization: preserving pre and post forward hooks (#37233)
c451ddaeda : Add shape inference functions for int8 quantization related ops (#41215)
7183fd20f8 : Add interpolate-style overloads to aten::upsample* ops (#37176)
fb9e44f8dd : Add support for float[]? arguments in native_functions.yaml (#37175)
d04a2e4dae : Back out "Revert D22329069: Self binning histogram" (#41313)
86d803a9da : .cirlceci: Setup nvidia runtime for cu as well (#41268)
dea39b596e : reduce logging for layernorm (#41305)
67a4f375cd : Pass the number of indices but not embedding size in PyTorch operator (#41315)
98df9781a7 : Impl for ParameterList (#41259)
fa153184c8 : Fake Quantization Per Channel Kernel Core Implementation (CPU) (#41037)
5e72ebeda3 : Fake Quantization Per Tensor Kernel Core Implementation (CPU) (#41029)
402be850a8 : [quant] Adding zero point type check for per channel quantization (#40811)
4b4184fc69 : [quant][graphmode] use RemoveMutation to remove append (#41161)
106b0b6a62 : Op to create quant scheme blob (#40760)
edcf2cdf86 : [quant] dequantize support list and tuple of tensors (#41079)
c864158475 : Add fp16 support to SparseLengthSum PyTorch operator (#41058)
28291d3cf8 : [caffe2] Revert D22220798 (#41302)
e544bf2924 : fix the range of the random weights used in the int8fc test (#41303)
a1ed6e1eb3 : Revert D22467871: add check for duplicated op registration in JIT
095886fa42 : [caffe2] Fix the issues when using CUB RadixSort (#41299)
d1f06da9b7 : Solve log2(x:int) ambiguity by using log2(float(x)) (#41295)
1c098ae339 : Fix arg type annotations in jit.trace and onnx.export (#41093)
877a59967f : Ampere has CUDA_MAX_THREADS_PER_SM == 2048 (#41138)
6cbb92494d : Better THGeneric.h generation rules in bazel (#41285)
67f5d68fdf : Revert D22465221: [pytorch][PR] Reducing size of docker Linux image
ac3542fa59 : Define PSIMD_SOURCE_DIR when including FP16 (#41233)
abea7cd561 : msvc anonymous namespace bug (#41199)
48d6e2adce : Disable the mkldnn for conv2d in some special cases (#40610)
ce3ba3b9bc : [JIT] Add support for backend-lowered submodules (#41146)
1f2e91fa4f : Implicit casting resulting in internal build failure. (#41272)
7bae5780a2 : Revert D22329069: Self binning histogram
dd0c98d82a : [ONNX]Add tests for ConvTranspose 1D and 3D (#40703)
9daba76ba1 : Change to.dtype_layout to c10-full (#41169)
7c143e5d3e : Reducing size of docker Linux image (#41207)
0651887eb4 : Improve repr for torch.iinfo & torch.finfo (#40488)
cb6c3526c6 : Migrate addmm, addbmm and THBlas_gemm to ATen (#40927)
16c8146da9 : Self binning histogram (#40875)
9b0393fcf1 : [ONNX]Fix export of flatten (#40418)
a548c6b18f : add check for duplicated op registration in JIT (#41214)
75b6dd3d49 : Wrap Caffe2's SparseLengthsSum into a PyTorch op (#39596)
d927aee312 : Small clarification of torch.cuda.amp multi-model example (#41203)
4a09501fbe : LogitOp LUT based fake FP16 Op. (#41258)
33f9fbf8ba : Modularize parsing NCCL_BLOCKING_WAIT in ProcessGroupNCCL (#41076)
db38487ece : Autograd Doc for Complex Numbers (#41012)
e568b3fa2d : test nan and inf in TestTorchMathOps (#41225)
62e16934cb : [caffe2] Add the dedup implementation of fused RowWiseAdagrad op on GPUs (#40282)
08227072e2 : Benchmark RecordFunction overhead on some models (#40952)
8a79eec98a : Add add_relu fusion pass to optimize_for_mobile. (#40252)
75a4862f63 : Added SiLU activation function (#41034)
f6eb92a354 : Expose private APIs to enable/disable pickling ScriptModules without RPC (#39631)
df252c059c : [ROCm] Skip caffe2 unique op test for rocm3.5 (#41219)
a79b416847 : make Int8 FC bias quantization use round flush to infinity
7c2c752e6d : Revert D22458928: [pytorch][PR] Use explicit templates in CUDALoops kernels
c5dcf056ee : JIT pass for add relu fusion. (#39343)
82c9f79e0e : Add fused add_relu op. (#39342)
d6feb6141f : [Vec256][neon] Add neon backend for vec256 (#39341)
bddba1e336 : Add benchmark for add op. (#40059)
dde3d5f4a8 : [RPC docs] Remove mention of TensorPipe's SHM and CMA backends as they're not built (#41200)
a88099ba3e : restore old documentation references (#39086)
b952eaf668 : Preserve CUDA gencode flags (#41173)
e374280768 : Use explicit templates in CUDALoops kernels (#41059)
1f1351488e : Revert D21870844: Create lazy_dyndeps to avoid caffe2 import costs.
22f940b7bd : add clang code coverage compile flags (#41103)
2cf31fb577 : Fix max_pool2d perf regression (#41174)
1922f2212a : Make IterableDataset dataloader.__len__ warning clearer (#41175)
e84ef45dd3 : [JIT] Fix JIT triage workflow (#41170)
c1fa74b2d7 : [quant][refactor] test_only_eval_fn (#41078)
7c29a4e66f : Don't add NCCL dependency to gloo if system NCCL is used (#41180)
2252188e85 : [caffe2] Fix spatial_batch_norm_op division-by-zero crash (#40806)
df1f8a48d8 : add null check for c2 tensor conversion (#41096)
a318234eb0 : Print raising warnings in Python rather than C++ if other error occurs (#41116)
07fd5f8ff9 : Create lazy_dyndeps to avoid caffe2 import costs. (#39488)
f69d6a7ea3 : [ONNX] Update Default Value of recompute_scale_factor in Interpolate (#39453)
9b3a212d30 : quantizer.cpp: fix cuda memory pinning (#41139)
62cee0001e : Move async + serialization implementation out of 'jit/__init__.py' (#41018)
c8deca8ea8 : Update pthreadpool to pthreadpool:029c88620802e1361ccf41d1970bd5b07fd6b7bb. (#40524)
c038f8afcc : Do not install nvidia docker for non-NVIDIA configs (#41144)
690946c49d : Generalize constant_table from tensor only to ivalue (#40718)
86f72953dd : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
3e26709a4e : Remove copy_ warnings for angle and abs for complex tensors (#41152)
7ff7c9738c : Revert D22418756: [pytorch][PR] Migrate addmm, addbmm and THBlas_gemm to ATen
bf9cc5c776 : Add callback with TLS state API in futures (#40326)
155fb22e77 : Run single-threaded gradgradcheck in testnn (#41147)
8e2841781e : [easy] Use torch.typename in JIT error messages (#41024)
33e26656fa : list workaround for CREATE_OBJECT failure (#41129)
302cf6835e : [ROCm][Caffe2] Enable MIOpen 3D Pooling (#38260)
f71cccc457 : test: Add option to continue testing through error (#41136)
04004bf10c : Fix a minor typo "forget add" -> "forget to add" (#41131)
c7768e21b1 : [JIT] Add GitHub workflow for importing issues to triage project (#41056)
6725c034b6 : Migrate addmm, addbmm and THBlas_gemm to ATen (#40927)
3f32332ee6 : [JIT][Easy]move remove mutation to own file (#41137)
b8d2ccf009 : Unify TensorOptions signatures (#39611)
10caf58a52 : [typing] tensor._version is int (#41125)
97052c5fa8 : Extend SparseAdagrad fusion with stochastic rounding FP16 (#41107)
af2680e9ce : Update ShipIt sync
0edbe6b063 : Add a link in RPC doc page to point to PT Distributed overview (#41108)
9d1138afec : Remove unnecessary atomic ops in DispatchStub (#40930)
ec58d739c6 : .circleci: Remove pynightly jobs
dfd21ec00d : Revert D22418716: [JIT] Add support for backend-lowered submodules
2bc9ee97d1 : Revert D22418731: [JIT] Add out-of-source-tree to_backend tests
131a0ea277 : Add version number to bytecode. (#36439)
58d7d91f88 : Return atomic (#41028)
351407dd75 : Disables unary op casting to output dtype (#41097)
c93e96fbd9 : [jit] move script-related implementation out of torch/jit/__init__.py (#40902)
6c9b869930 : [ROCm] Skip Conv2d, Conv3d transpose fp16 test for ROCm3.5 (#41088)
dde18041a6 : [quant][graphmode] Refactor quantization patterns (#40894)
03eec07956 : Move error messages in-line in `_vmap_internals.py` (#41077)
de4fc23381 : clean up duplicated op names (#41092)
e4fbcaa2bc : [Codemod][FBSourceClangFormatLinter] Daily `arc lint --take CLANGFORMAT`
3d3fd13e04 : [quant][graphmode][fix] filter for list append change (#41020)
e0e8b98c43 : Export logic op to pytorch
6ef94590fa : match int8 quantization of nnpi (#41094)
e2a291b396 : [JIT] Add out-of-source-tree to_backend tests (#40842)
6777ea19fe : [JIT] Add support for backend-lowered submodules (#40841)
5a4c45f8d1 : [JIT] Move TestBackend to test directory (#40840)
3e01931e49 : [JIT] Separate to_backend API into libtorch and libtorch_python (#40839)
0911c1e71a : Added index_put to promotelist (#41035)
c55d8a6f62 : Remove std::complex from c10::Scalar (#39831)
3615e344a3 : Unit test case for the Int8FC to cover quantization scale errors. (#41100)
bacca663ff : Fix Broken Link in CONTRIBUTING.md (#41066)
445128d0f2 : Add PyTorch Glossary (#40639)
bce75a2536 : add first implementation of swish (#41085)
a8bc7545d5 : use PYTORCH_ROCM_ARCH to set GLOO_ROCM_ARCH (#40170)
054e5d8943 : .circleci: Fix job-specs-custom docker tag (#41111)
cc29c192a6 : add "aten::add.str" op and remove two duplicated ops
a4fd4905c8 : bump docker version to more recent tag (#41105)
eea535742f : Add bfloat16 support for nccl path (#38515)
38b465db27 : ROCm 3.5.1 image (#40385)
5e03a1e926 : Add support for int[]? arguments in native_functions.yaml (#37174)
4dad829ea3 : In interpolate, inline the call to _interp_output_size (#37173)
3c1c74c366 : In interpolate, move exceptional cases to the bottom (#37172)
8f0e254790 : In interpolate, use if instead of elif (#37171)
93778f3b24 : Expose certain methods in OpaqueTensorImpl. (#41060)
8d570bc708 : Decouple DataParallel/DistributedDataParallel from CUDA (#38454)
75155df8b4 : Doc warnings (#41068)
ff3ba25b8e : .circleci: Output binary sizes, store binaries (#41074)
0e6b750288 : Insert parentheses around kernel name argument to hipLaunchKernelGGL (#41022)
630e7ed9cc : Splitting embedding_bag to embedding_bag_forward_only and embedding_bag (#40557)
00ee54d2a4 : Fix link to PyTorch organization (from Governance) (#40984)
452d5e191b : Grammatically updated the tech docs (#41031)
22c7d183f7 : If ninja is being used, force build_ext to run. (#40837)
733b8c23c4 : Fix several quantization documentation typos (#40567)
2d98f8170e : Add option to warn if elements in a Compare table are suspect (#41011)
a04af4dccb : Revert D22396896: [pytorch][PR] run single-threaded gradgradcheck in test_nn
0e09511af9 : type annotations for dataloader, dataset, sampler (#39392)
a6b703cc89 : Make `torch_cpu` compileable when `USE_TENSORPIPE` is not set. (#40846)
12b5bdc601 : Remove unused Logger in get_matching_activations (#41023)
4aa543ed2e : Fix unordered-map-over-enum for GCC 5.4 (#41063)
50df097599 : Fix CUDA jit codegen compilation with gcc-5.4 (#41055)
56396ad024 : ONNX: support view_as operator (#40496)
b2cc8a2617 : [ONNX]Fix export of full_like (#40063)
6e4f501f1a : Improve error message for Pad operator (#39651)
6b50874cb7 : Fix HTTP links in documentation to HTTPS (#40878)
63ef706979 : [ATen] Add `native_cuda_h` list to CMakeLists.txt (#41038)
5d1d8a58b8 : Enable `in_dims` for vmap frontend api (#40717)
dac63a13cb : run single-threaded gradgradcheck in test_nn (#40999)
37a572f33e : fix grad thrashing of shape analysis (#40939)
4af8424377 : shape analysis fix for default dtype (#40938)
078669f6c3 : Back out "[2/n][Compute Meta] support analysis for null flag features"
a78024476b : Port `equal` from THC to ATen (CUDA) (#36483)
c0f9bf9bea : s/torch::jit::class_/torch::class_/ (#40795)
cbe52d762c : Mish Activation Function (#40856)
87f9b55aa5 : Use explicit templates in `gpu_kernel_with_scalars` (#40992)
945ae5bd7b : Update the documentation of the scatter_ method with support for reduction methods. (#40962)
35bd2b3c8b : DOC: Clarify that CrossEntropyLoss mean is weighted (#40991)
b9b4f05abf : [nvFuser] Working towards reductions, codegen improvements (#40864)
e026d91506 : [JIT] Remove dead store in unpickler.cpp (#40625)
d753f1c2e1 : Fixes formatting of vander, count_nonzero, DistributedSampler documentation (#41025)
0fbd42b20f : [pytorch] deprecate PYTORCH_DISABLE_TRACING macro (#41004)
7f60642bae : [pytorch] add manual registration for trace type (#40903)
e173278348 : Update quantization.rst (#40896)
e75f12ac15 : Check statstical diff rather than exact match for test_dropout_cuda. (#40883)
c38a5cba0d : Remove duplicate assignment in collate.py (#40655)
c935712d58 : Use unbind for tensor.__iter__ (#40884)
f6f3c0094a : Revert D22369579: add eq.str, ne.str, and add.str ops
9c82b570bf : Fix delegating to jit.load from torch.load (#40937)
73c5a78f43 : Test test_int8_ops_nnpi.py case typo fix. (#41008)
46f5cf1e31 : Improve error reporting of AVX instruction in CI job (#40681)
e1afa9daff : fix cmake bug (#39930)
0b9717b86a : When linking libtorch_cpu.so, put AVX sources last in the input list (#40449)
063d5b0d3f : Remove get_fail_msg in test_dataloader.test_proper_exit (#40745)
450ba49653 : Add the missing `resource_class` key in the update_s3_htmls job (#41000)
54d7a1e3f4 : Fix module dict key ordering (#40905)
0deb2560b8 : add eq.str, ne.str, and add.str ops (#40958)
300a3aaaad : [jit] move private implementation out of `jit/__init__.py` (#40807)
1e64bf4c40 : [CircleCI] Delete docker image after testing (#40917)
8ecd4f36aa : fix __len__, __contains__, getitem inherited from interface class derived from nn container (closes #40603) (#40789)
8223858cc1 : shape inference of undefined for prim::grad (#40866)
88c0d886e3 : update requires_grad on loop inputs correctly (master) (#40926)
0790d11a18 : typing for tensor.T/grad_fn torch.Size (#40879)
0fc0a9308a : fix autodoc for torch.distributed.launch (#40963)
480851ad2c : Docstring changes for dynamic quantized classes (#40931)
3b7df2388e : [RFC] Profile rpc_async call from JIT (#40652)
f3f113f103 : [quant][graphmode][fix] Print the node in error message (#40889)
f083cea227 : [RPC tests] Fix file descriptor leak (#40913)
f9a71d3de4 : [RPC tests] Align ddp_under_dist_autograd test with others (#40815)
d0f2079b5e : [RPC tests] Remove world_size and init_method from TensorPipe fixture (#40814)
3890550940 : [RPC tests] Fix @_skip_if_tensorpipe always skipping for all agents (#40860)
cab7d94d47 : [PyTorch Numeric Suite] Remove unnecessary Logger in input arguments (#40890)
542ac74987 : [quant][graphmode][fix] Fold conv bn (#40865)
824ab19941 : [quant][graphmode] Support quantization for `aten::append` (#40743)
ff17b83fd8 : [pytorch][ci] add custom selective build flow for android build (#40199)
28e1d241cd : [pytorch] factor out binary size upload command (#40188)
3c22c7aadc : infer tensor properties based on an input tensor rather than defaults for xxx_like ctors (#40895)
6095808d22 : fix pca_lowrank memory consumption (#40853)
3ca5849f0a : Add serializer and deserializer for Int8QuantSchemeBlob and Int8QuantParamsBlob (#40661)
f8d4878b3c : check for unsupported instructions when exporting mobile models (#40791)
3c6b8a6496 : Revert D22360735: .circleci: Build docker images as part of CI workflow
a1c234e372 : Revert D22330340: [C2] Fixed a bug in normalization operator
9cc73966b3 : [TVM] Fix build and sync with caffe2/caffe2/python/dlpack.h (#40888)
b7517a76ba : rshift use default >> operator (#40545)
dec3f918a0 : Migrate 'torch.dot' from TH to Aten (CUDA) (#40646)
81aebf380e : pytorch | Fix linking of qnnpack params on windows. (#40920)
a7e09b8727 : pytorch | Namespace init_win symbol in qnnpack.
e1428cf41b : [JIT] fix unfold shape analysis (#40749)
ce63f70981 : [C2] Fixed a bug in normalization operator (#40925)
af5bcba217 : .circleci: Build docker images as part of CI workflow (#40827)
9f14e48834 : Override shape hints with real weight shape extracted from workspace (#40872)
db39542509 : [2/n][Compute Meta] support analysis for null flag features
b678666a04 : Add `module.training` to docs (#40923)
6ae3cd0d9d : Configure RPC metrics handlers and pass them into Thrift RPC Agent (#40602)
6aabd12390 : fix issue #31759 (allow valid ASCII python identifiers as dimnames) (#40871)
5db5a0f2bb : Re-enable Caffe2 test `RoiAlignTest.CheckCPUGPUEqual` (#40901)
1a74bb84f2 : Remove Int8FC diff restriction.
591fffc524 : Type-annotate serialization.py (#40862)
9fa1f27968 : [jit] Fix value association with dictionaries in the tracer (#40885)
59294fbbb9 : [caffe2] Reimplement RemoveOpsByType with SSA (#40649)
ea03f954ad : [ONNX] Add warning in ONNX export when constant folding is on in training-amenable mode (#40546)
73f11dc3d1 : `torch._six.PY37` should be true for Python-3.8 as well (#40868)
8f6e50d013 : Make some more ops c10-full (#40747)
d7c9f96e43 : Optimize perf for calling ops with custom classes (#38257)
2f47e953f7 : Fixes #40158 (#40617)
04b6e4273e : clang format reducer.cpp (#40876)
ad30d465d5 : Move install_torchvision to common.sh so that it can be sourced. (#40828)
49e12d888a : [NCCL - reland] Explicitly abort NCCL Communicators on Process Group Destruction (#40585)
af34f2f63b : Added missing generator argument in type annotation(pytorch#40803) (#40873)
c73255801f : Fix the autograd codegen for repeat function (#40766)
26543e6caf : [quant][graphmode] FP16 quant support - Operator Fusion (#40710)
55b5ab14d3 : [quant][graphmode] FP16 quant support - Insert cast operators (#40709)
6aebd2c412 : [quant][graphmode] Add FP16 quant support - Insert Noop Observers (#40708)
d1352192e2 : Move `OperatorBase::AddRelatedBlobInfo` implementation to .cc file (#40844)
cbdf399fc6 : Move OperatorSchema default inference function implementations to .cc… (#40845)
c71ec1c717 : Fix zip serialization for file > 2GiB for Windows (#40783)
a0569ad8f8 : [android][readme] Aar native linking add fbjni (#40578)
fcadca1bda : serialization: validate sparse tensors after loading (#34059)
5f9e7240f5 : Fix bug where explicitly providing a namespace never worked. (#40830)
2cf9fe2d92 : Remove more error-exposing tests in exp that cannot be reliably reproduced (#40825)
f13653db29 : [Update transforms.py]use build-in `atanh` in TanhTransform (#40160)
fbcf419173 : Respect user set thread count. (#40707)
0203d70c63 : [nit] fix some typo within documentation (#40692)
8e0714a60d : [rfc] Reduce number of coin flips in RecordFunction (#40758)
179dbd4f25 : [jit] preserve keys on dictionary input tracing (#40792)
0ddaaf6a92 : [codemod][caffe2] Run clang-format - 5/7
29aef8f460 : Skip some error-producing exp tests that cannot be reliably reproduced (#40824)
0a75234934 : Allow np.memmap objects (numpy arrays based on files) to be processed… (#39847)
9d8dc0318b : [pruning] add rowwise counter to sparse adagrad
40e79bb1d3 : Update the version of ninja and scipy (#40677)
e762ce8ecf : Avoid initializing `new_group` in test_backward_no_ddp. (#40727)
5a4911834d : Add CUDA11 build and test (#40452)
1571dd8692 : Refactor duplicated string literals (#40788)
6e4f99b063 : Fix wrong MSVC version constraint for CUDA 9.2 (#40794)
9ac0febb1f : Pin torchvision version for doc_push (#40802)
f3949794a3 : Prototype benchmarking util (#38338)
c648cd372f : Fix complex printing for sci_mode=True (#40513)
871bfaaba1 : [JIT] Fix shape analysis for aten::masked_select. (#40753)
50d55b9f2b : [JIT] Update type of the unsqueeze's output in shape analysis. (#40733)
c3237c7a87 : Print hostname of RoCM tester (#40755)
a303fd2ea6 : Let exp support complex types on CUDA and enable device/dtype in complex tests (#39087)
ef5a314597 : [typing] fix register_buffer/parameter (#40669)
5923a802fa : Back out "[pytorch][PR] [ONNX] Add eliminate_unused_items pass"
3ecae99dd9 : Support Pathlike for zipfile serialization (#40723)
c56255499a : Reverts running clang-tidy on ATen (#40764)
3cc18d7139 : .circleci: Remove executor from windows uploads (#40742)
a6a31bcd47 : Enable `out_dims` for vmap frontend API (#40576)
2f94b7f95c : Initial vmap docstring (#40575)
4a235b87be : pop warning message for cuda module when asan is built in (#35088)
4104ab8b18 : Add `torch.count_nonzero` (#39992)
31de10a392 : Int8FC dequantize fix (#40608)
b9cca4b186 : fix range of results for pairwise operations (#40728)
a371652bc8 : Allow to get string references to strings inside torch::List (#39763)
fabd60ec1a : Add comment with UNBOXEDONLY explanation to codegen (#40117)
01e2099bb8 : [TB] Add support for hparam domain_discrete (#40720)
53af9df557 : Unify boxed function signature between jit and c10 (#37034)
320164f878 : Fix zip serialization for file > 2GiB (#40722)
9393ac011a : [CUDA] addmm for complex (#40431)
d7cd16858f : Add documentation about storage sharing is preserved and serialized f… (#40412)
8f5b28674c : [JIT] Remove dead store in quantization_patterns.h (#40724)
0235676f8a : [pytorch][ci] run mobile code analysis on PR (#40247)
6e1cf000b3 : [jit][oacr] Add some operators for Assistant NLU joint lite model (#40126)
21de450fcb : Fix batch size zero for QNNPACK linear_dynamic (#40588)
14145f9775 : Fix and reenable threaded QNNPACK linear (#40587)
9ca4a46bf8 : Implement parallel scatter reductions for CPU (#36447)
11a74a58c8 : Setter for real and imag tensor attributes (#39860)
fd90e4b309 : [CircleCI] Add RocM build/test jobs (#39760)
63e5a53b8c : DNNL: fix build error when DNNL using TBB threading pool (#40699)
ed83b9a4be : Change function parameter `self` to `input` in torch.__init__.pyi (#40235)
d2e16dd888 : Remove constexpr for NVCC on Windows (#40675)
4a174c83ca : Add option to preserve certain methods during optimize_for_mobile. (#40629)
4121d34036 : Python/C++ API Parity: Add impl and tests for ParameterDict (#40654)
b35cdc5200 : [Fix] torch_common target shared by lite-interpreter and full-jit and turn on query-based selective build (#40673)
b4db529352 : Fix wrong link in docs/source/notes/ddp.rst (#40484)
502ec8f7f7 : Revert D22227939: [TB] Add support for hparam domain_discrete
5377827b3e : Revert D22275201: [Fix] torch_common target shared by lite-interpreter and full-jit
521722751f : Add examples and tests for combining static/class method with async execution (#40619)
1399655a98 : [Fix] torch_common target shared by lite-interpreter and full-jit
21991b63f5 : Migrate `dot` from the TH to Aten (CPU) (#40354)
4c25428c8c : [TB] Add support for hparam domain_discrete
2456e078d3 : [TB] Support custom run_name in add_hparams (#40660)
15be823455 : caffe2 | Revert range loop analysis fix
68042c7466 : Skip mypy on pynightly if numpy-1.20.0-dev0... is used (#40656)
ac8c8b028d : [ROCm] restore jit tests (#40447)
411bc2b8d5 : [quant][graphmode][fix] remove unsupported ops in the list (#40653)
61a8de77cf : [quant] aten::repeat work for quantized tensor (#40644)
0309f6a4bb : [quant][graphmode][fix] cloning schema in insert_observers (#40624)
0a19534dd2 : [JIT] Remove dead store in quantization_patterns.h (#40623)
e368b11226 : [JIT] Remove dead stores in loopnest.cpp (#40626)
15864d1703 : Skip allreducing `local_used_maps_dev_` when `find_unused_param=False`
4102fbdf08 : [1/n] Allow dense NaN value in dper raw input processor output
897e610c82 : FP16 rounding-to-nearest for row-wise SparseAdagrad fusion (#40466)
47c72be3d7 : Port /test/cpp_extensions/rng_extension.cpp to new operator registration API (#39459)
24a8614cac : [Reland][doc] Add overflow notice for cuFFT on half precision (#40551)
6debc28964 : Ignore error code from `apt-get purge` (#40631)
375cd852fa : Add a utility function for bundling large input tensors (#37055)
41ea7f2d86 : Add channels-last support to bundled_inputs (#36764)
edac323378 : Add special rules to launch docker image with RocM (#40632)
0494e0ad70 : Back out "Revert D21581908: Move TensorOptions ops to c10" (#40595)
b8f4f6868d : [JIT] Remove dead store in exit_transforms.cpp (#40611)
a62f8805e7 : Update TensorPipe submodule (#40614)
5036c94a6e : properly skip legacy tests regardless of the default executor (#40381)
7676682584 : Fix illegal opcode bug in caffe2 (#40584)
fb5d784fb4 : Further reduce windows build/test matrix (#40592)
10822116c5 : build docker image for CUDA11 (#40534)
fc8bca094c : skip_if_rocm test_rnn in test_c10d_spawn.py (#40577)
67c79bb045 : update schema to reflect aliasing behavior (#39794)
a0ba7fb43e : Precompute entries in dispatch tables (#40512)
a4cabd1a3c : Generalize Python dispatcher testing API; disallow overwriting fallback (#40469)
44bf822084 : Add C++ standard version check to top level headers (#40510)
dfc7e71d13 : [Selective Build] Apply query-based on instrumentation_tests
f1406c43fc : [papaya][aten] Fix compiler error: loop variable 'tensor' is always a copy because the range of type 'c10::List<at::Tensor>' does not return a reference. (#40599)
eebd492dcf : [doc] fix autograd doc subsubsection display issue (#40582)
3ab60ff696 : Remove cpu vec256 for std::complex (#39830)
fab412a8f3 : Bump nightlies to 1.7.0 (#40519)
e3a97688cc : [quant][graphmode][fix] dequantize propagation for {add/mul}_scalar (#40596)
547ea787ff : [ONNX] Add eliminate_unused_items pass (#38812)
5466231187 : Fixes lint (#40606)
ac79c874ce : [PyTorch Operator] [2/n] Adding python test
c790476384 : Back out "Revert D22072830: [wip] Upgrade msvc to 14.13" (#40594)
b05c34259b : relax size check in flatten_for_scatter_gather (#40573)
e180ca652f : Add __all__ to torch/_C/_VariableFunctions.pyi (#40499)
c6e0c67449 : [PyTorch Error Logging][2/N] Adding Error Logging for Loading Model (#40537)
e231405ef6 : [jit] Fix type annotations in select assignments (#40528)
dfbf0164c9 : Revert D22103662: [NCCL] Explicitly Abort NCCL Communicators on Process Group Destruction
4d40ec1480 : [PyTorch Error Logging][1/N] Adding Error Logging for Run_Method (#40535)
f41173b975 : [PyPer][quant] Add quantized embedding operators to OSS. (#40076)
461014d54b : Unify libtorch_python_cuda_core_sources filelists between CMakeList, fbcode and bazel (#40554)
7369dc8d1f : Use CPU Allocator for reading from zip container
c362138f43 : Disallow passing functions that don't return Tensors to vmap (#40518)
43757ea913 : Add batching rule for Tensor.permute (#40517)
7038579c03 : Add batching rule for unsqueeze, squeeze, and transpose (#40455)
88ea51c061 : doc string fix for torch.cuda.set_rng_state_all (#40544)
e440c370c5 : [quant] Fix fuse linear pass (#40549)
eae1ed99a3 : caffe2 | Fix building with `-Wrange-loop-analysis` on
cf8a9b50ca : Allow ReflectionPad to accept 0-dim batch sizes. (#39231)
82e9318a16 : Adjust CUDA memory leak test (#40504)
85b87df5ba : Revert D22208758: [pytorch][PR] Report error when ATEN_THREADING is OMP and USE_OPENMP is turned off.
06debf6373 : move __range_length and __derive_index to lite interpreter (#40533)
adcd755e69 : Fix backup solution (#40515)
e12f73ee12 : Add missing file to BUILD.bazel (#40536)
3dcc329746 : Use tree-based sum for floats to avoid numerical instability (#39516)
ea06db9466 : Release GIL during DDP construction. (#40495)
71edd7f175 : Update FP16 to FP16:4dfe081cf6bcd15db339cf2680b9281b8451eeb3. (#40526)
16f276cef9 : Add C++-only `int dim` overloads to `std`-related operations (#40451)
a208a272cb : Update cpuinfo to cpuinfo:63b254577ed77a8004a9be6ac707f3dccc4e1fd9. (#40516)
c120fdc05b : Unify `torch/csrc/cuda/shared/cudnn.cpp` include path (#40525)
cef35e339f : Update FXdiv to FXdiv:b408327ac2a15ec3e43352421954f5b1967701d1. (#40520)
4a0ba62ded : Update psimd to psimd:072586a71b55b7f8c584153d223e95687148a900. (#40522)
3e09268c0a : [jit] allow dict to be mixed between tracing and scripting (#39601)
787e1c4c7d : [jit] fix dictConstruct order issue (#40424)
2e6e8d557c : Update docs feature classifications (#39966)
72f2c479e3 : Migrate equal from the TH to Aten (CPU) (#33286)
4d549077a2 : Skip test_mem_leak on Windows (#40486)
0c923eea0a : Add finishAndThrow function to ProcessGroup::Work, and use with Gloo (#40405)
3e2d2fc856 : [NCCL Docs] Adding Comments for Work-level Finish in ProcessGroup (#40404)
527ab13436 : [NCCL] Explicitly Abort NCCL Communicators on Process Group Destruction (#40241)
fe18dcd692 : Use GLOG logging prefixes (#40491)
fc4824aa4a : enable mkldnn dilation conv (#40483)
de7ac60cf4 : Add out= variants for cuda.comm.broadcast/gather/scatter (#39681)
e66445878d : Adds dynamic versioning pattern (#40279)
a2e1a948a4 : Increase number of iterations in DDP SPMD tests (#40506)
9a3e16c773 : Add guard for non-default stream in DDP's autograd engine callback (#40115)
597cb04b2f : Use Int8QuantParamsBlob to pass the scale and zeropoint params (#40494)
3ed96e465c : Report error when ATEN_THREADING is OMP and USE_OPENMP is turned off. (#40146)
b4ccdef090 : Allow torch.cuda.amp.GradScaler to support sparse gradients (#36786)
d855528186 : wconstab/38034-sliced-sequential (#40445)
727463a727 : Initial vmap frontend API (#40172)
43ab9c677b : Add invariants check to BatchedTensorImpl (#40171)
e490352dc4 : Simplify complex case for tanh backward (#39997)
4975be80f8 : fix typo "normal" -> "Cauchy" (#40334)
ecd9a64712 : fix `torch.jit.trace_module` documentation (#40248)
a4dec0674c : [doc] fix typo in formula of MarginRankingLoss (#40285)
e439cf738a : Fix examples Adaptive avg pooling typo (#40217)
72e8690b78 : Fix typo. in error message (#39958)
b4eb82cd29 : Temporary commit at 6/17/2020, 6:49:44 PM
0ecea2d64d : [JIT x RPC] Consolidate Future type class and Future impl class (#40406)
f035f73d53 : Fix the issue that run clang-tidy on the aten folder (#39713)
46b9e519aa : Remove print (#40475)
7b0f867c48 : Perf improvement of Conv2d and Conv3d (#40324)
cb26661fe4 : Throws runtime error when torch.full would infer a float dtype from a bool or integral fill value (#40364)
a2d4d9eca6 : Improve Dynamic Library for Windows (#40365)
e2201e2ed8 : Fixes caffe2 loading issues on Windows (#39513)
7c07c39845 : [torch.distributed.rpc] Install method docstrings from PyRRef to RRef (#40461)
7c737eab59 : Remove table of contents at the top of rpc.rst (#40205)
b7e044f0e5 : Re-apply PyTorch pthreadpool changes
bdc00196d1 : Enable XNNPACK ops on iOS and macOS.
c314e0deb5 : [quant] Quantized adaptive_avg_pool3d (#40271)
6468bc4637 : [JIT] script if tracing fix (#40468)
92d3182c11 : Revert D21232894: Unify PyTorch mobile's threadpool usage.
ddb8565b25 : Revert D22162469: [pytorch][PR] Migrate `var` & `std` to ATen
7e32e6048d : Fix linspace step computation for large integral types (#40132)
883e4c44b2 : Raise exception when trying to build PyTorch on 32-bit Windows system (#40321)
a6a2dd14ea : Fix typo in warning message (#39854)
0e26a03ef9 : [quant][graphmode] Enable inplace option for top level API (#40414)
2e6da36298 : [android][ci] Fix CI packaging headers to aar (#40442)
b9d3869df3 : Unify PyTorch mobile's threadpool usage. (#37243)
c7d79f35e3 : Header rename complex_type.h -> complex.h (#39885)
111b399c91 : Delete requires_tensor (#40184)
cc9075c5d4 : Add some syntax sugar for when backends use the same function. (#40182)
d8ec19bc03 : Revert D22072830: [wip] Upgrade msvc to 14.13
581ad48806 : Revert D21581908: Move TensorOptions ops to c10
cbd53bfee8 : [jit] Remove unnecessary clone APIs for script::Module and RecursiveScriptModule (#40297)
8c20fb6481 : [JIT] freeze doc (#40409)
09285070a7 : Doc fix for complex views (#40450)
5fce7137a9 : [WIP][JIT] Add ScriptModule._reconstruct (#39979)
5ad885b823 : [Caffe2][Pruning] Make the caffe2 Sum operator support long types (#40379)
b623bdeabb : Move TensorOptions ops to c10 (#39492)
f6b9848c25 : Use chain.from_iterable in optimizer.py (#40156)
0e074074f3 : Disable inlining an opaque tensor into a constant (#40367)
f000b44d89 : Fork/Join Inline Docs (relanding) (#40438)
d21ee2de66 : [wip] Upgrade msvc to 14.13 (#40109)
6a421d50ab : Enabling concat fast path for channels last inputs (#39448)
27982d5711 : fixes to layernorm emulation (#40422)
b82bd654cc : Increase shapes column length (#40440)
d8c384544e : Destroy CUDA events after profiling (#39962)
a54bb4e907 : Fix demangle 't' issue in profiler (#40416)
3b040c478a : Make custom_fwd a no-op when not executed under autocast (#36171)
f652abc1dd : [jit] Enable `copy.deepcopy` and `copy.copy` for RecursiveScriptModule (#32685)
9bf255573f : quant docs: add and clean up ELU (#40377)
d71ec51c0e : quant docs: add and clean up BatchNorm{n}d (#40346)
5e683517a7 : quant docs: add and clean up InstanceNorm{n}d (#40345)
6e3fdd77ca : quant docs: add and clean up GroupNorm (#40343)
d15fcc7e49 : quant docs: add and clean up LayerNorm (#40342)
d27f8eaf92 : quant docs: add and clean up hardtanh (#40341)
8e74fb6a0c : quant docs: add and clean up hardsigmoid (#40340)
c4594a97ae : quant docs: clean up hardswish (#40323)
79736ff9c2 : Simplify complex case for `div_cpu` (#39996)
3e6fa778a5 : Testcppextensionjit rebuild once (#40169)
54c05fa34e : Add basic GPU support to distributed autograd. (#40312)
e509c58a1c : Set C++14 compatibility flag in torch_compile_options (#40399)
2acee6dc93 : Revert D22124313: Use Int8QuantParamsBlob to pass the scale and zeropoint params
08ae7d3a71 : [Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
1ec4337b7d : Use Int8QuantParamsBlob to pass the scale and zeropoint params (#40390)
78b3d5f878 : [TensorPipe] Register multiplexing channel over UV (#40389)
e9efad6878 : [ROCM][CI] Skip fp16 bench and 2-GPU runs (#40243)
ba89a89376 : [quant][graphmode][refactor] InsertQuantDeQuantHelper (#40384)
6c40ec55df : Revert D22165477: [pytorch][PR] [JIT] Fork/Join inline docs
7bf1dd582a : Fix Cuda IPC deadlock (#40347)
18122facb9 : [quant][graphmode] Add warning for debug option for add_scalar/mul_scalar (#40383)
5766da503b : Device name should be a string, not bytes (#40322)
0d24ed0c81 : Add note to torch.save (#40394)
64f925eb0c : [quant][graphmode] Add support for functional linear (#40331)
b02c932fb6 : qat eager: remove unneeded modules (#40396)
d7d75e37bb : Add state dict for LSTM and RNNCell and helper functions for accessing weights and bias (#40333)
8066fba226 : [RELAND2] Change AccumulateGrad to yield `.grad`s that match weights' memory layout (#40358)
9e5d62582c : [android][gradle] packaging headers in aars for publishing (#40392)
ae2f1f0372 : [DDP Note] Remove refs to RoundRobin PG until we officially support it (#40380)
016cf7d66e : Revert D22102408: DNNL: enable conv3d
17fe0e2b8a : Revert D22102407: DNNL: enable batchnorm3d
0d0608532c : [JIT] Fork/Join inline docs (#39952)
13a8ec3cc5 : Revert D22102406: DNNL: enable max_pool3d and avg_pool3d
9498e24ca8 : Revert D22138737: DNNL: enable dilation conv
8ec2ae9a9f : Add view_as_real, view_as_complex for complex tensors (#39099)
7a3c223bbb : Migrate `var` & `std` to ATen (#39967)
c4fc278fa8 : Build docker for CUDA11 (#40231)
02ae9a1583 : add TypeError to c10 and fix segfault in error checking in Tensor constructor (#40106)
a8ab78c815 : Added a link to Contribution guide in Readme (#40353)
dbcc5b7533 : DNNL: enable dilation conv (#40220)
43331609a4 : Port addmm, addbmm, addr to ATen (CUDA) (#38421)
c873895722 : DNNL: enable max_pool3d and avg_pool3d (#35664)
8df35fd755 : DNNL: enable batchnorm3d (#35663)
6ba807cb43 : DNNL: enable conv3d (#35662)
03af4dcbbf : Utilise the vector version for sinh and cosh (UnaryOpsKernel) (#36396)
87c5f02f3d : jit: Conv3d + BatchNorm3d fusion (#40082)
14f7e95c1a : Add prefix of remote events for RPC profiling (#40066)
17d3f74ea3 : Relax cudnn conditions for channels-last convolutions (#38904)
c72ab19458 : Add addmv for complex dtypes (#40238)
3894de569e : Reenable memory format test for some unary functions (#39102)
9f9e7c1d71 : [quant][refactor] Tests for torch.jit.quantized (#40330)
eaa91071ca : [ONNX] Support large attribute and subgraph for large model (#38793)
49887d1fc0 : reference Swish implementation (#40150)
3fa0b1e325 : ONNX: fix bug in export of cumsum operator (#40044)
c04d39aaf2 : [quant][bug] Histogram observer bug fix with min == max (#40310)
766889b6bf : ONNX: fix bug in export of ops involving torch.bool type (#40006)
0e146d2df4 : Update TensorPipe submodule (#40374)
e4766fb4d9 : Meta tensors, but without code deduplication (#38490)
52f3a09663 : ROCm: Use correct device type when exporting tensors to DLPack (#40124)
db5b273961 : Rename dont_resize_outputs() to resize_outputs(false) (TensorIterator… (#40148)
396087bfd8 : [ROCm] Enable BFloat16 for pow, exp, erf ops on ROCm (#40236)
881c1adfcd : Fixed buffer update in BatchNorm when track_running_stats is set to False (#38084)
eb92ed6239 : Append forward slashes to PIP_UPLOAD_FOLDER (#40352)
37c88a4731 : Pin the version of scipy for Windows test jobs (#40369)
ab8a99bd36 : graph mode: add hardswish inplace handling (#40284)
c6dbfcaf9e : quantized elu: graph mode handling (#40111)
cd0afe2b8e : quantized elu: eager mode QAT handling (#40104)
03ed802a90 : quantized elu: eager mode static handling (#40103)
13d54c6471 : quantized elu: require observation (#40100)
3bbedb34b9 : restore generic IndexToScatterGatherOffset specialization (#40349)
e632bf8d57 : Add thrift and tensorpipe backend tests for test_ddp_under_dist_autograd. (#40210)
ac8c3c0ad1 : Fix update_s3_html for nightly jobs (#40338)
3852215170 : [vulkan] jit passes for vulkan conv2 prepack and fuse with clamp (#39282)
f69460d0cb : Add unit test to verify DDP + RPC correctness. (#40139)
a47fb57957 : Change memory format promotion rules of point wise operators. (#37968)
c1dfc05cc9 : [android][test_app][reland] test_app example linking to pytorch_android aar content (#40313)
4cbf87dc92 : [PyTorch Numeric Suite] Add support for dynamic LSTM (#40065)
0079e429d6 : Remove incorrect warning message on rounding mode (#40301)
9da277c635 : [quant][graphmodel] linear_relu (#40021)
e04a611b91 : [quant][graphmode] clang format changes (#40329)
59ca1d31ca : [quant][graphmode] docstrings for top level APIs (#40328)
7a837019a4 : [caffe2] optimize 2/4-bit row-wise quantization (#387)
cfe1c6ef9e : Update XLAPreAutograd keys. (#40265)
5c133eb2db : fix small typo in optim adamw (#40283)
4b028a8e07 : [jit] support pad_sequence/pack_sequence (#39844)
4f761f325c : Back out "[pytorch][PR] Removes dunder div"
5555d210b1 : Cleanup TensorIteratorDynamicCasting.h (#39839)
b2f489dc57 : [quant][graphmode] Rename graph mode quantization API to `quantize_jit` (#40212)
6d70d1574f : rename the LayerNorm operator and add it to the replacement map (#40318)
fb17b05f33 : Make dynamic casting case also benefit from unrolling (#34749)
4194456158 : Add _enable_record_function python API (#40306)
a80dd02a22 : [Resubmit] Ensure NCCL_BLOCKING_WAIT=1 works for dist.barrier() (#40249)
314d645e05 : Add a warning to mention that async_execution does not work with autograd profiler (#40309)
5d0044389a : Minor RPC doc improvements (#40305)
a9f0156271 : Fix RRef to_here() docs (#40300)
caf0c286b8 : Fix RPC API doc links (#40299)
d6d579397d : Improve docs for init_rpc (#40298)
3ca05500fa : Improve RPC documents (#40296)
4463f59c2c : Let torch.futures.wait_all re-throw errors (#40291)
f92089b8ca : [pytorch] tweak code analyzer script to handle new namespaces (#40276)
6df97c20c2 : Make test case precision property (#40057)
c73095e78f : Add note to serialization docs about zipfile format (#40288)
73a156e81f : [ONNX] Update pytorch/onnx docs for new export API args (#39802)
41865d8f19 : [ONNX] Update black_listed_operators for opset 12 (#39414)
65f67bbe92 : improvements to sls 4bit
c3ce35e67b : Update TensorPipe submodule
b48742322a : move ROCm 3.5 thunk upgrade from build.sh into test.sh (#40286)
ca0540a7eb : Remove variable shadowing from tensorpipe lambda (#39126)
cdbf78fba0 : Revert D22118945: [android] test_app example linking to pytorch_android aar content
465138ec39 : refactoring TestQuantizeScript (#39677)
3684dfafc2 : Fix typos in RPC examples (#40280)
b670ff2d3a : Add typing for _CudaStreamBase and _CudaEventBase classes (#40256)
52e4e3a9b8 : NCCL Comment typo fix (#40242)
d9c804ce22 : [PyTorch Numeric Suite] Add support for dynamic quantization of linear module (#39024)
07e581d639 : Remove useless name check for inputs (#4618)
96057c0080 : Fix missing deprecation warning for Tensor.nonzero(). (#40187)
ece8ef2fc6 : Run canonical graph optimizations in optimize_for_mobile. (#38840)
a11870b45d : Revert D22118971: [android] gradle version update
b0324a97f5 : _jit_pass_fold_convbn wrapped with fuse_conv_bn_script (#40224)
b7bfdcbe3e : [caffe2/torch] Use logger in jit instantiator
2393bab036 : [TensorPipe] Update documentation (#40222)
8315bb2359 : Back out "[pytorch][PR] [JIT] Infer NamedTuple type attributes of nn.Modules correctly" (#40270)
86b1afa039 : Assert that kernels are called with the right signature (#40251)
02e091902f : Release DistAutogradContainer context for each dist_autograd test case (#38711)
6e2c88980e : .circleci: Add git to the ecr gc docker images (#40262)
ccea3726da : [Reland #4] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h (#40211)
a6420b8c75 : Increase bazel test timeout to 8 minutes (#40263)
8f51c39649 : Improve torch.futures docs (#40245)
13bd5992d0 : Remove `finalize_bucket_sparse` from DDP (#40130)
7e82382ad5 : Allow profiler to be enabled remotely with RPC (#38748)
d58b8222b7 : [JIT] Add support for with statements (#34705)
8c73e74fdf : Clean up thrust::complex usage in geometric kernels (#39293)
9788a74da8 : [quant][bug] Fix histogram observer with 0 input (#40191)
262ad8e6ab : [android] gradle version update (#40176)
52a2adb3f4 : [android] test_app example linking to pytorch_android aar content (#39587)
954a59a2f5 : Add at::tensor(complex) and torch::tensor(complex) overload (#39793)
35f357927d : [futures] Add specific python unittest coverage for collect_all/wait_all (#40233)
8b5732e8ad : Move `torch.cuda` annotations inline (#40075)
c1958de49d : [Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
41f2dbde31 : Add `AdamW` to C++ frontend (#40009)
89ef8f8141 : add test_openmp to ROCM_BLACKLIST (#40204)
430d5cec0e : print position of the operator that failed to onnxifi (#40232)
cb8b2f0636 : Revert D21534052: Assert that kernels are called with the right signature
85128113f9 : [Selective build] Enable selective build in VariablType
55cdd31bd0 : Assert that kernels are called with the right signature (#38361)
d14d47b9b5 : Get rid of global constructors in cuda codegen (#40183)
0891764e80 : [android] ANDROID_STL=c++_shared (#39588)
d3b786afdb : [android] Add libtorch headers to pytorch_android aar (#39507)
83d7718c5c : .circleci: Add docker builds based on rev-parse (#40194)
442ec1dd4e : [test] split remaining quantization tests out of test_jit (#40144)
30648985a7 : Revert D22108899: Ensure NCCL_BLOCKING_WAIT=1 works for dist.barrier()
74a2cb87e3 : [android][cmake] Remove NO_EXPORT for libtorch mobile build (#39584)
034eddca01 : Fix typos in RPC Docs (#40219)
645d6c014c : preserve output tensor's stride in TI's fast setup (#38895)
6a42d85fc6 : .circleci: Move docker_build workflow to codegen (#40189)
aa84ec5325 : [quant][api] Expose graph mode quantization API in `torch.quantization` (#40198)
fef253e711 : [codemod][custom_rule] Migrate some scripts to use named outputs for custom_rule
fcc9a1e664 : graph mode: move hardsigmoid op to `single_input_general_value` category (#40055)
37362fff66 : graph mode: util for fusion of functions which require observation (#39413)
4ad8ebe738 : quant layer/group/instance norm: make weights and biases optional (#39203)
d4e4f13173 : [quant][graphmode] Add support for detach (#40197)
5f309505ce : Move the check on orig_weight sizes. (#40200)
f3f30d4354 : [JIT x RPC] Consolidate RRef type class and RRef impl class (#35694)
7c9e78fdf5 : [TensorPipe] Add options for agent, including backend killswitches (#40162)
d1a0e88075 : Ensure NCCL_BLOCKING_WAIT=1 works for dist.barrier() (#40207)
4553b0b537 : Reduce number of Windows test configurations (#38482)
fd7e09a52b : [quant][graphmode] Clean up and add more logging (#40196)
76fbfba644 : Move _dummy_type to _utils.py (#40177)
efd9fc7434 : Remove thrust::complex from sqrt (#39901)
edd3fbc61e : Add aarch64 specific quantize_tensor using arm intrinsics. (#40113)
fb02007e9f : Export box_cox operator in caffe2
1800032712 : [quant][graphmode] Add warning for prim::Loop (#40195)
0b3755b1d0 : Add optimization blacklist as second arg to optimizeForMobile method. (#37462)
30364f0b01 : Remove obsolete warning message from DDP (#40190)
74142f76fa : Adding torch.futures to API docs (#40051)
693ab77c00 : [test] split onnx export test out of test_jit (#40143)
27d789500b : [test] split tracer related tests out of test_jit (#40142)
e34e32850e : Revert D22076711: [Reland #3] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h
a2ef54c598 : [pytorch] fix CUDA_KERNEL_ASSERT macro for android build (#40151)
3ea15af630 : [Onnxifi] Allow adding timeout for OnnxifOp run (#40081)
1670ea9474 : Remove overload of GPU max_pool3d with kernel_width; fix nan, inf in GPU {fractional,adaptive} max_pool{2,3}d (#39903)
7f0e4265ac : ROCm thunk work-around for future transition to ROCm 3.5.1 (#40181)
f4ffe99da5 : Fix flaky rref timeout test (#40141)
34e28ede57 : Fix flaky test (#40175)
bc9e8af218 : [distributed.nn] Change remote module template instantiator to write to tmp folder (#40173)
7f88f037ac : Stop running target bot on ci-all (#40186)
b5bf21a6bd : [JIT] Expose `__deepcopy__` on script::Object (#40068)
1e03d603c6 : [JIT] Infer NamedTuple type attributes of nn.Modules correctly (#39116)
c252dddcdd : [quant][graphmode] Test JIT tracing for dynamic quant cases (#40128)
f6739ec8e8 : [quant][graphmode] Refactor dynamic quant tests (#40127)
2ba5f98dd1 : Revert D22068657: [pytorch][PR] Remove global CMAKE_INSTALL_RPATH_USE_LINK_PATH directive
55bcb5dccc : Fix inconsistent results of string `split` func on JIT mode (#38772)
5e77999ecb : Add global hooks to `torch.nn.Module` (#38972)
70192c651c : [Reland #3] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h (#40122)
95e51bb7f8 : change BuildExtension.with_options to return a class not a c-tor (#40121)
a71aefe857 : [android][test_app] cleanup (#40136)
c958dd5472 : [TensorPipe] Add guards against transferring GPU tensors (#40167)
08227fea4f : Revert D22079377: [pytorch][PR] [RELAND] Change AccumulateGrad to yield `.grad`s that match weights' memory layout
1ec8ece2b9 : [RELAND] Change AccumulateGrad to yield `.grad`s that match weights' memory layout (#40129)
5200814cfa : Fix test_hook_* issues (#40135)
216f512be2 : Remove requirement of qnnpack engine for arm build. (#40112)
8619d26338 : Add batching rule for torch.expand (#40097)
dec62dbfa3 : Change VmapTransforms to use SmallVector instead of `vector<int64_t>` (#40042)
161fd5f507 : Implement tensor.size(int) for BatchedTensor (#40028)
dea58a7660 : [resubmit] Kill thrust::complex from log kernels (#40079)
44c7a2ab69 : [TensorPipe] Silence some more harmless warnings (#40094)
0152baa33a : move some math ops back to full jit (#40149)
6de6041585 : [iOS] Disable NNPACK on iOS builds (#39868)
9d588f7ce2 : Removes dunder div (#39151)
00505adbad : Add net_pos Tiles added during in-batch broadcast (#40078)
e7a3a43d8f : [pytorch] upload android build size to scuba (#40010)
3258cb61b1 : Dynamic quantization support for LSTMCell, RNNCell and GRUCell [Remove randomness in weights] (#40102)
03529ed14d : Remove hacky double registration of to_here op in reg_distributed_ops (#39602)
15823ac6d5 : Enhance DDP docstrings for DDP + RPC support. (#39916)
23739654cd : Resubmit Remove `THTensor_(fill)` & `THTensor_(zero)` (#40108)
bf544c4a7b : [android][fbjni] Test_app and Readme update with the recent fbjni dep state (#40058)
f42c948df5 : [quant][graphmode] Support another use pattern of mean (#40038)
dcec099d48 : Wrap Caffe2 (RowWise)SparseAdagrad fusion operator as a PT op (#39904)
15758bca55 : Refactor LSTM tests, [Remove randomness in weights] (#40101)
3d8de74e17 : Bumps readable file format version for torch.full inferring float from int values (#40089)
b5d54db6f4 : Revert D22071278: [quant][graphmode] Refactor dynamic quant tests
cb1a1942ee : Revert D22071277: [quant][graphmode] Test JIT tracing for dynamic quant cases
64689c2474 : Remove unnecessary copy within blob serialization (#40096)
ba98c0e38c : Split TensorIteratorConfig out of TensorIterator (#39803)
54c0ee1ebc : LayerNorm use Fused Multiply and Add (#40012)
da8cd8260b : Fix KeypointRCNN test (#39589)
f69b72c738 : Back out "Revert D21986243: TORCH_FN" (#40110)
41fa4bef2a : [quant] Support general op modules with inplace options (#39919)
fa4244d783 : [quant][graphmode] Test JIT tracing for dynamic quant cases (#40040)
ddeaa74382 : [quant][graphmode] Refactor dynamic quant tests (#40039)
461aa8a1e2 : [quant][graphmode] Support quantizing `repeat` (#39925)
ee365c58e1 : Fix destructor ordering for cuda handle pools (#39345)
145df306ae : Avoid using default process group in ProcessGroupAgent. (#39909)
7021635d61 : fix more duplicated names (#40062)
7f270233fb : Upgrade DNNL to 1.5 (#40088)
ec1833bc3c : Revert D22069566: Revert D22013026: [quant][graphmode] Pass debug option into insert_quant_dequant pass
49732f0450 : Remove global CMAKE_INSTALL_RPATH_USE_LINK_PATH directive (#37737)
d57ca73c53 : Remove item and data_ptr for std::complex (#39838)
181ea1acce : [quant][graphmode] Support squeeze/unsqueeze (#39924)
f1a5f66115 : [xplat] Add Windows specific ATen build definitions (#40092)
b3dd4d9c33 : [JIT] remove callable check to compile objects with __call__ (#40041)
f1e575a0bf : Revert D20496044: [pytorch][PR] Change AccumulateGrad to yield `.grad`s that match weights' memory layout
4b5530de72 : optimize upsample performance linear mode on CPU (#34864)
5843854e66 : [TensorPipe] Fix transport/channel priorities (#40090)
dd581b4512 : DOC: fix rpc reference in top-level index (#40077)
56b4b44107 : Batching rule for torch.mul (#39859)
33b82c7271 : [JIT] Add registry for backend lowering functions (#39552)
ad86c94f14 : Reduce memory requirement for test_argminmax_large_axis (#40036)
5add2e861c : Revert D21628596: Refactor LSTM tests
e55e0cb1a9 : Revert D20978736: Dynamic quantization support for LSTMCell, RNNCell and GRUCell
5f6e55fb32 : Clean up thrust::complex from tanh_backward (#39827)
b372000d69 : [quant][graphmode] Run RemoveRedundantDequantize in the end (#39923)
305921734a : Revert D22013026: [quant][graphmode] Pass debug option into insert_quant_dequant pass
12cf8390e6 : Update aarch64 CI badge (#39914)
48db06e39a : Dynamic quantization support for LSTMCell, RNNCell and GRUCell (#37159)
2beb9690c3 : Change AccumulateGrad to yield `.grad`s that match weights' memory layout (#34904)
c065049592 : Add smoke test to Windows CI (#39941)
d71804a57d : Eliminate TensorIterator::add_output with explicit dtype. (#39800)
23db54acdf : [DataLoader] add repr for WorkerInfo (#39975)
ee5ad6ce25 : [quant][graphmode] Pass debug option into insert_quant_dequant pass (#39915)
ebd869153c : Clarifies compare_with_numpy behavior (#40064)
8939849f72 : Revert D21986243: TORCH_FN
12cb80b5b8 : TORCH_FN (#39823)
144e8dc5a3 : [quant][graphmode] Use quantizedbatch_norm in graph mode (#39911)
655f1ea176 : Refactor LSTM tests (#38851)
bcb44796ba : [pytorch] consolidate android gradle build scripts (#39999)
9204d76b5f : Back out "[pytorch][PR] Remove `THTensor_(fill)` & `THTensor_(zero)`"
f0b40cac30 : [pytorch] simplify android circleci definition data model
18fe9d267c : Revert D22050656: [Yet Another Reland] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h
1a388da10a : [quant] add quantized::batch_norm (#39910)
5d4a662846 : DNNL: fix F.max_pool2d and F.avg_pool2d issue when stride=None (#39221)
399dd84c8c : Fix TensorPipeAgent shutdown to ensure it drains all outstanding work. (#40060)
d4faf14cb2 : [Yet Another Reland] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h (#40045)
f13be5fde1 : Check if generator has next normal sample cache methods in normal_distribution (#39816)
00651b8c93 : [distributed.nn] Implement TorchScript-compatible RemoteModule API (#37139)
f37b8e73f4 : [quant][graphmode] Support prim:TupleUnpack and prim::TupleConstruct (#39895)
eb358f49c2 : Overload complex math functions on both :: and std:: (#39829)
84d8a42fdb : [android] Remove android fbjni subproject (#39691)
d602950cb4 : [torch.distributed.rpc] Add WorkerInfo python repr magic method (#40004)
ecfe0c9a25 : [TensorPipe] Use registry for transports and channels (#39957)
51e341df4f : [bernoulli_kernel] Replace CPU_tensor_apply functions with cpu_serial_kernel (#39711)
0c25428597 : [futures] Reland: Add torch.futures.collect_all()/wait_all() python api. (#39964)
cc3fc786b7 : [resubmit] [pytorch][PR] Fix for num_threads==1 in OpenMP "parallel for" (#39533)
f6b0fbe2c5 : topk tensor k support (#39407)
4c3436838f : Show which type was the wrong one when a signature is invalid (#39491)
79450edad3 : [JIT] IRParser: properly parse negative numbers. (#39981)
42f0ea49ca : [Codemod][GleanFbcode] Remove dead includes in caffe2/binaries
569c85b45d : [futures] Add assert to Future constValue() accessor, add hasValue(). (#39950)
019eeb3183 : Kill DataLoader worker when we can't join (#39869)
1d642e2adf : Improve cuda error message for MSVC (#39987)
ac8d63a52f : Update jenkins caffe2 scripts for ROCm circleci images. (#39908)
c6b69a4e4d : Delete Python <= 3.5 specific checks from the code (#39879)
c8c53c802e : Add `generator=` kwarg for DataLoader & random samplers (#39737)
541814f2b7 : Remove dead ScatterGather code (#39963)
cf64af1ad2 : Revert D22036002: [pytorch][PR] Kill thrust::complex from log kernels
ede9bc97c3 : Fix the processing logic of bernoulli on amd (#40001)
4947ee3811 : Kill thrust::complex from log kernels (#39902)
5b194b0fb2 : Remove thrust::complex from reciprocal (#39899)
d5236f8517 : Avoid initializing unnecessary tensors in nccl.reduce (#39688)
8072f0685f : Add zero input support for batch permutation op (#39851)
f1d10978a4 : Added Mean and Variance calculation function. (#39986)
b803b4ce09 : [torch.distributed.rpc] Add stringify WorkerInfo, better error message for py_rref (#39974)
905c6730b7 : Adding /FS for NVCC if /Zi is used (#39994)
e2825392b6 : Update torchvision commit from Mar 11 to Jun 11 2020 (#39970)
bdef721caf : [fbcode] Add find_method into lite interpreter python binding.
ddd45ae919 : Extend int8 FC op to take scale and zero point from input
8d3fcb43cf : Revert D22008317: [Reland] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h
db2b273d1f : Reland: Fix CUDA device guard usage when first arg of kernel is scalar (#39956)
e62d655744 : [Reland] Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h (#39881)
34d1098dc2 : [rpc] fix RRef alias annotation (#39933)
356d564886 : [rpc] use annotation_str for RRef type serialization (#39932)
d9539cd835 : [testing] Dont use zipfile for storage __reduce__ (#39893)
727e77a809 : [quant] Enable reduce_range for graphmode (#39874)
b2620722c3 : Kill meanall from TH, THC (#39907)
8749aaca83 : Abort setup_pytorch_env.bat if one of the steps failed (#39951)
8bc821f0d0 : Revert D21976891: [futures] Add torch.futures.collect_all()/wait_all() python api.
14099374bd : Update TensorPipe submodule (#39945)
a9aa6367c2 : [futures] Add torch.futures.collect_all()/wait_all() python api. (#39790)
0d19ae5a14 : [pytorch] fix (ProfiledType|TraceType)None.cpp (#39934)
fdf6d37895 : re-enable some corner cases in memory format transpose test (#39891)
558c20f50a : Int8 PTQ ops for online training (#39818)
99084104b6 : [quant][graphmode][refactor] isScalar check (#39892)
1e05e5e0ae : Correct #39759 for HIP. (#39801)
f3f9415f81 : Add file_name argument to load_state_dict_from_url (#39749)
baa604812c : add optional request headers to torch.hub (#39740)
d367f575b9 : [CI][vulkan] android build abi x86 with USE_VULKAN (#39912)
2bab9149cc : Extend int8 quantize op to take scale and zero point from input
48678aa39f : pin ninja version to fix windows CI (#39944)
4574abc395 : Replace __host__ __device__ with C10_HOST_DEVICE in THCIntegerDivider.cuh (#39797)
124cdf2290 : Add experimental deterministic flag (#38683)
004aa089a6 : [jit][subgraph_rewriter] Support list of filters (#39867)
3876889218 : Remove LegacyComplex.h (#39834)
ae6a68ad09 : [TensorPipe] Add extensive logging (#39781)
52cc0c2c37 : Revert D22011184: [pytorch][PR] Fix CUDA device guard usage when first arg of kernel is scalar
246d7bb41d : [quant][graphmode] Quantizing traced modules (#39826)
c068233300 : Add CHECK-SOURCE-HIGHLIGHTED to file check utils. (#39692)
0526af1af6 : [vulkan] Conv2d with optional clamp (#39115)
71372b452a : [vulkan] addmm support non-vulkan inputs (#39078)
80e5ebf989 : [nvFuser] Transform replay refactor and minor updates (#39579)
2854811ab8 : [JIT] Allow self-referential annotations in classes (#39821)
14e841c292 : [quant][graphmode] Remove dedup pass (#39825)
2cd27be5b5 : Fix CUDA device guard usage when first arg of kernel is scalar (#39870)
b10c53e9b8 : Vectorize on output for reduction kernels (#37206)
a92231b70e : Typo in Dispatch.h (#39882)
bbf364b0c1 : move basic math ops to lite interpreter (#39861)
bdecedd2d7 : [JIT] use python type resolver for all types (#39880)
0aa70039f9 : Delete redundant device/dtype in TensorIterator add_input/add_output (#39798)
f59e38974a : fix multinomial for empty batch (#39873)
8c8d9f8971 : Move pip install after setting up VS environment (#39898)
63dc1363e6 : [TensorExpr] Eliminate Cond statements when each branch is a different kind of empty (#39754)
36501ff5d9 : [vulkan] VulkanTensor, add strides in interface (#39077)
eace053398 : Move all torch.nn.modules type annotations inline (#38211)
e22dd561ad : Migrate pow kernel to c10::complex (#39286)
b5848833f0 : Add runtime check for MSVC redist (#39841)
9ca7fdcef0 : Attempt to fix macos ci by pinning numba (#39875)
0b90b9cdd3 : Allow shuffle when auto-batching disabled in DataLoader (#39865)
ae3567427f : .circleci: Remove upload_binary_sizes job (#39786)
32bf63890b : Revert D21992267: [pytorch][PR] [ONNX] Export linspace
8893c0670d : Revert D21511611: Wrap Caffe2 (RowWise)SparseAdagrad fusion operator as a PT op
91d539097b : [ONNX] Fix regression disabling checker (#39073)
85f1f67f33 : Wrap Caffe2 (RowWise)SparseAdagrad fusion operator as a PT op (#38704)
01986e9890 : Wait for all op types in SimpleNet (#39493)
7a792879f2 : Prevent clobbering of docker images by parallelnative/paralleltbb builds (#39863)
2b29feace4 : [TensorExpr] Fix IRPrinter for function calls with uniqued names (#39753)
7957d83498 : [ONNX] Export linspace (#39403)
4c7d81f847 : Add documentation for properties in TensorIterator. (#39792)
f33c31eace : Separate "configuration" properties in TensorIterator (#39789)
1f027ac02d : Disable testTHCAllocator on HIP (#39843)
ad91a3a11f : Skipping L2 regularization on sparse biases
da3073e9b1 : Revert D21960728: Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h
aaaf2eb6b3 : Add batching rule for torch.sum(tensor, dims) (#39581)
2d1cf950bb : Impose maximum level restriction for BatchedTensors (#39580)
1360bb986c : Revert D21976091: [vulkan][CI] CI android build abi x86 with USE_VULKAN
ba27fd04d3 : Fixes type promotion for `cat` (#39777)
c3d4053bc0 : [quant][graphmode] Support quantized::batch_norm2d_relu fusion for tracing (#39645)
e1392922f2 : [quant] Enable per-channel quantization for LSTM Modules (#39666)
425927bb2b : [quant] Add reduce_range params for quantized_lstm (#39604)
e399e470b6 : [vulkan] speed_benchmark_torch add vulkan arg to use Vulkan backend (#39076)
a752832da9 : Fix `Tensor.tolist` signature in the docstring (#39732)
5d2f6d86e5 : graph mode: add quantization type enum (#39795)
94dfc76e3f : graph mode qat: make fake_quantize scriptable (#39750)
5c10b17491 : graph mode: more docs for insert observers pass (#39739)
f8561acb13 : graph mode: add docs to pre-calibration passes (#39683)
4c4b9916ef : Include AT_PARALLEL_OPENMP/AT_PARALLEL_NATIVE/AT_PARALLEL_NATIVE_TBB to ATen/Config.h (#39612)
97dfdaaad8 : torch.multinomial : fast-path for replacement=False (#39742)
780fa2b489 : Switch torch.save to zipfile serialization and swap quantization to that (#39460)
262dbdf0a5 : [caffe2/nomnigraph] handle when PATH env is not defined (#39373)
96870181c6 : Remove duplicated entries in `random.rst` (#39725)
be838504a3 : Remove `THTensor_(fill)` & `THTensor_(zero)` (#39727)
bfa76ff407 : [Doc] Clarify that variance estimator is biased for normalization layers (#39752)
95489b590f : Throws runtime error when performing integer division using torch.div (#38620)
cb519801d6 : [vulkan][CI] CI android build abi x86 with USE_VULKAN (#39767)
a5fbd3ef8a : [vulkan][build_fix] Fix Vulkan Build; Prepacking uses new register api (#39771)
7f55197a57 : Peel Loop (#39434)
a1071e5d36 : Fix parsing of subscript expressions using python resolver (#39269)
6748fbd38a : Remove `MultiheadAttention` weights from constants (#39768)
3fb1e73a4e : Add rpc.async_execution support for rpc.remote on script functions (#39758)
eb7843ed01 : [quantization] Remove duplicated piece of code in test (just a nit). (#39770)
d04a3fcc42 : Refactor CUDA bernoulli_kernel by using uniform_and_transform (#39652)
68b8740611 : Update TensorPipe submodule (#39783)
c22bbb2124 : [JIT] Add Type::repr_str to return human-readable str (#39544)
4e892bd99c : [Easy Review] Fix ProcessGroupRpcBackendOptions Doc (#39787)
7994d6e147 : [quant][graphmode] Support quantization for `aten::append` (#39644)
08105a0068 : Remove unnecessary !op.is_read_write test in compute_names/compute_shape. (#39747)
b1750cb884 : always resize_ min/max outputs (#39696)
9ba2530d42 : [ROCm] explicitly embed version within image name (#39735)
2d589bc9da : [quant][graphmode] Fix a corner case in handling `if` in insert_observers (#39615)
0aecbbb762 : Changes TensorIterator computation to not consider out kwarg, lets UnaryOps safe cast to out (#39655)
acc13ac828 : [PyTorch] Make DDP reducer work under distributed autograd (#37998)
7cb4eae8b1 : correct some cpp extension code usages and documents (#39766)
be13388adb : Migrate AT_DISPATCH_COMPLEX_TYPES to c10::complex (#39564)
9111ae7782 : Preserve user specified attributes and methods (#38830)
6bdfd6ae1a : [TensorExpr] Fast sigmoid for LLVM (#39717)
307920731d : [iOS] Add nonVarTypeModeGuard to fix the unit test (#39743)
2193fa119e : [JIT] consider side effects when trying moves in alias analysis (#39497)
3cf9b7d9ea : move mm_cpu from BlasWrappersCPU.cpp to LinearAlgebra.cpp and delete the former file (#39700)
1b99be9088 : Freezing Module containing fork subgraphs (#37044)
0fe1ec3ce0 : [quant][graphmode] Test weight observer for dynamic quant (#39687)
2a06a6935c : [quant][graphmode] Support propagate dequantize for nodes with multiple outputs (#39551)
e46060701d : [caffe2] Fix of initializing ATen's CUDA before using caching allocator (#39759)
56289ba31f : [JIT] Improve error message when type annotation Future without a contained type (#39751)
4c5a808d37 : avoid dividing by 0 in div unit test (#39736)
be3bbfc917 : [futures] Add collectAny() to ivalue::Future for completeness (#39597)
bccf8831b8 : Allow initializing TestCase() outside of unittest.main() (#39695)
c902146ba4 : [quant][graphmode][refactor] propagateQuantizationOps (#39550)
428bc90978 : [JIT] add dtype as type annotation (#39741)
4e30146368 : Use `ProgramFiles` environment variable on Windows (#39707)
0f39ed86a7 : Cleanup debug info switches with MSVC (#39703)
f1e6e56641 : Add aarch64 ci badge (#39698)
b84a7fbbc1 : Fix error message in autograd (#39729)
3bdbb27ddb : Fix Gather::apply accessing moved tensors (#39733)
f31aca3a11 : Cleanup cuda install scripts for Windows jobs (#39712)
2633a9cca1 : Adding LpNorm regularization for sparse features in DPER3 (#38582)
f1c60c04b8 : [JIT] Fix module interface test (#39592)
3413f0a8ca : Fix dll load failure in virtual environments on Windows (#39622)
18073ffca3 : Add tests for mismatched dtypes in torch.gather. (#39689)
8565ae5a76 : Revert D21925406: [pytorch][PR] torch.multinomial : fast-path for replacement=False
9733390998 : Add `torch.flip{lr, ud}` (#38599)
4ec86ca5ba : [iOS] Disable depthwise3x3_winograd on iOS (#39591)
7d85e77076 : Use atomic operations to manipulate current RPC agent (#39663)
af05158c56 : torch.multinomial : fast-path for replacement=False (#39636)
338a1ccce5 : Fix error handling for rpc.remote (#39605)
aa5ccf9626 : Kill dead pairwise ops in THC (#39070)
1790d35848 : Skip `test_minmax_illegal_dtype` on XLA (#39693)
0251ba6108 : Fix ONNX export of RNNs with no bias (#36894)
9f71997380 : some refactor on register_distributed_ops (#38657)
f32c9eb579 : [jit] register distributed backward (#38494)
d493918436 : [dist_autograd] expose distributed backward C++ API (#38656)
e033db0477 : Enable RRef timeout for tensorpipe (#39531)
afb2d27b24 : Migrate AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES to c10::complex (#39296)
d1cdf1fd56 : update convert_sync_batchnorm docs (#39646)
1f7557d173 : Migrate `diag` and `trace` from TH to ATen (CUDA) (#36876)
64192ca3da : Skip unit tests relying on MKL if compiled without it (#39672)
8004d35979 : Remove tuple from reduction (#39433)
9551fb22d6 : [quant][graphmode] Preserve numerics in debug option for clamp ops (#39219)
dd5aa1fb22 : Cleanup unused args in max_unpooling3d (#39664)
b7b7433561 : setup: Add long description to wheel packages (#39676)
84d8d68397 : .circleci: Fold postnightly workflow into nightly (#39669)
0147216a46 : [TensorPipe Agent] Documentation fixes and nits (#39467)
bba30d1bd8 : Add undefined tensor gradient support to all backward functions (#39400)
8251f1872f : .circleci: Move ecr gc build job to ecr gc workflow (#38523)
83dd56632e : Fast tanh for the LLVM backend. (#39528)
df2d19723a : c10/util/complex_math.h and c10/util/complex_utils.h should not be individually included (#39276)
397b24bb37 : Cleanup rref_impl (#39530)
a2125135ee : [predictor] move fblearner/predictor to platform009
ab6c447f59 : [ROCm] Enable AMP autocast tests on ROCm (#39616)
cc2f7fa502 : Revert D21930435: Revert D17923732: Optimize GroupNorm on CUDA
e4f9c74db3 : add dtype checks for scatter/gather family of functions. (#38646)
e3e8f24cbe : Remove duplicate 'with_gil' declaration. (#39540)
a83f7a1d70 : Revert D17923732: Optimize GroupNorm on CUDA
e41fe60867 : Add error message when negative stride is passed to as_strided (#39508)
820e81ba09 : add overload name for min/max with list input (#39614)
b83fed8d4c : [futures] Add c++ ivalue::Future collectAll() helper (#39119)
172f31171a : [quant] QNNPACK deconv kernel and tests (#36790)
6c56671fd9 : [jit] avoid pre-convert tensor to cpu in pickling (#38898)
1db4a31d92 : [quant] QNNPACK deconvolution packing (#37405)
ee2bc13f44 : Fix smoke test jobs (#39638)
b06b792bbd : remove double registered ops (#39609)
8177637374 : remove duplicated op schema for aten::pow (#39606)
614dd03272 : Optimize GroupNorm on CUDA (#28204)
ebdff07d49 : instancenorm: static quant graph mode support (#39096)
b443ca26c5 : groupnorm: graph mode static quant support (#39095)
952deba828 : layernorm: eager mode qat support (#39094)
b530176d10 : instancenorm: eager mode QAT support (#39093)
202625ba9e : groupnorm: eager mode QAT support (#39092)
2140874228 : instancenorm: eager static quant support (#39091)
f9b675f7b6 : groupnorm: eager static quant support (#39090)
26bc272793 : quant: clean up normalization channels_last handling (#37802)
8a4597b808 : [quant][graphmode] Dynamic quant: Insert observers for module output (#39458)
67115b226a : [quant][graphmode] Dynamic Quant Do not depend on input shapes (#39412)
6d13b583a7 : [quant][graphmode] Support conv*d_relu in traced models (#39490)
faf0a3bd7a : Move bernoulli_() to DistributionTemplates (#38558)
a25b1b918b : Fix __STDC_FORMAT_MACROS redefinition issue for TypeDerived (#39608)
183b04da3e : [pytorch] remove tracing logic from gen_variable_factories.py (#39514)
9db27a50b4 : [pytorch] add operator name to callBoxed() error message (#39562)
e4627e5dba : [quant][graphmode] Fix add_relu patterns for scripting and tracing (#39455)
2da5444221 : [Resubmit] Fix argmin/max bug (#39576)
644d6a09e6 : add overload name for aten::as_tensor (#39610)
b28422d444 : add overload name for str cmp (#39607)
479b04e26a : Improve DistributedSampler docs and add seed option (#39628)
f2af07d7f6 : Fix circleci postnightly jobs (#39627)
53c19423cf : Update TensorPipe submodule (#39598)
6a75f650dd : Implement Quantized Version of Threshold Function (#39352)
3669e45736 : [jit][subgraph_matcher] Enable regex matching for string attributes of node (#39454)
856215509d : [jit] update to serialization doc (#39025)
834569232b : [online trainer] Add blob reorder (#39534)
e29d873e68 : disable autograd while preparing Tensor for printing (#39420)
e35199a691 : observer bench: add CUDA (#39360)
545a3e1eca : Remove test_nccl from ROCM_BLACKLIST and enable only a couple of test_nccl tests (#39354)
97a2918a07 : reduce number of bailout nodes (#38281)
88fe05e106 : [Docs] Update torch.(squeeze, split, set_printoptions, save) docs. (#39303)
0031108b60 : Support torch.Tensor subclass (like Parameter) input. (#39487)
a6690bdb5b : fix input schema check for spatialbn
51504cb8dd : Fix IDE hint channels_last & preserve_format (#39120)
77798a45a6 : Un-inline Functions.h into Functions.cpp (#39446)
e2a178ca21 : Update caffe2 hypothesis_test_util to support hypothesis-5 (#39498)
baf6ed0238 : Release GIL when deleting users and unforked owners (#39555)
9bfb91b50b : Fix possible deadlock in _wait_all_workers (#39535)
8a6914ddb2 : Add @rpc.functions.async_execution for rpc.remote (#39486)
11abb75362 : Make @rpc.functions.async_execution processing generic (#39485)
fa4ed17183 : Explicitly decref in UnpickledPythonCall dtor (#38398)
876b9591dc : Refactor unittests for activation functions relu, elu, and sigmoid (#39190)
7d56ef27ee : Bumps supported file format in anticipation of torch.div changes (#39529)
17aebe909f : Added Operator_Schema's for missing FakeFP16 Operators (#39363)
f94a171e6f : [quant][graphmode] Test for another type of ops in insert_observer for if (#39380)
da8191a9ad : Remove useless copy on zip file load (#36362)
ed12df64ca : misc updates to fake fp16 tests (#39405)
2a513a6a2b : Do not raise decorator (#39532)
b861daf098 : Reduce time spent per guard by comparing TensorType with Tensor (#39098)
8811e4d00d : Add/fix typing annotations to some functions (#39075)
da2f8c9f1f : deepcopy() of Objects should call __g/setstate__ (#39500)
4e5af8d146 : [ONNX] Fix type casting for reduce ops (#38829)
da2004e132 : Upgrade lint. (#39483)
fe684679b0 : Fix overflow issues when unpacking large numbers (#39140)
49b69b2ade : [JIT] fix broadcasting lists of ints (#39481)
7676aa79ec : .circleci: Move binary builds into their own workflow (#39379)
eb5e0376a2 : Selective enabling of xnnpack based max_pool2d in ceil_mode. (#39447)
7680358122 : Move some of the definitions in LegacyNNDefinitions.cpp closer to sites (#37531)
335e4a1e3b : Add arcosh, arcsinh and arctanh to unary ops (#38388)
ada2652ca6 : Restore docs coverage test via sphinx (#39331)
b4aceb3884 : Fix lint (#39527)
af91df68ed : Remove cuda init patch (#39222)
ac25267753 : fix build table for ppc64le (#39475)
002b19da92 : Add SymbolicShape and replace all uses of VaryingShape<ShapeSymbol> with it (#38544)
11a60b9942 : Clean up thrust::complex from rsqrt (#39294)
92c6776761 : Fix lint (#39517)
8b2bb02e09 : Implement timeout support for RRefs (#38590)
72b0447f8d : [pytorch] move tracing logic to a separate dispatch backend (#38467)
03eca384fd : Optimize GroupNorm on CPU (#28203)
4a0a38c17a : Revert D21652452: [pytorch][PR] Fix for num_threads==1 in OpenMP "parallel for"
67cea74dd3 : Add rpc.async_function decorator for TorchScript functions (#39267)
0829cadca3 : Implement rad2deg, deg2rad (#38852)
4d597cb794 : [ONNX] Update pytorch/onnx doc (#39480)
cc991bbf19 : fix internal targets for layernorm (#39501)
2f7f47eba1 : [ONNX]Enable tests in test_operators.py (#39431)
0102bbf01e : move to.prim_dtype to lite interpreter (#39456)
4d880c0693 : Device and torch._C function cleanup (#38173)
4f7c7e2e76 : [caffe2] compute r_correction only for radam to avoid sqrt(negative) (#39393)
adc13432fe : Enabling lite interpreter in torch python API (#39181)
3370c045ae : Remove copy_imag and copy_real methods (#39065)
5b23f56d5a : Selective build on Training, query based. (#39452)
d137710a64 : LayerNorm Fake FP16 Op debug (#39476)
c0d3d2f60f : Retry/skip test on URLError rather than on HTTPError (#39477)
cb530fcd3c : Enable some test cases in `test_memory_format_operators` (#38648)
9ed5efda47 : Adds TestCase.compare_with_numpy (#39179)
d31e84497c : [TensorExpr] some cleanups / fixes for LoopOptions (#39408)
e4657fe194 : Revert D21579607: [pytorch][PR] Do not call optimizations within freezing API
2ed4ed8733 : [TensorExpr] Fix two bugs in Rfactor (#39268)
dbec0febd2 : Update key_padding_mask arg docs in MHA module (#39321)
5cfd1a190e : Do not call optimizations within freezing API (#38499)
46447045ea : Replace torch.allClose with self.assertEqual (#39424)
5d2cfb3d4c : [torch] remove integer conversion resulted in a change of sign warning (#38968)
ec5d579929 : .github: Add initial target specifier config (#39378)
21ba3b4f40 : Fix `torch.backends.cudnn` mypy error (#38947)
6a60a8c1da : add_observer: respect device affinity for ReLU (#39337)
884e16b41a : `as_strided` : add size and stride length check (#39301)
5beb3b0c53 : [TensorPipe] Re-enable dist optimizer tests (#39441)
b1dab266f7 : [TensorPipe] Re-enable dist autograd tests (#39440)
aea09f5155 : Leak safety in RReLU (#39347)
c767d65caf : Added FPGA DispatchKey, DeviceType, Backend (#38938)
3f099879f7 : [TensorPipe] Re-enable RPC tests (#39406)
7417b4c66f : Fix index overflow in ConvTranspose3d [attempt 2] (#39198)
a05ef17e46 : Add rpc.functions.async_execution decorator for rpc_sync/rpc_async (#39216)
15ad9dd30f : [ONNX] Bump up ONNX submodule to a82c6a7010e2e332d8f74ad5b0c726fd47c85376 (#39372)
abe2be2063 : [resubmit] Use TensorMethods.cpp (#39385)
a952f9bb06 : Fix for num_threads==1 in OpenMP "parallel for" (#36479)
36607c85ee : [TensorExpr] eliminate zero length Allocations in IRSimplifier (#38794)
f166b934ee : [JIT] Kill _cast_* operators (#39348)
8638df45ae : call DoRunWitType on Layernorm (#39409)
ebd4125e7e : [JIT] Make torch.unique_consecutive compatible (#39339)
c6720f0d6b : nit on functional autograd (#35493)
89c0efb30b : Also set CMAKE_C_STANDARD for MSVC (#39304)
71af538e31 : Updated assert to remove check on 3rd dim for MHA (#39402)
a864dbb360 : Make `_C` extension a thin C wrapper (#39375)
09bea13981 : support flip and rot90 for complex dtype (#37826)
58cb369dfa : Replace calls to contiguous with contiguous(suggested memory format) (#38433)
0d96f26404 : Kill THC_logical{Value, Tensor} (#39069)
a6f0051db2 : Fix test_get_and_set_timeout for TensorPipe Agent (#39353)
fca928cabf : [caffe2] fix test error in video_input_op_test (#39382)
04ac41fe70 : [caffe2] format video_input_op_test.py (#39381)
c3ddb3f7a4 : Add rocm image to circleci docker builder (#39262)
8bc5a4939f : Add prim::data to lite interpreter (#39335)
cca29f2969 : [Onnxifi] Support quantized output in Onnxifi (#39230)
35719cdc85 : Fix some bugs of argmin/argmax and min/max (#39212)
11f1014c05 : Adding lost extra_repr() and __setstate__() to activation.py (#39084)
b5cd3a80bb : Return `None` instead of `False`, and change return type `bool` to `None` in type stub (#39324)
bb0377bb24 : Expose torch.futures.Future (#39008)
b3fac8af6b : Initial support for building on Ampere GPU, CUDA 11, cuDNN 8 (#39277)
85b3fa031c : [WIP] Layernorm Fake FP16 Op. (#39103)
30146d7391 : More fixes about using Windows API through ctypes (#39376)
e358adb42c : [TensorPipe] Acquire lock when adding message to timeout map (#39398)
e142d70383 : [TensorPipe] Guess IP addr in separate function (#39397)
de5b8797e6 : Remove unboxed only from AMP registrations for cat. (#39156)
283a3ff16d : The exception raised when RandomSampler.replacement is non-boolean should be TypeError (#36547)
413f023784 : Clean up cast from c10::complex<T> to thrust::complex<T>, and update the workaround CUDA version to <10.2 (#38941)
68f23d566a : [pytorch] Let jit.unused ignore unsupported method signature (#39336)
f4365cf5ba : [JIT] Add support for saving/loading of lowered modules (#38893)
858ab75046 : ONNX Export Support for Celu (#38243)
ed26e8b0a0 : Resubmit [Onnxifi] Generic way of passing output shape/type hints (#39377)
f16b04f8b3 : [caffe2] Update shape info delimiter (#39275)
f29fa06c52 : [quant][graphmode][fix] Run preprocess for child module before parent module (#39368)
625f4e39a7 : [quant] Fix fusion pattern for add_relu (#39367)
3001facd7a : [doc] [distributed] fix typo (#39264)
a47e2d4488 : [Futures] Allow setErrorIfNeeded arg to have type FutureError (#39113)
f117089810 : Restore thrust path for 1d tensors cumulative ops (#39180)
e286cb5e81 : Revert D21781515: [Onnxifi] Generic way of passing output shape/type hints
7b07208d86 : [Onnxifi] Generic way of passing output shape/type hints (#39229)
c5def603a7 : Use @skipIfNoFBGEMM instead of direct check (#39068)
e6d86036e2 : Fix return types of Windows API functions in __init__.py (#39334)
295a23f43f : [Futures] Added markCompletedIfNeeded API (#39080)
48e66859c1 : Check illegal output dtype for torch.{min, max} (#38850)
a3c87c4922 : Make Optimizer.state_dict() nondeterministic (#37347)
7f1a96d43c : Adding sparse Lp regularization operator to Caffe2 (#38574)
6d3e4aa0f9 : Made sure torchscript compiles in optimized mode (#38888)
f76e05a2e1 : Automated submodule update: FBGEMM (#39322)
cffa0bee04 : Don't generate DeviceGuard for CPU wrapping code. (#38806)
2b6a48e962 : Remove supports_named_tensor from codegen entirely. (#38739)
45baf0e1a0 : [Profiler x RPC] Enable RPC Server Global Profiler (#38847)
bdaa78499e : Reland Refactor c10::complex and cleanup c10::Scalar (#39306)
39d037253c : Test PyTorch using python-3.8 + GCC-9 on Bionic (Reland) (#39121)
78244f8129 : Kill CPPTypeToScalarType. It's now subsumed by CPPTypeAndStdComplexToScalarType. (#39263)
ddf6d49445 : Avoid defining bogus CPPTypeAndStdComplexToScalarType<void> by using some decltype tricks. (#39261)
07518e120b : [nvFuser] add torch.jit.fuser context manager (#38993)
42b2dee6c2 : `verbose` unused in `torch.backends.cudnn` (#39228)
c193bd41f5 : fake_quantize: respect device affinity (#39031)
2fe0fc2684 : Revert D21374247: Use TensorMethods.cpp
1943a2c317 : Fix missing code in 'Installing C++ distribution of Pytorch' (#39237)
7773a45c0d : Division by zero crashes for fmod operator (#32699) (#38919)
dc4fd0409f : DOC: remove java documentation (#38920)
a566451017 : Migrate AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND2 to c10::complex (#39285)
aa5afbdb92 : Add dynamic_cast asserts to CPU Loops. (#39258)
9b05b1bacf : Make needs_dynamic_casting multiple-complex-type aware. (#39255)
f9edbda7d7 : Loops: Separate out dynamic_casting concerns from complex overloads. (#39254)
caaea084e9 : [caffe2] minor typo fix in fused_rowwise_nbitfake_conversion_ops.h comment (#39315)
5153cdbe87 : [TensorExpr] fix a bug in ReorderAxis when there are trailing loops (#38841)
68e62b9ab6 : Use TensorMethods.cpp (#37639)
f872cf5ed0 : Add %= support in TorchScript (#38983)
8556664d68 : Revert D21769463: [pytorch][PR] Refactor c10::complex and cleanup c10::Scalar
928ce29ee2 : Refactor c10::complex and cleanup c10::Scalar (#38593)
fcef43965b : [AMD] Fix broken test (#39297)
b7b99ab0c8 : [ONNX] Remove Aten ops from ONNX export (#37239)
c02cb7aa08 : [nnpi fake ops] bug fix int8QuantizeNNPI (#39271)
b0d6e4b604 : work around building onnx in older rocm docker images (#39253)
25a6c5f60f : [quant] Dynamic Linear module to use reduce_range (#39125)
9cacbe29e5 : [quant] Add reduce_range argument for qlinear_dynamic (#39041)
001102c50c : Avoid a TensorIterator/Loops reinterpret_cast in a test. (#39246)
7ab96461d0 : Remove some unnecessary code in Onnxifi (#39197)
a5e023f28a : Set RecordFunction id only when needed (#39265)
1c67c3d587 : test_fc_nnpi_fp16.py test_fc_num0_fix fix (#39248)
1d0ec50a02 : [quant][graphmode] Rename _quantize_script.py to quantize_script.py (#39122)
a50d781c03 : Added real and imag views as tensor attributes (#39033)
c3d3782c80 : Fix init-shutdown race condition in autograd engine (#39194)
88c5fd94e7 : [nnpi eval] enable int8 eval with emulation Int8FC (#39112)
29c04acdbb : Followup for cuda assert cleanups (#39220)
a5d44800f0 : Implement CUDA_KERNEL_ASSERT for MSVC (#39218)
c25b3d4305 : [ROCm] in test_cuda.py, re-enable skipped tests (#37952)
85d0292c14 : [quant][graphmode] Cleanup inplace API (#38827)
7836eaceee : [JIT] JIT should let people know we inferred an argument as a tensor (#38527)
86f46ac9ca : Fix assertNotEqual error reporting (#39217)
f44fca882e : Update NNPI backend to v0.6.0.5 (#4539)
b44f02f8f5 : Fix windows upload jobs (#39249)
10e2126b10 : support complex types for `cumsum`, `cumprod` (#39063)
4b5e87f94a : Revert D21751663: [pytorch][PR] Fix argmin/max bug
d6715e6364 : Improve warnings to actually point at user code (#39143)
d1212e5814 : [TensorPipe] Use PrefixStore to avoid conflicting keys (#39185)
99f6df3c07 : [TensorPipe] Bind to hostname's IP address instead of localhost (#39184)
3ac0ec3dab : [TensorPipe] Don't use separate heap allocation for metrics (#39183)
587d453b0f : [TensorPipe] Ignore expected errors (#39182)
debb7ba6f4 : Update TensorPipe submodule (#39189)
fce01a9bab : [JIT] Make new zip serialization for torch save/load significantly (~70%) faster (#38379)
b08a4aaf3b : [PyTorch] Fix operator perf observer index issue.
d0650af2fb : Change __CUDACC__ and __HIPCC__ to __CUDA_ARCH__ and __HIP_ARCH__ in NumericUtils.h (#39213)
2331853236 : [caffe2] Fix the correctness check for GivenTensorFill operator
2f49757372 : Remove sumall from TH, THC, THCUNN (#39042)
ca6579bd40 : Regenerate config.yml (#39215)
a04fb2ab22 : [Reland] add xenial + cuda 9.2 + gcc 5.4 CI test (#39036)
f7a8851e9e : Fix argmin/max bug (#38946)
527ee63b7d : fused convbn: respect the strict argument when loading from state dict (#39205)
98a755bc8f : Migrate AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND1 to c10::complex (#39045)
41363b299a : test_bottleneck_cuda works on ROCm 3.3 (#38249)
0e8c65f756 : Add timeout to TestBottleneck (#39191)
9c19a12965 : fix asserts in cuda code (#39047)
0b9d537056 : [dper][pruning] add histogram op (#38514)
928e99b9bb : [vulkan] jni build support USE_VULKAN (#39188)
ee3bd10445 : Moves angle/abs test to test_torch (#39154)
7b343cc30f : .circleci: Remove setup job (#39081)
feaf72088c : short-circuit pow for complex 1 and 0 exponents (#39117)
5e975cf8d6 : Stops cross-device data movement in tensor iterator (#38998)
d26f7f09b5 : Fixup: rename BatchedTensorKey to Batched (#38798)
e029d678b6 : Make collect_env more robust on Windows (#39136)
5267b17a96 : Revert D21748644: [pytorch][PR] Fix index overflow in ConvTranspose3d
b98948e6dd : implement dynamic bucket order in DDP (#35137)
7e1cc2daa5 : Revert D21729544: add overload name for op eq.str
c2133179a9 : add overload name for op eq.str
f58cc4b444 : [RPC] Fix flaky test by waiting for async rref calls (#39012)
377a355bcc : [TensorPipe] Detect duplicate worker names (#39011)
72f2ff5950 : [TensorPipe] Improve serialization (#39010)
65aa2b65e5 : [TensorPipe] Close and join TP context at shutdown (#38934)
54046c1024 : [TensorPipe] Implement join correctly (#38933)
49e4e41fdc : [TensorPipe] Always complete futures from thread pool (#38930)
eaca6f32b0 : [TensorPipe] Do not mark future messages as complete after they have timed out (#38931)
510971f86c : [TensorPipe] Fix lock inversion upon response read error handling (#38929)
0413e1e624 : [TensorPipe] Fix timeout computation (#38928)
7866854184 : [TensorPipe] Add cases for TP in RPC test helpers (#38927)
7b90ed1117 : [TensorPipe] Pass names of endpoints to context/pipe for easier debugging (#38926)
1d1f16079d : [quant] Add save/load state_dict to quantized dynamic RNNs (#39105)
78acc9dffb : Check reinterpret_cast of complex bidirectional (#38882)
bfcb687b9c : Nearest interpolation gpu implementation fix [Resolves issue #38985] (#39055)
5702a28b26 : Fix index overflow in ConvTranspose3d (#38970)
7e16dd299a : [ROCm] enable mem leak check for rocm (#35953)
0d4eefcd82 : fix comments in gradcheck (#38877)
e088902b4a : Add type-hint check for default arguments in TorchScript C++ frontend (#39021)
7543e7e558 : Migrate minall, max, maxall from THC to ATen and cleanup THC (#39029)
f5bc91f851 : Get rid of multiple inheritence in test_torch (#39110)
01815be1e4 : Infinite timeout for operations against ProcessGroup for RPC (#38577)
b0420cc2de : [Caffe2] Change shape_hints format (#39100)
05f097b5bb : Implement logaddexp (#38384)
90a8cdfdbf : Automatic update of fbcode/onnx to eae3eb8c61cf5ad27cc9a416dbdc5274982385a6 (#39089)
988e31c788 : Revert D21752017: [pytorch][PR] Test PyTorch using python-3.8 + GCC-9 on Bionic
d92ef9268d : Revert D21728402: Simplify precision-specification in tests.
cf8001d2d0 : [TensorExpr] Fix a bug in Rfactor when there are multiple reductions (#38733)
0f1f0a1f35 : update circleci scripts for rocm ubuntu bionic support (#39097)
b12a879184 : Correct Javadoc link to master (#39038)
30dd4acbf6 : Test PyTorch using python-3.8 + GCC-9 on Bionic (#39030)
fa184c351f : [JIT][to-backend] Fix compilation unit and name mangling of generated module (#38679)
ff2e29144c : Refactor backward compatibility tests to use override_qengines decorator (#38838)
20397285c6 : Replace use of np.allclose in tests. (#34287)
898d062bfd : [disagg_acc] In batch broadcast (#38700)
4239416c72 : Throws runtime error on attempted addcdiv integer division (#38762)
bb12e4dca0 : Add JIT fusion pass to fuse quantized add and relu. (#38897)
248758d702 : Expose qnnpack's maxpool when going through aten::max_pool2d (#38896)
c6e9e9359f : [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023)
c835dedce9 : Fix the issue that PyTorch doesn't construct bool tensors from non-bo… (#38392)
30063347e7 : remove serial_exec from scatter/gather kernel (#36181)
b636f5e324 : change the int8 test to use unquantized bias (fp32)
df4066bbb6 : Simplify precision-specification in tests. (#37181)
1c74d965ed : Fix attribute warning on gcc (#38988)
3d2fce6bc3 : Change len(DataLoader) for IterableDataset (#38925)
53b55d8f38 : Use ninja build as default for HIPExtensions (#38939)
dfc4be205e : Fix broken reference in sync bn doc (#38890)
0edf063c24 : Enable Constant Folding Tests (#38751)
e07ee1954d : [TensorPipe Agent] Message on Agent Shutdown (#38819)
d08a30a300 : [TensorPipe Agent] Improve Response Error Message on Agent Shutdown (#38818)
1093e26d72 : [ROCm] HIP version guard for occupancy API compatibility (#38551)
626048efd3 : Fix Windows binary jobs after migrating to the new circleci image (#39057)
7f1c9886cd : [ONNX] Enable models tests (#38791)
b789c1790f : Update to use the stable Windows image instead of the temporary one (#39066)
d627f2b174 : Support void return type in TensorIteratorDynamicCasting checks. (#38815)
45f32ceb4e : Move needs_dynamic_casting to a non-CUDA specific file. (#38813)
bbb5e106ad : Improve error checking of CUDALoops. (#38810)
b7882f9bd6 : Improve cpu/Loops.h arity asserts. (#38809)
13120bf677 : Updates assertEqual to require atol and rtol, removes positional atol (#38872)
9b95f757af : move num_profiled_runs to common_utils (#38687)
916084d933 : [JIT] Allow @torch.jit.unused to be used on TS classes (#38522)
93d87a16eb : Revert D21493165: Automatic update of fbcode/onnx to 20b3e10e6c3a9cdab90d2bb864d1c36d3e3651cd
de8c888232 : Fix torch.hub.hub_dir inconsistencies (#38969)
2b789e2e03 : [quant] Onnx export of quantized models with new API (#38736)
51274b501a : Automatic update of fbcode/onnx to 20b3e10e6c3a9cdab90d2bb864d1c36d3e3651cd (#38203)
362928d5dc : Remove unneeded const from process group agent header (#38804)
a25062ab50 : [TensorExpr] Fix elimination of For loops with empty bodies (#38883)
4fcd1c3123 : run te only for profiling executor (#38591)
63e545e0fe : Revert D21717199: [pytorch][PR] Updates assertEqual to require atol and rtol, removes positional atol
ba14a701dc : restore proper cuda assert behavior with DNDEBUG (#38943)
eddc3f61d0 : Migrate Windows build jobs to VS 2019 for CUDA >= 10.1 (#38959)
7e85f6f922 : Removes pickle deprecation warning (#39003)
44d418957e : [vulkan] TensorConversions remove explicit vulkan ifs (#39019)
f188b52b59 : Fix the issue that Bad interaction between no_grad and numpy conversi… (#38906)
2e6ee853ab : make onnx expect tests resiliant to producer_version changes (#39002)
c611b57bd1 : Add index number to THArgCheck error message (#38978)
2751dda7f6 : [docs] fix formula `torch.logcumsumexp` (#38952)
8650376444 : DOC: fix import error (#38921)
47869b1b12 : Windows build updates (#39035)
ccab142197 : Add ROCm-specific half_support_literal for JIT. (#38899)
0d649efb81 : Updates torchvision version (#38848)
12c219de54 : Fix histc with empty tensor error (#38987)
c40a79a027 : [c2] cuda impl for WeightScale op (#38712)
224ce03ebe : Revert D21681838: add eq.str op to lite interpreter
e4a3c584d5 : Fix max_pool2d nchw backward bug (#38953)
0ff1aa9058 : Port TH cum{sum,prod}_cuda to ATen (#36458)
996b6a3d00 : [vulkan] Fix python overrides tests for is_vulkan_available (#39016)
583ff947e1 : Fix max_pool2d for returning wrong shape with return_indices=True on cuda (#38992)
c82375306c : [vulkan] Fix Bazel build, add aten/native/vulkan/stub/*.cpp (#39018)
fc4dfbf700 : Remove reference of CUDA < 9.2 (#38977)
108321dc41 : move int8 fc operators and dependencies (#38935)
4d1df74c7c : Use a temporary file during ReducerTest (#39004)
1fef2075a5 : Disable some unsupported module for 32-bit build (#38950)
81daadf651 : Expose VC_YEAR in Windows binary test jobs (#38957)
6ddca30b2d : Updates assertEqual to require atol and rtol, removes positional atol (#38872)
341fd63ff6 : add eq.str op to lite interpreter (#38859)
b460465a18 : [Mobile GPU][Integration] Vulkan backend integration (#36491)
1fa0bb6d9d : Use workspace to persist and restore images for Windows CI build and … (#38971)
f07a60fcd4 : Updating submodules
c34b333230 : improve accuracy of logsoftmax computation on cuda (#38945)
389e16c33b : `torch.pow` Add type promotion support and fix issue with __rpow__ (#37098)
ba3893e736 : Rename `torch._C.Generator` to `torch.Generator` (#38773)
b8f2ecbfb6 : Update TensorPipe submodule (#38923)
5749ef75d3 : Update ShipIt sync
7e6f6f522f : [PATCH] Migrate min from THC to ATen and remove _min (#38440)
d035d05080 : [pytorch] expose __ldg(const Half* ptr) to Clang in host mode (#38151)
f3f3097a4c : Use old circleci windows image for both CPU and CUDA (#38909)
cd5d7a34b8 : [JIT] Factor out aliases to separate test (#38746)
f90dc741eb : [JIT] Normalize op aliases (#38735)
5183e3aa16 : [JIT] Rename canonicalize ops (#38734)
22454c5aeb : Collect and upload error logs if VisualStudio installation fails (#38902)
4c0bf93a0e : Revert D21057090: Remove useless copy on zip file load
a53422e0ee : [FakeLowp] Open source more c2 ops (#38878)
8d8b586c7a : fake_quant: make qparams shape consistent (#38587)
455bf77da5 : Remove useless copy on zip file load (#36362)
8e69c3be17 : [nvFuser] Reduction support in codegen, fp16 support (#38627)
d3b0cf9ae9 : Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND (#38462)
9b9fc59b0a : Add cuda version of clang9 image. (#38825)
f9eb8824f1 : Remove datatype from Storage and StorageImpl (#38870)
9b656dac7f : Switch AT_DISPATCH_COMPLEX_TYPES_AND and AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX to c10::complex (#37697)
0e2a0478af : Support paths with spaces when building ninja extension (#38670)
b1982c4bdb : Fix multiline signatures in docstring (#38768)
acc181c2ea : Document `torch.utils.cmake_prefix_path` (#38727)
6d4d508d8e : Log incorrect device in ProcessGroupGloo (#38844)
5dd65ba634 : .circleci: Add simple backup and restore solution for RCs (#38690)
481838f21b : Sphinx parallel build (#38785)
a40049fd2a : Better handling for msvc env when compiling cpp extensions (#38862)
4e46c95826 : Fix cpp extension build failure if path contains space (#38860)
b9105f42a1 : Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX (#38792)
bf9395438f : Disable test_nccl for ROCm (#38801)
07bed4b7ef : remove redundant contiguous in unfold_backward. (#38871)
3487744821 : Add `torch.logcumsumexp` (#36308)
b88b7d552f : Prevent custom Functions from creating non differentiable type that requires grad (#38326)
0f1669181a : Add specific list of supported types in autograd (#38325)
a83f25314b : Some TODO fixes. (#37829)
2c2fe6356a : Add a check for stride==0 in gradcheck (#38774)
6f0e53624d : Enforce that named_tensor_meta_ is non-null only if there is a non-wildcard name (#38725)
1ea80b4234 : [ROCm] Set correct tolerance values for bfloat16 div tests (#38823)
d363cf4639 : Fix incorrect __torch_function__ handling in einsum (#38741)
664a3ab5c7 : Enable py38 gcc9 build config (#38805)
e9902358df : Support fp16 output in OnnxifiOp (#38846)
65e8fe1832 : Perf optimization for conv and gemm kernels. (#37626)
0b2a861507 : convbn fusion: add backwards compatibility support (#38820)
4d5d9c0455 : qat syncbn: add test coverage (#38738)
a8d8fc5532 : [quant][graphmode] Different rule for add/add_/mul/mul_ (#38667)
57d6e19d6f : Use union to cast between incompatible function pointers (#38842)
6cf5c71b32 : Updating submodules
48116ac8d0 : Revert "Revert D21593870: Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND2" (#38814)
f80df4ca79 : port `scatter_add` to ATen (CUDA) (#38262)
83fa3f1c36 : Add HIP to the memory profiler device list (#38795)
c02e7c464a : Replace import cpp_benchmark with `torch.utils.cpp_benchmark` (#38832)
9c88b23fa0 : [bug] Binomial distribution BTRS algorithm has small chance of returning -1 (#38456)
267a8da1bb : Fix broken windows build due per channel quantization stack land. (#38828)
a7a69ad104 : Fast path for contiguous tensor (#38732)
5b8a79ab49 : fix the device inconsistency for import convert_sync_batchnorm (#38729)
6736a76cec : Back out "[RPC] [Minor] RPC entry point cleanup"
604533ddfa : [CircleCI] Add Python3.8-gcc9 config (#38747)
5b1814e44d : Added per channel separate test cases for fc and deconv tests. (#37624)
0a554aeed5 : Changes to enable per channel support on dynamic linear. (#37623)
b8eae1e3b1 : Enabled per channel quantized static linear/conv (#37622)
1c9a110b22 : Added per channel kernels for depthwise conv. (#37621)
1f16d4ce1c : Changes to enable per channel requant. (#37620)
622f5b68f0 : Enable per channel zero point. (#37619)
f1991ca8e7 : Interface changes to enable per channel quant. (#37618)
96d7defb4b : Revert D21593870: Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND2
3b254acd99 : support complex types for tanh_cuda and tanh_backward_cuda (#38786)
51b25218c0 : Remove deprecated cuDNN API from caffe2 (#38680)
90400f48fc : Enforce tensorboard minimum version as 1.15 (#35952)
4b248393b7 : Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND2 (#38459)
c82b873dbf : Migrate CPU min max to c10::complex (#38461)
c039540d10 : [Onnxifi] optimize the dispatcher ordering (#38766)
d8b9448c62 : [pytorch] reorder tracer code in generated VariableTypes (#38308)
cae45e416e : add skipIfRocm to TestAutograd.test_memory_profiler (#38790)
fe66bdb498 : port masked_select from TH to ATen and optimize perf on CPU (#33269)
f4f0dd470c : Migrate CPU clamp to c10::complex (#38460)
ca1978c9db : For jobs need a merge, merge with origin/master for ghstack PRs. (#38745)
8666ea0cd1 : Remove duplicated entries in `native_functions.yaml` (#38389)
c78691b4a6 : [CPU] torch.gather for complex dtypes (#36430)
a3bab37d96 : Add BatchedTensorImpl (#38424)
c60daedb36 : Migrate CPU eye to c10::complex (#37899)
1465970a34 : Update valgrind version build from source (#38754)
42870ddf24 : Generate Dynamic Shapes (#37693)
9e910a95b0 : Add `torch_python` and `_C` library to bazel build (#38707)
530d48e93a : [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
7587188037 : Skips test_float_to_int_conversion_finite on MacOS (#38753)
40ce90bfc1 : Revert D21560096: [Tensorpipe Agent] Enabling tests with OSS CI
64584573f9 : Updates tests for integer division deprecation (#38621)
5af4e76683 : Back out "Revert D21530545: Remove call_unboxed_super_slow_temp_shim" (#38742)
5fb26b1022 : Delete cuda9-cudnn7 build which is not defined in build.sh (#38750)
9907a3eb65 : Update Argmin/Argmax ONNX Export (#38329)
cbd0adc7b4 : Migrate CPU unary ops to c10::complex (#37898)
bcf8973654 : Add `torch.utils.cmake_prefix_path` pointing to `share/cmake` folder (#38559)
363a2d9455 : Revert D21530545: Remove call_unboxed_super_slow_temp_shim
235f62417d : Fixes for profiling JIT code (#38453)
a94fb71b12 : Memory profiling (#37775)
24b48372b9 : Revert D21626921: override gcc version in cuda related test
b995540a01 : Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
87b198d309 : add distributed/test_nccl to ROCM_BLACKLIST (#38730)
befc76bb65 : [RPC] [Minor] RPC entry point cleanup (#34292)
423a00ad39 : Remove call_unboxed_super_slow_temp_shim (#38351)
959afe0726 : Overload bitwise NOT, AND, OR, XOR operators for `at::Tensor` (#38691)
ab169fa5ac : Fix find_first_set for x86 MSVC (Updated) (#38706)
87aa2d25ae : [Tensorpipe Agent] Enabling tests with OSS CI (#38447)
b2991c105a : [Tensorpipe Agent] Dist Optimizer Tests for Tensorpipe Agent (#38446)
b782ad3b9e : [Tensorpipe Agent] Dist Autograd Tests for Tensorpipe Agent (#38445)
7492e98c7f : [Tensorpipe Agent] RPC, RRef tests for Tensorpipe Agent (#38444)
55914f8e83 : Add skipCUDAIfRocm to test_nn test_softmax_results. (#38724)
9ad14f6b43 : cover nn.Conv1d in mkldnn model conversion logic (#38528)
6fd48e24f1 : Add support, test for kwargs in jit._fork (#38357) (#38665)
819da00b3d : Fixes floordiv dunder registrations (#38695)
1ef77f9045 : [quant][graphmode] Different rule for handling `aten::cat` (#38570)
dfbf9f397f : Back out "Back out "[c2] register cuda op for LpNorm (fallback)"" (#38566)
54d4b419db : fix clip_grad_norm to work with parameters on the different devices (#38615)
b14734d92e : Add bfloat16 to CPU cauchy_kernel, log_normal_kernel, exponential_kernel (#38427)
35beff0b9f : RNG infrastructure improvements (#37984)
7d38db0f9a : [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
320c35681d : [TensorExpr] (trivial) unique Kernel input names (#38678)
e5ada042b1 : QAT ConvBN: remove explicit folding and use BN instead (#38478)
8d64986202 : Fix target determination file diffing (#38661)
f3b5c22dba : Update On "check-doxygen.sh must be run from docs/cpp/source director… (#38641)
f6f1384811 : [JIT] Refactor attributes to support buffers and parameters as first class citizens, add support for iterating over `named_buffers()` (#37905)
c4d3b042e8 : Cleanup BUILD.bazel (#38699)
1a3f646b9c : Regenerate .circleci/config.yml (#38705)
ddfd720e5d : Redundant schema registration Prevention for Manually Boxed Wrappers (#38588)
5e55f0805f : override gcc version in cuda related test (#38675)
fc19747d64 : handle grad with `stride=0` on GPU MvBackward (#38321)
86397f6b24 : [quant] Remove get_qparams in Observers (#38435)
d5461e7ac8 : [quant][graphmode] Move processing code to prepare_script (#38669)
91163addf8 : organize verbatim sources with subdirectories (#38688)
f184ec819d : Do not use "buffer" in reentrant autograd err msg (#38625)
958313a79f : Fix memory usage increase reported in #38568 (#38674)
f3048609d3 : [CUDA] torch.roll for complex dtypes (#38664)
724b2b6ebd : Profiler: Call `populate_cpu_children` inside `__str__` and fix typo (#37816)
49d687f23c : [JIT][to_backend] Move code that is not related to the user-facing API out of `jit/backends/backend.h` (#38567)
76fc9bd2ef : Docker constants refactor (#38676)
2f21dfb541 : [TensorExpr] Eager reduction initialization & removal from ReduceOp (#38585)
97abed7cbe : [quant] Remove TensorListObserver (#38584)
c430b7d80f : Updating submodules
8c07a98adc : Error out of default_collate for lists of unequal size (#38492)
d7e08b456d : FakeLowP Readme update (#38666)
378956b481 : Make find_first_set works on x86 MSVC (#38637)
23207ae656 : towards guard what you use (#38576)
5fcb2f678f : [Distributed Autograd] Make debugInfoMap from strings to ints (#38416)
5e2d8745c8 : RIP CUDA <9.2: circleci, aten, and caffe2 (#36846)
b29e7f9b9d : [TensorExpr] Use couldMoveBefore instead of couldMoveAfter checks in the fuser pass, add CPP tests. (#38592)
895479e612 : Complete codegen of 'build' workflow YAML tree (#38631)
262f70c986 : [PyTorch] Remove module and operator observer macros. (#38489)
5b12c29b17 : [ONNX]Update Dropout Export (#37641)
59d92e442b : Vectorize non-persistent Softmax (#38557)
b2c06ad875 : [JIT] Export all jit/backend headers in BUILD.bazel (#38668)
eb224721d2 : Enabled dropout removal pass in mobile optimizer. (#38254)
09c430a2aa : support complex types for tanh_backward_cpu (#37791)
a84fd8de39 : Handling Active Call Count through Future Callback (#38589)
34ef473d92 : [Tensorpipe Agent] Timeouts for RPC requests (#38448)
ece878e5b8 : [ONNX] Add GreaterOrEqual and LessOrEqual to opset 12 ONNX export (#38311)
ca05fb2e86 : Add autograd tests for complex (#38658)
44cead3a31 : Improve syncbn doc format (#38423)
59bef16138 : Add ci binary test for windows (#38297)
d904f3324f : [NNPI] Support fp32 bias in NNPI Backend (#38596)
67cd263876 : Fix merge_fp32_inputs_into_fp16 with no partition (#38594)
8338426ed8 : Fix infinite loop bug in minimizer (#38507)
e6993938de : Avoid Releasing, Reacquiring lock per iteration in RPC Retry Thread (#38521)
711f258dc7 : Enable tests in test_pytorch_onnx_onnxruntime (#37868)
a86176dee2 : CTC target un-pad example (#38393)
8292742ba0 : fake_quant: move observer and fake_quant flags into buffers (#38368)
b27be3e0c5 : Avoid double dispatch in logical_not for compilation speed reasons. (#38565)
176174a68b : Remove BC hack (#38571)
873f9025bb : Updating submodules
fe44741dba : Updating submodules
83df3beaca : Add complex support for torch.sum (#38382)
db86c8c6f5 : Test BC for built-in torchbind methods (#38560)
b9c537514c : [JIT] Remove import statement thing in serialization docs (#38578)
feb24577c2 : Reduce number of variables in codegen (#38369)
31b57e38cb : [jit] fix `index_put_` error in subscript assignment (#38378)
f39222a13d : Restore thread_local states in continuation thread on RPC servers (#38512)
8752d6a736 : DOC: Correct upsample doc to match interpolation (#38455)
8743d51182 : Updating submodules
0d9bb5f580 : [JIT] Use GE optimizer guard in import (#38575)
67d76f6bdd : Add utility to enable cpp stacktraces in torch.utils.debug (#38127)
87f40fef84 : Refactor check macros to reuse code (#38126)
adf67b81c5 : Make strip error messages work for cuda code (#38125)
9cfc10d52e : Updates assertEqual to use torch.isclose-like logic (#37294)
6a23214a47 : [JIT] Adjust pybind includes in backend.h (#38562)
b04c07a67c : Added a Resource section to README (#38547)
53a368fedd : [aten] Split some at::launch code into at::internal::launch_no_thread_state() (#38477)
6232481cab : [quant][graphmode] Add RemoveReduantDequantize pass (#38434)
dd7eed5ae4 : [JIT] Export JIT backend extension headers in setup.py (#38525)
1d1533e358 : Migrate CPU cross and some elementwise to c10::complex (#38023)
dc918162b7 : Remove `Caffe2_MAIN_LIBS` (#38408)
daa85cfe2e : [JIT] Exit Transform Rewrite (#38282)
62afc2d63d : [JIT] Remove debug print statement added in #37994 (#38524)
d44573a6dc : Remove _all_weight_values again (#38504)
8d7582a2cf : codegen mobile and macos configs (#38539)
70ef9f5124 : Improve testing of logical_not. (#38505)
42a3fb3a4e : change to find_method of lite_interpreter API to return nullptr if method not found (#38503)
5a19fe7454 : migrate `gather` to ATen (CUDA) (#37659)
52e9953faf : use version number instead of 'master' in html header title (#38149)
4b52e52577 : Use `jit_core_sources` from build_varliables.bzl (#38526)
242af6c078 : Add tan_cuda for complex dtypes (#38400)
acacad2575 : Adding support for manifold files in DBReader (#37727)
bae895cef0 : Issue 37819: Added check for kHIP in ATen/native/Copy.cpp (#38003)
bf2bbd9648 : Add message to static_assert (#38519)
c0bc182761 : Revert "Vectorize non-persistent Softmax kernels (#36485)" (#38534)
8bf3124572 : [TensorExpr] Fix bug when splitting inner reduce axis with tail (#38420)
0d51728d38 : Updating submodules
3cb2778d94 : Remove some unnecessary cast for complex numbers. (#38422)
000fea375c : Support operations on c10::complex and integer scalars (#38418)
ac613371a3 : Update NNPI backend to 0.5.2.5. (#4464)
ec9b2f9a9d : [quant][graphmode][refactor] Factor out getFixedQParamOpFusionInfo (#38359)
960f4b51e3 : [JIT] Fix `@staticmethod` access from `self` on modules (#37702)
3d0532f3ab : [c2] fix compute_norm test (#38529)
8df14c573e : Add sccache support for hcc and hip-clang in ROCm (#38451)
fac9f36563 : Back out "[c2] register cuda op for LpNorm (fallback)"
ee52501976 : [quant][graphmode][refactor] Factor out getInputTensorQParamOpFusionInfo (#38358)
155a287aea : Enforce const on PyRRef functions (#38415)
25177e2796 : [quant] Support empty batch input for quantized ops (#38508)
bc49d938e2 : Revert D21585458: [pytorch][PR] [RELAND] .circleci: Improve docker image build workflow
0e0b9496fe : [c2] [easy] stop gradient when diagnose (#38518)
8cdc4807cd : [RELAND] .circleci: Improve docker image build workflow (#38484)
bbfd0ef244 : [c2] register cuda op for LpNorm (fallback) (#38517)
504637a171 : [quant][graphmode] Support ops with fixed quantization parameters (#38278)
de7025fbdb : [quant] Support for functional quantized::conv1d (#38449)
8e732514cd : [quant][graphmode] Add support for quantized conv1d + relu fusion (#38441)
f4605ae5c3 : [quant] Fusion support for conv1d + ReLU (#38438)
8b6bf2a457 : Add C++ Landing Page (#38450)
1f87f15ba3 : Remove _reset_warning_registry (#38485)
b140ed6848 : Remove structseq_slice (#35625)
6d642a6f6c : Remove (most) Python 2 support from C++ code (#35614)
1b973aa2a2 : Sort CirlceCI config.yml keys to facilitate diff review after codegen (#38496)
69dca43c35 : Updating submodules
0e80c12bb4 : [pytorch] fix -Wlogical-op-parentheses in SortingKthValue.cu (#38500)
9d0e935b48 : skip torchbind on rocm (#38501)
4d4895a62a : Use Future's then() API to fix RPC profiling (#38352)
f178bf10f1 : Support rpc_async call with timeout in JIT (#37884)
3300dd5227 : .cirlceci: Keep tags that look like a sha1 (#38483)
38d141ede5 : Support having a different forward method when we are not in scripting mode (#38158)
5f2a274015 : Fix conv non zero padding being applied in wrong dim (#37881)
b57a339703 : Guard against negative rpcTimeout being passed in to RpcBackendOptions (#38267)
d1eeb3b7bb : [Tensorexpr] Fix and improve handling multiple gpu devices (#38365)
af597335d4 : Remove unnecessary to_string in RPC logging code. (#38414)
2f4da7c00c : Remove a use of exec (#35624)
7f7fdb1013 : Remove a use of checkScript(str) (#35623)
313bea84ef : Remove _get_wrapped_func (#35621)
d060deb5bb : Remove _compatible_subtest (#35620)
7026b39ac7 : Remove _uses_true_division (#35618)
328fc70b84 : Remove (most) Python 2 support from setup.py (#35617)
cbff959bd7 : [quant] Return default qconfig when backend is 'none' (#38407)
7f11079769 : Delete "named_guard" in native_functions.yaml (#38429)
25f918548d : Allow GradScaler to be pickled (#38296)
ae392a77a6 : Add better device idx parse checks (#37376)
0a159b0a3a : Fix precision issues in CPU remainder (#38293)
3e9b4332d2 : Fix @skipIfNoFBGEMM for types (#38432)
628e3b6fbd : Fix unreachable validation for gradcheck (#37915)
48c0331e01 : Sparse softmax support (CPU) (#36305)
fedb70a8fb : Fix encoding errors for hipify tool (#37906)
2b2d2168e8 : Issue #27441 Fix: Bug in updating ModuleDict & ParameterDict (#27814)
15da26f8aa : DOC: Add documentation for Tensor.is_nonzero (#37845)
96885f73ed : make test_jit infer the profiling mode, add a job for simple executor (#38374)
b5868b2833 : Relax sampler check in BatchSampler (#38403)
f3d2e332f1 : [PyTorch] Remove duplicate jit core sources filelists (#38430)
061ed739c1 : Embed ATen/core/CMakeLists.txt into its parent (#38426)
f99a693cd9 : Remove unnecessary py::object copy in PyRRef ctor (#38402)
54c16b44cf : [ROCm] increase timeout, enable test_backend_group (#36166)
8d94615c2b : Migrate erfc from TH to ATen (CUDA) (#38373)
beedc6542e : relax MAX_JOBS restriction for ROCm builds (#38425)
b1d2c1765e : Updating submodules
336e1ec592 : Clean up error handling in is_nonzero and where in TensorCompare.cpp (#38150)
5a979fcb99 : allow user passing relative paths in include_dirs within setuptools.setup (#38264)
ee8bf1c640 : [quant][graphmode][refactor] insertDeQuantForAllUse (#38277)
eb66dd0bc8 : [quant][graphmode][refactor] Refactor propagateQuantizationOps (#38276)
8d883f5c7c : [JIT] [Easy] Add location to implicit conversions (#38442)
7ce733d218 : [quant][graphmode] Move leaky_relu to general value op map (#38166)
16696186e1 : [quant][graphmode] Move elu to general value ops map (#38165)
98d78a7f20 : [quant][graphmode] Move hardtanh to general value ops map (#38164)
1fde373f2f : [quant][graphmode] Move clamp to general value ops map (#38163)
e988b4fbb1 : [quant][graphmode] Move interpolate to general value ops (#38162)
0d220ef381 : [torchbind] Better error message when missing init. (#37474)
2efa7e04c2 : [jit] move torchbind tests to separate file (#37473)
7d7d73655d : [quant][graphmode] Add quantizedconv1d to graphmode (#38341)
ae11718c45 : [quant] Add quantized::conv1d op benchmarck (#38332)
f6626aaf43 : [quant] Add support for Quantized Conv1d and ConvRELU1d (#38283)
2d221df52f : [quant] Add support for quantized::conv1d operator (#38248)
1676c7d618 : Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only (#38399)
53439be643 : improve some reporting for fakelowp tests (#38428)
dac9b61850 : Move Cuda Abs kernel to its own file. (#38274)
ff76de8ace : speed up hardswish and hardsigmoid tests (#38256)
afa4dbd731 : Use GIL to guard decref of jit::toPyObj return value in processRpc (#38376)
33977ca769 : Update Cpp, rpc docs and Libraries section to match 1.5 (#38350)
328dd9e5d6 : [future] Make new IValue future constValue semantics match torch::utils counterpart (#38355)
b668bbc404 : [quant][graphmode][refactor] Factor out common parts of general value ops (#38161)
6e13146d96 : [TensorExpr] TensorExprKernel: don't do any compilation or lowering in run(). (#37948)
eac54f18b8 : Vectorize SmoothL1Loss forward (CPU) (#37115)
b90fc52c68 : [quant] Implement unsqueeze/squeeze for per-channel qtensor (#38247)
0526eb0f08 : Fix aten_add. aten_sub to handle 2-operand versions (#38367)
d403b85c00 : [quant][graphmode] Move `aten::mean` to general value ops (#38160)
2a54533c64 : Fix the flooding log issues (#38356)
f64d24c941 : speed up SyncBatchNorm by batching distributed communication (#38246)
899a075b25 : Split up BinaryAritmeticKernel.cu to speed up compilation time. (#38263)
d86de916a9 : Migrate `exp` and `exp_` from the TH to Aten (CUDA) (#36652)
e7b4ef8fd3 : Revert "Partial revert of #38144 to fix ROCm CI. (#38363)" (#38380)
f2c6346ebe : [quant][graphmode] Move avg_pool/adaptive_avg_pool to general value ops (#38330)
138769b1b8 : [ROCm] add exact_dtype=False to bfloat16 test (#38381)
61bea93fca : Further parallelize linspace in addition to AVX (#38093)
9a2d8dfe63 : [TensorExpr] Benchmarks: set up profiling executor and fuser according to the given arguments. (#38295)
3a478b1cbf : Updating submodules
167a978a03 : Fix method stub creation for function attributes (#37994)
3d968088e0 : fix multinomial kernels to properly advance random states (#38046)
756788ea87 : Keep py::object alive until jit::toIValue returns (#38348)
e39991e838 : [TensorPipe Agent] Bind default IP address (#37910)
c20b0080c6 : Partial revert of #38144 to fix ROCm CI. (#38363)
797c608f50 : Explicitly decref py::object in PythonRpcHandler (#38366)
2e9d6d99be : Explicitly decref py::object in ConcretePyObjectHolder and PythonFunctionGuard (#38364)
d001862aff : Minor code cleanup (#38340)
6be3e5d3bb : [caffe2] weight_decay in reduced precision adagrad
cfe3c795ed : Port torch/csrc/jit/runtime/register_distributed_ops.cpp to new operator registration API (#38014)
34523b70c1 : Renamed *_transformation to transformation::* (#38301)
4f08bdddfc : Add skipIfNoSciPy/get_all_int_dtypes/get_all_fp_dtypes to common_utils (#38299)
00be4abc38 : Fixing DistributionsHelper.h includes (#38298)
70c6550cc9 : Forgotten changes for Tensor.random_()'s from and to bounds for floating-point types (#38287)
eb3e9872c9 : [JIT] make torch.unique compilable (#38156)
4a266c93a6 : Allow specifying range in and cpu_serial_kernel and cpu_serial_kernel_vec (#37981)
f7e7a15a5d : Fix `NaN` comparison in `torch.median` (#38216)
2c881417a7 : Change input scale to double type for conv params. (#38346)
e3357a7812 : Fix typo in build environment name (#38343)
80639604a8 : Revert D21536269: [pytorch][PR] [RELAND] [RELAND] .circleci: Improve docker image build workflow
c2ac2127be : [JIT] recursively compile class types (#38050)
cdf4d42c39 : [RELAND] [RELAND] .circleci: Improve docker image build workflow (#38335)
3134978816 : [JIT] Handle del statements with variables as targets (#37608)
a2a53447e4 : [Tensorpipe Agent] Add Call Counts to Metrics (#38266)
a4466eeff4 : [Tensorpipe Agent] Tracking Active Call Metrics (#38265)
3317fdf177 : Updating submodules
f954dd7823 : Add dropout removal pass. (#38253)
8ab6377273 : Port atan from TH to ATen (#37991)
d5a7d790a1 : Use torch.ne instead of torch.nonzero in gradcheck (#37857)
7c13a07286 : [Reland] Remove uses of type() part 2 (#38288)
b6d494d6da : [future] Minor: std::move() callback in future for the convenience operator case. (#37861)
525295e696 : BC upgrader for dynamic Linear with torchbind (#38333)
906c50eb69 : Remove dead code in ddp.{h, cpp} (#37990)
6daaeb2bda : [pytorch] Add C++ error when PyTorch used with Python 2
a90e574401 : Enable linear/conv + relu fusion in mobile optimizer. (#38139)
82abd50f2b : Added more autograd tests for C->C complex functions (#37856)
291869d625 : Remove unnecessary RPC profiling code after future merge (#38255)
7c66ad8941 : [caffe2/fakelowp] fix bug in ref code (#38331)
779abf7538 : Implements torch.pow for complex on cuda and enables complex values as exponents for pow (#36793)
986d7e47c4 : Migrate CPU fill kernel to c10::complex (#38026)
d5e8d90a2c : Migrate CPU reduction to c10::complex (#38022)
96d2ddba6c : remove harcoded values for fc testing
b29ec43555 : Limit max numel for test tensors (#38304)
9576b37caf : Fix test_channel_shuffle hypothesis params (#38327)
7eb9f1788c : Using LoadLibraryEX [Reland] (#38302)
6bb1c4a7ab : Move (most) generated return statements for TH functions out of the switch. (#38073)
e3584f8d7e : Migrate CPU tensor factories to c10::complex (#38021)
4c99a9b672 : Add documentation for hardswish (#37989)
ba0851326c : Revert D21449462: [CUDA] addmv for complex tensors
5c44f2a16b : Updating submodules
cf82011361 : Codegen CircleCI Windows configs (#38292)
3a63728149 : [caffe2/fakelowp] optimize ref int8 gemm (#38294)
dad552666e : Add then(callback)->Future API to ivalue::Future (#37311)
dcf1861f88 : add document for bucktization (#38119)
0d977e9223 : [CUDA] addmv for complex tensors (#37940)
63c3b89c1c : Simplify code with decltype(auto) (#30922)
6943253421 : [quant][mobile] Don't release bias tensor (#38284)
09e4ff95ee : [quant][mobile] Ensure qconv doesn't assert with empty batch (#38252)
ec7beda822 : Use thrust::host_vector instead of std::vector (#38178)
cebf5a8767 : Run mypy on some test files, add iinfo/finfo annotations (#38220)
6e66e8562f : Revert D21517822: [pytorch][PR] [RELAND] .circleci: Improve docker image build workflow
bf499cccb6 : Refactor native/cpu/zmath.h (#38037)
375ddb01b5 : Fix tensor printing (#38031)
eea9c6a048 : [RELAND] .circleci: Improve docker image build workflow (#38279)
43dd8760d7 : Move ThreadLocalDebugInfo to c10 (#37774)
6968c8153e : Warn against callOp (#37797)
42a222cf2c : DOC: Add missing args for index_add (#38213)
cdd1b9a891 : [TensorExpr] Distinguish aten::max reduction op from aten::max elementwise op and only fuse the latter. (#38171)
21ce4333b9 : Remove `THFile`, `THDiskFile`, and `THMemoryFile` (#37830)
1ab4f35499 : Revert D21496081: [pytorch][PR] Using LoadLibraryEx and LOAD_LIBRARY_SEARCH_* flag for loading DLLs o…
f41833957d : bypass `getDeviceFromPtr` check when device is known (#36714)
8e07b75cef : Have DeviceType available in torch namespace (#38036)
7c2853be9d : Revert D21511048: [pytorch][PR] .circleci: Improve docker image build workflow
333e29c45f : [ONNX] Fix pow op export (#38065)
19d6e32e9a : fix sample code (#38002)
def9f15b57 : .circleci: Improve docker image build workflow (#37976)
a37b865107 : test_linspace : remove explicit for-loop (#38191)
c6b2844076 : Pin flake8 to 3.7.9 (#38269)
a553935e3c : [JIT] Expose magic methods on script::Object (#38167)
1456515f15 : [JIT] Disallow plain List type annotation without arg (#38130)
00f3790a9d : Using LoadLibraryEx and LOAD_LIBRARY_SEARCH_* flag for loading DLLs o… (#37763)
5f9b9036c1 : Add instance methods tensor.isnan(), tensor.isinf(), tensor.isfinite() (#37942)
5137827ad0 : Lazily initialise thread local num_threads value (#37461)
08c3339e7c : [pyfi] override TP2 networkx -> PyFI networkx (#37764)
c31913671c : DOC: add BFloat16 dtype and BFloat16Tensor (#37051)
b290da0e75 : Migrate CPU tril, triu, masked_fill to c10::complex (#37897)
77d8a44802 : If we're building on C++17, use actual "if constexpr" (#38154)
3569c59600 : Inverse logic of persistent set and prevent use in jit (#38131)
f314d9a077 : Remove codegen for IntArrayRefStride, which isn't used. (#38072)
fe53b52537 : Macro generate ScalarTypeToCPPType, including all ScalarTypes. (#38071)
c26dde967c : Kill resize-ing and zero-ing from codegen. (#37958)
ebad4e463f : add missing include file for fake_nnpi_ops_utils (#38215)
6edf340338 : Delete torch/__init__.pyi, deferring to direct extension stubs (#38157)
6f396e18c3 : Add per-device allocator object in CUDACachingAllocator (#37567)
324dc1623e : add dtype checking for gather and scatter (#38025)
503be4e05e : fixing build failures with USE_NATIVE_ARCH ON (#35359)
f3e620ee83 : explain redundant branch/tag filters (#38169)
5077518c91 : [Resubmit] Migrate AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 to c10::complex (#38144)
26928b164f : remove internal file logging.h (#38182)
33f4fca1a6 : [TensorExpr] remove Let and LetStmt in favour of binding in Block (#37606)
48ad9f5a30 : assertEqual now requires matching dtypes (#38103)
57d01be92b : Replacing assertEqual with assertEqualIgnoreType wherever types missmatch (#38102)
e3414c1ef1 : AssertEqual now checks tensors dtype (#34154)
64d083bb86 : fix a bracket (#38039)
b579433bf7 : Revert D21487840: Bind VariableFunctions as a module, not a class with static methods.
f6b1c046b6 : Revert D21483808: [pytorch][PR] Remove uses of type() part 2
4501083306 : dedupe test skipping in common_distributed and test_distributed (#38078)
e109ff6379 : Use py::pickle in RRef pickling pybind code (#38147)
30f4064cfb : Bind VariableFunctions as a module, not a class with static methods. (#38136)
7e9af67ca1 : Add minimal skeleton for _C type stubs, delete torch.autograd stub (#38080)
464e5a6c07 : [TensorExpr] Add print functions for Tensor and Function. (#38175)
8181711637 : Automatic update of fbcode/onnx to 79a7e0df7e86e0f32e7a05f563b24a566540c18b (#38106)
3d0279862d : Consolidate builtin/python_udf RPC to return ivalue::Future like torchscript RPC does (#35154)
86d28706e0 : Remove uses of type() part 2 (#38140)
16e62f9305 : Unboxing uses if_constexpr instead of SFINAE (#38145)
ae534dc978 : [TorchScript] Explicitly disallow del with more than 1 operand. (#38089)
138476389e : [quant] Disable qnnpack test when TSAN is enabled (#38153)
63b1ae6983 : Fix overflow in torch.remainder when dividend is very large (#37758)
fdc40616b2 : s/callUnboxed/call/ (#37999)
55de7c3bb0 : Add test jobs on CPU agents for CUDA builds on Windows (#37904)
e84aa0211d : [JIT]Support List variable in adv indexing. (#37966)
c879c6fb98 : Vectorize non-persistent Softmax kernels (#36485)
615235fc80 : Migrate OwnerRRef value store to generic torch Future (#38143)
ca2206d071 : Add documentation for FeatureAlphaDropout (#36295)
c13dc2cab2 : Fix a minor typo in DistanceOpsKernel.cpp (#37596)
ad433e2003 : [TensorExpr] Fix a bug in the IR Simplifier that could introduce a division by zero (#38055)
9957db22a9 : int8 fc with tests (#38017)
172bcdb8c8 : Add documentation for nn.Hardsigmoid and nn.functional.hardsigmoid. (#38120)
41572116f6 : Dont store redundant packed params in dynamic quantized RNN (#38134)
4784af1d78 : [TensorExpr] Don't include aten::rand_like to TE fusion groups since we can't handle rand+broadcast case yet. (#38132)
6e1e2a60dc : fix compilation error with gcc 5.5 (#38112)
a7c29dbfa2 : `unfold_backward` gets its own kernel (#36612)
0ed7fc581c : [quant][graphmode][refactor] Split quantization.cpp (#37975)
ff9a809ccd : [quant][graphmode][refactor] Remove unused code in quantization.cpp (#37974)
c1e7758b5e : Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
91f451a5e6 : [TensorPipe] Do not require user to provide worker name-to-rank map (#38052)
b4946b96c6 : Don't use Profiler key in lite interpreter (#37962)
726aa713d5 : Replace torch.is_tensor usages with isinstance checks. (#38062)
9232356e5f : remove uses of type() and type_as() part 1. (#38029)
0c936f94d6 : Revert D21449612: [pytorch][PR] Migrate AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 to c10::complex
0f60c8d878 : [TensorExpr] Correctly print 'bool' dtype in Cuda printer. (#38077)
ff1a627bae : [TensorExpr] Don't include prim::Constant nodes with Tensor type into TE fusion groups - we can't handle them. (#38105)
a253ea92fb : [TensorExpr] Properly handle Bool dtype in several other places. (#38104)
459f14e9f6 : [TensorExpr] Correctly print dtypes in Cast and Allocate. (#38091)
609d5a4476 : [tensorboard] Let hparam render values correctly (#31544)
4c358b8b72 : Run QEMU to test that default dispatch doesn't use AVX (#38094)
53aa7d8bc5 : Add option to skip tests after retries (#38079)
d35ab0b7ae : Fix CUDA memory management issues caused by not using PinnedCPUAllocator (#38066)
f4d9713d12 : Migrate AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 to c10::complex (#37977)
deeef50432 : Check the _geev input matrix for NaNs and infs (#37642)
16e3df3ac6 : Fix typo: TupleUnpack. (#38043)
5a386a0a78 : Fix ldflags string for HIPExtensions (#38047)
c2f787ce77 : Give _VariableFunctions class a different name, so pickling works (#38033)
9fe8243536 : Fix minor issue in type stub for Optimizer (#38067)
3cade9cdd4 : Automatic update of fbcode/onnx to 807c62cf7e4c96ce49040bcf073b7e4a054f28a5 (#37983)
12bbda053c : Remove static initalizers from Vec256 (#38088)
25413635d0 : [c2][opt] nomnigraph transform for ClipRangesGatherSigridHashV2 fusion (#38004)
32329c3338 : [nomni] fix outputs check to replaceSubgraph (#38005)
f8c93c5d3e : Get rid of javasphinx dependency. (#38042)
4bc0a7f86a : Revert D20229168: [quantization] Use torchbind for Linear PackedParams
29f19bf727 : [ONNX] Enable tests for opset 12 (#37846)
5ee2302349 : Add links to more subdir READMEs in CONTRIBUTING.md (#38049)
9efbc19f75 : Fix the issue with C2 cont build
4ae187f6cb : Set SCCACHE_IDLE_TIMEOUT to INFINITE(0) on Windows (#37993)
bfa5070cbc : Fix rebuild with Ninja on Windows (#37917)
eaf9b28c55 : [quantization] Use torchbind for Linear PackedParams (#34140)
e3fcc6ade8 : Skip RPC profiling tests (#38045)
d5df055bbb : [WIP][JIT] Add JIT backend registration API (#35833)
002f5ec51b : Add preprocessing that fuses decomposed linear into linear. (#37937)
376c9a40dc : Fix dummy typo in `skipIfNoFBGEMM` (#38058)
a42616f71a : Fix torch.tensor dtype inference (#38030)
f2f8027760 : [TensorExpr] simplify trivial adds/subs/muls even in Float (#37960)
379e717a1b : Back out "Revert D18927220: if_constexpr for C++14" (#37792)
5e83a13e14 : stop creating integer type Tensors that require gradients (#37789)
facc5e0cc4 : Make profiler thread local (#36291)
2ef4010593 : Propagate TLS callbacks with ThreadLocalState (#37745)
2d708cefcc : Move RecordFunction into ATen (#37548)
c24c5f9684 : Make RecordFunction callbacks thread local and modernize interface (#37491)
dc25190833 : Move resize / zero logic for _thnn_conv_depthwise2d from codegen to native code. (#37957)
ed4e7cec03 : Move _thnn_conv2d resize and zero code from codegen to native code. (#37956)
99349393ba : Fixed gradcheck for complex (#37836)
8a8b7a16be : Remove unpacked int8 blob after constructing the packed blob to save memory (#37973)
f0f587366c : [Tensorpipe Agent] Implementing getMetrics with currently available metrics (#37980)
5d21a9cfc7 : [Tensorpipe Agent] Network Data Profiling (#37852)
25359f7392 : [Tensorpipe Agent] Implement Global Interpreter Lock Wait Time Metric (#37851)
b452fef583 : [Tensorpipe Agent] Base Structs for Tracking RPC Metrics (#37850)
a44824c9ed : [TensorExpr] Allow to enable/disable fallback mechanism thru an envvar PYTORCH_TENSOREXPR_FALLBACK. (#37971)
067f08c148 : [TensorExpr] Move controlling knob out of the TE fuser pass. (#37970)
3066d3ac1c : Remove overly strict assertion for type demotion of scalars. (#38001)
7bf9d983ea : [quant] Release qnnpack original weights for conv/linear (#37595)
dd64d26d74 : Make speed_benchmark_torch report latency in us (#37953)
85fccba224 : Message Delay fix for test_check_failed_messages (#37978)
305444a0bd : Update miniconda repository, be specific about cudatoolkit (#37186)
2b41b9bceb : [BE] Add @skipIfNoFBGEMM decorator (Reland) (#37894)
1667aa6451 : [CUDA_FUSER] Expand operation support for cuda fuser (#37849)
ffed9dca42 : [TensorPipe] Update submodule (#38013)
b2cc9928dd : Move resize logic for bmm from codegen to native code. (#37955)
ee1ddcef8d : Acquire GIL when constructing/destructing ConcretePyObjectHolder (#37870)
594b33ea10 : Add support for non-persistent buffers. (#37191)
46ed3349f3 : Add --check-untyped-defs to mypy.ini and test suite (#37594)
30fc58cfcc : Migrate CUDA where, tril, triu to c10::complex (#37896)
7be9796cc4 : [ONNX] Support clamp_min and clamp_max (#37872)
bc09478a60 : [TensorPipe] Use the new multi-payload message API (#37919)
978ad16290 : [TensorPipe] Allow passing args to agent options constructor (#37918)
4e93844ab1 : remove deprecation warning on get_contiguous_memory_format (#37963)
65260d48c8 : Fix splitWithTail to insert the tail immediately after the outer loop. (#37941)
9143d7fb68 : [Fakelowp] Open source fake fp16 FC ops (#37923)
76c964dfb0 : Reland [quant][tests] Enable tests to run on all qengine backends (#37943)
122587dcb4 : [ONNX] Improve error checking for large model export (#37798)
385f7e59a7 : Report test stats (#37803)
952e0f00a4 : Skip c2_ref_tests on network failures (#37972)
72e5b7ae5b : Add option to run python unittests in parallel (#37180)
681c6fb60f : Move complex utilities out of Half.h (#37676)
634282112b : updated create input and add test methods and added a whitelist for complex (#37835)
14fc83ebc7 : Add missing c10::complex::value_type (#37677)
09bedec29e : move quantization normalization layers to aten/src/ATen/native/quantized/cpu/ (#37352)
4fa049c525 : add quantized instancenorm operator (#36847)
b837d5d418 : add quantized groupnorm operator (#36835)
288dd33770 : quant: remove hypothesis and int32 from layernorm test (#37947)
675e77e88a : add docker image build ubuntu16.04-cuda9.2-cudnn7-gcc5.4-py3.6 (#37610)
35693e9b4b : Give at::cuda::blas::gemv<at::Half> parity with <float> and <double>. Nature is healing. (#37569)
28ed04c620 : [JIT] remove list_with_default op (#37886)
f538cd627a : Install HugePagesArena to optimize pytorch prediction performance (#37640)
3cc5062544 : Update bazel to 3.1.0 (#37951)
56fc347e49 : [quant][fix] A typo in quantized::conv2d_relu (#37964)
f29f96d47b : Port existing zero_dim_dispatch optimizations from codegen and remove codegen capability. (#37615)
f5b3125af7 : [JIT] Peephole optimize list ops (#37612)
bf970bce21 : Migrate some CUDA arithmetic kernels to c10::complex (#37878)
4bbf889bcf : [jit][api][refactor] remove redundant deepcopy implementation (#37538)
cd0724f9f1 : Do not `std::move` returned value (#37891)
728189588e : [reland][quant][graphmode] Support a new category of ops in graph mode quantization (#37936)
ec9342521b : [TensorExpr] Support Bool dtype in Or, Xor, And ops and in TensorExprKernel::bindInput. (#37938)
a3042ca89d : [JIT] Rewrite unaliased if output mutation (#37694)
b53e6bfd49 : [jit] normalize `getMethod` (#37472)
28ac5cdc91 : fix profiling test (#37961)
6293f1fb49 : Migrate cpu kernel for index and index_put to c10::complex (#37877)
ae308db681 : fix lilstm test in tensorexpr_te (#37913)
ab2373205f : Create a desktop shortcut for restoring pytorch environment on CircleCI (#37926)
945672bf3e : cmake: improve dependencies in incremental builds (#37661)
4c4816ad07 : [CPU] addmv for complex tensors (#37924)
7a408576dd : Stopgap fix to `determine_target` predicate (#37934)
1ad46f470f : [jit] `__copy__` for `RecursiveScriptModule` (#36830)
b1b6bc36a5 : Enable xnnpack_integration test in CI. (#37838)
d6b51e4adf : In interpolate, join short lines (#37170)
59f03c69ab : In interpolate, give a short name to scale_factor_list (#37169)
4996961826 : In interpolate, only call _interp_output_size in one place (#37168)
8749aa2d55 : Clean up formatting in upsample ops (#37166)
78529f6de7 : Whitespace cleanup (#37165)
5edf5efd37 : Migrate CPU sum, eq, and ne to c10::complex (#37876)
4e2ea6e013 : [TensorExpr] Remove the Tensor argument from loopnest.reorderAxis (#37873)
53e7d49a98 : Port register_prim_ops_c10.cpp to new registration API (#37834)
0e3a05ec00 : [JIT] rename enable_profiling_mode to enable_profiling_mode_for_profiling_tests (#37825)
436cd2c02d : Migrate check_convert to c10::complex (#37875)
8434247653 : modify `select_equals_backward` to propage only to a single value (#36316)
dd618216c5 : [JIT]Support adv indexing using list. (#37848)
f2148de92f : Revert D21409626: [quant][tests] Enable tests to run on all qengine backends
ec7fd0caef : [docs] Fix broken links in `contribution_guide.rst` and `governance.rst` (#37820)
e729db48ca : Remove requantization scale constraint. (#37683)
6f06df8193 : Fix lint (#37922)
122d8215a3 : [RESUBMIT] Kill broadcasting from the codegen layer. (#37907)
88c447bf71 : Change DeprecationWarning to UserWarning in `torch.cuda` (#32142)
f78d02ed51 : [quant][tests] Enable tests to run on all qengine backends (#37843)
2f61b04514 : Add Aten as dep to fakelowp and cpuinfo path to its include path (#37909)
75c201ac32 : Fix some amount of support for Bool in tensorexpr. (#37914)
cdc56d0b6c : Support c10::optional<Tensor> in custom C++ autograd function. (#37700)
b57b596f20 : Reduction should not coalesce_dimensions when splitting for 32bit indexing (#37788)
222fdd4227 : Updating submodules
ad2305e556 : Revert D21393512: [quant][graphmode] Support a new category of ops in graph mode quantization
fe88806784 : Back out "Revert D21171334: [pytorch][PR] Change StorageImpl to track byte count rather than element count" (#37893)
8c91b78277 : [TensorExpr] Fix the shape info check in the TE fuser pass. (#37882)
e3934dfae8 : [ROCm] Enable bfloat16 for ops in BERT model (#37634)
402f635bbe : Enable ahead of time compilation for HIPExtensions using ninja (#37800)
70f375becf : [quant] ConvPackedParams with TorchBind (#35923)
32b09f7ab9 : Devirtualize device init calls in factory op wrappers (#37815)
9f060d3873 : [Caffe2] Increase timing threshold to 50 ms on Windows (#37892)
5eacc9cb57 : [quant][graphmode] Support a new category of ops in graph mode quantization (#37515)
480bd0ad50 : Stop defining static data in Vec256 (#37767)
96b512be07 : fix msan in vec_reduce_all (#37853)
e3d1c4eaac : Revert D21310335: reenable quantization test_qadd_scalar_relu test
92f750b5c7 : disable clang-tidy modernize-trailing-return (#37888)
0359a9b0a0 : Delay loading the cuda library on Windows (#37811)
91c1505e5a : Move addmm broadcasting code from codegen layer to native layer. (#37613)
6792c3ad24 : Move addbmm broadcasting from the codegen layer to native layer. (#37603)
b8d48d3680 : Revert D21406034: [pytorch][PR] [BE] Add @skipIfNoFBGEMM decorator
34bf868ebc : Fix weight quantization in RNNs (#35961)
a2fc7f787a : Revert D21171334: [pytorch][PR] Change StorageImpl to track byte count rather than element count
563bbeb890 : fix undef CUDA_VERSION warning (#37866)
0cae718723 : reenable quantization test_qadd_scalar_relu test (#37423)
b61fda2313 : reenable quantized test_compare_tensor_scalar (#37422)
b57d82fcbb : workaround nvcc host function bug (#37867)
30a65f1afa : [Tensorpipe Agent] Call Shutdown from Destructor and Join (#37839)
5325606c37 : Add zero_mask() for Vec256<BFloat16> (#37114)
4c009c7f3e : Make aten_tensor_iterator ASAN safe (#37869)
27fc2ab9f4 : [TensorExpr] Add a constructor accepting a name_hint to class Buf. (#36617)
1c0bad25f3 : [TensorExpr] Add dtype to class Buf. (#36611)
2c6aed0d61 : [Testing] Add `--save-xml` option (#37840)
a3639fa516 : [Tensorpipe Agent] Adding Tensorpipe Codeowners (#37854)
3706803b60 : Change StorageImpl to track byte count rather than element count (#37776)
25ba802ce4 : Fix `cdist` backward calculation for `p=2` (#37337)
06e1b68843 : [BE] Add @skipIfNoFBGEMM decorator (#37810)
65291fd422 : Remove unused capture in tensorpipe_agent.cpp (#37828)
bd220b336b : [jit] fix trace checking reporting divergent names (#37842)
9d7a79ac27 : [Caffe2] raise exceptions instead of str (#37744)
b57c8b720e : [wip] Make quantization modules work with DataParallel (#37032)
25e6129c52 : quant BN tests: remove qint32 (#37832)
08304ccccc : add a cuda job for profiling tests (#37812)
5b0244ee8f : Tighten error checking in ConcreteModuleType (#37813)
782b53b654 : Specify _th_ ops in CUDAUnaryOps macros so they are easier to find. (#37582)
9b3911c073 : [quant][graphmode][refactor] rename SwapDequant and refactor code handling general ops (#37555)
7fa968b10d : [TensorExpr] Add python bindings for TE fuser. (#37831)
5c628ddbd0 : Fix README for installation from source (#37301)
3b97723f08 : Let >> and << support half on CUDA (#37670)
3673a7245d : graph mode: more in-place activation handling (#37771)
b354700e75 : graph mode: round out relu support (#37592)
0b693e9601 : uninitialize output and bag_size in the fast path of EmbeddingBag to save overhead (#36681)
145560f499 : Migrate `erf` and `erf_` from the TH to Aten (CUDA) : Closes #24558 (#36724)
23d0441da7 : [JIT] Fix GetAttr inconsistency (#37424)
12e64916b3 : Migrate clamp from the TH to Aten (CUDA) (#37646)
468a9d448e : [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681)
d7ccb4b392 : Migrate CUDA unary complex kernel to c10::complex (#37647)
51c9444274 : Enable test_distributed test test_backend_full_group (#37794)
7c2944899b : Add vec256 for c10::complex (#37690)
6133be31bd : Fix for hooks with no name (#37785)
16c7907ad0 : Migrate CUDA fill kernel to c10::complex (#37651)
d4edbbd396 : Revert D21369541: Make a separate cmake option for caffe2 tests
0549e1f384 : [Tensorpipe/RPC] tensorpipe RPC agent (#35483)
3411ec6e32 : [TensorPipe/RPC] Serialize and deserialize message (#36197)
7fa897eac0 : [caffe2] L2 regularization for (RowWise)SparseAdagrad fusion on GPUs (#37805)
429d90f648 : [BE] Split pytorch_linux_test into 3 steps (#37808)
458134f021 : Add several ops for portal NLU/ASR model (again) (#37801)
aff92ef3d6 : Make a separate cmake option for caffe2 tests (#37721)
faad00a290 : add qnnpack path for hardtanh (#35779)
f5e6f39e00 : Remove std::complex to std::complex casting specialization (#37574)
15df33f797 : [Onnxifi] Cache output shape inference result for OnnxifiOp (#37796)
1845545075 : Enable HgemmBatched for ROCm (#37483)
4a2c642e1f : fix ROCm bench CI by increasing first iter timeout (#37633)
090ea775c9 : Math functions of c10::complex should be overloaded as const reference (#37689)
8e5f162b4c : [FakeLowp] Reset workspace in test (#37799)
1d43d7caa2 : Use `gpu_kernel` in Affine Quantizer (#37312)
847d102e93 : docs: Fixed docstring indentation for documentation (#37739)
53ca3e5b9c : Migrate CUDA cat, scatter, gather, index, index_put to c10::complex (#37650)
209c6f9ab5 : Move device type init from BackendSelect to backend kernels (#37402)
0c2a72ec41 : Update README to include few (missing?) links (#37714)
d16c8238e1 : [ONNX] Fix numerical errors in softmax when dim is not last dimension (#37326)
804e32a467 : split out docs tests into separate job (#37793)
57dc4cd0f8 : [MultiProcessTestCase] Improve the error message when a process terminates (#37627)
20e5749129 : Migrate CPU casting copy kernel to c10::complex (#37649)
0a24f33dc1 : [quant][mobile] Return for conv with empty batch (#37779)
4fef3763dd : Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
4025d87843 : Kill the ability to codegen tensor-based broadcasting. (#37547)
73aa49d529 : Move addr broadcasting from codegen layer to native layer. (#37546)
e38d7591a7 : Move broadcasting code for fmod, fmod_ from codegen layer. (#37545)
4cdaa5956c : capitalize fuseTensorExpr (#37780)
fe8fdb775f : [quant][graph] Fix bug in replicateDequant (#37637)
a6aa336cc2 : [quant][graph] Fix bug in replaceConvolutionWithConv2d (#37635)
77dd00c850 : Permit registration of multiple triggers, but insert warning (#37772)
a058e938f9 : Refactor error msg stack handling, add TORCH_RETHROW (#37101)
efd8f70cac : Make msg() and msg_with_backtrace() private (#37094)
6dd1beaaa8 : Fix export of caffe2 models containing the Copy op to ONNX (#37144)
1bac49f075 : Migrate item() to c10::complex (#37648)
c0ff085775 : [PyTorch] Modify `data_parallel` to work with small tensors (#37704)
20f7e62b1d : Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings
fd05debbcd : [TS][easy] Typo Fix (#37773)
812a3fa03d : Show warning if Tensor.random_()'s from and to are not in [-(2^digits), 2^digits] bounds for floating-point types (#37537)
e6221f4ca1 : Remove std::complex from TypeMeta (#37632)
dbcfd62a1c : Remove unnecessary pickle and unpickle invocation in PyRRef __setstate__/__getstate__ methods (#37638)
b7f258bbd3 : add fmt to libtorch_python.so (#37560)
5216917022 : [caffe2/dnnlowp] documentation for pack operator arguments (#37719)
aa54f58041 : LoopOptions::gpu_block_index(): bool -> int (#37578)
f10fbcc820 : Split up documentation into subpages and clean up some warnings (#37419)
b1e4e4d470 : Remove zero_dim_dispatch_when_scalar (#37580)
5ec87a3c1a : Move baddbmm broadcasting from codegen layer to native layer. (#37544)
66a20c259b : [CircleCI] Store build artifacts for python docs (#37658)
bcdff7eb67 : Fix for tests on ROCm (#37616)
6c37ad2674 : typo in MultiheadAttention documentation (#37496)
136bc5a482 : Revert D21215050: Add ops for portal NLU model
6a6c29c1c9 : Update TensorPipe submodule (#37729)
e26631b333 : [caffe2] Shape inference for UnPackRecords
bd9617d5af : [TVM] Implement UnPackRecordsOp (#37489)
843c0230f2 : Add ops for portal NLU model (#37192)
ffed77d0c8 : Updating submodules
506ae60547 : [caffe2] L2 regularization for rowwise fused sparse adagrad (#37653)
3403d27def : [caffe2] L2 regularization for fused sparse Adagrad (#37652)
8cb1f2f9dc : implement L2 regularization for Adagrad in caffe2 and dper (#37705)
cc0f1b22a2 : [PyTorch Numeric Suite] Add module output comparison (#36701)
5baa6b6c34 : Add a Bazel build config for TensorPipe (#37691)
d639418307 : Add timeout injection to faulty agent for testing (#37485)
707e0e86c0 : [WIP] retry apt at individual package level and at command level (#37696)
5e0a24f1f9 : [quant][graphmode] Move numerics changing passes before finalize (#37514)
b4d486abbc : Enable test_DistributedDataParallel_SyncBatchNorm_2D_Input unit test (#33573)
ae755a73d3 : SyncBatchNorm size check update (#37133)
564de515f5 : Add an iterator to Block. (#37542)
fbf110293d : jit/OVERVIEW.md: screen * in 'Node*' for proper rendering. (#37686)
099a84ef9b : Add overload name for aten::tensor and aten::as_tensor (#37655)
fa8ab4b80c : [pt][quant] Unify numerics between fakequant and quant/dequant (#37188)
95465dcbaf : autograd: move scalar input to a different device when needed (#35286)
2658bae570 : use std::move (#34365)
b1790794f6 : Enforce Tensor.random_ check that from and to are in tensor dtype bounds (#37507)
831c8f362f : fix the incorrect merge of profiling information of two tensor types for the same value (#36806)
b410d03e6e : Back out "[c2][opt] nomnigraph transform for ClipRangesGatherSigridHash fusion" (#37675)
ba7461c135 : Add pointer to RPC parameter server tutorial (#37667)
49c8a37a0d : Fix doc-gen warnings in RPC (#37666)
ba5137ea9d : [pyper] Use Caffe2 ops
675b3fc834 : Prevent unbounded growth of sparse tensor in add operation (#36030)
c0a985fcd6 : Allow customizing retryable message types in Faulty agent tests (#37450)
1f09f7ea44 : Python API for Complex Storage and storage copy logic (#35771)
deb4100928 : [DistributedSampler] Only create torch.generator and seed when shuffling (#37604)
6ecb5bb1f0 : match old fuser rem to eager (#37196)
ecf1ea75a7 : Make c10::ComplexHalf a template specialization of c10::complex (#37426)
22708be5af : Migrate `tan` from TH to ATen (CUDA) (#36906)
df31ddbd98 : Add channel shuffle op fp32 + quantized. (#36815)
1510bdd42d : Replace empty_affine_quantizer with new_qtensor_cpu. (#36814)
6de949afaf : Add quantized adaptive avgpool. (#36813)
f6c82e04a0 : Move to using MemoryFormat::ChannelsLast for avgpool2d. (#36812)
e852b45d9f : Overload c10::complex operators inside c10 namespace (#37605)
4ed790d742 : Adding symbolic sizes, contiguity, stride indices (#36101)
9e32a1f5cd : [wip] update graph fuser aliasdb in-place (#37106)
0692804747 : add slope == 0 case into standard leaky relu nn test (#37559)
91e74fd843 : [JIT] Adds a `code_with_constants` method to module printing (#37586)
7c4bda7e6f : Eliminate warnings for cpp extensions on Windows (#37400)
5ab36ec98b : Move cauchy_() to DistributionTemplates (#37602)
bedc50ed07 : Ensure we are diffing against the right thing in clang-format (#37589)
a09cb5f2f5 : [quant] quantized reflection_pad1d (#37452)
20f5d4436e : Updating submodules
e841bea465 : [quant] QNNPACK Add deconvolution parameters (#36716)
5efd10518f : [jit] speed up alias analysis (#36345)
e98ad6c05b : [RELAND] Remove patches that circumvent MAGMA bug (#35973)
cd4c3b48a6 : Add LN after specialized output embeddings and flexible LCE (#35178)
6f8838cd2f : Revert D21326386: [pytorch][PR] [Reland] Implement cusparse Descriptor class and clean up cusparse code
1aedc2c5b9 : Skip c2 ref onnx model tests (#37591)
cd48fb5030 : Vectorize linspace on CPU. (#27957)
2c33ea1c47 : [doc] improve tensor.view doc (#36728)
3e1859959a : Updating submodules
13013848d5 : Fix cpp_ext build dir create permission (#34239)
287f3b746e : Remove Backend -> THPLayout mapping. (#37527)
8a30553738 : [TensorPipe/RPC] Add TensorPipe dependency (#36695)
b97341e3dd : [c2][opt] nomnigraph transform for ClipRangesGatherSigridHash fusion (#37535)
20ba29d81c : Add support for reductions on CPU in tensorexpr (#37333)
d3d10cc14a : Add tests for lower_graph and fix unpack() ops dispatch (#37540)
149b468ce2 : [TensorBoard] Fixes missing doc for add_graph (#37504)
c5624e831d : Add overloads of std:: math functions for c10::complex [resubmit] (#37468)
e9db16e0c1 : [Reland] Implement cusparse Descriptor class and clean up cusparse code (#37533)
5bb01568c3 : speed up and re-enable quantized bn unit tests (#37420)
f09eb391b9 : Move masked_select broadcasting from codegen layer to native layer. (#37543)
69e2f1aaff : [cmake] add HAVE_SOVERSION option (default=OFF). (#37502)
4c8636c74c : Unify the path for environment restore script (#37486)
6792bafa72 : [pytorch] aten codegen to filter backends for default mobile build
68250fa557 : Vanilla Pytorch bionic clang9 test in CI (#36711)
9d0891f886 : [pytorch][buck] tweak code analyzer e2e script
ac5403f22e : [quant] Check qengine for TestNormalization (#37562)
091a1192d7 : [JIT] Convert float Tensor argument to double in prim::tolist (#37465)
eb5590d6f4 : Updating submodules
a0075c4825 : [XNNPACK] Disable xnnpack ops for both iOS and macOS (#37528)
482d1f4b8c : [quant][graphmode] fix observer instance copy (#37185)
a961d3acf3 : graph mode: add handling for layer_norm op (#37525)
4e7403c286 : graph mode: add hardswish op (#37524)
7ac98c9396 : graph mode: refactor quantized hardswish API for easier graph handling (#37523)
11b6f70f7d : graph mode: add hardsigmoid op (#37522)
6cdc8cac47 : graph mode: add elu op (#37521)
400098d492 : graph mode: add hardtanh op (#37469)
b33b46a950 : [quant] Enable qnnpack tests for test_quantize and test_numeric_suite (#37351)
b48239af3c : Cleanup internal functions in python_functions.cpp (#37536)
322e564ee3 : Minor format cleanup in py_rref.cpp (#37520)
d5b38984c8 : Let RPC return FutureIValue instead of FutureMessage (#37519)
e9db51f9af : Enable float requantization for avgpool/gavgpool ops. (#37037)
d5363e6499 : Set onnx opset version before model select (#37466)
1ef992639d : Make c10::complex the C++ type for complex tensors (#37421)
5bb9357345 : Update assertion in MHA forward to support FP16 training (#37539)
896f8130a6 : Revert D21297549: [jit] fix trace checking reporting divergent names
f1cd0eeb70 : `IValue(bool)` constructor should initialize entire payload (#37513)
7e9cc4df85 : Migrate `cos` and `cos_` from TH to ATen (CUDA) (#36653)
6098cf7e33 : Add `sched_setaffinity` check from libgomp to `valgrind.sup` (#37532)
bca82801e7 : add support for generating Vandermonde matrices (#36725)
f7dce8508c : Revert D21302691: [pytorch][PR] Implement cusparse Descriptor class and clean up cusparse code
297cc5512e : [quant] Enable convolution tests (#37494)
ec5fb29b96 : Add overload names to dict operators. (#37279)
9e97e9244f : Fix mobile type resolution in unpickling (#37425)
a3ab560f6c : Port xnnpack operators to new registration API (#36800)
867e05921f : Fix multiple issues with type annotations (#36358)
bbf29a5239 : Implement cusparse Descriptor class and clean up cusparse code (#37389)
1bb66a0cd4 : Extend some of the basic ops to kHalf (#37121)
bbd2350c99 : Disable tests failing on test2 in ROCm CI (#37427)
58a46a174e : [cmake] add USE_SYSTEM_{XNNPACK,ONNX} options. (#37501)
0d9e3b48c4 : Remove THCudaMemGetInfo. Use c10's cacheInfo instead. (#37447)
68895eda9d : add fmt, take 7 (#37356)
d37a4861b8 : Explicit attribute setting for pruning and weight_norm upon reparam removal (#34170)
6176931695 : Disable stateless xnnpack for ios. (#37460)
9259a283b7 : use detected python version to find pylibs (#34041)
ec8517b6df : Move exponential_() to DistributionTemplates (#37456)
06168bf17d : Move geometric_() to DistributionTemplates (#37418)
ce6077d7a8 : Move log_normal_() to DistributionTemplates (#37392)
253943d5a7 : Remove thrust_t from remainder_kernel_cuda (#37470)
1b525f88ce : Print all ops in model converter
bf53784e3c : Treat cross-execution-space-call as errors for NVCC on Windows (#37302)
4bfa51d405 : [jit] fix trace checking reporting divergent names (#37464)
a55d80e1c5 : [JIT] remove dominated guards of functional values (#37105)
45e8451b33 : optimize is_float_point calls (#37012)
cde1350a5d : Add support for generic list constants (#36953)
c516f84525 : [JIT] Add Lower Tuples Call & Run remove mutation after list unrolling (#36829)
cdc0880632 : add post unroll optimizations (#36828)
92129956cf : Add size peephole optimization (#36758)
0c3a6f941f : disable peephole optimizations that require alias db (#36757)
4e3dc34c47 : add complex support to `reciprocal_cuda` kernel (#36749)
fd4a09ea73 : [WIP] Bind in CellParams for RNN (#35787)
74c00b1f69 : move to explicit avx2 switching (#37207)
21b7af1e7b : allow inplace leaky_relu backward calc when slope == 0 (#37453)
facdd15cc6 : [quant] Finishing refactor for quantization test files (#37366)
e69115ec52 : [quant][graph] Add JIT passes for dynamic quant multi uses of quant node (#37125)
3b9ddab093 : [quant][graph] Run dynamic quantization for specific ops (#37093)
9dab3ed5c6 : [graph][quant] Enable accessing child/grandchild modules in forward (#37045)
e55d2e6fa6 : [quant][graph] Add check for qconfig_dict key (#37014)
92b9089fd9 : [jit] Fix pretty printing of functions (#37432)
07bb442b24 : Move DistributionTemplates to anonymous namespace (#37429)
12f5a32863 : Don't use NonVariableTypeMode in custom ops (#37355)
edc5ef1afb : run the simple executor for jit tests by default, add profiling jobs … (#37017)
6fa76b8a0c : [jit] __deepcopy__ for `RecursiveScriptModule` (#32684)
e5a24a6389 : Retry anaconda upload (#37414)
273c464145 : Fix `TensorIterator::view_offsets_` size (#37214)
dcd8a1b399 : Revert D21286660: [quant] Generalizing _calculate_dynamic_qparams in quantized test
6c0f447b51 : Remove ONNX BatchNorm(12) test and converter. (#37309)
8258d42bd0 : [pytorch] add '__BASE__' section to op deps to factor out frequently used util ops (#37404)
e0a5b443d6 : [pytorch] remove unused flags from code analyzer & move format support to python (#37393)
239ce75a74 : [quant] Generalizing _calculate_dynamic_qparams in quantized test (#37451)
024f663fc1 : Resubmit "Fix NaN error in dynamic quantization in qLinear, re-enable test_quantized_rnn" (#37458)
5b6f6da18c : [caffe2] Copy tensor in single tensor input case in UnPackRecordsOp (#37454)
d1a39815f9 : Remove Python 2 string compatibility in ATen/function_wrapper.py (#37388)
580928801f : [ONNX] Adding 'numel' and 'to' export for script module (#36501)
a51f047c7e : Synchronize MAGMA functions with the current CUDA stream (#36605)
d068a456d3 : [resubmit] Enable global observers API (#37382)
4234d62489 : [hotfix] Workaround for older versions of ninja (#37417)
c5d6f59ab1 : Replacing EHa with EHsc (#37235)
8fe2a5e91b : Fixes type annotations for named tensors #27846 (#36890)
ebcacd5e87 : [Bazel] Build `ATen_CPU_AVX2` lib with AVX2 arch flags enabled (#37381)
b37080d97a : remove record_function_enter and record_function_exit from header (#37052)
48b126f496 : [caffe2] Fast path for single tensor in UnPackRecordsOp (#37361)
da64ed14f6 : Reduce volume of spammy warning (#37360)
4ff4119d45 : [rpc] Move _set_rpc_backend and RpcBackendOptions to use float instead of timedelta (#37027)
5a59bbc1da : [TensorExpr] IRPrinter: show output_args separate from reduce_args when printing ReduceOp. (#37367)
a4383266f0 : Revert D21262421: [pytorch][PR] [doc] Fix JIT code highlighting
f1e89fbe53 : [pytorch] add missing host-device attribute to fix clang build (#37358)
fae87908d9 : Back out "Fix NaN error in dynamic quantization in qLinear, re-enable test_quantized_rnn"
cf41f6bed1 : Fix record_function (#37364)
ed0a572eed : Migrate `scatter` and `scatter_` from the TH to Aten (CUDA) (#35697)
b8ec165c0d : Fix failing test in test_torch.py (#37362)
20143e5f27 : Revert D21245094: [resubmit] Enable global observers API
d294c06287 : Fetch TORCH_PYTHON_SRCS filelists from build_variables (#37267)
1039b95ff0 : [autograd] add documentation about multithread autograd (#37020)
b3ada29584 : Skip test_profiler_custom_op on ROCm (#37374)
16f4501cd4 : Improve checkpoint docs to warn users about detached gradient issues (#37266)
023c3575f0 : [doc] Fix JIT code highlighting (#37338)
8dc5502cb1 : Do not add special `CUDNN` search path rules for `torch_python` (#37349)
f463586739 : Revert D20984966: [quant] Generalizing _calculate_dynamic_qparams in quantized test
f07b85b6a6 : Revert D20984967: [quant] quantized reflection_pad1d
5fab4c30dd : [resubmit] Enable global observers API (#37292)
e33c3e49d5 : Fix hard-code cmake target (#37310)
c4401ea9ab : Make test_quantize runnable (#37357)
e8421807d8 : [TensorExpr] Fix indentation in CudaPrinter. (#37305)
e49ccdf211 : [TensorExpr] Add IRPrinter::visit for AtomicAdd. (#37304)
d167a7f654 : Revert D21256854: [pytorch][PR] Add overloads of std:: math functions for c10::complex
af9c3a3652 : uniform_int_distribution does not support uint8_t (#37260)
045c588bc6 : Enable use_c10_dispatcher: full for some more ops (#37273)
201ba13911 : Correct $ANDROID_HOME string empty check (#37064)
805c417ec9 : Implement avg_pool2d kernel for channels_last (#35855)
ec8006cc16 : [ONNX] fix provider_version and add consistency test (#36797)
0048243f70 : Check compiler -v to determine compiler (fix #33701) (#37293)
6d409481b3 : Add overloads of std:: math functions for c10::complex (#35725)
a08a9f3b82 : Enable uint8 upsampling 2 (#35029)
5c9d1e4824 : Propagate module lints for mobile scripted module. (#37046)
5b9f7f7b0e : [cmake] Add USE_SYSTEM_{GLOO,FP16,PTHREADPOOL,PSIMD,FXDIV,BENCHMARK} options (#14699) (#37277)
3a0ff3cd2f : Generate environment restore script for Windows build jobs (#37319)
007163407c : [cmake] Support "Generic" BLAS (#14699) (#37276)
22ac071d9a : Add SWA to PyTorch mainline (#35032)
828d590b06 : [ROCm] Update to ROCm 3.3 (#37247)
f41742ff2f : [autograd] remove spinning for dist engine (#36606)
ed9ec3c96f : [autograd] refactor some functions (#37061)
47fec01c45 : Fix cpp extension compile failure on some envs (#37221)
b428f454e1 : Revert D18927220: if_constexpr for C++14
b64fc3c4b5 : Changes warnings generated in cpp to show point of Python origination (#36052)
f8ec51bd86 : Ensure DataParallel replicas can be saved (#37307)
2b050371b4 : Make listenLoopInternal non-virtual (#37265)
d98ea604f4 : Improve Error Message for Dist Autograd Context Cleanup Failure (#37255)
b198796a28 : [quant] quantized reflection_pad1d (#36450)
7604f470ed : Add weight info in debug_ssa_net (#37262)
92e91cee8d : ONNX Export Support for CrossEntropyLoss (#34830)
205c6ffbc5 : [quant] Generalizing _calculate_dynamic_qparams in quantized test (#36449)
ca39f99d48 : [Pytorch Numeric Suite] Add module level comparison (#37242)
a04022c656 : Use `std::chrono::high_resolution_clock` for profiling on Mac (#37280)
59052e39b8 : [quant] qtensor resize (#36442)
bf860a4eba : Adds missing documentation. (#37295)
34284c1279 : Fix NaN error in dynamic quantization in qLinear, re-enable test_quantized_rnn (#36009)
84a31fb4e7 : Revert D18927221: Boxing uses if_constexpr instead of SFINAE
c90955e3d1 : [profiler] Sort by end interval as well when parsing CPU trace (#37297)
ea741f829e : Add `--repeat` option to python unit-test (#37281)
44345ad08c : Do not define C10_IOS on Mac (#37283)
cb27067b32 : [ONNX] Remove inverse op (#37005)
b18f57e548 : Boxing uses if_constexpr instead of SFINAE (#31092)
f5e6f1f333 : if_constexpr for C++14 (#31091)
04b36fc264 : [TensorExpr] rfactor implementation (#36237)
c52deb694e : Consolidate usage on torch::jit::toPyObject in RPC request_callback (#37249)
3d934c3d36 : Add using torch::utils::Future to simplify code in RRefContext (#36811)
269ec9a139 : Prevent RRef.to_here() to block an RPC thread on the callee using Future callbacks (#36805)
6e1e55c134 : Prevent RRef unpickle to block waiting for OwnerRRef creation (#36785)
d7f7c290e3 : addmv migration [resubmit] (#37236)
856e8cf028 : Revert D21213786: Enable global observers API
e6231c9e24 : Do not run valgrind on the Aten unit tests compiled with clang (#37152)
6e659e928b : Enable global observers API (#37195)
4e976b9334 : Remove callBoxedWorkaround (#36850)
6ea2aedab9 : Cast shape_.size() to int64_t before comparing with squash_dim (#37109)
30eb0bdf32 : Do not define list "0" in torch/CMakeLists.txt (#37275)
904949382e : Ensure that histogram observers have zero-point of zero for post ReLU activations (#37107)
ef9ec03e77 : [CUDA11] Pytorch change (#37187)
a80a438e37 : correctly set and restore states in te tests (#37210)
686b521784 : Update cusparse deprecated Xcsrmm2 call (#37202)
4a72ddedcd : Show cpu info for macos jobs (#37220)
1d0334dd62 : Add cpu build and test to Windows CI (#37135)
1d8012a624 : Delete dead code (#37254)
1f08ff12ec : [jit] fix named tuples as attributes (#37251)
47c4dca1ab : Remove python-2 or python<3.5 checks from unit tests (#37252)
521910e0e9 : Update clang_format_ci.sh (#37268)
b60c3dfdd9 : Add fallback wrapper for profiler (#37194)
047488a7ff : Mask all high dispatch keys in BackendSelect kernels (#37257)
b6bb644e41 : Fix long line splitting issue in python_print (#37088)
d6ce6570f9 : Remove unused imports in aten/src/ATen/function_wrapper.py (#37245)
4f3946a89b : Added complex dtypes to get_all_math_dtypes, complex acc type for cpu, fixed rdiv and pow for complex (#37193)
c38dcd45d7 : [jit] fix return different types bug in tracing module calls (#37190)
5362a0b948 : [jit] fix lifting bug in tracing module calls (#37189)
a13b5b0ae8 : Split reduction compile units (#37205)
9f02897431 : Account for the change in optimizeForMobile API change.
2baff9476e : Test test_is_nonzero make expected exception inline (#37128)
deefafb01d : Allow std::array as operator argument and return (#34399)
fc528ccbaf : [wip] Allow ArrayRef as kernel parameter (#34335)
93cd05b0f4 : Fix CMake errors on systems where {Q/X}NNPACK is not supported (#35607)
6e92579883 : Added autograd support for C->C functions and enabled requires_grad=True for complex (#36932)
1beca4ac6a : Prerequisites for CSPRNG (#36631)
af08334c63 : better local command for clang-format check (#37127)
5a27ec09b8 : Add Inverse Short Time Fourier Transform in ATen native (#35569)
20328f67bb : Add core of c10::complex [resubmit] (#36626)
6ac0f67699 : [C2] Optimize MulGradient Operator when inner_size is 1 (#36767)
cae77fa351 : [doc] Fix broken links in the TOC of CONTRIBUTING.md (#37131)
385165ec67 : [reland][quant] QuantizedCUDA implementation (#36936) (#37081)
77abb6938e : Port register_string_ops.cpp to new operator registration API (#37008)
8254a63802 : Speed up calculate Qparams for per-channel observers (#30485)
a50a1fb4c3 : Enforce kw-only args now that py2 is unsupported (#37069)
35b9c89dc1 : Revert D21045393: [PyTorch Numeric Suite] Add module level comparison
fba9b9a023 : [PyTorch Numeric Suite] Add module level comparison (#36669)
827f04a075 : Support creating an RPC gang of size 1 (#32731)
a633c2d112 : Fix const-cast lint error in process_group_agent.cpp (#37184)
3ff892febb : Remove redundant definition of fmadd functions in complex Vec256 (#37167)
070dea2d7e : Updating submodules
ff21b15624 : cmake: add USE_SYSTEM_{LIBS,CPUINFO,SLEEF} options (#14699) (#37137)
05e98149ae : Refactor lambda post hook. (#37025)
35f7945828 : Revert D21196366: [pytorch][PR] Update cusparse deprecated Xcsrmm2 call
fd5b5cd604 : Allowing casting str to int in JIT (#36016)
989341c0c6 : Add comments to explain how MultiProcessTestCase works (#37179)
c4b9f3bf55 : Enable torch_speed_benchmark to accept different memory formats. (#36202)
ba3f8d35e0 : Enable stateless XNNPACK linear. (#35791)
72f80b5247 : Enable stateless XNNPACK convolutions. (#35790)
e98cdfa26f : Migrate `tanh` from TH to ATen (CUDA) (#36995)
7aec364bdf : extend gather shape check to handle incorrectly sized outputs (#37102)
006f1a32f8 : Mobile CPU allocator. (#36032)
ebfe631ed8 : [TensorExpr] Cleanup TensorExprKernel class and add CPP tests for it. (#36952)
c306f2ed08 : Revert D20660338: [pytorch][PR] Migrate addmv and mv from legacy to ATen native (CUDA & CPU)
438aed63a1 : Fix prelu_backward TensorIterator split (#36134)
230b68168b : [quant] Refactor test files (#36964)
ab2a9ab925 : Non-blocking SyncBatchNorm update (#36659)
f11df2d2b4 : Use temporary variable to store input parameters in loop. (#36288)
3799d1d74a : Fix many doc issues (#37099)
9763db3031 : `MultiProcessTestCase` to use instance rather than class method wrappers (#36826)
3880f14b64 : Canonicalize includes in torch, and add tests for it (#36303)
b3f04a398a : Re-enable JIT test `test_class_sorting` (#37140)
11cef0fe88 : Update cusparse deprecated Xcsrmm2 call (#36845)
a38c6e0454 : Migrate addmv and mv from legacy to ATen native (CUDA & CPU) (#30898)
0dd21c3b72 : Lets @dtypes take tuples of dtypes (#36908)
50a1850d8d : [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984)
b889e0da8a : [torch] Excluding test_fft_input_modification without MKL (#36680)
355cafde26 : [ROCm] Don't use MIOpen for tensors with more than INT_MAX number of elements (#37110)
f46231a2f4 : Revert D21144940: [pytorch][PR] ci: Change file_diff_from_base to be dynamic
f771c96852 : Returns float from complex angle (#36896)
45706bf6d8 : properly whitelist clang-format in CI (#37122)
7c7cb74887 : Add missing ${CMAKE_CURRENT_SOURCE_DIR}/complex_test.cpp (#37080)
ca665c682c : Separate RTLD_GLOBAL from _load_global_deps() (#36682)
d0291df7d9 : [resubmit] Rebase xla job on top master before running CI build. (#37085)
e557b7cec2 : Kill BC hack in torchbind (#37112)
4ab46f6baf : [pytorch] Delete unneeded scripts
de090c42b1 : Optimize binary size of assert macros (#37023)
7f50162d1e : quantized activations: clean up more unneeded quantizations (#36981)
2773ed3082 : hardswish: remove unnecessary quantize call (#36980)
6fcabf619d : [takeover] BTRS algorithm for fast/efficient binomial sampling (#36858)
baaa0943f1 : Update third_party/cpuinfo to include a fix for conda builds, older kernels (#37083)
8d6a8d2b3f : Fix DDP bug in single process multiple device use cases (#36503)
efcbcca454 : Revert D21138687: [pytorch][PR] Added complex dtypes to get_all_math_dtypes, complex acc type for cpu, fixed rdiv and pow for complex
78d5707041 : Fix type annotations and make MyPy run on torch/ (#36584)
e921cd222a : Move bulky constants from SobolEngineOpsUtil.h to .cpp file (#37086)
5fc391a646 : Enforce type promotion in `torch.cat` (#35030)
73bffeff62 : scripts: Distinguish between platforms in conda promote (#37089)
76cb7f2043 : Use filelist from build_variables.bzl to fetch distributed file list (#37090)
7bd2014eec : [resubmit][rpc] per-RPC timeouts for rpc_sync and rpc_async (#34650)
b0ee6c70aa : Remove register_mobile_ops.cpp (#37035)
8a6ab004f7 : Dockerfile: Update miniconda installer download location & remove unnecessary flag (#37082)
5710f278a1 : ci: Change file_diff_from_base to be dynamic (#36260)
cf77e56938 : clang-format don't run on master (#37058)
171476e870 : CUDA implementation of Sparse Adagrad Fusion for GPUs (#35762)
3580c93716 : [autograd] Demote the dist container shard line to VLOG(1) (#36978)
9b0e7ebab0 : [iOS] 1.5.0 Cocoapods Release (#37039)
a00d6758b8 : Migrate `cosh` and `cosh_` from TH to ATen (CUDA) (#36654)
e7a72bb0c6 : Add nomnigraph include folder to `Caffe2_GPU_INCLUDE` (#37056)
7c9e7ef128 : Revert D21171747: [pytorch][PR] Rebase xla job on top master before running CI build.
e75fb4356b : Remove (most) Python 2 support from Python code (#35615)
a894fff265 : Back out "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API"
3b832ee2bf : Use Python3 `super()` throughout `torch.testing.` (#37024)
28fadfc4eb : Reduce overheads on several CPU kernels by avoiding restrides. (#36875)
25eb250d77 : Added complex dtypes to get_all_math_dtypes, complex acc type for cpu, fixed rdiv and pow for complex (#36747)
191fa528f5 : Rebase xla job on top master before running CI build. (#36852)
3e3498cf03 : [quant][graphmode] torch.clamp (#36887)
799793f279 : [TensorExpr] Cleanup IRPrinter implementation for statements. (#37050)
b8e2d797c0 : [TensorExpr] Insert allocations for temporary buffer at the innermost valid scope. (#36836)
6df90bcecc : setup.py: Remove conflicting double documentation of USE_FBGEMM (#36993)
4593d87b84 : Do not link torch_python with nccl (#37040)
4bbc49f53a : Revert D21143025: [reland][quant] QuantizedCUDA implementation
7b03ce7bb3 : make sure logs work inside aten/c10 namespaces as well (#37018)
4a2372bc90 : Implements torch.isclose for complex tensors (#36456)
5c2b273089 : Add RRef Python Helper to launch function on the referenced object (#36619)
b982a6a247 : Expose torch.distributed.is_available() API (#37021)
25abdcb3d1 : [TensorExpr] add Block flattening to IR Simplifier (#37013)
a850d8a526 : Fixes exponential with lambda=0 (#36837)
dc327d9082 : [TensorExpr] Remove obsolete code for handling dynamic shapes from kernel.cpp. (#36686)
359e7f4bba : Teach IRParser to parse strides along with sizes in a tensor type. (#36951)
8eb22f6ee9 : Revert D21161361: [pytorch][PR] Revert "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API"
443fe7ca0e : [rpc] Avoid wireDeserializer overreading buffers by 1 byte (#36976)
b019a8d484 : fix spatialbatchnorm on nnpi (#36987)
f0a533c5dd : Fix flaky test_backward_node_failure_python_udf (#36969)
1592d6842c : [resubmit] Move profiler to a dispatch wrapper (#36766)
bcdb0727c2 : Revert D20907254: Fix long line splitting issue in python_print
a92f1dc85e : native_functions.yaml: reset_grad_accumulator (#36431)
806f22b167 : find backtrace by cmake module (#36017)
6ebfff6c4e : Add locks to fallback register/deregister. (#36628)
bf676682e7 : Fix long line splitting issue in python_print (#36188)
e1742e8e4e : Revert "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API" (#37019)
6eb109e1ad : Enable float only requantization. Part 1. (#35856)
1f82679311 : Revert D21156042: [pytorch][PR] CMake/Ninja: fix dependencies for .cu files
4e463b6366 : add missing ops for portal TTS model (again) (#37007)
b607c83a26 : Add support for bool/byte `attn_mask` tensor in MultiheadAttention/Transformer modules (#33763)
9854df673c : [TensorExpr] Fix bug in For elimination in the IRSimplifier. (#36965)
6d13a334f6 : Remove use_c10_dispatcher: unboxed_only (#36838)
ea97fa1f2a : [PyTorch][Dist] Trigger pre/post hooks of output function nodes under distributed autograd (#34501)
97d3a8495d : [reland][quant] QuantizedCUDA implementation (#36936)
4efef475d7 : [WIP] make test_distributed gloo test use MultiProcessTestCase (#36970)
6383373a04 : [quant][graphmode] fused conv3d + relu (#36885)
2ccdc39dce : Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API
a05406ea56 : [clang-format] Disable progress bar if stdout is piped (#36955)
71ec8b2002 : Switches test_jit to use float32 as its default scalar type (#36982)
54f265249c : Optimize grouped Conv3d performance (#36355)
00b7d84eb7 : Add a .with_cache() method to distributions.Transform objects (#36882)
01100cb477 : Put TORCH_LIBRARY in torch/library.h; add custom class API (#36742)
db84689c09 : CMake/Ninja: fix dependencies for .cu files (#36938)
246b208e4f : make merge_fp32_into_fp16_inputs generate ops for each partition (#36973)
be52b7f0ea : Documentation LU Decomposition: deriving L, U, and P (#36907)
ff435a0e6b : [pytorch] add test for empty tensor support in nn.Linear (#36983)
0f3af8529a : Revert D20961463: [TEST] add ops for portal TTS model
98c293c1ef : Do not use VLAs in vec256_qint.h (#36855)
68f6b9873b : [TEST] add ops for portal TTS model (#36971)
3d2d5c82da : Clean-up non-AVX variant of `bitwise_binary_op` template (#36966)
a1eb591ea6 : fmadd in vec256_base should be on Vec256<T>, not T (#36751)
cdc1ca040a : Enable test_hardsigmoid_grad_xla on pytorch side (#36967)
742d9796bc : [ROCm] Enable wrongly skipped tests on CPU on ROCm (#36968)
ce0500eb4c : Ensure linearIndex of advanced indexing backwards is contiguous. (#36959)
59f923e884 : Update NNPI backend to 0.5.1.8 (#4397)
346215caa4 : [jit] Adding vectorized load/store support for JIT generated CUDA kernel (#36555)
3ae70cb847 : Add RecordFunctionGuard (#36215)
a14a8376aa : Link NCCL lib to TORCH_PYTHON_LINK_LIBRARIES when USE_NCCL=1 (#36948)
32307efd68 : Fix flaky test_barrier_timeout* tests for test_distributed. (#36963)
5e504e83e8 : Add sync-point insertions and block/thread local memory allocations (#36563)
0647f34477 : Delete docker build job for pytorch-linux-bionic-clang9-thrift-llvmdev (#36930)
c03d149483 : [quant][graph] Add quantizedmul_relu and quantized::mul_scalar_relu ops (#36844)
ee2a9ac56e : [quant][graph] Support for quantized::mul and quantized::mul_scalar (#36818)
1c15cb4773 : Add bundled input support to speed_benchmark_torch (#36765)
dbdd0f50f4 : [quant] Minor refactor in fused conv names (#36883)
28f439d4f4 : add absolute alias for abs (#36597)
4d171c0ed9 : hardsigmoid: add PyTorch wrapper for the QNNPACK path (#36699)
1d720228d2 : hardsigmoid operator for QNNPACK, using LUTs (#36698)
97f2513c26 : Canonicalize includes in aten, and add tests for it (#36301)
47023148ee : Convert C casts to static casts (UnaryOpsKernel) (#36400)
a2951a1ea1 : [quant][graph] Update quantize_dynamic_script API to take sample model args (#36817)
63e9d95c12 : Remove hacked twins from codegen (#36666)
4e365b9cd1 : [Distribution] Implement kl divergence for Cauchy distribution (#36477)
4d2502a0c2 : fix explicitly defaulted constexpr assignment operator fails to compile error for gcc 5.3.0 (#36561)
e0e70589ef : [quant][graphmode] tanh pattern and test (#36880)
752d3c281a : [profiler] Allow record_function ctx manager to profile futures (#35055)
1e054bfbdc : Report error for lt, le, gt, ge in complex Vec256 (consistent with <, <=, >, >=) (#36646)
dc4d888193 : ROCm: don't warn about CUDA compute capabilities (#35949)
2fa17dedac : add a fast path for EmbeddingBag calling FBGEMM (#36679)
68f847c4c6 : [rpc] Remove redundant call to createExceptionResponse (#36857)
399f494d22 : Add at::aten::hardsigmoid symbol (#36851)
13391cebe2 : ai-pep: match the qlinear benchmark to linear (#36674)
25649684ed : ai-pep: align qconv benchmark to conv (#36673)
c7cf4c1bd6 : Bmm sparse dense (#33430)
30e7055ed7 : Revert D21078446: [pytorch] Route default warning sync to LOG(WARNING)
0f0d69009e : Makes CUDA -float->uint8 cast consistent with CPU (#36832)
9d5dda7c2f : [pytorch] Route default warning sync to LOG(WARNING) (#36768)
49457a7be7 : Logging for ATen op subtype
246e9abf3f : Backward-compatible workaround for ATenOp index with dtype=uint8 (#36667)
60c3060621 : Remove CUDA9Workarounds.cuh (#36840)
3c55b5a8ef : Update persons_of_interest.rst
1341ea4802 : Fix MaxPool3d CUDA backward incorrect results for non-square output (#36820)
1b3741aa7f : [WIP] reenable bfloat16 masked_select (#36859)
be9748f226 : Minor tweak of FakeLowp CMakefile (#36861)
49b10c58a3 : Revert D20896697: [pytorch][PR] QuantizedCUDA implementation
f6daa6220e : QuantizedCUDA implementation (#35463)
54ed6fd3ee : Use both absolute and relative tolerance in testing (#34258)
3aec9f7924 : [AIDemos] Add missing operators for AIDemos (#36756)
b0b9e704ed : [nnpi glow unit test] SLS tests shape sweep with hypothesis testing (#36833)
8b685a8af0 : C++ make constructor NamedAnyModule(name,any) public (#36869)
6ba734bae9 : Vectorize reduction when reducing on fastest striding dimension [resubmit] (#36873)
136d84dd38 : Enhance error message for MPI unavailability. (#36781)
d933ec14ce : [c10] Fix the handling for Caffe2 ops which return tensor list (#36841)
0e6c66493a : [engine] Ensure future is complete when exiting Engine::mark_graph_task_completed() (#36856)
5d9b4d5720 : Update contribution_guide.rst (#36438)
2e93808cde : Update functional.py (#36600)
197c85fcbc : Use hypothesis to generate seed (#36860)
57c50db441 : [reland][quant] Add backward compatibility test (#36842)
b245b1d23e : Open source fbgemm fp16 pack op (#36791)
1b1a6a90c0 : Open source fakefp16 BatchMatMul op (#36789)
b08494eb19 : Use hypothesis to control the rand seed (#36717)
dc1f9eee53 : Avoid printing erroneous warning about "MIOpen not found" for ROCm builds (#33837)
6963973d5b : Print GPU info for ROCm test runs (#36827)
a64ea8ea04 : Back out "Vectorize reduction when reducing on fastest striding dimension" (#36854)
681158e211 : Print all test output while running unit tests in bazel (#36825)
f767de608c : [tensorboard] Add strings to image boxes (#30941)
4668d47d1f : Add build_variable.bzl to CMAKE_RERUN target (#36809)
86f354c530 : Python binding API to optimize a mobile model on a script module. (#36357)
fac076a82c : [pytorch] move prim::TupleIndex from register_prim_ops_fulljit to register_prim_ops (#36808)
0a8a012005 : [RELAND] Port to quantized and other operators to new registration API (#36799)
d7608c7f56 : Move DICT ops to lite interpreter (#36816)
cc5befc461 : [Format] format a few files (#35187)
54a1e8509c : Reduce binary size of schema inference (#34735)
3a400b8dc3 : [tensorboard] Fix function input parameter for add_hparams (#31301)
d92005ff73 : Vectorize reduction when reducing on fastest striding dimension (#36709)
e6bc34f549 : Amp gradient accumulation example (#36601)
adca88e821 : Fix hardsigmoid/hardswish for proper device dispatch. (#36704)
9df9aef9b9 : [ROCm] Use float datatype for RNN test for MIOpen (#36772)
4c666d42ff : Handle log_sigmoid(out=) properly. (#36736)
46288465fe : Print keyword-only arg symbol for function signature suggestions. (#36780)
ebdc4f02ad : Fix incorrect merge of #34136. (#36760)
32bbf12aa7 : Make trivial thread-idx for degenerate statements without thread-idx. (#36480)
31f91d645a : Improve aten::backward handling (#36750)
b45b9673a1 : Fixes clang format (#36787)
a89d1ed549 : Move unboxing for factory ops to after dispatch (#36564)
b5483b8286 : [pytorch][PR] Re-enable a failing test (#36763)
f00014b790 : Revert D21080503: [pytorch][PR] [quant] Add backward compatibility test
b0227f2965 : Add a test to verify non-contiguous tensors work correctly with RPC. (#36705)
eccb40f505 : Optimize mobile model on cloned module instead of in-place transformation (#36621)
76f9528878 : fix an infinite loop in liveness (#36697)
6d4c509168 : [autograd] lower MAX_DEPTH limit according to TSAN limit (#36745)
d7fc05b0bf : Fetch TORCH_SRCS from `build_variables.bzl` (#36737)
dcfc121fd7 : Enable jit trace check_trace for quantized inputs (#36740)
484a00b2d3 : [quant] Add backward compatibility test (#36771)
2c558dba3d : quantized layer norm: add to static quant (#36690)
24aac32171 : [jit] Add dictionary as output of tracer (#36696)
e1cb8577ac : [jit] remove Dict iterationOrder and use insertion order (#36609)
05bbf6afb6 : Revert D20964193: Port to new registration API (part 1)
63e5058c88 : Fix naming of "strides" method in TensorType (#36727)
753157b88e : [quant][graph] Graph mode quantization support for sigmoid (#36622)
17c268be10 : [quant][graph] Add quantized batch_norm2d_relu to graph mode (#36552)
66158868d5 : Update reference to RegisterOperators in error message in Convolution (#36389)
1fc3556ec9 : Teach the tensorexpr vectorizer to handle nested For loops. (#36467)
e9b4580411 : Revert D20839674: [pytorch][PR] Re-enable a failing test
37479ddf4e : [caffe2] create and register child ws in pybind (#36741)
5b515fd034 : Delete pytorch_linux_xenial_cuda9_cudnn7_py3_build (#36731)
4894cba572 : Revert D19775659: [WIP] Move profiler to a dispatch wrapper
ee3d046f87 : [TensorExpr] Add support for Axis reordering in LoopNest (#36540)
a85c835196 : [WIP] Move profiler to a dispatch wrapper (#33057)
487dc0f961 : Re-enable a failing test (#35847)
3567b881a5 : make sure dispatch test works on windows (#36729)
cb6bebfa9b : [quant][graph] Add quantized batch_norm2d support to graph mode (#36692)
dd4dece68a : [quant][graph] Add useQuantizable function (#36691)
54a575c9bd : [JIT] fix torch.tensor jit dtype (#36587)
e29348f828 : Switch to pybind11 style registration function API. (#36258)
3c85f44ce8 : Fail setup.py if trying to set up with Python 2 (#35613)
83de675ebf : Fail CMake setup if trying to build with Python 2 (#35612)
ac950bb9c8 : Update docs for master to remove Python 2 references (#36336)
f5c230b892 : Make futures vector a local function var (#36677)
f11c4f90c2 : New CUDA Fuser: Unrolling support, interface refactor (#36435)
d7fabfd5df : Implements complex isfinite and isinf (#36648)
d0c925f1c7 : Returns float tensors for complex inputs to abs (#35871)
bede7d9995 : Fixed check for the buffer overflow in assert (#36476)
9e016f77a8 : Added complex types to get_all_dtypes and turned on masked_fill for complex (#36335)
049dede3be : Move rpc.rst back to the source folder to preserve existing doc URLs (#36675)
30fabd9398 : Creates "Versioned Symbol" pattern to preserve serialized Torchscript semantics (#36300)
0785585db9 : Reland Make DispatchKeyExtractor forget about TensorOptions (#36290) (#36562)
f89fc204c6 : [caffe] fix input order in SLS op documentation (#36708)
7539ea0207 : [TensorExpr] Add simplification of length 0 and 1 For loops to IR Simplifier (#36348)
e17cf93b9a : Report tensorexpr test results (#36684)
f548946363 : Fix out-of-boundary access in `caffe2::StartsWith` (#36672)
30dd0b74fd : Save view_fn for inplace update on view tensors (#36073)
f64fae9193 : Fix race in mark_graph_task_completed. (#36640)
a5d0d762fa : redo of add quantized layer norm implementation (#36593)
91f1d79d1b : hardswish: enable for QAT (#36604)
65df8b3886 : hardswish: make it work in static quantization (#36545)
9cbeb0faed : [JIT] Dont optimize shape peepholes on inline (#36404)
a99b169828 : [TensorExpr] fix a bug in LLVM codegen around empty kernels (#36660)
8d66f88eb1 : [jit] Fix bound method copying (#36546)
5927a6731c : [PyTorch Docs] Updated RRef docs to indicate RPC Retries (#36678)
6bd6b70a02 : Fix clang-format (#36685)
609b6875f9 : Enable test_upsamplingNearest2d_launch_fail on ROCm (#36624)
2cf53128a8 : Switch xla job to use bionic clang9 image (#36618)
ddd9eb3e12 : Make special cases prim ops instead (#36635)
a3314f1902 : [jit] Add return statement back to Future::addCallback() (#36662)
dad25ae47d : Add the one-block multi-thread global reduction support. (#36306)
e80813fae3 : Add trivial reduce for Cuda (#36293)
efab75730f : Migrate release CI jobs to CircleCI for Windows (#36657)
5afd816793 : Add a warning for Single-Process Multi-GPU DDP (#36656)
df9a250b8d : [pt][quant] avgpool3d for graph mode (#36598)
ba3d4019e9 : Remove prim::CudaFusionGroup from register_prim_ops_fulljit.cpp: it is registered in jit/codegen/cuda/interface.cpp. (#36661)
62e884f8d9 : Report bazel-test results as CircleCI metadata (#36643)
f98e0a099a : [pytorch] handle pybind11 style registration API with code analyzer (#36607)
527cf877d6 : Delete old `mkl_speed_test.py`
4a49ad0da7 : Fixed error Regex Parsing for Node Failure Tests (#36620)
87be115fd0 : Error Handling in RPC Agent (#35263)
1e7155caa5 : Bucketization (#7284) (#34577)
3c8921b747 : hardswish: add backwards pass test (#36420)
16e90eba59 : hardsigmoid: add cuda kernels (#36351)
cdfefa77a3 : PR for double backwards of nn.Fold and nn.Unfold (issue #33452) (#36379)
9cac2b83d9 : [pytorch] improve code analyzer to dump ops called from c++ functions (#35941)
f99a28f515 : [ONNX] Adding a pass to replace interpolate function with aten::__interpolate (#35744)
cf27d07e04 : Implementation of STORM optimizer caffe2 python wrapper (#36399)
f7c9faab05 : Implementation and operator test for STORM optimizer (#36225)
84f4061a67 : Back out "Revert D20147487: Refactor jit::Operator to more clearly distinguish the two possible states" (#36634)
70d3616aa1 : [PyTorch] Split `libtorch_sources` into smaller filelists (#36583)
91e59f5fe2 : [PyTorch] Remove build definitions from `build_variables.bzl` (#36602)
80b01ba4f3 : [TensorBoard] fix #34954 (#36496)
ceecca3324 : Clang-format: whitelist test/cpp/tensorexpr/*. (#36616)
317f598103 : [TensorExpr] Clang-format test/cpp/tensorexpr/*. (#36615)
37aab14d14 : [future] Avoid some future callback self-captures. (#36502)
1a0b95e7e4 : bfloat16: enable basic math function (#35172)
73f11a0b23 : Update qbatch_norm2d opbenchmark test (#36630)
67e0bf14b7 : Add support for Dict as output when connecting script and tracing (#36265)
ce3555a635 : Relanding masked_select cuda port from TH to ATen (#36539)
9216c67c9e : Revert D21021677: [pytorch][PR] Add core of c10::complex
5150334c1d : Unconditionally register schema even for manual registration. (#36250)
6c742af235 : Remove attributes and method of submodules in frozen module (#34787)
01b121bd14 : Fix bc test (#36588)
7390c333d6 : [CI] fix test_distributed for python 3.8+ (#36542)
25252816cf : Add core of c10::complex (#35524)
9a680056ad : Remove extern C for TH_API (#36142)
8a60d8bfe2 : Create a new bionic image with clang9 (#36187)
4ebb1278e0 : [quant] Update qbatch_norm name to qbatch_norm2d (#36494)
f3f640d479 : move test_abs to device-generic tests (#36465)
4b3e3d8227 : [improve logging] add the param information when logging the optimizer engine (#36558)
d3cf9452af : doc note on deterministic/non-deterministic gradient for min/max/median (#36481)
69e3ee2d5f : DataLoader: properly diagnose exceeding file descriptor limit (#34768)
ed2d1cb2c4 : Revert D20147487: Refactor jit::Operator to more clearly distinguish the two possible states
fb70b4fb93 : [caffe2] Add support for std::shared_ptr<std::vector<TensorList>> in PackRecordsOp and UnPackRecordsOp (#36550)
018c3420b8 : Make dim, numel, element_size into prim ops (#36551)
dd64e738c5 : Expunge TensorId from all DispatchKey names. (#36240)
8f501f3083 : Update internal invariants in the world of manuallyBoxedKernel (#36388)
076d46f826 : [ROCm] Add debug flag (#36521)
6e7eaabf49 : Lock optimizations for DistAutogradContainer. (#36529)
411ccce279 : Revert D20936595: Make DispatchKeyExtractor forget about TensorOptions
999d7f6ab2 : [jit] tracer flag to guard risky behaviors (#36277)
d5ba39c25d : [TensorExpr] Postpone insertion of Alloc/Free statements in computeAt. (#36526)
4d1ccafb4b : [caffe2] Enable copying for caffe2::Tensor (#36468)
289d52c120 : Fixing SyncBN dgrad (#36382)
70b826a884 : Make DispatchKeyExtractor forget about TensorOptions (#36290)
36b273abc0 : Refactor jit::Operator to more clearly distinguish the two possible states (#33905)
9fcb4ab393 : Fix either::map naming (#33904)
eb00bac2b5 : Make FakeLowP tests work (#36525)
8544591f5a : Fix a segfault in DeviceThreadHandlePool and PoolWindow (#36416)
c7631716da : Output more debugging information for reduce kernel (#35946)
1e22717118 : qnnpack hardswish - pytorch op integration (#36320)
0964b662c3 : qnnpack hardswish - LUTs (#36252)
455d4aab64 : [PyTorch Numeric Suite] Add weight compare API (#36186)
739351fac4 : Fix linter warning: replace f-strings with str.format for Py2 compat (#35492)
501d9f33ab : Fix clang format (#36544)
0b7e832325 : Fix signed integer overflow in rng_test.h (#36421)
fd008bd170 : Make patterns in test_unmatched_annotations more flexible (#36422)
1f40bddf57 : [TensorBoard] fix #36471 (#36495)
c49de6ce0d : [TensorBoard] fix #33140 (#36497)
0912284830 : CI failure tips (#36507)
b38d505e42 : [shape inference] use max_seq_size as max_feature_len in SLS and LengthsRangeFill inference (#36346)
110893abf0 : [Shape Inference] Infer input(1) from input(0) in elementwise ops (#36498)
c9a1fc2b31 : replace Generator arguments with c10::optional<Generator> (#36232)
5a7f889a11 : Use bazel build rules from fbgemm (#36339)
3526627f46 : Use unittest assertWarns instead (#36411)
d7b7998370 : Enable more tests in fbcode (#36418)
0035aeef40 : [autograd] Avoid holding lock when completing GraphTask futureResult (#35101)
765bf8f03d : Remove duplicate bindings from torch/csrc/jit/python/init.cpp. (#36492)
ced9edbaa4 : [Torch Device][c10] Fix the expected torch device error message (#36446)
d070c0bcf0 : ROCm: enable cpp_extensions.load/load_inline (#35897)
ce54f0d411 : Back out "Revert D20449887: [dt][caffe2] enable using smart exceptions in async nets" (#36172)
d591a7bb82 : Use Function to implement fork. (#36179)
967cdc2baf : Simplify replicate logic (#36174)
4f956fcf88 : _requires_grad -> requires_grad (#36168)
e3b6dd1708 : [rref] Minor tweaks in rref_context (#36419)
2bc49a4b85 : block_diag dense (#33449)
35cc2bbca3 : Removed unnecessary call to '_strong_wolfe' in LBFGS. (#36453)
1e15063761 : ThroughputBenchmark: integration with Autograd Profiler (#36282)
a2e059cfa6 : add missing 'import warnings' (#35313)
379e4d9cad : [pytorch] Make behavior of SobolEngine consistent w/ other RNG functions (#36427)
d2e0c628e9 : Updating submodules
b92f8d9b7e : Revert D20950587: [pytorch][PR] Added complex types to get_all_dtypes and turned on masked_fill for complex
6be8560375 : Do not double compile generated files (#36417)
4bcd8ab6f7 : Added complex types to get_all_dtypes and turned on masked_fill for complex (#36335)
d83509e603 : [quant] Fix for the conv1d kernel shape (#36397)
0c9bf64989 : Disables complex clamp (#36373)
254be6a201 : Adds NumPy array x Torch tensor binary ufunc interaction test (#35945)
4f728c9d81 : [ONNX] Enable constant folding for Shape (#35386)
e3af0c9f9b : [TensorExpr] Add new file bounds_inference.cpp to BUILD.bazel. (#36440)
c1efe1ddb5 : Enable building of FakeLowP ops (#36170)
7aa6a8fd7a : Disables complex min and max (#36377)
7b9ab91614 : Improve boxed dispatch performance (#33313)
22212a82b4 : Remove functor factories in KernelFunction (#35488)
91441ae87f : [Lite Interpreter] Move Implicit ops to register_prim_ops.cpp (#36406)
df5f0a04ff : [TensorExpr] Implement LoopNest::computeAt (#36112)
397aa46a3e : [TensorExpr] Bounds inference (#35120)
c856a2cb0d : Move unboxing to after dispatch for ops with manual kernel registrations (#36398)
7e8c27ed25 : Fix view_complex_as_float for empty tensors (#36415)
742c77971a : Revert D20961711: [pytorch][PR] Returns float tensors for complex inputs to abs
ae452a81a9 : [DistAutograd x JIT] Capture global state, dist autograd current context id, before thread switching triggered by JIT future.wait() (#36395)
0dbb21f89e : Revert D20931186: Enable c10 unboxing for ops with TensorList
409346eee3 : Updating submodules
5b331e8611 : Catch exception in distributed engine callbacks. (#36118)
d71aeeceef : Updating submodules
e892398922 : Upstream generic device test patch. (#36321)
4305c7f97e : Remove experimental c10 ops (#36394)
6920b13500 : Move fakelowp tests from glow to caffe2 (#36409)
bd4761123d : Revert D20958928: [pytorch][PR] Port masked_select cuda from TH to ATen
86e8c49fae : Revert D20523080: [pytorch] reduce memory footprint in fused conv QAT ops
eddbee19a7 : hardswish: add cuda kernels (#36350)
7576cf8d00 : [caffe2] Use cpuinfo in perfkernels to simplify build dependency (#36371)
343f2c0925 : Port masked_select cuda from TH to ATen (#35429)
d27dccfdaf : Open source the missing part of FakeFp16 ops (#36353)
c029aaa25c : Updating submodules
f999d600d0 : Fix the typo in operator name string (#36296)
82be7c755a : [pytorch] reduce memory footprint in fused conv QAT ops (#35002)
15c7486416 : Canonicalize includes in c10, and add tests for it (#36299)
42457e634d : [TensorExpr] add support for Reduction Ops (#35866)
5177906d67 : [Shape Inference] Infer shape info for second input of elementwise ops (#36365)
4a98ba811c : Enable c10 unboxing for ops with TensorList (#36330)
e574ff3511 : Updating submodules
d73ee763fc : Fix the clang-format error caused in register prim ops change. (#36393)
247f2df840 : Fixed include file header guard. (#36329)
79973a16ce : Add missing TORCH_API annotation (#36391)
b0c90fad93 : Re-enable test_avg_pool3d_nhwc (#36259)
1875c2e4bd : Add torch.Tensor.as_subclass method. (#34369)
7c825bad10 : [RELAND] Add __torch_function__ benchmarks (#36138)
3aeb2b1562 : Returns float tensors for complex inputs to abs (#35871)
817e4f9ef1 : Correct a ValueError in dataloader to TypeError (#36244)
a91097bdfb : Revert D20964368: Revert D20408831: [Lite Interpreter] Operator registration migrate from manual to selective build
586481a6e2 : Revert D20408831: [Lite Interpreter] Operator registration migrate from manual to selective build
ee4cc96eee : Vectorize in-place comparison operators (#35117)
7fcf8b0a3b : [Lite Interpreter] Operator registration migrate from manual to selective build (#35426)
9a4bc67f66 : [caffe2/detectron2] fix Mask R-CNN caffe2 conversion on GPU (#36366)
31dca07fa5 : Updating submodules
37c1bd2946 : Move FakeFP16 back to internal to remove dependency on MKL (#36297)
2ec6a30722 : Bump produced file format version (#36085)
aac36a89ff : [model transform] tuple to arglist jit pass (#36093)
391a36a59c : Updating submodules
891a533b24 : Adding Conv1d to quantization default_mappings (#36352)
3d7c9abbf7 : Refactor thread_reduce for better unrolling and vectorization in the future (#36014)
d9227bb311 : Target 4096 blocks instead of split to large grid for large reduction (#35997)
2f5b523cd0 : Remove unnecessary whitespace in complex tensors (#36331)
d916cf05d4 : [quant][test] Split TestQuantizeScript to two TestCase (#36354)
8cb1950805 : [JIT] fix alias assertion (#36178)
48bf3eef1a : [ONNX] disable size optimizations for onnx (#36243)
c5662dd5dc : Base class for the quantized ConvTranspose (#35370)
7374a00bef : [pt] Support benchmarking PyTorch JIT self-contained models. (#35279)
f2bae8e869 : [quant][fix] at::print for per channel affine quantized tensors (#36280)
51456dc808 : Updating submodules
358466f1da : [quant] Move graph mode quantization tests to test_quantize_script.py (#36324)
14ce500a9b : Appropriately handle exceptions in autograd engine. (#36019)
9662ef66b7 : Fix `torch.min` docs (#36319)
1ffc2d9ace : Updating submodules
2de3e491a8 : [RELAND] Add temporary impl_UNBOXED syntax sugar for unboxed-only defs. (#36223)
ef07bb65e9 : [RELAND] Add DispatchKey impl overload; remove use of torch::dispatch (#36222)
477f1c047c : [TensorExpr] add simplification of constant branches to IR Simplifier (#36257)
90c7db8ae3 : caffe2/core/plan_executor: add cancellation of async nets on error + propagate exceptions via std::exception_ptr for stack traces (#31966)
88c22070fe : Revert D20768930: add quantized layer norm implementation
d51ad40fe1 : [quant][onnx] Mark upsample_nearest2d, sigmoid and reshape as no scale (#36325)
e551bfc8de : New CUDA Fuser code lowering refactor (#36199)
f0ea6862ba : Support for pruning delays in Adagrad Optimizer (#34527)
376542c83d : caffe2: preserve python exception type from PythonOp (#36267)
8493383e94 : remove some code that is never called (#35033)
264da24c9e : Fixing RPC Shutdown and Thread Joining (#36239)
9497b21e63 : Grad input padding support for dilation argument (#33872)
e311e53abe : Revert D18672405: Revert D18672405: Use codegen'ed unboxing wrappers (#36010)
866227cfb3 : [pt][quant] Add vector path to copy kernel for quantized data types (#36189)
1443db8dc3 : [TensorExpr] fix bug in IRSimplifier when multiplying by 0 (#36287)
5061ef63f4 : Revert "Revert D20885968: [pytorch][PR] Enable backtrace with MSVC" (#36205)
423b01431b : make vendor match with this implementation (#36302)
f813e7184e : add quantized layer norm implementation (#35329)
23e5f6a7be : Add avx2 integer horizontal sum and sum of squares to vec256 qint types (#35693)
126d00c8dd : [pytorch] move force schema registration output into a separate file (#36284)
fdf7a833e7 : Address printing inconsistency between float and complex tensors (#35841)
2b30e7fe11 : Move inplace view tests to generic testing framework (#36281)
ddf5755ff8 : Fix DDP error checking for unused parameters (#36054)
7403545518 : Fix exception message of `torch.optim.AdamW`. (#36088)
075b732f26 : doc fix for KLDivLoss (#36137)
5bbcddae3b : Add at::Generator to IValue (#36231)
ea8e347135 : Replace std::shared_ptr with c10::intrusive_ptr in at::Generator (#36230)
62f9312abd : Revert D20783298: Fix naming of "strides" method in TensorType
7487b2a184 : [caffe2][debuggability] add length checks to MergeMultiScalarFeatureTensors (#36248)
5f25e98fc7 : Use _sparse_coo_tensor_unsafe to shallow copy sparse tensors in accumulate_grad (#36292)
f59e646faa : [rpc] Allow profiling in RPC to work with torchscript function invocations (#36275)
3d199aab08 : Updating submodules
dd36f8c21b : [FBGEMM] Open sourcing fbgemm_fp16 ops (#36212)
291c910e85 : [future] Re-land some safe portions of the future change. (#36254)
2458f6c63e : Move all nccl from torch_python to torch_cuda (#36193)
34a10238d5 : fix is_float_scale_factor warning (c++) (#35601)
3a8838840b : Add comparison operators to Vec256<BFloat16> (#36106)
0bc17ddaa9 : Use templates instead of macro when defining Vec256<BFloat16> bin operators (#35844)
0f34d648c8 : Fix signed-unsigned warnings (RELAND) (#36224)
16980e455f : Fix naming of "strides" method in TensorType (#35170)
6972c27d94 : [quant] Enable fusion for conv modules with bias (#36173)
caa45c8e33 : [TensorExpr] fix warnings (#36167)
76c7652cc5 : Add distributed data parallel benchmark tool (#35198)
4f3af09162 : [JIT] Incremental updates to Alias Db in Mutation Remover pass (#35421)
4db87f4f97 : [JIT] Allow mutated values as functional graph inputs (#33297)
6016f694c0 : Revert D20901746: [pytorch][PR] Update docs for master to remove Python 2 references
83907ded1d : Revert D20895316: [pytorch][PR] [JIT] List reland
7c76c71616 : [caffe2] remove quant options of SparseAdagrad from OSS (#35608)
3be6a4db4d : improve the quantized batch_norm performance (#35639)
82dd01150c : Fix race during RPC shutdown. (#36113)
5910c51545 : Exclude `torch/csrc/cuda/*nccl*` from `clang-tidy` (#36249)
fab06bfb75 : Add utility for bundling sample inputs with models (#35631)
645d57ea01 : Expose JIT Module's "register_attribute" to Python (#35630)
246416ac3b : `clang-tidy` workflow only needs `cuda-toolkit` (#36241)
9a2b505563 : [JIT] Shape inference improvement (#35051)
195362d74c : [TensorExpr] scalar factorization of Div (#36154)
a91535930f : [future] Undo some recent torch::utils::Future api changes (#36220)
93256617c8 : C++ Adam optimizer - corrected messages for check of default options (#36161)
ae71c5c7e6 : Optimized bincount for the CPU by removing extra size() calls (#35822)
e99c53dc86 : Fix broadcast_coalesce for empty tensors (#35965)
38849e119f : [pytorch] Add error when PyTorch used with Python 2 (#36151)
f99e6370dc : fix build breakage of //sigrid/... (#36206)
07306406ce : s/repo.continuum.io/repo.anaconda.com/ (#36233)
9ada7abc18 : [JIT] fix comprehension scope writes (#36105)
b9fc4358d6 : Enabled debug symbol in test_cpp_api_parity tests by default. (#36209)
b9260bdb7b : Don't build deps for `python setup.py egg_info` (#36208)
901bb3c350 : Delete as_variable_ref (#36096)
4c8e38c6d7 : Minor doc improvement for code_analyzer (#36177)
5a03664fd5 : Attempt to fix the pytorch_cpp_doc_push build by pinning breathe. (#36190)
83abd7ffbf : Revert D20909696: [pytorch][PR] Fix signed-unsigned warnings
6f8017bf07 : Enable simple executor for FBCODE (#34748)
f0bddd5e7a : Fix clang-format broken by https://github.com/pytorch/pytorch/pull/33788 (#36203)
c04232ae2b : Back out "[reland] Skip OpenMP Thread when OMP_NUM_THREADS is 1" (#36198)
25fe27981f : Fix signed-unsigned warnings (#36196)
e2f9c668a2 : Use `repo.anaconda.com` instead of `repo.continuum.io` (#36201)
986a8fdd6a : Use counter instead of vector of futures in `_parallel_run` (#36159)
fc5d658324 : [rpc] allow ability to abort second call to RecvWork::wait() in ProcessGroupAgent::listenLoop (#36084)
4b916b6b75 : Mark every frame with a unique id (#33788)
72b55fea6b : [jit] Make torch::utils::Future and ivalue::future apis closer (#35849)
373dc7c8ef : Group libraries in TOC and add PyTorch Elastic (#34928)
2afe171538 : [JIT] List reland (#36146)
43234be525 : Update docs for master to remove Python 2 references (#36114)
ebf743a63a : Fix bazel-test linking issue (#36157)
8afa001d89 : Revert D20885968: [pytorch][PR] Enable backtrace with MSVC
c2901333f1 : Updating submodules
681ca45717 : [ONNX] Export torch.inverse op (#35318)
6bc8ffe824 : [JIT] Optimize before inlining (#35562)
2b06d5adc6 : Fix compilation errors for enabling Intel nextgen compiler (icx/icpx) (#35939)
444073efde : Add GenerateI8Depthwise.cc to bazel build definition of fbgemm (#36144)
16d9bcd725 : Fix test_avg_pool3d issue in pytorch_paralleltbb_linux_xenial_py3_6_gcc5_4_test (#36103)
7920a970c6 : Don't statically link MKL multiple times on Linux (#36078)
34b32ca914 : Remove operator-> from at::Generator (#36027)
3328a2f903 : Rename CPUGenerator to CPUGeneratorImpl and CUDAGenerator to CUDAGeneratorImpl (#36026)
7e84a30ad6 : Enable backtrace with MSVC (#36039)
b55dee9fe1 : fix max_pool2d cuda version Dimension out of range issue(#36046) (#36095)
3e5d25fdfd : Skips test_avg_pool3d_nhwc (#36130)
70acc9c0f5 : Skips test_qadd_scalar_relu (#36128)
2e8f9547fa : Updating submodules
447bcd341d : Bazel build of pytorch with gating CI (#36011)
64594d8333 : Clang 9 and GCC 9 Support (#35835)
803a4e135e : Fixes CMake lint error (#36123)
449a4ca340 : Add more alternative filters in places people forgot to add them. (#36082)
3570ef6a0f : Revert D20876204: [pytorch][PR] Add trivial reduce for Cuda
459163b8eb : Revert D20449887: [dt][caffe2] enable using smart exceptions in async nets
0f243688be : Updating submodules
2f1ca26abd : Update NNPI Backend to v0.5.1.4 (#4334)
4c140052a6 : bfloat16: vectorized unary ops (#35092)
3da67ce367 : [caffe2] Factor libtorch_python_sources into exposed definition (#36005)
6a45584272 : Remove `__nv_relfatbin` section from nccl_static library (#35843)
a81be33a4e : Add trivial reduce for Cuda (#36092)
f421cf3978 : update comments on fake operator (#36086)
5d33cf5dfc : [Shape Inference] Set new shape according to precedence of dimType over previous value (#36081)
4ced22c5de : [JIT] Add IR Benchmarking tests to ai bench (#35732)
40a45957a0 : May fix TopKTypeConfig<at::Half> without an additional Bitfield specialization (#36077)
2173746f64 : Compile THCTensorTopK per dtype. (#36074)
7d1f06462c : Fixing Potential TSAN issue with joining RPC helper threads (#36094)
b8383b3d4c : [WIP] Enable NNC's LLVM dependency in CI (#35564)
2ef1ace877 : [rpc] call threadPool.waitWorkComplete after listenerThread.join() to fix (#35394)
cc78914755 : qactivation_benchmarks: small bug fix (#35731)
6405f26a02 : add more quantized activation benchmarks and input sizes (#35729)
b68c3827de : add benchmark for quantized batchnorm (#35389)
8ef82fc2c9 : [dt][caffe2] enable using smart exceptions in async nets (#34753)
3228939f23 : [JIT] Fix fake_range() (#36083)
3e402a5940 : [ROCm] Enable BFloat16 type for add_out_sparse (#35978)
cb385cb6d7 : Pin Sphinx to 2.4.4 (take 2), fix docs CIs (#36072)
0475d7b08d : [JIT] optimize mutableType calls (#35474)
45fc881f05 : [ROCm] Hotfix: Black list tensorexpr test set that has failures on ROCm (#36049)
59ed0c5fd7 : Strip newline when ingesting `version.txt` (#36002)
4ef383d5db : add type hints on recently added ops to make them scriptable (#35885)
8dba98da0f : [ONNX] Added support for constant folding onnx::Add and onnx::Sub (#35869)
d568c7d966 : [TensorExpr] add more detail to malformed_input exceptions (#35891)
82d58ed484 : disable the test to stop breaking the builds (#36053)
e56ba8481e : [ONNX] fix size for opset 11 (#35984)
8224398c14 : [pytorch] Fix the extra_repr print message for float16 dynamic quantization (#36044)
81c8ca1e2e : Disable tracing for Pytorch Mobile client (#36007)
66d50060eb : Temporary methods for real and imag values of complex tensors (#35879)
b3cdec88e3 : Fix torch complex exp CPU implementation (#35532) (#35715)
7ee88d61f7 : Rename boxing/unboxing files and utilities (#35411)
8a6173edf2 : [caffe2] tune prefetch distance
7b2b17f727 : Revert D20802884: [Shape Inference] Set new shape according to precedence of dimType over previous value
82087ee7f6 : Add DICT_CONSTRUCT and NAMED_TUPLE_CONSTRUCT to lite interpreter (#36015)
5fab1bf3e4 : Use `std::abs` instead of `abs` in lbfgs.cpp (#35974)
e3e2dd7779 : [Shape Inference] Set new shape according to precedence of dimType over previous value (#35910)
b7f4b6a6de : Support for XNNPACK max pooling operator. (#35354)
beb9430ff6 : Propagate input tensor names in XNNPACK backend. (#35351)
a604041a11 : Back out "[pytorch][PR] indexing: throw exception for masks with dtype=uint8" (#36013)
de04a1850f : Remove nonexistent op variable in complex tests. (#35722)
e5c6003f3e : Mark prim::rpc_async as having side effect (#35994)
e73ab30f3d : rand() and uniform_() for complex dtype (#35585)
6be9c77998 : Revert D20783179: [pytorch][PR] Bazel build of pytorch
fced0c9837 : Fix ATen/test/complex_test logic (#35976)
eba5bdbeaa : [pytorch] register c10 ops for static dispatch (#35193)
585f153d00 : Bazel build of pytorch (#35220)
4b64dffcb6 : Move uniform_() to DistributionTemplates(Migrate uniform_ from TH to ATen) (#35580)
4d5fe90046 : [rpc] replace tests on worker_name (#35955)
ccfcf47531 : Calls to Tensor::to pass MemoryFormat by TensorOptions (#34249)
d559a47933 : Enable relu fusion with prepacked linear/conv. (#35705)
eb42199788 : third_party: bump fbgemm to 0bb23bf9 (#35988)
a054d05707 : Add torch.utils.show_pickle for showing pickle contents in saved models (#35168)
ec3b355a0f : Update ostream << TensorOptions printer. (#35892)
03a4a4887d : Fix `clang-format` (#35969)
71669f0249 : Fix flake8 (#35968)
7b04772c51 : Keep same autogenerated files structure between fbcode and OSS builds (#35951)
ba3cec867f : Reenable test/test_tensorexpr.py (#35914)
af5121f62a : Invoke TensorExpr fuser pass from a graph executor. (#35913)
b3d30f2dc4 : [TensorExpr] Compiler warnings cleanups. (#35925)
b46fddf506 : idtt + zch distributed inference (#35763)
596153cad1 : [jit] Enable type tags in serialization (#35741)
19bbfbe1cf : [RPC][Better Engineering] Consolidated all rpcAgentRunning atomic booleans (#33915)
c5c63a2e35 : Add quick utility to transform scripted/traced models for mobile. (#35904)
f48008c261 : Set eval mode during optimization for mobile. (#35903)
6e13a7787b : [jit] Fix type comparisons segfault (#35929)
2fa3c1570d : Refactor C++ API parity test mechanism and turn it on in CI again (#35190)
2d8dbcd3ef : Remove python2 and 3.5 from requirements.txt, README and docs (#35677)
7468ef04c2 : [quant][graphmode] Add quantize_per_tensor.tensors (#35916)
f0c747243c : [quant][graphmode] Insert Observers for dynamic LSTM (#35894)
0429d2c9b8 : [quant][graphmode] Add new tensorlist observer for LSTM (#35893)
87582ae6c4 : Make RRef type_hint mismatch exception message more actionable to users (#35943)
ea8021d726 : Make intdiv_256 a more generic binary operator template (#35422)
a1cf3fd1da : lshift and rshift on CUDA should match the behavior on CPU (#35339)
beac3f27f0 : Make intdiv_256 a more generic binary operator template (#35422)
e707cee501 : Fix gcc-5.4 compilation (#35935)
be125d18dd : [ROCm] [ROCm 2.10+] enable fp16 dot in Caffe2 backend (#30432)
8484ca581e : Add `GRAPH_UPDATE` for `x.size()` in Peephole Optimize (#34865)
aeb13f212b : Make ValType hashable. (#35917)
1a146b0577 : Vec256<bfloat16>::arange step size should accept templates. (#35842)
a5af478f29 : Use full include path in autogenerated Functions.cpp (#35924)
d0ce94d20e : Avoid one unnecessary memory allocation in XNNPACK integration. (#35350)
c33ea41f9c : Fixes a bug in serializing min/max plus one more. (#35850)
e2adcc1c53 : Report CUDA separate compilation flag (#35726)
767ea03b22 : Clear profiling information timely and appropriately (#35814)
1a72326942 : .circleci: Bump libtorch builds to 3.7 (#35912)
591b5da2c8 : Removes integer division call sites (#35862)
ced1e46399 : [PTM] register aten::dequantize.self for spark spot int8 model
f5b9574887 : [easy] ThroughputBenchmark: make ScriptModuleBenchmark usable from c++ (#35848)
762270c51f : add c10d dynamic loading mechanism and unit test (#28068)
2a4ca70832 : Fix constant/placeholder loading in checkGraphCompatibility
2595c62208 : [JIT] Better error on default params error (#35888)
c070e8fb26 : Updated canCast to disallow complex -> non complex conversion (#35883)
dabeff33b9 : [pytorch] Fix fblearner flow compiling errors (#35902)
ddcad5b9ca : temp disable test_tensorexpr.py
15b711a654 : Fix reporting of error message in toBool (#35570)
bc7fdacf06 : [BugFix] Fix compare_exchange_weak in DispatchStub.h (#35794)
d9dd353a00 : fix clang-format (#35884)
09660896c0 : Break circular dependency between ATen.h and TensorIndexing.h (#35765)
a53328e89c : cmake: Grab TORCH_DEFAULT_VERSION from version.txt (#35260)
9097b55479 : Propagate static_if more completely. (#35834)
173e444e66 : track ddp API usage (#35837)
602b51eb30 : Changes to qadd for perf improvement.
3ef5ff6012 : [TensorExpr] Make Load and Store multi-dimensional. (#35800)
676fc929b7 : [caffe2] fix type and shape inference for common gradient ops (#35857)
c4f56e9685 : [pytorch][PR] Optimize qavg_pool3d_nhwc (#35740)
0f99b28431 : Revert D20775783: Add DispatchKey impl overload; remove use of torch::dispatch
e67951af63 : Revert D20775782: Add temporary impl_UNBOXED syntax sugar for unboxed-only defs.
86f3305859 : Improve C++ API autograd and indexing docs (#35777)
6d24f8fe21 : Infrastructure for a new CUDA Fuser (#34785)
8e951c5793 : Add temporary impl_UNBOXED syntax sugar for unboxed-only defs. (#35714)
2db61193bb : Add DispatchKey impl overload; remove use of torch::dispatch (#35706)
c3abcf83aa : [AI Bench] Resume speed_benchmark_torch.cc to origin
1bd68eafb5 : Skip ROCm test in test/test_cpp_extensions_aot.py (#35838)
051132f119 : [TensorExpr] simplification of round + mod pattern. (#35683)
2f50c11954 : add test_tensorexpr.py (#35776)
6616fad92e : [Docs] Fix typo in RPC docs (#35809)
b33ae23c5a : Revert D20794765: [pytorch][PR] Improve C++ API autograd and indexing docs
6792dac90d : Only Schedule Retries before Agent Shutdown (#35554)
b3c0939af3 : [quant][graphmode][refactor] Move the whitelists to a centralized place (#35721)
e372f42110 : [caffe2] Explicit vectorization of LSTM operator (#35556)
e0ee8000ac : Make test_leaky_relu_inplace_with_neg_slope device-generic and skipIfRocm. (#35816)
41ef2c0d58 : Improve C++ API autograd and indexing docs (#35777)
301be851ef : Fix grid_sample out of boundary when grid contains large numbers (#35506)
16774f7353 : Increase TimerTest tolerance to 20% on Windows (#35818)
9fe3b1857d : [TensorExpr] Fix imports in tensorexpr benchmarks. (#35830)
1c93a19a7f : Fix another case of float2::x and float2::y may not be the same on ROCm (#35785)
50b0bb6c6a : Updating submodules
26ee0eee10 : Use cufft_static_nocallback (#35813)
5d1205bf02 : Suppress output when checking hipcc (#35789)
16a88e4369 : Add unboxedCallRedispatch (#35476)
ab26dfb44e : [quant] Move quantization tests into test/quantization (#35812)
15326fb240 : Revert "Attempt to fix windows build" (#35217)
990b54146f : Revert D16864196: [pytorch][PR] port fmod from TH to ATen
35cdb78522 : Make kl_div accept target in log space (#34586)
74ef0adf60 : add mv operator to SparseTensor (#21782)
2b068d10b0 : Removing references to PYTHON3COMPATIMPORTS. (#35384)
acb59a3b86 : Remove unused header in process_group_agent.h (#35767)
6491bf2855 : Revert D20777341: [pytorch][PR] Add __torch_function__ benchmarks.
e9d868a529 : Kill CUDA_tensor_apply4 (#33998)
d463c10668 : Migrate prelu_cuda_backward from CUDA_tensor_apply4 to TensorIterator (#33997)
ceff21a4fc : port fmod from TH to ATen (#24405)
a736b994b7 : Remove old section of the aten doc that is not true anymore (#35807)
945d7a7408 : Add All-to-all comms support to distributed module and MPI backend (#32361)
409bac48e4 : Move all warn logic for overwriting registration to OperatorEntry (#35769)
6318899c9b : [ROCm] [ROCm 2.10+] enable fp16 dot in PyTorch backend (#30431)
8c534bb0bd : Add __torch_function__ benchmarks. (#35530)
60a3e82c4e : [ONNX] Fix for constant folding: Slice, Added ReduceL1 and ReduceL2 (#35280)
1f06db2579 : Refactored rpc docs (#35109)
a5bfcc5323 : Unify management of thread local settings (#35523)
bc6bd0bb1a : Debug Information Guard
fd1dfaa7d0 : [jit] kill isSameIdentity (#35019)
2d85daca58 : [jit] kill `shallowEquals` (#35005)
c382ec88d1 : [jit] define equality for IValue (#34986)
0ed3f881c5 : clang-fmt (#35796)
866d9d4e6a : [jit] Fix name collision on load (#35720)
ee6f7c3e62 : Remove extra semicolon (#35751)
dc1ecdf8d9 : Moves torch cpu math tests to device-generic framework (#35658)
319aee1afb : Revert D20771828: [quant] Move quantization tests into test/quantization
06dcb70905 : [jit] Fix Type equality in some cases (#35719)
51fb5ef80e : [jit] add cast<> specialization for NamedType (#35718)
995f53b042 : [jit] make `python_str` take a custom renamer (#35717)
6a5d008abf : [jit] factor mangler out (#35716)
cae6bdf199 : [JIT] Mark aten::wait as having side effect, since it can represent RPC message received (#35695)
d1a4a64092 : Disables imag for real-valued tensors (#35728)
2f84a07b58 : indexing: throw exception for masks with dtype=uint8 (#34418)
fef6c617d4 : [quant] Move quantization tests into test/quantization (#35688)
1ec0676a33 : [JIT] register list prim ops cleanup (#35768)
2c6d1e57cd : is_complex doc fix (#35680)
3ba885896d : [jit] Minor: in unpickler, string tweak in readBytes() (#35550)
8d64a3848c : [jit] In RPC Server, handle TorchScript continuations asynchronously (#34109)
e5746eec1e : [ROCm] Remove installation of ca-certificates and apt-transport-https in test.sh (#35676)
9650f465ce : [quant][graphmode] Quantization support for at::sort (#35571)
8de01aac0b : [Onnxifi] Add initializers to the C2 net passed into Glow (#35764)
7d5350c2a3 : [easy] ThroughputBenchmark: print out aten's parallel settings before execution (#35632)
07dbf0db46 : bfloat16: vectorized clamp, clamp_min and clamp_max (#35082)
8e49afa908 : Updating submodules
063275fd33 : Fix a bug in subgraph rewriters. (#35704)
f182b43760 : [rref] Handle exceptions returned via remote() calls (#35331)
b4c4342747 : hswish and hardsigmoid: improve docs (#35431)
3d6b5bac0a : Move libtorch to py3 and cleanup other CircleCI config (#35700)
8ff05031b0 : Update collect_env.py to detect relevant conda-installed numpy and cudatoolkit (#35646)
ada647214f : [caffe2] explicitly pass use_offsets=false when calling fbgemm embedding kernels (#35711)
81c2412721 : [caffe2] Switch to using `public_include_directories
d2343bea32 : Disables complex floor, ceil, trunc (to be compatible with NumPy) (#35592)
539d3ff344 : Revert D20749588: [pytorch][PR] Use `std::abs` instead of `abs` in lbfgs.cpp
59268d4cbf : [JIT] Improve the error message when registering a custom class twice (#35568)
800d5617c0 : Recording of TorchScript functions (#34710)
8fef8d19fa : clang-format (#35752)
a090de380c : [quant][graph] Add quant fusion for dynamic quantization (#35586)
1f7ee7b6b7 : [quant][graph] Add pass to insert quant dequant for dynamic quantization (#35448)
b0f8429826 : Update clang_format.yml
35087b8d77 : [Shape Inference] Try to infer input of elementwise ops (#35701)
4f4ed5c108 : Disable c10::import(ns) (#35398)
726baf69d7 : Do not link BLAS into torch_cuda/torch_hip (#35724)
cd760fbd7f : Updating submodules
9018538ab3 : [quant][graphmode][refactor] getGeneralTensorInputs(Node*) -> getPassThroughInputs(Value*) (#35558)
8add1843a9 : [quant][graphmode][fix] docs for InsertObservers (#35557)
c2ca4371ae : [PyTorch BC] Clean up whitelist (#35730)
2f3b952d16 : Use `std::abs` instead of `abs` in lbfgs.cpp (#35698)
fdadaf62b0 : Disable batch_norm_relu batch_norm3d quantized ops tests (#35727)
a15a4a5caf : Revert D20722426: [pytorch][PR] [doc] Add overflow notice for cuFFT on half precision
56fabface2 : fp16 include not needed (#35708)
95c1b16fc5 : don't replace TensorImpl for inplace min/max dim (#35591)
46330b368a : [5] register aten ops in lite interpreter for detectron2go model (#35248)
8981271d9f : Skip test_mm on XLA (#35709)
e021c13d2d : [doc] Add overflow notice for cuFFT on half precision (#35594)
5e27de021e : [rpc] fix backward compatibility test (#35703)
4e19e02976 : [quant][graphmode] Quantization support for `quantized::add_scalar_relu` and `quantized::add_scalar_relu_out` (#35509)
35715a56a9 : [reland] Skip OpenMP Thread when OMP_NUM_THREADS is 1 (#35541)
a0dc36e501 : [Windows] Fix torch_cuda's forced link (#35659)
639c68b2fe : bfloat16: enable basic math function (#35172)
788ef939d8 : float2::x and float2::y may not be the same as float on ROCm (#35593)
dd98abb453 : Enable splitSparseLengthsSumSparse in onnxifi (#35555)
e90e89d189 : Transform pass to split SparseLengthsSumSparse (#35522)
af4d86788c : Split SparseLengthsSumSparse into SparseLengthsSumSparseLookup + SparseLengthsSum (#35507)
35dbc6ebda : [BC] Fix the BC CI (#35692)
39d0500434 : Fix PyTorch separate compilation (Reland) (#35581)
b1f08e7426 : Call uncheckedSetDevice in ~InlineDeviceGuard only when device index are different (#35438)
e7371957cf : Report results from CPP unittests on Windows and Linux (Reland) (#35590)
f3151052ce : [autograd] fix engine flakiness (#35599)
bb32e123e6 : Report results of python unit tests during window test runs (#35687)
3f3b96b1f8 : Revert D20735881: [pytorch][PR] [WIP] [reland][pytorch][PR] Fix some incorrect annotation…
e7a37823b0 : [WIP] [reland][pytorch][PR] Fix some incorrect annotation… (#35588)
3bdc4a37ed : CMake script cleanup - mixed case for function names (#35589)
bf2b411730 : Save results of cpp unittest to `test/test-reports` folder (#35686)
0eb26fb01e : [ROCm] Properly blacklist (#35230)
e3daf70184 : Fix AVX detection with clang-cl (#35653)
340048b67c : [quant][graphmode] Remove unused patterns (#35385)
728c7dcea3 : ONNX Update training ops and training amenable export API (#35567)
1f759936f0 : Propagate model id used by Predictor to Caffe2 logging
2c19b53d4f : [iOS] Enable selective build for testing FBNet in PyTorchPlayground (#35647)
9e3605de98 : [RELAND] New operator registration API (#35061) (#35629)
86be6443d8 : [quant][graphmode] Quantization support for `aten::conv3d` (#35347)
e397f87c4b : [aten] remove variable set but never used warning (#34015)
77b4e2d2fc : [quant][graphmode][fix] Add filter for `quantized::add` (#35345)
915a45298c : [aten] remove warning on change of sign (#34016)
dbd2b8bb41 : [SigridHashOp] Fix converter (#34836)
6fc2403951 : [quant][graphmode] qconfig_dict support None (#35336)
64058796e0 : clang-format (#35635)
860790de88 : Makes torch.real and torch.imag NumPy compatible, but disables them for complex tensors (#35560)
67c3822944 : [quant][graphmode] Make `aten::relu` a general op (#35420)
efec027653 : [quant][graphmode] `prepare_script` takes original qconfig_dict (#35335)
227beb9095 : Revert D20680520: New operator registration API
486277a309 : Replace four make_offset_calculator functions with one (#35551)
444332710c : [quant][graphmode] Quantization support for `quantized::add_scalar` (#35334)
76d5102587 : add a cuda/fuser job for legacy graph executor (#35419)
cd00bbc23f : clang-format. (#35605)
28ab8c6ff8 : New operator registration API (#35061)
e90c32f11f : [quant][graphmode][refactor] Support filter function in quant fusion patterns (#35333)
5557ceb84e : Remove `pytorch_linux_xenial_py3_5` build and test jobs (#35587)
5b3492df18 : [TensorExpr] Extend arithmetic simplifier to work with multi variable expressions (Attempt 2) (#35415)
683246e5ea : Improves precision of linspace, logspace (#35461)
f1d69cb2f8 : [quant][graphmode] Quantization support for permute and repeat_interleave (#35332)
df27b32014 : [quant][graphmode] Make interpolate/upsample work again (#35130)
21c94606b8 : Cleans up type conversions, adds CPU test comparing with NumPy (#35374)
1cc4e5c338 : [quant][graphmode] SwapDeQuant support prim::If (#35142)
c672a7340b : [quant][graphmode][refactor] getGeneralOpTensorInputIndexes -> getGeneralOpTensorInputs (#35141)
26b2725167 : [quant][graphmode][refactor] swapDeQuant takes block as argument (#35135)
2ef5b947a8 : Disable unit test failing on Windows (#35549)
ad1091f753 : Fixes default dtype value for onnx hardtanh export (opset11) (#35467)
75e4c53b35 : [rpc] Add a debug only check to debug python cleanup races (#35395)
43928effee : [jit] Remove _assert_int_or_pair op (#34509)
a9b540d109 : Revert D20670031: [pytorch][PR] Fix some incorrect annotations found by clang-cl
238903b7be : [jit] Delete polyfill typing (#27510)
6d13ef719e : Update warning message for autograd issue + XLA backend (#35543)
76a8d30693 : [quant][graphmode] Fold quantized prepacking ops (#35077)
f27403d761 : [jit] Fix named tuple resolution (#35409)
cfcb63de34 : custom class method holder should hold a unique_ptr (#35218)
b9adbb5002 : Fix/relax CMake linter rules (#35574)
96eec95ece : torch.from_numpy for complex dtypes (#35531)
f101949390 : Remove python2 support from setup.py (#35539)
45c9ed825a : Formatting cmake (to lowercase without space for if/elseif/else/endif) (#35521)
04a3345335 : [quant] Make conv2d_prepack and linear_prepack pure (#35073)
e1773f2ac0 : .circleci: Change default CUDA for pip, cu101 -> cu102 (#35309)
02d6e6e55f : histc: Add a note on elements outside of given bounds (#34889)
4529d03971 : Move test_libtorch from win-test2 to win-test1 group (#35540)
ef511d884b : Calls to _empty_affine_quantized pass MemoryFormat by TensorOptions (#34248)
05e973d673 : Add WorkerInfo through TorchBind to make it an available type in TorchScript (#35447)
835ee34e38 : [ROCm] Update to ROCm 3.1.1 (#35552)
ff71a4192d : Bump base version to 1.6.0a0 (#35495)
9e22d15f14 : Enable tensorexpr cpp tests in CI. try #2 (#35454)
930d218fbf : Increase Channels Last test coverage (#35504)
3af46c90bd : [caffe2] Header path in byte_order.h (#35519)
2c300df2ac : [fix] at::print for quantized Tensor (#35545)
3cc43bcbb5 : Skip slow quantized tests under ASAN (#35533)
0c16cedafe : Fix some incorrect annotations found by clang-cl (#35364)
b33e38ec47 : Allow a higher-precision step type for Vec256::arange (#34555)
5a02930d3a : Vectorize (CPU) generic types for binary bitwise operators (#34338)
3c02de0011 : copy_ fixed on cuda so removing the workaround in test_many_promotions (#35528)
77ad3c5aeb : Revert D20683972: [pytorch][PR] Fix PyTorch separate compilation
16394a9d3f : [caffe2] early return for empty indices in SLS (#35498)
25fe7f33ce : Add cmakelint to CI (#35525)
58f5a89c9a : Refactor RoIAlignOp on CPU (#34698)
2d023fe6a7 : [7] add missing roi_align_rotated op to lite interpreter (#35244)
181da12126 : Revert D20687652: [pytorch][PR] Report results from cpp unittests on Windows and Linux
45e1be9762 : Revert D19710370: [pytorch][PR] ONNX Update training ops and training amenable export API
e5cd17cc9e : [4] register quantized ops for lite interpreter (#35247)
025a0abe5a : ONNX Update training ops and training amenable export API (#32950)
ac639d927a : Reland "[RPC] Use qualified name str directly in RPC torch script code path" (#35489)
d2d40c45b6 : Report results from cpp unittests on Windows and Linux (#35500)
da4e68faed : Make operator names consistent between export_opnames and the lite interpreter (#34674)
8c90ae11b3 : [JIT] fix glow subgraph inputs ordering (#35508)
bd604cb5b7 : Upgrade MKL-DNN to DNNL v1.2 (#32422)
8240db11e1 : [pytorch] Remove python2 support from tests and torch.jit (#35042)
98362d11ff : [rpc] create error string in listenLoop outside of lock (#35393)
77bbbf042d : [JIT]Support converting str to float. (#35352)
00a261fddd : [pytorch] add fallthrough variable kernel for C10_MOBILE (#35491)
f5383a213f : Fix openmp detection with clang-cl (#35365)
5371fdb1a0 : [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer (#34957)
e68afe3ab9 : [JIT] remove prim::shape op (#34286)
8f18cdf2b8 : [Autograd Testing] Few refactors to test_autograd.py (#35443)
5d9694250c : Updating submodules
9970be2fd2 : Update git-pre-commit (#35511)
9b4bbaab53 : Add RRef.local_value() for TorchScript (#35433)
d4f3bc7f8e : [dt] [caffe2] add/fix shape inference for StumpFunc, SliceGradient and ResizeLike (#35430)
2e739f822b : Fix PyTorch separate compilation (#34863)
2f6f1781af : Add warning to a known autograd issue on XLA backend. (#35449)
8074779328 : [quant][graph] Update dynamic quant tests to use new qconfig (#35451)
daba68c601 : [quant][graph] Add a new observer type for dynamic quantization (#35455)
086dba3804 : [caffe2] move fused SparseAdagrad to open source (#35164)
91e4685514 : [modules][caffe2/aten] Fix `#include` inside of namespace error (#35302)
618104185b : [autograd] enable graph level thread parallelism on CPU (#33157)
9b8c9d6c72 : [autograd] add tests for simple reentrant and stackoverflow escape (#35259)
b0459fec72 : [clang-format] Replace asyncio.run with approximation supported in python 3.6 (#35501)
6b1ffcbf59 : [model loading] Skip ssaRewrite for predict_net if it has been ssaRewritten (#35428)
b704f30189 : [3] register caffe2 mask rcnn ops in lite interpreter (#35246)
c1f5a54397 : Optimize index_select for 1D inputs (#35243)
c8bd5ac7e9 : [workflows] Don't pipe clang_format.py output to a file (#35496)
ea0cab7f46 : Guard listener removal add by `at::Dispatcher::addListener()` with mutex (#35486)
a3e10d2a17 : Expose enablement of TensorExpr fuser as env variable (#35341)
4d39aeec27 : Revert D20653072: [pytorch][PR] Add __torch_function__ benchmarks.
e00575044e : Revert D20657271: [pytorch][PR] [JIT] Optimize before inlining
1ff85fc08b : Prefer python3 in clang_format (#35490)
8d720b7034 : fix complex conversions on cuda (#35344)
bf24753570 : Add __torch_function__ benchmarks. (#34645)
61623430d3 : [workflows] Add clang-format workflow (#35239)
6384c2d81b : [JIT] clang-format JIT code (#35115)
1422d2cd8b : [tools] Replace clang_format.py with clang_format_new.py (#35114)
3b2b6ae1a8 : [JIT] Optimize before inlining (#35424)
cd9a357f32 : Fix non-deterministic RNG behavior in dist_optimizer tests (#35425)
3622e1c90f : Revert D20589048: [pytorch][PR] [ROCm] Update CI dockers to ROCm release 3.1.1
ada40777c4 : Rand function for complex dtype (#34924)
17a01c7c7b : feature: deterministic random_split (#34043)
f7f7c4edd9 : [ROCm] Update CI dockers to ROCm release 3.1.1 (#33930)
79054495d3 : (Fixes #33934) Fix AttributeError for nn.Module's properties (#34324)
299bd6d701 : Update randomtemp on Windows (#35375)
4a4e385e13 : Revert "Load torch_global_deps for Windows (#35177)" (#35355)
e0c227d376 : Revert D20655246: [jit] add module interface tests to test_jit
843fd740fb : Revert D20645945: [pytorch][PR] [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer
de3b2f98db : [Shape Inference] Add ssaRewrite pybind func (#35410)
d807292c4a : [ROCm] Hotfix disable tests (#35396)
be0cdf5d15 : [jit] Implement `torch::jit::deregisterOperator` (#35107)
a7c232f74c : Port `mm` cuda from TH to ATen (#34891)
fa4603ef36 : Also sync submodule in the Dockerfile (#35423)
0ccceb2290 : [dist autograd] profile the amount of time spent executing (#35261)
7fbb562369 : Back out "[reland] Skip OpenMP thread when OMP_NUM_THREADS is set to 1"
aa01a95c6d : Revert D20630760: [pytorch][PR] Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP]
2dd867f30f : Move normal() to DistributionTemplates (#35167)
dc2c4d02f9 : Add a wrapper to wrap all optimization for mobile. (#35227)
315929f43e : Refactor code to move const prop to convolution 2d replacer. (#35226)
b4b8b3c0ca : Revert D20630988: [quant][graph] Add a new observer type for dynamic quantization
d7de6ad23f : Revert D20640487: [quant][graph] Update dynamic quant tests to use new qconfig
0a3864f81e : Throw an actionable error message on user call rref<ScriptModule>.to_here() in torchscript (#35369)
efbd6b8533 : [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer (#34957)
e08614ffd5 : [Autograd Testing] Test failure in parent graph before child reentrant task (#35268)
f3a5081bd4 : Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP] (#34897)
ccc0e35275 : [quant][graphmode] quantization support for prim::CallFunction (#34855)
64a6faa2c8 : [quant][graph] Update dynamic quant tests to use new qconfig (#35325)
7e24ab8c4a : [quant][graph] Add a new observer type for dynamic quantization (#35265)
7580470cc5 : Update view op list. (#35399)
6bcf0b407b : [TensorExpr] Disable fuser-te cuda tests when run on ROCm. (#35388)
d7c255d2fc : [jit] add module interface tests to test_jit (#35417)
00aa23446b : [JIT] [Reland] add complexity tests (#35330)
a4ea16dbc6 : Put prim ops used in full jit only in a separate file (#35232)
17abb7c31a : Add docs to resize_ and resize_as_ (#35392)
512bcf68be : [Formatting] `if (` -> `if(` in CMakeLists.txt (#35343)
c9117f27c4 : Fix final callbacks for reentrant backwards (#35066)
aadd0fda8b : Document reduce_scatter collective operation (#35274)
40b244ceb4 : Fix handling of non-finite values in topk (#35253)
de3044b210 : Load all DLLs in the lib directory for Windows (#35362)
34b005954e : Support merge_fp32_inputs_into_fp16 for predefined partitions (#35361)
d863fe356d : Ignore rest of outputs of LayerNorm when lowering to Glow (#35338)
15e5453977 : [reland][quant][graphmode] Add quantization support for aten::cat (#34346) (#35337)
51818cc4ea : [TensorExpr] Cleanup implementation of alloc/free insertion. (#35176)
db0f715af6 : [TensorExpr] Factor out LoopNest::insertAllocFree. (#35175)
ceb4ed3733 : [TensorExpr] Methods name cleanup in LoopNest class. (#35174)
450738662b : [TensorExpr] Replace `ExprHandle` with `const Expr*` in `Substitute`. (#35173)
5959bd6c29 : Making sure all tensors in `torch.cat` sequence have the same dtype. (#35150)
7cb301e48d : [rpc][easy] remove code duplication on ProcessGroupAgent::enqueueSend (#35311)
7e327e1210 : Enable Constant Folding for ONNX Opset 12 (#34823)
032c27cff7 : [quant][graph] Add _choose_qparams function for graph mode (#35235)
f9889aa390 : [reland] Skip OpenMP thread when OMP_NUM_THREADS is set to 1 (#35353)
3645d9b832 : Port `diag` cpu from TH to ATen (#35100)
53fceff1e1 : Change weight scale test to cpu only (#35346)
c73e97033a : Added type promotion logic for complex numbers (#34093)
361eed6a6e : Use JIT op registration directly for lite interpreter. (#34070)
3789db40f2 : [aibench] added support for measuring memory on AI Bench for Caffe2 Models (#35036)
c2804e8229 : Fix Caffe2 mobile compilation (#35288)
d6149a7250 : move some ops to contrib (#35282)
d6377b7cef : Fix thread_local initialization in C10 WarningHandler. (#34822)
f090031e69 : [JIT] remove list appends (#33199)
aab4beb87f : [JIT] Pass To Safely Remove Aten Inplace Ops (#33186)
5b2f8cef08 : [JIT] Functional Graph Pass (#33020)
01a7d6adcb : [caffe2] Fix typo in dataset_ops (#35356)
93065ff767 : [1] add missing header for C10_EXPORT_CAFFE2_OP_TO_C10 (#35245)
6c39e362fd : Minor fix to quantized conv docstring (#35134)
74c02619de : quantized Conv1d (#35093)
b8f509fd9b : Revert D20630949: Skip OpenMP thread when OMP_NUM_THREADS is set to 1
574be9f816 : Skip OpenMP thread when OMP_NUM_THREADS is set to 1 (#35324)
a7f8655314 : Revert D20624571: [pytorch][PR] [TensorExpr] Extend arithmetic simplifier to work with multi variable expressions
ee7cd84fac : Revert D20589145: [quant][graphmode] Add quantization support for aten::cat
f1efe51028 : add quantized version of hardswish operator (#34820)
f3e9fa6122 : add hardswish FP operator (#34747)
b6306e1517 : Revert D20624698: [pytorch][PR] Make GPU loops support mutable lambda
4a84ac5f5d : [jit] make Future type annotation available in Python (#27637)
2623448746 : Match case of package name to suppress warning (#35201)
fce67800f4 : [TensorExpr] Extend arithmetic simplifier to work with multi variable expressions (#35127)
2dc2933358 : Move NewModuleTest and NewCriterionTest from test_nn.py to common_nn.py (#35189)
6b5740c5f6 : [quant][graphmode] Add quantization support for aten::cat (#34346)
b7e4dd15cc : [jit] Remove stray `@script` (#34938)
44622bbda9 : [jit] Add lazy script decorator (#34935)
84dc8c410a : Adds workaround for ScalarType::Byte for cuda (#35027)
39a101d06e : Make GPU loops support mutable lambda (#35015)
edad9c102d : Update XNNPACK to Git revision 1b354636b5942826547055252f3b359b54acff95. (#35081)
abcd4eb993 : Optimize min and max(reduce_dim) performance on CPU (#34875)
fb70893e78 : remove cadd_avx2 dead code (#34883)
3f896ef743 : Trying pinning pyyaml and setuptools on macos to older version (#35296)
0f0a5b11b8 : Disable C4251 when compiling cpp_extensions on Windows (#35272)
1d52530855 : simpler 'cpu_scatter_gather_base_kernel' (#34690)
55019d357e : [quant][graphmode] Add observers for dynamic quant (#35121)
a045343402 : [Autograd Testing] Add a test where child reentrant task fails. (#35223)
36e3c005f0 : Add python exception handling catch block to resolve deadlock (#35283)
925cdd57dc : Replace all uses of AT_INDEX_ERROR with TORCH_CHECK_INDEX (#35050)
0f0271e255 : [RELAND2] Eager autocasting, out-of-place ops only (with MSVC 2017 fix) (#35102)
50eb1a389f : Add cpu_serial_kernel_vec (#34553)
73a36a47a5 : Gradcheck for complex (#35238)
6f6436ff5d : Fix input overwriting in irfft (#35219)
d7a7bcb042 : Load torch_global_deps for Windows (#35177)
618c6214aa : [reapply][JIT] Namespaces for TorchBind (#35254)
17068ba467 : [JIT] BC shim for TorchBind classes (#35240)
8b8af0d458 : Revert D20539336: [JIT] add IR complexity tests
7c1ea736ba : Extends true_divide to be a method (#34794)
cd75d4e274 : [quant][graphmode] Add prim::ListConstruct to general op handling (#34345)
537fdd77d5 : [quant][graphmode] quantization support for view, transpose, contiguous (#34854)
4a96911629 : [quant][graphmode] quantization support for aten::chunk (#34806)
9c8f09d1a4 : [quant][graphmode] quantization support for prim::ListUnpack (#34807)
c46c28a7cb : Fix `JitTest.ADFormulas` intermittent failures (#35196)
9e7821ee82 : [autograd] allow PyNode to persist error message (#34845)
8346959f38 : [caffe2] merge internal (RowWise)SparseAdagrad into open source version (#35090)
ac4a0224f3 : [quant][graphmode] Replicate quantize node for prim::If (#34804)
340ccf56fb : [PyTorch-RPC] In process_group_agent, avoid read-after-free (#35252)
fddcd72a31 : Add more fusion (conv3d and batchnorm) support in pytorch quantization flow (#33540)
bd0ef784e0 : FAQ: Add note about recovering from OOM (#35214)
97ecfb4929 : Updating submodules
ccf8dd6209 : Print exitcode on failures in test_distributed.py (#35233)
1b119861a8 : [TensorExpr] Cleanup includes in loopnest.h, use forward declarations when possible. (#35129)
b9fbec96e6 : Support LIST_UNPACK and TUPLE_SLICE in lite interpreter. (#35241)
eff68bc872 : [quant][graphmode] quantization support for aten::add (#34572)
b2dcedf71e : .circleci: Ensure describe happens in pytorch repo (#35065)
8bb7f1ad11 : [rpc] various fixes for ProcessGroupAgent (#34943)
c321f02756 : Follow up on freezing (#34786)
7ab25b2e6b : [JIT] add id function (#34975)
131af4412e : Add TORCH_CUDA_API to FilterDescriptor (#35131)
6fa0b3df2e : [testing] Pass verbosity settings to `XMLTestRunner` (#35224)
bfdcc39301 : in test_c10d.py, remove skip_if_rocm from tests that pass locally (#35124)
40da01db6a : Add docs about memory format (#34818)
93983c7d00 : Add `USE_TSAN` option (#35197)
a00e12e755 : [quant][graphmode] weight/bias of linear/conv can be reused for multiple ops (#35221)
3cd3f0b3f1 : Fix Tensor __radd__ type hint issue (#35231)
37e355622a : Pass the missing "non_blocking" argument for to() (#35144)
521c424b39 : Make discontiguous tensors also benefit from unrolling (#34708)
9441c7a944 : [JIT] add IR complexity tests (#34918)
4fae5a6721 : Move module graph creation to testing utils (#34917)
77ccb5c14d : Move functional graph creation to testing utils (#34916)
02ab6ced8e : test_complex inherits from common_utils.TestCase; closes #34648 (#34697)
21ecb8d870 : Fix reference to NO_CUDA and NO_DISTRIBUTED (#34831)
506996c77e : [pt][quant] Optimized qadd_scalar (#34925)
3e4076aa9c : [quant][graphmode] quantization work for prim::If (#34518)
0e0386b434 : Revert "[JIT] add id function (#34975)" (#35209)
2c69fa93b9 : Fix _copysign is not a member of std (Windows) (#35199)
c85697d74d : [quant][graphmode][fix] use observed_values_ to check values are observed (#34571)
350c522423 : [quant][graphmode][refactor] insertObservers for Block (#34414)
28bf0038e5 : [quant][graphmode][fix] Insert dequantize before use node (#34411)
4caa0db6e8 : [quant][graphmode][fix] preserve the type of original value when inserting dequant node (#34349)
358ba59f01 : Add THP_API to THPGenerator_Wrap (#35194)
36e36eff2f : Ignores deliberate undefined float->int conversion (#35086)
d743c22990 : Updating submodules
a6672f3b30 : Updating submodules
082e48e346 : skip ctc_loss test on Windows (#35069)
3f2aa07b13 : [ONNX] update producer version (#35059)
1783ea43e7 : [pytorch] deprecate code analyzer -closure option (#35179)
11a40410e7 : pybind11 type_caster for at::Generator and custom RNG python test (#34774)
b248e23de0 : Docs fix: Added missing indent (#35017)
e1c092fe3a : Changes to transition to generic API for ops with weight prepacking (#35010)
1ff5d9c557 : Updating submodules
a5b509985a : Updating submodules
cfc0ff1691 : Renaming: MultiLabelMarginLossFuncOptions -> MultilabelMarginLossFuncOptions, MultiLabelSoftMarginLossFuncOptions -> MultilabelSoftMarginLossFuncOptions (#35163)
5306713a36 : Replace Generator* with Generator that holds std::shared_ptr<GeneratorImpl> (#34468)
a100cf5146 : Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
bbec4520c6 : Add inplace tests for several torch::nn modules / functionals (#35147)
f515d87296 : Updating submodules
95ad94c75b : [TensorExpr] Nuke tensorexpr::schedule namespace. (#35126)
d609f356de : [TensorExpr] Use `const Expr*` instead of `ExprHandle&` in Range. (#35125)
65cea95777 : [TensorExpr] Rename schedule.{cpp,h} to loopnest.{cpp,h}. (#35119)
3342ab89ac : [pytorch] revert register c10 ops for static dispatch (#35148)
3fa7813b9f : [quant] Add dequantize.tensors (#34348)
d87750cd04 : [caffe2.proto] Add backend_option to PartitionInfo
a2557970f3 : Fix F::interpolate and torch::nn::Upsample implementation (#35025)
c0958c883e : Fix fractional_max_pool3d_with_indices implementation (#35024)
ef7fe371ce : Fix Conv and ConvTranspose implementation (#35023)
d7462dcea6 : Fix AdaptiveAvgPool{2,3}d and AdaptiveMaxPool{2,3}d implementation (#35022)
845b19c4ef : Add weight_scale in Adagrad (#34944)
c21fde6421 : [jit] make jit/rpc share the same PythonFutureWrapper (#35039)
43fc97db88 : Updating submodules
d45e135d89 : Updating submodules
4594433319 : Add retry to pip usage in mobile job (#35122)
bf31b1b6be : Upgrade protobuf as bazel build preamble (#34662)
e98b8eb35f : [profiler] remove unused _push_range and _pop_range (#35028)
4025729e88 : [1.5 Release][RPC Reliability] RRef Idempotency and RPC Retry enablement (#33636)
61b680c012 : [pytorch] force c10 schema registration for custom build
064c478453 : [pytorch] register c10 ops for static dispatch to unblock c10 boxing
3a772b798a : Auto-format jit/rpc_test.py with flake8-black (#35075)
e0496a70fc : [JIT][torchbind] Namespaces for torchbind classes (#35054)
65ff064763 : Parallelize cpu index_put accumulate float path with cpu_atomic_add_float (#29705)
3e58cba3c5 : Fixes the Conv2d batch_norm folding for various cases. (#34932)
df8d6eeb19 : Update docs about DP and DDP for CUDA (#35063)
f9cddff25a : [android] Preload module actions do only once (#32313)
4bd5d1b3be : [TVM] Use caffe2_predictor_model_shape_hints to pass shape_hints to TVM (#35091)
ca1e2cda05 : Port set_ to ATen. (#34403)
62f11f0a35 : [JIT] add id function (#34975)
12f0052eee : Add TensorExpr Fuser tests (resubmit). (#35085)
3c409fc66c : Add guard elimination cases for operators encountered on an RL workload. (#34967)
faa853fefb : Revert D20254663: [pytorch][PR] Vectorize in-place comparison operators
ea41bf3100 : [android] Maven publishing license fix (#32474)
8998a1b3d3 : Add tensorexpr benchmarks. (#35064)
bf41a7624e : fix missing comma in activation benchmarks (#35104)
7d5a899883 : randn cuda kernel complex dtype (#35056)
451e4d578d : Define +, -, *, / between complex numbers and integers (#34506)
91d39de149 : Vectorize in-place comparison operators (#33252)
8bcedf7da2 : Adds truncated normal initializer (#32397)
a5b5ea9852 : use new cuda kernel launch code in nvprof parsing (#35016)
e3272559e4 : [caffe2] SWA operator (#34394)
781f590f33 : [C++ API Parity] Add xor_convergence test for lbfgs (#35001)
1c958f8ef9 : `Engine::~Engine()` should wait for non-reentrant threads to shutdown (#34529)
ec9f680973 : Enforce rref python pickling to be in the scope of RPC call (#34755)
6000dca5df : [nomnigraph] Copy device option when customize the op conversion (#34976)
fe276d541e : Revert D20541921: [pytorch][PR] [RELAND] Eager autocasting, out-of-place ops only (with MSVC 2017 fix)
0ccdec6b4c : Revert e7fc55e (#35080)
eb78f7ea41 : torch.cat: disallow inputs on different devices (#35053)
89110fbe6c : Fix torch.mm export to ONNX (#34661)
e65ac7af14 : Also vectorize complex types in fill. (#34973)
463f7920bd : repr and _*state_dict for qRNN (#31540)
991b97277a : [RELAND] Eager autocasting, out-of-place ops only (with MSVC 2017 fix) (#35011)
edb794fb19 : [ROCm] Enable BFloat16 type for TopK operator on ROCm. (#34849)
bb63710c9a : Reduce the number of iterations in test_autograd_context (#35037)
3c90a90730 : Revert D20540599: Add TensorExpr Fuser tests.
e7fc55ef7b : Revert D20464855: [pytorch][PR] Add the fusion of quantized batchnorm and relu
a4afac6076 : enforce rref JIT pickling to be in the scope of rpc calls (#34689)
8210b2054e : Move ivalue tests to aten (#34985)
5f32dfca16 : Add equality comparison to c10::Dict (#34892)
90045ce5e0 : Add equality comparisons to c10::List (#34856)
d3f5045bf5 : PyTorch should always depend on `future` (#35057)
33dcaaa872 : [quant][onnx] Add aten::max_pool2d to jit pass (#34912)
fd57e0901e : remove the slow path (NCHW) for avg_pool3d (#34994)
d2d26bf643 : [rpc] fix test_debug_info for python 3.5 (#34828)
733b6315fd : Add the fusion of quantized batchnorm and relu (#34795)
851579d868 : [Onnxifi] Blacklist ops in the partitions that are supposed to run on CPU (#34991)
7b59f41009 : Add TensorExpr Fuser tests. (#35052)
02e16b38f3 : Remove the use of two magic numbers in vec256 (#35003)
7065c46ea2 : Respect dist autograd context in torch.jit._fork. (#34360)
b3fccda4a9 : [DPER3][Shape inference] Update Shape Information in dper3 backend (#34475)
c957580133 : Add promotion pipeline for S3 and conda artifacts (#34993)
37b234a880 : quantized hardsigmoid, take 2 (#34959)
74009dc558 : [profiler] use swap in allocBlock to reduce time the lock is held. (#34499)
e433271320 : Install CUDA manually on Windows CI to avoid flakiness (#34940)
53539567cb : [Onnxifi] Copy partitioning info when lowering to glow
226f559394 : Updating submodules
7335f079ab : [pt][quant] qmul and qadd should preserve input memory format (#34834)
6d488714a7 : .circleci: Specify setup job to run on everything (#35013)
35d9874a35 : in test_data_parallel.py, remove skipIfRocm from tests that pass (#34978)
1f4a4aaf64 : functional autograd api (#34066)
96860af870 : Revert D20164420: [1.5 Release][Dist Autograd][Better Engineering] Notify Workers on Failure during Distributed Autograd
7c06b86e42 : Revert D20518647: [pytorch][PR] [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer
5d92a6cc30 : Revert D7778113: Reland "[RPC] Use qualified name str directly in RPC torch script code path"
9c4683e8e3 : Revert D20312366: [pytorch][PR] Added type promotion logic for complex numbers
0d8447a9b8 : Warns when performing integer division with div and addcdiv (#34570)
6f737dd4a3 : Fix signed-unsigned warnings (#34791)
c8f665dcb6 : Added type promotion logic for complex numbers (#34093)
d616cad676 : Reland "[RPC] Use qualified name str directly in RPC torch script code path" (#34962)
be82e554fe : Revert D20524479: [pytorch][PR] [C++ API Parity] Add xor_convergence test for lbfgs
153b16ef4c : Doxygen for torchbind (#35007)
eef17edaa3 : Fix warnings in test/test_jit_fuser.py (#34980)
55b254e114 : update gitignore to include clangd index (#35018)
d3b6099366 : [build] Update gloo submodule (#34969)
5f67c923f1 : [1.5 Release][Dist Autograd][Better Engineering] Notify Workers on Failure during Distributed Autograd (#34638)
a73dfcf8cf : Adjust ProtoBufPatch to protobuf-3.11.x (#35008)
e5ee95e448 : [RPC] Add to confirmed users immediately if the fork is shared from owner, instead of adding nothing to pending users (#34988)
b8e043abca : [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer (#34957)
2a1c83823d : [tools] Parallelize tools/clang_format_new.py (#34750)
6e47e7bf52 : [pytorch][mobile] fixed AutoGradMode/AutoNonVariableTypeMode uses for mobile callsites
a4048b4703 : port ge changes from bert/pytorch_fusion (#34942)
4521477f83 : [C++ API Parity] Add xor_convergence test for lbfgs (#35001)
bcbde490e4 : Fix flake (#34974)
b2e5e0cad6 : [quant][graphmode] quantization support for aten::reshape (#34803)
69e701fbf9 : Add transfer_learning_blob_name_mappings into layer_model_helper to support layer model transfer learning
e35dd4f603 : [jit] Include call stack in OSError message (#34669)
3b7e1cd2cc : Makes floor_divide a method, adds sparse floor division (#34552)
d77d907f0e : [quant][graphmode] Add quantization support for aten::dropout (#34347)
c747f09846 : Add operator [] to c10::impl::ListIterator (#34926)
064f6285af : Torchvision in jenkins testing (#34909)
1afc584188 : Deprecates current torch.full integral type inference, adds torch.full complex type inference (#34709)
f3b8a470e1 : Added functionality for all to take Lists as input (#34582)
d0577e19f0 : Revert D20346700: [pytorch][PR] Eager autocasting, out-of-place ops only
b35e544772 : Minor fixes for RPC API doc (#34955)
d29f450e63 : Revert D20442573: [RPC] Use qualified name str directly in RPC torch script code path
689598df0b : [quant][graphmode] insert quant/dequant work for duplicated debugName (#34315)
aaa8f02156 : Eager autocasting, out-of-place ops only (#32140)
fa5bc9fa2e : Fix problem in NHWC max_pool2d; use accumulate type in NHWC max_pool2d (#34934)
d927d58c2a : Revert D20289209: Support RowWiseSparseAdam on GPU
a1eaaea288 : Revert D20497453: [pytorch][PR] Makes floor_divide a method, adds sparse floor division
a3de359464 : Do not throw from CUDAContext destructor (#34756)
b7129050e7 : Makes floor_divide a method, adds sparse floor division (#34552)
bcbdba450c : [caffe2] open source 2/4-bit SLS operators (#34903)
d7e4a379a0 : [C++ API Parity] LBFGS optimizer step() update and added closure to the Optimizer step() function (#34564)
df20f5b374 : Updating submodules
130e720784 : [torchbind] Add more comprehensive docscrings (#34906)
09a7788a2f : [torchbind] Improve IValue custom class API and remove most Capsule stuff (#34848)
c4fdba326d : Support using self as the destination in rpc.remote for builtin operators (#34931)
b5edf329f8 : [JIT] Make RPC RRef Owner WorkerInfo.name available to TorchScript (#34896)
95f1cb34b9 : Revert D20480546: adds quantized implementation of hard sigmoid
ff3d205ee5 : [rpc] handle exceptions in ProcessGroupAgent::enqueueRecv (#34413)
1c8e086537 : [quant][graphmode][refactor] Change QParamMap to QParamVector (#34314)
4bd3e9b41b : fix barrier in jit test (#34901)
74a28ff1dd : Make checkInputs more robust (#34838)
e43c2d59dd : Reduce memory overhead of categorical.sample (#34900)
85c51a8c10 : Fix dist autograd context Example block format (#34921)
f05abd1259 : Fix example block format in Distributed Optimizer API doc (#34919)
e87db8a77b : Fix example format in Distributed Autograd doc (#34914)
552f9d3a68 : Minor fixes for RPC API docs (#34890)
3c48aadd98 : Update descriptions for transmitting CUDA tensors (#34888)
800bdcf000 : Removing experimental tag for RPC and adding experimental tag for RPC+TorchScript (#34887)
6446ccce76 : Adding warnings for async Tensor serialization in remote and rpc_async (#34885)
0d857d55b9 : Add a warning for RRef serialization (#34884)
f87cd83d11 : Append multiple arguments to list of flags as multiple items (#34899)
841f7600bb : [quant][graphmode] Quantization pattern for aten::linear (#33854)
71f02a481b : [RPC] Avoid polluting Python root logger on importing "torch" module (#34871)
58c5b6d306 : adds quantized implementation of hard sigmoid (#34607)
97757dca79 : Format register_distributed_ops.cpp (#34922)
0216c76e12 : SFINAE Template Constructors of IValue (#34647) (#34843)
959a7138fd : Support RowWiseSparseAdam on GPU (#34341)
72e3d66f50 : [ROCm] Fix for std::isnan regression in ROCm (#34664)
b227ea955e : .circleci: Remove should_run_job, no longer needed (#34326)
5857a125df : Turn on exact_dtype by default on test_optim.py (#34825)
a4224886f3 : Eliminate guards through max_pool ops. (#34512)
6b701de130 : Add types argument to __torch_function__ (#34303)
275f5c8049 : setup.py: Add numpy as required for install_requires (#34510)
940e678da9 : Add back cudaHostRegister to cudart API. (#34665)
7a3cf67fd8 : Implement channels last upsample2d/3d forward pass kernel. (#34597)
3ad7dfa2cf : move emulation libraries to contrib (#34861)
cfab65d90d : Fix CMake Dev warning in caffe2/CMakeLists.txt (#34886)
3e68d0c5d0 : Revert D20461609: [caffe2] open source 2/4-bit SLS operators
95833a49e6 : [TensorExpr] Pull changes from bertmaher/pytorch_fusion. (#34842)
ecd7c0f84c : [RPC] Use qualified name str directly in RPC torch script code path (#34733)
a0b7a39a92 : Updating submodules
65889388d1 : Use randomtemp to resolve intermittent cuda build errors (#34777)
67cb018462 : Print cuda install logs for Windows CI (#34858)
acbca57d18 : improve batch_norm contiguous case's performance (#34530)
a8ca340ad6 : Remove all uses of AT_CHECK and replace them with TORCH_CHECK (#34846)
76d9e76b4a : Default to erroring when failing to return from non-void function. (#34663)
d9b97a4ffd : [caffe2] open source 2/4-bit SLS operators (#34783)
089a0a2117 : [torchbind] Test moving custom classes to/from IValue (#34847)
699a4ed8f5 : [testing][do not land] (#34605)
89cbc0edea : fix tests that could have racy script module instantiation (#34792)
e70c28856f : [Caffe2] Move more method implementations from tensor.h to tensor.cc (#34811)
471ddacd8b : Add retry decorator and use it for Hub tests. (#34829)
b336deb6ee : [quant][mobile] Not use qnnpack max_pool2d if ceil_mode is true (#34844)
1e140c353c : [profiler][rpc] fix a race condition in the profiler when multiple threads call (#33719)
422e348619 : Don't run user function until all UserRRefs in the args are confirmed (#34497)
d876fef743 : Fix send count for local RPC (#34809)
38b2856c71 : Split deserialize from runPythonUdf and remove generatePythonUDFResult (#34496)
ae0c88d6aa : .circleci: Add manywheel builds for python 3.8 (#34732)
480d1849b0 : [ONNX] Fix for expand -1 dim value (#34069)
1bac5fd0d3 : add hardsigmoid FP operator to PyTorch (#34545)
6d8649dc53 : [caffe2] fix Transpose2D calls in NHWC<->NCHW (#34625)
31eaeba38a : Increase the prec of test_baddbmm (#34764)
8bae1ed144 : PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem - copy (#34721)
976d6aaa51 : Revert D20251830: [TensorExpr] Add tensorexpr benchmarks.
ef78fa8668 : caffe2::OperatorBase do not need to be aware of at::Tensor functions (#34810)
e93e7b2795 : [TensorExpr] Add tensorexpr benchmarks. (#34230)
ea5c86c276 : [TensorExpr] Add LLVM codegen. (#34228)
35e7efeb9a : [TensorExpr] Add CUDA codegen. (#34227)
42b2c8c65d : [TensorExpr] Add a fuser pass based on tensor expressions. (#34226)
e31d462e92 : [TensorExpr] Pull changes to core classes for representing expressions and statements from the side branch. (#34224)
99b91ee2ad : [fix][tiny][caffe2] Avoid triggering errors when allow ratio is 100% (#34757)
24c9e61e79 : Enable JIT tests on Windows (#27029)
1af6002321 : Initial implementation of NNPI Int8FC op
a57f92e4de : [jit] copy unused/ignored methods to ScriptModule during compilation (#33981)
cec9758afa : [quant][graphmode] Add quantization pattern for quantized::add_relu (#33532)
8eaafbd99b : Remove unused newWithSize declaration. (#34730)
b94d650868 : Remove unused newView declaration. (#34729)
a66b837b19 : Migrate dirichlet_grad from CUDA_tensor_apply4 to TensorIterator (#33996)
c3c0cf1591 : Migrate binary_cross_entropy_backward from CUDA_tensor_apply4 to (#33995)
762be86e63 : [C++ API Parity] [Optimizers] added closure to optimizers (#34790)
bdd7dbfd4b : [C++ API] RNN / GRU / LSTM layer refactoring (#34322)
d4f182d06b : Add overloaded name to prim operators (#34280)
c86d1361b8 : Removes unused THCTensor_(triu), THCTensor_(div) (#34712)
c258e4732a : solve conv3d backward get incorrect result problem (#34358)
7848c229b8 : Move min and max(reduce all) to Aten(CPU) (#33936)
f058c03b15 : Disallow sending CUDA tensors over RPC for current RPC agents. (#33604)
f404537c26 : CUDA Loops: move address computation into policy, make policy.load load all arguments (#33720)
528aabd373 : Fix backward compatibility check test for schemas containing (#34782)
15c84c37b6 : [PyTorch BC] Clean up the BC whitelist (#34784)
08bc3c6cbf : Remove unnecessary import (#34778)
1d81bd02cc : Export roi_align_gradient_op to c10 (#34776)
373c80ee90 : Fix missing header (#34762)
6c555e1508 : Revert D20311699: [pytorch][PR] [C++ API] RNN / GRU / LSTM layer refactoring
84bd71dbd4 : Enable threading for XNNPACK ops. (#34547)
4da5569300 : Pass to remove prepacking ops. (#34319)
7dd5da2026 : JIT pass to insert XNNPACK ops (#34048)
4c30fc7238 : Integrate XNNPACK with custom class for packing weights. (#34047)
e23a9dc140 : [C++ API] RNN / GRU / LSTM layer refactoring (#34322)
5710374e4e : [reland][quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279) (#34744)
fb20621b3b : Move torchbind out of jit namespace (#34745)
8a395882ce : [quant][onnx] Support conversion of quantized sigmoid operator from pytorch to caffe2 (#34629)
af28915164 : [quant][onnx] Add support to convert max_pool2d quantized pytorch op to C2 (#33945)
d041d0784e : [C++ API] RNNCell / LSTMCell / GRUCell layers (#34400)
68758b2fa0 : Add the quantized batch_norm3d and also batch_norm3d fused with relu operators (#34702)
da11646db1 : [C++ API] Link to module options doc for functional that has same options as module (#34752)
7dee36a061 : .circleci: Remove CUDA 10.0, no longer needed (#34726)
52005b551c : invokeOperatorFromPython: support overloaded operator calling (#34671)
ab76a8206f : [JIT][mobile] Support built-in Function call in lite interpreter (#34676)
af3a7e2b50 : [jit] small cleanups after script:: removal (#34677)
e7910aa9e5 : [fix] use non-inplace for insert observer pass (#34190)
1734bd6871 : skip mask_rcnn test (#34734)
6d790c3611 : Mark PyTorch incompatible with python-3.6.0 (#34724)
aedffdf7d8 : Support for Tensor Shape Type Hint (#34595)
c9ed111894 : [caffe2][quantization] Add initializer and precision as read-only property to QueryTensorQparam (#34706)
c371c3aba7 : [rpc][profiler] add a test case to verify record_function context manager works (#34511)
0f3b6f3dec : Add min function to cuda math compat (#34723)
a730abd997 : [PyTorch][tools] Add linux64 clang-format hash
f933fa3613 : [docs][1.5] update RPC docs to reflect correct use of dist_autograd backwards and dist_optim step() (#34670)
c9023e3b12 : Support left and right shift operators in JIT (#34563)
c34ee4fb6e : [JIT] disable test (#34722)
027d7f7ba5 : Delete AT_WARN and replace all AT_WARN with TORCH_WARN (#34623)
4a599f47fb : scripts: Add script to promote conda packages (#34659)
b1dbe33056 : Skip `TestNN.test_spectral_norm_load_state_` if PyTorch is compiled w… (#34686)
40eff454ce : Fix max_pool2d NHWC for large tensors; fix incorrect use of cudaGetLastError() (#34519)
3924c55f4c : [C++ API] Update torch::nn functional docs (#34688)
27410318ad : [PyTorch][Mobile] Fix the operator latency issue.
8e8a37d746 : Fix bug in baddbmm corner case (#33467) (#33538)
8f854fb9e2 : [1/n][multi-tower] add partition info in predictor construction (#34175)
14c1ab049d : [Codemod][FBSourceGoogleJavaFormatLinter] Daily `arc lint --take GOOGLEJAVAFORMAT`
b93518a662 : Revert D20422879: [pytorch][PR] Remove hotpatches that circumvent MAGMA bug
6791ae51a5 : Updating submodules
fd35596585 : [docs][1.5] Update distributed autograd note (#34657)
808f84ee35 : [Shape Inference] Update shape inference in dper3 backend - C2 part (#34474)
ad4bc8c9b8 : Best-effort Error Detection for Using Deleted UserRRefs (#34673)
f9aa0c870f : Use c10::str in py_rref.cpp (#34681)
673d56c838 : Use c10::str in process_group_agent.cpp (#34679)
e9a660a160 : Revert D20354878: [quant][graphmode] Add quantized conv2d-relu fusion pattern
5d65b5cd01 : Add the 3d upsample quantized op for video model (#34594)
d5f8c8f3ba : Revert D20121169: [pytorch][PR] ONNX Export Support for CrossEntropyLoss
4ae74b3b25 : [DPER3][Shape Inference] Initial Shape Inference in DPER3 frontend (#33607)
0ff4d37933 : [quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279)
44256199a9 : [JIT] remove specialized list ops (#34520)
c78eacb5ee : scripts: Add promotion script for s3 to pypi (#34500)
52787388d2 : [tools] Add clang_format_new.py to download, verify and run clang-format binary (#34566)
90ca7a1feb : [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927)
9f05fc9322 : [Aten] First argument of check_names_valid_for() should be an unsigned value (#34158)
721bd11cc3 : [caffe2] Refactor out common util functions from tvm_transformer (#34652)
787c307e63 : Revert D20368543: [pytorch][PR] [JIT] remove specialized list ops
8c332ff84f : [JIT] EliminateDeadCode shouldn't remove custom operator node that has untracked mutation (#34635)
fe9b4e3cba : [DPER3] Blob Reorder (#33579)
9e6cd98c3f : Ensure torch_cuda is linked against on Windows (#34288)
31cd893899 : remove some TH dead code (#34644)
cb06cb7b9f : Remove hotpatches that circumvent MAGMA bug (#34357)
a74fbea345 : Continuous bernoulli distribution (take 2) (#34619)
944ea4c334 : ONNX Export Support for CrossEntropyLoss (#33767)
352e9b11e0 : Attempt to resolve inconsistent dll linkage warnings on MSVC (#34639)
fff6fe83a7 : [pytorch-rpc] WireSerializer should check has_storage() (#34626)
2f32b92763 : [ROCm] Enable BFloat16 type for EmbeddingBag ops et al (#34630)
1e6c47413a : Updating submodules
d81d65b2f7 : Add entry for distributed tests to CODEOWNERS. (#34637)
f9f8424386 : [JIT] remove specialized list ops (#34520)
3f1ba3c465 : Redo of "Add API for listing functions overridable by __torch_function__" (#34240)
4e07c35679 : Delete all user forks tracked in RRefContext before graceful shutting down (#31893)
dd313f314e : Stop creating unnecessary Storage with newWithStorage1d. (#34389)
518e9f94c2 : Kill newWithStorage. (#34388)
9fd08b9c37 : Get rid of newWithSize. (#34387)
a54416d208 : [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508)
e95657b87e : [C++ API] AdaptiveLogSoftmaxWithLoss (#29076)
157d2d7825 : Fix version check for grad_fn for views (#34145)
43c9cc7a9c : add quantized ELU activation (#34267)
514cba0661 : [JIT] remove builtin interpolate functions (#34514)
962e362427 : Fix _cat operator (#34591)
a22008f91e : Prohibit copying autograd engines (#34567)
3c76b2aeea : Replace THPLayout with at::Layout in Python Argument Parser (#34543) (#34584)
f70945b1c3 : fix the quantized batchnorm2d (#34579)
c235be42dd : [jit] kill script namespace (#34515)
cf8b728255 : Delete OperatorOptions, absorb AliasAnalysisKind into FunctionSchema. (#34588)
b039bca4db : Fix typo in data.rst (#34624)
2fe7fc681d : [PT] add macro to expose caffe2 ops to PyTorch mobile (#34578)
0dc0fffca1 : [net_transform] only skip ConstantFill for autogen_grad (#34628)
86fb522acd : Remove cudaMemcpy on full memory overlap (#34548)
adb8e26182 : Fix for handling batch size 0. (#34599)
9064fafb6e : [C++ API] Update torch::nn layer docs (#34522)
56832bf7f3 : [JIT] Add support for tolist for GPU-resident Tensors (#34554)
866505b100 : [ci] try to fix rocm builds (#34600)
2de4f245c6 : Fix typo in documentation (#34581)
25e4e9eb86 : [On-device Benchmark] speed_benchmark_torch switch to log latency from dataset level to row level (#34598)
70f3298684 : Fix SELECTED_OP_LIST file path issue (#33942)
1f834b5c2a : [JIT] Torchbind error if python instantiates class that doesn't exist (#34568)
12fb8148e4 : Disable ROCM when building mobile libtorch. (#34478)
b553e6911a : [distributed] quicker exit in the case of failed tests in distributed (#34150)
2cf576e9ea : small typos (#34589)
82cdd3abae : Stop last usage of newWithSize. (#34386)
4b929e5466 : Revert D20193196: [pytorch][PR] PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem
6f8a8e4e47 : Revert D20282846: Delete OperatorOptions, absorb AliasAnalysisKind into FunctionSchema.
63964175b5 : Revert D20379910: [pytorch][PR] Set USE_RCCL cmake option (dependent on USE_NCCL)
2ec779d46c : PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem (#29488)
5fc5cf6571 : Stop using ctypes to interface with CUDA libraries. (#33678)
9d42177a31 : Delete OperatorOptions, absorb AliasAnalysisKind into FunctionSchema. (#34160)
b2344b70da : Beef up documentation on Dispatcher.h, reorder methods for clarity. (#33838)
fbbeee0983 : Port `remainder` from TH to ATen (CPU and CUDA) (#34136)
7aca9afdfb : [pytorch] remove boilerplate setQEngine() from PyTorch mobile predictors (#34556)
2ce9513b0c : AccumulateGrad: ensure sparse tensor indices and values refcount is always 1 (#34559)
ab2297dfe6 : Add Tensor overload for start in narrow. (#34317)
2e88a78d2e : add quantized_hardtanh (#34097)
8d84c5f1c7 : Fix static data initialization deadlock on GIL (#34505)
ce77d4a316 : Set USE_RCCL cmake option (dependent on USE_NCCL) (#31341)
23b2fba79a : [jit] Add type tags to lists/dicts in pickle (#33255)
4167db11f7 : [pytorch][ci] add build_only flag to mobile CI jobs (#34560)
a09c4d3997 : [pt][quant] Vectorized qmul and more methods on qint data types (#34376)
903ad90325 : [JIT] Introduce a fake Tensor creation node for IR unit tests (#34334)
d0834c5b64 : Preserve memory format for torch.cat on CUDA (#34526)
be3bc1deb1 : convert counter back to list #33229 (#33356)
dd7cec680c : Do not use clang if it can not parse system extensions (#34549)
09296c34a4 : Add the build for runtime dispatch for AVX, AVX2 instruction set (#26125)
259d7299db : [caffe2] do not declare __assert_fail in clang builds (#33893)
2d24005d18 : [C++ API Parity] rmsprop optimizer update (#33450)
6f12145c60 : Change std::to_string call to c10::to_string
2cf344be4c : Turn on exact_dtype by default on test_sparse.py (#34489) (#34542)
b185359fb4 : Avoid clone for sparse tensors during accumulation of grads. (#33427)
5f61f42c79 : .circleci: Switch should_run_job cuda 10.1 -> 10.2 (#34498)
cd9d9a2235 : fix handling of replica parameters in DataParallel (#33907)
0dbfb26e53 : Clean up include list of Shape.cu (#34528)
cb689a5d68 : remove duplicated process group gloo timeout (#31342)
c7dd5f89a2 : Fix #33562 (uncaught domain_error on macOS) (#34301)
9e94e46453 : Check if rnn weights need to be flattened (#34265)
29b673392f : [ROCm] Enable BFloat16 type for loss functions and few misc ops required for resnet50 (#34469)
20b18a58f1 : Update compiler warning about ABI compatibility (#34472)
f5ee46f1cf : Remove custom function in no_grad block error message (#33896)
3e6e2e9b7b : Print the current Node name in anomaly mode (#33875)
d30fa4837e : Unify gradient accumulation between distributed autograd and local autograd (#33214)
4f62cbe7de : [ONNX] Support one_hot (#34454)
965146b818 : [jit] delete netdef converter (#33807)
3671036ef3 : Adds true_divide function, analogous to Python's, JAX's, NumPy's (true) division (#34236)
e408d46477 : Print pytorch version before running ASAN tests (#34521)
b9c32209db : Use SerializedPyObj in PythonRpcHandler::generatePythonUDFResult (#34495)
b82658810e : Split deserialize from _run_function in RPC internal.py (#34494)
544fb64440 : Use SerializedPyObj in PythonRpcHandler (#34493)
18ef09f5ac : Remove _load_return_value from RPC internal.py (#34492)
6d1c4df660 : Consolidate Python Messages to use SerializedPyObj (#34491)
3b661eb84c : Avoid copy contents in SerializedPyObj (#34490)
2de4fa702b : [JIT] Preserve qualified names on traced modules (#34395)
79e1305519 : [net_runner] Get shape info from qtensors (#34321)
e16908cb1f : profile block outputs; helps guard elimination (#33889)
2c1a302d6a : [ROCm] Enable double __shfl_down (#34103)
0a4a558c2c : Dictionary Constants (#32869)
90ff3b56d0 : Kill some unused TH(C)Storage functions. (#34385)
4e357089b4 : Stop calling newWithSize directly. (#34384)
fea618b524 : [JIT] remove list with default builtin (#34171)
34688d2c48 : Add brand guidelines link (#34503)
2e7eef41ac : [quant][graphmode] Swap quantized functional linear with aten::linear (#33853)
7688ca631a : Enable RTTI for mobile builds, to enable custom class via torchbind in mobile (#34368)
2c0f3536b6 : [jit] Make `ModuleList`s a sugared value (#34320)
c218963270 : fix more errors (#34480)
15a7b9cf0a : [RpcAgent] Metrics for current num active/async rpc calls. (#34398)
8294db8f15 : [iOS][CI] Remove org-member from iOS Simulator Builds (#34410)
776d2a1e8f : [quant][graphmode] Handling ops doesn't require observation in insertObservers (#33481)
2b45368e50 : Fix cudnn 64bit indexing issue (#34407)
e025677e3c : Remove **kwargs from torch.meshgrid (#34356)
70fe508c26 : [pytorch] fix BUILD_CAFFE2_MOBILE gating around caffe2/operators/experimental/c10/cpu (#34354)
6d3783a6bc : Clean up unused newWithSize variants. (#34383)
91e922a338 : [AI Bench] Add support for nlu model
bcfd348858 : [ONNX] Export new_zeros (#34077)
baeb359e7a : Remove `using namespace torch::autograd` from header files (#34423)
e3d50c4dda : Retain the order of parameters while generating ConcreteModuleTypes (#34131)
f62a7e7efb : Simplify implementation of newWithStorage1d. (#34382)
b1bd950a4d : Fixed stub for AdamW (#34299)
739d4609c3 : [C++ API] Fix ModuleList compile error: error: 'begin' was not declared in this scope (#34463)
b09e90af1e : Fix C++ at::Tensor docs generation (#34467)
6e2bb1c054 : End of the .data removal in torch/optim (#34211)
7e55494502 : Warns on read-only Numpy array->tensor conversion (#33615)
79d47c1c5f : Fix the missing ';' in Conv.cpp (#34448)
7d9f611b64 : Add worker_name helper to dist_utils. (#34162)
8a17dc65af : [quantization] Make FP16 RNN use new prepack op (#34339)
45a504dd2d : [JIT] Introduce BuiltinOpFunction and integrate into torchbind (#34098)
60e8615a6d : [JIT] Virtualize Function (#33921)
bb1114258c : [JIT] Move stuff out of class_type.cpp (#33900)
65bad41cbe : Fixed typos in quantization docs / docstrings (#34182)
c5e822b7bb : Back out "[jit] Add type tags to lists/dicts in pickle" (#34406)
392afb9f8b : Fix overlapping keywords (#34142)
b0479506a8 : Add the 3d avg pool for video related model (#33339)
d98516026e : [PyTorch BC] Clean up the BC whitelist (#34393)
ccf6fab65e : Fix doc and type hints for "torch.add"; fix deprecated python calls in tests (#33935)
01edb7450f : [Lite Trainer] Add necessary registrations for MNIST model (#33717)
96ca06cfce : Add nhwc memory format test for dropout (#34379)
37dfc6c498 : Reenable large conv tests (#34259)
516a587438 : Enhance reproducibility documentation (#33795)
079de7f376 : .circleci: Remove macOS builds related to CUDA (#34333)
2d3f6cbf03 : .circleci: Update default smoke tests from cuda 10.0 -> 10.2 (#34328)
5608ffc46c : [PyTorch] Remove const modifiers from passed by value integers in qbatch_norm_fn (#34378)
c6ea71b6e8 : Fix Conv.cpp, &&= is not a C++ operator (#34381)
5f641f93f1 : [aten] Don't deadlock in IValue::Future impl, tests. (#34099)
0489b8da42 : Add scripts to promote S3 artifacts from test channels to stable channels (#34274)
879a90b322 : [ModelLoading] Use byte encoding for uint8, fp16 etc. instead of int32 (#34343)
98afce3c56 : Remove unnecessary assert in autograd engine (#34307)
6d8a0f6731 : [Aten] Init container iterators to an unsigned type (#34159)
4c99351de6 : [AMD] Remove num_gpu check for remote execution (#34318)
4872b126fd : [aten] remove stmt unreachable, variable never used warnings (#34017)
82a177c07f : [c10] remove warning attribute does not apply to any entity (#34018)
17ceb6941f : [RPC] Create local RRef<ModuleInterface> remotely in Python, use it remotely in TorchScript (#34183)
a7da4490cc : Clean up some legacy scalar/empty handling. (#34217)
9c5578fd0a : Make sure Vec256 int32_t and int16_t loadu temprary arrays are properly initialized (#34281)
35b6d2945d : Tensor.random_ check that from and to are in tensor dtype bounds (#34033)
30680196e4 : Revert D20121915: [JIT] Add support for list()
f9f135c5d8 : ChannelsLast3d support is_contiguous, contiguous, suggest_memory_format, caching (#33033)
415595ace4 : [C++ API] Remove init-list form of at::indexing::Slice (#34255)
b8fd88319a : C++ make torch::nn::Sequential push_back(AnyModule) methods public (#34208)
9a5e9d8cec : [pytorch][mobile] change mobile build scripts to build PyTorch by default (#34203)
b50825e011 : Make RecordFunction more robust for async use cases (#34122)
38857734f0 : [JIT] fix py35 test (#34350)
76035f050b : [C++ API Parity] Adam: updated step and class design (#33730)
f4da78f1b3 : Remove RPC TorchScript private API (#33978)
02478984d6 : Add support to dump unsupported ops. Add lite_interpter_load test. (#34278)
434af5d94a : [quant] Speed up per-channel min-max observer (#34118)
d2b5eb2a45 : [ONNX] Fix for random generators export (#33789)
89d314b5d5 : [pytorch] update mobile docker image version (#34337)
1cf12b7e53 : [quant] Fix histogram observer to work with QAT on GPU (#34232)
e4a883e601 : cuDNN convolution try multiple algo (#33073)
5500c3de0a : Revert D20150304: [pytorch][PR] [JIT] Introduce a fake Tensor creation node for IR unit tests
78aebbcb88 : [JIT] add other module apis (#34106)
2af64ba3ed : Allow output to zero-strided tensors if the size is <= 1 along that dim (#34100)
ccf4d69b75 : [Lite Interpreter] Enable __setstate__ (#33294)
765c5b1c95 : .circleci: Add CUDA 10.2 to CI (#34241)
f218842f2e : [JIT] Add support for list() (#33818)
479c3b0aa5 : [JIT] add support for torch.norm (#33783)
beb4309406 : [ONNX] Reduce ONNX test time on CI (#33242)
ff2731b45c : Revert "Disable MNIST test in test_xla() (#34261)" (#34316)
9651088228 : Tuck the packing logic into Int8FCPackWeight op (#34289)
9ce833879f : [JIT] Introduce a fake Tensor creation node for IR unit tests (#33914)
75d29f8d3e : Allow converting IValue to vector<string> (#34269)
3a4bac5c76 : Throw a proper error when parsing local variable annotations without assignments (#34133)
ed11e2536a : [pytorch_ci] Skip determination tests in rocm
e907128caf : [ROCm] Enable BFloat16 type for pooling ops (#34166)
8216d9ae64 : ONNX Export Support for NLLLoss (#33509)
e642a65bea : [pytorch][CI] add e2e mobile custom build jobs to CI (#34184)
d98bd5e1f5 : [test all] Back out "Revert D20171428: [profiler] fix chrome tracing for profiler run with cuda"
4a194f89aa : Disable MNIST test in test_xla() (#34261)
2b79bab029 : [CUDA_FUSER] Fork CUDA fuser (#33527)
e132047f1b : [JIT] fix alias assertion (#34268)
e2ddf935bb : Run RPC JIT tests with variable type hints only in Python >=3.6 (#34284)
c62de4286e : Add test to verify dist_autograd doesn't populate .grad field. (#33949)
e1c6f93f14 : Clean warning message (#34143)
1546d2afeb : [pytorch_ci] Don't run determination tests in py35
e236e15934 : [quant] Run weight_post_process for QAT (#33852)
d59e036f4d : Revert D20194092: Add support to dump unsupported ops. Add lite_interpter_load test.
17a5c67796 : Add support to dump unsupported ops. Add lite_interpter_load test. (#34072)
385067ed4f : [pytorch][cmake] improve build mobile with host toolchain (#34187)
93990bab58 : Make use of our S3 mirror if Yann Lecunn's website is not accessible (#34215)
67608cc018 : Fix MKLDNN conv2d 5d weight handling (#34115)
9dd5d51b01 : [ATen] Exclude CUDA tests when running `basic` under valgrind (#34181)
8269c4f3d3 : Added nullptr check for pthradpool_get_threads_count (#34087)
ac6e75a165 : Revert D20195053: [pytorch][PR] Add API for listing functions overridable by __torch_function__
78b81dad83 : [Dist Autograd][Better Engineering] Enhanced Error Reporting in Dist Autograd/RPC (#34179)
45b8c8dbcb : [torch] Fix sign-compare warning in `torch::utils::rnn:pack_sequence` (#34185)
39f78db7ec : optimize UpSampleNearest 1d 2d and 3d performance on CPU (#31452)
112cecc440 : Remove the use of macros when defining division between integers (#34104)
438f4ea0ac : Cleaner implementation of bitwise operations of integeral types (#33849)
3a3fcbbc39 : Use templates instead of macros when defining bitwise operators. (#33835)
78ad3dc174 : Fix Lint (#34218)
6f52562e75 : [quant][graphmode] Add add_relu pattern in skip values (#32816)
22506ae71d : Reduce code duplication in OperatorEntry by keying hash map on optional<DispatchKey> (#33817)
c688eb28a2 : Minor fix for quantizing the Ads complex model
5f4a01b2ea : Update MAGMA to 2.5.2 for Windows (#34205)
f6c883ccea : TH: Defer to ATen's AVX detection code (#34088)
fdd771c90f : Make tracing in code gen optional (#33715)
790274bff2 : [caffe2] Fix signed unsigned comparison warning (#34161)
6d78882158 : Add layout.html to template for stable docs (#33770)
fc6dce6033 : [c10] Fix TORCH_INTERNAL_ASSERT_DEBUG_ONLY MSVC bug (#34173)
f097ca503d : Add and test training in lite interpreter. (#32359)
2ba74b741e : Add backward Int8Quantize shape inference (#34152)
57c1b80ec2 : [pytorch]Migrate _th_ger to Aten and kill resize_scalar in codegen (#33792)
7d01888a75 : [JIT] Register rpc.rpc_async(..) as a JIT operator (#33329)
9b39ad7f2c : [jit] Fix iOS build (#34180)
3c042a6ab9 : [pytorch][mobile] support for custom mobile build with dynamic dispatch (#34055)
e5bbd23ca7 : [quant][graphmode] Skip quantizing input and output in matched module (#32814)
7cee787a19 : [pytorch_ci] Python target determinator (#33577)
7c20578794 : NNPI op mapping correct SpatialBN NNPI op name (#34176)
a19db54b36 : [Redo][ATen] Remove AT_ASSERTM from Blob::free_() (#34168)
31cc311143 : Expose `CUDACachingAllocator` `raw_alloc` and `raw_delete` to python (#33860)
4edff32f81 : [c10] Fix typo in __assert_fail noreturn modifier guard (#34157)
99e211e661 : [jit] Add type tags to lists/dicts in pickle (#33255)
7da24b36b1 : Apply clang-format to RPC files (#34139)
3af0dffe84 : Use double quotes in C++ to stay consistent with Python RPC docs (#34095)
f1085a8e41 : Improve ProcessGroup RpcBackendOptions Constructor API (#34081)
9d1c971b11 : [Aten] Suppress valgrind leaks in libcuda (#34169)
1beb309e03 : Make DEBUG == REL_WITH_DEB_INFO on CUDA build (#34153)
cb3905e8cf : .circleci: Re-do run nightly pipelines on tag (#34148)
7cda964e20 : Remove deprecated codepath for old-style autograd.Function (#30696) (#33956)
04378eb618 : [JIT] Add modulelist indexing for integer literal (#29236)
ba1bd41767 : Turn on strict dtype checking for test_torch.py (#33825)
c579976603 : Revert D20171428: [profiler] fix chrome tracing for profiler run with cuda
f299c2d6e1 : Completely kill CUDA_tensor_apply3 (#34026)
1affaf8d10 : Migrate lerp from CUDA_tensor_apply3 to TensorIterator (#34025)
27f56632a4 : Migrate bce loss from CUDA_tensor_apply3 to TensorIterator (#34023)
92083f31b5 : [gloo] dont hold locks in calls to buffer in ProcessGroupGloo:RecvWork::wait() and (#33926)
c93b1d427c : [profiler] fix chrome tracing for profiler run with cuda (#33987)
6a97777f72 : Remove use of `.data` from optimizers (#33640)
f26bbb5f86 : [fix] flake8 lint error (#34146)
a8fc3d8c2a : Fix HistogramObserver to not do detach on input (#34114)
9650253d70 : [caffe2] fix ambiguous call to 'fmaxType' THCHalfAutoNumerics.cuh (#33569)
49586a2a7e : fix sph batchnorm to use sph fma
49921cad28 : Minimum build should also exclude XNNPACK (#34110)
fbc9c61c81 : randn and normal_ for complex tensors (#34037)
ad2825a2c9 : Add API for listing functions overridable by __torch_function__ (#33791)
358450e02b : improved TorchScript traceback (#33834)
74a0663afd : In torch_test, mark every test that takes >5s on a DEBUG CPU-only build as slow test (#33901)
9b527b35bb : CUDA Vectorized Dropout (#33879)
0cf34cf672 : [pytorch][mobile] make sure mobile build work with dynamic dispatch (#34038)
51936c5ea4 : [pytorch][CI] end-to-end custom build script (#34012)
5b9f1ada30 : [quant][graphmode] Observing input/output values in call site (#33277)
7289e8e865 : [caffe2] std::numeric_limits<double>::quiet_NaN() use instead of ::nan("") (#33566)
1702152ef9 : fixup unit tests (#34105)
5082839de5 : Migrate Lerp from CUDA_tensor_apply4 to TensorIterator (#33994)
4074d559e4 : Migrate kl_div_backward from CUDA_tensor_apply3 to TensorIterator (#34022)
3def76583a : [RESUBMIT] [pytorch] Migrating index_add cuda to ATen (#33548)
f29110fdf8 : [pytorch] blas gemm fix for k=0 (#33819)
b1fd7ba019 : Revert D20169501: [pytorch][PR] .circleci: Add CUDA 10.2 to our CI pipeline
1aff3e2dd3 : Revert D20204104: [pytorch][PR] .circleci: Add filter to run nightly builds on tag
5be8a4e027 : find mkl installed by nuget (#34031)
a23e8099dd : Fix typo (#34008)
2ce9d26809 : Support cdf for mixture_same_family distribution (#33408)
e0b90b87a4 : [C2] Fix slowness of the ReshapeOp. (#33729)
0afee0c20b : [rpc][metrics] add initial metric handler classes. (#33153)
0689cf8fc1 : [c10] Make __assert_fail CUDA definition compilable with clang host compiler (#34102)
8a14b41617 : fix warnings reported by PVS (#33868)
0729ad733d : Change lint from python2 -> python3 (#34107)
f909b5535e : [autograd] fix allow_unused checking for C++ API (#34035)
0759191f12 : blacklist spatialBN until bitwise matching (#34092)
3b93928ada : .circleci: Add filter to run nightly builds on tag (#34078)
ad3f4a32bd : [pytorch][buck] fix selective buck build (#34090)
1ed950e1b6 : [distributed] skip use_ignore_output tests in c10d if not built with gloo (#33513)
ff1fc402a8 : Migrate dirichlet from CUDA_tensor_apply3 to TensorIterator (#34021)
77b9016a8e : Migrate gamma grad from CUDA_tensor_apply3 to TensorIterator (#34020)
bb4465f9f5 : .circleci: Add CUDA 10.2 to our CI pipeline (#33471)
b874c039f6 : Allow checking for cached module before asserting (#33954)
a4716d0e26 : Fix lint (#34094)
c206b4398d : Show errors from the tasks in the thread pool (#33938)
a57a7b4c29 : Change input value in examples of `BCEWithLogitsLoss` (#34053)
15bf4892f2 : prevent crash on exit from static destructor race (#33955)
e568c039bd : Enable Tensor.random_(from, to) for half on CPU (#34030)
384a4feab6 : Fix bad math typesetting (#34027)
11843049d5 : [jit] Fix flipped PackedSequence outputs in script (#32955)
45c45195cd : Remove warning about building from source to use the NCCL backend (#34051)
51d969e86a : preprocessor cleanup (#33957)
4b3ae7e0af : Enable -Werror=format compile errors on torch exception types (#34019)
9239608037 : fix windows clang attributes (#33959)
87b3f87f27 : Migrate prelu from CUDA_tensor_apply2 to TensorIterator (#34003)
9956a231b9 : Fix backward compatibility tests (#34071)
ec0f2184ba : clang intrinsics targeting (#33958)
ba4cff2ffc : [dtype inference] Following pytorch default for float vs double (#33713)
cab8772c6c : Freezing Torchscript modules (#32178)
e73d4286b0 : Fix conflict between XNNPACK's clog dependency and our cpuinfo dependency (#33922)
e54b8e1a47 : [CUDNN NHWC CONVOLUTION] Re-stride input tensors for wgrad in cudnn_convolution (#33784)
31737e989d : [aten] remove shadowed declaration warning (#34014)
ad17dafc50 : [caffe2] Remove python2 from operator_test (#33977)
f4532d7542 : Fix typo (#33925)
71f8624ecb : Revert D19153199: [ATen] Remove `AT_ASSERTM` from Blob::free_()
6631c2a627 : [doc] Add grad context manager doc to toplevel torch module. (#33877)
a500491cbc : Fix index_put when tensor length > int_max (#33753)
f857fe18cd : [ATen] Remove `AT_ASSERTM` from Blob::free_() (#33929)
e017b1e9fb : Updating submodules
ad769d74d9 : Collapse _like overloads into a single overload. (#33705)
b98bce8cd4 : Add MemoryFormat to TensorOptions, but not codegen. (#33704)
9f7708eecb : Updating submodules
15caf3b516 : move test helper functions out of test funciton (#33960)
84ec5357d3 : Make HashNode visible (#34045)
ace2b4f37f : [resubmit] try to infer rref type from python (#33992)
7747fe81c4 : reuse named tensor error message in generated code (#33536)
7f7ea685c0 : Revert D18672405: Use codegen'ed unboxing wrappers
3acfccafbb : Revert D20172782: Fix mobile build
595445e889 : Revert D20178827: Fix mobile build
c596ec7eb3 : [pytorch] update code analyzer script to cover new c10::Module::def API (#33975)
5a8562a6af : Fix mobile build (#34000)
1494005cfd : C++ tensor indexing: more indexing tests (#30427)
0e52627358 : Fixing pthreadpool symbol conflict issue. (#33869)
85b1c45a45 : [JIT] fix alias assertion (#33952)
2111c4ff0c : [jit] Add missing tensor properties (#33906)
6e70b2da62 : Fix mobile build (#33985)
2f6ffe8c39 : [jit] Resolve type annotation names to types (#29623)
55b44f6746 : Throw an exception when method cannot be found from mobile module. (#33972)
de55e47a4b : Pass all ops to XLA with additional info about whether it's compound (#33908)
38b6cb479b : Check fuser results when profiling (#33944)
4377061baf : [caffe2] fix atomicAdd redeclaration Clang error (#33559)
4fb8679218 : [caffe2] fix field initialization after base Clang errors (#33556)
991f7a20f2 : Use clog from cpuinfo/deps instead of downloading (#33947)
69d2741480 : Add list of view ops to public doc. (#32560)
b678256bfb : Move glu to Aten(CPU) (#33179)
3c5677a676 : Use codegen'ed unboxing wrappers (#32521)
2fa51dde28 : Remove unnecessary tensor copies (#33732)
917e56e950 : Throw an error if nbytes is called on a sparse tensor. (#33897)
f5d92fbc25 : Get rid of newWithStorage2d calls. (#33823)
56d9906083 : update mapping of fake operators (#33946)
ad44394f15 : Updating submodules
9fd1a7697f : Create CODE_OF_CONDUCT.md
a726827ec8 : Formatting changes for gradient scaling (#33832)
5dde8cd483 : [caffe2] fix no matching function min/max Clang errors (#33563)
c6d301220a : Fix torch.cat() performance regression on single core CPU (#33534)
890242254b : Updating submodules
04dc0e6973 : Split Distribution.cu into smaller files to reduce compilation time. (#33892)
dece155335 : Modified assertEqual to handle complex tensors (#33773)
09046713cc : removed .data from test_autograd.py (#33886)
f5f1e5e7f6 : [quant][graphmode][refactor] Factor out getInvokedMethod (#33649)
7f1112820a : [quant][graphmode][refactor] Move check for weight outside of insertObserverFor (#33276)
7c13f576ea : [quant][graphmode][refactor] Checks for bias and weight (#33273)
97541a5106 : [quant][graphmode][refactor] Move values_to_skip check inside valueNeedsToBeQuantized (#33275)
64aab3260a : [jit] allow RRef local creation with IValue objects (#33263)
1507573a52 : [caffe2] fix no return statement in constexpr function Clang error in TypeIndex.h (#33576)
c18cb1eb52 : Improve dll loading logic on Windows (#33856)
cb8d9f99aa : [JIT] Implement Tensor.tolist() (#33472)
5029ff001b : [Revert] manual revert of D19918320 (#33920)
8f84deddd1 : [jit] fix up refs in overview.md (#33919)
d6485b411b : [jit] add top-level readme to csrc/jit (#33916)
bd7e9c490a : [jit] stop printing crap in test_jit (#33917)
d66c320b10 : disable leaky_relu_ backward calculation with negative slope (#33639)
997b5b5797 : [quant][graphmode][refactor] Simplify signature for insertObserverFor (#33274)
db4a24e008 : [jit] remove some unused/redundant files (#33806)
877ab3afe3 : Better handing of Autograd+Fork errors. (#33885)
746e5218e7 : Mistake in MSELoss documentation (#33836)
48fd410e44 : Try fix XLAPreAutograd with *_like functions. (#33848)
87e97ced20 : Split UnaryOpsKernel into smaller files for faster compilation. (#33888)
aff1da5aac : .circleci: Remove trailing slash, fix conda upload (#33903)
a7fe200f5f : [caffe2] simplify caffe2 code with fbgemm handling block size 1 emb (#33774)
524dad13a8 : Add device to the test tensor. Default device type is CPU, in pytorch… (#33635)
edd5c009f7 : fix docs mistakes in lr_scheduler.MultiplicativeLR (#33805)
d97560999b : Split BinaryCompareKernel.cu into a file-per-kernel to speed up compilation. (#33871)
5eacdfb21f : Revert D20127441: [pytorch][PR] [JIT] Introduce a fake Tensor creation node for IR unit tests
c4d611a0f5 : Split BinaryMiscOpsKernels into more files for faster build times. (#33873)
910acafc79 : Revert D20124224: [jit] stop printing crap in test_jit
53630f7681 : Updating submodules
243af17d65 : Revert D20103905: [jit] Fix flipped PackedSequence outputs in script
a7cf5c859f : Revert D20136865: fix lint
908eee5583 : remove .data from test/distributed/ (#33874)
390d4d6df3 : [JIT] Introduce a fake Tensor creation node for IR unit tests (#33595)
dbe850af5b : [jit] do the code reorg (#33851)
afbd04449e : [quant][graphmode] Swap dequantize after inline for ops that doesn't require observation (#33173)
6647a44e8c : Automatic update of fbcode/onnx to 9fdae4c68960a2d44cd1cc871c74a6a9d469fa1f (#33858)
bd77abffe3 : Kill some unused (TH)Storage-based APIs. (#33815)
b10761d890 : fix type stub errors (#33762)
095de1e872 : Migrate `random_` from the TH to Aten (CPU and CUDA) (#33663)
f5952cf7cb : fix lint (#33861)
9733711394 : [JIT] Support calling Tensor.element_size() in TorchScript (#33808)
00f685d2d8 : Add Scalar::type() (#33603)
d41c8d0461 : Correctly preserve "not set anywhere" TensorOptions when merging. (#33510)
ca002a0f6b : Switch empty_like to use merge_in to process TensorOptions. (#33505)
84101f353e : Avoid problematic pickle usages on Python 3.8.0 and 3.8.1 (#33824)
421e3e9a54 : Release GIL for RPC pybind functions. (#33610)
cea0cc8ca8 : [jit] Unify augmented assign handling (#33578)
24dd800e6a : [Dist Autograd] Functional API for Dist Autograd and Dist Optimizer (#33711)
4c33222c51 : [quant][graphmode] Replicate dequantize nodes (#33531)
2b9fa4a756 : [jit] Fix flipped PackedSequence outputs in script (#32955)
150e025be8 : [jit] stop printing crap in test_jit (#33779)
4dad00b64b : [rpc] special case tensor type check when getting RRef (#33582)
d494986171 : [jit] make RRef type annotation available in Python (#33526)
2448c97a53 : [jit] infer RRef type as container type (#33369)
857eb4145e : [JIT] add support for torch.cdist (#33737)
f31b1d3453 : [JIT] add support for lu_unpack (#33736)
4543cf4eb1 : [JIT] add support for torch.lu to torchscript (#33724)
fddf73250d : [JIT] fix resolving of functions in torch/functional. fix compilation of torch.stft (#33504)
057fd5e10d : add support for _modules, reducing special casing of nn.Sequential (#29495)
6eef66e1f4 : .circleci: Divert packages to test channel on tag (#33842)
cd0acf4374 : port masked_fill from TH to ATen (#33330)
a0e90e1b45 : ONNX Error Message on Missing Op (#33593)
02908dfa67 : remove setStorage with null StorageImpl support. (#33735)
04f88a3a7b : Add partition info message to NetDef (#33616)
51e405743f : Revert D20010383: [jit] Unify augmented assign handling
867990dc17 : [jit] Unify augmented assign handling (#33578)
c32fa465a5 : Preserve Backward compatibility of models serialized before #31040 (#33796)
5c33d98b0d : Add assert_tensor_equal and assert_tensor_not_equal to test/cpp/api/support.h (#30426)
8aa09de19e : build: set -DNDEBUG in Release (#32719)
93e30c16cb : .circleci: Switch to using robot token for conda uploads (#33786)
45e4b614d1 : Per channel quantization performance improvement (#33772)
f597ac6efc : Fix grid_sample gradients at image borders (#32829)
b8f0acf50f : Fix examples with updated pruning naming convention (#33144)
a8e7ed48f4 : [pt][quant] Parallelize quantize and dequantize (#33765)
2eb95d8f4a : Migrate `fmod` and `fmod_` from TH to ATen (CPU) (#33592)
f87b0b2515 : Remove the use of macros in defining binary ops for base Vec256 (#33733)
c1dd70688a : Fix deprecated python "add" calls (#33428)
24659d28a1 : Feature/vonmises upstream (#33418)
758ad516f3 : [Lite interpreter] Pass shared_ptr properly (#33667)
fc6a153688 : [WIP] Reanimate gradient scaling API with original scale update heuristic (#33366)
a836c4ca78 : Skip manual backward for `cdist` with case `p=2` (#31167)
9a5ea71380 : pad_packed_sequence: doc improvement (#33768)
5bac7febad : removed padding and dilation from LPPool2d Doc (#33714)
038ee01393 : Disable printing of the histogram when dump (#33749)
8667379133 : [quant][graphmode][refactor] Factor out insertDequantCall (#33172)
a13ee18982 : [quant][graphmode] refactor nodeQuantizable (#33171)
8159316714 : Revert D19941103: [pytorch] blas gemm fix for k=0
4d203c6fc8 : Move cumprod and cumsum to Aten(CPU) (#33280)
0dded4026e : [C++ API] Add PackedSequence / pack_padded_sequence / pad_packed_sequence / pack_sequence (#33652)
c20628c5f6 : Remove `clean_tag` from tensorboard (#33133)
72288e82e2 : Use shim executable sccache-cl as the compiler instead of sccache cl (#33745)
0e74cbcc54 : Revert "Revert "Revert D19975411: Remove special case codegen for tril_indices/triu_indices." (#33572)" (#33742)
9bc922d518 : Extend cuda install timeout for Windows jobs (#33755)
7eba36b1f6 : [quant][graphmode][refactor] Separate preprocess step for insertObserver (#32813)
d82093e665 : [profiler] remove redundant assert in record_function_ops (#33225)
2b404de347 : [scripts] Add script to fetch clang-format binary from AWS S3 (#33644)
98526c7444 : Migrate fake_quant_slice to TensorIterator (#33744)
8196ec0115 : Remove some dead THStorage related code. (#33734)
5ef1c2c5d2 : Back out "[pt][quant] RNN debug test" (#33750)
ee23944f46 : [Caffe2] Fix shape inference for element-wise operators (#33431)
819ca2c285 : add bfloat16 conversion method in type stub (__init__.pyi) (#33747)
fd175fa8a2 : fix bugs in gen_pyi.py (#33748)
6bdb59539f : follow-up test_torch .data removal (#33696)
4ef854b4b4 : Fix potential hang when exiting main process (#33721)
7a8b6c2c6b : [pytorch] blas gemm fix for k=0 (#33419)
4460c8b034 : [C2] Tiny changes to adagrad to make it slightly better. (#33727)
65864d3634 : [C2] Small improvement for elementwise_mul operator. (#33537)
adbe289870 : Update MKL to 2020.0.166 for Windows (#33690)
36919278cc : C++ tensor multi-dim indexing: add index() and index_put_() overloads, simple indexing tests, merge with Python indexing path (#32841)
6aecfd1e80 : Mobile Backend: NHWC memory layout + XNNPACK integration. (#33722)
2a4aad7466 : Don't activate vc env again for cuda with ninja on Windows (#33700)
7caf3c396b : [quant][graphmode][refactor] Change signature of getModuleAccessPath (#32812)
a1862468d0 : Add missing test launchers for JitRpcTest and JitDistAutogradTest (#32891)
a9cef05f5d : improve EmbeddingBag performance on cuda (#33589)
3cf97bc23c : Fix typing error of torch/nn/modules/container.pyi.in (#33686)
d6ea4be153 : Fix minor problems in index_put_ docs (#33689)
54aac4af1f : Update hypothesis_utils.py (#33739)
cba8af9b24 : [pytorch] Set alias analysis kind to FROM_SCHEMA for qadd, qmul, qclamp, qconcat (#33359)
bc5e9e0d55 : [quant][graphmode][refactor] Move the check for qconfig inside insertObserver call (#32809)
bf00b4d305 : [TensorExpr] Add a boilerplate pass for future TensorExpr fusion pass. (#33464)
9278196d89 : scatter_add uses src, not other (#32307)
98af01ee7c : [quant] Make FakeQuant use REGISTER_DISPATCH (#33682)
b10a39bb32 : Migrate _cat from TH to ATen (CUDA) (#33237)
97da60d511 : Updating submodules
479e474a37 : [quant][graphmode] FoldConvBatchNorm2d support shared ClassTypes (#32379)
54e41a87eb : Make ELU great again (#33244)
5b031d961d : [pt][quant] RNN debug test (#33621)
696527e659 : [caffe2] Add embedding empty ratio checker (disabled by default) (#33145)
5090d7082b : add propagate flag USE_DISTRIBUTED for libtorch_python_source
330b69fef8 : Kill dead scalar_check. (#33695)
996c0adb53 : [quant] Regsiter fake_quant and observer attributes as buffers (#33626)
dc3d47110a : [docs] add experimental warning to TorchScript classes in language reference (#33697)
533b973fd0 : Fix visibility of torch::nn::RNNImpl::options (#33718)
062ac6b472 : Bring up new-style registration API as wrapper around old-style (#33205)
ced8865d91 : Add sigmoid to mobile ops
32c93099c4 : Add typing info for data members of utils.data.sampler classes (#33679)
4d9b649261 : jit pickling rref (#32959)
481e7f2e78 : catch and propagate warnings for JIT ScriptMethods (#33010)
6a76433b9d : [Update independent.py]add explicit string representation (#33676)
6a275b696e : adding IterableDataset to utils.data.__init__ (#33543)
e3ba533c8b : Minimize the cases where we have to cpu_zero. (#33570)
641750e33c : Fix NaN handling in torch.mv. (#31666)
039dc90854 : Revert D19521853: [pytorch][PR] Mobile Backend: NHWC memory layout + XNNPACK integration.
9d834cc889 : [JIT] Fix FunctionType::python_str() (#33680)
5fa03d4dbb : Fix bug where we were trying to get a schema for prim::Constant, which is not registered as an operator. (#33645)
e1bddbbaf6 : Bounds checking for functor execution in vectorized/unrolled kernels (#33642)
941b42428a : Mobile Backend: NHWC memory layout + XNNPACK integration. (#32509)
7aa605ed92 : Remove uses of `.data` in test_torch (#33638)
6d448acb34 : [PyTorch BC] Skip aten::random_ to fix BC CI (#33666)
9e384f9ce4 : Remove duplicate header include. (#33656)
312627a7c3 : Revert D19776613: Migrate `random_` from the TH to Aten (CPU)
a2f3c6c26f : Call RandomNumberSeed() on-demand (#33539)
8291e06f8f : Fixes cuda->numpy and non-strided->numpy segfaults (#33612)
59daf1611b : [Caffe2] Skip //caffe2/caffe2:caffe2_test_cpu -- 'DBSeekTest\.RocksDB'
1c08fa7051 : [Caffe2] Skip caffe2/caffe2:caffe2_test_cpu - DBSeekTest.LMDB
a7e22b4c6a : add bailout checks to checkScript (#32802)
9b2b15f4fc : misc windows warning fixes (#33632)
d971007c29 : Migrate `random_` from the TH to Aten (CPU) (#32534)
e10aa6b72f : Fix flaky DagNetTest unittest
6474ea404d : [C2] Native GPU implementation for bucketize (#33529)
15ba902c08 : Turn ONNX_ML into a proper build option. (#33424)
16d6c17845 : improve roll performance (#33623)
f62f1b2ef0 : Revert "Revert D19964089: [pytorch][PR] Allow vectorized gpu loop to … (#33553)
a72946dbab : Stop generating out full function type for registration, use decltype or infer it (#33097)
22963f42ec : Delete unnecessary aliasAnalysis specification from operator registrations. (#33093)
d5b768dffd : refactor strongTypePtr (#33590)
47e90d774e : C++/Python API Parity: add pad_sequence (#32387)
bb5181b716 : [TensorExpr] Add IR Printer. (#33220)
fc70fc3610 : [TensorExpr] Add IR visitor, IR mutator, and IR evaluator. (#33219)
49af9425a7 : [TensorExpr] Add core classes for representing expressions and statements. (#33218)
1a4f997178 : [TensorExpr] Add a class for representing data type. (#33217)
089d658153 : [TensorExpr] Add classes for memory management in tensor expressions. (#33216)
616beb1412 : [ROCm] Added support for pytorch extensions to use HIP (#32669)
ca8e025cdf : improve the doc of enforce_sorted in pack_padded_sequence (#33617)
293fa5fc44 : [Documentation] Fix minor typo in torch.serialization (#33549)
e77abb9a5b : Normalize reward-to-go in C++ actor-critic (#33550)
ee28831341 : [jit] Fix aug assign for non-tensor attributes (#32993)
fa80299bdf : __torch_function__ overrides for torch.functional and torch.nn.functional (#32799)
6cec555926 : Replace AT_CHECK with TORCH_CHECK in torch/csrc/jit/pybind_utils.h (#33524)
90f4c5695e : Revert "Revert D19975411: Remove special case codegen for tril_indices/triu_indices." (#33572)
e2a9ea0f72 : Ensure that lambda is no less than zero in softshrink (#33201)
a6a72ac68f : Fix all occurrences of C416. (#33429)
4588f49f68 : Kill cudaDeviceAllocator in THCState (#33380)
a943b0518b : strict check for a device type in Fuser (#33025)
e8a03438cc : Make TestCuda.test_memory_stats more robust (#33575)
009293ec5c : [pytorch][size] remove unused SparseCPUType from mobile build (#33517)
ac9b40164d : Use cheaper check in isTensorList (#33528)
d3d975cbf6 : Updating submodules
9266bde970 : [pytorch] Minor: add GIL assert to PythonRpcHandler::handleExceptionGILHeld (#33557)
0bde610c14 : Re-sync with internal repository (#33591)
3498c000e2 : [TVM] Remove dynamic batch size dispatching (#33584)
faa800eb5b : [JIT] remove inline everything jitter skip (#33468)
c882425c24 : Add 64-bit indexing support to THC index reductions (#33405)
23846d5a38 : [caffe2] use Clang identification macro in various places (#33574)
5782758b54 : Add instructions and operators for new bytecode format of PyText model (#33555)
108fc78395 : [caffe2] fix invalid % escape in inline assembly strings (#33554)
e5cf7afd0a : torch.tensor can infer complex dtype now (#33361)
13e4ee7883 : Added tensor.is_complex(), is_complex and dtype.is_complex py binding, tensor printing, and dixed the scalar type returned for complex float (#33268)
36d724c963 : run peephole to do profile-based optimizations (#33337)
1a25747342 : Check for consistent devices in at::where (#33432)
71225ecc8c : Revert D20006312: Revert D19975410: Update documentation on why _cudnn_init_dropout_state looks the way it is.
687a7e4a25 : Revert D19975411: Remove special case codegen for tril_indices/triu_indices.
d19a50bf27 : Add missing weight_decay parameter validation for Adam and AdamW (#33126)
cdf381c967 : Fix LambdaLR scheduler side effects (#32848)
3233033a17 : Revert D19975410: Update documentation on why _cudnn_init_dropout_state looks the way it is.
718c538ff9 : Add ability to enable/disable MIOpen at runtime (#33118)
01e1de8220 : allow remote torchscript call to itself (#32990)
a9e4448dff : Update documentation on why _cudnn_init_dropout_state looks the way it is. (#33347)
196fda5a79 : Remove special case codegen for tril_indices/triu_indices. (#33305)
ffe327f7d9 : Revert "Disable flaky test TestCppExtensionAOT.test_cuda_extension in… (#33404)
05fb160048 : Revert D19964089: [pytorch][PR] Allow vectorized gpu loop to have different argument types
883b18ea70 : Delete build_variables.bzl following configerator change.
e95282ab28 : [caffe2] make fused rowwise quant/dequant op work for N-dim tensors (#33426)
bf0951d937 : Updating ONNX checker logic. (#33522)
1fe635be3c : Allow vectorized gpu loop to have different argument types (#33222)
81394581a3 : [Caffe2][ThreadPool] Make sure numThreads does not exceed the number of big cores (#33523)
602ef0d9d0 : [WIP] migrate scatter_ to ATen CPU (+multithreading, nondeterministic) (#33139)
6cb9e6b015 : Back out "Revert D19871946: [distributed] pass in timeout to TCP store when initializing" (#33434)
ecb05f12c3 : Support broadcast for quantized mul kernel (#30442)
ea514c819a : Make slow_conv_transpose2d_backward tensors contiguous (#33462)
e5a02aa2fe : [caffe2] simplify relative error expr (#32999)
bd3c6e8e91 : avoid large vector copy when query per_channel q_params (#31040)
8527ba8b70 : [jit] Add None parameter as parameter instead of attributes (#32964)
507f963aa6 : [RPC Reliability] Enabled retries for RPCs with exponential backoff (#33365)
416413dec4 : [jit] add `inlined_graph` method to ScriptFunctions (#33508)
5e80ca12bb : [pt][fbgemm] Turn on USE_FBGEMM on Windows env (#297)
cbf8657945 : [jit] Fix ModuleDict type sharing (#33515)
8908b62fb2 : Clean views created inside no_grad that are modified inplace (#32839)
20c1e25832 : Re-sync with internal repository (#33519)
1d9fcf8bd2 : Correct documentation for torch.unsqueeze (#33478)
62c953b348 : Fix svd tests between devices. (#33470)
a8bd1d24c9 : [Documentation] cummin doc fix (#33492)
d4e4513a64 : [JIT] Add more ops to 'removableGuard' in guard elimination pass. (#33465)
07e5e42713 : [jit][fix] Remove slot in parameter slot (#32846)
1e3664b6ef : Remove c/pdist tests from _internal/common_utils.py (#33409)
60339a38ed : Fixes #33001 (#33456)
165b1ad8e8 : Kill THCState_getNumDevices (#33375)
96e5dea9f4 : Remove unused variable (#33484)
d7f00b1b45 : Remove using declaration from widely-used header file. (#33293)
a67691e508 : Fix isnan for integral types in MSVC (#33483)
53ad596342 : [jit] Remove `torch.jit._dump_trace (#33453)
8b6a898d2b : Updating submodules
d13c1b8af8 : [jit] de-optionalize SourceRange context (#32880)
d85c913bfd : [jit] Delete the ErrorReport default constructor (#32879)
e9ac92a242 : Make RPC message constructor actually move (#33440)
d50305e2f3 : Updating submodules
a5f01846c2 : Kill THCState_getCurrentStream (#33376)
96989a2a11 : [ONNX] Adding ONNX large model export support in exporter (#33062)
3ad59734d7 : Add type annotation for bias in _ConvNd (#32885)
feaa622fc6 : [Update transforms.py]Add `TanhTransform` (#19785)
43e015f4b1 : Bug fix in dynamic quantization kernels + better test coverage. (#33320)
f1b73799d5 : Clean up isinstance flags (#33265)
7f2c25b6fa : Move special ops into interpreter (#32889)
83c347ff4a : Remove prim::Constant op (#32804)
c59e35b147 : interpreter handling for varargs to remove need for looking at Node (#32791)
da015c77a1 : Cummax and Cummin doc update and performance benchmark (#32537)
016d73bd74 : remove Complex CPU/CUDA backend enum keys (#33267)
1d743e3154 : Add guard elimination support for aten::unsqueeze. (#33371)
1af30451e5 : sync srcs between fbcode and ovrsource targets (#33368)
44af8ee6cd : Add pybind11 exception translator (#30588)
4c8064c9e1 : Fix avx-512 detection logic for jit fuser with MSVC 2019 (#33403)
abbf6e7f53 : fix clang-tidy lint (#33448)
4468a7b7b3 : Updating submodules
f938b3b4e0 : Remove TH binding of set_(Tensor). (#33358)
879cf0b15a : fix typing bug of LambdaLR.__init__ (#33271)
2c99ea8654 : Dirac init compatibility with group convolutions (#32825)
28c5213a97 : Add mechanism to pass a number of workers to cpp extensions (#33346)
cfb4862673 : [pytorch] correct input size check for GroupNorm (#33008)
dde2ff4608 : [Fuser] Add a knob for disabling/enabling CUDA fuser. (#33395)
a203dc2e6d : [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027)
4724964810 : [C++ API] Expose AnyValue and AnyModuleHolder classes (#33026)
5d7f42847c : Add at::Tensor::retain_grad API (#33349)
55fa133cdc : Remove gpu_kernel_with_index (#33370)
ebb008eb68 : Optimize Unfold3dAcc to improve performance of conv3d backward (#33317)
c90b393c00 : Fix logging for aborted communicators in ProcessGroupNCCL. (#33147)
1a589f50bd : [auto quant] Add quant_scheme_generator to interface with dper
87dc2dbcce : Updating submodules
c57f8984e6 : [caffe2] make order btw div and mul in adgrad consistent (#32974)
d29997373e : Updating submodules
d4e4beddc4 : Revert D19871946: [distributed] pass in timeout to TCP store when initializing
df47a3abe0 : [distributed] pass in timeout to TCP store when initializing (#33325)
c75d06d854 : Move gating part of SparseFeatureGating to local
f6808df75f : [BC] Temporarily fix the BC check (#33387)
495bd5818b : Fix index truncation in argmin/max for large tensors (#33310)
cd038c0ae9 : Get rid of some template arguments in GPU loop (#33308)
fd684cc312 : Use torch.set_default_dtype in test_data_parallel and rename dtype2prec (#32962)
6dd6b0bfae : Revert D19900566: [pytorch][PR] Simplify prim::shape when we have complete tensor types.
d35a4c202e : Add support for aten::slice to guard elimination. (#33311)
c37a9b874b : Updating submodules
1e76649d30 : fast setup for output tensor in tensor iterator (#33165)
c6271c63f2 : Updating submodules
e1a895858f : Allow to register custom passes both before and after fusion. (#33261)
3359871f5d : .circleci: Use volume mounts instead of docker cp (#33355)
dfafe2aad1 : .circleci: Swap PYTORCH_BUILD_VERSION if on tag (#33326)
5cab54e0db : Revert D19560159: [RPC Reliability] Implemented retries for RPCs with exponential backoff
0b5b2b864a : [BC-Breaking] Rename at::Tensor::base() to _base() (#33316)
9c0625b004 : [iOS] Add watchOS support (#33318)
ecd9a5ad12 : Simplify prim::shape when we have complete tensor types. (#33336)
9c8b67b179 : Revert D19905015: Revert D19858239: [pytorch][PR] Refactor and add VS 14.16 and 2019 CI for Windows
b730c5a3bd : remove dispatch key (#33266)
6ade7e3a15 : [ROCm] Enable 3D convolutions through ROCm (#33067)
9823662b43 : [ONNX] Export split with list of sizes (#33161)
e9e9331927 : Fractional Max Pooling: output ratios defined as double (#33304)
243cc20451 : Enable inplace relu fusion for training (#33105)
8245641091 : Re-activate binary_macos_libtorch_2_7_cpu_build and binary_macos_li… (#33321)
92b67c03e4 : [RPC Reliability] Implemented retries for RPCs with exponential backoff (#32602)
ae53f8dd25 : Revert D19859905: [pytorch][PR] Gradient scaling API
b276ddda38 : remove THC dist code which is never used (#33283)
4bef344210 : Implementation of mixture distributions (#22742)
7dde91b0ae : Vectorize elu and its backward function on CPU (#32986)
1b2d2ba504 : [PyTorch] Fix write-after-free (TSAN) in GraphTask::set_error() (#33156)
0c98939b7b : Revert D19899550: [pytorch][PR] Second try on Von Mises: Make it JIT compatible
ff5f38f53b : Revert D19858239: [pytorch][PR] Refactor and add VS 14.16 and 2019 CI for Windows
b1583ceb1e : Second try on Von Mises: Make it JIT compatible (#33177)
ecd3c252b4 : Suport all length one SLS op lowering: C2 part (#33332)
0150f40dde : dont force msvc /Ox flag which can conflict with /RTC1 in debug config (#33164)
602aec325d : Kill old cuda support (#33302)
e5218e3e12 : Add missing error messages for container modules (#29991)
92fbf7cf97 : [caffe2] use JIT'ed fp16 SLS (#32432)
642bd51043 : [ONNX] Skip problematic ONNX test to unblock CI (#33323)
e5c7b7b8b5 : Automatic update of fbcode/onnx to 04a29addfd5b912812addb8dea5f8763fbfaad01 (#33328)
93179b1c1c : [jit] Initial use RRef in TorchScript (#33190)
b2c5896432 : [jit] Add RRef to IValue and JIT type system (#32992)
9ae4d38a21 : [rpc] Switch RRef to be managed by intrusive_ptr (#33189)
cb4e6d025a : Updates numpy to tensor negative stride error message (#33254)
a80d0330e4 : add int4 fake fp16 mappings
eb9b4b1f29 : handle errors in ProcessGroupAgent::listenLoop(). (#32957)
7ae1e023e7 : glu: port cpu forward implementation to ATen (#26410)
0808485c6a : Workaround performance bug / memory leak in GOMP (#32875)
bbdc5b7bd0 : Optimize error checking in mvlgamma (#32665)
5b922918d0 : Disable flaky test TestCppExtensionAOT.test_cuda_extension in Windows CI (#33282)
0c93c2b142 : Add a warning sign for anomaly detection (#33176) (#33239)
6c6a814a2c : Beef up documentation on DispatchKey.h (#33011)
2e88d3d703 : [quant] Add Quantized BatchNorm2d module (#33109)
d0435604a5 : [quant] Add a quantized batch_norm operator (#33080)
b28a834813 : [codemod][lint][fbcode] Apply google-java-format
bf16688538 : [JIT] peephole optimize values with NoneType (#33264)
0c474d95d9 : Remove Half support in binary cross entropy and some activation functions on CPU (#33206)
946f3a9ed7 : Refactor and add VS 14.16 and 2019 CI for Windows (#33117)
2635055229 : [ROCm] Enable 3D batch norms through MIOpen (#33262)
acea368095 : Fix compilation error when buildng with FFMPEG (#27589)
40246fa63c : Gradient scaling API (#26512)
d613bd0522 : [rpc][easy] move unnecessary python call directly to pybind (#33174)
0bf60e348f : Revert D19878241: [pytorch][PR] Restore tests binary_macos_libtorch_2_7_cpu_build and binary_macos_li…
ff7d147732 : Restore tests binary_macos_libtorch_2_7_cpu_build and binary_macos_li… (#33291)
d554b112e3 : Add histogram collection and weight prepacking utils (#33125)
b98c7d34ed : [TVM] Add clip op to c2_frontend (#33257)
16685d93e9 : [TVM] Add ReplaceNaN op (#33256)
03e9b9ce18 : [PyTorch BC] Remove unnecessary items in whitelist (#33247)
e45343fa14 : TORCH_INTERNAL_ASSERT_DEBUG_ONLY not eating message string (#33251)
f61b45fc89 : [jit] Support properties on `Device` (#32953)
806e7daa1f : Rename TorchScript compiler to IR emitter to better reflect its function. (#33127)
91744907d4 : SGD: updated step and class design (#32592)
914610d079 : [pytorch][quant] Add assert for min, max, qmin, qmax for ChooseQuantizationParams (#32739)
bc0ab07064 : Optimize Unfold3d to improve performance of Conv3d (#33191)
0e753b2818 : Fix SIGABORT caused by double exception in PyTorchStreamReader when file not found. (#33243)
ac8511a21e : Updating submodules
f9ad5528e0 : Fix for rand_like as well. (#33095)
f045dab3dd : Remove ImplicitTensorToNum (#32761)
99349defc1 : remove unnecessary Node* ops (#32760)
72a00a8a9c : Remove Node dependencies from operator.h (#32682)
ab14375b08 : Workaround for CUDA10.2.89 CUDA extension compilation error (#33230)
40265e2d66 : prevent various warnings related to undef and redef (#33196)
323b0e0a0f : fix #30480 torch.normal shape checking is broken (#32243) (#33050)
2e9b7c5fe1 : Migrate dist from TH to ATen(CPU, CUDA) (#29714)
97bf41ca22 : Fix iOS x86_64 CI failure (#33194)
87640570b3 : Make CUDA OOM error a type (#33056)
a389f8fa18 : Revert D18912680: Prepare templates
3cfea39968 : Document how BCELoss avoids infinite results (#33160)
05281a5671 : Add nice error message if missing overrides in custom autograd.Function
09915ad570 : [TensorBoard] Correct typo and wrap dataformats. (#31604)
c6e0360812 : Minor change of docstring example of WeightedRandomSampler (#30846)
1767ae8daf : [caffe2] remove dnnlowp log code (#33184)
9d9fa2eace : [2/3] Bind Bucketize to PyTorch (#33014)
47e589eb6e : Disable flaky tests test_DistributedDataParallel and test_backend_group for ROCm (#33211)
5bc5dd58f3 : [jit] fix a typo
b9a5353fee : Move where cuda implementation to TensorIterator (#33228)
7863d2413d : Updating submodules
d609497dde : bulk_eval_collect_histograms
9e7638f7c1 : "batchSize" was set but never used (#32294)
66ee4f1c81 : [ROCm] Enable Bfloat16 type for activation and batch-norm
f255b7a3ac : Drop support of the build option USE_GLOO_IBVERBS (#33163)
1487137c5b : add missing default value for LRScheduler.step() (#32411)
139afd0ea7 : Fix link to py-spy content in contribution guide TOC (#31760)
74c8a8f7bc : Revert D19825127: [pytorch][PR] Move where cuda implementation to TensorIterator
000a5e2b7f : bad tbb lambda capture, bad chunk size (#30352)
a23009f98f : Quantized leaky relu
769abddfa3 : Build ahead-of-time C++ extensions with ninja on windows
acd51e13f7 : TorchScript add check if quantized
cb39a5400c : Use C10_WARP_SIZE to fix functionality on HIP vs CUDA for batch_norm_backward_reduce (#33098)
44723a1c24 : [ONNX] Fix ONNX CI (#33200)
af4d6120bd : Temporarily disable failing 'binary_macos_libtorch_2_7_cpu_build' and… (#33207)
04829e924a : Update CPU threading doc (#33083)
6706c3f457 : Prepare templates (#30982)
45818a3de4 : Remove some Half support in some binary CPU kernels (#33021)
7b50e76255 : optimize cat performance on CPU with TensorIterator (#30806)
ad90c97c0a : Removes flaky check (#33146)
a64d0ffe81 : Use int64 in pdist kernel to handle batches >= 46342 #30583 (#31593)
367488b001 : Move where cuda implementation to TensorIterator (#32984)
31370949be : Add zero_mask function for vectorized functions. (#32985)
855ee6446f : Revert D18749922: [pytorch] Migrating index_add cuda to ATen
857bae39e0 : Updated DispatchKeyExtractor to expect TensorOptions (#30981)
e7f0b15473 : Remove return value for __exit__ (#32997)
6c0dc66cb4 : [caffe2] use JIT'ed fp32 SLS (#33123)
3655975565 : Add allow_rebase_history flag and fix codegen functions for multiple views (#32790)
330d051bd5 : [pytorch] Migrating index_add cuda to ATen (#30573)
9857d9b4cd : fix gather regression by not materializing loop vars in the error mes… (#33108)
6f46962f21 : [1/3] Bind IndexHash to PyTorch (#33015)
61ac14a483 : Updating submodules
a3e69d3405 : Use bazelisk instead of specifying bazel version manually. (#33036)
524fe8a96c : Updating submodules
d672779339 : [CI][treehug] Disable xenial_py2.7 tests due to mypy min version py3.5
495c1df510 : [pytorch] convert code analyzer to a binary (#33102)
e8c4f5a74b : Temporarily disable failing iOS builds
3bde97d5a5 : Move a resize from codegen to code.
3c4cec56aa : Enable test_distributed for ROCm but only with nccl backend [REDUX] (#32551)
f4fbe9549d : Revert D19800021: [pytorch][PR] Improve error message for assertWarnsRegex
6be4ec100f : [pytorch] Elide more Thrift Tensor send copies. (#31998)
ebed008dd4 : Correct /MP usage in MSVC (#33120)
9d94f56ce0 : Backward operation of torch.eig for real eigenvalues (#33090)
c917a247a8 : Improve error message for assertWarnsRegex (#33099)
3e8d813263 : Add more checks to custom Function (#33069)
e1c53a5c86 : Fix version counter bump in cpp Function (#33068)
efba630287 : Issue a warning when zero_grad is used in DataParallel (#33064)
e2f1288514 : Add utils to inspect fp16/int8 packed weights (#32979)
6249d7302b : [ONNX] Fix export for avg_pool with default stride (#33017)
0e29e9e0f6 : Re-enable internal test runs
17d4ef9e9e : Support using scalar tensor for split (#32493)
7314f1c281 : [torch/multiprocessing] Update documentation indicating that start_method is ignored for mp.spawn() (#33070)
c6fa6d82ae : move Decompose before profiling to prevent clearing shape info
868db903ae : ONNX support for torch.take (#33061)
a9583c1f75 : Vectorize softplus and its backward function on CPU (#32944)
e7b42209eb : Added sparkspot model.
de27f4261d : [jit] remove redundant variables from JIT TestCase
d678093907 : [ONNX] Extend op registration to next opsets (#32943)
3b2f267ad8 : add to codeowner to get better inbox notification for PR
674dca0831 : Automatic update of fbcode/onnx to 8b3f7e2e7a0f2aba0e629e23d89f07c7fc0e6a5e (#33075)
e025f393f6 : windows template specialization bug (#33076)
05d18ffaf5 : Distributed Autograd: Allow multiple backward passes to accumulate gradients. (#32506)
f0d7bd41b9 : [jit] Minor: avoid recalculating some keys for map accesses in pickler. (#33060)
10db323b75 : Updating submodules
afa8cbf8c2 : Modified randNLike for scripting (#32830)
432858c960 : [ONNX] Fix exporting copy_ with index as tensor input (#32801)
ca33aeba09 : [JIT] Add Exit Transform / Convert To SSA to docs
b0476dc6e6 : Fix Typo
38820a7014 : [JIT] Resolve custom classes in source importer (#32977)
757cea92a4 : [c10] Allow taking a std::tuple as arg (#32948)
8195961f20 : Revert D19730209: [pytorch][PR] Issue a warning when using zero_grad in DataParallel
ec1e9a1ae2 : Revert D19417087: fix #30480 torch.normal shape checking is broken
e76fa9822d : [C2] Introduce extra_info force CPU tags for auto-generated iteration counter blobs (#32607)
3c17cbb6c8 : fix #30480 torch.normal shape checking is broken (#32243)
b00345a6f2 : Move normal distribution to Aten(CPU) (#32031)
46c3c18bcc : Issue a warning when using zero_grad in DataParallel (#32870)
6209412647 : Add option to use ninja to compile ahead-of-time cpp_extensions (#32495)
e54d954572 : [ONNX] Add flag to enable script tests (#32654)
1b746b95fb : Consider hub_dir alongside TORCH_HOME env variable for storing hub models (#32844)
74ce3a032c : Fix some bugs with zipfile serialization (#32244)
ab75d64e6e : Add ability to abort NCCL communicators from the store. (#32895)
df1d68d52e : [jit] fix parser for one-line functions (#32941)
908b451efb : Enabling the nccl/rccl test for ROCM environment (#32340)
e8581869f2 : Properly update _flat_weights in RNN models (#32989)
72b9412be2 : Move some broadcasting logic away from codegen. (#32982)
fbde3c05b6 : [aten] fix vector memory leak (#32478)
81a9046301 : Fix dispatch of argmax/argmin. (#32961)
3531f99384 : Kill _th_max, _th_min overloads that aren't used.
16c166e2ea : Add XLAPreAutograd key for XLA use cases that need custom autograd. (#32788)
6b0813ea5d : Stop using dispatchTypeId to do checks for tensor list unwrap. (#32787)
1b446aa2ee : Expose Channel Last 3d enum
836b4c9e64 : Attempt to workaround MSVC17 static constexpr bug
f393adc0ed : [JIT] Fix python pickle serialization for torchbind (#32878)
23a4800708 : [JIT] Make IRParser use op schema (#32854)
bc4790b3aa : [JIT] Trace uses of torchbind classes as module attributes (#32833)
d141465713 : Fix torch::allclose to handle std::numeric_limits<T>::lowest() for integral types (#32978)
e4f633ba0b : Updating submodules
4502d8c391 : Interpolate Float [] support in ONNX (#32554)
bda874b480 : [rpc] throw correct Exception on local client based on the RemoteException (#32936)
a9141dd240 : Patch `Half.h` for compiling CUDA with clang (#29027)
7ea6559658 : Add size checks to `torch.stack` (#32931)
58e8d5588a : [ONNX] Export bitwise_not for bool (logical_not) (#28439)
4f5908d5d7 : Remove unneded TORCH_API (#32015)
6305e4a88f : Add warning and example for seeding to DistributedSampler (#32951)
b0d5ce3848 : Revert D19710990: [pytorch][PR] properly update _flat_weights in RNN modules
27e1fecabd : let user specify CUDA_HOST_COMPILER
d3a0bdd06b : proofreading (#29797)
ea968f5cc3 : fix possible pandas import error during tensorboard tests (#29650)
478356aeec : Fix broken links in governance.rst
18d1896ba0 : Fix confusing "does not have GPU support" warning message (#30721)
67706187fb : Fix a broken link in contribution_guide.rst
b69c685c4a : try to find cudnn header in /usr/include/cuda (#31755)
e999095594 : Updating submodules
d3fa68eeec : Fix for MKL detection script on Windows (#32970)
e922826dda : [pytorch] simplify lazy initialization of DefaultCPUGenerator singleton (#32897)
aa3c871739 : Adds TestViewOps, updates documentation (#32512)
341fb6d11d : Make caffe2/caffe2/python/models/seq2seq python3 compatible
9e7c47644f : [NHWC CUDNN CONV]Update cudnn convolution memory_format behavior (#32482)
ec2c974bd5 : Simplify some TH codegen by moving code out of the switch and killing dead code. (#32888)
820410b505 : Added upsample_nearest2d op for lite interpreter. (#32913)
b894dc06de : [Pytorch] Propagate errors in clearAndWaitForOutstandingRpcsAsync. (#32952)
b4b1b100bd : Add a loop test for onnxified net (#32935)
df71b3e23a : properly update _flat_weights in RNN modules (#32939)
3cac9900ca : Clarify when softplus is reverted to linear. (#32945)
544eab37d0 : Move deprecation warning out of generated code into python_arg_parser. (#32907)
612e621da0 : Improve CHECK_OP macro (#29539)
5ca7bf453d : Tests for verifying behaviour of BatchNorm using 0-dim batch sizes. (#32384)
9c2ed2574a : Vectorized memory access in TensorIterator GPU loop for 1d contiguous case (#32383)
4baadd54d7 : add SpatialBN lowered fake fp16
5c019fede3 : [ONNX] Fix for constant folding flaky tests (#32546)
a751ddaaa5 : Use leaky singletons for torch.distributed. (#32923)
6996f8d880 : Add missing `default_collate` in dataloader.pyi
1c42b9466b : [ONNX] Update support of exporting bool type index mask (#32445)
e03e4f3a2d : [ONNX] Add einsum export (#32716)
167a892e99 : Add missing `shuffle` attribute to DistributedSampler typing file
48eff08256 : Fix the level of headers in pytorch/CONTRIBUTING.md (#28412)
14c15eb3b0 : Py2 -> py3 for caffe2/caffe2/contrib/tensorboard (#32882)
00c6b90327 : Fix in documentation of convolutional modules (#30079)
37953d92d1 : raise when jit-load.ing a folder (#27836)
3fa907c145 : [docs] Fix argument type of torch.masked_select (#30385)
10183061eb : [ONNX] Update ONNX landing page since 1.3 (#32805)
ef50161ec9 : [JIT] Update OVERVIEW.md
7cddc302e5 : min, max: check that operand and outputs are on the same device type (#32862)
b34e0dda24 : Emit the C++ version when compiling pytorch from source. (#32819)
c841ab403c : add missing method annotations to torch.Tensor (#30576)
e085c55e53 : Fix `\\` warnings/errors when building optim documentation (#32911)
7101f6b5c0 : Properly handle NaN in binary max and min (#32541)
e87887ccb4 : Update type hints for torch.optim.optimizer.Optimizer (#32900)
29e6f13cd1 : Enable MKL on MacOS if installed (#32905)
f8dd65f2a1 : Updating submodules
ff0ba563d5 : Updating submodules
71ad88199a : Clarify the searched string is displayed in the error message
b564eaf7a8 : Bug fixes: torch::tensor(floating-point values) -> default dtype, and torch::tensor(integer values) ->at::kLong (#32367)
4cc6e6bbbe : Adding scalar to the c10 registration type check
ce07fb26c0 : Updating submodules
c83f984906 : Updating submodules
040bc1d0e1 : [JIT] make is_scripting a condvalue (#32871)
4d7ab255d3 : [PyTorch][TorchScript] Add support for join on List of strings in TorchScript (#32847)
144eb59756 : [rpc] don't crash callee when function does not exist on it, instead return Exception (#32726)
a8d39a7937 : Updating submodules
4493b10500 : [PyTorch] Gate out mobile operator logging observer.
10bd21d550 : [JIT] fix nested select assign (#32877)
ad78c0f4fc : Fixed the flaky test_rref_context_debug_info (#32749)
d03c9aaa05 : Fix upsampling test case on ppc (#32786)
fe01376ffe : [JIT] namedtuple constants (#32873)
fbe121e395 : Quantized sigmoid function
7b65acdf9e : Solves Issue #32750 - torch.prod now works fine with FP16 Input Tensor and FP32 Output Tensor (#32831)
8ddd5bb0e9 : Don't serialize None values in observer (#32733)
1760d5b83c : Remove wrap_dim from codegen layer. (#32738)
660a93c558 : Code cleaning: Some iterating variables in builtin_functions.cpp can be const (#32852)
ada966b7d7 : [pytorch] avoid `thread_local std::vector<Call>` for mobile build (#32849)
d9e99ab544 : Loops.cuh legacy code cleanup -- gpu_kernel_with_index (#32777)
fd3bd7777d : Updating submodules
b16dab8a41 : Coding header is better specified in lowercase letters (#32850)
22466552e3 : Updating submodules
ed10408cc6 : Updating submodules
03557a9838 : Make save_for_lite_interpreter private (#32771)
c3b4bfcfed : Add knobs to set the number of profiling runs and bailout depth (#32735)
12bcfa7c77 : Remove Python dependency (toPyTuple/fromPyTuple, jitCompilationUnit, deserialize) in rref_impl.h/cpp (#32753)
29fabb1fbc : make tests for empty inputs check zero parameter grads (#32820)
bc2e05a398 : Update Docs for building PyTorch for Android.
fcf9fcedf4 : Remove needs_dynamic_casting from TensorIterator and move it to Loops.cuh (#32755)
0f0972051a : Cudnn bn size fix (#32763)
bcb7c22679 : [PyTorch BC] Fix the ci (#32843)
5380e16db9 : Updating submodules
765904f1b9 : [torch] fd error check
94ddc2c462 : Resubmit more code fakefp16 mapping unification (#32798)
690d41f24e : Centralize addition of "always on" dispatch keys. (#32734)
5ddd2cd92b : Make DispatchKeyGuards accept DispatchKey::Undefined (#32729)
3d0a470d89 : Rename DispatchKey::UndefinedTensorId to Undefined (#32728)
a40a19ccab : Remove GIL from RRefContext (#32807)
413c0f6c29 : Fixes moving after weight norm application (#32563)
9bab617b3e : Make python version a parameterizable option for Windows CI.
cc35c876cb : Fix backcompat for linear_relu_dynamic_fp16 (#32803)
fa65859270 : Re-enable non-deterministic autograd tests
85bd3e5bdb : Removing @expectedFailureXLA from test_nll_loss_empty_tensor_reduction_mean (#32701)
6874278985 : Revert D19611800: [PyTorch][TorchScript] Add support for join on List of strings in TorchScript
b0923acb29 : Reduce RPC branches for Python/BuiltinOp/TorchScript (#32689)
affd598c1f : Fix/simplify alias annotation handling in op codegen. (#32574)
fb159b5236 : Some work on eager op binding codegen (gen_python_functions.py) (#29986)
821b6aa769 : [pytorch] Minor: avoid acquiring GIL twice in PyRRef::localValue() (#32785)
c2d736cefb : Add support for Dynamic LSTM quantization on Mobile (#32757)
55c382e62b : Fixed access to element in size tensor for scripting (#32652)
8ead65a946 : [PyTorch][TorchScript] Add support for join on List of strings in TorchScript
cccf5e7011 : Resolve rendezvous race condition
3552be1090 : [jit] fix the NoneType param/buffer hack (#32745)
2e359ef86d : enable empty batch for all flavor of convolutions (#32709)
a840afbeb4 : [pytorch][embeddingbag_8bit] Add include_last_offset option to Fused 8bit EmbeddingBag and parallelize the op (#32683)
b565d9b356 : Logspace fixes (#32744)
fc2ff7912f : [quantization] Remove incorrect fp16 dynamic linear/relu op
9357b91180 : Remove -Werror from test/cpp_extensions/setup.py (#32704)
8b187e8f2a : Fix ivalue_inl.h:353:29: warning: comparison of unsigned expression >= 0 is always true (#32778)
c47c78d0bf : Revert D19597036: More code fakefp16 mapping unification
3ee6673e99 : Refreshing numel on a stride update is pointless. (#32116)
8c6f52ac24 : Delete resize_dim() (#32114)
b371eab8c7 : Expunge last two sites of resize_dim (#32112)
c7df28a2a3 : Delete copy/move constructors on these RAII guards. (#32727)
5ffa1efa52 : Add missing C10_API to dispatch key TLS setter/getters (#32557)
3b47922855 : Improve documentation in dispatcher; remove unnecessary optional (#32533)
8cb05e72c6 : Port BCELoss to ATen to increase accuracy (#31365)
50d82f5122 : Make VC++ version a parametrizable option for Windows CI. (#32043)
e84f9d9d0c : Fix TensorProtosDBInput AttributeError (#32274)
8693164acb : Randomize xla port (#32718)
b5d8982ae2 : clean up GIL usage (#32748)
eab99ab08e : [android] fbjni DoNotStrip annotation for oss native methods (#32567)
2471ddc96c : Improved speed of Frobenius norm for non-complex dtype (#30871)
b1c85dd916 : Custom RNG DispatchKey (#32325)
642c9ef922 : More code fakefp16 mapping unification
d119de8abd : Deduplication of type casting codes (#32730)
cbb744f00f : apply linter to rpc test files (#32659)
8bc889e502 : Fix crash of SobolEngine if default tensor type is cuda (#32496)
c7bf4d22fe : added exception args to the returned error message (#32693)
c35ca84eee : Get rid of some unused THGenerate*Type defines. (#32657)
594cadeb8f : Make sure temporary vectors are properly initialized in avx2 code (#32722)
5e2311033e : fix windows build (#32762)
fd850685da : Updating submodules
62d652f922 : replaces .at with [] in getSlot (#32677)
c729614997 : [JIT] Improve May Contain Alias Using Contained Elements (#32326)
25d33a2ee8 : [JIT] Use Type Level Granularity in Alias Analysis Wildcards (#32251)
02f055ffd9 : Add mapping for FbFCPacked in fakefp16 transform
18aab32959 : Move exponential_ from TH to Aten (CPU) (#32501)
1f78bd0774 : [caffe2] Early error throwing for corrupted embeddings
6f7d5bb3e1 : Temporarily disable the test_quantized_rnn test (#32742)
43d31ae4c3 : Added ONNX model checker to ONNX export (#32298)
99228086a6 : Added missing period in README.
e74e1ccc47 : Use direct vector indexing in Object::getSlot() instead of at(). (#31627)
ee60cd9124 : Back out "fix view listing in autograd codegen" (#32720)
2060e0a9dd : Split serialization tests to their own file (#32241)
0327e75e14 : Back out "[caffe2] use JIT'ed fp32 SLS" (#32711)
ffdcbadeaa : Minor refactoring to improve code reuse (#32675)
9de3208449 : [rpc][flaky-tests] fix for test_handle_send_exceptions and (#32656)
6e7e595c1d : [rpc][easy] remove redundant test in rpc_test.py (#32588)
0ea65d63cf : [JIT] Fix stateful lambda stuff and simplify code in custom C++ binding API
465ebd58ba : [JIT] pickle serialization for custom bound classes
34ccfba403 : [JIT] Include custom_class.h in torch/script.h
06c19263d3 : [JIT] Serialize attributes and types in ClassType serialization
1719da13f9 : [JIT] Support for registering C++ lambdas as methods on custom C++ class
da390914bd : .circleci: Add workflows for Python 3.8 (#31948)
0dc38be407 : consider FAIL_GUARD while counting indices for GUARDs (#32672)
c64dec1993 : Python binding to export bytecode format for lite interpreter (#32621)
e24ce0e524 : Kill some more unused code in function_wrapper.py
9a2691f2fc : Fix spelling errors
63170431f9 : [jit] fix segfault on missing getstate (#32642)
8e4161517e : div_kernel: throw when dividing by integer zero (#32629)
b3848c568e : Fix flaky test_nccl_timeout. (#32653)
d68592a440 : [JIT] Fix classes as attributes in recursive scripting
b9f764b1c7 : Use the C++ current RpcAgent pointer to eliminate the unnecessary argument passing from Python world (#32635)
666e5430f8 : Clean up mvlgamma doc (including a weird way to link to reference) (#32667)
db8ce7ea2d : Back out "Make autogen functions correct for multiple outputs and views" (#32681)
5c8535d5b0 : Make C++ RpcAgent::currentRPCAgent_ the source of truth of current RPC Agent (#32633)
1217c9b364 : Updating submodules
1695915371 : Make _wait_all_workers() support being called for multiple times (#32624)
39987de9e4 : [vulkan][caffe2] Add logging for descriptor extensions, fp16 storage
812b1ad869 : [quantization] FP16 dynamic quantized Linear
389b9c180b : Updating submodules
57519bd829 : Revert "Fix iterator for ncclCommWatchdog. (#32571)" (#32649)
897b6908d4 : Kill THIntegerTensor, THDenseTensor, THDenseIndexTensor. (#32599)
f6c46df856 : Adding native qconcat
f0917dce7f : Revert D19562258: [pytorch][PR] Fixes moving after weight norm application
64323ae177 : Back out "Use simd version for fp16 conversions" (#32640)
e36cbb8f2f : Fixes moving after weight norm application (#32563)
5ac2593d4f : [ROCm] Adjust elementwise_kernel settings on ROCm (#32609)
ca9dc67094 : 0-dim batch size input for interpolate. (#32400)
602394e996 : verify input sizes for instance norm and group norm (#29082)
19bb496a0d : Enable mkldnn on windows (#31355)
957a07ffbd : [ROCm] Enable Caffe2 video operators for ROCm
5b321a0985 : [rpc] make handling of FORWARD_AUTOGRAD_REQ in request_callback_impl (#32476)
1e5aead35b : Make cuda search process of cpp extension quiet (#32620)
8fbe1ccd16 : faster bailout tests (#32266)
12d5933969 : Bug fix of norm minimization for dev mode (#31462)
90a259e1e2 : Add warning regarding pickle insecurity on torch.load documentation (#32593)
3bbb36e02d : Update linspace types (#32218)
5fd037ce44 : Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547)
320d1a1573 : Fix wrong typing (torch/nn/parameter.pyi) (#32617)
69283388ca : [pytorch] codegen flags to whitelist op registrations / generate to separate files (#32451)
0afe195046 : [pytorch] move type_derived_methods out of anonymous namespace (#32275)
bd20274e8f : [caffe2] use JIT'ed fp32 SLS (#32413)
6ad9e5c70d : Support TorchScript call over remote API (RRef) (#32466)
e0ffe72649 : [aten] fix shadowing variable warning (#32573)
169541871a : Add operator support for dynamic quant on mobile (#32479)
59dbece371 : Fix iterator for ncclCommWatchdog. (#32571)
1218a16aae : [pytorch][refactor] Explicitly use auto* for pointers (#32548)
e7edc5f20e : [jit] Cloning constants in ClassType (#32371)
666472a38d : [docs] Change fut.wait() to torch.jit._wait(fut) in jit overview docs (#32336)
6412ca3ce9 : duplicate symbols with AT_PARALLEL_OPENMP=0 (#32568)
91f10a1de1 : [quant][graphmode][refactor] Better API for fold_convbn (#32380)
52f8f031ac : add diag into pt operator microbenchmark (#32597)
9e0ce72e9e : [pytorch] change op dependency output to use double-quoted strings (#32464)
2bfd33b4ab : [refactor] Adding FoldConvBatchNorm2dHelper (#32374)
573a30270c : [pytorch] Minor: boilerplate to propagate errors in request_callback_impl (#32556)
3ab30753e9 : Make autogen functions correct for multiple outputs and views (#31990)
9e59244b53 : fix view listing in autograd codegen (#32044)
d2bda53f6d : [quant][graphmode] Call _jit_pass_dedup_module_ueses in quantize_script (#32303)
fe3eb09da5 : [quant] Re-enable fold_convbn in quantize_script (#32302)
fd1a4f18ee : [pytorch] update code analyzer build.sh to handle srcs with same name (#32525)
ef5637f85e : [jit] allow compilation using optional modules (#32539)
7d0f0b62de : API for testing bailouts (#32518)
f0c85571ed : docker: Refactor Dockerfile process for official images (#32515)
8fd3eaed25 : [jit] Fix dict type serialization (#32569)
3ada2e0d64 : [pytorch][embeddingbag] Parallelize the EmbeddingBag operator (#4049)
b474c351dd : [rpc] Remove template on RRef and add Type to RRef creation (#30630)
ef2d4e67d1 : Updating submodules
6f146e1768 : [JIT] Remove capsule type handling of node hashing (#32540)
d2f66083c5 : porting gather to ATen using TensorIterator with multithreading support. (#32425)
4cd6b5cda6 : [quant] Re-enable test_nested that has different qconfig for shared ClassType (#32206)
6745bfc31c : Revert "Remove __torch__ from custom class qualname" (#32514)
8ed1dd528e : [JIT] Add torch.classes.load_library
69f9bf8893 : [JIT] Support returning tuple from custom bound C++ method
ae42e232ce : [JIT] Fix custom class method binding for const methods
7e14c420ae : [JIT] Test __getstate__ and __setstate__ for custom bound C++ classes
dbd29e5668 : [JIT] Passing custom class as arg (#32260)
ad4fba0ce4 : Only run test_conv_large and test_conv_transposed_large_cuda on 32GB device (#32473)
49cd83d735 : no more build_pytorch_libs.sh/.bat (#32319)
d234626267 : [quant][graphmode] Support quantizing shared ClassType with different qconfigs (#32205)
ef94496b36 : [JIT] throw if no self arg on ignored methods (#32503)
db02a4e4ce : Support 3D attention mask in MultiheadAttention. (#31996)
b6b8620871 : Add unit test on export_opnames with interface. (#31531)
9af5a97b1d : Fix nll_loss to support empty tensors on GPU (#31491)
583bb97618 : [quant][graphmode] Default to non-inplace in graph mode quantization API (#32204)
ea7bebb7fe : [PyTorch BC] Clean up the whitelist for PyTorch Op BC check (#32523)
02aa3ba331 : Raise error for code that risk deadlock (#32295)
21d475e20d : [gloo] Skip registry warning (#31126)
f050b16dd9 : Move pytorch distributed tests to separate folder for contbuild. (#30445)
e735395fc6 : [caffe2] use 2-stage EmbeddingSpMDM interface (#32271)
685f090ac8 : [Rowwise Pruning][c2 op] Add Quantile Op (#32448)
4bdfc71421 : Fix race condition for to() backward that spans devices (#31930)
193ac31441 : [jit] Enable IValue to hold a PyObject (#32491)
556c0b063d : Updating submodules
14e0bec9f2 : [caffe2] remove unnecessary np.set_printoptions and fix test errors (#32475)
faffd2141a : Corrected logical boolean expression (#32249)
43eb931c0f : Remove mis-exposed abort API on ProcessGroup
b7c6277c53 : Adding QConfigTypePtrMap (#32203)
38d122eca9 : implement tuple constants (#31841)
69492ad6ac : remove tuple logic in constant propagation (#31840)
b01d824a78 : improve mayContainAlias (#31839)
adf0916606 : Add str[] float[] constants resubmit
e184a8843c : Fix comparisions for ConcreteModuleType (#32256)
8e689378c7 : Move some of the helper functions for public use (#32202)
510a122d27 : add missing align_corners annotation (#32492)
1c017f0c14 : Migrate max and min (binary) from TH to ATen. (#30851)
b77c25dec0 : Fix dll load logic for Python 3.8 on Windows (#32215)
c342c354a9 : Put sparse all reduce results to input tensors (#32226)
e37a24b044 : Always return a new tensor from nn.functional.pad (#32350)
8abaa322da : fix torch.eq() doc entry (#32399)
248f6d0485 : Implement backend fallback fallthrough (#32439)
0d610b4821 : Remove the support of build options like NO_*, WITH_* (#32447)
44b270d892 : `insert_quant_dequant` pass support shared class types (#31408)
60b6c99aa7 : Updating submodules
64de93d8e7 : Move log_normal to Aten(CPU) (#31854)
4973695268 : Updating submodules
7fdc6cb74e : Fix test_data_parallel name errors and add to run_test.py (#32428)
0b606a4a7c : Enhance DispatchStub to be thread safe from a TSAN point of view. (#32148)
be6ffac1b6 : Adagrad optimizer - updated step function, added param_groups, state to optimizers
0ed04bfdf6 : Updating submodules
e1d97025ee : QNNPACK: Add support for dynamic quantization.
bc6005281b : Updating submodules
9e853e7090 : Revert "Temporary workaround for BC test due to schema parser changes" (#32441)
f86d6c6afd : Enhance NCCL watchdog to actively abort communicators for timed out ops. (#32338)
ec4be4e58c : Redundant condition (#32396)
839fe714de : Fix BC test after TorchBind changes (#32429)
e4f43bf7a5 : Set rpath for JNI library on Mac (#32247)
9482683065 : Remove dead includes in caffe2/test
c13df8b688 : Fix cusparse version check (#32405)
9ce25cce91 : add an option to record time spent waiting for GIL (#30842)
1177191c8e : Synchronize with ShipIt.
cc2d5b15ad : F.normalize uses clamp_min_ inplace (#32360)
0c03304bdf : .circleci: Only run macos libtorch on master (#32378)
a2641e6005 : Make type of `Tensor.type()` more specific (#32353)
418ebc827b : Build: Respect USE_CUDNN=0, even if cudnn is found (#32404)
ecbf6f99e6 : Removed unused weight update in prepack. Moved zero point update to (#32254)
b543e3cd6f : support empty batch in group normalization (#32401)
7fbfb7eef2 : Updating submodules
58234c0254 : support torch script call over rpc (#32197)
1ecad2bb2b : Test passing custom class instance to bound method
c7078a1ce8 : Fix returning instance of custom class from method
c7fdf5b251 : Remove __torch__ from custom class qualname
ceffdbd217 : Temporary workaround for BC test due to schema parser changes
61ee8c972f : porting scatter_add to ATen (CPU) (#31662)
53429680d5 : Remove stray `@script` (#32235)
8c40a78277 : Back out "Calling JITed 8 Bit Fused SLS in FBGEMM from C2" (#32381)
25e62ebac9 : Updating submodules
10c2bd35af : Fix cudnn channels_last descriptors problem (#31952)
824e649d40 : Specify requires_grad for Parameter replica so it's not always set to True by default (#32356)
0ac31a99be : run code analysis against mobile interpreter (#32276)
5bc44fb6ea : TensorIterator unrolling and vectorized load - step 0, 1 (#31974)
f326045b37 : Fix typos, via a Levenshtein-type corrector (#31523)
c8ca70e39d : Updating submodules
7e3c438913 : Renaming IValue List functions (#32093)
bdd5e15437 : skip testExceptions in ProcessGroupGloo if built with TSAN (#32242)
5a58c16722 : Updating submodules
9b6ec61bfd : exposing CPU/GPU Copy ops (#32248)
e7bc1663bd : fix unchecked cast alias analysis (#32309)
df514fd8c0 : C++ C2/Glow operator unittest
e133d8be3b : Fix ASAN / potential segfault in quantized Tensor memory allocations.
4e69352713 : Add 64bit atomic fetch add (#32354)
aa61d1ee85 : Add a new job to support custom build (#32323)
7732924501 : Delete unused bernoulli_Tensor from THTensorRandom.h
8c1268aad3 : Use default scale/zero_point in fake_quantize module instead of None (#32318)
5b815d980e : Added cummin
78d8f691ad : Don't dispatch to integral types in smooth_l1_kernel
6a5a55d573 : use gtest asserts in ProcessGroupGlooTest instead of other checks (#32138)
4968bc2450 : cap the maximum depth of bailout chains at 1 (#32073)
61a2b34113 : Updating submodules
904ab092c2 : fix testSend and testRecv in ProcessGroupGlooTest (#32134)
7a9c920bac : add lock for ncclCommAbort (#31901)
91bdb872ce : fix spelling mistake: excpected -> expected
ef5ae4823a : Register RoIAlignRotated with C10
b79030d6c8 : remove unused code after refactoring optimizations into profiling-sensitive and profiling-insensitive (#32106)
c2761490fc : Enhancing the test (#32321)
53708e21ed : classic fixed-point liveness
8c8bd79f32 : Add CI scripts for Custom Build (#32316)
34c751c263 : Eliminate exception throwing code from dispatch call sites (#32168)
b85dbe8f7b : Out-of-line construction of OperatorName. (#32121)
36d09197ab : Move error reporting code out-of-line from header. (#32118)
7b7390778c : Make an assert on a hotpath trigger only in DEBUG mode. (#32117)
8746f90cf6 : Fix weight backward for cudnn conv of large tensor (#31889)
b26ee54176 : For ppc64le, stop presenting the python 2.7 builds (we will no longer… (#32315)
cd99b3706a : Pin Pillow to latest and use a torchvision that works with it (#32290)
f94aab45fd : Logical condition reduction (#32201)
14548c2d5b : out variant for native_batch_norm forward (#29192)
bab87e4b60 : reimplement __torch_function__ overrides for torch.functional using inline logic (#32194)
7df5dc2775 : Creating callUnboxedWithDispatchKey method (#32198)
d75b6b3f9d : Support shape inference and lowering of SparseLengthsWeightedSumFused4BitRowwise (#32257)
f3b62d4b1c : Updating submodules
851a7e861b : Add CAFFE2_API to video decoding functions (#31187)
89c6e18c43 : Updating submodules
90c65b81c3 : Define `repr()` on IValues (#32232)
104b2c610b : Tensor prep from image in native (#31426)
de5821d291 : Torchscript print to logcat (#31456)
31b7d0873c : Add File existence checking (#32208)
8b4c695e47 : Added const folding for ONNX mul, div, sqrt ops (#32077)
ffc8e255c4 : Sort export w/ negative axes (#31971)
4460a86cd6 : Support op registration if name starts with underscore (_) (#32017)
01010f5705 : Add comments to torch::nn::ConvTranspose{1,2,3}d modules explaining how to use them in a Sequential module (#32223)
a5161c7022 : Update out-of-date comment on Docker image updates. (#32224)
322f34b245 : Adding DDP Design Note
74621ca926 : Add allgather_base as per our discussion re: ProcessGroup interface. (#31892)
81048c41ab : remove simple .data from torch/nn
3363ca20a7 : example_outputs Doc Edit (#31826)
3d01e3d16f : Notify other threads before running callbacks (#31713)
0392e8384b : Fix simple typo: whos -> whose (#31288)
4314620ba0 : [jit] Module clone work with shared ClassType (#31970)
62b06b9fae : Rename TensorTypeId to DispatchKey (#32154)
8c3ee9f2ba : [Python] Deprecate use of scipy.misc.logsumexp and scipy.misc.comb (#32209)
05088da8e9 : [pytorch][PR] Fixed error in sample code of documentation (#31682)
ef0f96e92f : [pytorch][PR] update comment in autograd.h for locking (#32222)
19bbb4fccb : Stop building documentation in pytorch_linux_xenial_cuda*_build (#32187)
4dce482acb : dict type unification fix (#32185)
c70bb0a4f8 : Fixes to prim ops (#32179)
879620e85e : [caffe2] fix how np.clip is used in lengths_reducer_fused_{4,8}_rowwise_ops_test (#32086)
7ad03855dc : Fix 'template' keyword warning with clang-cl and clang.exe (#32104)
02f09a1bbd : Implement backend-agnostic rpc._wait_all_workers() utility (#32190)
7572501d40 : move ProcessGroupGlooTest to gtest (#32133)
8dc67a014f : Add cummax
02c3493a84 : Fix an invalid peephole transformation if input/output values are written to (#28455)
2bd179147a : Fix typo in config script to re-enable libtorch build and test in macOS CI (#32072)
f6f1e0aef5 : Automatic update of fbcode/onnx to 65020daafa9183c769938b4512ce543fd5740f8f (#32125)
f3b67bf750 : Fix frontend kwarg defaults error (#32146)
ecc3497172 : Update Gemfile (#32147)
9bf0479b65 : Fix the passing-by-ref constructor of OperatorName. (#32170)
51a34545e9 : Revert D18482934: support torch script call over rpc
4a26bb9b18 : Suppress pip logs (#31912)
2bb9dbeffa : omit constexpr with nvcc on clang (#32149)
b0ac425dc4 : Emit warning from deprecated torch function signatures (#32009)
61e509b992 : Skip un-runnable tests (#31965)
0664c6bbfd : Add ccls cache to gitignore (#31437)
b783a75aa3 : Fix scalar^tensor derivative for scalars that are zero
fa60e1150d : Fix tensor^tensor derivative for 0 base entries
1487582ba7 : Switch important CI from CUDA 9 to 10.1 (#31951)
dbd737158b : support torch script call over rpc (#30063)
5f1a881cb8 : Add private user tensor type IDs for experimentation. (#31830)
8d472bab6b : Make torch.backends.mkldnn usable without import
77c78b7d28 : remove .data from torch/nn doc
c036fbdc5c : remove .data from torch/jit
26621d101f : remove simple .data from torch/nn
62b1a5f846 : Updating submodules
a472f0201f : Added support for Dim operation in ONNX export (#31928)
c474952b5d : Updating submodules
470c496eb2 : use cholesky_inverse to compute precision matrix (#32092)
f003008d6e : Allow TCPStore to pick a port to bind to. (#31674)
632d6fc583 : Revert D19373615: Fix typo in config script to re-enable libtorch build and test in macOS CI
701ca68882 : Docs entry for the `is_quantized`
d53ce5e4cd : Updating submodules
d97413eb7a : Change python/cpp docs CI to use a CPU-only image (#32102)
1f34801460 : More robust mangling (#31978)
a3dd44653f : Fix typo in config script to re-enable libtorch build and test in macOS CI (#32072)
5988d36f58 : Fix cumprod error for tensors with zero elements (#32070)
695c4f1bab : Fix a typo in function name: liner -> linear
8e93159fb6 : CUDA 8 cleanup (#32013)
9a4219eb39 : Install complete set of headers for ROCm build (#32076)
4002fec509 : Display NVCC version in CI for convenience to look at
e74a215ade : Changed clip_grad_norm_ total_norm calculation (#32020)
77c2c78e01 : Fix typographical error in torch.triu docstring (#32067)
14593f077f : remove list specialization from ivalue (#30734)
46f32e136a : Revert "Support PyTorch ROCm CI on Ubuntu18.04 (#31886)" (#31946)
927c2a02b0 : enable autograd profiler to work with RPC and RRef. (#31381)
20e5c90d82 : accept url query when rank or world_size is specified (#32016)
b6cee03e29 : C++ tensor indexing: add Slice / TensorIndex (#30424)
638e4ad8b9 : Updated function definition for torch.mode and torch.median in torch docs (#32003)
28c1258f18 : Scale init for batch-norm and layer-norm (#31983)
c5af0afdcb : catch exceptions in ProcessGroupAgent::enqueueSend and report them. (#31023)
346005d3ed : integrate op dependency analysis process into CI
16b8ca56b6 : update docker image version (#31848)
03ff3eb94d : skip TEST_DILL on Python2 (#32027)
ab5eb65e74 : gate torch_global_deps with BUILD_SHARED_LIBS flag (#32011)
f995ec2076 : Remove qconfig_dict in top level eager mode quantization API (#31972)
c5a362a96d : Updating submodules
8098ae455c : Move rshift to Aten (#31594)
a201027e93 : Abstract atomic add calls (#31992)
c6f41ae01b : Fix and add more padding mode support for Conv (#31784)
b6f43afaca : Fix tensordot allowing negative dims (#31954)
8ea49e7a08 : add missing braces for format in rpc _to_worker_info (#31969)
4e84661139 : update llvmlite to 0.30.0 (#31858)
62f93443e5 : Explain RPC behavior when using Tensor as arg or return value
6abfa9ad8a : Quantized H Tangent function (#31031)
021e1e20c1 : Revert D19320493: Javadoc changes
700d1c5cbc : update CI script to take string docker image version (#31857)
67ff051ddd : Remove temporary fix for torchbind in BC check (#31982)
2968faf154 : Update doc about output_differentiability keyword in derivatives.yaml
67c1d930eb : Lock graph_task before writing leaf_streams. (#31995)
1296e2d55e : C++ API parity: isinf (#31099)
cfdfdf70d7 : remove JSON dumping dependency (#30724)
bc68a8745f : Spelling fix in transformer docs
26f552a3d1 : Javadoc changes (#31956)
e59e5ba5a3 : Move geometric to Aten(CPU) (#31878)
99b3f9cac4 : Move log_sigmoid to Aten(CPU) (#30958)
5a76335aaa : Move lshift to Aten (#31566)
5c423cae72 : Add precision tests for CUDA half linspace+logspace (#31962)
5d5f156558 : Revert D18903453: Quantized H Tangent function
ddff4efa26 : Don't use RTLD_GLOBAL to load _C. (#31162)
8614860210 : Uniformly apply Windows logic in cpp_extensions everywhere (#31161)
0dbd5c0bfe : Added torchvision tests as part of ORT tests (#31835)
6d9a9e379d : Fix segfault in caffe2 slice test (#31801)
9e9ca6ec37 : add conversion functions to embedding tables (#31083)
eb23171bce : TensorIterator norm update (#31903)
8ecd3f783d : check for object equality in constant pooling (#31800)
319cc21108 : Add AliasDb API For Changing Aliasing (#31501)
5cc49ed45f : Document `IValue` (#31904)
883fb5434a : Use real argument names for Python functions (#29300)
09a22f3301 : Remove C++ docs contributing page (#31908)
8c59d48281 : Add doc previewing instructions (#31905)
dedd16b418 : remove THConv code which never be used (#31879)
9a3cb1e859 : Move cauchy to Aten(CPU) (#31824)
9ba6a768de : Add op bitwise_or (#31559)
4f9d2f74e2 : Port softplus activation to Aten(CPU+CUDA) (#30504)
d2fdf140af : Combine all the user inputs together and convert them to fp16 (#31898)
8b4feff01d : Use simd version for fp16 conversions (#31897)
1314f7f4f4 : Ensure the original grad_mode is restored during backward (#31884)
c299cb05ef : temporary fix for jit test backward compatibility issues
462bfc7fe7 : docker hub image info (#31923)
5dfcfeebb8 : Revert D19298735: Emit warning from deprecated torch function signatures
620060cb0c : Quantized H Tangent function (#31031)
54777b1e73 : Avoid reference invalidation in cuda SpectralOps' plan_caches (#31861)
7f723cbd8a : Revert D19290954: Implement backend-agnostic rpc._wait_all_workers() utility
c66ca74f03 : Add device debug info to CUDA build (#31929)
f0072b3af5 : Remove C++11 compatibility from c10::optional (#30919)
f67851d69a : Fix c10::util::get_fully_qualified_type_name for MSVC (#31313)
2a294aace6 : Remove memory ordering from LeftRight (#31026)
84dfa96f62 : Fix -Wundef warning in conversions.h
ee817012b2 : Add more tests to the autograd wrt view and inplace (#31147)
6664703842 : Implement backend-agnostic rpc._wait_all_workers() utility (#31888)
9116f02beb : Rename TORCH_DCHECK to TORCH_INTERNAL_ASSERT_DEBUG_ONLY (#31917)
ab60cca488 : Make c10::util::get_fully_qualified_type_name() backwards compatible with clang 4 (#31351)
0dca9c30ca : constexpr typeid improvements (#31312)
c21f89970f : Remove c++14-conditional constexpr (#30916)
4daa3dedbe : Fix IValue.isList
1b4d3d5748 : Properly return data from non-contiguous tensors in Java
2d6a2c898c : Support tensors with a storage offset in Java (#31584)
6d1fa8296b : Support tensors with empty shape in Java
3c07eb33bb : Better error for `torch::jit::load`ing a eager file (#31709)
a730920a3d : Make RRef leak detection always print a warning log (#31922)
227d1a43a4 : Revert D18838848: disable __torch_function__ overrides for operators in torch.functional
8a0503b355 : Run a non-quiet submodule update to prevent timeouts on Circle CI (#31900)
114562cf93 : For torch::from_blob() add clue when memory is non-owned. (#31222)
ca72df06ae : disable __torch_function__ overrides for operators in torch.functional (#30839)
bb279c5c63 : named tensor max pooling support
3a2757c682 : Fix tracing for modules with List[Tensor] as output (#31343)
74d69e296e : Raise an error if torch.cat is given `out` as one of the input tensors (#30577)
c888473b57 : Restructure docs organization and naming (#31849)
bf8e1c0710 : Integrate async mode for autograd engine with distributed autograd. (#31508)
0e5a6700cc : Emit warning from deprecated torch function signatures (#31514)
5cc62f2913 : Ensure autograd callbacks are called only once for reentrant backward. (#31909)
4ee9c56218 : Support PyTorch ROCm CI on Ubuntu18.04 (#31886)
2f5eefe525 : Raise ValueError if CUDA device is specified without specifying the : (#29087)
3c7db5ccbc : Don't unconditionally compile runJITCPPTests (#31236)
809ee9d04c : Enable personalized FC weight_init and sparse_emb weight_init (#31707)
22044c6f7c : Use TORCH_CHECK instead of AT_ASSERT in torch::cuda::gather() (#27456)
20c5dd59bd : Add stub for transformer.py and MultiheadAttention Class. (#28396)
346a349111 : Update all instances of 1.4.0 -> 1.5.0 (#31785)
985fd970aa : Enable BFloat16 support for Convolutions on ROCm (#30948)
a561a8448b : minor doc tweak to use mp.spawn in example (#30381)
34561dadcd : Don't handle bias inside cudnn_convolution* (#31524)
5d80f63478 : no_grad, enable_grad: support for decorating generator functions (#31792)
58cffbff91 : Add missing TORCH_CUDA_API annotation to throw_nccl_error (#31157)
4ef9daf7b2 : Remove dead CAFFE2_LIBS variable (#31155)
a9dae70bae : Remove LibIRC logic from cmake. (#31152)
112196fdee : Fix index put (#31552)
78cba90a8c : Enable constant folding for Reshape (#31054)
492ca46e71 : Fix androidTest - exclude host tests from it
c65305e991 : Add a check method for custom type tensor (#31290)
1f2b6d632a : Refactor tests in pytorch's test/dist_autograd_test.py file (#31803)
ddff014b79 : fixed scale_factor calculation for uint8 tensor (#31778)
1ba1799a66 : C++ added 3rd arg of false to BatchNorm/InstanceNorm register_parameter … (#31873)
33430cf094 : Revert D18643137: Implement backend-agnostic rpc._wait_all_workers() utility
fde94e7556 : Provide async mode for local autograd engine. (#31230)
3f0b330736 : corrected keyword argument name in docs for Tensor.scatter (#31617)
9020d30fc9 : Updating submodules
502533cfe6 : Implement backend-agnostic rpc._wait_all_workers() utility (#30710)
f362cd510d : Move prim ops from JIT registration to C10 (#30612)
5579611544 : Enable foldbn tests (#29220)
ebe69236d1 : Expose class constant through `attr` and `setattr` in object (#29219)
6f62c311a1 : Add unsafeRemoveConstant for ClassType (#30787)
2bac76969c : Fix getConstant (#31012)
8420f205ee : Remove refs from ArrayRef arguments (#31845)
b0a2765103 : move docker image html to correct bucket (#31832)
5fe3604987 : Preserve constant from ConcreteModuleType to ClassType (#29218)
e5b7231edc : Adding version check for hypothesis deadline
28c9dd4436 : fix ProcessGroupGlooTest (#31255)
27488773b0 : Updating submodules
c829c6f3d2 : Disable flaky test_debug_info
6b1db202bc : Add tanh to c10::cuda::compat (#31844)
9407137102 : Update the descriptive error message for enforce fail (#31575)
40e720282c : Using _floats_wrapper in per_channel_tensor generation (#31780)
86a4e2135d : Do not register `const float *` type on utiliy_ops.cu (#31583)
457c57d9f7 : use unordered_set instead of vector for futureTimeouts key in (#31813)
b44c0f328e : Skip same tests in ONNX Python3 CI as in Python2 (#31827)
79e30ff3f8 : optimize index_select performance on CPU with TensorIterator (#30598)
0ae063d5d9 : Fixed concatenation benchmark + added it to the microbenchmarking runs
9c9d3cd550 : Revert D19262570: Fix race condition when creating build dir
a02a5129a8 : Move rrelu to Aten(CPU) (#31094)
b47e9b97a2 : Add op bitwise_and (#31104)
68f3782106 : remove std_single and var_single code in TH (#31608)
0b9cd410a9 : Fix cumsum error for tensors with zero elements (#31694)
daf00beaba : Remove duplicated Numa detection code. (#30628)
8c425dd201 : Fix race condition when creating build dir (#30956)
f56c59ead6 : clarify when to use `as_tuple` in `torch.nonzero`
95cb66570a : Erase array sizes from types in c10::str(). (#31683)
f39105b68f : add num_pending_users to debug info (#31539)
5be8dac329 : Remove non-ascii character from torch/onnx/symbolic_opset11.py
fc598f9023 : generate op dependency graph as python code
fa0424f224 : add LLVM-dev package to android docker image (#31215)
dc43f9dc54 : fix test_backward_node_failure flakiness (#31588)
155376721c : Pin hypothesis package to 4.57.1 to avoid test failures
5f8308e32d : Pin Pillow to v6 as PILLOW_VERSION is removed in v7
feb0ccdbfd : Updating submodules
ed5cd0d742 : Use numeric limits to define TensorTypeSet(FULL) representation (#31668)
d770fbc1d2 : Some modifications to improve readability (#31352)
7078f4b27d : skip _test_optional_float in BC check (#31786)
37fc59e847 : Updating submodules
9e9bfbfd8d : Update old scheduler example usage (#31358)
c4f10e0fe7 : Renaming scales parameter for interpolate (#31526)
236b0a318c : Delete ATen/stub (#31763)
cb1af5f61f : Revert D19233558: add float[] str[] constants
7a3ed36309 : Fix nvcc math functions for MSVC 2019 (#31704)
1499b894c4 : Apply clang-format to csrc/distributed/rpc
b102550d2c : Allow to pass in masks through db (#31676)
39297bfe08 : Fix flaky test_debug_info. (#31675)
f4e955ff62 : Change PackSegments to ensure consistent behavior between CPU and GPU
dd0f2f0c19 : add float[] str[] constants (#31503)
6064223808 : `@slowTest` some slow tests (#31706)
ee87b01f40 : add additional types to indexing operations dispatch (#31692)
22d84204f7 : Expose torch.poisson in documentation (#31667)
3b7916fccd : Modify the order of arguments position of torch.std and torch.std_mean in doc (#31677)
e8e47c0a1b : Split RRef class into abstract RRef and RRefBase (#28942)
90a187618e : Integrate masked sparse Adagrad (#31641)
ae214f67a5 : updated code to ensure error check for negative dims
647569e546 : get rid of choco install (#30897)
35bee0c729 : separate op for rowwise counter (#31612)
e84e7ec556 : Kill aten_custom_call.
b522a8e1ff : Optimize zero length input (#31602)
204939b401 : Automatic update of fbcode/onnx to 57ebc587fcf3913b4be93653b0dd58c686447298 (#31642)
ffcac9ad37 : Clean White List for BC Checks (#31629)
4983ef8de1 : Integrating MaskedAdagrad
909b8eba0d : cudnn grouped convolution nhwc patch (#31444)
39508501a4 : Create byte-aware word lstm benchmark (#31260)
91eb7c26cd : Fix Typos
34dce8e348 : Updating submodules
ec4e347744 : Add Python language reference docs (#30686)
5d95a9ca79 : Print all broken ops instead of the first one (#31628)
cf46bcace8 : Updating submodules
866c1b1fcc : Ensure legacy sparse constructor/new doesn't interpret python data as tensor data. (#31490)
e2951d586d : Updating submodules
29f345831e : Error out if legacy Tensor.new is called on alternate layouts / dtypes (#31485)
a54dc87e8e : revert D18805532 and make numerics of masked adagrad consistent with unmasked adagrad (#30784)
363d8be787 : Bypass _TorchScriptTesting_StackString::pop in BC check now (#31586)
46ad80c839 : Fix null pointer dereference on Android for strtod_c (#31582)
446e9af5b9 : Fix parsing of big float literals (#29940)
218cfd568d : Conv transpose/backward split 32bit (#31510)
fb63c0e2c9 : Remove -Wno-unused-private-field
68e5172382 : Support optional float parameters (float?, optional<double>). (#31517)
9459db86bf : Raise warning for schedulers following chainable schedulers (#31125)
fe76af96ed : fix test_process_group_debug_info flaky test (#31533)
cc2d5ca37f : add enabled API to autograd profiler (#31380)
7d630278da : Separate torchbind from Python (#30242)
700109eb63 : set stream every time we get a cuDNN handle (#31541)
b5bbec7bad : set stream every time we get a cuSparse handle (#31538)
8d8e82883e : set stream every time we get a cuBlas handle (#31537)
0b0f90f53c : Split on batch dimension when 32bit indexing not enough for convolution forward (#31379)
3820d6f6b9 : make gc script python2 compatible (#31536)
c808eed04a : Nightly dimension, input shape in gradle (#30195)
3a19980b78 : Tensor class created from java does not call native methods
11854bcd38 : Add test to torch.jit.export_opnames, make the _C function private
81329c907d : Updating submodules
35b249769d : Exclude lite interpreter Java files from OSS host build
08de70cad1 : Remove observers in the end (#31407)
b4c48b7e29 : Call `getQSchemeAndQParamMap` later in `quantizeTensors` (#31406)
df9d5b8a77 : Use macros instead of directly accessing Python object fields (#31388)
5375ceae80 : run optimizations on pre-profiled graph (#31392)
256db1e61b : Add fake parsing for torchbind classes in schema type parser
7a12ccd003 : optimize FloatToFused8BitRowwiseQuantized and Fused8BitRowwiseQuantizedToFloat (#31470)
0b57b383b1 : Im2col export (#30972)
6cd987e7c0 : Make fully_qualified_type_name_impl() compatible with VS2017 15.9 (#31455)
2099cfa13d : Fix input_channels divisibility check in concat_split_op (#31448)
b38901aa15 : Test reading `__cuda_array_interface__` inferred strides. (#31451)
d0d6e0b5e3 : add type promotion support for sparse tensors (#30429)
e9ef087d2d : Updating submodules
4c341582ea : modify model to enable loading by blob (#31507)
06dbef663d : Add support for `del` (#31273)
624088e444 : Don't dispatch to cudnn if it is not possible to make it 32bit by splitting batch dim (#31383)
87768e5ade : Updating submodules
457286a383 : fix missing type check in dictionary literal
348d42114e : Kill MessageType::SHUTDOWN related logic in pg agent (#31270)
57caeb3fc1 : Fix builtins table (#31492)
226c2d79ce : Get QScheme from observer module (#31293)
dbe2f265d0 : Better error msg for autograd profiler + multi-worker dataloader crash (#31473)
e67064a96f : Exclude generated source docs from Google (#31484)
8f3c0d541e : Speed up `Tensor::has_names` for unnamed tensors (#31436)
9d9bc93bfb : Added error message to indicate that reduction operations are not supported for dim>=64 (#31476)
779b128872 : add back in reference to jit_unsupported section (#31486)
49fe7a7401 : Updated documentation for NLLLoss to explain what x, y and w refer to (#31488)
d6acc87c93 : Guard against copying from quantized Tensor to non-quantized Tensor (#29660)
c4121ed8db : Fix is_fundamental template for MSVC (#30959)
6d6a91fb0f : Updating submodules
28376e826d : Fix lint
540b9da41e : Bump numba version in circleCI config to 0.46.0. (#31435)
fc3103b116 : fixing a naming issue in creating a residual loop node in a bailout graph (#31400)
1e116a5089 : Revert D19054937: Add support for `del`
489dd6cb90 : Add TORCH_DCHECK macro that checks only in debug builds (#31240)
fb24f7c4ad : catch all exceptions in converting default values to ivalues (#31398)
1bb6c51421 : Fix getAttribute (#31011)
dff7b945bf : Avoid sending large unneeded data over wire in process_group_agent. (#31357)
1bb800cf5c : Updating submodules
fe707c7849 : Use `default_observer` and `default_weight_observer` in tests (#31424)
e1509cb468 : Add support for `del` (#31273)
e7d25a3e4d : add a suggested alternative to _get_trace_graph
d2e66b44cc : Temporary fix to support building pytorch from fbsource (for xplat dependencies) (#31393)
a3cdb7eca3 : Fix default instantation of dynamic quantized LSTM
1e80ff7a67 : autograd/profiler: make record_function more threadsafe (#31346)
148bcd3ee5 : Add support for builtins as attributes (#31269)
503a4e9019 : Cleanup after moving language reference (#31146)
ae2487bf4d : Move TorchScript language reference to its own page (#31138)
d08250c223 : fix zero-batch handling in convtranspose (#24341)
7692494c67 : Fix hex literal parsing (#29935)
1f50cfc24d : Throw a better error for int too big for int64_t
fb30a48b4e : add unsupported section (#31329)
5e8bac24b4 : Migrate soft_margin_loss from the TH to Aten (CUDA+CPU) (#28135)
7cf8b9bada : Move leaky_relu to Aten(CPU, CUDA) (#29899)
b0bd35ff13 : caffe2/event: allow multiple errors such as when cancelled (#31335)
4d22c3ba01 : fix docker login, add docker image tag list after purge as html (#31328)
47766e648f : C++ API parity: MultiheadAttention
c63f8e5ebe : Fix typo in data.rst docs
285cc13435 : check devices for all input tensors in index_put (#31280)
913323750d : CODEOWNERS for distributed optimizer. (#31403)
359c39b3c2 : Use global lock instead of per instance lock. (#31404)
386cd59d44 : Remove redundant queries of qconfig in `insertObservers` (#31292)
58d2dd5b73 : Enabled flip for bool tensors (#31267)
3e59e80429 : Revert D18941024: Move TorchScript language reference to its own page
3694749cd1 : Detect dill version in torch.save/load (#30985)
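Detecting a dependency's version, as in the dill entry above, typically reduces to comparing version tuples; a minimal sketch under assumed names (the `0.3.1` threshold here is illustrative, not taken from the commit):

```python
def parse_version(v: str) -> tuple:
    """Turn '0.3.1' into (0, 3, 1); enough for simple ordered comparisons."""
    return tuple(int(part) for part in v.split("."))

def supports_protocol(module_version: str, minimum: str = "0.3.1") -> bool:
    """True if the installed version meets the minimum (hypothetical check)."""
    return parse_version(module_version) >= parse_version(minimum)
```

Tuple comparison handles multi-digit components correctly, e.g. `"0.10.0"` compares greater than `"0.3.1"` where a plain string comparison would not.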
74e59c6fed : caffe2::TypeInfo fix when using clang-cl on Windows (#31364)
c05538b831 : Move TorchScript language reference to its own page (#31138)
3c8892aa0c : avoid doing quadratic work in concrete type inference (#31020)
878b0e35f7 : Simplify recursive script compilation flow. (#31019)
82d52bc718 : remove remnants of properties hack (#31018)
7e81d72d12 : remove unnecessary arg from create_script_module (#31017)
e5631119f6 : use expect instead of casting in register_c10_ops (#31401)
4ec2448580 : Update OVERVIEW.md (#31373)
e0ab255a51 : Updates to serialization.md (#31372)
e169e02836 : Refactor custom op tests (#31282)
c5d2758c35 : Disable flaky TestMomentumSGD.test_fp16momentum_sgd (#31369)
e3fecabdcb : Setup operator registration for distributed package (#31214)
e33dea6e4e : dynamically quantized lstm benchmarking
f0243ea712 : Use [[deprecated]] instead of C10_DEPRECATED (#30918)
d9c3913dfc : move BatchPermutationOp to caffe2/operators
0b8332efb4 : Remove c++11 examples from doc comments (#30925)
5554e5b793 : Docs: c++11 -> c++14 (#30530)
cc8d6342fc : make profiling take no_grad flags into account (#31071)
dab5f72543 : we should have a config-based way to skip flaky tests (#30978)
d2067569e7 : Kill THTensor_(bhistc). (#31254)
49eff2f43c : Kill THSize. (#31218)
52b8a52e4d : move AliasWithNameOp to caffe2/operators
0e548a76eb : Upgrade exported ONNX IR version to 6 (#31025)
10ce1765be : Introducing ScalarTypeType and LayoutType (#31074)
f9010d7648 : remove wipe cache from op bench (#31334)
229ce89b92 : Fix coverage and hypothesis conflict (#31320)
c5d3be1102 : Remove the second copy on calling dist_autograd_context._known_worker_ids() (#31206)
643ca5def2 : Replace c10::guts::stuff with std::stuff (#30915)
c6a8f884d8 : add copy_ operator to the op bench (#31327)
d401ba1417 : benchmark binary ops in binary_test (#31326)
455e85a2f1 : Fix unflatten when dim is a negative integer (#31208)
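Handling a negative `dim`, as in the unflatten fix above, usually means normalizing it against the number of dimensions before use; a small sketch of that convention (pure Python, names hypothetical):

```python
def normalize_dim(dim: int, ndim: int) -> int:
    """Map a possibly-negative dimension index into the range [0, ndim)."""
    if not -ndim <= dim < ndim:
        raise IndexError(
            f"Dimension out of range (expected to be in "
            f"[{-ndim}, {ndim - 1}], but got {dim})"
        )
    return dim + ndim if dim < 0 else dim
```

So for a 4-d shape, `dim=-1` resolves to 3 and `dim=2` passes through unchanged.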
9ca61aec0f : Kill THLogAdd (#31217)
409151e1bb : Use [[noreturn]] instead of C10_NORETURN or CAFFE_NORETURN (#30917)
c95d46abbd : Remove C++11 compatibility from c10::util::crc64_t (#30920)
0d7391f8b2 : Test cases for custom ops with autograd (#31003)
930d0751e6 : Java Tensor hybrid, owns at::Tensor, no memcopy for java outputs. (#30501)
60ec53c7fd : Fix copy kernel speed regression introduced in #29631 (#31279)
9dc3d8738c : fix view call on discontiguous tensor in to_sparse_backward (#31223)
0e50c1b0d9 : Replace assert with cuda assert macro (#31297)
ec92711aac : Fix error message in incorrect rref.localValue() call (#31199)
ffe0c1ae4d : Make test_torch.py pass cuda-memcheck (#29243)
701e05dcbb : Buck test targets robolectric, instrumentation
57ee7dab87 : Wraps assert statements in cuda kernels (#31276)
58eb15f41c : JIT Type parser for mobile (#30391)
065685180d : Loading module from android asset (#30378)
70013415c7 : DDP should not set grad for globally unused params (#28883)
7cb83bea3b : Fix static cuda builds on older cmake versions (#30935)
7c1b5084a7 : Enable equality operator for bfloat16 CPU scalar types. (#30817)
2950530031 : caffe2::TypeMeta uses compile time type names (#26619)
6e1e09fd10 : Compile time type names (#26618)
c35cddb306 : Switch default memory format of clone operator to Preserve
fde3d707ad : Switch default memory format of to (and similar) operators to Preserve
927588df8e : Switch default memory format of _like operators to Preserve
1ec989404c : Kill some unnecessary function declarations.
d7d07e7caf : thrust is included in SortingKthValue.cu but never used
cd3f05b44d : Small fixes for hipification (#31200)
9954739956 : Refactor test for unique and unique_consecutive and fix some bugs (#31211)
3587f769dc : use propagate_names instead of propagate_names_for_reduction for cumsum and cumprod
a9ad98fb25 : Remove unused argument "destId" in addSendRpcBackward (#31207)
8fea7a49d6 : pinning hypothesis for windows
b64baa963f : Robustify rpc_agent handlers with generic Future<T> (#31224)
36d17f4105 : abort nccl communicators before throwing operation timed out (#31128)
1ef99cf0ab : Intrusive_ptr implementation slower than shared_ptr (#30810)
f7c92f60ba : Fix typo in filename to align with class name
db90a5b992 : Switch to open sourced fbjni (#30175)
199e1fb348 : Use AVX2 to increase frequency for FP16<->FP32 Caffe2 ops (#31203)
ca8cb3241a : Expose setNumThreads to android api (#31205)
b7c148013f : fix torch square_ benchmark runtime error (#31221)
f30b14dead : Fix handling of type comments in body (#30590)
20a2e526ef : build a generic future<T> (#29579)
c08f2ea254 : Updating submodules
5ef0d6f854 : Remove subgraphNode kind assert in unmergeSubgraph (#31212)
a2463cbc38 : Adding quantized clamp kernel (#30541)
1d5af9599d : Update ONNX Flatten to accept negative indices in opset 11 (#30751)
84d6796658 : move AWS ECR gc jobs to circleci (#30996)
5c936845cf : fix torch_train build (#30497)
a38184dbab : Only create OwnerRRefs when processing remote calls (#31163)
f6c31f61c5 : Enabled roll for bool tensor (#31194)
bee6344d4e : remove / rewrite weak module tests (#31193)
066e3ed953 : Re-apply "[bert/RoBERTa] Optimize LayerNorm with explicit vectorization using Vec256" (#31127)
66f2bba852 : Adding function to convert Module to channels last
4ead2e8996 : Fix CircleCI behavior for non-leaf stack PRs (#31088)
bcb0bb7e0e : Remove unnecessary ATen/core/EnableNamedTensor.h (#31117)
9047d4df45 : Remove all remaining usages of BUILD_NAMEDTENSOR (#31116)
c0bcfd0445 : Revert D18923167: Expose setNumThreads to android api
56de8853da : Resubmit overload v2 (#31123)
3a02ed822b : Remove `insert_prepack_unpack` and `fold_prepack` for now (#30909)
159835e666 : Add types for the remaining optimizers. (#31130)
2488231fe3 : Tweak pollTimedOutRPCs thread synchronization (#30355)
0db6c01301 : Re-enable python 2 builds (#31164)
4f5a4be45f : Add native/quantized to the list of header rewrites (#31151)
6ab2d1b1a4 : Partially support tensor lists in loop/concat/stack (#30126)
a3ed350eb2 : Change type of timeoutFutures_ key to time_point instead of duration (#31078)
49a5841a9f : Make Conv{1,2,3}dOptions and ConvTranspose{1,2,3}dOptions different classes (#31005)
85107e72b4 : Fix type unification With Specialized Tensor Shapes (#31076)
97c1e90f46 : ONNX Interpolate Add Scales Params (#28324)
79c27ba4ef : Add ONNX Export Support to floor_divide (#31081)
d81c6bde3b : Updating submodules
efe683fb2a : dynamically quantized linear benchmarking
73f9e81660 : Make rref fetch calls async. (#31086)
679b20b1e4 : Unify list elements for all list types (#30777)
0414463007 : doc fix for max method: a warning about different behaviour on CPU and GPU (#31115)
e5a550cd1d : Fix Test CI by pinning hypothesis and correcting the import (#31137)
945ce71b18 : Correctly handle scalar types, fix parse of numpy ints (#30486)
293a139d79 : add a warning for script classes (#31069)
6225443009 : Expose setNumThreads to android api (#31033)
06d874f95b : Change startTime_ to endTime_ in FutureInfo (#30342)
7a8261e962 : Updating submodules
4b2d356ac1 : Re-enable test_rref_context_debug_info after enforcing proper synchronization (#30994)
5b03ff0a09 : Update embedding renorm comment to reference fixed issue (#29140)
dbc8b00816 : Document WorkerInfo and RpcBackendOptions structures in RPC docs. (#31077)
4a751dfc20 : optimize MulGradient for common shapes (#19705)
a53b39f09d : Disable flaky test_process_group_debug_info
44ecc3a70b : Add tracing support for optional Device and Layout (#30979)
672f4cfad9 : Added C++ API test (#30980)
1f87e823b8 : Make `nn.Transformer` TorchScript compatible (#28561)
a929d312ac : Add dill>=0.3.1 as testing dependency (#31121)
3593981976 : Updating submodules
717274c001 : Add useful warnings for t.grad when it won't be populated for known reasons (#30531)
3301794855 : Port ELU activation to Aten (#29275)
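The ELU activation ported above has a simple closed form; a scalar reference version of the standard definition (the ported kernel is vectorized C++, this is only the math):

```python
import math

def elu(x: float, alpha: float = 1.0) -> float:
    """ELU(x) = x if x > 0, else alpha * (exp(x) - 1)."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```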
4aa30d3c0c : Revert D18293522: Optimize LayerNorm with explicit vectorization using Vec256
9305f44854 : Remove BUILD_NAMEDTENSOR from codegen and .cu files (#31047)
65f6e449c7 : Updating submodules
d6d6075573 : Optimize LayerNorm with explicit vectorization using Vec256 (#29104)
28ee309c9a : disable onnx py3 gcc5 build (#31100)
8013ffd400 : Fix weight_norm export for dim=0 (#31015)
9a5fd2eb07 : Fix conflicts in CMAKE_GENERATOR and generator (#30971)
7f5f2e8871 : add ZERO_COLLISION_HASH to caffe2 data type (#30912)
c72dd526a7 : kill py2 onnx builds
9f3fe78239 : peephole optimize type refinements (#31024)
d02280b432 : move migration guide to appendix (#31068)
d088bd0bad : Updating submodules
e7e6d56b77 : Allow async work in rpc RequestCallback processing. (#30637)
e42af97349 : Add quantized concat conversion (#30887)
3de8584de8 : Correct definition of nodes that work with Autograd (#30683)
b7652a2f81 : remove py2 flake8 lint (#29357)
d113b22571 : kill PyTorch py2 circle jobs (#29353)
5edfe9cb80 : add torch.square (#30719)
e3d40f857b : Make nn.Module `forward()` type annotation more permissive (#31057)
8fd85d70be : Updating submodules
ed20937231 : Remove TensorImpl::maybe_zero_dim.
0cbbe050bb : Updating submodules
cc319659e3 : qnnpack TanH
b01b05790e : Fix memory leak due to circular dependency. (#31030)
57f29a44c7 : Bug fix of the histogram observers (#30970)
27d7dba9ab : Remove scalar_check specification and codegen. (#30874)
47033b49f3 : Suppress XCode build warnings (#31000)
2da3b9a0f6 : Updating submodules
78a00d72b4 : Revert D18899127: resubmit polish up overloads on free functions
394d2f7037 : Fix the rendering of the doc of max. (#30779)
313c211f3f : Calling JITed 8 Bit Fused SLS in FBGEMM from C2 (#30926)
bb7befb12c : Support loading by blob in predictor
a42d093db2 : FCTransposed to FbFCPacked (#29766)
c34ef1aa2e : Automatic update of fbcode/onnx to c08a7b76cf7c1555ae37186f12be4d62b2c39b3b (#30619)
06c7420fa2 : Raise error if a block can not be found from a CUDA tensor (#30870)
af4040d808 : resubmit polish up overloads on free functions (#31014)
e05ee4c421 : Remove BUILD_NAMEDTENSOR macros (#30894)
f48a8901c5 : Add floor_divide function (#30493)
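Floor division, as named in the entry above, rounds the quotient toward negative infinity; whether a kernel floors or truncates for negative operands is exactly the distinction such a function has to pin down. A pure-Python sketch of the floor semantics (matching Python's `//`, not a claim about the kernel's implementation):

```python
import math

def floor_div(a: float, b: float) -> float:
    """Round the quotient toward negative infinity, like Python's //."""
    return math.floor(a / b)
```

Note the difference from truncation: `floor_div(-7, 2)` is -4, while truncating `-7 / 2` toward zero gives -3.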
44428d0ee2 : Updating submodules
42324cb6e8 : Change interface from map of TensorShape to shapeInfoMap (#30802)
5205556782 : Export custom ops (#29752)
04b9324476 : Factor out getInvokedMethod in `InsertQuantDeQuantHelper` (#30860)
fa6661422f : Disable flaky test_rref_context_debug_info
73dd8c005a : Revert D18864774: polish up overloads on free functions
446488960a : polish up overloads on free functions (#30356)
a03581b927 : add tests that schemas are valid (#30749)
e9ca13d7f5 : Add glue code to collect debug info from all components
8a57362000 : Fix index out of bound error in Engine::ready_queue_size when called before start_threads
a38c9b1ade : Adding debugging metrics to process group agent
82268bf300 : handle reassignment to inf and nan (#30877)
3eefc06feb : add constant prop for immutable types (#30544)
648bb501a1 : rename shouldAnnotate api (#30543)
45f0556ba0 : Proper print for one element tuple (#30853)
5bf58274cc : getQParams return a dictionary of qparams (#30859)
fb36f1c334 : Updating submodules
536481d9de : Fix missing virtual destructor (#30927)
528fa737ba : Custom op autograd tests (#30519)
daef363b15 : Move Softshrink activation to Aten(CPU+CUDA) (#30229)
4f342a61c1 : add the worker IDs outside of addSendRpcBackward to ensure they are (#30914)
c75bc9067c : MultiMarginCriterion: move scalar_check from codegen to code.
190dac13e3 : Use universal references and perfect forwarding in Loops.h. (#30466)
6848f9abb8 : call fp16<->fp32 routines in fbgemm from Half2Float and Float2Half operators (#30715)
776fdda753 : Add debug info API for distributed autograd. (#30642)
0b33080992 : Updating submodules
4bb497b38e : MultiheadAttention fixes
8b6d7698d6 : Updating submodules
f1bd8cc286 : Fix lint issues in dist_autograd_test.py (#30928)
63f1b780ba : Support exporting aten::copy_ and aten::index_put to ONNX opset 11 (#26941)
a26238da57 : Enable using `torch.autograd.profiler.record_function` as decorator (#30861)
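Making a context manager also usable as a decorator, as the `record_function` entry above describes, is commonly done in Python via `contextlib.ContextDecorator`; a self-contained sketch with hypothetical names (not the profiler's actual implementation):

```python
import time
from contextlib import ContextDecorator

class record_scope(ContextDecorator):
    """Hypothetical profiling scope usable as a `with` block or a decorator."""
    events = []  # (name, elapsed seconds) pairs, for illustration only

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        record_scope.events.append((self.name, time.perf_counter() - self.start))
        return False  # do not swallow exceptions

@record_scope("my_fn")
def my_fn(x):
    return x * 2
```

Each call to `my_fn` now enters and exits the scope automatically, recording one timing event.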
5c56986738 : Attach autograd edges only for tensors requiring grad. (#30904)
62b10721fb : Actually make flake8 do something (#30892)
8d35b6cec7 : embedding_bag make_bag_size optimization (#30701)
cd6167ff63 : Upgrade bazel to 1.2.0. (#30885)
7b97eaeba5 : Add module level qpl logging. (#30906)
118f1c633b : refactor the way we are handling bailout counts
c37de32b23 : Enable len(dataloader) for iterable dataset (#23587)
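Supporting `len()` on a loader over an iterable dataset, as above, amounts to forwarding the dataset's own length and rounding up by batch size; a minimal sketch with hypothetical names:

```python
class IterableDatasetLoader:
    """Sketch of a batching loader over an iterable dataset."""

    def __init__(self, dataset, batch_size=1):
        self.dataset = dataset
        self.batch_size = batch_size

    def __iter__(self):
        batch = []
        for item in self.dataset:
            batch.append(item)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch:  # final partial batch
            yield batch

    def __len__(self):
        # Raises TypeError if the underlying dataset is unsized;
        # otherwise returns the batch count, rounding up.
        n = len(self.dataset)
        return -(-n // self.batch_size)
```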
a77eafa1d8 : Fix 'initialized after field' error (#30908)
baccd26df7 : update code analyzer script to handle splitted torch libraries (#30864)
223f46f5fa : Fix flake8 warning (#30905)
4fd20c0816 : Kill hypothesis deadline testing (#30890)
26c51468c5 : Fix examples in RRef API doc
642469b706 : Fix examples in API doc
5e6c3fb23b : Add more details to explain rpc_backend_options arg in init_rpc
6d06b925ba : Remove `values_to_quantize_` (#30858)
81e4739141 : Move QScheme ops to c10 (#30134)
d6ddfab11f : save linux build binary size to Scuba (#30832)
78254eab45 : Add mobile operator observer for qpl logging.
44ff7b08d8 : Reduce intrusive_ptr incref/decref costs (#30709)
e123d90a93 : Back out "Back out "Back out "Revert D18542342: Boxed variable dispatch""" (#30650)
37435d36ed : Refactor VariableTypeManual (#30649)
b0e7db5b31 : Revert D18840736: make sure windows tests get triggered
4ed2eae2d0 : Add registerQParams function (#30552)
0051467118 : Update CITATION from Workshop paper to Conference paper (#30872)
377131b0eb : MultiMarginCriterion: fix scalar_check in the case where reduction == None. (#30826)
5687ee1d85 : added a serialize function in SGD class to utilize the existing macro for serialization/deserialization calls
e5d571ae25 : Remove scalar_check from topk, move it to the THC implementation.
60714dfb64 : change index_select scalar_check to retain dimensionality of input. (#30790)
1d7b40f1c4 : Fix reading `__cuda_array_interface__` without strides (#24947)
11b3065323 : Run method_tests on CUDA. (#30821)
9a858aba5f : Moving checks related to options.aliasAnalysis and schema.hasAliasInfo to read callsite (#30671)
619e2ffe23 : Replace deprecated AT_* with TORCH_* to reduce warnings in c10d
b0cba8ceae : Replace deprecated AT_ERROR with TORCH_CHECK to reduce warnings in rpc
2011cc1e91 : Fix half->float case of softmax backward when inner_size is not 1 (#30838)
d32aec5ad6 : Add get_metrics and get_debug_info to rpc agent (#30833)
58cdf1429c : Add tests for quantizing traced models (#30476)
f1755d9aea : Insert GetAttr for quantization parameters instead of Constant (#30551)
1fa4908ac0 : Refactor test_quantization.py and enable `test_nested` (#30475)
ef95a72690 : modify test_local_shutdown_with_rpc to not be flaky (#30837)
7af9d77290 : Update persons_of_interest.rst
a7406516d1 : Refactor bias and weight check and add aten::linear pattern (#30474)
a51c5f5cbf : Add JIT pass to insert permutes for conv ops (#30679)
c1159494a6 : Revert D18621773: we should have a config-based way to skip flaky tests
4034aa7621 : make sure windows tests get triggered (#30836)
82c3f4861f : Move hardtanh activation to Aten(CPU, CUDA) (#30152)
6e38d50352 : Revert D18117070: Migrate max and min (binary) from TH to ATen.
e5bd7a7942 : we should have a config-based way to skip flaky tests (#29944)
0974dcc244 : Fix error checking of CUDA multi_margin_loss. (#30825)
2ced81f289 : Revert "Default to not build Caffe2 operators on Windows. (#29061)" (#30740)
f874230d33 : Vectorize smooth L1 loss backward function on CPU. (#30046)
6486bdfb90 : Fix `os.register_at_fork` not defined on Windows (#30809)
c564d794ed : Add ATen/native/ headers to torch target (#30835)
244b0bd1a5 : Add docs for how we expose declarations in at:: to torch:: (#30760)
be55874f2c : style fixes to code analyzer (#30808)
9617d07bd5 : Wrap warning handler in a function to avoid siof (#30800)
bf1b4b6fef : add torch_cpu to the static library list in TorchConfig.cmake.in (#30769)
f531815526 : Deprecate tensor.type() (#30281)
2171f91053 : reenable cuda_kernel_loop_overflow_large test (#30797)
1578a28692 : Migrate max and min (binary) from TH to ATen. (#27185)
fa251cfd97 : Fully deprecate variadic inputs of checkpoint_sequential (#25985)
2607772959 : Turn off scalar_checks for SpatialDepthwiseConvolution and SpatialConvolutionMM. (#30789)
f12332eb51 : Move scalar_check from codegen to code in MultiLabelMarginCriterion. (#30770)
50625798df : Fix scalar check of MultiLabelMarginLoss. (#30768)
473a044835 : Fix a CUDA memory leak in MultiLabelMarginCriterion error checking. (#30767)
ba1a9871cb : Turn off scalar_check for is_target for MultiLabelMarginCriterion, which is handled correctly in code. (#30766)
35a6997863 : Support 0-d tensors in CUDA MultiLabelMarginCriterion. (#30765)
c4e9748bc6 : Provide full path for buck hipification (#30746)
f2a2fec47c : CUDA-strided-complex Binary and Unary Op support (#30295)
139aa51962 : Clean up non-C++14 code (#28443)
a939b52ddb : fix AvgPool2d for 2^31-1 sized inputs, and get test_cuda_kernel_loop_… (#30771)
1d20c32bf1 : Make `InsertQuantDeQuantHelper` global (#30550)
c4c2e23385 : Supporting making submodules unique (#30037)
7a2889b014 : Stop producing op_version_set version numbers.
3c1bb21cf5 : Invoke more passes in `insertObservers` (#30473)
e09c415387 : Back out "make the order btw div and mul in adagrad update consistent" (#30737)
1f1ce53e8e : Don't install pybind11 header directory for system pybind11 installs (#30758)
569ea63f3b : fix anynonzero op
1d8a13147c : Updating submodules
cd032c7f6a : Updating submodules
1707774417 : AddConstant and findConstant for ClassType (#29217)
2308a0ec1b : Improve documentation around builtin functions (#30347)
42e79d7e8a : Kill THNN version of MultiMarginCriterion; it's not used anymore.
9d3402e4cb : Add the __torch_function__ API override mechanism (#30730)
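The `__torch_function__` mechanism above lets argument types intercept library calls; a minimal pure-Python sketch of the dispatch idea (a simplified protocol with hypothetical names, not the real implementation):

```python
def overridable(func):
    """Let arguments intercept the call via a __torch_function__-style hook."""
    def wrapper(*args, **kwargs):
        for arg in args:
            hook = getattr(type(arg), "__torch_function__", None)
            if hook is not None:
                # Pass the undecorated func so the hook can re-invoke it.
                return hook(func, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

@overridable
def add(a, b):
    return a + b

class Scalar:
    """A wrapper type that reroutes overridable calls to unwrap itself."""

    def __init__(self, value):
        self.value = value

    @classmethod
    def __torch_function__(cls, func, args, kwargs):
        unwrapped = [a.value if isinstance(a, cls) else a for a in args]
        return cls(func(*unwrapped, **kwargs))
```

Plain arguments take the fast path, while `add(Scalar(2), 3)` is rerouted through `Scalar.__torch_function__` and returns a `Scalar` again.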
289e9a07fd : Move Tanh backward to Aten(CPU+CUDA) (#30224)
d38f9117fd : Cache compilation of free functions (#30503)
9d69c55b0d : add MaskedRowWiseSparseAdagrad
786de33832 : Move scalar_check logic from codegen to code in NLLLoss. (#30670)
fa2aa245cf : Simplify scalar_check of nll_loss. (#30669)
6918f0ce86 : Move scalar_check for total_weight in NLLLoss functions to code from codegen. (#30665)
756f279d95 : Rename QuantizeHelper to InsertQuantDeQuantHelper (#30549)
f73cd28082 : InsertObservers for shared class types (#30548)
6e145b4614 : add irregular c10 op registration/invocation cases to test project (#30558)
a55f125e3b : Check the error return of nvrtcGetProgramLogSize and nvrtcGetProgramLog (#30663)
ca072951d5 : move MaskedAdagrad to caffe2/operators/experimental/optimizers (#30714)
d0af07ca4c : Fix capitalization inconsistency in optim.rst
38986e1dea : Split libtorch.so back into libtorch_{cpu,cuda,hip} (#30315)
1189595875 : Fix Tensor.argsort -> torch.argsort documentation link
b8792c0438 : Revert D18645954: add __torch_function__ API override mechanism
a68b790293 : fix ref to nonexistent torch.repeat
ec7bb9de1c : format tri[lu]_indices doc better
d6ca93b353 : add doc for F.softplus
d12786b24f : add __torch_function__ API override mechanism (#27064)
c0299d2707 : add LLVM code analyzer in order to replace static dispatch
f5c9452beb : Fix toObject() r-value version (#30713)
d456a538f9 : op dependency analysis bash driver
7e472679ff : pin actions/checkout version
b26401f965 : Dump operator names of a script module (#30467)
63a1542ed2 : Adding Debug Info for RRef Context
6dda241ab8 : Add RRef.__str__() API
bb5dcaf24f : Add logical_and and logical_or (#30521)
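`logical_and`/`logical_or`, added above, are elementwise boolean reductions of two inputs; a list-based reference sketch of the semantics (the real ops work on tensors of any dtype):

```python
def logical_and(xs, ys):
    """Elementwise logical AND, coercing each element to bool."""
    return [bool(x) and bool(y) for x, y in zip(xs, ys)]

def logical_or(xs, ys):
    """Elementwise logical OR, coercing each element to bool."""
    return [bool(x) or bool(y) for x, y in zip(xs, ys)]
```

Coercing through `bool()` is what lets non-boolean inputs (e.g. nonzero ints) participate, mirroring the ops' dtype-agnostic behavior.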
ab834d5093 : Remove exp10 in TH (unused)
76acf5b553 : Remove many unused bfloat16 functions in TH
4ac614191a : Remove exp10 in TH (unused)
ea3697db69 : inline to prevent duplicate obj when linking (#30363)
3cf8382984 : detect_anomaly() for SparseTensors (#29803)
fef4360536 : remove default constructor in futureInfo (#30197)
59151d3e43 : autograd/profiler: support merging FunctionEventAvg (#30677)
dcd1216efe : Force early initialization of OpenMP in forked children (#29006)
a376dd344c : Added check for torch.where on CPU that both arguments have same dtype (#30662)
56dd2836ec : Make zeros argument of torch.where same dtype as other argument (#30661)
2ba03e0287 : Enable test_trainer_ps in dist_autograd_test.py
d4c25add45 : make sure the counter stays correct in between bailout transitions (#30186)
03a73cb9ac : Remove namespace F = torch::nn::functional from torch/nn/modules/batchnorm.h (#30684)
604a27361f : remove tuple_parser (#30659)
4d4d8e0dce : Update persons_of_interest.rst (#30647)
4e6379379c : fetch before checking out PR tip
980aead1f8 : Add support for quantized slice conversion (#30498)
bc2e6d10fa : Back out "Revert D17908478: Switch PyTorch/Caffe2 to C++14"
aff693ab1c : Ensure MIOpen is called on same stream as operator for RNN (#30672)
40146eb48e : Skip ProcessGroupGlooAyncTest if there is no CUDA available (#30345)
19cd90d303 : Globally record observer nodes (#30547)
1b5ce05924 : don't use size()/stride() functions in TensorImpl, use size_[d]/stride_[d] instead (#30452)
7023e13fbb : Fix mapping white list (#30636)
f114c33e69 : Fix iOS CI (#30327)
1b12fd33ed : Add missing trigramma_stub definition. (#30314)
a009fc14be : Workaround hcc bug regarding extern "C" definitions (#30313)
8269f7b652 : Delete redundant THC_API on THCStorage_new (#30312)
d43e205026 : Properly include declaration of dispatch in file that registers it. (#30311)
a5b1f6e7d7 : Add missing _API definitions. (#30310)
08394cede3 : DEFINE_DISPATCH in the correct namespace. (#30308)
9740011f10 : Use normal dispatch to get to CUDA threshold kernels, instead of DispatchStub. (#30307)
a997f224ac : Add torch.multiprocessing.create_processes
4d30415f12 : Add ONNX Scripting Conv Support (#30618)
89be1a22d4 : split getInvokedMethods (#30546)
d5c136097a : improve .view() performance (#30554)
5a484245d9 : Change test_invalid_names test to only test constructor of WorkerInfo (#30620)
f9f54201d3 : Remove deprecated fromIvalue in RRefForkData
b446572997 : TestCppExtension now removes /tmp/torch_extensions folder so that it can be used by other users in a multi-user environment. (#30095)
8b29701ae5 : Turn off scalar_checks for _th_reciprocal. (#30436)
61798865e3 : Turn off scalar_checks for torch.clamp. (#30435)
e5b947a3a8 : Raise an error for is_signed on quantized types (#30527)
18ec4632b3 : Exclude undefined tensors in the result of Module::parameters() / named_paramters() / buffers() / named_buffers() (#30626)
e7fe64f6a6 : Fix typos (#30606)
0bebfe2143 : Add the explicit per-tensor/per-channel quant info when we print the module (#30591)
4dab29a2bd : Fix serialization memory lifetime issue. (#30603)
db81e13d6b : Fix TCPStoreTest and improve tcputils::connect() (#30354)
9e3d19412b : Disable implicit conversion warning (#30529)
968c0d4a46 : Add support for converting quantized AvgPool2d and Reshape operations (#30490)
2d0a4e42e9 : Add barriers to fix flaky test_graph_for_py_nested_call and (#30624)
98ab55fc51 : PRAGMA missing for clang (#30351)
9c02b88791 : Add pickler support for Device (#30131)
19b7d49fac : Add TOC to CONTRIBUTING.md (#29671)
569729527b : Turn off scalar_checks for exp, cos, cosh, tan, atan, tanh, erf, erfc. (#30434)
9082123038 : Back out "Back out "Revert D18542342: Boxed variable dispatch""
3636cb0364 : windows build (#30556)
d32f261f16 : make the order btw div and mul in adagrad update consistent (#30449)
1111a6b810 : Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL (#30274)
6deb41c88d : Update magma to 2.5.1 for Windows and switch CUDA in CI to 9.2
b68d1fc316 : add small input shapes to some ops (#30617)
8ee61e0be4 : Fix CPU_INTEL flag error on windows (#30564)
e6000a7c04 : Temporarily disable test_numerical_consistency_per_tensor (#30600)
c780610f2d : Disable test_backward_per_tensor in test_fake_quant (#30594)
53785771a7 : Don't build test_cpp_rpc if torch is built without distributed support (#30587)
dd52f50fc8 : Add examples to RRef doc
30d70d5378 : Make doc source format consistent in rpc/init.cpp
ec5e471647 : Reorganize rpc API doc and add introduction (#30491)
f4e7e9039d : Improve process_group_agent() serialization speed (#29785)
1350b99de4 : Add local shutdown to process group agent (#30330)
7ac8efa689 : Skip undefined tensors when moving torch::nn module to a different device (#30523)
640109ae5d : Back out "Revert D18542342: Boxed variable dispatch"
87f29557bd : Ignore logical_and and logical_or in op BC check for now (#30537)
a2ed50c920 : Revert D17908478: Switch PyTorch/Caffe2 to C++14
0b25371f5d : Turn off scalar_check for _th_normal.
f3631c2464 : Revert D18542342: Boxed variable dispatch
7d2b0aa693 : add retries to network operations (curl, conda install, git clone) (#30479)
c1c5622a6a : Add katex to pytorch-linux-xenial-py3.6-gcc5.4 docker image (#30522)
a69be8123a : Use `gettimeofday` on iOS (#30361)
2f42488d36 : Updating submodules
106ab487eb : fix typo in doc
fcb7371e65 : Update docs for cpp_extension on Windows (#30392)
d0acc9c085 : Switch PyTorch/Caffe2 to C++14 (#30406)
ec5c08de74 : Revert D18580867: Add logical_and and logical_or
1e8ed021c6 : Support logsoftmax with dim != -1 (#30433)
0282c5ae69 : Add helper to aggregate multiple process groups (#25768)
1d3f3a1a0c : Add pybind11 trampoline class for c10d.Store (#30415)
d2336edcfb : Boxed variable dispatch (#29934)
512c2a2df5 : Enable constant folding (#29834)
c1c8105de0 : Make the warning of using SparseTensor in JIT less noisy
829499e626 : avoid Formatting::print() when STRIP_ERROR_MESSAGES is set (#30451)
2d6b2f39e9 : Fix docs so that the example works (#30120)
5ada5363fc : GenericDict/List type use unshapedType() (#30428)
6bd8937aee : FunctionParameter::set_default_str replace || with &&
21d7532dfe : Add more comment on NumPy detection in Python scripts.
8bbafa0b32 : Add logical_and and logical_or (#28162)
92e27c5e89 : Flag to disable Variable
4eff2f2007 : Fix missing closing quotes in docs
05a1644ce3 : Fix BC for quantized linear
976d91d30a : Comment on a set of ops bound at the python layer
634f370c63 : Add comment to ops bound at python layer
c5a6c4d6c9 : Adding elementwise kernel also operating on index (#28175)
e9cc4a5942 : Add @DoNotStrip to nativeNewTensor method. (#30472)
fec903ce00 : Fix test case after get_qparams refactor (#30470)
b0871f211b : Make all optimizers consistent so that they don't change gradients inplace
45880f4246 : Change logging to remove the word "error" from info log
dcd9f49809 : Specify ordering on singular values and eigenvalues output from torch… (#30389)
dbce53fe32 : Turn off scalar_check for _th_gather. (#29954)
72ac45662b : Turn off scalar_checks for torch.take. (#29953)
79a830af56 : Turn off scalar_check for Tensor.set_(Tensor) (#29952)
0febff36ac : Export dynamic unbind/split and __getitem__ (#29136)
2599b9b551 : Add output_size argument to caffe2 Int8ResizeNearest (#30202)
efe1859ad9 : By default ignore RRef leaks during shutdown (#30217)
06db5ad707 : Provide names for operator nodes in ONNX exported graph. (#27342)
584be86c3f : Try exporting ONNX with force_outplace=False (#29466)
eccf42fd15 : Bug fix: Handle missing keys in observer state dict during load (#30357)
ab5774547a : Add info about transitive dependencies in case of using local aars (#30128)
085dde5965 : Fix for when PyTorch model trace has RecursiveScriptModules (#30430)
8199596d7e : Add missing std::move (#30411)
661a6c8ef2 : Add `get_qparams` and revert the changes to `calculate_qparams` (#30262)
46e7f31fa3 : Document unsupported types (#30344)
ab2ec4d835 : Fix inexistent parameter in document (#24335)
0b71e7e1fd : Refactor QAT Conv module for better extensibility (#30362)
b8f50d9cc8 : Support to add dequant for each use of Value (#30145)
25f4ba7c1b : Improve compare kernel (#29743)
5c6705e62c : add default arg for init_method (#30208)
d64e2581cc : Add list of supported XCode/CUDA versions to README
0517323dad : Update osx CI to XCode 9.4 / CUDA 10.0, cudnn 7.6.5 (#30359)
c12f9a12a8 : Fix quantized ConvReLU3d test (#30266)
d7ac90e2ef : Stop binding std_single and var_single from TH; they aren't used anymore.
0c67311878 : Turn off scalar_check for set_(Storage, ...) (#29950)
7160300638 : Turn off scalar_check for reductions _th_max, _th_min. (#29949)
16606e1725 : Turn off scalar_check for mode; the underlying code is correct.
b8eba7aca9 : Turn off scalar_check for ormqr. (#29947)
7c6cc1d6d4 : Turn off scalar_checks for _th_multinomial_alias_draw. (#29946)
6e88ddf352 : Turn off scalar_check for _th_addmv and _th_eig as they can never pass. (#29945)
ce5f1a1b25 : Turn off scalar_check for masked_select. (#29923)
0c9c62ba6e : Turn off scalar_checks for __and__ and clone.
94ad7544ae : Turn off scalar_check for __or__
f994377d28 : Turn off scalar_check for lshift, rshift.
99a46b44ea : Use correct API macro in VariableHooksInterface. (#30320)
20dfae4099 : Fix the crashes for c++ not able to find java class through Jni (#30390)
3990e9d1ca : Improve performance of LeftRight::read() (#30282)
0c7e4c1d62 : backend fallback test (#29682)
959a849a23 : better boxing (#29681)
aa2862b843 : Hide the OperatorKernel* argument from the stack based kernel API (#29337)
afdc0bd4ec : OperatorHandle::callBoxed/callUnboxed (#29330)
fb8c17dde1 : Test cases for backend fallback kernels (#29214)
583c288232 : Add a OperatorHandle argument to boxed kernels (#29201)
24aabe439a : Make Dispatcher::backendFallbackKernels_ an array (#30340)
7b5045be9d : Remove LeftRight from OperatorEntry and DispatchTable. (#30333)
4aa692fc91 : Convert KernelTable to a flat-indexed array rather than a hashtable. (#30332)
7c4b9042ab : Updates to quantization documentation (#30288)
7570b2798a : updating citation (#30267)
59ca9b7430 : Graph-mode quantization for convolution from traced model (#30245)
2a7a39c1af : (de)serialization of values between C++ and Python (#30108)
ee20e66c48 : replace the SLSRQ for their right emulations in the replayer test (#30367)
328ec5460f : refactor the observer removal and quantize tensor
6a00191fc2 : Add RpcAgent::getWorkerInfos() (#30241)
c7f988b8c6 : transport open registration (#30167)
ac103a5d78 : Remove variable wrapping from register_c10_ops (#29207)
9fb879934e : Revert D18641413: add unit tests to iOS CI jobs
6c9b188262 : Support in-place update in IndexHashOp (#30275)
99a2a0b1ca : Implement torch.diagonal for named tensors (#30193)
2e709763a3 : add wrapper to exclude XLA when running device tests
8c6f0c0587 : Detect TorchScript archives in torch.load (#29339)
90cb1e67ff : Fix exception message in Java Tensor
0c18de2623 : Add inferBoundShapeOp
35e6c1763e : Switch Docker image onda-cuda-cxx11-ubuntu1604 to new uniform name (#29943)
a5272cb643 : Error instead of assertion failure for div by sparse (#30260)
638f4c1fb3 : Update Cocoapods to 1.4.0 (#30326)
97fae401f0 : Use LinearPackedParams everywhere
1cc321deed : Memoize parseIR calls in graph mode quantization
65f465050b : Dont use SubgraphRewriter in FoldQuantizeCallIntoBuffer
a9f3f48f88 : Revert D5578006: Add local shutdown to process group agent
fa242246ee : add unit tests to iOS CI jobs (#30133)
7903fb118f : Move qkv_same, kv_same into branch (#30142)
5d7b2089e8 : Draft version: Make AliasAnalysisKind optional in Op Registration API (#30187)
c478a92b93 : Add local shutdown to process group agent (#30020)
559b3b5a7a : Use unboxed registration for most of operators used in lite interpreter. (#30239)
f41422121e : default construct rpc agent options based on the backend type (#30201)
3455231e9c : Expose configuration of Numa directories to setup.py (#30104)
faacbfa8bf : Migrate index_add cpu from TH to ATen (#28421)
183aa1534f : Add --no_python flag (#29144)
29887f813a : Remove unused forward declaration (#30154)
a074080d57 : Mark `c10d::~NCCLUtils` as noexcept (#29118)
95b451d386 : fixing test_tensorboard for py2 (#30298)
f5ef3a6fb6 : disable JIT optimizer in Android wrapper for mobile custom build (#30285)
1690feba9f : add mobile build CI with host toolchain (#30292)
48b943960e : Add bfloat16 support in linear algebra on ROCm (#27719)
23650671a8 : add_hparams() NoneType error (#30286)
5e19460ced : cache tensor scalar_type in OperandInfo (#30065)
73c9e6e6b6 : Rename function parameters to avoid [-Werror,-Wshadow] (#30276)
a822a1d2a8 : Avoid overwriting output type in onnx graph (#25906)
30874b31a9 : Enable JNI build on Mac host (#30207)
e5fc86130a : Remove unnecessary linker flags from JNI host build (#30206)
4609c626c5 : Enable test_call_method_on_rref in rpc_test (#30261)
aa1e99e983 : Fix two links in RPC API doc
168570b0da : move module_save.cpp to non-mobile build section in cmake (#30221)
0c04763d59 : Changes to get inlined graph and proper names after JIT updates (#30244)
983728489a : Add ONNX Tests for Torchvision Models (#30121)
fea963d3ae : Fix BackendType repr in doc (#30243)
063e22b7c2 : Fix RRef design doc warning (#30240)
e0325011e4 : Add link to RRef protocol in RPC doc
f2f285c240 : Add arguments to benchmark to run pytext models. Output results in ms. (#30273)
b2b1601b30 : Docker image build on CircleCI (#29932)
352731bd6e : Revert D18632773: Split libtorch.so back into libtorch_{cpu,cuda,hip}
eff4c4d7c1 : Revert D18301806: Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL
cbe0a996f0 : Change dimType for shapeInfo (#30183)
188d0a9add : Skips flaky UtilsNMSTest.GPUEqualsCPURotatedCorrectnessTest (#30053)
f4b9690f2d : Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL (#29095)
0fdbb762d1 : Warn user when resizing out Tensor after arange() (#29195)
1bba0eb35b : Add `clone_instance` for Module (#30168)
2c1c6de122 : Represent the original python name the same way in traced and scripted modules.
ec30d9028a : Split libtorch.so back into libtorch_{cpu,cuda,hip} (#29731)
d934cf484b : call find_package(OpenMP) only when USE_OPENMP=ON (#30223)
7d3afc4186 : enable the per channel dynamic quantization (#30122)
3ba1456aee : Fix clip_grad_norm_ / clip_grad_value_ to take input by value instead of by non-const ref (#30216)
6e4c23b02f : Add RPC internal helper that overrides the default pickler. (#30185)
e3334723b2 : fix a crash in nested bailouts (#30097)
9e81616343 : Merge Tensor and Variable types. (#28287)
a78e7eadbd : Fix typo in extending doc
5d80f30f70 : add missing space to mask index error msg
e05e90c62e : TensorTypeId-based non-RAII setter/getter for LocalTensorTypeSet (#30113)
f7b12a9858 : fix aten::grad to return optional list (#29577)
38ca3552d9 : Unit Test for the Legacy Dynamic Quantized Linear operator (#23139)
1eb9f49cc6 : Fix test_jit under pytest
b154a8cfc7 : Integrating the int64_t GEMM in FBGEMM into PyTorch Linear op (#30143)
cc16819028 : Add abort API in gloo ProcessGroup Send/Recv Work (#29928)
0a77c090d5 : C++ parity, convert_parameters (#29267)
bbb3c415c9 : ONNX Hardtanh Opset 11 Support (#30169)
fd74a19aa4 : apply clang format -i (#30180)
1aa80471b8 : minor fix to filter (#30200)
f1a0a27da1 : col max hist observer
449828378d : Serialize ClassType as its qualname
2803261a23 : Update API doc for wait_all_workers after rename
de05114618 : polish examples in docstrings and update docs to reflect correct use of (#30052)
bebed492cf : Make RRefContext singleton leaky, deal with module destruct order race. (#30172)
211e39fd1c : add docs for profiling PyTorch with py-spy (#30166)
36aaa299f8 : shut up clang-tidy on ir.h/cpp
43fb0015db : custom build script (#30144)
ae6af8d55f : Enable multinomial for torch.half (#29266)
51259e5024 : Updating submodules
c4e7f1b232 : Revert D18579363: Change dimType for shapeInfo
c2b7b2cbf8 : Make observed values actually flow through observers (#30140)
2d534abb39 : Modernize graph mode IR API calls
73cf4d468f : Design doc for Remote Reference (#30066)
5cbdbddc12 : Add test for F::max_unpool3d, and update parity table
f304bd5062 : rename join_rpc to wait_all_workers in public api (#30050)
a460c856dd : Fix naming for kl_div and binary_cross_entropy functional options (#30146)
9cb8fb61c2 : update operator_range description in op bench (#30170)
ff7afede92 : Stop showing .api as an API path component in RPC docs (#30160)
0762bbfc9a : Eliminate tensor copies from compute_common_type_ in TensorIterator. (#30018)
ff94ddda08 : Change dimType for shapeInfo (#30047)
7201a2e854 : remove consistency check from setup (#30043)
67b77afcdf : Fast histogram observer
f03db0cd19 : Add torch::nn::functional to C++/Python parity tracker (#29819)
f2b851a9e5 : Returning axis from calculate_qparams (#29494)
64817a43d2 : Test for per channel graph mode quantization (#29493)
fbcb88e8b3 : Split module.cpp and export.cpp to support saving on mobile (#29881)
72bc7bf37b : Revert D18612158: Fix naming for kl_div and binary_cross_entropy functional options
d11dfd1a84 : only run embeddingbag op on cpu (#30163)
e84fcc1fd1 : Fix naming for kl_div and binary_cross_entropy functional options (#30146)
b0309d1b5b : More documentation on caffe2_interface_library (#29903)
36a47d71e1 : Enabled bfloat16 for cuda (#27259)
551e387fff : Disable flaky test test_graph_for_py_nested_remote_call
13283e0cbb : Change order of recalculating numel and restriding (#30025)
c2c835dd95 : Port sigmoid backward to Aten(CPU+CUDA) (#29185)
c0104a1c89 : Fix typo in comment in cpp_extension (#30028)
f8e7f3fca4 : C++ API parity: BCEWithLogitsLoss
93db2b86d1 : Fix type sharing on loaded ScriptModules (#29826)
558a777615 : Re-unify module and interface in ConcreteModuleType (#29825)
63e66fd267 : Split ConcreteModuleType into two types (#29824)
7495c25440 : Updating submodules
c06f9023e5 : Polish rpc docstring. (#30069)
def2985e90 : add flag to strip C10 error message (#30111)
88ef402cb5 : Add distributed optimizer section to distributed autograd design doc. (#30068)
b410d864c9 : make python remote exception to rethrow when using remote reference to itself (#29930)
1b26e3ff6d : fbjni gradle obey ABI_FILTERS parameter
cc81769e10 : C++ API parity: isfinite
b2291d4600 : Make PerChannelMinMaxObserver scriptable using `torch.jit.ignore` (#29416)
80e3f17301 : Resubmit "Add `RpcAgentOptions` struct type, which bundles different required arguments for different `RpcAgent`s" (#30093)
15bc41a8aa : Overwrite __setstate__ func in MultiheadAttention (#29001)
07e14c7cd0 : DistributedOptimizer: wait for all workers to finish _LocalOptimizer constructor (#30062)
2367e71f55 : Disable ProfilingGraphExecutorImpl for mobile (#30067)
2c8dce915c : Show full call stack in TorchScript exception even when calls were inlined.
a9d1465c82 : Add logging to inliner. (#27922)
59eb682ce3 : Add InlinedCallStack class. (#27921)
12263cfa98 : Make inlineCallTo to take Function instead of Graph as the callee argument. (#27920)
0eb8c3dbfb : Add a variant of insertGraph that fills values map. (#27919)
e951f7cf58 : Add Python3 ROCm CentOS docker image (#30119)
bb1d9b238d : torch::nn::FractionalMaxPool{2,3}d module and functional
ec52d911bd : InstanceNorm{1,2,3}d (#28790)
8e3486de81 : No debug symbols in release android builds (#30123)
5fa941d4e2 : update fastlane to use Scanfile (#29963)
99c59d73a7 : Remove input_channels / output_channels / with_bias from ConvOptions (#29838)
868cb05a30 : Resubmit "Add RpcAgentTestFixture to extract duplicate code" (#30092)
877c96cddf : explicitly provide memory format when calling to *_like operators
e46babb637 : explicitly provide memory format when calling to *_like operators
04018ba865 : explicitly provide memory format when calling to *_like operators (Redo of 81bf7364)
66913fe5c1 : explicitly provide memory format when calling to *_like operators (Redo of cc1c01)
dc9e7b73e1 : explicitly provide memory format when calling to *_like operators (Redo of e3e06549)
66cb93c762 : explicitly provide memory format when calling to *_like operators (Redo of 4b4aa)
295feb4e9a : explicitly provide memory format when calling to *_like operators (Redo of ce438f6967)
20b73e1805 : explicitly provide memory format when calling to *_like operators (Redo of 631b22d)
c15a4a0971 : explicitly provide memory format when calling to *_like operators
2b1466e665 : allow operator_range to take multiple ranges (#30124)
05a7aaa742 : Pass Tensor instead of Tensor& to torch::nn functionals that can change input in place (#30112)
a75b669b0f : C++ API: torch::nn::ConvTranspose{1,2,3}d (#29721)
c2e576e74b : Per channel quantization support in insert_prepack_unpack (#29701)
63c957cd94 : Use std::shared_ptr for DistAutogradContext. (#29770)
79b797ccac : Build time warning on windows for fbgemm (#29062)
5aa50c7f3c : Enable test_nested_rref in rpc_test.py (#30100)
a243e0872e : Enable test_nested_remote in rpc_test.py (#30099)
8912e6caf5 : Enable test_nested_rpc in rpc_test.py (#30098)
a689e3a0c4 : Support per channel quantization in insert_quant_dequant and fold_prepack (#29492)
0ab03d3283 : only run embeddingbag benchmark on cpu (#30106)
4b0a6d299c : test reporting (#29658)
1dbc84ab6d : Remove unnecessary conditional (#29901)
57acc2ff3a : add an unit test target to TestApp (#29962)
23991e89cc : change operator_range to work with lower and upper in op bench (#30096)
dca123e76d : Add zipfile serialization (#29232)
2b02d154db : Implement fast pass for CPU scalars /number literals (#29915)
e88d096321 : C++/Python API Parity: add AlphaDropout (#28424)
1597f22982 : fix device check in op bench (#30091)
37ca5a8a64 : convert_sync_batchnorm should not convert _InstanceNorm instances (#29985)
45024e7a35 : Support Exporting Bitshift to ONNX (#28210)
a3494bd56b : CPU-Strided-Complex Fixes for real and imag ops (#29840)
7d287688eb : Revert D5689636: Add RpcAgentTestFixture to extract duplicate code
1dda8186ae : Revert D18549919: Add `RpcAgentOptions` struct type, which bundles different required arguments for different `RpcAgent`s
861ef05015 : Remove rpc fork and dist autograd fork tests from PyTorch repo (#29827)
83513506c3 : poll for timed out futures in process group agent (#29601)
21dc1d4543 : Add `RpcAgentOptions` struct type, which bundles different required arguments for different `RpcAgent`s (#29972)
82b6300fea : Disable openmp in static and dynamic histograms (#30072)
a9ad2e2f00 : fix batch norm for empty inputs (#30035)
c272758b43 : Mobile module forward() pass input by value. (#30060)
267fd4a06c : Fix for batch norm 2D with affine=False (#29458)
a4f60b64dc : explicitly provide memory format when calling to *_like operators
2dba553990 : explicitly provide memory format when calling to *_like operators
3045b2a366 : explicitly provide memory format when calling to *_like operators
735517fa87 : explicitly provide memory format when calling to *_like operators
5b15f32697 : rename benchmark_all_other_test (#30048)
97156f548d : Add hash and equality operators for WorkerInfo (#29958)
8b9bac1fad : add operator-range argument to the op bench (#30051)
64706e0a74 : change conv, batchnorm input shapes (#30041)
3250d5008f : change the starting iters to reduce execution time (#30040)
3bd0f476d4 : Revert D18233037: C++ API parity: isfinite
63f4b607aa : Ensure initializedContextIds_ map is cleaned up appropriately in DistEngine. (#29787)
26dabad5a4 : Add LiteModule java class for lite interpreter. (#30061)
a1fc46d2b5 : Updating submodules
8df5e10ee9 : C++ API parity: isfinite
5d69bc1eda : Add docs for distributed optimizer. (#29971)
4f94aed8a3 : Reformatting module class. (#29957)
ab93b3df60 : Polish distributed autograd docs. (#29942)
df6a1c0437 : Remove rpc.sync_rpc from the public API. (#30033)
905792af1f : disabling persistent mode for cuDNN BN on NCHW (#30031)
9c7e604c60 : SyncBatchNorm Update on input dimension checks (#29626)
5b6dd52e3c : Build Unit Test of SparseRAdam
64cdc648da : fix submodule traversal in FoldPrepackedWeightIntoModule (#29925)
b4f33c1c21 : Updating submodules
8dd67057f1 : Add RpcAgentTestFixture to extract duplicate code (#29747)
6d6380fd4e : Update CODEOWNERS for distributed and rpc modules
adfb8a4888 : Fix bug in atomicAdd for int16_t (#29231)
45e980a243 : Skip broken test test_cuda_kernel_loop_overflow_large (#30021)
189b24ebe9 : reorganize test binaries of op bench (#30023)
91c6d2e51c : Add support for quantized operator conversion from PT to C2 via ONNX (#29694)
b45069b59f : fix fc fp16 quantization (#29469)
a3ee504c33 : Integrate RAdam to SparseAdamOp
82682b3e96 : Revert D18531481: Remove input_channels / output_channels / with_bias from ConvOptions
f6cadad174 : Delete redefinitions of methods in Variable already present on Tensor. (#29667)
1ab2f043ba : Move most methods off Variable into torch::autograd::impl functions. (#29665)
38340f59fd : randint accept generator=None (#29748)
94016b153a : Fix typo in documentation
a573f8f7d7 : Disable broken test_cuda_kernel_loop_overflow_large test (#29904)
7782f4bc50 : Updating submodules
0e5200adfe : Refactor target_compile_options into torch_compile_options (#29730)
1381301d46 : Remove AT_LINK_STYLE entirely. (#29729)
639133d6d1 : rename init_model_parallel to init_rpc (#29762)
5f510374e7 : Add torch.memory_format support to the TorchScript
cb43170dcb : Add memory format support to the `resize_` op. (#28292)
a7df36964c : TensorIterator preserve format for binary, ternary operators.
b80c4f60fb : Add channels last support to cuda.comm.scatter and gather
026a2a4ec4 : Kill `operator==` of TensorOptions as confusing one
9f3b347874 : Add memory format support to `resize_as_` operator (#27979)
a3588b6ed9 : Updating submodules
bb217eee98 : Updating submodules
18bdf97dbb : Factor Module into Object and Module
14946a8891 : Updating submodules
6bf87dae90 : Updating submodules
2b5213d94c : Updating submodules
b011461c9f : Add missing operators for pytext, v2 (#29970)
6980cb2519 : Add overload name to JIT prim operators, version 2 (#29960)
689b4bea7b : torch::nn::GLU and F::glu (#29922)
d5bf51b684 : torch::nn::GroupNorm and F::group_norm
93c5d79953 : Updating submodules
30d37e82db : Revert D18521937: Enable full error message for mobile builds
e1d13f4f8b : C++ API parity: NLLLoss & CrossEntropyLoss (#29812)
890a3f8b8d : Remove input_channels / output_channels / with_bias from ConvOptions (#29838)
0995929971 : Improve legacy QuantizedLinear functions to reduce overhead (#29773)
66bd0ed940 : Updating submodules
649e7f057e : fix comment index_size->output_size (#29831)
58ee61176c : SeqBlobReader Implementation (#29888)
455b5c1a7d : minor updates to rpc docs (#29857)
4da509090e : Disables TestNN.test_CTCLoss_1d_target (#29841)
eb29276623 : Update distributed autograd design doc with appropriate links. (#29927)
4553d5e69b : Fix submodule traversal in insertPackUnpack pass. (#29914)
27afac2134 : C++ API parity: Dropout, Dropout2d, Dropout3d
fbabf72829 : Add ONNX support for Logdet (#29767)
b730d04ed2 : Fix deadlock issues in ThreadPool (#29885)
0a33c3f1a1 : split module interface tests (#29917)
a5b4d78c6d : Revert D18499600: Add overload name to JIT prim operators.
2a442f5dca : Revert D18499601: Add missing operators for PyText model.
c543034531 : add cuda sync when ops running on gpu (#29936)
f1860aea83 : fix missing lock in profiling graph compilation (#29886)
5cad7d42ef : Enable full error message for mobile builds (#29926)
c300f086a4 : Turn off scalar_check for diag.
a6a31c6dc2 : Turn off scalar_check for _th_max, _th_min.
6c7a0c68f9 : Turn off scalar_check for lstsq (gels), and test scalars for eig.
79f0636718 : Turn off scalar_check for sort. (#29874)
ee5201cd7c : Fix memory leak in CUDA renorm, turn off scalar_check for renorm. (#29873)
d87655f515 : Turn off scalar_checks for cumsum, cumprod.
fe575b44ee : Turn off scalar_check for fmod. (#29871)
98362977a0 : Turn off scalar_check for remainder. (#29870)
61df98a083 : Turn off scalar_checks for multinomial_alias_setup_, which requires 1d tensors. (#29869)
92a512b583 : Stop generating maybe_zero_dim calls for "scalar_check: false" with multiple outputs. (#29868)
6c39e5033c : Add missing operators for PyText model.
ff4e782e79 : Add overload name to JIT prim operators.
3003c5f91b : OPN ops TupleConstruct/Unpack and format. (#29635)
d22f61432d : Update fbjni and enable PyTorch JNI build
3f5dc95b57 : fix device check in op bench (#29918)
acb8100810 : Updating submodules
7807d44934 : Add TensorShapeAndType (#29848)
5ab6635de1 : Stop binding _th_resize_as_, which isn't used anymore.
8e61287d1b : Skip outputting scalar_checks if they are false. (#29866)
4442fa59c7 : Avoid keeping old histograms in the histogram observer to fix the OOM issue (#29768)
7889e1e3f9 : Add `torch.version.hip` from cmake (#29815)
69e343f2cc : Expose is_signed for dtype (#29511)
23fcc409d5 : Revert "switch back to azure pipelines" (#29910)
a9c719ba82 : Set TORCH_CXX_FLAGS in minimal example (#29890)
9ec1727ea6 : Makes test_type_promotion generic (#29417)
0108f473ad : Use c10::to_string in more places (#29839)
60ad2a96f0 : Update torchvision in CI (#29853)
5e53c1501a : Update CircleCI config to use Docker images from "pytorch" account (#29835)
510ef4b63a : Add nn.quantized.Conv3d (#29813)
e1a309a647 : Always include autograd context id in rpc/remote requests (#29781)
a34cc01dcc : Implement backend level fallback for c10 (#28494)
3fa5917530 : Simplify c10 dispatcher (#28314)
6dc8d72f94 : Change from int64_t to jlong for mac build (#29861)
893105b79e : Add reset_parameters to torch::nn modules (#29832)
831f25c53b : add test/mobile/op_deps project for dependency analysis test (#29716)
b508de6412 : add static libraries to TorchConfig.cmake.in (#29837)
9371b31818 : set USE_STATIC_DISPATCH outside cmake (#29715)
60a33cac2b : reduce input shapes of long tag in op bench (#29865)
90e3bbf3ab : support all with tag_filter to run all shapes (#29864)
5da2bf945e : add embeddingbag to benchmark_all_test (#29830)
371da6acef : move get_rpc_timeout to pybind (#29765)
7a6c3b36a1 : Switch ScriptModuleOp to use a unique_ptr
902c1f9ef1 : Check for mutable default parameters (#29833)
77bb41c965 : Rename dist_autograd_context and dist_autograd_container. (#29696)
06ef4a757d : Add docs for RPC, dist autograd, and RRef modules (#29276)
ce7058337c : Remove two unused TH definitions of rsqrt.
bfedace5e3 : Expose miniz to Python (#29228)
eef349a679 : host build gradle publishing (#29749)
65bb34d885 : Remove TensorImpl::is_variable, deprecate Tensor::is_variable (#29653)
8d23f7a3a8 : Only print original SourceRange on highlight
7f4d4254c3 : Make sure we only run Profiling Graph Executor tests on windows (e.g. no simple, no legacy)
90ac35b7bd : Fix tracing of autograd functions
747233e3bd : minor edit to fix benchmark_all_test cuda error (#29829)
c5ac70a0ea : AdaptiveAvgPooling nhwc cuda update (#29700)
ad95099f45 : fix benchmark_all_test when running on gpu (#29818)
b70d571233 : add embeddingbag operator to the benchmark suite (#29784)
e53b510773 : add addmm op to the benchmark suite (#29783)
dfa9c9e227 : Replace `make` with `cmake --build .` in the docs (#29798)
01d76145fc : Fix typo: Caffe2_MAIN_LIB to Caffe2_MAIN_LIBS (#29746)
bf80664515 : Add quantized conv3d function (#29686)
2d7d53cd87 : Updating submodules
4a1fcc0b83 : Allow rpc.remote to create RRef on self (#29634)
9fd7db616a : Disable Caffe2 RCCL tests (#29792)
ba74be0d3e : Update CODEOWNERS for distributed rpc framework. (#29788)
4a27d2be18 : Enabling intra-op parallelism for fbgemm_linear_int8_weight_fp32_activation op (#29532)
f3b15727c5 : fix op benchmark OOM issue (#29794)
aa6e992ffb : Subscribe for record function and if android do atrace (#28708)
a68c52494c : Use F::*FuncOptions for embedding/embeddingbag functionals (#29673)
9ee6fa0145 : Use NNPACK for strided convolutions. (#29595)
ed788ec780 : Linearizable Label: Class Weights, Allow Missing Label, and Average by Batch Size (#29707)
b8dca04f73 : Add error message if CUDA startup fails (#29670)
5654eccfe2 : Add pytorch_jni_lite for lite interpreter. (#29621)
681b610f35 : use new overload mechanism for rnns (#29614)
91bef3d189 : Simplify copy kernel with static_cast_with_inter_type (#29631)
65f691f2c2 : Add more tests for torch::arange
2bcac59a30 : Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632)
3fb9bbc99b : refactor and move createException function (#29605)
78bd0069d3 : enable back 2 tests for simple exec
71aacf7b82 : Gradle build offline dependencies #2 (#29738)
2b05ae0704 : Revert "Enable test_distributed for ROCm but only with nccl backend" (#29736)
c800591030 : Update ATen/native/README.md about broadcasting (#29742)
b37c235d86 : C++/Python API parity for Conv{1,2,3}d layers, and add F::conv{1,2,3}d functionals (#28917)
7f485121a6 : Avoid MSVC _cvtsh_ss() workaround with clang-cl (#29726)
ed215b1c03 : named tensor support for torch.equal (#29322)
5e64cfa663 : Make TensorName::unifyFromRight in-place for efficiency (#29307)
6de1016f9d : switch back to azure pipelines
73a926fd5d : Updating submodules
f0dd7517f2 : Add option to clean up allocated activations between c2 runs (#29619)
03d021ddb8 : Allow unrelated histories when rebasing to master (#29699)
5635a72069 : Revert D18451046: CPU-Strided-Complex Fixes for real and imag ops
6d54c5ddd2 : Missing host device (#29547)
9b1ff8090d : CPU-Strided-Complex Fixes for real and imag ops (#29607)
0c91ebb694 : Delete all trivial uses of make_variable. (#29213)
89e187a2f5 : Miscellaneous follow up for code review comments (#29204)
30092df15e : Rename getNonVariableDeprecatedTypeProperties to getDeprecatedTypeProperties (#29203)
7da9ac5afd : Revert D18455666: Gradle build with offline dependencies
715e951e3c : Revert D18458751: use new overload mechanism for rnns
e870a9a870 : More checks on MSVC (#29709)
7b86199fc0 : Switch XLA to only override abstract functions (#29636)
3a72662d01 : Restructure comparison ops so as to better support XLA dispatch (#29591)
09d359dfd9 : Changed default args in quantization observers
d2aa4c611f : observer benchmarking
d8732b3b43 : Gradle build with offline dependencies (#29262)
20fb8a814c : PackedSequence support for quantized LSTM
87363a8102 : Revert D18466043: Pin Linux image and modules version to 4.4.0-166
5a8ad66354 : Do not show cuda stats in autograd profiler when `use_cuda=False` (#29666)
95cad57340 : Turn on named tensors for all builds (#29603)
907a29de70 : Pin Linux image and modules version to 4.4.0-166 (#29690)
29e509ff1d : Fix a missing comma in quantized benchmark
8875120b54 : Make dropout condition on training.
422fbfb108 : Fix some issues for lite interpreter internal build. (#29620)
bd0394d473 : Add op bitwise_xor to replace __xor__ and __ixor__ (#25665)
8e7b406773 : use new overload mechanism for rnns (#29614)
433baf1b90 : Change arg dtype from float to double in LPPool and nn/utils/clip_grad.h (#29584)
65bfcde05e : Use c10::variant-based enums for SmoothL1Loss module and functional
57eab22c6a : Use c10::variant-based enums for F::grid_sample
9f879ef532 : Make all non-input arguments to functionals part of its options (#29404)
c3b2c2e353 : Design doc for distributed autograd. (#29175)
b0c245d52d : Consolidate the places that find pybind11 include dirs (#29659)
fd8f74e688 : Remove observer module after insert_quant_dequant (#29622)
fbe90b65fa : Cleanup special handling of Containers, allowing custom forwards (#28988)
3175f5543a : Make nn.Sequential iterable (#28987)
eeb7199ccc : updated name_inference doc for cumsum and cumprod (#29453)
9bb0e2834d : Fixing data type in quantized pool benchmarking
82913a266d : Skip copy_same_type_transpose_ for quantized tensor (#29609)
3b43cfde80 : Benchmarking per channel quantization
5db361bd32 : Quantized interpolation benchmarks
9c9c361f67 : Separate out pytorch_jni into pytorch_jni_jit and pytorch_jni_common. (#29617)
f95e8ea1be : Benchmarking quantized methods (#29625)
f111f1b1a7 : Suppress implicit int-float conversion warning in ROCm build (#29604)
949d6ae184 : Fix jit tracing namedtuple (#29477)
450949c7fe : Complex support on GPU for dynamic casting (#29612)
7073ee2090 : Enable test_distributed for ROCm but only with nccl backend
3b452ca428 : quantized topk benchmarking
a0d4d5062b : Quantized unary ops benchmarking (mostly template)
e651494d47 : Updating submodules
2fb4059652 : change `drop_on_export` warning category (#29610)
bbff06ee96 : Convert conv_prepack to conv2d_prepack and conv_unpack to conv2d_unpack (#29529)
2acca09e1a : Add Support for ONNX scripting Interpolate with missing shape (#29489)
8db06732bf : Updating submodules
0c9e672727 : Apply the latest master docker images(jni.h in every image) (#29588)
8b53515b8a : Add ONNX Export Support for torch.scalar_tensor (#28713)
5249c43d93 : Disable android gradle jobs (#29606)
fb07098e2b : Creating a base benchmarking class for activations.
7df854bddd : explicitly provide memory format when calling to clone() at prune.py (#29593)
bf61405ed6 : explicitly provide memory format when calling to *_like operators
8df602400b : explicitly provide memory format when calling to *_like operators
858b2010ae : Updating submodules
1bb5209f7e : Back out "Revert D18299298: [pytorch][PR] Migrate conv3d from TH to ATen (CPU)" (#29286)
ddeeb561c3 : Revoking mutually exclusive requirement on channels last and contiguous tensor (#28466)
70f886ffa4 : Revert D18253777: Remove observer module after insert_quant_dequant
af3468a1c7 : change op bench input shape to reduce execution time (#29616)
7374dd0d52 : remove SkipInputShape flag (#29615)
fdcb203e8e : Identify weights and bias by argument position in aten call (#29147)
587996ef04 : Remove observer module after insert_quant_dequant (#28985)
81116fd7cd : Updating submodules
a9308f9d8b : py2 fix
b5a38fa98e : update op bench readme (#29596)
a09197561e : correctly share types between traced modules (#29583)
1a9e5dad81 : Improve `ConcreteModuleType::dump()` (#29582)
00c224f0f2 : move quantized tests from benchmark_all_test to benchmark_all_quantized_test (#29590)
137eea5938 : change module_name in chunk_test (#29589)
6104f4e37c : reduce input shapes for matmul (#29587)
0e5299a441 : fix list_ops and list_tests (#29586)
85752df4a1 : reduce conv_test input shapes (#29580)
01ad2bc5da : Improving BinaryOpsKernel.cu (#29428)
6bfa7c0471 : FakeQuantize benchmarking
627f2823e0 : remove _register_* bindings from python (#29499)
4e4e29a511 : Simplify ScriptModule bindings. (#29432)
5b702ab52b : switching to a simple/full executor
cedca377bd : Re-enable TestNamedTensor.test_big_tensor_repr (#29407)
b3b8f522e8 : Disabling 'contig' in quantized arithmetic test
5b43becfc5 : per-tensor quantize/dequantize benchmarking
c49b324cbf : Enable test_stress_light_rpc in rpc_test.py
bb90c18791 : Enable test_py_rref_args_user_share in rpc_test.py
b885eff4be : Enable test_multi_py_udf_remote in rpc_test.py
bc4457f5b6 : Enable test_py_built_in in rpc_test.py
93b5c9d723 : Allow to create local RRef with value (#28948)
17b0ab4727 : Add python API for get_gradients() method. (#28926)
9276cd449d : qadaptive_avgpool2d benchmarking
b0cf43b2dd : Simple distributed optimizer (#29304)
604fc9ec41 : F::embedding, F::embedding_bag, moved Embedding and EmbeddingBag options to embedding.h in options
65f3b98c35 : explicitly provide memory format when calling to clone() at ProcessGroupGloo.cpp
310343e946 : Properly shutdown RPC even in the case of `clean_shutdown=False`. (#29148)
e01fc56ecb : move type inference for arange into c++ (#27629)
9de0b63554 : Updating submodules
72eff0021e : Declare Dimname's kWildcard as extern instead of static (#29384)
344e7c26c4 : Delete USE_CUDA macro use from data_parallel.h (#29483)
b141754b7f : Give a better error message when people accidentally use unsupported devices (#29409)
bb119d957e : Move torch.cuda's atfork handler into C++ (#29101)
be757957ba : Support softmax with D == 0 (#29167)
23483406aa : Fix missing space in lr_scheduler warning msg
3e5af22650 : Disable flaky RPC tests (#29485)
f6f428b675 : Make smoke tests depend on s3 html update, to avoid race condition. (#29481)
46c4ae5719 : Fix BC CI (#29533)
466ab93ef5 : Revert D18286473: Use NNPACK for strided convolutions.
2032482eb9 : Use handle pool to manage cuparse handles (#29426)
5c9eae075f : qavgpool benchmarking
958d0cd4df : Adding short tests
5ba9209755 : Use NNPACK for strided convolutions. (#29084)
cc6af45944 : Fix writeable strings warnings.
86fee25d99 : nll_loss (cpu): Simplify index checking: rely on exception propagation in parallel_for (#29454)
c7ed89cf65 : Migrate `nll_loss2d` from TH to ATen (CPU) (#28304)
a47fe40729 : qpool benchmarking
aa658a2a68 : Adding inplace quantized relu6
4874120804 : Added all binary arithmetic tests in QFunctional
687ea7460a : quantized comparators benchmarking
fb2eb01955 : qadd benchmarking
f5074ccafe : set the no_deadline for the adaptive_avg_pool_nhwc test (#29502)
6c020673c9 : Migrate acos from TH to ATen (CUDA) (#29323)
ebfe846ad2 : Clean up many unused declaration/definitions in TH
4606deb2be : Migrate frac from TH to ATen (CUDA) (#28953)
d00579da93 : Revert D18399922: Switch XLA to only override abstract functions
cb74ede59e : Pass F::*FuncOptions instead of torch::nn::*Options to functionals, and make F::*FuncOptions a different class when necessary (#29364)
5c29160c7c : Switch XLA to only override abstract functions (#29438)
f31d6c70fe : reduce op bench binary size (#29496)
8e8a5e0664 : Pruning Functionality (#24076)
3657df3836 : don't set DEBUG=1 in py3.6-gcc5.4 CI build (#29491)
91e1f07967 : Check for unrolled loop in break & continue (#29474)
4da3ac91b7 : Add functional overloads for fold, linear, loss, normalization, padding (#29360)
e80f7506c2 : In torch::save(), make padding computation faster. (#29425)
675a4cb9fb : Extracted quantize/dequantize out of linear.
eae4a69069 : Add quantized fbgemm headers to torch target (#29418)
c1140f20dc : Rename PyTorch JNI library to pytorch_jni (#29412)
0cfa4965a2 : Clean up pytorch_android_torchvision test (#29455)
abf55eb3a8 : Pickler: convert std::stringstream cases. (#29351)
92b9de1428 : Test application for profiling, CMake params for debug symbols (#28406)
52456b2eba : add `hasattr()` (#29332)
7a63728d5f : kill pytorch_linux_xenial_cuda9_cudnn7_py2
98bb1d1f03 : remove non-onnx caffe2 builds (#29478)
991c2ac383 : Disables flaky test_rand_quantization (#29463)
3ab44c48d1 : Add functional overloads for pixelshuffle, pooling, upsampling, vision (#29359)
5b1a1a17ed : remove FunctionType as an allowed constant (#29405)
a4b872b65e : Inline graph before writing the bytecode file. (#29421)
f362ae1f72 : Updating submodules
2e5fc034fb : Quantized concat benchmarking
3bc014ecf2 : Implementation of cosine learning rate training policy (#29440)
edcf659e42 : Remove default values from functional overloads for activation, batchnorm, distance, embedding
2cd4f86422 : Support process_group_agent "sending to itself" (#29253)
64a66e8320 : fixed random generation export (#29354)
5e1983f90f : Fix distributed autograd initialization. (#29069)
36b73d5a1b : Hipify contrib/nccl (#29385)
740c9da267 : explicitly provide memory format when calling to clone() at SparseTensorUtils.h
c69c243d88 : explicitly provide memory format when calling to clone() at spectral_norm.py
587ec3f55f : Decouple JIT and autograd codes (#28900)
ab47465384 : Remove SchemaRegistrationHandleRAII. (#29379)
f441bb1c20 : check error status of CUDA launch after Magma kernels (#29003)
4e21157e01 : Revert "Revert D18171156: Merge Tensor and Variable." (#29299)
b24b967e00 : Add functional overloads for activation, batchnorm, distance, embedding (#29358)
63675b1969 : Revert RRef.to_here()/local_value() return type (#29396)
d75222f3f5 : Dump operator names of a module and its submodules.
b7fc26a9ef : Clean up the stale item in bc white list (#29439)
255b2340fc : don't copy ignored/unused methods to ScriptModule (#29342)
5f03ad9698 : Add note to docs of torch.unique (#29165)
baef925d5d : Skips CUDA handle tests on Python2 (#29430)
4bcf4796aa : Make HistogramObserver scriptable with `@torch.jit.ignore` (#27950)
19d3a7ad02 : fix negative string indexing (#22700)
e66626ae5c : Lift rpc_timeout to RpcAgent, for other RpcAgents to reuse. (#29341)
7da11f4967 : Export weight_norm (#28618)
782e80e6e7 : Make jit.trace_module reentrant (#29411)
90f28c2756 : enable fast path for TensorIterator for contiguous inputs/no broadcast (#29180)
8a33f1150d : Use nativeloader instead of system loader to load JNI library for soloader compatibility. (#29350)
fa66a1498e : Simplify _calculate_fan_in_and_fan_out (#29370)
de9a54466d : clone should preserve the type of attribute (#29269)
5a44107146 : fix pytorch mobile build (#29414)
0be2f12ef9 : Updating submodules
821f8bfc2f : Fix tracing for dynamic quantized LSTM (#29331)
5bb35fe923 : Updating submodules
1dd3c8e539 : Skip flaky test
02921e7985 : Use cuDNN's handle pool mechanism to manage cublas handles (#29233)
b008b34bd8 : explicitly provide memory format when calling to clone() at SparseTensor.cpp
09822a1d62 : explicitly provide memory format when calling to clone() at SparseTensorMath.cpp
564384fe12 : Automatic update of fbcode/onnx to fea8568cac61a482ed208748fdc0e1a8e47f62f5 (#29363)
255505f232 : Updating submodules
d5d524dadb : explicitly provide memory format when calling to clone() at TensorShape.cpp
fdab1cf0d4 : NHWC support in cuDNN BatchNorm & Conv2d (#29361)
0aba5ba13c : Add unsafeRemoveAttr and unsafeRemoveSlot to ivalue::Object (#29048)
abbe6347ff : CPU-strided-complex support for ComplexFloat (#29294)
86c64440c9 : Make PyTorch Python 3.8 compatible (#29302)
ca20b569be : Move unboxed dispatch decision into dispatcher (#29200)
43d4d019c4 : explicitly provide memory format when calling to clone() at rprop.py
2704af0970 : AsyncIf op implementation
b14c5943d4 : Handle warning in torchscript (#27154)
0ff1696c75 : add pybind version of HANDLE_TH_ERRORS
9b875e1256 : Buffer python warning to avoid deadlocks
cb3232fdb9 : Fix clang-tidy errors in csrc/Module.cpp
528a0cfb96 : Allow setting tolerances in testing math functions. (#29297)
b05e9d4521 : explicitly provide memory format when calling to clone() at lbfgs.py
5d70b11d36 : Fix the issue when NHWC Tensor has height or width larger than max cuda grid (#28931)
4926a51010 : explicitly provide memory format when calling to clone() at parameter.py
8498a1555b : Add some non-contiguous tests
9dcf5191d5 : explicitly provide memory format when calling to clone() at batchnorm.py
75309b45f3 : explicitly provide memory format when calling to clone() at Indexing.cpp
78a34d3205 : Revert D18350353: dump operator names of a module and its sub-modules.
58005382c8 : fix @property (#28395)
796363147f : Implement more of the nn.Module API (#28828)
509d9630ca : Disabling ONNX IR v4 semantics for opset 8 or lower. (#28990)
4515edfe15 : Disable QNNPACK tests on MacOS (#29328)
84a6583ba1 : Revert D18359880: Fix tracing for dynamic quantized LSTM
dc7552f9ca : dump operator names of a module and its sub-modules. (#29208)
6572d0d174 : add a new flag to select machine for op benchmark (#29349)
fff4f16e45 : Clean up file opening for serialization (#29221)
ae12630508 : getFuncName takes func_value as argument (#29146)
9a9bb448ee : Revert cudnn changes #23861 (#29329)
f17e02fd94 : Fix tracing for dynamic quantized LSTM (#29331)
6c4fd602ff : Revert D18350224: Fixed export for random
309b28ee3a : Trace module calls
0f4b226afb : API for finding a common ancestor block for a pair of nodes
adb7df7117 : Consistently use TORCH_CUDA_API for all files that live in cuda targets. (#29158)
a5d356cb39 : Delete THP_CORE macro; partially replace with THP_BUILD_MAIN_LIB (#29143)
f227530c88 : Clean up named tensor `propagate_names` API (#29239)
364e525f55 : Fixed export for random (#28470)
8ed84a9123 : skip broken custom op test (#29334)
7d01d5efd7 : update op bench readme (#29289)
e51d937e91 : move cuda abs to Aten (#25857)
74b2d9ed2e : Skips test_equiv_recurrent (#29255)
cc457ca30f : split remaining "easy" tests (#29249)
f93a6e54b9 : Add removeAttribute to `ClassType` (#28984)
7069eee227 : update gloo submodule (#29248)
eb46d64740 : Remove CollisionChecker from typeids (#29242)
ab855d06fb : Print aars content detailed size info (#28438)
9c43b16df9 : Revert D18171156: Merge Tensor and Variable.
6a4b51aec1 : Add the intra-op parallelism for equal operator (#28810)
9ae6fd2599 : explicitly provide memory format when calling to clone() at TensorFactories.cpp
e4c4ff079c : group quantized op benchmarks into a new binary (#29288)
114e7382b6 : skip cuda test if not on GPU machines
e86450620d : add cuda to all op benchmark (#29285)
27115612ab : add execution mode to the test name (#29284)
50fa132bd1 : explicitly provide memory format when calling to clone() at SortingKthValue.cu
af45801f0d : explicitly provide memory format when calling to clone() at SpectralOps.cu
d05da7dad3 : Fix virtualenv builds on Windows (#29273)
4e53f3bcfe : explicitly provide memory format when calling to clone() at Unique.cu
5e0cf05585 : explicitly provide memory format when calling to clone() at TensorTransformations.cpp
abe05a16ac : Revert D18195868: Implementation of cosine learning rate training policy
689599d07d : explicitly provide memory format when calling to clone() at LinearAlgebra.cpp
81bf73643b : Autogenerated contiguous memory format for old *_like calls
cc1c0120bc : Autogenerated contiguous memory format for old *_like calls
e3e06549c1 : Autogenerated contiguous memory format for old *_like calls
47f94d5393 : Autogenerated contiguous memory format for old *_like calls
aeae0d8403 : Autogenerated contiguous memory format for old *_like calls
d410fc5a81 : Autogenerated contiguous memory format for old *_like calls
a248ef7b9c : fix autograd support for torch.mean(tensor, dimname) (#29199)
ff9d508b88 : Remove tools/setup_helpers/cuda.py. (#28617)
bc91e19861 : Enable ONNX constant folding for opset 11. (#29011)
ee21142e40 : Move custom passes to last optimization step (#29256)
6ea4219d20 : Temporarily disable qnnpack tests on MACOS (#29176)
ee8d5e5249 : Implementation of cosine learning rate training policy (#29017)
d545e4f155 : qrelu benchmarking
13f53d0fea : Updating submodules
6e38c3b89e : Make get_trace_graph private
2f2a0d1607 : Disables test_atomic_ops and testInputOrder (#29145)
30f88bb05a : Fix the TestApp (#29247)
003cb8595b : skip more flaky rpc tests (#29157)
35f8b450fc : explicitly provide memory format when calling to clone() at SobolEngineOps.cpp
9232143d6a : explicitly provide memory format when calling to clone() at Sorting.cpp
6389c18709 : C++ parity, nn::CrossMapLRN2d (#29039)
492764b18f : Enable the intra-op parallelism for layer norm (#28464)
a5aeb37493 : Don't throw when type is used in TorchScript (#28053)
ac027d30d5 : Half test time, test_asymmetric_load_with_join, to avoid flakiness (#29139)
ebf5dd447e : Cocoapods 1.3.1 release (#29240)
8a2dcff189 : Add cuda version for operators BatchSparseToDense and BatchDenseToSparse (#29166)
fd4f22e4ea : Generalized LU factorization (#28608)
9492994feb : submodule swapping via module interface (#28409)
f1c78492f8 : Revert D18299298: Migrate conv3d from TH to ATen (CPU)
eb4189089a : README (#28533)
26f57cbe5e : Revert D18309297: CPU-strided-complex support for ComplexFloat
25e261d6d5 : assertEquals is deprecated, use assertEqual instead
c99cdfeb7d : link to documentation for RNNBase.flatten_parameters() (#29196)
f32ab6157b : CPU-strided-complex support for ComplexFloat (#29133)
21d11e0b64 : FindCUDA: Use find_program instead of find_path to find nvcc (#29160)
a02681f804 : Cleaned up func, removed unused variable (#29179)
7434da2c3f : value assigned but never used in _recursive.py (#29181)
c6d908d491 : Support Conv+BatchNorm fusion for 1d/3d (#29113)
546ae3002d : Migrate conv3d from TH to ATen (CPU) (#29007)
f2a35db2d3 : batch_norm_cpu_inference for channel last (#28982)
cb6d9deec6 : support for cdist (#29129)
3233a058fa : Add TensorNames::checkUnique, operator<< (TensorName) (#29124)
2c3c702d29 : Fix poisson_nll_loss with full option (#28637)
49fba35208 : Run clang-format for torch/distributed/rpc
6c3915643b : Rename PythonUDF{Call,Resp} (#27530)
b4df413712 : Scope pybind11 functions to torch.distributed.{autograd,rpc}
69f845cb77 : C++ API parity: MarginRankingLoss
0d056e75e9 : Updating submodules
ca7d0803e9 : use fbgemm's 3d group conv fast path (#29085)
9e314f557f : Fix for torch.save not saving source files (#28965)
026fd36c71 : Use at::kLong for torch::tensor(integer_value) when dtype is not specified (#29066)
1189f559cc : Creating new layer FCWithBootstrap used in bootstrapping uncertainty approach (#29152)
56f7415795 : L0 norm approx with budget (#29155)
64cbea0fbb : Updating submodules
974702fba0 : Removing quantization from the dispatcher. Changing the message.
0d9dc469cc : Introduce math_compat.h for older Android versions (#28567)
cb72c9f5b1 : Make caffe2/fb folder compatible with AMD (#29131)
02e34919ae : Bring back the stack #28426 with Windows build fixed (#28843)
df22e4c157 : Remove Unicode characters from header, fixing lint. (#29126)
379f3ae3ea : Double fetch depth. (#29030)
25261a4776 : Merge Tensor and Variable. (#28620)
215ac1065a : Print which output didn't have dependence. (#29047)
150357c887 : Updating submodules
fd0f9811ad : add timeout for RPC futures, and ability to set timeout when initializing rpc (#28392)
60cb56d128 : Refactor iterables (#29138)
7560b8c5a7 : Modify ONNX constant folding test point in test_utility_funs.py for clarity (#28861)
7102aceaf8 : Default to not build Caffe2 operators on Windows. (#29061)
044ff91950 : reduce predefined_min_secs for execution time (#29142)
20e8634999 : pass more arguments to Int8ConvPackWeight op in unit tests (#29086)
7fb2ccaed8 : Update type definitions for nn.Identity (#29135)
e01324d058 : Port l1_loss to Aten (#26795)
ebc216a076 : Opset 11 updates (#28225)
669662cd2f : Updating submodules
7190789f58 : Handling of failing and terminal async cpu ops (#29052)
19ac5929e2 : Remove definitions of acosh and asinh from TH (#28696)
24d43750ee : Updating submodules
69b1d71427 : Fix GELU module docs (#29112)
00a561a23a : Fix build error caused by recent commits. (#29056)
93acd1998f : Revert D18249048: Moved VonMises distribution with sampling upstream from Pyro.
0a4433750e : Updating submodules
fdeef45852 : Add Support For Module Containers as Iterables (#28255)
8160f390cf : (#23861)
70f3f23e3a : (#29016)
0f97e08a36 : Moved VonMises distribution with sampling upstream from Pyro. (#17168)
7ff39d2942 : LayerNorm: Handling if batch size is zero (#28614)
23695ab23f : Moving python allgather_coalesced impl from Py to C. (#29059)
a1386bd950 : Fix smoketests by running them with postnightly job. (#28994)
0fbce15828 : Retry conda installation on OS X. (#28979)
a90389f20e : Port cuda sigmoid to Aten(CUDA) (#26643)
bbada862dc : Revert D18298225: Update modules/__init__.pyi.in to include Identity
a0dc060682 : Update modules/__init__.pyi.in to include Identity (#29114)
2460dced8f : Add torch.nn.GELU for GELU activation (#28944)
3bffb730b6 : Add note about when to install typing package (#29103)
e95dc9814e : introduce module interface declaration (#28408)
1e904049ca : guard against inheritance on torchscript classes (#28407)
73d77626b8 : Check device connection before running xcodebuild (#28996)
0c5e738cf7 : Updating submodules
496d23224f : Updating submodules
1345dabb1d : Only set CCACHE_WRAPPER_PATH in the build scripts if it is not already passed in.
e8e7d93293 : Additional autograd unit tests for Python UDFs. (#29041)
a68c1e109e : C++ API: torch::nn::BatchNorm{2,3}d (#28936)
23193c155f : Quantized Tensor support copy (#28612)
41e42c34d6 : Revert D17989951: Move unboxed dispatch decision into dispatcher
cddda17394 : ParallelWorkersTest.testParallelWorkersInitFun is flaky (#29045)
314066bd74 : Making torch/csrc/cuda nccl usage safe for nccl 2.5 (#29014)
d8d7af0811 : Fix CUDA shared memory out of bound access in findPattern (#28989)
bace0c8d7a : remove a redundant move preventing a copy elision
b693c5d6a0 : replace add benchmark with add_ (#29050)
1e2049c566 : #26426 fixed (#28715)
4a94eaa60b : C++ API parity: PoissonNLLLoss
7ea83120df : Fixing the shape calculation for pool tests
5ac3df7712 : Minor fix and turn off fold_convbn (#27403)
d690521cf6 : Add e2e test for conv+bn (#27348)
9041e29d94 : Revert D18209289: Moving python allgather_coalesced impl from Py to C
dbbb2fc9e5 : Remove the linkage to CUDA libraries when ROCM is used. (#29009)
a49a656264 : Updating submodules
71be5fe54e : add support for {ones,zeros,full,rand,randn}_like ops (#28981)
0a101bf8d5 : Improve name inference API by introducing a TensorName helper struct (#28904)
d0204ea92a : Remove dead includes in caffe2/binaries
bbea34f283 : Revert D18266918: C++ API: torch::nn::BatchNorm{2,3}d
88a34ef690 : Move unboxed dispatch decision into dispatcher (#28251)
22a346ee34 : Moving python allgather_coalesced impl from Py to C
b7c5b3d398 : C++ API: torch::nn::BatchNorm{2,3}d (#28936)
c447941bda : Migrate conv2d from TH to ATen (CPU) (#28793)
31c932d9ab : fixed replicate typo in torch/nn/parallel/__init__.pyi (#29005)
a5d65d1f8f : Fix embedding renormalization on cpu (#28546)
7776d5bfe9 : Update parallel_for/reduce doc (#28545)
dd288d3b21 : support addcmul, addcdiv (#28975)
08860721ad : Revert D18195584: Additional autograd unit tests for Python UDFs.
72b9bda9e5 : Smooth L1 loss (#27661)
1c8ef29ac5 : Remove copy-pasted code in THCTensorTopK.cuh (#28995)
cd3ed4db76 : Update README.md (#28971)
aa30176c68 : Add C++ API clip_grad_value_ for nn:utils (#28736)
8a1f42b81e : Speed up threshold on CPU. (#27155)
d3cd64d71d : PyRRef.owner() to return WorkerInfo (#28909)
59c5de4d0e : Don't permute in quantized::conv2d pattern (#27347)
ba6defeb07 : Revert D18254898: Revert D18202646: [pytorch][PR] Use aten's GRAIN_SIZE for TH Tensor ops
3bba751cd6 : Additional autograd unit tests for Python UDFs. (#28824)
579ffb647d : Add HashStore to c10d (#28921)
4654795d13 : Revert D18202646: Use aten's GRAIN_SIZE for TH Tensor ops
f63cbf3ae2 : change op benchmark forward_only flag (#28967)
fcd6a8252c : add shapes for fill benchmark (#28966)
9034762a7d : add more operators to benchmark_all_test (#28968)
4bfe2f0900 : Fix jit outplace tracing and reapply changes to *_like operators. (#28839)
0e441dd386 : flip the "don't inline" switch (#26706)
595209bddc : Fix bugs in torch::tensor constructor (#28523)
c8771f5a44 : Port mse_lose to ATen (#26529)
42faf961c8 : Update fbjni submodule to new upstream and latest version
80b46ca35a : Null AutogradMeta optimization (#28610)
85e72edf3e : Delete dead TensorImpl::detach_autograd_meta (#28609)
b52ceec80b : Remove unused gradient_edge argument from make_variable_view (#28602)
335bfa24e0 : Add an AutogradMeta factory. (#28593)
18f2efa997 : Unfriend Variable factory functions. (#28601)
9643f066cf : Move all autograd_meta_ manipulating operations out-of-line. (#28592)
a844809a2c : Test TensorTypeSet instead of autograd_meta_ for variable-ness. (#28543)
38388b9b3c : Updating submodules
00bd9eae33 : Fix typo in `Dataset` and `IterableDataset` docs (#28960)
b1bf595e54 : Update generated test model
c60bf2704a : Support Offline Tensors through ONNXIFI layer
05e88dc4fe : skip additional flaky rpc tests (#28934)
275adb143e : fix printing a node header (a kind wasn't being printed)
fca99e96e8 : Move cuda functions to cuda folder.
c63e15aef8 : Revert D18241759:
1dcf1b8938 : Update pinverse doc for recent commit
fe8804695b : Use aten's GRAIN_SIZE for TH Tensor ops (#28770)
9630b78c49 : Pow() : Use lightweight operations for predefined scalar exponent values (#28903)
1b1e3d565c : (#28927)
9fb0079036 : merge some of the lint checks (#28933)
d071ca2972 : Improve reshape backward when the op is a view (#28901)
47301a153b : Eliminate unnecessary Tensor ref count bumps.
64c7ac233e : Disable flaky remote tests in dist_autograd_test
fd5c68b5e4 : Revert D18231741: Enable PyTorch Probot as a GitHub Action.
5e94e66c6f : unify unary ops benchmark (#28913)
2ffc4cca67 : unify split benchmark (#28912)
94d2599d77 : unify softmax benchmark (#28911)
ed4a978d79 : unify pool benchmark (#28898)
f5e99b3249 : unify matmul benchmark (#28899)
28be2d4994 : Better error message for quantized dispatch (#28635)
6e1c18303b : unify linear benchmark (#28897)
a7b235f968 : unify gather benchmark (#28895)
6e4147c72c : unify conv benchmark (#28894)
dbf8f535fc : unify chunk benchmark (#28892)
15deee25bc : Fix aten::format regex for clang8 (#28916)
88b2bfd706 : unify cat benchmark (#28893)
aa30b37d2e : unify batchnorm benchmark (#28889)
740474838f : unify as_strided benchmark (#28890)
db15c2ba20 : unify add benchmark format (#28891)
d6f1e49c4a : C++ API parity: CTCLoss
2466dc8544 : Migrate nll_loss from TH to ATen (CPU) (#28270)
732a3d8f8c : Fix `UNICODE` conflict on Windows (#28782)
e3a24ba6d5 : Enable PyTorch Probot as a GitHub Action. (#28879)
f5edb62a7f : Clean extending autograd doc for output size 1 (#28860)
5821b9bf0f : Remove error logging of high empty range ratio
1d3d9ec7d4 : C++ API Parity: `functional::fold` and `Fold::pretty_print` (#28732)
807fbf8816 : Disable flaky tests in dist_autograd_test
a465b033fd : Local response norm (#28759)
3073785f4c : Fix when giving jit format warning about unsupported options (#28616)
50fd20b64a : fix bug on setup.py to include header files on caffe2/utils/math (#28869)
331e09eca4 : Make FileStore not segfault with concurrent accesses. (#28812)
e0009fdeb1 : Migrate `sinh` and `sinh_` from the TH to Aten (CUDA) (#28527)
a7166ae448 : Migrate `asin` and `asin_` from the TH to Aten (CUDA) (#28482)
d0bd8a3afc : Migrate `sin` and `sin_` from the TH to Aten (CUDA) (#28237)
2526f97464 : Include hierarchy information in C++ API loading error messages (#28499)
726f0ce946 : Increase verbosity of Hypothesis on CI. (#28799)
496f740824 : Connect with clip range gather operator (#28866)
eb00af37bd : insert_prepack_unpack for conv (#27346)
790563b374 : Add OfflineTensor (#28855)
a8b63cacbc : Updating submodules
043530a9b9 : Support remote for Python UDF in distributed autograd
400293fcc6 : Support remote for builtin operators in distributed autograd (#28630)
ec81cd55fc : Migrate implementations of triu and tril to a separate file (#28750)
1c436ded44 : Remove `test_quantizer.py` and reuse one of its test in `test_quantization.py` (#27269)
dfe7b25eaf : Add nn::Flatten to C++ Frontend (#28072)
57c9b1cefc : Enabling inplace relu
cbc234bceb : C++ API: torch::nn::BatchNorm1d (#28176)
8f1564b8ab : Add enum type to rpc registry for consolidating RPC initialization code path (#28628)
b1ea19ca17 : Update the misleading comments for zero_points and scale in dynamic quant linear module [1/2] (#28767)
4e56455b09 : whitelist autogradanynonzero (#28852)
f1f86994bc : Fix implementation of F::kl_div / F::mse_loss / F::binary_cross_entropy
d201ff8925 : Factor out insertPrepackUnpackForLinear (#27239)
80e270a76c : Add support for host build to pytorch_android native code (#27664)
34455c68b5 : Remove unnecessary BUILD_DIR variable in Android CMake build (#27663)
c9423c30b3 : Add host build for pytorch_android (#27662)
793e2914e4 : Support full id interations (#28769)
aa949b12b3 : InsertObservers (#27238)
4045d6c3fa : Revert D18187208: Add OfflineTensor
e33b4b6761 : Use c10::variant-based enums for Reduction
d8c368bd62 : CPU-strided-complex support for compare and pointwise ops (#28735)
22d70bc1ec : Add OfflineTensor
6b5bfd4cfc : Make inserted child module names unique (#27237)
7e8c48bff5 : argmax for half datatype fix (#28787)
e57a119773 : Remove autograd copy_ specific isFloatingPoint (#28279)
83331bf123 : don't overspecify required python version (#28842)
b7d472a109 : Some fixes for jit overview doc (#28112)
efbaa8a563 : added a check for zero stride
607defa8a9 : print per block avg time when running on AI-PEP machines (#28838)
0a68e8bab0 : fix op bench runtime error when use_jit is enabled (#28837)
ac4c72db3b : add DNNLOWP static qparam choosing to pybind
f42768f8c0 : Add scripts to run cuda-memcheck (#28127)
4703854321 : change softmax input shape (#28836)
ef5a6b2262 : Avoid the misleading zero_point and scale [2/2] (#28827)
47faee2fae : Switching tests to ProfilingExecutor (rebased)
eb55104185 : Updating submodules
5fbec1f55d : Revert D18170996: Move type casting to c10/util/TypeCast.h
0301f5f30b : Revert D18170997: Make TensorIterator stop promoting types by copying
dff159804f : Revert D18170995: Simplify copy kernel
f6692146e7 : Add Conv3dInt8 (#28768)
295401f04c : Updating submodules
a0339c8d8f : `bootstrap.sh` refactor (#28809)
097da55249 : Fix BC check CI (#28816)
52dd587123 : C++ API parity: Upsample (#28413)
1e3e1f5bf9 : Fix build error in VariableTypeManual
c6ad68cf10 : Updating submodules
6f90567e0c : Add the unittest import for test_fake_quant.py (#28815)
949678bd9e : Small fixes for torchbind (#28800)
f63cf96c4d : Update C++ parity table for torch::nn::Linear (#28804)
5c5b2c68db : Simplify copy kernel (#28428)
b9f099ed93 : Make TensorIterator stop promoting types by copying (#28427)
688a9dbe3c : Move type casting to c10/util/TypeCast.h (#28426)
f33813d589 : Return NotImplemented from all binary math ops (#27423)
9e64c54c01 : Add the warning message for API with linear modules (#28766)
02d318461e : Temporarily disable test_numerical_consistency_per_channel due to failure (#28807)
9f890a9218 : make sure clang-tidy is diffing against the right thing (#28788)
5804e54c81 : Deprecate torch::nn::modules_ordered_dict API (#28774)
87c98acf5d : Back out "Add memory format support to `full_like` operator" (#28803)
d828fef8ac : Back out "Add memory format support to `ones_like` operator" (#28802)
266c1652e6 : Back out "Add memory format support to `rand_like` operator" (#28801)
648749b203 : C++ API: torch::nn::LPPool2d (#28492)
052046b18e : Enabling intra-op parallelism for dynamic quantized Linear operator (#28477)
9f44a04613 : separate PT and C2 to reduce build time (#28731)
0c7537c409 : Fix obviously-broken .clang-tidy files (#28547)
f5ea2ca34a : Reduce logging frequency for empty range tolerance
7ed9a3ec48 : Change reorder_dimensions behavior to favor output writing sequence (#28615)
82f31e02a3 : Remove the redundant calculation of derivative of power function (#28651)
4230132baf : Added docs for context method mixins. Fixes issue #27365 (#28643)
0e86c99bfb : Updating submodules
5835ad07cb : provide memory format as Contiguous explicitly when calling to clone() (#28029)
6eaea39867 : Kill _th_zero binding, just use a simple native function instead.
8e67b78d9b : Kill THTensor_(match), which isn't used.
1e5b2559ac : Write out some set_ overloads instead of relying on code binding generation.
45dab56153 : adding python all_gather coalesced functionality and testing. (#28634)
aea94de067 : Exclude more files in torch/csrc/distributed when USE_DISTRIBUTED=0 (#28621)
4cf7277d62 : Explain how to specify library location for MKL (#28779)
5da932ad72 : Return None correctly from `Tensor.names` (#28659)
c60dee271d : addmm: Fix handling of case with empty tensor (#28613)
c89340f068 : Extend HasElements to support multiple inputs (#28717)
7df3366f8d : Eliminate some unnecessary tensor ref count bumps.
f43194ed9e : Move mode_t declaration in PadOptions (#28760)
d5afd97569 : Refactor qconv_prepack and qconv_unpack to support conv3d (#28481)
764e0ee882 : Improve `Tensor` type hints (#28578)
440b192078 : Type hints: Return `Iterator` instead of `Iterable` from `__iter__` (#27445)
f782500ee0 : Abstract tracer::enter and tracer::exit into a function
7ff272c6da : Back out D17980308-D17980313 (#28748)
e96ea288a8 : Automation scripts for perf testing (#28622)
dbf1996f79 : Support MultiheadedAttention module (#28555)
e886450863 : report p50 time instead of avg (#28722)
60d606094c : Export Meshgrid (#26037)
0c48092b22 : Resets rnn _flat_weights on _apply (#28562)
0eeda56632 : Add nn.ReLU6 to default mapping (#28516)
2049e45999 : Kill zero_dim_tensor_only codegen, it's not used anymore. (#28514)
24f0bca8e2 : Remove zero_dim_tensors only from _th_masked_fill_. (#28513)
b0b852459e : Remove zero_dim_tensors only from _th_index_fill_. (#28512)
d37c2d7c8d : Revert D17495965: TensorRT 6.0 support and PyTorch->ONNX->TRT6 unit test
110a931752 : Change from HTTP to HTTPS
4996e3aca2 : TensorRT 6.0 support and PyTorch->ONNX->TRT6 unit test (#26426)
b19bbde561 : Migrate all the Caffe2 Centos builds to explicitly use devtoolset (#28465)
0253e23d3f : Remove unused USE_ROCM environment variable (#28641)
1322daa506 : Improve error handling for distributed autograd engine. (#27940)
dc17a2ecc5 : Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28433
3f119a5f52 : Port of multilabel_margin_loss from TH to ATen (CPU) [2nd try] (#28504)
68ab162099 : Don't clobber pytorch image with libtorch build. (#28581)
42423854f0 : add test to ensure that dist autograd contexts are cleaned up incase of nested rpcs (#28485)
aac3998c27 : msvc error C4805 fix (#28156)
e212543681 : Improve float pickling speed. (#28553)
9732c81da4 : Cleanup testing of _like operators
69b0e06a49 : Add memory format support to `randn_like` operator (#27890)
02917dd1f4 : Add memory format support to `randint_like` operator (#27889)
c258cd039a : Add memory format support to `zeros_like` operator (#27562)
04f5325583 : Add memory format support to `rand_like` operator (#27561)
2c339a24ec : Add memory format support to `ones_like` operator (#27270)
85d5aee863 : Add memory format support to `full_like` operator (#27262)
baf8488dbd : Add memory format support to `empty_like` operator (#27244)
bfbb3e0579 : Kill _th_fill binding, which isn't used anymore. (#28511)
c6628b29a7 : unfold: turn off device_guard (#28510)
7ab0a28b21 : Port TH/THC implementation of unfold to ATen.
2793d41a9c : Fix scalar handling of unfold. (#28462)
1a5d32d894 : Updating submodules
2181dd516e : fix handling of function attributes. (#28569)
01aea1f268 : Delete ATenDispatch (#28468)
ed503596ce : Remove c10->ATen registration forwarding (#28186)
d04973beda : Use c10::variant-based enums for EmbeddingBag mode (#28330)
60a1efe138 : Eliminate some unnecessary tensor refcount bumps.
4182c1183b : Add custom op documentation
1dfb8752a6 : Define std::strtoll for older Android (#28603)
da6b8a905a : Use c10::to_string in more places (#28605)
df81cb22b8 : Delete move constructor from TaggedStringStream (#28604)
52e0a94661 : Fix spelling in some comments
261a13a84b : Enable dist autograd tests (#28606)
70e4548fd7 : Compute correct strides after type promotion (#28253)
e885ce6130 : C++ parity, grid_sample functional (#28354)
92b39434a2 : C++ nn::ConstantPad{1,2,3}d
5cf644157c : Speed up fill for half and bfloat16 on CPU. (#28397)
7f9941c4ea : C++ nn::ZeroPad2d
d762ad09df : Enable Interpolate Tests for ONNX Opset 11 (#28560)
a783563738 : Skip ProcessGroupNCCLTest if CUDA is not available (#28393)
46f96d1538 : C++ API parity: at::Tensor::requires_grad_
78039627ae : Minor followup on stringstream cleanups (#28300)
303527d733 : C++ nn::ReplicationPad{1,2,3}d
78375c02b8 : C++ nn::ReflectionPad1d and nn::ReflectionPad2d
e263dd3853 : (#24396)
2020cc0cd1 : Fix compute_non_overlapping_and_dense() (#28551)
8de8cab247 : Migrate remaining ops to the c10 dispatcher (#27978)
d8c66c1576 : autograd/profiler: make python record_function use JIT methods
f8b758b141 : CPU-Strided-Complex Support for reduce ops and linpack ops (#27653)
136bb07a93 : torch.histc added a finite range check to resolve segfaults if tensor has inf. also added checks for nan values, min>max (#27712)
ae05e48fe8 : Kill TH(C)Tensor_squeeze which isn't used anymore.
4f0a3504e1 : Port is_set_to from TH/THC to ATen.
139fec2d14 : remove type information from docstrings of quantization functions (#28556)
dd277e9086 : C++ API parity: Linear
59402f51cf : Make init_method url appending step re-usable by both init_process_group and init_model_parallel(init_rpc) (#28226)
e31adeb4f3 : Make RRef::LocalValue return Future (#28025)
58873776ff : Make RRef::toHere() return a jit::Future (#27943)
9c345473d8 : Updating submodules
61d40b80d3 : static initialization order with mutex (#28243)
8008322336 : workaround for raw string bug in VS2019 (#28349)
896b5d9113 : Scripts for setting up benchmark projects (#28469)
d83389d327 : Ignore F401 in all __init__.py without putting noqa (#25823)
76d262d4b7 : export group_norm (#27071)
d081de67cf : fix the document of kaiming initialization (#25638)
cbddc77ac5 : fix docs for lr (#28026)
bee4aca259 : is_set_to: unify TH/THC implementation and genericize test_is_set_to. (#28422)
09ad464d68 : Change activation modules in C++ from using Tensor& to Tensor (#28501)
1c53a74e26 : Fixed behavior of div_factor parameter in optim.lr_scheduler.OneCycleLR (#28217)
76c70559c9 : Updating submodules
657430e1f0 : Return 0-numel empty tensor from symeig when eigenvectors=False (#28338)
e4f40bf3b2 : Add multiplicative lr. (#27254)
d1d2358d31 : Correct math formatting for lr scheduler (#28467)
9d767db493 : remove extraneous type information from torch.matrix_rank documentation (#28479)
e80e42cb2c : Updating submodules
2f16284231 : change empty range tolerance logging
e9336b04fc : Update Dockerfile
e28e38e851 : Update C++ torch::nn parity table for LayerNorm (#28484)
e280f93e31 : Prepack folding for conv2d (#27119)
be3808d3b1 : Migrate `smooth_l1_loss` from the TH to Aten (CPU & CUDA) (#27962)
ee920b92c4 : Move complex extension test to c10 (#28208)
0f556b62e0 : Fix codegen for out operators (#28184)
b47d658d04 : Allow migrating factory methods to c10 (#28183)
005d6ea495 : Fix overload names (#28182)
a94bf1d326 : Add unsupported types to schema type parser (#28181)
b05d0fa671 : Updating submodules
4beaf1cf1c : add typing runtime dependency for py2
0d4009d777 : Fix avx for c++14 (#28207)
0c4878d550 : Update index.rst
d2eb08d17b : Fix tracing slice/select with dynamic inputs (#26549)
9705d60a2f : get rid of deprecated thread.isAlive() to use py2.6 modern form is_alive()
177c95e9bc : Migrate return type void to () for native functions. (#28290)
f94b6cef43 : Use FunctionSchema instead of char* for dispatch (#28295)
2cc0f1bbc6 : Run pytorch mobile benchmark in PEP (#28437)
5f1563296b : remove AutoNonVariableTypeMode from jit-op-registry (#28402)
d0d8b8c31c : change detach() & detach_() to no-op for USE_STATIC_DISPATCH mode (#28400)
04bfc213ab : remove AutoNonVariableTypeMode guard around forward() call (#28399)
38433e33a1 : Make static dispatch turn off variable before entering the kernel (#28398)
a5354adb08 : Eliminate the use of CUDA_HOME in setup.py. (#28373)
30712f6e30 : Move the CUDA implementation of sqrt to ATen. (#27372)
19aeb472aa : Move the CUDA implementation of log1p to ATen. (#26923)
4f70b5a4de : Export det (#26958)
456d9a0dbe : Enable Scatter/Gather ORT Test for opset 11 (#27876)
2a2cdc8aeb : Revert D18001407: Port of multilabel_margin_loss from TH to ATen (CPU)
636fbcdd0a : add benchmark code to iOS TestApp (#28405)
7b59174882 : torch::nn::LayerNorm
3fce612cb1 : preserve original tensoriterator behavior when not explicitly promoting (#28231)
6d689e27c7 : clean up NamedTuple creation API (#28189)
03d24dba6c : Fix static linking cuDNN without static CUDA (#28378)
682da8eb43 : Port of multilabel_margin_loss from TH to ATen (CPU) (#28205)
c1bb2676f3 : Update C++ torch::nn parity table (#28419)
30d6cf7bc1 : Updating submodules
5e73e1fff8 : Enabled torch.unique for bool tensors (#28374)
373e9096c2 : Revert D18012804: Use FunctionSchema instead of char* for dispatch
73c1030328 : Support logging tensorboard embedding visualizations to generic filesystem (#27716)
95650b152a : remove deprecated torch.Tensor in test_distributed.py
db298732c1 : remove deprecated torch.Tensor in test_c10d.py
079b3cc02c : Add C++ nn::functional pad
94757e035d : Do not insert observers for empty sequential modules (#28384)
d403410e0d : Fastlane update (#28356)
783c9c8445 : Adding docstring to the observers (#27791)
0ddb50010e : enable test_invalid_names test in rpc_test
d9bca33d2c : Use FunctionSchema instead of char* for dispatch (#28295)
6335d91c38 : Disable tsan for test_c10d multiprocess test cases. (#28385)
07a181da1d : Add more logging in net modifier
4e033b0040 : split TestLogging, TestDict, TestList
f36497e687 : split test_type_sharing
0a364108d2 : use base sha in clang-tidy instead of base ref (#28388)
06bb74ce96 : Tolerate small amount of embedding corruptions
70e9ef518f : c10::string_view (#26616)
9ea42f8d7c : C++ API: torch::nn::LPPool1d (#27800)
a3902c901a : Revert "Fix early expansion of CUDA_TOOLKIT_ROOT_DIR in libtorch builds (#27887)" (#28310)
ba59d720cd : Change error message for torch.linspace(). (#28274)
bc57967e07 : max_pool2d cuda should have channel last optimized kernels[Performance improvement] (#24872)
4d9c017dee : Fix the padding issue of quantized average pool operator (#28260)
d9b4788e5d : cleanup dist autograd context on other nodes when it is released on one node (#27951)
f6c0a89acc : Updating submodules
e8165f4b00 : Updating submodules
6301d62e0b : Updating submodules
15be189f0d : Add quantized torch mean implementation (#27675)
29f56eb920 : Revert D17937850: Tolerate small amount of embedding corruptions
56eb4f7daa : Add autograd hook for python rpc call (#28312)
6fcefc917e : Minor tweaks to rpc message api (#28326)
99271ad411 : Split out data_parallel tests from test_nn.py into a separate (#28297)
eb4bb00a9c : Use c10::variant-based enums for Nonlinearity and FanMode
a1e14a6626 : PixelShuffle module and functional (#28140)
ca6ba06f95 : Tolerate small amount of embedding corruptions
9cb003a94f : Add typing check of alpha for torch.sub and code clean up.
b4db590e3b : Fix type promotion of complex32 and complex32 (#27929)
0aa694ebe5 : Move Method::lowered_graph to a separate pass out of the Method class. (#28242)
c813503f05 : Update hyperlink syntax for XLA, torchaudio, torchtext, and C++ API (#28019)
af88537483 : Back out "Add autograd hook for python rpc call"
243298668c : Remove confusing torch::jit::RegisterOperators for custom ops (#28229)
d2eceee54b : Fix hub when branch name contains slash. (#27960)
109c467559 : Add generate-wrapper.py with its generated wrapper files. (#28285)
56c4215fcc : Add autograd hook for python rpc call (#27576)
46fefc98e2 : Change dper3 loss module to match dper2 (#28265)
bd6f9e1d6c : torch.nn.functional.gumbel_softmax #27078 (#28121)
3629974c1e : Fix quantized avg_pool2d test to support non-zero padding (#28246)
4b64ada531 : Fix typo (#28281)
3d745508eb : String optimizations related to serialization. (#28230)
ac61adb5ef : String opts related to deserialization. (#28263)
a1ac15081e : Implement lerp's derivative w.r.t. weight (#28219)
91a260cef9 : Adding MSELoss, KLDivLoss and BCELoss to C++ front-end (#27156)
9c41b61e3f : Disable blobs_queue_db_test in ROCm CI (#28268)
53d9456adf : Clean up the stale item in bc white list (#28269)
5c768ec380 : Minor: add static_assert to Pickler buffering. (#28114)
d7ff34c0f8 : In torch::save() avoid zip compressing small header records. (#28180)
5498a15d10 : Add tests for libtorch macOS binary (#25208)
2e7dd54796 : Fix RNN nonlinearity (#28058)
0b243e9c4c : Disable c10d test_sync_params_with_buffers on ROCm (#28190)
12dde7f58a : cdist performance improvement for euclidean distance (#25799)
7c1df06efa : default caffe2_tvm_min_ops to 10 (#28250)
07b5666a87 : Add default arg to `prepare_qat` mapping. (#28193)
7ebe8328e1 : Address review comments on #28011.
95922c90b5 : Export update for arange and _dim_arange (#26875)
a5ac7f6387 : Changing observer name
86e7e872bf : Port of multi_margin_loss from TH to ATen (CPU) (#28062)
618cb40e30 : Add doc copy-edits from review (#26322)
5c2bf8abe5 : change linear benchmark shapes (#28228)
21c3997974 : Disable schema inference for unboxedOnly kernels (#27977)
8fff54ec39 : Enables non-default CUDA stream in test_nn (#28192)
951dd03037 : Add memory format support to typecasting shortcuts `byte`,`char`,`double`,`bool`,`half`,`int`,`long`,`short`,`float`,`bfloat16` (#27228)
15df371934 : Add memory format support to typecasting shortcuts `byte`,`char`,`double`,`bool`,`half`,`int`,`long`,`short`,`float`,`bfloat16` (#27228)
c36552c4cb : Fixing dispatch error in windows debug builds (#24360)
e1be08fcf5 : out-variant for torch.batch_norm_elemt (#27621)
4e71be449e : Remove tools/setup_helpers/nvtoolext.py (do not seem to be used) (#28125)
4cc368e3a6 : Declare the LAPACK and MAGMA dispatchers instead of defining them with a default error (#28133)
076b116a41 : In ProcessGroupAgent, use non-iostream torch::load()/save(). (#28063)
4a69d048e0 : Move the CUDA implementation of log2 to ATen. (#26769)
6923b93ebc : Revert D17972725: [pytorch][PR] Update onnx-tensorrt
bb0e46b65a : Remove preallocation of type ids (#28024)
58ed8ca9e1 : clean up exported source format (#28129)
aad5071206 : Use torch::variant for enums in C++ API
de0f9567a3 : Add quantized avg_pool2d for pytorch mobile (#27631)
62e281fbcf : Add CI builds (#27925)
19956b200d : Relax set_num_threads restriction in parallel native case (#27947)
2265cddbd2 : Cleanup torch::jit::script::Module API for accessing attributes/parameters/submodules. (#27260)
d083b443b4 : Fix LayerNorm Bug (#28196)
edc28676ef : Adds @overridePrecision decorator (#28131)
35a5df8c94 : Update onnx-tensorrt (#28158)
f279b68a48 : Update gloo (#28174)
86e93bde90 : Back out "Use FunctionSchema instead of char* for dispatch"
d9de2e0ba9 : Back out "Revert D17936166: [wip] Constexpr type ids" (#28155)
ff00e8c9eb : Fix pushLong() issue in pickler. (#28057)
aa6c394e39 : Use FunctionSchema instead of char* for dispatch
3214f134b6 : fix python rpc handler exit crash (#27251)
7d277b0670 : Multi Label Margin loss (#27659)
cbcb70f84c : print last 50 runs when using ai_pep_format (#28128)
97257e257e : clean up test_cat_empty (#28115)
cbb4c87d43 : Improve the doc and test of logical_xor (#28031)
3523e5427a : Add master to OSS RPC test (#27776)
174e1ba3b8 : Small fixes to improve TensorIterator overhead for the common case of inputs and outputs of the same type (#27457)
3ac4267763 : Force building with GCC 5 (#28098)
dc8785a022 : Refactoring names for consistency
9540f6c3fe : Soft Margin loss (#27660)
c67d3533a7 : Update C++ torch::nn parity table, and temporarily disable C++ API parity test (#28117)
735463f210 : ONNX Export Scripted Interpolate Op (#27566)
5136ed0e44 : Remove attempToRecoverType (#26767)
fb4517132f : Allow 'Any' to appear as a type argument. (#26572)
97b39a296f : Fix error report highlight for unmatched type annotation (#27195)
8cdc262063 : Add support for `@staticmethod` (#27163)
e3e54282cd : Updating submodules
2d2fe14a60 : Install CUDA for clang-tidy (#27967)
94c1ff4388 : Devirtualize allow_tensor_metadata_change() getter/setter. (#27667)
4f4c69b1de : Make set_grad_accumulator private (friend class SavedVariable) (#27666)
e1f58b7c4c : Make AutogradMeta a private struct in Variable. (#27654)
34522c212a : Add trailing underscore to member variable. (#27651)
f38beff800 : Add nn.Bilinear to C++ Frontend (#26082)
3ed9a6e2ab : Buffer in Pickler to improve performance. (#27720)
3d3bff5ff1 : Fix early expansion of CUDA_TOOLKIT_ROOT_DIR in libtorch builds (#27887)
4f1f084d22 : Make layer_norm dispatch from yaml file to fix XLA test (#28051)
5c153de26b : Nicer promotion error message when pr. (#27941)
1819fade35 : Revert D17936166: [wip] Constexpr type ids
054239dc0e : Updating submodules
08f4a244d3 : Eliminate unnecessary Tensor refcount bump.
2e0294cb39 : Make JIT Serialization support arbitrary std::function<> IO (#28039)
9cc4405dc9 : Constexpr type ids (#28023)
e9a91756cd : Back out "[pytorch][PR] Migrate soft_margin_loss from the TH to Aten (CUDA+CPU)"
ab50abca5c : Export masked_select and masked_scatter in opset 11 (#25949)
705958be5b : Update GCC for CentOS build (#28059)
d2c2501eb3 : Minor improvements in RPC api docs
e4f5224ebd : Revert D17935286: Update GCC for centos CI builds
59cd0faeff : Defer pg agent listener thread until contexts are initialized (#28013)
00a2b36188 : improve error handling in getNCCLVersion in NCCLUtils (#27883)
871b1419de : Test graceful termination of RPCAgent with asymmetric load (#27761)
7b06f958cf : Update GCC for centos CI builds (#28018)
cf43aa3e16 : add type refinements for isinstance checks (#27772)
5d26ba08b7 : Remove unnecessary Node* closures in operator registration
3de34744b3 : Make PythonPrint a class (#26787)
f62c8f48e8 : remove dead LEGACY_PythonPrint
2aa84d927b : Revert D17939700: Revert D17889288: [pytorch][PR] Migrate soft_margin_loss from the TH to Aten (CUDA+CPU)
c44e33b578 : Revert D17889288: [pytorch][PR] Migrate soft_margin_loss from the TH to Aten (CUDA+CPU)
5797f5dd27 : Update 'einsum' docstring to conform to PEP-484 (#27563)
a6cbbd2196 : Revert D17843468: Save Docker image to workspace instead of pushing to ECR.
964d3d8b38 : Revert D17822962: [pytorch][PR] Make JIT Serialization support arbitrary std::function<> IO
fd3d6587e6 : Make TripletMarginLossImpl subclass from Cloneable (#27956)
d39ab0312a : Add memory_format support `to` and `type` operators (#27107)
cbe5ab1109 : Make JIT Serialization support arbitrary std::function<> IO (#27586)
d482ed44f5 : Fix test_docs_coverage.
182abb2580 : accept -1 in iterations and warmup iterations (#28014)
f461184505 : Use grad_out for cudnn CTC loss (#27039)
7e8420b7f6 : Buffer to speed Unpickler (#27727)
b65540cc27 : Remove named tensor builds from CI (#27762)
1054ab213d : improve error message for scatter in processGroupGloo (#27458)
3397d41b8a : Wrapping namespace Reduction in namespace at (#26606) (#27422)
e6a71405a0 : Let logical_xor support non-bool tensors (again) (#27248)
76bf8f62f7 : fix loss_weight for self_supervision
801b6cd0bd : Allow passing undefined Tensor to Module::register_parameter (#27948)
70838ad08b : Fix typo in TransformerEncoder and TransformerEncoderLayer documentation (#26230)
ef8bcfe2c7 : Revert D17488861: constexpr type ids
1865f31efa : Revert D17490109: Remove preallocation of type ids
cf01f53b5a : Remove preallocation of type ids (#26509)
6f865c1e37 : constexpr type ids (#26502)
f1d4e887e0 : Updating submodules
9033ace9c4 : Migrate soft_margin_loss from the TH to Aten (CUDA+CPU) (#27673)
498ca083a6 : adding IterableDataset to dataset.pyi (#27966)
ba7919601f : Save Docker image to workspace instead of pushing to ECR. (#26720)
817cb4182e : Fix Sphinx warning about '_images' not existing (#27927)
e5d6b75319 : Bag of documentation fixes; fix more sphinx warnings (#27850)
ad47788647 : Add Polygamma to the docs (#27696)
f10ea7a2e1 : Add test for requires_process_group_agent decorator
19d83ab800 : Updating submodules
8b87f9a510 : Add fused layer norm impl on CUDA in PyTorch (#27634)
30d9316f35 : refactor tryMatchSchema (#26499) (#27773)
09464a7bf5 : cleanup lint scripts a bit (#27805)
11172c19be : codemod at::ArrayRef and torch::IntArrayRef to std::vector in C++ API tests (#27884)
a4a5b6fcaa : Revert D17913708: [pytorch][PR] [JIT] throw on custom forward for module containers
0af60a5c06 : (#27299)
937e3f1db4 : Enable RRef tests for other RPCAgent
66f74783c3 : Eliminate unnecessary Tensor refcount bumps.
4b1096c652 : Fix predict net issue with LRU hash eviction
aaedf1b38b : break out test_recursive_script (#27819)
151483e25d : move import_class_test files around (#26722)
382917bbd1 : report per iteration execution time (#27923)
7929a4157a : Fix TBB builds (#27937)
104bb57c43 : Run all docker images with --cap-add=SYS_PTRACE --security-opt seccomp=unconfined (#27787)
93030f68be : Changing the hypothesis dev verbosity to 'normal'
2cae3928b0 : Multi-Label Soft Margin loss (#27669)
0003771423 : C++ API parity: Unfold (#27809)
fdea0cbe40 : s/TestEndToEndHybridFrontendModels/TestModels/
cd6b37afa7 : throw on custom forward for module containers (#27763)
169327f557 : Add note that cuda quantization is not supported (#27829)
4f6b567245 : Remove sharding code from tests (#27818)
d23d62cb1e : Fix unaries to export fp16 instead of fp32 when rest of the model export to int8
b5e0fd4c56 : add known worker ids to distributed autograd context (#26324)
5321f4553f : Remove GCC4 from CI (#26522)
524d9003f3 : Kill unused THNN operators.
3714ca58d9 : Kill more unused THCUNN operators. (#26971)
7583f87fa6 : Kill a number of unused THCUNN operators. (#26970)
a23edd6b9c : Fix Type Errors in Examples about Named Tensor (#27828)
82a69a690f : Add documentation for torch.lgamma (#27812)
cc5c34a0d0 : Add nn::functional::normalize() to C++ Frontend (#27280)
32c56747f7 : Mention C++14 in the README (#26670)
0e8d4836e4 : add feature name into module and update position weighted to match dper2
b7b73e43c0 : Delete TEST_NAMEDTENSOR; run named tensor tests on all CIs (#27760)
73521a0316 : Roll more version numbers to 1.4.0 (#27751)
4bcedb6670 : Mark sampler and batch_sampler arguments as optional in the DataLoader interface (#27821)
19df7e7e84 : Fix typo
848d1ba13a : Fix padding_idx in the new embedding cuda kernel. (#27731)
1c2cb6d523 : Edits to ReadMe file (#27808)
07d4374239 : C++ API: torch::nn::Softmax2d (#27509)
52528c041a : - TripletMarginLoss (#27713)
23bffc4f14 : Fix most documentation warnings (#27782)
446a79b959 : C++ API parity: Threshold
cbdd55c669 : C++ API parity: Tanhshrink
2750ea25b2 : C++ API parity: Tanh
27027a4804 : Fix torch::nn layers to always subclass from `torch::nn::Cloneable` (#27770)
aa73701f03 : Disable pytorch_short_perf_test_gpu CI job (#27797)
f6bda1e07b : Removes @default_floating_dtype decorator (#27628)
341262754f : module dedupe (#26666)
ffa422a8b3 : kill _parameter_list (#27399)
759c99c2e3 : [jit] Python None should have its type inferred as NoneType (#26665)
3bccd3fc0d : Distributed Autograd - FAST mode backward pass implementation. (#27022)
96aafc3cdc : C++ API parity: Softsign
fcb6dd079e : C++ API parity: Softshrink
c3c0dcf6e3 : Upgrade MKL-DNN to v0.21.1 (#27597)
039acbea90 : Revert D17757197: Add CI builds
abaa44122d : C++ API: torch::nn::Softmin (#27459)
86fb63f4a0 : add testing code to iOS nightly jobs (#27784)
907ce80321 : Update onnx landing page for 1.3 (#27581)
130127ca59 : Rename `BACKEND` to be `RPC_BACKEND` to be separated from `COMMUNICATION_BACKEND` like gloo,nccl, in `rpc_test.py` (#27792)
ccd460d415 : use gloo enum instead of hardcoding string (#27652)
5b88dd6a29 : fix checkout for clang-tidy (#27796)
e8c23c9f85 : Add various flags for fakefp16 conversion
6e3a53e774 : Sanitize module names on legacy import
2a23654880 : Switch to official releases of katex and update doc for installing katex. (#27758)
fab48eb200 : Makes some CPU-only tests in test_torch generic (#27688)
57d608d1f9 : Suppress info messages in qnnpack (#27774)
ba20ad999c : port the rest of the linters over to github actions
57d4f8e3d7 : kill azure pipelines flake8 (#27767)
640b486339 : add clang-tidy to github actions (#27755)
3d2c90131a : opset 11 updates (#27578)
4da68227e9 : Clarify that when the divisor in div is zero and the dividend is integral, the behavior is undefined. (#25968)
a710a8b758 : Makes CUDA tests in test_autograd generic (#27709)
6eef469074 : Enable mgpu unit tests for rocm
eb5222397e : Better hashing for constant pool (#27733)
a22e8f90cd : Add CI builds (#27357)
977445b635 : Disable TSAN test for LiteInterpreterConv (#27748)
7135f7c263 : Revert D17412856: [JIT] add type refinements for isinstance checks
f35d7d4614 : Pr v130 doc changes oct10 take2 (#27721)
275dfa3485 : Initial commit for L0 norm approx (#27756)
c5ec0a7ede : Don't run dist_autograd_fork on Python 2 (#27612)
f36345eb0b : improve error message on incorrect inputs into gather for (#27439)
726bbfffb9 : Add possibility for miniz to use an external crc definition. (#27558)
15f9fe1d92 : Add missing Optional annotation.
c79d3a4a98 : C++ API parity: Softplus
9d448099fd : C++ API parity: Sigmoid
795c913636 : C++ API parity: CELU
cddc147267 : Back out "Revert D17826873: Adding support to offsets based Fused8BitRowwiseEmbeddingLookup" (#27728)
6294a9a877 : C++ API parity: RReLU
07fc7d05ce : Revert D17488297: [jit] refactor tryMatchSchema
6385a39eec : add testing code to PR jobs (#27594)
352092ca95 : C++ API parity: ReLU6
5d495a11cb : add unused and is_scripting to docs
2488c29129 : Revert D17846079: [TSAN unittest] Disable TSAN test in LiteInterpreterConv
6711969dd8 : C++ API: torch::nn::LogSoftmax (#27462)
b3cb072de7 : Revert D17826873: Adding support to offsets based Fused8BitRowwiseEmbeddingLookup
d8df8aa842 : Remove deprecated script_rref_proto
f7d7c4b72f : Fix a bug of C++ L-BFGS optimizer (#27606)
415b17e81c : Fix for flaky caffe2 dataio test (test_time_limit_reader_with_short_limit) (#27592)
8515650c2b : C++ API parity: ReLU
ce6287f675 : Adding support to offsets based Fused8BitRowwiseEmbeddingLookup (#27635)
e8087a3060 : Change C++ API test files to only include torch/torch.h (#27067)
9bc8fb8dfd : Revert D17850696: [pytorch][PR] Updates to quantization related files, index.rst, and javadocs
829a5c8584 : Disable TSAN test in LiteInterpreterConv
38a3eabd3e : remove cuda from add_test
9d925c1d6f : Revert D17851047: [pytorch][PR] Add javasphinx extension
d931c8bf75 : substantially restructure all quantized docs to group logically (#27677)
91959aa3d3 : Add javasphinx extension
f3df6b8ede : Add C++ torch::nn::functional::affine_grid (#27263)
1118ea5866 : Updates to quantization related files, index.rst, and javadocs (#27676)
51656eefb0 : refactor tryMatchSchema (#26499)
d44b9cd4bb : add type refinements for isinstance checks (#26271)
52985a3501 : Install developer certificate for code signing (#27593)
e66e00cd17 : Fix native ctc_loss gradient indexing bug for large target sizes (#27460)
17a54e1b3d : Revert D17840343: [pytorch][PR] changes to the documentation in support of quantization
971f773886 : Revert D17750005: [jit] Add doc copy-edits from review
ba792335fc : Export traced aten::unbind (#27247)
9e9713f071 : Register operators of CV models in PyTorch mobile (#27609)
18d5210de9 : changes to the documentation in support of quantization (#27603)
2093fac4ee : ONNX Export ConstantOfShape with default dtype (#27577)
e049e0b027 : adding quantization.rst file for quantization feature (#27559)
0eccd05ab4 : Add javadoc rst files
85f33a4738 : Fix install location for ATen_CORE_HEADERS by avoiding relative paths (#27449)
1fec1441a1 : C++ API parity: PReLU
0fbbc7acb4 : Allow `align_to` to take in partially named tensors (#27308)
7591010077 : Disable automatically code signing for TestApp (#27591)
b6fea4f77f : Removes floating_dtype decorator from test_torch and test_cuda (#27599)
aeae5d6020 : add dim to the cat benchmark (#27620)
abcd221f19 : add as_strided operator to the benchmark (#27632)
283f4814d3 : Modify PyTorch's integration of NNPACK to use a unified underlying thread pool implementation. (#27341)
3246fddfd6 : Implement C++ API torch::nn::MultiMarginLoss. (#27424)
0fed4756d0 : C++ API parity: SELU (#27434)
28a1806cbc : C++ API: torch::nn::Softmax (#27446)
e7c9c8098a : Add doc copy-edits from review (#26322)
9084fcba46 : test_equal in test_quantized.py (#27616)
fbba4edd1d : C++ API parity: ELU, Hardshrink, Hardtanh, LeakyReLU, LogSigmoid minor fixes
7c472ec597 : Vectorized complex unary and binary op support. (#26500)
d70f8dd964 : Tests for fallback boxed dispatch (including TLS mode) (#26719)
eb9000be4e : always use the closure to resolve variable names (#27515)
1b385e7e5f : Add std::variant backport (mpark) as c10::variant, with gcc 7.3.1 fix (#27575)
013ca32730 : Devirtualize numel() (#27294)
ab15584dce : add random sample function to generate list of inputs (#23174)
c1ed0150c5 : canonical example of torch.add benchmark (#23402)
a750a1a2b4 : modify config_list to support cross product of attributes (#23399)
b9b9fd4fad : Fix the arithmetic overflow issue for MSVC (#27596)
987e37b9c2 : Enable EXE001 flake8 check. (#27560)
65cdc8db5d : Remove GEN_TO_SOURCE from CI
eb8fe883d8 : Revert D17599915: [pytorch][PR] Support 0-batch size for nn.Linear.
47e6d40b9c : Revert D17810912: Register operators of CV models in PyTorch mobile
15bec0970c : Add instructions for setting up ccache from conda (#27481)
59b14a7620 : Documentation for named tensors (#27173)
a37be201c1 : Implement torch.nn.Embedding / EmbeddingBag in PyTorch C++ API (#26358)
b96f49885f : caffe2 python ideep conv_op test_int8_convolution skip for python 3
1f158adeee : Add support for attention weight in SparseLookup (#26748)
a891e92f89 : Support 0-batch size for nn.Linear. (#27211)
c27853fbba : Expose torch::jit::script::Module::dump_to_str to python as module._c.dump_to_str.
6cf189512c : Remove underscore from pybind of module._c.dump (#27555)
1610ea8ef8 : Comprehensive-ish instrumentation for CUDA memory allocator (#27361)
04cd777ed4 : Create BUCK build for lite-interpreter (#27546)
ff03f9bc94 : Remove CPU_tensor_apply* from Normalization.cpp (#27327)
e16868ab29 : Register operators of CV models in PyTorch mobile (#27379)
3f660cdf0f : Remove CUDA_tensor_apply1 (#27313)
e7b6ea5535 : Move the CUDA implementation of atan2 (which was partially implemented in ATen) to ATen. (#26178)
c1c176d91b : record_stream() for shifted view tensors (#27371)
6e59fb6a97 : .gitignore for the docs folder
eb93200321 : Fix DDP incompatibility issue with nn.MultiheadAttention. (#26826)
f522bde121 : Replace references to _DataLoaderIter with _BaseDataLoaderIter (#27105)
d57124823b : Regenerate aten_op.h when native_functions.yaml changes. (#27253)
31a6ff46c1 : change input shape to reduce variation (#27548)
b4ce922b58 : Move RPC API to torch.distributed.rpc
a6d26ce135 : Move internal functions to torch.distributed.rpc
14f1629c4d : Move RPC backend registry to torch.distributed.rpc
1fd14c5822 : Remove torch.distributed.rpc function (#27287)
48a571b29c : Rename variables and add comments (#27286)
f597926fe0 : Remove shebang from non-executable files in torch.distributed
c742918854 : Fix pybind11 warnings in python_rpc_handler.cpp (#27284)
0d22f3b170 : Emergency split CUDA libtorch build/test into separate job (#26859)
660264e173 : fix documentation for add_hparams (#27521)
3b5d40c339 : Add C++ torch::nn::CosineEmbeddingLoss (#27345)
e63bfb7877 : Use orig source range in Node::print
e2143fdeb8 : Updating submodules
725810f42c : Set existing attributes under recursive script (#27514)
7f183a978f : Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) (#27444)
16ece1c9da : Fixed typos and grammatical errors (#27465)
6e0312a9c5 : Revert "Make static dispatch turn off variable before entering the kernel. (#26908)" (#27283)
a96b003b39 : docstring only formatting changes: quantize.py, fake_quantize.py, observer.py
e63addfff6 : Exponential decay of the weight of task loss (#27508)
2c51e0659b : Roll master to 1.4.0 (#27374)
34662f77c6 : Revert D17159707: [pytorch][PR] [ONNX] Fixed Select symbolic to export slice when index = negative one
1b5df37441 : Updating submodules
84e2dc692a : Fix broken name mangling
23f2fb0aec : #include <stdexcept> into flat_hash_map.h (#27478)
24242e86fa : Ensure NCCL error handling code is disabled for NCCL versions < 2.4 (#27124)
4bd8ae13c6 : Move hipify to torch/utils to bundle them into torch package (#27425)
ce16d689b3 : FunctionEventAvg implements __iadd__ interface (#27498)
4a28ab95d0 : Clean up JavaDoc comments in pytorch_android
1ffa81d772 : Various cleanups to pytorch_android API (#27454)
b66df47a11 : Refactor python_android test to separate Android-specific components (#27453)
aab9673e8d : Avoid variable shadowing in ``::at::philox_engine::single_round()`` (#27486)
16454095e0 : Fixed Select symbolic to export slice when index = negative one (#25273)
8cc9d27647 : Automatic update of fbcode/onnx to 2891e1459745933f4bba9a8cb3371cf3c9eb1d16 (#27474)
a4cba50d62 : Put metrics back to torch.utils.tensorboard similar we have in TensorboardX
0046092178 : Reduce special casing around 'training' (#27109)
a24291a554 : Unfold export (#24970)
1250acef90 : Disable tsan for test_multiprocessing. (#27410)
0222eceaaa : Remove outdated note in cholesky_solve and triangular_solve doc strings (#26989)
0b6186d778 : Remove Tensor.h, TensorMethods.h from src/core. (#27086)
2cc1e69cc9 : C++ API parity: LogSigmoid
17c672e704 : enable rocTX API (#27416)
04436f6c60 : Upgrade to ROCm 2.9 (#27417)
bac11d1002 : Tweak docs on building docs
e0ae3ce5e4 : Docstring fix (#27225)
7a2e61c28e : Remove dependency on six from dist_autograd_test.py
1741adfd3e : Use deepcopy inputs for ONNX ort test cases (#27186)
1f0328c6d4 : Add randomFill to test_utils.h
f4d0d0a811 : Enable RCCL in ROCm build (#27383)
7b3881f68c : Adding docstrings for nnq.functional
b05ec828ad : Add interface/object serialization as module attribute (#26770)
381cf2bd24 : add warning to dnnlowp fc if quantization kind is not min_max
afbbe16f49 : Add methods to write image tensor content to buffer (#27359)
ac0f18437f : MovingAverage Observer (#27396)
92a2caa028 : Pickup proxy parameters for publishing (#27389)
18215337f4 : Change nightly builds version to 1.4.0-SNAPSHOT (#27381)
32d009a37f : Add gfx908 to the list of per-default compiled architectures. (#27388)
6db0cc472c : add some support for the occupancy API on ROCm (#27390)
3c2cd8cc10 : Some hipify script cleanups (#27375)
8b61a220c0 : C++ API parity: LeakyReLU
badb08d577 : Add clip_grad_norm_ to c++ api (#26140)
646e214706 : ProcessGroupNCCL should respect timeout passed in to init_process_group. (#27224)
f4c37e6b32 : fix OSX CI build (#27373)
192ca9730f : C++ API parity: Hardtanh
0be6641fbf : add function to get nccl version for error messages (#27068)
a33dbccf60 : Fix some return std::move warnings (#27384)
a6bb8b52d4 : Reduce error context from 10 -> 3 (#26765)
9f9c6c0999 : From docs of scatter_add_() removed erroneous comment on uniqueness of indices. (#27132)
50b3f9d815 : Allow use cpu_serial_kernel with void-lambda (#27370)
19ab5381c3 : Add OPN instruction and vararg operator table (#27104)
e166bcbbde : Make RpcTest re-usable by other RPC backends by using init_method to initialize a RPC backend (#27320)
28b1f586f6 : Change schedulers to chainable form (#26423)
da669c25ee : autograd: double backwards function for binary_cross_entropy loss
c389156fc4 : move new_zeros to core from THP (#26511)
b7fb2b8862 : Implement pickle support for sparse tensors and torch.layout instances (#27062)
76fc028533 : Revert D17743310: [pytorch][PR] Allow use cpu_serial_kernel with void-lambda
081069e8ca : Remove CUDA_VERSION from Python script (which has already been detected in CMake) (#27316)
e29baaca3d : Make `align_to` method-only. (#27304)
13c39c8ecc : Remove six dependency (#27282)
a7de545c63 : Makes test_cuda.py's generated tensor op tests generic (#27210)
527b10c2d1 : Fixes PackedSequence.to (and unifies PackedSequence conversions) (#27245)
76f847546b : Enable Python3.6 PyTorch ROCm CI
d0a4b2f586 : Choose num_threads in parallel_for based on GRAIN_SIZE (#26963)
42e7eb0426 : Minor readability fixes to C++ documentation (#27338)
2ea1d3d01f : refactor extra sugared values (#26270)
9ade1e6944 : improve error messages when a method or attribute is missing (#27110)
ef97841147 : Show a warning that not all dir members of quantized work. (#27339)
6bb7433ad5 : Replacing the skip_list with white_list in the qconfig propagation
c874dd91a7 : export remainder (#24410)
736c754739 : add sdk support for xcodebuild script
c3d97c2638 : Update to ROCm 2.8 (#27337)
86a8971ebb : Add a test case to RpcTest, check src/dst (#27322)
f5df46ce39 : Set MINIZ_NO_TIME to avoid computing localtime on each pickle/unpickle (#27268)
2486b0ba82 : Add Python RRef as args and return value (#25499)
8fe5dcf699 : Skip tests that use numpy if it's not present
827a00cf63 : Support interface python assignment as an attribute (#26734)
cc964765a5 : Add method add_hparams to API doc (#27344)
99c32d97fa : Migrate the cpu and gpu implementations of resize nearest 3D from vision to caffe2
74572fc985 : Relax restrictions on set_num_threads (#27190)
a444054d4b : Fix build (#27318)
05df6b67c6 : C++ API parity: TensorTest.BackwardNonScalarOutputs
0c4bc27539 : Mention magma-cuda101 package in install instructions (#27325)
2e62318243 : Move the CUDA implementation of log10 to ATen. (#26733)
6e2a8d0625 : Allow use cpu_serial_kernel with void-lambda (#27271)
75d8dab9be : Update the link for iOS demo app in README.md (#27145)
d6336b0d8b : Fix circle CI
1c5683a80f : Avoid calling tensor.numel() in for loops (#27298)
082f195ac9 : Updating submodules
7bd7a3d806 : add AutoNonVariableTypeMode for USE_STATIC_DISPATCH on JIT->ATen path (#27274)
493c900810 : Extract version to version.txt (#27149)
7b2e8c323c : Add memory format argument to the `clone` operator (#27106)
111da77912 : Factored out the default mappings
882b2abb80 : Fix segfault while printing value type for an error msg in emitListComprehension
1bc7ea17b2 : more profiler changes in C++ before enabling checkScript changes
214a676f6b : Provide (but skip) 3.5 job by default on all PRs. (#27293)
c2223df578 : Implement LpNorm regularizer to be used on the inputs for feature importance (#26376)
7e95b89980 : Revert "Add std::variant backport as c10::variant (#26836)" (#27277)
8fbefa06f6 : Avoid configuring ROCm if USE_CUDA is on. (#26910)
5b5f398dd4 : Make cpp-backed jit classes appear as being in torch.jit
17b1faa2bf : Rename jit Function to ScriptFunction
fe4170bda8 : Add send and recv backward functions for builtin operators RPC. (#25527)
fc249c7924 : skip all rpc and dist autograd spawn tests for <PY36 (#27191)
a423817055 : Fix reprs for _intrinsic modules
1affa7c32c : Allow set for qconfig for dynamic_quantize
27dc595215 : Rename _intrinsic to intrinsic
4fb442985a : Add c10_experimental ops to BC check white list (#27235)
529c2dfaa9 : Refactor torch::jit::script::Module::register_* API. (#27189)
803f7bfaac : Implement C++ API version of torch.nn.functional.one_hot (#27081) (#27177)
e33ec3942e : Add insert_prepack_unpack for conv2d (#27118)
293e35a87c : Fixed Error message for tensor.align_to (#27221)
162ef02db6 : Rename distributed autograd test to avoid "test" prefix (#27161)
5e776d8a45 : Enabled comparison ops with named tensors (#27162)
2491dd50ee : Add ProcessGroupAgent termination detection algorithm (#26984)
d93fc64776 : Update export for topk and sort (#25739)
3eefc5457f : Enabling intra-op parallelism (#26692)
eb5040c205 : Suppressing hypothesis health check for qnnpack_add
21c229f4e1 : Makes more of test_nn generic (#27137)
bb51980766 : make default string arguments in schemas human readable (#27088)
a19b135fab : Remove note about tb-nightly for mesh (#27146)
5835460309 : Report docker push / pull time (#26861)
c6ff830c2f : Remove reserve from push_list_elements on JIT stack. (#27140)
515e3b85da : C++ API parity: Hardshrink
5f36ce6792 : Mark the codebase as Python 2 compatible. (#27214)
33db4e02cb : Separate libtorch tests from libtorch build. (#26927)
eeaef217b3 : Eliminate outdated comments
c864454a8f : C++ API parity: ELU
5005f7bce7 : C++ API parity: MaxUnpool3d
f4f6d8dda5 : Fix ONNX Interpolate
631e2ee7a4 : make python udf serialization format to be binary plus tensor tables (#27136)
5cac738713 : C++ API parity: MaxUnpool2d
b45f1b9601 : Makes more of test_cuda.py generic and updates test_torch tests (#27135)
2dff0b6f6a : Add some extra info to PyTorchStreamReader/Writer errors (#27006)
becf080e4a : add dynamic isinstance (#26269)
8a38a53e4d : Allow types as node attributes (#26268)
00e588290b : Add test case for init_rpc_backend (#26997)
0b79f77a4d : Serialize XLA Tensor (#27041)
bb8983e936 : Revert D17694691: Enable distributed autograd tests for >py36
4abfb5493e : Handle uninitialized min/max values in histogram observer (#27151)
d125a83f98 : C++ API parity: MaxUnpool1d
7bbb2df6d9 : Enable distributed autograd tests for >py36
b805b5dab8 : Unify quantized conv and linear tests (#26992)
38001fec1b : Updating submodules
8e4031ddcf : Move comment back to the class it describes (it accidentally drifted away).
e09c97b113 : Disable flaky distributed autograd test under spawn mode
98c02e6df3 : Enable tests (#27103)
14d29aeece : insert_prepack_unpack pattern (#27102)
f742ceaa46 : API - add more passes to graph mode (#27093)
6a09676442 : Uninitialize the accumulation buffer to save some overhead (#27005)
1d2d59dd79 : make rpc and dist-autograd multiprocess test to use both fork and spawn (#25656)
46539eee03 : Ensure that DDP wrapped module has parameters that require gradients (#25858)
ec07d144ba : Fixed seek offset size to 64bit. (#27125)
48cd66cd2b : Delete 5 THVector function declarations that have already been migrated to ATen.
4086f7432b : Bump gloo (#27133)
18eea8269a : Add C++ torch::nn::functional::pdist (#27122)
5ccecabb04 : Automatically select proper tqdm submodule (#27108)
3099732017 : Creates device generic cuDNN decorators (#26791)
85abfc1992 : Move imports to after is_available in test_rpc to fix windows builds (#27126)
dddae3f854 : Fuse module enhancements (#26457)
6a4ca9abec : Support layout() in script
209dc4c4ba : Add C++ torch::nn::HingeEmbeddingLoss (#27101)
ea414e4990 : Adds Device Generic Precision Tests to test_torch.py (#26762)
1d4d6b6f0f : Updating submodules
9e3ba35500 : Add control for observers in Fake-quantize module (#27113)
bdcaf6334b : Support for add relu functional module (#26612)
ec7913afbd : Cuts test_torch.py runtime in half by marking four tests as slow (#26789)
990f4ca76d : make class types callable (#26743)
bf7ebc5a53 : Set number of threads for operator_benchmarks (#27010)
5e7549c3b5 : Fix `toIValue` dict iteration (#26856)
fb8fd6dc73 : separate out rpc to sync and async apis (#26570)
3e20d9c0dc : Module method destroy
47cd15c643 : Script for full android build to aars; script to run android tests (#26833)
27d4b34ea6 : Add temporary torch::k{name} enum declarations (#27051)
9159a601ca : Make static dispatch turn off variable before entering the kernel. (#26908)
e75a49341b : Updating submodules
585a5975fb : Return NotImplemented from tensor arithmetic operators (#26507)
4a8d899e18 : Don't apply should_run to the nightly/postnightly branches. (#27061)
73347e0ae5 : Work around a gcc-7 bug in building Debug version of Sleef (#26993)
b16358b251 : Revert D17666050: [pytorch][PR] Fixed seek offset size to 64bit.
e08338738c : Add tuple constructor + to<std::tuple<Args...>> (#26668)
e367f605cd : Integrate prepacked workaround in QuantFusion (#26939)
a252aee8c2 : serialize autograd ops into its own namespace (#26761)
d91e490a9f : Fold prepacked weight into module (#26579)
3d41ad507a : Back out "Add int8 resize nearest 3d op in DNNLOWP" (#27056)
1afe3fc01e : Fixed seek offset size to 64bit. (#27047)
1eaa9f89fb : Fix Windows CI
275e0c1c8f : Make nonzero non differentiable as it is supposed to be (#26980)
d5298b6e66 : Default observer and fake-quant for backends (#26627)
32b0e8c980 : Emulate weight and activation only quant with fake quant, numerics test (#26625)
84ee8ace12 : Quantization aware training: Freeze batch norm support (#26624)
7dc7075795 : Per channel fake quant (#26623)
7c465aae32 : Add boxed fallback (#26368)
2bcbc15023 : make cudnn rnn respect current stream (#27026)
a1513dced3 : Integrate FC fp16 exporter into Dper2 (#26582)
1a3997e0b8 : C++ API parity: AdaptiveAvgPool3d
4d7bec5f3e : Improve repr for quantized modules
ee2c79d699 : Migrate le/gt/ge/eq/ne from the TH to Aten. Added support of type promotion. (#27017)
9e1cf99a16 : Turn Caffe2 CUDA 9.1 + py2 to CUDA 10.1 + py3 (#26835)
2ccbdb79c8 : Per-channel baseline (#26516)
a31fd5ea68 : C++ API parity: AdaptiveAvgPool2d
7d58060f49 : C++ API parity: AdaptiveAvgPool1d
18dd541b59 : move parallel_for/parallel_reduce common implementation to cpp (#26969)
a173bea425 : Resubmit [pytorch][PR] [ONNX] Updating producer_version in exported O… (#27004)
9f9ba3a900 : Add InsertPackUnpack pass (#26959)
310ebb45a8 : Fix race condition in Function::optimized_graph(). (#27012)
8858f42aa4 : Revert D17635651: [pytorch][PR] Migrate le/gt/ge/eq/ne from the TH to Aten. Added support of type promotion.
0cd188035a : Add std::variant backport as c10::variant (#26836)
cca3a369f1 : Dont zero out buffers in dynamic linear (#27002)
09f0e949cd : PyTorch Graph Mode Quantization API (#26390)
da93cc5c2a : Fix race condition in torch::jit::Function (#27009)
f8db764f6c : Remove unimplemented passes (#26978)
766767652a : Move patterns in QuantFusion to a separate file (#26848)
5e79b5b1c7 : Move some class/functions in test_jit.py to jit_utils.py (#26839)
b0f1b5c757 : Add QuantFusion to graph_executor (#26591)
541de7e140 : Migrate le/gt/ge/eq/ne from the TH to Aten. Added support of type promotion. (#26981)
ff8b7ef63d : fix range for non-int inputs and pow implementation (#26926)
9080f1c5dd : Rewrite argmax and argmin as TensorIterator reductions (#26181)
0c6a18de8d : Add torch.promote_types function
024a422f41 : Add fakefp16 transformation.
aa0b28428c : Add optimized quantize function for ARM (#26867)
ad58045af9 : Remove LOG(INFO) from math_cpu.cc (#27001)
6d715c9e79 : Bring back the optimization of integer.pow({2.0, 3.0}) on CPU (#26938)
3a18e2e768 : support re-creating/destroying process groups when some trainers recover after failures (#26912)
250f482aa5 : Support qadd_relu on pytorch mobile (#26982)
b518ff3cb8 : Re-write of tensor-scalar mul
91a0eb7cc5 : Add int8 resize nearest 3d op in DNNLOWP (#26063)
646b69b3d0 : Xray image inference on multi-cpu and dumping dnnlowp tensors (#22537)
ee68c512c5 : Add P99 method with configurable thresholds
55a358546f : Revert D17631902: [pytorch][PR] [ONNX] Updating producer_version in exported ONNX models to PyTorch 1.3.
5aa01fd89a : C++ API parity: AdaptiveMaxPool3d
2afa5fe112 : Better error message for calculate_qparams (#26985)
23260f3e7d : Add logging in constant propagation pass
3e480f8fb8 : Fix fbjni packaging, exclude for publishing, include by default (#26995)
38f7a51cf2 : add AutoNonVariableTypeMode guard on JIT->ATen boundary
a625734f6a : Acquire GIL before creating py::object in RPC python handler
baa227b410 : Revert D17579439: Add std::variant backport as torch::variant
290405321a : Better named tensor error messages. (#26974)
6b3c0c1f22 : Updating producer_version in exported ONNX models to PyTorch 1.3. (#26976)
0ae0c9788e : Fix misuses of TORCH_CHECK/TORCH_INTERNAL_ASSERT with string (#26897)
764bf826e3 : Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840)
e4fba752cb : fix type annotation
2cdfec6b24 : Make named tensor implementations more robust (#26968)
95a08c5b95 : Add documentation for overload names (#23844)
486305066a : fix mobile.sh build (#26975)
d63d7ab997 : Expose PiecewiseLinearTransform to PyTorch
71011211c1 : Add std::variant backport as torch::variant
bb7a415bcc : C++ API parity: AdaptiveMaxPool2d
3ad1bbe16a : Named tensor support for: index_fill_, index_fill, squeeze, median(Tensor) (#26914)
2f1932fc5c : Fix issues in torch::tensor constructor (#26890)
f77b295edc : Disable cudnn transpose for int types (#26934)
8fa9900c28 : control of observer/fake-quant operations (#26520)
b2e43e4a2e : Fix all factory invocations in quantized to correctly propagate options. (#26966)
102a148641 : Default histogram observer (#26622)
6bf6788158 : make repeat respect the current stream (#26946)
428204dfa4 : Fix the QuantizedAVX2 build issue (#26854)
b0a2f6f2f5 : Serialization and range reduction support for Fake Quant/Observer (#26519)
3acbcb96d4 : Include `iteration_` in SGD optimizer serialization (#26906)
0a393f6ef5 : C++ API parity: AdaptiveMaxPool1d
9a5e2e80b8 : Fake quantization enhancements for QAT/PTQ support- fix tests (#26876)
2a43b74196 : Add torch.can_cast(from, to) function (#26805)
76a76a6cb9 : Switch nightly jobs to trigger on 'nightly' branch rather than cron. (#26830)
8ec0414053 : Automatic update of fbcode/onnx to 034921bd574cc84906b7996c07873454b7dd4135 (#26955)
b60656bb0c : Move Generator ops to c10 (#26434)
f01ae84bc1 : RPC Backend Registry (#26919)
e2ef49b559 : Updating submodules
6b9bcd0606 : export baddbmm (#26901)
614edfce81 : Add Support to Dicts and Strings in ONNX for Inputs and Outputs (#25889)
7163bfdf58 : Fix the weird bug in control_flow_op_test.py (#26931)
3b1b45898e : Updating submodules
7e95439e9f : batch size 0 tests for etc DNNLOWP operators (#26877)
492660768f : use new depthwise conv fbgemm interface (#26898)
257b61495e : Revert D17610292: [pytorch][PR] Choose num_threads in parallel_for based on GRAIN_SIZE
092b2f7fee : Make TypeDefault, TypeDerived and VariableType anonymous namespaces (#26882)
771bcce6f1 : Fix binary size in schema inference (#26878)
54b66c8c20 : Fix shared_ptr binary size in op registration (#26869)
1a5d641de3 : Improve binary size of function schema inference (#26860)
84e298e7b3 : Fix c10 registration binary size (#26827)
a6eec839ea : use parallel_for in DepthwiseConvKernel (#26879)
77bfe61ff4 : C++ API parity: TensorTest.Data fix
388430f6bc : Make quantized max_pool2d error message more specific and less silly
b1a09dbec7 : Support ceil_mode in quantized maxpool
55fc377857 : Check if QNNPACK is supported before set (#26935)
8d5c2aa71c : Set quantized engine backend for mobile in speed_benchmark_torch (#26911)
638c4375de : Export index_fill and index_copy, fix caffe2 scatter
d5490c662e : batch size 0 tests in BatchMatMul ops (#26874)
ec1f0f08f1 : batch size 0 support in norm operators (#26894)
f99bc714c7 : Migrate lt and lt_ from the TH to Aten (#25998)
9dd8a129de : Fix Vec256<T>::abs() for floating point when applied on -0.0 (#26422)
755b7e484f : Remove an unused function propagate_names_if_namedtensor_enabled
ac99936553 : No sccache (#26059)
6f92aa2f82 : Use intrinsics for trigonometric functions on CPU (#26431)
5c67b01467 : Switch internal CUDA build to C++14 (#26757)
bf1d957dc8 : Fix the Bernoulli distribution sampler (#26864)
e425bdb832 : Choose num_threads in parallel_for based on GRAIN_SIZE (#26886)
9f0deb4725 : Get rid of -u (expansion of undefined variable) setting (#26907)
b2f671a3fb : fix typo in job name: nigthly->nightly
43b07ff2c4 : Fix nuclear norm with requires_grad=True (#26303)
0e3389dced : Fix circular deps in loading (#26758)
78a52549e4 : Refactor dispatch structure so fallback code lives inline. (#26367)
272d7c021f : Change calling convention of ATenDispatch from getOp to callUnboxed. (#26857)
0bfe12d04c : Updating submodules
d3cab6571e : batch size 0 tests for Quantize/Dequantize DNNLOWP ops (#26873)
78b0c58a9d : batch size 0 support in FC DNNLOWP operators (#26872)
41c1cc2f51 : batch size 0 tests for element-wise DNNLOWP ops (#26870)
1aaf4810bb : batch size 0 support in Conv DNNLOWP ops (#26871)
2991bfdbe0 : Add bitwise distributed reduction ops (#26824)
dec0b6b792 : Add some missing constructors to IValue.
60b57d960f : Make resize_as_ generic, so XLA works. (#26809)
8fb756d3b2 : batch size 0 support in ChannelShuffle DNNLOWP op (#26858)
0a8a779abe : Add more inplace arguments to quantization top level API (#26782)
5231699de2 : Enable batch_size = 0 support in DNNLOWP Concat operator (#26849)
60372dc713 : remove backward functions from jit-op-registry for mobile build (#26851)
ed2607486f : add mobile friendly at:parallel_for backend
14d7d5718e : Improvements to GuardElimination and InsertBailouts
20ed6ba077 : Updating submodules
058ba0e761 : Remove unnecessary functions and cleanup code in quantization.cpp.
8f359a48a6 : Fix building with PARALLEL_BACKEND=NATIVE_TBB (#26742)
c25c507ffe : Remove three unused declarations. (#26699)
f37aa2de12 : Try to disable annoying hypothesis warnings again (#26853)
20ebd13f0a : Re-write of tensor-scalar quantized add
1d55616aa2 : Fix broken failure messages for OverloadedMethodValue
df16fb9ca1 : Throw if someone tries to torch.save() quantized modules (#26828)
d842435c01 : Remove convert_to_ssa argument from runCleanupPasses - it is only used in one place.
9df887df02 : Use optimized_graph in graph_executor.
ed82a28cf0 : QEngine::QNNPACK enabled, module.eval()
2eb592324f : Migrate multinomial from the TH to Aten (CUDA) (#26481)
90ffab6e37 : enable double backward for non-cudnn LSTM and GRU (#26660)
91549ef6c8 : Move the CUDA implementation of log to ATen. (#26494)
7fc06ea541 : Bytecode export flow (#25187)
660d9e24dd : Highlighting in the doc that square root comes before adding epsilon
08425d8c01 : Fix CUDA named tensor `copy_` (#26829)
d43480d6d1 : support iterables, rangevalue in list comprehensions (#26768)
a23109e12e : Do not call cpuinfo_initialize() on other than x86 arch. (#26265)
9383601523 : fix to operate on cuda kernel with clang and libc++ (#25553)
5fc52482cf : torch.load default encoding change to 'utf-8' (#26421)
92a2d4232a : Named tensor support for: all, any, bitwise_not, cumprod, cumsum, and more (#26815)
4bd1da1458 : Revert D17473200: [pytorch][distributed] add function to get NCCL version for logging
81bbb7ebab : Convert TensorIterator to use function_ref, a lightweight alternative to std::function. (#26592)
5379e87a32 : Cuda101 upgrade (#26823)
b6a1d618b2 : Revert D17565828: [pytorch][PR] [ONNX] Export baddbmm
b5d15315d8 : Improve C++ maxpool and avgpool (#26521)
167722d36e : Typevar matching fix + implicit conversions from Scalar to int/float (#26453)
03007b3dda : Quantized Interpolate Kernel(upsample_bilinear2d) (#26631)
334e78b1ce : Fix Future default constructor missing for ParallelNative
63fd10549a : Export baddbmm (#25738)
3288da064f : Fix CI docker builds (#26704)
ae2a8fea3d : Validate Docker version in CI. (#26496)
f7742d2b21 : Prepare for Cocoapods 1.3 Release (#26751)
f396b019b1 : Remove one unnecessary copy of the output during the type promotion. (#26816)
d9055319d4 : add function to get NCCL version for logging (#26583)
aaf30cdf36 : Port CUDA implementation of expm1 to ATen (#26598)
729f8425f7 : Use Caffe2's implementation of grouped depthwise 3x3 convolutions (#26556)
1cae5195a6 : Refactor checked_tensor_unwrap to take DeviceType instead of Backend (#26290)
b0bb5e338e : quantized_tensor tests (#26784)
25cd3c6b7d : Lets generic tests use multiple devices (#26594)
db5791d543 : autodiff changes to enable profiling
0cb10d7ebf : move more functions to InsertObserversHelper (#26773)
f0e507cbaf : Updating submodules
ee6cdb5726 : Upgrade sleef to v3.4.0. (#26749)
0f1fbc0eb2 : Hub improvements (#26723)
61dd485b3a : Revert D17549623: Add some missing constructors to IValue.
f094afe4b9 : Updating submodules
c5b57aa57d : Add some missing constructors to IValue. (#26718)
037cfce745 : Remove unnecessary include from TensorBody (#26360)
52614f5fd9 : Implement multiple dispatch in boxed c10 dispatcher (#26118)
b56ad744a2 : Delete backwards compatibility Backend overload for registerOp (#25914)
3346759774 : Named tensor support for logsumexp, mode, kthvalue, median, min, max (#26563)
002c250139 : Expose a torch.result_type and simplify tensor iterator
5001ec4252 : Support Negative Axis in Size in ONNX (#26436)
d396c7332a : Update ONNX Export for Interpolate in Opset 11 (#26778)
60343a82e9 : Named tensor support for: atan2, output_nr, detach{_}, requires_grad_ (#26543)
be93d30e37 : Revert D17458232: Fake quantization enhancements for QAT/PTQ support
e2c3d7e52c : Fake quantization enhancements for QAT/PTQ support (#26420)
a395c31147 : Add <cinttypes> include to resolve PRIu32 macro (#26745)
bc4519dc27 : Handle DeQuantStub() for QAT (#26518)
9949638818 : Improve error message in IR parser when accessing undefined variable.
6d0b004574 : rename caffe2::mobile_threadpool to caffe2::mobile_pthreadpool
d4dc844ec3 : Add comments for multidim tensor factory limitations, and rename ListInitTensor for better clarity (#26756)
e54a9e1b5a : use new fbgemm PackedDepthWiseConvMatrix without template parameter (#26760)
0c0b4b6326 : Revert D17559660: [fix] quantized_tensor tests
1bb895e1c1 : Revert D17330801: [pytorch][PR] Update ONNX Export for Interpolate in Opset 11
ef8d1c50c4 : Fix builtin lookup for Python functions
cc4219a799 : Wrap dimensions during named inference (#26558)
27ad34a703 : Revert D17558701: [refactor] move more functions to InsertObserversHelper
89c5dc57d9 : Add whitelist for backward compatible checks for function schemas (#26740)
b93f0947a8 : Automatic update of fbcode/onnx to ab6b94203c595f74b1f126eb118eef22e4c05a57 (#26736)
925e51ea7f : Add a lot of dimname overloads (#26636)
67bde6b724 : quantized_tensor tests (#25429)
4820ff3adc : Switch our Android CI to Clang (#26656)
5de5f793ed : Added test case for reinit (#26506)
7516156a35 : move more functions to InsertObserversHelper (#26696)
d21232055e : Address review comments in https://github.com/pytorch/pytorch/pull/26272 (#26587)
e95f3125fd : Make ONNX_ATEN_FALLBACK also works for _export (#26738)
f43b7c4435 : Revert D17513451: Register values listed in __constants__ as attributes of the Module.
1058373205 : Revert D17514653: [quant] Un-hardcode epsilon constant in FoldConvBatchNorm2d.
9dd9a7ef5c : Simplify operator `sign` using the helper.
c8109058c4 : Refactor android torchvision: not hardcoded mean/std (#26690)
de3d4686ca : Update ONNX Export for Interpolate in Opset 11 (#24805)
d959b4e2d2 : Add threadpool in qlinear and qconv for mobile (#26728)
f57ecd5f29 : add timeout parameter to connect function in TCPStore (#26554)
5e5b9a9321 : Add C++ nn::Identity (#26713)
c0c2921a06 : fix annotation regex for flake8 (#26694)
3f72bcfcaa : Remove _dequantize_per_tensor (#26681)
d0fff0ebc8 : Make `is_optional` check more robust (#26312)
5cc353482d : Add doc building instructions
eddda3afdc : Un-hardcode epsilon constant in FoldConvBatchNorm2d.
6c758ff244 : Register values listed in __constants__ as attributes of the Module. (#26581)
52b69fbcd4 : Remove _dequantize_per_channel in the pattern (#26680)
cf272d43ab : Trivial quantized torch.mean implementation
9f1da984ef : Enable hub tests on MacOS (#26697)
af3b15b74c : Setting automatic default selection for ONNX IR v4 semantics in ONNX export API (#26146)
8b12602264 : Add traces to specialize_autograd and lower_grad_of (2nd try)
a172fbf972 : Expands TestAutogradDeviceType (#26708)
fa7b621afd : Remove duplicate calculation of output shape (#26684)
128a65e2e0 : Use noop observer to pass dtype for dynamic quantization (#26709)
ae0732cde3 : Speed up an integer to the power of a positive integer on CPU (#26020)
66d27504e3 : allow building docker without torchvision (#26168)
3cae3021e5 : Add tests for C++ functional cosine_similarity and pairwise_distance, and clean up functional test code (#26559)
714b05e499 : Updating submodules
ff78d743b4 : Don't generate named tensor functions to RegistrationFunctions.h (#26685)
05f708187c : Typo fix (#26417)
efaa65dd60 : resolve ignored module method type annotations (#26683)
5e5cbceeba : remove tools/setup_helpers/cudnn.py (#25876)
9f3351de81 : Add warning to anomaly_mode doc fix #26408 (#26615)
cf1dbc79db : Vectorize unary operator erfinv (#26629)
c643290982 : Add derivative for cholesky_inverse (#26451)
7bdc0c138a : Move the CUDA implementation of trunc to ATen. (#25423)
d6ee58494f : Automatic update of fbcode/onnx to 23bb6ea1a71f08e200114a153f48bd7adb66d486 (#26441)
450504cd95 : C++ API parity: at::Tensor::set_data
2cf1183ec1 : Use optimized graph in Inline (essentially, making Inline recursive now). (#26489)
c522b6356c : Add 'optimized_graph' to Function. (#26488)
c034f9796f : Use std::mutex instead of std::call_once in Function when we initialize GraphExecutor. (#26571)
76e2ffc877 : Remove 'recurse' parameter from Inline. (#26487)
a65db650a8 : Enable registering stackbased kernels with lambdas (#26658)
839e636fa1 : Revert D17495679: [pytorch][PR] A few hub improvements
98bbb7788c : Updates and extends TestNNDeviceType (#26638)
ade60f8a8d : Allow per-channel QTensor accept any floating type for scales (#26676)
b93823cb65 : Per-channel quantized tensor to have only a single axis (#26675)
9aad4d7b5f : Fix _empty_per_channel_affine_quantized to be less hacky (#26243)
fbc3c14830 : adding OpProfile proto into ProfDAGProtos to support storing operation cost (#26677)
ba8002ec13 : Quantized Interpolate Kernel(upsample_nearest2d) (#26617)
aa95c7951e : _per_channel_affine_qtensor -> _make_per_channel_quantized_tensor (#26679)
8a919f4f3d : Skip observing bias across function call hierarchy (#26642)
af96e0cb5b : Whitelist ATen/core sources and headers for Caffe2 (#26609)
d63143dc5b : _per_tensor_affine_qtensor -> _make_per_tensor_quantized_tensor (#26678)
7d612066ce : Add ObserveHelper and remove some common function parameters (#26641)
7f89464b2d : fix github actions for forked PRs (#26562)
45391ccecb : Update qengine flag in python to string (#26620)
5d82cefa55 : remove unneeded code (#26640)
1eaaf8b68b : A few hub improvements (#25980)
c79d116a7d : Update ONNX Export for Gather and Scatter for Opset 11
3569a1c6dd : Fix Exporting RNN/LSTM's Initial State (h0/c0) to ONNX
cb9fd0ce58 : quantized torch.topk (#26486)
64d58c2f41 : Allow batch size of 0 in Conv
fcd13549f9 : add CondValue to unify refinements and code emission (#26145)
cbdbdd3c8c : Fix the flaky test_qlinear test caused by hypothesis deadline (#26663)
21314cfdde : "fixing" gcc bug introduced with cuda 10.1 (#26445)
ebc2365fd3 : Serialization for per channel qtensor (#26339)
c0aa6a01ce : NHWC specialization for quantized::cat (#26524)
69631c3ee3 : nightly prefix for android nightly jobs (#26652)
aeb6532e7f : BlobReference __getattr__ can only throw AttributeError (#26654)
8fc8652598 : Import torch.quantization when one imports torch
567a1981a7 : Fix ellipsis behavior for `Tensor.align_to` to glob all missing dims (#26648)
fdf2bdef0c : Revert D17450502: [pytorch][PR] [WIP] Enabled bfloat16 dtype on CUDA
e0f86f7aba : Add namedtensor build & tests to default sets (#26633)
a9a9d362e2 : Makes test_indexing.py device generic (#26634)
2a574e49b0 : Sync docker images
781f861847 : Add testing script for iOS x86 build (#26632)
fc926d9242 : fix operator level benchmark to have NHWC layout (#26577)
a79b3685db : Simplify observers declaration with functools.partial (#26492)
76697a3bfc : Enabled bfloat16 dtype on CUDA
e4821012ad : prevent generating caffe2::mkldnn for multiple times (#25257)
786d225968 : ATen port of lgamma (cuda) (#26600)
15b506068b : Remove deprecated torch.gels (#26480)
557246b77d : Fixing the calling parameters of write_gif function of the moviepy.
808f4a4d61 : Revert D17521607: Name inference for min(Tensor, dim?) / max(Tensor, dim?)
4fada96218 : Renames `tensor.renamed -> rename`, `tensor.names_ -> rename_` (#26548)
d3e90bc47d : Name inference for min(Tensor, dim?) / max(Tensor, dim?) (#25582)
6b25562489 : C++ API parity: at::Tensor::detach
bdf10380d6 : Whenever possible, use function pointers rather than std::function to represent Operation's. (#26560)
99226cd51e : Unify Quantization APIs for add, pool and relu (#26586)
7e619650c9 : Move unpickler related codes from pickler.h/cpp to unpickler.h/cpp (#26432)
95cb22f21f : _dequantize_linear -> _dequantize_per_tensor (#26576)
eca01eb0a6 : quantized average_pool2d and adaptive_avg_pool2d implementation(Revert d17437015) (#26580)
fcfca9ad62 : Skip some fragile tests (#26599)
2e82ee0335 : quantize_linear_per_channel -> quantize_per_channel (#26575)
2667493f4c : Expose supportedQEngines to python (#26474)
d117842e56 : C++ API parity: at::Tensor::version
aa78523467 : Fix CI (#26593)
d09d1d9aac : Add inplace argument to InsertObservers and InsertQuantDeQuant (#26389)
1bec8d7a15 : Get scalar type from observer module (#26425)
254122dd4e : quantize_linear -> quantize_per_tensor (#26574)
7e6a55e417 : Add DimType info in dumped debug nets (#26589)
1d2fb8d1a6 : Compiler warnings cleanup for quantization.cpp.
5016796089 : Enable creation of boxing wrappers for some aten operators (#26273)
9ed6074827 : Correct the test of a big number (2 ^ 31) (#26491)
8f68a7f241 : Add two levels to use_c10_dispatcher (#26272)
ed207b53ab : c10::KernelFunction (#26337)
8f54d0d6b6 : update android/iOS build library packing (#26565)
f7ba68e1f7 : Support IValue string type (#26517)
b401e9d8e0 : Corrected variable name and added test (#26503)
516cf051ee : Revert D17504331: Unify Quantization APIs for add, pool and relu
d6e3aed032 : add eigen blas for mobile build (#26508)
6fcbc37753 : improve how pytorch_android cmake imports static lib (#26525)
f0b7132b87 : Revert D17437015: [pytorch][PR] Add the quantized average_pool2d support and adaptive_avg_pool2d support
f337459619 : Unify Quantization APIs for add, pool and relu (#26335)
11f9fe2433 : Fix the API for record observer (#26413)
6411b92d6e : Add the quantized average_pool2d support and adaptive_avg_pool2d support (#25899)
87f80ff8ea : Support torch.pow with named tensors (#26541)
98b5b6fc13 : Implement resize_, resize_as_ for named tensors (#26493)
916eee182c : Fix for Conv shape check prints overflowed ints (#25827)
9f4174c496 : expose USE_STATIC_DISPATCH macro to public headers
73ae23a4ea : add support for real4bits quant (#25426)
1a114948ce : Fix jit/pass/peephole.cpp fuse addmm (#26357)
8c4b7a1b4b : Changes to support int8 weight and fp32 bias in QNNPACK (#26307)
f55a9da00e : Move the CUDA implementation of floor to ATen. (#25372)
71ec9a0035 : Clarify and correct the doc of atan2.
da8fbe5bf0 : Minor improvement to C++ nn::Distance tests (#26539)
a5bcde97af : Revert D17427577: C++ API parity: at::Tensor::version
b59e856517 : Revert D17486465: [jit] Make `is_optional` check more robust
198521978b : C++ API parity: at::Tensor::version
30fc011b9e : Refactor Dimname.h API to be nicer (#26366)
6703587156 : Delete tagged names
858cf76ef7 : Disable tagged names (#26479)
49777e6730 : Fix options usage in C++ module / optimizer constructors (#26483)
4c40dbcb75 : Resolve NamedTuple types in Python (#26443)
9a5b784eac : Make `is_optional` check more robust (#26312)
4444b91141 : Fix quantized::conv2d patterns in QuantFusion (#26515)
efd933dd01 : use timeout in connect function to prevent against (#26364)
9b7011c5c2 : Implement multiple dispatch (#26468) (#26501)
74710f9b9f : Implement more size-oriented opcodes in the depickler. (#26454)
60dd203a1d : Fixes test_wrapped_number (#26523)
9ca901895f : Make destructor virtual for class with virtual function (#26504)
e2515a4d6d : Allocate empty tensor instead of empty_like in binary ops, fix pow (#26498)
872ca919a9 : Distance module (#26424)
f433ee1499 : Add the FP16 weight support for LSTM in dynamic_quantize (#25975)
956b708437 : turn off autograd mode in android JNI wrapper (#26477)
afa5d0823b : Fixes big endian arch bugs. (#26383)
8f50ea0f5c : Add NoQEngine to QEngine and refactor the name of set/get qengine (#26471)
aad0263a6b : Support multidimensional inputs to torch::tensor (#26210)
436c60a854 : javadocs for Tensor, IValue, Module (#26149)
0f42881269 : fix schema matching of tuples to vartype lists (#25944)
5f2c320840 : Disable bitcode for iOS CI jobs (#26478)
e72b0be2e1 : fix cdist gradient computation if first arg is 1xn (#26254)
1f2fa8d4d8 : Make jit dicts ordered (#26465)
4f7848e520 : Make c10::Scalar::to<T>() const (#26406)
30e7665f55 : Add a CI Job to Check BC Changes in Function Schemas (#26329)
5304358859 : Revert D17481256: Implement multiple dispatch
ce3d024727 : Make `options.name_` private, and change all callsites to use `options.name()` (#26419)
587128e3dc : Use github actions for flake8 (#25824)
454bf21b36 : port lgamma from TH to Aten (#25138)
0705f759a3 : Implement multiple dispatch (#26468)
af64789cfa : Fold activation permutation inside quantized conv operator (#26242)
d5daac7223 : Fold weight permutation inside quantized conv operator (#26241)
8c1354c31b : Implement more support for per-channel quantization (#26240)
8317f75b79 : Use gradle 4.10.3 for build and publish
f673def92d : Enabled where for bool tensor on CUDA (#26430)
aad8738681 : Remove quantization for bias in pattern (#26415)
d799726474 : ensure c10/macros included before using (#26439)
68895eb9f4 : fix flaky test (#26395)
fe9dbbdba3 : Emergency Docker upgrade to version 347. (#26466)
4c1a2c2033 : add setitem to class types (#25750)
07bd76988e : Revert D17265918: Implement multiple dispatch
ece14ff473 : Implement multiple dispatch (#25653)
fc3e1a22da : C++ API parity: at::Tensor::output_nr
97c8c18a21 : tag files should not be deleted by "python setup.py clean".
d9ab78b3f0 : Moves more tests to TestTorchDeviceType (#26435)
6b4bbdda37 : fix JNI wrapper for IValue interface change (#26448)
8d9364ef32 : Refactor emitIsInstance (#26061)
d46b982db3 : Add support to call unpack for pytorch mobile quantized FC and Conv (#26211)
921079c5c2 : flat hash map that preserves insertion and deletion order (#25675)
43b30cd5d9 : make copy (#26371)
dcbfc3bdbf : Add per channel observer (#25887)
7042bfea1d : Revert D17374409: [pytorch][PR] Implement more size-oriented opcodes in the depickler.
293d73fc92 : Export gelu (#24475)
5127599152 : Implement more size-oriented opcodes in the depickler. (#25786)
595c1dfa74 : Export clamp for opset 11 (#25797)
b1ecf4bc82 : Revert D17464904: Add NoQEngine to QEngine and refactor the name of set/get qengine
cbc7172a02 : Fix quantized::linear QuantFusion patterns (#26414)
4f7292f7ee : Add NoQEngine to QEngine and refactor the name of set/get qengine (#26330)
c3f881cdbc : add script to build mobile library with host toolchain (#26440)
495dbacfd1 : Back out "[pytorch][PR] Fix many type mismatches in the CUDA version of calc_digamma and calc_trigamma" (#26444)
fb28014af0 : Remove quantizeBias (#26388)
6387ffab65 : Exclude libfbjni.so from pytorch_android not to have its duplicating (#26382)
e44ea6cd5e : tvm operator dynolog (#26295)
36ade9aa23 : Move the CUDA implementation of rsqrt to ATen. (#25285)
44ffbc43de : C++ API parity: at::Tensor::is_leaf
a8386d2a7d : fix composite learning rate (#26227)
f75c1e4939 : Add extra filtering for scale/zero_point/dtype in FoldQuantizeCallIntoBuffer (#26224)
b23be95558 : Adding quantized::conv2d function for pytorch mobile in c10 (#26152)
1f51051287 : remove extra get_worker_id call in distributed rpc init (#26381)
f29e0d70cb : Add filter function to subgraph rewriter runGraph (#26223)
12762cd586 : Use static type information to restore type tags (#25447)
ad0af1127b : Add ivalue::type(), part 1 (#25439)
d02369dac2 : add pass for onnx scalar type conversion (#24378)
248d5857ae : Adds dtypes decorators to and allows helper methods in device generic test classes (#26375)
52d999e173 : Disable QNNPACK tests if pytorch is not built with it. (#26427)
a561660241 : Puts ROCm tests on default stream (#26394)
13b544602e : Fix many type mismatches in the CUDA version of calc_digamma and calc_trigamma (#25791)
18eb92e2af : Add support for lists for prim::min and prim::max
76fb909beb : Change "named_guard" in native_functions to "supports_named_tensor" (#26352)
ecb82ed5a2 : clean up the PR job script for iOS build (#26353)
2801df5ba1 : Add a float version of calc_erfinv (by templating) on CPU (#26070)
b0b0f2c65f : Make ProcessGroupAgent take num_send_recv_threads as constructor argument (#26313)
388cfdf2ac : Removes torchtest, expands generic device testing (#26374)
ed09704899 : use allgatherv for sparse all reduce (#23917)
98ccae09af : C++ API parity: at::Tensor::grad
72aeafd3d0 : Fix no tab check (#26399)
b8ae4d0f1c : Resolve #25605 cyclic reference in _LRScheduler (#25776)
bae7528479 : Change '*' to '...' and `...` for named tensor API functions. (#26350)
277d442d18 : Rename torch.namedtensor -> torch._namedtensor_internals (#26349)
f341291bfb : Support unpickle py2 NetDef object in py3 (#26147)
f2e9622ed8 : Add l2 norm minimization (#24022)
0038111019 : Implement named tensor `unflatten(dim, namedshape)`. (#25658)
f6203a88a3 : enable xla cpp tests in CI
61197e94b3 : Remove `torch.save`-related logic from pickler (#25502)
acb300fd6b : Split PyTorch ROCm tests as 2 CI jobs to run in parallel (#26380)
193a6a6f98 : Revert D17431514: [pytorch][PR] fix schema matching of tuples to vartype lists
bb1efb3bee : Adding quantized::linear function for pytorch mobile in c10 (#26135)
59002bb095 : Kill if_true / if_false in Declarations.cwrap.
a06e1c3af7 : min(li) max(li) (#26351)
be976413f7 : Skip testing triangular_solve_batched on non-default CUDA stream (#26115)
71d3457a1f : Fix compiler unwrapping step in jenkins build scripts for Caffe2/PyTorch on ROCm (#25409)
a8073f34af : fix schema matching of tuples to vartype lists (#25944)
9181b9c73e : Enable basic GPU profiling capability on ROCm. (#26300)
b63f8ef2c9 : Rebase CircleCI to master if it is gcc5_4 (#26321)
cc61af3c3d : Add iOS test app skeleton (#26261)
0ad8c679ae : Enable support for dilated convolutions (#26205)
3ce2ceca05 : fix ctc_loss argument check error message (#26325)
a76403f609 : Revert D17367016: [pytorch][PR] Enabled bfloat16 dtype on CUDA
958d627288 : Remove dead function (#26259)
2470031f33 : Fixed size arrays (#23695)
2b20ba7bb4 : Move more ops to c10 (#26255)
57a4b7c55d : Re-organize C++ API `torch::nn` folder structure (#26262)
caed485873 : Turn on BUILD_NAMEDTENSOR permanently (#26060)
1accc38b75 : Enabled bfloat16 dtype on CUDA (#26148)
19b4314f30 : Fix typo (#26298)
a3915bdb9d : Replace simple if_true / if_false cases in Declarations.cwrap. (#26285)
e5d9a5e5be : Fix typo in docs.
1b4951d3a5 : Fix remaining invalid function cast warnings that show up with GCC 8/9 (#26104)
30f31c66ba : Kill declared_type and ignore_check from THFormal. (#26284)
925131a85e : Fix race in CUDA initialization (#25788)
2ce8c83f67 : Enable CPU fused kernel on Windows
bebc3d6aad : Automatic update of fbcode/onnx to 1316afc9f972f81340faa05763e2898f38bcc3b0 (#26309)
28d3eb8156 : Back out "Back out "[Caffe2] Fix device_option propagation"" (#25908)
9ef86b04e5 : Make TORCH_WARN_ONCE capture variables by reference (#26289)
8b7a12dd39 : Average Pooling 3D AVX2 Implementation (#26111)
2dac673861 : Enable batching for pinverse (#26095)
81d7675301 : Ensure that n is non-negative in polygamma.
13a07f163e : fix test_arange and bump ort ci version (#26320)
dc851ab5d4 : Integrate forked QNNPACK into mobile PyTorch builds. (#25844)
226ee7a889 : Adds generic device tests to test_autograd.py (#26248)
b07991f7f5 : Fix error messages; tensor creation method names with type (#26219)
448c53747a : CircleCI android nightly (snapshot) build publishing (#26069)
31960e8872 : Add missing argument for failing function call (#26311)
fcb100a3e0 : Export round (#26126)
5aff3dbaf6 : Kill 'default_init', which isn't needed anymore.
03e3f130c6 : Add derivative of cholesky_solve (#26185)
a96e41b7c0 : Use expected_wrapper only if CMAKE_{C,CXX}_COMPILER and/or is not set by user (#26306)
2b52c1d982 : Dynamic quantization for bias. (#26057)
4a947b607c : Clarified ambiguous docstring in NegativeBinomial
327e94f51b : Add __s390x__ compiler define for s390 builds. (#26233)
06c69ad8ed : Whitelist and fusion support for quantized::linear - matmul(with bias) (#26204)
6f87a1891e : Upgrade Caffe2 docker images to 306 to include roctracer and rocprofiler
ffbffb69c6 : Kill defaults in nn.yaml. (#26282)
6df70db807 : Disable broken unit tests (#26301)
f43a2c9c2f : Add ProcessGroupGloo::createDefaultDevice (#26166)
7a7425cc48 : Updating submodules
fd3cc36fab : Whitelist and fusion support for quantized::linear - matmul(without bias) (#26209)
f95d2b61d1 : Whitelist and fusion support for quantized::linear - addmm (#26208)
c92ed8dd44 : Move the CUDA implementation of round to ATen. (#25041)
b6d1105eb6 : Enabled conv methods for the bfloat16
4e538ebcf3 : Migrate away from using Variable( in test_nn.py (#26077)
c006356034 : fix hypothesis timeout (#26280)
38b2bc1451 : Upgrade MKLDNN to v0.20.5 (#25757)
df9d8f9032 : Fix no auto batching bugs: cannot bulk load; does not work with namedtuple (#26065)
24ae9b5040 : Fix binary size of OpsAlreadyMovedToC10.cpp (#26237)
976cefdb41 : Switch to the new profiler infrastructure (#26174)
91fc6f3b94 : Fix namedtensor ci (#26257)
31139b5f9a : Back out "[pytorch][PR] Refines test_torch.py generic device testing" (#26252)
21ba320cd5 : Fix CI (#26250)
a2e5445fcf : Fix Windows build (#26246)
b6b2b4c18f : Refines test_torch.py generic device testing (#26244)
26d537d744 : Remove unboxedAutogradKernel from c10 (#26130)
0e30e6570d : Call aten ops through c10 dispatcher (#23668)
e86d99ae88 : Use MIOpen for transpose convolutions (#26172)
df338f80a6 : Add a wrapper for inspect in JIT to produce better error message (#25415)
7f3c423541 : Add type hint for cuda.set_rng_state (#26200)
b4b8f53a5d : Ports most of test_torch.py to generic device type framework (#26232)
9f6b6b8101 : Back out "[quant][observer] Add histogram observer" (#26236)
3051e36e05 : Remove armv7s build from iOS (#26222)
5f9cbfa1d6 : Added possible out of shared memory error message (#25730)
4160b8cd77 : adds sync to flaky test_events_multi_gpu_query (#26231)
fbf991d062 : Creates generic device type testing framework (#25967)
dc6939ebff : Add isBackwardCompatibleWith for Argument and FunctionSchema (#23409)
1563fdb591 : Add histogram observer (#23959)
c6b75cea6e : fix circle CI
6d3ac7f85c : use whitelist for selecting observed values (#25974)
d250f01060 : Tensor renaming to dtype, shape; support long, double (#26183)
1114b05122 : Updating submodules
b5a3a8b427 : Change the source link in podspec (#26089)
16605ef2eb : Nightly build for iOS (#26074)
8c46061e2c : Updating submodules
8321f2592e : Register ATen ops with c10 (#26131)
cadf836cbc : Allow overwriting catch-all kernels (#25947)
b01520ac9c : Make schema part of RegisterOperators::Options (#26114)
0ea59786e8 : Use torch::from_blob instead of shareExternalPointer, nits (#25973)
a3f0d988d9 : Revert D17349760: Change schedulers to chainable form
43335cddb7 : Fold quantize op into module (#25625)
27b5a6c577 : Add documentation to logging
20124c4814 : guard dyndep with a lock (#26153)
e293c4ea73 : Fix 'in' returning true incorrectly (#24156)
079cd4e1fc : Remove requests as dependency (#26083)
07e7c7eb9f : Kill remaining defaults in Declarations.cwrap.
10f1d3e37b : Get rid of more defaults in Declarations.cwrap.
fef2d2e3c4 : Kill most defaults in Declarations.cwrap. (#25610)
6276958de1 : Turn setup_ci_environment into command
12086a6593 : Turn setup_linux_system_environment into command
0303ecf070 : Turn should_run_job into command
219a04ee82 : Use CircleCI commands for brew update/install (#26159)
0963e1705b : Run PyTorch macOS CPU-only build/test on all PRs
939ae80de1 : Change schedulers to chainable form (#24352)
2503fdc116 : Add data field to Tensor pyi. (#26093)
babaac3e08 : Fix bug with named tensors and (no) tracer support (#26106)
33221b19ac : C++ API parity: at::Tensor::data
5e2d25af34 : Implement tensor.align_as(other), change tensor.align_to(names) (#25843)
e544f88590 : Implement tensor.refine_names (#25842)
94964a9ba2 : Add fusion for quantized linear (#25624)
e9e7e9d466 : Automatic update of fbcode/onnx to 95252c2adec185e305e34486c6756ece9aa8f57f (#26137)
ff7921e85b : Create TensorBoard test classes in all cases (#26005)
3acab233b5 : Add device check before accessing data_ptr in PackLayer (#26056)
be82239c86 : Port fuse_linear from pytorch/tvm (#25623)
18a0040fec : C++ unregister_module function for Module (#26088)
1d87090051 : Support quantizing any methods called (#25505)
5fce76961c : Kill kwarg_only declarations in Declarations.cwrap. (#25609)
e2e1f5effd : Fix build warning in vec256_qint.h
784c4a91ea : Implementation of ConstantThenLinearWarmupLRPolicy and CompositeCyclicalLRPolicy (#25970)
f559c1d85d : Skip inserting duplicate observers (#25504)
135bbc261d : fix base_lr overridden in cyclic lr (#26105)
f9a8b8ada3 : Stop reordering TH random function arguments.
369064fa0d : remove "build_deps" arg from setup.py command in (#26113)
ffee507d36 : change gradle build to use static libtorch + gc-sections (#25984)
fbc038ab35 : simplify build_android_gradle.sh (#25897)
771cb628eb : Kill TH(C)Blas kwarg_only declarations. (#25607)
ad91d0285b : Stop re-ordering TH(C)Blas arguments. (#25606)
1eae6355d8 : tracing with an opt-in by file name (#25895)
f928994968 : make sure all out stringstreams start out empty in jit_log.hpp
076eaf4ccf : Exposing Fused8BitRowwiseQuantizedToFloat in PyTorch (#26080)
f91fbf90c7 : Skip test_triangular_solve_batched (#26108)
7e4ac8b851 : Automatic update of fbcode/onnx to 7988d8360b11e6003560076e9b1d4aa426db3244 (#25959)
bdc656da70 : TorchScript Serialization for dynamic LSTM
827d71d769 : Disable test_cuda.test_stream_event_nogil on ROCm (#26087)
f3fdbba666 : print source code when a function is executed (#25868)
4fb5a7c5b8 : Experimental warning for named tensors (#26050)
03bb7969be : Move NamedTensorMetaInterface definitions to TensorImpl.h (#26030)
a996b1d653 : Make regular softmax warp size aware (#25956)
e09c5e69f4 : Dynamic registration of RPC backends (#25734)
24d5b5f5f9 : Add Runtime flag for quantized backend. (#25680)
83ecdf76da : Revert "TorchScript Serialization for dynamic LSTM module" (#26079)
ead14a6bd4 : Use BytesIO instead of tempfile (#25976)
abb7e1365c : Upgrade the naming for fbgemm quantized op (#26064)
e3039612d8 : TorchScript Serialization for dynamic LSTM module
d4757afbe5 : remove verbose in pytorch_ci hypothesis profile (#26075)
5376ee51fd : Enable more mGPU tests (#26055)
6b7ea23d5b : Add new API for Fully Connected and Convolution Operators in QNNPACK (#25862)
a6a7f35481 : Better error messages in C2 ONNX backend (#25809)
28a2dafc15 : C++ Average Pool Module (#25800)
a7eb18e243 : Enable Unique operator tests on ROCm
276bde302e : Enables _do_cuda_non_default_stream (#25989)
ad2ec71695 : Add TEST_NAMEDTENSOR flag to namedtensor ci (#25948)
eee58f8284 : Refactor torch.*solve tests (#25733)
100ad48ced : Remove unnecessary BUILD_NAMEDTENSOR from interned_strings.h (#25938)
68f40fb2c8 : Add `in` membership checks for lists (#25796)
d546c069a4 : Preserve module names in recursive script (#24505)
d1d336168d : Skip TestHub on macOS (#26033)
32b7b8994f : Delay external imports until we're ready to test tensorboard (#25993)
6e3a8483a2 : Skip TestAutograd.test_deep_reentrant on macOS (#25942)
8ec80531b8 : Refactor macOS build and test (#25930)
66e3f080ad : Change brew update logic to run much faster (#25988)
5b0c2fe127 : Remove trailing whitespace in CircleCI configuration files
8ca93ec351 : Fix torch.arange traced as constant (#25363)
62767077c3 : add the tensor_observer to record the runtime tensor for quantization … (#25830)
ec48280afa : Improve error message when input is not in the right format (#25928)
00d967c39d : enable unit tests (#25963)
075adb4d2d : remove pthreadpool.a from install directory (#25977)
54f3cb8f79 : Updating submodules
b9bf91feb8 : Add torch.backends.mkldnn.enabled flag (#25459)
c79a13b7b6 : Simply code generation - phase 1 (#25961)
76487e16a8 : indentation for hypothesis profile and proper inheritance for QuantizationTestCase (#25934)
c475ef72f9 : Change order of activation and weight in QConfig (#25950)
63df9ffd0b : Fix typo in OpenBLAS cmake detection
2080a15860 : Add VariableTensorId, store it in TensorTypeSet (#25597)
ba9fda14a7 : C++ MaxPool Module
e04836004d : L1Loss module (#25902)
1a58a9e441 : The float version of calc_digamma should return float type. (#25488)
ebdb32c749 : Remove global group name tracking for ProcessGroupNCCL (#25905)
929764ac2a : Remove superfluous check for POLLIN in TCPStore (#25911)
e4cd807cdb : Make running Gloo tests conditional on availability
ebeb2a35ce : Increase failure threshold for timing based assert (#25867)
87a2c92615 : Updates autograd engine to respect streams set in forward (#8354)
6b3f968957 : Updating submodules
9815739d83 : Fix LBFGS on GPU (#25909)
3185b455c6 : Add assert to ensure the divisor is not 0 (#25960)
9b4f3fd7d3 : Add torch.nn.LSTM into the default dynamic quantize mappings (#25954)
21e9d1144e : fix use-after-free bug
dc015a1afb : Delete tools/autograd/env.py (#25920)
e69a6bab8c : compute common dtype based on inputs only (#25593)
8f7020bbdb : add support for ModuleDict (#25715)
a88f310151 : Simplify header inclusion in test/cpp/api/modules.cpp (#25921)
74b48f21c1 : remove protobuf from Dependencies.cmake for libtorch mobile build (#25958)
fc93d1ae6b : Add ONNX export support for torch.log1p. (#25808)
1897440e02 : add torch.jit.is_scripting api (#25955)
5d3267cd30 : Remove some more BUILD_NAMEDTENSOR flags
2655b2710c : Disable flaky test_invalid_names in test_rpc.py (#25916)
4231287504 : Add names= argument to torch.tensor ctor (#25424)
2856fd6c22 : make python rpc handler a singleton class (#25742)
16c1907830 : update build_android.sh to not build host protoc for libtorch (#25896)
4bd9ddb0b7 : remove pthreadpool dependency in aten/CMake (#25894)
ec8e75ea92 : Fix int32 overflow in SummaryOps.cu getBin #25747 (#25748)
a7eaec6cf2 : add set_grad_enabled to TorchScript and fix data attribute
387d5a4459 : Add ONNX Export Support to rsqrt
d377556f08 : Make persistent softmax WARP_SIZE aware. (#25937)
a14e884546 : Migrate pow from TH to Aten (CUDA) (#25517)
55219d55a6 : Only create a new clone of observer when we actually insert it.
618804f237 : Make lookup table warp size aware
3680cef44e : C++ Fold nn module
2ab0f221ba : Make spatial depthwise convolution warp size aware (#25922)
c749be9e9f : Make arguments of Module::dump easier to remember. (#25740)
26f67e7aa7 : fix scatter CPU kernel when (input size, src size) > index size (#25839)
5dfef472fb : make sparse coalesce warp size aware (#25918)
9c10f729de : Add Dropout to blacklist (#25881)
26675b507f : Enable libflame as a LAPACK choice (#25795)
aa49aa856c : Tensor type set (#25308)
6630c3f379 : add NO_EXPORT macro to unset __visibility__ attribute (#25816)
8485710143 : introduce INTERN_DISABLE_AUTOGRAD flag to create inference only library for mobile
41cf5564fe : gate static aten registerer with USE_STATIC_DISPATCH (#25815)
76ee02f10d : Rename packed tensor accessor (#25654)
e8cc1fddb7 : Fix cpp_extensions test failures with GCC 9.1 from ArrayRef(initializer_list) (#25384)
c60dddbb9f : Store bias in PackedConvWeight in fbgemm (#25626)
57b23c61c5 : In the CUDA implementation of erfinv, erfinv() should be used for double (#25337)
bf04c2ca2f : Make torch checks same for both CPU and CUDA multinomial (#25595)
8a026d4f74 : Remove tools/setup_helpers/dist_check.py (#25879)
a8d4bb34ea : Unify treatment of warp size / wave size (#25884)
c47ccfd01d : Enable variable size embedding (#25782)
2a917616a8 : remove cosh_ op test (#25893)
7ab4ad7b6d : add torch.jit.is_scripting() api (#25263)
36bdde255e : Fix test_det_logdet_slogdet_batched on PowerPC (#25773)
b27bcda851 : argument 't' mis-referenced as 'torch.t()' (#25885)
79bcf6e5ba : Test scripting and tracing for dynamic linear modules
20204d1fe7 : Fix c10 tracing (#25869)
67281deec0 : Fix missing str to int conversion in the commit f71ddd42 (#25861)
1777eb2ed9 : fix typo: toDense --> to_dense #25706 (#25832)
e8f316c024 : SubgraphMatcher: add logging to a check missed previously.
d7d3aedd2c : Make various improvements to C++ API parity test harness (#25828)
115494b00b : Cocoapods for iOS OSS release (#25847)
773b949a97 : Remove NULL arguments that have been marked deprecated by rocBLAS (#25866)
001ba1c504 : Clean up the iOS build script (#25822)
13292ec3c7 : Add PR jobs for iOS builds (#25840)
378881e903 : Enable log_softmax and CrossEntropyLoss for bfloat16 (#24457)
c5accd1486 : More accurately describe field invariants in OperatorEntry (#25793)
0eacd3cc5c : Upgrade NVIDIA driver on CI to 430.40 (#24242)
d1496183f5 : Fix cuDnn build error with CC3.0 platform(#25820) (#25825)
4fac61a886 : Fix typing on nn.Parameter (#25586)
f70ef229ce : Back out "[Caffe2] Fix device_option propagation"
97b432bdf0 : Back out "[pytorch][PR] remove tools/setup_helpers/cudnn.py"
4299faa10b : Fix invalid function cast warnings that show up with GCC 8/9 (#25483)
bf4a28175d : Retry connecting to TCP store on ECONNRESET (#25707)
73855ecd43 : fix cudnn static linkage (#25848)
74fa53995d : Fix assertion if NamedTensorMeta's num_names != tensor.dim (#25778)
294cf096bf : Name inference for unbind (#25585)
d7a1152ee9 : Fix error message stack overflow (#25146)
825f4714f9 : Fork QNNPACK into aten/src/ATen/native/quantized/cpu/qnnpack (#25500)
45bfa6a5c6 : Fix missing newline in compiled from source range highlight (#25802)
1c81d9006a : increase input shape to reduce variance (#25812)
a332583c59 : Quick fixes for named tensor for windows (#25728)
6257c8d634 : Add flatten for named tensors. (#25672)
bc6eec1db8 : Factor unnecessary work out of add inner loop (#25751)
03d4198a67 : Use more efficient specialized Quantize routine (#25731)
bd0e564d40 : Fix device_option propagation (#25203)
a9e56c2e68 : Make Python RPC handler not hold module in global variable (#25458)
17c1b2c715 : Relax scale to prevent saturation in conv/linear. Add test to verify precision of numerics of quantized model with updated observer. This test catches errors in (#25667)
5d7fff5d03 : Fixed nondeterministic RG for ORT RNN tests (#25205)
75cac0fe69 : expose parse_schema and __eq__ function to python and add round trip tests (#23208)
f2f804dccc : Move BUILD_NAMEDTENSOR in NamedTensorUtils.h (#25781)
2fe8341aac : Map module options between Python and C++ in API parity test (#25784)
c5a0de23e2 : Fix empty graph problem (#25599)
c9e8dcb706 : Change worker name constraint (#25780)
2bb166edb4 : Revert D17228224: [pytorch][PR] add torch.nn.Identity to __init__.pyi.in
ec3793362f : Documentation change of torch.where (#25554)
748436a514 : Enable BLIS from the FLAME project as a BLAS choice. (#23819)
7970e5720b : Rename tensor.view_names -> tensor.renamed (#25711)
3c6009e6f1 : derandomize hypothesis tests (#25513)
a41ff31702 : Correctly gate __CUDA_ARCH__ with defined() (#25729)
511d1875c5 : add torch.nn.Identity to __init__.pyi.in (#25777)
5e372862dc : Use `constructor` in test_params for C++ API parity test (#25749)
67c530851c : get rid of protobuf dependencies (#25650)
9d2d31e626 : Store bias in PackedLinearWeight struct in fbgemm (#25428)
4c7189d0f4 : fix OSS mobile CI (#25755)
3e843115c0 : Use whitelist instead of blacklist for USE_DISTRIBUTED (#25759)
66ac6698f6 : remove tools/setup_helpers/cudnn.py (#25482)
d95763b4dc : Enable loading int8 prepacked models in PredictorContainer
cc4211069e : Do not pass down USE_GLOO_IBVERBS to CMake (#25720)
d47ced49ad : Adds a -m flag to pytorch.distributed.launch (#24910)
2bed201190 : remove caffe2.pb.h dependency for embedding_lookup_idx.cc (#25670)
a6fb6e1fb3 : Expose an API to iterate all the registered operators (#23207)
21ba9b3c6d : Copy quantize routine to vec256 (#25685)
f7bcba33a6 : Vectorized specialization of max_pool2d for channels-last layout (#25676)
ed64338297 : Make tensor key in Dict works in serialization (#25442)
d939ee2d85 : Migrate digamma and polygamma from the TH to Aten (CUDA) (#25662)
38e4766349 : Add CosineAnnealingWarmRestarts to optim documentation (#25421)
88e4cee3e7 : Improve handling of mixed-type tensor operations (#22273)
9c5a899773 : Enable jit fusion on ROCm (#22872)
82c8949a9d : add __getitem__ to class types (#25664)
76bc44fb30 : Move most BUILD_NAMEDTENSOR macros out of header areas (#25721)
0be29ee2ba : Finish testing code examples in the docs (#25668)
c6dd4036f5 : Enable two tests that were skipped b/c of rocThrust bugs fixed in ROCm 2.7
1559c64417 : Cyclical learning rate multiplier: use fabs(base_lr) (#25628)
11eb8ac2a9 : Revert D17199043: [JIT] preserve ignored function return value type
a294e157cb : Align AliasInfo's operator<< with FunctionSchema (#23206)
ce3b81fdf3 : Only default USE_DISTRIBUTED=True on Linux (#25725)
30aef56e63 : rocBLAS deprecated the last two parameters. (#25726)
bc2a37b2a2 : bring back skipped bitwise dispatch (#25689)
3be1745b3c : Make SparseNormalize backwards compatible (#25660)
197fd4f707 : Adding RRef as return value for builtin operators (#25169)
99b6472d6b : move USE_STATIC_DISPATCH from CI script to master cmake (#25696)
17e7079aa2 : rename 'mobile' to 'static_dispatch' (#25695)
99cd83ea22 : Inserting observers for all methods called in forward (#25503)
7333a8c679 : Updating submodules
df043cd49d : preserve ignored function return value type (#25262)
61819260f7 : Rename FBGEMM quantized operators to generic quantized ops (#25678)
50cb48643d : Fix named tensor build (#25673)
3556bea5aa : Build torch.distributed with Gloo backend on macOS (#25260)
a3d0abf729 : move GetDimFromOrderString to caffe2/core/types.h (#25671)
a35a63b8bd : move legacy deserialization code into jit/import_legacy.cpp (#25649)
3363ec9283 : clean up binaries/cmake for mobile (#25651)
d4226392bd : change shape for some ops to reduce variance (#25686)
ef6ea545e8 : Add Python/C++ API parity tracker for torch.nn (#25289)
0806203d54 : Remove accidentally re-added file (#25677)
4d415bff2b : Add requests as a legit dependency (#25596)
76b6b1b1a6 : move no_deadline to hypothesis_utils.py (#25598)
80820b2610 : Updating submodules
55da02a86d : Revert D17097735: [quantization] Rename fbgemm quantized operators to generic `quantized` ops
2e1a5cb80e : Port new_full to ATen. (#25583)
3d9c419648 : Port new_empty to ATen. (#25475)
0cc8ac75c9 : Alphabetize Package Reference section in Docs
c9ba5186d3 : Rename fbgemm quantized operators to generic `quantized` ops (#25338)
efc5306ad2 : Make NoneType <: Optional[T] (#25361)
738303ba43 : Add set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "-Wl,--no-as-needed") to CMakeLists.txt (#25445)
817f4502fb : Dynamic dispatch for optimized quantized op kernels (#25545)
849c32f8e9 : Cpu-strided-complex support for binary-ops (#25534)
e3afe6a4e1 : Update Transformer.py comments to include a full example (#25411)
5330b7392d : Updating submodules
0c6ee947b6 : Remove forward compat code for serialization format (#25440)
bb969d5ac8 : Remove friend dependency on ClassType in InterfaceType (#25617)
2eebf08427 : Updating submodules
b266a079f0 : Enable PiecewiseLinearTransform test on ROCm (#25632)
478440a061 : Kill discover_sparse_tensor_operations. (#25589)
9f1a817742 : Kill unused enumerate_options_due_to_default.
5407241b4f : Run clang-format on torch/csrc/distributed (#25647)
ee087a6a47 : Fix clang-tidy script (#25652)
14c2492fb5 : Fix iOS simulator build (#25633)
47cee2dd22 : Implement initial version of autograd with named tensors (#25604)
791347642b : Allow TensorMethods.h to include Dispatcher.h (alternative) (#23888)
885da48d22 : remove protobuf usage from mobile build (#25493)
4fe857187c : switch to rocThrust for thrust/cub APIs (#25620)
68b9920c7c : Updating submodules
0ebbcd9541 : Name inference rules for relu/relu_/threshold/threshold_ (#25569)
9ea6238b07 : Fix named tensor printing (#25564)
0483d537ab : Add the dynamic quantized LSTM module (#25157)
4edf77b6c0 : Fuse to individual operators to GatherFuse8BitRowwiseQuantFloatMulLengthElim (#25519)
cd4a7cdaa6 : change shape for some ops to reduce variance
67d64ea910 : Fix binary op name inference to happen before shape checks (#25563)
9922e09436 : Name inference rule for torch.cat (#25568)
49baeb9d4c : Eliminate magic numbers in BatchLinearAlgebra.cu (#25524)
8edf149f7f : Don't save `self` in `index` backward (#25594)
a6ba4f64ac : Name inference for masked_fill_ / masked_fill
2aef60660f : Name inference rule for masked select (#25566)
d1e079e2e0 : Enable torch.cholesky for batches > 262140 (#24438)
0621e2ce94 : Get rid of _th_reciprocal_. (#25507)
1e4832ffad : Enable broadcasting of batch dimensions RHS and LHS tensors for lu_solve (#24333)
914a6051f9 : Updating submodules
896cd1c510 : Documentation for cdist (#25221)
9cb9f15989 : Remove index calculation in quantized max_pool2d (#25526)
500e72aaa5 : Make scatter/gather arguments optional (#25575)
493f7bd817 : Error phrasing in torch.distributed helper functions
938e740241 : Name inference rule for mean, std, var, std_mean, var_mean (#25431)
f3f83ccb23 : Added invert bitwise operation to JIT (#22324)
5c4cc1e8f3 : Prepare to add some Dimname/DimnameList overloads (#25405)
c89301a625 : Migrate multinomial from the TH to ATen (CPU) (#25274)
f793a7c57e : Implement indexing methods for sparse tensors (#24937)
832c72a2d6 : Update index.rst (#24245)
b46bc79f2f : Create helpers for implementing unary ops whose CUDA implementation is ATen. (#24879)
4864403fb4 : Delete torch/csrc/nn/type_checks, which aren't used anymore.
09ef107e59 : Add copy logic for LibTorch to avoid issues on Windows (#25556)
ba9f13448b : Updating submodules
8199bb3dd3 : add options to flush cache in SLS benchmarks (#25530)
f1059d4e6a : format sparse_lengths_sum_benchmark (#25529)
53cacb6a59 : test_allreduce_coalesced_stress message passed in as kwarg (#25557)
631c34d876 : checks requiring GPU moved to their own test (#25555)
71c97d3747 : Fixed flatten docs (I think) (#25544)
7d3564fc2c : remove MULTI_GPU (#25509)
9d37179061 : Fix CUDA distributions test on Windows (#25539)
c36b77fcda : Run clang-format on torch/lib/c10d (#25382)
40cb5182e9 : Attach 'send' autograd function to the autograd graph as part of RPC. (#24876)
a024e1e091 : Creates Torch-friendly Event class and adds Stream tracking to autograd (#25130)
6a458512c2 : Fix pow precision (#25476)
c881136215 : Move worker name collection code from Python to C++ (#24260)
ac7996ccd3 : Removes SymbolicVariable (#25077)
60c4e74e49 : Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/TensorCompare.cpp (#25402)
e316b7d548 : Multiple fixes to test_c10d.py. (#25441)
6e4eeb1d17 : Gradle tasks for publishing to bintray, jcenter, mavencentral etc. (#25351)
a27fdfd38c : Vectorized quantized relu/relu6 (#25496)
5455ba634c : remove PYTHON_VERSION (#25494)
7a921ba17d : Manually implement `is_zipfile` (#25279)
fcab254d05 : Minor fixes in per channel support for qconv kernel (#25182)
0f928dc0d9 : Revert "Memory layout for pooling ops" (#25495)
77e8dba620 : Disable Int8Transpose test
890d0f88ae : update speed benchmark binary to work in USE_STATIC_DISPATCH mode (#25449)
a779263501 : add speed benchmark binary for torch jit (#25486)
7ec6b74a35 : turn off BUILD_BINARY for android CI jobs (#25485)
6d35579910 : Fix implicit fallthrough warnings in FeatureLPPooling.cu (#25451)
861194e3f8 : Fix windows build error when TBB enabled and Windows SDK installed (#25398)
03f67e4b16 : Remove BUILD_ATEN_ONLY build option (#24441)
9bdcc499d1 : Delete a few cases where we directly use Backend/TensorTypeId. (#25467)
d159104d1f : Kill non-shared cwrap tools. (#25358)
28d4e2e9a9 : Update derivatives.yaml docs to refer to Declarations.yaml rather than Declarations.cwrap. (#25357)
d2a8435c08 : add tuple keyword (#25474)
7e61136c3b : Get rid of extract_cwarp. (#25356)
fea4225b8a : Parameterize CircleCI config (#25446)
fe055f2dfb : Get rid of more unused plugins.
a4fad42a09 : Get rid of torch._thnn. (#25354)
be0f803798 : torch/jit/passes/quantization.{h,cpp} and torch/jit/init.cpp (#25403)
9d89c9a30f : change shape for conv and unary ops (#25477)
1a92b225db : Migrate clamp and clamp_ from the TH to Aten (CPU) (#25290)
e26305ed60 : cuda devices should have same dtype (#25470)
329757a907 : Torch.flatten() returns a 1-dim tensor on a 0-dim tensor (#25406)
4fb28e5df9 : Fixes #25454
25e6a52e2e : Stop doing nn wrap. (#25353)
716815e3de : Stop initializing THNN backend. (#25352)
05bf74a890 : Compare shapes of outputs and grad_outputs in autograd.grad (#25349)
c56464d13e : Turn off warnings on Windows CI. (#24331)
f0c6021846 : fix bug in assertNotEqual for int tensors (#25412)
c76dacba84 : Add windows docs for the binaries
061f2d1683 : Skip useless macros from Windows.h (#25444)
c2b710c3bd : Revert D17067216: [pytorch][perf] add speed benchmark binary for torch jit
60f6cc9d59 : Emit script function calls during tracing. (#25089)
bbf84c1a9f : Fix dead link and syntax in ONNX landing page
0c222555ce : Attempt to fix windows build
d49f1349e9 : Updating submodules
194acd023a : Some alias analysis fixes (#25425)
d291935377 : Export Unique
8986b9e38d : Momentum setting in SyncBatchNorm forward (inference) pass. (#24995)
17831648dd : Quantized vec256 + vectorized quantized::add
8cd45b4c46 : relax roi_width/roi_height check to non-negative
93b653bba3 : Attempt to enable CrossMapLRN2d, as it no longer uses Module._backend.
c59540b7b1 : Change exception to warning (#25408)
86b1d5f271 : add speed benchmark binary for torch jit (#25230)
e370486d80 : fix binaries build for BUILD_CAFFE2_MOBILE=OFF (#25229)
1294e55c15 : Assign each RpcAgent a unique ID, and use ID for sending RPC messages. (#24195)
629a2b3615 : Remove unnecessary checks in InsertQuantDeQuantImpl (#25370)
f495a3abac : Skip inserting observers for Tensors inside fused op (#25281)
88a27ebb00 : Per Channel Quantization Support for Quantized Linear Operator (#25276)
3805be62c1 : Skip test_compare_tensor_scalar due to overflow error (#25432)
a9bb68d436 : Update QNNPACK submodule to 7d2a4e9 (#25400)
7b4eddede9 : Delete toType(const DeprecatedTypeProperties&, ...) (#25332)
d704097d33 : Add Int8Transpose operator (#16382)
e44c09ecae : making quant utilities inplace
23fde77d3d : Remove Module._backend as it's not used anymore.
f077847a45 : Revert D17078081: Invariant typevar matching on callsite checks
247cac263f : Revert D17003555: Multiple fixes to test_c10d.py.
04764d5751 : Fix allreduce_coalesced tests in c10d (#25419)
2513ca66ca : Add guards for using named tensor with serialization and multiprocessing (#25345)
0bb69f6071 : Add guard for named tensors in the JIT (#25344)
8640aef505 : Add support for non-affine batch norm with float stats and half inputs (#22750)
fe922a2e84 : Fix `item()` call in docs
1ea1d7f095 : Fixed masking warnings in tests (#25317)
8dcd256201 : Memory layout for pooling ops
2339d9f19c : Updating submodules
4cdce0da71 : Multiple fixes to test_c10d.py. (#25334)
0604b45f23 : pytorch android circleci integration (#25286)
cad3abb036 : Adding ModuleList to modules.h (#25346)
e231bd16fb : Revert D17112656: [pytorch][PR] fix bug in assertNotEqual for int tensors
8cdad0ab9f : Remove a unused member var (stop_) in process_group_agent
e59bbc82a0 : Upgrade to circleci version 2.1 configs (#25336)
a8ae33ce27 : Move autograd function for CrossMapLRN2d from being backend specific to modules/_functions. (#25339)
7a9f37d7af : Kill backend-specific lookup in CrossMapLRN2d, as it never succeeds.
66e521edd5 : Kill ConvTransposeMixin.forward, which doesn't seem to be used. (#25326)
2e934c78dd : Remove THNN sparse autograd Functions. (#25323)
c2e1cb38fd : Fix dependency by moving Dimname.{h,cpp} NamedTensor.{h,cpp} to core/ (#25280)
cb022d7bec : Fix AliasAnalysisKind::PURE on MSVC (#25375)
07fe66f25e : logical_xor doc cleanup
58a0dee749 : Replace open registration TensorTypeId with closed enum. (#25252)
2e1c37c95c : Move the CUDA implementation of ceil to ATen. (#24866)
1f21c422e4 : Add missing call to DistAutogradContainer::init (#25391)
c84dfa8fa3 : Issue #24962: Fix cuda method to support "None" arg for device and a … (#25018)
2e3a37e630 : Kill THNN function auto generation. (#25322)
8145dd35ef : Describe the relation between fold and unfold operations. (#24840)
c845984271 : CUDA_KERNEL_LOOP: prevent int overflow in loop increment. (#24818)
6d66902a81 : Re-enable libtorch tests on Windows
1e2b19db6d : fix bug in assertNotEqual for int tensors (#25199)
e8acc2ebb1 : Removing future imports from the test fixtures.
07db41bb07 : Remove spurious print
b7d992eb46 : Integration tests for qconfig_dict (#25217)
910d2f18fc : Implement FoldConvBatchnorm2d pass. (#25282)
96db3ad413 : insert_quant_dequant work with qconfig_dict (#25127)
11b4d57711 : insert_observers use qconfig_dict (#25069)
efe808b326 : Fix old annotate() error (#25261)
490eb7fed9 : Add GET_ATTR instruction (#25151)
5dd01a7eea : Pull instruction definitions out of interpreter.cpp. (#25148)
8456c96967 : Make quantized relu ops inherit the memory format from input
f88f9e1331 : Ensure quantized::add stride matches inputs (#25265)
fa902c58ee : fix inliner bug (#25052)
18c77dd243 : Quantized comparators (#24387)
7818e7e5d4 : Basic framework for Distributed Autograd context. (#24875)
8e189a327c : Fix lint
91db62a8bb : Invariant typevar matching on callsite checks (#25136)
43c4b9f2a5 : Add source location to class instantiation error (#24990)
05f1fed693 : Add OneCycleLR (#25324)
1b7f7aa12a : change LBFGS's default tolerance_grad to 1e-7 (#25240)
f362a5a04b : Revert "Let logical_xor support non-bool tensors." (#25269)
44bd63c7a1 : don't throw in constant prop (#25270)
5dd915cd1a : @albanD's #15219 augmented with SavedVariable::weak_grad_fn_ (#23502)
eb2c5930b2 : contrib-tensorboard: removed external tensorboardX dependency (#25259)
df51cbe397 : Include the correct header for make_unique in named tensor headers (#25178)
6f5fe96c80 : Implement name inference for torch.matmul (#25177)
d2719b549d : Implement name inference for torch.bmm (#25123)
eb756746ab : Fix possible deadlock in SharedCache inside a forked child proc (#25158)
805cf983b9 : Fixes test_equal
06757acb30 : Refactor MinMax observer (#23902)
e335cc3a95 : Fix named tensor test (#25313)
0cc92de447 : Extend nn.Transformer to support BERT (gelu) (#24181)
6100de9b1b : implement bool_tensor.bernoulli_ (#25076)
5248dd1a51 : Use C10_DEPRECATED_MESSAGE instead of TORCH_WARN_ONCE for Tensor.data<T>() (#25319)
80974dde4c : Move new_criterion_tests from test_nn.py to common_nn.py (#25333)
d0a525b592 : Remove unused THTensor_(add) and similar functions code. (#24864)
509abd9a81 : Fix typo "takes takes" -> "takes"
53ac931af2 : Disable cuda_distributions_test and converter_nomigraph_test on Windows. (#25305)
590619ab8c : Support all_reduce a list of same-device tensors #21640 (#24949)
afb7a162fb : Migrate erfinv and erfinv_ from the TH to Aten (CUDA) (#24943)
112f249446 : Port `pow` operator from the TH code to Aten (#23492)
d7cce32303 : note location
9b1097958e : Migrate digamma\digamma_\polygamma\polygamma_ from the TH to Aten (CPU) (#25048)
529bb859b2 : Revert D17052534: [pytorch][PR] Creates Torch-friendly Event class and adds Stream tracking to autograd
e123e24e7e : Implementation of cpu_serial_kernel for TensorIterator (#25125)
883628cb5c : Added documentation for nn.functional.bilinear (#24951)
fe541aab5f : Align AT_FORALL macros with DISPATCH macros wrt Half. (#25268)
6c9410ffd1 : Fix infer np scalar dtype mem leak (#24267)
dfa48f9942 : Disable the copy constructor and = operator of DispatchStub (#24932)
45943bd611 : Remove some unused plugins.
687aa781df : Fix typo
718feb6d76 : upgrade MKL-DNN to v0.20.3 (#22910)
9945c0cea6 : Work around for bias quantization for conv and linear operators (#25212)
a74d702e57 : Return a message instead of void from rpc udf (#25283)
98beb9ecd8 : Revert D17059087: [quant] Reducing the test size for adaptive avg pool
febcb3b7b3 : int8 static quantization in the numerical debugger
c7ef50bd14 : Upgrade the deprecated data to data_ptr APIs (#25295)
8a8844dc83 : Add the sparse feature information during logging in sparse lookup layer (#24863)
b1f7e13d5f : Revert D17063240: [fix] Specify width for st.floats in hypothesis_utils.tensor
ca4bc9fc07 : improve interface error messages (#25228)
fba107f18e : add serialization of interface (#25227)
a01358f91d : Remove PythonPrint's is_method_ member (#25226)
61818b8986 : Add interface declarations to JIT (#25258)
011db3bcaa : fix closures which always throw. (#25278)
085bd15880 : Add TORCH_WARN_ONCE, and use it in Tensor.data<T>() (#25207)
e34ef04301 : register HeatmapMaxKeypoint with C10 (#25191)
2c22076342 : Moving sign function to ATen (#22861)
9d06a984f8 : Serialization for nn.quantized.functional modules (#25220)
5b4e052904 : Add new qnnpack_add and qnnpack_maxpool op to C10 registry (#24103)
86a35d7b8d : Fixing the enforcement of the zero_point
3c3d95cf1d : disable deadline checking on test_adaptive_avg_pool2d
2e224d62b6 : Add USE_CUDNN check to AT_CUDNN_ENABLED definition (#25037)
f82c4ce6d6 : Add libtorch android build with shared lib for 4 android abis (#25192)
f8852c947b : Implement a bunch of pickle serialization features that optimize for size. (#23759)
3af758c077 : data -> data_ptr: upgrade the deprecated APIs (#25223)
a4fa167878 : Optimize LeftRight and either (#25133)
9fd62436b4 : get rid of dynamic_cast in Quantizer (#25001)
858493d168 : generic overrideable convolution for backends (#23562)
ac862e6ddc : Reducing the test size for adaptive avg pool (#25195)
f5a3d59254 : Handle empty qconfig for functional Modules (#25215)
3779893d1d : Implementation of cyclical learning rate (#23914)
c351a68f5b : Specify width for st.floats in hypothesis_utils.tensor (#25188)
44a7879b6e : Disable flaky test_adaptive_avg_pool2d test. (#25249)
c142dbf876 : Fix scriptability for Observer (#25219)
92750acb88 : Move the detection of cuDNN to FindCUDNN.cmake (#24938)
2f4f6c2563 : Implement name inference for torch.dot (#24474)
9340b155bc : Revert D15901930: Add interface declarations to JIT
1f57b8b738 : Add myself as a CODEOWNER for better discoverability (#25231)
f622ec8084 : Update mapping dictionary to support functionalmodules and pooling operations (#25216)
4d2bf0b51b : Move test QAT tests to double precision to ensure numerics match (#25211)
b15d91490a : Remove InsertQuantDeQuantNode (#25000)
fbb88f5d71 : Remove insert_observers pass (#24999)
0b60f5c0f8 : Remove deprecated graph mode quantization tests (#24998)
ab0229388c : add import for test_quantizer.py (#25222)
9e27cf617e : Initial commit for android torchvision utils (#25185)
c0334015ed : add to Tensor symmetric methods getDataAsIntArray, getDataAsByteArray (#25183)
c2e0383975 : skip tests if fbgemm is not supported for test_quantizer.py (#25209)
4b22cf6bd5 : Add interface declarations to JIT (#21972)
6e42580d32 : Simplify NamedType
26a438d4fb : Revert D16852280: Work around for bias quantization for conv and linear operators
17f69eff22 : Revert D16879133: Handle empty qconfig for functional Modules
a9fdc1923b : Revert D16879132: Update mapping dictionary to support functionalmodules and pooling operations
978a964be4 : Revert D17053634: Move test QAT tests to double precision to ensure numerics match
a1bf4d7ee1 : Integration tests for initial quantization graph mode (#24428)
77ee1f5f3c : Revert D16923660: Support observer without any data calibration
c3c36a5b68 : Revert D16923651: Serialization for nn.quantized.functional modules
ff30201fff : Revert D17059486: Fix scriptability for Observer
85d1ebd26e : Fix scriptability for Observer (#25197)
433fe47d95 : Creates Torch-friendly Event class and adds Stream tracking to autograd (#25130)
088201f95d : Implement name inference for addmv, addmv_, mv (#24471)
78fa8a8ad0 : Implement name inference for expand (#24469)
277cd748f9 : skip fstrings test if not py36 (#25184)
121839b2f8 : Fix bugs in assignment to optionals (#25059)
3b3261ca8e : Adding Scalar add/mul. (#24447)
a3e6e82b6c : Adding return for the observer in the functional_modules.py
c395f42109 : fix to loggin in AA
0156d02b59 : Implement name inference for mm, addmm (#24306)
6195aee2c6 : Fix binary op name inference between unnamed and named tensors. (#24921)
5d6b3dfdf4 : Move test QAT tests to double precision to ensure numerics match (#25189)
95a3ffc2f1 : Serialization for nn.quantized.functional modules (#24924)
a5710e2303 : Support observer without any data calibration (#24923)
794f63fe92 : Update mapping dictionary to support functionalmodules and pooling operations (#24804)
d7f6ac1dbb : Handle empty qconfig for functional Modules (#24803)
ea601d90d6 : Work around for bias quantization for conv and linear operators (#24789)
969c918f56 : bind autograd.backward and tensor.backward in TorchScript (#23913)
9a9ef21bad : quant_fusion jit pass (#24427)
105fbb9cce : insert_quant_dequant jit pass (#24426)
d80625754f : per channel quantization support (#25134)
cd14518ee8 : hyperparameter plugin (#23134)
1bf1970fe2 : Add Python/C++ torch.nn API parity test harness (#23852)
573b1cd224 : prevent generating caffe2::mkl for multiple times (#25167)
c24314bf0e : Ensure tests get passed on Windows (#25145)
30bc65271d : torch.from_numpy fix for np.int (#25139)
43a2fd0e24 : Support focal loss in MTML
b7b80c6bdd : Fix ios_crash:backtrace=FBCameraFramework:caffe2::getClockTimeMilliseconds() (perf_observer.cc (#24813)
f2bcad5ddf : Add logging to JIT CSE pass.
f71ddd4292 : Switch hub to use `requests` because of SSL (#25083)
85bca16a61 : SubgraphMatcher: matching modules support. (#25075)
16289c2fdc : SubgraphMatcher: add logging.
b5096b68d3 : SubgraphMatcher: Factor out matchAttributes.
a54f8f0f21 : use avx2 for Add without broadcast and when inputs are uint8_t (#25098)
363655dc48 : Use the EmbeddingLookup API which takes the offsets instead of lengths (#24945)
35a00155e3 : print padding_mode for Conv modules if not zeros (#23996)
add57fd267 : Support lowering of fp16 weights
bc83ed10fa : Revert "per channel quantization support (#24936)" (#25131)
9854435588 : move some methods into function.cpp (#25119)
65beee5872 : Add a skip_override option to should_run_job.py (#25118)
5fd3251c50 : add some sparse tensor ops support in TorchScript (#24967)
12ea1d74f0 : Add missing functions and methods for channelwise quantization (#24934)
9e9965035c : per channel quantization support (#24936)
aba15ce904 : Per Channel quantization APIs (#24935)
ad7250d315 : Make EmbeddingLookup APIs take offsets rather than lengths to match the PyTorch's EmbeddingBag (#24944)
6981b4e5bb : Update QNNPACK submodule to 901e9d4 (#25044)
c013c06653 : Add helper function Tensor::names() (#24914)
530db2c7c2 : Rename Tensor::names() to Tensor::opt_names() (#24907)
867d8af20f : Fix `FIXME_default_names` by storing static list of 64 none names (#24885)
6c83424620 : Optimize performance for unboxed-only kernels (#25055)
5b84514a9f : Fix lint checker breakage caused by #25111 (#25122)
199e15faf2 : fix clang-tidy failing on master (#25121)
2ec23804e2 : dictPop: dereference dict.find() iterator before calling dict.erase() (#25056)
ab38059bc7 : fix annotated assignment (#25094)
1c4495d8ac : Clean up after running doc tests (#25036)
d1f0823d23 : fix clang-tidy failing all the time on random lines (#25078)
2cccad2c56 : Turn off fbgemm for libtorch android build (#25113)
e42b238f7f : pin_memory thread now uses 1 thread only (#25111)
9a793a49e7 : Add thread-local-state NamesMode and NoNamesGuard (#24942)
56245ffe05 : Fix python lints for generate_test_torchscripts.py (#25107)
649c9cd1ca : Enable UBSAN test for FBGEMM in dynamic quant test (#25099)
d62bca9792 : jni-java wrapper for pytorchScript api (#25084)
3a59a9b36c : Implement name inference for t(), transpose(...) (#24941)
f583f2e657 : Fixed test_numba_integration (#25017)
5254b12002 : cleanup tmp name generation (#25065)
0ae030f87e : Typo correction in cuda_deterministic_backward.rst (#25011)
192a26249d : Temporarily fix hub SSL cert issue (#25042)
5c78e0c470 : Fix a bug in creating a prefix string in jit log. (#25051)
e92506a258 : BlackBoxPredictor OSS part N + 1 : strip fb/predictor/Transforms.h dependency (#23350) (#23350)
9764c2e6f0 : Adding quantized mul kernel
f9f5af0ed7 : Revert D16949314: [jit] Fix bugs in assignment to optionals
bb79b61ce7 : Fix bugs in assignment to optionals (#24989)
f8611eaa7e : Disable tsan for test_dataloader.py. (#25005)
149c646b74 : Detect and handle NCCL errors appropriately in ProcessGroupNCCL. (#25012)
1037652224 : disable custom class logic for mobile build to avoid rtti (#24994)
a805a0d3ca : Remove deprecated TH(topk) code. #24778 (#24857)
664555c757 : Fix fbcode weak ordering (#25026)
e2ccccee9a : Load tensors directly from pickle archive
c33adf539c : Fix for cdist backward for non-batch tensors (#22915)
4b77cae360 : Add qconv_test to benchmarking tests (#24913)
049284e14d : Make observer scriptable
956a347e68 : Fix the lint error in transformer doc. (#25027)
3385693edd : gradient clipping by norm
1a2a9fab31 : Remove Symmetric Quantizer in backend (#24964)
e8ea44796e : add support for multiple assignment statements (#24477)
901f9eaa89 : Migrate erfinv and erfinv_ from the TH to Aten(CPU) (#24908)
632aeb034d : Fix log_prob() in torch.distributions.Uniform, HalfCauchy and Gamma (#23017)
b9a5188178 : Fixed Error in Transformer Example (#24837)
789f4ad87b : Fixing size implementation for struct slot_list_impl (#24351)
310c5be005 : Skip setting `CUDA_NVCC_EXECUTABLE` if `CACHE_WRAPPER_DIR` not set. (#25006)
74b65c32be : Add align_corners option to grid_sample and affine_grid, change default to False (#24929)
420b37f3c6 : Deprecate tensor.data<T>(), and codemod tensor.data<T>() to tensor.data_ptr<T>() (#24886)
aa66146974 : Add ASAN instructions to CONTRIBUTING.md
173dc5d16f : __reduce__ for QScheme (#24969)
4966268a1d : Move CPU-only jobs to xenial
6dca147946 : Misc doc updates #2 (#24445)
0eb55f9ddd : PrepareQuant step (#24425)
14ac7a1d87 : Add epsilon argument to Adagrad optimizer (#24980)
65d650c6c6 : restore default constructor of OutputArchive (#24955)
38314e5b3f : Improve c10 dispatcher lookup perf (#24882)
a99a4485fa : Added relu6 kernel (#24799)
81ac6260d8 : Use absolute import of the parent folder without alias. (#24792)
e6b0ebdfd5 : Fix named tensor build (#24940)
f6daab5686 : bind autograd.grad function into TorchScript (#24871)
f21265203e : Update onnxruntime CI version (#24414)
b99ab492ea : Fix missing `super` call error
3d27e6327e : Remove `torch.contrib._graph_vis` (#24874)
4659269d1b : Remove unused ATen headers for mobile (#24850)
b3008fad2e : Revert D16220638: [pytorch][PR] Detect and handle NCCL errors appropriately in ProcessGroupNCCL.
1b8efd3d92 : Avoid race condition in intrusive_ptr.reset_() (#24464)
da860bda3d : Use correct WARP_SIZE for ROCm for EmbeddingBag
1d53d07566 : Add docs to CI (#24435)
0a23151293 : Detect and handle NCCL errors appropriately in ProcessGroupNCCL. (#22907)
8d46741bae : Updating submodules
9c9f14029d : Revert D16929363: Revert D16048264: Add static dispatch mode to reduce mobile code size
8e3c0210a5 : extend torch.jit._overload to module methods (#24259)
4b3ea92787 : Test if descriptions of args are in the template (#24161)
7ebac74d0a : Fix deprecation warnings
bd6cf5099b : Revert D16048264: Add static dispatch mode to reduce mobile code size
8ca6220509 : Remove unused DynamicDAG class.
8756ec989e : bind autograd functions into C++ (#24342)
b28a2b3a38 : Attempt to fix windows build. (#24916)
907f5020c3 : Revert D16914345: [pytorch][PR] Move the detection of cuDNN to FindCUDNN.cmake
012526dd6b : Fix Typing Error for Padding with asymmetric signatures (#24895)
a77cb2ccd1 : Revert D16915800: Implement name inference for t(), transpose(...)
cf30ec1b83 : Revert D16915806: Add thread-local-state NamesMode and NoNamesGuard
d750ab13dc : Add thread-local-state NamesMode and NoNamesGuard (#24367)
acf3b76bf0 : Implement name inference for t(), transpose(...) (#24203)
39e8d71dbd : Use a ptr to store autograd profiler rng (#24889)
896e4b6e09 : Support QScheme in script
bdc57d3833 : Merge ProfiledTensorType and TensorType (#24284)
6824c9018d : Add static dispatch mode to reduce mobile code size
0c5c442cb1 : Clang formatting the code [1/2] (#24867)
3463583349 : Fix some typos in documentation (#23507)
1efdf57aa7 : throw remote exception on client side (#24138)
d33623f7c1 : Make SobolEngine use random seed if not specified (#24884)
6ce6939be9 : Move the detection of cuDNN to FindCUDNN.cmake (#24784)
d9b4149e99 : Fix cmake backslash syntax error on Windows. (#24420)
b0737ccdc1 : Revert D16887357: [pytorch][PR] [BC-BREAKING] Add align_corners option to grid_sample and affine_grid, change default to False
f01548e5a4 : Removes SymbolicVariable from tests (#24007)
755f91b400 : serializing function calls (#23799)
eb7b39e02f : Templatize Tensor.data_ptr() (#24847)
bf978e7890 : cumsum (#24476)
e0e5813b72 : Fix unicode in comments (#24218)
7f86fb8995 : Moves (most) ops to symbolic script (#23794)
ef14d88f27 : Make torch.jit.Attribute work with PYTORCH_ENABLED=0
6100205eb8 : TensorIterator::binary_op input-output overlap check (#24058)
4358cbe01b : Allow torch.tril / triu to handle bool and half inputs (#24163)
f849ebf1fe : Enable torch.eye for bool and half (#24148)
6cf14361f4 : Add the default_weight_observer for the dynamic quantization path (#24231)
d7c6debc14 : Remove gradient value as input from SparseNormalize op (#24357)
9ebdf01962 : For int64_t atomicAdd, use the available compiler builtin on ROCm. (#24854)
927fb56ee0 : Allow SyncBatchNorm without DDP in inference mode (#24815)
a04f729b51 : Fix VaryingShape::merge
60518e0035 : Add resnext 32x4d shapes to benchmark (#24503)
a6a13e36f5 : Change kernel_size to self.kernel_size to resolve error in quantized conv module (#24499)
5aa0f89d65 : Build libtorch binary with new ABI (#23908)
b6803d62fd : Use snake names for all files in distributed.rpc (#24502)
3b22bbeb5b : enable "keeps" from BoxWithNMSLimit and caffe2_fastrcnn_outputs_inference
c6617b370b : Cache node operators to speed up optimization (#24827)
c0a796d95d : Update docs for softmax in onnx supported operators (#24832)
cd622f7655 : C++ ModuleList
87217cfd2a : Add align_corners option to grid_sample and affine_grid, change default to False (#23923)
9e7083d0a9 : Remove unused files from THNN and THCUNN (#24820)
92c63d90e8 : Remove support for old architectures in cpp_extension and CMake (#24442)
dfdb86a595 : big cpp test reorg (#24801)
85564c1456 : Record function name as an attribute of CallFunction nodes.
9228dd766a : Modify symmetric eigendecomposition derivative (#23018)
5a032f02ed : Added .pyi file for flatten (#24459)
0ce7264ed6 : Don't require slow test reporting in `run_tests.py --pytest` (#24448)
a0b13b4fa5 : extra_repr for quantized modules (#24443)
99dea08e60 : Use c10::ThreadPool to send and receive messages (#23968)
dd97743de7 : Enables `inplace` in the quantized relu (#24374)
aed306dcf7 : Add `@ignore` for script classes (#23614)
10c456417c : Clear recursive error stack on each compilation (#23458)
eee3e92936 : Enabled torch.mm and torch.mv for bfloat16
cf57f73c11 : Module: add dump function that recursively prints contents of the module. (#24356)
9b73c77390 : jit_log: Extract a function that prefixes all lines of a string with another string. (#24355)
76716f6c06 : Respect pre-defined DOCKER_IMAGE value in binary_populate_env.sh (#24787)
2e44630d35 : fix double copying of constants (#24412)
af908d57ea : Increasing precision for avg pool (#23906)
6b656565ab : Hooks for C++ API (#24393)
a3b8607811 : Fix test_jit_cuda_archflags failure on py27 due to changing dict order. (#24501)
562c5cd73b : Adds a placeholder for the 'mul' operator.
50161f3b3c : Add ONNX Export Support to empty and empty_like (#24166)
1df57c943f : pickler read guard (#24433)
ee898bffc3 : fix IR parsing bug
d7b86d0c11 : added test_tensorboard.py to TARGETS (#24040)
c676db230d : Revert D16834297: Move the search of cuDNN files to FindCUDNN.cmake.
e166811598 : Documentation for Tensor.record_stream() (#24078)
cef0443464 : Ensure proper file executable permissions in CI. (#24214)
482607c16c : Move the search of cuDNN files to FindCUDNN.cmake.
e78dad3593 : Add BPR loss to TTSN (#24439)
5c57cedc16 : change the location of wipe cache (#24454)
0b3a63b048 : skip broken test
64974ae71e : Fix naming convention inconsistency and formats in test_rpc.py
a1b111709d : Assert weight_observer has the correct dtype
354ecc42bc : Exposing the API for use with pytorch/tvm repo. (#24430)
1a74bd407d : Fixes the adding of the observer to the FloatFunctional (#24418)
49efbdce88 : Convert bias to float in quantized conv module (#24424)
696cabae9b : Baseline observer module, ensuring that (min,max) range includes zero. (#24297)
f03700b997 : Fix QConfig_dynamic typename (#24431)
cd20773701 : Set CUDA arch correctly when building with torch.utils.cpp_extension (#23408)
02dd9a4058 : Fix CUDNN location related build issue on Antergos Linux (based on Arch) (#24300)
b10a3e916f : Remove redundant assignment (#24408)
498276631b : Remove type subclassing (#24257)
0cbd7fa46f : remove CompleteTensorType
5ca612b55e : Let logical_xor support non-bool tensors.
00e4870001 : Let logical_not support non-bool tensors. (#23916)
6f08be46b0 : Implement gradient operator for GatherByKeys. (#24348)
b0e794e6e9 : Configure pytorch-probot (#24423)
74ea28322d : Replacing axis with dim in quantized cat
b53ff49c1e : Fix Caffe2 Windows build by switching to ninja. (#24330)
83bfd76b2f : Relax precision constraint on ONNXRuntime._gru_test (#24340)
32ed676b46 : Make aten_to_numpy_dtype in tensor_numpy.h public. (#23943)
3574d9ff70 : updated pixel_shuffle in opset 11 to use depthToSpace
b59fa077b3 : Misc doc updates / fixes (#24371)
5df773415b : Add _pair for quantized conv module (#24409)
c5e1e5c300 : Put ParseBlackListOps() into caffe2::glow namespace (#24384)
754bf383b1 : Change return type of observer to two tensors (#24339)
53eba982bd : kill TK_NAMED_TUPLE_DEF (#24350)
c6eddbb90f : copy methods when creating a derived class type (#24349)
761ae8e9b6 : Add intrinsic module mappings (#23753)
52b4221bfa : Enabled masked methods for bfloat16 (#24183)
bc92ce9e07 : Recommend logical_not() instead of bitwise_not() when applying sub and neg on bool tensors. (#23860)
338f9c860f : Add logical_xor operator (#23847)
1f4c73618c : Add logical_not operator. (#23839)
10d2ada17d : Fix Z7_MSVC_OVERRIDE for C source files (#24389)
0745591855 : Vectorize LowerCholeskyTransform (#24131)
59094c409e : Refactor and expose metadata of tum_history layer for online prediction
1b38a6f602 : add wipe cache
ab39a55331 : python udf over rpc (#23569)
de58df4c6f : JIT trace testing
064d156511 : (#23574)
d9d5d9a913 : Sanity fixes for bitwise_not (#24296)
e2a6212912 : Resolve unused variables in tests (#24075)
f66c90469b : Fix Lint (#24381)
806b24f168 : Temporarily disable warnings in dynamic quantization ops
7597741159 : Run quantization tests first
6a48a5b65c : Fix more warnings
a919fc3704 : test {__init__,from_float} on nnq{,d}.Linear
79710604cc : fix lint
0f64043b49 : Remove the activation observer for default_qconfig (#24299)
5b0de85868 : Register FC/Conv DNNLowp separately for supporting both tensor type (#24361)
0647a3f4c7 : Updating submodules
e8d2ddc2c4 : Make the default qconfig_dict (#24232)
53fbfd8fe8 : Fix the dimension mismatch issues when running the BERT model (#23330)
40be39e4c7 : Fix perf bug with indexed assignment (index_put_) (#24083)
9fe4052b6c : Add `trace_module` to docs (#24258)
716abd8705 : Cleanup documentation around `script` and `trace` (#24208)
0619b57c4c : Add the ability to compile exports on traced modules (#24298)
45962ac5b6 : equal() for QuantizedCPU
584c6986fd : Add the type matching rule for qconfig_dict (#23212)
bb9996509b : Fix expansion of stride argument in avg_pool3d (#23963)
897245c16d : Fix expansion of stride argument in avg_pool2d (#23961)
d373dac817 : Fix expansion of stride argument in max_pool3d (#23960)
4952224455 : Fix expansion of stride argument in max_pool2d (#23954)
4bfd33ed36 : Name inference for softmax, log_softmax and Dimname overloads. (#24087)
5cb8a7b396 : Fix out= function semantics for named tensors. (#24028)
a5872a16a0 : Rename torchtest.test_all_device_types to torchtest.for_all_device_types (#24337)
8a7e57c416 : clean up import_source (#24282)
c158848abe : class_table_ to deps_table_ (#24281)
735df86caa : make FunctionType a NamedType (#24280)
025116cf4a : make NamedType an interface (#24279)
5839a59ae3 : simplify NamedType interface (#24278)
abadf0079f : fix list comprehension type assumed to the same as input type (#24271)
a69a62cf83 : fix test_jit.py so it can be run in parallel (#24311)
88b1f6619e : Return list of AccessedFeatures from get_accessed_features (#23983)
b53916a373 : C2/glow: assign net_pos to a net before applying onnxifi_blacklist_ops (#24262)
f996f8d61d : Update tensor.view_names / tensor.names_ API (#23973)
2fcdb3a1f3 : Rename set_names -> view_names, set_names_ -> names_ (#23962)
7030f2c623 : Implement tensor.align_to(names), torch.align_tensors(*tensors) (#23804)
eabfca3577 : Named inference for contiguous(), bernoulli variants, and dropout. (#24109)
ad42c7d0f3 : Implement name inference rule for empty_like, clone (#24108)
65fa0233c5 : Add `names` argument to ones, rand, randn, zeros, full; fix empty (#24107)
e4c9aa8124 : format changes
7afe0a8c6d : no_deadline on ModuleAPITests and skip on dynamic quantization test
9492a5e0b6 : Add logging to autodiff
93d2cd7619 : Skip test_quantized_nn_mods tests if theres no FBGEMM
514285890c : Enable QNNPACK for iOS (#24030)
e94ba742b0 : Dynamic Quantized Linear Module (#23128)
0b1fee0819 : Remove escape_path in our build system. (#24044)
c771d50ca2 : Remove hard Caffe2 dependency for TensorBoard (#24295)
ec1e53b462 : Add dynamic quantized Linear op in PyTorch (#23464)
3e5e18d2e9 : Fix tensor construction from array (#24283)
45ca36faaf : Add out variant
4e0af295c1 : Fix and test conv2d constructor and from_float
e7f1977bae : test_nn_quantized -> test_quantized_nn_mods (#24201)
98a3b3d565 : Add name propagation for at::alias, add tensor.set_names (#24202)
517b3c4cd2 : Fix validation of dynamic axes names (#23974)
74765c0015 : Fix rotated rect intersection error (#24171)
0ea8f22951 : Enabled comparison ops for bfloat16 dtype on CPU (#24182)
98d3d1659e : Document benchmarking practice for CUDA
f511abb701 : increase default warmup iter and iter (#24272)
0f8d1fbe96 : Revert D16611883: [jit] simplify NamedType interface
1c5e48bbd0 : Observer returns original tensor for post training quantization (#24196)
1439152e72 : Make hashing default for bucket-weighted pooling (#24266)
19528d4106 : Revert D16611885: [jit] make NamedType an interface
c2d352138c : Fix missing version < 2 guard in import (#24255)
6e442a3fe6 : Revert D16611884: [jit] make FunctionType a NamedType
f36c3e9e4a : Revert D16684391: [jit] class_table_ to deps_table_
94aae71ba9 : Revert D16684390: [jit] clean up import_source
4e6698573b : Ignoring the test logs in case the tests are ran from the parent directory
bd054e7cef : reduce memory usage for centered rmsprop (#24170)
5ae909b443 : Revert D15920763: Move TensorOptions to ATen/core
14ab44f834 : Fix flake8 issues in ./torch/jit
c2549cb8d3 : Remove DimensionedTensorType (#24077)
4cc16782f3 : Removing the make_module script. (#23635)
6d14f7a214 : Simplify tests that should cover all possible devices (#23824)
dc870a3761 : Hypothesis tests: add ability to enforce shape inference (#23935)
5bf299b140 : Add out variant
199398bbd1 : Disambiguate tensor and string ops (#23748)
a0ddb728e6 : toString(FunctionSchema) shows overload name (#23694)
ca9456f10f : Use JIT function schema parser to parse builtin RPC ops
bb4f4e4d03 : clean up import_source (#23846)
2dbd36b384 : class_table_ to deps_table_ (#23845)
3f90b85ebc : make FunctionType a NamedType (#23697)
873e86acbe : make NamedType an interface (#23696)
a0836cb8da : simplify NamedType interface (#23691)
6cae07a668 : search class type for methods (#23689)
7923884a03 : Fix incorrect type annotation on Linear __setstate__
c00c9b2be2 : fix py2 imports in _intrinsic/modules (#24206)
40db964455 : Add support for using caffe2::ThreadPool in pytorch mobile QNNPACK. (#23658)
f510409281 : Enable FBGEMM tests under UBSAN as well (#23570)
71fd30e33b : JIT serialization for Conv2d
f66bfa7ec4 : state_dict serialization for Conv2d + some bugfixes
9559c1af3a : Re-work Conv2d
4a754dc3e3 : cleanup warnings
1daac9c0a2 : Update tensorboard.rst (#22026)
936632b120 : Thread local debug info
90895c8f85 : Fix trace docs (#24191)
497bc3f283 : Remove unused parameter from FORALL macros and rename STUBS to QINTS.
f5fefd62e2 : Align AT_FORALL macros with AT_DISPATCH macros.
75c1419b46 : Add Pickler C++ API (#23241)
d125b5ffa2 : Fix C412 lint from flake8-comprehensions update. (#24184)
06c09a266b : Ignore bugprone-lambda-function-name in clang-tidy. (#24190)
4c6c9ffaf8 : Move iOS.cmake to the cmake folder (#24029)
7583519b87 : Provide argument in ONNX export to exclude intializers from graph inputs. (#23284)
465b4de9d4 : add function name to error messages generated by checked_tensor_unwrap (#24187)
75db368031 : Revert D16763388: Add name propagation for at::alias, add tensor.set_names
6772f537f0 : Revert D16763390: Improve test_namedtensor.py with named tensor equality check
cb4a6327a3 : Delete `WeakScriptModuleProxy` (#23398)
ceb9a573d9 : Implement virtual memory computation in caffe2_benchmark binary (#24144)
90f3f9d9aa : Improve test_namedtensor.py with named tensor equality check (#24106)
1108fa1acb : Add name propagation for at::alias, add tensor.set_names (#24105)
bb4f380f35 : Optimizing out the division in the fusion
b028cc752b : Updating submodules
a671609a41 : Updating submodules
b9a006f947 : Make all at::Tensor in-place methods const (#23945)
bde73860c6 : Move TensorOptions to ATen/core (#22020)
a5f697619c : Add interfaces in lr_scheduler.pyi (#23934)
77c08aa46c : serialize modules as classes
5ec1c293eb : Revert D16552212: [jit] fix py-compat fbcode lint warnings
6be24be9ff : OpenCV 4 compatibility fix for caffe2/video (#24143)
365b3ff56e : send flake8 to stderr (#24100)
d3f6d5885d : Replace Module::copy_into with Module::clone. (#24068)
9843993888 : is_quantized support in JIT
a45dafc66a : JIT Serialization of nnq.Linear (#24048)
ca2010cfea : State dict serialization of nnq.Linear (#24047)
442b3512d4 : Simplified nnq.Linear class (#24046)
b453fd9916 : separate input shapes to reduce default execution time (#24136)
ca7e2a78e0 : support grad and data attribute for tensor in script
2790439b9d : add initial support for sparse tensors
83a594cf56 : Adding dequantize_val and requantize_val
be5eb6782b : Fix builtin function reference (#24056)
211bafc2ea : c10 dispatcher stores autograd kernels (#23666)
296f218ac7 : Allow kernels that don't have a boxed version (#23665)
9dbee5f8e5 : Unboxed kernels in c10 (#23447)
352032c93c : Open up AliasAnalysisKind for any ops (#23834)
390bfd5220 : support dict augment assign in script
c79a07e3a4 : Added type annotations to unpooling layers (#24101)
c21a774076 : Moves clamp from autodiff cpp to symbolic script (#23927)
1d3d92e770 : Port addcdiv operator from the TH code to Aten
3c1270a730 : Revert D16675418: [jit] Add Pickler C++ API
a6c3a95b7b : Updating submodules
c48fbbf215 : Revert D16603913: [pytorch][PR] Enhance Tensor indexSelect performance
f5cb95113d : Don't redefine unecessary type stub.
2f03205c65 : Support torch::tensor and at::tensor with bool and BFloat16 dtypes.
01d98c7cfb : Add Pickler C++ API (#23241)
e81f296807 : Fixed Bool in IsIntegralType bug (plus review comments) (#23942)
f45ec71c4e : fix py-compat fbcode lint warnings
0002448b43 : Enhance Tensor indexSelect performance (#23055)
d27fb41167 : tensor_numpy: add missing include header (#24042)
4f254c3c33 : Fix typo "properlyh"
928754b67d : make more iterator attributes private (#23744)
9b551b1ff7 : Fix regression in triangular_solve when number of batches = 1 for CUDA (#23953)
81ba2df554 : Allow forward functions with single output to return Variable (#23803)
0bba302da5 : Revert D16621830: Add name propagation for at::alias, add tensor.set_names
71352fbd9a : Revert D16667816: Improve test_namedtensor.py with named tensor equality check
de97b12dbd : Revert D16647820: Add `names` argument to ones, rand, randn, zeros, full
177a5c3f41 : Revert D16647821: Implement name inference rule for empty_like, clone
c23dd83480 : Revert D16731478: [pytorch][PR] [C++ Tensor API] Make all at::Tensor in-place methods const
521484eaec : Revert D16657926: Named inference for contiguous(), bernoulli variants, and dropout.
bb41e62e3b : Updated SGD docs with subscripts (#23985)
5d47d85392 : added mesh plugin (#24039)
aa02b1adcd : Fix qconv benchmark (#24019)
a0556782a0 : fix scale and zero_point names (#23991)
4dd2908dd6 : Named inference for contiguous(), bernoulli variants, and dropout. (#23808)
16b6466e5e : Implement name inference rule for empty_like, clone (#23746)
11cff2981b : Add `names` argument to ones, rand, randn, zeros, full (#23743)
5fbe824398 : Improve test_namedtensor.py with named tensor equality check (#23801)
78f3b883f0 : Add name propagation for at::alias, add tensor.set_names (#23624)
be4e6aff12 : Make all at::Tensor in-place methods const (#23945)
513c4291c5 : Suppress implicit-fallthrough warning on g++ >= 7 in caffe2/utils/math_cpu.cc (#24053)
87508f401c : Delete unnecessary file split_types.py
994f643d9a : Do not force USE_SYSTEM_EIGEN_INSTALL to be OFF in Python build scripts (#23990)
21ea0a115c : Revert D16627924: [pytorch][PR] Port addcdiv operator from the TH code to Aten
ce79d5135a : Revert D16634539: Enabling inline in quantized relu
2e8557778b : Refactor randperm test (#23526)
8659131aa6 : Add instruction on how to nest nn::Sequential (#23939)
02023d7dba : canonicalize_ops pass bugfix: copy metadata for new output (#23809)
61db8b64ec : Build option USE_NUMA should only show up on Linux. (#23673)
478c793065 : Remove numpy assert that fails on Windows (older numpy versions). (#24012)
fb77f14054 : Port addcdiv operator from the TH code to Aten (#23683)
9114089d70 : port atan2 from TH to ATen (#23558)
9558ccdd76 : Enabling inline in quantized relu
3d23c04a1c : serialize all c++ frontend modules to a single CU. (#23645)
61d0624803 : [jit] make sure NamedTuples have unique qualified names
3613a30345 : Move dict_test.cpp to test folder and fix dict_test.cpp for Aten includes (#24071)
e327df3965 : SumOp for int32 (#23995)
431d6e2189 : minor comment fix (#22140)
29e2b58b00 : Back out "[op-bench][experiment] increase predefined_minimum_secs to reduce variation" (#24065)
e80b48390d : When matching a line in CMakeCache.txt, ensure A=B and "A"=B are matched (#23745)
03a40b2bc0 : print clang tidy output to stderr (#24052)
48c6e5c05a : Updating submodules
7d8dfd6f76 : make _overloads importable in nn/functional (#24049)
6dc555cbe6 : support tensor as key type in script
bdf15311a3 : Migration doc fixes (#24033)
32efb43129 : Don't add local version to Conda packages. (#24014)
4ccb707161 : Removing deprecated warning message from torch.h (#24002)
5b9f55f33f : Enable Add, sub, mul, and div on CPU for bfloat16 type. (#22851)
341d5934b7 : Move addcmul to Aten(CUDA) (#23814)
3ad940742e : save()/load() tests and fixes
a35d2902ef : jit.script() testing and fixes (#23891)
7d207363bf : Fix master - (#24003)
02d3c302d8 : Fix build failure on OSX (#23998)
ad64789a1e : add aligned option to RoIAlign
15d3f0242b : support Gather different indices for different examples in one batch (#23813)
451fc51d8d : add support for overloading functions (#23886)
9ecc33d6f2 : metacompile isinstance checks (#23885)
33a1c30cb1 : cleanup torch/nn/functional.py (#23977)
b8b86de89b : Adds torch.random to docs/toc (#23553)
1a9334ea59 : Hotpatch CXXFLAGS to be the same as CFLAGS if CXXFLAGS is not set. (#23568)
c74216d396 : add NotIn support in script
e23e4cc356 : Back out "Revert D16469619: Add Virtual Memory and CPU percentage computation to AIBench"
e90adf59a0 : Make assertions refine types (#23949)
0f5d071d52 : Add python_requires to help pip (#23863)
9d1acd6dc2 : Disable optimizer for `__setstate__` (#23698)
323aad6b20 : No need to handle the dependency of INSTALL_TEST on BUILD_TEST in cmake.py (#23806)
5df0cf3fb4 : clang-format aten/src/ATen/native/quantized (#23898)
5411d1a27b : Fix docstring for argmax (#23775)
10b1254edd : fix crash on torch.Tensor.repeat() for 0 repeats (#23766)
ed19580dc4 : Fix dataloader._shutdown_workers if not all workers are started (#23761)
ed4ee093cb : Make typing understand exceptions (#23565)
2635b6262e : Remove K and N function arguments for fbgemm_pack_quantized_matrix (#22956)
8e9f9b424f : Replace descriptions of args in doc with template (#23439)
a1d945b295 : Roll master to 1.3.0 (#23895)
fc36842554 : Improve hip-clang support in build_amd.py (#23835)
13a684d50b : Fix test TestCuda.test_streams_multi_gpu_query (#23912)
fc82ec298b : Update CosineAnnealingWarmRestarts to follow PyTorch 1.1+ Step Order. (#23833)
78cc9b92a5 : Change fbgemm_linear_{int8,fp16}_weight to fbgemm_linear_{int8,fp16}_weight_fp32_activation (#22955)
002d4f9f7d : Erase shape information from class types (#23362)
b0a27278bd : Recursive script migration guide
8b349073ce : sync and async torch.distributed.rpc for builtin operators (#23228)
c07fc96b94 : Set caffe2_tvm_min_ops to 8 (#23893)
ddc25efc80 : Updating submodules
68318404f4 : Rename cpu-only to cpuonly, as dash features are not supported. (#23879)
40f0b1c844 : Enable OSS quantization tests (#23858)
6ba60ec9b0 : Add flag to temporarily disable MKL-DNN conv (#23837)
9588cd921e : weight_names bug fix (#23848)
d413f2d335 : format init.cpp (#23840)
43c4bcba1d : Updating submodules
52be1448e8 : Docs: Delete placeholder to use top-level file (#23869)
d58059bc6f : Fix SliceGradientOp to handle properly empty batches (#23784)
c002ede107 : Delete Travis CI config (#23788)
489cc46686 : Define toIValue conversion for dtype (#23708)
e8cf9b686b : Rename previously THNN conv kernels to have naive_ prefix.
60a4ef3074 : Remove nightly suffix from nightlies; upload to pytorch-nightly. (#23752)
e2f5bc5c08 : Properly mangle `nn.Module.__construct` (#23779)
8fc349f7be : fix some compiler warnings
51d59a43ba : fix torch.frac documentation (#23830)
8fb0d198e9 : make nn.LSTM accept PackedSequence instead of Tuples
a15845555c : Negate halves on GPU using __hneg() when possible, instead of using float conversion.
f90afff3bd : Recommend `~` and `bitwise_not()` when user tries to apply neg (`-`) on a bool tensor.
f278aee731 : Std opset export (#22310)
a710f81639 : Add CUDA 10.1 to CI. (#23791)
0015b188be : Fix typos
44ba092e5b : Remove unnecessary fetch and reset on builder checkout. (#23792)
4050de5b58 : Revert D16627326: [pytorch][PR] [ROCm] Improve hip-clang support in build_amd.py
ab15d38497 : Adam implementation minor fix (#23737)
8e9fef61f4 : Revert D15996322: Open up AliasAnalysisKind for any ops
f0a581801a : Improve hip-clang support in build_amd.py (#23699)
3ad9dbf9d5 : Open up AliasAnalysisKind for any ops (#23810)
1aa4afde80 : Document bool tensors for bitwise_not. (#23800)
6e4a83ab57 : Channels last stored in tensor (#23391)
a3c165f9d2 : Revert D16452539: support Gather different indices for different examples in one batch
dfd8a08f51 : frobenius_norm onnx export added
7d9e69e62e : allow INSTALL_TEST to pass through from env to cmake (#23793)
fb06c9e61f : qconv operator level benchmark (#22895)
be7fe1ccb9 : Add tests to ensure that both abs(0.0) and abs(-0.0) lead to 0.0 (#23701)
19c675178f : Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261)
520982d1df : Zero sized tensor support for repeat_interleave (#23717)
f87a4cc23f : support Gather different indices for different examples in one batch (#23285)
18d0873b7a : cpu binary builds are built with cu100 docker image now instead of cu80 (#23772)
6313d5e28b : add appropriate install_requires (#23722)
1b1bddaab3 : Revert D16469619: Add Virtual Memory and CPU percentage computation to AIBench
cbf05305c0 : don't try to set training after ScriptModule has been initialized. (#23680)
31137738de : Support for non-zero zero_points for weight and activation (#23541)
445440a6a9 : Add Virtual Memory and CPU percentage computation to AIBench (#23590)
7f130c8494 : Expose the quantized inputs and output of dynamic quantized int8 FC operator for debugging (#23566)
5faecc8b1f : Perform string uniquing by value in pickle serialization. (#23741)
8e2b9de860 : Document empty_strided (#23735)
f81db8afb8 : Initial torchbind prototype (#21098)
4e6e11c139 : added opset10 ORT tests (#22993)
97917fd26d : Partially revert "Remove all conda 3.5 nightly configs, remove libtorch smoketests (#21380)" (#23747)
a1b10270c2 : Fix the bug in regularizer matching (#23485)
29881c7f02 : Fix LSTM int8 quantization model size issue (#23577)
3107f1dcd5 : fix align_corners doc
d9ec37adc4 : Compress all non-Tensor components of a serialized TorchScript model. (#23723)
302adf1d20 : add LambdaRank DCG Loss Option (#23679)
fc6aec9491 : format only change (#23685)
57fc793650 : Add names to repr for named tensors
8e466b7e21 : Add torch._C._BUILD_NAMEDTENSOR() (#23623)
995920ae2c : Fix frontend error message
692825db86 : Tests for C++ custom autograd function API (#23628)
8df83ce559 : Bump Gloo (#23400)
638d0b3705 : Support ONNX export Multinomial (#23581)
87131a9bae : Fix unused imports in torch/onnx/symbolic_opset8.py (#23678)
5cb41d35da : increase predefined_minimum_secs to reduce variation (#23734)
89956374c3 : Remove qconfig_dict from API (#23465)
645b981d95 : QAT modules take qconfig as argument and keep qconfig as member (#23609)
725d6cd8ce : Extract common classes and functions from test_c10d to common_distributed (#23660)
b2f6e2bdc1 : Migrate neg's CUDA implementation to ATen. (#23617)
acc5cedf6a : Adjust maintainers list (#23693)
d1e0a3dd15 : Compress debug symbols when serializing TorchScript models.
3d15ee1b34 : Remove more uses of `DimensionedTensorType`
3314d60a75 : fix conv2d
df8638b0ed : Support Copy Op
9d2cc2c987 : Support nn.GRU in script
b22c88b8eb : Reduce input sets for tests to speed them up. (#23692)
c91f209130 : Updating submodules
0539462ca2 : Fix pin_memory_thread not exiting quickly (#23646)
3b5daef6de : Move addcmul to Aten (#22874)
dded794eeb : add setup metadata to help PyPI flesh out content on pypi package page (#22085)
ff3dd72469 : Add in-place check to AliasDb
336c9be7f4 : Slightly improve dataloader docs on when auto-batching is disabled (#23671)
7ac41b1cfd : Remove useless code from shape info
fed5ca192c : Adam/AdamW implementation minor fix (#22628)
6cf9ed4a54 : ConvBn2d/ConvBnReLU2d (#23357)
029c8e7754 : allow forward hooks in tracing (#23613)
2342b7485e : Omit local version identifier for default configuration. (#23654)
8ab99a28d9 : Fix CPU-only binary testing by properly installing cpu-only first. (#23611)
865c7eea48 : Changed tensor comparison return type from uint8 to bool (#21113)
388dc4f2a6 : Let user be able to change MKLDNN "-m" flags back and forth in subsequent builds (#23608)
02f794b102 : Add overload names to native_functions.yaml (#23532)
ec13f18390 : Allow empty Variables to be saved for backwards (#23618)
0a12ff7c5b : Use dst dir for temp file (#23629)
0ce950de05 : prefix module qualified names with __module__ (#23630)
230f7f9bbc : Include protobuf-defined outputs in the graph cutting algorithm (#23557)
9467e80097 : fix typo
88b96ba951 : Update relative links in OVERVIEW.md
3b6aa9ade6 : Add logging to Alias Analysis
2e40857dad : Fix CTC loss for zero-length targets on GPU (#23298)
08f7f27c6a : Fix named tensor build by enabling tensor.is_pinned and removing support for clone() (#23597)
3fa2df7c9a : Support custom autograd functions in C++ (#23572)
5e4c24baef : Documentation cleanup
87a75bd605 : remove ONNX & Turn on `NO_API` for mobile build (#23546)
9130ab380a : fix gemm call for CUDABlas for THCUNN conv, #23545 (#23552)
5d130e4232 : Allowing batching for det/logdet/slogdet operations (#22909)
5b66062f99 : Use prerendered KaTeX in docs. (#23376)
456e66d531 : format jit_type.h
02d5c62f34 : Fix regression in torch.qr (#23591)
50ce9e09da : Fix typos in .circleci/README.md (#23588)
3e0da2ab8e : Rename AT_FORALL_SCALAR_TYPES_WITH_COMPLEX to AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_STUBS
e324f9a093 : Remove AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_EXCEPT_COMPLEX_HALF, which isn't used anymore.
c7248dad63 : Update MKL to 2019.4 for Windows
c5482e33e9 : Rename tensor.is_named to has_named, expose has_named to python.
725e41e955 : Enable named tensors for arithmetic, clone, and tensor conversion ops
b417c2d5a7 : Refactor the pytorch_doc_push_script to take a branch
2aaeccda55 : add a test for inline tracing
9c549dfdc1 : make_module: First version
af638ad5d7 : pin_memory should not copy on already pinned tensors (#23484)
3fe00f0c90 : Fix set_grad for extension backends
775d7bd6a1 : at::view (#23452)
756bdcbca4 : Include recursive class compilations in error call stack (#23454)
941be58b5a : remove the confused CPU op (#23575)
bc64324da9 : Change condition in swap module
ab584c738b : Move overview to docs/ folder
1c86b8a783 : add docs for serialization
0a04513367 : Remove old Type based backend extensions (#22009)
3cc7da3a7d : Revert D16561561: [pytorch][PR] Remove preprocessing of CFLAGS, CPPFLAGS, and LDFLAGS in Python scripts.
649fa8e5c8 : add log stmts to peephole.cpp
9dea86f86b : Make ProfiledTensorType hashable
2d238d090c : Add Cast Op
776b6b6bcd : Cleanup interface of inlineCallTo.
be3d27589f : Added torch.autograd.profiler.record_function() as context manager. (#23428)
7364aa796d : skip nn.Identity in add_observer
5b4ac841c9 : Quantized Average Pool kernel
401fbb0088 : Port `resize_as_` and `clone` from TH to Aten (#23027)
e7abff0778 : Delete re_worker_requirements
b3a9a7a9b9 : Rename gels to lstsq (#23460)
cfe9400996 : Remove preprocessing of CFLAGS, CPPFLAGS, and LDFLAGS in Python scripts. (#23528)
fd61cc9ebc : Moved at::assert_no_internal_overlap to TensorIterator
4b78ce1ba4 : Clean cmake infrastructure up (#23527)
437a8b3eed : Named inference rule for copy_
16da355b30 : Sync worker requirement mismatches
4e59055c4d : optimize matmul memory usage for certain cases (#23433)
7b081e5d1e : Improve error message for changing tensor metadata after .data or .detach() (#23504)
db1e9b1d6c : Fix a few clang warnings.
30bc19d751 : dictKeys and dictItems ops on typed dicts return typed lists (#23270)
c8817f9436 : fix default value for script
6314af6e57 : Revert D16526027: [jit] Include recursive class compilations in error call stack
6574f6167c : Revert D16554694: [jit] add a test for inline tracing
265b498de2 : add a test for inline tracing
52b95fd4be : Include recursive class compilations in error call stack (#23454)
696642ae8d : Change docs to use recursive script API (#21612)
bfee46f8e2 : Update argument list for non-fbgemm path for qconv_prepack (#23521)
65a89472c4 : Put all modules in the global Python CU
e366af7d87 : Add TORCH_CHECK to disable sub for bool tensors (#23519)
3c986dff77 : introduce auto_set to simplify benchmarking the backward path of operators (#23276)
41dfe7204b : Threading and CPU Inference note
f4eb93f7bc : Support pack_padded_sequence and pad_packed_sequence
c384fbf4c8 : support torch._C._get_tracing_state in script
e1f8985973 : Specify onnxruntime version to install for CI tests (#23517)
c779eff579 : support torch.as_tensor in script
3a568c9a2b : CI: install clang-tidy (#23518)
a8edc2b5d2 : Add sanity checks for NCCL detection.
9219a37c12 : avoid including the same header file twice (#23418)
dec4eacae4 : fix fbcode weak ordering (#23511)
0c9979dd7d : Fix TestCuda.test_events_wait (#23520)
e982e46de3 : Add multiprocessing_context= argument to DataLoader (#22990)
56664c2c65 : Untap caskroom/homebrew-cask in attempt to unbreak OS X builds.
31f1928096 : add sorting policy to ChunkDataset (#23053)
a356276d79 : add note to Contribution Guide around recently released research (#23513)
06762b4721 : Fix distributions.Categorical.sample bug from .view() (#23328)
be644d822b : fixes #20178 (#23297)
236149edc5 : Make randperm works properly on non-contiguous tensors. (#23043)
d6ff78fd00 : fix an over-indented return in trace_module
505fa83b2f : Implement named inference rule for mul
d3fcb4ccd3 : Try another version of apt/dpkg killing.
8ada7c9920 : Remove two CMAKE_ build options from additional_options. (#23451)
09ba4df031 : Whether MKLDNN should be built under native arch should respect USE_NATIVE_ARCH (#23445)
b335f3910f : Remove redundant MSVC_Z7_OVERRIDE processing and combine "/EHa" flag setup (#23455)
81e46d4f78 : Fix build issue. CUDA may be installed in `$CUDA_HOME/lib` on macOS. (#23491)
97f129bf0a : Let set_rng_state and get_rng_state accept string parameter (#23448)
7a82066282 : Update PyTorch Docker image to 323. (#23389)
f546a3b8d8 : fixing documentation, issue 22697 (#23268)
19858f7cc6 : Sync worker requirement mismatches
91d28026f8 : Remove unused cuBLAS driver functions for getrs (#23375)
54c280863c : Add some compiler flags for building cpp extensions on Windows (#23472)
ef6356133e : Revert D16428208: [pytorch][PR] only scatter in forward if multi-device per process
64e4152064 : Clarify that torch.device without an index will always represent the current device (#23468)
ffef0e03b7 : Enabling GPU device runs for operators (#23461)
23e526e6ff : Fix SourceRange comparison
3497891c14 : add sorted keyword for lists and dicts (#23274)
f0ebf769de : allow accepting empty input to the benchmark (#23462)
522cca5040 : Support IntList in Dict's shallowEquals (#23205)
d6d7a5f075 : only scatter in forward if multi-device per process (#22384)
e1ae3a75c8 : gate module::save logic on mobile (#23415)
23f963e4a8 : Update distributed.rst (#23289)
ca76c82ce3 : Add early returns to JIT (#19179)
9223fa1c46 : Add support to serialize qtensor in JIT. (#23356)
9dad13e1f0 : Revert "Add fbgemm_qlinear_dynamic op (#23104)"
953459f29e : Dont allow conversions with QInt.
190d255d2e : Add FLOAT_MODULE to quantized conv (#23414)
83d6c6be07 : ONNX export for index_select (#21866)
74f8094ea5 : Rename threading build options
aae48748f2 : Avoid unnecessary tensor clone in Cloneable (#20995)
53182e53f0 : fix observer name in the benchmark output (#23443)
828c08b4c7 : allow passing a list of operators to benchmark (#23442)
0bc90194fb : Catch and print exception traceback in parallel_apply() workers (#18055)
7499fe72e9 : remove c2 tests from benchmark_all_test (#23437)
e5e2face8f : Change handling of DataParallel in ONNX exporter (#23365)
c8c5e11fba : Support variadic returns in Schema's operator<< (#23204)
34f53564b4 : Don't warn when using conda compilers with utils.cpp_extension (#23396)
47a54295ee : Add fbgemm_qlinear_dynamic op (#23104)
b7984fa8a7 : Remove cases of AT_FORALL_SCALAR_TYPES_EXCEPT_HALF.
0dcb8755c8 : Implement tensor.set_names_, tensor.names setter
c8a50a26d2 : Named inference rule for torch.prod
9817d7e16b : Implement named inference rule for torch.sum
4104e80eae : qconv+relu and qlinear+relu modules (#23410)
fb8725fdbd : Update doc about subsequent builds: options can be changed in build/CMakeCache.txt
0b4c0b95e9 : For second-time build, let build_type be inferred from CMakeCache.txt.
beb109f6f1 : Enable cpp api test in multigpu-test.sh (#23380)
46224ef89e : Update ONNX docs (#23185)
7a0ae0079f : export sort to onnx
1c00e0fc3f : Added a flatten module (#22245)
5b0484d977 : Fix forwarding of arguments into kernel function (#23412)
3516f3c235 : handle exit from init method (#21211)
dd79d45c5a : Added torch.bitwise_not docstr (#23397)
58a3e4f71f : Automatic update of fbcode/onnx to 28ca699b69b5a31892619defca2391044a9a6052 (#23404)
bd54608bd2 : fused qconv2d+relu kernel (#23353)
6a8c2758d5 : Add better performing versions for groupwise and depthwise convolutions (#22869)
2409e6a475 : C++ at::Tensor, torch::tensor constructors should not accept QInts.
0e3a359a38 : Align the operator<< for Argument with FunctionSchema parser (#23203)
83b0fbc38d : Remove TensorIterator::Builder (#23329)
2cfe949d45 : DEPRECATE_MESSAGE doesn't actually get expanded; inline it at both sites.
b755bc1e31 : fix importing for module defs that are named "foo.bar"
b22adeb007 : Fix error message for a wrong fork CUDA (#23322)
d18529eb93 : Fix upload path for macos binaries (#23386)
7ee62d3d91 : Fix the iOS build (#23293)
071536f895 : Fix the operator== for Argument (#23202)
bbc53bffef : AliasAnalysisKind::CONSERVATIVE/FROM_SCHEMA (#22175)
b9202d459a : Polish torch::Dict (#23344)
722f80a07d : Align String str() with the format in FunctionSchema (#23201)
7c383ba4a0 : Remove superfluous check (#23370)
39de49c7ec : Fix a tiny bug and refactor (#22808)
39fd264799 : Fix lint
ed316c0ca0 : Align Dict str() with the format in FunctionSchema (#23200)
f477cab2dc : Add type checks in _intrinsics.modules.fused (#23361)
25e06618c8 : Support parsing variadic return schema (#23199)
cf50249bde : Disable fusion of grad_sum_to_size (#23372)
82545ecc71 : Specify build dir as a global variable in BUILD_DIR in the build system.
915261c8be : Let users pass CMake-specific options starting with CMAKE_ to CMake. (#22776)
f91b19c2aa : Do not explicitly set USE_FBGEMM in tools/setup_helpers/cmake.py (#23314)
ba6f65cf33 : Add document of functions nn.init.ones_/zeros_ (#23145)
252710262f : (#22775)
0c79753c0d : Improve documentation for torch.enable_grad , torch.no_grad and torch.set_grad_enabled (#23310)
2938299de1 : Fix lint failure introduced in #23346
0842624d50 : Fix upload issue with linux libtorch nightlies (#23368)
95e822622b : Enhance interpretation of GLOO_SOCKET_IFNAME (#22978)
1dd4d55565 : Improve FindMKLDNN.cmake to avoid binary compatibility issue in MKL-DNN (#23292)
e8ad167211 : Remove usage of FOLDER constant in test_distributed.py (#23223)
711be82951 : Make optimize a thread_local flag
b3980f46a2 : Replace uint8 with int8 in Linear and LSTM quantization path (#23347)
fba325be34 : Try kill -9ing apt-get (#23354)
ff3cc795c8 : Build binaries with local version string specifying CUDA version (#23325)
cf0f3556f6 : Enabled cumsum and cumprod for bool tensors (#23346)
c9312e1a8b : Open source 3D depthwise conv (#23164)
73dee44ec5 : Specifying libtorch variant in libtorch builds (#23348)
3297d8e203 : Switch keys to be sequential and stable in pickle serialization
93da1030df : Fix pickler bug where it would not load if no tensors were saved
7922b5057d : Memoize storages in pickler
71a047c3e3 : Unwrap DataParallel automatically (#23334)
c23ba35009 : Skip QNNpack tests on ppc64le (where support is not enabled) (#23343)
22c169fb9c : Improve the error message for ONNX export (#23317)
87d3f66506 : max_pool_with_indices: return valid indices if all input elements are -inf (#23161)
b7d90332ea : add notes about overshoot in bicubic mode (#23321)
d522b3ca58 : BlackBoxPredictor OSS part N: ThreadLocalPtr, InferenceGraph (#23257)
2915d53096 : Move OptionalType wrapping out of constants.cpp
48ca64dbf7 : Better error for compiling a module type
d6dcec37b6 : Add docs about prod ecosystem features (#23010)
87482bb15a : Remove torch::autograd::Node::get_shared_ptr()
8fdbe1e10b : Support LSTM with FP16 weight (#23291)
1f608d09cf : Revert D16440000: [pytorch][PR] Re-land "Fix error message for a wrong fork CUDA"
1a9edfcd36 : Prevent xla-build from clobbering pytorch_linux_trusty_py3_6_gcc5_4_test
0c0ffccbb6 : deepCopy also copies type information of lists (#23271)
895e79adf1 : Revert D16441000: Switch from KaTeX to imgmath for documentation rendering.
94711d7471 : Quantized conv avoid functional usage (#22733)
67179d71f7 : Reduce number of processes spawned for gloo_test.TestCase.test_forked_cw (#23221)
3ed79f4b6c : Fix argument names in torch doc (#22973)
eb51131fb4 : Revert D16423217: [pytorch][PR] Update sleef to master, fixes #20535
803af9988c : Fix the problem in parseOctal and throw exception if use \xhh to specify hex value (#23198)
b9a7fc529a : Suppress warnings in tests
200cb836f1 : Enabled 'add_cuda' for bool and fixed alpha scalar bug (#23044)
fbf28b5458 : Change TensorIterator to be stack allocated, using named return value optimization to elide copies.
7203612f85 : Update sleef to master, fixes #20535 (#23168)
fd1d06e317 : Let Python build scripts accept both CMAKE_BUILD_TYPE and the oldschool DEBUG and REL_WITH_DEB_INFO variables. (#22875)
aa660b8eb7 : Re-land "Fix error message for a wrong fork CUDA" (#23209)
4cd726c7b3 : Update ROCm CI to python3.6 (#23088)
91bef6c168 : format sugared_value & compiler.cpp (#23283)
bc15a20db9 : Minor refactor: propagating messages in TestCase
8a77098247 : Make Module::register_module / register_parameter / register_buffer public (#23196)
24601daa12 : Adding check for a single batch in adaptive_avg_pool
e7a9b0d62f : Rename torch::autograd::Function to torch::autograd::Node
0ab19d66ee : Port lu_solve to ATen (#22379)
2197bee3da : add cudnn.cu
a936a90391 : caffe2/caffe2/fb/operators/cc_amrc: drop SIMD OpenMP vectorization
7ed9622fdf : Read number of workspaces from argument in recurrent_network_op (#23272)
a35136dd73 : Add support for onnx tensor index export (#21716)
1de44a6f54 : fix specialized list from dict keys (#23267)
a6ccd62a81 : BlackBoxPredictor OSS part 5: glow transforms
bdb1e1305d : exclude some caffe2 modules from libtorch mobile build (#20000)
1c0309a9a9 : make OMP_NUM_THREADS default in launch.py (#22501)
058645acb1 : Fusion and _intrinsic modules (#23003)
7b229342ca : Renamed CosineAnnealingLr to CosineAnnealingLR (#23242)
8d4956fd02 : hook up dropout sparse with replacement operator
6f01d13728 : Implement dropout with replacement for id list features. (#22880)
e0f632c58b : pickler.cpp: respect __getstate__/__setstate__
bae10db522 : Incorporating arguments to pull production operators and adding device type. (#23197)
d8220b0599 : add simple inheritance support to AST
017870a633 : kill module_lookup
2a37740a86 : make RHS of assignment optional
3be0a2b4be : Parse all stmts in class defs
0dabaad819 : Add Module::replace_module to C++ api (#22546)
f112c522af : LinearReLU module (#23022)
192dd8faf1 : Set correct list type in pybind_utils (#23188)
19be7ece15 : Fix erase_number_types test (#23181)
e56f11b750 : Fix onnx export (#23180)
60afcabc6f : DictConstruct sets correct types (#23171)
67aede98c3 : Exclude unused onnx targets (#23195)
9d03133c14 : ListConstruct sets correct element type (#23189)
2073cc73f8 : Use concrete types in jit test for generic lists (#23192)
21f52ce0d4 : Remove trailing semicolon from TORCH_CHECK macros.
174f7a586f : Switch from KaTeX to imgmath for documentation rendering.
792d527746 : Fix typos in comments
60c46dd4df : Let CMake handle NCCL detection instead of our handcrafted Python script. (#22930)
e4b75c6580 : Fix typo in dataloader.py
45d3f495ef : Add document of function torch.as_strided (#22842)
c9e62f6988 : Update nccl to 2.4.8-1 (#23186)
9a6ae5c0b1 : Updating submodules
d7448c7812 : quantized conv module (#23178)
f3a37278cc : ConvReLU2d module (#23008)
eb5137a5d1 : Export torch.arange to ONNX (#22601)
06d11f0434 : Revert D16368004: [pytorch][PR] Fix error message for a wrong fork CUDA
3861520603 : Verify flatten works for quantized Tensor (#23121)
a24f6c13a3 : Fix broken indexing when using None and ellipses indexing together (#22905)
648f10be16 : Fix load op to return the shape info as before when loading multiple blobs (#23182)
1c574458b0 : nn_quantized test (#23169)
e08f8f45ff : Turning on fbgemm for nightlies (#22784)
a6e45a69a8 : Fix error message for a wrong fork CUDA (#23030)
3ca7c0ffdb : Add get_accessed_features function to ModelLayer class (#23036)
ff23a02ac4 : Pin numba to 0.44.0 to fix Windows CI.
b6d06d5496 : Remove empty THCThreadLocal{.h/.cpp} (#23157)
fdfc676eb6 : Invert ownership between PyFunction and THPFunction.
ae5b52086e : Support converting Python number to IValue in pybind_utils.h (#22817)
2becbd3faa : BlackBoxPredictor OSS part 4: Open-source other transforms (#23099)
27031dccb2 : Updating producer_version in exported ONNX models to pytorch 1.2. (#23120)
7e31c02afe : Fixed deprecated use of yaml.load
76291829ba : Refactor named inference rule for reductions
b4b51ed5ec : Implement tensor.size(Dimname), tensor.stride(Dimname)
965b97f5f0 : Bidirectional GRU and LSTM C++ API forward fix (#22850)
e5797e9350 : Revert D16390551: Fix load op to return the shape info as before when loading multiple blobs
fcdfc35d1c : Support get/setstate with no args (#23119)
858d4a6a04 : Cleanup API and remove 'experimental' warning (#23000)
fad3031b5c : Fix type hints for None constants (#23029)
2891784a72 : Resolve with closed over variables instead of stack frame (#22270)
fd90b967b2 : Fix load op to return the shape info as before when loading multiple blobs (#23166)
82db5dceb6 : Added running via throughput benchmark options. (#23077)
2ba516d5b6 : Added add op framework overhead benchmark for C2 (#23078)
0621068cdc : Add simple add op based framework overhead benchmark. (#23076)
4223e2f9e9 : fix qat tests (#23124)
8bc28cc898 : Remove cuda free mutex (#23040)
22f7c9e31b : (#23105)
aeee49d51d : Revert "Temporarily skip mypy-0.720 to unbreak master type checks"
b8c8977be7 : Update ScatterWeightedSum Op (#23087)
ff8cb9f622 : hipify: do not overwrite files that stay the same (#23112)
2ac9abf759 : Fix memory leak in Adam, Adagrad, RMSProp (#23125)
96b6797fc0 : improve enforce in cross_entropy_op (#23062)
3e66385002 : Lint fix
963707c5ea : MaxPool2d in the torch (#22765)
cf3e6478ad : Concat with out (#22408)
05f088ec22 : make jit logging visible, so it can be used in a TVM compiler
bb9119f67d : Use set -x to help investigate doc push errors (#23111)
a62c687445 : Remove unused atomics detection code. (#23089)
4e5f70089f : fix indexing for more than 65535 elems in non-indexed first dim (#23123)
6791f395f9 : support at::view and at::reshape for quantized tensor (#23046)
a03205ed66 : Move THTensor_compute_stride to ATen (#23045)
0d8324b18a : Add fused modules in nn._intrinsic (#23085)
47af41fe72 : Quantized concatenation (+fused relu). (#21749)
9f4df63c2c : Moving np function to test area
77353636de : Conv module (#23084)
b964bdb53a : Fbgemm fp16 tensor support (#23101)
2a8d5a132c : Fix workspace destruction ordering (#23096)
79c4f83fbe : Include module names in recursive error stacks (#22921)
7cc029cb75 : Quantization aware training in eager mode (#23082)
c09e92255c : Add initial support for serializing classes
6334edc2d8 : BlackBoxPredictor OSS: open-source NQL and custom transforms (#22877)
f2f3e8ad8c : fix overspecializing constants in compilation (#22816)
a302821c5d : Adding more binary documentation
a28ffaf350 : (#22827)
818828e8a8 : Only import PIL when needed (#23023)
44493a623e : Pass variable_list of inputs to _wrap_outputs
2ee0f0bc3a : add break continue to docs
6dfecc7e01 : Remove deprecated linear algebra functions (and methods) (#22841)
61a683c212 : Delete aten/src/ATen/out.txt (#23050)
5417ddbdae : Fix get_all_math_dtypes for device='cuda' retuning None (#23028)
84c2c89e2c : Revert D16199356: [qat] Quantization aware training in eager mode
f19aa12ae5 : Revert D16274792: [qat] Conv module
c362e72d4a : Revert D16349133: [quant] Add fused modules in nn._intrinsic
2401a05aae : Revert D16373996: [fix] conv module missing return
25f0dc3490 : BERT CPU performance optimization: use mkldnn for nn.Linear() when input is dense layout (#21851)
12ac9171db : fix error message
cdfdeb74af : conv module missing return (#23058)
6601978012 : Use ProfiledTensorType in peephole.cpp
d153b0b58b : Updating submodules
23badc60f3 : Fix TBB build for older versions of cmake
e57b682abf : Add fused modules in nn._intrinsic (#22999)
12d9d768b8 : Conv module (#22899)
65ef671d11 : Quantization aware training in eager mode (#22732)
8dfbbf7bf2 : Add nn.qat.Linear (#22714)
b6011c3caf : Update torchvision in CI. (#22754)
358e0d3d44 : Updating submodules
9897ec4701 : Recursively compile class types (#22475)
425d28c30a : Reapply: optimize topk on cpu using parallel and partial sort (#19736) (#22865)
c1c4014bba : Add warning for legacy autograd function (#22922)
a2b3403962 : Mark protobuf include path as system include (#23012)
84d892b645 : Remove DistributedDataParallelCPU as DDP now supports CPU models (#22864)
a5e6586618 : Revert D16357177: [pytorch][PR] Fix race condition, bad lock hierarchy. Move getFreeMutex() into AutoNcclGroup.
11d257e5df : Fix SGD memory leak when there is weight_decay (#23007)
502766e99e : Add the mathematical definition of torch.sign to clarify this is the sgn function.
662fe699c5 : Named inference rules for some initializer fns
57cec0a720 : Named inference rules for split/chunk
6b70217a7e : Adding README for binaries to OSS
b91ab177a0 : Add support to print QTensor in cpp (#22950)
0c091380cc : disable non-deterministic cudnn ctcloss (#22977)
29347cc9cf : Fix race condition, bad lock hierarchy. Move getFreeMutex() into AutoNcclGroup. (#22173)
14ecf92d42 : Slightly improve irfft doc
c2df54d6d0 : avg_pool2d avg_pool3d for LongTensor (#22433)
52bf38007b : Remove usage of legacy autograd function (#22925)
29853293d7 : Updating submodules
992f3860a3 : Quantized relu to native_functions (#22316)
e24f18cea0 : Revert D15854892: [pytorch][PR] [tensorboard] Cleanup API and remove 'experimental' warning
a0ef4abeed : Add missing comment from #22103 (#22984)
442dd7b906 : Implement "trimmed lasso" regularization and support all available regularization in a single interface (#22966)
eb76b7a564 : Revert D16199862: [pytorch][PR] [ROCm] Update ROCm CI to python3.6
796a39ba85 : Automatic update of fbcode/onnx to 707064980b9825b8705b9d1c9aad34d8b022d5dd (#22981)
031b406c38 : Update ROCm CI to python3.6 (#22322)
5adba33c01 : Use integer floor division for pooling shape computation (#22304)
332824551c : Fix F.one_hot doc signature
074afd7143 : Remove unneeded IValue copy in unpickler.
b6adb568fb : Cleanup some logic in pickler
3c0814ffeb : add docs to onnx APIs (#22938)
4861527446 : Cleanup API and remove 'experimental' warning (#21786)
2630109727 : always restore dlopen flag in dyndep (#22958)
35b6cdc2eb : Rewriting hypothesis_utils (#22830)
b96610bf5a : fix the CI job for onnx (#22946)
f72d754877 : qlinear operator level benchmark (#22914)
7a99f3987b : Update note about tensors on CPU for certain MAGMA functions, elimina… (#22618)
5911cb8e5c : Make `load()` create only one CU
ec57d9215f : Updating submodules
7ed82ea461 : Added generation of transpose and dilated 2D and 3D for LongTensor (#22594)
bcfa023a00 : hardshrink_cpu and hardshrink_backward_cpu refactoring with at::native::cpu_kernel
ef36046ad7 : Better error message for using Python builtin_function_or_method (#22935)
25b69997c3 : Tensorboard Metrics (#22492)
7793ab0871 : More documentation about the pyobj field.
cd11109c2e : Fix messed up tests for dropout (#22893)
8ced53d62b : Correct the check of whether src is defined in copy_. (#22715)
798d5d9771 : Revert D16281714: Add sanity checks for NCCL detection.
7586ffdc57 : Updating submodules
01f03d56ee : Revert D16283037: Add sanity checks for NCCL detection.
7a370dbb41 : Enable recursive script mode as the default (#22887)
eaee0c6cd9 : Make classtypes hold a weak_ptr to their CU
b6a88b3344 : Make traced fns also go into the global python CU
c6fe864db3 : Add key_padding_mask kwarg to Transformer (#22588)
9b9546a498 : replace ByteTensor with bool in fill_test (#22913)
31497799b9 : Add sanity checks for NCCL detection.
e2046f8c1d : Add sanity checks for NCCL detection.
3ea04b59c0 : Resolve the doc issue in which two asterisks have weird links. (#22896)
3f3f5d042a : Revert D16227440: [pytorch][PR] Update note about tensors on CPU for certain MAGMA functions, elimina…
52de340629 : Export torch.masked_fill with onnx::where
6c997538b7 : Unwrap sccache post-build for ROCm compilations. (#22743)
ba38445cfd : Fix alias annotations for dict ops (#22900)
8482efb203 : pin_memory malloc now uses existing context if available. (#22229)
054c7eb0f4 : Update note about tensors on CPU for certain MAGMA functions, elimina… (#22618)
f8ad65adb1 : Fix torch.triu / torch.tril on contiguous tensors with non-default st… (#22730)
0ea8e61f03 : For consistent CUDA_HOME behavior (#22845)
560d847da6 : add benchmark for PT fill_ op (#22867)
3b1c3996e1 : remove RTTI check for TensorImpl shadow copy (#22773)
c5afdd0b55 : Revert D16197605: [jit] Make traced fns also go into the global python CU
a326aad816 : Revert D16197608: [jit] Make classtypes hold a weak_ptr to their CU
5f05037de6 : Updating submodules
94d99f2522 : add num_runs flag to the benchmark (#22892)
6ffacd5f02 : Use original module's class name for ScriptModules (#22873)
248336946e : remove stray print
f7de9be3c0 : Add FakeQuantize Module (#21767)
0cddd3e751 : update README (#21312)
7d055c21b3 : Port SVD to ATen, enable batching for matrix inputs (#21588)
260b0e8476 : Make classtypes hold a weak_ptr to their CU
5fc1260e0a : Make traced fns also go into the global python CU
16aa235f43 : _script_compile and _script_class_compile add to the python CU
f2f80744be : Close loophole to create untyped tuples (#22518)
800f4936f0 : Deprecate untyped Lists (#22517)
bd88fd0793 : Added .bfloat16() (#22852)
8399197df6 : Set up CI with Azure Pipelines (#22839)
535c5540bc : Back out "Back out "[pytorch][PR] Move thnvrtc and DynamicLibrary to ATen"" (#22794)
317cf7c874 : Remove tensor_data() call in Python Variable() and nn.Parameter() constructors (#22821)
14e8fb70a1 : Make the signature of fill_out consistent with fill_.
1c266c2738 : Move the body of fill_kernel_impl into fill_kernel_cuda
fc297b8e83 : Move fill and fill_diagonal to Fill.cpp, Fill.h, and FillKernel.{cpp,cu}
815e73bc20 : make_variable consumes the Tensor if it only has one reference
b5fa9a340a : Temporarily skip mypy-0.720 to unbreak master type checks
1a93b96815 : Revert da315a4
92468f0a6b : Revert D16238204: Revert D16224780: [pytorch][PR] [ROCm] MIOpen integration into pytorch RNN operators
da315a4e2a : Revert D16037021: Support GRU module quantization in Pytorch
fcfefc3439 : Revert D16224780: [pytorch][PR] [ROCm] MIOpen integration into pytorch RNN operators
ead1193241 : Transfer Learning: Caffe2 load op changes to return shape inference (#22829)
d8c1b86135 : Support GRU module quantization in Pytorch
ba9d559a12 : Get rid of torch.mean shape analysis
9eb039334f : Use Linear Operator with fp16 weights in JIT (#22323)
573d9e6975 : Support Linear operation with fp16 weights in ATen (#22023)
35ee4bf4e5 : Revert D16204820: [pytorch][PR] optimize topk on cpu using parallel and partial sort
cf2889ad8f : add support for breaks and continues (#21692)
b3147bc674 : PyTorch export to ONNX Opset 7 and 8 - Cont (#22421)
9f8e2c067f : MIOpen integration into pytorch RNN operators (#22774)
30e03df638 : Speeds up fast-path for 1D tensors (#22756)
02bc06a683 : avoid kernel launches for zero-sized tensor inputs
b1b65f34a9 : Make PythonArgs::tensor and PythonArgs::scalar faster (#22782)
10c14ad17c : optimize topk on cpu using parallel and partial sort (#19736)
fc23d7f3bd : Speed up TensorIterator::compute_strides a little (#22779)
f266a63eeb : Initiate checkCuda90Bug warning (#22757)
ccb28939bf : Revert D16222539: [pytorch][PR] Let users pass CMake-specific options starting with CMAKE_ to CMake.
612eed31a9 : Let users pass CMake-specific options starting with CMAKE_ to CMake. (#22776)
7eb0319339 : add new tests to benchmark_all_test (#22787)
1878800f47 : make custom op work in OSS environment (#22781)
8ec712da30 : Add the support of handle Bias being nullptr for torch.ops.quantized.fbgemm_linear (#22403)
9d11004ee4 : Update ONNX constant folding to support opset 10. (#22515)
291570e085 : make CompilationUnit::define return defined functions
de819be93e : refactor self to be a class again
22d70e0d4b : Give functions qualified names
4b48ae4aec : Suppress progress bar only for pip install
05d56bd1b6 : Remove hard-coded NVRTC specific constant from fuser header
513b7a7a06 : assert_no_internal_overlap pass op name by const ref
9690f8629d : Move the storage in empty_cpu
a797815198 : bucketize op shape inference (#22716)
ac78a86e1d : Back out "[pytorch][PR] Move thnvrtc and DynamicLibrary to ATen" (#22749)
8bdda03ae1 : optimize RNN on CPU (#22512)
3135298dde : (#22602)
1682d38a25 : Improve hypothesis_utils.py for qtensor (#22693)
3fabb9f105 : Fix lint
45cf33a731 : add fill_diagonal function (#21892)
89d6e88042 : Add environment variables used in CONTRIBUTING example (#22736)
5147819f9d : enabled MIOpen depthwise convolutions (#22696)
d21e476dcd : Quantized Conv2d Module (#21323)
ad634875d0 : Mark Unpickler data ptr arg as const
4240220926 : Revert D16183577: Delegate Python ~ (invert operator) to Tensor.bitwise_not().
1ecc945ab2 : Revert D15998762: [jit] Give functions qualified names
a1ca32409f : Revert D15998758: [jit] refactor self to be a class again
e6eb17303f : Revert D16184799: [jit] make CompilationUnit::define return defined functions
fffa7200c1 : fixing lint
67c634d58e : add a comment to native_functions explaining softmax interfaces (#22651)
0196e0bafb : add line numbers to jit_log.h
c49a71f91f : make CompilationUnit::define return defined functions
ee9c8a75f4 : refactor self to be a class again (#22207)
c0674cebf1 : Give functions qualified names (#22206)
86fc417147 : Move Quantization Models to common_quantization (#22706)
ebafa2e15f : Turn on USE_DIRECT_NVRTC in fbcode again. (#22685)
edeb4dbdcb : register __getitem__ builtin
368dbb9ab3 : Fix a FIXME in test_nn (#22675)
00df49c984 : Fix Trace inlining of graphs with optional inputs (#22686)
3e3e6ee335 : Add common_quantized test case utilities (#22694)
7750cae722 : Refactor and improve randperm tests.
32709af8f4 : Swap detection order in randperm_out_cuda to avoid unnecessary conversion from float when the input is small.
0f7c3710dd : Support Half type in randperm.
9c4c9c3af0 : Delegate Python ~ (invert operator) to Tensor.bitwise_not().
574e808680 : Add a bitwise NOT operator for integer and Boolean types (CUDA).
e2dc1fc715 : Add a bitwise NOT operator for integer and Boolean types (CPU).
58e20638f7 : Refactoring _wrap_outputs to remove python dependence.
ec1b669d23 : fix dce over loops
9b8d771733 : skip import nccl and gloo_gpu in cpu machine (#22522)
b984b0ab4b : fix print (#22689)
f81395b3e3 : Enable more passes in ProfilingGraphExecutor
10c60b601a : Added Bfloat16 tensor for cpu with very limited support (#21860)
6eb3969ac7 : keep requires_grad unchanged after converting bn to syncbn (#22569)
cbb0b8166d : Revert D16161144: [pytorch][PR] Add traces to LowerGradOf and SpecializeAutoGrad
3a8d7463bd : Enabled BFloat16 storage (#21523)
932ec8aa9f : Updating submodules
e72b617eb5 : Introducing bfloat16 type (#21522)
de5a481c6e : add forward declaration in stateful dataset (#22562)
3cf5f22f02 : Enable C2 operators running with {cpu, gpu} * {forward, backward} (#22664)
95a5da175d : change c2 bench to use new tensor creation interface (#22663)
e1fdf8a46f : Add comments about adding new build options. (#22641)
e2216ada65 : Properly formats errors rising up from C++ extension compilation (#22445)
50901be9fb : Add traces to LowerGradOf and SpecializeAutoGrad
0c2cd93e43 : Avoid potential extra copy in _lu_with_info_cuda (#22634)
45aad2e680 : change unary, pool, max ops to use new interface (#22661)
2b2fe525b9 : introduce a new interface to add a list of operators (#21209)
164388150a : fix lint (#22654)
8a233b99cb : Report errors through call stack (#22280)
13d58fd9f5 : README for the quantized op creation (#22165)
dd4982e287 : Cleanup integer sign warnings
046c4589df : lu: When not using pivoting, return the identity permutation instead of zeros (#22242)
7fcfed19e7 : Fix interpreter lines for files with python2-only syntax.
5040d52a5a : torch.quantization conversion utilities, observers for eager mode quantization (#22010)
073fa6f411 : add GRAPH_UPDATE logging to guard_elimination.cpp
a3346e100e : Performance improvements for depthwise convolutions in FP16 (#22302)
31d821e267 : Move thnvrtc and DynamicLibrary to ATen (#22362)
74883d4865 : Fix typos in gradcheck error message
92e8d04098 : Sync worker requirement mismatches
9fb4386c14 : Add a higher-level log traces to DCE
5395db22a4 : Typo fixed in documentation
b5860b5572 : torchvision version changed to the latest one (#22598)
738aba171b : use caffe2_dnnlowp_force_slow_path in FC (#22143)
905b9a89b2 : fix uninitialized variable in BailOutInserter
c97829d701 : Adding FC and Relu QNNPACK ops to C10 registry (#22174)
0e7b65e48a : Convert VariableVersion counter to intrusive_ptr, saving a memory allocation on every Tensor.
0c1ecf19e1 : Simplify the flow control in div_kernel_cuda. (#22555)
478d480d37 : Add Module.requires_grad_ (#22576)
456d27dff0 : Update module.h (Fix a grammatical error of the comment in line 233) (#22548)
3a12520844 : Pass Variable into Caffe2 ops, by requiring that the Variable doesn't require grad (#22473)
304552b61a : Enabled masked fill and scatter for bool tensors (#22491)
a48cf8f52d : Fixed RNNImplBase reset and flat_weights methods to handle bidirectional flag correctly (#22493)
595e344769 : Add type stubs to import 'nn' modules (#22411)
9bafe5d4da : Fix torch.normal with CUDA tensors (#22533)
80e2fab952 : Deprecate and set a date for removing NO_* and WITH_* (user) build options (#22474)
43d36415b9 : torch.utils.data.Dataloader: documentation about RNG state consumption (#22540)
ce8c9d9bd5 : Fix cuda detection script (#22527)
d4464d3418 : Use system locale in collect_env.py (#22579)
d48cbd62cd : Fix spectral_norm load_state_dict with strict=False (#22545)
94bd5ddf7f : Add some essentials for building c++ extensions on Windows (#22563)
9db7bc8bc7 : fix uninitialized variable warning (#22477)
42c6ea5faa : ONNX Export Topk with Dynamic k (+ add test cases)
221af09ca7 : Move GradMode / AutoGradMode / NoGradGuard to ATen core (#18573)
39a4ec4141 : Make device_of take tensor by const ref
7730346853 : Make shuffling optional in DistributedSampler (#22479)
9e1ae4b264 : Updating submodules
577042a3cc : Better Constant Propagation through Tuples (#22561)
a09150adc0 : Deprecate untyped Dicts (#22516)
595d2dbb4d : Updating submodules
91706d1044 : Primitive Jit Logging
ed60d9fcf9 : List/Dict remember and check their element type (#22005)
042da2171e : Skip test_slogdet_sign if LAPACK library is not installed
3507eaf3ea : Add clone() implementation for QTensor (#22510)
0140a756d8 : Prioritize reentrant tasks and execute them recursively until close to limit
e5d640341f : Set stream for softmax kernel launch (#22470)
d2ceab2766 : update video input (#22471)
4ba1c4f798 : Add the support of handle Bias being nullptr for torch.ops.quantized.fbgemm_conv (#22472)
57dbc79674 : fix lint
b952011bae : use save/load for emitFunctionHook
bc24593a80 : remove unused argument in import.cpp (#22205)
4b9b7d6f03 : improvements to QualifiedName (#22204)
f5919dba45 : refactoring of module/object (#22203)
3b2844eeea : Make CompilationUnit own Functions (#22202)
66378c7025 : make test context managers exception safe
2b06e7cd50 : add #pragma once to jit headers
6f6a680481 : remove erase_fork_wait.h
799633e4cd : move casting ops from prim to aten
97a604ef57 : Rereapply optional ScalarType interface changes that were reverted in D16079809 (#22456)
10c4b98ade : Remove weak script (#22212)
b93f29ded3 : add JIT path to the benchmark (#22309)
29ec4769bb : Fix SyncBatchNorm running var update issue (#22248)
325ec2327f : create tensor based on provided datatype (#22468)
319ef3bcbb : Fix onnx custom op export & add initial test case (#21321)
9c44f6c723 : generate tests based on op metadata (#21432)
2732a5e534 : Another dce fix (#22499)
d9e15bccb0 : Perform weight re-init for embedding table in sparse_lookup.py (#22348)
c9f41e9bc0 : Add device guard around MPI operations (#22446)
abb2e68989 : Don't construct a single element array for unary ops
7fef0b7b72 : Take const refs in TensorIterator::mark_outputs
0d63619414 : Deprecate vector/unordered_map again (#22478)
17cc79865d : Fix dead code elimination in onnx export (#22476)
76e14c1e51 : remove caffe2/core dependency from ATen/core/jit_type.h (#22441)
e210c65097 : Add `torch.where` overload with only condition argument (#21986)
dcd902bdde : provide "size" parameter in torch.normal when called with two floats (#20545)
bb0f299f27 : Update MultiheadAttention module support key/value with different number of features and allow static key/value (#21288)
d684112ec9 : Output sequence probability with CTC beam search, optional multiple output sequences (#21927)
830c6590ef : EraseNumberTypes cleans itself up (#22461)
a6441c00d6 : Remove build variable NCCL_EXTERNAL (#22467)
34f950c800 : Create C2 operator to replace values in embedding table (#22279)
040a4bd914 : include conv_op_impl.h from conv_dnnlowp_op.cc (#22458)
474dec4b00 : Warn on conditions that can trigger cuBLAS sgemm bug (#22034)
f5b3f9ecd9 : Remove unnecessary ROCm detection code in Python scripts. (#22464)
e68dc899d1 : Fix compiler warnings (#22162)
53a52f574f : infer shape until no more change (#22425)
07ef85e326 : Add USE_MKLDNN_CBLAS build option. (#19014)
6d5871300b : Use concrete types on call sites for Dict/List (#22004)
693871ded3 : Rename macros and build options NAMEDTENSOR_ENABLED to BUILD_NAMEDTENSOR (#22360)
bb07f2d063 : Pass LRU hash output evicted_values to SparseLookup (#21389)
869ce89474 : use feenableexcept when glibc is available (#22241)
e74b0fc87c : Fix empty_like for quantized tensors. (#21978)
7235532df3 : Revert D16088193: Refactored math tests to iterate over all math ops
a845d02cd5 : Revert D16088191: Added math.log2 and hypot
2dd71b18c4 : Fix PoolWindow crash from thread_local (#22405)
7ca7edc307 : ONNX Export LayerNorm
a4b2f3e213 : Implement AdamW optimizer (#21250)
c9a8413306 : Numerical stability of embedding kernels (#22401)
b76877728a : Added math.log2 and hypot
3d3d07b7dd : Refactored math tests to iterate over all math ops
0ffda97aa4 : Make Gloo an optional c10d dependency (#22257)
b9ede6600e : Remove the USE_MIOPEN build option as MIOpen is always used when built with ROCm. (#22420)
6721e67c10 : Remove hacky stub for quantized ops (#22388)
2dd1323379 : Fix the GPU trainer for NoneCalibration and RNN
5bd97be309 : Fix lint error in format_time() in throughput_benchmark.py and clean it up a bit. (#22424)
edd5b770be : Remove API-level guard on NeuralNetworks.h (#22429)
de84104059 : Lint ONNX Related Code (#22423)
ffa15d2285 : Load original SourceRanges on import (#22180)
2c2a913a4f : Preserve SourceRanges across serialization (#22179)
e05942c09b : Serialization methods for SourceRange and Source (#22178)
671782d88a : Refactor file:line:col to be less ugly (#22177)
dff2c07183 : Manual revert of D16012838
2c18bf21be : Fix `ScriptModule.__dir__()` (#22426)
f0f2331a1c : Add support for cross-chunk shuffling in ChunkDataset (#22347)
1f9c4fdb5e : split onnx passes (#22413)
a54acd3755 : Update the way boolean tensor are being printed (#22238)
cbf572671d : update mkldnn-bridge to avoid mem leak (#22392)
402b9f9a6d : add PT chunk op to the benchmark (#22409)
8a726f5815 : add PT split op to the benchmark (#22410)
8281909e73 : add PT cat operator to the benchmark (#22404)
007fd01e9b : Enable PT operators running with {cpu, gpu} * {forward, backward} (#22416)
dfa6fca1c6 : Supporting Manifold DB in Predictor Exporter (#22334)
30fedeae4a : Updating submodules
10e4137396 : Optimize InstanceNormGradientOp (#22288)
d0348c0ef9 : ThroughputBenchmark: improve formatting for ExecutionStats (#22293)
d0db2a76a0 : PyTorch ThroughputBenchmark: fix inaccuracy in number of iterations reporting (#22292)
813b01e4a8 : Use at::AutoNonVariableTypeMode before calling ATen tensor factory functions (#22364)
d632b1ff3c : Expose is_mkldnn to python and register it as torchscript prim op
2ab6ff42d1 : Updating submodules
577c04c490 : add mutation support for forward_pre_hook and forward_hook (#22285)
f7421b82ad : Remove versions constraints from `external_deps` (#22113)
bfeff1eb8f : Stubs for torch.nn (#19089)
a43d9af52c : Comment on why Windows build_pytorch.bat builds twice (#22363)
451c907a47 : Adding qconv unpack operator for serialization (#22354)
f894ef7263 : Add smoke test for information fn/method/attrs to test_namedtensor
496e35f76b : More named inference rules for pointwise unary ops
2a698682e4 : Remove Type dispatch (#21964)
6c454ff14c : Stop using Type in Python bindings (#21963)
9c8f9f0ecb : Remove many usages of Type (#21941)
3cba9e8aaa : Error Message Paraphrasing (#22369)
41e51ce142 : Fix QNNPACK and NNPACK settings (#22367)
d8de69d621 : Adds symbolic op for logsumexp
b52621c870 : Revise error message for invalid Reduction (#22160)
9e18234109 : Automatic update of fbcode/onnx to 806aa863020fa180e57f576cb032ec44ce8ddcca (#22359)
7cc8f37f56 : Reduce needless copying when returning lists of tensors in the JIT interpreter. (#21690)
737f8a7638 : Fix onnx passes (#22319)
e76c9751c4 : Use lazy initialization in autograd record_function to avoid static (#22317)
3a198400f8 : modify pool benchmarks
89c709d217 : modify unary operators benchmark
6cf4df5d06 : add PT softmax ops to the benchmark suite (#21208)
2132ea1d8d : Fix "python: can't open file '.jenkins/pytorch/print_sccache_log.py': [Errno 2] No such file or directory"
042a2fd810 : Sync worker requirement mismatches
e259894e83 : Test raising TypeError in torch.from_numpy() (#21607)
0804452709 : fix lint in torch/nn/quantized/modules/linear.py (#22325)
1bea27be9d : Remove three cpu sigmoid functions that are identical to IMPLEMENT_UNARY_OP_VEC (#22271)
5e77111486 : nn.quantized.Relu and nn.quantize.Quantize/DeQuantize modules
6f0f7e316d : Support building caffe2 with clang-cl on Windows (#22307)
83768f0756 : Add ONNX export support for multidim torch.sum. (#22240)
2832e33a94 : Add serialization for nn.quantized.Linear module (#21925)
5c46e701fc : Implementation of nn.quantized.linear module (#21921)
7a40412158 : Delay reduction of unused parameters until first autograd hook is called (#22219)
ac39869370 : Fixed list() not making a copy (#22093)
b1096995d5 : Update ThroughputBenchmark to reflect new script::Module API (no (#22291)
177b8bf6e7 : Named inference rule for more pointwise ops. (#22268)
7732b1a604 : Enable named inference for some unary pointwise ops (#22267)
69b702a6eb : Implement unify_from_right (#22223)
6386e4d244 : Named inference rule for `abs`. (#22151)
2913f6a26d : Adding modules for Python 3 compatibility (#22295)
6947e192f7 : Remove unused param in Caffe2 LayerNormGradientOp (#22282)
be0631b6ee : Add the rest of the `dict` API (#21979)
c9626a11cc : Made a += b for lists do an in place add (#21896)
bf677b8849 : Set MKLDNN (default) build variables in CMakeLists.txt, not in Python build scripts (#22215)
d2bad941f4 : Fix lint issues
e9d1b852c4 : Functional conv2d (#21225)
59c42595e0 : Enabled gather and scatter for bool tensor (#21924)
f13fadd510 : fix python2 corner-case in torch.distributed.launch (#20996)
f39b6624ba : ChunkDataset checkpoint support (#21889)
30d890c672 : Removed an outdated comment above IMPLEMENT_UNARY_OP_VEC(abs) (#22272)
f144b9ebef : Fix two overindent lint errors in test/common_nn.py. (#22287)
e6d4a2d289 : Remove unused file cmake/Modules/FindMIOpen.cmake (#22244)
5e0a74dd70 : Rename copy_tensor_data to copy_tensor_metadata (#22266)
45c6fa0007 : Refactor Tests for Multiple ONNX Opsets (#20036)
f51de8b61a : Back out "Revert D15435461: [pytorch][PR] PyTorch ThroughputBenchmark" (#22185)
3f2a839dda : Add comments to bailout_graph.*
04fe2453c4 : conv2d/conv3d for LongTensor (#20730)
3ba72a11db : Revert D15999938: [jit] Add the rest of the `dict` API
7707dee761 : Re apply optional ScalarType changes (#22237)
8b02522b93 : Avoid copy in ArrayRef<->vector comparison (#22218)
516c7e4456 : Adding memory_format to empty and empty_like operators (#20558)
5bdc4db26e : Refactor named tensor helper code (#22150)
29b53b0259 : Fix bug in caffe2 transpose on GPU (#22233)
2dc9643080 : Better error message for mismatched dict key type (#22231)
af9e0085f2 : Add the rest of the `dict` API (#21979)
25eae3ed08 : Disable test_proper_exit flaky worker_kill (#22208)
a4f281446b : introduce flags to set omp and mkl threads (#21472)
5f84f372a6 : Use variable_data() in tensor_to_numpy (#22214)
f176950a67 : Use lower case for strong wolfe option. (#22092)
9f22805cc6 : Refactor function_wrapper.create_generic (#22077)
b297552887 : Make nn functions configurable for different scalar types (#20729)
95b5718007 : Prevent VS from emitting errors when using swap in Optional.h (#22182)
fde75a33e1 : update IterableDataset doc to be consistent with current behavior
655a370859 : restoring HEADs for ideep and onnx to more recent versions
17b37eb353 : Bump gloo (#22225)
c1fc2f25c2 : export deleteFunction in torch/csrc/autograd/function.h (#22236)
e8bc992b03 : print device when it's not on default device (#22094)
1a164bf30b : remove unused mkldnn include (#22217)
de85abf226 : Allow default construction of Dict/List (#22084)
e425789286 : Fix "missing return statement" warning (#22216)
f7a126f941 : fix optional type subtype relation (#22186)
defd23b8b9 : Clean up old uses of checkScript (#22002)
7b1ffba3bf : ArgumentStash for Scalar arguments (#21931)
7ee82d48a8 : Removed work around for convolution transpose op since the bug has be… (#22184)
5b87049c66 : remove uses of std::shared_ptr<Module> (#21934)
1d705b4b07 : Run clang-format on c10d bits (#22194)
f5a1ea170b : SIMD version average pooling added (#22148)
a7cb07eb0f : Add missing algorithm header to Array utility (#22157)
6ff0c6ca3f : Remove THD (#22065)
bcb5fd8f06 : Port symeig to ATen and enable batching of inputs (#21858)
4ec6fbefa6 : Show deprecation warning when stateful lambdas are used as kernels (#21885)
c68119387d : serialize torch.Size object (#20952)
7daa96a3ce : porting convtranspose3d to ATen (#22019)
9af8ea1ce5 : Not expose mkldnn reshape and transpose (#22193)
c8b5f1d2f8 : Switch autograd to use a pool of workers for each device (#21911)
94e83da55c : Optimization of the Embedding and Embedding-Bag CUDA Kernel (#22016)
b0bd8758fc : Further remove redundant CMake option passing code for those CMake variables that are directly controlled by environment variables but with a different name. (#22154)
ce1a9653a8 : Remove more build options not needed to be explicitly set in Python build scripts. (#22153)
839b496fbd : Fixes bugs in torch.multinomial without replacement (#22183)
b61693c0ed : Optimize InstanceNormOp forward (#22130)
ac4913ee62 : support both regularizable and softmax re-weighting on sparse features in dot product (#22176)
299ea84a70 : Use latest stable flake8-bugbear in CI and fix B011 flake8 error. (#21944)
f5df0c9104 : Don't end on inplace operators in einsum (#22111)
ede08492e1 : Enabled mul for bool tensors on CUDA (#21771)
3b700a43d5 : Add missing whitespace in error message (#21904)
88cdc16835 : AveragePool: expand incomplete kernel_size for the C++ API
2372e7ed2e : DilatedMaxPool: expand incomplete kernel_size for the C++ API (#22073)
b2a39314e7 : Make Dropout.__repr__ consistent with other modules (#22110)
273b6c5bae : Cast return value of vector.at() to void to avoid nodiscard warning in MSVC. (#22061)
0ac28c8966 : Quick fix for #18215, the CPU case (#21910)
41d0525de3 : Improve repr for IncompatibleKeys (#22119)
f1775796dd : Fix minor issues with #21736 (#22074)
a45898931c : Document the Boolean tensor type.
7c4206499e : Fix in ivalue::Future (#22114)
6350dbddd1 : Fix sequential MKL case (#22062)
21da33f0f9 : Better trace comments
f1c7fa0503 : De-deprecate some warnings that hurt usability (#21999)
2347a4032b : Fix tracing docs and add more comprehensive examples (#22082)
85cbe0d825 : Fix Concat Dimension Bug (#22088)
322261a4de : Fix dispatching of backwards kernel for ROCm. (#22125)
e016a424ef : Revert D15944971: [pytorch][PR] merge interfaces that have an optional scalartype parameter
6edaa11e5a : fix broken link
77eda8de8e : Support sparse gradients in DistributedDataParallel (#22037)
a7ec889de4 : Add sparse tensor allreduce (#22036)
313960d52e : Use at::detail::* instead of detail::* to avoid ambiguity in windows (#22029)
142361a7e4 : merge interfaces that have an optional scalartype parameter (#21088)
cd0d8480d3 : Remove many build options redundantly specified in Python build scripts. (#21877)
1b34ccfc78 : Porting SpatialDilatedConvolution and VolumetricDilatedConvolution to ATen (#20983)
3ba654e6d5 : Add finding thnvrtc_library into torchconfig.cmake (#22126)
08060e898b : Revert D15435461: [pytorch][PR] PyTorch ThroughputBenchmark
d96ce9b9fe : add for in dict support (#22006)
c9344fc9c4 : add for in string support (#21990)
eab35756d8 : support iteration tuple unpacking (#21985)
9b45237618 : PyTorch ThroughputBenchmark (#20766)
c0f96aaf01 : Restore default values on premature test exit (#22115)
887ecf797c : Fix DictType isSubtypeOf (#22104)
45b91bd326 : refactor all for in range/tensor tests to be together with other for loop tests (#21950)
e0f5ab2c2e : Tree based Iterator infrastructure: for in range/list/tensor/zip/enumerate (#21801)
a256b09ce9 : Backout Liveness Tests again :-(
7b1d6c8912 : Update intra_inter_benchmark (#22051)
91bf0a9f9d : Move quantized tensor tests in test_torch.py to test_quantized_tensor.py (#22089)
b19b20efef : fix minor comment (#21576)
f7b2778cb1 : s/uniqueName/debugName/ (#22096)
7d637de771 : Reduce excessive CI printing in TestHub (#22043)
63ca908026 : Updating submodules
856268c716 : Revert D15947873: [JIT] s/uniqueName/debugName
36e4b54420 : s/uniqueName/debugName (#22048)
4bc89bd5a6 : Implement tensor.select(Dimname,int) (#21795)
18a904c12e : Updating submodules
f164c01f9c : Adding liveness test cases back
38aa5a519e : Experimental option to use single thread pool (#22047)
5ff06a7b0b : more complete tuple assignments (#21949)
4009089d1f : Sparse BLAS: Remove workaround to check zero length inputs. (#22080)
04e9278306 : First round of optimizations for segment_reduction_op kernels. (#22081)
1c5fe2e8c4 : Add support for Python 3.8 Constant node (#22007)
f9b3989206 : handle slice with negative indices and indices exceeding tensor dimen… (#21811)
38c9bb8261 : Remove most usages of THCHalfAutoNumerics. (#21878)
06c3bd0302 : Improve ListPtr::extract() (#21753)
fe580e850e : Rewrite lerp operator to use TensorIterator and support compile-time vectorization. (#22038)
28630529ac : Limit overall number of threads used by TBB (#22045)
82dd69326b : Split nn.Module._save_to_state_dict to make it overridable (#21933)
b2197ef2b0 : Adding support for JIT Fusion on Windows for CUDA (#21861)
edb5a1662e : Remove getDeviceFromPtr and allocator from Type (#21940)
b36a041d6f : Move UnsafeTensorFromTH and UnsafeStorageFromTH off Type (#21923)
5d7cf66862 : add Int8SpatialBNRelu (#22014)
7d81e62562 : Add mkldnn tests for running end to end resnet models
71741ba115 : rename test to be more consistent
a3fc6ed046 : Hook up liveness into profiling pipeline.
3838324539 : Add max/min/argmax/argmin/sort/argsort for quantized Tensor (#21546)
95aee81dd7 : more general fusion logic (#22015)
88921feafd : change return type for q_scale and q_zero_point (#21709)
058beae411 : Add IterableDataset (#19228)
d4119f8fcb : Automatic update of fbcode/onnx to 355a4954ea4e5836a5e943589509951c44feb6b4 (#22030)
84a2d5d7aa : Add hashing to bucket-weighted pooling (#20673)
1aae4b02df : Fix 'error : detail is ambiguous' on Windows (#22025)
19ef15709f : Updating submodules
4cd7d78718 : correct arange docs (#21992)
08facca1a1 : Support accumulating DDP grads using a context manager (#21736)
40b9f8f0a0 : Added more descriptive error message for index out of range (#21758)
6bd58b7548 : Move list / dict tests to TestList and TestDict (#22000)
0702b5f345 : Partially parallelize randperm on CPU. (#21529)
e388f70499 : Move cppdocs build to CircleCI (#19768)
76fe91bb2f : Revert D14889547: Add sparse tensor allreduce
cb4c213f55 : Revert D15007365: Support sparse gradients in DistributedDataParallel
f8f583cbae : Port convtranspose2d (#20994)
365de7bda1 : Support sparse gradients in DistributedDataParallel (#19443)
aee6a412e9 : Add sparse tensor allreduce (#19146)
97ea44b34a : Fix issue in quantization error measurement when followed by Relu (#21890)
b6f542f8a1 : Add aten mkldnn transpose (#21943)
3d44cd6d19 : Replace Type dispatch with ATenDispatch (#22008)
5d67c606ea : Added error for classes that don't have an init function (#21880)
4fee532de6 : Pass loop_over optional parameter for cached reader properly. (#21929)
96c0bd3722 : ListPtr->List DictPtr->Dict step 3 (#21938)
275087383b : ListPtr->List DictPtr->Dict step 2 (#21937)
093c78f854 : ListPtr->List DictPtr->Dict step 1 (#21936)
cba79f4872 : Revert D15637222: [wip] Replace Type dispatch with ATenDispatch
15be5483c0 : Move NamedType method definitions into cpp file (#21983)
f6aac41391 : Defining object destructor in c10 (#21984)
24a6c32407 : Replace Type dispatch with ATenDispatch (#21320)
00fdb2cf95 : Enable XLA by default on pull requests. (#21991)
effcc398c4 : Refactor Random Number Generators in ATen (#21555)
34aee933f9 : ONNX Export Interpolate (Resize) for opset version 10
44128e09f0 : Speed up op lookup and registration (#21806)
d1c80300ce : Better stringification of dispatch keys in error messages (#21809)
dd046bef8d : NamedTuple serialization (#21839)
5a37f8c63f : Refactor TupleType to take a NamedTupleSpec (#21836)
c0be6e6290 : Introduce SerializableType (#21835)
74104f383e : Some small fixes for NamedTuple (#21813)
6b972795e4 : Add `torch.__future__._overwrite_module_params_on_conversion` global flag, and check it in `nn.Module._apply()` (#21613)
056a033cdc : updating upsampling bilinear2d kernel: (#21879)
34536e207a : Fix: convert Onnx DynamicSlice operator with 4 inputs to caffe2 fa… (#20846)
4b1df5c1f5 : Use fn(param) instead of fn(param.data) in nn.Module._apply (#21865)
abd6cffe55 : Added some extra tests for std_mean and var_mean for multiple dims. (#20650)
fa5263af2c : Add set_quantizer_ for QTensor (#21852)
e239e31da6 : Fix lint error (#21932)
3bdde56907 : Fix incorrect usage of __HIP_PLATFORM_HCC__ (#21757)
a388c78350 : fix bug in CompilationUnit::define (#21886)
52e1cea057 : Fix recursive method compilation (#21862)
eda08b0aae : script::Module as a view. (#21814)
94c61d4f32 : Fix infinite loop in del_post_hook (#21914)
c0f51142cd : Added a test case for an index error for the index_copy_ (#21912)
ad00c12379 : Clean up //caffe2/torch-cpp to avoid duplicated symbols (#21916)
081cd3a293 : Change AT_CHECK to TORCH_CHECK in python_arg_parser.h
28ecc104f4 : Fix WeakIValueEq (#21891)
010f238d17 : Retry pip install to make pytorch rocm CI more stable (#21895)
5eb25c3704 : Support `in` membership checks (#21527)
afad3e4954 : Add support for class annotations (#21379)
85528feb40 : Mark test_snli as a slow test
0998a32588 : Backward function will set a flag if it released variables (#21533)
f363a33e10 : Set __file__ for torch.ops (#21888)
31e1e63bc2 : Port avg_pool3d() to ATen (#21732)
c471a63a39 : UpSample-nearest cuda kernel update (#21694)
998efb48c3 : Add at::dimname_to_position helper. (#21789)
8f9e0f77dd : Turn off non-default stream testing. (#21793)
08a0ac84d7 : Removed unused variable from closure in range (#21897)
6042012a93 : Fixed "tried to access to nonexistent attribute" -> "tried to access nonexistent attribute"
df787cf079 : Fixed a warning in `test_jit.py` (#21898)
f1c1d1a964 : Export the cosine_similarity op as an ATenOp correctly (#21884)
3ed8acdf59 : Fixes lint error in py3 (#21883)
2ba164b943 : Future interface for ATen/Parallel (#21764)
d8314a6260 : Replace nullary/unary/binary loops with generic implementation (#21475)
7f057f00cc : Update mkldnn-bridge to fix MKLDNN grouped conv issue (#21854)
5237835a17 : Make script::Method a value type (#21675)
cc4498a54a : Always enable P2P access for GPU copies (#21872)
76a250d590 : Add new regression loss function type to FBLearner (#21080)
8aeb4ef4bf : Add python string standard lib (#21807)
d329dffd92 : improve error message on recursive class defs (#21842)
cdae8b93a7 : improve recursive scripting error message (#21841)
c0420d9618 : Attempt to fix TRT build after library merge (#21775)
0408697317 : Followup cleanup in cmake.py and add a comment in setup.py (#21792)
7279e07c8b : Don't use anonymous namespace in header. (#21790)
1aa16d356e : named inference rule for tensor.select (#21752)
b403b10ff9 : Fix #11752: fix numerical issue in log_softmax (#21672)
0f675f9cbc : Port im2col and vol2col (#21769)
2b23fac8da : Disallow creation of tensors with duplicate names (#21781)
44707dd3ca : Rename Dimname::name to Dimname::full_name (#21803)
7c1528bab6 : Copy NamedTensorMeta in TensorImpl::copy_tensor_data() (#21735)
da4e60226c : Keep Reducer hooks in a vector instead of an unordered_map (#21783)
76713fb564 : Fix remote build + clean up disable feature hack (#21816)
4a6aa1d806 : Populate producer_info.json in any PyTorch model at FB (#21662)
c9ba3f699d : Bag of documentation fixes (#21846)
972ec676b2 : Remove lowered execution (#21674)
ff1172d705 : high pri Jit builtins (#21451)
4f75da3b41 : change ClassType::compilation_unit to return owning ptr (#21787)
263b1985a8 : Revert D15833924: [jit] Fix stdout capturing, remove some expect files
04f09d4235 : Move unwrap logic from c10 to caffe2 (#21620)
794ee6d00c : Switch to out-source builds for LibTorch
4a2fc00db0 : Revert D15830704: [jit] Add Python string standard lib
97b92eede1 : Updating submodules
220efdbdc4 : Refactor pybind_utils.h (#21550)
a85305fdea : Hook up profiled execution in the interpreter (#21799)
4bcc72fe95 : Support for NamedTuple (#21428)
ac8d1a1f76 : fix some issues found by enabling -Wshorten-64-to-32 (#18187)
94f903654c : Add qscheme() method (#20608)
d0021b3ac7 : Fix stdout capturing, remove some expect files
07fea3f5b6 : Add new get_batch() method to ChunkDataset API (#21797)
dddc65db9e : Add Python string standard lib (#21059)
65a3dbdfb0 : Remove hip device sync in miopen Conv implementation (#21791)
1fc240e59a : add tests for add_custom_scalars and others (#20987)
0d6eb209e6 : Expose torch.empty(sizes, *, names, ...) to Python (#21648)
710821875a : Fix flaky nuclear_norm() test (#21638)
ff8c3fd54e : Adding the quantized namespace to torch.nn and importing it from torch (#21600)
9a1dc43f34 : Deprecate unordered_map and vector in IValues (#21712)
029a968212 : Define __setstate__ on _ConvNd to handle pre-padding_mode pickles. (#21687)
7284f448ba : Fix handling of kwargs from common method invocations (#21499)
c1744a6c39 : Add ONNX py3 CI cases (#21715)
c06ccbe663 : Add aten mkldnn zero_ operator (#20573)
bc6281028c : rebuild_storage_fd retry on EINTR (#21723)
deb2140c6e : Throwing errors for min and max reductions in empty CUDA tensors (#19612)
b811b6d5c0 : When building extensions, honor options set in CMake. (#21653)
4001e71547 : When converting to NumPy, throw TypeError when type is not supported (#21608)
2d5ce519f2 : Fix with emit_nvtx, also allow shape information to appear in nvtx ranges. (#21691)
b9675efb5a : Fix the issue of sizes vs size for tensor creation ops (#21686)
1e7bd7586d : Query caffe2 operator stats for detailed execution info (#20924)
d9eec4ef0d : backend.py: _getattr__ must raise AttributeError (#21763)
044809f1f3 : Handling numel() == 0 in convTranspose (#21652)
5c0e058950 : Implement at::empty(IntArrayRef, DimnameList?, TensorOptions) in aten (#21647)
3e79036382 : Make it possible to trigger all tests by pushing to ci-all/ branch (#21750)
16b4a12ed8 : better example for local weights (#21685)
adc99efb46 : Add batch id to tracer event (#21446)
fbecb4621f : schema_matching.cpp: improve error messages.
cfd8c58b45 : Tune elementwise ops for ROCm (#21754)
f59581218f : Fix spelling errors (#21665)
efd20de276 : fix multihead attention for half (#21658)
4716409f30 : Use expect to fill in pytorchbot token (#20459)
b858f42e16 : Document that no_grad is thread local. (#21755)
3e8dc565bd : Bug fix: ONNX export full operator (#21669)
4b45f08f87 : Added dim check for index_copy_ (#21617)
aa6887e6ef : add error message to missing function backend (#21742)
756a20de93 : Add/edit docs for nn.transformer (#21746)
7c7d5be033 : Skip failing test
51ee048709 : improve torch.load & torch.save doc formatting
63a7c7bb2a : Add event and event_counter columns to caffe2_usage_tracer table (#21739)
f87d5cc191 : Fix first reshape in pixel_shuffle conversion (#21486)
fc3f702ba8 : at::launch benchmark (#21581)
eca42a5122 : Fix failing test for Final annotations (#21725)
5485f09f18 : Native TBB parallel backend (#20480)
ab0d5ab99d : Port avg_pool2d() to ATen
5a7e2ccc0b : Add use_rocm flag to detect AMD build in the runtime (#21718)
556af7c19d : ROCm 2.5
42770e1370 : Improving Categorical Distribution Docs (#16291) (#21707)
a3db2844e1 : Support tuples in ScriptModule inputs/outputs (#20784)
4c03ac7ac4 : Allow batch sizes > 65535 for inverse, solve, cholesky_solve and tria… (#21689)
b599bb3836 : Add mkldnn mul operator (#20575)
d3b3cbe26e : Revert D15769066: [pytorch][PR] schema_matching.cpp: improve error messages.
49481d576d : Torch rename (#20774)
e9121e27ce : remove liveness tests
f5c00345b3 : Profiling Programs section in README.md
8dd670657b : Liveness for BailOut graphs
8c57ce87b0 : make tests pass with enable_first_class_module() enabled. (#21565)
d8056cb832 : Update quantization to work with first-class modules. (#21660)
56f4602630 : Add WeakIValue, use in tracer. (#21515)
0293cf5bb6 : Add `Final[T]` annotated members to `__constants__` (#21603)
0481a7710d : Support for type annotations instead of torch.jit.annotate() (#21390)
699de487db : numerical integration "trapz" function. (#21610)
b527e48588 : Use c10::List (#21177)
ae342fd076 : Refactor Random Number Generators in ATen (#21364)
96910251e0 : schema_matching.cpp: improve error messages.
28adca82ea : Add some named tensor helper functions (#21636)
20b0acf057 : Add some more namedtensor builds to the CI (#21632)
3e6eb3dcab : Add virtual dtor to NamedTensorMetaInterface (#21633)
83cec5f3ee : nn.Transformer (#20170)
180aa234fc : Updating submodules
8f40164517 : Add libtorch Linux CPU binary build to PR CI (#21671)
39d412194f : Fix ProcessGroupGloo allgather for tensors with shared storage (#21490)
ad73ea22f7 : Add strong Wolfe line search for lbfgs (#8824)
2c91ba3bbc : Add div hashing
76e01542ed : Fix the shape of PReLU weight (#21330)
7123c6ca04 : Enable groupwise for qconv (#21592)
8cc8e15473 : Back out "[pytorch][PR] [Re-landing] Fix caffe2 windows CI for new Windows AMI" (#21670)
cbcb2b5ad7 : Delete DDP hooks in Reducer destructor (#21591)
1e4af2b969 : Pin torchvision version. (#20811)
1ffa9d3d3b : correct measure quantization error when followed_by=Relu and dequantize_output=1 (#21664)
c2a18a6702 : Override print when python is present (#21625)
aa7e27fa70 : Emit Loop Condition as Separate Block (#21611)
341a7e4bb5 : Fix issue in backward path (#21663)
afd202be9f : StoreMatrixInMatrixMarketFormat can store both integer and float tensors (#21606)
c2a08d339b : Automatic update of fbcode/onnx to dd599b05f424eb161a31f3e059566a33310dbe5e (#21641)
968114ae3d : Revert D15769256: [jit] Add python string standard lib
039629cedd : fix incorrect use of TeX in docs
1bd21d3f14 : test_jit: Remove tests checking non-guaranteed properties from 'test_insert_observers'. (#21657)
ee33afe2b1 : randomized testing for qconv (#21436)
cf5c3bb3fe : make range functions respect current stream (#21619)
9241c4b3c6 : Add python string standard lib (#21656)
9737b166a4 : Fix bug in multinomial_alias_draw (#21324)
fb9fbc009c : Fix momentum bug in CyclicLR (#20401)
cdbc20677c : Add len to OrderedDict types (#21651)
7a040f4b0b : Revert D15706021: [jit] Support for type annotations instead of torch.jit.annotate()
b46e87cd3d : Fix catch block to fix 'error: catching polymorphic type' (#21637)
dd439bc39e : Rename hubconf.conf to hubconf.py in docs (#21631)
bbcd6cc782 : Support for type annotations instead of torch.jit.annotate() (#21390)
25d1496d58 : Fix Process Group for tensors shared across processes (#21449)
50ee1f3fa7 : better error msg when seeing a unsupported builtin function (#21068)
4610347fdf : Breaks up NN module in docs so it loads faster.
646a7f99bb : Move management of calls of "cmake --build" to setup_helper/cmake.py and refactor as a CMake class
835a6b9da2 : Fix namedtensor build (#21609)
29c849ff34 : implement transpose operator for MKLDNN (#19955)
731670f40a : upgrade mkldnn-bridge (#20569)
f2623c74a9 : add PT pointwise unary ops to the benchmark suite (#21207)
4e3c97a0be : add separate path for op with JIT (#21210)
a82feee07c : Method-only entries in native functions should have self as first argument (#21549)
fff22125a5 : AT_CHECK -> TORCH_CHECK (#21547)
f5c24fc66d : Deprecate torch::jit::RegisterOperators (#21552)
cab3e726df : Split out Function into its own file (#21539)
512c9d8c76 : add PT gather op to the benchmark suite (#21614)
32a0440209 : Publish torch::Dict and torch::OperatorKernel (#20723)
a93a1ccbb3 : Run test_c10d.py in multi-gpu environment (#21598)
74f6c55f0f : support negative axis in concat and split operators
3889855a5b : Revert "Redefine scheduler to set learning rate using recursive formula" #14010 (#21463)
8b9b215dc5 : Add a 'dim' argument to nuclear norm (#21022)
2378c120e6 : Implements divmod function (#20979)
8a88d33103 : Uninitialized Ivalue (#21387)
dd0ffd6864 : Use schema string specification in derivatives.yaml. (#20916)
5f25a252d6 : Allow tensors with requires_grad=True in c10 ops (#21599)
5a48642fde : Revert D15717575: [pytorch][PR] Fix bug in multinomial_alias_draw
4fb302eb34 : fix optional type promotion for classes (#21593)
a436822c40 : Consider contained types in alias analysis (#21431)
bb4aff2680 : cleanups to memory_dag (#21430)
ae144032aa : cleanups to alias analysis interfaces (#21397)
ddac8da813 : avoid calling front() on empty working set (#21396)
bb1dbdb99b : Fix bug in multinomial_alias_draw (#21324)
30d6933016 : BailOut Graphs
3df5a46a99 : Skip triangular_solve CUDA test on non-default stream
6f99bcda8a : fix test (#21594)
91ea2cd5a7 : clip sigmoid to prevent transforms return inf/nan values (#20288)
4bdbd30b96 : Add python binding to deserialize blob (#21532)
e4fae884f6 : Change compiler to use Load/Stores, then transform to SSA (#21101)
1e6c99a6e0 : update hub doc (#21568)
f308b07e8c : Don't leak threads on exit (#21438)
c294d64eff : fix concat and split tensor inference function (#21382)
9deab0cf0e : Documentation for locking discipline in engine.cpp/.h (#21548)
547fcaa977 : Add named_guard to native_functions options (#21373)
8ffcbfb7d4 : Add unique_ptr<NamedTensorMeta> field to TensorImpl (#21341)
f9c4d0d7a9 : Fix NVTX path on Windows
c4e0d61646 : Regularization is not supported in FP16 (#21319)
b1bf16eeab : Enabled _th_ixor_ and _th_equal for bool (#21538)
e447a733a1 : Update module.py
04e6564f0c : clean up the TracingState API (#21564)
69aa2b2814 : Collapse tracing_state.h into tracer.h (#21563)
ea822d9626 : Interpreter support for CallFunction/CallMethod (#21562)
ad0c08f950 : Expose ExecutionPlan in prep for function calls (#21561)
de31f6719c : Add flag to temporarily enable first class modules (#21560)
18996a8952 : unfinished push/pop reduction (#21559)
13edda417d : Prepare interpreter for function calling (#21558)
8ae7b1c486 : Update functional.py doc (#21510)
74828be4a7 : fix segfault in `cat` on CPU with tensors that can't be indexed with 32-bit ints. (#21530)
406374657a : Optimize batch mm op when broadcast the second input (#21556)
d71501259b : Revert D15572818: Prepare interpreter for function calling
d4bcab0dba : Revert D15590900: Reduce number of stack manipulation instructions in interpreter.
03641413e5 : Revert D15600068: Add flag to temporarily enable first class modules
e616a5e8b8 : Revert D15600067: Expose ExecutionPlan in prep for function calls
bfb235b8c9 : Revert D15618275: Interpreter support for CallFunction/CallMethod
c27cabe2d7 : Revert D15719982: Collapse tracing_state.h into tracer.h
3cfe914191 : Revert D15719980: clean up the TracingState API
dd0faf4366 : clean up the TracingState API (#21514)
8c5f3acfc0 : Collapse tracing_state.h into tracer.h (#21513)
5f6afafdef : Interpreter support for CallFunction/CallMethod (#21325)
1517ff66a1 : Expose ExecutionPlan in prep for function calls (#21273)
7e08bc42d5 : Add flag to temporarily enable first class modules (#21272)
dde27958dd : Reduce number of stack manipulation instructions in interpreter. (#21240)
c53e4d012d : Prepare interpreter for function calling (#21185)
c36dc35853 : Revert D15576968: Turn on Werror for deprecated-declarations.
b849f101b1 : Fix slow unpickling (#21542)
66d596645a : Turn on Werror for deprecated-declarations. (#21195)
a5cf6d5100 : reorganize op bench directory (#21543)
5b4a188a95 : add support for steps(strides) in tensor slices
5744fb3007 : Add mkldnn softmax operator
a947d98282 : Set "scalar_check: false" for some LAPACK functions that can't return scalars. (#21498)
fe5ceea580 : Rename caffe2<->c10 operator wrappers (#21322)
dad85b7e69 : clang-format by line (#21531)
fa0c5c31d4 : Turn namedtensor build back on (#21520)
2b902e9738 : Fix the offset numerical bug when casting (#21484)
ac8c3fa7b6 : Revert D15717337: [pytorch][PR] [precommit hook] clang-format by line
a77802cf56 : clang-format by line (#15657)
b7f5d1e4c6 : Fix size of histc return on CPU when input is 0-dimensional and bins=1. (#21497)
881adb5bcd : fix tuple indexing bug (#21521)
a5cca4d342 : add failback for Sign operator (#21343)
51fb42ebcf : Updating submodules
54413cf91e : replace LegacyTracedModule with torchscript used in add_graph (#21339)
b144ba66d5 : Change PyTorch tests to use non-default CUDA stream (#21474)
5cc3a3e2bf : Set "scalar_check: false" for TH methods that can't return scalars. (#21496)
cc85c3dbbc : ONNX Export Slice and Flip ops for opset 10
3eced796cd : Make torchvision install chatty. (#21476)
1503c734ce : updating gemmlowp to 3fb5c
8c9a88bdab : Make test_cuda.py work on Python 2. (#21466)
c60465873c : Fix batch norm multiplier init (#13774)
c604847602 : Implement at::match(Dimname, Dimname) and at::unify(Dimname, Dimname) (#21281)
4727685ea1 : Added at::Dimname (#21280)
e27c2f1437 : Revert "Revert D15632268: [pytorch][PR] Continuation of Port max_unpool1d, max_unpool2d and max_unpool3d to ATen" (#21427)
d6af6588c2 : Super resolution export to Caffe2 is broken, skip it. (#21479)
78a376592d : add cancelAsyncCallback method to OperatorBase (#21492)
696b2c89b4 : Adding gradient to Boolean Mask operator (#21423)
d3d195e0b1 : Updating submodules
772fd79d40 : Defer constructing error strings for definitions under If's until they're needed. (#21429)
abc0d3e544 : Fix unused variable warning
bad67015fe : Add warning for Turing GPUs and CUDA <= 9000 (#21468)
63d4bbb0ec : Turn XLA back on for default set (but filtered out using should_run_job) (#21470)
f433913996 : add more info back to BenchResult (#21502)
d51bd2191c : Revert D15629687: Deprecate torch::jit::RegisterOperators
37ab35c8fc : Move jit testing utils to their own file (#21491)
e87f77def6 : Fix typo (#21426)
d714abf597 : Deprecate torch::jit::RegisterOperators (#21368)
cb2ec07fa2 : ReshapeOp supports empty tensor (#21230)
b161832f10 : support ceil mode by padding changes (#21310)
80a083ef92 : Remove unneeded headers (#21393)
8a9ea55b25 : Add autograd for to_sparse. (#20458)
87d10d49f4 : Bilinear Upsampling increased throughput (#19306)
c5d5d45f40 : Fix numerically instability of `SigmoidTransform` (#19802)
f8cab38578 : Address precision matrix instability of MVN distribution (#21366)
8ece538a79 : Addresses bad behavior with overridden optimizer.step by #20124 (#21460)
51d0da2802 : Improve build docs and process for Windows (#21190)
6fc702f384 : Per-callback sampling (#21394)
bb788631ce : Fix caffe2 windows CI for new Windows AMI (#21452)
3feb40d602 : pack_padded_sequence: Check for empty (zero-element) tensors (#21461)
3b6362d98e : Remove NodeExecStats and AllocatorMemoryUsed (#21419)
0a3fb45d3d : allow passing Python built-in types as dtypes (#21215)
b647804a55 : Fix embedding bag nan output when input is empty (#21400)
f4f32cecfd : numpy like nonzero (called nonzero_tuple) (#20293)
8a2985eb05 : Support recursive ModuleList / Sequential (#21306)
2e37ab85af : Enable bool support for several index methods (#21435)
61cc03fb8d : Make ScriptModule.training an attribute instead of a parameter (#21078)
f1adddd1c6 : Updated sum() logic to properly deal with bool tensor (#21421)
b7b6b612a7 : Fix C++ data parallel (#20910)
da4f3629c5 : Add missing shebangs to Python files with executable permissions.
52596164d4 : Fix 32-bit env. model load issue (#20900)
f891b4338a : Test the exceptions raised by isfinite and isinf (#21168)
dffff3218b : Improves NVRTC Error messages (#21174)
6615797837 : Add derivative for QR decomposition (#21274)
26bcadcc61 : Gumbel-Softmax Arxiv Docs Link Fix (#21376)
ee15ad1bd6 : "CharTensor" numpy conversion is supported now (#21458)
c8083e0292 : Include named_any.h in modules.h (#21437)
856e3518c5 : Parallelize eye() on CPU.
ae18f8e761 : Fix latex formular error about *normal (#21000)
4e02d3c0a1 : insert default parameters in binary cross entropy with logits (#21336)
d50dca4075 : fix two typos: "a the" => "the"
63a55d4932 : Support gather export with OneHot + Mul (#21235)
240d62fbaa : Move redundant code that checks NumPy during build to a helper module and add an option to disable building with NumPy
a68d2e817b : Kill apt-get even harder, and before we purge. (#21464)
12528990f8 : change output of ai_pep_format (#21440)
4e679e30a8 : Updating submodules
7e300fbb21 : Added degrees, radians, ldexp (#21131)
bd2d318e23 : Modify quant-dequant node api to take module object and method name (#21407)
505ae5f51d : Updating submodules
f8202d85a0 : Added frexp, isinf, isnan, isfinite (#21130)
26db46b324 : change the epilogue of SLS to match the simd section (#21439)
7e6d932208 : Make strtod_c compatible with different gcc abi (#21293)
e07d94558d : Updating submodules
991c557270 : Fix an incorrect implementation of celu (#21213)
335869e833 : Fix 3x DenseNet compile time regression by restoring earlier-out tests in AliasDB::writesToAlias.
6b9f46b2d0 : Fix "warning: missing return statement at end of non-void function" (#21424)
0e3c4a054b : Remove curandStateMTGP32 usage (#21301)
793b302653 : ensure version_counter gets incremented for non-differentiable outputs (#20612)
8215f44405 : Revert D15660575: [pytorch][PR] Fix Caffe2 CI job for new Windows AMI
98e3aaeb78 : Adding support for exporting models with variable length input/output to ONNX (#20034)
ba2bdf8d0e : Added factorial (#21129)
7a1c9076ac : Revert D15632268: [pytorch][PR] Continuation of Port max_unpool1d, max_unpool2d and max_unpool3d to ATen
f172fadd80 : Make warnings be UserWarnings with source file info (#21231)
3068a945ce : Retry awscli install. (#21383)
bf0e3b62ae : Minor preparational JIT changes (#21096)
c15254d4ab : Expunge some more deprecated uses of AT_CHECK.
ec7dc52e60 : Fix a bug in qconv (#21294)
03617574d3 : Change type of a tensor with bools (#19097)
22ddddfb80 : Continuation of Port max_unpool1d, max_unpool2d and max_unpool3d to ATen (#19465)
6874c4058d : Add type annotation to stft (#21302)
7c6f2836d4 : Fix Caffe2 CI job for new Windows AMI
6251c563eb : Add CUDA support for _dirichlet_grad (#21191)
b460a1987e : Per discussion at https://github.com/pytorch/pytorch/pull/21244, fix bugs in (#21392)
42b2f56124 : Fixing race condition at Module::forward method (#21398)
95eb9339c1 : Adds CUDA C++11 and Profiling Notes (#21386)
eadac840f7 : Speedup bernoulli_scalar_cuda_kernel with grid-stride loop (#21300)
c82bf8ef10 : Move THCTensor_(lognormal) to ATen (#21299)
4671bed0f3 : Move THCTensor_(geometric) to ATen (#21298)
d341bcb3dc : Move THCTensor_(exponential) to ATen (#21297)
92b76df8f6 : Finished trigonometric functions (#21128)
7309cb60fd : Finished the high-priority functions (#21127)
622588d8fd : Added remainder of high-priority trigonometric math ops (#21126)
e268fc97c3 : Re-add Tensor.T (#21175)
ba08cf336d : Reorganize cmake related functions to tools/setup_helpers/cmake.py (#21367)
6ee9e87ff5 : Back out "[pytorch][PR] don't materialize constants" (#21374)
45d2305732 : fix incorrect default on Graph::toString (#21370)
0dc7286e15 : Better error message when trying to instantiate NamedTuple
d348d6405c : cdist: pairwise distances between two sets of tensors with batch mode (#20934)
6a3ebdbbc5 : Remove all conda 3.5 nightly configs, remove libtorch smoketests (#21380)
ca32563999 : add suggestion to use lld to CONTRIBUTING.md (#21334)
4940e41d16 : Fix mkl-dnn tautological compare error (#21371)
403ca41142 : make analyzeConservative more conservative (#21227)
0dbae7eddb : cleanup templated implementation of mayAlias (#21224)
adf6f6c442 : use memory locations instead of values for working set (#21223)
f330168570 : remove multisets from work set (#21222)
df0b83654a : cleanups to alias analysis (#21221)
77c2f5dd75 : fix copyright notice in docs
57f932a638 : Enable 'empty' function for mkldnn
b869a3b4ac : add new ops to benchmark_all_test (#21365)
2ed6f017ed : Added better tests for math ops and unified them (#21125)
6938de8851 : made floor/ceil return ints (#21124)
87690d2b77 : Move THCTensor_(cauchy) to ATen (#21289)
f9e746e9c8 : Use "important" node to toggle whether or not to build on PR (#21308)
1291d95e82 : Revert "Fix the caffe2_gpu linkage with torch on Windows" (#21335)
38d68ad803 : Update randomness.rst (#21337)
ae42a11ab2 : Make .circleci Conf class uses dataclasses; use types. (#21284)
25a6ff10f0 : Add gtest for TensorIterator (#21253)
fecd5fa171 : Updating submodules
2ee2d78a29 : Updating submodules
af4c24153f : Honor OMP/MKL environment variables in AT_PARALLEL_NATIVE case (#21189)
f251416d70 : Update fbgemm submodule (#21328)
113a27ee45 : bake constants into the traced graph, get rid of getNestedValueTrace (#21046)
cf356a342b : Fix a bug in loop unrolling (#21239)
6e657c5586 : Add CallMethod, inline eagerly (#21116)
0f58d20fe4 : Add quantized::fbgemm_linear_unpack operator for serialization (#97)
4b576e5184 : Do not hardcode build_dir in build_caffe2. Use the build_dir parameter.
702ba3d2fb : build torch for libtorch mobile build (#21234)
82ceeaeca2 : Add options to jit's operator constructor (#21315)
457c0f164e : insert missing #pragma once in VariableTypeUtils.h (#21134)
1c5bd1fa65 : Automatic update of fbcode/onnx to 5160f3ac3380302224998f1c95e111cd961c4bc5 (#21311)
02fd1878e3 : Cast dropout to float in RNN (#21304)
45de3ef6a7 : Export feature length information for onnxifi operator (#21303)
7c823312d3 : hub doc improvements
22865d4ce1 : Add ONNX export support for torch.rand. (#20559)
7d84ca6e06 : clean code to unify the logic to use fp16 by the optimizer engine (#20915)
3004b397f0 : change test_name to be globally unique value across tests (#21206)
ca80ec7c97 : introduce a new intrace to add op [PT changes] (#21149)
88d033f842 : don't materialize constants (#21229)
9a41f44732 : Improve ONNX Loop export (#20445)
4980b8b95c : Renaming member variables in engine.cpp/h (#21283)
37fed9b24a : Rename FC to Linear for the function name (#21268)
63b3c5a66a : Replace AT_ASSERTM with TORCH_CHECK (#21267)
ad971a37d0 : Improve performance of advanced indexing backward (#20557)
4ac732ed7a : file:line for tracing (#21247)
27d1daab45 : Export ONNX Dropout for opset 10 (#20710)
770089c2b8 : math module support: isnan, asinh, atanh, cosh, sinh, and tanh (#19337)
fb72625267 : Remove onnx export expects (#21238)
2e59a0a646 : add contiguous function type hint for tensor (#21285)
96667dfe41 : Write add_scalars data in the same file (#21100)
5b33698776 : Fix build error in c10 on Windows (#21005)
155f767382 : Move THCTensor_{normal, normal_means, normal_stddevs, normal_means_stddevs} to ATen (#21287)
21113c2d36 : EliminateGuards
c8539be962 : Make is_contiguous checks generic in number of arguments (#21106)
b159e0ce08 : Significantly simplify the spawning of pytorch libs building process. (#21105)
f62a006097 : Retry Fix Python DataParallel RNN in no_grad mode (#21262)
0c6efbd410 : Fix gelu documents (#21265)
eaa3ba6587 : Add autograd for layer_norm on CPU (#20883)
31c79b71ff : Add gelu gradient for pytorch (#21237)
93ae040ff0 : Add gelu activation in pytorch (#20665)
aac424a6c4 : Revert D15577342: [pytorch][PR] Fix Python DataParallel RNN in no_grad mode
360e6d1b0b : Fixes a bug in the test (#21146)
62ae348d1a : Exclude file:line from graphs used for fuser kernel cache (#21252)
7c40576c61 : Save the weight shape info the first time we have chance to extract it (#21233)
0efc527dd1 : Revert D15548138: Export feature length information for onnxifi operator
51ebbe970a : Fix Python DataParallel RNN in no_grad mode (#21197)
f051fbd4a8 : Fix typo in test_dataloader
d168a8533f : compare scalar device with common device (#21236)
41b17e2458 : Fix wrong type hints for Tensor.is_cuda, is_leaf (#21192)
be7fc40621 : Fix `sccache not being used on Windows` (#21248)
619261d7a7 : Add file-line info for jit.load and string frontend (#21217)
b663eec119 : Lazily build error strings in schema matching using replay. (#21241)
5bc7c1f83d : fix contribution and governance links (#21243)
85786bea7d : Export feature length information for onnxifi operator (#21110)
516ea33f6a : add PT maxpool and avgpool ops to the benchmark suite (#21200)
dceea73460 : add PT conv and convtranspose ops to the benchmark suite (#21199)
2d75d31398 : add PT linear op to the benchmark suite (#21204)
00b3e69211 : add PT batchnorm op to the benchmark suite (#21201)
ed1078bde3 : migrate matmul operator to the new interface (#21198)
c8dc707fee : avoid multiple writes to files on export (#21186)
4c19421f16 : Register gradient op with engine (#21205)
daa1e2de1a : Add file:line:graph to graph printout (#21180)
678dc44d4c : use _sparse_coo_tensor_unsafe in coalesce for speedup (#21214)
9e5f1db66b : Reuse common options between ONNXIFI and TVM transformations (#21163)
b12a5f6155 : schema_matching.cpp: mark internal functions as static. (#21140)
668dbcc41b : migrate intraop benchmarks to the new interface (#21202)
c62d476206 : migrate add operator to the new interface (#21152)
fd19d06db4 : remaining use of t.quantize_linear (#21219)
4dbeb87e52 : PyTorch Dockerfile should update submodules recursively.
0aeb971622 : conditionally defined var better error message (#20911)
2f4824b2fb : Add support for recursive compilation on Modules (#20708)
834d678eb8 : Remove old custom op implementation (#21085)
384d828ea5 : Add aliasAnalysis to torch::RegisterOperators() (#21084)
80556761c8 : c10::OperatorOptions (#21181)
b91e0d14a7 : registration options should only be callable on rvalues (#21079)
181792176d : Implement various AliasAnalysis operations directly on top of MemoryLocations. (#21203)
e098878d75 : Cuda persistent softmax (#20827)
052bab7069 : Move legacy TH functions(sinh,cosh) to TensorIterator + Vec256 (#21115)
7f960a9c01 : remove quantize_linear from Tensor method (#21196)
c185145d8c : remove dependency to caffe2::math and eigen (#21169)
8c927b208c : improve test_docs_coverage error messages (#21029)
e13b483f58 : Fix weak module cuda() _flat_weights bug (#21107)
0223d3744a : introduce a new intrace to add op [C2 changes] (#21148)
31089b02ce : introduce a new interface to add op [core changes] (#21147)
012069ca8f : Revert D15454048: Move THCTensor_{normal, normal_means, normal_stddevs, normal_means_stddevs} to ATen
dc8f306b8e : Revert D15454052: Move THCTensor_(cauchy) to ATen
be9ce6318e : remove import torchvision when testing torch.hub (#21132)
e161360b62 : Revert D15558784: [reland][pt1][quant] remove quantize_linear from Tensor method
5fcd37bd8f : List (#21164)
f91f24764e : remove quantize_linear from Tensor method (#21156)
0a0ff83124 : replace `num_bits` with `quant_min` and `quant_max` (#21097)
277bf69fa0 : Add torch.load/torch.save for QTensor (#20830)
eb4d43df3b : Make CUDA triu / tril support batches of size > 65535 (#21067)
057ddab766 : on import, register class before defining it (#21182)
d6438c956b : Move THCTensor_(cauchy) to ATen (#20622)
26d16ae515 : Move THCTensor_{normal, normal_means, normal_stddevs, normal_means_stddevs} to ATen (#20621)
07ac00d21a : Automatic update of fbcode/onnx to 9005291283e943f1a91da5f0acf218bc4e8eb2ca (#21057)
ff0d00f921 : Updated scalar type to onnx mapping (#21095)
726caeace3 : Use QTensor for bias (#21038)
64f06d4964 : Enable all and any for bool tensors (#21033)
9a22cb9f49 : Enabled add, sum and mul for bool tensor (#21032)
fe39602451 : Support for rudimentary f-strings (#21037)
76deb450c6 : Record source/line info in SourceRange and report in highlight (#21157)
416357648c : Optimize alias analysis (#20899)
31aefd9b09 : Adding models to jenkins benchmark script (#21010)
f6e5846a67 : add handle to run all jit tests (#21161)
7f308b88b9 : Only populate net_pos in ssaRewrite if the op doesn't already have a net_pos argument (#21051)
80020306ef : Added base parameter to math.log (#21151)
4e3e4d7ff5 : Updating submodules
4aee92833c : Update libtorch docs (#21150)
313ef4f5d5 : Make data_ptr a method on Tensor (#20878)
d17aa72373 : Added more regression test for groupconv w/o bias. (#18519)
6dc445e1a8 : Conservative alias analysis rules for CallFunction/CallMethod (#21087)
b6d1a72f48 : improve error message on inferred type (#21058)
ec76976a7a : Remove all devtoolset7 jobs (#21153)
fffffde2f8 : Delete more tabs, fix lint. (#21142)
e9df9e7960 : Revert D15552424: [pytorch][PR] [JIT] Record source/line info in SourceRange and report in highlight
c4a90ca18e : Revert D15477933: [pt1][quant] remove quantize_linear and dequantize from Tensor method
3805490d6a : Typo fix (#21122)
52ded63128 : Revert D15546045: [jit] Add support for recursive compilation on Modules
3083c71cde : First class functions in IR, inlined eagerly (#21052)
6b099edb53 : fix lint
7cea6d9b71 : Redesign the output shape adjustment of OnnxifiOp (#21027)
6875018793 : Record source/line info in SourceRange and report in highlight (#20898)
57f4f98c40 : Fix borked SourceRanges
67291ba74f : remove quantize_linear and dequantize from Tensor method (#20874)
8d3388aef2 : Add support for recursive compilation on Modules (#20708)
33d35f5f93 : Fixed isinstance typos
990e63f587 : Remove unnecessary sources from base CircleCI AMI (#21103)
12b0dede39 : Support exporting tensor factories from scripting
9be72ce44f : Convert Tree to use intrusive_ptr instead of shared_ptr.
4900edebcf : QTensor permute, transpose and contiguous (#20869)
99b057d89c : Failing assertions is unlikely (#20876)
9daf48525e : Quantized Max Pool op (#20474)
154029a6ff : Revert D15534670: [jit] improve error message on inferred type
5dacf6b048 : improve error message on inferred type (#21058)
6ea9044d3c : add 'all' builtin (#20521)
8fcd80af20 : Fix "cuda: unknown error" on Windows (#21062)
157fcfc07d : Add `quantize_linear_per_channel` (#20765)
53ccba004f : New torch assertion macros (#20887)
449a2c3555 : Fixes #20124 (#20203)
74375299e0 : add torch.nn._intrinsic and torch.nn._intrinsic.quantized namespace (#20940)
736bf7b46c : Fix __constants__ for some nn modules (#21071)
1e1f2c85f0 : remove constant pooling expect (#21003)
0ffd20c268 : Fix empty tensor for unique_dim (#19000)
2cd1c78632 : Revert D15523444: [jit] move casting ops from prim to aten
7cb1aa67b0 : Enabled min, max, minall, maxall, cmin, cmax, cmaxValue, cminValue for bool tensors (#21031)
85777b92b2 : Assert against using Operator methods not supported when exporting it to c10, part 2 (#17946)
a0111aaf0d : move casting ops from prim to aten (#21002)
dd903eb645 : Add start and step parameters for range in torchscript (#20795)
fa8c132e24 : Revert D15502768: [pytorch][PR] [jit] Make ScriptModule.training an attribute instead of a parameter
94b9706017 : fix `dequantize_linear` (#21035)
cbf2a4f5c4 : print a warning if a type annotation prefix is invalid according to mypy (#20884)
a6bb15493d : Removed accidental TensorFlow dependency (#21066)
f2199a34eb : Hook to store additional metadata about environment (#20863)
00c1584979 : Added possibility to index scalars by bool masks (#21030)
1d4685c20f : Improve test_proper_exit error printing (#20166)
aa42742df0 : ctc_loss: fix backward when 2d target tensor is larger than max_target_length (#20971)
55f5eb3c47 : DilatedMaxPool2d: small cleanup
f8565121d9 : Port dilated_max_pool3d() to ATen
0544a491d5 : Revert D15499749: [pytorch][PR] Add `Tensor.T` attribute to reverse dimensions
3038cf8eee : Remove THSTensor and SparseTensorRef (#20877)
9faa409b56 : Fix __irshift__ dispatch (#21047)
8dda19b79f : Remove extraneous TensorId checks in as_strided (#21045)
d76546a463 : Fix tracing bugs where using `1 - x` in C++ would cause the size of 1 to get hardcoded. (#20932)
5c53aa4869 : Make build with makefiles less noisy (#21053)
9b147961c4 : Fix get_gpu_memory_info in non-cuda builds (#21054)
ffdce79078 : Deprecate variadic inputs of checkpoint_sequential (#21006)
d23d04f17f : Allow nondet_tol for nondeterminism in gradcheck and gradgradcheck (#20980)
d190450a35 : Fix typo in CyclicLR docs (#21021)
f1fe4b1114 : add simple memory analyzer and log warning if GPU underutilized (#21024)
1bed5f39f4 : Fix warning in register_c10_ops by making index unsigned (#20964)
f6ec464890 : Enable batched QR decomposition and add a `some` option (#20689)
c1048182be : Use constants from math.h for gelu op (#20974)
0290897bca : tracing for intra_op_parallel (#20603)
9a989ec469 : Add an option to stop the build process once cmake terminates. (#21034)
9294de8c9f : Add `Tensor.T` attribute to reverse dimensions (#20598)
2791a44948 : Renaming the relu kernel and adding hypothesis tests (#20647)
d6d192e0af : Added engine information to the profiling result. (#20493)
7afa75006e : Enable operator profiling via command line (#20173)
2ba608b4a0 : Fixed gcd to use 64 bit integers (#21041)
28079c3906 : Make ScriptModule.training an attribute instead of a parameter (#19587)
18809f7b0b : Better error message in __get_state__ to let a user know that ScriptModules can't be deep-copied atm (#20885)
07c4e45ca6 : Some minor fixes for the changes in #20945 (#21008)
0885dd28c8 : refactor register_prim_ops (#21001)
b85c52923b : Re-land "Fix advanced indexing on "huge" Tensors" (#21019)
52d27890dc : Improve error message for missing attribute (#20779)
bc10677fcb : Some name and variable cleanup (#20861)
99674eb86f : Re-enable test_dag_net_forking on ROCm (#21013)
082936f033 : Clarify cycliclr param docs (#20880)
68c3ef72b5 : Change bound shape inference for LengthsRangeFill & GatherRanges, add more tests (#20610)
bbe3411846 : Refactor schema_matching.cpp (#20549)
ff6cda0da6 : Generate TH functions outside of Type (#20309)
eacb311810 : Move 1d tensor checks to TH (#20859)
d2f14db6cb : Change view dispatch to abstract (#20308)
580eab6562 : Restore TBB module (#20454)
82aecfad6a : Native ATen/Parallel backend (#20087)
f4b434a6a5 : Fix incorrect torch version in CMake (#21007)
0556141339 : fix small typo muliprocessing -> multiprocessing (#20998)
5ddbfc97e9 : Revert D15501945: [pytorch][PR] Fix advanced indexing on "huge" Tensors
3b0d431bf5 : Check for incompatible versions between CUDA and MSVC
0d35f14565 : Update cuSPARSE namespace collision w/ CUDA 10.1 Update 1 (#20889)
9d9751f634 : Convert dequantize_linear to an internal function _dequantize_linear (#20938)
8e3311c5e2 : Remove functionality unsupported by the JIT from multi_head_attention_forward. (#20653)
6e76813a39 : fix SyncBatchNorm doc (#20991)
ebc8d7170e : fix the bug for mkldnn clone (#20943)
6480d3f140 : Revert D15511921: [pytorch][PR] BatchSampler now uses list.clear() instead of creating new objects
482ae8e6b2 : BatchSampler now uses list.clear() instead of creating new objects
ecf012213b : Update submodule URL based on redirection. (#20973)
bb89827e1d : Update cuda pinned memory note to include tensor.to (#20977)
1e8f129a05 : In setup.py, also check some submodules of submodules. (#20937)
8dbdd00f87 : tweak tqdm to have download speed in kB/MB/etc (#20908)
5ab6e07180 : .view(...) now suggests .reshape(...) instead of .contiguous().view(...)
c611630b9d : Fix subscripts in RNN documentation
a3a458ed30 : Fix align corner docs (#20961)
5e69e76aba : Remove padding_mode from torch.nn.functional.conv{1,2,3}d's docstr (#20891)
4c5b1e3460 : Update nccl submodule to v2.4.6 (#20882)
9310e600f6 : Use a simpler way to delete recursive function
66e6571eb8 : fixed issue #20921 (#20922)
83fe92870d : Update multiprocessing note now that shared CUDA tensors are refcounted (#19904)
bdce5533fe : Fix pytorch_macos_10_13_py3_test (#20944)
81e70ffa19 : fix bug of not using get_score_cls_index in BoxWithNMSLimitOp (#20868)
2fb665a9df : Add warning about memory overhead when using multiple tiny tensors (#20801)
c7e0722814 : allow pass ordered dict for nn sequential (#20796)
b93bdf6989 : Fix advanced indexing on "huge" Tensors (#20919)
430d1a2761 : Attempt to fix flaky test_structseq_repr (#20931)
b1df8bfe8a : Reduce set of build/tests which run on PRs. (#20930)
c46c6a4fe6 : Zero slice bug (#20914)
3858e1684b : Don't print backtrace for interpreter errors (#20925)
371bd043d6 : register ResizeNearestOp to C10
b5a5e296aa : Support 3D mesh/point cloud (#20413)
6063ffd055 : Specify dispatch key with kernel (#20821)
a2328a27e9 : Improve torch.cdist performance (#20605)
4501dc305d : Assert against using Operator methods not supported when exporting it to c10 (#17818)
c8f404a68e : Revert D15499918: Reduce set of build/tests which run on PRs.
d03265b44f : Reduce set of build/tests which run on PRs. (#20775)
dee11a92c1 : Use Device instead of Backend in TensorIterator (#20690)
17941f9979 : JIT: Eliminate SumToSize by using Optional Lists (#18697)
47043220ee : Update version strings to 1.2
a640c81536 : Add llvm8 installation step. (#20879)
fa20327618 : Update Refinement Docs (#20912)
8c4b2a835b : Remove extra workspace queries in matrix inverse computation (#20904)
4109ec1278 : In Dockerfile, do not install unnecessary packages, use conda to install ninja (saving one layer), and use "." to refer to WORKDIR to reduce redundancy. (#20881)
6af2482612 : Leave it as an option for whether to colorize output during build (#20771)
ec45baf4dd : tensor_illustration with correct numbers and better fonts for README file (#20751)
ef1fdc27a3 : Raise TypeError when the argument to isinf and isfinite is not a tensor (#20817)
87040af498 : Fix documentation for attention mask shape (#20850)
a5c90aaf47 : Use "length of the RNN input" instead of "length of the RNN"
3e4f213e82 : Instructions for how to update pytorch-ci-hud when updating binary builds (#20758)
c3d05e86cc : Resend "Split ATen/Parallel into interface and backend" (#20825)
6b74856747 : Fix init_thread calls in thread pool initialization (#20848)
1bb728fe14 : Change the quantizer to match the behavior of the FBGEMM implementation (#20892)
fc941d3bca : Catchall kernels instead of fallback kernels (#20773)
c25e33789e : Lightweight at-most-once logging for API usage (#20745)
8cde4c4d22 : Remove Variable::Impl and DifferentiableViewImpl (#17072)
f93e0619f3 : Adding ShufflenetV2 to caffe2's benchmark suite. (#20180)
3aa7ee6fe6 : Updating submodules
cfb6c4a8ee : Updating submodules
62af37aa88 : dropout symbolic_script should respect the training flag (#20760)
bd53c8eb93 : Move torchvision install out of onnx test script
d5b7138a2c : Dict is a reference type (#20669)
93d5503f34 : bug fix 19374 - fix for upsample export
48bf7b9be8 : Fix oscillation in coalesceInsertedDataDependencies (#20833)
2d96876d88 : Use conda torchvision version (#20865)
b6d0f6c85a : Move THCTensor_{random, clampedRandom, cappedRandom} to ATen (#20620)
48424a6c94 : Avoid dynamic dispatch inside the omp loop in AdaptiveAvgPool2d (#20366)
cf0268e51c : Modify cos to cosh in Vec256 (#20797)
70caa2efe2 : Add mkldnn sigmoid operator
8dedb04c26 : Enable torch.jit.trace for mkldnn modules
63585c3b81 : Add support for save and load mkldnn modules
5f83c5d834 : Fix build error with MSVC (#20853)
31e2d20c5e : Dictionarize check_inputs coming from `trace`
2c556a9489 : fix the input/output type mismatch (#20829)
9c57d8df42 : Make LayerNorm.normalized_shape a tuple
99b3f5cd70 : Fixes error with custom scalars, fixes #20579 (#20580)
a16708a1ae : Workaround python2.7 find_module limitation / explicitly close file (#20782)
ec57d1f18a : Port dilated_max_pool2d() to ATen
f039401bf2 : Add back at::_copy_from for use by XLA (#20783)
80aed36fb6 : fix a couple of typos in README markdown (#20819)
8fc069fa17 : add batch of string ops (#20826)
90182a7332 : Install torchvision from master
d35a587958 : Remove cpu_half, cpu_bool, cuda_bool from native_functions.yaml (#20552)
41100d4027 : Add PerChannelAffineQuantizer (#20764)
a21cf76575 : Revert D15459166: [pytorch][PR] add batch of string ops
5952ca8d9f : Remove duplicated _optimize_trace and use core (#20394)
871c9dcb1d : move batchnorm and layernorm fusion to decompose (#20337)
cde611a66c : Quantized Conv2d operator (#20772)
aebcd80ae4 : add batch of string ops (#20826)
7aa3887f43 : make wildcards alias only each other (#20670)
90910fc6cb : Mark values entering containers as wildcards (#20556)
28be521e39 : Fix bug in exporting node with multiple outputs by scripting
c2e3e79afc : fix pow bug on overloads and clean up (#20824)
98928f4d79 : Allow both Variables and Tensors in c10 kernel interface (#20816)
9ea009fe8b : Add as_quantized_tensor (#20740)
12bc81ae2a : Change comparison ops result dtype to bool [Part1] (#20767)
6ec3c12255 : Update references to minimum CUDA and cuDNN version (#20718)
05543153dd : CUDA implementation of fakequant (#20252)
fdb923996d : Revert D15445092: Some minor fix to unblock the Bert model quantization
cfc98ae714 : fix add_histogram_raw (#20688)
fd2aa93b37 : Exposing LengthsSum/Mean/Max in pytorch (#20802)
8d7a025703 : ONNX Export Scatter
fea4a56af3 : Add ability to filter metric schema in LayerModelHelper (#20786)
810816a1f9 : Automatic update of fbcode/onnx to cc2333a3f929caca7223b98699237f19388dd585 (#20763)
4e0d098ace : Fix optimizer type hint (#20648)
795a1a6ffa : When detecting numpy, assign relevant variables outside the try block (#20739)
fd95947e68 : Revert D15248618: Split ATen/Parallel into interface and backend
70ecddfd76 : Some minor fix to unblock the Bert model quantization (#20787)
a501e7d5be : Add quant-dequant nodes for bias. (#20045)
c2d0e7316f : Add DictType to Metadata (#20770)
70eb315da4 : Use AT_INTERNAL_ASSERT in test_base (#20555)
4a85e7955c : Rename FC to Linear in the test routine (#20716)
77651615c8 : fbgemm precision argument (#20790)
c4a3b4d528 : Split ATen/Parallel into interface and backend (#20057)
adbab82846 : int_repr for different quantized types (#20656)
c1d6bcf301 : Use SmallVector to allocate Compound operands inline. (#20762)
c9da01194a : Optimize pytorch layer_norm forward (#20345)
9cec8ae146 : use tensoriterator instead of th for fill_ implementation. (#20719)
7a0c6d528a : Fix copy_transpose_valid check (#20759)
5acc664f9d : make magic methods work with casts too (#20654)
e6f22e1b89 : Change Bias to QTensor with qint32(int32_t) (#20713)
b9a150ede0 : Change Weight to QTensor with qint8(int8_t) (#20712)
ac2314fdeb : Fix a bug in quantize_linear (#20711)
32803b52f6 : Update Conda description in PyTorch README (#20726)
5d8879cf6d : Auto-convert GPU arrays that support the __cuda_array_interface__ protocol (#20584)
847d9c57d1 : Improve the recommended citation
bb20956e3c : Add support for CMake switches for VS 2019 (#20752)
47dc65fe76 : add str comparisons (#20761)
cca923c481 : Add dequantize_linear for JIT pass (#20107)
cc02a1af61 : Throw error if multiple kernels registered (#20737)
f3d827f311 : Hipify fb/quantize
b5edeca39d : Split cpu/gpu in caffe2/distributed + some clean up (#20674)
d7cd2d7a8c : compile with -fcolor-diagnostics (#20662)
c790f10e2d : Fix missing cudatoolkit dependency in binary linux tests
e3970d66d4 : Fixing upload_binary_htmls again
fac307a5cf : Revert D15178352: [pt1][quant] Quantized Conv2d operator
eca7fa35a4 : Fix -Wattributes warning on older versions of gcc (#20587)
712c60f960 : Fixing missing miniconda path in macos smokes
29b1b59449 : Quantized Conv2d operator (#20064)
d73caca2a1 : Add mandatory ScalarType nodes as input to the quant-dequant nodes. (#20468)
371cf109a3 : Increase static tolerance for negative feature ids
0beecbdaad : fix soft_nms_cpu call in BoxWithNMSLimit (#20738)
fbdafdffa1 : Move bucketize_op to open source
320c38555e : Refactor CUDA copy and general copy dispatch (#20685)
cf7ef5e631 : Add onnxifi support for Int8FCDNNLowPPackedWeightBlob (#20564)
0bfc0eeef7 : restore hidden visibility by default for Linux builds (#20461)
be1f83c350 : Fix dll linkage for tensor type ids (#20547)
410c7210db : Add `save()` to `torch._C.Function` (#20386)
987f1ccf49 : Add "ndim" property to tensor (#20565)
6ae99aa5bc : onnx/caffe2 tests: Do not execute models with CPU-only operators on GPU.
be33434d85 : Unify the addQuantDequantNode api for inputs and outputs from quant nodes (#20677)
cf548ba683 : De-deprecate old list and dict APIs (#20709)
c062175803 : Remove unused var (ws_) and use vars in undefined case for compile (#20667)
af6eea9391 : Add the support of feature store example in pytorch model in fblearner (#20040)
9fbce974c9 : torch::jit::RegisterOperators forwards to c10::RegisterOperators
74bdcd44c4 : Remove tab. (#20715)
10445c0404 : Finish removal of AT_CHECK, officially deprecate the macro. (#20600)
c1fa449763 : Break reference cycle in load_state_dict (#20397)
796e359601 : Refactor builtin ops
c0a2a3b22b : Add a new method SummaryWriter.flush() (#20607)
f215db9b92 : InsertGuards pass
036a159fb9 : Audit AT_ASSERT sites in TensorImpl.h; doc improvements (#20649)
71260b98e2 : Fixed histc return type for CUDA (#20369)
d0c742134d : #20028 (#20696)
cda9e995e2 : Benchmark repeat op. (#20016)
8acaa286b7 : Make CUDACachingAllocator::recordStream() a no-op on null ptrs (#20658)
071971476d : Fix Binomial overflow when logits is large (#20679)
9b1dbffba5 : Re-sync with internal repository (#20702)
5835165ce3 : Add get/set_num_interop_threads into torch.h include (#20659)
4598729399 : better handling of getenv
d3059b9c49 : Lightweight logging for once-only API usage
7b9ee598d6 : separate option for FE_OVERFLOW (#20476)
dd050b7b91 : Replace AT_ASSERT with TORCH_INTERNAL_ASSERT/TORCH_CHECK (#20668)
96a1f7695f : Support plot norm of specific embeddings of a LUT in diagnose_options (#19809)
2308257483 : delete brodcasting ops from shape analysis resize aliasing (#20661)
e74869473d : De-deprecate parts of the legacy API (#20561)
cb6be42403 : Options based registration API (#20514)
85fad0597c : Add qint8 type (int8_t) (#19984)
986c9eb537 : Add a pybind for Module::get_functions. (#20594)
2af5911a95 : Modify the Insert quant-dequant test cases to look for q-dq pattern (#20672)
e42665cf39 : Some small performance fixes for c10 dispatcher (#20472)
d9dcfacd9e : Improve CPUAllocator OOM message (#20618)
e79610c0df : Fix missing env for update_binary_size job
c267d0c869 : Misc error message improvements (#19369)
3be2f7c8e6 : SubgraphMatcher: add attributes support. (#20602)
0b9b929d14 : Use python type string for user facing error msgs (#20657)
c819d76789 : Add list(string) (#20617)
a543586bff : Add `_enable_recursive_script` to try to script all Python functions (#19578)
cd28ff5395 : Add support for __getstate__/__setstate__ on module (#20242)
5f14ef8cc1 : Split out gpu/cpu targets based on gpu_library_targets (#20633)
79c5dc313c : Remove unnecessary format literals from error message.
26dfeffacd : Remove TH/THC link for single matrix inverse (#20534)
839a69f587 : Revert D15393514: [pytorch][PR] Refine CosineAnnealingWarmRestarts doc for issue #20028
8c9f4c560a : Add matmul optimization for the case A.ndim <= 2 && B.ndim >= 3 (#20448)
a212a5b97a : ir.cpp, module.cpp: clang-format. (#20592)
b90790ab1b : Don't split 256-bit AVX2 load/store intrinsics (#20609)
000d73ccde : fix WAR race (#20182)
3c69c9a7fe : Refine CosineAnnealingWarmRestarts doc for issue #20028 (#20267)
cfb87c1022 : Update documentation for CTCLoss (#20422)
35e0015c70 : Export sign onnx operator (#20470)
4e551a7edb : Make C10_NODISCARD macro more portable for nvcc+clang. (#20324)
690efa5220 : Remove checks for CUDA 8 in LU-based tests (#20482)
110ed511a4 : Make check-doxygen.sh output more interpretable. (#20362)
1136ad59f9 : Enable simd and loop vectorizer with MSVC
fa4ca4e70e : Emphasize all DDP forward() outputs must participate in computing loss (#20586)
c941abbc0a : Fix upsample kernel launch / reorder arguments (#20505)
3bc0bd9534 : Fix caffe2 build failure on Windows (#20574)
4c806a9e8a : Allow tuples for scale_factor argument in nn.Upsample (#20581)
409200df59 : Move inter-op settings into ATen/Parallel (#20050)
36d3398aa5 : Clang-format ImageInputOp (#20441)
ea9c6e7581 : eliminate FE_INVALID in unit test (#20502)
e4c7f59fbc : Shallow-copy indices and values in sparse tensor ctor (#20614)
3c86d597c4 : update legacy plus one for mpscnn
8bdbd59d0c : handle box plus one for gpu generate_proposals
373e6a78bf : make box plus one a legacy argument in detection ops
220e6894c5 : Rename qint8 data type (#19932)
980982ac09 : Updating submodules
2ddf126b96 : Revert D15373683: [pytorch][PR] [BC-breaking] Shallow-copy indices and values in sparse tensor ctor
4f02321a9a : Shallow-copy indices and values in sparse tensor ctor (#20330)
21ef4cc615 : Improve bmm performance on CPU by applying TensorAccessor (#20266)
fa189641b5 : Add export for __and__ & __or__ (#17894)
61012080c8 : split and register CollectAndDistributeFpnRpnProposals with C10
d784636b39 : Scope: Move implementations from .h to .cpp file. (#20593)
75d04900fe : Updating submodules
5821a76b8e : Forcing gcc ABI and safer bash scripts, v2 (#20540)
66c6133264 : fix empty dropout (#20541)
a837c00acd : Removing unnecessary comments (+fix flake8)
5f8e849d84 : eliminate FE_INVALID in optimizer related operators and tests (#20501)
5b78a5eadb : Memory format support for contiguous and is_contiguous (#20455)
09f22d10a6 : Infer schema for experimental ops (#20513)
9bd3305592 : Allow nested lists/dicts in legacy operator API (#20379)
456b889353 : Require passing version_counter and allow_tensor_metadata_change to shallow_copy_and_detach() (#20496)
3caf4e6985 : Remove weak_script in MultiheadAttention function. (#20563)
7db1fb84fa : Use slimmer exception raising code when on mobile. (#20543)
1891614aa5 : Add GivenTensorInt16Fill (#20515)
5917ec2c52 : Print registry warning only when DEBUG is set (#20398)
c129ab06e9 : Change onnxifi workflow to support multi-group quantized & Add multi quantization info to caffe2.proto (#20439)
51e40ab832 : Add scalar type info to tensor print (#20483)
abb3698976 : Add QInt32 ScalarType and qint32 data type (#19816)
1a0f753e6e : Fixing typos in schema description for BatchMatMul (#20512)
b3e510518b : Tensor codemod for instance_norm (#20517)
ca24e18c7e : Add an AssertError check back to MultiheadAttention module (#20492)
161566187c : enable CopyVector for type of int on CUDA (#20520)
4c23c34e79 : Computing var/stddev and mean at the same time (#18731)
08bdd694f9 : Extract feature length information from SigridTransforms op (#20384)
428104c60a : Automatic update of fbcode/onnx to ead449a30d026a7a0a59e2ba0a42ca8e52ec2359 (#20542)
8226330af3 : Extend testAvailableArgTypes (#20374)
f89ab7b623 : Allow Dict type in c10 operators (#20373)
a821e11127 : Speed up RecordFunction with sampled callbacks (#20307)
b55d2dcc84 : Publish c10::RegisterOperators as torch::RegisterOperators (#20334)
852f8526c5 : Replace AT_CHECK with TORCH_CHECK [shard 5/10]
5243fe0350 : Allow static inheritence for ScriptModules (#20503)
da3e74b21c : define use_cuda in dropout backward to allow peephole optimization to… (#20289)
bd047d812e : Recursively checkout submodules for Pytorch
72bb84c518 : Provide a few default args for numpy translation (#20451)
83649ef081 : Replace AT_CHECK with TORCH_CHECK [shard 1/10]
2827f3ded6 : Portable way of the warning clause
15c0091d8a : Fix GetLastError in THAllocator for Windows
73a97387c1 : Replace AT_CHECK with TORCH_CHECK [shard 9/10]
365fc26571 : Replace AT_CHECK with TORCH_CHECK [shard 8/10]
d1623f4cc9 : Replace AT_CHECK with TORCH_CHECK [shard 3/10]
9d09f5df6c : Replace AT_CHECK with TORCH_CHECK [shard 7/10]
101067703e : Fix strtod for MSVC (#20490)
97e1f07ffc : Replace AT_CHECK with TORCH_CHECK [shard 10/10]
8e26759f14 : Back out "[pytorch][PR] Manually set _GLIBCXX_USE_CXX11_ABI in devtoolset7 binary builds"
fd18b89c98 : shape inference for learning rate op (#20020)
33f421027c : Allow recency weight pooling for fp16 (#20506)
ea13b53856 : Updating submodules
254de9e8ec : Removing cyclic dependency (#20511)
ace506fb38 : Dict (#20372)
56fb5e03b5 : refactor registerStoragePyTypeObject (#20467)
ea38fbfc5c : Manually set _GLIBCXX_USE_CXX11_ABI in devtoolset7 binary builds (#20243)
358fb51e77 : Replace AT_CHECK with TORCH_CHECK [shard 6/10]
5b45355431 : Replace AT_CHECK with TORCH_CHECK [shard 2/10]
71af7c46bb : Replace AT_CHECK with TORCH_CHECK [shard 4/10]
9610f150d7 : stop build spew on development (#20508)
24cd0e08cf : identify important circleci builds (#20498)
9e7f22b223 : Remove dependencies from Caffe2Go on PyTorch JIT (#20463)
3479777519 : UpSample GPU Porting (#19630)
7ffc37e022 : Add ShapeInference for AtomicIter Op (#20021)
6e82b1c77d : Split nn.MultiHeadAttention into Module + functional (#20415)
b46a630836 : Update Sleef to include fix for FMA4 detection (#20450)
101176870e : eliminate FE_INVALID exceptions related to fp16 conversion (#20390)
8e9692df27 : codemode change missing [from D13586737]
e8fb5f35f0 : Bump torch proto version (#20444)
a9aaf698a4 : add c2 benchmark runs in cpp (#20108)
d2da3ee601 : temporarily disable layernorm AD (#20442)
f0829f37c8 : Rename AT_ASSERT to TORCH_INTERNAL_ASSERT; other macro updates (#20321)
1364104054 : Fix version counter sharing in set_data() (#20391)
3a0b27b73d : Move at::NonVariableTypeMode to TensorImpl, and check it in is_variable() (#20392)
2dc9152dbe : Automatic update of fbcode/onnx to e08efaa35ed54362dfa283240506c003175889b7 (#20443)
824d4f9957 : Needed fixes for binaries
6c3b8a24ff : Make sure reducer=None is not used when fp16 embedding is enabled
63c05bffcb : Fix lint
7799ea5eb3 : Port adaptive_avg_pool3d to ATen (#19898)
5268b7dfaf : Remove support for CUDA 8 (#20298)
62957ab0a1 : Tiny spelling mistake fix. (#20425)
67414714e5 : Move THCTensor_(uniform) to ATen (#20292)
5f7ef09f57 : math module support: gcd, copysign, erf, erfc, expm1, fabs, gamma, lgamma (#19707)
41673d477c : Disable incremental_state function in MultiheadAttention module. (#20177)
f8aa6a8f44 : Make a deep copy of extra_compile_flag dictionary (#20221)
30bdb8c0d7 : Hotfix for caffe2 windows build (#20417)
f496ea36b2 : DataLoader: add error detection for worker_init_fn (#20150)
163f0e182c : Fix bug in non_blocking copy (#20305)
6a8f55796a : Add quant-dequant nodes for weights
9499c7b7ee : Profiling GraphExecutor
f4d9bfaa4d : Support Exports to Multiple ONNX Opset (#19294)
1129b3344a : move DistillBatchLRLoss Layer from open source to fb
3f3ee5600a : make trace's errors more helpful in terms of what it can and can't do when tracing module's methods
7d3d5b73f4 : Add multiline type annotation support for Python frontend (#14922)
3a39ce0f41 : Fix reflection on weak modules, copy attributes (#20190)
00d0ddb140 : Add all list specializations to pickler (#20191)
6197eed409 : Eliminate a const_cast.
a4ae689636 : quantize_rnn_modules in ensemble_export (#20365)
a0c2829194 : Preserve log_dir arg and member for SummaryWriter (#20382)
3aa414c8f2 : Add documentation to Dispatch.h (#20339)
10a9ef833c : Avoid unnecessary refcount bump in unary operators. (#20331)
75c6d37bac : profile on uses
c6255a57e4 : Remove CPU_tensor_parallel_kernel_apply2 (#20207)
6dc70aa513 : add test coverage for make_np (#20317)
ce033485eb : Convenience APIs for script objects (#20226)
50149fb66b : Adds quantized addition and renames sum to add (#20233)
35fed93b1e : Adding Poisson NLL loss to libtorch (#19316)
ed25b8a667 : Add a FAQ entry to explain `Cannot insert a Tensor that requires grad as a constant`
ea5c9c9267 : Update installing.rst (#20354)
4ba28deb6e : Unify libtorch and libcaffe2 (#17783)
872bab22c6 : Some essential changes needed before updating the Windows AMI (#20353)
d68802ba47 : Sparse half embeddings on cuda (#19695)
148e90ba2a : Give clear error message when attempting to merge struct which can't be merged.
c2c0a32155 : Remove setting logger level in caffe2.python.checkpoint (#19803)
2a875fc126 : Fix THD->c10 dependency to gflags.h (#20319)
8726b27333 : Fix overlay_vc_env when called by legacy python (#20304)
c397134d6b : Revert D15156384: Dict
85d56852d3 : Revert D15227620: Allow Dict type in c10 operators
c744468e36 : Revert D15227621: Extend testAvailableArgTypes
99874f87cb : Use registry for BoundShapeInferencer (#20341)
a0e5240afc : Fix DistributedDataParallelTest.test_accumulate_gradients (#20351)
02df1ccd9c : Remove const_cast's from subgraph matcher. (#20303)
e47b210075 : Adding setup job as prereq to html update jobs
3afd99680c : Remove SourceLocation (respin) (#20333)
558c6c4d8a : Make DistributedDataParallel usable with CPU models (#20236)
f32c9bd5e9 : Refactor core DistributedDataParallel tests (#20235)
caa0d0c50a : Add c10d::broadcast_coalesced and tests (#20234)
c31fccd678 : Fix crash issue in conv+sum fusion for MKLDNN on caffe2 (#20139)
8c3a7bb57f : Move librosa and psutil installation from CI script to docker images build script (#20299)
e870b11ae6 : Revert D15275731: Remove SourceLocation
eca91de5d2 : Remove SourceLocation (#20300)
726661b152 : profiler: improve repr for averaged events (#20281)
39af9563e2 : Re-enable CUDA tests for C++ API (#20238)
fe714862dd : Extend testAvailableArgTypes (#20185)
e0166f4670 : Allow Dict type in c10 operators (#20184)
c92129033a : Dict (#19976)
2019f6cd51 : Add unit test to ensure no gradients sync when calling ddp.module(input) (#20282)
899bddeeb6 : fix typo in adaptive methods annotation (#20306)
83a80d2b31 : Add test/test_namedtensor.py (#20168)
199fa12dee : Add namedtensor build and test to the CI (#20163)
e01a5bf28b : Add USE_NAMEDTENSOR compilation flag. (#20162)
f23fb66e6e : Fix in file position logic: file descriptor and Python-side handle (#20270)
c406bf20a0 : error instead of crashing on attempt to subclass typed tensors (#20283)
1e35ef07e9 : Switch off USE_DISTRIBUTED on default for MSVC (#20302)
2aea5b6335 : Fixed Softmax doc to specify dimension to prevent warning in 1.1.0. (#20310)
0087069dce : Use torch::get/set_num_threads without additional includes beyond torch/torch.h (#20176)
916ef76817 : Sort User Defined Classes (#19706)
1102f4c56e : Bump op_version_set (#19812)
3d4d7b9082 : Refactor ChunkDataReader API + fix missing headers (#19485)
bed1d7d3ff : Eliminate some const_cast's.
d6815e1e27 : Only record grad_fn in C++ Scatter and Gather when required so (#20286)
2179d5b32b : Dynamic quantized full LSTM module (#20249)
7bc8562a9a : Enable ONNX constant folding in test_pytorch_onnx_caffe2.py tests. (#20290)
3ee97183b0 : ScaleBlobs Operator (#19660)
4d676d53a6 : split canonicalize_ops, make a decompose pass (#19988)
33b4afe3bb : dont make alias for none value (#20112)
8ebb86dd3a : Support `torch.save` for saving values during execution (#18154)
27c82c03c5 : Unify code path for native mkldnn conv and reorder on-the-fly mkldnn conv (#19963)
b3bce01e26 : Have add_video use NamedTemporaryFile directly (#20223)
35de90e324 : Canonicalize order of If and Loop outputs (#20015)
1ab33fce9a : Disable worker_kill & holder_iter_reference combination in test_proper_exit (#20172)
7edf9a25e8 : Clarify API and add examples for all methods (#20008)
4a086f700f : Quantized FCRelu operator (#19750)
e8cdfb5d23 : Use QTensor with quantized FC operator (#19541)
8defcbfcf4 : Enable caffe2 softmax tests with ROCm 2.4 (#20280)
1deb8dde58 : Quantized FC operator (#19497)
7b733e4fc1 : Rebase conflict fix for isFusableDevice (#20251)
c931d7e9d2 : SubgraphRewriter: Add a support for arbitrary replacement graphs in subgraph rewriter. (#20084)
b3324d0fe3 : SubgraphRewriter: Expose runOnGraph and use it in tests. (#20083)
8a6072c3bd : SubgraphRewriter: Rename pattern fusion to subgraph rewrite. (#20082)
1a85e57334 : flake fixes
2a104f7383 : Port ATen/native to ATen/Parallel (#20043)
a7db3a7591 : Error out if `git log` fails in setup_ci_environment (#20231)
0626ea4300 : Run 'checkout' before 'setup ci environment' on pytorch linux tests (#20213)
cd72be20e0 : Update ROCm 2.4 (#20253)
ede38bd743 : Port THNN to ATen/Parallel (#20032)
fa6a00f313 : Fix memory leak in torch._dirichlet_grad() (#20244)
10715ffc30 : Mention issue number in the JIT workaround comments (#20222)
a5ff09782e : Fix missing files for upload jobs (#20265)
2db9066a41 : Fix formatting for note in eig. (#19743)
d6f62b70f3 : Fix cuda and cudnn libraries search process on Windows (#20205)
d24c0aa82f : Remove explicit checks for parallelism from TH (#20002)
0ebe252c9c : Port TH library to ATen/Parallel (#19105)
26dd65eaf8 : Namespace isolation for classes (#19903)
e41aa0ed2f : fix parsing bugs (#20246)
21a3895c7d : Extract repeated scripts into files (#19674)
42e9a619b3 : add decay parameter in ref_adagrad (#15329)
f0b5ad8919 : add str builtin support (#20188)
9fcf585475 : Autograd profile recording in c10 custom ops (#20175)
fb9d9fbd4e : smoke test for add_graph (#20007)
3ac4d92824 : tweak scripts/build_android.sh for ABI and header install (#20152)
faf2c3ac26 : Standard gamma's export
48e649b803 : Automatic update of fbcode/onnx to 5bde6371620b76302864bce90f521d72eda95d0e (#20232)
6606ac5d41 : Support operator overloading for UDT (#20033)
9c62280ea8 : Remove brew libomp from binary mac machines (#20228)
eecf52b444 : Fix in benchmark_test_generator (#20237)
8a375189ea : Remove flake8 E303 (too many blank lines) (#20225)
ba3e6de4c2 : Fix test_namedtuple_return (#20212)
785583a435 : Use ignore=dirty in submodules. (#20135)
864cfbc216 : PyTorch Profiler Shape aggregation support (#20035)
831bd1c27d : support onnx export rnn with batch_first=True (#19766)
e58817fed9 : Make graph->param_node()->next() the first node (#19788)
bc5398451e : Enable ROCm multi-gpu with Gloo
cf55670bdd : Add proper __repr__ to LogSoftMax (#20018)
b8256280ce : Working on component-wise transformations that mimic `torch.cat` and `torch.stack` (#11868)
f7a7868820 : add process_group in convert_sync_batchnorm (#19240)
2356fac9a5 : Add DirichletFullyConnectedActor to Soft Actor-Critic
4ca325df87 : Add Custom graph fusion (#18588)
19e6886576 : Intra-op parallel microbenchmarks for PT (#19997)
481b6d0268 : Allow a non-OpenMP based build (#19749)
8c97f0b19e : Initialize Caffe2 only when running Caffe2 benchmarks (#19980)
0c7e98b765 : Support for non-contiguous tensors and arbitrary dtypes in PT benchmarks (#19993)
839343a482 : Add USE_CUDA macro in THD DataChannelGloo for non-GPU use case (#20186)
8ca10d35e5 : Add torch.nn.quantized.functional namespace (#20042)
838ada3a62 : Update logic for folding onnx::Constant nodes. (#20109)
877b7c1b8d : Fix NameError with PYTORCH_JIT=0 (#20120)
1cfe15ef2a : temporarily disable devtoolset7 nightlies (#20174)
4211f674f0 : Cleanup includes in c10/core/CPUAllocator.cpp. (#19885)
8fbde94664 : lower batchmm to non-diff optimization (#19987)
0c5dc965a4 : Add logging import and failing MLP (#20115)
035966d538 : Add options to Operator to enable registration of alias analysis passes (#19382)
5c9ab6f411 : Specialize Optional[T] to T (or subtype for Tensor) or None when executing graph (#18407)
47f5be164a : allow classes to be used in their own methods (#20106)
e37d9c8168 : fix compilation order for class methods (#20094)
0aa7407dd0 : Rearrange stopping condition in CompositeReader (#20062)
b3c35e5202 : Export randn_like in ONNX exporter (#20093)
26f5275644 : Index into a tuple with non constant integer (#20081)
722eb48ff2 : Cleanup includes in torch/csrc/* (#19924)
6ca38d9840 : Cleanup includes in torch/csrc/autograd/* (#19923)
8b46938355 : Cleanup includes in torch/csrc/jit/* (#19922)
edb376eceb : Cleanup includes in torch/csrc/jit/script/* (#19921)
17268a9225 : Add print function for QTensor (#19513)
0de4b9e97e : Improve nn.ActivationCls repr of inplace
a8387b7779 : Delete TensorImpl::GetDevice() (#20025)
9005a2c0fc : disable flaky test_proper_exit again, still occasionally failing (#20063)
8818005a91 : Fix warnings coming from bernoulli and dirichlet kernels (#19933)
3a318074e5 : Fix examples in jit#user-defined-types documentation
f29858ff14 : Resolve host_define.h warnings (#19917)
cc06e2f947 : fix build with python-2.7.5 (#20137)
61f1242b7f : Formula typo fix (#20110)
343c1c21f2 : update nn.init.calculate_gain doc example
1dfeffbff5 : Expose test utils (#20114)
f2c715cbe1 : Fix the spelling of "context"
f6609daad7 : Fix the warning if the wrong gcc is used with nvcc (#20158)
57948414ac : Fix small typo T_mul->T_mult
71d23ebc13 : #20143 TripletMarginLoss example isn't clear (#20145)
23ba0561c3 : Add Gate Policy GateLearningRateOp (#20044)
863818e05a : Allow overwriting kernels (#19777)
470af2357d : Refactorings to prepare for overwritable kernels (#19776)
fc00bfd12e : Update MultiheadAttention documentations (#20071)
ecdeef37df : Fix math rendering of CTC loss docs and fix error of lint in test_torch.py
7ddd5d06ed : trace multiple methods (#19905)
7ad04ad28d : DOC: Update web documentation of geometric_ to be consistent with Tensor behaviour (#20091)
271f005eeb : Add elementwise_affine for LayerNormGradientOp (#19982)
440aac082a : The deprecated API allows std::vector arguments (#19784)
e0bd7cc821 : Change the export of _dim_arange in ONNX (#20078)
840680bbf3 : Reduce overhead of OnnxifiOp (#20085)
c7c02724cd : CMakeLists changes to enable libtorch for Android (#19762)
0e77c0f5de : Add ninja to PyTorch README.md file. (#20079)
767c82e151 : Initialize last_epoch in _LRScheduler.__init__() (#20059)
af87cfd7f9 : Remove in-memory scalars and add comments (#20038)
fb40e58f24 : Remove deprecated tensor constructors in torch.distributions (#19979)
792bc56ec2 : Update README.md (#20088)
fb8792e2b6 : Remove torch/jit from xplat build (#19967)
1ecbf0d213 : Split MethodValue and FunctionValue (#19985)
2ec2287cce : Fix smoke tests on binary builds
99c548e223 : Make IValue("string") do the right thing (#20027)
38e630a03f : Extract feature length information from ClipRangesGatherSigridHash (#19704)
6f7a315a71 : Allow onnx export for maxpool with dilations (#18721)
75868683dc : Removing CUDA 8.0 nightlies (#20068)
002009b5a9 : Automatic update of fbcode/onnx to 7d7bc83d29a328233d3e8affa4c4ea8b3e3599ef (#20012)
725ef26f34 : Add support for tensor targets in for-in (#19380)
46589a8a32 : fix labs warning in THTensorMoreMath.cpp (#19760)
ff70c364a4 : fix labs warning in THVectorDefault.cpp (#19999)
18cb098588 : Remove warnings on new_* constructors (#20026)
8987ae314e : Remove try/catch in constant propagation (#19686)
0da0c4be48 : Rotate circleci keys
e846ccd7bc : Fix bug in dumpNet (#20001)
c80ae6dc8e : Change the quant-dequant node pattern for jit op substitution pass (#19910)
3f7e3d5857 : Add the ability to observe intermediate tensors in an onnxifi op
de582e2f89 : Fix test_forked_cw (#19680)
e04caa3f44 : Pass Quantization parameters for quant nodes (#19402)
f564226167 : Avoid reusing TYPE parameter in AT_DISPATCH macros. (#19968)
1f9a0c5dd6 : Add observer nodes for input data nodes (#19232)
8cd6d2f101 : rename BUILD_ATEN_MOBILE to INTERN_BUILD_MOBILE and make it private (#19942)
5108e807e0 : add new macro TORCH_MOBILE for libtorch mobile build (#19761)
360640bc9c : Extract Python-specific SugaredValues to a separate file from init.cpp. (#19986)
ba84ad0d97 : LeftRight works for classes without default constructors (#19775)
e97da36cbb : Explicitly disable copy&move on LeftRight (#19774)
a285cbcccf : support different class modes for bbox in box_with_nms_limit_op
74f527a8fa : Adding job to upload binary sizes
48981a02e9 : Fix typo in embedding_bag_cpu. (#19432)
cb5442b31a : Move IValues from stack into kernels (#19783)
cb8ff2a2b4 : Add mkldnn support for adaptive_avg_pool2d (#19818)
f5b1a41c58 : specify data type in the doc (#19959)
947fd9c3f5 : More doc edits (#19929)
a9c189ca14 : Macos upload fix (#19965)
0ad1d5c317 : fix THAllocator.cpp (#19759)
4e154c1585 : remove C10_MOBILE from LegacyTypeDispatch.cpp (#19758)
508ca44fcc : add logging in sparse_to_dense_mask when skipping (#19945)
56977db4a7 : Provide option to save quantized data for DNNLOWP without layout optimization (#19681)
ca57dd9332 : Fixed log_normal and geometric for CPU (#19938)
c6cb32f588 : Forbid kernels from returning Scalar[] (#19811)
9a81d1e692 : Automatic generation of unittest for Glow integration
3a0727e58b : Fix flake8. (#19832)
e710f3b1e1 : Fix C10_MOBILE macro for ios (#19779)
965c3b2761 : Automatic update of fbcode/onnx to f1311e74ec8a91cbf86094cd6f10157cbf00c536 (#19949)
aa6403bae6 : Added .bool() method
d868c97580 : Improve performance of Int8SpatialBN (needed for DF4 quantization) (#19702)
95ce796663 : enable CopyVector for type of int32_t (#19931)
4c678dbe87 : Make moving FunctionSchema efficient (#19773)
8e77506799 : Add onnx export for floor, ceil, log2 and prim::shape
55c719b161 : Remove operator.h's dependency on function_schema.h (#19817)
2a95cf6345 : Add a pattern-based fusion pass. (#19596)
abbfb7dd23 : Fix devtoolset7 binary builds (#19919)
f5c2b5a259 : ONNX Export Min and Max ops with dim
a5b8a7d44b : Don't include TensorMethods.h - it's already included by Tensor.h anyway. (#19831)
62a1640666 : Improve torch.utils.tensorboard docs (#19915)
ff33c8c24a : Avoid printing debug information in the test
114644449e : Break circular dependency between Type.h, Tensor.h and TensorMethods.h (#19830)
ccf35f4be0 : remove scalar to float matching (#19918)
4294dba981 : Misc pickler improvements (#19638)
8933ff651c : Remove self-include. (#19833)
de19eeee99 : Enabled masked for a bool tensor (#19140)
39b885cbbf : Add magma for CUDA 10.1 to Windows docs
9d0b5a1ce9 : Build caffe2/fb/operators (#19688)
841360029a : Finer grained consistency check in reducer (#19901)
5525c419fc : Only call into reducer if torch.is_grad_enabled() (#19897)
a92e1ccd6c : Remove trailing whitespace flake8 lint (#19828)
b695e562e5 : Make find_unused_parameters in DDP default to False (#19895)
3f7a87788f : Remove redundant include from jit/fuser/cpu/dynamic_library.h. (#19863)
3803d1c901 : Fix conda build for Windows (#19824)
9b69da2b55 : Allow for iterations where no module parameter is used (#19821)
f0a007a26c : Use QualifiedName for classes (#19575)
1d6e868c2f : make QualifiedName a value type (#19626)
096dd8a4f2 : separate QualifiedName into its own file (#19566)
472be69a73 : Avoid Output Uninitialized Blobs in Load with load_all=1 (#19133)
268859ce0d : Fix CUDA stream syncing bug in allgather and reduce_scatter (#19631)
a25b79531c : use fully qualified name for ScriptClasses (#19239)
2ce39de3fc : Add elementwise_affine for layer_norm_op (#19713)
f9786ad351 : Add support for LONG_BINGET pickler op (#19815)
5a83a7424d : fix optional type unification (#19813)
698103cdd6 : DataLoader docs update to describe how workers are managed, including Windows. (#18091)
4e6608e86d : Revert D15103223: [pytorch][PR] [CUDA 10] Resolve host_define.h warnings
42fbeef5d7 : update F.grid_sample doc for clarity (#19754)
dc67d9f3b9 : Cleanup documentation (#19584)
75754beca3 : Revert D14577575: [pytorch][PR] Fix lack of state init for adagrad and add share_memory flag
11297702b9 : Fix the install of TensorBoard for doc generation (#19814)
be20d65b70 : Follow up to adaptive_max_pool3d() port (#19748)
cb4d41afcd : Follow up to adaptive_max_pool2d() port (#19738)
2573e695b0 : Resolve host_define.h warnings (#19789)
c5845c4482 : Add support for reduce-scatter in c10d (#18844)
c9f380df02 : Add aten mkldnn linear operator
48b81da4cb : Add aten mkldnn view operator
61d5a8dded : Add aten mkldnn add operator
fb53c189b3 : Add aten mkldnn batch_norm operator
4864000e55 : Add aten mkldnn ops: relu, max_pool2d and avg_pool2d
3445020ca3 : Add aten mkldnn conv2d operator
8f1445c406 : Add is_mkldnn to at::Tensor
236c2b2387 : Let script module buffer attributes also cast device/type (#19700)
5099db08d4 : Ignore `nn::Functional` submodules in `nn::Module` serialization (#19740)
61d48aa989 : Refactor ProcessGroupNCCL collective primitives (#18820)
e1ebf330d5 : Install TensorBoard for doc generation (#19810)
bacc8815c7 : update Anaconda download link (#19794)
dafee117e8 : Removing unused arg f from _model_to_graph(). (#19647)
0d8a3610c5 : Multiple module outputs and multiple calls to backward (#19799)
dcfb5620df : Allow passing lists as trace inputs.
8f0603b128 : C++ changes toward libtorch and libcaffe2 unification (#19554)
9d180e602f : More topi support (#19728)
c182824f69 : Update foxi version (#19793)
20c22bcae4 : Automatic update of fbcode/onnx to 22662bfd4dcc6baebf29e3b823a051676f991001 (#19790)
f0d493d290 : Add devtoolset 8 (gcc 8) + glibc 2.26 + centos 7.5 rocm docker image (#19767)
98e312cf96 : TensorBoard support within PyTorch (#16196)
97e80ab6fc : Always enable autodiff check (#19787)
48d5ab54a8 : Automatic update of fbcode/foxi to 8f74bc4df3a4cfc69b1a3eadf62aa29d9961c72d AND update Glow AND update C2 (#19792)
7a8bc85f47 : Profiler: add Self CPU Time Total, CPU time total and other general improvements (#19378)
6e06154c13 : Quantized SumRelu (#19319)
76307667ca : Use the QTensor with QReLU (#19312)
db9008496e : Changing the rounding in the QTensor (#19714)
e814c11045 : Fix env vars needed for devtoolset7 binaries
c5cca65351 : Fixing update_s3_htmls for binaries
9ef8eb4cbc : Fix case for `activations` attribute in nn.RNN ONNX export. (#19368)
a425e1cbf8 : Remove duplicate inlineCallToCode (#19724)
330990d878 : Serialize first-class version of functions (#19723)
6cb1b994d8 : Trace directly into first-class module form. (#19722)
31524bda1f : @torch.jit.script(fn) now is a torch.jit.Function (#19721)
12f7c2dea3 : pybind CompilationUnit and Function directly (#19720)
bf5a5c2a31 : caffe2 | Use _aligned_free in WorkerPool destruction (#19751)
65496e4e67 : Bug fix in bound shape inferencer (#19729)
556c8a300b : Fall back to asking nvcc for detecting cuda version if no *cudaart* is found (#19741)
5025d1d5e4 : Automatic update of fbcode/onnx to 27d4b617e7097cda7d0d4c45ff2b09d248f33179 (#19718)
bbedadddce : Fix Circle CI for ONNX repo (#19725)
0effe1d4a4 : Make interpolate bicubic match opencv result (#19703)
29d8711ef0 : Fix compilation on Windows 10 (CUDA 10.0, Visual Studio 2017) (#19615)
af06d6342c : Add SGDR(Stochastic Gradient Descent with Warm Restarts) scheduler (#17226)
465799fab3 : Replace cpu_apply with TensorIterator inside of Copy function (#18618)
6c7135decb : fix typo: pytoch -> pytorch
3875e1ba45 : try to make at::cat in mm_tree_reduction operate on contig tensors (#18816)
c571969148 : Fix the insert_guard for norm decomposition (#19646)
3c81eb3aa7 : add max_pool2d to AD, add tests for both autodiff and inference mode
cbd0a2d3c9 : Fix the depthwise 3x3x3 fast path criteria for the stride (#19692)
614871d948 : use relative path to load libthnvrtc (#19690)
72b8b6c374 : Change some comments related to moving copy_ to native (#19618)
17e4cd0c0a : Remove old complex Types (#19616)
6fead42eb8 : Remove function variant of copy_ (#19622)
a6811e17c0 : Restore copy_ overload with async arg (#19641)
c08f3d06c3 : Add some of nn.init to weak script (#19640)
9aa0e6078f : Support serializing std::vector<torch::Tensor> (#19677)
32174bedb8 : Fix fuser tests on sandcastle (#19684)
3d6e956412 : Add LONG_BINPUT to unpickler (#19696)
62447a5aa3 : improve err msg (#19645)
6ec55c13a9 : Enable assignment for QTensor in pytorch frontend (#19676)
4a65ee95cc : Make torch.equal work with boolean CPU tensors
d14abe3aff : Add torch.from_file function similar to the Storage.from_file, but returning tensor (#18688)
d247912dbf : Add no-gpu build mode for all of PyTorch and Caffe2
c855e04d5f : Caffe2 shouldn't fail if CUDA peer access is already enabled
960513006f : Support exporting squeeze & unsqueeze with negative dim attribute
b675f07bb6 : Remove useless input shape checker in conv (#19608)
87a6974193 : Make it possible for self.forward to return a ScriptMethod (#19217)
2f73b3d26e : Add if ops support for onnxifi and ssa-rewrite (#19585)
41486306d9 : GCC ABI variants for nightly builds (#18888)
5e62ee2b97 : Fix no SIGCHLD checking in DataLoaderIter._shutdown_workers (#19421)
c42f3f9055 : Revert D15008160: Enable assignment for QTensor in pytorch frontend
84b275b70f : fix rocm test (#19663)
8273b9b3cb : Enforce consistent dict iteration order for trace inputs. (#19528)
309c15e2df : Enable assignment for QTensor in pytorch frontend (#19530)
d902774cad : Dont introduce aliasing in CSE or Constant Pooling (#19576)
ba1cf38718 : Remove QTensor alias (#19635)
8b798f43e3 : Commit explicit libtorch_python sources (#19607)
5119cc7cdf : builtin ivalues sort (#19572)
80020b3d2d : Guard {set,rebase}_history on grad_fn check (#19623)
fb9fc42a0c : optimize BatchMatmulOp (#18612)
176bdc0722 : fix lint (#19632)
9b272affde : Add base support to torch.logspace, default base=10 (#19542)
96b966297e : disable flake8 E302 (two blank lines) (#19634)
6b8771a7a6 : fix nn.Sequential doc
70b82d28b8 : caffe2 | Windows compat fixes
2e048feb9e : Remove fixed TODO (#19590)
55e53d3d7e : correct comments in group_norm_op (#19621)
5f82d59c0a : Simplify argument test cases (#19593)
fddd763ec1 : Add test cases for optional of list (#19592)
fc8834df4b : Port adaptive_max_pool3d() to ATen (#19547)
0922a64d22 : add torch.tensor requires grad (#19445)
4e8cc8ee90 : Surface the Glow traces to C2 (#19087)
444f792fa6 : Fix lack of state init for adagrad and add share_memory flag (#17679)
0d0acba3bd : Allow extracting element-wise loss in softmax (#19579)
e9c8f372c4 : dispatch max_pools with no indices, expose max_pools to torch namespace (#19449)
f3be2816ae : Adds `fakeQuantizePerTensorAffineOp` to pytorch (#19387)
1b3967b491 : -fno-math-errno -fno-trapping-math (#19552)
d8729efabe : Only require python print on certain namespaces (#19383)
3cc60e54e3 : Use `fbgemm` for quantize/dequantize ops (#19500)
714344a976 : Specify to use Float16UniformFill if necessary in sparse lookup layer (#18499)
e3f1504621 : Fix the Division by Zero Bug of CosineAnnealingLR (#19180)
7a4189696f : Fix the documentation for BCEWithLogitsLoss (#17218, #16804) (#19212)
bb05f70724 : fix the docstring of `RandomSampler` (#19113)
83cf9473dc : Avoid (future) cusparse name collision (#19591)
f767c9ac76 : Add docs and test guaranteeing indices from torch.nonzero ordered C-style (#19539)
3b4d4ef503 : Remove unnecessary printing from tests
36084908e4 : Fix lr_scheduler's last_epoch value at the time of initialization (BC BREAKING!) (#7889)
f9c4ce781f : Removes variable which is assigned but not used (#19194)
dce3d74dfb : add torch.cuda.synchronize(device=None) (#19573)
75ce5173a9 : Port adaptive_max_pool2d() to ATen (#19409)
88f78c719a : Fix math formatting of PairwiseDistance and CosineSimilarity docs and fix math formatting of CTC loss docs.
5bafb64e67 : Revert D15039713: [pytorch][PR] add torch.tensor requires grad
e7fc7c732c : Bugfix for fusion device check (#19594)
d2b03512da : add torch.tensor requires grad (#19445)
8be6d5ffd8 : Add onnx support for _unique2 operator
5a796d15be : Automatic update of fbcode/onnx to 0e8d2bc5e51455c70ef790b9f65aa632ed9bc8a7 (#19568)
5be4bee4ff : Don't create FusionGroups for known-CPU producer values (#19342)
969af4315a : Explicitly define supported types (#19516)
8abab61d39 : IRParser: optionally create name->value map of the parsed IR. (#19551)
43d0b78c31 : Profiling : Adding Profile Op to provide storage for profiling lambdas
7f053b27bc : Step 5: remove _unique_dim in favor of unique_dim (#18654)
767d184b77 : Add back option to not adjust output batch size (#19442)
7655b857f7 : Add debug logic to c2_ref_test and its helpers (#19359)
a09240b0a0 : fix variable shadowing issues (#19567)
19f73180cf : Add manual_seed in script (#19510)
e714429bf4 : Automatic update of fbcode/onnx to 83dd62659fc07d5b7fa93b5d1c1879f93509c7db (#19454)
5d5d67fa3f : Get rid of unnecessary matches_jit_signature: True specifications. (#19549)
c30224ad21 : Rename potri to cholesky_inverse (#19498)
deadf3ba89 : Add assertion to make sure init op is always fp16 compatible in fp16 training
689dd800ed : Generate only one Type class per backend (#19295)
189f30603c : Make complex its own backend (#19275)
ab78449e8c : Add ScalarType argument to Type::options() (#19270)
a044ba1af5 : Generate cases for all ScalarTypes in Type functions that call to TH (#19230)
868933a467 : Fix clang-format. (#19550)
1bd5f2c181 : Fix some typos in jit README
5afc274708 : Match JIT signature with triu_indices / tril_indices. (#19484)
9eb48e1b03 : Make one_hot non-differentiable. (#19524)
6733037416 : Remove 'BoolTensor', 'IndexTensor' from frontend specifications. (#19523)
3944601588 : Have _embedding_bag_dense_backward match JIT signature. (#19522)
e3523979ae : Have embedding_dense_backward match JIT signature. (#19521)
638ffac359 : Update mkldnn-bridge to fix crash issue in DNNLOWP dequantize op (#19159)
83373e7755 : Hook up non_differentiability in derivatives.yaml when no autograd function is generated. (#19520)
8868a4f20b : Move non_differentiable_arg_names from autograd functions to differentiability_info. (#19519)
6d307db5b4 : Move cuFFT plan cache note outside Best Practices (#19538)
26f1c6d4d4 : Revert D14689639: [pytorch] Allow passing lists as trace inputs.
c96c91da22 : Improve optimizations for DNNLOWP support on MKL-DNN (#18843)
fe87327c28 : Make Observer class as template Quant class for QuantConfig (#19418)
9f4f7e1621 : Support compilation on gcc-7.4.0 (#19470)
d17c22d024 : Improve embedding_bag add kernel (#19329)
6325b6e44e : Make finding unused model parameters optional (#19515)
63e2833ceb : Disallow std::vector arguments (#19511)
1ac14b03b5 : Drop instead of pop (#19503)
9818c7cb63 : Add minimalistic implementation of subgraph matcher. (#19322)
26f12af537 : Fix op benchmarks error in OSS environment (#19518)
5da7b74d48 : fix AI-PEP path error (#19514)
a421f882dc : First step at container aliasing (#18710)
f5fe7aa0b2 : Fix relu bug for empty tensor (#19451)
2d4875b8ed : Allow passing lists as trace inputs.
9245eaf3f0 : Allow for segmented printing in PythonPrint (#19238)
73c166a5ed : add resolveType to Resolver (#19237)
1e94a3bc4d : Turn resolver into a class (#19236)
405c7bcea0 : Fix bad annotation in docs (#19501)
b85edac16f : Fix out-of-topological-order issue in Nomnigraph (#19458)
53bb739b67 : Remove uses of TypeID (#19452)
3762cf9cc6 : Expose QScheme in frontend (#19381)
1898e9368b : Revert D15003385: Have embedding_dense_backward match JIT signature.
e3470ae4bd : Revert D15003379: Have _embedding_bag_dense_backward match JIT signature.
79bfc3931a : Revert D15003387: Remove 'BoolTensor', 'IndexTensor' from frontend specifications.
013926cfcf : Revert D15003382: Make one_hot non-differentiable.
fc1aadec3b : Make empty_affine_quantized private (#19446)
c3755eeeee : Make one_hot non-differentiable. (#19430)
622cf1fec9 : Remove 'BoolTensor', 'IndexTensor' from frontend specifications. (#19429)
b0812d3d4c : Have embedding_dense_backward match JIT signature. (#19427)
a6ab443e32 : Have _embedding_bag_dense_backward match JIT signature. (#19428)
30b2953b8b : Stop generating autograd functions for derivatives.yaml entries that only specify output differentiability. (#19424)
e7b9526dc6 : Fix ord() when dealing with utf8 chars (#19423)
557b1b362f : Fix copied optimizer (#19308)
30292d994f : Add an identity module (#19249)
ef499cd567 : Remove no-fork workaround for running tests with ROCm (#19436)
f3ef94a806 : Delete defunct test/ffi directory. (#19168)
a97330b7c5 : Fix missing doc out= for torch.cumprod (#19340)
0676ba0c5c : Mention packed accessors in tensor basics doc (#19464)
ea6c738c8a : Rename 'not_differentiable' to 'non_differentiable'. (#19272)
aa50f1e365 : Clean the onnx constant fold code a bit (#19398)
593bb145ce : Allow passing dicts as trace inputs. (#18092)
9034b66f14 : skip test_trace_c10_ops if _caffe2 is not built (#19099)
d9115b533a : remove needless ## in REGISTER_ALLOCATOR definition. (#19261)
9983c24cfc : Strip doc_string from exported ONNX models (#18882)
f0d98199fb : improve dim sort performance (#19379)
941ccd6b35 : Fix missing import sys in pin_memory.py (#19419)
940caed0d4 : update documentation of PairwiseDistance#19241 (#19412)
8638634a6e : fixes link in TripletMarginLoss (#19417)
08f5c05d60 : make separate operators as independent binaries (#19450)
1e78252de7 : Updating submodules
e1750754c8 : Step 4: add support for unique with dim=None (#18651)
1b1d1c9837 : allow bools to be used as attributes (#19440)
a0e09216f0 : Fix test build (#19444)
b9291f55bb : pow scalar exponent / base autodiff, fusion (#19324)
b4fa979a37 : Improve unique CPU performance for returning counts (#19352)
563de88aa5 : Revert D14909203: Remove usages of TypeID
ce969c0bc4 : Add tests for argument types (#19290)
d9052b2176 : Allow optionals arguments from C++ (#19311)
45d5b6be48 : Enhance front-end to add op (#19433)
edf77fe64a : Fix cpp_custom_type_hack variable handling (#19400)
4c93be0fa0 : fix hub doc formatting issues (#19434)
a5c4348d54 : Recursively find tensors in DDP module output (#19360)
17f05ad5e5 : Moving at::Tensor into caffe2::Tensor without bumping refcount (#19388)
88f70a1670 : Fix pickling torch.float32 (#18045)
f5435634b4 : Respect order of Parameters in rnn.py (#18198)
2d0d153288 : Refactor EmitLoopCommon to make it more amenable to future extensions (#19341)
b7323a94ad : Cleanup init_process_group (#19033)
20a5aa9670 : Sync FindCUDA/select_computer_arch.cmake from upstream (#19392)
9e3bdb3231 : Update module.py documentation. (#19347)
973d51079b : Add device-specific cuFFT plan caches (#19300)
b8fb6eae88 : Improve bmm() performance on CPU when input tensor is non-contiguous (#19338)
12d6f79ecd : Optional inputs and outputs (#19289)
fa96de2b3f : Add some tests (#19288)
601f36bacc : Use string based schema for exposing caffe2 ops (#19287)
5ca22cce69 : Allow registering ops without specifying the full schema (#19286)
a456e1e196 : Add either type (#19285)
12dcc77bcb : Allow ops without tensor args if only fallback kernel exists (#19284)
8036af39d2 : String-based schemas in op registration API (#19283)
41dc54e291 : Move function schema parser to ATen/core build target (#19282)
789c438d86 : Automatic update of fbcode/onnx to ad7313470a9119d7e1afda7edf1d654497ee80ab (#19339)
fbf505cba7 : Remove copy and copy_ special case on Type (#18972)
a64cce326f : Add constant folding to ONNX graph during export (Resubmission) (#18698)
01d7d3de46 : Remove usages of TypeID (#19183)
c7b1fdb767 : Fixing function schema parser for Android (#19281)
094678c04b : Split function schema parser from operator (#19280)
a1174dbc50 : fix hub doc format
b31bab7860 : Clang-format torch/csrc/jit/passes/quantization.cpp. (#19385)
6732358bf9 : Allow DDP to wrap multi-GPU modules (#19271)
c48e1679f9 : Add validator for optimizers when parameters are shared
2787f1d8ed : hub minor fixes (#19247)
776fec0f9f : fix wrong schema (#19370)
bf5f30f39b : Fix printing format in examples in jit/README.md. (#19323)
48859e3ad3 : Allow for single-line deletions in clang_tidy.py (#19082)
242743eedb : Revert D14901379: [jit] Add options to Operator to enable registration of alias analysis passes
0414f23855 : Revert D14901485: [jit] Only require python print on certain namespaces
5fa1aad670 : Remove unused template parameter in OnnxifiOp (#19362)
ad8f34fcca : Add empty_quantized (#18960)
4371cb5e01 : Cast not expressions to bool (#19361)
d6b91075dc : Eliminate type dispatch from copy_kernel, and use memcpy directly rather than implementing our own copy. (#19198)
3e0b46b6d1 : Only require python print on certain namespaces (#18846)
3a031c414a : Add options to Operator to enable registration of alias analysis passes (#18589)
eaa14f5f59 : Error out on in-place binops on tensors with internal overlap (#19317)
ff4a4d6155 : Update for #19326
aad6f97898 : Decorator to make sure we can import `core` from caffe2 (#19273)
f1f31b634d : Eliminate AdjustBatch ops (#19083)
3fcee4875c : Add rst entry for nn.MultiheadAttention (#19346)
db611b7caf : Delete C10Tensor (#19328)
33443d083e : Fix python lint (#19331)
58d4414c33 : Profiling pipeline part1
93201d0676 : Fix lint
06c28d8a12 : Add slicing and int_repr() to QTensor (#19296)
33e7977154 : move const defs of DeviceType to DeviceType.h (#19185)
5627940e9c : Add a fast path for batch-norm CPU inference. (#19152)
ff0a7ae43f : Testing for folded conv_bn_relu (#19298)
9f35185b56 : Initialize intra-op threads in JIT thread pool (#19058)
5e7bc26f65 : Fix ASSERT_ANY_THROW. (#19321)
78f589e794 : Add len() for strings (#19320)
df67969e6b : Step 3: Add support for return_counts to torch.unique for dim not None (#18650)
c8897d2263 : invoke NN smoketests from a python loop instead of a batch file (#18756)
1c5073fb4b : Adding pin_memory kwarg to zeros, ones, empty, ... tensor constructors (#18952)
31686805f2 : Enable unit tests for ROCm 2.3 (#19307)
e1f38a847d : Fix type conversion in dequant and add a test (#19226)
da4ff17eee : math module support (#19115)
344acaa0ca : Revert replicate.py to disallow replicating multi-device modules (#19278)
b9c20d5224 : graph_for based on last_optimized_executed_graph (#19142)
3b29cbaf86 : Enable half for CUDA dense EmbeddingBag backward. (#19293)
3501576230 : calculate execution time based on final iterations (#19299)
646cb6157d : Move OMP/MKL thread initialization into ATen/Parallel (#19011)
20fc7b6ec7 : Avoid undefined symbol error when building AdIndexer LTO (#19009)
ada10ad416 : Ellipsis in subscript
f1c8e01524 : Add input information in RecordFunction calls (#18717)
84b264b17d : Add NHWC order support in the cost inference function of 3d conv (#19170)
ffc9e29844 : unit test with multiple op invocations (#19118)
00148825fc : Run shellcheck on Jenkins scripts (#18874)
a0263ec047 : Make DistributedDataParallel use new reducer (#18953)
6ed57e052d : Fix the return value of ParseFromString (#19262)
3403cb857b : Modify Cholesky derivative (#19116)
991279dc7d : produce diagram for caffe2 build matrix (#18517)
7caad0ed33 : Free all blocks with outstanding events on OOM-retry (#19222)
86619b8ba6 : Make sure that any of the future versions can load and execute older models. (#19174)
68c4ebbeeb : Sync fbcode/caffe2 and xplat/caffe2 (1) (#19218)
2060e44ec8 : upgrade bazel version in CI [xla ci] (#19246)
1c099fd5c9 : Update docker images to use ROCm 2.3 (#19231)
10bc789dff : fix flake8 (#19243)
e958ceb5d7 : Remove GraphExecutor's python bindings (#19141)
ddda563f22 : Cleanup ScriptModule bindings (#19138)
dcb5fd3613 : get propagate_shape logic out of module.h (#19137)
1827ca4c35 : Make debug subgraph inlining thread local (#19136)
c38c7b0ec5 : Support Kwargs in C++ Function/Method calls (#19086)
d8669a2c7e : Enable working ROCm tests (#19169)
ca02558d40 : import warnings in torch.hub & fix master CI travis (#19181)
bba96db2a5 : fix lint errors in gen.py (#19221)
b1539412db : Add pass registration mechanism (#18587)
a3d3008e73 : JIT Layernorm fusion (#18266)
0e435afc3c : Add more debugging helper to net transformer (#19176)
1c836e7bb9 : Add Quantized Backend (#18546)
3f7ddd269c : Step 2: Rename _unique_dim2_temporary_will_remove_soon to unique_dim (#18649)
bd55abb463 : Fix onnx ints (#19102)
c480798a1c : use C10_REGISTER for GELU op
79db4e9c10 : Fix tabs lint. (#19196)
65ae897ae8 : Pin nvidia-container-runtime version (#19195)
deda88e0aa : One more fix for #18790
7e73783c6f : Fix promoteTypes for QInt types (#19182)
422b01e788 : Replace more usages of Type with DeprecatedTypeProperties (#19093)
fea0a0be53 : Support attributes when copying modules (#19040)
4ae59e4744 : Move version_counter_ to TensorImpl (#18223)
507fe66bea : Enable comp ops for bool tensor (#19109)
c7b5a8a876 : Change is_variable() to check existence of AutogradMeta, and remove is_variable_ (#19139)
ef406ee925 : First class modules in the compiler, round 2 (#19167)
b6ee83a5b4 : Materialize a non-default device for C2 legacy storage. (#18605)
bbe648dffb : Allow empty net type (#19154)
33a950924a : Skip Slice if it's no op (#19155)
feb5d26510 : Rename ONNX util test names (#19153)
c1b92f518d : Remove ProcessGroup::getGroupRank (#19147)
c145c34a7b : Basic implementation of QRelu in C10 (#19091)
4b20fc826d : Import MultiheadAttention to PyTorch (#18334)
b6f130aa70 : try to enable uncertainty for lr loss (#17236)
160d0776d5 : Remove comment (#19148)
f5165ade5b : Revert D14842057: Compiler uses first-class modules**
5e1f0b2a07 : Compiler uses first-class modules** (#19043)
fce0c5e17d : Require matches_jit_signature within native_functions.yaml (#18956)
e54cb03a51 : add/move a few apis in torch.hub (#18758)
5164622ba4 : Revert D14878128: [jit] Support attributes when copying modules
ce166d949d : ProcessGroupMPI exists only if it is valid (#14809)
6b0ca8eae5 : Fix flaky store timeout test (#19114)
821b5f138a : Optimize SoftmaxOp on CPU (#18635)
1abbee0f8e : Allow Tensor lists to show up in symbolic differentiable graphs. (#16784)
612998f2ee : Support attributes when copying modules (#19040)
226a358136 : Move ConcatBatchMatMulBatchGatherOp to OSS
70313941b4 : Print CuDNN version correctly. (#19110)
8e8874ae54 : Infer device from pointer in from_blob (#19094)
575aebc182 : implement operators for DNNLOWP (#18656)
d537e12310 : Improve mismatched storage error message. (#19068)
86532c921d : Refactor pickler (#19035)
1858773c0c : Fixed bool Tensor value change bug (#19096)
92f70bb639 : Split python_ir.h in a more sensible way (#19081)
b461689cfd : Clear input/ouput shape cache for each inference (#19085)
ea2405c7dc : Add torch.unique_consecutive (#19060)
23b0908d38 : Replace tabs with space (#19100)
a9a29dd63f : Fixes error when too many parameters are passed to fused cuda kernel (#18063)
496b0b03d9 : amend D14778810 (#18902)
82b570528d : Move abs, frac, reciprocal, and neg to TensorIterator (#19041)
56b18eadab : Fix aten op output assignment (#18581)
447d74a074 : EmbeddingBag w/ differentiable per_sample_weights (#18957)
c889ff6cf8 : EmbeddingBag w/ per_sample_weights CUDA fwd + bwd (#18800)
0397d7c0c8 : EmbeddingBag w/ per_sample_weights CPU backward (#18799)
2a2007e5ac : EmbeddingBag CPU forward with per_sample_weights. (#18735)
c561ef5406 : Refactor CPU embedding_bag implementation (#18734)
0ca8f7a15f : Make BlackBoxPredictor handle networks throwing exceptions (#19080)
168c0797c4 : Remind users to set map_location properly when using DDP
487388d8ad : Rename btrisolve to lu_solve (#18726)
5eb6a2be41 : Avoid calling tensor.data.set_() in DDP
1f0ee9d6e6 : Reapply "Wrap workaround for cpp custom types a bit prettier and add an example" (#19062)
8f9b11cf33 : Propagate ProcessGroup timeout to Store (#16571)
aa017db59c : make test_jit_fuser runnable
ad45d09202 : Fix documentation for unfold(dimension=..., ...), fixes #18793 (#19020)
a3e177083b : Debugging: Increase process reporting for apt/dpkg. (#18880)
29ea08616b : Add torch.__config__.show(), reporting detailed version of all libraries. (#18579)
31ff0ecd2b : Fix torch::nn::init::orthogonal_ with CNNs (#18915)
25bd28c3a0 : move nightlies to 1.1.0xxx
ba77eadbca : add a utility function to check whether it's in the middle of onnx export or not
75d6d8833d : remove interned_string.h dep (#19061)
b1bea0b733 : add logging to make the saving action visible (#19042)
89145e602b : Namedtuple return for gels, triangular_solve, and test refactor (#17195)
48a35135fb : Convert all tabs to spaces, add CI. (#18959)
544783fa1d : Fix BN tests for >= 8 GPU test environments (#19049)
17adce1b69 : do not use constexpr with CUDA >= 9.2 compiler on Windows. (#18986)
5f24f9a29b : Add torch/lib/protobuf to gitignore, fixes #18700 (#19019)
72e171dc52 : Automatic update of fbcode/onnx to 971311db58f2fa8306d15e1458b5fd47dbc8d11c (#19046)
3bfdffe487 : Fix default CXX for Windows in cpp_extensions.py (#19052)
7db4c8ed76 : fix the onnx ci
fd40c0eba0 : Add gelu op (#18992)
3ad710b837 : Add MKL-DNN Tensor (#17748)
e0c593eae7 : detect C++ ABI flag for cpp extensions from available runtime information (#18994)
df05c7fbac : Fix momentum setting in BatchNorm forward pass. (#18764)
cfb6054ada : add android build workflow to pytorch CI jobs (#18919)
443a58e03d : Export C10 operator in PyTorch Model (#18210)
09c19e1068 : Fix interpolate tracing (#19034)
930fb2f319 : Fix default dtype in shape analysis (#18968)
a7095b355e : Renamed bool tensors into byte tensors (#19021)
026a9c6bf2 : Handle None indexing in TorchScript (#18615)
239de1623d : Turn on mkldnn in most builds except rocm
9b76f69cd3 : Remove dead code in module.cpp (#19022)
943f712d7a : Convert test_recursive_cse to use Filecheck inline annotations. (#19032)
062b1321fe : Add a document 'How to Write Tests Using FileCheck' (#19005)
e7b2669151 : caffe2 - Expose tensor filler util to Python (#18886)
66a3277dfa : call build_android.sh from pytorch CI build script (#18918)
0565141728 : Type annotations for `util.data`. (#18963)
a2ac260524 : ifdef guard some explicit pragma unrolls (#19018)
02968398d5 : Fix a dev mode bug in activation distribution observer (#19004)
2addcccbf1 : Clean up some sparse code. (#18962)
65b9196741 : Remove tensorWithAllocator() from Type (#18780)
5241e6ec5c : Fix sparse mm for ROCm (#18985)
6c91610f0c : Check if profiler is disabled in push/pop event (#18908)
08ee4e5607 : Implement Observer pass on simple model and validate stats (#18848)
67fdb4abf7 : AVX2 with GCC9 fix. (#18991)
f6af76ead7 : Remove tensorFromBlob() from Type (#18779)
9b69f21a95 : Improve precision of emitted code for prim::Constant (#18817)
79533ef097 : convert_sync_batch_norm to SyncBatchNorm (#18787)
907b4c5890 : fix bug when falling back to acc32 when weight is prepacked (#18974)
dbd9971dd2 : move 2ops back to autodiff (#18969)
1ffa358fca : Preserve naming for inputs/outputs with observer insertion (#18713)
34382e428f : Emit math functions specific to output type (#18815)
8961ad8c5b : add instructions for NVIDIA Jetson platforms (#18990)
bcd527190a : Quantizer pass to insert quant-dequant nodes into IR (#18446)
7b5b1486c9 : add SyncBatchNorm to docs (#18988)
d6d0fcc92b : Add c10_cuda to libraries in CUDAExtension for Windows (#18982)
1497d45315 : Remove Trainer from README.md (#18980)
13f03a42d2 : Create Object that represents a Module (#18469)
8c9caf185b : Add numpy like repeat as torch.repeat_interleave (#18395)
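The `torch.repeat_interleave` entry above adds `numpy.repeat`-style behavior: each element is repeated in place, rather than tiling the whole sequence. A pure-Python sketch of that semantics (illustrative only, not the real kernel):

```python
def repeat_interleave(seq, repeats):
    # `repeats` may be a single int (same count for every element)
    # or a per-element list of counts, as with numpy.repeat.
    if isinstance(repeats, int):
        repeats = [repeats] * len(seq)
    return [x for x, r in zip(seq, repeats) for _ in range(r)]

assert repeat_interleave([1, 2, 3], 2) == [1, 1, 2, 2, 3, 3]
assert repeat_interleave([1, 2], [1, 3]) == [1, 2, 2, 2]
```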
e6bbbb017e : Fix interpolate trace (#18875)
6084908287 : Code string API for fuser testing (#18884)
ce67775f08 : remove unused func (#18712)
46fe266507 : Revert D14778810: [caffe2/int8] fix bug when falling back to acc32 when weight is prepacked
f6f34b3f4c : slots with explicit value/setValue make more sense in future patches (#18468)
091acb0978 : Make Object hold its ClassType (#18467)
53458c97dd : Enforce single parent for script submodules (#18379) (#18860)
f9a56d4af2 : CUDA_NVCC_EXECUTABLE is not needed, as nvcc is in PATH (#18958)
8e1e29124d : Fix precision issue with expansion that prefers 'probs' over 'logits' (#18614)
b90cbb841d : Method is supposed to be in-place (#18684)
28990f34d9 : fix bug when falling back to acc32 when weight is prepacked (#18881)
c1790fa202 : More numerically stable lerp (#18871)
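The "more numerically stable lerp" entry refers to making linear interpolation exact at its endpoints. One common formulation (a sketch of the general technique, not necessarily the exact branch the PR uses) switches forms at the midpoint:

```python
def lerp(start, end, weight):
    # The naive start + weight * (end - start) can fail to return
    # exactly `end` at weight = 1.0 due to rounding; branching on the
    # weight keeps both endpoints exact.
    if weight < 0.5:
        return start + weight * (end - start)
    return end - (1.0 - weight) * (end - start)

assert lerp(1.0, 3.0, 0.0) == 1.0
assert lerp(1.0, 3.0, 1.0) == 3.0   # exact, even for awkward operands
assert lerp(1.0, 3.0, 0.5) == 2.0
```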
edc7b4726b : Increase default c10d/ProcessGroupGloo test timeout (#18916)
cb3a4a3d28 : remove symbolic variable part 1 (#17986)
f3dbcfdfb5 : Revert D14742020: Wrap workaround for cpp custom types a bit prettier and add an example
c65eeeb075 : Decompose more Windows scripts (#18917)
ef779b3397 : Wrap workaround for cpp custom types a bit prettier and add an example (#18791)
c3a559deb7 : Remove cuda::compat functions in aten (#18905)
fefa6d305e : fix side-effects and aliasing for custom ops (#18711)
abc758ed40 : Expand the list of ops that mutate an inputs shape (#18812)
e45e3634d6 : add launch bounds, enable more tests (#18909)
1d263ed92a : Add backward pass to infer single missing input shape for Concat opportunistically (#18911)
0c5d444b28 : change to use clang if NDK >= 18 (#18914)
5e8a9e8802 : Revert D14673459: [pytorch][PR] [jit] Replace Slot on script::Method with NamedIValue
8793e8db42 : Disable flaky test_proper_exit test. (#18950)
865ed7682d : Checkout pytorch_sphinx_theme with https. (#18859)
ce92cf9bd1 : Add tests for reducer class (#18845)
79ac2120ba : Fix a few instances of notifying on a CV while holding the lock (#18857)
0829ef00dd : Unify caffe2 and libtorch build scripts on Windows (#18683)
84068f43f2 : Simplify storage wrapping in TH. (#18855)
043e363c6c : Cache device on TensorImpl; clean up TensorImpl constructors. (#18833)
b7c830b916 : Revert "Adding pin_memory kwarg to zeros, ones, empty,... (#18854)
ab4133397c : Silence compiler warnings (#18912)
c34e5ff952 : ScriptModuleOp in caffe2 (#18716)
8bdd0c3a85 : flake8 fix on extracted python script
8f5e478aa2 : Replace Slot on script::Method with NamedIValue (#18252)
90b8552c98 : U/kostmo/windows offload scripts 3
4f5e72600e : fix lint in optim doc
8a466d147c : Fixed the comment to reference gist example instead of private repo (#18852)
b11a8c6aef : return missing keys from load_state_dict (#18668)
814c1df29a : Fix caffe2 miopen conv transpose gradient op for case of no dX gradient
d35c39e73b : don't attempt to multiply by a sparse matrix (#18737)
07efee395c : add Fast-RNN to AI-PEP
7a19d3c9e1 : Allow override of backend in dist.new_group() (#18595)
1ec1db477d : ONNX Export All Cases of Softmax
b4d2df1fee : Added bool and half support for resize_as_ and view methods (#18821)
bb16e8dacb : Automatic update of fbcode/onnx to 079c2639f9bb79b1774d1e3bfa05b0c093816ca7 (#18841)
33f4751fb8 : Actually model scalar type promotion in shape analysis (#18811)
d108a1abb7 : Add a .ctags.d/ toplevel directory (#18827)
8ca9ba17da : Fix typo
b145dcca04 : Add support for group ConvTranspose (#18794)
8732a1b42e : Disallow changing the device of a tensor via set_. (#18832)
15b318de84 : U/kostmo/win test offload scripts
f97eb8d9e4 : Revert D14603722: Enforce single parent for script submodules
52a3a51490 : Fix deviceCount on FakeGuardImpl. (#18745)
486fae563d : Stop swapping in Storages of the wrong device for Tensors. (#18831)
d70c6f23f4 : Pass ScalarType separately from Type in python constructors
f5741eb855 : Store ScalarType and Backend instead of Type in TensorIterator
c705d9eb1e : Introduce DeprecatedTypeProperties class (#17991)
095f88e093 : Fix to handle null strides in DLPack tensor (#18510)
e5e2110a8e : Add shape inference function for Split (#18838)
0c237f1383 : Fix the duplication problem in _unique_state_dict (#18139)
fa0ad057f8 : fold col offset into bias; optimize A symmetric quant (#17026)
72913a55a8 : fix flake8 lint (#18835)
0a4117a36e : run cpp tests for non-cuda builds in test_jit.py (#18826)
100f95a362 : Fix the linter (#18842)
7e59c60454 : Enforce single parent for script submodules (#18379)
b80a4fa201 : Allow ints, floats, and tensors in conditional (#18755)
843e6234f5 : Fix layernorm ad formula on weight and bias (#18233)
0512e4e323 : Unify namespace of script::Module (#18378)
773ce4fbd0 : Step 1: Secretly add return_counts to unique, and refactor unique_dim for performance (#18648)
7ae0263e1b : Support replicating multi-GPU modules (#18687)
eabd9eac2a : flake8 fix
862aff641a : Remove `device_guard: False` from native_functions that don't have a … (#18803)
cb959aa708 : Switch our Linux machine AMI to a newer image. (#18433)
dfcd7b0185 : QTensor (#18230)
3af2d6d904 : Enforce import order to make protobuf cpp implementation in python work (#18560)
3b71f2e1f2 : Pin onnx ir_version to 4 (#18768)
8711df89cc : fix nccl compilation to make sure it compiles for architectures that pytorch compiles for (#18739)
b5d8844bbe : push magma init into lazyInitCUDA (#18527)
ed9724f385 : For some files that are touched by the QTensor diff (#18765)
a21e256e8d : Fix contiguous AD and Autogradzero inconsistency (#18633)
5950c1e8c4 : Added indexing for bool tensors and bool Indices (#18583)
65dfe1203f : add an assertion to check the param num (#18145)
4afc067fed : add Android NDK param to CI docker build script (#18782)
a7b82a44c4 : Upgrade mkldnn-bridge for dnnlowp support (#16308)
46a68c1b37 : add 'abs' builtin
0916b5419a : Fix dense Embedding to work with double backward (#9078)
c0ad6747a9 : Highlight NCCL all_reduce and all_gather requirements (#18741)
2658190def : Updating submodules
5e33085f27 : Make it possible for users for select /Zi or /ZI over /Z7 when using MSVC (#18790)
06b7fe59f2 : use optimization in D14020675 (#16945)
2113ea6fbf : Add device and dtype to storage. (#18749)
a3da3653eb : Use non-legacy constructors for tensor deserialization. (#18750)
48f70ea0a2 : Added numpy conversion (#18505)
7349dbb7ce : Remove THTensor_(newUnfold). (#18773)
cb66759600 : temp fix for flake8 error (#18788)
3079d95b6c : Fix flake8 issues
40a54bf2f1 : Change ReinitializeTensor to use C10_LOG_FIRST_N (#18531)
80404cb2f5 : Add support for getting TensorProto argument (#18364)
31849bc524 : make test module hook use save/load (#18284)
2d07993bcb : Add ability to specialize class types to ArgumentSpec (#18314)
5f5a2aaab9 : Operator-level performance microbenchmarks (#18740)
b832b99afb : Bool Tensor for CUDA (#18166)
b77e3c2ca1 : Add helpful information to the gradient/inplace operation exception (#18523)
74d9146559 : build_variables.py: turn on link_whole for _C_impl library. (#18763)
84a9694ed0 : Fix windows msbuild bug (#18748)
2e97c82470 : torch.cross' dim default changed to c10::optional instead of int=-1 (#17582)
3027e783b1 : Fix multi-configuration on Windows CMake (CUDA) (#18548)
36237c4893 : Fix flake8 issues in gradgrad test
f095c34b73 : Register operators by passing arguments to RegisterOperators constructor (#18577)
58f5954252 : Allow registering an operator schema without a kernel (#18551)
7a37e066e6 : Improve compiler error messages of the op registration API (#18550)
ae1d13a06f : Improve and test error messages for signature mismatches (#18547)
bb8a0d717c : Enable gmock and fix system gtest issue (#18706)
01c03caacc : Emergency workaround for apt-get failure. (#18733)
0b6ed83f33 : Fix clang-tidy errors in torch/csrc/distributed
385a755b68 : Undefined behavior with memset of std::string to 0 (#18703)
a799751e33 : Revert D14717015: [pytorch][PR] fix nccl compilation to make sure it compiles for architectures that pytorch compiles for
1f5a46ab05 : Automatic update of fbcode/onnx to f0d7df2c643c4e37f1fd7735ef02c972c4d19fb5 (#18695)
c484cf43a0 : Adding pin_memory kwarg to zeros, ones, empty, ... tensor constructors. (#18455)
aed7c9bc96 : Improve Backend comment. (#18567)
baac5489a8 : Expose alias multinomial methods to ATen (#17904)
5ade96fc84 : Update cpp_extension.py (#18638)
fba89b2ae1 : fix typo
d5bf6ddc29 : Kill LegacyBridge functions that don't do multiple dispatch. (#18696)
af84371ba8 : Updating submodules
f084c129db : add Int8FCRelu (#18673)
8e873ce273 : Fix uninitialized value in pickler (#18678)
2e029db2f9 : fixes multiprocessing serialization for integer nn.Parameter (#18639)
fc6296d777 : fix nccl compilation to make sure it compiles for architectures that pytorch compiles for (#18704)
1b25fdbcd0 : More type stubs (#18511)
aa23b8c664 : NCCL build fix WITH_DISTRIBUTED=1.
16f07d7dac : caffe2 - set up correct inheritance structure for remaining operator test classes (#18622)
20b63aa977 : Peephole Optimize Shape Ops (#18549)
4a0f842d42 : Deprecated lambda based API (#18542)
723ce02a55 : deprecated function based API (#18444)
246f5c412e : Revert "Tensor construction codemod(raw_mutable_data) (#16373)" (#18680)
bdfdf6c2b9 : C++ handler for gradient reduction (#18251)
a0285dd0f4 : Updating submodules
89e9b1cf8e : add ConvRelu schema (#18693)
90a5c56988 : offload scripts from win-test.sh
929258a680 : Some fixes for the build script on Windows (#18681)
d6c269c33e : Fix for double backwards tests (#18190)
be01c90797 : Add string index/slice operations (#18247)
af9335436d : Re-land Parsing file check (#18570)
3749d65a7e : Create Node2Vec ModuleKeeper
822c8ee143 : use acc16 only when n>128 and k>128 in Skylake (#18672)
4c74cf7489 : Move ideep singleton registration to ATen from C2. (#18335)
ddbfdc911d : Create torch/lib directory before copying _C.lib on Windows environment. (#18666)
8276d82f78 : Move flags that do not work on MSVC (#18686)
44b5891121 : Fix unused lambda capture warnings (#18662)
505d50ea90 : handle a rare case of histogram min is inf/nan (#18239)
6841537933 : Delete duplicated technical content from contribution_guide.rst (#18628)
35bc83524d : Provide flake8 install instructions. (#18629)
19fe2b9db4 : Adding quantized tensor shape/type info support for caffe2=>glow in caffe2 side (#18621)
3c70326cf4 : Fix test on windows (#18667)
9c87543124 : Enforce check ad in test_jit (#18509)
828a6a3b39 : Use proper isnan check
cb39bd9c2f : pad_circular -> _pad_circular (#18608)
a45b79d23f : Fix wrap(at::Scalar) (#18632)
0f6bf09db5 : Deprecated type() -> scalar_type()
173f224570 : Turn on F401: Unused import warning. (#18598)
96456bfa4c : Update documentation for CTCLoss (#18415)
2a58fd9844 : Fallback kernels (#18443)
f4e87e193a : Introduce lambda-based kernel API (#18541)
24752eb7b8 : Report better errors when kernel or dispatch key are missing (#18302)
48e7f98917 : Move stuff to cpp files (#18301)
14c28fabd2 : Check kernel against function schema in c10 op registration (#18256)
c4bb09cc42 : Add functor- and function-based kernel registration API (#18162)
9abc8a5b47 : New operator registration MVP (#18161)
6095814229 : Fix trt installation in CI (#18609)
24db1667da : Attribute serialization improvements (#18188)
e13101e069 : support pre-convert filter format for mkldnn training mode and change 'OptimizeForIdeep' to 'OptimizeForMkldnn' (#15171)
d73c830e23 : Tensor construction codemod(raw_mutable_data) (#16373)
7b0ef31780 : Add hash() global (#18258)
a5ddecd00c : Move fuser to test_jit_fuser (#18590)
85f36014e2 : Experimental logging/counters API (#18235)
e2fd1d966f : Revert D14668859: [pytorch][PR] Re-land Parsing file check
47e2772320 : Update argument names of torch::autograd::FunctionPostHook (#18140)
81b73951f1 : note on updating existing source (#18409)
393731ab24 : Re-land Parsing file check (#18570)
1240327c5c : Refactoring serialization of ONNX initializers to be name-based (Resubmission) (#17830)
fca9d9a100 : Initial implementation of InsertObserverNodes pass. (#18152)
84f020fe09 : Fix bug in tensor feed which caused crash due to wrong tensor type (#18552)
e3b1758f19 : Upgrade mkldnn-bridge to revert tensor capacity patch and prepare for DNNLOWP support (#18471)
f4e35d30ed : register BoxWithNMSLimit with C10
d895d30876 : Fix c10d build without nccl.
6ebfbdf4c6 : Add named submodule support to nn::Sequential (#17552)
e73be58ff7 : Rename `btriunpack` to `lu_unpack` (#18529)
be2ac6828c : fix lint (#18623)
62d8c8cf0a : Manual hipify caffe2/distributed and rocm update (no hcc modules support) (#18088)
7c438c82eb : Change dnnlowp log level from warning to v2 (#18576)
c0a2452ffe : multiline KeyError msg python bug workaround (#18557)
95d3825e48 : ReduceLrOnPlateau: best=current -> best=copy(current) (#16364) (#16697)
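The `ReduceLrOnPlateau` fix above (`best=current` → `best=copy(current)`) is a classic aliasing bug: binding `best = current` aliases the object, so a later in-place update to `current` silently changes `best` too. A minimal pure-Python illustration of the failure mode and the fix:

```python
# Tracking the best metric seen so far (illustrative values).
current = [0.5]
best = current            # alias, not a snapshot
current[0] = 0.9          # in-place update...
assert best[0] == 0.9     # ...corrupts `best` through the alias

best = list(current)      # copying takes a real snapshot
current[0] = 0.1
assert best[0] == 0.9     # `best` is now unaffected
```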
cf444f3544 : make InstanceNorm1d raise an error if the input is 2D (#11992)
c189eba3e1 : Fixed torch.arange docs (#18604)
e22a2b9015 : Minor fixes in fastrnns benchmarks
d859031ebf : Rename `btrifact*` to `lu` (#18435)
c21e763cd6 : Optimize relu op on GPU (#18506)
a5a1c9a171 : Automatic update of fbcode/onnx to fb1a80692c1ab0bd27b1072f2e7bffacba336777 (#18585)
0fd1ee3145 : Automatic update of fbcode/foxi to 81e1683d6348eee4b5ed1145222dc2c41be4269c (#18596)
ff4b6d1a49 : Delete batch tensor (#18575)
b1a0233ee4 : Update NNPACK to current master (#18580)
1c3428af31 : Enhance build_ios.sh to be consistent with build_android.sh (#18564)
3752916132 : Serialization supports pathlib.Path object for the input argument (#18562)
12abc8a99a : Target and input sizes mismatch warning in L1 Loss / L1 Smooth Loss (#18565)
1989716ae5 : Resubmit PR-18512: Improved onnx export for 3 onnx ops (#18571)
2f174e9453 : in caching allocator, ignore and clear the error if not ready
600eeecbf4 : Add external callbacks into RecordFunction (#17844)
11ac0cf276 : Implement rotated generate_proposals_op without opencv dependency (CPU version)
1ae2c1950c : Use SetOutputTensor instead of copying outputs manually (#17770)
aea8ee1f68 : Fix NCCL/Gloo process groups and DDP stream sync bug (#18465)
9eb0f435d9 : Inference LSTM integration test (#18559)
aa20591baa : Add Slot type to abstract the raw pointers being used for slots. (#18226)
77280b11e3 : Revert D14635130: Improved onnx export for 3 onnx ops.
eee760dbd3 : Improved onnx export for 3 onnx ops. (#18512)
ffc7158bf2 : Revert D14652372: [pytorch][PR] Add parsing to file check
b0d9712938 : C++17.h: forward -> c10::guts::forward (#18492)
9696f06bcf : Use __ldg for CUDA kernels in fuser (#18540)
8635078d9e : Adds Cyclical Learning Rate and Momentum (#18001)
54abfda124 : Completely synchronize behavior of Facebook flake8 and public flake8. (#18538)
8faf0112f3 : add slow tests annotation to some jit tests (#18545)
0daafe0209 : Add parsing to file check (#18304)
ad1ebf7082 : bug fix for node with writers in create autodiff subgraph (#18491)
d74b11ce0e : add extra info for the auto gen sum ops
58f3712ceb : Clarify error text of the pin_memory function
6684ef3f23 : Move fast rnn benchmark to pytorch/pytorch
e4f1681c82 : Rename isTensor api -> isCompleteTensor (#18437)
1eee2090d4 : Const trace error v2 (#18535)
fdedc62c26 : enable more unit tests (#18537)
c3e3c5cc39 : Skip tests if C2/ONNX models cannot be read (#18494)
30da6c7d06 : Add qtensors in caffe2 protobuf argument (#18486)
defe67caf2 : Generate sphinx docs with secure content. (#18508)
8c3285bf11 : Fix loss functions doc (#18420)
81e030d9a6 : Upgrade flake8-bugbear to master, fix the new lints. (#18507)
85d78a0532 : Add export annotations for functions in c10 (#18464)
a3933b87c6 : Back out "Revert D14613517: [pytorch][PR] Updating onnxtrt submodule to master branch" (#18514)
eae7ad4ca8 : Automatic update of fbcode/onnx to b29e78a4efb8e5d8995f576bbf19a846807829b6 (#18503)
f3ddc40ca4 : Move weight offload inside backend construction functor (#18385)
60538c8366 : fix #16448 (#18479)
f447b63ed0 : Add section about .code to docs
45ec4920e3 : how to use the `ccache` package on Ubuntu (#18495)
d5861aa55c : Append c10 libs to TorchConfig.cmake (#18418)
2ba41c5550 : Add some missing docs for tensor methods and attributes, new unittest to enforce tensors.rst no longer miss anything (#16057)
66e8c74814 : Revert D14613517: [pytorch][PR] Updating onnxtrt submodule to master branch
e912632b74 : Fix direct comparison of OperatorDef proto structs (#18466)
66628f78b7 : Revert D14605905: [pytorch][PR] Add return_counts to torch.unique
bdd098c694 : Fix typo in Github links in elementwise_ops_schema.cc (#18018)
5292685d2f : Improve numerical precision of (s)logdet (#18449)
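The `(s)logdet` entry above concerns the sign/log-determinant decomposition: returning `(sign, log|det|)` lets callers work with determinants whose raw value would overflow or underflow. A toy 2x2 sketch of the interface (the helper name is hypothetical; real implementations factor the matrix rather than forming the determinant directly):

```python
import math

def slogdet_2x2(a, b, c, d):
    # Determinant of [[a, b], [c, d]], returned as (sign, log|det|).
    det = a * d - b * c
    if det == 0:
        return 0.0, float("-inf")
    return math.copysign(1.0, det), math.log(abs(det))

sign, logdet = slogdet_2x2(2.0, 0.0, 0.0, 3.0)
assert sign == 1.0 and abs(logdet - math.log(6.0)) < 1e-12
sign, logdet = slogdet_2x2(0.0, 1.0, 1.0, 0.0)
assert sign == -1.0 and logdet == 0.0   # det = -1
```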
436723122e : fix arange shape issue inconsistency across cpu and cuda (#18462)
bbe110f4e1 : Updating onnxtrt submodule to master branch
654e59fcac : Minor fix for onnx ConstantOfShape export (#18199)
5bff395a82 : Namedtuple return for solve, slogdet, sort, topk (#17093)
c6bfcb854b : Expose c10 operators to caffe2 by operator name (#18160)
3bbe204f32 : Test running a CUDA build on CPU machine. (#18242)
0aeaeffb6c : Properly use cudaGetLastError return code. (#18485)
265fa0ce4d : Move math::Axpy function to elementwise lib (#18316)
6f3186a578 : Upgrade mkldnn to version 0.18.1 (#18463)
fa0bfa03ed : Add Google tag (#17690)
20159c3ffe : remove redundant --install_dir parameter in GEN_COMMAND (#18473)
1a742075ee : Resolving comments from Bool Tensor for CPU PR (#18165)
515238e0a5 : Unify cudaGetDeviceCount implementations. (#18445)
cf094d4edc : Use TensorIterator for unary operations
5e462a3ed6 : Introduce SobolEngine (#10505)
9080942afb : fix str of autogradzero
dc6b5b2a52 : Optimize boolean expressions & unwraps (#18259)
a729630cbf : Fix python resolution in caffe2 CI scripts
bf2a30cb22 : Support dim=None for argmax and argmin (#18264)
e2730ddb21 : Add return_counts to torch.unique (#18391)
ba4de667fa : change dropout lowering in symbolic_script (#18375)
a40e0a7f2d : Add torch.version.git_version (#18299)
674c274d92 : Change deprecated IntList to IntArrayRef
d1e416ac73 : Enable printing to stderr for test_proper_exit for better debugging (#18458)
e1c272797b : Don't require pygraphviz for regenerate.sh (#17485)
13b95eac55 : Add quant-passes stubs. (#18151)
6a1a019c0a : caffe2 - support flaky operator tests for caffe2 build (#18155)
7a90bae416 : Remove unused th_scalar_type (#18390)
56c16fe26f : Porting CPU UpSample functions to ATen (#18020)
ed8c462dc7 : Fix caffe2 build with BLAS=OpenBLAS (#18422)
6c9b312fd4 : Add addcmul, lerp to fuser, enable scalar->float specialization in symbolic script (#18081)
50df3e5e2e : Add ability to query if built with CUDA and MKL-DNN. (#18362)
17fcdfb925 : Updating submodules
5653a914f7 : Implement reference counting for shared IPC CUDA tensors (#16854)
f5ea528687 : Don't segfault on trying to get data_ptr of sparse tensor. (#18347)
647154f82a : Assert tensor isn't sparse in enforce_invariants. (#18338)
a4f83fff2b : Only look for Caffe2 package when shared (#18421)
c297f26843 : Add more options to the quantization model exporter (#18383)
9e176fe5fe : Revert "Specialize optional tensor inputs to graphs in the JIT (#18360)" (#18411)
5acac411e4 : Fix deprecated: type() -> scalar_type() (#18406)
6c029c80f7 : Fix deprecated: type() -> scalar_type()
8bc5b86709 : Added tensor size warning to F.mse_loss() (#18349)
ca962f0f95 : Fix For Requires Grad Infinite Loop (#18361)
92c9fef860 : update magma instructions (#18410)
1323c193ed : Removed some dead code (#18201)
7cc7ed1322 : Specialize optional tensor inputs to graphs in the JIT (#18360)
32d0e7e339 : Move pyobj_ to TensorImpl (#18225)
5860fa5dcf : Fix deprecated scalar type in ATen/native/Distributions.cpp
d9960fbdb2 : Revert D14446895: [C2] Implement rotated generate_proposals_op without opencv dependency (~2x faster)
d85451c07b : Revert D14584266: [pytorch][PR] Better error message for tensor with grad as constant in tracing
7c2290e7ce : Better error when module attr is used (#18164)
7be05b822c : Fix incorrect sparse add behavior when the sparse tensor has non-contiguous values (#18179)
6052d04100 : Implement rotated generate_proposals_op without opencv dependency (1.8x faster) (#18010)
0d78126a6f : Remove empty file (actual file_check.cpp resides in torch/csrc/jit/testing) (#18303)
ff3ecfec89 : Turn script_type_parser into a class (#18211)
10751d5fb4 : python interop for script classes (#18148)
3badea6eb3 : Better error message for tensor with grad as constant in tracing (#18298)
2ad2b2c7b1 : Support for basic list comprehensions (#17267)
e20894fce5 : Make it possible to trigger XLA/slow tests via commit message. (#18345)
f68faa35c0 : Avoid refcount when looking up dispatch key
e5eb871419 : Fix DCHECK to handle dangling else (#18295)
ed47b85d3b : Allow fusion of float function arguments (#18087)
2aac18098d : Fix error reporting in NVRTC use of the fuser (#18327)
fe5d23cf4a : Using sqrt for better precision in cosine_similarity (#18250)
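The cosine-similarity precision fix above hinges on where the square root is taken: `sqrt(xx) * sqrt(yy)` stays finite in cases where the product `xx * yy` overflows before the root. A stdlib-only sketch of the idea:

```python
import math

def cosine_similarity(x, y, eps=1e-8):
    dot = sum(a * b for a, b in zip(x, y))
    # Root each squared norm separately: xx * yy can overflow to inf
    # even when both xx and yy are representable floats.
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / max(nx * ny, eps)

assert abs(cosine_similarity([1.0, 0.0], [1.0, 0.0]) - 1.0) < 1e-12
assert abs(cosine_similarity([1.0, 0.0], [0.0, 1.0])) < 1e-12
# Large magnitudes: 1e300 * 1e300 would overflow, sqrt-first does not.
assert abs(cosine_similarity([1e150], [1e150]) - 1.0) < 1e-6
```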
18a6781f57 : Fix alignment issues for Fake BFP16 fp32 -> bfp16 rounding routines (#18321)
6e0cbc7f31 : Untangle internal build python and cpp dependencies
d4c52158c7 : Caffe2: crash op (#18207)
172ec4ace5 : caffe2 - Util to cleanup external inputs and outputs from a NetDef (#18194)
7397eb7e8e : End to end hack to call server side Caffe2 ops (#18267)
f6df6aed89 : Optimize MomentumSGDUpdate maximum block size and make it templated
e3da16a99e : Add test for #17271 (torch.exp incorrect for 2**31 size tensor) (#18292)
2934153f35 : Correctly call superclass setUp in TestCase subclasses. (#18291)
46990c20fa : Verify def before infer tensor (#18129)
77a7285764 : add more Python interface functions to make quantization simpler (#18246)
f3cf6ed789 : add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#18257)
afc7574aed : Automatic update of fbcode/onnx to c05f2ae412daf8fd64136ca354b97ccf73e0ea6c (#18285)
f79eac2c7a : Cleanup TorchScript rst docs (#18234)
46439c78d0 : Replace the remaining usages of IntList in caffe2 to IntArrayRef
979db03722 : Blacklist certain op types when doing bound shape inference (#18290)
104773c715 : Fix use of c10::guts::apply (#18159)
8b94de06af : Allow using C10_DECLARE_TENSOR_TYPE and C10_DEFINE_TENSOR_TYPE from any namespace (#18158)
daa77c6e26 : Move schema inference to c10 (#18090)
1877087df2 : Allow registering same operator schema multiple times (#18038)
291746f110 : Rename trtrs to triangular_solve (#18213)
1c671c56c1 : Fix contribution_guide docs (#18237)
cf19ad2152 : Updating submodules
43a5c636e2 : Optimize group_norm_op (#17945)
9214852da2 : Enable running of slow tests in CI. (#18236)
a7d886b9a0 : Run clang-format on torch/csrc/distributed/c10d
99ddcb2c9f : Shut up compiler about unused the_type. (#18278)
549c4da917 : Add a decorator for marking slow tests. (#18231)
3eff333bff : lint changes
8356ffa922 : move median to ATen (#17637)
d1497debf2 : Fix B903 lint: save memory for data classes with slots/namedtuple (#18184)
ba81074c40 : Fix B902 lint error: invalid first argument. (#18181)
0654c7d4a7 : Fix B006 lint errors: using mutable structure in default argument. (#18178)
0122121176 : Two amendments for the shape analysis (#18271)
9bc8badbcb : Fix lstrip bug revealed by B005 lint (#18177)
e5cdd94324 : Backward function for torch.cdist
2016daaf51 : Fix ONNX symbolic for argmin and argmax (#18261)
e04c9195b7 : Update math::Transpose to support tensor with size > 2G (#17670)
bbbabda4e8 : handle dst_bin_width==0 case properly (#18240)
e12091d0a3 : Revert D14114134: [asr] add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta
7e6220393f : Cleanup arg{min, max} (#17103)
ebc9f75895 : Added the exception of ignore_index (#18117)
fd35814348 : Add .get() for dicts (#18238)
d5328a8a30 : Update nccl submodule to 2.4.2 (#17883)
83d84c22e4 : Reinstate ncclCommDestroy (#17943)
272a48f6fe : Enable autograd to recognize the XLA backend as one providing multiple devices (#17847)
1b71f6d4eb : add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#17905)
1442808fcd : fixed typo in shape_analysis.cpp (#18227)
18b31b73fb : Retain the parameter names in ONNX exporter (#17551)
abc171bd53 : Fix typo in docstring
a519217ee7 : Add batched version of trtrs (#18025)
e312801453 : Remove GLOO usage when USE_GLOO is OFF
2a6cbfaccf : Enable 32 bit CPU build on Windows
19c13eee39 : Correct cmake flags passing (#18217)
bd1271338a : Add python_variable._is_view for debugging. (#18197)
4741d613ee : Do not apply these explicit unroll pragmas for ROCm. (#18204)
8f1db1c6c1 : Copy-edit CONTRIBUTING and update. (#18131)
8895bfba6a : fix cosine_similarity (#18168)
3baf99bea7 : Breakup test misc pt2 (#18191)
9d455ac2fe : Add serialization docs to jit/README (#17951)
08aa973fb8 : Turn on Travis builds for ghstack PRs. (#18193)
cd6a6c54c6 : do not throw when unicode is seen in pull request info (#18195)
6758f5587f : Delete bugbear from Python 2 lint. (#18192)
1bc4eb93c7 : Support attributes when emitting function calls (#18156)
f212fd9fd6 : Customized pin_memory for PackedSequence (#18079)
916a670828 : Enable flake8-bugbear line length checking. (#18138)
794c631e23 : fix bug in alias analysis (#18146)
234bb8719a : Add backend checks to solve methods (gesv, cholesky_solve) (#18116)
7bb36ada1f : fix -Wsign-compare warnings for some files inside c2 (#18123)
1c76746f61 : SGD: remove unneeded multiply-add initialization operations (#18114)
a50ba7e238 : specialized CUDA impl for dropout in AD (#17756)
9a153412fd : Fix underflow issue with dirichlet sample (#17488)
84fe20600d : Kill Backend constructor of TensorOptions. (#18137)
3a85f88efd : Remove deviceTypeToBackend, which is underspecified. (#18135)
190c36bbc2 : Stop generating unimplemented type methods. (#18144)
8ed2b88bf1 : Corrected type of 'swap' in torch.nn.TripletMarginLoss (#18115)
542c273e5b : handle scenario when GPU support is not available and p2p_access_pattern is empty (#17974)
195cba500f : Fix Caffe2 operator schemas (#15462) (#13229) (#18109)
afb2f2424a : Increase line-width of Declarations.yaml (#18050)
86f1dd3fb0 : Updating submodules
2737d2c7dc : delete unnecessary file .gitkeep (#18136)
3d44305e9d : Attribute serialization (#17423)
87b6cbb6fd : fix bug in pool_dnnlowp_op_avx2.cc (#18141)
0a8efce51e : Updating submodules
421b508d55 : Rename gesv to solve (#18060)
0eb4f7aa71 : Modify BeamSearch to support CharSourceEncoder (#18107)
670f509984 : Circular Convolution Function via circular padding (#17240)
2b7a5d1876 : don't include /usr/include when nvcc is in /usr/bin (#18127)
ed36fd30c8 : fix double free in test_jit (#18121)
754bf595ca : Replace resize_dim with set_sizes_and_strides in THTensor_(squeeze) (#18059)
2e311d2003 : update exp. family doc (#18118)
fe22871b49 : Change one_hot from IndexTensor to Tensor. (#18073)
3c2fccc1b4 : properly device_guard IndexTensor and BoolTensor. (#18072)
f9ad125e39 : fix corner case for optional aliasing (#18093)
96fe2b4ecb : Typo fix (#18089)
da3cc6e7ee : Caffe2 - Add flag to fails if float point exceptions is detected in operator runs (#18040)
0fe6e8c870 : Remove ComputeLibrary submodule
c7448aa13c : remove unused parameters in optimizer tests (#18084)
be364ac8d7 : Specify overload name in function schema (#18037)
7a3488e0fc : Expose c10 cuda ops to caffe2 (#18036)
cb2ea17707 : Automatic update of fbcode/foxi to 2bcc4064c90e87b9638615c733485f07c47b7558 (#18070)
d1843d4173 : Add backwards compatibility and other fixes to Dispatch macros. (#17996)
f3806094d5 : Breakup Test Misc (batch 1/2) (#18071)
aafbefa4d6 : Remove the identical if branch (#18019)
80a7eac79e : Remove Type::elementSizeInBytes
9a8a268672 : add index and count to list (#17446)
001cffed9d : ONNX Export IsNan op
18f721fb9a : support serialization of classes (#17856)
cd26200d1b : add reverse to list (#17001)
b420f8ff70 : 1/2 Add Tracing support for C2 Ops (#17899)
3b5ddaf034 : Delete dead code in THTensorMoreMath.cpp (#17993)
3c977fb7ce : Error out on in-place (unary) ops on tensors that have internal overlap (#17927)
a4123decf7 : Implement at::has_internal_overlap helper function (#17926)
ea652973f2 : Fix truncation of default float values in JIT signatures. (#18044)
40074d647c : Allow None for checkpoint (#17969)
54ef852d7f : Fix unclosed files in download.py, test_onnxifi.py, test_trt.py (#18017)
785c76584c : Run multi-gpu (single host) resnet50 and resnext101 training in bench (#18043)
8f07a9da30 : Update nonzero onnx export (#18047)
e21aa16931 : more careful use of auto in sparse operations (#17958)
30b80de876 : Update caffe2 docker images tag to 253 (#18031)
8362177bcf : Fix typo (#17949)
1ba1ca0acb : Update to ROCm2.2 (#18007)
8b32933ea1 : fix clang-tidy (#18030)
e782f200f7 : Allow fewer arguments than the max in ArgumentSpec (#17826)
9de4350b77 : Automatic update of fbcode/foxi to d1f45b1a2b1585d0e9bc65e15e463db344fc3ff6 (#18028)
d3e3b246ea : Use std::isnan instead of self-comparison. (#18021)
b263a2d8a1 : Unroll If ops when doing ONNXIFI transform (#18039)
77d6d9e1b8 : Minor improvements to ONNXIFI transform (#17964)
3057580c89 : Run fp16 resnext101 training in bench script (#17963)
6458a6f0fc : Tensor Iterator loop unrolling (#17667)
9506779a73 : Temp fix for TileOp backward compatibility (#18026)
e862243abe : add a dump method to TreeViews (#17965)
5cbc1981f3 : JIT IR - Make valueMapPtr optional in convertNetDefToIR (#17942)
53fb9a462a : register RoIAlign with C10
10d64a1372 : add tanh to AD and fix layernorm schema
9af6564060 : Add magma debug version for Windows
bba906c2cb : Simplify env creation when running Windows tests (#17916)
84c30398c7 : Fix lint in test_multiprocessing.
73c5921134 : Remove ArgcountSortPlugin, which doesn't seem to be used.
3fe7bdb2ff : Fix lint in test_nn.py (#18006)
a41b6d7d1f : Simplify macros for exposing c10 ops to c2 (#17781)
25d06eef7b : Improve caffe2 operator wrapping (#17743)
6def5b69e3 : Remove unused KwargsPlugin.
40a3e14ade : Disable btri tests on Windows if MAGMA is not found (#17989)
16e50c78e7 : Report convolution size mismatch (#17436)
8bd9465b79 : make momentum non negative in adagrad test (#18009)
f827f1052a : Fix the CI
6df7116273 : Fix missing return in HIPStreamMasqueradingAsCUDA::operator<< (#17961)
c5b50a3440 : Remove AssertNDim, which doesn't seem to be used.
42acae5406 : Remove unused BoolOption.
1e42720a77 : Fix some typos in distributed.py.
1c3494daf0 : Fix Windows test CI
9089182ce4 : Fix lint in test_utils.py (#17944)
26a4c2ada6 : Speed up gemm by reordering the for loops (#17730)
ecc5e623a2 : fix punctuation
13bc002422 : fixes for AVX detection (#17915)
7e34bd230b : Disable FBGEMM when building under x86 32bit (#17922)
f6de833cac : Update docs for `mark_non_differentiable` method (#17891)
1e7f027f5b : Simplify OpKernel (#17925)
8714b8bb89 : Mark DispatchTable move ctor and move assignment operator as deleted (#17948)
4dcb4b1601 : Add more hint in the JIT tracer (#17957)
c8f9072ab6 : Fix half-float conversion ops to handle tensors larger than 2B of params (#17952)
8bc3b66be9 : Override the resolve_library_path in FBCode (#17497)
b21e9e4dae : update ccache guide (#17938)
9a946c4072 : unify cpp tests (#17947)
4f939dded1 : Updating submodules
66556f48e3 : Remove sinkMaxPool transformation (#17694)
f7b70a69e5 : Fix Windows build (#17917)
92e35ac0a7 : fix overly restrictive assertion (#17939)
e34abe03a8 : Enable threadpool threads to greedily acquire new tasks if available. (#17808)
552f903c63 : JIT IR - Add option to remove prefix string when converting from JIT IR to NetDef (#17931)
4ad17c9031 : Misleading documentation for module._load_from_state_dict (#17618)
6248266d91 : Enable detectron on AMD GPU
1cfb50334f : Removed dead code from THTensorMath.h (#17873)
6466ddbd86 : Fix lint in test_torch.py (#17807)
40ecdc57ff : Updating submodules
ee87254720 : Eliminate the use of Type. (#17804)
0f7e6f293b : Make Variable::set_data non-const; cosmetic fixes.
3e00f79a1e : remove warning for upsample code (#17921)
f229521154 : Optimize TileOp (#17290)
54b33503ec : Optimize channel_stats_op (#16243)
99f1465c35 : enable shape inference for elementwise operators (#17885)
4d2f6f1bbe : Remove remaining test jit expects redux (#17924)
abab9c1d78 : Handle Scalars Better (#17875)
fd04073e61 : Fixed a formatting issue in doc comments (#17505)
18949c8e00 : Add nbytes, itemsize, element_size to at::Tensor. (#17810)
dc4cbd9565 : Fix lint in test_distributions.py
030fec9703 : Fix lint in test_jit.py
4073e3c2f2 : Fix lint errors in test_autograd
f3a860ba07 : Added a few extra python bindings to help with walking the IR graph from Python (#17822)
aba9051a65 : kthvalue consistency with sort in the presence of NaN (#17824)
9ecee93a16 : Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892)
02c48cced9 : Remove (almost all) TensorOptions from native_functions.yaml (#17385)
12d6725c15 : Restore full Windows tests (#17102)
525fef708d : Prevent VS2017 from emitting ambiguous symbol errors (second time)
af2e347164 : Fix windows test hang (#17778)
f268370b42 : torch.btrifact for tensors with greater than 3 dimensions (#14964)
b161ac9634 : Small clean up of aten_op
496a3339dc : add support for parsing class defs to the string frontend (#17628)
64bb86d946 : add test for out of order methods (#17624)
f9820e55af : initializing class value (#17585)
2e753fc753 : Remove unused parameter in ProcessGroupGloo (#17718)
f540536dfd : Revert D14414435: [pytorch][PR] Remove remaining IR Expect files
fd67f6b463 : Remove remaining IR Expect files (#17886)
4aa22833cf : Bool tensor creation (cpu) (#17376)
b5fa5a5603 : Remove device guard from TypeDefault::copy()
066d15840f : re-enable torch.split tests (#17859)
d391137acd : Fix lint in test_dataloader.py
fa29c179b7 : Optimize fused_dropout_kernel launch bounds for AMD hardware
3f1d0ee5d5 : Deprecate torch.pstrf (#17866)
11c89dde55 : Allow structseq to be input of operators where tuple is expected (#17208)
b9e8f56daa : Add PyTorch Governance, Contributor Guide, and List of Persons of Interest
abd39d5a88 : Fix compilation error (#17860)
b3c9090736 : Revert D14392864: Fix lint in test_dataloader.py
817fd9ebf1 : Removed dead code from THTensorMath.h (#17769)
b57fe3cc66 : Introducing array-like sequence methods __contains__ (#17733)
906f9efc57 : Revert "Add check for x64 Python before setup (#17707)" (#17864)
8045b3eb14 : Registering of kl-divergence for independent distribution (#17681)
c02369151d : Fix lint in test_dataloader.py
1d827b7271 : Further improvements of nn.container docs
b6313d74e1 : fix faq typo
6bcff88d3e : Fix log_softmax and softmax if any dimension is 0-d (#17651)
75f88d4da6 : Correct loss docstrings (#17300)
98c54e9fa6 : When openblas exists, "OpenBLAS_FOUND" is defined, rather than "OPENBLAS_FOUND". (#17841)
a6c4ea66dd : Passing indices as a list to Subset instead of Tensor (#17649)
81e025d9ac : Clarify JIT docs
24e7b824e0 : Add metadata for torch jit TracedModules. (#17640)
320c6977c2 : Fix PySlice_Unpack not available on PyPy 3.6 yet (#17836)
742568e7eb : PyPy compatibility: let unmodified slots be inherited in the standard way (#17837)
17232fb842 : Run fp16 resnet50 training in bench script (#17831)
c10c73f047 : Int8 FC performance debugging (#17700)
0fd1dc45c0 : Optimize LayerNormOp (#17604)
65b00aa597 : Remove some simple use cases of Type::ScalarType()
3aeb78079b : Change Dispatch.h to use ScalarType over Type
cc07f968f8 : Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initializers to be name-based
1d26a3ae7e : Open registration for c10 thread pool (#17788)
0955592243 : Cast nn.Upsample.scale_factor to a float (#17732)
4bea15f580 : Fix lint in run_test.py
dbb5d02a45 : Fix lint in test/common_utils.py
7aae51cded : Replace tensor.type().scalarType() calls with tensor.scalar_type()
efed875b3f : Catch exceptions in bound_shape_inference (#17775)
4a7c549e8f : refactor caffe2 operator constructors - 11/9 (#17722)
55cf9c742a : Suppress C408 lint (don't use dict constructor) (#17813)
11f50e73e3 : Add matches_jit_signature to recent native functions
fe90ee9dc8 : Add /MD to prevent linking errors on Windows
a60fadfb71 : Change message on unknown db type to be friendly (#17795)
667763a63a : Trace rnn max_batch_size (#17727)
7f7d12854d : Remove legacy way of exposing caffe2 operators to PyTorch (#17742)
b132f0f1e7 : Remove 'Tensor' key from ATen codegen. (#17782)
5ffd7dbbb4 : Remove ProcessorSpecificPlugin. (#17789)
33c83a3f35 : Remove THPPlugin. (#17790)
424a03186a : Replace tens with hundreds.
6aacc1b2dd : Support failback for more operators in ideep (#17747)
256923523a : Cleanup include files in jit/passes/common_subexpression_elimination.h.
b290a16b2d : Use return names in JIT operators
ac87488bd3 : Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#17764)
cc7aec12fd : Clean up some old ScalarType stuff
549eb9e9bc : add reference to flake8-mypy in contributing.md
9d70e199f4 : Move lerp to ATen, add functionality for tensor weights (#17348)
6227afb305 : Refactor dispatcher (#17753)
aa57f17808 : add layernorm to AD
5bf9e41938 : move half<->float conversions to oss operators (#17548)
aa4c4c47fa : Fix the update ONNX expect files (#17767)
7bcc2301ee : Cleanup testFusion/testOne: there are unused arguments.
4480aa31c2 : Automatic update of fbcode/onnx to 96c58ceeacf0f2b73d752e413e4fd78787a12da3 (#17676)
1043ff6d68 : Set the default ONNX opset to the latest stable opset (i.e., 9) (#17736)
a2381fa346 : Add module attributes (#17309)
e4c9d75008 : - refactoring serialization of ONNX initializers to be name-based (#17420)
3f94fc4862 : ONNX Export for Max and Average Pooling in CEIL_MODE
561037aef8 : use flake8-mypy (#17721)
1d522598fb : use fp16<->fp32 intrinsic (#17496)
f8778aef78 : Implement a Caffe2 standalone LSTM operator (#17726)
7d02a1fbc7 : caffe2:libtorch_cuda depends on caffe2:caffe2_gpu (#17729)
39423fbdd4 : add tensor and cost inference functions (#17684)
3dba1285ab : ONNX Export Narrow op
3230404645 : Keep the dim_type of hinted shape as BATCH if possible (#17734)
8ec7357312 : fix different round behavior on CPU and GPU #16498 (#17443)
68c5c66800 : Warn about memory overlaps on expanded tensors (#17576)
93768785ec : fix exp fam. formula
f6fda4409b : refactor caffe2 operator constructors - 10/9 (#17659)
4db3f8f806 : Improve ONNX symbolic for logsoftmax and softmax (#17672)
c78da0c6ed : Enable using CMD when building cpp extensions on Windows
a87d475c2f : Do not rename net boundary inputs/outputs during ssaRewrite. (#17545)
9024faaafe : Reapply D14078519 (#17596)
bd7fcced69 : Batch of expect file removals (#17581)
39669316a6 : (#14267)
0ed1b9fb98 : Update ModuleDict doc about order
e2de88dc5a : Update CODEOWNERS (#17720)
073634612f : ONNX Export Argmin and Argmax ops
97eb139a94 : Turn atol to 1e-5 when comparing the end to end results (#17708)
7fa996f8e2 : remove loop expects (#17695)
b87abdfc12 : typo fix
e3516d0a95 : omit group conv NHWC test for GPU (#17715)
10ea02facf : fix tuple matching (#17687)
c658d9b21b : Temporarily disable Upsample operator tests in pytorch-onnx tests (#17696)
08fb9021da : Add check for x64 Python before setup (#17707)
1e6acc676f : Replace caffe2::DeviceGuard with c10::cuda::CUDAGuard (#17623)
e9eb18a18c : Remove nomscheduler (#17693)
886e482776 : index operation support for torch.HalfTensor (#17645)
507c93bad2 : Revert D14160172: Implement a Caffe2 standalone LSTM operator
39f94619ec : fix typo in hub doc
fefaebabba : fix dropout AD & rename range to rangelist (#17691)
36e0d39f50 : enable use of MIOpen for depthwise convolutions (#17685)
bfe7a58f69 : Implement a Caffe2 standalone LSTM operator (#17461)
a478d41620 : Fix nll_loss crash on cpu where ignore_index is out of bounds (#17328)
288e1fbd18 : Add '--hip-clang-launch' to favor <<<>>>-based launch. (#17686)
079093a662 : Improve caching allocator for Pascal and newer GPUs. (#17120)
8420a2025b : Turn the Half::from_bits into a constexpr function to avoid unresolve… (#17661)
7fc3aa8c49 : Remove Expect Files from python / tracing / script interop
b219882c0b : Enable apex on Windows
f176450d60 : bump docker build to upgrade magma to 2.5.0 (#17674)
54c4b5a4db : refactor caffe2 operator constructors - 1/9 (#17082)
910519e45b : Expose cuda kernel for caffe2::GenerateProposals
aea8dd8377 : print warnings when DNNLOWP_16 or DNNLOWP_ROWWISE_16 engine is used (#17176)
8569d9cbea : Fix XOutput/XOutputTensor for ivalue based c2 operators (#17599)
c7db0b35d8 : Fix InputSize/OutputSize for ivalue based operators (#17579)
173561ff12 : Fix clamp fusion on missing limits (#17533)
a87eeec9bf : int32 indexing for Tensor Iterator Reduction (#17428)
3257608276 : Removed all usages of TH_Index_Base (#17591)
dec116e96f : PyTorch/Caffe2 tensor interop in Python (#17190)
244d330980 : Fixed typo in aten/src/ATen/native_parse.py (#17641)
5b835682e3 : Remove GPU dependency from ProfileObserver (#17592)
6a297b8675 : Don't make factory methods create a tensor and then immediately copy it (#17565)
7a51c03a30 : Fixed typo in torch/functional.py w/r/t broadcast_tensors (#17642)
01977c0a89 : Change fake tqdm constructor to match real tqdm (#17636)
ef7ddcd29e : Mark native_functions as matched if uncaptured by JIT (#17631)
80927fc068 : Ban std::array from native_functions.yaml
416474a720 : Remove more usages of BoolTensor and IndexTensor from native_functions.yaml
c6715eda06 : Implement kthvalue in ATen (#17544)
43f94077d8 : Change vml.h to support sizes greater than 2**32 - 1
2336f0ba06 : msvc_fixes (#17201)
06c8aa7a3b : Hipify fixes for Masquerade logic (#17598)
ab95b5c6cc : Rename prim::Undefined to prim::AutogradZero (#17611)
5b6703629c : Add python test for extension backend tensor.device (#17602)
2ed99fee0d : Revert D13935403: Call c10 cuda op from test_torch
c2c32340a4 : add command line option to use hive filler; add README (#17619)
5360984fbd : Remove TH(CU)NN Sparse Linear (#17610)
19a6de328f : Correct docstring of vision/init functions
0a7b2af13b : Call c10 cuda op from test_torch
698f947463 : Revert #17191 and #17215 that no longer apply on Windows (#17567)
e6a9062335 : usertype -> class (#17528)
830ca665f5 : alias analysis refactor take 2 (#17594)
7fddd01c51 : Fix the missing Windows CPU job in the build status section (#17608)
81f2bdf9c2 : Update magma to 2.5.0 for Windows
a6170573c8 : Adding support for 0-d tensor for transpose (.t()) (#17535)
6899e901cc : Updating submodules
212024282b : Mark cudaGetLastError return value unused in C10_CUDA_CHECK
d3fcd0d798 : add dropout during eval (#17549)
3ed44b6714 : Adjust launch_bounds annotation for AMD hardware. (#17555)
6b07612cef : Fix verbose compiler warning in flat_hash_map (#17562)
35a52aa33f : Fix diagnostic pragmas (#17561)
2e94054e34 : Allow dispatch based on tensor list args (#17522)
b004b31d06 : Allow exposing caffe2 operators with variable number of input tensors to c10 (#17491)
1ccf74ae9d : blacklist fft algorithms for strided dgrad (#17016)
0f60283e84 : Revert D14078519: [codemod][caffe2] [clangr] refactor caffe2 operator constructors - 5/9
b36d9351b1 : Add generic list/dict custom op bindings (#17587)
7413f0926a : refactor caffe2 operator constructors - 8/9 (#17089)
28b5df1c8f : refactor caffe2 operator constructors - 6/9 (#17087)
a4ed7126ca : refactor caffe2 operator constructors - 2/9 (#17083)
8db403b9dc : refactor caffe2 operator constructors - 7/9 (#17088)
42512242cc : refactor caffe2 operator constructors - 4/9 (#17085)
b0d3165cc8 : refactor caffe2 operator constructors - 3/9 (#17084)
c9989dfe37 : Make HIPStream also masquerade as CUDA. (#17469)
e157a6432f : Fix Python device type property for XLA and MSNPU
c596683309 : Rely on numel() == 1 to check if distribution parameters are scalar. (#17503)
9cbd7a18f5 : fix reordering of inlines (#17557)
2e5a8cee82 : Customize the printing of namedtuple return (#17136)
1046593509 : Revert D14231251: [jit] alias_analysis refactor
44fb22f9fe : refactor caffe2 operator constructors - 5/9 (#17086)
54c5b10934 : alias_analysis refactor (#17511)
f9d3f1dca5 : allow "before" and "after" alias annotations (#17480)
e1df99295f : ONNXIFI extension & e2e tests. (#17478)
7fbee1f79e : update slack invite instructions
456d3e5f56 : Fix errors in the description for installation on Windows (#17475)
a9395ce259 : refactor caffe2 operator constructors - 9/9 (#17090)
9bcceb75b5 : Fix the false generated_comment (#17563)
13e6326c07 : Remove useless OpenCV reference
03132c1f56 : convolution/matmul/dropout (#17523)
221edddd18 : disallow shape analysis with resize ops (#17518)
6706e9af19 : Make C10_MOBILE consistent with how feature macros are usually used (#17481)
7c5ffc4120 : Disable c10 dispatcher on mobile (#17078)
1154506533 : Always synchronize src and dst streams when copying tensors (#16966)
5f06dcc4d7 : ONNX Export Adaptive Pooling
e47aeede32 : Use name for output variables instead of out in JIT (#17386)
1971c0528d : Forcing UTC on Mac circleci jobs (#17516)
9709d5e787 : Fix math::Set for large tensor (#17539)
b4572668b4 : Add sparse gradient option to `gather` operation (#17182)
a2b9f7f484 : add elastic zeus handler (#16746)
222a07863f : optimize elementwise sum (#17456)
8c72217817 : Enable boolean_mask, adadelta, adagrad fp16 on ROCm (#17235)
e0b44cac1f : Enabled HALF for fill() and zero() methods. Moved them into THTensorFill (#17536)
44a607b90c : Fix autograd with buffers requiring grad in DataParallel (#13352)
74098eadb0 : enable assymetric dilations and stride for miopen conv (#17472)
76828647c1 : Enable tests working on ROCm 2.1 dual gfx906
96faaa9d50 : Fix linking errors when building dataloader test binaries on Windows (#17494)
cbefd0323b : Fix typo
eff672ef06 : Remove Bool/IndexTensor from schema for native functions with derivatives (#17193)
348d1889ff : Fix operator initialization order (#15445)
4ca1a54526 : Make transpose consistent with numpy's behavior (#17462)
63519df07a : Bump up the ONNX default opset version to 10 (#17419)
72eb70c272 : ' ' ==> ' ' (#17498)
1607bb322d : Support all ROCm supported uarchs simultaneously: gfx803, gfx900, gfx906 (#17367)
5903522ad6 : refactor: a bit intricate so I refactor it (#16995)
e5b4baab40 : new batch of expect file removals
2cdbb140e6 : user defined types (#17314)
3ceeaae5e6 : add mutability to docs (#17454)
5d77f4f0d5 : Remove usages of int64_t from native_functions.yaml
2f840ba6d6 : upload alias tracker graph for docs (#17476)
68e90a398e : Temporarily disable select/topk/kthvalue AD (#17470)
411cf434af : Batch of expect file removals Remove dce expect files (#17471)
df0d4e6c7a : Back out part of "Fix NERPredictor for zero initialization"
e4e9b738d3 : Followup to #17049: change more instances of RuntimeError to IndexError
59ece70201 : Missing argument description (value) in scatter_ function documentation (#17467)
9e08c998db : Throw exception when foxi is not checked out (#17477)
6f53c51a01 : Updating submodules
79a5a73a1e : simplify aliasdb interface (#17453)
b0b7541ca4 : fix list type unification (#17424)
b527055fcf : Restore current streams on dst device after switching streams (#17439)
29c27d7b99 : Automatic update of fbcode/onnx to e18bb41d255a23daf368ffd62a2645db55db4c72 (#17460)
393c97fda7 : Fix variable checking in THCPModule_setRNGState (#17474)
724c7e76c6 : Fix reduction='none' in poisson_nll_loss (#17358)
f9ba3831ef : Apply modernize-use-override (4)
15a55b86ed : Fix nonzero for scalars on cuda, to_sparse for scalars on cpu/cuda. (#17406)
65ecef1509 : Export ElementwiseLinear to ONNX (Mul + Add). (#17411)
3d68a2d6de : Add foxi submodule (ONNXIFI facebook extension)
3de67cd63d : Fix remaining -Wreturn-std-move violations in fbcode (#17308)
4ac91b2d64 : add debug/release tip to cpp docs (#17452)
15840e30dc : add pointer to windows FAQ in contributing.md (#17450)
d76b9395a0 : Remove ROIPooling (#17434)
d80f0a1f3a : Add example to WeightedRandomSampler doc string (#17432)
96b765dcf6 : Revert D14095703: [pytorch][PR] [jit] Add generic list/dict custom op bindings
a1ca908ac2 : Updating submodules
bb3a2d99ac : Jaliyae/chunk buffer fix (#17409)
5ea6344c54 : Skip test_event_handle_multi_gpu() on a single GPU system (#17402)
be4ad3fe30 : fix(typo): Change 'integeral' to 'integer'
6a99f86429 : Fix the ONNX expect file (#17430)
674e11ccde : order caffe2 ubuntu configs contiguously (#17427)
c10662962c : remove redundant inference functions for FC (#17407)
08fed51926 : optimize max pool 2d (#17418)
340cf2a2dd : Generate derived extension backend Type classes for each scalar type (#17278)
47263e48f4 : Better handling of net errors in prof_dag counters (#17384)
d8d8371bd3 : Batch of Expect Files removal (#17414)
c65b0cbe3d : Fix target name.
4491577fb5 : jit technical docs - parts 1, 2, and most of 3 (#16887)
9e69703dac : USE_ --> BUILD_ for CAFFE2_OPS and TEST (#17390)
3ab2080047 : Fix install libcaffe2_protos.a issue mentioned in #14317 (#17393)
1d05d0d848 : Improve onnxifi backend init time (#17375)
e984244828 : fix code block typo
807632d402 : Double resnet50 batch size in benchmark script (#17416)
6d744f8fbf : Preserve names when converting to/from NetDef.
dbd66c17bc : Add generic list/dict custom op bindings (#17037)
93e8b938ff : fix test
9aae82bc2c : Improvements for current AD (#17187)
e422b27f17 : Bump up the producer version in ONNX exporter
b18c60839d : list add insert and remove (#17200)
9977f43d19 : Pin nightly builds to last commit before 5am UTC (#17381)
356a94b64e : Lazily load libcuda libnvrtc from c++ (#17317)
81b43202ae : Refactor Type Parser b/w Schemas & IRParser into a type common parser (#17383)
b0c18570ca : add the support for stable ONNX opsets in exporter (#16068)
dd3acbc6d5 : add readme and notice at the top of config.yml (#17323)
0c24f3754b : Revert D14181620: [caffe2/int8] optimize max pool 2d
60de0b885f : fallback operators to CPU for onnx support (#15270)
4778a4089e : optimize max pool 2d (#17391)
8a21c6a5ee : Fixed the script for the THC generated files (#17370)
2b86cc442c : Fix coalesce, clone, to_dense for sparse scalars.
3d5968d366 : Fix DataParallel(cpu_m).cuda() not working by checking at forward (#17363)
be6ad7ddde : Rename BatchNorm running_variance to running_var (#17371)
562fa55f3d : Updating submodules
260f66c316 : Fix concat dimension check bug (#17343)
1fea60be25 : Add dict to docs
2370c989d8 : Add LSTM to standard library (#15744)
ac00a0cd47 : Dict mutability (#16884)
3a47d56946 : Fix static linkage cases and NO_DISTRIBUTED=1 + CUDA (#16705) (#17337)
290b2a1d9d : Fix Insert Constant Lint Fail (#17316)
4c6da649e5 : Partial support for kwarg_only arguments in script (#17339)
5fa78303ed : fix double backward for half softmax/logsoftmax (#17330)
9101dfc57c : Revisit some native functions to increase number of jit matches (#17340)
46f15b74b7 : Add Value::isValidName method. (#17372)
6feded880e : Fix #17218 by updating documentation (#17258)
45251fb52e : fix lint (#17366)
3145f46a22 : switch to Operation in register_prim_ops.cpp (#17183)
b312e9de6a : Use standard docker image for XLA build
25730f15bb : Modernize test_sparse. (#17324)
c63af8837d : remove nn.Upsample deprecation warnings from tests (#17352)
3069c45069 : upgrade documentation in setup.py to NO_ -> USE_ (#17333)
5744d5213d : Enforce non-negativity of tensor construction (#17077)
94a95a0c7f : Fixing docstring in CTCLoss (#17307)
de81a2741f : Fix the slowness of mvn's log_prob (#17294)
722cbe3064 : Move argsort to C++
37890610b0 : Include vec256 headers in setup.py (#17220)
5106918656 : Enable MAX_JOBS for using Ninja on Windows
29f4f8f048 : Avoid unnecessary CPU-to-GPU copy of torch.load with CUDA (#17297)
2c302b6ea6 : allow lists to contain any tensor type (#17321)
d92ddcf7ca : Skip convnets benchmark in rocm CI (#17331)
b3b692a80a : Don't have malloc-free pairs that cross DLL boundaries. (#17302)
c063a33ef3 : Add support to build for multiple amd gpu targets (#17329)
501d346da8 : batched cleanups (#17288)
cb3d3d1115 : (Permanently) fix CI breakage due to new docker version. (#17338)
376bb40379 : Implementation convolutionTranspose operator for mkl-dnn (#12866)
c02e2ff0b0 : Support multi-device configuration for MKL-DNN (#12856)
90950f79c7 : fix missing std (#17263)
0edc81136c : Rethrow exceptions from RunAsync (#15034)
0337494c6a : Reinforce scheduling invariants (#17132)
3e44880d4d : Modify TileOp GPU implementation to expose more concurrency and better utilize GPU memory bandwidth (#17275)
9e4a993878 : Support str for native_functions.yaml schema
2e67b34ea7 : Separate gpu reduce functions (#17146)
474adf5458 : Minor doc updates in c10/core/Allocator.h (#17164)
b2dde4386a : Namedtuple return for symeig, eig, pstrf, qr, geqrf (#16950)
36ddad3bfe : Allow PyTorch to be built without NCCL (#17295)
e6cf3c886d : add foxi submodule (#17184)
54e4c4d7de : Removed obsolete argument correct_transform_coords in bbox_transform op. (#16723)
075c7b1fef : make the threshold for acurracy more precise (#17194)
db1d61a5c3 : Add rule based filtering for ONNXIFI transformation (#17198)
63214b572b : Updating submodules
260facfdea : caffe2 | added missing operator source file (#17272)
a91e056f2a : add list methods: copy,extend (#17092)
79f898263b : Improve error message w/ size inference on empty tensors
c3a23379ea : add install step and docs for Android build (#17298)
1b3315ec17 : improve libtorch install docs with GPU note (#17299)
237e5438f5 : Add launch bounds for TopK kernel, be more conservative in sorting (#17296)
b8d1f4a423 : ONNX Export Maxpool Indices
4f45bc73f7 : Revert D14144264: [pytorch][PR] [jit] clean up print from test
f8c9ec5e44 : Updating submodules
5be3ffbde2 : clean up print from test
428b666814 : Fix dll loading process in newer Python on Windows (#17191)
972fc5f191 : Fix dll loading issue for Caffe2 and Windows
2b57bdb7ab : Fix cuda softmax backward with empty input (#17259)
594a4d7b55 : at::native batch norm kernel launch config update (#17047)
6455d91e4d : False alarm about leak in TestNN.test_variable_sequence_cuda (#17242)
09c9af9451 : U/kostmo/gen circle conf (#17189)
f827f9f77a : update doc for multinomial (#17269)
d73e6cb59d : Automatic update of fbcode/onnx to 4c091e048ca42682d63ccd3c1811560bc12b732d (#17264)
c88798dbc1 : Make tril_ and triu_ actually in-place (#17031)
0fc03d155a : Fix remaining -Wreturn-std-move violations in fbcode
89df22e57b : Lightweight String check Utility (#16858)
82aa511146 : move prim::None to prim::Constant (again) (#17186)
9ebc433bda : Clarification of Lerp operation on tensors (#17253)
19117f6a0a : reenable rand_like fusion when there is no broadcast (#16087)
43d5cd4d34 : discrepancy in smoke_macos_libtorch_2.7_cpu job spec (#17224)
444039c47b : Bool tensor. Part 0: Boolean storage implementation (#16810)
e81878e0a9 : Correct padding and activations docstrings in nn module
f2f4030294 : Use move to avoid copying (#17188)
57617ee429 : Replace resize_dim() with set_sizes_and_strides() (#17127)
9477c143c6 : C++ Frontend: adding two distributed samples (Random and Sequential) (#16910)
8852e21245 : Correct recurrent/linear/dropout/sparse layers docstrings
fad9eda7fb : Optional arg fixes (#17222)
7e5442f900 : Reset grad attribute when called using del (#16525)
9a7bcacc27 : Logging stuffs (#17177)
3a01a45f06 : Implement IRParser. (#16987)
bf16a6bc3c : Skip onnx logsoftmax tests in rocm (#17170)
b6b99fd7d3 : Add namedtuple return for min, median, mode, kthvalue, add test for namedtuple return API (#16186)
b3d8c569d3 : Remove templates for GenericDict
20fd6dca77 : fix missing constant in adaptive_avg_pool2d AD (#17180)
6c06b32558 : Implement NetDef <--> JIT IR converters. Try 2. (#17123)
cde7204636 : change the epsilon for fp32/fp16 to uint8 to be the same (#17062)
91c1d728ac : Revert D14109636: [pytorch][PR] move prim::None to a case in prim::Constant
7caa21f5ca : move prim::None to a case in prim::Constant (#16160)
4fcab92d6c : Move outplace ops to ATen (#16788)
5737c5259c : Fix for 16939:multinomial performance regressed
7157be8622 : Add special ops for BatchNorm symbolic differentiation (#15403)
21696502ff : improve error msg when module list isn't added to __constants__ (#17167)
1cdcdd78af : Kaiming Initialization (#14718)
5eee0670ab : Pass torch.distributed launch process local rank as environment variable instead of argument (#16360)
ea405f8d01 : Assert cases exist for unschematized ops in alias analysis
8d33eb450e : Fix avg pool2d api (#17166)
eba1b23ddd : Fix syntax error in set instantiation (#17174)
6454e3262d : Make getting the dtype of a tensor work for backend extensions.
9b5d3f6f5e : Stop reassigning (output) reference arguments in BinaryOps. (#17059)
70ee257ad4 : Fix batch insert (#17158)
01686db21b : Generate CircleCI config.yml from a script (#17039)
82b269060c : Add support for simpler for-in-list + tests (#16726)
326c891d32 : Update pybind11 (#17143)
472cfc0f2c : Enforce module device at DataParallel construction time (#17129)
b892f69440 : one_hot docs missing (#17142)
38139bc356 : add pop support to list (#17015)
7481cc9d7c : Updating submodules
dad0dbd3b9 : merge fully_connected_rowwise_dnnlowp_op into fully_connected_dnnlowp_op (#17105)
90fc6133b2 : bug fix when we prepack weight and bias together (#17145)
fbd690c1fe : caffe2: fix PinnedCPUAllocator cudaHostRegister() leak (#16340)
07b5782ff7 : Add some missing docs to torch.rst, new unittest to enforce torch.rst no longer miss anything (#16039)
a771a6ba67 : (#16825)
acf5ec07af : Correct conv and pooling docstrings in nn module (#17052)
6c67dcfb05 : Fix AdaptiveLogSoftmaxWithLoss's constructor (#16694)
48943c3b7a : Update Upsample docs to match nn.interpolate
f84165d20d : Remove static_cast insertion/kernel argument extration. (#17055)
b65c22c01a : Upgrade mkl-dnn to v0.17.3 to fix core dump issue (#17107)
056dd5b6de : Updated bbox_transform and nms unit test for caffe2 ops. (#16722)
2634e306e4 : Extend support for exporting reshape to onnx. (#16971)
f062f5fd4a : add std to autodiff, and mean/var/std to operator set (#17137)
678a472ee5 : Script module data parallel (#16891)
0a975d333f : add pre-packing operation in README.md (#17151)
a1f2ed008f : Minor fix of the histogram observer in FBL eval flows (#17118)
5f6ecd14c4 : more test coverage on emitIf none dispatch (#16794)
91c50aeec6 : Speed-up adaptive average pooling for the common case of size=1 output (#17011)
7cff803d0a : Improve example for torch.mode (#17069)
58648a19df : Create BackendTransformerBase to host common functions used for backend lowering (#17074)
6a46738986 : Fix android crash when model detects nothing
d61455cf40 : Fix some documentation links in torch.tensor (#17109)
5f866d0ea2 : Apply modernize-use-override (2nd iteration)
f1da9892e9 : Generalize catArray for contiguous inputs and dim != 0 (#17032)
f3dd5563e4 : fix test_jit canonicalize_tensor_iterator
65e06df24a : Use new constructor in USE_SIMPLE_CTOR_DTOR (#17080)
6f2bcc9b4f : Caffe2 TARGETS for HIP (#17076)
b0545aa85f : maskrcnn & bert AD coverage part 1 (#16689)
b5193b6a81 : Second PR to restore reverted commit (#16224) (#17040)
b515ebc6f1 : Remove fake inference for shape info in ONNXIFI transform (#17046)
0a5de6e972 : Update alexnet expect.
ff2053dfa1 : add clear functionality to list (#17050)
d52862ca81 : Moderate the dim type after LengthsRangeFill (#17096)
016f212357 : fix behavior of ConcatDataset w/ negative indices (#15756)
65d6f1014a : Add support of count_include_pad and test end to end test for AveragePool (#17034)
19addc7eb0 : Support nonzero onnx export
5a26579e27 : Add more headers to setup.py to make pytorch/benchmark work (#16890)
3408d9de20 : Clean up Storage/StorageImpl constructors (#16948)
11816affab : Safety check for negative alloc_cpu() attempt (#17071)
f0fed41ea2 : Updating submodules
92a516b9ff : Apply modernize-use-override - 2/2
84bdf86034 : Updating submodules
8abfd28f58 : #16627 convert weights using torch.as_tensor to avoid warning (#17067)
b33f4cff6b : Updating submodules
dae356df1f : Revert D14062537: [pytorch][PR] Implement NetDef <--> JIT IR converters.
c3f5ba9460 : PyTorch model metadata. (#16275)
46503a7ac0 : Trim libshm deps, move tempfile.h to c10 (#17019)
d25fee31fc : Implement NetDef <--> JIT IR converters. (#16967)
decc0893f2 : Remove IgnoredPythonOp sugared value
3a34f443c5 : Separate reduce functions from math (#16929)
9b7f3da74b : Skip test_cudnn_multiple_threads_same_device on ROCm (flaky) (#17061)
a670824fee : Support FC (Caffe2) -> Gemm (ONNX) with variable input shape. (#16184)
2ad5dcbbe4 : Make timeout in resnet50_trainer configurable (#17058)
41dddfd55f : Make mkldnn Stream object thread_local and enable mkldnn thread-safe (#17022)
491f2d4cb8 : Support conversion from Caffe2 MergeDim to ONNX Reshape + Squeeze. (#16189)
86594e63eb : Fix mvlgamma doc (#17045)
f79563a665 : Change IR graph print format to make it look more pythonic (#16986)
18b8572505 : Turn off the ability for Declarations.cwrap entries to be methods.
bc39cf4d5e : Remove chunk count check on the ChunkBuffer (#16868)
a5e7b1d032 : Use IndexError instead of RuntimeError in ATen CPU kernels
1fc05bd285 : Mark IntList as deprecated; add C10_DEPRECATED_USING (#16824)
db82fc7ca6 : Add more debugging facilities to ONNXIFI transform (#17043)
26018d027a : Updating submodules
2b5bef22b7 : Updating submodules
51dd2000cd : unify c2 and TH allocator (#16892)
f87022bf2f : Updating submodules
fb5790ce94 : Remove second output of Reshape during ONNXIFI transform (#17027)
9d01be1a5a : enable more unit tests in test_nn (#16994)
02b838e065 : fix bicubic upsampling and enable tests (#17020)
92221ad840 : Fold col offsets into bias; optimize A symmetric quant (#16942)
3e1e5d5a8b : enable unit tests in test_cuda that now pass with ROCm 2.1
9696fee635 : Register CUDA kernels for caffe2 operators (#16691)
059c55f8cc : Enable test_jit tests that work on ROCm 2.1
7c24de8d04 : Extract ShapeInfo and some util functions into a separate file. (#17025)
f435fb8290 : Allow customization of blob node in net_drawer (#16915)
65b49b4696 : Ignore unknown_shaped tensor in bound shape inference (#16916)
7c1e4258a9 : Workarounds to the lack of nvidia-smi and ldconfig programs in macosx (was PR 16968) (#16999)
0d95028bee : Dispatch the correct legacy function for geqrf_out and ormqr_out (#16964)
68c3b959de : Register layout for XLA backend.
0eee56fff7 : Export ReduceMean/ReduceFrontMean/ReduceBackMean (Caffe2) to ReduceMean (ONNX). (#16727)
b0d57aa7b1 : Clean up allocations in FBGEMM linear (#16985)
34e4bd3ec5 : Properly dispatch s_copy__cpu.
2b1e2b6b53 : Get rid of unused THPStorage defines related to accreal.
f2e6a3f230 : Fix AddAdjustBatchOp (#16997)
21ce1da5e9 : Roll back PyTorch DockerVersion to 282
a2c322e735 : fix silent failure on Windows builds (#16984)
3618b52c74 : Add module and name to func created with _jit_internal.boolean_dispatch (#16922)
40528efeac : More docs for methods in operator.h
e5742494f6 : Minor typo
f61f9e1757 : Fix allow_inf in assertEqual (#16959)
ae1fc584ea : Refine return type Stream to HIPStream in HIPStreamGuardMasqueradingAsCUDA (#16978)
b7b245845a : Revert D14030665: [pytorch][PR] [HOTFIX] Pin docker-ce version to the one expected by nvidia-docker2
bad4442a7c : Parse the command line and check the arguments before build_deps() (#16914)
4d4c5273de : Fix and add testing for nullptr allocator in c2->pt conversion (#16857)
aa626840af : Fix NERPredictor for zero initialization
d266453541 : Allow calling a Python function with a dict
4292d13240 : Keep weights name unchanged during SsaRewrite (#16932)
917eac91f4 : Pin docker-ce version to the one expected by nvidia-docker2 (#16976)
920c684367 : Expose GenerateProposals to PyTorch
0c02d317ea : Expose BBoxTransform to pytorch
64aa769ef9 : Minimize templated code in caffe2 operator wrapper (#16965)
7743ed8502 : Don't keep unnecessary saved_inputs alive (#16583)
e2a5b203fc : Enforce same input tensor storage in VariableType functions (#16305)
4b454c3bdd : Revert unneeded fixes in flat_hash_map (#16907)
9521612bb7 : Fix constexpr in KernelRegistrationBuilder (#16906)
af0c79eed4 : Catch cudaError_t return val (nodiscard in rocm) (#16399)
29f096cc70 : optionally zero infinite losses in CTCLoss (#16199)
632df48207 : Merge binaries "convert_image_to_tensor" and "caffe2_benchmark" (#16875)
b4f1a871e8 : Fix missing CircleCI GPG key (#16961)
c90a33b627 : Disable binary_linux_conda_3.6_cu90_build on PRs. (#16958)
48c5d0ae8c : Install Thrust package and stop patching (#16911)
8042edcdb1 : Make pin_memory and default_collate preserve namedtuples (#16440)
d7e6f9b5a7 : Revert D14020906: [pytorch][PR] Extend support for exporting reshape to onnx.
8b4dea3f56 : Added scientific notation on set_printoptions (#16876)
4335aac6e6 : Extend support for exporting reshape to onnx.
e661dc27ff : Int8GivenTensorFill Operator Schema fix typo (#16204)
8f6ee88a1d : Add support for fusion of half batch norm with float stats (#16735)
c282afffa7 : Improve the Sparse matrix multiplication computational speed #16187 (#16905)
0742874643 : Allow dataloader to accept a custom memory pinning function (#16743)
73d7ecd183 : Add abs for ByteTensor and CharTensor. (#16893)
eae139e18f : Support named tuple return from operators on JIT (#16253)
9cb41e5386 : Enhance the documentation for torch.nn.DataParallel (#15993)
aae6b53c5b : DOC: correct docstring for torch and torch.Tensor package (#16842)
6a528007a6 : find libnvToolsExt instead of using only hardcoded path (#16714)
8c9df48fd4 : Clean up autograd method tests
a1a330bd6e : fixed LogSigmoid math string that wasn't rendering in documentation (#16900)
e0323a6aea : ctc_loss error message bug fix. (#16917)
202eaa4ef4 : Use non-Variable type for callsites that check type equality (#16325)
a9f1d2e371 : Fix the error in the note about `torch.device` documentation. (#16839)
19790b218f : Register coalescer bug was fixed in ROCm 2.1 (#16923)
d72c5d5a49 : Alignas is now correctly handled on ROCm (#16920)
5089ee9677 : Enable buildin bitonic sort (#16919)
f169f398d0 : Change the default image size from 227 to 224 in resnet50 trainer (#16924)
23e1c55cc0 : enable unit tests working on ROCm 2.1 (#16871)
fc4f33b08f : Add suggest add to __constants__ message on save fail
6737190b5c : Make the exception raised from "numpy.dtype(numpy.void, (INT,))" less cryptic (#16809)
12bace141b : Revert D13970381: [caffe2] Add visibility to registry class to fix ubsan error
0799a81cb7 : Extend Net.RunAllOnGPU() to support RecurrentNetwork op (#15713)
48fe839d56 : delete critical section in TH*Tensor_addmm (#16889)
f83556bb7b : Revert D13806753: [pytorch][PR] TensorIterator cuda launch configs update
cd2dca3caf : Allow sequential modules in module list (#16882)
5ada54e0bc : Impl ExpandDims op and fallback to CPU if needed (#15264)
54c981d9a9 : Add visibility to registry class to fix ubsan error (#16792)
b9b0be7af2 : Remove Legacy entry point. (#16721)
b3fbd3eebf : Deduplicate instances caching allocator, so that we only have one instance. (#16720)
5c982622b0 : Delete duplicate copy of THCCachingAllocator (round two). (#16615)
f03296299b : Bump caffe2 docker images to 248 (#16863)
6ce147c021 : Also register op schema when no kernels are registered
2c713032a1 : Don't automatically handle context parameter (#16867)
fe5989d466 : Support onnxifi with partially shaped inferred net (#16877)
7ce33c586d : Robust determination of cudnn library and relevant conda packages. (#16859)
930ed00b33 : Specialize LengthsRangeFill and SparseLengthsWeightedSum in bound shape inference (#16869)
b5111918cd : Activation histogram net observer with multiple histogram files as output (#16855)
ee0e71bee7 : Allow dicts in C++ frontend (#16846)
2db847b3a7 : Separate elementwise level2 math functions (#16753)
22477c6a7f : Fix (#2) ppc64le build break on git status --porcelain check (#16852)
96369506c4 : doc updates for TorchScript (#16866)
67bb7b2931 : Fix autodiff of nll_loss
c35f3ae89f : aten::_convolution now participates in shape analysis (#16837)
c65b03b9f8 : Enable arg_ops_test/unique_ops_test on AMD/rocm (#16853)
bca358ad02 : Update CI to recently released ROCm 2.1 release (#16808)
0f42a1ed29 : Use bound shape inference in SparseNN tests (#16834)
66084c0bc9 : Add recognition for XLA device types.
64339dbd51 : Fix and re-enable test case (#16643)
6750e1e3e9 : C10_REGISTER_CAFFE2_OPERATOR: Macro for registering c2 kernels (#16548)
ac4f66c9c3 : Fix Anaconda logins on binary builds
4193f7a106 : new embedding label type in image input op (#16835)
b6648c1bbc : Update ATen internals to use int64_t for dimension indexing (#16739)
1aa90192ea : Make JIT attributes t_ and ts_ store Variable instead of Tensor (#16596)
44d98c30a3 : Better error when using a constant tensor (#16724)
72f070a124 : Backport the stable doc build on v1.0.1 to master (#16503)
ac00e85e36 : Remove undefined tensor in jit script (#16379)
0d366e1bde : Support multiple inheritance in torch.distributions (#16772)
2681af1c8a : Remove redundant wrappers in torch.distributions (#16807)
511f6fc2d5 : Insert AdjustBatchSizeOp into the predict_net. (#16811)
aa88c2c0b6 : Unify gpu_support variable in python tests (#16748)
85ac272670 : Update Docker file section in README.md (#16812)
49443d49fb : TensorIterator cuda launch configs update (#16224)
b2135b2b72 : Define layer_norm schema in caffe2 (#16535)
16468a9f45 : Automatically register c10 ops with JIT (#16534)
e5e0bf4152 : Add AdjustBatch Op (#16676)
100aa0798e : Bring back running pytorch tests in rocm CI
f34192db0f : Rename DynamicType -> TensorType (#16787)
1b919ca93e : Use bound shape inference in onnxifi transform (#16598)
717ae09184 : improve error message (#16719)
8105aaca86 : int8 SpatialBN (#16796)
30ab1773f9 : call istringstream clear after str (#16820)
ea35d8e40a : Replace resize_dim() with set_sizes_and_strides() (#16732)
929cd23da1 : no EIGEN engine for DeformConv (#16785)
8d4b2db529 : format deform_conv_test.py (#16786)
1a13dedf98 : Fix/Improve bound shape inference with real net tests (#16597)
34cfbb0040 : Typofix (#16800)
30a6feda84 : caffe2 | MSVS compatibility fixes (#16765)
887080e92a : Fallback sum/add to CPU if needed (#15267)
39eab01b61 : Automatic update of fbcode/onnx to 822d8df0a2a32233c6022f50a158817a0f19bdc7 (#16791)
f2e0d64775 : Adding torch/lib64 in .gitignore for ppc64le CI build to pass (#16782)
a3f600e394 : Revert D13854304: [redo][c10] LayerNorm Registration Example
fc0e88dd77 : Revert D13855525: [c10] Expose RoIAlign to torch
33a6a7a3ea : Revert D13856086: [c10] Expose GenerateProposals to torch
018485130f : Revert D13864292: [c10] Expose BBoxTransform to pytorch
c0a7bf94ed : Revert D13865221: [c10] Expose BoxWithNMSLimit
cda43336d4 : Revert D13866214: [c10] Expose HeatmapMaxKeypoints to torch
d327965dac : Fix pip list format in collect_env (#16798)
d1b2ab83fc : disable default system-wide detection of gflags, glog, opencv, lmdb, leveldb (#16789)
255136fc1d : fix BUILD_CAFFE2_OPS
ab035d01e3 : Remove unnecessary typing import. (#16777)
43f4c86238 : Fix alias analysis for fork/wait (#16671)
c1dff549da : changes to apply xla patch (#16781)
db5a3c274d : Tensor construction codemod (#16568)
18edd3ab08 : Warn when tracing legacy constructors
7bf7a4162d : Use torch.zeros for nn.LSTM
c5c831953b : Set SCCACHE_IDLE_TIMEOUT=1200 (#16728)
448e0d78e9 : Document hip-clang and its __HIP__ macro (#16771)
4404762d7d : Rename IntList to IntArrayRef. (#16751)
e2d3a3fd6a : dict values(), keys(), and len() (#16629)
0ceef3c9f6 : Automatic update of fbcode/onnx to bfa8b335ab6d1ed7b688dc2ec96421a3fe9e644c (#16767)
0f7a0f8c83 : Fix commit races on binary CI on master PR-merges (#16773)
a9713d07b0 : Expose HeatmapMaxKeypoints to torch (#16528)
3df7b321cc : Expose BoxWithNMSLimit (#16529)
add39b85cc : Expose BBoxTransform to pytorch (#16530)
f33a2b960e : Expose GenerateProposals to torch (#16477)
f5d4636021 : Expose RoIAlign to torch (#16476)
240240bb10 : LayerNorm Registration Example (#16478)
af4d2b889c : Enable undefined at::Tensor to be passed as Output (#16730)
9811a4220d : Add XLA / TPU device type, backend type and type id (#16763)
6efa40e07b : Preserve method parameter names (#16750)
f8d4a14f6d : add xla tests to enabled-configs (#16761)
e30d33483b : Fix logging top commit of pytorch + builder in binaries for long summaries (#16766)
2a85d98745 : Fix type-o in unsupported data type error message (#16537)
963e410b57 : Make tuple checks faster (#16657)
85ad011843 : Fixes selection of cuDNN algorithm (#15881)
c751cf8b36 : Don't throw in operator== for TypeMeta and ScalarType (#16736)
1ce188c510 : logsumexp for multiple dimensions (#16475)
4047c97266 : Revert D13952085: [pytorch][PR] Fix static linkage cases and NO_DISTRIBUTED=1 + CUDA
29827e1971 : Integrate PyTorch quantization APIs into ensemble export modules (#309)
0cd918f4d3 : Fork/join parallelism for ensemble export modules (#310)
ce15ae8f23 : Add an API to set the number of threads in C10 thread pool (#16669)
3796cbaf7a : Try to turn off zero-out of tensors fully
ae5fd10b02 : Tensor method rename size()->numel() - 2/3 (#16745)
3df91ceb5e : Tensor method rename size()->numel() - 3/3 (#16747)
cb9740a608 : Tensor method rename size()->numel() - 1/3
a7a2618d51 : Bug fix in l2 quantization (#16749)
b1822966ee : points-to graph simplification (#16605)
6c04224cd8 : Revert "Move outplace ops to ATen (#12413)" (#16731)
1409a2afc8 : Automatic update of fbcode/onnx to 875f7bbe537b9d6931d065977c192eaaf61e1179 (#16734)
3f570b5eea : Fix static linkage cases and NO_DISTRIBUTED=1 + CUDA (#16705)
846a64e805 : Tensor method rename ndim()->dim() - 1/3 (#16678)
9e31d6dbf1 : Merge job-spec env variables of Pytorch/Caffe2 CI jobs (#16649)
b250385811 : Log top commit of pytorch + builder in binaries
c15ed3a2f2 : Run resnext101 training in rocm benchmark (#16017)
6d407baedf : Replace resize_dim() with set_sizes_and_strides() in THTensor_(unsqueeze1d) in aten/src/TH/generic/THTensor.cpp (#16673)
db4235f31d : Tensor method rename ndim()->dim() - 2/3 (#16679)
73db487a8e : Update the cmake build configuration for AppleClang compiler (#15820)
dc528fd734 : Fix build with cuda but no cudnn in caffe2 (#16701)
da24749e8d : Fix ReservoirSampling zero-initialization reliance (#16702)
6cb593b88c : Remove --without-parallel (#16704)
a53d28dd87 : Bump gloo (#16638)
6d86bc7c3f : Fix issue with scalars and __rpow__ (#16687)
4c16ea93d1 : Improve LeftRight (#16524)
39a55f4ea6 : Updating submodules
988647b21c : Updating submodules
44809fda84 : fix conditional in mean workaround (#16686)
7d4a81cbb2 : Use macro for reduce on 2d blocks (#16344)
f36f3cce9a : Simplify layer_norm_op_test
13db5dbb81 : Make predictor base class
98b333d810 : Tag model_id and onnxifi index in OnnxifiOp (#16648)
a4ac3cbb2f : Updating submodules
c865d46736 : Add @ignore annotation (#16055)
31ab03e34f : Add Winograd Conv method for CPU (#15196)
69a816c060 : Increase timeout on anaconda logins
dc64d95f3a : Tensor method rename ndim()->dim() - 3/3 (#16680)
9594d9bcfd : fix the ONNX ci (#16674)
7139410b72 : Allow USE_NINJA to be toggled by an env variable
bd75fba4e8 : fix tracing using a dictionary as input (#16616)
aaa8ace486 : Implement new c10 dispatcher (#16625)
a40e8ce7c5 : Add train() / eval() / is_training() to C++ ScriptModule API (#16044)
6d373c02ef : Revert "Fixes selection of cuDNN algorithm (#15881)" (#16484)
638dbe4b46 : Revert "Upgrade mkl-dnn to v0.17.3 to fix core dump issue (github#161… (#16660)
4c803f4ebd : Expose backend extensions to python
7e642dfff3 : Introduce backend extensions (overriding operators on custom backends)
64186e06ec : Dispatch factory functions on Type (#15093)
d29912f59e : Only run Travis on master branch, not on export-DXXXXX branches. (#16628)
3672f1536e : Ignore assert_git_not_dirty for xla tests (#16611)
7078b2baf5 : Better bounds checks in ctcloss (#16269)
87ae1558a6 : Upgrade mkl-dnn to v0.17.3 to fix core dump issue (github#16183) (#16653)
10cd9d5a03 : Skip dag_net_forking test on Rocm (#16639)
b67b29b667 : add SingleLoadedNetSupplier (#16620)
4ae9ab24b6 : Update conv_base to support empty batch (#16603)
b0e692c8a6 : Improving docs for MultiLabelSoftMarginLoss (#16644)
536f647bae : respect MAX_JOBS (#16641)
6ba4ca8780 : Workaround unvectorized mean implementation (#16618)
2486facc34 : Updating submodules
e48ffa84d8 : Add compare_exchange_deleter to DataPtr/UniqueVoidPtr (#16513)
e4c1b51d82 : Shim caffe2 GetRepeatedArgument helper for use with Ivalue (#16519)
13422fca32 : Add torch.backends.openmp.is_available(); fix some cmake messages (#16425)
f660d3ae19 : Move outplace ops to ATen (#12413)
6e17f4a126 : Grant credentials to s3 html update job
d5d7718770 : fix scope related naming issue in build_quant_conv_bn_relu, and also format function signature
51752e09c6 : Disable layernorm_c10 test for now (#16630)
a386c28fcd : Remove constant propagation expect files (#16348)
dfb081a7e4 : Fix a lot of C++ build warnings (#16411)
3f8fd19a86 : Add immutable dict support (#16208)
4bdf51cbd6 : Make the miopen handle part of ConvolutionParams (#16613)
a061e3fd77 : Back out "Revert D13596031: Improve c2-aten tensor interop and add proper testing" (#16514)
0b29bd82f6 : use distutils to discover msvc compiler paths (#16540)
1ff46f03ed : Fix SIOF in torch using caffe2 registry (#16473)
1efad7f6be : Swap Caffe2 operator constructor to pass arguments by value (#16576)
26565046ac : Allow ScriptModule(optimize=False) when jit disabled (#16297)
20d45c43d7 : Get more fusion after autodiff uses SumToSize (#14957)
4b7e70067c : Enable USE_NINJA in build_pytorch_libs.py if it is in PATH (#16545)
b109549bf3 : Replaced "from_numpy" with "as_tensor" in docs. (#16587)
482d3a3bf3 : printing correct dimension while indexing (#16495)
32daa90fbd : remove unused capture (#16526)
72a431edce : split up AliasTracker into a separate file (#16588)
e7e3838f3b : Access profiler from cpp (#16580)
d2861230f3 : Fix cuFFT plan cache size on CUDA 10 cannot be set to > 4096 (#16384)
d47108add0 : Clean up binary jobs in CircleCI (#16511)
8b053fccc7 : Updating submodules
db121375e7 : more careful use of inline/template function in perfkernels (#15388)
26200ebf56 : Updating submodules
d3742603cb : DeviceScope support for CUDA and testing (#15357)
a44826e659 : Fix: avoid race condition on model zoo directory creation (#16578)
bc53805f2e : Remove redundant declarations (#16463)
3ba6f55ae3 : begin splitting up cpp tests (#16536)
0ef9569841 : Use dispatch tensor for device_guard instead of first Tensor argument
fc2d8c6889 : Eliminate PYCMD in favor of PYTHON_EXECUTABLE in CMake.
16e2e4f29f : added example to clear ambiguity in torch.Tensor.view (#16563)
851437dd4b : Fix uninitialized data and broken broadcasting with sparse.mm and spa… (#16572)
33f2ab1fdb : add new build files to gitignore; test that build does not leave git repo checkout dirty (#16565)
c653fa2b00 : Move Deprecated.h to c10
18659e1336 : Allow generic containers as module inputs (#16482)
22e9c3055a : Explicit pdist captures (#16286)
1905bbb01d : Include ATen/core/functional.h directly instead of torch/csrc/utils/functional.h. (#16377)
b28f0f9d37 : Remove --no-update-dependencies (#16575)
3ab736b774 : Update PyTorch DockerVersion to 285. (#16507)
2ed5569bd6 : Support fallback for more operators (#16566)
307c83b5eb : fix the linter
c43917b0a3 : Add a test case calling caffe2 layer_norm from caffe2 executor but through the c10 dispatcher
2af95d8e3e : Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" (#16516)
cdbd388206 : Remove the debugging info of pytorch=>onnx coverage script (#16538)
a7796bc24d : CUDA histogram implementation
dc84ff1e5a : Use a points-to graph for alias analysis (#16386)
dff8165d04 : ONNX Export Flatten operator
68620cdcb5 : Revert D13880053: [pytorch][PR] add new build files to gitignore; test that build doesn't leave repo dirty
34b43baeec : Allow list and tuples to be passed as output_size to max_unpool1d (#16489)
b1b00f329e : Fix the flake8 linter
3b91df3744 : add example multiprocess code (#16345)
fa717cba63 : Support int64_t shape data for ideep reshape op
2d2eb7145a : add new build files to gitignore; test that build doesn't leave repo dirty
7d7855ea31 : Fallback support for more operators (#16456)
21907b6ba2 : Fix the dropout onnx symbolic, and ensure all exported models in test_operators.py are eval mode (#16547)
598b713660 : Separate level1 elementwise functions from math (#16397)
ed4776820a : Fix includes for ATen/core/stack.h (#16462)
7c66ad7455 : Add test case for calling c10 ops from pytorch
12f92f453b : Kernel gets Stack* instead of ArrayRef<IValue> (#16282)
6249442e90 : Chunk dataset implementation (#15932)
21193bf123 : try to get rid of tmp_install (#16414)
ffed8bff6a : Fix torch.sparse.sum parsing of dim. (#16517)
f924fc6eb1 : Make Store::setTimeout take milliseconds (#16278)
279238f0b8 : Back out "Delete duplicate copy of THCCachingAllocator." (#16510)
4f809397fd : Fix compare_exchange_weak usage in weak_intrusive_ptr (#16302)
719134f3c3 : Automatic update of fbcode/onnx to 15c33c945851907411619f599900c3852108e7e3 (#16493)
541ce96564 : Make the pytorch's cmake minimum required version equal to caffe2's. (#16506)
3ab620926f : More windows fixes towards the code refactor (#16451)
ded6fb0293 : Add stack & cat support for CPU Half (#16389)
d79e45bbba : Add some smoke tests for Windows
6a6983ed7f : create type hint stub files for module torch (#12500)
3b337e7892 : Revert D13596031: Improve c2-aten tensor interop and add proper testing
bd19dd4b90 : url download bugfix for URLs served without Content-Length header (#16153)
dbebb5322c : Properly screen string literals when dumping JIT IR
0e6123fb8a : Remove dependency on ResourceGuard from IR.h. (#16351)
862d466bef : Remove redundant includes from scope.h and attributes.h
5e21e0fe75 : Improve c2-aten tensor interop and add proper testing (#15860)
9d6be6ac09 : Remove redundant "build" setup.py command from onnx scripts
7f552041ff : Fix identifier shadowing in tracer (#16480)
f204e3e624 : Pass WERROR to CMake as an explicit parameter rather than an env var.
99fab45733 : Remove redundant build from build develop instructions
52ca4b86db : Change SetOutputSize in ConvTransposeUnpoolBaseOp (#16179)
5ebf4cd4e3 : Move stack.h to ATen/core (#16247)
504bcb276c : Remove state from schema and calling API (#16180)
cc2d49deb7 : Remove generic_if.h. (#16354)
ed50bccb35 : Remove CUDA_VERSION to flag and remove JOB_BASE_NAME from binary jobs
4eceb7a055 : Fix cmake byte version issue in build_pytorch_libs.
ff963d4b9f : Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#16273)
b076227b21 : Move tracer impls into cpp file (#16410)
1a77918955 : fix alias annotations on to, cpu, cuda (#16460)
e3c0926c44 : Remove usage of deprecated "min_satisfying_examples" hypothesis setting (#16401)
0ae45e30bc : Support Tensor alias annotations for native_functions.yaml (#16239)
120c54743e : Annotate the bicubic interpolation kernels (#16449)
fb17be1368 : Clear cmake cache when --cmake (#16426)
e866bc7c88 : Remove dims() in caffe2::Tensor (#16356)
05678d0bfa : Op-calling API can handle state (#16177)
80f4374dde : Handle stack correctly (#16246)
c7547dbd5e : Fix compiler error in swapBytes64 for rare architectures (#16418)
17d7818578 : Fix lint errors introduced in pytorch/pytorch@ceece5d (#16454)
17e3ab957a : Report the slowest 10 tests when using pytest (#16423)
0a2d14dd7c : Optimize SpatialBNOp on GPU (#16395)
ceece5dd0f : CPU implementation of torch.cdist (#16168)
14138f4605 : Don't initialize a new `std::vector` in a loop. (#15850)
d2cdffaf37 : More documentation on caffe2::Operator
fdaa77ae8b : Better error message when creating a module instance in jit.script (#16416)
952a03ccea : Fix issues on Windows brought by #16289 (#16412)
2b6607065b : Fix a typo in Parallel.h (#16419)
ee18448138 : Don't install PDB for Windows static build of caffe2_observers (#16420)
c863a759a0 : Fix slogdet sign requiring grad when input requires grad (#16337)
6944461a76 : CI Fix: restore MAX_JOBS variable (#16415)
3c30cf3237 : Update einsum documentation. (#16323)
de6bb3f3e3 : Fix flake8 warnings/errors in test_jit.py (#16409)
d1ed0176df : Trace fork and join calls
8c81a72e87 : Switch to CUDA implementation if batch size >= 65536 for affine_grid (#16403)
f6e6b0fd33 : gitignore gdb history
41e9b092a9 : Revert D13821061: [redo][c10] layernorm example
f4e54fd659 : trying to fix testX (#16370)
27a1ba3ef2 : layernorm example (#16374)
13fde345fb : plug caffe2 into jit (#16388)
41acbb3b6b : Enable centos pytorch rocm CI
9477a5d9c8 : Remove bash from build (#16289)
539894d70a : Remove caffe2::ShareData (#16139)
ca86d1f01d : Trying a fix to anaconda logins on nightlies
956cabd887 : Update Documentation for Optionals (#16380)
c42431bd7a : Revert D13740752: [c10] plug caffe2 into jit
0e6791b275 : Impl Shape op for mkldnn (#15266)
958f846fb3 : Back out "[c10] layernorm example"
f087c65a56 : Add xla test in CI (#15978)
45602ce9a2 : Delete Tensor::swap(), replace with pointer swap (#12730)
4aae89fa7b : Make test_proper_exit more robust (#16249)
ec2a7fa4d4 : fix contbuild (#16362)
8683b75df6 : Minor change of group_norm_gradient on GPU (#16307)
52135e9b12 : Revert D13551909: [fbcode] logdevice for generic feature type
11a2b3799b : logdevice for generic feature type (#16191)
265ed8ff45 : layernorm example (#16350)
6d2aee4a9b : plug caffe2 into jit (#16331)
d4b60f4014 : Add RunOperator for using FunctionSchema registered ops easily in caffe2 (#16173)
3b6b777a11 : Add correct Input() shim to caffe2 operator impl (#16048)
7ce634ebc2 : Relax lower bound for nogil timing test to avoid false alarm (#16259)
c787de6284 : Code-style fixes. (#16342)
6700eff03e : disable testing group conv with EIGEN engine (#16335)
c2be9f1487 : Remove unneeded manual unwrap optionals (#16245)
f769cf999d : fix buildindexop (#16341)
69b5ae4c54 : Revert D13747581: Optimize SpatialBN on GPU
0b470d0a3b : Add Test for ReinitializeTensor (#16338)
2a70f24cce : Add thread-local guard: at::AutoNonVariableTypeMode (#15939)
f0dd85d141 : reduce parameter space of test_1x1_conv to avoid timeout (#16223)
fdda533eb1 : Update docs to include variable annotation example (#16324)
792cb774f1 : Delete duplicate copy of THCCachingAllocator. (#16226)
e936a69085 : Move THCCachingAllocator to c10_cuda. (#16119)
24b50f1411 : Remove unnecessary includes and headers from THCCachingAllocator, move to at::cuda:: namespace (#16117)
47bf30661f : Directly include headers from ATen.
af513cd433 : Refactor the docs build workflow (#16265)
a4be15377f : Save a little bit of work in constant pooling by not moving nodes that will get deleted.
0cb24098c7 : Handle non-contiguous inputs with mkldnn convolution. (#16300)
45c3cc9174 : Optimize SpatialBN on GPU (#16202)
60241e94b3 : optimize group_norm (#16216)
8ab4d348f4 : Fix the tensor deserialization problem of jit script module on CUDA (#16279)
3cba115abb : Small fixes for pdist (#16210)
0a3932acb2 : Fix comparison in ReinitializeTensor (#16294)
f25322fb97 : Fix issues under caffe round 1
31de19f210 : Add support for overloaded functions (#15556)
8710184eea : Constant propagation changes (#16244)
4b06c063a5 : raise exception if try jit.load non-existent file (#16270)
80bd28bcb2 : Fixing upload of nightly binaries and clean MacOS output (#16016)
fc5b79cd1c : CUDA event should only be recorded after NCCL group (#8219)
07a090247a : Change data() accessor in Caffe2 to return non-const pointer. (#16176)
dba4d37ac2 : Updating submodules
f2b1842344 : Align native_functions.yaml func schema more with JIT signature schema (#16111)
3837446883 : Fixes selection of cuDNN algorithm (#15881)
879bf65811 : Disable flaky test
9310eb1fd0 : Update third_party protobuf to v3.6.1
e669f72466 : fix sigma in the middle of when word (#16227)
36e27aa092 : Typos and broken RSTs fixed in torch.distribution (#16136)
8b49efe86a : tune elementwise for AMD uarch (#16217)
ddeaa541aa : fix typo in resnet50_trainer.py
e7a77ac3b0 : Automatic update of fbcode/onnx to dc75285d4a1cff9618400164dfdb26c5a1bab70a
2235fb256e : Add default_stream() and enhance current_stream() (#16200)
3e790f6ee8 : complex_registration_extension.cpp includes to angled brackets
0f45e6dbdc : Remove ATen/Allocator.h forwarding header.
77de69867a : Remove dead curVal store.
325df4ccfb : Make kernel registration constexpr again (#16166)
cd8f4154f4 : Avoid closure around kernel (#16165)
6192831b76 : Pass IValues from JIT to c10 dispatcher (#16066)
1c058de9ac : Release GIL when synchronize or wait (#16182)
c6503a4205 : Revert D13540278: [pytorch][PR] Unhide unique from C++, make unique partially scriptable
c5e1b469be : Return namedtuples from torch.* function with multiple return arguments for C++ operators (#15429)
1e19fd941f : Fix formating in caffe2/quantization/server/README.md
9521a15c88 : hip-clang enablement (#16085)
4cf76574b9 : Raise CalledProcessError when torch.distributed launch process not return 0 (#16069)
53ae8bc64d : Reserve vectors that we know the size in advance for. (#16201)
dfcafb1f71 : cpp doc fix (#16221)
addebf110f : Move away from ConstantFill (#16214)
9757ad35b0 : ban conv_double_backward from sandcastle, it takes too long
0cd1ab82b0 : Remove dead code from setup.py, remove need for build target. (#16162)
bed7db7772 : Unhide unique from C++, make unique partially scriptable (#15256)
c33512bdfc : Automatic update of fbcode/onnx to c553fb32a0902ce5dd42e1b40123e9e9b38bdbe7 (#16190)
866c4e3467 : Separate Moments from math and optimize it (#16175)
898329c3f9 : Unify device() return type in Stream, Event, and Tensor (#16150)
1fb6b431a3 : Replace use of ConstantLike with with ConstantOfShape (#16095)
33d1ec396b : Fix LBFGS issue (#16167)
a28c0ff7b8 : Allow for concurrent quantization in FullyConnectedDNNLowPOp (#16174)
daedec2350 : Support ConstantOfShape in Caffe2 ONNX Backend (#16108)
b436f94b53 : Separate affine_channel from math and optimize it (#16135)
e8b872abe2 : Pass IValue from c10 dispatcher to caffe2 operator (#16065)
c9044166a5 : Make c10 dispatcher use boxed kernel function pointers (#16051)
b662a9b66a : add back NNPACK in PyTorch (#15924)
ed57425b0a : improve performance of unique with inverse indices (#16145)
c6d9c51c7e : fix for clang-tidy (#16164)
292edfb087 : Change current device in stream context manager if necessary (#16128)
eea50e91fa : Fix SoftmaxOps (#16049)
3f4bb3d493 : rest of uses for deprecation of dims() in Tensor (#16118)
b69c05dbd6 : RNN operators should inherit step_net device_options (#16086)
d4f6befc93 : Add implicit optional unwrapping (#15587)
da578b7dcf : Add defined() to caffe2::Tensor (#16125)
b9b160d86f : Remove ATen/Half.h and ATen/core/Half.h forwarding headers.
1ff864712b : Port legacy any(*) to ATen
ed0a761c82 : Improve pack_sequence and pack_padded_sequence error message (#16084)
b4bc55beef : TCP init method race condition fix (#15684)
aaff2fecda : Remove caffe2::Tensor copy constructor (#15416)
b5c733324c : Fix RERUN_CMAKE
cb2961f63c : Cleanup includes in python_print.cpp.
27674dc7c6 : Refactor attributes.h (#16098)
40b3e4907c : Fix export macros (#15856)
0ab8de3125 : Remove some dependencies from ivalue.h to ATen (#15855)
68164c1c3e : Code style cleanup (#15854)
637b35b372 : Use intrusive_ptr for Blob in IValue (#16052)
3e85a2bcbf : Move c10 dispatcher back to ATen/core (#16050)
a9438ba62f : Moving cuda-convnet2 to the internal fb dir to satisfy internal dependencies. (#16104)
431a34f3ff : further wildcard cleanups (#16041)
962f3f4864 : Refactor _jit_internal (#16058)
99b029aca3 : Include all Caffe2 headers in Python installations (#16124)
0282318bea : Add comment to explain rnn bias vectors (#15843)
7e9e1c7a9f : Add @yf225 to cpp codeowner
9aac5c7e85 : Update FP16 to latest master (#14498)
d6a8dd9538 : Cleanup gumbel_softmax (#13339)
a667767220 : Add matches_jit_signature attribute to native_functions.yaml (#16040)
fe4ae9dfe4 : add if in register_buffer like register_parameters (#16110)
f1ad5e08c7 : Revert D13709409: [pytorch][PR] Exclude pyi from flake8 checks.
6641b09fac : respect grad guard for torch.jit._fork and torch.jit._wait (#16101)
595f767880 : Revert batched pdist, improve existing kernel, add test (#15901)
fbdafb006e : Fix trivial typos in torch.cuda._utils (#16026)
dbe6a7a9ff : Unify the shape notation for all of the pytorch modules (#15741)
67f2039f4c : Fix numerical stability in binomial.log_prob (#15962)
c51cf09a4b : Automatic update of fbcode/onnx to fd60104394fa353e1762f44ecad1b2166e33deef (#16094)
f09003d95d : A trivial typo fixed in onnx.verify.verify (#15871)
0e80df515d : Remove support for CUDNN 6 (#15851)
1a5c5fe7c9 : Exclude pyi from flake8 checks. (#16105)
76782cfc21 : Update cpuinfo to avoid reporting error when sysfs is not accessible (#16107)
1a09a2a27f : Export PyTorch erf to ONNX Erf and add Caffe2 Erf operator
334258e39e : Potential fix for model inference crash on Win10 (#15919) (#16092)
24f4d3987e : Move all Stream and Event Python implementation to C++ (#15937)
1e425d1a47 : A trivial typo fix in caffe2.python (#15907)
7536887cb7 : Add count_include_pad for avg_pool on CuDNN (#16100)
4171ef3728 : Enhance the documentation for DistributedDataParallel from torch.nn.parallel.distributed (#16010)
ded4ff87af : fix a little error in comments (#15922)
c7a48da493 : Corresponding data type for BYTE (#15627)
ec8b1c94a9 : Fix possible importing errors in build_libtorch.py (#15471)
fcb4b4f002 : Remove redundant includes from ir.{h,cpp}.
f7733526aa : Generate PDB files for better debugging on Windows (#16008)
0dfdc2cbdb : Update int8_simd.h (#13859)
ffd613800f : Add IS_PYTORCH_CI flag for testing (#16006)
7c56db73d5 : Moving torch.norm to ATen using TensorIterator (#15414)
55511004d1 : Resolve errors in perfkernel for Windows (#16031)
aa6b0f50ad : add a constexpr in c10::Half (#16091)
d277f77da2 : Tensor reinitialization codemod - 3/5 (#15912)
57d29ffa9c : Bound shape inference for c2 (#16081)
7a5f782c2e : Fix max_pool_grad test (#16088)
5b2d30ec85 : Revert D12812029: [pt1][tensor] Remove deprecated caffe2::Tensor APIs
237c0c3c7a : Port the backend of FractionalMaxPool3d from TH to ATen (#15575)
aff0964ee7 : update pytorch docker to cuda 10
d33e7d1236 : multinomial: fix detection of zero probability (#16075)
e58cc6ab28 : Enable single graph sharing between multiple threads for onnxifiop (#16047)
503f412f79 : Fix error message formatting in AT_CHECK/AT_ERROR (#16067)
71b24127d2 : Correct sphinx-note in symeig (wrong indentation)
3cf76e78bd : Fix the caffe2_gpu linkage with torch on Windows (#16071)
a2af554e6f : Port legacy all(*) to ATen (#15540)
411173757e : Rename away uses of THAllocator and THCDeviceAllocator (#16061)
4d07951a54 : Stop pretending that TH headers are both C++ and C compatible. (#16059)
fb68d813be : Fix logic errors when accumulating reductions in output (CUDA) (#16023)
5353847b19 : Remove deprecated caffe2::Tensor APIs (#15814)
5e72e99c86 : Remaining Tensor API fixes - dims() -> sizes() (#15743)
8b5894491c : Comment about CuDNNWrapper (#15496)
ad39cbde59 : Port FractionalMaxPool2d from TH to ATen (#15531)
dc4977ddf0 : Support tracing GenericList (#15969)
b792bfec0e : s/fwdproxy.any/fwdproxy/g in fbsource (#16024)
8f11df3cb7 : Automatic update of fbcode/onnx to 84a0441ae28795a928005863dc142bee81827566 (#16046)
13f38ab79d : Add count_include_pad to average_pool_gradient_op (#15997)
b2eb98f6c3 : Remove cuda from autograd profiler (#15898)
2824cb6e9c : Fix namespace typo. (#16021)
c448f85e1f : Fixing missing cpp tests for Caffe2 setup.py builds (#16037)
57b5e7572b : Test cases for calling caffe2 LayerNorm from PyTorch and JIT
620ff25bdb : Enhance cpu support on gloo based multi-nodes mode. (#11330)
7d601715e5 : Constant prop prim::None (#15979)
9a6fe4feec : Add a note about THNN height/width/etc argument reordering. (#15819)
406b9c49bd : Fix Python path finding for benchmark tests
7f1397acef : Quantized RNNCell modules (#15469)
19717224c5 : Miscellaneous broken RSTs fixed (#16033)
b329e03684 : Add PyTorchPredictorContainer (#15899)
1065e7cd24 : Add `itertools.{prod, combinations, combinations_with_replacement}` like op to pytorch (#9393)
964732fa8d : use fbgemm gconv in dnnlowp (#16020)
bc233fe405 : `var` for multiple dimensions (#15892)
02b9f686a7 : Updating submodules
5d527b9cc2 : Updating submodules
10b16953d1 : nomnigraph - easy - use new test utils in converter_nomnigraph_test (#15751)
4ed9de8680 : Remove code duplication (#15880)
ddece5a793 : Fix ormqr docs, fixes #15565 (#15694)
774705ba05 : Fix c10d checking errno unconditionally (#15986)
4fb3931896 : add tensor.to to script (#15976)
8964a2e6e6 : Split Caffe2 CI into cmake-only and python builds (#15917)
4bdaca827c : Make call operator on module holder call forward (#15831)
c620725177 : Updating submodules
8c55e56c37 : Fix broken rst of torch.nn.utils.spectral_norm and others (#15995)
300dcc3b96 : Add cuda.reset_max_memory_* (#15985)
7c08f1083e : libshm retry on EINTR (#15964)
abdaa477e5 : Improved the documentation for torch.nn.functional.pad (#15984)
6ec753f2f9 : Improve the docstring of nn.random.fork_rng (#15960)
492b7d410b : doc fixes (#15990)
ca18fb8567 : simplify lambda function use in conv dnnlowp ops to fix #15911 (#15996)
a7415787ac : fix RandomSampler length (#15991)
e18d6cd455 : Fix static build on Windows (#15989)
a282378baf : Caffe 2: Reshape Op upgrade (#15380)
04b8a2f1ba : fix compile error reported in issue #15911 (#15953)
6371bc76a9 : Back out "[pt1][tensor] Remove caffe2::ShareData" (#15983)
35480a7c44 : Remove StopGradient op when it is inplace in inference (#12152)
586d030311 : Add global pooling specialization and also update MaxPooling on GPU (#15824)
83c054de48 : AliasDB interface cleanup (#15656)
00b2dff6b6 : Updating submodules
a4c1aa4bc5 : Add the normalize transform to the core library (#15891)
e5266b4ba6 : 3x3x3 depthwise convolution with per channel quantization (#15775)
264e16bffd : Make it consistent for OperatorBase usage (#15908)
55b0e2a1eb : rocm build (#15981)
6f08e2a588 : Updating submodules
bff0f88cc8 : Tensor construction codemod(ResizeLike) - 2/3 (#15940)
162ad94590 : Fixed typo in batchnorm docstrings
fd0ed2e298 : Tensor reinitialization codemod - 4/5 (#15967)
94acddb57f : Fix the lint (#15973)
eb15587c99 : Tensor reinitialization codemod - 2/5 (#15947)
1235aa4fca : Expose dim() on type and use it in ONNX symbolics (#15933)
253b680928 : Tensor construction codemod(ResizeLike) - 3/3 (#15943)
d042914221 : FC shape inference should use int64_t (#15961)
d33159a426 : Undo norm optimizations and add more documentation for parallel.h (#15885)
926e718d5f : Add/fallback some operators for mkl-dnn (#11696)
96ea2594d8 : Don't call cudaStreamDestroy at destruction time (#15692)
726341fea7 : Tensor construction codemod(ResizeLike) - 1/3 (#15944)
bcc88dfb4e : Move nightly binary builds to 05:05 UTC (#15966)
e07cca1312 : Add backend checks for batch norm (#15955)
c9d7ead0c4 : Add scalar_type_to_pytorch_type dict in ONNX symbolic
3f6b212e80 : Register CPU/CUDA fuser dynamically (#15887)
d580d3583b : Simplify cat fusion (#15633)
3d0d16d31c : Add bindings for .cpu() & .cuda() to script (#15904)
03a570cad9 : comment out large test cases for tril(u)_indices (#15959)
7841fe4f27 : Automatic update of fbcode/onnx to 7abd834091f1024c11749dcfd25126802db9fdd5 (#15942)
70dd44f6a8 : Match NumPy by considering NaNs to be larger than any number when sorting (#15886)
b7cdeb3fc3 : Port empty_strided to ATen. (#15948)
14dcdc4c35 : Move cudaDeviceProp to ATen (#14834)
da753b7ccf : Trivial typo fixings in nn.functional dropout* docstrings (#15951)
86af14b0c7 : Resolves ptxas warnings when compiling for CUDA_ARCH 750 and a memoryType deprecation warning (#15461)
07ea3e035e : Fix fallback issues to handle inplace case (#15726)
0934e8de58 : Optimize CPU version performance of the nonzero function. (#15925)
890568a018 : Tensor reinitialization codemod - 5/5 (#15884)
e46e572b30 : Add backward pass notes for eig() and symeig()
da7468853a : caffe2::Tensor::is_same() (#15407)
b9e1028cff : Clean up Half (#15317)
d408324350 : Move files to/from c10/core and c10/util (#15316)
6b64052e20 : Remove Context from c10 operator schemas (#15312)
8136c39b5e : Enable calling caffe2 LayerNorm from PyTorch and JIT (#15243)
913785445e : fix rocm build
27f6a29fd0 : Remove USE_CUDA and USE_ROCM in engine.cpp
c5012d8641 : Extend note about contributing to the C++ frontend (#15902)
3ec3351306 : Fix different env variables in schedules runs pt 2 (#15934)
4b427780aa : Change PoolOp Functors design to support CuDNN CUDA fallback (#15903)
b1fa19961e : Fix bug in torch::load and unpack torch::optim::detail namespace (#15926)
9173cd5a4d : fix aliasing on unwrap optional (#15748)
d35295c603 : JIT Batch Norm fusion (#15897)
7f268c6262 : Fix different env variables in schedules runs
4edc8273eb : Allow for registration after GlobalInit (#15876)
9b5ec2a076 : Fix TestDataLoader.test_proper_exit (#15665)
0ed3f766e9 : Unify flags and environmental variable when building LibTorch/PyTorch (#15868)
3d68f35639 : Adding binary builds to circleci
2fa9264ba1 : Fix lint
cdaeb0db54 : Make SGD match python (#15840)
628bf5e3c9 : test_jit.py: Speedup EndToEnd tests by reducing workload size. (#15906)
23e28efed4 : Porting legacy reflection_pad2d to ATen
5f1dd9e743 : Fix log_prob for Gumbel distribution (#15878)
4caca2f062 : Tensor method rename sizes().size() -> dim()
b4c3268b23 : Batched upper triangular, lower triangular (#15257)
5af9aaa5bb : Minor bug fix in dnnlowp (#15841)
159c2f3918 : test_jit.py: Replace direct `exec` invocation with a wrapper. (#15882)
b28738ccb5 : Revert D13468570: [pytorch][PR] Optimize CPU version performance of the nonzero function.
71c6e24373 : Fix several ResourceWarning: unclosed file (#15746)
a1180d8e86 : Fix BuildIndexOp (#15580)
7b9f794580 : Wrap C10 CUDAStream instead of cudaStream_t in THCPStream
0c32e1b43e : use C10_MOBILE/ANDROID/IOS (#15363)
5838b59c5d : Optimize CPU version performance of the nonzero function. (#15190)
0571eaebab : Remove TH binding of newWithStorage as it is not used.
692caa7211 : Revert D13598894: [pytorch][PR] [Caffe2] [ROCm] Use correct workspace alloc call in MIOpen conv operator
14b40c0633 : Revert D13548303: [pytorch][PR] Add support for batch_norm fusion to the JIT
fe15d6a2c2 : Fix macos build (#15873)
f0c2a9a7b6 : Add torch.bincount() test case on sliced tensor (#15835)
961f829067 : deduplicated code in elementwise_op_broadcast_test.py (#15865)
c7ec7cdd46 : Fixed syntax error in doctest (#15646)
ac206a95f5 : crelu mentioned (#15825)
5fe2697655 : Initialize tensor with fp32 in Caffe2Backend.prepare() (#15832)
c93cf89de2 : Fix cuda native loss_ctc for varying input length (#15798)
cb32418669 : Add element-wise multiplication in formulas (#15834)
3f6e58b43b : Typos fixed in CWrapPlugin.get_type_check (#15859)
1d6e818f2c : Move LayerNorm op schema to c10 (#15199)
11708cbd7b : Update flat_hash_map (#15367)
905df3943a : Fix C10_API/C10_EXPORT for op schema registration (#15324)
d562840910 : Use C10Tensor in the dispatcher (#15195)
8ac55a6812 : Convert caffe2/aten Tensors to/from c10
31d7c933af : Implement c10::Tensor (#14819)
828cb18fa3 : Allow ReadyQueue to handle empty tasks (#15791)
8a07cbe5e1 : In loop_wrapper, do not copy the passed-in functor (capture it by reference instead). (#15845)
2b22612289 : Add NHWC support to Resize Operator (#15553)
8a5ba577c1 : Revert "remove use of tmp_install" (#15847)
4f51ca490e : Correcting source pybind11 library to install into Python
acc83ad54e : implement floordiv with correct integer and division by 0 semantics (#15813)
92a2bfe52d : A trivial error message updates on `at::Tensor _convolution` (#15830)
24314e9ceb : Enable torch static build on Windows
196eee6ccd : Fix sum_to behavior with zero dimensions (#15796)
734eb31035 : Cache workspace size in the BenchmarkCache. (#15742)
1bc47c0d86 : Refactors shape logic out of code generation, fixes possible segfault (#15750)
c42def29c8 : Use parallel thrust execution policy on ROCm (#15481)
cc402d8fa1 : Use correct workspace alloc call in MIOpen conv operator (#15712)
532a709771 : Tensor method rename dims()->sizes() - 2/2
ede1f4ad05 : Remove caffe2::ShareData (#15418)
8232bd526f : Move isnan to C++ (#15722)
461dc9a28b : use all_weights instead of _parameters in _flat_weights in rnn (#15766)
8f11147d43 : Use CUDAGuard when serializing CUDA Tensors (#15807)
29a9d6af45 : Stop leaving garbage files after running test_jit.py
5e1b35bf28 : Add support for batch_norm fusion to the JIT (#15146)
c3a0000864 : Support communicating with C2 protobuf in Onnxifi flow (#15472)
4650d70e93 : Add count_include_pad arg for AveragePoolOp on GPU (#15787)
99d2743863 : Move Stream.query() implementation down to C++ (#15737)
55baca57d2 : A trivial error in the error message of `at::Tensor _convolution` fixed (#15772)
770b5ac42b : clean up D13579188 (#15759)
24867a58aa : Add support for exporting onnx split (#15092)
bc328d01e5 : simplify conv dnnlowp ops by not allowing fp32 in/out (#15758)
49ba2cb796 : Enable conv+add fusion, same as conv+sum (#15268)
76feb8c40f : Allow List arguments to Python Ops (#15721)
668678e753 : Bump CircleCI docker version to 278 (#15795)
382807302c : Fix C++ Frontend example in frontend.html (#15717)
321a559359 : Fix restructured text issue in tensor_basics.rst (#15701)
2ebeb33697 : Fallback to CPU concat op to handle TensorCPU inputs (#15263)
c68eb5ec44 : fix conv unit test for groupwise quantization and pre-packing (#15761)
95febdfacc : Add is_floating_point to docs (#15704)
2ff0e3b196 : Pool prim::None nodes (#15745)
3277723173 : Replace some malloc+memset pairs with calloc.
b6a8c45f57 : Removes print statements from test_torch.py (#15747)
04f5605ba1 : Fix several DeprecationWarning: invalid escape sequence (#15733)
2fb2d080d3 : caffe2_benchmark msvc build fix (#15619)
a918f1d9af : Adding a hook (wrapper) for non-std stream reader in PyTorchStreamReader (#15551)
1488c5dd03 : support 0 size in any of the tensor dimensions in mkldnn (#15295)
2d8b332262 : Port replication_pad2d and replication_pad3d to ATen (#15538)
3d44eeec0a : Fix different types in rsub caused bug (#15707)
ae91156e5d : Tensor method rename dims()->sizes() - 1/2
12e6c1ceeb : Automatic update of fbcode/onnx to 8384c788939bc65463f9754b6a7a00b212b18ba1 (#15739)
04bf528589 : remove use of tmp_install
6adbe12c74 : Update CI credentials
43761e01f5 : Temporarily disable all XXXlike operator tests in pytorch-onnx test (#15740)
07c4991622 : Tensor construction codemod - 2/2 (#15600)
b1529eeadb : Print out operator suggestions for unknown builtin op (#15183)
fad8480146 : Updating submodules
9e88547d72 : Tensor construction codemod - 1/2 (#15598)
ad0ef7ae48 : remove dependency to fp32 batch permutation op (#15723)
e313f1a7bf : Cudnn Handle Pool 3: At Wit's End (#15668)
e798a09f6d : Remove TH/THC link for cholesky_solve (#15691)
b740b92f36 : Modify torch.gesv error message (#15654)
069d894145 : make conv_depthwise_dnnlowp_op_test faster (#15725)
2d8f14cd12 : clarified language of doc for torch.mul (#15664)
a923ea7cf0 : disallow nbits_in_non_outlier == 0 in acc16 conv; option to fallback to acc32 (#15708)
bebf1f7463 : Torch tensor (#15224)
1e9a6d7192 : A quick fix for Stream operation errors on non-current device (#15689)
3270e4d4a5 : Break up generated tests (#13992)
dcbc4f32db : flake8 hook fix (#15693)
2403135257 : Prevent VS2017 from emitting ambiguous symbol errors (#15697)
d42e90991b : trace s_copy_ (#15690)
78442f04fc : Add mkldnn conv double backward (#15686)
947229ebd7 : Fix ONNX export of logical ops, including torch.ne, to have correct output datatype (#15677)
279ca4acd2 : Port legacy reflection_pad1d to ATen (#15480)
1159302ab1 : bug fix in 3d group conv (#15625)
6103a04cff : Port torch.arange to aten and parallelize on CPU.
10c10b0990 : Ignore flake8 warning about whitespace before ':' (#15663)
f53010370b : Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue. (#15651)
3b5a940355 : Unify the usage of Dequantize (#15685)
efc3d6b65d : Fix vec256 inversion (#15659)
b0cf780ecc : Add min/max on numbers to JIT
e2549cbc01 : initialize with ident value in global reduction (#15653)
0b0553f92d : Updating submodules
879bccb1af : Support for Jetson Xavier (#15660)
62883a911c : Fixing cuda100 smoke tests
3ea5a9a66d : Remove PythonOp non-CPU path and PytorchOp (#15417)
7857909158 : Updating submodules
d86cc3e7de : fix select after chunk op (#15672)
bb3c3f516b : make flake8 failure blocking (#15675)
c5554856c9 : redo sleef build fix (#15549)
bee6c6761e : format conv_test.py to prepare D13562099 (#15632)
eeb14675f1 : Fix torch.gesv args in doc
b52420742d : clamp fixes (#15479)
8278a8b16f : Updating submodules
2398b607ec : Updating submodules
a0d22b6965 : Fix typo in documentation
7bb41e3953 : Make btriunpack work for high dimensional batches and faster than before (#15286)
56d945a1ca : Add count_include_pad arg for average_pool_op on CPU (#15593)
ef487d4f1d : Remove TH/THC link for cholesky (#15595)
2a45050fdc : Concatenate directly into shared memory when constructing batches for numpy (#14534)
4047cdc690 : Add a patch for OSX with SDK<10.12 (#15615)
d3e5540276 : Fix typo: szie -> size
119efd5266 : Make the warning suppression safer (#15560)
d53012b4fe : add NCHW2NHWC and NHWC2NCHW in utils.py (#15588)
9c8d8eab9d : Remove TH/THC link for gesv (#15510)
cd3c4a2f1c : keep extra_info of each op in ProfDagStats (#15244)
692898fe37 : Error when torch.load-ing a JIT model (#15578)
fb22f76eb6 : default_collate should collate bool list to byte tensors (#14669)
6a3e54eda9 : append caffe2 prefix to dnnlowp cmd line options (#15582)
2c4c8784d2 : adding nightly build smoke tests to circleci
c1643ec551 : add the int support (#15581)
9bf7eb914d : Move VariableImpl functions to AutogradMeta and Variable (#15487)
50fbf79451 : test basic tensor interop
70f0c4745b : Allow int/float cast to bool (#13391)
0fff5b3612 : remove print ops before exporting onnx graph (#15550)
62151aa259 : Added deviceCount() virtual method to DeviceGuardImplInterface (#15574)
02a249ed92 : Port torch.range to aten and parallelize on CPU.
d63740bc3f : Export group norm as ATen and add test (#15569)
e4477feb15 : Update cuda.get/set_rng_state doc (#14324)
9ad6ada9de : Update QNNPACK (#15561)
ed949e20cb : Revert D13552080: [pytorch][PR] add clang-format check to CI
c86cd9e530 : Fix wrong class name in jit _make_fail (#15559)
80cc280c68 : add clang-format check to CI (#15543)
4d029bba7f : Fix github branch prefix v (#15552)
eeaf1b64cb : Rotated boxes support for GPU GenerateProposals op (#15470)
e25702ac2b : CUDA kernel for rotated NMS support, over 200x speedup than CPU (#15365)
7b87ecae37 : Move autograd metadata from VariableImpl to TensorImpl (#13827)
4c5b1cc026 : version bump to 1.1 (#15554)
8c6ff91d57 : In README.md CMAKE_PREFIX_PATH should be CONDA_PREFIX when using a conda virtual environment (#15548)
cdb8edce75 : add from_pretrained method to EmbeddingBag (#15273)
5ac95758e2 : Make argument size checking consistent across CPU and CUDA for torch.gesv (#15430)
f636dc9276 : clang format world (#15524)
d4712ee218 : Added correct isinf handling for Integral tensors (#15489)
d602ddcda3 : Trivial comment update in autograd/function.h (#15529)
6e4be0af2e : Fix failed type cast in Windows Debug Build (#15333)
12e0ed55b4 : Upgrade MKL-DNN to version 0.17 and static build MKL-DNN (#15504)
2fe5c29d81 : remove legacy from docs (#15112)
60b13d1f71 : Use at::zeros instead of torch::zeros in non-differentiable example (#15527)
2ed95c5871 : Fix the compare logic in function `overflows` for MSVC (#15499)
521894c490 : Allow converting char tensor to numpy; add [fi]info.min (#15046)
b7bc49ad70 : Port replication_pad1d to ATen (#15507)
ad6799537e : Support stateful dataset (#15096)
8cd917812b : put interactive prompt in bash (#15521)
f8a56bf476 : Fix the iterator category for torch::data::Iterator (#15500)
c07647814b : Precommit hook: just warn if no clang-tidy (#15514)
4a716250cc : Add torch.rot90 to torch.rst
51f1c4fea5 : fix parallelization detection for CPU foreach_reduced_elt (#15483)
4e4ef0cffb : add rowwise adagrad lp test (#15082)
e012b183dd : handle empty inputs to SparseLengthsMean correctly (#15389)
58a7f2aed1 : Add pthreadpool_create and pthreadpool_destroy (#15492)
90aa21e795 : Metadata for input/output formats in model file proto. (#15252)
f3a588fede : add len to nativeResolver (#15488)
934fc28656 : Remove NoneGenerator
1dcf2ea096 : Add self to Python printer reserved words (#15318)
70aafad08a : AD support for adaptive_avg_pool2d (#15459)
01be9b7292 : Handling nullptr case
235d47760b : Relax check on outputs (#15458)
6bf05bfde6 : allow non-final returns (#15463)
3da4a04733 : Fixed trivial typos in Dropout2D and Dropout3D classes (#15200)
ff8fbc4f23 : Updating submodules
7e2ec24886 : eq_fixes (#15475)
d9cad71b36 : Enable running collect_env.py without building PyTorch (#15468)
ac506f5820 : Back out "[nomnigraph][executor] computeChains with nomnigraph" (#15451)
acbd9c49b0 : Direct FBGEMM integration into ATen (#13777)
614121c1ef : Replace getargspec with getfullargspec (#15396)
2b23ba8ef0 : The benchmark binary support multiple batches in one run (#15443)
433db13b48 : Move torch.logspace to ATen and parallelize on CPU.
61cc701dd7 : Fix cudnn dropout (#15473)
f52f68bcf9 : format specialized_segment_ops_test.py to prepare D13515970 (#15408)
cb79e1b3a5 : Clean up onnxifi transformation code (#15453)
26b04523b1 : Record Caffe2's current stream ID in c10_cuda. (#15174)
3353064060 : Add option to automatically handle unsorted variable-length sequences in RNNs (#15225)
52699f0754 : Change default value of unique to 'sorted=True'
4ee1c2c632 : add denormal options (ftz and daz)
3a6d473b49 : collect_env fix (#15447)
a178f0a316 : Remove unused field in jit script module deserializer (#15439)
8883ac4b58 : Revert D13494873: [pytorch][PR] Fixing ONNX export of logical ops to have correct output datatype
95a0e2c421 : Fix ASAN div by zero error in rotated GenerateProposals op (#15415)
ed5b584f65 : Tensor construction codemod(ResizeLike) - 7/7 (#15087)
d6cbcb43c5 : allow numpy-like boolean-list indexing in pytorch (#14932)
f56217af3b : Doc improvement on DDP (#15440)
cde26c659e : Fix type annotation error. (#15448)
c24a124fa0 : Add launch bounds needed for ROCm 2.0 (#15400)
1a2ec10bd4 : Support enough of closures to write autograd functions (#15411)
3fdf567752 : Adding CUDA version for C2 operators generate proposals and nms (#13694)
a47749cb28 : Add at::one_hot (#15208)
2a64a78e7b : Extract arguments to its own file and pass arguments to ios apps (#15413)
f0f9277c3c : Fixing ONNX export of logical ops to have correct output datatype (#15185)
cb0b096f2b : Miscellaneous small doc fixes (#15373)
cac02034f6 : Extend README for ATen/native/cpu (#15437)
06a7cb5901 : Implementing cuda kernel for tril_indices and triu_indices (#15203)
5c66662e58 : Revert D13498974: [pytorch][PR] [jit] Add self to Python printer reserved words
8db44eda01 : Add support for batched pdist (#12302)
7a764fe270 : multi-dim standard deviation for CUDA. (#14990)
5e624948b6 : Add self to Python printer reserved words (#15318)
eb5d28ecef : Pretty printing of C++ modules (#15326)
2ef0f1222a : Restructuring prof dag counters (#13321)
b89b46abfb : Remove python_default_init from ATen and use Optional (#15234)
3fc889e976 : Tensor construction codemod(ResizeLike) - 1/7 (#15073)
2db742fc95 : Do not use fork to invoke test scripts in pytorch rocm CI
1071e92335 : Replace Vec256<T>::size with constexpr method (#15406)
9abd755a76 : Make cpuinfo logging less verbose (#15405)
88bf683cbc : Support error handling in forked threads (#14523)
5dd5ef3214 : default options for OutputTensorCopyFrom (#15248)
a00cfd1e9b : Fix Module::copy_into
0b219538cf : add unpack_outputs to inlineCallTo
07d20b1e7c : Fix documentation (#15372)
055de167d5 : computeChains with nomnigraph (#15366)
9217bde807 : Refactor dataloader.py (#15331)
41e7e1bc40 : Rename potrs to cholesky_solve (#15334)
33018e4e09 : centralize side effects ops as node method (#15188)
560530aeec : Optional ScalarType support for native functions & JIT (#15154)
54d4fe3f49 : Implement 'to' on ScriptModules (#15340)
1d94a2bee3 : Update cpuinfo submodule (#15385)
cbde820bc3 : Updating submodules
cd8dd49fba : race condition fix of using mutable_data inside OPENMP region for batched matmul (#15371)
6ca1d93473 : add whitelisted clang-format checks (#15254)
122b4ef41d : build fix
0368054a6d : Split up compiler.cpp (#15355)
6ab2e7442d : Autograd using torchscript (#14604)
4928c76415 : Minor clean up for test_jit (#15368)
f3bff2d500 : Add RNNCell modules to Script standard library (#14695)
f3cc9b2218 : Remove fully qualified weak script names (#15364)
096ee8467c : Redefine scheduler to set learning rate using recursive formula (#14010)
5e97720100 : Replace resize_dim() with set_sizes_and_strides() in (#15348)
5667af3880 : Minor cleanup for TestFuser tests (#15134)
3681bf7cff : add dense vector to id_list operator (#15090)
f5da198236 : fix clang-tidy script for python 3
2469f7e02e : Port torch.linspace to ATen and parallelize it on CPU.
3118124cd6 : Add (Un)Fold modules to standard library (#14759)
f4c504593c : Fix the (reduce)min and (reduce)max ONNX exporting (#15241)
056cfaf3ff : Method returns a single argument (#15289)
12cf5178aa : caffe2 mobile opengl (#15322)
54d8ce94ee : Revert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
bb9b7de831 : Updating submodules
3a98462f2c : improve script/no script save error (#15321)
e37a22128e : Allow tracing with fork/wait (#15184)
bd958cde68 : [TensorIterator fixing mean to output correct result for half precisi… (#14878)
71ee882157 : Reenable OpenMP by reverting the following two commits. (#15315)
aec9fdf0a4 : Fix _apply in nn.Module (#15305)
2f38ffbcb3 : Add a correctness check for C++ types to custom operators (#15247)
e650a84872 : caffe2/python/task: added __repr__ methods to all task definitions (#15250)
e0b261a35b : Port nn fold and unfold to c++
c66adfc16b : Allow future type parsing
efb37e86eb : Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
59d71b9664 : Bicubic interpolation for nn.functional.interpolate (#9849)
c5dd91c4ae : add isinstance static type checking for jit (#15076)
216ab259fb : Fix the missing caffe2 proto files for Windows (#15157)
f4c59c5fdf : Replace SwitchToDevice(0) with SwitchToDevice() (#15126)
df4c9471ec : Don't enforce docstrings on bool dispatch (#15306)
95d3fed68f : Fix for issue 14829 (#14908)
e07fc114a0 : Minor fixes in .jenkins/caffe2/bench.sh
700271d0e9 : Adding ONNX export for torch.expand and torch.ne (#15050)
3df79f403e : Tighten up invariants regarding StreamId. (#15125)
1dbc7cff3e : Fix tensor printing bug in Python 2 (#12732)
d71fac20eb : Refactor hotpatch_vars and apply it to libtorch (#14976)
656b565a0f : Trivial comment correction in dataloader (#15276)
c51c825efe : Delete ffi documentation (#15220)
60badccd10 : Fix a typo in the assert
4bcb425490 : fix cholesky call in potrs example (#15215)
2b57bd4107 : value-based mark and sweep DCE (#14910)
df614371c7 : Mention Jacobian-vector product in the doc of torch.autograd (#15197)
5b542a755f : Tensor method rename dims()->sizes() (#15246)
f118568662 : Create parser.cpp (#15238)
e1808be37d : Add several features to converting images to blobs (#15204)
717496e6c1 : Supply static shape info to Reshape when doing onnxGetCompatibility (#15242)
763b9954f3 : FP16MomentumSGDUpdate Op fix and enable for ROCm (#15150)
e596d23137 : Start unittesting our main observer (#15191)
34f1f2208b : Build c10 HIP test
5e09c7bc80 : record unit time in torch.cuda.event (#15221)
054456eb93 : Preserve module hierarchy on traced modules (#15101)
60f02b87be : fix an issue where two rules build the same .py files
bd368b867d : Do not ifdef __launch_bounds__ out for ROCm. (#15228)
dcd1685282 : Revert D13440858: [pytorch][PR] Use a pool of per-thread cudnn handles for each device, updated
9f1d8f2eeb : enabled tests in test_nn, test_cuda and test_sparse (#15232)
e9fb4d1f11 : Fix jit doc codeblocks and tables (#15227)
b316e44a46 : Remove __forceinline__ hipification step. (#15229)
7a61306031 : Enable all clang-tidy performance checks (#15198)
fc2856e9aa : Refactor caffe2 CI scripts and add benchmark scripts
4327a2d70a : Better tests/support for Python/C++ inter-op (#15193)
fb8487d708 : Tensor construction codemod(ResizeLike) - 3/7 (#15122)
78bf1a9065 : Revert D13407930: [pytorch][PR] Support torch.tensor in script
331c4b5b4d : caffe2 - make DataRandomFiller usable in unit tests (#15027)
66b26806fc : caffe2 - easy - utils to set argument of operator (#15022)
9726651d1e : caffe2 - easy - test utils for tensor assertion (#15020)
d0b4ae835d : caffe2 - easy - test utils to compare tensors in two workspaces (#15181)
a0f68646ac : caffe2 - easy - test utils to fill tensors (#15019)
8fedde5530 : caffe2 - easy - test utils to create operator (#15180)
eb6fec3652 : caffe2 - easy - Create test_util to make it easier to write C++ unit tests (#15014)
81644ed9ab : Fix derivative for mvlgamma (#15049)
0b9b965c1a : Fix numpy conversion for int8 tensor
fb140c7828 : add erf and erfc to fuser/autodiff
bb8ee2de0f : Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
070f33f154 : Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
dc72a5e02c : For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem (#15113)
aecab53778 : Support torch.tensor in script (#14913)
bbbfda72a0 : Remove TensorImpl -> Type dependency
1e9c384afb : Enable performance-unnecessary-value-param in .clang-tidy (#15026)
bdfff2f8c2 : Add missing caffe2_hip extension in setup.py
de0784510d : Remove disabled_features in hipify
855d9e1f19 : Run ONNX cuda backend test cases via ROCm
6911ce19d7 : Remove _finfo; replace _finfo usage with torch.finfo (#15165)
f1f7c16c90 : Tensor construction codemod(ResizeLike) - 4/7 (#15088)
cbd1c519c4 : Replace non-printable-ascii characters in ProtoDebugString (#14918)
994f72ee3e : Tensor construction codemod(ResizeLike) - 6/7 (#15137)
43c0b50c2e : Tensor construction codemod(ResizeLike) - 5/7 (#15084)
86fbf17ba6 : Use std::vector instead of alloca to work around hcc crash
f61612206c : Fix old tensor OutputTensorCopyFrom usage in ImageInput operator (#15094)
e5bd6fe86d : Kill non-forward, non-backward functions generated from nn.yaml (#15127)
bc80deea1b : Delete defunct USE_SIMPLE_BASE_CTOR_DTOR (#15144)
e51092a2b8 : Fix typo (#15045)
ca4358c8f5 : Use a pool of per-thread cudnn handles for each device, updated (#15080)
214f46faf5 : Fix bincount for non-contiguous inputs on CPU (#15109)
bf7a2b9125 : Unify SparseTensorImpl::size_ and TensorImpl::sizes_
0bf1383f0a : Python <-> C++ Frontend inter-op (#13481)
b14d6d730a : Reuse KernelSpec for FusionGroups with equivalent graphs (#14541)
aa022313cb : Removes THCNumerics usages in RNN.cu (#15085)
1e0eab5df8 : minimize header file includes from _avx2.cc (#14950)
4b97a46421 : Disable strict-overflow flag to avoid compilation error (#14977)
1e93317b99 : Remove "early-release beta" disclaimer from README (#15136)
fabd23cb2d : support casting to string (#15110)
1717ea1da0 : Implementation of ChannelShuffle Op for MKLDNN (#15106)
895cb8fcea : Fix resize for edge case tensors (#14874)
78a77667dd : Autoformat build_variables.py (#15152)
fab78827d6 : don't compile dnnlowp.cc in avx2 option (#15147)
d8260239a0 : docs: minor spelling tweaks
2211a283d2 : Export defs.bzl to open source for pytorch (#15132)
107c9ef518 : Add back c2 string_utils include header to benchmark_helper
6610ace28b : use ROCm 1.9.2 fp16 capabilities in rocBLAS and MIOpen interfaces (#14994)
f34d827007 : Optimize CPU GenerateProposals op by lazily generating anchors (3-5x faster) (#15103)
90f9e8103c : Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
342e62f1e3 : Minor documentation mistake (#15068)
5837320b70 : Add script standard library documentation + cleanup (#14912)
64b3364209 : Move adaptive avg pooling 2d to ATen native (#14714)
63e77ab6c4 : Move numa.{h, cc} to c10/util (#15024)
b34ab435ef : Stop erroneously running aten::warn (#15124)
2d485ffb17 : Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248)
9943cf2378 : Kill Type.storage. (#15075)
9d2955c39c : fix infinite loop when get_max_threads is nonzero but num_threads is 1
68ad9ae5be : Ensure there aren't variables in checked_tensor_unwrap, checked_tenso… (#15105)
0ad39ec5c1 : Add better support for bools in the graph fuser (#15057)
f36a84b71b : fix some tests that I accidentally disabled (#15077)
3ae684266a : Don't setup x86_64-linux-gnu-gcc as an sccache wrapper. (#15078)
00a4c8d41c : Use c10::to_string that works cross platform (#15117)
1423c0d9f1 : Add EmptyNameScope to allow you jump out from current scope. (#14631)
479481b6cb : Remove linker and dlopen flags that allowed undefined symbols in rocm build (#15091)
0dade9862c : Fix serialization (#15033)
e20f9bbead : Update the output format for benchmark_helper. It outputs the dimensi… (#15108)
b07ee44f40 : Pre-commit flake8/clang-tidy (#15102)
f8455ed754 : add gloo support for gather on GPU (#14916)
3fa53da61a : Fix include paths for UndefinedTensorImpl.h
63db95dd11 : Move UndefinedTensorImpl to c10 (meh) (#14817)
2dfdbef91d : Fix include paths for TensorImpl.h
9e9e87c19e : Move TensorImpl to c10 (yay!)
bff6d42cef : Add at::scalar_tensor factory function, use it instead of Type.scalar… (#15074)
b710642969 : Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
5c2c40ad87 : Add error type to raise statement
73ee7fda4c : Remove deprecated variable_tensor_functions (#15003)
0552326846 : add gloo scatter support on GPU (#14917)
92314c83fa : re-enable copy of python files, but be careful that the copy is only … (#14982)
71e0cb505c : Split off fuser tests in test_jit.py to their own test case (#15072)
7408ce2f80 : Suppress warnings on generated tests

04b65dfd1f : Issue 14984: Remove divide by zero error in index_put_ (#14986)
109c8d22dc : Update onnx coverage script for more accurate result (#15029)
f2f47de5ad : tox.ini -> .flake8 (#15065)
ca7f8fed60 : silence unreachable code warnings (#15036)
d825b39061 : improve deep equality check in alias annotation test (#15031)
02d149b767 : Fix race condition in ThreadPool::workOnTasksUntilCompleted (#14833)
c2a754c58b : Fix CMakeLists.txt for Int8 python bindings (#15047)
687834dcb4 : Install cpp tests when built (#15000)
5d3a347685 : Stashing checkpointing RNG states based on devices of arg tensors (#14518)
25ddd659c9 : Updating submodules
bf1d411dbf : Switch Int8Softmax, Int8Relu, and Int8LeakyRelu to QNNPACK (#14933)
a1ea7dbe40 : Adjust the API call to deserialize the tensorproto (#14132)
27d5ae7afb : use datatype dependent tolerance in data parallel tests
81dc78d871 : Update pooling.py (#14998)
48a361cc62 : Clean up casting ops (#14947)
cff509e2b1 : share code between adagrad and rowwise adagrad tests (#14692)
c48b15e41a : TBB task graph (#15041)
45dfc6764e : Enable more caffe2 fp16 rocm tests (#15040)
5022f9d6ef : Enable the build of tests in ATen/core (#15032)
962b82dd81 : More scaffolding for LegacyTHDispatch. (#14852)
e9cd781681 : Back out "Revert D13043261: [caffe2] Task graph and task future abstractions in executor"
83f32eebd9 : Tensor construction codemod - 2/3 (#14836)
5222a1b190 : Fixing reading of FBGEMM from env variables
a97cf568a4 : Alignas Array struct (#14920)
7e2b074219 : Integrate rocBLAS fp16 api into Caffe2 (#14882)
92f3616f36 : Fix old tensor CopyFrom usage in boolean mask operator
4fcc2fffc3 : unit test with multiple omp threads (#14958)
9b272c08cf : Remove partially initialized Tensor in Deserialization (#14197)
4a145cd95c : Revert D13043261: [caffe2] Task graph and task future abstractions in executor
0a36fe565d : apply() for ScriptModules (#14655)
9bbb3efe2f : Simplify THPPointer implementation for Storage. (#14897)
23cc3daabd : Disable getNumGPUs rewrite (#14993)
6ad9f7b798 : Fix include path for WrapDimMinimal.h
279ec9ef7a : Move WrapDimMinimal to c10
66315ab323 : Stop disabling maybeOverlappingIndices (#14999)
483ba553bd : add gloo allgather support on GPU (#14576)
029600813e : Task graph and task future abstractions in executor
a51fe386c8 : caffe2/caffe2/contrib/script (#15007)
25144c8a09 : s/Torch Script/TorchScript/g (#15011)
110ccbb689 : Improve the docs of interpolate(align_corners=) (#14806)
e77de07448 : Improve build time of register_symbols.cpp without compiler hacks (#14911)
18c93b87c2 : Delete defunct THP_API.h header. (#14899)
1989157eb6 : Disable test_leaf_variable_sharing on ASAN runs
d30b6bf3b6 : Revert D13306052: [pytorch][PR] Allow converting CharTensor to np arrays
dc1e6d0b98 : Non-INTERFACE AT_LINK_STYLE is dead code (#14822)
54d5c53826 : Support torch.load with encoding (#14743)
9b2bd284b3 : Convert int8 numpy array to CharTensor (#14700)
e1b5dbf699 : Allow converting CharTensor to np arrays (#14710)
b039a715ce : pre-pack operation of dnnlowp conv with 16-bit accumulation (#14881)
e747acbebb : Respect -q of setup.py (#14972)
fab8085111 : _get_device_index supports parsing device strings
5fd69e7551 : remove mingfeima mkldnn reference from README, as no longer necessary (#14975)
aefc83f46d : fixing some rebuild issues (#14969)
fc30e2782c : Remove deprecated info argument in btrifact (#14935)
86e03b8a30 : add fix for CUDA 10 (#14971)
5f2736b84a : Fix mismatched test_{full,ones,zeros}_like onnx expect files (#14956)
a1494efdfa : fix auto grad summing for IfOp where intermediate output needs renaming (#14772)
fa12e1e4d4 : Export ones_like, zeros_like and full_like using ONNX ConstantLike op. (#14903)
517c7c9861 : Canonicalize all includes in PyTorch. (#14849)
a7b3197b2d : race condition fix of calling mutable_data inside a openmp region (#14921)
e9db9595d2 : Add crop argument, can crop rec as well, first resize and then crop
b0909ea6a0 : Switch Int8Sigmoid to QNNPACK (#14883)
5e06fa0baf : ONNX changes to use int32_t (instead of enum) to store data type
c8a5ec14dd : Remove at references from c10
25110d61fb : Implement `std` for multiple dimensions on CPU devices. (#14535)
c2a75926ca : Add CAFFE2_API to video processing functions (#14900)
52942e1f09 : Enable unit tests known to work on ROCm (#14011)
5be28ade66 : Automatic update of fbcode/onnx to aca8473a40cf43f01958c81b648efcee7f3a755a (#14865)
11a9248d01 : Enable fp16 for MIOPEN operators in Caffe2 (#14905)
70598740ec : Upgrade MKL-DNN to version 0.17 (#14308)
478eb70c07 : Fix build with OpenCV 4.0 (#14356)
4453a1ff88 : Remove unused TensorImpl dependencies
65aa11a876 : Remove TensorImpl -> context_base dependency (#14658)
086a37876b : Fix include paths for TensorOptions
459aac4f24 : Update graph printouts in JIT docs (#14914)
5734e96775 : Improve hub documentation (#14862)
65da7ddad6 : USE_FBGEMM=True by default
a0ee3a279c : USE_TENSORRT support and TensorRT 5 compatibility
febc7ff99f : Add __init__.py so files get picked up on install (#14898)
efc5e9f71a : Replace calls of Type::_th_tensor. (#14877)
d6c53328f9 : Large scale fix of python-related files in torch/csrc/
939877bf4b : Implementation of WeightedSum op for mkl-dnn and fix FC op output shape issue.
265b55d028 : Revert D13205604: Move numa.{h, cc} to c10/util
1c9df7facf : Expose torch.roll function and method (#14880)
6651fae827 : Make autograd engine compatible with hip
6e453e56f9 : Fixed ConvT docstring (#14876)
51d26e76f7 : Updating submodules
4655b7bc4b : Remove weak module test expect files (#14871)
1a247f872f : gradcheck (#14596)
bfa666eb0d : Skipping two c10d tests only if there are multi-GPUs (#14860)
ada8f828f9 : Move TensorOptions, DefaultTensorOptions to c10
bd3eb87258 : Switch Int8MaxPool operator to QNNPACK (#14832)
e6a420114f : collect_env.py: get conda magma and mkl information (#14854)
ddca0442b6 : Add LogSigmoid support in ONNX symbolic (#14830)
5f0bff9639 : Kill GPU memory logs in normal runs (#14838)
f82f4de229 : Stop inserting static casts in Hipify (#14853)
b5db6ac9f1 : Tensor construction codemod - 3/3 (#14835)
20d1bff292 : Tensor construction codemod - 1/3 (#14828)
1d111853ae : Move numa.{h, cc} to c10/util (#14393)
75a2d8e2de : Upgrade CI to ROCm 1.9.2 (#14216)
1c8d41a08d : Allow linspace and logspace with steps=1 and start != end like numpy (#14748)
d2fdc33411 : (#14580)
eb3cabffd6 : Consistent formatting in losses' docs
2e7cc86a62 : Add (partial) autodiff support for nll_loss (#14305)
e7bd8457a6 : Updating submodules
6039c7611f : Updating submodules
12addc64a6 : Fixed MIOpen RNN Segfault issue and enabled RNN test (#14810)
39d50ef4f6 : Export complete subgraph io info when calling onnxGetBackendCompatibility (#14827)
ba287eebca : Fix clip gradient with empty input (#14709)
997df9a6ec : Remove protobuf dependency in pytorch cmake file. (#14182)
3799d32b7b : Optimize images (#14084)
e27d77815d : Prevent `profile_observer_test` from being run by CPU test (#14168)
14fb651b5f : CAFFE2_INCLUDE_DIRS points to invalid path (#14306)
5e307bd1be : use "Extension" instead of the unimported "setuptools.Extension" (#14475)
d393dd0744 : generate ATen core files with LF. (#14667)
2d60afbc90 : Remove outdated css file and refs in cpp conf.py (#14779)
82903dda9b : Fixes for some Windows compiler warnings (#14490)
a6399121da : Shut up "address will always evaluate to 'true'" warnings (#14774)
f9446e0c94 : HIPify less files in PyTorch (#14804)
ba0ebe33c1 : Unify device argument parsing between torch and c10
252e9058d4 : Improve assertion failure message (#14813)
83ad52634a : Add FunctionSchema based Operator Registry (#13789)
67dcf10631 : Increase test timeout (#14814)
c02b3e7cea : Retry test on address already in use error (#14815)
6fccca4278 : improve ONNX tests on torch.Linear
524574ab73 : Define THPStorage struct only once (rather than N times) (#14802)
ca6311d909 : File name change for FbgemmI8Depthwise.h and FbgemmI8Depthwise.cc (#14725)
e114527d19 : Add torch.nn.RReLU support in symbolic (#14781)
50936cb06e : Move avx2 specific code in different source files (#28)
55092b1cc6 : Validate matching input shapes in Int8Add operator (#14520)
1c2273c8e9 : fix stft arg types
999690ff3d : Improve HIPify performance (#14803)
be47470c91 : Fix cuda multiprocessing cached memory (#14736)
3ae721d350 : Set and get default dtype (#13748)
90b1196ac4 : Switch Int8AveragePool operator to QNNPACK (#14783)
e1eb32d9f1 : Update magma to 2.4.0 for Windows
62f4db6d8a : Unify build_caffe2_amd.py and build_pytorch_amd.py (#14769)
dbf6d12776 : Default pool() option (#14636)
2d958b7f77 : Storage.clone maintains original device (#14751)
a80a46a6d0 : Updating submodules
0b1b72e975 : Updating submodules
0573ef664e : include avx512vl to avx512 code path (#14733)
f89de64796 : Use AT_WARN for warnings in the JIT (#14770)
ecc17fe3dd : Add output info when doing onnxGetBackendCompatibility (#14784)
c79e305add : Don't DCE PythonOp
8dfebc16cc : Improvements for symbolic AD (#14758)
38eb1beff5 : Revert D13289919: [pytorch][PR] [DataLoader] Refactor dataloader.py
78a9e7d83f : Delete defunct files from torch/csrc/distributed (#14785)
d76e411d8c : support conv transpose in script
2d3cf98b49 : Making dist.get_default_group private for PT1 release (#14767)
33ea7eafef : Make checkpoint_sequential work with multiple arguments (#14278)
3237103624 : Automatic update of fbcode/onnx to 42804705bdbf179d1a98394008417e1392013547 (#14777)
a66669a110 : Enable testing on Loss modules (#14778)
d872af9282 : Add tests for dropout/batchnorm train/eval, remove training constants (#14780)
86b4dd8bb2 : Split LegacyDeviceTypeInit from LegacyTypeDispatch. (#14723)
f6f24cf0f4 : don't allow cse to clean up nondeterministic nodes
d76fd43294 : Reenable all forward-pass fusions that worked before the AD fix (#14558)
c3bfa0e52b : BatchNorm support not tracking stats
c21f090ab4 : Minor doc change in c10/Device.h (#14762)
9e1f4ba124 : Introduce LegacyTHDispatcher for dispatching to TH functions. (#14754)
53a9d4f312 : disable batch mm if we have mutable ops (#14771)
5ed9dfad98 : Replace at::Half non-vectorized conversions with implementations from FP16 (#14411)
2d56df7892 : Use .to to convert new tensors in new_tensor (#14097)
c7c5eed686 : Export generator constructor (#14041)
374b797569 : c10d doesn't work with torch namespace (#14042)
3aba2d99e1 : Add resnet test, convert more modules (#14437)
25c9a8b1fc : Add missing test skip
875be849e9 : Rename _local_scalar to item() (#13676)
e829a52977 : Remove use of hipify_caffe2, in favor of file path test. (#14757)
a597c0ca05 : Add inplace FeedTensor for python frontend (#14512)
ba70cf22fa : Loss (#14720)
ef91cfd68b : Add new reduction mode in kl_div (#14457)
773f4d8081 : Implements Gather operator for arbitrary axis, sharing the code with BatchGather. (#13756)
16558a1e9d : Refactor dataloader.py (#14668)
7e4a5b89fe : Back out "Move TensorOptions, DefaultTensorOptions and OptionsGuard to c10" (#14745)
ff7deb95d7 : Back out "Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard" (#14744)
7bc489c827 : Disable randn_like fusion in the JIT (#14752)
86ffc2a5f1 : fix import failure in hub test (#14742)
9e58c4ef91 : Revert D13304654: [pytorch][PR] Introduce LegacyTHDispatcher for dispatching to TH functions.
264111bfc1 : Introduce LegacyTHDispatcher for dispatching to TH functions. (#14708)
33b1f9f71a : add .code property to ScriptModule (#14735)
1921816f85 : Fix clamp when min/max are both None (#14716)
6e0c5a8a4e : Restore device in cpp API (#14711)
cbd805169f : move structs to header file (#14728)
c7f93668dc : improve the restore device test, and relax the assertion (#14734)
8812a5d42e : Reduce broadcasted inputs in derivative code (#14485)
862b8cae51 : interpolate (#14123)
a23863fd6f : Add Pooling modules to Script (#14527)
d429e78a9a : Add fractional_max_pool2d to standard lib
e8e494caf8 : Add GroupNorm to standard library (#14722)
95e5a5ae0c : basic testing of builtin alias annotations (#14588)
9fbc2d3153 : Remove TensorImpl -> LegacyTypeDispatch dependency
d063c9c330 : Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard
46772dba0c : Move TensorOptions, DefaultTensorOptions and OptionsGuard to c10
1098500e9b : Fix include paths for Layout.h
771eebad7b : Move Layout to c10
5a4082612f : Fix include paths for Backend.h
c303fcb9cb : Moved Backend to c10 (#14642)
119f9ec291 : enable NoneValue parameter assignment for WeakScriptModule (#14715)
bb546b2e5b : WAR for self.training (#14719)
9a932b8b90 : fix expect
44894915d6 : Automatic update of fbcode/onnx to 6b34743d2e361bbc0acb29dd73536478cb92562e (#14637)
7b6c6f76f7 : Skip CUDA tests when built with CUDA but no GPUs available; rename cuda tests so they're obvious.
22ab6183c5 : Move manual_seed into ATen/Context.h; delete reimplementation in test_seed.h (#14625)
78d594f46c : Implement Device as a type in the script (#14666)
4b31572375 : Meta programming on If Stmt cond to enable conditional emit blocks (#14533)
298b775577 : Delete temporary ATenCoreTest. (#14622)
9ac845f734 : Revert D13280899: [pytorch][PR] Reduce broadcasted inputs in derivative code
e0f68671bd : Restore device when import jit script module (#14454)
b8da44dc13 : Add linear + pixelshuffle modules to standard lib
68ffe46991 : Reduce broadcasted inputs in derivative code (#14485)
b768db0810 : Allow DCE to clean up some mutable ops (#14601)
9783ce3825 : Revert D13272203: [pytorch][PR] [jit] Meta programming on If Stmt cond to enable conditional emit blocks
6385d00185 : Move global-constructor to lazily initialized (mobile restriction) (#14650)
5a2f5a216f : Make convertable to list also accepts optional
b5181ba1df : add avx512 option (but no avx512 kernel yet) (#14664)
4b90702037 : Meta programming on If Stmt cond to enable conditional emit blocks (#14533)
cac03280f9 : Fixed DistributedDataParallel state pickling for multi-gpus (#14690)
18eaec7121 : Add (unused) HIP API to the Context object. (#14623)
b1faab3d8f : Replace THCState_getCurrentStream with direct at::cuda::getCurrentCUDAStream()
a49bf21d50 : Delete hasCuDNN from Context. (#14499)
eb71df3e63 : Delete at::current_device(), Context::current_device() and Context::getNumGPUs() (#14414)
5ee8312b63 : sparse.mm(), reland #14526 (#14661)
7da2448d62 : Fix multi-argument allreduce in ProcessGroupGloo (#14688)
b15242f70c : Assert all legacy operators are 'extended_method', remove codegen for… (#14649)
737efa78ba : Remove 'type_method_inline_definitions' which isn't used.
b96e6ee98d : Delete defunct DynamicCUDAInterface (#14621)
af95f712b0 : Get rid of deprecated_factory_method in codegen, which is no longer u… (#14641)
5c89190340 : inline adagrad functions (#14194)
74c3cbc013 : Increase test barrier timeout for barrier test (#14689)
5268dd468c : Fixed DistributedDataParallel cannot kick off all-reduce in a corner case (#14675)
35c8f93fd2 : Fix CUDA 8 build on Windows (#14665)
da2c3afa47 : Fixed typo in README.md (#14346)
4c11dee0e8 : Use Type::str() in Type::operator<< (#14657)
143e171cb9 : Updating submodules
170ff7764f : Use a zip archive as our container format (#14521)
1c21dc6e16 : Revert D13252990: [pytorch][PR] [sparse] sparse.mm(S, D)
c71edcc747 : Tensor construction codemod - caffe2/caffe2/fb/operators - 2/3
fd17fd4aa9 : Fix 'unknown type name 'optional'' (#14383)
7f42d1c98a : fix double precision cast from pybind (#14417)
404ad939e5 : Revert existing no_grad_embedding_renorm_ from aten (#14639)
aeb38cfcea : cuda implementation for PackSegment to support presence mask (#14635)
1d464d7f3e : Updating submodules
26f3fb34a1 : Build distributed libs in build_libtorch.py (#14037)
36c5f40ec0 : Remove methods from _th_triu_ and _th_addcmul_. (#14624)
c3a2b1e155 : sparse.mm(S, D) (#14526)
a84e873bb1 : Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)
5c1692840e : Remove OptionsGuard from ATen (#14524)
4b915260c7 : Explicitly ban uninitialized tensors when invoking Predictor classes (#14377)
738fc7054b : Report timer in benchmarking when requested
f45405bf5b : Fix inheritance for SharedDataset (#14629)
814b5715ba : Move module tests to common_nn (#14578)
c042f69dbb : Updating submodules
5ae0ed8552 : Remove default constructor lines that do nothing, and fix warnings with clang trunk (#14300)
c03851e93a : remove copy_wrapper (#13937)
5c65a7812e : Move non_blocking copies to aten (#13866)
e3840419ec : Move cuda copy to aten (#13348)
0786dfee7c : Move THTensor_(copy) to aten (#13603)
c1c841a4e7 : Changes based on @gchanan's review of #13420 (#14441)
edb3ddf1a5 : Accumulate grad fix (#14587)
67308a9323 : Fix expanded mvn and lowrankmvn (#14557)
2e0f3b038c : Tensor construction: combine Resize+mutable_data - 2/4 (#14205)
f6354d903a : Unit tests need better compilation flow (#14547)
aa842fe101 : clean up linkage options (#14609)
ad1b874a36 : set mkl_set_dynamic to false (#13868)
37627a182b : fix USE_SYSTEM_NCCL build (#14606)
ff91de43de : Set output of aten::mm to have the same output type as the original node after op canonicalization. (#14602)
89c3dbcad8 : Add binary cross entropy to standard lib
1f6d9f44fc : Add InstanceNorm, Distance modules to Script
3648c269e9 : Misc distributed documentation updates (#14605)
11ef5191ff : Enable tests for CPU tensors in test_distributed.py (#14572)
1975917d0e : fix copy_ (#14593)
220ce8046e : Binding for prctl(PR_SET_PDEATHSIG) (#14491)
9127ab3866 : Fixed new_group won't work for two or more different rank groups (#14529)
e227aa9e2e : Updating submodules
67e3905bc6 : Revert D13268293: [pytorch][PR] [jit] Add InstanceNorm, Distance modules to Script
0d3cb91d8c : Make env init_method support both env and args for rank and size (#14494)
1a9602d5db : Delete caffe2_cuda_full_device_control (#14283)
8617b780cf : Replace use of 'int' with more descriptive 'DeviceIndex' or 'StreamId'. (#14282)
fd31eae9ad : Switch import/export to python printing (#14400)
2b7345bcd5 : PT1 distributed doc update (#14530)
75eccffdfe : Add InstanceNorm, Distance modules to Script
15e8bb379e : Add `List` to annotations (#14482)
2752ad8045 : Automatic update of fbcode/onnx to f461f7aad9987635b4aff108620ed7918f002d19 (#14568)
dc7498c84d : add gloo support for reduce on GPU (#14443)
69d3c00ae1 : Expunge use of type() from SparseTensor.
c7f828809b : Expunge occurrences of type() from scalar_test (#14545)
9aea856115 : Expunge use of type() in Distributions.cpp (#14544)
7879c979b5 : Expunge uses of type() from EmbeddingBag. (#14543)
6fe1867c23 : Expunge direct device index handling from tensor_conversion_dispatch (#14421)
5805ef5a83 : call raw_mutable_data when data type didn't match in BlobGetMutableTensor (#14513)
666d383a00 : Add broadcast list default arg support (#14361)
a2d8e84594 : Added launch bounds in VolumetricConvolution.cu (#14564)
0d663cec30 : Unify cuda and hip device types in Caffe2 python front end (#14221)
bdaa0e38b8 : Fix tautological-compare in aten/src/ATen/native/cuda/SummaryOps.cu (#14540)
eeb0d67b92 : Update to export in onnx_aten_fallback option
2901777a0e : Add back the MAX_JOBS=4 restriction to make rocm CI more stable (#14566)
1b0b2e69f8 : assorted alias analysis fixes (#14556)
31b3d81714 : Broadcast prim::FusedConcat inputs independently when checking kernels (#14503)
cf059028f0 : Do not load ROCm cmake files if USE_ROCM is off (#14261)
fb6806f6e9 : Remove at references in c10 Allocator.h (#14434)
4ec6bd7356 : Add sourceRank() to ProcessGroup::Work (#14453)
7c24a16f82 : Fixed typo for BCEWithLogitLoss doc comments (#14532)
29d697aec4 : typo in Module docstring
44cb43bcc1 : Jaliyae/samplers (#13870)
9e93a02624 : Use nn module tests in test_jit (#14238)
ba25b37e9b : Updating submodules
70e3736e20 : Updating submodules
db15f2e13f : Fix version.groups() (#14505)
6d63e9dbff : Support Embedding + EmbeddingBag in Script + (Ignore flaky test) (#14509)
105fa58748 : pointwise_loss (#14134)
186341c5dc : Merge Caffe2 and PyTorch thread pool definitions (#14114)
533668d7e4 : Ensure that indices are on the same device as self
da9e49e586 : Remove Context dependency from Tensor class (#14269)
0cfbbceac3 : Change Tensor::CopyFrom to a simple double dispatch (#14268)
f80d34a1c8 : Update Tensor doc (#14339)
fb7e40b7eb : nccl fixes (#14195)
ca55c5411f : Clean up house on CUDAStream (#14247)
3aeb288e40 : Make clang-tidy shut up about Python C API macros.
e3711aa93f : Make TensorImpl/StorageImpl safer (#14429)
f6dfd9d545 : Handle copying intrusive_ptr_target correctly (#14428)
5f07b33857 : Revert D13219647: [pytorch][PR] Support Embedding + EmbeddingBag in Script
aec4c19460 : Remove StorageImpl::type() (#14139)
bcd7b03c2a : Add XBlobGetMutableTensor that returns Tensor (#14424)
0f62af4ab1 : Add timeout kwarg to init_process_group (#14435)
7c4aef9dfc : Add support for HIP to DispatchStub. (#14413)
7749804099 : Support Embedding + EmbeddingBag in Script (#14415)
c32debb916 : fix build error from D13188595 (#14481)
a02b3374d4 : Revert D13144472: [fix] condition blob in while_op test changes data type
6039e25e8d : Fix the build issue in setup.py due to cmake version type x.x.x.x vio… (#14331)
8901935ad4 : Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)
302caef154 : Revert D13166626: [pytorch][PR] ignore generated caffe2 docs and virtualenvs
c638f379b3 : Make `mean` function work across multiple dimensions. (#14252)
68251fb931 : Fix half tensor printing plus speedup large tensor printing (#14418)
be7c618fd7 : torch.sparse.sum() (#12430)
a2fcd4dee5 : Ensure FP16 rowwise Adagrad can be run
e8754ee017 : use fbgemm's im2col fusion and thread partitioning (#14350)
a38ed0268e : PT1 Stable Release Distributed Documentation (#14444)
3d98810fbd : Revert D13192230: [pytorch][PR] [jit] Use nn module tests in test_jit
7d07fcd215 : Fixed SyncParam/QueueReduction/SyncReduction test for 2+ GPUs (#14452)
4cdcbbf410 : Use nn module tests in test_jit (#14238)
a0def0b57e : check for invalid ranges in torch.arange
b08a186153 : roll along multiple dimensions
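The entry above extends `torch.roll` to multiple dimensions; the single-dimension semantics it builds on can be sketched in plain Python (a minimal illustration of the behavior, not the tensor implementation):

```python
def roll(seq, shift):
    """Cyclically shift a sequence by `shift` positions, with elements
    wrapping around, as torch.roll does along one dimension."""
    if not seq:
        return seq
    shift %= len(seq)  # normalize negative and oversized shifts
    return seq[-shift:] + seq[:-shift] if shift else list(seq)
```

The multi-dimensional version in the commit applies this rolling independently along each dimension listed in `dims`.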
662f66ebb9 : Add poisson_nll_loss to script
d75f751bec : Add boolean dispatch for function overloading (#14425)
23f901a737 : fix enable_cpu_fuser
82175f31b4 : Move Affine grid to C++ (#14392)
6f2307ba6a : Allow building libraries with setuptools that don't have abi suffix (#14130)
23d111c87f : Fix clang tidy errors
226a01e5a1 : Handling of pretty-printing methods (#14378)
75bac5ab32 : Eliminate necessity of HIPify on AccumulateType.h (#14412)
1620161d6b : when BUILD_CAFFE2_OPS is OFF, torch-python needs a direct dep on nccl (#14430)
006505bb8f : Speed-up "advanced" indexing operations (#13420)
0199d59d3a : Resubmit: Set the correct engine name for position weighted pooling when fp16 is used for training
ae1b37650c : Windows local build: restore original working dir after activating VC environment (#14416)
5c84145354 : condition blob in while_op test changes data type (#14279)
ba6c49cb9c : Add test of ONNX_ATEN (#14259)
e392d428b1 : Allowing TaskGroups to carry remote nets (#14342)
b7856a32f6 : Add scaffolding for HIP backend in ATen/core. (#14285)
1b93cb7631 : Document device_guard in native_functions.yaml (#14235)
1b80644b4d : Revert D13192228: [pytorch][PR] [jit] Add boolean dispatch for function overloading
f9c27d60c3 : Remove fake dependencies from TensorImpl to caffe2 (#14141)
3257ac1ff3 : Fix include paths for TensorTypeId.h and TensorTypeIdRegistration.h
ed10ef97da : Move TensorTypeId to c10/core
6c2e816268 : Fix include paths for Storage.h and StorageImpl.h
3d4d09fe06 : Move Storage and StorageImpl to c10
507ed9032e : Fix include paths for Allocator.h
3a71d5ee49 : Move Allocator.h to c10
0b10f147b6 : Move UniqueVoidPtr to c10
8b1ca2810b : Move ScalarTypeUtils.h to c10
44e21cf5bb : Fix include paths for Scalar.h and ScalarType.h
50e9c56830 : Move Scalar and ScalarType to c10/core
3fca4bde50 : Trace in-place ops (#14254)
ffbc3905a1 : Fixed torch.multiprocessing.spawn for not being able to spawn like dataloader workers (#14391)
5fefb29a53 : Tensor construction: combine Resize+mutable_data - 4/4 (#13856)
e22cc7c072 : Print default values and introduce ir view classes (#14176)
8408dff55a : Add Type support to the fuser, fuse more (#14336)
bd629481fb : Updating submodules
66c8bbf021 : Add boolean dispatch for function overloading (#14081)
2cc35c161a : Barrier synchronizes with prior work before completing (#14386)
9598d380b0 : Make ProcessGroup::Work::wait() throw (#14298)
03864b7b11 : Add option structs and timeout field (#14297)
52f50220d9 : Refer to all work with ProcessGroup prefix (#14296)
5865561a9a : Remove algorithm caching in ProcessGroupGloo (#14295)
936c2bba23 : Use new style barrier support in c10d/gloo (#14294)
50bc9dc9c3 : fix doc for sparse.addmm (#14403)
a3cfab2d63 : per-group and per-channel quantization (#14340)
49fe678fec : Add variable_factories.h to cppdocs (#14381)
c19af59a6e : Use integer math to compute output size of pooling operations (#14405)
c5cc1e3ab2 : Delete legacy THCStream (long live THCStream). (#14246)
388258fb5e : Add hash functions for Stream, CUDAStream; fix Device hash function (#14191)
3ff70712c2 : Implement NaN-propagating max/min on Vec256.
a0ef8afd7e : Updating submodules
f019a2d9b3 : Remove unused executors, part 3 (#14199)
7953b32dc4 : Remove unused executors, part 2 (#14115)
34239006b0 : Remove unused executors, part 1 (#14117)
507cb16583 : Delete OPENMP_STUB translation. (#14286)
12558019a8 : backward for sparse.addmm(D, S, D, alpha, beta) -> D (#13345)
9e1805d38e : Switch Int8ChannelShuffle operator to QNNPACK (#14362)
2d6f039766 : Fixed file init_method write/read race (#14388)
f639249d51 : Fix dataloader iterator test (#14045)
6f3002a50e : Fixed c10d test (#14389)
1ca0ec7299 : fix typo in `torch.sum` documentation (#14250)
cef23a4b1d : More JIT type hierarchy refinement (#14127)
afb2c0ce86 : changing some rpath stuff (#14304)
b18063b39a : Fix caffe2 => onnx exporter for ConvTranspose (#14143)
5918de8e84 : Revert D13166669: [pytorch][PR] Allow dataloader to accept a custom memory pinning function
bb7fb7e45f : remove CAFFE2_API from IdWrapper (#14044)
735cd06536 : FeedTensor returns a Tensor (#14196)
b13f91dbd9 : Allow graph fuser to move chunks past multiple nodes. (#14055)
8cc5d54b66 : Updating submodules
0d1f382e39 : Removing Caffe2-specific conda infra
2fa3c8327c : fix tensor advanced indexing with assignment (#14311)
80ba65e2f5 : remove unnecessary zero_point argument from constructors (#14323)
0651b594d8 : Updating submodules
a10a993872 : Fix -Wreturn-std-move (#14113)
90ed2f5aca : minimize code compiled with avx2 and header includes from them (#14313)
fa73037233 : Add proper from_blob overloads (#13982)
b30c803662 : allow concatenating "hybrid" (sparse/dense) tensors along their dense dimensions (#13761)
a13fd7ec28 : Allow torch.utils.cpp_extension.load to load shared libraries that aren't Python modules (#13941)
a60368982b : Batch more matrix multiplies (#13456)
1ef949036c : Enable native wrappers for the remainder of nn functions.
60e7d04961 : Add Recency Weighted into SparseLookup (#14291)
6e1e2032d3 : quote NUMPY_INCLUDE_DIR (#14341)
33d091f432 : shape analysis fix (#14325)
8e3240d022 : Some minor fixes for Windows build script (#14218)
7557a993ab : Allow dataloader to accept a custom memory pinning function (#14171)
c36156eded : Option to preserve bitwise accuracy of gradient checkpointed vs non-checkpointed dropout (#14253)
1e05f4be73 : Updating submodules
d55b25a633 : Remove individual "using c10:xxx" statements (#13168)
f79fb58744 : Make sure we bind input/output of Onnxifi op positionally (#14214)
7fc34a4122 : Convert gumbel_softmax, lp pooling weak functions and modules (#14232)
08b77d3844 : Use ADL to find toString (#14021)
0e93a03a3a : Fix include paths for intrusive_ptr (#13692)
4160c13cd2 : Move intrusive_ptr to c10/util
e91c8e2f2d : ignore generated caffe2 docs and virtualenvs
3918e226fd : Updating submodules
fb8c3d62fe : removing quantization utility functions moved to fbgemm (#14301)
8c4910b095 : Cuda version comparison with CUDA_VERSION_STRING (#14302)
992e2750fd : Updating submodules
341b48529e : Updating submodules
b26f82b0ec : Robust NCCL barrier improvement to cover all devices combinations (#14271)
b149456645 : alias analysis (#14018)
d55ba77a5d : Remove extra include
85d3fccee7 : Removed redundant allreduce options in DDP (#14208)
d9cdcc9a3b : Add list inequality operator (#14129)
34db39d87a : Add onnxifi support to SparseLengthsWeightedSum (#14210)
60963c2ecb : Add "axis" and "axis_w" arguments in FC to support customized axis to reduce dim. (#12971)
accbcca338 : IDEEP fallback for ResizeNearest op (#14212)
2cacb39a21 : Fix ONNX_ATEN mode (#14239)
fe068d9032 : Bump gloo (#14281)
31ba34b73c : fix comment on dnnlowp op arguments (#14265)
6ce9907d51 : native NN wrappers, including with buffers.
91c0b7159a : Remove header generated at configuration time (#14244)
788d2e87bd : Address jittering issues in python_print (#14064)
af82396f7f : Updating submodules
166ee86b46 : Updating submodules
7a654617eb : Add tensor table in ModelDef and use it for jit script serialization and deserialization (#13861)
17432a1051 : c10d Automatically retry on EINTR (#14180)
bb301a431d : Make NCCL backend support barrier op (#14142)
1acaafbe70 : Fix memory leakage in onnxifi transformer (#14245)
8f20d40bb7 : Allow undefined tensors as constants (#14120)
d6bfc53b9e : Export BatchNorm functional and module, add necessary JIT support (#14016)
1f871f126f : Have PYTORCH_FUSION_DEBUG print C kernel source (#14213)
1224ef9ea1 : Delete backwards compatibility StorageImpl.h and TensorImpl.h (#14230)
9a281451ed : remove unused parameters from caffe2_dnnlowp_utils.cc (#14164)
3c2462cf24 : use pragma once (#14163)
4224ce10a8 : format python files (#14161)
3c0ce51484 : clang-format (#14160)
acd7811e33 : Add sigmoid op based on MKL-DNN
c96b72d61f : OSS build fix (#14192)
6dacc20073 : Make EncodeMethod in jit script serialization return a string (#14167)
a036f9a65f : Create README.md of caffe2/quantization/server
6dc28e666c : CircleCI: fix NCCL install (#14172)
03a02b6fd5 : Fix a bug in test case of onnx::If
b807970aea : Tensor type checking and informative error messages for torch.distributed (#14204)
7d1db89ef9 : Move stream functions from CUDAContext to CUDAStream (#14110)
50b914aeeb : Move CUDAStreamInternals inside detail namespace. (#14109)
e58bbbac18 : Delete dependencies from CUDAStream; remove synchronize_with (#13920)
a20c7ce848 : Fix race in AtomicFetchAdd. (#13479)
1a29950478 : Remove API macros from intrusive_ptr (#14137)
1c2ed4eb23 : Tensor construction: combine Resize+mutable_data - 1/4 (#13942)
8aa5174106 : Tensor construction: combine Resize+mutable_data - 3/4 (#13944)
f34c848f52 : Store the optimize flag in module (#14166)
7fd1ea6ab7 : Cleanup caffe2 hipify exclude patterns (#14198)
b6edd7bbb4 : Support 'python_module' of 'nn' in native functions. (#14126)
1e73ab25f5 : Use onnx proto_utils to support using protobuf-lite
6b4852213d : Use fbgemm revision file added by shipit (#14105)
b6290531aa : Setup sccache for PyTorch ROCm CI (#14153)
e387d945c2 : allow empty index for scatter_* methods (#14077)
751b5ea941 : use at::Device throughout JIT (#14181)
fc61f1a1d1 : Support named return arguments in native_functions. (#14100)
ce85150cb4 : Split out CUDAMultiStreamGuard from CUDAGuard (#13912)
48099c23b4 : Move AT_CUDA_CHECK to c10
928687bb24 : Add c10 cuda library. (#13900)
2681852438 : Switch Int8Add operator to QNNPACK (#14089)
92dbd0219f : No more -werror for c10d (#14155)
55b25365e9 : Add ultra low precision options (#14133)
ef3d7963d8 : Adds symbolic diff for THNN Conv2d and aten native BatchNorm (#13888)
07a8a730af : Print warning when ROCm memory leaking is detected in pytorch tests (#14151)
a5891e6124 : Remove debugging code in test_cholesky_batched (#14156)
1bafa6236f : Back out "[reland][codemod][caffe2] Tensor construction: combine Resize+mutable_data - 2/4" (#14154)
12bb4742ad : CostInference for 1D conv (#14009)
a30ade1139 : Batched cholesky decomposition (#14017)
390bf1e779 : remove unnecessary file from avx2 list (#14012)
505dedf6ad : Change from using enum to int to store data_type
4f0434d5ab : Revert "CircleCI: fix NCCL install (#14124)" (#14146)
fade36668a : Revert D10428917: [Caffe2] Add cost into profile observer
a43037fa11 : Revert D10439558: Add cost for non-linear ops
afc91e4900 : Update FXdiv submodule (#14128)
6d9a7d0e60 : Rename neon2sse.h to NEON_2_SSE.h to match upstream repo
351478439f : Disable QNNPACK for multi-architecture iOS builds (#14125)
d56b2258f4 : Register caffe2 layer norm with c10 dispatcher (#13693)
c905a81c92 : Add c10/core/ to cmake build (#14111)
bb404e7a32 : Update atol scale in dnnlowp test (#14135)
c784f847de : fix sparse_adagrad param_size overflow error (#14049)
cbc94894fb : Add cost for non-linear ops (#13327)
86dc3ab252 : Add cost into profile observer (#12793)
a1fa9d8cf9 : CircleCI: fix NCCL install (#14124)
eeb3e67eeb : Fixed MPI build with higher version of GCC (#14122)
778e23606b : multiprocessing.spawn python version check (#14039)
ce6192a21f : Don't python bind _thnn_ functions. (#14101)
55e1b1ec3e : Fix docs/cpp/requirements.txt (#14121)
8610ff1072 : Allow cooperative structured objects to be passed modules in tracing (#13961)
fb6535ec70 : Add SharedDataset (#13800)
96e5d23bad : remove dynamic initialization warning (#13913) (#13967)
5b1b8682a3 : Missing .decode() after check_output in cpp_extensions (#13935)
8e91da4cb3 : Windows shared build (#13550)
2c21de2007 : Make JOIN_TIMEOUT longer for ppc64le (#14107)
c192788188 : Log error from the net's run (#14035)
0d7a986da1 : Change hip filename extension to .hip (#14036)
30018fcd0b : Enable Caffe2 ROCm test on centos (#14090)
5a53861d3a : Enable Caffe2 test on centos (#14091)
1256cbaa69 : Relax limits for gradients in test_jit's checkGraph (#14094)
2983998bb3 : add torch-python target (#12742)
cb86ae304e : alias annotation parsing #2 (#14053)
77c2f4d0d7 : Make THPDtype_New error instead of truncate (#14103)
7c053b7e64 : Add filler for SparseLengthsWeightedSum (#13949)
3c7b575a14 : Update ATen doc with optional syntax (#14086)
562f61a662 : Add missing space in stft doc
e4bb56570c : Preemptively test for out-of-order length. (#13933)
c7a247facf : nomnigraph - support subgraph visualization (#13795)
d7b95dda51 : nomnigraph - easy - expose hasProduce(NodeRef) to python (#14075)
e7f5fceb99 : nomnigraph - easy - expose inducesEdges and addNode to python's NNSubgraph (#14074)
7b0f674367 : Two small improvements to TorchConfig.cmake (#13849)
1b1cdd944c : Keep `ModuleList` consistent with python `list` in `__setitem__` function. (#13102)
a3f39f1ebb : Fix randint docs (#14083)
2fe4711eb4 : Revert "Remove OptionsGuard from ATen (#13738)" (#14082)
45fd77d3b7 : Adding GLOO_SOCKET_IFNAME env to allow user set gloo device (#14065)
3808e9fad3 : Caffe2: Fix for creating entries of external_input in predict_net (#12979)
1e8aeb0bee : fix lint
3a15de9e44 : Fix CUDA_tensor_apply1 base case (#14056)
037d6b697b : Add ResizeNearest DNNLOWP op (#13940)
f66cb02016 : Turn fbgemm off by default for pytorch (#14048)
f17b2fdf1b : Fixed THD DistributedDataParallel not picklable (#14051)
37cb357d8d : Remove OptionsGuard from ATen (#13738)
8f4dc192b6 : Fix DataLoaderTest.EnforcesOrderingAmongThreadsWhenConfigured (#14038)
f930c4307c : Clean up executor's execution flags (#13869)
874a8a321b : Fix out of order member field initializations (#14015)
31d41a983a : Revert D13088038: [pytorch][PR] [jit] extend alias annotations
6d378d3740 : Updating C++ documentation to PyTorch theme. (#13791)
0d29846d5e : Convert more weak functions (#14003)
c5afad5579 : Fix skip logic in caffe_translator_test.py (#13627)
0e93500841 : Remove async_polling (#13825)
0573169e23 : Import a method from an python_print string (#13959)
84d464f8f9 : Revert "Upgrade mkldnn bridge to reduce overhead of bridge itself (#1… (#14040)
90b0c4f43d : Tensor construction: combine Resize+mutable_data - 2/4 (#13943)
136f5c9fe1 : Replaced using declaration with explicit constructors 3/3 (#13875)
3fbb753512 : Revert D12873145: [pt1][tensor][refactor] FeedTensor returns a Tensor
d91c686c33 : extend alias annotations (#13632)
c7e0db140e : use fabs instead of absf in fuser code for aten::abs (#13985)
c3578b561c : Skip all builtin functions when importing names from _C._VariableFunctions to torch (#13884)
4b7c6150d8 : Upgrade mkldnn bridge to reduce overhead of bridge itself (#12164)
3de0fd846f : Fix converter to accept const NetDef&
5639332a28 : fix the deeptext issue (#14005)
b8de8f6261 : Refactor tensor construction in onnxifi_op
464c0c2204 : Use realpath for loaded libraries (#13936)
17b2d2d373 : fix TensorPrinter when tensor have 0 size. (#13986)
4574ea3bec : Make RNN operator handle exceptions
6d094224b9 : Fix optional import/export, export multi-margin-loss (#13877)
ddbd87e310 : Build with -Werror (#13998)
5390ab1d52 : Don't crash on 1d convolution (#13999)
eb024cd1d0 : don't throw in matchTypeVariables (#13989)
20e395a130 : Fixed uninitialized warning (#14001)
e3bb6ff334 : Move c10 dispatcher prototype to c10/
4b0fc5200b : Fix include paths for typeid.h (#13689)
72da09bb4d : Canonicalize THD includes with .. in them
7ea9c674bc : migrate subgraph slicing to use `moveBefore/moveAfter` (#13862)
2356c8d542 : device inference for Adam (#13990)
fed8d8975a : Various improvements to hipify_python.py (#13973)
02152c515e : Ensure nn Losses check scalar vs non-scalar values.
6811e32f03 : Support exporting Gather and BatchGather to ONNX (#13987)
7daa829bce : Implement `unsqueeze` for sparse vectors (this also makes `stack` work out of the box)
ff4f4a0a35 : Retry test on "Address already in use" error (#13911)
61a0df5af0 : Canonicalize THC/THCTensorMasked.cuh include
01d606e048 : Canonicalize TH/THRandom.h include
9e1655bb22 : Canonicalize THCUNN/linear_upsampling.h include
af6d1ec52c : Canonicalize THCUNN/common.h include
a7d43702d4 : Canonicalize THCGenerate*.h includes
f446c67e2f : submodule update to fix compilation warnings (#13925)
587f769a99 : Fix missing symbol linker error when using libtorch generated on windows (#13672)
0478d32cb8 : Move AlignOf, SmallVector and ArrayRef to c10.
4983397c02 : Better documentation and warning (#13946)
143ba72264 : Move cosine_similarity to ATen (#12199)
53c3a92a50 : consistent rounding (#9)
96663edca6 : Remove the hip ignore; it conflicts with real in-tree HIP development. (#13972)
35a24a9a94 : Example with edge case 0 for torch.sign (#13771)
dead6632b3 : bug fix for 1D conv in NHWC layout (#13813)
4341dd2753 : Move most scalar checks from nn.yaml into THNN/THCUNN code. (#13906)
46c0e2c268 : Clean up caffe2/tools/build_pytorch_libs.{sh,bat} (#13954)
a440629f14 : Remove defunct build.sh/THConfig.cmake (#13953)
fbabe5bf62 : Rename c10::detail to c10::impl (#13838)
db5aeafa60 : Avoid grabbing DeviceGuard in at::empty when possible (#13785)
1e45e7a404 : Speed up fusion compiler tensor allocation (#13914)
109dd5b412 : Move typeid to c10/util
97036d3c30 : FileStore auto deletes file and FileStore::add bug fix (#13708)
e2a7d43dfd : Use the torch.proto to store script module (#13736)
2871d3951f : More robust ->match behavior (#13952)
346c418fc9 : Add caffe2 clang7 build CI job
5151d33287 : Unflake the ordering enforcement test (#13919)
f4e502a8c5 : Added MIOpen conv transpose op (#13938)
5059beb644 : Change assert --> CUDA_ASSERT_KERNEL to avoid hip undefined __assert_fail (#13902)
0bedaf9cf6 : Update setup.py to support Nvidia TX2 (#13939)
79ec5de3fc : Add some more files to gitignore. (#13924)
c3680e2b19 : Fix sum() on fp16 (#13926)
3002cb2ad0 : Revert D13007266: [codemod][caffe2] Tensor construction: combine Resize+mutable_data - 2/4
76d8979afe : Revert D13007287: [codemod][caffe2] Tensor construction: combine Resize+mutable_data - 3/4
fbd50bbfb9 : Revert D13007246: [codemod][caffe2] Tensor construction: combine Resize+mutable_data - 1/4
30676bdcd3 : Finish up TODOs in python printer (#13879)
8311bbee7f : Fix Windows build and test in CI (#11716)
f649d8b3a9 : add floordiv and bitwise ops
7c1fe17288 : fix UnpackSegments cuda op (#13917)
cd49afce64 : Allow attaching additional net info when supplying the benchmark net (#13820)
23e19ebfa7 : add non exponential emphasis loss to Lambdarank
dfa4767754 : Update nccl submodule to latest (#13921)
c46dd5163f : Temporarily disable part of test_spectral_norm (#13908)
5163a28917 : Convert more weak functions (#13707)
53bc5fb043 : Support nn.Sequential in script (#13889)
5cfccd76e6 : Jit load error msg (#13894)
283062f574 : Tensor construction: combine Resize+mutable_data - 2/4 (#13852)
e030ee8197 : Tensor construction: combine Resize+mutable_data - 3/4 (#13854)
9d36c37bdb : Tensor construction: combine Resize+mutable_data - 1/4 (#13853)
96a01f82d1 : Remove unnecessary include (#13878)
60a85857dd : s/CAFFE_ENFORCE_WITH_CALLER/AT_ASSERTM/ (#13829)
561bc09026 : Remove CUDNN_BATCHNORM_SPATIAL_PERSISTENT mode for accuracy (#13844)
0d2762e876 : Minor fix to reenable nvtx sequence numbers for the forward methods of custom (Python) autograd functions (#13876)
266bb8bf30 : FeedTensor returns a Tensor (#13641)
98b450deb9 : Clean optional undefined tensor syntax in ATen yaml files and codegen (#13871)
bbc7412615 : (#13765)
8559fcf791 : Unpin Sphinx. (#13831)
f6e4fc071a : Fix a bug that causes nvcc to emit an unknown option error (#13904)
f112aa746a : Fix document about torch.get_default_dtype() (#13890)
a83a1544b1 : Move device_guard from _th_ functions to the wrapper. (#13842)
e43fb1d26d : Fix cuda out of memory test (#13864)
7f002008f1 : remove ShouldFp32FallbackToNCHW (#13814)
a7eee0a1e9 : Add Reshape if there is add_axis when exporting C2 concat (#13798)
a17c0118a5 : fix stability in bce with pos_weight formula (#13863)
0bfbdcac89 : fix bug in D13017777
ce48958606 : enable more unit tests (#13166)
cec3455a8b : Add gitignore item for YCM config
1600649792 : Fix for nightly builds (#13779)
b052fe6c2f : Upgrade DLPack
8480fe0105 : Fix up creation of unique data nodes
03c0f4fbe7 : Use RNG mutex for randperm on CPU (#13832)
fc79f70f9a : CircleCI: Add Linux CUDA 10 build (#13858)
8de9564c12 : Fix gcc-7 build in caffe2/caffe2/quantization/server/activation_distribution_observer.cc (#13799)
f1a2bc4eae : Corrected python lib path on windows to be consistent with Linux (#13848)
53a3c46950 : Switch to packaged Thrust on Ubuntu, enable CentOS 7.5 as a CI target (#12899)
1caa341c68 : Add torch.multiprocessing.spawn docs
1a0cb08918 : allow `Node::isAfter` to work across blocks (#13855)
75bf877534 : Preventing error where ninja build files are overwritten when invokin… (#13698)
686e83223f : add ops between float & int, and change list equality output to be a boolean
e3839dfc35 : Add matplotlib to docs/requirements.txt (#13828)
5bf14c23b7 : Bump Caffe2 docker images to version 230
309cc76469 : BaseType:: -> this-> (#13817)
6093f29409 : Update coverage info (#13788)
d8f35c42be : nomnigraph - easy - support blob renaming (#13845)
0c375571f5 : Support OptionalType export and type match (#13647)
bf00008aa1 : Use SmallVector for TensorImpl sizes and strides. (#13649)
aef9e76283 : Get pretty printer ready for use as a serialization format (#13616)
b7a7ab364b : Improve mm / addmm error message with sparse tensors (#13796)
8752214fb7 : Apply weight-decay before momentum in the SGD optimizer. (#13801)
7e8572be2d : Change method-only _th_ prefix Declarations to functions.
003f97cefa : fc layer accept axis argument (#13822)
e35418b3be : New implementations of DeviceGuard, StreamGuard and MultiStreamGuard (with CUDA specializations) (#13342)
4b86a215ca : moving simd adagrad code to perfkernels (#13549)
d97ac82bf5 : Back out "Revert D12967258: Support more data types in ONNXIFI transform" (#13812)
786f9ba6ea : Remove potential infinite loop from test_c10d.py (#13816)
c3603301d7 : Fix race condition in TCPStoreDaemon initialization (#13815)
4c3b76c402 : Add std::string to the getTypePtr for JIT inference of custom op types (#13683)
7c02f285dc : Revert D12967258: Support more data types in ONNXIFI transform
5923d76f96 : Support more data types in ONNXIFI transform (#13745)
c85463fc74 : Allow Gather to handle empty data (#13781)
4f622c26b9 : fix ffs intrinsic for long long (ROCm 290) (#13804)
d02781a2ef : Make InterpreterStateImpl an intrusive_ptr_target (#13784)
079e86a915 : schematize some prim ops (#13790)
e552c04d53 : Add proper comment for dispatch_to (#13783)
7b2fb012a8 : Make potrs batched (#13453)
e3e6ca1102 : operator serialized test coverage summary document (#13703)
014ea1e1f8 : Improve CUDA out-of-memory error message (#13751)
ae7c6bcfcf : Make c10 buildable by itself. (#13742)
09369fa9d7 : Fix clang_tidy.py
79ceecec8e : Optional undefined tensor support (#13650)
607094c4bf : fix null-pointer-use in reshape_op.h
107e067654 : Move IdWrapper to c10/util
332a7db35e : Use MNIST dataset in C++ integration test (#13737)
a63ef1d605 : Suggest git submodule update --init --recursive (#13769)
a1b2f1710d : Remove _th_is_contiguous, make is_set_to a function, not a method.
10a1534c43 : Remove _th methods that also have a function. (#13721)
9ffabcfcaa : Use nested variant of getValueTrace to allow more flexible tracing script modules (#13597)
dca3c2c60f : Save and execute futures in a task queue (#13212)
4484f67b47 : Revert D10203439: [pytorch][PR] Fix batch norm multiplier init
26751ce300 : Fix the improper use of windows-native slashes (#13220)
44fb23a2f5 : Add ability to annotate jit types inside function (#13752)
5ae3b44255 : Added HIP top_k operator (#13747)
32b3fe8ce6 : CircleCI: enable OSX jobs again (#13731)
2ee4ef5290 : Change all namespace fbgemm2 in the new fbgemm2 to namespace fbgemm (#13740)
55964abb11 : Change all namespace fbgemm in the old fbgemm to namespace fbgemm0 (#13701)
a8e303dc46 : change USE_MKLDNN default from ON (from #13303) to OFF for ppc64le (#13759)
dd3f52fbe6 : Remove _th_ndimension, which doesn't actually do anything. (#13723)
c9be135bb9 : Fix batch norm multiplier init (#12325)
42001e7c17 : Fix clang-tidy for Python2 (#13735)
89b54229b1 : Make _th_unfold and _th_view into functions, from methods.
00e752a46e : Move cpu copy to aten
51f58f0990 : Fix typo in CTC loss doc comments. (#13727)
bff931a10d : implement concatenation of sparse tensors (#13577)
65ff84b49e : Catch error by reference in module.cpp (#13743)
8a5869a3f7 : Move function_schema to aten/core (#13729)
85bde3801b : Tracer now records Python variable names (#13441)
64a910bac7 : Remove unnecessary tools/ qualification. (#13706)
4fadf571fd : handle flat rolling (no dim specified) T36264909 (#13588)
59d021b63a : Fix nn threshold test
0a090fe60a : Fix torch.dist for infinity, zero and minus infinity norms (#13713)
a92ff57a4d : update range doc (#13730)
869ef71343 : AsyncNet: option for time based tracing and trace path (#13440)
556ff8e7b7 : Add builtins for `size()` and list with defaults (#13639)
d01cb70497 : build with mkl-dnn by default (#13303)
8581d3ec67 : Allow blacklist ops in onnxifi transform
fd9aaa6b79 : Fix linking errors on Windows (#13100)
3e877a70e3 : Enable unused-private-field warning (#13450)
df022f8078 : Disable CopyFrom src with uninitialized storage
4472ad3b2f : Move functional _Reduction to its own module (#13401)
de41d1ae0b : Enable junk fill for the default CPU allocator (#13377)
21991c05a9 : Support assignment to subscripted lhs expr (#13486)
411d89ca64 : Fix the bug in dispatch_to when calling cpu() (#13700)
90ea61800f : operators/quantized/server -> quantization/server (#13660)
2448a83d30 : Give broadcast_coalesced tensors different version counters (#13594)
5dd153b1c2 : speed up torch.sparse_mask() cpu kernel (#13290)
6bfce16873 : fix flip() shape bug in CPU (#13344)
1616587540 : Redo jit/type and utils/functional to ATen/core (#13455)
87b47ff850 : Remove .data() use in C++ frontend (#13675)
eb88098e11 : Kill c10d/private/CUDAUtils.hpp (#13681)
c8bb665b5d : Fix a bug in tuple assignment (#13656)
9900a8dd89 : Remove outdated css and font files in html docs (#13699)
7978ba45ba : Update path in CI script to access ninja (#13646)
bf9b5dffbf : ensure flake8 ignores non-conforming python files generated by build
d4f9dbfa66 : Remove catch check
dceec1de30 : Distributed Data Parallel documentation for PT1 release (#13657)
216c5d0bdc : caching packed matrix (#13626)
94fe8faa00 : new QNNPACK dwconv support and tests (#13652)
1413dd4bfc : Added the finer bucketing option for DDP (#13607)
044d00516c : Rename DistBackend -> Backend (#11830)
afc7dbd586 : Hipify caffe2/utils/math_gpu.cu (#13521)
0f59dcb317 : Remove partially initialized Tensor + CopyFrom (#13629)
6c8ac50753 : Fix exception catching to catch c10::Error properly (#13665)
674e23bbab : Fixed a small error in docstrings for ConvTranspose3d (#13668)
2fe9e3a207 : Remove catch from caffe2/.gitmodules
e7652cfb40 : Remove caffe2/submodules/catch-rev.txt
ab0c72ab6f : Replace cursors with OrderedDict (#13427)
b652c2de50 : Rename dim(i) -> size(i)
4326873330 : Skip std and var tests in pytorch rocm CI (#13662)
9403eddce4 : Fix tracing bug for custom ops (#13654)
edd2e38023 : Clean up a couple of items in the C2 test scaffolding (WIP) (#7847)
10fdcf748a : swap with empty vector to force deallocation (#13625)
398d310bac : changes for cumsum/cumprod backward not depending on TH. (#13570)
a228a95b94 : Rename ndim() -> dim() - 1/6
4794da03f8 : Rename ndim() -> dim() - 4/6
57ec8f111f : Rename ndim() -> dim() - 6/6
e60a7c2c88 : codemod tensor.type().is_cuda(), tensor.type().is_sparse() (#13590)
e70321ed9e : Remove unnecessary type dispatches from Variable::Impl ctor (#13630)
2ae8e46105 : Rename ndim() -> dim() - 2/6
7341ab0a33 : Fix range of target examples and JIT test case for CTC loss.
a132a7d9ce : Add autodiff support for a few additional operators (#13288)
a1ba29a2c0 : Change to use json format to store disabled_features in hipify (#13595)
7d64c9df39 : Remove C2GEMMContext (#13443)
dbc467545f : Update weak script modules to match fns (#13631)
14004cbef6 : Native batch norm (#13263)
392ca1e59f : Remove compileFunction (#13640)
ce6edbfbd9 : Fixed NCCL backend not being built (#13653)
2cd912bcc2 : Fix more spectral norm bugs (#13350)
eb29485ed8 : Support customized timeout when fetching blob from KVStore (#13582)
bc1de6ae7d : CircleCI: disable output buffering to better locate test timeout (#13516)
619c2f8b44 : small fixes regarding docu of torch tensors (#13635)
508f676c50 : Rename ndim() -> dim() - 5/6
6cf450744f : propagate python op error msg (#13624)
feff7be294 : Remove RTTI from jit/type.h (#13591)
18de330e86 : CMake integration for int8 server operators
76c1b5cd79 : Fix overflow error in stats_put_ops
e73943e488 : Remove partially initialized Tensor + ShareData (#13522)
fbe3c3f57f : (#13435)
393ad6582d : Use torch:: instead of at:: in all C++ APIs (#13523)
be424de869 : Add torch.multiprocessing.spawn helper (#13518)
056f2cd238 : ATen/test/basic.cpp: Catch2Gtest (#12142)
06bfabf1f5 : add tests to no-gtest
137150be88 : add unwrap optional operator (#13599)
1906305c07 : Consolidate argument checkers (#13623)
7ffa864953 : Speed up tensor.options() by avoiding type dispatch (#13330)
464dc31532 : Add README to tools, delete defunct scripts. (#13621)
6aee5488b5 : correct omp dependency for mkl-dnn (#13449)
a7ee632dff : Various Test and build fixes (#13556)
9ca9469de6 : mm backwards to not depend on TH. (#13575)
3c1d593a27 : cumsum/cumprod derivatives not depending on TH. (#13579)
95ca66763d : Add math functions overloaded over different numeric types for cuda and hip (#13602)
d03c6ba50d : Adding Fetching Real number representation
3c32f897ca : Rename ndim() -> dim() - 3/6
bbacd859ab : Updating heuristics for cudnn persistent RNN (#13612)
fc6a9a19ea : Add torch._C._nn built-in, more weak fns (#13322)
10d67716db : bump docker image to 262 (#13581)
bad8235a3a : Disabling NCCL coalesced bcast test since it hangs in CI (#13606)
9ef98624b3 : Don't allocate empty Storage/StorageImpl for Variable. (#13580)
02d3787a19 : Support new upsample in symbolic, caffe2 backend & caffe2 frontend (#13272)
ebaabfbbd5 : ReinitializeTensor function for refactoring Tensor as member variable (#13147)
a340dce133 : Replaces c10d's CUDAEvent with ATen's (#13464)
e2272dd312 : Remove ATen/README.md in favor of cppdocs/notes/tensor_basics.rst (#13601)
af4a228426 : Fix erase_number_type pass, negative indices in c2 and some onnx symbolics (#12888)
2398a3255e : fbgemm submodule update (#13592)
b1c57caaf9 : Move flat_hash_map to c10/util
b7c9575c93 : Move LeftRight to c10/util
8fafa7b6ac : Remove size() from BatchDataset and templatize IndexType (#12960)
1969898647 : Convert functional dropouts to weak script (#13484)
23e3a12d5e : Add `pass` support to script (#13535)
df67d4180a : Validate schema with no returns (#13525)
7b9d755d88 : Restructure torch/torch.h and extension.h (#13482)
1b64c0f8fe : Error msg on TCP backend (#13596)
74819087de : Mixed precision DDP hang fix and fine-grained option for DDP perf (#13496)
84cfc28f23 : Note on Tensor Creation
f6ff5d8934 : Append parameters when checking graphs for TorchScript Methods (#13553)
f3c197d6fa : Add explicit c10:: namespace to converter (#13593)
7faca2a217 : Add new style broadcast support in c10d/gloo (#13497)
d2f26a450e : Add new style allreduce support in c10d/gloo (#13426)
d50dd47ccd : Add reduce support in c10d/gloo (#13425)
8f0f97749c : Add allgather support in c10d/gloo (#13424)
75c2b34c86 : Add gather support in c10d/gloo (#13423)
9cfe9418e6 : Add scatter support in c10d/gloo (#13422)
98f5c005da : Speed up CPU threshold and relu implementation (#13182)
b2127cfa9a : Make the inception onnx test more stable
5f514a483c : Move Half.{h, cpp} and Half-inl.h to c10 (#13361)
e06f92785c : Move ATen/core/Macros.h to c10/macros/Macros.h
8c182cd89e : Add overload of ProcessGroup.allreduce with list of tensors (#13576)
482b1366e6 : Remove half_support.* (#13534)
f0ed927b62 : Add diag_embed to ATen and torch (#12447)
07f8b61cc6 : Roll operator t32802531 (#13261)
e7242cbaf2 : Rename dim(i) -> size(i) - 1/2
3ea64bd80b : fbgemm submodule update (#13562)
e988dc621b : Stop depending on static analysis of tensor types in graph fuser (#13387)
505f9b4d63 : Add Int8BatchPermutation op in DNNLOWP (#13539)
54e8623d26 : 3D Conv in NHWC layout (#12733)
274f3c0951 : add explicit fpga context (#13318)
246d5282b3 : fix handling of single input in gradcheck (#13543)
fdf34c8da8 : Kill more weird constructors on Tensor
f000101b81 : add a few comments on layout after im2col (#12429)
6b578cd388 : update fbgemm submodule (#13547)
c1ed1b4779 : Duplicate bias blobs shared by different conv ops to handle scale correctly (#13538)
2a6850bf73 : remove unnecessary files (#13537)
8be0efaa8c : omit group conv NHWC test for HIP (#13554)
9e432b593d : Include caffe2 proto headers in pytorch package data (#13217)
149afef5c4 : Include lib subdir in caffe2 include dirs path (#13216)
d40b23e750 : remove unused use_scratch argument from batch_matmul (#11745)
2bc6a7a260 : enable group conv test in NHWC layout in CPU (#12428)
2b280c6b74 : minor build fixes for incremental builds (#13293)
0479517325 : Add modernize-* checks to clang-tidy (#13196)
4bca51e3e7 : unify BLAS check between Caffe2 and ATen (#13514)
8fc63e523e : Reslove lint and infer warning (#13520)
f74fa91b8e : Fix EraseListConstruct pass during ONNX export (#13195)
519570def8 : Rename dim(i) -> size(i) - 2/2
7b48a7c3f6 : Bump gloo (#13513)
da029ca042 : Skip Conv1D tests for MIOPEN (#13512)
34dd831dc2 : Revert MKL rowwise moments (#13480)
cc3cecdba0 : Fix the bug when compile using nvcc compiler. (#13509)
2827fc7681 : Add native wrappers for inplace bitwise operators.
9f2b2cac37 : Fix handling all empty bags in CUDA embedding bag (#13483)
3d392cc5ec : Migrate dnnlowp code to open source directory (#13500)
bcb851a3d6 : Write gesv derivatives in terms of native function.
1e1dd88c4a : Add Linux ppc64le CPU/GPU CI build status
2f82a06826 : Fix half_tensor.bernoulli_(double) (#13474)
61a2d47ec6 : Special handling for 1D convolutional kernels in cuDNN flavor of conv_op. (#12902)
86192301b3 : Fix a few bugs in format and vararg handling (#13492)
5fbaf0eaf8 : add augmented assignment ops (#13364)
a0e783768f : Do not fill in new data in every iteration if the input data only has one entry (#13495)
57e162da56 : Switch mutable lists to new mutable schema (#13406)
6d2b3cc869 : Fix pytest, make it work with run_test.py (#13416)
0fd176fea4 : Add operator is, not, is not to script (#13336)
24839aac59 : Link libgloo.a after libc10d.a to resolve remaining symbols (#13462)
e6b6cc06ee : caffe2/core hipify (#13457)
421f3f3e52 : add npair builtins (#13473)
27002e3fd5 : Enable a few hicpp (#13189)
d843f63f2a : optimization on cpu conv3d (#11884)
d714ecf879 : Rename potrf to cholesky (#12699)
26a8bb62ee : Re-enabled mm+add tree batching in the JIT (#13228)
81438f1220 : Add transpose network pass (#13437)
a1728602da : Convert Arguments to dictionary (#13436)
469c6b0539 : Replace tmpnam usage (#13289)
edc6d721e0 : fix flake (#13463)
99ce499bfe : Revert D12852205: [pytorch][PR] [jit] Add str() builtin
e2e560d9c8 : Improved the caffe2 to ONNX export (#13429)
54d63c5752 : added fbgemm as submodule (#13354)
c2dd0b9fad : Put torch/csrc/jit/fuser/config.h in gitignore
de0d85ba98 : Remove getTHCudaHostAllocator in favor of getPinnedMemoryAllocator (#13451)
8f2bc1bc56 : Add str() builtin (#13278)
70db53661b : expose fixed length list argument (#13142)
99a5d19591 : Rename elementwise_mean to mean (#13419)
a5b627a0bf : add assert statements (#13408)
004fc2f430 : Stop unnecessarily setting storage in as_strided. (#13411)
c0e24443f7 : Revert D10459665: [c10] Redo jit/type and utils/functional to ATen/core
8444ed951d : add sleep time between runs (#12347)
86e1009497 : Make ATen core HIP compatible (#13343)
10a6a3e404 : Redo jit/type and utils/functional to ATen/core (#12862)
c76fc75292 : Implement copy operator for mkl-dnn (#12820)
96ab7cbe5c : Make gels error message nicer (#13421)
6fe089c6ea : Hierarchical device independent -> device specific architecture (#13108)
2df6d3e3c7 : Fix allocator handling in raw_mutable_data (#13349)
a682ce9144 : Add back HIP support to async net (#13400)
eaf141dd64 : Enable opencv and lmdb in ROCM CI (#13430)
2e1b7a6f4f : Renaming dim() to size() - 1/3 (#13434)
edd902594a : Renaming meta() to dtype() - 1/2 (#13333)
470bfaa586 : int8 sigmoid op (#13298)
48db74ea03 : net_simple_refcount type to help experimentation with dynamic allocation. (#13370)
479b8266bf : Back out "[pytorch][PR] Support upsample" (#13413)
a4778862c7 : Docs/cpp misc features and fixes (#12914)
7b47262936 : Use names instead of indices in format (#13266)
a376f3a53f : Revert "Revert D12858091: [pytorch][PR] restore USE_C10D_NCCL" (#13407)
f9c0a08eed : Fix len() for tensors (#13398)
9577811908 : Using pip --user in test.sh script breaks ppc64le builds (#13388)
08b7c791ff : Windows CI hotfix: Pin Python version to 3.6.7 (#13410)
404f8660e7 : Add string.format() (#13157)
b3ef98450b : Use non-th versions of some functions when defining backwards. (#13394)
f30c74558c : Revert D10861211: Convert Arguments to dictionary
93b16b6422 : Revert D10519758: [nomnigraph] Add transpose network pass
b1fe541de3 : Revert D12858091: [pytorch][PR] restore USE_C10D_NCCL
a43c6385f1 : When looking for pybind11, do not attempt to get properties from pybind11::pybind11. (#12188)
f5b34e3446 : Handle exceptions in at::parallel_for() (#13393)
a4f00c3d1e : Fix error message in tensorlist()
cda44ffa81 : Add transpose network pass (#13396)
04e8a6d9ef : Convert Arguments to dictionary (#13332)
2cebcbae8c : createUniqueDataNode
a25d3b4d8c : Use byte tensor for mnist labels. (#13363)
488d393ea6 : Fix pointwise loss broadcast (#12996)
27ccc8787f : Implement data_ptr as a native function.
cb87319eb0 : restore USE_C10D_NCCL
4c06f1f2bb : CircleCI: enable all flaky tests (#13356)
bc74ec80d0 : Add support for torch.backends.cudnn.enabled (#13057)
b200b51602 : Give _dirichlet_grad a native wrapper.
0aaff5eaf9 : Replace CUDA-specific set_index(_from) method from DeviceGuard with set_device. (#13275)
e5d56659ec : Delete DeviceGuard(int64_t) constructor. (#13232)
e93c721da1 : Add c10::Stream, make at::cuda::CUDAStream use it. (#13133)
a3410f7994 : Give addbmm a native wrapper.
e6ace54840 : Move underscore prefixed th functions _th prefix.
e475d3ede3 : DDP multi-GPU segfault fix (#13291)
dc854c0ee6 : Add --user to pip install in pytorch test scripts (#13366)
44d2ca660a : Disable CCACHE while building NCCL (#13340)
bfe7df2211 : Optimize rowwise_moments by MKL (#13329)
865a10feba : Update NCCL to 2.3.7-1 (#13353)
265c97decf : nomnigraph - More operator definitions (#13358)
59f8e8ada7 : First step at adding exceptions (#12789)
c7027a511f : In pytorch CI install ninja via pip instead of building it from source
3c66520dd8 : Remove aten/src/ATen/CUDAStream.cpp from hipify script (#13357)
13b9fd3e05 : Renaming meta() to dtype() - 2/2 (#13334)
cb5f374f6c : More functions moved to native, use _th_ prefix more consistently.
7d9ab140bf : Fix aten::to symbolic + add expand_as (#13325)
4d141bee98 : Skip test_sum_noncontig in ROCm (#13341)
f1d02f6d1c : Move underscore prefixed linear algebra TH functions to _th prefix.
11a16961a5 : Fix "CUDA Tensor __rsub__ breaks when device is not 0" (#12956)
d2659f6689 : fix lint
f58e4fbc45 : Remove redundant array-gen loop in gather_ops_test.py (#13338)
77b8aade58 : Revert D12809293: Kill more weird constructors on Tensor
ed60f94dba : hipify caffe2 script in fbcode (#13265)
9ca8a76645 : Rename Type.tensor to Type._th_tensor.
c68b82ebc8 : don't expand cmake variable in IF
cc3618ce36 : Move _cumsum and _cumprod to _th_ prefixes.
ce469e6c71 : dims() to sizes() remaining part
9af18d847a : Fix accesses to uninitialized memory when running sum() within an OMP… (#13274)
f04a705cb2 : Remove assertions in conv modules (#13283)
c0411719fc : Rename th_addmm to _th_addbmm.
3a81984bde : Make Stat put ops accept empty tensors safely (#13178)
ce51e3fe55 : Move the Test conversion script to main repo (#13287)
3cb2470bb3 : add __deepcopy__ back to Parameter (#12886)
a35162f1bc : Remove net_simple_async (#13320)
0db505bf27 : Made docstrings for Embedding more accurate. (#13310)
264deae5da : Improve visual representation of NQL subgraphs (#13143)
017b91f861 : Optimize channel_shuffle_op on GPU (#13066)
518b0d0600 : Fix add out=None to digamma docstring (Fixes #13225) (#13307)
5ba952afcc : use topological move in graph fuser (#13271)
5b15a501da : Refactor & unit test feed predictor
ec754adb14 : Kill more weird constructors on Tensor
10de2c1187 : CircleCI: fix test timeout by running CPU build and test on different machines (#13284)
ac64724ed9 : Add support for tuple constants (#13086)
f06b70a6e9 : Fix memory leak during packing in tuples (#13305)
8a888c48da : Reimplement as_strided in ATen. (#13185)
8c2d0c831f : Speed up tensor.storage_offset (#13267)
ee010a2bee : Operators that never (re)allocate memory do not need DeviceGuard (#13269)
47c0d88739 : Bring back warning for dtype uninitialized in serialization (#13239)
bb703b1ff5 : Remove defunct ATen/CUDAStream.h,cpp (#13251)
91e87c0395 : Renaming size() to numel() - 2/2
c82e8bf988 : bump gloo
4a3baec961 : Hub Implementation (#12228)
955a01562d : Removes debug spew in test_jit.py (#13280)
6071389a90 : Enable cppcoreguidelines checks in clang-tidy (#12959)
8260441b45 : Renaming size() to numel() - 1/2
fbd497f169 : Fix initialization order in MIOpen file (#13264)
d8dab6ffa8 : Add tensor.to(options) (#13146)
3365d74df9 : Fix refcounting in anomaly metadata
50a8f8531b : Updated for arbitrary command line arg ordering
9d9e5f8d1e : Solve bug of DistributedDataParallel (#13248)
33b00bdbb8 : cwd arg in shell function of run_test set to optional (#13247)
7956e9718b : Add name for required optimizer parameter. (#13202)
2e19529bd1 : Add HasDeviceOption [nomnigraph] (#13206)
2cfe439cc7 : Turn off tests for Travis-derived Python jobs. (#13252)
3c78cc6c2b : Remove Tensor(const Tensor&, BaseContext*, type)
5a2b2aa6af : Remove calls to CopyFrom that can be sync (#13205)
8ad69a80e3 : Test scripts only run cases defined in the running script (#13250)
db0b5c7ab7 : ArgumentStash for int64_t arguments (#12939)
aabdcaa8fa : No tmp install (#13215)
a69af69ffc : remove vestigial logic related to onnxbot tracking PRs (#13260)
380d2dfb27 : absorb nccl (#13150)
1c8a823b3b : More robust ABI compatibility check for C++ extensions (#13092)
48b98d2f7f : Expose nn:: namespace to python (#13132)
62b27d27b7 : Re-enable experimental ops build (#12821)
b818d31a3e : use TypeMeta instead of ScalarType in TensorOptions (#13172)
dcbca53e58 : Renaming size() to numel() - 1/6
b1cf3ad1c2 : More Declarations.cwrap functions moved to native, mainly LAPACK, sim… (#13194)
dbab9b73b6 : separate mkl, mklml, and mkldnn (#12170)
bb96b6635c : Support upsample (#13152)
5be20f92ca : Towards a quieter CI
1032cf9fe4 : Support for zero-length sequences in RNN executor (#13244)
52b6460d3a : Fix bug in some reductions that use global memory (#13211)
9e6a695116 : Add string equality test, string concat (#12992)
74ac86d2fe : Show demangled names on nvtx ranges (#13154)
277b637811 : Delete default constructor from CUDAStream. (#13021)
1a4473bbd7 : Rewrite THPUtils_PySequence_to_CUDAStreamList to return vector<optional<CUDAStream>> (#13125)
175f248310 : Reduce sizes in TestUncoalescedSparse.test_to_sparse (#13236)
71113c6b9e : Respect kwarg-only of native functions moved from Declarations.cwrap.
4276fe7867 : Support for saving exceptions in async CPU ops (#12904)
4fe8ca74af : Test if GCC 7 fixes timeout problem. (#13230)
34799faccd : Fix move constructor on c10d::CUDAEvent (#13183)
1fe8278559 : Batched Inverse (#9949)
4d62eef505 : Add Future to IValue (#12976)
0f261ee359 : Fix performance regression introduced in D10524381 (#13199)
df8c5a3572 : Refactoring MIOpen activation ops (#13187)
f8864f0505 : Revert "Move batch_norm to ATen/native, speed up (#12368)" (#13191)
bc352ace7c : dense.to_sparse() re: #8853 (#12171)
5182fdad0b : Compute the offset to make sure the order in InlineContainer test
7a6e0bd77e : Skip ROCm tests that fail as per #12824 (#13181)
723f40d94e : video model test workflow on CPU (#13203)
dae7616078 : Shard all of tests based on how many tests exist. (#13160)
7637b7c966 : Optimize LayerNormOp (#13173)
537d671829 : Renaming size() to numel() - 4/6
3ca272cf5a : Topologically-safe node moves (#13026)
620ece2668 : Simplify thread pool creation logic (#13114)
63ce3fbde8 : Created a transformer to convert caffe2 NetDef into ONNX models.
9e6bb605f6 : Native wrappers for many Declarations.cwrap entries
80f766e5cd : Create FAQ (#13129)
eea2ee6d29 : Renaming size() to numel() - 1/17
06392bd6a3 : Renaming size() to numel() - 3/6
883da952be : Hipify caffe2/core (#13148)
1bec8f773b : Move ConstantPadNd into ATen (#10885)
e13e86724e : Renaming size() to numel() - 2/6
b090a54a38 : Enable MKLDNN in PyTorch in fbcode (#13165)
e6ce9f303f : Check that QNNPACK directory exists in setup.py
f282fa1afe : Comment out LOG(ERROR) for legacy no-dtype serialization behavior
0687f58441 : Fix broken master (#13171)
c21471c77f : Sampler serialization and deserialization (#12999)
9f9f06c937 : Improve inline container and add some test (#12993)
7ca995c815 : Add optional default type annotation to support JIT None default value (#13161)
8797bb1d30 : Revert D10419671: use TypeMeta instead of ScalarType in TensorOptions
ce0d3e9b35 : Bind inplace and _out variants into JIT (#13093)
a70573b589 : use TypeMeta instead of ScalarType in TensorOptions (#12768)
2f1542839f : reduce Device to 32bits (#12767)
a7ba4cb383 : Change return type of Tensor::dtype() from ScalarType to TypeMeta (#12766)
46ef2b2898 : Ignore flake8 warnings in test_c10d.py (#13159)
435228508e : Remove test_distributed_trap.py (#13151)
929bffe020 : Turn some th_ prefixes into _th_ prefixes for conformity. (#13128)
c95fa4b904 : fix dtype uninitialized tensor serialization
8e1e3ba7b8 : Hide c10::optional and nullopt in torch namespace (#12927)
f72f91610f : Move stream to thread local (#13080)
dc211c7de4 : Move batch_norm to ATen/native, speed up (#12368)
5e73b828bd : CMake integration for Int8 ops
4870b1b68f : Speed up tensor.resize_(sizes) when tensor has correct size (#12824)
60c0508d96 : Use CAFFE_ENFORCE instead of CHECK in caffe2 rnn executor (#13144)
5cbb33f939 : Disable upsample optest (#13135)
efab8e8fdf : Speed up tensor.get_device(), is_cuda(), is_sparse() by avoiding dispatches (#12841)
b827a40880 : Implement bucket-based attention pooling for IdScoreList features (#13004)
3ac9a9577c : Remove optional from caffe2 utils (#12965)
99d24aefc3 : Move a number of ATen checks out of Dependencies.cmake (#12990)
852d6e8b65 : Fix python2 and python 3 compatibility found by lint. (#13140)
defe96eb6c : add topology index check in Graph::lint() (#13037)
526460fc8b : Use default timeout of 30 minutes for gloo backend (#13056)
4e1c64caee : Add c10::optional to type syntax (#12582)
569a29b81a : Make chunk size configurable in SaveOp (#12949)
f6ccb6a0f9 : bring caffe2::Tensor API closer to aten/pytorch (#13134)
49046239f2 : Change explicit usages of at::optional to c10::optional (#13082)
be99eff75a : Back out "Revert D10494123: [c10] Remove at::Optional" (#12991)
c47f680086 : arc lint torch/utils (#13141)
4f94d82c7f : clang-format on c10d and THD (#13138)
c6defa0847 : Add randn in onnx symbolic (#12880)
979560c9fc : Include c10 namespace into caffe2 and at namespaces. (#12950)
d6fe812187 : Fix TensorList ambiguity (#13024)
14ea4bf0d1 : Make 7 nn modules into weak modules (#12966)
e07e63f0b3 : Absorb shm
175e553974 : Do a better job of checking registered names (#13016)
c91d982691 : Improve expand error message by including complete sizes rather than … (#13124)
9cb4bce847 : Open-source Caffe2 Int8 ops (#13065)
faa354e102 : Commentary about size constraints on TensorImpl. (#13126)
cb15c7615a : Documentation on TensorImpl.
ae44627661 : Rm test_jit.cpp (#12988)
314d95a5f2 : Renaming dims() to sizes() (caffe2/caffe2) - 3/4 (#13096)
557db18c85 : Enable MIOpen properly (#13048)
ab40eff5dd : caffe2: UpsampleBilinear CUDA implementation (#12843)
796181d762 : Fix UB in CPU_tensor_apply (#13121)
eac3e7ab7c : improve constants error message (#13072)
9fefab5ac6 : Add support for reductions to TensorIterator (#11908)
e5752f2cb4 : Renaming dims() to sizes() (fbcode)
1720757220 : added submodules for int8 ops (#13106)
2a6431ba2d : Use fixed MASTER_PORT in test_distributed (#13109)
956e620c64 : Eliminate numel == -1 state, delete Storage-only constructor (#12656)
c368f26f88 : Disable CircleCI merging to master. (#13074)
e8613d99b5 : Delete ATen/CUDAGuard.h (#13078)
6995b84d45 : Make SparseToDense handle empty outputs properly. (#13043)
f1e4304d19 : Add operator_def property to annotation (#13094)
b883afc928 : Absorb c10d into the main cmake build
c250f6f3d5 : DDP perf improvement: move sync_reduction to C++, dedicated CUDA streams for memcpy (#12954)
69906afaee : absorb THD into main cmake build (#12775)
2d9b1fcd09 : Make c10d support MPICH and further (#13083)
b4d0dc77be : Eliminate CUDAStream nullptr in NCCL (#13089)
fc1c8f8b5b : Enable test_nn embedding tests and use correct warp size in Embedding.cu (#13046)
444cc0ee0a : Back out "[pytorch][PR] added gemmlowp module" (#13090)
478886be30 : Fix print precision and match numpy behavior (#12746)
3761adc889 : C++ API Cleanup Extension (#13087)
3fa9ccf1ba : Add new NeuralNetOps for fusion (#13068)
e0a8665d03 : Converter fix to allow unimplemented convertToOperatorDef (#13069)
ef019a2d18 : Improve the C++ API (#13067)
3b919a6f82 : Renaming dims() to sizes() (caffe2/caffe2) - 1/4
9573ecefe3 : Back out "[pytorch][PR] Add sse2neon tp" (#13091)
e290a9d2fd : Back out "Migrate DeviceOption.numa_node_id to DeviceOption.device_id"
ccfaf46431 : Make CUDNN an alias of MIOPEN for HIP ops (#12278)
e1243cef88 : fixed docs for Student-T distribution (#13044)
86881cdb39 : MNIST images should have an extra dim (#13060)
6727133f3d : Support warnings.warn (#12964)
b790fcaf39 : Renaming dims() to sizes() (caffe2/caffe2) - 4/4
a4475d529d : Use GetFetchStackTrace for the AT_* error macros too. (#13007)
917b203b01 : Assert spawned processes terminating in distributed tests (#13071)
2ac7b6b683 : Tensor dims() -> sizes() (caffe2/operators) - 5/5 (#13032)
cccd457a1e : Tensor dims() -> sizes() (caffe2/operators) - 4/5 (#13031)
ab253c2bf1 : Tensor dims() -> sizes() (caffe2/operators) - 3/5 (#13030)
b55dc8d971 : Add sse2neon tp (#12948)
be43a0faa9 : Tensor dims() -> sizes() (caffe2/operators) - 2/5 (#13029)
07c0f4a097 : Tensor dims() -> sizes() (caffe2/operators) - 1/5 (#13028)
4b5d13abab : Use cmake3 if it exists and cmake isn't sufficient (#12972)
10046c2b2b : nomnigraph - (easy) Expose operators (#13063)
c64a65c977 : added gemmlowp module (#12947)
0f5cee2f6b : Convert some docstrings from char* to char[] (#13062)
97b6a25329 : Use REGISTER_CPU_GRADIENT_OPERATOR for many operators (#12616)
df47bbe9c1 : Fix test_glu_old HealthCheck with smarter generation strategy. (#12975)
2dacf28b66 : link libgloo_cuda.a explicitly from setup.py (#12951)
dd7c2d4284 : Change the function signature for caffe2::empty (#13015)
1bea5fc3ad : Fix UpsampleNearest op CPU impl batch handling (#13002)
353fdefdd6 : dims() -> sizes() (caffe2/core) (#13014)
0a190c8869 : Move the location of annotation
fcf801f061 : Support building binary on windows machines
8355219e68 : CircleCI: turn off OSX jobs temporarily
85273acca8 : fix pinning of hypothesis (#13055)
448a32e0ee : Adding timestamps to the beginning of every test file in run_test
6c8d47f2af : Add methods to FunctionSchema (#12967)
52beb338ab : Add Modules_CUDA_Fix folder to installed folder (#13013)
46162ccdb9 : Autograd indices/values and sparse_coo ctor (#13001)
e0f21a4977 : restore caffe2 strides (#12883)
88f70fcef9 : remove progress from git operations in CI builds (#13017)
7863c17b26 : Fix convtranspose3d output_size calculation (#12952)
046672eed5 : Set proper scope on nodes added by JIT (#12400)
cf235e0894 : fix lint after new flake8 release added new style constraints (#13047)
d72de9fb1e : Replace direct use of int32_t with an alias DeviceIndex (#13019)
34cca9f05b : Move Device and DeviceType to c10
ca03c10cef : Rename createCUDAStream() to getStreamFromPool() (#12940)
924326e171 : Revert D10438090: [nomnigraph] Improve the C++ API
97d4c05566 : Revert D10412639: [nomnigraph] Add new NeuralNetOps for fusion
17c6d168de : Attach Shape node if Concat node has 2 outputs (#13006)
53ac4de79d : Expose basic transformation API to Python (#13033)
4e0b6c8500 : Speed up resolution callback creation (#12859)
08d99c4486 : Add new NeuralNetOps for fusion
9c1195fe61 : Improve the C++ API
f9b7ce9c99 : Add tuple indexing support for constant integers (#11492)
ff508c91a1 : Remove numba dependency
a6949abb15 : Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE (fixed reverted bug) (#12848)
dd00c2997f : fix expect tests (#13005)
821b04e819 : Nomnigraph: Remove Copy constructor and copy assign operator from BasicBlock, add move constructor.
83f788d088 : Fix MSVC build for Python 3.6 (#12878)
b8a11cffdb : Minor improvements cherry-pick (#12973)
223a96a9a0 : Add missing NCHW2NHWC symbols for HIP (#13000)
470e766062 : Fix illegal code in rocblas_handle rocblas_handle() that causes failure w/ gcc as base compiler (#12957)
21285e73da : Add Google pixel code
8e4bea107a : Fix clang-tidy 404 in Travis
9ea19cb079 : Windows CI integration for custom ops (#12928)
af78d4cd49 : Add weak script modules (#12682)
3fb3a07f54 : Added a default constructor for torch.finfo.
1b07eb7148 : torch.utils.cpp_extension.verify_ninja_availability() does not return True as documented
428300d318 : Revert D10494123: [c10] Remove at::Optional
d401dc4374 : Remove at::Optional (#12958)
27af265a5e : Index to track topological order within a block (#12748)
dd823ccd28 : small improvements to torch.nn.normalization docs (#12936)
8d7607e346 : Add attribute exhaustive_search in _blacklist_caffe2_args (#12805)
bc1d96ca98 : Add support for inline expect tests. (#12825)
952df2ba8f : Install torchvision before all tests, tickles #7851 (#8311)
3894ed22a8 : Remove nullopt from native_parse.py (#12961)
da2da55170 : Make sure to update success_ at the end of the run (#12806)
8c514627a4 : Add C10_LIKELY/C10_UNLIKELY macros (#12932)
8d3e7e2fcb : Move DDP queue_reduction to C++ (#12852)
8682999767 : Remove trailing whitespace from files in aten/ (#12942)
f575e138d8 : Credits to Exhale in cppdocs (#12926)
e64f75a1d8 : fix ZeroDivisionError in utils.bottleneck (#11987)
95caa37565 : Remove CAFFE2_USE_MINIMAL_GOOGLE_GLOG (#12938)
283d41885d : Accept external input hint when doing ONNXIFI transform (#12900)
5f37c0afda : Fix doxygen check
56bf4850cb : Clean up of the multithreaded benchmark (#12905)
1b530fdae0 : remove the find-package codepath for gloo in caffe2
6cc15c1a22 : Simplify typeid SFINAE (#12706)
3092a69546 : Optimize NCHW2NHWC on GPU (#12910)
cfb7f0a8f2 : remove onnx CODEOWNERS entries (#12941)
8f51c513a6 : gloo: build once, share between pytorch/caffe2
df06fba1f1 : Use the newer one of cmake and cmake3. (#12916)
5e8e199f8d : Add note on traced module train/eval behavior
a022fd2d6b : Implement DataLoader (#11918)
96d826f635 : Define REGISTER_CPU_GRADIENT_OPERATOR (#12588)
da73d709a8 : Remove unsafecoalesce op (#12897)
c774cb8913 : Rephrase unclear error message for shape mismatch (#12870)
25f4b3efe3 : Add simple scripts for checking if generated code changed. (#12835)
01227f3ba7 : Env variable to not check compiler abi (#12708)
1e8064dec0 : Convert 2 nn.functional functions to weak script (#12723)
b357470421 : Add DistributedDataParallelCPU to doc
ed02619ba0 : Add topological sort to nomnigraph (#12790)
a839a67aad : Add IDEEP unit test with zero-dim tensors (#8459)
7dbb38e856 : Moving logging from caffe2 to c10. (#12881)
d120b9af5a : Make c10d pickling/unpickling work (#12694)
8cb0848bdc : expose delete_node (#12840)
202893fe1a : Migrate DeviceOption.numa_node_id to DeviceOption.device_id
7921e16ca2 : Revert D10421896: restore caffe2 strides
bf99ffc4d2 : Remove OMP_NUM_THREADS and MKL_NUM_THREADS settings from docker images (#12836)
14ff866505 : Optimize GroupNormOp (#12844)
f3e1fe5ca5 : add string as supported input / output of script functions (#12731)
186219a643 : restore caffe2 strides (#12845)
68f4a4b3ba : Delete THCStreamGuard in favor of CUDAGuard, also c10d code cleanup (#12849)
6ec2f09188 : CircleCI: enable OSX jobs
7837ec553c : CircleCI: Add doc-push job
6190408e24 : caffe2: UpsampleBilinear support for scales (#12736)
d736f4f0a7 : Kill 'python_name' in Declarations.cwrap. (#12832)
31232061aa : Use C local in lexer (2) (#12838)
373b5080da : Warn that tensor.resize_() resets strides (#12816)
d783249674 : Revert D10457796: [pytorch][PR] fix typo
ca5dc9f13a : Add py2 compatibility for builtins import (#12784)
aa6f47e229 : fix typo
f47d12b0ef : shape_as_tensor should return a CPU tensor
40ff69b796 : Add attribute exhaustive_search in caffe2 blacklist args (#12815)
8a35aafca6 : Try to fix randomness.rst formatting again
0fa69c0276 : Remove the protobuf library in pytorch linking list. (#12451)
a85174b46a : Fix randomness.rst formatting
87d3d209a6 : Enable JIT tests in fbcode (#12777)
99bc541b5b : size_from_dim(0) is like numel() but worse. Don't do it. (#12729)
89bf98ac4c : Update '__all__' in '__init.py__' (#12762)
a223c5ed2c : Extend ONNX while op by x2, rather than x1.02
f9d1b63d18 : Automatic update of fbcode/onnx to f8828e532da4795e8ea15f5850a37c5179917b9b (#12823)
f380f0ba27 : Move torch.onnx.operators functions into ATen (#12803)
79709f02e9 : fix overwriting of CMAKE_EXE_LINKER_FLAGS (#12834)
92890d4314 : Delete ExtendTensor operator
324a510f9c : JIT Cleanups (#12804)
6058886b03 : Speedup pnorm (#12811)
68843c683d : Open source multithreaded predictor bench utils (#11135)
ee563c5899 : Add license reference to README.md
9473e57eca : Revert D10444104: [pytorch][PR] Windows CI integration for custom ops
ed317b6203 : Remove useless MKL target (#12783)
805f4d5cb8 : Revert D10416438: Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE
57ddc08a57 : Enable multiple external output (#12778)
dec9bc5f0b : Expose device_option directly
63cd051867 : Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE (#12799)
2c566a17c7 : nomnigraph - simplify subgraph matching APIs (#12681)
9c617140f7 : Try to reduce c10d test flakiness (#12782)
3fe35300ed : Revert D10417038: [pytorch][PR] Use C locale in lexer
545f22c070 : Link libshm against c10 (#12802)
5b971445a6 : Typo fix (#12826)
2b63b7a0a5 : Support GPU version of Spatial Batch Norm (#11711)
e240e89984 : move the torch/csrc/jit/serialization.h to caffe2 source folder and rename to inline_container.h
963b012bd8 : nomnigraph - HEFT scheduler (#12788)
12be60cc04 : Windows CI integration for custom ops (#11527)
eb6a1245a2 : Fix torch::jit::load docs (#12709)
b1a6fa90e1 : Add script::Module::to (#12710)
710191e292 : fix error message of large kernel size in conv2D (#12791)
f1e7d384b6 : Support scales as inputs in ResizeNearest (#12720)
f4944f0f8a : Rename test/common.py to test/common_utils.py (#12794)
cffeb03a2d : fix forward and backward for norm with negative infinity norm (#12722)
ed5eb7196b : Add quantized GroupNormOp (#11852)
08aab4dfdd : remove ATen/Error.h and ATen/core/Error.h (#12792)
cd88c5ccf4 : CircleCI hot fix: pin awscli to 1.16.35 (#12787)
84ce3ab47e : Add MAE and L2 loss to docs (#12754)
5ccdd7a626 : Support cmake3 for 14.04 and CentOS (#12771)
21ff6de4b3 : Add missing HANDLE_TH_ERRORS (#12770)
ab1a25aa9b : caffe2::empty for Resize+mutable_data refactor (#12407)
7d5f7ed270 : Using c10 namespace across caffe2. (#12714)
348867c10b : Remove cereal submodule (#12666)
dd7501e3a8 : Remove Blob::ShareExternal from serialization (#11926)
6cbf1992bd : Serialization takes pointers instead of Blob (#11925)
25db86cca5 : Fix isfinite for int input (#12750)
9a76e84a08 : Use C locale in lexer (#12739)
459cff93fe : fix math formula for conv1d and conv2d (#12740)
e027f7a913 : Fix character with wrong encoding in documentation (#12761)
9d79030d38 : Fixup THPUtils_unpackIndex (#12738)
409ee5bcd9 : Remove redundant semicolon
1a6071d436 : fixing `seq` to `tensors` in documentation (#12741)
7edfe11ba4 : Use TypeMeta::dtor() instead of Blob::DestroyCall (#11500)
7b7bf09e3c : Add TypeMeta::New/Delete (#12307)
90737f7f5d : Fix missing final activation in NLLLoss second example (#12703)
0521c47c91 : Amend nondeterminism notes (#12217)
8c873def88 : Revert D10220313: restore caffe2 strides
70c527dacd : Re-disable softmax ops tests in ROCM (#12749)
034c969f3c : Simply exit DataLoader when Python is dying (#12700)
d34578026c : Various example code fixes (#12707)
c8ac878b98 : Fix bug in script for where (#12385)
84edd4a48b : Enable mapping from operatordef to converted node for debugging
1bf642800d : Remove duplicate descriptors (#8321)
e497aa1e35 : Optimize UpsampleNearest Op (#12151)
ba25e13782 : Forbid Module.to with copy argument. (#12617)
5416260b1e : Add the OpenMP optimization for BatchPermutation. (#12153)
3709734b1c : Improve reporting on pytest. (#12610)
3bfa7258b3 : Don't serialize hooks (#11705)
b1892226aa : A quick rundown of codebase structure. (#12693)
0054df19b1 : Simplify InheritOnnxSchema registration (#12696)
81975a497f : update docs for sparse tensor (#12221)
dc07102b17 : Check dim size preventively when doing shape inference for BatchMatMul (#12691)
50c0aedbec : Don't segfault on Tensor.__delitem__ (#12726)
6476e4598c : Rename TypeMeta function pointers (#12306)
d0df1e8ec9 : Remove MIOpen Softmax operator (#12727)
30aaa07594 : New serialization format (#12384)
ac994f2c78 : Fix SpectralNorm with DataParallel (#12671)
c414eb2618 : fix improper calling of ShareExternalPointer from RNN op (#12593)
4d698cae2e : Enhance shape inference in ONNXIFI transformer (#12685)
f53d5e0a75 : Automatic update of fbcode/onnx to 1cbe2743cda739ff752d6ce79553b0ef8ad49783 (#12676)
e15501fb68 : fix bce_with_logits with legacy reduce (#12689)
00f0dca4b5 : restore caffe2 strides (#12381)
7035975508 : fix double free exposed by latest llvm (#12697)
a9981c8477 : Remove Type.tensor, Type.native_tensor. (#12687)
7d24985852 : Kill is_type_dispatched. (#12684)
5b8a640d0b : Update fft docs for new cache size (#12665)
0916f4a337 : Remove caffe2/submodules/cereal-rev.txt
04d4ec285c : Cleanup namespace that were moved to ATen accidentally (#12680)
eb02a1d8a7 : Fix clang tidy master comparison (#12674)
31d8e5e71a : Improve Python API with the addition of pythonic setters/getters
f2b62e113c : Clean up IR.h (#12551)
058c1284be : Fix the symbolic for pixel shuffle (#12192)
a1dd608260 : Reduce MAX_JOBS for pytorch rocm build to make CI more stable
d80a3eb549 : Set philox seed and offset on cuda manual_seed (#12677)
01a333fd7f : OpenCV 4.0 Compatibility fix (#9966)
083e037dea : minor fix (#12688)
23c4dbd6d7 : Fix ONNX upsample mode (#12648)
7a52117792 : Add AdaptiveAvgPool2d and AdaptiveMaxPool2d to ONNX.symbolic (#9711)
52cbf4b774 : Update eigen submodule to fix CUDA arch>=5.3 build issue. (#12191)
e22a776890 : Fix for some tests (#12575)
0b96e5d792 : Move some files to c10/util (#12245)
ade97afc74 : Re-enable IDEEP graph rewrite test (#12661)
ab7520eb50 : Revamp and document serialization, support streams (#12421)
03429e4eaf : Update Gloo submodule to resolve __CUDA_DEPRECATED warning (#12574)
ef18f74e20 : Simplify typeid macros (#12654)
bb35d085ef : Dispatch backend-specific TensorOptions-based 'factory' functions via… (#12071)
86aa6a61e0 : Dedup MethodValue and FunctionValue (#12589)
71d142604f : Add upcoming features to schema parser (#12585)
4c21b2f2d3 : split register_aten_ops.cpp into shards (#12615)
c6f0fe5f26 : CircleCI: Remove --depth from git fetch
6f339cac6b : Windows local dev: install conda in user-specific directory to avoid conflict (#12663)
bbe6ef3864 : torch.finfo and torch.iinfo to mimic the numpy equivalent (#12472)
e8d8ccb34a : Emphasize that the /path/to/libtorch must be absolute (#12660)
a74cc03aa7 : Use branch of exhale that fixes overloads (#12668)
713e706618 : Move exception to C10 (#12354)
aef8cadb9a : mark Storage functions as const (#12623)
189c1e1afb : Rewrite http://pytorch.org -> https://pytorch.org throughout project (#12636)
a6c7cf8741 : python bindings: enable generic nn operator handling
0740a5d521 : compute_uv for SVD (#12517)
d5eae90537 : update onnx tests (#12619)
d17b0bc679 : Allow running root tasks inline (#12289)
a1bbe80e21 : Remove NervanaGPU operators from Caffe2 (#12564)
151b28521a : Fix Windows test script on local dev machine (#12073)
7326739188 : Remove out-of-date TODO.
07d67aa17a : Make TensorOptions immutable. (#12630)
1014c8a7db : 'Re-sync with internal repository' (#12652)
6dd71947ea : remove unused Iterable, also avoid Python 3.7 deprecation warning
eaf33f22c8 : Revert D10123465: Set the correct engine name for position weighted pooling when fp16 is used for training
02695c11db : fix masked_fill_ bug on non-contiguous tensor (#12594)
0c6ab0e8f4 : Delete caffe2/mkl, and references. (#12625)
a98958d3bd : dtype option for softmax (#11719)
e986f307c3 : Fix math formatting of PairwiseDistance docs (#12628)
a91f3338a0 : Some documentation fixes (#12521)
1f94ce1f97 : Fix aten::to export in ONNX
635cbff300 : Set the correct engine name for position weighted pooling when fp16 is used for training
6bc8d303eb : Update onnx to onnx/onnx@06f6d63 (#12621)
63a220f54d : Deprecate prof_dag (#11956)
53f4dbc9ac : test_proper_exit: avoid truncation of info message (#12612)
17ab3bd502 : implement rowwise quantization for fp16 (#12382)
7a1b668283 : Implement Tensor.__cuda_array_interface__. (#11984)
134b5d62e8 : don't copy weight gradients in rnn (#12600)
49256ddb4a : split generated VariableType.cpp (#12493)
3f52a0aad7 : Fix the linter
239b2ac718 : make the variable declaration closer to usage
15bdb9fe61 : remove duplicate BUILD_TEST flag in libtorch cmake file (#12583)
7da4643232 : Caffe2: fix error C2398 and syntax error with Visual Studio 2015 (#10089)
c1d0784dcb : enable onnx integration tests
97eec33f80 : Allow tensor.device, tensor.dtype, and tensor.shape in JIT (#12363)
5317429e82 : move bceWithLogits from python to Aten (#11054)
6069f6f454 : Try to prevent occasional timeout in test_proper_exit
12686ec656 : fix _AllReduce not applying the DeviceScope guard to model.Copy operations. (#12342)
dfad8b60ba : Remove duplicate codes
038d5ca943 : Remove incompatibility MSVC, Cuda and Debug (#12572)
63e09707a2 : Use SFINAE instead of macros for 'long' hack (#12605)
b57fdf1db5 : Properly set cmake python library and include_dirs (#12569)
48bc57fa8d : Introduce chain_matmul (#12380)
0cf3c1ce66 : Add copy= keyword to Tensor.to (#12571)
2279299c6c : Implement aten::contiguous (#12541)
1be8b7cc56 : Delete "default" codeowners from root directories. (#12584)
0df4d66210 : Update caffe2 docker images version in circleci (#12596)
fa99ed9b30 : Emit warning about optimization passes only once
01cb90adf1 : fix the ONNX test_operator test (#12591)
eb5fdc5fb5 : Add default values in script (#12345)
97bee5cd80 : Adds max plan number for CUDA 10 cufft plan cache array (#12553)
957142a4fe : switch ROCm CI targets to white rabbit release (#12577)
93a4b76114 : Enable alternative LayerNorm impl in FisherGan (#12178)
8ac8b823c2 : Allow use substitute ops for LayerNorm (#12177)
d9eff40546 : Revert D10209620: Use SFINAE instead of macros for 'long' hack
5973312abc : Add clang 6 docker images
a1487bf874 : Smarter differentiable subgraph slicing (#12175)
0ee2e7c398 : Relax the locking of running_mutex_ in async_scheduling net (#12544)
0f9807ee61 : Enable addmm fusion for ONNX export only (#12538)
7b0f5d6631 : Support USE_CUDNN for Windows (#12518)
033e00cd3f : Fix bug in caffe_translator tool (#10056)
666bebc7d2 : adapting caffe2 operator docs generator to pytorch url
eef083e477 : CircleCI: add timestamp to build log, clean up unused jobs, print docker image name
a4120fa132 : Get rid of emitApplyIdent (#12504)
8482ea8774 : Update develop install command in onnx scripts
cee19eb31c : Back out "[c10][NFCI] Move jit/type, function_schema, and utils/functional to ATen/core" (#12568)
7acb145893 : Fixed print issue for TensorTypeId (#12402)
229397b439 : Revert D10324615: [pytorch][PR] Revert #12466 and #12467 to fix JIT test error on Windows CI
1c7832c854 : CUDA 10 warnings fixed (#12442)
234e6b3797 : Bugfix in onnx exporter (#10607)
1f7cbea984 : Revert #12466 and #12467 to fix JIT test error on Windows CI (#12557)
170d84228e : Delete redundant statement of `col2im` (#12514)
2b033332c8 : Allow linking to backwards-compatible cuDNN at runtime (#12239)
8734b174ca : Multinomial raise error (#12490)
b89a3b50fb : Remove StaticContext (#12547)
c32839fc90 : CircleCI: better credentials visibility (#12552)
89010d60f9 : Migrate HIP to use DeviceOption.device_id and delete DeviceOption.hip_gpu_id
25bd7fe488 : Add USE_FFMPEG flag for setup.py and R2Plus1D (#12543)
da3dd9af12 : No Op Optimizer (#12390)
8399778049 : Update FP16 submodule (#12554)
543048d275 : Adds launch bounds for CTC loss kernel (#12379)
7724807551 : Remove ExtractDeviceOption from StaticContext (#12304)
0d50c117db : Introduce BUILD_ATEN_ONLY cmake option (#12443)
a442853f4f : CircleCI: try to fix submodule not found error (#12542)
b51901f7d3 : Update FP16 submodule (#12539)
45db8274de : CircleCI: Add credentials for pushing to perf test S3 bucket (#12523)
c2a57d082d : Fix windows build (#12534)
033e95765c : Diff against master and enable bugprone-* checks (#12378)
727609f435 : Use SFINAE instead of macros for 'long' hack (#12424)
e25b8869f7 : typo: Aten.h -> ATen.h in cppdocs
3829f86c7a : Update NNPACK-related submodules (#12505)
283f21d518 : Caffe 2 adoption (#12116)
16b8075acd : finishRun fix (#10970)
f54ab540af : Rename cuda_gpu_id to device_id in DeviceOption (#12456)
caf8b0777a : Move function_schema to ATen/core (#12467)
f989d4b18e : Move jit/type and utils/functional to ATen/core (#12466)
58b247fc42 : Update onnx to onnx/onnx@008e381 (#12492)
64f707cd26 : Enable more unit tests (ROCm 255) (#12486)
dcd9d73d47 : Expunge torch.utils.trainer.* (#12487)
8468b7d3f0 : Fix tensor doc (#12469)
2b22c60980 : Fix GPU perf tests on CircleCI (#12491)
b572e27502 : Fix types and warp sizes for ROCm (ROCm 256) (#12485)
c96afa3322 : topk and sort fixes (#12337)
ea79f7c032 : Add derivative to pow with scalar base (#12450)
a3fb004b18 : (#12474)
1c69d368e1 : Remove New with Allocator Registry (#12111)
f564163951 : Remove SSE-only code and convolve5x5 (#12109)
11c31aef04 : Prevent hanging in data loader altogether
1a0d82e4f4 : fix import for script module with control flow blocks (#12351)
c959be9d1d : Create named functions construct (#12237)
8414094562 : cleanup controlflow (#12235)
d400502b1d : Fix a bunch of warnings in TestNN
cdead5ace1 : Enable CircleCI for Linux jobs (#12389)
5a0d2c7138 : Add clamping functionality to stats_put_ops
1ee6fc4002 : Delete noexcept on the move constructor of OrderedDict (#12369)
dd4b9b06a4 : Back out "Back out "[caffe2] Use custom CPU thread pool in async_scheduling"" (#12418)
c5d7494ca1 : Use open-source NCCL2 in PyTorch (#12359)
c3987a0fc3 : Fix issues with ATenOp handling methods where `self` is not the first arg (#12353)
d0e1dca0f5 : fix expect file (#12465)
5bac46508a : Fix TestJit.test_alexnet expect file (#12458)
d4b4c1fbec : Add missing url links to README.md file. (#12440)
a55b9f77a0 : Implement 3D and 4D parallelization in Caffe2 thread pool (#12455)
d181e0f1fc : Add move{Node,Edge,Subgraph} for Graph move-like semantics (#12303)
cf2b88fa30 : Induce edges on subgraphs (#12255)
7103d0d938 : Add python bindings (#12253)
e7653c7561 : New chaining/partitioning algorithm for async_scheduling for inference (#11957)
f1f521f71b : make bench_gen.py work for 3d conv (#12433)
00aedfc0e2 : constant pooling pass (#12222)
83b4dc6822 : Remove Type.tensor(). (#12360)
28e1571843 : Add the x64 msvc toolchain into PATH (#12446)
def655ec27 : fix critical section of atomic add op
8689d8af36 : Format inline code block. (#12441)
0e44db8b0d : Add check for backend of arguments to bmm cpu (#12434)
db8d01b248 : Move JIT tests to gtest (#12030)
6f664d3917 : Improve TypeMeta (#11502)
ac9bb8ecef : Make dynamic_cast_if_rtti safer (#12408)
0e966fc9f9 : Back out "[caffe2] Use custom CPU thread pool in async_scheduling" (#12415)
695465915a : Remove some Type.tensor usages and remove native_tensor without size. (#12403)
14b48a2404 : Use custom CPU thread pool in async_scheduling (#12295)
92b0e7026e : Add weak script mode for script functions (#11963)
058a31839d : Warn about local_rank not being globally unique. (#12370)
3f04ca9a91 : Remove duplicate math transpilation function (ROCm 233) (#12387)
e1fe617600 : Fix flipped pad buffer constructor arguments
99de4565dd : Split reduction_front_backops.[cc|cu] into smaller units to allow build of smaller size (#12315)
b937cbb776 : Fix a bug that would resize tensor storage on export
57fcc57f31 : set CMAKE_INSTALL_MESSAGE to NEVER (#12392)
54d9823d00 : Make caffe2::Tensor::dims() return an IntList instead of a const vector& (#12180)
f9fb37ca79 : Guard Denormals-Are-Zero with runtime CPU check (#12386)
bd09ab6687 : Remove stages from IR, they are no longer used
c7e8044fc8 : Support additional device types (#12293)
f8086845aa : Fix bug in grad.py when conv bias != None (#12281)
e2d2b270db : Revert D10212616: [pytorch][PR] Remove some Type.tensor usages and remove native_tensor without size.
705d80b51e : Remove some Type.tensor usages and remove native_tensor without size. (#12355)
9ebac3d7fe : Improve type kind error message (#12344)
0ebbfc25f3 : Add utility function make_tensor (#12288)
dd2c487ab0 : Enforce invariant that storage_ is always non-null (#12328)
7788ec9dd1 : Remove dangling cmake check for long typemeta (#12356)
1e7050072b : Make TensorOptions contain optional fields, optimize struct size (#12103)
b3cdaee6db : Update README.md of ATen Documentation (#12367)
5cb2b2358c : Move interned_strings and get build working (#12039)
f494f004b7 : Fix unintended casting to long (and fix Half overloads)
d4c58216d7 : Stop warnings on AT_DECLARE_TENSOR_TYPE(.); (#12348)
d9ba2b6894 : Add Pytorch domain specifc ONNX schema for SparseNN ops (#12338)
bd8980e8c0 : Enable CUDA 10 in CI. (#12343)
6544cd4590 : Revert D10205876: Fix unintended casting to long
8e5ac43b4e : Fix unintended casting to long
16e21e14e3 : Fix Caffe2 build on 64-bit Android (#12340)
f0b73ff790 : Pretty printer improvements (#12179)
895994a7c3 : Back out "[pytorch][PR] [Build] Use open-source NCCL2 in PyTorch"
a98489747d : Enable sparse functionality and tests (#12323)
39bd73ae51 : Guard NumPy usage using USE_NUMPY (#11798)
c064f8a89d : Fix mkldnn build error due to corrupted CMAKE_REQUIRED_LIBRARIES (#12195)
ae7a7fb398 : Use open-source NCCL2 in PyTorch (#12312)
6b79e16d6d : revert test/expect files (#12332)
83de6f0dac : hip minor fix for c10 (#12329)
bcb62cb525 : Lazily create tensors in optim_baseline (#12301)
1962646d0f : Remove CAFFE2_UNIQUE_LONG_TYPEMETA (#12311)
38f3d1fc40 : move flags to c10 (#12144)
c9f7d7b506 : mark unit tests as working, skip failing unit test (#12313)
8c64655460 : Open source distributed code (#12254)
15367ba9bc : Deserialize offset of TreeCursor only when it is not empty (#11465)
07bb79bd8b : Use caffe2::int8::Int8TensorCPU when input type is uint8_t (#12274)
faab6ea922 : Split Allocator (#12105)
74dc4460eb : New in StaticContext returns at::DataPtr (#12029)
bcc2a0599b : Enable clang-tidy in CI (#12213)
c9f9df002d : Properly catch errors in PythonOps (#12243)
557015fd93 : wipe cache with writes (#12279)
6b9afc894b : pyHipify Fixes (#12292)
fe10f3d0c6 : Fix up onnxwhile op (#12124)
8aa23907e8 : Make if block also take control_inputs, preserve SSA (#12224)
b548f8320d : Reduce size of TensorImpl from 160 bytes to 128 bytes (#12266)
2217c0b408 : create the onnx_root locally, and link it
3db9738b30 : add torch factory methods (zeros/ones) to onnx symbolic
01d835c9b2 : Revert D10128131: [nomnigraph] Add move{Node,Edge,Subgraph} for Graph move-like semantics
d1ac1eba3b : Add `bool` type to IR (#11834)
c029c839a1 : MIOpen 1.5 group conv API integration (#12273)
a839ec805a : Add move{Node,Edge,Subgraph} for Graph move-like semantics
b911ca9b0d : docs: change links to https (#12258)
080266e79c : Document CUDAHOSTCXX environment variable (#12265)
1fb8925efe : Fix typo LMBD->LMDB in docs of setup.py (#12282)
c0ed48a57e : Add support to the accuracy metric (#12211)
06360c3050 : Back out "Deduplicate canonical_axis_index_ with maybe_wrap_dim"
a76216b8ed : Back out "[aibench] Use caffe2::int8::Int8TensorCPU when input type is uint8_t"
035d04299c : Update onnx to onnx/onnx@ddf8eb6 (#12267)
04b0774964 : Use caffe2::int8::Int8TensorCPU when input type is uint8_t (#12250)
7c678746ef : update the script to match the current build process
29e5ba8a7b : Fix for LibTorch download link (#12263)
1d3f650ce4 : Revert D10098106: [pytorch][PR] [WIP] New version of PT1 model format
ff608a9ff3 : Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232)
696498d9e4 : Delete stride updating logic from Caffe2, and make PyTorch error in this case. (#12236)
2cbcaf4544 : Skip failing tests in test_sparse (#12229)
8af06d8114 : Use DFS scheduling only within single device (#11848)
ecace9eb21 : Move crf in caffe2 from fb to oss (#12200)
26df16eb21 : Clear previous device option when keep_device is set in load op
23f86ad57f : Back out "[caffe2][mpscnn] Enable multiple external output"
35becd1879 : New version of PT1 model format (#12149)
8fa7de35f2 : Enable ROCM clang-7 build
15d28e400f : remove support for c extensions (#12122)
1b59cf8b51 : Add support to use llvm 7 in CI
06f535d8a0 : More debug info in plan executor (#12183)
eba1cf2145 : Unify style (#11949)
3010dc4208 : Revert D10123245: Back out "codemod cuda_gpu_id to device_id"
ecb3835387 : change \gamma to \Gamma (#12214)
7d7d336c45 : Back out "codemod cuda_gpu_id to device_id"
e43ffb0148 : nomnigraph - easy - some code cleanup for transformations_test (#12101)
006171fffc : Back out "[pytorch][PR] Revert "Move CreateContext to global registry (#11688)"" (#12121)
fed91f873f : (Very small) allow trailing commas in assign or tuples (#11723)
f3c32a4b54 : dnnlowp_16 -> dnnlowp_acc16 (#12205)
9768b4d4ff : support half float for SparseLengthsIndicesInGradientWeightedSumWithMainInputGradient (#12186)
c3817e85fa : Temporary fix for LibTorch download link (#12212)
572132fb17 : copy_(Sparse, Sparse) for sparse tensor (#9005)
93ecf4d72a : Remove raise_from (#12185)
5ffc915f26 : fix docs (#12126)
40aa212cd6 : Support fp16 mkl engine in training
a2ebbccc9f : fix unit tests on CI
878e7740fd : Turns optimizations off when checking trace (#12172)
22ce6060ec : Add caffe2_api to exported functions (#12184)
ebc2643498 : Enable multiple external output (#10957)
0a5dfa5a52 : Add support for device annotations on blobs
08e5ca1262 : Add filter<T>(NNModule) and explicit Declare/Export classes (#11955)
60061a20d9 : Adding Declare and Export operators (#11954)
7b2c0a09e4 : Adds support for NaN, +inf, -inf float scalars to CPU and CUDA fusers (#12070)
0e779c27e1 : Deduplicate canonical_axis_index_ with maybe_wrap_dim (#11891)
ab9a5976a0 : Disable inlinining of EnforceFailMessage (#12078)
8009b6cdb5 : Kill self_ty in TYPE_DERIVED_DEFINITION_NATIVE (#11903)
e7e10e60e0 : Introduce builtin script functions (#12141)
65bf181ddf : Add "ai.onnx.pytorch" onnx domain (#12157)
0aff3cc559 : Fix broadcasting bug in StudentT (#12148)
b0248df72a : Docs: Change cuda(async) -> cuda(non_blocking) (#12158)
5be0baefa2 : Use streams in JIT serialization, allow JIT serialization to/from buffer (#11932)
d291cf7de6 : Ensuring positive definite matrix before constructing (#12102)
04c0971679 : Special case BatchGather and BatchGatherGradient for block_size=1. (#11349)
f5a0c337ba : Move TensorImpl IsType, meta, dim32, dim, ExtractDeviceOption to caffe2::Tensor
bbae57d06e : Move TensorImpl size_from_dim, size_to_dim, size_between_dim, canonical_axis_index to caffe2::Tensor (#12099)
3eb5940cf5 : codemod cuda_gpu_id to device_id (#12022)
149403f849 : Move TensorImpl ndim, size, itemsize and nbytes to caffe2::Tensor
7f35e92af2 : mutable lists (#10700)
a5818047c4 : Rewrite serialization to correctly handle partial reads/writes in all cases (#12143)
a86a61b004 : Implement caffe2::Tensor::raw_data() in terms of data()
2021b26bcb : Move TensorImpl::ShareExternalPointer helper overloads to caffe2::Tensor
976a9e0454 : Move TensorImpl::DebugString() to caffe2::Tensor
b0e48aa197 : Move TensorImpl::Reshape(vector<int>) to caffe2::Tensor
8c533c2c90 : Fix bug where Reshape() trashes strides.
d02478e607 : Move TensorImpl::ResizeLike to caffe2::Tensor
dd73d57643 : Move TensorImpl::ShrinkTo to caffe2::Tensor (#12090)
00c6fb16e7 : Move ExtendTo to caffe2::Tensor from TensorImpl
6a2dbc9808 : Rename TensorImpl::GetDeviceType to device_type, and properly test if is_variable
c5fc2f1105 : Merge UndefinedTensorImpl.
e8cb6cb9d2 : Fix some symbolics for ReduceSum, GE, LE (#12123)
f6abd16a9d : Merge TensorImpl. (#11971)
1619264ca5 : Make ATen-core and caffe2 mutually recursive / merge template data<T>() (#11970)
c35f85a6d4 : Export symbols for pybind and other libs after caffe2 rebase (#11975)
80e3081c28 : Add observers for mkldnn fallback operators (#9093)
6e7e63fda3 : Implementation MomentumSGD/MomentumSGDUpdate operators for mkl-dnn (#11686)
13cf39294d : Remove ATen/Error.h and use ATen/core/Error.h instead. (#12132)
a72603f8f8 : Fix for ppc64le jit graph difference in sigmoid backward, see #10726 (#11579)
9c49bb9ddf : Move registry fully to c10 (#12077)
383d340e88 : Small optimization for adam (#12107)
5da8a8c785 : Handle undefined tensor in blob correctly. (#12125)
325101263a : Aten: catch2gtest (#11846)
0f81039eaf : Better high level C++ documentation (#12079)
db5f8d42bb : Remove TIndex typedef from core/common.h (#12032)
478803a75f : Introduce type variables to implement generic list operators (#12040)
75b1ae1acd : Update issue templates
1b45f68397 : Use atomicAdd from cuda_fp16 header when building with CUDA 10 (#12108)
6ff568df4d : Add full namespace resolution in CAFFE_DURATION (#12065)
d9c27f4d8d : T33898723: Simple put operators for caffe2 stats (#12057)
c2f8f5076c : add narrow() support for sparse tensors re: #8853 (#11342)
78fe149ab9 : Fix ONNX bug, add symbolic for full
18f9c07b18 : Enable tracing of tensor factories with an out argument
b535aecd7c : Fix warnings emitted when testing distributions (#12038)
02d7c88fa4 : Unify versions across setup.py, libtorch, and libcaffe2 (#12053)
c8a0b11b7f : add autodiff expressions for common operations (#11832)
21ed7e51b6 : Blob doesn't allow access to destroyCall anymore (#11548)
65cbb8226b : IValue can store Blob (#11414)
b7ebc00979 : Move Blob to ATen/core (#11924)
8ff435c8f6 : Use tempfile during serialized test comparison (#12021)
807de9a1e3 : fix segfault when grad to a hook fn is None (#12028)
db2f7de5c3 : Fallback CreateMutex/AtomicIter operators for mkl-dnn
28dba2f928 : Unify all *_EXPORT and *_IMPORT macros across c++ backend (#12019)
90bcf41291 : Add safety asserts for methods on TensorImpl which don't work on Variable. (#12058)
658386a63f : Make USE_IDEEP work again (#12026)
b7b9e3c7e8 : Fix "identifier following the 'template' keyword does not refer to a template" (#12037)
1e28294487 : Delete some unused variables. (#12059)
e53e8df20b : Support TypeIdentifier::name() (#12036)
aa1adde80b : Refactor fastGet/fastSet for clarity, removing a null pointer check. (#11902)
ceadde2a7f : Add some more locations to search for nccl. (#12063)
b263078bc3 : Fix CUDA division by a scalar on large arrays. (#12023)
a106388187 : Free MAGMA queues after use (#11882)
8f0db9bbbb : Removing some dependency edges from Blob to other caffe2 (#12043)
94c513cc7f : Improve pybind11 message (#11640)
364ae10bb8 : nomnigraph - easy - add some python test helper methods (#12020)
7122f8b3bb : Disable more flaky tests on CircleCI (#11399)
d7e11e3aae : Revert "Move CreateContext to global registry (#11688)" (#12049)
3deb4791c3 : Replace 'struct Tensor' with 'class Tensor'. (#12034)
fcb3ccf23f : Don't record Git version automatically via cmake (#12046)
0947712e5d : Move Factory functions from Type to TypeExtendedInterface. (#12025)
d4ce41c4de : Rename tensor_impl_ to impl_ in Tensor (#12035)
71b99f28be : Give default values to members of TensorImpl. (#12033)
2cdf98a74d : Back out "Removing some dependency edges from Blob to other caffe2"
3417a1e7e4 : Prepend a "const" to a for loop in printPyObject. (#11857)
17a65bf9b6 : Removing some dependency edges from Blob to other caffe2 (#11923)
dfa03e94eb : Fix misspelling of AVAILABLE. (#12016)
86e025fca2 : magma-cuda should reference updated versions (#12000)
5d4624a1d9 : Fix return temporary as reference in MPI backend (#11947)
9068a46dba : Fix deprecated function warning in ONNX model test. (#11827)
a830964007 : Eliminate no-op adds and muls in peephole pass (#11801)
3ae6ee4ebd : Move CreateContext to global registry (#11688)
b7c302da1a : Make gen_jit_dispatch runnable (#12018)
70e4b3ef59 : Revert D10006069: Remove TIndex typedef from core/common.h
e05d689c49 : Unify C++ API with C++ extensions (#11510)
1c09bfde1b : Make promoteType(half, integer) -> half (#11941)
51414822f5 : Stop moving constants into DifferentiableSubgraphs (#11809)
ffbac7d0bb : Miscellaneous updates for CUDA 10 (#12017)
a6f1ae7f20 : set up c10 scaffolding. Move macros proper first.
1a1d79e761 : Remove TIndex typedef from core/common.h (#11993)
a9e6a673ae : Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876)
1178851280 : Get rid of most usages of Type.tensor. (#12002)
76ab26cc3e : Remove unused THNN functions due to removal of torch/legacy (#11946)
a6630e25af : Remove many caffe2::TIndex and replace them with int64_t (#11943)
5d0f1c3c8f : Add #include to satisfy Android NDK unified headers
7517e53468 : Update onnx submodule to onnx/onnx@c4734c6 (#11958)
f15474ade8 : Export caffe2::Caffe2Annotation symbols (#11965)
1c282ab99a : Move GetExceptionString to Error.h (#11501)
825181ea9d : Rewrite C++ API tests in gtest (#11953)
d0db23e95a : Add distributed annotations
de11fe0c83 : migrate PReLU to ATen (#11758)
89d56ae435 : Move function deletion from the stack to the heap. (#11611)
b5f60af94c : Shape prop view/reshape/as_strided through prim::ListConstructs (#11877)
7efbf3a827 : Specialize ArgumentSpecs on tuple elements too (#11863)
1cf5b0c7c1 : Fix casting logic for 0d CPU tensors in CUDA ops (#11808)
1ad7e0c5ec : Minor JIT improvements (#11654)
4e65fbfee5 : Remove tests from EXCLUDE_SCRIPT that pass (#11916)
00fe2c5606 : Use -O1 for sleef build in Debug mode (#11942)
775358e4c2 : Add non-legacy test of bilinear (#11935)
23f5b2abbe : Fixes an error with canonical url. (#11938)
c2a2110d71 : Stop tracing _out overloads (#11910)
c6a14b1edd : Revert D9985212: [pytorch][PR] [minor] remove a remaining todo line deletion in THD cmake
817e83fc01 : fix PR #11061 (#11815)
6834dcab1c : Align cuda multinomial without replacement to CPU behaviour (#11933)
784d345828 : Fix docstring of `torch.jit.createResolutionCallback` (#11921)
e655f16c35 : Pop stashed IntList in resize_, warn about its usage when tracing.
4fb7e72fe5 : Fix _thnn_fused_lstm_cell backward (#11872)
48c8adfe1b : Turn storage on UndefinedTensorImpl into nullptr. (#11738)
11bd2f2509 : Retainable is no more (#11900)
a7afd133f5 : Sync FindCUDA.cmake with upstream cmake repo (#11880)
58d28a5f12 : Fix saving loaded module (#11915)
0d9be2135f : remove a remaining todo line deletion in THD cmake (#11920)
b2b05b7c20 : Move blob serialization to free functions (#11817)
17cd426c72 : Updated docs styles (#11835)
d712a71741 : Protobuf serialization (#11619)
30521a37ad : codemod: caffe::float16 -> at::Half (#11785)
a9459bf7b5 : Replace float16 with at::Half in caffe2 (#11676)
9c44c60794 : Bump up the frontend version (#11873)
9f0d9db6e4 : Improve GRU/LSTM documentation for multiple layers (#11896)
c7751f4df0 : MIOpen bug fixes and performance enhancements (#11766)
b91b15d86e : Implementing Matrix Norm for torch.norm (#11261)
6100c0ea14 : Introduce ExtensionVersioner for C++ extensions (#11725)
068eac255b : Jit fuse clamp (#11574)
d8f6be686d : Remove torch/legacy (#11823)
24ec813967 : Defer lazyInitCUDA() until needed (#11893)
9cd0ae5e2d : Remove deprecated factory functions from Type.
87701289a3 : fix link to previous versions (#11894)
0927386890 : Workaround CUDA logging on some embedded platforms (#11851)
1c77f9e543 : Support torch.distributed.barrier in gloo backend
8f4601fbac : re-enable test_scalar_fusion
23dd5b4a53 : Back out "Open-source ThreadSafeActivationCleaningPredictor"
83740eae4a : Avoid using PyThreadState.frame as it is not a public member. (#11855)
c64331f48f : Add test for verifying combine_spatial_bn values in DPM (#11710)
aa8cd7319a : Enable build_test on windows (#11802)
c22dcc266f : Show build output in verbose mode of C++ extensions (#11724)
1091c5e59f : Throw error on indexing a 0 dim tensor (#11679)
6831d64591 : Fix the symbolic for embedding_bag in ONNX_ATEN_FALLBACK (#11840)
ae1a972d78 : Fix #11752: correct numerical issue with log_softmax (#11866)
6302e4001a : Delete unnecessary include from allocator.cc/event_cpu.h
f4d25039cb : Fix Array.h when compiled with C++17 (#11816)
b06e35b568 : Back out "Revert D9924348: Expunge (transitive) caffe2_pb2 dependency from tensor_impl.h from context_base.h"
cedd12d86a : Explicitly qualify references to CPU. (#11819)
24e958a0a7 : Move bernoulli into ATen (#10273)
cf5a21e4a1 : Add back proto opt disable feature that was lost during refactor (#11875)
c30790797f : Minor data loader doc improvements
ce55767091 : Add the missing header (#11864)
3b1a5a1b8a : Refactor tests part 2 (#11811)
52472508e9 : Add env:// rendezvous test (#11782)
fa32317780 : Add empty tensor tests to test_sparse (#11228)
8c3a94eaf2 : Improve autograd profiler performance (#11773)
b3a2665e0f : Code-reorg to have TORCH_ARG in its own header (#11787)
32494c226e : OperatorDef <==> NodeProto Conversion (#11621)
8601b33c07 : fix half grad assignment (#11781)
b46f1b8ca7 : Open-source ThreadSafeActivationCleaningPredictor (#11779)
77af40c025 : prioritize Accelerate over OpenBLAS (#11812)
53b5f14f59 : Remove inclusion of caffe2 pb (#11820)
a26ad5a332 : Remove unnecessary check on device option pointer (#11845)
8aedc27a63 : checking device types of input and weights at RNN (#10185)
e80d1d2876 : Revert D9924348: Expunge (transitive) caffe2_pb2 dependency from tensor_impl.h from context_base.h
2c358eaf51 : Caffe2: add plan name to logging (#11704)
1f34be47d9 : Raise error when perf test result is NaN (#11588)
a79f5d77ad : Add pretty printer for JIT IR (#10319)
1c8686001f : Expunge (transitive) caffe2_pb2 dependency from tensor_impl.h from context_base.h (#11818)
3da8d71d7d : remove protobuf inclusion in core/logging.h (#11814)
53cf628503 : Simplify Blob move constructor/assignment (#11402)
e585f2fb48 : Polish CPP docs, Minor Python Docs Fixes (#11722)
8ad846fda5 : Don't build Detectron ops with NO_CAFFE2_OPS=1 (#11799)
d4e1fa45d0 : allow no-alpha add/sub in onnx symbolic (#10972)
7d25fa3c72 : Emit Undefined type for value when it is Dynamic type (#11810)
1d399a80a0 : Handle pollution of MAX, MIN and CHECK macros. (#11805)
9eb72889b4 : Add successor/predecessor functions
47956ddf7e : Revert D9755189: [pytorch][PR] [API CHANGE] Add empty tensor tests to test_sparse
540ef9b1fc : Add distributed get_backend (#11715)
2732c8bae1 : improve aten/convolution error message (#11768)
98aebed88e : Refactor tests part 1 (#11350)
6073f3073e : Document torch::nn::init (#11778)
c8fbeb3aa2 : Add empty tensor tests to test_sparse (#11228)
e00fb69b25 : Use CATCH prefix to avoid name conflicts with Caffe2.
4ee0a78ee6 : varargs for meshgrid (#11600)
e2bc95e1bd : add `ModuleList.insert` (#11664)
91b6458e2d : Container __getitem__ slicing for subclasses (#11694)
e734c94fa2 : Quick update to embedding_bag doc (#11784)
407a9fee0c : make copy constructed tensor a leaf variable when using torch.tensor(sourceTensor) (#11061)
63c811b3a6 : Include some JIT things in C++ docs (#11712)
bd43d64dd5 : Add strides to Tensor (#11763)
a02685e109 : Fix test_torch's test_potri (#11770)
3cbec5453b : Reorder statements for readability (#11764)
a7cbcb1bb9 : Enable build_python on windows (#11385)
63e384a381 : SNNTest with Data Preproc Service (#11707)
7f0dd2487d : Move AT_HOST_DEVICE macro to Macros.h (#10945)
e8ecbcdf01 : Move IValue to ATen/core (#11610)
d4dde0bcaf : Detect number of amd gpus in ROCM CI (#11771)
24a8c13f36 : Add barrier to fix distributed test flakiness (#11775)
7d0657f13c : Migrate test in cpp/api/ to use gtest (#11556)
3819d25418 : Clean up converter and accept less-valid networks
ca5def1b8f : Expose annotations (#11649)
3ce17bf8f6 : Generate ATen/core to source if env GEN_TO_SOURCE is set. (#11759)
7df6650e9c : Fix empty embedding bag on cuda (#11740)
7671f4ab1c : Add `math` to scope when using inf in tests (#11302)
29610621ec : 64B align for avx512 (#11748)
336323f53c : return aten::gt to the list of fusable operations, add expected graphs (#11150)
73738ec570 : bump version to 1.0 (#11717)
47d65ed34f : Fix issue 10492 (#11634)
39520ffec1 : remove Type/Tensor/TensorMethods include order dependencies. (#11720)
e125e61824 : Fix flake8
cdefc27795 : Support lr adaption for SparseAdam and RowWiseSparseAdam (#11162)
7949250295 : Fixes for Torch Script C++ API (#11682)
a7e3cd09e0 : Fix ctc gradient handling (#11753)
07fd4450ab : Revert D9831398: [pytorch][PR] Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0)
f6a6d7fae1 : Switch at::TensorImpl to store TypeMeta rather than ScalarType
6660a128a5 : Cache and use TypeMeta in TensorImpl (#11706)
2baba7f835 : Add storage_offset to Caffe2 (#11701)
35518b3dc7 : Back out "Back out "Refactor Tensor/TensorImpl constructors."" E2: Confirm problem with old patch (#11744)
0d345cfa18 : Remove Type method defaults in ATen. (#11675)
5bfd8f583c : Moving copy of Caffe2 protos back to build_pytorch_libs.sh (#11726)
a8b1755de6 : Check device argument makes sense for legacy tensor constructors. (#11669)
d63bb72d89 : Remove symbol export annotations in THC/generic/*.cu (#11367)
f5bc2aef07 : Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#11563)
6f6b03566b : Vectorize grid sample 2d CPU kernels (#10980)
10c29c8970 : Fix CUDA 8 build on Windows (#11729)
ca6f08f359 : Set correct dtype for fp16 op inference function (#11693)
b3e726042c : Do not use FixedDivisor in ROCM order switch op (#11697)
eb3c47bdd5 : max -> fmaxf in cross_entropy kernel (#11733)
f09054f8d0 : Remove deprecate warning for Upsampling (#11568)
bb6f18c44f : Simplify IValue::toTensor() (#11355)
690c999bba : Simplify union payload copying (#11353)
270fb22bd8 : Remove intrusive_ptr::reclaim() in Storage (2/2) (#11547)
f4d9fe395d : Remove intrusive_ptr::reclaim() in Storage (#11352)
2c8a1b957e : Back out "Refactor Tensor/TensorImpl constructors."
8e76dcf173 : Prevent raising KeyboardInterrupt in worker (#11718)
d24bcfd930 : Suppress hiprand "duplicate-decl-specifier" warning (#11698)
8e3f8c52e8 : Document the Sequential module (#11648)
96d3f968eb : Splits CPU and CUDA fusion compilers (#10981)
70e68e755a : Casting for binary ops (#11708)
224e62bbec : respect USE_CUDA_STATIC_LINK in build_libtorch.py
0c2648830f : Augment emit_nvtx to help connect backward-pass Function apply calls with their corresponding forward pass ops (#10881)
b90872c00e : Get rid of default arguments for TH/THC factory functions. (#11673)
7535d98ec4 : Add message tag parameter to send/recv
3258fc11a7 : Delete torch/csrc/api/README.md (#11703)
278e304c18 : Implement elif in string frontend (#11667)
115b13ffab : clean up some old Half stuff
eb039dc92c : Add CHECKs into GetTensorInfo and ExtractDeviceOption (#11597)
0d9b9100f9 : Fix gesv and gels docs (#11699)
72822ee6b2 : Fix #11430 (CPU only builds raise opaque error message when calling .… (#11533)
2631da0822 : Move some Tensor method definitions from Type.h to TensorMethods.h. (#11650)
6c3792b9ec : Implement UndefinedType::typeMeta.
cda71e2600 : Disallow scalar parameters in Dirichlet and Categorical (#11589)
c391c20063 : Adding .expand method for TransformedDistribution (#11607)
74197c7115 : Restore support for dim=None on WeightNorm. (#11661)
19065f91fc : Centralize TypeExtendedInterface casts. (#11576)
c5f7da3f4a : Support FP16 sparse lookup (#11674)
1637729620 : Fix ci by skipping some tests (#11668)
e6fe8d9cf5 : Try to delete codeowners for ATen/core (#10693)
2431eac7c0 : Ensure most Distribution methods are jittable (#11560)
99c0b96f68 : optimize norm on ATen CPU backend (#11565)
98e04db955 : Implement requires_grad propagation in the JIT (#11586)
513fd3dd36 : Improve doc of `torch.nn.functional.pad` (#11623)
760679352e : Move Pixel Shuffle to ATen (#9721)
e1cd220b90 : Reimplement swap() using default move constructor. (#11659)
02980d7f8c : Refactor Tensor/TensorImpl constructors. (#11657)
7607b49538 : s/GetDevicetype/device_type/ (#11656)
c18510463b : Reduce includes in tensor_impl.h (#11643)
8402fde279 : Revert D9778043: Pass Storage by value
85ff72348d : Only involve tensor device in CUDA -> CPU copy, not current device. (#11592)
4672280b55 : Pass Storage by value (#11546)
05e06f7de2 : migrating deprecated calls without abc module for containers (#11515)
29e29ca6ee : Use MPI_Isend/MPI_Irecv to back send/recv (#11630)
f129da1a47 : Add max to the ValueError for EmbeddingBag mode check (#11655)
90537289a0 : Constexpr std::move / std::forward for C++11 (#11396)
0f1ca569ce : End-to-end dynamic slicing with ONNX DynamicSlice experimental operator (#11255)
acb6f18bab : fix generate_code.py caching (#11644)
75f49befeb : move instance_norm to aten (#10792)
912d3626c8 : Split tensor.h into tensor_impl.h and tensor.h (#11642)
45e9ee096e : Fix test_mnist_training_leaks_no_memory_cuda warning (#11639)
9abc666745 : stop allowing extra positional args in arg parser (#10499)
6f53b4efea : Remove implicit bool casts (#11503)
ab3a2d25fb : Improve error messages when trying to use nested lists.
5bc90b8554 : support conversion and dispatch of complex numbers (#11603)
a861573e36 : fix tensor export bug in IR export (#11613)
d278344e36 : Automatic update of fbcode/onnx to 39dd0d4fec5913aa517b71bcfcbf638a427894eb (#11622)
1f49b879d1 : Add missing include for __half (#11638)
d4d72b87e3 : Sphinx is case sensitive
57f149a861 : Only join pin_memory_thread after it started (#11599)
36fc1a0a58 : Merge caffe2::/at::Storage
77f6998e54 : Guard against inputting or returning sparse tensors (#11550)
cac11a4ac3 : Merge caffe2::/at::StorageImpl (#11543)
44b2b6b150 : clean up jit generated tests (#11403)
e998038bc0 : Use TypeMeta instead of TypeIdentifier within at::StorageImpl (#11236)
6f05b5ee54 : Pin Sphinx to 1.7.9 (#11620)
17637f2b03 : enable_mkl support for resnet18+lstm model
0a6931cfee : Only reference ONNX through onnx_pb.h (#11609)
5da0b31bee : More native docs on TensorOptions. (#11558)
f00f99ebcc : use at::Half in THC (#11322)
daa379ffd7 : Disable flaky test ObserverTest.TestMultipleNetBase (#11596)
e2cd627cce : Temporarily disable docs build. (#11608)
7f7cda99cd : Optimize order_switch_ops on GPU (#11404)
776a9992e1 : topk test fix, hgemm integration (#11593)
def44c96fd : Revert D9779866: [pytorch][PR] Move function deletion from the stack to the heap.
5b2efcf425 : Document the Conv module (#11566)
130d55a5f4 : Allow building the C++ API without cereal (#11498)
12efef166a : Split out copy_op from utility_ops (#11470)
316c167940 : Add checking of nullptrs in GetTensorInfo (#11587)
eb7a298489 : Add resnext model to OSS (#11468)
c81406c514 : Document Any (#11580)
ac94889939 : Add jit doc entry to sidebar (#11598)
b663b7ce7e : Update ROCm Docker image with latest AMD debians (#11507)
02c4cd3c8a : Skip flaky distributed tests (#11594)
d4e05f4e1e : Move function deletion from the stack to the heap. (#11534)
958ba4e913 : Aibench for asr decoder
f0a440007e : Explicitly set locale on docs build. (#11595)
504126e705 : Documentation for debugging JIT
a3036b3bb3 : Fused weightnorm for ATen (#10842)
9a7c196040 : Move Type, Tensor, TensorMethods to core.
739e6af869 : Add remainder % to the jit
ad7936e108 : Fix reloading modules back into python (#11552)
17e76e26c8 : Add trigonometry functions to docs/source/onnx.rst
13b05c8c78 : Add EndToEndHybridModel CUDA tests (#11544)
23d55883c0 : minor formatting error log (#11528)
6398d626f4 : Warn that export+import module always load onto the CPU (#11485)
12f4c46eea : caffe2::StorageImpl use at::DataPtr (#11282)
e5dd77c7ad : Sync all libnccl soversions, not just libnccl.so.1 (#11575)
f0a284502a : Document BatchNorm and update default behavior (#11484)
6fc18a7541 : Typo fix in randomness.rst (#11571)
efc0f6784a : Move some bmm/baddbmm to ATen (#11292)
76070fe73c : Make c10d test work on CPU only build (#11567)
6597779847 : Clean up some C++ cruftiness in the script lexer.
3e3d8caecd : Allow setting deletion constant
6dcdbd3a1d : Make C10d support CPU only build (#11513)
90e31f4896 : Improve tracer warnings (#11545)
62c9d4ac96 : Make .to() methods native functions (to fix JIT tracing)
a00fa2c614 : Release GIL when calling into JIT interpreter
1a246c9c7e : guard spurious cudnn.h include (#11562)
a11ebfa195 : Add explicit "this->" for nvcc. (#11196)
8aa8ad8b01 : WIP: Reproducibility note (#11329)
b75c32ded9 : link against TORCH_CUDA_LIBRARIES
f4d9f39a94 : Test libtorch on cuda
35348dab10 : WIP: Include note on cudnn determinism in each function backed by cudnn (#11434)
54107ae8cf : convert output_device at data_parallel from torch.device to index (#10189)
045f862574 : Use torch::nn::init::xavier_normal_
d95fedb436 : Use ATen dropout implementation in Dropout module and add FeatureDropout (#11458)
3121c8f526 : Update gtest and remove the macro guide on gtest from #11321 (#11417)
92fd69f256 : Split Type into TypeExtendedInterface and Type (#11520)
35d52dbb0e : re-enable USE_MPI (#11416)
bbf54ea37c : Ensure .enumerate_support() methods are jittable (#11542)
cda74ac476 : fix nested no_grad decorator and with-statement (#11479)
8b196d671b : Allow tracing random functions (only when using default generators) (#11539)
b6b0b5222d : fix missing libnccl.so.1 error (#11553)
3a39006d38 : Fix some more doc
3a8e39b215 : Support load and store between Py_complex and std::complex (#11493)
289a8c9b7d : Allow train/eval, and non-Tensor arguments to python functions (#11505)
17776db2ee : Add gtest dependency on aten tests. (#11429)
4db21a1d8e : Optimize LengthsTileOp on GPU to run a kernel instead of a sequence of memcopies (#11413)
c1dce21fd5 : Cuda TensorAccessor (#11373)
c56a7cfc37 : More use of AT_CHECK and AT_ERROR (#11457)
5952acc041 : Add "merge to master" step before build in CircleCI (#11443)
fbc17321fd : Update pybind11 to fix Python 3.7 support for script (#11473)
781737f84c : Remove time prefix from rsync (#11525)
a566bc2f11 : Disable all CircleCI jobs (#11523)
d09041bd81 : Add an option to statically link cuda (#10596)
727a4453aa : New Serialization Proto
f80f15866b : Get rid of manual dispatch on Type. (#11486)
01c7542f43 : Use -isystem for system includes in C++ extensions (#11459)
d32b41003a : Copy protos on install same as develop (#11517)
deac304b6b : Bugfix for basic slicing
4e8d9a4a58 : Introducing python setup.py rebuild develop (#11487)
31850163ac : Remove separate ATen build target (#11488)
de460c7ad3 : Improvements on conv/pool/fold/stft/ParamDict docs (#11106)
86ab92b0a9 : Move TensorImpl / UndefinedTensor(Impl) to core (#11441)
80fa8e1007 : Add .expand() method to distribution classes (#11341)
120d769432 : Add support for tracing strings (#11506)
0ddbe668cd : Improve shape analysis to cover all most commonly used ops (#11358)
f84693efa9 : nomnigraph - Improvements to subgraph matching APIs (#11418)
3d5fd12488 : Documentation for c10d: torch.distributed and deprecate the old distributed doc (#11450)
0988bbad2d : C10d release to torch.distributed for PT1 (#11405)
b14a80553d : Ignore functional doc error
f9d12eeb27 : Give copy an optional device argument.
dd8defeb3f : Document the Functional module (#11460)
9cfdf0d677 : Document the Embedding module (#11469)
a175282776 : Flags for LMDB, LevelDB, and Caffe2 ops (#11462)
e1e69446f6 : Lockdown NO_TEST=1 for tests even more (#11415)
3e49a69466 : Resolve ambiguity when including both caffe2 and aten registries (#11411)
3ad67c60f0 : Traceable explicit Variable instantiation (#11463)
f2f43ad2da : Add new LengthsSplit operator (#10974)
0b78ae86c5 : Cleanup byte swapping utilities to generate optimal code on the platforms we care about. (#11394)
a0d4106c07 : Integrate custom op tests with CI (#10611)
3e665cc29b : Improve support for tracing sizes, add more tracer warnings (#11288)
70d93f4777 : Check for maximum numel in NCCL broadcasting (#11466)
35008e0a1a : Add flags to fix half comparison and test (#11395)
18e5fd36c2 : Normalize gradients before reduction in DistributedDataParallelC10d (#11109)
ea0ee77c61 : Fix katex math rendering (#11472)
198ade74f9 : Remove manual refcounting from Tensor class (#11294)
b0c1397271 : Fix intrusive_ptr move/copy for different NullType's (#11260)
252f93df09 : Improve Tensor() constructor (#11258)
09292f2c03 : Some improvements to IValue (#11238)
ce6906b051 : Narrowing Blob (#11167)
040d75d455 : Add option to use CUDA memory leak testing as a context manager (#11380)
2158f4a9c8 : add export import test to TestJitGenerated (#10982)
cee743f639 : Move backward/set_data to Type-based dispatch.
87a9a8f80a : Use AT_CHECK and AT_ERROR
560d6efd3a : Only join started dataloader workers (#11432)
87b2f05a9c : Also set stdin to subprocess pipe in FindCUDNN windows popen call (#11435)
581099a7b2 : pybind conversion for IntList (#11425)
ee4309a9ac : override BUILD_TEST when building gloo (#11431)
1b94f5c6e6 : optimize masked_fill on CPU (#11359)
b7ecf035dc : Updates FindCUDA.cmake to 3.12.2 upstream version (#11406)
6683fb56ca : Add AVX optimizations for pdist (#11230)
538ea67437 : Search for CMake config files for pybind11. (#11423)
02114e877f : fix #10838 incorrect bidirectional output format (#11368)
ac9268f25d : Conversions to and from complex numbers. (#11420)
d3f98b5ffc : Add matrix power (#11421)
802380ac93 : Improve LegacyTypeDispatch to handle initialization correctly. (#11331)
9687a72794 : Move the type registry out of Context, into LegacyTypeDispatch. (#11274)
b9b9ae935b : Make torch.randint have default dtype int64 (#11040)
505ecab88d : bumping up the default store timeout (#11409)
3d2862526b : Support send/recv for the gloo process group (#11387)
47c1de25e8 : Test exporting batch norm, dropout, RNN
b7a2c91eed : remove unnecessary clone() when .grad is None (#11165)
c49b01a8a0 : Change default variants to 'function'. (#11247)
fa522d1aed : Revert D9720931: [pytorch][PR] [third-party] Update googletest to release-1.8.1
c9843bd86b : Update googletest to release-1.8.1 (#11388)
31d36b1d31 : move complex registration test out-of-line (#11397)
4ae16c9ad9 : Recursive descent for validation + convert expands in ATen fal… (#11356)
4c8cc36e34 : Fix igios build (#11392)
4bf5fc44c8 : Fix split_size test failures (#11051)
9886ebeb24 : Remove hardcoded system path from CMAKE_MODULE_PATH (#11386)
802d21c8f4 : Remove FULL_CAFFE2 flag (#11321)
93da5a21c9 : Update variable view note
77b6d7d255 : Doc improvements (#11347)
7de0332e10 : Add initial documentation for JIT (#11357)
69b4b45f91 : enable missing nn tests with single grad check, minor refactor
576807ce1a : flaky test fix trial (#11391)
e9da2dd3cc : Do not use PERSISTENT cudnn mode for spatialBN (#11382)
01930a3145 : Move sync_params to C++ (#9805)
ba6f10343b : update CUDAExtension doc (#11370)
733402bef4 : Fix issues with certain heterogeneous types in lists during tensor creation (#11377)
5e400e9cae : move context_base.h to ATen/core (#11336)
fb4e8088f3 : Remove methods that start with an underscore from at::Tensor (#11152)
e80f7e1f64 : Fix more warnings (#11320)
91089a7e17 : Add GPU implementation of pdist (#11102)
110191e5c7 : Remove detach from TensorImpl, handle via Type. (#11337)
52b37d8b66 : Move VariableHooksInterface to ATen/core (#11273)
396e64fff7 : Move ATen/Registry.h to ATen/core/Registry.h (#11270)
b02b125d16 : Rename getMaybeVariableType back to getType. (#11250)
68371b6d2e : fast code path when partition=1 which makes LengthsPartition a simple copy (#11351)
da4ebc2971 : Switch SVD on CPU from gesvd to gesdd (#11194)
f9595e756e : typo/grammar fixes (#11344)
a2afad2b69 : Improves ATen CUDAEvent (#11293)
b3b1e7624d : Optional expand=True kwarg in distribution.enumerate_support (#11231)
c59c1a25b2 : diagnose option: get_entry to print a whole row (#11308)
2946b021e3 : Disable flaky test, see #11360 (#11361)
3149a72c63 : Move TensorOptions.cpp to the correct place in ATen/core (#11244)
c45607f77f : Static assert GetMutable is not passed with Tensor argument (#11323)
0f419abf40 : Roll nomnigraph build into caffe2 (#11303)
9de2085806 : Use custom hcc/HIP, purge hcSPARSE (#11198)
ec5404a449 : Add cuda version of SpatialBNOp also optimize SpatialBN on CPU (#10888)
7726b36489 : Full-fledged group testings and fixes for c10d frontend APIs (#11318)
1a01c75dde : support gradClipping per blob in mtml (#10776)
c39216f8c4 : Automatic update of fbcode/onnx to bff0b8835870c7df7762ef43498d000d2d8ffb52 (#11346)
4d678790c5 : enable advanced indexing with tensors (#10862)
148f7cc47a : nomnigraph - nit - fix generated code to be consistent with style (#11343)
49231ab0a8 : Reimplement storage slicing. (#11314)
1d406c04ae : fix comment on Cost params_bytes (#11190)
68613cf5a2 : Windows DLL build with Caffe2 code (#11266)
34c0043aae : Force third_party Eigen from setup.py (#11334)
03ca7358af : Add unit test for Parallel Spatial Batch Normalization (#11098)
5712fe3297 : Fix out-of-boundary conversion issue (#11338)
ec195129ec : Adding setTimeout option in Store (#11265)
fef52cc1f8 : Add resolver for 'torch' module (#10847)
0f1ec07c57 : nomnigraph - nit - rename unit test files (#11315)
ed8849b640 : Add include path to Doxygen preprocessing and add some documentation (#11313)
f98bd53b01 : Small fix to the UniformIntFill tensor shape and type inference.
1ad61a18b2 : Rename cuda tests to have 'cuda' in their names (#11332)
0ef2b318a2 : fix empty net type (#11286)
936bba77d1 : cudnn 7 upgrade with spatialBN fix (#11291)
4ae95738b2 : Ignore FuseGraph Call on Windows (#11015)
a853a74217 : defer resolution of mkl to a cmake wrapper library (#11298)
dda8402447 : Cleanup dependency of distributed flags (#11221)
68930c48cf : Move minimal wrapdim functionality to core, remove THTensor include i… (#11283)
f6568b00f5 : Change includes from ATen/Storage.h to ATen/core/Storage.h (#11217)
656e81db93 : Fix scalar tensor assert in fusion compiler (#10952)
bb7d1837bc : Add dead code elimination pass (#10101)
220c9e52b9 : Distributed Data Parallel CPU module for C10D (#11168)
126ac4b71f : Back out "[pt1][tensor] Add strides to caffe2::Tensor"
fb836db4b2 : Fix conv gradient conversion (#11312)
dccd0f2de6 : Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (#11050)
83a1ab2136 : Sparse tensor printing; add NotImplemented autograd fn (#10181)
fa147abda4 : Add convertToCaffe2Proto to python API
425ea6b31e : fix doc for functional.dropout* (#10417)
ad116210e5 : typo fix Tranpose2D -> Transpose2D (#11281)
a9d8b021e9 : Remove THFinalizer
c0efe6f027 : Forward declarations of needed curand functions (#10911)
57728f71e7 : nomnigraph - simplify core graph API and test (#11256)
c43187291c : Small fixes to cppdocs for sync script (#11300)
c9e66351a7 : Port all PyTorch and Caffe2 jobs to CircleCI (#11264)
9f4bcdf075 : caffe2::DeviceType -> at::DeviceType (#11254)
ac9f0a6884 : refactor preproc, support dense in TumHistory layer
3e85685f8f : add persistent rnns with conservative criteria (#11248)
68c2e014cb : Handling for py2/py3 division differences (#11016)
9a0effb92c : Update send/recv tests to reflect intended use (#11275)
8da081f7a5 : Add cost inference to ConvGradient and WeightedSum operators (#10744)
4fe3356ee0 : Move collapse dims into a single place (#11272)
5e2067ce30 : Fix some more warnings (#11257)
f866574afc : Fix the batchnorm onnx exporting when affine=False
55212507a2 : Improve error message to include return types too (#11245)
e6d6aed12e : Check doxygen output in travis (#11124)
267e1ec112 : Accept more numpy scalars as doubles (#9659)
8bd80a6b74 : Fixed log message (#10874)
434e943b08 : Fix to distribution.__repr__ with lazy attributes (#11263)
9fc22cb772 : Add import export step to end to end tests
1808e368e4 : Add complex hooks for out of tree complex implementation. (#11216)
aeb6094538 : Unify opt flag for cmake codegen (#11227)
d612855b91 : nomnigraph - fix memory error in NN subgraph matchOp (#11127)
6d6655e6be : Port PackedSequences functions to C++ (#11224)
b7038f7c37 : Treat numerical differences as warnings instead of errors when tracing (#11246)
b7cd4b692c : add a Float16UniformFill (#11123)
d4060d2d0e : Implement torch.tensordot (#10025)
d1b920b44f : keep net type info when generating model complete net (#11032)
56bdd87b40 : Get rid of some uses of type() (#11215)
9ca63c5e63 : Reorganize methods in Type, add CPUTypeDefault/CUDATypeDefault (#11205)
f0d3fda064 : Improve docs for torch::nn::Module (#11115)
7f74875304 : Pull Context out of TensorMethods.
05cb40dc00 : Move some includes from Tensor/Type to core.
c8672f0b42 : Support environments with no libprotobuf (#11161)
020501b7b0 : Getting rid of USE_C10D for build (#11237)
313e89d8db : Fix dimension collapsing (#11226)
6219c4a28f : Make Scalar::toTensor a free function, move Scalar to ATen/core.
033499cf56 : Remove mention of USE_DISTRIBUTED_MW (#11240)
3f30c296d3 : Export CAFFE2_PLEASE_ADD_OPERATOR_SCHEMA_FOR_* (#11233)
7e0a052a5d : Adding synthetic data generation to the filler.h file (#11060)
1eed7d5f0b : Report an error when trying to record a mutable operator when (#11129)
0e8088d6f6 : Fix typo in data_parallel_model
ec6f0ed560 : Additional Python Bindings
750cd48980 : update expect file for short circuiting (#11229)
684b55d762 : In default, use third party eigen. Added new flag USE_SYSTEM_EIGEN_INSTALL to control. (#11020)
539579aa9a : Logical short circuit (#11116)
b2217109ec : Move TensorOptions to ATen/core
0ff1bb0d8a : Remove Type constructor from TensorOptions, add Type::options (#11189)
0d5e4a2c66 : Allow passing through arguments to unittest (#11209)
050aa42e09 : Fix some more compile warnings
cd4c32691d : Add complex32, complex64 and complex128 dtypes (#11173)
c5b021cc88 : State dict loading arguments were in the wrong order (#11200)
7e2136c2b5 : remove allclose from test_doc skipped list
24eb5ad0c5 : Fix unit tests on CI (#11191)
0a8c8c1dbe : Rename real to scalar_t. (#11163)
43fd6b234d : Make Type a (mostly) pure virtual class; TypeDefault for impls (#11013)
e1a17d5a42 : Should not use CAFFE2_API when definition is already in header. (#11114)
cf10efb8d4 : Fixes unclear exception message for F.conv2d (#11053)
593d74061f : Document torch.allclose (#11185)
33c7cc13ca : improve docker packages, fix bugs, enable tests, enable FFT (#10893)
abe8b3391d : LowRankMultivariateNormal cleanup
4d28b65fb8 : fix serialization of nn.Parameter with dill (#10296)
1350f76b62 : Fix max and min with inf on CUDA (#11091)
7eba9849c1 : Pool constants during script compilation. (#10231)
7af6f9515f : Move TensorAccessor to ATen/core
011f615945 : Fix compile warnings
1506547771 : Disable -Werror on macOS test build (#11090)
f60a2b682e : allow spaces in filename for jit-compiled cpp_extensions (#11146)
43e73f85ad : Dont optimize slicing dispatch when we are tracing (#11156)
b3d559cdd1 : Optimize WeightedSumOp for two inputs (#11049)
b834d9107e : Revert D9566744: [New Checkpoint] Kill the dummy TaskOutput when task.get_step() (#11164)
1b7172a2b9 : fix the slice onnx exporting
03c06ec93d : Traceable detach (#11038)
861e1c430c : Move StorageImpl and Storage to core (#11154)
4abddad1a0 : use py::str to remove deprecation warnings (#11107)
c48bf3a77e : Automatic update of fbcode/onnx to 1b09eb14c2c781fae078fa6b1c0390ba6fc0898c (#11153)
5987b44dda : Remove aten doc/ folder (#11158)
3081c8ea1d : Lower trivial differentiable subgraphs (#11110)
c87d082d26 : Use ->data<real>() instead of THTensor_(data) and c10::raw::intrusive_ptr::decref instead of _free (#11039)
adeebed549 : Delete TensorImpl::toString() (#11035)
5286925d4a : Add getMaybeVariableType(const TensorImpl*) (#11031)
2c5ae8c4bf : Get rid of type() method on TensorOptions; use at::getType instead (#11023)
fd110411b7 : Don't convert TensorOptions to type before printing.
48c2f3cf0f : Move TensorOptions Tensor methods to TensorMethods.h (#11144)
780d2792c5 : Warn about non-traceable behavior when tracing (#11088)
c31ebccd01 : Clean up TupleType and SchemaParser (#11007)
f4b2961af9 : Simplify assignment operators (#11027)
6508db7421 : Remove BUILD_CAFFE2 and build everything (#8338)
a2a584f347 : Proper recompilation tracking for more files in tools/autograd (#11143)
3791bd12c8 : PT1 Release Milestone No.2 MPI Group Support with all tests passed (#11128)
d95e68c8cc : Delete Tensor constructor from TensorOptions. (#11101)
a585158c9e : Some usage examples for TensorOptions
e2bdd35cf0 : fixes to device.cc (#11122)
f30fd7fb5c : Get rid of the runtime type in TensorOptions (#11021)
1db5a7d8f0 : Move variable getType lookup support to Context
9fac0a5093 : Rename at::getType to at::getNonVariableType (#11096)
0961c923c0 : Unbreak the build
3073051a18 : Revert D9554375: Support lr adaption for SparseAdam and RowWiseSparseAdam
82aeebb3d9 : Fix a bug in addmm fusion in the JIT (#11100)
0555768e0f : Support lr adaption for SparseAdam and RowWiseSparseAdam (#10993)
f1bfe6750f : Back out "[caffe2] Update blackbox predictor with new constructor" (#11105)
9fae8fcdff : framework for committed serialized tests (#10594)
00df09b65d : Change specialization rules in GraphExecutors (#10977)
a320e5cbd3 : Move static_context outside of class (#11097)
750ede7215 : Rename getType to getVariableTypeFromBaseType / getVariableType (#11095)
c836a04dc8 : Delete a bunch of uses of getType in favor of TensorOptions.
34a0604d51 : Eliminate use of getType from DLConvertor (#11080)
c283acce72 : Rename getTypeRaw to getNonVariableTypeRaw (#11078)
66c4d7e060 : Rename getTypeOpt to getNonVariableTypeOpt (#11077)
f3c3127c67 : Don't flatten output lists in the JIT IR (#10949)
c8c21fa2b4 : Allow same flags when glog is used or not (#11034)
26409a4300 : Caffe2 flags needs to be used after the GlobalInit function is called
a6cb41486d : update documentation for observers
15314c7b8e : GCC-7 doesn't like the original syntax. (#10665)
684bd1b7bd : size_ -> numel_ (#11112)
7ddc6f84c4 : NULL -> nullptr (#11047)
302e9cb815 : Update onnx submodule to onnx/onnx@bae6333 (#10961)
56c737a9b7 : Inject GetEmptyStringAlreadyInited once for static proto (#11045)
a136d29fd1 : Use intrusive_ptr in Storage (#10907)
f0142faab0 : Expose arbitrary cpp autograd functions to Python (#11082)
93bd291e55 : Change torch.jit.trace to no longer be a decorator (#11069)
ebe9d204fa : Add test cases to intrusive_ptr (#11026)
e85f3fccb3 : Fix relying on UB in test_data_parallel_nested_output (#11092)
9d4360c060 : Creates stream pool (#9938)
23b0c90e71 : caffe2: fix gcc8 warnings
611a608517 : Add ATen pdist CPU kernel (#10782)
029082e87c : Add entry for torch/lib/pythonX.Y in .gitignore (#11083)
40227671e9 : Add strides to caffe2::Tensor (#10826)
535633bddc : Export MPI functions (#11037)
e7195431e0 : Add benchmarking functionality to the benchmark app (#10976)
a8af7fe46a : Support import of `nn.RNNCellBase` in `__all__`
dbc0004f99 : Remove use_count() == 1 in Tensor::Extend (#11046)
23af7deea7 : Add has_lapack flag (#11024)
ad1670cf54 : Kill the dummy TaskOutput when task.get_step() (#11048)
16b8e0a787 : at::StorageImpl: Rename size_ to numel_ and elementSize() to itemsize()
394bdcd49a : Fix the build of aten tests when FULL_CAFFE2=1
e550eab3e2 : Remove MetaNetDef test case in Predictor (#11052)
91ecbf8b1d : Remove TensorBase (#11036)
ae635b16f7 : Record tensor factory functions in trace (#10935)
c4e1adf29d : Remove THHalf type
2cc98d8df7 : Adds `dim` argument to `torch.unique` (#10423)
98d85b1790 : Debugging help + test
ef7fc2a3e1 : Remove at::StorageImpl::finalizer_ (#11022)
6b87198245 : Devirtualize StorageImpl deconstructor (#11018)
d9b74f6540 : Make it possible to disable JIT using env variables (#10867)
c755616e00 : Enable Detectron model inference for CPU and MKL-DNN paths (#10157)
89834dfe64 : Add GPU version of HardSigmoid Op to Caffe2 (#10955)
22e3b2c9c3 : Revert D9413150: [New Checkpoint] Kill the dummy TaskOutput when task.get_step()
6a8bc3804a : Add flush to logging messages higher than INFO. (#10983)
0b1de74732 : Documentation improvement in caffe2/core/tensor.h (#11006)
e9eed8edb4 : Add doc for Tensor.digamma_? (#11008)
f687ff5a59 : Delete unnecessary includes from TensorImpl.h (#11005)
b644d5e74a : Delete context and get_context from Type.
cd9416317d : Minor copy-edit on setup.py
c99a143eea : Update blackbox predictor with new constructor (#10920)
56539f5fe1 : PT1 Distributed Release MileStone No.1 - Completed Distributed Package and CI tests (#10871)
fa7c81c640 : nomnigraph - nit - code style update (#10987)
ec519e8a4a : Reduce number of elements within test_abs
dbce1c840f : exposing net_transformer_fun before add grad (#11003)
bed9d41abd : Generate Type::registerCPU as we do register_cuda_types. (#10947)
4e446b85fb : Make profiler.build_table() O(n) rather than O(n^2) (#10969)
396dec0e37 : s/spaerse/sparse (#10968)
525548fb64 : Move SparseTensorRef to core, change some includes to core.
e0dbb91060 : Windows raw string fix (#10998)
206d52d0e3 : Disable smart_tensor_printer_test without glog (#10999)
562fc7631f : Add test cases for ONNX unsqueeze (#10924)
1b0d5e60ab : Get rid of some unnecessary includes of Context. (#10951)
a9469c9c8a : Fill eigenvector with zeros if not required (#10645)
b41988c71e : Cleanup BUILD_DOCS cmake section (#11000)
7169906249 : torch.digamma (#10967)
a5d7abedae : Enable fusing aten::expand on GT, LT, EQ (#10845)
db0abe1890 : Fix bugs in handling of negative slice + gather indices (#10973)
6ca28984c7 : Kill the dummy TaskOutput when task.get_step() (#10739)
beeec47041 : Sanity checks for tracing (#10841)
fe15aedacc : Store schema in serialized modules and check arguments in function call (#10872)
ba71547e93 : Add clip op to IR
90eb0b6031 : Cleanup accidental logging
72a84127b1 : Add Workspace methods ws.feed_blob(name, arr) ws.remove_blob(name) (#10929)
8e5b8490bf : Add relevant code for adding caffe2 pybind extensions registry to rocm (#10975)
4cb968fb77 : Default hidden visibility (#10752)
92ff070b83 : Add CPU version of hard sigmoid operator to caffe2 (#10837)
efd2aeac9e : Set -Wno-stringop-overflow only with GCC >=7 (#10954)
b3601a0425 : nomnigraph - add documentation for new ReplaceSubgraph api to README.md (#10802)
cfa5dbadfc : Add nomnigraph bindings
a88463cd9a : Working async version of AllGather, test fix and compiler warnings, and CI (#10932)
579bc43a14 : Future-proofing embedding.py against heuristic changes (#10959)
3b891d9d49 : Support direct access of `nn.RNNCellBase`
5c58cda8ca : Add subname to console output for assertExpected (#10559)
91797c0672 : Replace direct include of caffe2.pb.h with an intermediary header caffe2_pb.h (#10946)
5ed62ea6fa : Add Upsample example for torch onnx exporting
22c9bc3117 : Resolve builtins using a dict rather than by name (#10927)
c9d337f436 : Split IsEmptyOp (#10918)
7de830b879 : proper sharing in ShareExternalPointer (#10804)
7f9fd1cc26 : allow RandomSampler to sample with replacement (#9911)
504d705d0f : Support for CUDNN_HOME/CUDNN_PATH in C++ extensions (#10922)
1421a9d704 : added num_directions explanation to docstrings (#10786)
bee779bc83 : StorageImpl scalar_type_ to data_type_
82bb9fbedd : Remove Scalar.local(). (#10917)
7c7a2ccb58 : Update onnx.rst for v0.4 (#10810)
de099564e3 : Minor copy-edit on README
de9cc98e66 : Stop copying tensor memory when importing IR
2c342e50e1 : Fix a bug in constant prop (#10923)
157fb46ffc : Add -rdynamic only to linker flags to avoid compiler warnings (#10789)
f7b02b3a68 : Change Tensor/TensorImpl to use c10::intrusive_ptr (#10824)
f2bb9f0bb5 : speed up kl div loss (#10336)
f5910c8a36 : Add MIOPEN recurrent operator (#10840)
8e33451e2e : Make torch.cuda.* take device objects; Update distributed docs (#10833)
58b145f515 : Fix negative indices in tracer (#10560)
9aa92bc261 : Change the default value of DeviceOption.numa_node_id from -1 to 0 (#10877)
7842b6d0f7 : Fix at::optional compile problems on Windows CUDA.
6ce799edd6 : Tuples/Lists can now be inputs/outputs to script and other simple fixes. (#10812)
f64f6eed3a : move HeatmapMaxKeypointOp unittest to oss
35beecfe17 : fix xfails involving literals (#10905)
f940af6293 : Bag of Distributions doc fixes (#10894)
67f6f930a8 : Remove FIXME_zerol() from test_jit.py (#10900)
841d779598 : Increase BC for PackedSequence ctor (#9864)
c3271b53e4 : Remove ability of Scalars to hold Tensors.
3aaad3ecb1 : Begin a bestiary of MSVC/NVCC bugs. (#10883)
c8b246abf3 : Prevent JIT from overspecializing to every single size configuration (#10844)
9679fc5fcd : Handling failing test on ROCm.
ddc37d7487 : Update mobile predictor caller's interface
d632ccd2c1 : Cache isContiguous and numel
17dac3e17f : Create class constant for string literal 'blob_names'
8253cfaa72 : Conv BN fusion for 3D conv (#10239)
542aadd9a7 : Stop using symbolic override for tracing RNNs (#10638)
f2f6e6c0e8 : Add registry to pybind_state (#10759)
c172ffb632 : Remove the nanopb submodule
148ea2a653 : Create at::linear (#10799)
1fbabff76a : Refactor THCNumerics and add common math functions for at::Half (#10301)
87a7840fa6 : Remove Tensor constructor of Scalar. (#10852)
0d5584d8d7 : Revert D9492561: [pytorch][PR] Moving the operator argument to the front for kernelPointwiseApply.
0ef5cfd28c : fix ivalue printing for lists (#10777)
983e0f2413 : Remove Node::invalidateSchema (#10822)
74e6a666b3 : If none of the schema match, add ImplicitTensorToNum conversions where needed. (#10180)
474684cf03 : Re-sync with internal repository (#10868)
8044dc4eb8 : Support new Reshape semantics (#10848)
8130b1a950 : Ignore stack frames coming from python3 object file (#10627)
6e2f6dc6e6 : Move Allocator and Device to ATen/core
f1df85d799 : bug-fix in normal_( ) (#10846)
313139d14e : Moving the operator argument to the front for kernelPointwiseApply. (#10829)
e3d12d7afb : Automatic update of fbcode/onnx to 6146a85d371481222c10ede4430ad5476e60de87 (#10831)
3c9775fff8 : Remove nanopb since we've switched to protobuf (#10772)
8c13971f57 : Remove protobuf require and use requirements.txt (#10771)
474bd60bad : Provide a tensor overload to mul_out_sparse_scalar. (#10828)
e146518e46 : Fix AT_CUDA_CHECK and AT_CUDNN_CHECK macros (#10834)
ca567862b2 : Support multidimensional indexing (#10787)
6993e4a9f7 : Caffe2 Functional enforcing inplace output (#10797)
8da4167129 : Fix performance regression (#10835)
df2d48b42c : Added PrefixStore, pybind, test for group backward compatibility (#10762)
61b34d42e7 : nomnigraph - isSubgraphMatch returns the matched Subgraph & map from MatchNodes to graph nodes (#10605)
ee022a476a : Added this-consts to all methods on SymbolicVariable (#10805)
9403e0cac0 : Use ATen implementation of RNNs (#10761)
a4c59a9dab : MIOpen integration, more tests enabled, bug fixes (#10612)
3d43a82440 : Add support for vararg style functions. (#10250)
9dbcc9cebd : Move _raw_* intrusive pointer manipulations to raw_intrusive_ptr_target (#10779)
dec3ed7b49 : Increase the limit for Proto size (#10745)
432b3adffc : Print blob sizes on fatal signal (#10766)
82ddeb7f2b : Using shared implementation in Tensor (#10619)
23a366be33 : Use ATen native functions for THCTensor_cadd/cmul/cdiv/csub (#10707)
0f5c8edfd3 : Removes unused THCState code paths (#9735)
ab9e7ae23e : Add CUDA implementation of LARS --caffe2 (#10509)
b14f2e899c : Preserve sparse tensor shape and dim invariants, and add scalar tensor support (#9279)
0eb2c83006 : Fix link in THNN/README.md
fcfb1c1979 : Make more distributions jittable
529fc68df2 : Update docs with clean (#10819)
deda05e59f : Revert D9395814: move HeatmapMaxKeypointOp unittest to oss
b885dea300 : parallize the dense part in event models
5c0eece2fd : Force types on values returned from if blocks to be equivalent (#10281)
9a43fc5eaa : move HeatmapMaxKeypointOp unittest to oss
4aa5075cae : update the constructor to accept the PredictorConfg only to set up the predictor (#9483)
f0ec3bfa56 : Changes for Python3 compatibility (#10524)
44b47fd7f3 : Working pybind version of MPI process group and abort() pybind (#10606)
6c75fc0aa3 : Integrating stochastic quantization to easgd to reduce communication + supporting quantization on both sides (split from D8849770) (#10644)
f72e813c2f : Allow tracing functions that take tuples of tensors as inputs (#10637)
043a2e36e5 : Removing setup_caffe2.py (#10734)
6c84f7fea0 : Relax RHS type assert for augassign (#10730)
d40a598777 : Back out "[pytorch][PR] Create at::linear" (#10785)
6fcac354c5 : Erase ListConstruct nodes for ONNX export (#10713)
de11a5fb28 : Resubmit #8322 with scipy version check
ee3e48d34b : Move Backend, Layout, ATenGeneral, Deprecated, Generator to ATen/core. (#10740)
5ca2713a8b : Fix performance of WeightedRandomSampler (#10636)
0e30fa6f3c : Faster random number generation in fused_rowwise_random_quantization_ops (#10634)
754ec9e386 : Reduce rocm link time with ThinLTO
9767951ca8 : Remove regex matching from undefined_tensor_test, fixes #10013 (#10702)
b0ad8105d2 : Split storage from tensor (#10053)
5fb9b31ed5 : Add matrix_rank (#10338)
fbd7189949 : add explicit flag to build static libtorch (#10754)
227635142f : Delete THD master_worker (#10731)
2fe5fa78fa : Use FinishDeviceComputation instead of adding events in Operator::SyncDevice
22446a3619 : Productionize CRF layer in PyText (#10362)
19031c68dc : Use intrusive_ptr in Storage; replace unique_ptr<Storage> with Storage (#10488)
abb209ef25 : Fixes *fft docs (#10760)
e5e2514f4e : fix debug_info arg in createOperator and improve reroute_tensor (#10736)
1068ba667c : Create at::linear (#10755)
a2ca634e04 : Add enforce back to converter.cc
ddf187c198 : Dont assume serialized integral types were widened to int32 in raw_data (#10718)
6325e5aa48 : fix typo in error message (#9827)
44f996f82c : Py3 fixes for layer_model_helper.py (#10525)
71ddd837d7 : Support custom ops in ScriptModule and tidy up test files (#10610)
e94ae99d24 : Delete copy constructor/assignment of class Observable explicitly. (#10593)
04b773ab87 : Support Loading to GPU (#10710)
edb34434ab : More changes for hidden visibility (#10692)
8a1739b05d : Add arguments __repr__ in Distribution base class
9c321a8779 : Add util function from core type to dtype (#10716)
b23d59ce1a : Make ONNX_ATEN_FALLBACK as internal default option
b0b5139149 : Set the BUILD_ENVIRONMENT variable before installing sccache. (#10640)
30ad13faca : Avoid shadowing i, j vars in GeneralProposals test (#10721)
f9d1b001e1 : Move THNN Reduction to ATen/core. (#10703)
f0d8a36e70 : Completely remove build_aten and use_aten (#10469)
9e75ec11fb : Make empty list literals construct empty Tensor[] (#10705)
5c0d9a2493 : Soumith's last few patches to v0.4.1
e449a27646 : Fix issues link in Caffe2 readme (#10711)
826550a32e : Update the onnx Gemm op to FC/FCTransposed logic in caffe2 onnx backend (#10108)
15d7f49205 : Adding ATEN_NO_TEST option to root level cmake for propagation to aten
585e6b581f : Allow method-style casts on tensors (#10641)
39a3dcc999 : Fix #10698 build failure (#10704)
b4684db698 : Add support for Log()
7832e9d564 : Add a bisect percentile operator (#10563)
3d0757430b : Fix EnsureCPUOutputOp (#10651)
2e563c417c : Nomnigraph - rename some APIs that involve Subtree to Subgraph (#10551)
aa9f328fa3 : Nomnigraph - DAG matching (#10549)
0cce4620fe : Fix backend/device-type comparison with MKLDNN.
db7b7f1359 : fix typo
d4832f1e7b : More fixes for hidden visibility (#10624)
9ad9191323 : Fix cuDNN dropout state cache (#10662)
c37fac4d50 : Fixing stop condition on composite reader (#9888)
83066e9b30 : Add trigonometry functions for ONNX export (#7540)
3f603eeee8 : some improvements on distributed docs
108b657159 : Import DistributedSampler in utils/data/__init__ (#10671)
6bdbad93b9 : Refactor Device to not depend on Backend. (#10478)
f1420adfe3 : Move at::chunk into the graph fuser (#10178)
d87b4e941b : fix python interpreter can not be found without `PYTHON_EXECUTABLE` (#10659)
152762a567 : Fix warnings diagnosed in recent clang (#10647)
e29b5a1ea8 : graph fuser inserts explicit expands where necessary (#10325)
7c55d11ba5 : Make sure we don't relocate the weight name buffer (#10630)
65b9308128 : Basic infrastructure for C++ documentation (#10569)
b62b378022 : Adding torch support for CMAKE_ARGS env
c5c1c051ca : Fix dropout fused kernel applied in eval mode (#10621)
86c9856d9c : Fuse tensor-scalar ops when scalar is constant (#10511)
f3ac619764 : Add fusion support for batchnorm and convolution without bias
d35f365ad5 : Remove all cuDNN specific inputs to RNN functions (#10581)
52058204d6 : Add nn functional tests in JIT (#10409)
b4e72ea811 : Revert D9377394: [pytorch][PR] [Caffe2] Add AT_CORE_EXPORT and AT_CORE_IMPORT.
bd9ab650ae : fix compile error in math_hip.cc from new Im2Col/Col2Im interface (#10623)
ff440b61f6 : Revert D9378844: [pytorch][PR] fix python interpreter can not be found
e190505e84 : Adding support for inlining if branches (#10084)
31c7a32d1c : Include aten_op by default in caffe2
03982fb8d3 : Fix subgraph cutting wrt recent external_input change in nomnigraph (#10598)
ff3a481aee : fix python interpreter can not be found (#10543)
51222500e2 : Add AT_CORE_EXPORT and AT_CORE_IMPORT. (#10602)
cc53807be5 : group conv with NHWC layout (#10585)
0aefb9f26c : Update onnx to onnx/onnx@7848f1e (#10613)
6667d55e73 : Disallow input filler for GatherRangesOp (#10592)
3578909671 : Remove unused code base for distributed training (#10282)
f1d40ef280 : build_pytorch_libs.sh: use MAX_JOBS rather than NUM_JOBS (#10600)
c101a57a74 : Build mechanism for custom operators (#10226)
67c6d93634 : Tune minimal work size (#10599)
afd7477eaa : Add ``buffers()``, ``named_buffers()`` methods. (#10554)
342517e6e7 : Back out "Add aten_op to caffe2 onnx (python) backend" (#10589)
488ea824ed : Additional changes to make GPU builds work (#10507)
ef15bb8787 : remove implicit conversion from gpu to cpu (#10553)
d6f3c88418 : Revert D9076734: Split storage from tensor
40a070422d : Adding new allreduce bcube routines to ops supported by gloo (#10494)
4be4b4c8b5 : Remove weight from input of onnxifi backend op (#10575)
319fefe9e6 : Support benchmark on windows machines
00f2731112 : Merge THTensor into TensorImpl
130881f0e3 : Delete build_caffe2.sh, replace with build_libtorch.py (#10508)
c6facc2aaa : Add conversions between DataType and ScalarType.
fdd2b9baee : Add DataType alias
8fdba4ec35 : Move all operator<< overloads out of the global namespace. (#10546)
238b4b9236 : Resolve error C2370 "redefinition; different storage class" by adding dllimport. (#10571)
84427d26db : Add aten_op to caffe2 onnx (python) backend
76da0b34c2 : Remove an unused variable found by linter
7487ee55f1 : Resolving error C2487 "member of dll interface class may not be declared with dll interface" by removing nested CAFFE2_API. (#10572)
abf85bf0ef : Perform CSE across block boundaries. (#10105)
2e0dd86903 : Make torch::Tensor -> at::Tensor (#10516)
8013dac43d : Fix bincount for empty input (#9757)
05dcf00644 : fixed c10d test (#10557)
0a809fc8b1 : build changes to make cpu unified build working. (#10504)
87cac4c2f1 : Update Im2Col related to make preparation for group conv in NHWC order. (#10439)
579962f2a8 : reroute tensor feature in core.Net and generate one net feature in model_helper (#10528)
523bdc8ec1 : Split storage from tensor (#10053)
03e9ea5ef0 : Fix leaking of Storages (not StorageImpls) (#10552)
4c49da34a9 : Add new MKLDNN fallback operators (#10526)
a129f9ad3b : Revert D9332335: [pytorch][PR] Implements volumetric (5d) affine grid generation.
151e7de893 : varargs for einsum (#10067)
fb45ec5ac3 : Don't set DEBUG=1 in ASAN build (#9902)
26c764a1db : Update FP16 submodule. Close #10523 (#10548)
021b4888db : Remove setup_requires and tests_require from setup.py for FULL_CAFFE2 (#10530)
c5b1aa93ee : Export uint8 tensors as byte string in mobile_exporter and add GivenTensorByteStringToUInt8FillOp (#10385)
6f14202acd : Revert D9276252: [pytorch][PR] remove implicit conversion to cpu
5adcac3dce : Cuda half macros cleanup (#10147)
86363e1d8e : Move RNN implementations to C++ (#10481)
484395edfb : Fix corner case with torch.multinomial (#9960)
fb09292020 : Increase tolerance in ConvBN test
254dedf604 : Propagate NaN through threshold (#10277)
0bbcc7b534 : Don't assume curl version in Windows build script (#10476)
85408e744f : Move filler interface to operator schema (#10522)
9646d68962 : support broadcasting in _kl_categorical_categorical (#10533)
05a260da43 : Bump gloo to latest master (#10545)
5d27d68779 : remove implicit conversion to cpu (#10416)
9cffe783f1 : relax tolerance for two torch.half (float16) tests (#10519)
d93e8ab343 : Nomnigraph - Refactor SubtreeMatchCriteria to become a Graph of MatchNode (#10512)
f59bcea2c3 : parallel max and min for ATen on CPU (#10343)
44b029f5b8 : move matrix formation for dot products to precompute/request-only (#10531)
f5a4dd89b5 : Implements volumetric (5d) affine grid generation. (#8322)
d8ff7ad6f8 : generalize order switch ops for 1-3d (#10395)
0f05f5fb07 : ATen layer norm symbolic (#10513)
ce8e8feceb : Fixed a bug in box_with_nms_limit where it may produce more bounding boxes than specified. (#10390)
e41528a5cc : Also set stdin to subprocess pipe in FindCUDA windows popen call (#10379)
f1631c3106 : Modify build.sh and test.sh scripts for ppc64le jenkins build and test (#10257)
19ad55cc02 : set coalesced=false at sparse transpose() and removed transpose invariants (#10496)
964e30de1d : Workaround for Cuda9.2 and GCC7 compilation errors (#10510)
b6cc65afea : Send, Recv, RecvAnysource, Barrier Op for MPI PG and Python Bindings (#10227)
26e40fa665 : Tensor.accessor now fails on rvalue reference (#10518)
17ecc06b65 : static casting TIndex (#10514)
60aa416a6d : Re-purpose setup_caffe2.py for faster caffe2 build iterations (#10520)
32bb4040dd : Unified type annotation parsing for script frontends (#10279)
b69b1c477b : Adding python binding for MPI process group (#10199)
39bfc2d0d4 : Nomnigraph - add diagnostic ability for Subgraph matching API (#10267)
3c39e857ca : Python binding for reduce,allgather,scatter,gather ops and python tests (#10159)
16ecd6f99c : Fix Debug Build On Windows (#10359)
3f3a30f79c : Added Reduce,AllGather,Gather,Scatter Ops for NCCL and MPI process groups (#10058)
13814d6744 : Remove use of data() in optimizers (#10490)
bdb11e716a : Split the dependence of ONNX from test_operators.py (#10151)
eea8ab1861 : Move common code to RNNCellBase. (#10399)
bd497809e2 : CAFFE_ENFORCE -> CAFFE_ENFORCE_EQ for error with more information (#10244)
2400512a08 : Remove unnecessary include
d1442b36f3 : add a rebuild_libtorch command for speedier iteration. (#10036)
520f4f6cb9 : Added some unit test for box_with_nms_limit_op. (#10389)
d043f83019 : Add tests for Tensor.* nn.* F.* docs (#10311)
b4462511fd : Add LSTMCell backward pass expect tests (#10506)
e5811becdd : Add tags for onnx tensor descriptors (#10502)
9497383706 : Fix some warnings (#10297)
61bedc96f0 : Schema-based creation of graph nodes (#10198)
3a40baa15c : fix a grammatical error: accelerate compute (#10204)
ef44faece2 : check attribute existence in torch.legacy.nn.SpatialFullConvolution in method type (#8740)
329d901a91 : Fold AffineChannel to Conv, the same way as BN (for Detectron models) (#10293)
c618df154e : Add intrinsic support for external_input/output to nomnigraph (#10100)
7d16e87f14 : Fix byte ordering issue in from_numpy (#9508)
facb293aad : Fix FindMKL.cmake for Windows (#10453)
fed05cf4cf : Fix prim::FusedConcat bug (#10466)
099a545376 : Hipify Caffe2 binaries (#10468)
9a9224e5c1 : Remove "locally" from CONTRIBUTING.md (#10495)
f6eb966fd2 : Fix TanhGradientOperator linker errors (#10426)
ffb59e5f20 : adding stochastic quantization caffe2 operators (encoder and decoder in CPU are implemented. GPU mode is pending)
c6fc3ab557 : fixes printing non-contiguous tensors
216961b7bf : Remove is_zero_dim_ bool in THTensor.
f59cce95b4 : Some symbol annotation fixes for Windows
382ff03222 : Add missing #pragma once
75651d5b58 : improve use of ROCm libraries, enable more tests, small fixes (#10406)
cd81217f8e : A single print statement in setup.py
0b63d12db6 : Don't call into Python during Storage destruction. (#10407)
64235d5c01 : Rewrite TensorImpl to use TensorTypeId. (#10278)
145eb330ad : Back out "Back out "Move typeid.h to move to ATen/core"" (#10465)
b8530dc1f0 : A few additions (#9837)
0a39a9cfbc : Add db directory for hipifying (#10428)
56267cc97b : gflags improvement to allow CAFFE2_EXPORTS (#10444)
64a6f17177 : Fix ATen/core header installation. (#10463)
fa5d95a00c : Bump onnx to onnx/onnx@0d250de (#10452)
3cbe8f0c3e : Detect system RocksDB installation with CMake config files. (#7315)
82d11b847e : Use CUDA_LINK_LIBRARIES_KEYWORD instead of hacking. (#10437)
508de8109f : Added missing "AT_" prefix to macro. (#10436)
1756daaa75 : Use FULL_CAFFE2 to build caffe2 and python in one shot (#10427)
51f154e072 : Fix Python lint errors. (#10441)
cd53b78bd0 : Remove caffe namespace GetEmptyStringAlreadyInited (#10438)
ab6afc2b23 : Optimize max_pooling for inference for MKL-DNN/IDEEP device (#10156)
d3ccc836de : Fix warning in Nomnigraph (#10425)
1dbdc5a93d : Back out "Move typeid.h to move to ATen/core"
31646edfff : Increase GLOO rendezvous timeout
767687835e : Replace sudo with --user in CI caffe2 install
adbcb3c1dc : Move dropout and alpha dropout to ATen (#10384)
5b0be9de59 : Remove TH compatibility calls for strides. (#10414)
674f7a9778 : Correctly share CUDA Parameters. (#10220)
0b8a0125ab : Fixes torch.log after torch.expand giving incorrect results (#10269)
6a55238a3f : Grid sampler: nearest interpolation & reflection padding (#10051)
def3715e82 : Minor changes for nicer pip packages (#9544)
40109b16d0 : Remove caffe1 specific proto (#10380)
018790cd4b : thread BUILD_SHARED_LIBS through build_pytorch_libs.sh
9b8a036873 : Fix basic.cpp, which compared equality between a size [1] tensor with… (#10404)
e524a8994b : Make lengths_host_.CopyFrom synced in LengthsCosineCoherenceOp and LengthsTileOp (#10360)
be5fb8f6fd : Move fused RNN kernels into ATen (#10305)
e221791afc : Fix typo.
1e3e26e3e8 : Use nDimensionLegacyNoScalars in THTensorDimApply. (#10388)
3667d029b4 : Move typeid.h to move to ATen/core (#10163)
e9ad74357e : Use serialization container in ir import export (#10394)
0950d7a98d : support list slicing (#10318)
b1e3239ec8 : Fix some backwards definitions wrt keepdim. (#10382)
209af45614 : Back out "[pytorch][PR] Fix bincount for empty input"
18d2fcde7a : Fix performance of DistributedSampler per #8958
64a60030a6 : Don't copy on clamp, clamp_out (#10352)
b43beec070 : Fix bincount for empty input (#9757)
cc5b47ff47 : Fix the logic for PATH guess on Windows
3fa1c1022a : Avoid std::thread ctor "cannot resolve" error (#10381)
99b10adc01 : Fix compile flags for MSVC
7d53c876dc : Move maybeZeroDim to TH, change condition so it doesn't turn off scal… (#10333)
e967fa9757 : Fix THTensor_nElement for scalars.
52d85bedb7 : Deal with undefined tensors in unbind backward (#9995)
b70b7066f7 : Keep kEps in one place to make sure they are consistent (#10334)
04f381650e : Resubmit: Fix dataloader hang when it is not completely iterated (#10366)
037d8d1bab : Order Loss functions alphabetically in nn.rst
9dfc4edc68 : Update NNPACK and cpuinfo submodules (#8564)
6e49f933ad : Check that result is on CPU for CPU unary ops kernels (#10358)
783f2c60b2 : nomnigraph - Enhancements to subgraph matching APIs (#10218)
69760e2840 : update torch.eig() doc (#10315)
0d03219a42 : Remove hack as integrated builds use FULL_CAFFE2 now (#10320)
7d6d7bef6a : Enable docker image build for PyTorch using specific python version (#10317)
66b3bae47c : Add sizesLegacyNoScalars/stridesLegacyNoScalars analog of sizeLegacyN… (#10323)
b7bc327180 : Remove new_Tensor and generated components
5390476297 : Add tracing to custom op and simplify tracer overall (#10212)
5bb21493fd : add fused dropout kernels (#9666)
74979495f0 : Optional input lengths in CTC op (#10228)
9b1a65bec3 : Extends type and shape tracing with device (#9796)
2993c42ee4 : Squash some 'invalid escape sequence' warnings. (#10310)
db7a2b1f0d : fix doc for as_tensor (#10309)
dcaafdd04b : fix doc of sparse_coo_tensor (#10308)
20a549b101 : Start using a newer version of rocRand that's PyTorch compatible.
fe68879832 : Fix dir(torch) for python 3.7 (#10271)
ad76fc8807 : s/DISABLE_COPY_AND_ASSIGN/AT_DISABLE_COPY_AND_ASSIGN/ (#10275)
66f7b8abbe : Better macro name hygiene prefixing. (#10274)
18e298305e : Increase TCP listen queue size from 64 to 1024 (#10268)
1a797ec810 : Revert "clean up the build a bit. We no longer need the separate buil… (#10285)
b6402648f4 : fix off-by-one bug in open-ended slicing (#10286)
5a7c710548 : Support some basic list operations (#10225)
1bae6e24c9 : Change empty list literal compiler error to match actual builtin name (#10265)
fa9ea5bde9 : Move CoreAPI.h to Macros.h, to give it a more accurate name. (#10264)
da44cf6101 : Move TensorTypeId, TensorTypeIdRegistration and flat_hash_map to ATen/core (#10263)
f1cf3105de : Revert D9169049: [pytorch][PR] Add new mkldnn fallback operators
f47bec821e : Add new mkldnn fallback operators (#10162)
25b2e88750 : Stop propagating std flags to downstream gcc/nvcc (#10098)
8b08eca203 : Move ScalarType to ATen/core, splitting out Backend
a38b572de3 : enable unit tests and other changes (#10266)
e0d43572c1 : Cleaner semantics for Reserve (#10261)
a13a53c151 : Optimize group_norm on cpu (#10246)
0c848f4179 : Python integration for custom operators (#10149)
62e23a1ee4 : clean up the build a bit. We no longer need the separate build_libtorch entrypoint (#9836)
d1a0c2eaf8 : Add back THTensor_nDimension. (#10259)
6ac35b35d1 : Stop using THLongStorage for sizes/strides, remove THLongStorageView.
835a5d4f49 : Add cost inference of fwd sparse operators and sparse adagrad (#9314)
506142ac8a : Add warning for building PyTorch using Python 2.7 on Windows (#10247)
267c397c5b : Add the ocr_det model for benchmarking (#10245)
7f2e43a084 : Add the ocr_rec model json (#10240)
df23bdc82d : add BEGIN NOT-CLEAN-FILES marker to .gitignore. (#10233)
f57e4ce1d5 : Update broadcast with alpha to reduce num of launching kernels. (#10235)
ab293924bb : support generic feature in DPER2 (#10197)
57d2d4bcff : Optimize reduce ops for 2d and 3d (#9992)
29406a2c4c : Fix shared_ptr refcycle in graph executor (#10222)
2141cb7d53 : Update OnnxifiOp to reflect onnx/onnx#1256
5df8547ff9 : Fix ONNX LogSoftmax export. (#9576)
36939417b2 : Introduce at::DeviceType, which subsumes at::Device::Type and (partially) caffe2::DeviceType (#10175)
98d60ad43d : Replace caffe2::EnforceNotMet with at::Error
e2976ea519 : Make at::Error look more like caffe2::EnforceNotMet (#10183)
c7c6e93312 : Use target_compile_definitions for AT_CORE_STATIC_WINDOWS (#10213)
02a64b183c : Move ATenGeneral back out of core. (#10224)
41dce17e22 : Delete TensorImpl::type_, replace with backend_/scalar_type_/is_variable_ (#10210)
149d4f776b : use logsigmoid at multilabel_soft_margin_loss, and change output from shape=(N, C) to (N,) (#9965)
7bc87172ea : Kill Tensor::shares_data (#10217)
3b3aff2ed6 : IsType<TensorCPU> -> IsType<Tensor>(CPU) (#10135)
4aa7469d1f : Implement c10 ops needed for benchmark (#9360)
08e7af20d3 : Implement calling of c10 ops from c2 (#9369)
c5abe8844a : Add IDEEP fallbacks for Resnet50 training ops (#8541)
4680ab4d44 : Generalize intrusive_ptr comment (#10216)
97cbcb7d67 : Allow releasing/retaining weak_intrusive_ptr (#10214)
6456b944fd : ctc_loss odds and ends (#10112)
65d32b1705 : Remove unused substitutions (#10187)
f51f15bb27 : Update include paths for ATen/core (#10130)
f77b62c3e1 : Add documentation for margin arg in Caffe2 MarginRankingCriterionOp (#10186)
cb0e72e00d : Add registerOperator overloads that infer the schema (#10048)
7a377b9a53 : Add torch.argsort mirroring similar functionality in numpy. (#9600)
c91af1202a : Make release_resources non-const (#10192)
39476d79a2 : Allow releasing/reclaiming intrusive_ptr (#10133)
5753746d29 : Enable static initializer order ASAN. (#10211)
4a6fbf03c6 : Make StorageImpl member variables largely private and use getters and setters
50cf326158 : Allow type cast between int and float in Script (#10168)
5d3782b655 : Fix IDEEP Copys (#10104)
656bb320b7 : EnforceFinite test (#10143)
13de6e8dfa : Make list literals construct ListType (#10193)
ab0ac6391b : fix padding doc not rendered correctly (#10196)
4778afb8bb : In Expand support using -1 to indicate preserving original size (#10174)
dd527db711 : Skip TestConvolution.test_convolution_sync on ROCM which caused random segfaults (#10179)
1f78e06f63 : Add g.insertConstant and clean up dead attributes code (#10177)
798b530361 : weak_intrusive_ptr (#10038)
2bd709a7c8 : intrusive_ptr (#9897)
0e9c6898cb : Export modules in ir with google protobuf
e2ecf3914a : Change default CUDA block size from 512 to 128 (#10090)
7dc870bd7b : Delete invalid 'template' keyword (#10173)
dad6e8bb6c : Remove capture specifiers in register_aten_ops when they're not needed. (#9669)
94c67f1454 : Replace storageimpl type with scalar_type and backend
538b15d13c : Use PYTORCH_PYTHON to call generate_code.py (#10171)
9e85a7a9de : Back out "[pytorch][PR] [TENSOR MERGE] Delete type_ field from TensorImpl, replaced with backend_/scalar_typ…" (#10169)
7be071a829 : Update onnx to onnx/onnx@2a3a226 (#10167)
6e85112f12 : Adding katex rendering of equations, and required edits to equations. (#8848)
ee98533746 : Fix compiler warnings on ignored const qualifiers
5765549155 : codemod -d caffe2 --extensions cc,h CaffeTypeId TypeIdentifier (#10166)
4a2f3cc45f : Improve lars operator by applying clipping (#9905)
a243e517fa : Guard sizes/strides in TH/THC for scalars.
170d29769b : Strings lexing, parsing, implementation in print (#9324)
230ca98d4b : Remove THTensor_isSize. (#10146)
9c818bfbc7 : Refactor PythonValue types + use tryMatchSchema for PythonOp
cfa05706ef : ROCm contributions week 29 (#9653)
70d47f92db : Add support for rand_like op in fusion compiler (#9795)
4a5cd4f6ab : nomnigraph - new utility for graph transformation (#10081)
acbc2744d8 : fix bug in 3d group convolution (#9860)
57061d600a : Auto-batching IR transformation for control flow (#9392)
8a25acbba5 : Use angle brackets instead of quotes for includes.
5699250acc : Move IdWrapper to ATen/core (#10152)
8cc7d33656 : Renumber typeid.h so that the number lines up with ScalarType (#10139)
6b338c8026 : Implement torch.broadcast_tensors (#10075)
191482fa39 : Distinguish TupleLiteral from ListLiteral (#10128)
a44d9d6eb4 : Fix tensor check logic in logging (#10138)
24bb8cecbe : Move ATen/Half to ATen/core, and apply lint (#10137)
806854a3c5 : Pin AMD gpu id in Caffe2 CI (#10144)
59c355c870 : Move halfbits2float and float2halfbits conversions to ATen. (#10134)
4ed5b9267c : #8518 Support for empty tuples (#10027)
1f6888b70a : Allow mobile exporter to export string arrays (#10017)
1d427fd6f6 : Delete type_ field from TensorImpl, replaced with backend_/scalar_typ… (#9787)
edb90387b2 : Lint ArrayRef.h (#10129)
080ae5ea1f : Remove implicit ArrayRef -> vector conversion (#9740)
e2846c365a : Improve ArrayRef (#9610)
ad6d62250a : Add torch.compiled_with_cxx11_abi(). (#10071)
1b1c47dfe5 : Update onnx to onnx/onnx@32ac71b (#10126)
fb24c52dc3 : Prepare TH for first class scalars (0-dimensional tensors).
2d56b5cf8b : Prepare THC for first class scalars (0-dimensional tensors).
59af5b928a : Move UniqueVoidPtr to ATen/core and apply lint
2d6738e89e : Fix lint in ATen/core (but not ArrayRef)
f908b2b919 : Use google protobuf in pytorch onnx import/export
5a44be50ab : Minor nit in comment in CMakeLists.txt
e8f27311aa : fix a couple problems with libtorch cmake file (#10091)
f126687fbc : Add a dump() method to IR Node's. (#10106)
4070005081 : Move C++17.h to ATen/core (#10107)
87d57dc5f5 : Simplified Operator (#10080)
f1964c43fd : Update eigen submodule to fix BUILD_ATEN issue (#10095)
a2a7b0c01a : Initial documentation for building libtorch (#10087)
ee964c51f4 : NegativeBinomial distribution (#9345)
2f848ec8ec : Use new PyTorch API to make code simpler
fa6b28bf40 : Move ArrayRef, Backtrace, Error, SmallVector, optional to ATen/core; add CoreAPI (#10092)
b503109f20 : Guard sizes/strides in THCUNN for scalars.
43b151224e : Move grid sampler to ATen (#9961)
6fc75eadf0 : Add CELU activation to pytorch (#8551)
6f6a1f2d63 : fix test_load_error_msg failure (Network is unreachable) (#10021)
5bd43a7af8 : Refactor Seq2SeqModelCaffe2EnsembleDecoder (#10035)
3d247041e4 : Force sync device when ops are sampled for observation
ec807f2a91 : Bail out if netdef has disable_nomnigraph argument
fcd567ed15 : Enable Optimization on mobile by default
7d2bda7588 : Move DDP broadcast coalesced to C++ (#9729)
294c065384 : Changed serialization mechanism of LambdaLR scheduler (#9927)
aae37324cc : fixed a newly introduced regression in softmax (#10066)
f2412fbafc : Allow multiple ops.def and clean up code gen in general
799c947cf3 : add .gitattributes for EOL conversion. (#9813)
9c0f65fc87 : Remove While op stuff (#10102)
c54d71ba60 : Upgrade old transform passes to newer APIs (#10046)
ceb0f14176 : Fix SpatialBN Fusion (#10044)
bf744bea94 : Parse and register schema declarations lazily (#9801)
34c7c56c73 : Re-enable empty n-dimensional empty tensor and fix parallel CPU on empty tensors (#10077)
ba5d33bede : Re-Enable ATen in C2 in integration builds to test ONNX ATen conversions
e04f8bbfa6 : Add virtual dtor for ideep context (#10059)
d2178562a4 : Remove some unnecessary includes. (#10085)
1f13453b4d : Slightly relax the constraints on argument and return types to script functions (#9969)
58fd6e1dd6 : Also add ATen/core tests to oss CI (#10029)
ee17ed672b : Add missing dependencies (#10086)
2422801625 : fix _pointwise_loss for target gradients (#10018)
56d1a82b31 : Add shape inference when converting from onnx to caffe2 (#10037)
371a786b18 : Errors out when OpenMPI < 2.x.x with distributed. (#10015)
1ae520c704 : Add AT_CHECK for null storage. (#9823)
685224aa14 : Add CTC loss (#9628)
430e44480f : Delete some obsolete steps in the ROCm build. (#10005)
f779202711 : Correctly set CAFFE2_DISABLE_NUMA when USE_NUMA=OFF in cmake (#10061)
cba03e2ebe : Handle dynamic repeats in onnx symbolic (#10052)
0c11101eca : Prepare THNN/THCUNN for first class scalars. (#10023)
c2d9d2888b : Fix typo in tensors.rst (#10073)
68cbe37c6a : fix the reference link path
5e5c15dd42 : Add (constant size) TensorLists to JIT, use them in cat and stack nodes (#9948)
6fb9acfc16 : Revert empty n-dim and ATen in C2 integration builds
78b806c861 : Fix the onnx symbolic for upsample (#10001)
37a226de63 : When BUILD_ATEN=OFF, use ATen/core directly (#10019)
aa36a5d01c : Add typing into caffe2 requirements.txt for USE_ATEN (#10047)
51539fa383 : Add pyyaml into caffe2 requirements.txt for USE_ATEN
8f0a229078 : Fix HPTT path for 0-sized inputs.
788b2e996d : nomnigraph - minor cleanup of Graph.h (#9890)
e0a0234018 : Remove C++14 feature (#10022)
3e3f40aeeb : Update onnx to latest master (#10024)
e57cb4a1b2 : Add a Constant Propagation Pass to the JIT (#8808)
db96a0951f : Add SIMD version to GFTRL optimizer (#9698)
9987282134 : Use Retainable as base class for StorageImpl
7214754663 : Check and return when numel() == 0 in Loops.cuh.
57750bd638 : Enable ATen in C2 in integration builds to test ONNX ATen conversions (#10014)
6c7fb1582f : Introduce __array_priority__ on torch.Tensor (#9651)
ea3c36b822 : NumPy Scalar to PyTorch Scalar (#9225)
c9eab34e63 : Fix Caffe2 with ATen conda build failure (#10020)
04939a4745 : Match parameter names and = default (#9737)
40a8239984 : Fix a bug in argument spec (#9958)
faa96c1c47 : Deal with spaces in einsum equation string (#9994)
ce5f0d40b6 : Enable n-dimensional empty tensors. (#9947)
73a60efccc : Fix Caffe2CTScan error (#9962)
b4f8c60931 : Don't use the XML reporter for Catch2. (#10012)
9a9a7325c6 : Remove the generation of storage files
432ca747b0 : Don't seed GPUs if there are none available. (#9931)
3609977d7f : Update onnx to onnx/onnx@c761845 (#9964)
5ff1551eb9 : ATen's emscripten support (#9803)
3d6015db0e : Add essential PATH for the Windows PyTorch loading process (#9920)
56974a06b5 : Revert D8909766: [caffe2] Simplify order switch operators
eee01731a5 : Adds the default value for the amsgrad arg to the Adam docstring (#9971)
b99492a507 : Fix BlobStatRegistry HIP BlobStatGetter registration issue (#9973)
46d8002800 : Fix bug that always uses the same blob when repeating poolings
47c1badf90 : Fix the clamp special case and gradient problem on None, add None to JIT (#9596)
851c18dd20 : PyTorch File Format API (#9900)
d913db70f2 : Handle the "spatial" attribute in onnx BatchNormalization op (#9492)
bcba5a50d1 : Fix EnforceFiniteOp
ab4e209007 : Back out "[caffe2][nomnigraph] Allow multiple ops.def and clean up code gen in general"
607688e928 : Adding reciprocal operator and a test
ee827f6ba3 : Fix a testcase in logsoftmax onnx export (#9660)
12a1af3731 : Adding conv tests with explicit algo definition
9eeb4e17af : Split gather op for easier smaller code size (#9916)
c3fe071483 : Update hip files (#9826)
a532c1a48c : Fix default argument value for CTCGreedyDecoder op (#9747)
eb9bb1f09a : Travis CI: Run flake on Python 2.7 and 3.7 (#9953)
829d763c69 : Implement add, sub, mul, div using TensorIterator (#8919)
e3c4057b6c : Eliminate an extra lookup in the hashtable during CSE. (#9668)
ef9801f32c : Merge THStorage into at::Storage
6ed41adb04 : Use round-to-negative division when computing output sizes for convolutions involving striding and dilation.
8c0355c90d : convert lambd directly to scalar_t at hardshrink (#9919)
ce0b895a0c : Fix UBSAN error in ONNX peephole pass, make it more robust.
c77e4bc4d5 : export tensor(ArrayRef, options) on Windows (#9904)
aebf3b47ae : Remove template parameter from Tensor (#9939)
94439d7df4 : Suppress the vptr warning in ubsan (#9909)
c0bacc6284 : Guard test_lapack_empty with has_magma. (#9936)
bf32ea8094 : Fix dimension check in 1D instance norm, allowing 2D tensors alongside 3D. (#9924)
d3ba9a173e : Handle case where THC btrifact doesn't zero info. (#9907)
1af1b0c2a5 : Remove THTensor::_dim, temporarily remove THTensor_nDimension. (#9895)
bc66d98248 : Fix narrow on empty tensors after negative size support.
7b375ed362 : fix ParameterDict doc
a709f23225 : fix a small spelling mistake in tensor.py (#9868)
4a192bcc3d : Rename onnx integration tests file to avoid confusion
8cb1eef7b9 : Unify IR operator representation (stop using attributes in the JIT) (#9807)
2c1d9e09b8 : Support UINT8 for addition data in ImageInputOp (#9901)
aa671ddefa : Support production models with predictor benchmark (#9855)
eb33887816 : Addressed issue identified by static code analysis: potential buffer … (#9889)
e41eb43327 : Remove deprecated masked_copy (#9819)
a841006353 : Simplify some code by directly constructing unordered_set from nodes.
dfa0af093d : Move predictor into caffe2/caffe2/predictor (#9548)
c045e969b6 : Use qualified name at::Half in Dispatch.h (#9848)
e7ab093d93 : Simplify order switch operators (#9581)
b7b61a8eb4 : Change expect, cast on Type to return shared pointers, make isSubtypeOf accept TypePtr (#9786)
9df9c46992 : fix loading 1dim tensor from 0.3.* to 0dim tensor (#9781)
d65c667f28 : Avoid divide-by-zero when hamming_window window length is 0.
d1260d26fe : Sleep before run (#9891)
18a6541b82 : Create IDEEP fallback operators for ctc decoder ops (#9847)
969b62f276 : Revert D8121878: Remove template parameter from Tensor
456f41301c : Disable unique ops test on rocm (#9892)
1dc708493e : Add html-stable target to docs Makefile (#9884)
0c84a5c27e : Pass shape infos to ONNX -> Caffe2 C++ conversion backend (#9870)
e39c8043dc : Make GraphExecutors work on Stacks instead of variable_tensor_lists (#9763)
6f10944f88 : Re-enable rocm tests that have been fixed in rocm 1.8.2 (#9862)
716f7d657d : Remove Broadcast.py. (#9843)
cd5adc7b5f : Remove template parameter from Tensor (#13)
2c7e7e37a6 : Corrected doc in class RNNCell (#9866)
bdbbcf068a : Temporarily disable test_unique on rocm since it keeps running into segfault (#9872)
e70fc145a9 : MIOpen fixes for Caffe2 (#9842)
3be8e4db51 : Do not run ONNX integration tests in parallel
997f46d1e1 : Disable "filter too much" health check for fc operator tests (#9865)
ba062e7da9 : Update OnnxifiOp according to onnx/onnx#1224
5e4de0821a : Set ROCm MAX_JOBS=4 (#9856)
6cd0174ff5 : Reimplement localScalar as a native function. (#9762)
ad47228020 : Test pinning Hypothesis 3.59.0 (#9830)
b84b78a69d : Fix the ROCM build, and enable sccache for it
0b16b03b98 : Plumb type annotations through script compilation (new) (#9547)
445c17d492 : Update CopyMatrix in math (#9792)
74ac5265d1 : nomnigraph - make use of nodeIterator (#9831)
302adb7cc8 : added torch.rot90() to ATen (#8628)
2f5c0c30cd : Make logsumexp work with empty tensors again. (#9825)
4b0098f3ae : Add --allow-change-held-packages to make nccl2 install in docker work (#9828)
279b836675 : Add some user-friendly checks in pack padded symbolic to ensure thing… (#9731)
be163f50a3 : Avoid divide-by-zero when bartlett_window size is 0.
56fbfee872 : Remove ifdef __cplusplus from THTensor.h, have cpp self-contained in … (#9775)
a7f183f971 : Revert "Fix dataloader hang when it is not completely iterated (#9655)" (#9804)
c14e17eced : Co-distillation with different archs and/or feature set (#9793)
ea67a2bd11 : Allows negative index to tensor.narrow (Fixes: #9546)
0853d13f86 : Move scalar boolean to THTensor, rename scalar in this context to zer… (#9783)
8825e323b5 : nomnigraph - Add way to check if a NodeRef is in a graph, and make a graph node iterator (#9790)
42a4747389 : Temporarily need this to prevent sccache from breaking. (#9810)
a74a3fdeb6 : typo fix, tutorials url with http protocol is not valid (#9812)
3ef521e98a : Implement backward for torch.symeig (#8586)
0262fd0f91 : Delete Tensor::typeString() (#9764)
723a600ebd : Update for new incremental build instructions
bca10ad706 : Implementation of Weibull distribution (#9454)
4b61760738 : Add Adadelta optimizer to caffe2 (#9088)
620952117e : remove unnecessary -Wno= flags
9cf76cfb4c : Changing conda build script to use current python version
f62bc01dfe : Remove TORCH_ASSERT (#9575)
d2610fb379 : Constexpr Type Ids -> 6.5% caffe2 perf improvement (#9603)
6c6a353a66 : Fix speedbenchmark bug (#9770)
d7d673b68d : Update onnx to latest master (#9782)
e5fe66d7ea : Add support for specifying device_option in Functional (#9619)
37fc58f1d3 : Use torch::empty before random_ on seed gen
f393df774b : Test case for c10d DDP (#9670)
e26d584445 : Remove isScalar() from TensorImpl.
7050d83dd7 : Make logsumexp_out inplace (#9755)
360c1bbd5b : Add multivariate log-gamma (mvlgamma) (#9451)
6885b3fd62 : Delete dead IsVariable enum. (#9768)
f9a99d5504 : Specify default initialization schemes for modules in docs (#9038)
2b134c72e6 : Add interface to provide blob types to shape&type inference (#9643)
7af5883860 : Enable python tests on ROCM (#9616)
6ab5e697b9 : Small fixups for enabling zero size dims. (#9724)
675d80841a : Small fixups for n-dimensional empty tensors in CUDA non-reduction di… (#9722)
f6496229a5 : Fixes xcode 10 beta 4 compile error (#9748)
1283834600 : Devirtualize TensorImpl::toString (#9758)
679d397f28 : Fix scalar_tensor_test for squeeze/unsqueeze with zero sized dimensions.
a7afba7308 : Remove duplicated functions (#9601)
adda789770 : Skip maxpool_with_indices onnx tests (#9751)
ba634c11df : Move strides to base class. (#9749)
9bf72b2087 : Add missing windows exports
5df3eae89e : Add 1x1 specialization for conv with NCHW order (#9671)
a387331e54 : Re-enable test_segfault after recent dataloder changes
099b5ba9d1 : Tensor merge PRs from July 20 (#9713)
e3fb9088d5 : Allow multiple ops.def and clean up code gen in general
5849354aa1 : Add operator<< overloads for TensorOptions (#9606)
d05a8145c5 : Change behavior of clone to clone to a device (#9609)
31ba2f15e1 : Rename embedding variable to weight (#9720)
431415adc4 : quick patch for PackPadded removal to propagate the correct size. (#9657)
a949245a86 : Switch interpreter to use IValue's primitive int/floats (#9718)
a9742e1a27 : Add fallback to TensorCPU if there are unsupported types for IDEEP Tensor (#9667)
ee2cc68259 : Add ctc_beam_search_decoder op for caffe2 (#9622)
aa8a9fa5fc : Extend DispatchStub to support CUDA dispatch (#9664)
3e9e3ef383 : Improving diagnose RF NE with Cali (#9550)
88d6b6e6cd : Fix D8722560 (#9717)
5094684238 : Create torch::from_blob for variables (#9605)
14d4bdb406 : Reformat output data format to make it more general for other binaries (#9555)
029cf1d78a : Improve error messages of wrong dimensions (#9694)
9525925119 : Low rank multivariate normal (#8635)
9d6521c3a0 : Support n-dimensional empty tensors in CUDA non-reduction dimension f… (#9658)
53083b8353 : Remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS and fix CUDA 8 build on Windows (#9491)
9ee5133651 : Fix dataloader hang when it is not completely iterated (#9655)
1afdc57ed8 : Hide all other fields in THTensor (#9683)
f3d72b2101 : Modify barrier net to allow better control over its initialization and execution in DPM (#9665)
769cb5a640 : Add new ways of matching nodes with schemas in the JIT (#9567)
a01d6f01b5 : Update channel_shuffle_op and transpose 2d to speed up ShuffleNet (#9525)
3bb8c5eab1 : Allow MKLDNN on macOS, and any other OS where CMake is able to detect it.
b5c8d59451 : Add a CUDAContext header include
23ed26a0c3 : Guard include of cuda-only header comm.h (#9656)
5e84403d5f : Fix for half conversion for ROCm 1.8.2 (#9663)
3efdece9da : Support n-dimensional empty tensors in take/put.
45e5c17ecf : ONNXIFI transform (#9569)
01581037dc : Add workspace.RunPlanInBackground (#9637)
1003ccfa15 : Creates CUDAContext (#9435)
8a0fe0a588 : set_input_record() should always add external input (#9636)
bae156a481 : Support (some) CUDA Lapack on n-dimensional empty tensors.
d3688861ec : Fixed a missing '=' in LPPoolNd repr function (#9629)
a3a6ab60cd : Fix the error in UnpackSegmentsOp when calculating the gradient with "max_length" argument (#9598)
1d4d9fc7da : Prepare to stop using attributes in the JIT (#9505)
b9e89cf9fd : Revert "Extend DispatchStub to support CUDA dispatch (#9579)" (#9614)
bbb30ad4ab : Use THTensor/Storage for THVoidTensor/Storage (#9588)
f84fdc7866 : Remove unnecessary functions from StorageDerived.h
7b9d8916e5 : Fix integral type dispatch error message (#9625)
2a0018f2a8 : Add scatter_add_ doc (#9630)
bfe2aa093e : docs fixes (#9607)
4028ff6c3a : Revert "quick patch for PackPadded removal to propagate the correct s… (#9613)
aa7af94656 : Make JIT tracing a thread-local property (#9414)
5651b27458 : Add CAFFE_STATIC_EVENT to Stats (#9501)
b770156a7a : Functional DataParallel (#9234)
7e78e80d94 : Make error message for empty module friendlier (#9565)
bcf0bf42a1 : Extend DispatchStub to support CUDA dispatch (#9579)
a08119afc2 : Eliminate direct access to size/strides of THTensor; replace them with std::vector (#9561)
f521823b7b : Do not always set broadcast argument when exporting new onnx add and sub to caffe2
6557856671 : Fix l2 normalization when handling zero vector (#9594)
85b2816358 : quick patch for PackPadded removal to propagate the correct size. (#9593)
f33cd36c9b : Use int64_t for im2col and col2im (#9590)
f180373d68 : Support n-dimensional empty tensors in CUDA BLAS and fix a btrifact bug. (#9573)
aee9e90abd : Fix TestAutograd.test_as_strided (#9538)
e0446fcfa9 : Pass dtype to tensor constructor in test_neg (#9558)
54db14e390 : HIP Operators Generator--> HipOpG (#9322)
45f0d05202 : Adapt OnnxifiOp to removed suffix handling in ONNXIFI loader (#9571)
604f7e98c3 : Expose CAFFE2_USE_OPENCV preprocessor flag (#9509)
b3e141e84c : Add predictor config into Predictor (#9434)
04b33b7231 : Add byte_weight_dequant_op
c1ee8835b6 : Constructors and member functions for THStorage (#9357)
4c615b1796 : Introduce libtorch to setup.py build (#8792)
3b886500a0 : Add CUDAGuard to ATen (#9277)
8769fec03f : Move clamp into ATen (#9506)
c506ff97c8 : Disable py2-clang3.8-rocmnightly-ubuntu16.04-test in disabled-configs… (#9543)
ca3b36aa6a : Add implementation for batch_moments_op (#9510)
8c741b7c4f : Add transformation from caffe2::resizeop to onnx::upsample
b6b6e1b39f : Fix core.Plan.create_from_proto (#9438)
27455e9c78 : Use _six for inf and nan (#9500)
35f7925aad : fix small literals being flushed to 0 by std::to_string
d6e124e9a5 : Dummy CircleCI config. (#9537)
28954b9e68 : Fix RoIAlignOp GPU implementation for RoIs without batch index (#9230)
8fe2622090 : Fix gatherTopK template (#9231)
f277645968 : Support N-dimensional empty tensors in CPU BLAS and (a selection of) … (#9522)
5eaed750c2 : Implementing torch.isfinite (#9487)
57608214d4 : Make squeeze doc consistent with its behaviour (#9529)
3eb3f03776 : ROCm contributions week 28 (#9432)
73225e4a1d : add docs for using `python setup.py clean` in developing mode (#9524)
89db578e66 : Fixed a typo
6de038286a : Add random data filler to predictor bench to support production nets (#9520)
543d4af79f : Be strict prototypes clean. (#9516)
aa73348d75 : added reminder of args naming rules to readme (#9504)
004d924807 : Give THTensor a constructor, use new/free. (#9496)
c33d2c0b04 : Thread-safe dispatcher table (#9126)
13e0c9295d : Add Support for count_include_pad in AveragePool in Caffe2 ONNX Backend (#9458)
1c3580b6fe : Added hash for device (#9246)
5c695e3a60 : Implement 2D and 3D alpha_dropout (#9073)
6116954e97 : oss heatmap_max_keypoint_op
0fe980c748 : Memory usage measurement -- Caffe2 (#9017)
9b0c53ac22 : Deduplicate THTensor and THCTensor. (#9495)
2249751422 : Add OptimizerBase::add_parameters (#9472)
890037eaaf : Fix (non-reduction) ops over a dimension for n-dimensional empty tens… (#9482)
8be4657871 : Add ideep copy for TensorCPU<long> in IDEEPFallbackOp (#9480)
30f849cdc5 : Correct model name in caffe2 onnx backend tests
d2d43824cd : Delete flag from THTensor. (#9494)
e5678794ed : Reenable multiprocessing preserve sharing tests on ASAN. (#9498)
050a2588b5 : change stft to have consistent signature with librosa (#9497)
7d2a17876f : test_cuda: ensure tests use float and adjust HalfTensor tolerances (#9475)
52cc073212 : Implement reshape_as (#9452)
11fc16dc98 : Remove HTML tags from README.md (#9296)
4ff636a3fd : Update onnx to onnx/onnx@b2817a6 (#9476)
ae44a6b5e3 : Fix Sequential::clone() (#9372)
e8b8c3895e : Enable Conv fusion optimizations in optimizeForIdeep (#9255)
9235ff53f1 : Clip horizontal bounding boxes during rotated detection for backward compatibility (#9403)
ad74006ffa : Pass THDRequest as void* pointer to THDRequest_free (#9398)
c4bff25282 : Additional operator information values (#9153)
7df48d0444 : Merge .cu and _gpu.cc files
45140368c3 : Update onnx-tensorrt module to the latest (#9469)
5ff686651f : move batchop import to init to avoid debugging confusions (#9425)
80160f6186 : Skip PyTorch ROCm tests in the script. (#9467)
976f9253a5 : Eliminate storage views. (#9466)
9ed2190bdb : Add a tagged union type that replaces tensor in the interpreter. (#9368)
9ae77cc1f5 : Implement tensor weak references (#9363)
9413fabb0b : Nuke TestCollectEnv (#9459)
b0c5c86492 : Add test case for segmentation fault fix in grad_fn (#9457)
66fe3b5c06 : Add peephole optimization for type_as operators. (#9316)
52abcdd0dc : Fix out-of-range error for test_neg (#9431)
e7f49d1444 : add depthwise conv support for mkldnn (#8782)
8766daeec9 : Refactor `_log_sum_exp` (#9173)
97008a64a1 : Add ModuleDict and ParameterDict containers (#8463)
cffca2926b : Introduce SupervisedPtr, delete THAllocator and THCDeviceAllocator (#9358)
5eb9d40cc6 : Introducing IsInf (#9169)
fda03406cf : add device to CUDAEvent (#9415)
a4f63576b6 : Make localScalar error message more intuitive (#9443)
8444e1660b : Only accept contiguous tensors in TopK for cuda (#9441)
88146484b4 : Add support for .norm() pytorch onnx export and ReduceL1/ReduceL2 caffe2 operators (#9299)
7160846c81 : Only view() rhs of index_put if we need to (#9424)
5ac8a80f8b : Add BatchBucketizeOp in caffe2 (#9385)
099a6d5e08 : Implementation of Wngrad optimizer caffe2 python wrapper and unit test on least square regression (#9001)
9e2f2cab94 : Implementation and operator test for Wngrad optimizer (#8999)
86eeeab758 : Fix segmentation fault in grad_fn (#9292)
bcd20f96e0 : update docs (#9423)
fd25a2a86c : Remove virtual+override anti-pattern (#9335)
c6376cf999 : A reasonable way to detect Python include dirs and library
cc9dcdff16 : Improving THCReduce.cuh's performance on latency-bound non-contiguous reductions (#9214)
06e47d88b5 : Remove ScalarConvert and cast_wrapper in favor of static_cast (#9401)
57a05983be : Move non-dimension reduction var/std to native wrappers. (#9400)
f09828ee0e : Support n-dimensional empty tensors in TensorShape methods. (#9362)
3799b10c44 : various documentation formatting (#9359)
bb9ff58c6d : Add cudnn activation ops (#9379)
b15a7d05ce : Inference benchmark: NUMA-awareness + multi-model support
cd3e067e46 : Add reversed(torch.Tensor) (#9216)
04fce5eca6 : Remove dummy c10 folder (#9367)
117a5c3cc0 : fix the annotation
4a796e4430 : Initialization functions (#9295)
e90860780b : Migrate PriorCorrectionCalibration to Dper3
2ead3b0e54 : Update include paths to use c10d prefix everywhere
34554d6adb : Enable standalone build of ATen (#9377)
43103af7a7 : Use at::DeviceGuard everywhere (#9396)
99dbcd0451 : set CMAKE_HIP_ARCHIVE_APPEND (#9394)
feaee21968 : Plotting embeddings norm being slow in distributed training. (#9325)
374fee4804 : Minor cleanup to scripts
d017e1798f : add erfc
b154761547 : Guard nullptrs around memcpy.
483ae8cb5d : Replaces const ref with && for apply (#9175)
e1863778e3 : Guard gloo algorithm creation with DeviceGuard (#9371)
aeccec755d : In Gloo backend use ring reduction by default (#9309)
00b4b4703e : fix unsqueeze doc (#9374)
7f38ea4555 : Remove unused feature: num PS tuning
a487b08c2e : AutoBatching - IR transformation(basic operators) (#9198)
e30ff68410 : Add Hardtanh Export (#8804)
1a8e826ed4 : Skip the count_include_pad in average pool for now (#9365)
153e2e96d4 : Make Sequential ref-counted (#9151)
94bc4c6091 : Ensure pending tasks are finished in case of failure (#9290)
8253947256 : Make error message more informative (#9352)
7f33ec55b2 : Fix Eigen issue on OS X with CUDA and nvcc compile (#9350)
cbcf45274b : Move tanh function to math (#9328)
7d8b532c1f : Fix CUDA build failures (#9347)
80380f637c : Fix to make ONNXIFI flow work (#9340)
18a975210d : Add explicit to conversions (#9336)
c2dd90c40e : Add angle normalization for rotated boxes (#9056)
9126f95ac3 : GenerateProposals and BoxWithNMSLimit ops: Add support for rotated boxes (#8953)
491f317b24 : NMS util for rotated boxes (#8954)
8da936ab52 : Fix the build break for python3.7 PyUnicode_AsUTF8AndSize() prototype changing (#9259)
b9f575fc33 : Remove legacy code from the JIT (#9323)
05559b4071 : Accumulate MSELoss reduce=True into accreal instead of real (#9287)
748a90d05b : BBoxTransform op: Add support for rotated boxes (#8952)
01cffaa7e8 : fix extra output in generate_code.py (#9339)
b2a74d17ad : document torch.utils.dlpack (#9343)
04a7fc1dc4 : Add Upsample support in C2 onnx backend for opset 1
fb9f9c9ba2 : Implement Sinh and Cosh (#9213)
00aeb0b84b : Privatize values for vec256 (#9321)
b4c66459c5 : Add pyHIPIFY scripts needed for ROCm transpilation to PyTorch (#8812)
a47a30b9ce : Implement grid_sampler in aten (#8929)
ea1869244f : Change depthwise convolution bandwidth formula (#9317)
0a679105ff : Fix missing accept file changes
e9e47ce8f1 : Vectorize sigmoid (#8612)
efefd1d7cf : Unify aten_dispatch and aten_schema into a single operator abstraction with human-readable schema. (#8885)
d867757649 : Fix CUDA 8 build for Windows (#9300)
8e6e8098ce : Revert D8768025: [pytorch][PR] Fix Eigen issue on OS X with CUDA and nvcc compile
bbeae24145 : Fix Eigen issue on OS X with CUDA and nvcc compile (#9270)
3254bcaed8 : Call deleter when destroying unconsumed DLPack PyCapsules (#9297)
89c2b50a15 : Grad clip for parameters on different devices (#9302)
1597fc594d : 3d conv should use int64_t (#9274)
d0d1820814 : Add weak pointer and finalizer support directly to THStorage. (#9148)
e06abab264 : Fix Upsample ONNX Symbolic (#9288)
181d2a5e60 : Add support of is_compatible for old version of onnx (#9284)
7ace3a99ec : Fix TensorRT tests (#9285)
4498fb962b : Add space around operator (#9294)
f92edf7ef4 : N-dimensional empty tensors: indexing, factories, reductions. (#9209)
19ecb5f8ad : Fix docs for Windows CUDA 8 builds (#9254)
99ab082366 : Making setup.py install work for Caffe2 (#8509)
342dbcc35a : Remove legacy redundant codes (#9252)
2b8aea3ada : add more logging messages to dimension checks of FCGradient (#9203)
c67ade26a7 : Add onnx support for clamp_min clamp_max (#9224)
01a7ca3d64 : Fix Pytorch Mac build issues (#9283)
29b1c2cfce : Install typing for Mac (#9271)
a70a90b28f : Fix pytorch linux build issues (#9273)
d0ad696f9d : Warn about THPObjectPtr needing GIL. (#9265)
b19b38c427 : Fix Mac CUDA issues (#9269)
744cd90074 : Fix Android build issue (#9275)
cb98c5020a : Normalize IDEEP spatial bn op test (#9276)
936f47f271 : Make roi_align_rotated_op_test not rely on 1.12.0 numpy.rot90 (#9267)
768a0e3298 : Some more changes to support USE_CUDNN=OFF (#9268)
1483bb7246 : Remove unused functions (#9223)
e8536c08a1 : Update extension docs, fix Fold/Unfold docs (#9239)
f48e15624e : Unique cuda support (#8899)
819815d9c0 : Fix missing compile_commands.json for aten (#9227)
a615baa51f : move unbind to ATen
66dc97e51c : #8714 Improve Error Messages for module re-assignment (#9212)
d6f21fc663 : Ports Streams to ATen (#8997)
75919b4e18 : Expose generic device copy algorithm (#9009)
4ad6e53557 : fix the deprecated argument in bce with logits (#9162)
f40ed548d8 : Bump onnx submodule (#9215)
067b270717 : Optimize LeakyReLU and PReLU 'forward' functions on the CPU (#9206)
227c8f2654 : Implement nn.functional.interpolate based on upsample. (#8591)
766fa1fc96 : Fix IDEEP CMakefile (#9217)
af107c4d16 : Fix shape inference bug (#9199)
f87499a8f3 : Modify the original PackSegments operator by adding "max_length" argument (#9048)
4e5369349f : Add FTRL Optimizer with Group Lasso regularizer (#9074)
c0bfe2a6ed : Clean up conversion registration
da39c24971 : Add GroupL1Norm regularizer (#9115)
f1ce15b50c : Move nccl scatter and gather to C++ (#9117)
d863391871 : nn::Module::as (#9149)
9aded4351e : Allow arbitrary namespaces for Symbols (#9018)
84884dc2d3 : Allow passing '0' to ASAN/UBSAN flags (#9202)
168a29f497 : Create native wrappers around dimension reduction functions. (#9197)
1f1fb813a6 : Use a static random_device in StorageSharing (#9080)
eadc5071e8 : Use torch.save in _StorageBase.__reduce__ (#9184)
7b25cbbef9 : Test nn.Module on non-contiguous inputs (#9114)
a769fae91d : Fix TestAutograd.test_pinverse not actually testing (#9192)
ff501c30af : Turn on UBSAN in the OSS build (#8813)
21c420c32c : Remove unused RowwiseArgMaxOp (#9119)
f45dfbccef : Add support for ArgMax and ArgMin in C2 onnx backend and frontend (#9050)
213540cd85 : Add meshgrid to PyTorch (#8581)
1c9073b43a : Allow passing '0' to NO_MULTIPROCESSING_SPAWN (#9187)
14cbd9adb8 : Implement torch.pinverse : Pseudo-inverse (#9052)
f6027bb15d : Install hpp headers for CPP Extensions (#9182)
08daed40f7 : Fix bug in flip() (#9156)
4b2b690792 : Install THC/THCGeneral.hpp (#9159)
49f88ac956 : Add grid lines for activation images, fixes #9130 (#9134)
e3dbdb2a17 : Fix the comments: code and comments dimensions mis-match (#9070)
b479494ed4 : loss plugin: Fix indexing into a scalar (#9143)
b432837a9d : Add some missing error checks in sparse. (#9140)
f17b9e4cde : Fix boolean indexing. (#8920)
4f89777d29 : Removing extraneous main function to fix buck test detection (#9121)
e09d993d8b : Move easy THStorage/THCStorage functions out of generic (#9136)
9b0cece9b0 : Enable the general usage of _download_url_to_file (#9090)
97b9712aed : Create Sequential::extend (#9116)
16570ef0d5 : Update onnx submodule to include the protobuf fix for windows
21c786071b : update nn loss tests to use new reduction arg (#9118)
4d57a1750c : Unify THStorage and THCStorage structs. (#9107)
5d474e1812 : Make all module members public (#9111)
cb1bfe91af : Deprecated several functions at torch.nn.functional (#8748)
50392cc554 : Store OperatorDef by copy (#9108)
b79e8f79d8 : Make SumElementsGradient use copy (#9039)
e977485449 : detach spectral norm calculated weight in eval mode (#9020)
553c41f082 : Adds serialization path (#9035)
623ae0c07c : Fix loading 0.4 BN checkpoints (#9004)
179807a8c7 : Fix MAGMA svd and eig (#9082)
474fdd7e2d : minor pybind for jit (#8890)
8364470e5c : fix empty batch for softmax (#9075)
04f2708265 : Fix build script for Windows (#9060)
c61f0217a5 : combine size_average and reduce args in loss functions (#8018)
03e7953a98 : Use FixedDivisor in Reduce and Broadcast CUDA kernels (#9072)
90fd4df695 : Add flag for disabling tests with multiprocessing spawn start method (#9061)
2c6c53f5ce : Ensure that domain starts with domain_prefix before extracting substring (#9053)
0515664c42 : Make _C depend on csrc-no-python (#9057)
b07ea04e23 : empty batch for spatialBN (#8933)
d7487bfe9e : Speed-up multidim sum (#8992)
9ce15173fb : Move _cudnn_init_dropout_state to TensorOptions and enable cuDNN dropout in C++ API RNNs (#9012)
863754c722 : Update the ONNX op coverage in C2
d793473e60 : add note to avoid memory surge on GPU (#9019)
67b21117b7 : Add BatchTensor class (#8922)
3a71cf2e54 : Disable verbose printing for time sequence prediction test
7a1081b310 : Re-enable passing operator-level tests (#9044)
b3fe200704 : Fix TestJit.test_alexnet expect file
f6cfd83a80 : Find unused port for test dynamically (#9037)
b75490414c : Bump up the C2 onnx frontend opset to 8 (#9006)
4efbd2e22c : Improve DataLoader worker fail error message (#9007)
a2bf55f9eb : Fix select backward when wrap dim (#9033)
2507e273dc : Fix CUDA 8 for Windows (#9023)
c2a89b69b9 : Support to ONNXIFI op (#8749)
37e526e1a8 : Better print of nn Containers (#8939)
512c49e831 : Correct link flag order for GNU ld in utils.cpp_extension.load (#9021)
6a1e801071 : add second variant to Tensor.add, Tensor.add_ docstring (fixes: #8690) (#9027)
b795620442 : Fix x.pow(0) gradient when x contains 0 (#8945)
00b5d397ae : Fix resolution callback for @script_method (#8912)
4643269eb5 : Document get_device, fixes #8857 (#8859)
bf65df5310 : Get rid of possible ODR violation with const char*. (#8962)
5b7951057d : Distributed Data Parallel Module Implementation (#8584)
30549a1293 : Deal with more threads than necessary (#8961)
2e23bc1a20 : Switch to emitting ScriptModule for scripted and traced functions (#8876)
0bd9e96b08 : Enable script for time-sequence prediction (#8862)
f0772c0ab2 : Replace max_pool with max_pool_with_indices (#8946)
66465f1e17 : Create nn::Module::is (#8970)
15a75208ee : Use std::random_device for generating storage handle (#8971)
838fdd6f99 : Add Cube and Cbrt Ops (#8991)
61ca0ba222 : Add log1p for sparse tensor (#8969)
8d384600b8 : Add ShapeTypeInference for Conditional operator (#8924)
7310229426 : Fix TestCollectEnv flakiness (#8983)
93cc7d1923 : Add in_place test for binary ops
ccc14071f4 : Fix Module::zero_grad (#8964)
63233f98ad : Bump up opset version to 7 in Caffe2 ONNX exporter (#8854)
148088a681 : Convert at::Tensor to torch::Tensor in AnyModule (#8968)
77484d91db : Add AT_WARN to issue warnings from ATen (#8967)
c3b499227d : Avoid iomp/gomp clash when building IDEEP ops (#8955)
ccd3e2c03d : Skip operator tests in rocm CI jobs (#8720)
059ccb62c1 : bump up onnx version (#8975)
346de2535d : Workaround lack of 0-dim support in ideep (#8959)
03d0a70a4d : Set random seed at the start of C++ tests (#8903)
a41d433d9d : Check key should be string in nn.Module.add_module, parameter and buffer (#8960)
07b6c28715 : Fix comment in file
f52c2ca1c6 : net_async tracing use enable_profile arg from NetDef (#8927)
ba8e133844 : Refactor batch sampler (#8958)
6aa8b67ed0 : Attempt to fix operator<< in Caffe2
fef9a66d08 : Use torch:: instead of at:: (#8911)
4c5192788b : Cleanup of the shipit commit (#8956)
e6208b3340 : by default, do not throw image decoding error (#8951)
da4cb226d8 : Fix a bug introduced by the deletion of copy constructor of tensor
a898a8f1f0 : Adding pyyaml to mac and windows builds
624303340e : Remove third_party from CODEOWNERS file (#8950)
6446ffa536 : More detailed help message for 'without ATen_cuda library' message. (#8898)
d9c64851e9 : Fix nccl/CMakeLists.txt (#8948)
c4744cfafa : bilinear upsample operator on CPU
c82715ced5 : Add some extra punctuation to README. (#8941)
9ec0a2aef4 : fbshipit-source-id: ba600fcd2b5cefc7621357bdeb05e24cea02e5af
290d20b094 : Replace max_pool with max_pool_with_indices (#8892)
edb88b5f3a : Update from Facebook (#8887)
055f527242 : [build] Use conda cmake in two CI builds (#8864)
55757357b2 : [C++ API] Better forward methods (#8739)
f607794dc2 : [c10d] No default device for ProcessGroupGloo (#8888)
74d2d562f3 : Fix default values for affine= in the docstrings of InstanceNormXd (#8895)
76e9dbad37 : Stop making dynamic allocations of PinnedMemoryAllocator. (#8896)
1f36caceb2 : [C++ API] Rework optimization package (#8815)
22ba8726da : Mention MPICH_MAX_THREAD_SAFETY=multiple. (#8580)
31327dd1e1 : Unify isViewable, handle n-dimensional empty tensors. (#8883)
6e28d4d364 : Add pos_weight argument to nn.BCEWithLogitsLoss (#5660) (#6856)
f935ba1b05 : [build] Enable clang-specific warnings only when using clang (#8869)
8e019826c9 : Fix cmake cudnn autodetection (#8891)
af741dc2fd : [c10d] Fix link order for building C++ tests (#8889)
8ef5d37ac5 : directly add_subdirectory(nanopb) from torch CMakeLists (#8870)
47492ed451 : [C++ API] Bag of fixes (#8843)
3d580f2f7d : [build] Raise in cmake when seeing NVCC{9/9.1} + GCC6 combo (#8863)
8e98a1a84d : Create avg_pool1d in ATen (#8880)
85f4d2b55a : throw error when grid_sample is passed unsupported mode (#8884)
f74207c99f : Allow autograd to work even when the shape of values cannot be determined (#8641)
7a614799f7 : Make at::Tensor::to() const (#8839)
5cb8586dde : [auto] Update onnx to 458c521 - Fix typo (onnx/onnx#1143) https://github.com/onnx/onnx/commit/458c5218446591e610614f04190a68e475840e23
288d37998a : [Caffe2] Fix gradient_check on in-place ops (#8828)
838fb87874 : Fix as_strided_backward (#8721)
b5a123c06c : [jit] Add python bindings for Gradient and differentiate (#8830)
49a3e49627 : Fixes #8508. Upcasted loc to 1-d if a scalar loc is provided to MultivariateNormal (#8543)
41181169ae : [auto] Update onnx to 6bedd27 - add broadcasting support for min/max/sum/mean (onnx/onnx#1124) https://github.com/onnx/onnx/commit/6bedd27b0307c9295039bd847895a27275160a98
89afb93e1d : Delete dead TH size inference code. (#8866)
cca247635c : First version of dispatcher (#8713)
2b926aafb0 : [build] disable test_expect for pinning cmake to 3.5* in dockerfiles repo (#8850)
04440d2c57 : Fix nonzero and tensor printing of n-dimensional empty tensors. (#8849)
1e7fcb5d1b : fix NCCL NVCC_GENCODE w/ multiple archs (#8834)
e251fb5036 : Add file and line to CUDA_CHECK and CUDNN_CHECK (#8836)
e31ab99932 : [Ready for Review] Better fix for NCCL + sccache (#8829)
50410c9572 : fixes #8840 (#8841)
a5df8ec841 : Created DefaultTensorOptions in ATen (#8647)
521f5111ad : [C++ API] Use torch::Tensor instead of at::Tensor/Variable mix (#8680)
22a70fbe2e : Minor fixes for finding CUDNN (#8743)
fc22bf3e82 : Spectral norm improvements (#8590)
3598356420 : Port THCS to ATen. (#8689)
731273b8d6 : Improve convT output_padding docs (#8825)
e4ff0b8aa1 : remove unnecessary headers from SpectralOps, add cuda.h include to deviceutils (#8819)
ebae3f502c : Fix CUDA_NVCC_EXECUTABLE from being set to empty (#8822)
7fbd57091d : Doc: specify batch_first is True by default in RNN (#8807)
74fa304b31 : [Caffe2] Export clang compilation database in setuptools build (#8811)
12904edae9 : Test that broadcast doesn't copy when dst and src devices are the same (#8803)
46bff5d9ff : Set MKL VML error mode to ignore (#8800)
73b92472d2 : [README.md] Use GitLab URL for CMake (#8799)
1d4cf095b8 : Add CUDA to logspace and linspace declarations in Declarations.cwrap (#8798)
675b579bf9 : cmake wrapper (#8797)
d3ec956d91 : Revert "ROCm 1.8.2 does not define CUBLAS_STATUS_ARCH_MISMATCH (#8732)" (#8791)
f138111d52 : remove unused flag (#8779)
ddda7cfea5 : allow output_size to contain None in adaptive pooling methods (#8596)
b1b77c9eb5 : Use virtual dtor for Annotation (#8780)
e6c7b38f94 : Cache cufft plans (#8344)
fed44cb1b3 : Remove aten project for main build (#8532)
ce13ca235e : added default lambd=0.5 for hardshrink (#8770)
5a7b4840d9 : Move nanopb-generated ONNX to unique file name (#8773)
9c426797a8 : Expose is_compatible function (#8783)
83f846ff7a : [auto] Update onnx to 410530e - Make test suite backward compatible (onnx/onnx#1137) https://github.com/onnx/onnx/commit/410530e8c6ad7e7b66644a247dace2de11a724e3
bd95f8f948 : Resolve name conflict of ContextManager (#7244)
53c0de57d9 : Document ideal vs actual SparseTensorImpl invariants. (#8776)
fd32cc6118 : Disable sccache when building NCCL (#8708)
0750967496 : Adjust nested parallelization to deal with OMP (#8723)
54a2e817a6 : [auto] Update onnx to bc986de - Add is_compatible method in python backend (onnx/onnx#1132) https://github.com/onnx/onnx/commit/bc986dee4c953d8a865ef6f60754b88a1a1c4cb2
dc5837a1f4 : [JIT] Adds fp16 support to the jit (#8679)
709c300437 : [c10d] Configurable number of algorithm entries per key (#8765)
2bb7e480c1 : Define conversions and operations on at::Half (#8660)
41c08fe4a1 : Add tools/shared/_utils_internal.py to gitignore (#8756)
8489c4cc6e : Better support for literals in jit script (#8687)
3de45f3430 : Add ssnl and zou3519 as pytorch doc owner (#8754)
be3d65a7e2 : i2h<->h2h in gif (#8750)
c8cc246226 : [JIT] Tests for calling between different frontend modes (#8704)
40262ca9d1 : Disable flaky test_lstm_fusion_cpu test (#8747)
e07a49e15a : Set DEBUG=1 in trusty-py3.6-gcc5.4 CI build (#8593)
b300934db6 : Add CUDA 9.2 + GCC 7 build and test to CI (#8592)
117b77e574 : Install vim by default on all Caffe2 docker images. (#8731)
98a7d84a5a : Link to C++ extensions in README.md (#8737)
c0dfe23703 : Support n-dimensional empty tensors in (most of) THCUNN. (#8722)
9b465313cf : Support n-dimensional empty tensors in more of TH/THC. (#8726)
9dffaf593e : ROCm 1.8.2 does not define CUBLAS_STATUS_ARCH_MISMATCH (#8732)
ac068fdabe : Use env var to pass sharding options to test_nn.py (#8727)
bbd71a7c81 : [auto] Update onnx to 9b9f595 - Make axis optional (onnx/onnx#1128) https://github.com/onnx/onnx/commit/9b9f595107e3fc0295d50f6294d43879df17552f
4f604a436b : Export tensor descriptor (#8313)
35e66efbfc : Don't set HIP flags on non-HIP build. (#8728)
6181979a7c : [auto] Update onnx to 7558954 - Use cmath instead of math.h (onnx/onnx#1129) https://github.com/onnx/onnx/commit/7558954ffd6b8b84d5c2a1859f853578076fae2e
d79711d689 : [auto] Update onnx to 068f1a4 - Optimization pass to fuse batch normalization operator with convolution operator (onnx/onnx#1106) https://github.com/onnx/onnx/commit/068f1a4079f02fcbd6e17b74e55577369a6f56f8
f037d392c1 : Support n-dimensional empty tensors in (most of) THNN. (#8702)
1e570fa5a8 : Add c10d/Def.hpp placeholder (#8711)
802929608c : [JIT] Improve test coverage for ErrorReport instances (#8668)
d00c79f2b5 : Improve cudnn RNN backward error message in eval mode (#8706)
17784d2029 : Make at::tensor faster (#8709)
544690bf4e : Update rnn.py (#8705)
48e90e3339 : Build system changes (#8627)
0acddd6cee : Add torch.cuda.cudnn_is_available (#8703)
85468155ce : Implement OpSchema and a default DispatchKey (#8662)
f9da3aa1aa : [auto] Update onnx to b1571d8 - ONNXIFI loader library (onnx/onnx#556) https://github.com/onnx/onnx/commit/b1571d829f7eda1691c7de762ef9a7bbc0750bca
5642937ac1 : more formatting (#8701)
3e25b4af6d : Fix #8692 (#8699)
73ce21a313 : Create captured inputs recursively for loop to resolve loop-carried dependencies across nested blocks (#8345)
d6c873a393 : Shard test_nn to reduce runtime for each test target (#8678)
9335885b1b : Create at::tensor (#8475)
b4cd9f2fc9 : Clarify mp note about sharing a tensor's grad field. (#8688)
08c1770d79 : Add owner rule for cpp_extension.py (#8700)
b492d103ee : fix formatting in :math: in fold docstring (#8696)
b6af5d40bf : Some 0-sized dimension support, port catArray away from resizeLegacy. (#8666)
cc6b046f48 : Implement flatten function (#8578)
065fdbd500 : Created Tensor::to functions (#8643)
d97c9dd019 : Add a warning in gradcheck if inputs precision < float64 (#8663)
61b863cbdc : Fix parsing of floating point defaults in python_arg_parser (#8681)
3da27312bb : Export ProcessGroupGloo options to Python (#8664)
0e0031e204 : Fix build error in pybind_state_ideep (#8684)
695fd98192 : Compatibility: write nDimension/_nDimension corresponding to dim()/_dim(). (#8676)
6402a4278b : Improve win-build.sh for local build (#8674)
be3e3f2ec8 : don't do unnecessary copies for bernoulli_ (#8682)
7fa81d6dbc : Use parallel if get_num_threads 0 (#8677)
8e4fe5dcf4 : Fix serialization for Parameters (#8633)
637dcdc279 : Remove dangling inclusion path (#8671)
d46312fd15 : Create at::from_blob (#8640)
66e8ecf2ea : 16bit typeid (#8534)
4608aa3058 : Setup wrappers to get vectorized version of mean (#8618)
d3b690ecd5 : TensorTypeId (#8389)
7a048cdcd7 : Vectorize non-contiguous unary operations (#8488)
03f7289fcf : Add CAFFE2_USE_CUDNN guard on context_gpu.cu (#8657)
2bf8b702a3 : Fix broadcast copying device[0] tensor when not using NCCL (#8222)
a60540ed2b : Make NCCL build select NVCC_GENCODE smarter (#8615)
61c96811be : [c10d] NCCL python binding and CI test, with bug fixes (#8357)
5ca4f5b43b : [JIT] Remove dead functions (#8658)
a2dd707031 : [C++ API] Create fixed width dtypes in torch:: namespace (#8639)
7ccecbbb4e : Create Tensor::options (#8630)
6cc7670bed : Port all indirect calls of resizeNdLegacy to resizeNd. (#8603)
65f7797d4d : typo corrected (#8632)
c80a703829 : Add CODEOWNERS entry for third_party to track changes (#8654)
b8b051cc19 : change avg_pool2/3d count_include_pad default to what it is in the docs and in 0.2 (#8645)
9a9eadacc6 : explicitly check device for grid_sampler (fixes: #8599) (#8646)
5f64484800 : update to avoid potential duplicate error msg (#8638)
32bc28dd18 : caffe2 export (#8642)
1ac1a9dbc6 : update doc for comparison operators (#8636)
f14887a63f : check for exact shape match before loading (#8619)
271406f276 : [C++ API] Make pImpl easy to use in modules to enable happy reference semantics (#8347)
d3651585b8 : Simplify pthreadpool implementation on top of Caffe2 thread pool (#7666)
2289815fc3 : Make CI green again (#8631)
6307c117b3 : Fix const type qualifier warning (#8613)
c44c95fd0b : New operator 'expand' (#8263)
05c473b85c : Temporarily remove TBB (#8255)
4f37a6481d : Fix DeviceGuard usage in THD (#8622)
10961a5b6d : Add OpenMPI for MPI tests. (#8625)
a7bf539002 : [JIT] add missing check for excluding tensor method tests (#8617)
525aa74165 : Improve check for addmm in autodiff (#8575)
e4f254224e : apt update before installing nccl2 (#8624)
11ea8175d4 : Remove all resizeLegacy calls, except for catArray. (#8616)
0a5fe55c9f : [auto] Update onnx to 53edd9e - Exclude Random Generator from Test Coverage Stat (onnx/onnx#1119) https://github.com/onnx/onnx/commit/53edd9e80e06fd170711a08b3a5f6c031847ff1b
90532d5f57 : Don't use MKL VML for log2 if below MKL build 20180406 (#8614)
ae25737455 : Add kwarg support to test_autograd and stop using deprecated schema for accumulation ops (#8574)
2039c7a38f : Fix test_rnn_args_check (#8606)
e62c3a470c : [Caffe2] Make cmake find current Python first (#8569)
88db4c816e : Disable flaky Chaining tests (#8601)
c1d04c73d2 : Implement non-legacy TH/THC resize, with pseudo 0-sized dimension support. (#8559)
d813ffc613 : Dont show Python frames in backtrace (#8579)
0ae8b6c027 : add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc (#8600)
372d1d6735 : Create ATen tensors via TensorOptions (#7869)
c9b8d8566d : Added flip() fn in ATen (CPU + CUDA) (#7873)
92f67d9404 : fix lint
26bed6d83e : assert limit on cudnn grid_sampler (#8576)
7b2ad8893d : Eliminates noisy assert spew when running test_cuda.py (#8531)
682dec2cea : add relu to jit and exp to autodiff (#8573)
b10c94b507 : Update operator documentation with markdown descriptions and interfaces (#8085)
d968614502 : Enable open registration of VariableType objects (#8540)
711e5a6ceb : Port THS to ATen. (#8409)
c537fd7432 : fix lint (#8567)
c457fc994d : Adding pyyaml to Ubuntu and Centos docker images (#8490)
ec23ee67cf : add order switch op to nomnigraph (#8436)
dc186cc9fe : Remove NO_* and WITH_* across codebase, except in setup.py (#8555)
d7690742d5 : Fix the formula of some norms (#8545)
b002aee0ff : Disable verbose logging for PyTorch ROCm nightly builds. (#8517)
7251d70c5b : fixed THD NO_CUDA (#8539)
0965e8e9e7 : [auto] Update onnx to 0125af3 - Add node test for Dropout (onnx/onnx#1115) https://github.com/onnx/onnx/commit/0125af320480c40aed22f04abf990ef2c2f88c92
4e3ada19cf : [auto] Update onnx to d9fc1b1 - Add Node test for BatchNormalization (onnx/onnx#1117) https://github.com/onnx/onnx/commit/d9fc1b14aa0763a32e54c58f5442038c7639339c
5a31f73611 : [auto] Update onnx to b70ee6a - Make RNN/LSTM/GRU treatment of recurrent weights consistent (onnx/onnx#1103) https://github.com/onnx/onnx/commit/b70ee6a99b80c38ac90c35ac5d8fce7f6abd8185
677739cd1e : Fix createZerosLike for scalars (#8537)
55de546146 : [auto] Update onnx to c647994 - fix upper-bound for local-region in lrn test case (onnx/onnx#1095) https://github.com/onnx/onnx/commit/c6479945bb0a02bd079867f260ee940df1332322
a8bf30d7a5 : caffe2 hip python binding (#8491)
3a1265c739 : [auto] Update onnx to 578a439 - Add Node Test for InstanceNormalization (onnx/onnx#1118) https://github.com/onnx/onnx/commit/578a439b6351e4a2da4d53aa117708c15f22fd0e
829bcf3e9b : Don't apply PR 12 to Thrust anymore. (#8542)
848873e1f6 : Must run apt-get install as sudo. (#8454)
302408e6c2 : Support BatchNormalization opset 7 (#8482)
54c456da68 : Improve win-build.sh for Windows local build (#8493)
544605d3a9 : [JIT] Remove TK_WHERE (#8536)
34c9d16ca1 : [JIT] End-to-end example-based robustness testing for hybrid frontend (#8451)
6869a5f0fb : Throw error on 0-length tensor slicing (#7775)
edc3000963 : Move empty size logic from ATen into TH/THC. (#8468)
6287b80d67 : [auto] Update onnx to 3ca20e6 - Remove obsolete installation doc. (onnx/onnx#1108) https://github.com/onnx/onnx/commit/3ca20e6993e620cae605adb3afda47fc0d9a901d
ae55865a3b : Migrated hardshrink() to ATen and deprecated nn.Hardshrink() (#8117)
2ab4c9dbec : DEPRECATED -> AT_DEPRECATED (#8496)
c4194169a8 : Temporary solution for having access to Python installation path. (#8487)
2f25d1fbc1 : Enable tracing and script autograd tests (#8145)
aa2c79a125 : Add ONLY_FOR_TEST device type into executor (#8461)
467fc3c436 : [READY TO MERGE] Improve docs for Multinomial and Categorical distributions (#8472)
aed98067bf : Pin correct clang version in macOS CI test (#8457)
fa277e6785 : [IDEEP] [fix bug] Fix bug in ideep SkipOutputCopy strategy (#8372)
a4bd4f6c6f : Fix -g not passed to nvcc when DEBUG=1 (#8407)
384936f73e : TypeId improvements (#8350)
752bb954b4 : Update RunAsyncFailure test (#8486)
21609e0fd0 : ``bincount`` feature implementation (#6688)
2a0e98a334 : Move libtorch CMakeLists.txt to torch/ (#8444)
e323f02277 : Fixing missing PyCObject_Type bug (#8467)
2184e3f933 : Use MKL VML if available (#8458)
8d674c0d51 : add comparison operators to jit (#8058)
9d88ff7d0d : Add half cauchy, half normal distributions (#8411)
6a85b133d3 : Improve number formatting in tensor print (#7632)
bb9ef8fc2e : Support new version of Dropout (#8470)
2de4ab88f5 : remove _assert_no_grad from loss modules (#8460)
db14f3f33c : More efficient kernels that avoid deprecated shuffles in Embedding and LookupTable (#8400)
f7585178cd : [auto] Update onnx to b7d5a60 - Add stats on ONNX node tests (onnx/onnx#1110) https://github.com/onnx/onnx/commit/b7d5a60f90e80a0114bf392d9a8655c149125c9c
64d5b1454e : Add is_variable tag to Tensor (#8414)
6e314f9f68 : update tensor clone docs (#8462)
681964cc47 : output each operator separately due to logcat truncation (#8456)
ad378dfbaf : Add the necessary LOCAL variables so the perl script used by HIP utils runs successfully without error. (#8464)
df3559ca58 : Move hip utils files to a separate directory (#8446)
dc209ed963 : [c10d] Rendezvous skeleton (#8294)
8a837f0fe3 : Repairing the integrated build path to handle the Caffe2 PR. (#8441)
4d287f9074 : Use int64_t instead of int for in loop that may overflow. (#8435)
2c9c48a323 : Add CODEOWNERS entry for c10d test file (#8445)
71a3633e3f : change tensor.set_() argument names to match descriptions in doc (#8403)
5b86c3af4a : Update from facebook (#8384)
f1b5124306 : Fix #8420, defaulting the initial hidden state to 0 (#8427)
09896d1e77 : Allow nccl downgrades (#8429)
edd4e2c5d1 : Expose proto utils and ONNX (#8073)
7543d0f794 : Enable some of the ONNX backend test on broadcasting (#8423)
61f61de270 : Expose logsumexp docs and mark log_sum_exp in distributions for internal use (#8428)
3cb45bafc8 : Stop pinning nccl version. (#8421)
7ca8e2f131 : fix old comment to point to the right file (#8416)
a42c12bb11 : Enable some reduce operators' ONNX backend tests (#8418)
c37e5b7137 : [Caffe2] Enable AMD/MIOPEN ops for Caffe2 (#8306)
36bf89bf09 : Remove imaginary file (#8415)
04503962ff : [ONNX] Add an ATen fallback pathway for ONNX export (#8273)
76f22b7aef : [caffe2] upgrade IDEEP and hotfix for conv op accuracy issue (#8364)
81b92f7515 : Get ROCm building again on master (#8343)
49d6c5f99f : Branch parallel if number of threads is 1 (#8401)
7c9e936986 : Add way of deprecating ATen functions (#8404)
557511102e : Always include Modules_CUDA_fix for Caffe2 builds (#8396)
4485ce66c2 : Fix flaky RoiAlignTest, fixes #8084. (#8312)
b947ac227d : Check if you forgot to specify 'variants: function' on _out (#8402)
fcd9af8a25 : changes to support ATen code generation inside fbcode (#8397)
ffffee6aa9 : Skip test_multinomial_invalid_probs on Windows (#8360)
712a3fad27 : Adding CMAKE_PREFIX_PATH and CMAKE_INSTALL_PREFIX to cmake summary (#8398)
c3e4b3c88b : raise more informative error msg when torch.load is given a file that does not support seek (#7754)
c6db1bc952 : Add gt lt ge le to the supported operators list (#8375)
bef12551ee : Check CAFFE2_USE_MSVC_STATIC_RUNTIME to set -MD vs -MT in cuda.cmake (#8381)
5f5ea75283 : Use SYSTEM For all includes in Dependencies.cmake (#8380)
49eec35e5b : More warning skips (#8382)
a77b391de7 : [SpectralNorm] don't register original weight as buffer (#8170)
922adf8d09 : Skip calling ncclCommDestroy in destructor (#8352)
991bdd7f13 : [build] remove the use of NO_CUDA (#8300)
5484a197d9 : [c10d] Convenience wrappers for collective functions (#8292)
cc8fbc9d08 : Revert "Name the thread pools (#8137)" (#8379)
96876d9e7e : Name the thread pools (#8137)
a161639fcd : Move copyright lines back to NOTICE file, fixes #6911 (#8310)
44973a06ba : Add affine_channel_op (#8356)
87dcdf5fe5 : [auto] Update onnx to 86999f9 - Fix the LRN's doc (onnx/onnx#1107) https://github.com/onnx/onnx/commit/86999f90f030cf82eee153eb265a1303720772ed
1f02ebd323 : Use clang 8 to build CUDA in macOS CI (#8355)
78e3259bbe : Add autograd automatic anomaly detection (#7677)
38362fa9f3 : Prepare for moving 0-sized dimensions in TH/THC. (#8337)
0cced57cb8 : Build DEBUG mode with -O0, fixes #8335. (#8336)
ae1ceef36a : Allow TypeMeta hold non-default-constructible types (#8349)
ddab886105 : [caffe2] Move elementwise grad ops to separate files (#8315)
46c0b01234 : Revert D3314316 (#8346)
9b1480a28e : Fix disabling of USE_CUDNN when not found (#8340)
607b86f603 : Implement dim_arange operator (#8266)
de4e97e89a : [C++ API] Cursors (#8190)
77660a9cbb : Support printing sparse tensors in ATen, fixes #8333. (#8334)
77dea37dac : Skip test_multinomial_invalid_probs_cuda on Windows (#8324)
f4b79f99d1 : Fix the script not stopping earlier on error for MSVC and Ninja (#8277)
bed172cf54 : Fix collect_env.py for Windows (#8326)
52e4d3c4a2 : add error when backend is not supported by DDP (#8325)
94888106a9 : Add docstring for `torch.sparse_coo_tensor` (#8152)
80b6f9edd6 : [THD] fix broken THD build with NCCL (#8323)
01f5ba4f3e : [auto] Update onnx to 4b4085c - Add missing warning ignoring flags to onnx_proto CMake target (onnx/onnx#1105) https://github.com/onnx/onnx/commit/4b4085c2e9d5a944651a2dd0dfdd20ef452bdcdf
0169ac5936 : Fix sample code for cuda stream (#8319)
bf8689d0e5 : [auto] Update onnx to 5ed684e - Remove/replace /MX with /WX for MSVC build. Was typo in a previous ch… (onnx/onnx#1104) https://github.com/onnx/onnx/commit/5ed684ebe5fd2c1fa1b79aeb7bbacf2844a6cb01
d33cc08a97 : Small fixes (#8296)
5fe24968ed : Remove unused grad ops on mobile to reduce app size (#8297)
07d3f14eed : Clean up old sccache log before build (#8305)
b78466a37d : Replace Variables with Tensors (#8309)
29849e428c : Removes unused THCTensorConv (#8229)
3521cd54af : Fix dividing by zero segfault in Reshape (#8302)
2ed03898cd : Add depthwise convolution test for IDEEP (#8301)
e6ef18d531 : Entries for torch.distributed in CODEOWNERS (#8293)
788f05d215 : Remove THC's FindMAGMA (#8299)
a34211bd79 : Some utils for compile-time programming (#7778)
f35d7cce91 : [auto] Update onnx to 58efe0a - add float16 support back for math and reduction ops (onnx/onnx#1102) https://github.com/onnx/onnx/commit/58efe0a9ca6228942d3f7e955babe44459343347
045e7435c3 : Have a single THTensor / THCTensor type. (#8288)
37073f8be0 : [build] Remove /torch/lib/THD/cmake in favor of /cmake (#7159)
c486b8749d : Add option USE_NVRTC which defaults to off (#8289)
695d40efc2 : Create initial Python bindings for c10d (#8119)
75563674c4 : Remove remaining TensorTypeUtils functions. (#8286)
efba555a38 : c10 build setup (#8264)
d56b4f2568 : Set up CI build for CUDA 9.2 + macOS (#8274)
a994b432ee : [c10d] NCCL Process Group implementation (#8182)
d301d9df7a : [ideep] Fuse Conv-Relu after IDEEP graph rewrite, skip group conv (#8233)
742912512c : Move signal window functions to ATen; add Blackman window (#8130)
20c516ac18 : [cmake] Make cudnn optional (#8265)
147fc6b9cc : [auto] Update onnx to 39e4668 - fix optimizer does not set ir_version bug (onnx/onnx#1098) https://github.com/onnx/onnx/commit/39e46687eafd34c78dd59a1218171371aa3679f1
2928a33f50 : [auto] Update onnx to 2508156 - Make error message more verbose (onnx/onnx#1097) https://github.com/onnx/onnx/commit/2508156135c67f2097aaac42153f641e55fd6c68
1a03ba51dc : [cmake] Add and export Modules_CUDA_fix (#8271)
49593a609a : [caffe2] Fix ATen dispatch for ops with TensorList arg (#8226)
80fade8af4 : un-genericize THCDeviceTensorUtils. (#8258)
4f1440e828 : [ideep] Add IDEEP fallbacks for Faster-RCNN ops (#8260)
048b2f3a91 : [caffe2] Move submodule onnx-tensorrt forward (#7659)
8d0c3c721a : Remove TensorUtils<T>::getData, provide data<T>() in TH(C)Tensor. (#8247)
0c9b5f0825 : Change the output format of caffe2 observers (#8261)
4c2a1a1a64 : Added backward function for kl_div target (#7839)
ce122cc2d3 : Relax CUDA_HOME detection logic, to build when libraries are found. (#8244)
73966f65ae : Stop BCELoss from returning negative results (#8147)
e2be77eae8 : Fix app size check (#8256)
78b88219fa : [cmake] Use CAFFE2_USE_* for public/cuda.cmake (#8248)
b4c6310247 : Fully genericize THC/THCUNN (except for TensorUtils and DeviceTensorUtils). (#8251)
95ae09c866 : [auto] Update onnx to 3a035f4 - Add retry logic to model downloading (onnx/onnx#1077) https://github.com/onnx/onnx/commit/3a035f439799de3568c364f3f87014841037708e
93a9bb9f35 : Don't override Tensor, Storage macros defined outside torch/csrc in t… (#8243)
a466c12bd4 : Fix lifting cat into its constant version (#8174)
f2c86532f3 : Fix TEST_CUDA import in test_cuda (#8246)
14f5484e0d : Print requires_grad and grad_fn in string repr of tensor (#8211)
d2271dcee3 : Fix: gradcheck forced float32 (#8230)
3eb9ba4d60 : Remove .gitmodules.aten since it is in .gitmodules now (#8232)
d1bdb3b10a : Remove core and util warnings (#8239)
ea5d871e49 : [caffe2] Build Android tests and binaries in CI (#7593)
7ed361a466 : Rename SparseTensor to SparseTensorRef. (#8237)
346568d40f : Use .cc since some downstream libraries are configured for C++ only. (#8234)
c22c55ebed : [auto] Update onnx to 62e63e9 - Fix build errors inside protobuf-bench (onnx/onnx#1084) https://github.com/onnx/onnx/commit/62e63e9de8f8a3bb8e30c5f7f7f87fb94364ec17
832c88a766 : [ideep] Add IDEEP Squeeze op (#8227)
4df86b6547 : Update MKL exporter to IDEEP ops (#8228)
b401e6b03a : Allow optional build and installation of native test binaries (#8225)
8af88f3525 : [Caffe2] Add ADD operator for IDEEP (#8220)
2f18f864fb : Fix win mkldnn (#7718)
d0ca8896d5 : Don't copy unneeded grads when using a function for several derivatives (Fixes #7722) (#7759)
c84b97b979 : [READY TO MERGE] Enable tests that use DataLoader with multiple workers on Windows (#6745)
89ea6acde2 : [NEEDS REVIEW] Add nan and inf probability check to multinomial (#7647)
784c46ba1d : [READY TO MERGE] Use ccache in macOS build (#8009)
1172b152ab : move THCP-related utils to cuda/utils.cpp. (#8221)
5ec3041a42 : Structure THTensor like THCTensor is structured. (#8217)
deb56dfd06 : Change new bernoulli implementation to be fully generic. (#8218)
07df98a3b8 : [auto] Update onnx to e96d823 - Update Google benchmark to 1.4.1 (onnx/onnx#1083) https://github.com/onnx/onnx/commit/e96d823e5cc69ab02dccaba4d7971897918173c4
02734e389d : Move helper functions to unnamed namespace. (#8224)
7cace7219a : Change the benchmark log format and also log flops (#8215)
b03ba9023e : Set up a c10 source folder (#7822)
f3869b4e03 : [auto] Update onnx to 18d70ff - Graph should only have one (input) kParam node (onnx/onnx#1088) https://github.com/onnx/onnx/commit/18d70ff5294953ccdf791b44ce5ccd9065584945
12229afd00 : Record shape and type in autograd to validate gradients (#8168)
36b8cc5483 : skip CUDA memory leak check on Windows altogether (#8213)
56b1dcccf6 : [cmake] deprecate caffe2_* specific cuda function in cmake. (#8200)
f2f76e29ee : [auto] Update onnx to f28e2f1 - fix lrn spec (onnx/onnx#1090) https://github.com/onnx/onnx/commit/f28e2f1a601875593af35a52888f829ba82c0598
1f23043b0a : Fix tanh_op on ios build (#8207)
7ee517a266 : rm -rf aten/contrib (#8165)
005eef5027 : Bump gloo submodule (#8202)
5935c5f23b : Fix c10d compiler warnings (#8206)
61fd99e1b3 : Replace (non-data) TensorUtils calls with non-generic THCTensor calls. (#8176)
4d025a6a54 : add wipe_cache option (#8204)
eaea0f4b82 : Update c10d build to link against Caffe2 (#8201)
edfcbfbe1f : Implement randperm for CUDA (#7606)
9af3a80cff : Docs for gradcheck and gradgradcheck; expose gradgradcheck (#8166)
35f08b930d : Allow parallel_apply to take in list[Tensor] (#8047)
e6044e5576 : use THCThrustAllocator in BCECriterion (#8188)
c0b2a2aa3b : Add more annotations for arguments in ATen schema (#8192)
5e372c7106 : fix lint
115a494b5f : Fix scalar check for sparse tensors. (#8197)
8e6f7a1382 : [Caffe2] Merging setup.py with setup_caffe2.py (#8129)
857020b849 : [auto] Update onnx to 4e65fd8 - fuse consecutive squeezes (onnx/onnx#1078) https://github.com/onnx/onnx/commit/4e65fd83baaeb94fbaa050ae9df1016378157116
f45a3d5558 : Add a loop unrolling pass to PyTorch JIT (#7672)
a6305ea210 : Fix protobuf options (#8184)
c496a4a347 : Yangqing as an ONNX codeowner (#8185)
3b8f4d1d88 : [ONNX] Fix type_as symbolic (#8183)
bae82f726d : fix caffe2 docker build (#7411)
e8d6ac50b4 : Add retry logic to sccache download for Windows build (#7697)
c1bd3b3fb7 : Better conv error message based on weight shape (#8051)
b2dac08049 : Fix a corner case for ReShapeOp (#8178)
c21465e32e : Get rid of SOVERSION (again). (#8132)
d7ba404e29 : Add back onnx console scripts dropped during migration from onnx-caffe2 (#8143)
ffde23d45e : use the correct datatype format (#8144)
e53fec0495 : [JIT] Support a single TensorList argument anywhere in the argument list + index_put (#8173)
ccabdfef42 : Export getCudnnHandle (#7726)
9243b64bff : [Caffe2] Update elementwise ops to support numpy style broadcast (#8070)
0517623517 : Abstract parallelization to facilitate using threadpools (#8163)
ba46d3d981 : Adding -setup- path, and better code structure (#8122)
fa1bdcf4d2 : Pinning opencv to < 3.4 in conda builds (#7923)
a3fc5ed351 : Move non-generic Storage code needed by TensorUtils to non-generic C++. (#8164)
1cdd7b5c0f : Fix __rshift__ bug (#8161)
990c6c5531 : [C++ API] Improve and use OrderedDict for parameters / modules (#7823)
bf58bb5e59 : Fix cuda.framework error on OSX. (#8136)
7c1e8c3c7a : remove some unnecessary cudaGetDevices (#8089)
aec6d6a7d3 : [auto] Update onnx to 968d28d - fix Node::isBefore (onnx/onnx#1075) https://github.com/onnx/onnx/commit/968d28d901d2efdaf6d5fcfd529106762524cdfa
fe805794ac : docstring support for @script and @script_method (#7898)
c719c8032c : docs: add canonical_url and fix redirect link (#8155)
227a7640ce : Accelerate bernoulli number generation on CPU (#7171)
ee0b75a3d2 : docs: Add warning to torch.repeat() (#8116)
f5cd479b59 : fix type mismatch while call torch._C._cuda_setDevice (#8065)
c446269568 : cpu/ideep context converter (#8139)
f8c18e00d5 : Fix a corner case for ReShapeOp (#8142)
a5ce0126cc : Fix job name checking for AVX tests (#8135)
c0a419e6ba : Add non_blocking to Tensor/Module.to (#7312)
ec4a0f332e : Add back lrn test (#8134)
94e197c262 : Add utf-8 header to Python file with Unicode. (#8131)
0ea2fa15a3 : Replace most remaining usages of TensorUtils<T>::DataType. (#8124)
410191c417 : Skip OnnxBackendNodeModelTest::test_lrn_default_cuda that causes segfault (#8127)
df28f5d06e : [Caffe2] Support non peer access in muji and fix bug when reduced_affix is empty (#6896)
7fc110b521 : Split SparseTensorImpl off from TensorImpl. (#7990)
f24d715e23 : [auto] Update onnx to 2a87616 - Tests for LRN operator (onnx/onnx#903) https://github.com/onnx/onnx/commit/2a876162ac91438cea370d75a11c9a96942e89da
cef8bfb33e : Add missing pragma once. (#8118)
7ba0dbc2cd : [auto] Update onnx to 2d5ce4a - Remove empty model (onnx/onnx#1058) https://github.com/onnx/onnx/commit/2d5ce4aeb6c485490ad567cbe610bbe1a83ac72d
96a77b5aa8 : Make libshm also test if rt requires pthread. (#8112)
c2046c1e5e : Implement adaptive softmax (#5287)
e749159064 : Detect CUDNN related environment variables in cmake (#8082)
e5b997223c : [Caffe2] Enabling AMD GPU Backend for Caffe2 (#7955)
3d7a064369 : Remove out-of-date comment (#8114)
04a3616de0 : Replace std::size_t with size_t (#8093)
185f8fbe7c : Removing remaining NO_PYTHON ifdefs (#8067)
f8830f9991 : use regex in kwarg parser (#8061)
9fc0ba31b9 : Do an additional sanity check that nvcc and CUDA include dir agree. (#8094)
db5bc71562 : Fix and ignore some warnings (#8081)
dff115f47a : Move backtrace to its own header (#8096)
36c3859d3e : [auto] Update onnx to 356208d - add input tensor dimension checks to shape inference (onnx/onnx#1070) https://github.com/onnx/onnx/commit/356208d7560a3e88cabf11fddfe6fbaa748da35c
74672a31a2 : [auto] Update onnx to cc26486 - bump version to 7 for prelu. (onnx/onnx#1063) https://github.com/onnx/onnx/commit/cc2648654172f0b7044f9469e6c2204c19a3ae1e
9232afeffa : Add code for TensorBoard visualization of JIT GraphExecutors (#8050)
5e35fbfaa3 : Post process onnx proto (#8064)
01f5ee77e3 : Skip ConvTraspose ONNX backend tests (#8074)
624ade1eac : [auto] Update onnx to bd98abb - Add a hook for doing post-processing on protobuf generated header files (onnx/onnx#1068) https://github.com/onnx/onnx/commit/bd98abbba052c1fa2dadc266dbf0d36c1b941970
1fc96b6471 : [auto] Update onnx to eb12f72 - Add conv transpose test cases (onnx/onnx#886) https://github.com/onnx/onnx/commit/eb12f72a8619e2fbad0d86200677fd96201d4351
68948306bc : Support to run ONNX Upsample operator (mode=nearest) in Caffe2 (#8037)
7be457c2a4 : Reduce usages of TensorUtils<T>::DataType in THC. (#8056)
7926313235 : Have a single THStorage and THCStorage type. (#8030)
3cbaa6b785 : [ready] Clean up torch.distributions (#8046)
afa75fa6b2 : Remove NO_PYTHON macros from Exceptions.h/cpp (#8007)
bef306eac7 : [auto] Update onnx to 033f956 - make gcc happy (onnx/onnx#1061) https://github.com/onnx/onnx/commit/033f956f41c55fd409e1c4a0d09795ae5411447f
f2573e8df7 : [auto] Update onnx to e6a500e - Extract constant to initializer (onnx/onnx#1050) https://github.com/onnx/onnx/commit/e6a500e54c50e3d300141f62958dba5f163aea4f
7379b22abe : [auto] Update onnx to 4f8ef17 - Remove erroneous documentation around maps and sequences. (onnx/onnx#1069) https://github.com/onnx/onnx/commit/4f8ef17ad3965e834b93d3753e54dee296aabc11
8d4e92a91d : [auto] Update onnx to 0dbec2a - - Generate protoc type hints on Windows (onnx/onnx#1047) https://github.com/onnx/onnx/commit/0dbec2a0474abcc92806d54d4dab1948674fcf74
2fb957da81 : workaround for Sequential when one cannot retrieve python source (#8048)
eb2f21f1e4 : Skip CUDA memory leak test on BN tests on windows (#8043)
82b981e4db : Update from facebook 1ee4edd286a3 (#8040)
9060b7f4e2 : Add profiling annotations to NeuralNet[Operator|Data] (#8005)
ef1c15f5ca : [script] Add support for torch.zeros, torch.ones, etc. (#7799)
2ec2e6947e : [auto] Update onnx to 9e7855d - Remove PyTorch generated Upsample tests cases (onnx/onnx#1064) https://github.com/onnx/onnx/commit/9e7855dcd43e855e26e13a797f4b12ac9d9f2188
c6a923f486 : Support modules that output scalar in Gather (and data parallel) (#7973)
215abffe60 : [auto] Update onnx to 760c928 - add missing hasNInputShapes check for bidirectionalBroadcastShapeInference (onnx/onnx#1060) https://github.com/onnx/onnx/commit/760c9283d0dfdc4b8705e4fa4bd95aca68dea459
23dd033b51 : Factor python dependency out of interpreter (#7970)
41ef5c2d4b : Support for generating ATen during the fbcode build, rather than committing the generated files (#8002)
d27e138a1a : Allow CI testing with different AVX configs (#8020)
8f421159fd : Fix profiler crash when no events register (#8034)
bf29abd908 : propagate nan in some activations (#8033)
8b447fa784 : [auto] Update onnx to 3fb9656 - Fix for fbcode CI (onnx/onnx#1062) https://github.com/onnx/onnx/commit/3fb965666e7fc271d093ca27529a7a1b1e103c3b
d0ec8af0fc : Support CUDA tensors in ProcessGroupGloo (#7694)
d0e27609ab : [auto] Update onnx to 1504a33 - Convert schema assert for duplicate type names to exception (onnx/onnx#1057) https://github.com/onnx/onnx/commit/1504a33abb7b1bfa773e000e2442545ce403c740
03fe106448 : [auto] Update onnx to 33e9cd4 - Remove the usage of default value to fix invalid proto3 files. (onnx/onnx#1052) https://github.com/onnx/onnx/commit/33e9cd4182fe468675241fba4ae8a16c2f0bd82f
52368f25cc : Example for Transformed Distribution (#8011)
8be17723cb : Update nn.rst (#8029)
b41050ff66 : Re-enable build env check (#7969)
dbe5c7f6e9 : Mention the pytorch-ci-hud on the README. (#8004)
580d212267 : Remove WITH_ROCM cmake flag/variable (use USE_ROCM solely) (#8013)
436211e27c : Make AT_FORALL_SCALAR_TYPES usable outside of at::namespace. (#7935)
6c0bc27371 : [auto] Update onnx to 8ec0e5f - Add index check for Transpose's type inference function (onnx/onnx#1053) https://github.com/onnx/onnx/commit/8ec0e5fe9badecb1c4cc9f136f791499f20c1377
e63be0d58f : Reduce grain size for Unary operations (#8003)
0fe4cb10e3 : Add on-stack observer cache for Observable (#7931)
fd30487089 : Fix a couple of typos (#7998)
8afe4c95d6 : Entry for c10d in CODEOWNERS (#8001)
80ede55242 : Revert "Set smaller grain size for some cases" (#7988)
85ee94b7be : Add memory leak check in CUDA tests (#7270)
bafec1637e : support loading gzip (#6490)
3481c6c5e2 : Build ONNX for PyTorch version of libcaffe2 (#7967)
e9c33e91d9 : Remove python bindings for `torch.slice` (#7924)
89ba9dc44f : Import/export observer symbols for DLL, which fixes the linking error in Visual Studio. (#6834)
eb39a23d8e : Make THStorage / THCStorage have void* data ptr. (#7964)
b5594ac750 : Raise error when torch.load a storage on a non-existing device (#7921)
f9926e4ce5 : Fix EmbeddingBag max_norm option (#7959)
5596260b9e : Add third way to determine IS_CONDA (#7971)
d8e28cfec2 : Enable ONNX backend Mean tests (#7985)
d476d0b4ab : [Hotfix] Bring back warnings and -Werror to ATen (#7866)
1bb6d44a21 : Use Glog's implementation of STL logging when possible. (#7206)
74783f0cd8 : Move the broadcast check in MKL Add/Sum to runtime (#7978)
08b4c7ab7f : Change perf test folder after git checkout (#7980)
108fb1c2c9 : Fix the import part of the windows doc (#7979)
6e1de968d6 : Use mingfeima's mkldnn (#7977)
df77ea7baf : Fix the cpp libtorch CUDA build (#7975)
fce6b24468 : Allowing MatMul to create a gradient even with 3 inputs. Useful if you are differentiating a graph twice (#6536)
9b1abd2f81 : [Caffe2] Keep name of caffe2_pybind11_state and caffe2_pybind11_state_gpu in debug build (#7155)
f0c09203b0 : [caffe2] YellowFin parameter update GPU code fix. (#6993)
c94f3bbf33 : Fix typo in autodiff formula for addmm (#7932)
2e78bfa530 : Delete unused file (#7919)
fa8bdafa6c : Prevent git autocrlf for bash scripts (#7949)
f721481543 : Fix returning scalar input in Python autograd function (#7934)
df5d01df1e : Set smaller grain size for some cases (#7941)
1f94a6eab3 : [JIT] Fission and fusion passes for addmm (#7938)
769f5f7cfe : Handling of scalars in torch.Size (#5676)
d102f9ea18 : Split CI tests in half and run them in parallel (#7867)
8e6cd43291 : Fix checkBackend error message (#7926)
0656ef483d : remove sort requirement from pad-sequence (#7928)
c5b895ac50 : Try to fix TORCH_CUDA_ARCH_LIST for PyTorch again (#7936)
f8e83dc257 : Rename cuda::type to cuda::into_type and provide cuda::from_type. (#7937)
5419c6ecb7 : Add unsafe flag to skip checking in prepare (#7832)
f4256c9605 : cache and use BLAS_SET_BY_USER so that it doesn't set itself to TRUE when run second time (#7942)
c0d50e1e1f : [JIT][script] Fix emitted gather and slice for dynamic indices (#7861)
795f6e1077 : add test for correctness of transpose fusion (#7950)
b3e87b1066 : Fix fbcode compatibility (#7939)
8858b1d519 : Fix THCUNN SpatialDepthwiseConvolution assuming contiguity (#7952)
4a80755834 : Split up detail.h (#7836)
15122e93bc : Test if ASAN is actually working as part of ASAN tests. (#6050)
4e5dec3024 : [auto] Update onnx to 307995b - Update from upstream (onnx/onnx#1038) https://github.com/onnx/onnx/commit/307995b1439e478122780ffc9d4e3ee8910fb7ad
38dbe6e605 : Updates to caffe2 operator documentation (#7917)
c72c083151 : Moved condition for dilated grouped convolutions to CUDNN convolution implementation (#7465)
267fc43a96 : Fix Windows doc for import error (#7704)
c2fa1f363b : [c10d] MPI Process Group Implementation (#7783)
a8625e016a : Spelling fix in MultivariateNormal docstring (#7915)
0951f4424a : CUDA 9.2 adds support to GCC 7.3.1. (#7880)
e8cc16bb92 : Release GIL when copying to shared memory (#7918)
f70146e922 : Fix SN not backprop via sigma(W), and not reusing W_u (#7905)
146b951ec5 : Fix seeding random module in DataLoader (#7886)
65f8465f6f : Add back cpp_build tests for Mac (#7810)
a0480adc79 : Fix file extension (#7852)
1ce7ed2895 : fix slack email link
f7458faf98 : Only add BUILD_ATEN/USE_ATEN once to flags (#7845)
5c1fcea5db : [auto] Update onnx to 7361eec - Fix Operator Tests (onnx/onnx#1044) https://github.com/onnx/onnx/commit/7361eec59a2c0589125c0c5a4a90b43593b965c7
215fe057ea : No Default argument to max_unpool functions (Fixes #7327) (#7388)
49f8581745 : Update from facebook (#7855)
9f21ec7ca2 : Add spaces to indexing error message (#7922)
637a044a24 : Add missing ${generated_comment} (#7920)
bc8a92d03d : Move REGISTER_CUDA_HOOKS to cpp file. (#7630)
fb59ce32c8 : [auto] Update onnx to 385523b - Eliminate unused initializer (onnx/onnx#860) https://github.com/onnx/onnx/commit/385523bf1c8dd3e195ef7d5630b8d5083550835e
6dfadfeb89 : Revert "Fix error when setting multiple arch in TORCH_CUDA_ARCH_LIST" (#7914)
42a68749bf : einsum: don't inplace modify arguments (fixes: #7763) (#7765)
fb23e62797 : Remove templatization of PyTypeObject in THP copy storage methods. (#7811)
8b85b8afd7 : Avoid @generated in templates. (#7858)
07f55ae568 : [auto] Update onnx to e570127 - update version (onnx/onnx#1034) (onnx/onnx#1041) https://github.com/onnx/onnx/commit/e5701271f05ce412edf646fed504ad8f4bcb0eca
c122d271a8 : [caffe2][nomnigraph] Default disable optimization passes (#7741)
45cdb63d8b : Fix error when setting multiple arch in TORCH_CUDA_ARCH_LIST (#7879)
07a0482d80 : Make size_average docs clearer (#7829)
7cd1ea8166 : Make TensorMethods (fastGetSet) not depend on data type of Storage. (#7859)
5e50993be7 : Better type checking for pack_padded_sequence symbolic (#7874)
af3d0e20a0 : [ONNX] Fix transpose fusion logic (#7872)
f0dc40f77e : Fix typo (#7876)
fece8787d9 : [auto] Update onnx to 789efb1 - update proto files. (onnx/onnx#1040) https://github.com/onnx/onnx/commit/789efb166d16b00c498cd789cc19c1c005e0e3aa
d8101e8410 : [Caffe2] Fix roi_align_op_gpu_test and test_layer_norm_grad_op (#7875)
5c8d48c457 : Properly pass xml report flags to ATen tests in Caffe2 builds (#7863)
06d5dd088d : [auto] Update onnx to ec3b679 - Re-enable mypy, Fix releasing from Windows (onnx/onnx#1037) https://github.com/onnx/onnx/commit/ec3b6797b71663a42d5fe909845e4e25f246d43d
74246c9ba4 : Potential fix for RNN test on MKL (#7862)
aae0ad58f3 : Fix onnx integration tests build issues (#7856)
14f8cd7e3d : [JIT][script] Implement nn.Sequential that can be inlined into script modules (#7747)
c5b623e5d1 : Use __float2half (#7850)
d5c466e5ce : RNN export: add transpose to match onnx spec (#7825)
e6488bbd01 : add jit/passes/onnx CODEOWNERS line (#7853)
d2f98fcae9 : Fix perf commits (#7848)
b1d03b795a : add launch bounds to im2col and col2im (#7779)
0f7f27a843 : fix typo from #7399 (#7846)
bed0ec3b21 : Add missing trailing underscores
8d0622ca9d : re-fix 9.2 build (#7828)
fb5cc630f6 : Fix me (#7837)
d7c32df67f : move Subset, random_split to data, use sequence in some places. (#7816)
ce1a65b5c2 : [auto] Update onnx to 94dbb76 - Fix comma in Gemm description (onnx/onnx#1032) (onnx/onnx#1035) https://github.com/onnx/onnx/commit/94dbb76747067c91a7eaa3980939b7bbe237e9a4
93b7b5dddd : Fix trigonometric_op_test failures when running in python3.6 (#7831)
dbac3d21f6 : [auto] Update onnx to b18cbd3 - remove mypy which blocks release. (onnx/onnx#1031) https://github.com/onnx/onnx/commit/b18cbd336403fce94b637207bec4c8f5f4c7f489
28b1a3852c : Add backward() to Tensor and Variable (#7774)
147cc05cf5 : [auto] Update onnx to 8236f49 - Kezhan/update manifest (onnx/onnx#1029) https://github.com/onnx/onnx/commit/8236f49124dc0a37b8c33da2b8f962183c2c393e
144c5d1ff3 : Overwrite INTEL_MKL_DIR correctly (#7824)
2271e7d7ab : onnx->caffe2 output: better handling of init/pred splitting (#7820)
71bad33cc4 : Match parenthesis (#7797)
b12164005f : [C++ API] Remove virtual forward and implement Sequential based on Any(Module) (#7508)
1078491502 : Change is_tensor to isinstance(*, torch.Tensor) (#7814)
0fddfe6c21 : [auto] Update onnx to f8aa447 - update version number (onnx/onnx#1027) https://github.com/onnx/onnx/commit/f8aa447431fbb9bb49d41b715d79775d91f24f3c
9a736f5228 : [auto] Update onnx to 640a4ec - [Easy] Fix the gen_doc.py (onnx/onnx#1024) https://github.com/onnx/onnx/commit/640a4ec5d28db8c30d13e30732b35ed5c26fb746
f88c529d06 : [auto] Update onnx to 5591c95 - Enable non-static schema registration (onnx/onnx#894) https://github.com/onnx/onnx/commit/5591c95f68863d755644d6b4f000c892b680c44c
c946db16ec : [distributions] Always enable grad when calculating lazy_property (#7708)
4bf0202cac : [build] Have PyTorch depend on minimal libcaffe2.so instead of libATen.so (#7399)
f9633b9542 : [Caffe2] Skip some tests to unbreak CI (#7804)
fdabc02644 : [auto] Update onnx to 9e6e7e4 - Support opset_version in run_node (onnx/onnx#1022) https://github.com/onnx/onnx/commit/9e6e7e4282ca4cd53eef7a51f90760c45dfbf2a3
cfd70dc1cf : [C++ API] Back to reset() and fixed in-place cloning (#7796)
6df371ba2f : [auto] Update onnx to 9a37d4d - Add PRelu test cases (onnx/onnx#580) https://github.com/onnx/onnx/commit/9a37d4daf537532ffbd94af896ee58a70dd72fd3
43d87afdc2 : [auto] Update onnx to d2a46da - fix gru, rnn, lstm test cases to match the specification and add some cases (onnx/onnx#920) https://github.com/onnx/onnx/commit/d2a46da13b0b0e9d3ed7a1da34e015b4bc8fe0de
1289fc870d : Disable onnx backend node tests with broadcasting (#7730)
966c65859d : Revert "[Caffe2] Enabling AMD GPU Backend for Caffe2" (#7802)
9c679dab5f : [auto] Update onnx to 4898c9e - Added TensorDenotation and metadata_props for images (onnx/onnx#879) https://github.com/onnx/onnx/commit/4898c9e925b57ad4c58518515b98de66966ad3b1
14ad2e74f1 : Add BiasCHW fallback for GPU (#7738)
2ebcf4bb37 : [Caffe2] Enabling AMD GPU Backend for Caffe2 (#7566)
4352eab367 : Call grad_mode.py context managers as decorators (#7737)
aa214a8b8c : catch CPU tensors in checkSameGPU (fixes #7689) (#7767)
0e9613cc49 : Mark stack as non-executable in NNPACK (#7752)
1feb1a9b88 : small fixes in fusion_compiler (#7776)
7d0de4f138 : Run clang-format on c10d (#7791)
42134ee799 : Allow empty storage for the 'Edge' class. (#7595)
ee5e474fcf : Process group base class and Gloo implementation (#7628)
5a3f7810f8 : _LRSchedulers getstate include optimizer info (#7757)
e3e15b5d95 : [PyTorch] [gradcheck] change backward() to grad() (#7710)
6a604f16cc : Update test_nn.py (#7787)
60d5c0eb19 : Define general default scheduler for TBB and fix ppc64le bug (#7761)
2222fc7666 : Add support for accepting Tensor as input in clip_grad_* functions. (#7769)
5316cad5c2 : [Easy] Remove unused code (#7782)
85e9ae20e5 : Update tbb (#7734)
f534339a1a : Add @generated annotation (#7780)
ee628d64b9 : fix legacy comment after variable tensor merge (#7771)
60745b3380 : Revert #7750 and #7762 to fix Windows CI on master (#7772)
8d91a602cc : Temporarily disable build env check (#7768)
ea27c5af50 : Add missing brace (#7762)
1e2762796f : [C++ API] Add backward() to Tensor and Variable (#7750)
e5b830eb0e : [auto] Update onnx to d43b550 - Fix .gitignore and add missing files (onnx/onnx#1005) https://github.com/onnx/onnx/commit/d43b55087da6acb0f3fafe26d33b91011b767fda
bb15a0830d : [auto] Update onnx to ea1aa13 - add tests for reduce ops (onnx/onnx#675) https://github.com/onnx/onnx/commit/ea1aa139b208b92f34a1501a0406705d83399897
bb34887ae3 : include cudnn.h (#7749)
549b4069bb : [C++ API] Using new registration mechanism (#7663)
312ab535ba : [auto] Update onnx to 5dd68e6 - Add a util function: polish_model (onnx/onnx#1000) https://github.com/onnx/onnx/commit/5dd68e634bb8a77a0bae051468e40171e5348478
8275e430b0 : [auto] Update onnx to 169b156 - Add more missing type hints (onnx/onnx#991) https://github.com/onnx/onnx/commit/169b1561e9ce51fcb290a8341806267fcd2c3e7a
f01be11efd : [auto] Update onnx to b3b3b28 - Enable checking for functions that don't have a type hint (onnx/onnx#989) https://github.com/onnx/onnx/commit/b3b3b2851a8eeb355e4d8654530b7d48361ca4d5
c5ffc3a02c : [auto] Update onnx to 9f9316a - Catch up with type hints (onnx/onnx#988) https://github.com/onnx/onnx/commit/9f9316a5e2d3ffa5e9af339c00348e6dfd52f3e9
d02b7ab389 : [auto] Update onnx to c168303 - Better error message if protoc isn't found (onnx/onnx#1004) https://github.com/onnx/onnx/commit/c16830303166e442b1b77b3b3b6b4b97c258dd9a
9506eeb73a : [auto] Update onnx to 52f7528 - add more shape inference tests (onnx/onnx#971) https://github.com/onnx/onnx/commit/52f75285ad6795684944ff6ed8b93b9aa9068879
286cd04a20 : JIT cleanup (#7631)
e6f7e1807d : fix to build sleef when using cmake 3.11.1 (#7679)
5ee5537b98 : Fix typo in document (#7725)
28b592e00b : [auto] Update onnx to 6f4b1b1 - Tests for Gemm operator (onnx/onnx#885) https://github.com/onnx/onnx/commit/6f4b1b12e56cea736a31411c5f151305f211a729
987b52460d : [auto] Update onnx to c6c6aad - Enhance the 1-element broadcast case (onnx/onnx#902) https://github.com/onnx/onnx/commit/c6c6aad41654b86d6fefa75bb90c4762a2b03fe1
b4ae80d459 : serialization for torch.device (#7713)
ee6e3fe301 : Fix compile flags for MSVC (#7703)
0a11018db6 : Fix exporting Sum to onnx (#7685)
a890a0be07 : Rename ZFNet to ZFNet512 (#7723)
75cf0faf4c : Implement __reduce__ for torch.dtype (#7699)
5000a05724 : Remove unnecessary include in vec256_float.h (#7711)
f94ae3ba1d : Update from facebook (#7696)
2cb096ada8 : fix for cuda 9.2 builds (#7709)
42e5e12750 : make BatchSampler subclass of Sampler, and expose (#7707)
cf9b80720d : Don't emit warning for ABI incompatibility when PyTorch was built from source (#7681)
8f97cbcf4e : remove index from python bindings (fixes: #7639) (#7690)
ee882eae8e : Update _torch_docs.py (#7700)
48bf733480 : Changes from D7881937 and D7963936 plus an edit (#7605)
77fe4bd0b6 : [auto] Update onnx to 241a350 - Type and shape inference for RNN, LSTM, GRU (onnx/onnx#937) https://github.com/onnx/onnx/commit/241a350272ff7babce154ef58033ad9424e6bfd5
f7d96a367b : Update NNPACK and cpuinfo submodules (#7691)
ec71c689fc : [JIT][script] Add matmul(@), pow(**) operator (#7648)
27ea7148fe : Updates to .clang-format (#7683)
2f5494ac14 : [auto] Update onnx to a75fa2c - fix customized version of find Protobuf for Error when calling find_package(Protobuf) twice (onnx/onnx#901) https://github.com/onnx/onnx/commit/a75fa2c4027436d6c4d29737f29532f10beef09a
875a5dceb0 : [auto] Update onnx to 55fff7b - python setup.py typecheck (onnx/onnx#972) https://github.com/onnx/onnx/commit/55fff7b796d6c51a8b7837fd2d9cac224bc713fa
4f20a0e439 : Fix various sparse transpose issues; remove dead code from Declaratio… (#7200)
7abdc303c6 : Don't allow requires_grad to be set on integer Tensor constructors in… (#7185)
431c80a128 : Guard sleef for AVX/AVX2 (#7678)
cf0c585b6a : [auto] Update onnx to e050bcc - add multinomial op to ONNX (onnx/onnx#897) https://github.com/onnx/onnx/commit/e050bccacbfafe62882a7ef2870198d80976cc82
32b23a4bfc : Throw error on tensor creation when sequence shape cannot be determined (#7583)
e37da05bd5 : Expose documentation for random_split (#7676)
8212f576db : improve RNN docs (fixes #3587) (#7669)
f7bc7007d4 : return nan in max_pool/adaptive_max_pool for nan args (#7645) (#7670)
bf95dff85b : Map digamma +/-inf results to nan in test (fixes #7651) (#7665)
50d8473ccc : Document dtype arg for reduce ops (#7654)
c46a0c8813 : add back Tensor.permute docs (#7652)
56e7a2cde1 : Better support for adding zero-filled sparse tensors (#7479)
f12b8770cd : use matching tp_name for torch.device (#7673)
c58893eb9e : [auto] Update onnx to 59b0b24 - Clarified description of Pad attribute (onnx/onnx#962) https://github.com/onnx/onnx/commit/59b0b24643320568b75c863afb4245650762d3a4
06fa332e2b : Fix UB when converting negative floating values to uint8_t (#7644)
4dd0aab33c : [auto] Update onnx to 3fc5f43 - move finalize function to be public. (onnx/onnx#987) https://github.com/onnx/onnx/commit/3fc5f43e913cf8b7304249d95dcb8ace7c4c933c
47ab3f936b : [caffe2] Fix warning in net_async_tracing.cc (#7646)
ca860907bb : [auto] Update onnx to 8d548e2 - Update shape inference methods to throw exception (onnx/onnx#986) https://github.com/onnx/onnx/commit/8d548e23617ec33a84ebdbc03abb4542640403d1
bc4feab3e3 : Fix flaky atomic iter test (#7649)
5207998fc3 : Fix onnx Pow export (#7657)
93f8d98027 : [auto] Update onnx to 8356ad5 - Add unit test framework for the project C++ APIs (onnx/onnx#763) https://github.com/onnx/onnx/commit/8356ad54e9173cfb8dfae6d25a67278e6d48bbb2
2d313276b2 : [caffe2][nomnigraph] Add registry for optimization passes (#7656)
8c0299b5e6 : [auto] Update onnx to 94ca052 - Update mypy version (onnx/onnx#968) https://github.com/onnx/onnx/commit/94ca05244795219c196f5b43945a7ed5371a1cdc
d4f6c84041 : fix nccl distributed documentation
f2295494af : Makes AccumulateGrad high priority in backwards passes (#7604)
cba19e59ca : [C++ API] Implement builder style construction (#7597)
0d27d2686c : C10D: Added TCPStore to support C10D store interface (#7560)
ec42a11410 : [auto] Update onnx to ba86ec2 - Protobuf typing (onnx/onnx#982) https://github.com/onnx/onnx/commit/ba86ec268205dd4a8cd29c799904f6fac3da74f2
562d9971c9 : Add LBFGS optimization algorithm to C++ API (#7596)
84730aa659 : support <= and >= (#7633)
f7f95f1742 : Reduce gen_jit_dispatch options (#7562)
331a04d8eb : [auto] Update onnx to 321d874 - update output shape of RNN ops according to ONNX spec (onnx/onnx#923) https://github.com/onnx/onnx/commit/321d87457ff2b11691e3d106503c0eb41dbe8a90
77e8a23a29 : [auto] Update onnx to a8b3316 - add exception mechanism for use in type and shape inference (onnx/onnx#983) https://github.com/onnx/onnx/commit/a8b3316cff2cc618fe336a1ea16d9462c111fb60
9a1a20cb33 : [auto] Update onnx to 13196bf - Shape inference for ConvTranspose (onnx/onnx#973) https://github.com/onnx/onnx/commit/13196bf40bfc351c906094aa1ea29ac7d2f44ce0
b4d5e67e5f : Add asin, acos, tan, atan operators (#7600)
221e615665 : Move bernoulli further into ATen (#7578)
330a72581f : Update README to contain instructions on how to install mkldnn for Linux (#7625)
3c9ded098d : [auto] Update onnx to 83f3666 - Spec clarity: Versioning (onnx/onnx#931) https://github.com/onnx/onnx/commit/83f366619e6b0258d45cc07fb86e64b58e6eb0b9
3238db6247 : Show skipped distributed tests as skipped (#7624)
8f42bb65b3 : Be more lenient w.r.t. flag processing in C++ extensions (#7621)
f87091636d : Update .gitignore (#7622)
8f6f43f5cf : Fix rocm docker images environment variables round 2 (#7626)
599d0fac93 : Reduce MAX_JOBS for gcc 7.2 build (#7618)
64cb4fb13d : Add ChannelShuffle to IDEEP fallback (#7623)
c3a02fd8ed : Conditionalize all of conv_op_eigen on version (#7581)
b45f2ff1ae : Remove CompiledFunction + clean up JIT tests (#7421)
28b0b16f9b : [auto] Update onnx to 01745b2 - Update README.md (onnx/onnx#976) https://github.com/onnx/onnx/commit/01745b28fa7af3ad59badc3a7830e47b30fb9148
c425d0350b : Patches needed for sync, rebased (#7564)
7bc3414f8f : fix caffe build failed with -O0 (#7570)
c5b9a36f1e : Make return uniform in lbfgs step (#7586)
9213336c73 : fix cmake USE_ASAN (#7608)
6eec4118a3 : Fix python3.6 build in caffe2 CI (#7602)
ba44231cbc : [auto] Update onnx to 3a14d83 - Improve LRN doc (onnx/onnx#965) https://github.com/onnx/onnx/commit/3a14d8397459c82db104bc24cd92ff459a82b7c6
86b1e230c7 : [auto] Update onnx to 061af05 - Print protobuf type stubs warning to stderr (onnx/onnx#979) https://github.com/onnx/onnx/commit/061af05f45f5cb72e42fb43e93fee81369033e07
ed458fd311 : Fix environment variables in rocm docker images (#7598)
9213b3f739 : [caffe2] Fix linking of Android unit tests (#7607)
0493d49afa : [auto] Update onnx to 63234db - remove fc op. (onnx/onnx#977) https://github.com/onnx/onnx/commit/63234dbae66631462cb23f19b7228def5b371363
c76da6494b : Drop support for MAGMA v1 (#7582)
be145e4f5b : [auto] Update onnx to 0524595 - Do not generate protobuf python type stubs if protobuf python package is not installed (onnx/onnx#974) https://github.com/onnx/onnx/commit/052459560d85a4d6cb2d65dec8e0092823e851db
56fa6ec66a : [caffe2] Change iteritems in trt/transform.py to items for python3 compatibility (#7599)
c187a5d79e : Resolve the performance issue on ConvFusion Op (#7584)
cd86d4c554 : PyTorch AMD Build Scripts (#6625)
2de1b4488f : Run sccache in background mode and save logs to file (#7594)
4b6c884b99 : [caffe2][nomnigraph] Add optimize function to opt:: namespace that takes in a level and optimizes the graph/workspace accordingly. Adding it to predictor and speed_benchmark arguments (#7558)
469c6c88a3 : [auto] Update onnx to dc07e0f - Extend Concat/Gather/Squeeze/UnSqueeze to accept any tensor type (onnx/onnx#957) https://github.com/onnx/onnx/commit/dc07e0fb2f31c6b1f27d3364b0dd4248a5e6ac7d
9211790049 : [caffe2] Include <array> in fatal_signal_asan_no_sig_test (#7592)
0df84d7ec7 : [auto] Update onnx to 21b56ad - mypy info (onnx/onnx#970) https://github.com/onnx/onnx/commit/21b56ada785a02afd6e094dc945fce63da5e3add
79b9bbe60f : [caffe2] Use caffe2::stod in lexer (#7591)
be019e4429 : [auto] Update onnx to 76a288f - add script to count shape inference implementations (onnx/onnx#967) https://github.com/onnx/onnx/commit/76a288f09839cb0aed2d1abb129e987fb58f65a8
3af3d13599 : Run onnx integration tests in caffe2 CI (#7565)
e65d6de16a : [auto] Update onnx to 3f80231 - Add type hints to numpy_helper_test.py (onnx/onnx#951) https://github.com/onnx/onnx/commit/3f80231786d7f1e929a22b07bdd9c7df6a7d2df4
37f5b147fc : [auto] Update onnx to 037cfaa - Add type hints to test_backend_test.py (onnx/onnx#954) https://github.com/onnx/onnx/commit/037cfaa015115dbb1fae5173c2c23c2708d6a325
996886137a : Add link to TensorFlow Distributions paper (#7563)
5748cc43ce : [auto] Update onnx to c918b4b - Add type hints to basic_test.py (onnx/onnx#947) https://github.com/onnx/onnx/commit/c918b4be91990e36ffa03483d511582886d91e6c
d971782a03 : Change code owners for onnx integration tests (#7587)
efb7dead9d : Squelch -Werror=non-virtual-dtor (#7554)
4251e38eb3 : [auto] Update onnx to b265987 - Add type hints to helper_test.py (onnx/onnx#950) https://github.com/onnx/onnx/commit/b26598714c830e709a2b61ee3bf0b7afa50d89bc
a52eb24c42 : [auto] Update onnx to bb4d582 - Add type hints to relu_test.py (onnx/onnx#952) https://github.com/onnx/onnx/commit/bb4d5827cfb0c007cd70b3db4c385ca97d796748
be7c5e573e : [auto] Update onnx to 533a84c - Add type hints to elu_test.py (onnx/onnx#949) https://github.com/onnx/onnx/commit/533a84c3ca6ff852abd88e0af4d14cd5cc1d9346
f007392522 : [auto] Update onnx to a659ab9 - Add type hints to schema_test.py (onnx/onnx#953) https://github.com/onnx/onnx/commit/a659ab90cc06a9a78852bfdf79f0892edf1e2da3
dbf77ef7a7 : [auto] Update onnx to 28a8849 - Add type hints to onnx/test/optimizer_test.py (onnx/onnx#955) https://github.com/onnx/onnx/commit/28a884912755da6ea161b7267a0bf49629643ad1
fb314ee150 : [auto] Update onnx to 65f1811 - Fix a type error in lstm test case (onnx/onnx#959) https://github.com/onnx/onnx/commit/65f1811d2d56d15d5c8b618d4267bf54d33fd539
08415c42af : Replace std::to_string with caffe2::to_string in nomnigraph (#7561)
e1148db7f2 : Implement logsumexp (fixes #2591) (#7254)
05853945a4 : Vectorize softmax and logsoftmax (#7375)
44a10f2a98 : Removing arch 20 + 21 (#7512)
4d35a40f3b : Better logging for sccache compilation failure (#7555)
3414475653 : [C++ API] Remove initialize_* functions (#7517)
bf9676180f : Update the name of env var for triggering integrated conda build (#7557)
1666b54068 : [auto] Update onnx to ac970c9 - update onnx model tests for rnn/lstm/gru (onnx/onnx#960) https://github.com/onnx/onnx/commit/ac970c9dcbe4a36a955f0bb1ed00282f340ea518
284f13b814 : make sure that pytorch and caffe2 usage lines up with onnx rnn spec (#7511)
ce69d3110b : Improve script builtin checking using schema (#7311)
1f08000562 : return value of LSTM example fixed. (#7534)
61afbbbd18 : clamping the return value of uniform.cdf() to [0..1] (#7538)
bccb727b65 : Remove wrong "input" arg from scatter_() docstring (#7550)
a9a44faf03 : [auto] Update onnx to 310b44c - Add tools for generating c++ code test coverage (onnx/onnx#938) https://github.com/onnx/onnx/commit/310b44c800fa98a2976211b52fe816e70086d00c
cf9913d569 : Install torchvision before running integration tests (#7552)
4af63916cd : Set up Caffe2 CUDA builds to use sccache (#7547)
56a63459b6 : [auto] Update onnx to 330fd0f - shape inference for TopK and trigonometric functions (onnx/onnx#946) https://github.com/onnx/onnx/commit/330fd0f73e490a75149f5a9c8a8bf78c702ef968
169e91c530 : [auto] Update onnx to 8ff5fdb - fix def of gru version 1 (onnx/onnx#945) https://github.com/onnx/onnx/commit/8ff5fdbe26932953a806cac7ad8633b48b83334f
fc23885105 : Fixes reductions where accum type != type and simplifies all reductions (#7487)
d0287eca94 : [auto] Update onnx to c50f329 - Adding shape inferences for GlobalMaxPool, GlobalAveragePool, and GlobalLpPool" (onnx/onnx#943) https://github.com/onnx/onnx/commit/c50f329dcde038aa364082e0942764d36fcd1448
63ae163b24 : put dropout states on the input device (#7515)
1ce5431aaf : Documentation improvements (#7537)
8f64f918f7 : [auto] Update onnx to 0a6076e - Fix the opset version in backend tests (onnx/onnx#944) https://github.com/onnx/onnx/commit/0a6076eae6498ce33ad79a62c5fb03889f7f1d2c
c84fdda582 : Skip onnx backend tests for inverse trigonometric ops (#7533)
a3b2877810 : Fix CUDA builds. (#7529)
825c3ca2d6 : [auto] Update onnx to 4e98b03 - add trigonometric functions (onnx/onnx#869) https://github.com/onnx/onnx/commit/4e98b038d15b2f46fead975eb2acd1827f5f2a45
f529b85035 : [auto] Update onnx to 0bd3f78 - Add shape inference for LpPool, RoiPool, and fix MaxPool, AveragePool, and Conv (onnx/onnx#928) https://github.com/onnx/onnx/commit/0bd3f78bf493d991874fa483453a59b85b726763
5336ea4195 : Work around Python nightly regression. (#7526)
2bb38ba700 : Built-in support for rebuilding in win-build.sh (#7442)
ac52f1186a : [minor] change dockerfile to point to pytorch channel (#6960)
37b9d093d2 : Updates collapseDims() function and documentation (#7056)
cfc1d92975 : Implement ellipses ('...') and diagonals (e.g. 'ii->i') in einsum. (#7173)
7edd451a4e : Improve spectral_norm (fixes #7261) (#7298)
cf9751207e : Allow building Caffe2 with ATen support (Addresses #7249) (#7297)
eaa3f2e613 : Fix advanced indexing with negative indices (#7345)
2ac34b98ea : [auto] Update onnx to 490c4c6 - fix build dependency between onnx-operators.proto and (onnx/onnx#934) https://github.com/onnx/onnx/commit/490c4c6ca99bceed0499bab3535b43917dea0537
976b1d5ec1 : Don't initialize the current device in CUDAGenerator::CUDAGenerator (#7392)
acb6f2697e : Some notes about developing on Windows (#7447)
03767b66db : Add FileNotFoundError to torch._six (#7524)
921dece2d7 : Update Im2ColNd functions (#7505)
db6e4576da : Use customized python interpreter (#7520)
0337d6708c : Use SLEEF's tanh (#7513)
ed3b12e1ba : [Caffe2] Ideep net optimization passes (#7514)
580556dd60 : [auto] Update onnx to 25b8845 - Extend AveragePool to support average count include padding (onnx/onnx#884) https://github.com/onnx/onnx/commit/25b8845a142b29389d33abaef539c66202d6897b
6ada041b31 : Some small fixes in C++ API (#7510)
aced37a633 : [auto] Update onnx to 7c8b3d2 - [Typing 4/5] Add type hints to onnx/backend (onnx/onnx#913) https://github.com/onnx/onnx/commit/7c8b3d2c7542dcbc87a21e5fdf7474d310d175c1
141d81d095 : Move ONNX integration tests from onnx-fb-universe to PyTorch repo (#7397)
b3f0ab3726 : rnn onnx export: consolidate rnn/gru/lstm (#7506)
2863d935b9 : [Caffe2] Fix of the performance issue of IDEEP (#7503)
38bc732b2d : [jit] Change interpreter/fuser to work on Variables only (#7489)
dc0faab18d : Add zeros_ and ones_ init + tests (#7488)
5f96a2d26a : Add sparse gradient option to pretrained embedding (#7492)
857e3f4a5e : Throw error in tensor constructor when numpy strides mismatch (#7440)
b875fb281c : Update from facebook (#7451)
947155c69d : [auto] Update onnx to b2539fc - Shape and type inference for Flatten, SpaceToDepth, DepthToSpace (onnx/onnx#930) https://github.com/onnx/onnx/commit/b2539fca837b46ace15681a5000e25b98f4c10ea
f8b5d420a4 : Fix Caffe2 build with ATen CPU/GPU split (#7486)
75f549bbef : [auto] Update onnx to 9dd2533 - Changes done internally at Facebook (onnx/onnx#909) https://github.com/onnx/onnx/commit/9dd2533ee3d7612412c39c830e4f1a2bb363b80a
d5e77fb058 : Port interface of store base class from Caffe2 (#7439)
6547245f1f : Add return value to setup() function of PipedReaderBuilder (#7476)
6c7a8318c4 : Fix Tensor.type(dtype) not preserving device (#7474)
43264c3c30 : add cast to ensure correct type for sequence lens argument (#7483)
c489c6a1da : Skip upsample onnx backend test (#7477)
a2a4b229cc : [caffe2][nomnigraph] Make conv relu fusion more generic (#7437)
9fa1dff66a : Allow the use of torch.device for loading (#7339)
b6adf6871c : EmbeddingBag to handle empty bags in all modes (#7389)
3f029224cd : hotfix: update cmake version for Linux CUDA9 builds (#7478)
9789602814 : Fix excess ']' in nn.utils.rnn.pack_sequence (#7475)
93eb50c103 : Mark expand nodes as implicit/explicit in trace (#7303)
c3918da523 : [auto] Update onnx to 008a805 - update some model files (onnx/onnx#926) https://github.com/onnx/onnx/commit/008a8054fdddf7ed0209d27f3f3c1aea87f44b8a
20041e2704 : better cache for nccl resource (#6970)
64834f6fb8 : Split libATen.so into libATen_cpu.so and libATen_cuda.so (#7275)
ea98256e96 : Bug check_unique fix for jit (#7468)
b5a1eda7d3 : guard dynamic sizes expand from peephole passes (#7436)
6a118b21b5 : Set MAX_JOBS to nproc - 1 if using sccache to compile CUDA (#7361)
78c3d8c164 : Adding yaml to docker images for Aten builds (#7430)
c5de3314cf : Add name() to C++ modules (#7409)
ab5c391100 : onnx rnn export: use spec-respecting dimensions (#7394)
d9671ea38e : Fix Caffe2 with ATen build (#7452)
a257bd19a2 : added state_dict/load_state_dict for ReduceLROnPlateau (#7201)
4eaf5261d3 : Provide default implementation of clone() in base module (#7446)
48b7f298f9 : Update NNPACK and cpuinfo submodules to latest master (#7443)
bd8f6bd46a : hotfix: update cmake version for OSX builds (#7456)
3023dd25f3 : Use set_type to implement type conversions in C++ API (#7408)
ed111619da : [ONNX] Allow specifying only a subset of input/output names (#7427)
d9c74f727c : Fix ONNX tutorial specification for input names (#7433)
56077f5661 : Fix CODEOWNERS precedence for ONNX folder (#7429)
23be4ac3a2 : Add clang tidy tooling (#7412)
769397eb77 : [Caffe2] [feature request] Add gradient operators for IDEEP (#7234)
97c5c0b034 : add python library linking on Windows (#7157)
f02ae65727 : skip test_utils.TestFFI.test_cpu for ppc64le due to incompatible exception handling (#7422)
f43e067128 : Make optimizer not complain about parameters with requires_grad=False (#7419)
6fd252ccae : AUTOGRAD_ to TORCH_AUTOGRAD_ for macros (#7424)
537cb10525 : improve DataParallel/DistributedDataParallel docs (#7407)
ef477b2b00 : [auto] Update onnx to 72e15ac - [Typing 2/5] Add type hints to onnx/defs (onnx/onnx#911) https://github.com/onnx/onnx/commit/72e15ac46f0503afb3111f4ffae78295ff6a59fe
dca540e455 : [auto] Update onnx to ee7f97c - Add type hints to onnx/tools (onnx/onnx#910) https://github.com/onnx/onnx/commit/ee7f97c2b16e7e770792a01d17939ea52412df03
af23ab9b3e : Make nomnigraph a public dependency of caffe2 main lib (#7402)
5c2015d133 : onnx werror is now opt in (#7390)
8dbeffab07 : Add back SLEEF and also use better cmake setup. (#7341)
7911a30081 : Move #endif below magma source (#7400)
92d02a46dd : Don't do CSE on nodes with blocks (#7363)
b1fbf29b52 : [caffe2][nomnigraph] Change the standard transform API to take in NNModule rather than NetDef (#7308)
dc3252730e : Fixing conda builds by removing unneeded python args (#7384)
3913e9ead3 : [caffe2][nomnigraph] Batchnorm + Conv Fusion (#7057)
3185d8342e : Replace incorrect usages of "NotImplemented" (#7381)
755d3105b6 : Fix MultiMarginLoss equation in docs (#7383)
e3935f7509 : [Caffe2] Add conv+relu fusion for MKLDNN ops (IDEEP) (#7385)
8c8918c341 : make half overflow checks consistent with other types (#7382)
8f27582194 : [auto] Update onnx to dee6d89 - make werror opt-in (onnx/onnx#908) https://github.com/onnx/onnx/commit/dee6d89781fb885d1ed3b935992b27338befe1cd
71626491c4 : Add batched linear solver to torch.gesv() (#6100)
f598ef9102 : Add CI docker image for rocm builds (#7349)
7b66c433bc : Use a CI specific onnx namespace to catch hardcoded ones in the code (#7369)
de470d1222 : Small fix needed to build Caffe2 Aten without CUDA (#7387)
fea95de854 : Add aten::expand to the isDifferentiable list (#7350)
913e145340 : Removes -2 special case and specialization from pointwise apply (#7366)
4adba42a75 : [easy] minor cleanup in caffe2 jenkins test script (#7378)
9396740406 : Updating condas to build for all CUDA archs (#7379)
67e7c24479 : Add note about thread-safety of registry (#7285)
24b41da795 : [build] Make ATen buildable without all Caffe2 by root cmake (#7295)
0aebddd476 : [auto] Update onnx to 522c055 - version bump to 7 (onnx/onnx#876) https://github.com/onnx/onnx/commit/522c05566e5003b40a2138afcfbbfd25a8a44622
e9f6f14555 : [Caffe2] Revamp the convnet benchmark code by using models from model zoo (#7351)
2cb26bcd40 : Fix type in TensorRT tests (#7357)
75dbf9b113 : [caffe2][build] Update python cmake flag print script (#7306)
79a4d27232 : Correct the parameter annotation (#7367)
f439ba5843 : [Caffe2][nomnigraph] Generic fuse conv relu pass for nomnigraph (#7355)
f3c8bd598d : [Caffe2] Pinning conda-numpy to 1.14 to avoid SVD issue (#7344)
75651c199f : fix build (#7348)
b6adecdeee : correct schema.Scalar's shape for a shape argument of 1 (#6493)
e7116d95e0 : Create README.md (#7360)
ea24c7ff1b : Remove cdft library requirement from MKL (#7246)
ed6f79ccd2 : [caffe2][build] Add ASAN to the debug release of caffe2 (#7107)
edbfe02941 : [auto] Update onnx to ea0e0cb - remove whitespace and semicolon (onnx/onnx#904) https://github.com/onnx/onnx/commit/ea0e0cb13f9a06f619720ce0479c7356ebad4500
3642745ef9 : [caffe2][nomnigraph] Add maxpool sink transform (#7207)
8fce8673bb : Rename Container to Module in autogradpp and reorg code (#7304)
5146bc99e4 : [auto] Update onnx to 328ed3e - shape inference for logical ops (onnx/onnx#899) https://github.com/onnx/onnx/commit/328ed3e679d67c6f9e9efe134fb9dcf17f7fb51f
2fdc00e41c : Use sccache for Windows build (#7331)
f1e38725bf : add `to` method for PackedSequence (#7319)
c68ae308cd : [auto] Update onnx to d05b6b4 - Just don't output opset_version in the example then. (onnx/onnx#887) https://github.com/onnx/onnx/commit/d05b6b46f8e61c91f56d900994b19b72466d7ae6
4f48b7c1ba : [auto] Update onnx to 5be6d86 - fix typos in documentation (onnx/onnx#896) https://github.com/onnx/onnx/commit/5be6d8665425dfeff6d7eee2f6918f9f0d560074
bebccc0c6d : Improve math formula rendering in Poisson Distribution docs. (#7340)
4c511075c3 : [auto] Update onnx to 6fa9f1a - promote identity op given it's being used. (#892) https://github.com/onnx/onnx/commit/6fa9f1a58b390d6a3160baaa37c464c30b78f0a9
f9b83f2e6c : [auto] Update onnx to c0fb725 - Spec clarity: IR.md modifications. (#720) https://github.com/onnx/onnx/commit/c0fb725b64563e98b0365778257823e5a4bba8da
56daed0a85 : copy paste documentation error fixed in Softmin (#7324)
54a4867675 : Bring back C++ extension torch.h (#7310)
6087a5feaa : [auto] Update onnx to b0ab0d1 - function registration c++ API (#848) https://github.com/onnx/onnx/commit/b0ab0d1d1562733029c4e2613e82fef2f999ee2e
94b74d2068 : [auto] Update onnx to ceb259c - Tests for ReduceLogSum (#862) https://github.com/onnx/onnx/commit/ceb259c903aa5dfe99ff4ab529ac0d773f5cddab
0859f0e3e6 : Pinning numpy version in conda builds (#7314)
1f14d681dd : [auto] Update onnx to 1c600f8 - Lint the code and fix the CI (#895) https://github.com/onnx/onnx/commit/1c600f802d2569bfa95cbfd36da4527daa377047
ea12702e02 : [auto] Update onnx to 278ef5b - inference for math ops (#893) https://github.com/onnx/onnx/commit/278ef5bc9c7aed99b94ec0889cd09bd697667e82
56ed857f1b : [auto] Update onnx to f708d41 - type and shape inference for experimental ops (#890) https://github.com/onnx/onnx/commit/f708d41fea187f05393c2b15885e396c268641a5
f06fcc6efa : Fix bug that was introduced in pull #3280 (#7292)
e1c7e6dce2 : [auto] Update onnx to 38eea57 - add ONNX_NO_WERROR as option (#891) https://github.com/onnx/onnx/commit/38eea57313d5ffdf1aea39b83cee0cad28435bc3
67a9948d87 : Refactor rnn export (#7263)
55b8317f1d : Update gif with new logo (#7301)
24681a8e49 : Update unstable docs logo to new logo. (#7305)
feb64b5291 : Add -Wno-unknown-pragmas (#7291)
3369828bfa : Clarify patience in ReduceLROnPlateau docs (#7242)
ac5d7bdf62 : Fix onnx.symbolic.upsample_bilinear2d not considering align_corners (#7264)
0dd2521d4c : Fix ONNX export for AveragePool with count_include_pad=True (#7279)
0259d9c8d3 : Changing underscores to hyphens in conda package names (#7299)
a0c1e5faea : Change the error message in pad_sequence to be more user-friendly (#7283)
36a3f0995b : Remove THDTensorDescriptor_newFromTH{X}Tensor. (#7287)
833b1e6c74 : Skip the test case on ReduceLogSum (#7293)
026cb9d2f1 : set ONNX_NO_WERROR (#7296)
a015d579dd : move softmax/logsoftmax to ATen (#6786)
5c575a1497 : Fixes RNN shapes for C++ API (#7272)
9e3f5bb5fd : enable onnx shape inference when converting onnx -> caffe2 (#7260)
157d7499e7 : Disable two flaky C++ API tests. (#7290)
46d0140d94 : [auto] Update onnx to 541512b - tests for type and shape inference for Random generator ops (#880) https://github.com/onnx/onnx/commit/541512b93afebe450c8b9784598bc204a3bf8a27
4abb229960 : Double-dispatch copy. (#7197)
053b68c4da : Fix USE_ATEN flag in caffe2 (#7252)
67d0d14908 : Rename autograd namespace to torch and change torch.h into python.h (#7267)
bcffb5aa1d : Remove SLEEF and all dependent code paths (#7268)
0829d4502d : Trace size-dependent expressions correctly (#6554)
da654337e0 : Add support for type annotations in Python functions (#7009)
6363faf184 : Fix issue #7209 in DataLoader (#7265)
159c75a2ca : [auto] Update onnx to e35126b - add type inference function for classifier ops. (#882) https://github.com/onnx/onnx/commit/e35126bc4b42b80c3b7f4091efb02e2f5d53dcdd
739d3d48ec : [auto] Update onnx to 7ee7d0b - enable Werror=sign-compare on linux (#867) https://github.com/onnx/onnx/commit/7ee7d0b57a4e292e96f0a6a64021dd169b60d09e
d856bfc1bf : [auto] Update onnx to e35126b - add type inference function for classifier ops. (#882) https://github.com/onnx/onnx/commit/e35126bc4b42b80c3b7f4091efb02e2f5d53dcdd
98c24fae6b : Fix broadcasting error in LogNormal and TransformedDistribution (#7269)
8325206c6f : A clip grad fix for sparse tensors. (#7257)
a95b7b13f9 : Extend support to arbitrary ops in init net when converting c2 models to onnx (#7256)
8091388d0f : Add support for __floordiv__ and __rdiv__ for integral tensors (#7245)
371cc1e2db : update the gif for 0.4 (#7262)
92f54e1f01 : remove static libstdc++ linking and PYTORCH_BINARY_BUILD env variable (#7259)
3ae92b3a8b : Fix lint errors (#7247)
e625ecc41f : [caffe2][nomnigraph] Fix NNPack conv-relu fusion for ping-pong naming, (#7199)
c96f2624a2 : Speedup sparse init (#6899)
4ab6ea5b1f : Add unbuffered flag to distributed node launcher (#7226)
79245306c7 : Fix onnx sum (#7232)
f9393ffc90 : Remove unneeded entry for NCCL in .gitmodules (#7216)
c4078b42b4 : Add docstring for Tensor.tolist (Fixes #7095) (#7182)
6538ae5c16 : clean up runtime dockerfile, use cuda 9 package (#7230)
7c70c3bdca : Fixes for C++ build on macOS (#7192)
1313791015 : Need an explicit flag since opencv is on by default (#7225)
aa38ae303d : [build] Setup to build ATen from root CMake file (#7163)
681baa9254 : Restore warning to torch.range. (#7194)
07513cfd1d : implement sum over multiple dimensions (fixes #2006) (#6152)
e25e501bea : Fix build for osx (#7187)
d154d32890 : Fix to a conda hack (#7212)
8ac6856e54 : Removing features for a sec (#7211)
faef70b5b0 : Fixing a bug in my bug fix (#7210)
a10870a2d1 : [auto] Update onnx to 676e0c7 - Type and shape inference for generator ops (#871) https://github.com/onnx/onnx/commit/676e0c77263a2543305bbaa9f7a4dee79dd93150
83622abd9f : Reroute aten to use the root cmake system (#7188)
1ca6e77615 : Fix to compute_70 error + some more lowercasing (#7205)
93242d320f : fix scale on some tensors (#7189)
a61d4a3374 : [Caffe2] Refactor reduce ops to take flexible input types (#7164)
197412fa8f : Fix typo in comment (#7183)
619a56bf21 : Emergency new fork for ideep (upstream lost commits). (#7191)
88a705555a : Add SLEEF for float and double (#6725)
4d2693973e : [Caffe2] Turning on ATEN for Caffe2 in integrated builds (#7169)
1904058370 : update logos (#7184)
e6330559c8 : [auto] Update onnx to c7055f7 - update defs for reduce, rnn, and tensor depth-space ops (#847) https://github.com/onnx/onnx/commit/c7055f721c8a6ff4de74cd58b213ef8bf5bf312d
604f907bc7 : Restore filename and line number on AT_ASSERT. (#7152)
f07f24db0b : Change unique name so that you are guaranteed: (#7166)
ebebfce681 : Minor THD cleanup (#7161)
414e0b4b6f : Split up CPUApplyUtils for perf (#7168)
664fe34e0a : [Caffe2][fbcode=>GH sync] Update from facebook 4323b18ce13c (#7116)
967c4a0c18 : [caffe2][nomnigraph] Fix NNPACK relu fusion for inplace relu (#7124)
20666feb2c : [caffe2][nomnigraph] Add compatibility for MSVC, which lacks some C++11 language features (#7158)
f3c76b9b78 : Remove specifications from Declarations.cwrap that have no effect and are already handled. (#7147)
a9f2ee0817 : CPUApplyUtils is faster if iterate is split into two steps (#7148)
9ba503ac9c : [caffe2][nomnigraph] Add ability to pass the old net to convertToCaffe2Proto (#7149)
1418cc72d6 : Make refcount in THMapInfo atomic. (#7135)
a5e1d4a049 : Delete dead header (#7153)
08a853b02c : Add rsqrt op in caffe2 (#7154)
a8b059edcc : [auto] Update onnx to 69894f2 - Use op schema.all tensor types in random like definitions (#865) https://github.com/onnx/onnx/commit/69894f207dfcd72d1e70497d387201cec327efbc
762eb3ddc8 : [Caffe2] Add moments op in caffe2 (#7114)
323e3aca47 : A small fix for aten cmake (#7141)
dfe1bae3cd : [caffe2][nomnigraph] Move tests to proper gtest suite (#7046)
bcadf92ad5 : Move codegen from setup.py to CMake for C++ libraries (#7121)
5d3c3c53aa : Add raw IR serialization/deserialization (#6392)
ca8ee4c1e1 : [auto] Update onnx to b9d6b90 - Clarify random like operators (#846) https://github.com/onnx/onnx/commit/b9d6b90a64560d6921e412407799c83013b1135d
2a18e7c45b : Have python dispatch respect 'auto_gpu' and 'with_gil'. (#7137)
8031da5479 : Implement torch.as_tensor, similar to numpy.asarray. (#7109)
1f5b392da0 : [auto] Update onnx to fc6b5fb - Refactor shape inference implementation (#855) https://github.com/onnx/onnx/commit/fc6b5fbb6d8000cb18eb682b37e951c8498520fb
15b12e6f8a : Add support for MKLDNN on Windows (#7130)
7968ee0f59 : Removing references to CUDA_SDK_ROOT_DIR to see if it breaks anything (#7125)
87e6362393 : Add more warnings to C++ API build (#7123)
0427afadd1 : Make AT_ASSERT/AT_ERROR non-printf based, other tweaks (#7104)
24461a756a : Separate "-Xcompiler <...>" into 2 elements because ${nvcc_flags} (when using CUDA_SEPARABLE_COMPILATION) doesn't recognize it. (#7118)
dccfdf317b : Fix example of torch.clamp (#7131)
ba046331e8 : add spectral normalization [pytorch] (#6929)
23a5ddd3c8 : [auto] Update onnx to b7d8dc8 - fix cmake warning message (#863) https://github.com/onnx/onnx/commit/b7d8dc8fa63965148f34e8513d167f5cbfc08914
e8916f510b : [auto] Update onnx to f585c5d - add pytorch-operator test for tile (#831) https://github.com/onnx/onnx/commit/f585c5d06609cfc924b2cee503dbe76a7f7d22e3
c72e5da7eb : [auto] Update onnx to 993fe70 - add install step (#832) https://github.com/onnx/onnx/commit/993fe7080545876e060ba28be018f4bac3d86b5a
5acc62ffa5 : Skip Tile onnx backend to keep CI green (#7120)
892bef9aa3 : [ONNX] Delay external value resolution as long as possible in ONNX backend (#7111)
0b0279981d : Fix example for new_zeros in documentation (#7128)
531944275c : [Caffe2] Guard CUDA API calls in caffe2/operators using macro CUDA_CHECK (#6810)
150af6ac1e : Move ideep ops from caffe2/contrib/ideep to caffe2/ideep (#7112)
b2cdd08252 : Introducing onnx-tensorrt to third_party (#7119)
4add3a4df7 : Add dependency from caffe2_gpu to ATen in CMake (#7117)
cdc6d104e2 : [auto] Update onnx to 68bc26c - add type inference for traditional ml ops except classifier ops. (#857) https://github.com/onnx/onnx/commit/68bc26cfb27696779c2bbc26a6538f7d67c1cbe6
b3be71f046 : [easy] Stop hardcoding "python" executable in bottleneck tests (#7105)
afe3c2688f : Update C++ API tests to use Catch2 (#7108)
25e7d5c612 : Make @ebetica and @goldsborough owners for test/cpp/api (#7113)
6e72ba9798 : [Caffe2] Fail fast for C++ unit tests too (#7106)
7efd6f0506 : [auto] Update onnx to 9cc0cda - fix string representation of scalar types (#858) https://github.com/onnx/onnx/commit/9cc0cdabd37dbd9c1138118d8de5b49404876305
ab44002ac8 : Open-source extractMetaNetDef & runGlobalInitialization, add new Predictor constructor from db file, and add run_map_outputs (#7063)
71f6cca992 : Make @ebetica and @goldsborough owners for torch/csrc/api (#7110)
bd69d2fd23 : [auto] Update onnx to 1078925 - fix y in pow test case to scalar (#852) https://github.com/onnx/onnx/commit/1078925c2d32b18a8cea3efdcfebbba309ec9c8a
f87462c65f : [Caffe2] Fix the wrong argument name in collect_and_distribute_op (#7091)
50218a25e7 : [EASY] Document load_inline (#7101)
1ea3f79569 : Location of pip package changed (#7100)
95681257d6 : Revising cudnn version check (#7062)
af71fb882f : Merge autogradpp into PyTorch (#7074)
3407708b81 : Remove unused variable (#7103)
bf9fab3cf3 : [auto] Update onnx to c66fb6f - Add some math function shape inference (#845) https://github.com/onnx/onnx/commit/c66fb6f077fe45af99d975f66f0be351fb81b4b3
20c965f7d6 : fix max/min on cuda in presence of NaN (fixes #6996) (#7052)
90026f59a3 : Switching to conda's --no-test flag (#7099)
9a3c723644 : Add missing PrintOp arguments doc (#7084)
caa6a8ce30 : Switch to the official git mirror for Eigen. (#7090)
39c0b0b850 : Delete unnecessary header includes. (#7094)
2a56666196 : Removing leveldb to make special gcc builds unnecessary (#7098)
b70b7a80d4 : Inline JIT C++ Extensions (#7059)
c5978db094 : [auto] Update onnx to ff667d1 - Refactor return type and docs for ONNXIFI_BACKEND_DIRECTX_ID (#853) https://github.com/onnx/onnx/commit/ff667d1dfb498618aa3b7086b5577224ed7dd5e8
d9aeb7e71b : clamp now has subgradient 1 at min and max (#7049)
8fbab83c2a : only Tensors of floating point dtype can require gradients (see #7021) (#7034)
6a55d86234 : GroupNorm docs (#7086)
881af544fd : [auto] Update onnx to 11c6876 - clear initializer names when clear initializer (#849) https://github.com/onnx/onnx/commit/11c6876f1d524b3d102554919a2b027e66f94e3e
bc62645e4c : [jit] Fix handling of IntList[k] parameters (#6965)
96c6ae67bb : Remove incorrect/irrelevant test code. (#7050)
ee00a8049a : Add max pooling support to EmbeddingBag (#5725)
49f87320ba : [Caffe2] Add full impl of GroupNorm (#7058)
0703357723 : Don't build THD/master_worker if not explicitly requested (#7081)
b240cc9b87 : Add support for dotted names in CPP Extensions (#6986)
e6ce1afe47 : [Caffe2] Follow-up of onnx-trt API change (#7076)
7450e9152b : [auto] Update onnx to 73c34ae - Clarify FeatureVectorizer description. (#843) https://github.com/onnx/onnx/commit/73c34ae62f14cb95f5ff80a728c6aa22542c4d4e
281f095972 : Add autograd API to at::Tensor (#6582)
802e718e1c : [auto] Update onnx to 1befb9b - Remove useless text in docs (#850) https://github.com/onnx/onnx/commit/1befb9b12d1b54effe3f8a5f3cf36de996a2a969
4caea64d72 : Make all of TH and THC C++. (#6913)
4667983f0f : Fixes for interpreter and ONNX export for translation (#7044)
fc6a846cc5 : [Caffe2] Fixing bug in conda builds (#7061)
1048d0dd67 : [Caffe2] Moving all conda package information into package name rather than build string (#7041)
065cd32ed0 : Fix ".pb.h" dependency issue about DLL build. (#7027)
bb9c859253 : [auto] Update onnx to e84788f - Fix SELU attributes' default values (#839) https://github.com/onnx/onnx/commit/e84788fb48b279e0444e69a1bf579e6de85eac57
20cd27da42 : [caffe2][ONNX] Implement CPU NumpyTileOp and corresponding ONNX backend (#7053)
2e023a29e4 : Add optional support to C++ extensions (#7055)
7b09bc72a5 : [WIP] Enable WERROR in tests (#6539)
733e2967b1 : Allow `__constant__` values in a ScriptModule to be used as attributes for builtin functions (#7017)
02a764f82d : Update the video input op in caffe2 (#7054)
980960d036 : Fix Visual Studio error C2398 about ill-formed narrowing conversion. (#7024)
59f5f9ac36 : [caffe2] Fix build of depthwise_3x3 for CUDA compute capability < 3.5 (#7048)
361648a4a7 : Fix torch.tensor(...) device-type calculation when used with numpy an… (#6995)
0c737dff63 : fix lbfgs variable names (#7037)
6ce376fee3 : [auto] Update onnx to ebac046 - Add tile test case (#823) https://github.com/onnx/onnx/commit/ebac0463a0553948f043d0def33078c3932760a4
f630de8f33 : [caffe2][nomnigraph] Lint run (#7045)
932c4c2364 : Prevent stack overflow on deletion of deep graph (#6873)
c730792d51 : Add big warning about averaging to KLDivLoss documentation #6622 (#7006)
ae35e0e924 : Support non-contiguous tensors for unary ops (#6119)
a6bfa16c17 : torch.arange: add numpy-style type inference. (#7016)
bdd27ea956 : [auto] Update onnx to 8b7a925 - a few more shape inference functions (#772) https://github.com/onnx/onnx/commit/8b7a9252c9826867b37c472a0e253d84ae17f857
f6083b343b : [auto] Update onnx to 9718f42 - Make the coefficient non optional for LinearClassifier (#836) https://github.com/onnx/onnx/commit/9718f42976fbf4c7cecdaa5d753a7d39b9d70361
39c6101ab4 : [auto] Update onnx to ef083d0 - Add save_tensor and load_tensor functions for Protos (#770) https://github.com/onnx/onnx/commit/ef083d03387ff482e5ce495fb3b4f78ddadc7462
1b0ad8678b : import *Sampler to utils.data (Better fix than #6982) (#7007)
3d4d39ce30 : Also check compiler ABI compatibility when JIT compiling (#7015)
9db779f331 : [auto] Update onnx to 45ceb55 - Check if CMAKE_BUILD_TYPE set before project(). (#812) https://github.com/onnx/onnx/commit/45ceb5523a56b7b91d6e08d403aee03da57a4dc5
76d3c30783 : Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#6445)
eaab6ce459 : [caffe2][nomnigraph] Move nomnigraph<->caffe2 converter logic to caffe2/opt (#7018)
18ed2160b0 : Use Index rather than Long for IntList parsing (#6674)
902579602b : [wip] [Caffe2] Changes to integrated binaries (#6997)
19cb5a0436 : [auto] Update onnx to 4b3d2b0 - [WIP] reenable shape inference tests (#834) https://github.com/onnx/onnx/commit/4b3d2b02e89b971e19e5ce625a414f595f03d4b8
d67ec68dbe : [auto] Update onnx to 22d17ee - RNN tests: LSTM, GRU, SimpleRNN (#739) https://github.com/onnx/onnx/commit/22d17eee2e216451afced210e8ce47aaf3f6b1eb
a08091a42d : Implement matmul_out and dot_out. (#6961)
49493948a8 : Fixes some build warnings. (#7004)
9a6c033004 : Skip unsupported ONNX backend test cases (#7005)
242f6c3470 : Don't print dots after nonfinite numbers in integral float tensors (#6835)
2b44c420c8 : Enhance diagonal (fixes #6479) (#6718)
8109b3065e : Slight changes to anaconda script (#6994)
b2581c0289 : Workaround in onnx to get transposes into init_nets (#6924)
a64b2987b4 : [ONNX] export tile op (#6954)
5dc5a71d74 : Improve error message (Sampler location) Fixes #6917 (#6982)
984516bdc4 : typo corrected: is -> if (#6980)
3964253f94 : Allowing for vectorized counts in Binomial Distribution (#6720)
f98b778086 : Fix forward and backward for norm/renorm with infty norm (fixes #6817) (#6969)
24d05662ea : [caffe2] Open-source DEPTHWISE_3x3 engine (#6601)
eb4154a007 : [auto] Update onnx to 485b787 - function proto for composite op. (#802) https://github.com/onnx/onnx/commit/485b7875fa8a5b420ce2d73f7d00844c95541853
3d907ef78e : Consistently check 'out' variants against specified dtype/layout/device parameters. (#6973)
c10da636b5 : implement gamma cuda (#6855)
7cbef70372 : Fix the onnx symbolic for selu and maxpool3d (#6816)
645ad7ad0c : Fixing LP-Pooling stability issues (#6766)
bd14d8e8f8 : add additional caffe/caffe2 paths to exclude list in pytorch setup.py (#6891)
ab016a2b30 : Code Cleanup: removes unused getTextureObject (#6974)
2d6d6a4d10 : Removes unused _long functions in THCTensorIndex (#6971)
31c9b4f0d2 : Changes incorrect "overlappingIndices" call to correct "maybeOverlappingIndices" (#6953)
d48d3ef6bc : Make cuda 9 behave as cuda 8 wrt half conversions (#6958)
5209213fa7 : [auto] Update onnx to cd58928 - specify defaults for attributes of Affine op (#820) https://github.com/onnx/onnx/commit/cd589283a0d931b6c8ff6e32235902e05ac8b7c9
f21c5c5cd8 : Fix the symbolic of batchnorm to handle special case (#6967)
b038b3d7be : Always dumping final meta.yaml for debugging (#6977)
3573f64bb1 : [auto] Update onnx to 7ee2cf9 - merge the dummy backend back into the main one (#743) https://github.com/onnx/onnx/commit/7ee2cf985467e480321e6e288410d1a27f8fc618
8028162103 : Update the script to avoid the protobuf lib issue and add ZFNet (#6966)
94d2afbe50 : Clarify _unsafe_view comment. (#6952)
2e32e8df75 : Statically linking CUDA for Anaconda builds (#6680)
7599d0c3fe : [caffe2] ONNX backend support for control nodes (#6914)
3b009dffe1 : Delete unused legacy indexed based streams (#6964)
1e134b11ec : [caffe2][cmake][opencl] Wrong directories were being included, which might break systems without opencl in the system headers (#6972)
5aed120bc3 : [auto] Update onnx to 1c03a5a - [Proposal] ONNX Interface for Framework Integration (previously ONNX Backend API) header and docs (#551) https://github.com/onnx/onnx/commit/1c03a5a42e815a25419f35af33b4de4014187d3f
a7b274bb2a : Remove scratch space from THCState (#6956)
075ca76c26 : [auto] Update onnx to 3769a98 - Rename real model test case from VGG-16 to ZFNet (#821) https://github.com/onnx/onnx/commit/3769a983624cf552ee89743eb714b23684e3a27d
333e8c9b22 : any/all returns LongTensor, make test expect that (#6957)
6ebcb4606f : fix typo in the LSTMCell math definition (#6951)
138d69c688 : [auto] Update onnx to 403ccfb - Change the return type for the zipmap operator to match the description in the spec. (#818) https://github.com/onnx/onnx/commit/403ccfbd0161c38f0834413d790bad0874afbf9a
e767b186ee : add missing UNCERTAIN_TH_OMP_OVERHEAD_THRESHOLD to TH_TENSOR_APPLY_REDUCTION_OMP (#6946)
e7babb1890 : [aten] only lookup CuDNN if compiling with CUDA (#6905)
2dc177ac50 : Update checkpoint.py (#6943)
39d4814933 : Make any and all on ByteTensor behave like sum/prod. (#4627)
241a1e0f52 : [auto] Update onnx to 15289e3 - Tile - align with numpy (#757) https://github.com/onnx/onnx/commit/15289e3d77a9014f70c413c757a5805261dd94ec
c820fda180 : [auto] Update onnx to 42207c6 - Pass to lift captured values as inputs to control nodes (#804) https://github.com/onnx/onnx/commit/42207c60d89727f5ae229fb3a66a0708fec70cda
c92b5422f7 : Fix typo in set_grad_enabled description (#6931)
e27d66a454 : Remove Eigen from math CUDA and update algorithm in ReduceTensor and Moments (#6922)
40301c3be7 : [auto] Update onnx to 15289e3 - Tile - align with numpy (#757) https://github.com/onnx/onnx/commit/15289e3d77a9014f70c413c757a5805261dd94ec
2f311be90b : add default value to ConstantFill doc (#6923)
09f40ae06f : silence compiler warnings (#6915)
d9bde84b84 : Add threshold for ops using openmp macro (#5584)
aa88ca8ae0 : remove quotes from caffe2/contrib/aten/CMakeLists.txt (#6928)
dec5e99e99 : [aten] Move submodules to third_party (#6866)
c33d7f565b : updated the environment collection script URL to the raw version on Github to download the script instead of the webpage (#6927)
8b70f7d248 : [Caffe2] Clean up ideep integration (#6881)
b7487d42a0 : Workaround to make PythonOps traced with torch.jit.trace work correctly. (#6738)
e28508afa5 : [auto] Update onnx to 42207c6 - Pass to lift captured values as inputs to control nodes (#804) https://github.com/onnx/onnx/commit/42207c60d89727f5ae229fb3a66a0708fec70cda
3c80a2b85c : [caffe2] Add flag to ONNXWhile to skip scoping (#6910)
53a8158d6d : [auto] Update onnx to 0eaf45f - Add dtype for input in Gather node test case (#815) https://github.com/onnx/onnx/commit/0eaf45ff89c1041cd9b1d9697b7fa7156bc06fe4
0b5910f77e : [jit][script] Fix a bug combining sizes/unsized tensors (#6882)
6e60edb799 : [caffe2] Fix logic error in tensor filling ops in C++ ONNX backend (#6909)
146e8c8a10 : Fix the legacy padding handling on global pool case (#6473)
cfb626b638 : [caffe2][tiny][fix] Make the build work with profile observers (#6908)
9dd73aa7eb : Fix stable link to always be /stable/ (#6907)
d985cf46f1 : Add workaround to fix include warnings in Python 2 builds. (#6716)
90e75c6528 : Speed up printing of large tensors. (#6876)
0430bfe40b : [docs] Update broadcasting and cuda semantics notes (#6904)
6418c49ee9 : Make ArrayRef read-only by default. (#6444)
26c53c58a2 : Fix ATen .travis.yml setup (#6860)
21e0fc8fec : [auto] Update onnx to adbfb4a - Fix the ConstantFill spec (#808) https://github.com/onnx/onnx/commit/adbfb4ad19f454996acd93ac6dd8ff31184505a5
7d32f6fdc3 : Adding runtime warning for checkpointing inputs to have requires_grad=True (#6883)
9765bb5f1e : Revert "Fix performance regression of simple indexing cases (#6793)" (#6886)
b6ed729cdc : fix memory leak in median (#6889)
df2817d3b1 : Bump benchmark to master (#6878)
82a33c32aa : Update device docs (#6887)
b5d2d285a8 : fix SVD backward on non-square matrices when some=False (#6870)
1ee009599c : Add torch.get_default_dtype doc (#6872)
750a323ca1 : Work around protobuf issues by importing onnx first (#6833)
aa56a1211d : Update from facebook (#6871)
aeb91587e5 : [caffe2] Fix observer logic in RNN executor. Remove dynamic casts (#6202)
548f6e34ab : [caffe2][nomnigraph][fixup][tiny] Remove accidentally included logging (#6880)
9ed46c615c : [Caffe2] Provide option to initialize the TensorRT engine at Operator constructor time (#6809)
a2f2d6b43f : Add special case for printing dtype for empty int64 tensor (#6869)
a02b7c9776 : Move main slice logic for easier reuse (#6822)
b8ada7380a : Tuple literal and cat support (#6691)
90586d925f : [DT] [38/n] Rename add_stop_signal to add_stop_condition (#6825)
a986b85afd : [auto] Update onnx to 3cb4d61 - Extend optimizer passes to recursively descend on GraphProto attributes (#803) https://github.com/onnx/onnx/commit/3cb4d61387658f9f38a6a4fd2cc4557a7beb8816
46b1737255 : [ONNX] Switch ONNX peephole optimizers to recursively descend on sub-blocks (#6828)
3b63be063e : quick fix for collect_env (#6861)
4040164097 : Relax collect_env.py tests (#6859)
a4dbd37403 : [doc] Minor fixes for Windows docs (#6853)
26ddefbda1 : [feature request] [Caffe2] Enable MKLDNN support for inference (#6699)
a16b85facd : [Caffe2] Fix cuda.cmake (#6821)
e966f22656 : fix typo (#6824)
e8bdbdaa27 : Terminate dataloader workers properly when parent process is SIGKILL'ed (#6779)
7a3c38ab59 : Add environment collection script (#6635)
56567fe47d : Add documents for Windows (#6653)
7d5c9bff58 : Removes (unused) LinearIndexCalcData. (#6791)
1c7b0c1020 : Update version string to 0.5. (#6795)
50e92a3085 : Static linkage for CUDA (#6807)
a8bdb561b7 : Fix reductions on some contiguous tensors where size(dim) == 1 (#6815)
814f791f2b : [JIT][script] Improve error reporting for tuple type mismatch (#6819)
95d0e9aaa2 : [docs] Update set_default_(tensor_|d)type docs (#6843)
0d0dcde5a8 : Fix caffe2 eigen + cuda9 windows build (#6746)
4e8e13d90c : [auto] Update onnx to bf00ae6 - Kezhan/update ml op spec (#799) https://github.com/onnx/onnx/commit/bf00ae6118cc50d045494679f3ec53ec2d2cd552
d564ecb4a5 : Update docs with new tensor repr (#6454)
34fa355f27 : [caffe2] Add Moments to math (#6798)
5945f3a7b4 : [auto] Update onnx to e3da0f9 - Fix some checks not ideal to onnx-ml (#781) https://github.com/onnx/onnx/commit/e3da0f9babe700fde5b70c998b3104a91cacbca5
7b6b7d4575 : Mark schema registration helper variables as unused (#6799)
8b28ab4858 : Add option cache to speed up cmake build (#6737)
34edd6f12e : fix sparse tensor print (#6829)
8a434d9554 : Print integral floating point numbers as X. instead of X.0000. (#6812)
8fc11748fe : Fix debug build for Windows (#6758)
a568b91a5d : [docs] Add missing device parameters to factories, refer to dtypes as data types rather than types. (#6803)
516f067641 : InputBuffers should AutoGPU for accumulation. (#6826)
6c8f0ef33b : fixed error message (#6820)
9b37a4d027 : [auto] Update onnx to 4890619 - Remove debug string (#798) https://github.com/onnx/onnx/commit/48906190e65a2a31567a88cf4d53c93fb8176e79
356af0c195 : [auto] Update onnx to 2f7c284 - Use ONNX_NAMESPACE::to_string instead of std::to_string (#797) https://github.com/onnx/onnx/commit/2f7c284e57c204a966be2edeffe2c2b47b83d812
afea133113 : [auto] Update onnx to b20fae0 - Add newline at the end (#795) https://github.com/onnx/onnx/commit/b20fae02876265f6c8f6ff1dbf8b7afe53789c87
db540c9e7b : Fix the bug in fb devgpu setup script (#6823)
41bb1d56a7 : [auto] Update onnx to f5496b2 - Update the remainig cases (#794) https://github.com/onnx/onnx/commit/f5496b2c749b83bdbb872c3982ea1937b935b25f
02544f4472 : [auto] Update onnx to 7d1e102 - change the inference context api to use TypeProto (#779) https://github.com/onnx/onnx/commit/7d1e102e73ffd0479a87d62c8aa950e03cfdda95
1d51dd8665 : [distributions] Fix Independent.rsample() and add more tests (#6806)
12e07ca731 : [caffe2][nomnigraph] Add binary split algorithm to Algorithms.h (#6689)
a73b3fd1f0 : [caffe2][opencl] Add OpenCL context (#6777)
8a15bc4c9c : Fix the ONNX exporter API (#6788)
188b6e9346 : [auto] Update onnx to 6953eff - some cleanups to shape inference impls (#771) https://github.com/onnx/onnx/commit/6953eff49a7a6d1cf5c04f3a67260a5c798e5e7b
c286efb442 : Quick patch for the CI (#6802)
378f742792 : [auto] Update onnx to 8dafe88 - Remove incorrect cases (#791) https://github.com/onnx/onnx/commit/8dafe889010aa957574faf793a5036ccc84d9eaf
3e2891b27a : Let Gloo close socket, destroy() not needed for non-NCCL backend (#6787)
ef76e24f60 : [JIT][script][ONNX] ScriptModule ONNX export + ONNX export for control flow nodes (#6608)
945cb0fabc : [auto] Update onnx to 45be0fe - Fix shadow-compatible-local compiler warning (#789) https://github.com/onnx/onnx/commit/45be0fe73650daed60107c06c900343cfdc6ed0d
d695624efe : More trt tests (#6782)
503be98d61 : [auto] Update onnx to d01e4af - update the test cases (#788) https://github.com/onnx/onnx/commit/d01e4afc4eb4cb472b91625fac2d94e31b204787
c420297545 : [jit][script] Constants python int now turn into Long (#6728)
7e1c5ca6d5 : Add missing #include for CAFFE2_MODULE macro. (#6790)
8a016693c0 : Fix performance regression of simple indexing cases (#6793)
c3bc927920 : [auto] Update onnx to 7e1bed5 - Make proto_utils compatible for old version of protobuf (#787) https://github.com/onnx/onnx/commit/7e1bed51cc508a25b22130de459830b5d5063c41
a4ab83045d : Fix cross device indexing for more than 1 cuda device. (#6781)
1a53e45558 : [auto] Update onnx to abe285e - Fix unused parameter warnings (#786) https://github.com/onnx/onnx/commit/abe285e987bb502f871093bf42dcd718bf34a5da
d1a992a85e : Disallow chunks that are <= in torch.chunk (#6761)
264ffd143c : [auto] Update onnx to 5ef9c6e - Parallel windows build (#784) https://github.com/onnx/onnx/commit/5ef9c6ee28830697ab93afce8672330b26963963
6a41e2dc47 : Add BC mechanism to Module.load_state_dict (#6639)
6fed2341e9 : [auto] Update onnx to 3b27cc8 - Try using pep518 to install the protobuf build dependency (#782) https://github.com/onnx/onnx/commit/3b27cc8faa801460b8154870516fedc5717eacc0
370acdf3bf : Change to use CAFFE2_HOME for specifiying caffe2 models path (#6775)
a3f3817fbd : [jit][script] Allow variables to be define in if statements (#6675)
4c5b95a433 : Revert "Terminate dataloader workers properly when parent process is SIGKILL'ed (#6606)" (#6772)
6dfaa1071a : Check in ATen mirror script. (#6762)
e169465672 : rnn: A note on zero defaults for recurrent cells (#6719)
d0b0edf27a : Add a requires_grad_() function to tensors. (#6771)
f6da2fd944 : Make the variable closer to usage (#6752)
2acc247517 : [docs] Update autograd notes (#6769)
de9bdf1d31 : Module.to doc udpate and example format update (#6774)
47bd4be4d3 : [docs] More factory functions (#6709)
cc3284cad3 : [docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763)
7f587de4bc : [Caffe2] Let TensorRT flow use the generic graph transformer (#6696)
9c682f02b7 : [docs] Fix some sphinx warnings (#6764)
e44f901b55 : added functionality for state_dict/load_state_dict for lr_scheduler ( Fixes: #3026 ) (#6342)
072d49f787 : Fix import error sometimes happening in dataloader when exiting Python (#6671)
533beab5bb : Fix doc for torch.nn.functional.relu (fixes #6742) (#6749)
71c644b005 : [caffe2] Add ReduceMinOp and ReduceMaxOp (#6744)
fff80c2c1f : [auto] Update onnx to 1439eab - Fix Protobuf error message in CI (#776) https://github.com/onnx/onnx/commit/1439eab5542c625bb3da49860f0cd68c3eafdc18
e1f5d80d5c : Eliminate handle_zero_dim when broadcasting is applied earlier. (#6683)
9c47eb5548 : Fixes test_torch.py so that all tests pass on Volta hardware. (#6736)
11c1af8dbc : [docs] add docs for tensor.view_as (#6730)
bacda6df8d : Better error message for gels on CUDA (#6726)
75ccfb321b : Fix cpp_extensins.py (#6722)
4d2a0b889f : [Caffe2] Use mapped workspace instead of renaming when working on renamed nets (#6717)
c40eefeef9 : ChannelShuffle with NHWC layout (#6667)
d26ab68485 : Sort declarations when generating Python bindings (#6701)
e47b3018b7 : [caffe2] Update EigenTensorMap to use ColMajor (#6735)
d1bb75e273 : Redo tensor repr to make it less verbose (#6370)
8d6a50aaeb : Terminate dataloader workers properly when parent process is SIGKILL'ed (#6606)
354dac9769 : updates module.to doc for the new tensor.to(requires_grad) (#6733)
198be34de6 : [docs] Add back deleted tensor.cuda() method (#6732)
6ae060b2b1 : Roll forward Eigen to 5a0ab9f to solve the compilation problem with CUDA 9.1 (#6710)
c14b62fca2 : Create FileBaton to synchronize distributed JIT C++ extension builds (#6684)
789e0e066a : [auto] Update onnx to 1756f61 - Align schema function signatures in python and c++ (#775) https://github.com/onnx/onnx/commit/1756f6183dfbece085596e7b1a2c079bafb7fa0e
38614c4670 : Add gpu check for reduce_max (#6729)
61a69c2492 : [caffe2] Use both __ARM_NEON__ and __ARM_NEON macros (#6697)
ad8bfb7359 : Adding package name parameter for conda builds (#6727)
2d09799950 : [docs] Document CUDA profiling gatchas in bottleneck docs (#6715)
969251962c : [Caffe2] Enhance test for CollectAndDistributeOp (#6693)
e089849b4a : Add mutex to THC random number generator (#6527)
c25f097225 : [wip] Fixing ci conda tests (#6686)
96e2140ffb : Check for g++ also in check_compiler_ABI (#6711)
530e1e89f0 : [auto] Update onnx to 5509f70 - more shape inference implementations (#758) https://github.com/onnx/onnx/commit/5509f70b80810ddaefccf797a561eeb575744683
be0b7f8c81 : Add reduce min and reduce max (#6685)
6c3e5af393 : [auto] Update onnx to ac948de - add eliminate identity optimizer (#755) https://github.com/onnx/onnx/commit/ac948de61d206de04cf27f31a7ca7fa490df7b98
63d42408d0 : [Caffe2] Detectron fpn support (#6645)
a1cc8dde80 : Fix LSTM and GRU parameters description (#6665)
2a628ba32f : Update README.md (#6703)
bd0cc7d364 : Implement torch.einsum (fixes #1889) (#6307)
187955b959 : [distributions] Skip validation of lazy properties (#6666)
fb7bd8e4ae : Better dispatch (#6687)
6223bfdb1d : Update from Facebook (#6692)
eca0ef5e42 : __STDC_FORMAT_MACROS was conflicting with some thirdparty include from google perf tools. Looks like a harmless fix (#6676)
6252706feb : [Caffe2] Workspace centric API for TensorRT transformation (#6678)
dc94182db0 : Check for --noprefix option for mpiexec in run_test.py (#6690)
1c01eabd3c : Codemod to update our codebase to 0.4 standard (#6641)
c43c911662 : Export onnx protobuf bindings to python (#6651)
f50f1769ec : [auto] Update onnx to 844bbc2 - Update Python schema API to take domain (#764) https://github.com/onnx/onnx/commit/844bbc214265f7a51a0770a05436fa07db8ecfe8
711343f981 : Gltensor fix (#6647)
4dd29ac89f : fix broken code from rebasing (#6681)
1191627008 : Make torch.backends.mkl.is_available() work without importing (#6677)
f15f3ca1af : Scope variables inside the dataloader (#6673)
a86f53fbf1 : Fix padding and output_padding in ConvTranspose docs (#6679)
8cf41b40e6 : Update gitignore so that third_party/build and aten/src/ATen/Config.h are cleaned properly. (#6672)
459dfdc304 : [Caffe2] C++ SSA Rewrite of Caffe2 nets (#6531)
7de61c3b8c : Update tensors.rst Tensor introduction (#6670)
4be34ca0f3 : Add broadcast and reduce gradient (#6668)
e51e792cef : enable exporting bidirectional rnn with fixes seq len from onnx to caffe2 (#6566)
f656301526 : Allow traces to call @script functions (#6642)
1f2829dd2a : Update tensor factory method docs (#6640)
d193f82c1d : Adding dispatch to Tensors (#6664)
2aaa9ae60f : [auto] Update onnx to 54ca9cb - The content of a string is doubled if it's a string tensor (#765) https://github.com/onnx/onnx/commit/54ca9cb5033ed561bad01a2c18431701d9d140e1
feb8522f99 : randperm supports n=0 (#6656)
7fcaf3b49e : Update torch.nn.init and torch.nn.utils.clip_grad (#6173)
1e34493825 : Fix some loss output sizes (#6659)
d5f041aa8b : Updated documentation for cross entropy loss to include multi-dimensional input shapes (#6638)
c77fca570c : Add device docs; match constructor parameter names with attribute names. (#6633)
30849eb668 : Bind 0-dim variables without requires grad to int64/double similar to how we do with Scalar. (#6637)
639dd0e324 : Fix an error in the tensor docs. (#6658)
f2c9975378 : Add DistributedDataParallelCPU (#5919)
c345212c86 : Support gpu triangle solve (#6648)
b34ae77be8 : always compute gradients for the gradcheck inputs (#6654)
bc6243cb4a : Explicitly define all caffe2 reducer ops by name (#6513)
e46043ab0c : Fixed NCCL build in fbcode (#6643)
5ed3f3347a : Add dtypes (with reasonable defaults) to sum, prod, cumsum, cumprod. (#6573)
dd91d57c3f : Update docs for torch.zeros factory method (#6594)
ee240aa00c : Allow script_methods to be defined out of order (#6341)
0e93a2c334 : Add Module.to (#6629)
3e83e3abfe : Adding initial_accumulator_value parameter to Adagrad (#6616)
53d2612b55 : Fix a typo in the setup.py script (#6632)
582d47e986 : [Caffe2] Scoped dummy name generator (#6458)
ce2854c875 : Create safe and unsafe versions of sparse_coo_tensor (#6058)
40592f91b5 : Fix bilinear performance regression (#6110)
24b4931462 : Improve run_test.py to support running individual test classes and methods (#6344)
4d0097fab8 : Note that the Docker Hub image is not up-to-date. (#6434)
7ef14bf04c : Follow the change of ONNX Cast operator "to" attribute (#6574)
30157971f0 : Update dist test to use multi gpus (#6337)
892be8b779 : Make dtype in .to positional rather than kwarg only (#6628)
04fae73323 : [auto] Update onnx to bf42662 - Change the "to" attribute of Cast operator to of type int (#727) https://github.com/onnx/onnx/commit/bf426626378cbb715d3d0000a35f065708156cbc
d7cb78478f : Split set_default_tensor_type(dtype) into set_default_dtype(dtype). (#6599)
76ca037069 : [distributions] Implement Independent distribution (#6615)
fd6d11ae66 : Fixed text of error message in case of unexpected target size (#6617)
46374ad5c8 : Add tensor.to(device) method. (#6588)
084e3a755b : fix incorrect path (#6605)
2ef23b6241 : [caffe2] Update transpose with compile time dimension (#6614)
f5beff334b : Added distributed docs on NCCL2 backend/functions and launch module (#6579)
5463a4a319 : Fix typo. (#6609)
8aff844f2d : [JIT] torch::jit::Type needs a virtual destructor (#6611)
cd2112717c : [caffe2] Update math functions with params on host. (#6602)
caadc9301f : [auto] Update onnx to ff7b3b4 - enable warning check and fix warnings. (#760) https://github.com/onnx/onnx/commit/ff7b3b4c856bc6476d6398a191a2cfe24035806b
0e246305ab : [auto] Update onnx to 97d3ae6 - Kezhan/update size op output type (#759) https://github.com/onnx/onnx/commit/97d3ae6ddd83ddc26883c4f77ef91311391c1063
eaf1e4b6ab : Docs for torch.*_like(...) factory functions (#6589)
e8d2f05931 : [JIT] Switch JIT passes to take a graph rather than TracingState (#6598)
825ce7f196 : [jit][script] Allow tuples to be re-assigned (#6538)
0042851e04 : Fixing some typos (#6595)
11b9180563 : [auto] Update onnx to 5355440 - add fuse_conv_add_into_bias optimizer (#707) https://github.com/onnx/onnx/commit/5355440f5a4b00a94870f0d4620c1046b40dbffa
9dfc01b659 : [auto] Update onnx to b7d66d8 - Add some more type/shape inference implementations (#725) https://github.com/onnx/onnx/commit/b7d66d8838df3b13bce3dd2b8344648d9f3e35ec
84707be156 : WorkersPool uses atomic writes to task_ (#6577)
e10d5cdc68 : Change to ldd parsing regex (#6592)
3140fe0ed1 : [auto] Update onnx to 7b33c37 - fix docs of pool op (#751) https://github.com/onnx/onnx/commit/7b33c37ae59fedeee2946c9c3833b4c65d48c71f
6c0f74089f : More precise digamma (#6517)
99cfb56698 : Add docs for torch.randn_like (#6565)
62ac7f9812 : [auto] Update onnx to fa04841 - specify default value for thresholdedrelu's alpha attribute . (#753) https://github.com/onnx/onnx/commit/fa048410fa313e14c3bd5caff0c837fb429f124b
f3a9be0ed5 : Fix RNN parameters description (#6575)
56563a0a79 : Use THC allocation for CUFFT workspace (#6568)
be86500244 : Conda binary changes (#6534)
3b0204d43c : [JIT] Hacky: Staged symbolics for RNN nodes (#6297)
8af0f69a23 : lowercase tools/cpp_build/libtorch/CMakeLists.txt (#6567)
d725cd5966 : Fix ATen build in Caffe2 (#6496)
30a37a2111 : [auto] Update onnx to 5c9c778 - [Typing 2/3] Add python type hints for C++ code (#610) https://github.com/onnx/onnx/commit/5c9c7782701cefced36265871b840aef08135909
16704249cb : Add docs for tensor.index_put_ (#6563)
c2187790e3 : Improve utils.checkpoint docs (#6526)
e01569afd7 : Restore allow_unused functionality (#6553)
60b67eb604 : Create CODEOWNERS entry for torch/onnx (#6560)
6ce6c0ed65 : [caffe2] Fix bug in NNPACK bindings for convolution in precomputed transform (#6555)
8aa0ae3836 : Support arbitrary number of batch dimensions in *FFT (#6528)
749d51414a : Separate cuda-ness from dtype. (#6470)
8995ddda05 : [jit][script] Check that each builtin returns the right number of values. (#6492)
f6e8b86315 : STFT is differentiable out of the box. Fix the regression that marked it as backward-not-implemented (#6541)
d45f3d0d5c : Skip cpp_extensions test when possible on Windows (#6423)
8849bea120 : [caffe2] Update ReduceOps (#6497)
0a6331792d : Fix #6398, Add MKL threading support for Windows (#6416)
f54eac7eba : Add flag and warning for Python 2.7 users on Windows (#6499)
fc56e8fea5 : Quote arguments only when possible (#6405)
1943e9763f : [ONNX][easy] Don't set uniqueName if it's already set (#6533)
63b5cc47eb : [caffe2] Minor changes in NNPACK CMake scripts (#6532)
434f710f3f : [Caffe2] Add support to TensorRT (#6150)
1f0b07cddc : fix typos in sampler.py (#6525)
6b7ec95abb : Link relevant FAQ section in DataLoader docs (#6476)
5ce6b97aee : Use symbolizer in ASAN (#6506)
494aaab00e : Add docs for `item()` (#6508)
1e5611014d : Adding autofunction entry for torch.randint (#6507)
ca09e4a3c5 : Fix THTensor_(take) negative index check (#6482)
e07952dbc9 : Add SmallVector from llvm (#6485)
d3f11310fa : [auto] Update onnx to 00fa587 - Enhancements to shape inference (#655) https://github.com/onnx/onnx/commit/00fa58791efb9cc19fce3d6c14377183007d3f05
d9345aa60f : add checkpoint to index.rst (#6498)
e4f1d3b538 : Better warnings (#6428)
ef8f556212 : [Caffe2] Changes done inside Facebook (#6378)
0dff2b5e35 : [fft] [3 of 3] Implements backward of fft ifft rfft irfft (#5537)
63472bcf29 : Sync current changes in ACL backend (#6484)
37d5c58f4b : Skip all TestTorch tests in test_cuda.py (#6489)
7bd398b3db : Add fuseNNPACKConvRelu (#6439)
5f311da758 : Make python setup.py clean delete aten/build. (#6487)
d4e13a4ec8 : [auto] Update onnx to 50fe321 - Fix fix (#744) https://github.com/onnx/onnx/commit/50fe321d053245178cf2441201d13cdeca5e9c6f
5c8290c20d : Update MKL version to 2018.2.185 (#6483)
6f10978e7b : Skip C++ extensions test when ninja is not available (#6480)
432425c76b : [auto] Update onnx to 1963285 - Guard type checking numpy imports (#741) https://github.com/onnx/onnx/commit/1963285656d5d643cef3e4d6a749af9ae305ceeb
ae592b4999 : Louder warning for C++ extensions (#6435)
8e1d920695 : Fixed Clang Compilation Warnings for THD by removing outdated C linking (#6448)
e3196e0ea8 : [Re-checkpointing] Autograd container for trading compute for memory (#6467)
04c215b445 : Add link in docs menu to stable docs (#6475)
c3f7e5ff55 : Install signal handler for SIGCHLD in run_test.py (#6436)
ad5d421554 : [JIT] Implement staged symbolics for pack_padded_sequence/pad_packed_sequence (#6256)
8d6c5c7898 : [auto] Update onnx to 0ee95e3 - Split operator tests (#557) https://github.com/onnx/onnx/commit/0ee95e36e689643fe6996ad91419fb1887bf3707
64e94814da : Clean-up test_indexing.py after Tensor/Variable merge (#6433)
aea31131e5 : [auto] Update onnx to 7fcdf41 - Setup mypy type checker (#676) https://github.com/onnx/onnx/commit/7fcdf415570ceb2f1abe0397a612c2102147a30d
038b66ee07 : [caffe2] use dictionary in Printer (#6443)
930f181255 : Fix fft when any of the input dimensions is not aligned (#6118)
bb097e2a50 : [pytorch] Fix signed random_ (#6463)
f41044fa8a : [caffe2][nomnigraph] Generic subgraph replacement (#6368)
acb7df11a2 : Add torch.randint and torch.randint_like functions (#6136)
aa99aa1cb8 : Slice (instead of copy) when indexing by a zero-dim tensor (#6426)
59bda9a8c4 : Fix reflection padding boundary checks (#6438)
65a8ac0b8e : Add method to calculate perplexity of distribution (#6427)
79c3ebc040 : adds correct precision to test_noncontig_conv_grad (#6440)
1110dd1f8f : Add mock to conda (#6460)
66791f54d5 : Update the compile function of Job (#6323)
df2e1d2962 : Disallow using the OOP api workspace as context managers (#6456)
5e12ba92dc : Guard couple shape inference functions for unknown input shapes (#6379)
ce37cf7914 : [auto] Update onnx to 985af3f - Update PythonAPIOverview.md (#738) https://github.com/onnx/onnx/commit/985af3f5a0f7e7d29bc0ee6b13047e7ead9c90c8
c05acd3840 : Clarify Embedding padding_idx arg (#6430)
1533155c4e : [JIT][script] Implement compile-time tuples & starred unpacking (#6214)
afaa72716b : [auto] Update onnx to b69be33 - Add backend test for upsample (#729) https://github.com/onnx/onnx/commit/b69be334e5d28c907757b1253304cad5dda3e9dd
4900118a68 : [auto] Update onnx to 0d9496e - Input test data of concat op should be float (#711) https://github.com/onnx/onnx/commit/0d9496e79bab6a0bf29b41ec8b28cc6dcb9e115f
265e1a97ec : Add different logo for master docs (#6446)
26eb08abfa : [auto] Update onnx to 20bcb8b - Fix the spec for batchnorm and instancenorm (#733) https://github.com/onnx/onnx/commit/20bcb8bab86b5e31f1e9fd4cc052868a2e627195
e83dd716ec : [caffe2] Support fused Conv+Relu with NNPACK (#6375)
f9d3c3f4fd : fix typo in link to sigmoid activation image (#6429)
1b3a5a4e7d : bottleneck supports better user-provided arguments (#6425)
5651695a99 : Fixes #6386, Use copies instead of symbolic files (#6396)
d0f395f744 : [pytorch] Fix clamp is missing kwarg out (#6028) (#6418)
57ee202022 : Use string comparison in OS check (#6420)
a91c88a348 : Check mappings ONNX -> Caffe2 bear the same argument names (#6317)
73a23b492c : Add mock python module for testing (#6387)
0cabab02bb : Another CUDA 8 fix for Windows (#6383)
18fc4fd447 : Using a function registry for THD init_methods for easy extension (#6334)
108f5c197f : [pytorch] add static linkage support for CuDNN and NCCL (#6410)
4d15442ebc : Add total_length option to pad_packed_sequence (#6327)
88da5a0db4 : fix incorrect error message in convolution_expand_param_if_needed (#6409)
99939b6d90 : Increase margin for CPU perf test, and change test order (#6363)
119ea39021 : add cuda headers (#6401)
67bbf585cd : Fix the c2-onnx exporter bug on Gemm (#6331)
3b58b859b2 : Fix typos in docs (#6389)
e9adbbba82 : refactor reduce arg to _Loss superclass (#6371)
e0f3e5dc77 : fix activation images not showing up on official website (#6367)
aecec8b412 : [auto] Update onnx to c9f825f - Refine a little bit about op spec. (#666) https://github.com/onnx/onnx/commit/c9f825fc68b96b6a8c8895f4f2102f8630f75bf0
c053a76182 : Several minor fixes for Windows build (#6332)
32f3bf7946 : Simplify and extend cpp build (#6343)
a915e4715c : [auto] Update onnx to a484eb2 - Fix an error in Conv doc (#731) https://github.com/onnx/onnx/commit/a484eb2cb3ad0ae00128289bd506fbe4e190b9fe
997acfd7fe : [Caffe2] Some small changes to InferBlobShapesAndTypes definition and SameAsInput Schema (#6335)
774601c04c : [Caffe2] Consolidating conda build scripts (#6359)
f2130ae495 : [auto] Update onnx to 7410cc4 - Fix incorrect package output paths (#730) https://github.com/onnx/onnx/commit/7410cc4abfa92af45a9d8f1d4f60a1577e074a46
47259cfb6a : [nomnigraph] Version bump (#6364)
a9a96a4acb : Fix the onnx split backend axis handling (#6366)
e45b51148a : [caffe2] Always build NNPACK together with Caffe2 (#6365)
aab0bd3c13 : Change onnx_optimizer API (#6290)
6d8a33b5e6 : [auto] Update onnx to be546e2 - Improve optimizer's API and docs (#713) https://github.com/onnx/onnx/commit/be546e257cf5c80344afa0b41973f055f9c80751
87e369111a : Add string-style devices to all tensors. (#6283)
fc7aa5c3be : Fix torch.dtype getting incorrectly rendered as torch.dpython:type by sphinx (#6358)
b724084335 : INCULDE typofix. (#6354)
c42f4fa2ee : Add missing attributes to the schema GivenTensorFill operators (#6330)
c00ee6da8f : Fix typos (#6348)
81676d8554 : [auto] Update onnx to c61506f - Fix the shape inference python API (#716) https://github.com/onnx/onnx/commit/c61506f9f473769705e5f09b4181e12a35a22a69
876ad110af : Skip some unsupported onnx backend tests (#6247)
5198d2b9ab : [auto] Update onnx to e9d4134 - Fix cmake on windows when not building python extension (#728) https://github.com/onnx/onnx/commit/e9d41346d2c1587f2a94364801d9be284896ff38
4cde7c0f09 : Modify cmake dedent function to make it compatible with Windows. (#6296)
a093ec997f : fix typo (#6329)
c1cd6eab9f : Handle broadcasting in the JIT (#6084)
2f30fe64fd : [auto] Update onnx to 72187aa - Add value_info support in make_graph (#726) https://github.com/onnx/onnx/commit/72187aa08d271918dcb55ffa1fe7ff210cdc88c6
aba5f129bc : fix broadcast export to onnx (#6243)
29c69f049e : add test for old tensor serialization (#6275)
15f636bd10 : [auto] Update onnx to 67b7d89 - Fix gen_proto in cmake (#719) https://github.com/onnx/onnx/commit/67b7d89d24605e9554e6557563761788249604d7
38b995a13b : Fixing conda test builds (#6261)
0b3edfd3dd : [caffe2] Do not print version and build info unless explicitly requested (#6282)
482e1511ff : Revert "Increase # of runs for CPU perf test, and increase margin of error" (#6322)
d38adfe35d : [auto] Update onnx to fcb4ae3 - docs rewording: Important Python Functions -> Python API Overview (#721) https://github.com/onnx/onnx/commit/fcb4ae329fd3084fe3a765ccd9ca37d7c0ec8ff4
c21ce7e083 : [auto] Update onnx to 24275d6 - Ignore .eggs directory when doing lint (#722) https://github.com/onnx/onnx/commit/24275d6cea7305597a36ea2b6e0d2e7141268b9a
5ab30eedf3 : Add __constants__ to Script modules (#6092)
0aa35780bf : [ready] Implement log2 and log10 in PyTorch (#6272)
8ae67a4445 : Use reshape({-1}) (#6281)
6953c1b77e : Move instruction set specific code to anonymous namespace (#6314)
d33ec12d1e : [auto] Update onnx to 54be8fa - Use cmake3 if it's available (#718) https://github.com/onnx/onnx/commit/54be8fad1e8e617d7c24b823de65fe9ebdb1f342
5dcf7078c6 : default build with MKL for desktop (#6266)
9d1a660670 : Increase # of runs for CPU perf test, and increase margin of error (#6302)
9b111f1a88 : Fix worldsize use in test_distributed with MPI backend (#6301)
de54f23de6 : Add default args to loss functions in native_functions.yaml (#6289)
f73c044576 : Remove eigen impl for arg_max and arg_min (#6293)
7e0227d3e1 : [auto] Update onnx to b8c4238 - Add python function docs (#714) https://github.com/onnx/onnx/commit/b8c423889b3d15825e90fd5bd8fa1766ba2200ce
73ab15d388 : Change ATen to use Caffe2/cmake upstream FindCUDA (#6240)
efc91d8c6d : Add arg checks in torch.utils.data.Sampler classes (#6249)
0016dad841 : [pytorch] minor fixes around binary builds (#6291)
12bfa47ddd : Onnx RNN export: remove Constant default hidden state (#6199)
afdaf52c34 : Change Python Arg Parser to only read default params if they are assigned (#6254)
8df2487de9 : Properly skip the failing onnx conversion test (#6280)
ed9952dd25 : Update FindCUDA to cmake master as of 561238bb6f07a5ab31293928bd98f6f… (#6241)
9ba70856a1 : Add max_values and argmax convenience functions to ATen (#6201)
92e7f627cd : Add typing dependency to caffe2 CI (#6195)
004545fe32 : [Caffe2] Always build local protobuf library with -fPIC (#6264)
8f27c27941 : fix legacy tensor __setstate__ (#6251)
0469926ba3 : Add a CODEOWNERS file (#6274)
fd580ce419 : Fix potential UB when input is empty (#6242)
2f0bb19d7b : Do not use cpuinfo on PowerPC (#6255)
3497f0207c : [distributions] KL-Divergence for Multivariate Normal (#6172)
1499a604cf : fix assertion error when input size smaller than number of module_copies (#6252)
b125033f85 : Manually bump onnx submodule to current latest (#6237)
5f268b0668 : Fix the processing of extra cmake args passed to caffe2's setup.py (#6263)
7fd56b2c1f : Remove unnecessary properties from Layout. (#6250)
a2880531ea : fix SGD lr check (#6244)
06a697785c : Add dtype to torch.*_window; Add dtype.is_floating_point (#6158)
6b3a4637d6 : Make the tensor type torch.Tensor instead of torch.autograd.Variable (#5785)
dfcd90783c : fix sparse embedding backward when input contains only padding_idx (#6211)
14bf37f22e : Fix AvgPool breaking changes (#6221)
de51764119 : Fix memory leak in maxpool3d backwards (#6230)
29e81e01aa : Expunge ATen submodule; use the in-tree copy. (#6235)
40096c98ff : Support export torch.max(input, dim) and torch.min(input, dim) to ONNX (#6220)
83926393d3 : Detect re-initialization of _C shared library (#6232)
80ff36c9a4 : Print the diff files to aid in debugging when it's wrong. (#6238)
581d74f8d0 : Remove unused variable in Layout.cpp. (#6236)
4a9e02fc2f : Reduce flakiness of math tests in test_torch.py (#6200)
1b41d7ac1e : avx_mathfun.h is imprecise (#6192)
e831ad6204 : Fix sharing of empty tensor in multiprocessing (#6229)
460e8cd376 : change print to logger.warning in operator traceback code (#6216)
4375dfd0b2 : Changes without protoc conditions (#6142)
80cf134aff : Adjust the setup script according to the repo changes (#6218)
4f1eb06989 : Delete dead codes (#6226)
2e156f3eab : [caffe2] Add default values to speed_benchmark args (#6210)
fd2e7cb487 : Change JobRunner's __call__ function to train (#6205)
9f49be51ec : Fix argument checking for inlining a module (#6207)
771fcb3455 : [caffe2] Fbcode to GitHub sync (#6208)
fe89e21b02 : Add a missed parenthesis to the LogSigmoid documentation (#6209)
26c022b183 : Documentation for reentrant backwards. (#6191)
ad34d88959 : added word object to function doc string for clarity (#6204)
a409f959e8 : Remove ShuffleNet from model zoo. (#6203)
4c81282c33 : Introduce torch.layout and split layout from dtypes. (#6145)
28e66705ff : Move helper scripts to new repo (#6159)
63af898d46 : Fix extension test on Windows (#5548)
605307f8f3 : Add support for printing extra information in Module and refactor redundant codes (#5936)
7355f5cd8d : Tell source users about TORCH_CUDA_ARCH_LIST (#6185)
4748c9b529 : Fix logic inside insertInput (#6146)
92a0f7835e : Support returning dictionaries in DataParallel (#6113)
0b17f4b87e : [distributions] Support python floats in AffineTransform (#6035)
8617d3f1eb : Refine dirty matching for docs. (#6177)
d93d41b2ef : Some notes about PyTorch/Caffe2 merge. (#6147)
9ce21b0e90 : Delete NNPACK (#6151)
da6c3c90d9 : Relax constraints on return statements in the script (#6070)
32ba2ca203 : add documentation for diagflat and diagonal (#6161)
7e1046ce83 : Fix SparseMM compiler warning (#6156)
de42542351 : Make precision matrix computation in mvn stable (#6128)
0d19b81a65 : Give ATen errors backtraces (#6112)
cbe92abd7c : Disable failing test_lengths_max_gpu
e0633ef1f1 : Fix Windows build of nomnigraph and remove header.
acea18a54a : Fix net_test ParseFromString usage.
3d27095eec : [easy] fix comments
365652229d : Back out "Revert D7372460: [DT] [28/n] Lift epoch_limiter"
b9d2ba1dbf : Revert D7394363: [GanH]: Log D Trick for Cross Entropy with Sigmoid
363a227d19 : extend bucketize op to support duplicated boundaries
551d5fbf9a : CUDA version of LengthsMax operator
0df662c67f : [Caffe2] [Int8] More exhaustive unit tests for int8 ops (+ bug fix in Int8Add in-place case)
2b0e39f569 : [GanH]: Log D Trick for Cross Entropy with Sigmoid
f8eb8a66e2 : Revert D7372460: [DT] [28/n] Lift epoch_limiter
58ae29b702 : Fix schema check for arg_ops
ee64200c64 : [nomnigraph] Expose transformations to python
028a598cb9 : Expose thread pool to operators
77976d34f4 : Respect num_workers parameter in async net executor
03c5198331 : [C2 Int8][C2 Core]fetch int8 blob
8f3ba30266 : Fix a typo
c9dbfca275 : bugfix im2col op
bb04053e22 : Fixing TTSN unit tests
91162a74ed : [easy] Improving error message
4cb79ee8e1 : [codemod][caffe2] comment out unused parameters
e13c6fee66 : [PerfModel] Added analytical counters for FCTransposed, BatchMatMul, BatchOneHot
0ac4d19a29 : Linter changes.
85c9b89edf : Back out "[PerfModel] Added analytical counters for FCTransposed, BatchMatMul, BatchOneHot"
02786a3819 : Linter changes.
c2703aa141 : Renaming .jenkins testing folder to caffe2_test (#6148)
4563e190c4 : Use THC cached CUDA device property when get_device_name and get_device_capability (#6027)
1449c9f754 : Update autograd docs (#5907)
5fe3c406f2 : Experimental support for different ONNX export types (#6016)
d2c0f8bb57 : avoid generating torch.*_backward_(input|weight|bias) (#6114)
3d3b62e2d6 : Add REL_WITH_DEB_INFO build mode (#6122)
eb8a43a272 : Fix setup_caffe2.py lint error. (#6143)
93efe22d72 : PyTorch does not use top level cmake.
a2a28c0ef1 : tox.ini update.
37044d7515 : Add 'dirty diff' tests for PyTorch and Caffe2.
1e9a16c3d1 : Fix typo in NLLLoss docs (#6134)
48ad4546d2 : Move LayerNorm to ATen; remove tracking_running_stats functionality (#5983)
bc1b4c8912 : ByteTensor sum test (#6042)
eca84e2532 : Rename setup.py to setup_caffe2.py (#2483)
3aca8f3b40 : adding const fxn modifier to Operator::type() (#2484)
60a16e5663 : Set dataloader.batch_size = None when batch_sampler is given (#6108)
4da3fa5095 : strip some python dependencies (#2486)
47a1fd208f : Quick and dirty raw value substitution from zip file (#2454)
f8270c0225 : Enable MKLDNN convolution forward and backward (#6062)
e4c0bb1809 : Speed up sum over a dimension (#6026)
3dffac91bc : Fixed some tests by using the correct optimizer (#6116)
2ed2624c28 : Move README.md to caffe2/ in prep for merge. (#2479)
d42fcdbc96 : Add source location information to error messages (#6059)
7ffcb20295 : small math cleanups in the docs (#6057)
29c389078b : RNN `num_layers` and `dropout` docs and checks (#6079)
53bca3302d : Add CPU perf test for torch.* and torch.Tensor.* (#6054)
df8991b1b7 : [auto] Update onnx to 1d7dee4 - Fix Average pool test cases converted from PyTorch (#677) https://github.com/onnx/onnx/commit/1d7dee4e2154f01a2cfe5f2026590cc97e063424
bb114bc05d : Update FFT comments from #5856 (#6089)
4f05cb710e : Add underscore to nn.init.* and deprecate the original ones (#6093)
21aba57744 : Fix a bug in ONNX symbolic of average 3d pooling op (#6101)
f5d0d947c1 : Exp, log, sin, cos vectorized (#6078)
368f96acde : Remove tutorials from main repository.
16b0adb274 : Remove top-level cmake directory. (#6085)
b21e135ab8 : Add class-specific error when key mismatch in load_state_dict (#6086)
bb3bfa09f3 : Avoid some string copies when creating operators (#2475)
df039e2998 : Unify handling of type_dispatched_args in gen_python_functions. (#6088)
a90aa5d818 : Fix small typo in setup.py (#6091)
ba0f18a9d7 : Delete defunct .travis files, and move release-notes.md to caffe2 dir. (#2472)
ecd5de0f36 : [fft][2 of 3] Forward for fft methods (#5856)
6ae0576e1c : Remove dtypes from legacy tensor.new(...) (#6081)
371e14b807 : NLLLoss: error message for mismatched input/target batch sizes (#6072)
127cdc324d : Fetch master commit log before perf test (#6077)
2f602dce1d : Correct argument misspelling. (#6076)
1807bacd65 : Fix printing of unknown binop operator in torchscript (#6069)
a014a7cd37 : Link protobuf public in the standard case
e881efde79 : Use local FindCUDA for CMake < 3.7
3a84574c81 : Update CAFFE2_LINK_LOCAL_PROTOBUF functionality.
dbac044759 : Add protobuf wrapper functions to proto_utils.
b752f4cdda : Fix instance norm (#6023)
64e2c03bea : Enable TensorDataset to get any number of tensors (#6038)
bc7fb1d6d8 : Update cpuinfo to d0222b47948234cc01983243a2e0ede018f97f3a (#6043)
7f66164a89 : Delete defunct .travis.yml and appveyor.yml files (#2429)
31c0e2321a : Block set from param_group['params'] (#6031)
063946d2b3 : Added parameter range checks for all optimizers (#6000)
ae4362bc6a : Fix memory leak when using multiple workers on Windows (#5585)
8964aab260 : fix docs error in torch.nn.functional.nll_loss (#6060)
e114e84d91 : [auto] Update onnx to 36d7fff - Fix Attribute default value pybind11 binding (#671) https://github.com/onnx/onnx/commit/36d7fffaf3f9a52b12c5dc8733045bcccfc27c34
0c7c34253b : [auto] Update onnx to 0536866 - git ignore .pytest_cache (#674) https://github.com/onnx/onnx/commit/053686607dbebb17edf0251686339683817fde8a
0f198fa723 : Add additional script module functionality (#6033)
d3f92eebee : Remove redundant code (#2460)
86e285d0e0 : [auto] Update onnx to afc84ac - Update README.md (#672) https://github.com/onnx/onnx/commit/afc84aca45cee24d8a1970b3ba9c37c461372458
eb18a2f26c : Reorganize third-party libraries into top-level third_party directory (#6025)
02d5ae6c9b : Removing verbose logging from windows (#2455)
344fa57680 : Adjust the test since the op only has a CPU implementation
8b434d1141 : Quick fix on the observer test
6412adcef3 : Move the stump op to oss
0ac8495165 : Fix the CMake issues caused by internal changes
af3dcdf6ae : [D2]: Improve loss weight by allowing omitted weights
d6c30ee6af : [GanH]: Unifying two discriminators
3300e21d52 : Add SparseLengthsPositionalWeightedSum operator that fuses SparseLengthsWeightedSum, LengthsRangeFill, and Gather
e6b04ba121 : fix lengths sum cuda op for empty batch
6ed9a0c3f2 : fix cuda elementwise ops for empty batch
c6587597d8 : Ignore backward step when there is no loss function;
c909abd85f : [GanH] Label Smooth: Add Layer and Integrate to SparseNN
107cb670b1 : add typecast and assertion for histogram computing
26fbfa959e : Integrate fbgemm fp16 with Caffe2
078b6d5ad1 : [layer model] remove duplicated init ops
d5e38a8aee : [PerfModel] Add Profile observer
677c8d6769 : [PerfModel] Added analytical counters for FCTransposed, BatchMatMul, BatchOneHot
d2453afb1e : Add SumElementsInt operator
a0a136117c : Faster positive modulo in IndexHashOp
16312e8123 : [fbtranslate/onnx] decoder step (pytorch -> caffe2) exporter for fbtranslate
60d6ecd90f : Back out "[PerfModel] Added analytical counters for FCTransposed, BatchMatMul, BatchOneHot"
0a4f146228 : Codemod imports from libfb to use full path /caffe2
a92a6233b5 : Enable support for placeholder ops in InjectCrossDeviceCopies
84605438f2 : [PerfModel] Added analytical counters for FCTransposed, BatchMatMul, BatchOneHot
8baa563daf : Change observer copy() method to take id parameter
e977825c01 : Merge the conflicts
bde2f6b298 : ATen Unary Ops (#6030)
9f3a46c583 : Bumping aten to latest commit (#2453)
3c577fccf3 : Move Caffe2 Dockerfiles to docker/caffe2 (#2430)
b7084e4028 : Move .jenkins to .jenkins/caffe2 (#2434)
da7193b69a : Update gloo to PyTorch's version (#2451)
0eab63d9dd : pybind11 submodule update to PyTorch's version. (#2450)
8fa38f8dce : Add gradient clipping (#2452)
c4e5001af8 : [auto] Update onnx to 9d2b530 - Revert "[Typing 1/3] Setup mypy type checker (#607)" (#667) https://github.com/onnx/onnx/commit/9d2b5301ace0c14932de0a20024669c36845cac1
ebc0194950 : Fix use-after-free bug in peephole pass (#6037)
8054dbd655 : Trivial typo (#6053)
b5fa9a82c8 : Update Dockerfile build instructions for new layout. (#6051)
5583c12888 : Fix bias size assert in Bilinear (#5992)
2ad57eeea9 : [auto] Update onnx to 086727e - [Typing 1/3] Setup mypy type checker (#607) https://github.com/onnx/onnx/commit/086727e5a0550b9d2a4285bf38644c16e44743c7
1d5780d42c : Remove Apache headers from source.
2017c9caef : Add script for removing Apache header.
34f2f48394 : Allow larger margin of error for GPU perf test runtime (#6044)
db53389761 : Add numpy.array-like type inference to torch.tensor. (#5997)
c89685a115 : Make error messages in net_dag more clear
5f90d41211 : [auto] Update onnx to 5716e20 - Convert all Node tests to Model tests (#651) https://github.com/onnx/onnx/commit/5716e2076b445b6c5bd017aaadc05d59ac43bdc3
a3d08de331 : Move .jenkins to .jenkins/pytorch (#6004)
49f2bb7e0b : Extra comment about backward vs. grad in engine. (#6005)
47f31cb1e6 : Update FAQ to make more sense after tensor/variable merge (#6017)
f393c90cda : Moving conda/ to caffe2/conda (#2428)
f93e820e7d : Revert "[C2][GPU]LengthsMax CUDA version (#2209)" (#2444)
6740126f5c : [C2][GPU]LengthsMax CUDA version (#2209)
9e2001683e : Move doc generation code into docs/caffe2. (#2435)
0e0918cb9a : dpm synchronize
d11fc90317 : Export atomic iter count (#2379)
b6e80a1ec4 : Caffe2-onnx exporter (#2248)
a589180021 : Update cpuinfo submodule (#6014)
ef4c09fb4a : mkl-include is not installable if your conda is too old. (#6022)
64e94f02b7 : Move Dockerfile to docker/pytorch (#6009)
b6b2edb96f : [auto] Update onnx to 6fe932a - Replace unittest.skip with custom exception (#659) https://github.com/onnx/onnx/commit/6fe932a3e17a4aff39a54bd27c539bc9bf0f4df4
fc030bf377 : Remove consumed_input (#5928)
1e417e23bc : Strip down onnx to only pb definitions in mobile build (#2426)
2a47fb3082 : [auto] Update onnx to ecac1c1 - Merge Rel 1.1.0 branch into master (#657) https://github.com/onnx/onnx/commit/ecac1c162485b3d6072eff73dbbc79fd5376c8ff
f2bc1dc099 : [auto] Update onnx to 5cb999d - Minor cleanups to shape inference (#653) https://github.com/onnx/onnx/commit/5cb999ddc13efbe9a29346a620834bae6b2c764e
7462eca363 : Initialize cpuinfo in the thread pool
5d628db0a2 : Deprecate ctx.saved_variables via python warning. (#5923)
4dc8c2a3cf : Add descriptive error message for test_cpp_extensions ModuleNotFoundError (#5978)
cfd94c481e : Add precision matrix to MultivariateNormal (#5998)
39829c1670 : Improve docs (#5999)
1ab248d09e : Fixes #5973: Stop printing verbose warnings for MSVC (#6001)
2df578a71a : add mkl dependencies to setup (#5991)
c6e903f804 : [auto] Update onnx to f4acf28 - Remove allowconsumed enforceconsumed from op schema. (#617) https://github.com/onnx/onnx/commit/f4acf281eff7afe81f29f014edaa87e4c04c01a3
b2da9fd220 : [distributions] Rename .params to .arg_constraints, fix logic (#5989)
03a6952ac9 : [distributions] Fix scalar bugs in torch.distributions.transforms etc. (#5931)
f895698183 : Implement MultivariateNormal.mean, .variance properties (#5988)
f6274a4ef7 : Fix "command not found" error in perf test (#5982)
f9882473b2 : add pip mkl-devel to the error message when mkl is found but mkl headers are not (#5984)
41c84ca735 : Support batch LowerCholeskyTransform (#5980)
5d77709485 : Linearly interpolating upsampling fix (#5927)
2f8d6582de : Store perf numbers in S3 (#5951)
332d5ffd11 : Modify setup docs for Windows (#5981)
08891b0a4e : Group Normalization (#5968)
ed0f629fe9 : [distributions] Implement Power transform (#5976)
15a981e75a : Disable TestBottleneck test_cuda on Windows (#5977)
f508e7378e : [auto] Update onnx to a8e4648 - Adjust link flags when built in Windows Debug mode (#647) https://github.com/onnx/onnx/commit/a8e4648a7de0992768d74e6c43d00a43ee983a76
a73f9af5ab : Add axis to top_k_op. (#2416)
9923701a0d : Fix crash when cat-ing empty cuda tensors (#5971)
641fb21bdd : Update no_unions flag for nanopb gen and update ONNX proto files (#5972)
7375ba5e60 : [auto] Update onnx to 7c009fe - Fix lint error in optimizer test (#656) https://github.com/onnx/onnx/commit/7c009fe8dff7f616180a612266ba0ec4259732cf
f3e16cc737 : Expose gradients w.r.t. input & weight for conv1d, conv2d, conv3d in Python (#5408)
831780390c : Fixed non-determinate preprocessing on DataLoader (#4640)
83de3a0b0e : add AVX2 implementation for sigmoid function (#5010)
feb2785c5c : Implement torch.util.bottleneck (#5216)
3cc00e8b2f : Remove pragma once from cpp file (#5965)
810edb615d : [easy] Minor improvement of the code quality in caffe2/onnx (#2396)
7a5e2af6d5 : Follow new version number in setup.py (#2266)
e5f4b9dc0e : [auto] Update onnx to 063d12f - Fix optimizer split pass for models with constant output (#652) https://github.com/onnx/onnx/commit/063d12f6a9241e1c7e116b3cab136aa305803e53
8cf521b522 : fix mvn docs (#5967)
4dc55a4240 : Fix incorrect rendering of Tensor.index_*_ doc examples. (#5969)
8fbad1b28a : [auto] Update onnx to a4dcc47 - Minor code quality improvements in defs/ (#613) https://github.com/onnx/onnx/commit/a4dcc47791eb127652f5aaddd51d8896d446a067
425361af6a : Bump onnx opset version (#2402)
34e49ceb83 : [auto] Update onnx to c88ab71 - Versionize model zoo with opset version (#650) https://github.com/onnx/onnx/commit/c88ab71e98ab0cfb3aaf655b082f9d5665fc4256
8c92be5320 : [auto] Update onnx to 8c90dc1 - Add maxpool test cases (#573) https://github.com/onnx/onnx/commit/8c90dc1dd9cb9a8d9c5baf3540bb4b56c4cacd2e
213fa61706 : Implement range for loop in script (#5827)
03495137d0 : Add windows doc (#5859)
8e22ef0cb2 : Support legacy empty tensor behavior in cat (#5889)
c4ee2b7067 : Moved torch headers copy to build_deps (#5772)
0045895837 : Update speed_benchmark binary
2030ac7545 : Recommend citation (implements #4126) (#5955)
fe6c5ad435 : [auto] Update onnx to 1e613b5 - Add DepthToSpace test cases (#619) https://github.com/onnx/onnx/commit/1e613b5d4ebbaf930f29201e635a1c9b47180278
5c87e55a4d : [auto] Update onnx to 34d9ad2 - struct InferenceContext needs a virtual destructor (#648) https://github.com/onnx/onnx/commit/34d9ad20de4ea87f84f17b1fdf0b5626a62101f4
21918b94e4 : Add InheritOnnxSchema property to c2 op schema (#2366)
bbb7c722df : Remove legacy onnx optimizer tests (#2394)
b4d33cefc1 : Fix compiling issue with CAFFE2_NO_SANITIZE (#2386)
1288c4fd79 : refactor epoch_limiter (#2389)
f3b7b2f293 : Remove ONNX consumed_inputs (#2278)
81a29967c5 : [auto] Update onnx to 5f69c37 - Remove the only use of EnforceConsumed (#640) https://github.com/onnx/onnx/commit/5f69c37628002efa8c03d70d652db03c9d5ffca7
e1948d7377 : [auto] Update onnx to 85133e9 - Introduce shape inference (#564) https://github.com/onnx/onnx/commit/85133e9849f1875329f7f19423b15cda82b97fd5
1e1be56591 : [auto] Update onnx to 0f49cb6 - Set 2GB protobuf parse limit (#646) https://github.com/onnx/onnx/commit/0f49cb696c53e98f1dcda64eee7ae80b110bc9d4
e3e0c34390 : Unify error checking for tensor.index_copy_ (#5642)
e35212ebd0 : Handle the ONNX opset for BatchNormalization (#2382)
566a25e1e4 : Add keyword argument to PipeReaderBuilder (#2381)
c803ed524e : fix windows build
d946267b80 : Specify outputs number in embedding_bag onnx export (#5935)
2ad972c9eb : A complete revamp of our test scripts. (#5904)
c9c978dff0 : Fix tensor.permute(dims) backward for negative dims (#5945)
977bae0a71 : move nomnigraph to OSS
504320d85b : Update README.md
45da53f478 : Remove Python onnx-caffe2 conversion code (#2362)
a58f2d242a : Test both Python and string JIT frontends (#5891)
befd9642bf : py3 - use loop instead of map for test_torch:test_cpu_parallel (#5940)
add04c56bf : Verify that 'catch' submodule has been checked out before attempting build. (#5941)
2a02ec6537 : Fix index out of range error when view a scalar as 1-dim tensor (#5934)
37a84dd40d : Move definitions of Kind out of NO_PYTHON block (#5914)
3053618624 : Add argmax and argmin ops (#2371)
e9f144b3e8 : parallel_for_2d fix and guarding avx/avx2 compilation (#5926)
418aad2c54 : Add support for subscripts in Python frontend (#5890)
48c70d2dbd : Fix ReduceMean performance by specializing Eigen implementation for common shapes (#2355)
c8d1ec02be : [jit] Have ScriptModule inherit from Module (#5769)
b2c56eb219 : Removed special handling for onnx sqrt (#2353)
1c0862c301 : Fix a typo (#2339)
2d03ae2f85 : Move ParseProtobufFromLargeString to proto_utils (#2354)
7cbbc0bc74 : Implementation of the logistic-normal distribution (#5547)
0ea8964fd6 : Revert "Export number of iterations of AtomicIterOp" (#2359)
3aa393f7e2 : Log NNPACK profile to std::cout instead of LOG(INFO)
4b54f04eab : [auto] Update onnx to caf9256 - Do not allow multiple spaces after comma (#638) https://github.com/onnx/onnx/commit/caf9256a9dc128b610ca07dc8bde04f7d53a91eb
d707dae013 : Add half test in test_nn for auto generated tests. (#5362)
44039ffcea : Use -DCMAKE_BUILD_TYPE=Release for local build by default
e4eee7c2cf : Implement MarginRankingLoss as native function and add reduce=True arg to it (#5346)
8346088094 : Export number of iterations of AtomicIterOp (#2338)
611a89c4b6 : Remove more protobuf APIs. (#2348)
b1684e9a3a : Skip DepthToSpace and MaxPool same mode onnx backend tests (#2343)
a3bd7b2875 : Optimize unique sorting by using std::vector+sort instead of std::set (#5913)
ece288392a : [auto] Update onnx to 1a067ba - fix all python lint errors and enforce it in CI (#635) https://github.com/onnx/onnx/commit/1a067bac0350b12d0e55ddfc42025fec3474972e
75a65ffe0f : Set proper optimization options (#2344)
ccd8c2a6bc : [auto] Update onnx to d4a378c - Add ONNX_USE_LITE_PROTO (#634) https://github.com/onnx/onnx/commit/d4a378c02ee418af55aed04ef4c250a7ed0016ee
537e0e0330 : better err msg for missing mkl headers (#5894)
08b1324ec2 : Fix integer overflow in remainder operator (#5906)
def37111eb : Update locally_connected_op to reduce transpose dimensions. (#2340)
6cae6d3841 : Update ONNXOpCoverage.md
06e86a6455 : Add submodules in the ATen subtree (#5911)
e43d0ac92a : [distributions] Support pickling of constraint objects (#5910)
1c80ee1c74 : Update ONNXOpCoverage.md
ac1b7b6366 : Update ONNXOpCoverage.md
42d3bcc189 : Only run WeightedMultiSample test on CPU and not GPU.
6aa087d902 : Revert "export num iterations of AtomicIter"
22d0828f00 : [easy] improve error messages
69706b2ab4 : Add C2 for weighted sampling
4bb73b8361 : [GanH] Weighting Layers: Adaptive/Constant/Homotopy
a5279dccd4 : [GanH]: homotopy JSD
fac306d3c9 : export num iterations of AtomicIter
f7f48989ba : GPU support for ChannelBackpropStatsOp
3940e7f0a7 : Support computing averaged norm in blob magnitude visualization
c43896732e : Added device inference functions for Concat and Split Ops.
e0e334793c : Revert D7219461: Mark full sync data parallel ops with rules
9edbafe0de : Mark full sync data parallel ops with rules
7bef225e72 : [Caffe2] Fix double map lookup in operator_schema.h
35b6b0747a : Fix stop_if()
0cde2f1cc7 : Output blob allocation in Caffe2.
40683cdf42 : Allow calculating average margin rank loss
d4996e50de : Minor (but important) documentation update for SplitOp
72f2cd8bcc : Making preproc_output_schema explicit
7aeda25cfb : Add type / shape inference for IndexHash op
6af3429f4f : Add 2D Row-wise Arg Max Operator
9be2de507b : Cleaning up ReaderBuilder interface
10e8d7100d : Fix caffe2_benchmark
ab3065de25 : Playground refactoring and DataPreproc reader for DAIPlayground at facebook
a4d0ef2621 : Fix stop blob of processing reader
f62d6f0578 : Limit the number of LOG(INFO) for unavailable engine to 64 (#2332)
9123fcc857 : Use std::cout instead of LOG(INFO) in TEST_Benchmark implementation
d211904be3 : [auto] Update onnx to cf76f2f - Add averagepool test case (#572) https://github.com/onnx/onnx/commit/cf76f2f3cf51b7b005bf1d260cebe19695e46ea0
84a73775d5 : adding fp16 tests in test_nn (#5020)
2f27c1b56b : Revert "Fix ImportError with requests in model_zoo (#5896)" (#5909)
efe1c2bd13 : hyphen as a valid part of model names (#2312)
21ce93e88f : Fix ImportError with requests in model_zoo (#5896)
a749bde0b1 : Make InputSize and OutputSize const member function (#2313)
cda2f02f89 : Skip the test average pool same mode tests (#2324)
2d8a674141 : [auto] Update onnx to 4560267 - Use static_cast to replace dynamic_cast to avoid needing RTTI (#629) https://github.com/onnx/onnx/commit/4560267df496db0c880407b58ad2c92f75ffc601
b0fe67aca8 : Expose more APIs for onnx cpp backend (#2317)
2a4b33bf87 : Add doc for torch/onnx/operators.py (#5895)
b43d6162fb : Remove USE_THREADS since it is needed explicitly. (#2322)
bbafae143b : Caffe2 transpose (#2320)
ebd4dadeb0 : [auto] Update onnx to 5865ed1 - Minor code quality improvements (#614) https://github.com/onnx/onnx/commit/5865ed15f4b6e57ec9cdb4f84ec8d8077cd7c489
aa4af1a5f9 : [tiny] make debug info optional, CAFFE2_DEBUG env variable driven
23631eee5a : [C2] Fix the check of current scope in optimizer (#2316)
39a6859685 : Fix softmax symbolic (#5893)
fb77b423f4 : refactor histogram as net modifier (#2314)
1936753708 : Added an implementation of a multivariate normal distribution (#4950)
7e13138eb6 : Revert "Enable resetting of batchnorm running stats and cumulative ("simple") moving average" (#5892)
e426a5dadd : Add an option for Caffe2 to link with local protobuf. (#2306)
56505007a2 : [auto] Update onnx to c39280b - Add the wheel setup test in Windows build and support py35 in CI test (#620) https://github.com/onnx/onnx/commit/c39280b566bf0a81b75b3a253c53533f008d08e1
00603b5e0a : Add CollectAndDistributeFpnRpnProposalsOp for FPN support (#2254)
6f80023c29 : Port ATen and JIT C++ tests to Catch2 (#5788)
cf2e176049 : Fix error message for cat-ing zero-dim tensors (#5819)
ba64724aee : Softmax symbolic should account for negative dim (#5846)
22ef8e5654 : [fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)
d11b7fbd1c : Don't modify requires_grad when running DataParallel in no_grad mode (#5880)
24fca0efb2 : fix some methods not showing up in doc (#5882)
84400d5531 : ReduceOps cleanup and set_num_threads (#5723)
f446b82e70 : introduce shape_as_tensor and reshape_from_variable_shape (#5824)
3c213bd9da : Add fallback for CuDNN pooling (#2291)
3f667176cc : Fixing the conda-gcc-cuda builds (#2305)
99b1f6cfad : Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)
5014adfe2f : Fix CUDA 8 build on Windows (#5869)
334fc98fb0 : Handle the legacy padding in global pooling case (#2292)
58af449ca1 : Bump onnx opset version to lastest (#5849)
0eaf883d6a : Delete stubs from one more place. (#5866)
e431c98205 : Caffe2: Add support for several auto-created observers and move net summary to (#2304)
b5def81de8 : Delete stubs from LD_LIBRARY_PATH when we actually run code. (#5861)
dad57a414b : put caffe2_protos to a standalone target (#2302)
77042266ee : Multi-gpu test. (#5854)
c18dba9fe7 : Adding gcc4 conda builds (#2283)
2f64e1cdf6 : Add second iteration in test_DistributedDataParallel (#5830)
0ca046c68d : Fix bug (#5836)
1dcad08537 : Support N-D tensors in Bilinear (#5764)
04edb8948a : Fix kldiv backward on CUDA (#5814)
e876b5d9d0 : implement TripletMarginLoss as a native function (#5680)
32462e0ac4 : Cleaner solution to the undefined references in RPC (#5817)
40ea24cc54 : Skip test_backwards_fork test as flaky. (#5839)
d776c52ff7 : Fix nvprof parsing (#5840)
ce0204402b : Add ATen symbolic for unique op (#5845)
f390a252f4 : fused GLU backward (#5782)
7cbe63da86 : improve handling of precision issue in torch.multinomial (solves #4858) (#5774)
00cc962670 : typo (#5847)
abf97e954e : Avoid in-place ops in BoltzmannTransform (#5842)
d441396e47 : Fix crash in new tensor with numpy array in CUDA (#5850)
e6ac93b817 : Add support for number and list literals in Python frontend (#5843)
0167f76d2a : [auto] Update onnx to 012145f - Relax the precision on the output (#622) https://github.com/onnx/onnx/commit/012145fda9f75dc1aff6a4ecf77315da134588a2
def76eee1c : [auto] Update onnx to e2e8003 - add output shape as input for reshape (#608) https://github.com/onnx/onnx/commit/e2e8003ec36800038959569cc6f3057ffee69fc9
c155842cc1 : Update onnx frontend to emit new onnx Reshape (with shape as input) (#2287)
875925b030 : Add operator[](int64_t) overload (#5838)
c474136ee1 : [REDO] Add torch.sparse_coo_tensor factory. (#5781)
acc409396b : Namespaced symbols (#5820)
940a0ab67b : Add logdet and slogdet (#5393)
f5aa8d55ad : fix detach in place error in DDP (#5829)
a5a99bd4a1 : Make static state function-local (#5822)
0b5b28f6a7 : add some onnx exported supports (#5734)
2322ab11b9 : Allow larger margin for perf test runtime variation (#5799)
7f864bbe52 : Fixed distribution constraints and added some test cases for distributions parameter check (#5358)
e8f14f5d37 : Fix ONNX backend for MatMul (#2273)
eeb90d9c95 : Add a Number node to the JIT AST and unify script syntax with Python (#5716)
c40b99f9ae : speed up CPU EmbeddingBag (indexSelectAdd op) (#5433)
ecffe53ef0 : Fix convolution type mismatch error message (#5815)
404b8e9442 : Revert "introduce size_as_tensor and resize_from_tensor" (#5818)
eee4f1ee42 : Add symbolic functions for cumsum and embedding_bag (#5786)
4fa08535ed : introduce size_as_tensor and resize_from_tensor (#5792)
b239b123e4 : Clean up TraceInput (#5743)
3084a577eb : Allow indexing by scalars and zero-dim tensors (#5749)
bc0bd063ca : Fix license header for GenerateProposalsOp. (#2202)
5c51bb6c0f : bugfix in onnx export of batch_first = True (#5753)
5fa3aac610 : ATen ReduceOps (#5776)
42ba8c1a73 : Add section on unit testing to CONTRIBUTING (#5813)
abd6f82709 : Fix debug build failure on Windows (#5771)
6f5e869259 : Add promoteTypes to ATen and torch._promote_types to python. (#5795)
82777815f8 : Fix bmm memory leak (#5744)
b499332aaf : fixed a message typo in ATen CMakeLists.txt (#5802)
7a5fc2fa22 : Fix undefined '__func__' for CUDA 8 on Windows (#5803)
a24d4b7454 : Fix compilation with CUDA < 8.0 (#5621)
f5f6258288 : Enable additional tensor types in Gloo backend (#5483)
c66111e79b : Desugar torch.* and F.* functions in JIT script (#5784)
694bee1f7e : Fix the rule for Assign in JIT's Python frontend (#5793)
4613eef69e : Simplify run_test.py and dont use shell=True (#5767)
eea680a354 : [auto] Update onnx to 31ca96c - Microbenchmark for encoding+decoding ModelProto and GraphProto with a single operator (#609) https://github.com/onnx/onnx/commit/31ca96ca3331d05884a71c38975d34870eb9c81d
514f87a16c : Define RPC types out of source (#5794)
1709484a40 : Restore tensor.type, tensor.type_as docs (#5746)
bedba9c156 : Fix unused parameter warning
af5bfa00a5 : Fix unused parameter warning in THTensorMath.c
74f0b270ea : Fixing conda (#2123)
e40425fd9b : Revert "Add torch.sparse_coo_tensor factory. (#5745)" (#5780)
8a9925f03f : Fix useless opset_import in onnx (#2243)
5022b32b62 : Fix Windows build (#2261)
361baa5a48 : Add torch.sparse_coo_tensor factory. (#5745)
e9fffb5579 : use std:: math functions (#5773)
3f3b686056 : Refactor run_test.py to pass all options, not just verbose. (#5760)
cadeb0cb17 : Revert "ATen ReduceOps (#5481)" (#5765)
11056528d1 : Fixes Variable::data() on UndefinedTensor (#5756)
28eda01809 : Reduce Sum and Reduce Mean (#2189)
dd921f65ba : bump version to 0.8.2 (#2251)
0476a2346b : Add symbolic for relu to support exporting to ONNX (#5759)
bab0f8484b : Put torch header install back into the install command (#5755)
16fa12214d : raise RuntimeError on test failure (#5754)
11444a7273 : Save self.numel() for backward (#5747)
edd138ba00 : [C2] Support optional lengths input to ReduceFront/Back operators (#2250)
effc568cee : Add ReLU to ATen (#5626)
835b2ffd72 : Warning police. (#5720)
76a283db40 : [ready] General Documentation Improvements - 2 (#5685)
37059ba0ec : Added torch.distributed.launch module for easier multi-proc/node distributed job launching (#5348)
f377159cc8 : make dimension checker of `scatter_add_` consistent with `scatter_` (#5659)
025e43c263 : Attempt to fix #5718. (#5726)
f69fb3829a : Add documentation for LPPool1D (#5730)
542fbcc127 : Add optimization to norm for common norms (#5722)
55af142b44 : Traceable dispatch for cast methods (#5629)
0919b5247d : Fix at::optional return type in fusibleExpandTo (#5717)
7e6693991d : Onnx caffe2 backend (#2039)
c7611f7608 : improve occupancy for cuda rngs (#5710)
a2641500bf : Implement torch.reshape and Tensor.reshape (#5575)
f6c708f869 : Ensure torch.tensor and Tensor.new_tensor copy numpy data. (#5713)
1a23c9901d : Check that new cpuinfo and tbb submodules exist (#5714)
b465bb9a8e : fix post eos penalty (#2235)
4007dd76e2 : Add missing ONNX symbolics and fix fusible expand logic (#5654)
602a09dde7 : Update caffe2 from facebook 4f527ef46abf (#2234)
310c3735b9 : ATen ReduceOps (#5481)
4a96f5616c : make CUDA_VERSION available in cudnn/Descriptors.h (#5709)
dc4984ef10 : Delete ""_sym literal form. (#5707)
42bf2f9289 : Explain floating point issue in torch.arange doc (#5708)
4b2d278968 : check-in pytorch.version file to master
41285edbb6 : [jit] add a compiled script module (#5630)
dede63689f : Moved headers files copy for C++ extensions to build_ext in setup.py (#5706)
f5a40a8b53 : Fix error message (#5701)
1df99e541c : Fixes for build errors on Windows with GPU (#2222)
000edb791e : Make use of new BUILD_ENVIRONMENT variable when possible. (#5699)
6404904d8a : Fix run_test.py (#5693)
e9d1a5f6d5 : support non-Variable arguments to functions in symbolic overrides (#5645)
261dd6ea83 : fix named_modules doc, clarify eval doc (#5691)
15cc24a970 : Minor improvement in AutoGPU usage in CUDA bindings (#5689)
248c93372d : Check value type for register_buffer (#5657)
ec36e6f40a : [auto] Update onnx to 79dc46f - Add ONNX_NAMESPACE around rnn/old.cc (#605) https://github.com/onnx/onnx/commit/79dc46fa4d484981f75695a73f3d38d5717fc435
54aa28da73 : Add shebangs to perf_test shell scripts (#5684)
dca41bb696 : Minor fix to gen.py to make CPU-only generation cleaner (#5683)
4e190c2fed : Fix floor latex rendering (#5682)
7368c09280 : Add efficient isVariable test to ATen (Part 2) (#5675)
e6090403cb : small fixes to CosineEmbeddingLoss tests (#5681)
439aae7e94 : Add tensor.repeat docs. Remove legacy tensor repeat function. (#5666)
b5ee5e585b : Only allow dense floating-point types as the default tensor type. (#5674)
03f2ad9029 : Add check for python build deps to setup.py (#5618)
74043b69c2 : Alias torch.diagonal, torch.diagflat (#5622)
7b61b458b1 : Make torch.arange consistent with numpy.arange (#5600)
eb34186104 : [auto] Update onnx to 71fa008 - Provide option to enforce /MD or /MT when building with MSVC (#602) https://github.com/onnx/onnx/commit/71fa008efe82dcb78203778735a7444ca1df5dec
09b6ad5785 : Use cpuinfo instead of Android's libcpufeatures in Android build
59d1d17775 : Print source location when ONNX export fails for a node (#5652)
582d045092 : Fix rrelu docs (#5678)
50770f0bc4 : Fix Hardshrink equation in docs (#5679)
7391dae709 : Fix Variable conversion on the way to/from Python (#5581)
0ee53bf7fe : Fix one more naming issue in resnet50_trainer.py for PR 2205
ed05ca9fec : Clean up naming of FP16-related code, add comments
b07980334c : Update jenkins build script using the same flag as used in benchmarking (#1977)
b543041e21 : Corrected a typo in LSTM documentation. Fixes #5661 (#5662)
53876c4606 : Rewrite run_test.sh in Python (#5615)
d4c0538be2 : Add device to Tensor.new_tensor. (#5669)
ae0c04c773 : Add torch.empty, torch.full and new_ size Tensor factory methods. (#5668)
60299e03cf : Report all errors during ONNX backend translation rather than failing fast (#2210)
57e5559788 : [auto] Update onnx to b184dd3 - Fix ONNX library build for Windows https://github.com/onnx/onnx/commit/b184dd3cb80f9f0ab47719ed261d46cdec0f697b
52460a0b30 : Add outputs_info as parameter in run_node (#2161)
e4c303f373 : Defer shape analysis failures until runtime (#5574)
27cb06ae22 : Adding rewrite_net for ACL backend (#2186)
063f066394 : [auto] Update onnx to 0174eb5 - fix `get_attribute_value` can not get `g` field bug (#599) https://github.com/onnx/onnx/commit/0174eb51c8bf88b92e55adacc061a752c9537dee
a3442f62bc : Support native namespace functions with type dispatch. (#5576)
037011e757 : Avoid duplicated log when explicitly specified engine is not available (#2214)
b225893e2a : update comments in segment_reduction_op (#2207)
64b33672af : add GatherFused8BitRowwise operator (#2167)
632f8b5be7 : fix comment on the location of scale and bias (offset) in each fused rowwise 8bit (#2166)
a33aeed1dc : Add set_grad_enabled as context manager and function (#5555)
70fdeb8e07 : [auto] Update onnx to 7e205b6 - Add global avg and max pool test cases (#574) https://github.com/onnx/onnx/commit/7e205b66190f4376c64741ba7705dc23e9fbf225
f9f5946908 : Fix variable shadow warning
f88bba1c73 : Fix docker builds (#2199)
ff804ba168 : [auto] Update onnx to 5516ebb - to_string for Android (#597) https://github.com/onnx/onnx/commit/5516ebb49f65598b2425840e15c5562b676fce85
71d73211f4 : [ready] torch.* doc update for Variable/Tensor merge, and other improvements (#5443)
359d54ea97 : Fix typo in CMakeLists build fix for Ninja (#2213)
8ab101ccee : Implement pow() for integer types (#5526)
57c7d132c9 : Fix nn.Module.apply doc formatting (#5623)
f84fa526d3 : Add additional deprecated overloads with out kwarg (#5643)
8f068bd780 : fix CUDA btrifact error message using wrong info type (#5644)
8ba8713f5d : torch.load() / torch.save() support arbitrary file-like object (#5466)
7f44c0d011 : rename onnx/utils/__init__.py -> onnx/utils.py (#5639)
b9cc035654 : import torch.jit in torch/__init__.py (#5638)
06df037d9a : do away with ExportProxy hack in onnx export (#5614)
4aecbe0877 : Give ATen/gen.py output directory option (#5653)
92596197fc : add end to end test for DistributedDataParallel (#5182)
a268ed6588 : fix momentum doc in IN and LN (#5649)
a3f463517e : add gpu guard for broadcast_coalesce (#5655)
9acac2a513 : Pass in task groups to PipedReaderBuilder (#2182)
4c4a42b3f9 : implement CosineEmbeddingLoss as a native function and add reduce arg (#5646)
807a4914c3 : [auto] Update onnx to 728cc98 - Add outputs_info into run_node backend interface (#588) https://github.com/onnx/onnx/commit/728cc987af39412eec099106a2a4cb397ea1024f
f4b1e8b334 : [Dper2] Add NetModifier abstraction and support for plotting the norm of blobs (#2201)
d90cd73aea : [auto] Update onnx to b052fef - Fix node test name of Slice (#596) https://github.com/onnx/onnx/commit/b052feffabbb935d13c62c622790d05fc5f20941
396637cdd6 : Python-free build of autograd + jit (#5356)
9de922991c : Revert "implement CosineEmbeddingLoss as a native function and add reduce arg" (#5640)
6c6d301e4e : [auto] Update onnx to ec5f1d3 - Add option to use customized protoc (#594) https://github.com/onnx/onnx/commit/ec5f1d38136d07cfecfd37f6d3729a9ff6543e95
32b3841553 : [ready] General documentation improvements (#5450)
c16478fe3f : implement CosineEmbeddingLoss as a native function and add reduce arg (#5447)
28b1c94f0f : allow application of @symbolic decorators without circular imports (#5595)
08f9cad140 : Fix typo (#5635)
cebf44e960 : Element-wise tests now use or are seeded with hypothesis (#2181)
d812a196e7 : .
c55dc983d9 : Fix ninja build in setuptools
bdc63be9fd : log INFO for not available engine only when engine was explicitly specified (#2187)
363de58a8b : implement double backwards for MaxPool3d (#5328)
04461fa289 : Prefix DataLoaderIter with underscore to discourage subclassing (#5619)
8720d72d7c : Fixing inconsistent docs (missing parameters docs). (#5620)
5450ef50ed : [auto] Update onnx to 2edc1e7 - Handle situations where protobuf is built on the fly (#592) https://github.com/onnx/onnx/commit/2edc1e727bde0ebedd642c9153615e65ce0ffa16
280d51e324 : Use Ninja build system in setup.py when available
bbc2c642c9 : Use Ninja build system when available
60aa8c793d : Update caffe2 from facebook (#2178)
957ddb54d6 : Fail fast in pytest (#2116)
c2721ab503 : Add per-element unique op for CPU (#5503)
0a18608b43 : hacks to test exception handling and python operator backtraces
ab8498e5c8 : Acl copy ops (#2158)
0c6e843028 : [caffe2] Add scopes into ONNX While op (#2149)
9ebfece900 : Update perf test baseline with every master commit (#5605)
eededd3f97 : Move main reshape logic for easier reuse (#2122)
88883825e5 : [auto] Update onnx to 910db3b - Minimally fix CMakeLists on Windows (#589) https://github.com/onnx/onnx/commit/910db3bcd94e737fc0b497a1ac4b9f7028595a82
fcaa3bf609 : disable ibverbs build with env variable (#5513)
461e3e3ae0 : Allow indexing tensors with both CPU and CUDA tensors (#5583)
a90b695590 : Disallow num_workers > 0 for DataLoader on Windows (#5591)
3bc90d471d : remove legacy workaround for hinge embedding loss reference fn (#5596)
0f50ca0b48 : Add reduce to functional smooth_l1 documentation (#5610)
792daeb422 : Enable documentation for C++ extensions on the website (#5597)
63b4694bb8 : release() does not need to be virtual (#5594)
5d74462891 : [auto] Update onnx to 8bcecad - fix cast op type constraints. (#587) https://github.com/onnx/onnx/commit/8bcecad91c3045e6e30db464b03d7796579a92fa
4c0e0ebb4e : [auto] Update onnx to 4e9d21b - travis tweaks to make sure the correct versions of python are installed (#584) https://github.com/onnx/onnx/commit/4e9d21b68e89cbb24f4a5465c5e1910ef5e58269
c9cc514df4 : Bump minimum CMake version to 3.2
dd1564b061 : Caffe2 module update: move observers as well as binaries. (#2145)
cdd0febd86 : Fix for a confusion around grammar of Maybe (#5593)
82bdc51dd1 : Use operator.index to convert indices to Python int (#5582)
5597aba868 : Add return statement to the JIT AST (#5578)
a6650f5664 : Recompute captures after the parameter is updated (#5488)
7d141d4243 : Changes done internally at Facebook (#2154)
9395a26fe5 : disable NetTest.ChainingForDifferentDevices which is broken
e3e9f91889 : Fixed a typo in BoxWithNMSLimit doc.
bec8923e02 : [C2] Adding Clip Tensor by Scaling op
9f2a35ee8b : [C2] Enable LARS on GPU [PR Patch #2115]
56ac3ef180 : Correcting size types in (Un)PackSegmentsOp
4a4407337d : Supported inplace arguments for norm_planar_yuv.
ee33a24af2 : avoid vector copy/destruction in *_dim_ helper functions
6b98315a28 : [GanH] Model Test
16ba087b64 : [oncall]fix unittest dper/layer_models/tests:utils_test
496c999f7d : [core] NUMA-aware pinned allocator
7d8188a4c2 : fix invalid-null-argument UBSAN error math_cpu.cc
b68e2786e0 : fix invalid-null-argument UBSAN error in math_cpu.cc
14c47fb211 : fix invalid-null-argument UBSAN error in math_cpu.cc
80d0f5de93 : [mobile][mpscnn] iOS11.3 interface update
08bb6ae8bb : Fix OSS build
9e71de398b : [core] Graph-level NUMA awareness in Caffe2
8b0b090ff1 : fix Caffe2TensorToNumpyArray for py3
968ebb3b82 : [GanH]fuse jsd with lr loss/xent
fe3c22cd24 : [GanH/Easy]Fix blob dim
08dbd96642 : Add TensorInferenceFunction for PowOp
f2ec5b7b0e : [DPER] Fix bug in uint8 quantization shortcut.
1f0a833d8e : JSD fwd/bwd op
2d3aebd5fb : fix bug for conv3d Op cpu
4aded2f7c1 : Add Numa support (#2152)
115579697e : fix typo in previous cudnn fix
2b7f750992 : Fix cudnn < 6
b4b2f0d2cc : Work on fp16 conv op
72f259c84b : Add C++ preprocessor define CAFFE2_USE_EXCEPTION_PTR to guard use of std::exception_ptr
5c769bd243 : Update Python information shown in CMake summary (#2132)
7588893ce2 : Some additional clean-ups (#5505)
a91b2ad85f : Fix flake8 (#5573)
e7897a3dc7 : [auto] Update onnx to a711252 - Recent CI changes have issues: revert them while fixing to unbreak CI (#583) https://github.com/onnx/onnx/commit/a71125280c9410663ee8d09d5b3b0c4a239349eb
a2c3ffa5c7 : Delete unused expand functions in TH/THC (#5533)
6aef608f10 : Fix Out of Memory failure in test TensorTest.Tensor64BitDimension (#2114)
976aaa55aa : Add at::optional from https://github.com/akrzemi1/Optional (#5530)
c7e69e9015 : Test documentation build in CI. (#5492)
1ff36bfd61 : Missed Step (#5558)
8376e63738 : fixed softmax support documentation (#5557)
c93076495d : add: padding_value to `torch.nn.utils.rnn.pad_sequence` (#5540)
9213109e58 : Modifications to improve readability of prof_dag
fb848311b9 : Add .watchmanconfig to .gitignore so Atom/Watchman won't complain
66547ca061 : Fix links in distribution docs (#5531)
abd8501020 : Export MAX_JOBS for build_libs on Windows (#5550)
6aeaa52476 : Fixes #5542, api changes for output path on Windows (#5549)
4ad58e6278 : Deterministically seed all ATen C++ tests. (#5545)
c713c667e0 : Use fast integer division algorithm to avoid division ops inside kernels. (#5054)
15eae9543e : Fixed dimensions in docs of conv and conv_transpose (#5543)
37dec493a5 : Scope MultiRNN blobs with name as well as layers (#2025)
18a76f54a6 : add concrete example for python_default_init in native functions doc; (#5538)
d013e16cf4 : [C2] Enable LARS on GPU (#2115)
5f8029f90b : Fix documentation for WeightedSumReducerDef
e026cb1854 : Fix build with ComputeLibrary on ARM64 (#2124)
0bce97b101 : [auto] Update onnx to 8bbeb2a - Improve CMakefile of ONNX (#563) https://github.com/onnx/onnx/commit/8bbeb2ae453a113f6557afe5048d428a69af8bbb
fdfe1d09a0 : Explicitly require listing additional libraries if a binary needs things beyond Caffe2_MAIN_LIBS (#2110)
11a736b682 : Sqrt op (#2101)
349238f5bf : Mean Op (#2072)
558e2a92df : Revert update on top_k_op (#2119)
fc7ee0c941 : Use NEON for Android build
01e261c25b : Update perf number for test_gpu_speed_word_language_model (#5529)
c70beed31c : Add axis to top_k_op.
60415cf0d2 : Big batch of fixes for JIT (#5517)
b5a3894c61 : [auto] Update onnx to 679d70e - temporarily disable python3 on osx for travis (#579) https://github.com/onnx/onnx/commit/679d70e30afa751ac2bef573a5e92b67b9e26717
f76fc6fa19 : Update locally_connected_op (#2113)
2d4212274e : Add typing dep to ATen standalone .travis.yml (#5527)
ec3c299baf : Turning off conv_op_test for now (#2104)
72d5d9016a : move -s to CMakeLists.txt
24dee1515c : add a rule back for non-Android platforms
fab2c07af9 : make -DUSE_OBSERVERS=ON work
9befaf14ea : fix -DBUILD_TEST=ON -DBUILD_BINARY=ON for Android
ca90d4c356 : Add -s for Android back
54b4cdeffa : Replace all uses of 'Tensor or Variable' with 'Tensor' (#5508)
806239d6bd : Fix a bug in gen_jit_dispatch.py (#5518)
58cd133f7e : Avoid OOM when running ASAN by splitting nn tests. (#5523)
f064c5aa33 : Expunge all occurrences of torch._C._VariableFunctions (#5525)
8f627fc658 : CharTensor should be signed (#5512)
a6520d6b98 : Changes to ATenOp CMake to make it compatible with BUCK (#2111)
0877558e60 : Port cuDNN RNN dropout state initialization to ATen and make Python c… (#5383)
dda4bdd596 : Reduce OS X MAX_JOBS because we are still OOMing (#5493)
70ba50c3d4 : Remove some uses of torch.is_tensor in favor of isinstance (#5473)
5dedc648bb : Compile DataLoader.cpp separately (#5507)
df88373f88 : set default ams param in adam optimizer (#5501)
bbad9e7c8a : Add virtual destructor to SourceLocation (#5516)
b7ab3ff5e3 : Change caffe_add_linker_flag to caffe2_interface_library (#2109)
1af7df6e78 : fix rnn_cell_test in fbcode (#2107)
acd8dfdfb9 : Warning if engine is not available (#2106)
1981557751 : Add README and ONNXOpCoverage doc back (#2102)
0de5443469 : Reorganize interned strings into categories, autogen ATen strings. (#5471)
7cdc272224 : [auto] Update onnx to 27b4022 - osx travis support (#566) https://github.com/onnx/onnx/commit/27b40225ea98f6412ae2879ed67211d49564af2a
aa5145bf14 : Enable onnx backend test on pow, ceil and floor (#2103)
27265503ad : nn.* doc update after Variable/Tensor merge (#5459)
1ae884ff86 : Add pytorch-docker-build-test. (#5468)
c0304c83b1 : Copy some outputs in order to decouple storage (#2105)
a5e1b4efc9 : Fix warnings in jit (#5499)
56096c2311 : Building rocksdb as a module (#2094)
4a50ab0fdb : Fix naming issue in TensorCompare.cpp (#5498)
b38ed69441 : Delete unused files (#5500)
285a9e2452 : Add dtype to torch.Tensor constructors and accept them in set_default_tensor_type (#5444)
b69b885e82 : cuDNN 7.1 fix. (#5439)
9235277dba : Re-enable some CUDA tests on Windows (#5446)
8c6c09ad41 : Adding openmpi to all conda builds (#2089)
ef0ef70cf5 : Don't spuriously raise warning for Constant nodes, fixes #5101 (#5469)
5e188e4182 : Refactor and simplify ATen dispatch (#5475)
b1dec4a74f : Fix doc-push (#5494)
da894901ef : Deprecate variable factory, use torch.tensor instead (#5476)
7b33ef4cff : Documentation cleanup for activation functions (#5457)
72aa83d702 : Add pytest ccache into git ignore (#2095)
c96338ee2c : Fixing mkl builds not using mkl (#2093)
4afd62db09 : Add TracedModule to the JIT (#5409)
544aeaec62 : Refix the linkage condition (#2091)
2ad242bee9 : Update Dependencies.cmake (#1920)
fcde409166 : Fix the pybin11_state_gpu.so linking issue (#2087)
e03d74c40e : [auto] Update onnx to ee79865 - Clarify reshape behavior when '0' is passed in (#569) https://github.com/onnx/onnx/commit/ee7986538aed2cc6bdb07b1da26a6ad53adb2a1c
55c64e5243 : Add Python function calls to JIT script (#5445)
b10fcca5f0 : Install cuda headers in ATen build (#5474)
771791fe2f : install pytorch into default conda env (#5482)
c3e4d7ff87 : Cuda full (#2084)
39608b0180 : Add source information to IR nodes (#5449)
36abf023bd : Added 3d grid sampler (for volumetric transformer networks) (#5453)
7772d26cb0 : Fix test sparse (#5478)
749a17661c : Introduce padding op to mimic pytorch semantics in ONNX export (#2069)
377d896969 : better solution for the linking error related to lazy_init for MSVC (#5375)
5c381bbc57 : Patch cuda-convnet2 from internal Facebook changes.
ea10b7bc63 : [auto] Update onnx to 4f00542 - Create unique proto filename based on ONNX_NAMESPACE (#555) https://github.com/onnx/onnx/commit/4f00542fc1c80c489535e99a5ab7d3b6f40061b9
509aed6ca3 : More Variable/Tensor clean-ups (#5464)
e91560017d : Removing leveldb for ubuntu (#2081)
eb612b09e9 : Fix Caffe2 ==> ONNX converter to handle three models (#2058)
0f86f64398 : Add support for device python arguments with constructors. (#5384)
459dadf04d : Use 'Tensor' instead of 'Variable' in type error messages (#5465)
ebd32f7bcd : Check that parsed_args contains enough space for all parameters (#5467)
687de0bd67 : [auto] Update onnx to 8dc7369 - preserve value infos if they are needed (#561) https://github.com/onnx/onnx/commit/8dc7369bb9c3c1b63056211339987b0ad015faba
6ab33a820c : Support type conversion via type(dtype). (#5441)
94938be367 : Support dtypes in legacy new constructors. (#5343)
2de0bb3df4 : [minor] change test name
e09fa090e6 : dont test for SourceChangeWarning in incompatible environments (#5458)
3d070e78fe : Fix cmake dependency error in static library case. Peer coded with @bddppq (#2078)
847fad70a9 : Check if CXX compiler supports all the needed functions (#5401)
6f9dc115e8 : Mark test_fs_sharing as hanging in ASAN. (#5451)
35c3b91f8a : Remove no longer used flag (#2075)
178c4be295 : [wip] Cmake modernization (#2066)
6341a0fd79 : Fix cuda full (#2070)
e07083f00a : Cleanup CMake files and build scripts for Android (#2067)
5bbeb55f22 : add reduce=True arg to MultiMarginLoss (#5150)
392fc8885c : add faq on cuda memory management and dataloder (#5378)
1a7815e662 : Add Scalar to native_function.yaml doc (#5416)
0955e791d3 : Fix caffe_add_whole_archive_flag in cmake (#2062)
48a3349c29 : Delete dead Tensor code paths (#5417)
7276432bbd : [auto] Update onnx to eb55f2a - Update defs.cc to clarify Pool op semantics (#552) https://github.com/onnx/onnx/commit/eb55f2a63782fd14aae8d6a3a6118d5437714be5
fa24e47d1a : [auto] Update onnx to 296953d - spelling pass for docs (#542) https://github.com/onnx/onnx/commit/296953db87b79c0137c5d9c1a8f26dfaa2495afc
6b95ca4eda : DataParallel: GPU imbalance warning (#5376)
5e0a3e99bc : OSS Playground modulelized model components (#2059)
d5038309a1 : Remove WITH_SCALARS, as it's enabled by default now. (#5437)
76304300a8 : Transpose shape inference (#2057)
af78d51dea : [auto] Update onnx to 61836da - Check whether perm exists before using it (#559) https://github.com/onnx/onnx/commit/61836da46e23685c4c63e3b1ca9fc3fb882e9241
8327982904 : Set python random seed in workers (#5415)
9f2975e2cf : Remove onnx-caffe2 (#5425)
d5de0dca38 : fix crash in cudnn setup helper on machines without cudnn (#5427)
7f1b3d12e1 : Fix ASAN alloc-dealloc-mismatch in TestMultiprocessing (#5428)
38fb8c5cf7 : Remove onnx-caffe2 reference (#2063)
a12aae2a72 : [auto] Update onnx to b0ffb2d - Remove onnx-caffe2 reference (#558) https://github.com/onnx/onnx/commit/b0ffb2d302b3da405ceeeee29680e3e313101b1a
bc4c919a9e : update dependencies (#5423)
12a477b12e : Update README.md
ddf6b3daae : [auto] Update onnx to 176e357 - adding tests for cast operation (#543) https://github.com/onnx/onnx/commit/176e3575ea15db994a84702470da90ee90a0dbf6
679232657d : Update README.md
ec194f2468 : Fix typos in README
8c18220a59 : Fix layer_norm initialization and nn.Module docs (#5422)
611c771fc8 : Introduce torch.tensor (was torch.autograd.variable). (#5419)
05269b582b : [JIT] Support shape propagation with control-flow (#5391)
0250b57978 : Avoid extra cpu->cpu copy in dispatch_type. (#5418)
3600e9ef6c : Mark functions that shouldn't end up in torch. as method-only. (#5392)
c6d47f6386 : add @torch.jit.script, @torch.jit.compile, torch.jit.CompilationUnit(str) (#5367)
c3320887fe : [cmake] try removing caffe2_include_directories hack (#2050)
406c9f9c28 : Remove two uses of the old Tensor class (#5413)
ec547ce640 : RNN ONNX export: concat hidden/cell states on the right axis (#2055)
e68b815afe : Empty sparse tensor copy reverses dimI, dimV. (#5414)
c7a3b00bf3 : Back out "[caffe2] fix signed-integer-overflow UBSAN error"
028bc2f23f : [C2 OSS][GPU]exposing totalGlobalMem info to workspace python
c55d34ed81 : Add operation time metrics to blobs_queue.
a5b387fa27 : Fix Caffe2 OSS build
c55a642d83 : [c2] update SparseFeatureHash layer
e397367db0 : GatherRangesToDenseOp supporting sorting with keys
bdd25d80a8 : fix invalid-shift-base UB in conv_op_cache_cudnn.h
6922d7d89f : Add cudaconvnet for caffe2
c18f9b4dea : Back out "[codemod] - comment out unused parameters"
148f6b200a : fix signed-integer-overflow UBSAN error
7e9f8af018 : [codemod] - comment out unused parameters
fc9837899d : Embedding.load_pretrained method (#5350)
f4cfd9bbfc : Don't python bind 'tensor' or 'sparse_coo_tensor'. (#5390)
10fd272b7a : Update doc of batch size requirements for DP (#5108)
7cafdab69b : [C2] Implement Layer-wise Adaptive Rate Scaling (LARS) (#2034)
39001db843 : Update NNPACK and cpuinfo to check cpuinfo_initialize status (#2051)
1f9df59de9 : Move caffe_option to proper cmake_dependent_option (#2049)
9d94a529fa : Update NNPACK and its dependencies (#2047)
07646e405e : no_bias in resnet32x32 (#1817)
d2f71cbdeb : make CuDNN finders respect library major version (#5399)
80430501c9 : Remove the use of EXTERNAL_DEPENDENCIES (#2045)
40d79e4447 : Turn on ASAN in continuous integration. (#5271)
1ff537ca71 : Ignore FileNotFoundError when shutting down in data_queue.get (#5380)
c60d509fdf : Pin libnccl2 to version 2.1.2 (#2033)
c06c6046e3 : Accept GPU perf test regression. (#5395)
c1919b370b : Skip Cast ONNX backend test, which is not supported in Float16 case (#2005)
a0118533ef : Add a print() function to the JIT script (#5274)
fbf1f06521 : Implement no-attribute dispatch of ATen ops from the JIT (#5298)
ff189b9023 : Update CMake min requirement to 3, and use interface library for cuda libs. (#2021)
07414d94d7 : Add eigen version check (#2037)
0f0f7957e4 : Add docker cmake3 install for ubuntu 14.04 (#2038)
ff3ef8301c : [WIP] splitting conda-builds into separate build and test phases for PRs (#2031)
c0866e45c7 : Caffe2 ARM ComputeLibrary integration (#2015)
e3aae398ce : [auto] Update onnx to e78c068 - Adding int32, int64 and double input data types for featurevectorizer (#547) https://github.com/onnx/onnx/commit/e78c068008c97d88a41ff926916225487b0aa205
99e99130f5 : Remove build_host_protoc.bat as it is no longer needed after protobuf update. (#2020)
30ec06c140 : Merge Variable and Tensor classes (#5225)
7a36c132ce : Skip denormal test for now. See issue #5331 (#5387)
da8e037e03 : Test both CPU build and CUDA build for Windows (#5364)
6a2afe3b59 : Fix segfault in test_dep_nograd (#5377)
b3fdfa7bd6 : [DT] [4/n] Make epoch_group explicit for JobRunner (#2018)
dcbbf346c2 : Change output_declarations in function_wrapper.py to be a NamedTuple (#5312)
2130070785 : Handle copying empty sparse tensors to/from CPU, GPU. (#5361)
232837a75e : [auto] Update onnx to 3ca6622 - Fix pow op's test case (#546) (#548) https://github.com/onnx/onnx/commit/3ca6622ad0cd33334697aee6eea608d671f21357
6c587e9e67 : Solves the linking error related to lazy_init for MSVC (#5368)
77036704aa : add a third output in LSTM onnx export (#5359)
008ba18c5b : Improve CUDA extension support (#5324)
e2519e7dd1 : Fix undefined refence to convolve_5x5_sse on SSE4.1 CPUs (#5371)
0f68eac94a : Fixing an error building with CUDA on windows (#2004)
cc9d3b265d : Fix wrong argument name (#5366)
013ed5b88f : Add lazy_init.h into build for Windows and refactor code (#5365)
cbd1fd6c85 : Install onnx by using the onnx inside caffe2 (#2002)
c249f49ddd : Rename caffe2_ref_test.py to c2_ref_test.py (#2016)
8904616028 : add control flow to interpreter (#5293)
51897e52da : fix all the broken tests from adding debug info (#2013)
38f18c1daa : add third output in onnx -> caffe2 lstm conversion (#2011)
c2a3d85a07 : Traverse sub-blocks in JIT passes (#5329)
b6854ee012 : support batch-first in ONNX export of padded sequences (#5360)
4e5df5cda6 : added debug info to OperatorDef
02b758f63c : Add a disabled-configs.txt interlock. (#5352)
fe5fe7bad2 : CMake cuda targets (#1993)
4da3ce720f : Support convolution without bias in NNPACK bindings
1848cad108 : [ready] Layer Normalization (#4922)
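The Layer Normalization commit above adds the standard normalize-then-affine transform over the feature dimension. As a minimal pure-Python sketch of the math (not the actual ATen implementation; `gamma`/`beta` here stand in for the learned scale and shift):

```python
import math

def layer_norm(x, gamma=None, beta=None, eps=1e-5):
    """Normalize one feature vector to zero mean / unit variance,
    then optionally apply a learned scale (gamma) and shift (beta)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    y = [(v - mean) / math.sqrt(var + eps) for v in x]
    if gamma is not None:
        y = [g * v for g, v in zip(gamma, y)]
    if beta is not None:
        y = [v + b for v, b in zip(beta, y)]
    return y
```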
2344decc91 : Add onnx as a submodule (#1998)
9388d35293 : prioritize cudnn library dir in library_dirs order (#5345)
090850e89b : Adding guards around adding protobuf targets (#1997)
3ee9b5edca : [PR] Floor and Ceil Op
ccea6924a2 : Implementing Pow operator (this merges existing pow with a scalar and new pow with a tensor exponent). Second Try.
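The merged Pow operator accepts either a scalar exponent or a tensor of per-element exponents. A hypothetical list-based sketch of that dispatch (the real operator handles broadcasting and dtypes far more generally):

```python
def pow_op(base, exponent):
    """Elementwise pow taking either a scalar exponent or a
    per-element list of exponents, mirroring the merged operator."""
    if isinstance(exponent, (int, float)):
        return [b ** exponent for b in base]
    if len(base) != len(exponent):
        raise ValueError("shape mismatch")
    return [b ** e for b, e in zip(base, exponent)]
```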
853dba8e3b : Improve sparse variable printing. (#5335)
579de82bcf : DDP: 10% of NCCL backend perf improvements with mixed-prec support (#5064)
069f66e267 : only delete S3 image for successful Windows tests (#5341)
0878c6d4d7 : Various dtype improvements. (#5321)
702a7f3864 : Improve Function interface (#5221)
ba8bbeced3 : Fix input size checks in ATen for SpatialFractionalMaxPooling (#5337)
9bf9f0e613 : Fix the bug of only processing one attribute (#5334)
642e4d0762 : Fix typos (#5340)
09cff195df : Improve GPU perf test (#5327)
6522d6e692 : Make _like dtype arguments keyword only. (#5320)
af4e72fdd2 : Remove _out variants of like functions. (#5318)
0340e46f9b : Disable tests that use DataLoader with multiple workers (#5322)
3ef2e484bf : Add fp16 testcases in test_cuda (#5122)
4b8f4fc259 : Added mixed-precision support in distributed training (#4891)
5e4acd032b : Add an option in cmake for specifying caffe2 python lib relative installation path (#1981)
2588f5de06 : Update onnx version to include the model files suffix change (#1991)
0d641145a1 : Fix public protobuf interface (#1961)
5439ab3cdc : Remove gf library in MKL (#1976)
492466f25f : extern C guards around some TH headers (#5316)
0074dc7fa8 : Allow more backends in caffe2_benchmark (#1979)
5b142e5344 : add guards when source of container cannot be retrieved (#5317)
cc7e61c88d : Move onnx-caffe2 inside caffe2 (#1921)
031412a14b : setup.py and cmake improvements (#5269)
7283d5194a : Avoid having two protobuf on ubuntu14.04 (#1989)
5ce46be17c : Disable test_multi_keep on Windows (#5314)
639d1c7c5e : Make sure libcaffe2.so does not require executable stack
ee71eab4c6 : Adding 'full' version of conda build (#1934)
5edf6b2037 : Add numpy-style dtypes to Variable factories. (#5245)
d2ff733cb1 : Make ReduceLROnPlateau serializable. (#5300)
596470011b : minor sp, underlyhing->underlying (#5304)
0509f26d41 : Speed-up nn.Linear for the 3d input case (#5279)
cf71385ec9 : Implement torch.isnan (#5273)
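`torch.isnan` reports, elementwise, which entries are NaN. The underlying trick is the IEEE-754 rule that NaN is the only value unequal to itself; a minimal sketch of that idea in plain Python:

```python
def isnan_elementwise(values):
    # IEEE-754: NaN is the only value that compares unequal to itself.
    return [v != v for v in values]
```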
fae6c67121 : Configurable flushing denormal numbers on CPU (#5294)
6279367297 : Check class index in no-reduce ClassNLLLoss kernels (#5299)
5eefe87d4e : Emit ternary if in script compiler (#5291)
9193dfd185 : Disable test_multi_drop on Windows (#5290)
c71c84ee04 : Tweak 'detach' docstring. (#5292)
f51e284408 : Fix ASAN detected global buffer overflows in autograd (#5289)
9c207b195a : Fixes UB when using legacy python functions and mark_non_differentiable (#5275)
22fe542b8e : Use TORCH_EXTENSION_NAME macro to avoid mismatched module/extension name (#5277)
5c93ca258b : check attribute existence in SpatialFullConvolution (#5255)
f4f5bad901 : Adding a new ReduceScatter Operator.
36c49c9f4a : change schema's __repr__() flat output to pprint style indented output
3ffd6ffa7d : while and if for experimental JIT script (#5176)
fac4852ff4 : - Fix unused parameter warning in pool_op.cc
1c2cef10e2 : Make zstd position independent
a4c7c88f13 : Update onnx version for onnx-caffe2 test
c809d89810 : Fix RowWiseSparseAdam implementation
8bfb1aa71b : Fix __syncthread in SpatialClassNLLCriterion.cu (#5276)
a6a75621cb : Fix warning in net_test
25e7f8ab28 : Fix event synchronization logic
3975abe549 : Make caffe2 handle out of bounds values correctly
4157562c37 : Added further automatic IBVERB lib and header check before enabling THD/Gloo IB support (#5264)
60dc3ca66f : Use 8-bit quantization only in cases when it makes sense.
c5497a34f6 : Add CPU_ONLY tag for sparse_feature_hash layer
e411525f2c : Add a FAQ, for now just 'out of memory' advice. (#5251)
801d7bc906 : Update gloo
1711878aac : Support EQ operator for bool type
70e71391d2 : Fix THCTensor_(max) and THCTensor_(min) inits (#5265)
cac3026b35 : Fix typo in DataParallel docs (#5268)
cb2fd39fdd : Add Python frontend to the JIT (#5190)
5ee4794d3c : - Fix unused parameter warning in math_cpu.cc
a27f0e4daa : Fix conda removal step for Windows build (#5267)
fe72037c68 : Add CUDA support for JIT-compiling C++ extensions (#5226)
170b22a8f0 : Run tests with -v flag, fixes #5240 (#5259)
68aed0779d : add reduce=True arg to MultiLabelSoftMarginLoss (#5097)
3384f56cce : Fix setup.py
3036346af6 : Trying a quick patch to install protobuf 2.6
2f40c88508 : downgrade docker back to 9 (#5257)
1fdb3929c9 : Fixes for docstrings/sphinx rendering of CosineAnnealingLR and Local Response Normalization (#5254)
16cd3f4a9e : Don't allow to export models where parameters are inputs/outputs
66131dec6f : Expose Caffe2 WorkerPool from ThreadPool
677030b1cb : Revert "Remove unnecessary __syncthreads before reduceBlock" (#5250)
bd22b83d62 : Fix nccl cmake files
8bbd376107 : - Fix unused parameter warning in typeid.h
66a97ddfd6 : Add CPU and GPU perf tests in enabled-configs.txt (#5243)
9dfbc120f5 : Fix assertNotEqual handling of message/precision (#5246)
01b17b3e20 : Disable support of using ninja in setup.py
2078e4ed37 : Check GCC version on Ubuntu (#5230)
cbb2ee66f9 : Remove unnecessary _syncthreads before reduceBlock (#5242)
7363736c50 : Fix THC multinomial stride usage; (#5238)
284b3c3764 : Fix Android build with binaries
0a66c76a4c : detailed error output for parameter sharing
c784a273bc : Fixing conda builds
52fa742c51 : Revert D6893040: Implementing Pow operator (this merges existing pow with a scalar and new pow with a tensor exponent).
fe810edc80 : Consolidated dockerfile changes, updated README (#5235)
8910dd5a81 : Fix GraphExecutor and add more AD formulas (#5215)
318ae2085a : Include __delitem__ for Sequential (#5233)
6204877cd4 : Allow zero-dim tensors to be bound to at::Scalar (#5142)
c746357017 : Add dependency Python packages for onnx-caffe2
4256dbe2d0 : Update perf test suite (#5191)
198958bb52 : Fix for PRId64 (#5228)
f7cc8e8822 : Implementing Pow operator (this merges existing pow with a scalar and new pow with a tensor exponent).
fd28e0fa29 : Add bool function to return whether a model contains loss
a4d0a74cee : Ensure Distribution.sample() result is detached (#5086)
1b71e78d13 : CUDA support for C++ extensions with setuptools (#5207)
232ce18a41 : Additional sparse Variable fixes (#5203)
83c494787d : Allow adding to trainer_extra_schema
6f533fd8b8 : Only overwrite path_prefix & path_type when not None
232530cc28 : Move scalar tests from common_nn to legacy_nn. (#5223)
9a726a0770 : Skip system Python if Anaconda is used
4a377d7817 : Optionally build with sccache
d99d28b3e6 : Allow custom component tagging in DeviceOptions.node_name
5fe2f3f9e5 : Install sccache in base images
f96f3c312d : Implement symbolic for slice operation (#5204)
ab18aaeba7 : Clarify output shapes of reduce=False losses (#5082)
da79697d45 : make explicit about keyword-onlyness of `out` (#5165)
7c3a8eaa15 : Use sccache for CPU builds. (#5208)
e958727874 : Disable NCCL tests for Windows (#5129)
8678e8584f : Update aten docs (#5197)
da938019da : Include newer Python 3 versions in base image builder
147612e64a : add reduce=True arg to SoftMarginLoss (#5071)
b11ba65204 : Experimental support for setup.py develop mode install
2b2d56d846 : Add missing async deprecated wrapper to tools/autograd/templates/python_variable_methods.cpp (#5196)
942f04ec16 : Modest refactor of .jenkins scripts (#5202)
86803004e3 : Fix cmake function to resolve libraries correctly
2d5fbe6e0d : Improve Variable interface (#5127)
d79a31761e : rectangle_cropping_multi_cropping_color_jittering_lighting
0ef10385b2 : Make Python functions respect grad mode (#5184)
38f2cd16ee : If a blob is not specified in the init net, create the blob
d116e47143 : Fix compiler error. (#5179)
4b311847b1 : Fix python extension suffix
849f94526b : Set MAX_JOBS=3 for OS X builds. (#5199)
df0a4474c4 : Allow and warn when indexing a zero-dim Variable (#5114)
bada92ddcd : Implement Variable.new(...) overloads for sparse tensors (#5117)
c7d95dcba5 : Don't use Variable vs. Tensor type-checks for requires_grad logic (#4919)
e39e86f119 : Remove deprecated references to volatile (#5193)
f38b6f611e : Replace NULL with nullptr in autograd (#5162)
2fd8e596b6 : CUDA 9 (#5194)
4ed87e3c9e : Make conda install and s3 cp in Windows build more quiet (#5187)
19c2ad8834 : CUDA 9.0 and cuDNN 7 (#5186)
07be53b57f : Move EmbeddingBag into ATen (#4856)
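`EmbeddingBag` computes a reduction (e.g. a sum) over groups of embedding rows without materializing the intermediate lookup. A simplified list-based sketch of the sum mode, where `offsets` marks where each bag starts in `indices` (illustrative only, not the ATen kernel):

```python
def embedding_bag_sum(weight, indices, offsets):
    """weight: list of embedding rows; each bag is the sum of the
    rows selected by indices[offsets[i]:offsets[i+1]]."""
    bags = []
    for i, start in enumerate(offsets):
        end = offsets[i + 1] if i + 1 < len(offsets) else len(indices)
        rows = [weight[j] for j in indices[start:end]]
        bags.append([sum(col) for col in zip(*rows)])
    return bags
```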
177b4509ce : Fix memory corruption in im2col/vol2col based convolution kernels. (#5173)
315ee107f6 : document input_names, output_names feature of onnx export (#5189)
43f2877b7d : Pinning networkx to 2.0
1c005602fc : Adding model_id argument to nets in predictor_container when modelInfo exists
2f1493f1fb : #4990, Makes Window build fail quicker (#5175)
99474d28b8 : Fix compound assignment in JIT script (#5178)
e1a88a7e98 : Expose sparse variable sspaddmm (#5017)
d7b6a61a54 : DDP: coalescing many little broadcasts to improve performance (#4978)
b608ea9178 : Fix sign error in TransformedDistribution.cdf() and .icdf() (#5172)
a061000250 : Added check and test for betas parameter in Adam optimizer (#5147)
6dc41f9e63 : fixed doc for cholesky potrs (#5180)
ba84e78144 : Fix up caffe2 server build (for @mode/fbandroid/server)
78c9a35a84 : GPU support for ChannelStatsOp
c718b7b62b : Make shape inference work with MKLMemory
9f980b1795 : Implement sparse tensor and variable norm(value) (#4882)
0df54f4d74 : Fix typo
fedb7095d6 : Make tree views statically typed in JIT script AST (#5145)
8243e898ab : allow dropout in RNN ONNX export except in training mode (#5160)
4b8bf73729 : Enable scalars. (#5158)
51267095d5 : Remove enqueue_splits() from ReaderBuilder
39c73556fb : Update NNPACK submodule to fix build with Python >= 3.6
06f8fc3f49 : extend_operator_CostInferenceFunction
8f1f84a6f2 : Expand distributions docs (#5148)
ce5702fa80 : add reduce=True arg to HingeEmbeddingLoss (#5130)
3b63e552f9 : Fix test_distributions when WITH_SCALARS. (#5121)
6a9b7132ec : Add a new_tensor instance method to Variable that takes only data. (#5144)
6df58dac1d : Make NNApi build
cec7003190 : only enable FloatToHalf test for GPU
65fb885467 : Bidirectional RNN export to ONNX (Elman/LSTM/GRU) (#5120)
de2a708187 : Rename test.cc
08113f922b : Vendor Python dependencies of NNPACK
27b9b7b15a : Make TypeInference work for HalfToFloat & FloatToHalf.
6ecaed5021 : Generate a core dump when CompleteInTimeOrDie forcefully quits
01de4e40d6 : Fix a bug in nested parameter sharing logic.
873f116380 : adjust stft result comparison precision to 7e-6 (#5143)
6e0d0f08a9 : Improves Conv*d(Transposed) docs to have correct newline and formatting (#5139)
6aaa701c9c : Adding ThresholdedRelu Op support.
affe742d31 : Add scalar module tests for test_nn. (#5116)
0629785645 : Initial type hints for function_wrapper (#4947)
696db00bcd : Print Parameters like Variables (i.e. print scalars correctly). (#5119)
8edde3de15 : Ensure Tensors have storages in resizeNd (#5115)
a9f3299abe : Fix test_distributions to always use Variables for examples. (#5134)
8e9b530fd7 : Fix ffi cdata for Variables. (#5128)
c4d43b4c7c : Implemented RelaxedOneHotCategorical + RelaxedBernoulli distributions (#5056)
3108ce63ba : Back out "[caffe2][PR] Vendor Python dependencies of NNPACK"
5816721e35 : Fix the evaluation order problem with build_lstm_body (#5124)
beb9fe6a46 : remove some warning introduced by #2764 (#5104)
2d84cb4b04 : warn that CUDA capability 3.0 and 5.0 is no longer supported (#5125)
9093eb1ba0 : Vendor Python dependencies of NNPACK
e0e124e617 : Fix RNN scoping situation
8724298482 : Fixing spdir copy in build script for cuda
99cdf7f91c : Integrate android nn api
2c27bae802 : Change Windows CI conda install path (#5126)
ef14590209 : Support calling pack_padded_sequence with a Variable lengths (#5113)
bf603299b6 : Restore torch.mm behavior for sparse variables (#5077)
85e22b5475 : Reverts force_gpu_half changes from #3660 (#5000)
3e85613751 : Experimental jit script (#5074)
1de4501078 : Add scalar module tests for common_nn. (#5095)
25e946bf78 : Replace edge_type with Edge and create Variable::gradient_edge() (#5030)
0390587d12 : Bind Tensor.random_ in ATen for CUDA (#5111)
c111cdfd1d : Add onnx support for InstanceNorm (#4626)
011941087a : Implementation of the cumulative distribution function and its inverse (#5079)
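The CDF/ICDF commit adds the cumulative distribution function and its inverse to the distributions package. As a worked example of the pair of functions and their round-trip property, using the exponential distribution (a sketch, not the library code):

```python
import math

def exponential_cdf(x, rate=1.0):
    # P(X <= x) for an exponential distribution with the given rate.
    return 1.0 - math.exp(-rate * x) if x >= 0 else 0.0

def exponential_icdf(p, rate=1.0):
    # Inverse CDF: the x such that exponential_cdf(x, rate) == p.
    return -math.log(1.0 - p) / rate
```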
e75b434ca2 : fix MultiLabelMarginLoss test names (#5098)
e027277a57 : Set of RL improvements: Fix error in quantile computation. Handle missing values in sparse_to_dense. Replace page_size with minibatch size.
8f78dd7249 : Refactor CPU and GPU perf tests (#5078)
ccb61e0da7 : Check shape instead of number of elements for some losses (#5085)
47ee86776e : Fix CPU torch.multinomial with noncontiguous prob tensor (#5093)
b2cfd961d3 : Handle sequence lengths correctly when exporting RNNs to ONNX (#4695)
7dafb1217e : Fixed CAFFE2_API decoration for caffe2/proto when building static libraries
f796080781 : Add assignment support for Sequential (#4931)
f160e552df : change long to int64_t (#5094)
b3c8b3d132 : Adding more summary output to make debugging CUDA problems easier
7af433deeb : Add scalar criterion tests (#5087)
3cd825d25e : Check that indices and values are on the same device (#5089)
a68e224219 : Fix ONNX While test for CUDA
895aebac08 : Use Variable instead of Tensor in Function.forward (#4786)
c4d3f69053 : Add Variable.item() (#5090)
c1b98f0841 : Add deprecated add_out overload (#5088)
e2f193c650 : Installing recent setuptools version for python3
36bbaf0d85 : Fixed double memory accesses of several pointwise operations. (#5068)
4ad7fab16e : Fix TH compile warnings (#5065)
fcccd07cc0 : Implement hinge_embedding_loss as a native function. (#5080)
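The hinge embedding loss moved to a native function above follows the usual definition: the raw distance when the label is +1, and a hinge `max(0, margin - x)` when it is -1. A minimal mean-reduced sketch (illustrative, not the ATen code):

```python
def hinge_embedding_loss(x, y, margin=1.0):
    # y[i] is +1 or -1; mean reduction over all elements.
    losses = [xi if yi == 1 else max(0.0, margin - xi)
              for xi, yi in zip(x, y)]
    return sum(losses) / len(losses)
```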
78649419c4 : Cuda 9.1 is cuda version 9010 not 9100 (#4861)
67ff50c30d : Run test_nn criterion tests over Variables, add a scalar test (#5058)
13ef8432b6 : parallelize vol2col and col2vol of Conv3D with CPU backend (#4824)
237c27c35f : Fix reduction functions not respecting the strides of output when output is correct size (#4995)
c028bcd466 : Fix input of Reduce{Front/Back}{Sum/Mean}Gradient ops
f383600625 : ONNX While Operator
6a02cb2844 : implement sequence length support for BasicRNN
805639906a : Broadcast output requires_grad if only corresponding input requires_grad (#5061)
895987f9e9 : Add clang-format style file to caffe2
c9ee47b0b5 : Fix topk work size computation (#5053)
fc856f0036 : Fix type and disable warning about cpu arch
a83c240644 : Fix maxpool3d / avgpool3d crashes (#5052)
28f42cc8e7 : separating set_params and init() for checkpoint managers.
1d044dc459 : Changing sed call in CUDA conda-builds to keep friendly package name
61ad0e486b : cmake: python packages now install to the canonical directory
7c7e09fe2d : Adding the Percentile op & UT
239d3b2461 : Add formulas for LSTM ops to JIT AD (#4916)
3f0a99dc90 : Update FXdiv submodule
885c874167 : Fix refcycles in DataParallel scatter and gather (#4988)
cfb536937c : fix Android typeid_test.cc build error
b08101e281 : Bring back Tensor::data<__half>() and remove base Tensor::data() template (#5035)
d8748a9d53 : GRU sequence lengths: allow unspecified sequence lengths
019c1c4ca5 : Removing some default dependencies of CUDA conda builds
e4eaf67ec9 : Fix torch.diag backward with non-square matrix (#4538)
91efc30bfa : fix #5047 (#5048)
1eaa10b32e : Update torch.distributions documentation (#5050)
7bd2db997e : Port cuDNN RNN bindings to ATen (#4881)
28f056fed2 : add reduce=True argument to MultiLabelMarginLoss (#4924)
d3ea7e260b : Allow for all of the names we have in our model zoo.
ba61eee074 : Expose sparse variable addmm, addmm_ (#5016)
76ae03d5f1 : Operate on Variables in torch.nn.init (#4964)
f4a2b0e446 : Don't allow scalars where vectors are required in mv, addmv, ger, addr. (#5003)
b044c95129 : Use blocks machinery to simplify bookkeeping in autodiff (#5036)
c65bd6660e : Move the cudnn include path before system include path (#5026)
85a7e0fc41 : Addition of ExponentialFamily (#4876)
3acce3e4a7 : assert global_constant name as string
95626737d0 : enforce global_constant name should be a string
9c7ac85050 : Replace more sample_n calls in test_distributions.py (#5034)
61b5ea85d4 : Remove FunctionFlags (#5018)
f8388d2aea : Add the ability to change the insert point Graphs
423677bacc : Add KL-divergence for Categorical and OneHotCategorical and stronger tests (#4961)
99ce581155 : Add support for ::copy and ::createClone with blocks
0d748fac96 : Add nested Blocks in IR
c308e03f3e : Initial GraphExecutor Implementation. (#4982)
3708914bd5 : Give NetObserverReporter a virtual destructor for correct destruction
b0d09dd8d7 : Cleanup operator docs for catalog generation.
e816c777eb : Add regularization for sparse features
dabddd65f4 : Add sparse normalization operator
4ae05799fa : Don't allow scalars in torch.dot for Variables. (#4972)
56112cbafd : Add .clang-format (#5019)
7400de3080 : Fix C FFI extension after moving TH to C++ (#5005)
f23feca681 : Fix output_nr not incremented correctly (#4812)
e22095b09d : Add some more builder scripts from ossci-job-dsl (#4945)
bf3655a10c : make torch.set_num_threads also set MKL threads (take 2) (#5002)
86fd5fd524 : Replace async with non_blocking for Python 3.7 (#4999)
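The `async` → `non_blocking` rename is forced by Python 3.7, where `async` becomes a reserved word and can no longer be a keyword argument. A hypothetical sketch of the deprecation-shim pattern such a change typically uses (names here are illustrative, not PyTorch's actual wrapper):

```python
import warnings

def to_device(tensor, device, non_blocking=False, **kwargs):
    # `async` is a reserved word in Python 3.7, so the old keyword can
    # only arrive via **kwargs; map it onto the new name with a warning.
    if "async" in kwargs:
        warnings.warn("'async' is deprecated; use 'non_blocking' instead")
        non_blocking = kwargs.pop("async")
    if kwargs:
        raise TypeError("unexpected keyword arguments: %s" % sorted(kwargs))
    return (tensor, device, non_blocking)  # stand-in for the real copy
```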
8e22f847ad : Improve CUDA softmax performance
390d542db2 : Modify .jenkins/test.sh to install ninja
733ce9529e : [cpp-extensions] Implement torch.utils.cpp_extensions.load()
142a335b81 : fix ModOp Windows build issue
39b351ecb0 : Fix build with NNPACK
a69110c0d7 : Add size checks for sparse tensor constructor (#4113)
4d656842d9 : enable USE_MOBILE_OPENGL by default
1475895c1d : Use distutils.copy_tree/copy_file instead of shutil
9e36f979c9 : Add ABI compatibility check to cpp_extensions.py
1262fba8e7 : [cpp extensions] Create torch.h and update setup.py
6665a45d5e : Add README.md for ATen/cudnn (#4998)
ce5ccaef0c : Rewrite ATen native docs. (#4816)
eb5daa9478 : Make cat/cat_out native function that rejects scalar inputs. (#4992)
a8bda67ff1 : Only check that arguments are Variables in VariableType (#4991)
2ebd7f17eb : Support stack_out as a native function. (#4977)
d183f2305f : Add scalar autograd tests for functions requiring 'special' Variables… (#4953)
2bf9ed8e05 : Revert "Only check that arguments are Variables in VariableType (#4943)" (#4980)
e138203d8f : add sparse_to_dense_test
7ee286c80a : Vendor NNPACK dependencies with Caffe2
bac898dbfa : Add is_test option to CTCOp, fix OMP thread count override
f652f20f73 : change ModOp to support output sign configurations
65b0474527 : Fix for finding protobuf on windows
eee42748d9 : Caffe2: serialize init for parallel workers
5daf4ca1c9 : Remove android-cmake submodule
964707e9b5 : temporarily disable test_segfault until we figure out why it intermittently fails on cuda CI worker (#4976)
3a82c41d95 : Fix for glog on windows
401eeb2007 : s/sample_n(n)/sample((n,)) to silence warnings in test_distributions.py
65353f1342 : Remove volatile section from autograd notes
f2fd38c53c : Use TypeError in PythonArgParser (#4966)
7c8843f1c0 : Async_scheduling update
1b4959e48d : Type error message when RTTI is not enabled
f2d3f20f6d : Revert "torch.set_num_threads sets MKL option too" (#4967)
6f3266b4a1 : Clarify grad_input_mask documentation in derivatives.yaml (#4963)
6c197c2f15 : fix triu and tril for zero-strided inputs on gpu (#4962)
96239dd50e : Add mutex for CPU RNG and move TH to C++ (#4041)
ca5071d072 : Support multivariate TransformedDistributions (#4937)
d44437968f : Only check that arguments are Variables in VariableType (#4943)
2aaeec0db0 : torch.set_num_threads sets MKL option too (#4949)
3736ccb1d8 : git clone from the master branch in the docker files because branch v0.8.1 does not exist
3ac412efe9 : Properly fill in make_non_contiguous data for sizes that can't be mad… (#4951)
90a3363f29 : Return an empty TaskGroup if node managers exist in MultiNodeCheckpointManager
e776f69ddd : Urgent CI fix for test.sh (#4955)
20fbdb9a8b : Adding mean, variance, stddev to distributions (#4923)
ae903ca61a : Fix JIT tracing in autograd codegen (#4941)
8f273dea09 : Implement constraint registry
52bd369da5 : Add some scalar test_autograd tests for multi-tensor functions (#4944)
004e7590ac : Add suppress_warnings to test_resize. (#4942)
98a4c3f9b2 : Enable rnn_cell_test in jenkins
9d1721e588 : Adding gflags to default dependency of conda builds
5b43c22f73 : Add symbolic_override_first_arg_based (#4799)
c011c8b5a6 : Enable fixed tests again in Windows (#4928)
ef4cf860ac : Lazy init in set device, also should not be called in getDevCount (#4918)
ee8bcdca79 : make torch.cuda.empty_cache() a no-op when cuda is not initialized (#4936)
5c65466b86 : Release NCCL distributed backend from experimental (#4921)
ea0283325c : fix copy/paste error in debug message
8e8e3eb828 : Fix build failure
560e5c94bd : Change default value of LeakyRelu's alpha from 0 to 0.01
6b1f848df6 : Adds gpu implementation for FCTransposed
60f5ae05ee : Add more scalar autograd tests. (#4920)
6f0b7bea03 : Add support for requires_grad in JIT's AD (#4898)
712a6c6362 : Deprecate out-of-place resize and resize_as on Variables. (#4886)
d1a3254764 : Use global thread pool in async_scheduling
78ff996dc0 : Fix some scalar issues with autograd. (#4889)
3c952426fb : Add operator attaching net observer
2a6177e6de : Speed-up repeat autograd tests. (#4915)
12c6088267 : Fixes to native_functions.yaml to match existing Tensor behavior (#4911)
260a246192 : Move repeat autograd to C++. (#4885)
e93ece90a5 : Add Linux Jenkins scripts to PyTorch repo. (#4910)
f0acd68536 : Fixes to aten/Declarations.cwrap (#4912)
4f63f348ae : Fix condition in inferUnsqueezeGeometry (#4909)
91d76f5dbd : Reapply Windows fix
657214543c : add back cuda auto arch detection
f8439c241b : Add some explicit scalar autograd tests. (#4888)
7a47790c27 : Add missing _lazy_init in cuda python functions
c3596c2dfa : remove "-s" compilation flag from clang when build for Android
94b659034e : Potential fix to net_test failure with one GPU
2d829d15af : [JIT] Add simple shape analysis
3b38a244ab : Add ArgumentSpec data structure and tests
d481afb125 : Modernizing glog. Same as gflags.
64a9ecae02 : Dataloader issues (#4643)
967bceb16b : Implement Transforms (#4771)
3ecd25b065 : fix indentation
ff3f689239 : Add more tests for Nccl backend (#4796)
5630bb1fcc : add compress flags to NCCL
73ed0d5ced : Modernizing the gflags dependency in cmake.
94e29ba24a : Fix visibility of AT_CUDA_ENABLED (#4892)
e58a53af6f : Added Poisson self KL + Bernoulli/Poisson KL
a249016044 : New index computation strategy in Functions.cpp (Tensor/TensorList) (#4775)
bd9b8a384a : Fix torch.pstrf on Variables (#4883)
ae28411af8 : Slightly improve DDP single GPU multi-process dist training performance
6420c6b224 : Improve `torch.cuda.empty_cache` documentation (#4879)
f8575f6d68 : Breakdown Dispatcher
33d2212751 : LSTM sequence lengths: allow unspecified sequence lengths
4a528cefac : Remove OpenGL code from benchmark
e7d4bbc9dd : Add CaffeEnforce in SafeDequeueOp
fe9121ff59 : Fix a bug in BatchMM JIT pass
e5958d0e67 : Inherit JIT scopes when cloning only when it's correct
349a1c3424 : Add code for lambda lifting backward in JIT's AD
db0f1e806c : Add Variable (value) tests for variable fill, index_fill, masked_fill. (#4875)
b8ab7bee26 : Use variadic templates instead of initializer lists and overloads. (#4772)
24177adc12 : Make TensorDescriptor call more portable (#4878)
252211b001 : testPairwiseDotProduct
51feaee007 : Assorted small change to conda scripts
84c6887d2a : Switch cuDNN Descriptor classes to use unique_ptr. (#4850)
a3b8c459d4 : Revamp MNIST tutorial
8c02674964 : Revert D6817719: [caffe2][PR] Better support for windows
8aa8eaabb1 : Better support for windows
849b0a0e0e : Update SNPE readme. Indicate libgnustl_shared.so is also needed to ru…
5a5afa5c17 : Properly define 'true' in test. (#4859)
0fd41a63a1 : Integrate Fused8BitRowwise ops with DPER
483828e25e : Don't throw exceptions inside OpenMP parallel blocks (#4857)
08dc40a5de : Use case insensitive names for Doxygen docs.
8aa3dab959 : doc: update installation.md for third_party packages
304e607b70 : Fix adam test
0844b5b25c : Fix deepcopy with scalars. (#4854)
2648428986 : Various indexing fixes around scalars. (#4853)
b2cfc5ea53 : add KeySplitOp
d695027300 : Adds cuda support for LC op
c046da76ef : More distributions fixes for scalars. (#4849)
e0b0328722 : add cuda9 options to nccl
bb3bc969ca : fix binary version scheme to be PEP compliant (#4847)
4c29d19f53 : Update pybind11, fix #4809 (#4811)
e4ddbeb554 : Fix typo (#4846)
aaa0288aed : Implemented Poisson in Distributions.cu and Distributions.cpp
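The Poisson commit adds the distribution to both the CUDA and CPU backends. For reference, its probability mass function is p(k) = λ^k e^{-λ} / k!; a direct stdlib sketch:

```python
import math

def poisson_pmf(k, rate):
    # P(X = k) for a Poisson distribution with mean `rate`.
    return rate ** k * math.exp(-rate) / math.factorial(k)
```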
90543ff13a : weighted sampling reader dequeue outputs table index
c261b9ce70 : Fix NGram from categorical test
afafe8a466 : Add LC Layer
4970e73304 : Add support for distributions and test_distributions when WITH_SCALAR… (#4834)
fc56e86c7d : Introduce init API for the optional Checkpoint Metadata Handler object
1b3d6ab864 : Enabling Infiniband support for Gloo data channel with auto IB detection (#4795)
eea39dbdd9 : Updated bbox_transform op to match detectron training code better.
278d398748 : Add GPU version of math::Transpose
3e8465bc02 : Check if system has protobuf package when it already has protoc command
29a4c942fe : Add support for multi-device batch normalization through an option to data_parallel_model
00a1092641 : Add extra optional inputs to SpatialBN and SpatialBNGradient to enable multi-device batch normalization
9414072159 : Add operators to support batch normalization across multiple devices on the same node
7a232aae49 : Add random seed to NGramFromCategorical test
2828c7a391 : Moved RoIAlign to OSS.
09a1ef54ab : Add missing cerrno include in text_file_reader_utils
8400c57daa : remove now-unnecessary check
0ae5498079 : [JIT] add create_autodiff_subgraphs (#4822)
a14abc741e : Heuristic-based autograd execution order (#4746)
e979b7c940 : Removed redundant import re (#4826)
5e72d7af13 : Remove setting coalesce to 0 in sparse transpose_ (#4707)
bc11511cda : Restore sparse variable transpose_() and t_() (#4779)
23dc8acbc8 : Fix missing import and enable test for profiler on Windows (#4522)
1e7d15953e : Added Chi2 test for distributions (#4815)
82fed06535 : disable qr_big cuda test on Windows (#4747)
e83546b686 : Restore sparse variable _dimI() and _dimV() (#4785)
c7a2e318ed : Restore cuda variable.bernoulli() (#4787)
29c7c682d8 : add NGramFromCategorical Op
27505e6429 : Fix #4480 by tracing inputs before running function. (#4807)
0e9b0cf779 : add error msg in fc input_record
691c38d670 : Remove windows linebreaks in various distributions files. (#4817)
0aa1a6387e : Add a seed to the gru unit test
24babe249f : single_label_weighted_sampling
9bb6d33d35 : Enable scalars if compiled with WITH_SCALAR environment variable. (#4806)
e60f7e2490 : Create issue template with guidelines for issue submissions (#4810)
a7ef4e4d46 : Use android.cmake.toolchain from Android NDK
91e2e67c8b : Add fallbacks for ChannelShuffle and Transpose
3a8bf0d3dc : Fix crash in MKLSumOp due to layout mismatch
76a141f016 : add error msg in get_key
70f0436335 : add Elman RNN export to ONNX (#4613)
2dd79eb53a : Visualize distribution of activation functions
5403f3bc17 : Temporary fix for Issue 4752 (#4760)
c6a64f1a78 : Better unsqueeze_to
e37f02469d : Favor Variables over Tensors for scalar constructors in torch.distrib… (#4791)
c2afd590ae : parallelize elementwise operation with openmp (#2764)
8c69eacde6 : Initialize cuda before setting cuda tensor types as default
154038e318 : Removing NCCL clear_group_cache workaround with one more check in new_group (#4766)
8e0177255e : Test for PositionWeighted
231d6f7b09 : Add SqueezeOp in MKLDNN
409b1c8319 : Improve wording of Sequential docs (#4790)
966db35dd9 : Improve memory access patterns for index operations. (#4493)
c49f0279a6 : Add kwarg-only 'requires_grad' parameter to Variable factories. (#4748)
9390f7d3d6 : Implement a (data-only) Variable factory (#4753)
e64ad91365 : Revert "Add doxygen and graphviz to Jenkins docker base."
1d4e996b87 : Separate parameter downloading tasks from training tasks and run them in a different group
27f4041738 : Checking performance flags during init.
a82b3096ef : OSError will be raised in setup.py if "git" is not installed
876bcc06b9 : Fix squeeze() backward in edge case (#4783)
1569797b15 : Use ATen infer_size implementation rather than TH. (#4781)
db45dbbebf : Update README.md
14033df3cb : Fix resize_as_ on Variables containing SparseTensors (#4745)
b7752efc1b : Restore sparse variable methods for: (#4780)
d618c05174 : Increase lower bound of values for values in div test
a5440717ae : Restores some sparse variable methods (#4687)
ad2edd8613 : Check submodules only in build_deps (#4770)
b5d513b1f9 : Add op in MKLDNN
dd5c195646 : More documentation for CUDA stream functions. (#4756)
f033dd60cd : Implementation of the Fisher-Snedecor Distribution (#4706)
8593c6f4f7 : Adding better KL-Tests (#4739)
816d5d8ff7 : Scaffolding for source-to-source AD in the JIT
85126ba217 : Semi-automatically generate scripts out of our tutorials
91066559a8 : truthy check for empty string in NameScope()
4ce4bc5c7f : Fix occasional test timeouts
96ceb91384 : Add cudnn_is_acceptable function. (#4749)
d38cf0e1e9 : Allow assertEqual checks with mixed Tensors, Variables, numbers. (#4754)
1fee7cd626 : Delete some dead expand code. (#4755)
ced2c7e2b2 : Remove Set/GetDefaultGPUID and move to use current gpu id instead.
69ce46a20b : Moved mask-rcnn inference operators to open source caffe2.
cded9683ad : Implement fused 8bit rowwise sparse lengths reductions
d401c26d63 : Add FusedEmbeddingLookup
8dc0702af5 : Add float32 <-> fused_rowwise_8bit conversion Caffe2 operators
e9dceec2c8 : Fix the Macro definition for E in cpuid.h; #undef E
bf45811266 : Add doxygen and graphviz to Jenkins docker base.
f72d86e0d3 : Implement geometric distribution (#4708)
a0b7169b7e : Ensure that Tensors always have Storages (#4744)
c052eb6bbb : update the video input op in caffe2
4ea6e6a556 : testSparseLookup
870ef8e95f : Implement record_stream on Variable (#4728)
b6eb7d7ba0 : Allow Python Variables to be bound to at::Tensor in pybind11 converter (#4730)
f1c616418d : Fix Python docs for broadcast and broadcast_coalesced (#4727)
e23acb3b08 : Allow Variables in the (legacy) THNN bindings. (#4723)
b984c0b6e9 : Various testing and utility improvements including torch.testing module. (#4726)
db6be0e1f1 : Fix call to THPUtils_parseSlice (#4732)
b997474a4f : Adds Im2Col and Col2Im (#4729)
f7ab0cb56c : Legacy Padding: correct output size with nInputDim
d29670db46 : Make tensor cast constructor explicit
92aeca1279 : update runtime dockerfile (#4736)
f9fd82d893 : Type fix fused/mix precision (#4734)
b28d5a3586 : Build doxygen docs with cmake and fix catalog generation
a9a2b9ee3e : Adding a separate script for anaconda builds
e855317370 : Make dirichlet_grad and standard_gamma match ATen declarations (#4722)
93f49667d0 : Allow Variables in calls to NCCL bindings. (#4725)
e3e6680b48 : Add ElmanCell and ElmanRNN
3249d8bf89 : Allow Variables in calls to type2backend (#4724)
158e001238 : Checking for positive epoch size before running epoch
8e4f67ed72 : Enable the detectron module in cmake
23fc2b7e06 : Define CHECK in torch/csrc/cuda/nccl.h (#4721)
f072986733 : adds reduce argument to BCEWithLogitsLoss interface (#4705)
79d15c52cb : Improve the engine support for functional graph execution (#4690)
d1c4065f0d : Support copy on sparse tensors in at::Type
1061d7970d : Move broadcast and broadcast_coalesced to C++
de5f7b725e : Base for pure C++ NCCL interface
2da43bf6f1 : Make Symbol a true struct (#4717)
d7e7e794f5 : Fix display of test failure number in test_distributions. (#4713)
57549b7e44 : Bind functions with out= arguments in VariableType (#4565)
6f0bb28afb : Stop running RowWiseSparseAdam test on GPU
a8bdce38fe : Replace PowConstant (#4711)
720c7b1e2c : Move repeat to torch/_utils.py (#4712)
b37aa2bf0e : Ensure lazy evaluation for probs and logits (#4691)
539b1ed4b9 : Add proper scalar checks to functions bound by nn.yaml. (#4696)
1a02d3ae86 : Implement MM fusion (MM with add reduction tree) (#4615)
db7f5dae77 : Test_autograd support for 0-dim input/outputs. (#4647)
d6423d9895 : Import Detectron ops
8c2d35c754 : Refactor distributions (#4688)
61356cbadc : RowWiseSparseAdam operator
05ebd15207 : Fix cuDNN batch norm overload in VariableType for half precision (#4693)
d6b48c1571 : [ASAN] fix more load_real deletes (#4694)
6ba96952a6 : Fix Eigen failure with `conda build conda` on Mac.
cb83474a57 : Fix embedding with sparse=True (#4686)
559380c9ea : Install CMake 3.6.3 in base image for Android build
3254eca8c8 : Implement binomial distribution (#4658)
1fd05df738 : Add no_prefetch option to prefetch_op.
d452291a72 : updated documentation for Embedding layer. Fixes #4682 (#4684)
ddb767f214 : Add printing support for sparse variables (#4683)
d6ec05d0e3 : doc: update installation.md for Ubuntu 14.04/16.04
97fc06ac22 : Use restat to reduce ninja rebuilding when running codegen. (#4635)
c4edb56b45 : Print full type of Variable tensor
c3b7baecea : Fix #4422, use grad for cudnn_batch_norm derivative / don't use toTensor()
7449b467d9 : fix deallocation and accesses from ASAN detection (#4678)
9f893dda5f : Add LocalResponseNorm to docs (#4681)
2260649fb6 : Local Response Normalization (#4667)
522276759d : GPU detection fails when CUDA compilation requires CUDA_HOST_COMPILER to be set (#4676)
67494cee9d : Fix cast direction in THCBlas (#4670)
017893e21b : Fix batch norm JIT dispatch
86fe793948 : Addition of KL-Divergences for torch.distributions (#4638)
27d7182d6c : replace full stop by comma
bdb05c2243 : Add tests for distribution .entropy() methods (#4657)
188ee3ff0b : Fix wrong learning rate evaluation in CosineAnnealingLR in Python 2 (#4656)
05908e8243 : current code works with dim = 3, so I added it to dim checks
9b6441ecbc : Implement Multinomial distribution (#4624)
cb7350fc8d : Add vulkanSymbolWrapperReset function
4db89e6890 : Check for result in queue only after background process is terminated
e79eea2c11 : Use protoc RPATH to figure out its install prefix
81898e5d47 : Fix for wrong newline in caffe_translator.py (Crop layer translation)
db6777eaf4 : fix gru_cell bug
8eded5aece : Fused fp16 lstm backward math fix. (#4611)
a3b098dcf9 : Adding is process_group initialized support (#4618)
5343b71a62 : More strict shape check on Conv operators. (#4637)
841ce42daf : Fix flake8. (#4644)
8ef26185d6 : Add missing torch declarations to derivatives.yaml. (#4617)
eb857ec367 : Introduce a (non-public) autograd scalar method and improve printing (#4586)
a14dd69be8 : [ATen] Have any()/all() return a Tensor in preparation for dim/keepdim parameters. (#4639)
b7a0d0efb5 : fix heap-use-after-free in THStorage.c
b42f163835 : [ONNX] export sum, prod, sqrt improve log_softmax. (#4579)
7e3da98734 : Clean up error checking in THPTensor_(_convertToTensorIndexers)
736190fc78 : Allow broadcasting of value x params in Categorical (#4614)
d5e0aa3d53 : Remove gmock from batch_matmul_gpu_test
b2964a92d9 : Add MKLConcatOp
dda33ca53a : enable setting model initialization seed
9760329014 : fix possible divide by zero
995eafec84 : Remove gmock dependency
1d0426a9f5 : Prepare test_autograd.py for introduction of scalars (#4599)
7b31d33e80 : Fix use after free (#4559)
4d62cf499c : fix out-of-bounds access in THTensor.c caught by asan
77523df413 : Add more check on softmax ONNX exporting logic (#4592)
4357dee097 : Adapting conda build to work for ubuntu and adding a flag to control precedence of Anaconda include dirs
224493d9ce : NNPACK: Use new bindings and custom thread pool
d3b6c5e556 : Support output_padding in ConvTranspose while doing ONNX exporting (#4583)
2b2a7dc2ad : small fix on MaxPool2d __repr__ (#4591)
71b1120ba8 : Fix bug in Dirichlet.rsample(); add tests (#4602)
19a8a3fc35 : updating gloo to latest master (#4608)
94f439c07c : Fixed setup.py to handle CUDNN_LIBRARY envvar with aten (#4597)
0988e328c9 : Fix errors in travis config
059299b74d : fix compile errors (#4600)
8cff8e93d2 : Add torch.distributions.utils._finfo for numerical stability (#4572)
c1d5e71e7c : removing Local.cwrap entry in build_libs (#4595)
868e77a3d2 : Ignore clang compilation database in git (#4601)
0a8a18ca01 : Fix GemmBatched
9eeb342bf9 : Cut the ScopeGuard alias now that we have auto
b1de1f6a5e : Move ScopeGuardImpl and ScopeGuardImplBase into the detail namespace
90db3fbad2 : Include CMake version in configuration summary
ab638020f8 : Backport FindCUDA functionalities from CMake
f94f5723e7 : fixed spelling (#4598)
0ac58d53b8 : ATen conv param expansion; InstanceNorm use_running_stats fix (#4544)
bc7a41af7d : Ensure convolution weights are contiguous, fixes #4500 (#4543)
03a6b5ecea : add include
47de6f47f3 : Use GCC to compile Android Caffe2
a20ac05c8b : Added method cuda to PackedSequence. (#4430)
33d734fcf1 : Generalize construction of db_name in checkpoint manager
944f9aa826 : Move Android.mk
c9bb811d6a : Remove accumulate_grad version_counter check. (#4566)
2435d22782 : Move NNPACK integration to share/contrib/nnpack
cd3e90c16f : Fix failed test due to D6665466
040336f5dc : Further fix to tracing scope (#4558)
cd9d0f4561 : Link cpuinfo when using external NNPACK
5918243b0c : Methods for checking CUDA memory usage (#4511)
a9afecdfc8 : Update installation.md to roughly match caffe2.ai
f4a75deccf : Fix the inconsistency of `polygamma` on Tensor and Variable, for issue #4466 (#4527)
a0ab48575e : Implement backward pass for pack_padded_sequence
f99c7d9429 : Padding_idx in Embedding supports negative indexing (#4496)
82198831e7 : Fix pool op custom path issue 2, wrongful routing to global pooling
3a335427b0 : Start framework for kl_divergence(-,-) in torch.distributions (#4525)
b3710a2e01 : Fix a missing AutoGPU (#4545)
a3f4fa254c : support GRU export to ONNX (#4390)
a59bb97868 : [ATen] Support wrapping dimensions over scalars. (#4536)
674ddf6b91 : Fix multi-gpu fuser bug
e3bafb884a : Link NNPACK even when CUDA is not available. (#4541)
5d6a5cf3a7 : Implementation of Gumbel Distribution (#4517)
8fe3d287b2 : Fix return type for Bernoulli enumerate_support (#4529)
12309f4aa6 : GRU cell: add linear_before_reset boolean parameter
41bb662d96 : add dense regularization
073312eade : Updates to MKL conversion script
04ad23252a : Refactor gen_variable_type (#4487)
3f974d6ffe : [ATen] Improve ASSERT test infra. (#4505)
7d25a41251 : Fix #4492, make it impossible to forget to reset cudnn flags (#4503)
d3612a5914 : Fix tracking of tracing scopes during ONNX pass (#4524)
c650c73cbc : Extract the finish check for profiler (#4519)
c9bc6c2bc3 : Implement Student's t-distribution (#4510)
5c641cc14f : Fix abs specialization for `uint8_t` type. (#4521)
e5f25421ae : Implement demangle in Windows (#4515)
2dd7039b6b : Fix multiprocessing and dataloader tests on Windows (#4453)
21d48be2dc : Delete redundant isContiguous check from THCUNN SpatialDilatedConvolution
0f8ece5657 : Actually test CUDA double-backwards codepath.
4e3a4bd688 : Check for out of bounds grads access in derivatives.yaml
6a266f5832 : s/uses_grad/uses_single_grad/ for more clarity.
ed33dc1d4f : Fix 'invalid argument 4: weight tensor has to be contiguous'
1f8a8cc941 : Fix two bugs in thnn_conv_depthwise2d_backward gradient.
123f49badb : Add Slicing capabilities for Sequential, ModuleList and ParameterList (#4491)
9c2561e60c : Fixes #4475, Add debug flag for Windows (#4508)
1dd441ba32 : Use auto for scope-guard locals vs. folly::ScopeGuard
c1d9694f42 : Backed out changeset 6f532bad5824
ad45d1bfb5 : Added Caffe2 operator CPU binding for Gloo Allgather
027407f224 : [ATen] have ger handle scalars like np.outer. (#4489)
6d32e36682 : Caffe2 Operator: GPU implementation of Swish Activation
b8fd57a0cc : Fix handling of empty indices in CUDA Tensor.put_ (#4486)
408c84de7c : Supporting logits as parameters in Bernoulli and Categorical (#4448)
0afcc8ebb9 : Fix typo in fusion compiler (#4488)
2cda295244 : Adds cpu version of transpose util function in math.
a43fd6ae52 : Bump gloo
3725d8ea97 : Disable the python op test numba import in asan
64b0039ef9 : rnn_cell_test: make it deterministic and speed up
a3e91515de : Declare constraints for distribution parameters and support (#4450)
c6adee0807 : disable CUDA HalfTensor tests in test_cuda for Windows (#4482)
1e76ade9dc : Implementation of Pareto Distribution (#4459)
73bdb661fe : Fix BCELoss test precision (#4484)
dfde42a94c : Remove THPGenerator default code for random functions in Declarations.cwrap. (#4479)
58f6008f76 : Improvements around torch.cat on empty Variables (#3602)
d1c973fee1 : Hot patch ONNX _run_symbolic_function
35c4d73bdb : Deprecate nn.NLLLoss2d (#4238)
fe70823f8e : Fix StepLR docs (#4478)
fc0d940c5e : add gumbel_softmax, based on Eric Jang's implementation (#3341)
2d68956005 : Add Tensor::print() for gdb use.
b98740b8ec : Remove request for proposal link from README.md
7c729e6321 : - added size_splits to functional (#3837)
dc76db349e : Delete a pile of dead code (#4295)
af8b64aadc : Fix template type for std::array size (#4473)
d7d396b14b : Modify derivatives for efficiency and change `destination` to `result` for consistency (#4415)
2e279eb260 : Make the dash nicer
a246d56150 : Update build matrix badges in the README
b062769940 : instance norm fix running stats settings (#4444)
d7da50473e : Add check for slice shape match in index_copy_ and index_add_. (#4342)
5b91b240d2 : adds missing argument (#4446)
68726df0ac : Fix GemmBatchedOp
c43b120d43 : Improve float precision stability of `linspace` op, fix #4419. (#4470)
cc9dc3f343 : add lock for SynchronizedSeedDataset; add additional os level close stderr for tests that launch failing process (#4463)
28eea8b032 : Adding commandline flags to disable implicit engine preferences.
cc70a33e74 : Windows fix for #4312
e9f0761460 : Fix pool op custom path issue
a7cc653139 : cmake: handle CUDA 9.1 in GCC version check
3329f36f1a : Move load_save_test.py from caffe2/python/ to caffe2/python/operator_test/
48436ac124 : Fix compile warning "implicit declaration of function" (#4467)
f90feac38b : Adding conda specific script to macos builds
6a0c636d4e : Don't special case NN functions in gen_variable_type.py (#4395)
e2ccd6e7ab : Rename native/TensorGeometry to native/TensorShape since there is already an ATen (non-native) TensorGeometry. (#4465)
73fbad0bfc : Fix some scalar checks (#4462)
d80669fce8 : Guard PyArray_Check with WITH_NUMPY
77c792ec27 : Vectorize normal_ (#4312)
e426020c87 : Move prod, cumprod backwards to C++ (#4394)
9835ca9bac : Ensure indices list in sparse optimizer tests is unique
f321f61b9a : Improve dropout
17148f891f : Fix a leak in JIT interpreter
2d2b157d25 : Handle repeated outputs in the tracer
e6cbe84bf6 : Handle repeated inputs in JIT tracer
f05ca657dd : added fix for #4408 and a test (#4452)
387b4234ea : Provide CMake support for detectron ops
82e995e0b9 : Windows fix for #4322 (#4455)
bf37548ccc : Properly include the generate proposal headers.
3cdcbd5986 : re-apply D6652354
56508566a1 : Enhance Caffe2 Load op to support loading blobs from multiple files.
2f23ab0bfe : Revert D6652354: [caffe2] Move proposal generation headers to oss.
2af506cb6c : Move proposal generation headers to oss.
8fd3888c4c : Provide CMake support for contrib/prof
77484ecc45 : Manually applying cudnn5 pull request.
48492b02cd : disable travis webhook as we are moving to jenkins as CI
33bb849a73 : Remove assign_(Scalar). (#4445)
bc50510016 : use numerically stable version of BatchLRLoss
20b5e82155 : Implement embedding in ATen (#4322)
43ab911182 : Improve precision of dirichlet_grad() approximation (#4421)
bb04034bf7 : Adding a time limit reader
bf1c7d96c8 : turn off unsupported multiprocessing methods for Windows
2060f355a6 : Fix python gc race condition with THPVariable_traverse (#4437)
18a866aedd : Add random_split to torch.utils.data.dataset (#4435)
57f9db9c3c : Two NNPACK build fixes. (#4439)
98e5f2c808 : nllloss doc (#4438)
2cee02cc86 : [ATen] Get rid of assign_(Tensor), use copy_ instead. (#4397)
4d4b7782cd : Fixes for Windows build on master (#4432)
35abc4efa2 : Add low-precision digamma() and polygamma() functions (#4399)
7592e96503 : More detailed documentation. (#4428)
02e7eba309 : Implement Chi2 distribution (#4425)
98c02c20b1 : fixes #4403 (#4407)
0b328874c6 : Pick up NO_NNPACK for ATen build (#4423)
b7c64249cb : Add quotes and fix ninja on Windows (#4416)
2f25b9d052 : fix build error for unix (#4414)
4cf13cf417 : Fix crash due to copying empty tensors into MKLMemory
859a173502 : fix AMSGrad for SparseAdam (#4314)
b78a37a058 : Enable ninja during python build process for MSVC (#3993)
fec3d4a079 : RNN support has been implemented (#4409)
240d448a9c : Split cuda native functions into components; fix mistake with conv_tbc cpp move. (#4398)
6185b27cc6 : Improve precision of standard_gamma_grad() (#4369)
fa8de6b4f3 : Adding the Cauchy distribution to torch.distributions
99068d2e52 : fix nn.init.constant example
8c9a22a88e : Support NO_NNPACK environment variable (#4401)
4238f5e604 : Extract some utility operators to their own source files to reduce build size.
b132187014 : Add vulkan stub
ff9d1aeab5 : removes duplicate variable reference crash from pad_sequences (#4383)
98f71912b0 : Fix type signature of in-place NN functions (#4389)
af3bffb638 : Update derivative of `expm1`
ab80c27b47 : Fix undefined FileNotFoundError (#4384)
89acc10f85 : Adding description for Optimizers (#4371)
5c33400dd3 : Implement OneHotCategorical distribution (#4357)
3a169780e9 : fix some typos (#4379)
e519ef5337 : Adding torch.expm1() and its inplace function (#4350)
d859c3c7cc : Fix creating tensors with np.longlong array
f8a4b1a266 : Split off load_derivatives and gen_autograd_functions from gen_variable_type (#4370)
410fd58b4f : support RNN export (#4163)
15b657af84 : Support ATen GPU pointwise apply and torch.where. (#4304)
5bcacb21d5 : add bias term to linear __repr__ functions, fix spacing
cd23994dbb : Improve matmul native test tolerance. (#4365)
a76ac19955 : VariableType clean-up (#4366)
4453a5402f : allow_inf on test_beta_log_prob (#4354)
1b608eea4e : Fix distribution tests due to merge order (#4351)
ffa7fab67f : Minor changes to test utils to catch type errors (#4270)
15163a3273 : Improved documentation of several index operations.
efa7c895f6 : Misc Windows lint
26168e22cd : fix NameError in torch/nn/rnn.py
6646c3e542 : remove CPU builds from Travis, as they are now covered by Jenkins
9a48f8d7c3 : add tests for btrifact_with_info and doc for btriunpack
658d4c7ea8 : allow optional int tensor
a51a094200 : fix MaxPool2d __repr__ missing ceil_mode summary (#4335)
0c4b3f4271 : Adding Uniform distribution to PyTorch (#4328)
e9bfe8ca92 : Make expect file directory search more robust.
1a0eefd5fc : Parallelize batcher
3304185c6c : Fix test_gamma_sample_grad. (#4327)
3b7fbc397e : Reorder native_functions.yaml by alphabetical order. (#4326)
3837a962d3 : Fix typo in Concat and Softmax
5fb5e7b01d : Split NativeFunctions.cpp into functional components. (#4325)
3264ef95f0 : Test CUDA types in native_test and make sure THCTensorRandom launches are valid. (#4323)
6c4e97220a : disable test_gamma_sample_grad until it's fixed (#4324)
e6ad0ea27f : disable test_distributed for windows (#4317)
de28e754b2 : Make Variable.is_sparse an attribute (#4308)
7f6ca8efa5 : Fixed unused return value from write
89de9a494a : Generate grad_input_mask only if it's actually used
d4fd9a3fd4 : Remove unused functions
9488eeb308 : Fix signed compare + redefined macro in libs
fb46836fc6 : Fixed unused result warnings in THD
492e26fbcd : Pad sequences and Pack sequences (#3875)
5d3fc364aa : Fix OSS build
5d6dacaafe : Enable building operator QuantDecompZstd
a7ac591d3b : Support for DLPack in Python op
b231efbdc1 : Install jupyter in all Jenkins images
1632ab2979 : Fix default device for Variable.new() (#4307)
f5de5a84be : Throw exception in checkBackend, improve standard_gamma_grad error messages. (#4306)
60bbccc8e7 : Add factory Type::sparse_coo_tensor(indices, values) (#4303)
4dba674324 : Move fractional max pooling to ATen (#4290)
a076731066 : Use `where` rather than `_s_where` in `_s_where` backwards so `where` is traced. (#4301)
41c9959ef7 : Enable functional torch.where. (#4298)
e0ebd9a14e : Check GCC version on Ubuntu
8af9f0da99 : Saving checkpoint failure should not cause job failure
5f7c5502b8 : Further improvements to ATen convolution (#4287)
46054ddb5c : Run MiscCheck.cmake earlier in CMake process
5b8fe5cbb5 : Batchnorm in ATen (#4285)
d7e6ede784 : Implement Laplace distribution (#4289)
cf1878814f : Fix typo in operator_schema.h
e996157a5c : Check if dlopen() return handle is NULL in open_libopencl_so()
32a4a523d5 : cache OpenMP_FOUND in cmake (#4252)
54e11639f9 : Fix broken test_beta_log_prob in Python 3.6 (#4261)
a53e04a63e : Document some autograd invariants (#4272)
674d7d1f8e : Allow compiled functions to call compiled functions. (#4286)
fab5885df6 : Add Min and MinGradient Op in Caffe2
b6a30f7ede : Move SELU to ATen (#4269)
dad4b2d6cc : Move adaptive avg/max pool1d to ATen (#4266)
689ef9cba3 : Move upsampling to ATen (#4264)
efb6feb242 : Make the JIT interpreter handle unused inputs correctly
6daf34ce7b : Don't mark index as traceable, and other improvements (#4249)
a88a8ec827 : Convolution derivatives in ATen (#4116)
63ac3633f5 : Implement torch.where(condition, x, y) CPU Variable. (#4259)
456b5b1642 : Implement _values() and _indices() methods for sparse variables in python (and sparse tensors in aten) (#4058)
d400305eb9 : fix typo in grad_mode
766312b7f2 : Further relax VariableFlags, ... and fix bugs (#4244)
77ea2f26d8 : Add build support for Python 2.7 using MSVC (#4226)
b11db95478 : Fix compilation warnings (#4248)
0bc1505f34 : Implement .entropy() methods for all distributions (#4268)
cf2e088c9a : Translate None to zeros for old-style autograd functions (#4242)
1681d07199 : Disable tests and fix issues with Windows CUDA build (#4251)
69265ea5bc : Ensure gamma samples are positive (#4262)
257a9e5279 : add hill learning rate scheduling
4c56ce0958 : Remove unused thnn/loss.py (#4267)
ce2a0aa4d8 : Add slice and gather syntax
b476d10c64 : Move max_pool1d to ATen (#4257)
8c8114801b : Fix onnx export of replication pad (#4263)
9495595520 : Move reflection/replication padding to ATen (#4258)
97c33a22a6 : GPU fallback for LengthsRangeFill Op
c470055319 : Remove template_scalar, implement is_signed using dispatch. (#4255)
227ef1fb60 : Move adaptive avg pooling 2d/3d to ATen (#4254)
168271f1b8 : add struct get method
96007ec6c0 : fix an out of bounds hypothetical (#4240)
bc6bd62bd6 : Fix distributed dataloader so it pins memory to current GPU not GPU 0.
7315a19bc9 : add maybe_add_global_constant
cb4f6c3148 : conv_tbc (#3730)
019db89cb2 : Fix documentation for ResizeNearest op
d28720b90a : Backpropagation for While op
52600f8607 : Record workflow run id for inference.
1b820be7e5 : Fix the ATen static building issue on CUDA
9bf5e40dfa : Refactor cudnn code layout / make build more robust. (#4201)
94ff31f54d : Implement Exponential distribution (#4234)
e4f46905c0 : Fix numpy batch matmul index calculation
d605058212 : Replace Variable.volatile with torch.no_grad() (#3970)
0876bab8b7 : Support CPU Apply in ATen and implement standard_gamma using it (#4161)
bcbb36e99a : Allow value broadcasting in distributions.Distribution (#4210)
68c0998cbe : added AMSgrad optimizer to Adam and SparseAdam (#4034)
ee98e7a82e : Implement Dirichlet and Beta distributions (#4117)
ccf4dc1525 : Add reduce arg to BCELoss (#4231)
d8b2e5d091 : Add python only default init expression; Implement stft, hann/hamming/bartlett window. (#4095)
3b641dc805 : fix include order for PRId64 macro
54d689253e : Revert "Add reduce arg to BCELoss" (#4221)
e9ef20eab5 : Add Cosine Annealing LR Scheduler (#3311)
847c56aeb5 : Add reduce arg to BCELoss (#3532)
b86dc0c8ba : add reduce arg to PoissonNLLLoss (#3770)
02317d9336 : Enable ext build for Windows (#3935)
390b7afd45 : Fix CUDA Multinomial checks (#4009)
43dd6319db : Exclude attrs with invalid python variable names from __dir__ (#4011)
5cc26c0c90 : Add default PyTorch seeding and worker_init_fn to DataLoader (#4018)
30e6898808 : Implement NLLLossNd (#4035)
7f41149e14 : handle requires_grad when creating buckets for distributed (#4044)
3796ce9255 : assert (#4056)
e0d5d1b7c9 : view in certain noncontig case (#4062)
9394e65b44 : Add proper shape checking to torch.cat (#4087)
2c71b679d2 : Implement pin_memory() as a NativeFunction (#4094)
0257f5d19f : improve performance of maxpooling backwards (#4106)
bec0349280 : Implement Variable.cuda and Variable.type using ATen (#4139)
b79d74aa81 : Re-initialize autograd engine in child processes (#4158)
5c46427f08 : Rearrange dimensions for pointwise operations for better performance. (#4174)
e2c75d3732 : Make import work even if 'tools' is available in Python path
a6fb960b98 : Expose node scopeName to python (#4200)
0e804ae042 : [jit.compile] add a jit_debug_info method (#4205)
cab5921227 : Improve symbolic hack a bit (#4143)
2e08885df8 : Fix for issue #4103 (remove -march=native flag for ppc64le) (#4162)
d4d8698581 : Fix repeat non owning (#4084)
8307f21bf6 : Allow map_location in torch.load to be a string
e393a4f03c : fix typo (#4206)
038fb70455 : Remove dlopen() in get_libopencl_path()
1766e27324 : Add DepthwiseConv in iOS11+
0a25926f4b : CUDA implementation for GatherPaddingOp
db0a2ff4eb : selu op
95b3c7edad : Fix undefined behavior in GLFilter
c813ce3787 : Implement Variable._sparse_mask (#4124)
5fc4b66cc4 : Fix timing issue in stats_test.cc
7a5200b450 : print exception in layers
9331ecad3f : Fix for installation of exported targets
6d72c82985 : Trace ATen native functions as themselves, not their implementations. (#4127)
31c3766d5a : Use Jenkins build status badge
93c2f81f32 : Fix another leak in pybind11 code. (#4185)
84b7daadb2 : Relax verify of VariableFlags (#4191)
fc8ad6fde6 : improve svd doc (#4155)
6552ea110f : Make timing based test more likely to pass
dde10e1d4b : Add docs talking about how to adding symbolic for unsupported ops (#3741)
7874f611a5 : Allowing usage of GPU Direct within PyTorch for the Broadcast operation (#4183)
5a264b4c0c : Add cublas batched gemm support. (#4151)
fac711c238 : Provide full support for distribution shapes (#4193)
db446d69ca : Fix issues with Windows 7 & 10 CPU build (#4065)
28ea5ac069 : Refactor Reduce{Front,Back}{Sum,Mean} Operators
595c6dea71 : Create an ONNX ATen exporting mode (#3489)
def4b78b6f : adding index_select to symbolic.py (#4061)
00fe088659 : Enable OpenMP in fuser (#4042)
d8d82d14cf : Add an option to suppress download progress (#4135)
be1ef5e4a4 : Added explicit tuple element-count to doc for Conv1d. (#4136)
d8c5f2ae21 : Fix a bug where from_dlpack failes if cuda is not initialized. (#4182)
9792acb4e0 : Preprocess both inplace and non-inplace nn functions (#4184)
d4db1b90a1 : Resuppress adagrad health checks
7f25fff2fe : add reparameterization, combine sample and sample_n (#4142)
19c511b42f : Make docker image built on Jenkins usable out of the box
50360aa00a : Install cmake3 (v3.6.3) in CentOS containers
c6381c6d44 : Add function to explicitly initialize PyTorch CUDA state. (#4180)
931f5e66d9 : Change MakePadding function to be private
0867120f1e : SortedSegmentMean/SortedSegmentLogMeanExp Gradients CUDA implementation.
d38a9bb4ec : Fix dot processor with only one sparse feature and no dense feature
eed95f8660 : Simple fix for windows
54f6b18168 : Caffe2: Make SimpleNet simple again
ca44c16e72 : LayerConfigMILSTMCell
f19ae690c3 : Update observer when attached to RNN ops
d450895a74 : fix typo (#4175)
787b9c5202 : Propagate CuDNN enabled to ATen library. (#4104)
dac5e6568d : Better error messages for blas ops with cuda.LongTensor (#4160)
16b7f3a35d : Clean up InputBuffer
78ea42e37c : Accept sparse tensors of corresponding type in VariableType casts
4f4e0df68f : Allow for broadcasting of distribution parameters (#4140)
1eae0ac8b1 : Update instancenorm.py (#4171)
991f03fbfb : Fix memory leak in JIT
3de8661184 : Disable SDT calls for all nets by default
ac2e368cb2 : Fix aten header inclusion
3842128ce1 : Fix gpu test for FCTransposed
8199edf5c1 : Refactor generation of NN derivatives (#4096)
8a254a0271 : Port batchnorm_double_backward to ATen.
d41b6c7daa : Implement remaining random methods through ATen (#4137)
dbbfdee4c0 : Implement FCTransposed gradient
28890b2046 : Add rnn args check (#3925)
0ab68b8db4 : Implement .enumerate_support() for Bernoulli, Categorical distributions (#4129)
6db9f6dc78 : Enable half communication for distributed (#4091)
c16a21b67d : removed the device_type assumption in adagrad_test
98143776f5 : SortedSegmentMean /LogExp Reduction CUDA implementation.
54342287fe : Look for NCCL in CUDA_TOOLKIT_ROOT_DIR
0000766566 : Gan for ranking alternate learning rate
34566f004d : Adding Ubuntu Anaconda environments
ae60ef12fa : Module syntax sugar.
3b99bb5dd1 : Add readme for docker/jenkins directory
8d358a1db5 : allow cudnn for fp16 batch norm (#4021)
ba93c031f2 : Moving distribution classes into a separate package
790933b430 : Build Redis support on Linux
77352fdbdd : Remove scoping assertion because it is not useful and causing errors
234591a809 : Support regression with output transform in MTML for feed
90f06860b1 : Update scripts in .jenkins
d84033dc6b : Add placeholders for issues/pull requests
2d07360938 : Fix compilation on GCC 7
53f9a0f03d : IPython notebook directory name changed from ipython to jupyter; also pass arguments instead of hardcoding --ip
e78d0e5a23 : Update SingleThreadAsyncNet
aeb7a3668d : Implement Variable.new (#4080)
eb3292bbf2 : Add builder for CentOS Docker images
8612b0bbd8 : Fix 'Undefined symbols _THDoubleTensor_digamma_one'
aed38c96bb : Don't set -fno-openmp for Clang.
05ebd21a36 : Implement reparameterized gradient for Gamma sampler (#3978)
77dfdbf96c : Ensure RNNCell variants don't broadcast (#4074)
fca617c62f : Suppress hypothesis health check in adagrad_test.py
93009b15e8 : Compute flops in conv based on output image size
b886498f62 : Don't use CMake generator expression for in-tree protoc build
a8b8614efa : Fix typo
c902f1cf98 : Allow specification of bool defaults in native functions. (#4089)
53b12693ff : Run NCCL tests in CUDA environments
a24c11329a : Fix out-of-place allocations
83162b2af1 : Expose resize_ and resize_as_ to Python (#4088)
f8ae4b6670 : avoid auto's in the lambdas in OSS build
1b25dfd204 : Parameterize jenkins user uid/gid in docker build
75c11d62b7 : Implement Variable.__invert__ (#4082)
fd2cab9ded : Rudimentary schema checking of operators
1c6595c8e8 : Add function calls and externs
0f8bdf61e6 : add option to config engine for benchmark binaries
8d8079a7c3 : Builtins for {zeros,ones}{,_like}
1fd1eaa119 : More complete beam search example w/ init code
71ab6f41ed : Post EOS penalty example
5c809de4b4 : Add missing derivatives.yaml input
1c96809cf8 : Bind cauchy_, exponential_, normal_, uniform_ functions to THPVariable. (#3945)
9ea576d068 : Implement neg for all types (#4075)
60c03bc09c : Implement apply_, map_, and map2_ in Variable (#4057)
f233a3ebd8 : Explicitly set default data type in seq2seq/translate.py
ea11c30df6 : throw new -> throw (#4059)
fc4d976a8a : Fix non-determinism in code generation scripts (#4063)
dc47319074 : Implement AssertOp
a6ff78457f : Misc. fixes and improvements
098ab27013 : Update beam search example to use new features
0365640d7e : Fix ConvTranspose
d0cabbde74 : Implement Variable.from_numpy (#4043)
3c1932c35f : Fix RoIWarp
38f13447bc : Implement Variable.tolist() (#4038)
090a23251e : Add Variable._cdata (#4045)
f92c5aa7ce : slightly simplified indexing (#4040)
6154670e0d : Fix test_while case with in-place add op + broadcast
188e709885 : Beam search example
fef095af9c : Set broadcast flag for binary operators
a53522e560 : Implement typed numeric literals
8610ea5e2a : ElementwiseLinear fallback
7ebd589801 : Added support to prof_dag_net and the GetProfDagStats operator to collect not only per-op-type cost but also per-op cost.
0a2c5d1ad7 : CUDA implementation of UnpackSegmentsOp
79ac146808 : Add if and while ops to brew
e70b117583 : Set of bugfixes for script compiler
d1d6c0b12b : Add CUDA implementation for ReplaceNaNOp
ea7652e011 : Add debug and fix some bugs in CPU fuser
0b7f1e5efd : Fix segfault during ONNX export
5241cdf546 : Implement Variable.numpy() (#4006)
71b1858de7 : Implement Variable.storage_type() (#4036)
5c13c6962c : Raise errors when num_workers == 0 in DataLoader (#4019)
046c11cd73 : Stod
a8250280bb : Py3 test fixes
ea56e0d424 : Implement BatchMatMul with Numpy-style batch broadcast semantics
7842b4b878 : Use warp shuffles in cuda varInnermostDim (#3846)
f01052ade4 : Use enabled in torch.autograd.profiler.emit_nvtx (#4032)
94a0c72089 : Delete _write_metadata and move _new_with_metadata_file into Python (#4020)
535a13dbc2 : Move renorm to C++ and expose cumsum (#4013)
84d8e81311 : Fix the symbolic for view
fcc142386b : Make pool export compliant with onnx spec
0d68ce9383 : Use integer division to fix failing test
8da31c240d : Revert changes in blob name in optimizer
7e1fccb8f5 : Add is_pinned, is_shared, and share_memory_ to Variable (#4015)
5ec224496b : Merge common part in CUDA & CPU implementations of AddPaddingOp
f83ca6338a : Shorten stack trace in build for C++ errors.
7bf6aaacd3 : add missing CMAKE_GENERATOR
d76f7a806d : Fix potrf gradient and enable gradchecks (#3861)
32500fe800 : Reducing array sizes used in pack_ops_test to prevent time outs during Travis CI builds
c8cc04bd85 : Builder scripts for Docker containers
9e46fca424 : Use ninja as the cmake backend as well.
739fa34ccd : Change ATen's gen.py script so that it can list all of its outputs before reading input files and doing string formatting.
4b6c8779eb : Fixes the NativeFunctionsCuda.cu intermittent build issues.
61a582da44 : Fuser now locates g++.
0cdc1f2f1f : Make TempFiles lifetimes shorter. Relax the amount of file syncing for cpp file.
f72fe0624d : Add a CPU Fuser (single core)
bcfe259f83 : Add streams and comms as optional arguments (#3968)
540a9c279e : Add LayerNormLSTM
5571d0187e : Accept longs in default_collate for dataloader in python 2 (#4001)
a9606580ef : Remove separate nccl installation from Dockerfile9 (#4003)
4eb8e12765 : Introduce scopes during tracing (#3016)
e1e08d631a : Always check cuDNN support in test_convolution_gradients
7ddcb91c7f : Add more ONNX symbolics
41897e3e78 : Suppress hypothesis health check in glu_op_test.py
cdd48a8575 : Fix typo in clang ifdef to fix clang 3.9 build
07904eaed9 : Moving tensorboard to OSS
1351152362 : Skip DeviceShiftTest if host has < 4 GPU devices
76d7bace47 : Add opencl logging part I
f2be3a4e5e : Allow specifying device to prepare_prediction_net()
710f6d6958 : Fix warnings and add alert to enable ninja when developing.
67f6b5b565 : Handle broadcast of ints on CPU side
ca7951b93d : remove unused variable
2c190d2f05 : update transformer code for layer_norm() API change
e5906db3e9 : trtrs backward (#3972)
232f8c73dd : fix flake
96cd3743f1 : Make workspace id type consistent with net-rewriting pipeline
0512597f86 : Switching to MPSCNNConvolutionTranspose for iOS11 and above
638b10d39b : fix softmax default dim for 1D Tensor
165d0897e4 : Implement distributions.Gamma (#3841)
932e484029 : fix doc change lint; (#3974)
b43c1b2bed : Fix and upgrade brew.layer_norm
fe12ac57a4 : Improve docs for torch and torch.Tensor (#3969)
b2865ef389 : Fix comparison warnings in scalar_tensor_test. (#3964)
3af2b8f428 : Adding length verification check to pack_segments
c681b03d37 : Add determinant function on variable; Add backward on svd (#3816)
3d1135c842 : Skip remove_padding test because it is flaky
80c8635a7e : fix math notation (#3962)
34beafcd00 : Add static_cast to Get*Argument to avoid compiler warning.
f80902c6fa : update Tensor.new doc
96c6652131 : improve Tensor.scatter doc
754ae49f65 : Documentation updates for ONNX.
de00aab720 : PyTorch now uses operator versioning.
1c0fbd27a1 : CuDNN bindings rewrite (into ATen) (#3666)
21731d7b53 : register gradient for lambda rank
a00d7a1bec : ushort2(gid.x, gid.y) -> gid.xy
cf07820849 : Enable SparseLengthsMean
0c588a500b : Replace sigmoid + xent loss with SigmoidCrossEntropyWithLogits for better numerical stability
ab0a7eb7bf : Add ONNX symbolics for several ops (#3956)
fcb0b0de26 : fix (#3953)
79f32b46c1 : Fix contrib/script build
8f2bc151d3 : Find ninja path using the python module
70ca83793d : Add support to emit compile_commands.json from CMake/ninja files.
0e54c3a989 : Significantly speed up the incremental build.
442ffac686 : Update CONTRIBUTING.md (#3952)
8cb32ba630 : rnn.py: Note zero defaults for hidden state/cell
fc3f88d8a4 : higher order interaction of embeddings
094df38e2f : Fix dependency build when pwd contains spaces (#3950)
7e9724142a : batched layer parameter loading for model initialization from an existing model
7374c981d8 : CUDA support for PackSegments Op
b766335753 : Revert D6403523: [Part 2] Support regression with output transform in MTML for feed.
2caca70a37 : Allow shifting of activations / ops to other GPUs in data parallel model
4c7219b3b0 : Implement matmul as a native function; use it for Variable impl (#3943)
0e21cd2eae : CUDA implementation of RemovePadding operator
6e9bb93a71 : Handle MPSCNNConcat edge case
6811acbef9 : Syntax for control flow in C2
c9e181f50f : Support regression with output transform in MTML for feed.
1661370ac5 : Signal handling in DataLoader workers; Timeout option (#3474)
3c709f5b26 : Add HANDLE_TH_ERRORS for THPFunction_saved_variables and THPFunction_saved_tensors
913a9a736c : Backed out changeset 4e1241fe65cd (revert a revert :) )
926ed2b280 : Implemented NCCL Distributed Backend for PyTorch with new dist APIs (#3435)
60d86bac91 : Fix the UpsamplingNearest2d's symbolic (#3450)
47fadc3138 : improvements to extend in ModuleList and ParameterList (#3505)
6f218cef25 : Suppress hypothesis health check in adagrad_test.py
eba0af4d5d : Enable sampling ratio = 0 in RoIWarp
929a11f920 : Add interpreter support for Handles/PythonOp/CppOp (#3866)
03829e55b3 : Remove const modifier where it has no effect
e105fee57d : Fix ThreadPool class/struct forward declaration mixup
c1babfa8e9 : Fix ambiguous brace initialization of std::array
6ae0d477ea : Fix cuBLAS arguments for fp16 dot (#3660)
0ba9e5a636 : Remove unused lambda capture
4e9fe7f168 : Fix wrong arg in operator function for MSVC (#3934)
ea28deee75 : use torch.cat in _flatten
4beb3ac3ab : Properly guard cudnn backward path - NHWC is still not supported.
f639625807 : Epoch duration may be based on batches_per_epoch or duration in minutes
38f166c13a : Async executor with less polling
0a434ff685 : Remove Function::is_executable (#3907)
67c3cbd5e2 : Optimizer: optimize transposes in variety of circumstances (#3509)
d4e5d9061d : Fix indexing with all zero ByteTensors (#3926)
14cc15e8f4 : fixed NCCL bug in data_parallel_model.py
157f949cef : Implement python scalar conversions via ATen; allow localScalar if numel == 1 (#3908)
af9fd35d82 : Cast tensors when loading optimizer state dicts (#3658)
51ca3a1a48 : Make sparse test also check that coalesce status of tensors makes sense. (#3171)
c25a1493cd : CUDA mode profiler fixes (#3754)
47918cab01 : clean up some dead build logic (#3633)
31af836412 : Improve fuser algorithm
e66c592d10 : Handle ops with multiple inputs in aten_dispatch.cpp
00fe1f7cc8 : Record trace before saving outputs
329891b44f : Add a warning when JIT falls back to AutogradClosure
4726d0cb7a : Fix exporting HalfTensor
709fcfda8a : Now actually fix padding (the tests are added in onnx-pytorch) (#3893)
453eb258de : add code comments to RNNExecutor + lint + rename some
dd1558dc8d : Improve stream selection
e91b75615e : Use ATen version of Variable type_as. (#3840)
a08909160e : fix bug in CUDA AddPadding when lengths output is not provided
96e15179db : Fix function signature in ATenOp for at::Half Set function (GPU version)
1ef54e3dab : Fix OpenGL 3.0
93e489d168 : Fix _analytical_jacobian to not require in-place grad accumulation
73f6715f47 : Do not link against libpython when building python bindings
6e2e623fa9 : Use list for *requires in setup.py
3da9d7971d : Suppress pytest filter_too_much health check
9b31280ccf : Have __sizeof__ account for size of stored elements (#3821)
558516fcdb : More docs for Conv1d Conv2d (#3870)
11c9bd6c98 : Allow target.requires_grad in l1_loss and mse_loss (#3876)
669a99b595 : Remove as much of Python from JIT hot path as possible
06408168e6 : Fix padding according to https://github.com/onnx/onnx/issues/261
3a5bbc2140 : improve PackedSequence docs to explain batch_sizes (#3878)
989e8ff781 : Implement is_sparse, is_distributed as native function, (#3838)
fca77c9e25 : Correct gradient of rosenbrock (#3881)
74fd79d889 : Set seed at top-level of common.py (#3862)
ed640010ce : Delete unused autograd functions (#3856)
9bbf4ee55e : Fix lint (#3859)
65e0d5bad8 : Fix void* wrapping in autograd codegen
ef70db09dd : fix some more mathjax (#3352)
ffd39f4c9f : added missing arg and improved example clarity (#3444)
754f3d3fe8 : fixed a typo in ConcatDataset.cumulative_sizes attribute name
92883f3444 : change doc for Adaptive Pooling
09b008f155 : Fix BUCK for caffe2_test
eb4344d6e6 : Depthwise F(2x2, 3x3) convolution
8495503c6f : Set a default input type
0954775d28 : AddPadding CUDA version
5250d7fd11 : simplify logic for weighted pooling using id score list
6dc1fc7e69 : fix padding_idx for sparse=True (#3842)
af58bfbb1b : Make integer parameters and buffers immune to float(), double() and half() (#3820)
48415d83c8 : Fix instance_norm_test.test_instance_norm_model_helper
b5ad8c8d16 : Fix CharType min and max values (#3843)
59b2654544 : reapply header change after xplat move
3ac2a20c5f : Fix DataParallel scattering for empty lists / dicts / tuples (#3769)
9468a1e24f : Add input type to caffe2_benchmark
b404cfe29a : Fix MultiLabelMarginLoss docs (#3836)
9c498aa523 : Implement Variable cpu() as an ATen method. (#3802)
c7b1d58b16 : Fix THP_export for python_variable_indexing.cpp
fd7bfaf4e4 : Fix errors in previous DataChannelMPI refactor (#3831)
fc0c8c2316 : minor refactoring in dper
26c5e5d5d9 : Use EIGEN3_INCLUDE_DIR for Eigen includes
b3e5166d4c : Run build_android.sh in Jenkins
12ce6c8b7c : Re-enable net_test
5215640a41 : Fix cosine_similarity's output shape (#3811)
cf3ca13321 : Improve DataChannelMPI (#3817)
c5e8048f58 : add error checking for FusionCompiler around old CUDA versions (#3753)
304e64e70d : Fix NetTest.ChainingForDifferentDevices
daa450d656 : add sanity check to model_helper.TensorProtosDBInput
4518793aa2 : Implement indexing in ATen (#3725)
dcaaf51100 : Support /sqrt(n) pooling
8ebf18b5b1 : Added MAGMA_HOME env var to specify alternative MAGMA root directory (#3809)
303ed8af44 : Allow specifying cmake build directory in the build scripts
77b78935f2 : More extensions
a81c63df83 : Fix pad handling in ConvPoolOpBase::SetOutputSize(...) in the legacy_pad case.
e0c8c539e7 : Backed out changeset 119623addbbd
335c7dc681 : Fix perfkernel compile error on clang 3.8
c52ca23447 : Always define outputs of ConvBackwardBackward (#3799)
a9ef76b9c6 : Reflect renaming of OS X to macOS (#3795)
ad3e619198 : Bring back CUDA_ARCH_NAME=Manual
4bce69be22 : Implement Variable.storage() (#3765)
10d24d8f84 : Add Tensor.slice() (#3750)
ee08120b46 : Move Variable conversion methods to ATen. (#3762)
0348e98cb4 : docs update
cf407213f9 : Clean up stochastic function related dead code (#3782)
1ba3e14608 : Throw Python exception from PythonOp instead of logging
38cd6b3bd0 : Fix run_test.sh mpiexec failures under virtual python envs (#3792)
4471e15b76 : BMUF cpu support
4d405a4430 : Fix hash.h compile errors in newer compilers (#3783)
3e4a777e44 : Correct JIT interpreter autograd function (#3760)
fa5324d2a3 : Update README.md (#3781)
2c39f3de99 : flake8 fix
1f64c2ef91 : Rename pyro.distributions.Multinomial -> .Categorical (#3766)
40179cd61c : fix cuDNN RNN weight tying test (#3774)
ca3fc59a9a : fix elapsed_us spelling
0fd9682305 : Fix torch::hash for MSVC again (#3767)
a9ec4ee742 : Detect aliasing in cuDNN RNN flatten_parameters (#3752)
0e99334efb : move print to logger
0a09fba3f6 : Capture CMAKE_ARGS in setup.py and pass them as args to build_local.sh
067f799e9f : Implement remaining Variable fallthrough methods via ATen (#3744)
5de880f3e1 : Resume from epoch instead of re-starting a worklow from scratch when we retry
931bd87e98 : Add setup.py
c4b0db5079 : Remove hard file offset reset in load() (#3695)
2453bc2876 : Implement clamp using ATen (#3739)
23ca19ae3d : Fix GCC 4.8 build
8d321b6cd3 : Improve observer framework overhead
309b2a0093 : Move jit test order from beginning to right before multiprocessing.
689cf7d480 : Reduce nondeterminism in test_jit (#3561)
b97dfc8a92 : Pretty names: support names set via export or Variable constructor (#3371)
5b4a438563 : Implement bmm symbolic (#3681)
1a02e72254 : fix missing DPM .values() and .keys() to viewvalues() and viewkeys()
9bfaec86b0 : Use multiplication instead of division in conv group size checks
35295e42c0 : Add static asserts to ensure all cuDNN algos are checked
d1fb8fdf03 : Improve IODescriptors in JIT arg checking
5b2026e75b : Add torch::hash
b909fce358 : Make macOS build use ccache via CMAKE_C*_COMPILER
67354da8cd : conv2d in autograd C++ (#3702)
24e83acbb9 : Enable sampling in evaluation
cc7f09a372 : Add cudaEvent support to the profiler (#3734)
ce3413549f : check cloned observer in RNN Executor
127a55ae49 : cast op for empty batch
8f5c0f9678 : Record stack traces for CppOps (#3727)
fd7e227826 : Re-enable mkl_sbn_op_test.py
066f26c7fa : [ATen] Introduce templatized native functions and implement is_signed.
d8dfaeeef7 : Add batch-based/row-based sparse from/to dense operator
6eca9e052d : Fix symbolic for Embedding and Upsampling and improve error messages
3bde37fbf0 : Listwise Ranking -- LambdaNDCG
2502ac082b : [ATen] Rename isDistributed -> is_distributed.
6dee02923c : [ATen] Rename isSparse -> is_sparse.
9a2b54e08b : [ATen] Rename isCuda -> is_cuda.
50d6d258a3 : Fix build breakage in ATen NativeFunction (#3729)
a3afca6fc9 : Minor documentation fix in NetBuilder
9f9d7e6ee7 : update gloo submodule (#3728)
983872899e : Linear and constant warmup learning rate policies
b96976fceb : Use ATen equivalents for variable element_size and nelement. (#3724)
39f0859749 : Use ccache for macOS builds if present
e4bb22ebbf : bump ios-cmake
99037d627d : fix OSX cuda build (#3722)
a8d99c145b : Move quant_decomp_zstd.* to share/contrib
067bc141c3 : Cached reader
2b5a38b1a8 : Add missing trtrs, orgqr, ormqr docs (#3720)
b09d66e60d : Fix a reference cycle when in-place ops on views save the output (#3679)
2300234c9c : Lint checks, small fixes
ef4b19f767 : Refactor ir.h to distinguish Nodes and Values
feb0a145c3 : Move Variable.var and Variable.std to ATen (#3704)
445cc1f5b9 : NativeFunctions: support backend-specific dispatch and SpatialRoIPooling (#3672)
737aba3fc5 : Fix cmake scripts for CUDA and MSVC (#3713)
2bc71d4135 : Forward args to .jenkins/build.sh to cmake
2bf4dec9ff : Add missing CMakeFile in caffe2/observers
65a1dbc93d : penalty for EOS successor
2792de0d22 : Revert D6331513: [caffe2][test] Fix NetTest
3bb2308a89 : Minor JIT improvements (#3703)
e73228b73c : Opensource styler_ops, norm_planar_yuv_op, and quant_ops
80c3f8fa88 : Fix NetTest
1c1519d7cf : Fix export for recent changes in ONNX (#3708)
f0306c12ff : add Mean Pooling distributed support
74367755f2 : Integrated GRU implementation into C2
5e9b445d38 : Implement VariableType::alias (#3707)
b888d3ac2b : Implement toBackend and toScalarType on VariableType (#3706)
c77f0cb5e6 : Attach observers to operators inside step net
1d198c4f8c : Use ATen for Variable.contiguous() (#3701)
f756d9d45b : Turn off omp by default
b431526dbe : Disable protobuf libprotoc and protoc build for cross compilation.
d478ece11e : Propagate is_volatile to the base when performing in-place ops on views (#3680)
0e522853bf : fix half uniform for cuda 7.5
47ac468504 : Remove dilations for pooling in onnx export and other small fixes (#3698)
f779f44c89 : Add ONNX exporter for glcgan
9cb8b43778 : Split off in-place NN functions (#3683)
1ab3fd1a29 : Fix Batched Matmul test accuracy
7605d196fe : Hotfix for ONNX BatchNorm export (#3691)
589ce4dfab : set CC and CXX only when it's empty
446f869a0d : Support negative dimensions in softmax and log_softmax
ba3b79b06b : Fix the missing import
b8f670eae8 : Fix windows build error
a3bf06c0c7 : Use ATen implementations for is_contiguous, is_set_to, numel, get_device.
c5b2c13433 : fix error in NegateGradientOp
c5bcd5560c : Adding zstd to build
e43ff32192 : Add a JIT interpreter (#3634)
b67acd2d39 : Move detach to variable (#3676)
283ca417cc : Fix ssize_t for MSVC (#3686)
83b36175fc : Fix CUDA 9 builds for Windows (#3684)
70b8f0ed47 : Fix elu double-backwards when applied in-place (#3687)
8701a2dfa3 : Allow negative indices in Concat/Split ops
5d61b1f559 : Update ATen operator in C2
7b047c161d : NegateGradientOp and test
4847f8c191 : Remove unused field in tensor proto
30068b5b64 : Fix function signature in ATenOp for at::Half Set function
564efd3521 : Allow 1->N broadcasts at the beginning and end to be fused (#3616)
c2ea3f66b3 : Make a concrete function for device_option equality
31e9ceeb4b : Refactor the observer code to use one function to report both net and operator
f600056f48 : Allow build with CUDA 9.0
e8abfd359a : Limit this fix to apple clang only
e9cc41885e : fix dynamic memory management for distributed execution
97e4743aaf : Caffe2_benchmark can benchmark multiple backend engines
4d152ab931 : disable sbn running mean and var comp
667c7d980b : Avoid misleading message about NODE_ID blob
8ce205069a : Fix stats reporter in calculating STDDEV
2be44ab242 : remove redundant "template" keyword
814cd7ade3 : Fix event test
fec5631513 : Updated NNPACK code. Original author is @Maratyszcza
c7cb6a795e : Record stack traces during JIT tracing (#3607)
25b166ed1f : add depthwise convolution terminology as a note
e33df2b88a : Add border-padding for grid_sampler (#3599)
30d06218cb : Solved boolean ambiguity for variables and tensors which contain one value. (#3656)
ea4432b3c2 : Fix CUDA builds for Windows (#3650)
73431f087b : Allow torch.load and torch.save to take pathlib.Path (#3589)
4fa94793dd : Bump version in master (#3605)
2bf70c137e : fix selecting deterministic conv algo (#3631)
0443c11f7e : Fix for cuDNN half precision RNN for pre-volta archs (#3613)
84c618010d : Remove redundant dimension check that produced maybe-uninitializd warnings
7160fb0801 : Fix setup scripts for Windows CUDA builds
95821ca4e5 : fix USE_BLAS detection in THGeneral.h.in (#3632)
1a58775e19 : Fix AppVeyor Windows build due to template chaining
0483304fab : Enable EXPORT_ALL_SYMBOLS for CMAKE (#3617)
ae5673741b : add option to do simple modulo
fc8532c89d : Allow serialization of custom types inside Tensor
efe4386d24 : Fix module load_state_dict error information.
cc8fd5bde1 : added #define __STDC_FORMAT_MACROS to tensor and storage code templates to avoid problems with gcc 4.8.5 (#3629)
c04ec84e1a : disable uniform fill large blob
3a6b38eb2c : Avoid unsupported version pinning for HomeBrew on CI
4971aec81e : Add /usr/local/opt/python/libexec/bin to $PATH on Mac travis
0440f3bf93 : Reduce caffe2 GPU topk test sizes
5478d0154f : Fix pthread detection for MKL
1f1612ee37 : Move _CompiledMixin to C++
02450fff38 : Expand autograd profiler docs (#3621)
7e1d795354 : fix for unknown ssize_t in aten/src/TH/THMemoryFile.c (#3612)
e8e29690ef : Add has_debug_def() check to net's debug_def()
d1c73eb407 : use size_t for rand fill functions in math
9b0990539b : Hack to detect when only one output is differentiable.
df433d427c : Only set_flags on differentiable outputs.
19515520bb : Make prelu an ATen op.
68b5d94371 : Extra sanity checking for derivatives.yaml versus Declarations.yaml
efb611a134 : Fix misnamed generator argument.
016af4ebf7 : Fix parsing problem with std::array<int, 2> (note space)
95ddfbc947 : Delete default parameters from derivatives.yaml.
dfcd2a73f5 : s/thpp/at/
690bfc0781 : Delete unused defined_if fields.
9cdc24f550 : Make 'name' occur first in Declarations.yaml
43e4e3cca2 : Some developer notes for ATen.
2da308f4b9 : Add expand_as/type_as to ATen.
664fb135af : More elaborate error message when expand fails.
8f3bef2292 : Add operator<< for at::Type
bfdd864631 : Automatically pretranspose FCs in BlackBoxPredictor
fe22e3deb9 : make summarize op support larger blob and more robust
7cedf80923 : add flexible topK op
43d1405d0d : Fix ld* conditions for gemv ger gemm (#3604)
d496f9b20c : Ensure that Variables are at least one-dim in VariableType (#3609)
0b476e6456 : CMake: remove unneeded dependency with OpenBLAS
febe45ebb4 : Disable NNPACK build on unsupported CPU architectures
4b8669b087 : Write checkpoint info to XDB at the end of an epoch
1bf717e17d : Raise exception when Variable.reinforce is called (#3555)
50009144c0 : add warnings if device capability is less than ideal (#3601)
12e4af94e8 : add better gradient creation error message
cc757acd36 : docs: clarify the difference between net() and net.forward() (#3596)
dd6d04ddf2 : doc: Normalize all true/false in docstrings to ``True|False`` (#3593)
9d4c2d743b : Enable the build for MSVC 2017 and Ninja (#3595)
555c51c846 : Fix build failures in MSVC (#3594)
b06c59e543 : fix warnings about _XOPEN_SOURCE redefinition. Every compilation unit whose headers recursively include Python.h need to include Python.h first. This is a known limitation of the Python headers.
0217ad29d2 : Fix OS X build, fixes #3573
25d3c25f50 : add more fusable nodes to the graph compiler (#3559)
285ce10dbe : fix linking order of nvrtc to force no-as-needed (#3583)
bf5932fb15 : Add missing documentation for replacement in WeightedRandomSampler (#3579)
d2784b6e5b : Link ATen against CuDNN when available. (#3582)
ec389f5128 : Fix cuda symeig (#3566)
aabfae0503 : CPU all/any should work with empty tensors. (#3581)
bcc8c8f696 : Support RMSProp in Caffe2.
adf883b7b1 : fix uninitialized warnings in THCUNN. (#3575)
4e3aa25139 : Unit test that compares net snippets after parallelization
2bc7fc8698 : Add Jenkins build scripts
547ac8c0b9 : Ensure aten build depends on NativeFunctions.h.
0509f401d1 : Update ATen to fix issues with old g++ (#3574)
e6fadfa76e : Relaxing checks for fp16 in BatchMatMul tests
15c523f836 : [ATen] Make size/stride native functions.
b2bbc7c091 : Enable building mobile directory files in OSS
aa911939a3 : Improve Windows Compatibility (for csrc/scripts) (#2941)
348e29c49b : Don't run CUDA tests for ops without CUDA implementation
1d57a2d54c : [ATen][Scalars] Remove Scalar from return types of functions. (#3557)
22d1e37540 : Have ATen build respect DEBUG variable.
4761b32f96 : make use of the average length of sparse features for init
e579ae75b5 : Fix error when default_collate is passed a collection of numpy.str_ (#3404)
be071d767d : Fix uniform on CUDA tensor to return in range [0, 1) (#3547)
9c3cb6e652 : Fix stride checks in gemm dispatch (#3548)
5e382894be : add numpy() and from_numpy() to HalfTensor (#2953)
8d2b9a08f4 : Some documentation for derivatives.yaml
fb186c0079 : Make atan2 backwards reuse intermediate computation.
7747078a89 : Support defining gradient for multiple inputs simultaneously in derivatives.yaml
07d30e9c3f : Delete obsolete only_registry entries in cwrap.
d719936b13 : Top level comment for gen.py
84b76a0712 : fix shape info in concat layer
daf2743bbb : Prevent segfaults from undefined aten tensors (#3482)
c75ab8167d : Fix double event record in RNN executor
dc10083fc0 : Previous PyTorch version info (#3549)
6c1bff4cbc : Generate native functions with const ref Tensor arguments. (#3465)
bb1b826cdc : Exposing emptyCache from allocator (#3518)
f3c7bb9bc1 : avoid unnecessary multiplies in derivatives (#3545)
ecbc4b0dc3 : Fix float uniform generation in TH (#3541)
9b54f8e59c : ignore digit in container's __dir__
5fd93b56fd : [master] Don't expose 0-dim tensors to Variable API.
9a020ea2ff : Document weights argument format for BCELoss (#3535)
534e8ecc97 : fix C_FLAGS typo (#3538)
4587a7686b : Make distributions docstring raw (#3539)
00d2befba1 : THTensor_varOuterDim numeric stability (#3533)
6fde0cb507 : Fix memory leak in THTensor_(addmm) (#3536)
99907f2eb0 : [ppc64le] add -fexceptions to aten build function for C and CXX builds (#3515)
77ddd5130b : Add reduce keyword for KLDivLoss (#3330)
db3f5f86b2 : Update ONNX IR we emit to version 0.0.2 (attribute discriminators) / fix Permute export (#3484)
29fc920305 : Fix MSVC build after major change (#3467)
6767db28dc : adds flag __CUDA_NO_HALF_OPERATORS__ (#3520)
d2ddbaaf8d : Fix command highlight in README (#3521)
6dd87dc88a : Merge vestigial Local.cwrap into Declarations.cwrap / remove standalone ATen build logic (#3522)
7d488544d3 : Fix leak of workspace buffers
cbb03b8db8 : add modulo operator
621fbd5c4e : Move flattening/unflattening JIT logic to C
22f596572c : Add torch.autograd.profiler.range
68116d7f84 : Fix test_torch.py test for Power see issue #3277 (#3517)
e2f33eb6a2 : add doc for sparse_adam (#3519)
aa93a3d633 : -1 indexing fix in THCApply for pre CUDA9 (#3457)
fde355f7d4 : Allow in-place operations on views (#3384)
d6a8d28d65 : Simplify ATen Build (#3496)
50a63ee6fd : Fix and speed-up norm_backwards (#3481)
3d06a1e075 : Make THCTensor_varInnermostDim numerically stable using Welford's algorithm (#3425)
4e5b25ed47 : Use ASSERT(...) rather than assert(...) in ATen tests.
8fd171a6fd : add test_index to test_cuda
0bb0ee883e : relax index dim check
f76d6c029c : Sparse Adam optimizer for sparse gradients (#3137)
c2626f6031 : Fix error message for type mismatches with sparse tensors (#3504)
122d884bbf : add CMake flag for disabling contrib builds (#3508)
74d1bb54e6 : Add single argument version of torch.arange (#3494)
c2bdda1224 : implement `__dir__`for Variable (#3501)
84067bc17d : Make RowWiseSparseAdagrad type/shape inference compatible.
5de7f9e731 : Tidy up CUDA notes
5c881f00a0 : Add REINFORCE rule to distributions doc
0ce65ede86 : Revert D6224054: [xplat] Switch to open-source NNPACK
0b661035f3 : pointwise cost function
8cb7e5bd5b : Don't assume construction succeeded in __del__.
ac099ceda0 : Set debug_net_def for NetBase
1021402136 : Compile nnpack and pthreadpool with -fPIC
fe4e14ed29 : Fix fill derivative (#3483)
5616d41421 : Switch to open-source NNPACK
2bdea8b451 : Add ONNX symbolic for Elu
54972458e1 : Build NNPACK and pthreadpool as static libraries
ce62c65c18 : momentum sgd
7ac341c862 : Fix EventBasics test
ea9fcd5c47 : fix copy-paste error in #3263 (#3476)
f7a459b28b : Fix overflow when using magma (#3470)
20feef45bc : NNFC operator: an FC with noTrans noTrans options
68ed66a2c5 : Faster BatchBoxCox Operator using MKL
13fde88b83 : Install magma in cuda 9 docker (#3469)
b71cebb11f : Fix LoadModel() in resnet50_trainer
42de0df411 : Add assertion that 'pos' is in-bounds (#3466)
a8efd88cac : Fix warning in jit/ir.cpp
1b5c843a9c : cleaner logic on sparse feature hashing
1149b9bbb5 : Polling async net executor
8548dd2486 : Fix intrinsic in perf kernels for int8
583bc63c98 : Fix boundary checking in 8-bit sparselengthssum ops
e11d2b9c9c : Better error messages for ATen tensor types (#3449)
596a335851 : Add gradient checks for take and put_ (#3460)
9136dcdb60 : Make grad_output optional in gradgradcheck (#3459)
cbedba373c : use valgrind to make aten test pass
ebae2f6c71 : MKL Sigmoid op wrapper
a7644e4f4b : Extend rewrite functionality to handle multiple outputs.
502aaf39cf : make sure stdatomic.h is included when checking for ATOMIC_INT_LOCK_FREE
81e56ff8aa : NO_CUDA for travis
531a20b312 : enable ATen in the travis build tests.
f6dac327df : build fixes
88d56cc198 : fix setup.py paths
5aa5b572e4 : update build so that all of TH* is in libATen
4424b3e352 : Update CMakeLists.txt in TH* libraries to support static builds.
320ff3ad64 : remove subtree of ATen since ATen is now inside pytorch
d792c21f72 : move TH* folders into aten/src
39fc9f9c11 : make stack only a function
e3b82a7665 : use private to prevent double linking
5dfbc3d6c9 : whole archive
f1b7464119 : create file so that find_package works in CMake
185cd0af46 : modify ATen/TH build to make/install only libATen.so libTH is built statically and folded into libATen
f3e4dc176b : add TH_LINK_STYLE, which allows the universal use of STATIC libraries across TH* and ATen
9398e0c0c1 : fix CMakeLists for new directories
8e584a5cd5 : directory restructure
5c49df8875 : from pytorch
1420375ead : Change 'sizes' parameter name to 'size' in expand native function.
c38defd201 : stack should not be a method. (#156)
360203a2a4 : update nn
73117ea5ba : Implement stack as a native function.
97bc100b92 : Fix handling of inf and nan (#153)
f1c5d8c4ce : Add an at() method for indexing. (#152)
07a049d900 : update dlpack header and converters
cbebcb347b : Make valgrind optional to make our build pass
5d84013249 : Correct dimensions for reduction functions, squeeze, unsqueeze.
682aec30b5 : Update ExpandUtils.h
cf348bcdee : tighten hasCUDA check
ac1abc4cb8 : Add comment explaining return of dim() when tensor is a scalar.
d5d6dafb04 : Address review comments.
a10030eec7 : Represent empty tensors as size {0} tensors and fix scalar checks.
c369d4da85 : warning fix (#142)
0e9e18303b : Adds permute and as_strided to ATen (#137)
8cdd7650ee : Make toScalarType and toBackend virtual
6b113b1d1c : Make size, strides, dim functions const.
fee9195821 : Change is_same_size to a native function.
7273906eac : Add unsqueeze of scalar to wrapdim_test.
1cde661df3 : bind newWithTensor in ATen (#129)
dd0c95d552 : fix merge problems
c11349a9b8 : missing code from pytorch
a03621462e : missing entry
bdc98a0e7a : The at::cat should default to dim=0
60e7e96c7a : update docs
32ecaa0870 : regenerate docs w/ recent changes (#126)
8e13a95357 : Support default parameters for native functions.
930b98cacd : smarter backend option
b06d8937f5 : sparse cuda and get device (#122)
0c1ce9feb2 : Conda packaging (#119)
4afd630632 : remove CMAKE_CXX_STANDARD stuff in favor of setting --std=c++11 directly because parse of FindCUDA ignore the former approach (#121)
c58913dc95 : Remove C exports and rename AT_API
ed46386c85 : Fix missing <functional> and export decorations in lib/ATen
d84429b526 : Revert the enum changes as discussed
5683144b97 : Fix typos in orgqr and ormqr
db1292a509 : Add missing string include, fixes https://github.com/pytorch/pytorch/issues/3192
f074a7a95c : Rename value to other, wherever there is both a Scalar and Tensor overload. (#115)
67c2f0ead9 : Separate out native processing into process_native; remove (TH)Type specific logic.
e73f6c211b : Support 'native' ATen functions with Tensor, (base) Type, NS impls.
b5d3edfd7f : Update DLPack tensors enum to avoid binary issues and expose one function
bacca0eba1 : Change softmax and log_softmax to take int64_t dim rather than int.
6849554ac6 : Squash ATen warning
8257f3e6e2 : Update log_softmax and softmax signatures to include dim (#106)
164a8fbaf5 : nit: move ATenDLMTensor to cpp file since it doesn't need to be in the header
3bc54bf2d9 : [dlpack] Memory management for dlpack
30bbeb8b87 : Relax Scalar::toXXX conversions to only check for overflow
56a241f97a : Every argument controlled by the output_mask may be null
e9be595081 : Add additional erf, erfinv, and additional nn functions
2295a15b8c : Add bindings to additional NN functions
8e312ab2e6 : Expose is_nullable in Declarations.yaml
112f183dc2 : Support broadcasting in copy
c9889d934f : Use pointer equality to compare types
a6335b54d6 : Combine comparison methods and functions
54addcf0af : Support wrap_dim in nn.yaml
dc9c5806a3 : Expose the THGenerator* via unsafeGetTH on at::Generator
5cfa890926 : pass values with flags
9f5c0a02a7 : add a deleter callback to tensorFromBlob
004cd36efe : Add additional comments
0243338603 : Generate PyTorch-style NN bindings
c8c967fa43 : Improve Declarations.yaml: (#81)
37d9ad748b : Refactor out TensorBase from Tensor
25b97aebdf : Fix copy and move constructors
43fbe58dc0 : Remove has_full_argument_list
986c577e93 : Fix lint
9b0b26d037 : Add check that tensor is defined in Scalar constructor
937950e064 : Move default arguments to function declaration
3d80bd31d8 : Fix build for MSVC
32057edbf3 : Fix build (#75)
9a6334fead : Implement _unnarrow (backwards of narrow) in ATen.
f3e2d6669e : Enable wrap_dim in Local.cwrap.
211c717e53 : Make all dim arguments int64_t
7d1c01a86f : Converting dlpack tensor to aten tensor
6826a5c467 : adding a simple class for converting atensor to dlTensor
6b61d72eec : Test stub for dlconvertor
21d98db9b8 : adding dlpack header
99141e62a6 : Fix build failure in MSVC
0acaf1ee6b : Update generated docs for post-const Type changes.
dec470797b : Mark all (non-static) Type methods as const.
73a31cfed2 : add merge_all script for subtrees
9ed7ab82de : Win64 support for lib/ATen
aba1bb1d46 : Micro optimizations in ATen
9a01d3f374 : add support for custom python
19770db681 : Make 's_' functions on Type public
33e94adaa9 : Mark unsafeGetTH as const
ec539abc6e : Move wrap_dim code to Utils function to minimize generated code.
054a9719f1 : Generate wrap_dim code on derived type rather than base type. Either should work, but code feels more natural this way.
e33d154bcc : Support wrap_dim specifications from cwrap.
21df48f7b4 : Use cast instead of literal as a temporary fix
709dfba95a : Fix default constructor argument
2d5764539f : force NO_CUDA to be specified to disable cuda. add pytorch's FindCUDA so that it is possible to get ccache to work for nvcc. make excluded notification more concise.
752ebc58cc : Handle scalars that are not backed by tensors
d23a83add4 : Add accessor to underlying Tensor
2a2c989e4b : zero_dim_to_one and empty_to_null can't both be specified
af184b562b : Rename 'canonical' to 'has_full_argument_list'
463fb29710 : Include non-canonical functions in Declarations.yaml
efbc1ad2a8 : Make Scalar default constructible
bfeacce4ff : fix static linkage and make THD statically linked
bfc85dbe0f : Handle default arguments in base Type class
3e960b759f : Use CWRAP_FILES_BASE if defined
b4900260ef : Add missing const qualifiers
adc9cf15ed : Fix typo.
e5f6057f86 : Remove unnecessary early conversion to IntList and make expand functions inline.
f2168578f0 : Remove scalar expansion tests.
3ca164a6cc : Address review comments.
8b049e1c46 : Support broadcast specifications from cwrap.
1f0461e76c : elementSizeInBytes for types
e84634e4d6 : provide more information in Declarations.cwrap
c23dfe5ddb : update generated code in documentation to match changes
2d018cc24e : sync Declarations.cwrap with pytorch
11807f99b4 : Add rudimentary support for calling a few sparse tensor functions.
a1438bad5f : fix issues where scale gets reported as 0.0000 in output
8fd8cf7b24 : Small readme fix
b57f82a2cb : made the repository available for embedding into other projects
3fc3289745 : add some asserts to basic.cpp
fefc2a2c9b : add valgrind to CI
187c4ffdd9 : allow retain to be specified for unsafeTensorFromTH
99b94fe73f : fix osx build errors related to long/int64_t
8c427b7715 : Note [Undefined-dim versus 0-dim]
c0ad9380c0 : fix a bug where some scalars were getting truncated to integers incorrectly.
e3322069ec : Fix build for CPU only machines
78820919a5 : return a sentinel value when THTensor has undefined dimensions.
8380a1a110 : fix lint
6fe5126a0a : Static linking against libstdc++ in Binary Build mode
7a11627a13 : Make clang shut up about class/struct mismatch.
7af130f6a4 : still generate multiple versions
b7f793c618 : add support for Null Tensors to functions
070b2ce33c : lint fixes
96b4cfdba0 : produce a Declarations.yaml file that describes the Functions/Type/Tensor methods that the framework produced.
01b8f07624 : basic travis script (build + pylint)
7b9b538aa5 : operator== for type
95e4c4ff87 : allow type inference to work on TensorList
f6ac105d9a : Fix handling of if_true/if_false in ATen
f4149b7d0e : Half fixes for ATen and CUDA 9.0
e4e960a42b : lint fixes
782be20ac2 : fix bug in method declarations
3281be7e3c : add isCUDA() on Type
1da1a50ee8 : write generated_cpp. to a file rather than as output to make error reporting clearer.
5136bf2dee : don't clobber gen.py error, fix for old versions of python
38681b47e7 : fix lint
afdc44c73e : match PyTorch syntax
2fff1dd056 : checked cast does it all
2cd8bbd2bc : basic cat implementation in ATen
52a561e583 : Fix ATen build for debug python
052ad8bf04 : Fix a few C++ warnings
a19b9d0a1c : add some documentation to Tensor
e73ab1c4c4 : add basic gitignore, thpp -> at doc fix
67dba8144d : always use a custom default float
0c224445d1 : python style fixes
f5a57e0f7e : support unsafe functions for getting/constructor tensors from TH objects for backward compat.
e7a64b8e95 : lazily initialize cuda so that we behave similar to PyTorch
70b1401be5 : osx build issues and clang warnings
fc429af20a : remove Sparse from dispatch for now, will add dispatch variants later
dbba384bd6 : Always include THNN in the build, don't check for CUDA twice
b8152eba8d : fix build issue when cuda does not exist
23bda6a36e : bind THS THCS, leaving all operators unimplemented. This is required because THPP can represent Sparse tensors even though the wrapper doesn't implement any operators.
eed5e2d143 : adding build for sparse libraries
d2345ff1af : enable warnings in build and fix warnings
650f24b569 : update readme and add assign_(Scalar) variant
328a250b64 : fix a bug with scalar handling by simplifying the maybeScalar check.
ec4cb72a0d : handle select and operator[] style operations
622350d3e9 : add checks for scalars on output
d460f6725d : start adding rules to propagate scalar to results
dadc23cafb : Scalar objects can now be backed by 0-dim Tensors.
4b2ea3ff2f : missing fixed allocator files
6100191fff : scalar flags added, and used to dispatch when there is a scalar variant of a function. broadcast annotations are used to figure out when a scalar s + A should also be converted.
f62def0701 : set TH_INDEX_BASE to 0
e7a316e1ee : update with tensorFromBlob doc
70e3951eca : allow tensors to be constructed from views of external data. Support creating new tensors that already have a size/stride
6b285cb37d : improve error reporting for undefined tensors passed as arguments.
7f376a2c46 : tensor.data<> also has toLongData() variants. Scalar now also has .to<T>() variants
e7436022f4 : document accessors
e32210658d : add readme and generated files for Type/Tensor/Functions to a doc folder to make it possible to view headers without building the library
7a5987123f : rename TensorLib -> ATen
2c2648ea38 : split Local.cwrap from Declarations.cwrap so local ones can be modified without regenerating declarations from pytorch
4e3b1c46d9 : adding xt makefile
37f5e3ff78 : import xt data/meter directories
f6f6fa2464 : add operator [] to do select
56f1019fc7 : add overloaded operators for tensor object
288fd61c0b : add accessor object for fast(er) access to tensor data when the dim and scalar type are known.
927ac2bb1a : add script that can collect all the cwrap declarations for external use
3976333bc6 : fix build paths and allow for cwrap_files to be externally specified
9879566a3b : switch dispatch to function
92c2aad894 : more flake8
0ba09a843a : disable tests from cmake for tensorlib
5d330de56e : autopep8
a83f62e36f : remove PUBLIC from target_link_libraries in CMake
e65bef39df : fix handling of methods that allocate returns
8f9c222fc5 : make copy copy_out and add copy_ to be consistent with argument/output order for the rest of the library
eb9e6165be : fix error messages
76a2d7bff8 : port optional argument declaration handling to shared code
8454b87034 : reuse declaration option sorter from common_with_cwrap in ArgcountSortPlugin
595ff0d3ed : move set_declaration_defaults to a common location
e88ae5eb49 : port xt basic.cpp
7897aac109 : get rid of lt_t variants for now. These will not be exposed in the C++ library yet.
424d5d1faf : fix generator bug, begin porting tests
d69f2e4ff9 : import xt print code, implementing copy and type conversion
eb20c8daa2 : initial binding of TH(CU)NN
16c3b7e3f4 : return references when the returns are actually just one of the arguments.
ea77e3ddef : auto-generate const mark on tensor based on in-place
285a820877 : remove TensorRef. Instead correctly mark const Tensor & and Tensor & in arguments depending on use.
53eafff042 : addressing comments from pull request: processors codemodded to backend and other minor changes
6a3e5510dc : fix context initialization to use https://stackoverflow.com/questions/12302057/c11-safe-double-checked-locking-for-lazy-initialization-possible/12302355#12302355
415449470e : autopep8
cbb798fbca : add generator for out-of-library dispatch macro
e178b5c9c6 : fix duplicate symbol issue
514b31c5e5 : support multiple returns
095cd734fd : resize and zero handling
79bb2d842a : fix a few before_call cases, and annotate the resize info in cwrap
ab7517d888 : changes to make cuda parts of wrapper compile.
5d1fd0cab1 : add TensorRef so that we don't refcount++ on argument passing
07aacec83d : example things
29c0dadfaa : implement size() stride() and formatting for IntList
b3b61d6596 : long -> int64 to avoid hack in Scalar
18095c713d : inline the static methods/functions so they can be optimized
7e98cabf25 : switch Tensor to ref counting, using pImpl pattern
17b88322b0 : make type statically dispatched
1beb1732bb : switch Storage/Generator to be returned as unique_ptr
be6ec51140 : mod to use references rather than pointers to make API look correct
4496398eee : add a default type to make the library more ergonomic
af791fbc83 : handle strings as bools, now compiles for CPU classes
d38b4e97c2 : more progress getting it to compile, now makes it through a few CPU types and fails on Double
1420566199 : fix some ambiguity problems
1ce4a51885 : checked casting for scalars
e48a14fecc : logic fix for result allocate things
d06ffcc5ca : array ref and storage views for THSize/THStride
d1ef531b09 : to env type
cdb06e17a4 : more fixes to handle a lot of cwrap
8df54be9d7 : some changing before generalizing to more types
4738577036 : integrate checked_cast
4936ce0c2f : generating code for neg
fe1c286f48 : add cast that may or may not work
4f1e04b615 : fix utils (oops), also add prints
bf07aec920 : listen to variants
ff89a39d41 : add assert function
34ee792c11 : more scaffolding for emitting derived functions
f94a145bfa : more scaffolding to generate. still need to generated derived
9f0ce2666e : add stuff to process each option in the right place
33dab3c593 : fix cuda build
acbd569710 : add places in templates where we will put generated methods
ae0a749258 : add flags to be able to build without CUDA
486a606d0d : rename types and processors to match naming in gen.py, allow for [[CPU,floating_point], [GPU,all]] style pair listings so that we can simplify the logic for elaborating pairs
92aee309fd : process types and processors
bf83194db7 : sanitize names
8e29bb52e4 : add sort...
365dfee37d : Option elaboration
842f94b320 : initial sanitize
20441712d1 : add declarations
ef37e9d9ad : infra to load yaml from cwrap
d9783e2293 : fix header files to be in TensorLib
0902ec3df3 : put a fake example op in to understand how dispatch will propagate.
8729715051 : add tensor skeleton
cb0366c6ca : adding Type object which will handle dispatch
ca3dd74c55 : generate cmake outputs using script
0e0c0ef89e : add storage to generator
c64b031fbf : Initial commit of framework for TensorLib
3003ebe67a : Replace None grad_inputs with zero tensors in some cases (#3433)
b07a9e1219 : Fix dropout state restoring
b1ea066836 : Remove duplicate Docker dependency
8b1b06d723 : add CUDA_DEBUG build flag (#3419)
48fe5d4622 : Move select and permute to ATen/C++ (#3421)
066db5dea3 : Don't rely on squeeze_out in THD. (#3446)
dfaccc96b7 : add dockerfile with cuda9 volta support (#3445)
d0accb85e0 : Send/Recv C++ portion
8d377617e7 : Fix MKLMemory::CopyTo for case where shapes don't match
7244d27220 : Add an EmptyDeviceScope (i.e. allow setting CurrentDeviceScope() to None)
d96a5ddb1b : Load/Save/Reshape in MKL via fallback
1fb68fd371 : Fix FC op invariant
14f95c2782 : Updated brew SpatialBN to use initializers
e4af5e4e04 : Update the sample rate function call since the API is changed
afdf50cafe : Move jit/assert.h to csrc/assertions.h (#3442)
bed30c1582 : long* -> int64_t*
9b2117ed87 : Fix MSELoss docs (#3443)
fc7a68d147 : fix lint
4108feb27d : fix OSX cuda build
2ed64d13db : Generate globally unique workspace id for hogwild threads on all trainers
9ca8b321f5 : Skip cpp tests if CUDA not available.
5388948b59 : CreateLocalBlob for workspace
ef48ab0bb3 : Update observer sample rate
7c2804ee90 : Add support for doing broadcast with single elem dimensions at both ends
2c10b13eeb : Pass CUDA_NVCC_EXECUTABLE to NCCL build
53b01527f4 : Improve NYI error message to point to VariableType
6e17e73701 : Register VariableType methods in autograd profiler
72a5bb3c09 : Remove possible static initialization order fiasco
b5c053b1c4 : fix fp16 issues with resnet trainer
66d24c5067 : Update the ONNX doc
0e38d3bbb3 : remove thpp library (#3405)
df0bf06385 : move type enum into THD (#3403)
b544882335 : ATen in THD (Part I) (#2288)
b7f5bc506e : Make inputs/outputs return an ArrayRef.
d4abaa4b9e : Move ONNX broadcast fusion into separate ONNX pass, fixes verbose printing.
247d50e2ad : Improve const-correctness of JIT.
b043a74919 : fix softmax doc (#3337)
638f0b5d78 : Prevent numerical issues with poisson_nll_loss when log_input=False (#3336)
91af122d43 : add no-as-needed for THRTC
ae48a394b7 : Count hits/misses, add statistics printing. (#3369)
6214487fa7 : Add reduce keyword to L1Loss (#3366)
bf4c269bee : Implement reduce keyword for SmoothL1Loss (#3382)
3cb34744db : adaptive pooling supports only specifying size in certain dimension (#3127)
d77b94495d : Pass -DOMPI_SKIP_MPICXX=1 when building C code (#3378)
88d9ebc850 : lazy-load nvrtc and libcuda (#3408)
fa5efab669 : comments and case where not all sparse (#3370)
7c0b16c140 : Add torch.take and Tensor.put_ (#3263)
d905a90f0b : Clear out eigenvector tensor when eigenvector=F for symeig (#3411)
cf256ee268 : Added tensor op check for cudnn rnns (#3409)
e0e4b3a3b5 : Fix strides 3D bias descriptor
397793d61c : simplify beam search code
f8cc285e37 : Add explicit build dependency on NNPACK
81b995514e : Make THTensor_(var) and THTensor_(std) more numerically stable (#3410)
3c00c0169d : Make mm grads column major when the input is column major. (#3406)
db25f8602f : Remove order by clause if it is not needed. Increasing timeout from 10mins to
6fef6f6dee : fix upsample1d (#3407)
d4a0ec62dc : Typo fix in torch.median (#3399)
00567a14fc : Clarify Slice operator documentation
8cc30e4895 : Fix the Fusion Pass (#3362)
690256c18c : Remove MSELoss test module in favor of wrap_functional
c10898f8ab : Revert "ATen expand symbolic"
7b5ac333ad : Update README.md (#3392)
3f6fccd1a8 : fixes for torch.nn.Hardtanh (examples and CPU implementation) (#3391)
dce525ab6b : adds sample_n function (#3249)
bd8bf4a86e : enable remaining tests and fix a subtle issue in ConvBackwardBackward around sizes being a reference
b46ee946d9 : add double backward for ConvTranspose
429d66549e : If available try to use requests instead of urllib for model_zoo.load_url (#3280)
e4a3747cd8 : Add unit tests for casting onto scalars
c2e8b7aafe : Allow casting Variables onto Python scalars
54bfa88eec : Allow casting one-element Tensors onto Python scalars
f7b15c52ff : partial revert of D6155510 to fix a race condition
cec27b8134 : AddDistributedBlobsSync
3bfabb4d5f : support float16 input for operator SparseAdagrad
47f999b814 : ATen expand symbolic
669ec0ccba : Added FP16 compute support to FC Op
3e6e81da46 : Dispatch trivial variable operators to C++ aten functions. (#3372)
8cd0df020c : make sparse (new) functions conform that storage is not NULL (#3381)
7d096ff7e6 : use CAFFE2_ENFORCE_EQ for more detailed error message
eac0942f6d : Add more nn docs (#3374)
a5dbc254f8 : if git is not installed at all, no subprocess exception will be raised (#3379)
d38fccc586 : Debian/Ubuntu comes with GCC 4.9.2 and it does require -D_FORCE_INLINES (#3380)
2be8bd1880 : Add docs for ByteTensor any()/all()
1ae10a4831 : add test to check zero_strided tensors in blas level 2 and 3 functions
d04574b1fc : ensure BLAS/MKL is not used if stride values are not supported
86e3e008e0 : optimize RNN executor subnet construction for forward-only models
acb73c729b : Space is missing in __repr__ of conv (#3229)
28f3d50f9d : doc: Replace nclasses with C
71d731fb57 : Fix documentation inconsistencies for some loss classes
b7a9f51de3 : In BatchMatMul, add support for accepting inputs >=2d
8fbe003d4e : Miscellaneous ONNX fixes and behavior changes.
40f7f6e095 : Improve handling of 'expand' (broadcasting) in JIT and ONNX
2e42272cc1 : Make DataParallel a no-op when CUDA not available (#3318)
bbafd4fa90 : Fix native compilation on ARM/Linux (update NNPACK)
4f33b136d8 : add tests for the previously failing coalesce case
0b89a68111 : fix sparse tensor coalesce
bb7b630953 : fix pynew gpu_guards
91a8d3325e : test sparse dp, broadcast_coalesced, reduce_add_coalesced
01be4d6b20 : sparse broadcast_coalesce and reduce_add_coalesced
3a0aee71f3 : fix sparse tensor .cpu()
618026e999 : implements operator + for Dataset class (#3180)
e0d7de5b61 : Fix bug introduced in a recent commit
a0ce84e476 : fix triplet margin loss documentation (#3339)
6d2e39559a : Replace Variable constructor with a static_cast in tracer
a381fa10a5 : Add a hack for RNN export to ONNX
9107110d3a : Add sparseTensor.new wrapper bindings (#3329)
820ac0df2b : fix mathjax notation on softmax/softmin (#3338)
42ffb1ae07 : support non-normalized weights
7b00adf5d3 : Add CUDNN_LIB_DIR in rpath (#3255)
2d0667233a : Add .dockerignore. (#3333)
204044a522 : Symbolic representation for unfold using ATen (#3334)
ac8f56656d : Adapt ONNX Slice op changes (#3316)
dc6c9e8df8 : Fix compilation without numpy.
e4752518a6 : Tiny optimization to AsyncDagNet: wait on fewer events
de1f4e69dd : raw text (#3327)
f3078dec64 : Add cuDNN handles to CUDAContext
86dc6e0837 : Added inverted FP16 Initializer
d8f3c601e4 : Add reduce keyword to CrossEntropyLoss
9735ddd899 : check_env_flag now ignores case (#3317)
ee3baa2ed4 : Add shape checks and print more info in parameter sharing
7b7dcaf269 : Initialize presence tensor if data is empty.
0b0d5b2b1d : Add tensor output that gives the sampled values
879e39ea5c : Distill loss with SigmoidCrossEntropyWithLogits
d56713680d : Fix const modifiers on VariableImpl
86d0c24b6a : Dynamically find min log scale #3289
fa0f3cf98a : Re-enable and fix most JIT tests
61afb0d519 : Autogenerate ATen dispatch for JIT nodes
869bdeb936 : Symbolic implementation of Index supporting tuple of slices. (#3294)
e0fa72455d : Fixes the checkpoint test.
545c0937fb : Making a module option for Caffe2
c3a4bc5d73 : fix asan-error by removing SHOULD_NOT_DO_GRADIENT from .cu file
3853d5da97 : Add reduce keyword to NLLLoss and NLLLoss2d (#3080)
0664b30612 : Update NNPACK submodule
bdeee47d33 : Add zero, zeros_like, _dimI and _dimV for sparse tensors (#3271)
5760b036fb : Fix pack_padded_sequence to accept inputs of arbitrary sizes
cbe7b8b636 : Adds permute and as_strided to ATen (#137)
21ff182809 : improve padding code
a99506f2fc : fixed error: namespace "std" has no member "min"
6e33ae79df : Add gradient op for WeightedSum op
63297e1a1f : RunNetOnce->RunNet (removes rnn_executor overhead)
b3b7203b40 : Symbolic representation for mm (#3290)
8afbdd8dcf : Make toScalarType and toBackend virtual
2bcca48a62 : Make size, strides, dim functions const.
c03799e8eb : Change is_same_size to a native function.
715ca3a2c8 : Add unsqueeze of scalar to wrapdim_test.
fcdd394f66 : bind newWithTensor in ATen (#129)
5bb8ed67e3 : Compute GLU for an arbitrary axis
817eaf6b1f : Build NNPACK using its own CMake scripts
4819197a40 : fix merge problems
3b26b48d90 : missing code from pytorch
699e47d380 : missing entry
39359afc84 : Add rank loss for retrieval models with random negative sample
a7c5be1d45 : Document CUDA best practices (#3227)
837f933cac : remove 'path' from key_averages header
a65db4e956 : Use ATen for torch.cat, torch.addmm, and friends on Variables. (#3286)
d67624173b : Change RowWiseSparseAdagrad assertion message
b46ced4aab : clarification in docstring of Module.register_forward_hook() (#3279)
b3642b3e65 : Softmax/LogSoftMax refactor (wrapped up) (#3245)
e43a63a968 : tensor: Ensure that the tensor is contiguous before pinning (#3266) (#3273)
b5170c8bf1 : improves pack padded sequence operation runtime #1788 (#3278)
9989bb1a43 : Export index constants as long, not int (onnx-caffe2 needs it.) (#3274)
241b9f6c14 : disable rnn executor for beam search
48911e116d : The at::cat should default to dim=0
e760e63244 : Handle remainder=0 case correctly
df71f2aef5 : ONNX export for split.
4b1e85d266 : Remove split/chunk python autograd.
fe0ac0f7d0 : Support native functions in C++ autograd automatically.
5fc122bf39 : Fix to #2236 - tensor.numpy() checks that no positional arguments are passed. (#3224)
2e4d8aa530 : Added FP16/FP32 MomentumSGD + WeightDecay Update Ops
9f6a41d63d : Support default parameters for native functions.
f9d002d9f7 : perf improvements for depthwise convolutions (#3265)
a0aa6d0e24 : expose flop annotation to python
0b8b9cf928 : update
d131219742 : smarter backend option
5691b0b8d2 : Fix the Slice changes in ONNX (#3216)
388a1b1e66 : Added FP16SgdOptimizer
ed08533a1e : Add CUDA version of ScatterAssign
cc5a948e62 : Fix clang-802.0.42 tuple overload bug, fixes #3234. (#3252)
1b71bf1d36 : Updated resnet50_trainer and resnet for more FP16 support
1a0e4e1b00 : sparse cuda and get device (#122)
0748ea56eb : Change size by kernel_size in __repr__
25ed9aba03 : Remove C exports and rename AT_API
c6671f4379 : Fix missing <functional> and export decorations in lib/ATen
8e58135a26 : Fix E722 ('do not use bare except') (#3239)
9cca84a96f : Remove dead code
512a8015b8 : Gated Linear Unit implementation
d02ca80613 : Revert the enum changes as discussed
7660c4cfe5 : Fix linguist detection with gitattribute overrides
c6ef04db04 : Add "dtype" parameter for GivenTensorOp
f0ca857e6b : Explicitly use Eigen MPL2 in builds.
f2f057af99 : Support MetaNetDef model specs on mobile
5795b173de : Fix LogSoftMax (#3244)
50049168a6 : Pybind v2.2.1
5afc166769 : Fix lint build (#3237)
e870f569db : Fix core_overhead_benchmark building issues
b92e06e50e : Fix reference counting bug in python_nn_functions.cpp (#3236)
a806d1ad69 : make softmax test name unique case-insensitive
dc6510f7ed : fix copy elison warnings / get rid of an std::move
d5604aea0b : Don't create grad_fn if requires_grad=False (#3212)
891f41c14b : Upgrade to 2.2.1
0989889251 : Fixing lib/THNN build for Windows (#3217)
6a4182eead : weighted sample op cuda
67839ce7bc : Delete unused Softmax code (#3220)
129336cb06 : [dlpack] Memory management for dlpack
6b5f57b397 : Make make_image_db multi threaded
ba1dba45f7 : Finish #1358
d89d9d74bd : Fix Python 3 portability problem. (#3209)
ed9c43774c : Don't resize output in cpu torch.gels (#3204)
39729aa55c : Add GPU support to operator RowWiseSparseAdagrad
53fe804322 : Make ONNX work with new C++ autograd world.
e64f40ae5b : Add tracing to the new ATen style API.
0589dfab81 : nested_dict documenting comment.
5989b05ecc : Enable ATen implementation of some NN functions and Variable methods
a385979677 : Guard against executing the Hardshrink on CUDA
507319ca39 : Revert "Speed up norm_backward"
5a0ded4dad : PyTorch fixes for latest ATen:
10df3496cb : Fix typos in orgqr and orgmqr
9560540084 : Add missing string include, fixes https://github.com/pytorch/pytorch/issues/3192
16095b5737 : Rename value to other, wherever there is both a Scalar and Tensor overload. (#115)
02f4303749 : Use own benchmark and not any system pre-built ones:
25bfffeafe : Swish Activation Function
0c0c9e743e : Fix dimensions check
246701df81 : Separate out native processing into process_native; remove (TH)Type specific logic.
90e396f6bb : Support 'native' ATen functions with Tensor, (base) Type, NS impls.
5b3931c119 : logic fix for repeated sequence masking
fea60da92e : Update DLPack tensors enum to avoid binary issues and expose one function
0b0f24a71b : disable test_cudnn_weight_format when CuDNN not available (#3200)
76abc06b1f : Fix nvprof mode in autograd profiler
17a817190c : Speed up norm_backward
634c8315a4 : isContiguous problems (#3148)
2797c8005b : Update THDTensor.cpp
50de9160aa : Update THDTensor.cpp
ee62a595fc : ScatterAssign int types
d8ad5de560 : Fix intermittent segfault on Python 2.
6ebfa20ab9 : Include math.h for M_PI.
147287a33c : Fix the build on clang.
7d95127a4f : Squash ATen warning
67612cba09 : Add -Wno-missing-braces
8faffef321 : Make flags overloads compile.
3696300fcf : Include Python.h less using a new stub header.
8b3acd7d7b : Check that type_str is in the type_map (#3191)
623f2bf815 : Add GivenTensorInt64Fill on gpu
0da15f913c : Change softmax and log_softmax to take int64_t dim rather than int.
357f9b6f01 : Squash ATen warning
f7ad13694c : support model init
e5e6c71743 : include memory and map from observer.h
e970d35091 : Make VariableVersion refcounting thread-safe (#3184)
db6a9d2ae4 : Fixes type inference for Slice and GivenTensor*Fill operators
7b30436201 : remove Alias in SparseFeatureHash
d9b89a352c : Replace StochasticFunctions v2 (#3165)
f1f64c8d07 : Generate autograd functions for NN / more refactors (#3136)
98e67448fa : Large Softmax and LogSoftmax refactor
3a4ca7a269 : Add support for saving the output in autogenerated functions
a1518b7801 : CMake changes to make Caffe2 more friendly for dependent libraries
f9ee52efa9 : Update DLPack bindings
99cbf24b8b : Update log_softmax and softmax signatures to include dim (#106)
8d8cebd6be : Fixes the net-rewriting pipeline for model with rowwise adagrad
03bfd7a873 : In Predictor interface allow real model inputs to be fed in run* functions
43b303bfc0 : Expose Predictor::run_map to Python
96c6212513 : repeat sequence mask for data dims
7fd6fd6d80 : Output more useful error message when exporting FeatureDropout in train mode (#3156)
3631cd71b1 : nit: move ATenDLMTensor to cpp file since it doesn't need to be in the header
424390bc96 : [dlpack] Memory management for dlpack
9eb9615a6b : fix build error when nnpack is enabled (#3167)
57ffe64cbe : Embedding related fixes (#3128)
9ec9acc0cd : Fix bug with 'coalesced' calculation in 'cadd'. (#3162)
f8cb5d6437 : Trying to construct a Tensor from a Variable fails more appropriately (#3163)
51c2075f16 : Relax Scalar::toXXX conversions to only check for overflow
4348c9c8f8 : Disable logging by default
dcb457fdd9 : add support for using nnpack when installed via conda (#3155)
7680601659 : Spatial Depthwise Convolution on the GPU (#3057)
95556f4075 : add ignored_keys param to load_state_dict (#3159)
23a3f78988 : Reverse the order of checks in torch.gather (#3130)
6647475bc2 : Lazily create Variable.data PyObject* (#3149)
75bb50be0a : Remove THHeapUpdate (#3143)
3109e4ad6a : add common terminology to BatchNorm docs
f176c864f0 : minor autograd reference change in readme (#3144)
923bcfdd27 : Gate engine=NNPACK with nnp_initialize
4ac8ecb76e : Some bug-fixs in mpscnn backend
6ac393a32b : WeightedSigmoidCrossEntropyWithLogits
9d4d0640f2 : Support MNIST in ONNX (#3100)
58bcf76ba3 : Have model downloading as a separate plan
fce3ed19e5 : Change device_id to device in python land (#3133)
ba05dc5549 : dense buffer (#3139)
17d68f824d : Fix typo. (#3140)
569bdb4b77 : Refactor executor test
3261e1337a : Use 0D (1-element) tensor instead of 1D tensor
00996006d1 : Remove type inference from value
93e1749c85 : Add ONNX support for AddConstant and SubConstant
da7aa3a12f : Add helper function _constant in onnx.py
7d16d320d5 : expose observers to python, add multiple observers per observable
e92246fffa : Visit hooks in C++ implemented autograd functions (#3138)
36895e2dd2 : update the comments, move the expect check logic into the helper function
a1deb2d47f : Move the exception logic to the helper function
cad9438bb9 : Add unit tests for onnx helper functions
1735c5f6c7 : Add Filler op for double
f6f51129ce : Fix SparseToDenseMask for int64 indices
3c144e3872 : Relax CopyToMPSCNN dimension requirement
0f4ae13f05 : Better cudnn version checking (#3132)
47beb64b5c : Use ATen generator as default CPU generator (#3135)
0c8aaabce8 : disable share dir by default
28ed514bfe : Add additional resizes to ClassNLLCriterion (#3134)
c0c3162c1a : Support NVIDIA Tegra
a0ac72e84e : Use template instead of sphinx-contrib for google analytics
a7a81351f2 : Revert D6035393: [caffe2] expose observers to python, add multiple observers per observable
58fe66e337 : expose observers to python, add multiple observers per observable
490d5c2f13 : improve torch.load documentation (#3118)
75665ca6db : Suggest NO_CUDNN=1 as alternative when CuDNN is too old.
f709199c49 : Make test_jit more robust about compilation.
6dc67aef17 : doc (#3110)
38f87cc9c4 : Limit print scale by sys.float_info (#3113)
f11fb319bd : Fixes for new ATen
864bd934b0 : Add a helper function to check broadcasting (#3115)
4f81eff2eb : Perform gradient checks on masked_scatter and masked_fill
8666be05f5 : Raise runtime error in setup.py if cudnn version is not supported
1322f9a272 : Add cudnn version to torch.version
123cb5dd07 : use non-cudnn transpose for int tensors
88b5bf8ec0 : Every argument controlled by the output_mask may be null
4c3b02f314 : Enable Flatten operator to take an arbitrary axis argument
c3a9423c7f : Fix: ClearField only accepts string as field name
8cfb23529b : Add additional erf, erfinv, and additional nn functions
9b5371df1c : Add bindings to additional NN functions
f444bd72b2 : Don't free interned Python strings held in global variables (#3107)
5a96037810 : skip ncclCommDestroy if CUDA driver is already unloaded
8f26d6aabc : More shape checking for ConvNd (#3052)
4831e478e1 : Expose cmake version as env variable and scipy test
4c6c4c513a : fix grad_bias calculation for nnpack
dd494091b2 : remove std::move in profiler
cb011410b8 : fix warning in THD
5f5270d4bf : raise AttributeError from __getattr__ for hasattr to work
2972a6ca02 : Revert D6026557: [caffe2][PR] Fix "No handlers could be found for logger"
4b12d9d1b2 : Expose is_nullable in Declarations.yaml
3366654fd4 : Support broadcasting in copy
de7e1a9a82 : Use pointer equality to compare types
3bc94bf02c : Combine comparison methods and functions
92c9848c04 : Support wrap_dim in nn.yaml
5d689989ec : Expose the THGenerator* via unsafeGetTH on at::Generator
998a1b6d74 : fix memonger after D5994548
d748c43f71 : for dpm.GetLearningRateBlobNames
2675ff73fd : Resize output argument `total_weight` in ClassNLLCriterion
61bb0d2954 : Remove unused parameter 'input' from Tanh
66bb3d6dec : Remove incorrect comment that join_with is symmetric.
191224b6e6 : Suggest key_averages by default, it's more useful.
94c1fdd254 : Typofix
86c1842701 : More detailed docs for Graph.op
b9cd45adcf : Add note about inplace status in ONNX and JIT.
2c01afd2a6 : DoOp reuse workspace and test
9575364d30 : Update protobuf detection
1e8a16224f : PackSegments: return value presence.
c6f96c1d7b : Add GPU support for LengthsTile
7cf4529a82 : add a deleter callback to tensorFromBlob
ca392b7c76 : remove timeout from RNN executor
6b22f64d2c : fix im2col_nd_gpu_kernel
f964105b56 : Update generated ffi wrapper to consider other variable types (#3087)
4908351212 : Do not propagate gradients for GatherRangesToDense
9ef39a50ee : Fix the broadcast in Addmm's symbolic (#3063)
14c1e19c73 : Consolidate the observer implementation
bfae95043d : Self register the observer reporter when the file is included in the source
790941d6a0 : Add additional comments
8d19116508 : Generate PyTorch-style NN bindings
7bc154f8ea : Remove unused argument 'input' to Sigmoid_updateGradInput (#3079)
23c4152b41 : Resize outputs in criterions (#3074)
2000ba0b26 : Add random_ for cuda, fix random_ for cpu (#3042)
5b10ad255b : Use EMBEDDING feature type instead of FLOAT_TENSOR
3e9f0092eb : Remove Redundant CMAKE_BUILD_TYPE
57863e4e79 : Remove CAFFE2_CPU_FLAGS
c97e78715d : Revert D6028262: [caffe2][fix] update observer api in perf_observer
cc3058bdac : Fix macOS build (with CUDA) (#3071)
bd9b4df6e9 : Add support for exporting MulConstant, DivConstant and Softmax to ONNX (#2923)
9260f0e5ee : Fix a typo in optim.rst (#3069)
72f6b5a03b : Make DtoH and HtoD transfers respect the current stream (#3067)
4fb7600fcb : update observer api in perf_observer
25b35a3f62 : Fix broken MPI tests
75bece6ede : Fix "No handlers could be found for logger"
b1508e8e86 : Revert D5905002: [caffe2] expose observers to python
e13f199452 : Switch RNNOp to use NetDef argument for step representation.
169ed0cd4b : remove torchvision docs from pytorch repo. Moved to vision repo (#3024)
828048f578 : Add document on how Module.cuda() and optims should work together (#3056)
f2809a5259 : Fix Python lint. (#3061)
1dbbef6b48 : Fix crash in blob deallocation
66b8cb95e9 : Add int64 support to sparse_to_dense_mask_op
b4dfadcfa2 : Fix OOM in Travis in executor test
18790639ed : Rename library name to lower
1b892ea295 : Enable axis argument for MatmulOp
09d6b6fd00 : update intel script
63caca89db : expose observers to python
246a382610 : Simplify PReLU binding (#3055)
f74665f0c4 : remove gcc install suggestion
d66549d27c : remove files from botched merge
f11ff5befb : Fix mismatched input shape in ATen sample script
8d8a99c244 : Add ONNX Pad reflect and edge mode support (#3048)
9437644f66 : Replace softmin and softsign with simple differentiable expressions
e3a7c78f04 : Add shutdown_fun to parallel_workers
ee143d31ef : Fix ImageInput op in resnet50_trainer.py
d894a6362f : Add missing is_test argument in ImageInput ops
c23ae308f3 : Fix build without numpy (#3049)
f7f37306e4 : New torch.jit.verify function for verify once-backward.
6de2929967 : fix TH warnings after explicit types changes
2443fcac0b : Deterministic cudnn algorithms
403a533827 : Forgot to modify a kernel call
8cc258153d : Make VolumetricAveragePooling cuda stream-aware
a47948784d : add kwargs_only defaults for sorted and largest
9dd872053f : Add possibility to fallback to retrieving MAJOR.MINOR
139aaf65d6 : Bugfix plus remove other option that depends on the version.txt file
f093545919 : Add compiled CUDA version in torch.version.cuda
5e01bc7122 : add 'at' helper method
b56098b540 : Make parameter names consistent
9455eda57b : cast distill loss teacher label to float
6e12a9c4a4 : get around homebrew issue
efe91fb9c1 : delete redundant python nccl code
e9dccb3156 : implement all_reduce, broadcast, all_gather, reduce_scatter
4d62933529 : add initial NCCL C bindings
b7e258f81e : link specific versioned System NCCL, rather than generic file
2ff516bf79 : Add tutorial describing how to use the ATen Caffe2 operator from PyTorch
d5f60b240d : Fix distill loss
77ae903650 : Skip negative indices
30dac012e0 : change header
803afd58a0 : Make MultiLabelMarginCriterion respect the cuda current stream
a0831219cf : SqueezeNet ceil_mode not yet supported.
45c5ac1415 : Print net type arguments in net_printer
6743d59513 : Add missing import. Add return to __getstate__
43adc5ba05 : Add nodename to ONE, iteration_mutex etc.
463bcd00ea : add None check for scope.CurrentDeviceScope()
44a0f6805e : fix get_cpu_blob_name()
2aac8f4f82 : Add support for NetDef in RNNOp.
c62490bf59 : Use PyInt in Python 2.7 with small values
f29bcab67e : Use Declarations.yaml to generate python bindings
558d26a69e : Fix argument indices
dcb8d0f088 : Refactor out python binding generation from gen_variable_type.py
dc1b4ff74e : Fix isContiguousDim (#3011)
c52b3d7524 : qr memory leak fix (#3017)
69fb6bee58 : Remove the extra fake output in ONNX Concat (#3014)
dcfed49e96 : fix multiple issues with multiple PS, learning rates, iter;
aaa74b4929 : Fix flaky erfinv autograd test (#3015)
dba92055f3 : Update Caffe2 benchmark file to write text output
0eec332e14 : assert reflection padding in range (#3008)
1605566388 : Add map input for predictor
2c44a9f9cd : Add BatchBucketOneHotOp
18eb4bbdf9 : Improve Declarations.yaml: (#81)
39a82f3e3f : Fix triu/tril (#3007)
85d0bfb6f3 : Cuda SparseLabelSplitOp
4362c4de9c : Temporarily disable test in Travis
5e38345d4a : Fix break
d2195218f6 : Build local
3ae961f062 : Release saved variables in generated functions (#3004)
10b42f5d6c : Add ONNX support for ConstantPadNd (#2962)
898c732293 : Introduce a `reduce` keyword argument for MSELoss (#2878)
6a91f556d0 : fix a bug in exporter, we forgot to copy type to the new node for index op
7dd74b6a71 : Address the introduced types in ONNX PR 57
268fce1073 : change encodeType to encodeTypeProtoTensorType
10537ce4ed : Support the new proto introduced in onnx/onnx PR 51
b2f5ccf366 : lint
0c2957512f : Fix two legacy modules clearing input tensor in clearState
ecdb86e733 : Update all existing nn tests to new args format; Move all randomness inside tests
b6e1dd2674 : Remove top-level seed setting
c76e2900a8 : Change TestCase args to accept value, size or fn for constructor_args, input and target
5f8bab47c8 : bugfix for issue 2428 (#3000)
50208c9fd6 : Refactor GLConvolution
f535700ccc : Add weighted_sampling operator to Caffe2
4af66c4304 : Cleanup: remove useCurrentStream function (#2990)
4b3400b249 : Added statistics for standard deviation
db06e91097 : Bump gloo
225de6628c : Improve NNPACK error message
9425a2bf19 : Fix cudnn grid_sample backward for implicit gradOutput (#2993)
0710a90fa1 : Tiled Softmax
2e4de82514 : Support more ONNX ops in autograd execution
2861638e8a : Add torch.random.fork_rng, which forks the RNG temporarily.
539ae451d2 : Move random initialization functions from torch to torch.random.
b08219b51a : Correctly mark a method as override.
bfd77e9942 : Delete obsolete comment.
0ae56ab247 : Squash Python.h warning.
f9e9c5326b : Support for Tanh and Sigmoid in the executor.
be04d5a347 : Print small tensors in IR.
c9f7b1efcc : Fix additional deprecated function signatures
da46b9c886 : Make cudnn relu op work for empty batches
5eb45fb0b4 : Add check for Travis in executor test
2631ee749a : Generate ATen from torch/lib/ATen/Declarations.cwrap
ba1f94b6f5 : Refactor out TensorBase from Tensor
ef3b7597b7 : Fix copy and move constructors
fa812c4511 : Remove has_full_argument_list
a18e81ddb8 : Fix lint
5e564d6c12 : Add check that tensor is defined in Scalar constructor
4a12f70ba1 : Move default arguments to function declaration
d8a0cdc0c5 : Adding asan option
cbdbe518e9 : If cudnnSetStream is not successful, give error instead of warning (#2988)
c74f7d8ade : Support varags style IntLists in derivatives.yaml and implement view. (#2963)
137b139551 : Make cuDNN use the current stream (#2984)
fb8a7679cc : preprocs for embeddings
de43326cfc : Identify components after sparse layers' tagging
b649ce3d6d : Caffe2 Benchmarking Framework
20b3918ba8 : add cuda support for Topk Gradient
642542ec2d : Resolve heap-buffer-overflow problem
8e309c014c : Tagging sparse parameters
7e80dc6cbd : Remove check that can never be true from RNNOp.
995c83f945 : Disable cudnn dropout
6a71cfa31e : Faster version for RowWiseSparseAdagradOp
a2be56bc34 : add GatherRangesToDense operator
964d740ede : adding batch support to SequenceMaskOps
ba766ef39a : Fix BN size check in eval mode (#2977)
7a809ea6fd : Fix build for MSVC
bace20a7d4 : Fix build for MSVC
91bb6ce095 : Allow explicitly specifying to use operators' default implementation
d2e94d0faa : change device enums to be contiguous
029252fb3b : NNPACK bindings for Convolution (#2826)
42712c677d : More user-friendly error messages for indexing with multi-dimensional LongTensors (#2974)
f608208a80 : Fix scatter size check (#2960)
b3bcba60c7 : Correct padding docs of 3D modules (#2970)
756ab3f24f : Adding conversion from python tensor to dlpack tensor (#2933)
5527dd3b08 : Expose CMake options in the binary
acc384183a : caffe2 operator logit / logit gradient CUDA implementation
81284c7a0d : Translating Crop to Slice
17a92389b3 : Remove metal remnants
5f864ca4d2 : Support TensorList arguments, torch.cat, and narrow in derivatives.yaml (#2936)
c489445c46 : Add ONNX support for Mean (#2956)
faa6fdfa18 : Raise error when each channel only has 1 value in batch norm (#2961)
6fbdf40284 : Translate addmm into Gemm operator / fix alpha-beta mixup / execute in JIT.
76a282d228 : Fix resizing of gradInput in BatchNormalization (#2959)
9088a940d7 : Completed Stride() documentation (#2948)
1512562613 : Fix lint
1ff34a0535 : generates non-equal random tensor for max pool
fa8044d92f : Add tests for array interface
c488a9e9bf : Add Numpy array interface to tensors
b6b41c829a : Add inplace checks in JIT
82bc97e6be : Fix THC exponential to not sample infinity
437d3af7bf : Add CUDNN_INCLUDE_DIR before CUDA directories in setup.py
bf82ecd776 : Hotpatch THPP compile error
6fbbb1bc4e : Limit number of demangler invocations in autograd profiler
7fc7756487 : Refactor param initialization from model manipulation to layers logic
9a471b015f : Implement _unnarrow (backwards of narrow) in ATen.
d381efcf3c : Enable wrap_dim in Local.cwrap.
bf7b11f235 : Fix executor test base module
d1213cc6c2 : Include information of the engine for Caffe2 operators.
49396c6fa1 : add openglv2 to experimental
312e0ce3ba : fix nn.HingeEmbeddingLoss doc
2c26f4728a : fix typo in document of nn.AdaptiveMaxPool1d
e4701e63f6 : Fix exporting Reshape with single torch.Size argument
4d605259b9 : Fixes after merging ATen:
6258fc2f15 : Executor benchmarks
1f3424b78f : Adjust test thresholds
4c61cf2a1f : Updated functions for benchmark test
00b62db723 : Fix scope error
621603169c : initialize new tensor
6ef417ce89 : Fix typos
ca644ca204 : Add inplace zero to variable (#2212)
3ce6f0a457 : turn ModelProto.graph into callback type
9fc86782d7 : Fix the breaking changes in ONNX PR #58
a64daf2c59 : support dictionary return types in nn.Module's __call__ (#2037)
5d9de014bd : Fix typos
21f8ad44e1 : put limits on CuDNN BatchNorm codepath
d5a7e304fa : added volumetric adaptive max pooling
7ff9e0eb6c : fixed test_AdaptiveMaxPool*d_indices testing the non-adaptive classes
9415f84982 : spatial CUDA kernel int64_t stride inputs, removed unused parameter
855b7e28ee : START_IND & END_IND macros, removed unnecessary computation in updateGradInput
b9c942a7d4 : reorder spatial variables BDHW
0685c063bf : rename spatial version variables
67b2923a9d : Set all GPU state, not just the first one.
a8bf73be50 : Mention random_ not available on CUDA.
2dcaa40425 : Add get_rng_state_all and set_rng_state_all.
db298618e4 : Minor typofix.
60bff0a5f3 : fix nccl version
5cc3aff9ba : use nccl deb in Dockerfile, easier to change python version
9b9704e701 : Simplify getApplyGrid in THC (#2900)
b3bc5fe302 : refactor THCP method defs into cuda/Module.cpp
7190979ab3 : fix the script to generate the nanopb files (#2907)
d315c62e72 : Kick fbsync
4acf56cf80 : Typo
c775b90426 : Fix aten submodule
181b2481d3 : add error checking to grid sampler (#2902)
d7ee3e0bd0 : Fix the memory leak for multiple workers (#2897)
e67c2bc567 : Fix detection of NCCL_INCLUDE_DIR (#2877)
8c0844f497 : Executor test
6a800be748 : import lr_scheduler in __init__.py
8286ce1e3a : Re-license to Apache
21707065d2 : latest gloo
96b17543a3 : Compile with MKL in conda-build
a92fce1871 : fix precision of grid_sample test
b9747af242 : Use make_variable instead of VariableImpl.
7d40cce267 : Simplify glu symbolic
c72ee3981b : Add support for exporting GLU to ONNX
002288c118 : Add launch bounds to spatial grid sampler
b9009df222 : Add mask device, fix test
642dea487d : update inline comment
954e9e370c : Uncurry trace.
bff81a3cbd : s/extra/unmatched/
91827edd1c : Fix initializers off-by-one.
cdcf09405e : Use parent setUp, which also seeds CUDA if necessary.
600fcf2f04 : Delete params.
fecca48a2c : Time how long compilation takes.
0ad6c2d59c : Lintfix.
cfa176b9bd : Dump the final trace (redundantly), for ease of use.
db3349faa3 : Support class decorator syntax; remove instance compilation.
1cf24b8d55 : Restore enabled/time debug parameters.
c430501ee5 : Timing works again, controlled by PYTORCH_JIT_TIME.
b1ba6c3ddd : Add back trace dumping, fix some syntax errors.
7bace0a1d9 : apaszke review comments
0c40305ddd : Rewrite torch.jit interface.
f2037970cb : Cleanup for 'prob_dist' in multinomial function (fixes #1584)
2f381bf6a4 : Joint intent-slots modeling workflow initial diff
b21ae92b56 : Move expand_dims operators to a expand_dims_op.h/cc
c27aaf67cd : Improve Function docs
c33c9c1ba4 : Fixed size_to_dim enforce
095805036c : re-enable out-of-place bernoulli for cuda tensors
9f4accd5bb : Make all dim arguments int64_t
e9fe0d8e6c : Fix for clang 9 build issues
0fb9db1606 : Converting dlpack tensor to aten tensor
b4e02e8e0f : adding a simple class for converting atensor to dlTensor
4a58e0ca42 : Test stub for dlconvertor
c6a2175d27 : adding dlpack header
c8f824cd1b : Improve import failure messages
2108d1c250 : Add unit-tests for fb-specific models
1a8fb81f22 : define M_PI for TH
dcee596a8b : change Variable.cuda to be consistent with Tensor.cuda
22ec2ca968 : Add shape inference to fp16<->fp32 ops
fb1c7874ea : Deconv translation
cb986bb913 : Deformable convolution operator in Caffe2
08b3140827 : Back out D5772847 and D5908415
8a45b65f96 : ReduceFrontMax, ReduceBackMax + gradients, CPU and CUDA
711d7137c7 : Implement the gradient operator for element-wise Logit
59be3da3bc : Make GLContext unique_ptr
44b45a1d73 : Fix real time style transfer on android
de757805fc : Implement some autograd functions using ATen (#2805)
0a5ee1e806 : Implemented RowWiseSparseAdagrad operator that only keeps one moment term per embedding
9be8d0a9d2 : Add a docstring for functional.linear.
753133f015 : SignOp
f14d75c7ef : Proper versioning and misc CMake improvements
2d6a880952 : Fix jit attributes tests
d9b0bcd7a4 : Make all existing (except in RoIPool) "is_test" arguments required
808c9e3e70 : fix a small typo error in sparse_lookup
def0506d95 : Fix a caffe2-gloo dependency problem
ded3a3b317 : fix small bug in nccl setup helper
7caceea6e8 : better error messages for Conv*d input shape checking
833bedc77d : Add CUDA profiler bindings
b7849662b5 : Always regenerate nn wrappers after rebuilding THNN and THCUNN
411e1469e0 : Add tools for autograd profiling
bd5233b4f9 : Fix on NEON
f4eca7c94d : make CUDA_HOME take precedence over all other CUDA detection methods (#2863)
4e23658d47 : Fix warnings in TH_ErfInv (#2861)
9defb8e653 : fix Dockerfile for submodules
6a4ec4f9a8 : VolumetricAdaptiveAveragePool
7254104cfc : Spatial CUDA kernel: removed unused sizeD parameter; changed stride types to int64_t to be consistent with caller function
dd891c4923 : reorder spatial version variables so that B (batch) before D (feature) before H (height) before W (width); change some code to be more concise
8ffe8eca6c : rename spatial version
3128218397 : Allow specifying unused inputs to torch.autograd.grad (#2859)
605beb2565 : Parallelize CUDA LookupTable_renorm (#2803)
d6ff84de5c : Add an aten_op to contrib.
c08395e290 : Give a better error message when we hit a legacy function.
2a8603c5e1 : Make distributed recv return sender rank
5be06230f9 : cleanup external NCCL detection, add NCCL_ROOT_DIR / NCCL_LIB_DIR mechanism
289dc2a870 : fix argument passing bug in build_libs and allow external NCCL_ROOT_DIR via environment variable
30ceac28e4 : also check LD_LIBRARY_PATH for cudnn
15a7bb3bff : GatherByKeyOp (Inverse operation of PartitionOp)
e3609a0619 : Correctly propagate remap_blob across net boundaries
4664808938 : fix UMR UB in qtensor
c580352aee : Adding 1d upsampling (#2846)
ab62a92dab : De-dup beam search state reshape shape blob
a5879ea9bd : Resolve Windows warning C4099 issue (class/struct name mixture)
5898bd4b4d : Update eigen to origin master
9fe99241b2 : Update gloo to master
b054f369a5 : minor spelling tweaks
7c45ac8e43 : Officially support Python 3 in Conda build.
cf769a7b6f : Avoid race condition in get device properties.
8103e185d4 : Fix OSX build w/CUDA=ON
7d06898592 : Add travis webhook
eff5b8b09c : parameters to vector and vector to parameters (#2795)
287f434900 : Add support for exporting Addmm with alpha != 1 or beta != 1
767f704b84 : Let Gloo check if it supports GPU Direct at run-time
3cd0003bf6 : fix layers_test: atol should almost always accompany rtol
ec801d535c : Fix typo in warning in data_parallel_model
b984eb35cd : Fix concat_split_op for input.size() > sizeof(int32)
bf9ab91779 : Indicate if the last invocation of setup.py was debug or not.
c630c34d41 : remove Undefined node from black list, remove const_cast, add some comments
f256f686b5 : Remove device comparison TODO mark, change the white list to black list on node kind checking
0566a4c026 : Fix some bugs, and assume graph is always visited in topological order.
18a1d272bf : Add attributes comparison, fixed several issues, more interesting test case.
972d048cf8 : Typofix [ci skip]
0a1ac8bfe5 : create a cse pass, with very naive support.
999607460a : Add a verbose option for gradcheck. (#2780)
0d6baa0d59 : Fix lack of data dependencies for beam search RecurrentNetwork op
dee3ac3fce : Use Resize instead of reshape in speed_benchmark
5aac6a2e06 : Make LastNWindowCollector thread-safe
2070467c57 : Allow CheckpointManager init() and load() to use a different db type with path_prefix
e4d6ee114f : typo fix
450379256c : Don't call is_available() in manual_seed, it initializes CUDA.
b17dfa07ba : Make CUDA seeding/RNG state functions even lazier
06d7a0b1bc : Write docs for RNG seeding on GPU more carefully.
805ad16924 : Support "expanding" an empty tensor to an empty tensor. (#2824)
34a1d414a5 : [Distributed/Gloo] 3X performance improvement of Gloo AllReduce By Enabling CUDA Direct (#2827)
9b2c5501b8 : Fix Windows build
f841446fbb : Formatting fix for verbose net logging
a340d141de : Check num_elements > num_samples in UniformSampling
cf7e28de8e : add CUDA RNG docs
85b08f1b99 : Trying to fix all networkx 2 issues.
e86f941395 : quick fix image input op
4106c650d3 : fix a race in type registration
b8ab3080b1 : Fix InferShapesAndTypes() for convolutions
2cbb4167c1 : Adding uint8 support to code generator for high-performance embedding look-up kernels
8d19319fa7 : Documentation for FusionGroup and Eval requested by @houseroad (#2808)
7750b8db36 : Remove NNPACK MaxPool wrapper
84182b1853 : Partially fix memonger with networkx 2.0
892940b45a : fix memory leak in min function
723214e9ac : Resolve mismatch between ATen master and pytorch subtree.
f6d3c17fd7 : Directly check if the state_dict() has changed, so we fail earlier.
e1add8fdff : [FIXUP] Give a slightly different error if tracing state is expired.
6125ea7c83 : Create a FuncModule for conveniently module-izing functions.
ea2e7a1f4e : [FIXUP] Deduplicate accept_output logic,
a01de93fad : Give better error message when symbolic() arguments fail to line up.
c083d3ac2e : Fix minor bug when --accept'ing commits.
b805f3f676 : Also fix AvgPool2d to follow new convention.
08148a462c : Print name of operator whose symbolic gave wrong number of inputs.
bfed2dce25 : AvgPool2d was returning too many outputs, fix it.
871e3b41e3 : Ask for the correct number of derivatives when tracing.
10ef82f13e : Make assertExpected work with Unicode strings in Python 2.
460a03751b : More expect test improvements.
f3ae642162 : Tighten up the ONNX export interface
12ed8ebe5a : Revert D5879947: [caffe2][PR] Enable python3 builds
ae10a0a3e8 : Enable python3 builds
d2d7a0f514 : Fix build failure in MSVC
5d6a41b8aa : MPSCNNMul(scalar only)
2b9765ad02 : Erf and erfinv (#2799)
c3a3d6ceba : Add an option to use dynamic memory optimizer.
1b059f4c98 : Add option to ignore parameter initialization
7d2b2cae19 : Remove OFFLINE_TRAINING from global constant
1a83c372ec : address issue #1488 by using defaultdict in load_state_dict
ad414908d7 : Advanced Indexing with variables for autograd (#2590)
0fff025973 : Consistent behavior of max reduction for segment ops and fix test
2996aad68c : remove dead code, add insertAt helper
6e495f5f85 : Make output_ a const field in Graph.
0821856ac9 : Add missing is-Param assert
6efd797376 : Document unchecked invariant.
25c2b7d8b2 : Some minor extra comments on python_function
794e52bb1c : Make cloneFrom() copy all metadata; use createClone() as much as possible.
0b421e590c : Move some logic into create().
ba95ffed97 : Const correctness in IR and Attribute / linked list excision
670ec4bc59 : Split Type into its own header file.
06903c3525 : bugfix for word language model
5949bb27b5 : move write_vis into contrib
a194e66186 : allow Concat operators to be the final operator in a fusion group, and update the fusion compiler to support code that includes final concats
27bae83a3a : make graph layout more readable
3fb39add23 : debugging code to understand fuser
c8993a3e2c : Add add_scaled and sub_scaled to TH and THC (#2789)
16a3de081a : Minor rebase fixes
3be774ccb7 : Use TH_TENSOR_APPLYx_CONTIG for contiguous tensor to increase the speed.
06fdce04ca : Generate ATen from torch/csrc/Declarations.cwrap (#2791)
f4169260f8 : Fix crash when calling backwards on leaf variable which does not require grad (#2788)
39434ee2e4 : Added LPPool1d. (#2783)
aff1370974 : AndroidGLContext can lazily allocate static map
871530afdf : Mark all (non-static) Type methods as const.
06b7a9e0f6 : Backed out changeset 3a5c020294d8
dab5bd23ea : fp16: RecurrentNetwork
ddf6ad83aa : Add tiling support to GLConcat
b468ffe6d1 : Adding uint8 support for to code generator for and high-performance emebding look-up kernels, supporting
2bcad92d12 : Fixes for NCCLReduce with non-zero root
cc3e6ade42 : Fix caffe translator
5deacb5bce : Enhance comments
c536da7064 : Remove TensorMeta
a7c4152302 : Prune null edges in Eval nodes
b66d90c84f : Add a pass to remove all non-standard ONNX nodes before export (#225)
6855d24ff1 : Move pybind11 type_caster to different pybind.h in the corresponding folders. (#222)
b7e89d7248 : Add support for some ONNX nodes in JIT closure
fe5c644f81 : Handle AddConstant in fusion compiler
e05cfb2064 : Make sure passes don't mess up stages of nodes and graphs
8a605ce766 : Minor refactor of fusion compiler
75497d624e : Add JIT_EXPECT (#220)
d4fda0bbf8 : More updates for Variable ATen
ba6e652c02 : Add simple mode to Eval
1f80dd03bd : Track change of Variable from shared_ptr to ATen style tensor
aa1a94058b : Add AddConstant node to the JIT
7506a3bcb7 : Add pybind converters for Symbol and AttributeKind
28828e033f : Make certain functions traceable
4d1ed4ec42 : Assign traces before saving Variables
af688905e4 : Fix a bug in CppOp (missing cloneFrom)
214eef5e5d : Record device information in TensorType and check it in the fuser
ab375e19aa : size test
83e38d687b : Add a comment about what is going on here
dd85947542 : fix the fusion test WAR
2ae7d8e5f9 : Fix Chunk heuristic in graph fuser
b708b6de8d : Add ONNX pass (JIT trace initialization)
0e53fe3a41 : Put ONNX files where they belong
8dae433de8 : Move JIT passes to a separate directory
2a7b4f5095 : Allow TensorMeta to be undefined
6b60f31081 : Fix bugs in AutogradClosure
964b731af3 : Try to handle NULL Variables in the tracer
aafa35e0b5 : Fix bugs in Traceable
9c39e8cecb : Parity with NumPy newaxis placement in indexing (#2779)
4341dc7e7f : avoid variable naming conflict in macro
ad68f623f2 : task api, fix comments - a bit cleanup
f8f5e79f5f : Backpropagation for If operator
561fc8d96a : remove rotted TODOs
25aea46739 : add missing AutoGPU guards
8536079142 : missing include
30af9d793d : Add broadcasting to bitwise operators. (#2776)
5229a79bf5 : Implement THCUNN code for GridSampler (#2737)
888f4d4f61 : Update cub to master
c6ea6ed8ff : Add Nd Padding, Pad1d functions and ConstantPad3d (#2657)
ea8b09365c : Specifying the value used for padding (#2751)
2763bfc49e : Norm subgradient at 0 (#2775)
16ddc863f4 : revert more THC Atomics bits from Windows changes
c231ac2253 : Add an argument for suppressing download progress
ebeaecbfa3 : workspace_gpu: Get{CUDAVersion,DeviceProperties}
fb37d35d28 : Additional fix for LRN: uninitialized variable.
4e26aa4f91 : Update nccl master
211cb13a7d : fix local_response_normalization
59b139dabd : Fixed compilation on OSX (#2761)
1fc85cde1f : serialization fix to preserve backward compatibility and contbuild (#2763)
e397439611 : fix THPP CUDA after windows changes
ddd417faf0 : Fix non-CUDA builds after Windows PRs (#2760)
2bc1e07b62 : THC/THCUNN reverts of incorrect changes after Windows fixes
6643f1b9ca : Win64 support for lib/ATen
c7d5ddd23b : Improve Windows Compatibility(for lib/THCS) (#2442)
4ead38f96a : Improve Windows Compatibility(for lib/THS) (#2449)
5befdd45bd : Win64 support for lib/THD (#2444)
268a1f1b96 : Improve Windows Compatibility(for lib/THPP) (#2447)
caecbffe62 : Improve Windows Compatibility(for lib/THCUNN) (#2443)
0e691f8998 : Improve Windows Compatibility(for lib/THNN) (#2446)
1c51c185a1 : Improve Windows Compatibility(for lib/THC) (#2440)
61813cfd97 : Improve Windows Compatibility(for lib/TH) (#2439)
eccfa1041c : fix cuda GatherOp for empty batch
c3fd31b1a2 : weights for labels in image_input_op
9639ddd22f : Cleanup omnibus-blacklist-hack rules
9ec981b866 : for CPU-data parallel, allow sharing model
132e35bf51 : faster sparse lengths weighted sum
4c6d177b4f : faster SparseLengthsSum kernel
3cc309a2e3 : Add Net observer for mobile apps
6b44a00c71 : remove in-place Dropout from rnn_cell (bug in PR-1185)
af8f6c1bca : adding unit tests to compphoto caffe2 projects
dd27997aeb : DOC: adding note about distributed MPI backend (#2750)
27dde63358 : Allow run of example resnet50_trainer without training data
3a3d27130d : Fix symbolic for max pool in all dimensions (#2742)
1a89c6e1ec : Decayed adagrad
f21de86209 : Add per Op execution counts to prof_dag
fb45383ed6 : resubmission of PR1175: fp16 BatchMatMul
0bbf8a7a4c : Fix squareFactors in opengl_test.cc
7752fe5d4e : remove zero padding in orthogonal initialization
b42a125ee4 : Fix NCCL ops + Add NCCLReduceScatter
3821fca0c6 : DOC: i{send, recv} message order with MPI backend
b14c5bf016 : Save output_nr in SavedVariable
1e37145872 : Resnet50 should param init net before creating test net
86a9a06878 : HTTPMessage in Python 3 does not have getheader
6340fde3b9 : Made some arguments in momentum_sgd_update const
7eb5ad2e26 : Fix profdag infinite loop
632da0b6be : LRN Op input "scale"
08b4770adf : minor spelling, intialize->initialize
06c44e2283 : Replace Variable(new VariableImpl(...), false) with make_variable.
bcad604ea6 : Move imap to six.
5b5218ea95 : Micro optimizations in ATen
0e7bd68536 : Allow one output for dropout at inference time
63a2b75027 : Add option to remove legacy_pad in caffe_translator
253d48c815 : add in-place random sampling ops
ce4932f8a4 : add softmax2d docs
0f0829d88e : Strict bound check for SequenceFunctor
efda016108 : fix dynamic-type-mismatch (ubsan) in caffe2/caffe2/core/tensor.h
e9581e47a2 : fix comment on core.Net.RunAllOnMKL
77ea40c01a : Added USDT sample points to simple net
f0d0361609 : Revert D5794634: [caffe2][PR] fp16: BatchMatMul
6436881e2d : Re-issue random resize
80d229b0e7 : Refactor THPUtils_invalidArguments into separate file
68f358452b : Add node_name to DeviceOption
37af6566e1 : fp16: LSTMUnit
23f4f78c22 : Functional C2
e4c0af8b56 : revert #2708 modify orthogonal init for rows<cols case
2b5835ba5c : fix lint
0a9f93e43c : add env var for python executable
ec2ee181c1 : allow sharing tensor of simple types
bd17684252 : Run thread pool only on fast cores
90ca470d70 : Standardize operator argument "is_test"
3cfc6f26e7 : fp16: BatchMatMul
97e733615c : Use simple function pointers for memory allocation and deallocation.
23e5a8be8e : add support for custom python
d01adcbe0e : modify orthogonal init
4d3a0f7a20 : spell fix seet to set
462f95ed6d : fix bug in autograd type() for non-default GPU input
c07ebd2396 : TrimDataset to ensure size is multiple of number of replicas
c313855523 : Use brew in rnn_cell.py
6e322a4191 : refactor states-handling of CuDNNDropout
2356ee41b7 : Fix segfault in backward
361bbb8b43 : fp16: SumReduceLike
f775149205 : tests: use assertRaises, not expectedFail
d910a94b2b : Support AdaptiveMaxPool1d/2d double backwards.
2cad108269 : Make AdaptiveMaxPool1d/2d indices format the same as MaxPool1d/2d format.
4b5a6c07ac : Make 's_' functions on Type public
95c954abc0 : redesigning NetBase's Run() and RunAsync() functionalities
a198da5583 : Added LengthMax Operator to Caffe2
0b89eb7592 : Make seg ios run with OpenGL
63829695c6 : Make android segmentation net run with MPSCNN
cd9b27231b : Add comment about scope-defining trick.
713756d115 : Remove function test code, cleanup.
36b13f4776 : Implement Concat Function tests as individual test methods since there is no cat method on Tensors/Variables.
3da453f25a : Unify function and method tests.
08eb88f3de : Duplicate what is tested in function tests in the method tests. Also make some function-vs-method tests uniform and change method tests so they will pass gradchecks (i.e. avoid nans)
19cfda761c : write THD link libraries to text file and read it in setup.py to link dependencies correctly (#2711)
8860fb7fe0 : Implemented uniform buffer batching
68e7a0f2ed : Enable target dialect token in inference.
47fd6cc255 : Revert D5801013: [caffe2] Use simple function pointers for memory allocation and deallocation.
c2169c717f : Remove references to cnmem
ce36a972b0 : fix timeouts in CloneOrCreateCommonWorld
583d031754 : Operator to compute RoI region coordinates for RMAC
be406b1e5f : Revert D5639080: Caffe2: Cuda implementation for BatchOneHot operator
93bd3c77f8 : AddBlobsSync()
1290e586fb : Use at::Tensor based autograd Variable (#2676)
820143f4af : Drop L specifier; reimplement tuple printing in C++
d1346c75ec : Always use generator version of map for Variable iteration.
39d495b267 : Generate expect files in same directory as top-level test script.
a782858285 : Move go_token_id out of beam search constructor.
d52404779f : Revert D5803245: [caffe2][MPSCNN][segmentation] Make android segmentation net run with MPSCNN
f09fb7735e : Revert D5803411: [caffe2][segmentation]Make iOS segmentation net run with OpenGL
4fec5f658b : add Bilinear to docs, fix reference
1794e76800 : add missing bilinear docs entry
670cbf0350 : Remove the files added by PR 1203
98173850b2 : Make iOS segmentation net run with OpenGL
ebf7784840 : Make android segmentation net run with MPSCNN
103977cc8c : fix warnings (#2693)
bc66c9da86 : fix alignment warning
944115c915 : Bugfix for concat frontend
84167faf0f : Enable use of GPUDirect through argument to Gloo AllreduceOp
0161ea2ca9 : Mark unsafeGetTH as const
ace1426d50 : Move wrap_dim code to Utils function to minimize generated code.
183c2071f9 : Generate wrap_dim code on derived type rather than base type. Either should work, but code feels more natural this way.
39b5031517 : Support wrap_dim specifications from cwrap.
4a71ca6c60 : Use cast instead of literal as a temporary fix
1cf58bddd6 : Fix default constructor argument
92abd54dfd : simplify the code
bc4f233b56 : Make use of zeus kv store.
1c414426df : Caffe2: Cuda implementation for BatchOneHot operator
cf2c7ca998 : add THPP linkage when building THD (#2687)
01a1cf1e07 : small fix for pointer initialization.
10a032de67 : Use simple function pointers for memory allocation and deallocation.
47d1b6846a : Add a memory allocation / deallocation overhead benchmark.
1da87118cc : Optimize pow for different exponents and add tests
e8dec6e395 : Optimize pow for different exponents and add tests
0df2f1cbd6 : Optimize pow for different exponents and add tests
141f8921ac : MultiLabelMarginLoss doc fix (#2683)
b31cf0ebd4 : Added support for nInputDim parameter in legacy Padding class (#2645)
96cc52cde7 : image_input_op_support_int64
5d9c505e41 : elu gradient cuda fix
72ea242280 : fix race condition with finished_timesteps
45f07238f4 : make rnn executor figure out recurrent mappings from links
1cf94854a4 : fp16: SequenceMask
977b1f988c : Fix EmbeddingBag doc (#2679)
d81d71f24c : fix docs for variable.backward (#2678)
d43ab4bec5 : Create Gloo common world through MPI rendezvous
6cf172c60d : fp16: SumSqrElements
cef2068eee : enable setting rnn executor threads and max streams
27433e978c : Make piper of PipedReaderBuilder take arguments
c11755e559 : Add checks for input texture slice for tiling
cd7d96e3b6 : Fix travis build system by adding sudo
c9f11bc317 : fp16: Scale
6763c14e84 : add base class ModifierContext, rewrite OptimizerContext, add RegularizerContext
e76015040a : add regulariztion in caffe2 and dper
b8eb8ced7d : Add transport/interface arguments to CreateCommonWorld operator
3f899a15ce : force NO_CUDA to be specified to disable cuda. add pytorch's FindCUDA so that it is possible to get ccache to work for nvcc. make excluded notification more concise.
fdbfcfc431 : fp16: CuDNNSoftmax
03de05229e : brew.concat: don't set both order and axis
1a2b229d47 : fp16: add test for FC
d43185612a : Specify CWRAP_FILES_BASE for ATen
046e9ae5c8 : Use arg['default'] as constant value
5d85a6753a : Caffe2 BlackBoxPredictor
9aed89ac88 : Allow specification of num_workers in PredictorExportMeta and enable for NMT beam search model
519d5acd4d : fix bug in dependency inference for RNNExecutor
6a883d1bc0 : Remove dot_product layer
c087a60026 : The CMakeLists.txt name is wrong
ec713d437d : make sure the output of sparse lookup layer is float
b6c9ecac7c : Fix shape inference of distance_op
176f8f9a19 : Make ConvTranspose allow optional bias term
eec2a0d905 : add documentation on top_k option in accuracy op
aeec8ae2ae : label_type option was duplicated in image_input_op
0f1a61cf80 : @allow-large-files [Caffe2] [Folded diff] Move mobile files to mobile directory
381a45a541 : Fix BAD_PARAM errors
b2f0ee5d46 : Handle scalars that are not backed by tensors
f75cf375da : Add accessor to underlying Tensor
32635f1292 : zero_dim_to_one and empty_to_null can't both be specified
70f7cfedea : Rename 'canonical' to 'has_full_argument_list'
98b7c882c0 : add float-divide-by-zero suppressions
81066a5e30 : Include non-canonical functions in Declarations.yaml
cd59b56440 : tell which argument name is duplicate
7bbfa1dd76 : Make Scalar default constructible
459cc5a346 : Check for nanopb and pybind11 submodules as well. (#2660)
8176a55827 : Adjust error message for View
8e4a889c8f : Add onnx to the documentation index.
84095f9512 : add linux guard
894c05fd22 : fix static linkage and make THD statically linked
a3ae136c25 : Temporarily suppress buggy test case with relaxed test. (#2663)
9cdef6c33b : Update for latest ToffeeIR changes. (#2662)
4a952e7112 : Python 3 fix: OrderedDict values is not a list. (#2661)
7838840084 : Detailed install instructions for ONNX. (#2654)
b7997a0f41 : support device ids>10
3024ff5705 : fix static linkage and make THD statically linked
5ef96aadd9 : fix static linkage and make THD statically linked
b6648fe311 : fix static linkage and make THD statically linked
8190096fec : Handle default arguments in base Type class
4e7f171ed5 : Use CWRAP_FILES_BASE if defined
fbb8f13499 : Docs now finally run with ToffeeIR master.
a2e5224847 : Fix autograd tests
5e144a8938 : Volatile input keys should also consider non-Variable arguments
a897f5a6ee : Expose requires_grad for cpp functions
d90cd88fb7 : Improve next_functions handling in tracer and JIT closure
3b1dfcb51c : Add trace flag checking in backward passes too
ea888c1905 : Check input flags in Traceable
230721e198 : Support calling traced functions multiple times in forward
fdbef1cfb0 : Traces can now expire
eb11cab272 : Misc doc improvements.
7ea9de051e : Code review comments.
360cd9ca58 : speed benchmark fix
3251c60804 : TensorInferenceFunction for Unique
30da84fbe1 : Make Gloo depend on Caffe2 NCCL build
a4a44a7cf3 : Add missing const qualifiers
ceb13bf3fb : Fix cell/hidden init issue, add copy states to test
d4336edb05 : Disabled test for equivalency between Caffe2's and Numpy's YellowFin
6d5c3eaeb7 : Add CloneCommonWorld op
91b24b19de : Add type inference for EnsureDense and Normalize operator
631971e459 : threaded RNN executor for CPU, multi-stream executor CUDA
ff38bbfe2c : Enable mpscnn only for 10.2 and above
9da95d9b07 : bump to renamed onnx repo
3c61b59fd4 : codemod primspec -> symbol, PrimSpec -> Symbolic
af649c19a2 : ONNXIR -> to ONNX
bafe55bce4 : use toffee import until ToffeeIR repo is renamed
6d8d5bab4c : Codemod Toffee -> ONNX, toffee -> onnx. Change file names to match
c42ca96714 : Stop returning tensors from torch.onnx.export()
c7684e3b27 : Rowwise quantization
e4718430e8 : Fix typo.
e3d6c2a942 : Add proper error message for specifying dimension on a tensor with no dimensions.
22ea8d44e2 : Remove unnecessary early conversion to IntList and make expand functions inline.
5419c5ffbc : Set default values for concat_split_op
6ad54d55c9 : QConv impl (re-up)
4fc54af010 : Code review comments.
f1e4de9a63 : Add primspec for Sub, Index, Chunk, and Embedding
29b4ebbf47 : test_toffee updates.
c2f19f5d72 : ToffeeIR update.
a63d88c95b : print more detailed error message when trying to export an unsupported operator
331521cdfd : Step 1: Trace and proto collected for SRResNet model (#183)
1b792d3e57 : Doc updates.
cb5fbe1944 : Expunge %2.0 syntax.
394ff072eb : Update to latest ToffeeIR operator schema.
9a05b8dd62 : Update to latest ToffeeIR protobuf.
99d6b9b923 : make API debuggable
52e693022a : helper methods appendNewNode and NewNode for python Graph API
5c82aefa24 : Fix bug in Transpose export.
b5833551f3 : Documentation, and inplace support.
e1b3321f92 : remove singular kernel, stride, pad. they are being removed from ToffeeIR
434317b155 : PR comments.
57eb8bd288 : Frontend refactor, and some documentation.
6ae77b32b9 : Delete dead torch.toffee.op
61a922e183 : data_other_types now has correct type.
1f77d482d5 : Don't insert Transpose if it is no-op.
e29655f46d : Run JIT tests earlier
215b980f06 : More torch.jit docs.
4174112b49 : Add lint pass for handle invariant.
cd8d41c0f9 : regen toffee.proto for nanopb, enum of types has dropped double
3ef2ec6153 : Actually correctly handle non-float exports.
72843d5186 : ATen hotfix: elementSizeInBytes for types
805c35a519 : Model updates.
daa3f7324c : Track ToffeeIR inplace changes.
c537aebf5a : Always run DCE in Traceable
9f0c4c9f9a : Make autograd engine reentrant without creating new threads
e05979c4ea : adding dummy bias for the conv transpose
ff77906e44 : Refactor the user facing e2e test API - hide trace
4f6a7f4e2e : support more types in export
31eda1230c : support exporting constants
161e21f68d : Missing batchnorm fix
292ec9d75b : Remove NDEBUG macro.
b2e7438ead : Move disallow_copy into utils.
d59714e3b1 : Code review comment changes.
7ac6d67a4e : Add nanopb to list of dep_libs.
77ede8fc1c : .travis.yml cleanup
1e0171f436 : Super resolution network (#148)
2e266837f5 : Port TracingState to pybind11, new export() method.
8f1168d355 : Test updates for new version
2bc3881fe2 : Put version in protobuf we produce.
50b5f4d219 : Minor comment.
3b478c17a0 : JIT backward closure comments / Render stage changes in inputs.
f83c4fad7b : Fix exception propagation from recursive Engine calls
d8e2ab632e : Add support for Constant nodes in AutogradClosureFactory
594f98ce16 : Support multi-stage AutogradClosures
43be0a679c : fmap now doesn't require template arguments
b33f64b2e7 : Fix nanopb build
25287a129b : Test updates for plural attributes #145
3b5a5a6f9c : Use plural attributes in MaxPool1d, MaxPool2d and AvgPool2d
225e8c8acf : switch to using raw_data in PB
6264996169 : ToffeeIR CI hotfix
55ac596ea9 : Faster tensor serialization.
2d4da3657f : Maintain invariant in env that all nodes are mapped.
e2a84e1e65 : PR comments.
8c5eba3f3c : Add an Undefined node for null arguments to tensors.
b2e305e390 : Lint after ToffeeIR, and subsequent fallout.
685d7b83ba : Batchnorm's bias is mandatory.
84f8c88c24 : Batchnorm fixup
82efbe349b : Handle batchnorm properly.
218058b94a : Make CppOp autograd execution work (temporary)
63c835bbe7 : Add keep_vars parameter to state_dict.
96ae6a5e48 : Don't DCE Params.
2db7c5621f : merging dcgan changes that needed to be refactored from older primspec approach
dc6378d891 : merge fixes for Squeeze and ConvTranspose
a1bb403326 : Ignore nanopb for lint.
605ef38831 : Explicitly override CMAKE_DEBUG_POSTFIX for nanopb build.
0bc498ee94 : Apparently, lib64 isn't created all the time.
de6ef65be5 : Port to nanopb.
ac8d3372b0 : Add nanopb submodule.
35bddb6b7e : pr feedback
c9f7f2eff4 : Change pipeline for exporting to toffeeIR
3afb4d8728 : giant expect commit
bad5717e15 : add ability to specify initial values for inputs
81342910d7 : fix the op Tanh spelling: tests
d12cf7dd45 : fix the op Tanh spelling
8c2663a685 : Put every input on a new line: TestJit test updates
d5d65080e3 : Put every input on a new line.
384efe482a : Use Toffee IR schema to disambiguate types of attributes.
f062e06c91 : Make null Variables on convolution and batchnorm work.
6039f007c4 : Make assertExpected Python 2 friendly.
6701cc7c8e : flake8 excludes update
a60d9bd022 : Bind Attributes in python ir, and add test for python ir binding
a3fdb281d1 : Python wrapper for Node IR using pybind11
6d0364f13d : Add pybind11 as a submodule.
0a83f86348 : Add Eval Handles: JIT test update
5823cc419a : Ignore Handle when exporting to ToffeeIR.
e14b766a81 : Add a comment about Handle
965a349bbd : Record context edges in the JIT
9f97291408 : Make tracer thread-safe
8dab0237e2 : Maintain Select-node invariant in DCE
ec9761789a : Enforce deterministic ordering on Eval inputs/placeholders
fa308b3183 : Improve backward tracing
91dcf2938a : Miscellaneous fixes needed to make caffe2 E2E
6297144e51 : Build hotfix.
1517ef687e : use constants for directions
b0ba9a81d2 : remove std::list, restore custom node list implementation.
222e8c0591 : PR fixes
b606106c4d : thread safe interned_strings
14f9316d2b : renaming IR_IF family
55cd9f37d1 : remove Select, and NodeWithKind
4a4739e048 : remove most node subtypes
c369a44bf1 : remove chunk subclass
9f8a35c0b9 : remove Primitive nodes.
24cdb897d6 : starting removing nodes by removing Return
b037efa92c : prep for removing node subtypes
57b7370aab : switch NodeKind over to Symbol type.
1fa5b19ba4 : Attributes object that mirrors Toffee, and interned string table, used by attributes for keys.
3c5dced6ce : Make batch-norm work end-to-end with caffe2
3c6fbcabea : encode size in name...
d596bad1b9 : remove attribute in expect
d7d74428a3 : batchnorm hacking
150fd2848d : batch norm primspec stub
af90a780d1 : primspec for avgpool + squeeze (#80)
0ca3ca302e : test for primspec for concat (#77)
52e0816bed : primspec for concat
72a7530023 : primspec for leaky_relu (#70)
0e5320e073 : Lint
6405391065 : Small comment.
db79be82ab : Move Toffee for C++ functions back to autograd.
f265ff1dca : Bugfix where it was always input 0
bee0e45355 : Don't create empty attributes.
c0d0a99977 : Alexnet back online.
ee2ba279f2 : Working Reshape op
35f1cb462d : Invert negation.
e1b345d81b : More alexnet things as primspec.
1f4bebe27a : Build fixes when Toffee is enabled.
6f6fe177f1 : Make Toffee optional. Unbreaks CI.
05a6d4c137 : Create a C++ primspec virtual method.
4b1f182199 : Disable C++ Python conversion code.
dd58b145c3 : Toffee graph exporting for PyTorch.
890c2071f0 : PR comments
35f1ca1293 : Make autograd engine reentrant (#37)
c8b303e853 : guard dump, guard cuda
f4b7178b59 : track scalar type
b6175eb54d : enable fusion group execution in autograd closure. implement chunk. propagate type information through fusion optimization.
62efac4ba5 : make Type into an immutable object and share them rather than clone. allow nodes to have undefined types, which reflects reality right now where some TensorType nodes are just not filled in.
bcf5c11e10 : cuda guards
e91966a0b4 : Unify our tracing API into a single interface for functions/models.
510529ecd0 : missing expect
9431742d5a : Build error fix
7f60a18293 : Add initial support for backward tracing
5b6bcf1ce4 : Warning squishing.
29ddcbfe17 : Rename TypeKinds to suffix Type, matching class names.
accd52feef : Print types, and improvements to type APIs.
eb730f8321 : Inplace test.
4a1bbc01ac : Fix #41.
765b0bf137 : Make in-place work again.
32c5be4c31 : Lint record_trace.
b6a8eaa6ed : Give ConvForward an explicit name.
21c0ad9702 : Test case that we fail legacy traces
453b0fac03 : Always print diffs, no matter how large.
624e451d6b : Add comments
1c4538e017 : Trace C functions
bdcbbeaf68 : Remove GlobalTracingState
ba2b2bcdc1 : Change calling convention for C++ autograd functions
82ed7c0232 : POC: add Handles to represent opaque state passed between Nodes
09b35506f4 : rename init_pass.cpp
a136c30309 : add comments, rename function
9fd06b2051 : add a rule to distribute chunk operators when it stops fusions.
a096959ab8 : make multi-output uses/defs easier to read in pretty print.
0d3421ac01 : Handle Constant lint.
b158aaf6b4 : Make linter an optimization pass.
cf46ef05db : Finish the rest of the lint pass.
3016f459d2 : Partial lint pass.
76c7788e81 : Remove THPP imports
dad625b54a : Comment for WrapConstant/ConstantFactory, remove thpp import
f0902027ce : Typofix
2dc3ef73ae : Lint
c931feaad0 : Elaborate on NB a little
3e0f1608fe : Capture Variables that are not inputs as constants
af21c6b018 : Add Node type to JIT IR
348950dc74 : cleanup jit_test
1f900861b6 : remove _NOCAST, use fully-qualified name in macros
233a66dcbe : Remove SimpleMap from JIT IR
f5e414862a : cuda guards for fusion compiler
ea4aaa6b0b : Document TemplateEnv & PR fixes
50e51eaa7f : Fusion of simple map operations using nvrtc. Approach is based on the approach of THC's pointwiseApply{1,2,3} family of kernels, but doesn't have any dependencies on that code.
51a1618683 : Remove Return node from nodes()
a4086508c6 : Enable tests
f270973937 : Add JIT IR -> Autograd IR converter
e186d16e6b : Apply JIT optimizations from Python
72659bcdef : Minor code cleanup
57d65a99bb : Add LSTM fusion test.
8f12bc5a4c : Temporarily print Return nodes, pending printer fix.
8f3a01932b : Swap order of assertMultiLineEqual.
a5b87de139 : Squash warnings.
9662cffd26 : Use std::list in JIT IR
e238f3cada : Very simple accept/golden test framework for JIT trees.
cb53882c5e : Make warnings clean.
ac5dd887dc : python clone, more asserts, better names.
da6122bd35 : Document all public graph manipulation functions
e477e56519 : Add some preconditions to the comments I added.
3182d732ee : Some documentation for mutator methods.
a89c49d723 : Minor fixes to comments
d959bf43c3 : add comments explaining IR and fuser
fde064088f : Add logic for fusion. Add clone mechanism to IR, with init() methods to setup nodes.
538cc89dbc : print uses in output
48945a435d : IR modifications to make mutation possible. Nodes are in intrusive doubly-linked list. Methods added to manipulate inputs etc.
a2c140f985 : Refactor owning Graph pointer initialization.
49bb223786 : Break when an assert fails.
8215860d2f : Add an assert wrapper for easy porting.
3dcbba1f35 : Keep Variable mapping as part of TracingState
55c9e0258e : Make the linter happy
6be47ec907 : Minor fixes and improvements
2ced918063 : Add a very simple visual (non-automated) test.
ea05ac8f41 : Move JIT-related files to jit dir. Remove IR interpreter
1325fa511c : JIT IR including use-def chains and updated comments.
7c083b00f8 : refcounting for Node/Value
f369f8e80d : simplify IR
4979359800 : Add graphs, trace them.
a2cc7a00e6 : Fix Python 3 build problem
c1dec0663f : New stratification: add Operator/Instruction
60751cd889 : Add verify_model to torch.jit, for sanity checking.
7bd4c5a27c : Minor sanity check.
3055b69f63 : Refactor Arg class away.
13663c1ee7 : Fix clang build error, struct/class agreement.
25bf7639f4 : Remove incorrect clear from THPExpr/Arg_dealloc
8ab905b769 : Remove unused output_list.
c466b2c1f6 : Make an error message better
c4ccae6a89 : Document move semantics on PyObject with THPObjectPtr&& constructor.
0fc17adf71 : Add simple JIT frontend.
f9458a3720 : Add comments from discussion with Zach.
d35ae86f26 : Don't use misleading Ret nomenclature.
a797ab9343 : Rewrite AST to a new, more functional representation.
6f9774d7db : Minor bugfix.
11107190ca : Handle legacy correctly.
1e8bf12b3a : Add an inefficient but working evaluator for forward traces.
50b375d9bf : Add input nodes to the IR representation.
e1b7872fc2 : Make it possible to access IR from Python.
c5faaf69d8 : Initial IR representation for forward trace.
b1b20e4097 : Remove dead field from UnpackedInput
3d7459ff6c : fix indices for data_parallel and add parameter gradient tests (#2632)
77a02eaa7f : Enable reader checkpoint
81ddd5e869 : Use std::{thread,mutex,condition_variable} instead of raw pthreads in WorkersPool
3ff351fc89 : insert Free ops when blob used last time + memory allocation estimator
a3ddf9e180 : fix pointer arithmetic for large input/output sizes
1104bab796 : add axis argument to NormalizeOp and NormalizeGradientOp
90a4d91469 : Remove scalar expansion tests.
25dd9ba799 : Address review comments.
7ec4485858 : move cmake_uninstall.cmake.in into cmake/ subfolder
42448cf07f : Fix to make the sample code executable as-is in "Extending PyTorch" (#2621)
1b013c0b52 : fixed issue #2613 in torch/legacy/nn (#2624)
bfbd1bbb50 : Update torch.triu/torch.tril doc (#2619)
dd5400e452 : Make android segmentation network run on both iOS and android with tiling
db78f3cf46 : fix bug for THTensor data access
40ca356d36 : make logsoftmax documentation readable (#2606)
7fa7a101af : Fix embedding doc formatting (#2605)
bf013f4c99 : fix Python 2 gloo install (#2597)
f0f7b39650 : fix example in docs for nn.init.calculate_gain (#2600)
2d9728d594 : Add more enforces to SparseToDenseMask operator.
c858c68537 : cmake: stop including files from the install directory
e368740612 : Update the speed benchmark code
c9238671ee : Use char-ngram embedding for out-of-vocabulary words
0c324ba417 : set stream for cudnn handle correctly in cudnn wrapper
ca3f2f9e6a : Small fix to exporter to accept net/NetDef both
1f1aca6e09 : Support broadcast specifications from cwrap.
a1992e81b3 : Replaced std::copysign(x) with (x > 0 ? 1 : -1)
579fc7e959 : unify bernoulli yaml declarations across backends (#2578)
8820d467d6 : handle useless ellipsis in advanced indexing (#2589)
c5a8a59116 : raise KeyError if registering buffer/param when attr exists (#2108)
925cfc0d90 : Disabling test for YellowFin
bb08f261f1 : EnsureDense/SparseToDense for CUDA
b2bd9ef15a : protoc: only disable in watch os mode
3ae810753e : fix travis build
9f685e4aa3 : Ensure GIL is held in ObjectPtrAllocators (#2581)
26cdfcd9cf : allow single non-tuple sequence to trigger advanced indexing (#2323)
53ccbd9a6e : soft-coverage attention
0e99e7bd99 : Accidental addition of a file
0b643fce09 : Use std::atomic instead of volatile and custom barriers in WorkerPool
50b5c76ea9 : A benchmark generator for individual ops
a1a4924fc0 : protect cudnnSetDropoutDescriptor with mutex
a0bd836afd : add conv flops inference
c609f22638 : added gflop annotation to TEST_benchmark
fc1f117502 : Return TensorInferenceFunction for SliceOp
03711e9ab8 : Handle bool's correctly in net.Const
debceaff02 : Support new arguments in ConvTranspose
b4b89e1bd5 : Ability to dequeue and concat multiple records in a single QueueDequeue op
eed2292123 : check for null commonworld in DestroyCommonWorld
4ec26d23a7 : TensorInference function for LengthsSum and such
571b651ef2 : Remove redundant tensor inference function
d84dbcfb9e : add a "clone the source" section
a7e11bddab : Added readme for SNPE build and usage
fefd5479a3 : Initial implementation of YellowFin algorithm
5ed5be71b1 : YellowFin GPU class and Python optimizer
0f3a5d3180 : Tuning number of parameter servers based on performance estimation job
00366ca2d1 : Move SliceOp outside of utility_ops.h
55ec2cb08c : YellowFin CPU class
33ef5f38a0 : Fixed cuda loss op
94edc073ed : Added subtraction operator, tested useTextureInput set to True
f293a22f39 : Comparing CPU & GPU results for Denoiser Network
f103b4d93f : Relax dimension constraint in CUDA to 6 for Transpose
080fab8f6c : Code generator for high-performance embedding look-up kernels, supporting
71a87f0645 : elementSizeInBytes for types
45e6e71198 : Tidy up CMake for NCCL
a03e5cb409 : Remind users to submodule update.
466f0a823a : Use external nccl, fixes #2553
a7ec5def7b : data_parallel_model names fix
8f6fa78271 : disable cudnn when output_padding >= stride or dilation
4a211314d0 : fix shape and correctness bugs in autograd/convolution BackwardBackward
58b7d1c764 : remove python convnd function
7ca196c11d : enable cudnn transposed dilated
0cf2c37505 : refactor nn calls in autograd convolution
e950c44c80 : enable dilated transpose and gradgrad nn tests
d13d95c09c : dilated/transposed conv in autograd
ae5101c137 : Fix range op's GPU
1b11ea3934 : Change default argument for LRN
59540847b1 : Include Caffe2 headers before anything else
6e03c5dc1f : Ignore gloo when linting.
7310ebb66f : Add gloo submodule.
aaef3a3ed3 : Remove gloo subtree, in preparation for gloo submodule addition.
e69063405e : Allow param groups to be added to Optimizer dynamically (#2374)
19f6941e7c : fix arxiv link to batch-norm paper
501b49074b : TestGLConvolution CPU baseline can use precomputed
7ba8de6771 : Add S6 support
68842a5510 : HPTT
60c17a34b5 : better default settings for CUB
41adebe974 : Clear the operator default engines before running operator tests
e41dd5affe : Added USDT probes needed to support QueueSnoop
5315669bd8 : Add ShapeInference for ConcatOp (Fixed)
d698b29f1b : handle reshape gradient in shape inference in special way
488abdcd6c : slice op shape inference
c1b09cd5ab : Fix typo in docstring example (#2562)
bc228b2409 : auto_gpu:True for ones_like and zeros_like (#2559)
cdae579c22 : Fix typos in "Extending PyTorch" (#2558)
bdeafe49ac : Hotfix the OSS build error
a0204331a8 : Control flow operators
3b7b923de8 : Fix grid size for batch cat tensor now that getApplyGrid has been changed.
a71330d13f : More efficient broadcasting backward by bailing in all cases if sizes match. (#2556)
bfca30e6f1 : shape inference for batchmatmul
7c7603a60e : fix FC shape inference
5a360c92a6 : @allow-large-files [caffe2] update snpe for oss
4b51adb032 : Add tiled vs. batched comparison for models
3a315dd809 : Fix a bug in GLConvolution
898f3f398c : Use gemmlowp-based worker pool (spinning + #threads of blocks of work) instead of custom work-stealing impl
bd27f0b5a7 : remove repetition of libquadmath in TH CMakeLists
674e1f2ba1 : increase test subprocess timeout
4cca286d9e : add google analytics to docs
80caca4edb : Allowing larger grids for THCApply shows improved performance.
72a257584e : Add numerically stable logsigmoid
429699bb20 : Add numerically stable logsigmoid
94b5990201 : Add torch.cuda.get_device_name function (#2540)
5294017d9f : Adding implicit padding for 3d average pooling
16008661bf : Adding implicit padding for 3d average pooling
b5949d8e9d : Adding implicit padding for 3d average pooling
150dc7a8e3 : Improve Windows Compatibility(for libshm) (#2455)
d3c8e68004 : Revert D5641588: [caffe2] Control flow operators
9f693b39aa : Revert D5711951: [caffe2] Add shape inference for ConcatOp
cc3662e939 : Added support for scaling learning rate of Caffe2 optimizers during training
105d5e595c : squeeze op enforce about dim
26f0943130 : Do CaffeCudaSetDevice and CaffeCudaGetDevice
da418f5744 : Add shape inference for ConcatOp
e27431ddf5 : New math.h functions required by YellowFin
3c180ba317 : Opensourcing channel shuffle
885d9a7796 : fix memonger for RecurrentNetworks
5bc52c3223 : Adds the master setup plan to the model exporter.
432cba6c05 : Set up run_every_ms when constructing ExecutionStep
ae0c4c8e66 : Respect inplace blobs in InjectCrossDeviceCopies
327a0793b4 : Add missing parameters to tensor docs (#2541)
cffbbfa9e3 : Revert D5655753: [Caffe2] better straggler exit procedure
3b903e8c68 : Fix more MKL build issues
a2a033937b : DestroyCommonWorld op
86cc7ace93 : Control flow operators
7eba614503 : RNNCell: Initializers interface, simplify _LSTM helper
e5740c53de : Update gloo dependency
82360d8cba : shape inference for ReduceFront/Back/Sum/Mean, Gather and Dropout
2c07f88ea3 : Fix typos.
7d42fd8423 : Fix typos.
e4d15223dc : Fix typos.
e31ec51ee5 : Fix typos.
61e4723132 : Fix typos (#2472)
eb58740651 : add ones_like and zeros_like
8a2e69177b : add ones_like and zeros_like
7b71abc52a : add ones_like and zeros_like
3b155fa305 : Not changing dimension size for expand when target size is -1
4bef5f5ff9 : Not changing dimension size for expand when target size is -1
a655e6313e : update README with new major contributors and remove redundant sections
523d8af26e : CMake helper to deprioritize Anaconda include path
15e16f6963 : More double backwards support for pooling, unpooling, padding (#2516)
9c948c22b5 : Fix check_no_size_average tests. (#2532)
98a5c99b46 : remove debug code
14038fe559 : Remove unnecessary if in maybe_view. (#2538)
f250815fa4 : Fix bugs caused by flatten_parameters() (#2537)
153c9b0714 : Add examples in functional.py and loss.py (#2371)
e4c05c2b5f : fix leaking symbols from THNN
802ddd997d : Disable persistent BN for cudnn < 7.0.3
51b60354a5 : cudnn 7 grouped convolutions
ec86d0b2ba : Updates for CUDA 9
01adebea1c : cuda 9 hgemm fix
d112cbd7f6 : Updates for CUDA 9
bc93d79967 : Updates for CUDA 9
b079469af0 : self -> ctx in Extending note
5e0b28e7bd : PrependDimOp
20c854d43c : Make FC op work with empty batch in cuda
b3d85cf6b6 : Removed CNMEM git submodule
d3a6fefe1e : fix nnpack mkl header inclusion
813cca85d1 : Use CMake HINTS to find CuDNN
14d8c03424 : adding backward capability for potrf (Cholesky) (#2386)
7e21e760e6 : More cogent error messages during indexing (#2503)
b7a6e823a9 : Fix TypeError of prod when BP to GPU tensor (#2353)
7aa6bc516f : add "Basics" section to distributed docs (#2433)
6bcbecfb97 : fix doc of lr_scheduler (#2280)
5c6d543b7a : Allow kwarg-only inputs to DataParallel
4c9eff807b : better straggler exit procedure
23209152a9 : fix memonger test for open source by checking for cuda support
7f4ceb83e3 : Relax dimension constraints for weight matrix in FC
ad07f5f05d : Added norm-based gradient clipping to optimizer library
c4bc718c4b : Fix OpenGLPadImage
de903ad208 : Implement double backwards for nn.Upsample.
5d09fcd028 : Make DistributedDataParallel threads Daemon threads to allow clean process exit (#2524)
3faeb621d3 : support id_score_list for Feed
d368b59177 : logging the blob that has type error
409d985d43 : Tensor inference function for Gather
93e12e75df : Allow caffe2 to detect if cuda lib has been linked, and also fix oss build error.
16549ed92b : Scaled training and fetching from the PS
1d83a46b44 : Improve float16 support
1955d0797e : Added fast path for CUDNN global max pooling
2de1bc894b : move ShapeOp out from utility_ops
98da4e3a04 : pairwise dot product with dot_groups support
4c69697d2a : Distributed bug fixes. (#2434)
620d3ab714 : Do not run operator gpu tests if there is not gpu
6eeb7e6fd8 : Use cast::GetCastDataType to handle "from_type" and "to" arguments
bbf2c6a084 : Fix ConcatDataset docs (#2355)
5e54d9330f : hiding statically linked libstdc++ symbols (#2471)
966fdbd93a : Add commands to re-build individual libraries. (#2506)
27bd3df71b : Patching EmbeddingBag to accept 2D input (#2429)
008a62b18a : DOC fixed Tensor.expand docstring (#2495)
d675c101e9 : extend pairwise dot product for non-equal x & y dimension size
c52fd11f58 : Add CUDNN to the gpu devices' default preferred engines
e33dfe93e4 : Update proto definition
67a55b81e3 : Forward blobs into workspace
502b43641f : More flexible tiling for Conv and ConvTranspose
058815955d : Add default implementation of __call__ for context manager
9507cae9e0 : Create MergeIdListsLayer
930acc8e85 : CUDA SparseLengthsWeightedSum
c1356216a2 : cmake: generate macros.h with configure_file()
5748e7140f : Strip Operator Schema in mobile build
0e5fcc7ca2 : Make Tags a decorator as well
e902620620 : cmake: relative paths for install()
e37847af92 : Test CrossEntropyLoss double backwards.
0390e80a7e : Support MarginRankingLoss double backwards.
e27127391d : Support double backwards for SoftMarginLoss.
fb7e9583bd : Generate no_size_average criterion tests by specifying check_no_size_average=True
22ec5f37ca : Support double backwards with parallel nn autograd functions. (#2508)
a32e98b700 : Add documentation for std/var unbiased argument (#2509)
440d979075 : Optimizations for Caffe2 SinusoidPositionEncodingOp
65112f3865 : code cleanup: separate the several net implementations to separate files.
51d67ecd8c : DeviceInference function for NCCLAllreduce
0b363fd9de : Add event as a first-class citizen of the OperatorBase interface.
c535b8098f : Add a HPTT path in transpose_op.cc
77c28b7a7c : Revert D5607549: [Caffe2] [Mobile] [ULP] QConv impl.
6465c14aa1 : Temporary crash fix
304f3773d0 : QConv impl.
de24bb4b66 : Update readme with docker cmd (#2501)
d2b8d3f8f7 : add slack clarification
c38206d901 : add wait event and record for MKLOperator
0e20a7cb7d : ImageInputOp_more_data_augmentation
5c43fcda8d : Support params that don’t require grad in DistributedDataParallel (#2464)
c5a9aa027b : fix wrong path to ReduceLROnPlateau in docstring
b3536a3a6d : Adds checkpoint taskgroups to the online trainer.
0f35ec9872 : Common Subexpression Elimination
5d24a4eeef : Early design for a general Event abstraction cross-devices.
d6632a9a05 : Adding a range operator similar to np.arange
d617a77433 : Add tests for ConcatOp and SplitOp
d44e2dabbf : Revert D5653336: [caffe2][PR] Add random input scaling
4905ea898a : Add random input scaling
623df4adb3 : Fix travis tests, by splitting DummyOp to GraphDummyOp and TransformDummyOp
11a14fd0fd : Clarifications on setting up torch.distributed (#2475)
fa8b8a5f07 : improve unsorted segment op speed
1d70a2276d : Changes the checkpoint naming rules.
2c18748c54 : Move set_stream_id() to protected field.
23506824b0 : CUDA-related updates to the core overhead benchmark
db02fbd9bf : Fix stepworkspace sizing
5f612d9740 : GPU version of BatchGatherOp
6e22427929 : fix tci complaining test - test_load_model_from_checkpoints
aad748fbae : Cannot divide on 0
57c93435e3 : Dedup name in functional layer
cfbd116966 : ApplyTransformIfFaster
f7ece79949 : Add fp16 and tensorcore support to resnet50_trainer
5b8e2ad2a6 : test_distributed cuda tests don't skip if cuda not available. (#2476)
692f4e4e3b : Disable -Wstrict-aliasing when including cuda_fp16.h
fa984af0f9 : use create_param() in layers
7fad4be4c6 : Device-specific memongering
4d0fbb0e6f : ConcatOp: fix axis check with add_axis.
f388135d3f : Layer norm brew wrapper
e45e621b0e : Implement layer norm gradient GPU
8e8e90f595 : Implement layer normalization backward CPU
e16c40eb4f : Implement layer normalization op forward GPU
474c043be5 : Implement layer normalization op forward CPU
e89474c496 : fix forward_only mode
a63e7314f3 : Adding 1d-2d-3d Schemas for Conv and Pool
4ca5735753 : Allow inplace for spatial_bn_op
ae2aad9c0d : Operator to Merge ID_LIST features
58838baa75 : Remove unused travis scripts
578adbe9c0 : Adios CNMEM. You will be remembered.
b3029df1d0 : Added window mode for caffe2 sequence operator
a0fe96d7cd : Rewrite memonger DAG in C++.
5fb7853803 : Fixes compile errors
661beb3345 : Speed-up weight_norm over the right-most dim (#2431)
bbcc7d37ca : Have Tensor.sort accept descending as only argument (#2329)
30baba7d15 : fix typo in docstring
0d34a6451a : fixing the bug with squeezing a singleton dimension in torch.min and torch.max
73e0b3f401 : fixing the bug with squeezing a singleton dimension in torch.min and torch.max
98ac4542e0 : fixing the bug with squeezing a singleton dimension in torch.min and torch.max
21d8465d8b : Add test for Tensor creation from NumPy on CPU and CUDA
7409e0822b : Cuda fixes
f269d3f0b5 : Add cuda tensor initialization with array
727942be55 : Use proper type for counter
610d9d04e7 : Support constructing tensors from arrays of non-matching types
b797ee04fc : Add CUDA version of eye
ec28630244 : Add CUDA version of eye
a104dac193 : remove unused code and bring back single benchmark mode
0985eaf373 : Add ability to specify init_method for test_distributed. (#2465)
1f47a80e88 : Caffe2: diagonal fill op
30616ee309 : Fixes the broken checkpoint test.
14950a9082 : Support session in distributed realtime trainer
a53192e334 : Revert D5001637: [Caffe2][RNN] Threaded dependency-aware RNNExecutor (frontier/diagonal execution).
367106f591 : move memory allocators to allocator.{h,cc}
453c60ce28 : Threaded dependency-aware RNNExecutor (frontier/diagonal execution).
49ec942825 : Temporarily disables the checkpoints for the readers.
1db7a99249 : disable travis test for dpm test
f92fdd850d : Important typo in resnet50_trainer
3a8feb7fb7 : Address integer division to make it compatible with py2
255b176f6b : Sorted Order and Generalized Pattern Matching
8592f00ec4 : Revert D5633240: Add CUDNN to the gpu devices' default preferred engines
0e419ae1b2 : Add CUDNN to the gpu devices' default preferred engines
e95b79a69c : Benchmark for embedding generation
443a4544d4 : Update third_party/gloo
52befa4802 : DataParallelModel: take param_init_net into account in _InferBlobDevice
b09d7c890e : Copy-edit sparse constructor docs for clarity.
606699ef97 : add calls to contiguous for cudnn affine_grid
1a68c961ea : accumulate in accType for reductions over dimensions
c55cc743fb : move clamped random functions out of cwrap and into TH
54a1115641 : move clamped random functions out of cwrap and into TH
763fb5d708 : Update documentation to reflect Linear with 2+D inputs (#2410)
b3db52fe36 : Support __neg__, .neg(), and neg_() for Long, Int, Short tensor types.
3f25232aab : Support __neg__, .neg(), and neg_() for Long, Int, Short tensor types.
4bca77816e : Support __neg__, .neg(), and neg_() for Long, Int, Short tensor types.
d19ee9c182 : Add comments for default value (#2282)
f9d02903b7 : Always copy indices in Embedding._renorm (#2414)
5e088da5ba : Add DistributedDataParallel to docs
c05c500a82 : check _grad suffix
62dcc2feed : cudnn conv group support
b0da5bf0fb : Use masked_fill rather than casting masks when appropriate.
0b0d2a06f7 : Update legacy SoftPlus to add threshold constructor arg.
c92f229aa2 : CosineEmbeddingLoss as a new style function.
9bcb9658d5 : MarginRankingLoss as new style function.
7aeb837895 : Implement HingeEmbeddingLoss double backwards.
1efe38768d : Implement KLDivLoss double backwards.
5106ce67bb : Implement SmoothL1Loss double backwards.
19d4c37ced : Implement MSELoss double backward.
7875c02217 : Implement GLU double backwards.
9a243abe5c : Implement Softmin double backwards.
988b0d58e6 : Implement LogSigmoid double backwards.
0c3a01fe44 : Implement SoftShrink double backwards.
8d38c0ee52 : Implement Softplus double backwards.
ea9a7823b4 : Implement Hardshrink double backwards.
a6cccc8701 : Implement RReLU double backwards.
434fa7f694 : Reduce memory usage for dot attention
33383f3912 : Reduce overhead of broadcasting when broadcasting isn't required. (#2364)
ca64190491 : Update cub submodule
ffd9316b03 : Use SequenceMask op in attention code for sequence masking
a985355935 : Gradient for SequenceMaskOp
0a828768e9 : Implement SequenceMaskOp forward pass
8a5bdc383e : Fixes the flaky upload test
cd5275e79f : Convert upsampling Functions to new style (#2372)
641e582f31 : Fix typo (#2378)
3285dc12c9 : Avoid reorder warnings with -Wreorder
404f8ee9b4 : Extends the jobrunner to support uploading checkpoints.
399fc9fb09 : Added Nesterov
9372ff7a86 : Caffe2: support Tensor in BlobsQueueDB
dd5618aa49 : Remove unnecessary moves in convolution autograd.
319c46fa1c : Vectorize ELU op on CPU
85788a0f65 : Add TensorCore support
a7be496fe2 : Revert D5589309: modify _LSTM into _RNN to adapt GRU
b91c2f5064 : Make reservoir sampling thread safe
9c4872f4bc : Reservoir sampling with object ID deduplication
f78af06f1b : Features collection with reservoir sampling
5dba88b40b : Caffe2 [easy]: Better exception logging in parallel_workers/data_workers
8d342fc6e2 : Sampling random negative based on sparse features
4758bd851b : rectify args between train and translate
f2dfb40302 : Added amplitude argument to SinusoidPositionEncodingOp
5bb1e6b817 : Allow passing asymmetric 2d kernels to brew.conv.
ad84747433 : Optimized Tiling Code
52fa113774 : Sync opengl changes
d79662088c : Remove unnecessary moves, avoid IncRef/DecRef of PyBools.
062673db88 : Properly pass saved_for in BatchNorm/Conv as the relevant Backward function.
2f624dfd90 : Add AutoGPU guard and properly reference Python args from BatchNormBackwardBackward.
50c208a50b : Revert "Fix typos."
7f097f4b82 : call gemmStridedBatched for cuda >=8 to avoid calling kernels to set up pointers (#794)
01051334a2 : Add CMakeLists.txt files in opengl directory
e908cf28f4 : Docker move
eb85258beb : CreateMapOp
7b86a34610 : modify _LSTM into _RNN to adapt GRU
784ba07bf3 : updated downloader to use s3 url without a redirect via the vanity url
d4e687d6aa : Add NCCL_VERSION_MIN, use v2 API if installed
bf18d85945 : Clean cmake script options, and add USE_METAL to optionally build ios metal code.
a6bf0ca4da : Bump travis osx to 8.3.3
1ce95090ca : Add support for specifying engine preferences
5e0d434b4b : Add build support for opengl and latest nnpack.
dba6a32450 : Revert #1027
218f4506fd : Fix CUDA check for gcc > 5.
1c0d20d58c : add in make uninstall for cmake
595f1a92e0 : Conda packaging for caffe2.
7efb83ae52 : Require C++11 support with CMake functions.
5c77cc8182 : Exposing num_workers as parameter and enable recycling activations
1199e3d496 : changed a small mistake in cross entropy doc (#2292)
c000d15058 : Properly use Py_RETURN_True, Py_RETURN_False in back compatibility warnings. (#2345)
9199c954f1 : Fix typo in DistributedDataParallel (#2320)
1ac98b1bce : Add documentation for apply (#2327)
9357b8fafc : new_criterion_tests is redefined so BCELogitsWithLoss tests don't execute. (#2347)
b96c4e714b : Fix build failure on MacOS X with clang-800.0.42.1
a2204f0b1e : Caffe2: Write CUDA version of OneHot operator
0cf488295d : fix Windows build breaks by LengthsTopKOp
ef64a4f6b2 : Add conv layer and layer tests
152d2ae3a8 : Implement CUDA version of GRU operator
190a1dda5b : fix thread_pool.h
679a586d53 : Fix metal build after sync
9fcf676cfa : testing for open-source seq2seq
35bbb7bfba : THD: add a missing header to fix build failure
4622b33952 : Fix typos.
ac3a1328d5 : Remove unnecessary .proto files
751198f3b1 : move cpp flags earlier (#2325)
e51fec3be0 : Update sparse.py (#2336)
5caa42b538 : Add ConcatDataset to docs (#2337)
07e745408b : revert D5528436
8ad382df3c : implement LengthsTopK operator
8af625ede2 : Implement gradients for Col2Im and Im2Col operators
ddc1b288bb : Improve logic when creating a common world
5ae3865112 : Fix build
a11aa0ab35 : remove mpscnn-fb folder for the new contrib/ios sync.
1449c2c821 : long -> int64_t in convolution autograd
95d4561d05 : Fix canonical_axis_index_ enforce failure when doing memonger shape inference for RN101
647f35e742 : Fix SyncAllParamsDistributed for Python 3x
42fb87d0b1 : L1Distance Row-wise, instead of cumulative
a1bf14d8e6 : Building new randomized sparse nn model
e7192c3b91 : image_input_op_dense_multi_label
d072701547 : Caffe2: Refactor the core logic from data_workers.py into parallel_workers.py
cc2c4d07d6 : Always use assertAlmostEqual for floats when crossing python and C boundaries
836af7f211 : update gloo to master.
02e5367bdd : Support a build script for Tizen target
42806d6815 : kick fb sync
e97c04118e : CUDA 9 support
4d8a8c2e1e : Implement dot attention
ac6eee1118 : Delete duplicate PoolOp cuDNN implementation.
fac241bcbc : Caffe2: add a DB that's wrapped around a BlobsQueue as an adapter for data from non-DB interface
4ad1dbc189 : Strip unnecessary files in xplat/fbcode
91c9812dd1 : Sync of codebases
1654bc9335 : add shape to pass-throughs
87451fd643 : move normal variants to TH/THC
95f357ffcf : move normal variants to TH/THC
24c496bdda : move normal variants to TH/THC
4599c0c7df : Update autograd notes (#2295)
8ce4401f09 : documentation nit fix for torch.Tensor.random_ (#2297)
03d856977e : Update README to link to NCCL2
4a33f66e27 : Update README to link to NCCL2 part 3
d66fb63679 : Update README to link to NCCL2 #2
80ae43b443 : Update README to link to NCCL2
1baae004bf : cuda 7.5 fix for gloo
12f25c8106 : Revert D5545533: [pairatt] implement kMaxPooling operator
4b80ff89e2 : Use softsign op for s=0 in arc-cosine feature map
5d721c1c14 : Some adjustments for Windows build
6648677acf : [doc] variable shape error of LSTMCell, GRUCell (#2289)
a95f7aa38b : Fixed bug with blob_test
977f9644c0 : Fix ZeroPad2d backwards with negative pads.
01b6a5a3ea : Add boolean type in input2 and input3 for caffe2: Where operator
83ba2b1091 : Typo correction in CMakeLists.txt
d177846dbf : Add prefix argument to FileStoreHandler
3628cd30f0 : initialize peer access only when (might be) needed
8e1ecb1cfd : async sparse length sum op
a4e6ca6956 : Added Sinusoidal Position Encoding Op
4a8545e3c6 : implement kMaxPooling operator
adc5510ecb : dynamic embedding
a8695178aa : Adding parameter sharing API to Dper2
0b50e078d1 : add proper build support for perfkernels
38b42e0421 : Improve cuDNN weight layout test
d1ab37a65b : Make sure deserialized RNN modules have _data_ptrs too
70c95dbe52 : fix Conv3d non-contiguous weight bug
74e5328b03 : remove limitations on output_padding in Conv* routines
814b65df4f : remove limitations on output_padding in Conv* routines
a565b77791 : add 2d and 3d dilated full Convolution
6e6dca001c : add 2d and 3d dilated full Convolution
60e7966c1f : Fix BatchNorm double backwards when training=False. (#2277)
7c04f11d88 : search for ldconfig in /sbin for nccl detection (#2276)
f6585e80d7 : if RNN's hx is None, requires_grad=False (#2274)
0b000952c1 : Split batchnorm eval test into cpu and cuda functions. (#2273)
42328b70f7 : fix another is_same_size call
7df859871e : Added functionality that allows users to store huge blobs
cb1dd21280 : adding operator lp_norm to support calculating l1 norm and l2 norm
ca98c659df : Add tests that gradcheck grad sizes match input size and fix advanced indexing case that fails check.
2a8379847b : add reentrancy checking for gradcheck.
eb1ac73184 : Remove save_mean/save_var from BatchNorm double backwards, as it's not needed.
677324518d : Add InputsCanCrossDevices() to NCCL op schemas
ded2a5899e : Option to set BN scale and bias initial values
ab42a95b6f : fast path for CUDNN global average pooling
b3ca3da4b6 : fix type mismatch
0fc2bf26b4 : Option to enforce batch size
c662480ea6 : Return empty Struct when get_field has empty input
da66f10042 : Improve StringJoin operator
f484a5fee8 : Implement LogSoftmax double backwards (#2270)
0c7ee02c37 : Add CUDA implementation of BooleanUnmask and fixed some bugs in the test
6314c1fc15 : Transforms in Python
c92559c67f : Add ios_base initializer to operator_schema error path
676bedd298 : Fixes for Python 3 in caffe2/caffe2/fb/data
60cb55461e : Caffe2: Support additional outputs in ImageInputOp
3a99698734 : include numpy's other 32bit int type
5d304a3b49 : add gradient for SparseToDenseMask operator
d804c848dd : Added O2 flag to default compilation path
5954211ed9 : Fix #997
f961d6da60 : Update SoftmaxOp documentation: input not necessarily 2-D
5cca4cc0f2 : Fix blob device inference for LearningRate
1968e03486 : net_printer.to_string() accepts NetDef
3324db447f : Caffe2: allow nets that don't use all input in net.ClonePartial
6d8933b939 : improve enforces in SquaredDistanceOp
aebec91301 : Fix serialization of legacy ClassNLLCriterion with ignore_index.
9c1e9d8a9b : Update legacy ClassNLLCriterion to add ignore_index.
61c873cc7d : Implement SoftMax and NLLLoss double backwards.
e1ca722988 : Add comments for default value (#2248)
9a6c72891b : Move Transform from Contrib to Core
d035af1f2c : Added general string operator matching a la tensorflow, device option, engine, and argument matching
b4fe71925d : fix #983 by remove unsupported archs
9349dab8a0 : Full sync of fbcode to fbobjc/fbandroid
8a396f8dbe : fix #985 compilation error due to type mismatch
3c8018b565 : Utils to Print accumulate histogram of blobs
e38015756a : shape inference for Squeeze
82adbde878 : pass layer_parameter shape to ps builder if cannot inferred from initializer
410f464dd1 : provide more information in Declarations.cwrap
8c65b5ab34 : multilayer seq2seq
5f8693cc6f : Make Context::FinishDeviceComputation throw instead of FATAL
8079abbaf1 : fix traversal order
76bb054f2c : Scaffolding for perfkernels dispatch of embedding lookup
8262920b72 : Add ATen overload to AutoGPU. (#2234)
95bcc09812 : Resolve some compiler warnings.
0cd149f06f : Add comments for default value (#2242)
e3c45206ec : Add a method to run a train net multiple times in layer_test_util.py
43c944acbd : Remove dead THPP code that has been replaced with ATen objects. (#2235)
84b9d267dc : add warnings about slow data input
bf26a51f91 : fix a bug where an uninitialized at::Tensor was passed to createPyObject (#2239)
6530db49bc : improve pair_wise_loss operator to support multiple sessions
80192d3e8d : Add rudimentary support for calling a few sparse tensor functions.
930b6b83c5 : Update class comment of Context
071127cc07 : change bunch of inexpensive DCHECKS to CAFFE_ENFORCEs
f2090debb0 : Optimized SparseLengthsSum
c304d04fc6 : Replace thpp::Tensor with ATen Tensor in autograd csrc (#2170)
f1fd4ac7ed : Added aarch64 support (#2226)
a41cbdec0e : float support for square root divide
0676dfef2b : ExtractPredictorNet should strip gpu_id prefix from step_net
13569c9aa0 : Fixing semi-random layer model for multi-layer models
78d1806679 : fixed invalid memory access, caused by iterator invalidation
26645154bb : warn about using test/val model with init_params=True + fixed some cases
be7dcccdd9 : fix issues where scale gets reported as 0.0000 in output
ac76ab5fca : Increase tol. for float tensor qr big test.
af1e45c1e1 : support appending net and converting them
d8443b8ffa : BatchGatherOp
a53f4b0f9b : add dimension check to NHWC2NCHW shape inference
04f31aa034 : Improve Variable.retain_grad
ae59e008cd : add `retain_grad` method to variable, so gradient gets stored during backprop on non-user variables
e25b3d7bc5 : replace long long types with size_t (#1267)
ba1ae2136e : strengthen gloo_test by checking for success
8a156b651b : Move cpuid ctor to .cc
3363681304 : enable CreateCommonWorld to bootstrap from existing common world
346ff7ed18 : Implementation for Pattern Net Transforms, which is a transform initialized by a Pattern NetDef and a Replace NetDef.
de92dbe4bb : MKL code move
1eecabcfb5 : bug in mtml: shared_embedding
45ce863151 : CMake updates.
40b783b746 : Fix flaky test due to numerical gradient approximation error.
d187b2f4c9 : MKLDNN bugfix
1bd7fd6bc8 : fixed nnpack transform to match on cpu not gpu
925208af72 : Implement BatchNorm double backwards (#2207)
643f8d12ff : [bugfix] in bce_with_logits logsumexp calculation (#2221)
d4bd6c4314 : add some asserts to basic.cpp
9bec54bbf1 : Modify arc cosine feature map and semi random layers to initialize parameters as global constants
3b6d01301f : add valgrind to CI
fb8f9de498 : fix for ATen API Change
54b171eae5 : Caffe2: don't swallow exception stacktrace
cb9ad7a892 : Opt into Trusty builds. (#2214)
fd97d92479 : allow retain to be specified for unsafeTensorFromTH
f3aa97f169 : Deduplicate THPUtils_checkLong/THPUtils_unpackLong (#2218)
8f8dccd2ed : distance_op_test from hypothesis_test refactored
9c0d52a32f : fix osx build errors related to long/int64_t
54545c2154 : Note [Undefined-dim versus 0-dim]
9ec7051442 : Remove __func__ hack in auto nn.
e6ffc2acb1 : Add get gradient for CTC
2676c6357f : Enable Conv groups gradgradchecks. (#2216)
d89632b52c : Support (U)INT8, (U)INT16 in data type conversion
e6d2941ec0 : revert codemod since this code also need to be built on ARM
b6daf562c4 : fix windows build
d20e50a39f : fix perfkernels build
8cc9dbf357 : Added Ninja generator support on Windows
cf1ce29631 : Fix GPU SparseAdaGrad with empty tensors
0458985c1b : Fix build with external nnpack installation
0ee5688892 : Fix SparseLengthSum undeclared schema
2f5c96a730 : Fix Flatten operator for empty tensors
240b307d8b : Implemented Registry pattern + ConvToNNPack Transform as example of it
ef3b09fb5f : fix a bug where some scalars were getting truncated to integers incorrectly.
133dc2603e : Support grouped convolutions in MKL
d86f32ae2e : Implement simple graph rewrite functionality.
1313f70390 : MKLSpatialBNOp doesn't support in-place
9e6ea2987f : MKLReluOp supports in-place X/Y
af43e2b251 : MKLConvOp handles the no bias case
f028e74fb7 : Implement a filler op test
ea813c0a91 : Fix MKLFallbackOp random seed propagation.
bbf2b578dc : Implement MKL CopyTo/CopyFrom ops
71d04fd5cc : Implement SumOp for MKL
ad7d7657a4 : Add tests to targets
007492e730 : Fix MKL spatial pooling test
a7d8f489d9 : Improve MKL SpatialBN test
8b2c6341cc : Fallback for MSRAFill
f656e002a7 : CosineSimilarity GPU
e548580f31 : Add missing models to torch vision documentation (#2204)
421607a935 : DataParallel device_ids slicing fixes (#2200)
eccddbc204 : vectorized typed axpy implementation
c2f2b5ad51 : lengths_reducer_ops refactoring.
0d96933338 : Fix assert
7be545292d : Update cudnn.py
a0e83280ef : Update cudnn.py
aa35be2032 : search for cudnn in conda
626840aef3 : C function wrapper uniqueness (#1912)
babb28d2a3 : Change DCHECK to CAFFE_ENFORCE in softmax_with_loss_op.cc
5449afa855 : use model.create_param instead of using param_init_net directly
eae6400d59 : Updated summary for the FC layer in caffe2
bcea678e7b : Update rebased functions to call apply.
1a52ca02ef : Always return indices from MaxPool autograd functions to simplify implementation; The callers (in functional.py) will filter out the return instead.
84314859af : Implement double backwards for MaxPool2d.
9c2beb33c5 : Implement double backwards for MaxPool1d.
7deba74969 : Implement MaxPool{1d,2d,3d}Backwards (non-differentiable) functions.
48bb07a4db : Implement double backwards for AvgPool3d.
bb86ed7b97 : Implement double backward for AvgPool1d, AvgPool2d, LPPool2d.
291369ff1b : Convert pooling functions to new-style, once_differentiable functions.
2118400e18 : Fix lint.
39934da8b3 : Address review comments.
c12b494329 : Implement double backwards for ELU.
506d52dc33 : Add check_gradgrad=False for new NLLLoss2d test.
7687c2677a : Fix double backwards advanced indexing derivative wrt grad_output. Also small legacy nn test issue and unrelated syntax issue.
97d21e243b : Implement L1Cost double backwards.
0bda56956e : Implement double backwards for auto-generated HardTanh.
40af93bb57 : Optimize PReLU double backwards via a PReLUBackwards autograd function.
9608e37969 : Implement double backwards for PReLU.
ec7c510557 : Implement Softsign double backwards.
8636be3880 : Ensure gradients wrt grad_outputs are checked in gradgradcheck.
fb2284f3a0 : Add gradgrad checks for NN module and criterion tests.
9ec9dee27d : Implement NN Criterion functions as potentially double backwards functions.
7b6aab9079 : Unify implementation of _Loss and _WeightedLoss autograd functions.
852dd5f011 : Convert _WeightedLoss functions to new style autograd functions.
085abee444 : Rebase kl_div changes.
48b85fe012 : Implement THNN non-criterion Functions as new style with backward/backward.
45ce4df74c : Convert auto nn Functions (non-criterion) to new style.
5695cbf986 : Add comments in loss.py and distance.py (#2189)
03df5debe3 : Gloo fixes for Linux + old cmake (2.8.0) + old glibc (CentOS6)
8930c095c1 : Add support for int32 indices in SparseLengthSum and friends
ceb4f84d12 : Improve memory usage of cuDNN RNN modules (#2179)
10667a914e : Add linter for enforcing caffe operator documentation
112728cbe9 : reformulate bce_with_logits to not use abs (#2195)
dc17fb68e4 : Fix minor bug in parallel_apply (#2193)
0eda7955bd : use internal cell for DropoutCell output prep methods
0deee2194f : Add a quick SparseLengthsSum benchmark.
4195858614 : factored out DBExists function
7b2b817b9c : improve error for non-existing/vs. sparse or dense gradient
9c3e59d484 : updated research proposal link
f6afa6adbd : Add proper cpuid support.
3c1c3c10e7 : Apply OperatorDef shared pointer memory saving feature to DAG nets
99e79a616b : attention with encoder_lengths
4a4d8841e6 : Delete unused import
b51e0ec0c2 : quick fix inplace blob bug
920c553ac0 : saving/loading CPU/GPU nets
4a256dfc97 : save/load/run nets and params with device info correctly
3c275fe7a0 : Increase flaky test tolerance (#2185)
6892b03499 : bindings
58039aa25b : Improve PoolOp NCHW
24ece087c7 : Replace ReduceDimsOps math::Gemv with CUDA reduction kernel. 5.6x speed up.
804ebf7c41 : Populate learning rate blob name into data_parallel_model and fix resnet50_trainer example.
34be12353b : comment out unused parameters
1978bba3e4 : comment out unused parameters
b16c911667 : Implementation for Graph Transforms
8e80ef7e6d : s/CopyGPUToGPU/Copy
35757af6f7 : Add broadcasting of weights to bce/bce_with_logits (#2161)
8ab3d214d5 : Fixes for DistributedDataParallel (#2168)
71ce3448d9 : Fix torch.inverse when magma is not available
2efac3ed83 : Fix torch.inverse when magma is not available
66bbe5d75a : .creator -> .grad_fn in the code example (#2171)
efe2d01a3e : Fix some bugs in CPU version of BooleanMask and add GPU version
ea607afd06 : Add comments in nn.Upsample (#2175)
d94c68ecff : Remove net_def_ from NetBase
4f035f14de : Add a support matrix for distributed backends
72e9e7abf7 : Warning squash.
7f28a891f3 : added sincos function to caffe2/utils/math
4d45ce7d11 : Added UpSampling module and associated tests.
cbb85545ec : warn about orphan StopGradient output
bcce1bd04a : Fix optimizer_context OSS test
290acab2c7 : implement drelu and unittest
eed323c344 : avoid warning
ea6f9a26b8 : fix version number
3719b4247a : return a sentinel value when THTensor has undefined dimensions.
bf1fc250d1 : get conda root dir automatically, trick from Dockerfile
47942307b5 : Comment that data of THStorage may be NULL.
6b69723d4f : Document how Numpy memory management works.
5254846bb2 : fix typo of error msg of cmul in THSTensorMath (#2158)
f3f478960e : Convert Embedding to new style. (#1916)
e537023147 : add functional embedding (#1987)
09abaa2189 : make keepdim backcompat warnings emit in autograd as well (#2157)
575a4a98e0 : Remove assertions with side effects
02e23f4f6b : Unify argument names in tensor and Variable methods
8946502348 : Accept all kinds of arguments in Variable.expand
e708de37cc : Allow keyword args in long_arg options
4af40e3471 : Let parallel_apply accept arbitrary inputs
f417cb062b : Fix repeat backward to handle unsqueezed dims
29cb541ec6 : Fix CMakelists.txt for caffe2/contrib
8d35b11af2 : Graphs for Graph Transforms
44790697c7 : Nuke arg_helper() in OperatorBase
11f3ccf98f : Add missing Modules to nn.functional (#1801)
31894cafdd : add support for advanced indexing with less than ndim indexers, ellipsis (#2144)
95ccbf8b0b : better error message in load_state_dict when there are inconsistent tensor sizes (#2151)
82143487b3 : Add CUDA support for arange
bd6263c338 : Add CUDA support for arange
1c6a08c1c2 : fix lint
a5c2546c0f : version bump
65e675e3e1 : Fix net construct bench
13e84e460b : Use unaligned store intrinsic to enable vectorized reductions on unaligned buffers
b660303a16 : Static linking against libstdc++ in Binary Build mode
768b7c0dee : Static linking against libstdc++ in Binary Build mode
ae3a8d5d2e : Static linking against libstdc++ in Binary Build mode
58334a0c4b : static MKL detection and linkage fixes
cfcf2af95f : add explicit BLAS linkage to THC when linked against magma (in binary build)
c4120f34bf : move to model with cuda indexing tensors for cuda tensor adv indexing
9755505122 : move to model with cuda indexing tensors for cuda tensor adv indexing
8b42308f71 : Bug in line 381 (sparse) (#2130)
685ae4813e : Squash "macro expansion producing 'defined' has undefined behavior" warnings.
c1384ef99e : Fix NDPooling gradient non-symmetric padding check.
9b9df3fbeb : Sync mobile codebase changes back to fbcode
703429d49e : Make clang shut up about class/struct mismatch.
4a81b0f24a : make SparseLookup support None pooling
11c4647447 : Allow CPU device scope in data_parallel_model and data_parallel_rendevous device scope checks
8451468d8b : still generate multiple versions
3cc03568da : Fixing error message for layer model helper
138b216686 : add support for Null Tensors to functions
4e019dbb6f : Rename def() to debug_def()
5881aa0a78 : Use shared_ptr to share OperatorDef across threads
e5a7891038 : dot product using matmul
dc58544779 : fix baddbmm for expanded tensors
427cc68ba2 : added TensorInferenceFunction for ExpandDims operator; deleted Reshape layer.
8a6d348bb8 : Improve QPS metric. Better reporting to the UI.
e13704c467 : fix shadowed variable name
cddb73899c : fix strip prefix bug in SaveOp
95291f0f74 : Revert D5348078: Add linter for enforcing caffe operator documentation
b6722a07cd : remove compilation warning on segment_reduction_ops.cu
e9dd8e0e3b : Use one key for all pairs per node
78c4c4f885 : handle RecurrentNetwork operator when clone net
981c84f7b2 : remove unused parameters in math_cpu.cc
f7a92145d4 : comment out unused parameter in pybind_state.cc
0d833590c1 : Change Allocator interface to return deleter
baef769035 : add code comments to memonger
746ddb7364 : Fixed error when compiling with clang
a3c9054245 : Add comments in loss.py (#2128)
2dc8851206 : RNN Workspace Blob Extraction
32b13d6243 : Add linter for enforcing caffe operator documentation
e233875498 : CodeMod: Prefer ADD_FAILURE() over EXPECT_TRUE(false), et cetera
c7b624651e : CodeMod: Prefer ADD_FAILURE() over EXPECT_TRUE(false), et cetera
ba544aa0ad : Add comments in nn.ELU (#2111)
0eeb57a5a2 : Detailed per-operator tracking for all nets
849fb1f7e3 : Fix when running with python -O (#2120)
9e2c74cc58 : Use scope name for dataset cursor
16dd997239 : Spelling tweaks for documentation (#2114)
1c0135b6f2 : CreateCommonWorld: pass timeout for storehandler
b6691277f5 : binary size util
feecb09517 : Added sensible default root location for MKL on Windows
b68adec7bb : adding model loss logic
27488b4950 : Fix Sumop schema
bd29260f47 : hypothesis_test grad_reference bug fixes
366299f9f3 : Wrap unbiased flag in var, std, varall, stdall
9851ef4979 : Wrap unbiased flag in var, std, varall, stdall
f805a8388b : Wrap unbiased flag in var, std, varall, stdall
16203f3325 : fix test
80d067e70f : retain_variables -> retain_graph (#2107)
d2874c560e : lint fixes
2aa8fc7e8d : Implementing Semi-Random Features Layer
83596bdcb1 : produce a Declarations.yaml file that describes Functions/Type/Tensor methods that framework produced.
33ac9cdc10 : add ATen tensor support to pytorch tuple_parser (#2102)
38ba935547 : operator== for type
128e02d792 : allow type inference to work on TensorList
7ee7542fc8 : Fix handling of if_true/if_false in ATen
52a9367fa7 : Fix minor typo (#2100)
43eaa28b9f : fix empty Tensor mmap
7e498d2219 : fix empty Tensor mmap
a305ce3ece : Fix broken seq2seq example
d6bc2642e7 : Add ignore_index to NLLLoss2d
7d3511f5f2 : Half fixes for ATen and CUDA 9.0
a5a8ab10b0 : fix Hardtanh argument names to be consistent between functional and Module
25b591eb05 : lint fixes
06f94a7d59 : better error message when thread_local is not supported (#2092)
f44991b398 : add timeout argument to DequeueBlobs; use 10 min timeout for data workers
34f7acbedf : Report bugs in BatchNormalization, the dimension is wrong for second order
13980d2bb5 : Set device to the default device(CPU) when DeviceContext is None.
80b620960c : Make QPSMetric count from the first example
113ff22e65 : remove unused parameters in logging_is_google_glog.h and operator.h
d8fee1ebe6 : add launch_bounds to greedy kernels
ed6f5d7038 : add launch_bounds to greedy kernels
9e720f1547 : fix bug in method declarations
ab26fa01e6 : install vision in devel dockerfile, minor fixes to dockerfile (#2090)
f4ae64a6c7 : add isCUDA() on Type
07fcd977bb : add cudnn data type processing for ATen tensor (#2087)
54cabb8bf3 : Correct negative dim behavior in torch.stack (#2084)
42485d87c2 : Set the current device in each engine's thread (#2081)
ab0d631d6d : Adding AllCompare-like function to data_parallel_model
007d6ad816 : write generated_cpp. to a file rather than as output to make error reporting clearer.
6db960fbcf : dont clobber gen.py error, fix for old versions of python
c011d4f3d6 : resolves #1991 (#2073)
f98c384973 : Raise error when call from_numpy on 0-dim array (#2075)
59c0bb9e5a : fix for duplicate input case
48b797a785 : fix lint
043640c3eb : Return top K classes
8983bf13f4 : fix max and min docs
20ce45b0c3 : fix EmbeddingSum offsets initialization
1e98155711 : long -> size_t
1c14178c65 : fix osx compilation
37183e91de : add normalize docs to sphinx
58e4caf80f : add missing docs
3faca65adf : Add a unit-test to validate sharing learning rate between
c888857461 : Conv double backward groups (#1993)
c48d50a2e2 : Advanced Indexing: Calculate linear offsets directly on the GPU when working with CUDA Tensors
41abcd4b41 : Advanced Indexing: Calculate linear offsets directly on the GPU when working with CUDA Tensors
703ccbb8cb : Advanced Indexing: Calculate linear offsets directly on the GPU when working with CUDA Tensors
27da4eafc2 : Remove more advanced indexing duplicate tests (#2071)
ce96b84ccb : Check for shared_mem size in multinomial single-sample implementation
82e318cf8b : Optimizer: one LR op per (device, optimizer)
d6f5452240 : Allow to import subclasses of layers
feddb03d58 : LP pooling kernels
150ce4c1d8 : decode only required # of frames
fe3802d724 : match PyTorch syntax
294b0eb901 : Remove outside access to OperatorBase::def()
b8d0c7fc0d : checked cast does it all
ea563c1df1 : Make weight norm pickleable (#2066)
2520459617 : cpu lp pooling
841173c530 : Use NamedTemporaryFile to avoid filename collisions (#2069)
f4c502e8a8 : basic cat implementation in ATen
dc2ed7fd33 : Fix ATen build for debug python
81fd2bf2d0 : fix some language / typos
8915e2710c : Refactor scatter/gather and add distributed docs
ebd5c085dc : Fix a memory leak in DataChannelTCP
a9759ef401 : Fix undefined symbol errors in THD
02aa5ad9fb : make functional layer return scalar if only one output
169ca67a4e : Adding Spatial Transformers w/CuDNN support
5894864a1c : Adding Spatial Transformers w/CuDNN support
9e4d060348 : Exposing TreeWalker & TreeIterator in header file
a68bb5e3f9 : Added device scope checks to data_parallel_model and data_parallel_rendevous
7c10f1b932 : Avoid two unnecessary copies in addmm backward
a20729244b : Avoid two unnecessary copies in addmm backward
74fd4bf9e4 : quick fix for model_helper __init__
b9e64ecef1 : allow param_info to set optimizer
a74fb22b9a : fix inplace division for python3 (#2063)
0d91048639 : add dummy tensor.data property, to provide interpretable error message to users (#2058)
54e8ef14fb : add flag caffe2_serialize_fp16_as_bytes
823869ba79 : Adding tanh to brew
3d1af15a35 : logging for all operator calls
67d2f45e2f : Fix net_printer.py
10e23943b3 : Fix missing _forward_pre_hooks in serialized modules (#2057)
be18499e85 : Fix a few C++ warnings
1037f30e41 : add some documentation to Tensor
192e0546bf : fix for back-and-forth models, pass reference instead of copy
78ecc2d3b1 : Alias multinomial sampling in Cuda (#784)
f483679425 : Implementation of Alias Multinomial for faster Multinomial sampling (#1046)
c8afdb6f4b : Caffe2: Add Open method to DBReader which takes DB pointer
dfd5d8d0fe : Avoid two unnecessary copies in addmm backward (#1971)
158c7e86dd : add basic gitignore, thpp -> at doc fix
73128f7b08 : fix minor typos (#2051)
f536c662bf : fix op in docs (#2048)
2ecb18881c : add DynamicType variants for ATen functions.
9d8cff9bc1 : initialize aten and pytorch to share the same THCState
ab3d85c410 : add build commands for ATen
3314d51dcc : Add __repr__ to Avgpool and maxunpool layers (#2047)
e89e71c595 : Simplifying Random Fourier Features and layer test
1ef1dd9cad : Add comments for readability (#2005)
3a073c591b : improve SumOp error message
97193478c7 : Implemented GRUCell
2409c2e359 : GRUUnit Op Backwards Pass
279f3f095e : Implemented Gated Recurrent Unit (GRU) c++ operator forward pass
48bd102b95 : Moved sigmoid, tanh, and _prepare_lstm (renamed) to a util file.
4b1ebd2f65 : Fast path for serializing large floating-point tensors to protobuf
c096c188c3 : minor leaky relu bug fixes
98206c326e : Fix ref counting in wrapped tuple functions (#2042)
9d0c674cb7 : always use a custom default float
bff762c3ff : python style fixes
10a8ccf27f : only test gets for advanced indexing with duplicates (#2041)
0a9e8a23ef : add atan2 function to autograd (#2040)
720db19fa2 : make GetComputedParams work like GetParams
d9daad509d : Serialize float16 tensors as bytes to get rid of 50% overhead
d88cb87300 : add dilated convolution guard to nnpack op
ff3996acb9 : Add NormalizeL1Op for doing L1 normalization along given axis
6ea71155c1 : Implementing Arc Cosine Layer
8b003565ec : remove inaccessible median variant (#2015)
53ac2d46c6 : Fix typos in docstrings. (#2034)
ab3a9e177e : Fix sdot_ bug for runtime F2C symbol conflicts by using cblas where available
46a868dab7 : [Ready] Limit docs line length (#1900)
581921f696 : support unsafe functions for getting/constructor tensors from TH objects for backward compat.
0025e1c776 : Fix typos in the docstrings of Conv3d, AvgPool3d and MaxPool3d (#2030)
9cba97a833 : Pairwise-exchange benchmark with bandwidth measurement
c6d7e1e6bf : added input size checks to batchnorm (#2020)
3598bdd044 : Modify samplingTrain layer to take more general inputs
35ad7be55f : Add Sum operator to mobile
dc13345eb3 : Read pretrained weights using binary mode in caffe_translator.py
49f679d0e9 : Acknowledge the existence of cpu HalfTensor (#2018)
5f63f5697a : IndexHash
f0788afb0c : lazily initialize cuda so that we behave similar to PyTorch
86b6a6e2f8 : Added PiecewiseLinearTransform CUDA Op
cb7f17ab64 : added gradients for ResizeNearest (CPU + CUDA) and ref
febae7b20b : fix a bug in the report function of Data_Parallel
6bff82eb6a : Revert threadpool minWorkSize change on iOS
a4dc7dcd04 : osx build issues and clang warnings
5dd05ed8ee : remove Sparse from dispatch for now, will add dispatch variants later
0a34f05d5b : Always include THNN in the build, don't check for CUDA twice
4fda678a85 : fix build issue when cuda does not exist
8cedf35d55 : Adding Random Fourier Features to SparseNN Model and Flow
ebdec9a837 : Skip distributed tests if not supported (#2004)
c3c7845572 : added asserts that grad_output + input are contiguous (#2000)
f8089c789c : One more proto_utils.h fix
90d0762d14 : Use torch.arange instead of torch.range in test_torch.py (#1996)
ad62e82179 : fast simple-net memonger for C++
e8689dda8f : Python 3 compatible integer division
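The integer-division fix above is the classic Python 2/3 porting issue: `/` on two ints floors on Python 2 but yields a float on Python 3. A minimal sketch of the portable idiom:

```python
from __future__ import division  # makes `/` true division on Python 2 as well

def split_evenly(total, parts):
    # `//` floors on both Python 2 and 3; a bare `/` would produce
    # a float on Python 3, breaking code that expects an int.
    return total // parts, total % parts

assert split_evenly(10, 3) == (3, 1)
assert 7 / 2 == 3.5      # true division under Python 3 semantics
assert 7 // 2 == 3       # floor division on both versions
```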
31f394f8b3 : Add synchronization barrier API to data parallel model
43c46cc883 : Reduce default ThreadPool min work size (~25% speedup for segmentation on S7).
21ba0ff560 : small fix to when input blob is input to multiple ops
2d133d4627 : increase concurrency default
78a4fd1044 : Add Caffe2 op for Gloo barrier
73fead9f8f : add shape alias (#1983)
be7725b0ba : Tests: fix dpm test when only 1 GPU present
87730360d1 : Small improvements to CreateOperatorDef
4fddc04054 : Use the same schema of switching to device reduce sum for SumSqrElements
60e4607106 : brew API in convnet benchmark
e60bc2df85 : TravisCI: run Python tests
3748b6d3eb : Data parallel fix for https://github.com/pytorch/pytorch/issues/1857 (#1880)
25bd5dda27 : Implementing random fourier features layer
b3589b04fd : Fix exceptions not being caught (#1948)
03aec5ae53 : Add Tensor::FreeMemory
5964394a4c : return empty iter when tensor is empty
1aaa24d99b : add medianall prototype to docs
ab7d4e2bce : add missing definition
ae65236490 : Fix typo
56df97ce93 : remove unnecessary contiguous assertion
05c2bafc9d : Have median reduce over all dims and return just the value when dim is not provided
0dbf871d9e : Have median reduce over all dims and return just the value when dim is not provided
f425c5216b : Have median reduce over all dims and return just the value when dim is not provided
635bb5ec9d : corrects typo
00e5afea6a : Adding dedup aggregator options to sgd optimizer
2ac9ff5c96 : Cos, Sin, and Abs operators
e5bac2dd2d : Add critical section to BLAS gemm.
090506ac87 : Add NCCLBroadcast to correct net
ec8da55a7d : bind THS THCS, leaving all operators unimplemented. This is required because THPP can represent Sparse tensors even though the wrapper doesn't implement any operators.
b4414c0dc3 : Handle None in modules list.
39edc378fb : Fix lint.
f6578c1b24 : Implement double backwards for Dropout and FeatureDropout.
daa84e7663 : Implement bilinear double backward.
1aa145dbac : Implement ConstantPad2d double backwards.
d4b8834131 : Improve non-contiguous testing in TestAutograd: (#1933)
699d1ec7fb : Address flaky Norm test issues: 1) Add a correction for 1.5 norms to ensure input can't be zero. 2) Increase test tolerance.
05062a1439 : Better handle random seeds in tests.
e187ba7a9f : Decrease likelihood that Fmod/Remainder tests fail due to numerical jacobian check.
c691fc6dc7 : Add a nonContigDim reduction kernel to improve latency for small tensors. (#768)
42cf68b402 : Make reduction functors accept only constant arguments (#753)
8a65ef1098 : cc 2.0 -> 3.0 in docs.
b6c1c0ac4e : Fix communication_schema decoding
406040f6a9 : fix torch.is_tensor not recognizing HalfTensor (#1934)
e26139b7f7 : fixed shapes in GRU and LSTM docs.
457587088a : Fix broadcasting issues in binary_cross_entropy_with_logits (#1944)
d43b42fb37 : allow querying tensor device + tool to validate that all ops have tensors from correct devices (GPUs)
c0cebc3578 : Added flags to lstm, convnet and sparse_nn_benchmarks to print out operators
ab0fe0a5f4 : add debug information when there is blob version mismatch
f3a59aedff : Use cub::DeviceReduce for faster math::Sum CUDA version
da0fad8a7a : Use torch.matmul in nn.Linear (#1935)
2c038f2074 : Add weight normalization implementation (#1945)
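Weight normalization, added above, reparameterizes a weight as a direction times a learned magnitude, w = g · v/‖v‖ (Salimans & Kingma, 2016). A minimal pure-Python sketch of the forward computation:

```python
import math

def weight_norm(v, g):
    # Decouple direction (from v) and magnitude (from scalar g):
    # w = g * v / ||v||, so ||w|| == g regardless of v's scale.
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

w = weight_norm([3.0, 4.0], g=10.0)
assert w == [6.0, 8.0]
assert abs(math.sqrt(sum(x * x for x in w)) - 10.0) < 1e-12
```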
b3e500c522 : fix docs generation warnings
b3f6ff1b3d : Fix unused linker argument warnings. (#1958)
5aa147f273 : added PackRNNSequence and UnpackRNNSequence operators
8c74c36626 : fix reducing device option
326e314695 : Add optional timeout to Gloo ops
5355634dac : Dict fixes/improvements and unittest targets for Python 3 in caffe2 core
a6dee1da32 : Make args.fixed_shape in lstm_benchmark work in a library mode
dd6e170b8d : fix LSTM benchmark reporting
6c67a753c7 : Fix test_pair_wise_loss_predictions
912ee4e40a : Fix `test_sparse_to_dense` precision failures
83765906c6 : Add min_satisfying_examples
6df23b418d : mark tools as excluded in find_packages (#1915)
a4cc9f2fbf : Per-workspace mutex for shared im2col buffer
e5b5154768 : Make cudnn warnings clean. (#1940)
a0bfda2390 : Fix issues to access Scuba and move out scuba logging from opensource to fb-internal codebase.
bfaddc0a19 : Warp intrinsic fixes (#785)
4d5075add2 : Add ignore_index to nll_loss and cross_entropy (#1937)
86305ddd49 : Deprecate CNNModelHelper in python/seq2seq/seq2seq_model_helper.py
fb4c0a664b : brew API in lstm benchmark

a60f90a3a3 : Only notify on DAGNet condition variable if the condition is actually true
e128245e8c : Move memonger graph equality into memonger
0a95613cef : Improve error message when accessing attributes that don't exist (#1936)
cb04548577 : Clean up nvcc compiler warnings in utility_ops.cu
8a4eb50ed1 : Speed up torch.matmul for 3D+ x 2D/1D tensors (#1931)
fe9b0bfd27 : Fix some typos
ea659b8f2e : broadcast to global parameters when using warmup
f3c15091c9 : don't try to attach observer if net creation fails + unit
87ab40c617 : Tests: fixing observer_test
fbe2526343 : Allow concurrent execution of GLOO broadcast collectives in
e2bd3cfc8b : Add __sub__ function for schema.Struct
b5e1df046e : fixed typo in formula of GRU in doc (#1921)
08648061f7 : Advanced Indexing 2A - Colons + Adjacent Adv Indexers (#1890)
8260002941 : Partial eval layers
a1fcbb8be1 : offline_all_gpu_experiment
1fce3eac4e : single trainer hybrid device
9a14c013c3 : Refactor data_parallel_model to take advantage of Gloo broadcast op in broadcasting across machines and GPUs in one operation
c3b4d277bf : Tests: fix test_convolution_sync()
75fc49833f : An observer for every created net and op
08cfc72dee : Increase threshold for test_unroll_attention
07ba98b4b2 : Allow specification of SliceOp dimensions via argument rather than via tensor
4c35c630ec : Enable norm gradgradchecks by lowering precision requirements.
3744efeaf8 : Fix double backwards for prod.
bc032be13e : Implement negative dimensions and double backwards cumprod.
ee1f21a53e : fix perf bug in TransposeOp for CUDA
81f539a283 : Implement SliceGradientOp for CUDA
4d16578284 : fix + verification for inplace blobs
6057fa9193 : Avoid compiler warning in operator_schema.h
f814a892cf : don't re-seed cuda device if in bad fork (#1923)
dfd745a4d1 : Conv frontend: checking engine and use_cudnn
d45f722e43 : data_parallel_model: NCCLBroadcast root fix
ca2bf16009 : Tests: handle missing python-lmdb gracefully
d592e188f7 : port of ConcatDataset (#1902)
c0445c4426 : support_multi_label
ae61f3ff42 : adds poisson NLL loss (#1779)
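The Poisson NLL loss added above, in its log-input form, reduces to exp(input) − target · input (dropping the target-dependent constant). A minimal sketch (names are illustrative):

```python
import math

def poisson_nll(log_rate, target):
    # Poisson negative log-likelihood with a log-space rate parameter:
    # exp(log_rate) - target * log_rate (the log(target!) constant is dropped)
    return math.exp(log_rate) - target * log_rate

# The loss is minimized when the predicted rate equals the target count:
assert poisson_nll(math.log(4.0), 4.0) < poisson_nll(math.log(2.0), 4.0)
assert poisson_nll(math.log(4.0), 4.0) < poisson_nll(math.log(8.0), 4.0)
```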
1f391a42f7 : fix warnings for docs generation
24e30534ea : Implement SliceGradientOp for CPU
b933423495 : support more than 8 gpus (#774)
ee1b7b50b3 : fix docs for broadcast warning
cb5af39c69 : Vectorize CPU ClipOp implementation (and add test)
7cdd018db4 : Fix assertEquals for lists and tuples (#1913)
4862c0f47f : Memonger in O(blobs)
82c38daa85 : added research award info
87275817a4 : fix a rare race condition by initializing scratch blobs beforehand
553e4ec20d : Refactor conv_test - no cuDNN+dilation+NHWC
7806a09f03 : Fp16 fixes for CUDA 9 (#783)
7523c49f03 : add missing INCREF
733a7c6d9a : Fix segfault in SpatialDepthWiseConvolution w/o bias
c8410859d9 : Operator python stacktraces, attempt 2
29887f556f : Unrolled test for AttentionCell
32e666551a : Fix lint.
ab0c321f80 : Fix index_copy gradgrad test by ensuring indices cannot be repeated.
9db14936eb : Ensure masked_select tests don't have masks of all zeros which yields 0-dimensional tensors.
e5857c5f1c : Implement Gather double backwards.
7da77c4255 : Add ScatterAdd autograd function.
656cb1c31a : Implement and test double backwards for IndexCopy.
4ab4938cf0 : Fix and test single backwards IndexCopy.
1324c4b081 : Implement double backwards for masked_scatter.
bb3779efe8 : Add broadcasting to masked_select.
0394bc2b40 : fix clip_op bug
7c24a3d5cf : fix arguments for cudnnFindEx for transposed wgrad
04c9c8c5c2 : fix for loading model with bmuf
5cb73c106e : TravisCI: fix OSX builds
194bc404b5 : CUDA 9
fd86c51c39 : Add ResizeNearest
8f1e641d5f : Deprecate CNNModelHelper in python/data_workers_test.py
ca2b608f83 : Fixed typo
342de07231 : Core unit test fixes for Python 3
a9ea975977 : enable warnings in build and fix warnings
ff914bf201 : Move away from creating netdef from string, which may get deprecated
ccc46229af : Fix residual connections
b1a84e3c70 : update readme and add assign_(Scalar) variant
8a24f2b4d8 : Fix segfault in SpatialDepthWiseConvolution w/o bias
66d93b60b3 : fix a bug with scalar handling by simplifiying the maybeScalar check.
2af6ba3b2a : handle select and operator[] style operations
b59b44fac7 : add checks for scalars on output
a10a1c92b1 : start adding rules to propagate scalar to results
bb6908e163 : Scalar objects can now be backed by 0-dim Tensors.
0c19074c56 : LOG(WARNING) when an operator fails feature checks
c555cd8253 : missing fixed allocator files
5e078bb7cc : scalar flags added, and used to dispatch when there is a scalar variant of a function. broadcast annotations are used to figure out when a scalar s + A should also be converted.
2c73ae507a : Allow assertValidationChecks to take init_net
209c570f0d : Deprecate CNNModelHelper in caffe2/python/model_device_test.py
ee10e7457f : Corrected erroneous docstring for MultiLabelSoftMarginLoss
5ca263fb1c : Add a warmup option for BMUF
a45ad7cfba : Advanced Indexing Part 1 -- Purely Integer Array Indexing
93e05eb458 : Advanced Indexing Part 1 -- Purely Integer Array Indexing
32fd4a3d60 : Advanced Indexing Part 1 -- Purely Integer Array Indexing
667b8347a2 : stabilize softmax_ops_test
f09027bc29 : Add batch sampler to DataLoader (#1867)
43dec0a210 : Remove THCTensor_(expand2) and THCTensor_(expand3).
104234a6a8 : add asserts to BCECriterion
a940d4ff8b : add asserts to BCECriterion
cb4eaa9c5d : TensorLib/Aten --> changes required in pytorch
fb32164a72 : TensorLib/Aten --> changes required in pytorch
ddbd4ef4ac : Support out-of-place broadcast type definitions.
eccc759c36 : Support out-of-place broadcast type definitions.
497db732fc : btrifact: Make pivoting optional.
81e14ad2de : btrifact: Make pivoting optional.
93a7c9de29 : btrifact: Make pivoting optional.
62cfc94f44 : improving TH error messages in Apply macros
3f6cda8696 : fix bug of threshold activation
a836f8f56f : Use and document saved_variables for double backwards.
278cbbae49 : set TH_INDEX_BASE to 0
ffd32c8ab7 : Add distributed BMUF implementation.
68cbb857f2 : allow tensors to be constructed from views of external data. Support creating new tensors that already have a size/stride
cf4ac83a91 : Make List.__getitem__() works with output of List.field_names()
f937e4bffb : Revert D5288993: Memonger Graph Equality into Memonger
97ca7d7e6f : Remove unused thrust headers from math_gpu.
a1c557bc45 : improve error reporting for undefined tensors passed as arguments.
8464ec5c3a : Fixed a bug in compute_interference_graph() when using with multiple in-place operators.
4c5b7d41ba : tensor.data<> also as toLongData() variants. Scalar now also has .to<T>() variants
a531d74dde : ELU CUDA implementation
13e7648fd1 : document accessors
4be5337cca : add support for weight in batch_softmax_loss
f222e226b4 : Memonger Graph Equality into Memonger
249614ca88 : Fix CMake messages when CUDA not present
d46fe736c8 : Fix flaky test in dataset_ops_test.py
005156f6b4 : Fix gradient checking for softplus op
e2107fffba : Fixes for test_recurrent in hypothesis_test.py
c24dabb414 : Enable runtime cloning of tasks.
f795bf0b2a : Revert D5273337: [caffe2] Pare down on excessive futex() syscalls from the DAGNet executor
d9087edb07 : add rekey in feature_processor
34eaa19d27 : CPU data parallel model
7d482742fd : Allow tasks/execution_steps to be cloned at runtime
1572173ca7 : Implement double backwards for Sort, Topk.
e16ceef76a : Implement Scatter double backwards.
b79ff11aca : Implement IndexAdd, IndexFill, IndexSelect, MaskedSelect double backwards.
50c0912a75 : Implemented masked_fill double backwards.
43afb1d4ca : Make sure Elu alpha is strictly positive
c3ad55f746 : add readme and generated files for Type/Tensor/Functions to a doc folder to make it possible to view headers without building the library
29f037f432 : Improved Observers, based on NetBase now
4b93f32234 : rename TensorLib -> ATen
5957218cf0 : Adding Dropout Layer to SparseNN Model and Flow
03f41c8120 : fix capitalization of Python, make it consistent
dd1525d346 : fix #790 so model.init_params = False takes effect
673f1d9362 : Fix packsegments op and text RNN models batchsize > 0
5084ff3b9b : improve blob sharing
e0b70d0f64 : Fix Fmod/Remainder gradgradcheck by ensuring inputs requires_grad.
0b2b7d0594 : Kth value function passes gradgradcheck.
6d97ac0c0f : Missing includes in cuda_collective_device.h
64bec43916 : Fix a bug in BooleanUnmaskOp
a405efa756 : CUDA collectives as alternative to NCCL
5e084a9112 : Don't require pydot for Python tests
a5c45e18b5 : MaxGradientOp for CUDA + unit test
611677702f : Minor Fix in VideoInputOp
67968cb60b : Add numerically stable BCELoss which takes logits as input (#1792)
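The logits-based BCE loss above avoids the overflow that the naive sigmoid-then-log formulation hits for large-magnitude inputs, using the standard stable rewrite max(x, 0) − x·z + log(1 + exp(−|x|)). A minimal sketch:

```python
import math

def bce_with_logits(x, z):
    # Numerically stable binary cross-entropy on a logit x with target z:
    # max(x, 0) - x*z + log1p(exp(-|x|)); never exponentiates a large value.
    return max(x, 0.0) - x * z + math.log1p(math.exp(-abs(x)))

def naive_bce(x, z):
    # The unstable formulation: sigmoid first, then log
    s = 1.0 / (1.0 + math.exp(-x))
    return -z * math.log(s) - (1 - z) * math.log(1 - s)

# Agrees with the naive formula where that one is well conditioned:
assert abs(bce_with_logits(0.3, 1.0) - naive_bce(0.3, 1.0)) < 1e-9
# Stays finite where the naive formula would take log(0) -> inf:
assert bce_with_logits(100.0, 0.0) > 99.0
```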
d2b1cb22a4 : rekey layer
a6c5e3f2e2 : Fix case where interface doesn't have an address
6ee6b4980b : multiple docs
83e6a0bec8 : Revert uuid change to OperatorDef protobuf
ceb13c8cc3 : Don't propagate `-mavx` flag to dependents
a6fcecaa71 : Allow AliasOp to work on empty tensor
82ef292f00 : Add gradgradchecks for various autograd Functions and support Unfold double backwards.
76ee014d10 : Add documentation to SELU and AlphaDropout
f619ac6ac9 : Quickfix for AlphaDropout on CUDA
6150d9bef2 : Building dropout as layer
956e40f0ea : Pare down on excessive futex() syscalls from the DAGNet executor
31e700910d : Fix entropy error coming from test_div
969831ea33 : Deprecate CNNModelHelper in lmdb_create_example
32e6372538 : Split cuda_collectives.h into two files
36bfe5946d : fbcode nnpack ops for Relu and LeakyRelu
4b4022ded7 : Make test_lstm_main more stable
2579be1227 : Skip fp16 initializer test for CPU-only builds
31769fbaf8 : removed events and user group info
90a52c3904 : Skip TestInferDevice if no GPU support
980e2a6b59 : fixed input and output schema for all functions
932cf9eb92 : Fix entropy error coming from utility_ops_test
172a356668 : forgotten import in variables.py
1ec0b89361 : Memonger Graph Verifier
3f860af050 : Implement TopKOp for GPU
329a2f7d27 : Prevent divide by zero in dropout with p=1
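The p=1 dropout fix above concerns inverted dropout, which rescales surviving units by 1/(1−p); at p = 1 that denominator is zero. A minimal sketch of the guard:

```python
def inverted_dropout_scale(p):
    # Inverted dropout rescales kept units by 1/(1-p) so the expected
    # activation is unchanged. At p == 1 every unit is dropped, so the
    # scale is defined as 0 rather than dividing by zero.
    if p == 1.0:
        return 0.0
    return 1.0 / (1.0 - p)

assert inverted_dropout_scale(0.5) == 2.0
assert inverted_dropout_scale(0.0) == 1.0
assert inverted_dropout_scale(1.0) == 0.0  # all-zero output, no ZeroDivisionError
```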
69e38ee821 : clean test code, no functional change
38e6b9c7e7 : fix bug in wrap_outputs miscounting the number of inputs
7775e9e777 : add newNarrow to thpp THCTensor
293262b8f1 : fix cuda tests
e66e01a2a0 : remove extra computations for input usage check
0a93903e8e : move tests to test_nn
bcac55dd2f : force 1 stride for 1-sized dim for cudnn, fix lint, remove extra unpacking
6cdcd9c603 : Add Narrow function
075030d974 : add cuda tests that use only cunn for finite difference computations
23dec70614 : comment on working values for epsilon
fc0ab229ad : remove extra cloning and add contiguous calls
ce3bc5a4a5 : force cloning of weights
3dbece7eb5 : clean tests
bd94718c87 : cleaner AccumulateGrad
2f8d21a7f2 : add contiguous function
4f4fc9091a : add support for newTranspose in thpp::THCTensor
7ee095cf7f : add newExpand and newView to thpp::Tensor
462ab8a644 : add Transpose View Expand C functions
dd5c7c473f : Add ConvBackwardBackward class
6dca309017 : make AccumulateGrad support no input gradient
f945fbc3dd : add gradgradcheck and conv double backward tests
db70d4d223 : 1) Simplify CompareOp autograd backward 2) Use better approach for avoiding divide-by-0 in autograd tests.
7714b5a088 : Fix autograd shape tracking for 1-d reduction ops.
860f51e67f : Avoid nans in fmod/remainder tensor tests. Also clean up CompareOp autograd backwards impl.
2c04ce63a5 : Fix masked_scatter autograd broadcasting.
83bfa5e1ab : Fix masked_scatter pointwise autograd backward behavior.
618f20fb38 : Fix autograd broadcasting for masked_fill.
9711223c12 : Add broadcast autograd tests for dist.
7d0f1c51bb : Fix autograd broadcast for min, max.
7560474fbb : Fix autograd pointwise fallback for max,min.
e69fe5bdb0 : Automatically detect when to skip inplace tests and fix lint.
f3ae90e329 : Fix broadcast and pointwise compare ops with autograd.
bfdd1f2199 : Fix fmod/remainder autograd broadcasting.
b164efb8b0 : Fix lerp broadcast autograd.
94c7260087 : Fix pointwise fallback for lerp.
aac459431b : Fix pow autograd broadcast.
a04d1af0a4 : Fix addr, addmm, baddmm, addmvm, addbmm broadcasting with autograd.
a54a7c1312 : Fix addcmul, addcdiv autograd broadcasting.
9ba799c26b : Fix pointwise fallback for addcdiv, addcmul.
5cfb1329b5 : Make implementation of Variable.mul_ and Variable.div_ consistent.
af2dd0d3e9 : Fix autograd for broadcasting with add, sub, mul, div.
79a343bbd4 : Remove unnecessary squeezing in Expand backwards.
88e4bec8fa : resize bug fix
044679ca7e : Fix Pooling ND non-symmetric padding check.
b077e28d48 : make shape parameter op field for ReduceDimsOp
faa7c2cc2c : fix cuda breakage
21dc425e07 : Optimize SumSqrElementsOp for CUDA
3cecdf84f1 : Storage from_file method (#1821)
49586d9556 : Add basic API support for NCCL 2.0
12094b5114 : Add random shuffle through the data to the benchmark workflow
eefd4b0bb2 : Static RNN: gpu support and lstm_benchmark integration
2a9cb7d4a9 : use brew for Transpose --> major perf regression fix
fda35fd19d : TravisCI Overhaul
8d33603901 : make t() of Variable consistent with Tensor (#1823)
96f19fefc0 : add warning if data parallel model is created for gpus that we don't have
1ca262b25f : Disable smart_tensor_printer_test on OSX
176a841087 : Fixes for CuDNNDropoutOp
6f1b1828e9 : add SwitchToDevice to PrefetchOp constructor
a64560c22e : Remove flattening for torch.dot (#1781)
97f50edf46 : Add documentation for Cholesky lapack functions (#1816)
3a91ac56cb : Add a shared memory machine-wide mutex utility
fc2a8d045c : adding flatten indices output to TopK
84cc82cf3f : Fix stats_ops_test
dc0e857e76 : README: TravisCI and Appveyor badges
5ce9cbae70 : Upgrades python/hypothesis_test.py to use brew instead of CNNHelperModel
e9cba7e69f : Option to read from dataset indefinitely.
d9d89b191d : implement SliceOp for GPU
086cd6fa3e : Don't continue running operators after failure
f61e4ca070 : Fixes in tests to support numpy >= 0.12
ed2d7d27ab : LSTMUnit fix: Backed out changeset 1fa39ce474c7
9d8a194cef : Deprecate CNNModelHelper in python/workspace_test.py
c4c3797b0d : Deprecate CNNModelHelper - Inception()
b0625ff566 : Deprecate CNNModelHelper - VGGA()
4aff677d3d : Deprecate CNNModelHelper - OverFeat()
078589d7c6 : Deprecate CNNModelHelper - AlexNet()
c095b3f67f : Deprecate CNNModelHelper - MLP()
78a6d2f8ba : Fix potential GPU transform OOB access
8ef12951e0 : Fix for protobuf with unicode_literals
7ffd76db51 : check operator schema before calling gradient creator
024afc7b0d : Simplify the implementation of AccuracyOp and Enable top-k in GPU
ca34de8b4e : revert adding extra semicolon
6500d7f307 : Fixing a small bug in schema where the number of default arguments doesn't match the number of fields
be7c336626 : Deprecate CNNModelHelper in python/memonger_test.py
f61ec2495e : nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean)
d605afe8b5 : nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean)
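The nn.EmbeddingBag entries above fuse an embedding lookup with a sum/mean reduction over each "bag" of indices, avoiding materializing the intermediate per-index embeddings. A minimal pure-Python sketch of the mean mode (all names here are illustrative):

```python
def embedding_bag_mean(weight, indices, offsets):
    # weight: embedding table as a list of rows; offsets mark where each
    # bag starts in the flat `indices` list. Lookup and mean are fused.
    bags = []
    bounds = list(offsets) + [len(indices)]
    dim = len(weight[0])
    for start, end in zip(bounds, bounds[1:]):
        rows = [weight[i] for i in indices[start:end]]
        bags.append([sum(r[d] for r in rows) / len(rows) for d in range(dim)])
    return bags

weight = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
# Two bags: indices [0, 1] and [2]
out = embedding_bag_mean(weight, indices=[0, 1, 2], offsets=[0, 2])
assert out == [[0.5, 0.5], [2.0, 2.0]]
```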
909f31764f : Add nn.padding to docs fixes #1127 (#1808)
7bf4c0e0fb : support RNNs in ExtractPredictorNet
2ec294a8bb : Fix a few typos and grammars in comment
ea5819045e : a few comments in build_all.sh (#1807)
46a95cf420 : Allow specifying device to load_from_db()
86594075f7 : Third fix for KeepOnShrink tests
eaacfc7e25 : Fix multi-precision SGD outputs
29a1a916dc : Add support for CUDA9 half semantics
94d42b03fb : MaxReduction ops GPU implementation.
9c53c6dcb9 : Fix errors and warnings when building docs (#1806)
9d916e561c : batch norm docfix (#1804)
c1f974aa9f : Deprecate CNNModelHelper in python/crf.py
4e356528b4 : Add torch.matmul function. (#1780)
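torch.matmul, added above, dispatches on operand dimensionality: dot product for two vectors, matrix multiply for two matrices, and vector operands are treated as rows/columns and squeezed from the result. A sketch of just the shape rules for the low-dimensional cases (not the batched/broadcast cases):

```python
def matmul_output_shape(a, b):
    # a, b are shape tuples. Mirrors matmul's non-batched dispatch:
    if len(a) == 1 and len(b) == 1:
        return ()                      # vector . vector -> scalar
    if len(a) == 2 and len(b) == 2:
        return (a[0], b[1])            # matrix @ matrix
    if len(a) == 2 and len(b) == 1:
        return (a[0],)                 # matrix @ vector, result squeezed
    if len(a) == 1 and len(b) == 2:
        return (b[1],)                 # vector @ matrix, result squeezed
    raise NotImplementedError("batched cases broadcast the leading dims")

assert matmul_output_shape((3,), (3,)) == ()
assert matmul_output_shape((2, 3), (3, 4)) == (2, 4)
assert matmul_output_shape((2, 3), (3,)) == (2,)
assert matmul_output_shape((3,), (3, 4)) == (4,)
```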
9fd354e643 : More accurate build instructions based on @apaszke's comments. (#1800)
d03ffb211c : Remove WORKER_INIT_CALLS
eebda50b79 : Operator python traceback
38b9598685 : Added GLU (gated linear unit)
244af06adc : Added GLU (gated linear unit)
1cf105d517 : Added GLU (gated linear unit)
3ada9da808 : Make csrc -Werror clean. (#1795)
a2521148b4 : TimeObserver for SimpleNet, an example usage of Observers.
d3ec6e8f55 : Run python op builder at op creation time
5a63a6d47f : Better document how to rebuild only parts of the project. (#1796)
3977ee3520 : Support device on sparse tensor constructor, assert values/indices on same device.
c0e7bda3f1 : Enforce storage is not NULL invariant for sparse tensors.
df412051fd : Add comment stating nDenseTensors != nTensors in checkGPU.
7bee03fe1e : Do NOT clone indices/values passed to sparse tensor by default.
865beada0e : Add comment about new implementation being CPU-only.
6a46863c83 : Abort on known bug (#1521) for spcadd on non-coalesced.
d763db59a9 : More efficient nnz test in spcadd.
5d6e593c67 : Test clone preserves uncoalescedness if it wasn't coalesced.
bac408b693 : Add some docs about storage->Size.
2f967a204c : Sparse tensor clone() preserves coalescedness.
1a6995b28c : Short-circuit copy if src and dest are equal.
122dd9e8ec : Short-circuit copy if src and dest are equal.
b877d4b5f8 : Misc fixes for Python 3
795e7e64e8 : add truncation for sparse feature
7c024e93c6 : Implement Cumprod function for autograd (#1439)
b4698d6d1d : add init to __init__.py of torch.nn (#1789)
d9d50f80c7 : Rename arguments to distributed collectives
714351ff39 : Officially enable process-group mode
6f51b4ce2d : Fix deadlock in GlooCache
12813b88f6 : Add DistributedDataParallel
23ab9d481a : Add Module._all_buffers
8db8716c7c : Support non-default streams in NCCL reduce
b37f18be53 : Free GIL when entering THD functions
5a0d5ec058 : Add more checks in torch.distributed
095ddc7d08 : THD updates and bug fixes
86a065e45b : Add end callbacks to the engine
59d438de2e : change function to remove dependence on CUDA 8.0
6626881e7a : Add Alpha Dropout (#1775)
406d748423 : better engineering for core_test.TestInferDevice
0f787a01bc : map operator (move maptrait def out of class)
c7f5bf282b : Revert py::bytes -> std::string
c1420330b2 : Fixes the checkpoint test.
7f1385e70c : Improve gradient accumulation of the framework: 1.5x - 2x
817ae5b5eb : Revert D5211826: [caffe2][PR] Avoid compiler warning about unreachable code
638fe804dc : Implement recover_input_schema_by_prefix
b133c214ce : fix potential bug in task.py
827a0ac2fe : Fix comment mistakes in task.py
49ec984c40 : Ensure warnings are repeated in python2 for tests.
afaad94fed : Rename autograd keepdim tests that now default to True.
4f602a52b5 : Use THPUtils_assert rather than THError in torch/csrc/Module.
3abc8be42c : Clarify use of warn vs raise in expand_utils and don't catch exception in Broadcast plugin when fallback = false.
f4ce99fd87 : Add dist, atan2, lerp to fallback functions.
d5a0f97ea7 : Renamed masked_copy to masked_scatter in test, fix use of break/continue.
e8ec4110f6 : Fix Prod backward for broadcasting.
ffd808768e : Remove raiseErrors from THTensor functions, have THStorage functions take an error_buffer to return a proper error message while being able to handle memory management correctly from calling function.
5b81746767 : Simplify python warning settings and cleanup tests.
d49b73bbe6 : Rename check_fallback to check_backincompat_expand_warn for clarity.
7040b82ede : Change async/broadcast copy arguments to be parsed as ints.
723819014e : Move expand_utils-inl.h to generic/ and generate via macros.
1ef4cc1591 : Incorporate review comments:
deec86cc05 : Clarify a number of comments.
7da46097fe : Fix lint errors.
21d9b0c9dd : Ensure warnings are repeated in test, necessary in python2.
69287250d1 : Add a broadcast parameter to copy_, use it in the library in cases where there is non-broadcasting calls exposed by the tests.
74a23c5aba : Fix test_broadcast for cuda tensors, since map_, map2_ not implemented.
177785eecf : explicit Ptr constructors, fast transposed copy.
ad9604f45a : Add documentation for copy_.
65b23f146e : Add broadcasting support for copy_, simplify code generation by moving a lot of currently generated code to expand_utils.
c54e532954 : Add broadcasting support for map_, map2_.
ec120fac0c : Add broadcasting support for masked_copy, masked_fill.
e06523482a : Use THSize_isSameSizeAs, instead of THTensor_(isSameSizeAs) in order to compare sizes of tensors with different data types.
d6fb92fec9 : Improve in-place broadcasting back compat warning message and fix an issue where the deprecated warning would not be printed.
5e1a714386 : Add backwards incompatibility docs.
be65f46c76 : Add optional warning for backwards incompatible keepdim. Setting torch.utils.backcompat.keepdim.warning.enabled=True will cause Python warnings in the case where the default value of keepdim is used for 1-d reductions.
3556d1b8a3 : Add optional warning for backwards incompatible broadcast.
5af46cb352 : Add broadcasting support for matmul.
a36f95fe26 : Add broadcast support for fused-matmul broadcasting. Functions are: addmm, addbmm, addr, addmv, baddbmm.
cd35091d9b : Include simple broadcasting example and demonstrate lining up trailing dimensions.
3c586d196a : Document Broadcast Plugin.
8e2f347951 : Proof that broadcasting 3 args (expand3) is equivalent to breaking up operation.
d279c6e099 : Docs for addcdiv, addcmul
014372e707 : Support "fused" ops: addcmul/addcdiv.
92fde6cf06 : Breakup in place broadcast to better handle multiple arguments.
b44ea57ba8 : Change order of Broadcast specification.
e96f854ce2 : Implement/test broadcasting semantics for comparison ops.
edf2969bd8 : Backwards compatible Spatial Normalizations / CrossMapLRN.
e653fe2857 : Test fixes for keepdim=False, suppress warnings on backwards-compatible behavior.
70c33777a6 : pow, fmod, remainder also should fallback.
471dfe9791 : Add documentation including links to numpy broadcasting semantics.
85d838a028 : Testing over the following: 1) CPU tensor out-of-place functions 2) CPU tensor in-place functions 3) GPU tensor out-of-place functions 4) GPU tensor in-place functions 5) torch. functions 6) Fallback semantics (use pointwise nElem matching rather than broadcasting)
6a40acb4f0 : Add Broadcast plugin.
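The run of broadcasting commits above implements numpy-style semantics: shapes are aligned on their trailing dimensions, and a dimension is compatible if the sizes match or either is 1. A minimal sketch of the shape rule:

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    # Align trailing dims, padding the shorter shape with 1s on the left;
    # each aligned pair must be equal or contain a 1.
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and x != 1 and y != 1:
            raise ValueError("incompatible shapes %r, %r" % (a, b))
        out.append(max(x, y))
    return tuple(reversed(out))

assert broadcast_shape((5, 1, 3), (4, 3)) == (5, 4, 3)
assert broadcast_shape((3,), (1,)) == (3,)
try:
    broadcast_shape((2,), (3,))
    assert False, "should have raised"
except ValueError:
    pass
```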
9087624634 : Revert "Restore examples with keepdim=True default."
e772a440cb : Revert "Change keepdim default to False."
4102a79da4 : Explicitly set should_stop_blob to False in pipeline init
7d1b042cb2 : fix type
e45c1046fe : Remove raiseErrors from THTensor functions, have THStorage functions take an error_buffer to return a proper error message while being able to handle memory management correctly from calling function.
a563ce1105 : Incorporate review comments:
92d52bf395 : Add broadcasting support for copy_, simplify code generation by moving a lot of currently generated code to expand_utils.
0463ddf16b : Support "fused" ops: addcmul/addcdiv.
9060e6be7f : Remove raiseErrors from THTensor functions, have THStorage functions take an error_buffer to return a proper error message while being able to handle memory management correctly from calling function.
f0b8c4821b : Incorporate review comments:
0f79bf1a69 : Clarify a number of comments.
503002eda7 : Add broadcasting support for copy_, simplify code generation by moving a lot of currently generated code to expand_utils.
cf55e1e48a : Add broadcasting support for masked_copy, masked_fill.
8d35d4215b : Use THSize_isSameSizeAs, instead of THTensor_(isSameSizeAs) in order to compare sizes of tensors with different data types.
9356640453 : Properly clean up expand error cases.
ae6b8d0112 : Include simple broadcasting example and demonstrate lining up trailing dimensions.
ec2f6a81fd : Support "fused" ops: addcmul/addcdiv.
1f9a365fdc : Add Infer Size N, for expansion of fused operations.
d38a87217f : Expand improvements
baa4ba973b : Expand improvements
a24db91a38 : Add SELU activation function (#1769)
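SELU, added above, is an ELU scaled by two fixed constants chosen so that activations self-normalize (Klambauer et al., 2017). A minimal pure-Python sketch:

```python
import math

ALPHA = 1.6732632423543772   # fixed constants from the SELU paper
SCALE = 1.0507009873554805

def selu(x):
    # scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise;
    # saturates at -scale*alpha for very negative inputs
    return SCALE * (x if x > 0 else ALPHA * (math.exp(x) - 1.0))

assert selu(0.0) == 0.0
assert selu(1.0) == SCALE
assert selu(-100.0) >= -SCALE * ALPHA
```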
e3d5826b92 : Add Cumsum double backwards support. (#1758)
ba690d5607 : Add support for NVTX functions. (#1748)
5f1a16a018 : Torch manual seed to seed cuda devices (#1762)
eced95ffe5 : caffe2 video_io.cc bug fix
dcf07a2d7f : Fix typo in ParameterList documentation
e01769ece5 : map operator
c7aa8e142d : Add gradient to SparseToDense op
c822e89956 : Rename SparseToDense layer
7517f050fc : apply clang-tidy modernize-use-override
20382004d6 : apply clang-tidy modernize-use-override
2f385d490b : apply clang-tidy modernize-use-override
072f4dbefc : net_printer_quick_fix
c291c97494 : Add integration test for pos_w
df72826ead : Static RNN
bb9077a6cd : Network forward / backward equality checker
264f75fdd0 : ZeroGradient op
3f4e9ab99c : Add support for group arg to fbcode nnpack conv op
52ee7697f4 : Fixing broken Python tests
10ec905289 : Avoid compiler warning about unreachable code
5c0b22ea03 : Fix observer_test
75f1da327d : Skip Python tests which require opencv or lmdb
49c89d6664 : Use add_dependencies() for ExternalProjects
00e098083e : Fixed thread safety issues in ImageInputOp
27e01744b2 : Probably fixed memonger
feba1eed00 : resnet50: fetch right lr
4fefff0bbb : Auto injecting device copy for single net and several nets
21a5c8ea5e : Fix use of nccl_INCLUDE_DIRS in nccl.cmake
87a12dd355 : Caught exception when fetching uninitialized blobs when collecting blob sizes in workspace.
4316fb4876 : Implement APMeter op
5300aafc1f : Fix NCCL directory typo
ee3727db00 : add_helper_function_ElementwiseLinear_op
77c481c40c : Fixed flaky observerTest.TestNotifyAfterDetach
a9bd1de9e9 : fixed README to reflect docker image name (#1751)
98825d1323 : guard against special case of in-place operation
d81da41650 : Make sure the number of MKL and OpenMP threads match
62835fc3f5 : Make sure the number of MKL and OpenMP threads match
da7957c660 : Fix masked_copy call to masked_scatter. (#1749)
2a49353d5e : minor fix for docs of Upsample
b9ab26765e : Add 3D upsampling (nearest and trilinear) with tests
da45b4c6b3 : Add 3D upsampling (nearest and trilinear) with tests
47bf87b922 : Add 3D upsampling (nearest and trilinear) with tests
edd41d8d80 : BatchNorm fallback to THNN when eps < CUDNN_BN_MIN_EPSILON (#1742)
ced01f6c91 : fix GRUFused signature
d524d5b481 : Fixes zip/izip for Python 3
60c78d6160 : Fixes range/xrange for Python 3
d351239c10 : fix legacy ClassNLLCriterion for upstream change
df7c47142d : fix for THNN NLLLoss signature change
4c5d101caf : Implement ColwiseMax and RowwiseMax reduction ops.
b96f76e470 : standalone macros
4e49aed5ea : fix outputHeight <-> outputWidth
22949350b6 : More performant fix for fused rnn kernels (#1532) and bugfix (#1721)
3f7b48ccda : Remove clone in fused rnn
db620304b2 : More performant fix for fused rnn kernels (#1532) and bugfix for #1721
d7db75c10f : added CosineSimilarity to nn.distance and updated docs (#1672)
89894536c8 : Fix VideoInputOp memory leak
e50d599240 : Fix header inclusion in math.h
93ac6a9837 : checkpointing for distributed hive reader.
7723129d14 : Add gradient for topK op
c9c862fa8f : 16117716 [Caffe2 OSS] make char-rnn example use build_sgd
2ec2d23f88 : booleanmask support indices sorting
c6a6391c38 : added checks to cudnn Convolution for stride, dilation, kernel size and num input planes (#1723)
d50ad408fa : fix incorrect grad_weight in Bilinear
73ccdb3920 : Fixing the issue with incorrect normalized values in IndexLinear
b36d716614 : Implemented a ObserverBase class for Tracing Graph performance.
80fe2e5caf : Fix from_column_list
8cd208ad6f : Infer input and output device from OperatorDef through OperatorSchema
a38cae76ab : benchmark compatible with latest build process
a5fc70857c : Support fetching of the parameters from the global namescope by ''
b6c75c43c8 : add tests for checking the type of .data and .grad.data is the same
a53cde09b5 : Rename masked_copy_ to masked_scatter_
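The rename above reflects what the op actually does: it scatters, in order, elements of a source tensor into the positions of the destination where the mask is set. A pure-Python sketch of that semantics (list-based, for illustration only):

```python
# masked_scatter semantics: consume `source` elements in order, writing
# one into each position of `dest` where `mask` is true.
def masked_scatter(dest, mask, source):
    it = iter(source)
    return [next(it) if m else d for d, m in zip(dest, mask)]

assert masked_scatter([0, 0, 0, 0], [True, False, True, False], [7, 8]) == [7, 0, 8, 0]
```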
98afdcf409 : Accept None values returned from grad hooks
ef32e96447 : Fix grad type of compare functions
b032b88f34 : Fix Prod backward and autograd tests
a76098ac15 : fix optimizer when given single parameters (instead of an iterable)
2ce5875a4d : Modify the sample code of extending autograd (#1720)
511cb20e7d : Add Gesv to autograd (#1733)
686470a6b8 : Feature importance in dper 2.0: build network representation
ebecafbcca : Support for position weighted in distributed PS
5447f5c0d7 : Move position weighted to separate layer
f1c971d04b : add ExpandDims to _known_working_ops
5e6bd4fbfc : Return predict params from ExtractPredictorNet + test
2a93470238 : don't use Swap for param gradients but accumulate directly to correct grad blob
df2f52704c : Another fix for KeepOnShrink tests
e3305eb9dc : Runtime dockerfile (#1732)
e9bf702c5e : LSTM bias_hh, fix docs
9a2d11dd36 : Use a longer timeout when establishing initial tcp connection
8e99824ce7 : Allow subsets of gradient outputs / inputs in Python ops
8871ef029b : quick fix future issue with brew/core/schema/workspace/scope/utils.py
77c1027abb : Create ParameterSharing abstraction for Caffe2.
3716286e6b : reduce the size of Docker image (#1729)
112561bcd4 : Hide loud warning when using third_party eigen
85a95d8a23 : Fix sharing of CUDA tensors on non-current devices
6422ea3d9f : Fix sharing of CUDA tensors on non-current devices
ddf6328990 : Document type function returns type with no args (#1719)
174c3cc399 : Add support for double backward of LeakyReLU (#1714)
24aecaa2c8 : Cleanup torch vision docs (#1699)
4e33aee349 : remove stray code from CUDNN ConvTransposeGradient that caused a memory allocation
4853cc0194 : convert linalg.py to new-style functions (#1638)
ac1c674723 : Fix a couple of selection reduce function autograd bugs (#1702)
705a8fb1b2 : minor modify video_input_op
e05173a476 : Create ExternalInitializer to simplify logic around init_params = False
eba3dc8561 : Fix gc_refs assertion failure (#1705)
a8fb85797c : Refactoring of the parameters step 0. Add simple tags and unify interface for params and computed_params.
3bd6195891 : removed Sum from simple_operator_layers.py; passed unit tests
ee9d4d58e2 : Fix connect bug. Before the change, processes were not waiting for the master even when they got 'connection refused' (the master is not listening yet, so we should wait). This was because we were closing the socket twice: first by the resource guard, and second manually in the exception handler. That caused errno to be set to a different value (9 - bad file descriptor), and as a result the `if` that checked whether the connection was refused was failing.
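The bug described in this commit is reproducible in miniature: a refused connect leaves ECONNREFUSED in the exception's errno, but a second close of the same descriptor fails with EBADF, clobbering what a later errno check would see. A minimal sketch (assuming a Unix-like host where connecting to an unused localhost port is refused):

```python
import errno
import os
import socket

# Find a port with no listener: bind to an ephemeral port, note it, close.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

sock = socket.socket()
try:
    sock.connect(("127.0.0.1", port))   # nothing listens here -> refused
except ConnectionRefusedError as e:
    refused_errno = e.errno             # ECONNREFUSED: the value the retry check wants
sock.close()                            # first close (the "resource guard")
try:
    os.close(sock.fileno())             # second, manual close: fd is gone
except OSError as e:
    clobbered_errno = e.errno           # EBADF (9) overwrites the earlier state

assert refused_errno == errno.ECONNREFUSED
assert clobbered_errno == errno.EBADF
```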
b7c4900d19 : Fix minor bug in InitMethodFile
e22f9036de : Add tcp init method for non-multicast addresses
c01ff1f3dc : Make world_size mandatory for Master and Worker; Minor refactor
eeb8e5c31b : Linux fixes
c6c9e61169 : Implement THD tensor copies
34804e9600 : Refactor file and tcp init methods: add sanity checks; refactor InitMethodFile and TCPInitMethod into more logical functions; update a few error messages; pass parameters by **kwargs so the order of parameters is not relevant; address review comments
c41555fb0a : Add rank parameter; Fix MW mode initialization
96cc1e1ac7 : Review comments
cfdd49f76a : Simplify and refactor init code
447d9287bf : Refactor multicast and change env init method
832eaf900b : Fix bugs and improve init methods
e685277299 : Add address discovery; Bug fixes;
8ea7c87c29 : Improve init methods
09c0d9c51c : Add multiple initialization methods for DataChannels
240384605c : Make copy functions thread safe (#82)
9f9a3d596f : Use lock_guard and don't use unique_ptr
a8c26c1040 : Add mutexes to MasterCommandChannel::sendMessage
6cdfe0d7b9 : Remove MASTER_ADDR and _PORT from MPI benchmarking
1b66b50064 : Benchmarks: Don't export WORLD_SIZE when using MPI
cf42c1a044 : Improve error messages of DataChannel::newChannel
f717f29d7e : Change function names; Change thpp::Tensor to THDTensorDescriptor
181d2f41bd : Add initial Python wrappers for THDTensors
2059ece284 : Exit workers gracefully in master-worker mode
b3e100b40e : Add copy (TH <-> THD) functions to MW mode
401908d570 : add_weight_decay + restore weight decay to resnet50_trainer
398379db68 : fixing lint errors in image_input_op
a2ba169354 : fixed operators schema output to work from only this file for OSS
ec2de16776 : Improve README copyediting
ea05d6aec3 : Fix compilation with cuDNN 5 (#1703)
5a93d6b903 : Fix CUDA_HOME detection (#1675)
75e0df271a : Add Inverse to autograd (#1670)
565bf7116b : A pile of misc doc fixes. (#1682)
f1c57ace1b : added input dim checks to convxD and conv_transposedxd (#1695)
460b8715a8 : display version number in docs
568c5c91ee : substitute cudnnFind* functions with cudnnFind*Ex
00843c57c9 : substitute cudnnFind* functions with cudnnFind*Ex
501467db17 : added param name to tuple_parser for better error messages
4bed0c6d41 : Update RNN Seq2SeqModelCaffe2EnsembleDecoder to reflect training network structure
55ada6d64e : Fix padding params check for conv-cudnn.
b3e179ea31 : fixing lmdb.cc when compiled on Windows (mkdir -> _mkdir)
2c97c98ca7 : Enable testing the GPU implementations of Adagrad and Adam
fc4d118e6b : Caffe2 MemNN Production Model Saving
299f293cb2 : Add initializer classes to conv_nd.
05e060974f : added events and user group info
58874ad5bf : Fp16 training initializers
d51cd61e2e : add checks for input, weight and bias types when using cudnn conv2d (#1689)
447fe953e5 : Modify the sample code of volatile (#1694)
ffbba0fae7 : add model_helper Validate() + sprinkle around
0f8c8f37a8 : Revert D5159712: [caffe2][PR] Fp16 training initializers
076376f4f6 : Revert D5119830: [C2] Refactoring of the parameters step 0. Add simple tags and unify interface for params and computed_params
ff61ed358e : Refactoring of the parameters step 0. Add simple tags and unify interface for params and computed_params
7c3add4408 : better android ndk path
d8d1cd1064 : Test smaller tensors in segment_ops_test
e2cf007dc8 : Avoid numpy VisibleDeprecationWarning in test
7b5af7d1b7 : Expand ibverbs read timeout messages
4da9e92d3f : MPIConstantFill -> ConstantFill
2bfacff426 : Fp16 training initializers
1740f90821 : disable appveyor for cuda for now due to out of time error
680a00e99a : MPIConstantFill -> ConstantFill
f0f4c2fc5d : Increase the number of DAG execution worker threads.
73a8a49c7e : synchronize re-rendezvousing on node changes + support num_shards=1 rendezvous
72ea177188 : Add target for quick build+test
f0795c15a4 : Disable stacktrace on fatal signal by default
afc26ac675 : Added time-out to ibverbs transport
f2d9d97008 : Add an option to reset momentum-sgd params every time between successive block updates.
ccdf2d99e1 : Add description to assert in model_helper
c344880373 : add automatic timing of parameter update phase
ce7ce46ca1 : fix secondary device check by gradient, if it is sparse
96d8ae2163 : Make fills work with input_shape when run in CUDAContext
846240a340 : Caffe2 gradient generator bug fix
6f791e74f1 : Add a minimum iteration count of 1 for benchmarks
aa59b217a9 : Relax requirement on the outputs of the predictor.
1aa6300696 : Option to use NCCL for broadcast
47e921ba49 : Remove map() and filter() in favor of comprehensions
3106423713 : Synchronize with H2D copyAsync before signalling the broadcast sender
0deec5b3b7 : Add FLOP annotation functions to operator schema
acb2ad12e5 : fix race condition at terminate
f6853d13df : always use halving-doubling allreduce algorithm
cdb50fbf2b : add optimizer support to data_parallel_model; Use MomentumSGDUpdate
0a9684c3b9 : Mark in-place GPU dropout as broken, add test
44257ea5ed : automatically infer device scope for param
6b1cf26380 : Fix for dpm when GPUs don't have p2p access
a47652379f : Fix SparseAdagrad for indices.ndim>1
df2bd158db : Optional force conv algorithms
16b240145a : Fixing some tests
dc517b6c42 : Change hypothesis settings for slow memonger test
2c3071fc4e : Rework initializers to pass a class not object
4eb448a051 : Fix simple typo
660dd58022 : fix for realtime training.
6aff754dbc : Add batch normalization layer
ec19b4bd7b : Import fixes for Python 3
3ccbf23132 : String-related fixes for Python 3
7f98dc28cb : Refactored spatial softmax
78c1415012 : Use unwind functions instead of backtrace to attempt to be more portable
b266c52b51 : Create signal failure blobs in constructor, avoid race condition
065c59860a : Fix docs: masked_fill_ takes a value, not a tensor. (#1663)
75a6f909c5 : Add option to enable memonger for gradients and add param_names for save_model.
45f665d05c : Fix decodeUInt64BE
35eaf444c0 : Quickly hack sparsenn_benchmarks to also do BenchmarkNet
d60a2e3c58 : UnsortedSegmentSum/Mean for CUDA
97159810c9 : Restore compatibility with protobuf2
016f72537a : ModelHelper.create_param, Initializer abstraction and ParameterInfo for optimizers
6c12df3003 : Fix export of SparseToDense layer.
9bf1f16255 : Add bias to cosine distance for two tower models
2002018603 : memory_leak_data_worker
64faf120ac : Adding support for ADD_TORCH_LIBRARY macro
0b74f0d796 : lua 5.3 changes and gcc constants
c6591fa59b : Add asan no sig tests, move fatal sig tests there
8074180081 : Faulty error message for InstanceNorm1d (#1609)
7b578dd68e : Add scatterAdd
3f1f3f9734 : Add scatterAdd
bd705d38ce : Add scatterAdd
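The scatterAdd commits above add accumulate-on-collision scatter. A pure-Python sketch of the semantics (1D, illustrative names): values from `src` are added into `dest` at the positions named by `index`, and repeated indices sum rather than overwrite.

```python
# scatter-add semantics: accumulate src[i] into dest[index[i]];
# duplicate indices add up instead of racing/overwriting.
def scatter_add(dest, index, src):
    out = list(dest)
    for i, v in zip(index, src):
        out[i] += v
    return out

assert scatter_add([0, 0, 0], [0, 2, 0], [1, 2, 3]) == [4, 0, 2]
```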
3ff54ffa8f : Fix KeepOnShrink tests
a9b5efe3c2 : Expose max collective concurrency
630af4d7d8 : add learning rate schedulers (#1370)
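The commit above adds learning rate schedulers. As a hypothetical sketch of the simplest variety (a step scheduler that decays the LR by `gamma` every `step_size` epochs; function and parameter names here are illustrative, not the added API):

```python
# Step LR schedule: lr = base_lr * gamma^(epoch // step_size)
def step_lr(base_lr, epoch, step_size=2, gamma=0.1):
    return base_lr * (gamma ** (epoch // step_size))

lrs = [step_lr(0.1, e) for e in range(5)]
assert abs(lrs[0] - 0.1) < 1e-12    # epochs 0-1: base LR
assert abs(lrs[2] - 0.01) < 1e-12   # epochs 2-3: decayed once
assert abs(lrs[4] - 0.001) < 1e-12  # epochs 4-5: decayed twice
```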
cf078840d4 : Update gloo dependency
a5f44ed265 : Fix number of indices and block_size in SparseAdam
c39d48ea7d : Fast transposed copy
3abe5c80d2 : Fast transposed copy
05bc877a05 : make THPPointer have explicit constructors (#1636)
7ea9d9af4e : Fix build when included by another project; take 2
e35a4fe5cc : Implement SizeOp as requested in github issue#583
d9896c43a7 : improve cudnn conv type error msg
6a7c56499c : How to manage multiple build trees of PyTorch. (#1654)
46ee1e4687 : Clarify definition of gather function in docs. (#1652)
e63b49d9ab : Fix build when included by another project
55d293f730 : remove non-existing blobs from output_schema in layer_model_instantiator
da6b82b810 : fix another bug related to in-place ops --> treat in-place ops like any other
33c40e8a6e : Handling shared indices in sparse gradient updates
036c3f93af : Check for released variables in SavedVariable::unpack() (#1648)
4f261f5730 : Add support for fast float16 reductions using AVX
f2303ccb77 : fix tileop test
98581b9f7e : Fix conv1d segfault when weight doesn't require grad (#1646)
9a497f824b : Add size/dimensionality documentation for torch.gather. (#1645)
457720459d : Change AllreduceOp and BroadcastOp to allow initializing gloo algorithms to take float16 inputs
1e63a04a18 : Use clear-to-send notification for broadcast algorithms
a94fc625f5 : make random generator more flexible in context.h
e54112758c : Fix potential vector out of range issue in ContextFactory::makeContext
567842e68d : Check system dependencies first
e1d257bc6d : Fix segfault in autograd: (#1644)
2aaac493d4 : Fix cudnn version error formatting
e03e14a71e : Clean up binary build cmake script
3d38e4f126 : Acquire GIL before THPVariable_wrap (#1625)
4da076d3e9 : Fixed typo in caffe_translator.py, fixes bug #397
c79ce5c2ba : Profiles pipe stages.
fa93653d09 : Improve handling of graph roots in autograd engine (#1635)
152d439400 : Allow specifying net type in predictor_exporter
03503140fd : DropoutCell as wrapper for another RNNCell
c55be38e63 : Added mobile exporter
db1d62caf7 : Move RunPlan to a separate file
c39f6cf2d0 : gradient accumulation fix
3fe8abb492 : fixed gflags 2.2.0 error and image_input_op.h
bf6f630888 : bug fix in CMakeLists.txt (CAFFE2_CPU_FLAGS and CAFFE2_WHITELIST)
b5a215db0a : Added python-pip and python-numpy into build_raspbian.sh
43be6456e2 : UNUSED_VARIABLE VS compile fail fix
ff047fdeef : Fix the mix-up of height and width on depth-wise convolution
6c511f64cc : fix handling of ops with in-place input/output
2486a6bbd0 : Add missing header file types.h in CMakeLists.txt
640846b864 : Fix race in ibverbs transport
77b38b915e : Checks performance regression for resnet50.
64e04e78d2 : Remove AddOperator from ModelHelper
ba56de1150 : add coding UTF-8 declaration
6e3e453ad2 : Tidy up convs docs (#1602)
2b11adb414 : TileOp CUDA fix: number of threads must be hard coded
f5d919a685 : Generate config.h file with compilation options
02e4ca9cab : fix wrapper
70a774898e : Remove superfluous forward declaration
74e964ff0d : make data_workers restartable
49befe3fcd : Remove commPairs_ member variable from halving/doubling
7eac2073b8 : Add notification mechanism to ContextFactory
356c19319f : Change repo from bwasti to caffe2.
45524ec33c : Fix indices bug in MM.py (#1613) (#1617)
1d8e93536c : better TileOp/Gradient CUDA implementation
5a7f67bd41 : Add stack traces on fatal signals
193c9289f0 : Fix LRN schema for cuDNN op
f072c74dfd : make it effective to transfer a tensor from other devices to device 0 (#1610)
107a0fe9ac : Revert "Revert "ClassNLLCriterion supports missing targets""
2acfb2376a : fixes eval mode in InstanceNorm (#1604)
0c5598c668 : Update build status matrix
37834b1343 : Change video_input_op to output label in int32 instead of float
92610e78bb : CuDNN comparison mode
feaee29bfe : Add argmax and argmin to docs
a2c01e830b : fix duplicate init blob issue + fix test
aa603a9083 : add test for input order
6384bae29b : call save_to_db in CPUContext + fix a typo in data_parallel_model.
83f6dceaa6 : remove forget_bias as argument to AttentionCell constructor
c69ab3d3ad : Fix open source build with ffmpeg
09bbd0382c : ConvNd cuDNN
b5721c2d9d : Throw timeout exception from StoreHandler::wait() and catch in CreateCommonWorldOp
0af0cba2b7 : Refactor data_parallel_model initial sync and checkpointing
0aeffa985e : make sure mutex is on CPU too
65750349ba : deprecate CNNModelHelper in python/operator_test dir
7ce5d0765b : GivenTensorIntFill on CUDA
32bf7a2c2b : Generalize PoolingOp(cuDNN) to compute 2D and 3D pooling.
7f6cd7c7ea : Fix error message in CUDA forked subprocess (#1585)
1b7497807f : cnnmodelhelper deprecate warning
625850c2c2 : Check cuDNN version at runtime (#1586)
9b3447761a : Check for required non-None arguments in C++ autograd functions (#1589)
ed679fc43c : disabling fd leakchecker test (#1593)
e6c9509a41 : Fix call to Tensor.set_ in rnn.py (#1592)
c57f0530e7 : set long_args to False for param "size" of set_ (#1568)
8021bb938c : Remove slot number limitation from ibverbs transport
1f4317be3f : Add support for half-precision floating point operations
77f539174c : Update fp16 NCCL ops
cba46a4869 : Assert that we don't do out of bound writes on recv
b391f53681 : Cache send/recv buffers in ContextFactory
307459eb62 : Fix conv_test for CUDNN dilated convolution in NHWC
9386bc7ca8 : Improve elementwise comparison docs
b61378b4b6 : vectorized version of lstm_unit
85f1d947dd : Vectorize SigmoidOp on CPU
12edbcb154 : Implemented L1Distance Operator for CUDA
85732b52ec : fix cuda multiple algorithm test
156fe28666 : dataloader can now handle growing datasets (#1575)
2f4bf4ab39 : Rewrite 'How autograd encodes the history' to accurately describe current setup. (#1580)
1f3ff5ced2 : Miscellaneous documentation around autograd. (#1577)
b8b7f879c2 : .gitignore updated with editor temporaries (#1574)
1f5cd3582c : Add contrib/gloo/common.cc to Caffe2_CPU_SRCS
a0b83464e4 : fix bad conversion to float in cpu_half2float
7b10b16496 : Move ibverbs buffer send logic to pair.cc
da86633c7c : Additional synchronization in halving/doubling
bbd7aee9ab : Revert D4952993: [Caffe2] fix mkl_sparse and migrate sparsity experiments
c573d53939 : Bug fixes (#1573)
f27c9eea20 : dropout for C2 multilayer
f555c6308c : Fix NormalizeOp gradient numerical stability
658c337f41 : Error status for Gloo ops, and handling in elastic dpm
5ced84856a : Caffe2: SparseToDenseMask: return key presence
f359d70ae7 : fix mkl_sparse and migrate sparsity experiments
37c06a3ba8 : residual connections in multilayer C2 ('add' only)
a28b01c155 : rnn with brew
310f505da7 : Remove application-specific comment.
769e668faf : ttsn model fails to set optimizer for FC layer
cb79c24d0b : Added powerpc64le support (#1572)
64d43dbb6e : new resnet building with brew
af0a412e83 : alternating workspace for forward only
caa1cdf0ce : ClassNLLCriterion ignoreIndex
25fd005dd9 : Initial implementation of Blockwise Model Update Filtering (BMUF)
57054bd52f : use remapped name for param_grads, to enable memonger
368ecb47f9 : Fix flaky test_sparse_adagrad (#1562)
e394b60a9c : Support un-equal weight training for mtml models
ad37840329 : fixed document generator for github
6107d15d14 : Twice differentiability of pointwise functions (#1531)
ba885a1a51 : expose bitwise operators from C/CUDA (#1556)
7afd78d77f : Cuda reduce in a consistent direction
6b84dc26f0 : Add F.cosine_similarity (#1502)
0f458ee3c4 : Fix memory leak in THCSTensor_spcadd. (#1519)
8aa011f52a : minor typo and style changes to _torch_docs.py (#1559)
2a610c9d13 : Revert "Update to ignore zero targets"
ac8b2c0fa3 : Revert "ClassNLLCriterion supports missing targets"
0ba20435ce : Add high order grad support for Some operator (#1507)
6fc9130052 : Adapt documentation to reflect new supported argument (#1548)
28f4f6db2c : typo error for torch.addr (#1547)
9b2de027be : SpatialDepthWiseConvolution.cu added
bf4345e2ef : ClassNLLCriterion supports missing targets
3eeca5b5e0 : arg scope in ModelHelper
029290c5b1 : SpatialDepthWiseConvolution
9db7787316 : Updating __getitem__ and __len__ for containers (#1544)
5989deb707 : adding 3d operator translators
b070197e8a : cuda unique op
942f53b5a6 : gradient impact of task layers on shared is configurable
16de9746bb : Fix a bug in 3D SpatialBatchNorm[CPU] gradient and improve the code.
efa913b1c2 : fix uninitialized variable in cmake FindSSE (#1023)
93f1d0ca7c : L1 Operator
d1a4467682 : fix a bug when calling modules
0a25b9cb50 : fix android build
507ddc4cde : Temporary fix for multiple backwards with fused pointwise RNN (#1540)
aba05ce9db : Ensuring float tensors call float versions of math functions
8df51a84ac : Support 3D&1D SpatialBatchNorm[CPU]
be843eb26b : Add unfold to autograd (#1523)
a23b378052 : set cuda stream for cub::DeviceReduce in SumReduceLike
e16ea46013 : Extended ImageInputOp
e8c274cf16 : Optimize memory usage for MI-LSTM
967a0ebef4 : Revert D5027046: [Caffe2/RNN/opsify] apply links ops
4fa6ee8219 : clean up code for selecting allreduce algorithm
362cc296ad : apply links ops
11bcdbc3f0 : Load Parameters from Model
20ae447ce4 : Instead of switching workspaces, create explicitly shared blobs
c70405271b : opsify parameter gradient accumulation
5bb13485b8 : Fix Linear function
a86adf43a1 : Fix comparison functions
1c304a9ef6 : Expose variable attribute of AccumulateGrad
feef54ec34 : Don't modify non-volatile grads in zero_grad
5026209d0c : Minor fix in Prod backward
e7220380bc : Add new flags to Variable.backward
9fa0e403d6 : Replace retain_variables with retain_graph
35cf380ed1 : Improve output wrapping logic in autograd
3a7e068439 : Remove spurious memo argument in Module.parameters() (#1527)
a66f02b223 : Make dataset ops handle empty tensor better
3abd0cb623 : Add axis argument to SoftmaxWithLoss
75bc9f5e77 : Relax requirement on token uniqueness
48de1ea165 : Drop extra Reshape in attention calculation
d5e821044a : Make torch.cat not synchronize the host and device
8e3ce4bae7 : RNN: reduce verbosity of "Use
bfc8a3ebba : Reference counting documentation. (#1520)
6fab62173e : Restore examples with keepdim=True default.
c4742fd128 : Explicitly pass keepdim=False for tests that require it.
e124790cb2 : Change keepdim default to False.
171638a451 : Fix test_normalize NN test.
d95f711501 : Add a keepdim test to torch_test.
b9e00dfbb8 : Make (non-legacy) nn backwards compatible.
f6a00fac13 : Add autograd tests for keepdim
be5191a00b : Add documentation for keepdim.
c9d8e0a43a : Change all legacy/nn modules to use keepdim=True (even if tests don't fail).
ae2b2cbbec : Make keepdim work with autograd.
ae924be3ac : Removing extra Reshapes in MILSTM with new broadcasted ops
7c3cb24485 : Add a keepdim parameter for reduction functions over a single dimension.
add840510f : Refactor Optimizer to Allow scale_learning_rate
20d8de8d51 : Parameter cost estimation job
af790f86f3 : Add a keepdim parameter for reduction functions over a single dimension.
906c550e10 : Add a keepdim parameter for reduction functions over a single dimension.
5f308b50fb : Add a keepdim parameter for reduction functions over a single dimension.
98dbdc464b : Add a keepdim parameter for reduction functions over a single dimension.
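The keepdim commits above change reduction semantics: with keepdim=False (the new default) the reduced dimension is removed, while keepdim=True keeps it as size 1 so the result still aligns with the input for broadcasting. A pure-Python sketch of the two behaviors (toy matrix, illustrative function name):

```python
# Reduce over dim 1 of a 2D list-of-lists, with the two keepdim behaviors:
# keepdim=False drops the reduced dimension, keepdim=True keeps it as size 1.
def sum_dim1(matrix, keepdim=False):
    sums = [sum(row) for row in matrix]
    return [[s] for s in sums] if keepdim else sums

x = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]
assert sum_dim1(x) == [3.0, 12.0]                    # shape (2,)
assert sum_dim1(x, keepdim=True) == [[3.0], [12.0]]  # shape (2, 1)
```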
bd8ed6641c : Stabilize PythonOp token name
d48795e699 : Use non-local include syntax
e44bc88c2e : Remove command "touch cmake".
65cf2f0117 : fix compile error when building on mac environment
33b3968660 : add larger tests for qr
91a118c116 : Fix bug in magma qr decomposition and add tests for larger matrices
218ea722fb : Don't use sync mode by default
1d0ba2cfbd : New cudnn ops
d0504aa41d : Implement lgamma function.
008a8c9720 : Implement lgamma function.
105df5844d : Implement lgamma function.
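The lgamma commits above implement the log-gamma function. Its defining property for positive integers, shown here via Python's own `math.lgamma` for illustration: lgamma(n) = log((n-1)!).

```python
import math

# lgamma(n) = log(gamma(n)) = log((n-1)!) for positive integer n.
assert abs(math.lgamma(5.0) - math.log(24.0)) < 1e-9  # gamma(5) = 4! = 24
assert abs(math.lgamma(1.0)) < 1e-12                  # gamma(1) = 1
```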
066fbcd014 : use current stream in cat array kernel launch
22bbd7ac33 : s/IndexType/long
2075abbe30 : Gloo: Added a way to create connected contexts from another context
11052d03aa : RNNCell API change: returns states and outputs
e694db0eeb : Raise error when Variable is converted to bool. Fixes #1482. (#1491)
c5ae79fe4e : Make clamp twice differentiable (#1514)
b6a8dd1438 : don't recompute small blob in attention
0892a1428b : Add size assertions to SparseAdam/Adagrad
0cb7774445 : softplus op
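The softplus op added above computes log(1 + exp(x)), a smooth approximation of ReLU. A minimal sketch (naive formulation; a production kernel would also guard against overflow for large x):

```python
import math

# softplus(x) = log(1 + exp(x)); log1p keeps precision for small exp(x).
def softplus(x):
    return math.log1p(math.exp(x))

assert abs(softplus(0.0) - math.log(2.0)) < 1e-12  # softplus(0) = log 2
assert softplus(10.0) > 10.0                       # approaches identity for large x
```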
4ad2e155bc : Make nn.Sequential more pythonic (#1510)
12965a4108 : Add Poorman's IOBound ThreadPool for serialization.
8a7f00d61b : fix mean pooling
ac1c63dda8 : Add specialized ResizeNearest implementation for scale=2
6d693fe413 : Add F.normalize (#1467)
23b556ef77 : Expose custom attributes from C++ functions (#1430)
e3f41a4962 : Add high order gradient support for Sigmoid (#1496)
98cf176baa : improve style + a bit of perf for ScatterWeightedSum CUDA
90e9f8a476 : Avoid segfault when calling join_with with self as arg (#1493)
5f15a9e0cb : Add a note about THPFunction_asFunction
8f692b5642 : declare UpdateTimestepBlob inline
711ea1d4ac : fix external_inputs handling in AppendNet v2
033ab9da1b : Adding video data layer for caffe2
a61778a628 : fix recompute_blobs_on_backward
f2392bb8cb : Fix Split documentation
5c667ebe4e : AttentionCell
d7f20c94fd : Optimize memory for RNN attention
0c6099ce25 : Add __dir__ so autocomplete in iPython works.
8a2433eacb : Add model saving and loading to resnet50_trainer.py
5c52392229 : opsify AccumulateInputGradients
aa5e771042 : Added tiles and axis as input parameters to Tile Operator
0d32ab4a45 : Refactor FTRL optimizer to allow sending Alpha as input blob
211eae127c : LastNWindowCollector
b229b7ff11 : Fix typo 'convlution'
d312dcc881 : lstm_benchmark use rnn_cell.LSTM multicell + assertion
c34d5a838f : Generalize LastNWindowCollector
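The LastNWindowCollector commits above add an op that retains only the most recent N samples seen, discarding older ones. The behavior is that of a bounded sliding window, sketched here with the standard library's ring buffer:

```python
from collections import deque

# Bounded sliding window: only the last N appended samples survive.
window = deque(maxlen=3)
for sample in [1, 2, 3, 4, 5]:
    window.append(sample)

assert list(window) == [3, 4, 5]
```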
4662781099 : Include hint about run ID in store handler assertion
348e0af6e1 : Remove unused binary fb_run_plan_mpi
ff0ff33a11 : Fix docs for InstanceNorm (#1477)
eb2c6ea874 : set deviceId_ to -1 when CudaDevicePointer and CudaStream do not have valid data
e64b2e1cd7 : add documentation for cwrap plugins (#1474)
dbe7654062 : Always use halving/doubling allreduce
004c740b6d : Update gloo dependency
395a80f757 : Check GCC version if compiling with CUDA support
c8f444237f : net_drawer: --input is required
f220282ddd : Set optimal number of DAG workers for predictor beam search step-net
7d40140bfb : Document squeeze behavior on 1-dimensional tensors of size 1. (#1470)
e50c7daaf9 : Use Qr factorization to get orthogonal matrix in orthogonal init (#1453)
a6876a4783 : fix corner-case in MaxPooling
4e18d89791 : added twice differentiation for a bunch of ops (#1426)
c061ed5bda : handle beta=0 for gemv with transpose
e9d648c5e7 : Fix memory leak introduced by 72e8190 (#1464)
57e51bd72a : make all tensor.h enforces pass the caller
47c1418816 : Add caffe2 operators to mobile: Log, StumpFunc, Div, Sub
80c0a8776b : Fix #1447: sparse_mask doesn't make sense with uncoalesced tensors (#1458)
4ec0435b39 : Report overall size of sparse tensors. (#1461)
1a831ce8f2 : Add direct enqueuing to enable RNN input, allow specify batch columns
f8be3a20d3 : Fix scatter_ documentation typo. (#1463)
7b21b0b6d7 : Retry on write EINTR in sync mode
16821bc45d : Add ScatterWeightedSum for GPU.
0910e0ac90 : Fix memory leak in coalesce. (#1460)
93094294ba : function backward attempted to multiply tuple by variables (#1459)
ddc4d101ad : MultiRNNCell (Caffe2)
ff1330192c : auto -> return type for C++11 support
743e4894d2 : Prefix values/indices/sparse_mask/nnz with underscore (#1457)
f273377d19 : add device asserts in scatter/gather kernels
f1591fade5 : add device asserts in scatter/gather kernels
2e7635b929 : Add flexible bilinear upsampling aspect ratio redux (#1317)
e9953c4595 : A number of post-merge fixes for test_sparse (#1444)
72e8190994 : Use at most one shared_ptr block at a time to manage THPFunctions (#1454)
e1278d4ee2 : Fix typo in autograd docs
aadad971e4 : Fix pybind11 module name for MPI helpers
3ca0de25da : Prevent false overwriting of a field
31643d5ecb : Inference code for seq2seq model
3504e1d836 : cuda (sparse) lengths sum
379ac514b8 : lstm_benchmark: add warm-up stage, support layers
22d4eaeb9e : JoinContext
66bd200de0 : bug fix - add previous slot offset to calculated slot value in halving-doubling algorithms
282298dd1c : Data parallel model: Disable NCCL by default to hopefully reduce deadlocks
ee7b3c9b2b : caffe2: rebatching queue for MultiTask
222b781f76 : Ensure sparse_gradients feed to CPU
574cfe3cf3 : Improve kthvalue documentation. (#1448)
699755e04f : Convert contiguous() call in adagrad to out-of-place coalesce. (#1446)
fb07914c0c : Recommendations for workflow when modifying C files. (#1443)
aa2ee86375 : pytorch/thpp ~= facebook/thpp (#1445)
ecd51f8510 : docs fixes
e8e36945cf : make debug message more explicit & verbose
5aa1f769d3 : Fix torch.dist documentation: function returns a float. (#1440)
1f3c7f8080 : Handle net.external_inputs correctly in AppendNet
da338ca821 : Fix Caffe2 LoadOp docs
e8e93066e7 : add workflow for user complicated embedding
eecc807a75 : Keep track of number of in-flight send operations
a458aa4b2a : Fix tags to be based on EXCLUDE_FROM_{CONTEXT}
5386012164 : Check return value of ibv_reg_mr for error
4bf813e068 : Document cdata non-NULL invariant, and consequence Python side. (#1435)
3b4bc721ef : fix osx build and suppress clang warnings (#1432)
7d6d67119f : Allow LayerModelHelper to keep input blobs from schema
58bc830660 : Integrate CRF in DeepText + New caffe2 operator for viterbi decode
38d3bfa5d4 : Warn on setting blob on Scalar
c86610b738 : special executor class for RecurrentNetworks (just single threaded now)
dca208b525 : Refactor test_sparse to reduce boilerplate. (#1421)
181cb15c72 : Fix formatting error in docs.
7df8fbb64f : Generalize halving-doubling to support non-power-of-two cases using binary blocks algorithm
f0dd96c116 : brew fc test fix for packed_fc
5c7453447f : Fix bugs, rename differentiate to grad, make it more flexible
87164f554d : Bug fixes
267e7c0431 : Fix memory issues with Conv and BatchNorm
e5db8f98be : Add torch.autograd.differentiate
20aa5b066f : Convert some of the functions to new format
de9998e198 : Add support for the new Function format
702a2e3bc5 : Make Variables not subclass Function anymore
2ca787fcf4 : Refactor attribute names in autograd
2ec629bef9 : Set SO_REUSEADDR to try and prevent bind errors
2197e4c766 : version bump
2a28283680 : Fix pair destructor if in CONNECTING state
ffc6bad116 : Concat axis=0
1040b5f91c : Enable bitcode for iOS builds
561255218a : NormalizeOp CUDA implementation
4624278b1d : Make sparse documentation title consistent with others. (#1420)
79d4ac670c : Add map_location to load_url (#1418)
4ebf3ff46d : Add base for CUDA allReduce and broadcast in DataChannelGloo
ac3ba9a2ad : Rebase fixes
14e1bfddbc : Change warning message in MPI
c19fbd3364 : Update comments; Add inline accessors for value_type tuple in GlooCache
a17d96d571 : Add multiple thread support for DataChannels. Previously, when using the same data channel in a multi-threaded environment, there was no guarantee against deadlocks or even errors.
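A later commit in this series adds mutexes to MasterCommandChannel::sendMessage; the core idea is to serialize writes to the shared channel so multi-part messages from different threads cannot interleave. A minimal Python sketch of that pattern (class and attribute names are illustrative, not the THD API; a list stands in for the socket byte stream):

```python
import threading

class CommandChannel:
    def __init__(self):
        self._lock = threading.Lock()
        self._wire = []  # stands in for the socket byte stream

    def send_message(self, header, payload):
        # Without the lock, another thread's header could land between
        # this header and its payload on the wire.
        with self._lock:
            self._wire.append(header)
            self._wire.append(payload)

chan = CommandChannel()
threads = [threading.Thread(target=chan.send_message, args=(f"H{i}", f"P{i}"))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every header is immediately followed by its own payload.
for i in range(0, len(chan._wire), 2):
    assert chan._wire[i][1:] == chan._wire[i + 1][1:]
```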
b7dcc29430 : Forward declare GlooCache key_type
18b4dcd28b : Remove unused variable in macro
be81304d27 : Moved GlooCache to new file; Functions renames; Minor fixes
f07f13c6e9 : Change Store exception handling
310d08c37b : Fix store and all operations
234df2138a : Fix compilation errors
2b340e7d50 : Add python tests; Remove broken prefix store creation
6888c61fa8 : Fix DataChannelGloo compilation
ba3328b365 : Add DataChannelGloo tests
3b4fe5dfc4 : Add isend/irecv; Add all types generator for template functions; Minor refactor
ce42761628 : Add groups
df4791d6c0 : Implement DataChannelGloo
7e8830c3d5 : Initial gloo bindings
b91cec7f66 : Fix THD library build for CUDA
765aeb1a08 : Fix nonzero bug
280e2a94e5 : Worker init clarification; Inform on error thread notification failure
e7f453b5de : Add barrier to test; Minor changes;
8030aa0f1b : Refactor error thread
40ad2cde62 : Remove unnecessary nonzeroElems function
af4a978c44 : Move error thread to CommandChannel; Minor fixes;
fe5fc6723f : Remove unnecessary code
6e6179633b : Minor fixes in `THDMasterWorkerInit`
c97e60c45d : Add actual error reporting in Master
2cdb368f97 : Add error handling in MasterWorker mode
a5b2f3461a : Review fixes
d3e60599d2 : Add benchmark scripts (#66)
98d8e0b040 : Lapack functions implementation #2 + fixes after review
fe2c360eda : Lapack function implementation #1
59ae109bbb : Implement functions from set 1 (except Lapack)
8623076654 : Add `convertToRank` to do bound checking
a362b4f367 : Add support for `unsigned char` aka `byte` to MPI
ef724e355c : Change rank type: int -> std::uint32_t; Minor fixes
e863d27393 : Tweaks, fixes, cleanup in DataChannelTCP
4c388f9398 : Revert structure changes; Minor fixes
6740d1d904 : Rewrite CommandChannel
f891d9b1bf : Don't build tests by default
a81f330854 : Rename `construct` -> `new`; Minor fixes
c02241edbd : Minor code refactor
f30a92fa17 : Fix invalid socket initialization
1391ff99f4 : Use TCP_NODELAY for data sockets
43019bd88a : Always loop over all possible addresses in worker
d6380910f5 : Removed unnecessary code; Minor fixes
04491e84e4 : Fix build with CUDA
e247249a5f : Implement TH_API functions from the set 4
2c59f017e6 : Port Xray OC workflow to elastic_data_parallel_model
0160438eb9 : added logical not operator for ByteTensor (#1403)
7dd8571bc6 : fix avg_pool docs in nn.functional
48a7869b23 : Doc fixes (#1409)
582fd3db7d : fix osx build
9169f60a84 : Parallelize TensorMethods.cpp builds (#1400)
457d78a7d9 : Use THCUNN backward kernels for Tanh and Sigmoid in Autograd (#1399)
a071ccbea6 : fix NCCL makefile for CUDA 7.5 (#1401)
db1eb66456 : corrected docstring for Dropout (#1404)
6e1333fe92 : CUDA operators for DotProduct and DotProductGradient
d223d71703 : Add shape inference function for RoiPool.
45020a74cd : remove inplace pow and fix contiguous -> coalesce (#1398)
9c01f5d6b2 : Document hybrid sparse tensors.
2b0dbad3df : Support fp16 output from ImageInputOp
cbb9f08b71 : Add new init methods gain, eye and dirac (#1172)
f75ab857b8 : Add safeCoalesce() to tests
f2903332c7 : Make coalesce() out of place
9643be76f9 : speed up accumulation
4f09461d24 : Rename sparse tensor contiguous() to coalesce()
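The rename above clarifies what the operation does to a COO sparse tensor: duplicate index entries are merged by summing their values, yielding a canonical representation. A pure-Python sketch (flat index/value lists, illustrative only):

```python
from collections import defaultdict

# Coalesce a COO representation: merge duplicate indices by summing values,
# then return indices in sorted (canonical) order.
def coalesce(indices, values):
    acc = defaultdict(float)
    for idx, val in zip(indices, values):
        acc[idx] += val
    merged = sorted(acc.items())
    return [i for i, _ in merged], [v for _, v in merged]

idx, val = coalesce([(0, 1), (1, 2), (0, 1)], [1.0, 2.0, 3.0])
assert idx == [(0, 1), (1, 2)]
assert val == [4.0, 2.0]  # the two (0, 1) entries were summed
```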
bafb2e5cc2 : Implement sparse pow. (#1387)
28a7fbbdf5 : Documentation fix for torch.gather
4c1cdb6148 : Refactor Python string utility function
ed05c28bc6 : Speedup SquaredL2Distance CUDA
775481ed56 : re-enable dilated convolutions on Kepler (#1394)
224f5eabf5 : half<->float conversion cleanup (#680)
d6a31c68a0 : Add option to disable ppc64le's VSX support
96a281dfab : Add one more missing self.dilation parameter. (#1392)
94b147fd41 : Allows dicts batches in dataloader. (#1354)
6bb43ee41e : leaky relu gradient op
c26f6877a0 : guard topk for half (#759)
8908000262 : function -> lambda in test
8b1d5727d8 : fix minor docs
75f1989bec : Add nn.Bilinear and tests
a44317fea8 : Change magma_sgesvd to magma_sgesdd which is significantly faster
24e5a9057e : Revert "Parallelize TensorMethods.cpp builds (#1364)" (#1390)
060048bcd8 : Parallelize TensorMethods.cpp builds (#1364)
77035d151e : make topk test unique
50c9c23525 : enable topk for all cuda
48f9e526ea : implement expand/expandAs in CPU/GPU code
69574a6dc4 : implement expand/expandAs in CPU/GPU code
928f6516c1 : implement expand/expandAs in CPU/GPU code
b93b525a1c : Enable specifying of margin in HingeEmbeddingLoss (#1378)
482ffccd76 : Make instance norm grad test less flaky
726ded4758 : add box cox transform op
bf50599c70 : Layered LSTM (naive version)
8db2cf6182 : temp fix for transposed dilated convolution (#1388)
aa5a46b848 : fix LRN order
bc3ec13dae : change topk operator to use a priority queue
7e8ef0e22a : Actually pass dilation to the underlying operators. (#1386)
12a024241a : Move BeamSearchForwardOnly to OSS
27990fee54 : Use fully qualified name as tp_name for tensors and storages (#1379)
1aadf4324b : Add row-wise broadcasting to "Where" operator
ad6204eb0b : LSTM: support dropping hidden / cell states when sequence
c4ce118393 : fix curand odd-sized workaround
e9d5863860 : Allow Load operator to load into overridden names
2ef7331007 : Update sparse.py
beb7573e5c : workflow support for training regression/weighted logistic regression model.
c2cfa4cf5b : Add THGenerate*Type.h for all types (#1014)
c915f8ddbf : Signal error on connection error instead of asserting
6a1ef687f6 : Free scratch blobs when data workers exits, add utility function to reset blobs
795dc1c326 : Remove loss ops from eval net
b39a2f2cbb : Documentation for sparse tensors. (#1366)
d9f01397b3 : s/NOCUDA/NO_CUDA/
8ca7bf2ab3 : Check argument types in 'checkTypes' (#1363)
9215afef7d : Allow stopping of specific data workers + specify c2 queue size
13bdd4ec05 : Replaces the non-existing _param_init_net net by raising an exception.
9f9a2da1a1 : Revert D4920719: [dper2][operator] ScaleGradientOp
c387704030 : improve softmax-with-loss kernels for prob-mode
d8e7093857 : Reimplement RowMaxKernel using CUB block reduction.
8950f41da3 : Install CUDA headers.
e42c14e819 : ScaleGradientOp
deb1327b6e : Re-apply #266
b905166362 : RNN: fix bug for parameter gradient in a case when SumOp is
a4554bb705 : elementwise ops + error handling
da567dcb38 : Add __syncthreads() between CUB reductions for elementwise linear gradient kernel
ef2701a57e : MapToRange layer
2c8b41e3f3 : Adding add_weight_decay and image_input to brew module
885f906e67 : resnet train print loss and accuracy
5692969e8f : add gradient for LengthsTileOp
f82a510be6 : share forward activation blobs + pass unused free blobs down all branches + use shape inference
fc77ae1736 : remove some experimental files from open-source repo
aaafcfc529 : Improving usability of schema
afd01164f8 : Install missing headers.
a123247240 : Move SIGPIPE initializer to test main
41705ce7d5 : Add zero padding module (#1326)
88fc1d39ff : Generic TopK implementation (#744)
b3b66e3d00 : MKL related files with review comments incorporated
7153594d7b : Fix corruption of NameScope when exception is thrown
2533671a97 : Support 3D&1D SpatialBatchNorm in cuDNN
2a098fc20e : string -> std::string in common_rtc.h
795a8a603b : guard against apple platforms
d16e8ec8f3 : fix thread_local bug
5521fa35a5 : use CUB to optimize ElementwiseLinearGradientKernel
4c08d6ae3b : Allow cpu-only grad update in Parallelize_GPU.
081001a176 : "IsMemberOf" operator
24ff90ee6b : "Where" operator
437a670ce8 : Enable building Gloo only on 64-bit systems
2994dd6377 : Fix python support problems caused by building script errors.
902409be56 : caffe2: datasets pack/unpack
9cb901caf0 : Forward-only rnns
7440cd5ef4 : Add python_func_type to PythonOp
eb1130803f : caffe2: smart_tensor_printer
0bb558716a : rename model_helpers to brew and lowercase all helper functions
bef6e45f8b : rename ModelHelperBase
f407078d38 : ReduceFrontSumOp: striped Axpby
2e74129f0e : ReduceDimsGradientOp: replace multiple Scale calls with a batched/striped one
bf20e4e9b0 : Remove MiLSTM from recurrent.py left over after refactoring
4f77a49ddd : refactor LSTM test to avoid copy pasta, improve speed 1.5x and provide better coverage
684607a793 : Add a friendly error message for unzipped mnist file.
41f4198344 : CUDA version of PRelu/Gradient + Fix Gradient for dW
3b0069a014 : Expose operators execution statistics to python frontend.
09bb91022a : Fix tests for ops without a CUDA backend
8387bc4680 : Added Python_ADDITIONAL_VERSIONS to cmake so python 2 is default.
b82f9e9ea7 : FindOp
f07ec699ee : Add rendezvous timeout parameter and defaults to StoreHandler::wait()
fa261cdafb : arg_scope for model_helper
199a09c7dd : XCode -> Xcode
a48062b1a2 : temporarily fix sync script bugs changes by reverting partially https://github.com/caffe2/caffe2/pull/266/files
9899512401 : Remove common.h from root
d95feb3feb : Only build on 64-bit systems
3ab074b3c5 : Fix torch.stack() with Variable inputs (#1345)
6a69f7007b : Revert "add keyword `out` for autograd function Concat to match torch.cat (#1336)" (#1340)
71b9dea6ec : add keyword `out` for autograd function Concat to match torch.cat (#1336)
fa4f363b93 : Instance norm (#1283)
aab30d4ea2 : Fix errors when no CUDA devices are available (#1334)
2b56711c24 : Indexing fix for fused GRU/LSTM kernels when all tensors are not contiguous. (#1325)
5224fc56b0 : fix typo
e80a3a7f7b : Indexing fix for fused GRU/LSTM kernels when all tensors are not contiguous.
5b83fe6781 : add contiguous checks
24d92b5d9f : Concatenate directly into shared memory when constructing batches (#1323)
1375694853 : Document torchvision members
be5e399d46 : Add a simple README for torch/lib. (#1322)
884690adb3 : build_ios.sh comments fixes
57b51db8d7 : Add a guard function to check Caffe2 linking setup.
4dafb608e7 : Fix char_rnn LSTM import
01c76bf830 : Optimize TransposeOp by using strided access pattern, bulk memory transfer, and other profile-guided optimizations
9f86de2dc7 : Support WatchOS build
10387a3f35 : fix gradBias checks
627921d01d : Use CUDA standard tanh for lstm
e788ea40de : fix typo in TH_APPLY for _dimOffset
6089900011 : grammar/typo: "There's 3" -> "There are three"
6ed36c37e6 : fix CUDNN layer weight size calculation for multiple layers
8236d38e81 : add cusparse link dependency
8adf8fe2ed : create and expose handles for cusparse
4f2531fbaa : syncing fbandroid/objc to fbcode
d2472d1ab5 : Disable cudnn dilated convolutions for kepler. (#1308)
f768233a1c : Fix a data_workers test
41b7217898 : Fix url to original Caffe external resource in README. (#317)
95f123a83e : fix download progress bar's percentage exceed 100%
51033f19d7 : unbreak test_seq2seq_caffe2_model_cnn_one_stack_encoder
331219c550 : define abs for short too
a790256537 : Add option to control the size of lengths tensor
249dc01f0d : Set cuDNN pooling mode to match CPU&CUDA implementations
5a856ce03e : disable dropout completely when not used
5b6fb047aa : Fix parallel build support in makefile
e34c5dc1c3 : macOS build issue with set_affinity() in net_gpu.cc
fd9185ab21 : fix getting empty struct
47ce345699 : Limit the maximum memory for keep_on_shrink for predictor
8f43e3fe36 : update ios-cmake
e5e3ec1498 : fix unit test
7805ac9098 : Base Store::wait() should ignore timeout for back compat
4bc40d0658 : reset environment after every example
001598a59b : add net gradient check
4ad3a4fc8b : Revert D4794432: Added tiles and axis as input parameters to Tile Operator
5f65ee9ca0 : Add more newContiguous calls and checks
f750a2d2df : fix a few typos
883ff96f74 : Allow UniformIntFill to produce empty tensor
b294aadc66 : fp16 support for FullyConnected op(Fixed)
f9149b1f2e : Fix halving-doubling corner cases
8b5782ed5c : Weighted sampling dequeue operator
d47c1362c5 : changed doxygen config to new docs path (#311)
d58141ec4c : launch updates (#309)
6cae3fa896 : Typo in Build version of ubuntu (#294)
9ef30b337e : Add six to Tegra X1 install script
b89688658c : Missing CUDA_NVCC_FLAGS & CUDA_HOST_COMPILER flags at GPU arch detection.
ea493c6fda : build error in context_gpu_test.cc
94ee2f3662 : update gloo to master to address #286
a8e6610e3d : Fix argument typo in pad_packed_sequence docstring (#1300)
bef5720b76 : Flag to report total memory in GPUs + op and python func to retrieve
56cc1e219b : Fix include in mpi/context.cc
1607042bf4 : Add timeout parameter and default to rendezvous Store::wait()
41620f86c9 : Update IntelComposerXE to 2017.2.274
8a47857ef1 : group_conv fix
7d023cda6c : Add timeout to RedisStore::wait()
9e8b4ef075 : Include THCNumerics.cuh in THCAtomics.cuh. (#752)
a35f507532 : Update functional.py (#1298)
6aa22beb86 : Fix loss.py docs (#1296)
4c70612320 : small change to schema
f950a1b70f : create bucket-based calibration - model manipulation
8492c411e8 : Caffe2 unit test for unmask
b7be2016aa : Fix typos in memonger.py
71bf8fb55b : Clean up fd from destructor when in listening state
580e192151 : Revert D4870606: caffe2: datasets pack/unpack
23230215a9 : Add run_train_net_forward_only() to LayersTestCase
ad6b53e401 : allow to specify output dtypes for functional layers
c7d83a16f6 : Update README.md
934816c01c : Change the default algo for cuDNN conv forward to PRECOMP_GEMM (#1290)
009bbc9983 : Allow UniformFill/UniformIntFill to take parameters from input blobs
fc19473501 : Corrections in legacy modules. (#1286)
34546f022a : Expose dilated convolutions.
ab77742f6e : Add some missing documentation for arguments.
34269a6fda : caffe2: datasets pack/unpack
701e63107f : speed improvements, fix tests
655c22569e : CPU hspmm + more efficient reorder
cd3bbc9dfd : more operations and optimizations (hspmm, reorder, ...)
1018b238ac : make gradients contiguous in adagrad
e27bd4ce7a : faster cadd
b2acc33c73 : contiguousValues method
40804830b8 : mark_contiguous operation
01d84c5f9d : revert sparse cuda index type change
88b42324e7 : spcadd, sparseMask, cadd, csub, cmul + tests
ec260fe8e9 : add test for dsmm
328b416068 : THCS contiguous + to_dense
4bde9efbd7 : Update CONTRIBUTING.md
ff781ed059 : Update CONTRIBUTING.md
ebb5cc4cdb : Make Gather works on empty DATA tensor
46cf6ff5fb : fix batchnorm docs (#1284)
c153b1ca75 : fix softmax ops dimension, add explicit rowmax buffer for simplicity
fcf4deac7d : Fused RNN kernel remove explicit instantiation, isn't needed.
1feb120d93 : Mark input as optional for gradInput in Tanh and Sigmoid
2ca071d730 : Remove double precision math from LogSigmoid too
8a901c510d : Update ops for Sigmoid and Tanh
ed60fe0ed6 : Gloo benchmarking and script updates
6595545843 : fix CuDNN RecurrentOp Gradient init
2d28087529 : Update mac build to ease the rpath issues
4bf559eddb : RNNCell, LSTMCell, LSTMWithAttentionCell
e0a904011b : Use gradient name for allreduce op name
ed1e342860 : Reuse common world for allreduce/broadcast
cf317d1106 : create_net: explicitly specify if one wants to overwrite the network.
9ab077dc9d : Revert D4871248: [caffe2][PR] fp16 support for FullyConnected op
391fd14115 : Serializes a std::unique_ptr<std::mutex> object.
0a726af42e : Coerce input of FunctionalLayer to record
3a9daeda8c : Update gloo to new master
2f07e77218 : update NNPACK related submodules
0a4c5756df : Logitzy SoftmaxWithLoss
20330fe3f4 : Added tiles and axis as input parameters to Tile Operator
c3a4468af6 : Add conv helpers and proxy to CNN
2043b3c114 : train and algebra helpers
277b4eca97 : array helpers (concat)
ed3f0ac5e9 : nonlinearity helpers
3623c241c4 : normalization helpers
e881c4c590 : removing __all__ in fc, dropout, pooling
54d42af413 : Fix a workspace test
25035e8b3b : ElementwiseLinearOp
ac7663b18c : layer_model_instantiator: filter layers by tags
f67ab32d34 : Output peer address on network failures
9150e33765 : Add support for creating docsets. (#1276)
e4478804ce : Fix `patched_make_field` for newer Sphinx versions. (#1275)
1082db600e : fp16 support for FullyConnected op
a220f2c3aa : Fix group-convolution w/o biases on CPU. (#1273)
5311fd3d6a : Conv no dx
7270471ed6 : Returns auxiliary parameters in the optimizers.
7568a99fee : Fix bugs in tensor-init-function
22f3825d8f : Cmake mobile build improvements
dd923cf052 : Unmask operator in Caffe2
dd80310681 : inference lookup is now local for tutorial (#267)
15267ac009 : fix typo
3c0dc06ac8 : Add __builtin_cpu_supports function def in windows
ca0c8e5b25 : remove import_array() help and use import_array1
b93a7b134a : doxygen configs and updated python files to inc. doxygen tags (#266)
4db7bec686 : CUDA version of SigmoidCrossEntropyWithLogits
fc8bb523e8 : Update gloo dependency
0cb60e7d5a : Retrieve ethernet interface link speed
182e2d348e : Use halving/doubling allreduce if context is power of two
a207aa4dbc : Fix backward compatibility bug for cnn model helper arguments
475eff5281 : Allow peer access only in groups of 8
3c9dfe4736 : dag-compatible forward memonger
d65892b7f2 : Change back the function signature of relu gradient to only use
e8cc5563fe : Add an optional forget bias argument to LSTMUnit
246bedd406 : Add counter for task processing wall time
f94f43fd6e : Working sparse gradients for data parallel model
69f42e3f70 : make CopyGPUToCPU/CPUToGPU handle sparse gradients
b61174047f : Add threshold to switch between host/device reduce and bcast depending on buffer size
baf33161d4 : GatherRecord layer
8d93fcf13f : Don't allow overwriting keys in HashStore
8bd0522c20 : Add tests and GPU impls for sparse optimizers
a559893c9f : Instantiate nccl type templates for gloo (minus half)
83f360887f : new SumReduceLike op CPU/GPU implementation and doc
50c2759afe : Expose missing headers
da93963860 : add input/output blob name when exception thrown from tensor
05002442eb : Renaming DuplicateOp to LengthsTileOp
cb66e9cf78 : torch.diag bug fix (#1251)
735f5af87e : Add new variant of halving/doubling algorithm that pipelines local reduce/broadcast with communication steps
8c9f4d8c3b : Add throughput information to resnet50_trainer
580ff3a594 : Revert D4854240: [EAZY][C2 OSS] Add normalization helpers and proxy to CNNModelHelper
32b30ff1fe : Revert D4854440: [EASY][C2 OSS] Add Nonlinearity helpers and proxy to CNNModelHelper
a8ef3b4090 : Revert D4855073: [EAZY][C2 OSS] Add array_helpers and proxy to CNN
7867262d39 : Revert D4855040: [EASY][C2 OSS] Add Algebra and train helpers and proxy them to CNNMH
c852883086 : add named_parameters that yield name and value of parameters (#1242)
62c584ba79 : Fix abs with char and short cuda types. (#747)
fbd53d87bf : block wide reduction with multiple values to reduce at once (#745)
c907c7c7dc : Update resnet50_trainer example
71303b8af4 : Autograd deadlock for recent glibc fix (#1243)
4336e9ea66 : Revert "make it compile on Windows + use ilp64 MKL" (#1002)
f5ac83b060 : LengthsGatherOp
bbcdc91135 : Remove prof_dag from step net
154d49cc6a : Caffe2: add schema for SumElementsGradient
4967db0756 : sanity checks for data parallel model
75c2168966 : Generalize PoolingOp(CUDA) to compute 1D, 2D and 3D pooling.
d48afd41f9 : Add print string for MaxPool3d, change for MaxPool2d (#1115)
fd5643e426 : Add math::Gemv<double, CUDAContext> by cublas::cublasDgemv
8de1ce57d2 : Add Algebra and train helpers and proxy them to CNNMH
b2e94a7bcb : Add array_helpers and proxy to CNN
e7cdd90490 : Add Nonlinearity helpers and proxy to CNNModelHelper
b8f2baec8e : Add normalization helpers and proxy to CNNModelHelper
d35b7569db : Add Pooling Helpers, proxy to CNNModelHelper
e21e4bf3e8 : add pyyaml to conda note here as well
570c6bb9b7 : Fix backward pass computation when an input is used in a Fill-op input for shape
f0426e6288 : remove TODO comment
09bfc8043b : Generalize PoolingOp(CPU) to compute 1D, 2D and 3D pooling.
5391fe8953 : addr zeroes output buffer when beta=0
0925c91e80 : addr zeroes output buffer when beta=0
0f43ac6865 : use GPUFallback for TopK
253c854da5 : update Dockerfile not to use requirements.txt
7c59754d24 : update source build instructions
a8d60ad3ac : fix THNN headers
aec658f870 : fix THNN headers
2b37ecfccf : fix THNN headers
01a35dcace : Fix coalesced CUDA collectives for nonhomogeneous lists
afeeb81e79 : Add support for keyword arguments in torch.cat
6002f94232 : Fix is_tensor and is_storage for old-style classes
a5c7d98611 : Import TripletMarginLoss
605b3c86ce : Retain the type of numpy scalars in collate_fn
2087b1157a : Improve serialization error messages
81e972031d : Handle all errors if Module's sources can't be retrieved
81a55f441c : Adds interfaces to check the existence of a DB
e9ff57176b : Fused pointwise kernels for GRU/LSTM
e362a64975 : release notes for v0.6.1 (#260)
cfa504691c : Fused pointwise kernels for GRU/LSTM
0dc52abe9a : Fused pointwise kernels for GRU/LSTM
1e5140aa76 : option to recompute blobs backward pass with massive memory savings
0b50f794e9 : Use thnn version of Tanh/Sigmoid instead of autograd. (#1234)
15c6f637d6 : create bucket-based calibration - layer
06ae1ff534 : Add fp16 dispatch to main cuDNN operators
2abbb5133c : Fixing function signatures: long -> ptrdiff_t (#1232)
67468212e3 : updated ubuntu instructions (#259)
fcf8387779 : Fix ibv_devices wrapper if device list is empty
ade105fb7c : update README to install pyyaml from conda (#1231)
70e9c08f27 : feature processing ops
4dc1dbab05 : MILSTM Cells with and without attention
22584b546a : Revert D4711302: SumReduceLikeOp CPU/GPU implementation
84ee795b25 : remove net_predictor_extract.py
092c1440a2 : SumSqrElements
79c4cb96b1 : fix memory leak in btrisolve and getri
97bd6aae37 : Throw error if Redis replies with error
f618ea9f31 : Update README.md
f6fef3718e : fix typo in autograd.rst (#1219)
3fcdd6a42b : Reuse sockaddr information from device
707c1ca4cc : Function to retrieve PCI bus ID from device
bc0ed9298d : remove incorrect version in readme
6d9ad1d66a : Adding IndexLinear (#1181)
d1af311224 : PiecewiseLinearTransformOp supports passing params from input blobs.
64ee4056d7 : updated docker image inside the docs (#1216)
d8b9e787c2 : DuplicateOp
b0adcf02f8 : remove workspace sequence id
a2065f3c1e : report capacity bytes as part of workspace blob stats
64599d8351 : create helpers package and add dropout
b16a352a3b : Fix remainder and cremainder for integer types
88bcfc1531 : Fix remainder and cremainder for integer types
662163bef6 : Fix remainder and cremainder for integer types
4026593240 : check for beta=0 and avoid multiply in sparse mm (#1211)
441d75ce56 : Adapts basic operations to new THXVector interface
813452608c : Add Reduction layer in caffe_translator
e153643b6c : tutorial updates (#257)
dc5a34200f : SumReduceLikeOp CPU/GPU implementation
8482cf9823 : TensorVectorSizeOp
c101856214 : Disable openmp when building for android
3de56785fa : fix conv1d test and add for padding
7ba1c437e3 : Create PATENTS
a89317a9d4 : fix types in unfold.c
e48db02e10 : remove unused python-level BatchNorm.py
7f2553bc6f : dont use cudnn batchnorm for cudnn < 5.1.10
acaf279235 : Unbreak old model check in caffe_translator
04bd41a4f2 : Downloader fix
66a20e5c32 : Support TORCH_NVCC_FLAGS environment variable
cb3bd0ede8 : Added a DP + recursion algorithm for finding optimal blob assignments based on blob sizes.
ffd298376a : option to print tensor shapes at exit
c7d284a03b : ability to disable inputs for extract predictor net
f0c7124420 : Allow support for negative dimension argument for all functions
ae1c365dbd : Add TH_INDEX_BASE to nDimension and stride functions
8c769258f8 : fix cnn.Softmax when called with only inputs
6fd9b53d93 : Include common/linux.{h,cc} in CMake build
e2323ad688 : Add CAFFE_ENFORCE to protobuf parsing
e692c38fcf : Compute distance metric between PCI devices
23183b9642 : memory-saving only_loss argument for SoftmaxWithLoss
59f464434d : Used blob sizes for finding assignments in a greedy way.
a54000dc6a : Added an ordering function to reduce live spans of computed blobs.
b922b19bfd : add weights bias to modelhelperbase
5dfa73702f : Display runtime information in benchmark output
95140094cb : Use CudaStream as first class object
76abd9a8ac : Caffe2: consolidate AveragedLoss with SumElementsOp
c120322890 : Predictor exporter open-sourcing
ef95926103 : Move setTimeout to Device and set default tcp timeout to 30 sec
e7f5220dfa : device_ids can be None again in data_parallel (#1187)
a7ae04a657 : fix precedence problem when building with debug python (#1201)
7f03182bfa : sizeAverage -> size_average in docs
9f2a5d804d : Add a flag to fix when dataset size is not divisible by batch size. (#1133)
a7217e6626 : Remove unused optimizers
aa506fa4d7 : fix docs typo
955869a09a : fix cuda_allreduce_halving_doubling to correctly copy between and reduce on GPU buffers
d82cad3019 : implement nn.Module.__dir__ (#1142)
9504246c32 : add triplet margin loss (#1165)
84bdbe5ab4 : btrisolve: Add sz checks, correct B's ordering, support nrhs>1.
85954032d9 : fix doc formatting
fadbbd2692 : ReversePackedSegsOp optimized GPU code
1a04b92226 : add note regarding SGD momentum
c66c8f6e84 : Add Softmax to cnn.py, cuDNN engine.
8da2d75ec8 : [Caffe2/Recurrent] recurrent.py API to cuDNN LSTM
cf201ebac8 : support axis for cudnn softmax
320b598ff1 : Add NanCheckOp, an operator that checks for NaNs and inf's on both the forward and backward pass.
8a822d48f5 : Update README.md
5511ad258b : cuda version of recursive halving/doubling allreduce
75a635630d : Update to ignore zero targets
26d301fbe4 : Configurable CuDNN workspace limit in resnet50_trainer
ecd3bda44e : Fix Softmax for CUDA
8e6524938b : Undo D4832492 for Gloo
02f0c1c9d7 : make memonger work with RecurrentNetwork(Gradient)
65439e849b : Fix mixed context loading validation
4e4cfd8b2b : Fix main()s to call folly::init/initFacebook/registrationComplete (part 14)
66d00b3a63 : Use CUDNN softmax implementation
5f263c6175 : RecurrentNetwork and variable length links
6bd4ecd153 : Use thrust::inclusive_scan for 1D cumsum/cumprod (#742)
5c802c5ba9 : Refactor AllgatherRing to use remote buffer offset
5b40e4245d : Fix typo and make btrisolve work for doubles on the CPU.
39fa092a13 : Constant string is generated from Protobuf instead of Thrift
ef42d4c2aa : Fix sparse to dense and improve DispatchHelper
ae5865082c : Move common algorithm stuff into algorithm.h
f86beccc5b : Use workspace pattern with CudaAllreduceRingChunked
d122b4e4ec : Update btrisolve docs to the newest interface.
81008aa111 : Handle errors in sync IO path.
0cdf10478d : Start benchmark element sweep at 100
0e5b2fd016 : Support cropping with negative pad sizes in PadImage
4de82cfa0f : Use CudaAllreduceRing<CudaDeviceWorkspace> for GPUDirect
5c32c82a6d : Add option to subtract log odd from sampled trained prediction.
1ac8251373 : Use gloo::make_unique to fix build for C++11
3b4c950862 : Add option to use id_score_list_features column
511ca3ea1b : Add tests for tcp transport failures
8ce1382e99 : make it compile on Windows + use ilp64 MKL (#981)
22cdef3ddc : recursive halving/doubling allreduce
e13e9c1302 : cuDNN version of TransposeOp
bf28d80460 : fp16 support for NCCL ops
fe9a243b83 : Add default value for GetRepeatedField
148b11847b : Remove useless base class in allreduce.h
b3a2f30715 : Extra workspace template parameter for CUDA algorithm
a95751e918 : Fix test_random_seed_behavior for multi-GPU
f29d3839a8 : ubuntu installation instructions for v0.6.0 (#244)
e56f21e46e : bump version to 0.6.0 for prerelease (#243)
6490d58a75 : Add cuda path to nccl build
91c4ba7980 : Add torch.arange and deprecate torch.range
03f1cab801 : Unify argument names in norm and renorm
fa2c566353 : Add Variable.type_as
2d1122739c : Raise AttributeError in Module.__getattr__
7861f585fe : Reshape grad in dot
9fc56793dd : fix trunk for push and small cleanup
70c4b82eba : add /arch:AVX /arch:AVX2 explicitly for msvc
b401cb48fe : Make optimization methods configurable and allow flexible optimization settings
274b5c9003 : Allow unhashable inputs to parallel_apply
b0a0c437dd : Some fixes for load/saving and beam search
ce31caf865 : batch matmul: guard against old cuda versions.
2d7731a5d1 : Fix typo "mistmatch"
254ee9b099 : Fix protobuf build to properly include directories
a2593ea0c2 : Add GatherOp for GPU, and update its tests.
dfa2d26830 : make random_ range correct when both lower and upper are specified
559ae078b8 : Fix Option constructor in invalid argument error printing code (#1160)
d8c65cc52a : A more deterministic way to find old C1 model
cd2929c707 : ConvTransposeMobileOp respects the `shared_buffer` arg.
8f9cd757db : Skips the initialization phase of the individual checkpoint objects.
b13b7010b9 : check for nvidia driver's sufficiency before checking for number of CUDA devices (#1156)
a3bfb9f376 : THVector_(add),(mul) -> (adds),(mul) for VSX.
5c79046d39 : Use persistent tensor to store exp_inf (part of optimizer's state) (#1152)
5bee34eb84 : Add git submodule init command
0771ce312a : optimize weighted softmaxwithloss gradient
30fd222b80 : implement autograd function `cross` (#1138)
834142bb64 : Change the predictor to use Protobuf
cd4160c894 : distributed training for dper2
8421bf7c60 : Faster softmaxWithLoss rowMaxKernel
3b7b23df66 : Move CUDA collectives to cuda_collectives.h
ffd1883229 : Make extension loader properly handle visibility.
d933287114 : Add a barrier after verification iteration in benchmarks to prevent a race with regular iterations
b2c6ac8691 : temporarily disable binary build that depends on both leveldb and opencv
4189882cfe : move protobuf back to 3.1.0 due to android/ios cmake error in 3.2.0
ed44e87f98 : use striped batch add for the recurrent network gradient
761eef1f19 : Minor typo fix in `backward` function in `torch/autograd/variable.py` (#1143)
5dffba3f92 : Sparse momentum update for seq2seq embeddings
d8ae7893e0 : Get rid of warp-synchronous code (#739)
90b872c670 : Add GPUDirect capable version of CudaAllreduceRing
a95ce9e98f : Using temporary variables when performing transpose + addmm
0e6413f8ea : Fix flaky test
403cad46dc : Using temporary variables when performing transpose + addmm
e1d64ea4d5 : support multilabel in generic preprocessor
51ae65d76f : RNN: reuse memory for gradients of internal blobs of the cell net
619a3ad2f4 : Adding use_grad_hack option to Sub gradient
3eb3507367 : uniform_sampling layer
d76a814c93 : Fixes for ops without a CUDA backend
b8ccf42c74 : Constify algorithm constructors
7772f1f182 : Make Blob moveable
8aa1cefed8 : Fix deadlock in autograd (#1140)
310aacf23c : mini-improvements
d604961b26 : check for ExtractPredictorNet for is_test arguments
afe3df32f5 : test existence of confu and ninja before installing nnpack.
04210ad531 : bump protobuf to 3.2.0
4b147e2079 : Settable timeout for tcp read/write
92f2220589 : Add whitelist capability for smaller mobile binaries
8efb762fcd : gpu sequence op step 1: clean headers
0d908d813b : Implements Cumsum function for autograd (#1122)
1c391f6f93 : bump version
58f7f2b441 : doxygen python block added
be146fd721 : Add btriunpack and update the btrifact test.
7cc92b1260 : Add eval net for layer_model_helper
2979f4b989 : add more functions to docs
22b3600f19 : add samplers to documentation
95657ea1e8 : Protobuf is binary string. Use bytes instead.
215813d7ac : Change dockerfile to support for cudnn v6 (#1135)
fd2835887b : only resize stepWorkspaces when sequence length increases
a03d956b56 : Fixes the flaky test. Although we create nets in three different nodes,
f2b8150a1a : Fix PadImage same padding argument.
939daa3d99 : gradient checker for nets
1ed746df45 : BatchMatMulOp: use cuBLAS batched strided gemm for CUDA
80e88a88ed : Fix ibverbs completion queue capacity
76997e80f2 : RNN: remove copy for gradients of recurrent inputs
242bff8480 : RNN: avoid copy for gradients of inputs to the rnn cell and save more memory!
327d3cb2b5 : Caffe2: add init method and metric logging to data loader
78f0b35949 : Caffe2: CUDA implementation for LeakyReluOp
b41449b680 : SparseMomentumSGDUpdateOp
dc7695a47a : Update links for tutorials in README (#1123)
032a65edff : modify pip uninstall command in CONTRIBUTING.md
9c58341809 : codemod: use `<>` includes for gtest headers
da36212259 : SamplingTrain layer
8251653585 : caffe2: relax PartitionOp constraints
ebeb36f6ee : Refactoring, t-sne, additional features
55546359b6 : Retry on EINTR for writev in tcp/pair.cc
0c47d345df : Multi-gpu training for OSS seq2seq
db9cae4d34 : Allow passing a Deleter to ShareExternalData
fe3d5a63f2 : Support multiple predefined reduction functions
e4b4e515cd : add mode to cwrap
37718e207d : Add remote offset argument to buffer send
afd576ec0e : Add mode kernel
95aa2af377 : btrisolve: Make a Tensor method and update argument order
3ddcff659d : Move AddPlan, AddNet, AddBlobs to predictor_py_utils.py
ee28b6ce22 : Caffe2: instrument Everstore loader
7fa4acab9b : Loads only the model blobs from the checkpoints.
73b18e7ccf : Enables checkpointing for dper2.
7c2c7e8e31 : Move NCCL code to subdirectory and backfill ops
a7029cf34c : Add installation configs for header files
3eab8a71e2 : Added docstring to add_module (#1116)
2fd4d088ff : add Adaptive pooling methods to docs
661fa5915d : minor bugfix for cmake
2c7f45aa3f : update nnpack to the most recent version
5d274cd499 : Update btrisolve argument order.
8051dec608 : Update btrisolve argument order.
f2c1071c33 : Adaptive max and average pooling (1D & 2D) (#1084)
bb71117ecc : Cwrap arg assign (#1102)
d25433a099 : Fix docker build commands (#1103)
7dd45490f8 : don't use inplace backward, remove unnecessary zero for grad_input (#1079)
6163676ebe : Skip optimizer when param doesn't have gradient and optimizer is not set
eea0ea7712 : Struct nested field name lookup supports List
bf632544e6 : Pass NULL rinfo_ to btrifact by default (#1089)
282402d4f3 : Revert "Add back zero fill for ger" (#1093)
1461709ea0 : Improving the performance of IndexLinear:updateOutput
6aee34b666 : Registering GPU version of PackSegments using GPUFallbackOp
ae122707b5 : Don't do extra resize in linear bias
8ce34d6c87 : Add Calibration
b711c7d039 : More perf stats for BlobsQueue
2f73a01b70 : Average and time spent counters
29c1102806 : Extract net and blobs assignment to separate functions
b4fe5ad641 : Use zero instead of mul when beta == 0 in addr
5a761dbe65 : Add back zero fill for ger
e591ddb70b : Add nnpack specific dependencies under third_party
0ade0578b1 : Reset workspace after each test in copy_ops_test
ad8b92b9e8 : Extract plans assignment to AddPlan function
dd893391d5 : Add argument to children to yield the name of the modules (#941)
649f04d077 : Added Pascal nvcc flags, bumped version
463a28afcb : Windows build for easier python usage
f45ef5fdb8 : AllGather algorithm [CPU]
e8196f990d : Make rinfo_ argument optional in btrifact
269b77a1b2 : Make rinfo_ optional in btrifact
51a92c6659 : Add gloo submodule
476d85dd3f : DataLoader: Fix batch data type for numpy array (#1074)
d5880b128e : CMake support for Gloo dependency
63f6c0d692 : add Pairwise distance (#835)
b546fa3fcd : add assertTrue to padding tests
1d656b6769 : Ensure displayed progress in ProgressMonitor is between 0 and 100%. Fixes #1086
97a6400f03 : Don't do copy for param_grad in backward_step_net
99bfd36a04 : CRF layer in caffe2
3acbbb30f2 : Fix inconsistent in-place and out-of-place for HardTanh
52911f9e47 : Fix inconsistent in-place and out-of-place implementations
a65e0f488c : Remove zero fill where not needed (#1077)
396ebb0546 : exec_net --> predict_net
8dc5d2a22e : export current_blas_handle
2cb123df83 : Fixed list init issue under MSVC compliation
422c65ca35 : Removing unnecessary Copy after fixing gradients for external parameters
ed97f3f854 : Adding support for flattened inputs for IndexLinear
a231fe8fc5 : IndexLinear support for cunn
8168e8ac25 : allows to specify output names for functional layers
0bd69b20d7 : ReluGradientOp implementation with Eigen
bb353ccc17 : Add batch triangular factorization and solves, add IntegerTensor to cwrap (#903)
ced0054a9e : Fix formula for stddevs grad in Normal function (#1076)
68ee5ede29 : make inplace tests compare input grads
2966e3295d : Make static/shared configurable and install optional
3865606299 : adding batch triangular factorization and solves, add IntegerTensor to cwrap
d3334db627 : adding batch triangular factorization and solves, add IntegerTensor to cwrap
50f5a4dd18 : fix BCE loss formula visualization (#1072)
b60936b9ae : fix NLLLoss2d documentation
2d750b9da5 : fix typo
ca376d4584 : implement autograd function trace
f4d8944973 : fix OSX fread bug (#1068)
42036871e9 : Fix windows build
d76e460b80 : Allow to query the blob size in bytes for perf stats
6b7aef63ac : Added support for multidimensional tensors in PReLU; Channel number now in second dimension
b3ab4b1094 : Check torch.backends.cudnn.enabled, padding, and output_padding (#996)
1e8cb82a2d : Break only after the update in L-BFGS
dd399a8d68 : Return total param norm from clip_grad_norm
faac0f5c25 : Fix torch.cat bugs
c36f47bd1e : Make random_ exclusive and make generator kwarg only in all random functions
3d1888cd95 : Fix size mismatch in CosineEmbeddingLoss backward
3b7cb50d1c : Add ConvNd to model helper
0276c992b7 : translator fix
e4907bd1ba : Improving exception logging in Caffe2
97a82a3018 : fix formatting in upsampling docs (#1067)
5cd313ed23 : Fix TH_TENSOR_APPLYX_D in the case where the dimension of interest is the inner dimension
9d83121ef5 : Don't add options to CUDA_NVCC_FLAGS if already set
9ab65d7be0 : Add CUDA profiling ops
6d7cb31e53 : MPI: Duplicate MPI_Comm and allreduce maxLength as MPI_UNSIGNED_LONG.
30a9cf7a46 : Mark transport pair after IO error and propagate to calling threads
fe4bd5066b : Added support for multidimensional tensors in PReLU; Channel number now in second dimension
e17d84d38e : Added support for multidimensional tensors in PReLU; Channel number now in second dimension
b9aef6bc03 : Fixing default values for LR and Epsilon (#895)
0056b08834 : Narrow V when returning only some right singular vectors
bd0df61bb5 : Cast accumulator in LookupTable renorm to accreal
d9678c2e34 : Correct typo in batchnorm documentation
ea66516d5e : Output attention weights from apply_xxx_attention methods
d7b2aebf2c : Support for Sum in cell net as first operator
e3fc195fc6 : fix mklmemory bug
2417725ae5 : Use version-specific CUDA packages for 16.04
b3c0aa3b7d : fix a typo in ffi doc (#1055)
8fc9c79287 : Add nccl submodule
4fce1a389f : Include CUDA support in CMake build
8a35fea9eb : Improve error message for not found operator
aa4d07d3c4 : bugfix for Windows, esp. VS 2017
93ff338ca7 : Beam decoder for NMT in Caffe2
d13f98de4e : implemented DistillLRLoss
e41d35909a : Conv-ND NCHW CPU/CUDA implementation
8ce56c30d4 : Convert runtime errors to gloo exceptions
771d169c7c : Extend conv params to handle nd inputs
4667f936e3 : Add explicit dependency on pthreads
4eaa30b634 : Build tweaks
33f41c06c0 : Remove more instances of batch_size
17da5856ed : Remove batch_size parameter from attention and LSTMWithAttention interfaces
3924a35509 : RNN: recycle workspace (attempt 2, easy mode)
3e222d501a : Backed out changeset 460028d912d6
25bbd632e3 : Backed out changeset 35c70e825855
d1424c3265 : Revert D4702086: Remove batch_size parameter from attention and LSTMWithAttention interfaces
f97d7949d0 : Remove legacy LSTM, cleanup tests
77fbc12f23 : Fix some deadlocks when torch_shm_manager is not found (#1030)
7e46eb1613 : Fixes for Prod and Expand functions (#1026)
1aa5231fb3 : make nnpack build on mac/linux, and also contbuild support
a2fc88cf97 : Remove fbcollective from tree
a15776c868 : Fix for Windows build
4829bdb1ea : BatchSoftmaxLoss layer
cea16ff7cd : BatchSigmoidCrossEntropyLoss
c3973f08a5 : Check that inputs/outputs don't change between runs
79c3a3af54 : add gpu support for caffe2-seq2seq
821656d2d8 : add CONTRIBUTING document
86e40ed875 : Fix a typo in docs about pinned memory buffers (#1023)
1513b1de6b : Add ResizeNearest operator
ad4ae4528f : migrate mtml to dper2
cc2e915461 : Implement TopK op in caffe2
2c8bf2525b : added BatchL2Loss layer
9382ecb9cd : Set up Caffe2 versioning number
227fd0bbc7 : fix bypassing_mtml crash
1d0699e147 : Define exception hierarchy
ea52c7567a : Expose minSize for threadpool
d85ed5d5d6 : fix external_loggers
b9379cfab7 : Use cuDNN and NCCL symbols from _C library (#1017)
10d95bd0f0 : Remove batch_size parameter from attention and LSTMWithAttention interfaces
412148a62a : update NNPACK module
7654b3f49e : Add function to compute cross_entropy for 2D image (#802)
37ebbc2809 : the length of any item in padded_sequence should be greater than 0 (#1013)
b2ab7365be : fix for special case when dense dim is 1
8241cd7b6e : Fix compilation error when compiling with 'clang -x cuda'.
a7781fdebc : Use default Redis port in RedisStore constructor
29ddbc3e37 : implement linspace, logspace and range in CUDA
7773a2d643 : Bugfix: type not being set when inferring types+shapes
16a133ed9a : Fixes for testing on FB infra (#1009)
1aa665f6a8 : Documentation
c4d1318662 : Fix map_location in torch.load (#1006)
379ae6d865 : Refactor out dispatchStateless (#1007)
9aa277eeb1 : Fix cmake gcc error per @benbarsdell
56f324d191 : Added predictor bindings to python interface
61dd35f1d6 : FCWithoutBias layer
6ac793dcbe : Reuse ncclComm_t across algorithm instances
e00d9c1fd8 : Execute benchmark through mpirun
92101aa87a : Update resnet50 example
be6322e4b5 : Update nn.init docstrings to correctly reference the module (#1001)
62063b2f62 : Fix docs for pointwise ops (#845) (#985)
518d36d34b : Add PReLU translator
bb58074332 : support get/add a field by nested name
26628d10ff : Fix workspace clashes
ba9cac4d98 : fix mkl contbuild
3176cd6292 : update nnpack submodule
9e6fd02c28 : Use Gloo ops in data_parallel_model
4d7451399b : XRay mobile quantized model
9e593a901c : fix memory corruption
13b1580613 : add F.pad to docs
fe788f5003 : Use correct event to synchronize destination buffer in NCCLElement
f449af378d : Explicitly pass CXX to NCCL Makefile
014d1fe5c4 : Allow test discovery in caffe2/python/
2ce5121db1 : Reuse workspaces in RecurrentNetOp -> much faster
91f468b15c : fixes to make data parallel model work for RecurrentNet + test case
95ecf22c0c : Add throughput.py for throughput measurements
25b1221579 : Allow scalar output in functional layer
e50a1f19b3 : Use streams in scatter to overlap copy with compute
e86db387ba : Fix conv1d backward segfault (#999)
783e40e806 : Fix lengths-remapping again + better errors
b7530cc54a : Optional central cropping in ImageInputOp
a74c2bcda8 : Fix build_ios.sh bug (#194) due to name collision.
dec78a37b4 : Increase threshold for using chunked allreduce
f26f225972 : Support multiple inputs to broadcast/allreduce ops
c2e270d8bc : Add Gloo ops
436193bf37 : fix minor typo in math_{cpu.cc,gpu.cu}
1fac027d0e : Quantized Training API
a1d63da6af : Adding UNK to vocab | Changing default params
85fad20a5a : Gracefully handle empty input to Dropout
1bf61b8adc : Add googletest submodule
84ba7c1acb : Skip test if libfb not present
704ee3ca68 : Use cudart symbols from the main program.
fc7939c25b : add model_helper.ExtractPredictorNet()
a745981c94 : ReduceBack{Sum|Mean}Op CPU & GPU implementation
9004652c7b : updated the documentation to remove the unnecessary copy grads when using multiprocessing
ee2bc06926 : Add Shape Inference for Reshape Operator
001ac5d751 : Fix to use appropriate corpus and vocab in eval
aca6ce984c : change lookup table sort
ed8773f7bd : add legacy_serialized.pt to gitignore
0f7b7b27b1 : Fix build for CMake 2.8.12
a5a5d00b87 : Fixed a bug: 'ModelTrainerLog instance has no attribute 'external_loggers''
48f48b6ff2 : fix more flaky VolumetricMaxPooling tests
615b27eadf : fix corner case in SetItem of Variable
86ede33035 : CMake improvements for Gloo
f422e6307d : Add poly learning rate policy
bd09055207 : Synchronize all NCCL ops with shared per-device streams
4bd220d91a : Travis contbuild scripts and cmake fix.
170d790b66 : fix doc of conv3d in conv.py (#989)
e216f557fd : Fixes issue returning strings from a Dataloader with pin_memory=True (#908)
e5858485ca : small change to concat layer to make tensor board vis nicer
6729d81418 : Specify which GPUs to use in resnet50 example
997312c233 : Add WeightedRandomSampler (#980)
d602b3a834 : Allow submodules and parameters to shadow attrs on assignment
f531d98341 : Fix memory leak in torch.from_numpy
6bdd5ecaf5 : Remove some unnecessary AutoGPU calls
bfbde9d6eb : Fix Embedding bug when max_norm was used
b9c816a796 : Fix run_test.sh --coverage option. (#983)
2f5c215d34 : Update setup.py (#981)
01650ac9de : add torch.nn.init docs to the source folder (#979)
43b6fcba7d : Improve error message from LogFileDB on missing file
a4a136038e : more descriptive error message
3f682ca699 : Fix to data parallel model blob_to_device mapping
b61aaa90b6 : Stop multi_reader if we run out of data before max_examples
31b72b9004 : move reshape out of utility_ops
0308910c58 : Enable use of Print for LayerModelHelper
ce536aa355 : fix example in docs for NLLLoss
a109cbdfb6 : fix bug in data_parallel_model stripParams()
fc0af33a18 : key only block-wide bitonic sort
0e7e9888f7 : Explicitly do MPI prefix for ops before it is too late
c7c4778af6 : modify docs of `broadcast` to fix issue #940 (#970)
adb3f0ec22 : add exception for empty shape param
d873077349 : Create context from existing MPI communicator
0c38827318 : Split out rendezvous specifics from context
fb766c00b3 : Align async/wait pattern to use wait() naming
e600c9830a : Fix up NCCLElement construction in CudaBroadcastOneToAll
f93039b9c4 : check data is allocated
73a65cd29f : simple ordering fix to avoid gcc warning
965a7daf9b : Implement MILSTM in caffe2
bde53f61af : Caffe2: add scuba logging to benchmark
57ecd20197 : seq2seq open source implementation
b785ed0ac0 : Fix Embedding and CosineEmbeddingLoss on non-float CUDA (#965)
b2d077d81d : Update _tensor_docs.py (#966)
c5621ded31 : Allow use of ReversePackedSegs operator in CUDA context
89c08334bb : data_parallel_model support for sparse gradients and CPU ops
4814b0bc09 : Recompose NCCLElement of src/dst CudaDevicePointers
b1c2714ad5 : Add momentum and centered options to RMSProp (#810)
41a3ec2455 : QTensor serialization/deserialization
5bb5572719 : check correct signal counter
84e742ded7 : Migrate realtime training workflows to use new metrics.
eeb7279020 : compile execution step
95501a0165 : clean old unit test, add sum processor and sqrt pooling
86e60848c5 : use gflags namespace instead of google
842ee41999 : Fix binary file reading bug for MSC compiler
581e57c244 : add AccumulateHistogramOp
e88379ef3a : Implement deep function recursion as a loop with a stack instead
8a84d03253 : move qtensor to open source
a462edd0f6 : Docs(RNN|GRU|LSTM): Note dropout applies to all layers *except* the last layer (#961)
c6a9d7f188 : User input (Conv out, etc.)
046b467c9a : added prefix to load op
c2425fc9a1 : Fix build warning for C file
4f0e7730a9 : Distributed Multi-GPU resnet50
8de1db9eb6 : Implement recurrent attention in C2
f0d78753ae : Make ModelExporter.load_from_db() load to specific workspace
3d95e13b33 : Check event_count before merging blocks
228e1a8696 : Add CUDA caching allocator accessor
be0e8c0009 : Use sequential slot numbers from context
3fa8a3ff46 : add implementation of inclusive scan via upsweep-downsweep
e75221e316 : Add eval net to two tower workflow
8de2027d9b : Add gradient operator for SumElements
83437853ad : refactor and modulize optimizers
235a95f09a : Fix LengthsToRanges docs
b5e5001426 : update detailed build info
ed693b1c6a : add EnsureDense Op in MTML MLP
b599910f3a : Use new metric interfaces in trainer workflows.
6830d56103 : CodeMod: google::ProgramUsage to gflags::ProgramUsage
1741fd839f : Re-apply windows diff D4657831
d8588d8007 : CUDA version of elementwise power + rename to Pow + gradient
7ba5e7cea1 : fix VolumetricMaxPooling test instability (#952)
9b626a8047 : Fix documentation - replace 'matrix' with 'vector' (#951)
bd0e9a73c7 : Fix some simple build error on MacOS (#949)
695ea6c7a1 : SumElementsOp
8fab453863 : Sqr op and gradient
560572910c : Add task outputs and stop signals to net_printer
9f588aa8a2 : Add Inference for Flatten
7bddd586f7 : Change PrefixStore to take a Store reference
da10450535 : Allow multiple input pointers to broadcast algorithms
039c3cf0ba : Revert D4657831: [caffe2][PR] Changes for Windows build to pass.
2b1cd919ce : Update extending.rst (#933)
7b8c7b11d2 : Changes for Windows build to pass.
8e46a15605 : add docs for set_printoptions to sphinx (#945)
2333ccadfb : MaxOp for CUDA
3e54601bab : New approach to metrics.
f747bbec2e : move the dper 1.0 utils to c2 or fb utils
6336300880 : Fix bug where adding a hook could replace an existing hook.
5073132837 : Implement 'pre' and 'post' hooks at the C++ autograd level
65b66264d4 : Improve broadcast/reduce performance by coalescing tensors
7472631e7f : fix bug in Mean pooling
0f872ed02f : Add THCCachingAllocator_recordStream()
c61a7ca777 : Make counts datatype int. Used as index.
9ef35f4a0b : Add validation checks to load op
761d6799be : code syntax error in document (serialization.rst) (#937)
81d5461973 : cuda check -> enforce
4030fbf535 : Add _aligned_free if defined _MSC_VER in context.h
35f0c0b0fb : Fix gflags build
0d179aa8db : Updated datasets.rst, combined all commits (#931)
5b171ad7c2 : remove misleading guide for BCELoss (#924)
ac9245aeb3 : import numpy before setting dlopen flags (#928)
60736bdf99 : fix corner case in kwargs for DataParallel (#930)
7d58765cee : docs: Fixed example code bug in extending module doc.
76f7d749e4 : bump version
0b7374eb44 : add THCS to build_all flags
6fff764155 : replace old select_compute_arch.cmake with new
8ced72ccb8 : link THPP to THCS when CUDA available
b1ae7f90d5 : Added functionality for data parallel table (#843)
aef75ca5dd : Strip prefix of strip_prefix in blob names before save and load.
59ebbfb2bd : cpu memory allocation reporter
fea50a51ee : reintroduce USE_AVX* for files which dont have -mavx* set
51e589ed73 : fix critical bug in adds SSE implementation
2e87643761 : remove fastmath for everything except simd/convolve
8caa7cec8d : CUDA version of Log
ba9a85f271 : fix bug introduced in #952
a22fd7194e : More assertions for state change in TCP transport
0714d7a3ca : set AVX/AVX2 flags only for specific files
fb7bafdd0f : Update README.md
34ce58c909 : Parallelize backwards
c238ee3681 : Fix issues with lazy grad initialization (#912)
e1d7eaf7d8 : Latency optimization tips
f5338a1fb8 : compile AVX and AVX2 intrinsic code in separate files. Cleanup use of USE_AVX and USE_AVX2 macros in favor of __AVX__ and __AVX2__
d96ad41191 : cleanup TH CMakeLists and THGeneral.h of unused flags
f17cfe4293 : sparse tensor operations (#735)
aec182ae72 : Support half precision in baddbmm
8dff5a87f3 : Change the type of content in BlobProto from string to bytes
c93c884ee2 : Add negative dimension to transpose and tests (#792)
c42a2d4d24 : Fix dimension check for cat (#959)
490c15fae9 : Fix slicing with step (#905)
cdce8f0e52 : update gflags
8c4310ac16 : minor fix for _add_net_to_dict
7e3b572ca7 : Document algorithm semantics
6c9105447c : support fill bool tensors in GivenTensorFill
b6fbc708f5 : Verify InferShapesAndTypes() in operator unittests
5fbcd88102 : Rename public member fields on gloo::Context
f2d72ba10f : Revert "make handles to be thread-local"
2108b42b92 : Fix bug in cat when dimension is not specified.
bae8df62d3 : Add missing THCudaCheck around cudaMemcpy
a2b2880cc2 : Remove underscores from public fields in NCCLContext
70fc15c05c : More documentation
e9c0671132 : Convnet benchmark cudnn_ws
b7cc2a501f : genericize PrefixSum --> prefixScan
0720ba53b3 : make handles to be thread-local
ff5fa11129 : make mkl link to threaded version with GCC (#958)
837023bb4f : Change benchmarks to support multiple input buffers
e88d241757 : Cuda algorithms should return asynchronously if device streams are passed in
ecb37e4439 : Update tests to cover potential reordering problems
0c88194807 : CUDA documentation
50e73a8313 : Support synchronous mode in ibverbs transport
fc7f026980 : Refactor ibverbs transport to prepare for sync mode
9f18f83375 : Downcase setMutex
9c114e6f1c : Fix compile error
0e78a59610 : add mutex getter/setter to synchronize CUDA and NCCL ops
5e7f5db332 : add subset samplers (#888)
b5f7592140 : boolean mode in module.train
f366e5fc81 : Support int16 numpy conversions
48f087f6ce : C99 cleanup broke MSVC (#952)
73db5f902e : Fbsync cudnn rnn fix
ec56737190 : fix shape inference for spatial softmax with loss
642b5a863f : Adding changes that enable MSVC build
7fef264bfa : Bumping version to 1.3.3
8996811936 : Only enable peer access for ring neighbors.
c219a183d0 : Fix copy/paste typo in error message
8e1d6f9b60 : Fix crash in Reduce when non-root ranks have invalid recvbuff
6cb63df704 : Default LocalSession to current workspace.
7ad948ffa9 : fix tests to not sys.exit(), also fix fatal error on THC initialization
3277d83648 : Add Nesterov Momentum (#887)
2cddbc719c : Euthanize a process with timeout
2f68632a32 : Add SparseNN workflow for feed.
1487278fdf : Allow backprop through cuDNN RNN in eval mode
977630bc15 : Handle duplicate backward roots in autograd
12efd53dba : ConstantPad2d and F.pad (#856)
37e05485d9 : added initialization schemes in torch.nn.init (#833)
da725830c2 : Add support for variable length sequences in RNNs (#873)
fc6fcf23f7 : Lock the cudaFree mutex. (#880)
aa3156c235 : Remove use of logging module and np.random.randint() due to deadlocks with forks
b190f1b5bc : Add another pinned memory test.
02937903cc : add inference for gradient ops + a couple of missing shape inference functions + fix to scalars
f84e5360cc : LSTM benchmark (Caffe2 RNN based)
8a0ebed4c9 : Caffe2: Tile operator
bdd542d087 : backup functions for non-cuda cases
69fa85be26 : Fix some typos
fbf47a8825 : Cudnn v6
dfca8dfdc5 : ensure valid index in multinomial
b46d5e0b04 : Fix NN bindings
76de151ddd : Fix bug where pinned memory event could be recorded on incorrect device
2676cc46c2 : fix indexing bug in sampleMultinomialOnce
1c92e85dae : Added editDistance helper to caffe2 operators
1bf7bc9768 : refactor sampleMultinomialOnce to use <real, accreal>, assertion for sum overflow
3c41c9fe46 : Add AutoGPU RAII that doesn't depend on Python API (#875)
000db87bc7 : Half-floats support for the rest of segment ops
6ff7750364 : add TH_TENSOR_APPLY variants for optimized redux (+refactor)
4d25c3d048 : address comments and add tests
267b7ade50 : Speed up reductions on non-contiguous dimensions
e30e94cb71 : Made CNMEM optional and added a few cmake components
b732f347ba : Fix minor bug related to pinned memory allocator
80429ad9f7 : THVector_(add) -> THVector_(adds)
5ca6516ecb : THVector_(add),(mul),(div) -> (adds),(muls),(divs)
ffa2f77a82 : Remove vectorization TODOs where not needed
a3726759c6 : Add a way do describe layers in a more AdHoc manner.
851cb7059d : changed StringfyProto to StringifyProto
d85ca8c6df : Do not initialize BN params if init_params is false.
7b0126381c : Share queue + reduce logging
67f94557ff : Expose torch.HalfTensor
61bd5a0643 : [Lint] Address F811
748d011c8b : [Lint] Address F812
5d5cfe2e57 : [Lint] Address E731
7cbe255296 : [Lint] Use flake8 instead of pep8
e2acf0f95b : Vectorize rmsprop_update using Eigen
83e8b3f6c3 : Add getter for cuda device allocator.
502ebed796 : Fix one more reference cycle and ensure correct flag propagation (#868)
68ff58d771 : Expose a mutex that is held around cudaFree() calls.
969c1602e6 : Add Tensor::copy() to THPP
07623e24c9 : Implement shape inference function for Im2Colop
1f537fe7d6 : Vectorize ElementWiseDivide using Eigen
88b7f8ffd5 : Fix memory pool implementation
4b52cbe636 : turn off deprecation warning if glog needs so
449f8997ab : close blobs queues when stopping + test
2d4d3b18dd : Use NCCL operations in AllreduceChunked
97f95bb247 : mpi const cast
d0e1f5f344 : fix summarize op
5e1d6a3691 : Update functional.py (#862)
533cfc0381 : Minor fix of docs of ModuleList and ParameterList (#861)
2b23712dc3 : Improve autograd memory usage (#859)
88275da5e8 : CUDA documentation tweaks (#858)
bd7a5ad6f0 : Make Optimizer.load_state_dict use __setstate__
1f6f82dbcf : Fall back to indexing compatible with numpy
1f8939937a : Allow using expand to broadcast tensors
b3d41a5f96 : Add docs for ModuleList and ParameterList
fec2d493a9 : Reshape grad_output in basic ops
86ee75f63f : Fix for Long and Byte tensor indexing of Variables
31941918cf : Prevent creation of reference cycles with leaf Variables that don't require grad
19a65d2bea : Expose stateless methods for torch.cuda.HalfTensor
819d4b2b83 : Add finite differences gradcheck (#851)
b87c113cf4 : CUDA documentation enhancement and docs versioning (#848)
b25182971f : readme change for getting clarity on binaries
1ee2c47e37 : Correcting the description of LSTM attributes (#854)
21c40c1a3c : Provide ability to specify more types for ConstantFillOp
2dc563f1f1 : Fix indexing when passing only an Ellipsis
04d02632e9 : instance norm test fix
15ba71a275 : Rebase fixes
e5b3fc49d6 : Implementation of the 3rd set of tensor functions
ae1766951d : Link TH and THPP to THD (#57)
02d08dafd9 : Add support for IPv6 in Data Channel TCP (#53)
13a5090695 : Added a size change in MaxPool1d module and improved tests (#771) (#832)
1d26baa0fc : use CMAKE_SYSTEM_NAME instead of LINUX
8e32e4c04c : make wrap_generic_function importable
cf991310c3 : c++ virtual function fix
8ab13eea6f : delete redundant comment lines.
a8e7d922a6 : increase QPS to 470K (from 250K or so)
938706099e : adding environment flags to disable SIMD codepaths
b257fd8e83 : Other places that may need NameScope
aa875869dc : Added more summary information for debugging python versions
9eeeb8407f : use CUDA version of AccuracyOp with top_k=1
182c168285 : Add group collector limit and add option for enable sum loss
cd4ea42048 : Allowing creation of random odd length arrays in RandGaussian
0a060dae50 : better killing after timeout, cleanup
3330287dc7 : Update dataloader.py (#837)
8a85d6bd34 : support vectors with different dims for DotProductOp.
38c8520adf : adding unsqueeze to docs
4a53ab3cb6 : LSTMWithAttention implementation in Caffe2
492e1746af : Fix THFree in THTensorApply
91a8109cfd : Use C99 for openmp cleanup
161490d34a : Add memcpy copy
9c302852eb : comments fix
8654fcfd60 : THVectorDefault style fix
b3d527d9a0 : Tab style fix
4d495218c9 : THTensorApply3 contiguous optimizations
13a041284c : THTensorApply2 copy optimization
c60c1a003d : TH_TENSOR_APPLY2 contiguous optimization
97add1a5ea : comment fix
ca02930e47 : Fill bug fix
20d5e95077 : THTensorApply3 compress counter
eb4a7dc11d : THTensorApply change dims to sizes
f722498b72 : THTensorApply2 counter compress
aadfb6fe83 : THTensorApply reduce memory overhead
6c273594c9 : THTensorApply Counter compress
e475c82fa1 : Add isTransposed check and enable multithreading of fill functions
0c2e6665df : Add AVX copy
6295e6e94b : Rebase master
670a4aa708 : Fix AVX2 bugs
1bdc2e64ed : Add fma cadd
c587be1e50 : Add THVector Fill
bd481596f5 : optimize THVector add mul div
a504d56b43 : Fix THVector cmul AVX bug
91c4dfccea : Use THVector cadd AVX
27f618c44d : Add THVector Fill AVX
a14482a1df : Add THVector cadd AVX
aa50c5734b : Add THVector AVX cmul
293001a4fe : Add THVector SSE div cdiv
638cfdf150 : Add SSE add
5f80a14525 : Separate SSE and AVX
1342fd3975 : Remove THTensorMathSIMD THTensorMathDispatch
8d4af38489 : Add THVector div cdiv
575a064e66 : Remove THVector diff
3ab21a3c4f : Merge THVector mul AVX
2f592e6c7d : Remove THVector scale
5661ffb766 : Merge THVector mul
9b74503daa : Merge THVector cmul
24848f1cd8 : Change THVector mul to cmul
a31a07ede9 : Merge THVector add
c8c4c9b23d : Change THVector add to cadd and fix NEON
e1ed9303f0 : Add multi-thread add
a43aab13c2 : Fix THTensorMath.c style
c698b4a45e : Add Dispaches for div and mul
c6a0ffab50 : Add AVX single float and double float add
8ba7cc30d1 : Add THTensorMathSIMD.c
61bf08ca24 : Fix compilation for simd tensor add
6ada3c0c16 : Fast floating point add kernel in intrinsics (11x speedup over default for 10k elements)
60061fbe79 : Fixed up CPU dispatch and tested. Can begin implementing kernels
46e7042add : SIMD helper header, modified add in THTensorMath to check dispatch
d0c182773b : First commit for dynamic CPU dispatch: general framework in place (need to create dispatch tables and stubs for all functions and make impls have hidden linkage)
b6f60585b5 : fix AVX2 detection bugs
838842d4b2 : fix documentation error. [issue #790](https://github.com/pytorch/pytorch/issues/790) (#831)
17d27d4882 : Enable Reshape to handle scalars
95262032d8 : Char RNN bug fix for batching
312821d36c : Allow in-place instance norm.
e71cf20192 : improved serialization (no tar copy) (#713)
c7ed091633 : Added model downloader
b0148a7c7d : Use ws_nbytes_limit (called cudnn_ws in args).
aed3aabc7f : model and preprocessor can handle empty dense inputs
45e1905722 : add support of fp16 to SparseLengthsSum and SparseLengthsMean
b2cf0fad15 : Convert SparseLookup layer's embedding to fp16 blobs for predictor
64419a928d : Implement EnsureDenseOp and EnsureDenseGradientOp.
47b65b6d8d : Add a create your own dataset tutorial
59f0454621 : Gather perf counters for distributed jobs
ba1d592b5f : New 40% faster net-type for MLP on GPUs
6ff05fd49d : Fix issues pickling jobs
8fa156d082 : Improve "reporter net" design
7a65736e46 : Fix some python version issues with cmake
26be1977bf : fix CrossEntropyOp bug for batch input
183e158642 : Remove Model API (unused)
04eccb8ebe : Performance counters
adb4cb2b5b : contiguous view backward (#816)
478d7446ef : CMake fixes
7f4d5e9900 : Add feed label parser operator.
ea9f4da368 : fix typo in TextFileReader
2d8784ce55 : Add python-protobuf install instructions
df68230351 : README and docs skeleton
6073f9b46c : update table in README.md
240372a991 : Fixed topk documentation for largest=True
5b10411c8c : Fixed some mistakes in examples
4c474a9939 : Improve prodall CUDA test
7ea6ae57c8 : Support numpy arrays in default_collate
42633f8986 : Fix misspelling and add support for weights in NLLLoss2d
84248690a9 : Add support for indexing with None and slices with positive steps
53409ca0fb : Fix a warning in THPP
c2c1710047 : Add clip_grad_norm
876202503f : Support multiple inputs in data parallel
946a7d9bc3 : Make input contiguous only once in backward of cuDNN RNN
608bcd3b15 : Return correct number of gradients from cuDNN RNN
632b02a477 : Add checks for reward type and size in StochasticFunction
0db9c63300 : Use library_dirs in setup.py
873ed4e6b6 : Add better error message for conversion of CUDA tensors to numpy
01bd43037d : add docs to torch/cuda/random
68c9e3f232 : Fixed typo in GRUCell example
a25c8555eb : Fixed paper references
d6ca3820aa : Optionally specify stream for pointers in CUDA algorithms
ee43cd7adc : Do SpatialClassNLLCriterion sizeAverage in a separate kernel
4ca26fbc1b : Remove averaging from prodall
c165226325 : Print a readable error message when arguments are on different GPUs
ea6273e048 : Fix search gcc5 build
0722775ca3 : AllreduceRingChunked/CudaAllReduceTest should use the chunked algorithm
23602488cc : Fix ProtoBuf.cmake to use PROTOBUF_LIBRARY as well
49295ebe54 : Add sequential to documentation
5bc3d2ef03 : Add ReduceFront GPU Op's
455038e470 : Use a more stable formula for spatial LogSoftMax
ca7f02ea0c : Add shape checks for SpatialClassNLLCriterion
af027d5025 : add assert for labels for spatial case
b8f6ff1a5d : Make Shape GPU supported.
04aba1caec : Fix cuDNN dropout desc for multi-gpu (#772)
ba7fad53b5 : Support for sample softmax
420488349f : Implement CUDA-aware allreduce chunked
1a5cae7340 : Add busy-poll option in TCP transport
c26b9c0a5e : Update rnn.py
aaf41c61a6 : Fix Engine::compute_dependencies
945e75bd3a : Remove openmp parallel for in caffe2
dd844f741b : Fix previous_functions when it contains Variables
4dd19988c3 : Add benchmark option to display nanoseconds
7117a9012e : Fix flaky non-contig test
1bdc28161a : Add torch.__version__
5e150caf38 : Fix a bug in Engine::compute_dependencies
c0c62d099a : Make detach() actually remove the creator
b9ece39685 : Make torch.Size methods return torch.Size, not tuple
8949abe10b : more clear about supported output dimension
d8b7166251 : Move build_ftrl to open source directory
15ef008877 : Using accreal instead of real in the API
b14d6318f8 : Convert real to accreal in libTHCUNN
d0621a2449 : NextScopedBlob with well-defined behavior and respect namescope
b436788b16 : LSTMUnit: pass through H values
93002720eb : Extract CudaDevicePointer for reuse across CUDA-aware algorithms
9a33786dc0 : Split out large gradient ops at a file level
0c03c8fca5 : Add name_overrides argument to SaveOp
5429031917 : Adding SoftmaxWithLoss operator to Shape Inference
6b0545d764 : Implemented logging of inputs per second
7c44506441 : allow DataParallel to have tuple inputs on a single GPU
70fb22f5be : update sparse_to_dense op ENFORCEs
937ba581d7 : Improve nn.legacy compatibility with Torch7 (#738)
2ae54f1194 : setup.cfg -> tox.ini (#761)
a3e24b2b5f : Fix LabelCrossEntropyOp to ENFORCE >= 0 as well.
d4b1d347e9 : minor: make cmake cuda ready
7ee9984556 : Added local build and apple fix for generating .so files
7d9a0a41fd : Allow forcing single-threaded execution at runtime.
40534de705 : Gradient for Copy operator
797720225d : refactor LoadImageOp and dbreader
d9d6f1e905 : Fix misplaced semi-colon
cb91078e01 : Support synchronous mode for TCP transport
5db768a60f : Improve InstanceNorm NCHW performance (~30% on iOS style transfer)
93841acc1a : added docs for learning_rate_op and tweaked docs formatting
3098bef94e : Getting things in sync with the internal repo
c7c4b00a50 : windows build: getting there
81d932b161 : Add LeakyReluOp to caffe
50a6897e80 : Shape inference for ImageInput, NHWC2NCHW and StopGradient
63901e9aca : allow recurrent network gradient op to receive gradient on any combination of network output blobs
cb3c41b9a9 : PiecewiseLinearTransformOp transform binary predictions specially
718786add7 : UniqueUniformFillOp
93795406c5 : Adapt NLU proj code for Caffe2 RecurrentNetworkOp changes
fc0be229b6 : add mutex locks for pinnedcpuallocator to avoid nccl-deadlocks
a8d70f3552 : Try to improve serialization speed for SparseNN.
fb7c9108d9 : get parameter blobs of a model
31ca9d57b6 : Remove args in Grad
6fabf8ed1a : Documenation generation to wiki
571539aa5d : implement CNN optical flow calculator
a217fefee1 : Update rnn.py
34b7fed802 : Fix gcc 4.4.7 build.
5221745c21 : add test for bias=False for 3d convolution
797544c47a : implementation of bias=False for VolConv.cu
0426f2f3ec : implementation of bias=False for VolConv.c
336eeee895 : kernel_size as the default stride for avg_pool1d (#744)
593f867e3e : Fixed a simple compiling error in mac OS #745. (#746)
385913be1c : Fix class torch.nn.ConvTransposeNd documentation (#739)
6aaa14f5fe : Fix LSTMCell Doc Typo (#743)
ee52f89772 : Implement CUDA BroadcastOneToAll algorithm
e454870396 : Free set of stored streams and handle NULL streams.
2de4b8840d : Added MatMul operator inference
8d72c6016a : Move tests of build_sgd, build_adagrad, and build_adam to python directory
74f1796a34 : Split out `elementwise_mul_op.cc`
a62866cc94 : Compilation fix in FC op shape inference
9871ed4258 : Migrate build_adam to python directory
5fb5fd9de9 : NetBuilder: Allow to call hasattr(x, ops) out of context
2822013437 : Fix flaky tests
72c1982734 : Add some more asserts to cuDNN RNN
0de2ea305a : Support retain_variables in cuDNN RNN
d899385a3d : Raise error when too small input is given to conv
c6d6cbe8a6 : Check that all tensors are on the same GPU in cuDNN bindings
85e82e85d8 : Fix bug in zero_grad, when some parameters didn't require grad
a1534cc37d : Fix auto-gpu in cat
8c8dc791ef : Load half and double THCUNN backends
63edca44f2 : Add tests for non-contiguous inputs and gradients
b9f4977be9 : Fix git URL in README
524bc07973 : Change the schema of IndexLoad & IndexFreeze so that state change is captured by the framework
7aef4b2662 : Migrate build_adagrad to python directory
7bdd8737cb : Fix to dagnet execution & dependency pruning
e52676b272 : Delete SerializeToString() call in class Model(), workspace.py
e865c940a5 : initial version of windows build
8f1f7e0dc2 : Mini-optimization to AccuracyOp
6aa8c932fc : Benchmark for CUDA-aware algorithms
8821f4aba6 : Fix race in benchmark tool
5e06634f7e : Implement initial CUDA-aware allreduce
b82c4b3d38 : Split benchmark code into multiple files
b7783a1976 : Make ContextManager thread-safe
9cf830fca5 : Don't install the full CUDA toolkit
8d90ab2d9b : compile with cudart (#737)
e75a8d24bf : Fix compiler complaint
bd5303010d : Refactor autograd package to separate Python dependencies. (#662)
16d2c3d7b3 : make networks converted with loadcaffe loadable
fc2b6e8ed6 : Migrate build_sgd to python directory
2b4ec53fcb : translator fix to solve Aaron's issue
60be25f4cd : Added shape inference to padding operator for tensors
54fc123610 : Halfway into windows port
407a92dc26 : std::min() requires same type (#732)
0db5817290 : Break the DagNet* code into net_dag.cc
f09bd84137 : GivenTensorFill breakup
7134f0183e : Elementwise ops trim
0a893abc7b : fix serialization bug for large files
34fa5e0dc7 : Update docstrings for testing object type
712686ce91 : Add cat, contiguous, squeeze, and unsqueeze to THPP
7ca1c0e405 : Add two data_loaders and refactor code
518864a7e0 : Fix bug in legacy NN updateGradParameters (#714)
d918d77747 : caffe2/caffe2/contrib/torch/torch_op.h: avoid shadowing warnings
3d0b717abc : caffe2/caffe2/operators/text_file_reader_utils_test.cc: avoid shadowing warnings
14a9ce432d : caffe2/caffe2/binaries/core_overhead_benchmark.cc: avoid shadowing warnings
c0dd3b9744 : caffe2/caffe2/mpi/mpi_test.cc: avoid shadowing warnings
b0ff960301 : caffe2/caffe2/mpi/mpi_gpu_test.cc: avoid shadowing warnings
7721cba906 : caffe2/caffe2/mpi/mpi_common.cc: avoid shadowing warnings
2727317384 : char-rnn: add comments
72fd605b01 : Fix std::accumulate
98f66fd282 : Char-rnn : fix batching
5c007be804 : add soft label functionality to softmax with loss op
e676f4411b : GPU support for RecurrentOp + Char RNN example
335b73221c : Unify train_local and train_with_distributed_readers
750fb5cc73 : Fixes to support short and char tensors for bitwise operations
0f4749907a : Adding bitwise operations
bd2dc63ef6 : Adding bitand, bitor and bitxor
3e08beb75e : implement Float16EncodeOp and Float16DecodeOp
039ac56a68 : Better names for nets, steps and tasks
19a8795450 : Changes to shift operations
d9dccfdd71 : Fix for non-contiguous grad_output in cuDNN conv
b993a2abe4 : Dump data for DocNN visualization
ed0024a82c : SparseToDenseOp and GatherDense
7547a06c4f : Avoiding duplicated unsigned as it causes error on gcc.
8929b75795 : Added shift operations.
4d37ef878c : Remove view on data and target tensors of dim 1 in TensorDataset (#609)
efd8998690 : Import gloo
f2b3f0ab5c : remove decode()
8ca1b3baea : import_array python3 compatibility
6c77fa9121 : Changes in RNNBase and Embedding for compatibility with DataParallel (#660)
53817feb3a : Optimize computation for top-K accuracy using heaps
306fde233a : Accept optional blob map for InferShapesAndTypes
e74184f679 : Make THCCachingHostAllocator less aggressive.
e34e1b1b7b : added updated loss function, changed cv interp filters to AREA
3884d36176 : Add unsqueeze to THC
e6a18d2e9a : Added TransposeOp Inference
e7c6886a00 : Add unsqueeze1d to TH
ed8e92f63d : Expose rawSet and rawResize as resizeNd and setStorageNd
fb97df5d65 : Expose rawSet and rawResize as resizeNd and setStorageNd
e9b05c71b4 : Use THCTensor rather than THCudaTensor in THCUNN.h definition of GatedLinearUnit.
5eab428294 : Qualify nullptr_t with std::.
849fc7ba68 : check that parameter is int
41007ce07b : More comprehensive benchmark tool
7137c565d7 : Add all-to-one barrier
1709664a43 : Add debug mode to transport buffer
6a03641cde : Add num_iters to RunNet()
274ac2b590 : Add cmake guard for python, build for tegra X1
5303634ebf : Use MDB_NOLOCK.
535e0e486b : Add model graph to dper_example
1f1aafaebe : Implement shape inference function for AccumulateOp
c115646d71 : Use fbcollective
7926324385 : Corrected parameter typo in Adam docstring (#697)
1527b37c26 : Fixed typo and rendering of some equations (#693)
de4659659b : The RNNCell's example cannot run correctly
b2532d2794 : More logging if unable to find address to bind to
50213705d4 : Allow specifying max buffer size. Smaller initial size.
3c90356499 : Add check for num_shards when using distributed training
a386fe8b6a : LogOP implementation
a96a8c8336 : Static build support + Query CUDA driver, runtime versions (#695)
078a8d10de : Move png image to net_drawer and Flow example
17151ca14f : Debug/Analysis tools for Jobs/ExecutionSteps
280718b40c : Allow non-batched initial recurrent states for RecurrentNetworkOp
691aa19b88 : Add code for 'view' to THC
947e5feb4d : Trainer support for mobile ranking
6b07dc9e22 : Add code for 'view' to TH
75e62924e3 : schema.Struct.__add__
3049bc1fed : Fix data parallel model code doc
3bb8755067 : Use multi_reader directly
c4afd618c4 : Add USDT for operator execution
8aa259b52b : review comments from gchanan
06591ad414 : Fixed task 15844370: [Caffe2/Bootcamp] Make top-1 accuracy faster
ac9312e9f8 : Bugfix/rowconv (#1126)
33c0e5619b : Add Task.REPORT_NET attribute
91a17b702b : half<->float conversion cleanup (#901)
c54597e0b2 : std::move fixes
e63003d5a0 : Fix race in FileStoreHandler
1c7886701e : lr_scale to loss_scale
b2472eab3a : Improve dagnet chain computation by pruning redundant dependencies
dcefc74a0c : Shape and Type Inference Part1
a9785bba44 : cuda implementation of Gated Linear Unit, fixed issues with genericization
5837b21691 : support subtask in mtml
8e1c513fb5 : Make build_host_protoc more robust to weird system settings
2ce3cfefe1 : Char-RNN Tutorial
d7e85bf38e : Fix ops.stop_if() from inside processors
000c53a7b1 : AtomicCounter to return previous value on Reset.
d93b9eeae2 : Fix NetBuilder's task_init
d8dff5853e : Add numSample field for preComputing
115b5e0c5c : Configurable hostname to bind to for tcp transport
d6d19d6dca : Assert on low side as well
f401bf8928 : dynamic creation of streams and cublas_handles, support multiple streams per thread per gpu
833b8cbc7a : Remove unused code from module
4dd297d261 : Add nnpack
3df81ff18c : Add ibverbs transport for fbcollective
4ed85d1d00 : modified behavior of input_op to convert to desired channel depth if source is different
fc354a0d6e : Revert "cuda implementation of Gated Linear Unit, fixed issues with genericization"
b8a34f3033 : Small fixups: 1) Add return after THError for completeness. 2) Fix brace formatting
10bb6bb9b8 : Fix function names in error messages
3c9ef69c37 : Fix THCTensor::isSparse
dee987d6ee : use pseudo-fp16
138f254ec1 : Support sparse tensors in THPP (#667)
c7c8aaa7f0 : Add ModuleList and ParameterList to nn
77fd7c2b6f : Make translator work as command line tool
d0db624e02 : Add W503 to PEP8 ignore list (#646)
e3e7b76310 : Rename all normal and log_normal args to std
dad02bceb9 : Remove duplicated line in cwrap
b195285879 : Improve CUDA detection in THPP
8f3da5b51d : set_index -> _set_index
825e919eb8 : Add torch.unbind
acb0ce8885 : Add LongTensor indexing support
72089c9c36 : Update THHalf.c
cf2f158fec : Remove erroneous proprietary license header
2397b6a6f2 : Add CUDA support for Safe{Enqueue,Dequeue}BlobsOps
41ddc2a786 : VolumetricFractionalMaxPooling like Spatial...
e4886f6589 : VolumetricFractionalMaxPooling like spatial
0a3a3de574 : Utility op to join tensor matrices into a row strings
6470b5bd21 : Add test for Embedding with sparse=True (#663)
44196955e2 : ByteTensor should be unsigned (#664)
79c04d32dc : add an option to use a resnet network instead of alexnet
8ef37ff8fd : Add fbcollective
b7fa6b2a8b : remove recurrent_inputs in favor of recurrent_input_ids
f08ec1394d : Fix bug with inplace TH(CU)NN
f8fb25e0a2 : Add generic bindings to THNN and THCUNN (#645)
519a23e767 : Use chrono library instead of sys/time.h to get the time from epoch
6a0c66752f : Fix documentation and argument name for Tensor.normal_(mean, stddev) (#652)
562a4c2dbf : fixed input op for grayscale images
a1bd4efb08 : readme: add guidance on disabling CUDA (#655)
d019ec793c : improve flaky test
debd256177 : Fix for gradient propagation for initial recurrent state for RecurrentNetwork
b43ce05268 : Refactor parts of utils.h (#648)
f096fb6859 : adding cudnn V6 support (#515)
a3e11d606b : Fix linter errors
79232c24e2 : Fixes after rebase
15d9d499ab : Remove ZMQ dependency from compilation files
962084c8e8 : Add Data Channel receive from any source (#52)
7518b1eefb : Introduce Scalar for easier send/receive types through DataChannel
8215d7a4ba : Implement TH_API functions from the set 2 (#49)
5aaa220d84 : Thd functions v3 (#46)
12c16ab9bc : Remaining storage functions implemented
76520512e7 : DataChannel tests rewrite (#42); DataChannel `isend` and `irecv` implementation (#44)
66de965882 : Replace ZeroMQ (#41)
10d32fb0b7 : Fix DataChannel tests failure (#43)
e72c9b6e4a : Storage constructors implemented (#40)
ac1f68127a : Add barrier, scatter, gather and allGather implementations + groups (#34)
60d1852c7b : Major improvements to master-worker mode
d53eb521fc : Add missing headers.
9808932f10 : Refactor RPC and change TensorType to Type
ea876eb6d5 : Add initial bindings for master-worker mode
0a45864866 : Add THDStorage and improve master-worker mode implementation
2560b39796 : Merge TensorTypeTraits.hpp with TensorTraits.hpp
21afa4c88b : Worker handling for constructors + destructor
9fc3c5e4d2 : THDTensor constructors implemented + some minor fixes
3e3501c98d : Integration tests of the THD Python interface (#28)
5e6fcd02b5 : Implement data channel groups (#25)
d46ebcfadf : Fix broadcast and reduce implementations. Due to bad rank mapping, broadcast and reduce were connecting the wrong processes, which resulted in errors or tensors not being sent/received.
41480c8cf2 : Data channel maintenance
236890d902 : Fix transitive library dependencies in CMake
55632d81d2 : Add Python wrappers for process group mode
0b276d622e : Add reduce and allReduce implementations (#15)
c81491b37d : Preserve directory structure when installing headers
42e189425f : Detect ZMQ libs and headers in CMake
3cfa0d7199 : Expose C API for process group mode
7c9e088661 : Reorganize THD directory structure
e78aa4bb84 : Implement CommandChannel with ZMQ.
f8e94d0d8b : Implement DataChannel (MPI and TCP) (#8)
ebe6f40fce : RPC message packing and unpacking implemented
5fb37efb46 : Use #pragma once instead of defines
4f47855873 : Style improvements
52ae6f682f : Add initial version of tensor wrappers
c35f58f97b : Template for THD implementation
659b2f3154 : Add more autograd functions
5ea05cfb96 : Return indices from Variable sort and topk
0700e05e68 : Disallow duplicate field names in Struct
dc9a5b7d2f : Fix memory leak in SpatialMaxUnpooling
1d3834eeb2 : Nodes to support resource requirements and outputs
8553bd3f68 : Ensure we are not using Eigen LGPL code, and build on raspbian.
f7ab5a128a : Delete extra bracket in RNNCellBase.__repr__. (#637)
368cbe615d : Add Ubuntu 16.04 lib paths in CMake
d4c9a3782b : billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
172dca5e8b : Fix bug in cat (non-contiguous first input)
818bf0c408 : Compile with asserts by default
03dcf8a83b : Compile with asserts on by default
604f607fd1 : Add asserts in index* functions
956d946c25 : Default initial hidden states for recurrent layers (#605)
2c6391be0a : remove unused includes in fbcode (skipping #if, new default mode)
970caaa621 : Exclude sphinx_rtd_theme from pep8
00a5980cdf : Improve RNN doc formatting
e24eee04f0 : Link THC to THPP
f1b3af4ee2 : Add more bernoulli options in cwrap
fb2d28f477 : remove circular references in NestedIOFunction
f64bc7d2a7 : update to eigen 3.3.2
3a82b33f84 : Use protobuf's own cmake scripts and add travis for ios
3a704ff725 : Fix legacy load_lua for SpatialConvolution (#608)
0180e638e5 : Remove unnecessary zero_() calls in cuDNN RNN
95c6ae04fb : Fix non-contiguous grad handling in cuDNN RNN
14a5b35805 : Snapshot -> Checkpoint
86fb25cefa : Rely on embedding size in split
13e34b4679 : Fix multiprocessing tests
57373c7c29 : Fix docs
79f5bf84e5 : [pep8] Potentially breaking docstring changes
3ed720079e : [pep8] Fix most remaining lint manually
e7c1e6a8e3 : [pep8] Fix most lint automatically with autopep8
f1d0d73ed7 : Fix flaky Sqrt test
9c411513bf : Patch distutils crash when linking with ccache
ce78bc898b : Fix travis builds and add ccache
887002e932 : Add bindings to CUDA tensors and storages in THPP (#615)
eba5299576 : Port ROIPool to caffe2 trunk, add CPU implementation
22e1bdd6d1 : Use stack workspaces in RecurrentNetwork
31dea5ff23 : Small typo in README (#613)
ec4602a973 : Fix bad code alignment (#612)
ed04a20289 : distributed reader for evaluation
319945df15 : Test for FC operator + fix for docs
a38749d15f : Fix cuda notes
6ee77b4edd : Added cunn support for TemporalRowConvolutionMM (#415)
cc65cc64c8 : Create function ParseProtobufFromLargeString to parse strings more than 64MB
343d65db91 : Rowconv repull (#1120)
4c614f2e67 : Add ios-cmake
6328981fcf : cuda implementation of Gated Linear Unit, fixed issues with genericization
ca1ff1ee9b : Add Flatten layer, bugfix in InnerProduct
da01542399 : fix third_party symlink
9dd1d9428e : Made translator work as command line tool
01e860505b : Cmake for android
59d263280e : fix directory reference in cmake for inclusion as library
a90913105c : add make-contiguous in batchnorm backward (#602)
9368596059 : legacy.nn Attributes: Add '_gradOutput' to SpatialConvolution. (#600)
864f561525 : Make BlobDeserialization throw exceptions instead of returning bool
80ed795ff1 : Minor ffi utils fix
8bff8014b3 : print out inputs in lstm test to catch when it is flaky
b6e330641a : fix Android studio compilation error
a2938e3d11 : add cc 3.0 to nccl (#594)
de8cd46416 : Caffe2 graph to json for visualization in flow
2ad967dbe4 : Fix pep8 in setup.py with "autopep8 -i setup.py"
7415c090ac : Check setup.py for pep8 lint on TravisCI
a1fa995044 : Fixes and improvements (#593)
0f870d4f40 : Add error checking for too-small input in ConvPoolOpBase
9775ffc6ae : Fixes to topological sort, canonical blob naming, sharing final blob
a4ba0cceb2 : Run memonger to optimize net if needed
3c2ecc6b15 : add dockerfiles (#583)
fa1516d319 : Install THCUNN.h and generic/THCUNN.h
5e26f49db4 : Install THNN.h and generic/THNN.h
7694f65120 : Revert "Using accreal instead of real in the API"
b5ebf68df1 : Revert "Convert real to accreal in libTHCUNN"
40ce50e0bd : Speed-up training, fast data-augmentation, sync data_parallel_model changes + other small fixes
aed53dd7cf : Pass cmd flags of GlobalInit down to workers in Flow
630d3a5984 : Fix blob serialization in KVStore ops
65f7c915fd : Fix non-chunked Blob::Serialize method
aa46055274 : Update CI links in README (#579)
2cad802b68 : Revert "cuda implementation of Gated Linear Unit"
2d01f384f1 : fallback to nn batchnorm on backward-evaluate (#589)
ddbf90afa3 : improve dper dh
0e3146e1e8 : Remove recurrent_sizes from RecurrentNetwork
5e5486491d : Replace Gather + RowMul by SparseLengthsWeightedSum
f0996309d9 : Fix Caffe2 gcc 4.8 regex issue
962b16a814 : speedup of softmaxwithlossop
b1472a173a : don't hardcode outputs order to work only for lstm + don't pass blob names for parameters
f09da676d7 : CNNModelHelper.LSTM test
b7a2a41ceb : TensorPrinter helper c++ class
f8d4f980b3 : Add upsampling modules and functions
4f5a6c366e : Make Variables non-comparable
ecfcf39f30 : Improve optimizer serialization
3975a2676e : Fix invalid DECREF in torch.Size constructor
138ee75a3b : Fix for target_link_libraries on CMake 2.8 (#581)
0048f228cb : Add spatial test for LogSoftmax
2748b920ab : make adam have the same lr as lua torch (#576)
a92a2312d4 : Add missing fields to read_lua_file for BatchNorm and Linear layers.
945ce5cdb0 : Fix math block of GRUCell in docs (#572)
e64b404d45 : logging: Join() method for printing vectors
ce13900148 : update From Source instructions
4c77ad6ee4 : step_rate -> lr in adadelta (#569)
0bc4246425 : adding NLLLoss2d to docs
e05607aee1 : Add fall back to implicit GEMM and friends. (#558)
a360ba1734 : Add a hint about CUDNN_STATUS_NOT_SUPPORTED
c661b963b9 : Add more contiguity checks to cuDNN
e374dc1696 : add step rate to adadelta (#568)
200ae58c35 : modified save_op for multi-gpu training
96fc095ccb : Add piecewise linear transformation operator
eb6455d2d9 : Remove enforce to have tensor data_ when sharing tensors
b5424c9646 : Enable top-k accuracy option in caffe_translator
45596d5289 : Add contiguity checks to THCUNN
7acdece3b2 : Comment out NHWC Alexnet test for now
342e7b873d : fixing THPP cmake for cmake < 3.1 (#559)
ceb0c765b9 : Avoid duplicate keys when doing chunking in serialization
e3ea3e8c12 : MKL convolution operator
e0c90de6e6 : Speedup get_op_ids_in_path
c4b640aeb2 : @debug decorator to make it easier to use dropin debugger
ec51f887bf : Create only one instance of SigridTransform in DPerExample.
00410c4496 : Fix broken THNN groups in conv functions
8b9276bbee : Fix view bug in Conv1d
3238786ea1 : Improve optimizer error messages
07ebbcbcb3 : Add Parameter docs
ca555abcf9 : fix comments
63893c3fa2 : Fix auto-gpu semantics for indexing
f8ae34706e : Port L-BFGS from Lua optim
be1224c0a7 : cmake: allow execution of python files without make install
c1ba0fbab3 : Refactor CuDNNReluOp for multi-precision
f8e89fbe11 : fix docs for torch.nn.functional.conv1d (#536)
30d208010c : Fix segfault when a None gradient was given to a hook (#533)
017c7efb43 : Fix typo in LSTMCell documentation
70af31b6c3 : Fix ARM_NEON codepath for non-shared scale case.
06398e9bfb : softmax-with-loss, handle gracefully cases when total weight is 0
0c69fd559a : Fix CUDA sharing across processes (#530)
c991258b93 : fix formula for GRU cells
9f89692dcd : adding documentation for some lapack functions (#528)
e18643f90b : More fixes
c28575a4eb : Fix typo in documentation for autograd
6a7dd236fa : instance norm
c9db9c2317 : Add C++ tensor library (from THD fork) (#526)
a727742644 : Prevent concurrent memory and NCCL ops
3f66f66da9 : DebugMode helper for Caffe2
16a09304b4 : fix documentation of LSTM cell (#525)
58a88d1ac0 : Fix doc search and warnings
b740878697 : Updated h0,c0 shape in documentation for RNN, LSTM, GRU (#519)
7179002bfb : cuda implementation of Gated Linear Unit
43b5be1d78 : added c implementation of GatedLinearUnit
afe822ebd7 : Small tweaks
411059d649 : Generate huffman tree
9c2067cc49 : tweaked CUDNN_BN_MIN_EPSILON comparison to eliminate runtime warning
70459202da : Install tests
dd51336611 : Fix label start index for HuffmanTreeHierarchyOp
173c81c2d2 : import package at the beginning
ee4c77c59f : Docs improvements (#512)
30ec12fdd5 : update readme for source installs to make magma dependency optional
9f0a7935f6 : Replace one more place from _net.external_input to _external_input_map
269ec0566f : fix typo
8ed9a91d77 : Avoid PrefetchOp destructor assertion when not necessary
a0a95c95d4 : Add Random Number Generator Docstrings (#506)
1335b7c1da : Fix unpooling docs (#492)
6d14ef8083 : Update batchnorm docstrings
26a492acf3 : Update docstring for ConvTranspose functions
16d2f3b44c : reshape explicitly in-place
91ebfa3c7c : Unit test for big batch size avg pooling
f2741e8038 : format fix (#490)
8d1a6975d2 : Fix for non-contiguous from_numpy (#489)
c414bf0aaf : Fix handling of unicode in torch._C._add_docstr (#487)
99f4864674 : fixed RMSprop initialization (#485)
be97f491e6 : Unbreak caffe_translator for Conv op
784cbeff5b : added a non-exhaustive list of contributors
9302f860ae : Remove unused file TensorDocstrings.cpp (#481)
ac8a5e7f0d : Remove error message assertion (#480)
e67425647a : Support bias for Scale layer in caffe_translate
798fc16bbf : add beta tag
0f65c9267d : Fix typo
be45231ccb : Improve ffi utils (#479)
279aea683b : update conda install command
8aa8f791fc : add more torch.* and Tensor docs (#476)
bfca2b86c3 : Removed the old group convolution code
6464e69e21 : Docs for torch.Storage (#475)
a93812e4e5 : Fix PowConstant (#471)
225f942044 : Disable IndexCopy test until #473 is fixed (#474)
d951d5b1cd : Fix tensor.cuda(0) when on non-zero device. (#472)
2082ccbf59 : More Tensor docs (#470)
473e795277 : Fix invalidArguments for functions with tuple outputs, but no other (#468)
a09f653f52 : Begin to document TensorBase methods (#466)
90fe6dd528 : remove spurious pprint
57a2ccf777 : PYTORCH_BUILD_VERSION to setup.py
e23ddf06e9 : UnsafeCoalesceOp for `nn.Module.flattenParameters` style coalescing
b5f6fdb814 : Using accreal instead of real in the API
205b9bc05f : fix build_all.sh
14d5d52789 : Add placeholder tensor documentation for methods that exist in torch. (#463)
9c218b419f : kl_div and docs (#429)
a69d819901 : Converting all instances of real to accreal in libTHCUNN
517fb2f410 : Remove free() and retain() from Tensor (#464)
fef2b1526d : Adding macros to convert between real and accreal
3719994c96 : Remove redundant code in THGenerateAllTypes.h
35c2821d71 : Add documentation for methods defined in TensorBase (#462)
e4812b3903 : add binary version to setup.py
204867a884 : in lite mode, return the non-readable string, better than nothing.
4cc11066b2 : Add torch.utils.data docs and improve notes (#460)
db7948d7d5 : Add torchvision reference to docs
d63f58013b : Throw error in caffe_translator on Scale layer with bias
3d40c0562d : improve build_all.sh
7d6742f2f5 : Tool to convert caffe models to c2 + fixes for xray v10
146bcc0e70 : adding binary build copy option to build_all
8d9f6c2583 : Minor fixes to docs
ac32d8b706 : fix docs
15c1dad340 : Minor fixes and torch.cuda docs
6d8baf7c30 : Fix Sphinx warnings
7ced682ff5 : Add notes
89cab4f5e6 : fix readme language and links
a0afb79898 : add pic to readme
d6fa3b3fd5 : Deprecate nn.Container in favor of nn.Module
f91bb96071 : Remove cmin, cmax and cinv
3b6644d195 : Minor README fix
652b468ec2 : Readme improvements
af110d37f2 : remove old docs
38967568ca : Make load_state_dict() more restrictive (#451)
df79631a72 : Fix a mistake in autograd docs
95f0fa8a92 : Change .grad attribute of Variables to be a Variable
1c6ff53b60 : Make storages unresizable once exported to numpy
1dbf44c00d : Add SmoothL1Loss to functional
1259a0648b : Make nn containers copyable
b0055f6229 : Improve argument checks for long arg options
90040afc44 : Fix cwrap option filtering
59bc96bdc2 : Check dropout probability
676ffee542 : Check params type in optimizers
77136e4c13 : Add anything in torch.legacy docs
604e13775f : Add optim docs
02380a74e3 : Add warnings to multiprocessing docs
4461ae8090 : include cstddef for msvc
2b948c42cd : Add SpatialAdaptiveAveragePooling.
133c1e927f : fix readme, bump version
b2ae054410 : Add SpatialAdaptiveAveragePooling.
2290798a83 : if nccl is available, do not compile it and load system version
b96c2ed6ab : fix validation to consider cpu-only ops
69d8331195 : Use functools.partial
eab5c1975c : Avoid strict aliasing warning in float/half conversions.
5171e56b82 : Ensure atomicAdd(double) is visible to host side code
f467848448 : Avoid strict aliasing warning in float/half conversions.
7e4ddcfe8a : Remove names from register_hook calls (#446)
3152be5fb3 : Add repr to RNNs and Embedding (#428)
b076944dc5 : Fix for atomicAdd(double) for CUDA_VERSION < 8000
3a07228509 : Add ConvTranspose1d module (#449)
24a2f2e3a0 : Add MaxUnpool1d module (#447)
b32dd4a876 : add cudnn deb package installation paths to cudnn discovery, add 5.1.10 to load options (#448)
8683737410 : Caffe translator: match torch pooling
4f4bd81228 : Fixes to autograd: (#442)
59b23d79c6 : fix cudnn rnn batch_first with tests (#445)
9ad10959ee : Enable large PlanDef protobuf message.
8c14630e35 : Fix Tensor.apply_() (#444)
cc32de8ef9 : Fix typos etc. in docs
44696c1375 : Fix MaxPool2d on 3D CUDA inputs (#443)
d9c9404885 : refactor to allow for parallel gpu execution
82088a8110 : parallelizing catArray to multiple tensors per kernel (#635)
d5e45b2278 : Add AvgPool1d which just uses AvgPool2d implementation (#439)
0d5f3654b2 : Adding back untracked files from manual github pull
048be4533d : Fix autogenerated docs.
3a514fe28d : gpu transform fix
f0c893dcb8 : ShareExternalPointer with meta
1cd166d330 : CMake completions work
d8314bf278 : Fix ordering of TransformOnGPU arguments
4de888e167 : Add optional gradient on weights for (Sparse)LengthsWeightedSum
4ae5235ec9 : Tiny clean up of reducer_functors
0e6ebdf50a : Speed up travis slightly and fix documentation mistake
bdfef2975c : adding more docs for torch.* functions
b4bb4b64a1 : simd.h: really fix the arm64 (i.e. Aarch64) build
92ebb58a06 : Top-k accuracy operator on host
8047b8dc83 : Fix random issues with some of the layers getting missing from registry.
bb928f3cc0 : Latest fixes to Xray Flow workflows for Caffe2
2b88d85505 : Re-route thrust memory allocation to THCudaMalloc / THCudaFree in cunn.
4a8906dd8a : Add THCThrustAllocator.cuh to install files so downstream projects can use it.
68e2769a13 : Re-route thrust memory allocation to THCudaMalloc / THCudaFree so it can use the caching allocator.
4f1db36cff : add CUDA gradient for Div
17c998e99a : fixing arm64 build
95b3309a87 : Gradient Input memory sharing using memonger blob sharing
35758f51f2 : Get rid of a few unused imports.
e8102b0a9b : fix compiler warning in THCS
04f2bc9aa7 : Fix bug in squeeze backward (#425)
d070178dd3 : Instantiate 128kb of scratch space in GPU memory per-device by default
3732a0044c : Move mpi_python.cc to the python folder to be more consistent about source file locations.
b99ea43c9a : Set default build type to release
c9ec7fad52 : Add model_zoo utility to torch.utils (#424)
f0a6ca4d53 : BatchNorm fixes (#423)
73fe3d5f59 : Update travis to test more versions of GCC and fix README build status link
fd92470e23 : Add cuDNN bindings for BatchNorm (#421)
737000b166 : Linter fix up to sync fbsource and github
3833dad5f6 : manual sync of old never sync'd files
8369664445 : Minor doc fixes
35e1adfe82 : documentation parity with torch7 for catArray impl
eb91fc5e5d : Minor fixes to docs (#412)
46c6e621cb : Fix warning in ScaleOp grad
76c9382fb3 : Delete caffe.cloc
603784c8cb : fix typo
dd133edf84 : Update README.md
c1e6aa58a0 : adding license back
a358ed4297 : Update docs to reflect current build status
69ce8cafde : Don't add levelDB dependency unless Snappy is also present
ac03e65929 : Move c++11 check to cmake 2.8
d186fdb34c : Fix THHalf issues with MSVC.
0f04f71b7e : fix API reference link
e126f6e960 : travis cache apt
78fb184cef : mac travis: use Eigen instead of openblas
1a26aab1cf : Seems that on mac, the inclusion order matters.
83b2f282de : Need to set c++11 before check_cxx_source_compiles
375c0816b3 : goodbye old brewery
46a403250f : Make build for Android a bit easier
7734235a6a : Add misc check for the long type, and temporarily disable core_overhead_benchmark to remove the benchmark dependency for all binaries
1e8659fd89 : build files bugfix
87f1959be7 : adding proper categories to torch.rst
a538055e81 : fix invalid use of THPUtils_invalidArguments in sparse tensors
1be71804c8 : For the caffe and caffe2 protobufs, compile them to static instead of shared.
a9e2693fa8 : add back third_party/protobuf, but it won't be used in normal builds.
9d42eca92e : delete no longer used cmake lists under third party
0e345aaf6d : Fix invalidArguments to take kwargs and out into account (#397)
c976dd339d : remove .zero() on grad_input conv and batch_norm
b31708fb6e : Added summary to end of CMake configuration
347e17600f : Added option BUILD_PYTHON
71cef62436 : Fix condition for threadArgErrorHandler
3d1bda1f3a : cmake: make python dependencies separate from the C++ dependencies
3a29055044 : Fix rnn sphinx docs (#405)
610df2059e : Rephrase warning for missing dependency
249e1857e2 : Reset and warn when any options are not satisfied
59d66e6963 : Sparse Library (#333)
46bc43a80f : fixing loss layer docs
7fa60b2e44 : fixing docs of activations, pixelshuffle, sparse for rst
41e03c9c38 : cmake file fixes
5bfd6c4cd1 : semicolon
311ae2ba33 : build file fix and avx2 on mac fix
1be46aeb21 : more gitignore from caffe
cd617c3a76 : moved exclude to append in binary sources
9f351d581e : Add build/ to .gitignore since that's common practice for cmake
62265cd1eb : Remove unnecessary cmake lines
3e4b24447b : Add a missing if opencv found check
580294cdd4 : remove accidentally included old version of installation instructions
2f3b5d7943 : Moved binaries/python CMake files to reflect paradigm of the rest of the codebase
e80f4430c4 : clean no longer needed cmake lines
37b5af990a : Changes to make MKL operators build.
82070ebd7a : Fix accidental inclusion of cudnn tests in CPU tests
ccdeede31b : mkl: GLOB_RECURSE instead of GLOB
ae62e15f87 : Added MPI operators to cmake
7ea9f9e0ee : Updated naming convention of Caffe2_LINK*
05fa16a7aa : Add contrib/nccl to cmake
425ce989e2 : Update README.md
5070321915 : Add cuda_rtc to new cmake layout
07a1c58cad : Remove branch specification in travis
3dbddf6104 : Create .travis.yml
1d03be77d0 : mkl cmake file, not tested
5142640b2b : Added all folders to the add_subfolders section, with the ones not ready being commented right now.
3f432a8d43 : Migrate brewtool stuff into brewtool/ and update makefile to use cmake
c78893f912 : removing Image: references in nn activation docs
0d2a4e1a9e : fix dropout docs for rst
ae17168939 : Ensure glob always happens
d88d706446 : Removed protobuf from third_party
8396207684 : CMakeLists for db, queue, sgd
088f14c697 : fix batchnorm and linear docs for rst
9fdc844620 : halfway into going towards individual-folder cmake lists
4bf7be7bd5 : fix RNN module docs for rst
1395c1701e : Revert relabeled 'build' directory for protobuf compilation
b2ab6891c5 : fix the rest of Pool module docs for rst
cc8b6bf715 : USE_OPENMP option added
fb43912616 : Guard new cmake feature with version detection for compatibility
76cbf1d4d1 : Reducing minimum version of cmake required
9748c92b75 : Factor out DB source collection
65641b6bfb : bounds check in Gather operation
4d53c632e0 : Remove unnecessary cuda flags.
39ab5bcba8 : fix MaxPool1d,2d,3d docs for rst
69c09e1c48 : BLAS option: Atlas->ATLAS, and added an else() message guard.
6c124c6f49 : Allow glog and gflags to be optionally used.
324ef09e01 : fix typo
c63500fe68 : remove explicit glog and gflags link libraries, since the caffe2 dependencies would have already had them.
6bf2e156d4 : cmake cuda: add libcuda.so find paths, and produce error if it is not found.
42f131c09f : fixing nn.Conv* documentation for rst and adding nn docs to sphinx
52784a3a21 : Add LOG_IF and VLOG_IF to the non glog option.
4c51f96b9d : DEFINE -> CAFFE2_DEFINE
7c3f1521a7 : Gpu transform
6618d7462d : Improvements+fixes for NetBuilder
89dca6ffdc : Add a patch to stop Sphinx from cross-referencing ivar tags
b7f36f93d5 : Expand autograd docs and add sections
58320d5082 : Add multiprocessing docs
a461804a65 : adding docs for more torch.* functions
817f6cc59d : adding linspace, logspace, neg and range
108936169c : implement more torch.* docs, remove zero, cauchy, log_normal from torch.* docs as they are not stateless
f60ae085e6 : Float -> float, Long -> long
85dda09f95 : fixed names and other cosmetics
4f479a98d4 : fix indentation issue for all examples, add doc for add
35ba948dde : add doc for *mm* functions, *mv* functions and addcmul, addcdiv
6b4ed52f10 : adding docs for some torch.* functions, removing all, any stateless methods
eab5cef032 : Removed redundant fpic invocation
16aacbdf83 : Fix MSRAFill op
737f507786 : Fix all instances of 'build' folder being used to prevent errors on make
4698062fcc : use different name for build folder (conflicts with build file)
0ce23319c2 : Change default blas to Eigen
90f601e4cf : Checked out older (and still working) version of pybind11
67a74f3ada : no fancy auto in lambda functions.
a84fa6fb98 : Checked out older (and still working) version of pybind11
6303dab3ab : Update README.md
dcf5f8671c : Add __pow__ to Tensor and list additional undocumented functions (#398)
55856c720c : no sudo on pip for ubuntu
5340291add : Update FindARM.cmake
1c6fe58574 : Add gather and scatter to autograd
9f2111af73 : Rename Variable.no_grad to Variable.detach
2ed6c6d479 : Fix leaf Variable handling in autograd
6a2785aef7 : remove link_prefix from linker arguments (#395)
b8df7ce149 : fbcode: remove unused includes from .cpp files without #if (but possibly #define)
849cbf3a47 : small cmake fix
a0c614ece3 : unsqueeze instead of view in dataloader
3074f8eb81 : Removing TH_GENERIC_USE_HALF, TH_NATIVE_HALF, TH_GENERIC_NO_MATH (replaced where appropriate with TH_REAL_IS_HALF), removed half from THGenerateAllTypes, added an explicit THGenerateHalfType.h
5df17050bf : Revert "TH_GENERIC_USE_HALF=1 by default, half enabled by default"
92df0eb2bf : removing unneeded flags in build_all.sh
be8376eb88 : TH_GENERIC_USE_HALF=1 by default, half enabled by default
b650a45b9c : fix botched merge in setup.py
8a20e22239 : Add torch.stack
7c5014d803 : Add torch.split, torch.chunk and change default dim of cat to 0
62ac1b4bdd : Implement missing cases of __matmul__
0633c08ec9 : Add is_shared() method for storages and tensors
cf87cc9214 : Check valid configurations of Variable flags
f908432eb3 : Ensure that Variable's grad is shared between processes
1bd291c57c : Fix multiprocessing tests on macOS
3f270f60ce : display spans for a certain time interval
b277df6705 : Doc css fixes for mobile and large screens (#389)
ec4d597c59 : test fix
d2ef49384e : Add custom docs stylesheet (#387)
b5dc36f278 : explicitly linking against v1 libs to avoid lua-torch conflicts (#386)
3dac1b9936 : cmake C flags fix
224422eed6 : cmake fix
10f78985e7 : adding TH_LIBRARIES and THC_LIBRARIES var to THCUNN cmake
dc95f66a95 : adding TH_LIBRARIES var to THC cmake
d8f4d5f91e : adding TH_LIBRARIES var to THNN cmake
43fbdd3b45 : workaround for luarocks 12.04 bug
803d032077 : workaround for luarocks 12.04 bug
b5cf1d2fc7 : adding THCUNN_SO_VERSION
c1ca9044bd : add THNN_SO_VERSION
52c2a92013 : adding THC_SO_VERSION property
541ab961d8 : adding TH_SO_VERSION option
849794cd2c : Remove deprecated and unimplemented functions (#383)
f47fa2cb04 : use __get_cpuid when available
7a162dd97a : Fix outputs of torch.* comparison functions
b123bace1b : Rename torch.autograd.functions to torch.autograd._functions
483490cc25 : Move PixelShuffle implementation to functional
8d60e39fdc : Rename torch.nn.functions to torch.nn._functions
e7dff91cf3 : Fix for multinomial autograd function
ab5776449c : Add documentation for some torch.xxx functions (#382)
e4a3aa9295 : Change container doc to assign child modules via attributes
be98c5d12d : Start documenting torch.Tensor (#377)
bc6a71b1f5 : Add Function docs
26f1e2ca9c : Add basic autograd docs
75d850cfd2 : Fix optim docs
f4870ca5c6 : Fix nn docs
235d5400e1 : Merge pixelshuffle function into module (#375)
491d5ba4fd : add new flags to build_all.sh
2975f539ff : sort cuda 8.0+ fix
64ca584199 : Fix group support in convolution modules (#374)
5263469e21 : Fix handling of zero sizes in caching host allocator
c367e0b64e : Support dilated 1d and 3d convolutions (#372)
183b3aacd2 : Hold CuDNN PRNG state between RNN iterations
101950ce92 : fix repr in legacy.nn.linear
239ae94389 : fix in conv repr
55e850d825 : test if modules can be printed with fixes
76a2c9cbf7 : Attempt to get numpy working with travis
415f4959ce : Attempt to get numpy working with travis
965228e559 : specify python version in travis
187ba9d969 : Added alternative numpy installation option
62af45d99f : Basic functional interface (#354)
e5793efb09 : clean up environment variables
edac248dad : no prompt for addition of extra repository
1aa473638d : Added a search path to find OpenBLAS for convenience (homebrew install)
bd2346093f : added a build script to specify openblas with OS X
06265daf1d : Add test repositories to travis
70b7f6af23 : Fix leveldb brew install typo
2e2522bf30 : Moved addons back into the matrix specification
f89340502c : set distro earlier on in the travis configuration file
fc750ae32d : remove gtest installation attempt
056312a538 : Specify trusty distro
5abc094ea1 : specify location of openblas and added addons
5a77c59d81 : added OS X to travis and split install script into separate file
9ce23cbb71 : Fix false positive for non-clang compilers.
454d439cdd : Add back Caffe2_GPU to Caffe2_LINK variable if it can be enabled
77a925ab66 : Add THHalfTensor support to cutorch (#655)
d0d33d3ae7 : Add support for torch.HalfTensor (#874)
3dbcae9ef0 : Fix typo breaking NumPy includes
3ebb52074f : Fix duplicate definition bug (only present in GCC)
d515d8ffb8 : Update README.md
b48f1ff810 : OS X build
9b7eceddc8 : Accept outputs in out argument
24af02154c : Use ForkingPickler for sharing tensor/storages across processes (#344)
86ec14e594 : Add support for VSX vector instructions on PPC
a2ae00519c : add speed benchmark tool
8a29338837 : Use cuDNN for Conv3d and ConvTranspose3d (#359)
ce02932517 : added documentation
29918c6ca5 : Copy libnccl.so.1 instead of libnccl.so
3adca70cec : bugfix htrace_to_chrome wrong output file name
80a44e84dc : Change multinomial return type for CUDA
5497b1babb : Use TypeError in invalidArguments
bef70aa377 : Make type checking more strict and fix topk arguments
0d30f77889 : Make variables picklable with protocols <2
e27bb3e993 : Minor fixes
b55e38801d : rename histc2 to bhistc
4b3bd06a7f : sparse nn converges better by deduping sparse gradient by mean
46f0248466 : Use bool for sizeAverage in SoftMarginCriterion
310ec57fd7 : Fix typos in THCTensorRandom
cd82b2b869 : Implement comparison and logical operators for tensors
126a1cc398 : Add Sphinx docs
f2606a7502 : Make multinomial return a LongTensor (compatible with CPU version)
8576eef831 : added link_directories to hopefully fix travis issue
b07fe52ee0 : Adding support for empty tensors in cat, catArray
b07358b329 : renaming test to avoid dot in test name
2aea8077f9 : renaming test to avoid dot in test name
135687f04a : critical bugfix in storage copy
6d6d418f6c : remove policy (doesn't work with older versions of cmake)
db745f33a5 : whoops had the old command for cmake in there
cf64d91548 : new policy for building shared libs
1307e6f1cf : compatibility with older version of CMake
244f6aed28 : force use of new cmake
9e75aa4d35 : specify path to write htrace logs
547d151728 : fix for failing dir creation
de15c67844 : download a newer cmake
8c6fe64e1d : upgrade cmake version to use proper linking flag (full paths)
c2c58480ab : added google/benchmark and tidied up Cuda build
b140e70b58 : Add autograd.backward (#341)
d267f1c320 : remove apt-get of nvidia toolkit (which installs nvcc-7.5)
ec987b57f6 : removing 3.3, 3.4 from README badges
596677232c : Add a different code path for catting contiguous tensors along the first dimension, for speed reasons. Fix a bug in cat when catting with an empty tensor along first dim (it added an extra dim). Fix the ambiguous 'catting along last dimension' sentence in the doc and change the behavior to pick the maximum last dimension over all input tensors. Now empty tensors are allowed.
9d74e139e5 : removing 3.3 and 3.4 from travis build
d2a93c3102 : remove unused buffer in avg pooling
bc475cad67 : Move max pooling construction logic to functions (#343)
45d6212fd2 : default args for conv functions
f45d75ed22 : make the CUDA-aware tests backoff if CUDA no available
55a794e6ec : fixing OpenMP longjmp bugs in *MaxUnpooling
118cc4174a : Remove binaries build (it seems to be broken)
a4f3721e15 : weightedsum on ps
93ed476e7d : adding LAPACK double bindings, adding fmod and remainder
55ae35c0ba : remove -lcnmem
6fa371cb0d : bugfix for qr skinny matrices
a7f8fe0423 : introduce request net into prediction schema
074924eb19 : remove broken test
e51e651255 : Remove redundant and failing test of FeedBlob asserts
e7690070ca : removed encrypted binary
3eb08feff5 : Support no_bias in naive group conv implementation
3d69cf1fa7 : added cudnn
18a2691b4b : Fix memory leak in THStorage_copyCudaXXX
ed2994a385 : Add c++11 support to nvcc
fc74eae082 : fix build for older versions of CUDA
85d7688811 : add display level to htrace_to_chrome.py
db5cc8f278 : revert exhaustive_search setting to False
f7bd3f7932 : added pixel shuffle layer + tests
72e957e611 : update third party files
d570d1f405 : fix USE_*DB option issues
2da1b44b7f : pass std=c++11 directly to nvcc
71db174410 : downgraded cuda to 7.5
4fda7467fb : removed docs from cmake branch
45807c89ac : matrix install and export CXX in the script with COMPILER variable
a14d2b5817 : turn off cuda propagate host flags
f8dee4620a : add a new function histc2
291c971e36 : CMAKE VERBOSE on
e2181a32ca : Normalize rank loss gradient to avoid convergence issues when the number of pairs is really large
505fdc99b7 : fix gcc path search
613a8f1040 : update gcc to 5.4
2c6a579859 : Make all convolution operators allow optional bias term
d7836b2f5a : Preserve metadata on schema.List.lengths
4c22d3769b : maybe fix cmake file
728c694f1d : network install rather than local
bb3cec8046 : fix broken link for ubuntu download
fed2cdf8cd : change nvcc version
47bd606f63 : Better visualization for gpu training plan
5209a28c95 : cuddn_exhaustive_search default True
82f1a8e12d : fix code doc for data_workers
fdb2a5b77a : separate num_task and num_label. unify label schema. remove is_mtml
c2d28fb874 : RNNs API simplification
a2b31cf9e2 : Install fixes
a9c2809ce3 : change the order of cudnn libs
fa61159dd0 : cremainder, cfmod implementations (take 2) (#646)
a215e000e9 : fix for out of place tests and for non standard I/O pipes
7bf5c48e7e : Updated eigen submodule
fe0d59d424 : added -y flag to force addition of repository (timeouts on automated build system)
35fd17cae2 : moved gcc installation into the script rather than addon
2b0054a642 : fixed gcc update bug in travis.yml
baa058778c : added newer version of G++ for potential fix of nvcc compilation
705d934481 : added pip numpy install
6ea442629b : added C as a supported language to the cmake file
6abf5c99dc : Implement group convolution in the cudnn interface.
a3e6f4cb7a : add HTraceAsyncDAGNet
93dd09dfd8 : apt-get latest cmake
1d1528bc96 : updated ubuntu to the xenial
f16a624b35 : correctness fixes for mod and remainder for integer type tensors.
a03692069e : Adjust numerical precision of comparison to make test pass
0943c16324 : added pthread download
6dccc8e4ab : removed swap files
64de84e069 : updated ubuntu version
632a9fd23a : removed old '.yaml' file
99fa9e7ae2 : fixed yml extension to be recognized by travis
22ebc3f205 : Revert "Add support for cremainder, cfmod"
0a85a977c6 : caffe2/caffe2/utils/mkl/mkl_memory.h: avoid shadowing warnings
e3d38fa933 : Add rank loss options to mlp
bb72ccf1a5 : Support CUDA IPC in Python 3 (#203)
2e73456f5c : Fix compiler warnings in Tensor.cpp
3e49a2b4b7 : Prevent deepcopy from changing Parameters into Variables
4694e4050b : Fix printing bug when all values are NaN or inf
59b9eeff49 : Expose gather and equals for CUDA tensors
d4bbcab558 : Setup MPI before test start
12c4090ea5 : Skip sparse tests if operators not available
84e7eff458 : Waive some hypothesis tests on GPU
fd5da05a05 : yes it is right.
05233cd5b8 : Make bias optional in cuDNN conv op
1744fad8c2 : Use 'void' for no-arg function
e46d942ca6 : Fix double initialization of HalfStorage (#331)
fe38a0c2b1 : remove logging.basicConfig() from workplace
bd0b61fef1 : Minor comment changes for caffe2.
6c3cca9bc7 : Build caffe2, NNPACK, FXdiv, pthreadpool for macOS
93a6136863 : Add support for cremainder, cfmod
09187f4aea : Moved core_overhead_benchmark to oss. Use google/benchmark
d87edd39e7 : math gemm interface fix
d37fffd257 : use in-place ReLU to save a lot of memory
230bde94e7 : fix about section
20fffc8bb7 : Fix torch.is_tensor for half tensors (#322)
70dcba376c : using BlobReference for Sum gradients.
17a5a6ae32 : fbcode: remove unused includes from .cpp files with no #if and #define
861a3f3a30 : avoid shadowing warnings
ee52102943 : small change from set to dict
26516f667e : Fix multinomial bug and decrease precision of normal test (#325)
5586f48ad5 : add cudnn 5.0.5 to supported versions (#321)
9e498c7bba : caffe2: removing message logging in conv_transpose_op
78edb8295e : No exception for float64 in FeedBlob. Warning instead.
ea2deae9e3 : remove unnecessary code since our compiler is fairly modern
cc6e3c92d2 : ensure that legacy linear has gradWeight and gradBias fields (#319)
a2ef5782d0 : Revert "Bugfix of type in THCTensor macro."
99e97a4b7a : Correction to paths to find cuDNN
0c1c0e21b8 : Bugfix of type in THCTensor macro.
ffcc38cf05 : Deterministic ordering of parameters and buffers. (#317)
a8ae63c3e0 : HuffmanTreeHierarchy operator
29f903aaf2 : Make computed params broadcast optional
dac78727fb : Add missing file
8a70067b92 : Add support for stochastic functions in autograd (#294)
33b227c45b : serialization bug fix (#314)
e8dc09064e : exhaustive_search=True
fc27f83282 : restore control_input
35fa9e9c5f : a couple small reliability improvements
c016e64914 : Fix test cases: tensor of size 0 not supported by GPU ops yet.
42bbdda8c4 : MKLDevice and MKLOperator
fb68be952d : Bugfix for multinomial distribution
dbe7aeb883 : move HTraceDAGNet and ProfDAGNet to contrib
2bf18f2b1d : add inception and dummy input
30f323298f : exclude every branch but master for build testing
f413ee087d : Add missing free in LookupTable (#400)
dd74c5d3b8 : Implement rank loss method using logit function and pairwise comparisons
e80423f341 : bug fix to distinguish train/test data
cb918ac727 : Implementation of ResNets on imagenet dataset
585b8f7c9d : Templatize store handlers
dc16bcfa27 : Remove float64 test
0b52b3c79d : Generalize threaded data input via queues + Everstore input
4858a6bc6f : snapshot -> checkpoint
ba58b80b16 : Rename OperatorBase::OutputAt to OutputBlob and make the interface consistent with the rest
d38499f727 : Optimize BlobIsDefined() + benchmark --> net construction 95 secs to 8.2 secs!
11a6f48fe7 : Fix a few docstrings in operator.h that is not correct.
1a00ffea2a : Implement fix recommended by @slayton58
4cd263db74 : Last N window collector
0b21581784 : update torch to fecf29bb6ad7b8117eff9712d833972205de1201 cutorch to 64f974178c03c93666cfe3796b7e2d7b549476a2 nn to e8ec31cd0a531b7f7a3247dd7e777958a643d931 and cunn to 64224a65eff88d1bfe5bc47d26a901ed8c0b4705
6191de7ac9 : gradients for CopyGPUToCPU and CopyCPUToGPU + unit test + schema
390867d2d0 : Fix RecurrentNetworkGradient with batch size > 1
0bc104a3d0 : fix unit test
1632f053e5 : implement user-only metadata for input_record
2c3eb3e592 : fix sequence_ops doc (pad_width -> padding_width)
68cfc52452 : MomemtumSGDUpdate -- version of MomentumSGD with update.
3c47d41f86 : add unit test for row mul
68fbc42830 : fix empty tensor handling in some operations
2847c8f624 : input_as_shape option for Filler ops
cd780eb9ec : avoid Exp overhead when handling underflow with MKL
eddf23ca0f : Handle parameters that are computed but not optimized
8dbe435235 : Ensure input type consistency in Concat operation
206029bc5a : fix caffe2 tensor index overflow in Extend/Reserve/Shrink
c70e8115a1 : dper_example use RowMul for speed
48bd64b41b : RowMul
033acae6b4 : Update README.md
64448905ba : added nvidia toolkit to installs
6495f5dd30 : fix bounds issue in snprintf
b348e9677c : Removed deprecated installation instructions
fd04a3468c : Added .travis.yaml
8e09f0590b : Make sure that C extension was compiled with cuDNN before using it
08d346df9c : Print libraries used for building the extension
12cf96e358 : Don't change requires_grad of parameters in train() and eval()
765a720d1c : Add support for tds.Vec and tds.Hash in load_lua
cace62f94c : Fix a bug in narrow docs
767c96850d : Return False from torch.cuda.is_available() when no devices are visible
b73e78edbb : Check nDimension in t() and t_()
7914cc119d : Fix bmm for Variables
2b13eb2a6c : Fix naming of setup.py env toggles
8768e64e97 : Allow returning changed gradients from the hooks
9212b9ca09 : fix wrong export directive for THCCachingHostAllocator (#633)
0d0f197682 : Add note on Huber loss (#310)
03c9d54fd0 : Support openCV 2
788f715a6e : third_party protobuf support
281e34d1b7 : fixes for changes in THNN API
ed9dbff4e0 : removing ifdef
41909e8c5b : adding a couple more imports
56245426eb : small fixes to allocator
3adcb2c157 : Check that batch size matches the target size in ClassNLLCriterion (#399)
6d12185cc9 : Fixed compilation on Raspberry PI without NEON
258c9ffb2c : Implement bernoulli with element-wise probabilities for all types
df12f431e0 : Removing extraneous cmake files
d7eeebc269 : Refactored CUDA detection a bit
d74bd7ee55 : Add CUDA NVRTC cases
fbbb87cd46 : Enhancements
5e699ce6c2 : CUDA fixes
b9599c7464 : Compiling entire project
e9f1222408 : Compiling most of the project
c05ff206b6 : Build binaries
2610d62813 : Build Python libs
52f09fe2c9 : Initial building with deps
e9de70f296 : Added basic build system
dede431dd9 : More state_dict fixes (#305)
6312d29d80 : Another documentation change, missed one in #303 (#304)
ab5f26545b : Correct documentation to be in line with #237 (#303)
6567c1342d : small doc fixes
3d6c2e023c : TensorInfo related code documentation
122e115937 : Removing extraneous cmake files
681267b66a : Refactored CUDA detection a bit
89d930335b : fix tests for GPU-less setup (#298)
04393cd47d : fix gcc-6 build on os x (#297)
28f0cf6cee : Add docstring support to cwrap (#295)
1af9a9637f : Refactor copy and release GIL during copy (#286)
1031d671fb : legacy fixes (#287)
9f35f47411 : Add CUDA NVRTC cases
09de969e9f : Enhancements
cdb2fb6737 : CUDA fixes
66a71c0232 : added initial documentation template
f79bffc78d : Compiling entire project
2a974f5ca2 : Fix 1.3.2 compilation
4255ee9944 : Compiling most of the project
497659ce0d : Build binaries
f3c20620ed : Build Python libs
220183ed78 : Improve gradOutput checks for VolumetricReplicationPadding.
504d2ca171 : Improve gradOutput check for VolumetricMaxUnpooling.
d535aa94a1 : Improve shape checks for VolumetricDilatedConvolution, VolumetricConvolutionMM, VolumetricFullConvolution.
0376a1909b : Improve shape checks for VolumetricAveragePooling, VolumetricDilatedMaxPooling, VolumetricMaxUnpooling, VolumetricReplicationPadding.
f757077780 : Improve shape checks for VolumetricMaxPooling and VolumetricDilatedMaxPooling.
3d719f4bff : Initial building with deps
648e9fbb58 : Adding missing file
9f7114a4a1 : Improve shape checks for VolumetricDilatedConvolution, VolumetricConvolution, VolumetricFullConvolution.
dea27ca4ca : use TIndex for set in math.h
5f7d1f02f2 : Use native reader for evaluation
1aba4280d8 : Make xray net_type configurable
6c13dc3dd0 : Fix CreateCommonWorld schema
ab3fea540d : Add serialization interface for MKLMemory
e65eeff665 : LMDB example
96a5e88d63 : Fix consecutive checkpoint syncs
3125e6a821 : Hacky fix for cloned model rewriting
ea9a0f24bf : automatic aggregation of sparse gradients
2045a5de9f : add position based weighting
3410939459 : pass learning rate scaling factor to parameter update builder function
a3942b2d64 : Add store ops and tests
f3403a1110 : Add RedisStoreHandler
119b687994 : Allow PythonOp to access the workspace
2390dfefdb : Kill few more CHECKs.
af2a3076a2 : add header for AsyncDAGNet
7d03da0890 : Improve shape checks for VolumetricAveragePooling, VolumetricMaxUnpooling, VolumetricReplicationPadding.
4e0cecae7f : Improve shape checks for VolumetricMaxPooling and VolumetricDilatedMaxPooling.
8f398d795e : Added basic build system
72dbb76a15 : fix half type numerics issue in SpatialFractionalMaxPooling
cceb926af3 : Remove extra size check in SpatialAveragePooling.
0d7d29fa57 : Enable caching allocator for CUDA pinned memory (#275)
be3276fcdd : Account for batch_size in DataLoader.__len__() (#277)
f2a18004a7 : Process outstanding CUDA events in recordEvent
1a3ff1bd28 : Remove unnecessary shape checks in Spatial Pooling modules. Checks comparing input image sizes to kernel sizes are superseded by output size checks.
a5d3c779c7 : Add gradOutput shape checks in temporal modules.
9d32e60dc2 : Fix spacing in SpatialDilatedMaxPooling.
f6913f56ea : Remove unnecessary shape checks in Spatial Pooling modules.
801fe8408f : Add gradOutput shape checks in Temporal modules.
cf4a979836 : Improve shape checking for Temporal Convolution.
34d27771c6 : 1.3.2 release
1093821c33 : Replace min BW by average BW in tests
91f2946310 : Import most common packages by default
2bd7a3c31d : Don't raise an error when retrieval of container's source code fails
a681f6759b : Raise correct error types when indexing tensors
cb849524f3 : Improve cuDNN detection at build time
1f5951693a : Change torch.randperm to return Long tensors
87748ffd4c : Add .type() for torch.nn modules
0580f5a928 : Add __len__ for tensors
88d9fdec2e : Add torch.cuda.set_device
506a40ce44 : Remove optim submodule attributes from torch.optim package
bec6ab47b6 : Add caching allocator for pinned (host) memory
49480f1548 : Adds a CUDA "sleep" kernel
18a3c62d9b : Allow NoneType for parameters in Module.load_state_dict
6322cf3234 : Allow device=None in Tensor constructor
4e2b154342 : update install command from source
bb1019d1ec : Add newContiguous calls that have been removed from lua.
c2d32030a2 : Move make contiguous code from lua to C.
162170fd7b : Add optional weight decay to optim.SGD (#269)
107966b059 : add error message for asan
da72658fa8 : sparsehash-based implementation of UniqueOp
f16c2fe3da : Create a reserve operation for tensors to avoid reallocating memory on Extend() and Resize() operations
1aafeb3565 : clean up memory of c2/sigrid predictor
f41b2ca85c : fix sliceop for empty batch
10d0aea88c : gradient for FlattenToVec
2a95bd5239 : Incremental MeanReducer segment Ops
be1f3ed1d7 : Add a snapshot test for Simon Layton
e8b7ec1393 : disable local update for sparse features
5d0167c8e7 : Example workflow for running distributed (syncsgd) imagenet training in Flow
6ebae91d24 : multi-task learning: save model and evaluator
365ca8da1c : add sanity check that ops do not cross gpus
a7df0e6724 : Clone model net to avoid hard-coded inputs
a597c7b167 : implement sparse nn using layers
9ea7947423 : dot_production work for empty batch
301ab97e41 : Fix few more operators to handle empty batches correctly.
f95757e66e : Added internal logging to internal usage of caffe2
da7add3da8 : Better threadpool sizing heuristics
6515772b1f : Fix UBSAN issue for zero-sized memcpy
5eb836880d : Add unittest.main() lines to test scripts under python/operator_test
1e4b4fb4c4 : Fix db_test under tsan
c1c92479bd : check that numpy arrays are float32 when CUDA is used
b9f1555b6a : remove unused function from resnet50_trainer
b77aa551a4 : add missed comma
42279a610c : use Pieter-MPI and fb.distributed
122a89e3c5 : Add FileStoreHandler
2790043421 : Add the MKLDNN type to the tensor type strings and added proper docs.
0a42681f0c : print more logs in qps metrics
6b437708ad : caffe2/caffe2/operators/softmax_with_loss_op.cc: avoid shadowing warnings
2fbf774e99 : make ReshapeOp work with CUDA
c3606dcb9a : Fix integer/floating point conversion error when computing how much data to allocate
c48551409c : Proper error message if passing NoneType value for kwargs
949ce294ff : fix race condition in text_file_reader.py
0e298ec399 : Expose MKLMemory to the Python Feed and Fetch interface, and misc changes
9fa26fcc32 : position weighted embedding
b5613d7a3d : report offending blob name when blob in wrong device scope
c41f0d27c4 : adding more things to the list of known operators in model_helper
ea728e7c5e : Add DataParallel container (#268)
aea6ba4bcd : Support pinned memory in the DataLoader (#265)
8bfa802665 : Improve error messages/shape check in TemporalMaxPooling.
ff5b73c0b3 : Improve error messages/shape checks for temporal modules.
86c95014a4 : use local modified select_compute_arch.cmake for msvc
288c950c5e : use local modified select_compute_arch.cmake for msvc
b27d4de850 : changes to compile with msvc
0fecec14b8 : fixing bug in indexing when given float indices
a7f24ccb76 : Fix shapeCheck in Spatial Pooling modules
08a1bc71c0 : Fix shapeCheck in Spatial Pooling modules
04e896a4b4 : adding coverage support for tests
5dcfb80b36 : lua serializer registers CUDA classes only when CUDA is available
9da60c39ce : Fix batch_first in AutogradRNN (#255)
379860e457 : Lazily initialize CUDA devices
bcfa2d6c79 : Add .t7 file reader
8b492bbc47 : Return accreal as correct python types
a49b7b0f58 : Fix bug when Variable constructor didn't set the error properly
c781ac414a : Unify signatures of max, mean, etc. between variables and tensors
656dca6edb : Implement in-place operators for variables
830adfd151 : Allow passing torch.Size to expand
6f7c8e4ef8 : Fix bug when passing 0 as dim to max, min, mode, median and kthvalue
5765d608cc : Add a static library target "staticlib" to the Makefile.
2ba6678766 : Revert "Lazily initialize CUDA devices"
51bf6321ea : Implemented cudaMemGetInfo for caching allocator (#600)
aa8916e7c6 : Don't unpack single element tuples returned by functions
2e24da2a0b : Change parameter_dict to state_dict in torch.nn
c94ccafb61 : Print error message when constructing a tensor from a numpy array with negative strides
80a827d3da : Fix data_parallel bugs
6909c8da48 : Use TH_INDEX_BASE for range asserts in MultiLabelMarginCriterion
c07105a796 : fix cwrap for changed signatures
c40c061a9f : Lazily initialize CUDA devices
709255d995 : added shape checks for SpatialAveragePooling
f3cb636294 : refactoring and adding additional shape checks for SpatialAveragePooling
e3f440b1d0 : Make torch.backends.cudnn work on OSX
f6b94dd830 : Add some documentation for APPLY and DIM_APPLY macros
3911a1d395 : Fix memory leak in LogSoftMax
ebd3648fd6 : Call newContiguous rather than arg checking isContiguous.
f698f09cb7 : Add contiguous checking / make tensors contiguous for SpatialUpSamplingBilinear, PReLU, SpatialSubSampling, TemporalConvolution.
86aa5dae05 : Move VolumetricConvolution contiguous code from lua to C.
179c82ffb4 : Autograd functions no longer store references to saved_variables
233017f01f : Add torch.multinomial for CUDA
c2c515516b : Remove irrelevant output from ncclReduce Fortran tests
9c18468fe2 : Add Copyright header to Fortran bindings source files
597bbfeacd : SpatialConvolutionLocal uses baddbmm
99a169c17e : Fix memory leak in LogSoftMax
9a02908b78 : enable reshape op
61c0bcf91d : removed deprecated readme info
0613ac90cd : string.split and string.join removed for .split and .join
589398950f : fbsync at f5a877
78871d829a : Call PyObject_GC_UnTrack from tp_dealloc handler (#231)
d40a7bf9eb : Fix Scatter.backward() (#232)
b27f576f29 : guard random functions for half
073dfd8b88 : bump version
509dd57c2e : tensor docs
7a837b7a14 : fixing nn docs to be categorized, and optim docs
dee864116a : optim docs
5f2b32e45b : Add Fortran bindings
e51d0bef97 : Add cuDNN bindings for 2D transposed convolution
2fd78112ab : Add half copy/conversions
84b4665e02 : Add half support for addmv and addr.
26d626a47c : adding docs for loss functions, container, module and fix typos
6ff6299c65 : fix memory leak in (equal)
071e68d99d : fixing output size w / h order
78c1094d93 : Don't override __call__ in modules
56fc639c9f : Fix no bias mode of autogenerated THNN function
f8ae5c93e9 : enables random functions for float and half types on cuda (#223)
ad286c0692 : add support for equal in cutorch
6564d39777 : Call newContiguous for tensors that are required to be contiguous. Also add tests to verify that non-contiguous tensors are handled correctly.
8f1b7230fe : add support for fmod in cutorch
c0b7608965 : add support for remainder in cutorch
56dd4132c4 : add MACOSX_DEPLOYMENT_TARGET to instructions
91494cb496 : Call newContiguous rather than arg checking isContiguous.
9057eade95 : Handle contiguousness and improve shape checks in SpatialAdaptiveMaxPooling, SpatialUpSamplingNearest, and TemporalConvolution.
a28317b263 : SpatialSubSampling contiguous check.
25c3603266 : VolumetricConvolution check contiguous.
ae6f2dd11c : Adapt nn code to changes in THNN and THCUNN
3aaa1771d5 : [cutorch mag2gen] more cleanup
2034396a3c : [cutorch mag2gen] some cleanup
0cad668065 : [cutorch mag2gen] move qr to generic
f644a11b82 : [cutorch mag2gen] move potr* to generic
d7e3b2ef29 : [cutorch mag2gen] move inverse to generic
fc5ec87478 : [cutorch mag2gen] move svd to generic
ed4023127b : [cutorch mag2gen] move eig to generic
2bd4e5f5f6 : [cutorch mag2gen] move symeig to generic
d2dcbc26f8 : [cutorch mag2gen] move gels to generic
2f05eefe9a : [cutorch mag2gen] code refactor to support generics; move gesv to generic
7d1afa78b9 : [cutorch mag2gen] generic MAGMA memory allocator function
dac9b020e0 : [cutorch potr*] API parity for potr* functions in cutorch
66320c498c : Add contiguous checking / make tensors contiguous for SpatialUpSamplingBilinear, PReLU, SpatialSubSampling, TemporalConvolution.
8cb8a0a146 : Move VolumetricConvolution contiguous code from lua to C.
aeed8a6ea4 : Remove duplicate entries and add optional marks in THCUNN.h
c82537462b : [cutorch] remove syncing point from baddbmm
a8a02ff560 : Fix compilation for ASIMD
238ceab825 : fbsync. TODO: check if build files need update.
5b9b9634f9 : [cutorch rand2gen] various fixes
4f8e6ec42a : [PATCH] Improve potrf error message. (#189)
64c8a13773 : Remove comment.
395ab4a287 : Fix SpatialDilatedMaxPooling shape check.
15dc862056 : more improvements on error messages and shape checks.
f2daa616d1 : Revert "Move random functions to generic"
1d0f86144c : [cutorch rand2gen] fix illegal memory access in multinomial code, update unit tests
89e93bba9d : [cutorch rand2gen] test fixes, add floor to geometric distribution transform
3290d4c7d6 : [cutorch rand2gen] extend functions to use _double methods
ca22befc93 : [cutorch rand2gen] move randn to generic
b08df5b9c0 : [cutorch rand2gen] partial move of logNormal to generic, needs further debugging
ebd3c3291c : [cutorch rand2gen] move geometric to generic
16728d2f26 : [cutorch rand2gen] move multinomial to generic
34dab66f44 : [cutorch rand2gen] move cauchy to generic
3a111c7499 : [cutorch rand2gen] move exponential to generic
3600c94ec5 : [cutorch rand2gen] move normal to generic
e2f8b00e00 : [cutorch rand2gen] move bernoulli to generic
65ed1eba48 : [cutorch rand2gen] move uniform, rand to generic
7fff7977fe : [cutorch rand2gen] make sampleMultinomialWithoutReplacement utility function generic
add5922aac : [cutorch rand2gen] make sampleMultinomialWithReplacement utility function generic
a94b54a533 : [cutorch rand2gen] make sampleMultinomialOnce utility function generic
bea82b9da6 : [cutorch rand2gen] make renormRowsL1 utility function generic
2e7debe282 : [cutorch rand2gen] introduce THCTensorRandom.cuh, move and templatize simple binary search function
1cee5a359c : Fix checking and spacing of dilation parameters in SpatialDilatedConvolution and SpatialDilatedMaxPooling.
b08862405e : Remove extraneous shape check from SpatialDilatedConvolution. (#1029)
d57e1a6756 : change to compile with msvc && export THCDescBuff for cunn
c9172c5bc9 : change to work on windows && ptrdiff_t replacement
5d5e877a05 : Fix implementation of logNormal
1e794c87ae : adding bidirectional doc
d9cb1b545a : Fix build on 32bit platform like JETSON TK1
23f611f14d : Rename assertSameGPU_generic to assertSameGPU.
d0cf5f7b65 : Improving error messages in nn.
4699c817e8 : [cutorch rand2gen] fix illegal memory access in multinomial code, update unit tests
4f490c16e9 : [cutorch rand2gen] test fixes, add floor to geometric distribution transform
bcdab7a632 : Remove mul/div from THCHalfAutoNumerics as they've been moved to THCNumerics.
7f51af7cbc : adding dropout, bidirection, etc. to RNN (#214)
b4ae60cac8 : Protect half operations with CUDA_HALF_TENSOR with generic modules.
4d03d96e8b : fix: cunn can't find cutorch sources
a39ffebc3a : Add THCTensor_(sizeDesc) for better debug messages.
4bba6082ed : [cutorch rand2gen] extend functions to use _double methods
b111632965 : [cutorch rand2gen] move randn to generic
0a34b34bfe : [cutorch rand2gen] partial move of logNormal to generic, needs further debugging
6b821ece22 : fixing trainer tests (#213)
d3b2096bfd : trainer fix for new optim API
e64fca4b04 : Allow wider test tolerances for: 1) Size of half numbers 2) Convolution weight/bias 3) BatchNormalization
b941e73f4f : ArgCheck that dilation parameters are > 0 and ensure tests pick dilation parameters > 0.
c57873d3cb : Add generic support for LookupTable.
f3bc3275ac : Add generic support for TemporalConvolution.
8df26e6c5c : Add generic support for VolumetricFullConvolution, VolumetricDilatedConvolution.
5c8ecb8150 : Fix one more compatibility bug in Python 3.3
3928f7740a : Implement functional interface for Variables (torch.*)
1767f73e6b : Add generic support for VolumetricConvolution.
9e7d5e93ab : Add generic support for VolumetricReplicationPadding.
70c6ee93a2 : Add generic support for VolumetricAveragePooling.
5cbf8504ef : Add generic support for VolumetricMaxPooling, VolumetricMaxUnpooling, VolumetricDilatedMaxPooling.
9a393b023d : Add generic support for TemporalMaxPooling.
30bf464f73 : Rebase BatchNormalization.
9fb1f8934b : Add support for L1Cost.
f3f02b23a0 : Add generic support for SparseLinear.
7668cdd32c : Add generic support for DistKLDivCriterion.
f9dafdcf09 : Add generic support for ClassNLLCriterion.
d284a419c1 : Add generic support for BCECriterion.
b45844e3d9 : Add generic support for L1SmoothCriterion.
6caa7e0fff : Add generic support for MultiLabelMarginCriterion.
1669fffb8d : Add generic support for MultiMarginCriterion.
18aa86eebd : Add generic support for MSECriterion.
075e49d3f4 : Add generic support for SoftMarginCriterion.
a6695b8365 : Add generic support for MarginCriterion.
06ee48b391 : Add generic support for AbsCriterion.
fcaeffbbd4 : Fix spacing in SpatialDilatedMaxPooling.
6146a9a641 : Generic support for SpatialFullConvolution and SpatialDilatedConvolution.
83de8e40d5 : Add generic support for SpatialFractionalMaxPooling.
30590c46a3 : Generic support for SpatialConvolutionMM.
a3a5e56287 : Add generic support for SpatialConvolutionLocal.
185c96d63a : Add generic support for SpatialUpSamplingBilinear.
be61ad6eb4 : Add generic support for SpatialUpSamplingNearest.
222dfd2259 : Add generic support for SpatialReplicationPadding.
b06e1c7e1d : Add generic support for SpatialReflectionPooling.
6876abba51 : Add generic support for SpatialSubSampling.
0798466a01 : Generic support for SpatialCrossMapLRN
2cda782273 : Add generic support for SpatialAveragePooling.
7d1c9554b6 : Add generic support for SpatialAdaptiveMaxPooling.
a29d16f1a8 : Use THCIndexTensors more generally.
6d0c1c0f17 : Use indices for SpatialAdaptiveMaxPooling indices.
5ed4b5c25b : Add generic support for SpatialMaxUnpooling.
6fe89c5e44 : Fix tests
fda8c37641 : Add generic support for SpatialMaxPooling.
6d5a0ff3a1 : Get SpatialDilatedMaxPooling generic working with long tensors as index.
f8718dd355 : Add generic support for SpatialDilatedMaxPooling.
85af686797 : Add generic support for SpatialClassNLLCriterion.
0f6ec3f15f : Remove fastExpIfAvail and benchmarking from functional tests. Also fix broken IFNDEF and test whitespace.
44644c50ee : Reorganize THCHalfAutoNumerics.
9749f7eacc : Add generic support for RReLU.
d9a2bdb9df : Add generic support for PReLU.
57e678c94b : fix logsoftmax
516f127cfd : Add generic support for LogSoftMax.
e477add103 : Add generic support for SoftMax.
ba3d577875 : Add generic support for ELU.
917e4f47c4 : Add generic support for SoftShrink.
0143dac247 : Add generic support for Square.
d2390f3616 : Add generic support for Sqrt.
949ea73402 : Add generic support for LeakyReLU.
d1e2fe0efe : Add generic support for Threshold.
584ada12bf : Add generic support for LogSigmoid.
3ead72f654 : Add generic support for Sigmoid.
9ce96d3bd3 : Add generic support for Abs.
5549c003d9 : Add generic support for HardTanh.
46105bf90b : Add generic support for Tanh.
73ce3b3702 : Add generic support for SoftPlus.
1c6225dc2f : [cutorch rand2gen] move geometric to generic
44874542c8 : fix printing in console (#208)
31f2846aff : [cutorch rand2gen] move multinomial to generic
bc08011e72 : Don't longjmp out of omp loops in unpooling modules
7cccc216d0 : ArgCheck that dilation parameters are > 0.
09493603f6 : Change optimizer API
e799bd0ba9 : Restrict in-place autograd ops to disjoint variables
40247b0382 : Fix torch tests in Python 3.3 and 3.4
cd2e9c5119 : [cutorch rand2gen] move cauchy to generic
0b6f7b12b1 : [cutorch rand2gen] move exponential to generic
86e42ba291 : Adding truncated tensor printing (#202)
e0a18cafd3 : Don't longjmp out of omp loops in unpooling modules
8c2f77cab6 : updated autogen docs
c1bd6ba1e1 : Zero-initialize outputs for BLAS functions
df59b89fbb : Add more optimizers
8fd9cc160c : [cutorch rand2gen] move normal to generic
28e3f07b63 : adding apply function
513d902df1 : adding __repr__ for nn
fce14a9f51 : [cutorch rand2gen] move bernoulli to generic
884107da01 : [cutorch rand2gen] move uniform, rand to generic
caa79a354a : [cutorch rand2gen] make sampleMultinomialWithoutReplacement utility function generic
5bb873a2fe : [cutorch rand2gen] make sampleMultinomialWithReplacement utility function generic
bc0442d7df : [cutorch rand2gen] make sampleMultinomialOnce utility function generic
cfcd33552b : [cutorch rand2gen] make renormRowsL1 utility function generic
5f6b9fd5ba : [cutorch rand2gen] introduce THCTensorRandom.cuh, move and templatize simple binary search function
469dce4a2d : skip test_scatter_gpu on no CUDA
55d32de331 : Fix bugs in torch.legacy.nn and add regression tests
4491d2d3cb : Expose ger, mv, mm, bmm as tensor methods
246d5f37c7 : THC UVA Allocator
4def4e696b : fix result type
b6e58c030a : enable dot for CUDA_HALF
e3e786e35e : Move source code checks from __getstate__ to torch.load (#200)
104b502919 : ArgCheck that dilation parameters are > 0.
a18cd3ba92 : ArgCheck that dilation parameters are > 0.
93bcb2e7ba : making dot have an accreal return type (consistent with CPU)
ebc70f7919 : Look for libcudart in default CUDA installation paths (#195)
3e5c121c56 : Adding !!inc to cwrap and splitting up TensorMethods.cwrap (#197)
e644f6ed2c : Add supporting code for CUDA IPC
551a7c72f3 : Fix multiprocess serialization with "spawn" or "forkserver" (#198)
05b121841e : Add more size checks and improve some LAPACK error messages
103e70ccc5 : adding cuda types for tensor methods (#194)
ec7ecbe2dd : Fix compile error on freebsd
1234e434fa : TH_INDEX_BASE for nonzero
2d374f982e : Changes for ccache nvcc support
4e73630a95 : Fix criterion backward, that was modifying grad_output shape
e867baa5f9 : Accept file paths in torch.save and torch.load
04b750cb52 : Improve Parameter's __repr__
97c7b12542 : Fix Variable __setstate__ refcounting bugs
f16f68e103 : CMake: Install generic/THCTensorMathScan.h
4b7f8f9b77 : adding notes for compiling from source
9969d50833 : fix for CPU-only builds
7355c63845 : adding multiple types for dist
16cac6442a : adding multiple types for cumsum, cumprod
5009ae5548 : adding multiple types for pow, trace, diag, tril, triu
32647e285e : implement torch.nonzero
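`torch.nonzero` returns one row of indices per non-zero element, scanned in row-major order; in plain Python the 2-D case amounts to the following (a sketch of the semantics, not the C implementation):

```python
def nonzero(matrix):
    """(row, col) pairs of non-zero entries, scanned in row-major order."""
    return [(i, j)
            for i, row in enumerate(matrix)
            for j, value in enumerate(row)
            if value != 0]
```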
6df334ea68 : Improve potrf error message. (#189)
f8501042c1 : Make _requires_grad Variable attribute writeable
be085b8f6c : Allow marking non-leaf variables as non-requiring grad
ef557761dd : Allow to not use all function outputs in autograd
15377ac391 : Copy Module._buffers in nn.parallel.replicate (#180)
ad5fdef6ac : Make every user-visible Tensor have a Storage (#179)
0cb5943be8 : Fix NCCL reduce_scatter in Python 2.7 (#183)
fb593d5f28 : Fix bugs in variable __setitem__ and improve __getitem__
645c913e4f : Print GPU id for CUDA tensors
b4f4cca875 : Rename training and evaluation methods
6027513574 : Add support for indexing with numpy types
849188fdab : Fix multiprocessing
a9c14a5306 : Remove unused code
2da36a14d1 : Clean up cuDNN code and fix chooseBackwardFilterAlgorithm
2ee451f5f7 : Build in Release mode
f2d7e94948 : Use torch.Size for Tensor sizes and tuple for strides
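With sizes exposed as a `torch.Size` and strides as a plain tuple, a contiguous row-major tensor's strides are fully determined by its sizes. A hypothetical helper showing the relationship:

```python
def contiguous_strides(sizes):
    """Row-major strides: each stride is the product of all sizes to its
    right, so the last dimension is the fastest-varying one."""
    strides = [1] * len(sizes)
    for i in range(len(sizes) - 2, -1, -1):
        strides[i] = strides[i + 1] * sizes[i + 1]
    return tuple(strides)
```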
2031dfc08a : Add hdot support for CUDA 8.
34ede14877 : Fix compile error due to THCStorage change
e2458bce97 : Add Parameter class to nn
ae9789fccc : adding input / output / member sections to the docgen
45ef25ea27 : fix rnn documentation typos and format
ad2d413c0b : Add C++ bindings for cuDNN (#167)
30924ff1e0 : Fix test_nonzero flakiness (#173)
383c48968f : Add support for indexing with ellipsis (#172)
bbe8627a3f : Use 'void' for no-arg functions
2bd36604e2 : Fix no-arg function prototypes
9ed47ef531 : fix bug in mmapping
139f98a872 : pushing THCState back to the header
c825895190 : Make KwargsPlugin output deterministic
42e835ebb8 : Add sameGPU checks to BatchNormalization (#361)
a7d5fdf54e : Add integer indexing for MultiLabelMarginCriterion.
3b4e41f6ec : Add integer indexing for MultiMarginCriterion.
5505e1de7d : Store the device in THCStorage
6d329e418b : allocator updates
3a11afb57f : some bugfixes for THC
df86e02c9e : update nn docs
deebc1383e : Show exponent when printing vectors
19f2f1a9d3 : Buffer values when constructing a CUDA tensor from a sequence
4dc13ecdd8 : Make tests deterministic
b4b6e356ef : Fix clang warnings
9000f40e61 : Add torch.from_numpy
f137c0c05a : Improve error messages of stateless functions
b43a02a9aa : Make random 0-based
30be715900 : Add training and evaluation to torch.nn
71cf8e14cb : Fixes in torch.legacy.nn
ffd4863b23 : Don't build nccl on macOS
4c17098bb8 : Fix platform detection in torch.cuda
bcfdd18599 : Fix python2.7 compatibility and check cffi version in ffi utils
067662d280 : making .numpy return writeable arrays (#164)
12de115305 : Fix Lua->Python logic in legacy.optim
b5d13296c6 : addressing comments
86288265ad : Adding rnn cell library
a559d94a44 : docs and such
1eb6870853 : add nobias option to rnn
f88c3e9c12 : fix some missing features in pytorch needed for RNNs
942ca477a6 : Copying weights for CUDNN
b0e33fb473 : cudnn + THNN match with parameters
d58b627b98 : CUDNN RNN bindings
b85fc35f9a : Fix for versions compiled without CUDA support (#155)
bcb466fb76 : fix bug with numpy conversion and storageOffset > 0 (#154)
6db721b5dd : Make DataLoader preserve the ordering of the dataset (#135)
140c65e52b : fixing python setup.py clean
b66a4ea919 : Add THNN_CHECK_DIM_SIZE_INDICES to avoid pointer conversion warnings.
d3d59e5024 : Indices for nn.
5285da0418 : Use index types for SpatialAdaptiveMaxPooling indices.
a76e69d709 : Use index types for Max Pooling / Unpooling indices.
4d0d775d16 : Add generic type support for toDeviceTensor.
98f67e90d5 : Fix super call in Container.modules and Container.parameters (#142)
8def54e82b : Fix BN in test phase
58a0ec4b4f : Move algo finding to cudnnFind*Ex
fee67c2e1a : Allow parameters and child modules to be assigned by attribute (#136)
c295f26a00 : Support async argument to Variable.cuda (#137)
8a09c45f28 : Fix typo
2909bfaca0 : Add priority streams for NCCL ops
79ead42ade : Add CUDA Stream and Event API (#133)
94e52e1d17 : Fix Variable.cat
3931beee81 : Use THSetNumThreads instead of omp_set_num_threads
1a3920e5dc : Expose OpenMP num threads through TH lib
ffc3eb1a24 : Exclude THNN Linear in favor of Python implementation
2f5d4a7318 : gcc 5 + cuda < 8 workaround improved
70553f4253 : gcc 5 + cuda < 8 workaround improved
8d39fb4094 : Use new THC API for device allocator
b01c785805 : Fix cutorch.getStream()
0eea71f878 : torch.cat for multiple cuda types
4bc585a2fe : more improvements on error messages and shape checks
429f2d6765 : fixes to upsampling bilinear API
9cd68129da : fixing typo
aa6f6117b7 : Ported Linear module to THNN
ee14cf9438 : Add support for pinned memory: (#127)
0391bbb376 : Fix `view_as` and `view` for empty tensors (#128)
28ada0c634 : update md docs
2c233d23ad : Add stream API that is not based on indices
59c628803a : fixing padding_idx option
f30081a313 : Use NCCL bcast and reduce functions in comm
c15648c6b5 : Add NCCL build scripts
a02917f502 : Fix typo
70d8bd04c0 : Make cuDNN descriptors extend object
ad2cee0cae : Fix caching allocator when used from multiple Lua threads
756a7122ad : torchdoc
3d6ebde756 : qr and ormqr tests and bugfix
daa30aa992 : fix typo
39459eb238 : make cunn compile with msvc && fix compilation failure for linux/mac os
0325e2f646 : Major autograd refactor
93b8b5631f : Improve CUDA tensor constructor speed
60ab1ce0c1 : Stop using contextlib for device and device_of
2f186df52d : removing CUDA_HALF_INSTRUCTIONS and enabling hgemm only for P100
452e07d432 : Revert "change to work on windows && replace long with ptrdiff_t"
05d1404b9c : Revert "changes to make cunn compile on windows with msvc"
534b9a1697 : Bump to 1.3.1
b2781d0501 : Fix primitives function prototype
bf7d1514f7 : NVML (libwrap) : import the needed definitions
2acee24332 : Add keyword argument support to most tensor functions
e7639e55f8 : change to work on windows && replace long with ptrdiff_t
f978eca477 : change to work on windows && replace long with ptrdiff_t
eb3ac2b367 : changes to make cunn compile on windows with msvc
968d386b36 : Make atomicAdd functions static inline.
38cb3d0227 : Fix build when NEON is supported
44509f9f91 : fbsync: mostly lint changes, added mkl files
6f606dd5f9 : updating nn docs
bab616cf11 : Fix OOM error message in tensor constructor
966adc6291 : Simplify torch.cat
518cb6ec7c : Allow specifying output size in MaxUnpooling
34bcd4c237 : Rename FullConv to ConvTranspose and allow specifying output size
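Allowing an explicit output size in unpooling and transposed convolution matters because, with stride > 1, several output lengths map back to the same input length, so the inverse is ambiguous. A hedged sketch of the 1-D size arithmetic (function name is illustrative):

```python
def conv_transpose_output_size(in_size, kernel, stride, pad, output_size=None):
    """Smallest valid output length of a 1-D transposed convolution; any
    requested size within [min, min + stride) is also reachable."""
    min_size = (in_size - 1) * stride - 2 * pad + kernel
    if output_size is None:
        return min_size
    if not min_size <= output_size < min_size + stride:
        raise ValueError("output_size out of the valid range")
    return output_size
```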
50326e94b1 : try cudnn 5.1.5 and 5.1.3 in that order to load them up. This is needed because cudnn for cuda 7.5 ships with 5.1.3 and cudnn for cuda 8.0 ships with 5.1.5
160723b5b4 : fix cudnn lib name
7991125293 : Improve error messages
96f61bff30 : Add LAPACK functions
a94488f584 : replace long with ptrdiff_t for memory size/offset, element count
9201cdd029 : nervana build files
2535720654 : fbsync
f2cf673d3a : fix tensor printing when the tensor is a view into a giant storage
c4595a3dd6 : [cutorch refactor] addcmul/addcdiv to generic
d1e9215184 : fbsync
5db118e64b : Update LogSoftMax to work in spatial domain
8bb06c94be : Improved allreduce segmentation for small sizes
1620c56808 : [cutorch refactor] cmin/cmax to generic
e88e0026b1 : [cutorch refactor] make dist(...)'s op generic, add missing unit test
ace9b49e28 : [cutorch refactor] move cross(...) to generic
da90751add : [cutorch refactor] move lerp(...) to generic
8cc566f7b5 : [cutorch refactor] move clamp(...) to generic
02ad199905 : [cutorch refactor] make var(...) generic
c3e0811d86 : [cutorch refactor] cleanup code in prep for review
499d1c5709 : [cutorch refactor] fixes for norm, wrap/test
cf16ec45e1 : [cutorch refactor] move stdall into generic, wrap test for std
daa15dcceb : [cutorch refactor] move varall into generic
32556cbe5e : [cutorch refactor] move normall to generic
74d9c674f5 : Make _norm(...)'s ops generic
a4da558fa0 : [cutorch refactor] move mean function into generic/
dba6d1d57f : Make _norm(...)'s ops generic
b01c4338c9 : [cutorch refactor] move std function into generic
811d947da3 : [cutorch refactor] move renorm function into generic
de7bf7efe6 : [cutorch refactor] move std function into generic
5537df9927 : [cutorch refactor] make _renorm(...)'s ops generic
81fea93741 : [cutorch refactor] move std function into generic
df1065a2d8 : Move _std dependencies into THCTensorMathReduce.cuh
c2e3bf2145 : [cutorch refactor] move meanall function into generic/, update cwrap for lua mean
a4d849ef68 : [cutorch refactor] move mean function into generic/
957c9f3853 : Move atomicAdd functions to THCAtomics.cuh in order to share definitions with other projects, e.g. cunn.
00c493864e : Fix BN for test phase
5d70feb573 : bug fix for wrong usage of checkGPU && port to windows with msvc
a22af69335 : Add versioning and shared storage handling to autograd (#105)
1213149a2f : add bias option to linear; allow modules to return nested lists/tuples of tensors (#106)
398b6f75cd : update nn.md
e46e05e7c5 : fix container doc
166028836d : Ignore graph parts not requiring gradient in engine
3cbe66ba8c : Change requires_grad default to False
99de537a2e : Remove CUDA sync points from losses and trainer
1d0afdf9f7 : Make requires_grad read only (except for leaves)
4db6667923 : Allow specifying per-parameter optimization parameters
80e16e44aa : Check container source on load
58b134b793 : Allow exporting optimizer state as a dict
6efefac2df : Add parameter_dict and load_parameter_dict methods for modules
0c9670ddf0 : Allow remapping storages at load time and serialize data in little endian order
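Serializing in a fixed little-endian order makes saved files portable across hosts of either endianness. A minimal sketch with the standard `struct` module (illustrative only, not the actual torch.save format):

```python
import struct

def write_int64_le(values):
    """Pack 64-bit ints little-endian ('<q') regardless of host byte order."""
    return b"".join(struct.pack("<q", v) for v in values)

def read_int64_le(data):
    """Unpack the same fixed layout back into Python ints."""
    return [struct.unpack_from("<q", data, off)[0]
            for off in range(0, len(data), 8)]
```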
53c65ddc6a : Fix memory leak when constructing a tensor from numpy (#98)
33371c5164 : ffi tests skip on cuda
64dd1419c5 : Fix Variable indexing bugs (#96)
108068a417 : python 2.7 fixes
6e8ed95ada : fix compilation error: 'for' loop initial declarations are only allowed in C99 mode
39c9f9e9e8 : replace long with ptrdiff_t for memory size/offset etc
b555588f5d : Make THNN lazy init thread safe
47ef4bb0a0 : Fix memory leak in torch.cat
25c51c49aa : adding stdc++ static linking on TH_BINARY_BUILD=1 always, because caching allocator uses c++
833bedb46b : cudnn relative check in binary builds
3d8eba7b42 : updating readme with new info
ab0e86ae4b : fix arm neon bug
94b35312d0 : Compile fixes for picky compilers / stl versions (#518)
f4ebc65a12 : Add Module.modules() and Module.children() (#90)
2bc9da4f5e : Support "device" keyword argument (#79)
e034f258e3 : Fix ffi utils in Python 2.7
112df5f664 : Fixes to trainer and data loading
3564b77553 : a couple of changes for win32 (#779)
c813e93d85 : fixing python 3 compat
ea4f812a12 : Fix Container.parameters()
dbe540e49f : Use the custom TH error handler in all threads by default
c1c0969834 : Allow changing the default error handler for all threads
b87f26ce26 : windows high resolution timer with a few makefile changes (#776)
67335e638c : bug fix for read/writeLong in THMemoryFile
90916f34a7 : fix cpuid ecx; change to compile with msvc
11b38a6895 : Add more functions to autograd
a1f5fe6a8f : Add multiprocess data loader + improvements to torch.utils.data
7dd28b885d : Allow changing the default error handler for all threads
c20828478e : Update Module.cpp for THC changes
e8a5f00866 : Auto GPU for CUNN (#71)
d92b7da733 : fix documentation to not use forward
7ff16baa7d : Use breadth-first in ExecutionEngine (#72)
93e60715af : Fix error message
14965cfce9 : Run cuDNN operations on the correct device
da1e3f084d : Fixes for https://github.com/torch/cutorch/pull/519
0b0a62420c : Make some basic THC operations thread-safe
c92c82aa1a : Really fix utils tests...
4742c08c7c : Improve error messages in autograd
9c6ced1c0a : Disable ffi tests if cffi is not available
a33c9bd774 : Improve argument matching in invalidArguments
c8a4734b97 : Add RReLU to both nn packages
3f7ab95890 : Finish implementation of prng related functions
2d8c2972ae : Only allow leaf variables as module parameters
941cf4e63d : Add ffi utils for user C extensions
57610a7471 : Fix documentation for MaxUnpool2d (#68)
f5a6a3b0e9 : Fix torch.nn.Module._apply with None types (#66)
bab7f89cdc : Fix no_bias constructor for conv2d (#65)
cb5d4e836f : Lazy load CUDA and THNN modules (#64)
3a5544f060 : Add support for GenerateFloatTypes, for use with cunn.
412019dbe4 : fixing CPU builds by making cuda imports optional
f9d9c92560 : Fix type conversions in autograd
7f4ff0e615 : Fix type conversions in nn
3eac7164f4 : Add data parallel functions to nn
f9d25e8e72 : Refactor nn (require specifying parameters explicitly)
52ed57352a : Free GIL in C functions
1828e7c42f : Add async CUDA copy
2c89ae4e8a : Rename getDevice to get_device
779a460030 : Add cuDNN support for convolutions (#36)
0312f939d6 : Only set c++11 compiler flags on THCCachingAllocator.cpp
60a8a9e918 : improving error messages in nn
89666fc4fe : Fix SpatialLogSoftMax memory leak and code cleanup
44527ab5be : fix c++11 flags thing
a0cf6658c5 : windows high resolution timer with a few makefile changes (#776)
5107f23126 : fix ClassNLLCriterion targets in tests and legacy nn
c020a8502b : making ClassNLLCriterion targets consistent between cpu and cuda
44481354fc : Add back support for child=None in Container constructor (#55)
4e9f0a8255 : Use CUDA caching allocator
85bd287b7b : Add THC_CACHING_ALLOCATOR=1 to README.md
0eff3897e3 : Update SpatialLogSoftMax kernel to use cuda dimensions
e26e35a9ee : bug fix for read/writeLong in THMemoryFile
980300b381 : Combine autograd.Leaf and autograd.Variable (#52)
1cf87e8a0b : OSX + Python 2 build fixes
817d860af5 : Add CUDA caching allocator
0be5031a93 : Pretty print type mismatches in error messages
1ed488da4f : Make custom precision of CUDA tests work in inplace mode as well
ddf1598ef8 : Add a method for catching exceptions thrown in ctypes
4a8a185aa4 : Preserve storage view sharing in torch.save and torch.load
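Preserving view sharing means a storage referenced by several tensors must be written once and referenced thereafter, so the views still alias the same memory after loading. A toy memo-table sketch of the idea (names are hypothetical):

```python
def dedup_storages(tensors):
    """tensors: (offset, storage) pairs, where views may share a storage.
    Returns the unique storages plus (storage_index, offset) records, so a
    shared storage is serialized once and re-linked on load."""
    storages, memo, records = [], {}, []
    for offset, storage in tensors:
        key = id(storage)
        if key not in memo:
            memo[key] = len(storages)
            storages.append(storage)
        records.append((memo[key], offset))
    return storages, records
```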
4cdeae3283 : Return only unique variables from parameters()
5030d76acf : Reduce precision of CUDA blas tests
c51e2c8b8c : Rename CELoss to CrossEntropyLoss
eec0420eb3 : Initialize nn modules' parameters with a default tensor type
e66ea56bb3 : Improve THNN tensor type mismatch error messages
eefa0c7b40 : Require torch.nn.cuda automatically when calling .cuda()
a489884da4 : Reduce precision of addmm CUDA test
7a74d3fc9e : Fix dl flag module in python>=3.6
e71204b52f : Improve error messages in storage and tensor C functions
ca330b110a : Add scan tests
6c77476cc1 : Make tests check for deltas and report bandwidth
cabd6848e4 : Heavy code refactoring to remove a lot of code in collectives (~1000 lines).
e3dbc6110e : Add profiling API
1d6715fe20 : Fix MPI test path
06ab3f962f : Refactor _C extension to export some utilities
df77a8a81a : Update LogSoftMax to work in spatial domain
94b7c32eb3 : compiling double atomicAdd only if CUDA_ARCH < 6000, because it's now included in CUDA
8fdec15a55 : Codemod to remove camel case method naming
e8b1217b28 : Use bitwise operations for atomicAdd rather than byte_perm or pointer dereferences. Also properly check that half is enabled.
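The hardware has no 16-bit atomicAdd, so the half variant is emulated: extract the 16-bit lane from its containing 32-bit word with shifts and masks, add, reinsert, and retry under a 32-bit compare-and-swap. A Python sketch of just the bit-twiddling step (the CAS retry loop is omitted; '<e' is IEEE half precision):

```python
import struct

def add_half_in_word(word, high_lane, delta):
    """Add `delta` to the float16 stored in one 16-bit lane of a 32-bit
    word, returning the updated word."""
    shift = 16 if high_lane else 0
    old_bits = (word >> shift) & 0xFFFF
    old_val = struct.unpack("<e", struct.pack("<H", old_bits))[0]
    new_bits = struct.unpack("<H", struct.pack("<e", old_val + delta))[0]
    # clear the lane, then OR in the updated bits
    return (word & ~(0xFFFF << shift) & 0xFFFFFFFF) | (new_bits << shift)
```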
f56f06d88d : fix cpuid ecx; change to compile with msvc
0f7a1e27d0 : updating auto-generated docs
5114d94ad9 : docstrings for conv, dropout, linear, pooling and sparse functions
f74c42bf00 : Slightly improve THNN error messages
a8e816f450 : Fix maskedSelect test
a90c259eda : Add myself to LICENSE file
ff8a5bb532 : move third_party/gtest to use git submodules. As a result the folder name is now googletest
b91a6795ed : move third_party/eigen3 to use git submodules. As a result the folder name is now eigen.
04f628446e : move third_party/google/protobuf to use git submodules.
e223564a55 : Fix multiprocessing on OS X
7847d77405 : Add more functions to autograd
089d223922 : Add support for CUDA indexAdd
930085ec9c : fixing doc2md for code blocks
b5f7720ab9 : docstrings for container and batchnorm
a0a2d9885a : adding docstrings for activation functions
0557143801 : adding markdown docs and docutils
64a15928d7 : Fix tests on python 3.3
d1fda539b7 : Fix nn serialization errors
95d545e75b : Fix a multiprocessing GIL issue
491fbfdc8c : Improve error messages of tensor methods
5d24432322 : Fix errors when printing tensors with inf and nan values
da5bb373e6 : Type conversions now use auto gpu
6584b35db2 : Add getDevice for CUDA storages
e5874ea40d : Add getDevice for CUDA storages
4bad029fd4 : Add more functions to autograd
9b7d47935a : Fix mode
19ec206bad : reducing tolerance in cumprod unit test
fb39971464 : Add more modules to nn
3ea1da3b2c : Minor fix in CUDA module
a0fb1ab86e : Reduce precision for addmm and rsqrt CUDA tests
f2cb3e0b7b : Mark BCECriterion weights as optional in THCUNN.h
5206c98c1b : Mark BCECriterion weights as optional in THNN.h
420c2ae745 : adding storageOffset in VolumetricConvolutionMM
874179a871 : Accept 5D weights in VolumetricConvolutionMM
1d9b10d312 : libshm needs static libstdc++ on binary build
ccf7a3043f : fixing MaxPooling for changed THNN interface
96cd92a0a9 : fix OSX build
31c45e0a08 : adding verbose run_test.sh
65d4055366 : adding static linking on binary builds
1f2695e875 : adding cuda driver check functions for runtime checking
59556d0943 : modifying wrappers for new cuda math functions
9842be4b15 : setting default dampening value to 0
eb6419f02b : adding binary build options
a6d1f6aee4 : making sure cusparse is linked when MAGMA is
73d15cf643 : moving arch detection into THCUNN
a4dd9d0b86 : spit out CUDA_NVCC_FLAGS
49cd6f99d8 : only enable CUDA half instructions above 8.0
159cba815b : fixing bug in sign for ByteTensor
cbb76eae04 : add int and long tests for abs and fix recursive bug in abs
21a189fee6 : fixing bug in frac. fixing pointwise unit tests to not just return range of 0, 1
788ff68c1f : prevent Unknown CMake command "check_function_exists". (#761)
009667c26c : adding generated files
d9f8f39a9a : making more files to split between types
1fe27380f0 : refactoring THCTensorMathCompareT to split types compilation
621d6a4475 : splitting sort compilation into individual types
30700ede39 : removing a lot of template instantiation in sort
8a76dc8b59 : Add missing PyBuffer_Release calls
f646391f26 : Bug fixes and test improvements
1703f2abed : Add utils tests
ee85fe1a9c : Initial utils implementation
3d54e7b40e : fbsync: changes to implement operator schema
0a09d09431 : fbsync
0703e0e897 : BCECriterion THCUNN + Weights (#331)
3d6b805652 : Make travis use run_test.sh
58f507f9e3 : Add file descriptor sharing mode to multiprocessing
24fe4b8af3 : Initialize THVector dispatch tables
90a7a79a19 : Don't check memory permissions with write on OS X
64938f75b3 : Add incref/decref for THRefcountedMapAllocator
043be6f55c : Fix shm_open and shm_unlink cmake test
cef9bf7f29 : Fix segfault in THMapAllocator
dac5a0a07e : adding generic/THVector.h to cmake
221a82bdad : more build fixing
76ac35cdaa : Added dynamic dispatch for x86 (#755)
cd0929aa5e : Use chainer-style constructor for Conv2d
1486d880b0 : Add Storage.from_buffer
b738b09606 : Clean up Module forward and __call__ (#14)
7188f0851d : add link to conda binaries
47700a4c20 : fixing run_test.sh to switch to it's own directory before running commands
3aeb0f5408 : adding test file
07d1acd798 : add torch license
4cffa2219a : build fixes for OSX
50274cf49e : fixing build - to be verified on a GPU machine
b23e51d467 : chunky sync
11852e5c22 : Add new flags for THMapAllocator
eb22208a4e : changing badge links
bf8b4779cc : updating README
0e7ab61a83 : Remove unnecessary files
3e4ddf25d0 : Fix storage printing
a5504231b5 : Making a build matrix
c63d042c57 : change language in build readme
70c8d43831 : fix link with http
5837d3480c : adding CUDA build badge
9553e46ed7 : Make bias optional in Conv2d
34b673844f : bumping to alpha-2
a2a840df93 : updated readme with multiprocessing changes
5b23b67092 : Fix build status link
f9d186d33a : Add initial version of multiprocessing module
8f3f90a986 : inplace is reversed in HardTanh:backward.
c7a66ddf74 : Add new shared memory allocator to TH
b60d7e1476 : fix addmm and addmv documentation when they are methods
4a1e099974 : Add new shared memory allocator to TH
4bbe5a095d : HardTanh does not use inclusive bounds in inplace mode
f45213a276 : Simplify nn.Container and nn.Sequential
959304bc0d : Make BatchNorm2d inherit from BatchNorm
ec22828169 : Add torch.nn.AvgPool2d
19c0474ae4 : Add Volumetric Dilated Max Pooling new file: VolumetricDilatedMaxPooling.lua modified: init.lua modified: lib/THNN/generic/THNN.h copied: lib/THNN/generic/VolumetricMaxPooling.c -> lib/THNN/generic/VolumetricDilatedMaxPooling.c modified: lib/THNN/generic/VolumetricMaxPooling.c modified: lib/THNN/init.c modified: test.lua
ddc0c61692 : VolumetricDilatedMaxPooling modified: lib/THCUNN/THCUNN.h copied: lib/THCUNN/VolumetricMaxPooling.cu -> lib/THCUNN/VolumetricDilatedMaxPooling.cu modified: lib/THCUNN/VolumetricMaxPooling.cu modified: test.lua
1fbcb44ae6 : make abs and sign available for a few other types
7eaaf57d2b : adding multiple types for log, log1p, exp, cos, cosh, acos, sin, sinh, asin, tan, atan, tanh, sqrt, rsqrt, sigmoid, cinv, ceil, floor, neg, abs, sign, round, trunc, frac
fc643f2407 : append TORCH_NVCC_FLAGS optionally
234c8c9ef3 : Update LICENSE.txt
75bad643bd : Updated LICENCE.txt
229e3ec184 : Consistent Max Pool API renamed: lib/THCUNN/SpatialMaxPooling.cu -> lib/THCUNN/SpatialDilatedMaxPooling.cu modified: lib/THCUNN/SpatialMaxPooling.cu modified: lib/THCUNN/THCUNN.h
6ba435f822 : Make SpatialMaxPooling API consistent modified: SpatialDilatedMaxPooling.lua modified: SpatialMaxPooling.lua renamed: lib/THNN/generic/SpatialMaxPooling.c -> lib/THNN/generic/SpatialDilatedMaxPooling.c modified: lib/THNN/generic/SpatialMaxPooling.c modified: lib/THNN/generic/THNN.h modified: lib/THNN/init.c
774a6f1093 : Add in-place operations to autograd and nn
814728a6aa : Improve hook system
7654698a0e : Fix ExecutionEngine bug and improve autograd tests
ff785e5f17 : Make optimizers accept a closure
cc645de37b : MaxPooling2d -> MaxPool2d
cc62ee229e : Fix torch tests
24476090df : Add volatile variables
2aee2d312f : bumping to alpha-1
ee838422f7 : Add an error when iterable loop is most likely infinite
ea93fb7ac0 : Add more nn modules
7bcb2a4081 : Initial optim version
2bf68e72d5 : Add hook system to autograd and nn
75579fcabd : Fix Log autograd test
686e8d32e2 : Add torch.save and torch.load
8d933cbfc4 : Fixes for OS X
5fef59beb4 : fix critical bug in SpatialConvolution
1d0fe075eb : fixing critical bug in SpatialConvolution
d467a068c2 : Add tests for new modules
e055ffbdc7 : Add nn
53f00ae429 : Add autograd
78a958ab61 : Update and fix bugs in legacy nn
4c51a523c8 : Add super basic CUDA autodetection
e47cf3a966 : Make Threshold THNN functions more consistent
47b0797fe1 : pass devlist as const int* rather than int* in ncclCommInitAll
ed401cc29b : link library with -lrt; otherwise there is undefined reference to shm_open
34ea3fd17e : Fixing bug in regex not accepting 2.1(2.0) notation
cbeceb0bea : Make Threshold THCUNN functions more consistent
cfc41a7cf2 : Accept both 2D and 4D weights in SpatialConvolutionMM
c9a10d6bc7 : Accept both 2D and 4D weights in SpatialConvolutionMM
d7e798c15c : Add new nn description to README
e382d40709 : Fix tests
ef081526fc : Fix Tensor operators
b06c000478 : Fix <3.5 compatibility and travis configuration
eaa24dc7c8 : adding requirements.txt
34c669222d : adding travis yaml
2fa6d70336 : readme
1b693c3845 : changes to README
e6953000e8 : Add tests for copy and pickle + make CUDA optional in legacy nn tests
207d6ae60d : Override build commands in setup.py
794928fadc : Update README
288280aa8a : Implement basic copy and pickle protocol for Tensor and Storage
1902bc0bfb : Interface with numpy
ea38a7e7cd : Fix many spelling errors with tool `codespell`.
68b400fbf3 : fix addmm and addmv documentation when they are methods
9fff8e7392 : Fixes for changes in libs
c08213a3fb : Use TH_INDEX_BASE in THC
ef7364b80e : Fix Python 2.7 compatibility
3d80b1371a : adding optim tests
624dd3e10c : fixing optim (tests pass)
1e905eb4d5 : copy -> copy_
781a6a9083 : CUDA version of Spatial Dilated Max Pooling modified: lib/THCUNN/SpatialMaxPooling.cu modified: lib/THCUNN/THCUNN.h modified: test.lua
458d3a5b34 : Add Spatial Dilated Max Pooling new file: SpatialDilatedMaxPooling.lua modified: SpatialMaxPooling.lua modified: init.lua modified: lib/THNN/generic/SpatialMaxPooling.c modified: test.lua Updated THNN modified: THNN.h
12bed8dc0d : Add CUDA device selection
a57ab07261 : fix typo from #462
c3a39d0ef3 : fixes for multiple cuda types
f621210eaa : added multiple types for sort
fa6e5c5bff : Update tests and fix CosineEmbeddingCriterion
df1c890a10 : Fix DistKLDivCriterion gradInput formula
959b12f1b8 : Use TH_INDEX_BASE in THCUNN
d64d3c3a30 : Mark optional arguments in THCUNN.h
053c447a59 : Fix sub and div for integer types
c59b3c8b9c : fixing sort to use long indices
ff00cdd728 : Add cunn tests
5a3997789c : Fix a bug in printing
1a57979f41 : Add cutorch tests
edb9ed2311 : fix several spelling errors
30df3ea23a : cmake: add soversion for libTH
8288a0a2e3 : making changes to sort and TopK for the changed index* API
b9b4d390a6 : Fix tril and logical operations when input == result
862e35af25 : template magic
1fc9830b59 : make android build again
1543b36a4b : adding indexing for types
9c459be63a : minor fix to make things build
05512d1e10 : sync
e344fae6a1 : Fix THCUNN.h formatting
e9f9fd3727 : Major refactor
f9b7416efe : context_gpu.h bugfix
e2c3ee629d : easy: change templated argument to capitalized
2e053f990c : more tests and cmake fix
9d2ec718df : scatter / gather to new types
2bfcca23e2 : Fix "invalid configuration" when using very large batch sizes in evaluate mode.
c4eea1f2f6 : Remove the no-shrink test as the criteria is not guaranteed to be satisfied. Minor fix for others.
bfe76b2be4 : cuda memory pool implementation cleaning: both cub and cnmem
2362571b23 : add submodule cub
0952370cf5 : Move CMake files to THC
652a31b714 : Add build scripts for libraries
4fe7059a31 : Mark optional arguments in THNN.h
1395c27906 : Volumetric Dilated Convolution
d920bf1488 : volumetric dilated convolution
1ede7a7ff0 : more build updates:
9f108b5d54 : fix gemm bug when beta = 0
a78d909afa : memset the result buffers sent into mm (when beta = 0) to zero to fix subtle BLAS behavior based bugs
d7504b1f52 : Fix type checks in cwrap
6df0ae5d35 : Add cunn
92e983a489 : Fixes for Linux and new cutorch
e9531529ad : fix SpatialSoftMax bug and add unit tests
a23ee8574c : Fix THNN.h formatting
c15e45c9bb : chunky sync again
a7af924919 : Add libz to dependency list (needed by rocksdb)
956625dbc5 : Filter duplicate options in cwrap
4fffb7dc66 : Add libz to dependency list (needed by rocksdb)
59256bac6d : Add libz to dependency list (needed by rocksdb)
2f342af22f : Move optim to legacy
5c9bfe8c02 : Fixes in nn
491173b7d9 : Use TH_INDEX_BASE in THNN
d086ce43e0 : fixing small cmake bug
b85e5e6df4 : new select_compute_arch.cmake file from @borisfm
d12a358435 : Add converted nn modules
ede8216751 : Fix printMatrix
cc46464cf6 : protected legacy_pad_, replace DeleteDropout with is_test=True
3c989347d8 : caffe translator with added back legacy pooling support
9d1a0f3627 : gemm -> Sgemm
392de819f8 : reduce and BLAS work
5ab7676d20 : code fix for oss
bcea409c82 : sync
f01f2063dd : bring up caffe.proto to master
55041eaab6 : python 2.7 compatibility
27bbaf633b : New tests and container modules + bug fixes
b3a9e1333d : Remove unneeded deb build script
891114bf54 : Adding SpatialUpSamplingBilinear
2af7913d95 : Adding SpatialUpSamplingBilinear
b729f05c35 : Android build improvements
d981e79e7d : Add a simple script to help build android.
a4f544ca14 : Converted nn modules
6909d65613 : Bug fixes
e967b75666 : Fix histc float problem #718
1527ba7381 : Adding trace and diag to cutorch
55c42ad681 : Fixed redundant contexts in multi-process apps
20f4e7e8a0 : added bound checks for weights
820ce3f2df : added bound checking for weights
ae40bcd58c : Base for nn conversion
f09d2b2b35 : changes to make c2 build.
554a1d8336 : Add optim
09bed67e4f : add untracked files
bc7bd7a8b3 : Add unit tests and fix detected bugs
6463eebc7b : chunky sync - build scripts to be written
5c8292aaf4 : Fix accepted k range in THTensor_kthvalue
c613668fef : Make it possible to change index base in TH
1d763810ba : Fix optional argument resolution in cwrap
c574295012 : Various fixes
3a44259b32 : Add support for CUDA
93ed433de3 : Add rand and randn
cf90bee8af : Enable parallel builds
f09c8ab1e8 : C-impl BCECriterion
eaf03c8c4a : Fix SpatialClassNLLCriterion when using OMP
029478f160 : Fix BatchNormalization warpSum for pre-Kepler cards
7a1aa6b563 : Improved Deb generation
c82e180487 : fix for std::pow ambiguousness
2b53cce79f : Minor improvements in utils
c3072fd5d2 : Refactor cwrap
3cec305524 : Restructure python code
0ee188e7df : nobias in spatial full conv
3af4b93956 : nobias in spatial full conv
cbd4b62a9b : Added FindCUDA bits from CMAke 3.6
523a6670f4 : Improve cwrap error handling and fix memory leaks on error
8343eee7c0 : Add tensor printing
8e79e00f95 : Add arithmetic operators
d7016055fb : Fix Tensor constructor
feba835d18 : Add sub and expandAs
077bfbde03 : Add all constructors for Tensor and Storage
486ea76b98 : Add more Tensor methods
4f66ea42af : Add random-related Tensor methods
f2b2ad6fbc : inplace HardTanh, subclass ReLU6
5c667a2681 : Added VolumetricReplicationPadding.
dd439d77b8 : Added VolumetricReplicationPadding.
9ae84f5d6b : Fix version number
e51e922924 : Add a debug level to NCCL and CUDA versions at init
857c32bc21 : Add all mm methods
d5422534ae : inplace hardtanh, remove relu6
9fcc523485 : Increased version to 1.2.3
67d1ab9106 : Packaging : Generate shlibs.local
da6d2009e0 : Move deb to build directory
155132d336 : Fix make install to use BUILDDIR
08ddfe03d2 : Rework debian packaging
0eb2b9e756 : Add more Tensor and Storage methods
5d4716a8a3 : Include link to blog post in README.md
6161049a31 : Added ReLU6 implementation and test.
1882a097e2 : Added ReLU6 layer, test and doc.
fdfe9d836e : Add index* Tensor methods
a9282edf79 : Add THPPointer and more Tensor methods
aa8f669a3d : Updating for .deb rebuild
32490710dc : fixes for cutorch API changes
6a1e0c3127 : Added CUDA version of VolumetricMaxUnpooling modified: THCUNN.h new file: VolumetricMaxUnpooling.cu modified: ../../test.lua
ad1863dd3b : template work
60f9834ac6 : Add more Tensor methods
5ee3358a92 : python 2 support
d5e507fc7f : Only call the CUDA runtime. That may fix #27.
6954783d9d : Fix indexing
7c11cb9b68 : Add Tensor indexing functions
2dddec5e14 : adding some comments
7edfc57228 : Make NCCL collectives work on communicators with only one rank
bd3cf73e6e : Changed CURAND generator to work on a wider set of platforms.
177505b757 : Gencodes changed to NV recommended
76a37461bf : fixing Volumetric Average and Max Pooling for large inputs
23d41b927a : making error checking consistent, always and simple
445117e6a4 : fix memory leak in SparseLinear (#844)
795c159e7e : fix "macro redefined" warning (th_isnan) (#695)
bb658fe29b : fixes for gcc 5.xx
095c9a6974 : gcc 5.xx fixes (#421)
e2d3145d72 : Fix zeromq build in Dockerfile
9d9d8cd59f : Bump to 1.2.2
1657af1567 : Better name for GENCODE
acb93d1aed : Removing unneeded includes
889ad3d4e6 : Makefile improvements
4e3db462ed : Using new CMake FindCUDA elements (#415)
073afb31ec : fix compilation issues under Windows (#694)
455167db9a : Revert "Using new CMake FindCUDA elements" (#417)
429c7a03cb : Using new CMake FindCUDA elements (#415)
360c14b2c5 : lib prefix for libTHNN.dll is missing on Windows
801c62e9a0 : Visual Studio doesn't allow in-loop declaration in the 'omp parallel for' construct
7e5a1d9140 : Visual Studio doesn't allow in-loop declaration in the 'omp parallel for' construct
922e2a8772 : logsoftmax non-contiguous gradOutput fix
dca7299b9a : Use 3d thread blocks in SoftMax for inputs with w*h > 2^16-1
8229adc495 : Fix SpatialSubSampling (was doing non-atomic writes in backprop).
92c5a38090 : Fix baddbmm for non-contiguous results tensor
a465b67c9b : Fix addmm for non-contiguous results tensor
01b17a2dcc : Limit registers when LRNFillScale runs on TK1.
92100e4b66 : Update README.md
a9aac859dd : glog as default.
79c5275d75 : A set of changes to make newest sync build.
559053d3a8 : chunky sync
d424e34e37 : Fixing sparse linear race condition
0b61c3f233 : Add more Tensor methods
88bc821d79 : added THAtomicSetLong
56c98f7897 : Add more Tensor methods
c3f7aac4f9 : Add logical functions
47247be77c : Add printing methods for Tensors
449ac4ca2a : Add torch.* functions
d2738ca25e : Add more Tensor methods
6db57b8a4d : added correct inclusion for malloc_usable_size()
7567a0bb13 : Add cwrap
9def21ac30 : Fix memory leak in addmm
8fd390e354 : Add mean, std, var
c3b3df9f22 : Add utilities and cleanup Tensor wrappers
afefedc0b0 : Add free, retain, dim, numel for Tensor
4a5b66de9a : Add copy and type methods to Tensors
842e1b6358 : Add exception handling
cf3884c9ad : Fix addmm for non-contiguous result tensor
fa994baada : Fix Storage memory management and add Tensor methods
64dde206b6 : Add storage and size to Tensor
f4b3554d9e : Refactor generic/Tensor.c and add Short objects
d9ea7e6211 : Add copy method for Storage
ac49260792 : Fix segfaults in Storage
5de46b2a1c : Add new methods to Storage
9c1f4777d1 : Add slicing support to Storage
1fdf9f4555 : Fix THPStorage initialization
690d470c71 : Add Storage.py template
b0d90e3688 : Add templated __init__
9c18e7c990 : Fix argument parsing in Storage
731041cb6a : Initial commit
06f1c8a2e1 : Remove THExpMinusApprox from SoftMax
3a1433fb98 : Add SpatialClassNLLCriterion (with pre-Kepler support)
b0087b634d : Revert "Add SpatialClassNLLCriterion"
e1921929ad : Fix - added check to avoid invalid memory access in indexSelect_long
684f4e2b18 : Fix CMakeLists for Intel compilers
1d2d45f331 : Adding SpatialDilatedConvolution
b1abb4a311 : adding SpatialDilatedConvolution + tests + doc
100c45ac69 : MultiLabelMarginCriterion fixes for CUDA
b50305e7fe : adding CUDA version of MultiLabelMarginCriterion + tests
95a5e25228 : Fix ClassNLLCriterion buffer
ee3676b834 : Add SpatialClassNLLCriterion
b3a4c61b02 : Add SpatialClassNLLCriterion
a97ae66a1d : remove some warning when compiling TH
e5067b6611 : Fixed version in ChangeLog
0177cf3ea4 : Fixed install location, new .deb version
03df4c7759 : Moved no-as-needed flag to link rule.
74b1233917 : allow THAllocator to have a NULL realloc
ddd3f2084d : Fix readme to reflect the new test paths
dba3ec9428 : Fix random deadlock during ncclCommInitRank.
9de361a1b9 : Fix MPI test usage
c772fb7976 : few changes as required for cudnn fp16 support
45e0ac040a : add noBias for nn.Linear and nn.SpatialConvolution (#252)
cdd7fbf132 : add noBias for nn.Linear and nn.SpatialConvolution
2fc3c19a24 : add backpropagation for batch normalization in evaluation mode (#251)
4f985e9244 : BatchNormalization: add evaluation mode, add doc for nn.Jacobian
a12cb07327 : Fix _msize crash on windows xp
a82dcc7bba : Modify the backward equation in the comment
fecb7f794c : description that sort orders the 'key' tensor numerically
7f5d484210 : remove build warnings
c0c959b1be : Add --no-as-needed to make sure that cudart library gets linked
3d6258365f : Remove use of broken THInf macro, which caused UB
5a08f49995 : Fix uninitialized pointers in THCState during init.
e30bf95989 : Enable compilation with old g++ when the default g++ is not supported (+5.0)
31584607a5 : [LookupTable] Add Max-norm constraints to LookupTable (#240)
0bf2fc4650 : [LookupTable] Add Max-norm constraints to LookupTable (#739)
cf3b9cc859 : [THCTensorMath] Remove use of broken THInf macro, which caused UB
7f803a88e8 : Make THInf type-specific
15e60e9e98 : Adding cross to cutorch
c557efba3b : Remove mod() and cmod().
62a61bf986 : Add remainder() and cremainder().
95ffcb67d3 : Add fmod(), cfmod()
9665854c9e : Add missing 'omp' to pragma
a37066776a : [THCGeneral] Use `labs` instead of `abs` for long operand
29c91c80a2 : Fix SpatialFullConvolutionMap bug #521
d5e2b6bc3c : Improved SVD memory usage based on feedback
8dfbe2bf56 : Add DiskFile:noBuffer()
eb8db0a42d : Fixed Sparse Linear Bugs
3f4b46ac9e : Add potrs with MAGMA
80e413e922 : Improvements to torch.{linspace, logspace, range}.
8047e43d46 : make margin parameterizable
0da5e29063 : Sparse Linear now does sparse updates from the last input
61fd2bb070 : Adding SparseLinear for CUDA
8f24306aff : bugfix
ff9a9a34bd : bugfix
4ac4a49e58 : bugfix
ecd9507fc1 : minor changes
0190bd4fe1 : cuda memorypool renaming
e6f4a83da6 : Removing Tegra
1a8bae5b2f : fixed version format
b508d28123 : Version with . 7.5
62b551798f : Use arch=5.3 as well
dfbebe395c : Delete libnccl1_1.1.1+cuda75_amd64.deb
85280b5bf4 : Delete libnccl-dev_1.1.1+cuda75_amd64.deb
fb53cfd9b0 : Added files via upload
92d2123d8d : Added compute 5.3
ec3de28ae5 : Preparing for pbuild
86dc136fa9 : Moved to pbuilder
172f316ac2 : Moved release files to proper area
137b880aac : cuda initialization.
ffe4085dbb : fix large sorts
2682f77ba6 : kernel p2p access and non-blocking streams
a81f87b9f1 : Fix multinomial regression.
747614b1aa : In-place ELU
57133e6ae6 : Add FP16 support (CudaHalfStorage, CudaHalfTensor)
2dbc0dcb50 : Remove unnecessary rank check in potrs call.
300b0d3e6a : Fix bad size evaluation
be7cefa4a4 : Adding table input support for batched SparseLinear, implementing gradInput correctly, fixing other bugs
a2971e6a16 : bugfix
fa34452625 : eigen3 brew bugfix
4ae1bbbd7e : bugfix
0521e1d672 : notebook rewrite and grammar bugfix
a3b4b6c23c : In-place ELU
d0c71195dd : THNN -> THCUNN in assertSameGPU
cf7ca23fc1 : make caffe2.python build
9ae880bb6f : move pycaffe2 to caffe2.python
0747a4a7fd : move a bunch of things
9e5c795c11 : third party clean
c939e5e302 : Add an error message for checkGPU
5450e44cef : Move compilation flags from nn to THNN CMakeLists
ffec31ea07 : pycaffe2 c++ extension: py3
c07e0a6166 : Add missing declarations to THNN.h
1b5da38e29 : minor fix
e7c016cde6 : py3 reformat
46de5451ed : BREW modifications
176de750c8 : pycaffe2 minor fix
b589d831b8 : cudnn v4 interface change.
bd38b9c44b : Add math functions trunc, frac, rsqrt, lerp
019f52acee : Add math functions trunc, frac, rsqrt, lerp
05ead5f76f : Bugfix for logging
b2d1cc4090 : Extend torch.bernoulli()
59c25ebe03 : Fix a bug in :isSetTo
b5d4cbbecb : add test for SparseLinear, fix a spurious check
8dff00a7b8 : Fix csub #60
7bad18d2c5 : Add script that generates api_reference.md
f9013a5e66 : SoftMarginCriterion
cbee35a43b : SoftMarginCriterion
941d9da08c : Updated package version, added manpage
9d7f86a66f : Adding addbmm in cuda
f5030b49e5 : only check isnan for float types
f5dfc407b0 : fix magma v2 compatibility
708bfa9e80 : Add checks for convolution parameters
d2e6c39fed : Add checks for convolution parameters
aef48d21c4 : Making margin parameterizable in nn.MultiMarginCriterion
bfee4bcb1e : Add per-activation BatchNormalization implementation.
f581fba18d : Add VolumetricBatchNormalization
116e5a5e79 : adding SpatialReflectionPadding and SpatialReplicationPadding
291681e102 : Adding SpatialReflection and SpatialReplication padding
9c1a72d932 : sparselinear performance fixes
6ee6aedf20 : Adding SparseLinear with CUDA, requires buffer variable
e3677059ae : Remove junk value columns from right singular vectors matrix in SVD
bf2487eda8 : fix bug in indexFill introduced with screwing up upstream PR
bae6075906 : Add basic THNN docs
ec829c89c1 : Fix THNN header formatting
ee7990bc56 : adding weights to MultiMarginCriterion
81ab73f686 : added weights to MultiMarginCriterion
687400c4fd : adding paddingValue to LookupTable
bb2ec3f3ab : adding padValue to LookupTable
e731f73946 : index* fixes in Cutorch
f4bbdc7d30 : fix merge typo
5193bc3b53 : fixing previous sanity check for THIndexTensor change, and changing SpatialConvolution's reset to be more flexible wrt no bias
d07fc0e812 : adding sanity checks to ClassNLLCriterion
8215d476d9 : multi margin sizeAverage fix
a8aada667d : multmargsizeavgfix
5554a4c9f0 : Fixed useRemoteRecv consistency issue.
e305fd8705 : Reimplemented VolumetricFullConvolution in the same fashion as SpatialFullConvolution.
a5dd23e000 : Add THNN conversion of Spatial*ConvolutionMap
2fb50f9282 : Move generic/Spatial*Map.c -> lib/THNN
d30e400f08 : Add THNN conversion of VolumetricFullConvolution
e59dad9a10 : Add THNN conversion for Spatial* modules
959b6343b8 : Move Spatial* to lib/THNN/generic
d7974f8941 : Move generic/VolumetricFullConvolution.c -> lib/THNN
5a7b38a2a8 : Replacing implementation of VolumetricFullConvolution. Now works like SpatialFullConvolution.
85c277e601 : Added torch.equal function which performs a tensor equality check
275dc50223 : Add printf to __THCudaCheck fail clause, in case stack unwind itself crashes
d5867f075a : change default index for min/max to be a valid index
9442285526 : Fixed buffer overflow in ReduceOrCopy
d2e7eb1e03 : fixing build breaks due to torch redefinition
b483065c17 : Fix memory leaks in THTensorLapack.c
ff757e2d63 : Fix memory leak in THTensorRandom.c
bf7920e6f2 : Add THNN conversion of SparseLinear
d474fbd198 : Move SparseLinear.c -> ../lib/THNN/generic
6d83f928b0 : Fix memory leaks in THTensorMath.c
3d2ed21b34 : THLapack.h.in: add generic cleanup mechanism for error macros that do not return
8e4e40557a : THGeneral.h.in: add generic cleanup mechanism for error macros that do not return
be88627759 : THNN conversion of Spatial* modules
6774055233 : Moving Spatial* to lib/THCUNN
519c10573d : moving Temporal* to THCUNN
2555ac14d3 : moving Temporal* modules to THNN
da0634c124 : Fix for issue #517: torch.all
59d9cf27eb : Added torch.mod and torch.cmod functions
1f9a434881 : Initialise longSize in PipeFile
4100098440 : Remove 'dimension aliases' from VolumetricConvolution
0d5d16b3e6 : race condition fix: since Memcpy is now async, we will make sure that the python interface syncs before returning the content. Otherwise it makes things flaky.
04cc804566 : Add THCUNN conversion of Volumetric* modules
75bc8bd379 : Move Volumetric*.cu -> lib/THCUNN
897d81b7bd : Unify Volumetric* signatures with cunn
f0d9886a7f : Add THNN conversion of Volumetric* modules
d9da22e53d : Move generic/Volumetric* -> lib/THNN/generic
5e02136dcf : ARM / 32-bit fixes for SpatialConvolutionMM as suggested in #495
26e9ba84a8 : Remove unused header.
0c14165f49 : reverting b2af7eaddfc1de72661b6861115d3fdb97403bf3 because of Revert in torch7 https://github.com/torch/torch7/pull/523
fcdf3decda : Revert "Adding argmax index as second return value to max/min calls without specified dimension"
cb31ed4f99 : fix for torch7 minall/maxall changes
a0b433c8e1 : Readding the argmin argmax as second argument
a7d53e2326 : minor bufix in LeakyReLU
50874dc746 : relu and pool wip
f1c1a513ca : Add THCUNN conversions of {RReLU, Sigmoid, SmoothL1Criterion, ...}
0e63742d90 : Move {RReLU.cu, Sigmoid.cu, SmoothL1Criterion.cu,..} to lib/THCUNN
177251677d : Add THCUNN conversions of {MSE, Margin, MultiMargin}Criterion & PReLU
1a87d22109 : Move {MarginCriterion, MSECriterion, MultiMarginCriterion, PReLU}.cu -> lib/THCUNN
fe8b616552 : Add THNN conversion of {SoftShrink, Sqrt, Square, Tanh, Threshold}
e9c5c1a79f : Move {SoftShrink, Sqrt, Square, Tanh, Threshold}.c -> lib/THNN/generic
db88a2b38b : Add THNN conversion of {RReLU, Sigmoid, SmoothL1Criterion,SoftMax, SoftPlus}
1eea11e019 : Move {RReLU, Sigmoid, SmoothL1Criterion, SoftMax, SoftPlus}.c -> lib/THNN/generic
3e865d0c62 : Add gradWeightBuf & gradWeightBuf2 params to PReLU_accGradParameters
35dcbf02e3 : Add THNN conversion of {MarginCriterion, MSECriterion, MultiLabelMarginCriterion, MultiMarginCriterion, PReLU}
c6181ddd99 : Move { MSECriterion, MarginCriterion, MultiLabelMarginCriterion, MultiMarginCriterion, PReLU }.c -> lib/THNN/generic
57fb442eff : minor cmake and code fixes for missing features in android, and cross-compilation
95b39cd43a : Unify C/Cuda signatures for SpatialConvolutionMM and Spatial(AdaptiveMax,Max)Pooling
caa40b8dd3 : Libwrap checks for LIB.so.1 if LIB.so not found
2758353380 : Added NCCL error checking to tests.
fe1a956715 : Enabled support for char type to be unsigned.
c05312f151 : Moved tests to separate dir and improved MPI test
e62899bcb3 : Add THNN conversion of {Spatial(AdaptiveMax,Average,Max)Pooling}
ce427e3e6e : Move {Spatial(Adaptive,Average,Max)Pooling.c} to lib/THNN/generic
b5f91d5602 : Add functional conversion of SpatialConvolutionMM
130ed2c27c : Move SpatialConvolutionMM.c -> lib/THNN/generic
3a22385a9f : Add THNN conversion of {Spatial(Adaptive,Average,Max)Pooling} and SpatialConvolutionMM
ec1d91b2af : Move {SpatialConvolutionMM, Spatial(Adaptive,Average,Max)Pooling} to lib/THCUNN
3e467b1b21 : torch.topk: use quickselect + quicksort
1740974347 : average pooling wrapper: without this the NHWC path would throw an error as the order is not passed along.
1f50f37da3 : added missing algorithm include
5966316771 : Added support for more than 8 GPUs.
130ee246e2 : Fixed deadlock in back-to-back reduce_scatters.
5a94ee6b64 : Allow one to set the blas backend, while optionally choosing to use Eigen for the whole numerical computation (for example, on a platform where there is no optimized BLAS libraries present, or Eigen is already the fastest numerical library existing).
039ae46581 : add :cinv(), which does 1/x, addresses https://github.com/torch/torch7/issues/168
c99997d6f1 : add inv
78aa266770 : Fix
d84545c5fb : fp16: allow one to override.
5f2d7ba963 : misc: experimental cuda elementwise rtc, softmax fp16
eaf4b5920f : Minor fix in generic/THCTensorCopy.c
d244ca9052 : relu fp16 fix
fa59b90c72 : misc updates
a05782f025 : fix
d08880e61a : more RTC experiments
77b1ba84d6 : Adding cuBLAS matrix inverse
2d7deebf2a : Add torch.sigmoid and sometensor:sigmoid()
127d0309f6 : Add torch.sigmoid, and sometensor:sigmoid()
fe78d1a445 : quick rtc try
5bc33a4f7a : Minor fix
297b39344f : THNN: add missing OpenMP include
f432255759 : hard error on unresizable storages being resized
641522a750 : Add THCUNN conversion of ELU, LeakyReLU, LogSigmoid, LogSoftMax, LookupTable
bf12a06633 : Harmonize LookupTable signature with cunn impl
d49cb6613e : Move { ELU, LeakyReLU, LogSigmoid, LogSoftMax, LookupTable }.cu -> lib/THCUNN
64e0d3a29a : misc updates, mainly relu, to test fp16
f64c36366c : top-k impl and sort fixes
e2b9172b4c : print cudnn version
1c020d257b : bugfix
4aea5c746a : Revert to use IntTensor for LookupTable counts
a5a75e8005 : some changes for TX1 benchmark
66b13f4062 : A bunch of cpu stuff:
ba62b4b493 : minor changes to the build system as well as a cpu benchmark.
8bcfb30d97 : make android
809d54ee50 : convnet benchmark minor change
8c1bbaa2ab : some fill ops that are not tested.
6cb2072422 : cudnn conv op backward compatibility back to v2
778a1f6956 : speed benchmark
05eda208a5 : Last commit for the day. With all the previous changes this should give an exact reference speed that TensorFlow with CuDNN3 should achieve in the end.
896e8e5274 : pooling backward cudnn, and constant for kOne and kZero.
f8585bbf62 : cudnn pool op.
664bdf83d7 : Pooling refactor so we can do a proper cudnn benchmark.
288f350899 : math_gpu.cu bugfix
ebd6c9fab8 : muji bugfix with ngpu=4
55cced894d : Some untested half float stuff for benchmarking.
ca14b9fbe7 : Add THNN conversion of {ELU, LeakyReLU, LogSigmoid, LogSoftMax, LookupTable}
06395abc00 : Move { ELU, LeakyReLU, LogSigmoid, LogSoftMax, LookupTable }.c -> lib/THNN/generic
b5bf8113b2 : Use tensor for THNN functions even for single element outputs
c9346cdb67 : Add functional conversion of DistKLDivCriterion, HardTanh, L1Cost
1c6e1bdb72 : Move DistKLDivCriterion.cu, HardTanh.cu, L1Cost.cu -> lib/THCUNN
1256875d66 : Add functional conversion of L1Cost
73e9676ced : Move L1Cost.c -> lib/THNN/generic
754a5aaebd : Add functional conversion of HardTanh
a2cf03f12f : Move HardTanh.c -> lib/THNN/generic
bf62bd7e06 : Add functional conversion of HardShrink
582c66ff36 : Move HardShrink.c -> lib/THNN/generic
90cbf8f3c3 : Add functional conversion of DistKLDivCriterion
939f9341fa : Move DistKLDivCriterion.c -> lib/THNN/generic
26f41a5042 : Add functional conversion of ClassNLLCriterion
718f8b78ec : Move ClassNLLCriterion.cu to lib/THCUNN
8c0041bde2 : Add functional conversion of ClassNLLCriterion
3b8dea4c7f : Move ClassNLLCriterion.c to lib/THNN/generic
7eeeda5acd : Remove special handling for OSX search path
fcb0c77c7a : Install THCUNN into ${Torch_INSTALL_LUA_CPATH_SUBDIR}
25bcfbf59e : Add THCUNN/ffi conversion of Abs and AbsCriterion
50c153926e : Move Abs.cu & AbsCriterion to lib/THCUNN
6288c5498e : Change THNN library type to MODULE (to create libTHNN.so on OSX)
81f0fb213c : Install THNN into ${Torch_INSTALL_LUA_CPATH_SUBDIR}
97c5021b68 : Add functional version of AbsCriterion using metatable call
af34140313 : moved AbsCriterion.c
1423a17108 : Add THNN/ffi conversion of Abs
d8739fc659 : topk implementation
8ef2d4413f : adding THCGenerateAllTypes.h to CMake for install
2a8cd1c37c : Add generic CudaTensor types to cutorch
db83396210 : Move files to generic folders (preparation)
d332c41e71 : fix a typo in README.md
3a18496f5a : THDiskFile: fix incompatible pointer types warning
6335106de9 : adding setFlag and clearFlag for storages
c9da89254b : Update deb packaging scripts
f1e92fe2a3 : Added Debian packaging files
b5400c54df : Don't link tests with NVML
dd0884b707 : Build SM 5.0 code
e1634ca6cb : Use semantic versioning
7b6339cdd2 : Make nBufferSize signed for writing longs
8d4683434b : convnet benchmark: make it consistent with TF's model.
b7c3b48469 : copy matrix can be done with cudamemcpy.
b10ee24fc3 : conv op: backward exhaustive mode too. This does not seem to help much, suggesting that cudaGetConvolution*Algo is already doing a very good job. Verified with googlenet.
d79cfb4ae7 : exhaustive search for cudnn
61c114971b : fast path for copymatrix
05e3207e26 : fast path for copymatrix
cc9323793e : add relu cudnn code
4f2530d8ce : expose benchmark code to python
6b27cabf17 : net benchmark code
cf8ffe215f : minor tuning
20ccca5b67 : RTTI to true in default for the main model.
f714ad0a70 : number of blocks now makes more sense.
3b0cc79465 : context gpu: better error catching
73f3daf736 : minor bugfix for workspace
bfae070de1 : minor bugfix for net
359f7685f8 : halfway into timing test.
651a6edc5c : Fixed bug in MPI initialization.
03c777db72 : boolean for has_gpu_support
7bdc8a6c19 : Pycaffe2: removed the clunky gpu support hack.
becf9e85c1 : remove no longer needed build_env_android.py.
ae1ebd0f19 : a script to test zeromq db throughput.
77541ffe14 : flags relaxation, or tightening?
ceb4cde74a : average pooling format change to fit the cudnn interface
6bfb30047e : deprecate legacy pooling
20dbbbbb28 : android: use full proto in default
9022e4f499 : pull protobuf to master
05465783c6 : optionally use protobuf lite
3d7cb201a3 : misc changes to reduce binary size.
41ce4ca9fc : Add int64 and uint64 types for all algorithms and tests
4eb486bd34 : misc update to reduce binary size. Removed zmq.hpp
1a4ea7c8fc : misc updates
25647f8c47 : more test for tf benchmark purposes.
01b45fd052 : backward support to cudnn R2 for TensorFlow benchmark references
fe768d729d : index changes
acc16645d3 : temp hack. Will rewrite the build script later.
3a4d4285f2 : Added more benchmarks.
7d87fe788f : alexnet benchmark code using cudnn: this should give a reference speed that TensorFlow should achieve after tuning. With R4 currently we have 29.5ms fwd / 93.4ms bwd.
1499b87e56 : cudnn spatial bn: optional compilation instead of throwing error
5ba54180f5 : various updates
1b7c5acbd8 : halfway. Prepare to revert proto3 to proto2
fcd5f8fbf0 : move to the new build scripts
3dcb868411 : misc update
85c2eaa303 : Halfway into refactoring the build system
1850c54b8c : New function longSize for files
bc9c69f45e : Added missing function (I missed this when merging from my working copy)
937ae9b578 : Fixed invalid resize1d call
4eb451165f : Implemented Cholesky decomposition of positive semidefinite matrices with complete pivoting
63b010115b : minor changes
a71667859f : I thought I removed this. Maybe on another machine?
3df4750a37 : Add isSetTo to cutorch
27d32ac5d9 : Fixed a race condition in reduce and broadcast.
b5f299cc17 : Add isSetTo: simple check for shared storage.
0673d5f44f : Initial release.
56f18e35d4 : Detecting LAPACK on the system with LAPACKless OpenBLAS
b5cb5f17fb : Accept 64-bit offsets in THDiskFile_seek
1bd82bf4c6 : cutorch copy/event changes
a8c49dfa7d : Added support for neg().
7cfa9f378b : cnn default order.
85f2fc65b7 : well.
92790cf6b3 : Spatial batch norm; currently just based on cudnn.
b1fa9d2b06 : Fixed unsigned overflows in THFile
5c915a0321 : Some naming changes.
d577f9b95d : Code sugar for simpler gradient definition.
63bd3ce182 : sigmoid
d582c395dc : tanh
a3dcd9250a : bugfix
48d87711ed : bugfix on master
a74d606df7 : A collection of changes:
457ef79c70 : elegant fetchblob
71e9932148 : half float conversion
b70f46f958 : minor fix
625e19acae : Float16 for convolution
a51a398fec : Revert "minor fix"
9e48fd2e8e : utility op in-place opt-in
d167ae5399 : cudnn race condition fix
6847291ea8 : minor fix
5cf83e57f2 : cudnn refactor so we can do easier benchmark check. Also some minor bug fix.
141d122db3 : minor bugfix
b9b4ae6ec2 : Add missing functions to CUDA tensor and storage.
0d18ed31dd : minor bugfix
80a70b635f : Two main changes:
f7fe6cf1a6 : Use proper precision for accumulators in multinomial
15e2939f7a : Fixing large data reading/writing/memory storing
fc23f65d4f : Check that dimension to be sorted is contiguous when using Thrust.
98c5b86ef7 : A few changes:
b3e094ec1d : Add indexAdd
264102d245 : Add torch.mode and fix median/kthvalue docs
8c94450c42 : More informative error message for pipefile fail
d734ddc196 : Adding optional Eigen code. Added a switch USE_SYSTEM_EIGEN in Env. Misc changes.
d956c2e0ba : change to indexAdd name, add documentation
7cf666f0fc : add indexAccum
e0a9f16a10 : heap tracking: fix relevant compilation warning
0495d406d6 : Fall back to Thrust sort if Torch kernel can't handle the input.
648d1b101a : A consolidation of a couple random weekend work.
24f950dd5a : Add automatic CUDA architecture detection
439bc9bddc : faster norm
c5009c2601 : Implementation of torch.cat
f9cc6ffcbf : add :neg() :csub(tensor) :csub(scalar), inplace subtraction operators
35c2754c88 : static libraries no longer built by default
f4ca8de737 : hard error on unresizable storages being resized
254acd8ff2 : Exposing LAPACK function potri
75bc88d943 : Use TensorInfo with NoCollapseDims for scatter & gather operations.
384668bbc8 : adding potrs and uplo option to potrf
bc9c0c1598 : Rewrite indexSelect kernel for correctness
c40f6b6ead : LAPACK ormqr routine
064fef1313 : trying to cover as much mpi cases as possible
5b9584c227 : carpet bombing
42d310afdd : Update README.md
5b8ae52e4b : cudnn note.
87bdbe0e8e : Hickery Dickery Docker.
0b66c01462 : (1) blob debugstring (2) cnn bugfix and enhancement (3) device checker fix (4) suppress prints from caffe2_python (5) moar tests!
8490282cd1 : ‘did I…?’ ‘No.’ ‘should I…?’ ‘Yes.’
d07549bed2 : half-finished cnn wrapper, etc.
d60fe8d4ef : Batch updates to global heapSize
3a9cd38784 : Batch updates to global heapSize
476f78c300 : Use _data & choose CudaTensor's stream
852b79283e : Add copyAsync for asynchronous copies between host and device.
d006752946 : Add missing size check to scatterFill.
d4336af327 : caffe_translator minor change
d9af6fc7f2 : Now we need to add GlobalInit() before mujitest.
53868133e1 : lmdb: after commit, txn is already deleted so no need to abort
d72cfcebaf : fixes to allow more consistent build tests
4f33daf308 : workspace: cannot call GlobalInit with sys.argv because that will cause e.g. python to fail. Need a better way, currently just disabling it.
7517c2898f : translator and misc fixes for legacy group convolution, sigh.
1164b9a347 : mujitest bugfix
821eac3e7c : lint
6e5e9743c3 : muji fix
8198cb135d : pycaffe2 finetune
a117c5164a : workspace: set globalinit
bc70f17a4f : no more gflags hack headers.
e505c98996 : no more gflags_namespace.h header
7583822af8 : muji
c18d756c49 : workspace bugfix
92e2cddd62 : add some python cuda capability
91d8ce4f44 : python uses global init
d2ff13d332 : put a peer access pattern function to caffe2.
f2fde73447 : init test bug fix - forgot to commit in the previous one
26591c8ac7 : easy selection of gpu ids for cuda context.
baffb9c503 : make caffe2_gtest also uses globalinit. Not allowing init to run twice.
ec069cb3ea : Use a global init function: it seems that with the multiple components optionally linked in, it is best to just enable a registering mechanism for inits.
de55e4e77c : changes to ensure a more robust build
2ddea70a08 : remove dependency to google profiler
ecd46d5ea0 : A memory pool implementation based on cnmem. Added cnmem license to LICENSE.
5325bd5049 : rename files so things appear cleaner
55a917cd69 : More flexible torch.cat
4f4aa1f205 : clip operator, not tested.
a57de4ece7 : clean pycaffe namespace snafu.
a12a471b2d : suppress compiler warning.
f528f46c64 : move LICENSE.caffe into LICENSE, and added related correct attributions.
07d1e453a4 : Add the nonzero function for finding non-zero elements of a tensor
ea0c7afa49 : Delete db.cc
b7932df9cc : Fix resize in addcdiv and addcmul.
540c9474fe : cutorch gc
83cbb7f8de : clean-up and simplify cloning/collection logic clean-up and simplify cloning/collection logic
768bebff23 : Fix Tensor:index for some tensors with the first dimension of size 1
7d021a0346 : context change and dropout bugfix
d6bebc4824 : lrn bugfix - for specific cuda devices a single if is not enough, need to do kernel loop.
babb37c1df : Add THCDeviceTensor
924c96ce92 : Fix lua GC to support allocations across multiple threads (e.g. threads sharedserialize)
33d1139101 : fixing avx checks
eac3b5bd28 : Update README.md
30fb5b94ac : Update README.md
dad0608e75 : pycaffe2 minor fix
10ffe1132f : image input: support caffe datum format
b4656c77b3 : prefetch op bugfix
60e94b5247 : Update LICENSE
4bc8b46d65 : Merge facebook changes
a956aa90e2 : brewery pool *= 2
4b32534e84 : bugfix for dropout and filler
32dc580c43 : utils: relax bool
43afd1bdeb : Change the strategy of dealing with gradients of shared parameters.
1b0f782c56 : Exposing the lapack function trtrs which solves triangular systems of linear equations.
1fc2038979 : Added some more comments to THTensorLapack.c, and fixed two error messages
6e21043842 : Fix for gemm with zero strides
f3bea731ce : fix two more cases of transpose check.
2b5372655c : small fix for MSVC
127684610f : utils bugfix
a07c255d16 : Some utility function changes
fa78ede747 : fixing build break from #307
d829950eff : change arg order of Copy/Memcpy to follow inputs-then-outputs convention instead of C memcpy order -- from (dst, src, n) to (n, src, dst)
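The commit above describes a calling-convention change; a minimal Python sketch of the two orderings (illustrative only, not the actual caffe2 Copy/Memcpy API; the function names are hypothetical):

```python
# Old order mirrors C's memcpy(dst, src, n); the new convention puts
# the size and inputs first and the output last: (n, src, dst).

def copy_memcpy_order(dst, src, n):
    """Old style: destination first, like memcpy."""
    dst[:n] = src[:n]

def copy_inputs_then_outputs(n, src, dst):
    """New style: inputs-then-outputs ordering."""
    dst[:n] = src[:n]

src = [1, 2, 3, 4]
dst = [0, 0, 0, 0]
copy_inputs_then_outputs(3, src, dst)
print(dst)  # first three elements copied, last untouched
```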
b856798020 : disable CannotAccessRepeatedParameterWithWrongType test broken by removal of check (PR #7)
091aabeaf0 : operator.cc: remove check for empty repeated argument (to allow empty shapes and other use cases)
4aad861f9f : allow Tensor with empty dims (a scalar)
7b481f5a8f : Tensor constructor: values arg is const
a139a1bf6a : build_env.py: filter non-existent dirs from INCLUDES, LIBDIRS
2d30d89cb1 : build_env.py: add python library dir to LIBDIRS
6440362c08 : build_env.py: add /usr/include, /usr/lib to the default include, library dirs
16b312867c : add shared memory file mapping with shm_open
17fb6c489e : Fix SSE 4.1 bug
7f5ee34b9a : add back gflags dependency
a6d20495c2 : [gflags] sort out the gflags namespace issue.
532670cce0 : [cuda] add all gencode archs
d2bb9d024e : Work under windows
0c6938faa9 : Work under windows
c3ba30a537 : blob templating
9123d31176 : CUDA implementations for scatter & gather.
aaead5d6a5 : reorg mpi_ops
e968294aba : THAlloc trigger GC when heap size exceeds soft max
29b476f32b : Element-wise min and max operations.
a76e7bb760 : core_gradients change that goes with conv gradient change
cf5a9f62b0 : bugfix
966a743f8c : legacy names due to the plural->singular change
90affff039 : broadcast op: it is an in-place op with both input and output set
745d8ed969 : run_plan_mpi: check the returned state
d4eab84548 : conv_op: during backward, bias is not needed.
691986ec21 : Add an execution chain option to force operator orders.
a5254881f2 : Some more MPI examples
571ee16b44 : minor change
cfab4ed865 : zmq feeder bugfix
43aaadbef4 : zmq feeder: catch error when setting up the socket.
ecf0dceef6 : zmqdb: pass key as well.
05ba5b0527 : Use c++ to do zmqdb, and added a simple zmq feeder example.
47c70a43b4 : (1) minidb bugfix (2) blob serialization comments (3) cudnn: putting it under a separate device name so we can explicitly choose cudnn instead of having CUDA device prioritizing it. (4) note that mint is not available with ipython due to zeromq conflict (5) db_throughput utility (6) added gprofiler
50df65bda7 : Fix for gemm with zero strides plus add unit test for torch.mm
20562f06b6 : Implementations of component-wise min and max.
46ab6c1a30 : do not use _aligned_malloc in WIN32: it requires _aligned_realloc and _aligned_free
ba62d19b6d : Add support for compilation on Windows using mingw32.
b240768ce0 : adding argchecks for Lapack before calls to lapackClone
c5166e578c : Several changes:
eefc497b9e : Do not set -Werror for MSVC TH_API removed from function definitions to allow for static library compilation
6b1dfecf71 : check if torch is found before find
59e1ad7e77 : Update license and readme.
85a40a0b96 : zmqdb: make clear error message that zmq 3+ is required.
f15492f303 : make things compile for gcc
16c253e62e : Some non-trivial refactoring:
dc41b23e41 : [bugfix] static initialization order fiasco.
97f4b9f3e7 : [bugfix] missing dependency
9b4fd2e77e : workspace: create root folder if not exist.
e078bf4e81 : [op] iter_op: instead of producing an int, produces it in a wrapped tensor.
2a3fda4d60 : [interface] OperatorBase::OutputIsType
0bc9dd6bec : fixed possibly wrong pointers coming from a saved state
8ba95403fd : minor format issue
29e6fdfac4 : Always use events for synchronization when copying across devices.
036229c889 : [style] Finishing name changes for the rest of the fields in the protobuf.
c1077400c9 : [style] Massive name change from plural to singular. This one changes operator’s “inputs” and “outputs” to “input” and “output”, and "args" to "arg".
f8b4ada69f : THAlloc/THRealloc attempt lua GC on failure
df6fd55cce : [minor] net.cc: in parallelnet, if the user forgot to set a num_workers, warn instead of quitting.
241cad91f2 : [LegacySupport] Fixed a bug in the caffe pooling case: pad_tail is changed on first run so we can only use pad_head for the legacy padding value.
cf88235bb4 : [pycaffe2] net_drawer: do not exit too loud if pydot is not present.
e94be296b7 : Fix a bug in the old code: the net device option overwrite used to not work.
8834e3eb13 : Adding <memory> to the header. Clang is fine with missing this but gcc complains.
8bedbde1c1 : More flag changes… and adding back run_plan_mpi
314696a262 : explicitly make end_to_end_test depend on core_gpu
fdf6066e45 : namespace google -> gflags for gflags functions.
2807aac523 : enable optional_deps so that any non-crucial failures will not break the whole system.
2abd5e263e : GoogleNet adaption - added yet another legacy padding support.
aadac9c1fa : [binaries] add missing header
d8e3ce8ef4 : [build] flush output from subprocesses
1e7730800f : bottlefeeding.
dcb921f7ee : Caffe translator example notebook, and some nicety-type changes that accompany it.
e9e849a017 : Fix memory leaks in maskedCopy and indexSelect.
e5e74a5230 : fix an embarrassing bug introduced when moving over the padding change.
6fb91b253f : Fix loop logic error in maskedCopy.
9a19430a39 : Simplify the data sharing mechanism, using std::shared_ptr instead of home-brew code.
2ed1077a83 : A clean init for Caffe2, removing my earlier hacky commits.
cf93abaf4e : Install all THC headers
c7ed230961 : Add MAGMA implementations of Torch LAPACK functions
209484e1b7 : Stream support for BLAS Handles. maskedCopy implemented generic Reduce kernels
daa5285092 : Use standard Torch mechanism for file/line propagation in error messages.
384e3cca77 : Use UNIX line endings
56a6054dc4 : Optimized indexSelect kernel for contiguous inputs
1102e0618e : Fix problem with NaNs in max and min
ff1384d12d : Use Thrust's sort for inputs larger than 2048
ef4aa8e27a : Allow CudaTensors as indices
c3945c6c5a : THC standalone compilation
3e849586a0 : FindBLAS.cmake: fix missing stdlib.h include
33cc71dc55 : Tweak torch.range to be more numerically robust.
29a18c93ca : avoid unnecessary transpose in lapackClone
70fcaa1c49 : Fix storage view.
d3b32821c0 : Adding support for a static build of torch.
a7db84a442 : Adding support for Storage views to cutorch.
7710d53bca : Removing type from THStorage.
8cf718fd2b : Added compile-time flag to disable GPU checks
2796f880f3 : Allowing usage of posix_memalign to be disabled through a new flag, DISABLE_POSIX_MEMALIGN
e1ead323df : Finally got compiling locally.
c9239436b8 : Missed one diff.
2e108d5dea : SSE optimizations for 5x5 convolution.
67f8eb429c : Adding support for Storage views and exposing Storage type.
49fe800281 : Use assignment in indexSelect for 1-dimensional contiguous input.
e014a57910 : duplicate THAllocator.c in CMake
ebeeadfc43 : adding THCAllocator* to cmake
c347df19f3 : Remove swp file
e2df14b974 : Use 64 byte alignment for allocations larger than 5KB
4795c660a0 : Speed up Tensor:index for contiguous tensors
2df4b9b4ca : Fix bug in metamethods
2b4358bec8 : Annotate function in THAtomic.h with TH_API.
9022aa8a9c : Add scatter and gather operations.
5aa0ab69b2 : Add CudaHostAllocator
5c126afd71 : Revert "Auto device: API changes, bug fixes, README.md"
b954190cf3 : Add LAPACK QR decomposition to Torch.
3ef5ad2e1c : Include THAtomic.h
29d0b3a537 : Use THAtomic operations for ref-counting.
1c69008a7a : cleaning up rest of deviceReset code
fa0451afe0 : Error message improvements.
bccf898276 : Auto device: API changes, bug fixes, README.md
950c3f2b1a : Auto device mode, plus allocation helper functions.
186df5a08a : fixing torch.cat for sizes greater than 2^31
c8dd97526c : Add :elementSize() for Tensors. Minor tidy.
f98a859dde : fixed maskedCopy to accept src with size greater than number of ones in mask
23b1528cce : Add elementSize method to *Storage types.
a6870e4325 : Fix argument checks
6f5dddd9ae : Add isSize method for Tensors.
fd67e04dca : using atomic operations for refcounting THStorage and THTensor
53cf004739 : added atomic operations to TH
28e69de773 : Implementation of logical all and any.
4b131dfb02 : adding cutorch streams
c8e01c19f2 : Fixed serialization in ASCII, for MemoryFile.
8caabd07dd : Revert "Add file and line number to THError."
fc81f4be42 : Add file and line number to THError.
340f5f231e : adding sort + test, fixing max/min and sum tests to not fail occasionally
aa732fc5ed : revamps TensorMath to remove sync points at many places, adds maskedSelect and maskedFill operations (and tests). Also adds generic Reduce and Apply kernels that can be reused.
396e0ee140 : Recreate cuBLAS handle on deviceReset.
992798504c : Fix FindSSE SSE4* check crash on MSVC.
442b8afa61 : Minor refactoring of RNG code
521c3abb5a : Added kthvalue and median operations, tests, doc
283c31006a : Fix RNG issues.
64f6355abb : Implementation of reduction without any limit in the number of dimensions.
64a913454f : Implement Sedgewick's quicksort
aae649e010 : Fixed torch.gels for the underdetermined (m < n) case
e2c386bbda : Call cudaGetLastError if peer access is already enabled.
51dda0ac12 : Checking for valid RNG state
c0c43aee9c : Ignore cudaErrorPeerAccessAlreadyEnabled.
91690a50f8 : Make sure RNG is always seeded
a046a6b927 : Make bmm use baddbmm.
45a3be0a76 : Add missing header file
54b8e815af : cumulative sum and product
1dd01bd395 : Fixed getRNGState in uninitialized case
ab1304fe5f : Fix compiler errors/warnings.
f3854a8731 : Add bmm and baddbmm.
7d78597b9a : Move Blas state into THCState.
57e9ecb209 : Add cpow and alternative pow.
3db013b626 : Initialize the random number generator from /dev/urandom on Unix platforms.
afc7563b24 : Renamed baddmm -> addbmm; added baddbmm.
22cf833a34 : Avoiding allocs in bmm and baddmm.
2e775b5262 : Add batch matrix-matrix multiplication with accumulation.
05364d4c23 : Added alternative version of pow function
350d68054c : Added element-wise power fct cpow
08e1562de0 : Add batch matrix-matrix multiplication.
c5f210936b : Pass a state to every THC function.
9db2844e87 : Single-dimension std and var.
a495b98694 : more copy optimizations, and additions to unit tests
4ac604b136 : very fast copy kernels for non-contiguous cases by Jeff Johnson, and a robust and completely randomized copy test. Also fixes #90
3c62501f0a : Revert "Add parameter check to torch.addmm"
083ddf40a3 : Check for equal storage of parameters to addmm
0d0596d6b6 : Add missing headers.
a6abd3f1a7 : Cast allocation to avoid compiler warnings.
cbd25963f9 : indexCopy unit test and less allocations
cda2a6f415 : Reset RNG state after device reset.
a0adac936c : removing DIVUP from THC headers! (bad behavior to put it there, might conflict with other libraries)
67b4233ea1 : removing bad code leftover from cublasv2 refactor
69cb0acc50 : adding getDevice for tensor, manualSeedAll and seedAll
9d56e1bb61 : Move global RNG state into cutorch table.
6d90a23057 : Upgraded CuBLAS to API V2.
2268cae9c7 : atan2 implementation
3bd605b1f7 : round function, test for cumprod
7e809a5db0 : ones, zeros, numel, prod, reshape
0e699744a3 : fixing small bugs in prod, cumsum, cumprod
dd1de36f6d : fixing min/max to return indices as well (when given an arbitrary dimension)
8160e9283f : fixing cadd API inconsistency
746a163c9d : fixing inconsistent API in addcmul, addcdiv
ff84e38d23 : adding operator overloading, made add/mul/div consistent with torch7/TH, made all the blas functions (addmv,addr,addmm) consistent with torch7/TH and added the missing ones (mv,mm,ger)
5781c6106a : fix recent cmake warnings and update cmake way of handling rpath
17438ab7ee : added assertion
50dda86053 : Fix computation of number of thread blocks in random number generator.
bb39e71734 : added clamp CudaTensor method
bfa4a88eda : added a clamp method for tensors.
0b614eeb25 : Added torch.round function
ce8ba6dc82 : Added logical any and all (similar to numpy)
8ed29abe7b : Use curand device API for random number generation to expose RNG state.
ccb0fab35d : added THCudaBlas, now handles a bunch of corner cases. Also fixes #55 and another bug in addr
4f9dc7c67e : One random number generator per GPU.
cb5a5072b5 : fixes non-contiguous copy bug
348af09737 : making the deviceTodevice copies async
b82b2a69a8 : Initialize random seed during CUDA initialization.
8cae227d6b : peer to peer access
d987c847d3 : Make element-wise unary operators more consistent with Torch.
7cff6e5437 : fixing #36 indexCopy works for non-contiguous source tensors
1e156b6ba7 : Tensor:cdiv with three arguments
4ca1486f75 : more verbose error for texture creation errors
3a3b4b96b0 : replacing checkCudaErrors with more traditional error checking
b35787f7cd : adding THCudaTensor_getTextureObject
6bd91b1c77 : adding torch.isSameSizeAs
d4635b2267 : resize should go first
3ad4f6e3f3 : added back arg checks
d93a56dc36 : torch.CudaTensor:logical[Value/Tensor] resizes result
8fa790e0ed : torch.CudaTensor:add resizes result
5b06edc9b9 : torch.CudaTensor:[addcmul/addcdiv] resizes result
abecc96179 : torch.CudaTensor:cdiv resizes result
6600ec7b23 : torch.CudaTensor:cmul resizes result
328c4d930c : No need to search for gfortran
9844d1bb9c : checking for zero-element tensors in Copy
ded9e7358e : Make the THGenerator next field an offset.
ae376363cb : Fix RNGState serialization bug + remove whitespace.
61d895508c : fixed warning/bug for double random generator
56267edeaf : added indexCopy
07850eefce : stricter error checking for indexCopy
a207bb6bd0 : fix minor bug in addmm to support resulting matrix of dimensions Nx1
f5a7c8f633 : Revert "Merge pull request #20 from hycis/master"
644dd5bb0a : optimized indexCopy
38c5ba1349 : optimized power of 1 and 2
9200b1977e : added indexFill and simplify index
c33a8d2d5b : indexSelect works
c1466a5ad8 : indexSelect compiles again
79533164f7 : indexSelect_kernel arguments
c68b50d4cf : small bug
b26875768e : harmonized THCudaTensor_indexSelect to cutorch semantics
b8cbe0700e : optimized indexSelect
99f88f8833 : fixed argchecks
0d1d7624e7 : fixed argchecks
4b14bc3b67 : renorm does not accept vectors
13785dbbb8 : unit tested renorm (works)
e09f019412 : cwrapped renorm and fixes
49bc4dee56 : renorm unit tests
a7892bd150 : renorm works (debugged and unit tested)
b50dc9cd04 : renorm only renormalizes rows with greater norms than maxnorm
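The renorm semantics noted in the commit above can be sketched as follows (an illustrative stand-in, not the actual TH/cutorch implementation; `renorm_rows` is a hypothetical name):

```python
import math

def renorm_rows(rows, maxnorm):
    """Scale down only rows whose L2 norm exceeds maxnorm so their
    norm becomes exactly maxnorm; compliant rows are left untouched."""
    out = []
    for row in rows:
        norm = math.sqrt(sum(x * x for x in row))
        if norm > maxnorm:
            row = [x * (maxnorm / norm) for x in row]
        out.append(row)
    return out

rows = [[3.0, 4.0],   # norm 5 -> rescaled down to norm 2
        [0.6, 0.8]]   # norm 1 -> already within the limit, untouched
result = renorm_rows(rows, 2.0)
print(result)
```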
b459ddd7cf : added argcheck to renorm
872b67b636 : added THTensor_(renorm)
5ba09ec2d4 : added THTensor_(renorm) prototype
6ea8ccc928 : added THCudaTensor_renorm prototype
6ac7bc692c : THCudaTensor_renorm is ready for testing
60297a5c6d : remove print comment
81c58afbde : Support copy of large non contiguous tensors
62f6480088 : Used long instead of int for the mmap file size.
2d3b5ca941 : adding missing INCLUDE (CheckCSourceCompiles) in cmake file
fa04fa1e3d : fixing precision using float.h limits; since FLT_DECIMAL_DIG is not available in C89, hardcoding the values, they don't change
fe110253ae : scanf doesn't take precision width
fcb88a40e8 : increasing ascii serialization precision for floats/doubles
6a3f930eee : made TH standalone installation more flexible
fabbffc0e6 : Fix get/setRNGState for gaussian state.
8da18af6ce : update to new storage mmap api
040287fadd : Merge commit 'da172cd8703163ec69f6bd2dfea9fe1357db67bf' as 'lib/TH'
4d9476ab25 : prepare TH for git subtree: remove directory
5124e75a18 : input/output slicing fix for 3d conv
60b5374440 : Updated
2a556dfab5 : Thread local callback and state for THError. Copied from github:koraykv/torch
465d659076 : Unnecessarily static.
8269bf170d : multinomial c89 compliant
f387667258 : TH: optimization for cadd
8587f5c5cc : TH: optimization for normall
8d9c451dfc : removed debugging code. added unit tests
6aab01dbda : multinomial without replacement, inline code, uses less memory
65e87052e7 : TH: fix speed issue due to calls to THTensor_nElement()
1694870557 : Whitespace. Duplicate #define.
0b184896e6 : Reorder arguments to Tensor functions.
b8dbe0ee19 : Indentation, whitespace, comments.
b5744ef1a3 : multinomial with replacement
046ad6b37d : Heinous bug.
2bf674bebd : Implement method abs() for LongTensor and IntTensor
56bc19d278 : Rename mersenne_state to THGenerator.
96f7d4141d : qth runs, can call random methods if you explicitly pass a generator.
870fce8d50 : Miraculously builds. Very unhappy when you try to run it.
824a6d73f1 : Starting point for wrapping the mersenne state in a Lua object.
1dc0c18121 : Refactored TH to not use global state for random number generation.
3f3972e0b1 : newWithMapping function when HAVE_MMAP is not defined should have return value to satisfy function signature.
8f1767c5e8 : lib: relax compilation flags
26d7a431d5 : relax compilation flags
18682525c2 : added/modified cmake files to make cutorch a standalone package
98849c317f : added rockspec
a2e40430e8 : luaT: make sure it is C++ compatible
859a9f5698 : fixed torch.packageLuaPath
67b7b06817 : LUA_WIN -> _WIN32
84b55356a5 : added/modified cmake files to make torch7 a standalone package
66a5927b7c : added/modified cmake files to make torch7 a standalone package
fa0e47d77d : fix dependency issues related to the new cwrap standalone package
0bef0d525f : dok -> md
86f522d0c9 : CUDA fixes fix tests: - std and var do not have multi-dim implementations - do not use square matrices for tests, hides corner cases - randomize the size of the matrices used in tests - fix x:mean(dim) call to divide by the size of 'dim'.
10022f6414 : fix mem leak on maskedCopy
1ac061beb7 : fix mem leak on maskedCopy
ff32736d78 : less strict about ansi libraries, less warnings
95006b300d : less strict about ansi libraries, less warnings
302bf7d504 : less strict about ansi libraries, less warnings
038460ed97 : master compiles under win (msvc11)
71745546be : Test that Cholesky errs on rank-deficient input
6f68005537 : Fix bug in potrf wrapping: return a triangular matrix
b146a73b27 : Fix bug in potrf wrapping: return a triangular matrix
e17be1fa25 : Add tests for sorting with equal keys
38cf12aa5e : Add bibliographic reference
1de6249a3c : Add bibliographic reference
f0b4170aa3 : Fix quicksort for constant arrays
1d7b661ae1 : Fix quicksort for constant arrays
e23732735a : Plot time for constant array
5d8e324fb4 : Fix typo in assertalmosteq
ea15bbce36 : Remove my now-obsolete TODO in assertTableEq
660e98a404 : Test for new asserts
6ea08e236b : Display error returned if not the expected one
d075042a12 : Add Tester:assertalmosteq()
400a37efa9 : Make error message clearer
8ca54d06f9 : Add assertErrorPattern
b1bb1b3f9f : Extend assertError and fix global
457e037cdc : openmp for add/mul/div is restricted to only large arrays (arbitrary threshold set to 100k). Also fixes some c89 commenting for the new quick sort algorithms
73f27d8a3a : openmp for add/mul/div is restricted to only large arrays (arbitrary threshold set to 100k). Also fixes some c89 commenting for the new quick sort algorithms
fb2ad84ed2 : Remove dependency on unused outside package
5af064db83 : Missing a free (pow, CUDA)
ac0489e1a3 : Speed up quicksort descending
ae210a72e8 : Speed up quicksort descending
6fe32d305b : Indent quicksort descending for readability
6fd928a1ba : Indent quicksort descending for readability
39c93a46de : Update benchmark and test for sort descending
a14c0f795c : Speed up quicksort
7a41d6da21 : Speed up quicksort
b70d9a95fa : Indent quicksort for improved readability
675b472160 : Test for correct sorting of values and indices
e541251cc0 : Indent quicksort for improved readability
21f2b83471 : Benchmark sort worst case on sorted case
6cf948cd0f : Add more tests for torch.sort()
68faf508d0 : avoid paths being compiled with C89 flags: some compilers fail.
554e137315 : added strict c89 flags
2deee567c7 : fixed a bug in CUDA norm
163b2acd07 : pkg/torch: C89
2152f9d015 : missing declarations
fbe183c3be : missing declarations
02d5426956 : windows missing functions
e2a5f77983 : windows missing functions
aa84a62631 : windows popen/pclose
ce1151874d : windows popen/pclose
ae6f303ece : TH: now C89
659e695afd : TH: now C89
02618853d0 : C89!!
df41d749fa : C89!!
46899d89bc : adding a couple of #includes for cuda 5.5
228e986680 : bugfix: serialization should be binary only
6217711e57 : readObject can be forced to reread from file using referenced flag.
7a75c908a4 : properly declare T variants of comparisons
d50dbd0944 : properly declare T variants of comparisons
d9b12c2f80 : properly declare new lapack functions
e1012ccb6c : properly declare new lapack functions
ae53260c58 : Replaced #if by #ifdef, for clang.
d611752109 : Replaced #if by #ifdef, for clang.
edb816bdb6 : Added wrappers for Lapack functions: potri, potrs, potrf.
512c336eb7 : Added wrappers for Lapack functions: potri, potrs, potrf.
adb6b347c4 : introduce THC_API and work around log1p issue.
8a48ce6e48 : introduce THC_API and work around log1p issue.
f246b87326 : introduce THC_API and work around log1p issue.
d41bfe5da7 : Fix sdot return and work with openblas
9188986f5c : Fix sdot return and work with openblas
85c8d48d39 : win: added missing TH_API
fb0059fed4 : win: added missing TH_API
b416e79b8a : ported package torch to win
962e1df4a7 : luaopen_xxx functions need LUA_EXTERNC
07f582c35c : Complete (and final) revamping of the CUDA rand engine.
9327cf5c05 : silence warnings about implicit cast
c90103da14 : silence warnings about implicit cast
d805cb2dfa : add getRNGState and setRNGState functions to get/set the state of random number generator. These provide the ability to replay a sequence of random numbers from a given arbitrary point in time.
d92d4a53ed : add getRNGState and setRNGState functions to get/set the state of random number generator. These provide the ability to replay a sequence of random numbers from a given arbitrary point in time.
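The replay behaviour described by the getRNGState/setRNGState commits above can be demonstrated with Python's stdlib RNG as a stand-in for the torch API (the torch functions themselves are in the commits; everything else here is illustrative):

```python
import random

rng = random.Random(42)
state = rng.getstate()                      # capture the generator state
first = [rng.random() for _ in range(3)]

rng.setstate(state)                         # rewind to the captured point
replay = [rng.random() for _ in range(3)]

# The sequence replays exactly from the saved state.
print(first == replay)
```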
030e8feb12 : Fixed CUDA random generation entirely.
730a70272d : THC: revamped random engine.
cedd0588c2 : Repeatable randomization (CUDA).
a52932599d : Fix - continued.
0c3abcf6c7 : Fix - continued.
eeaf48d01b : Fixed major bug on new openmp operators.
0f502afae5 : Fixed major bug on new openmp operators.
87c09ecca2 : fix for OSX Accelerate sdot bug
2aae827e66 : fix for OSX Accelerate sdot bug
808f6945ed : Reverting Blas issue: not ok on Linux.
01c33d20f5 : Reverting Blas issue: not ok on Linux.
0ee3d1caa6 : Fixing wrong float binding for sdot.
3d28d2261e : Fixing wrong float binding for sdot.
36bf39fe82 : THC headers installed along with TH headers
4ce3de09e8 : Unsafe omp //.
fab111163c : Unsafe omp //.
cf4c8c8cd5 : Reverting c99: this causes compilation problems for LuaJIT.
6b43bf216a : Reverting c99: this causes compilation problems for LuaJIT.
12ec52c154 : Parallelizing most common for loops (TensorMath).
1864e3933b : Parallelizing most common for loops (TensorMath).
88d0c0f220 : Added a new torch.match() function.
79bfba27f5 : Added a new torch.match() function.
77d43b339c : Fixed documentation for torch.var
5620490800 : Added single-dimensional CudaTorch.mean
e851f93806 : Bugfix: copy from zero-stride cuda tensor
227a21b10c : Correct computation of L0-norm for CudaTensor.norm
4f94e31468 : Added reducing form of CudaTensor.norm
e0262d6411 : Changed parameter order of transform-reduce
c6c41229ab : Correct computation of L0-norm
ef7db84d52 : Correct computation of L0-norm
f9e252c022 : Generalised reduceDim to transformReduceDim
550287fe8a : Added missing documentation for torch.norm
6f0b29e658 : Bugfix: incorrect CudaTensor.max
15fd911d46 : Improved performance of single-dimensional CudaTensor reduction
1877ef828a : Single-dimensional CudaTensor reduction handles large tensors
8dd3c36c1e : Fixed operand order for CudaTensor logical operations
5f03403335 : Replaced uint with unsigned
ecff9feb1a : Added logical operators that return non-ByteTensors
96b11d241b : Added logical operators that return non-ByteTensors
9b9deb3f0c : Added new logical operators for CudaTensor
a78c712121 : Added copying form of CudaTensor.pow
e70eed28c3 : Added single-dimensional min/max for CudaTensor
ca3968c373 : Added single-dimensional sum for CudaTensor
88b54ce379 : Added copying form of CudaTensor.cmul
11dfa23c78 : Added CudaTensor.sign
5b40e3c758 : do not use pushudata on a tensor that is retrieved with checkudata. make the dimension index int.
c1f2bc3e9a : do not use pushudata on a tensor that is retrieved with checkudata. make the dimension index int.
1f39c55d0b : correct the heading style in the doc for index operations.
fcbd9c12dc : make sure to handle 1D indexing properly. do not forget to free the temporary size allocation. add documentation
7474e5f7a3 : make sure to handle 1D indexing properly. do not forget to free the temporary size allocation. add documentation
3686142e29 : add indexing operators
7a47a0a5c5 : add indexing operators
2d96c0e03e : Mentioned that torch.cat() uses the last dimension by default.
9b60b2e36b : Clarified that elements in a row are contiguous in memory.
6299f310c7 : Experimental feature: creating Storages from raw pointers, in Lua.
e5342e8f86 : Experimental feature: creating Storages from raw pointers, in Lua.
64047da4c9 : Added a TTY detector.
c1ca8b73a0 : add documentation for file:referenced, file:isReferenced, torch.expand, torch.expandAs, torch.repeatTensor
164c2e902f : add repeatTensor function, similar to matlab repmat
5474bf3e74 : add isReferenced function to query the referenced state
45326da58f : reverse logic on referenced
f409840211 : add referenced function to disable/enable indexing/referencing behaviour of the writeObject mechanism.
e098ae1382 : make tensor.expand create a tensor from correct type, not default type.
6cbdbf4583 : DiskFile: removed restrict keyword
a3289c4f51 : DiskFile: removed restrict keyword
88b0078383 : THDiskFile: workaround insane Mac OS X fread bug ID# 6434977.
f028399417 : THDiskFile: workaround insane Mac OS X fread bug ID# 6434977.
b9cb4174a5 : Reported the full traceback on a test failure.
a2e455f7a2 : Fix boundary cases in logspace() and linspace()
406afa3877 : Fix boundary cases in logspace() and linspace()
756083d08a : protect tester calls with pcall.
1fcc8aa31a : make areTablesEqual local.
c6348928d2 : Allow for equal bounds in range
4de4a301c5 : Allow for equal bounds in range
1aab740694 : Allow negative step with range
526def20a0 : Allow negative step with range
cc0fd59190 : revert the bug fix. fix was buggy too...
f0203f5781 : revert the bug fix. fix was buggy too...
8bc0ebdba5 : Added an optional force option to writeObject.
68ab6871eb : correction for MACOSX bug mentioned here. https://discussions.apple.com/thread/3413139?start=15&tstart=0
60a232cfbf : correction for MACOSX bug mentioned here. https://discussions.apple.com/thread/3413139?start=15&tstart=0
35e31bd331 : Compare tables recursively
956b6643ea : Fix comparison of flat tables
a62907419b : Detect and unit-test a bug in AssertTableEq
ffba0dfbfa : Unit test assertTensorEq and assertTensorNe
d9a46534aa : Document the three new asserts(), and add unit test for TestAssertError()
f361ec16b6 : Add three asserts to the Tester class * assertError() expects an error to occur * assertTensorNe() expects the content of two tensors to differ * assertTableNe() expects the content of two tables to differ
87a5096293 : Fix error message for TensorEQ and TableEQ
8e1e9943a4 : Document the extended parameter range for Bernoulli
6bbcc99187 : Allow p=0 and p=1 for Bernoulli distribution
8939a5744a : Allow p=0 and p=1 for Bernoulli distribution
fe763ec715 : count and report assertions
7bd8b98124 : Fixed doc typo.
8ed97e6961 : Using a local var.
ba02f818b5 : Upgraded CUDA code to compute capability 2.0 (48kB of shared mem)
f78dbc1da5 : fix ATLAS detection
6bd1bbb04b : fix ATLAS detection
781d35d251 : max/min/sort: no need to increment optional IndexTensor
cf5a8be51e : Cleaning up CUDA code base. Got rid of useless device syncs
87ecea3154 : Added missing form of CudaTensor:add
295b3c8da3 : Added dok for Easy File functions.
ecaa6d74f8 : removed one indirection when performing an arg check in TH
7a6a38ee78 : removed one indirection when performing an arg check in TH
f964984b1c : THStorage: added implementation for missing functions
027815798c : THStorage: added implementation for missing functions
a9569c6fb1 : avoid segfault if required class is not already loaded. trying to load an object that contains a class that is not loaded yet causes the problem.
c7132c0c1f : Much faster copy.
c34c2e2de1 : fixed some thread safety issues
0d534e64f5 : fixed some thread safety issues
84a5bcb17a : Allow running subset of tests
3dbc05d8ef : Added standard Serialization functions (torch.[de]serialize())
f32d123ed2 : Fixing incorrect string comment for table assertion.
b7280dbd42 : Adding assertTableEq function to test table equality.
96b9a35756 : fixed bug when calling operators + forgotten support for __call__ in new API
d13d33dd78 : Correct untemplated conv code (CUDA)
1147218129 : Added more cases to the generic stuff.
2289863e20 : Working randomizer in CUDA.
de87f835bb : Trying something simpler.
a8aef4e9ee : removed unnecessary header.
6b878d7309 : Dynamic linking.
f01c2932c7 : Committing a buggy version.
2b404d9a39 : Random with Cuda.
d1e48591d7 : Random with Cuda.
358203e655 : Fixed doc for var/std.
8a43201996 : fixed bug with non-equal strides as well
f179fe6d8a : fixed the bugs in cuda SpatialConvolutionMap, works for equal strides
43fead371b : added cuda convolutionMap
de2295f507 : resize: do not resize tensors if they have the right size
7afb67b23e : resize: do not resize tensors if they have the right size
c846839ca8 : minor dok corrections
0a79fd2282 : dok correction
9cc08cb84e : more luaT dok
4fbc1b4674 : minor cosmetic changes
cd1f98db96 : luaT_typerror: print lua name when luaT returns nothing
1f5bf96273 : erased root-metatable tracks: rmt -> mt
05a6d882c4 : torch now complies with the new luaT API
59f7533363 : safer way to handling typename + cleanup
a8efaa7916 : marked some functions as deprecated
97d8b4384a : huge bug correction: MKL multithreading problem when confusing iomp5 & gomp
5836acc0d9 : huge bug correction: MKL multithreading problem when confusing iomp5 & gomp
f03b1e130e : luaT with no id: bug correction + put back constructor tables, which are in fact needed.
00518c4a3a : removed id mechanism from luaT
0d50c1184a : added global function include() and replaced torch.include calls accordingly
67ea3368f6 : Removed reading from an undeclared var.
4c32b140aa : Added expand/expandAs functions.
e53cc4d17d : sort dok correction
6a20546b1d : Fixed dok for sort()
7f21dc83b7 : added real() method to tensors
930061a990 : removed unroll in convolutions [harming most normal size convolutions]
96adce80d6 : removed unroll in convolutions [harming most normal size convolutions]
40698ef12d : SDL is no good
949aa3e079 : SDL is no good
105bbcefe4 : added support for MKL SDL link
11b10c5041 : added support for MKL SDL link
7a44eac02c : corrected bug in abs computation of numbers using torch.abs.
c22e6aa43b : add documentation for general eigenvalues and rename the functions: eig->symeig, since this is only for symmetric matrices; reig->eig, since this solves all cases.
05e2c97331 : add documentation for general eigenvalues and rename the functions: eig->symeig, since this is only for symmetric matrices; reig->eig, since this solves all cases.
5e4f8e627d : Add non-sym eigen value computation.
cda05dcaf7 : Add non-sym eigen value computation.
1bde0d549e : Added negative indices to slicing operators
9766daa1b2 : Fix bad string.format usage in pkg/torch/File
ae19f5ba96 : Performance improvement on convolution using SSE : Unrolled loop in add function in THVector.h
33549f847a : Performance improvement on convolution using SSE : Unrolled loop in add function in THVector.h
5a0958d2f6 : restoring CMAKE_REQUIRED_FLAGS after messing with it
22b4a62161 : restoring CMAKE_REQUIRED_FLAGS after messing with it
4723cff696 : corrected stuff related to print(...) overriding
dfb4c913a8 : Add NEON assembly routine for ARM processor
9e099237d3 : Add NEON assembly routine for ARM processor
c057503a1c : Using _OPENMP flag.
1f71792b84 : Trying to merge openmp into main libs.
53e0380480 : Trying to merge openmp into main libs.
b75bd3a130 : cmdline dok typo
46e016be71 : bug correction: consider accreal in torch pkg
d044ed3955 : normall/norm in TH + ported those to torch pkg
007afa9988 : normall/norm in TH + ported those to torch pkg
3802564e09 : histogram correctly calculated for tensors where all elements are same.
d0bc38e0a5 : histogram correctly calculated for tensors where all elements are same.
d23559f3c9 : added pthread detection for gotoblas2/openblas, which is necessary on some distributions
d60f220e18 : added pthread detection for gotoblas2/openblas, which is necessary on some distributions
b3d740a7f7 : avoid looking for blas/lapack twice
ed9de62be4 : avoid looking for blas/lapack twice
8fe10f0346 : better lapack detection
311b063378 : better lapack detection
455f38f09e : improved blas/lapack cmake scripts
c17cb214b3 : improved blas/lapack cmake scripts
4c0e1f8907 : cleaner inline detection
2ba221f9f3 : cleaner inline detection
c61781881b : better sse detection
9856a971b0 : better sse detection
3989471e6d : better inline support
76f67c409a : better inline support
8fc1bff114 : corrected a lot of warnings, mostly due to unused variables
f027156aa8 : corrected a lot of warnings, mostly due to unused variables
2a93b82078 : minor corrections (to avoid some warnings)
ddf8f4b7d3 : minor corrections (to avoid some warnings)
173906da35 : Merge branch 'make-tensor-accessors-const-correct' of https://github.com/pflaquerre/torch into pflaquerre-make-tensor-accessors-const-correct
f54b875cbe : Merge branch 'make-tensor-accessors-const-correct' of https://github.com/pflaquerre/torch into pflaquerre-make-tensor-accessors-const-correct
4bce6207cd : Merge branch 'make-tensor-accessors-const-correct' of https://github.com/pflaquerre/torch into pflaquerre-make-tensor-accessors-const-correct
d921604444 : Added dok for slicing operator.
d70c7b57dd : add atan2
f433f8d136 : add atan2
d9e051ba70 : Allowing fill assign when narrowing/selecting
c2070e9977 : fixed little bug in THVector.
916afcc290 : fixed little bug in THVector.
24749b3393 : Fixed missing types in copy.
365a18b272 : New Tensor indexing. a la matlab.
767d383f2c : made lapack link more robust
175f6818bd : made lapack link more robust
9da8beac56 : luaT was not putting the dok in luat...
1a03adebb4 : documentation for torch.inverse
4397f709be : take back tensor conv stuff, it is not ready yet
2d29aeaf12 : matrix inverse
1ae6aa1bef : matrix inverse
e20f0746e6 : Add reset function to module.lua and Sequential.lua. CmdLine accepts ignore=false for string function
db9c3e69d6 : SVD now returns U,S,V, not U,S,V^T (compatible with Matlab/Octave calling)
0591633876 : SVD now returns U,S,V, not U,S,V^T (compatible with Matlab/Octave calling)
17119a7324 : added histograms back
aa0b85bb80 : added histograms back
f23bc545d7 : big pass over documentation to make titles consistent across all packages.
9ca97814d3 : corrected bug in method type-based dispatcher
bc15365ca5 : moved local path setup into torch-env.lua
ebe47b0f95 : put the local search path before default paths.
87163d971a : first shot at local install
8eff95d047 : inline help anchors
2784d42ea8 : documentation for tensor
7109cbbdd2 : cleanhistory -> clearhistory
4a906fbff2 : corrections/additions to dok
ed4857dfcf : added mv, mm and ger + better checking of addmv, addmm and addr
bbf3970981 : tensor math documentation: one pass...
f777086b59 : added maskedSelect method to select elements using a mask vector documentation changes for Tensor
426a2b1967 : added maskedSelect method to select elements using a mask vector documentation changes for Tensor
bc85cf8fb8 : minor cleanup
7483217fe0 : now methods & functions are both generated by wrap -- this fixes some bugs and makes things more clear
50bd93709f : Merge branch 'master' into newpack
ba17882426 : {min,max,sum,mean,var,std}all now without 'all' in lua
a3f872f29b : Merge branch 'logical' into newpack
fc16a68f48 : Merge branch 'logical' into newpack
477587f566 : no need to do paths.require + require!!
cfd966d7c4 : torch lua script is now a true executable
5cd0e5f10c : minor cmake corrections
3c652197bc : more cleanup and wrap support for external packages
f08a695702 : more cmake cleanup
54bb1cf358 : more cmake cleanup
014b440244 : cmake files cleanup
79fd0a8b78 : cmake files cleanup
4b1d3d79e2 : cmake files cleanup
a3acd3e0fe : torch7 now exports its own targets for external packages
0d4350ca62 : torch7 now exports its own targets for external packages
ade0d0e1f2 : First basic skeleton for package manager
d2e18be452 : Merge branch 'master' into newpack
c7d7de315b : initial revamp of torch7 tree
a8f107756e : initial revamp of torch7 tree
053065ba23 : initial revamp of torch7 tree

+- Project: platform/external/robolectric

a277cd8a5 : Add OsConstants.RLIMIT_RTPRIO
e86d5ac07 : Update the font resource ID
086180da7 : Sync up with main for further updates
50a26da91 : Allow for RunListeners to be attached via ServiceLoader.
8a839e6bd : Remove the misspelled `@Supercedes` annotation
29803eb72 : Allow subclasses to access DefaultNativeRuntimeLoader:libraryName.
dbd775fd9 : Adjusting the minSdk for ShadowCameraDeviceImpl's _constructor_
fa729b212 : Default `robolectric.configurationChangeFix` to true
eda793ce1 : Add ShadowTextClassifierService
7b120ecfa : Set topology listeners
cc57f08a7 : Add STREAM_ASSISTANT to ShadowAudioManager.
eb8711d0d : Add moveTaskToBack support to ShadowActivity
f63a6a385 : Always invoke real framework code when AccessibilityNodeInfo.addChild is called
079f3583a : Use the real WakeLock constructor in PowerManager.newWakeLock
d8159ae8a : Remove hashCode and toString from ShadowAccessibilityNodeInfo
87450adce : Update minSdk for shadow methods from V to Baklava.
db4acad5d : Adjust to method signature changes in latest in-development SDK.
b98afc691 : Replace use of ReflectionHelpers with reflector in ShadowPausedMessageQueue.
58b0d1ca1 : Remove unused ShadowAccessibilityNodeInfo.areThereUnrecycledNodes
361375be3 : Add realistic support for window insets
78dbb80dc : Add shadow implementation of `SubscriptionManager`'s APIs to convert from a slot ID -> subscription ID.
7caf384d3 : Add GoogleSansFlexClock to Robolectric
fe8e00bfd : Reset static state in ShadowLegacyChoreographer between tests
dddbdbb00 : Update the Android V SDK to build 12650502
d8630cfa3 : Fix ShadowDisplayEventReceiver for Android Baklava.
2175c1a68 : Ignore-AOSP-First: Fix robolectric
8ab45b285 : Ignore-AOSP-First: Fix robolectric
ee8c6ed04 : Defer to real framework code for AccessibilityWindowInfo.getWindowId
1947c557a : Clear ShadowAccessibilityWindowInfo fields during recycle
e9bf3de8f : Remove unused and incorrect APIs in ShadowAccessibilityNodeInfo
e070526c7 : Remove unused and incorrect APIs in ShadowAccessibilityNodeInfo
7e65e494c : Use applyConfigurationToResources to update resources
55e4a0959 : Disable AccessibilityNodeInfo shadow APIs for direct connections
f77f0e651 : Native resources mode cleanup.
298b6205b : Support LEGACY graphics + NATIVE resources intersection.
8e3ffa6e7 : Improve error reporting with AttributeSetBuilder + native resources.
e75ef7174 : Stop opening an AssetInputStream twice.
894543b39 : Remove unused and incorrect APIs in ShadowAccessibilityNodeInfo
3df246ea5 : Remove boundsInScreen logic in ShadowAccessibilityNodeInfo
477fb2c8c : Add additional check for AccessibilityNodeInfo.setQueryFromProcessEnabled
72fa974b2 : Remove Parcel-related methods in ShadowAccessibilityNodeInfo
e42cfad32 : Remove StrictEqualityWindowWrapper from ShadowAccessibilityWindowInfo
f68d77674 : Do not require robolectric.useRealAni property to use setQueryFromAppProcessEnabled
71876fc20 : Add back missing useRealAni check in AccessibilityNodeInfo.obtain(AccessibilityNodeInfo)
9add32ffc : Label shadow tests that require LEGACY graphics mode.
a1c93b881 : Use real AccessibilityNodeInfo code for AccessibilityNodeInfo.obtain(AccessibilityNodeInfo)
a6fdfc11d : Leverage some additional framework code in ShadowAccessibilityWindowInfo
cdf226364 : Add support for a filter list in JarInstrumentor
96123ed41 : Turn off fail-fast for the native graphics tests
eb1e880c5 : Disable native graphics test on Windows in SDK 35
eafc82a58 : Use real Android code for AccessibilityRecord.window_id
493585f73 : Call AccessibilityNodeInfo.setSource when ShadowAccessibilityNodeInfo.obtain(View) is called
9fb8b1104 : Make resources to keep to a command line arg in JarInstrumentor
df4deaf75 : Remove `deepEquals` and `hashCode` from ShadowAccessibilityWindowInfo
f3c3a428a : DO NOT MERGE
9f2021102 : Disable new test failing in older releases
f94d7f501 : revert rename
f4bd0c195 : Invoke the real AccessibilityWindowInfo.obtain(AccessibilityWindowInfo)
55a79d123 : Revert field name change in Shadow*MessageQueue*.
2a1f29f6b : Fix ShadowPendingIntent.send() to forward the options passed in when starting activity.
f21206c88 : Deprecate AttributeSetBuilder.
7feb151a3 : Update shadows to adapt to in development Android changes.
39392f86c : Remove unnecessary shadow methods in ShadowAccessibilityWindowInfo
1d6b046e8 : Add packages/apps/Car/Settings/tests/deviceless and packages/apps/Car/Settings/tests/multivalent to the visibility list.
a0d82788f : Invoke real Android code when AccessibilityNodeInfo.setSource is called
edf5f476b : Add CONCRETE to ShadowServiceManager BinderTypes.
5f831aebe : Remove unnecessary uses of LooperMode(PAUSED)
57a0e95bf : Add basic support for DirectAccessibilityConnection behind a flag
dbbde01d0 : Fix thread leak in QueuedWork.
657593a1d : Skip as many non-class files as possible in JarInstrumentor
afc51eb59 : Add @InDevelopment annotation to ShadowInputMethodManager#hideSoftInputFromWindow
76bfb0f83 : Add @InDevelopment annotation to ShadowInputMethodManager#hideSoftInputFromWindow
a671b04b6 : The method was added in Android S and removed in Android Baklava.
9ca5f2d01 : Add support for Android V
681c175f9 : Move RobolectricTestParameterInjectorTest to integration_tests/testparameterinjector
008bc920d : Fix RobolectricTestParameterInjectorTest in Gradle
78fbe1776 : Remove SimpleFuture from Robolectric's utils
9431698dc : Fix broken tests in scattered bugs (Ignore-AOSP-First: fixing baklava; cherry-picked, Merged-In: I12b2edc203f0c60b2fd8cd89ac4e983aec1d994b)
5801a3989 : Remove some obsolete Java version checks
137754275 : Allow access to jdk.internal.access for Robolectric's tests
606ccaa66 : Fix broken tests in scattered bugs Ignore-AOSP-First: fixing baklava
f58f8aa23 : Bump the preinstrumented version to 7
431ff8a1b : Revert^2 "hideSoftInputFromWindow: set maxSdk to V"
0fde1db19 : Revert "hideSoftInputFromWindow: set maxSdk to V"
7b2ea32fe : hideSoftInputFromWindow: set maxSdk to V
ad2a000c2 : hideSoftInputFromWindow: set maxSdk to V
d66a635d4 : Remove unnecessary references to system server code.
6effecb2e : Remove dependencies on the 1-variant fallback
70c5f4082 : Adjust to modified BackMotionEvent constructor in in-development SDK.
b3b08051a : Add a shim method PathIterator.nNextHost
56fb0ef2b : Reapply "ShadowCameraManager: do not override openCameraDeviceUse..."
fed56c441 : Update AndroidVersions.W to AndroidVersions.Baklava.
79886b809 : Broadcast device idle mode change from ShadowPowerManager when the deep idle mode changes.
eff70fadc : Update Owners
2b26eacc9 : Revert "ShadowCameraManager: do not override openCameraDeviceUse..."
873281853 : ShadowCameraManager: do not override openCameraDeviceUserAsync on V+
dbf4bbc87 : Add a simple OS helper class for Os checks
db40798ee : Formatting change via harddiff.sh
0e0ecad6c : ShadowCameraManager: do not override openCameraDeviceUserAsync on V+
3f2f61093 : Depend on TestParameterInjector
75efca298 : Fix for different branches
b884d4ff1 : Remove obsolete Robolectric TimeUtils
d946a59eb : Should fix most launcher tests
cbd233c03 : Fix for different branches
cff3642c2 : Add a no-op ProtoLogConfigurationService to Robolectric's ShadowServiceManager
aa74ba145 : Add RobolectricTestParameterInjector
4d8a353a7 : Should fix most launcher tests
53248a4ac : Add AudioAttributes impl to ShadowMediaPlayer
fd0c47816 : Add methods to set ODM_SKU in ShadowBuild.
fb86e9e52 : Implement ShadowBluetoothLeBroadcast
c63ef6271 : Add setLinkUpstreamBandwidthKbps to ShadowNetworkCapabilities.
9311be864 : Add a correctly spelled 'Supersedes' annotation for PluginFinder
da4f1a1fb : Adjust to AssociationInfo constructor changes in android main.
a9b30da29 : Add ShadowAudioManager support for `AudioManager.OnCommunicationDeviceChangedListener`
e1e3ff4a8 : Adapt to changed BackMotionEvent constructor.
732cf09b3 : Revert of "Implement ShadowBluetoothLeBroadcast"
c14b188b1 : Include unloadNanoApp in the ContextHubManager shadow.
c4418114c : Update owners
cb7d7f112 : Add support for CardEmulationManager observe mode defaults
1f09d409f : Add shadow for new UsageStatsManager#queryEvents(UsageEventsQuery)
59ffac5dc : Implement ShadowBluetoothLeBroadcast
ebd948b9c : Fixes for errorprone update
0bf033606 : Fix virtual device state on in development android SDKs.
5231dcd5d : Support WifiUsabilityStatsEntry in post V SDKs.
c0ef7d1bf : Implement pending update APIs in ShadowDevicePolicyManager
e26086617 : Create a InputDeviceBuilder and deprecate ShadowInputDevice.
f66c4eda9 : Add support for native Path methods added in Android U
b91f1dd22 : Add Credential Manager service to ShadowServiceManager
da246cf14 : Add consistent performance measurement for binary vs native resources.
d6a61226a : Add caching to native apk assets.
83a0ce5ca : Support applyStyle in native resources.
8716aa9d9 : Fix FileDescriptor setFdInt.
6eba1ab4e : Ensure all OsConstants fields have valid values.
0dfa895fe : Ensure that `queryOverriddenIntents` returns a new copy (same as on device) and add tests for the same.
1276802b6 : Stop referencing AndroidVersions.W in shadows.
021723936 : Add shadow for upcoming PathIterator#nNextHost in LEGACY graphics.
2c0ba49b7 : Add methods to set SOC_MANUFACTURER and SOC_MODEL in ShadowBuild.
fe82b894c : Reimplement Os.pread using channels and improve fidelity.
bbe1a22b4 : Remove re-implementation of the `AudioTrack` - `write(ByteBuffer...)`
cee383b85 : Deprecate looseSignatures and remove last remaining uses
05b7e8afb : Revert recent ShadowUserManager Context changes
cefdc6908 : Fix ShadowAccessibilityManager tests
8ac28f83a : Disallow null when calling ShadowAccessibilityManager.setEnabledAccessibilityServiceList
43fdc5676 : Bump com.googlecode.libphonenumber:libphonenumber
e7001e0aa : Fix an NPE in ShadowConnectivityManager
a777f434b : Initialize UserManager state during the ShadowUserManager static initializer
e2bb75136 : Fix a potential NPE in ShadowAccessibilityManager resetter
54f9e36da : Delete ShadowAnimationUtils.
16b7fe6b3 : Fix a typo in ShadowUserManagerTest
10cd07258 : Fix typos in ShadowUserManager
44db56210 : Fix package name in integration_tests/multidex manifest
cf85d18ba : Stop shadowing the ColorDisplayManager constructor
c8620b38a : Add missing `@Resetter` annotation in ShadowColorDisplayManager
6b68b070b : Remove unnecessary line in ShadowConnectivityManagerTest
425e0f0ef : Make protected fields as private for ShadowCompanionDeviceManager
30e096325 : Fix ShadowConnectivityManager defaults after reset
724b1c64d : Fix default value regression of ShadowNotificationManager.importance
d7a197b32 : Use `SuppressWarnings("robolectric.ShadowReturnTypeMismatch")` in ShadowParcel
6bfc3d5e1 : Fix resetting the AppOpsManager OnOpNotedCallback
57a1cb3c1 : Encapsulate ShadowSensorManager.forceListenersToFail in a setter
d36c88d4d : Actually run 'sdkcompat' tests during Copybara presubmits
a3820bbaa : Include SQLiteDatabaseTest into CtesqueRoboTests.
b70700c7a : Fix ClassNotFoundException when initializing Choreographer reflector
9e0110192 : Fix code scanning alert #9: Arbitrary file access during archive extraction ("Zip Slip")
e91eed50d : Make-fields-of-ShadowUserManager-static-for-context-level-instance
c48023409 : Fix UnsatisfiedLinkError when calling Canvas.nSetHighContrastText
fe8245e83 : Update some APIs to Use real types in ShadowNotificationManager
7f3d02642 : Support static references to Choreographer.getInstance.
1d2570ef8 : Add support for additional flags parameter in the notifyChange method for ShadowContentResolver.
3cdcaea24 : Revert all minSdk W shadow methods to V.
0cd65670e : Prepare for CameraManager#openCameraDeviceUserAsync signature change in android.
f4da4f283 : Skip SQLiteDatabaseTest in LEGACY mode.
6e0d5159d : Add proxy implementations for new android V services.
8dfa3194c : Remove unused exception for ShadowStatusBarManagerTest's tests
87661ad62 : Make-fields-of-ShadowStatusBarManager-static-for-context-level-instance
0a15c81f5 : Bump org.jetbrains.kotlinx:kotlinx-coroutines-android
47baa3dd5 : Extend ShadowStatsManager with addConfig and removeConfig
c0e6d16b0 : Make-fields-of-ShadowAccessibilityManager-static-for-context-level-instance
686cdc702 : Remove generics from ClassName annotation values
bfed7f3b3 : Remove looseSignatures in ShadowNativeThreadedRenderer
d5c07e767 : Support ShadowSpeechRecognizer on Android V
fbbfbd76e : Remove unused code from ShadowWranglerUnitTest
c2a8969c4 : Add support for NfcAdapter observe mode.
641eef764 : Move data listener notification to lower layer as well to cover primitive bytes writes as well.
ba7ed962e : Fix native Bitmap.recycle in Android V
aed5122de : Remove unnecessary shadow from MethodNameTest
3ff281ff5 : Fix edge case when using `@Implementation(methodName=...)`
1b729f38f : Fix common typos in recently modified files
806e1a88c : Add Javadoc for files that have been recently modified
967ba2a0b : Update some method signatures after ClassName migration
59f1b4e7d : Update to Gradle 8.10.1
97309a8a6 : Bump io.gitlab.arturbosch.detekt from 1.23.6 to 1.23.7
317aaff94 : Bump com.googlecode.libphonenumber:libphonenumber
7029c231f : Bump the androidx group with 2 updates
0e5dcd1c3 : Use lazy device creation in `GradleManagedDevicePlugin`
a59128641 : Make-fields-of-ShadowWallpaperManager-static-for-context-level-instance
f8428b3aa : Add GMD group for all targets from API 29
53b1992e5 : Fix minimal SDK for VirtualDeviceManagerTest
a8c8c4f66 : Fix WifiP2pManagerTest's starting API from instance not same
9b7a161b5 : Make fields of ShadowConnectivityManager static for context level instance
5fdb6a9ec : Remove unused variables for ShadowBatteryManagerTest
959e5656c : Make-fields-of-ShadowMediaSessionManager-static-for-context-level-instance
1f2dd1c08 : Make WifiAwareManagerTest safer with necessary manager existing checking
f9e56cf9c : Make-fields-of-ShadowTranslationManager-static-for-context-level-instance
55e1d90c4 : Make-fields-of-ShadowVpnManager-static-for-context-level-instance
4daf9b0e7 : Make-fields-of-ShadowWifiAwareManager-static-for-context-level-instance
4f17c57bd : Update the main `Readme` file
4f7426681 : Bump com.android.tools:common from 31.5.2 to 31.6.0
6ebaac7e4 : Bump android-gradle from 8.5.2 to 8.6.0
44109291d : Remove Groovy-specific config
4887419b0 : Make-fields-of-ShadowTelecomManager-static-for-context-level-instance
563411e26 : Make fields of ShadowCarrierConfigManager static for context level instance
69a1c7cc3 : Make-fields-of-ShadowNotificationManager-static-for-context-level-instance
67db0b2b1 : Migrate some Gradle files to Kotlin (6)
29b7fbaa3 : Make-fields-of-ShadowPowerManager-static-for-context-level-instance
f154588cc : Migrate some Gradle files to Kotlin (5)
0bc240b9c : Add basic devcontainer for GitHub Codespaces
92029e168 : Make fields of ShadowAmbientContextManager static for context level instance
95fb8c449 : Remove duplicated USE_BIOMETRIC declaration
6724d5ea6 : Migrate the remaining Gradle files to Kotlin (7)
1864b35fc : Make-fields-of-ShadowRollbackManager-static-for-context-level-instance
40d582010 : Make fields of ShadowFileIntegrityManager static for context level instance
1fbd862b9 : Make-fields-of-ShadowWifiP2pManager-static-for-context-level-instance
ec474ee08 : Migrate some Gradle files to Kotlin (4)
63df333bb : Make fields of ShadowInputMethodManager static for context level instance
c36c0d382 : Migrate some Gradle files to Kotlin (3)
ccf7b5956 : Make-fields-of-ShadowShortcutManager-static-for-context-level-instance
4048d8a4c : Migrate some Gradle files to Kotlin (2)
bff2bedb8 : Migrate some Gradle files to Kotlin
20c8872f3 : Bump kotlin from 2.0.10 to 2.0.20
f6f071536 : Bump com.googlecode.libphonenumber:libphonenumber
902d7cf92 : Remove looseSignatures usage from ShadowWifiManager
e163e8c45 : Migrate root Gradle files to kotlin
4f4d5986e : Make-fields-of-ShadowSearchManager-static-for-context-level-instance
f07e3f767 : Make-fields-of-ShadowRestrictionsManager-static-for-context-level-instance
97a00fa88 : Make-fields-of-ShadowVcnManager-static-for-context-level-instance
8c7c39048 : Make-fields-of-ShadowVirtualManager-static-for-context-level-instance
85911f817 : Rearrange API levels for unit tests
b64cb0ba5 : Add test to guard the behavior of ClassName annotation concerned with generic type
776e5c596 : Make-fields-of-ShadowLocaleManager-static-for-context-level-instance
1d0d4c4a4 : Remove custom CSS from the generated Javadoc
3911f5f95 : Update naming for ShadowPausedMessageQueue.nativePollOnce
e24baa272 : Migrate `buildSrc` Gradle files to Kotlin
789621800 : Make-fields-of-ShadowSystemHealthManager-static-for-context-level-instance
44e247cb4 : Bump the androidx-test group with 2 updates
b3d5cccf4 : Bump com.googlecode.libphonenumber:libphonenumber
ab054771b : Make test for retrievesSameDefaultSubscriptionInfo meaningful
ba4dd0eea : Add tests for ShadowTelephonyManager static state
733959c13 : Make-fields-of-ShadowSubscriptionManager-static-for-context-level-instance
d4c0e76a0 : Make-fields-of-ShadowUsageStatsManager-static-for-context-level-instance
f67ef3814 : Make-fields-of-ShadowWearableSensingManager-static-for-context-level-instance
107cd6f0e : Make-fields-of-ShadowTimeManager-static-for-context-level-instance
320522f25 : Make-fields-of-ShadowUsbManager-static-for-context-level-instance
dd36f29c8 : Make-fields-of-ShadowUwbManager-static-for-context-level-instance
dc0b67bc3 : Update the CodeQL workflow
fe243ee25 : Migrate the `ShadowsPlugin` plugin in `buildSrc` to Kotlin
2a3322ff7 : Add tests for ShadowStorageManager static state
b548bb40b : Make-fields-of-ShadowSafetyCenterManager-static-for-context-level-instance
b6eb267bd : Migrate the `DeployedRoboJavaModulePlugin` plugin in `buildSrc` to Kotlin
8fb4da0fe : Remove looseSignatures usage from ShadowNoopNativeAllocationRegistry
f52b46e9e : Upgrade to Gradle 8.10
fdc80d367 : Make-fields-of-ShadowSliceManager-static-for-context-level-instance
21ed92da3 : Migrate the `GradleManagedDevicePlugin` plugin in `buildSrc` to Kotlin
f3373595e : Migrate the `AarDepsPlugin` plugin in `buildSrc` to Kotlin
ea7da04de : Remove looseSignatures usage from ShadowTelephonyManager
8cc00c4a5 : Add tests for ShadowRoleManager static state
f2c8008b4 : Remove looseSignatures usage from ShadowTimeZoneFinder
3fef9ac10 : Remove looseSignatures usage from ShadowUsageStatsManager
faaaaed71 : Remove looseSignatures usage from ShadowUsbManager
7559472e7 : Make-fields-of-ShadowSensorManager-static-for-context-level-instance
19bb8c2ad : Remove looseSignatures usage from ShadowPowerManager
2aa9bf41a : Remove looseSignatures usage from ShadowSpeechRecognizer
01adb4c07 : Remove looseSignatures usage from ShadowSurface
071bdb694 : Remove looseSignatures usage from ShadowSystemClock
ed939d830 : Remove looseSignatures usage from ShadowThreadedRenderer
61bc931c2 : Remove looseSignatures usage from ShadowTimeZoneFinderQ
67f8fece0 : Remove looseSignatures usage from ShadowTimeZoneFinderS
11ae4252d : Remove looseSignatures usage from ShadowWindowManagerGlobal
637183e20 : Bump com.android.tools:common from 31.5.1 to 31.5.2
1b6cf9979 : Bump android-gradle from 8.5.1 to 8.5.2
243f3fb83 : Bump kotlin from 2.0.0 to 2.0.10
a2f04aea0 : Bump androidx.annotation:annotation in the androidx group
e167eac54 : Bump roborazzi from 1.25.0 to 1.26.0
592d33c0d : Add Tests for ShadowCaptioningManager static state
4e9c6aac6 : Make-fields-of-ShadowNetworkScoreManager-static-for-context-level-instance
9585913b3 : Remove looseSignatures usage from ShadowRotationWatcherFor22
254e92e3c : Remove looseSignatures usage from ShadowSensorManager
25cd1c0a4 : Remove looseSignatures usage from ShadowSystemServiceRegistry
40eb20256 : Remove looseSignatures usage from ShadowSystemVibrator
a9428af55 : Fix incorrect method name in ShadowInstrumentation warning log
7e1e06aff : Remove looseSignatures usage from ShadowPixelCopy
77489d5d6 : Add tests for ShadowMediaRouter static state
2fb82e162 : Remove looseSignatures usage from ShadowPosix
29eee77b9 : Add necessary minSdk = O for some createActivityContexts tests
a29ae2cbd : Make-fields-of-ShadowDropBoxManager-static-for-context-level-instance
351627210 : Migrate the `SpotlessPlugin` plugin in `buildSrc` to Kotlin
869b13ef6 : Migrate the `AggregateJavadocPlugin` plugin in `buildSrc` to Kotlin
f050e35ee : Migrate the `agp` package in `buildSrc` to Kotlin
30def5be6 : Move Java files in `buildSrc` to the `java` source set
cd0569bc5 : Remove looseSignatures usage from ShadowPhoneWindowFor22
df5a4c998 : Make-fields-of-ShadowShadowLauncherApps-static-for-context-level-instance
8685624c6 : Make-fields-of-ShadowCrossProfileApps-static-for-context-level-instance
cfb50732b : Remove looseSignatures usage from ShadowPhoneWindow
35aedf322 : Remove looseSignatures usage from ShadowPaint
2f4160308 : Remove looseSignatures usage from ShadowParcel
8d2624abd : Make-fields-of-ShadowEuiccManager-static-for-context-level-instance
747ded4df : Remove looseSignatures usage from ShadowPausedMessageQueue
c7d983967 : Remove the `gradle_wrapper_validation.yml` workflow
2cb0b4fe5 : Bump gradle/actions from 3 to 4
63c842b55 : Remove looseSignatures usage from ShadowNativeTypeface
4bf3d73da : Remove looseSignatures usage from ShadowNotificationManager
c676ce2dc : Make fields of ShadowBluetoothManager static for context level instance
c29df110a : Make fields of ShadowAppsOpsManager static for context level instance
a174c4120 : Remove looseSignatures usage from ShadowNetworkCapabilities
8c6ba3fa3 : Remove looseSignatures usage from ShadowNfcAdapter
487c00164 : Remove unused legacy resource logic for AndroidTestEnvironment
e476c5aa9 : Make fields of ShadowBiometricManager static for context level
d8255b903 : Make fields of ShadowContextHubManager static for context level instance
306e6ae1c : Make some fields of ShadowAppWidgetManager static for context level instance
d1b3c85a6 : Make fields of ShadowCameraManager static for context level instance
5414e44db : Make some fields of ShadowActivityManager static for context level
a34aa83ec : Make fields of ShadowFingerprintManager static for context level instance
1bdfdde81 : Make fields of ShadowDownloadManager static for context level instance
29afa66f7 : Make fields of ShadowAutofillManager static for context level instance
be8db7258 : Remove looseSignatures usage from ShadowLocationManager
ce9af8b94 : Make fields of ShadowDevicePolicyManager static for context level instance
7b6070276 : Make fields of ShadowColorDisplayManager static for context level instance
c1355ab7f : Remove looseSignatures usage from ShadowNativeImageReaderSurfaceImage
95bd79e94 : Remove looseSignatures usage from ShadowNativePaint
4be483c91 : Remove looseSignatures usage from ShadowNativeRenderNodeAnimatorQ
dbfd34bd9 : Remove looseSignatures usage from ShadowNativeRenderNodeOP
8e3dd9ba4 : Remove looseSignatures usage from ShadowNativeStaticLayout
c2c8252f5 : Migrate `buildSrc` Gradle files to Kotlin
8f0292ce3 : Update to `google-java-format` 1.23.0
0c75d6a44 : Add tests for ShadowKeyguardManager static fields
eafe846c6 : Remove the android-all from the runtime classpath
89cd24592 : Extend VirtualDisplay shadow support down to SDK 28
5ad0fd876 : Bump com.googlecode.libphonenumber:libphonenumber
25ef22e86 : Bump the androidx group with 3 updates
fa336b98c : Bump roborazzi from 1.23.0 to 1.25.0
c4a5b2f3c : Remove looseSignatures usage from ShadowNativeHardwareRenderer
71c805f95 : Remove looseSignatures usage from ShadowNativeImageReader
cdda95ed3 : Remove looseSignatures usage from ShadowLegacyTypeface
324969f7b : Remove looseSignatures usage from ShadowMediaCodec
cd627f2e4 : Remove looseSignatures usage from ShadowNativeBitmap
d6e254957 : Remove looseSignatures usage from ShadowHardwareRenderer
b6dd0a390 : Add shadow support for SDK 33 PackageManager#queryIntentServices
b22bad8eb : Make some fields of ShadowClipboardManager static for context level instance
e52400ec8 : Bring integration_tests/multidex again with build.gradle
c3e7b1b1a : Bump roborazzi from 1.22.2 to 1.23.0
5b0fe6492 : Add tests for ShadowAlarmManager static fields
3d73f30fd : Remove looseSignatures usage from ShadowImageReader
0156803c1 : Remove looseSignatures usage from ShadowInputMethodManager
92a9c5dff : Remove looseSignatures usage from ShadowInputManager
a05d567a3 : Remove looseSignatures usage from ShadowDisplayManagerGlobal
440b8ab07 : Remove looseSignatures usage from ShadowDisplayManager
86b5aeade : Remove looseSignatures usage from ShadowDisplayEventReceiver
949c20e0b : Remove looseSignatures usage from ShadowDevicePolicyManager
304b1f820 : Bump com.android.tools:common from 31.5.0 to 31.5.1
4caed878f : Bump roborazzi from 1.21.0 to 1.22.2
dbff27010 : Bump com.googlecode.libphonenumber:libphonenumber
470a491bb : Bump android-gradle from 8.5.0 to 8.5.1
0fc7b5b3e : Remove looseSignatures usage from ShadowApplicationPackageManager
0ac6716b3 : Add extension API setAlias for ShadowBluetoothDevice's compatibility
ba57ea0b2 : Remove looseSignatures usage from ShadowBluetoothAdapter
2ce0c0f67 : Remove looseSignatures usage from ShadowContextHubManager
315fcee71 : Remove looseSignatures usage from ShadowContentProvider
6283d6e20 : Remove looseSignatures usage from ShadowBluetoothDevice
33b900c15 : Remove looseSignatures usage from ShadowBitmap
1908ce2cb : Remove looseSignatures usage from ShadowBackupDataInput
4673168e9 : Remove looseSignatures usage from ShadowAudioTrack
2622750fc : Remove looseSignatures usage from ShadowActivityThread
90bf12666 : Remove looseSignatures usage from ShadowActivityManager
59d6e73db : Keep parameter name of noteProxyOpNoThrow same as origin method
9f9897435 : Remove looseSignatures usage from ShadowAppOpsManager
a22559380 : Remove looseSignatures usage from ShadowAudioManager
29d5be9b6 : Remove looseSignatures usage from ShadowArscApkAssets9
7103d728b : Support @ClassName in function return type
b9bd4f882 : Bump Gradle to 8.9
afd17a637 : Add tests to `ShadowSharedPreferences`
e84541a05 : Replace `androidx.test.annotation.Beta` with `com.google.common.annotations.Beta`
078a5f007 : Make fields of ShadowBatteryManager static for context level instance
4056907bb : Update snapshot version in Readme
d09e25ed1 : Bump roborazzi from 1.20.0 to 1.21.0
a9c054fb8 : Bump version in README.md to 4.13
337c1628c : Bump version in gradle.properties to 4.14-SNAPSHOT
7083bb40f : Remove empty shadows:versioning from settings.gradle
052dd04e6 : Disable deployed for integration_testing/versioning
91f38c332 : Bump the androidx-test group with 9 updates
ae33a5b3e : Regenerate resources
046125428 : ci: Exclude Manifest.java and R.java from google-java-format
202a6ac03 : Remove Multidex from `integration_tests/androidx_test` and `integration_tests/multidex` module

+- Project: platform/external/rust/android-crates-io

dbcef589 : Allow Keystore to use the der crate
8596c8a3 : Reduce min_sdk_version of zerocopy to 29.
71fd49f7 : Keep using old version of zerocopy_derive in zerocopy.
6c41eabe : Update enumn to 0.1.14.
f0315489 : Add min_sdk_version to cargo_embargo.json
f02db5b8 : tokio-stream: upgrade to v0.1.16
c4033f9f : bytes: update to v1.9.0
ce58baed : Fix crate list and Cargo.lock for extra versions.
157a4b76 : tokio-util: upgrade to v0.7.13
63d04bd7 : tokio: upgrade to tokio v1.42.0
db23deda : Fix metadata for clap.
ca4e630c : Revert "Add extra vm-memory fixed at version 0.12.2"
067e8ca2 : Revert "Update vhost-user-backend and its dependencies"
1ab3cb2f : Fix cargo_embargo.json and crate-list.txt for vm-memory in extra_versions.
95175a69 : Migrate 9 newly imported crates to monorepo
ed0b395b : Set version for libdata_encoding
64dca76c : Update vhost-user-backend and its dependencies
9ca810ee : Add extra vm-memory fixed at version 0.12.2
2e1e1ebd : Add -std=c++11 flag to grpcio-sys build_cmake.rs
e8e1866d : Update form_urlencoded to v1.2.1
255511d5 : Update percent-encoding to v2.3.1
81d58bbd : Limit std patches to android_dylib only
9cb8a7a4 : Add instructions for creating patches.
f62620e7 : Migrate clap to monorepo, with upgrade to 4.5.0.
c87e8214 : Import clap_builder 4.5.0
a2e41366 : Import anstyle 1.0.10
8d14f2b0 : Update clap_lex to 0.7.2
7a129315 : Fix metadata for a couple of crates.
becf29b3 : Remove "do not subm1t" messages from rules.mk
add874e7 : Make configparser available to //vendor
ad601d9d : Don't run cargo for crates where it isn't needed.
55ea1102 : Fix syntax and min_sdk_version for bytemuck.
e75b2ff3 : Migrate ring to monorepo, with upgrade to 0.17.2
534b5d79 : Use clap_lex 0.3.2 with clap 3.2.23
2213784f : Copy clap_lex 0.3.2 to extra_versions.
e0fc7157 : Adding min_sdk_version for apex nested dependencies
2d5f273e : Use min_sdk_version 29 instead of 34
bbd37b23 : bit_field crate: Clean-up of CL:3345830
38996d43 : Update fallible-iterator to 0.3.0
2b888c4d : Validate TEST_MAPPING files.
3d0c78c2 : Migrate libbpf-sys to monorepo.
302e09b0 : bytemuck crate: Add no_std variant
a2fff133 : bit_field crate: Add support for generating Android.bp
72f2f22c : Re-enable bpfmt as a pre-upload check.
d4d3ecf9 : Rename *.bp files to *.bp.fragment.
6a0fbdf8 : Remove *.bp files with visibility rules.
41c96dfc : pdl-compiler: add visibility for pdl_rust_generator_defaults
4634c3e6 : Update build rules for smccc
dc047616 : Update for zerocopy and zerocopy-derive so 2 versions can coexist.
e1fb0202 : Copy zerocopy and zerocopy-derive to extra_versions.
e5257ee2 : Migrate 3 crates to monorepo.
27910b80 : Revert^2 "Migrate zerocopy to monorepo."
226aeee5 : Revert "Migrate zerocopy to monorepo."
7d3ef3cc : Migrate grpcio-sys to monorepo.
a3f237f8 : Migrate 5 crates to monorepo.
22068a4c : Fix METADATA file for all crates.
1d996ec8 : Migrate 3 crates to monorepo.
28782d77 : Update log crate to 0.4.22.
d9126851 : Use protobuf 2.27.1 for Trusty.
4f389a69 : Migrate protobuf and protobuf-codegen to monorepo.
f954d6e6 : Migrate zerocopy to monorepo.
bdc0ec74 : Delete rules.mk for extra_versions/crates/bitflags
55511645 : Add com.android.virt as apex_available into libjni
1445a4af : Add com.android.virt as apex_available into dependencies of libjni
0a5d4d59 : Update bindgen-cli to 0.69.5
e12be145 : Update bindgen-cli to 0.69.4
de36e4ac : Migrate bitflags to monorepo, with upgrade to 2.6.0
89479101 : Generate rules.mk for num-integer via cargo_embargo
185bd422 : Add tests not in TEST_MAPPING as post-submit tests.
46437f4f : Adding min_sdk_version for apex nested dependencies
939fe595 : Migrate 9 crates to monorepo.
d5e18a8e : Update include paths for migrated crates.
a992a01e : Make rules.mk for zeroize no_std compatible
0739273e : Set min_sdk_version=29 for foreign-types and foreign-types-shared
7025ec02 : Generate rules.mk for aarch64-paging via cargo_embargo
0db7be6a : Generate rules.mk for once_cell via cargo_embargo
a9a0fd09 : Generate rules.mk for uuid via cargo_embargo
cfefbc7d : Generate rules.mk for buddy_system_allocator via cargo_embargo
7ff9b5bd : Generate rules.mk for tinyvec via cargo_embargo
faeadd58 : Fix rules.mk for zeroize
174ea78a : Revert "Add dirgroup for trusty genrule"
1ae7749b : Update prettyplease to 0.2.17
2529f6da : Update serde and serde_derive to 1.0.195
6f18d983 : Put license_text in []
c4714017 : Migrate v2.27 of protobuf and protobuf-codegen to monorepo.
e9bb57c0 : Update more Android.bp and rules.mk for Trusty
96dd2c35 : Update libc to 0.2.161
7e74b2b6 : Migrate 3 crates to monorepo.
dfbf58ff : METADATA files for extra versions of crates.
a4c5507b : Fix .gitignore and pseudo_crate/.gitignore for consistency.
2a368dc0 : Migrate heck to monorepo.
9cb15f20 : Update rules.mk files for Trusty builds
9d91273a : Set device_supported to true for unsafe-libyaml and update Android.bp
1add2495 : Remove heck-0.3.3
8585d640 : Add preupload check for extra version managed repo.
a2395670 : Initialize another managed repo to hold second versions of crates.
3bc9b861 : Migrate pdl-compiler to monorepo.
f7d50922 : Use --release to build a faster crate_tool
f27b600c : Add fold feature to syn crate.
33334541 : Migrate bytemuck, with upgrade to v1.19.0
d24b2d73 : Migrate 3 crates to monorepo
c7cf1d2c : Revert "Migrate bytemuck to monorepo, with upgrade to 1.16.3"
26ad9bbc : Migrate bytemuck to monorepo, with upgrade to 1.16.3
c27b45de : Migrate 8 crates to monorepo
cac59e8e : Add fold feature to syn crate.
c22bd938 : Migrate 8 crates to monorepo
11eb3fcd : Migrate 20 crates to monorepo
c7cfedca : Migrate syn to monorepo, and update to 2.0.58
4af334e2 : Update futures to 0.3.31
c1329e22 : Add dirgroup for trusty genrule
0588f9f2 : README for the crate monorepo, explaining how to do updates.
be12d66f : Migrate 25 crates to monorepo
8f37f036 : Update httparse to 1.9.5
4753b9fe : Fix METADATA files.
e1a93219 : Remove .orig files created by patch.
8eabf660 : Rename the monorepo tool to "crate_tool"
73dbe78e : Generate rules.mk for x509-certs with cargo_embargo
84cb53f7 : Update rules.mk for lazy_static to use FIND_CRATE
f1babde5 : trusty.rs: CLOCK_BOOTTIME as the only supported
deeae78e : Migrate 13 crates to monorepo
eb7613a2 : Migrate once_cell, with upgrade to version 1.19.0
9e9f3646 : Update 20 crates.
756fe13a : Update some crates.
8856c1e6 : Update fastrand to 2.0.2
bd3a796c : Update features when generating rules.mk for spin crate
a751cfcb : Migrate 20 crates to monorepo
427efb7f : Migrate 25 crates to monorepo
b4b65030 : Regenerate zeroize and zeroize_derive.
46a885c4 : Update semver crate to 1.0.23
2d40e8d4 : Regenerate log crate.
b3c4a54a : Recontextualize all patches.
e7c06510 : Set tempfile version to 3.12.0 and regenerate.
7cd530db : Revert^2 "Upgrade tempfile to 3.12.0"
453c6007 : Generate rules.mk for serde_derive via cargo_embargo
4ef77fc0 : Generate rules.mk for pkcs1 via cargo_embargo
aead892a : Generate rules.mk for ciborium-ll via cargo_embargo
4295db27 : Generate rules.mk for const-oid via cargo_embargo
f58b132f : Generate rules.mk for sec1 via cargo_embargo
ada7bb36 : Generate rules.mk for byteorder via cargo_embargo
9fe53430 : Generate rules.mk for cfg-if via cargo_embargo
f11efdb1 : Generate rules.mk for pkcs8 via cargo_embargo
2c9e4e77 : Generate rules.mk for spki via cargo_embargo
230932df : Generate rules.mk for libc via cargo_embargo
ed41ad0c : Generate rules.mk for der via cargo_embargo

+- Project: platform/external/rust/crabbyavif

a6987b0 : heic: Add compression format to the API
0f9d0e6 : mediacodec: Retry input buffer dequeue-ing
fc3a9a9 : Add a system property to enable AVIF decoding in CrabbyAVIF
9d3bb09 : Add a system property to enable AVIF decoding in CrabbyAVIF
af828d3 : Enable heic gainmap support in crabbyavif
f7b017a : mediacodec: Always use P010 if depth > 8
2c3a36a : heic: Recognize HEIC files only if they have gainmap
182098f : mediacodec: Split prefer_hw logic for avif and heic
d04ffe9 : mediacodec: Always use YUV420Flexible for 8-bit
d474f1e : capi: Fix alpha plane scaling
298e97f : rgb_impl: Use correct V plane for P010
7455f69 : Set P010 depth to 16
b9f6f2d : Update chroma_shift_x to support P010
08c441e : capi: Add avifImageCopy
9f08a32 : Fix chroma_shift_y for P010 and NV12/21
8c57543 : image: Fix v plane dimensions for P010, NV12/21
f30eb48 : capi: Check for nullptr before slice creation
cca2973 : reformat: Implement RGB565 conversion
60a1f50 : Add public API for mediacodec output format
c0328ac : mp4box: Support truncated ftyp box when peeking
c5c87c3 : toml: Format the TOML files
dd31bd0 : heic: Support hvc1 type for dimg items
b25ef59 : Do not validate tmap properties for HEIC
93fcc1c : mp4box: Enforce that ftyp is the first box
9ac1c20 : Remove/update some stale comments
92421e6 : mp4box: Add an explanation comment in parse_elst
fca79bc : decoder: Update made up alpha_item_id
8c7d8f1 : c api test: Test decoder reuse in OneShotDecodeFile
178657d : image: Use u32 math in copy_from_tile
59c4da4 : Use correct list of categories as requested
c354828 : sys: Use OUT_DIR for all builds except bazel
977dd5a : Simplify gain map API.
48d0186 : mediacodec: Fix nal unit end computation
8688774 : heic: Recognize heic brands
7f5cd73 : mediacodec: Construct NAL units for HEVC input
607c60f : decoder: Use only mediacodec for HEIC
5f74494 : libyuv: Add P010 -> ARGB conversion path
cb25897 : mediacodec: Add support for HEVC decoding
9aa68fc : item: Refactor is_image_item into a function
73360e0 : heic: Parse CodecConfiguration for HEVC
fad1735 : decoder: Simplify codec config copying
2851518 : capi: Implement crabby_avifResultToString
a188b5d : Add ignore_color_and_alpha setting.
9bc428e : Android.bp: Expose as a static library
ffad64f : mediacodec: Fix some build failures
99eb6f2 : mediacodec: Do not prefer hardware if profile != 0
b221bc8 : mediacodec: Disallow hardware for alpha decoding
fdd8257 : mediacodec: Refactor get_codec_initializers
9669802 : mediacodec: Refactor mime type constant
a869859 : gainmap: Remove intermediary fraction structs from capi
53da690 : Update gain map C API following libavif changes.
99624a7 : mediacodec: Pass AV1 codec specific data
014884d : Wrap CodecConfiguration in an enum
dfd8304 : mediacodec: Use correct V plane offset
7443742 : mediacodec: Pass max input size to configure()
9533100 : libyuv: Allow Identity for Android MediaCodec
598e17d : decoder: Avoid unreachable code
486aa60 : mediacodec: Prefer gav1 for 12-bit
779cbcc : mediacodec: Validate input buffer size
b6d0ea3 : mediacodec: Ignore empty decoder names
4b8dea5 : image: Add support for NV12 and NV21
8c0a570 : mediacodec: Fix build failures
07b5d15 : mediacodec: Refactor mediaformat related code into a struct
871f9c6 : mediacodec: Use image-data to populate buffer offsets
cdc6270 : mediacodec: Create one instance per category
edf650b : raw: Handle >8bit images

+- Project: platform/external/rust/crates/inotify

7ffbbb6 : Import 'inotify' crate
024cc2a : Initial empty repository

+- Project: platform/external/rust/crates/inotify-sys

ce181fe : Import 'inotify-sys' crate
4296f48 : Initial empty repository

+- Project: platform/external/rust/crates/libsqlite3-sys

9e0b830 : Fix and re-run cargo_embargo.

+- Project: platform/external/rust/crates/libusb1-sys

724510b : Migrate 9 newly imported crates to monorepo
0bde46b : Add liblibusb1_sys_platform variant and remove Android.bp tests
a8d2261 : Import 'libusb1-sys' crate
2a6806a : Initial empty repository

+- Project: platform/external/rust/crates/maplit

c064409 : Import 'maplit' crate
5997666 : Initial empty repository

+- Project: platform/external/rust/crates/openssl

65030d7 : Add SDV crypto_rpc to use openssl
96fe1f1 : Add a patch that enables set_alpn_select_callback
ac3e7fc : Make libopenssl visible to libvvmtruststore
0756615 : Migrate 9 crates to monorepo.
d13a17a : Add SDV SDK back to openssl's visibility
9d26fb4 : Add libopenssl to the configinfrastructure apex
0680980 : Update visibility and build files w/ cargo_embargo
2571c6c : Add SDV RPC to openssl's visibility

+- Project: platform/external/rust/crates/ptr_meta

4dc2762 : Migrate 9 newly imported crates to monorepo
8cedf65 : ptr_meta: Add no_std variant
c1d44f2 : Third-Party Import of: https://crates.io/crates/ptr_meta/0.2.0
c83e70b : Initial empty repository

+- Project: platform/external/rust/crates/ptr_meta_derive

992e7ec : Migrate 9 newly imported crates to monorepo
3f70599 : ptr_meta_derive: Generate Android.bp file
df772c9 : ptr_meta_derive: Remove Cargo.lock file
066131f : Third-Party Import of: https://crates.io/crates/ptr_meta_derive
45c0efa : Initial empty repository

+- Project: platform/external/rust/crates/quiche

e670b09 : Migrate 8 crates to monorepo
0c42532 : Various fixes to make quiche healthy for migration.

+- Project: platform/external/rust/crates/rusb

856c094 : Migrate 9 newly imported crates to monorepo
56e2703 : Add librusb_platform variant
1dee692 : Import 'rusb' crate
6f150a1 : Initial empty repository

+- Project: platform/external/rust/crates/ucs2

65fcce4 : Migrate 9 newly imported crates to monorepo
5c85611 : ucs2: Add no_std variant
04a9483 : Third-Party Import of: https://crates.io/crates/ucs2
a12c567 : Initial empty repository

+- Project: platform/external/rust/crates/uefi

6e2762d : Migrate 9 newly imported crates to monorepo
15cac37 : Remove change to Cargo.toml.
68d13fb : uefi: Add no_std variant
e44f83f : Third-Party Import of: https://crates.io/crates/uefi
65b43c8 : Initial empty repository

+- Project: platform/external/rust/crates/uefi-macros

8be3a14 : Migrate 9 newly imported crates to monorepo
df105ac : uefi-macros: Generate Android.bp file
f8c07b9 : Third-Party Import of: https://crates.io/crates/uefi-macros
596a8f2 : Initial empty repository

+- Project: platform/external/rust/crates/uefi-raw

33d92c4 : Migrate 9 newly imported crates to monorepo
127bf6f : uefi-raw: Add no_std variant
84db6fa : Third-Party Import of: https://crates.io/crates/uefi-raw
cd387a8 : Initial empty repository

+- Project: platform/external/rust/crates/uguid

b983aa2 : Migrate 9 newly imported crates to monorepo
352e3b4 : Fix LICENSE file.
ceb468d : uguid: Add no_std variant
1398527 : Third-Party Import of: https://crates.io/crates/uguid
11c4d1d : Initial empty repository

+- Project: platform/external/rust/crates/v4l2r

dadf07a : ioctl: use the type field to determine FrmIvalTypes and FrmSizeTypes
5151e97 : Fix the build in arm64 target
1550755 : Fix the build in arm64 target
4d4b547 : Fix the build in arm64 target
c449827 : ANDROID: Re-add host support
cc8cc2b : Third-Party Import of: https://github.com/Gnurou/v4l2r
480568e : Initial empty repository

+- Project: platform/external/rust/crates/vhost-device-vsock

ba6184d : Revert "Fix build errors after vhost-user-backend update"
a6e8c5d : Fix build errors after vhost-user-backend update
47b7d28 : Update Android.bp by running cargo_embargo

+- Project: platform/external/rust/cros-libva

fcb3bc1 : Fix build in AOSP
25bafe1 : cros-libva: Add Android specific files
c0e6789 : Initial empty repository
8a89031 : Update README and Cargo.toml (#19)
90602eb : Update version to 0.0.9 (#18)
d9de934 : Support Android build
fda656e : generate bindings during build and update the version 0.0.8
0f37d0c : Version 0.0.7
cb972d5 : Update bitflags to 2.5
843cef6 : buffer: add AV1 encoder buffers types
bb97827 : buffer/av1: Fix clippy warning clippy::new_without_default
cc95da6 : buffer: h264: Enable use of slice arrays
d281b07 : surface: add querying decode error details
ccb2707 : picture: allow to create a picture from an unfinished one
70061e7 : buffer: add some EncMisc buffers types
8ae67a8 : Version 0.0.6
dfbcaca : buffer: av1: add support for slice parameter arrays
2c8d0be : README: move credits to their own section
905041a : av1: loop filter fields: fix initialization
b715be5 : Version 0.0.5
cb544f0 : buffer: add AV1 buffers types
8feecd1 : lib: add enc_h264_demo
a6948ca : buffer: introduce `VAEncCodedBufferType` and `CodedBuffer`
cda7cee : buffer: add HEVC encoder buffers types
3be2a86 : buffer: add VP8 encoder buffers types
eaa421f : buffer: add VP9 encoder buffers types
6a06ca0 : buffer: add H.264 encoder buffers types
def00f2 : usage_hint: remove USAGE_GENERIC bitflag
3d8201f : image: remove generic type
3b3e3fe : image: rename display_resolution parameter to visible_rect
5b19ac6 : image: allow to build Image from a Surface
a19aead : image: reintroduce Image writeback
448b804 : image: also create images from PictureNew
0a8154d : Version 0.0.4
d46d9e0 : lib: export bindings needed for PRIME import
fdb6fcb : surface: simplify external attributes handling
e3e8422 : bindings: fix warning
46cd893 : Add support for HEVC decoding
e54bbb8 : surface: improve external buffer attributes handling
69f61d8 : surface: make one call to `vaCreateSurfaces` per surface
f26e786 : surface: add support for importing user memory
0b3f5ea : surface: make SurfaceMemoryDescriptor suitable for external memory
5d00dd6 : surface: add surface attributes builders
50d47af : picture: make the surface type a generic parameter
b78a17f : picture: remove proxy Surface methods
7130ce3 : picture: return picture in new state when creating from another one
15dcc6d : surface: make generic against memory descriptor type
a42273f : picture: remove RefCell around the Surface
337bf98 : surface: add support for odd resolutions
499e739 : image: do not keep reference to the Picture
010dc6b : surface: add DRM PRIME export support
b7a14f3 : surface: add initial support for memory descriptors
f402ecc : doc: be more explicit about which VA functions are wrapped
c71ddf1 : status: replace VaStatus with simpler va_check function
a11954d : status: introduce VaError
59381d1 : status: rename Status to VaStatus
f1048c6 : surface: use usize for number of surfaces
695ad0b : lib: fix unnecessary parentheses warning
5e908e4 : context: use u32 for frame sizes
6e12201 : bindgen: update script and regenerate bindings
36d76c1 : image: create from Picture methods
dd69e15 : Version 0.0.3
609f1f0 : Cargo.toml: explicitly mention license
42adacc : Version 0.0.2
b70ea48 : display: do not require Rc when not needed
fa65d2c : bindgen.sh: update to make it work outside of ChromeOS
38e6c39 : Add github build workflow
370d945 : buffer: move codec types into sub-modules
210038b : picture: return self in case of sync failure
9de5252 : picture: do not require mutable reference to self when unneeded
910aff1 : picture: keep track of the destination surface dimensions
2737b36 : image: do not use mutable reference if unneeded.
0bc0940 : picture: add surface_id method
d75ef40 : build.rs: do not build if running on docs.rs
2c84276 : Cargo.toml: fix version for pkg-config
6b89fe7 : picture: stop exposing inner struct
324d600 : context: remove clone() from display()
e3864f7 : buffer_type: silence clippy warnings about too many arguments
6a91e11 : display: implement Default for DrmDeviceIterator
9a12048 : Cargo.toml: set version to 0.0.1
eec28bf : Update README.md
53fcd49 : Initial commit

+- Project: platform/external/ruy

de0564b : Add OWNERS file
c08ec52 : Change QueryCacheParams to return if there was an error and handle that during Initialize
587c2cf : IWYU: add string for std::string
9416e7c : Remove -Wc++14-compat
cd7b926 : cmake: support both Windows, WindowsStore
690c14c : Ensure that processors are non-null before trying to get their cores
6ffa93a : No public description
c04e5e5 : Also check for null cpuinfo_uarch in CurrentCpuIsX1().
32dd2b9 : handle null cpuinfo_get_arch
da1f787 : Disable AVX code paths on iOS simulator.
caa2443 : Update platform.h with correct Apple detection macro.
8997611 : Update platform.h with correct Apple detection macro.
72d107f : Update platform.h with correct Apple detection macro.
c19139f : Re-enable RUY_PLATFORM_X86_ENHANCEMENTS on Apple.
363f252 : Internal change
21a85fe : Merge :cpuinfo and :cpuinfo_impl into one build unit to fix hidden-symbol link failures when tests link statically with -fvisibility=hidden
3168a5c : Internal-only change to BUILD files
ce5a462 : Free all memory before reallocating.
3286a34 : Create API to determine how many threads to use
97ebb72 : Include GNUInstallDirs module in top-level CMake file
72155b3 : Redo CMakeLists change from https://github.com/google/ruy/pull/313 accidentally reverted by copybara export in https://github.com/google/ruy/commit/fd42b25c4512913fd47a86826aecec7c9c3ee2b4
fd42b25 : Skip caches that have processor_count==0.
841ea41 : Update cpuinfo (#313)
3288567 : Fix assembler deprecated instruction warnings (as errors) on some Aarch32 toolchains with -mcpu=cortex-a32.
368db71 : Define namespace prefixed aliases for targets in the CMake build
cd19e0e : Support install rules in the CMake build
efd639c : Update CMake build
d136790 : Update cpuinfo (#308)
a09683b : Refactor Thread internals for clarity and efficiency.
915898e : Simplification of ThreadPool code - merge asserts into main logic
7ef39c5 : Fix an integer overflow, and take some extra defensive steps.
cf14b2b : Update GetTentativeThreadCount to use int64 types
2d950b3 : Accommodate Clang's CFI sanitizer
abaaa6a : Ruy:Fix 16bit-packing msan error.
6c292a6 : Ruy:Add new packing for 16bit ColMajor for Avx512.
8c3fd3f : Modify use of Eigen::array to use syntax compatible with std::array in c++17.
2c5f035 : Ruy: Support 8x16 avx512/avx2_fma kernel for single_column.
409296d : Ruy: Support 8x16 avx512 kernel
f805132 : Ruy: Support 8x16 avx2_fma kernel
02d2088 : fix inheritance of kernels on x86. When an AVX2 kernel is not available, fall back on AVX, not StandardCpp
5f40d62 : test i8xi16 cases
670e69d : Disable the internal test-only variants of the StandardCpp path in benchmarks
c31af31 : Add missing volatile qualifier in Pack8bitRowMajorForNeonDotprod
34f1aa7 : Fix error when compiling ruy_test_overflow_dst_zero_point with GCC

+- Project: platform/external/scrypt

8b6d17f : Remove unused -Wno-implicit-function-declaration.

+- Project: platform/external/scudo

dc9f583f32be : [scudo] Use internal list to manage the LRU cache (#117946)
506f8e5fc10c : [scudo] Double frees result in chunk state error (#110345)
e187d677bbbd : Reapply "[scudo] Apply the min release threshold to the group" (#112252) (#112266)
10707fad9fba : Revert "[scudo] Apply the min release threshold to the group" (#112252)
5e8fd2882b1f : [scudo] Apply the min release threshold to the group (#112014)
04edb9c2c9e6 : Add dirgroup for trusty genrule
723f5fbea2f4 : [scudo] Fix isOwned on MTE devices. (#111060)
2349fc2c78fc : [scudo] Fix the loading of a signed value to an unsigned storage (#111039)
64502c66dbeb : [scudo] Reduce unsuccessful attempts of page releasing (#110583)
a843933727a5 : Revert "[scudo] Fix isOwned on MTE devices. (#110717)"
4d516ea200b0 : [scudo] Fix isOwned on MTE devices. (#110717)
425330952901 : [scudo] Fix wording for unsupported test reason. (#110716)

+- Project: platform/external/sdv/vsomeip

63c19830 : Disable -Wenum-constexpr-conversion in third_party/boost.

+- Project: platform/external/selinux

730212a1 : Expand documentation on selinux_android_tee_service_context_handle
e95ad2a0 : Add selinux_android_tee_service_context_handle
b9f7d07c : Export getprevcon
9b4eff92 : libsemanage/direct_api: INTEGER_OVERFLOW read_len = read()
f18f9e5e : libselinux/matchpathcon: RESOURCE_LEAK: Variable "con"
33ac7c96 : libselinux/setexecfilecon: Remove useless rc check
b33da68f : libsepol: Support nlmsg xperms in assertions
cd8302f0 : libsepol: Initialize "strs" on declaration
00fb52ce : libsepol/cil/cil_post: Initialize tmp on declaration
575d1cfa : libsepol/mls: Do not destroy context on memory error
0dac9813 : libsepol/cil: Initialize avtab_datum on declaration
e7bbd67b : checkpolicy/fuzz: fix setjmp condition
cecbff93 : selinux: set missing errno in failure branch
c76b2738 : libsemanage: check for rewind(3) failure
48f66b6a : selinux: free memory in error branch
6376f90d : libselinux: avoid errno modification by fclose(3)
e38815d7 : libsemanage: fix swig bindings for 4.3.0
8e0e718b : libselinux: fix swig bindings for 4.3.0
9b83fe3d : libselinux: formally deprecate security_compute_user()
98dbc290 : libselinux: use cached handle in selinux_android_file_context_handle
10ffb8b7 : Updates visibility of libselinux_bindgen.
b4117420 : libselinux: rename hashtab functions
9c7c6e15 : libsepol: Add policy capability netlink_xperm
ba7945a2 : libsepol: Support nlmsg extended permissions
5421320d : libsepol: Rename ioctl xperms structures and functions
0190a658 : libsepol/cil: Allow dotted names in aliasactual rules
e79a14c7 : policygen: respect CIL option when generating comments
b6910aa6 : sepolgen: initialize gen_cil
6b5626fd : libsepol/cil: Check that sym_index is within bounds
463584cb : libselinux: deprecate security_disable(3)
1f080ffd : libsepol/sepol_compute_sid: Do not destroy uninitialized context
017d7d53 : libselinux: Fix integer comparison issues when compiling for 32-bit
84a33fb9 : checkpolicy: Check the right bits of an ibpkeycon rule subnet prefix
d96f27bf : libsemanage: Preserve file context and ownership in policy store
7974aea5 : libselinux/restorecon: Include <selinux/label.h>
f398662e : libselinux: set free'd data to NULL

+- Project: platform/external/setfilters

136b3cf : Update test for Truth8 deprecation.

+- Project: platform/external/setupcompat

0a43918 : Import updated Android SetupCompat Library 693609104
f29b192 : Add healthfitness to the list of apex_available.
201d385 : Import updated Android SetupCompat Library 687113300
fc90bfe : Import updated Android SetupCompat Library 682227189

+- Project: platform/external/setupdesign

9af24bb : Import updated Android Setupdesign Library 695633919
481d0d1 : Add healthfitness to the list of apex_available.
b2551f0 : Update floating back button to use Button instead of MaterialButton
5d43b95 : Import updated Android Setupdesign Library 687113300
3be301f : Import updated Android Setupdesign Library 681794050

+- Project: platform/external/skia

3c8d95efc51 : Revert "Reland "Reland "SkCodec: Remove the sysprop guard for SkCrabbyAvifCodec"""
afa77852dcd : Roll ANGLE from 76691d278280 to f5196a27b9b6 (5 revisions)
0971c208bb9 : Roll vulkan-deps from a28df2bbdf42 to 61b3802219e0 (1 revision)
477c83cf10f : Roll Dawn from 324cda6fe640 to 22a8762fea90 (3 revisions)
ac432c09423 : Roll Skia Infra from 48b61740d60f to ca6066d7097c (7 revisions)
c80d6609b95 : Roll SwiftShader from 4074d9674b3e to d91e98d1aa3f (1 revision)
96b018c1088 : Roll vulkan-deps from aa3fe5f2f1c8 to a28df2bbdf42 (1 revision)
295a9e2ff8e : Roll vulkan-deps from 4e376146509a to aa3fe5f2f1c8 (1 revision)
76c51c6e13d : Manual roll Dawn from 92c3ee90cfff to 324cda6fe640 (10 revisions)
c1885d088c1 : Roll ANGLE from e42047f0bbac to 76691d278280 (4 revisions)
8410bc8a027 : Roll vulkan-deps from 94052ee8a2fb to 4e376146509a (4 revisions)
d1e261ca7f6 : [Graphite]: Use DawnTexelCopyBufferRowAlignment
b37f1430f87 : [rust png] Fix which `.gni` list covers `UtilsForFFI.h`.
c9e9ce277b8 : [dawn][headers] Update uses to new APIs and enums in buffer/queue.
56388fd5417 : Roll vulkan-deps from 11d17e6bd029 to 94052ee8a2fb (2 revisions)
4c3200078b1 : Reland "Initialize decompress struct with libjpeg-turbo's API"
0d94e966268 : Support PNG gainmaps
eecb5a5c0a3 : Revert "Initialize decompress struct with libjpeg-turbo's API"
144fe2a4fba : Initialize decompress struct with libjpeg-turbo's API
90564e85739 : Manual roll Dawn from 1e61e82b1b7a to 92c3ee90cfff (9 revisions)
bb7aebebadd : Reapply "[rust png] Extract `SkPngEncoderBase::onEncodeRows`."
3296764ecb8 : Revert "Remove JSON output from public API of SkSLDebugTrace"
b5d22cf4f59 : Revert "Move SkJSON.h to //modules/jsonreader"
ccfe075e1eb : Revert "[rust png] Extract `SkPngEncoderBase::onEncodeRows`."
d616d927bb7 : Roll Dawn from 38268c8df30d to 1e61e82b1b7a (6 revisions)
3fa455f61b6 : Roll Skia Infra from 73c103d87739 to 48b61740d60f (9 revisions)
51548228283 : Roll vulkan-deps from 0846b50db6b1 to 11d17e6bd029 (6 revisions)
345450d1add : Move SkJSON.h to //modules/jsonreader
47f59c4368e : Remove JSON output from public API of SkSLDebugTrace
adb3da0656b : Manual roll Dawn from 3ee07d9e3ba7 to 38268c8df30d (12 revisions)
57c443ffa6d : [graphite] Add a test for unused target surface on replay
526e7f7bab8 : [graphite] Allow providing unused target surface on replay
3239bd38afb : Ignore all deferred canvas tests in Protected configs
2944c92a28c : [rust png] Integrate Rust `png` crate into `SkPngRustEncoderImpl`.
a56089f7e31 : [rust png] Extract `SkPngEncoderBase::onEncodeRows`.
4f248194042 : Show timer queries for Graphite/Dawn in Viewer stats
ec0ab7d6b41 : [rust png] Extract `SkPngEncoderBase::getTargetInfo`.
c36e847174e : [rust png] Test that can detect encoding RGB vs RGBA mismatch.
07cd0806966 : [ganesh] Check index count for overflow for good measure
e94e63e3d20 : Address some numerical instability in SkRRect::transform
7aaa2ad0f9a : [graphite] Support mipmapped deferred canvases
10670fef5b2 : Add graphite-specific version of tools/DisplayParams
25b55ea5410 : Roll ANGLE from 0bb109aa3311 to e42047f0bbac (16 revisions)
21e839015bc : Roll vulkan-deps from ef19ac786024 to 0846b50db6b1 (6 revisions)
ffc90e21ddb : Roll Skia Infra from 2d420a4dcfc4 to 73c103d87739 (7 revisions)
dfe489f164c : Roll Dawn from e0d7445de8cd to 3ee07d9e3ba7 (18 revisions)
9ea21172f46 : Slightly improve readability of SkSwizzler_opts
a883bc795f6 : Help type deduction for old compilers
d60cf8abe2b : Roll ANGLE from 7adbb3e81110 to 0bb109aa3311 (14 revisions)
f9a9483d431 : Catch some excessive loop cases in GrTriangulator.
d819d03c831 : Roll skottie-base from 04ad645c0403 to 32e4afed6d80
11503774316 : [svg] Conditional debugging
c9647f13cde : Roll Skia Infra from 667c4e94b4a5 to 2d420a4dcfc4 (3 revisions)
c21438c2277 : Roll Dawn from ecd0b68434f4 to e0d7445de8cd (17 revisions)
4ad05fb042c : Roll vulkan-deps from 6fe136aa8572 to ef19ac786024 (4 revisions)
e02d856f86f : Reland "Minor cleanups with AutoSTArray and AutoSTMalloc"
eb9fc76e302 : SkFontMgr_Android refactoring
1f2b8187bd6 : Roll ANGLE from f7cac0bb8d2f to 7adbb3e81110 (7 revisions)
8dc8bdc364f : Roll vulkan-deps from ff00298c3058 to 6fe136aa8572 (9 revisions)
d6d1feba94d : Roll Skia Infra from 963fb6511438 to 667c4e94b4a5 (4 revisions)
fee6d35e981 : Roll Dawn from dbff5894310b to ecd0b68434f4 (18 revisions)
3db026d6280 : Roll skottie-base from bd94aa86becd to 04ad645c0403
cd49e790699 : Roll vulkan-deps from e0222e69ea90 to ff00298c3058 (1 revision)
ac934b9e9df : [rust png] Minimal build and test scaffolding for `SkPngRustEncoder`.
2c4ce1d953b : Manual roll Dawn from 5a657da0d714 to dbff5894310b (35 revisions)
cfb25379545 : Roll skcms from d4f01d560853 to b2e692629c1f (1 revision)
fea6538c891 : [graphite] Missed one WebGPU adapter change
944fa88da52 : Revert "Minor cleanups with AutoSTArray and AutoSTMalloc"
0ba64a48492 : [graphite] Fix issue with combining clips.
d7751d3d6ff : Minor cleanups with AutoSTArray and AutoSTMalloc
cfd8dd17aa1 : Update SkVideoEncoder for new ffmpeg
f264070a6a1 : [graphite] Update GraphiteDawnTestContext to new Adapter interface
e83f11f2875 : Add Rust-based PNG decoder tests to CQ
86da7254048 : Roll ANGLE from a2d76f039918 to f7cac0bb8d2f (6 revisions)
d0510ab0535 : Roll Skia Infra from 846022101a6f to 963fb6511438 (2 revisions)
4ba3819870d : Roll vulkan-deps from d897b7ac1f75 to e0222e69ea90 (1 revision)
904fcf74a12 : Roll vulkan-deps from ef4dc615f82d to d897b7ac1f75 (1 revision)
554b798423c : Roll ANGLE from 2dc072ec71cc to a2d76f039918 (10 revisions)
d83404361bd : Roll vulkan-deps from dad165201f86 to ef4dc615f82d (3 revisions)
565851fe9ee : Roll Skia Infra from b24f37dbbe4a to 846022101a6f (2 revisions)
fc70edaf85d : Roll vulkan-deps from b7e10ed68dd0 to dad165201f86 (3 revisions)
f09e3b3f5d2 : Roll ANGLE from 2e25ea1e727e to 2dc072ec71cc (8 revisions)
891f94a05e4 : Roll Skia Infra from 594fe3cae8c3 to b24f37dbbe4a (5 revisions)
8bc8ca3ec5c : Roll Dawn from 01354c07cdb2 to 5a657da0d714 (18 revisions)
cc3089467d7 : Roll vulkan-deps from ecc3cdc4d28b to b7e10ed68dd0 (10 revisions)
5e0cf5fed08 : Reland "Reland "SkCodec: Remove the sysprop guard for SkCrabbyAvifCodec""
5e2f293f2f3 : Remove redundant SkDoubleIsNaN function
0a83b7badd1 : Fix SIMD compilation error for GCC 8.x and below
b8005bb2078 : Make SK_NO_SANITIZE more compatible with GCC
8bece5de86e : Roll skcms from 2c7a7bff0512 to d4f01d560853 (1 revision)
d4b7690fca9 : Remove sk_path_analyze_verbs() from include/private
b3604d02e74 : Squelch warning with microhttplib and newer GCC versions
44f4ed49a7f : Roll ANGLE from cecefe53430a to 2e25ea1e727e (8 revisions)
7abea85f4b7 : Roll vulkan-deps from 08d21277b1a1 to ecc3cdc4d28b (1 revision)
667639e7d99 : Roll Skia Infra from dbdf344026c9 to 594fe3cae8c3 (7 revisions)
9941160ba63 : Roll Dawn from 27f9f8696a43 to 01354c07cdb2 (21 revisions)
51bca8b04c7 : Enable gpu timer query for Graphite/Dawn
0e38965e9cd : Roll skottie-base from 52028e548417 to bd94aa86becd
463cab9d81c : Roll shaders-base from 7bdb025e3cbb to 9481c3eb25d2
532837668c2 : Roll jsfiddle-base from 99d4627f212e to fb2d48f7af98
612f1a81528 : Roll debugger-app-base from 9e05eb5b9edb to e9a6ea69ff3d
6a2fecc5748 : Roll vulkan-deps from 915d114daeb2 to 08d21277b1a1 (3 revisions)
6377252344a : [rust png] Delete `SkPngRustDecoder.h` from the old location.
2ca04694645 : [rust png] Revert adding `png_codec_base_...` to `skia_codec_rust_png`.
dce3f098c7a : Roll ANGLE from 10c2dc7a1b4b to cecefe53430a (29 revisions)
dd70c8e1c38 : [rust png] Extract a separate `src/codec:png_codec_base` Bazel target.
d13fff55f67 : Inline GpuToolUtils MakeTextureImage
8e9582376c0 : Extract tools/graphite/GraphiteToolUtils as its own files
1499d0705d6 : Reland "Change window::DisplayParams to be immutable and passed by pointer"
f149f852c70 : [rust png] Update `cxx` from 1.0.128 to 1.0.131.
a276978ba7c : Roll vulkan-deps from 64698c1a35b2 to 915d114daeb2 (11 revisions)
d7a267d88fd : Disable strict aliasing for PartitionAlloc in Skia
8d9d892657a : Roll Skia Infra from f433991c6d8e to dbdf344026c9 (10 revisions)
b697dd1b03b : Roll Dawn from d9e006bae4a7 to 27f9f8696a43 (12 revisions)
c1c8ff84997 : Revert "Change window::DisplayParams to be immutable and passed by pointer"
cf29335f4cb : [rust png] Build / sources organization: Separate `decoder/` directory.
070384a3c4d : Roll vulkan-deps from 73e40f43c062 to 64698c1a35b2 (1 revision)
f37481a2c00 : Change window::DisplayParams to be immutable and passed by pointer
849d893991f : Use SkSafe32 functions when adding/subtracting deltas.
bd93fdd2d30 : Save 16 bytes on GrContextOptions allocations* by reordering fields
f993d92cee6 : Roll skottie-base from e4021a6fc9aa to 52028e548417
3f04cefd7f4 : Roll debugger-app-base from 931df19ec335 to 9e05eb5b9edb
2fef2929d15 : Roll shaders-base from 99b73d05cdae to 7bdb025e3cbb
55d1b998d66 : Roll jsfiddle-base from 034839b9814b to 99d4627f212e
9ce6b602207 : Roll skottie-base from c0ad379b6c58 to e4021a6fc9aa
2135f398189 : Fix DefaultImageProvider::Make() leak
594eb8a622b : Make SkGlyph and GrDriverBugWorkarounds trivially destructible
e7b277254ee : [graphite] Fix texture matrix for asyncReadPixelsYUV420
12c8bd6ac1d : Update build-tools (clang-format)
7588789d8fc : Roll Skia Infra from 808d8a5c3b87 to f433991c6d8e (5 revisions)
384388aa49a : Roll Dawn from eef82f6f51a2 to d9e006bae4a7 (11 revisions)
01a3a55f1d5 : Revert "Reland "SkCodec: Remove the sysprop guard for SkCrabbyAvifCodec""
ded8ab47ee6 : Roll vulkan-deps from 9071e1ad430c to 73e40f43c062 (1 revision)
6a3f77189dd : Roll vulkan-deps from 6c717e914923 to 9071e1ad430c (4 revisions)
c3d9596a93f : [graphite] Allow clients to configure active logging level
3cd9a34c229 : Set Graphite's logging level to include warnings
14a02299d6a : Roll skottie-base from a049ff55ff14 to c0ad379b6c58
e7caf38140c : Reland "Making fontStyle and fixedPitch fields "virtual""
715885a1968 : Roll ANGLE from 1f0ac74a7a93 to 10c2dc7a1b4b (14 revisions)
0cf740b3984 : Roll vulkan-deps from fc122129fa28 to 6c717e914923 (5 revisions)
6cdab392a11 : Roll Skia Infra from 2b2d3ae5900c to 808d8a5c3b87 (6 revisions)
af63f6296b4 : Roll Dawn from d2ad5a36f4e6 to eef82f6f51a2 (16 revisions)
e4528e29094 : Revert "Making fontStyle and fixedPitch fields "virtual""
d1e13902b47 : Making fontStyle and fixedPitch fields "virtual"
0586d901350 : Manual roll vulkan-deps from 94069332c202 to fc122129fa28 (8 revisions)
608f25288c8 : [rust png] Integrate `cICP` support into `SkPngRustCodec`.
b33556ca0db : [graphite] Remove deprecated Precompile API call
aa4e5a295b7 : [graphite] Add more Test job suppressions for Dawn thread race
e2487d590a1 : Roll ANGLE from 74f74b63df26 to 1f0ac74a7a93 (13 revisions)
b50282b25be : Roll Skia Infra from 7fb17334e756 to 2b2d3ae5900c (7 revisions)
9e1943a6cfa : Roll Dawn from 839eadc23139 to d2ad5a36f4e6 (15 revisions)
6f4ee160c31 : Roll vulkan-deps from 3c7156644de7 to 94069332c202 (11 revisions)
32c01e74258 : Reland "SkCodec: Remove the sysprop guard for SkCrabbyAvifCodec"
a30b15cdaa2 : Fix Vello build rules
eea8b567b3a : [graphite] Add a more robust threaded Compile/Precompile test
cfffcca23e1 : Fix verylarge_picture on Adreno Vulkan.
05638eeeef1 : Reland "add triangulated gradient effect"
75a1d2b5701 : Uses newer APIs for device.PopErrorScope in DawnErrorChecker.
408436ad750 : Clamp alpha in SkScan_AAAPath to int32_max to avoid integer overflow.
002a3782dfc : Roll ANGLE from 987cc0de1d4c to 74f74b63df26 (11 revisions)
bd787914e87 : Roll vulkan-deps from 867065ecbb6a to 3c7156644de7 (4 revisions)
37b9248c306 : Roll Skia Infra from b51256461a37 to 7fb17334e756 (10 revisions)
700e685861c : Roll Dawn from 6898ea1d553e to 839eadc23139 (13 revisions)
2614590b4f3 : Show timer queries for Ganesh/GL in Viewer stats
d776efdac21 : Fix invalid cross-device link error in deps_parser
0df35d0376b : Roll vulkan-deps from 824ef0f736ed to 867065ecbb6a (2 revisions)
f1947825e8a : Reland "Remove TODOs in GNI exporter tool"
a694b9e85a4 : [graphite] Fix up Context's recorder tracking thread safety
e5336f4cfdf : SkCrabbyAvifCodec: Set mediacodec color format
a789d037192 : SkCrabbyAvifCodec: Make a copy of the image before scaling
95c14324b97 : Refactoring proxy for FontConfig out of FontMgr
673706a9983 : [ganesh] Fix artifacts from looping colorizer.
78ef6b7a574 : Remove promotions of gradient eval from half to float.
aa29a513e33 : Ganesh supports getting GPU time spent on a flush
cea83ccecbe : Remove extra spaces from MeshGradientSlide
ff61bb3ee2e : Minor cleanups to SkBitmapDevice
ab25edec054 : Roll ANGLE from 15492c9bc44d to 987cc0de1d4c (13 revisions)
d7f8cccfb28 : Revert "add triangulated gradient effect"
0b74a1bb1b5 : Revert "Remove TODOs in GNI exporter tool"
8a1a8450950 : Roll SwiftShader from 4d3a7b64279f to 4074d9674b3e (1 revision)
4583a4cde64 : Roll Skia Infra from 523dc313e7a2 to b51256461a37 (5 revisions)
be4fda81503 : [rust png] Fix -Wprivate-header warning for raster pipeline headers.
998115f2874 : Roll Dawn from 3fc6432bcc2f to 6898ea1d553e (11 revisions)
17601e471ac : Roll vulkan-deps from 59ce475cae66 to 824ef0f736ed (4 revisions)
8d652f14291 : Remove TODOs in GNI exporter tool
492e8347d7a : add triangulated gradient effect
6af378b8133 : SkCrabbyAvifCodec: Add RGB565 support
b79e7122328 : SkCrabbyAvifCodec: Compute fGainmapMathColorSpace
0d24bd3268e : Revert "[infra] Remove P400 jobs from CQ"
0b74d5c3eb4 : [infra] Remove P400 jobs from CQ
8a10c117ebf : [graphite] Add PrecompileContext object
40444ac82d1 : Add buganizer ID to DIR_METADATA
d67e23fae89 : [viewer] QOL improvements to perspective and zoom
7594233ff91 : [auth-service] Update CRIA link to project-skia-committers
78fd0dfa6cf : Roll vulkan-deps from 69a1fde4ef82 to 59ce475cae66 (1 revision)
5968000526f : Roll ANGLE from 33dc1606ee3b to 15492c9bc44d (1 revision)
1cdfa100e98 : Roll Dawn from 43b18d23f1be to 3fc6432bcc2f (1 revision)
3b2ee139357 : Roll Skia Infra from 8b12c3aa2ef9 to 523dc313e7a2 (9 revisions)
452208ce96a : Roll vulkan-deps from a662c37da32d to 69a1fde4ef82 (1 revision)
4708534db2e : Roll vulkan-deps from 51c3c8026c91 to a662c37da32d (3 revisions)
1086e39c04c : Manual roll ANGLE from e557b60e8ae5 to 33dc1606ee3b (20 revisions)
b88c7b03a83 : Manual roll Dawn from 3f9a21c4d20e to 43b18d23f1be (16 revisions)
f8ec9734473 : Roll vulkan-deps from 834044be8e1f to 51c3c8026c91 (6 revisions)
6b0f264bde3 : [rust png] Add new API: `SkColorSpace::MakeCICP`.
1df1c662784 : Subset SkImage_Picture objects instead of rendering to Bitmap image.
fb6cd3abd59 : Roll vulkan-deps from 8ec693ec8ed6 to 834044be8e1f (3 revisions)
8eb57ab65ef : Remove duplicated reference to SkSLSampleUsage
ea325525c6f : Remove duplicate copy of codegen headers in .gni files
2e189b51999 : [infra] Support running Android tests on hosts other than RPI
1ef3b910e06 : Roll ANGLE from 924ee1ba78dc to e557b60e8ae5 (9 revisions)
1b24553bc4d : Roll Dawn from 205e5feeda01 to 3f9a21c4d20e (11 revisions)
b234289edd9 : Roll Skia Infra from aa41daefb66c to 8b12c3aa2ef9 (9 revisions)
54a4f1cd628 : Roll SwiftShader from d5c428477411 to 4d3a7b64279f (1 revision)
b730eb34085 : Roll vulkan-deps from 1096ec1aabbb to 8ec693ec8ed6 (5 revisions)
e5fda8472b2 : Adding {empty&unused} SkTypeface_proxy structures
8ff65da4b8b : Add SkTypeface::getResourceName
ddfd876ebca : Handle non-finite scale+translate matrices in invert()
8946ea477b4 : [infra] Remove Pixel7 jobs
9ab719cdad4 : [skif] Remove SK_USE_LEGACY_BLUR_GRAPHITE guarded code
3222456e63d : [rust png] Explicitly handle missing `PLTE` chunk.
fb32fee7f47 : [infra] Fixing some OverdueJobSpec alerts
6cc80cd310c : [skif] Avoid calculating inverse matrix in blur()
16178bf63de : Roll vulkan-deps from 901a9e8040e0 to 1096ec1aabbb (1 revision)
a076435073f : Roll ANGLE from b2d84a6649f7 to 924ee1ba78dc (7 revisions)
70553189e4e : Roll Skia Infra from 9fb4d8c44edc to aa41daefb66c (11 revisions)
fa52f2c1ddb : Roll Dawn from bac50766d19e to 205e5feeda01 (10 revisions)
8c1b592b68a : Roll vulkan-deps from 4acefc03ee2a to 901a9e8040e0 (5 revisions)
a07fce20595 : [graphite] Refactor scissor transformation into a helper method
ddbd8d1cb8d : [graphite] Re-enable SSBOs for D3D11
3750b8939c7 : Roll vulkan-deps from d2c4e780b012 to 4acefc03ee2a (4 revisions)
1e585933601 : Roll ANGLE from 7fea539cc99b to b2d84a6649f7 (30 revisions)
ca10bc19c67 : Roll Skia Infra from 4448a7219f6d to 9fb4d8c44edc (9 revisions)
b472797bbc8 : Roll Dawn from 04483c84503c to bac50766d19e (9 revisions)
02dd72c2fbc : [rust png] Don't unnecessarily replicate decoding `options`.
b3401cef50b : Roll vulkan-deps from aa1dd6b24b8b to d2c4e780b012 (6 revisions)
810660a9439 : [rust png] Don't unnecessarily replicate `dstInfo` data.
f197e55196e : Add notes about premultiplied color to docs
7d5c206fc87 : [infra] Remove Linux NUC9i7QN tasks
d7c30a79082 : Remove log spam about Perfetto being unavailable in Android host builds
ac75382cb97 : Revert "CoreText SkTypeface palette support"
77b63219ef0 : Merge 4 release notes into RELEASE_NOTES.md
85221e95357 : Update Skia milestone to 133
8fae5b7436d : Roll vulkan-deps from a2dfb2276ea5 to aa1dd6b24b8b (8 revisions)
aa099ff91e2 : Roll Skia Infra from cf848e50c100 to 4448a7219f6d (7 revisions)
c17d82fdca8 : Roll SwiftShader from 76855a9baecc to d5c428477411 (1 revision)
1a84f12c875 : Roll Dawn from 2a86250e561c to 04483c84503c (5 revisions)
75230dbc93e : [rust png] New API: `SkCodec::hasHighBitDepthEncodedData`.
11046fd1039 : Revert "Temporarily remove golo machines from CQ"
2a288d0514e : [rust png] Fix memory safety issue by copying `iCCP` chunk data.
d02c2ddb374 : Roll skottie-base from bb7983730f24 to a049ff55ff14
4117692d96e : Roll shaders-base from e6fc8c4f0ad2 to 99b73d05cdae
55373788ff1 : Use tiled draws to draw bitmaps in tall_streched_bitmaps
f4c146f4514 : Roll jsfiddle-base from c6ebdc72f397 to 034839b9814b
61175270ad7 : Android typeface: small simplification
0c283d0d30e : Roll debugger-app-base from a168d86bd1fd to 931df19ec335
b7f46f50eae : Don't assume command line flags in Android Viewer
af6a4f9a85e : Roll skottie-base from a049ff55ff14 to bb7983730f24
0b55dea384c : Roll shaders-base from 99b73d05cdae to e6fc8c4f0ad2
c690a0e493f : Roll jsfiddle-base from 034839b9814b to c6ebdc72f397
0dc450f557d : Roll debugger-app-base from 931df19ec335 to a168d86bd1fd
af7211b3daf : Move SkPathEffect::Dash* out of public API
261316c1048 : Roll vulkan-deps from 695916e164c9 to a2dfb2276ea5 (1 revision)
676b3b1a41b : Roll Dawn from 13a9bd9ccb6d to 2a86250e561c (1 revision)
7cca68c56bf : Roll Skia Infra from 6082ccfc822c to cf848e50c100 (18 revisions)
3333292a62c : Manual roll ANGLE from 2a61126bb36c to 7fea539cc99b (6 revisions)
7755e6ab0bc : Manual roll vulkan-deps from 1e8c53194ceb to 695916e164c9 (3 revisions)
4693f7a374f : Manual roll Dawn from b2edbc3e54d0 to 13a9bd9ccb6d (8 revisions)
be1c1d62482 : Roll vulkan-deps from 12e843b4aad1 to 1e8c53194ceb (1 revision)
2905e15411b : [skif] Check for failed blur algorithm output
0f46bb5a488 : [skif] Check for empty srcRect in float coords
6038a613bf3 : [skif] Handle non-finite net downscale factors
778b21720a6 : [skif] Reject SkImageFilters::MatrixTransform that aren't invertible
b4c65e4d297 : [skif] Clamp magnifier zoom factor based on lens bounds dimensions
05da06918ba : [pdf] Implement sweep gradient tilemodes
42769e83e64 : Roll vulkan-deps from 7363ad68f4c9 to 12e843b4aad1 (3 revisions)
0f623183f7d : Roll ANGLE from 026ba8481087 to 2a61126bb36c (8 revisions)
d0ee80612f8 : Roll Skia Infra from 353c8b4196e4 to 6082ccfc822c (11 revisions)
e7dc8886365 : Roll Dawn from b4b1d02e6e41 to b2edbc3e54d0 (13 revisions)
7ae36ecfe93 : Revert "SkCodec: Remove the sysprop guard for SkCrabbyAvifCodec"
a1767e205c6 : Guard ganesh dependency from debugger
4d21410c44e : Basic support for drawPoints point mode in SkSVGDevice using zero-length lines in a single path element.
9108f2c0799 : Roll vulkan-deps from 49a7b74158a6 to 7363ad68f4c9 (2 revisions)
2ed9606702f : [graphite] Add scratch label when returning Resources to ResourceCache
e9a7546ef3d : Roll Skia Infra from 0262f84fa0d7 to 353c8b4196e4 (15 revisions)
de9c8db66b8 : Roll ANGLE from 644b91f7e096 to 026ba8481087 (4 revisions)
6a4b8971d0f : Roll Dawn from 500e41cc1e25 to b4b1d02e6e41 (12 revisions)
6f16a8c83bf : Roll vulkan-deps from e154ea2ed72d to 49a7b74158a6 (10 revisions)
8444ee0c8a7 : Manual roll Dawn from a8eb33756b22 to 500e41cc1e25 (9 revisions)
e8c1ca0a5ea : [graphite] Streamline setting of PipelineInfo flags for histograms
cec45b121a1 : [graphite] Remove legacy arc support.
db561d9b2fa : [graphite] Add UMA histogram for various Pipeline creation times
cf33c4e96e8 : Take into account the backdrop filter when calculating the bounds of a SaveLayer operation.
8777224d22b : [viewer] More path interpolation samples
35fafeb7055 : SkCrabbyAvif: Remove explicit brand sniffing in IsAvif
c7755ef4b59 : SkCodec: Remove the sysprop guard for SkCrabbyAvifCodec
0430522cfe7 : [graphite] Rename all things ForceSynchronous to ForPrecompile
afaed892368 : Ignore more tests in Protected configs
6878a234bb8 : [graphite] Add UMA enum histogram for Pipeline creation races
d6a5fd1b073 : [viewer] Add some hardcoded vertices for mesh gradient sample
c4e3592ca1e : [viewer] Add AE mesh gradient sample
3ae0f2cbe1a : [infra] Add 'patch' asset
3b46f4fa919 : Adding few missing #ifdefs
11cf7ff838f : [graphite] Support providing clip on replay
27950cc64fb : Roll SwiftShader from 1495532f997f to 76855a9baecc (1 revision)
642281ac395 : Roll Skia Infra from 064137920a75 to 0262f84fa0d7 (6 revisions)
e69bb98d6ed : Proof of concept for general path interpolation
03e49e3e582 : Temporarily remove golo machines from CQ
5fd28452e96 : Roll vulkan-deps from 97c315f09386 to e154ea2ed72d (6 revisions)
e82c8a2b4a8 : Roll ANGLE from 84a24a1ea6a6 to 644b91f7e096 (7 revisions)
4cf2471e67d : Roll Dawn from 08b478b3f3a3 to a8eb33756b22 (6 revisions)
b4df8dda7ff : [rust png] Fix when `setAlphaAndRequiredFrame` is called.
82175b411c8 : [Graphite] Fix layout transition for DstCopy in Vulkan.
88641283fb9 : Update GPU StrikeCache to track Glyph memory as well.
b3be9cb59fe : Manual roll Dawn from 46043c56bc31 to 08b478b3f3a3 (9 revisions)
ab74be6787d : Add size tracking to GPU StrikeCache.
ba96d932e60 : Roll vulkan-deps from f4673df701a0 to 97c315f09386 (8 revisions)
a474aa05a39 : Roll Dawn from 2c4f822f9e78 to 46043c56bc31 (12 revisions)
f7536089fe0 : Roll ANGLE from 7ce8b268f584 to 84a24a1ea6a6 (6 revisions)
7e5afcea123 : Roll vulkan-deps from ae5ea7b95745 to f4673df701a0 (7 revisions)
7989f782dbf : Manual roll ANGLE from 2f8ad9c1042c to 7ce8b268f584 (5 revisions)
b2bb3af36da : Extend the lifetime of IntrinsicConstantsManager
e2ad60ea803 : [pdf] Avoid entries in both unicode maps
eee6f7d5cfa : Manual roll Dawn from 980a9b0e0fba to 2c4f822f9e78 (3 revisions)
47ebf2ecc96 : Roll ANGLE from 3c1e98a3120a to 2f8ad9c1042c (1 revision)
51e00e06aa2 : Roll vulkan-deps from 91cb9c1a7ec7 to ae5ea7b95745 (1 revision)
a6cd4d42f9a : Roll Skia Infra from 2e11bb731c4a to 064137920a75 (6 revisions)
d51cf31724b : Roll Dawn from e0dd04eb7f86 to 980a9b0e0fba (2 revisions)
ee1af45b4da : Manual roll Dawn from 78a100c8c999 to e0dd04eb7f86 (6 revisions)
efff0e5f6a3 : Manual roll Dawn from a0239e7d364c to 78a100c8c999 (14 revisions)
75740b68a28 : Manual roll ANGLE from 7c99c225401d to 3c1e98a3120a (21 revisions)
bab7d954758 : Roll vulkan-deps from 564318bbee7b to 91cb9c1a7ec7 (1 revision)
7a271f11e8f : Roll vulkan-deps from b54c68a30790 to 564318bbee7b (3 revisions)
6944cd12860 : Roll vulkan-deps from a52547961655 to b54c68a30790 (1 revision)
d0e7f49f76d : Roll Skia Infra from 65468f93b38c to 2e11bb731c4a (9 revisions)
89ac72bb492 : Add job to test rust png codec
79a00ba1931 : [viewer] Initial mesh gradient slide
d2ac2b0d646 : [Ganesh] Test for valid GL sync before deleting
b47c3ab1e43 : [pdf] Augment glyph to unicode map
4fcf0523b5d : Fix glClientWaitSync and glWaitSync on WebGL2
33c67fbb6f9 : [Graphite] Unmap vulkan intrinsic buffers.
dfbb0e6bc97 : Reland "Add verb measurement utils to SkContourMeasure"
6a270b7dbce : [Fontations] Fix lifecycle of SkData objects in scanner
3b50199d27b : Roll vulkan-deps from 37d41da69cc4 to a52547961655 (5 revisions)
6035cb5a879 : Roll ANGLE from 0e0e5eae7d22 to 7c99c225401d (32 revisions)
696528fe7d0 : Roll Dawn from a56fede2b5d7 to a0239e7d364c (17 revisions)
a5e3b355673 : [Graphite] Use PushConstants instead of VBO for load msaa draws.
9168ad248c6 : [mac] Update build target for x86 to be 10.15
6c78fc5cf35 : Roll vulkan-deps from 1d891d46a65c to 37d41da69cc4 (2 revisions)
81750810a38 : Revert "Add verb measurement utils to SkContourMeasure"
3c628426f85 : Roll Skia Infra from 548705375403 to 65468f93b38c (5 revisions)
4f8f2ecadfb : Roll Dawn from 35ba6e6c2f96 to a56fede2b5d7 (13 revisions)
7e79a516284 : Roll vulkan-deps from 0b52950e91ca to 1d891d46a65c (7 revisions)
3c62d4a94d7 : Add verb measurement utils to SkContourMeasure
85b77db25fa : [graphite] Add round cap support to CircularArcRenderStep.
dfd8fee4376 : Reland "Reland "Reland "Adding Fontations to FontConfig manager"""
bca14b77a45 : [pdf] Emit correct tag for marked-content items
04494113156 : [Fontations] Fix bounding box calculation under transforms
da3d6cb3d62 : SkCrabbyAvifCodec: Add support for gainmaps
f334411b0a0 : [graphite] Remove legacy rrect clip
77779dfed91 : Revert "Reland "Reland "Adding Fontations to FontConfig manager"""
d022fe64116 : Reland "Reland "Adding Fontations to FontConfig manager""
03600bc22ab : Roll vulkan-deps from b26c8c0409df to 0b52950e91ca (1 revision)
948b01e28f4 : Roll Skia Infra from e50eebfa6917 to 548705375403 (4 revisions)
1a47627e627 : Roll Dawn from 9daf276e5f9a to 35ba6e6c2f96 (31 revisions)
5410f1c6f86 : Roll vulkan-deps from a5edfbb83552 to b26c8c0409df (1 revision)
7385b2d99fe : Reapply "[rust png] Add a few extra `BlendOp`, regions, and `num_plays` tests."
dbca13d4a16 : [rust png] Don't retry `parse...FrameInfos` if input didn't change.
13cb7b05b0a : Revert "Reland "Adding Fontations to FontConfig manager""
b20654d2ca8 : [rust png] Don't look for more `fcTL` chunks during incremental decode.
c8227c6669e : [rust png] Account for `fFrameAtCurrentStreamPosition` inaccuracies.
6e0c0a25516 : Reland "Adding Fontations to FontConfig manager"
4dd0fcb037f : [pdf] Expand content item scope
b9dbe5c8df4 : [graphite] Simplify ShaderInfo API
5758d9c344e : Revert "[rust png] Add a few extra `BlendOp`, regions, and `num_plays` tests."
6e0ec2eed5d : Reland "Update Ganesh GL interface to look for timer query functionality."
106ecca863a : Roll vulkan-deps from 098ec4c2bd02 to a5edfbb83552 (10 revisions)
8c245b9cdcd : Roll ANGLE from fe99836c8bb8 to 0e0e5eae7d22 (13 revisions)
e1368a645c7 : Roll Skia Infra from 98a334bd44af to e50eebfa6917 (5 revisions)
6103cb480f9 : Roll SwiftShader from 3aaa6784ca31 to 1495532f997f (2 revisions)
f73c9fc0165 : [rust png] Add a few extra `BlendOp`, regions, and `num_plays` tests.
6d8a5ebeb76 : [rust png] Explicitly handle unsupported `dstInfo` in `SkPngCodecBase`.
d8ad496cdb2 : [rust png] Implement `rust_png::BlendOp::Over` in `SkPngRustCodec`.
4f14f2a8944 : [graphite] Add CircularArcRenderStep.
bdd225968da : Roll vulkan-deps from 6bf0a68d2621 to 098ec4c2bd02 (1 revision)
21035cd95b6 : base: include minimum header for x86 SIMD
35ad4e89212 : Enable some tests on Dawn compat
f08fbc46588 : Revert "Update Ganesh GL interface to look for timer query functionality."
263c3e2ecd3 : Roll ANGLE from 3a265f143be4 to fe99836c8bb8 (4 revisions)
0e5cf5c5f5a : Roll Skia Infra from 8bc4729c0325 to 98a334bd44af (5 revisions)
54955880ae4 : Roll Dawn from 809e420e990b to 9daf276e5f9a (22 revisions)
dd0158912c7 : Roll SwiftShader from 145112eea713 to 3aaa6784ca31 (1 revision)
eb8759b18fa : Range-check custom typeface glyph ids
98a0689c633 : Roll vulkan-deps from e548898aa37c to 6bf0a68d2621 (1 revision)
cadf2538dcd : Roll vulkan-deps from bf33950646b5 to e548898aa37c (9 revisions)
e5d3deaa5e3 : [Graphite] Use mappable buffers for Vulkan intrinsic uniforms.
4fe193f93fe : [rust png] Mark frames as fully received even without decoding them.
da6c17329e0 : Roll vulkan-deps from dd7c0efb9d54 to bf33950646b5 (3 revisions)
04b7f3fb32c : Roll ANGLE from f2315dbe32bd to 3a265f143be4 (3 revisions)
9032a551808 : Roll Skia Infra from 4fb594542104 to 8bc4729c0325 (3 revisions)
cc75472dedc : Roll Dawn from be9d992b58d8 to 809e420e990b (16 revisions)
76eea6d6e18 : [rust png] Handle palette expansion on Skia side.
998e8ee7fd3 : Roll vulkan-deps from b0229dbd25db to dd7c0efb9d54 (12 revisions)
4cbb98c1939 : Update Ganesh GL interface to look for timer query functionality.
5c607daf4cf : Add factories for SkWorkingColorSpaceShader and SkColorFilterShader
1b19f89253b : [graphite] Add toggle to control aspects of Pipeline creation
a53015386c3 : Roll ANGLE from 9a4c7495f3cd to f2315dbe32bd (5 revisions)
a2c90467288 : Roll Skia Infra from df0da6f45570 to 4fb594542104 (2 revisions)
634999390a9 : Roll Dawn from 6685fff40671 to be9d992b58d8 (9 revisions)
ff59ce65022 : [rust png] Discover frames more aggressively in `onGetFrameCount`.
53c9663c3b8 : Expose backdropFilterTileMode to saveLayer API
42f9070e662 : Reland "partition_alloc: Fix condition about sanitizers."
721bbd7c034 : [infra] Remove Debian11 RTX3060 jobs, fix dimension for Win RTX3060
a7121676d15 : [graphite] Preserve PaintParamsKey structure on error cases
1cfef3bf3f7 : Expose SkRuntimeEffectBuilder via SK_API
ca70414efaf : Remove Pixel2XL jobs
833329d0b83 : Add example of making custom SkColorFilter using sksl
23379a126fe : Skip compiling RustDecoderTest on WASM
142b91da856 : Roll vulkan-deps from 6834fd1efef0 to b0229dbd25db (1 revision)
235f4d694c8 : Roll ANGLE from 4bdcdf0deef5 to 9a4c7495f3cd (15 revisions)
197bbaac0ec : Roll Skia Infra from fc1fbc4105b0 to df0da6f45570 (11 revisions)
eb83da4a1cf : Roll Dawn from 8e9623dbadf3 to 6685fff40671 (14 revisions)
ad09ab099fe : Roll vulkan-deps from d8276cfd24b7 to 6834fd1efef0 (1 revision)
f77e7440d93 : [rust png] Initial `SkPngRustDecoderTest.cpp`.
3731c1f7cf2 : [Graphite] Make sure not to use Protected Vertex/Index buffers.
d7cb15f1ceb : Add begin()/end() to SkString.
c7ade5b9622 : [pdf] Change structure identifiers to match specification
b11804aaabb : Revert "partition_alloc: Fix condition about sanitizers."
c89324dd8f6 : Roll ANGLE from 4aa12e9e17f5 to 4bdcdf0deef5 (9 revisions)
64f4f192294 : Roll Skia Infra from 6c65601e5d77 to fc1fbc4105b0 (5 revisions)
32b1f315fd4 : Roll SwiftShader from 0afe6a306dd2 to 145112eea713 (1 revision)
6940e75c0c1 : Roll Dawn from 6a645bf6ebd6 to 8e9623dbadf3 (19 revisions)
9c6cc020751 : Roll vulkan-deps from 4220684adef5 to d8276cfd24b7 (6 revisions)
a71df7d9bc6 : partition_alloc: Fix condition about sanitizers.
4232c077e10 : [graphite] Address Native-Vulkan Precompilation thread races
0f99811561c : [graphite] Add test for precompilation thread safety
7e8c7385e67 : fix some bugs in type conversion on Loongarch
37db25292f1 : Roll vulkan-deps from 1ea770ceed23 to 4220684adef5 (1 revision)
b3e7fd26b4e : Roll ANGLE from 4b1e58d94333 to 4aa12e9e17f5 (6 revisions)
f61f8285aa7 : Roll Skia Infra from e81b15c8854a to 6c65601e5d77 (2 revisions)
b24079a8a8f : Roll Dawn from 7e8a128852ff to 6a645bf6ebd6 (12 revisions)
870ac55fa57 : Roll vulkan-deps from 7acb8149138a to 1ea770ceed23 (9 revisions)
9dae9c0a8a5 : [rust png] Enable and fix integer conversion warnings.
7b0947eb2a6 : Roll vulkan-deps from b48b5be748a7 to 7acb8149138a (7 revisions)
9c1173fded5 : Roll ANGLE from 2f644ed8a84e to 4b1e58d94333 (7 revisions)
c854e27c1e7 : Roll vulkan-deps from 06128ba913e9 to b48b5be748a7 (2 revisions)
b24b912941b : Roll Dawn from 4c14e8e4f022 to 7e8a128852ff (17 revisions)
52294d4e1b1 : Roll Skia Infra from 70e3e4feed08 to e81b15c8854a (1 revision)
9458dea59d3 : Roll SwiftShader from 74b783dffb9b to 0afe6a306dd2 (1 revision)
2d31a2a0ede : crabbyavif: Add support for cropping
cd866d306f8 : [graphite] ShaderInfo::toSkSL() release asserts
156ffbe3d94 : Roll vulkan-deps from bd3c30184daf to 06128ba913e9 (2 revisions)
1fe11f75184 : Roll ANGLE from 0dbe85f31776 to 2f644ed8a84e (15 revisions)
9fb845811cf : Roll Skia Infra from 78ae0bf49048 to 70e3e4feed08 (3 revisions)
ecfa8b54b3c : Roll Dawn from 65fa98d50f6d to 4c14e8e4f022 (10 revisions)
841325a669f : Roll vulkan-deps from ad31dd1cb898 to bd3c30184daf (11 revisions)
dfe24ae3bd1 : Roll skottie-base from 6a07fae851d7 to a049ff55ff14
3a081993e2a : [jetski] ensure we call ALOOPER_POLL_CALLBACK at least once
8d22576ce92 : [graphite] Add Test-Ubuntu18-...-Release-All-TSAN_Graphite_*_Vulkan jobs
30d773beef6 : Manual roll Dawn from 0b31a6ca843a to 65fa98d50f6d (2 revisions)
09917e06158 : [graphite] Address Precompile thread-safety by creating extra ResourceProviders
69d9d593a09 : Roll Dawn from f3c7cc5c580e to 0b31a6ca843a (62 revisions)
0a5e7299f85 : Revert "[graphite] Adjust rrect clipping to better match Ganesh."
68f21f903bd : [graphite] Disable use of Shape for arcs in Chrome.
1cf41b4958c : Roll vulkan-deps from d04c1f676c96 to ad31dd1cb898 (3 revisions)
1b0702f842d : Roll ANGLE from 576b5ef40a9b to 0dbe85f31776 (8 revisions)
78bc8d95f34 : Roll Skia Infra from 5a6102d3459e to 78ae0bf49048 (2 revisions)
09cdacf52ba : Roll SwiftShader from 7a9a492a38b7 to 74b783dffb9b (1 revision)
ccc8932d35a : Roll skcms from f96615e73170 to 2c7a7bff0512 (1 revision)
8073f7b8bc8 : Roll vulkan-deps from a993e01ad888 to d04c1f676c96 (3 revisions)
7baeaff3a6b : Avoid usage of the clang::musttail attribute on ppc64le
539ab9d3981 : Add More Mac *SAN Graphite testing jobs
e56bb466715 : Add a way to set filter mode in SkAnimatedImage.
de3e195d8a5 : Roll vulkan-deps from 8f346c5caf5a to a993e01ad888 (10 revisions)
baf314b7292 : Roll ANGLE from 78a694a1b82a to 576b5ef40a9b (6 revisions)
43241530959 : Merge 2 release notes into RELEASE_NOTES.md
ef6ce5e9b82 : Update Skia milestone to 132
e8f20c3007e : Roll Skia Infra from 2a35388fb5f0 to 5a6102d3459e (4 revisions)
39bd1d3f242 : [graphite] Use uint instead of int for divides
7d16e5cd642 : [rust png] Initial support for Bazel build.
f3c5824ba7b : Add JSON error when presubmit fails
de6e47f0f17 : Roll vulkan-deps from e0070499f409 to 8f346c5caf5a (1 revision)
4c6edb2fcfb : Roll ANGLE from e7f0d107f258 to 78a694a1b82a (1 revision)
871c2501427 : [ganesh] Remove last use of SK_IGNORE_ANGLE_VULKAN flag
b0605f5060c : Roll Skia Infra from 462486a81e05 to 2a35388fb5f0 (5 revisions)
5856ed62039 : [graphite] Add Arc support to Shape.
f02aee323d5 : [rust png] Populate `fFrameHolder` with all animation frames.
adcdc416581 : Remove unavailable jobs from CQ
511a2210025 : Update Bazel's rules_rust to v0.52.2
b4171f5ba83 : Update Swarming dimensions for IntelXe tasks
d6d3c4f624a : [graphite] Rebind textures on pipeline changes w/ dst-copy
88cb2bff384 : Manual roll ANGLE from 3ef8d1714d61 to e7f0d107f258 (14 revisions)
d6d0d017348 : Roll vulkan-deps from ab901eb0f984 to e0070499f409 (1 revision)
d2833b68c8c : Roll vulkan-deps from 73fd75175922 to ab901eb0f984 (7 revisions)
381de9ead2a : Clean up references to SkDevice::createDevice
7f0afa5266e : [skottie] Fix Viewer crash on invalid json inputs
18ed8969a0b : Roll vulkan-deps from f58427c0db47 to 73fd75175922 (3 revisions)
5b327f14ae5 : Roll ANGLE from ae5c3b969e66 to 3ef8d1714d61 (6 revisions)
9c0b93d92ff : DawnCaps: Prepare for Dawn's output struct StringView change.
c1d988d40dc : Roll Skia Infra from bf6353777f11 to 462486a81e05 (6 revisions)
b11e2641523 : Roll Dawn from 68d8508758f2 to f3c7cc5c580e (31 revisions)
3b65cdd28da : Fix SkRRect::dumpToString
97cebfb0613 : Roll vulkan-deps from 4480c8e9e59c to f58427c0db47 (3 revisions)
7c2b4068c09 : [graphite] Re-add makeRoundOut() for dst copy bounds
960377675d0 : Punt CTS enforcement of VkProtectedContext_Xyz_Graphite tests to API 36
b8a2df19c6f : Update SkQPTestCases for Android V (15) Release
67b261bfb8e : [graphite] Remove analytic clip staging flag.
1e269594df9 : ssci: canonicalize / backfill dependencies managed by DEPS
0a9bfc90496 : Roll ANGLE from d0e2141a997c to ae5c3b969e66 (10 revisions)
796a7a43a0b : Roll Skia Infra from 09baf58309a3 to bf6353777f11 (4 revisions)
6e4a2f266a1 : Roll vulkan-deps from 458c840c3ccf to 4480c8e9e59c (6 revisions)
3a9e6b6a472 : [graphite] Clean up legacy dstCopy from KeyContext
8c95b719bf0 : Reland "Enable CrabbyAvif for Android framework"
cf9a558b3ab : Revert "Adding Fontations to FontConfig manager"
12bc549478f : Adding Fontations to FontConfig manager
1349ddc074a : [graphite] Reset fDstCopy in resetCommandBuffer()
cfdf6198752 : [Ganesh] Add support for VK_EXT_frame_boundary.
ac3efd1fd12 : Punt CTS enforcement of VkProtectedContext_Xyz_Graphite tests to API 36
4d308d22166 : [Ganesh] Add GrSubmitInfo struct to GrDirectContext::submit
80191e69c97 : [pdf] Emit `/Tabs /S` on each page
d38148cdf98 : DawnErrorChecker: Prepare for Dawn StringView callback change.
3b67bb1c600 : Roll vulkan-deps from dbcb4e8a0f0c to 458c840c3ccf (1 revision)
3028f960cf7 : [skjson] Non-recursive Json writer impl
e0bb55353b2 : Roll ANGLE from 878e1c92af0b to d0e2141a997c (1 revision)
8aa79a28196 : Roll Skia Infra from f00ae00fd775 to 09baf58309a3 (10 revisions)
82f414e3ede : Roll Dawn from 2574827cf13b to 68d8508758f2 (8 revisions)
a077e78e531 : Roll vulkan-deps from 657296f55449 to dbcb4e8a0f0c (2 revisions)
67f030795a4 : Manual roll ANGLE from 0f7371ae347d to 878e1c92af0b (7 revisions)
cd2ed28f15e : Manual roll Dawn from 4103ee393de2 to 2574827cf13b (10 revisions)
701b6e4b4bc : [ganesh] Disable stencilBuffers and MSAA for Protected GL Contexts
8a2fe88d31e : [skif] Adjust test tolerances for some GPUs
192bdd1e79f : Manual roll ANGLE from aacbf041f6cd to 0f7371ae347d (14 revisions)
3d283da1044 : [graphite] Migrate Vulkan pipeline to consulting immutable sampler descriptions populated at graphite level
45ffa0f6828 : Manual roll Dawn from b0d038d01ff9 to 4103ee393de2 (10 revisions)
77772b9967e : [skif] Fix rescale bounds analysis, consolidate draw codepaths
bde0363c02f : [graphite] Update PaintParams key generation and SkSL generation to use dstCopy intrinsics
85487bb10f0 : [graphite] Fix VulkanCaps::fSupportsMemorylessAttachments init when fProtectedSupport
857248fe0a9 : Manual roll Dawn from 90b955a8bf93 to b0d038d01ff9 (25 revisions)
8fe5949e36a : Add SubIFD to exif data when using WriteExif.
308f6988f3b : Replace/rename Gr* functions in SkMathPriv with Sk* naming convention
38e2598c487 : Revert "Enable CrabbyAvif for Android framework"
1e9afcd7dda : Roll vulkan-deps from a07eac9c2613 to 657296f55449 (3 revisions)
d639ed5e570 : Roll Skia Infra from 73d05ecd42e4 to f00ae00fd775 (7 revisions)
cbca9656f1d : Roll jsfiddle-base from 38bd0b71717f to 034839b9814b
aea58b06403 : Roll skottie-base from b1e534c0b156 to 6a07fae851d7
e10a4080542 : Roll debugger-app-base from c48a419da832 to 931df19ec335
3866c52c4aa : Roll shaders-base from 50fae2cd87dc to 99b73d05cdae
aee8fbf37e5 : Add tests for SkNextPow2
b5856c768d0 : Roll vulkan-deps from dd729cf1f807 to a07eac9c2613 (7 revisions)
d0e49fde376 : Enable CrabbyAvif for Android framework
df39f58957b : [pdf] Spanify SkPDFTagNode::fChildren
e4274977118 : Migrate Debian 11 builders
573053913bd : Allow SkSpan<T> declarations with incomplete T
6afbd6253e6 : Add compile-check for SkImage::RequiredProperties being a map key
294040c7c19 : [graphite] Correct sort order when using analytic and shader clips.
89284b1d7ee : Roll ANGLE from cd7f294923c7 to aacbf041f6cd (18 revisions)
b91af1fc2a8 : Roll Skia Infra from 015479b2afc8 to 73d05ecd42e4 (12 revisions)
282efddaadb : Roll Dawn from f8d389436d22 to 90b955a8bf93 (13 revisions)
52155a48ff8 : Roll vulkan-deps from cfe779d31eee to dd729cf1f807 (3 revisions)
c3ff0dfeae7 : Roll vulkan-deps from 63d60d4b27f3 to cfe779d31eee (1 revision)
6696c34a681 : Roll vulkan-deps from 0f0002bea54e to 63d60d4b27f3 (1 revision)
702044e777a : Roll vulkan-deps from 31bccb45ea33 to 0f0002bea54e (1 revision)
9145d1ef963 : [graphite] Minor fixes to PaintParamsKey dumping/labels
485783860ab : Roll vulkan-deps from db76988ee4a3 to 31bccb45ea33 (9 revisions)
4aff9603622 : [graphite] Use Dawn limits for buffer alignment caps
59f512b47cc : Reland "Write test for platform image generators."
e59dd68285b : Remove wgpu::FeatureName::SurfaceCapabilities
51c6b6393f1 : Enforce IWYU on graphite's compute and render subdirectories
e8e0a8c4634 : WebGPU: Prepare for Dawn output struct WGPUStringView breaking change.
0dfa080b5d7 : Roll ANGLE from 6024e9c05548 to cd7f294923c7 (2 revisions)
0f38a655fb1 : Roll vulkan-deps from 88a2d3572b41 to db76988ee4a3 (4 revisions)
04f5fa261f7 : Roll Skia Infra from 3bf46a600a54 to 015479b2afc8 (16 revisions)
16d9829a02b : Roll SwiftShader from 72ca2005cd32 to 7a9a492a38b7 (2 revisions)
cef6e842336 : Roll Dawn from 096f7148b5de to f8d389436d22 (7 revisions)
e732cdf455c : Remove initializer_list use from SkZip.h
e85bb2b4094 : Manual roll ANGLE from 9edd74e2ff86 to 6024e9c05548 (7 revisions)
87ebaeadb13 : [graphite] Bring minimum uniform buffer size back down to 2kb
4069b403aee : Revert "Write test for platform image generators."
f84aacc93fe : Write test for platform image generators.
310dab4cd4b : [Fontations-backend] Roll Fontations, Skrifa to 0.22.3
6d7b5973c07 : [rust png] Hide stream from `SkCodec` parent class to prevent rewinding.
252432fd272 : [graphite] Migrate dawn pipeline to consulting immutable sampler descriptions populated at graphite level
7e39844b7d4 : Reland "[ganesh] Add GrGLANGLEBackend::kVulkan"
582045670a9 : Roll vulkan-deps from d2712d5ff726 to 88a2d3572b41 (2 revisions)
0c5c4d622fa : [graphite] Avoid redundant de-duplication of uniform data
379139f0beb : [rust png] More idempotent tracking of frame's `fullyReceived` state.
b83dda24e67 : [rust png] `DecodingState` field naming style fix: s/ dst / fDst /, etc.
fe3cd2adeaa : [Fontations-backend] Roll Fontations, Skrifa to 0.22.2
68fea8aa589 : Roll ANGLE from 7ff7775b2b83 to 9edd74e2ff86 (8 revisions)
f074e2bd0ab : Roll Skia Infra from a4dc29886e99 to 3bf46a600a54 (11 revisions)
df2e478e9f8 : Roll Dawn from e5c5c65f60a7 to 096f7148b5de (9 revisions)
a1fccdd14be : Roll vulkan-deps from 598d211b737c to d2712d5ff726 (10 revisions)
ba95ec201df : [pdf] Reduce size of StructElem
eed291efd68 : [pdf] Clean up SkPDFDocument initialization
f1e049ff12e : [graphite] Dynamically size vertex buffers
03431ca9d33 : Revert "[ganesh] Add GrGLANGLEBackend::kVulkan"
16722e4e9e6 : Revert "Allow passing multiple node IDs per PDF structure node."
be6f28168f0 : [skif] SaveLayerRec has configurable backdrop tile mode
155fb18c6cd : [graphite] Fix issue with YUVA premultiplied alpha.
40d51ebc76d : Fix BigImageTest caps check
61bb59fcef3 : avif: Add support for using CrabbyAvif in Android
e7d091f3f6a : [graphite] Add logic to interpret ShaderNode sampler data into SamplerDescs
ce27189cef4 : [ganesh] Add GrGLANGLEBackend::kVulkan
41dc2603644 : Roll vulkan-deps from 7aaa4e9a5b34 to 598d211b737c (1 revision)
38d4b4a8a7b : [Fontations] Centralize computation of scale and remaining matrix
d063b5e450d : Roll ANGLE from 572fd30ee239 to 7ff7775b2b83 (9 revisions)
af3aa03d19e : Roll Skia Infra from ff7821c285f9 to a4dc29886e99 (9 revisions)
e2f38f2830e : Roll SwiftShader from 8580e3a98e50 to 72ca2005cd32 (1 revision)
4eff6f49d9b : Roll Dawn from caeda3b8046d to e5c5c65f60a7 (12 revisions)
014978fc487 : Roll vulkan-deps from fb8f0127fca4 to 7aaa4e9a5b34 (9 revisions)
0411eaf35f6 : [graphite] Use 32 bit integers for SSBO indices
7c61f884224 : [rust png] Delete incorrect memory safety comments.
f7cb94e3332 : Add `Canvas.quickReject` to quickly tell if a Rect is within the clip
d50cbfd5398 : [rust png] Support converting `png::FrameControl` into Skia equivalent.
2e873e1aa0a : [Fontations] Hold on to Path target arrays to avoid alloc churn
4d72af8474e : [Fontations] Generate SkPath-compatible arrays on the Rust side
5fb36dd08a2 : Optionally write the orientation when encoding a JPEG.
bfc6189f01a : Reland "[graphite] Expand BlurMaskFilter Precompilation ..."
09a3c821f1f : [Fontations] Use Skrifa's NullPen instead of custom pen
5218f67ec71 : Emit StructElem ID only when referenced
73b62ac7b8f : [Fontations-backend] Roll Fontations, Skrifa to 0.22.1
44d45ac4de6 : Roll vulkan-deps from cb6007a9d31d to fb8f0127fca4 (1 revision)
5baca373087 : Roll ANGLE from 7b0212b337ff to 572fd30ee239 (7 revisions)
03fa11bda80 : Roll Skia Infra from a97dae4c20c1 to ff7821c285f9 (7 revisions)
db59fb03ddd : Roll Dawn from 690b037a7532 to caeda3b8046d (6 revisions)
6fc00ce2245 : Revert "[graphite] Expand BlurMaskFilter Precompilation ..."
b851101c844 : [rust png] Initial `onGetFrameCount()` and `getFrameHolder()`.
d1d7deb68f9 : Roll vulkan-deps from 223523f05dc0 to cb6007a9d31d (7 revisions)
534633fb4bd : [rust png] Support color transforms for (narrower) subframes.
a8e1b85c76d : [graphite] Include intrinsic uniforms in the fragment shader
6c89706638e : [graphite] Expand BlurMaskFilter Precompilation ...
2d2425a303a : Avoid segfault in BigImageTest, rearrange skip rules
dfeeb199b22 : Check for null child in SkWorkingFormatColorFilter
9ab06e4fe89 : Add bungeman to public API owners
06721a72483 : Roll vulkan-deps from 50ad0c468c61 to 223523f05dc0 (1 revision)
80d141cf32a : Roll ANGLE from 0ec8a7f1b588 to 7b0212b337ff (6 revisions)
5f4740b998b : Roll Skia Infra from 3ba9cd40c151 to a97dae4c20c1 (4 revisions)
333df70fbcd : Roll Dawn from cef41cc71c85 to 690b037a7532 (13 revisions)
2f1e716bbe6 : Roll vulkan-deps from 1f1860958df1 to 50ad0c468c61 (1 revision)
f88a9ae4d9c : Roll vulkan-deps from 180925851393 to 1f1860958df1 (1 revision)
11b29596e1f : Roll vulkan-deps from 7bd80578336d to 180925851393 (8 revisions)
7efc11f2ea9 : [infra] Add jobs for Pixel9
fcf60c5c5df : [ganesh] Add GrGLCaps setting to control GL/ANGLE Protectedness handling
6fa7718752d : Roll vulkan-deps from 64d149df26fd to 7bd80578336d (2 revisions)
39dfe3603c0 : Suppress function UBSan on macOS
627608cd5f5 : [Fontations] Optimize generateMetrics() color glyph search
41ee5da8f48 : Roll ANGLE from 966739ac8b4c to 0ec8a7f1b588 (8 revisions)
e77818421e9 : Roll Skia Infra from c13e7159cdd1 to 3ba9cd40c151 (4 revisions)
41d29a557f9 : Roll Dawn from 3eee2be83d3f to cef41cc71c85 (6 revisions)
81cad57ecb0 : Roll vulkan-deps from 8d76160610aa to 64d149df26fd (12 revisions)
fdce28bab4f : Allow multiple equivalent "clang version"s
9ebb7c3640a : Manual roll vulkan-deps from d9c62a3d49c7 to 8d76160610aa (7 revisions)
150f2275a93 : Manual roll Dawn from fb8059bc3f80 to 3eee2be83d3f (8 revisions)
665fdd2e75b : Allow two "clang versions".
e986bd33fb3 : Command line flags for Android Viewer
5f6ea5ff840 : Update Xcode 15.4 to 16.0
b2c5f640cb4 : [canvaskit] Add OffscreenCanvas to some type definitions
5c9f28d05af : Roll vulkan-deps from 6b4db5a6d55c to d9c62a3d49c7 (4 revisions)
612ac7d7507 : Roll ANGLE from eaffa034c7ff to 966739ac8b4c (9 revisions)
dcb270abc4e : Roll Dawn from efd781b42ae5 to fb8059bc3f80 (6 revisions)
d804bdf62b5 : Roll Skia Infra from 67cb227058fa to c13e7159cdd1 (7 revisions)
e5ce4ecbcf7 : Roll vulkan-deps from 13d1d0b93ffd to 6b4db5a6d55c (3 revisions)
9f3b32b7b77 : Remove some debugf from SkFontMgr_AndroidNDK
291f4b7c6d4 : Manual roll Dawn from 876bb88cb063 to efd781b42ae5 (8 revisions)
e623a37de33 : Manual roll vulkan-deps from 683d4c5faa30 to 13d1d0b93ffd (10 revisions)
9af762100cf : [graphite] Modify key methods and comment docs to be able to accept a SamplerDesc container ptr
c98431ac12d : [bazel] Use filegroup for common_flags_config
79e652aad7a : [graphite] Centralize SamplerDesc length variables used by Dawn, Vulkan
788233232d6 : Roll vulkan-deps from 3368b0fc9442 to 683d4c5faa30 (4 revisions)
2c9708b6890 : Roll ANGLE from b563ede4e672 to eaffa034c7ff (9 revisions)
d00ad3e055f : Roll Skia Infra from b6f29eefb704 to 67cb227058fa (5 revisions)
f90deca6ba7 : Roll SwiftShader from 2afc8c97882a to 8580e3a98e50 (1 revision)
f6e95045c16 : Roll Dawn from 77184aa49df6 to 876bb88cb063 (13 revisions)
3541cdf2fae : [pdf] Give up on embedding CFF
2e92f0b443f : Update BRD to support getAndroidGainmap
118914b760c : Roll vulkan-deps from ab526a2539cd to 3368b0fc9442 (3 revisions)
cf28f9dd411 : Manual roll Dawn from 4f0cdf482175 to 77184aa49df6 (8 revisions)
6e5ff925314 : [graphite] Dynamically size buffer allocations
3cdb1850e24 : Readd mipmap sharpening control to GrContextOptions
cffb3d74282 : Roll ANGLE from 1e74ce33a56c to b563ede4e672 (5 revisions)
01ef0128dc5 : [Fontations] Prune subtrees in COLRv1 bounds computation
f0dc8761945 : Roll vulkan-deps from 685cc1e1e3d5 to ab526a2539cd (5 revisions)
b4ffde52ab6 : Roll skottie-base from 2814735474b8 to b1e534c0b156
b4d6ffd6c5b : Roll Dawn from 803ff2bdaf8e to 4f0cdf482175 (7 revisions)
897ece7121a : Roll shaders-base from 0417970a971e to 50fae2cd87dc
e457a245a83 : Roll jsfiddle-base from dadc8978c6e9 to 38bd0b71717f
3dd057b613d : Roll debugger-app-base from ebfa46371f66 to c48a419da832
80ea638c60a : Roll Skia Infra from a2950260d0fb to b6f29eefb704 (5 revisions)
7b0669f89ae : Android NDK SkFontMgr
85802e6d648 : Prevent overflow when growing an SkRegion's RunArray
057c416b9d6 : RESTRICT AUTOMERGE: Check for size overflow before allocating SkMask data
2b5677575df : RESTRICT AUTOMERGE: Check for size overflow before allocating SkMask data
a5619d9c855 : RESTRICT AUTOMERGE: Check for size overflow before allocating SkMask data
cbf6a595362 : RESTRICT AUTOMERGE: Check for size overflow before allocating SkMask data
27acba5039a : [pdf] Bounds check in skia_alloc_func

+- Project: platform/external/squashfs-tools

aaa99c6 : Pin squashfs-tools to C17.

+- Project: platform/external/stg

22cc107 : use "same" as an antonym for "different" instead of "equals"
2db14d8 : comparison: make Compare just return a Comparison
f6276f3 : comparison: replace all optional<Comparison> with Comparison{}
e9fa632 : comparison: rename CompareRoots to Compare and Compare to CompareWorker
ec16594 : comparison: coalesce Compare declarations and definitions
3cf098d : comparison: minor improvements to main comparison function
7f35bd4 : comparison: move MatchingKeys and PairUp functions
9f4bbb1 : comparison: coalesce MatchingKey declarations and definitions
03ba8e5 : comparison: coalesce ResolveTypedef declarations and definitions
496eb8a : comparison: coalesce ResolveQualifier(s) declarations and definitions
944f6c5 : comparison: document Defined and Nodes helpers
bd7453c : comparison: make helpers called CompareNodes into methods called Nodes
1efcdd1 : comparison: rename helper method CompareDefined to Defined
142950a : comparison: move implementation out of header file
609da07 : comparison: move ResolveTypedefs definition to end of file
ccd6ed6 : comparison: drop trailing _ in DiffDetail member names
67b67cd : comparison: drop trailing _ in Result member names
30a393c : comparison: switch to a functional interface
3690289 : graph: Apply*: take function objects by forwarding reference
f860443 : graph: replace Apply* Result template parameter with decltype(auto)
2b56e31 : naming: move implementation out of header
ee13edb : stable hash: move implementation out of header
350b3de : stable hash: rename StableHashWorker local variable from hash to value
dbbeed4 : unification: clarify reference-taking in Find
ae3ac08 : unification: clarify purpose of visited set
85eb2ef : unification: remove stray explicit namespace qualification
05da8ee : substitution: fix some const confusion
23f719a : order: fix comment describing behaviour of Reorder
52c7e39 : documentation: add some reference information
b59b3e3 : comparison: fix lint warnings
c0b22e6 : replace static storage specifications with anonymous namespaces
cb9fc88 : add missing and remove unused includes, again
e92934d : SCC: update comment regarding state checking
edac6a5 : post processing: summarise enumerator additions and removals
b104f58 : post processing: move implementation into an anonymous namespace
bc9e956 : stgdiff test: add enumerator addition / removal short format test case
348695a : stgdiff test: add input format to ShortReportTestCase
a632efd : stgdiff test: wrap in namespace stg and drop stg:: prefixes
bd0428f : constify more local variables
a6744d9 : post processing: remove max_crc_only_changes configuration parameter
a4e7d1b : post processing: add comments to briefly describe each pass
1883a88 : stgdiff test: adjust CRC only cases
f0096c3 : add missing and remove unused includes
21ae904 : proto writer: add variant member type edge annotations
113c06e : test suite: refresh Rust enum discriminator test
21583fa : naming: give forward-declared anonymous types better names
b01f11b : DWARF processor: types in anonymous scopes cannot be type roots
0471349 : scope: track cumulative anonymity of scopes
949516b : scope: add a unit test
100665d : rename member scope_name_ to scope_
8a956b6 : DWARF wrappers: handle DW_OP_addr DW_OP_plus_uconst locations
d4f3001 : DWARF wrappers: make Address kind an enum instead of bool
b62b2ea : Abigail reader: handle NOTYPE symbols
4b9d897 : documentation: formatting fixes
54180b6 : documentation: add index page
6e77349 : documentation: improve comparison documentation
6ce7cb7 : documentation: add tutorial
c7f39d2 : documentation: add reporting documentation
367ca77 : documentation: improve naming documentation
cad270e : documentation: update Markdown numbered lists
23be0d1 : documentation: reformat Markdown files
ddee22f : documentation: rename markdown files
d02276e : reporting: use Unicode arrows in Graphviz node text
80a6620 : fuzz: constify `LLVMFuzzerTestOneInput` `data` parameter
7d5e15c : fuzz: add missing and remove unnecessary #includes
6b99c26 : stgdiff: switch default output format to small

+- Project: platform/external/strace

968e9fd34 : Fix our pre-GPL strace to build as C23.

+- Project: platform/external/stressapptest

e86d69b : Build stressapptest for x86_64 architecture

+- Project: platform/external/subsampling-scale-image-view

b596130 : Snap for 12695596 from c1f2ae5a70a28653cd5500963619c527a8eaefc7 to 25Q1-release

+- Project: platform/external/swiftshader

f9fccaa8b : Fix crash when color input index mapping is nullptr
3aaa6784c : Don't install commit hook if not in a git repo
145112eea : Regres: Update test lists @ 0afe6a30
0afe6a306 : Fix input attachment read from stencil-only attachment
74b783dff : Regres: Update test lists @ 7a9a492a
7a9a492a3 : ssci: use canonical date format
07d3f212a : Regres: Update test lists @ 72ca2005
72ca2005c : Regres: Update test lists @ 8580e3a9
8580e3a98 : Regres: Update test lists @ 2afc8c97
2afc8c978 : Include usage instructions in update-marl.sh
3239872f9 : Regres: Update test lists @ 8dd40811
624c29bd3 : Squashed 'third_party/marl/' changes from aa9e85b21..690889fbb

+- Project: platform/external/tensorflow

747d7de7f94 : Switch Tensorflow to canonical ABSL.

+- Project: platform/external/tflite-support

b6f4be8c : Use canonical ABSL.

+- Project: platform/external/tinycompress

4d5a622 : tinycompress: Add support for compress_set_codec_params API

+- Project: platform/external/toybox

8cfbdbfc : Define toybox_recovery
46e22bce : Remove getline() / getdelim() declarations
1f1641cc : Add test for setting NF=0
538b7620 : Fix bugs setting NF=0 or negative
c7cfe2d9 : Fix clang warning; remove incorrect comment
6b02f029 : For compatibility with gcc 14.
200ecee6 : Add symlink for blkdiscard
fb3ca98e : Update or1k for qemu-9.2.0-rc1.
97b5ea16 : Fix backslash parsing in $'' so \' doesn't end quote context.
cfef2aab : Cleanups and bugfixes. Leak less memory, pass tests with ASAN=1.
493bd401 : Kernel commit d9a1dab65aa2 broke qemu -hda for sh4eb, add config symbol to get the old behavior back.
a51c66ed : Switch type to long long one entry earlier so (1<<31) isn't negative and printf("%llx\n", _PS_RGROUP|_PS_RUSER) isn't ffffffff80000000
4b238cf6 : Don't output "length" byte at start of text replies.
30f6d6ca : Detect truncated replies, minor cleanups, better error messages.
090f138c : ls: clarify relationship of -s and --block-size.
2045c952 : Kana provided the file, and clarified the ==4 check excludes fat16.
d04c449e : Comment out test that submitter didn't include test file for, and move *type=='v' check back under distinguishing between vfat/iso9660 instead of run for all filetypes and potentially triggering if a new 4 letter physical filesystem type starting with v shows up in future.
eafb0b6b : blkid, mount: fix `blkid -L` and add support for `mount LABEL=...`
542f6275 : Fix devmem build with clang.
c5a646a9 : Move break/continue to builtin commands with help text.
c47184b3 : Annotate broken test_sh which bash (TEST_HOST) passes but toysh doesn't.
fa34b125 : TEST_HOST behavior changed between bash 5.0 and 5.2
658189ac : Allow TOOLCHAIN= to list additional host commands for airlock_install.
a60aa34c : Add QEMU_M for the default mkroot case QEMU="$CROSS -M blah", and add riscv64 kernel build.
07fe9d55 : Minor cleanups.
b98e7eba : Support non memory mapped access
4a318158 : Update awk.test
db66ae9d : Fix out-of-bounds memory access in splitter()
001eafcd : Rewrite record-reading routines
0631575b : Update awk.test
d91b999c : Fix field split bug when RS="" and FS is one char
2f07d87b : Add hwdata path to lsusb/lspci
4073e779 : Add O_NONBLOCK and O_NOCTTY to grep's open flags, so grep -r doesn't catch on FIFO and tty dev nodes quite so easily. Add FIFO test.
a977849d : Add a FAQ to explain scripts/prereq/build.sh hermetic builds.
75edf093 : FAQ: add to the index and existing answer that had no question link.
225ae60c : Minor style tweaks.
47bc0009 : chdir to / in xvdaemon() so background process doesn't pin mount points.
1d4666d1 : Use nommu-friendly daemonize.
895c1d83 : More cleanup, mostly switch CFG_wKLOGD_SOURCE_RING_BUFFER to -s option.
81ca1562 : Cleanup pass on klogd.
96bab824 : Don't snapshot the flags just to locally enable -r, use second variable.

+- Project: platform/external/trace-cmd

295bf00 : Remove obsolete never-upstreamed hack.

+- Project: trusty/external/trusted-firmware-a

7c906c3a1 : el3_spmc: Enable FFA_MEM_RETRIEVE_MEM_REQ from the hypervisor
0c2128928 : el3_spmc: Set NS bit by default and clear it as needed
cfb8e177e : qemu: Track NS bit across SPMC shares and lends
a5495b6b9 : qemu: Increase the size of the datastore
f1300ed3a : Add dirgroup for trusty genrule
c41b6e2c9 : fix(plat/qemu): Include the SMCs from the Trusty SPD
bac756b32 : fix(spmc): Add platform specific routine to set boot info
9ad3baae4 : fix(plat/qemu): Enable Group0 handler for SPMC_AT_EL3==1
7890645e2 : fix(plat/qemu): Align the shmem datastore
906530b49 : Revert "plat: qemu: Update secure memory nodes instead of deleting"
b614089df : plat: qemu: Update secure memory nodes instead of deleting
545184846 : fix(el3-spmc): Move ERROR line inside conditional
40307d636 : Revert "spd: trusty: only process one function ID at a time"
669e2b159 : docs(changelog): changelog for v2.11 release
b5ead359f : docs: move DPE option to experimental section
932d6cdb2 : docs(juno): update PSCI instrumentation data
e9c335fbd : docs(n1sdp): update N1SDP PSCI instrumentation data
3ab6ae4ef : docs(maintainers): update the maintainer list for LTS
b131a1237 : docs(maintainers): add code owners for runtime services module
017566560 : docs(maintainers): add missing header files
5ecb6bb0b : docs(maintainers): add code owners for context management module
4d16bc70b : docs(maintainers): add code owners for runtime exceptions module
a45f75a6a : docs(maintainers): add missing files related to SPMD
46fc25019 : docs(maintainers): update missing files related to EL3 SPMC
ed3525e60 : docs: remove reference to phabricator pages
e01c71266 : build: allow shell commands in `CC` and friends
7efaad9e3 : fix(juno): remove incorrect assert in sp min boot
291e71822 : build: skip toolchain detection for some targets
b7491c77d : fix(fvp): added ranges for linux
d963c6bad : docs(prerequisites): update mbedtls version used
6d5e7e8be : build(libfdt): introduce include guards
3a965bb37 : chore(compiler-rt): update compiler-rt source files
55aed7d79 : feat(mbedtls): update config for 3.6.0
b87d7ab13 : feat(tc): add save/restore DSU PMU register support
f99a69c38 : feat(dsu): save/restore DSU PMU register
e6ae019a8 : feat(plat): add platform API that gets cluster ID
91dcf7d17 : build: remove experimental mark for PSA FWU support
82222db80 : build: mark DICE_PROTECTION_ENVIRONMENT as an experimental feature
5593ec1ae : refactor(arm): remove console switch from platform
62865b4ee : fix(smc): correctly find pmf version
6fbc98b15 : feat(cpus): support to update External LLC presence in Neoverse N3
af3e8e63b : refactor(console): consolidate console runtime switch
92752355f : refactor(synquacer): console runtime switch on bl31 exit
3e6fb8722 : refactor(nxp): console runtime switch on bl31 exit
c1fd8f9d7 : refactor(nvidia): console runtime switch on bl31 exit
d51a63260 : refactor(hisilicon): console runtime switch on bl31 exit
48932c3c2 : refactor(xilinx): console runtime switch on bl31 exit
9edf08b17 : refactor(mediatek): console runtime switch on bl31 exit
88ab2261b : refactor(armada): console runtime switch on bl31 exit
d3c643c2d : refactor(imx): console runtime switch on bl31 exit
46163dddd : refactor(brcm): console runtime switch on bl31 exit
655e62aa5 : fix(xilinx): follow MISRA-C standards for condition check
20fa9fc82 : fix(zynqmp): resolve null pointer dereferencing
412d92fdf : fix(psci): fix parent_idx in psci_validate_state_coordination
0a9c244b0 : fix(psci): mask the Last in Level nibble in StateId
4efd21936 : docs(context-mgmt): add documentation for context management library
ba6b69494 : chore: rename hermes to neoverse-n3
6aa5d1b3a : feat(cpus): support to update External LLC presence in Neoverse V2
3c225878e : refactor(smccc): refactor vendor-el3 build
320fb2939 : refactor(docs): added versioning to smccc services
42cbefc72 : feat(smccc): add version FID for PMF
f7679d437 : refactor(smccc): move pmf to vendor el3 calls
273b89838 : refactor(smccc): move debugfs to vendor el3 calls
de6b79d8b : feat(smccc): add vendor-specific el3 service
be5b1e223 : feat(smccc): add vendor specific el3 id
5af143f29 : refactor(fvp): move cpus with nomodel
a5c4212f0 : refactor(cpus): replace adr with adr_l
31857d4cb : refactor(build): introduce adr_l macro
011829b3e : refactor(cpufeat): restore functions in detect_arch_features
aaaf2cc31 : refactor(cpufeat): add macro to simplify is_feat_xx_present
9e51f15ed : chore: simplify the macro names in ENABLE_FEAT mechanism
b03ba4801 : feat(zynqmp): remove unused pm_get_proc_by_node()
0bd2075ef : build(fvp): make all builds unconditional
d42987c34 : refactor(tc): move SCMI nodes into the 'firmware' node
c33a39367 : refactor(tc): move MHUv2 property to tc2.dts
75400dd5d : refactor(tc): drop the 'mhu-protocol' property in DT binding
e6ef3ef0f : refactor(tc): append properties in DT bindings
79c6ede09 : refactor(tc): move SCMI clock DT binding into tc-base.dtsi
4e772e6ba : refactor(tc): introduce a new file tc-fpga.dtsi
f9565b2af : refactor(tc): move out platform specific DT binding from tc-base.dtsi
defcfb2b6 : refactor(tc): move out platform specific code from tc_vers.dtsi
b3a9737ce : refactor(tc): add platform specific DT files
35028bd7d : refactor(tc): rename 'tc_fvp.dtsi' to 'tc-fvp.dtsi'
ab0450f34 : refactor(tc): introduce a new macro ADDRESSIFY()
154eb0a22 : fix(tc): enable FEAT_MTE2
3d6c7e590 : build: improve diagnostics for unrecognized toolchain tools
784092ee1 : build(rzg): separate BL2 and BL31 SREC generation
4d1289bd3 : build(rcar): separate BL2 and BL31 SREC generation
7b4535266 : build: separate preprocessing from DTB compilation
758ccb802 : build: remove `MAKE_BUILD_STRINGS` function
0c77651fb : feat(mt8188): remove apusys kernel handler usage constraints
beefea8a0 : docs(maintainers): remove a maintainer for MediaTek SoCs
44ddee6f0 : fix(tc): increase stack size when TRUSTED_BOARD_BOOT=0
f019c8013 : feat(handoff): add support for RESET_TO_BL2
9c11ed7e3 : feat(arm): support FW handoff b/w BL1 & BL2
469b1d841 : feat(handoff): add TL source files to BL1
0646c9b29 : feat(handoff): add TE's for BL1 handoff interface
6a4da2905 : refactor(bl1): clean up bl2 layout calculation
a5566f65f : feat(arm): support FW handoff b/w BL2 & BL31
5e4711208 : fix(tc): missing device regions in spmc manifest
2d7902d9b : feat(docs): update maintainer list for neoverse_rd
e73c3c3a6 : feat(s32g274a): enable BL31 stage
8b81a39e2 : feat(s32g274a): add S32G274ARDB2 board support
306946b01 : feat(nxp-drivers): add Linflex driver
8d6fb77a9 : refactor(neoverse-rd): remove soc_css.mk from common makefile
a965d73f0 : refactor(neoverse-rd): unify GIC SPI range macros
a0bd61985 : refactor(neoverse-rd): clean-up nrd_plat_arm_def2.h file
301c01748 : feat(neoverse-rd): disable SPMD_SPM_AT_SEL2 for N2/V2 platforms
2cfedfad9 : feat(rdn2): enable AMU if present on the platform
3a5b37530 : feat(rdn2): enable MTE2 if present on the platform
78b793956 : feat(rdn1edge): remove RD-N1-Edge from deprecated list
c396c823a : refactor(neoverse-rd): move defines out of platform_def.h
f104eecde : feat(sgi575): remove SGI-575 from deprecated list
7f693bd99 : refactor(neoverse-rd): add defines for ROM, SRAM and DRAM2
947e78728 : refactor(neoverse-rd): define naming convention for RoS macros
069bad718 : refactor(neoverse-rd): define naming convention for CSS macros
37f59e4ea : refactor(neoverse-rd): refactor mmap macro for RoS device memory region
9f1ba0af6 : refactor(neoverse-rd): refactor mmap macro for CSS device memory region
edd480d94 : refactor(neoverse-rd): set mmap naming convention
4d4763f71 : refactor(neoverse-rd): rename nrd_plat_v2.c to align with convention
13e7d35c6 : refactor(neoverse-rd): refactor nrd_soc_css_def_v2.h file
d26dae7cb : refactor(neoverse-rd): refactor nrd_soc_platform_def_v2.h file
01a27ecce : refactor(neoverse-rd): refactor nrd_base_platform_def.h
a6b8d7bb2 : refactor(neoverse-rd): header files for second generation platforms
d239edea5 : fix(rdn1edge): update RD-N1-Edge's changelog title
6fb16dac6 : feat(neoverse-rd): add scope for RD-V1-MC
86a4949fd : feat(neoverse-rd): add scope for RD-V1
18b50707f : feat(neoverse-rd): add scope for SGI-575
b9c32730e : feat(neoverse-rd): disable SPMD_SPM_AT_SEL2 for A75/V1/N1 platforms
fed936852 : feat(neoverse-rd): enable AMU if supported by the platform
4679a22ce : refactor(neoverse-rd): clean-up nrd_plat_arm_def1.h file
c80b7f095 : refactor(neoverse-rd): remove unused defines from platform_def.h
30625abd4 : refactor(neoverse-rd): move defines out of platform_def.h
e1eb21627 : refactor(neoverse-rd): rename definitions in nrd_ros_fw_def1.h file
de24310d6 : refactor(neoverse-rd): rename definitions in nrd_ros_def1.h file
9f8881498 : refactor(neoverse-rd): rename definitions in nrd_css_fw_def1.h file
7efe99af8 : refactor(neoverse-rd): rename definitions in nrd_css_def1.h file
bb4c20378 : refactor(neoverse-rd): rewrite CSS and RoS device mmap macros
d72c8d0f8 : refactor(neoverse-rd): refactor mmap macro for RoS device memory region
3917f0573 : refactor(neoverse-rd): refactor mmap macro for CSS device memory region
23b2609d1 : refactor(neoverse-rd): migrate mmap entry from nrd_plat1.c
7784c2610 : refactor(neoverse-rd): rename nrd_plat.c file
4783ff451 : refactor(neoverse-rd): refactor nrd_soc_css_def.h file
c9e5d32fa : refactor(neoverse-rd): refactor nrd_soc_platform_def.h file
d1d45cd0a : refactor(neoverse-rd): move away from nrd_base_platform_def.h
2bc056231 : refactor(neoverse-rd): remove inclusion of nrd_base_platform_def.h
e4f08cd9a : refactor(neoverse-rd): header files for first generation platforms
682da9327 : refactor(neoverse-rd): refactor scope for Neoverse RD platforms
47ea30338 : feat(stm32mp2): use early traces
cf237f8d5 : feat(st-bsec): use early traces
94cad75a3 : refactor(st): replace STM32MP_EARLY_CONSOLE with EARLY_CONSOLE
ae770fedf : feat(console): introduce EARLY_CONSOLE
a1255c758 : feat(bl32): create an sp_min_setup function
02cc2efb6 : refactor(docs): restructure min requirements section
47312115d : fix(cpus): workaround for Cortex-X4 erratum 2763018
0ec69a5bf : fix(optee): set interrupt handler before kernel boot
762a1c44b : feat(qemu): update to manifest v0.3
9bf31a59d : fix(tc): remove timer interrupt from G1S
5436047a0 : refactor(qemu): do not hardcode counter frequency
c643188f1 : docs(mte2): update docs
07b576a44 : refactor(fvp_r): remove duplicated macro definitions
ccc717391 : refactor(changelog): change all occurrences of RSS to RSE
59549e62c : refactor(qemu): change all occurrences of RSS to RSE
a822b8d82 : refactor(fvp): change all occurrences of RSS to RSE
a11230ad0 : refactor(fiptool): change all occurrences of RSS to RSE
d797665cc : refactor(psa): change all occurrences of RSS to RSE
47805037a : refactor(fvp): remove leftovers from rss measured boot support
7f8589cdb : refactor(tc): change all occurrences of RSS to RSE
624c9a0b3 : docs: change all occurrences of RSS to RSE
b8245368c : refactor(measured-boot): change all occurrences of RSS to RSE
e249e5695 : refactor(rse): change all occurrences of RSS to RSE
3857898f6 : refactor(psa): rename all 'rss' files to 'rse'
097e7d37e : refactor(tc): rename all 'rss' files to 'rse'
a5a5947a2 : docs: rename all 'rss' files to 'rse'
024c49484 : refactor(measured-boot): rename all 'rss' files to 'rse'
955116982 : refactor(rss): rename all 'rss' files to 'rse'
75093b726 : docs(fconf): add TB_FW config bindings
c54076934 : docs(fvp): restructure FVP platform documentation
d6c76e6c6 : fix(cm): add more feature registers to EL1 context mgmt
45716e377 : fix(spm): add device-regions used in tf-a-tests
832e4ed52 : fix(gpt): declare gpt_tlbi_by_pa_ll()
652c1ab15 : fix(xilinx): check proc variable before use
4a20d5cb8 : docs(plat): remove TC1 entry from the deprecation table
c833ca66a : fix(cpus): workaround for Cortex-X4 erratum 2740089
3b57ae23e : fix(docs): typo in the romlib design
e769f830d : feat(qemu): allow ARM_ARCH_MAJOR/MINOR override
c35299d6b : fix: static checks on spmc dts
b99926ef7 : fix(gpt): unify logging messages
20e2683da : chore(gpt): remove gpt_ prefix
8754cc5d1 : feat(aarch64): add functions for TLBI RPALOS
c0c280dfd : fix(cert-create): add guardrails around brainpool usage
1b86ec5b5 : docs: decrease the minimum supported OpenSSL
1b694c77c : feat(qemu): enable FEAT_ECV when present
b9ecf6458 : refactor(fvp): reduce max size of HW_CONFIG to 16KB
222f885df : feat(locks): add bitlock
eef240cfd : fix(gicv2): fix SGIR_NSATT bitshift
e639ad23c : fix(cert-create): use a salt length equal to digest length for RSA-PSS
df960bcc3 : refactor(arm): replace hard-coded HW_CONFIG DT size
a796d5aa1 : fix(cm): remove ENABLE_FEAT_MTE usage
4731c00bb : fix(build): wrap toolchain paths in double quotes
d701b48ee : fix(bl1): add missing `__RW_{START,END}__` symbols
6d8546f9f : fix(fvp): don't check MPIDRs with the power controller in BL1
3b48ca17f : fix(arm): only expose `arm_bl2_dyn_cfg_init` to BL2
a6b3643c2 : fix(cm): hide `cm_init_context_by_index` from BL1
e40b563e8 : fix(bl1): add missing spinlock dependency
10134e355 : fix(cpus): workaround for Cortex-A715 erratum 2728106
90801842e : docs(build): update GCC to 13.2.Rel1 version
ab4d5dfe2 : docs: clarify build environment prerequisites
e494afc05 : feat(stm32mp2): add ddr-fw parameter for fiptool
b9014f858 : feat(build): redirect stdin to nul during toolchain detection
152ad112d : fix(cc): code coverage optimization fix
92bba3e71 : fix(ff-a): add NS memory node to fvp_spmc_optee_sp manifest
2d960a116 : fix(spmd): skip NS EL1 context save & restore operations
7c9720f2e : docs: update release and code freeze dates
83a4e8e0c : fix(rmmd): fix bug, raised by coverity, when zeroing manifest struct
ed9bb824e : fix(cm): add more system registers to EL1 context mgmt
d594ace68 : fix(handoff): correct representation of tag_id
a312bfb34 : feat(handoff): add additional TE tags
88f7c87b8 : docs(rmm): document console struct in rmm boot manifest
32904472c : feat(rme): pass console info via RMM-EL3 ifc
e6f8fc743 : fix(pmu): fix breakage on ARMv7 CPUs with SP_min as BL32
67ff4f564 : refactor(arm): remove unused SP_MIN UART macros
2f1c5e7eb : build: use GCC to link by default
6aae3acfd : fix(cm): save guarded control stack registers
52ee81730 : feat(imx8mq): detect console base address during runtime
fca5f0ebe : fix(spmd): register group0 handler only if supported
9a7f892e2 : feat(xilinx): send SGI to mailbox driver
1e02ce683 : build(changelog): move mte to mte2
c282384db : refactor(mte): remove mte, mte_perm
328d304d2 : chore: rename Poseidon to Neoverse V3
fe8cc55a0 : fix(nuvoton): prevent changing clock frequency
351976bb0 : feat(imx8ulp): give HIFI4 DSP access to more resources
bd2f7d325 : fix(cpus): workaround for Cortex-A715 erratum 2413290
152f4cfa1 : fix(cpus): workaround for Cortex-A720 erratum 2926083
c34dd06a8 : fix(mhu): use MHUv2 if PLAT_MHU_VERSION undefined
f589a2a5f : chore: update status of Cortex-X3 erratum 2615812
998da640f : refactor: fix common misspelling of init*
d39b12369 : refactor(cm): minor update on conditions used in prepare_el3_exit
7385213e6 : fix(cpus): workaround for Cortex-A720 erratum 2940794
e366f8cf3 : feat(rcar3): change CAM setting to improve bus latency of R-Car Gen3
8d92e4be1 : refactor(stm32mp1): move the MCU security to BL32
77b4ca0b2 : feat(st-clock): add function to control MCU subsystem
6db0c1d86 : docs(threat_model): cover the 'timing' side channel threat
f811a99ea : docs(st): set OP-TEE as default BL32
40ed77fec : docs(st): one device flag for ST platforms
ae2b4a549 : fix(nuvoton): gfx frame buffer memory corruption during secondary boot
ef0d0e547 : fix(mte): use ATA bit with FEAT_MTE2
ce574314c : refactor(guid-partition): list.entry_count to unsigned int
f7c5ec1eb : refactor(mbedtls): remove mbedtls 2.x support
7d5fc98f5 : feat(rme): build TF-A with ENABLE_RME for Armv9.2
e1ecd8f8f : docs(maintainers): add missing ST files
cc5e177d0 : docs(maintainers): add Maxime as co-maintainer for ST platforms
c6b235a2e : docs(maintainers): update ST platform ports title
b2f4233a6 : docs(maintainers): sort github aliases
566d39447 : style(imx8m): add parenthesis to CSU_HP_REG
0324081af : feat(imx8mp): restrict peripheral access to secure world
cba7daa10 : feat(imx8mp): set and lock almost all peripherals as non-secure
1156c7636 : feat(imx8mm): restrict peripheral access to secure world
f4b11e59b : feat(imx8mm): set and lock almost all peripherals as non-secure
57ab6d897 : fix(cpus): fix a defect in Cortex-A715 erratum 2561034
15a04615b : fix(cpus): workaround for Cortex-A715 erratum 2413290
67ccdd9f9 : docs: remove entries of the deleted platforms
f834b64f8 : feat(rpi): add Raspberry Pi 5 support
6744d07d9 : fix(rpi): consider MT when calculating core index from MPIDR
7a9cdf58c : refactor(rpi): move register definitions out of rpi_hw.h
bbf92fe95 : refactor(rpi): add platform macro for the crash UART base address
b50297827 : refactor(rpi): split out console registration logic
97ef53052 : refactor(rpi): move more platform-specific code into common
2839a3c40 : docs: add documentation for `entry_point_info`
03fafc0b6 : refactor(sdei): use common create_spsr() in SDEI library
7d2a608a7 : build(npm): fix Commitizen ES Module errors
7944421ba : build(npm): adhere to Husky deprecation notice
c42d0d875 : fix(misra): fix MISRA defects
d6af23443 : refactor(cm): couple el2 registers with dependent feature flags
a5a966b12 : fix(tc): do not use r0 for HW_CONFIG
996b3af84 : feat(mhu): use compile flag to choose mhu version
4b4f8505e : feat(mhu): add MHUv3 wrapper APIs for RSS comm driver
bc174764f : feat(mhu): add MHUv3 doorbell driver
33c665ae9 : fix(cpus): workaround for Cortex-A715 erratum 2344187
cc41b56f4 : fix(cpus): workaround for Cortex-X4 erratum 2701112
24a4a0a5e : fix(gic600): workaround for Part 1 of GIC600 erratum 2384374
53b3cd253 : fix(cpus): workaround for Cortex-A715 erratum 2331818
6bdc856bc : fix(arm): move console flush/switch in common function
1f7324713 : fix(cpus): workaround for Cortex-A715 erratum 2420947
6df8d7647 : feat(tc): group components into certificates
6a415bd1e : feat(dice): add cert_id argument to dpe_derive_context()
33f29b8ae : refactor(sds): modify log level for region validity
7be391d1c : feat(tc): add dummy TRNG support to be able to boot pVMs
467bdf26b : feat(tc): get the parent component provided DPE context_handle
03d388d8e : feat(tc): share DPE context handle with child component
1f47a7133 : feat(tc): add DPE context handle node to device tree
e7f1181f8 : feat(tc): add DPE backend to the measured boot framework
2b53106a0 : feat(auth): add explicit entries for key OIDs
0ae9c631e : feat(dice): add DPE driver to measured boot
b03fe8c02 : feat(dice): add client API for DICE Protection Environment
c19977be0 : feat(dice): add QCBOR library as a dependency of DPE
584052c7f : feat(dice): add typedefs from the Open DICE repo
cb249050e : docs(changelog): add 'dice' scope
24844d8b7 : refactor(tc): align image identifier string macros
09bb42dbd : refactor(fvp): align image identifier string macros
c6b204cca : refactor(imx8m): align image identifier string macros
069eca669 : refactor(qemu): align image identifier string macros
a8a09e314 : fix(measured-boot): add missing image identifier string
d95060288 : refactor(measured-boot): move metadata size macros to a common header
a77a7444e : refactor(measured-boot): move image identifier strings to a common header
d5b4d5d2e : feat(st-sdmmc2): set FIFO size to 1024 on STM32MP25
9c36b900f : feat(drtm): update DRTM version to 1.0
b94d59099 : feat(drtm): update references to DRTM beta0
c86cfa359 : feat(drtm): for TPM features fw hash algorithm should be 16-bits
5dde96b02 : feat(drtm): add ACPI table region size to the DLME header
bc9064ae5 : feat(drtm): update return code if secondary PE is not off
89f5c753a : feat(drtm): add additional return codes
1ba369a5e : chore: rearrange the fvp_cpu_errata.mk file
106c4283a : fix(cpus): add erratum 2701951 to Cortex-X3's list
aceb9c9e5 : refactor(errata-abi): workaround platforms non-arm interconnect
c9f263438 : refactor(errata-abi): optimize errata ABI using errata framework
fa402f38b : build: allow platform makefiles to configure `ENABLE_LTO`
f9f1b4d98 : docs(maintainers): add myself as SynQuacer platform co-maintainer
81de50372 : feat(imx8m): add defines for csu_sa access security
2ac4909a5 : feat(imx8m): add imx csu_sa enum type defines for imx8m
c13016bac : fix(imx8m): fix CSU_SA_REG to work with all sa registers
7ef0b8377 : fix(build): don't rely on that gcc-ar is in the same directory as gcc
bcfc29766 : refactor(allwinner): console runtime switch on bl31 exit
c864af989 : refactor(arm): console runtime switch on bl31 exit
b90bbd1af : refactor(console): flush before console_switch_state
262dc9f76 : fix(cpus): workaround for Cortex-A715 erratum 2429384
8d08a1df1 : style(fwu): change the metadata fields to align with specification
37e81a603 : style(partition): use GUID values for GPT partition fields
616605142 : feat(st): add logic to boot the platform from an alternate bank
6e99fee43 : feat(st): add a function to clear the FWU trial state counter
26aab7956 : feat(fwu): add a function to obtain an alternate FWU bank to boot
d2566cfb8 : feat(fwu): add some sanity checks for the FWU metadata
56724d09c : feat(fwu): modify the check for getting the FWU bank's state
588b01b5e : feat(st): get the state of the active bank directly
11d05a772 : feat(fwu): add a config flag for including image info in the FWU metadata
a89d58bb2 : feat(fwu): migrate FWU metadata structure to version 2
7ae16196c : feat(fwu): document the config flag for including image info in the FWU metadata
e106a78ef : feat(fwu): update the URL links for the FWU specification
c09aa4ff7 : refactor(qemu): console runtime switch on bl31 exit
ba33528a0 : fix(el3-spmc): add datastore linker script markers
7f69a4069 : fix(cpus): workaround for Cortex-X3 erratum 2372204
3c789bfcc : feat(el3-runtime): introduce UNDEF injection to lower EL
30f05b4f5 : feat(cpufeat): added few helper functions
c7080f678 : build(npm): update Node.js and all packages
7a9e9f6e9 : feat(gpt): validate CRC of GPT partition entries
17a261dec : refactor(gpt): return header instead of part_lba
5ae4aae2c : docs(maintainers): add the maintainers for imx8ulp
c67057fee : docs(imx8ulp): add imx8ulp platform
047d7d1ba : fix(imx8ulp): increase the mmap region num
8d50c91b4 : feat(imx8ulp): adjust the dram mapped region
ee25e6a51 : feat(imx8ulp): ddrc switch auto low power and software interface
c514d3cfa : feat(imx8ulp): add some delay before cmc1 access
4fafccb9a : feat(imx8ulp): add a flag check for the ddr status
e1d5c3c8f : fix(imx8ulp): add sw workaround for csi/hotplug test hang
416c4433f : feat(imx8ulp): adjust the voltage when sys dvfs enabled
caee2733b : feat(imx8ulp): enable the DDR frequency scaling support
68f132b88 : fix(imx8ulp): fix suspend/resume issue when DBD owner is s400 only
d159c0053 : feat(imx8ulp): update XRDC for ELE to access DDR with CA35 DID
5fd06421f : feat(imx8ulp): add memory region policy
ff5e1793b : feat(imx8ulp): protect TEE region for secure access only
e85304192 : feat(imx8ulp): add trusty support
e7b82a7d2 : feat(imx8ulp): add OPTEE support
36af80c2b : feat(imx8ulp): update the upower config for power optimization
ea1f7a2e1 : feat(imx8ulp): allow RTD to reset APD through MU
ab787dba7 : feat(imx8ulp): not power off LPAV PD when LPAV owner is RTD
891c547e9 : feat(imx8ulp): add system power off support
478af8d3c : feat(imx8ulp): add APD power down mode(PD) support in system suspend
daa4478a3 : feat(imx8ulp): add the basic support for idle & system suspend
bcca70b96 : feat(imx8ulp): enable 512KB cache after resume on imx8ulp
ac5d69b62 : feat(imx8ulp): add the initial XRDC support
7c5eedca4 : feat(imx8ulp): allocated caam did for the non secure world
fcd41e869 : feat(imx8ulp): add i.MX8ULP basic support
0d6b4cdb2 : build(changelog): add new scopes for nxp imx8ulp platform
e63819f2b : feat(scmi): add scmi sensor support
96a5f8762 : refactor(tc): reorder config variable defines
6dacc272b : refactor(tc): correlate secure world addresses with platform_def
d585aa162 : refactor(tc): move DTB to start of DRAM
5ee4deb8e : feat(tc): add memory node in the device tree
638e4a92d : feat(tc): pass the DTB address to BL33 in R0
4fc4e9c96 : feat(tc): add arm_ffa node in dts
bafedcbe4 : chore(tc): add dummy entropy to speed up the Linux boot
8e94163ec : feat(tc): choose the DPU address and irq based on the target
a658b46dc : feat(tc): add SCMI power domain and IOMMU toggles
e862f0bf0 : refactor(tc): move the FVP RoS to a separate file
1b8ed0993 : feat(tc): factor in FVP/FPGA differences
a02bb36ca : feat(tc): introduce an FPGA subvariant and TC3 CPUs
62320dc4f : feat(tc): add TC3 platform definitions
042741495 : refactor(tc): sanitise the device tree
553b06b5d : feat(tc): add PMU entry
18f754a27 : feat(tc): allow booting from DRAM
2afa143a4 : docs(auth): align TBBR CoT names to match the code
59621c714 : docs(versal-net): update SMC convention
d8dc1cfa6 : docs(versal): update SMC convention
93163d988 : docs(zynqmp): update SMC convention
acf0076ae : build(fpga): correctly handle gcc as linker for LTO
31f80efee : fix(build): enforce single partition for LTO build
e5e9ccdb0 : fix(rockchip): add support for building with LTO enabled
29d24bb79 : chore(tc): remove unused hdlcd
d0628728a : feat(tc): add firmware update secure partition
ba197f5f7 : feat(tc): add spmc manifest with trusty sp
3ac3b6b0a : refactor(tc): unify all the spmc manifests
0686a01b0 : feat(arm): add trusty_sp_fw_config build option
fc42f8456 : fix(tc): do not enable MPMM and Aux AMU counters always
d2e44e7d7 : fix(tc): correct interrupts
2c406ddaf : feat(tc): interrupt numbers for `smmu_700`
127eabedd : feat(tc): enable gpu/dpu scmi power domain and also gpu perf domain
29872eb33 : fix(spm): reduce verbosity on passing tf-a-tests
e58daa663 : refactor(context-mgmt): remove el1_context routines from RMM
59f8882b4 : refactor(context-mgmt): move EL1 save/restore routines into C
75414f715 : refactor(sgi): replace references to "SGI"/"sgi" for neoverse_rd
2cd66a44f : refactor(sgi): rename "CSS_SGI" macro prefixes to "NRD"
40ea4208b : refactor(sgi): move apis and types to "nrd" prefix
a1e6467b0 : refactor(sgi): replace build-option prefix to "NRD"
4ced59568 : refactor(sgi): move neoverse_rd out of css
c669f6535 : refactor(sgi): move from "sgi" to "neoverse_rd"
2d32517ce : feat(sgi): remove unused SGI_PLAT build-option
cacee0605 : fix(sgi): align to misra rule for braces
c69253cc3 : feat(rde1edge): remove support for RD-E1-Edge
10dcffedb : fix(rdn2): populate TOS_CONFIG only when SPMC_AT_EL3 is enabled
89d857780 : fix(board): update spi_id max for sgi multichip platforms
87799772e : build(corstone1000): add CORSTONE1000_WITH_BL32 preprocessor flag
1c0d02524 : build: correct minor toolchain documentation error
a23710b4b : feat(smmu): separate out smmuv3_security_init from smmuv3_init
70d849c14 : feat(smmu): fix to perform INV_ALL before enabling GPC
74ac476c1 : chore(ufs): refactor ufs_get_device_info
bc0ff02cb : fix(psa): fix static check failure
937d6fdb7 : fix(cm): update gic el2 sysregs save/restore mechanism
3e95bea5e : docs(sdei): provide security guidelines when using SDEI
83129bcd8 : fix(el3-spmc): fix dangling pointer in FFA_CONSOLE_LOG
077d8b39b : docs(threat_model): mark power analysis threats out-of-scope
e0e03a8d8 : fix(bl2): make BL2 SRAM footprint flexible
a67030c4e : docs: update FVP TC2 model version and build (11.23/17)
19258a583 : fix(tc): increase BL2 maximum size limit
a93bf0aac : refactor(tc): update platform tests
002b10604 : feat(rss): add defines for 'type' range and use them in psa_call()
5abcc8399 : feat(rss): adjust parameter packing to match TF-M changes
77241043d : refactor(tc): remap console logs
6f503e0ee : feat(tc): add RSS SDS region right after SCMI payload
0f37ae137 : refactor(n1sdp): update SDS driver calls
48d42ed5a : refactor(morello): update SDS driver calls
fdcd54132 : refactor(juno): update SDS driver calls
21b35eee9 : refactor(sgi): update SDS driver calls
8d1a04bd3 : refactor(css): support multiple SDS regions
62d646521 : fix(gpt): use DC CIGDPAPA when MTE2 is implemented
8e3978899 : feat(mte): add mte2 feat
4f6c9397b : test(fvp): remove `FVP_Foundation` model support
a1726fa7f : feat(fvp): remove left-over RSS usage
5d9711fec : docs(auth): add more information about CoTs
7f74030b8 : fix(build): properly manage versions in .versionrc.js
c25d1ccf1 : fix(build): move comment for VERSION_PATCH
59bdb426d : fix(qemu): disable FEAT_SB
8d449929e : refactor(gicv3): introducing is_valid_interrupt, a new helper utility
0b0fd0b47 : fix(fvp): permit enabling SME for SPD=spmd
c925867ec : feat(spmd): pass SMCCCv1.3 SVE hint to lower EL
f754bd466 : fix(rss): fix bound check during protocol selection
5cd10848b : fix(mhuv2): provide only the usable size of memory
8620bd0b9 : build: use toolchain identifiers in conditions
ffb774212 : build: use new toolchain variables for tools
cc277de81 : build: refactor toolchain detection
e0afd1471 : docs: change FVP argument in RME configuration
6873088c2 : feat(fvp): added calls to unprotect/protect memory
6a6b28237 : fix(cpus): workaround for Cortex-A715 erratum 2561034
2e905c068 : feat(stm32mp2): add STM32MP_USB_PROGRAMMER compilation
9883833c9 : refactor(st): move macros to common folder
8af83a448 : refactor(stm32mp1): remove unused macros
f84f21fa8 : fix(usb): add missing include
a53d13775 : Revert "fix(ti): do not take system power reference in bl31_platform_setup()"
73d772d87 : refactor(ti): remove ti_sci_init function
777f1f689 : fix(spe): invoke spe_disable during power domain off/suspend
160e8434b : feat(psci): add psci_do_manage_extensions API
70b9204e6 : fix(arm_fpga): halve number of PEs per core
d07d4d633 : feat(fvp): delegate FFH RAS handling to SP
8815cdaf5 : feat(spmd): initialize SCR_EL3.EEL2 bit at RESET
e3f9ed852 : docs(auth): add missing AUTH_PARAM_NV_CTR value
4290d3439 : docs: fix link to TBBR specification
3dafd960d : refactor(build): minor updates
0e4daed24 : refactor(build): remove enabling feat
7275ac2af : fix(build): march handling with arch-features
2a71f1633 : refactor(build): refactor mandatory options
af1ac2d7d : fix(scmi): induce a delay in monitoring SCMI channel status
3447ba1f0 : feat(css): initialise generic timer early in the boot
84eb3ef6c : fix(libc): memset inclusion to libc makefiles
b22e6898e : feat(cros_widevine): add ChromeOS widevine SMC handler
0bdaf5c80 : fix(k3): increment while reading trail bytes
7671008fc : fix(ehf): restrict secure world FIQ routing model to SPM_MM
fbd32ac08 : fix(build): mute sp_mk_generator from build log
6c1ae0750 : refactor(build): allow mandatory feats disabling
0c86a846d : fix(fconf): boot fails using ARM_ARCH_MINOR=8
99db13bfa : fix(libc): add memcpy_s source file to libc_asm mk
fac4a843c : docs(contributing): various improvements
30019d869 : feat(cpufeat): add feature detection for FEAT_CSV2_3
6c2c8528a : docs: import MISRA compliance spreadsheet
77f7a6a8c : docs: update links to TF-A issues tracker
6d2c502af : feat(imx8m): obtain boot image set for imx8mn/mp
a727d59d9 : feat(cpufeat): add cortex-a35 l2 extended control register
c1aa3fa55 : fix(cpus): workaround for Cortex X3 erratum 2641945
30788a845 : fix(mte): remove CTX_INCLUDE_MTE_REGS usage
341df6af6 : feat(arm): move GPT setup to common BL source
86e4859a0 : feat(arm): retrieve GPT related data from platform
1e7545acc : refactor(arm): rename L0/L1 GPT base macros
641571c72 : docs(cpufeat): clarify description of FEATURE_DETECTION macro
d6bb94f3a : feat(stm32mp1): only fuse monotonic counter on closed devices
0a33adc05 : refactor(mte): deprecate CTX_INCLUDE_MTE_REGS
ae6ce196d : fix(imx8mp): unconditionally enable only the USB power domain
197ac780d : feat(stm32mp2): add BSEC and OTP support
ae6542f6c : feat(st-bsec): add driver for the new IP version BSEC3
a773f4121 : fix(intel): update nand driver to match GHRD design
8b7dd8397 : feat(qemu-sbsa): handle memory information
516a98ef2 : feat(rcar3): update IPL and Secure Monitor Rev.4.0.0
7e06b0675 : feat(rcar3): add cache operations to boot process
5e8c2d8e2 : feat(rcar3): change MMU configurations
cfa466ab7 : feat(rcar3): enable the stack protection
b908814c7 : docs(threat-model): supply chain threat model TF-A
93d1f4bc7 : style(hooks): copyright year check as per author email
bb4d7d719 : docs(threat-model): add threat model for PSA FWU and TBBR FWU(recovery)
586701cec : refactor(st-i2c): use fdt_read_uint32_default()
b76a43c93 : feat(arm): add COT_DESC_IN_DTB option for CCA CoT
4c79b86ed : feat(fvp): add CCA CoT in DTB support
dc35bd320 : docs(arm): update TBBR CoT dtsi file name in doc
c4b35cebf : feat(dt-bindings): introduce CCA CoT, rename TBBR
0de9a12c8 : docs(fconf): update bindings for multi-RoT CoTs
0651b7beb : feat(spmd): add FFA_MSG_SEND_DIR_RESP2
cc6047b3d : feat(spmd): add FFA_MSG_SEND_DIR_REQ2
04ac0b3c2 : feat(fconf): support signing-key in root cert node
d1eb4e237 : docs(security): security advisory for CVE-2023-49100
1685b4206 : build: remove the `NM` variable
7e3875892 : build: prefer `gcc-ar` over `ar`
86e489c19 : build: add `--no-warn-rwx-segments` when linking with GCC
7fc4d7780 : build: always use the C compiler to assemble
781cb3143 : build: always use the C compiler to preprocess
e068a7ca8 : fix(rcar): fix implicit rule invocations in tools
a6462e05c : feat(memmap): add RELA section display
88528f557 : feat(stm32mp2-fdts): add board ID OTP in STM32MP257F-EV1
c238a46a7 : feat(stm32mp2-fdts): add OTP nodes in STM32MP251 SoC DT file
cb0d6b5b5 : fix(stm32mp2): add missing include
3007c7284 : feat(st): do not directly call BSEC functions in common code
189db9486 : feat(st): use stm32_get_otp_value_from_idx() in BL31
9cd784db5 : refactor(st): update test for closed chip
c70610450 : refactor(st-bsec): improve BSEC driver
b8816d3cb : refactor(st): use dashes for BSEC node names
6dc8ee61f : fix(memmap): fix memory map dump when SEPARATE_CODE_AND_RODATA=0
68cac6a0f : fix(cpus): workaround for Cortex-A78C erratum 2683027
a65c5ba35 : fix(cpus): workaround for Cortex-X3 erratum 2266875
3f9df2c6a : fix(cpus): workaround for Cortex-X3 erratum 2302506
305825b49 : feat(qemu): enable transfer list to BL31/32
0e8def996 : feat(optee): enable transfer list in opteed
6a3225e22 : fix(spm): silence warning in sp_mk_generator
638a6f8e0 : feat(el3-spmc): add support for FFA_CONSOLE_LOG
6cbe2c5d1 : feat(intel): enable query of fip offset on RSU
62be2a1ae : feat(intel): support query of fip offset using RSU
6a80c20ef : fix(xilinx): deprecate SiP service count query
4fc54c99d : feat(qemu-sbsa): mpidr needs to be present
6611e81e1 : fix(rockchip): fix documentation on how to build bl31 in AARCH64
34bb883a5 : docs(threat-model): provide PSR specification reference
d2e1f6a88 : fix(ti): do not stop non-secure timer on world switch
2331a34f7 : feat(stm32mp2): put back core 1 in wfi after debugger's halt
d1c85da8e : feat(stm32mp2): add plat_my_core_pos
4da462dcd : fix(stm32mp2): correct early/crash console init
9e72d01ed : fix(memmap): fix footprint free space calculation
04e7f8082 : fix(spm): not defining load-address in SP config
9b0764361 : docs(qemu-sbsa): describe what we get from QEMU
42925c15b : feat(qemu-sbsa): handle CPU information
8c56a7889 : fix(context-mgmt): align the memory address of EL2 context registers
663f024f2 : feat(versal): extend platform address space sizes
9260a8c81 : feat(imx8m): make bl33 start configurable via PRELOADED_BL33_BASE
7ec53afaa : fix(xilinx): add console_flush() before shutdown
427e46dde : fix(xilinx): fix sending sgi to linux
594970160 : feat(xilinx): add new state to identify cpu power down
88ee0816a : feat(xilinx): request cpu power down from reset
c3280df1b : feat(xilinx): power down all cores on receiving cpu pwrdwn req
ade92a64e : feat(xilinx): add handler for power down req sgi irq
3dd118cf9 : feat(xilinx): add wrapper to handle cpu power down req
b22592618 : fix(versal-net): use arm common GIC handlers
79953190b : fix(xilinx): rename macros to align with ARM
ebe82a392 : feat(qemu): support TRP for RME
8ffe0b2ed : feat(qemu): load and run RMM image
6cd113fe0 : feat(qemu): setup Granule Protection Table
cd75693f5 : feat(qemu): setup memory map for RME
a5ab1ef7f : feat(qemu): update mapping types for RME
c69e95eed : feat(qemu): use mock attestation functions for RME
f465ac221 : fix(qemu): increase max FIP size
0f0fd499d : fix(rotpk): move rotpk definitions out of arm_def.h
b77f55d6c : feat(cpu): add support for Poseidon V CPU
61a29682c : fix(cpu): correct variant name for default Poseidon CPU
57bc3c405 : fix(rmmd): avoid TRP when external RMM is defined
cc3374ac6 : refactor(xilinx): move plat_get_syscnt_freq2 to common file
1f02024b1 : refactor(versal-net): rename VERSAL_NET_IOU_SCNTRS register to generic
07625d9dd : fix(versal-net): setup counter frequency
f000744e0 : fix(versal): initialize cntfrq_el0 register
6d511a8c3 : feat(platforms): update SZ_* macros
bfef8b908 : feat(context-mgmt): report context memory usage
9acff28ae : build(mpam): add new build option CTX_INCLUDE_MPAM_REGS
503cf9927 : refactor(juno): move plat_def_uuid_config to fiptool
56c8d022b : fix(intel): update from INFO to VERBOSE when printing debug messages
68bb3e836 : feat(intel): support wipe DDR after calibration
a72f86ac4 : fix(intel): update system counter back to 400MHz
d0e400b3c : fix(intel): revert back to use L4 clock
d6ae69c8c : feat(intel): support QSPI ECC Linux for Agilex
6cf16b368 : feat(intel): support QSPI ECC Linux for N5X
8be16e44c : feat(intel): support QSPI ECC Linux for Stratix10
4d122e5f1 : feat(intel): add in QSPI ECC for Linux
b727664e0 : fix(intel): add HPS remapper to remap base address for SDM
979c5482d : docs: update links to tf.org-wide process documents
ac4f6aaf8 : refactor(cm): move MPAM3_EL3 reg to per world context
f43e9f57d : fix(cpus): workaround for Cortex X3 erratum 2743088
96c031c7f : docs(versal): add ERRATA_ABI_SUPPORT build documentation
d766f994d : feat(versal): enable errata management feature
4087ed6c1 : refactor(cm): reset the cptr_el3 before perworld context setup
34db3531b : fix(cpus): workaround for Cortex-A520 erratum 2858100
ae19093f2 : fix(errata): add Cortex-A520 definitions
40fd755ba : feat(handoff): enhance transfer list library
32a87d440 : feat(intel): enable SDMMC frontdoor load for ATF->Linux
150d2be0d : fix(intel): fix hardcoded mpu frequency ticks
fffcb25c3 : feat(intel): support SDM mailbox safe inject seu error for Linux
f4aaa9fd6 : fix(intel): update DDR range checking for Agilex5
e8a3454cb : fix(intel): update fcs functions to check ddr range
b0f447897 : fix(intel): update fcs crypto init code to check for mode
e9afde1a2 : fix(rcar3): change RAM protection configurations
ae4860b0f : fix(rcar3-drivers): check loaded NS image area
4f7e0fa38 : fix(rcar3): fix load address range check
f03bfc304 : fix(cpus): workaround for Cortex-A520 erratum 2630792
b01a93d77 : fix(cpus): workaround for Cortex-X2 erratum 2778471
c9508d6a1 : fix(cpus): workaround for Cortex-A710 erratum 2778471
7934b68af : fix(sgi): apply workarounds for N2 CPU erratum
54b86d47e : fix(cpus): fix incorrect AMU trap settings for N2 CPU
08f6398b2 : feat(rdn2): update power message value to 0
59173793f : feat(el3-spmc): add support to handle power mgmt calls for s-el0 sp
8eb4efe70 : fix(marvell-tools): include mbedtls/version.h before use
d2ce6aa06 : fix(tc): guard PSA crypto headers under TF-M test-suite define
511e4a48c : feat(versal-net): add bufferless IPI Support
a8778185d : feat(tc): provide a mock mbedtls-random generation function
4622da467 : build(versal-net): reorganize platform source files
b469880e3 : fix(rcar3-drivers): check "rcar_image_number" variable before use
9778b270e : fix(rcar3-drivers): check for length underflow
ef38fb1f5 : fix(rcar3-drivers): add integer overflow check
93b8952ee : fix(rcar3-drivers): add integer overflow check
f1bb459c3 : feat(imx8m): add 3600 MTps DDR PLL rate
060fe6333 : fix(imx8m): align 3200 MTps rate with U-Boot
cb60a876e : fix(imx8m): handle 3734 in addition to 3733 and 3732 MTps rates
51b8b9c3c : feat(fvp): add support for virtio-net, virtio-9p and virtio-rng
5fb5ff569 : feat(mt8188): add secure iommu support
e830e4cde : feat(ff-a): update FF-A version to v1.2
ca99680ce : docs: fix errata in RMM-EL3 Communication Interface documentation
c0f8ce537 : fix(cpus): workaround for Neoverse V2 erratum 2618597
0926d2df7 : feat(libc): add printf support for space padding
49df7261b : feat(rdn2): add dts for secure partition
8c30a0c7f : feat(fvp): add stdout-path
5ed8e2550 : feat(el3-spmc): synchronize access to the s-el0 sp context
727ab1c4a : feat(el3-spmc): add support to map S-EL0 SP device regions
83c3da771 : feat(el3-spmc): add support to map S-EL0 SP memory regions
1f6b2b265 : feat(el3-spmc): add support for FFA_MEM_PERM_GET and SET ABIs
48db2b012 : feat(el3-spmc): add support to setup S-EL0 context
ab2b36321 : feat(neoverse): enable NEOVERSE_Nx_EXTERNAL_LLC flag
5710229f9 : fix(docs): revise the description of REGISTER_CRYPTO_LIB
dd2c88860 : fix(rk3328): apply ERRATA_A53_1530924 erratum
5b5562b2e : fix(errata): check for SCU before accessing DSU
538516f5d : feat(security): add support for SLS mitigation
912c4090f : fix(cpus): workaround for Neoverse V2 erratum 2662553
f10d3e495 : fix(sgi): reduce cper buffer carveout size
0737bd33f : fix(sgi): increase BL31 carveout size
81d4094d6 : fix(cpus): workaround for Cortex-A78C erratum 2743232
71ed91733 : fix(cpus): workaround for Neoverse V1 erratum 2348377
355ce0a43 : fix(cpus): workaround for Cortex-X3 erratum 2779509
9c41cc182 : feat(mediatek): remove bl32 flag for mtk_bl
dea307fd6 : refactor(fvp): remove RSS usage
878354a84 : refactor(rss)!: remove PLAT_RSS_NOT_SUPPORTED build option
8eb6a1da1 : fix(xilinx): update correct return types
e2d9dfe2b : fix(xilinx): add FIT image check in DT console
046e13047 : fix(xilinx): add FIT image check in prepare_dtb
2f17ac01a : fix(intel): read QSPI bank buffer data in bytes
304ad94b3 : fix(build): don't generate build-id
49ba1df52 : fix(build): add forgotten BL_LDFLAGS to lto command line
3d6edc325 : feat(build): check that .text section starts at page boundary
cfbac5959 : fix(intel): bl31 overwrite OCRAM configuration
82752c412 : fix(intel): update individual return result for hps and fpga bridges
2d46b2e46 : feat(intel): increase bl2 size limit
8fbd3073c : fix(intel): update stream id to non-secure for SDM
460692afb : fix(intel): revert sys counter to 400MHz
2973054d9 : fix(intel): update HPS bridges for Agilex5 SoC FPGA
68820f642 : fix(intel): temporary workaround for Zephyr SMP
47ca43bcb : feat(intel): restructure watchdog

+- Project: trusty/external/trusty

68b9075 : secretkeeper: fix implicit sign change in enum declaration
ce254e4 : Add dirgroup for trusty genrule
3f45c60 : test-runner: call gic init post trusty dev init

+- Project: trusty/external/headers

bf69136 : Add dirgroup for trusty genrule

+- Project: trusty/lk/common

4d665fb5 : lib: rust_support: Control logging via LK_LOGLEVEL_RUST
27b512d0 : x86/64: remove Multiboot header and entry point
e116fbfd : dev: virtio: vsock: Simplify vsock_connection_lookup
5d57348d : lib: rust_support: Control Rust logging via LK_DEBUGLEVEL
08745480 : dev: virtio: vsock: Refactor TrustyHal
0b8eb0eb : dev: virtio: vsock: Add support for sharing memory with the host
c062dbcd : arch: arm64: Add some extra FPU APIs for testing
5eaa979c : dev: virtio: vsock-rust: Add dependency on libhypervisor_backends
8c15bc9f : make: Update FIND_MACRO filter to include extra_versions
ab1baf0b : trusty: gic: add a compact vector representation to save RAM
a3762ffc : dev: interrupt: arm_gic: Use EOIMode=1 to defer irqs in doorbell mode
36680634 : dev: interrupt: Return 'INT_NO_RESCHEDULE' from sm_intc_enable_interrupts()
fe9a9980 : dev: interrupt: arm_gic: Update interrupt priority in doorbell mode
18a8627f : Remove the trailing slash for FIND_CRATE
05c25155 : arch: arm: Support Rust CFI in the kernel
44aa8dc9 : Add dirgroup for trusty genrule
a2f38b8c : dev: virtio: vsock: Remove unimplemented! macros
c0fefdfa : dev: virtio: vsock: Map pci cfg size to CAM type
4510522b : Rust: Mark compiler invocations as recursive make
8b3ca4dc : dev: virtio: vsock: Add header to expose C API
555129e6 : dev: virtio: vsock: Map dma pages as non-executable
ba21cbc1 : lib: rust_support: Add container_of macros
8330bab6 : rust: Enable CFI in the kernel
c6e5b049 : arch: x86: Set code-model to kernel in kernel rust target
47c933ac : dev: virtio: vsock-rust: Add debug message if rx or tx threads exit
bfdcc601 : arch: x86: Move .got and .dynamic sections
0c6439fd : dev: virtio: vsock: Adjust logging
0b1220fe : dev: virtio: vsock: Don't exit loop in vsock_tx_loop
7250693e : dev: virtio: vsock: Queue messages when ipc_send_msg blocks
7e5bdaaf : arch: arm64: Fix clang 18 build
5196c3d3 : make: engine.mk: Set matching RUSTFLAGS when kernel BTI is enabled
95aa8b38 : dev: virtio: vsock-rust: Use FIND_CRATE to locate lazy_static
3f6c83eb : arch: arm: Fix SMP-unsafe arch_in_int_handler workaround
734c4cc1 : dev: virtio: vsock-rust: Use FIND_CRATE to locate dependencies
158b22c2 : lib: rust_support: Use FIND_CRATE to locate dependencies
13d3f7bb : make: Filter crates under external/rust/android-crates-io
cbf84514 : dev: virtio: vsock-rust: Enable clippy
172c87b0 : lib: rust_support: Enable clippy
bc177422 : dev: virtio: vsock-rust: Don't panic when sending data to a closed vsock
9d4430d1 : lib: rust_support: don't panic when tipc_port_name is None
44eb5bdf : Fix typo in comment
c983b0b5 : Send FAR and ELR to the crash handler

+- Project: trusty/external/musl

18bcd3c8 : Add dirgroup for trusty genrule

+- Project: platform/external/truth

e107aead : Annotate the rest of the main package (basically just the Java 8 subjects) for nullness.
8ac91a63 : Document that `truth-java8-extension` is obsolete.
99af8bef : Bump org.codehaus.mojo:animal-sniffer-maven-plugin from 1.23 to 1.24 in the dependencies group
54e548c9 : Bump the dependencies group with 2 updates
2183a144 : Migrate from legacy com.google.gwt to org.gwtproject.
7e9fc7ae : Make StringSubject.matches suggest using containsMatch if matches(x) fails but containsMatch(x) would have passed.
af140d66 : Fix grammar in Javadoc comments.
afda443c : Annotate `formattingDiffsUsing` methods as supporting nullable element/value types.
ee680cba : Use JSpecify annotations in the public release.
d39e722d : Bump org.apache.maven.plugins:maven-jar-plugin from 3.4.1 to 3.4.2 in the dependencies group
d58e133e : Bump actions/checkout from 4.1.6 to 4.1.7 in the github-actions group
6380af74 : Remove @J2ktIncompatible from TruthJUnit
35848d4c : Bump the dependencies group with 2 updates
52745864 : Minor grammar correction in comment
cef5d458 : Bump com.google.protobuf:protobuf-java from 4.27.0 to 4.27.1 in the dependencies group across 1 directory
95cbdff3 : Group all dependabot updates together in the same commit to avoid merge conflicts.
b4aba29b : Bump com.google.errorprone:error_prone_annotations from 2.27.1 to 2.28.0
0ed374d4 : Bump org.checkerframework:checker-qual from 3.43.0 to 3.44.0
22732f2d : Bump org.apache.maven.plugins:maven-javadoc-plugin from 3.6.3 to 3.7.0
584a33b6 : Bump auto-value.version from 1.10.4 to 1.11.0
e1d4e242 : Bump Guava to 33.2.1.
df321713 : Bump org.apache.maven.plugins:maven-enforcer-plugin from 3.4.1 to 3.5.0
7a1c72f6 : Use `AssertionError(String, Throwable)` instead of supplying the cause later.
eea7ac09 : Bump com.google.protobuf:protobuf-java from 4.26.1 to 4.27.0
7d3ee0a2 : Bump org.codehaus.mojo:build-helper-maven-plugin from 3.5.0 to 3.6.0
ab99ff83 : Bump actions/checkout from 4.1.5 to 4.1.6
75a84c2e : Recommend `assertThrows` and `assertFailsWith`.
0a2339ba : Bump actions/checkout from 4.1.4 to 4.1.5
ecee7248 : Bump Guava to 33.2.0.
70481497 : Bump com.google.errorprone:error_prone_annotations from 2.27.0 to 2.27.1
a4508d3a : Bump org.checkerframework:checker-qual from 3.42.0 to 3.43.0
10f47881 : Bump com.google.errorprone:error_prone_annotations from 2.26.1 to 2.27.0
1bceeb19 : Bump actions/checkout from 4.1.3 to 4.1.4
acd17972 : Suppress or work around false-positive errors from the forthcoming version of our nullness checker.
fd6a3132 : Bump org.apache.maven.plugins:maven-gpg-plugin from 3.2.3 to 3.2.4
19ac557e : Bump org.apache.maven.plugins:maven-jar-plugin from 3.4.0 to 3.4.1
2b370c19 : Bump actions/checkout from 4.1.2 to 4.1.3
59e7a506 : Deprecate `Subject.Factory` methods for Java 8 types.
b74edadf : Remove a confusing comment.
85772be2 : Bump org.apache.maven.plugins:maven-jar-plugin from 3.3.0 to 3.4.0
23171822 : Bump org.apache.maven.plugins:maven-gpg-plugin from 3.2.2 to 3.2.3
399821e2 : Remove stray references to `Truth8`.
3c90468a : Bump org.apache.maven.plugins:maven-source-plugin from 3.3.0 to 3.3.1
3f1e6669 : Bump com.google.protobuf:protobuf-java from 4.26.0 to 4.26.1
2fd55554 : Bump org.apache.maven.plugins:maven-gpg-plugin from 3.2.0 to 3.2.2
db29011f : Bump org.ow2.asm:asm from 9.6 to 9.7
68c990e5 : Bump actions/cache from 4.0.1 to 4.0.2
6f1c2464 : Bump org.apache.maven.plugins:maven-compiler-plugin from 3.12.1 to 3.13.0
e6963aac : Migrate off a deprecated `TextFormat` API, and upgrade to a protobuf version that removes it.
09b9f9f5 : Bump actions/setup-java from 4.2.0 to 4.2.1
edbe9c1c : Bump actions/setup-java from 4.1.0 to 4.2.0
a0dd5852 : Bump actions/checkout from 4.1.1 to 4.1.2
af5a2d0c : Bump Guava to 33.1.0.
511d3b36 : Bump com.google.errorprone:error_prone_annotations from 2.25.0 to 2.26.1
1f0fdcc5 : Bump org.apache.maven.plugins:maven-gpg-plugin from 3.1.0 to 3.2.0
2bc070aa : Bump actions/cache from 4.0.0 to 4.0.1
213a1569 : Migrate tests off `Truth8`.
e3b43549 : Enable a few more Guava Primitives tests for J2KT
ae78f4a0 : Bump actions/setup-java from 4.0.0 to 4.1.0
996a844c : Remove more copies of a workaround for an ancient Android bug.
a43223ef : Suppress `TruthSelfEquals` violations in Truth.
559d6360 : Suppress `NullableOptional`, as we already do in, e.g., `Truth.assertThat(Optional)`.
3efe3532 : Automated Code Change
5efd53f1 : Change `assertThat(array)` to allow arrays of non-nullable elements
bbd8d121 : Bump com.google.errorprone:error_prone_annotations from 2.24.1 to 2.25.0
c2439618 : Remove `@J2ktIncompatible` from `StringSubject#matches`
f1fd0cf7 : Bump com.google.protobuf:protobuf-java from 3.25.2 to 3.25.3
a6d312e0 : Document more about how and why to migrate off `Truth8`.
1e9d4d86 : Update docs to reflect that the Java 8 assertions have "moved" to the main `Truth` class.
45782bd0 : Remove temporary type parameters.
b5cd4a0a : Remove workaround for ancient Android bug.
1f81827f : Copy `Truth8.assertThat` overloads for `Path` and `OptionalLong` to the main `Truth` class.
9be8e774 : Copy remaining `Truth8.assertThat` overloads to the main `Truth` class—except `Path` and `OptionalLong`.
b02a6583 : Migrate most usages of `Truth8.assertThat` to equivalent usages of `Truth.assertThat`, and qualify others.
09993692 : Automated Code Change
7c65fc61 : Make it possible to write `expect.that(optionalInt).isPresent()`, `assertWithMessage(...).that(optionalInt).isPresent()`, etc., including for other types besides `OptionalInt`.
87b371df : Bump styfle/cancel-workflow-action from 0.12.0 to 0.12.1
93b4d937 : Add `@since` tags for the first batch of Java-8-related APIs.
78d27dd4 : Remove stale suppressions.
7be930d2 : Bump actions/cache from 3.3.3 to 4.0.0
16db780c : Make "value of" lines work with `StreamSubject`.
37fd8bea : Copy `Truth8.assertThat` overloads for `Optional` and `Stream` to the main `Truth` class.
ca7e8f4c : Make it possible to write `expect.that(optional).isPresent()`, `assertWithMessage(...).that(optional).isPresent()`, etc., including for `Stream` as well as `Optional`.
f8ecaec6 : Prepare `StreamSubject` for adding `Truth.assertThat(Stream)`.
795a9cf1 : Bump actions/cache from 3.3.2 to 3.3.3
7dab78f4 : Bump com.google.protobuf:protobuf-java from 3.25.1 to 3.25.2
eb0426eb : Move `truth-java8-extension` classes into the main `truth` artifact.
e47ee277 : Bump org.apache.maven.plugins:maven-surefire-plugin from 3.2.3 to 3.2.5
e63bac0c : Bump com.google.errorprone:error_prone_annotations from 2.24.0 to 2.24.1
dee0a559 : Bump org.apache.maven.plugins:maven-compiler-plugin from 3.12.0 to 3.12.1
e2cee41f : Bump com.google.errorprone:error_prone_annotations from 2.23.0 to 2.24.0
61d7afb1 : Bump Guava to 33.0.0.
cb09b474 : Bump org.apache.maven.plugins:maven-compiler-plugin from 3.11.0 to 3.12.0
5da5e3ff : Bump org.checkerframework:checker-qual from 3.41.0 to 3.42.0
fde66327 : Make our nullness checking work with an Android bootclasspath.
3e125c70 : Bump org.apache.maven.plugins:maven-surefire-plugin from 3.2.2 to 3.2.3
6464cb5c : Add `isWithin().of()` support to `IntegerSubject`.
91f4bdc7 : Remove getSuperclass() from the j2kt API, as it's not supported and ideally we'd see that at build time.
d532e91f : Bump org.checkerframework:checker-qual from 3.40.0 to 3.41.0
04fddbb6 : Bump org.apache.maven.plugins:maven-javadoc-plugin from 3.6.2 to 3.6.3
abc94025 : Bump actions/setup-java from 3.13.0 to 4.0.0
0e99a271 : Add `isWithin().of()` support to `LongSubject`.
6376c391 : Bump org.codehaus.mojo:build-helper-maven-plugin from 3.4.0 to 3.5.0
c3696277 : Bump org.apache.maven.plugins:maven-project-info-reports-plugin from 3.4.5 to 3.5.0
008373c9 : Check compatibility against the Android SDK (including [always-desugared APIs](https://github.com/open-toast/gummy-bears#android) but still not opt-in library desugaring) instead of the JDK.
04358ef1 : Bump com.google.protobuf:protobuf-java from 3.25.0 to 3.25.1
17ebbfe8 : Remove unnecessary nullness suppression.
3841f13a : Bump org.apache.maven.plugins:maven-surefire-plugin from 3.2.1 to 3.2.2
9e37a015 : Bump org.apache.maven.plugins:maven-javadoc-plugin from 3.6.0 to 3.6.2
c73fc2bc : Bump com.google.protobuf:protobuf-java from 3.24.4 to 3.25.0
ed1ac263 : Bump org.checkerframework:checker-qual from 3.39.0 to 3.40.0
a68162f3 : Include information about method parameters in class file.
8bd3ef61 : Support deep comparison of unpacked Any messages in FieldNumberTree.
a12d848c : Bump org.apache.maven.plugins:maven-surefire-plugin from 3.1.2 to 3.2.1
de69497c : Bump com.google.errorprone:error_prone_annotations from 2.22.0 to 2.23.0
2a0b0f63 : Bump actions/checkout from 4.1.0 to 4.1.1
57321e90 : Bump Guava to 32.1.3.
7d13f006 : Bump com.google.protobuf:protobuf-java from 3.24.3 to 3.24.4
c1a92ce5 : Bump styfle/cancel-workflow-action from 0.11.0 to 0.12.0
263e13e2 : Bump org.checkerframework:checker-qual from 3.38.0 to 3.39.0
4df7a8e2 : Bump org.ow2.asm:asm from 9.5 to 9.6
cf142d52 : Bump actions/checkout from 4.0.0 to 4.1.0
42427615 : Internal Code Change
f60d3f54 : Bump com.google.errorprone:error_prone_annotations from 2.21.1 to 2.22.0
848f2c65 : Bump actions/setup-java from 3.12.0 to 3.13.0
08dcd731 : Bump org.apache.maven.plugins:maven-javadoc-plugin from 3.5.0 to 3.6.0
830ff3c4 : Bump auto-value.version from 1.10.3 to 1.10.4
c62f99e2 : Bump org.apache.maven.plugins:maven-enforcer-plugin from 3.4.0 to 3.4.1
11450ff2 : Bump actions/cache from 3.3.1 to 3.3.2
35a97eb9 : Bump com.google.protobuf:protobuf-java from 3.24.2 to 3.24.3
38d49c4a : Bump actions/checkout from 3.6.0 to 4.0.0
79229a4c : Bump org.checkerframework:checker-qual from 3.37.0 to 3.38.0
49757f9e : Bump com.google.protobuf:protobuf-java from 3.24.1 to 3.24.2
5ebc379d : Bump actions/checkout from 3.5.3 to 3.6.0
de1b6645 : Bump org.apache.maven.plugins:maven-enforcer-plugin from 3.3.0 to 3.4.0
5a67dadf : Bump com.google.protobuf:protobuf-java from 3.24.0 to 3.24.1
e655ad6e : Bump auto-value.version from 1.10.2 to 1.10.3
5575a5d1 : Bump com.google.errorprone:error_prone_annotations from 2.20.0 to 2.21.1
a187e446 : Bump com.google.protobuf:protobuf-java from 3.23.4 to 3.24.0
22e20114 : Bump org.checkerframework:checker-qual from 3.36.0 to 3.37.0
d8863330 : Bump Guava to 32.1.2.
438c7fa4 : Bump actions/setup-java from 3.11.0 to 3.12.0
a8b507fc : Bump a bunch of deps at once.
18571218 : Add missed nullability for IterableSubject array elements
3f00bfef : Bump Guava to 32.1.1.
151367a9 : Bump guava-gwt from 32.0.0-jre to 32.0.1-jre
3e246851 : Bump error_prone_annotations from 2.19.1 to 2.20.0

+- Project: platform/external/ublksrv

f4d7f5c : Third-Party Import of: https://github.com/ublk-org/ublksrv
a7b0743 : UPSTREAM: libublksrv: fix eventfd write so we always write 8-bytes
171b1c0 : UPSTREAM: libublksrv: define _GNU_SOURCE only if it is not already defined
cf5f668 : Initial empty repository
1d76a59 : Makefile: fix clang compilation
1f1e327 : ublksrv_tgt: support UBLK_U_CMD_DEL_DEV_ASYNC
be42deb : libublksrv: add UBLK_U_CMD_DEL_DEV_ASYNC
b8bc26c : tests/generic: remove recover-failed device
30f52dc : ublksrv: add one API for checking if the control device is for recovering
c54760f : tests: increase timeout
6550517 : ci: remove codeql
ec02fbd : ci: use 'gpgcheck=0' for building fedora 37 image
c4e589e : tgt_loop: send ioctl(DISCARD) for simulating block device discard
f01c509 : ublksrv_tgt: wait start_recovery in case of -EBUSY
dd4784e : tests: generic/002: avoid failure from "device isn't dead after killing queue daemon"
20f0696 : utils: add uring test code
d98ac75 : Allow eventfd to be zero
5319543 : ublksrv: add API of ublksrv_queue_unconsumed_cqes()
293a9ef : ci: pass QemuKvm=no
bac8d72 : ublksrv_tgt: don't return error if trying to delete nonexistent device
f8304e5 : tgt_loop: remove one logging
d60358e : github ci: improve ci
6d979f5 : ci: pass "Release=37"
4ed60f9 : doc: add libublk(rust)
f381077 : ublksrv_tgt: Add run_dir in __cmd_dev_del
75dd87b : ublksrv: logging typo fix
c4e5dd9 : doc: external_links: add ublk link of devconf 2023
2a77a73 : tests/generic: fix invalid shebang
67f81a1 : missing <limits.h> header
beb37d3 : nbd: fix build failure in case of -O2
a7aa28b : ci: switch runner to Fedora 37
b58f63c : ublksrv_tgt: enable USER_COPY
2e7d491 : qcow2: mark nbd as not supporting USER_COPY
5cbc65d : nbd: mark nbd as not supporting USER_COPY
a9be6c1 : tgt_loop: support user copy by totally io_uring OPs
96b9579 : tgt_loop: prepare for supporting user copy
e32fee0 : ublksrv_tgt: add helper of ublk_get_sqe_pair()
5d37f19 : libublksrv: prepare for supporting user copy
0b38ee7 : libublksrv: open char device as O_NONBLOCK
b4443ff : ublksrv_tgt: add "features" command for retrieving features supported by drivers
a6e38f5 : libublksrv: support ublksrv_ctrl_get_features
811e7b5 : ci: restarting networkd during f37 booting for making 'mkosi ssh' happy
1680175 : ci: cleanup ci.yml
56c0d2c : libublk: workaround one uring issue
aa697b1 : libublk: define data.cmd_op as 'unsigned int'
b61bacb : doc: external_link: "ublk: the new and improved way to serve SPDK storage locally"
40a02cb : README: add contributing section
18de713 : README: add todo things
7f885a7 : README: format literal block and console code block
075ba39 : README: doc ublk uses in docker/container environment
e00f566 : tests: debug: add one ublk-docker test case
01fe01a : utils/ublk_chown_docker.sh: add ublk devices for interested containers
9760488 : ublksrv_tgt: add udev script for supporting dockers
40d524e : ublksrv_tgt: utils: pass more info to ublk udev script
c64a990 : ublksrv_tgt: don't send command until device is setup
3ae8355 : libublksrv: log the actual op code too
8311b94 : doc: add external link of btrfs-ublk
6dbc137 : tgt_qcow2: apply ublk_json_* helpers to cleanup write json code
ce4ed75 : tgt_nbd: apply ublk_json_* helpers to cleanup write json code
e432e85 : tgt_null: apply ublk_json_* helpers to cleanup write json code
554e5f5 : tgt_loop: apply ublk_json_* helpers to cleanup write json code
80dc0b2 : ublksrv_tgt: add four ublk_json_* helpers
b6b14bc : ublksrv_tgt: add two ublk_json_ helpers for cleaning up write to json string
53b767a : tgt_loop: add loop_setup_tgt() helper
7b71b7f : ci: fix fedora key location
01aacf3 : ublksrv_tgt: nbd: fix recover
a3937ef : ublksrv_tgt: add help for recover command
b57bd1b : libublksrv: switch to new ioctl command encoding
986fe3e : ublksrv_tgt: call ublksrv_ctrl_get_info() before sending recovery command
be37565 : ublksrv_tgt: loop: fallback to buffered IO in case of bad block size
f2c39f1 : libublksrv: remove ublksrv_dev_init_io_cmds()
74bd6e8 : doc/external_links: add one tech doc about ublk user recovery design[Chinese]
fe9102d : doc/external_links: Zero-copy I/O for ublk, three different ways
0747711 : libublksrv: doxygen doc: document ublksrv json API
df6536b : libublksrv: doxygen: document ublksrv device/queue APIs
89981f5 : libublksrv: doxygen: document ublksrv control device APIs
f8ddede : libublksrv: doxygen: document public data structure
75341fd : libublksrv: doxygen: add doc/mainpage.dox
ab98af3 : libublksrv: doxygen: add Doxyfile
4ac90fb : nbd: add back author info
9bf3305 : doc: update news of OpenAnolis: ANCK 5.10 ublk support
9af6a26 : doc: external_link: add "io_uring in Android OTA"
224a5b4 : tests: nbd: wait until nbdkit daemon is really dead
1698e37 : tests: loop: fallback to --direct-io=off if dio isn't supported
2a8d302 : tests: run 'udevadm settle' after adding/deleting device
f069845 : tests:nbd: remove tests with nbdkit over null_blk
f3d461e : qcow2: fallback to buffered io in case that dio isn't supported
f163fa4 : doc: external links: miniublk merged to blktests
93acf5a : doc: add qemu-storage-daemon link
d13047e : test: add missed 'add' command when showing parameters
b435acc : tests: qcow2: don't exit test in case of leaks error
95af129 : tests: generic/005: cover nbd recovery test
454a8f4 : nbd: allocate target data in nbd_recovery_tgt()
15c5fcd : ublksrv_tgt: print usage for help/version command
b405f47 : github ci: don't install autoconf-archive
cc23f36 : ublksrv_tgt: enable --unprivileged
e630048 : ublksrv_tgt: change default directory of pid file into /tmp/ublksrvd
f3b7e93 : ublksrv: add ublk_user_id utility
b502622 : libublksrv: return error code from ublksrv_ctrl_get_affinity
304ee3b : libublksrv: deal with userspace/kernel compatibility for getting dev info
1c33430 : libublksrv: support un-privileged ublk dev
e4e1291 : libublksrv: dump device maj/min info
58401cf : github CI: Add ' -o Acquire::Retries=3' to apt update
46877b0 : github ci: dump qemu log when cleanup test
8d469cf : tests: nbd: kill nbdkit reliably
0a382ff : tests: show test name in kmsg buffer
761ae17 : README: comment on pkg-config and libtool needed for configure
d7de548 : ublksrv_tgt: replace syslog(LOG_INFO) with ublk_log()
44755e1 : libublksrv: add ublk_log()
5674541 : ublksrv_tgt: add --debug_mask command line for adding device
0eff860 : libublksrv: replace ublksrv_printf with ublk_ctrl_dbg
e0c4c7e : libublksrv: add ublk_debug_mask for grouping debug log
81851fd : qcow2: convert more into ublk_dbg()
9d1df58 : qcow2: convert meta_log into ublk_dbg
e420702 : libublksrv & target: prepare for grouping debug log
953bad7 : libublksrv & ublksrv_tgt: replace ublksrv_log with ublk_dbg
8e5d5ea : ublksrv_tgt: improve help command line
19ef2c1 : configure.ac: provide script for figuring out version from git describe
694684e : build_with_liburing_src: suppress 'make Entering directory'
3637ea6 : github/ci: run build & codeql CI on pushing to master
f2db4b8 : ublksrv_tgt: fix build warning of warn_unused_result
6b3758b : libublksrv: fix build warnings
21bc370 : ublksrv_tgt: convert syslog(LOG_ERR) into ublk_err()
9c8e224 : libublksrv: add ublk_err() API
917fd69 : nbd: fix building failure with send
3de942d : github ci: cover --enable-debug in build job
d8e8fdc : doc/external_links: add spdk ublk & openanolis ublk links
28f292c : doc/external links: fix one typo
03ebfe6 : nbd: add README
906cd4e : ci: install nbd/nbdkit
07fc2c7 : test: nbd: add tests for ublk-nbd
7b9e059 : ublksrv_tgt: add ublk-nbd
0c0bab9 : configure.ac: add check for gnutls/sdp/Large File Support
fed2ada : tests: common: add 64k bs test
5cd6208 : qcow2: fix target depth calculation
8618aa3 : libublksrv: fix uring depth calculation
37735e9 : libublksrv: add API of ublksrv_dev_set_cq_depth/ublksrv_dev_get_cq_depth
de2fc9e : doc: add external ublk documents link
875ed06 : tests: qcow2: rename 64G qcow2 io
4a5de71 : ci: add 120min timeout
6baecc9 : test: add simple timeout
156c453 : qcow2: fix build warning of -Werror=uninitialized on aarch64
a6a4043 : qcow2: define QEMU_PACKED as __attribute__((packed))
58a6af2 : ublksrv_tgt: add helpers for supporting ublk unique tag
fb0e515 : libublksrv: add API ublksrv_queue_get_io_data()
71d2eec : ublksrv_tgt: move endian helpers into ublksrv_tgt_endian.h
a371c5c : libublksrv: fix: don't allocate io buffer for extra io slots
1a2cff7 : qcow2: add queue_to_qcow2state
8408349 : tgt_loop: don't allocate tgt_data for loop target
3ac9857 : libublksrv: add ->init_queue() and ->deinit_queue() callbacks
a316751 : configure.ac: support to enable debug switch by configure
ea0e637 : libublksrv: remove two helpers for setup zero copy buffer
e3dd19e : configure.ac: add AM_SILENT_RULES([yes])
467d62f : Simplify C++ coroutine usage
c18b162 : Fix off-by-one buffer size bug
d1f3f62 : tgt_loop.cpp: Use IOSQE_FIXED_FILE everywhere
809a1b2 : libublksrv: fix build failure
e5d5877 : github/ci: install mkosi via v14 release
c7170a8 : tests/qcow2: add two tests for covering big image test
5a9f67a : libublksrv: declare ublksrv_apply_oom_protection parameter as void
a50f9fc : libublksrv: add pad field in ublksrv_dev_data
de2e7ec : libublksrv: fix warning when -Wall is enabled
483d710 : README: document how to build libublksrv & demo independently
29becbf : ublksrv: fix gettid build failure
4c7bc07 : ublksrv: doc: add one ublk introduction slide
66c520f : ublksrv: adjust ublksrv max value for queue depth
864fd83 : libublksrv: add comments on public header
55105f5 : libublksrv: remove useless definitions
eb30b92 : libublksrv: rename include/utils.h -> include/ublksrv_utils.h
c50e47e : libublksrv: clean include headers
0148817 : libublksrv: comment on _ublksrv_dev and _ublksrv_queue
3f8f2f9 : libublksrv: mark 'struct ublksrv_queue *' as const in case API/callback parameter
df9f65d : libublksrv: mark API's parameter 'struct ublksrv_dev *dev' as const
76e0f1d : ublksrv_tgt: add ublksrv_tgt_set_io_data_size() for setting target io data size
ee542f1 : ublksrv_aio: don't expose details of 'struct ublksrv_aio_ctx'
1081792 : ublksrv_tgt: move target type implementation into ublksrv_tgt.c
f74e256 : libublksrv: hide implementation details
49087ae : libublksrv: move common utilities helpers into public header
62327db : libublksrv: move target specific definition into ublksrv_tgt.h
cc1f768 : libublksrv: pass 'const struct ublksrv_dev *' to .deinit_tgt()
64c9bed : libublksrv: hide most fields of ublksrv_dev
44e155b : ublksrv_tgt: use ublksrv_get_pidfile_fd() to get fd of pidfile
7b35731 : libublksrv: add API ublksrv_get_pidfile_fd()
aace005 : qcow2: use dev->tgt.tgt_data for storing Qcow2State instance
e1ed2f2 : libublksrv: mark all callback's "struct ublksrv_queue *" parameter as const
1e91b33 : ublksrv: expose small part info to target code
4c023ad : libublksrv: add ublksrv_queue_state() helper
002787d : libublksrv: remove ublksrv_get_iod() helper
87d9406 : libublksrv: pass 'const struct io_uring_cqe *' to target code
fed9fe3 : libublksrv: pass ' const struct ublk_io_data *' to target callback
0e9a4b8 : libublksrv: pass "struct ublk_io_data *" to target callbacks
a58faf0 : libublksrv: move target private data into one specific data structure
ca9b0c6 : ublksrv_tgt & demo: retrieve control device via ublksrv_get_ctrl_dev
bd5d4c7 : libublksrv: add API of ublksrv_get_ctrl_dev()
f6bbb16 : ublksrv_tgt: convert to ublksrv_ctrl_get_recovery_jbuf() & ublksrv_ctrl_prep_recovery
b157024 : libublksrv: add two APIs for supporting user recovery
0e3f35f : ublksrv: convert to ublksrv_ctrl_get_dev_info() & ublksrv_ctrl_get_run_dir()
c01364e : libublksrv: add two APIs of ublksrv_ctrl_get_dev_info() & ublksrv_ctrl_get_run_dir()
10fb985 : ublksrv_tgt: improve & fix user recovery
d9366d0 : ublksrv: don't track inflight target IO count
aed2232 : libublksrv: use io_uring_sq_ready() to decide if there is in-flight IOs
c055e63 : workflows: run ci on branch of "next"
ceee34a : configure.ac: ship one good ax_pthread.m4
f68e005 : Enable CodeQL scanning for repo
81a9709 : libublksrv: fix potential overflowing call to snprintf
17f652d : Add basic GitHub-hosted CI workflow
9f29847 : Enable more warnings on C++ code
8130fb4 : configure.ac: fix configure warning
32c1d7c : Remove unused variables, labels, functions
6653033 : qcow2: Fix warning about missing virtual destructor
6bfab5a : Update gitignore with test output
33d2a8e : Fix printf-style format strings
a83e835 : Usability + correctness tweaks for failure paths
70641d0 : tests: generic/004: let generic/004 cover recovery test only
50834aa : tests: generic/005: cover regular recovery test on all targets
da494ac : tests: test user recovery feature
cf54e50 : ublksrv_tgt: add user recovery support
902aa22 : libublksrv: add ->recovery_tgt() for supporting user recovery feature
84c78d1 : libublksrv: add UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY support
491b82c : libublksrv: add support for reading target's str/ulong value from json buf
1fbc00f : Rearrange comment lines at start of scripts
e0ad81c : tests: show 'modprobe -r ublk_drv' if driver isn't loaded
15a8c4f : tests: don't always test on /dev/ublkb0
649a504 : tests: cleanup __create_ublk_dev & __remove_ublk_dev
f4fda48 : tests: generic/003: cover both loop and qcow2 target
ae489f3 : tests: qcow2: rename _create_loop_backing_file as _create_loop_image
b597c04 : tests: loop: pass "data" to _create_loop_backing_file
fcdca77 : tests: loop: add helper of _remove_loop_image for removing loop image
4c175d0 : qcow2: add helper of _remove_qcow2_image for removing qcow2 image
4d4e0a6 : tests: qcow2/040: don't indent for test description
ee704ea : tests: qcow2: use _create_qcow2_image for creating qcow2 images
d8781f1 : tests: qcow2: unify output for non-ublk device
65d4f87 : tests: loop/003: unify output for non-ublk device
d42f0be : libublksrv: change default queue depth to 128
5832b5f : tests/run_test.sh: add qcow2 into T=all
330c17f : tests: generic/002: remove '-f' from $T_TYPE_PARAMS
7827cc8 : tests: generic: show ublk command line
4b5a523 : tests: pass type and queues via $T_TYPE_PARAMS too
83a18c2 : tests: simplify creating test device
46f6516 : ublksrv_tgt: add ublk_assert()
8bb1f39 : README: add debug section
b79c25b : qcow2: fail IO if the l2 mapping points to compressed data
d35cd63 : qcow2: fail if it is one image with snapshot
ef4c356 : ublksrv_tgt: improve output for 'ublk help'
ca8baff : tests: create test dir if it doesn't exist
fb80a7f : libublksrv: update uapi header with linux kernel
7fd9632 : tests: define loop image size as one variable
bf13e47 : libublksrv: set cpus temp buffer size as 4096
6b51ca4 : tests: dump fio cpu utilization info
50ba1b9 : tests: allow debug/test_dev to run on any specified device
65c2733 : libublksrv: use io_uring_register_ring_fd
e5025d4 : tests: common: dump io jobs
dfad54e : tests: common: pass --gtod_reduce=1 to fio
dce6d1d : libublksrv: set IORING_SETUP_COOP_TASKRUN for ublksrv's io_uring
37cb8f0 : qcow2: allocate l2 slices according to device size
e9be141 : qcow2: add slice_cache::shrink() method
436a58d : qcow2: move methods for flushing meta into qcow2_flush_meta.cpp
87398e3 : qcow2: lrucache: add remove_last() method
3d538b7 : libublksrv: add ->idle_fn() callback
618afee : libublksrv: cleanup idle handling
b90b689 : tests: qcow2: add fs io with verify on ublk-qcow2
386519b : qcow2: set bound iowq max workers as 1
1662b83 : ublksrv: ABI change: allow target to specify iowq max workers
ae8f2b8 : tests: qcow2: define QCOW2_IMG_SZ
dce2dd8 : qcow2: avoid to leak waiters because of exception
a41ad6d : qcow2: fix run_flush()
3c6c2a0 : qcow2: cache freed slice for re-allocation
e2ec619 : qcow2: add reset() method for two slice classes
46d69f6 : qcow2: export lru list for iterating over lru cache
dbf391e : qcow2: remove Qcow2RefcountBlock::entries_order()
8edb46a : qcow2: move wait_clusters() into Qcow2SliceMeta class
6e6aa2d : qcow2: unify virt_offset() for l2 & refcount block slice
8bd8d4f : qcow2: add qcow2_meta.h and qcow2_common.h
d6dd72f : qcow2: don't inline __find_slice for both Qcow2ClusterAllocator and Qcow2ClusterMapping
c4995da : qcow2: replace gettimeofday with std::chrono::system_clock::now()
9aa1ab8 : build_with_liburing_src: add comment about how to build liburing
eb020e9 : README: improve output
c1a284f : README: add qcow2 usage
56f9fc6 : README: convert to .rst
a308c52 : tests/qcow2/040: remove the temp test image
ac43f04 : qcow2: fix get_refcount_table_max_size
056333c : tgt_loop: use sync_file_range to handle flush request
ee79e48 : tgt_loop: remove aio
26a5bf3 : tests/debug/test_dev: add one manually run test for running fio on specified device
7d5d013 : tests: don't run any test not named as a number in group testing
2f95aea : qcow2: fix co_wait build failure with gcc 12
0990a95 : qcow2: STATUS: add qcow2 MQ support in todo list
ad5ee5d : ublksrv_tgt: fix potential buffer overflow when printing usage in help
a340b56 : README: cover qcow2 as GPL-2.0
8ef1fa4 : qcow2: rename STATUS as STATUS.rst
9faabbe : [PATCH] qcow2: add ublk-qcow2 target/backend
a4059d4 : tests: allow to pass image size to test
678dcc3 : tests: allow user to pass test dir from command line
5bffa97 : libublksrv: ABI change: reserve space for public structure
e5e11b0 : libublksrv: pass tag to co_io_job_submit_and_wait()
2efccc3 : libublksrv: ABI change: allow target/backend to reserve tags for extra IOs
e91f918 : Revert: libublksrv: ABI change: allow target/backend to request extra io slot allocation
698c92c : ublksrv_tgt: fix round_up()
7bb576e : libublksrv: ABI change: re-arrange fields of ublksrv_tgt_type and cache tgt_ops in ublksrv_queue
de92d0d : libublksrv: ABI change: add callback of handle_io_background
3bb213a : libublksrv: ABI change: allow target/backend to request extra io slot allocation
cca1033 : libublksrv: ABI change: allow target/backend code to request both ublk driver & ublksrv flags
6c943f3 : tgt_loop: support buffered_io options
fa49b1c : tgt_loop: fix logging in error handling code
cc7decf : tgt_loop: run fsync() in loop_deinit_tgt()
bf54b6b : tgt_loop: improve for handling loop direct io
7d1b8ac : tests: allow to run specified tests via variable T
237ff3c : tests: remove QUEUES parameter from __run_dev_perf_no_create
e3d59ef : tests: move loop's related functions out of fio_common
c326693 : tests: remove last parameter of __run_dev_perf_no_create
4323166 : tgt_loop: handle sync io in ublksrv aio context
b6ff0f0 : include/ublksrv_aio: allow to link with c++ code
29f37d8 : ublksrv_tgt: call ublksrv_apply_oom_protection() in io process
d91627c : libublksrv: add ublksrv_apply_oom_protection()
7d0bb6e : ublksrv: configure.ac: stop building configure if AX_PTHREAD isn't defined
ad6fcff : build: Install ublksrv.pc in normal pkgconf location
7ed30e6 : build: Add a maintainer rule for checking EXTRA_DIST
02becbc : build: Add EXTRA_DIST rules as necessary
d7d7a83 : build: Install ublk in sbin, don't install demos
f00d1e2 : include: Install ublksrv_aio.h
cfc561a : tgt_loop: set correct block size if backing file is block device
9305467 : ublksrv: replace stopping/idle with state
9f278e8 : demo_event: support IO handling via io_uring in aio context
866daf9 : demo_event: support loop target via libublksrv_aio
7fc4e78 : demo_event: switch to ublksrv aio
4b5aa00 : libublksrv aio: submit aio requests at batch
a40850b : ublksrv: add interfaces to offload IO in another context
507c92e : ublksrv: export ublksrv_get_queue
9a496a8 : README: document autoconf-archive is required for building ublksrv
b082be4 : build: Add basic autotools environment
4d6fe30 : demo: Fix error reporting
fb0d90b : demo: Remove unnecessary variable sized static scoped buffers
0bed924 : build: Add a pkgconfig file
31692e4 : .gitignore: Ignore some built files
85a3673 : lib/ublksrv: log errors for ublksrv_queue_init()
3152bdd : include/ublksrv.h: add missing header <limits.h>
a1da52d : ublksrv: tests: add loop test over block
5a9261d : libublksrv: set sqe->cmd_op correctly
2e1050c : README: fix references
ab19288 : Makefile: compile c++ files in lib/ with c++11
bcee66e : README: cleanup
47eec63 : tests: cleanup test script
eb3eb1d : tests: not return double quotes from __ublk_loop_backing_file()
aade794 : tests: fio_common: test against non-sparse file
61291e0 : tests: let generic test cover NEED_GET_DATA or not
0202b92 : tests: add tests for UBLK_F_NEED_GET_DATA
53416f4 : demo_event: one option to use NEED_GET_DATA feature
0f856b5 : demo_null: one option to use NEED_GET_DATA feature
f8503e0 : ublksrv_tgt: one option to use NEED_GET_DATA feature
f74a7e8 : ublksrv: add new IO flag UBLKSRV_NEED_GET_DATA
5634cd1 : demo_*: increase json buffer into 4k
4e60eb5 : demo_*.c: delete device during exiting
f601ed2 : ublksrv: fix set parameter of io_opt_shift/io_min_shift
3f60f60 : tgt_loop: don't enable discard if the underlying block device doesn't support it
ed1f999 : tests: fio_common: show loop backing file
90196e4 : ublksrv: add -v option for list command
822e0f5 : libublksrv: remove dev_blocks parameter from ublksrv_ctrl_start_dev
f064261 : ublksrv_tgt: set disk basic parameter before starting device
5d1fbc6 : ublksrv_tgt: wait until all queues initialization is done
f46935b : demo_null: use SET_PARAMS command to set device basic info
0ad82c3 : ublksrv_tgt: not pass jbuf size to ublksrv_tgt_store_dev_data
be77b88 : ublksrv: dump block info from ublksrv_json_read_params()
7d9833c : libublksrv: add API of ublksrv_json_get_length
8196582 : libublksrv: add APIs for read/write/dump params via json
d2d5870 : ublksrv: use json lib helper to implement dev_info deserialization & serialization
06bc2bd : ublksrv: update with new ublksrv_ctrl_dev_info
a81c9cf : libublksrv: add two APIs for get/set parameters
19b6f01 : ublksrv: add GET_PARAMS/SET_PARAMS command UAPI definition
602250c : ublksrv_tgt: keep parent's cwd for ublksrv daemon
56d3187 : Add missing SPDX license identifiers
36777d4 : README: license: change to dual licensed LGPL/GPL and MIT
514ed3e : LICENCE: add MIT license text
f04f73a : COPYING.LGPL: cover libublksrv by LGPL v2.1
f210c10 : ublksrv_tgt.c: re-write start_daemon()
0e02f20 : libublksrv: rewrite create_pid_file()
304151a : ublksrv: fix declare of ublksrv_json_read_dev_info
ef441f8 : demo_event/null.c: handle failure of START_DEV
5ea7bcb : ublksrv: fix build failure on gcc version 11.2.1
9991f71 : ublksrv_tgt: remove shared memory
49134c9 : ublksrv_tgt: use ublksrv_json_read_target_base_info to retrieve dev_size
080c6e4 : ublksrv: use ublksrv_json_write_target_base_info to write tgt base into json
dac9b43 : libublksrv: add ublksrv_json_[write|read]_target_base_info
c9034e5 : ublksrv_tgt: exchange data between control and io task via pid file & json
4e90456 : ublksrv: support to dump device from json buffer
c64f5b6 : libublksrv: add APIs for exchanging data by json
c5ae168 : libublksrv: add nlohmann/json.hpp
e6fa2f3 : libublksrv: close pid_file_fd until the device is deleted
28eb5fb : libublksrv: move create/remove pid file into libublksrv
c5d19c7 : ublksrv_tgt: move start_daemon into ublksrv_tgt.c
67b6a59 : ublksrv: move mprintf() into ublksrv_tgt.c
362a2dc : ublk_cmd.h: update with ublkdrv
ea0b091 : ublksrv: remove pin page during io whole time
dc69291 : Makefile: cleanup Makefile mess
9de36e4 : libublksrv: support to discard pages mapped to io buffers
08e0c55 : libublksrv: remove 'submitted' output parameter from ublksrv_process_io
9065deb : ublksrv_cmd: use llx format specifier for __u64
a9c6a5e : Makefile: fix build dependency
0b698eb : ublksrv: log in case of failure
352dc8e : ublksrv: validate input for tgt_init
be15973 : README: fix kernel dependency typo
3dba73b : ublksrv: add comment on ->handle_event()
6436d8a : ublksrv: replace refetch test with uring_comp
655a3e1 : ublksrv_tgt: add command line for set/clear UBLK_F_URING_CMD_COMP_IN_TASK
64ce2fe : tests: loop: add two tests for covering PINPAGE & REFETCH
7990638 : ublksrv: test: not run backup test file
628f812 : ublksrv: add null/005 to test MQ with refetch
05bc6ae : ublksrv: pass PIN_PAGE parameter via global environment
9130509 : ublksrv: switch UBLK_F_* to bitmask
1c8d40b : ublksrv: switch UBLKSRV_F_* to bitmask
eb39eb4 : ublksrv: change suffix of ublksrv flags as UBLKSRV_F
729246e : tests: cover REFETCH test
4a3f777 : tests: cleanup generic tests
23efe84 : ublksrv: add command line for switching refetch
2cdc696 : ublksrv: support to handle UBLK_IO_REFETCH_REQ
968d950 : ublksrv: store ublksrv flags into dev_info.ublksrv_flags
a0dda0c : demo_null: add command line for testing pinning page & allocating buffer
9b032ec : libublksrv: allow target code to allocate/free io buffer
fab6ab2 : ublksrv: add comment on target callbacks
94f374d : tests: common: add 512K sequential io test
3647b1c : tests: generic 001 & 002: cover ublk device with pin_page as 1
f8bef40 : tests: generic/003: cover pin page mode
290c3c7 : ublksrv_tgt: test: add test to cover feature of pinning pages when handling io
3a40370 : ublksrv_tgt: support to pin userspace vm pages during handling io
103b9ec : ublksrv: kill UBLK_IO_COMMIT_REQ
6d3fd86 : Makefile: remove backup file under include/
690a392 : demo_event: read(info->efd) before sending event to ublksrv
5328e52 : demo_event: use epoll to wait read on eventfd
e137f32 : libublksrv: fix: issue event fd immediately
61cfcae : demo_event: initialize spin_lock before creating pthread
343b784 : demo_event: remove one useless header
df4c28a : Makefile: remove demo progs & objects when running make clean
385839d : libublksrv: clear sqe->rw_flags in ublksrv_queue_io_cmd
a40e02f : demo_event.c: one example about how to use eventfd
585a647 : libublksrv: add eventfd support
5cf0184 : Makefile: handle 'make cscope' simply
00d2294 : libublksrv: allow to store one queue private data in 'struct ublksrv_queue'
cf29f8d : README: add libublk and update build steps
8ac70af : Makefile: remove hardcode liburing library/header path
98f0a68 : ublksrv: build ublk utility with static libublksrv.a
80b5788 : ublksrv: lib: build libublksrv with gcc
ccb7a2e : ublksrv: fix disk size setup during starting device
c736e93 : ublksrv: add subdirectories of lib and include
57dfc58 : ublksrv: add one demo which is built against libublksrv
3cdfe83 : ublksrv: add 'dev_blocks' parameter to ublksrv_ctrl_start_dev
2fda154 : ublksrv: fix ublksrv_set_sched_affinity
fa1eb43 : ublksrv: define ublksrv_dev_data.tgt_type as const
4c9423a : ublksrv: pass 'ublksrv_dev' to ->init_tgt & ->deinit_tgt
2fd93c3 : ublksrv: allow user to pass ublksrv_tgt_type directly
96422cc : ublksrv: support to build libublksrv library
7602b23 : ublksrv: renaming API
e667cd5 : ublksrv: don't include ublksrv.h directly
f54c3c8 : ublksrv: add API of ublksrv_complete_io
18b35f7 : ublksrv: move ublksrv_io_done() into ublksrv_priv.h
6d5f408 : ublksrv: cleanup dependency on utils.h
7dd7e1f : ublksrv: move private definition into ublksrv_priv.h
616d6e5 : ublksrv: move target related definition into ublksrv_tgt.h
8ef8eb4 : ublksrv: move public data structure into ublksrv.h
7138fb0 : ublksrv: move process/pthread creation into ublksrv_tgt.c
44e8bf5 : ublksrv: add ublksrv_process_io() for handling io
7008828 : ublksrv: move more code into ublksrv_queue_init()
b47669e : ublksrv: move code for building ublk application into ublksrv_tgt.c
861fe5d : ublksrv: make 'struct ublksrv_ctrl_dev *' as const in ublksrv.c
5c284ec : ublksrv: move functions in ublksrv_tgt.c to ublksrv.c
6f07271 : ublksrv: move calling of ->init_tgt() into ublksrv_init
4396cb9 : ublksrv: add two helpers for accessing shared memory buffer
fb8be45 : ublksrv: add flag of UBLK_F_HAS_IO_DAEMON
c3a46df : ublksrv: reserve the front 128bytes of shared memory as ublksrv_ctrl_dev_info
6737153 : ublksrv: move shared memory communication data structure into ublksrv_dev
7adbc6e : ublksrv: remove ublksrv_dev_is_done
9e9f172 : ublksrv: make ublksrv_dev_init() like one API
743b6a0 : ublksrv_cmd: remove static/global variable of ctrl_fd
05ebaa1 : ublksrv: move coroutine out of generic ublksrv code
cd211b0 : ublksrv_tgt: allow ublksrv_tgt_init() to initialize with specific target type
e291538 : ublksrv_tgt: cleanup generic ublksrv_tgt code
2b27b4a : ublksrv: store queue pthread affinity in control device
e64193c : ublksrv: remove this_dev
fca0cf6 : ublksrv: cleanup ublksrv_io_handler
cc06a46 : ublk: set default queue depth as 128
fc0fc08 : ubd: rename as ublk
ebe0bff : ubdsrv: don't retrieve & touch ubd_io instance for target cqe
59ec251 : ubdsrv: reserve 16bit in cqe as target data
a8f0583 : ubdsrv: add callback of ->tgt_io_done for target code
8fc079d : ubdsrv: remove 'struct ubd_io' parameter from ->handle_io_async()
911673f : ubdsrv: pass 'const struct ubdsrv_dev *dev' to ubdsrv_get_queue()
2e73372 : ubdsrv: add ->deinit_tgt callback
623a1b7 : ubdsrv: remove 'backing_fd' from ubdsrv_tgt_info_loop
fadaae5 : ubdsrv: pass 'struct ubdsrv_dev *' to ->prepare_target
23433f9 : ubdsrv: cleanup target interface
53c353f : ubdsrv: call prctl(PR_SET_IO_FLUSHER) on io queue thread
5b7df46 : tests: fix 'make test_all'
013b29f : tgt_loop: reset ->queued_tgt_io at beginning of loop_handle_io_async
2db4795 : tgt_null: fix io result
90a86e3 : Makefile: let %.o depend on Makefile
6d877be : ubdsrv: comment on c++20 coroutine
a7a9ca5 : ubdsrv: implement ->handle_io_async with c++ 20 coroutine
edb35de : ubdsrv: add coroutine support
f4a7208 : ubdsrv: count inflight io command in ubdsrv_queue_io_cmd
0503732 : ubdsrv: add ubdsrv_queue_io_command() for queueing io command
3fb66df : ubdsrv: kill UBDSRV_IO_HANDLING
8a67f13 : tests: generic/003
968190e : ubdsrv: don't count how many sqes ready for submit
c7bb87a : ubdsrv: remove INFO()
2ad9e90 : ubdsrv: remove READ_ONCE() and WRITE_ONCE()
bc97b5f : ubdsrv: prepare for switching to g++
a0c9ca2 : ubdsrv: use -O2 optimization
1281f81 : ubdsrv: add helper of ubdsrv_get_sqe_cmd for retrieving sqe->cmd
a86b8b4 : ubdsrv: remove dep.h
549f82a : ubdsrv: remove unnecessary '#include utils.h'
66599ad : ubdsrv: remove ubdsrv_uring.h
82540e6 : ubdsrv: remove ubdsrv_uring.c
e6babf8 : ubdsrv: use io_uring_for_each_cqe to retrieve coming cqes
c1b90d8 : ubdsrv: cleanup code for setting sqe
ed27b4f : README: add comment on build/test
97a0801 : README: clarify ~5 seconds delay during shutdown or reboot
9d708f0 : ubdsrv: switch to liburing
1a65291 : tests: specify ubd dedicated temp dir
bb2071d : ubdsrv: remove bool definition
934f31c : ubdsrv: remove two parameters from ubdsrv_setup_ring
df5e3b7 : Makefile: support cscope
fe05fd8 : tests: generic/002: make it more reliable
6217b72 : tests: support to pass single test runtime via R=${R}
7236610 : tests: generic/002: check if remove is done correctly
bd9887a : tests: add run_test_all.sh
672ea33 : tests: keep at most loops as 10
9d3e9a5 : tests: add generic/002
0f4c0f3 : ubdsrv: put all tests related change into tests/Makefile
099e877 : tests/loop/100: move it into generic group
0f9cc53 : tests: improve test output
562267b : ubdsrv: add helpers for sending command to device
928c7af : ubdsrv: dump device state in 'ubd list'
d4f2d69 : ubdsrv: test: show test progress on loop/100
0356370 : ubd: tests: name loop/003 as perf test
60614db : tests: add io perf test on kernel loop
6d2e506 : ubdsrv: tests: re-organize test code
bb1913f : ubdsrv: logging queue command
d89a1d3 : ubdsrv: logging ->ubq_daemon start
53f1dc8 : ubdsrv: improve logging
c87bd76 : ubdsrv: add tests/loop_002
9f7e08a : ubdsrv_cmd: return exit status
1e901de : ubdsrv: align 'ubd list' output
721b331 : ubdsrv: pass -s to make
d5b2428 : test: split MQ test out of SQ test
2bcf7ae : ubdsrv: handle -EAGAIN for loop
d22733a : ubdsrv: add ->complete_tgt_io() callback
df70cad : ubdsrv: change type of ->handle_io_async()
00f6e76 : ubdsrv: don't store qid into user_data
1672a08 : ubdsrv: add helpers for parsing & building user_data
046ff64 : ubdsrv: simplify target io handling
29d6e18 : ubdsrv: log target io failure
49d4cf4 : ubdsrv_uring: remove unnecessary logging
540e798 : ubdsrv: count inflight target io
d51ee24 : test: add loop_001 test
ad30aad : Makefile: clean test dir
4548817 : test: prepare for supporting to parse fio perf
20b8069 : test/001: rename as test/null_001
36cf77f : ubdsrv: fix error handling when target initialization failed
3a8322d : ubdsrv: test/001: add simple sanity test
7903364 : ubdsrv: don't store tag info into ubdsrv_io_desc
75cb678 : tgt_loop: add io_uring io op into user data
06650fe : ubd_cmd: define ->result of ubdsrv_io_cmd as __s32
7e87776 : ubdsrv: allocate queue inside ->ubq_daemon context
b997cae : ubd_cmd: kill UBD_IO_ABORT_QUEUE
e2030cd : ubdsrv: fix 'ubd list' for printing flags
a552ab3 : ubdsrv: remove aborting queue
4ce553e : ubdsrv: kill ubd specific error code
af095e4 : ubdsrv: support UBD_CMD_GET_QUEUE_AFFINITY
188611f : ubdsrv: not pass dev_info from sqe->cmd
dc1e72e : ubd_cmd: remove obsolete comment
3190158 : README: update with recent progress
0db0440 : ubd_cmd: update with new ubd_cmd.h
ca2bbae : ubdsrv: dump queue info in 'ubd list'
4baf1a2 : ubdsrv: set sched affinity for each queue's pthread
d04af8c : ubdsrv: call ubdsrv_drain_fetch_commands() in ubdsrv_io_handler()
a23da8b : ubdsrv: mark pointer to 'ubdsrv_io_desc' as const
3fe230c : ubdsrv: unmap shared mm for communication
7ef7f13 : ubdsrv: allow to shutdown ubd device via sending SIGTERM
8187a08 : Revert "ubdsrv_uring: provide timeout for io_uring_enter()"
822e3d5 : ubdsrv_uring: provide timeout for io_uring_enter()
5391e81 : ubdsrv: reset q->inflight during allocating queue
84f7598 : ubdsrv: fix UAF on q->io[]
9e21f66 : ubdsrv: simplify queue allocation
a42b74f : ubdsrv: add command line for changing default nr_hw_queues and depth
ceb1788 : ubdsrv: use ubdsrv_get_queue() to retrieve queue
8814fd9 : ubdsrv: support Multiple Queue
61e6109 : ubdsrv: prepare for supporting MQ
b0e7aa5 : ubdsrv_cmd: allow UBD_CMD_ADD_DEV to return the device info
7e73e5a : ubdsrv: support UBD_IO_ABORT_QUEUE command
9669335 : ubdsrv: handle SIGTERM signal
7a6fd5e : ubdsrv: clean queue command
7cada38 : ubdsrv: avoid unnecessary check on cqe->res
30ed2b1 : ubdsrv: fix crash when referring to dev->ctrl_dev before setting it up
0aa66c0 : ubdsrv: update with for-5.19/io_uring-passthrough
6a12220 : ubdsrv: allow del command to delete all added ubd devices
cd249a3 : ubdsrv: add help command
2abb6f4 : ubdsrv: remove command of UBD_IO_GET_DATA
e8e19d4 : ubdsrv_cmd: always bypass UBD_F_NEED_GET_DATA
59c7d52 : Revert "ubdsrv: add 'task_work' command line"
4cac169 : ubdsrv: add 'task_work' command line
642a653 : ubdsrv: prepare for support to bypass GET_DATA
a92515b : ubd_cmd: prepare for supporting to complete io_uring_cmd via task_add_work
4ddeb19 : ubdsrv: allow list command to dump target info
e41d9ce : ubdsrv_cmd: replace get with list
ed70ccb : ubdsrv_tgt: move each tgt code out of ubdsrv_tgt.c
eeea444 : ubdsrv_tgt: rename ubdsrv_tgt_ops as ubdsrv_tgt_type
d12244f : ubdsrv_cmd: let tgt code parse their own arguments
29c3e65 : ubdsrv: add ubdsrv

+- Project: platform/external/unicode

eb91832 : [2nd attempt] Update emoji data to 16.0
1813a5b : Revert "Update emoji data to 16.0"
0977a2d : Update emoji data to 16.0

+- Project: platform/external/uwb

9f61c28 : Rename Reason Code "ERROR_TX_DELAY_NOT_SUPPORTED" to "ERROR_INTER_FRAME_INTERVAL_NOT_SUPPORTED"
9928be6 : Reason code addition for session stop after sts index reaches max value
46dcadc : uwb(pdl): Add aliro mac mode param
cc2f4ac : Implement RF periodic tx and stop command
fbf4917 : Revert "uwb(rust): Add uwb feature support check for unit tests"
158ee65 : Implement UWB RF Test Session Support
6cc6b60 : Implement UWB RF Test Session Support
7a83bac : uwb(rust): Add uwb feature support check for unit tests

+- Project: platform/external/v4l2_codec2

e5b5b5f : v4l2_codec2: encode: fix crash when input.buffers[0] is nullptr
e1809fa : v4l2_codec2: decode: allow calling reset() from stop state

+- Project: platform/external/vboot_reference

0d49b8fd : recovery_kernel: add signing type recovery_kernel
1f7ca823 : gpt_misc: Return uint64_t from GptGetEntrySize functions
36621031 : Reland "host/lib/flashrom: Use flashrom provided in PATH"
dbcfe4c5 : OWNERS.android: Add bernacki@google.com
26e8011f : Add configurable temporary directory path
a0f83f9f : futility: Drop futility execution logging to /tmp/futility.log
862e250e : crossystem: Make crossystem vendor_available
3246e484 : futility: updater: Increase try count from 11 to 13
2ab8888b : make_dev_ssd: add upstream cmdline flag for ptracers
3c2ef940 : Update Rust OWNERS file to include libchromeos-rs/OWNERS
c5af1fd8 : make_dev_ssd.sh: avoid page cache aliasing
38f9c255 : Revert "host/lib/flashrom: Use flashrom provided in PATH"
7d4b23f9 : futility: updater: Revise the test script
8494502d : futility: updater: Support emulation in the output mode
54be900d : futility: updater: Handle flashrom read failure in load_system_firmware
2a787558 : futility: updater: Drop `signature_id` from implementation
90f59170 : futility: updater: Add a new config 'output_only'
94d884d8 : futility: updater: Deprecate `--signature_id` by `--model`
24fd715c : host/lib/flashrom: Use flashrom provided in PATH
ac49f1ca : Build thin archives
640fe19f : host/lib/crossystem: Make CROSSYSTEM_LOCK_PATH configurable
86b42b6a : sign_android_image: calculate and store the vb meta digest
da1d153b : Move futility and cgpt to vendor partition
80955816 : futility: updater: Remove 'allow_empty_custom_label_tag' quirk
7ad2b0ab : futility: updater: Process custom label as standard models
13400d69 : futility: updater: Remove signature_id from manifest
f770c7d0 : futility: updater: Remove the legacy 'setvars.sh' manifest
ed4556ed : tests/futility: Add test cases for unmodified RO
21902629 : futility/file_type_bios.c: Skip keyblock checks if magic is invalid
f5924321 : Fix partition type check for miniOS B
83f845b3 : signing: clean up owners
dc5102f2 : signing: miniOS signing in docker.
16e6aa89 : futility: updater: Provide default DUT properties for emulation
e56f3686 : tests/futility/test_update: Fix --sys_props argument
7e2828a1 : futility: updater: cleanup: Remove duplicated comments
060efa0c : vboot: Only execute TPM clear on nonchrome FW
2fc6815b : sign_official_build: Include full loem.ini path
47658f3c : 2lib/2load_kernel: Remove unused VB2_LOAD_PARTITION_WORKBUF_BYTES
7cc2ce4c : futility: Skip printing EC RW version if non-printable
8365d546 : futility/load_fmap: Erase remaining bytes if file smaller than area
ec01126c : swap_ec_rw: Search for keyset in source tree too
b76d74dc : futility/load_fmap: use WARN() on non-critical error
f1f70f46 : 2lib: Add gbb flag to enforce CSE sync
e4977a64 : Deprecate GBB flag RUNNING_FAFT

+- Project: platform/external/virglrenderer

48be5d4d : Pin this project to C17 until/unless upstream fixes C23 compatibility.

+- Project: platform/external/virtio-media

dc89788 : Keep using old version of zerocopy for now.
1a040fa : Update type name to match the generated bindings
59d0bdf : ANDROID: BUILD.bazel: Use all_headers.
3a09164 : v0.0.5
d4d8859 : ANDROID: Add BUILD.bazel for kleaf builds
d44d1c1 : Add myself to OWNERS
c2896d9 : device: implement AsRef<[u8]> on VirtioMediaDeviceConfig
65c6638 : device/v4l2_device_proxy: call as_mut_ptr on USERPTR buffers with CAPTURE direction
81f8774 : protocol: make the MMAP command's offset parameter 32-bit
bd969ee : device: fix naming of guest address field
9e0b7fc : driver: fix naming of guest address field
719fb76 : ffmpeg: fix test_avpacket_drop
edfe947 : ffmpeg: fix tests build
c49abb0 : ffmpeg: update Cargo.lock
5cb01c6 : device/mmap: use u32 for MMAP buffer offsets
6d83d99 : device/mmap: allow overlapping buffers in the V4L2 MMAP address space
a74c560 : device/mmap: use next_multiple_of instead of equivalent operation
6b84ff4 : Third-Party Import of: https://github.com/chromeos/virtio-media Request Document: go/android3p For CL Reviewers: go/android3p#reviewing-a-cl For Build Team: go/ab-third-party-imports Bug: http://b/330728773 Original import of the code can be found at: https://googleplex-android.googlesource.com/platform/external/virtio-media/+/refs/heads/third-party-review. Security Questionnaire: http://b/330728773#comment1
d57aa75 : device/decoder: validate buffer memory type during QBUF
7ee1094 : device/decoder: handle colorspace information
bda4ab8 : device/decoder: fix out-of-date comment
dfd52b2 : v0.0.4
0757ef8 : device/decoder: set field of OUTPUT buffers to NONE
9f4375e : device: make local extension traits private
f8b7cb3 : device: init objs to zero
2f2f40c : Use official virtio-media ID 49
2b8a25c : v0.0.3
a228490 : device: fix comments indentation
5fc901c : driver: fix build against Linux >= 6.11
4705bf8 : driver: update author's email address
82207a9 : device/ffmpeg-decoder: fix the package version
13e8820 : device: reorder Cargo.toml dependencies alphabetically
2f8e809 : device/video_decoder: fix crop and compose handling
9bad601 : device: update v4l2r dependency to 0.0.4
73584d6 : device: fix typos and comment indentation
faf95da : Add a FFmpeg-based SW decoder device
de570dd : devices/video_decoder: workaround for DRC issue
3e53fea : devices/video_decoder: add streaming_state hook
8b6abf5 : device: remove requirement for AsRef and AsMut for guest memory ranges
e478f08 : v0.0.2
1845e46 : driver: add missing include of vmalloc.h
0ff19b4 : device/video_decoder: better crop rectangle handling
4e0726d : device/video_decoder: add coded size to the stream parameters
ebd730c : device/memfd: add ability to mmap buffers
86dc5a7 : device/video_decoder: add useful derives
8a0c260 : device/video_decoder: remove bytes_used method of VideoDecoderBuffer
99b4b56 : device/video_decoder: turn FrameCompleted's bytes_used into a vector
610b0cd : device: use v4l2r 0.0.3
2ec63d6 : device: Add cargo_embargo.json and Android.bp
fb0bc5f : device: use nix for memfd creation
ba18cd2 : device: fix build on 32-bit platforms
c3db155 : device: update to v4l2r ToT
067b639 : device: add first version of the stateful decoder device
f365fc5 : driver: properly set device direction
fa80277 : device: replace Memfd crate with our own implementation
33e8d28 : TRY_IT_OUT: pick latest crosvm patch
c6b51d5 : device & driver: use offset in SHM region 0 instead of guest physical addresses for MMAP buffers mapping
11a8cd8 : device/v4l2_device_proxy: always try to export the buffer when doing MMAP
a9d3e57 : device/v4l2_device_proxy: warn if process_events is called for nothing
ac6530f : device: move poller into runner structure
4dc7add : device/v4l2_device_proxy: fix clippy warning
cb98230 : decoder/v4l2_device_proxy: use proxy epoll for session polling
7c22e10 : device: update to v4l2r ToT
58ccdc6 : device/ioctl: make getter ioctl handlers take non-mutable reference to the session
429de05 : TRY_IT_OUT: add instructions for proxying a host V4L2 device
fbc4885 : driver: fix void pointer dereference warning
2ee9da7 : device: update to v4l2r ToT
721eef5 : driver: allocate userptr SG lists on the shadow buffer
6bb021a : driver: rename VIRTIO_BUF_SIZE to more descriptive name
32d2318 : device: reexport v4l2r module
b1c7fd5 : driver: use device-writable SG entry when copying back ext ctrl data from shadow buffer
d5c8e77 : driver: use device-writable SG entry when copying back buffer data from shadow buffer
b022d5e : driver: move userptr buffer SG freeing to dedicated function
d947438 : driver: move userptr buffer SG list creation to its own function
3640ad0 : driver: use correct scatterlist entry to retrieve data in wr ioctls
ac1ab86 : driver: copy data from shadow buffer in R ioctls
71302de : driver: fix constness of arguments to be written by the device
aeda42b : driver: remove obsolete TODO
a4a8780 : update to v4l2r ToT
12a9ea1 : simple_device: use V4l2Buffer to represent internal state
1c3b545 : device: use latest v4l2r ToT
7d8c6b3 : driver: fix polling
2095373 : driver: implement naive locking
0b7a0ec : device/mmap: align MMAP buffer offsets to memory page size
3b83341 : device: fix error message
73aa00e : device: add TODO item
db7c3bb : driver: remove unneeded comment
e4de8b7 : driver: simplify dqbuf
c84074d : device: use OwnedFd/BorrowedFd instead of File for MMAP buffer mappings
b47a73a : device: do not let mmap/munmap hooks write their response
4394648 : device: remove length parameter from mmap hook
359011e : update v4l2r to v0.0.2
acb4b9e : device/simple_device: use mmap manager offset allocator
1efe2ea : device/mmap: revamp MMAP manager
3d7579f : device/v4l2_device_proxy: fix clippy errors
aef79cb : device/mmap: fix typo
3f83604 : device: make ReadDescriptorChain and WriteDescriptorChain traits public
0822cf9 : ioctl: pass validated queue to s_fmt and try_fmt hooks
d365c22 : Initial empty repository
cf68ec0 : TRY_IT_OUT: remove command-line prompt on guest commands
3e3dd30 : README: fix link to TRY_IT_OUT document
5aca8f3 : Add TRY_IT_OUT.md tutorial that explains how to quickly try the project
d6d7e74 : Add .prettierrc rules and reformat README accordingly
9be4c56 : device: use published 0.0.1 version of v4l2r
3ce51cc : lib: fix and improve documentation
92d29a3 : device/simple_device: remove unwraps
5ffd8d2 : device: add method to return the device a runner has been created from
80803d6 : device: mmap: use From implementation as constructor for VirtioMediaDeviceRunner
7ed1f41 : device: add method to retrieve the mapper a manager has been constructed from
4440bab : device: use From implementation as constructor for MmapMappingManager
afe4440 : v4l2_device_proxy: use MMAP mappings manager
e143c44 : simple_device: use MMAP mappings manager
3f5ff70 : device: add MMAP buffer range manager
6d5b75f : device: make devices own the mapper
21890a9 : device/v4l2_device_proxy: make session id non-public
56ef07d : device: fix parameter name
57a59d1 : use the guest physical address to unmap a mapped MMAP buffer
ff9d063 : device: move mapper argument
d88372d : device: fix clippy warnings
d5a9d7a : device: move impl parameters constraints to where clause
6585d4a : device: move ioctl handler generic parameter to associated type
0460dba : add initial host-side device support
bae3a06 : driver: add descriptions and copyright headers to each file
5453cea : driver: fix header guard
cad66cf : driver: remove unneeded error messages
76f351f : driver: better handling of DONE flag
81cb70e : driver: properly store required buffer information for PREPARE and QBUF
365b265 : driver: clear buffer flags alongside queue
426ef6f : move Linux driver to its own directory
b991e74 : README: add precision about command and event queues
f30168b : driver: replace occurrences of 'stream' with 'session'
f99ee65 : Add CONTRIBUTING.md
a178688 : Dual-license BSD-3-Clause and GPL-2
cefd4c4 : driver: factorize VIRTIO_MEDIA_IOCTL_CODE use
8559bc3 : driver: move ioctl-related code into its own source file
d16fbc7 : driver: use macros to define simple ioctls
3b42c1f : driver: implement VIDIOC_G_CTRL and VIDIOC_S_CTRL
9ad5c1e : README: fix grammar
7cf99c3 : README: add precision about ioctls payload layout
f055bd0 : README: specify how virtio objects can be imported and exported
9a51cd5 : README: add information about device ID
37c01a8 : driver: fix clang warning
cc37ff7 : README: fix markdown lints
83e5a8a : driver: remove unneeded include
484db35 : driver: use dedicated SG lists for USERPTR buffers
4858177 : driver: rename enum_input to the more consistent enuminput
a9a3e7a : driver: remove obsolete TODO
07ab6da : driver: rename descriptor.[ch] to scatterlist_filler.[ch]
12ce286 : driver: fix warnings
f097377 : driver: consolidate queue reset code in same function
049341f : driver: query MPLANE queues for poll if MPLANE is in use
f1b9fd7 : driver: properly return EPIPE after CAPTURE buffer marked with LAST flag
8d34262 : driver: rename input queue to capture queue
f364e65 : Initial commit

+- Project: platform/external/vogar

0b1290c : Vogar: Remove desugar completely
f55021b : Use prebuilt desugar.jar in vogar

+- Project: platform/external/vulkan-headers

fd2d1a5 : Fix vulkan_enums.hpp "vk::Bool32"
d93c1db : Squash merge of 1.4 vulkan headers

+- Project: platform/external/wayland-protocols

e41032b : Rename android.CommonPropertiesProvider to android.CommonModuleInfo.
34d4e2b : Update the context used in the image interface
df5d17e : Add a few module visiting methods that return ModuleProxy.
3903ba5 : Remove MutatorHandle.Parallel()
ad4f475 : Remove VisitDirectDepsBlueprint

+- Project: platform/external/webrtc

1a5a37fc1a : Use canonical ABSL in WebRTC.

+- Project: platform/external/wpa_supplicant_8

e1dbf792 : Pare down cflags and srcs for mainline supplicant
9ed1c79d : Fix CONFIG_NO_WPA compile/link errors
64297250 : Wrap a chunk in CONFIG_P2P
53c6632d : Wrap some functions in CONFIG_DPP
eb154fd0 : Change the publish/subscribe return types to void in the supplicant implementation.
d41879a2 : Add overlay support for hostapd
58ac92d5 : Add skeleton USD implementation in the vendor supplicant.
10f5ba6e : Replace the local WifiChannelWidth enum with the new AIDL definition.
8d291eae : wifi: Supports to remove link for MLO SAP
a151c74a : Support for BLE assisted discovery and pairing
0bc7872e : Remove wpa_supplicant_private_lib_event from the mainline supplicant build rule.
c98b77e4 : Add fuzzer for the mainline supplicant service.
4f212432 : Remove sae_pmkid_in_assoc=1 from configuration file
a0679610 : Implement addUsdInterface and removeUsdInterface in the mainline supplicant.
e455743a : Move some config file logic to a shared utility class.
ae30dc95 : Initialize the rc and config files for the mainline supplicant.
8c60fc29 : Add initial implementation for the mainline supplicant service.
2ba9e6c4 : Prepare dependencies for the mainline supplicant AIDL implementation.
3c401c53 : Move existing supplicant AIDL code to the /aidl/vendor subdirectory.
2fc9373e : Add build target for the mainline supplicant.
9dfae3ba : wifi: Supports Soft AP client isolation configuration
08f249ad : Convert wpa_supplicant related module to soong
5031a049 : Add Unit Tests For Hostapd AIDL
de0e6bf1 : Support for Wi-Fi Direct R2
c8ea613b : ANDROID: add SCAN_STARTED and SCAN_RESULTS events...
808564ba : Add reporting for disconnect reason
b820c2bd : ANDROID: filter USD events for mainline supplicant
399c6309 : wifi: Support MLO SAP
532c598e : Make bgscan and PASN configurable in Android.bp
cd57104f : Update wifi AIDL versions
97e30ce2 : libpasn: Add Android.bp changes to generate pasn library
e008ff31 : Enable CONFIG_PASN and add pasn files to Android.bp
84db0e2a : Create prebuilts for wpa_supplicant
b24d63e9 : Support for P2P Compatibility Mode
b1297146 : Allow configuration of the CONFIG_NO_ROAMING flag in Android.bp
a546d427 : Allow specifying a custom supplicant config file
c0f5d411 : [wpa_supplicant] cumilative patch from commit a8655be0b

+- Project: platform/external/wuffs-mirror-release-c

4202ffa : Edit OWNERS file

+- Project: platform/external/wycheproof

927def5 : Clean up keystore owners
efae32b : Skip JsonMacTests for older devices

+- Project: platform/external/xz-java

d589f23 : Add apex_available for the library

+- Project: platform/external/zlib

5247ad6 : Enable all current risc-v optimizations.

+- Project: platform/frameworks/av

99c85e8a50 : Spatializer load doesn't fail for spatialized masks
5b882d9e98 : Un-require RECORD_OP for VOICE_DOWNLINK
32777e4930 : MMapTrack: Fix nullptr crash when ATRACE audio is enabled
e20145d3a8 : AudioSystem: onServiceDied check service match
c4f7d87fa4 : AudioSystem: Ensure no nullptr access
d6346073e6 : audioserver: do not prefetch audioserver services
3ecc9eb6a5 : libaudiohal: Enforce serialization of calls into IModule
f94d3eaeec : Mpeg4Writer: Synchronize multi-track gainmap muxing
9b70ca248b : AAudio: Avoid scaling capacity with sample rate
0f58671cdb : mediautils::SetviceSingleton : fix missing onServiceDied notification
f6a71618a7 : Update mStandby flag in RecordThread
ac860342e7 : audio: only allow AudioMix with same format on direct output.
8d06fce9ed : Incorporate the API council feedback for camera.
2788e01589 : APM: Add a test to prevent regressions like one from aosp/3020124
bada1f5e01 : Fix parsing of legacy engine config XML files
ef3ee0579b : CodecCapabilities NDK: Introduce AMediaCodecStore and AMediaCodecInfo.
4da93dceea : Revert "CodecCapabilities NDK: Introduce AMediaCodecStore and AMediaCodecInfo."
d97db43ba7 : APV: implement planar YUV420-8 support and handle empty inputs
3f0b32f111 : Revert^2 "Update bluetooth perm checking to perm cache"
5b9dfbdb8e : C2AIDL: Provide rendered frame history
258536e549 : C2AIDL: unblock allocate from C2IGBA on stop()/reelase()
638fcaa331 : Allow to use audio attributes tags if clients can use dynamic policy.
64a6e0a8cc : APM: Add a test to prevent regressions like one from aosp/3020124
d88caa41af : CodecCapabilities NDK: Introduce AMediaCodecStore and AMediaCodecInfo.
500b67f48a : Check CAPTURE_MEDIA_OUTPUT permission when it is required.
23fb5fd4fb : Revert "Update bluetooth perm checking to perm cache"
30adca5de9 : AudioSystem: fix missing audio port callbacks after audioserver restart
7f7344aafb : audiopolicymanager_tests: add execution tracing
082099212e : Destroy output tracks when existing.
4efd3ed021 : c2aidl: Add InputSurface & InputSurfaceConnection
fb128923bf : Codec2: Add InputSurface configurables to C2Config.h
632e8b7c0b : EffectHalAidl: continue effect processing in DRAINING state
65e831c2c6 : Check if non offloadable effect is enabled with uuid
9e25cf1624 : Check if non offloadable effect is enabled with uuid
8b79359eb7 : Revert "Update bluetooth perm checking to perm cache"
778ccd1813 : Camera: Add desktop effects metadata tags
d3c0eb43eb : Camera: Add desktop_effects feature flag
f8b99061a4 : Add eraser in EffectHalVersionCompatibilityTest
3fabd2e147 : Add support for IAMF audio format
5cc33e4fcc : EffectHalAidl: continue effect processing in DRAINING state
8a770c2173 : Update doc for presentation end callback.
da21163bc8 : Create a token for the AttributionSource CameraSource passes to connect
64265c8212 : Preserve previous control parameters with common AIDL parameter setting
db81cd89f3 : EffectBundle: update lvm control parameters with AIDL parameters
fe085f7781 : Update bluetooth perm checking to perm cache
7616b33d9b : Add permission check for audio attributes tags.
499a6c8b18 : audio policy manager: fix crash in verbose log message
9aab1380a8 : Spatializer speaker capabilities queried from effect
fca0abaabf : Revert^2 "Mark @FlaggedApi flags as exported"
935d4de606 : Revert^4 "Codec2Client: add support to ApexCodec"
58ffe394ad : Revert^3 "Codec2Client: add support to ApexCodec"
3b5e5edd7b : Update APIs for audio attributes tags.
22a113615d : Fix producer usage in virtual camera
33b620a42b : Revert "Mark @FlaggedApi flags as exported"
7baca0639f : Don't rely on GLConsumer frame number in virtual camera
61b571ab56 : Preserve previous control parameters with common AIDL parameter setting
386bc6657e : Fix MMAP stream perm check
7688a41788 : Pass "start camera <camera_id>" message to AppOps for data delivery.
dcfd592001 : Pull AttrSourceIter into lib
5a36f93bb7 : libaudioclient: fix AudioTrack blocking write after underrun
e52953ef7d : Mark @FlaggedApi flags as exported
ffc868e3ba : Revert^2 "Codec2Client: add support to ApexCodec"
6de075389e : Update comments to point to the new location of event.logtags.
31681aa1d8 : Camera: Remove fwk only keys after they are used
0c0cbf870a : Move APIs from AAudioTesting.h to AAudio.h
6d66b3718e : Do not include selected output in secondary output list.
6838cdade7 : Do not include selected output in secondary output list.
dcae796ffe : Add FMQ support to camera2 SDK for metadata transfer
c2575c62c6 : audio policy: check volume device before setVoiceVolume
65af9eb376 : Revert "Codec2Client: add support to ApexCodec"
518b68fd13 : Fix CFI issues with extractor threading
0bd92881b9 : AudioFlinger: remove global effect enablement timeout
b94582c6cb : ServiceSingleton: Do not permit onServiceDied before onNewService
f767de06c9 : audio: Add track metadata perfetto logging
d66d8247d0 : AImageReader: Add setUsage() interface
19ec470176 : Move APIs for querying platform mmap policy to AAudio.h
a9ee43d5d2 : Codec2Client: add support to ApexCodec
b67cf9e063 : Address comments with getDeviceIds
f351fe9f92 : APV: Correctly set APV band and expose YUV420 format support
c45bc93934 : Use view::Surface instead of IGBPs in OutputConfiguration
4995000daf : AudioFlinger: remove global effect enablement timeout
a3fa8cd807 : codec_fwk: add codec availability support feature flag
67e08fbb70 : C2SoftMpeg2Dec : Enable KEEP_THREADS_ACTIVE
a502cee117 : Remove support for ro.audio.silent
b91f7d2d31 : Remove aaudio sample_rate_conversion flag
517d0eef60 : Camera: Populate ZOOM_METHOD in CaptureResult
7582af8bd5 : [audio] Pull service permissions into libaudioperm
a95a60fa26 : Only apply volumes for active devices
ce222b5c0a : Create OWNERS file for better_together
e2f27e3d34 : CCodecBufferChannel: process surface callback asynchronously
1a586fdbd8 : Fixing native crash in cameraserver
7ed0b4bffb : Camera: Add DEPTH_JPEG extension output flag
7172024f8c : Camera: Separate out framework only metadata tags
e3a51a3e25 : media OWNERS addition
08506ced64 : audio record: Correct VDI for runtime appop checks
fda90e8a84 : Combine getInputForAttr perm checks
5df00bb4dc : Fix getSupportedSchemes() in DrmHal.cpp
b894244302 : Camera: Remove AE_PRIORITY_MODE setting in CaptureRequest
5674f527dc : APV: Expose Dynamic Color Aspects feature
8d087d13ae : APV: Update CSD according to the latest spec change
6745713406 : Revert^2 "Audio CAP: Address ANAPIC comments, Part 3 (example & tools)."
155d7bc698 : Revert^2 "Audio CAP: Address ANAPIC comments, Part 3."
d86700457d : Revert^2 "Audio CAP: Address ANAPIC comments, Part 3 (Split back AudioPolicyForceUse)"
0216f1fa74 : transcoder: sleep before signal EOS
35a8e4c983 : Only apply volumes for active devices
4e9e714805 : Revert "Audio CAP: Address ANAPIC comments, Part 3 (example & to..."
9dd4172818 : Revert "Audio CAP: Address ANAPIC comments, Part 3."
4655505084 : Revert "Audio CAP: Address ANAPIC comments, Part 3 (Split back A..."
5569b50d3e : codec2: refine available/required-resources configs
3d8d894e8c : Only apply volumes for active devices
7110a22f8d : Set missing CSD for APV codec in Muxer
25fbcf21f7 : getInputForAttr perm check cleanup pt2
3a77e966f6 : Make 'IHalAdapterVendorExtension' unstable again
c926d1b332 : Camera: Extend the HEIF UltraHDR format muxing
f170fcc729 : Fix null pointer dereference in Camera2Client::stopPreviewL
45f819482d : CodecCapabilities: Initialize native CodecCapabilities in MediaCodecInfo
89ff043698 : getInputForAttr perm check cleanup pt1
0996f2b4b2 : Add offload support for aaudio.
53d2c2c587 : bqhelper: Remove support for passing GL fences into BQs
c3849ec243 : bufferqueues: Remove support for passing GL fences into BQs
db18df935c : APM: apply volumes when abs volume stream changes
587aa32bcb : c2aidl InputSurface: enable InputSurface
6319b2d1d6 : APM: apply volumes when abs volume stream changes
8436809543 : CodecCapabilities: Port CodecCapabilities from Java to native.
a034e2437f : CodecCapabilities: Port EncoderCapabilities from Java to Native.
4ed0abda82 : CodecCapabilities: Port VideoCapabilities from Java to Native.
c07437e766 : CodecCapabilities: Replace ParseIntRange with Range::Parse and replace int with int32_t.
dbb80328f4 : Audio policy: use new concurrent record bypass permission
26ec46b914 : Audio CAP: Address ANAPIC comments, Part 3 (example & tools).
a893389ea4 : Audio CAP: Address ANAPIC comments, Part 3.
1627e36bc4 : Add effect proxy shared capability and parameters clamping
0e2a92b4ab : Camera: Remove ZOOM_METHOD from capture request for HAL
7f64e01613 : APV: fixing color conversions and adding P210 and RGBA1010102 support
e0540ea206 : Make 'IHalAdapterVendorExtension' unstable again
dc2300cc75 : Camera: Remove ZOOM_METHOD from capture request for HAL
5d837eacdb : AudioTrack: fix race condition between start and restore
90059b0d48 : Revert "APM: apply volumes when abs volume stream changes"
7af1513046 : Revert "APM: apply volumes when abs volume stream changes"
b7f8edcf5a : Use routed devices throughout the audio framework
76a772a43d : MediaMetricsService: Extract common string utilities to audio_utils
d8add2beef : audio policy: fix input preferred device logic
d8548a3560 : Fixed LinearBlock thumbnail crash issue
5fbadc8949 : AAudio: Use one SRC per shared stream.
ad904a8c2d : Create a token for the AttributionSource ACameraManager passes to connectDevice
f2da9c7c3c : Query the uid process state in opChanged instead of relying on callbacks.
71ff72b2ba : Add US-MTV and TW TLs to media solutions OWNERS
c618f21c6e : Fix AudioPolicyManagerTestMsd
1fe8a63cdc : Camera: Fix use-after-free issue with provider instance
9a11eaebad : Remove old asserts that don't compile with NDEBUG
1f9600b698 : Add multi-client support in camera2
30cce6a979 : Push initial pair to frame map when track starts from PAUSED state
06bfbcb3e7 : Change the log level for failure to connect to HIDL from error to warning
40421a1df3 : Add hardening flag for special app access
fa982f3d47 : camera: Let SurfaceView transform preview only when rotation override isn't rotation only
083cc438ea : Camera: Auto-generate NDK header
2408881df9 : MediaCodec: define available/required resources APIs
f6e23aa28f : Allow bt import of android.media.audio-aconfig
1a43965a30 : Fix render thread early termination
44ded69cc9 : Allow bt import of android.media.audio-aconfig
a2fb29cb63 : AudioSystem: Reduce service timeout from 10s to 5s
a5af5074bf : Audio: clarify client service timeout naming
b719a9687b : Add flag to deprecate STREAM_BLUETOOTH_SCO
efbef0b0e0 : MediaCodec: implement subsession metrics
e875757737 : Change ALOGE to ALOGV for request pointer logging.
2b609afa58 : Camera: Add new feature combination query version
2850b26114 : Audio CAP: Address ANAPIC comments, Part 3 (Split back AudioPolicyForceUse)
fe1dcf5caf : Add fuzzer for audio attributes tags.
4447c8fceb : Camera: Fix out-of-order function parameters
f8c0105858 : Camera: Add AE priority mode tags
48e8f0c123 : Audioflinger: adjust wait time for duplicating thread
ae987e1420 : Camera: Add flag for zoom method metadata tag
3fea66bb18 : APV Encoder: properly initialize frame holders and fix initializations
8bfca18925 : codec_fwk: add codec availability feature flag
32a850f23c : MediaCodec: define ID for metrics flushed callback
96a6ae8ed2 : Add ApexCodec API
932e957a19 : Add audiopolicy OWNERS
adde768924 : Restrict APV codec to Android B and above
74568a7fae : nuplayer: fix audioFormat error in offload mode
8a007370c5 : C2BqBuffer: fix to check consumer usage when migrating during surface change
444e95e5c9 : APM: do not mute LE broadcast in audioserver
cfc29a3f1f : Add audio eraser effect filter in libaudiohal
5f768c05f4 : Camera: Filter out unaligned HEIF gainmap resolutions
380c238f53 : MediaCodec: move crypto check to onQueueInputBuffer
d3e153ad40 : Add flag for spatial audio versioning
2a3f9a719e : Replace use_context_attribution_source / check_full_attribution_source_chain flags with read-only flag
c3e3eced45 : Add offload_support flag to aaudio.aconfig
4adad26538 : Add flag for spatializer capabilities
e52adcf936 : Remove APV Codec from OMX Component role
3216393730 : Use mClientAttribution.uid directly in BasicClient constructor.
6379ef2b67 : Put APV Codec creation under flag
064697a2f9 : add support more PCM encoding and avoid overwriting audio format
de1883b3ae : Night Mode Indicator
d80ed57ee5 : Fix condition for host managed voice SCO volume
0fc3d44f32 : libaudiohal: Fix Standby sequence
b74e689c14 : codec_fwk: Add subsession metrics feature flag
38d0b3db89 : MediaCodec: expose # of input slots in block model
7a29908507 : Camera: Set default values of HEIF gainmap fields
0bafed86da : Modules: Update dependencies on graphics.common HAL to use defaults
1a478c790f : Update dependency on graphics.common HAL to use defaults
dd7d59930d : Bringing up APV SW codec
aeb1d00bf0 : Add AudioDeviceVector to MMAP path
fb9711968b : Use DeviceIdVector for outputs in Audio Policy
9eebdbcbcc : Add feature flag for speaker layout API
808dccf251 : Update flag docs
d94d600204 : Add more flags for hardening
6d65436955 : Flag for IAMF definitions
7f1823bda8 : Remove SCO flag from valid attributes check
d9add5765a : APM: update audio_attributes_t validation
06c3a2eaf0 : Add flag for APV Software Codec
6c84214caa : Add java lib for extractor flag
528181af55 : AudioPolicy: disable usecase validator
2a60f057a4 : Add APV codec support to Platform extractor
ed89641f29 : aaudio: fix SHARED MMAP streams
80d8238a14 : Add flag for audio eraser effect
4511340248 : Fix toString() for empty DeviceIdSet
e591abc811 : Add converstions for AIDL<->native for new speaker layout field
8aff9d0b92 : Add APV codec support to Platform muxer
ca3a5c296a : Update namespace for extractor_sniff_midi_optimizations flag
2c3b49a763 : Changes to OWNERS files for USB.
d55d2d1fd7 : frameworks/av: Update libprocessgroup dependencies
c04a820dce : MediaMetrics: Accept elided second argument in string pair
2dbb1a997a : Revert^2 "audiopolicy: speaker cleanup AudioAttributes usage"
f7b48d22b8 : Audio CAP: Address ANAPIC comments, Part 2.
0267d47242 : cameraserver: Fix package name logged for system uid ndk clients
7493e05f88 : Fix audioflinger allocator concurrency
0a2f933866 : Camera: Extend HEIC output support
be7e9e5f5b : Camera: Mark color temp flag as exported
a16e713c8d : Add flag to deprecate STREAM_BLUETOOTH_SCO
611efd59d3 : Revert "audiopolicy: speaker cleanup AudioAttributes usage"
cd24071359 : Camera: Add color temperature metadata tags
4cded70c07 : amrnb: Fix Android.bp formatting issues
9c70c66558 : Add DeviceIdSet to AudioContainers
dc00338b48 : Fix repeated surface timestamp condition
d0799e1a03 : Revert "amrnb: Enable integer sanitizer"
fe3d1f5a07 : SoftVpxDec/SoftDav1dDec: Set KEY_PICTURE_SIZE to 4096x4096
51a8a77767 : Revert^2 "AAudio: add support of Audio Attributes tags"
1b2a8a5a8e : Revert "AAudio: add support of Audio Attributes tags"
ac2cae1939 : Add dancho@ to media_solutions_OWNERS
3a99dee82d : Add flag for hardening
4c2a1c2e68 : Add audio device type MULTICHANNEL_GROUP
fb0af7ca71 : APM: apply volumes when abs volume stream changes
8616d81468 : amrnb: Enable integer sanitizer
dd48ca6a25 : Block longer then timeout if first frame not drawn
85e8a6534f : Audio Policy: fix media over SCO logic
df4fb83af3 : Add feature flag for multichannel group audio device type
66674a1648 : Camera: Add flag for MultiResolutionImageReader public usage
5b8947647a : AAudio: add support of Audio Attributes tags
cfef2714b3 : audioflinger: align track dump header and content.
b91d0c2602 : Camera: Init mMirrorModeForProducers for all constructors
d97c3ae9ba : audiopolicy: speaker cleanup AudioAttributes usage
e78085defc : Fix MonotonicFrameCounter warning log
4d0c9e8687 : Add trace for duplicating thread write
e6ca52951a : Return error code on failed camera creation command
967c85fe01 : Remove support for ro.audio.silent
e4e383f9e3 : Flag for speaker cleanup AudioAttributes usage
38558e18b9 : Appops refcount fixup
ce3d110631 : AudioSystem: new service cache reduces ANRs and improves performance.
d9567a3fe8 : psh_utils: Use the CPP variant of IPowerStats instead of NDK for efficiency.
d46c87c357 : psh_utils: Update ServiceSingleton code
c615b05473 : Add APV constants to Media Framework
f0c7bffc7a : Add Codec2 APV Constants
35cdf35f2c : audioflinger: RecordTrack: add debug log
93845c24ed : libaudiohal@aidl: Fix handling of "unknown" position
a0c61f11ce : Use data delivery permission checks
a298074d8c : audio policy engine: fix ringing over SCO device selection
c4a2fe51c0 : Skip permission check for test camera
52df07b5ac : Update media_solutions_OWNERS
6e3e81e7ca : Differentiate between mute reported to AS and port mute
368452b64d : CCodecBufferChannel: Resolve race condition around mComponent
d1146681fb : CCodecBufferChannel: process surface callback asynchronously
ad936af3db : CodecCapabilities: Restrict bitRate from int to int32_t.
84ad37f5ee : CodecCapabilities: use optional instead of 0 to represent null in supports().
2e772b8061 : Propagate clientAttribution to CameraService::BasicClient
ca3a89b492 : Add APV Support flag
ca0ff1706f : Remove some usage of IGBPs in the ICamera.
38df0cacf1 : Camera: Add per-surface mirroring mode
ac84b61bb9 : libc++fs is part of libc++ now.
ff90a5a044 : libc++fs is part of libc++ now.
e79e1f0078 : make audio_framework aconfig visible to core/res for permission flagging
b64cf0161a : Fix for inputsize calculation
91beb49d27 : Pick input profile that partially matches with all flags.
b1bef9ad28 : Fix MediaPlayer raw pointer usage
e2c5138e59 : APM: remove unnecessary check
08502d83a3 : APM: add more logging when setting stream to drive abs vol
726f6c14b2 : DynamicsProcessing: Reset mEngineInited to false after processing
5b9d407315 : Add YCBCR_P210 format
abbdeb107c : codec2: Fix VideoEncoderRoiTest cts failure
559e9765c5 : Remove framework support for audio HIDL HAL V5
3cf0c775a1 : Mark android.hardware.drm@latest-service.clearkey as vendor
2880135814 : camera: Clarify hot pixel map coordinate system when sensor pixel mode is MAXIMUM_RESOLUTION
3244bbd6c7 : Add flag android.media.audio.concurrent_audio_record_bypass_permission
e42b6481dd : Fix MediaPlayer raw pointer usage
80f9048708 : Flags: Add flag for getRoutedDevices()
b6bad9b230 : C2SoftAomEnc: set AOME_SET_MAX_INTRA_BITRATE_PCT
0513d30460 : AudioRecord: fix obtainBuffer restore sequence
aa6e9e3f61 : Revert^2 "Add APIs to query MMAP support in AAudio."
aca8f4620d : Revert "Add APIs to query MMAP support in AAudio."
2234d04be1 : Modules: Update dependencies on graphics.common HAL to use defaults
93e388163e : Update dependency on graphics.common HAL to use defaults
8f67e7313f : Fix a race condition in CCodecBufferChannel::requestInitialInputBuffers
511599f564 : libmediaplayerservice: Explicitly force callbacks to stop running
32704e704c : AudioRecord: fix obtainBuffer restore sequence
b7b8580841 : Revert "Enable clang-format for camera framework native code"
c2f9e9d20f : C2SoftMpeg2Dec: Fix KEEP_THREADS_ACTIVE issue
b04d02d390 : Add eraser in EffectHalVersionCompatibilityTest
376e28527e : MultiAccessunit benchmarks transfer multiple access unit infos
b1f5a7f1b3 : Adding Mediacodec.BLOCK_MODEL based decode benchmarks
83c38843d8 : Media Benchmark: Simulating audio playback
2d815b9399 : AMRWB: Remove workaround for clang codegen bug
3099f511f3 : Add YCBCR_P210 format
c9c36e6602 : AudioFlinger: patch track initial mute state false
3774b59be2 : Fix render thread early termination
efea6c5b8b : AudioFlinger: Protect initialization code in threadLoop with mutex
2369443634 : Pass full context AttributionSource to permission checker during connect
10018da97b : Avoid calling virtual method in constructor
c556d80322 : CCodecBufferChannel: remove rendering depth
fba97d88e0 : libaudiohal@aidl: Resolve async callbacks concurrency
1f00a0911b : Fix for trying to restore audio track after obtain buffer error
7fed726f2d : C2HIDL: handle unregistered death notification
2334876edb : Anonymize bt addresses in listAudioPorts
cf0a8ee1ba : Add is_fixed_read_only: true to MIDI extraction mainline flag def
45c9b78fcd : Fix misaligned pointer when deallocating address error
bafaff5815 : Add proposed trendy teams for VTS modules
9351ef9396 : Add proposed trendy teams for VTS modules
1e865e66ba : APM: forward mute state in APM to AF
d2d3326c69 : Convert vintf_fragments into vintf_fragment module(s)
5338e406c7 : ResourceHandle : Refactor resourceHandle data type to long
7ed22ff168 : Add APIs to query MMAP support in AAudio.
a392aea68e : Speed up SniffMidi() by parsing the initial bytes of the header
c86a3e1001 : Audio Policy Engine: remove recurrent warning in getDevicesForStrategyInt()
80edb96aab : Fix memory leak in MtpFfsHandle.
5c68943239 : ExtendedTimestamp toString fixup
970dc0d8d2 : Add timing info to audioflinger dumpsys
0423af9282 : Increase audioflinger local log size
b01390fad2 : Add uid to audioflinger track dumps
521349c1de : Update AE_MODE_ON description for flash control
3047783685 : Audio Policy Manager: fix priority of routing strategies
5f900c5de0 : Fix timestamp not increasing on repeated frame
c8b63c83e1 : Audio Policy Manager: fix priority of routing strategies
615e16008f : camera (v)ndk: Setup vendor tags if they're not already, while calling getCameraService()
0bac57e7bd : Reapply "Update libprocessgroup dependencies"
aa3afcbadc : Add headers to audioflinger thread local logs
d33e319f14 : MediaMetadataRetriever: add static lock for image extraction
16e5e0c932 : Revert "Update libprocessgroup dependencies"
49be5e9ec1 : libaudiohal@aidl: Fix 'pause' handling
e1dfc0f7da : Revert "Anonymize bt addresses in listAudioPorts"
470b399e12 : No fast mixer for DEEP_BUFFER threads
a19e9176c7 : Improve overflow handling
00526b660e : Enable clang-format for camera framework native code
95a0015ea0 : Use context AttributionSource as the identity source-of-truth for connection
f94040fd5a : Audioflinger dumpsys cleanup
401aa509ee : Check AID_RADIO correctly
39bd2319b5 : Update initialization data to CodecPrivate format for VP9
684180bbdf : EffectBundle: update lvm control parameters with AIDL parameters
da22e9e8b3 : Use canonical copy of ABSL for WebRTC dep.
07971a32ac : Anonymize bt addresses in listAudioPorts
c32e669e3d : AudioPolicyManager: fix DTMF gain over BT SCO
b7561416b6 : MediaCodec: more buffer state cleanup
0770bc8b89 : Revert^2 "AudioFlinger: do not reset mHwPaused on flush"
6a7420fd8f : Fix Tuner CTS test deadlock issue
790b37377f : writeUnpadded->write
657b59056c : audio: Update TEST_MAPPING after ag/24152024
bdc46fa304 : Camera3StreamSplitter: Refactor to completely eliminate IGBP/IGBCs
8d38533b9a : Revert "Sync with new drm common aidl interface"
2f67f41deb : C2SoftMp3Dec: Fix potential null reference in onStop()
47655d2d0f : Update input route when preset preference has changed
d27bbb9cc8 : Return HAL config in datapath for SwAudioOutputDescriptor
dfb67b8a87 : Propagate AudioFlinger open output flags to audio policy
b984183e94 : Support remote submix for 44.1kHz
82f39d6854 : AudioFlinger: add queue wait analysis
455682fb6b : Revert "Return HAL config in datapath for SwAudioOutputDescriptor"
47d862feed : Revert "Propagate AudioFlinger open output flags to audio policy"
acd5def6b4 : Sync with new drm common aidl interface
2e9f12305d : Revert "Refactored libcameraservice_depth_processor_fuzzer"
e2c8776510 : SpatializerPoseController: Add thread safety annotations
2a5a524a90 : audio policy: handle errors for usb broadcast device connection
ccd149ce9a : APM: Avoid calling 'setOutputDevices' after adding output for HIDL
eb961c729b : Camera: remove flag check_session_support_before_session_char
448e59d0ff : Return HAL config in datapath for SwAudioOutputDescriptor
83dc617a11 : Propagate AudioFlinger open output flags to audio policy
d19814ae60 : Use DesktopWindowing flag for Camera Compat for Freeform.
e005df561c : Camera: remove flag calculate_perf_override_during_session_support
be0be6f569 : Camera: remove flag use_system_api_for_vndk_version
1ed80f8f50 : mediaserver: configure and create threadpool for C2AIDL
940c3b7e25 : Camera: remove flag delay_lazy_hal_instantiation
4aa99a8438 : Add a comment about the order of SetPreferredCodec2ComponentStore() and ComponentStore() in the code.
05fcd3a1e1 : Add support for AC-4 level 4 audio format
54fab70df8 : camera: Remove session_hal_buf_manager flag
4aa3f70e45 : CCodecBufferChannel: decrease balance for discarded work as well
d62468344a : CCodecBufferChannel: throttle by # of frames in pipeline
13a0bd45f8 : CCodec: lock input surface for concurrent access
ab659d6698 : Audio Aidl: Set callback priority
93812447f0 : Remove unnecessary oneway call optimization
fa5a1417b5 : Add MctsMedia* to postsubmit
9dbb416f7d : CCodecBufferChannel: decrease balance for discarded work as well
63f38bb450 : Extend AmrnbEncoderTest to validate encoded output with reference files
bffbebbc86 : Extend AmrwbEncoderTest to validate encoded output with reference files
309a0558ff : Extend AmrwbDecoderTest to validate decoded output with reference files
1b6e71c7a4 : resourcemanager_service_fuzzer: Add signal() to handle SIGPIPE
f462c6cc05 : Update AudioFlinger to use templated unique_lock
ce55edbf13 : Move getPermissionController / getPackageNameFromUid to AttributionAndPermissionUtils.
5e02d2b1bb : Retain config for the cached profile.
8a09667cee : Do not allow direct output if bit-perfect mixer attributes is used.
b2fa420f0a : Change IAfEffectChain::sendMetadata_l to pure virtual
353dfbb409 : update aconfig lib setup for host test
e60fb412ea : DRM RKP interface to collect BCC signature (UdsCerts) and add to CSR
f0c820814e : audio: update hdmiarc profile when switch surround mode
41c4eaf7e2 : Refactored libcameraservice_depth_processor_fuzzer
f5e48d4134 : Extend AmrnbDecoderTest to validate decoded output with reference files
73efbca950 : audio policy: fix capture policy for assistant when running as top
86b54d3ab9 : AudioTrack: Fix setPlaybackRate fail
8d2abbd961 : Update libprocessgroup dependencies

+- Project: platform/frameworks/base

56e49a4a467e : Import translations. DO NOT MERGE ANYWHERE
4ed3327e254d : Import translations. DO NOT MERGE ANYWHERE
bed50a17d9c2 : Import translations. DO NOT MERGE ANYWHERE
58d8adc78f92 : Import translations. DO NOT MERGE ANYWHERE
302f5622afc5 : Import translations. DO NOT MERGE ANYWHERE
9e92a5ce61a3 : Import translations. DO NOT MERGE ANYWHERE
cc73aef08941 : Import translations. DO NOT MERGE ANYWHERE
96d1c20aa874 : Import translations. DO NOT MERGE ANYWHERE
f4a6570129ef : Import translations. DO NOT MERGE ANYWHERE
b50ce41ac890 : Import translations. DO NOT MERGE ANYWHERE
c3cf4569346b : Import translations. DO NOT MERGE ANYWHERE
7319c8205fbf : Import translations. DO NOT MERGE ANYWHERE
3621d7c87a65 : Import translations. DO NOT MERGE ANYWHERE
636909591e47 : Import translations. DO NOT MERGE ANYWHERE
6d1f94f10361 : Import translations. DO NOT MERGE ANYWHERE
b1b3ae3f3e1f : Override Carrier 2032 5G plus network type
9c9a11f8f046 : Copy requestedImeVisible state also if stylus is used
ab1576671e0f : Fix NPE due to null clip descriptions
6a07868bc0e6 : Prevent showing overlays when unarchiving
0947ef7f860c : Resolve cross account user icon validation.
f79cc5227ff3 : Check account type returned by AbstractAccountAuthenticator.
5f3e7eb481fc : [PM] Fix the profile issue in UninstallerActivity
9629d2bde138 : Remove flag check to sanitise keyboard shortcuts provided by apps.
12de0a9eaf50 : Revert "Fix divider flicker from swiping up from a pip->fullscreen app on top of split"
db0d5ea4285e : Revert "SplitScrn: Check for all children visibility when updating mVisible."
070d5e0dbffb : Revert "Fix issue with inconsistent stage visibility"
210d5080eb9c : Only initialize the drag session after we confirm we can handle the drag
34def81544a5 : Revert module related chanages in platform folder
3c8fb194ba73 : Fork code changes into platform/ and module/ folders
6538fac5c89c : Removing hide IME call onWindowLostFocus for STATE_ALWAYS_HIDDEN
ec8c7a6bf399 : Adjust ImeTracker phases
0b0696d57a83 : Always initialize ScreenshotLocationInfo.
7f6e923311c7 : Add sms log to session metrics
528177a7c2f8 : Remove notifyInsetsChanged call from notifyControlChanged
b4372aadfc02 : Check that ime visibility changes in stack view
bd6c787b4b82 : Fix IME flicker when dismissing stack
19beee65b4cb : Fix flicker caused by IME hiding twice
11a06b6f6896 : [Satellite] Satellite metrics to capture pending message count per datagram type.
1b3893d335c0 : HDR should not consume HBM time limit
55b5ae0528a2 : Don't wait for client completePause when sleeping
e9ee95afb074 : Fix flicker when hiding the IME in horizontal split screen
c76d94933137 : Fix screen-off power estimation error caused by proportional distribution
394922b86f83 : Set window visibility when user switch is triggered
67b82459f17a : Cancel animation in split screen, if the leash is lost
c61db9b91dfe : Put track of systemUiContext before WM behind flag
fbd86ab8e8fb : Add flag for tracking systemUi context before WMS initialization
9d04041988ef : Update flag description
8d5d97428895 : Fix getPackagesBypassingDnd() for multiple profiles
6b22d1c0c351 : Hide IME switch button for single IME and subtype
0528b5e74438 : Add null check for clip data before getting the description
48d98b86ee27 : Avoid handle the pointer-down-outside if the input target is invisible
057feeb6fa92 : Added VZW tagIds based on go/pixel-ntn-region-tags
bd1f4b826166 : Add a device config for satellite NIDD APN name
447eaeb3ce71 : Include self in permission check for MANAGE_NOTIFICATIONS
8c3bdc61e2b7 : Revert "Refactor initial support check to use SupportInfo"
588c4ad44c80 : Prevent dispatching onBackStarted too early for gesture nav
dfab7ad9636a : Flag FRRO changes for Launcher preview color fix (2/3)
2d9063bf6ca3 : Revert "Remove the extra SCO intent when changing volume"
cee5fe36ea64 : Revert "Migrate materialColor* attributes into colors"
edbf82ec2247 : Update java doc to set deprecated
9528cf9a97aa : NestedDraggable exposes the number of pointers down when starting
cd1202d3cea1 : Delete noisy "try handler %s" log from ShellTransit
35c252579d61 : Separate string config for ignore_orientation_request
86794df5039e : Remove user-unlocked listener in ODI manager service.
37507ae292ee : [Notif redesign] Update size and position of conversation icon badge
9acc65caf6a2 : Init ProtoLog in SystemServicesTestRule
c10567ea26dd : Ellipsize "Clear all" text when necessary
b6b80d2cf1e7 : Added Code owners for shortcuthelper
1670d281b339 : Log AppFunctionsRequestReported
0b0c24c5a085 : Use reparentToDisplayId to move the shade between displays
4b3e0eea1147 : Rename new footer XML
2ccbca1469c0 : [Notif Redesign] support ellipsize in Notification text
f011e7b77f7a : Fixed irregular talkback navigation in shortcut helper
1e6e9e2f1b62 : Fix test failure for updated bookmarks
f2a006ffd781 : Remove flag check enableOnDeviceIntelligenceModule
a061f835a191 : [19/n] Make LetterboxControllerRobotTest more generic
5268560308d0 : Add flag for tracking root tasks per user.
6eba2471cfe7 : Revert "Revert "Revert "Allow focused privileged windows to capt..."
38771a70d5d8 : Revert^3 "Launch wallet app on double tap option."
345ad897aa82 : Revert "Fixes failing PowerMenuTest."
bf374d0ea742 : [gradle] Upgrade to Kotlin 2.0.21
4930578ba793 : Add getSourceMetadata() in LocalBluetoothLeBroadcastAssistant.
3470f4f646ae : Refactor initial support check to use SupportInfo
5ddeabd38092 : [Catalyst] Add PreferenceLifecycleContext.getKeyValueStore
bfb0a0892a14 : [Catalyst] Move common APIs from SettingsStore to KeyValueStore
20b229794ace : Fix missing creator token issue due to same intent added twice as extra intent
60f5dd1b065f : [kairos] fix MutableTFlow backpressure
fb88b5277edd : [kairos] Add TState.transitions utility
6943fa9e7fd0 : [kairos] add Flow.scanTo utilities
7ae1f0196851 : [kairos] remove unnecessary scheduler channel
00ad67ffe81f : [kairos] add unsafe sample debug util
804a064ab700 : [kairos] specify variance for TStateSelector
1ea4433f3dab : [kairos] add 5-way combine() overload
0f00b3b7d913 : [kairos] delete redundant combineValues util
4cc641f58e77 : Fine-tune sqlite misuse exceptions
48e097832086 : Revert "Don't skip clearing measure cache of ancestors"
54cc588f32d6 : Refactor native store for PIC nonces
7bbdbdff9c79 : Fix spike method calling
2ec454f23b04 : Cache for null results in BroadcastStickyCache.
4de87912be5d : Include reasoning for why a PACKAGE_CHANGED broadcast is triggered in the trace
5cbb65839a26 : Add Integration Tests for App Jank Tracking
cf128faa5ca1 : Ensure that views retain clipping in origin transitions
13b52853b868 : Show correct icon for maximize button
acb0a474b865 : Remove @EnabledSince annotation on the changeIds.
2e03016f8161 : Reland "Move focus for windows in freeform mode"
7398f145b179 : Update program selector API doc
c24a427c20d4 : [flexiglass] Don't emit from sharedFlowWithState when CANCELED
e06786cbd6d5 : Create cache for service connections in AppWidgetManager
0d09c285c4ee : Fixes failing PowerMenuTest.
faccd5dd12a2 : Address API Review feedback for TestLooperManager pop/peekWhen
e579ecb0e464 : Check if registered before unregistering in binder death
60d986c97761 : Add edit mode events and QQS/QS open events
25479bd550a4 : add setEnabledSettings tracepoint
0d5b47218e89 : Do not show RDM EDU dialog in certain cases
e922f806ea26 : Factor vertical split into menu position calculation.
3bd3584dceaf : Remove badge for logo label and center the logo label
a10c459bd981 : Add new namespace aaos_input_triage to native binding
682dac19e68c : Scroll capture: Allow scroll views with multiple child views
fce8dfc1d01b : Marking testAPI to SystemAPI
178cb913bc84 : Fix service endpoint ID generation logic
bde7cf67464b : Fix incorrect flag for starting DynamicInstrumentationManagerService
cacf3dee9313 : Remove references to the backup jobs exemption flag.
3b87c1bba27a : Only allow STREAM_MUSIC as driving stream for AVRCP
cd2db86ea8e3 : [flexiglass] Update NSSL stack height on child height changes
3b8b2bdb295a : Track providers' support for system media routing
566a767f0064 : Revert "Fix support for BatteryUsageStatsQuery.accumulated().setMaxAge()"
854fc9e01e9e : Fix unit tests due to icons with flag on
3e3b4221d2d4 : Enable non-activity windows to be re-parented between displays
18da618256b2 : Add media stream management in MR2ProviderService
c70f4652f818 : Removed switch focus in splitscreen shortcuts
54f2ac165314 : Revert "Keep selection of widget at the end of widget dragging"
f475ccdc7b8b : SysUI Notification Shade Blur.
7372f9680395 : updated string for multitasking shortcut
b91f29b93382 : Improve binder call usage and UI adjustment
cee74782d520 : [flexiglass] Refresh NSSL sensitivity, when the Lockscreen public mode changes
9edfba0bde3f : Refactor "rearrange" tests to use verifyDisplay
0aa95a2e8c57 : [notif] fix icon coloring logic
9b3fc431969f : Change String param to CharSequence
7bbc29a7ab5e : Fix responsive grid in edit mode and content padding.
7e50f2671969 : [Notif redesign] Update Call/ConversationLayout alignment to new icon size
c2f0f9a4b5d8 : Revert "Fix support for BatteryUsageStatsQuery.accumulated().setMaxAge()"
7d8cbd025033 : Remove permission usage for the Wear specific ACCESS_BUG_REPORT_FILES.
46be86bd8577 : Revert^2 "Use SysUI TAPL on notification tests"
ba20ba687d4c : Do not build com_android_internal_util_ArrayUtils for host
4229ad22825b : Updating the threshold names
3b422f054387 : Remove the mediaProjectionRequestAttributionFix flag
5dd92350fc40 : [18/n] Add common code to Letterboxutils
55d3567d889d : Fix view leak.
eaa4ce11aae3 : Import translations. DO NOT MERGE ANYWHERE
46f06f39ee8e : Add RemoteTransition to launch desktop tasks from recents view
1da179362599 : Revert "Flag uses-sdk-library parsing changes"
0d1d1c3515c7 : Update layout position in SplitLayout (hide IME) without imeLayeringTarget
e46fd73b60aa : Add SmartspaceView.setHorizontalPaddings.
cc0ce276cc2d : Modify the signature of logTileBackgroundColorUpdateIfInternetTile
dee733bced0a : [flexiglass] Use DeviceEntryInteractor#canSwipeToEnter in SensitiveContentCoordinator
52454a876d0c : Import translations. DO NOT MERGE ANYWHERE
d7772495c8e6 : Update to ToT RemoteCompose
b5ad495d186d : Fix issue with inconsistent stage visibility
ab5622106a0f : AudioDeviceInventory: reset SCO active device upon restore failure
9014ce424117 : Import translations. DO NOT MERGE ANYWHERE
1dc25bb40233 : Import translations. DO NOT MERGE ANYWHERE
ab2dd368d01a : Disable CDMA calls
505b484ee56c : Remove references to NV reset APIs
92e16f1afc48 : Deprecate CDMA
ed587c1799ca : Avoid NullPointerException
58ece9196156 : Import translations. DO NOT MERGE ANYWHERE
72ce7d820da5 : [BugFix]Fixed a fatal exception which causes IndexOutOfBoundsException.
b39d62d5083b : Add GnssAssistanceInterface (framework/base)
c38ae9a61b5c : Fix NPE in AbsListView's ScrollFeedbackProvider usage
fc92ee109b2c : Add a button to glanceable hub to swap to dream.
4d379cbf7746 : Refine API of getting window visible display frame
a97538915390 : Synchronize the ArrayMap.put method calls.
b1b4e9540d84 : Enhanced TextClassifier APIs to support OTP detection use cases
9b2f7b6bd93a : Fixes multiple activity snapshots can stay in cache.
aee98d3058a1 : Remove the unused imports
4835ed1e2290 : Only activate device that did not exist previously
a317b6a423fe : Combine state and pendingImmersiveTransitions in DesktopImmersiveController
15a7e96e0b0a : Remove ontimeUsedbuilder for DeviceId
1c6bff30b53f : Fix tests when flag is on
8b2183b8b3bb : Make tileScope background
3e170d691915 : Use updated atom name
459698585ef8 : Test: Added mocking static classes support.
bdefdd9b1fc4 : Declare permission for fine-grained power monitor access
1f4c46c7d34d : Reuse existing handle color when switching bubbles
844a76e719c1 : Test for switch animation
04c0e02c2ffd : Add SystemFeaturesMetadata.maybeGetSdkFeatureIndex
f5f805789787 : CachedAppOptimizer: Replace getAttributePathForTask with CgroupGetAttributePathForTask
26ab8de1573b : Reduce Lockscreen Smartspace start padding to 16dp, same as end padding.
528cc83a48c8 : Fix expected icons for additional tests
01f6292c4639 : Implements endpoint discovery callback
ed7fe53c8ff9 : Add isBlockedOnSyncBarrier() method to TestLooperManager
09ed24dac744 : MessageQueue add isBlockedOnSyncBarrier()
caea6cd92803 : Invalidate taskview after setting obscure region
d0bb83e501ff : Updating Activity#requestOpenInBrowserEducation javadoc
063ee6d00c9c : Only assert on nodes with content descriptions in drag and drop test
da93db51e736 : Add asapperstein@google.com to media/dialog OWNERS.
710eca22b7fd : [Notif redesign] Update conversation icon sizes
f4a49a96e03f : [pm] more error message logging for native lib extraction
03415d12e480 : [Fill Dialog Improvements] Implement Fill Dialog improvements
d707552737c0 : Add aconfig flag for mouse acceleration
d8e58c9acb8b : [framework] Add Unknown error code and java doc for subscriptionId.
74f3f8f7e228 : [QSDetailedView] Implements the details view/ViewModels for qs_new_tiles
37dc0a689a7a : Revert^2 "Launch wallet app on double tap option."
d6efa30ef9e9 : Revert "Revert "Allow focused privileged windows to capture the ..."
ca509ec5f4d0 : [sat] don't show device based icon if any ntn mobile network exists
86495d9b7afa : [6/N] WindowDecorViewHost: Warm up SCVHs
513caf7c6730 : [5/N] WindowDecorViewHost: Add pooled supplier & reusable ViewHost
d5bbf856099f : android_os_SELinux.cpp: Refactor GetSELabelHandle
b7757f1c1e46 : Add more details for the builder of DeviceId
3e7778861f2d : Adds exception documentation for new Context Hub APIs
901496fd06ab : Implement resizing of ongoing content in responsive grid
75412b89be5e : Add restore metrics for settings that use the restoreSettings() method.
9568ef9b946b : Clean up uses of activityEmbeddingAnimationCustomizationFlag.
90b8d7fe6fae : Revert^2 "Add flag to narrow the meaning of JANK_PERCEPTIBLE."
2103df3a0f7b : Add NB_IOT_NTN
7862f5a62562 : Replace usages of communal_hub_on_mobile
c98f44295495 : CP GuestUserInteractorTest.kt that got left out of CP somehow.
2a1242ee0fb2 : Fix Camera app shortcut issue
97bc732e3769 : [API feedback] add an option to exclude parameters
bc4826ce5fc0 : Route flag requests to both sockets if mainline aconfigd is also enabled
dd2eed59bc1e : Avoid showing "Problem connecting" for Android Auto
da78bd4ffe5b : Reuse sound policy interactor in volume ringer architecture
b7bb3ab422dd : Add volume dialog background animation
e9b791bbc3dc : Fix test failure caused due to missing deps
fa3a1c36f1a3 : Improved TalkBack accessibility across the status bar
dc13b9ada5c4 : Simplify QSModule
a3116a93bc59 : Add enabled parameter to Modifier.nestedDraggable()
8c0191854b24 : Throttle excessive calls to NM.notify()
b651b1397ddb : Allow @Application Context & @GlobalConfig Configuration Classes past ShadeDisplayAware linter
ff5f6765728f : [Notif] HeadsUpManagerImplTest: Move remaining tests, delete old file.
d60ea91156c2 : Migrate materialColor* attributes into colors
c353915df018 : Fix Flaky UiModeManagerServiceTest
73a8d0df2902 : [BugFix]Fixed a fatal exception which causes IndexOutOfBoundsException.
8772f2f53f93 : [flexiglass] Parameterize SensitiveContentCoordinatorTest
07ad125fac1d : [AAPM] RestrictedPreference tests for advanced protection
0b46a2114bb0 : PluginManager: adding event logging
28e6368ef756 : Fixing flakiness of KeyboardTouchpadTutorialViewModelTest
a4c988a1e7e6 : Fix PerformanceHints budget calculation
9cb2442aa931 : [Contextual Edu] Show at most 2 toasts in one usage session
81338e01b5e2 : Change ondeviceintelligence tests OWNER file after module migration
6ae1d3643c38 : Fix PerformanceHints budget calculation
a7c687bb929b : STL improve readability of OffsetOverscrollEffect tests
8c200771e692 : Kosmosify SensitiveContentCoordinatorTest
8c6239ab36d8 : Show ongoing activity chips on connected displays
0b7c38cb6da5 : Fix launch time metric of the placeholder activity launch
bcc88427ea0f : Reland modify bookmark shortcuts for platform alignment
1864dfa988b1 : Use BluetoothUtils#getGroupId.
81fe96be7f14 : Return early in SaferIntentUtils.blockNullAction when action is not null
d01fa41f2dee : Trigger close predictive back transition if focus window changed.
3f4f707d9680 : Introduce reparent_window_token_api flag
835e7c23f220 : Fix PerformanceHints budget calculation
53d643c9427b : Add backup metrics for settings that use the extractRelevantValues() method.
bbcb8c6fb445 : Use effective variation settings instead of fakery
4db1ae80ddf2 : Make setFontVariationSettings work with weight adjustment
92641463feb8 : Pass copied CallState to listeners in same process
b137f594f4dd : [expressive design] Update RadioPreferences.
4f1647d06696 : Skip handling pointer-down-outside-focus if input target changed
92c98a7d50c7 : ATMS: fix the NPE problem in case the dream activity fails to start.
b0e5e5b4a139 : Add mechanism to boost the shell background thread as needed
1fafab3b6c0c : Avoid lambda code to cache computer snapshot
2532abda5a54 : Log more ACCOUNT_MANAGER events.
8b948e548b78 : Refactor enable_touch_scroll_feedback flag
eefbb8228947 : Adding permission READ_SUBSCRIPTION_PLANS
1449f0e71ebc : Start transition animation from the ongoing drag bounds
637fd1627fcd : Add flag to hide "Problem connecting" for Android Auto
0d57fc432614 : [IntrusionDetection] Refactor the NetworkLogSource to register callbacks directly to the system service.
418c46ce6ee2 : Stop mirroring display if display surface is migrated
3c2e67e31368 : Call transport connection initialization in the ID service.
f0a5c56db499 : Fix several bugs in back animation handoff.
5f77cb6713e4 : API review comments for b/364120337 addressed
2dbaab0d63cc : Adding permission READ_SUBSCRIPTION_PLANS
f4fb53970ab9 : Fix matches deviceId based on flag
e88da7c03e96 : Do not throw exception in refreshIntentCreatorToken
d7abea1703ab : Skip checking display windowing mode if TDA is null
7c18bbee1ca6 : Parcel: hasFileDescriptors warning
2ee44b9d5212 : Revert "Batch noteOperation binder calls in the client to reduce..."
287cc43fdd3a : Use WearBugreportWarningActivity from SystemUI in Shell
289f62a40080 : DesktopTasksController: Let Wallpaper launchable on ext. displays.
ef388b9ec42e : DesktopmodeWindowDecorViewModel: Use Display Context.
1bcf9a9023db : DesktopTasksController: Create VisualIndicator w/ Display's Context.
480d1c185d64 : Fix erroneous array indexing in onEndpointStarted callback
c9974ad9cb8f : m3: make default AlertDialog have Material3 design
46f44cebc7f9 : Fix hidden API usages to alternative API usages
c8b00c8aea04 : Move ondeviceintelligence files to packages/NeuralNetworks
e5e315aee447 : Switch to using enableUseTopVisibleActivityForExcludeFromRecentTask for bugfix of exclude from recents visible activities
a47df2829940 : DesktopModeVisualIndicator: use taskInfo's display's bounds instead.
58c639789210 : [Autofill] Address API feedback
36f290317794 : Revert "Assert Parcel not in pool when used."
84f782a0df65 : Revert^2 "Avoid notifying that launcher is visible when opening translucent targets."
db1a8279c898 : Add peekWhen() and pop() methods to TestLooperManager.
fdd36b53f93e : CombinedMessageQueue add peekWhenForTest and popForTest methods
7500a8cf8a69 : Listen to active apps for virtual devices too
f832c6c1ff27 : Convert hidden APIs to system APIs for satellite
1d6b83e14b2b : Add bugfix flag propagate_user_context_for_intent_creation
d43b2ffb563d : Fix PerformanceHints budget calculation
bb55a7f10b13 : Update userResizeBounds onMovementBoundsChange
94a9f7c85d08 : Re-land ArrayUtils: add secure zeroization support
10f0ef29da04 : Disable page size compat based on prop
12d330bb1ec1 : Remove bulk comparison on boot.
38509f6552d2 : Fix issues with CTS in new QS
5a60458421e8 : Changed DeviceConfig.dump() signature.
f6427ebfaa6a : Add Notification.hasPromotableCharacteristics to public API
3e14b46a4c6c : [MQ feedback] Add ActivePictureListener APIs
8bd4265646a7 : Add CPU/GPU_LOAD_SPIKE hints for one-off expensive workloads
b3aa045de69a : Notifications require FLAG_ONGOING_EVENT to be eligible for FLAG_PROMOTED_ONGOING.
148eebdcdefb : Add `pixel_display` Trunk Stable namespace to the device config to sys prop mapping list
9b5e69d31621 : Route flag requests to both sockets if mainline aconfigd is also enabled
96341e644de8 : [Media Quality] Add descriptions and unhide picture profile params
dafa4972e8e6 : Remove the fgs timeout anr flag.
9c61e1cdd679 : Add `config_satellite_allow_tn_scanning_during_satellite_session` flag.
f22b124f6130 : aconfig: Add min_sdk for NFC module dependencies
61512276a690 : Allow privileged apps to accept and end VOIP calls.
fb020a0f2559 : Add custom accessibility description for notif section headers
96bb780c993c : Remove unused config_walletDoubleTapPowerGestureEnabled
8abc7afc088f : Add OWNERS file for automotive stuff in wmshell
1413326462ff : Set a maximum expanded view width for large screens
58bb72b6fe8c : DragState: fix the NPE problem.
84c7ed29a50d : [2/2][PiP2] Operate on TF token for deferConfig
675732afa0f3 : PIC does not cache nulls by default
51d9b10cc714 : dynamic shortcut label/icon retrieval for app launch shortcuts
8031ce9606c3 : Fix transitions to/from DREAMING with keyguard_wm_state_refactor.
cead3d32c10b : Use callerPackageName instead of caller.getPackageName()
a9b8c559e5bd : Use the new Material sliders in the Volume Dialog
9a89a0f19e28 : [Notif redesign] Adjust margins of "small" icon
d857b66266cc : Support target process in dynamic instrumentation
5b164234210e : [Notif redesign] Make group header respect header height
6e256ccab44f : Respect enterprise policy in AppFunctions
3c90946cc226 : Add aaos_vac_triage to native namespaces.
680d107208b1 : People Screen now has window insets.
55e000f06363 : Implements endpoint session callbacks
ae56df41e6e1 : Fix collectExtraIntentKeys for nested LazyValue who is of custom class
9355448484e7 : Fix QueuedWorkTest#testHasPendingWork flake
d509236855f5 : Keep java cache for servicemanager
e745133265d8 : Clear invocations earlier in dnd test
de164ba73b70 : [PIP2] Fix NPE for onSwipePipToHomeAnimation
1224d3585359 : Add new spacer content type
9515ec6915f9 : Correct trunk-stable flag bug-id
b135d0527e00 : Add OWNERS to CompatUi tests
340b9ec1eef4 : Use correct expected icons
3d8a5de4965f : Add nullability and type guarantees
cb488d988923 : STL improve large appbar demo experience
9aca08b5f270 : STL introduces OverscrollEffects [1/2]
b033e8e67689 : Add better debug logs in Latency Tracker.
76d302a5f553 : Revert "Add flag to narrow the meaning of JANK_PERCEPTIBLE."
01f4dc5aefed : [Flexiglass] Fix KTF filter for instant reversed transitions
bc1dc6b4911d : Remove an unused flag
c5c2b4273281 : Moving startObservingHealth API to PackageWatchdog
276b3b65d19a : Expose APIs for new observers
357eafd4e729 : Send AZR_STATUS_CHANGED broadcasts with FLAG_RECEIVER_REGISTERED_ONLY_BEFORE_BOOT
e5f1dc5996f6 : Deactivate all modes from QS toggle
0c312b882f55 : Dual target modes tile
bbfbd1fb0be5 : Accept a URI for bugreport and screenshot paths to be forwarded to BugreportManager API.
6440416d3fe2 : Add tests for async layout
eca7aa55114c : Introduce draggable that supports nested scrolls and overscroll effects
e3b5106493a7 : Initialize status bar for all displays
9be10dd29b72 : Revert "Fix cts testMinimalSizeDocked fail when device supports multiwindow feature."
de64764728a4 : Build.java: fix broken @link in SDK_INT javadoc
bc7f64b5d132 : [17/n] Handle roundedCorners radius in CommandHandler
f271760a48a1 : [16/n] Use rounded corners in LetterboxControllerStrategy
c352ec75c906 : Convert interpolated colors to original color space
93e246e7cf79 : Handle activity as incompatible if whole stack is transparent
2f138d103308 : [expressive design] Remove HorizontalDivider
c38c411c899e : [PM] Add PackageInstaller CUJ test case (42/N)
87f7c1787906 : wm fixedTransformHint with installOrientation
8fa358b84c9d : Make mCallbacks#broadcast() thread safe
9830b69fb5b7 : Keyguard Transitions: Update KeyguardViewMediatorTests.
b5d6c51a3d93 : Remove the ability to enable the sync logger.
a1ac2cc6dc32 : Add setting constant to mirror built-in display
f19992435bd5 : [MQ] Add profile handles and getters
480540379bdb : Fix support for BatteryUsageStatsQuery.accumulated().setMaxAge()
7e99d126699f : iterate over all SIMs/subscriptions and disable 2G
5656060ed5be : [Ravenwood] Remove RavenwoodConfig and un-deprecate RavenwoodRule
92665479f88d : Revert "Allow focused privileged windows to capture the power bu..."
bf5db7350ead : Revert "Launch wallet app on double tap option."
d1ac45986f99 : Update to ToT RemoteCompose
92ade0fb3fe3 : Rename startObservingHealth API
8700dfee6929 : Allow observers to register custom executor
b3df800217fe : Cancel Ask-Every-Time support for SMS by default.
86759cffb245 : Revert "PIC does not cache nulls by default"
33f44bb92f02 : aconfig: Add min_sdk for NFC module dependencies
cd7b7f5bfe92 : Give MessageQueue OWNERS ownership of TestLooperManager
5c622cfc1d52 : Used IntDefs for RangeInfo ints
10be86adfd71 : Enable the transport component for the intrusion detection service.
3dacd4eccd49 : Replace usage of DPM supervision methods in UsageStatsService with calls to SupervisionManagerInternal.
c68057f75231 : Introduce a method in SupervisionService to check if a caller is the active supervision app.
3bebb7f3b7a7 : New System API to launch SIM Preference in Setting
96c7db487eda : Treat undefined flags as false in resource flag processor
bbfd59209f0d : [Notif] HeadsUpManagerImplTest: Move showNotif & removeNotif tests.
dc16899048a9 : Fix remove_user_during_user_switch feature flag bug in aconfig
8f4be0294d89 : Fix error handling in CursorWindow
318b60a8c8d4 : [Notif] HeadsUpManagerImplTest: Move "has HUNs?"-related tests.
9691b18b95a0 : [Notif] HeadsUpManagerImplTest: Move compareTo-related tests.
9a5c0bda7213 : [Ravenwood] Remove usage of RavenwoodFlagsValueProvider
0f2f074e57e4 : New switch animation for bubble bar bubbles
d9b26ef26aef : Fix usage of KeyEvent modifiers in input method session
735f651d7a8b : Add material shape token config resources
a44cb2f85bc9 : Updates session service descriptor to a string
2827d72bf523 : Update multi-window and window-layer flags to new namespace.
efdd59a3be1e : Create handle menu on background thread.
23ce612e4a78 : Remove captured link expiration
88280a65652d : Always send setIgnoreDisplayTouches from UdfpsOverlayInteractor
9bb7006b6d0f : [4/N] WindowDecorViewHost: Add SurfaceControlViewHostAdapter
b4ed8179a7ce : Telephony rebootModem: don't drop command result on the floor
66b19001fe33 : [3/N] WindowDecorViewHost: Migrate WindowDecoration's SCVH
eb6ba2f69f4e : Remove the extra SCO intent when changing volume
32e1a1259065 : [Notif] Move sticky-related tests to HeadsUpManagerImplTest.
16ed7a2a093a : Add a toolbar for dual shade QS actions
6d15a570715f : Use InfiniteGrid in dual shade
4ce2bb343283 : [15/n] Add rounded corners radius to LetterboxConfiguration
317bca3a9eac : [SB][Notif] HeadsUpAppearanceControllerTest: Use Kosmos.
80fd09f1e89f : Separate permission notifications from cache updates
02b6c7f17ee3 : PIC does not cache nulls by default
b4d900b118a7 : Add getSupplementaryAttestationInfo
3045d7d95116 : Fix compile error in different build variant
8a75c83952d4 : Improve content description
da20c5b33a5a : Add list-based API for media route visibility by permissions
70ded9fe4679 : Rename use_ipcdatacache_channels in favor of nm_binder_perf_cache_channels
ee09ecef4d2f : Make Ravenwood less verbose
0172fea3fe43 : Update documentation for the FrameRateCategories
18c9dc715b4d : Extract helpers for setting up bubble bar bubble
dd27bd6b06f8 : InputSettings: remove touchpad tap dragging flag
70aef6b1f062 : Add hardware flag cc lib
f0ee9f1e59cf : Add content description to OOBE tutorial animation.
76fd79e1a3b1 : [SB][Notif] HeadsUpApperanceControllerTest improvements
652ec67f7530 : [Notif] HeadsUpManagerImplTest: Remove Assert, remove Hungarian notation
e87ac08fb078 : Mark home-launch as transient for drag-to-desktop transition.
8efcbc77b268 : Dont allow light sensor subscription if screen is off
bc29c821a7fe : Set default focus for a11y in oobe tutorial screen.
c03ea5158be1 : Prevent focus of shortcut row sub-elements.
d8fcea60643c : Add `enable_bug_fixes_for_secondary_display` flag.
15d260fd734d : DisplayTopology propagation to IM
e93aa8b6cae3 : implement app functions enterprise policy
eca402eb44ed : [flexiglass] Moves alternate bouncer (back) to its own window.
2919923fcea6 : Fix RTL padding for screen record options
693ffd5cdcf6 : [Flexiglass] Recreate media when media configuration changes
e264c9ed06e8 : Clean up dont_write_policy_definition
f23c3bfe7afc : Passing Ime statsToken from InputMethodManager to InsetsController
831a62cebb8e : Fix: Boot time regression on systemRunning
945855ff4c8c : Mark recent apps shortcuts as non-customizable
893cd1b62e7a : Introduce currentValue(kosmos) for testing StateFlows
392b900a19cd : Launch wallet app on double tap option.
ed4e0aa7843f : Allow focused privileged windows to capture the power button event.
1e33fce32091 : QSDetailedView: Create ScreenRecordPermissionViewBinder
4e15a5a4d15e : Update Volume Panel visuals for new redesign.
abf1698d3073 : intent.collectExtraIntentKeys only required when there are extra Intents
668a339e3506 : LocationFudger: Bug fix in test.
ec6bcb96eb37 : Revert "Enable app handle on foldables"
9e88193231a7 : Allow a frame to receive insets when a side is partially covered
5f635174d7e0 : Fix crash when doing on-screen home gesture while tutorial is open.
290827696b4a : [Catalyst] Provide AbstractKeyedDataObservable
7fb2c9ec159d : Add permission and constants for appfunctions device policy
c0eb065b467a : Unregister the AppIntegrityManagerService from the SystemServiceRegistry as we broke the link between the PackageManager and this service in the previous change: https://android-review.git.corp.google.com/c/platform/frameworks/base/+/3376209
149b7c24d925 : Don't skip clearing measure cache of ancestors
cc82f4afb23e : Make all swipe transitions irreversible
47203b18c4b2 : Add flags for com.android.internal.widget
a09a94d83e42 : Refactor to replace FloatPoint by vec2
158bcf5a24c0 : Add enable_connected_displays_window_drag flag
6968aa8eb87a : Fix the wallpaper not hiding immediately
ecad8299d552 : Fix fragment back navigation in PreferenceActivity with predictive back
642e1827d3e1 : Feature Flag for expanded Privacy Indicators
9a3589678441 : Import translations. DO NOT MERGE ANYWHERE
8151bdd3e48f : [BugFix]Fixed a fatal exception which causes IndexOutOfBoundsException.
e0e392277782 : Reset the frozen recents list for other split tests
c75183e7a616 : Import translations. DO NOT MERGE ANYWHERE
22ba998bafff : Add feature flag for the new role SYSTEM_VENDOR_INTELLIGENCE
fc05b21bc72e : Import translations. DO NOT MERGE ANYWHERE
46f9c3e4c56c : Import translations. DO NOT MERGE ANYWHERE
ee9732a5ba5c : Update to ToT RemoteCompose
b0e9a27493bc : Fix an issue with alarm message accidentally deleted
de9d574de71e : Refactor HearingDevicePresetController
e5432619d5f1 : New resources for ambient volume icons
9abb1ddc4319 : AudioFormat: add encodings for IAMF
ab328e51c778 : Disable CT verification for inline certificate and user store
dc6fb613fc9a : [Ravenwood] Remove RavenwoodFlagsValueProvider
1e25f22174e9 : [Ravenwood] Remove usage of RavenwoodConfig
53be2af41685 : [2/N] WindowDecorViewHost: move async relayout to RelayoutParams
285a51157b83 : Import translations. DO NOT MERGE ANYWHERE
3aab4c6310fb : Import translations. DO NOT MERGE ANYWHERE
7083a1115ad4 : FrameworksCoreTests_all_binder
8508e6637b3b : Fix DnsEvent ArrayIndexOutOfBound error
8fa8705e88d2 : Remove gesture shortcut restoration/merge logic from AMS
01272d0e9ae2 : Add hidden API to get vendor genfs version
72d0cf35323d : Add host library for appwidget flags
3fda2f092b59 : Add support for NFC associated role services
576784e44981 : Migrate VCN to separate non-updatable libraries
e1e859ea2c50 : Revert "Move focus for windows in freeform mode"
3ed19f262220 : Updated StatsPerUidLogger device wide component power consumed reporting
a28a59ed284e : Allow SystemServer to create VCN service with a service initializer
60df934bcf48 : Migrate VCN to separate non-updatable libraries
4b1dafa61466 : Fix transition filter for returning home transition.
93ff40f24d40 : [res] Make TargetResourcesContainer valid after an error
1cb0eb01a0e6 : Update documentation for the getSuggestedFrameRate
16d2989bca76 : Add requestSatelliteDisplayNameForSubscription in SatelliteManager
0ab9e0aa664c : Add NDK support for CPU/GPU headroom APIs
6cd335c6aa07 : Clean up safe_mode_config flag check in the functional code
175d54f39b9c : Add Abandoned job reschedule data to the atom
51f099fa7e5c : [PiP2 on Desktop] Enter PiP with bounds change animation.
613b0df6c9cc : Run moveToNextDisplay on the executor
6d8e2819f0f3 : Sandbox display rotation for camera compat using CompatInfo.
70e5bd7956e6 : [Catalyst] Provide lifecycleScope for PreferenceLifecycleContext
b84db1dea9e7 : Expose UpdateEngine.triggerPostinstall to modules.
e52c93444465 : Don't use concurrent MQ when instrumenting
57c3836d1a87 : Fix UnknownAuthority equals and hashCode
2733dd8940c0 : Honor API version from CoreDocument
eb02d68772d0 : Fix wrong color of clock in preview in picker
eb01e8716998 : Cleanup flag "warning_use_default_dialog_type"
4f362f4eccc4 : [SB][Notif] Remove ActiveNotificationModel.isPromoted; use content field
ec23043661c5 : Update to ToT RemoteCompose
838d72554248 : Treat undefined flags as false in resource flag processor
e6177f7ab51c : Revert the hint session support part
164ec92f1399 : Update Sensitivity Levels for Preferences
eb57a51e09b9 : Add @IntDef to getSpeakerLayoutChannelMask and clarify comment
fcbce46fadf0 : Adds missing permission annotation to HubEndpointSession
a39d37d1ad67 : WMLockScreenVisibilityMgrTest: update test for keyguard shell transit.
eb467cbd4e4e : Update openInstance logic.
229d3a9e0540 : [Ravenwood] Remove usage of RavenwoodFlagsValueProvider
13bc56e04dc1 : Implement AppLaunchData persistence for input gestures
b1e17a3f3bf4 : Persist and load input gestures from disk
7bacadc8cd6f : Fix flaky VibrationSettingsTest
93b4e7256c4f : Allow callers that have SET_TIME and SET_TIME_ZONE permissions to call getAutoTimeEnabled and getAutoTimeZoneEnabled respectively.
3f7dd7d1b180 : Add unit test for DesktopUserRepositories.
83d1eda1c805 : Skip calling adpf session if it's null
910e8387ac37 : Switch to the same user as the current user on the test device
ae2bb42a6e8a : Fix SnapResizeAppWindowLeftWithDrag Assertions
49ff77fe6038 : Update corner radius set for mixed transitions
3189f0ae37c7 : Revert "[ENR] Ensure ENR views visibilities"
2e7c7652d9c8 : Implements death recipient handler at ContextHubEndpointBroker
e997903515cb : Implements ContextHubEndpointBroker unregisterEndpoint
9526ea6ca365 : Implements ContextHubEndpointBroker open/closeEndpointSession
eec1c2c144a6 : Add KeyboarDockingIndicationViewModel lint exception
0b432af5bc5e : Update javadoc for MediaRoute2Info types
1af65ef7ba36 : Update API to only modify specified inset types
09c1f6186923 : Add new native namespaces bindings for aaos_storage_triage
9ccdb5e23dd0 : [AsyncGroup Inflation] Reset header visibility if needed
04f0ee69f2bd : Add method to access the View wrapped by the ViewUIComponent
2470a475dd42 : Revert "ArrayUtils: add secure zeroization support"
384c93fc72dd : Clean up safe_mode_config flag check in the functional code
e312b3099e75 : Delete Filipendo workaround code for R
1fecccd439fb : Revert "m3: Refine AlertDialog style"
245b8c640735 : Add getSupplementaryAttestationInfo
3395ff1e8984 : Revert "Avoid notifying that launcher is visible when opening translucent targets."
49fe79bc03c5 : Population Density Provider: add cache hint
3bd17cdcd467 : ResolverActivity and IntentForwarderActivity refreshes token
67b8460db043 : Add restricted mode logic to LocalTransport
aa735716cd68 : Use Mockito.spy to fix studio
63f95f153070 : Revert^2 "Tile perf: offload tile state adapter to bg"
668a41333b46 : Add permission cache for MODIFY_AUDIO_SETTINGS_PRIVILEGED.
1a3fc093ba03 : Set userId on test's freeform task.
d75970a682bb : [Notif] Update HeadsUpManagerImplTest to use `underTest` everywhere.
2feb7bf68d5f : Revert "Add READ_DROPBOX_DATA for Shell"
266a2b235544 : [SB][Notif] Convert HeadsUpAppearanceControllerTest to Kotlin.
c2bcebb855d7 : Fix the accessibility class name of tiles
43e5687f6029 : Add cropping-related methods to WallpaperData
6028e144ddb5 : Updating recent gesture tutorial animations
55e5468a2d0a : Tile perf: offload tile viewmodel and data to bg
eb64e170633a : Fixes inability to toggle dark mode when in a mode.
f235e5baa5ce : Remove new storage test mission marker file code now that new storage is launched
8991d96295e2 : Use ShadeDisplayAware resources in QuickQuickSettingsRowRepository
aa3457997739 : Added Reset button for customization mode in shortcut helper
cefca220fb1f : Update communal hub to use responsive grid
130b5355c09c : BrightnessClamperController state recalculation blocked until started
81b1558afb9e : Update communal db for responsive grid
58dd0d394df9 : Add ShadeDisplayAwareDetector
b61345006e25 : Add new signature permission for Population Density Provider
8578926239ca : Remove unused doRebind argument.
49678004ccac : Add verifications that functions get called on key gesture events.
681890767d1c : Enable app handle on foldables
58fc9739b95f : Check feature enabled on isCameraRunningAndWindowingModeEligible.
ac0894d0a22b : [Contextual Edu] Launch tutorial for specific gesture in education notification
b6557ed79bed : Revert^2 "A few fixes for animation takeovers."
1b908c53a765 : Remove unused fields in ExpandableNotificationRow
8f07c2afd32f : Update ScreenRecorder to handle StopReason
3c2cb40abc55 : SilkFX: Migrate layouts to display edge-to-edge
4ce8e7bca551 : Migrate off edge-to-edge opt out flag
ace8d12f6cd0 : Use correct context, resources and configuration in notification package
eff10988a5f6 : Don't pilfer pointers with 3-button-nav predictive back
6ccdb48ad34c : [14/n] Create LetterboxModule
51f207ec5228 : [13/n] Implement MixedLetterboxController
494f56509ae3 : Add aconfig flag for connected displays drag-and-drop
10e65babc1ca : Add a list of Biometric protected packages
ca6ad70cc19d : Reapply "Fix perf regression VisualStabilityCoordinator"
3ae4eb678522 : Get notification panel density from configuration instead of CentralSurfaces
1886eea92758 : Expose current viewport transform of PointerController
64a70881b3ca : Check if cursor has moved out of viewport bounds in CursorController
35ad883dae38 : Remove obsolete flag android.content.pm.support_minor_versions_in_minsdkversion
e0195319f6b5 : Small cleanup AccessibilityWindowManager#updateWindowsLocked
5ab32f19c388 : QSDetailedView: Create BaseMediaProjectPermissionViewBinder
0456a20a5884 : Introduce relative insets feature flag
1886f15c4c3a : [Catalyst] Provide context for PreferenceLifecycleProvider.onActivityResult
c6146f59c41e : Add flag for adjacent TF refactor
d6ecfc7de6ca : Allow SystemServer to create VCN service with a service initializer
46759e76ca88 : [flexiglass] Set up initial doze state early
f595029c572e : Batch noteOperation binder calls in the client to reduce call volume
635693fc10c3 : Inject repeatWhenAttached CoroutineScope to prevent leaking
79045fbe53eb : Update to ToT RemoteCompose
44b6c3fd0266 : Add GTE compatibility enum to Surface.
7a4928e2c286 : Revert "Don't skip calling requestLayout to the parent view"
3b74adc96aa2 : Integrate `getAudioInputControlPoints()` into SettingsLib
7a0c3216d68f : Use cubic for transitionEasing
e2d2a00c06ba : Fix bad parcelable for deviceId
e000aac36c41 : Resolve change in getService return type
7dba6f419b2b : Add xml serialization of envelope vibration effects
caf782601ff7 : Use ShadeDisplayAware window manager to add the Shade view at first
7b04c718466d : Move shade layout params to dagger
4651a74e561d : Log available types when selected type not in available list
3a5f1c0d048e : Add methods to return AppFunctions request and response sizes
8a3bfcc914fe : AudioService: SpatializerHelper queries spatialized masks
077e8b2f24f0 : [1/n] Fix Restart and Resize hints flickering
d26052cce23e : Add debug logs to vendor session callbacks
b37017b7b06b : Use correct context, resources and configuration in QS package
bbe0e03fcc4a : Use correct context, resources and configuration in bouncer + keyboard packages
65c4cb7fabc1 : Add showing a SafetyWarningDialog when requested by the VolumeController
c6cae2172315 : Use previous valid section key of a notification that was updated to an invalid section
8aaf1c44ac30 : Finally fix TestModeBuilder so that it clones DND correctly
fce5e5cdf430 : Support custom virtual display dim brightness
4d94fca58d35 : Guard minSdkVersionFull attribute with android.sdk.major_minor_versioning_scheme
efb8469b96ba : Don't skip calling requestLayout to the parent view
a0ba0e12d6da : Desktop: check wallpaper to know if last window is closing
2c5a552c2abf : Update MediaProjection logging atoms for new windowing metrics
060d32262d56 : Get topology copy, set topology normalize
53d5ddddbb99 : Cleanup old sysprop flags for KeyboardBacklightController
3451c1e1233b : Cache isIgnoreActivitySizeRestrictions
d0ed007087fa : Lazily inject ShadePrimaryDisplayCommand
443c2cddb89c : nfc(api): Fix HCE services metadata name in javadoc.
6e0081165014 : Fix crash when IntentFilter doesn't have any data type
3e824eb660d2 : Update error bug id for flag: hearing_device_set_connection_status_report
5bf40cec7b86 : Add ringer buttons animation
d36e3ab3e26f : Preload HttpEngine into the Zygote via an API
6d689e606d3e : Correctly update the z-order list on display move
a8f09b0ca71c : [Expressive design] Remove background for UsageProgressBarPreference.
f1ba87971744 : Fallback to old UI when DeadObjectException is thrown
897c0e136b4d : [Catalyst] Add PreferenceCategory metadata
c87646be565b : Fix NPEs in ShareTargetPredictorTest.setUp()
bd581c9d8970 : [API feedback] Add permission for color zones
65e15f48d74b : [Catalyst] Support sensitivity level
6b2a4aa10e9d : Temporarily disable supervision when supervision app is no longer the profile owner, making the sync fully 1:1 with DPM.
78fd2aaac53b : Remove a TODO that is no longer relevant
bdfb4963c24e : Remove ComponentName check for getAutoTimeEnabled and getAutoTimeZoneEnabled APIs.
6ccbf099e7d3 : Topology listener API
72158f68fd26 : Use task user Context when starting Home task for Desktop entry
14f92f466eeb : Test: Adding Test Cases to NdefRecordTest
1b5037f72ec6 : Add ImeTracker onFailed when calling window != ime input target
b568fb96dee6 : Remove integrity verification from VerifyingSession
085afaa8727d : Add Haptics to the new VolumeDialog.
ec361ef63825 : Fix: LAUNCH_VOICE_ASSISTANT gesture should launch assistant
ade509308a30 : Fix: Open notes shortcut is not customizable
c4edf3834c18 : Use longer timeout for Desktop drag/resize CUJs
90977068e34a : Use bugfix flags for desktop transitions: enter, exit, alttab, applaunch
73c68764a77b : [Shortcut Helper] Updated system shortcuts
6f30c461203d : [Shortcut helper]Moved Recent Apps Shortcuts to System Category
c17c12c0b8ff : Disable desktop mode on VDM displays
34cf9e7a0dbb : Revert "Treat undefined flags as false in resource flag processor"
1ee5d0f7d70a : Ensure that transitions are unregistered when the launchable is detached.
4bce8a30ac9d : Remove STLState.snapToIdleIfClose()
74a359c9e5a5 : Fix IndexOutOfBoundsException when accessing RemoteCallbackList
691c9de49620 : Hearing device local data handling
f13bcca8a039 : Refactor scanInstallPackages
bee03574c9f9 : Resume original session even if any of the pending sessions fail
4352680c5014 : Revert^2 "Mark @FlaggedApi flags as exported"
292ad4389f60 : Revert "Revert "accessibility_flags.aconfig: remove leading whit..."
e82d9b2cb1ff : Always set transition ready in executeAppTransition.
66c04410260b : m3: Refine AlertDialog style
a3cf12bca488 : Fix API javadoc link
5c29abcf414b : Fix TopIntroPreference title visibility
160cdc8e4a8f : Add multi-user support for Dependency Installer Service
ba15cd8b8c7b : m3: impl baseline button styling Outline & Text
4f65df857646 : Add checkServerTrusted variant to X509TrustManagerExtensions
332cb16752fb : Adopt end animation callback adjustment for QSIconViewImplTest
86f34426c515 : Revert "Adds missing permission annotation to HubEndpointSession"
85a0ff268fa6 : BinderProxyTest: fix failures
3699637c8550 : Add a flag for the spatial model app pushback effect
d56fb4d66fd5 : Allow same displayId to override DisplayRotationCoordinator callback
5208c4b74cd9 : [NTN][VZW P2P] Enable `config_allow_check_message_in_not_connected` by default in telephony config
a3c846db7586 : Attempt a load reset on touch mode start
98d6fe4036d9 : fix spelling error
5478740a6f1b : Let bottom split caption be an inset source.
5ca2b3280779 : [CredMan System Metrics] - Bundle T1/T2 Session Ids
ce29e94ccdb1 : Add mattbuckley@ to OWNERS for libandroid.map.txt
9505d8b653a8 : Allow admin flags to be used in PermissionController
121ae21b2996 : [1/2][PiP2] Fix enter-pip for AE case
3914b179cab4 : Disable stack-walking for coroutine tracing
c948c80be52d : ArrayUtils: add secure zeroization support
d7f8cd7fc26c : Import translations. DO NOT MERGE ANYWHERE
317917038640 : Import translations. DO NOT MERGE ANYWHERE
9ce79608ad7b : Import translations. DO NOT MERGE ANYWHERE
3ad1089f3070 : Import translations. DO NOT MERGE ANYWHERE
6078fc79f011 : Add Flag for Metrics SessId Bundle
71c61ded4a3c : Import translations. DO NOT MERGE ANYWHERE
6883cac9c177 : Import translations. DO NOT MERGE ANYWHERE
06be005f765b : Import translations. DO NOT MERGE ANYWHERE
7749835f6751 : Coarse locations based on population density.
32af37d886d9 : Use active/inactive fgs calls instead of delegate.
823fbe305e1e : Expose getRefStageBounds methods.
505cce5fca08 : Import translations. DO NOT MERGE ANYWHERE
a908030f1efd : Import translations. DO NOT MERGE ANYWHERE
6a770996c758 : Add MR2ProviderService APIs to support system media routing
df4b93c28bda : Improve jank monitor debug overlay
a90964e18359 : Fix total wakelock duration calculation
3f08004e18a2 : Using interface descriptor as the key for various interface extras
b4ed857b7f05 : [HostStubGen] Now method descriptor is optional in policy files
d0aee4d9d185 : [Custom Key Glyph] Make display order of shortcuts the same before and after the flag is enabled
6cd207505797 : Add dispatchModeRejected in NativeDisplayReceiver
ab8f06b5f323 : Migrate VCN to separate non-updatable libraries
40aba00f90cd : Remove headroom selection type and move interval to support info
5872195a63d3 : Add predictive back support in origin transition backend.
e5fbb24e3858 : Cleanup flag "restore_a11y_shortcut_target_..."
a6a9a75b65b0 : Add a script to extract last commands executed by soong
62a50c0b6805 : Use SDK_FEATURE_COUNT for IPC cache size hint
44a9810e063d : Revert "Added LocalAndSimContactWriteException in ContactsContract."
4267a917a8fe : [framework] Add route type to routingTableEntry.
93345c3ccc73 : Add a mapping for compilation reason "cloud" for app startup metrics.
873e05c17ce2 : [audio] Rename ST isAllowMultipleTriggers for API
7726696e11b1 : [Autofill] Add new API for getting FillEventHistory
90aa7737302c : Test: Adding Test Cases to NdefRecordTest
f0d51dde83d4 : Change ravenwood default log level to verbose
673cd90d51d9 : Cleanup flag "a11y_menu_snackbar_live_region"
b73b432a1a71 : Cleanup flag "manager_package_monitor_logic_fix"
fb13701c6fca : Add OngoingCallInteractor
604d9e5e5057 : Deprecate KeyguardStateController
9ba13b48cdde : Add ShadeDisplayAware to classes left out
75133b2eba25 : Make StatusBarTouchShadeDisplayPolicy the default one
27151bb89816 : Add default role holder config for test role
dd281f6bbcef : Cleanup flag "floating_menu_narrow_target..."
9eb44a88ed90 : Use new tiles in TileRequestDialog
807995d9d2b8 : [Notif] Update HeadsUpManagerImplTest to use Kosmos more heavily.
bd2a713862a1 : [Notif] Convert HeadsUpManagerImplOldTest to Kotlin.
38fa5aa83f7b : [Notif] Remove TestableHeadsUpManager, use implementation directly.
04f8300e4c73 : [Notif] Rename HeadsUpManagerPhoneTest to HeadsUpManagerImplTest.
3c9940c66820 : Fix for the lock icon issue
38bece23010e : Convert hidden SatelliteManager APIs to System APIs.
22cd72d8bed9 : [AAPM] Catch exceptions thrown by AdvancedProtectionHooks.
70fc7286f345 : Log metrics for vendor vibrations and sessions
b87b97921307 : Don't kill apps if they are not in restricted mode
ae9506b4a2ee : Change back navigation impl to rely on observer
b3b8b7df7ca4 : Add new native namespaces bindings
c7c837bcb409 : Allow android.os.flags to be used by ART.
dceb59630d2d : Fix possible NPE in DisplayPowerController
158b749dd4b2 : Use a read write feature flag to control showToast when extra intent keys not collected
5fc77691beb3 : Fix flaky VibratorManagerServiceTest for vendor sessions
2c556b9cbc44 : Revert "accessibility_flags.aconfig: remove leading whitespace"
322dd88f9ded : Revert "Mark @FlaggedApi flags as exported"
8ec19120847c : Added reset custom shortcuts dialog
426875995535 : Allow subclassing WindowManagerImpl
d65a5c4e1b45 : [Relayout] Enable relayout flags
a952f12491c2 : Move API to be used in MODULE_LIBRARY
63c49b7eb0b3 : Allow specifying gravity for overlay display
5ac5a9859a6b : Brightness Tracker disable sensor if screen off
51a73b64ae41 : Mark @FlaggedApi flags as exported
01a0ef4cd0bd : Add flag to enable SettingsBackupAgent to collect B&R agent metrics.
c06a7fd43bdd : Added data refresh after new shortcut is added/deleted
ef17e84e6bdb : Prevent double haptics for back events
7d02b3beda24 : Use secure setting for secure windows on keyguard
683b923f3402 : [0/n] Create bug fix flag for async CompatUI relayout
05b298485358 : Fix NPE when creating mirror displays w/o VDM
9b29f402865c : Revert "Allow admin flags to be used in PermissionController"
290bef8ffb2f : Add warning log for when CUJ times out
12167a25050c : Make `SupervisionService` react immediately to profile owner changes.
2997bdcd7834 : Migrating BrightnessPowerClamper to BrightnessStateModifier interface
52f3a205b762 : Add shell/desktopmode to ktfmt directories
0e25689f20c7 : Run ktfmt on WindowManager/Shell/s/c/a/wm/shell/desktopmode
b6a751303702 : [API feedback] Add APIs to set default profiles
a896f64496e4 : Run ktfmt on a subset of files under WindowManager/Shell/s/c/a/wm/shell/desktopmode
72526261d97e : Only show physical display as recording target for screen recording
5dcd5e724f07 : [12/n] Implement tests for LetterboxControllers implementations
651542d05055 : [11/n] Implement LetterboxController for multiple surfaces
a8e26bab3b8b : Set runtime flag to zygote
d017391fcc7c : Enhance TelephonyDisplayInfo API
4f699aab2a43 : Add PopulationDensityProvider
97b1176de2fc : New System Api: getGroupIdLevel2
fec8df10fa48 : m3: guard wear material3 UI change for AlertDialog
cce0bd9531cd : Remove unused methods from ActivatableNotificationView
1f5fc948461e : Remove the NotificationsImprovedHunAnimation flag
df439a3a726d : Add new surface binding and auto ADPF methods for ADPF Timeline API
57e0a7c584e5 : Refactor DisplayContent supportsSystemDecorations
42b638879707 : Flag guard when we call into InstallDependencyHelper
32af9bf7bf1e : Remove interception capabilities in STL drag handler
d3414b06d68d : Allows the system window to be shown during the PB animation...
6b2d8fa106e6 : Add flicker tests for non-match parent exit PIP animation
4ec32e1b271a : Fix padding for non-icon Preference
a67d79d6bf23 : Remove unused local variable classloader in NotificationListenerService.
dffad1dd6896 : m3: impl basic Material3 design of progress indicator
e88ff9e24b9a : Adding Api that allows apps to provide URI to be utilized to switch user's session to another surface
21b37bb7cf0c : Add test for 16kb app compat
6e85162d5830 : Update comments to point to the new location of event.logtags.
3ce95c6963fc : Handle private API usage from System UI
d16442737d4e : Remove getSystemProviderForUser function
17ec432bfea9 : Fix WMShellFlickerTestsDesktopMode SnapResizeAppWindow
259528f82bf3 : VPN: fix crash on missing CCM
f046e957fd2e : Treat undefined flags as false in resource flag processor
880d6ce980fc : Assert Parcel not in pool when used.
d1e94e1d5b74 : Revert "Modify bookmark shortcuts for platform alignment"
d001734893a9 : Move displayId out of KeyguardState Transition object.
1388aadb7fa6 : Change system API getDeviceId to be @NonNull
bb7e5b24f15a : Add atneya@ to OWNERS for voice service
d3721ee52bb5 : Fix AuthServiceTest for multi-user systems
8d52b3e127b7 : Cleanup flag "create_windowless_window_magnifier"
ee69765160cc : Implements ContextHubService.registerEndpoint
e11147f7ff3a : Add a way to configure log levels
66e289404e87 : DateTimeView: Add additional display configuration options.
559d1005c63a : Cleanup flag "manager_avoid_receiver_timeout"
0ec80cdb0547 : Cleanup flag "pinch_zoom_zero_min_span"
420db78c922d : Add NDK support for thermal headroom callback API
f371d77c4e56 : Apply edge-to-edge insets in A11yMenuSettingsActivity
df2bf84c6438 : [Ravenwood] Remove usage of RavenwoodFlagsValueProvider
c90595c9e6b5 : Prevent premature-ready between activity finish and completePause
d71067996f9a : Revert "Tile perf: offload tile state adapter to bg"
8085da84faaa : Clear calling identity when getting global security state
96d0c5303ce9 : Adds missing permission annotation to HubEndpointSession
0cbfca13ba27 : Put education changes under new flag
71c02c1c051f : Remove complication duplication
2d5b6806e3a1 : Add WindowManager flag for capturing the power key event in the foreground.
4a8492c8a3ba : Use different component name for SystemMR2Provider2
4e5754eb549f : Migrates Monet's Style Enum to @IntDef
c6a2bd825ec0 : [1/N] WindowDecorViewHost: Add viewhost/supplier interfaces and default impls
61c54195dcca : Revert^2 "Add device state Hal service"
2c1a837847e5 : [Media Quality] Add Temp Id Map to Sound Profile and refactor
04d882285002 : [Media Quality] Implement Remove Picture Profile API & temp map functionality
5c5ec0c92e5c : Update Desktop Mode repositories to be user-aware.
3508c4493d3f : Flexible Split: State manager
c0f32de2f6e8 : Deprecate cc_binary aconfigd and the controlling flag
5e0e68f1bdca : Allow SystemUI to use the concurrent message queue
28929e4b66a5 : Revert "Add new ASurfaceTransaction _setFrameRateParams API"
3fbd6829ff8f : [audio] Add me to OWNERS
e31150040e1e : omapi: Set min_sdk to 35
ce8bee1b93af : Pre load all icon drawables for QS tiles
f566a8b94c45 : webview: implement new minimum version check.
0404625c4cc8 : webview: refactor compatibility check.
26fbcb00c682 : [PiP2] Only save reentry state on expand
f08622ca06d8 : Call onStopped before creating new WifiPickerTracker
183aa63be080 : Remove unnecessary trade in mode protection level
bb088a214bc5 : Stabilize real apps pip tests
816a07813118 : Revert "Use SysUI TAPL on notification tests"
6491a452d19b : Add flag to narrow the meaning of JANK_PERCEPTIBLE.
b60b2fd231f3 : Added aconfig flag for updatable TextClassifier OTP detection
5c1462b48f55 : Ignore non-pointer events in LetterboxScrollProcessor
b07cf2b002e5 : Correct flag namespace
852111adcec8 : [Media Quality] Add User Ids to all methods
49949c8a63aa : Add a flag to cache hasUserRestriction to avoid unnecessary binder calls.
c36cc2b35b97 : Move focus for windows in freeform mode
94305338d9d7 : Make preview smartspace correspond to the selected clock (3/3)
250bfa4be060 : Improve Text Color Contrast in Glanceable hub
be4fc95e28cf : Preserve unsuspend behavior
1aaeb38e2544 : Launch card intent from qs tile when flag is enabled
cc90e8c15612 : Add routing type flags
945dd77db938 : Import translations. DO NOT MERGE ANYWHERE
ccb7422b283d : Import translations. DO NOT MERGE ANYWHERE
2a23c34bec36 : Import translations. DO NOT MERGE ANYWHERE
8757df680859 : Import translations. DO NOT MERGE ANYWHERE
6b12adf2a2ba : Import translations. DO NOT MERGE ANYWHERE
6c4eed263616 : Import translations. DO NOT MERGE ANYWHERE
4b4777d493bc : Import translations. DO NOT MERGE ANYWHERE
0d6e6f6eeef2 : Import translations. DO NOT MERGE ANYWHERE
00b56e73bc6a : Cleanup unused Notification.Builder.makeAmbientNotification()
49323a3ddcb2 : Import translations. DO NOT MERGE ANYWHERE
687af8db702c : Import translations. DO NOT MERGE ANYWHERE
af469ac6e78e : Update App Function documentation
8d3f4d10150d : Import translations. DO NOT MERGE ANYWHERE
7b7604f5672b : Improve glanceable hub lockscreen affordance string.
b5888bb61654 : Notify interaction finalized when users are changing the seekbar progress without touching
a86fe5b7f09c : Protect callbacks from concurrent modification
1126c114048a : Import translations. DO NOT MERGE ANYWHERE
f4152312b595 : Import translations. DO NOT MERGE ANYWHERE
3712d14e55c5 : Import translations. DO NOT MERGE ANYWHERE
071a30c297b2 : Import translations. DO NOT MERGE ANYWHERE
a3eacb59cf56 : Don't invalidate sim data when indexing by slot id
0dab8822df9e : Import translations. DO NOT MERGE ANYWHERE
860ab5c1a415 : Import translations. DO NOT MERGE ANYWHERE
56a07d300e6e : Import translations. DO NOT MERGE ANYWHERE
441d2ae23716 : Import translations. DO NOT MERGE ANYWHERE
2ef6b305bbb1 : Import translations. DO NOT MERGE ANYWHERE
a9b90d563d76 : Add marziana@ to wm/ OWNERS
aa62a769b356 : Import translations. DO NOT MERGE ANYWHERE
9aa1e2dd3940 : Import translations. DO NOT MERGE ANYWHERE
aeb992af9fdd : Import translations. DO NOT MERGE ANYWHERE
1c6733e4ce4f : Import translations. DO NOT MERGE ANYWHERE
eaacf50d417e : Import translations. DO NOT MERGE ANYWHERE
0a0422f432e2 : Import translations. DO NOT MERGE ANYWHERE
01e7f0ab2161 : Import translations. DO NOT MERGE ANYWHERE
c0830f1a5356 : Import translations. DO NOT MERGE ANYWHERE
9940815f9f10 : Import translations. DO NOT MERGE ANYWHERE
25be91bcbb14 : Remove TODO for per display implementation of Bubbles
d6c9d73e2c4f : Import translations. DO NOT MERGE ANYWHERE
e17c37b59c6d : Import translations. DO NOT MERGE ANYWHERE
90dd6c9a90f0 : Import translations. DO NOT MERGE ANYWHERE
78b21c6a5889 : Import translations. DO NOT MERGE ANYWHERE
d4bdd2f1ddf8 : Import translations. DO NOT MERGE ANYWHERE
cbe091b75c99 : Import translations. DO NOT MERGE ANYWHERE
e020537af140 : Import translations. DO NOT MERGE ANYWHERE
cd0c8c0c2e1f : Import translations. DO NOT MERGE ANYWHERE
0b25a039b5f0 : Import translations. DO NOT MERGE ANYWHERE
b4c718e6bffc : Import translations. DO NOT MERGE ANYWHERE
3df3a5b60099 : Import translations. DO NOT MERGE ANYWHERE
ca23b3698997 : Import translations. DO NOT MERGE ANYWHERE
93ed566a5b2d : Import translations. DO NOT MERGE ANYWHERE
e6d13e26482e : VirtualDisplay brightness API council feedback changes
7ab4751a2a90 : Add back the role-dependent VDM mirror display logic.
8aefc71f84ea : Passing statsToken from InputMethodManager#hideSoftInputFromWindow
ca0e87be6094 : STL added IntOffset <-> Int to SpaceVectorConverter
a0e65ae391f8 : Activity orientation experiment on only large screens
3316252d69d8 : STL deprecate SceneScope
bcc055a5cd0e : Revert "A few fixes for animation takeovers."
73a982261773 : Adopt end animation callback adjustment for tests
fded36e36e17 : Revert "Add device state Hal service"
cd3bf7ae5806 : Polish transfer starting window across more than two activities.
fa0dc6157c28 : Fix missing edge extension for task fragment transition
1d26f393038f : Mark @FlaggedApi flags as exported
14c51e0d6225 : Revert "Grant CAPTURE_CONSENTLESS_BUGREPORT_DELEGATED_CONSENT to..."
32d5fd1085e8 : Allow admin flags to be used in PermissionController
7a98b889bacc : Block the shutdown process until the snapshots are written to disk.
b7222fdf7537 : accessibility_flags.aconfig: remove leading whitespace
cf89b3d25cc3 : Add aconfig flag to use HaTS for low vision feedback
7d4363e1ea4d : Revert "Add flicker tests to verify non-match parent on exit PIP"
c9fad93912aa : [Catalyst] Add PreferenceLifecycleContext.requirePreference
e9a3e5f3de78 : [Satellite] Enhanced the satellite metrics to record datagram count per message type.
d49e0d4accf5 : Fix wrong parameter order for main/bg thread executors
978753619590 : nfc(flags): Set min_sdk to 34
dbeccb006983 : Fix race when sending CSD attenuation value
2e03a7855be4 : Deprecate cc_binary aconfigd and the controlling flag
80b57afa7ee5 : Allow refresh creator token for edge case
60e504232a82 : Add visibilities to VCN libs
75068d5340e8 : [Ravenwood] Merge libinit and libsysprop
154a40d42e9a : Fix notification tests in multi-user environment
a4fa0cc0c779 : Add a subclass of IntrusionDetectionEventTransport that overrides the methods for a real datasource.
819df3ea9031 : Added aconfig flag for TextClassifier choice API
48a9009d66d7 : [Media Quality] Adding Sound Profile Parameters
fb14b7479a7b : Fix bubble bar location on device boot.
57a5341d6437 : Cleanup flag "proxy_use_apps_on_virtual_device..."
39829012a7f6 : [flexiglass] Moves transitions to SceneContainerConfig.
eeeed8689b58 : [Ravenwood] Update core property access check
42a8d698e4e4 : Log BackportedFixStatusReported
ee150ef70463 : Avoid the reference to ApplicationInfo in BroadcastFilter.
790631a15d8d : Remove @Deprecated from Path.computeBounds(RectF, boolean)
0aeb5ca79c98 : switch com.android.server.testutils.any to org.mockito.kotlin.any
48c038a9b2c2 : [Fill Dialog Improvements] Setup aconfig flag
06e2d35e0094 : Left align toolbar popup when larger than viewport
52c8f814befe : Update ADPF_OWNERS to include related files more broadly
b879d0d022d3 : New window for drawing debug overlay
8b52b29be134 : Add a new API to let media routes have visibility restricted by permissions
302899e4466a : Add flag for media ui updates
313c73905043 : Remove the constants for calculation window size
dd0fe2271d01 : HDMI: Add atom logging for power state change on <AS> lost toggle
88982412aa69 : Refactor `TestLoggingService` from a locally-started service to a stand-alone test application
ced265e18719 : Add the `TestLoggingService` service for mocking communication with GMS core.
ba5e334c4606 : Added uninstall event metric handling and unit tests.
a816d2e54409 : [Media Quality] Implement Sound Profile get related APIs
61bf466acd4b : [SB][Notif] Refactor HeadsUpAppearanceController to use PinnedStatus.
c6823fcc82df : [SB][Notif] Small cleanups of HeadsUpAppearanceController.
d8712cb530ca : Trivial javadoc fix about calling user
f3ff117bfe7b : Add bug fix flag for window decor SCVH caching
f52c997ccbd3 : Show alt bouncer if needed when dismissing keyguard w/ callback.
9ee545359585 : HDMI: Enable CEC on boot-up if low energy mode is selected
fd2ae2294a4e : [W] Introduce DeviceId
31ecf350869e : Opt out of predictive back for activities with custom onBackPressed handling
64d8120d418c : Fix ReturnValueIgnored errorprone issues
1bb2962a5ee6 : Log App Header move & refocus metrics
81eca92f8a59 : Log metrics for maximize / restoring a desktop window
1ac8ea647ee5 : Reland "Implement alignment checks in PM"
4057e94323bb : Only set task shadow/corner for freeform tasks
3fcf1b73f3f9 : Fix NullPointerException when ActivityManager is not available.
64ac91f78b6f : Add flag to guard boosting on touch in ViewRootImpl
3498ce7981dd : [pm] Use APIs for ART managed install files.
d61fc450528c : dock_observer: allow dock rotation in OOBE
cd32b9c96b64 : Revert "Fix RecyclerView crashed problem"
680eb904b24b : Add device state Hal service
c71850d297ea : Writing tools: selectively enable writing tools in TextView 5/n
02d89958b2d8 : Allow PM exported flags to be used by ART.
d5fc30bf3707 : Add initial plugin for contextual auth prototypes to support BP and bouncer.
b8e7a1ae6de8 : InputSettings: add touchpad system gesture setting
a30fb7eaa3ee : [AAPM] Update SPA settings to show advanced protection strings
8a285df48da2 : Include last focused id to each Event in FillEventHistory.
7830dd80217f : Introduce policy to move the shade window according to the last status bar touch
49005e36f8eb : Ensure connect device button is shown
3af45162caf6 : Implement responsive grid for hub
96913de1da29 : Grant CAPTURE_CONSENTLESS_BUGREPORT_DELEGATED_CONSENT to shell for BugreportManagerTest
78feb3d52724 : Expose sysui runtime values to wallpaper picker (2/2)
ef60e0f73ff8 : Format files
4bd812517974 : [Media Quality] Adding Picture Profile Parameters
540ed5d885ee : Switch to using a ThreadPoolExecutor for widget inflation on the hub
6b2f64df7e2e : Do not re-use dead shared isolated processes.
bf6c7fd5e946 : Fix NPE in onTransactionResponse callback
e9727237d817 : [AAPM] Add DevicePolicyManager#getEnforcingAdmin for settings pages
9165f171d282 : camera: Add FMQ support for camera2 impl SDK
c9c0e5394201 : Create a new trunk stable flag for intent redirect
b33444e601fb : [Notif redesign] Increase min height of notif
42db53c8ff5a : [Notif redesign] Reduce headerless notif margins
a07975e72e50 : Make the log format more realistic
0d12d5ce6e72 : Don't show udfps controller multiple times
72b602b026d4 : Force disable AOD
d13755c99acd : [Notif redesign] Align actions to larger icon
877c9060cbd0 : [Notif redesign] Align actions in old media notif
62c9bc2f11bf : [Notif redesign] Bigger "small" icon in expanded notifs
28c4dd6ca547 : Add virtual flag that indicates QS UI in compose
5e4c386c1c81 : Update deactivateSplit comment
1cc46532f2c9 : [AAPM] Don't rerun all callbacks on register
6f8170bbc83f : Support e2e in controls activities
4bea73eb8c7a : Fix Snapshot reacting to changes
011e1e693d66 : Mark camera as closed before notifying listeners.
717fe4abb5b6 : Use main executor for handling key gesture events in desktop mode.
ebbb72d787f9 : End ripple effect on maximized button when finished dragging
7d9b085db25e : Expose an API for APK signature verification.
b96a6f9cd677 : Show status bar notification icons on all displays
d2c8e770f8d4 : [RONs] Update notification colors
5570d1e64dcc : Block installs of incompatible minSdkVersion minor values
55a6cb5f4f34 : Build: add hidden method to convert major.minor int to human readable string
810bc56a97b5 : Build: add hidden method to parse a major.minor String to int
a5ce2c2870db : LightBarControllerImpl.Factory: call start to keep legacy behavior
298083efb047 : [Custom Key Glyph] Show function row keys according to key glyph
a93d6a7794ef : Adding velocity threshold for home touchpad gesture
ff879bea21a6 : Use SysUI TAPL on notification tests
30b700772042 : Generate content description for shortcut label and actions together.
26d4f20dc483 : Fix Volume Dialog button size
0bf77dece307 : Remove legacy DB for app launch bookmarks
5e09cf78acb9 : Improve shared element tests and test framework
676b0c6958f3 : Introduce SharedElement transitions with NestedSTLs
79998988c993 : Inject repeatWhenAttached CoroutineScope to prevent leaking
07fd9bb8ee26 : Fix when enrolling SFPS Fingerprint the Japanese sentence is cut off when pressing the power button.
4300dbae958f : [10/n] Handle recents in Letterbox surfaces lifecycle
c24b4175137b : [Notif redesign] Circular icons for Call Notifications
26b692ae0226 : [Notif redesign] Circular icons for Messaging Notifs
b1868f7cbc2b : Add flag for read_all_external_storage_files
14746f6d3602 : Add task resizing keyboard shortcuts into Android Shortcut Helper.
6daf8545ca70 : Handle session abandonment correctly for Dependency Installer
369d0425c1c2 : Convert DependencyInstallerCallback to two-way
0a19c39b4193 : Fix BouncerPredictiveBackTest renamed
41c59b4e5dd7 : Make PreferenceActivity predictive-back compatible
133512db0816 : Add ImeTracker logs for multi window mode
5cecfd5e6ea1 : Increase the interval between blocks in PhysicsBasedUnfoldTransitionProgressProviderTest
794e98489c23 : Fingerprint FRR dialog improvement
1b51723e1ad3 : Add flicker tests to verify non-match parent on exit PIP
f5ba45d68495 : Add myself to wm owners
1ec9eb8877aa : Extract getSystemProviderForUser method
0098d4f8d19a : Extract getSystemProvider method
d6c5bcb1aa3f : Correct the file name of FontScalingDialogDelegateTest
2c176e7fb1dc : [Region picker] Add new flag to SettingsProvider
f74230f0eb70 : m3: wear material3 button style can only override DeviceDefault theme
e7b2a395a1f6 : Fix PIP exit animation flickering on non-match parent activity
fe1c8dfd6aaa : Don't create variation instance if the Typeface is variation instance
9c7c9e675919 : Add a method for querying universal resizability of packages
9ee52231652e : [Spa] Upgrade Compose to 1.8.0-alpha06
6ebd5cfd7391 : Revert "Fix RecyclerView crashed problem"
507336fe487a : Fix CTS testMinimalSizeDocked failure when device supports multiwindow feature.
8a34831ed7ad : Revert "Implement alignment checks in PM"
dd852f99e502 : [MediaQuality] Adding CreateSoundProfile and RemoveSoundProfile APIs
35e7781b03a2 : [AAPM] DisallowInstallUnknownSources: set state after disable
04be9989ee12 : Update to ToT RemoteCompose
d7373e3583d5 : [owners] Remove cpinelli@google.com from core/java/android/security/advancedprotection/OWNERS
637aae69d587 : Add helper functions to S2CellIdUtils.
3fd7af14f7eb : Improve window caption a11y
9b7a4fc83d55 : Add SystemMediaRoute2Provider2
432146cef424 : Cache Sticky broadcast Intents in the client side.
a4da588e97d7 : Add support for module specified DeviceConfig namespace allowlists
785dec24741e : Add a periodic update worker for verified domains
9eb885d8ac7e : Clean and divide exclusion lists in Android.bp
7d0f3e98f51f : [PiP2 on Desktop] Restore task to freeform bounds when exiting PiP.
785c86c75c83 : Move VCN utilities from services.jar to framework.jar
96a2975aef83 : Enhance overlayconfig api for satellite access controller
2f3a9c61f771 : [Ravenwood] Update system property handling
ef20821027a4 : Modify bookmark shortcuts for platform alignment
f5e85a245902 : [flexiglass] Don't clear pending keyguard dismiss action when...
91e395cc2625 : Rate limit battery changed for max charging current extra.
fa9d92f7dd77 : Add SystemFeaturesMetadata to framework
5139e7a74b79 : Add a flag for tid affinity check
320cc6ffa6a4 : Cleanup flag "do_not_reset_key_event_state"
d58ad8023574 : Deprecate permission tree APIs
49a1ec603c8f : Add base scenario test for desktop mode unmaximize app window
a066472ff6a4 : Add DESKTOP_MODE_UNMAXIMIZE_WINDOW instrumentation.
79da36131067 : Revert "Fix PIP exit animation flickering on non-match parent ac..."
8fb5ae1439d4 : Revert "Add flicker tests to verify non-match parent on exit PIP"
f52957adfbf9 : Add material motion token config resources
ed6070c6f9ee : Cleanup flag "clear_default_from_a11y_shortcut..."
b40e8fecc227 : [Lut NDK] Add static_assert to ensure that ADISPLAYLUTS_SAMPLINGKEY_CIE_Y is the same as android::gui::LutProperties::SamplingKey::CIE_Y.
35bbd3b2f880 : Rename StatusBarTouchableRegionManager
55df5dc47a65 : Add owners for core MessageQueueTest.java
a7d1210aa40f : Add config variable for targeting glanceable hub
219e8cde533b : [sb] use HeadsUpNotificationInteractor to control clock visibility
2f5dac8f12cc : Add dialog for page size compat
801cb6ed07a8 : Broadcast to device lock state listeners inside a synchronized block
a3dd66ff076b : Use correct horizontal padding in QS.
e5e3b5bcae68 : Add keyguard suppression interactors.
f44dc885d32d : Implement device lock state listener
2b5c1ab95693 : [PiP2] Cache task info in PipTransitionState
9ed2bae6ffdf : [flexiglass] Update NSSL alpha during alt->prim bouncer transition
c3e47ec1cad7 : Implement alignment checks in PM
5f3fa672cd55 : Add AppInfo flags and APIs for 16Kb appcompat
134e9d515380 : Put back annotations in DeviceStateRotationLockSettingControllerTest
acc477c4eff0 : [SB][Notifs] Hide status bar notif chip if the app is open.
d249b5ad78ba : Visit Uris for views in RemoteCollectionCache
7c449858208c : Cleanup flag "add_black_background_for_window_..."
841adb565d66 : Nearly remove NotificationSectionsFeatureManager
f56049e6a256 : [Notif redesign] Rename notification_2025_template_text.xml to notification_2025_text.xml
33920aca978d : Address API Review comments: MediaCas, Tuner and TunerResourceManager
43a4976c33f2 : Rename private methods in DPE for clarity.
c12a7bfdb331 : [sb] Don't show notifs in view model if any chip is showing
facfd9a453e8 : Add VcnLocation to reflect the VCN build system flag value
91998e96d627 : Cleanup flag "migrate_enable_shortcuts"
aa4573bb1a05 : Adaptive scribe bounds: Connect the dots 4/n
1df00fe78b3e : [ID] Add tests of data sources
649d3eda20d2 : Revert "Topology listener API"
6fb128f14a77 : Add checked and expanded subtype to intdef and toString
f9a89f82d0d6 : Refactors reason to common HubEndpoint.Reason
c73526a908ca : Cleanup flag "send_hover_events_based_on_event..."
7690586c0df4 : Cleanup flag "remove_on_window_infos_changed_..."
3aa2664a7f26 : Cleanup flag "skip_package_change_before_user_..."
b0ed215eb316 : add aaos_audio_triage
84e0ec2e469d : Adding null check and optionalBackgroundInstallControl flag check. Dependency service (Background Install Control) is disabled for wear devices as mentioned in: b/340928990
113d1592310a : Hide touchable surface from accessibility
964fb064954d : Introduce AnyExternalShadeDisplayPolicy
5f2149af859a : Allow pausing touchpad tutorial animations on tap before gesture tracking has started.
543b24e29f2d : Fix crash when minimizing a window using keyboard shortcut as the transition was not running on the shell main thread.
980a3d60e290 : Add runtime flag to zygote
3fa30d350b56 : Introduce ShadeDisplayPolicy
13a3668888d1 : disable 2g networking when advanced protection is enabled
c8bebaf2bab5 : Create a PIC nonce-watcher for external clients
b1bc095bee87 : Add adservices to WritableNamespaces
e0a468552113 : Fix initial starting bounds for SV crop
b4ebf52f1f12 : [ID] Add Network Logging
c0513c3aa566 : Use AutoHideControllerStore to provide separate instances of AutoHideController for multi displays
2d490709f77b : Make MultiDisplayAutoHideControllerStore a CoreStartable
181cd4b28411 : Listen to bold text setting change in onConfigurationChanged
6c6a83abeab9 : Clean up flag(2): com.android.window.flags.always_draw_magnification_fullscreen_border
e68e3a305964 : Partial revert of "Prepare LightBarController for multi display"
16d4a302af5d : Add APIs for setting static wallpapers using WallpaperDescription
77c6605c6049 : Reorient SIM processing around slot id
789ae288b086 : Enable WindowContext to move to another display
7a5c38456571 : Ignore flaky test: lockscreenVisibilityWithScenes
e5dece1bedfe : Add feature flag 'android.permission.flags.wallet_role_cross_user_enabled'
7c5d345c03e0 : Add migration code for SetKeyguardDisabledFeatures()
d965a6ef90e7 : Ignore custom glanceable hub touch on lockscreen
347c641e9ea4 : Revert "Fix Restart and Resize hints flickering"
a5c067faf7fc : Clean up flag(1): com.android.window.flags.always_draw_magnification_fullscreen_border
2ddac774862d : Topology listener API
11b1a28ebe65 : [Contextual Edu] Dismiss the existing dialog when a new dialog is shown
2b4929d68032 : Test: Adding Test Cases to NdefMessageTest
fed86adabe11 : Add flags to cache getUserStartRealtime and getUserUnlockRealtime.
c2bccdafc671 : Create DisplayWindowPropertiesInteractor
63f2de1e68ae : Partial revert of "Prepare LightBarController for multi display"
a4723845561b : Only enforce return animations when going to Launcher.
69d6d20877ee : Fix MultiDisplayAutoHideControllerStoreTest
96075e2ddbcf : [Notif redesign] Bigger "small" icon in old media notif
749827e15010 : [Notif redesign] Bigger "small" icon in old messaging notif
0dcbfe7fc16f : Use "expanded" instead of "big" where possible
040c53f684cb : Add shell/desktopmode/education to ktfmt directories
1ce92c69609b : Run ktfmt on WindowManager/Shell/s/c/a/wm/shell/desktopmode/education
13d5dce50e17 : Remove invokeOnImeRequestedChangedListener call if input target != control target
a6396db97e6d : [9/n] TransitionStateHolder implementation
cf2c9e4bb35f : [8/n] Define a reusable LetterboxSurfaceBuilder
0cb97b51269f : [7/n] Create LetterboxController abstraction
2eb8be0e3ce3 : Bind to Dependency Installer Service using role
680f91d980b9 : Add adb commands to enable camera compat for all apps.
c628184598b9 : [Catalyst] Improve observer mechanism for binding helper
27b5e5cc993c : App default has aspect ratio override
39d71497b277 : WatchableImpl: add lock when get the mObservers size.
41969e1dae6b : Prepare close transition before adjust focusable task.
8102e8403961 : Improve device state callback for thread context.
0f3c51210d42 : Add API to monitor selected satellite subscription changed event.
59f288cc2ccc : Import translations. DO NOT MERGE ANYWHERE
0b6f9e173082 : Import translations. DO NOT MERGE ANYWHERE
e922c862b877 : Fix build error
111a7108df8d : Skip backing up the pinned container
fe99d97565f7 : Add RestrictedSwitchPreference.ifBlockedByAdminOverrideCheckedValueTo
07618c481d66 : Adding conditional fallback to V implementation
e633ebfb8469 : Visualize power breakdown by screen on/off and battery on/charging
d292f83b58f4 : Remove support for PowerModel
e78f6bf41f57 : Add missing BatteryUsageStatsQuery fields to parceling
27c859fa3dbc : Update BatteryStatsViewer UI to reflect the removal of PowerModel
21efc2c8b016 : Cleanup LastClock references when detached
62c65c6b22a8 : Show audio sharing button and qs dialog improvement when preview enabled
b12eabfd6cee : Use the new HAL API updateSystemSelectionChannels to pass the bands/frequency to vendor service.
985207a654b8 : Used RangeInfo.INDETERMINATE in ProgressBar
fecf84c2b43e : Adjust the default bucket temp elevation parameters
a7029d10cd9f : Remove most of core/tests/coretests/src/android/os/MessageQueueTest.java
a2379ea64e13 : [Ravenwood] Fully remove RavenwoodConfig support
20e50b12d545 : Log metrics for Maximize Menu actions
5751bc575515 : Add owners for CPU/GPU headroom java files
80601f58a05d : Make GameManagerService JNI registration lazy
2a6b5a929b9b : Add support for calculation window size and tids
5c3831ff6482 : Broadcast to device lock state listeners inside a synchronized block
f90e02581bfe : Fix NPE in CombinedMessageQueue.
50b0245f8c01 : [PiP2] Update magnetized target on config change
9f1122863ed2 : Cleanup flag "always_allow_observing_touch_events"
971e0f2bbe1b : Base Manage Windows header position on task coordinates.
dd4d96430f96 : Cleanup flag "save_and_restore_magnification..."
abd6efd27c55 : m3: impl changed material3 AlertDialog design
2b139134be0d : Add clear_overrides unit test
3e4926c11a49 : Import translations. DO NOT MERGE ANYWHERE
665e9af6e89e : Return early in startTask if taskId is other split stage.
e214b24f6fb9 : Add deprecate keyword for legacy device presence APIs
a021f56f75fe : Import translations. DO NOT MERGE ANYWHERE
af18332c7e95 : [audio] Deprecate/rename mute event for clarity
ad2b61f56bf0 : Tile perf: offload tile state adapter to bg
897e38f17924 : Added LocalAndSimContactWriteException in ContactsContract.
14ccae4b88ab : [audio] Add mute event type to SystemApi
0ca219bd8488 : Revert^2 "Handling intent whose nested intent keys not collected case"
b870b936ca77 : [Ravenwood] Decouple environment setup from RavenwoodConfig
c52105f2aefa : Remove AndroidAutomotiveNotificationsTests from test mapping
b67d4afa20da : Import translations. DO NOT MERGE ANYWHERE
bbace130bed0 : Change coroutine testing for kotlinx.coroutines 1.9
5b41818cd83b : Update CompanionDeviceService javadocs to reflect the new presence behavior.
ffdb155be2bd : Import translations. DO NOT MERGE ANYWHERE
56f5383b3a68 : Import translations. DO NOT MERGE ANYWHERE
13519f095092 : Introduce a system feature annotation processor
975045f74020 : Import translations. DO NOT MERGE ANYWHERE
d281f55ca92f : Cleanup flag "hide_restricted_actions"
8392fb75e399 : Add a flag to guard new MessageQueue test apis
f73ae65b1052 : [PiP2] Save reentry state upon exiting PiP
0c3d45197877 : Add RIL constants related to satellite APIs.
e6d015d3b834 : add namespace aaos_power_triage
ed4494554b9f : Broadcast to device lock state listeners inside a synchronized block
9f79f7fabfdc : Cleanup flag "action_bar_wrap_content"
ffb9c1261acf : Break clock plugin interfaces into a few different files
2070e6336b71 : Add aconfig flag for route visibility control API changes
452a1f26ff94 : [Notifs] Replace `isPinned` boolean with a 3-option enum instead.
7e5e6dd2b2f2 : HDMI: In offline mode, allow users to keep CEC enabled
a4729d35e300 : Keep selection of widget at the end of widget dragging
26dd5cd029b0 : Fix NullPointerException when collecting bitmap sizes
e8b31f40e2cd : Change cross user permission for MANAGE_DEVICE_POLICY_LOCK.
b533728a2238 : Add @yamasani, @bills and @nalini as OWNERS for flags.aconfig file.
4190b9a677ce : Adds APIs for hub endpoint discovery callbacks
78169df01589 : Fix DevicePolicyManagerTest#testSetSecondaryLockscreenEnabled test.
e6d6411e18d0 : Fixes canEnableDisableInputMethod to use correct user handle.
e4168e799e5e : Add content description resource for Manage Windows icon.
0dad5bdb28b1 : Adding new flag for App-to-Web education
2cc8e9cfd15e : Change backoff policy for Abandoned Job failures
836e5a96c896 : [QSDetailedView] Add the generic view and view models
49a495b53ea6 : Broadcast to device lock state listeners inside a synchronized block
d2c7543e70bb : BubbleController test for logging drag events
871d741e1ccd : Move tracking input devices for trackpad to background thread
56f37dbb116e : Implement tap to resize for edit mode
ba3eb1591614 : Add flag for launching the selected card from the QS tile
9a7928b74da9 : Make GameManagerService JNI registration lazy
d9569dc3fff1 : [AAPM] Memory Tagging Extension hook
02e20beab0c8 : Extend the toast duration for screen sharing sensitive content blocked.
286bee756fba : Change action values in SatelliteManager
152db50e94e4 : Inset screenshot UI from the nav bar
a9d1a4ce68d8 : Polish notification dismiss button
18b4ee4843e3 : Allow title for empty state cta to be focusable
d4ce3c631b5b : Also compile methods in inner classes of MessageQueue #2
5757a61b6f41 : profcollect: Do not upload profiles if verity is disabled
07e9d8ae0f8f : [flexiglass] When occluded, swiping down always goes to shade.
2af8925878c6 : Exclude current app & input categories from customization
7af883b8c6ab : Backend implementation of custom shortcut removal
736c02983a63 : Don't add wallpaper activity to list of tasks
122baeed8fa4 : Change bug id for new App-to-Web education flag
38a849182c8c : Add configuration parameters to control user switcher behavior
f2d67d01dd7d : Small validation fix
82dfcef4d0d6 : [Notifs] Rename BaseHeadsUpManager to HeadsUpManagerImpl.
81cb08570614 : Camera: Enable DEPTH_JPEG extension capture output
71d83f590d10 : Revert "Make sure StatusBarContentInsetsProvider for correct dis..."
db469497ee7c : Revert "PerDisplayStore: remove CoreStartable implementation"
14e11b6a5e29 : Improve CombinedMessageQueue#mUseConcurrent logic
de4144129d78 : Change coroutine testing for kotlinx.coroutines 1.9
a312bc1bfa9a : Fix VibrationSettings to handle missing AudioManager
3e3d88e9bdbe : Allow alternate golden images for TestWithGoldenOutput
4d544e02a46a : Fix getting InputMethod from motionEvent on the click listener to get the actual input method.
44338835cfd8 : Updated typography to use GM3 title small.
acc748ab2c6d : Prevent visible background users from updating the data managed by NotificationListeners object in MUMD
2d5317c4a8f7 : Remove role from protection level for BYPASS_CONCURRENT_RECORD_AUDIO_RESTRICTION permission
0e58a30bb420 : Fix wrong brightnessOverriddenByWindow state in BrightnessSlider
af20223c11b1 : Migrate to AnchoredDraggable
7f071b9888ba : Scroll capture search: reject covered scroll containers
cf51487f83ca : Remove emptyView from AodBurnInSection
a13172285b16 : Add ShadeDisplaysInteractor to ShadeDisplayAwareModule
7a32c912af33 : [SB][Notifs] Create individual interactors for each status bar chip.
2cd7d569638f : [Expressive design] Update padding for UsageProgressBarPreference
a7beee5b9f3e : Remove task from repo onRecentTaskRemoved
c039238b0aec : [Notifs] Create notification.headsup package; move HUN files there.
e62bbe81c09b : A few fixes for animation takeovers.
2ff07513b62a : Fix bug where the hover icon on task edges continues to show inside the task bounds.
03f104defaab : Apply the avrcp volume when switching stream context
1de4398bb9b7 : Revert^2 "Added Remove shortcut Dialog UI"
ee2841957f2a : Create bugfix flags for transitions polish
9be7fc118322 : Pass in InputMethod.KEYBOARD when toggling maximize window using keyboard shortcuts.
4b3b2daab749 : Add a store to generate multiple instances of AutoHideController for multi display
20100c160c04 : Ignore DevicePolicyManagerTest's AutoTime tests.
d6615c072b30 : Update MediaProjection logging atoms for new windowing metrics
268844736441 : Allow launching only one tutorial in touchpad/keyboard tutorial
546136c2f639 : Hide UMO from the shade when QS is disabled
80bfccd5843d : Allow new minSdkVersionFull attribute in <uses-sdk>
946bce567848 : android.sdk aconfig flags: add android.sdk.flags-aconfig-java-host
23a798d29b14 : android.sdk aconfig flags: rename soong modules
1583af58b0af : Create system api for the install dependency action intent.
5c65bf6b2e03 : Add nullptr check for AudioManager Framework Component
057015f7b68e : Refactor DesktopTestHelpers
70c1afa3b94f : Always reorder surfaces on focus change
b39c74af863b : Fix the size of the handle and touch area.
e09bad524a7e : Extract interface from AutoHideController
7369d99861de : Revert "Handling intent whose nested intent keys not collected case"
8445494b6b07 : Wait for sessions of dependencies to complete
213033ecf41f : Make PreferenceActivity and PreferenceFragment handle insets properly
72fd9eaf9712 : Fix IntroPreference summary visibility
4dc2c2f161ca : PerDisplayStore: remove CoreStartable implementation
3a254cf07992 : Make sure StatusBarContentInsetsProvider for correct display is used
226dc6cf2a4a : Add VDM onSecureWindowHidden API
969fe685e9f8 : Revert "Added Remove shortcut Dialog UI"
23840b655634 : Move the multiuser package from system to system_ext
a13952080c66 : Add test for AppIconCacheManager to make sure the singleton works properly in multi-threading
64a668049ece : Added INDETERMINATE static field to RangeInfo
a3361303f376 : Restore PiP to freeform only if Desktop Mode is active.
9d4415a93c52 : Keep track of SystemUiContext before WMS is initialized
d14211317345 : Remove WRITE_DEVICE_CONFIG permission from shell user
86c74979f39f : Add IntrusionDetectionEventTransport class to provide a stable-API bridge between IIntrusionDetectionEventTransport and its implementations.
c5350f740a45 : Support application opt-out property for universal resizable
861472d106fd : [framework] Add an API to get max pause polling timeout.
d18f541b1152 : Only skip transfer starting window if orientation is not undefined.
d15272217deb : [res] Remove noisy assets deletion logging
4f541860f412 : [idmap] Small cleanup in create-multiple
c53a3da2e109 : [aapt2] Small cleanup of utils
c1447e2370d6 : Remove FrameworksServicesTests_server_accessibility
2d4dcd345e4f : Add OWNERS file for NeuralNetworks package
0b55cb0d26c3 : Update BTS Service to use push API instead of pull API in BIC
7dad2b0710d2 : Update getSupportedRefreshRates api
096fce8b465a : Only allow sysui or system to set a Task to always-on-top
bb4b474f39e7 : Add constants for app widget usage events
564bda762193 : Fix small issue where overflow expanded view wouldn't show
d907df2ddd0d : [flexiglass] Fixes WindowManagerLockscreenVisibilityInteractor.lockscreenVisibility
de21abaeafc7 : Revert "Flag guard uses-static-library parsing changes"
8452504d240a : [PIP] Split implementation of isPackageActiveInPip for PIP1 and PIP2
26b6a899f3e4 : Add and plumb PromotedNotificationContentExtractor
aca4ecdccdeb : Move keyguard lock callback listener to AuthController
c5d3b46969cf : Cleanup unnecessary logs
b536e4cd8244 : Removes ParcelableHolder from HubServiceInfo
285125455b89 : Prepare main looper before setting pre-init argv
3274290ea358 : Add glanceable_hub_v2 flag
c1c7087b11e0 : Only allow clearing NAT timeout when the 25Q2 flag is enabled
92e74bc583f6 : Update groups after domain verification.
ebade90af1d5 : BatteryManager: update capacity level description
694aa1b5c1a2 : TV audio output: avoid two TV Defaults
2848cb893b50 : Adding new flag for App-to-Web education
93a8901b8f45 : Do not allow swipe to hub from dream or lockscreen
176e379d5876 : [Lut NDK] Define ASurfaceTransaction_setLuts API
8caa1ad40872 : Require PiP2 flag off for dismiss split test
24b62eb64cd8 : Use IPackageManager.getServiceInfo
c45c27bf8f4d : Shell: Add health permissions for CTS tests.
755dd5818cb2 : Subscribing viewroot impl to only display state changes
b542fa0104c4 : Handling intent whose nested intent keys not collected case
5e40aee028ea : Remove OP_TAKE_AUDIO_FOCUS audio CAP requirement
38faea7f2db3 : Only log bubble overflow max reached when in stack view
0b2954517705 : Make clock in lockscreen preview use the correct overlayed resources
42ed62395c61 : [RONs] Add ranking changes for promoted notifications
b0fd6e822a3b : Remove media quality service from Watch platforms
7ff8f2772491 : Adds apex available for security export aconfig flag
3dbee04bd1c7 : Log metrics for dragging the header to top/sides
a85235e4f30e : Add additional DeviceConfig namespaces to allowlist
aee95f1cb85a : Add constants needed for applying Picture Profile to MediaCodec
8f9a85dd3b76 : Refactor the avrcp update on bt contextual volume change
01cec45f010b : Apply the avrcp volume when switching stream context
a285d0a92f61 : Revert^2 "Add a holdback study for concurrent MessageQueue."
0508c7f409c3 : [PiP2] [2/2] Start with a non-null pip params
5914cf83bb43 : TIS: Standardize TIS Scan Extensions API
d13117269553 : Log App Handle drag metrics
86cafeef4609 : Log Handle Menu metrics
f023f4477739 : Remove duplicate to-desktop call from HandleMenu
1f3f4ac561ad : Log DESKTOP_WINDOW_APP_HANDLE_TAP metrics
41ea0b068572 : Notification.hasPromotableCharacteristics() requires that notification is not a group summary.
e3f4692e0920 : [flexiglass] Disabled content support.
5e293d6ddfce : Fix broken MediaSwitchingControllerTest
e45ffd206000 : Apply the avrcp volume when switching stream context
dff8afc723c3 : Add MediaDevice getter for route type
1f7284d3e3ed : Change strategy to move the shade
f9f19cda269d : XML support for vibration primitive delay type
da78e1df1c70 : Rename onPackageFailure to notifyPackageFailure
25d77eb0850c : Add UMO paddings for the flexiglass to align its position in various scenarios
e61fd3810b05 : Fix VibrationSettings to handle missing AudioManager
013ba25aff81 : Clarify ShadeDisplayAware annotation usage in SystemUI
badc0a4bf357 : Default enableOnBackInvokedCallback to true with targetSdk>=36
4b7e96be3406 : [LUT API] add CIE_Y sampling key
0432bb499fb9 : Add permitted_packages to framework-crashrecovery
2026dca5555d : [Autofill test flag]: Test bug fix flag
e06662eebbaf : Update javadoc for getParentProfileInstance()
74a0022607ef : Polish velocity for cross-activity-back with 3-button-nav
64d73c4f0360 : Ensure we bind to the correct app's agent
c6d18dc3d2cc : Only call NotifEntry#getPromotedNotifContentModel when flag enabled.
1e8b673674cd : Add caching for getUserInfo method and caching tests
9f6ecc9c9d16 : Add support for NFC associated role services
bfae72fd0458 : Add additional documentation to PictureProfileHandle
49a37555a850 : Update ProtoLog cache updater to take an ProtoLog instance object
f85fa9da7d3d : Refactor PerfettoProtoLogImpl classes to take the datasource object directly and be enabled and disabled on demand
33451e9d4403 : Add options to register lifecycle callbacks on a ProtoLogDataSource
3e058164df64 : Create shared static ProtoLogDataSource
babbde81b736 : Add T4T Ndef Nfceee feature support
309ce5b064f5 : Add falsing for edit mode button
ba098ceec1c2 : Fix falsing in new QS
8d5cac62d66c : Fix dream -> communal when communal is not available
edf622121243 : Add a TableLog for hydrator
2dca76f0d6ff : Trigger notification force grouping check on group child removal
2b2f5e14b148 : Added Remove shortcut Dialog UI
152e4e9536ca : [flexiglass] Patch for the unfurl animation
162588accc8f : [flexiglass] Set NSSL stack bounds from the GONE scene
c085f48b992e : Clean up the controller reference after execution.
6fdfb791b047 : New System Api: getCarrierId using carrierIdentifier
3fcca7790361 : Add workaround for handling versionMajor with string type
bdcc6722fef9 : Rename setEnableAutoInstallDependencies API
af206d5c7b93 : Refactor education proto into 3 separate protos to allow them to be triggered independently.
3c2f615fb790 : Animation takeovers in Predictive Back.
e575477a112f : Added Tests for custom shortcut registration logic
f7f24703278e : Add flicker tests to verify non-match parent on exit PIP
c2f27ad92af4 : Fix PIP exit animation flickering on non-match parent activity
373ae848af32 : Added logic for registering custom shortcut with InputManager
aa1512c2cbf9 : Update XSD for signature permission allowlist
c6d3e0b1a2ff : Don't apply hard coded vertical metrics if target SDK is 35 or later
57dd116629a0 : Migrate PrintActivity to enableOnBackInvokedCallback=true
b86c490d6952 : Ensure the switch to DesktopMode before launching PiP
c4e0d8d54683 : New internal ADD_MIRROR_DISPLAY permission
c0b6f8711665 : Remove usages of launched gesture detector flag
3ce5b6305c99 : android.content.pm.support_minor_versions_in_minsdkversion: fix incorrect bug: field
75f6073a2626 : [Audiosharing] Use audio sharing Google Symbols
9ab2bbaaba13 : Add clock to the base supported complications set.
9c6021dd7f8a : Add base dream complication dependencies.
4f8963534076 : Add double tap divider swap functionality to flex split infra
9b7792a4f5d6 : New flag: hearing_device_set_connection_status_report
c74a43b4c220 : Handle all stage root tasks dismissed for flex split
3e952ec26c5f : Add flex support for shortcut and intent entry points
121c209b654f : brightness: add config for whether to allow normal brightness for doze
605e8abaf7c6 : Run animation end callback directly for zero duration or requested end
5a404e17c67c : [Record Issue Tile] Add Selected to visual Checkmark for Accessibility
934ae20e94de : Updated the Record Issue Tile's switches to match the Screen Record Tile's switches
a4d8368e721b : Spatializer: API to query real channel masks being spatialized
e365758510c4 : Use FocusTransitionObserver in KeyguardTransitionHandler
a3b8c14ec69f : Add default case to switch statement
f2b717e5d0ee : Store and log details for non-app visible windows
0eaa70e49bfc : Added StatsPerUidLogger powerMah overflow handling
3e51f9a3c9b9 : Add activity state check to minimizeDesktopApp
7673e94f43d7 : m3: guard wear material3 UI change for Button
0b8faf7133a2 : Set isStaticLibrary correctly
624f0cb4854e : Ensure parsing PROPERTY_FEATURE_REAR_DISPLAY_OUTER_DEFAULT
0256b32f63e8 : [audio] Add mute event type for OP_CONTROL_AUDIO
f99dfdb5ae0e : CachedAppOptimizer: Correct trace name for COMPACT_NATIVE_MSG
777bbee092fd : Implement early animation cancel thru merge
abd14123ca0b : [MQ API] Unhide ambient backlight APIs
f11b39b6bd81 : Add a flag for the spatial model launcher pushback effect
4b11d34c8f77 : Fix NotificationProgressBar tracker updating issues.
9563acf6dba6 : Make DesktopModeUiEventLogger injectable
49d2a0b976a9 : Fix multiple performance regressions
b23d45676682 : Definitions for IAMF profiles and MIME
3fc8cc021684 : Add per-file OWNERS entry for WritableFlags.java
6c9d83bc590e : services use Health V4
756f75ebd5fd : Make GameManagerService JNI registration lazy
be8064ce32ff : Fix messed-up divider position when unlocking with IME showing
3e31c23a32d1 : Roll out bugfix enable_hardware_shortcut_disables_warning
504babfefcc3 : Remove the check that excludes Telephony bug report mode from being consentless, since the newly added permission is not meant to bypass consent entirely, but instead uses the system app's own consent every time a bug report is generated.
31481450837f : GSF variations on Bouncer
f45c730596a9 : API to enable apps to be opted-out of bundles
3228124ade43 : Enforce same package name when connecting to widget service
a4faf620514f : ActivityManagerService: Add method to set media fgs active.
6a0c988e6138 : Add owners for CPU/GPU headroom files
9355a212f922 : Add isAmbientBacklightEnabled
041f6133a790 : [MQ API] Unhide APIs of sound profile
b1e4cc0e90e1 : Build system schedule mode summary using string resource
c23fbe97df1c : Deprecate the old provideConnection and related APIs in WSM and WSS
614cda7747d5 : Add lock screen shortcut for opening glanceable hub
f1609bd751c8 : Update removeConnection API to throw instead of return boolean
63ef28b1e131 : GSF variations on Quick Settings
40bc741e0d28 : am dumpheap throws error when arguments are after file
bb545ed59520 : [flexiglass] Set NSSLC#mMaxAlphaForKeyguard to 1 after LS -> Gone
f97aeb8a0eaa : [Origin transition] Explicitly ensure the correct thread is being used.
ac0b051ac810 : Allow status bar team to have OWNERS on statusbar/chips/notification classes.
bc2c44fb4e17 : Better exception handling in RemoteCallbackList
ba1de9e0b6da : Revert "TIS: Standardize TIS Scan Extensions API"
d022886c888e : Perform additional checks to determine canProfileOwnerResetPasswordWhenLocked in coexistable variant.
3cf4d64bd7b0 : Removing the SliderQuantization (unused) interface.
172621540c16 : [PiP2] Start with non-null pip params
0fee6810b816 : [ProgressStyle] do visitor check when extras is available
6e936b850694 : Add Settings for three finger tap customization
8567ee7faec5 : Adaptive scribe bounds: Adjust touchable region 3/n
3527f237d7ba : Allowing IDLE long clicks on QSLongPressEffect.
ef6705281f1b : Use SystemService lifecycle to keep track of current user
e498ffe49fe4 : Revert "Make Javadoc changes to TtsSpan related to new API"
3359e0800104 : Improvements to Settings Preference Service
9ce4d21156aa : Use new java_system_features_srcs rule in framework
8f64d5cb1fb8 : Add resources for shell/shared
a872dc963758 : Add a new aconfig file for ADPF flags
ce335d19e439 : Add BYPASS_CONCURRENT_RECORD_AUDIO_RESTRICTION to permission cache
6eae71bb6013 : Enable DreamOverlay for Hub on Mobile.
59304690d884 : Introduce preferred notes mode config.
0987f02f4678 : Modify logging to use input method from keyboard and derive input method from motion event.
8d55bc7d1774 : Create Double Tap on Power Gesture Setting Variable
293eae998ea4 : Create Enable Double Tap on Power Gesture Setting Variable
f0528c3c4f31 : Add pip2 flag check into some WM tests
2116f0e156e0 : Skip fade out animator if current type is NO_INDICATOR.
f0acac414497 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
3ff32a370fb0 : Enable widget dark theme preview
97d6a28ae860 : Update flag for offload implementation
44a8c67f48a5 : Polish predictive back with 3-button-nav
95ec53eef390 : docs: Fix typo in description ("you" -> "your")
dffb9c082a84 : Add additional exception handling in endpoint callback registration
9fb681ffa041 : Fix task resizing metrics: they were getting stale task bounds, and on snapping the resize-ended log was not captured when the tiling flag is enabled.
c1a0287a8542 : [Notif redesign] Bigger "small" icon in base expanded notif
5df8c7e2bad1 : Convert DreamOverlayRegistrant to Kotlin.
fcc21d7af80a : Do not force-exit desktop immersive during recents transition
11dbfdeb2ab0 : Add BundleNotificationInfo for bundled guts
f0ad7f1d6e6e : Disable desktop mode button on app handle unless desktop mode enabled
de6d44420bea : Reduce the number of Font instance creation
07d835ee302a : Use export lib of com.android.ranging.flags
0e0f231aef36 : Fix tests and align error logging
253696ac2de4 : add aconfig_storage_stub as dynamic dependency for framework
dcf26f1938a2 : Return default notification policy when calling from a user with no ZenModeConfig
2d9b1926f46c : Revert "Mark Desktop Windowing transitions flags as BUGFIX"
4150d4dca210 : [RONs] Allow text color spans when promoted
edc51ab0a7ce : App menu and handle input ignore cutouts
4f6a9944faf4 : Make InferenceInfo and getLatestInferenceInfo SystemApis.
fc569e157426 : Prioritize System shortcuts over shortcut services
bdc768c6f2ec : Fix ShortcutManagerTest1 tests
4f5177279ad6 : Make new, private dynamic color resources accessible from Java
cd99185998ce : HDMI: Upload physical address from Report PA in metrics
ad244fdaf804 : DisplayManager : adding plugins support
219fe368cca0 : Use DesktopModeWindowDecorViewModel when providing WindowDecorViewModel
0a2ab76ac057 : Fix a bug for using the wrong string for widget preview API max calls
271e5cdf4d1a : Hide QS shade when triggering ExtraDim migration dialog
49caa7db3d8c : Use const pointers where appropriate in dynamic_instrumentation_manager
bcefa5e9ac33 : Fix hallucinated atest commands
bbef2359c731 : Add SystemModalsTransitionHandler to deal with system modal transitions
10fa06cd694b : Update javadoc for VibrationEffect addPrimitive
e527c3d73acc : Passthrough animation handoff method for RecentsAnimationControllerCompat.
bf915ec856e2 : Remove TODO(b/347269120)s as they will no longer be done
74eda3d4cf46 : Add some more detail to the jank API documentation.
defa0221adcc : Add InputManager Lifecycle support
fc6547acc44a : Introducing BasicEnvelopeBuilder to VibrationEffect
2f30f70fcff9 : Fix BackProgressAnimatorTest flake
922ac09a1483 : Correct a typo: 'defualt' -> 'default'.
d957078596fa : STL compute the targetContent during DraggableHandler.onStop
7d923e0e67ae : STL make DragController.onStop a suspended function
482cb781e73b : Minor UI Changes
75e4420cd29f : Capturing user's selected custom key combination for shortcuts
ae7c247a8bf6 : Add test case for hidden APIs allowlist
abbf81e032fa : Fix RecyclerView crashed problem
6a48f8b80092 : Fix SnapResizeAppWindowWithButtonTest
d4f6f5a04793 : Fix Glanceable Hub timeout with scene container
4acfb49aac23 : Remove SceneFamilies reference from dream user actions
205e0f4a00fb : Check SpriteIcon validity before drawing
967adfd022b2 : PerformanceTracker: ensure that PerformanceTracker is obtained safely for each sensorId.
c171ec3b3402 : Revert "Add trace point into onMeasure/onDraw of TextView"
519a9cc704ab : Import translations. DO NOT MERGE ANYWHERE
2cfe62b74566 : Import translations. DO NOT MERGE ANYWHERE
891af779abed : Import translations. DO NOT MERGE ANYWHERE
12e50b8132b4 : Import translations. DO NOT MERGE ANYWHERE
7776a3b8a6d6 : Import translations. DO NOT MERGE ANYWHERE
f227c1601b9d : Import translations. DO NOT MERGE ANYWHERE
a501a5eda0c9 : Import translations. DO NOT MERGE ANYWHERE
dd917b815e61 : Import translations. DO NOT MERGE ANYWHERE
c749f41c9679 : Import translations. DO NOT MERGE ANYWHERE
ff38543e4c23 : am: Add set-media-foreground-service command.
57434e108a28 : Import translations. DO NOT MERGE ANYWHERE
3494f7958fb7 : Add a flicker test for minimizing auto pip app window
a34319400fd2 : [Catalyst] Add PreferenceLifecycleContext.findPreference
217dcf9092a2 : Flag guard uses-static-library parsing changes
d8fca48de483 : [Catalyst] Update PreferenceLifecycleContext.notifyPreferenceChange
f191ff1c1399 : Cleanup AccessibilityWindowManager#onWindowsForAccessibilityChanged
21546066896b : Remove enabled flag compute_window_changes_on_a11y_v2
bf86249de96c : Update to ToT RemoteCompose
26db417770f7 : [RONs] Fixes for promoted notification logic
e9aaa86de677 : Fix the freeform task bounds changed after device reboot
fbad58a7eb3a : [Audiosharing] Fix get main device for LEA device w/o CSIP
8fed7d6c6d40 : Import translations. DO NOT MERGE ANYWHERE
dffd67c90715 : Implement NDK createSessionUsingConfig API
b92036706842 : Import translations. DO NOT MERGE ANYWHERE
2826e57b7807 : Revert "[res] Duplicate AssetManager when changes are needed"
0ec09c6381e5 : Make isInteractive(display) a testAPI
f09465b5b150 : Remove enabled flag fix_merged_content_change_event_v2
de70c9ef8356 : Notify WindowSession about insets animation
355d8c6dc9ec : Update the javadoc for FrameRateCategory constants
adb9051bd47a : Remove unused obsolete code from FullScreenMagnificationGestureHandler
02e76f7a48d0 : Add Ecm Incall permission, add new InCallService to system server
4789cc2d6f21 : Flexible 2-app split: Touch zones
c1685b625df8 : Add flag to use profile labels for default app section titles
aca3912be090 : Delete unused param in MagModeSwitch#getIconResId
86adfa195bb5 : Call applyOomAdjLSP starting from the least recently used process.
eede68f1f367 : Fix IndexOutOfBoundsException in HubEndpointInfo
fb8a239e99c7 : Export javac classes
e7641e3b7c5e : Ensure thermal restriction is included in getJobPendingReasons API.
838089f95dcb : Revert "[res] Duplicate AssetManager when changes are needed"
a5373d494dda : Add APIs for applying picture profiles to a layer and listening
907c7f48fc31 : Privatize the constructor of DefaultAccountAndState.
9d3c24bbfaa8 : Dump JVM arguments on startup
2f7d14b9fe63 : Update javadoc for CardEmulation.setServiceEnabled()
8bc1aaa45aec : Make ravenizer less verbose
bf6b187a3ddc : Reapply "[res] Duplicate AssetManager when changes are needed"
ce26f5924840 : Set default state for secure channel attestation result to non-success.
159843674c69 : Fix ArrayIndexOutOfBoundsException after allowing reorder
91053240b31b : CarrierMessagingService: Add new enum values for granular errors
2f50009c1995 : Deprecate IMS TelephonyManager methods
fbb249ef243b : [framework] Add more description to routingTable entry.
65b4b9ef8ccc : [Expressive design] IntroPreference should not be round corner
31e508495d7e : Fix dream overlay lifecycle when bouncer is showing in scene container
967518d7a5ce : [flexiglass] Remove Lockscreen scene from backstack when face unlocked
08c01a33df4b : Add expansion animation for ringer drawer
dcb328a68c2c : Override flag in PipController/PipTaskOrg.Test
fa5cd980b906 : Add wear_watchfaces to SettingsToPropertiesMapper
1967121c24bb : Add missing RequiresFlagsEnabled to test
b8082c0f05a5 : nfc(api): Add API to set/get default NFC SubscriptionId
6779e556c4fe : Change flag type for use_smaller_app_widget_radius flag
ef5051df2c51 : Revert^2 "Address frozen notification API feedback"
abff2386df0f : Make Javadoc changes to TtsSpan related to new API
36b662833ea3 : Add ADPF owner for thermal
dca3053db1c9 : Call InvalidationCallback#onCompleted when client is cancelled
1895e2e9de07 : Don't create layers for negative-sized RenderNodes
0eee1eddc401 : Add logging to detect flags set by tests
ca826fe6d982 : [ID] Replace "forensic" with "intrusion detection"
721e5fcc3f0d : [RONs] Fix over-restrictive requirement in promotion logic.
055955e4fc0c : Revert "[res] Duplicate AssetManager when changes are needed"
ad59579ad8a4 : Add flags for GSF on Bouncer & QS
da2a07a8b68e : Handle insets (status bar, navigation bar, etc) properly in adb backup/restore confirmation dialog.
46594eca3620 : Unit test for the padding change
a60c9366228d : Potential fixes for the icon issue.
f5dba882c5db : [Notif redesign] Use maxLine and not ellipsize collapsed text
ab9d2bac010a : Remove interrupter.
a1d4fe00718a : Revert "Check if cursor has moved out of viewport bounds in Curs..."
f26426c7ac07 : Fix NPE in InputMethodManagerService#resetDefaultImeLocked
13f5da58712a : Revert "[1/N] APIs of verification service, session and status"
70d9bad3007b : Revert "[2/N] implementation of verifier controller and status tracker"
0bd686a9ec01 : Use the tile's size when selecting which colors to use.
fd128546245e : Flexible split: mBounds1 and mBounds2
9dd30fb6a453 : Revert "[res] Duplicate AssetManager when changes are needed"
a8bab531e2d1 : Revert "Ignore `Background for SurfaceView Camera` surface on test"
61eafe84617f : Add a Setting for Wearable System Status Tray Configuration
0bd9841b1b6b : Adaptive scribe bounds: InputMethodService apis 2/n
188789187b95 : Use RavenwoodRule for system properties
1c14416ac9a7 : Shift to "input" namespace for multi key gesture refactor
1384dd6b2715 : Enable app handle on foldables
1adff66adef4 : Partially Revert "[3/N] implement getDeclaredLibraries"
e8714795362e : Revert "[4/N] APIs for verification policy and failure reasons"
88fffe824ae2 : Revert "[5/N] Use synchronous interface for report* methods"
eeb314d02a8e : Revert "[6/N] allow adb installs to bypass verifier"
9b47940eedfc : Revert "[7/N] only the current verifier can call VerificationSession APIs"
1574ac73b4f4 : Revert "[8/N] per-user global verification policy"
b38d3efac04b : Adding MediaProjection Functional Flicker Test Scenarios
7d7743a81d55 : Remove flag limit_manage_media_projection
bc6275a55164 : JobScheduler: Enable quota optimization overrides
7f59b19f69ea : Revert "Add a holdback study for concurrent MessageQueue."
a5ba50f83634 : Fix test failing with NPE due to null List
c3be75306b46 : Avoid reentrant pipeline run
1fcd41b11517 : Update OWNERS for shortcut manager
e05fb429bf0c : Allow the device policy management role to migrate accounts
0ffd0298d39c : _GNU_SOURCE is public; _USE_GNU is an implementation detail.
791e03a452b2 : Add gravity to ringer drawer background.
84d80c52a62f : Fix enterDesktopWithDrag test
6d11da5ba90b : Fix NPE when checking whether a rule is implicit
24b7260a12f7 : Create config to enable app handle independently from desktop mode
fd77056250f6 : Notes QS Tile: Do nothing on click if there is an expanded bubble.
084a82382849 : Record tile spec in available error message
2c5139817134 : Add build number
cfa9ce9ee375 : Long press Shade status bar expands to QS
7406cabc1fc2 : Fix: documentation mistake
2fc047843bdf : Set inflater factory for group summary header
8bd1ed8052f3 : Remove special case for replaced transitions
0674f38d3a5d : Remove unneeded test
e16594f64951 : Protect callbacks from NullPointerException
0b6b6f5328a2 : Improve CommunalBackupHelperTest stability
90ec3a478764 : Fix a bug in Helper.addAutofillableIds() that was causing a crash when a view had null autofill id.
bdee37c49ee4 : perform local immediate override
f658e3709519 : Move BackupAgent connection to a helper class
0c17641bea02 : STL MultiPointerDraggable compute direction using the previousPosition
4c254b9fd58f : Add BYPASS_CONCURRENT_RECORD_AUDIO_RESTRICTION permission
5733105a56d3 : Do not start RecentsMixedTransition if keyguard transition is playing
db88a6329093 : Set DesktopWallpaper window as translucent.
66d31a490b29 : [Notif redesign] Bigger "small" icon in HUN
dbd2105b8124 : Mark Desktop Windowing transitions flags as BUGFIX
5f837195fcf6 : Add TestApi to reset locked modifier state
24c8ba68bdfc : Check if cursor has moved out of viewport bounds in CursorController
4de1ed48ec5a : Add KEY_REGIONAL_SATELLITE_EARFCN_BUNDLE to carrierconfig
9702e9e2366a : Add "list" and "any_external" options to shade_display_override command
293f2dbfc23b : added Logic to convert UI key event to shortcut key models
2fa48adb1ef3 : Revert DeviceState callback to main thread in FFP.
c27a70d16a52 : Prioritize strength error over hardware error
175884b68415 : contexthub: Handle unsupported APIs
0b1fe74a64dc : Revert "Address frozen notification API feedback"
43300b49dce6 : Define Secure Lock Device API surface
0e92f9364ff9 : Add permissions for secure lock device feature
3f3968aea3b6 : Avoid sending top-resumed gain/loss to client if resume deferred
49aead13eaff : Stop reconnecting on uuid change when a manual disconnection happens.
4c6a5d37622f : Set background color of more actions pill
c04628b5a4a4 : Add a functional test for minimizing AutoPip enabled app
fcb6024fe06c : [expressive design] Add LocalIsInCategory
eaf8dd7b2dd9 : Add StageOrderOperator to keep track of stages indicies
1f3201efc34a : Generalize MinimizeWindowOnAppOpen
20c6c55244b2 : Define a flicker test for no-desktop-task-limit
8b8aca054afc : Updating javadoc for few APIs
ba000ae7a73f : Make mLock static for accessibility
dbaba7a2dfd3 : Update Maximize Menu UI
a5a0e867484a : [Catalyst] Migrate Adaptive Battery Category
c6bc983c3f78 : [Catalyst] Clean up SharedPreferencesObservable
8739382f1576 : Add support for converting java hint sessions to native hint sessions
94fdea29d85b : [Audiosharing] Show broadcast volume bar when preview enabled
9c47de588f48 : [MQ API] Unhide APIs of MediaQualityManager (part 2)
9994f352d8bc : Per-display WM brightness override.
c59dfa99c7f0 : Remove unused legacy/hidden RecentTaskInfo fields
ae505195b376 : Fix large clock is too low in new customization picker
cddf3783cd9b : [MQ API] Unhide APIs of MediaQualityManager (part 1)
3c152d460826 : [MQ API] Unhide APIs of picture profile
fe1ab98b555c : Add Merging Logic to Jank Processor
93a101e29526 : Add requestSelectedNbIotSatelliteSubscriptionId API in SatelliteManager
d538c4c0c62b : HDMI: Call CEC API in TIF for TV to assert active source
425c66fcb57b : Also invoke "setProgressTrackerIcon" when mTrackerIcon is null.
41d4ea0d249a : Create isRecoveryTriggeredReboot SystemApi
b2b716e3db9c : Constrain Lockscreen smartspace to half screen for wide shade layout.
26474cbde764 : [Catalyst] Provide preference Getter API
f411ed37b35c : [Catalyst] Support flags to get preference graph
f604fc6aa81b : Remove observer specific logic in PackageWatchdog
a0877ad8cb95 : Revert "[MQ] Add implementation for ambient light"
af1c069f7fb1 : Preemptively adjust task info visibility when evicting tasks
fdf8b1832639 : nfc(api): Add return status for pause/resume Polling APIs
1f53e1e9dd48 : contexthub: add missing pieces of HubEndpointInfo
6059c5830d3f : contexthub: add findEndpoints(String) for service discovery
d8c6f1d884b5 : contexthub: create HubServiceInfo and associated with EndpointSession
18682dc6f75c : fix incorrect casting for child color filters
32632587f7de : TIS: Standardize TIS Teletext Extensions API
b214d1512645 : [flexiglass] Immediately locks device when sleep button is pressed.
330bb37ff5fe : Log when bubble is moved to overflow
92901b7c88fe : Log bubble removed events from bubble bar
6508ac8505ec : Use RavenwoodRule for system properties
67c428a7f6a5 : [SingleLineView] Use CachingIconView for single line views
393262d851b2 : TIS: Standardize TIS Tune Extensions API
04bb10a56d6a : TIS: Standardize TIS Cam Extensions API
ab1102c24109 : Fix unable to create base.art
91caa0ed2bed : nfc(api): Add a callback for onEeUpdated
31d2f51ce4c8 : Update Rear Display Mode UX
1e2f7e1d5a69 : Do not mask exceptions from initialization
c316530f418f : TIS: Standardize TIS Scan Extensions API
257794eae37c : Use most recent Radio HAL libraries
6bb38e026bed : Use Context/PermissionManager for NM.areNotificationsEnabled()
c10cdc75f21a : Adding callback support for Entering/Exiting DesktopMode
121d051b2536 : Set package name via the build file, not via Config.
0cb319fed065 : Add callbacks to NFC Event Listener
29a390e342fb : [PiP2] Disable Flicker tests for pip2
a2bbe84ec647 : Expose important thresholds for mitigation impacts
ffe320b468d6 : Flag uses-sdk-library parsing changes
daaab342097a : Refactor and make the default device effects applier public
acba22d71711 : ForegroundService: Include HR, SkinTemp, Spo2 permissions for health fgs.
33a31fb22800 : Add Technology Type info for the active secure element list
31986fe98dfc : [flexiglass] Adds support for "lock now" from KeyguardService
5081110e7c6a : Properly dispose of JavaAdapter collector
dfaaa276d65d : Fix bubble bar expanded view flicker on unfold
b5565599ea44 : Change Java APIs to use multiple routed devices
9a3a2d3180c4 : [MQ] Add implementation for ambient light
77e9e4055279 : Add feature flag for supporting minor version in Resources
2ccd4af147b6 : [flexiglass] Prevents crash when disable flags are set
858168cf8b03 : Add checkOp overload APIs that accept attributionTag
217c1f4a9c13 : Add slider input events interactor.
d24ac92c6db9 : Cache null binder results in PIC
e2e8012f6026 : Make aosp-main _mostly_ in sync with goog/main
47414432d61c : Revert^2 "Verify KeyEvents in IME"
ba88d61fcff9 : Fix typo in setStylusScaleEnabled Javadoc
8756bcb6794b : Ensure QS classes use display-aware context
1dfdc0a5e5f2 : Apply telephony_config_json_parser for satellite accesss control
51a1632ea677 : Revert "Default enableOnBackInvokedCallback to true with targetSdk>=36"
d3cb2296f0d6 : Set title on oobe tutorial for more context for TalkBack.
8e08e35243c9 : Fix issue where widget disappears when reordering
59fa0b7739cd : [Notif redesign] Update spacing of collapsed group children
787ab1cfc4af : Add keyboard shortcuts implementation for Desktop Windowing task resizing keyboard shortcuts (snap left/right, toggle window size and minimize task)
c0dd0da46ab4 : Use a switch statement instead of static list of layouts
b14d2c2f4a97 : Add APIs for notifying cell identifier disclosures, security algorithm updates
63bedcf882fc : Run onUserConfigChanged broadcast on a different thread
2671b65f4773 : minor refactoring
00f1127e7006 : Connected data layer and UI layer for custom shortcuts retrieval
25ec1c64eeab : [Kosmos]: Move brightnessWarningToast mock to proper Kosmos class
0a8e8588ef10 : Set TalkBack focus to last selected screen on OOBE settings page.
623953d5e39b : Don't apply overrides to implicit rules, instead just update their conditions
69abc0d76159 : profcollect: Fix NPE when invoking UsbManager
aaac334a66a1 : [Minimal HUN] Clean up circle cropping from NotificationRowIconView
04b1abc3bd86 : Add Global Settings key for HEARING_DEVICE_LOCAL_NOTIFICATION
22397e56f228 : New flag: hearing_devices_input_routing_control
336c0f5cb5d3 : Make a opt-out per-app override opt-in for camera compat freeform.
74ef84689ffa : Notes QS Tile: Flag the notes tile viewmodel with QsNewTilesFuture
f04796f3e9f3 : Remove a duplicate method in camera compat.
82ffdfeabf70 : Refactor KeyGestureEventHandler out of DesktopTasksController into its own class DesktopModeKeyGestureHandler.
a1cd51e67f0e : Fix InputMethodServiceTest#testHideImeWithWindowInsetsController
a8ad2ceda7c8 : Remove TODO(b/347269120)s as they will no longer be done
cf7efee3277a : [SB][Notifs] Fade out end of status bar notif chip text, don't ellipsize
6a70258baa59 : Add logs when starting/finishing a transition
bc7f512d5dc3 : Use lse_desktop_experience namespace for status_bar_connected_displays
93c8bea10960 : Set the wmtest targetSDK to 31.
362de5c389fe : Remove startWaveformEffect APIs
66b74c3ceec6 : Partial revert of the filtering added for the Jank API.
941d05e69435 : Simplify the camera compat state.
721b28d018f1 : Fix VibrationThreadTest for unsupported primitives
8e557c67f746 : Add "com.android.btservices" to 'apex_available' property of "android.os.flags-aconfig-java-export"
aaf72fec11a1 : [6/n] Implement LetterboxCommandHandler
4faf0ca3bb80 : Fix start info handling of isolated processes
0252fd3b3370 : [Minimal HUN] Log package name and instance ID for minimal HUNs
3fad2a3bca69 : Log if orientation is ignored by universal resizable
737b91f90cf5 : Allow dependency installation if caller has suitable permission
e504d3d47863 : Update isTagIntentAllowed and isTagIntentAppPreferenceSupported
6e7ebccd4138 : Add a flag to enable IsTagIntentAllowed API
939a555ee4ab : Update drawable per UX spec without dashed segments.
6b4d6ed074fd : Use addMoveToFullscreenChanges on fullscreen launch.
9a3d35e96bc8 : add child setters to RuntimeEffects
4e903d6ccb9a : Apps that share the same UID should also receive the PACKAGE_CHANGED broadcast
095c6db7b525 : Clean up TODO comment for TelephonyManager.getIccAuthentication
9b4d2f3ae654 : Fix voice volume min/max mismatch with APM curves
4a44565881b1 : Defer orientation for transient launch with immersive app policy
43fba9a0fce2 : TIS: Standardize TIS Event Extensions API
416ee1d47e3e : TIS: Standardize TIS Analog Extensions API
d098844c439c : Activate DesktopWallpaper when fullscreen task is forced to be freeform
da3d4f366e2c : Convert vintf_fragments into vintf_fragment module(s)
d542af69609c : Satellite: Only show device based satellite icon when flag is enabled
482769395595 : [MQ] Clean up sound profile APIs
c74b4ff49a4c : contexthub: create basic endpoint session message API
cc191c3cadc4 : TIS: Standardize TIS Service Database Extensions API
cef56d1800ae : Rename AdaptiveAuthenticationService
a08a77c89a8f : Add a holdback study for concurrent MessageQueue.
f33c358436e8 : Cache CoroutineScope for settings APIs
580a71bc9a40 : Apply new code formatting to SettingsProxy
ff44d0d7271e : Fix voice volume min/max mismatch with APM curves
f93e213c4085 : Reland "Add a functional test for no-desktop-task-limit"
2bb5228810a8 : Backup the activities to finish on TaskFragmentContainer exits
aaa17a0071ba : Revert "Add a functional test for no-desktop-task-limit"
797ec25474b0 : Fix CombinedMessageQueue for ravenwood
33dfacc282ce : Remove the flag for the SharedConnectivityManager.getBroadcastReceiver() method.
a47957a8c222 : Add Handle Menu RTL support
945805e345bf : Fix Ravenwood annotations to be in sync with goog/main
65d7408ef490 : multiple-desktops: Add aconfig flag
b85c820cb5e1 : Add missing player mute event log
c5fe9b63b42e : Add getSupplementaryAttestationInfo
2458b9d2adec : TIS: Standardize TIS Screen Mode Setting Extensions API
2dc2dfadd68e : TIS: Standardize TIS Client Token Extensions API
2b560ab00176 : [framework] Add API for commitRouting and extractOemPackages
0813431e4b7b : TIS: Standardize TIS Scan Background Service Extensions API
68cdac749911 : Bind to Sdk Dependency Installer Service
0da303b761e4 : Create the service to be implemented by the holders of the Dependency Installer role.
646e8f0b9b7c : Prevent assign layer on DisplayArea while in a transition
4812c4a983fb : Set compile_dex explicitly for FakeApexSystemServices
139146a39d78 : Fix divide by 0 for calculating vsyncs in buffer stuffing recovery
20042247747f : [Forensic] Refactor ForensicEvent
2b9745b478b6 : Allow FF leads to approve changes to android/os/flags.aconfig
851b5adee77d : Address frozen notification API feedback
2ad57bfe5bf4 : [PIP2] Enable rounded corners
f4aeb7ac9d4b : Do not create/update status bar input layer when caption isn't visible
9a5cc83ef60e : Add keystore2_flags_java-framework
1d74c21fb617 : Make aosp-main-future in sync with goog/main
751b5ed7b2ef : Update WMLockscreenVisibilityManagerTest with flag handling.
c27380884cc4 : Add drawing-order and hint to dump cmd xml output
8e57f861995c : Add isPresent check before invoking optional.get()
f2b9922e7008 : Prevent CurrentClock flow from holding onto initial value indefinitely
b73100223300 : Prevent Typeface recreation for default flex clock
b930117edddb : Add healthfitness sources to framework-doc-system-stubs.
affe24d20616 : Create new role for dependency installer.
b333d37ffc9b : Use USB product name to set AlsaDevice name
58ad088e22e8 : Remove extra space in dynamic_view_rotary_haptics_configuration definition
bc78448aacb4 : Take RTL layouts into account for bounding rect alignment
4defc008c0f3 : Add another level to modifyRawOomAdj for BIND_NOT_PERCEPTIBLE services
dca446428cd6 : Don't use backup restricted mode in certain cases
1ba0be0c7a75 : media: Hide *_TRUSTED_CONTENT_ONLY constants
eb09f976e48d : Print NanoappIDs as hex
1ef797766904 : package: Add a new BLE channel sounding feature flag
89143cda76a5 : Add api to trigger App-to-web education
b2c257f46d6a : Log event when bubble is opened from overflow
1cbf97ad6d6a : Implement VendorVibrationSession.vibrate
8525faead5e2 : Add IntRange to TtsSpan TimeBuilder APIs
da5d3921e2b1 : Fix split screen unfold animation
73dec3ebea6d : Remove pattern matching instanceof
e6e8999341d9 : Log event when overflow is opened
df7267417e31 : Adding status and end date to SubscriptionPlan
232a55c26950 : Add logging to dream gesture exclusion handling
4555d0d881c8 : Adaptive scribe bounds: introduce flag 1/n
fa2741fe5756 : Add flicker test for moving desktop apps to front
320144b85370 : Add e2e for bringing desktop windows to front via the header
a9f259d4f902 : Fix bug number used in the enable_supervision_service_sync flag.
1639f57b5542 : Don't hide a slider if it's already being shown in the VolumeDialog
1b4f2a0d610c : Register FlagManager as a System Service
6f0684a0a9fa : Remove secure_window_state flag
b298ac31f341 : NoSuchElementException handing in SystemRequestObserver
c2bd3bd344f5 : Change VCN flag container to tethering
c864e8d19da5 : Clean up fix_config_garbage_collection
df6291abb386 : Fix UMO does not show in GH
e869daccb27f : Update bug number for handle_bugreports_for_wear flag.
e94078371df4 : Ensure that movable elements are composed even when not in a transition
980db4934735 : Only try to dup valid SyncFence's
5ee6c1f25814 : Update dimensions of ShortcutHelper. Fix text wrapping for large font.
0e8d27fbe86d : Fix handling of disable_secure_windows setting deletion
3153a97dacb1 : [AAPM] Rename permission
13a360db49a6 : Fix NPE in PlatformCompat
5c30e0826bd3 : Expose ZenMode kind (and simplify implicit mode creation in TestModeBuilder)
c8d19f1886b8 : Split up DPM.createAndProvisionManagedProfile
ffe15f981e08 : Expose VCN APIs for framework code
9af1abd4b38f : [Notif redesign] Bigger "small" icon in header only notif
722e14b0ef00 : [Notif redesign] Bigger "small" icon in collapsed notif
0e49ecf86ed2 : StatusBarIconView: remove unused mDismissed property
21d781969d9b : Prepare a transition before starting it
16206ef18706 : Fix timing race condition in toggling ProtoLog to logcat
6281b5174ddb : Order flag alphabetically
805bb0eb2a01 : [5/n] Create LetterboxConfiguration component
a1d849fe651e : Use non-abbreviated units of time
10a7d33594fc : Remove permission based active admin.
9cb30f01646a : Make ModesDialogViewModelTest impervious to device language
b4fdea64c062 : Update phone call end requirements
c9ac54df4080 : Aligning Android color tokens with Material
c039ec197aec : Add a functional test for no-desktop-task-limit
eead22bb2311 : Remove unnecessary checks for internal only addStartInfoTimestamp
214032f16b52 : Ignore ASHA hisyncId grouping if the device supports CSIP
4b92f869cffe : [4/n] Implement LetterboxObservable Tests
aad007bac5ce : [3/n] Implement Letterbox Surfaces lifecycle
faa04afe1d2c : Remove/destroy status bar window view when display is removed
94e6b2039c74 : Shift to keycode and modifierState attrs in bookmarks.xml
72b527c19128 : HDMI: Declare flag for uploading physical address metrics
0932c2961d88 : Make ListActivity handle insets properly
05d1144323b1 : BiometricScheduler: fix the NPE problem in startWatchdog method.
c4d316f6e43e : Expose VCN APIs for framework code
b90b26cbd665 : Make AppIconCacheManager instance volatile to prevent thread caching issue
dea364fbf465 : New flag for notifications in device streaming role
16985ae817fd : contexthub: add HubEndpointSessionResult to allow accept/reject open session request
0fcefc1dbc7f : contexthub: add HubEndpointSession + IHubEndpointLifecycleCallback
f22fb2ca3d87 : Change start/stop lock-task-mode to two way binder
b3ee457f7d0b : Default enableOnBackInvokedCallback to true with targetSdk>=36
2fea7634cc0c : Do not invalidate isInteractive cache on power group events.
7adb4d0e2d65 : Bump KeyMint and Keystore version
bd6ad4ca9895 : Add appops for audio hardening
ea12150b5b38 : guard new Xfermode API usage in Paint
dd0726c1fda2 : Revert^2 "Disable concurrent mode when instrumentation is loaded"
e06ef0e33e55 : NoWritingToolsSpan
75d9c473f86f : Skip dragging if app is already in default DesktopMode
915eecdb3007 : [framework] Add annotation for pausePolling input, and fix doc typo.
ad7315581ad6 : TIS: Standardize TIS PVR Extensions API
5002ea9ed0e0 : TIS: Standardize TIS Signal Extensions API
7be37b38cd06 : fix the CWE problem in Biometrics.
144ace42a939 : contexthub: implement findEndpoints in service
17a02e66f9c8 : contexthub: create basic findEndpoints EndpointInfo API
d12c5fe88d4d : Revert "Verify KeyEvents in IME"
a5aa7db96da5 : Enable Satellite Notifications by default in telephony config
f43c39ba19b5 : Adding a flag to control the Notification Shade blur
499103aeafa7 : Set Freeform corner radius to match spec
550338489c3c : Add flag to short circuit permission request
cdf37a1911a8 : TIS: Make constants in TvInputServiceExtensionManager SystemApi
7cdc4b8f08df : Revert "Disable concurrent mode when instrumentation is loaded"
f02e06879e57 : Add permissions for applying and listening to picture profiles
2fa911ddaa67 : Create configuration resources for the Double Tap Power Button Gesture
97e14c4d10dc : [framework] rename some oemCallback and fix javadoc typo.
f45a04419b1b : [PiP2] Update DisplayLayout on remote rotation too
8db50898b5c9 : Hide task surface once it goes off-screen in close animation
c64fd1ee94d6 : Updates the top-resumed activity if nothing to resume
882ec1a081d7 : Propogate the BcSmartspaceConfigPlugin to the BcSmartspaceDataPlugin in order to flag ViewPager2.
94a10629a915 : Add javadoc to compat-ids
447d84e6797f : Fix small clock start padding is wrong in picker carousel in unfold portrait mode
771f6d72b659 : Prioritize strength error over hardware error
cb92da45e2bc : Add appop protection level to WRITE_SYSTEM_PREFERENCES
c939f2cb680d : ActivityManagerService: Add method to set media fgs inactive.
8929f131de7f : Use ApplicationInfo to check if changeId is disabled.
16adef06d41d : Add multi-client support in camera2
6233898e9128 : Updated the SET_DEFAULT_ACCOUNT_FOR_CONTACTS's permission protection level to knownSigner.
29a9ab42f622 : Allow AMS fgs notification rate limit with ALLOWLIST permission
b7f0734c9f39 : [Catalyst] Add ApiPermissionChecker and refactor code for preference service
b5bc3cbe923d : [Catalyst] Sync libraries
97f3b8a5e6ec : Compat-framework on ravenwood
eb30ed8e89c0 : Keyguard Shell Transitions: send keyguard/aod updates to all displays.
18b98714e554 : [Forensic] Add ForensicManager and permissions
a707d1d58733 : [flexiglass] DisableFlagsInteractor
671c3e1e16fe : Disable concurrent mode when instrumentation is loaded
be71b3eda74a : add RuntimeXfermode API to the graphics platform
7a34cd28504b : Redirect calls to the secondary lockscreen methods to the SupervisionService
3a70fe203654 : Ignore failing test to fix it afterwards.
a0630b1a49f6 : [Autofill] New APIs for metrics targeting W
fcf54a424104 : Revert "Compat-framework on ravenwood"
b4430c07e059 : TIS: Standardize TIS Rating Extensions API
6ee267f9d6f8 : Revert "Fix bubble bar location on device boot."
cc2470ce157b : TIS: Standardize TIS OAD Extensions API
dc0e729bced1 : Add fallback to FROZEN_CALLEE_POLICY_UNSET
187ddde3699a : Add InterfaceDiedCallback to RemoteCallbackList
9cf5a93c650f : 4a/ Track visible running tasks via transitions observer
dddab5ba4d3f : Revert^2 "Long press Lockscreen status bar expands the shade"
8dd599a66773 : Add speakerChannelMask to AudioDevicePort and use it for new API
4319969d4fac : [SB][Wifi] Don't expose WifiRepositoryImpl.selectedUserContext.
3c0d959a59da : Make Notification.ProgressStyle.createProgressModel /** @hide */ public
2a916a69a760 : Add @PromotedNotificationLog
950dd0591ad7 : Move notif LogBuffers to new NotificationsLogModule
98d5501c3cd3 : Migrate Burmese support to FlexClockView
80d1c8c4e827 : [Notif redesign] Swap history and settings buttons
04f26a4953a3 : Revert^2 "Frozen-aware RemoteCallbackList"
4c39060bf6a1 : contexthub: create a package and add OWNERS file
58eebd8ad6fc : Fixing comment formatting
05035c6771f2 : Remove immersive task from repository when task is removed
2034639bb61d : Exit immersive on new instance launches
f3a1ccbb2e86 : Remove AssetLoader from customization lib
2947f5d5cb14 : Increase MAX_CACHED_PROCESSES config
90cd3b2551de : Use ApplicationInfo to check if changeId is disabled.
8d1586605579 : Add dream scene transitions for Flexiglass
a1be72756c4d : Add client-side APIs for getRoutedDevices()
c00d6ab759bd : Some fixes related to BrightnessWarningToast
ba30d0f1d0e3 : Add aconfig flag for handling bugreports on Wear.
80a703255c26 : Extend screenshot timeout on accessibility events
aed43401ac84 : Stop MediaProjection on (phone) calls ending
dc7984896156 : Removing Failure haptics when QS tiles don't handle long clicks
f6bc0fe2fe60 : Move feature flag update_client_profile_priority to media_tv namespace
30a16a117f4f : Reset side stage position after child task vanishes
75a93c58e645 : Denote categories and the shortcut list as traversal groups.
1a0e562bf156 : Verify KeyEvents in IME
75fda3f4e8ba : Expose VCN APIs for framework code
9e1f71fdef81 : Add missing flag stanza
4e9ab42bcb00 : [SB][Wifi] Small cleanups of WifiRepositoryImplTest.
7261855ac79a : Compat-framework on ravenwood
10662b3f3fb0 : Verify reset password token activity using DPE when coexistence is enabled.
46b84db8ded6 : Regroup unbundled notification into original group
d47d58a8d060 : Fix loaded tile icon animations in compose
70c61b6e2128 : Rename ShadePositionRepository to ShadeDisplaysRepository
53926ed6b562 : Check sparse group child notifications for trigger section
f000c958b5c5 : Add DisplayManager.isAlwaysOnDisplayCurrentlyAvailable
8bbeca4133b9 : Introduce a simple version of verticalContainerReveal()
20b5075e0c84 : Expose fromContent and toContent to UserActionDistance (1/2)
a89da41a853d : Introduce PropertyTransformation.Property
93747b75d730 : Make the transition DSL more opaque
8ba6a2c49820 : Add some API docs for common confusions
5381d54f1025 : Add shell command option to disable dependency auto-installation
200f777c628a : Enforce dark mode in QS
64433588bcb9 : Create a border for focus.
586a4e716baa : Remove mActiveAdmin from EnforcingAdmin
dbd1a942bf31 : Revert "Long press Lockscreen status bar expands the shade"
b20ff4b9c869 : Send broadcast when bugreport aborted due to error.
b5a4375e1891 : Make ConstraintLayout the root of the VolumeDialog
1c8bd8b985f7 : Set PRIVATE_SPACE_ENTRYPOINT_HIDDEN to false instead of throwing an exception
10dd63099285 : Fix: Aidl equality not working
fb0fc8a50bd2 : Remove hidden VcnTransportInfo methods from VpnTest
dc465643b333 : Introduce TransformationMatcher and Transformation.Factory
ab93fff49311 : Ensure Keyguard classes use display-aware context
69f17cbb6dd7 : Introduce ShadeDisplaysInteractor
240d0255cbd0 : STL clean-up DraggableHandler
30adda5c5e2b : Fix: unnecessary shortcut data and observer setup
36be246dff64 : Add ronish to OWNERS for BR in aosp
13a316b1db0f : UsbDeviceManager: Invoke accessoryAttached after ACTION_USER_UNLOCKED
690737d41a22 : Add olb@ to OWNERS for display service
d184719444d8 : Introduce animation takeovers to ActivityTransitionAnimator.
e8a500fa290c : Cache list of UserInfo for getProfiles method in UserManager.
94d4952afb5c : Catch all exceptions when loading images
721952cf94ab : Record task snapshot before shutdown.
b5797d1905d7 : Remove unused classes
e5c40386e5e1 : Fix AE LockTaskMode issue on fold devices
f8ec55aefe6d : Revert^2 "Migrate WristOrientationService"
dafe63471472 : Add trace point into onMeasure/onDraw of TextView
761ae705a8fc : Check sceneContainer Flag before activating DeviceUnlockedInteractor
90d7497ad209 : Tweak some logging to make things more readable
68ab059aeac7 : Revert "Migrate WristOrientationService"
4ba7833eaa62 : Add new APIs for launching Wallet on power double tap.
0ffa517df553 : Add a config_satellite_modem_support_concurrent_tn_scanning device config.
deb2c92b4096 : ViewRootImpl: add a trace point for inset animations
4569f08c0b0e : Add xml attribute for page size appcompat
e74bcd1c6929 : Add SDK support for CPU/GPU headroom APIs
a8452f560189 : Clean up use_frozen_state_to_drop_listener_alarms flag
926a52828256 : [expressive design] Migrate AppList Page.
8cee901ea52b : Add a flag for display content mode management
6d3fb8840914 : Rename Ext to Extension
b6ac1e572380 : Add PressureStallInformation atom logging to the StatsPullAtomService.
27c05cf4702f : Fix bubble bar location on device boot.
babda2710732 : Robolectric mass migration (smaller)
388a4efcdfa1 : Fix NPE in RotationLockTileDataInteractor.hasSufficientPermission
7ccfc019a33a : [MQ] Add a method to get profile handle
8e421d081ca7 : Explicitly expose native_headers
6ca063607655 : Make shared ManageWindowsViewContainer more generic
341c54a481df : Fix equals and hash code methods for the Property class
ae10e2828078 : Fix crash due to NPE for null family name
08284d6e3328 : Add flag for LE Audio Broadcast UI
915c02a9fe41 : Add flag for LE Audio Unicast UI
94b8768d0901 : Add psap_ai flag to protect DeviceConfig flags
f5c742f3199c : Add arthuri@google.com to contexthub service OWNERS
1a8d5cf6d9ce : Add sysui-e2e-presubmit to TEST_MAPPING.
7f684562a41d : Add aconfig flag for the supervision role permission update.
36d5a0acca71 : New modules for DI, utils, settings, and retail
9f38c7f160af : Move hasBinders from hidden to Module API
0348872091cf : Add scene transitions from Dream to Shade and Bouncer
ed6aba487288 : Add a flag to enable migration to mainline module and related changes.
92c596ab92ea : [bug fix] Handle the case when rfFieldOnTime is not set in OemLog.
e76d69557f7c : Add flag for API for applying picture profiles
ccb108d287fe : [MediaQuality] API implementation of getPictureProfile, getPictureProfilesByPackage and getPictureProfilePackageNames
daeb8ec65d37 : Topology conversion to tree
8273c50a2c2e : Fix TaskInfo#requestedVisibleTypes update conditions
b869f28844c1 : TIS: Standardize TIS Broadcast Time Extensions API
9aae1c069ae9 : Add new DPM.setAutoTimePolicy coexistable API.
ee061534fd10 : Rename changeIds to follow best practices.
1f1ae1df7c95 : Make selected PIC caches non-isolating
0b6c39f49ec6 : Consider flags when merging with overlay semantics
7a3156087709 : Improve run-ravenwood-tests.sh
6cbd2fd149d8 : Fix obvious logic error with forceFinishCurrentTransition.
fd9025d7714f : [Lut API] expose Lut-related APIs.
a6a449da07ee : Check that widget provider can access RemoteViews URIs
9b30bb6dbcce : [PIP2] Fix NPE crash when entering Maps
71d43ec84d4f : MediaCodec: define available and required resources test api
3dc5b04972fa : Flag for API to check if WindowManager can override the power double tap behavior.
28ef7ff2350c : [flexiglass] Addresses ConcurrentModificationException
0043c906b42d : Writing tools: API for EditorInfo 2/n
272fb96fa151 : dynamic link to "aconfig_settingslib_flags_java_lib"
958c63c205aa : Fix namespace typo for launch_wallet_option_on_power_double_tap flag.
f6ba4f85f75a : Fix getConnectionToSinkType for internal displays
7e44cacf3b89 : Fix a typo in DesktopTasksController.kt
d1a18160a6cb : [AAPM] Disallow Install Unknown Sources Hook
60593937ab46 : Separate out ELF header reading function
a426167cd70b : [PiP2] Fix NPE when create a split-screen
f2a15524e4f5 : Address uses of USER_SYSTEM in AppWidgetService.
ff4bf5ab7618 : [MQ] Clean up ambient light APIs
18535a0dd1bb : Add creator token optimization
aa758f27e6b4 : media: add available/required resources test api
4336a6bf6191 : Use full day names in the a11y version of schedules in mode tile subtitles
3d9cc0a12fc2 : Use handler to post sendIntent in requestUnarchiveConfirmation
59e92656c945 : Use BackportedFixesProperties for getBackportedFixStatus
6ea7c4f5879e : Implement device lock state listener
ae813dffcfcf : onboard psap_ai namespace
a8934ef8fa5c : Update Notification.shortCriticalText doc to recommend 7 chars, not 5.
6dd5940d2dcd : Remove legacy database upgrade paths in AppWidgetServiceImpl.
0822347dd4f8 : ShortcutService log clean up.
b76554519ad6 : Add KB BL user inactivity time to config
dab63b1b5fa9 : Use the correct background color for tiles.
733b92a91872 : Revert "Revert "Add public ADPF load hints with better rate limi..."
d72e795b7852 : Revert "Add public ADPF load hints with better rate limiter and ..."
765f6a74cefb : [flexiglass] Fixes alternate bouncer unlock.
8081a90f7106 : Revert^2 "Recommend writeTypedObject()"
1bbab4004266 : [Catalyst] Create preference widget for placeholder
3de0497be4ba : Revert "Address uses of USER_SYSTEM in AppWidgetService."
e40857d9ca42 : Make the Jank API public.
ec23101eabe9 : Add extra animation params for media
53cf0949310e : Add idletimeout to AF exe pool
72c0f95f6efc : Revert^4 "Add API for Custom IME Switcher button visibility"
c5a0acc76c34 : DefaultPermissionGrantPolicy: Update pre-grants to use granular health permissions for SENSORS.
e50a448b1584 : Make the new setSecondaryLockscreen a SystemApi
34d7931cf6df : Add minor delay on requesting default focus, so TalkBack functions as expected.
ed62cf6727db : [AAPM] Add APIs for support dialog and identifiers for features
fa986ca41c59 : Add flag for launching Wallet on double tap power gesture.
c7ad530e3295 : Remove SBNHolder
a303dfa593e5 : moved strings in KeyGestureType to label map to resources
278635949205 : Refactor MediaProjection stop-on-lock
7de5178c766d : Extract Shade window Layout params to a common place
16404fb6f916 : Optimize methods for bypassing apps
41b19325f16a : Use refactored code path in InputMethodManager#showSoftInputUnchecked
5a5818f29187 : Fix loading of toast app icons
4dc28b3e38ff : Update test to use pre jarjared library
31c898ab43da : Revert "Aligning Android color tokens with Material"
4fb4634292d7 : STL flingToScroll should return the consumed velocity
123d5b3fda32 : adb command launch notifications
39271b973c83 : Add shayba to RemoteCallbackList OWNERS
433065542835 : Add explicit experimental marker for Linux terminal
09d6e2608538 : Send display property change events in onDisplayChanged callbacks
e9a516673f41 : Add API to register for display property changes
f9a25cdce568 : Added UI for displaying existing custom shortcuts
d9238e9adb4a : Add new DPM.setAutoTimeZonePolicy coexistable API.
ba05ec08ca70 : Fix test name in StatusBarContentInsetsProviderTest
a7189d86f92e : Introduce VibratorEnvelopeEffectInfo class
a97625ec860e : Introduce CustomPropertyTransformation
23669cd89b23 : Remove Transformation.reversed()
eed749f2e008 : STL add support for overlays in snapToIdleIfClose()
ac4242ea4439 : Remove unused classes in App Integrity Service
aa52255cb535 : Remove unused code from the integrity service.
23b066bff41d : Fix: Use internal PM asap for permission check
f1202fb52c3b : Aligning Android color tokens with Material
ff66c4061ae4 : Cancel dim animations directly from runner + verbose logs
f9849743d61f : Correct Trendy team for WM performance tests
d762d7be3b3e : Make Volume Dialog fullscreen to support expanding animations
0f6fd4451777 : STL NestedScrollHandler uses the current Content
97afa75805d0 : Unset window destroying state only if its surface is destroyed
db1382b69a70 : Enable audio sharing hysteresis mode fix when preview is on.
6d23239e3003 : Add utils to enable audio sharing hysteresis mode fix when preview is on.
c4fcd2145b30 : Fix diff-and-update-golden.sh path
bf0dd5d7ea8f : Dynamically disable View-based rotary haptics
ff3157201aa5 : Support BLE input device
4e6b50d5cabb : Disable system_uid_target_system_sdk behaviour on Wear devices
78bc76ce9b6b : Override on WindowToken if necessary
578118bf599a : [Audiosharing] Fix hysteresis mode
4234fccc2dca : StatusBarContentInsetsProvider: call start in init when flag is disabled
30ee66a9390d : Revert "Remove legacy database upgrade paths in AppWidgetServiceImpl."
b3a6f4647eea : Replace Thread.sleep with mTestLooper.dispatchAll
3771c2a103ce : Add flag for WRITE_SYSTEM_PREFERENCES permission
5addb3b37ed1 : Clarify unit test testOnStartingWindowDrawn
e0b2df08d8a7 : Revert "Recommend writeTypedObject()"
cbdee5de344a : [Record Issue QS Tile] Don't crash if unregistering from already unregistered service
391624d53594 : DefaultPermissionGrantPolicy: Check feature flag when parsing permissions.
e505caa252d6 : Add flag for Preference Service Consent dialog
a8586177ed6d : Define flicker test for drag-to-maximize
15160ef4101d : Unhide drag flag used for hiding the source task
e76794adb566 : Reduce showing starting window from a task trampoline
a28a83735902 : MediaCodec: Implement onMetricsFlushed callback
9840396a0d80 : [Ranging] Add new permission for Android generic ranging feature
7c973e4963af : Fix large clock not changing font size when the screen rotates
0bfcd492721a : Add functional test for drag-to-maximize
11624e75781d : Fix setting device effects applier
034c81f9d30a : Add flag for fixing TransactionTooLarge exception while UI is coming up
5602d31f29b4 : Implement PressureStallInformation extraction logic.
119205ec7899 : Register USD service
be65ac64e69e : Add warning for disabling Linux terminal
d48d25bd9e9f : [PIP2] Add PipSchedulerTest
212b0f5c27bb : Make DesktopWallpaperActivity aware of current user to support HSUM. (1/n)
c2a862333eba : Add setNtnSmsSupported API in SatelliteManager.
40c4f8c8a78a : [res] Duplicate AssetManager when changes are needed
dea24e4776a4 : Add removeManagedProfile API: Checks if the given user is a managed profile and deletes it
d1fc3b35fae8 : Added data layer for retrieval of custom shortcuts
faae4f287f8d : Removing unused SplitClockView
8bc38e307bfb : Fix "G" logo changing position during device reboot.
ac66333acf28 : Add keystore2_flags_java-framework
ce700e422bac : Add APIs for picture profiles handles
2b4129aadd95 : Disable certain test cases for PiP2
7e1b124173c8 : [aapt2] Print type specs in 'dump chunks'
d93abf60a2b3 : Fixes WallpaperDescription not returning its Description
4dfc92668a33 : Add a feature flag for USD
41a85ef0f1a1 : Adding API for createPictureProfile and getPictureProfileById
abb4eae23f31 : Recommend writeTypedObject()
14ab88c004bc : [SB] Ensure all tags used in the StatusBarChipsLog have same length.
8ed8a9f45ea2 : Public Autofill Id field in EditorInfo
8058a2548e51 : Use credential owner user id for identity check
5f9b5b84a980 : Revert^3 "Add API for Custom IME Switcher button visibility"
7bd2af8d8fa0 : Add keyboard volume mute LED. (1/2) (cherry picked from https://partner-android-review.googlesource.com/q/commit:0454055b9eef247f5498b1c91491b0ac07805a49)
65a26610ff87 : [framework] Add onLaunchRoutingTableFull oem callback
22fe298a02fb : @FlaggedApi: Enforce using constants instead of literals
84967ebf304c : UserProperty cross-binder warning only for SYSTEM
226b32846e11 : [sb] Move status bar animation to binder
4691ed38b4e1 : Add APIs for Z-order in TV views
edd252ce397d : Remove legacy database upgrade paths in AppWidgetServiceImpl.
99d5cb1b3e13 : Address uses of USER_SYSTEM in AppWidgetService.
6e0e44092fbf : Add changeId for File System Access
e0b1ad1c1f8d : [Notif] Merge HeadsUpManagerPhone into BaseHeadsUpManager.
2a0193bcd0e5 : Disable BroadcastRecordTest on ravenwood.
2f25406fb3c4 : Add flag for new APIs
b2b7b9a4385e : Add a new API to fetch pending job reasons history.
338b8af6b202 : [framework] Expose setServiceEnabledForCategoryOther as system api.
a44e44184273 : Avoid notifying that launcher is visible when opening translucent targets.
08e188af7eb3 : Fix testSetDeliveryState_DeferUntilActive.
f86fee7b93cc : Remove use_permission_manager_for_broadcast_delivery_check flag
9a629e24776b : Add android_text_Hyphenator to libandroid_runtime for host
7fd74ceffc68 : Create new APIs for accessory streams
fd6f435e3091 : API Changes and Update Activity
64d38fd44887 : Don't use concurrent message queue when debugging is enabled
127f9a8cd56c : Modernize Lockscreen Clocks flag
27982849a499 : Add API to enable/disable auto dependency installer
a148fa9d05b6 : Revert^2 "Add shell command to reset the frozen task state"
cf6ba67cd07c : Add feature flag for AppOp noteOperation batching
7c5ef08b9133 : Split app and non app window code
70ab84d6787d : STL ignore mouse wheel
367a617bd0f8 : Revert "Cache sticky broadcast intents on the client side."
2ead0928f6bb : Fix issue with floating tasks being tracked as a drag running task
9ba85c2267e2 : Add flag "cloud_compilation_pm".
c31c6224f5a9 : Skip tiny-framework-dump-test
0655bba98326 : Add a flag for Update Engine APIs for ART.
e0eb24a7d28a : Make sure Tiling knows when a task is focused
ca8cd83af8ab : Update DPMS to ensure that supervision is enabled in SupervisionService when it detects that the supervision component has profile owner status on start.
cd0a68e59710 : More fine-grained sampling
86b7e8d0eb8c : Fix 1px rounding error
4287bfe0ae13 : Remove visitPersonUris flag
2a8cc3064bbe : Remove volumePanelBroadcastFix flag
92e31e386c2d : Add support for parsing multiple signer for Static Libraries
c1b338df9924 : Skip Compat UI when in desktop mode
6bb17d79d450 : Correct namespace for afl_api flag.
a46dbf388d03 : Adding more protologs to find the reason why we cancel the statsToken too early
b54233e88a63 : Fix up kotlin files for footer inlining
8b56597d525f : [Settings] Refactor: deprecate LocalePickerWithRegion and add a new interface for LocaleCollector
c8bd5803713d : Never bounce the interruptionProgress spring
be6c6b4b50fb : Dump values in NotifCollectionCache, not just keys
189749322651 : Fix NullPointerException on mAppCompatDisplayInsets
4ec824ad9b31 : Avoid using NIO channels in addAugmentedProtoToDropbox
d74a779035ba : Set title on shortcut helper dialog so talkback doesn't announce SystemUI.
58cc2587c6b3 : Remove linked transitions
2dd8bf534f70 : Ensure that transitions are started only once
f6826a94e3de : DisplayManager plugins feature flag
48444d32211e : Create bugfix flag to enable system dialog transitions in Desktop Mode.
5757d42a469c : [Dual Shade] Remove codename from the aconfig flag description.
b90ac32fceba : Add back nav animation with minimizing.
3b7021446460 : Resolve SDK dependency asynchronously before install
289a2f43bca3 : Fix fold/unfold clear all button in Overview stops working
108854da3c84 : Guard multi-key gesture refactoring more strictly
d12cb37ba212 : Inline NotificationPulsingFix flag
6ec6fb19e59a : Avoid handling the back key event, when IME is currently hiding
d637dd37cb39 : Update keyguard presentation logic to take into account shade position
06015995dabf : Revert "Include the hardcoded list of sticky broadcasts to cache."
3da6b1802e5f : Revert "Include sticky broadcast cache in the processes dump."
9f6b28210976 : Import translations. DO NOT MERGE ANYWHERE
9cd0be2b0617 : Add modules for Soong-built GSI
a37eec775e73 : Workaround for the case when task is letterboxed
1fed1764a95d : Add swipe-PIP-to-home animation support for ...
c95d2db44a76 : [PM] Change LauncherIconConfig (2/N)
8288bc22398c : TintController: add lock to fix NPE problem.
7f8f77dec3a6 : Add unit test for limiting the sending of the PACKAGE_CHANGED broadcast
163b2afcded3 : Import translations. DO NOT MERGE ANYWHERE
43d8b9011e2b : Import translations. DO NOT MERGE ANYWHERE
48c6bb3750f8 : Only ignore transition change of finishing transient launch
0cd7e1600d5c : Import translations. DO NOT MERGE ANYWHERE
ad7de2b3eba5 : Add terminal app to stoppable_fgs_system_apps
37404f7e53ba : Remove references to MessageQueue.mUseConcurrent in test utilities
d5650e21c44a : Import translations. DO NOT MERGE ANYWHERE
c32240cbfa14 : Revert^2 "Create token for all nested intents of a top level intent."
be222d811f98 : Adding status and end date to SubscriptionPlan
04106bb8d9f0 : Camera: Add AE priority mode tags
b51d99822428 : Revert "Reset side stage position after child task vanishes"
e23ef8c947fb : Remove @Overridable annotation from CHANGE_LIMIT_PRIORITY_SCOPE
56ac636a7daf : set the error msg to the super constructor
3728a5149c40 : [flexiglass] Parameterize VisualStabilityCoordinatorTest
1a239822e146 : Implement key gesture to activate Select to Speak
f39fa244528b : Implement magnification toggle keyboard shortcut
e84cc9ea2617 : Correctly render ISO gainmapped images
9e68020608ca : Change Typeface redesign flag to readonly
151f69f70699 : Add public ADPF load hints with better rate limiter and hint batching
0a4d140e7255 : Suppress "DistinctVarargsChecker" errorprone warnings
7728a006a6d3 : Use user id from PromptInfo instead of context for logo badge
edc27eecae4a : Fix seaming issue in BitmapRegionDecoder for gainmaps
e17368c9e05b : Add flags for the system selection toolbar feature
2bedb57ec8cc : Camera: Add feature combination query version for Baklava
5bf63371a180 : Replace custom header spy window with custom touchable regions
c486a62ec260 : MediaCodec: add onMetricsFlushed callback API
00f537503c9c : Allow users to switch from web to app
8638e6d8ff67 : Use device credential owner as effective user in Credential Interactor
8fce13604e68 : [IMPROVE FILL DIALOG] Setup DeviceConfig flags
8f71af4a710c : Add setting for Key Gesture accessibility shortcuts
c6ffe1dcead3 : [flexiglass] Adds support for locking when entering dream state
3aab3c963e84 : [flexiglass] Support for delayed locking
011c2ad14dc8 : Position smartspace correctly when fading in
2d642328d1e1 : Permissions Flags: Merge SkinTemp/Spo2/Replace BodySensors flags.
0ab54d6c305b : Cleaning up flag NETWORK_BLOCKED_FOR_TOP_SLEEPING_AND_ABOVE
f2ff8bc746c1 : Add and plumb PromotedNotificationContentModel
97e36731250c : Cleanup usage of use_permission_manager_for_broadcast_delivery_check.
eb5a9b8a829d : For the wallet card & icon, only allow the drawable be loaded from bitmap.
216c82ec61c2 : Remove stream_camera VDM flag
08c42321d6ad : Add backup and restore permissions and declare the certificate arrays
ad8505fb7c50 : Log when bubble bar bubble is selected
81895d37fb33 : Log bubble bar location update metrics
883e29794272 : Remove the thread-unsafe destroyHardwareResources in VRI#die
30ac01b75223 : Add DEVICE_STATE_REAR_DISPLAY_OUTER_DEFAULT
f1f0ff37021d : Read from aconfigd_socket when flag flipped
3f319c02fbc2 : Port stepping animation to FlexClockView
daeb168c2853 : [sat] use new ntn signal strength TelephonyCallback
ece154a73cde : Health: Split BODY_SENSORS and bg permissions to READ_HEART_RATE
b940fee222ca : Split permission: Check feature flag when SystemConfig parses xml tag
3b469b453e9c : Move apply preview constraints for clocks in clock plugin
c2311d9cabd8 : Fix test issues with media3 buttons
248654d83f36 : Implement magnification scale control shortcuts
a102090f5d23 : Add forceFinishCurrentTransition and hook it up to screen power state.
13ab01a2e385 : MockContentResolver: add logging for missing providers
e210e59434c1 : Migrate to LocalOverscrollFactory
88074f369ac7 : Make instant hotspot multi user
0863a716c492 : Fix the bubble expanded scrim colour to use the shared constant
9e48e2735f8a : Read from aconfigd_socket when flag flipped
5984dac2fee7 : Add the side icon to QS tiles
5327ba6d3eb9 : [res] Make TargetContainer thread safe
12fd5079ed28 : Fixing missing word in documentation
da1ea0c8601e : Add handle input fix to developer options.
9e0aceff0d6d : Add hold-to-drag app handle to developer options
274648124440 : [res] Make TargetContainer thread safe
b0ac202caba9 : Revert "Revert "Extend unfreeze recents duration when running un..."
ccab87afcfcf : Camera: Add CONTROL_ZOOM_METHOD control
bc00da21daed : Use parent profile API to check for Identity Check toggle status
e46f6588380d : Fix desktop immersive's state clean up
00d837994c27 : Handle the case where transition may have empty immersive change
102e1422e356 : Limit the scope of receiver priorities to within a process.
9cde0f490bdf : Revert^2 "Add API for Custom IME Switcher button visibility"
de3341c38c47 : Refactor InputWindowHandle JNI code
2511886e8943 : Update the backup and restore to correct namespace
50b7f0b29d9a : Draw animatable tile icons in compose
6b471dab6fba : Add healthfitness apex to IntroPreference and SelectorWithWidgetPreference.
600eb65f1002 : Add a flag to cache UserInfo to avoid unnecessary binder calls.
8a529cf17456 : Implement extra large tile tiles on large font size
982de1afa77b : AppSearch PropertyPath does not allow non alphanumeric char
1454dc9fe1b7 : Introduce aconfig flag to guard support for minor SDK versions in minSdkVersion
a3db2690ef00 : Fix Java crash in Shell : java.lang.NoSuchMethodError
21dfd0d163d7 : add pixel_state_saver to mapper list
4d41e127874b : Disable user switching in SysUI for desktop
acfacc18b0d2 : Add flag for secure lockdown feature
b53054e8ea41 : Add ShadePositionRepository
89df1c88e605 : Update adaptiveauthentication OWNERS
79a8c7f00079 : Move WMComponent to wm shell
3dd161f7c591 : audio: toggle hdmitx/hdmiarc when switch surround mode
cbd093551123 : Add a flag to split up existing create and provision managed profile API.
ba8e6b36c4f3 : Introduce VibrationEffect.Composition delay type
8115615c7e7b : Revert "Create token for all nested intents of a top level intent."
05aab2062cbf : Check camera-dictated aspect ratio on activities with desired orientation also.
4bb467741b35 : Add an extra in AssistContent for providing contextual app functions data.
49c0de570b2c : [MQ] Clean up Media Quality APIs and JavaDoc (part 2)
131e58a377fb : Use rotation animation for display changes with only translucent targets
ccfc8657c8d8 : Fix PB animation is not finished normally.
debadca40368 : Adjust animation end callback for SystemUI
f82364f04d66 : [Catalyst] Add PreferenceHierarchy.forEachRecursively
72aacaa6a0d0 : Add requestRegionalSatelliteConfigurationForCurrentLocation and onRegionalSatelliteConfigurationChanged
276b7949e340 : Remove bal_respect_app_switch_state_when_check_bound_by_foreground_uid flag
2bf5b7478aca : Remove bal_improve_real_caller_visibility_check flag
a5a93a24cca9 : Deprecate ANI#labelFor apis
536e0b47d827 : [res] Make TargetContainer thread safe
e4b8b94c7f9f : Introduce Settings Preference Service
6f5bb8ff6c9f : [PM] Change LauncherIconConfig (1/N)
f2a0714eafef : Query for 24Hour format using current user id
925d6edc70ec : Reinitialize VC curve with the new min/max indices
d4300282d94d : Add null check for mTunerResourceManager in newly added APIs
52ae087d873c : 3b/ Migrate away from finishWCT usage in recents transition
cbe85d4d1f10 : Revert "Modify the logic that updates the information of enabled services in ManagedServices to handle the visible background user in MUMD"
05a4d00634bb : Separate caches for separate UIDs
1d534ee4ecc1 : Cleaning up flag NETWORK_BLOCKED_FOR_TOP_SLEEPING_AND_ABOVE
6266b5b1795f : [Ravenwood] Move redirection classes out of f/b/r
3516029b7c4b : [flexiglass] Enables fingerprint unlock regardless of AOD setting
8aea64ec2ff2 : [Ravenwood] Move redirection classes out of f/b/r
fb29cc0328e9 : Fallback to clear disable_gesture_pip_animating
f8f8ea8ab52f : Add PipAppOpsListenerTest
84627d987c20 : [PIP on Desktop] Restore task to freeform bounds when exiting PiP.
f88cc8ec4c76 : Add flag for read/write flags in layout files
040ca8f992d2 : Dump DesktopImmersiveController state
19ecd7af330d : Fix bug associated with flag for wired route types
a118f93679b9 : Handle the cases where the segment isn't long enough.
807d89a79044 : Add flicker tests for the minimize button
d8fefee2f219 : [Expressive design] Update padding for some widgets
fdd0e6bbee3f : Update executeAppFunction to take a OutcomeReceiver
7982499a16fd : Revert "[satellite] Removed hardcoded package name"
2db0d14b7a15 : Updated expanded bubble menu animation.
3f6296a7c009 : Add onCarrierRoamingNtnSignalStrengthChanged callback.
e74b960858b2 : Pipe system feature build flags into codegen
8c0805455a32 : Reset side stage position after child task vanishes
3c522665e806 : Convert new carrier configs to public APIs.
7c3f6f848fb3 : Fix warnings in DesktopModeAppHelper
f3ae4e09fa3c : Formatting kotlin file
e7a7053c14cc : Fix minimalism feature not responding to settings change
55e0fb2bcea7 : Fix lint errors
8c62185740d9 : Rework MobileIconsInteractorTest to use go/thetiger style
6e1e3a56ddf0 : Offset src-rect by display cutouts if needed
6900d991ba9b : [sb] status_bar_simple_fragment -> status_bar_root_modernization
9a151070d44c : Reapply "Clear dream settings when package uninstalled"
9f60fa896801 : TIF: Make constants and certain function SystemApi in TvInputServiceExtensionManager
be9c148b6b72 : Replace use_context_attribution_source / check_full_attribution_source_chain flags with read-only flag
d28118879c0a : Introduce a LazyJniRegistrar helper class for system server
26a8abbd0a35 : Revert^2 "OomAdjuster Freeze Handling with CPU capability"
e4553d02a562 : Depend on exportable mainline module stubs for doc generation
e7fb442f6b57 : [Fill Dialog Improvements] Deprecate API's
f2ee14a0cdc6 : Fix PowerStatsTestsRavenwood tests
0a9db7c89d56 : Create token for all nested intents of a top level intent.
089a8e885b82 : Revert "Clear dream settings when package uninstalled"
82e47315d64a : Add new shell command for IMS related tests to specify a specific user ID
7853e4a536a9 : Topology get and set API
397cc0fda018 : [Widget resizing] Add metrics logging for widget resizing
0cd1b994c9d7 : Add toast and timeout logic in ringer
e87012d633dc : Mark non-running task as minimized.
77ecd8de71a9 : Hide DragHandle from A11Y focus for bottom sheets.
cdecfb96172f : Enable missing flag for unit-tests
17bcaa18034f : New getBitmapCrops API, move some @hide to SystemAPIs
597d06bd6cda : Make new WallpaperManager helpers @TestApi
2e1827185401 : [2/n] Add AppCompat ProtoLog definition
86906b7cb471 : InputMethodSubtypeArray: prevent negative count injection
b7acc399ad02 : InputMethodSubtypeArray: prevent negative count injection
649bb61461b7 : InputMethodSubtypeArray: prevent negative count injection
1e9736165421 : InputMethodSubtypeArray: prevent negative count injection
e863b7b8285f : InputMethodSubtypeArray: prevent negative count injection
4e3e9b2843ba : Consider the previous unique state of an element during interruptions
eb8f25c05ea1 : Add support for vibration in session playback
ae83ac248f09 : Add `DynamicInstrumentationManagerService`
79cbedfa0be0 : Add a flag for Update Engine APIs for ART.
3404a179209d : Add logout to System user logic
b493a57d4241 : Support collapsed media in landscape
c4f53521691a : Send occluded update for apps dismissing keyguard
7cc0d5b21e90 : Add a flag to enable/disable SupervisionService based on whether the device is being managed by the supervision role holder.
f4117813f145 : Update profiling start trigger name
eaf38190ef38 : Rename AdaptiveAuth and move to security package
50222d986fb8 : Add a aconfig flag of disabling windowOptOutEdgeToEdgeEnforcement
459b9679fd1f : Make sure animation in StatusBarOrchestrator are in main thread
7f03381ea7ab : Fix missing token issue
9d66e5e1559e : Update boot image and system server profiles [M80C35P56S0PP]
3a2cf2f66be8 : Add aconfig flag to enable mirroring routes in MR2
b6f673447fd3 : Tweak javadocs for selectable and transferable routes
a3fa9ae3c9cc : HDMI: Solve deep sleep issue on Low Energy Mode
d2824e3ae688 : Merge latest API into single file before checking compatibility
eb8c893a426a : [flexiglass] Parameterize NotificationWakeUpCoordinatorTest
aba040083b4b : HDMI: On <Report PA> from current path send <Set Stream Path>
c0c499e18c84 : rename appfunction sidecar jar
08c469d59be6 : API Council feedback
7e12126b4726 : Check if notification group child is bundled and regroup when posted
3019e1cf0da6 : Add option to filter custom shortcuts
413c6473c766 : Add new 25Q2 keycodes
2550b28f0007 : Cleanup old API flag for keycodes
f957e504ccaa : [framework] Add oem log event callback API.
c169d8a4328e : Remove unused methods of SurfaceAnimator
153710daf0f5 : Add KEY_NUM_SLOTS to android.media.MediaFormat
6ce2ce16580e : Annotate NotificationManagerInternal with @Keep
63345612313a : Define Settings Preference Service API data models
8a0c10bc9f43 : Update to ToT RemoteCompose
16b3fdece1c5 : Night Mode Indicator
5e04fba51207 : Look for phantom processes before setting the process group
3e711fb85428 : nfc(api): Add a event log to indicate data migration
bcdf21f5c060 : Apps can opt in to be stoppable in the task manager
4bb7e51ff5a8 : Show the warning toast when the brightness slider unable to control
602f11dd147f : Adding debugging log messages to help verify and track issues in test suite flakiness.
606ff1917545 : camera2: Reduce metadata lock contention
d2143353bc37 : Apply the final modifier to KeyStoreManager
5f4124c6e616 : Disable verbose / debug log by default
900e0bda33ac : Deprecate unused config and make them no-op
61c2a9e5dfde : Introduce flag for writing tools API
0c737dc8f0e4 : Add -t to run-ravenwood-tests.sh
cdaeba8aab70 : [ARR] Don't call votePreferredFrameRate for LAYER_TYPE_SOFTWARE
6101702fc6ea : Add flag for VDM settings
0ef8ced10e26 : Move cloud_compilation_pm flag to art_mainline namespace.
2f4de78dd327 : Update namespace of flag update_client_profile_priority
46ca9b54f330 : Revert "Revert "Speculative fix for concurrency SessionManager i..."
5d8accdd3da4 : Revert "Speculative fix for concurrency SessionManager issue"
dde3c0629c4e : HDMI-CEC: Adding null exception catch related to mHdmiConnection
b07f68d6bf5f : Isolate Ravenwood sysprops from device sysprops
9b26e51d8b43 : Don't trigger batterychanged broadcast when ChargeCounter is updated.
431ae3b8da80 : Rate limit the battery changed broadcast.
7c3ee851e6ae : Fix unit test when the flag is enabled
0f8fc709a333 : Health Permissions: Add OP_READ_OXYGEN_SATURATION
94f478c6bcaf : Change namespace and description of overriding power key gesture flag.
9b693dfd1158 : Move WakelockStateChanged atom logging closer to event source
cc1c41271693 : Fix flickering when adding a new widget
82f413cce697 : Restrict auto-enter PiP on TV via permission
b1a7e34e16f2 : Revert "Revert "Enable color resources loader to be created usin..."
d118d8d29c74 : Added Dialog for Shortcut Customization
21cb5da50260 : Fixed color contrast on drag handle
eb9b980202d5 : Fix broken synchronization in CombinedMessageQueue:dispatchEvents
da582a35c9ce : Call InvalidationCallback#onCompleted when client is cancelled
05cf72db3269 : Clear dream settings when package uninstalled
58093e8be25c : Revert "OomAdjuster Freeze Handling with CPU capability"
77d970d8096b : Implement backup and restore for large tiles specs
4cacbc871a9b : Create new flag to separate talkback and magnifier shortcut changes
8b11ea24a43c : In ViewHierarchyAnimator, run onAnimationEnd even if anim was cancelled.
de5882f376d8 : Format ViewHierarchyAnimator + test
c1b68ef6d6a6 : Log event when tapping on do not bubble convo
a36924d786c8 : Log event when tapping on app settings
92348ba79680 : Add resId to ComposeView in QSFragmentCompose
d42aff8f9265 : Fix failing test
36ab93e9a79c : Use the DND icon for modes of type UNKNOWN
61d27141e3c4 : Refactoring: introduce UserLogoutInteractor
f3c5518a3db8 : De-flake RavenwoodServicesTest
31706cda4fb8 : STL avoid multiple onStop calls
7b433205b4a6 : Regroup app-grouped notifications after classification
ee483745da9d : Make APIs for custom shortcuts user handle aware
b30b43e3a97d : Clean up obsolete screen record code
9541f62a1946 : [flexiglass] Parameterize SharedNotificationContainerInteractorTest
fcaee0f8feaa : [flexiglass] Parameterize NotificationIconContainerAlwaysOnDisplayViewModelTest
8badf561d62f : Fix Desktop exit-transition code NPE
b88131136903 : Add support for customizing touchpad 3-finger tap
1cb371e50ead : Fix tests to use updated API
375e1c292b5c : Add crashrecovery libraries to tests
d7404e4f47d6 : [Audiosharing] Enable audio sharing feature when preview option on.
a8afad14f6c0 : [Audiosharing] Add utils to check if audio sharing preview is enabled
e2a41b38fb29 : TEST_MAPPING: remove bug tag and leave module in postsubmit
01911fa1614b : Catch DeadObjectException to avoid crash
5c6b6ce4e866 : Use application context in DeviceFoldStateProvider
59dfd3cdb54b : DesktopMixed: check we have no valid changes if launch change is null
ee890b38c66c : Added API to register DisplayListener with EventMask
8d22de94b9c0 : Import translations. DO NOT MERGE ANYWHERE
66cc4f7937d3 : Import translations. DO NOT MERGE ANYWHERE
44c01f4bf3f3 : StatusOverlayHoverListener: use display specific components
f133c63e4a16 : DarkIconRepository: add display id parameter
28b8bb27e8fd : Use the display specific DarkIconDispatcher in most places
57807815ec6f : Introduce DarkIconDispatcherStore for multiple displays
29f9b0f9ef94 : Start using DarkIconDispatcherStore and LightBarControllerStore
a3f1355010d8 : Recover from buffer stuffing for canned animations
e6069c98484f : Fix back swipes/presses ignored during IME hide animation
8fb2e7fd07fd : Add flag "cloud_compilation_pm".
31518075e52b : Add basic notes tile implementation behind aconfig flag.
c9d0ee2cd9d9 : Improve TopActivityFlag documentation in AppCompatTaskInfo
0e2aecb6c231 : Add WindowingMode check to TransitionFilter.
64d737ca1f00 : Use fast pair rainbow icon for devices.
7f7a0beda0cd : Prepare DarkIconDispatcher for multiple displays
13687969486c : New flag: floating_menu_hearing_device_status_icon
0bab5752fef1 : Import translations. DO NOT MERGE ANYWHERE
5cf08628b084 : Import translations. DO NOT MERGE ANYWHERE
40f279030ce5 : Import translations. DO NOT MERGE ANYWHERE
4661a3b2b521 : Convert NotificationGutsManagerTest to multivalent
5acc153e623e : Import translations. DO NOT MERGE ANYWHERE
13711bc75559 : Import translations. DO NOT MERGE ANYWHERE
97deafdfd428 : Import translations. DO NOT MERGE ANYWHERE
979d7037ce25 : Import translations. DO NOT MERGE ANYWHERE
237836ae49f1 : Fix UiAutomator breakage by eliminating flagged lines in AccessibilityNodeInfo#toString().
14cffa5308e3 : Fix the jumpcut surface position
27a8a9a6dc65 : DisplayDeviceInfo: improve logging when device changes
a5fe6975828e : DisplayInfo: don't check renderFrameRate for equal()
67f870a4b1d3 : Add move to next display to the shortcut helper UI
0a7a2fc31976 : Update Display Bt. 2020's name to... Display BT. 2020
24405b72bbeb : Add flag for a11y custom headers
4f6cb65db626 : [Ranging] Add new permission for Android generic ranging feature
29b849782817 : [Catalyst] Allow specify default PreferenceBindingFactory
d700176d5271 : Implement RestrictedPreferenceHelperProvider for RestrictedSliderPreference
084ec5753376 : [Catalyst] Introduce RestrictedPreferenceHelperProvider
c4dae2ed623a : Revert "appops: Finish started proxy op when chain fails"
90266ed4bf2c : Skip drag-to-exit-desktop test when drag-to-maximize is enabled
f874d583413d : Add a flag for the new getPendingJobReasonsHistory API.
17eccdca3090 : Fix UiAutomator breakage by eliminating flagged lines in AccessibilityNodeInfo#toString().
34ef2450e53f : Update media3 button tests and error handling
fac8075d4ac4 : Export BatteryUsageStats into prepopulated builder
d0dd29a3c75c : Set a transition to ready for removeTask IFF it's the only transition.
c40fd1295919 : Consistently use CPU uptime for temporary allowlist
b246d74057cd : Add fallback to FROZEN_CALLEE_POLICY_UNSET
bc0c2692e7f2 : Add a flag to enable SQLite storage for AppOps accesses
cf99fec6cbcb : TIF: Clean up TvInputServiceExtensionManager
06d98a11f1ad : Use empty inputGainIndexMap map when getSupportedDeviceTypes call fails
363ab5e61907 : Change filter logic to allow Context Hub owners
554e5a011cc4 : Add synchronization to PIC shared memory
4fb73bfc422a : Removing logs from getter method.
cde4d0198c96 : Pass profiling triggers
cad5bb1131dd : contexthub: add new getHubs hidden API for V4
22a3a15d33f5 : [A11y] Add DATA_TEXT_CHARACTER_LOCATION_IN_WINDOW_KEY to AccessibilityNodeInfo
c86dd3a2a63c : Log a proper event when drag-to-maximize
14bf0736bb4f : De-flake RavenwoodServicesTest
fab325c3ee1b : Revert^2 "Provide @ShadeDisplayAware ConfigurationInteractor"
125931a3c63d : Make CombinedMessageQueue pick an implementation per instance
b93bc62426e4 : Revert "appops: Finish started proxy op when chain fails"
bf4a1d7b2464 : [PiP2] Add PipTaskListenerTest
a8eb60f82c95 : Reapply "appops: Finish started proxy op when chain fails"
d4a941eb7bf0 : Speculative fix for concurrency SessionManager issue
f13df1570453 : Revert "appops: Finish started proxy op when chain fails"
d628f65cf5e9 : Fix coroutine tracing imports
51ef42f7a718 : [SB][Screen share] Proactively start screen record timer.
519a338a8aa1 : [MQ] Clean up Media Quality APIs and JavaDoc
27573933f8a3 : Add @ModuleProperties.AudioCapabilities to audioCapabilities.
b379943ca712 : Remove Step Clock Animation Flag
1b6d853a990f : Update demo mode docs to include Wifi Hotspot configs
bf64cbcd9c55 : Create empty VCN libraries
5352c7b334ab : Allowlist TelephonyProvider HSUM backup&restore
45448df4ad90 : [CDM] Exempt auto revoke for distinct packages only.
3a81df95d664 : [Media TTT] Remove MEDIA_TAP_TO_TRANSFER flag.
333f9119366b : Minor fixes for CrashRecovery modules
cb51880b9f39 : Revert "Provide @ShadeDisplayAware ConfigurationInteractor"
d3bf9f179032 : Add a new API to fetch reasons why a job may be pending.
0cb10d8251e2 : Prevent background noises when DND is on
35947910a62a : When globalInitOnce() fails, don't run any tests
834966e35ed7 : Guard new occlude/unocclude logic behind aconfig flag.
52f7285ede0f : Additional MediaRouter2Info wired route types
0c3ad80f2de7 : Allow packages to be enabled depending on sku value
dd45a19de7e2 : Resolve ContextualSearch Activity as the calling user.
b3cbc9330ecd : Revert "Add API for Custom IME Switcher button visibility"
f2bd615ea92c : Revert^2 "[SB][Notifs] Make notification chips icon-only for now."
62e28dc52b38 : Update run-ravenwood-tests.sh
c74d1b01f914 : Revert "[SB][Notifs] Make notification chips icon-only for now."
1969fb567337 : Add null check for mCallback in AutoFillUI
e16cd3e5119a : Prepare the mock startTransaction for expand test
5a9d75f6a035 : Fix issue with edit mode being dismissed immediately
d83506ada0b1 : [flexiglass] Parameterize NotificationGutsManagerTest
5286727e03a2 : Convert NotificationGutsManagerTest to Kotlin
e6cd51c94dce : Increase epsilon for aspect ratio check
bbcbceee0609 : ktfmt some files
ec7cea118a69 : Expire old bitmaps as owning app
0553d29a9f02 : Added a config flag for AiAi proxy text classifier
856f818ab856 : Add display ID parameter to window policy wakeUp()
371ac3f2a79a : Switch back to Java android.util.Log
a487052604b3 : Revert^2 "Log event when dragging bubble bar left or right"
32475a88bac6 : Add incall fraud flags
0c979fcb2497 : Adds OWNERS for accessibility SysUI multivalent tests
e55a350cb804 : Remove notification content from icon a11y
fd0184f144e6 : Fix multiuser race in FeatureFlagsClassicDebug constructor.
938893333164 : Handle media in landscape
9114929c0447 : Add getBackportedFixStatus to android.os.Build
3ea674c0d6fc : Migrate Memory Tagging to DPE
10e0249609dc : Treat emulated only states as not folded in WindowAreaComponentImpl
e5e9121ded68 : [PiP2] Implement expand in fixed rotation
bbc51caeacda : Use separate flag for suspend packages coexistence
b6665b8b9119 : Add a new boot strategy for headless system user mode
0a048d5eed85 : Further cleanup of SecureSettingsRepository & friends
07a242059711 : Add another printing owner
6e00e36243bf : Remove android.car.Car from the preloaded-classes-denylist.
f43162f41a3e : Update notification title
cb3584f89649 : Remove app icon prototypes code
ced7367a35f7 : Activate hot plugged device
d35a5aca44c9 : Use isDisplayed instead of isOnScreen
9cf627f91cda : Enable specific bundles to be enabled/disabled
ca337dfba89a : Revert "Provide @ShadeDisplayAware ConfigurationInteractor"
3ada7e1c3bdd : [Dual Shade] Introduce a shade header that is closer to the UX mocks.
a7e2d000f160 : [AAPM] Export feature flag.
5f8b36994ac2 : Fetch package context only once per package
5adc3f5a0d45 : Interrupt temporarily engaged timer while priority session is active
0d7fcc1daaf5 : Extend user-engaged timeout to STOPPED, NONE, ERROR states
7c52cec669d2 : Clean up the flag create_windowless_window_magnifier
dede77879d6b : Add flag APIs for customizable input gestures
a62c28732985 : STL improve Swipe definition API 1/2
53a345f8e781 : Use @ShadeDisplayAware ConfigurationState in shade window
d9277ea351b3 : Update preloaded-classes-denylist.txt
ef0bcba15d7b : Add flag for Backup and restore permissions
be73c3a39f2d : Handle application info change for system context
e5447541c34b : Call notifyInsetsChanged onRequestedVisibleTypesChanged
fe1b5d938e3f : Add a flag to enable the SupervisionService on Wear devices.
2b3b627b1a9f : DesktopMixed: Early out instead of throwing, when no launch Change
467aeefa213f : Add logging for task resizing started and ended on tile handle movement.
f2ff9b6748bf : Fix NPE: due to race condition between KCM package update
e94e64001c17 : VirtualDisplay API for brightness callback and default.
d7f405949630 : fix(magnification): stub mock settings controller object method in IMagnificationConnectionTest
8c98628524b2 : Introduce VendorVibrationSession
8be2939bfb62 : Clean up oneway_finalizer_close
796ed1831436 : Use @ShadeDisplayAware ConfigurationInteractor in shade window
fc6b7e301137 : Add predictive back animation for 3-button-nav
85cfd4a19811 : Update hearing device dialog UI
77e8b04bc194 : Per-power group sleep and dim timeout
7da83d897289 : Only accept NonNull data byte array in RecognitionConfig.Builder
b2f850e0e99d : Use explicit user id for setSensorPrivacy
b126897bda5e : Update the global focus state on onTaskInfoChanged without flag
79ba5298dc92 : Remove the dependency on user setting to enable / disable the lockscreen button support
c477f9f9dce8 : Revert^2 "Push atoms for decoding images"
26e46b53addd : Calculate Occlusion based on TDA status.
9baee02a52f3 : TIS: Entry Point for Standardized TIS Extension APIs
44e7c8a2e962 : Introduce IME Subtype getLayoutDisplayName API
da18d1599822 : Add vertical layout flag to Paint class
11c5b789813f : Stop relying on the local cache in KeyguardUpdateMonitor to determine whether user storage is locked.
2da1d681a71f : Add another printing owner
4ddb740ac094 : Calculate KCA Rect once PiP consistent
e24a9a85e382 : Add a functional test for minimizing a window
b4207ae4b5b2 : Compat-framework on ravenwood
fa0b5fac2c4e : Revert "Log event when dragging bubble bar left or right"
0e69d144608d : [satellite] Removed hardcoded package name
3880e0934ef7 : Update targetRect when font axes change
f159cd1e86e2 : [CDM] Add CompanionExemptionProcessor
653661c71859 : Specify product_specific to true in frameworks-base-overlays
3061256eb91e : [flexiglass] Dismissible lockscreen shows when folded.
ced9d6cf6ab9 : Fix errant nullable annotation
eaa609499d18 : [framework] Add getRoutingTable Oem extension API.
67ef574fcfeb : Make sure we turn off trade-in mode
6779b120ad1f : Removes WallpaperDescription stream read/write methods
19abe3a79461 : OomAdjuster Freeze Handling with CPU capability
a97ef4e75b07 : Add support for async policy enforcement in PolicyDefinition.
5a1a12bfd81a : Fix modelUnload deadlock
f40e9acebbb0 : Update Wear global action icons
c9ce255f85a3 : Enable smoother burn-in adjustments on transition cancel
baf21c13eb12 : Synthetic target tweaks
7717567b8b59 : Use a TreeMap for PL patterns so that they are sorted and hashed properly.
e984551f5c15 : SplitScrn: Check for all children visibility when updating mVisible.
85fa087dab8d : Cache AppOp mode to reduce binder calls to the system server
3ea12167a256 : [SB][Notifs] Add PromotedNotificationsProvider to auto-promote notifs.
73de83c7a65f : [Notif] Add PromotedNotificationUi flag refactor util class.
c571f6c7ee55 : [Notif] Use Kosmos.activeNotificationsInteractor in many tests
3821b8afb6eb : Block addition of custom shortcuts
695314eb4f80 : Shift App launch shortcut handling to KeyGestureController
86d55956db56 : [flexiglass] Fix surfaceBehindVisibility
dac990d672f1 : [flexiglass] Rewrite lockscreenVisibility logic
3b8a4b193e2e : [SB][Notifs] Make notification chips icon-only for now.
88149130a6a8 : Make ACTION_CANCEL cancel drag-to-desktop
9dd76020c4f0 : Revert "Push atoms for decoding images"
e688f6306105 : Add MULTICHANNEL_GROUP to AidlConversion
e67b42fe4dd6 : Remove bal_strict_mode flag
17316c1da465 : Remove bal_show_toasts flag
e3a5ed72dcd6 : Add frame timeline to veil surfaces in maximize CUJ
abe5fe29d944 : 2a/ Generalize GroupedRecentTaskInfo into GroupedTaskInfo
664c55c509e9 : Add null check for A11yMS#isA11yServiceWarningRequired.
744b7fed2233 : Disable AOD translation for flex clock as temp fix
082314d61645 : Fix flaky motion tests
b55170e68452 : Adding logs to identify any abnormal updates to the padding of the device entry icon
28a7999ab578 : Fix flake in BackProgressAnimatorTest
5b337f42d6de : Provide @ShadeDisplayAware ConfigurationInteractor
c8ea608da422 : AppOpsManager: Update AppOps proto_logging package.
96d6e461f660 : Move sysui flag for 3 finger tap to shared config.
03266b5df7e1 : Add oneway_finalizer_close_fixed flag
5b61832561a1 : Use the standard class name for dumpable keys
ee274b6ee21a : Add notification when satellite availability changes
7fbe13d2a8d1 : Screenshot Policy updates
6e71a5f7b7a4 : Use initial velocity with spring animations in TransitionAnimator.
6e1c8dc06ebd : Support Glanceable Hub widget configuration on HSUM
bf2845898b5c : Use dimension for the selected button padding
42da7b9cda04 : Add Jank Tracker
0087eef2488f : Properly support multi-user in NotificationSettingsRepository
a38706d54935 : Fade keyguard status bar during transition to and from Glanceable Hub.
282bd2325774 : Convert KeyguardStatusBarViewControllerTest to Kotlin
4cc79be1ffa3 : Introduce the VolumeDialogSliderComponent to encapsulate the slider logic together
64daa1f40171 : Fix setSurfaceBehindVisibility(false) being ignored
de235fca9ad8 : Catch service callback RemoteException
29f7b88ab4a4 : Remove obsolete permission checking.
954faef9aa66 : Create a separate instance of StatusBarModePerDisplayRepository for every display
153d5b0017d1 : Improve AppCompatTaskInfo readability
01a454f5b48f : [Expressive design] Update Preference layout
604bf23d0e7a : Allow VDM to access default device camera
252b43709597 : CentralSurfacesImpl#onStatusBarWindowStateChanged check for correct flag
d4701989283c : LightBarController: directly inject BiometricUnlockController
d34b2dab2769 : Add support for LauncherUersInfo config
151f49a25fd5 : [Expressive design] some widgets should not be round corner
8a88fe39bf2c : STL Added support for pointerType in Swipe definition
d05739d537bb : Add a flag for APV support in mp4 writer
448fb1b49209 : Show HUNs over Quick Settings Shade.
11093e4e8689 : Move binder calls to the background threads
44eb5aa1277a : Small refactor AudioDeviceVolumeManager in AudioService
39e531d03eff : Account for interpolation loss in cumulative network stats
cfe69e761cba : [flexiglass] Parameterize NotificationLoggerViewModelTest
0d00f8dae1dd : Modify Desktop Windowing incompatible task heuristic
46c0aa9ad3ec : [Catalyst] Support SET api for int value
f2825433e372 : STL introduce Content.findActionResultBestMatch(swipe)
38345cf8b36d : Add flag for customizing touchpad 3 finger tap.
893d0f9f016c : Change ImeTracker phase in ImeInsetsSourceProvider to be more accurate
3246c7581702 : Fix flicker after unlocking when IME was showing
0943bf773544 : Add activity opt-out property for universal resizable
cc9ecbc9e12c : Disable NotifRedesignFooter flag for testReInflatesFooterViews
504fe3e9aacc : Mark android.sdk.major_minor_versioning_scheme as exported
86339a736826 : Update OWNERS for android-sdk-flags
25913f9e862d : Fix pending intent class loader
e820781d5b95 : Remove enabled flag explicit_refresh_rate_hints
e1eb9af2f776 : Do not start pending animation if another transition is collecting.
15a64debf796 : ktfmt classes affected by recent @ShadeDisplayAware refactors
35dbd954489a : Make ConfigurationRepository and ConfigurationState display aware
c9a706c51efb : Changes to OWNERS files for USB.
36b7fb7fc340 : Respect hierarchy visibility of transient launch
cb932a9ff9b5 : HDMI: Send OTT to standby if user didn't press button on Active Source loss
126c0f3323a3 : Reduce log length of Task toString
1fab6d28598f : Fix round scrollbar test
1f38d0f044f0 : Supporting more non-default close activity callback for PB
afda30b22264 : [Catalyst] Add binding helper extension for test
2b073340a45d : [Catalyst] Include dynamic icon into graph
c1c274f45d2d : Builder-style args for PIC
3ae978a009ba : Crash if Shade window config forwarder is instantiated when the flag is off
ccb6a9e8db13 : Fix seeing unicode curly quotes issue in HapticFeedbackConstants
e5635ccb0b6e : [Catalyst] Provide PreferenceDataStore before binding
f712e74265a7 : [MQ] Support ambient backlight
7ed58e66bfce : [MQ] Add auto-PQ, SR, and capabilities methods
fbf2b8829e02 : [MQ] Add more media profile methods
43e8087825ff : Generate media control action buttons from Media3
1aa75bfa760e : Add check to only notify callbacks for uninstalled packages if they're MBA Bug: 375000910 Flag: NONE (API has no consumers yet) Test: Unit tests
060899bf0f71 : Add a flag for the new getPendingJobReasons API.
8b7b9fb2ec32 : Load tile icons while mapping state
6cf3567b4fa1 : Refactor getBootUserUnchecked() to extract a couple of methods
7b474b644d0f : Add a new config_HsumBootStrategy to control the boot strategy in Headless System User Mode (HSUM)
db553359645f : Deprecate ANI#getLabeledBy, ANI#setLabeledBy
51176ed575a0 : [HSUM] For Dream, use SmartspaceManager in the current foreground user context.
8df221c6c889 : [HSUM] For glanceable hub, use SmartspaceManager in the current foreground user context.
4407d43c694b : Ignore null action in ActivityManagerService.
787c2ce84f73 : Thumbnails for clocks
d0e060d82c81 : Persist & Route FontAxis settings into clocks
8a2ad5e1fb1e : Add JankData Processing Logic
6f6a513b4d89 : Deprecate View#announceForAccessibility and AccessibilityEvent.TYPE_ANNOUNCEMENT
365306014870 : Add education promo for App-to-Web
64ea16677060 : [MQ] Add media profiles, callback and contract
54a90000b0ab : Add exclusion / inclusion filter options to...
5a07caa8cd5b : Use UserHandle.isCore() instead of Process.myUid() < Process.FIRST_APPLICATION_UID to determine if the process is a system process.
b142103e050e : Fix TestLooper to work with CombinedMessageQueue
a6de34eb646e : [Accessibility API] Add implementation for isRequired API
384da4bc29c3 : More precise test summary
b87fa1c01544 : Revert "Revert "AudioAttributes: add SPEAKER_CLEANUP to system u..."
06fdce08418d : Remove bal_require_opt_in_same_uid flag
656cd8d6b1ab : Handle missing ActivityTaskManager service
9ce7d36fb949 : [sat] add ktfmt formatting as its own CL
e5343009e64b : [sat] add config for the opportunistic satellite icon
4252d66b7947 : Fixes NPE for some live wallpapers
41adec2334d6 : [Ravenwood] Cleanup and update RavenwoodEnvironment
b327a1806f57 : Allow ALLOCATE_AGGRESSIVE permission for StorageManagerService.getInternalStorageRemainingLifetime
ba41b51131af : Add HEIC_ULTRAHDR image format
775783ee7160 : Long press Lockscreen status bar expands the shade
2b05463efca3 : Add SDK support for thermal headroom callback API
e004e1ab9b78 : Suppress flyout during mode switching
9e8cbcc651f6 : Add clipboard indication text view
ed6b1047cbf3 : Update HomeControlsRemoteService to cleanup callbacks on destroy
c9083266fd26 : Revert^2 "Set view bounds in parent by diff'ing the bounds in scre..."
187ed18b8ca1 : Adjust color logic to be based only on home wallpaper
d8d8e6d24a2e : Add API to limit system promos
ce5321e824cb : Remove native support for tracing in host runtime
5ba04ee697f7 : Fix missing creator token error
f7bad0509e7c : [PiP2] Add PipEnterAnimatorTest
443863985b84 : [Ravenwood] Copy font and ICU data into ravenwood-runtime
75c7547b09cb : Fix DreamManagerService dream validation in HSUM
9180a4ac9206 : Fix the on click popup when clicking on unavailable widgets in HSUM
c5c6c1a7ca1c : Enables opt-in strict intent resolution for apps
154d5562c9b4 : Fix test when home controls hsum flag enabled
01c0ca279af3 : Add MULTICHANNEL_SPEAKER_GROUP to MediaRoute2Info
e1c9dd1aa751 : Added Sensor.TYPE_STEP_COUNTER and Sensor.TYPE_STEP_DETECTOR.
60e3051079a6 : Allow redirecting log to console
cf58595630fa : Make device_config clear_override take effect immediately.
97f1d81c3d4e : [flexiglass] Parameterize StackScrollAlgorithmTest
375fc344db00 : Adds a flag for getSupportedRefreshRates api
35afa1ac5175 : Implement new stop reason for maybe abandoned jobs
53f0ecd679ae : nfc: Create OEM extension init broadcast
f31aa6ceb8d6 : Adds AppWidget category not_keyguard.
47083cabfbb7 : Add new content handling methods and implementation
7b0bb6bff586 : Added highlighted background support
93718d155ba8 : Prevent GC requests from piling up in LockSettingsService
209cdbf3895e : input: deprecate the input_native flags namespace
4f9e4f060a30 : Fix RAVENWOOD_VERBOSE for liblog
22cd08573549 : Add filmstrip matching to AnimatorTestRuleToolkit.
b81fc8cf5fdb : Remove handler callbacks correctly
ad76b221733a : Input: add option to make 3-finger tap a shortcut
5ba9b8c2bcc7 : Remove TODOs for b/336581871
cd1ea09d4184 : Add reasons from DOZING->GONE
1d6ba5f83d13 : Use different bg drawable for settings/history btn
795de2eb7d26 : Add ORIENTATION_ prefix to @IntDef ScreenOrientation
b563aba2740e : Add initial frequency to envelope effects
2b9e7447c21d : Update slider style
a5e38b12c730 : [flexiglass] Parameterize NotificationStackScrollLayoutTest
d0b23cfd2215 : Don't allow app handle to be drawn for any task w/ default home package.
f9fc804b0173 : Make sure the ProtoLogTool includes all groups in the viewer config
e4ebeefc4946 : SDK_OWNERS: add nanaasiedu
807d6e72b055 : Add API for Custom IME Switcher button visibility
11702d2c6363 : Revert "Account for interpolation loss in cumulative network stats"
6b3a9e92294b : Add Latency logging for Classification adjustments
eb31a693c14a : Create OWNERS file for the Android SDK
cce7892f17c6 : Remove unnecessary log to avoid confusion when debugging.
3b0f74a1eb09 : Delay low power state when entering KeyguardState AOD
82af9d865163 : Handle desktop app launches in DesktopMixedTransitionHandler
a6b4e3d623fe : Fix exit transition from PIP
957f7c9970b9 : PriorityNestedScrollConnection: Add flingToScroll [1/2]
d2a09f13520e : Delete unused variable within ActivityStarter#executeRequest
c356eaf1af89 : Define separate horizontal margins for Dual Shade.
d348393e9a23 : Don't check the visibility of the source in intersectsAnyInsets
fb4fd1add891 : Fix reshowing MediaProjection consent Dialog
5181be0400f8 : [flexiglass] Verify that NSSL#mQsExpansionFraction is not accessed
37fa914344c9 : Handle custom key gestures added by the user
e3e08eff4624 : Health Permissions: Add OP_READ_SKIN_TEMPERATURE
e0c354e97afb : Match screen state change reasons to wakefulness reasons
19c761e2a4f0 : Defer PB animation if previous window cannot show with snapshot.
a3a3f0dc6349 : Do not collect WindowState when moving to a different display
3f3fb0300bcd : Switch display windowing mode when display connected
7c119ed0986f : [Satellite] Added default datagram value in default config.
1e964865baf7 : Created indeterminate RangeInfo type
ddea818ed718 : Add audio device type MULTICHANNEL_GROUP
e42d1d658963 : statically important supplementalDescription flag
0f09c2f475d0 : appops: Finish started proxy op when chain fails
a77179bc14b4 : [PB] Find same change target by surface control
fc3a1ac99949 : Ignore null action in WebViewUpdateService.
1cbc2bdc2589 : 3a/ Improve logging around transitions
618e4b139b8d : TIF: getClientUserId API
879cd9981a9b : Ignore null action in AppBindingService.
14a8d903ddbb : Add READ_BASIC_PHONE_STATE permission to Shell for CTS
ea97b13ff2d4 : Implements satellite state change listener APIs
31f6988ebb3c : Introduce satellite state change listener APIs
a4c760944bb8 : Fix test fail after force trigger predictive back transition.
c226d4b92235 : Add flag for biometric prompt to use effective user ID
60dba0b0b867 : Synchronizing the loudness codec dispatcher
2ddf33ef6d18 : Use baseIntent's componentInfo to supply Icon for Enter-Desktop.
3edf573f927d : Allow system process by opt in
07ebc37d6ba9 : Added aconfig flag for OTP in TextClassifier
2f359099b8e0 : Deprecate STREAM_BLUETOOTH_SCO
f0e69b8528ee : Add basic test coverage for systemfeatures-gen args
a4e8bb01c369 : Updating device_idle unit test owners
5614630b2c5a : Make CombinedMessageQueue member names overlap with LegacyMessageQueue
74ca430f6f1e : Remove FLAG_SCO handling from the AudioAttributes
88c3309ba0ff : Eliminate LongArrayContainer
4fe100ea5fd7 : Implement supplemental description API
bb2363ddac36 : Create a separate config file for broadcasts related flags.
bfddac850b7a : Mark the API of Settings.getDefaultAccount and Settings.setDefaultAccount as deprecated.
2902817c8018 : Use the standard class name for dumpable keys
3035da047301 : Add flag for NFC associated role services
77c12e0fbd8d : Exclude the PIC key package_info from shared memory
2b1b5cc74837 : Cache getProfileIds to reduce high volume of binder calls.
9a55bf3590c1 : Update statementservice assetlink.json parsing
4d12e7eca4e7 : Prevent any more JUnit 3 based tests on Ravenwood
3567d3f64a22 : Avoid showing/hiding task surfaces in window decor relayout
0796d527866d : [VRR] Add new frame rate settings APIs to ViewGroup
68c8ddabf994 : Account for interpolation loss in cumulative network stats
7344fbcfb8d3 : Allow interact_across_users to get and update AppWidgetOptions
e97cf01874df : Add a dedicated error code for missing function ID
67167c3b4524 : Split the isAppFunctionEnabled method into two overloads
6e4ba167f634 : [PiP2] Add PipResizeAnimatorTest
9475afcea038 : [flexiglass] Re-enable HUN remote input dismissal on outside touch
723ac23f447a : Add API that returns IntentSender for widget configuration
97b363ace34f : Dismiss Biometric Prompt on device lock
41cf9d5d613e : Add a flag and toast for Intent Redirect
7c5646f1898f : Adjust crop to account for child scaling
55f7236c3503 : Log when bubble bar is expanded or collapsed
8dce3aeba9b5 : Add test for aconfig flags support on ravenwood
5f37d1fddfca : Revert "AudioAttributes: add SPEAKER_CLEANUP to system usages"
4fc434cb7189 : Add flag for migration of MTE policy to DPE
717bc6d59f6d : Camera: Add keys for color temperature feature
fc0b55f341d2 : Update pass-through opt-in EnabledSince to Baklava
3a75508f4dc6 : Add feature flag for deprecating permission tree related APIs
ce7ee1f7ab4b : Clean up flag safe_mode_timeout_config
2583a8f4e49d : Handle invalid values in ResizeableItemFrameViewModel
3f23bae2707e : [Custom Key Glyph] Add getter to fields in KeyCombination
aa9cd4ba44e8 : JobScheduler: Update trace message when apps running out of quota
8c9504a2faf4 : Clean up flag ipsec_transform_state
897c87ce1a58 : Bind volume ringer view to model
f8c9fd783885 : Fix check creator token
adc9d6e72d5c : Create onTeardown and use it.
45180f048cbe : Log event when dragging bubble bar left or right
ceba95748dd7 : Camera: Fix doc typo
a8e68369afd2 : nfc(api): Remove Tag from onTagConnected callback
1eb82adff071 : Fix work badge becoming black in groups
2840bc9b0f1e : Reduce noisy Desktop Immersive logs
bcca774a4670 : Use AnimatorTestRule to end animators during tests
75b3586731c0 : Creating volume slider haptics view-models on non-empty states.
88228a8a16a6 : Prepare LightBarController for multi display
ee6b8f7bf95a : [AAPM] Add Feature class
f00a12f8d295 : Adds a new listener to propagate taskInfo changes when change.mode is TRANSIT_CHANGE
06f56817d48c : Add ACTIVITY_SECURITY_OWNERS to ActivityManager* and ActivityStartController.java
500d5073a1f7 : Add paging to modes dialog
ba7e42168c6d : Add CombinedMessageQueue#mNextBarrierToken
50f92255d1f0 : Revert "Ignore AppHandleEducationControllerTest"
17975c5e39c9 : Clear leave shade open on bouncer exit
4de4b2331110 : Guard CommandQueueInitializer using `status_bar_connected_displays` instead of `status_bar_simple_fragment`
f30e1aa0fde7 : Add runtime user process assertions
cd43ddcbf799 : Add platform_testing/libraries/screenshot to TEST_MAPPING
59b6c3b31e6f : Introduce CommunalWidgetRepositoryRemoteImpl
67b7f1cf0742 : Fix Transparent Activities in Split Screen
3a6ad3638785 : Fix apostrophe in the SubscriptionManager::setOpportunistic() method
add07a33ed4a : [14/n] Add support to break tiling on display rotation.
189579e45697 : Add persistence flag to dev options.
48f19a06c3c3 : Update enforcement for wallpaper scaling to post B
3ead62064104 : Add Error categories for AppFunction execution response
e0a29e2d14ad : Ran the onboarding for pixel_video_sw
ca7e6b24540d : Change the namespace of the ignore_restrictions_when_deleting_private_profile flag to profile_experiences.
a2cc1612bc13 : [13/n] Enable drawing rounded corners for tiling.
6ee02ccad60b : [STL] Fix predictive back with activity-compose:1.10.0-alpha03
3e4c3b9c5ab0 : Revert "Check for NLS service intent filter when rebinding services"
522790bd08b8 : Fix API lint errors on the Jank API.
a37db3ab47cd : Make shell default animator respect clip rect
0fbbba1c46bb : Rename PropertyTransformation.transform.value to idleValue
99a000c39cba : Move TransformationRange outside of Transformation
4e0e264e3d13 : Add flag for default device camera access policy
a13cdf4d0978 : Refactored EventMask to EventFlag
83dffac86d6b : Show system event chip on connected displays
ef96b34c62d8 : Separate the PREVIOUS_APP_ADJ scores.
530d85c05f17 : Check parent frame instead of window bounds
dde9ccdcd56e : Add aspect ratio setting button in window handle menu
94a9fc580039 : Reapply "AudioService: synchronize audio mode and focus for Telecom"
75d85d2c8a9d : When configuration changed, only attach starting window to task
2c637fbb5340 : Implement 1-to-many Setting ID
0ba7b6460404 : [Divider] Read veil color from DividerAttributes
df7c1e93cc11 : Ensure the top-resumed-activity is updated after WCT applied
b276777f6256 : Revert "[CDM] Add CompanionExemptionProcessor"
b761dbb9f67a : Rename CommunalWidgetRepositoryImpl
7548806d84e4 : Introduce GlanceableHubWidgetManagerService
11a57e3595bb : Parcelize CommunalWidgetContentModel
382cd9404a51 : Revert "[CDM] Add CompanionExemptionProcessor"
20b5676baaab : Revert "[CDM] Add CompanionExemptionProcessor"
96b623828e48 : Validate TimeZone in TextClock.setTimeZone
b16a870ae2e0 : Set touchable region to full screen when glanceable hub is visible
3b88d4b61c83 : Add glanceable hub on mobile flag
b307bfbb0636 : Make inner classes of ConcurrentMessageQueue static and/or final where able
a4e167f69b44 : Fix wrong string resources for splitscreen lhs
fcc2e1173228 : Implement Observe Mode and Preferred Service event listeners with functional callbacks.
d50f72fe0e6f : Always set CE key protection in migrateUserToSpWithBoundKeysLocked()
302974580522 : Flexible 2-app split: Pinned Taskbar
e5ccb2393fcf : Adds hearing device local ambient volume system settings
c2a85d0bda9f : Allow the NAS to update channels and add @SystemApi to NotificationChannel#setImportantConversation.
7e9ad04630f6 : Add a new API to create conversation notification channels from the NAS.
7a0c3b2bacb0 : Inject swipe events with fixed frequency
d047696a84dd : [Catalyst] Add KeyedObservableDelegate and fix datastore binding issue
a1422149cbf8 : Use getParameter return value for the preset length
64977eb92c05 : Import translations. DO NOT MERGE ANYWHERE
b80c7c641bb4 : Hold a wakelock for the installation
bf74decc5c64 : Expose SatelliteManager as public manager class
2aeed2f5c48d : Fix a bug where bubbles intercepted swipe gestures from fold->unfold
27070e9705d0 : Add interface to get the value loaded from ISIM record
748e66d6406d : Revert "Set up status bar input layers on init."
2b6f5b2e6522 : Enable FgThread and ServiceThread on ravenwood
1ae17042e9da : Modify specific IMS MmTel interface to system API
337d1257709a : Expose VCN_MANAGEMENT_SERVICE constant as SystemApi
78f5213f2ece : [res] Allow changing RRO state that target android
8a61fdd07078 : [res] Add a feature flag for android resources RRO fix
bcef16c588e8 : Revert^2 "Create a Strict Mode violation for Background Ac..."
88b167af1360 : Create RO flag for BAL strict mode
515afbe80af7 : [Custom Key Glyph] Retrieve key glyph in Shortcut Helper
2f589024bc25 : Remove robolectric screenshot tests from shell
046f770b65ac : Improve javadocs of app functions
6cd8e946030f : Revert "Enable color resources loader to be created using FRRO"
927eb4f3382f : Revert "Set view bounds in parent by diff'ing the bounds in scre..."
74cda91f692a : Add OemExtension callbacks support
19d4d05f89c5 : Remove @Overridable from CHANGE_RESTRICT_PRIORITY_VALUES.
95ff504c2e0b : Migrate sysui swipe event logging flag to bugfix
7eeec194df87 : Clean up aconfig flag oem_enabled_satellite_flag for SatelliteManager
bd60c11b6024 : Clear manual bouncer transitionId on CANCELED steps.
9fd37904001b : Override RoleManager for KeyGestureEventTests
713b8f406ce7 : Restored an exception string
1c5e38ca1d05 : Include feature flag state in cached package
780541bc120a : Also compile methods in inner classes of MessageQueue
6ccae6ad210c : Release visual indicator on snap to stable bounds
2091963109d7 : Migrate some viewmodel to activatables
6078c273f606 : Add media to QSFragmentCompose
af7095ea5b00 : adjust migrateAccessibilityButtonSettings to check every shortcut type
f44963d537a3 : Ignore AppHandleEducationControllerTest
9038e5df9c3b : [PiP2] Add PipExpandAnimatorTest
597e8993e9a1 : HDMI: Cancel standby delay runnable on one touch play
ae7c8354ab5e : Override the SetFilterBitmap and IsFilterBitmap functions for AnimatedImageDrawable API
b233f1527e72 : Use static dependencies for libandroid_runtime on host
8d0225371ec8 : Import translations. DO NOT MERGE ANYWHERE
0ba21a75b1b8 : Push atoms for decoding images
d9b1cebe178a : Update span error message to be more descriptive
5a4d312fb391 : Remove CompatChange ID for ENABLE_PREVENT_INTENT_REDIRECT
e9f609729832 : Import translations. DO NOT MERGE ANYWHERE
6d9a341a78b0 : [RONs] Fix ProgressStyle totalLength calculation
c4e39ae1bff9 : Update Biometric Prompt for private space
6385dd206a16 : Revert "Create a Strict Mode violation for Background Activity L..."
1fafa89f3d02 : Revert "Fix test for Strict Mode change"
f7bf52bd67b9 : [bouncer_ui_revamp] aconfig flag for bouncer blur
b0e071f39f69 : Add #include <format> to Bitmap.cpp
fbf6781b3642 : Import translations. DO NOT MERGE ANYWHERE
519cfbe094b0 : Only go from OCCLUDED->GONE (launcher) for select devices
3162fb4ed7ed : Improve NSSL `start` positioning for Dual Shade & Connected Displays.
9a6c1872777e : Convert AccessibilityQsShortcutsRepositoryImplTest
67a66f47dcd6 : Fix VcnConfig garbage collection
57414d2a45fb : Throw exception if SP cannot be unwrapped during migration
bc78ceb189e7 : Disallow autofill session for visible background users
03b5138a605c : Use final strings for edit mode and adjust the reset dialog UI
651989455a90 : Small UI improvements for tiles
a60e61758239 : [A11y] Add aconfig flag for character bounds in window coordinates
c4153730fac3 : Remove obsolete comment about Weaver deadlock
d318c3495b3c : Add key gesture events for Desktop Windowing task resizing.
e556f860f62a : ConcurrentMessageQueue should search treiber stack in isIdle
2cff18e6f7cb : Add the rest of the logging types
96cfa57d368b : Add more logging to ProcessedPerfettoProtoLogImpl.dumpViewerConfig()
451010f9869e : Modes dialog fits 3.5 tiles by default
bad11c8a8283 : Update HomeControlsDreamService to support HSUM
ab945d6ae25d : Reserve flags for CSS/WM features:
ddea46c2843c : Re-enable PowerStatsTestsRavenwood
b73f9f50dce9 : Don't sample StateFlow
7f1e37fc605e : Disabling button when not visible
5754baa4be57 : Revert "AudioService: synchronize audio mode and focus for Telecom"
dc92aabe9ae1 : Change naming pattern of implicit zen rules
80be56529bf7 : Add flags for desktop app launches for developer options.
1d30d638766b : Fixed docstring on TileHapticsState and the class
e285b04b6fbf : Frozen-aware AppOpsService op noted callback
2c9a6886e0c7 : add runtime color filter support to platform
41897a940f8e : Provide privileged apps access to getUsbBandwidthMbps API
49b04a236483 : Update persistent repository on visibility change
94ad0a263ff1 : Cleanup release wakelock on bg thread flag
0f1ce5996768 : Redesign app language selection page
965caedb30bb : Show privacy dot on connected displays
eada944fb773 : [flexiglass] Fixes status bar padding on lockscreen scene
752ddbe604aa : Introduce Kosmos.runTest
a7f99b57623e : Revert "Create a Strict Mode violation for Background Activity L..."
9a0054cd1b9f : Revert "Fix test for Strict Mode change"
9392f14953e4 : Make PropertyTransformation public
59302ac19437 : [2/n] API to ignore activity size restrictions on virtual display
0b440fd947de : Implement remote home controls data source
e3ed9b0a9898 : Remove unnecessary logic from HomeControlsComponentInteractor
d30a2c8bcbaa : Ensure home control dream is enabled in user context
817196af5d6c : Define remote service for home controls
f23314872d82 : Add B&R logging for Modes
519ced7d5167 : Fix desktop launch (unminimize) animation scaling.
ea7faad1564b : Move TracingTests to presubmit
d2fa6733d0fc : [Catalyst] Provide launch intent for preference graph
aa2dfabbefa3 : [12/n] Fix tiling bug causing crash because of HSUM
71c471f05a56 : API review: Updates the API for CallbackMode for call and SMS
b7f32d074f80 : Add debug.tracing.profile.* system properties
5f5678d8d45d : Use mMergedConfig instead of mGlobalConfig while getting winConfigFromWm
f63aff714191 : adb command to reset and print out TutorialSchedulerRepository info
0fc5d10e16a0 : [Custom Key Glyph] KeyCombination should be parcelable
9c78d59878a4 : Refactor initializing of DesktopRepository.
3da32b37d572 : Remove unused code from the integrity service serializer and parser.
dc7d8f8232a8 : [flexiglass] Parameterize AmbientStateTest
1941110a1757 : KeyguardUpdateMonitor: Add missing getFaceAuthInteractor null check
3f59527d04d5 : Fix Restart and Resize hints flickering
7b1ef4dab830 : Fix NPE when attach thumbnail
84de708b64d4 : Revert "Fix NPE when attach thumbnail"
fbfa91a29782 : Deprecate Paint#elegantTextHeight
8b8e208b72d0 : Fix user attribution of zen-related calls into NM/ZenModeHelper
da8a45b62425 : Add APIs for adding custom input gestures(keyboard shortcuts)
daa0f7bcf35d : Add shell/freeform to ktfmt directories
b045adffb330 : Apply ktfmt on shell/freeform/
ec965a8a801a : Implement animation for TRANSIT_MINIMIZE
b7a11da4040e : Pilfer pointers when touch gesture transferred to embedded.
99fa3031e2e0 : Support SurfaceControlRegistry logs in native
841d528b7d89 : Adding Dim Bounds to Proto
3ed86bdba393 : Add PIP transition support on non-match parent activity
e2c0f568b5f9 : Fix null pointer exception when set preferred transport
78e5f4846c4a : Improve testability for WindowLayoutComponentImpl.
228bc14e82be : Align non-UI context behavior in getCurrentWindowLayoutInfo.
5dd7500aa0ee : Add config to change drag-to-top-edge behavior
2410a73f7696 : [flexiglass] Migrate keyguard bypass, biometric unlock state, and related haptics logic
6235be0473f5 : MultiResolutionImageReader: Unhide constructor with usage
37a55752a924 : Fix NPE when attach thumbnail
43fb7395288e : Add support for argument array for executeShellCommand (1/2)
c3aa2c3281f9 : Combined Legacy and Concurrent Message Queues
81d73d5ca6cf : Remove input channel from visual indicator
edde8b212d4c : Update RADIO_TUNER to correct permission
9159b6ca5efd : Add additional audio focus logging
c94140824cee : [aapt2] Add a compression control option to 'compile'
c10aa455762e : [aapt2] Allow --flag=value command line options
d52d5593647e : Use simple equality to compare media notif intents
978bea85fdc0 : JobScheduler: Fix ThermalStatusRestrictionTest failure
4f82f66e139e : Minor cleanup of dismiss action interactor to hide state that it doesn't have to expose.
b819caadb50f : Remove most fillInStackTrace() calls from core
5e4a0083d324 : Add a flag to control the exposure of VCN APIs for mainline migration
6fce2a69d370 : [ARR] Fix issues with apps that have internal rendering
78fbf66dd84f : Revert "[TouchLatency] Update style of touch latency"
98f5bde4efe9 : Add a rate limiter in AppOpsService to prevent excessive loggings
ff41023a2dca : Update VCN OWNERS files
76a13541074d : Add aconfig flag for x on mouse hover for notifications
a8e5c5996f8d : Add a new(Empty) media quality system service for TV setting
a2ff65bf3210 : Add CrashRecovery stubs to core services
8b474b096462 : React to axis changes from picker
b4b8a948a633 : Adding MSDL feedback to notifications long-press.
dd64bf8ddd16 : Add predictive back animation to the compose bouncer
c64ab0a36024 : Add support for saving the preferred input side for the bouncer
874cbb14a587 : Add mixed transition for to-front with immersive exit
e9e77e7e0c65 : Update point dimensions and also dash gap per UX spec.
15038d0bc84f : Make sure shouldNotFreeze state change correctly triggers updates.
93f84e5af9e0 : Save generated previews in AppWidgetService
0b410e950560 : Do not return false if operation has already started
35dae8c84057 : Add hyunyoungs@ and awickham@ to OWNERS for contextual search service.
2247de80559e : Do not report native memory metrics for isolated processes
b332a80767e3 : Move dim bounds handling into dimmer
e2b71c9e21b2 : Added buttons for entering and exiting customize mode
abf6e804e0be : Allow querying the intent-filters receivers were registered for.
6a91c3008b13 : Shift the points that go outside the start/end of the progress bar.
f61a94761327 : Define bugfix flag for home controls HSUM support
e318313898c3 : Deprecate android.webkit.WebViewUpdateService.
a38e46ce43b2 : Long press Home status bar expands the shade
c62848005fee : Add OWNERS file for BaseBroadcastQueueTest.
2c66eabb1e7b : Update ravenwood TEST_MAPPING
f861cea5a860 : Flexible 2-app split: Motion and Overview
27533d5a5fa5 : Move TracingTests to presubmit
d38c7b9dcf52 : Add OWNERS file to shell/dagger/
c8dd1b773e21 : AllowDelayedLocking as long as run-in-bg allowed
a9d0be50161d : Remove symlink to android/api-level.h.
fa1a8d691b8e : Remove a test-only PIC exception
1daaf0979e00 : Prevent notifying Callbacks.onStateChanged twice when receiving VOLUME_CHANGED_ACTION broadcast event
96b499c46e3d : [flexiglass] Verify that NSSL#mImeInset is not used
b5c7e002f9b3 : [WM Shell] Consolidate `addOrMoveFreeformTaskToTop`, `addActiveTask` and `updateTaskVisibility` into the `addTask` method to simplify the code structure and make it easier to understand.
1d23cdbac32c : [flexiglass] Verify that NSSL#mMaxTopPadding is not accessed
103304543e80 : [flexiglass] Verify that NSSL#mContentHeight is not accessed
eb85d520918a : BICS: Add InstallEventType as a type of extras and send it during notifyAllCallbacks based on whether it was install or uninstall Bug: 375000910 Flag: NONE (API has no consumers yet) Test: Unit test
05e4b2c39eff : Remove further unused code from services/core/java/com/android/server/integrity/parser
88b98be37bd2 : [Forensic] Added Security Logging
52756ccbb551 : Add test annotations for footer redesign
93d187857e51 : Revert "[CDM] Add CompanionExemptionProcessor"
5a0bc1ef9f69 : Add InterfaceDiedCallback to RemoteCallbackList
2d347e9c6633 : Revert "Save generated previews in AppWidgetService"
e392dc82ec8e : Enable NameNotFoundException on Ravenwood
142c2296a4ae : Fix typo in javadoc
256670f46003 : Implement getQsMinExpansionHeightForSplitShade
962bca782e15 : Brightness dialog use new composable
9d7512ab803d : Support Brightness mirror
e69b900f9115 : Use readonly version of native_metrics in dumpMemInfo
7f121ad228ed : [Satellite] Changed start of satellite non-emergency mode in hidden menu to use intent instead of directly invoking satellite manager
af715abc480a : AudioAttributes: add SPEAKER_CLEANUP to system usages
68bd94ee92ae : [CS] Create ReferenceNotificationsModule.
3517fbbffae9 : Rollback 'Use old JNI on robolectric environment'.
79dab7a79ad0 : Make ViewModels consistent in terms of using ExclusiveActivateable
475de4186329 : [Contextual Edu] Add tutorial entry point
0034ddd47132 : Add new stop reason for maybe abandoned jobs
5e272ac595e0 : Rename empty to abandoned Jobs
a4fda58f29f8 : Add Volume Dialog window title based on the currently active slider.
ba078dd014f9 : Add check if the profile belongs to foreground user when showing notification.
ae8e564b3175 : Add tv_os_media to native config.
af925a588b9f : Animating button alpha instead of visibility
261bbcfae0ee : Create Collection extension function `containsExactly`
798934e5f497 : Extract privacy dot corner logic into a shared enum
55a516012c8f : Provide ScreenDecorations Executor via Dagger
64aff248f710 : Enabling portrait orientation in touchpad tutorial
6e424669dd9c : Add WallpaperDescription to WallpaperData and persist description
82f6ca9c73bc : Add new framework classes for content handling: WallpaperInstance etc.
0d48f6c36170 : Cleanup ENABLE_HIDE_IME_CAPTION_BAR flag
fe523054d7f5 : Remove BinaryFileOperations.java and its test from services/core/java/com/android/server/integrity/parser
e93d57253675 : Remove unused code from services/core/java/com/android/server/integrity/parser
07c952fff6e6 : Revert "Temporarily disable Conscrypt perf tests"
cd5c3178f93c : Check if slice provider was initialized
534d55df2f07 : [SB][Notifs] Show HUN when status bar notification chip is tapped.
30f7a572043c : Extract uses-static-library information into ApkLite
715a4b56daa6 : [Catalyst] Fix PreferenceHierarchy add after/before
247ee8e2f960 : Add flags for population density provider and density-based coarse locations.
cf9a927cac86 : Fix: Hiding IME via predictiveBack and focussing editText did not show it
e026c7bf7852 : [Catalyst] Allow specify order in hierarchy
bf3d9079dae0 : Import translations. DO NOT MERGE ANYWHERE
8ff748edfc7e : Add a flag to enable HSUM features in desktop mode.
b0b48ea6a6ac : [11/n] Fix a bug where tiling isn't broken on 5 app open minimize
5add9d185086 : [Catalyst] Add PreferenceBindingFactory.bind
064abbaffc3c : Display SIM PIN immediately when inserted on unlocked lockscreen
1e500ba5458c : [10/n] Fix a divider handle moving bug that causes it to freeze.
a8d1d38f6dcb : Adding support for portrait orientation in tutorial screens
ed5ddb0a8791 : Adding support for portrait orientation in tutorial selection screen
665593ae3a8c : [flexiglass] Migrate NSSL#isInContentBounds(float y)
e0fd816d8e04 : Connect DesktopTaskChangeListener with DesktopRepository. This migrates existing code from FreeformTaskListener.
c736ec79a4b0 : DisplayWindowProperties: add layout inflater
049ac8209513 : Import translations. DO NOT MERGE ANYWHERE
dfa3a0781b49 : Import translations. DO NOT MERGE ANYWHERE
47e63889151d : Import translations. DO NOT MERGE ANYWHERE
8b1d9feea9a0 : Update AppFunction sidecar lib Documentation
ececfffd7b5c : Add ability to loop vibration effects
23b70c3a95cc : Import translations. DO NOT MERGE ANYWHERE
6642ba443146 : Import translations. DO NOT MERGE ANYWHERE
a91720987260 : Make sensor direct channel creation more robust
3bc24e60ae81 : Import translations. DO NOT MERGE ANYWHERE
6ee4fb703d41 : Expanding getDataSystemDeDirectory to be more consistent.
8e867b6560b2 : Remove dependency on TypedXml Parsing
19c79d6a9f1e : [Catalyst] Introduce HandlerExecutor class
d4d51d7679b1 : Ignore null action in VcnManagementService$VcnBroadcastReceiver
9713a5e8bbbd : Fix warnings in DesktopTasksController
300726e4d720 : Extract minimize animation to WMShell
8cdf650e22e4 : Set input layers to GONE rather than INVISIBLE.
988016a45d64 : Ignore null action in NotificationAttentionHelper.
bd3c95be1cfe : profcollect: check if ADB is active when the service starts.
a4743b68c7ae : [PB] Unify predictive back transition.
3c167fb388c1 : Ignore null action in AppWidgetServiceImpl.
40468fc148b6 : Don't override user choice during OTA upgrade or pregrants
8d7fc2468271 : Ignore null action in BackgroundJobsController.
bb7ba0846ef4 : Ignore null action in DeviceIdleController.
276586cd566c : Ignore null action in AppStateTrackerImpl.
071b0ce6e56e : Add @IntDef for KeyEvent.KEYCODE_* constants
40244ee5efb9 : Add Widget State Tracker
e18322e5ddfa : Remove moveToSplit
bf5bb3d44fd0 : Initialize absolute volume state onReinitVolume
c234eec865fb : Limit sending of the PACKAGE_CHANGED broadcast
b2f6066a2f62 : Slight adjustments to bubble bar education view values
b918069ce90b : Implement radio alert in broadcast radio service
b6a6df09038f : Add event for switching bubble bar bubbles
fbb9bf1a8267 : Add animations to Manage Windows menu.
0f56f674448d : Refactor PiP2 resize transitions
4b3026310f3a : [PiP2] Add PipAlphaAnimatorTest
c0081e5add62 : Ignore null action in AppRestrictionController.
188cba773152 : Log event when dragging bubble bar to dismiss
8d64d4547cb6 : Log event when dragging exp view to dismiss
053419e86a48 : Make the PIC stringblock local
68d6fd5a9cdb : Remove SerialService JNI layer
e90f7753eeaf : Override hashCode in AnimatableScaleMatrix
f4172d5a7bc5 : [Connected Displays in Taskbar] Refactor TaskbarDelegate for Display Signals
dd27fb389542 : Added desktop_hwsec to the native namespace list
57a1d0623121 : [CDM] Add CompanionExemptionProcessor
9838809d3526 : ActionButtonModel refactor
2f59a1fd0e6b : [Inline Reply] Use Pinned info separately for expanded check
66be9faba330 : Background calls to subscription manager
2d89f6e75e85 : Implement shortcut controls for a11y keyboard features
82a44050f685 : Fix NPE
18911194f95d : [Docs] getParcelableArray reference links to non-deprecated method
70d34bdfc66f : Fixed runners for MultivalentTests
4d8f72693d78 : Notify light bar controller when GONE
6d1ca27a8adb : Add OWNERS to shared pip package in Shell
f10ce6c95252 : Add SurfaceControl.Transaction#setFrameRate API
d9ad344fd777 : [PiP2] Deprecate PipMenuIconsAlgorithm
ba2f3e93d403 : Add new ASurfaceTransaction _setFrameRateParams API
0587fa4a9558 : Adds getSuggestedFrameRate api
1c52ff271b4a : Remove dependencies on the 1-variant fallback
9480c4e7654d : Update tests for currentTransitionInfo
58d6b171e275 : Revert "Make DEFAULT_RESCIND_BAL_PRIVILEGES_FROM_PENDING_INTENT_..."
42c3bc39895c : JavaDoc fixes
be1f2377cce1 : Fix test for Strict Mode change
61885c1d507d : Create a Strict Mode violation for Background Activity Launch
93d992d0e5f9 : [flexiglass] Lockscreen -> split shade transition touchups
e49cf109f972 : Revert "[CDM] Add CompanionExemptionProcessor"
7591b30c4439 : [Custom Key Glyph] Return all the maps in keyboard glyph maps xml
5e26cd059c98 : Update golden images (auto-generated from Scuba): invocation I42400010330326301
d8b865793bb9 : Cache Unicode extension subtag key map in Zygote
c0442c1c3951 : Remove android.service.chooser.chooser_payload_toggling flag
c3251c62b256 : Add two states for ringer view model.
c603d22fde41 : Revert "m3: update primary color tokens' value to match Material..."
42a8b6d42bc7 : Add SetFlagsRule to DesktopRepositoryTest to fix the flag framework.
1569bd0f52d7 : Adds support for getSuggestedFrameRate api
15a0bb2251b9 : adds a flag for getSuggestedFrameRates api
a1453aac039c : Add permissions for the Multiuser Widget
8af564439117 : Notif redesign: Hide footer buttons when message is visible
b4cb61b96c16 : Updating default size info if widget is not able to resize
fb9b9295ccc5 : Adding slider haptics to volume sliders in the volume panel.
f6183631a950 : Adding support for discrete slider haptics.
384e046e4f39 : Introduce useUnconfinedTestDispatcher, etc, use them
0cf4179366e9 : [CDM] Fix an IAE in CDDS
1ae180087f23 : Create flag to enable app launch transitions in Desktop Mode.
157b4cded9ce : Identity Check API
74eec29485e4 : [Catalyst] Support Set API
54fef98ea1e9 : Dialog to record other displays
d088d4aeeb0f : Fix copy-paste typo in exception message
b6c1e106cf80 : Allow accessibility talk back resize widgets
59fe0c6fb56c : Send "Unlock to continue" bouncer message
cda4ecffc7b7 : Mark keyguard quick affordance interactor as flaky
f2632364a840 : Flag trade in mode service correctly
c8311fecc93c : Revert^2 "Disable trade-in mode service on phones, TV and auto"
96b2363a23aa : Respect android:turnScreenOn on non-default displays
e35050398c86 : Move ScreenshotProxyService, etc to proxy/*, unify naming
6e25452a6ad5 : Revert "[flexiglass] Verify that NSSL#mContentHeight is not accessed"
4fc05c89c927 : [9/n] Adding user change support for tiling
2dd525cba588 : Add media_projection and pixel_perf to mapping list
d7129db685a7 : [Custom Key Glyph] Add aconfig flag for shortcut custom key glyph
a470fef717f2 : Make areAutomaticZenRulesUserManaged() return false on WATCH and AUTOMOTIVE
b09ad7c1d9bd : Add SetFlagsRule to FreeformTaskListenerTests since the current flagging system is not working in this test.
f85ad48ee75c : [Screen share] Update our status to "projecting" earlier in process.
1cf7e157ec2b : Add flag util for shade_expands_on_status_bar_long_press
7d9c86df68ef : Add flag shade_expands_on_status_bar_long_press
9d8a701650c0 : Add refactor checks for footer redesign.
64f669eeabef : Notif redesign: Bind onClickListeners for new footer buttons
6bdd4046eac5 : [8/n] Fix failing unit tests after introducing tiling
c94ba12708f6 : [7/n] Adding infrastructure to break tiling on certain events
0b3877a71deb : [6/n] Tile resizing handle core business logic for dragging and animation
8756450d30b3 : [5/n] Add desktop tiling project skeleton
830d7ccf7d09 : [4/n] Add an API to disable edge resizing for tiling.
ee4c7cfa8abe : [3/n] Add onDragMove API to notify tiling of drag to break tiling
4485a41a1850 : [2/n] Add TilingDividerView and its WindowlessWindowManager
2620ef14718a : Introducing WaveformEnvelopeBuilder to VibrationEffect
d1d2e8bf8b4c : Avoid updating display orientation from closing activity
4c7ed59f2654 : Add support for Material 3 Slider to SliderHapticPlugin
24beeb821708 : Keeping window service and permissions service in cache
4981c4293dad : AudioService: Fix ring over SCO sequence.
76922f885249 : Add helper method to generate alternative version of schedule-time rule trigger descriptions
4da273e18ddd : Add dump in InputMethodBindingController
cccfbd7ec99d : Re-order InputMethodManagerService state dump
0f06efc7058e : Add min_sdk_version for "android.crashrecovery.flags-aconfig-java"
36b5470c2a61 : Add delay between motion events in hold-to-drag gesture
62513b64e8b9 : Limit priority values used with receiver registrations.
7ee5fd294376 : Add exit from desktop by minimize logging
2567782fb320 : Only allow passing in calling pid/uid for SafeActivityOptions
17586a3191f9 : Update to ToT RemoteCompose
cd90e326c285 : [Ravenwood] Update RATR to setup environment ASAP
b244fdfa903d : Adjust the process group of the phantom processes as well
4968679825c9 : Manual network selection handling when request satellite enable
0f9e09d74b6c : Fix how we delete the symlink
65c686d0bbd6 : AS: add more logging when setting stream to drive abs vol
acfabd282258 : Remove legacy bubble bar flag
25854ddddc5a : Add APV Support APIs
877f8e0eaeec : Reland "Route dream touch handling to scene container"
63a8e1678db1 : Save generated previews in AppWidgetService
cd4c6a39a470 : [CDM] Add CompanionExemptionProcessor
811669c7f6c2 : Have aapt2 handle quoted filepaths
389e4cf73757 : Add permissions check to isInSignificantPlace()
b67d55ff1ed0 : [8/N] per-user global verification policy
768fc687a818 : [Ravenwood] Cleanup RATR implementation and project structure
344cb5c56c8b : Enable IpcDataCache on Ravenwood
5a07d5c6f72f : Added aconfig flag for Safer-Intent
bad5edf3e904 : Enable IpcDataCache on Ravenwood
004d47347dc7 : Revert "[AAPM] Export feature flag."
c7236a990ee9 : Revert "Read parcelable arrays more safely"
f9599b4d4005 : Mock BubbleStackView in BubbleViewInfoTest
c7609f2e3363 : Add ww logging for add creator token event
01fc279c11d1 : Adding nullptr check in Image_getHardwareBuffer() and Image_getFormat().
f4607ec55457 : Fix memory leak for test where shade is not closed
e8bbeada6a4b : Add VCN tests to postsubmit
2f6d590bd790 : Query BatteryManager for initial value
d2088743b5c5 : Update AppFunction Documentation
17af393d08f5 : Add new flag for capturing power button feature.
3d9eb1130a28 : Add OWNERS file for taskview
62bdb5545eac : [flexiglass] Fix remote input case when notifs aren't overflowing
fab5f14f6fcb : [Flexiglass] Fix mirror positioning in split shade
51f7b73ecefa : Revert "Disable trade-in mode service on phones, TV and auto"
777cf829cd97 : HDMI: Source asserts <AS> if TV doesn't answer to <Request AS>
ea4d818ae123 : Scale tracker icon to desired height and ensure aspect ratio.
5b97d159629d : Add ringer view binder and fork ringer drawer UI
d088916a838b : HDMI: On <Report PA> on the same active path, send <Set Stream Path>
cf5da718513f : Implement content overlay in PiP2
3c6b659827dc : Use the screenshot theme on wmshell screenshot tests
343d03d43495 : Refactor SharedNotificationContainerInteractor/ViewModel.
113f42f32727 : Enable configurable bind service flags
88b0b0b67cc5 : [RONs] Eliminate invalid points from ProgressStyle
616820a4e83d : Move ServiceWatcher and dependencies to core
d301461aef10 : Early checks to avoid unneeded binder calls
112745214b2c : Implement topology normalization
6bf7881b6546 : Put drag-corner-to-resize-pip behind flag.
8b782bd73bf7 : Trigger configuration listener after reattach
76580d708b7a : Fix crash when folding with open shade
a9000443c3e4 : Create systemui flag for standalone support
a12ca9fef53e : [flexiglass] Verify that NSSL#mIntrinsicContentHeight is not accessed
0ebff769edf5 : [flexiglass] Verify that NSSL#mContentHeight is not accessed
7c25916cba8f : Add OWNERS file for anim lib project
8dfc932164e7 : [flexiglass] Access NSSL#mOwnScrollY through a getter
480453dd098c : Lazily create system reserved channels
80e0828772ed : Revert^2 "[SB][Notifs] status_bar_ron_chips flag -> status_bar_not..."
e6e0225d9869 : [AAPM] Export feature flag.
6e28681062bd : Use @ShadeDisplayAware Context and configuration in notification classes
7bb611761800 : [AAPM] Return null service for unsupported form factors.
509261c99449 : Update javadoc for AddTrack method
b089636dc277 : Ignore null action in SliceManagerService.
53e3d47d8865 : Ignore null action in PackageManagerService.
005e1e7c7be8 : Remove RuleBinaryParser and RuleBinaryParserTest
3df09aa58279 : Fix NullPointerException in FlingOnBackAnimationCallback
fd2f2cdd7ea1 : Handle minimized tasks on reboot.
73e566283e87 : Remove RuleIndexingController as it is no longer referenced after the deletion of IntegrityFileManager.
01e234964d0f : Set frametime of BackEvent to 0 in legacy constructor
33e0bcb0d823 : Keep Cache implementation for Android Auto
ec4f83ae2681 : Collect transition participant when moving to a different display
285e3a396daa : Add KeyguardLockWhileAwakeInteractor.
989825d3f064 : Separating framework platform crashrecovery jar
712f41a9447c : Add Baklava
376721f17135 : profcollect: Disable tracing while ADB is connected
6439678d8215 : Add immersive option to maximize menu
b08920abb03d : [Screen off unlock UDFPS] Settings integration 1/2
2828cdc7ca40 : Fix system_server SIGABRT due to non-UTF-8 process name in /proc/PID/stat
c01fac783231 : TunerResourceManager: Add API to determine Resource Ownership.
5d2e9c727822 : Add timer to wait for the Health update result.
ddb13c394ae4 : Revert "Route dream touch handling to scene container"
45e9b150e081 : RESTRICT AUTOMERGE Disable PowerStatsTestsRavenwood
7d1977d12034 : Optimize SystemConfig feature parsing
86bf28a3ca49 : Refactor animation lib to support wear-sdk origin transitions
0fbfb69b30a6 : NotifCollection.dismissNotifications now takes EntryWithDismissStats instead of Pair
f01b3d878330 : Avoid double resource compilation
1f6345dedc72 : Delete unused manifest file
b179796f004d : [CS] Remove SystemUIBinder, have modules be directly included.
c2b26c17e970 : Add flag for PiP in Desktop Mode.
d6b10e74ee75 : Add tracker to NotificationProgressBar.
9e738ef9b209 : Add API to provide read-only file descriptor to WearableSensingService
b50e7b7a06e3 : Add aconfig flag for the WSM#provideReadOnlyParcelFileDescriptor API
e7f7a0e49e38 : Enforce concurrent connection limit.
3640e558ad0f : Allow concurrent WearableSensingService connections
e18695e00109 : m3: update primary color tokens' value to match Material3 design
59b2fb482ced : m3: impl basic Material3 design AlertDialog
afb99cd8b7a7 : Convert sysui libs to java_library
9bba534aa546 : [flexiglass] Reset KeyguardFading/GoingAway after transition to Gone
d78ec6b3f822 : Use a weakref hashmap to store controllers
ec025997cdf7 : Add ACTIVITY_SECURITY_OWNERS.
9f8cb3b83a9a : Read parcelable arrays more safely
db258ce57c75 : Add a script to dump latest test result summary.
b49c77249ce9 : Disable trade-in mode service on phones, TV and auto
33c5c8792d6a : Add emergency alert system API unit test
16c8b144f6c1 : Add unit test for radio alert area
f192273e55e8 : [pm] reduce lock contention on createSession
dcf0fcc95d31 : Move the `isRecording` state in IssueRecordingService into a global setting rather than a user-specific variable.
172aac3c6779 : Fix floating sliders background
5c32317a65f4 : Fix an exception in bubble bar user education when bar is on left side
4c90cfde7c34 : [Accessibility APIs] Add Expansion API
52f40e4b460e : Restore Quick affordance interactor test
9b0d370c263c : Allow more information in bitmap ashmem filenames
7112c6e2a785 : Add an ID field to `class Bitmap`
50fbcdd426a7 : Make AudioManager.setWiredDeviceConnectionState a system api
d5106f4906a2 : Ensure consistency of BatteryUsageStats deltas
543e846fc245 : Log ACCOUNT_MANAGER_EVENT__EVENT_TYPE__AUTHENTICATOR_ADDED
fd39d2a3b099 : Revert "[SB][Notifs] status_bar_ron_chips flag -> status_bar_not..."
a061bae4a8ee : [CS] Move EmergencyGestureModule to top-level modules not CSModule.
9ffdddacb4be : KeyGestureEvent support to launch applications
7ef7a82315fe : Use readonly version of 'report_postgc_memory_metrics'
e4c7ac5511c6 : Remove some usage of IGBPs in the ICamera.
ac7a0d7748be : Bind the new sliders to the state updates.
4cd87c2359a0 : [AAPM] Add abstract class for feature hooks
2ed10bd08397 : Update AppFunction Documentation
93678d42c554 : Remove MutableStateFlow for transition current info
e8d1338a01a7 : Fix check creator token
3a6526ae2f6e : Only add gesture insets to screenshot touch region in gesture mode
1b5a71677509 : Notif redesign: Update footer view layout
42d5d2805ef0 : [1/n] Create AppCompatLetterboxPolicyState abstraction
88961e53f531 : Notify RecentTasksController when desktop task size or position changes
cf92b9b3216f : Add ScrollController in PriorityNestedScrollConnection
38a91bc49214 : Revert^2 "Add view models for volume dialog ringer drawer"
fe967c699d04 : STL find ScrollBehaviorOwner if present
1d1950bec8ac : New flag "hearing_devices_ambient_volume_control"
28376cc4ee11 : Fix notification HSUM bug
3ffd1b2958d6 : Remove flag dependency of `status_bar_connected_displays`
7dc9b7b77170 : Fix back gesture not being handled by BackAnimationController
bbeaa71bb64e : [PB] Filter out activity transition under TaskFragment
3f26782b85af : Add log for device setting IDs for debugging
9b469e030da1 : Fix status bar crashing on connected displays
1468a5f3c64d : Introduce PrivacyDotViewControllerStore for multiple displays
8d104d75f93b : Remove get_packages_from_launcher_apps flag
54708ada8a86 : [Forensic] Add DataAggregator
12cc80693696 : [Catalyst] Update RangeValue
dff2c5ab6fe0 : Make Remote Submix device attachable
aa8fd92b1e44 : Create flag for BAL strict mode
b72794f85b2f : Fix unrecoverable bad state in BackAnimationController
3bb6ea8e413d : Camera: Add OutputConfiguration.setMirrorMode(Surface, mirrorMode)
10b6512bcbca : Always clean up taskview
d291d3c5892e : Add permissions for media quality service
f9e752d4fa5b : Report moved-to-top as the last launched activity
1b35f6422b1b : Improve the release of predict_back animation leash
91c16bb862b7 : After discovering a CEC device, remove it from the HdmiCecNetwork device list if isInputReady is false, then add it back to keep the TV input list in sync
8d4a96e45b05 : [1/n] Add flag for VDM full-screen API
ff50c0397a5b : [Catalyst] Enrich PreferenceLifecycleProvider
12883a2c3fdb : [Catalyst] Add NoOpKeyedObservable
a8eb88ba897e : ignore parse partition order file path
8b5e1e47f9a9 : More Robolectric mass migration.
10f8173bc0a3 : Do not ignore locked orientation by universal resizable
5af9275ce9e4 : More Robolectric Mass Migration
12dcc650a078 : Fix notification bottom sent from SysUI is inaccurate and refactor
863a1a81053f : JobScheduler: Ignore important while foreground flag
4c4778a675b4 : audio: Fix incorrect userId for setMicMute
675e8abaad38 : Revert "Report Post-GC memory metrics"
364038c0386c : Add flags for testing Ravenwood
d9a4bd002f86 : Use getParameter return value for the preset length
e264714c023e : Ensure Conference/Connection are initialized with valid number presentation.
096997f0d5b9 : Add OWNERS file for android.service.settings tests
3065a31d3583 : [Ranging] Handle non-existence of service-ranging.jar gracefully
40b392fbeaed : Add an aconfig flag for AppOp mode caching
00c2baf07a54 : Add "smoke test" mode to run-ravenwood-tests.sh
b3ebeeede6cd : Revert "Report Post-GC memory metrics"
30a43a512f7b : Synchronize calls to mAbsoluteDeviceInfoMap
190fe97ac6d6 : Fix AppThatUsesAppOps OWNERS
7351dd7183db : Put PIC nonces in shared memory
dde12eb09fb0 : Replace launch and async with traced variants
96219b879668 : Remove references to launched flags
f27a0f43b120 : Update visuals of bubble bar education views
cd157d5eb018 : Revert "Report Post-GC memory metrics"
0f0c7ec41ff9 : Remove isReactiveToTouch in favor of using axis data
f14825b16b41 : Always allow System user to change device config in case of Multi-user-multi-display (MUMD)
1fc3eac3061a : Enable stack-walking for default coroutine names
b4a843b4a79c : Pipe current user to InputManagerService
8cc98babd8a6 : [Record Issue QS Tile] Remove Share Button Click when generating a bug report.
3d52d212c8a3 : Add OWNERS file for android.service.settings
f2fd2eb5d0c0 : Tweak to isOnLockscreen
ff1ea8f8f6f8 : Update drag-drop collision detection
45974dd3f4da : Fix placeholder launch with AvoidMoveToFront flag
45e499a3f587 : Improve robustness when migrating from appops xml
570206560827 : Document UserProperties getter permissions
2aabf4967a4f : Use Assisted Injection for EdgeBackGestureHandler
2470bdb6f3ca : Create new NotificationClassificationUiFlag class
a5a63692fc65 : Adding a TileHapticsViewModel for haptic playback on QS tiles.
171d4adb586a : Reland^2 "Revert "Defer resume of activity while applying wct""
a7846d1f199c : Google contribution item #12 - subitem #2, base [VZW-Skylo] apply Idle Mode Scanning for Terrestrial Network
85e8f3e54633 : Return null when resource not found for key glyph
08cf8963873c : Add flag to show all permission dialogs on virtual devices
5241833fb34b : Refactor Clock theme handling
343bb5b73572 : appops: Update checkOp docs
e7d6f4fadfa4 : Delay processing of BatteryStats events until onSystemReady
fac6f557748a : [CDM] Check if it's system UID for hidden Perm Sync APIs
256735f80bd5 : Declare move contacts to default account intent in ContactsContract.
b6d1c7dfa3e5 : Use allowBalExemptionForSystemProcess
38855a981092 : Make isSystemProvider field final
725143ebaad2 : Distinct event logs for clicks on the QSLongPressEffect
183a5803f253 : Adding live progress tracking for touchpad back gesture
6dbe6f55cfa0 : Use display aware context, resources, configController and Layout inflater in shade dir
e7571aefc501 : [framework] Handle more possible route string from service.
599b72088d93 : Use ShadeWindowGoesAround helper class instead of just reading the flag
b2c0762824b1 : Introduce Shade configuration controller
a87bd83dc8e9 : Use readonly version of aconfig flag native_metrics
33517b07adef : Do not remove User Manager and AppOps from Java cache
9729118333e2 : Cross platform native resources support.
7b3070e5f6ea : Fix footer flashing on screen when HUN is dismissed
ce9d5f919a7b : Fix RemoteView's content loss in some cases
83fec28f3d77 : Set OWNERS of TradeInModeService.java to trade in mode team
d54692174551 : Correct status bar background on connected displays
0248b5c81e77 : NotifCollection.dismissNotifications will now remove hidden summaries.
9bd93474747d : Support contents (overlays) in ElementStateScope
2c8511ece10c : Make tiered cached adj UI tier size configurable
5d48fe60dea1 : Cleanup unused icon labels shared preferences
8cd5cb68e195 : Fix a race when an editText was removed and added immediately again
bc766493aa27 : Fix lint in HeadsUpCoordinatorLogger.
db5e8570e7a1 : Adding live progress tracking for touchpad home and recent gestures
0bbf1b1c6ff7 : Fix pending intent for multi-user issue
db82c35b0061 : EventConditionProvider: Ensure that reloadTrackers precedes evaluateSubscriptions
f33db37dd9f7 : Clean up created data when flag rolled back
accca6bad4ca : Add SystemProperty to override education prerequisite conditions
2c0d3a729f92 : Update fontchain_lint to work with multiple emoji fonts
abde7fe5dfb7 : IMM#hideSoftInputFromWindow: Post on handler thread if needed
5a55822f87b8 : Update flag value checks in IME tests
59d71d4ba6a0 : Remove the IntegrityFileManager class that is unused after cleaning up references from AppIntegrityManagerServiceImpl.
c832d318ac34 : Delete ViewTreeSavedStateRegistryOwner.kt
dbe3411a5594 : Use actual navigation bar insets info to determine position
2f648a6f2df4 : Rename Modifier.background(Color, () -> Float) to fadingBackground (1/2)
c4453cacf906 : Update shell ownership
0940e6297c9c : Add logs to TutorialSchedulerInteractor
828c06cd683e : [Contextual Edu] Accessibility - Adjust font style for dialog
f0c453294186 : Extract uses-sdk-library information into ApkLite
11420c435faf : Add millis suffix to BackEvent#getFrameTime API
98f15b4367b2 : Cleanup scheduleTransactionAndLifecycleItems
1a6bda04fbe7 : Add Cache to Settings access for CompatUI
68df5a3c2546 : Move CommunalSwipeDetector to SystemUI/src/.../communal/
fa5591c608f3 : Move ConditionalModifiers.kt to PlatformCompose Core
0a639219fdde : Revert "Add view models for volume dialog ringer drawer"
613c82b4aa55 : Introduce StatusBarContentInsetsProviderStore for multiple displays
3064191c6125 : Move Grids.kt back to PlatformComposeCore
8c2acf8c3a15 : Introduce sharedElement.elevateInContent (1/2)
2f8edea084d8 : Add perf traces for Transitions calls to handlers and observers.
7490b43a898f : Move magnification_enlarge_pointer aconfig to bugfix flag
825ed7105adc : Pass TaskSnapshot object to content suggestion service.
a21a24f1b894 : Implement getCurrentWindowLayoutInfo() API.
dc0e0d26b5ae : Move exit transitions flag check to DesktopMixedTransitionHandler
8315ff0300ec : Add mixed transition handling for desktop changes
9011425d182c : Add option to skip cache sysdump for caches which have zeros in stats
d4b0c0a079aa : Adding XR software and hardware feature strings
4be8065070b7 : brightness: apply bedtime mode specific curve
e5b2400c6f91 : appops: Consolidate check/note/startOp validation
0bd19ff7cb9d : [Record Issue QS Tile] Allow Bug report flow to also have screen recording metadata
d0736c2214c1 : Add enable_display_windowing_mode_switching flag
bac53a6171d4 : [Audiosharing] Add rounded background with different radius
652ef2a4dc98 : Fix drag flag logs
455de7250742 : Add unit testing hook for TelephonyManager ISms
aa974cf42b0b : Add null check in vibration util methods
cfa6252c6383 : Keeps art profile for pre-dexopt
dd16c4ff4c41 : Add YCBCR_P210 format
c385acf0e66f : Add additional members to OWNERS for appwidget
1b6fc84962f7 : Remove framework support for audio HIDL HAL V5
07ac4d03969e : Add flagging for smartspace event swipe logging
63dfcadd3443 : Import translations. DO NOT MERGE ANYWHERE
5aeb9fd4d89e : Remove dependencies on the 1-variant fallback
449cb6031aa5 : Revert "Deprecate drag-corner-to-resize-pip on phones"
24427d6a397a : Add system_dlkm notices in DEFAULT_LICENSE_XML_PATHS
b4f5f9e8ed86 : Support StatsD Java API on Ravenwood (f/b)
94a6b2240ca2 : Pipe finish callback through synthetic recents path
90d7b4eed29c : Import translations. DO NOT MERGE ANYWHERE
66d34345816a : Introduce an ITradeInMode service.
f7a777bf91a3 : Import translations. DO NOT MERGE ANYWHERE
0760098db36d : Import translations. DO NOT MERGE ANYWHERE
6ba5e1e146c3 : [Record Issue QS Tile] Save Start Time and output for go/winscope integration
1739413aad88 : Finalize BulkCursorNative asynchronously
482575cf2274 : Allow resizing out of invalid size
01ccb45fe40c : Apply new code formatting to navigationbar
ce18060c1503 : Import translations. DO NOT MERGE ANYWHERE
af1597b2e2f2 : Google RCS uses FTEU MO SMS for phone number verification
c8f89e744357 : camera: Clarify hot pixel map coordinate system when sensor pixel mode is MAXIMUM_RESOLUTION
f9a15f7f4d6c : Add flag for TaskViewTaskController clean up
b576b62e646a : Dismiss TV PiP notification right after exiting
ad2fa358856f : Enable color resources loader to be created using FRRO
8c463d5f5c59 : Import translations. DO NOT MERGE ANYWHERE
d158b9b8b6de : Revert "Activate hot plugged device"
fc314420d3fb : [flexiglass] Fix transition check unintentionally removed in ag/29690011
736e8955eca6 : Implement toggle talkback shortcut
9b5439b32a2b : [7/N] only the current verifier can call VerificationSession APIs
48de5b95fbb0 : Marking SystemApi for PackageWatchdog and Observer
2ed92e02e844 : Persist input gain in settings
5049acdb3d26 : Health: Add fixed-read-only flag for platform_oxygen_saturation_enabled
14f2d7d283dc : Add QsTileDetailedView flag
e030378448c6 : Add view models for volume dialog ringer drawer
5b1d97a15aff : Add radio alert in program info constructor
fb1033cbb8eb : Route dream touch handling to scene container
0fc4d8878df6 : Define emergency alert system API
a59f33bdee43 : InputManagerInternal: Add API to set power/wakeup node
315e31bf0b37 : Fixing orphaned component's OWNERS
84c198aedc42 : Clean up android.webkit.update_service_v2.
79fe8c1e6944 : Add desktop_stats to SettingsToPropertiesMapper
2e28de1c2d02 : Add flag for changing the system appwidget corner radius
0917ba7c07a0 : Not depending on system gesture distance for touchpad tutorial
5dde51f0a8e3 : Passing animation markers between model layers
29b0e382e37b : Explicitly annotate system APIs in SatelliteManager with @SystemApi
7e5e47611d6f : [CDM] Check if it's system UID for hidden Perm Sync APIs
1690d43d166c : Update the doc for READ_PHONE_NUMBERS
e9010cb41e29 : Add check for empty vendor acquired strings
6342758258de : Notif redesign: Improve the way we display the work badge
0334e0e3c527 : Fixed mixup with keycode / modifier mapping.
2c190cb02b94 : Listen to alarm changes for all users in ScheduleConditionProvider
c7bd7ffb7489 : Delegate bugreport consent for system apps.
7fa0d475b400 : Add ValueAnimator#awaitAnimation to await animation
91580c088506 : Fix ANR in Process com.android.shell after BR is finished
2c5c7d9bff72 : Adding optional gesture direction to gesture state
e81fa0c19c07 : Always show all approved apps
5c4e6576cf57 : Using success animation from state instead of configuration
1162c8af2a2e : Add a new flag for enable_task_resizing_keyboard_shortcuts.
897bdc09dcfe : audio: VolumeDialog: Hide ringer button in single volume mode
53fafb0405a8 : Check if current profile is being removed in IMMS
fb64dbe4cb8c : Abort transition for invisible background launch
ea18b6d174ff : Introduce Modifier.drawInContainer() and .drawInOverlay() (1/2)
fc84868a6562 : Remove ENABLE_SHELL_TRANSITIONS flag from DesktopTasksControllerTest
d8336d2133e1 : [flexiglass] Update NSSL stack heights, when maxNotifications changes
30bd0a99d3b9 : Add OWNERS for wallpaper tests
4938b5cff952 : Remove the dependencies to the IntegrityFileManager from AppIntegrityManagerServiceImpl by setting the methods that use it to a default empty value.
5191964dd4ed : Watchdog: skip add/clear operations when mMonitorQueue is empty.
3ee9acc200f8 : [AAPM] Disable for unsupported form factors.
96940dcde3e9 : Refactor DesktopTasksLimiter to adapt it to be migrated into DesktopRepository.
25908d56fa3b : Remove ringer repository
80d63f34f8ed : STL onStop remove dragController if needed
4b054b7d0582 : Introduce ShadeDisplayAwareModule
090a645487ed : Moved keyboard settings string to resource file
fe7080e363bb : Make NotificationActivityStarter params non-nullable
4f2392c67ba6 : Fallback to a NoOp ProtoLogImpl when viewer config is missing
d8db3579b403 : Fix NPE in MouseKeysInterceptor
c7cc1450a54c : Fix Resizing Flicker Tests - Transitions
a1c2d87b297b : Do not use defaultInstance for persistent repo
6cf9cd82efa0 : Use stopTimeoutMillis to avoid unnecessary bind/unbind
a1923cc340f4 : Add log resize start and end on all resize triggers.
d8d806e503b9 : Introduce ShadeDisplayAware annotation
8d1d95cdc648 : Don't save default doze brightness
e55f83bce2dc : Implement MT SMS polling and add resources for it (can be removed once Google implements carrier configs for this feature)
c4d81f0905d9 : Split ConfigurationController and ConfigurationForwarder
14495ce3af39 : Remove redundant flows from SharedNotificationContainerInteractor
80371cf33f4d : Remove redundant deps on `SharedNotificationContainerInteractor`.
aede021754ca : [flexiglass] Update ScrollViewFields#intrinsicStackHeight on the LS
73aec0931083 : Add fling velocity handling to cross task predictive back
f9d7de04fd74 : [PM] Add feature config for changing launcher badging
65e4fafc6eda : Prevents duplicated finishing activities
4cd3fa430feb : Exclude opening/move-to-front task from exiting immersive
7f311903e8bc : Add trace for handleRequestAssistContextExtras().
350bed0ac689 : Replace shared dagger inject with Intent Extras in IssueRecordingService.
eafbae3715d9 : Add MainSwitchPreference support for Catalyst
076de23a8b81 : Support standard extension frontend status
f4c6df542c8c : Fix NPE crash when start pending predictive back animation.
c32860dcc6c2 : Window blur bench scenes.
93e76000c43d : Implements the API to opt-in to auto-save embedding state
1f88ab50f110 : Remove dependency on PowerStatsUidResolver from PowerStatsProcessors
76fb5581dee7 : [satellite] Added onCarrierRoamingNtnAvailableServicesChanged callback
5e09609eda29 : Add new Surface#setFrameRate API
df792e1d1580 : Wire "Action + Ctrl + D" to moveToNextDisplay
0b57f3101d05 : Prevent crash when history event is written w/o a name
f70ddc9beb63 : Add proposed trendy teams for VTS modules
c90c65d8b016 : Add FP auth feature when screen off(1/2)
0211daf74be1 : Add support for thermal threshold callback
75007054e924 : Crop snapshot icons for Manage Windows menu.
6c0968721817 : Add flag for media action button placement update
c8b0cb4b80d4 : Use byte array api to read from/write to Parcel.
3ae88e64aadb : Set up status bar input layers on init.
e639f5427a9c : Revert "Fix perf regression VisualStabilityCoordinator"
90690102aa6f : [sb] fix lint errors in SystemStatusAnimationSchedulerImplTest
fd47d4bb6e9b : [events] move SystemAnimationState to enum, expose flow
8a147e419218 : Make FlexClockView default clock use proper font
3b282f6af86b : Introducing GestureUiState with extra animation info
bd34b1b50af6 : Fix bug where splitscreen causes unlaunchable camera
9de2b9d7f6ad : Skip ScrollCaptureControllerTest on robolectric
87f8c6c64234 : Update WindowAreaComponentImpl to use updated DeviceStateManager API
81e9af8e58af : Fix first battery message on boot
607c11ae6994 : Adds flag for UI changes in bundles/classification
c044abcbbd39 : Do not use AIDL Dataspace in Color.cpp
efbd51b13090 : Add more info to search_all_entrypoints_enabled comment
f2f2d2ae66cf : Dead code: the original screenshot policy code
7e1354f38d7d : Remove onStageHasChildrenChanged() from StageCoordinator
97e88ab643cc : Update the bug component for blobstore
90dd959720dd : [Contextual Edu] Log metrics when education is triggered
b935668b83f0 : Revert "Add com.android.healthfitness to apex_available for andr..."
172f2a03d8d8 : Prepare for compat-framework support
753133d0c163 : Clean up keystore owners
3cfc4eccc4cd : Add domain layer for volume dialog ringer
f813c18e40c6 : Revert "Prevent pip launch during drag to desktop."
0dc3f705c574 : Revert^2 "PriorityNestedScrollConnection: make onStop suspendable and add onCancel"
13c0ec43fdb2 : STL nested scroll keeps priority even if it cannot scroll
90bef8a82e03 : Revert^2 "Simplify PriorityNestedScrollConnection"
38f03c23f313 : Report non-match parent window bounds in TaskInfo
4556c167cc2d : Check the tile's state before handling long clicks
1c4bfb047aba : Auto scroll to the top when dragging a tile in Edit mode
89feacb06970 : [API change] Add EDGE_NONE option for BackEvent#swipeEdge
c165fabfa6eb : Fixed crashes due to HSUM issues
be4c6cddc8a2 : Stop using HWUI callbacks for jank.
b1f38863a067 : Add data layer for volume dialog ringer
a07f97b74a17 : Activate hot plugged device
2eb0812f6cb1 : Revert^2 "Add condition to bypass caching when process uid differs from caller uid."
3d454fb1b337 : Prepare PrivacyDotViewController for multiple displays
17dc56f99c73 : [ENR] Ensure ENR views visibilities
2ccb7efe6403 : [Expressive Design] Update MainSwitchBar padding
30d9a92e3b13 : Baseline global lint errors
defbb74c3836 : Simplify the exposed jank types.
52f9a2882d2f : Animation tweaks for ambient aod
669d20abf9da : Use Transition.ReplaceOverlay.currentScene when getting placeholder size
11beddbf96d8 : Add Transition.progressTo(ContentKey)
7f60633a61a3 : Revert "Simplify PriorityNestedScrollConnection"
47f6e9098792 : Revert "PriorityNestedScrollConnection: make onStop suspendable and add onCancel"
34cf647df729 : Implement top app bar and reset button for Edit mode
c9422059e76e : Fix windowless splash not being removed while hovering during back gesture.
f1c25ec712bc : Consolidate full-screen override and cache user aspect ratio
8e1c638b0c80 : Unify current session in vibrator service
a67c924de154 : Use `audioSharingInteractor.audioSharingAvailable` instead of calling BluetoothUtils.
4d0dc576c0ce : Fix test: Do not assert that currentInputStarted flag is set after hiding the IME
4a8f23df5574 : Fix unit tests if universal_resizable_by_default is enabled
359075db1568 : Remove the updateRuleSet actions to stop writing rules into the AOSP component when the method is called.
4816fa8ae880 : Update boot image and system server profiles [M80C35P56S0PP]
c08c296025a1 : Cache getProfileParent to reduce high volume of binder calls.
6ae62adeea6f : Add Flag for connected display recording
7e28dff226b2 : [Forensic] Add BackupTransportConnector
4e39bcfeb856 : Fix preupload warnings
0edb30349e68 : Improve the release of predict_back animation leash
1d48c4478695 : Make soundtrigger onResourcesAvailable async
139a4479eeb4 : Consider letterbox offset as size mismatch
2b1a1bcc2b49 : Skip initializing unused system bar painter
03a951634b2e : AudioService: fix permission for addOnDevicesForAttributesChanged
80e7d1248045 : Notification: updateLightsLocked should be called while holding NMS.mNotificationLock.
2b5c3c6cddf1 : Fullscreen Magnification updates pointer icon on non-transient change
2ce895088593 : Dump immersive state in DesktopRepository
572db0c63de8 : Handle media key events while on keyguard and bouncer
7a998ac16c57 : Handle keyboard/adb keyevent based authentication confirmation for password bouncer
fd6a41479aec : Allow start custom activity for catalyst test
024f141dd3e9 : Remove RELEASE_PACKAGE_VARIABLE_NOTO_SANS_CJK dependency
6970b5628e45 : Use old JNI on robolectric environment
3d28d17497be : Cleanup released sqlite flags
4fa5446f3dde : JobScheduler: Enforce quota to jobs started from TOP state
8994356aa293 : brightness: add loading logic of bedtime autobrightness curve
883b408292ff : Prevent task from exiting immersive twice on close
4f580e682f32 : brightness: set up flag for bedtime autobrightness curve feature
213dd25ea8c1 : Set view bounds in parent by diff'ing the bounds in screen with parent.
dd704c1bb5a1 : Abort or throw SecurityException if intent redirect detected
ba96cccb4fae : PowerManagerShellCommand: Avoid hardcoded package
b24eb1940bf2 : Replace utility classes
9835370d0faa : Consistently close BatteryUsageStats objects
bddee3a49c09 : Fix initialization of display stats on multi-screen devices
f7eba82552d9 : Integrate system feature codegen into SystemServer + framework
cafac86b67ad : Update boot image profile for MessageQueue
8cb788c15f55 : Update sysuiResTag for user switcher UI elements
6a2efcc5182b : [Ravenwood] Recreate UiAutomation mocks for each test
6a5d10025192 : Introduce getChildSurfacePackage() and clearChildSurfacePackage()
1d3827d255e9 : [Service] Cleanup CHRE 24Q3 flags
7aac258abbca : Fix blocking call on FMQ critical path
a70a0155dd5b : Deprecate the constructor of RecognitionConfig.
7aa6dba0a725 : Refactor PropertyInvalidatedCache
ee8579e21b9a : [Satellite] Reading the datagram value from carrierConfig.
3a620b0a65be : Update references to AM and Job OWNERS
656b47acac05 : Add ownership info for dropbox
b1ff3fff8b0b : Fix resize frame drawing underneath content
267fdb0bc87b : ConcurrentMessageQueue fix idle calculation
948529706222 : Permit people in meta-OWNERS files to make their own ownership changes
a1464d27d67e : Tighten MU ownership
a63a89b6d9f6 : Add TurbulenceNoiseShader.SIMPLEX_NOISE_SIMPLE.
dca77daf2c44 : Runs the A11yManagerService package monitor on a dedicated thread.
7fd21a2d419e : Ensure item being dragged shows on top of other items
be525fb5caad : Fix deadlock between attachSystemDataTransport and addOnTransportsChangedListener.
7cd44db95d8b : Make SyncManager owned by JOB_OWNERS
5351ae12c549 : Remove yamasani@ from usage OWNERS
2f97d5329c5a : Directly set isKeyguardGoingAway in repo
dcd473066f12 : Ensure item always takes up required size
3b4410086fbd : Fix flickering of the resize frame when resizing
4f94f0d5acf2 : Only allow vertical resizing if it's supported by the widget
2f4cce6f12c9 : Remove private profiles even when DISALLOW_REMOVE_USER is enabled.
b7815a0a3442 : Add new API to allow app to opt out Intent Redirect Prevention
cabdc5ea30ec : Add APIs to support granting access to Android Keystore keys
781e35dcb056 : [AAPM] Add Owners file for tests
4a1240b777e0 : Improve drag-to-remove detection
7504b7b0feec : Polishing for migration to disable Weaver
8de9051f0631 : Support disabling Weaver on unsecured users
b2181a839746 : Make AppIconProvider&NotifIconStyleProvider dumpable
1c7b6944c2ae : Implement caching in NotifIconStyleProvider
b6cf44899c18 : Implement caching in AppIconProvider
d3f8fc56aa44 : Implement hit-ratio dumping for NotifCollectionCache
f0ccc4f7a51f : Introduce NotifCollectionCache
95d3598ec4d2 : Fix perf regression VisualStabilityCoordinator
39ce0428387f : [ProgressStyle] Remove duplicate to from the doc
8ecf84b33bcb : [sb] run formatter on system event animation files
4e53a549f390 : Revert "Add condition to bypass caching when process uid differs from caller uid."
8b87e1afcc6c : Migrate ShortcutHelper from Activity to BottomSheet Dialog.
a528bda1b1a2 : Cleanup NestedScrollSource.Drag usages
6976f1d711f0 : Tighten network ownership
1868ad36b821 : Fix icon resource loading issue for user switcher
064466fe8e2f : Fix error when backing up channels
87bd231d296f : Remove dreaming hosted lockscreen - Part #4 (end)
547b29d60ed4 : Prepare FaceScanningProviderFactory for multiple displays
0b4cb06d701b : Prepare SysUICutoutProvider and CameraProtectionLoader for multi display
0b0ed6d14964 : Add DISPLAY_BT2020 ColorSpace and DataSpace
d4ca0f536133 : Notif redesign: reduce conversation icon margin
029853acbb8e : Extend am profile command to also collect lowoverhead traces.
02546f73ffd9 : Revert^2 "Add support for UIDs in bootstrap atoms"
124b3d1b688b : Remove Service Manager Cache in Java
bb206200e65c : Disable HBMController HDR boost if HDRModifier is enabled
d39fd18d5dd9 : Improved screenshot policy for desktop and split mode
3627dc88c9b4 : WindowTracingPerfetto: enhance logging
f5cbc5bd3fb1 : Allow specify theme for catalyst test
ac54d3cccaa7 : Remove StartingWindow or cancel the request to add StartingWindow after all activities have been drawn
c8d4e7f32344 : [AAPM] Add Owners file
974a5aa8fa44 : Add tracing to FrameTracker
117c5a00bdbf : Prepare SystemEventChipAnimationController for multi display
e0722e915679 : Make Wearable settings readable
52ce865f34a6 : [flexiglass] Scroll the shade with TalkBack
7a44ee9ba481 : Format SystemEventChipAnimationController
da2dfeee49b6 : Avoid strings being clipped
75a2980588bb : Revert "Defer resume of activity while applying wct"
c1bca5a02ee6 : Clear close prepare transition token by TransitionObserver
cf244b0831dd : Record latest back gesture occur on which task.
08ff158a5b84 : Notif redesign: Don't apply effects to app icons
7e8b93a750d1 : Revert "Use MappedFile for mmap-related operations in CursorWindow"
7d94ac394055 : Revert "Add support for UIDs in bootstrap atoms"
92ff7c75e4fe : Reset draw state after notifying invisible to activity window
3452aaff89bb : Create flag for QS Tile Detailed View
036933d9608c : Specify the display ID to mirror when creating virtual display
1bae674c7401 : Increase UI tier size from 5 to 10 apps
fab37f1bd322 : Add enable_drag_to_maximize flag
0de2cfa68b5b : Update EXTENSIONS_VERSION_CURRENT_PLATFORM to 8
d4935a103ad5 : Update NPE check w.r.t. proc state locking.
ed1bf52d0f4c : Deprecate WebView.startSafeBrowsing API
ed7546f3cf1c : Report Post-GC memory metrics
4eda903ab480 : Accumulate battery usage stats incrementally
5f1160920f93 : Add owners to view_flags.aconfig
2c23866197d3 : [PIP2] Fix stashed state updates.
e55a45184da5 : Add automotive_cast to SettingsToPropertiesMapper
705626be2f2c : Add IntDef for FileChooserParams.Mode
7034e051af31 : Add flags for moving windows to UI threads
a205bd31b013 : Update WebChromeClient FileChooserParams for File System Access API
8d50dd5399db : Revert^2 "Frozen-aware RemoteCallbackList"
612b0ac97b9c : [sb] Don't pass in application context, use the root's instead
425d3d5aa3c5 : [6/N] allow adb installs to bypass verifier
3e388b266f32 : [flexiglass] Fixes long-press on emergency button.
40fc026d3262 : Add language and remove scope in radio alert
6a699a443a07 : Register VcnManager with VcnFrameworkInitializer
883cbe7d97d1 : Send context URL as EXTRA_TEXT when available.
a6c42b9b0e95 : Remove dreaming hosted lockscreen - Part #3
d2ac3f032787 : Remove dreaming hosted lockscreen - Part #2
bff27edb3309 : Do not handle touches next to shelf Take #2
aa9d9aa9d1f8 : [Flexiglass] Don't use padding for BrightnessMirror
99853d950af4 : Ensure we always de-duplicate groups on ProtoLog init
1ef32e9b36f6 : Wrap ProtoInputStream in autoclosable class
e67c19f3d2cf : Simplify nested logic
828149e62080 : Don't try to log to logcat before the service is ready to trace messages to logcat
6257d9358276 : Update PerfettoProtoLogImpl constructor usages to use unprocessed and processed implementations where relevant
3228b8806d7d : Update tests
ab70dbe7e1e5 : Implement unprocessed version of PerfettoProtoLogImpl
8a0bd27d26c0 : Create ProcessedPerfettoProtoLogImpl class
6e15198b5588 : Turn PerfettoProtoLogImpl class into abstract class to support both a version for processed and unprocessed protologs
54d16adaedc7 : Make volume slider in volume panel consistent with settings app
99e54e0f7d17 : Bypass screen time check for bucket evaluation
6ec21165e958 : Making preExceptionHandler optional for plugins
744f4109aac0 : Add yingleiw@ and caseyburkhardt@ to OWNERS for A11y platform
5799977ceb60 : Clean up flag network_metric_monitor
917586eca7c5 : Make WakelockStatsFrameworkEvents test work with ravenwood
43722df2e05b : Remove deleteCaptureDisplay flags
ccedde80dc50 : Helper to verify ui event has bubble info
0e86fd9711da : Migrate from mockito.Matchers to mockito.argumentMatchers
d88a0d249380 : Remove dreaming hosted lockscreen - Part #1
0caa9a4d6334 : Remove legacy ambient wallpaper support
6c59dbd9dfd8 : Health: Add fixed-read-only flag for platform_skin_temperature_enabled
06d462b54d9c : Per display ConfigurationController for Status Bar
75342ef50b3a : [AAPM] Introduce new Service for Android Advanced Protection Mode
228ab40f54fc : Remove unused library
aab63f88f7dc : Expose the current Transition in TransitionBuilder
865d59b17324 : Adjust double tap coordinate offset to avoid app region
e7ef18671c4e : Revert "Modes Tile: load default drawable in mapper"
81c6cdc7803d : Removing GestureRecognizerProvider
71ada7db1fb6 : Split dirty-image-objects file between art and framework
00a1e71ba81d : Handling 3 and 4 finger swipes on tutorial selection screen
3dd233d25f5c : Ambient AOD support
191f0b540c19 : Allow multiple listeners for the same message type
727bd6d036a9 : [SB] In StatusBarViewModel, keep icons hidden during camera launch.
547c8b7fc11b : Name AppFunction executors
423f19286d68 : Remove tasks from repo on ExitDesktop transitions.
6557fe4bdc14 : Check for NLS service intent filter when rebinding services
1470dd0049cf : Attempting to fix SystemPaletteTests
e0b7038114f6 : Introduce FlingOnBackAnimationCallback to make use of new timestamp API
ed3e775c2aa5 : [Catalyst] Fix storage issue for hybrid mode
ccbdc666b3a1 : Add condition to bypass caching when process uid differs from caller uid.
c0bcc1b4c570 : Fix ProtoLogViewerConfigReader bug
269eef85ec5b : Set Launcher alpha to 0 during Desktop exit transition.
b6c16a61ef6e : Add dialog factory method for creating draggable bottom sheet dialogs in compose.
f4fec400b534 : Force enable MODES_UI flag for ZenModesCleanupStartableTest
6303779886be : Decrease the app zygote preload timeout.
730041393f20 : Keep a few bouncer coroutines long-running to allow dismissing lockscreen when bouncer is not currently showing
0bbc544d4e4a : [Ravenwood] Support DeviceConfig
3b26fcade3d2 : [Ravenwood] Support DeviceConfig
9b317ef156b2 : Stop evaluating the install requests from package manager and approve all evaluation requests by default.
8fb953619794 : Update TextRunShaper to be backward compatible
a28e67f07194 : Add a flag to fix the security exception in PropertyInvalidatedCache.
2f11a4a50c42 : [expressive design] Fix selected index of spinner.
05ea960d877c : Fix task cannot exit split if the app requests moveTaskToBack
61f1e4ba4a44 : [expressive design] Create SuggestionCard.
d154de1a39a6 : Modify scrollbar to include gap between thumb and track
3ad584f1e6da : Add a new feature flag and a new permission
793fafecf9b8 : Refactor BroadcastHelper to pass requiredPermissions parameter.
6fb9caf5fdf4 : Always allow updating config from input during booting
625bc5991ad4 : Log updated process configuration when binding
e6cd19d9df52 : Handle malformed PDFs gracefully in PrintSpooler
59fa415fc30a : Add common APIs for settings datastore
f318ad521c8d : Update bug report shortcut to launch the bug handler app
3db49088a535 : Use gsfc system font in new clock
32a5c087ebc6 : Convert vintf_fragments into vintf_fragment module(s)
c3d546cc6508 : Reapply "camera2 jni: nativeReadValues() can use camera_metadata_ro_entry instead of camera_metadata_entry"
f1bf0a502db9 : JobScheduler: Adjust quota default parameters
7e9495bc8b68 : Include the priority of registered receivers in the trace events.
746b93479637 : Add YCBCR_P210 format
4f30728fd5ca : audio: Show microphone icon when input device is connected
7d98941c02ed : Revert "Refactor PropertyInvalidatedCache"
463f14484a13 : Rename CatalystScreenTestCase.context
cb5aa240721b : Init forensic service
3128792c61c4 : Fix snackbar paddings in CredMan UI
692cddfb32ab : [CDM] Check for system calling UID when backing up and restoring data
5158d49f1b9e : Add MessageQueuePerfTest
d40c740f34ce : Exit immersive state on rotation
6a8541f811b9 : Fix divider flicker from swiping up from a pip->fullscreen app on top of split
ae8ebf731412 : GraphicsEnvironment: Check ANGLE allowList after settings
5fbb638a2062 : fix(high contrast text): fix test tolerances
63df053afbf5 : Adding the MSDLPlayer to platform slider haptics.
c7f836724562 : Fix swipe-PiP-to-home transition with flipv2
5e996f90e50f : Rename Gantry flag for fullscreen magnifier + mouse bugfix
185048041e6f : Update checkKeyIntent
e020bdab29aa : Import translations. DO NOT MERGE ANYWHERE
e0f1c50d0627 : Import translations. DO NOT MERGE ANYWHERE
f27be658bcdf : [5/N] Use synchronous interface for report* methods
d6c8e39fc938 : [4/N] APIs for verification policy and failure reasons
f327bbc214ab : Add back non-ravenzier part of bivalent device side test
ac39c20fb569 : Import translations. DO NOT MERGE ANYWHERE
6cfe5c9cc39f : Import translations. DO NOT MERGE ANYWHERE
ee7d6ded2e0c : Update AccessibilityNodeInfo#setBoundsInWindow docs
5ef0c82b30cd : Fix shouldDefaultToObserveMode attribute to work for Off-host services
4c020effc6a5 : Don't register NLS in test
698f1aabb947 : Import translations. DO NOT MERGE ANYWHERE
a6f429bbbb8a : Adding MSDL feedback to emergency button click.
3b4e0f130d09 : Add new Motion Photo intents
e4fff766f54b : Register VcnManager with VcnFrameworkInitializer
235d98ec1c85 : Restore to pre-immersive bounds on immersive exit
4d62615ff6ea : Add columns configuration for split shade
9e7859d7fce8 : Rename delegate to delegateNode
19b8021697ef : Allow SplitScreenController to explicitly request a new instance.
f8305aaf5732 : Call remote RotationResolverService from current user
04db63f664ab : [Divider] Fix divider surface visibility when width is 0
9abb35d995fb : Import translations. DO NOT MERGE ANYWHERE
2b5e9021a4d8 : Fix check causing wtf log in ModesTileMapper
f89006814933 : Gerrit pre-upload formatting changes
d091d07eef6d : Update SysUI usages of device state overlay config values
ba18c5ddf523 : Fixes open in browser button to match specs
3ae8bf1de1e7 : Revert "Migrate to best practise to read network state"
cd4ca00a9018 : Log event when dismissing bubble from handle menu
61e3f4fb0cd6 : Update isSecureCameraActive flow to be false after unlock.
aff00b6b1587 : Implement 3-btn-nav fixed rotation
606e1cd540db : audio: Display product name for bt input devices
d5e5c5025bd4 : Adds SystemUI DeviceStateManager utilities
4ba1a4de2782 : Update PosturesHelper to use DeviceStateManager API's
8b4b220d92b4 : [Ravenwood] Always provide main thread
7d508bb273b3 : Copy KeyCharacterMap object when we fill InputDeviceInfo(2/n)
8593141009ab : Appinfo: Make adb the app debug source of truth
3c5d5efd7557 : MotionEvent: clean up flag for PointerCoords#isResampled
71c8e7ae3689 : Add colefaust@ to OWNERS
880becbace55 : Remove deprecated methods from AppFunctionManager and AppFunctionService.
20d6a0af935d : Add a flag for hosting communal widgets in secondary user
d044cbb65a64 : Check intent creatorToken
63684603afbe : Check device icon for legacy associate method
b7e31cfbc152 : [flexiglass] Fixes clock and smartspace jitter in AOD
1376825f1182 : Handle split shade and alpha
13f7e97fbc6e : [sb] s/Collapsed/Home/ on the view binder and model
30a8feff8374 : [sb] Rename *Fragment* dagger components to Home*
5179ae7cde7b : [sb] Basic compose replacement of the fragment
724a6ecd19ea : Defer resume of activity while applying wct
101edc0f518a : Process adb shell input keyevent 82 through dismiss
ba22f9bf96d2 : Preparing composable to handle live gesture progress in tutorial
aeadbf2054d7 : InputTests: Use a test rule to mock the IInputManager instance
9488a3e5d487 : [People Service] Prevent system_server crashes due to Content Provider issues in People Service
80d00ecae513 : Fix the problem of missing signature permissions
f9f3e3e4eaba : Test getDisplayIdsByGroupsIds
97507d8b1021 : [Contextual Edu] Make contextual education module optional
507feae5e8e1 : Prepare StatusBarContentInsetsProvider for multi display
9e39ccc6feca : Create generic implementation of PerDisplayStore.
c565bef9929d : Implement translation animation
e8f934d8ce77 : Add test to AudioSlidersInteractor
0a53b6c221de : Use State in ViewModel instead of StateFlow
8c104d51b649 : Fix-forward for breakage introduced in b/29732496.
47815b50c2fa : remove multi-user and enterprise deprecated functions from DeviceState
ea1f82431cec : Calculating gesture progress for all touchpad gestures
e23f026f9014 : Separate corner resize tests into individual scenarios
e2c4fc92c18e : Make timeout optional for pausePolling & also resumePolling only if not started
9b38acc80023 : Cleanup LetterboxScrollProcessor with @NonNull and similar.
44cb19410102 : Check event in app's bounds using absolute coordinates.
e60527feab7c : Set the same density to all displays
9bc0f1eb3464 : Introduce vibrator service effect pipeline support
28cf5283450e : Add InsetsPolicyTest#testExcludeImeInsets
e88c1e66da12 : Add handling of simultaneous alarms/timers
51f86b524211 : Add validation before logging ScreenInteractiveSessionReported
b9474405880e : Update mPosition when creating a new InsetsSourceControl
829da8849df3 : Use getLaunchedFromPackage instead of getCallingPackage
d9ab4a6ae44b : Introduce flag for primitive composition with absolute delays
1043a7c07336 : Fix typo in ProtoLogViewerConfigReader.java
5f84d036db86 : Nit clean up
201e538f9641 : Finish unfold transition after timeout
7b44e04ef3d0 : Remove explicit creation of SurfaceSession from View
4dfd56f81043 : Introduce more complex ways to match elements
5282b2b03560 : Finish unfold Shell transition when fold is merged in
2037735004f5 : Fix finishing unfold shell transition immediately after folding
ecb283668543 : Biometrics: make the handleFailedAttempt method private.
07274db56db3 : Multi display initialization of Status Bar
aeaaaa43918d : Fix windowless surface being removed too early
54a36368fabd : Dump threads for troubleshooting when catalyst test timeout
e3419a96f813 : Use new enableDesktopWindowingEnterTransitions flag
50262f50f26b : Log ENR appear animation events
f2ec3918c0f1 : Make SizeCompatTests support compat annotation by default
63d8edcc943e : [Lut API] LUT java interface update.
6afe51bd2d6e : Update hearing device dialog UI to match figma
9e735b400e07 : Check for minimize & exit immersive logic on #openInstance
b6c8c9d0bd42 : Revert "Migrate HearingDevicesTile"
7f86ad5930fb : Revert "Enable SQLite JNI for host Mac and Windows"
c5d4fe25bf22 : Convert vintf_fragments into vintf_fragment module(s)
f53ab51f5908 : Migrate HearingDevicesTile
6ad044d0b26a : Revert "Migrate HearingDevicesTile"
3ce8ae768d4d : Clear shortcuts associated with an A11yActivity when the A11yActivity becomes an A11yService with the same componentName
5a2c7673eb6a : Enlarge pointer icon for magnification
e3fa80e1babc : Revert "Do not handle touches next to shelf"
6a94a4caa6d4 : Create VCN folder
05d6067588cb : Change maximize menu maximize button to restore when task maximized
7bbcd22954db : [flexiglass] Handle HUN touch when touches are dispatched to NSSL
d69d3ee0a4d2 : [Ravenwood] Load default properties from build.prop
b39dded2a258 : [flexiglass] use LSToGoneTransitionViewModel for keyguardAlpha during LS -> GONE
5f85928a4083 : Add flag to restore to previous bounds from desktop immersive
687ade496b02 : Set the proper owners for cache tests
ff3841c25f53 : Enable Linux terminal app via developer settings
46c2f3371718 : Handle TO_BACK in PiP onTransitionConsumed
222a60ab6bc4 : TIF: Fix HDMI-CEC device list issue in kids mode
2a90b8fa79c9 : Integrate system feature codegen into SystemConfig
51be14deb144 : Use NotificationProgressDrawable for ProgressStyle notifications.
23a67d4a33c2 : Force show system bars while menus are open in desktop immersive
9d6d6f525635 : Use WeaklyReferencedCallback annotation
6fe30c4440a9 : Add per-package lock in setAppFunctionEnabled.
231f87ca18f3 : UinputRecordingIntegrationTests: Remove FlakyTest annotation
1ff4d2ffd261 : KeyGestureControllerTests: Cleanup InputManagerGlobal test session
fabffd0920cf : [flexiglass] Removes device_entry_udfps_refactor from scene.md
8a498d23fd2d : Deprecating ResultsReceiver usage in InputMethodManager
7c6980cb3899 : Add new flag for XR-specific manifest entries
6ca88590ec02 : Fix showing the IME on another display, when requested from a virtual display.
1f50d0aad07a : Fix inconsistent tracing of bitmap count/memory
75f17d4fbd59 : Refactor PropertyInvalidatedCache
aec7dbd2e7f3 : Update drag-and-drop to support resizeable widgets
0409fcd7419c : [SB][Notifs] status_bar_ron_chips flag -> status_bar_notification_chips.
824fb783cc5e : Catch NotFoundException in getDrawableForDensity
1b5ea919ab3d : Log uncaught exceptions in MidiService
33342e211a9b : Make bundle channel even more system reserved
abb0ac8dc18c : Add flag for considering flags in PackageCacher
c36a6daa8a35 : Notif redesign: Show small icon for headless system apps
44ac5d2438ee : Define emergency alert parcelable classes
ade1961f0aec : Fix typo in app usage docs
1ea2692b9508 : [VRR] Do not suppress touch boost in ViewRootImpl
35137debc4bf : Pass bubble logger to BubbleBarExpandedView
0b558a935f18 : Do not handle touches next to shelf
436a2714e5f8 : Add scrolling to QS
b0bd2dba7dc9 : Mark UinputRecordingIntegrationTests.testEvemuRecording as flaky
57a34e17e1ae : Use screen timeout settings of current user.
134f5593cfd5 : audio: On desktop, show output device USB name
85692fd6a7a8 : Document handling unsupported EAP-SIM/AKA
04efdc82adfb : Manually apply "hide capture more" change to main
5eb1f7827cf0 : Remove concept of backdrop
c4bd9923ccd4 : Add ownership of scrim/lockscreen files to keyguard
84ec6502fa84 : Add music (piano) icon for zen mode picker
21b5c2f81085 : Don't stop MediaProjection sessions without display
02f9df30ab63 : Use Bounceable on BC25 tiles
5de14166adb4 : JobScheduler: Remove quota bump
064443125eff : [Contextual Edu] Accessibility - Do not change focus when ContextualEduDialog is shown
aa04336b3aa2 : audio: Only replace route info when Bluetooth device is bonded
88a11936baf2 : Remove dead code related to playback suitability
0a291023a247 : [logging] Move session id tracking to DesktopModeLogger
bc7e7477b82c : aapt2: add Baklava
91a04ab98810 : Add move to external display
a698e5e7f9ca : Build.VERSION_CODES_FULL: backfill older Android versions
2f641ffc8ce1 : Fix crash in MutableSTLState.finishTransition()
ee47f86be8d4 : Coroutine tracing improvements
ce6e0c71f43a : Revert "camera2 jni: nativeReadValues() can use camera_metadata_ro_entry instead of camera_metadata_entry"
33be6d8261b7 : Remove pieces of device entry flag - Final piece
16a8cea63b2f : Remove pieces of device entry flag - Piece #5
368301d40241 : Add protologging on setRequestedVisibleTypes call
ceb7595d0796 : Remove pieces of device entry flag - Piece #4
6d6c196d0b61 : Dump ENR appear animation states
8d0896c32dd3 : Revert "camera2 jni: nativeReadValues() can use camera_metadata_ro_entry instead of camera_metadata_entry"
6e547dac0fd1 : STL should ignore mouse wheel interactions
636e07d9ed23 : Cleanup SwipeToSceneTest
8fd307a2e09e : Update Runtime metadata property config
a571f1c927f3 : Remove inconsistent @Deprecated behavior in *removed.txt files
0023b25cc6bf : Notif redesign: Account for low ram icon size
f94d00571fd2 : Set inflater factory for group summary header
364a7091acef : Allow TestHarness exemption from forced grouping notifications
aab30aa51bb4 : Add rows config for QS and QQS for bc25
aa6cf998e07d : Notif redesign: Use launcher app icons for notification rows
7c243dd628f2 : Don't select automatic strategy when stylus is in use
ddc40e0be0ff : Remove pieces of device entry flag part #3
2e169a2b8253 : Create ACONFIG flag for Notes role QS tile
37e248d06aae : Revert "Revert "bc25: impl baseline button styling Filled & Fill..."
d99e7b208ece : Add flag for desktop windowing enter transitions
4fb8af50256a : [Ravenwood] Move hoststubgen into f/b/ravenwood
894f27ffef61 : Allow keyguard capture when screenshare protections are disabled
99736dbcbf8f : Implement equals method in BackEvent.java
ea0ecb329ecf : [EXPD] Make non-collapsible toolbar style customizable
1a3504eeca7e : Add audio sharing developer option flag.
b043188539db : Fix bug: ProgressStyle.setProgressSegments doesn't set the segments.
d710af4489ed : Make universal resizable compat id visible to CTS
064eb7d298bc : [Expressive design] update preference background with round corner
0fbab8632682 : Restore android full libraries for robolectric tests
3ee49bf2456b : Change android.test.UiThreadTest to androidx.test.annotation.UiThreadTest
ec8e8394ba6d : Fix the enforcing method to pass current user for visible background users
a18c60bdd5e8 : Clean up commonSurfaceAnimator flag
3b57982c9774 : ProcessList: Fix javadoc for 'mProcessNames'.
0da18cfe8e39 : feat(high contrast text): update design treatment: no padding on background rects
ebdc2df388ae : feat(high contrast text): update design treatment: rounded rects
9fb64bd0255c : Do not handle touches next to shelf
baf588c89064 : Ensure flicker split test activities handle resize
72a2336a0f1a : Adjust indicator bounds to use desktop stableBounds.
d3ea06ce2f73 : Rename isHorizontalDivision to isLeftRightSplit (and flip boolean logic)
efe5d0e72210 : Add ability to disable launch-adjacent for a specific root task
c8e1f98cfa95 : Delete legacy transition code from StageTaskListener
f5b3136f256b : Move ravenwood tests to f/b/r/tests
97113294ab6e : Revert "Add ability to disable launch-adjacent for a specific root task"
9e4c07242156 : Update ravenwood test owner.
fff946a55c33 : [Ravenwood] Sync AOSP and internal main as close as possible
8d650996822e : Add aconfig flag for accessibility shortcut implementation
0022ac376217 : Remove pieces of device entry flag
4458f587dcd7 : Change default caps for BOUND_TOP to superset FGS
762259aa87c4 : Protect setPermissionGrantState coexistence code.
c62258ba42c2 : Log when bubble is dismissed via drag
360c277afd62 : [PIP2] Remove BroadcastReceiver from PipScheduler.
d222726359da : Revert^2 "[Catalyst] Support settings service"
3d5cfdd44079 : Fix SurfaceView window crop computation
f61c68eba0d9 : Fixed the incorrect carrier config for 5G icon timer
231d7dfdff8c : [SB][Notifs] Initial scaffolding for notification status bar chips.
ad9a72c2ecf4 : Revert "Update checkKeyIntent"
02910719503e : Do not handle touches next to shelf
86ca6ce9eaab : [PIP2] Add Shell requested remove PiP transition.
131ac7758ca3 : Update native input manager when display group removed
c15d6de9b0fa : Remove Recursion in Session Management/Traversal
15f16ed6b7dc : Introduce oomadjuster_cached_app_tiers flag
0a80242ad141 : Update to consistently highlight selection for single field.
7899f8dad41a : Add OWNERS files for forensic tests
cd3ccd668312 : Revert "bc25: impl baseline button styling Filled & Filled_Tonal"
707aca0db2f6 : Disable Maximize Menu when in desktop's immersive
e8d06e714a62 : Update checkKeyIntent
dbf371eef0f1 : [SB] Fix status bar icon when a layer-list contains some level-lists.
aad476af40ac : Updated StrongAuthTracker to use internal API with its own disable reason.
5a8343368f5a : PriorityNestedScrollConnection: make onStop suspendable and add onCancel
43103941a884 : Remove synchronization on getViewerString
ee49af88c458 : [flexiglass] Remove ScrollViewFields#headsupHeightConsumer
41c9c1795be0 : Revert "Frozen-aware RemoteCallbackList"
40c5c2310e87 : Simplify PriorityNestedScrollConnection
0fff46ec0d47 : Revert "[Catalyst] Support settings service"
ce195e913966 : Propagate the light background flag in RemoteCollectionItemsAdapter
fb2287a1859d : [Screen Record] Assign fixed IDs to screen record notification groups.
f6d48701d640 : Disable velocity-slowdown when timestamp API is enabled
11375490ee3f : SystemServer: remove ArcNetworkService loading
62c8f5d7c603 : When not provided, set viewerConfigInputStreamProvider from viewerConfigFilePath
814d4cfc96ad : [4/3] Introduce SpringTimings to avoid confusion.
22c9db33195d : Fixes previous commit with wrong updated colors
482ef73ab714 : Refactor BackgroundUserSoundNotifier
3edc9ab2aa8d : Add a default_team to BootImageProfile tests directory
c1fb29f16dd0 : Flag for move to external display shortcut
eddcceb34ee0 : [Catalyst] Support settings service
686288677487 : Add remoteTransition for moveTaskToFront(), and apply minimize reparent
5025d6854ee7 : AudioService: refine new SCO audio management logic
3a93a70dee00 : Add PreferenceIconProvider to support dynamic preference icon
1e6126f63f4c : Remove RuleEvaluationEngine from AppIntegrityManagerService.
5826bdb1509c : [expressive design] Update compose version.
315efbb12710 : Add timestamp API extension for predictive back
863b7a402cb2 : [WM] Don't moveTaskToBackInner if it's already detached
0ce4046f1e97 : AudioService: use AttributionSource for communication route
038d19dafd3e : Backup & restore for ringtone vibrations(2nd try)
3051af5f95a7 : Change flag to the new one for haptic customization of channel notification
65f452b59623 : Consider SAW permission for real caller
4676a409d147 : Modify getRecipientAddress Api to public API
e9a94d0db9aa : [expressive design] Migrate App info page.
65762a0d8026 : [Panlingual] Get the LocaleConfig without overrides
cc103033f08d : Do not call getStagedApexInfos repeatedly
ecc781ba5ec2 : Add support for UIDs in bootstrap atoms
b9065f0e35f1 : Fix ActivityEmbedding flicker test flakiness
9dc83418545c : Remove `audioSharingQsDialogImprovement` flag guard when injecting AudioSharingRepository.
1d5156d9cc4d : Fix EventConditionProvider for secondary users (and HSUM)
e27fc17a2d61 : Add proposed trendy teams for VTS modules
0ba67b650653 : [Catalyst] Support hybrid mode screen test
324febbe8d80 : [Catalyst] Support hybrid mode
3005d4ccb110 : Fix broken build caused by ag/29001351
7a1f0e475536 : Implement base functionality for flexible 2-app split
faff9bd105a4 : Add ability to disable launch-adjacent for a specific root task
bd4bf35efb28 : Fix incorrect transaction id of Transaction#apply call stack debugging
8b4781e5494b : Removed StageListenerImpl and moved data into StageTaskListener
5dce646c08ea : AS: send the mute event to APM
89f28270932c : Fix ViewRootImpl traversal issue after SurfaceSyncGroup timeout
8fd9b5cb6f50 : add a new AppOpsManager.setOnOpNotedCallback API
c1f141112203 : Resubmit of "Allow loading of host runtime for Windows"
ae39a3f6538e : Enforce hard limits on hosts per package and widgets per host.
52583ef3fc14 : Revert "Allow loading of host runtime for Windows"
d45439369db1 : Allow uninstalling DMRH when not used for management
7f7a5dae6a90 : Adding Flag for TIF Extension Standardization
4752eab2c75b : JobScheduler: Use quota parameters for testing
f6bafb958c69 : Add DAL parsing static lib for StatementService.
2f3e1cc54dfd : Add methods to deconstruct SDK_INT_FULL
785cf35fa38c : Disable failing test until we can fix it
87294abd8105 : Add documentation link to wtf log
a682fea38f3f : Remove pieces of device entry flag
29a6bb7fdb3e : Introduce utility method SubscriptionManager.canManageSubscriptionAsUser
d87e1a154aee : Delete old unit test
41d43628c76c : Remove invalid isolated tid check from test
60b799585bde : Update android.app.ui_rich_ongoing to use the new FR
855c397f1bb7 : Backup and restore user disabling the NAS
4c5373b47032 : Fix ContentProvider removal bug
5b2ab551a620 : Remove unused API removeFromSideStage
5a7532462fb3 : Frozen-aware RemoteCallbackList
f736bcdc0044 : Keep the flag entries sorted.
2358ad8f5251 : ResourceHandle : Refactor resourceHandle data type to long
fa5e185a5585 : Add com.android.healthfitness to apex_available for android.permission.flags-aconfig-java-export java_aconfig_library
5c239c72e8dd : Refactor code to allow B&R logging
0c70eefe2e44 : Move UserController.dispatchOnBeforeUserSwitching to mHandler thread.
129f2f9395d7 : Export permissions aconfig flag cc lib
71cc7888b1c8 : Adding haptics to Brightness slider in Compose.
e7883a4403f0 : Introducing a SliderHapticsViewModel for haptics in Compose sliders.
be573449742a : Default to hardware vibration attribute for SCROLL_* feedback
4a6f6efc8656 : Clean up 24Q3 aconfig flag reset_mobile_network_settings
e725b3e97c83 : UserProperties won't be cached for wrong callers
42a13f290889 : Undeprecate allowed adjustments methods
3e3cc652e2ed : Remove the RuleEvaluator for AppIntegrityManager. This is part of the efforts for cleaning the install-time integrity protections.
9144c23b1221 : Adding a SliderDragVelocityProvider for slider haptics
26667c51a8b8 : Reintroduce support for setting the ICU default locale.
2b592f4ed64f : Fix ICU-related failures in Robolectric.
d6a86e849982 : Register NativeAllocationRegistry JNI methods via jni wrappers.
c8c6c179be20 : Fix layoutlib build
04bb4d12922c : Run `RepositionFixedPortraitAppTest` in both orientations
3248101b1012 : Allow loading of host runtime for Windows
7ed4041b7943 : Disable SupervisionService on Wear
9cf2fa6a79a5 : Add back support for 'method_binding_format' in HostRuntime.cpp
55c781b80df2 : [RONs] Create progress style template wrapper
29ad9ea438d3 : Introduce StatusBarInitializerStore
116225fc8e78 : Use StatusBarWindowControllerStore as a dependency everywhere
d57efdf24ddb : DisplayRepository: tweak to implementation of displayAdditionEvent
53643fcf25c7 : Remove obsolete "Gaming" zen rule
f5d495d1ba20 : Filter and sort sliders for BC25 Volume Dialog
792e7007a4cd : Stop logging the AppIntegrityManager related logs (namely, INTEGRITY_CHECK_RESULT_REPORTED and INTEGRITY_RULES_PUSHED) in preparation of the code clean-up of this deprecated component.
68a4e3501559 : Allow customization of method binding names in HWUI
7fe77d5de5cb : Utils required for CrashRecovery module
7f0815cdf197 : [3/3] Introduce a new spring-based animation.
ccc13daace98 : Add PredictiveBack easing to Easings.kt
a1d0c42cf07f : Move UserController.dispatchOnBeforeUserSwitching to mHandler thread.
677068136b34 : Update areChannelsBypassingDnd from PreferencesHelper correctly
63a913d7928a : Renaming TouchpadGestureMonitor to GestureRecognizer
ced834308c40 : Add a flag for the changes to support simultaneous notifications
a71d18b2990a : Fix a typo when checking custom policy
91da675fdd0b : Flag for enabling sound uri with vibration resource in notification channel
f4e75afd5058 : Cancel existing insets animation if the side is changed
34d8176e245a : Hide capture more option for non full-screen image
b4170bdd1a1e : Make AndroidColorScheme easily overridable
abdde67bfe43 : Unit test checking that a VD owner can't interact with an unowned VD
07cf6abe1802 : Add metrics for biometrics enumeration and unenrollment misalignment
16d9a561588f : Finish drag when cancel transaction is aborted
99d72749e330 : Remove hack code for without cross user permission
0ae021763c18 : Migrate HearingDevicesTile
b5ce780e5415 : Add OWNERS file for wmshell tests
7cc1b8d575f4 : Log when bubble bar bubbles are added or updated
b5301d55907c : Revert "[PM] Add more logs to debug the flaky test"
3501ceeda1a4 : Revert "[PM] Add more logs to debug the flaky test"
00143ee7ef3a : Make DEFAULT_RESCIND_BAL_PRIVILEGES_FROM_PENDING_INTENT_CREATOR overridable
8de89b99ee14 : Ensure device state availability in registration.
c060bd35cb10 : Suppress warnings for settingslib
8a35678777e1 : Have multi instance options respect cascading window placement.
369e6eb9bf48 : Use readonly flag of native_metrics for HardwareBuffer
ba9dad3dc051 : Revert^2 "Track Bitmap native allocations"
a864a396d134 : Move the registration of `BluetoothCallback` to bg thread for bt qs dialog.
98b1166078b6 : [expressive design] Update Spinner layout.
3d7743701c22 : Avoid setting transition ready to false when background activity launch is blocked
f8f5ad5b5863 : Remove the dynamic Layer Factories
507d4caf669a : Cleanup code behind flag calling hidden api
9eeb5923af71 : Add a readonly version for flags in libcore
7befc2a9375f : [PIP2] Enable drag-to-dismiss.
5c844f3d7af3 : Remove custom_biometric_prompt flag
60858e3ee646 : Import translations. DO NOT MERGE ANYWHERE
4a90a68a6e32 : bc25: impl baseline button styling Filled & Filled_Tonal
122966ff88c9 : Add a support for synthetic mode with hasArrSupport
37f7fb2cdc32 : Use hasArrSupport api
f3b5450bdc6e : Adds hasArrSupport api
8495d30f4f21 : Add wear owner to watch res dir with specified sdk version
b0ec6da6af0b : Remove separate framework crashrecovery filegroups
251107361a32 : [Ravenwood] Apply a common policy across all code
3d5cfaacb78e : camera2 jni: nativeReadValues() can use camera_metadata_ro_entry instead of camera_metadata_entry
aa4c5fa8810f : Audio: Update the strings for wired audio devices
2abff41f9058 : [Ravenwood] Move Ravenwood processing out of individual repos
dac8d3f55ffb : Remove hidden API usages backed by VcnTransportInfo
29b080800b12 : [SingleLineView] Fallback to mUser when senderPerson is null
49ee901e947d : Add color mode metric to ViewRootImpl
446b9ea72dd3 : Add support for dragging third app on top of 2 app split
52f27e614ec4 : Clean up FLAG_VALIDATE_NETWORK_ON_IPSEC_LOSS
8f0f00551c7e : Apply default selected device to all presets
79778bce4a90 : [RONs] Bind ProgressStyle values to NotificationProgressBar
b91e017cf34c : Import translations. DO NOT MERGE ANYWHERE
68115c158e6c : Import translations. DO NOT MERGE ANYWHERE
7031a84e6a3a : Avoid subtracting shmem twice when calculating LOST RAM.
72db988d4d38 : Reapply "Implement QS columns for different configurations"
940aaa58dce8 : Update bug id for uid state changes flag
65a7c8253734 : Import translations. DO NOT MERGE ANYWHERE
0b8fc96c3e17 : Import translations. DO NOT MERGE ANYWHERE
663c0a6686f1 : Clean up FLAG_EVALUATE_IPSEC_LOSS_ON_LP_NC_CHANGE
3646da8858b4 : Clean up FLAG_ENFORCE_MAIN_USER
0657ddbd0f45 : Import translations. DO NOT MERGE ANYWHERE
913117a38f97 : Add multiuser CTS to SystemUI TEST_MAPPING
cab71af0fd15 : Re-enable app_functions on Wear.
540ef8a2259d : Create `api_for_backported_fixes` flag.
dae7aacee13b : Match new errorprone api
c138864127c4 : Directly check lockscreen redaction flag for redaction
1a76725a4879 : Clean up FLAG_HANDLE_SEQ_NUM_LEAP
950c6489e378 : Update app handle education tooltip to match Figma specs.
5b9bdecf0836 : Change the association revoke importance
0bc4cf304648 : Revert "Implement QS columns for different configurations"
b1bdd1c7ebe9 : Remove redundant CREATE_VIRTUAL_DEVICE permission enforcement
483b2688251a : Add safety for the absolute volume device info map checks
029898996dec : Fix a debug-output formatting error
68e3009e87d1 : Disable flag for ClipboardModelTest
42222465a2b1 : Simplifying TouchpadGestureMonitor interface
a70a2f285656 : Refactoring recent app gesture recognition logic
67b293f4b505 : Return early on "cannot execute function"
e8b7d81f1e29 : Add base logic for the Volume Dialog sliders.
dec8259c34f5 : Fix Ravenwood tests
023f838616a4 : Remove flag that is unused
dca9276e1450 : Flexiglass: include alternate bouncer in status bar state calculation
ffd133be8dc3 : Update bug number for enable_desktop_windowing_app_handle_education flag to use the parent bug.
b5252f703583 : Display Enable power ON
a9d66fc75270 : Moving CrashRecoveryFiles to a separate Folder
1bf1952dabb9 : Introduce CrashRecovery Adaptor
29b2bc7dd015 : Fixed non-resetting scroll state
37cbf384c405 : JobScheduler: Enforce quota on jobs running in FGS.
6c26d99c221c : Implement QS columns for different configurations
9c1118a17052 : Invalidate the cache when users are added or removed.
92f5c3cf8ab5 : Introduce StatusBarWindowControllerStore
e92bb92a1c2b : Create DisplayWindowPropertiesRepository
7602e198721f : Pass SafeActivityOptions with actual caller for startActivityInTF
198f7b559f9a : Pass SafeActivityOptions with actual caller for startActivityInTF
703bc57a6fa0 : Pass SafeActivityOptions with actual caller for startActivityInTF
ef9ea0faa26e : Pass SafeActivityOptions with actual caller for startActivityInTF
811558053b84 : Make getDeviceSettingsConfig non-oneway
140e0710744d : Partially revert the decor bounds calculation
b20f4122aba4 : [expressive design] Fix layout.
356db3272814 : Delay appop revocation when only capability is lost
892f8e20210b : [expressive design] Add more examples to gallery.
ee1966c23ffa : [Audiosharing] Fix the primary buds auto pick logic in hysteresis mode
070817baf2ef : Fix persisting SFPS indicator issue.
dc803479ef1b : Fix a bug in AssistStructure if a view had a null child.
ffa4c38a56ca : Replace Slog with Log in CrashRecoveryUtils.java
be2f18c0814c : [power-throttling] Remove log and change variable name
899bdd1c81e3 : Call dream overlay callback onWakeUp in dream overlay reset
575c1c73d7fe : Fix an HSUM issue with launching the communal widget picker.
4518750cfc46 : Fixes for new Background Activity Launch (BAL) Modes
41f4300bb6c7 : Clean up 24Q3 aconfig flag DATA_ONLY_CELLULAR_SERVICE
840cb83961fc : Fork volume_dialog.xml for the BC25 update
cf40bf90385a : Using Vibrator provided by Dagger in the MSDLModule
426870c0d2c2 : Set caption bar height to status bar height when in fullscreen
0fbf7f883d26 : RecoverySnapshotStorage: silence log spam from file not found
4aaca92a6c57 : Make home app be pinned by default
e622af2ab408 : Move system apps pinner default values to be config driven and ensure new devices inherit them
1165d39ab805 : Clear caller identity before updating restrictions
f8df1507716b : Duplicate settingslib_preference.xml to fix studio build
ef20d0ff6414 : Fix isLastMemoryLevelNormal check for Service B computation
d86efc2a23f3 : Update AE_MODE_ON description for flash control.
17ceada5259e : Give the Desktop Mode owners ownership of DesktopModeFlags utility.
a03615b3e067 : Revert "Reset HUN clipping after cancellation of disappearing animation"
d2997952cbe2 : RecoverySnapshotStorage: fold readFromDisk() and writeFromDisk() into callers
655874ff0d41 : Ignore dismiss requests if we're already going away.
9a391b41e8ec : Replace default clock with FlexClockView
cd8d4b7ac5e5 : Ensure listeners are alerted to notif after LE
81ee5002ac89 : Make ApplicationInfo.isAudioPlaybackCaptureAllowed() public
07256015558f : Implement NotificationProgressBar.processAndConvertToDrawableParts.
cbf6674abbe0 : Fixing back and home icons in touchpad tutorial
f7c8a9b05157 : Reduce logging in AconfigFlags.java
41062181286a : Add null check on device posture change
ea941c29ea06 : Clean old coexistence get/setApplicationRestrictions.
1e1c3f52d794 : audio: Display product name for input device
212800a6ff56 : Add tri-state checked api
ae2441e22a92 : Revert "Backup & restore for ringtone vibrations"
4d1b8c91282b : Use correct method to call JNI method with void return type
ef2f5580afee : Update more tests to use a real TableLogBuffer instead of a mock.
566b638ba85e : Expand javadocs
2aa0a6d270ce : CentralSurfaces - remove old code
726283a339e4 : Modes Tile: load default drawable in mapper
649c790a3948 : Fix wildcard edges by simulating TransitionSteps in KTI
51e01e9f395a : Preserve the LRU positions of its processes when an app is resumed.
b8c444b99a7a : Prevent UnlockAnimationController from dismissing keyguard...
d9055e5cd633 : Fix performance of clip
e71724cbfd1e : Remove wallpaper to not end up in empty desktop
ad97b5e9f8f3 : Add left/ right scrolling for mouse keys
8a22cd588b69 : Refactoring gesture recognition logic
3d5a59c13f8f : Update IME Switcher Menu spacing
ceaa8ce1e33b : Enable omitting IME Switcher Menu header
a7a9cfd1bd66 : Move closing task removal to onTaskVanished
dd7c86dbb686 : StringBlock and XmlBlock are available on all platforms
2079738ab6da : Add a flag to enable windowing transition handlers and observers.
054a54d6a619 : Make removeDesktop oneway
684ecbab1bfa : Opt tuner settings activity out of edge-to-edge.
9a8e8b7b9949 : [2/3] Create a new motion testing toolkit based on AnimatorTestRule.
97fb1a80a57e : Update MediaMuxer javadoc to reflect that B-frame muxing is supported since Android Nougat MR1.
db17c245b326 : Fix aspect ratio for transparent letterboxed activity
eb1be2089cb7 : Derive currentOverscrollSpec to avoid unnecessary layouts/placements
2bb66bd60c12 : Fix aspect ratio for letterboxed activity in `LetterboxAppHelper`
2a709c631ccf : Init mocks in ModesDialogViewModelTest
8de561306351 : Initialise visible_task_count state field of DesktopModeTaskUpdate atom.
b0355317db34 : Avoid recomposing the QS scene every frame
a2930f6bbecc : [1/3] Refactor animator creation to prepare for the new spring.
63fe08d35430 : Content Recorder check mirror confirmation
5a70f5a8f967 : Show the name of the mode blocking audio streams in volume panel
5d0f5d7fcd9f : Add test for startActivityInTF permission check
20c568e77eae : Pass SafeActivityOptions with actual caller for startActivityInTF
798481a085e9 : Flag for cache invalidation when users are added or removed.
ef016222f703 : Abandoned read-write flags for profile caching.
0d3b4156f906 : Revert "Only unregister the callback when bt adapter is enabled"
9b9e68b92194 : [Audiosharing] Register/unregister callback on service connection
afad30814f3d : Fixed status bar hover state height
321797f25d82 : Enable letterbox configuration for camera compat freeform.
8e988f87d5f5 : Distinguish hot launch and relaunch in trace
6d09b190fd0f : Reconcile MaterialTheme.colorScheme and AndroidColorScheme (1/2)
3626fb2023a6 : Make ApplicationInfo.isAudioPlaybackCaptureAllowed() public
d63092a66f9f : Add READ ONLY flags for caching Profiles data
556114ba51ed : [PM] Add more logs to debug the flaky test
c8fa642dddc7 : [Audiosharing] Skip sync source for work profile
9503de4a2f25 : Revert^4 "Drop VDM permissions from Shell"
05747cca0c8e : Mass Migration of Robolectric tests
c4bfcd9266be : Enable freezer metrics without flag
d0c066eeeb98 : Close the SearchResults object when there are no more results.
19cb1d7862db : Add support for dumpsys in AppFunctionManagerServiceImpl.
581650c7a61c : [flexiglass] Fix split shade loading
081f39e3ccf3 : Introduce setCompositionOrder() and getCompositionOrder() apis
40814bdf2cf6 : Add device config for experiment of orientation opt-out
d5eb2a0c53ad : Create AConfig flag to guard bug fix for AssistStructure.
449df85099f7 : Revert "Stop logging the AppIntegrityManager related logs (namely, INTEGRITY_CHECK_RESULT_REPORTED and INTEGRITY_RULES_PUSHED) in preparation of the code clean-up of this deprecated component."
009991fa248c : Add logs for catalyst screen
103dc83fcb20 : Remove inset for non-freeform tasks when desktop mode is enabled
20659908106c : Report Activity as independent change when config-at-end
9bf1bcd89151 : [AppOpLogging] Log IPC binder calls for checkOperation and noteOperation
9c7227ee4715 : [Ravenwood] Mock UiAutomation to support permission APIs
55017463035c : Process adb shell input keyevent 82 through dismiss
13c2ddfc5720 : Push global device state to OomAdjuster
f0226c3aebd8 : Wrap all state changes that can affect OomAdjuster computation
0b1a4fbfc8e3 : Expand the a11y focus bounds if they're too small to be visible.
86494fef2b08 : PointerLocationView: Draw pointers in unrotated display space
004167a6a032 : PointerLocationView: Reland optimization for trace drawing
e0f4589f4a2f : PointerLocationView: Reformat file
835a32bab0e1 : Adding System code route to overwriteRoutingTable() API
8243697d237f : Fix to further ensure floating menu is refreshed on user change
d67e90e8a830 : [res] Optimize idmap format for lookups
0a7aa27590da : Exit desktop immersive if applicable on other transitions
0ee4bebd1418 : Adds hidden constant for FEATURE_CONTEXTUAL_SEARCH.
b08d0b67003a : Handle taskbar corner radius update when closing DW tasks
684d86ce083d : Add show-hide animations for the new Volume Dialog
832f586297e5 : Add GraphicsEnvironment support for Vulkan 1.4
b845003a3484 : Allow users to open switcher in call for desktop
a1a397864a75 : [bc25] Fix notifications showing above QS shade (in Dual Shade mode).
01fff9bf76cc : Migrate TaskInfo#isFocused to FocusTransitionObserver for windecor
738249c050bf : [Accessibility API] Add flag for is required api.
f99159c03e3d : Add task focus change support to FocusTransitionListener
30ed42bb4b9a : [CDM] Dismiss association dialog onDestroy rather than onStop.
f2bfad266d24 : Adding functionality to open by default settings dialog
6d32b3fdca25 : Create AConfig flag to guard bug fix for getBoundsInParent.
4318cf0f1217 : Revert^3 "Drop VDM permissions from Shell"
c87615b28dc8 : Temporarily disable from ravenwood tests
e57607f43d28 : Adding aconfig and changeID for user-agent reduction
5c3835ce69ad : Mouse: Plumb swap mouse primary button setting
5f1c0864fd52 : Mouse: Plumb reverse mouse vertical scroll settings
6f5a0ad19218 : [W]SelfManaged Association device icon
68a5962ca21f : Refresh A11yServiceInfo javadocs to mention missing attrs.
9bb61af61e13 : Fix potential NPE in startHomeOnDisplay
5acd27bb7350 : Adjust the default gain in UI
015f661fabfb : Don't assume a non-null name on UserInfo
a83546252717 : Synchronize access to sim data
77956a6284be : Remove duplicated screenshot capture.
2c1bc40dd5aa : Unflag apex_signature_permission_allowlist_enabled
0c27488abdc8 : Make bootanim multi-display aware
292ae8c060b2 : Update PAUSE usage state when activity moves from RESUMED to STOPPING
e88e232bcdd9 : Add a pulled metric for Wakelock duration based on uptime
2009583f0e75 : Add enable_angle_allow_list flag
37048fda1ffd : adds a flag for hasArrSupport
24c03513dacb : VDM limitations for untrusted displays
b6b1138d5962 : Mass Migration to Robolectric (p-z, except statusbar)
15db479327fe : Refactor 3-btn-nav with flipv2
9a486fc30522 : Update nullability annotation for NetworkStats atom queries
c99c4ce2c126 : [dev_option] Move DesktopModeFlags out of android.windows.flags
d31d7a4e5cc1 : Do not wake device when going from emulated -> non-emulated state
6c9037122dd1 : Add missing Javadoc dependency to `com.android.location.provider`
814d430022ee : Remove unneeded getUserOrDefault
d227173be760 : Revert "Check for NLS service intent filter when rebinding services"
c922679d05ad : Remove EdgeToEdgeActivityContent
d51f0ddc10f2 : Drop platform_apis from java_library
202a6081272c : Set clock alpha when constraints are built
0bac78b250fc : Add new vibrator frequency profile for PWLE v2
b59ef4fb5eb9 : User refresh rate for external display
00088588277c : Create DisplayScopeRepository for display specific coroutine scopes
cd769af5b656 : Fix a small typo in `SystemClock` class docs
c299b2aa341f : Accumulate NetworkStats since boot in StatsPullAtomService
01b33a743507 : Face auth bypass layout issues
6c4e087c686e : Split IME Switcher menu header and item
1e40d911ff22 : Fixed focus state cropping in last shortcut
69dae7aa0524 : Remove deprecated colors from AndroidColorScheme (1/2)
8741b15afb24 : Rename SystemUIThemeTest to PlatformThemeTest
720c31c01fd4 : Stop logging the AppIntegrityManager related logs (namely, INTEGRITY_CHECK_RESULT_REPORTED and INTEGRITY_RULES_PUSHED) in preparation of the code clean-up of this deprecated component.
0a1d9f527900 : Remove the check of Transitions.ENABLE_SHELL_TRANSITIONS for desktop mode use-cases since the flag value is default true.
bc9aec403270 : Refactor DeviceStateManager service and tests.
f3dcd3ad0724 : Create a shell API to remove desktop
e2da48907c6e : Make camera compat freeform updates passive.
f4f91bef8887 : Limit the number of virtual displays
058f75fbdeaf : Fix CollapsingToolbarAppCompatActivity not applying Edge To Edge correctly
a646052bb6e1 : Remove STLState.enableInterruptions (1/2)
b0ce6b520a41 : Letterbox scrolling: fix double-tap letterbox reachability feature
3772e3fccc79 : Enable interruptions in BouncerContent
240bbb0b9b61 : Removes type SYSTEM_ALERT_DIALOG from the A11yService warning dialog.
73fbab04c340 : Dump ExpandableView clipping from ENR
438270b323d7 : Fix flake in WindowOnBackInvokedDispatcherTest
ba3da852045f : Dump ENR custom outlines
0de80ea0c8a2 : Revert^2 "Drop VDM permissions from Shell"
c88218218800 : Move `BluetoothUtils.isAudioSharingEnabled` away from `AudioSharingModule` as it contains binder call.
d005a5b762ae : Opt out test using hidden API from Ravenwood
1bcc100c6fb8 : Move display to top when task gets moved to front in desktop mode
714eee2e9aee : Reduce unnecessary calculation from input device change
fb86d9dda268 : Update namespace for Preference testutils
89eaf2bd1be7 : Listen to audio sharing state change and add the audio sharing source to device if ready.
b51480a5a628 : Implement onclick behavior for audio sharing dialog buttons.
1d07441d2951 : Audio sharing dialog on top of bt qs tile dialog.
414f5b6823da : Revert "[AppOpLogging] Log binder calls for checkOperation and noteOperation"
8d5ef8141ec3 : Import translations. DO NOT MERGE ANYWHERE
beff035f43d5 : Define new events for bubble bar logging
01771fecf135 : Listen to change of notification stack bottom to pass correct bounds to magic portrait shape effects
2be0c97aef9d : Add logs when media host visibility changes
ea82a2680392 : Swap out owners for keystore frameworks bits
9018d6c2d459 : Fix IndexOutOfBoundsException by adding synchronization
7c93465367ed : Helper for loading flyout drawable
e585720c951e : [3/N] implement getDeclaredLibraries
fbc4f88710be : Remove dependencies on the 1-variant fallback
dceef49580a3 : Add unhandled exception listener to STService
4fcd881df01d : Log uncaught exceptions in AudioService
3044debe2885 : Synchronizing the loudness codec dispatcher
2029171a08e4 : Revert "Add OFF->LOCKSCREEN transition"
bdc9fd88318e : Revert "Drop VDM permissions from Shell"
f0a3a86ffffc : Fix LockscreenOtpRedaction flag usage in cancelContentViewFrees
2a31d23688d0 : Add benchmark for backdrop blur
25d6e3312d67 : Simplify BackdropFilterDrawable
bdadc6d24023 : Fix backdrop effect for prerotation
0afaafeb602b : Fixes for errorprone update
5a4166738028 : Have Keyguard drive Unlock/Lock transition directly to Shell.
7a23baeddf98 : Changes in ContactsContract to support SIM in getDefaultAccountForNewContacts and setDefaultAccountForNewContacts.
5c3a99a08181 : HDMI: Disable CEC on standby when Low Power Standby is enabled
2ded7dd46215 : Health Permissions: Add OP_READ_HEART_RATE
b4419eed70f1 : Import translations. DO NOT MERGE ANYWHERE
3267e5259c7a : Update app widget size when resized
2da67abea9e2 : Import translations. DO NOT MERGE ANYWHERE
079079154f59 : Import translations. DO NOT MERGE ANYWHERE
9ffa3c6da3bc : Import translations. DO NOT MERGE ANYWHERE
f5d0a8fa8468 : Update carrier label margin in setDisplayCutout
7edc95c32aeb : Ensure WMShell protolog viewer config is available on device when required
7479ec6aee5c : Added the DefaultAccount.getEligibleCloudAccounts API
c59ea36be726 : Revert "An optimization for Pointer Location for drawing the trace"
ef58d112dda8 : Import translations. DO NOT MERGE ANYWHERE
1d12c7231873 : Revert "Add landscape layouts for keyguard sim pin and puk"
2381676404a6 : Revert "Add landscape layout for keyguard pattern view"
d4f86834d5b4 : Import translations. DO NOT MERGE ANYWHERE
20c095b3faee : Import translations. DO NOT MERGE ANYWHERE
d5808587e600 : [RONs] Create ProgressStyle Expanded Template
50ac74570f43 : Revert "Make mUserAspectRatio final"
d9e92f1fb63a : Add NotificationProgressDrawable to support segments and points in progress bar.
a7d538f96063 : Fully qualify @attr reference to android.R field
78b3fd928bab : Fully qualify @attr reference to android.R field
2d8b1249f7e9 : Add flags for caching Profiles data
bf742c7f9ec8 : [ENR] Improve Group Notif dump
096190a93b16 : Add a Duration type and ARG_SECONDS to TtsSpan
632b25f909f6 : Initialise desktop_mode_visible_tasks system property.
94f559201889 : Make TaskStackTransitionObserver...
381813c41259 : Update Compat Framework so all system uid apps are treated as targeting current sdk
ee335b206abe : Enable and configure interruptions in Flexiglass
234c5e843ca4 : Always show all approved apps
8aa05f5e6aed : Enable interruptions in compose Lockscreen
0120f3ed6912 : Don't discriminate getPhoneType against data-only devices
9fe1ba455ad3 : Allow access to the SelectorWithWidgetPreference's widget, for testing
712c18db97cc : [Contextual Edu] Customize education dialog
ab52df950941 : Only provide DesktopTaskChangeListener if the flag is enabled.
8d91ced2fcf2 : Check for NLS service intent filter when rebinding services
b771efba3fdb : Add protected broadcasts for model loaded/unloaded events.
27b818eaf87e : De-flake notification avalanche tests
63146cdec7c1 : Add missing Javadoc dependency to `framework-nfc`
d12411a1c3d4 : Revert "Dump ExpandableView clip bounds"
d53f4c79bc3c : Enable interruptions in Communal
88dcc78c31b8 : Changing GestureState to sealed classes instead of enums
8f1ca8f6653b : [Contextual Edu] Increase initial delay of education
a28a3bba57ec : Handle lockdown in VDM
448690d82db2 : Clean up unnecessary modules
d86d0cede067 : Make VDM APIs and permission annotations consistent
08af64c11d86 : Remove legacy back gesture velocity tracking
ec7d1c26d60d : Update topology when display removed
3d21f63dbb4b : [flexiglass] Verify flexi only NSSL methods are not called in legacy
34bb4166d47c : Update SettingsLib resource and fix lint warning
2883d9c0fc4f : Updating the APIs for the Callback Mode
8dd13bf37d98 : [PM] Add more logs to debug the flaky test
eb79400202fa : [flexiglass] Remove ScrollViewFields#headsupHeightConsumer
835b76cb1fa5 : Respect orientation request from any eligible activity
17a295b05cae : Handle TRANSIT_TO_BACK in FreeformTaskTransitionObserver
d25f347307eb : Apply top scheduling group for non-occluded freeform app
e38140fcc967 : Do not force-show system bars in desktop's full immersive mode
b42a35104d86 : Reduce unnecessary invocation of fillTaskInfo
e12c2f0e1d93 : Avoid detached surface from returning to hierarchy
319474c18fe0 : Set reset_on_fork in setRenderThread as well
56c33e359008 : Support reassembling fragmented media events
9e7e4ab89c39 : Add OWNERS for ApexManagerTest.java
f949111353c1 : Move IME MenuItem list creation in Menu Controller
e6aa8aae9040 : Drop VDM permissions from Shell
b417208cecdf : Only set SCHED_RESET_ON_FORK flag if the thread is using the default scheduling policy.
a4c17763bf6a : [2/N] implementation of verifier controller and status tracker
c17b99d13d5c : Renaming Frequency Profile APIs
3e5678541144 : Add `TaskChangeListener` interface with `DesktopTaskChangeListener` implementation.
bc9b371f5fb3 : Revert^2 "Per-display group PowerManager APIs by default"
795e61c4c671 : Update the face auth locked out state only if face auth is enrolled,
fc279a45c67e : Exit desktop-immersive when app unrequests immersive
4405d6334469 : Set owners for PIC files
e4d7b622a25b : Remove hidden API usages backed by VcnTransportInfo
dcf939f76f2a : Remove defunct sysprop in preparation for multi-display bootanim.
115905881811 : [sb] remove StatusBarLocationPublisher (rip)
875547e9266e : [PIP2] Hide PiP menu when PiP is moved or resized.
61433dd057ca : Add DesktopFullImmersiveTransitionHandler
ff180674e7b9 : Robolectric Mass Migration: statusbar tests
f524115011cb : [Ravenwood] Make sure Mockito actually works
95daecabe8c3 : Revert "Per-display group PowerManager APIs by default"
8e5ff988fc6b : Make libprotoutil available to uprobestats apex
2c7e02064722 : Refactor nativeSetEarcEnabled
331bbabf05c5 : Fixes the issue of UDFPS icon background being white when the device is in DOZE_PULSING state
439eee0b5095 : Add sendTransitionStepsOnStartTransition.
a814c2dfbe7b : audio: Addtl logging in PlaybackActivity
d1ec2efc0b89 : Parse authority to separate userId and non-user parts of it
d0043be65d47 : Revert "HDMI-CEC: Restore full volume device condition to send cec volume keys [1/1]"
6775f07552f1 : Check cross user permissions for a given UID
3586871bd060 : Fix transparent content view when opening the shade during bubble fly out
afb5f5d7cc89 : Add KILL_UID permission to shell for CTS test.
ef3ecd5096d8 : Add metrics for TextureView dataspace changes
40aa6d94b61f : Return to OCCLUDED from canceled swipe if needed.
834614901b5e : Fix race on setCanceledAfterLifetimeExtension
00c05f8c0538 : Fix first swipe after exiting AOD
a55c0b071978 : Fix VolumeHelperTest on automotive
31ce2795a8fa : Extend service permission list only accessible from SystemUid.
4923b58a1ade : Generate CreatorToken
849be89715f4 : Implement fixed rotation swipe-up PiP2
10971ff210b2 : Disable App Header drag-move and drag-resize in full immersive
55ca8e95f46e : Add API flag for AFL API
b8f42c5d7bc2 : [Lut API] Introduce LUT java interface.
6da7b5601ff7 : Rename DesktopModeTaskRepository to DesktopRepository.
4342a725ad93 : [flexiglass] Inflate QS when SceneContainer is composed
5c78f4a34a4e : Support CarrierMessagingService binding for HSUM
761f72e9368f : WindowContainerTransaction: add KeyguardState.
55d85080be78 : Disable RavenwoodBivalentTest_device test
da522df80568 : Allow uninstalling DMRH when not used for management
85b26324b3a6 : Always instantiate sound controller
fd40803f5f5c : Fixes issue that would switch dark mode off when in scheduled mode
ea042c22eb6b : [AppOpLogging] Log binder calls for checkOperation and noteOperation
b81782302015 : Allow widgets to be resized on glanceable hub
57642a9204ca : [RONs] Create isStandardLayout
de7b75b4e0e9 : Ensure screenshot dismissal velocity is never 0
2b1db2d744fb : Create Robot for Desktop Windowing testing.
e7e595de7f59 : Address some issues with ResizeableItemFrame
a5e8b705e335 : API for setting the power state of a virtual device.
8d5052d40510 : Support bluetooth microphone
692574371e1a : Ignore failing AppHandleEducationControllerTest for the time being
56297dfbe284 : Introduce SingleVibrationSession
4f0aee421c78 : [PIP2] Hook up expand icon to PipScheduler#scheduleExitPipViaExpand.
de6a2867d9c2 : Remove observing offset in composition
4c3a7c431c9e : Add resize functionality to CommunalEditModeviewModel
50dc71796c4a : Make insets and bounds immutable.
33e66e3c0070 : Per-display group PowerManager APIs by default
e0ae362a53af : Don't run the `Pip Apps` alongside the pip tests
7670edf97e76 : Update app compat team alias in flicker tests
173a38237569 : Make mUserAspectRatio final
8ce11c64c038 : Make screenshots use the focused display for some screenshot types
2f120f9039fb : Dump ExpandableView clip bounds
14877a97c99a : Cleanup register_zen_mode_content_observer_background flag
df1a136120b2 : Exclude IME insets in InsetsPolicy when the task is in split screen
dabaa2a90404 : Explicitly disable flag for some clipboard tests
d3bae4991ebe : Bind volume dialog settings button
54dcbabd1bad : Make userHandle non-null
04a358e08368 : Reapply "Don't install a pointer input when there are no user actions"
8da9292ec9e9 : Update OWNERS
38186761f903 : Remove redundant caller check.
9a34355b6f63 : Replace current implementation of cache with annotations. Example of annotation usage.
13d2169decf8 : Move ProtoLog tests to separate test directory dedicated to tracing tests
7d94df70f345 : Set tooltip color scheme to match app scheme
515ac3e460d8 : Move EmptyShadeViewModel flows to the bg
b0c46f1c7a7c : Add phone switching status for account match success
00e6e684eca8 : Fix WM dump perfetto slice
7042bef0987d : Rename camera compat utility methods in AppCompatActivityRobot.
29491aac3712 : Add utkarshnigam to the owners file
e2e8ed7eb333 : Remove postsubmit given they run in presubmit already
941ab298dfed : AudioService: fix BT SCO audio starting from system server
584582dd7b25 : Add tests for the dialog showing logic
b89a27f4c18f : Add showing the new Volume Dialog based on the VolumeDialogController callback
d24ae5991d0d : Add calling package onExecuteAppFunction
78d1ad2c157f : Do not delete task snapshot files when user stopped.
1f22de8f0825 : Declare flag accumulate_network_stats_since_boot
60d2dfc6ef6f : Update the impl of wghtOverride and italOverride
1ed596546e6d : Introduce Bounceable and Modifier.bounceable()
ebaebbd466ef : Add new PRIORITY_SYSTEM_NAVIGATION_OBSERVER to OnBackInvokedDispatcher
3257839d747c : [Ravenwood] Add runtime Mockito version check
75816497097b : [Ravenwood] Add "--strip-mockito" to Ravenizer
c70515b61a99 : Revert "Add display ID parameter to window policy wakeUp()"
5557dd75eeba : Handle apexd failures
e6b37ca612ec : Add landscape layout for keyguard pattern view
9c62a2cfe666 : Add landscape layouts for keyguard sim pin and puk
efc6bd8e33c4 : Add temp value for BiometricEnrolled.template_id
9fd025cd7b43 : [PIP2] Attach PiP menu to leash.
d0ea4c323144 : Backup & restore for ringtone vibrations
a0760e36b455 : Generate Property Invalidated Cache from annotation.
16a74e092f12 : Connect stored variation settings to Minikin
eb7142dd7cb6 : Add new onboarded native namespace to SettingsToPropertiesMapper for native flag access
a8c4d1ad6965 : Enable SQLite JNI for host Mac and Windows
456c0935c4ff : Fix recording config mic indicator suppression
b355d2c9ea64 : Limit display batterystats to internal/external displays
0cb723949adb : Introduce DreamScene
dcd7201b9ebb : Refine Background Activity Launch (BAL) Modes
39b09273eadd : Update android/hardware/OWNERS for new file.
a839a08aaa78 : Revert prototype RON changes:
141ba6b1e3ab : Update API description for setDeviceAlignedWithSatellite()
105424292829 : Fix build warnings in FocusedDisplayRepository
ca2beefc51d2 : [flexiglass] Fix HUN animation on Shade scene
60a54ee232f3 : Add UsbManagerInternal interface
8973771c4125 : Add support for internal USB data signal disable requests.
098687a313d3 : Update owners for ApplicationSharedMemoryTest
bd5ccf06921b : Determine indicator type for input outside of display.
cd195faa9d24 : Use jni_libs to install shared library dependency
3ba002b60ff7 : Switch Bubble Bar side on navigation mode change to 3 buttons.
1b2bca67c507 : Optimize the removal of garbage collected references to avoid memory leaks
72b27bcbd26d : Add API to move contacts to Cloud DCA
8e5548740972 : Added getDefaultAccountForNewContacts and setDefaultAccountForNewContacts APIs
fc8049aa364d : Camera: Catch SecurityException for UID state changes
7d26552f23aa : AppFunctionRuntimeMetadata enabled changes
5d503176cd5e : Fix typo in SparseArray access in resetDisplayInteractivities
c07e3ef34c1d : Support floating point values for layer crop
ff188280c713 : Separate finishTransaction for leftover transition
964a749d5d55 : [flexiglass] Moves NSSL an additional px down in QS scene
347740939d05 : Add CtsAppFunctionTestCases to presubmit.
28e1052fa9fe : Revert "Settings override for use_app_info_not_launched flag"
0b157b589786 : Renaming compose slider haptics flag.
aa381d3b4800 : Correct an ApplicationSharedMemory unit test
a27716a4c1db : Implement resizeable frame for widgets
f811af152b40 : [Ravenwood] Move bivalent test into the "test" directory
7a8d64bc3241 : Revert^2 "Partially revert ag/27263995"
a79c76bf5749 : Extract AppFunctionServiceCallback from AppFunctionManagerServiceImpl
efcdd121a41f : MediaCas : Add missing check to mEventHandler
70b54e38d928 : Make RecordIssueTile's ServiceConnections not injected, but created.
c0a488ea49fc : Explicitly state which user to start Record Issue interactions inside.
b024e5e870e9 : Refactor the TraceurMessenger code into a ServiceConnection class.
1af6e1acf4c2 : Include sticky broadcast cache in the processes dump.
97c4208d280a : Fix label mismatch in checkin event names array
2d35155976f4 : [RON] Rename Step to Point
e2e48150f55e : [RONs] Bind ProgressStyle contracted and heads-up views
8840009964d3 : Make silence emergency notification broadcast protected
664d64ffec5f : [Nfc] Add securityLog tag for nfc enable/disable.
451e99a74ab0 : AMS: set reset_on_fork flag for app threads in OomAdjuster.
bf59facaa342 : Schedule regrouping if classification adjustment is after notification polite_notifications_attn_update
ab1a29e992a9 : Remove user switcher callback when exiting user switcher mode
1164f6b245d4 : Adding icons to buttons in touchpad tutorial
686efe65c7f7 : [flexiglass] Transitions for communal
a43128a1e8ec : [flexiglass] Removes GestureFilter.
bec7e5539722 : Change "Set up" text in mode tiles into "Not set"
cd4e066bf511 : Address APP_STREAMING dialog feedback.
9a302637d175 : Add ability to make transitions not replaceable
61745ed092b1 : Add missing TEST_MAPPING tests to relevant tracing modules
a550cf308eda : Add logging methods for task resize events in Desktop mode.
955da7239e6e : Make shutdown text visible when wallpaper is white
78a0be4e3c1a : [Contextual Edu] Increase Toast duration
e313f13b95eb : Remove EventLogTags dependency
a1b5763f2228 : MediaProjection lockscreen recording Roles
18f2caa2d22d : Clean up manager record logs for binder exceptions
ba99fe6aa25b : Updating strings in touchpad/keyboard tutorial
0c766cd9400f : Also delete Sleeping when an existing rule is updated to TYPE_BEDTIME
99695eb10e4e : [bc25] Fix notifications shade rendering over lockscreen in Dual Shade.
26910d9005bb : Update writeFlagOverrideRequest call in SettingsState
b9133e237917 : Add support for disabling autobrightness when stylus under use
ae78431560d3 : Make onExecuteAppFunction non-abstract and remove timeout results.
a135731e7019 : Remove unused PriorityNestedScrollConnection constructor
f1e23eac8bcc : Fix typo in AAPT2 error message
47936ee8819a : Revert^2 "Update content description of active mode icon in Status Bar"
8dcea41b7558 : Flag to show dynamic unlock title for private space
7bab68f23fe8 : Revert "Don't install a pointer input when there are no user actions"
b355c2b8f054 : Fix typo in Window.java doc comments
46880bdb31bc : Make onBeforeUserSwitching calls synchronous.
6a06ad74deee : Fix inconsistencies in CDM / VDM dialogs.
8fae556d1927 : Only support mirror display creation for device streaming
9d4ba69a9bc4 : new defaultSwipeSpec
666f92f981e7 : Fix NPE in LogicalDisplayMapper
0b54549195e2 : Add flag for DW a11y metrics
47acb5821420 : Add window configuration for the new Volume Dialog
4924ef76bba4 : Revert "Ensure packageRemoved is called when UID is removed"
93ac96e71517 : Move StorageStatsManagerTest to PRESUBMIT
a3a9f3e7f940 : Add flag for better support of non-match-parent activities
c666cc8700c5 : Disable LeftoverMinimizedTasksRemover...
cec97c3c0cd0 : Remove vanished tasks from repo
03e159a5f209 : Set task bounds for recents composition.
2a09810f908c : Use config size for DisplayMetrics when override doesn't exist
bcec23dc0993 : Add flag for new VDM role without trusted displays or input
a8b2c78aa3ea : Enable universal resizeable for new sdk on large screen
9ef1c177e6aa : Add a binder cache to UiModeManager#getCurrentModeType
92a023f710b3 : Use background thread instead of main thread for FocusedDisplayRepository
2481d72899a6 : Add clarification comment to TaskInfo#isFocused
7d0da763a044 : Revert "Check whether the correct callingUid has the necessary permissions"
1e650f5d648c : Set reset_on_fork flag for threads in power hint sessions
69f781014ea2 : Add aconfig flag for enlarging pointer icon with magnification
a46fb7c5c16c : Add flag for "move to next display" shortcut in desktop
3f6f89f92199 : Ensure packageRemoved is called when UID is removed
8c629e91f579 : Use context AttributionSource as the identity source-of-truth for connection
03f010f490b6 : [flexiglass] Fixes crash when opening QS from Shade
010f5e7f0297 : AudioDeviceBroker: fix UID comparison for phone/bt process for HSUM
9bb30b258196 : Add support for dimensions in FRRO
0f4900e5b2aa : Remove hidden API usages backed by VcnTransportInfo
6ae23ed901cb : Support biometric prompt for visible background users
b8651d714d0f : [framework] Add saving routing option oem extension APIs.
2784bdda7f51 : Hide the IME when dragging the expanded view for bubble bar
fb138f4865c7 : Convert migrateLegacyKeystoreToWifiBlobstore to an asynchronous method.
d33bc9e1cefa : [kairos] Rename kt-frp to Kairos
d74d9ff7087e : Add comments clarifying ANGLE lib loading behavior
920b9b861577 : Do not enable JobParameters cleaner for SYSTEM_UID
d04cea5ea08f : Update checkKeyIntent
a23f16b6b659 : If bubble bar is collapsing, hide the IME
efa861ac8951 : Fix clock position being too high in lockscreen preview in tablet portrait mode
a15af6dcb147 : [RONs] Remove Stable from StableId for Step/Segment
ad25f42cd12d : Post onProvideAutofillStructure in Augment Autofill flow to UI thread
5a867c0480b2 : Add check for mandatory biometric flag
8702b2246c32 : Fix Java crash caused by inconsistent single-line view and view model
a660be4ef13c : Settings intent action for promoted notifs
1c0e239e5a7f : Revert "Update content description of active mode icon in Status Bar"
fd93ada4aeb7 : Log a mapping from parameter numbers to combinations.
e3fb8e644ea0 : Dream Clock Complication font update
3d5f5686b45e : Pass bubble flyout from wm shell to launcher
a6aacbdb97df : Add DeviceEffectsDiff and PolicyDiff to RuleDiff
8fc44b636934 : Move RobolectricTests from tests to multivalentTests
5f4842dece6e : Defines a new flag for screenshot policy handling
8049e8394ead : Rename `buttonSize` to `buttonPreferenceSize`.
c0a1b64badb8 : Add display ID parameter to window policy wakeUp()
30392b8d3477 : [framework] Add more nfc oem callbacks.
9451efd2cdf8 : Fix virtual display power state management.
051a362355da : JavaAdapter: add combineFlows with six inputs
e7fab52f9d81 : Flexiglass: use Interactors for remote input callOnClick logic
3ab73e9916ca : StatusBarRemoteInputCallback: simplify onStateChanged
453f286ba27e : Flexiglass: use Interactor for device public logic in LS user manager
13d1bfe3bc71 : Flexiglass: don't log misleading status bar state error
d2621f7049c4 : Add protolog group id collision or duplicate check
e38d5e033538 : Reduce locking in ContextHubTransactionManager
c7d1402707ea : Block clipboard UI when device is locked
43333cf12cb5 : Add flag for the Multiuser Widget
35eeea579960 : Dismiss education tooltip on rotation
f2c5c5008144 : Report legacyStart for all displays legacyPending reported
e396e46edf96 : Nit fix for multi-user test
1f358dd71406 : Make screenshots from overview honor the passed-in display
be0d6093c4ea : Move dexopt out of the package freezer.
811f6c8ef0cf : Move logcat_longer_timeout to the "stability" namespace
b0a6c7f3aa85 : Enable modes flags in EmptyShadeViewModelTest
1546260687cc : Delete super-obsolete zen onboarding
82a6c7a96a58 : Launch the app if it's not running onMovedToFront
9fdd961ece5a : Use approachLayout for squishiness
4a80858bdbb5 : Move FlexClockView to SysUI
b084a6aa4aef : Properly animate second page collapse
8dcca1806726 : Add startinfo parcel test
4d4494fde63f : Set startinfo package name from persisted data
689512057f30 : Create flag to enable alt-tab app launch transitions in Desktop Mode.
63595e7b8bda : Revert^2 "[Autofill] : Fix save regression with relayout"
93849f0b3027 : Ignore geometry parent when !useTasksDimOnly
3fbbb0964b75 : Add interactors to wrap VolumeDialogController callbacks into flows and make them easier to integrate into the new architecture.
32ee230c4bc7 : Expanding owners scope for syeonlee@ and justinweir@
05c7ed41c426 : Add new Volume Dialog dagger and coroutines infra
f2160aa30e8e : Fix NPE
3b58741b6354 : Don't install a pointer input when there are no user actions
2ea0587736e8 : Revert "[flexiglass] Adds gestureFilter lambda to STL"
9293f4930bf9 : Revert "[Autofill] : Fix save regression with relayout"
c026d1b30085 : Clean up group volume adjustment disablement flag
05179b9b1a06 : [Contextual Edu] Show dialog after back gesture completed
12bc1f109e32 : Set content of ComposeView after setting back owner
e5f0f5203571 : Update content description of active mode icon in Status Bar
6bea0d919a56 : Fix FullScreenMagnificationController#persistScale for multi-display
867a7fad5f0b : Remove FLAG_GROUP_SUMMARY when updating auto-grouped notifications
3b61b8f85df2 : Fix MagnificationConnectionManagerTest#scaleSetterGetter_scaleIsOutOfRang_getNormalizeValue
3f679c2268f7 : Store font variation settings into native Paint
e685cf0a2a06 : Add the ability to adjust the timing animation end listener
673ec3e343fa : Update hearing device dialog UI
27e6bf571c94 : Avoid sending another copy of splash screen request to shell.
036fce184952 : Do not add the noDisplay activity to source task
63ee2e655486 : profcollect: introduce a cool down window between trace events
602f845d23f6 : Adding open by default settings dialog
ede7750e9b17 : [Ravenwood] Prevent dexmaker from going into Ravenwood tests
68722988e973 : Remove redundant logs
96fd11c77cf6 : Remove App Header as an inset source when in full immersive
b24e8baa723e : Add OWNERS for stats service AIDL and Java files.
b0a2811ed71e : Camera: Add diagnostic message for extractCameraIdListLocked exception
2f238aeb08bc : Add some logs and call onAnimationEnd when animation canceled
947eade98b33 : Properly convert A8 -> RGBA_8888 gainmap contents
be63c9dafe55 : Add VcnUtils to easily get WifiInfo and subId from NetworkCapabilities
23b1bd5e24bd : Add deprovisionSatellite api
d4bb429e74cb : Do not merge PiP removal into Recents
c51a772b7645 : Update native input display interactive on power group changes
26f96a62f3c2 : Check getShowSoftInputOnFocus() for stylus input
92eb4fa3573e : Prevent pip launch during drag to desktop.
d96c7f200686 : Fix ReturnValueIgnored errorprone issues
1c42eb06353c : Clear PowerStats before collection in case the collection fails
ef60a7c0cccd : feat(high contrast text): make high contrast text API public
0b3ce174e00a : CarrierAppUtils: Improve readability of update state logic
cde345a7ee06 : Update checkKeyIntent
6b3aa2507a3a : [Accessibility API] Add flag for expansion api.
646ffb65087c : Enable handling of keycode_menu key event on lockscreen when compose bouncer is enabled
42960ddd066e : [HSUM] Use SmartspaceManager in the current foreground user context.
3986174aad49 : Use pref's layoutRes to bindViewHolder in tests
fd4e20ecd921 : Use UserHandle.isSameApp to determine if the uids are the same app
1af91fb5406d : Ignore UinputRecordingIntegrationTests
d4e27d9afd1e : Report display change for display focus switch
fe92e8d1e670 : Set a default value for flagstatus
f8c8f7580610 : Convert a bunch of ScreenshotData's unmodified vars to vals.
1552186d4742 : Crop window frames by display frames when computing TPL thresholds
1ead06105a3c : Do not show openInBrowser button if user does not have browser available
78e84eccc160 : Add diff for ZenPolicy to ZenModeDiff
46c3608b3967 : Explicit UiThreadTest, Rule to block @UiThreadTest
c427812e2c2b : Clear invalid SIM slot data when INVALID SUBSCRIPTION received
42a5bd4cdebf : BiometricPrompt test cleanup
a2775421dc4a : Remove stale iconOverlay references for BP
c2f12bf92a40 : Remove unused ScreenshotData.contextUrl field.
007cc880374b : Adding spanY of a widget in the database
f2c815d6d376 : Make App Header visibility follow status bar in full immersive
3eda67c8db4f : Add top-padding to App Header when in full immersive
cb28f1a1f42d : Add immersive state to desktop repository
33b673c15a53 : Create ManagerRecord#notifySessionUpdated
945a81b5c0d2 : Create ManagerRecord#notifySessionReleased
6dd198b56ef5 : Remove unnecessary notifyRoutesUpdatedToManagers
7423573ccd49 : Create ManagerRecord#notifyRoutesUpdated
9bbc4a8e0f44 : [sb] set ongoing call chip to GONE
bd0053a245b4 : Don't consider FOREGROUND exemption for visibility checks
61bcfb4cd568 : Adding connectedKeyboards flow to KeyboardRepository
71e9754e12ec : [Flexiglass] avoid blocking media conversion from resume to active
f47daae49616 : Add separate SIM log buffer
c95e3bb1d6bd : [1/N] APIs of verification service, session and status
276222e0b85e : Fix a typo in DesktopTasksController.kt
1ea2d0a1de9e : Import translations. DO NOT MERGE ANYWHERE
62fc6a9ffbd4 : uinput: note that UI_SET_EVBIT configuration is always needed
0153f80c7799 : Import translations. DO NOT MERGE ANYWHERE
1862ad0b4d12 : Do not handle GH touch when scene container enabled
2d813ff27d11 : Import translations. DO NOT MERGE ANYWHERE
68b39435df3d : Avoid blocking media loads when converting from resume to active
6ac7259f493c : Support setting a specific content description for the widget in SelectorWithWidgetPreference
c715ed188ad4 : Exit desktop windowing transition
9268077e8b9d : Add OWNERS file for property_cache processor.
1d520de7e396 : [sat] Add more logs around registering callbacks
13e092573bce : Properly handle onNullBinding() in appwidget service.
e1e326ec0347 : [Contextual Edu] Show dialog with correct theme in light and dark modes
81a199599b66 : Don't write policy definition within policy state
f374043854a1 : Add ProtoLogGroup class
595a90eba9a4 : [FRP] Functional reactive programming library
8e423d11569e : SoundTriggerService.startRecognition - callingUser
18486b79b7a7 : Fixup ModifierShortcutManager#getApplicatioNShortcuts on HSUM
3951ad9780ab : Give WebViewZygote process shared app GID.
90aa01e5103a : Give AppZygote process shared app GID.
f9af7edc0b0b : Add missing resources that gradle is looking for.
0bd3685b81db : Add API for NAS to tell us it doesn't support some adjustments
4a644e6a4730 : Update App handle education datastore when specific events occur
c53221c9fbc3 : Reset screenshot view correctly at animation start
2112cf822b07 : Setup onclick listeners on the tooltip to match what we designed
a65450bf20a0 : Handle ScreenshotHelper references correctly
ff31d335f868 : Easter egg for touchpad gestures tutorial
c650d955f473 : Make PerfettoProtoLogImpl.Message protected for children classes to be able to access it
408325d2d265 : animateOffset can partially consume the fling velocity
234f8b684570 : Log desktop mode changes to system event log.
f0b04418efeb : Add support for fromSource in nested scroll
298c21345219 : Add cancellation signal death recipient.
461cfbb7f4e3 : Improve systemui unlockscreen jank issue
abd3b061d59e : Making RescueParty reboot and factory reset synchronous.
9fcffe41632a : Register UserChangeListener in DesktopTasksController
84e9ad00e3e8 : Rename ProtoLogGroup to WmProtoLogGroups
78079f813cc7 : Relax conversation notification rules for audio during avalanche
749725678e2b : Fix flashing when folding to AOD
f44dd5eadd6a : Fix WindowOnBackInvokedDispatcherTest flake
78388235f37a : animateOffset returns the consumed velocity after the animation ends
ac8b7da38ad1 : Fix typos in AppOps.md
2cf25e0d1d1b : Allow system windows for virtual devices only on trusted displays.
e726148a32e3 : [PM] Add PackageInstaller CUJ test case (38/N)
bc37ee160618 : Replaced verifyZeroInteractions with verifyNoMoreInteractions
40f23188ab4d : Update e2e test selectors for lock screen user switcher
036bfd04577a : Update to ToT RemoteCompose
173fcdf93050 : Add ZenDeviceEffects to ZenModeDiff
aeca005a2814 : Validate the surface of snapshot window before draw.
4961960f9bfb : Enable logcat for sleep token change
be2d5ae0d34e : Use MappedFile in android_database_SQLiteConnection
91c7a0dab4f7 : Add flagging libraries for trade-in mode.
a3d5df4aaac6 : Keep wallpaper in prepare back transition.
65f84eddf306 : Convert BatteryChargeCalculator to a PowerStatsProcessor
93b34eb27f1f : [Expressive design] Add StatusBannerPreference
c91e4c265253 : [VRR] Boost frame rate when calling ApplyLegacyAnimation
9f1e2f838f0b : Fix javadoc for the remotecallbacklist.
bca6146ba2c0 : Update OWNERS.
6b3c9de599fc : [Autofill] : Fix save regression with relayout
7f6c7263a638 : Update ADPF and GAME_MANAGER OWNERS files.
c5750b5e1bdd : Add PowerStats collector and processor for Wakelocks
aacc5b4b86d0 : Move ravenwood annotations to module-utils (1/2)
6581acd7c6c6 : Use procState change events to compute time-in-state duration
78c41067a1ca : Opening type changes need alpha=1 when PiPing
0e0edf3bef39 : Show maximize vs immersive icon in the Header based on immersive state
a620ff835785 : Add a timeout for CancellationSignal issued to AppFunctionService before which it is unbound.
087868c78dac : Integrate Education UI with AppHandleEducationController
06f58c061f0c : Polishing for migration to disable Weaver
2d38fe44a9b1 : [Expressive design] create SliderPreference
319ffa7b44c2 : Use select syntax on from-text vs from-source static lib selection
ff11eeb034dc : Replaced verifyZeroInteractions with verifyNoMoreInteractions
ca6e87878315 : fix: show overrides in device_config list
dc8660291497 : Force shade collapse instead of scroll on swipes from bottom
90b93d50414e : Fix mediaOffset for QQS flexiglass
907ac48adf4b : Add a flag to control Keyguard-started transitions.
774086255618 : [flexiglass] Disable NotifShadeWindowViewCtrlr AmbientState swipe update
45f85e804cbd : Disable ModesEmptyShadeFix in tests
798f9846e5dc : Fix IME show delay when having a RemoteInsetsControlTarget
3b4a151f7d19 : Fix typo in function naming
d59f5e46fcb3 : Default display topology initialization
e42f93c730a4 : Add null colorspace protection
08409a362bba : [RONs] Add ProgressStyle to promotion logic
d0816f0bda3d : Letterbox scrolling: out-of-bounds gesture handling
8552d43b9a6d : [flexiglass] Include overlays in SYSUI_STATE_NOTIFICATION_PANEL_VISIBLE.
9b2575c5a944 : Fix parameter generation
94c59ea471f3 : Prevent flicker on PRIMARY_BOUNCER->LOCKSCREEN when dragging
0e9955c3f7ae : Fix lockscreen.rot_override precedence
366ab12fd38c : [flexiglass] Remove an unflagged change from StackScrollAlgorithm
7cf9c892ee86 : Add input team as owners LetterboxScrollProcessor* files
a15238458fdc : Revert MANAGE_KEY_GESTURES flag type to feature flag
9118e3c40f9d : adb shell command to trigger phone.notifyCarrierRoamingNtnEligibleStateChanged API.
eb4218541626 : Stop icons from animating in edit mode
b7f563153aa7 : Force reset of AnimatedVectorPainter by wrapping in a key
38de1a3f76c1 : Fix reading of ZenModeConfig.manualRule in MODES_UI transition
02c312898ea1 : Minimize closing apps that are not closed by X.
ca1338642973 : Adding required annotations to mark SystemApi
f92c03ad0b58 : Refactor - Pull status bar "orchestration" logic out of CentralSurfaces
6dc652f6e499 : Freeform: Use camera compat aspect ratio when in camera compat mode.
2a17dcdc1d48 : Add is/setAppFunctionEnabled to sidecar
f57b725b8124 : [Expressive design] Update IntroPreference
3ca6e05adaa6 : [Expressive design] support hyperlink
54d324e782d2 : [Expressive design] support learn more
0607cf63fb63 : Disable bluetooth and wifi during flicker tests
7477c2251be8 : make device setting be able to use either Intent or PendingIntent
6431bd08c47b : Use profile at framework/base/boot/ instead of the combined one at framework/base/config
20208f96b3db : Migrate some preference page
37b361c284f6 : Provide a common test case for catalyst screens
0814725f6c77 : Migrate PreferenceMainPageProvider
4d2cf1b18eaa : Migrate PreferencePageProvider
fd13042d6df8 : Use VariationSettings instead of vector of FontVariation
d6ca3043a19d : Migrate gallery home page
414d0239e19a : [expressive design] Fix colors.
5bec03ed0899 : AndroidManifest: add missing bluetooth intents to protected-broadcast
da321ee3998c : Ignore null actions in BlobStoreManagerService receivers.
2546ec04f21b : Support string-array type value for bootstrap atom
2e3b8f555bf9 : Always update config from decor insets change during booting
ca150b6053ab : Clean up itemList
8dade95038ef : Clean up ArgumentPageModel
f8f6ade4577b : Fix DeviceSettingsConfig parcelable
21d6448c978c : DisplayManagerService: use per-pid freezer listener
c9c17bae7a21 : Promote BroadcastUnitTests to presubmit.
0325fefeef17 : Allow override the suppress Overlay condition
7d6e19ba095b : [gps] add debug log when package reset
dbcd0dd8f483 : [Audiosharing] Fix primary device summary in call
bb99f50224b5 : Improve the use of pointer
bb0dc8636249 : [expressive design] Update text font.
ce9677dfc2d6 : [flexiglass] The shade scene returns to Communal if it came from there
4bd85691bcd7 : Update envelope effects dependencies
b92192882518 : Revert "Update to ToT RemoteCompose"
46ad6ae33a97 : Add interface for satellite CTS
52f8abc2f399 : profcollect: gracefully handle ServiceSpecificException
178e8c74d92b : Remove STATE_PLAYBACK_SUPPRESSED
b503c936d526 : [Expressive design] Add IntroPreference
5ef9ea4951f3 : [Expressive design] update ButtonPreference
3d0a59b0cb2e : Remove ProgressBar from AppPreference
90dcc208ba55 : [Expressive design] Add expandable preference
e28dd9f756fd : Use MappedFile for mmap-related operations in CursorWindow
6e7adcf3c5ff : Adding MSDL haptic feedback when pulling down the shade.
336153ede698 : [Ravenwood] Several minor fixes
e933dbeaaaec : Add backgroundPermission attr to public-staging.xml
5c49ad480b2b : [flexiglass] Changes Lockscreen's Swipe.Left to Swipe.Start
6d286382444a : Fix back gesture showing up on hub on lock screen
2bd406ba9e66 : [flexiglass] Communal scene navigates to shade and bouncer.
6b3645c13875 : Update to ToT RemoteCompose
5f6a21cdb957 : audio: Update microphone string
ada6afabd535 : [Contextual Edu] Change Toast to Dialog
e8d6edc6e714 : Restrict use of framework-annotations
8da8dd1daa12 : Fix UnusedVariable errorprone issues
e8a0442b0842 : Add log message for nanoapp auth errors
e855f08496f5 : [AVF] Expose patch level check failure code
6a0518040006 : Fix missing grouping after shade closes
e09f4bdb1fab : Add Performance-Optimization Methods to UiccAccessRule
aa86f5cd54c2 : Add flag for only 2 app flexible split
30c5cf8a2241 : Add split folks to shared/split OWNERS
d7d2a2451d04 : Log AvalancheController outcome after running runnable
d8e1c95dffd7 : Only log auto remove request if passed into AvalancheController
23cf2b981f59 : Log BaseHeadsUpManager entry map
3caedfb954de : Support launching fitness app from stem primary button double press.
788ff44d097e : Flexiglass: refactor StatusBarState calculation
191ff72434f9 : Flexiglass: add SceneStack.toString
c5ec1dad8f88 : Support disabling Weaver on unsecured users
24c6cc0d6058 : Add crucial comments for wallpaper binding
8bff5c385a7a : [bc25] Take overlays into account when updating StatusBarState.
cc6be53eb11f : Enforce permission for identity check bit
3109ea2f905b : Implement expansion to QS in keyguard
b8999e741492 : Add enable_concurrent_wearable_connections feature flag
74dd4f02291b : Revert^2 "[Ranging] Register ranging service"
f6bcf6a0bbdb : Add squishiness to tiles.
160e38b4535e : Disable ASM_RESTRICTIONS flag
c7bd207395fc : Add an alternative to PathIterator.nNext for host
e565738fd7eb : Add defer_display_events_when_frozen flag
e5a39510a137 : Enforce cross-uid touch pass-though opt-in
216db747d117 : Skip playing notification sound or vibration when cooldown is muted
f53a027bfa98 : Add flag support to styles elements
efc5b0f54e8c : Change timing of wallpaper changed intent and add destination
af3ca45366bc : Send media controller calls to background thread
b178c342047b : fix(force invert): fix app not redrawing when Force Invert enabled
2d37d6e39f55 : Fixed TestWithLooperRule switch for UiThreadStatement and ExpectException
67fe0bbc3097 : ZeroJank: Make version-gate flag non-static
a50eb586a26d : Support OCCLUDED<->ALTERNATE_BOUNCER with animations
f2790323c149 : Updating the pattern bouncer haptic MSDL token.
8f2be71b4a3d : Revert^2 "[flexiglass] Fixes issue where user management settings didn't show"
20b553d64084 : Fix top and bottom swipes not working after touches on glanceable hub
010ee587b605 : Run ktfmt on CommunalHub.kt
d70ef66c1b95 : Send media view holder inflation to background
a4408495642e : Add a new datagram type for checking pending incoming SMS.
4f3f8042084d : Add isAppFunctionEnabled and setAppFunctionEnabled api to AFM
810779d8545f : Handles DeadObjectExceptions in WalletContextualLocationsService.
1e34a1dfac75 : Extract resolveSwipeSource() and resolveSwipe()
009169792fd7 : Create an atom to log the tutorial entry point
864d1d1daf85 : Adding MSDL feedback to NPVC
fec50966daeb : Reland "Make override immediately override aconfigd"
21b6c11c46c7 : move multi-user and enterprise annotations to the modules
2200584a4bef : Disable expanding pip when pip is not visible
3aef8236a654 : Fix saving Policy with STATE_PRIORITY_CHANNELS_BLOCKED
a006159ae0bd : Check fallback strategy when using default doze brightness
0d840015d48f : Revert "Make override immediately override aconfigd"
b543412b3a87 : Add OFF->LOCKSCREEN transition
ba579414380c : Change log level so output will persist across reboots
b908ca8ac232 : RemoteInputCoordinator to update notif on reply
6cbd99e3af75 : [RONs] Clean EnRouteStyle
48d0c57f674d : Check AppOp using noteOp instead of isOpActive
e46d53bb215e : [bc25] Collapse the shade when switching to bouncer.
58fa809b3e15 : Revert "[flexiglass] Fixes issue where user management settings didn't show"
a164b90f5f5d : Add a flag for the new Volume visuals
683c3fb6f072 : Fix a11y announcement of the Modes Dialog
bd50b233de82 : Do not preserve window when window not added
4ff87977345c : Update Non-Resizable Snap Resize toast string
4bb934990481 : Add minimizedTasks to the repo dump.
d549458c54d8 : Implement repository for display focus in SystemUI
244bdc6c6275 : Re-use existing methods to toggle text logging on and off
6985520857ac : Log error when unknown protolog viewer config field is parsed
d6c951b40c25 : System server should pass through the GD as is
2ec06cc59d8b : [Audiosharing] Check broadcast id instead of BIS for sink's receive state
3de731a1858c : DPMS: Enforce max byte length not character length
c59f153e73ab : Adding source record info when starting an Activity
620d0eaa35df : [Expressive design] update TopIntroPreference
76c9c5b957cc : Ignore continue transition if not collecting
a7a903d128cc : Add shell command to control display windowing mode
05487a29181a : Ignore continue transition if not collecting
4926889715c9 : Update AE flicker test bug component
7f741fddc3b5 : Make override immediately override aconfigd
3cfe087edf42 : Revert^2 "Use a color container surface to animate rotation with wallpaper"
8c11797f0a74 : Unblock device state callback in Window Extension.
5783a9926d12 : [flexiglass] Fixes issue where user management settings didn't show
47346d945f3c : Use async callback in config service
e925023873ce : Reintroduce grace period for ASM
19cf6c63c6de : Ignore fixed aspect ratio and resizability for universal resizable
584fe31c607c : Revert "[Ranging] Register ranging service"
efb3a1fc184f : Add some improvements in abs vol sync with audio server
f64701b07c5c : [Ranging] Register ranging service
89a25fea057e : Update test for Truth8 deprecation.
d83c9264a6d0 : MediaQuality: Declare flag for media quality apis
5e90eda26d96 : Add TransitionPlayer and UIComponent interface to the origin transition foundation lib.
d894bed24eeb : Add null check for AppOpsService
98b05b0fe7d5 : Import translations. DO NOT MERGE ANYWHERE
41e066a4d2e7 : Add null check for AppOpsService
e3b0a760dc84 : Import translations. DO NOT MERGE ANYWHERE
f47d2b7b4ef8 : Add requestedVisibleTypes to TaskInfo
10f5bac38723 : Import translations. DO NOT MERGE ANYWHERE
3902af11e84d : Import translations. DO NOT MERGE ANYWHERE
d52f4f7f65d0 : Import translations. DO NOT MERGE ANYWHERE
f47a3871e96c : Import translations. DO NOT MERGE ANYWHERE
831010e556be : Do not restart touch listening if TouchMonitor is destroyed.
2f5483472cfc : Use JVM-compatible libnativehelper for host builds
5e13e25d5168 : [Ravenwood] Use native system property implementation
983461633b96 : Cherry-pick Ravenwood "core" code
93247bb899e3 : Ignore null action in AlarmManagerService$UninstallReceiver.
c404cd34370e : [flexiglass] Fix HUN shared element transitions
baf1bf77a81d : Expose getPackagesWithCarrierPrivileges as @SystemApi
99284f128e07 : Import translations. DO NOT MERGE ANYWHERE
6561155097e7 : Import translations. DO NOT MERGE ANYWHERE
e314e3103fd1 : Audio: Update the strings for USB input/output devices.
8d8073753e0e : Fix tests for wait for internet flags.
aeff37f3ba15 : Split some pinner deps to their own files
2dc861d4ae98 : Revert "Sanitize Bundle from AbstractAccountAuthenticator."
5b9b19364eed : [flexiglass] Disable NSSL height updates for some Lockscreen transitions
59e333043d18 : Allow app protection role to uninstall packages
49e2192fddbd : Adding a CoreStartable object to dump the history of MSDL playback.
7ce79d8a00c3 : Rename two "service" fields in AbstractA11yServiceConnection
6a1648d1a911 : [Ranging] Register ranging service
3af495a12662 : Deduplicate internet tile cellular icon loading
d398f0882fc1 : Add comments to AppWidgetServiceImpl [Part 3]
4235ddb0ef0a : Add binder unhandled exception notification
8691bd4d422d : Add flagging libraries for trade-in mode.
73c182037e11 : Refactor swipe-pip-to-home transition
13d385085525 : Move config-at-end seamless-flip to activity-level
ede829bd37c9 : Rename IssueRecordingServiceCommandHandler -> IssueRecordingServiceSession
4be420683245 : Run processState method in background thread
ba8701ff7fcb : Revert^2 "Add CreatorToken and CreatorTokenInfo"
1bd4403fd895 : Update exemptAidlInterfaces
26e845bd915f : Add missing lock to areAllUsersAffiliatedWithDeviceLocked
3c0b145f400d : Change native input manager interactiveness to per-display
651a62206087 : Enable Show Only Unseen Notifs when Minimalism is enabled
2d1f3233e841 : Add New SecureSettings Constant for Notification Minimalism
e0a11a0cd422 : Rename flag helper class name for notification_minimalism
fbbfe48d50ac : Move aconfig flag NOTIFICATION_MINIMALISM_PROTOTYPE
c700f1ccf564 : App Handle education UI
5315da16d91b : Add logs for syncing pip state
7f974147592c : ImageWriter: Check Surface is valid before use
bdb5053443f7 : Cleanup backup_service_security_log_event_enabled
6f553a15462e : [Expressive design] Add CardPreference
796d3ffc1825 : Run ktfmt on RemoteInputCoordinator
c270c6d094ee : Transition to DREAMING at the beginning of testTransitionToGlanceableHubOnWake.
94fc92cf20b8 : Check sound Uri permission when creating a notification channel
b99a0255d171 : Update IpcDataCache documentation
5320c7c8dc97 : Use DesktopWindowing flag for Camera Compat for Freeform.
55ad43650f34 : Moved tests which failed under Robolectric
ed3ce08e7b4b : Create status_bar_connected_displays flag helper and declare dependency
6beed404d1d7 : [flexiglass] Unify methods to collapse the shade under ShadeInteractor.
3cbbe363b9af : Move TYPE_WINDOW_CONTROL toString flag check
8c5979232ddd : Fix PackageStateTest fails on Intent.getExtraIntentKeys
9bcc666cdcd1 : Launch keyboard settings page activity from Shortcut Helper as new task.
f342591792c0 : StorageManagerService: increase watchdog timeout.
a7927cc559c1 : Fix crash from exceeding the number of permissible registered listeners
c9ad8145c15b : Fix buttons not aligning to the right
b9625b33c2c4 : Don't associate starting data with task when the fixed rotation started for AE
33a10da3955a : Finish MediaProj..Perm..Activity if Keyguard is dismissed
93ce895530cd : Update documentation for non-visible downloads.
e303f8e38e06 : Report audio-coupled haptics in Vibrator.isVibrating
a022e0de543a : Rename to setTopActivityOrganizedTask
741f7ace74eb : [flexiglass] Replace the assigned bugs on `emergencyCallButtonModel`.
091882d75e20 : Verify that params passed to NotifActivityStarter are not null
524b29e5935f : [flexiglass] Replace the assigned bug on the emergency button test.
5d107bcde726 : Fix allowlist token issues
fb1fc421bdfb : [expressive design] Update Category layout.
253e46c1bc4d : [bc25] Update KeyguardDismissActionInteractor to account for overlays.
b76fe9ff2ec8 : Fix back gesture routed to wrong display
d1eccd279a18 : Minimize tasks moving back with back nav.
1b393602926b : [expressive design] Update topAppBar layout.
2db5f9c2290a : Implement listener for display focus in WMShell
2c817caa5105 : Add RecognitionConfig.Builder.
ac7b83adc0e4 : [expressive design] Fix color of ActionButtons.
dc8b0cf6bf11 : Revert "Add CreatorToken and CreatorTokenInfo"
517947c85bf9 : Find pending AppearedActivity first
fba6e465ab6c : Revert "Revert "Adding ViewCaptureAwareWindowManager to Floating..."
72b969e7c120 : [PM] Support update-ownership for Archived app
a2f0ea026e40 : [Spa] Upgrade Google Material to 1.12.0
0ea110c1dff4 : Import SettingsLib/Service library
8da0bd0af6af : Use correct user context to get the EnhancedConfirmationManager
7ac8dc69d9b9 : [Expressive design] Add ZeroStatePreference
4efac66e05d1 : [Expressive design] Update CollapsingToolbar
18d585b85317 : Add notes comment for isBinderAlive API, to warn developers of its limitations.
acc651b746c8 : Dismiss exclusion region when task is hidden.
f25e56ae2ee5 : [Expressive design] Add BulletPreference
c0bca399b2f8 : appop: Finish all when last in chain fail
3f4da6bf735a : [TIAF] Fix ClassCastException of content rating
38e04d4ee890 : Add binder unhandled exception notification
5aa5c34e67d3 : Add a feature flag to remove hack code for without cross user permission.
480d055b49ca : Fix tests when absolute index volume flag is enabled
1e5fdecf81ef : Make SoftInputShowHideHistory multi-user aware
00f24f04691b : Add OEM_PAID and OEM_PRIVATE Apn types
e08e0379cc74 : Add support for flagging xml and png files
166dafed7dd7 : Fix variable name in AvalancheController
988be5d91433 : Add support for flag in resource directory names
1b1abc4bc770 : Expose SoundTriggerManager apis as SystemApi
1648937d800c : Fix Launching Foreground Recording App with MediaProjection App Selector
3d8d4a18c2bd : Error on duplicate resource with same disabled flag
10618b970a2e : Revert "Fix missing grouping after shade closes"
02b70189db5a : Fixed CTS EuiccManagerTest on HSUM devices
660290806787 : Load even dimmer config mappings
6148e6cad839 : AudioFormat: Add ENCODING_AC4_L4
193643cbedd3 : Update Sidecar Library with CancellationSignal.
6fb904965e77 : Add CancellationSignal to AppFunctionManager API.
427599a1fc3e : Move app compat TaskTests to AppCompatUtilsTest
6e5c941eb0cd : Catch exception on KeyguardSliceProvider
74a7b94f78fd : Specify the display ID to mirror for media projection
b36305a7730e : Add tests for Desktop Multi Instance features.
da44fb35cb33 : Launch modes settings from empty shade view
0187fa312c34 : Convert NotificationActivityStarter to kotlin
e64dc5576348 : Make empty shade text react to Modes changes
2768fbe71d56 : Make EmptyShadeView a LaunchableView
82bc24120574 : Add legacy fallback for midi device identification
45f2a9658ae4 : Shift interceptUnhandledKey Shortcuts to IMS
e3861335c08c : Shift multi key combinations to IMS from PWM
e79e1f46b713 : Separate KeyGestures controlled in IMS and PWM to fix state issues
ba21f2518604 : Use explicit user id for sensor privacy manager
cff6f5a2caa3 : Cleanup dont_read_policy_definition flag
ec72a55ac20a : [Expressive design] update MainSwitchPreference style
a67001835962 : Add horizontal padding to empty shade text.
5083e5833c94 : AudioService: remove SA state sync between BT profiles
bab174bf0883 : Remove OEM_UNLOCK_PROP usage
a4c8fb2474db : [Expressive design] ActionButtonPreference: improve a11y talkback
1b729656c64f : [Expressive design] update ActionButtonPreference
c0c193d00be6 : Revert "Avoid side effects in MediaCoordinator notif filter."
70cc6341fee4 : Bind executeAppFunction with Context.BIND_FOREGROUND_SERVICE
c75c0875da89 : Only run clipboard shared transition for images
dc48f6ecbb8e : Add DisplayID to Screenshot Request.
17badbcb3d1e : Check whether the correct callingUid has the necessary permissions
67574e2f779f : Add haok@ as the owner of responsible_apis_flags.aconfig
67ad50ae8916 : Add null check to motion events in global actions dialog
7b18842fb7d0 : Avoid unnecessary inline string allocation when calling Trace.traceBegin
7466a2d7f2dd : Move TracingTests to presubmit
49d449092bd8 : Fix pinch gesture dragging TouchpadDebugView
2fbc21e44410 : [flexiglass] Respects gesture exclusion regions.
f9a01533ef8a : [flexiglass] Adds gestureFilter lambda to STL
8f83165c0144 : Treat freeform and fullscreen tasks as compatible.
09f2a480e0fb : Don't let allowlist override user choice
b365b307c5dd : Propagate individual aconfig sysprop on flag stage
2727b84fa2b0 : Remove RESET_PASSWORD_TOKEN policy when the generated escrow token is not valid.
edd2f24b710b : Pre-start remote ondeviceintelligence services during user unlock instead of boot phase.
492865ebcad4 : Fix allowlist token issues
61274fc61fb3 : Fix allowlist token issues
ed587d406d0f : Add an extra condition to check whether it is the default notification uri
46aafeb8a6c0 : Fix allowlist token issues
bb4aae5c4e26 : Moving DisplayPowerController.dump outside syncRoot lock
c50a1d7d5993 : Prevent modes dialog from updating on theme change
b650eb36e9cb : Migrate EmptyShadeView to recommended architecture
8e88c9275d53 : Ensure that window gets to a drawn state
b7cc39e10a6b : Always contain window frames in window bounds
f86b52b6f9e2 : Fix coroutine scope issue in DeviceSettingServiceConnection
dc6e0ff9d5ad : Abort AE saved state restoration if it is too late
ba0b55e35b1c : Revert^2 "Code refactoring: Move doRename out of preparePackageLI"
6c0efc28e0de : Move SidecarConverterTest into framework/base/libs/tests
32a9155ad1fe : Update layer of rotation leash when restoring pip to original task
10b677501e84 : [expressive design] Update TwoTargetPreference.
acf3280b98ff : [Ravenwood] Use native system property implementation
d5f9669eaa87 : Convert overlapping input consumer exception to warning
69948d41f78f : Fix UserRepository to use the correct callback method.
369b0578f373 : Create shared memory region for framework APIs
11857301dba1 : Add ResetPasswordWithToken migration code.
f6b491831509 : Introduce feature flags for all APIs that should be flag-protected.
8af3afd5d86c : Fix component string
aabb3dcaa70e : [RONs] Define Notification.ProgressStyle API
78360d34f869 : [flexiglass] Fix bouncer to lockscreen transition
ae7188560927 : Add the ability to enable and disable supervision
99ef4b79adeb : Update supervision OWNERS file.
c283e94ef0ff : Fixing recovery related tests
fc64e3c8b71f : Exclude back gesture on selected tiles to allow for resizing
79c8c820ad1e : Restore QSPanelControllerBase listening state
4d9517dddc3e : Let apps know if they are promotable
19c7889432f6 : Add Stub version of PluginProtector
673f9e5c8550 : Revert "Run callback registration on the worker provided by the client."
18d6de53e28c : Allow sqlite in isolated processes
595d04a34566 : Introduce a skeleton PlatformAnimationLib-core lib for origin transition.
770b1811ec72 : Migrate WristOrientationService
7cb99687fe1a : Fix UserRepository to use the correct callback method.
844c70871bd1 : Clean up skip_home_art_pins ablation study flag
8cf31570883e : Create safeguard pinner quota for system apps
3bf69d6d2a6a : Prevent calls to StatusBarManagerInternal from visible background users
f518e04a7d99 : Screenshot a single display per invocation
a32cb9f48ea3 : Add constant for action for NAS to gather feedback
ead624570fc5 : Make edit widgets activity show for all users
b5b78b719097 : Allow interact_across_users in app widget service
b09b7d2e7681 : Only launchTaskBehind dream if dream is on top
13f60bf53775 : Set bg executor to inflate widgets asynchronously
7ed657368c0d : Create TEST_OWNERS for SystemUI
15f9e56e1f89 : AudioService: synchronize audio mode and focus for Telecom
1c9612e68a4d : Use a one-off thread to load launch params
cf947aa94409 : Surround ComposeView with FrameLayout for ignoring touches
8e5da82a77e9 : Move Edit mode to be top level in QS
03a3dbdcaaf4 : Support selecting input devices
f17fccbe1070 : Add CreatorToken and CreatorTokenInfo
04d29986142f : Fix flaky VibrationSettingsTest
26b2a0ecbadb : [Flexiglass] Fix BouncerPredictiveBackTest timeout
e73140e696a5 : [Flexiglass] BouncerPredictiveBackTest ktfmt-cleanup
61e69bd2b425 : Add displayIdToDisplay convenience method.
db468ddf06a6 : PagerDots animation fix
2433e8a6eca3 : Init protolog for tests before any logging happens
2cbea9606c89 : Add start component type to start info
697c31d74c77 : Fix allowlist token issues
d98f375a305f : Add unknown location to media locations
2b28d8a212e2 : Ensure translationX gets reset
d2f98086ec3a : Clean up newAodTranstion() flag
e698208f5826 : Revert^2 "Extra common logic of default transition animation"
f0ecea75b7d0 : Notification API hardening: add statsd logs for forced auto-grouping events
36ed35e30511 : [Binder][XIAOMI][Bugfix] Skip appops header in native parcel. [1/2]
966c7f2d0462 : Fix allowlist token issues
aff86a2c1aee : audio: add try catch for BluetoothLeAudio.registerCallback in BtHelper.java
996c43194630 : AudioDeviceInventory: fix unsynced preferred device role when reapply
d47f4077bc39 : Only show the DND icon in Weather Clock if the DND mode is active
570d6df8aa3c : Fix flaky predictive back test
60995d69ad1d : Set connection state for headphones on boot. The bootanimation sound plays out after audio service starts up.
51d27b2d5f93 : Overload DeviceInfoUtils.getSecurityPatch method
92324644e3be : [expressive design] Update SliderPreference.
010a3b574534 : Change mouse keys scroll rate
ffef7e77b017 : Add the addBefore and addAfter APIs
ffe2b109d66b : Remove broken links in DevicePolicyManager
10a403190c6b : Fix the wrong light doze state check
81a909acea39 : Add KeyguardRootView below SharedNotificationContainer
cfd6545b6ca4 : Revert "[flexiglass] Fixes QS header never being set invisible when unlocked."
7842c143f37a : brightness: fix conditions for using doze brightness
63425b50bd01 : Cleanup insets_control_seq
83fdddbdb18f : [expressive design] Update ActionButton.
f040153daa81 : haptic: fix test for input customized feedback feature
cc1f44f3f5cc : Change default hysteresis timer to 180 seconds for satellite roaming
3740c2e7b372 : Stop app switches only when home is the focused app
bc50647920a9 : Import translations. DO NOT MERGE ANYWHERE
90c38a1184ad : [Flexiglass] add collapsed layout for UMO and handle transitions
241c1f1e989a : Import translations. DO NOT MERGE ANYWHERE
f43ff4000bd0 : Fix parcel read/write mismatch
3e134373e941 : Import translations. DO NOT MERGE ANYWHERE
580aabcf3dc9 : Add flag for fully immersive experience in desktop
faa8835249a8 : Import translations. DO NOT MERGE ANYWHERE
441cb7a0a7e3 : Additional logging to debug HSUM in hotseat.
4313c9bbadf3 : Add OWNERS files for forensic
771b570ddc9a : Also support setJniMethodFormat in hwui.
8dae0e0f764c : Fix breaking WM screenshot tests
1c5c698d1b13 : Update splitscreen SnapPosition constants
aebcc469d28c : [framework] Add more oem extension APIs
04726fbd5895 : Flag for caching UserProperties correctly
0a332b80a88e : Modify PhoneWindowManager to avoid interfering with current user's experience
d84ed8080356 : [sb] StatusBarInitializer to CoreStartable
7b8e566fd23f : Migrate BackActionInteractor setup from CentralSurfaces to Dagger
1f8d49f26ee2 : Allow apps specifying minWidth/Height to still enter split
b41574aa0a04 : Avoiding click dispatching after long-clicks on QS tiles.
49697fa9b591 : Revert "Add WindowDecorViewHost and WindowDecorViewHostSupplier"
cf3d3fc31be8 : SkAndroidCodec: Use getGainmapAndroidCodec
fca32a78cfc2 : Don't include invisible transientlaunches in transitions
2fbfac69d42a : Input method codehealth small enhancements
2614f026f949 : Reset screenshot timeout immediately
e61488f1057b : Load user icon on bg executor
9a796f3b65f8 : Prevent Conversations from using custom views.
56dc1dc7c341 : Update app handle to be identified by Switch Access & Talkback
bebd32ea77ec : Associate activity config before creating application
e19d916a9e18 : Baseline global lint errors
81606e679326 : VCN: Remove hidden APIs for data directory and Settings constants
3afe6c47fcab : Disable dream touch handling on scene container
937542a5838a : Add temporary bbq merge path to SCR logs
0fb50e830364 : [sb] re-claim StatusBarPhoneModule for phones only
0271c48d6dc7 : [flexiglass] Implement Shade -> Lockscreen scene transition
323b0dbcb919 : Increase the timeout for logcat logs collection for dropbox
936a4d9d9157 : Changes to VibrationThread in preparation for sessions
9a215b2eee2c : updated divider color to match UI specs
88cbe44bde56 : Check relayoutCalled in updateSourceFrame
ea99157cff95 : Added support for wake-up by displayId
41a43ee2fb79 : Separate mIsLeashInitialized from isLeashReadyForDispatching
1eccb9a63dd7 : Provide a BatteryStats interface noteCpuWakingBluetoothProxyPacket
ab580dd276b0 : Write RemoteViews actions to proto (3/3)
7d116d7a1317 : XmlBlock: if you want to check the result of new, you need nothrow new.
9a9e06c425ea : [Vulkan] Query global priority support for queue creation
0553b04fa67a : VCN: Remove hidden APIs for build type and vcn service string
cec5a5a0ec56 : 1/ Add a flag to track toptasktracker migration
39e83a96717b : Move tile icon resource loading off state change
73bfdd9ef1a6 : [flexiglass] Fix footer not appearing on the locked shade
c49cbea00da3 : Do not recompute display config if its config frame is not changed
db5faf3ed884 : Move multivalentTests that are currently failing deviceless
da83213910be : Update documentation for EXTRA_PROVISIONING_ALLOW_OFFLINE
3d5d962a6156 : Block uninstall if DMRH in a managed user
2ecace1d82ca : Move EmptyShadeView to its own directory
52b5d1e217f1 : Remove GameManagerService from Wear
6823f28db3c1 : Added --namespace parameter to dumpsys device_config
1573f5d7b44d : Turn off smart idle maint service for ZUFS
2dbfd4f891c5 : Add rahulbanerjee@ to bootanimation OWNERS
380ccd13878f : Add an aconfig flag for the new settings page for lock screen notifs
ad6599eac4eb : Drop FEATURE_TELEPHONY_GSM from interface requirements
219b209b6b08 : Add support for continuously accumulated BatteryUsageStats
65ceded52b52 : Fix javadoc for LocalSocket constructor
99aabf26efc7 : Add OWNERS for ApplicationSharedMemory
31db47319937 : Rework SDK_MINOR_INT -> SDK_INT_FULL
a9b535f8a9f9 : Fix inconsistent behavior of killPackageDependents.
01a842822adc : Block clipboard UI when device is locked
b156c582347a : Block clipboard UI when device is locked
a79e9e542a1d : Block clipboard UI when device is locked
d8c82ad3a25a : Fix to accept volume status with no ARC connection
5deda323c2c3 : Clean up flag show_call_id_and_call_waiting_in_additional_settings_menu
db7bdcc17503 : Update PowerHAL version
255ee52dd925 : HWUI: Make releaseQueueOwnership thread-safe
bf47618b933d : Add TaskSnapshot reference when used for Content suggestion.
972848bab11b : If the FINE_LOCATION permission is granted to voice search on wear, and it is not user set, revoke it in the DefaultPermissionGrantPolicy
92170abdea4c : Remove FLAG_ENABLE_CHOOSER_RESULT
b0ae9ba7c8b5 : Documentation update
a10648f48d6e : Use Object.equals when comparing `lastNonFullscreenBounds` in TaskInfo
079998d80ed6 : Pass package information to reboot bootreason
2958d1748574 : Make IMM#getCurrentInputMethod{Info,Subtype} consistent
27e4e6aed072 : enforce limits for VisualVoicemailSmsFilterSettings properties
b4a4d81441ca : Fix bounds clipping to be based on app bounds
57b2300f2828 : Avoid mentioning complex deprecated API in API docs
9ce557e3636e : correct up eventTime
dd07f0325c98 : MediaCas: Add API to update client priority and nice value.
274eec31d0d4 : Fix NPE in BootReceiver
73e2b35a63d8 : Check permissions of URI inside of Autofill Slices
18849128f20a : [BugFix][MEM] Fix process memory data during dump
edff5e3be84b : Update application info for activity record when application info changed
bb68271e4ece : fix(non linear font scaling): fix test timeout flakiness

+- Project: platform/frameworks/ex

8bccec0 : AdvancedExtensionSample: Enable DEPTH_JPEG capture
9a6fdce : Remove sdk_version from libframesequence
b8b1a97 : Delete EditStyledText

+- Project: platform/frameworks/hardware/interfaces

ef35351 : Revert^2 "update android.hardware.sensors version to v3"
4ed9b0c : Revert "update android.hardware.sensors version to v3"
e79cc05 : Revert^2 "Add device state service Hal"
c7ad178 : Revert "Add device state service Hal"
38f4dc3 : Add device state service Hal
ff4df1f : update android.hardware.sensors version to v3
f647898 : Updated OWNERS file
9c41b72 : Add multi-client support in camera2
31ca6d2 : Add proposed trendy teams for VTS modules
1f82846 : Remove dependencies on the 1-variant fallback
5cd9eb9 : Add proposed trendy teams for VTS modules
cb833ed : Add proposed trendy teams for VTS modules
a903e97 : Add proposed trendy teams for VTS modules
087001b : Add proposed trendy teams for VTS modules
42f4779 : Add dirgroup for trusty genrule

+- Project: platform/frameworks/layoutlib

3ab463ddcf : Fix Create run configuration for IntelliJ
7dc231c1f2 : Do not add unnecessary AndroidX classes to layoutlib jar
f15371773b : Limit the symbols exported by layoutlib_jni
7f24dfcc48 : Add automatic monochrome preview for adaptive icons
804b896cbc : Fix IntelliJ project for layoutlib
6625cad988 : Adapt the status bar foreground color to the theme
0602392919 : Update layoutlib version of DisplayManagerGlobal
c0630923b1 : Bridge PM interface getBrightnessConstraint change
085857f145 : Release BridgeXmlBlockParser on disposal
a1d6960a0e : Set a display frame for the root view upon creation
ae2e19ef9b : Clear AnimationHandler ThreadLocal at the end of the render action
c530c291df : Add support for hyphenation
e250250524 : Update layoutlib project to use JDK 21
0221cacbcc : Delete resources for mac tests
ef874add64 : Remove Trace native support from layoutlib
bed57831ac : Update definition of layoutlib_jni
2085a09415 : Update Layoutlib configuration
bfd75d64d6 : Add support for ExoPlayer previewing
3d0377e1b1 : Cleanup ENABLE_HIDE_IME_CAPTION_BAR flag
5a4e85b380 : Update InputMethodManager_Delegate
472eef2658 : BridgeThermalService: add thermal headroom callback API
1f6af68c4a : Remove dependencies on the 1-variant fallback
2964ca5dc1 : Defer static initialization of PositionedGlyphs
8e7c239388 : Use unique_ptr for KeyCharacterMap
4884054db9 : Delete unused system properties in Bridge
fdb53e2460 : Do not create HostRuntime for layoutlib
bb86770581 : Remove unused libraries from Windows loading list
419a706f3e : Fix memory leak in layoutlib buffer creation
ce74567635 : Revert^2 "Add new isWakeLockLevelSupportedWithDisplayId"
b53c84b1b5 : Revert "Add new isWakeLockLevelSupportedWithDisplayId"
2608bbf3ef : Add new isWakeLockLevelSupportedWithDisplayId
7037eea008 : Added support for wakeup by displayId

+- Project: platform/frameworks/libs/binary_translation

e579cb7d : Sort intrinsic lists alphabetically.
472a19ab : Style fix
1106f16e : Sort instructions alphabetically.
9868a68e : Sort instructions alphabetically.
c6aeef13 : Add YMMRegister and a few instructions that could use 256bit registers
9fc4a870 : Add YMMRegister and a few instructions that could use 256bit registers
201e5355 : Add Lock Xadd* instructions
0e777417 : Move code_gen_lib to AOSP
ca5b2018 : [intrinsics] Move macro_assembler-inl.h into all_to_x86_32_or_x86_64 subfolder
f2d530d2 : Allow unused arguments to text asm intrinsics
d17dbe6c : Allow unused arguments to text asm intrinsics
b792b306 : Create MulSubF64x2 arm64 instruction test
c9bc6015 : Check inputs to intrinsic >= inputs to instruction
14f23d61 : Create MulSubF64x2 arm64 instruction test
144bfc3d : Check inputs to intrinsic >= inputs to instruction
a4062c79 : Create MulAddF64 Arm64 test
606f5c0a : Create MulAddF64 Arm64 test
ef1f32b9 : Add Lock Xadd* instructions
d94189e6 : Stop redefining __NR_memfd_create.
92f4a10e : Move code_gen_lib to AOSP
d8193677 : Restructure code a bit
24d2086b : Restructure code a bit
ca71ccdf : [intrinsics] Move macro_assembler-inl.h into all_to_x86_32_or_x86_64 subfolder
525854fb : Create SubInt32x4 and SubInt32x2 Arm64 Insn tests
b7c159f0 : Add rounding mode template type to intrinsics
fa393898 : Add rounding mode template type to gen_asm.py
789a5ee7 : Create SubInt32x4 and SubInt32x2 Arm64 Insn tests
23864f29 : Add rounding mode template type to gen_asm.py
46b9f94c : Add rounding mode template type to intrinsics
5765aad4 : Remove old file.h and scoped_sigaction.h
1cb399be : Add omitted defines in TypeTraits
7b97db6d : Automatic inlining checks NoNans using static assert
e9641a93 : Add omitted defines in TypeTraits
cbf46c08 : Automatic inlining checks NoNans using static assert
143093b3 : [intrinsics] Support corner case in template intrinsics.
dfb8b192 : [intrinsics] Support corner case in template intrinsics.
4fc24393 : Add script to help with cherry-picking.
f8f2f070 : Remove invert-carry flag
8f3e0a97 : Create local experiment platform capability
7350ebe7 : Add script to help with cherry-picking.
82ee9a4c : Introduce invert carry config flag
167d2e6d : Create local experiment platform capability
61a10e72 : Remove invert-carry flag
4ad1b903 : Make JNI copy for guest
c0c6ec35 : Move generated code to 32bit address space.
ba204ddc : Move generated code to 32bit address space.
e5604745 : runtime: arm64: Fix inline assembly
4f152ef7 : Introduce invert carry config flag
3d4566e0 : Move exec_region factories to runtime_primitives
c3d39fb7 : Move exec_region factories to runtime_primitives
92d753c4 : tiny_loader: Implement CalculateLoadSize
8e7db6bb : Add ScopedFd and use it in tiny_loader
af0cffcb : tiny_loader: Add OpenFile method to ElfTinyLoader
d18244a6 : Create ArenaSet class
56bfc112 : Allow AmoAdd to be used with unsigned types
133345dc : Allow separate return type for AmoMin/Max
f438b792 : code_pool: align regions on cache line size
2ee8d00d : [gen_intrinsics.py] Add Float16 type
69743e37 : Minor cleanup
7648bf53 : Refactoring: Introduce HostCodeAddr type for translation cache
31373a53 : Refactoring: Introduce HostCodeAddr type for translation cache
213de89a : code_pool: align regions on cache line size
9c5333b4 : Add riscv64 backend stubs
039548cd : Add riscv64 in code gen lib headers
19cabdef : Rename all symbols from light to lite (translator)
e7001354 : Add riscv64 backend stubs
58de558e : Add riscv64 in code gen lib headers
7aa7a38f : Add berberis_entry_Interpret in riscv runtime library
2ffe2021 : Rename all symbols from light to lite (translator)
14799669 : Add berberis_entry_Interpret in riscv runtime library
afd118b4 : Add anthonyjon and richardfung to OWNERS
73c53486 : Create ArenaSet class
2c70733d : Allow separate return type for AmoMin/Max
2f8cb93e : runtime: riscv64: Implement translator for arm64 hosts
399d6300 : runtime: Implement arm64 host runtime library
8d846108 : Allow unsigned types for Atomic intrinsics
93664e5a : Include intrinsics/common/intrinsics.h in intrinsics_atomics_impl.h
10475eb9 : Add kNumGuestSimdRegs to guest_state_arch.h
3696a09c : Add sext.b/h riscv assembler instructions
2328358c : runtime: Add stub arm64 host runtime library
e91c01ab : guest_os_prim: Enable on arm64
890a7e3f : guest_abi: riscv64: Enable on arm64
e908ae27 : runtime: Enable headers on arm64
2e951063 : runtime_prim: Enable on arm64 hosts
c1d3cfe7 : guest_os_prim: riscv64: Make gen_syscall_numbers host-specific
56cd8f73 : runtime: riscv64: Separate x86_64-only code
fa969bb5 : runtime: riscv64: Move to subdirectory
1594e7d6 : Explicitly disable Jmp/Jcc to integrals greater than uintptr_t
bb92a4b9 : Add sext.b/h riscv assembler instructions
8c789f29 : Include intrinsics/common/intrinsics.h in intrinsics_atomics_impl.h
d5ef6db1 : Add riscv assembler negation pseudoinstructions
789f33eb : Rename relative Jcc to JccRel
c361c85c : Add absolute Jmp with uintptr_t argument
fb60696f : Allow unsigned types for Atomic intrinsics
992a3be8 : Rename relative jmp to JmpRel
a113d771 : Add rotate right riscv assembler instructions
800249b6 : Add bexti rv32/rv64 assembler instruction
1e1ea8bb : Add sh3add riscv assembler instruction
56f69c09 : Move add.uw/zext.w to rv64 assembler
23652cfc : Add sext.w rv64 assembler pseudoinstruction
2a90b0ad : config_globals: add a flag for local experiments
0e5611ca : Revert "Add more arithmetic conditions in riscv assembler"
22658b00 : [Berberis][Interpreter] Support more vectors with ARM64 backend
0a876759 : Add riscv assembler negation pseudoinstructions
96129308 : Add more arithmetic conditions in riscv assembler
6407a25f : Add rotate right riscv assembler instructions
701ef615 : Add bexti rv32/rv64 assembler instruction
334522a9 : Add sh3add riscv assembler instruction
d3b2a6a1 : Move add.uw/zext.w to rv64 assembler
e5cb9b8e : Add sext.w rv64 assembler pseudoinstruction
8bda7565 : [Berberis][Interpreter] Enable implementation of vector for ARM64
ef2494f3 : Allow AmoAdd to be used with unsigned types
b804953b : Add kNumGuestSimdRegs to guest_state_arch.h
962dfda5 : Implement cross-platform mapping to lower 2G address space
5e351251 : config_globals: add a flag for local experiments
ccd51ee4 : Add riscv add.uw assembler instruction
b4731e1c : Add riscv add.uw assembler instruction
db543b69 : Fix ambiguous reference to Li in rv64 assembler
6514481e : Add riscv assembler jump pseudoinstructions
8c6838e2 : Add riscv64 zero register in TextAssembler
b47b45be : Fix ambiguous reference to Li in rv64 assembler
921fe3ff : Add fp(frame pointer) register alias in rv64i assembler
4bac3b2d : NewForever: add a private version
a7e89ccd : Add LockCmpXchg* instructions to lir_instructions
5f686879 : x86: Add support for AL register
08ff954d : Add comparison to zero riscv assembler pseudoinstructions
36f86d65 : Rename constant kLightTranslated to kLiteTranslated
25b9e97c : Part 3: No static storage for GuestThreadMap
a9cb15f0 : Part 2: No static storage for translation primitives
f10c6355 : Add Xchg* instructions to LIR allowlist
5f530827 : Add vector register type in riscv assembler
ed977177 : Implement branch riscv pseudoinstructions
2dc63991 : Implement call and tail riscv pseudoinstructions
9dec88b5 : Implement ret riscv pseudoinstruction
4665fdaf : x86: Add Xchgb and Xchgw instructions
7e9acc15 : Fix corner cases in float rounding riscv64 intrinsics
40af1952 : Implement li riscv pseudoinstruction
d76cbfd8 : Fix typo in la riscv instruction
34418cad : [Berberis][Interpreter] Add implementation for OpVector Part one
5301e4e8 : Add pretty printing of instructions in assembler test
eb9a651f : Implement runtime library for riscv
599c39d7 : Add GeneralReg riscv64 intrinsic binding
aeb3d96a : Add riscv assembler jump pseudoinstructions
d762a29f : Add fp(frame pointer) register alias in rv64i assembler
3635be5a : NewForever: add a private version
23cd56c4 : Add LockCmpXchg* instructions to lir_instructions
b691d815 : x86: Add support for AL register
16bba8bc : Add comparison to zero riscv assembler pseudoinstructions
aba0b4ba : Rename constant kLightTranslated to kLiteTranslated
9b45de03 : Part 3: No static storage for GuestThreadMap
985531e3 : jni-wrapper: support arbitrary long signatures
1ab8bf4b : [Berberis][Intrinsics] Add tests for Divu and Rem(u)
524b465c : Part 2: No static storage for translation primitives
c7d79f25 : Add pretty printing of instructions in assembler test
d3d25fa8 : [Berberis][Intrinsics] Add intrinsics for interpreter part3
9bad806e : [Berberis][Intrinsics] Add intrinsics for interpreter part2
c9e6b4fa : [Berberis][Intrinsics] Add intrinsics for interpreter part 1
0219f45f : Implement runtime library for riscv
6f76c179 : Add Xchg* instructions to LIR allowlist
3d519006 : Add vector register type in riscv assembler
83e423a8 : Implement branch riscv pseudoinstructions
abfbe1ac : Implement call and tail riscv pseudoinstructions
426df26a : Implement ret riscv pseudoinstruction
44b68589 : Implement li riscv pseudoinstruction
b394851a : GuestLoader: remove instance mutex
d4bbfc30 : Fix corner cases in float rounding riscv64 intrinsics
43fb04c9 : Do not use static storage for translation primitives
e012754f : x86: Add Xchgb and Xchgw instructions
3db025e0 : guest_map_shadow: fix address truncation
bd5c340b : Fix typo in la riscv instruction

+- Project: platform/frameworks/libs/gsma_services

7ee893e : Convert hidden APIs to system APIs for satellite
9c749ad : Adds Satellite client version support
e536b76 : Convert hidden SatelliteManager APIs to System APIs.
4b89e0a : Fix the incorrect type signature requestSatelliteConfigurationForCurrentLocation
e5322ea : Add wrapper API to provide a way to call register/unregisterForSatelliteCapabilitiesChanged.
5f9f2cd : Add requestSatelliteDisplayNameForSubscription in SatelliteManager
897e782 : [Satellite] Enhanced the satellite metrics to record datagram count per message type.
2af1633 : Apply telephony_config_json_parser for satellite access control
f606a35 : Add requestSelectedSatelliteSubscriptionId API to SatelliteManagerWrapper.
8e10df1 : Add null check for mSatelliteManager
27ce5f8 : Add setNtnSmsSupported API in SatelliteManager.
9c4fdb0 : Add requestRegionalSatelliteConfigurationForCurrentLocation and onRegionalSatelliteConfigurationChanged
ef9b7c4 : [satellite] Added onCarrierRoamingNtnAvailableServicesChanged callback
4060ede : Google contribution item #12 - subitem #2, satellite_client [VZW-Skylo] apply Idle Mode Scanning for Terrestrial Network
78fb5f0 : Add a new datagram type for checking pending incoming SMS.
0683aaf : Add deprovisionSatellite api

+- Project: platform/frameworks/libs/modules-utils

b7f66a5 : Create util lib for android internal utility classes to use in module
a4ce941 : Fwk: Make statemachine not reference messages that have already been recycled
44db5f7 : Skip logging if the metric is invalid.
1e74898 : Generate Property Invalidated Cache from annotation.
92fff7d : Move ravenwood annotations to module-utils (2/2)
6f3170d : Restrict use of framework-annotations
ff621fb : Add a WeaklyReferencedCallback annotation

+- Project: platform/frameworks/libs/native_bridge_support

4dd0646 : Add sysv hash to vdso
8991c55 : guest stubs: add overwritable indirection
fb9feef : guest stubs: add overwritable indirection
e6c71f3 : Regenerate proxy libraries to use JNIEnv explicitly
1fae890 : Use JNIEnv in trampolines where applicable.
b365323 : Update trampolines to be in sync with jsons
a170e26 : Add multi-client support in camera2
97570b4 : Add anthonyjon and richardfung to OWNERS
f38b9e7 : Add compact NZCV flag format
779e92a : Add compact NZCV flag format
dc80d53 : Adjust vulkan binary translation for 1.4.0 header

+- Project: platform/frameworks/libs/service_entitlement

b10b287 : Catch UnsupportedOperationException
33d037e : Add support for alternate EAP-AKA realm.
d9272b1 : Revert "Add GID1 to TS.43 eSIM ODSA Operation requests"
5684427 : Add UrlConnectionFactory to entitlement library.
d15a174 : Add GID1 to TS.43 eSIM ODSA Operation requests
8115539 : Add additionalHeaders parameter to ServiceEntitlement APIs
f460546 : Update the User-Agent Client in accordance with latest TS.43 section 2.2 https://www.gsma.com/newsroom/wp-content/uploads//TS.43-v11.0-Service-Entitlement-Configuration.pdf
56fc6a7 : Support the new app id for xOS transfer
cf7932c : Support the new app id for xOS transfer
4590c2d : Change log level to info for PII data.

+- Project: platform/frameworks/libs/systemui

48b306b : Update flag description
fd9fc8e : Capitalize all `val`s in `companion object`s
75d3465 : Add [GestureContext] #MotionMechanics
71b7a4c : Add query-logic to the [MotionSpec] #MotionMechanics
f90d7a9 : Generalizing monochrome icon into theme icon
762ac97 : viewcapture: guarantee happens-before relationship
58b1c86 : viewcapture: add if-tools team as OWNERS
ac0a76d : [MotionSpecBuilder] for the #MotionMechanics library
4401c8e : Add the missing isVisible() that CinematicEngine uses.
490762f : Moving ThemedIconDrawable to kotlin
f2dc247 : Run `mechanics_tests` in presubmit
d6cb35d : [MotionSpec] implementation for the #MotionMechanics library
3cd81b0 : Remove closeable not found log from ViewCaptureAwareWindowManager.
bc9839a : tracinglib: hide stack-walking behind debug flag
3ddeecd : Migrates Monet's Style Enum to @IntDef
e3efbf6 : Add smartspace_sports_card_background flag.
3c28d04 : add flag for launcher icon shapes
d72c58f : Initial commit for #MotionMechanics library
249787c : Reapplying "Do not cache default app icons when re..."
3f34d9b : Revert "Update UserIconInfo to include LauncherUserInfo configs"
b55f71a : Aligning Android color tokens with Material
a614b9d : Update UserIconInfo to include LauncherUserInfo configs
c4ad70e : Revert "Aligning Android color tokens with Material"
427ee99 : Revert "Revert^2 "Do not cache default app icons when returned f..."
ca6680d : Fix crash in setting matrix in IDENTITY_MATRIX
bb2fdb6 : Aligning Android color tokens with Material
83c988b : Setting min sdk of MSDL lib to 31.
5edfd95 : Fix sun effect starting at a wrong position in Magic Portrait editor. Refactor transform matrix calculation in weather effects.
639a9d8 : tracinglib: rename benchmark tests module
9bbafa8 : tracinglib: enable strict mode
6357dc1 : Fixing cache update handler deleting valid application entries
1668e68 : tracinglib: extract config props to data class
4bf47c5 : Move icon factory to framework
4eacca3 : Revert "Add flag for customizing touchpad 3 finger tap."
32ce2d9 : tracinglib: new helper for naming flow scopes
5ab2158 : Add flag for customizing touchpad 3 finger tap.
5fd71f6 : tracinglib: infer names for all child coroutines
82b3900 : tracinglib: trace sections for coroutine scopes
554edf1 : tracinglib: apply latest kt format style
9645078 : Add benchmark tests for coroutine tracing
971d249 : Invalidate icon cache when enabling forced themed icon
0be4b1e : tracinglib: remove -Xmulti-platform
8cfb03b : tracinglib: update README, fix tracing demos
66444b4 : Fix snow accumulation thickness in different image size
6ee1813 : tracinglib: fix build warnings
d65eee0 : Trace util for coroutineScope
1e243f8 : Remove public api variant of tracinglib
4b52ba7 : Flow extensions for coroutine tracing
15f0f47 : Remove dependency on kotlinx.coroutines.test
d655e5b : Extract common test functionality to new class
4f57fc6 : Add ability to trace Flow values as counters
1e80b3b : Add JVM overloads to trace logger constructor
2690530 : Add utilities to trace callbackFlow
bfeea14 : Ensure coroutine tracing is disabled in edge-cases
9fe13e4 : Disable coroutine tracing on user builds
4d369e3 : Update color grading and image protection
a62c736 : Revert^2 "Temporary Revert to fix a Picker bug."
4803d31 : Fix weather effects being scaled differently in different images, and add parallax to sun effects
bd9e4ec : Moving IconCacheUpdateHandler to kotlin
0f1a380 : Adding Sun effect
e48a17a : Revert "Temporary Revert to fix a Picker bug."
8cf804c : Fix repeat color seed issue
26051fc : tracinglib: rename benchmark tests module
02f0a8c : Updating the DRAG_THRESHOLD_INDICATOR haptic reference token.
c7094c0 : Expose bitmapIconSize
b092dbb : Support weather effects parallax in Magic Portrait
128c728 : Adding LOW_TICK as a required primitive.
39d41fc : tracinglib: enable strict mode
eb238bc : tracinglib: extract config props to data class
db5671b : Revert^2 "Remove main thread necessity for getting a ViewCapture instance and make"
ef7d6b4 : tracinglib: new helper for naming flow scopes
f96f38c : Temporary Revert to fix a Picker bug.
934601e : Revert "Remove main thread necessity for getting a ViewCapture instance and make"
24db079 : Adding an Empty player in case of a null vibrator.
ebfbc91 : Remove main thread necessity for getting a ViewCapture instance and make call to getInstance method thread safe.
41f9fbd : Initial Ambient AOD shared flag support
8aa00da : Cleaning up temporary interfaces which were created for refactoring
e7d2215 : Removing dependency on PackageInfo in IconCache
d2aa99a : Reformat files in weatherlibs using new ktfmt
7e61d19 : Converting some caching logic booleans to lookup flags
89ced01 : tracinglib: infer names for all child coroutines
be2dbcb : Updating the library to include latest drag tokens for sliders.
5706837 : The ShadowAnimationUtil has been removed from robolectric.
c7d2e42 : Enable LiveWallpaper to listen to COMMAND_LOCKSCREEN_LAYOUT_CHANGED
96103be : Converting various cache lookup option booleans to flags, so that it can easily be extended
654c4e1 : Reformat LiveWallpaper.kt, LiveWallpaperEventListener.kt, WeatherEngine.kt
ecb6e17 : Updating IconProvider API to use a single API to load icons
67bdddd : Adding formatting to the string representation of MSDLEvents
b62e22a : Revert "Revert "Moving write secure settings permission to test ..."
8dd21fd : Revert^2 "Do not cache default app icons when returned from PackageManager"
15caab2 : Requesting ApplicationInfo in cached object
64579f6 : Fixing package override is applied in all lookup options
533ed17 : Revert "Do not cache default app icons when returned from PackageManager"
b9f5fdd : Introduce a utility to store and cancel ongoing animations on a view.

+- Project: platform/frameworks/minikin

9ef6ca7 : Make setFontVariationSettings work with weight adjustment
2966cb7 : Add more optimization
127e640 : Change Typeface redesign flag to readonly
fad541c : Implement vertical text layout
aac2d1b : [2nd attempt] Use packed vectors
e6b2ebf : Revert "Use packed vectors"
e5c9cf7 : Fix Hasher for mac builds
c1226d3 : Revive the old behavior for clock animations
4217dc8 : Check FontFamily validity with num fonts
04b3bda : Use packed vectors
18f638c : Revert^2 "Add getAdjustedTypeface/getAdjustedFont method"
726dbab : Revert "Add getAdjustedTypeface/getAdjustedFont method"
b4a88a6 : Add variation settings argument into font matching methods
4eafc64 : Use base variation settings value for older apps
85d9478 : Add more string output for debug methods
ef53ef8 : Add getAdjustedTypeface/getAdjustedFont method
7ebbbe0 : Add FontVariationSettings into LayoutCache key
7420078 : Use VariationSettings instead of a vector of it
9494315 : Add the font variation settings merge method
b6678d0 : Revert^2 "Add font variation settings into FontFakery"
176c8a6 : Revert "Add font variation settings into FontFakery"
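
The minikin changes above add a merge method for font variation settings and fold them into the LayoutCache key. The core merge is tag-keyed: later settings override earlier ones on the same axis. A minimal sketch under that assumption; the function name is hypothetical and the real minikin implementation is C++:

```python
def merge_variation_settings(base, override):
    """Merge (axis_tag, value) pairs; entries in `override` win on matching tags."""
    merged = {tag: value for tag, value in base}
    merged.update(dict(override))
    # A sorted, deduplicated result also makes a stable cache key.
    return sorted(merged.items())
```

Sorting matters if the merged settings feed a cache key: two logically equal settings lists must serialize identically.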

+- Project: platform/frameworks/native

dd5c2148e9 : [SF] Use render rate for the compositorTiming
90b14eab96 : servicemanager: set consistent flags for self-reg
668b33b1e3 : Switch the default blur algorithm to Kawase2.
ffda356e76 : Add AHARDWAREBUFFER_FORMAT_YCBCR_P210...
80d9ca7219 : Add 1.4 support to VKJson
2493f01a6a : Add a trunk stable flag for switching blur algorithm to Kawase2.
b109db3455 : Libbinder cache: ensure binder thread count is checked when removing static list
a9ec7d4f1e : Use unique names for InputSurface in e2eNativeInputTest
9fd0b0b334 : IInterface: remove some C++ AIDL exceptions
c8d5f1a472 : binderLibTest: epoch->uptime
f48bef8ed6 : tests: Eliminate annoying and noisy CompositionEngine errors
87468c3c28 : Update invalid Compatibility to 3
3c6ebed5f4 : Replace {@code} with backticks in thermal NDK comment
a1b1b5d060 : Revert^2 "sensorService: use v3 sensor HAL"
fd2b786912 : Add CPU/GPU_LOAD_SPIKE hints for one-off expensive workloads
a2ab5be979 : tests: Improve logging for objects that are causing errors
2aecda7e32 : Revert "sensorService: use v3 sensor HAL"
5415547da9 : Add enum tag to refer to the types
76eb78fe57 : Add GTE compatibility logic for NoVote
0e26ffee9f : Add ADPF owners to system_health.h
952acc1ef3 : SurfaceFlinger: Import hardware flags from aconfig
5e1b7e1b0a : Remove static list in libbinder for caching
35cbc4d218 : Add clarifying comment on enableAddServiceCache
7f21fa319e : Refactor to replace FloatPoint by vec2
5b6bca454f : Resync on the transaction.
997897acfa : SF: Remove a redundant word from comment.
565c1f9782 : Add NDK CPU/GPU headroom APIs
67bbf97106 : binder: test FrameworksCoreTests_all_binder
d7cdab2fea : IDLCLI: Update android.hardware.vibrator to V3 for idlcli
5f1ffcbc0b : Revert "RpcServer: let system allocate vsock ports"
917d6da617 : libgui_tests: demote "Linked to input" log to info
684eaa791c : Add NDK support for thermal headroom callback API
e3a50455cb : tracing_perfetto: Use 1Mb shared memory buffer
f0169ab59e : Set appropriate cursor position on viewport transition
e33845cf67 : Enable cursor to transition across multiple displays
c07a0541f9 : Complete the trace information LLM to LayerLifecycleManager
227aef2d74 : Add GTE compatibility enum to ANativeWindow.
c6bc5f54f2 : Add flag arr_setframerate_gte_enum
dc20754489 : Add binder to libbinder cache after addService
9a14c1202f : DisplayTopology propagation to IM
b7269da8ae : Fix: Sticky keys filter should ignore incomplete key events
8ef956a6f2 : installd_service_fuzzer: Add signal() to handle SIGPIPE
96fa0e7723 : EGL Multifile Blobcache: Remove entries when valueSize is zero
b7f342a051 : EGL Multifile Blobcache: Handle lost cache
99e8f2c5a7 : EGL Multifile Blobcache: Fix applyLRU
e2bd42b17f : Cleanup old sysprop flag for Keyboard backlight
966cf1b1b0 : Revert^2 "Mark @FlaggedApi flags as exported"
6aebcf249c : EGL Multifile Blobcache: Clean up key reuse
4ee63869b1 : EGL: Add flag for bugfixes found via advanced blobcache usage
cf6db5cb1f : [SF] Use the full frame rates range for supportedRefreshRates api
c0db0c9588 : Revert "Define new ASurfaceTransaction_setFrameRateParams API"
fff23909b5 : Fix c compat issues
5ef1fa1b61 : SF: Propagate Display Mode Rejection from SF to DM
974d7268ca : Remove headroom interval methods
f08ed64b24 : SF,HDR: Add HDR output type to modes
dae4dcdbd4 : Revert "Mark @FlaggedApi flags as exported"
2c14d23d55 : Remove unnecessary imports and defaults from PowerAdvisor AIDL interface
bbc2f3bedf : Update inputflinger TEST_MAPPING to use test_module_config entries
6275776bb8 : SF: Clean up warnings around AidlComposer.{h,cpp}
6ac21eccd2 : Revert "Add new ANativeWindow_setFrameRateParams API"
f5f380b72a : Revert "[Lut backend] Fix a bug where we were passing a nullptr to the HWC when we wanted to clear the LUTs."
d112b861ff : Mark @FlaggedApi flags as exported
f6c92a04a5 : Update comments to point to the new location of event.logtags.
625eb01784 : Add default maxBufferCount in GraphicBufferProducer
fc2a4a7b30 : SF: Return EX_CONFIG_FAILED from setActiveConfigWithConstraints()
10bae11c00 : Return message wrapped in Result from receiveMessage
5d66042a23 : SF: Store and manage snapshots for virtual displays
5cd7497f3e : Refactor processing of relative pointer movements
7918c82519 : Refactor PointerChoreographer lock
eb63bbfd16 : Do not crash when server channel has closed
af66cee7e9 : touchpad: add touchpad system gesture setting
4b8c0c6212 : Add new surface binding for ADPF Timeline API
36349ac27a : [Lut backend] Fix a bug where we were passing a nullptr to the HWC when we wanted to clear the LUTs.
4d74653b0e : Update libvulkan linker map for 1.4
550cc22ef3 : Add feature xml for Vulkan version 1.4
500ce88176 : Add support for Vulkan 1.4 core entrypoints to nulldrv
612281d2fa : Allow vkjson to report on 1.4 instances
8f9e4e1e28 : Add support for vulkan api level 1.4 in loader
a5296cdb1d : Regenerate vulkan-loader for 1.4
0f47303196 : SF: VsyncTimeline::isVSyncInPhase should use display rate [unflagged]
888300c8f4 : Update dependencies on graphics.common HAL to latest
75885d9f92 : readCString: implemented with readInplace
567ec62d53 : Fix NoVote/NoPreference on LayerHistory
c3cab95207 : [LUT NDK] Add CIE_Y sampling key to ADisplayLuts_SamplingKey enum.
4a8e5f7ed7 : Add permission checks for sfdo functions.
345be9446f : Fix the return type of ADisplayLutsEntry_getDimension and ADisplayLutsEntry_getSamplingKey.
11aeb18b19 : Add SessionManager aidl
0cc2e2cd82 : Use view::Surface instead of IGBPs in OutputConfiguration
ff63fe16c5 : input: Remove prefix from InputDeviceSensorAccuracy enum
fb45f2759e : Fix test for headroom result.
726c1b1ee5 : Improve wording of ALooper_pollAll docs.
b4eafd3589 : Add Accessor::from_binder
cb9ce7dc16 : [audio] Add new mute type to AudioManager
d54e589051 : Fix typo in ASessionConfig_setGraphicsPipeline signature
71e0ef517f : Revert "IDLCLI: Update android.hardware.vibrator to V3 for idlcli"
c6acad1f9e : Add r/w opt-in rollout flag for RenderEngine on Graphite
887598b92d : Allow read/write aconfig flags to be read before boot complete
ea52d3c543 : Add missing IProducerListener callbacks to BLASTBufferQueue
fda8f0db14 : Define VINTF fragment as a module type
c1beabff3d : [LUT shader] add CIE_Y shader
83c6f9eed4 : Easier ftl::Flags construction
290d426a52 : Add new methods to PowerHalWrapperAidlTest
53463397c3 : Fix One Euro filter's units of computation
74befc9eae : libbinder: deprecated notice for *CString
b7a1a99f1a : Disable -Wunused-value for surfaceflinger
01e9f30b46 : [Lut NDK] Add new ASurfaceTransaction_setLuts api.
7860d408c5 : Add AccessorProvider to libbinder_rs
cfd0442a0b : sensorService: use v3 sensor HAL
bee19021e3 : Adds getSupportedRefreshRates support
2358c1360d : MotionEvent: add safe dumping method
f750aa8e27 : InputDispatcher: Properly store values inside optional
37d088b0e2 : Change TEST_MAPPING to use test_module_config rather than list options.
081ed30648 : Remove the extra parameters in onCommitNotComposited
7abbc353b3 : SF: add trace points to debug 4ms jank
3a5f4bef5b : IDLCLI: Update android.hardware.vibrator to V3 for idlcli
068fa0c191 : Add flag for native session manager
0d18823bdb : SF: passing link training failure up to DM
5fd22477b4 : Delete user data even on error when registering AccessorProviders
c625c11ac6 : Fix fuzzer output on failure and shared_libs
5541f12784 : Add API to retrieve max layer picture profiles
95fd424aa1 : Disable -Wunused-value for surfaceflinger
fc4f2817e1 : Tweak Kawase2 to use fewer passes and fewer samples.
8f997cbe9f : Refactor initial support check to use SupportInfo
fa8eb01f88 : Implement NDK createSessionUsingConfig API
f5fdff8d95 : Notify listeners about active picture profiles
9cea07c5fc : [Lut HAL backend] skip libtonemap if the lut(s) is in use
28c21c69d6 : Complete the trace information LLM to LayerLifecycleManager
6b2db74787 : Add ADPF owners
9084218d12 : bufferqueues: Move the gl fence wait from dequeue to releaseBuffer
ff80e302cb : Revert "Enable cursor to transition across multiple displays"
ffbd83c622 : input: don't log the whole MotionEvent in index checks
8face53ab5 : Limit HLG to 4.926x SDR
4fabc0119c : SF: cleaning up hotplug2 flag
e13230c116 : Shift error log to info for rust data store
a5ba9f1f2d : Enable cursor to transition across multiple displays
d3795b5441 : Allow resetting locked modifier state
daa1cab22d : Complete the trace information LLM to LayerLifecycleManager
4a3f24556d : Add ADPF to PowerManager header OWNERS
feb30df986 : Add support for converting java hint sessions to native hint sessions
f9854d272c : SF: Hash and cache descriptor block serial number
c37c904447 : SF: Parse, hash, and cache block 0 serial number
3e96f94d2c : SF: Parse Physical display size in framework
4c575a1073 : Flag: Add a trunk-stable flag to guard EDID-based Display IDs
7ad5c2b40d : Add HEIC_ULTRAHDR image format
8047b3cd1a : Use matcher-based consumption in InputPublisherAndConsumerNoResampling_test
0f4ca34643 : gpuWork: remove unused reference to mock_bpf
b09293f068 : Skip invisible windows in input list
4fcdbe7d61 : Use a vector of devices in IAudioManager
5210808d77 : bufferqueues: Simplify calls that don't use GL fences
0dd9a4e7de : Input: Gate setKernelWakeEnabled with a flag
9e7e909fa1 : Wrap AHardwareBuffer_LockPlanes too.
e0361625e1 : Add methods to lock and unlock HardwareBuffer.
d62330d481 : KeyboardInputMapper: migrate most tests to InputMapperUnitTest
708ebfbcfa : SF: add a backdoor to introduce a janky frame to HWC
3a218e0245 : native: Add a new BLE channel sounding feature flag
f673e67a21 : Extract KeyboardInputMapperTest and subtests to separate file
aed4aa99dc : Make native package manager available to platform.
8433c50b95 : remove sdv binder visibility
2e3bb3779d : libgui: Add bq_gl_fence_cleanup flag
a3db306748 : Add new flag for exposing 1.4 instance API
f26d0d7a5a : Add OWNERS for *luts* files
f5dd338f69 : Remove stray character.
7e8875b411 : Make native_headers product_available
b658b15cd4 : [Lut shader] Fix gamma correction
76cbf5bedb : SF: Remove flag check for modesetting lock
43a8bd948d : Revert "Revert "Add public ADPF load hints with better rate limi..."
147f08c218 : Revert "Add public ADPF load hints with better rate limiter and ..."
e524dd9d13 : Recover from buffer stuffing for canned animations
960967767d : Remove stray character.
ffe24ccf66 : Add keyboard volume mute LED. (2/2) (cherry picked from https://partner-android-review.googlesource.com/q/commit:80f60ec95afc0c5995647aa19f87b5389f656c25)
94cd7bf910 : Add DISPLAY_BT2020 dataspace
bbd4088dd5 : Restore original use of llndk annotation
ef006586b5 : [Lut HAL backend] implementation 3rd patch.
59fc51c6dc : Revert^2 "Use __builtin_available guard"
9bfdbe8ee2 : Revert^2 "Deprecating llndk-versioning.h"
628cff4cec : Allow apps to associate a change in picture profiles alongside a buffer
07dcd4977f : Allow apps to apply picture profiles with priority to layers
31b58c9b20 : Error fix from enabling Clang thread-safety checks
330571240f : Remove dead code from LayerFE
0034aef56f : Revert "Use __builtin_available guard"
8ded18ccb6 : Revert "Deprecating llndk-versioning.h"
8760216137 : Update Transaction::setInputWindowInfo to take WindowInfoHandle
c7bf16433c : Fix test for headroom APIs
0980aa8675 : ASM: Add required AppOp lookup to custom sensors for all permissions.
1a4ffd898a : Compose a layer with a picture profile passed in by an app
1d038d478f : Consume full tap in EndToEndNativeInputTest
a8dc965dd8 : Revert "Skip primaryRangeIsSingleRate check for ARR"
a4eb946da8 : Avoid unnecessary copies in updateInputFlinger
68ce2f684c : SensorInputMapperTest: migrate to InputMapperUnitTest
3a28c3165d : Add new 25Q2 keycodes to native
ec1f230d6d : Add default maxBufferCount in GraphicBufferProducer
c50727b165 : binderUtilsHostTest: increase timeout to reduce flake
18b9352ff3 : Move ADPF to standalone lib
7a4cb7e128 : Add plumbing to pass picture profiles down to Composer HAL
3a707e05da : Move picture profile flag from surfaceflinger to libgui
fc48edc76e : SF: do not clear contentRequirement when render rate changes
7382ddc81c : NoPreference does not supersede Game default override
f4183a51c4 : Fix native memory leak in Uint64ArrayRW
8fdf7eb239 : Deprecating llndk-versioning.h
a52cd7e0a2 : Use __builtin_available guard
7e796bc1f8 : Toolkit touch boost to per-uid in sf scheduler
a4b3a10e37 : Update output's dirty region when content changes
08ee19997d : Reimplement Chromium's OneEuroFilter to InputConsumer
c034fbf278 : Support SurfaceControlRegistry logs in native
ebdbead46c : SF: Implement FrameStats directly in FrameTimeline
12b511216b : SF: Add flag for FrameTracker removal
31b9016a7a : Correctly convert std::string to rust::String
9210f10dd0 : Avoid unnecessary WindowInfo copies
e2ca80bb67 : Add flag for applying picture profiles in SurfaceFlinger
ea1a37b3e9 : GestureConverter: dump value of 3-finger tap setting
67044a2341 : surfaceflinger: Update libprocessgroup dependencies
5a8c69f8de : libbinder: Return error instead of crashing on closed Trusty connection
e2b376df7a : Implement vibration session support
2a0210e8f4 : GestureConverter: add option to make 3-finger tap a shortcut
bb56192110 : Fix condition for waiting next frame
8e3e2ea8a9 : SF: omit vsync callbacks when screen is OFF
a8217b5de8 : SF: add a new flag for omitting vsync when screen is OFF
0a6dc40104 : A uint64 array wrapper for optimized allocation and copying
d0776158d0 : Add USB information to bugreport
508e64d9fb : Fix track event names with atrace via Perfetto
398f9f5521 : Rust new_vsock use assigned port feature
e05a2b35ff : Add more const to 'instances' in registerAccessorProvider
a48146490b : Note Ganesh vs. Graphite in RenderEngine's dumpsys section
c8437a54e3 : Rename to debug.sf.prime_shader_cache.edge_extension_shader
783fd3fc1e : Make tracing_perfetto available to vendor/.
a419faaaf3 : Add warning logs when we fail to set BufferReleaseChannel
4878c0d5a4 : Remove superfluous #include.
7e2ba2604f : TfLiteMotionPredictor_test: fix initialization order warnings
fb6d585f8c : TestEventMatchers: explain WithMotionAction failures
16c4c32c55 : Remove flag single_hop_screenshots
0b80c74300 : Reorder release fence attachment for non-threaded RE
122a117363 : Add public ADPF load hints with better rate limiter and hint batching
a09b5b7417 : Stop checking ABI of libbinder_rpc_unstable
3062381d17 : Include PID and UID in offscreen hierarchy dumping
066288aa8a : SF,HDR: Add readonly flag for connected display HDR feature
a97fd945a6 : EndToEndNativeInputTest: Apply all transactions synchronously
1ed210ee53 : Add flag for SurfaceControl.Transaction#setFrameRate API
56d55bca36 : Fix flaky uncropped_container_replaces_touchable_region_with_null_crop
603c2ea98c : Add unit tests to check that BufferReleaseChannel is unidirectional
bb504471bb : Clean up BufferReleaseChannel logging
c16a4a5fde : Drain buffer release thread in binder callback
769de4ac09 : Define new ASurfaceTransaction_setFrameRateParams API
666659a47b : Revert "Revert "Add HAL method to return SupportInfo object for ..."
0b3325dc18 : Add flag for connected displays cursor
a012ed364d : Revert "Add HAL method to return SupportInfo object for PowerHAL"
74bc589a22 : Use ProcessState::selfOrNull in ServiceManager APIs
aaa80ee344 : Add new ANativeWindow_setFrameRateParams API
bcaffd40b3 : Add documentation for layer drawing order
c648e08a5f : Extract SensorInputMapperTest to its own file
a7ed637b9f : Wake device from alphanumeric keyboards
4ef40682dd : Ensure new InputConsumer resampling behavior does not change
37242c8224 : Do not resample when frameTime is not available
75d7112700 : rust: Bind to NDK subset of libbinder
cf0037d817 : rust: Make binder-ndk-sys crate buildable via Cargo for NDK
e98286f454 : SF/CE: Clean up header dependencies and combine mocks
6ab69ae5c8 : SF: Add a readonly feature flag for display config error HAL changes
0d666d7e06 : libc++fs is part of libc++ now.
c91164f5e9 : CursorInputMapper: tidy up the tests
62d0bd7277 : Clean up feature flag for new mouse pointer ballistics
3013feb2c3 : Add test for eglCreateContext with EGL_TELEMETRY_HINT_ANDROID
7e947d331d : Improve formatting of doc comments and add more details.
9930a83520 : Remove BackendUnifiedServiceManager::getImpl
1d2eb22c39 : Remove some usage of IGBPs in the ICamera.
5e2ebfb9d7 : Unset FDSan tag when converting handle into other types
d6a3fb5d62 : EventHub: log error when getting axis info for non-existent device
dd2f95e803 : Adding XR software and hardware feature configurations
30f38dee68 : Add YCBCR_P210 format to AHardwareBuffer
d2a0e72f48 : SF: Adds support for getSuggestedFrameRate api
8204da2c9d : Adds support for getSuggestedFrameRate api
88a1b76bea : EGL Multifile Blobcache: Make use of crc32_z algorithm instead of crc32c
48eb4355d1 : Add additional flag for FMQ SF
0abc4a5897 : [Lut HAL backend] implementation 2nd patch
2aec032c08 : libbinder: Parcel: validate read data before write
4e76d6907e : libbinder: Parcel: validate read data before write
1a1094a3b3 : Change format and units of logs in LegacyResampler
29ee27c5a4 : Add map like pointer lookup to LegacyResampler
caddf49a56 : Removed redundant glob from SurfaceFlinger_test
81a590cac8 : Input: Support set power/wakeup node of device
9639e124c8 : Add conversions between NativeHandle and AIDL NativeHandle.
362fa2273d : Remove callback at the end of consumer destructor
8237ba6696 : Add HardwareBuffer::as_raw for C FFI boundary
861e347882 : Skip primaryRangeIsSingleRate check for ARR
c4fe2466be : libbinder: statusToString for status_t errors
aff1772c7c : binder: fix FD handling in continueWrite
8010cbb3c7 : binder: fix FD handling in continueWrite
678984fb42 : Add `using binder::Status` to BackendUnifiedServiceManager.cpp
48fd884732 : Skip processing repeat EV_KEY events for keyboards
551db9811c : Document return values of ABinderRpc_Accessor_delegateAccessor
23e0a8e468 : Remove libnix dependency on libbinder_rs
11b98f28da : Add HAL method to return SupportInfo object for PowerHAL
0077fde3ab : Remove release fence flags
078d7366d2 : Block on BufferReleaseChannel when out of buffers
d78dd9b184 : Copy KeyCharacterMap object when we fill InputDeviceInfo(1/n)
93ee540fcf : Use a single trackListener method in LatencyTracker
0105ea5ed1 : Add processes to cacheable list
045a30e062 : NewBufferCount value changed: simplify the steps of applying for new buffers
0db4fced4d : libbinder: Parcel: grow rejects large data pos
72cefc0779 : Add YCBCR_P210 format to AHardwareBuffer
e32c1ab25b : binder: fix FD handling in continueWrite
ae3611d28a : SF: fixing typo
72090cbd1c : Move input event verify for publish to back of sendMessage
a6676ba72a : Add Tracing for Binder Caching
72ddb9c2e2 : Avoid using a format for the usage param in consumer test
bab495b99d : Avoid using a format for the usage param in consumer test
c0ad6b0298 : binder: fix FD handling in continueWrite
0ec33d7147 : Allow to override ANGLE suffix with debug.angle.libs.suffix
7b1a8c91c2 : Fix build on ToT Clang
df75cf6087 : libbinder: also avoid sWarningCallback lock
7693c4ab1c : binder: Add std::vector<bool> specialization to SafeInterface
eef06fd87d : dumpstate: fix retention of CAP_SYSLOG after dropping root
c65685b75f : Fix field widths in dump, so as to make columns line up.
f1cba19767 : Disable -Wunused-value for updatePhaseConfiguration
86bf84f846 : SF: Trace input info whenever it is present, even if input token is null
a8aaeb8d2d : Generate HOVER_EXIT if touchable region changes
a6b9f565ad : Add proposed trendy teams for VTS modules
67b101b5a6 : Reland "Reject inconsistent globally injected events" --try 2
f66a6d011b : Align enable with createSensorEventConnection
f7f93f5fd5 : Revert "Reject inconsistent globally injected events"
bc72309b14 : Do not invoke callbacks when InputConsumer is destroyed
ccd7147281 : Use std::queue in InputConsumer_test
c08e4710aa : Fix binderLibTest flakiness
b7a1db6500 : Heart Rate Sensor: Import permission aconfig flag and guard heart rate sensor by android.health.READ_HEART_RATE when flag enabled
33bf9d90a7 : Convert vintf_fragments into vintf_fragment module(s)
ec23c08381 : Add stagedApexInfos to ApexStagedEvent
f6abbf4b91 : Reject inconsistent globally injected events
dc1ce4d2d0 : Add new mute state for port volume mute
860ef88023 : Disable warning: ignoring temporary created by a constructor declared with 'nodiscard' attribute
956a6cee22 : inputflinger: Rework previous fix for KCM
df6813406c : Add check for valid AIBinder_Class_setTransactionCodeToFunctionNameMap
d585099856 : evemu-record: Document the --timestamp-base=epoch option
8cb62fe840 : inputflinger: Check for KeyCharacterMap before use
1b839b09f4 : Add dirgroup for trusty genrule
3659568747 : Correct "introduced" version for inline API.
0555fbf9ec : Add a method for libbinder to wrap an accessor in a delegator
754a9cc844 : Modify composePwleV2 parameters
91b33f18ea : Disable -Wunused-value for surfaceflinger
d1538461a7 : SF: Move logtags
16355787a0 : Track pendingBuffer count in RequestedLayerState
a512a53e89 : Check the DeviceId for generating DragEvent
b61a07068a : libbinder Parcel: Fix ubsan error in readData
4679e55027 : Add logic to overwrite pointer coordinates if event time is too old
4d3b03adfa : Add logic to overwrite pointer coordinates in motion event
f2163b8462 : binder: fix FD handling in continueWrite
eb94bdf8ac : Update dependencies on graphics.common HAL to latest
032e7f4a6c : Add flag for setFrameRate API
d77178e48e : Mouse: Add support to swap mouse primary button
2ac7e8169f : Mouse: Add support to reverse vertical mouse scroll
31135466b0 : SF: Remove EventLog frame duration
e0226c5e5c : Allow dispatcher to be idle after metrics push
89fbb6e4b6 : Adds hasArrSupport api support
88bceb1195 : ultrahdr: Remove unused Android.bp of deprecated ultrahdr
ffed91dc91 : libgui: Add wb_unlimited_slots flag
5c4f798834 : Allow toggle caps lock even if device doesn't have a caps lock key
3a5b65ae3b : Copy BR when failed to copy the screenshot.
5a71e88575 : hasKeycode API should take into account key remapping
163415a4de : Clean up a workaround for product variants
7f1efed1f8 : Add check to not resample when resample time equals motion event time
5d59a429f0 : Add file to use new consumer with a subset of TouchResampling_test.cpp
d05a745a9b : Add metrics for logging SurfaceControl events
95f669a30a : [Lut HAL backend] implementation
117556e091 : Remove unnecessary flag checks for Transaction#setFlags
ef0eb5150b : binder: rules.mk: Use FIND_CRATE to locate dependencies
6aa3b8931d : FTL: Allow implicit conversions with NonNull<T>
7171a46fae : Error fix from enabling Clang thread-safety checks in aosp_panther
39dd7ff7b2 : Error fix from enabling Clang thread-safety checks in Cuttlefish
59954c4866 : SF: Use VsyncSchedule's ID for EventThread
a9123c8b69 : Support floating point values for layer crop
dfee5a2c36 : Fix rounding error when constructing Rect from FloatRect
25525631fb : libs: binder: trusty: binder_rpc_test to use CLOCK_BOOTTIME
111e26e04a : Add explanation for composition list order when dumping SurfaceFlinger
498ccce6c1 : Revert "Exclude setting code maps from APEX"
8f87adef58 : Add fields to InputMessageBuilder
3b685aa34b : libbinder: better object logs
4648b7e8b5 : Disable -Wunused-value for surfaceflinger
3e86536cb9 : Increase RemoveFromCacheOnServerDeath retry count
9213590ac0 : Cache flag value to prevent performance regressions
12c4ce6b90 : Use std::string for test expresslog std::map
bdd0533983 : Add mocks for vibration session HAL APIs
f99572ae6c : libbinder: remove writeUnpadded
c54dad6531 : libbinder: Parcel: validate read data before write
608524d462 : libbinder: Parcel: grow rejects large data pos
db85295097 : Fixes for NDK API Review feedback
fc70cb68d7 : Fix documentation for NDK API
819cb98c8c : binder_parcel_fuzzer: triage assignee
1021015c7d : binder_parcel_fuzzer: avoid consuming all provider
68c19fd3e8 : libbinder: closeFileDescriptors private
19ff63eea1 : binder_parcel_fuzzer: cleanup dups/to remove
092d4b2edd : libc++fs is part of libc++ now.
72963bc319 : binder_parcel_fuzzer: support write functions
97ba154de2 : Add comment clarifying ANGLE libs loading behaviors
e96c2b4450 : libbinder Parcel: compareData const.
d6a18387c2 : Remove API level mentions from typedefs
3ea49b86f0 : Exclude setting code maps from APEX
58bda65dfe : Rotary encoder rotation count telemetry
da128516a5 : Extend shadow display frame by 2x shadow length instead of 1x
d73a5101d0 : Fix binderCacheUnitTest: Add 50ms wait with retry
c17f042b0d : Fix stylus hover spot not disappearing
ab2c2fcaff : Save and restore sticky keys state on configuration changed
dfb122d9d1 : Fix lambda capture
b1ae54e557 : Add more owners to surfaceflinger and renderengine
e2543234e1 : binder: fix decoding of vintf stability ParcelableHolder
2c97a6ad59 : Add OWNERS file for PowerAdvisor
bbd71b11d8 : Add a libbinder_rs wrapper around the new ARpc APIs
cd6e4e69ca : Add methods to get properties of Buffer.
3026298597 : Add methods to get locked buffer from Surface.
78c6534a51 : Wrap more methods to get and set properties of ANativeWindow.
12244780ce : Add new deqp levels for 2025
822d7851ce : Revert "Allow metaState changes from keys not declared by the keyboard"
97fe7cdaf9 : Remove ARpcDoubleRemoveProvider test because UB
e37f8346c6 : Add multiple device resampling support to InputConsumerNoResampling with tests
648e8e6f32 : Integrate the new frequency profile APIs with the vibrator HAL.
721af5b615 : libbinder_ndk: allow null codeToFunction.
7ce9cdb79c : Add defineClass variant
0d6e60f85a : Reland "Move tracing calls to libbinder_ndk"
ab851c755d : Fix crash in presentation time implementation
5767f63cf6 : Trusty: update LL-NDK mock.
1c15d387bd : Revert "Remove trusty-specific llndk-versioning.h."
00f5a992a6 : Revert^3 "Move tracing calls to libbinder_ndk"
a81376b0d2 : Allow metaState changes from keys not declared by the keyboard
20d7badfa9 : Suppress logcat error messages when frame timestamp is not found
de84d8e6c5 : Revert "DO_NOT_MERGE Fix primaryRangeIsSingleRate + touch on dVRR"
cd391bc390 : Use Layer IDs instead of handles in BufferCountTracker
076e0ee66c : Revert^2 "Move tracing calls to libbinder_ndk"
c370db4981 : Add libbinder_ndk systemapi support for injecting RPC binder accessors
187b974e78 : Add libvibrator_test to presubmit
a40d560a56 : Do not run edge extension benchmark if flag is off
099e19a97f : Gather latency metrics for key events
24878b59e3 : LatencyAggregatorWithHistograms for InputEventLatencyReported atom.
5aa8d157af : dumpstate.cpp: increase perfetto serialization timeout
278c5d851d : Add new policy flag to notify if a key gesture was triggered
264d3e6adc : Remove trusty-specific llndk-versioning.h.
b70fe1b23f : Use std::sync::LazyLock rather than once_cell.
a925de1e14 : Move applyInputEventProfile to cpp file
f53fa6b562 : Only prioritize critical input threads
d8dd7c3d2e : Move binderCacheUnitTest to presubmit
30e2360504 : HWC screencap portability improvement
e9726e6eca : Add SYSUI session tag and update PowerHAL version
edee03a7be : [Binder][XIAOMI][Bugfix] Skip appops header in native parcel. [2/2]
3a14d884aa : Add managed_users support for ago projects
fc0cd2581e : Fix deadlock of main thread and Perfetto thread (LayerDataSource::OnStart)
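
Among the input changes above, the One Euro filter ("Fix One Euro filter's units of computation", "Reimplement Chromium's OneEuroFilter to InputConsumer") is a published, self-contained algorithm. Below is a minimal sketch of the standard formulation; parameter names follow the published algorithm, not the AOSP sources, and timestamps are assumed to be in seconds throughout (mixing time units is exactly the kind of bug a units-of-computation fix would target):

```python
import math

class OneEuroFilter:
    """Adaptive low-pass filter: low cutoff at rest (less jitter),
    higher cutoff during fast motion (less lag)."""

    def __init__(self, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.t_prev = self.x_prev = self.dx_prev = None

    @staticmethod
    def _alpha(cutoff, dt):
        # Smoothing factor for an exponential filter with the given cutoff (Hz).
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, t, x):
        if self.t_prev is None:
            # First sample passes through unfiltered.
            self.t_prev, self.x_prev, self.dx_prev = t, x, 0.0
            return x
        dt = t - self.t_prev  # seconds
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Cutoff grows with the (filtered) speed of the signal.
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, dt)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.t_prev, self.x_prev, self.dx_prev = t, x_hat, dx_hat
        return x_hat
```

With the default `beta=0.0` this degenerates to a fixed-cutoff exponential filter; a positive `beta` is what makes it track fast pointer motion with little lag.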

+- Project: platform/frameworks/opt/calendar

80d9e17 : Fix UnusedVariable errorprone issues

+- Project: platform/frameworks/opt/car/services

b46dd12 : Add display compat density flag check
85e8d71 : Update CarServiceHelperService to use unstable AIDL
8040132 : Replace `noDisplay` with getter `isDisplayed()`
ce50e25 : Fix NPE in CarDisplayCompatScaleProviderUpdatableImpl
fa1c109 : Remove IME DA from application display area
dc336fb : Fix CarActivityInterceptor crash
5fc86c2 : Disable tests when rotary not supported
fc809fd : Fix CarActivityInterceptor trace crash
75f26d5 : Modify CarLaunchModifier to specify the proper target display.
833122c : Improve the instrumentation of window management code in CarService

+- Project: platform/frameworks/opt/chips

3588f68 : Import translations. DO NOT MERGE ANYWHERE
8347371 : Import translations. DO NOT MERGE ANYWHERE
7f7bb46 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/frameworks/opt/net/ims

6403388e : Fix minor Telephony crashes
06e04b4c : Check for TELEPHONY_CALLING so we do not crash when it is disabled in IMS
770eda9f : Fixes for errorprone update

+- Project: platform/frameworks/opt/net/wifi

87441d6e0 : Import translations. DO NOT MERGE ANYWHERE
ce777ddf8 : Revert "Ignore stale NetworkCallback if NETWORK_STATE_CHANGED already received"
8753cc33a : Revert "Ignore stale NetworkCallback if NETWORK_STATE_CHANGED already received"
43ab957b0 : Import translations. DO NOT MERGE ANYWHERE
6d023de64 : Import translations. DO NOT MERGE ANYWHERE
eb449fced : Ignore stale NetworkCallback if NETWORK_STATE_CHANGED already received
66537ddfd : Remove "Saved by Settings" in saved network summary
1bd3e3a81 : Remove dupes from verbose scan result logs
19d2a803f : Import translations. DO NOT MERGE ANYWHERE
0966ddc5e : Increase max scan age for failed scans to 5 minutes
22affc4ff : Show all scans in WifiEntry verbose summary
db748a35f : Fix carrier name for "Available via..." text
afdabefc6 : Check underlying networks of underlying networks for default network
f816483c9 : Change android.net.wifi.flags-aconfig-java to libs dependency

+- Project: platform/frameworks/opt/telephony

581a030194 : Call stopP2pSmsInactivityTimer in PowerOff and Transferring state
21744c94a2 : Block SMS in satellite mode if P2P SMS is not supported
d2d63d4c32 : Handle device doesn't point to satellite in CONNECTED state
3b65f5c427 : Remove carrier configuration value check when registering screen on/off callback and add check when screen off timer starts.
290a5fad9a : Determine carrier roaming ntn eligibility only based on selectedSatelliteSubId.
2f6d5b67b7 : Add unit test for sms relay metrics
daeddacc9b : Add unit test for isSatelliteProvisionedForNonIpDatagram
002697bb09 : Add sms log to session metrics
7b5ace01c8 : Add null check for subInfo
ed642a0135 : Override satellite display name if we are in satellite mode and have a valid operator name.
3eba115e24 : [VZW P2P] Add metric for Carrier Roaming NB-IoT NTN module, maxInactivityDurationSec, to track the maximum user inactivity duration of a satellite session.
ce5a4d7d98 : Add "isNtnOnlyCarrier" field into metrics atoms for satellite sessions.
2e421e5975 : Enhance satellite metrics
eb9b7ffe54 : [Satellite] Satellite metrics to capture pending message count per datagram type.
5f0c77d178 : MtSmsPolling messages to be sent the first time the satellite modem is connected.
526614f3b4 : Exit from satellite mode on p2p sms inactivity time out.
aa94fa6ef0 : Allow GsmSMSDispatcher to send MtSmsPollingMessage while not in service.
544ecb7a69 : Update carrier roaming ntn eligibility whenever satellite access allowed changes.
0a5870fa06 : Fix NPE at DisplayInfoController
9565d86ee8 : Pass ntn signal strength to listeners in IDLE state.
4d516d7ce2 : Override satellite display name in ServiceStateTracker
df98bf666e : Changes to support geofence for carrier satellite
21c0e5fc75 : Select proper handover type and monitoring timeout duration
1f87627d00 : [NTN][VZW P2P] Account for all cases in DatagramDispatcher to allow Check messages to be sent even if the device is in NOT_CONNECTED state.
b29770b199 : Change the max limit count from 100 to 500
6d08308c0b : Block incoming SMS if SMS is not supported.
ef54c28cf6 : Release lock before return
b425c9dde8 : Fix minor Telephony crashes
a929513fb7 : Disable CDMA calls
55b7e92e63 : Avoid NPE in RadioConfigProxy.
b5446d2454 : Remove references to NV reset APIs
0cf106141b : Do not enable modem scan in satellite mode if device does not want to enable TN scanning in satellite session
e3c7cb2871 : Unit test code for onCallStates change crash issue
98f1e44121 : Add NB_IOT_NTN
487bb20389 : Add flags to deprecate and clean up CDMA
e9e96366c3 : Convert hidden APIs to system APIs for satellite
00ad713c3b : Add logs for Allowed Services info and Data Plan type information
9ff8bad9f6 : TelephonyDisplayInfo support changes for Satellite
d97c2560e9 : Fix issue of triggering SOS handover for T911 calls
d59517cf3c : Update HAL APIs interface to support sim type switching on 2pSIM+1eSIM configured devices
42221ada79 : Add requestSatelliteDisplayNameForSubscription in SatelliteManager
a3a4f0e830 : Introduce multiple satellite controller atom per carrier id
8215b5324c : New System API to launch SIM Preference in Setting
b3bfdc0f91 : Fix wrong transfer time log for incoming satellite datagram
b52676047b : Revert^2 "Mark @FlaggedApi flags as exported"
6a042b8e38 : Support Allowed service info fields service type and Service policy
9cccd3b375 : New System Api: getGroupIdLevel2
6bae408a68 : [NTN][VZW P2P] Allow Check messages to be sent even if the device is in NOT_CONNECTED state.
5336fbf069 : Add carrier connected check before cleanup satellite resources
e7177ad323 : Add new satellite APIs for cellular modem.
5df163225b : Revert "Mark @FlaggedApi flags as exported"
0cf9aa0369 : Support TS43 enablement based Data Plan type check API
e0d85173dc : Fixed to deliver result to DatagramDispatcher properly for single part SMS.
bbda5ee968 : Added IsSatelliteSystemNotificationsEnabled check
afdab527f2 : Consider provisioning and disallow reasons only with nb iot
7d1588d3ea : Convert hidden SatelliteManager APIs to System APIs.
5be6d72947 : Change featureFlag to carrierRoamingNbIotNtn
62cd748c99 : Add DefaultMessageAppSupportedCheck for satellite SOS
8f338521c7 : Mark @FlaggedApi flags as exported
df8ff9dd64 : Update comments to point to the new location of event.logtags.
094320b960 : [Satellite] Enhanced the satellite metrics to record datagram count per message type.
5b73a08409 : Update ntn signal level only in connected and transferring
450a691263 : Add missing call to mNotifier for transparency APIs into GsmCdmaPhone
b29cf99ad7 : Update ntn signal level when device is in transferring state also.
6c83d0f266 : Fix system selection specifier converter
5ac2150ad4 : Add register/unregister callback for selected satellite subscription changed event in SatelliteController and SatelliteModemInterface.
6ef1e99ad1 : Revert "Gracefully tear down networks when SIM is disabled"
436cbe29a3 : Update send sms status only after all parts are sent.
3c5232c211 : Introduced forceResetCarrierKeysForImsiEncryption TestAPI to delete Imsi Certificate.
8f328fdfdb : Clear calling identity inside shouldSendSmsToDatagramDispatcher
9d180f2cb3 : Use the new HAL API updateSystemSelectionChannels to pass the bands/frequency to vendor service.
2d24a39c27 : Update code to avoid nested usage of locks.
511db40dd1 : Add Satellite Availability config check in SatelliteController
64b3ebb7df : CarrierMessagingService: Add gated support for granular errors for SMS flow
2d88b6cc7f : Configure earfcns to modem
35168cf0f6 : SatelliteController: check if the subscription is provisioned for non-IP datagram
afc9001160 : New System Api: getCarrierId using carrierIdentifier
2ed72abe6d : Add requestSelectedSatelliteSubscriptionId API to SatelliteController.
14d440d1a4 : CarrierMessagingService: Add flag for new temporary failure codes
413fea0ab3 : Fixed race condition while powering down
5079373649 : Update send sms status only after sending all parts of the SMS
dcbc7680b1 : Skip checking satellite support for registerForSatelliteProvisionStateChanged
7c793864b8 : Revert "Clean up 24Q3 aconfig flag reset_mobile_network_settings"
c467c8f5b8 : Update carrier roaming ntn signal strength when SatelliteSessionController enters ConnectedState
1f9665be57 : Fix race condition between broadcasting and carrierConfig changing.
0bb5c85027 : Add implementation and flags for new transparency callback APIs
96d9baca30 : Hide IDLE state from clients
3a21482794 : Use most recent Radio HAL libraries
afa40877a8 : Added flag and tests for SubscriptionPlan changes
f7ddbd7cde : Update datagram transfer state to WAITING_TO_CONNECT when waiting for satellite connection
73a7887ed2 : Revert^2 "Replace isInCarrierRoamingNbIotNtn() with"
bdb4dd81a9 : Use CloseGuard to catch UiccPort resource leaks
6f56548605 : Bring back exported FR flag data_only_cellular_service
3389e95d75 : Revert "Replace isInCarrierRoamingNbIotNtn() with"
4dfbd264cd : Add setNtnSmsSupported API in SatelliteManager.
7313e64fa1 : Replace isInCarrierRoamingNbIotNtn() with shouldSendSmsToDatagramDispatcher.
5b6173cfb4 : Notifying ECBM stop, if an emergency call starts during ECBM
52a56a8a69 : Added flag and tests for SubscriptionPlan changes
d800f9a2cb : Remove carrier roaming eligibility check for satellite mode
d4dbb28cc2 : Fix LeakedClosableViolations in UICC handling
8ecdb4af0e : Return false if radio is off in isCarrierRoamingNtnEligible
95907a5694 : Fix LeakedClosableViolation in SQLiteOpenHelper handling
c57a8121cd : [satellite] Removed disallowedReason condition
f20ae36709 : Add onCarrierRoamingNtnSignalStrengthChanged callback.
0680966160 : Put additional logs to check ResultReceiver created over limit
e8aff46701 : Provide the user ID when running IMS related CTS tests
2da6f4e8ff : Forcefully select satellite subscription before enabling satellite
3b06412fcf : [CarrierLock] Provided fallback support for Old API to read the carrier lock info.
6229e910bb : (APDS) Notify modem to exit EMC mode when call is cancelled
fc826b4e08 : (APDS) Adding a new param to set in-service state timeout for EmergencyStateTracker
7d79315519 : [Satellite] Reset the counter stats at the beginning of the session.
35d773f08a : Send sms to DatagramDispatcher only after checking all conditions
3779ace843 : Fixed the incorrect package name
076b637f26 : Add notification when satellite availability changes
82f7f565fe : [Satellite] fixed override datagram value reading in missed scenarios
3ae80c21f0 : Fix array out of bounds in updateNtnRoamingInHomeCountry
543afbbb57 : Notify registrants when satellite modem enabled state changed
51c370187c : Add getImsPcscfAddresses and update getImsPublicUserIdentities
e64a212020 : Add feature flag to define system API for IMS service.
0dc51ca969 : Update IMS resolver/binding code to support multiuser
5fa072f2c3 : Revert "Fixed the incorrect flag type"
67a07af8a5 : Google contribution item #12 - subitem #1 [VZW-Skylo] Check previous state when request to disable TN scan is completed
e20f8ab843 : Fix SMS tests if there's no messaging
a6b7d40601 : Fix some bugs in provisioning code.
e5a422f7c2 : [satellite] Modify to trigger callback.
923478b838 : Manual network selection handling when request satellite enable
a1a8c11a70 : Merge strList only if the list is not null.
2b859fcb17 : Fix subscription management tests on HSUM
4122fdd2da : Introduce Radio HAL version 2.3
8be99aa54c : Revert "Set forEmergencyCall of setRadioPower to false for norma..."
9f2029ff07 : Use getNationalSignificantNumber to get phone number in national format with correct number of leading zeros.
c029500dc2 : Create new flag satellite_system_apis
8f762c360f : Fixed the incorrect flag type
20e249d4bd : Migrate SmsMessageBodyTest to Android Testing Support Library
4668f52286 : Google Contribution item #12 - subitem #2, opt [VZW-Skylo] apply Idle Mode Scanning for Terrestrial Network
228e57cb7c : Introduce aconfig flag to protect satellite state change listener API
6874524709 : Modify code to avoid deadlock
f73776b46c : Implemented MT SMS polling. The codes related to resources can be replaced when Google implements carrier configs for this feature
f68d518b53 : [satellite] Added onCarrierRoamingNtnAvailableServicesChanged callback
563ed06552 : Changed to use AlarmManager to trigger screen off timer even when device is in deep sleep.
94f336557d : [Satellite] Reading the datagram value from carrierConfig.
e76c66d379 : Display auto connected to satellite notification only for auto connection.
acb8c9c446 : Fix for NPE when subscription is not found
1008502860 : Add logic to notify phone of the emergency callback mode changes
d86dec7d68 : Restart P2P_SMS inactivity timer when device receives SMS
19f9b23533 : Clean up 24Q3 aconfig flag reset_mobile_network_settings
c49ba93f77 : Allow TN scanning by ESOS/P2P SMS timer, even if the device is in CONNECTED state
f0d124cd5c : Fixed the incorrect carrier config for 5G icon timer
551940775a : Handle unsupported EAP-SIM/AKA
76cc01dd84 : Fix IMSI certificate download issue when user is not unlocked.
3ee399f82c : Add feature flag to public API for IMS service.
69084dd841 : Reduce data network validation result message handle.
3d744c2e84 : Stay in satellite mode when device found Limited Service.
551a15b0c6 : Support SubscriptionManager.canManagerSubscription for HSUM
13d8ea9b6a : Send response for update provision satellite token immediately.
dd8d6a032e : Don't start timer in voice call
7aa4515121 : Set unique message id to the one in the tracker if the tracker exists.
46bddf312e : Fix for crash due to IllegalStateException: Carrier config loader is not available
fd1379d098 : Clean up 24Q3 aconfig flag DATA_ONLY_CELLULAR_SERVICE
9c8d9d5dc0 : Do put demo mode messages into SOS count
7493a5ba94 : Modify the way of obtaining Wi-Fi connectivity status to asynchronous API
b3dca5ccca : Close logical channels when EuiccService impl app crashes / is unbound.
47ee129ab3 : Use getNationalSignificantNumber to get phone number in national format with correct number of leading zeros.
e7427c7cde : EVENT_CARRIER_ROAMING_NB_IOT_INACTIVITY_TIMER_TIMED_OUT separated into ESOS and P2P_SMS timer EVENT_ESOS_INACTIVITY_TIMER_TIMED_OUT, EVENT_P2P_SMS_INACTIVITY_TIMER_TIMED_OUT
b488855b71 : Fixed ImsiKey delete issue at pSIM eject time.
19cb2a6230 : Check isInCarrierRoamingNbIotNtn for the phone associated with SmsDispatchersController object
33ae865714 : Update SatelliteController to use DeviceState property API
c27c0e4a2c : Refactor ServiceState SPN Updating
4ec33e4a06 : Support bindServiceAsUser in Telephony UT
b06652d2d7 : (APDS) Handle the error of triggerEmergencyNetworkScan with FULL_SERVICE only
73ae62a6d4 : Updating the APIs for the emergency callback mode
bde0a6f271 : Prevents references to the part in the call termination process before creating an internal object.
ba639dd503 : Unit test fix for display-aware PowerManager API
5d94e6cfba : Revert^2 "Optimize euicc invocations by keeping channel open sessi..."
3fd4f41e0b : Fixed GbaServiceTest on HSUM devices
b635ad3796 : Revert^2 "Check if AID is equals to ISD_R_AID."
e577aea7c6 : [unpure revert] Do not immediately close channel on phone process restart
2f2d921603 : Fixed ANR during boot up
afd03aad57 : make silence emergency notification broadcast protected
55ff22ddf8 : Add unique message id to an outgoing SMS
d0160cd486 : Update to check carrier config
a0a0d94aaa : Add carrier roaming atom should be reported per subscription ID
63fb3ac428 : Add deprovisionSatellite
6f0e876b25 : Add flag for CarrierAppUtils bugfix
4afd6be946 : adb shell command to trigger phone.notifyCarrierRoamingNtnEligibleStateChanged API.
76b4f69fbc : Add count for SatelliteSession metrics when p2p satellite is disconnected due to TN network
54b97fbfa6 : Async Initialize CarrierPrivilegesTracker
9417c4688d : Add interface for CTS test to ignore cellular service state
ac0ac55636 : Fixed unit tests
57acd348a2 : Performance Optimizations for CarrierPrivilegesTracker
45859865d8 : Disable satellite when cellular modem enabled in Idle state
0296481739 : Revert "If the phone crash try to clean up the channel which was kept opened. If that fails then try to reuse the existing channel."
99ef67e7d6 : Revert "Check if AID is equals to ISD_R_AID."
5e273cf511 : Revert "Optimize euicc invocations by keeping channel open sessi..."
c083817e9e : Use current user context for CarrierServiceBindHelper
61740541b8 : Revert "If the phone crash try to clean up the channel which was kept opened. If that fails then try to reuse the existing channel."
289c55f335 : Revert "Check if AID is equals to ISD_R_AID."
52bb03ed54 : Revert "Optimize euicc invocations by keeping channel open sessi..."
6bd93036d8 : Revert "Optimize euicc invocations by keeping channel open sessi..."
aabe56832c : Combine isSatelliteEnabled and isSatelliteBeingEnabled
6f5988275e : add every emergency call case to satellite esos message recommender metrics
5bc58b55eb : Baseline global lint errors in telephony
92497e8a1e : Cancelling the notification once the satellite connection got disconnected
5bb40d1a3c : added eSOS test mode flag
203b45c207 : Add support for ApnSetting.TYPE_OEM_PAID and OEM_PRIVATE
c5353dfbf6 : Update API description for setDeviceAlignedWithSatellite().
f0bab6548d : Add feature flag for Emergency Callback Mode Notification
8c6a5bef60 : Clean up flag show_call_id_and_call_waiting_in_additional_settings_menu

+- Project: platform/frameworks/opt/vcard

2a4eef8 : Decouple contact account association from VCardEntry construction

+- Project: platform/frameworks/proto_logging

067b1f0c : [VZW P2P] Add atom max_inactivity_duration_sec.
6f86a64b : Added field for satellite atoms
83c26c76 : Add a field for satellite controller atom
d867dca5 : Add fields for carrier roaming nb-iot ntn service
f7b2846d : Add "has_promotable_characteristics" into notification atom.
26af8412 : Add protobuf constant for BT audio device type
18e8b65d : Add WCS auto bug component in WearServices enums.
1a74a763 : Add atom for camera compat mode setup latency
aae79b11 : Add events and data specific to Tile OEM metadata.
edbf9234 : Rename JankFrameCountByWidget Atom
c634c91d : Add atom field annotation for client-aggregated histograms
2005fdb0 : Add NB_IOT_NTN
b1b892f6 : Update AppFunctionsRequestReported
cc02577a : Add new events and states for LE connection metrics.
7e2641aa : Deprecate start/end time in PhotopickerSearchInfoReported atom
6b332d21 : Add logging event for "Notifications on lock screen" in Settings
c45545c1 : Additional proxy disconnect enums for when sysproxy service self-terminated
0ee2d67b : Update JobScheduler atom with abandoned job data
c9cca626 : Add ImeTracker hide reason REASON_HIDE_WINDOW_LOST_FOCUS
1c61fb9f : Add OWNERs file for /stats/enums/photopicker
04c70412 : [ranging(metrics)] Atom definitions
d85f2b1c : Add "OTHER" style into WS metrics for styles that are specified but unknown for WearServices.
f65246b1 : Add NULL_AGGREGATE_REPORT_GENERATED_SUCCESS_STATUS to the attribution Status enum.
5467046f : [ranging-metrics] Atom definitions
b3f3c73d : Add atom for AppFunctions logging
95fc75b3 : Updates to MediaProjection logging Atoms
07c9b2f5 : Extend the notification atom for information related to the promotion status
9489847d : Fix typo
f86f1a29 : Add number_of_exit_frames_supported to NFC extension atoms.
a673c2e1 : [ranging(metrics)] Atom definitions
c573aab2 : Last remaining PHR UI enums
d4798602 : Add owner to healthfitness logging
08b0c2c2 : Add FederatedComputeTraceEvent atom.
cd6ca1b7 : Added TeX specific benchmarks
31625457 : HDMI: Add atom for tracking power state change on <AS> lost setting
2ca3e2d0 : Add ImeTracker phases for multi window mode
baf88f9d : Add logging for LE Audio toggle enable/disable
ed4ef754 : Add DESKTOP_MODE_UNMAXIMIZE_WINDOW enum
87eccb14 : Add metric_id field to A2dpSessionReported atom
3cc7160b : Add telemetry express metrics for vendor vibrations
806ca256 : Modify enums for Desktop Windowing task resizing keyboard shortcuts.
51514a13 : Adds PpapiName and ErrorCode enums for FLEDGE APIs
ad4bb4d8 : Move keyboard actions from PageId enum to Action enum
ceb58b90 : Add MediaSubscriptionChanged atom to log an event when the watch changes the subscription for phone sessions
3bc2680f : Add metrics for new bluetooth device details
52593dc5 : Add CertificateTransparencyVerificationReported atom
eda9bd56 : Add atom for LE connection metric
e40a5da4 : Add physical address fields for HDMI CEC metrics in statsd atoms
4ce95968 : [PAS] Adding PAS encoding source type into EncodingJobRun atom.
14c99a40 : Add atom for LE connection metric
b0a0d383 : Adding logging enum for Settings Color Theme preference.
6e806830 : Add Settings for three finger tap customization
2cffceb0 : Add ODP API names into AdservicesApiName
75f7ecb5 : RFCOMM Metrics: Cherry Pick enum fix
db183391 : Add BackportedFixStatusReported atom
6f986d3d : Add DesktopModeSessionTaskUpdate TaskEvent for initialising state.
411bc679 : Revert APP_OP_PICTURE_PROFILE_APPLY
6cbcba6f : HealthServices: Remove TelemetryExpress metric.
899b3d68 : Add Comms connection types to WearCompanionConnectionState atom
ec0faecd : RFCOMM metrics enums: Follow proto best practice
c3413d00 : BluetoothMetrics: Adding enum variants for LE-ACL Connection
29a7fb25 : Update comment for OdpApiCalled atom to clarify that this is only to trace public APIs.
085899a3 : [TimeSync][Geo] Create new enum ItemId for location time zone detection
fa8c60c0 : Add audio hardening AppOps to proto
4ce79084 : Add APP_OP_WRITE_SYSTEM_PREFERENCES to app_op_enums.proto
4e61bf50 : BluetoothMetrics: Improve error tracking by adding enum variants for: * Mapping SMP error codes * Retry failures
20bfcaaa : Update atom proto iptables failure exponential retries.
0a246700 : [Physical Keyboard] Add setting logging enum
2a4c6ae3 : Add AppOpEnum for APPLY_PICTURE_PROFILE
efe23dd9 : Add metric_id field to A2dpSessionReported atom
7146d1c7 : Adding a new enum that enables cellular radio from ModeManager
7886d060 : Rename field in Classification adjustment atom
391b4c34 : Add PHR UI enums
8d34feca : Add new atoms for Health Connect.
aa9c2234 : Track wifi and mobile signal strength as a state in statsd
9b7c9c2b : Add search enum to photopicker atoms
34a0c79f : [Ranging] Add new permission for android generic ranging feature
5af20eb1 : Track device interactivity as a state in statsd
d87f54d5 : Update OWNERS for input protos.
c9dbf902 : Hide generated logger utility class to prevent API exposure
79ecd80b : Adds PpapiName and ErrorCode enums for AdService APIs
e765dcf9 : Update proto logging atom to log PHR data size to WW
1cfcae1d : Added TeX specific benchmarks
c66fc71d : Add java package to the atoms.proto performance extension.
1c2e34f5 : AppOps: Add stats proto for APP_OP_READ_OXYGEN_SATURATION
4d8c3db0 : Rename Immunizations to Vaccines - step 2
00efa62b : Update Communal metrics to include spanY
adb50e83 : Enum for new notifs setting screen
cea2213a : Add WearServices leads as owners of WearServices enums.
83a653fd : Add event for BT profile retry
497ca362 : Add a new field to IntentCreatorTokenAdded message
6c9cbeb9 : Add atom for Classification adjustments
5e8f5321 : More specific tracking of encryption key fetch failures during training. Adding more training events.
05848b45 : BluetoothMetrics: Map all the enum variants in ErrorCode to State
11508198 : Cherry pick rfcomm atom (v2) to aosp
a268c7c2 : Add new RFCOMM atom
c5df604b : Add additional constants to have higher granularity for notification latency metrics.
fcb42162 : Add metrics for enabling Linux terminal
0c4267aa : Define metric actions for measurement system.
75a59900 : [Ranging] Add new permission for android generic ranging feature
4e788287 : Add metrics for enabling Linux terminal
f3305693 : Add metrics for enabling Linux terminal
af450581 : Add OWNERS to stats/enums/bluetooth
d2bd7bba : ProtoLogging: Re-factor AppOpEnum into separately owned file.
6b634b3d : Add ImeTracker phase for setting client visibility on server side
c57cf456 : Add a new Atom for Health Connect to add new PHR related Usage Stats fields.
82216d0f : [SetupWizard Telemetry] Add default values for setupwizard enum definitions
bd9947d3 : Introduce CoreNetworkingTerribleErrorOccurred metric for tracking errors
9eb355c2 : Add proto logging for "Force Global Ambiactive" developer options.
f553830a : AppOps: Add stats proto for APP_OP_READ_SKIN_TEMPERATURE
e6aff447 : Updates to MediaProjection logging Atoms
780eff21 : Introduce CoreNetworkingTerribleErrorOccurred metric for tracking errors
173463ad : Add is_trigger_filtering_id_configured for feature logging.
51a32e2a : Update proto logging atom for private WW for PHR APIs
a09bfdf8 : Add enums for Desktop Windowing task resizing keyboard shortcuts.
e2dbd853 : Remove unused proto imports
e3bcc9d2 : Add new stop reason for maybe abandoned jobs
b583b57b : Fix typo in RemoteEventResponseEventType enum
884afd42 : Update ScheduleCustomAudience atom.
db357d15 : Add a new atom for exit from desktop by minimize logging
01cefce9 : Add IntentCreatorTokenAdded message
78871f1a : Add atoms for decoding images
2055157b : Add CarQcLib and CarQcLibEventReported
0a09104a : Add CarSettings and CarSettingsDataSubscriptionEventReported
fcb95c1a : Add CarSystemUI and CarSystemUiDataSubscriptionEventReported
9de4bc34 : Adding the profile level breakdown inside EventType
d9f813fe : Reserving IDs in pushed atoms for a future feature.
c614c5b8 : Add a new atom for drag-to-top maximize
c5e47c89 : Update ScheduleCustomAudience atom.
45d63c00 : Adding enum variants for BOND_NONE state in EventType and State
60def47e : Add ConscryptServiceUsed atom to Conscrypt extension atoms.
74c31efe : Add CertificateTransparencyLogListUpdateFailed atom
04aa94f9 : Add Thread 1.4 feature telemetry to threadnetwork atoms.
d163bcd9 : [Telemetry] Add a new field to the WsRemoteEventUsageReported atom to indicate the state of the remote event and introduce new remote event state enum.
2fb13570 : Add new ImeTracker phases PHASE_CLIENT_VIEW_HANDLER_AVAILABLE
12f498b6 : Extract PSI resource enum definition to a separate file to match g3 code.
b403fc89 : [Contextual Edu] Create an atom to log contextual education displayed
e4316bcf : Add atoms for ScheduleCustomAudienceUpdate
f12e2ab1 : Add enums for PHR UI logging
bcf96bf1 : Add proto logging constants for PHR APIs
b5940bcc : Add MediaControlApiUsage atom to wear media extension atoms proto.
3e34d843 : atoms: add traced trigger atom
a67b3534 : Revert^2 "Add codegen for supporting UIDs for bootstrap atoms"
89f29b31 : Formatting change to fix HEAD
6064ca77 : Revert "Add codegen for supporting UIDs for bootstrap atoms"
69b8ef32 : Add retry failure info to SysproxyServiceStateUpdated atom
6ebc8c05 : Add one more task type for OdpTraceEvent atom.
26efab47 : Reserve atom for conscrypt cipher usage metrics
aeed7b3e : Add codegen for supporting UIDs for bootstrap atoms
e51b2ba5 : Add ACTIVITY_INTENSITY to the HealthFitnessEventType enum.
9d1f92a5 : Combine bugreport atoms into a single WsBugreportEventReported atom. Add duration field.
d973a2b9 : atoms: add traced trigger atom
f3fc7fe8 : Update atom and add new atoms as per discussions with HC PCounsel and PWG-DS.
bded7ab8 : Update the prefix of the API enums' value for the API stats
2fc07707 : [SetupWizard Telemetry] Add atom definitions for SetupWizard telemetry
efd1326a : Add DOWNGRADED_PRELOADED and UNINSTALLED_MBA to MBAStatus enums for BICS to track uninstalls
5a390677 : Add OnDevicePersonalizationTraceEvent atom.
74fc5d8d : Mark the job scheduler TEX metrics as disabled for now.
7d9b4579 : Allow disable TEX metrics collection by marking it as disabled.
2614c994 : Create atoms for AccountManager.
5a8e2cfc : Add keep in sync comment to ProcessState enum
bd8c1c42 : Add Atom to Log App Jank
332e9813 : Add a new atom for resizing a task to fit stable bounds by clicking on the maximize menu that appears when the app header button is long pressed or hovered over.
77744ee3 : An atom for sysproxy service state updates
a60c3ddc : Add DesktopModeSessionTaskUpdate TaskEvent for initialising state.
83ff19b1 : [SetupWizard Telemetry] Add enum definitions for setupwizard telemetry
9b6cf65c : Add Thread 1.4 feature telemetry to threadnetwork atoms.
7fac6150 : Create wear_frameworks cfg with power and stem key express metrics
e4f8ee5f : Add attribution-reporting list source registrations shell command to enums
b0667882 : Improve documentation of the CpuPolicy atom.
a463d768 : Add proto APP_OP_READ_HEART_RATE
e5f3dc45 : Adds missing CEL enums for persistResult
ffd0b06c : Add atoms for HDR graphics metrics
89f57981 : Update enums for Telecom error
e2d9e952 : atoms: sync with changes from aosp/3291145
a790ee06 : Add atom for wakelock uptime
e3f6d102 : Add BiometricEnumerated and BiometricUnenrolled atoms
e3dd8d6f : Add a new atom for AppOp checkOp and noteOp call logging
b7e6fdd1 : Add EXTERNAL_SCORER_DRY_RUN to SettingName enum
4ade8fea : Add enum for move to next display shortcut
5317455e : Add a new api name enum atom.
54c1b03a : Update first overlay state change protos
80d072af : Adding cel entries for package deny status
477c6ed9 : Support string-array type value for bootstrap atom
f53ed55e : Add expresslog key input.value_rotary_input_device_full_rotation_count
1a50613c : Add Thread 1.4 feature telemetry to threadnetwork atoms.
ee730b61 : Add owner for FCP enums
73be9e24 : Create an atom to log the tutorial entry point
9a8198fc : Add CONTACTS_STORAGE enum type for new Contacts Storage settings page.
f2fb09f1 : Change type for other available compatibility versions
0341a768 : Create atoms for notification managed dismissal syncs
0b97e9bf : Add proto logging for "Bounce Keys" feature
497ea716 : Add metrics for device details more settings fragment
43a078cb : Modify TvSettings enums for "Keyboard accessibility" page
5162465e : Annotate Credential atom uid field
c8cd026a : Add DeviceIdleTempAllowlistUpdated atom
da3a110a : Fix indentation for WsBugreportRequested.
d51243d4 : Adds stats to log for App Functions
93caf76f : Add support for AC-4 level 4

+- Project: platform/frameworks/wilhelm

50a2ae1 : Use DeviceIdVector in OpenSL ES
ecd0123 : Take local copy of CallbackProtector in Callbacks

+- Project: platform/hardware/broadcom/wlan

929826b : C23: bool/false/true are keywords.
db94040 : bcmdhd: Fix driver command output issue
fec7fd1 : Fix to parse legacy num rate values.
2ecfee8 : C23: bool/false/true are keywords.
80e2947 : Convert lib_driver_cmd_bcmdhd to soong

+- Project: platform/hardware/google/aemu

983aa43 : Add VulkanVirtualQueue feature flag
9b11b31 : Add missing host feature control items
4a37108 : Save and load more virtio gpu info
28aa941 : Fix: Prevent nullptr exception in AsyncSocket callbacks
b4e61f9 : aemu: remove unused files
8e69507 : aemu: relicense to MIT
a1219f2 : ASG: Avoid double save/load for external memory
fd387cd : Allow ASG client to provide external mem resources for load
533d7ae : Update ASG context loading to handle combined allocation
2a8b72e : Track buffer size for ASG blocks
1253077 : Remove duplicated header
3ed7a3a : Port protobuf matchers
1f87fa1 : Determine cluster displayId based on the feature flags
3a5fd8a : Reapply "Add methods to check multi display state"
394ce20 : Update the feature control ID for BypassVulkanDeviceFeatureOverrides.
f8d87dd : Update the feature control ID for BypassVulkanDeviceFeatureOverrides.
b34ab50 : Revert "Add methods to check multi display state"
3791192 : Add BypassVulkanDeviceFeatureOverrides flag
00b239c : AlignedBuf makes sense only for trivial objects
86670a6 : Rewrite resizeImpl
4296d45 : Fix async socket concurrency issues.
e65bcae : Implement logforwarding getter for abseil logger
6b2cd7a : Add methods to check multi display state

+- Project: platform/hardware/google/apf

df0b7f0 : Add Python script for generating sample APFv6 mDNS offload program
67e3b59 : Fix: Correct packet_buffer length field type to uint32_t
a10b48a : Adjust apf_run.c for test_buf_allocator.c/h data structure change
a52885c : Support multiple transmit buffer allocation
888b723 : bool/true/false are keywords in C23.

+- Project: platform/hardware/google/camera

cf89b33 : AidlCameraProvider: Modify camera device init check logic
0fa2e7a : EmulatedCamera: Add AE priority mode support to emulator
c45bd41 : Replace misc DummyXXX with inclusive words
47639fd : Replaced ProfilerDummy with ProfilerEmpty
f133158 : Replaced DummyBuffer with PlaceholderBuffer
ddfb72d : Add a TODO to filter VMAs.
ee228c8 : Move camera HAL ready flag setting to camera device creations
5287d7d : EmulatedCamera: Add support for color temperature feature
c8f18d3 : PendingRequestTracker: Wait request buffers indefinitely
7095a91 : Rename trunk stable flag
f0abd32 : Delete system rpath from libgooglecamerahal

+- Project: platform/hardware/google/gchips

ccf2b69 : gralloc4: Rename the included utils header
0fb53e2 : gralloc4: Allow YUV formats to be allocated with NO_COMPRESSION flag
4856cb2 : gralloc4: Add GPU read support for Y8
4a259fe : Convert libexynosutils to soong

+- Project: platform/hardware/google/gfxstream

61a04b407 : Fix crash in registerRenderPassBeginInfo.
82dfc7a71 : Use VulkanVirtualQueue feature flag
a9a7e0610 : Initial implementation of multiple queue emulation
8d9b0437e : E2E Gfxstream test for Double Queue Gfxstream
f2ee51168 : Use try_unbox in VkDescriptorBufferInfo
5d98ff084 : Fixes for queue submission
6a46f839e : Replace vkWaitForFences with vkGetFenceStatus
9e14c2da9 : Add timing threshold for operation tracker logs
ee3e877d7 : The BumpPool of VkStream is not freeAll'ed
796b7d348 : Pass command pool as a reference to fix erase errors
dfaa6a770 : Modify hw gpu query api to get driver version
879e752fa : Fix command buffer information tracking for pool destroy calls
d6e97fba3 : Minor cleanups in VirtioGpuTimelines
d44d1ee81 : Provide virtio gpu resource memory to ASG before load()
a18261732 : Remove the extra guard around snapshot/restore
f6dc874e3 : Move fence completion callback out of VirtioGpuTimelines::Fence
db0a4fe09 : Do resource teardown after the queues
caf6c1c52 : Log and handle waitForFences errors before flushes
7f151387d : Enable NV diagnosticsConfig for guest devices
9107b3af0 : Improve logging for VK_CHECK errors
79073d83e : use finer lock in FrameBuffer for ProcessResource
4e18acff6 : Implement queue related wrapper functions
90e4ecd40 : device tracker: clean up finished fences regularly
d3ac9dcc1 : Change C style cast on extension structs
00f2411e3 : clean physical device
ca5505106 : clean up fences and queues upon destroyInstance
975fa490a : gfxstream: delete guest output directory
12a6feee9 : Try handling timeout errors on readColorBufferToBytes
3c57698c1 : Add buffer size checks for color copy operations
f7641b222 : Add validation checks for duplicated handle usages
f3819240f : Remove nested loader option from VulkanDispatch
0be68583f : Enable private_data extension when supported
f7006dca3 : gfxstream: depend on mesa_gfxstream_aemu_headers
e27549828 : Remove outdated doc
bf2f01618 : Update RenderThreadInfo saving
d895dd7c3 : Add initial stub for saving/loading ASG device
050fc9550 : Reset objects on restore()
7ba0ab5f2 : gfxstream: Use context-specific ring for non-VK EXPORT_SYNC as well
3e073ad03 : Update auto-generated comments.
7c1c14d22 : Fix struct id for VkImageCreateInfo
b32fb06f0 : [Vulkan Snapshot]: handle DescriptorSet allocate and update
699158abd : [Vulkan Snapshot]: avoid double boxing commandBuffer
a57eef23e : Avoid one string copy
d7a6c8c1b : Remove snapshot tests
ec19c677b : Move snapshot/restore() into VirtioGpuFrontend
82f7407e1 : Fix default behavior of vulkanBatchedDescriptorSetUpdates from stream-renderer interface
e87851c7f : Move more of context handling into VirtioGpuContext
e750cf1aa : gfxstream: opengles: nuke sOpt
c406dfdd7 : Fix format specifier
273f5f034 : Restore context ASG handles
8534e5ddb : Save and load resource type
2a3d8ddcd : Handle caching for blob descriptors
c0649a6a5 : Avoid using dynamic string on imagecreateinfo transform
e9bce9620 : Handle undefined formats during image creation
d1c25bfff : Disable inlineUniformBlock support only with batched descriptors
88d03f2f9 : Log snapshot related error messages only when enabled
d4e8b7211 : update VkDecoder to special handle vkCreateComputePipelines
810c16aa7 : Avoid double loading EmulatedEglFenceSync
4d94b0813 : Fix vkCmdSetLineStippleKHR decoding
875145234 : Keep VK_EXT_line_rasterization in codegen
b8ecec869 : Disable VulkanBatchedDescriptorSetUpdate on Host
4436f3697 : Remove unused `fence` field
79a5203c7 : Use VK_EXT_external_memory_metal on MoltenVK mode
3d6cf83ca : Replace stderr logging with appropriate logger routines
b33f4b1f3 : Remove 2 unused fields from VirtioGpuContext
002750b47 : Add Vulkan feature dependency handling
6c26dd541 : Save and load virtio gpu context ids per VkDevice
8bcb9d22a : Use try_unbox where the error can be handled
8094eb5f8 : Fix resource type check
9ca0b9c7c : gfxstream: fix meson build
838dd1d61 : gfxstream: rerun codegen from external/mesa3d
c5cb32ed1 : gfxstream: nuke duplicate codegen directory [again]
0f475320b : [owners] Remove yahan@google.com from OWNERS
8647a4c78 : Remove unused mResourceContexts
1362764d5 : [Vulkan Snapshot]: fix AddModifyApi and ClearModifyApi in VkReconstruction
65e6b40e2 : Add liblog to static libraries linked with perfetto
bbc62afdd : Change log level to info for vulkan device selection
d4104bb98 : Snapshot and restore more blob types
e9cb672b8 : Move more of resource handling into VirtioGpuResource
9fcd70a4d : Make resource create args std::optional<>
d25a4d35c : Snapshot ring blobs
e7770cda5 : Use std::vector for iovs
c804d27dc : Start to snapshot the gfxstream frontend state
09279ffea : Separate some virtio structs into separate files
36b82f187 : Rename "pipe renderer" objs to "virtio gpu" objs
553e3aa64 : Add suspend/resume stubs
d41a3b97b : Update autogen for VK_KHR_line_rasterization
1659da36c : [Vulkan Snapshot]: handle multi-allocation better
00f66d1de : gfxstream: Remove shared_egl_context APIs from AndroidVirtioGpuOps
7cd7430ef : gfxstream: Remove platformImportResource from AndroidVirtioGpuOps
cc41b4d59 : gfxstream: Remove wait_sync_color_buffer from AndroidVirtioGpuOps
669bfbc25 : gfxstream: Move wait_sync resource interface to the unstable API
d46c53ad0 : gfxstream: Remove all platform_resource_info interfaces from unstable API
96ce3a406 : Revert "Add custom decode on fence status and wait functions"
1d5969851 : Add BypassVulkanDeviceFeatureOverrides flag
d908ba94c : Disabling the InlineUniformBlockFeatures for the purposes of CTS testing
f2e71aa81 : Revert^3 "Signal the fence after color buffer copying"
c5f6b0355 : Revert "Use the right wait object on queue submit"
359735723 : Revert "Fix sync issue with fence in queue submit"
40e2910e1 : Add custom decode on fence status and wait functions
ba751f1b8 : Update decoder.py to use try_unbox on destroy calls
fbc1842c7 : Typo fix in glGenRenderbuffers
da1e9f228 : Add device memory size parameter for GPU checks
4943dccf1 : gfxstream: modify script to point to gfxstream version of codegen
75be2fcc8 : gfxstream: re-add codegen for gfxstream
3c3df58dd : Add YcbcrConversion to view creation when needed
82b54fc03 : Fix sync issue with fence in queue submit
5252eeb4b : [Vulkan Snapshot] Introduce try_unbox_* for vkDestroy*
3a236db6f : Remove relative tmp folders on autogen files
c48d0b458 : gfxstream: remove references to Mesa in host build
08f5c21eb : gfxstream: fix scripts/generate-gfxstream-vulkan.sh
ba13990b9 : Add dimensions and format for ColorBuffer logging
9d48b8616 : gfxstream: fix meson build
53c2d192b : Further disable private data support
9182fd599 : Add auto-generated guest files into source tree
85a74c27c : Use the right wait object on queue submit
808604b08 : Revert^2 "Signal the fence after color buffer copying"
a516ce465 : Re-enable verbose logging with ANDROID_EMUGL_VERBOSE

+- Project: platform/hardware/google/graphics/common

38fbf82b : [hwc3] getLuts() AIDL interface
585124f5 : Make memtrack service run on small core
50c6756a : libhwc2.1: do validation when display port is plugged/unplugged
64372d59 : libhwc2.1: clear DPP assignment when DP is added
fdfe3157 : libhwc2.1: fix eDebugDisplayInterfaceConfig logs
66efed75 : Convert libexynosdisplay to soong
a71c6384 : Revert "hwc: Register hwc3 hotplug and HDCP callbacks"
e14c6b0d : libhwc2.1: reopen brightness sysfs
9e757dde : libhwc2.1: add traces for proximity sensor state notifier
a731e3da : hwc: sync with new API to start HDCP
a3ebef02 : Support YUV_P010_PACK32
0c45fc46 : Fix build break if convert libExynosHWCService to soong
4411a844 : Fix build break if converting libexynosdisplay to soong
4b407292 : hwc/displaycolor: add Debug interface
a44849a0 : libhwc2.1: don't skip expected_present_time setting at mode switching
87807684 : hwc: libvrr: Fix bad_optional_access Issue with Minimum Refresh Rate
6e9df937 : hwc: libvrr: Fix bad_optional_access Issue with Minimum Refresh Rate
31849b51 : hwc: libvrr: align minimum refresh rate setting with present cadence
164c7c08 : gralloc-utils: Adding API to get stride alignment
cf09c26d : libhwc2.1: notify prox active immediately
14d4fe65 : Add unsupported stubs for IComposerClient getMaxLayerPictureProfiles
bbdbffbb : hwc: libvrr: stop handling fixed refresh rate setting after power-off
7d6c1546 : Add GOOGLE_RGBX16 format
b59acc25 : hwc: libvrr: Set minimum refresh rate only with correct configuration
8eb74401 : Convert libdrmresource to soong
df58c71a : Fix build break if converting libexynosdisplay to soong
73d33036 : Convert libacryl to soong
cc2760c7 : libhwc2.1: force a color update if DRM atomic commit error occurs
512159c6 : Add support for BGRX
a0afa82b : libhwc2.1: drmmode: support vscan based VRR mode
3dd1661a : hwc: Register hwc3 hotplug and HDCP callbacks
7c02a588 : Revert "libhwc2.1: disable 120 Hz TE modes for built-in display"
9bc58ccb : Convert libexynosv4l2 to soong
7983ddf3 : Convert libexynosscaler and libexynosgscaler to soong
eb1462e4 : libhwc2.1: Adding dumpsys for RR
193afbc6 : Convert memtrack.$(TARGET_BOARD_PLATFORM) to soong
1bc292fd : avoid std::set and std::list of const T
b042a38c : displaycolor: add more control flags in LtmParams
30782744 : libhwc2.1: fix frame insertion timing
9ce3b8f5 : libhwc2.1: disable 120 Hz TE modes for built-in display
3ba9f438 : libhwc2.1: return an invalid release fence for client composed layer
50a5f707 : libhwc2.1: enable multithread_present for DP
bba4410c : Revert "libhwc2.1:remove 240Hz TE configs"

+- Project: platform/hardware/google/graphics/gs101

96cbebe : Convert hardware/google/graphics/gs101/libhwc2.1/Android.mk
5168b18 : libhwc2.1: force a color update when enabling DP
47efbf1 : libhwc2.1: update the setForceColorUpdate function

+- Project: platform/hardware/google/graphics/gs201

b99d3fa : Convert hardware/google/graphics/gs201/libhwc2.1/Android.mk

+- Project: platform/hardware/google/graphics/zuma

9801b1e : Convert hardware/google/graphics/zuma/libhwc2.1/Android.mk
b3229ce : Fix build break if converting libexynosdisplay to soong

+- Project: platform/hardware/google/graphics/zumapro

3340204 : Convert hardware/google/graphics/zumapro/libhwc2.1/Android.mk
45b344d : Fix build break if converting libexynosdisplay to soong

+- Project: platform/hardware/google/interfaces

41fc43e : Update dependencies on graphics.common HAL to latest
dcdbe36 : jpeg: Improving readability of IComponentCallback
301e3c9 : Update dependencies on graphics.common HAL to latest
6c3306e : jpeg: Adding error codes to Image Codec AIDL
d05dfea : jpeg: Removing unused parcelables from Image Codec AIDL
abc13c0 : jpeg: Removing name param from createComponent in IComponentStore
010a678 : Update dependencies on graphics.common HAL to latest

+- Project: platform/hardware/google/pixel

9cde05ea : pixelstats: add aacp_version and aacc to charging session
79e54d36 : ADPF: Fix the lock race issue for send hint.
a8ff8d3f : pixelstats: disable report higher cycle count
185acc01 : ADPF: Only reset heuristic boost if target duration's change is large.
45d33b2d : Powerhal: Fix the broken pause and resume test.
a0430168 : Powerhal: Add sessions' jank frames into buckets.
7662bd94 : Add CallUsageStatsReported atom.
e0cab3a1 : preupload_hooks: Add powerhint config JSON checker
d49924b9 : preupload_hooks: Add thermal config JSON checker
06b2874c : pixel: Add Preupload JSON field names checker
6d74cb6d : Add Power field to MediaPlaybackUsageStatsReported atom.
f171b03f : Use different ADPF task profiles
3a4a1b24 : Add CPU/GPU_LOAD_SPIKE hints for one-off expensive workloads
76585344 : pixelstats: support m5_model_state format
0e423e9a : Remove the interval methods from Pixel default
ae145688 : [Audio Metric] Add Media Playback atom.
13b903c9 : powerhal: Detect the game mode SF FPS and jitters.
244e3675 : powerhal: Store the game mode state into power session manager.
59805351 : thermal: Add Preupload JSON Schema Checker
3fdefa72 : Add stubs for ADPF Timeline API
cf22a6f1 : ADPF: reset session's heuristic boost states when target duration changes.
7586c4ea : pixel: use health v4
445cad62 : Bump Pixel power hal version to v6
4b22d855 : Change Pixel default to use headroom result types
55654e64 : pixelstats: Add WaterEvent Reporting
7d7f9580 : Add GRAPHICS_PIPELINE mode to Pixel PowerHAL
4a0dce5a : Add libperfmgr tests to device-pixel-tests suite
2bad70fa : thermal: Add cpm ftrace events to atrace categories vendor file.
21c56cab : Implement Pixel default for forecastSkinTemperature HAL API
c34f8a01 : Add dummy support for CPU GPU headrooms in Pixel
16885007 : pixelstats: add sys-evt fields
5142c759 : pixelstats: dynamic history entries to adapt to different EEPROMs
9430835f : Fix support checking for non-adpf
ec30fa0d : Bump Pixel thermal hal default version to V3
89d9d281 : Fix null ptr deref in fmq
34aadcc6 : pixelstats: add AACR algorithm to charging session
2f218b57 : Revert "Move libperfmgr tests to device-pixel-tests suite"
8f67de7c : Fix VTS issue by ensuring that we always wake waiting writers
d9c00c5b : Move libperfmgr tests to device-pixel-tests suite
b57036ed : pixelstats: remove old EEPROM format
bd2339a1 : pixelstats: optimized scan string format
c10977c7 : Build mipc_util and ctpm-next gen for Alcedo modem
50057237 : Revert "Revert "Add HAL method to return SupportInfo object for ..."
acd8826b : Revert "Add HAL method to return SupportInfo object for PowerHAL"
7904efe5 : Add HAL method to return SupportInfo object for PowerHAL
c8cb8ce6 : Add Pixel FMQ implementation
90ff7090 : ADPF: Use a separate ADPF profile for SYSTEM_UI sessions.
cb41ec0b : Revert "Recovery: Reset slot attributes in preWipeData Hook"
b978309b : ADPF: fix broken session pause/resume unit test
e06bf992 : thermal: update error reporting for thermal sensor reading
a4821114 : Reland "ADPF: check whether a session belongs to SystemUI and Launcher."
1dd592d9 : thermal: support stats for future temperature predictions
e36d65aa : Add TEST_MAPPING file for libperfmgr and libapdf
4603aff2 : Revert "ADPF: check whether a session belongs to SystemUI and Launcher."
5ce5ff4d : Remove condition to not write DST transition when it is zero
3c1d2e07 : ADPF: check whether a session belongs to SystemUI and Launcher.
55fe2924 : Refactor isModeSupported and isBoostSupported
73c8a325 : reduce EagleEye size from 2000 to 32
ff101047 : thermal: only print error log when the invalid state2power is detected
32611c55 : misc_writer: support set/clear disable-faceauth-eval flag
d76f7cbd : pixelstats: add battery EEPROM pairing state
4b52f17c : Revert "Updated PixelAtomsTest build target"
cd1078c6 : Revert "Support AUTOMOTIVE mode and DISPLAY_IDLE combination"
f965fd57 : vibrator: Remove vibrator HAL
46a2d237 : pixelstats: force upload history metrics after recovery
8dae9809 : thermal: Add thermal profile check
9538b2c1 : thermal: Add thermal profile check
85978473 : Updated PixelAtomsTest build target
811b21b6 : remove read misc entry
639b9bcd : thermal: I_windup optimization
bf2033c0 : Update thermal HAL to v3
9f622760 : pixelstats: upload the read amount of dm verity partition
04f63e6d : pixelstats: support higher battery cycle_count
6c3bebc9 : Set dexpreopt and dexopt filter for SystemUI

+- Project: platform/hardware/google/pixel-sepolicy

998c75a : Add sysfs_ap type to hardware_info_app.te

+- Project: platform/hardware/interfaces

cbf7ccf033 : Undo NAN periodic ranging changes in the legacy HAL and AIDL conversion method.
be3797e07a : Update defaults for IVehicleGeneratedHeaders-V3.
c7d0feb604 : Add AIDL API feedback
04f549a12e : Add check volume group id uniqueness
bda042a77e : Added VEHICLE_PASSIVE_SUSPENSION_HEIGHT to HAL
993f20bbb3 : Added BRAKE_FLUID_LEVEL_LOW to HAL
1d36d9498b : Added BRAKE_PAD_WEAR_PERCENTAGE to HAL
de3fe3d129 : Added BRAKE_PEDAL_COMPRESSION_PERCENTAGE to HAL
5d829ef2ed : Added ACCELERATOR_PEDAL_COMPRESSION_PERCENTAGE to HAL
8acbbe8d96 : Added VEHICLE_DRIVING_AUTOMATION_TARGET_LEVEL to HAL
444dca1f76 : Added INSTANTANEOUS_EV_EFFICIENCY to HAL
dcf5b0749e : Added INSTANTANEOUS_FUEL_ECONOMY to HAL
042f410d08 : Added VEHICLE_HORN_ENGAGED to HAL
bd85b67e30 : Import contexthub v4 from bluetooth socket hal
46998f29d2 : Added SAE_J3400 enums to EvConnectorType.aidl
f82e4a031c : Added TURN_SIGNAL_LIGHT_STATE and TURN_SIGNAL_SWITCH to HAL
781f66aeb2 : Revert "audio: update setAudioPatch"
5e9418269d : Remove ValueRange.
e40ca320ee : Add min_sdk_version to vehicle HAL property protos Rust library
dac3abf954 : audio: update setAudioPatch
b4f5c0fc8e : health HAL: update description + naming
373f1a4624 : Add NB_IOT_NTN
370dccf2b4 : [MQ HAL] Add a method for supported vendor parameters
4f4aa42de9 : security: see: hdcp: add HDCP Auth Control interface
be34d280b2 : Add sendParameter API in IMediaQuality
bb9a4e834d : Deprecate eHRPD and IS-95A/B (CDMA)
f8d660a8d7 : Move getParameters API in adjustment listener
7022279564 : Revert^2 "SensorHAL: add moisture detection"
919258a586 : Revert "SensorHAL: add moisture detection"
3e735b0706 : Introduce IVmCapabilitiesService HAL
f6b7ba75ad : Add strict equality check for VB key digest length on VSR-16+.
9120ebd483 : Add HAL APIs to support sim type switching on 2pSIM+1eSIM configured devices
ff048f289e : [Lut HAL] add getLuts interface
f89b1bbbc2 : Add more picture parameters part 2
ab1ebb2f7c : Move get picture/sound parameters API to a callback
c823233a19 : Implement secretkeeper HAL v2
5010ef1302 : Add CPU/GPU_LOAD_SPIKE hints for one-off expensive workloads
0288c1fda6 : wifi: Add a new feature capability to indicate multiple MLD supported
3e2ef57bb2 : Add support for NAN periodic ranging in AIDL
240c1d4155 : AIDL effect: add draining state support
f53a9810b4 : Update owners for power and thermal hal
652b88931a : Deprecate remaining NV APIs: all reset types except reboot
8127b300da : SensorHAL: add moisture detection
11420445a4 : Support effect destroy at any state
20ad88b4c5 : hwcrypto: Removing batch key definition
b0659edcfe : Add VTS test for attested "rootOfTrust.verifiedBootKey" field on VSR-16+.
3d2c98646d : Skip VtsAidlHalDrmTargetTest if bootloader is unlocked
542b55c6ed : Camera: AIDL changes for desktop_effects metadata
922576ff7e : Add audio eraser effect AIDL and placeholder implementation
ba2e953dae : Add additional picture parameter
778dcf2783 : Add sent default picture/audio parameter API
7eaecec7db : Added INFO_VEHICLE_SIZE_CLASS to HAL
f02d675483 : Added INFO_MODEL_TRIM to HAL
00283f81b4 : Provide an incomplete V4 VHAL default implementation.
808bb6d04d : Separated current and V3 VHAL impls
098fc8e41c : Mark the pasnComebackCookie field as @nullable in RttSecureConfig.
1979606ff8 : [MQ HAL] Add parameter capability APIs
d0674baba2 : Change the return type to null for the synchronous USD publish/subscribe methods.
8f594b5b0f : audio: add IAMF formats to schema
3caebd9b10 : Add volume group addresses check among volume groups
334419a58e : Incorporate Native API council feedback into the Wifi HALs.
b599bf3850 : Add default volume group activation type to volume audio control HAL
8e62b7be7e : Update definition of timeOfClock in ephemeris.
16d8193a06 : Add loose match to ASE requirement matching process.
6e8bfe7b05 : Update VtsHalPowerTargetTest to not break
54ae756963 : android.hardware.automotive.vehicle-default-service_fuzzer: Add signal() to handle SIGPIPE
b1b886d553 : android.hardware.broadcastradio-service.default_fuzzer: Add signal() to handle SIGPIPE
35426be982 : Add test cases for BluetoothLeAudioCodecsProviderTest for audioLocationInt parsing
0041aaa12c : Populate `audioLocation` from XML file for LE Audio codec provider
487f0c6ac1 : Add integer `audioLocation` to better support different LE Audio audio location configurations
1b2bc18d62 : [MQ HAL] Add set profile APIs to TV input
364f426840 : [RESTRICT MERGE] matrices: automotive.audiocontrol in 7.xml has max version
9caca7e7f0 : Specify the expected contents of "verifiedBootKey".
7dcdd5b9c8 : Add manifest fragments for all KeyMint versions
d341d6f05b : DynamicsProcessing: Allow uninstantiated limiter config data test
bdc6bb500b : Fix formatting, use consistent comment styles, and document more fields.
f85827b7dd : Add VTS for Opus via LE Audio
f9f91d4bc3 : Allow OPUS codec on LE Audio.
1752075607 : Remove headroom selection type and move interval to support info
fd051ded3d : Add an explicitly v4 manifest fragment
ecbf92e035 : Fix thermal default impl to return values
bbf34973dd : Spatializer AIDL: add spatializedSpeakerLayout
be8d179d50 : Add new satellite APIs for cellular modem.
ee9629f686 : Add APIs for Auto picture/sound quality and Auto super resolution
7889729c73 : hwcrypto: Moving hwcrypto files out of staging
f2946ab5c0 : Perform sanity checks on generated P256 points
27c2a2da65 : Reorder RenderEngine and LayerSettings in VTS
58d25d22e7 : Add picture and sound profile notification for manage media profile
16b799f63a : AIDL effect: add draining state support
ecf6999cfe : Define detailed not available status.
1b9c98349f : Move ApIfaceParams directly into the IWifiChip interface.
e3a1fa5e0a : [Composer] Fix compiler warnings
7155b67e76 : Update NFC AIDL module to target SDK 35
1c577a575f : Add new USD callbacks to the supplicant STA iface VTS test.
3ee2e5f08b : Add additional comments for the Service Protocol Type field.
54ac54c8c0 : Add Unsynchronized Service Discovery (USD) methods to the vendor Supplicant AIDL interface.
d0e95016b2 : Add VTS test for ranging hal v2
f1df387332 : Don't pass ATTEST_KEY for symmetric key generation
6d825fb225 : Don't pass ATTEST_KEY for symmetric key generation
38717c3905 : wifi: Supports to remove link for MLO SAP
652ac523d3 : Add channel bandwidth into MloLink
ca31d80019 : Graphics: Migrate graphics.common HAL to version 6
88f92d2b91 : Support for BLE assisted P2P discovery and pairing
a175f70d7c : [Composer-VTS] Test to verify MRR and ARR modes are mutually exclusive
106facf4c1 : Add a new API to start HDCP
b4be7d0471 : Reduce end session delay for default IVibratorManager
abc94641cb : DynamicsProcessing: Add tests to validate Limiter Config Parameters
6410409568 : Updating comments for PWLE v2 APIs
b939a4b769 : [Lut HAL] documentation update.
b6beba8e18 : Update GnssAssistance AIDL to match framework changes.
b0d667e8fe : Update CPU GPU headroom HAL API
b7ba9a5d5c : Add dirgroup for security/see
c0df841989 : Move SecureStorage interface out of staging
e99d381be1 : Update VTS for Bluetooth Socket Offload with RFCOMM
9a85540f87 : Update default implementation for Bluetooth Socket Offload with RFCOMM
dde33f2109 : Update HAL interface for Bluetooth Socket Offload with RFCOMM
1a69394159 : Store the min callback interface version in the Vendor HAL.
0efc748b1b : Add Power HAL changes for ADPF Timeline API
d98818c8b1 : HDR: Add HDR type to DisplayConfiguration
8328b0068a : Fix emergency radio alert AIDL doc
224691ea66 : Modify `Modules` documentation
d366cc3963 : Add fuzzer for bluetooth socket AIDL
8959c5a766 : Add VTS for Bluetooth Socket Offload
caec34bdce : Add default implementation for Bluetooth Socket Offload
206ad4d1f3 : Revert^2 "Audio CAP: Address ANAPIC comments, Part 3."
5054e3bc73 : Add unsupported check for getMaxLayerPictureProfiles
eb5b4885e2 : Add HAL interface for Bluetooth Socket Offload
b1553a0987 : Health HAL: add hinge info
50f4e8a966 : Revert "Audio CAP: Address ANAPIC comments, Part 3."
a5ca39b828 : Remove the comment about the return value of IContextHub.createSession.
46df15be3d : Define AuthMgr API for client authorization
472e417b0f : audio: Add latest_android_hardware_audio_core_rust
46a1fab5cb : Plumb buffer latencies and present fences out of Composer HAL (hardware/interfaces code)
38a805eec2 : [LUT HAL] add CIE_Y sampling key
e63166dc51 : Fix more AIDL warnings in Radio HAL and lock it up
eb69354d0e : Add moduleHash to attestation cert documentation
9eb548c0ee : VisualizerTest: Improve testing for visualizer effect parameters
3f803e7b82 : Revert "check whether the network interface exists before using it"
2be5ae8948 : Add GRAPHICS_PIPELINE session mode
a5faa56a62 : check whether the network interface exists before using it
3b5572e131 : Audio CAP: Address ANAPIC comments, Part 3.
314448138f : Add method to commit during A/B update
32bd832e96 : Adds logging for the default GNSS HAL.
550a244d7f : Implement gain for PCM types in default audio HAL
78e0caf7c5 : audio: Add latest_android_hardware_audio_core_rust
501b63b0d0 : Specify the use of SHA-256 for the "verifiedBootHash".
18935386be : Skip VtsAidlHalDrmTargetTest if bootloader is unlocked
0ce9f359bd : Deprecate CDMA
cccd1144a0 : Clarify `IStorageSession`'s `WOULD_BLOCK` behavior
fec8e9d52c : Fix MLO link reconfig
81f5d2ebb5 : We are not deprecating nvResetConfig
92c2bd862d : Implement Context Hub HAL v4 for Cuttlefish
faaf63877e : hwcrypto: HWCrypto HMAC key import definitions
2e69aa80a2 : hwcrypto: HWCrypto HMAC AIDL definitions
4a065416a2 : hwcrypto: Add get_keyslot_data to HWCryptoKey AIDL definition
60d8a03ecd : hwcrypto: Add protectionIDs to keys
a5439fd7a5 : Correct comment about Verified Boot key on devices with custom root of trust.
80fe8f713f : Minimize dirgroup for Trusty build
e87a1b9606 : audio: Allow Stream SM to stay in DRAINING state for early notify
fc3f8312f6 : Update doc for VEHICLE_SPEED_DISPLAY_UNITS.
1a7ac6a71e : Update doc for GEAR_SELECTION.
9fa0163b4e : Deprecate supported value fields in VehicleAreaConfig.
b4a1c64a64 : Update doc for HVAC_FAN_DIRECTION.
04e79d7336 : Update doc for IMPACT_DETECTED.
2ab3af86fc : Update doc for ELECTRONIC_STABILITY_CONTROL_STATE.
436c57f46c : Update doc for TIRE_PRESSURE.
d0ea6ecdbc : Update doc for EV_STOPPING_MODE.
b2afcea437 : Update doc for EV_BRAKE_REGENERATION_LEVEL.
00e5a5e1ef : Update doc for CURRENT_GEAR.
99264999e1 : Add supported value APIs to VHAL interface.
e07129d9a5 : libnetdevice: Add prefixlen option to setAddr4
ef3851752a : Drop references to CAN HAL from libnetdevice and libnl++
3aa93292c9 : Drop references to CAN HAL from libnetdevice and libnl++
8cb1c86e1d : Add PASN comeback cookie for secure ranging
ddea39b568 : Display HAL: Screen Replacement
0269d49d9a : KeyMint VTS: emit values on failure
db7be07c9c : ExternalCameraDevice: increase max bytes per pixel
18c22936ed : Add multi-client support in camera2
ed43421b66 : Fix AIDL warnings in Radio HAL
0afad08ea2 : threadnetwork: Avoid execute command if service does not exist
14b3e64423 : Update default service for channel sounding HAL v2
ccd1274582 : Update the channel sounding HAL API
7074af9d87 : Add ADPF_OWNERS to power interface
9084709b72 : Clarify SRBs in cellular transparency HAL
afbab6080c : have one implementation of deviceSuffix
2aeb12356e : Support cpp backend for fmq in aidl (POD types only)
4f0c2b0ee4 : Add secure HE-LTF protocol version to capabilities
bc7038f925 : Move deprecated annotation from structs to their fields
2d2385bca3 : Refactor function signatures to remove mocked IRPC
da0b04ce83 : KeyMint VTS tests for module hash feature
4a4c23559a : Adding protocol for chinese Id in ProtocolDiscoveryConfig AIDL
d9535a1ece : Add GnssAssistanceInterface AIDL HAL
a459d31603 : List required transitive deps of com.android.hardware.audio explicitly
731178ea46 : Camera: Vts: update check for setCallback
af23f37935 : Add an explicitly v3 manifest fragment
623240900b : Camera: Add AE priority mode tags
d3aa7c7c5e : Add forecastSkinTemperature HAL API
af05282aab : Add CPU and GPU headroom HAL APIs
4eb29280d9 : Expose more from hwtrust for DICE chain validation
bf97d3ab98 : hwcrypto: Add key token export/import
d65b3820b5 : Add Vikram as owner for RKP HAL and VTS
2d94f52b18 : [Lut HAL] add Lut VTS enforcement
571769e8a6 : Change TEST_MAPPING
1d207f368f : Update cellular security transparency HAL language
ed98f7b27c : Refactor VTS tests to use common sine input generation method
cc87d8f238 : Night Mode Indicator
bc3e6caecc : Add hostapd V4 to baseline
a65c9af01d : Graphics: Add Rust and AIDL defaults for graphics HALS, migrate usages
12a9575c10 : Graphics: Migrate graphics.common HAL to version 6
3294beb8e0 : Add arthuri@ to hardware/interfaces contexthub OWNERS
9f9d2f08b2 : Add APIs to allow sending a picture profile down to Composer HAL
32e85fccc8 : audio: add rollback support for streams in setAudioPatch.
0402658d72 : Apply cosmetic changes
43399105c4 : Support more input formats
2b5cdddc50 : wifi: Adds MLD address in ApInfo for MLO SAP
0606d2a3a9 : Add libaudioutils_headers dependency in audio VTS
c4f2acc76d : Expect SHA-256 digest for attested VBMeta digest on VSR-V+.
3c2658031e : contexthub: fix a bug in stub hal impl
3e76e28a94 : Refine Radio HAL comments.
b8a4e87c6a : Fix audio control device to context test
11104185f2 : Check for null callback pointers in the Vendor HAL AIDL callback util.
802a1dab66 : Audio CAP: Address ANAPIC comments, Part 2.
dc4250ee57 : Use RadioConst to define undefined magic constants
4c8cadd12a : Use RadioConst to define undefined magic constants
0346011a1e : VtsHalTargetTest: Define kSamplingFrequency in EffectHelper.h
e4bb9c465b : Changes to OWNERS files for USB.
7bb46d620f : Add proposed trendy teams for VTS modules
c964c5bb7a : Add proposed trendy teams for VTS modules
ed64b9f646 : Update contexthub stub impl to V4
c377627143 : Introduce new endpoint lifecycle interfaces for ContextHub v4
9563436f27 : Introduce new interfaces for ContextHub v4
c2c965b514 : Add speaker layout validation to VtsHalAudioCoreModuleTargetTest.
2dd50bc486 : Revert^2 "Add SPEAKER_CLEANUP system usage"
866d2752a9 : Add module info AIDL changes and bump the KeyMint version
51eba2ed77 : Update contexthub stub impl to V4
6b42475558 : Introduce new endpoint lifecycle interfaces for ContextHub v4
b97b3f3731 : Add additional APIs and aidl for ambient backlight feature
87a65103ac : Introduce new interfaces for ContextHub v4
d93bf85e1e : Do not blocklist BDS on CN builds
c5e38cf394 : Add proposed trendy teams for VTS modules
334d62cd4b : Camera: Add HEIC UltraHDR stream configuration tags
db67f303f4 : Do not install android.hardware.hardware_keystore.xml outside apex
4a7600084b : Revert "Add SPEAKER_CLEANUP system usage"
3d248fd550 : audio: fetch devices from the deviceportconfigs
b266b5bac1 : Expand Hotplug API to report unstable link events
1fc58f8262 : wifi: Support client isolation configuration
7fc9c4c8c5 : Add audio control configuration APIs VTS
ea426cbf2e : Add module info AIDL changes and bump the KeyMint version
1a11f4c8c3 : Fix GraphicsMapperStableCTests#LockUnlockNoCPUUsage
4b9eb3f135 : Add SPEAKER_CLEANUP system usage
6a717477cd : Volume Control: Separate the Volume Level and Mute param test
05bd5efb79 : Add T4T Ndef Nfceee feature support
6f69dedb02 : VtsHalBluetoothTargetTest: Remove SCO loopback tests
85e252a442 : VtsHalBluetoothTargetTest: Check API level compatibility
abed683f94 : Reapply "Use platform security domains in keymint/gatekeeper sepolicy"
2bf55fb661 : Add default implementation to load current XML
2744dea37e : Add audio configurations API to audio control HAL
e02b809387 : Revert "Revert "Add Media Quality Service AIDL default implement..."
607720e6f5 : Revert "Revert "Add HAL method to return SupportInfo object for ..."
965053af28 : wifi: Support for Wi-Fi Direct R2
b3f1415d7b : Revert "Add Media Quality Service AIDL default implementation"
1ee583fda3 : Update default team for VTS cas tests
083aa97084 : VisualizerTest: Refactor Visualizer VTS Target Test
3043bba2ac : Revert "Add HAL method to return SupportInfo object for PowerHAL"
a2d41f833c : Add audio device type MULTICHANNEL_GROUP
9d79f312f9 : Implement emergency radio in default HAL
550675ccff : Fix size of Rect fields in CROP spec
a6f647a399 : Add support for MLO link addition
48d71953fa : Add HAL method to return SupportInfo object for PowerHAL
a324d7fb19 : Add 11az secure ranging HAL
92a3161ed6 : Reorder RenderEngine and LayerSettings in VTS
defc6660d8 : Camera: Add color temperature metadata tags
4d9c95f2e5 : Updated translate_aidl_enums.py with complete translation logic
ab341818fe : Add Media Quality Service AIDL default implementation
a7f0b8b615 : Add 11az secure ranging support
fa85fb3fa8 : Radio: bump AIDL data and ims.media dependencies for default impl
695ca37b02 : Partially reapply "Enable OEM_PAID and OEM_PRIVATE APN types"
bfa0909da6 : Radio: bump AIDL data and ims.media dependencies for VTS
ddbb1fd365 : libradiocompat: Move all AIDL dependencies to a separate cc_defaults
49e767ac71 : Add proposed trendy teams for VTS modules
1de834a671 : Revert "Enable OEM_PAID and OEM_PRIVATE APN types"
dbf753b896 : graphics/common: Add HEIF_ULTRAHDR dataspace
87b6dc0a88 : keygen test not generating the key for every iteration
569664acb2 : Add proposed trendy teams for VTS modules
c42a2a6862 : Add versioned libkeymint_support
dfe8733a17 : Add proposed trendy teams for VTS modules
295aa73fda : Add proposed trendy teams for VTS modules
82e13c4269 : Revert "Use platform security domains in keymint/gatekeeper sepolicy"
be7e47f8ec : Remove dependencies on the 1-variant fallback
917cca5e93 : bootctl: check for nullptr
e2307105fc : [rkp_factory_tool] enforce the presence of UDS certs
214c86e522 : audio: Add warning log for unmatched mixer values
5c7e78b8ee : audio: Fix the getters of the hardware mixer controls
1274f5b0b8 : Add benchmark for sending a vector of binders over AIDL
41c3222523 : Add SUPPORTED_MAX_SESSION_COUNT as part of UwbVendorCapabilityTlvTypes
e8576cb72c : Add proposed trendy teams for VTS modules
42949fda43 : Add proposed trendy teams for VTS modules
d76c9ce788 : Add proposed trendy teams for VTS modules
bb669e19fe : Modify composePwleV2 parameters and getPwleV2FrequencyToOutputAccelerationMap dependency
bd4ef07813 : Add optional notifyThresholdChanged API
6878f68acd : Make testHeartBeatEvent more stable.
01d5a1d2f1 : bootctl: fix reconnect logic
ccff3be860 : Revert "audio: Do not use StreamSwitcher for StreamRemoteSubmix"
bcec635f0a : Unfreeze graphics.common HAL and add YCBCR_P210 format
4680742976 : Add audio eraser effect AIDL and placeholder implementation
628e2ea664 : Add UDS certificate requirements to RKP docs
51f927b7fd : Enable OEM_PAID and OEM_PRIVATE APN types
5bb658478d : Stage OEM_PAID and OEM_PRIVATE ApnTypes
74edc4de09 : Remove the used_slots assertion from VtsHalWeaverTargetTest
498f888984 : Update comment about Weaver slot assertion
d7f3c33d01 : Next kernel config is 6.12
7536b44e1f : Define emergency alert system in bcradio HAL
580ae4e9b8 : Use platform security domains in keymint/gatekeeper sepolicy
13a74787ee : VtsHalBluetoothTargetTest: Remove SCO loopback tests
ae9ea3c896 : threadnetwork: Avoid executing command if service does not exist
50b9157605 : Graphics: Add Rust and AIDL defaults for graphics HALS, migrate usages
8eeb520795 : Unfreeze graphics.common HAL and add YCBCR_P210 format
d5d0c5500a : bootctl: pass cookie to death recipient
5eb07e3b88 : Avoid leaking RenderEngine
46591a9c48 : keymaster_benchmark: remove usage of base::CommandLine
f112ec92ee : [vts] Verify RKP VM DICE chain in IRPC VTS
f877a29e14 : Add disconnect reason to hostapd
6f5ba92e92 : health storage inherit root OWNERS.
456df70e71 : Add SYSUI session tag and update PowerHAL version
a4bdedeb20 : error: no matching constructor for initialization of 'std::ifstream'
160bfcb509 : Move modules in compatibility_matrices/Android.bp
e98053df64 : Add proposed trendy teams for VTS modules
6803e55c72 : Add proposed trendy teams for VTS modules
29d868a28e : Add proposed trendy teams for VTS modules
80874efe84 : Add proposed trendy teams for VTS modules
454b822d00 : Add proposed trendy teams for VTS modules
22c843f1ba : Add proposed trendy teams for VTS modules
b995298480 : Add proposed trendy teams for VTS modules
b51972992f : Add proposed trendy teams for VTS modules
394d2e9246 : Add proposed trendy teams for VTS modules
c09791a7ac : Add proposed trendy teams for VTS modules
15b8210b8e : Add proposed trendy teams for VTS modules
2174a669a1 : Add proposed trendy teams for VTS modules
d719d219bb : Add proposed trendy teams for VTS modules
4101bdb08a : Add proposed trendy teams for VTS modules
9c62ca6c76 : Add proposed trendy teams for VTS modules
11d1548bcb : Add proposed trendy teams for VTS modules
82f94c788b : Add proposed trendy teams for VTS modules
0f87298707 : Add proposed trendy teams for VTS modules
8c111de52b : Add proposed trendy teams for VTS modules
b65093afa8 : Add proposed trendy teams for VTS modules
2944bac961 : Add proposed trendy teams for VTS modules
3227acc073 : Remove Android.mk from bump.py
05fc6aa383 : audio: Fix AudioRecordTest#testTimestamp CTS on CF
0174eb738d : [LUT HAL] Interface update
86a0ff910f : libnetdevice: define default request flags
cd635b9407 : libnetdevice: implement ipv4 get/set/add operations
f2dcb64922 : libnetdevice: migrate API to std::string_view
426708b90d : libnetdevice: support host
e839f5e394 : Fixed VTS for non-CDMA devices
7b05efd13f : KeyMint: coalesce device ID failure code
cba986107b : boot_control: use overrides
ce8daf60cf : Add dirgroup for trusty genrule
92166144a2 : audio: Deprecate StreamSwitcher
28b6b2b1b6 : boot_control death recipient
5d2b901798 : error: no matching constructor for initialization of 'std::ifstream'
3e9405be04 : Support standard extension frontend status
bb91da1ec5 : VTS to verify set/get AllowedCarriers for HAL 2.2
294b3a1dc3 : VtsHalGraphicsMapper test: Use static graphics.hardware.common
ea2d235060 : wifi: Adds a new vendor HAL API for MLO SAP
f5a213c5ed : VTSHalPowerTarget: check if HintSession supported
6dc643db27 : Remove unused global const variable
87f9c220b5 : Refactor Power VTS in terms of AIDL version
349aea2127 : Add VTS for setAudioPortConfig with invalid gain
8c428d7ad2 : ExternalCameraHAL: Get old image if taking picture without preview
e976625a49 : audio: Do not use StreamSwitcher for StreamRemoteSubmix
7dc9a223a0 : Document handling unsupported EAP-SIM/AKA
51d6329919 : Fix unknown VehicleApPowerStateReport ON.
f5ec73e546 : audio: Do not use StreamSwitcher for StreamPrimary
8fc1714797 : Add a new known hash to drm
21c8770923 : Enable bus device audio in default audio HAL
68ece3fac6 : Revert "Add a new known hash to drm"
a33bb5eaf5 : Implement volume control on default audio HAL
29e51689d8 : Support multiple output devices in reference audio HAL
7268c9aec1 : Revert "Add a new known hash to drm"
124b2802a1 : error: no matching constructor for initialization of 'std::ifstream'
d3e4d3a3e0 : VtsHalMediaOmx: avoid std::set and std::vector of const T
8903aea973 : audio: Update TEST_MAPPING after ag/24152024
64f73a4c5f : Avoid leaking RenderEngine
f1120b1318 : Introduce vibration session HAL support
992d978c20 : Add a new known hash to drm
dd22bbe34a : Clarify bandwidth validity in LTE cell description
85ae6cce11 : Revert "Separate hdcp levels into a base interface"
488ea4ea02 : Support 192kHz sample rate in AIDL remote submix
649c20ef27 : Remove the used_slots assertion from VtsHalWeaverTargetTest
fdad9716c0 : Revert "Support 192kHz sample rate in AIDL remote submix"
5fdf11ffb9 : Relax measurement tests to allow 3 empty GnssData
db87e8d0b6 : wifi: Support for P2P Compatibility Mode
2ae7fb6e55 : Separate hdcp levels into a base interface
7c60a7a0cb : Explicitly include libhardware_header for vendors
1e8654fedd : camera: Remove session_hal_buf_manager flag
cd019230f3 : Add support for HFP_SOFTWARE_DATAPATH session type
acd65a80a9 : Use read-write lock to guard property config.
7f1ab0c33c : Add VTS for setAudioPortConfig with invalid gain
122597a5e9 : Implement volume control on default audio HAL
7956b83f9f : VtsVisualizer: skip testCaptureSampleBufferSizeAndOutput for offload
83f6d02eb5 : threadhal: handle socket disconnection gracefully
9e51418fc6 : Convert android.hardware.soundtrigger@2.0(2.1)-impl to soong
9c50f3b8a7 : Graphics interface libraries only exist for linux/Android
ce222a2cbc : Convert android.hardware.graphics.composer@2.2-service to soong
33ae251c82 : Add frozen: true|false to all AOSP HALs that don't have frozen
40aa4a9a1c : Support reassembling fragmented media events
5ea0e25f1f : wifi: Upgrade vendor hal version
44607f3bf6 : Stage OEM_PAID and OEM_PRIVATE ApnTypes
ca5d7b8e72 : audio: Add AC-4 level 4 audio format to schema
22cfe6a6b8 : Override AIDL CAS HAL if HIDL is built
556c3d8be7 : Support 192kHz sample rate in AIDL remote submix

+- Project: platform/hardware/libhardware

e0e31d9a : Validate sensor name before storing
d69b37df : Add KeyMint MODULE_HASH tag and err codes
d9ce83c1 : Use __builtin_available to guard LLNDK symbols
c8397f5b : Improve debug logs for dynamic sensor enable, batch, and getFeature
e98adf4a : Add dirgroup for trusty genrule
7f6d2dc0 : Define sensor permission READ_HEART_RATE

+- Project: platform/hardware/nxp/keymint

2d9a519 : Disable -Wenum-constexpr-conversion in hardware/nxp/keymint/KM200.
a8a52f9 : Session timer management for Crypto Operations
da1d238 : Config resource updated with Wake_ALARM
701b8c1 : INS value for SET_BOOT_PARAMS updated.
f22f9f2 : Correct handling of MGF1 digest tags
c540401 : Enable Keymint HAL to SE HAL communication
905cda5 : Channel timeout not being reset to default after finish()
4c7cc96 : Fix for weaver cmd not allowed issue
156c000 : Fix for Keymint Operation failure
084863f : Compiler warnings fix
8286620 : Added memory leak trace for KeyMint HAL
0cf4341 : Fix for failure during multiple app accessing Keymint
ed7c469 : Depend on v3 of lib_android_keymaster_keymint_utils
4a40803 : Avoid multiple INS_INIT_STRONGBOX_CMD
de3bbf2 : Retry SharedSecret only for Hw communication fail case

+- Project: platform/hardware/nxp/nfc

4e673be : Fix for Vts failure when ULPDET config is enabled
ee98594 : [nfc] Adjust the position of the local variable "retry" without changing the logic to facilitate printing of the number of retries.
acbb848 : Add config for RF deactivate timeout for RF listen active state
9c74249 : Added NCI_ANDROID_GET_CAPS_CMD Command Support
22876a9 : NXP NFC HAL update for SNxxx
9821b2d : Updated polling loop notification timestamp in microseconds
cf238c2 : Allowing only needed polling loop information
269a8ba : Added Android vendor NCI message logging support
46df453 : Exclude unknown events if all the bytes are 0x00
c40a02a : ObserveMode Config option to get only Mod or CMA events

+- Project: platform/hardware/nxp/secure_element

5696c1c : Replace base/logging.h with android-base/logging.h

+- Project: platform/hardware/nxp/uwb

b2cdd49 : bugfix: SessionTrack: strict ordering of queuing a work
948ee1b : Remove hdll chip support.
a0d1f95 : Merge "Revert "Handle sync code index bitmasking with endianness adjustment"" into main am: 644db1a715 am: cf82630a75
3f11aeb : Split hbci/hdll chip support into static libs
59102a0 : Split hbci/hdll chip support into static libs
4478ede : sr200: enable rx antenna delay
3001e0c : Split hbci/hdll chip support into static libs
58cd88f : sr200: split chip support into static lib
6d117bf : Revert "Merge 24Q3 to AOSP main"
c2e853c : Enhance return type validation and improve logging for better traceability
3968b65 : UWB config file alignment
eb97225 : Handle sync code index bitmasking with endianness adjustment
2006c62 : Cleanup: report_uci_message()
d931331 : Define new OWNERS file for NXP UWB HAL
18fadb8 : Enhance random STS value generation by excluding zero
3583737 : Use bool return type for NxpConfig_GetXXX()
8a77c3d : Resolve UWB_BINDING_LOCKING_ALLOWED config handling due to missing data type in switch case
5d6779a : UWB HAL - Common middleware HAL implementation
728247f : [sr1xx] Update distance calibration
863e811 : Enable Debug RFrame Log Notification for Extension Parameters
68ed4ed : bugfix: keep per-country configuration
77b53cc : bugfix: make process_ext_cmd_rsp() reentrant.
9dc8624 : SessionTrack: simplify QueueSessionTrackWork()
381788d : bugfix: Do not wait for URSK_DELETE work from SessionTrack
af98bc0 : bugfix: allow rx handler to register another rx handler
75be7c6 : Increase timeout in SessionTrack Asynchronous job queue.
c2181ff : bugfix: reset runtime settings only when it's valid country code
cb8a2cf : SessionTrack: grab a CONCURRENCY lock for Deleting URSK
bd51758 : Cleanup: Remove global variable isSkipPacket
4b7dad9 : Cleanup: grouping rx packet handling logics
874c1fb : use rx handler for phNxpUciHal_handle_get_caps_info()
6caa7c1 : Rxhandler : remove run_once flag handler
e402c04 : Early return for CORE_GET_CAPS_INFO_RSP
feccab1 : Cleanup: do not use global isSkipPacket from rx handler
de377a0 : Cleanup: Split phNxpUciHal_read_complete()
b92e068 : Cleanup: remove unused list implementation
af8affc : Cleanup: Use Semaphore helper class in write()
14a649b : Cleanup: use C++ class for nxpucihal_monitor
0b10ed0 : Cleanup: Remove unnecessary global variable hal_parse_enabled
d8473a5 : Cleanup: remove unused macros for parsing packet
065054e : Remove rx buffer from a global control data.
d2b884d : Add CMD/RSP turnaround checker
2453f0d : Bugfix: call parse() prior to write_unlocked()
154de7b : Continue CMD/RSP checking even when isSkipPacket set
b5bd62a : bugfix: keep per-country configuration
33b2a6b : bugfix: make process_ext_cmd_rsp() reentrant.
e01fddf : SessionTrack: simplify QueueSessionTrackWork()
d4d5da9 : bugfix: Do not wait for URSK_DELETE work from SessionTrack
d98e05d : bugfix: allow rx handler to register another rx handler
309b67f : Increase timeout in SessionTrack Asynchronous job queue.
edad5a5 : bugfix: reset runtime settings only when it's valid country code
df97575 : SessionTrack: grab a CONCURRENCY lock for Deleting URSK
61bbcb0 : Cleanup: Remove global variable isSkipPacket
838b860 : Cleanup: grouping rx packet handling logics
f3d0177 : use rx handler for phNxpUciHal_handle_get_caps_info()
faefdeb : Rxhandler : remove run_once flag handler
29ae120 : Early return for CORE_GET_CAPS_INFO_RSP
d0503f2 : Cleanup: do not use global isSkipPacket from rx handler
07312b2 : Cleanup: report_uci_message()
0ee32c0 : Cleanup: Split phNxpUciHal_read_complete()
8ad53af : Cleanup: remove unused list implementation
629f24b : Cleanup: Use Semaphore helper class in write()
f72abf2 : Cleanup: use C++ class for nxpucihal_monitor
1eee128 : Cleanup: Remove unnecessary global variable hal_parse_enabled
d021452 : Cleanup: remove unused macros for parsing packet
c2bc9f1 : Remove rx buffer from a global control data.
d4dd29c : Add CMD/RSP turnaround checker
b5ee395 : Bugfix: call parse() prior to write_unlocked()
00865e7 : Continue CMD/RSP checking even when isSkipPacket set
3ab0a0c : Improve retry logic in case of unexpected RX
c3697c8 : Improve retry logic in case of unexpected RX

+- Project: platform/hardware/nxp/weaver

b8412bc : Fix compilation failure for AIDL Hal transport

+- Project: platform/hardware/qcom/wlan

be4f140 : Android.bp: Use pixel_watch.bsp_dir to choose SW5100 BSP
bb07984 : Convert lib_driver_cmd_qcwcn for wcn6740 to soong

+- Project: platform/hardware/ril

caba4da : Convert rild to soong
a9eddd1 : Convert libreference-ril to soong
3a74eda : Convert libril to soong

+- Project: platform/hardware/st/nfc

e0b73fb : Replace the debug message with an error message before calling abort()
7901018 : Correct the VERBOSE log
16eddb0 : C23: false is no longer convertible to void*.
fe2cd97 : Set to HAL_WRAPPER_STATE_RECOVERY before recovering NFC
4f1070e : Add reset trigger log
97d52c6 : Implement ObserveMode commands

+- Project: platform/hardware/synaptics/wlan

3e9768a : WifiHal: Support APF 4KB read/write
df8d5fc : WifiHal: fix uninitialize variable issue for RingDump
1a60641 : Convert lib_driver_cmd_synadhd to soong

+- Project: kernel/configs

95121b2 : OGKI: add approved android15-6.6 build
2a4d346 : OGKI: add approved android15-6.6 build
281cb7a : OGKI: add approved android15-6.6 build
cb66d1f : OGKI: add approved android15-6.6 build
9d877c1 : OGKI: add approved android15-6.6 build
3df44ab : OGKI: add approved android15-6.6 build
060a4f8 : OGKI: add approved android15-6.6 build
2519b8b : OGKI: add approved android15-6.6 build
bdf2487 : OGKI: add approved android15-6.6 build
6e0d15c : OGKI: add approved android15-6.6 build
e171b48 : bpf & networking changes for 6.12-W kernel config
e054f36 : OGKI: add approved android15-6.6 build
2a4e859 : OGKI: add approved android15-6.6 build
79998f4 : move 5.15.131 from android13-5.15 to android14-5.15
b747e86 : add 5.15.131 for android14-5.15 (missed in initial submit)
c18f57c : move 5.15.131 from android13-5.15 to android14-5.15
e64699e : OGKI: add approved android15-6.6 build
832e7f1 : OGKI: add approved android15-6.6 build
823ff85 : add 5.15.131 for android14-5.15 (missed in initial submit)
bb9a129 : OGKI: add approved android15-6.6 build
65aff7b : OGKI: add approved android15-6.6 build
2a17be0 : Catch up kernel-lifetimes.xml changes
82cc9ba : Add next kernel configs for 6.12
e7e7ec9 : update kernel-lifetimes with 2024-07 thru 2024-10 releases
098540a : OGKI: add approved android15-6.6 build
a85ecac : OGKI: add approved android15-6.6 build
c415736 : OGKI: add approved android15-6.6 build
dc5a8e5 : OGKI: add approved android15-6.6 build
72d1422 : Add owners of approved-ogki-builds.xml.
c344ee7 : OGKI: add approved android15-6.6 build
3ede112 : OGKI: add approved android15-6.6 build
c942cf9 : OGKI: add approved android15-6.6 build
0a62bda : OGKI: add approved android15-6.6 build

+- Project: kernel/tests

bb56a71 : Support skipping flash platform build when -pb None is set.
6389b87 : Add support to flash vendor kernel to other Pixel devices.
bea8c65 : Add missing $ sign on some of the PRODUCT variables
dddb56f : Support flashing user build with vendor_ramdisk-debug image.
a421a0c : Add line number in print_ in run_test_only.sh.
efe48c8 : Add wait_for_adb_device step.
9d937ce : Remove skip_build_compatibility_check flag as it is deprecated.
2817b4e : Fix run_test_only.sh for atest workflow
cd6f096 : Let run_test_only.sh run atest with gcov options
127b1df : Choose llvm tools by CLANG_VERSION
882bd0a : launch_cvd.sh: apply shellcheck suggestions
716e56f : Let run_test_only.sh call create-tracefile.py
5aead35 : Move create-tracefile.py from common/ to kernel/tests/
db56746 : net-test: remove unused HAVE_EBPF_5_4
c369b3f : net-test: fix nduseropt parsing when multiple are present
7f4de3c : net-test: fix nduseropt parsing when multiple are present
105e001 : Fix kernel build folder when using local kernel artifacts.

+- Project: platform/libcore

1d50979185f : Do not call toString() from watchdog
e11d12927a4 : Make core-all-system-modules visible to conscrypt
4661a2cdbd3 : Add a flag for the HPKE public API.
6c79d50e1bf : Implement missing Unsafe.get*Volatile and Unsafe.put*Volatile methods.
c06b4c3ee23 : Demote DoNotCall to a warning
1e1d9b82971 : libcore: Add OsConstants.RLIMIT_RTPRIO
ec39adb25ad : Allow androidx to run on rooted devices.
f26dcec88e2 : Don't enable memory-mapped coverage for child zygote.
d597e2e63ca : Get localhost name dynamically
604be7f990f : Initialize class declaring field eagerly in static VH.
839f8718225 : Expose Thread.isVirtual() and Thread.threadId()
7ecedf2a965 : Make @Stable runtime visible.
402c6fe4e5f : Remove @Overridable from RO_DCL_CHANGE_ID
32980a03883 : Set RO_DCL_CHANGE_ID as disabled
9df65a36e36 : Get localhost name dynamically
fcc03da205d : Import Character from jdk-21.0.4-ga
d006727d65e : Adjust MH kind check.
a3c5ea9e5cf : Adjust SplittableRandomTest.
9dc4b9a7ff5 : Javadoc fixes.
8cfe34c9b87 : Update javadoc for dumpLowOverheadTrace
916b5975126 : Add SwitchBootstraps in core-lambda-stubs
4b5fbd1c6da : Import files from jdk-21.0.4-ga
088033cffb8 : Remove redundant fillInStackTrace call
47146adc6a7 : Import StringConcatException from jdk-21.0.4-ga
3982bafa77b : Add an extra bit for intrinsics in ArtMethod
bb03b263d8f : Set MethodHandle's kind more accurately.
f2defa95adb : Call post cleanup callbacks w/o holding lock of ReferenceQueue
ba4d9bf47b9 : Remove @FlaggedApi from launched V APIs
aff3a64ce64 : Add more javadoc for start/stopLowOverheadTracing
3cb3597dc8e : Import java.util.concurrent from jdk-21.0.4-ga
93d4714d046 : Replace test_for properties with explicit dependencies on implementations
f1f41236c3d : Import Random from jdk-21.0.4-ga
5e83032f7f0 : Import TestOf from jdk-21.0.4-ga
2d84845ee51 : Add CtsLibcoreTestCases[...] to TEST_MAPPING
4c1da6384ee : Fix signature of Arrays.equals(Object[],int,int,Object[],int,int)
624acb36a1f : Use VMRuntime to check targetSdkVersion for the compatible locale behavior
df58770484d : Implement missing functionality for ForkJoinPool
def41a855da : Import Locale from jdk-21.0.4-ga
df6286e1ac8 : Import java.util.concurrent.ForkJoin* from jdk-21.0.2-ga
9856de0af11 : Add some missing android-changed annotations on javadoc changes to support go/scheduleAtFixedRate-behavior-change
dc080044563 : Fix Timer catch up behavior
a575db9f5c4 : Revert^2 "Add VMDebug.getExecutableMethodFileOffsets"
27084cb2905 : Import Objects from jdk-21.0.4-ga
7f433dcc4e4 : Remove 3DES from Conscrypt
f00728fdac6 : Remove dependencies on the 1-variant fallback
dfe17cb6be2 : Fix a Timer test for go/scheduleAtFixedRate-behavior-change
0b59e3a20ff : Revert "Add VMDebug.getExecutableMethodFileOffsets"
88ef98bc4e5 : Create MH targeting StringFactory when
9bfa78e6bd9 : Mark some libcore.io.Linux methods as @CriticalNative
05b454f9eaf : Enable scheduleAtFixedRate behavior with a flag
61cf274387a : Add VMDebug.getExecutableMethodFileOffsets
ebf5c6ceb91 : Include MatchException in the API
758d3a114bc : Revert^2 "Add a test to ensure all services can be instantiated co..."
39d78e5c442 : Make spelling of writable consistent
1a88da3945c : Read only dynamic code loading through System.load
ef31997440b : Inlined VarHandle for updating native allocation metrics
dd8cfce3939 : Use VMRuntime to check targetSdkVersion for the compatible locale behavior
aa8825e6a6f : Regenerate expired test certificates.
68325e24ddf : Use the latest 21 ga version by default
806ff0494bb : Remove MergeCollation and PatternEntry from EXPECTED_UPSTREAM
1ccfe7e7dd9 : Update unchanged files to 21.0.4-ga
b0d4ee21b6a : Fixes for errorprone update
a066bcc1d83 : Integrate java.util.concurrent jdk-21.0.2-ga updates into ojluni
91b8f90a627 : Import java.util.concurrent from jdk-21.0.2-ga
58ec018ea50 : Set MethodHandle's kind to INVOKE_INTERFACE...

+- Project: platform/libnativehelper

8136202 : Allow loading libartd.so instead of libart.so with ART_TEST_ON_VM=true.
a16d4b5 : Add Windows definitions for JNIEXPORT and JNIIMPORT

+- Project: platform/packages/apps/AvatarPicker

5949fbe : Import translations. DO NOT MERGE ANYWHERE
8f644a6 : Import translations. DO NOT MERGE ANYWHERE
cacdcc5 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/BasicSmsReceiver

0e37c11 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Browser2

8bd081d : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Calendar

272b7969 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/Cluster

f76f67e : Revert "Add README with details about the status of support for ..."
3180733 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/DataSubscriptionReference

b58e00b : Revert "Add README with details about the status of support for ..."
9d60be2 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/DebuggingRestrictionController

ba6984b : Revert "Add README with details about the status of support for ..."
c16f2d9 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/DialerPrebuilt

4e2c377 : Revert "Add README with details about the status of support for ..."
191e366 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/LatinIME

96e17ca : Revert "Add README with details about the status of support for ..."
9f5ce7d : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/Launcher

75c092ab : Fix CarQuickStepService as per sysui contract changes
4b1d8f37 : Import translations. DO NOT MERGE ANYWHERE
4912a73c : Import translations. DO NOT MERGE ANYWHERE
0aa7e7fa : Move dialer model to portrait reference
023b80b1 : Fix tests for appgrid dynamic rows and cols
bfd88b0c : Calculate AppGrid Rows and Columns dynamically
9c9e4311 : Use the same remote car task view instance in the ViewModel
8db11040 : Revert "Add README with details about the status of support for ..."
c14424d8 : Import translations. DO NOT MERGE ANYWHERE
8ed4af82 : 2c/ Update car launcher to reflect change to GroupedTaskInfo
fcf2ef19 : [Connected Displays in Taskbar] Refactor TaskbarDelegate for Display Signals
549dbc38 : Exclude Google Assistant from Dock
f445ff08 : Disable CarLauncherTests for multi-tasking form factor
e42d64c2 : Add README with details about the status of support for this app
d2024691 : Update tos helper functions visibility to be accessible from CarUiPortraitLauncher
b32aa404 : Import translations. DO NOT MERGE ANYWHERE
92573b10 : Fix reset AppOrder in AppGrid
f443c006 : Add support to finish the placeholder map activity after terms of services are accepted
cf395c7e : Create a map activity to display when terms of service have not been accepted
d38a7d90 : Import translations. DO NOT MERGE ANYWHERE
2d068229 : Refactor replacing tos map intent to CarLauncherUtils
fdd6b5c7 : Import translations. DO NOT MERGE ANYWHERE
fb44340c : Differentiate resources between launcher and media
e2ca507a : Add fade out animation for media card panel handlebar
9b75383f : Add an overlayable boolean config to enable or disable the terms of service banner on the appgrid
5233f94b : Add trunk stable flag for terms of service experience in CarLauncher
8f6b660a : Add default album art to media card
76ed2d5c : Disable dock media apps for RB
60d7f3f5 : Disable seeking media when panel is open
c9822b8c : Fix tos uninitialized condition to return true when Settings.Secure value is not set
c2a14972 : Only show dock pin shortcuts for supported display

+- Project: platform/packages/apps/Car/LinkViewer

7b426d4 : Revert "Add README with details about the status of support for ..."
32d2001 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/LocalMediaPlayer

63a6747 : Revert "Add README with details about the status of support for ..."
4cf496c : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/MediaPrebuilt

f023e8d : Revert "Add README with details about the status of support for ..."
c9808ea : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/MessengerPrebuilt

b456c8f : Revert "Add README with details about the status of support for ..."
e2b77d6 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/Notification

c6f8ba4c : Import translations. DO NOT MERGE ANYWHERE
618cffb4 : Import translations. DO NOT MERGE ANYWHERE
4517de1b : Import translations. DO NOT MERGE ANYWHERE
618e0afe : Import translations. DO NOT MERGE ANYWHERE
903dab45 : Import translations. DO NOT MERGE ANYWHERE
d807ea67 : Revert "Add README with details about the status of support for ..."
b5e67028 : Fix incorrect title visibility calculation
6e2da97e : Add README with details about the status of support for this app
ff3b82d4 : Add ability to set textviews to gone if empty
0a6be3e2 : Import translations. DO NOT MERGE ANYWHERE
7e907ed5 : Fix Car headsup notification coming as blank at the bottom

+- Project: platform/packages/apps/Car/Provision

206fe61 : Revert "Add README with details about the status of support for ..."
fe563e2 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/RadioPrebuilt

fcdb459 : Revert "Add README with details about the status of support for ..."
d0c1d57 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/RotaryController

e77374e : Revert "Add README with details about the status of support for ..."
8255863 : Fix initialization order
2029d84 : Add README with details about the status of support for this app
923d17a : Fix a bug when nudging from an implicit focus area

+- Project: platform/packages/apps/Car/RotaryImePrebuilt

1fa0b9b : Initial empty repository

+- Project: platform/packages/apps/Car/Settings

7e29cfd0e : Import translations. DO NOT MERGE ANYWHERE
e74b70217 : Import translations. DO NOT MERGE ANYWHERE
0a60a0a51 : 6/n: Prevent accidental changes that could go out of sync between /unit and /multivalent.
21a536cbd : Import translations. DO NOT MERGE ANYWHERE
127f556b3 : Add a wifi share function
68d2e256b : Add DeepLinkActivity and SearchTrampoline
9236b96ca : Import translations. DO NOT MERGE ANYWHERE
078b35fce : Import translations. DO NOT MERGE ANYWHERE
e762d2b89 : Implement two-pane using activity embedding
d598134e7 : Import translations. DO NOT MERGE ANYWHERE
73767da86 : Import translations. DO NOT MERGE ANYWHERE
da52a8f4b : 5/n: Recreating existing unit tests under multivalent category and enabling most of those to also run without a device using Robolectric.
402d340a8 : Remove aspect ratio settings flag
74334ec4e : Add several intents to Settings manifest
1a424cf9a : Add aspect ratio settings page-part2
c47ff6d67 : Add aspect ratio settings page-part1
4d24c8beb : Add activity to show a toast for unsupported intents
6436e249e : Disable Notification Access for passengers
48cb11905 : Revert "Add README with details about the status of support for ..."
0bae25f9d : Import translations. DO NOT MERGE ANYWHERE
cf1cc8fc9 : Import translations. DO NOT MERGE ANYWHERE
c690d2675 : 4/n: Moving setting robotests under deviceless category
0a0dd5cba : Import translations. DO NOT MERGE ANYWHERE
fb6b97daa : 3/n: Externalize shadows under `helpers` and reuse/reference from there.
cd6b7f621 : Import translations. DO NOT MERGE ANYWHERE
688a98159 : Add Customization Tool overlay row on the debug panel
d04e3f5e5 : 2/n: Cleanup and simplify ShadowCar with only essentials.
f73f895b1 : Add README with details about the status of support for this app
d20d90091 : Add atom logging for Data Subscription in Settings
48633d09b : 1/n: Prepare for CarSettings multivalent tests.
af4aefd70 : Import translations. DO NOT MERGE ANYWHERE
3761f9c1d : Import translations. DO NOT MERGE ANYWHERE
53b4137d0 : Add more checks for build type
78dc221f1 : Fixes for errorprone update
def5279e9 : Add preUpload checks for new intents
76addae89 : Support intent android.settings.REQUEST_SET_AUTOFILL_SERVICE
e486dd0f3 : Add scripts to auto add Intents to manifest
c97658e7a : Add Aspect Ratio override setting
7b05dca05 : Import translations. DO NOT MERGE ANYWHERE
f08009ee2 : Fix failing unit test
fae41a797 : Import translations. DO NOT MERGE ANYWHERE
e9f6f2f77 : Fixes for errorprone update
d43aff412 : Remove the temporary fix for CarSettings static libraries, as the issue has been resolved in the latest prebuilt libraries.
d4601baea : Import translations. DO NOT MERGE ANYWHERE
54ad5be74 : Import translations. DO NOT MERGE ANYWHERE
a4c4410d3 : Add a force show RTL layout direction entry on the debug panel
f0ba9ae4a : Add a provider to switch drive/park mode for the debug panel
da5021274 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Car/SettingsIntelligence

fe9a4fb : Revert "Add README with details about the status of support for ..."
a358c1a : Use trampoline intent for search result
83402a6 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Car/SystemUI

c0e66811 : Remove expectation of IME StatsToken
278f0e73 : Added flag for scalable ui.
46252aea : Update API usage
e1373a1b : Add Dagger visualization tools for Car SysUI
744dcb60 : Import translations. DO NOT MERGE ANYWHERE
d06bebf3 : Import translations. DO NOT MERGE ANYWHERE
78ce4332 : Fix FakeDisplayRepository to include new displayIds field
f14d2c18 : Adjust interface to receive IME statsToken
9ea99dd2 : [flexiglass] Adds second signature of hideAlternateBouncer
7f77923a : Remove AndroidAutomotiveNotificationsTests from test mapping
bd433fd1 : Don't clear privacy chip click listener
37f6b314 : Add control center button binding
e37e9cbb : Car: Update HeadsUpEmplyImplModule import
cfa2bd10 : Use correct window context for add user dialog
2102a0cc : Add null auth plugin for auto.
1e100f5d : Return View instead of layout from configs
b7d60e4d : Refresh display input lock in onDisplayAdded
6838aa96 : Prevent Overlapping Toasts for Display Input Lock
d5131bc7 : Refactor SystemBarConfigs
e295b0b2 : Move logic for system bar layouts to configs
67de73af : CarSystemBarViewController depends on a ViewGroup
40be49d7 : Convert more ui to flexible ui
b831f1fe : Use WindowContext when creating windows
da946dcb : Rename ShadePositionRepository in ShadeDisplaysRepository
6bdc5414 : Update params
2130d226 : Fix blurred background for ActivityBlockingActivity on portrait UI
34e7fdec : Revert "Add README with details about the status of support for ..."
9dc931af : Move WMComponent to wm shell
f6e5b35d : Update keyguard presentation logic to take into account shade position
ecf644a1 : Import translations. DO NOT MERGE ANYWHERE
5d27f367 : Convert more buttons to flexible ui
ed9c5dc4 : Convert hvac button to flexible ui
51eeec56 : Don't re-toggle the panel that is being shown
cca72d88 : Fix CarKeyguardViewController when keyguard_wm_state_refactor is enabled.
5b5d854e : Changed to use the new hidden API.
e3c39233 : Import translations. DO NOT MERGE ANYWHERE
af7fd3fb : Use correct provision state when updating buttons
b0151344 : Convert notifications to flexible ui
f51cea26 : Centralize logic for panel expanded states
b676a523 : Fix blurring screenshot for ActivityBlockingActivity
0cf6f80f : Import translations. DO NOT MERGE ANYWHERE
612645fa : Reorder task view tasks using shell transitions
e4b383df : Update Dialog width
85dbc32c : Add callbacks to CarFullscreenTaskMonitorListener
d1682a98 : Import translations. DO NOT MERGE ANYWHERE
cdd2c8af : Add wmshell_defaults to CarSystemUI
c4e05ff1 : Add passenger keyguard loading dialog
4d0522dd : Filter UserPicker touches when partially obscured
62223fc0 : CarPrivacyChipViewController: pass new constructor param to parent class
30e79636 : Mock SyncResultCallback from CarUserManager calls
a125f1b1 : Add README with details about the status of support for this app
a567b4d0 : Remove redundant classes
f02a391e : Remove bar sides from CarSystemBarController
b169f0a1 : Add atom logging for data subscription popup
cd54458c : Add passenger keyguard lockout
af2e4d53 : Import translations. DO NOT MERGE ANYWHERE
705bacec : Add keyguard transitions to car.
78f8d408 : Re-focus keyguard on show
708d3d47 : Move more code to CarSystemBarViewController
e82be9fb : CarPrivacyChipViewController now extends PrivacyDotViewControllerImpl
9034291b : Allow providing different impl for system bars.
be67dd6d : Add Customization Tool overlay row to the debug panel
3a57a6e0 : Update disabled microphone dialog
bf9c8b34 : Add more checks for build type for the debug QC button
62484a8a : Initial UI for passenger keyguard
1852e4ff : Move saving/restoring states to each system bar
d38c41a3 : Update the debug icon color to support RB
171b9c5e : Make CarSystemBarController depend on CarSystemBarViewController
bb4026d8 : Add DisplayAreaView & DisplayAreaViewTransitions
82cf5c04 : Make CarSystemBarViewFactory an interface
12da2037 : Add CarSystemBarViewController
d7dc1129 : Fix ExternalDisplayControllerTest
31edc40a : Add tests for DebugPanelButtonViewController class
f48a9d47 : Revert "Update constructor to match aosp"
0f29520f : Import translations. DO NOT MERGE ANYWHERE
6815c5cd : Fully qualify @attr reference to android.R field
bd56d496 : Get display id from base view
0e3c05c6 : Cleanup CarSystemBarController API visibility
9e4862bb : Add force RTL layout direction line to the debug panel
cc0fcf7d : Update constructor to match aosp
eabd5704 : Import translations. DO NOT MERGE ANYWHERE
25912710 : Fix DisplayInputSinkController
9350b15d : Update dependencies
5607a7f6 : Add drive/park mode change entry to the debug panel
edfe6e5b : Refactor hvac related APIs from CarSystemBarController
dbd47841 : Refactor notification related APIs from CarSystemBarController
60442f80 : Removed unnecessary APIs in CarSystemBarController
12ac52b0 : Fix Dialer blocking activity not launching
ce18af46 : Import translations. DO NOT MERGE ANYWHERE
a30b19f7 : Restrict init of Dock on non-supported display

+- Project: platform/packages/apps/Car/SystemUpdater

02e000d : Revert "Add README with details about the status of support for ..."
2d953d6 : Add README with details about the status of support for this app
872ac1c : Fix SystemUpdate not opening

+- Project: platform/packages/apps/Car/systemlibs

5816800 : Added scalable ui library
8f9c9ca : Revert "Add README with details about the status of support for ..."
3e09c25 : Add atom logging for QC lib
1ad2e46 : Add README with details about the status of support for this app
2e3857a : Add BAL permission to PendingIntent
63550ba : Add tag and package uid values to QC for metrics

+- Project: platform/packages/apps/CarrierConfig

d167424 : Change satellite display name for verizon
b6cc9a9 : Add SMS and MMS as default services for TMO.
77881b8 : Add carrier config for VZW
63b5745 : Add assets/carrier_config_carrierid_2646_Rcell.xml for Rcell
7d4f2a3 : Update carrier config with satellite earfcns for Verizon-Wireless
8e5f05d : Pending file changes for AMX Carrier privileges cleanup
861cbee : Fixed the incorrect carrier config for 5G icon timer
8206e2c : Add Google Fi Cert Hash to Default CarrierConfig
eb7dc02 : Add Google Fi Cert Hash to Default CarrierConfig
0528108 : Add satellite geofence S2 binary file for the whole US
b7ef1af : Adjust the Free/Iliad package names which need the carrier privileges, as requested by the partner (CCed).

+- Project: platform/packages/apps/CellBroadcastReceiver

a9ed485ef : Import translations. DO NOT MERGE ANYWHERE
16a6b74f2 : Import translations. DO NOT MERGE ANYWHERE
8ec6c93f1 : Import translations. DO NOT MERGE ANYWHERE
d4363b861 : Import translations. DO NOT MERGE ANYWHERE
e7f3a173a : Fix typo of testEnablingExerciseTestChannels
9f2744789 : Make compliancetest use old mockmodem library
226ed5eea : Do not receive test message when test toggles are invisible
647ebedf1 : Import translations. DO NOT MERGE ANYWHERE
6c7ce7c23 : Import translations. DO NOT MERGE ANYWHERE
9a802abf9 : Ignore the test which needs getCurrentUser() on R
1a3843baf : Import translations. DO NOT MERGE ANYWHERE
1393cd0bb : Hide cellbroadcast settings in non-admin user mode in search
322f6c675 : Hide the toggles in search if toggles are not shown
1348e8873 : Import translations. DO NOT MERGE ANYWHERE
439f63ed0 : Remove alert_duration in Mexico's config where looping for vibration/sound is not necessary
8a1927f3f : Import translations. DO NOT MERGE ANYWHERE
7b06f8f36 : Replace GoogleCellBroadcastReceiverUnitTests with CellBroadcastReceiverMTS
677760f32 : Fix ANR by mocking activitymanager during unittest
5c6f5a756 : Use roaming resources for getting notification related configurations in roaming
40fcae2bd : Import translations. DO NOT MERGE ANYWHERE
296ae7c06 : Add mocking isDisplayInteractive of PowerManager
dd0c8f318 : Import translations. DO NOT MERGE ANYWHERE
7d1df3bda : Remove mts-cellbroadcast tag on unexecuted test modules
c1a95a4b1 : Mark packages/apps/CellBroadcastReceiver apps in updatable apexes with updatable: true
a998fc6b1 : Fix exception for getActiveSubscriptionInfo() in CellBroadcastReceiverOemUnitTests
2ea858056 : Remove channels for 3G service of softbank in Japan

+- Project: platform/packages/apps/CertInstaller

5a8609f : Import translations. DO NOT MERGE ANYWHERE
4716f6f : Import translations. DO NOT MERGE ANYWHERE
3fbf075 : Import translations. DO NOT MERGE ANYWHERE
e3f2159 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Contacts

81fe7d28b : Import translations. DO NOT MERGE ANYWHERE
0700e0f4d : Import translations. DO NOT MERGE ANYWHERE
580bd4bb3 : Import translations. DO NOT MERGE ANYWHERE
412113043 : Import translations. DO NOT MERGE ANYWHERE
c7faff76c : Import translations. DO NOT MERGE ANYWHERE
7f2082ea8 : Import translations. DO NOT MERGE ANYWHERE
3fdc40742 : Import translations. DO NOT MERGE ANYWHERE
db3c23e6d : Add README with details about the status of support for this app
e60380ecd : Import translations. DO NOT MERGE ANYWHERE
61e5aacda : Import translations. DO NOT MERGE ANYWHERE
fd725b9ef : Import translations. DO NOT MERGE ANYWHERE
72fa6852b : Contacts: Fix screen not responding when importing a .vcf file

+- Project: platform/packages/apps/DeskClock

b8cc846ef : Add README with details about the status of support for this app
569a9adb9 : DeskClock: Fix timer notification not working
746187ce6 : Fixed crash in DeskClock during monkey test.

+- Project: platform/packages/apps/DevCamera

0bd8664 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/DocumentsUI

7691cac3b : Import translations. DO NOT MERGE ANYWHERE
c2d371ddd : [DocsUI M3] Update styles for the filter bar
fcf809b52 : [DocsUI M3] Uplift filter bar (search chips)
05e2bb3e8 : DocsUI M3: Enable fixed navigation for file picker
5b95342ee : DocsUI M3: Remove layout variance based on window height
4ce945d40 : DocsUI M3: Remove the variant 720dp-land.
d7724ae32 : DocumentsUI: Move layout folder to be behind flag M3 disabled
b276b01b4 : DocumentsUI: Duplicate layout folder to be behind flag M3 enabled
a0ba19d92 : Add "Compress" menu item to context menu
76de6d52d : [DocsUI Search] Add a DocsUI search flag.
851318bbc : DocsUI: Material3 for the overall fixed layout
c07961468 : Import translations. DO NOT MERGE ANYWHERE
56ced9155 : Import translations. DO NOT MERGE ANYWHERE
0e888aa75 : Import translations. DO NOT MERGE ANYWHERE
04aa19da9 : Import translations. DO NOT MERGE ANYWHERE
a9ba75d0e : [DocsUI M3] Move drawable resources for flag M3 disabled
1dca68ae2 : [DocsUI M3] Duplicate drawable resources for M3 enabled
11492b8ff : [DocsUI M3] Use Material 3 Styles/TextAppearances
695b1e47d : Add benreich to OWNERS
06d1ae7d3 : Add benreich to OWNERS
e3688e28a : [DocsUI M3] Update measurement system tokens
e201475d4 : [DocsUI M3] Use Material3 Themes
8e2155721 : [DocsUI M3] Add measurement system tokens
36936a143 : DocumentsUI: Move the styles, layout and theme for flag M3 disabled
223d2fc64 : DocumentsUI: Duplicate the styles, layout and theme for flag M3 enabled
ae03aae3c : Remove unused local variable
9e43bd2d4 : DocumentsUI: Remove unused resources and activity
b14a68ad3 : Introduce a new DocumentsDefaultM3Theme behind a flag
ec686c197 : Support Hilt DI on DocumentsUICompose
4b7083c47 : Respect is_launcher_enabled on PRE_BOOT_COMPLETED for FEATURE_PC
3bdd0fbfc : Import translations. DO NOT MERGE ANYWHERE
052a80b79 : Remove action bar from Compose app
b58a557dd : Add null check for context in onRefresh.
e307c5097 : [DocsUI M3] Add compose folder to experiment compose
73347e5f3 : Import translations. DO NOT MERGE ANYWHERE
02907f6c0 : [DocsUI M3] Add a DocsUI flag for adopting material 3
da0486a7d : Import translations. DO NOT MERGE ANYWHERE
362204657 : Fix for test failure in Xiaomi.
90a5112c4 : Fix up build rules to enable Kotlin support
5dd0be1cc : Fixes for errorprone update
c779b042f : Import translations. DO NOT MERGE ANYWHERE
f4ee32c06 : Import translations. DO NOT MERGE ANYWHERE
2de5d72a3 : Prevent clickjacking attack in DocsUi.
14c45fc90 : Prevent clickjacking attack in DocsUi.
6beafa429 : Prevent clickjacking attack in DocsUi.
0f5f4b53a : Prevent clickjacking attack in DocsUi.

+- Project: platform/packages/apps/EmergencyInfo

e61a69a3 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Gallery

+- Project: platform/packages/apps/Gallery2

f5a33fffb : Fix DuplicateBranches errorprone issue
98cea44d7 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/HTMLViewer

2381fd7 : Revert "Fix the problem by Android 15 edge to edge changes"

+- Project: platform/packages/apps/ImsServiceEntitlement

01020e8 : [ImsServiceEntitlement] Unexpected delay while doing entitlement check if FCM sender id is not set

+- Project: platform/packages/apps/KeyChain

4d975ec : Import translations. DO NOT MERGE ANYWHERE
f5179f2 : Import translations. DO NOT MERGE ANYWHERE
826748c : Import translations. DO NOT MERGE ANYWHERE
e574759 : Allow admin users to install WiFi client certificates
7970d8b : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Launcher3

a315410ea3 : Fixing leaks in LauncherPreview
9bdbf3f1ff : Import translations. DO NOT MERGE ANYWHERE
f22de1b0b6 : Import translations. DO NOT MERGE ANYWHERE
e0fad6499a : Import translations. DO NOT MERGE ANYWHERE
47666fbf7b : [Leak] Clear all of TaskbarView's FolderIcons' listeners upon activity onDestroy
fb11cee3a2 : Fixing constructor error due to merge conflict
6988855e55 : Removing state from TISBinder
fc69516a06 : Removing TisHelper dependency in taskbar; Taskbar should be able to access TouchInteractionService directly
09fb9c3743 : Revert "Dismiss recents when device goes to sleep"
59781fe109 : Prevent CtS invocation in fake landscape mode
9dcbbcb143 : [-1] Fix flicker of -1 when swipe up to exit -1 isn't fast enough
be55d969ff : Prevent bad grid configuration by making sure RestoreDbTask updates IDP correctly.
1750f73889 : Update ProtoLogProxy classes to only log to protolog when protolog has been initialized
9ff1cb6340 : Fix for split animation
9c58954488 : Some test fixes where default user is assumed to be user 0
ee15f52a7c : Loading generated preview on-demand instead of keeping everything in memory
bad2be4944 : Add more logs to investigate empty launcher restore
8143b69cec : Fix workUtilityView receiving touch for pause work apps button.
8335b9790a : Modify the accessibility tree to move the searchbox to the top.
cc21323af2 : Remove the elevation on the DragView to avoid shadows underneath
8ec11ab65f : Update SystemUiProxy to support RemoteTransition when moving a task to desktop
c76d4a8971 : Restore fallback Overview state when restarted by theme changes
56a173c1de : Minor kotlin cleanup
88420d8636 : Fix icon flicker from home-> overview when 3 button navigation is enabled in 3P launcher.
2a1946ee65 : Revert "Recycle tasks that are split when split select anim complete for reuse"
1d6340ae0b : Import translations. DO NOT MERGE ANYWHERE
b26430a9e4 : Add some debug names to various remote transitions
23470e8832 : [CD Taskbar] Refactor TaskbarManager to create object creation methods
97eb832cd3 : Generalizing monochrome icon into Theme icons
f2161dd8c3 : Fix cropped bubble flyout when animating in app
f0e5a19658 : Set the bubble bar background scale for initial animation
67df1f0d2c : Use outline color token for bottom sheet.
1348514466 : Use custom background dispatcher to stop excess thread creation.
2b0cdbbc87 : Make sure GRID_NAME is updated every time it changes.
ddd667ac9a : Prevent Taskbar all apps from showing multi instance options
2fd98270a7 : Update Hotseat icons adjustment X calculation based on cell X Index
9ce45bac05 : Fix findMinWidthAndHeightDpForDevice so it finds the smallest dp height of the cached displays
789546f528 : Using the right context when in FixedLandscape
ee4d9ed3ad : Do not send setFinishTaskTransaction if not swipe
caac0a978d : Hide "Shortcut Menu" from a11y actions when shortcut menu is already opened
591c44a63b : Keep transient taskbar open while overflow is open.
5305c71e8b : Add flag for using top visible activity for recent tasks instead of indicies
1bf7678154 : TaskViews indices inside RecentsView continue [2/n]
73797f8188 : Revert^2 "Check if all apps are translucent when finishing recents animation."
8d76114fe3 : Add logging to verify when LoaderTask is stopped or cancelled
77827d3167 : Fix bubble bar flicker in initial animation
aed3b14abc : Dismiss recents when device goes to sleep
0e668e1b1d : Update accessibility hint for widget cell that shows add button.
0de5987efb : Don't crop bubble flyout in overview
7fbc82cab1 : multiple-desktops: Add new desktop button xml file
3d31b97389 : Restore barMode and wallpaperVisible to Taskbar NavbarButtonsViewController state
1f72f9b3de : Fix issue in scrolling in remote compose widgets
db64e1b3d1 : Factor in userId for updateHotseatItemsFromRunningTasks
b60be0896d : Make DynamicResource as dagger singleton (15/n)
579a785144 : Reapply insets normalization on configuration changes
007abe80e4 : Reland "Handle uiMode changes in QuickstepLauncher"
d1cd8c2ea6 : Add GridDimensionSpecs to fixed landscape and make grid dimension generalized so we can use it to determine row count or col count
a32f7f3f4a : Ensure Desktop Tasks show up in Taskbar multi instance menu
1df3836591 : Use the RecentsWindowTracker in FallbackRecentsTest rather than RecentsActivity's ActivityTracker when flag is enabled
ecc50c6d7c : Register back action to close the KQS view
fcb892328b : Add unit testing to workUtilityView
c83ee2aefe : [Launcher Memory Leak] Fix leak of AbsSwipeUpHandler via Launcher#mOnDeferredActivityLaunchCallback
3931b671b4 : Reset the bubble bar scale when the animation is interrupted
1c28a44714 : Revert "Revert "Use the Choreographer's frame time for a more rel..."
a8f01bac1d : Make TaskView not clickable when in select mode for switch access and touchscreen usage.
9e52a665a6 : Mock UserCache in TaskViewItemInfoTest to avoid NPE
108dc037a0 : Fix ConcurrentModificationException in TouchInteractionService
68648283a8 : Use bugfix flags for desktop mode transitions (exit, applaunch, alttab)
377c391a9d : Workaround for incomplete setup flow prior to Launcher tests
5e7bda3b84 : Import translations. DO NOT MERGE ANYWHERE
02925dcda5 : Import translations. DO NOT MERGE ANYWHERE
24521dd6f8 : Only adjust the hotseat if bubble bar is visible.
6df13d3b8e : Decouple Backup / Restore Error Codes
3cecfd077c : Fix Taskbar 3 Button y position on launcher home pause/resume
067ed8fa6e : Fix "disabled" at the end of talkback for workPausedCard.
d87722d396 : Open Task into Desktop Mode when Taskbar is in DesktopMode
cdcfaa80b2 : Cleanup the null checks added in ag/30408877
9f1b9d1b13 : Fix close button in workEDU card not 48dp.
7e387fa505 : Animate the bubble bar on dismiss
2ae01adf71 : Make ContextualSearchInvoker not a ResourceBasedOverride.
03a96eb8f7 : End live tile when invoking Circle to Search over Overview.
8779858090 : Using interface descriptor as the key for various interface extras
3111daf2f1 : Update setting taskbar window height
28907bcbd6 : Set bubble bar aligned to QSB
1e392d6679 : Fix condition for showing contrast tile
c062d5aa43 : Add missing overview state event to tests
0a1dbd4dc7 : Wait until AllSetActivity.onResume to start the background animation
7e0cad5e10 : add Launcher flag for launcher icon shapes
50354ed693 : Add missing test from ag/30322765
f567e7e2fa : Fix animation while hiding Desktop tasks during split
09f2433253 : Log various TaskView properties in TaskContainer.itemInfo
02f7a1a96d : Exclude cleanup methods from CUJ for entering split select.
d05a94888c : [a11y] Fix talkback column and item count for AllAppsGrid
2161355885 : Avoid drawing the contrast tile on empty text
ddc338c39b : Fixing grid migration test and adding extra tests to cover one grid updates
c48b4ed025 : Fix flake in BubbleBarViewAnimatorTest
c53bf7890a : Import translations. DO NOT MERGE ANYWHERE
d8b551f5e1 : Import translations. DO NOT MERGE ANYWHERE
2f784ae9ff : Import translations. DO NOT MERGE ANYWHERE
2e0b4ce52e : Import translations. DO NOT MERGE ANYWHERE
47853ec09f : Import translations. DO NOT MERGE ANYWHERE
42a07b47e1 : Import translations. DO NOT MERGE ANYWHERE
16a604fe0c : Show the bubble bar edu view
6c85912736 : Make RV focusable when empty to read out content desc for TalkBack.
9155cf9066 : Revert "Use the Choreographer's frame time for a more reliable ti..."
879af5779c : Do not let talkback read "Item added to home screen" after an app is added to home screen.
61d0c95f76 : Create ApplicationInfoWrapper with ItemInfo.getTargetPackage() to support DeepShortcuts
493583435b : Convert px to dp for determining minWidth and minHeight for each row count, and add breakpoints
cdaf04a556 : Add animations to Manage Windows menu in Taskbar
3daaad8498 : [Launcher Memory Leak] Avoid leaking Folder/FolderIcon when removing FolderIcon from TaskbarView
0aee46e098 : Add TaskOverlay children for accessibility
f86e0e4b88 : Remove padding from All Apps and Divider views for transient taskbar.
7a14d1b7b9 : Don't recreate taskbar in Overview
ccd08f4667 : Start the home intent when swiping from home to home
86bf6ae902 : Prevent spawning multiple pinning popups on right click
8fc5f39007 : Add Event log for FIXED_LANDSCAPE_TOGGLE
7516dd06d5 : [CD Taskbar] Refactor TaskbarManager to store TaskbarRootLayouts in a map
e026b261b3 : Fix home gesture animation for 3rd party launchers.
0028bed674 : Fix Taskbar not auto stashing from multi instance menu
f9884f20f3 : Avoid drawing the launcher pill outside the view bounds
a91fbb2efd : Adding MSDL haptics for dragging app icons in Workspace and Hotseat.
ce1a73829f : Update OHM regions whenever other regions change
0610c625e0 : Only register Launcher back-callback when ROLE_HOME is held.
b28ff34177 : Cancel predictive back when sliding off back button
61cff8b686 : Move fully visible task logic to more centralised method.
0018279493 : Import translations. DO NOT MERGE ANYWHERE
b24914be8d : Import translations. DO NOT MERGE ANYWHERE
604ae3283c : Remove redundant a11y announcement upon removal of workspace item.
6c0cd2278d : Do not pre-add All Apps icon in phone mode.
f04ae6b09f : Fix LayoutTransition All Apps divider logic for RTL.
4f1e64c8e6 : Add logs for grid migration
7205b24938 : Ignore events that occur between app icons on the taskbar
86f3d35839 : Adding a mock for MSDLPlayerWrapper to DeleteDropTargetTest.
17212c441c : Move shared logic to new package.
9eb6ed8879 : Adjust visibility for legacy top task tracker tasks when home moves to front
8981377ac0 : Do not scale canvas if scale is 1 for predicted app icon animation.
c87f691e04 : Prepare for LayoutTransition with RTL support.
199f860b70 : Split up hotseat and recents into two methods.
1782af7b6e : Filter out unsupported items immediately when updating Taskbar.
1d7ac8b8a7 : Remove padding from All Apps and Divider views.
928b3989b6 : Implement cancel transition on tapping floating view.
ac2b0503b8 : Use the Choreographer's frame time for a more reliable timestamp.
64ae9bcfe5 : Swipe up from excludeFromRecents task should be shown left of desktop tasks
bc02103a6a : Migrate materialColor* attributes into colors
0ccbf14ce4 : Fix taskbar visibility when default-to-desktop
d138ce959b : [Taskbar CD] Refactor TaskbarManager for Multiple TaskbarActivityContexts
30ad7246ad : Don't store SettingsCache within ContextualSearchStateManager.
d4788e6365 : [PiP2] Do not setFinishTaskTransaction in PiP2
cb8f82a9fd : Remove Allow Rotation in foldables
24e3132bbd : Revert^2 "Update test activities with a non-default icon."
1aa57333ca : Fix background of multi instance menu in taskbar
c2f4537495 : Fix keyboard staying up in AOSP launcher.
2ceb63a94d : Deflake BubbleBarViewAnimatorTest
d406bd8f2a : Update the a11y announcement for the deep shortcut menu to "Shortcut Menu"
3d0f0eb213 : Fix bubble bar stash delay
4fd400994a : Revert "Check if all apps are translucent when finishing recents animation."
c14e143338 : Fix broken first gestures when using recents window
05eaeb29aa : Interrupt bubble animation when IME is visible
c3b4263218 : Fix wrong bar state after swiping home
511f6d5778 : Fix TaskbarNavButtonControllerTest failure
a2c1dbc611 : Update corner radius calculation in TaskView
33dfe5e7b3 : Hand off gesture nav animations to registered remotes.
be930c838f : Fix inaccurate ItemInfo.id from DatabaseHelper with AtomicInteger
dfed7c8338 : Handle edge case where overflow icon is overflowing
7a0191e753 : Add unit tests for taskbar overflow
9830496276 : Apply drag elevation to bubble bar container
4c35e50dd4 : Adding MSDL history to Launcher's dump information.
c5e77d9717 : Fix the overflow button getting stuck in the dot position
a4b56465ee : Disable 3-button-nav buttons during back button hold
df1ac14415 : Add unit testing for InputConsumerUtils
5a45c34514 : Show the dot after the flyout is interrupted
fadf3a891c : Enable split screen from app handle also for non-desktop devices
40f6f9b28a : Polish back-to-home animation for 3-button-nav
4a05c8b739 : Animation takeovers in Predictive Back.
54d198b191 : Import translations. DO NOT MERGE ANYWHERE
089d2deb3c : Import translations. DO NOT MERGE ANYWHERE
dd28335d78 : Remove any fixed landscape foldable dumps
df5e51ce34 : Fix work tab accessibility issues.
232db796aa : Fix bubble animation when swiping home
d2cc1f0577 : Update the enforce_system_radius_for_app_widgets flag's type
82efdccc6e : Wait for animations before injecting input event
970d19327d : Close repeatedly unclosed resources at the end of tests
ef05519982 : Use dynamic/relative sizes for taskbar overflow button
9f80ecfc83 : Add container around the previews in the pin widget sheet.
5c8eb5aef9 : Revert "Remove redundant a11y announcement upon removal of workspace item."
cfce474121 : Refactor InputConsumer selection logic into a util class
f01ead4eb1 : Fix Blank task thumbnails during animation in following scenarios
9707b2cd0d : Revert "Remove redundant a11y announcement upon removal of workspace item."
5353cd8626 : Making the QSB align in Fixed Landscape
49282f8ecc : Migrate away from listening for main/side specific stage types
ec111e24e5 : Check if all apps are translucent when finishing recents animation.
1ce33c8537 : Make overscroll in recommendation section limit within its bounds.
6dcbee5d30 : Keep nav button container and back button stable during SUW
5903216275 : End icon alignment early when touching down during 3 button anim to home
cdde399a00 : Add new heuristic for deciding whether we should add extra rows on grid migration to the bottom or top of the grid
859c793b6b : Clean up usage of FileLog, and some logging related to B&R
5f8e2d7c13 : Migrating widgets tests off TAPL
01eb1e037b : Add annotation to simulate RTL in tests.
3c44a029be : Draw contrast tile around the text
c87d41f369 : Never scale navbar scrim in all apps and widget picker
dbaf1028db : Add Manage Windows option to Taskbar long press menu
531a219ba7 : The test IntegrationReorderWidgetsTest was using the opposite dimensions
394a7e64f8 : Changing name of RowCount to GridDimension to be more general
596594c684 : Fix launching app animation from launcher all apps.
fc61ab973b : Add perfetto traces to privateProfile CUJs
1ba1f7282b : Fix TaskbarAllApps and TaskbarDivider container talkback announcement
52ae78d11b : Disable allow rotation when oneGridSpecs flag is on
5861ee437a : Fixing LauncherAppCompoent not available for isolated context
a9b502b464 : DesktopMode Entry/Exit Signals
48793e1fcd : Fix 3-button Navigation Switch Access Long press
556db6df04 : Remove redundant a11y announcement upon removal of workspace item.
8e4cae1258 : Change fallback closing animators for Desktop Mode.
8fa1dcdc03 : Revert "Revert AllAppsRecyclerViewPoolTest.kt"
41743bd0c7 : Revert "Listen to LauncherUserInfo config changes and hide/unhid..."
b563271ff5 : Fix hotseat layout on device rotation.
b83ea0203b : Disable fixed landscape on foldables until b/378972567 is fixed
cac373ce91 : Separating TAPL functionality from AbstractLauncherUiTest into a separate base class
1fbb98e057 : Set task corner radius to match spec
39dee43e12 : Listen to LauncherUserInfo config changes and hide/unhide private space entrypoint accordingly
121f6880dd : Test KQS gets dismissed in certain situations
e50d5ded77 : Remove noises during a11y drag in Launcher.
ca95004403 : Updates layout when overflow button and alt-tab are both triggered.
4c727e38fd : Don't bind scrollbar if current recycler view isn't changed.
6c91ef9191 : Add OldGrid field for GridOption so we can properly filter out the old grids when the flag is on
e7e752e011 : Fix test failures in AbsSwipeUpHandlerTestCase
d36f0d2b7a : Adding MSDL feedback to dragging apps and widgets over drop targets.
8451f0fb1e : Update RecentsWindowSwipeHandler to animate home->overview properly
7ddef8f82d : Aligning Android color tokens with Material
17d81a3611 : Revert "Fix jank resulting from TaskView resizing"
dde53a10e3 : 4b/ Migrate TopTaskTracker to use visible running tasks from Shell
df14fffa54 : Add elevation to workFAB to create a shadow
6863d6244b : Add a flag for the context menu to pin/unpin apps on taskbar.
e238685c0d : Revert "Adding screen record for the bug where we can't find a folder after dragging"
290fa07cc0 : Update tapl test to consider the show all button
697dcf44ac : Maintain selected or last header on clicking "Show all" button
bdbad823ce : Add contrast pill to workspace apps
0b01bf4117 : Fix permission issue with START_WIDGET_PICKER_ACTIVITY
4c3471ff44 : Fix NPE when user goes into maps navigation mode and tries to split app.
d370b4d5f4 : Implement app icons <-> leave behind animation
1c7122ff9a : Update height of predictionRow + 4dp to be aligned with NL app suggestions height.
28ebd1bacd : Account for bubble bar bounds for taskbar overflow
93b121db6f : Fix window clipped in three-button-nav on phone factors
b04492825d : Add debug logs to TaplStartLauncherViaGestureTests
09403e7d9a : Make the flyout color animation smooth
866b7e3cee : Revert "Aligning Android color tokens with Material"
39c006e792 : Register transitions for Desktop app launch + minimize animations
5ad93d97fa : Fix bubble bar position on recreate
450fb42e0f : Close any open widget resize frames when changing pages.
9b2f38fa26 : Add device profile dump flag guarding for OneGrid
5f7785a4d4 : Fixed bubble bar size for the 3 buttons navigation mode.
3e7d28e1c7 : Revert "Update test activities with a non-default icon."
5f77bbd6be : desktop-exploded-view: Add aconfig flag
1d1e50d33b : Fix some tests in TaskbarScrimViewController for bubble bar
a576a818e9 : Revert "Revert "Reset the frozen recents list state prior to tes..."
50eeb4c161 : Investigating test failure in all-apps
e2a68036b7 : Aligning Android color tokens with Material
2c7004164d : Fix split animation with small number of tasks only
36e0c2521a : Move logic onto default dispatcher, clearing the main thread to stop a performance bug
3a8a36f8fa : Only add accessibility actions in Overview if not in select mode
9ba43ab5c8 : Import translations. DO NOT MERGE ANYWHERE
133eb99f82 : Import translations. DO NOT MERGE ANYWHERE
9aaed58603 : Import translations. DO NOT MERGE ANYWHERE
881b97b3b5 : Create new TTVData onAttachedToWindow for new TTVModel.
0cceb24288 : Animation issue when splitting task from taskbar which is in DW
c1b68f4f6d : Add mounted mode image tests and remove some image tests for OneGrid
4631b2eb2c : Moving some tests off TAPL
424b8dee5e : Resetting workspace alpha when the animation is cancelled
9c774c4cf0 : Introducing the MSDLPlayer in Launcher via a wrapper singleton.
10763a74ec : Remove toggle for home screen rotation when OneGrid flag is on
b1c47097d8 : Move fixed landscape in RotationHelper from a request to a variable
8f9c2cc020 : Animate the bubble notification in overview
5ad2d9f53c : Moving some tests off TAPL
72d176c407 : Translate bubble bar when in -1 or all apps
3c8a6c6914 : Resolve issue where incorrect grid migration logic was used when migrating to strictly taller new grid
9e4c99befd : Adding Launcher Mode settings to Launcher settings
9d072a5e5c : Move initInputMonitor to the main thread
5bf8bffdb2 : Fixed quick search bar shrinking.
da2c2c0f7f : Handle Desktop app launches from Taskbar/AllApps icon clicks.
8bbe47c31b : Moving Wait to kotlin
44048ba0a3 : Suppress the bubble animation on updates
f722e4cb86 : Add flag for layout transitions for Recent Apps in Taskbar.
7820eab80c : Add focus ring to widgets header
daf1defe55 : Add ability to preview dark theme into Launcher preview
5140e8ca7e : Send bubble bar location update source to shell
35e1be7ade : Disable bubble bar in vertical taskbar
f4434d169a : Animate bubble badge scale
d39c31da74 : Adjust the bottom padding on the left pane in picker
531c227c45 : Update widget predictor to apply prediction filter
35a25984c3 : Update the widget picker UI to show the "view all" button
1c173e96be : Define the list entry and view holder for the view all button
8694df5f59 : Move search widgets list view besides the personal/work lists
56307d2cf4 : Fix failing robo test TaskbarStashControllerTest#testRecreateAsTransient_timeoutStarted
7ab3c7f744 : Updating IconCacheUpdateHandlerTest also fixing cached shortcut info getting invalidated on every reboot
82222b61c2 : [AllAppsHeader] Add override system for implementer to add plugin rows to the header of all apps grid
dbceb92317 : Fix ActivityLeak for RecentsViewContainer in TaskbarManager
1fa2840e82 : Make sure work button is collapsed when keyboard is up upon going to app drawer
bfd02bb36a : Implement the work scheduler view and update colors of FAB
1f6a7b46be : Adds all_apps_sheet_for_handheld flag.
f048c99de4 : Add OverviewDesktop test to verify carousel behavior on swipe up
db6dc94fc3 : Import translations. DO NOT MERGE ANYWHERE
b2201ad15c : Import translations. DO NOT MERGE ANYWHERE
bc4936fb6b : Add predictive back animation for 3-button-nav
f030458a10 : Notify finish on the same thread when entering recents above home
49f7df0444 : Fix flag guarding for oneGridRotationHandling
ccd96a1868 : Draw leave-behind circle after pressing taskbar overflow button
3b87d5c97a : Make the test wait for Main Thread execution.
2d55010135 : Make ContextualEduStatsManager injected by Dagger (13/n)
8f26e042b2 : Remove flags that are no longer necessary
fb51552ac7 : Move icon factory to framework
222d204314 : Add SystemOnBackInvokedCallback to RecentsWindowManager
3d534b15bc : Remove logs for resolved bug, and make some logs permanent
04752a3e63 : Add tablet 5x8 grids
cb9c8b1fc6 : Adding an aconfig flag to guard MSDL in Launcher.
461caa7c5f : Support Desktop unminimize animations, and move app-state logic
ae2c01a0b6 : Reduce log frequency of TasksRepository.setVisibleTasks()
29a430fc21 : [Connected Displays in Taskbar] Refactor TaskbarDelegate for Display Signals
ea078cb647 : Add a resource override for providing default widgets filter.
463a40fdb3 : Handle null RecentsView in FallbackTaskbarUIController
87ea411e97 : Make VibratorWrapper injected by dagger (14/n)
1ecdc2d712 : Revert "Revert "Don't allow Desktop tasks to go outside Overview..."
e906e733be : Revert "Don't allow Desktop tasks to go outside Overview task bounds"
ca39594be9 : Recycle tasks that are split when split select anim complete for reuse
d8ddd4c023 : 2b/ Update launcher to use GroupedTaskInfos
4aa8b8c329 : Fix grid entry is missing
2b6f265bf9 : Shape screen communication with the Launcher's app (2/2)
e6d75ca4ff : Start background tasks via task instead of intent.
50d2c82de6 : Fix grid entry is missing
6563429e2b : Update the flyout collapsed position
0e8093ae92 : Hide bubble dot after flyout init
647266f298 : Preserve bubble bar state.
2627657ab6 : Add listener to update taskInfo when task change via transition
0c9dc91394 : Import translations. DO NOT MERGE ANYWHERE
0242d124c2 : Handle new bubble notification during animation
87455f28fc : TaskbarManager now sets TaskbarUIController for RecentsWindowManager
e07699ae6a : Fix an issue where the overflow wouldn't show if the flag was disabled
0583861724 : Revert "Shape screen communication with the Launcher's app (2/2)"
eac269ac81 : Add custom talkback action for unarchiving apps
ebc10c9ecc : OneGrid Grid Option Updates
65ee96c65d : Convert GridSizeMigrationLogic and DbEntry to Kotlin
870e8832ff : Add flag for launcher contrast tile
68f5a205f4 : Convert ItemInstallQueue to use Dagger
c8e557a394 : Convert IconShape to use Dagger
cd958b38ac : Revert "Shape screen communication with the Launcher's app (2/2)"
600c1af779 : Use FlingOnBackAnimationCallback for predictive back
3ae3d297e6 : Don't allow Desktop tasks to go outside Overview task bounds
bab87b1551 : Stop suspending Launcher Taskbar in tests.
722d7b8dc9 : Sandbox SettingsCache for Taskbar tests.
c16a6394e4 : Add margins to KQS view that is triggered from transient taskbar
480b42f833 : Convert PackageManagerHelper to use Dagger
21bb53ff08 : Tuning gesture nav params for trackpad 3-finger gestures
716a154211 : [Dagger] Make SystemUiProxy provided by DaggerSingletonObject
64d5ccdbb0 : Revert "Add flag for launcher pill"
3736f44f8e : Fix NPE in QuickstepTestInformationHandler#getWindowInsets
586df9898a : Add work_scheduler_in_work_profile flag.
a7123472ce : Remove layoutTransitions for current FAB and use custom animation.
c34ba5ad01 : Fix crash when pausing work profile.
f6a773a476 : Fix icon during split task which is inside DW from taskbar.
69a8022769 : Import translations. DO NOT MERGE ANYWHERE
11d7806cdd : Import translations. DO NOT MERGE ANYWHERE
b402c0f081 : Migrating LauncherDbUtils to kotlin
cfa4fe7e2b : Remove dependencies on the 1-variant fallback
22029f6517 : Fix grid migration tests with grid migration refactor flag on
9d7141d1fe : Enabling Fallback launcher cases
99c442b015 : Revert "Fix Taskbar Y-Translation with Visible Bottom Sheet"
7df6af5831 : Fix flagging for LoaderTask broadcast tests and add more coverage
f052a2ccf7 : Shape screen communication with the Launcher's app (2/2)
edd7ff2fa1 : Add a feature flag to toggle DW carousel detach
35e6157303 : Allow updating the flyout message
a244c0b70e : Fix TAPL OverviewTask to use correct resource to measure height
7054095b16 : Add TAPL test for dismiss DesktopTaskView
c90fcf26c4 : Move the animator for the minimization to WMShell
b00c260669 : Handle display windowing mode change in onConfigChanged
656a026488 : Fix issue that expanding only search result led to weird scroll
be18ff2b7d : Fix issue that items were invisible in 2-pane picker with talkback.
bb85110a0d : Delete Unused Flag related to Altering how the workspace is loaded.
e87ca267b3 : Cleaning up unused flag.
befd6bb1ea : Prevent tap in swipe up zone from closing KQS
ce6d32199b : Mark TaskbarOverlayProxyView closed sooner
20e420ca40 : Match taskbar edge padding to nav button end insets
b80f90bc0d : Remove listener in destroy() that caused a memory leak.
6f61228a6b : Reuse TTV, recreating deps to get faster load times, lower mem usage
879cbb1ff4 : Cleanup ENABLE_HIDE_IME_CAPTION_BAR flag
d5d51e51d7 : Fix flickering of tasks in desktop window when opened as static tile.
dd7d854c3a : [PostBoot] Fix bug where post boot shader has a flicker
143edac127 : Tapping overflow button toggles KQS
9aa005b657 : Fix Taskbar Touchable Region when overview is in split select mode
edb2741c97 : [PostBoot] Don't destroy loader because of config change, not just because of theme change
818608a2d2 : Catch exception to prevent Launcher crash.
f607018261 : Add flag for launcher pill
92d25cd21d : Added scale animation for taskbar pinning transition.
ed7ef84782 : Set proper interpolators and durations for the bubble bar animations
6bbd805e77 : Keeping the custom widget placeholder even when the plugin is not connected
ef2c5c0e49 : Make ApiWrapper to be injected by dagger (12/n)
9fc4fad24f : Fixed icons overlapping issue.
8b88177fb7 : Fix launcher crash on configuration changed
14e43ef0de : Disable OrientationEventListener in RotationTouchHelper#destroy()
51db65ee49 : Add trace logs in Launcher for perfetto to investigate two line issue
db897c4fe9 : [PostBoot] Do not destroy post boot loader because of theme change
aa921fa3eb : Add debugging logs to transition animation
aa3f772622 : Handle taps on bubble bar flyout
cf5fbe7816 : [clean-up] Clean up Quick Launch V2 and V1.
a796d74e36 : Refactor grid migration
f230eee2ff : Updated bubble bar position to be center aligned with the hotseat
9b60028f99 : Prevent Search taskbar edu from showing on home screen.
c0a18d57e8 : Actually merge the colors for recent indicators.
8c0a78727c : Differentiate failed restores from failed grid migrations.
a5b6c155b6 : Update taskbar overflow button
69ed08ef86 : Add margins to KQS view that is triggered from taskbar
d633368465 : Call invalidate() during updateView() for updating the header.
f679ba942f : Removed stashing hotseat calls.
7664e67678 : Import translations. DO NOT MERGE ANYWHERE
451a5e395f : Import translations. DO NOT MERGE ANYWHERE
8463dc47eb : Generalize ActivityTracker to support RecentsWindowManager
eedc7ab8fd : Animate the bubble bar Y for task bar pinned/unpinned switches.
2fc32dc2d8 : Enforce system radius instead of separate value for widgets in launcher
9c65174760 : Set twoline toggle to false when not in english language.
a48efb33f2 : Fix swipe up not showing focused task when Desktop Tasks are updated
2d7b004842 : Fix home screen briefly flashes after setup.
164b832ec4 : Revert "Made the taskbar touchable during animation to home."
bb996d1304 : Make RefreshRateTracker to be injected by Dagger (11/n)
9a5941ab91 : Account for all apps offset during taskbar layout
27e05992f6 : Update recent indicators to match spec.
13b964f91d : Instead of checking for locale, check language specifically.
ade96d0199 : Update popup and arrow positioning logic
0b7f4cf1e2 : Fixing deadlock in dagger singletons
36402adaa6 : Set task properties to prevent the task being null. This behaviour is expected by existing callers and was likely broken by ag/28151977
62495bbda7 : TaskRepository performance improvement
ea49585931 : Update removeTaskInternal Desktop case
ca034f122a : Fix crash when initiating split with 5 instances of Chrome in Desktop Windowing
3a4d67b965 : Revert "Update OverviewCommandHelper to use Executors.MAIN to reduce the percentage of missed frames"
758bbeb6eb : Update comments on createAdjacentPageAnimForTaskLaunch
41b48b31ab : Don't enforce max tasks in KeyboardQuickSwitchView
4dfb35acc1 : Remove user TIS unlocked runnable when a TIS instance is destroyed
009d9dac30 : Adding a flag for launcher recents in window
6957a037fe : Make mAllAppsButtonContainer not Nullable
c1227779a6 : Enable TaskbarEduTooltipControllerTest
12f77ba713 : Wire up flyout to new bubble animation
a9355d4dba : Remove user TIS unlocked runnable when a TIS instance is destroyed
7966dd785c : Move ContextualSearchHapticManager to MainThreadInitializedObject
91a2aad67f : Prevent archived apps content description from being overridden unexpectedly
ab7220c342 : Open taskbar pinning popup view from anywhere
9cd3154952 : Moving PluginManager to dagger
36bbc4f18f : Fix Hotseat stashing on user going to dream
3c873420e8 : Moving SettingsChangeLogger back to MainThreadInitializedObject
1d4e75c777 : Refactored hotseat translation X logic
ab23dc8648 : Fix split selection translation when focused task is selected for splitscreen
ecaf00a464 : Update RecentsWindowSwipeHandler to animate home alpha properly
ea788d92be : Draw live tile below Overview after launch animation
fbe5044c4e : Import translations. DO NOT MERGE ANYWHERE
7cbcaa7b67 : Import translations. DO NOT MERGE ANYWHERE
fc6cc4e409 : Cleanup for TAPL Debugging
a2dea77229 : Using synthetic recents transitions for Recents in Window
e37b479b17 : Fixed the opened folder on taskbar when NotifShade is shown
07669e9575 : Improve test isolation in AbstractLauncherUiTests
f4a21b7e5c : Fix RecentsView crash when DesktopTask has only 1 task and it is in split mode
6beaa59710 : Add User group logging for Omnient
fa6ca0fe6f : Have external resource call init() explicitly.
7e1149471a : Update letter list textview background color to spec.
347b149f6f : Remove color animations when scrolling.
d89732ef85 : Made the taskbar touchable during animation to home.
f36375d907 : [Tests] Clear MAIN_EXECUTOR in NavHandleLongPressInputConsumerTest#tearDown
6bb9d56ded : Add animation runner for alt-tab desktop app launch
44854ed9d5 : Revert "Add screenrecord for testWorkspaceSwitchToAllApps"
c33fe5bd56 : Update taskbar window size for flyout
d82503fc42 : Fix Taskbar Y-Translation with Visible Bottom Sheet
e4ce5ebc5e : Fixing leak in LauncherAppWidgetProviderInfo
58fc65dedf : [Dagger] Make CustomWidgetManager provided by DaggerSingletonObject
a3ec98e06d : Migrate to FakeLauncherPrefs for Taskbar unit tests.
33902557ec : Set psDivider to gone in collapse due to updateAdapterItem suppression in test.
7eae20bcb1 : Update OverviewCommandHelper to use Executors.MAIN to reduce the percentage of missed frames
1d12cbcd23 : Add screenrecord for testWorkspaceSwitchToAllApps
805cadc1e7 : Fix package name for TaskbarEduTooltipControllerTest
2f4ccc63b6 : Add task menu item to move task to external display
5a01f588be : Fix jank resulting from TaskView resizing
dbd3c09623 : Disallow 2-finger swipe from the corners to trigger assistant from trackpad
bf9d047b53 : Removing unnecessary SafeClosable requirement from DaggerSingletonObject
518ca4609c : Optimize updating hotseat items in overflown taskbar
05105162d7 : Force config update when twoline is toggled.
1295ffec85 : Improve max taskbar icon count calculation
c363295cf1 : Simplifying some LauncherModel logic
43b01b9900 : Cleaning up temporary interfaces which were created for refactoring
de7cf6330d : Remove unused API removeFromSideStage
c9ac51c521 : Add spannedDrawable for the divider in the letter fastScroller.
9d60a18c5c : Only have private space drawable section for beginning of private space section and keep letters for everything else.
d08761ed7e : Import translations. DO NOT MERGE ANYWHERE
0b936727d6 : Migrate Contextual Search code to AOSP
88011d3ccb : Focus search container by default
066f5adcf6 : Run Taskbar controller tests on VirtualDisplay.
f20255fd09 : Revert "Demote TaplThemeIconsTest#testShortcutIconWithTheme"
45fe132a48 : Handle Quiet Mode for Work Profile gracefully when Pixel Launcher is not default.
266f51b488 : Converting LauncherModel to kotlin
7f6ecaaa06 : Add FakeLauncherPrefs with basic tests.
9cbc478574 : Split LauncherPrefs into abs class / impl.
2d85bf6432 : Small refactor to remove unnecessary inheritance
04c2bf387a : Update taskbar corner roundness progress
dfcec9c71c : Fix issue that widget drag and drop failed when are off
51e2bb31ed : Created a helper method that calculates the hotseat icons shift X.
ff160c51a0 : decoupling task animation lifecycle from recents lifecycle
c25aa0c20f : Create container view for bubble bar
1decb57e61 : Add state manager logs to protolog
6570ac750f : Update bubble bar swipe logic
c8d1465a76 : fixing a state cycle issue in recents window where we set home instead of bg_launcher
98ecad0352 : Prepare for wiring bubble bar flyout
6a207fc4f9 : Moving manifest initialization to RoboApplication instead of a separate rule
8c0dc4c655 : [dev_option] Move DesktopModeFlags out of android.windows.flags
2075e39c76 : [Dagger] Make AsyncClockEventDelegate provided by DaggerSingletonObject
46b20441e4 : Migrate Taskbar tests to use SandboxApplication.
95a806d50b : Create Launcher flag: enable_taskbar_connected_displays
4d819d9dea : [Dagger] Make SettingsCache provided by DaggerSingletonObject
b619d2c297 : Removing dependency on PackageInfo in IconCache
0a65c5bd72 : Fix Taskbar to Hotseat Animation for Non-Predictive back apps
9a723c8f21 : Refactor Taskbar Button Coloring Code
3469b1f6be : Update stashing hotseat logic.
5e04f0960d : Store the flyout in BubbleBarBubble
d04464ac39 : Add SandboxApplication for keeping created contexts in sandbox.
4d6194f432 : Allow specifying base for SandboxModelContext.
bae6a107de : Rename to ObjectSandbox.
3aa92f27c1 : Import translations. DO NOT MERGE ANYWHERE
b485d4f2ce : Import translations. DO NOT MERGE ANYWHERE
49df29669d : [Dagger] Make ScreenOnTracker provided by DaggerSingletonObject
3dcb19f33a : Fixed the hotseat placement in RTL mode
f3f182a7ef : Add Recents Window logs to ProtoLog
3d98ebce27 : Add a flag to display tiered widgets by default in the widget picker
ee98cd4bdf : Don't invalidate swipe handler until parallel anim finishes
c2adf8cf9c : Use desktop API to remove desktop for dismiss
75675e123d : Rename bubble bar flyout fields
9a949ff4ca : [Dagger] Make WellbeingModel provided by DaggerSingletonObject
95045ce03a : Fixing test flakiness in Launcher initialization
b5fdc80bb2 : Handle messages to update Launcher preview color
12aff98cd8 : Add a AbsSwipeUpHandlerTestCase for the RecentsWindowSwipeHandler
c407f1aa5a : Fix a bug where the running task would be drawn below recents task
06dbdcea4e : Fix a bug where the last large task index was not being set correctly.
b6ac8b2ae9 : Fix for recents button quick switch with only focus and desktop task
e4b0747194 : Update bug number for enable_large_desktop_windowing_tile
989aa8ac83 : 3 button nav is laid out correctly.
d0ce1d3c50 : Get rid of unused PopupDataProvider field and also add method to Share class to be able to pass mock StatsLogManager from unit tests.
d819f32e46 : Animate bubble bar alpha when notif shade opens
2166749190 : Set bubble bar invisible while stashed
35b87d33b2 : Inject bubble controllers directly
936fec94bd : Allow injecting bubble controllers in taskbar test
66b501eae2 : Add logs in ProxyActivityStarter when it fails to launch an activity
a909cc2dd7 : [NPE] Fix NPE of TaskInfo in FloatingWidgetView#getDefaultBackgroundColor
3e8cc610b3 : Fix an issue where bubble bar would collapse when rotating
ad7a938268 : Apply bubble bar background alpha to stroke paint
adc171b313 : Move padding from parent to WidgetPagedView and siblings in 2-pane pkr
d8c3328d4d : Pass bubble flyout from wm shell to launcher
9240b68dd6 : Update where we end CUJ_LAUNCHER_APP_SWIPE_TO_RECENTS
32ef2aed33 : Handle enter key as editor action in search bar
67d0d59caf : Adds View screenshot tests for TaskThumbnailView.
da477af93d : Fix comment in TransientBubbleStashControllerTest
2f3a57eb5d : Add split instructions view to desktop split.
7519b7c2e6 : Fix Taskbar unlock transition
29e060408f : Make InstallSessionHelper injected by Dagger (7/n)
fd6772ab84 : If we're going to overview we might need to unstash
21f504e0f1 : Fix small bug with TaskView tile expansion
63dfd60b23 : Add debug memory logs to AbstractLauncherUiTest
e978f6cedb : Converting some caching logic booleans to lookup flags
9e196da120 : Import translations. DO NOT MERGE ANYWHERE
d77f2c1692 : Import translations. DO NOT MERGE ANYWHERE
32007e05cd : Don't call onAppsUpdated() when in testHarness
85a8898402 : Remove unused TestProtocol
8bd8feb72d : Bubble bar flyout polish
0cf067ef47 : Fix translation when focused task or desktop task is dismissed. (3/4)
ded6500848 : Make large task tile snap to position after scrolling
89283aa329 : Override pivot in TaskViewSimulator as well for zoom in launch animation
61b22f90ba : Listen to display focus state in Launcher
5c6a3e5746 : Use context#getDrawable instead of AppCompatResources#getDrawable to fix test failure.
7cada7c46b : Two Taskbar Bug
5464ce0084 : Update bubble stash test for transient taskbar
562ab4d188 : Fixed split in Desktop windowing
52afe8b794 : Pass correct scrollDiff to translateTaskWhenDismissed
396441545f : Animated fullscreen and desktop carousel attaching together
5c34cf2741 : Update flyout content during animation
518d404509 : Added nav bar properties to DeviceProfile dump() method
f7c32a29fd : Requesting ApplicationInfo in cached object
bb4ae757f6 : Update device profile dumps for bubbles
66b695b5fd : Update splitscreen SnapPosition constants
26ed420173 : Add teleport animation to the navigation bar.
688bc453cd : Refactor CompoundString to use a string-format API matching ProtoLog
04484049fa : Extracting handheld task dismiss translation to smaller functions (2/4)
8b50c8f609 : Don't hide DesktopTaskView in split select
b140ccec49 : Increase frame limit for bubble bar screen test
2831d1eaf7 : Make workspace and hotseat scale animations interruptible.
e40cc40619 : Close KeyboardQuickSwitch if user taps outside the container.
0d11c4a7ef : Adding programmatic provision of WRITE_SECURE_SETTINGS permission to Launcher3Tests.
a1cebfa0c0 : Updating IconProvider API to use a single API to load icons
dfbe78fc07 : Import translations. DO NOT MERGE ANYWHERE
2321e1a323 : Import translations. DO NOT MERGE ANYWHERE
dfb48214ea : Update placement of the hotseat according to the bubble bar location.
6813d878ef : Converting various cache lookup option booleans to flags, so that it can easily be extended
be796be5e6 : Fixing wallpaper preview rendered not being cleaned properly
e297bc0c0e : Cleanup for TAPL Debugging
f7fc19bec1 : Implement bubble bar flyout background animation
21ac8f1451 : Fix grid task dismiss animation when desktop large tile is enabled (1/4)
deaca33cc2 : Position KQS to the bottom if opened via taskbar.
9ce27637d7 : Filter out running pinned app tasks from KeyboardQuickSwitch.
392a265c1d : [Contextual Edu] Don't update edu stats when swiping down all apps panel
2c72b9bd1a : Fix cropped folder name text in arabic
408b1e0103 : Cancel TASKBAR_COLLAPSE/EXPAND CUJ tracking when the stash animation is cancelled
65c1010029 : Update test activities with a non-default icon.
bb87ec09de : Add OWNERS file for multivalentScreenshotTests/src/com/android/launcher3/taskbar/bubbles
d4488bfd72 : Implement focus state on personal/work tab button
0b19598d36 : Handle swipe down on collapsed bubble bar
fdc09f4a7c : Fixing package override is applied in all lookup options
37182de670 : Fixing flakiness in robo tests
43ea3e0444 : Provide Display ID for overview screenshots
2901922283 : Fixes unflagged usage of deprecated TTV
62335be7c9 : Track all ActiveGestureLogs in ProtoLog
dd4145b917 : Update TIS logs to narrow down duplicate service creation
b633b9aa2b : Fix for bug where we don't use default grid on comet, and migrate normally if not in a B&R case
d7dc6ba320 : Add tests for TaskbarAllAppsViewController.
bd90d3aa3c : Add tests for TaskbarAutohideSuspendController.
78f8d40519 : Add tests for TaskbarScrimViewController.
3231ba01e6 : Add tests for TaskbarStashController.
b319e24136 : Taskbar overflow: only show if apps don't fit + limit # of icons
00b6996e6d : Add protolog support to Quickstep
26d74b9ec0 : Have bubble bar unstash gesture track finger
86046044b7 : Revert "Move testQuickSwitchFromHome to possubmit"
61e6cb27ed : Fixes to allow enabling of enable_refactor_task_thumbnail flag
38deee2508 : Update goToOverviewUnchecked to account for desktop task visibility
28d6bbd6b4 : Moving various application into related methods to a separate class
a49603e5ad : Import translations. DO NOT MERGE ANYWHERE
36dc0f868c : Import translations. DO NOT MERGE ANYWHERE
45aa9fdad8 : Update suw insets in landscape
45d79db4d7 : [Predictive Back] Enable predictive back in AndroidManifest.xml
87a4cee6d7 : Increase frame limit in bubble bar screen test
6deccd8ef2 : Don't return TYPE_RECENTS task in TopTaskTracker.getCachedTopTask
34d95df4ea : Fix overview animation after reboot.
daf37eeb07 : Cleaning up unused handlers
7ed1868d60 : Moving RecentsWindowManager away from the singleton pattern
c50aa8bf31 : Limited recents in window introductory cl
f29dc7c5ec : abstracting fallback views to support container instead of activity
0033148ef0 : Only call reset() when transitioning to a State that doesn't have RecentView visible
667fe050ff : Make widget sheet layout container not focusable

+- Project: platform/packages/apps/LegacyCamera

94de262e : Add README with details about the status of support for this app

+- Project: platform/packages/apps/ManagedProvisioning

b5dd5554b : Import translations. DO NOT MERGE ANYWHERE
fa3c135ac : Import translations. DO NOT MERGE ANYWHERE
016b5f732 : Check if FRP is active before provisioning
43917d5ef : Import translations. DO NOT MERGE ANYWHERE
f2eb86c1c : Import translations. DO NOT MERGE ANYWHERE
1892ce676 : Import translations. DO NOT MERGE ANYWHERE
a187fdb49 : Import translations. DO NOT MERGE ANYWHERE
b221adea0 : Update the icon for the setup wizard
9fcc04ac0 : Apply the new SUW theme style for MP activities
320d98118 : Import translations. DO NOT MERGE ANYWHERE
08276c33c : Import translations. DO NOT MERGE ANYWHERE
a73cbaece : Import translations. DO NOT MERGE ANYWHERE
12489423d : Import translations. DO NOT MERGE ANYWHERE
848a37f1c : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Multiuser

bbeded9 : Add app name and icon to display in widget library
030e077 : Filter getAliveUsers to return only FULL users.
4de4aee : Prevent Multiuser widget crash caused when user icon is not set
8dfcf95 : Move the multiuser package from system to system_ext
fe33240 : Show a dialog when user switching fails
89630d2 : Fix typos and outdated javadocs
01b0144 : Do not display the add user button by default
8fd3ff6 : [4/n] Enable user switching from the widget
1bc0407 : [3/n] Populate the widget UI with user data
275b4ca : [2/n] Add users repository for the multiuser widget
26be619 : [1/n] Use icon bitmap instead of icon uri inside widget
0be1f11 : Rename com.android.multiuser.widget to com.android.multiuser
cb6fa07 : Reduce the breakpoint for the widget title to 100dp
f0f5277 : Center the text in the error layout
a4efd74 : Center the border around current user icon
29e3f49 : Add a loading layout for the multiuser widget
a2a07c2 : Add domain layer for the Multiuser Widget
f895ce8 : Add database for the Multiuser Widget
1f772b6 : Make the widget Settings button open Users Settings
6b9abf7 : Fix duplicate namespace in multiple android library modules
5e0a792 : Add Multiuser Widget UI
63c13d4 : Add Glance infrastructure for Multiuser Widget
4a24646 : Add boilerplate for Multiuser package
5e416f7 : Initial empty repository

+- Project: platform/packages/apps/Music

924b0a3 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/MusicFX

8f34e45 : Import translations. DO NOT MERGE ANYWHERE
0f78d30 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Nfc

f4f511c8 : Import translations. DO NOT MERGE ANYWHERE
527f4351 : Track HCE payment service connection events in NFC log
bb180461 : nfc: Rebind Payment Service if polling frames are dropped
0d9f4862 : Rebind Payment service when the service binder isn't responding
305df5a5 : In doDisconnect, deactivate even if the tag is in sleep state.
f3ad9a81 : Import translations. DO NOT MERGE ANYWHERE
0a89d5f7 : Test: Adding Test Cases to CardEmulationManagerTest
d7281db8 : Update test generation code within the Nfc Replay Tool to match Mobly test suite in google3.
77ad6489 : nfc: Fix build failure
47abaa33 : Use Mutex to synchronize access to sSimPipeIds and prevent the race condition that might cause the null pointer dereference
d757f40d : Added call to NFA_SetNfcSecure()
909f0121 : nfc: Fix build failure
94e9179c : nfc: Fix build failure
174f3ec7 : nfc(apex): Remove updatable=false
af8eb8c4 : Test: Adding Test Cases to CardEmulationManagerTest
55408ca7 : nfc(apex): Remove updatable=false
bcc30b4b : Test: Adding Test Cases to CardEmulationManagerTest
3c2ed8b7 : Test the length of the received responses against the expected responses
b2df552e : Test: Adding Test Cases to NfcFileUtilsTest
d695baf3 : Test: Adding testcase for ConfirmConnectActivity
b8adc9b1 : Fix NullPointerException on AlertDialog.dismiss()
a1f1d761 : Use Objects.equals() rather than != to compare services. Otherwise we can get a spurious onDeactivated()
de8f962b : Run PaymentService1 in a separate process so it can be killed in the test.
a614d279 : [nfc] add uint test case for releaseSoundPool()
4bb4f72b : Add support for associated role services
c6769b0e : Test: Adding testcase for PeripheralHandoverService
c00e4d68 : nfc(app): clearPreference Implementation
b95b1b29 : Refactor PN532 & Casimir modules
e6a86027 : Reduce memory usage of NFC watchdog
5570f2fa : Recalculating SAK value while storing historical bytes in case of Multiprotocol Tag.
9feb1e10 : nfc: Fix bug in nfc oem dispatch
25fc93cf : nfc(app): Fix multiple typos in service management code.
dabbb4af : Import translations. DO NOT MERGE ANYWHERE
48dcbc5e : [NfcApp] Add route type to routingTable entries.
57c6b761 : nfc: Always unbind from payment service in unbindPaymentServiceLocked
7621eb1c : Add capabilities checks for firmware exit frames
1a8486ae : Import translations. DO NOT MERGE ANYWHERE
028e16d1 : Import translations. DO NOT MERGE ANYWHERE
a78d0055 : nfc(apex): Use the "b-launched-apex-module" defaults
c07a54fa : Use a scheduler to process only last RF_NFCEE_DISCOVERY_REQ_NTF
a9ca2f4b : Use DeviceUnlockedStateListener instead of KeyguardStateListener for SecureNFC
6259d569 : Fix exit frame bug number
122b58ea : nfc: Use the right bug id in aconfig flag for nfc_alert_tag_app_launch
494ad555 : Require new observe mode command that is per-technology and doesn't require rf to be disabled for observe mode support
cd017a1f : Add functionality to replay a generated test after creating it from a snoop log.
8f422d9b : Add exit frames flag
1b64c800 : Do not overwrite default route variables in updateEeTechRouteSetting
f3e11454 : Import translations. DO NOT MERGE ANYWHERE
0ebcfb87 : Ensure polling is disabled before returning to discovery
954f8f4c : Test: Adding testcase for BluetoothPeripheralHandover
65f05190 : [NfcApp] Implement getPausePollingTimeout.
85c8b2ca : Add support for associated role services
0bcdf31f : Add T4T Ndef Nfceee feature
bdb0df5a : Test: Adding Test Cases to NfcUnlockManagerTest
b15e5fcf : Send notification when Tag app launched for the first time
11f69549 : Add flag nfc_alert_tag_app_launch
002453e0 : Test: Adding Test Cases to NfcDiagnosticsTest
ec8bc3ea : Bring forward the nat initialization to prevent crashes caused by nfc_ncif_cmd_timeout occurring when the nat is empty
10c75eb2 : Migrate materialColor* attributes into colors
b0f9263f : Import translations. DO NOT MERGE ANYWHERE
70f5f225 : Test: Adding Test Cases to NfcDispatcherTest
840453c4 : [nfc] fix Abort message: 'Thread[25,tid=7578,Native,Thread*=0xb400007473adcb40,peer=0x77f80300,"disableInternal"] attempting to detach while still running code'
548f1c3c : nfc: Fix a bug in RegisteredServicesCache
fdc0a302 : Add AssumeTrue for the rf event coalescing flag.
a888dd00 : nfc(app): Add API to return default NFC SubscriptionId
aec184c4 : nfc: Fix a bug in RegisteredServicesCache
0bf2003e : nfc: notify tag connection info to OEM callback on discovery
e6a26974 : Implement a variety of NfcEventListener callbacks
8e3f93fd : Add implementation of ANDROID_QUERY_POWER_SAVING_STATUS_CMD
cdd61afd : Reconnect formatable MIFARE Desfire tag before checking NDEF
11cb1d15 : Using FWI value for deactivate timeout in reSelect() procedure.
9d497e6c : Sending empty I frame before deselecting tag not answering to S(DESELECT)
3eca28eb : implement field on notifications and setting power level on casimir
8977d0a3 : nfc(app): Add return status for pause/resume Polling APIs
4558cafa : Reading of Chinese Id card
87e08590 : [NfcApp] Implementation of commitRouting and extractOemPackages
5d92e041 : Test: Adding Test Cases to NfcDispatcherTest
0f8fe37e : Migrate the NFC Replay Tool from google3 to AOSP.
e362552a : Synchronize on usage of remote
c84979db : nfc(app): Remove unnecessary event log debug print
bcde817f : Revert "[nfc] fix pthread_mutex_lock issue when call mDeviceHost.routeAid in NfcServiceHandler.handleMessage(case MSG_ROUTE_AID)"
03953446 : Test: Adding testcase for HandoverDataParser
2a982c79 : Update isTagIntentAllowed and isTagIntentAppPreferenceSupported
4db8600f : Register VS callback early
714fd7ff : nfc(apex): Add nfc module to mainline SDK
65869c9d : Fix description of coalesce_rf_events flag
2b217907 : Test: Adding test cases to NfcShellCommandTest
8e9714c7 : nfc(app): Fix NPE in ConfirmConnectToWifiNetworkActivity
7c5d3eab : Fix wrong initiator_data used in poll_b()
4123c401 : [NfcApp] Return status code when set service for category other.
bcd08140 : Refresh ProvisionMode state when tag detected
c19919e5 : Test: Adding testcase for ArrayUtils
8fc30ba4 : Test: Adding test cases to NfcShellCommandTest
63e01650 : Add utilities for Type and Data PollingFrame tests
c281fc9e : nfc(apex): Add sepolicy file_contexts to com.android.nfcservices
951baec2 : Test: Adding Test Cases to NfcDispatcherTest
ce4ec1eb : Test: Adding test cases to NfcShellCommandTest
6a860c74 : Run multidevice tests with Cuttlefish
7499e08f : Add halt/attrib command for ISO-DEP before deactivating in Frame RF interface
8c98423b : nfc(migration): Minor fixes in the migration code flow
107c6a93 : nfc(app): Add a callback for onEeUpdated
7073315d : nfc(app): Invoke onTagConnected(false) when tag reading is disabled
d063eb6d : [NfcApp] Implement OEM log event callback
155432ee : nfc(migration): Add a event log to indicate data migration
989eef1b : [NfcApp] Implement onLaunchRoutingTableFull callback
5ff75090 : Only run NFC watchdog when we're in an NFC field
26555250 : Test: Refactoring NfcDispatcherTest
09e5be62 : Test: Adding test cases to DtaServiceConnectorTest
197b4183 : Import translations. DO NOT MERGE ANYWHERE
6c51e4f5 : nfc(migration): Add more debug logs during migration
5da1c3ed : nfc: Move invocation of onTagConnected
a438fe06 : Coalesce RF field events
f3866a57 : [nfc] fix pthread_mutex_lock issue when call mDeviceHost.routeAid in NfcServiceHandler.handleMessage(case MSG_ROUTE_AID)
74a78eb0 : nfc(app): Add event logs for dynamic registration of aids/polling loop
15d456ee : [NfcApp] Add implementation for getRoutingTable.
152f58ef : nfc(app): Create OEM extension init broadcast
e756f99e : Persist polling loop filters via dynamic settings file.
93b0bba3 : nfc(app): Gate some event logs to vendor logs being enabled by user
133ab786 : Add AutoRfCa setting to PN532.unmute()
819d1f40 : Refactor PN532 module; add support for power control
b77fa68b : nfc(app): Remove Tag from onTagConnected callback
2f875ae8 : Implement Observe Mode and Preferred Service Event Listeners with functional callbacks.
1dc17e3c : Add aborting mEeUpdateEvent when nci command timeout occurs
7e38b314 : Correct a logging message in PreferredServices#loadDefaultsFromSettings
dd6f2e08 : Import translations. DO NOT MERGE ANYWHERE
fab5d8be : onGetOemAppSearchIntent & onNdefMessage API implementation
178c58c4 : Test: Adding test cases to UnCovered Methods In NfcServiceTest
a8064fc4 : Test: Adding test cases to NfcServiceTest
4cbf60c2 : nfc: onTagConnected oem implementation
4e749b73 : nfc: Implement missing OEM extension callbacks
7c2a5fff : nfc: Add event log for CE routed AID
d9ac97a9 : Add OemExtension callbacks support
78ce63a5 : Do not send getProprietaryCaps when NFC fails to initialize
15131a96 : Add Technology Type info for the active secure element list
ce671107 : nfc: Make NfcTestCases compile against SDK
9ce53d52 : Test: Adding test cases to NfcServiceTest
40744323 : Test: Adding test cases to NfcServiceTest
e5d66bb6 : Fixes a bug in responses_match() to account for None responses.
a2de8e2d : nfc(test): Use dynamically retrieved package name to look up resource
f258f7be : Test: Adding test cases to NfcServiceTest
c74a25ad : nfc(mts): Fix mainline module name
561b4f5c : [NfcApp] Check the package when passed in nfcService.
ceedecf4 : use UserIdInt annotation
19c2a105 : Test: Adding test cases to NfcServiceTest
2f8e2668 : Revert "[NfcApp] Check the package when passed in nfcService."
811a6d10 : Updates tag.transact() to account for transactions in which an extra '00' byte is appended to the beginning of some APDU responses.
fb5efb28 : Add activity & modify snippet for testing polling frame timestamps
9b9cebeb : Add utility functions and definitions needed for polling frame-related tests
605080ad : Add transceive_raw() method to PN532 for sending raw frames with any configuration
b0e3b9ce : Test: Adding test cases to NfcServiceTest
d6816aa4 : Catch AssertionError in waitUntil()
eddce2c8 : nfc(app): Only perform DE migration if the CE data is not empty
546f9de3 : Test: Adding test cases to NfcServiceTest
9e112bd1 : Make timeout optional for pausePolling & also resumePolling only if not started
5b9d42ca : nfc: Fix usage of UserHandle.CURRENT
84a76235 : Add UICC listen tech for tech A,B,F route to DH.
8fc443f6 : Fix for recoverOverrideRoutingTable API.
09c994d7 : Fix checkstyle errors
0cacbe39 : Fix testIsSecureNfcSupported with enable_secure_nfc_support
ccdf52b0 : mts(nfc): Add a MTS module for nfc
59a6d0a8 : nfc(migration): Remove migration app from AOSP builds
f92b5294 : [NfcApp] Check the package when passed in nfcService.
a0f60b67 : nfc(migration): Fix typo in migration code
160e26fa : Test: Adding test cases to NfcServiceTest
882af677 : Fix historical bytes in nfcutils.py
37414f4a : nfc(migration): Minor bug fix in the tag app prefs file name
3884d2a8 : [NfcApp] Check nullability for all routes when override.
f9e38391 : Catch IllegalArgumentException thrown from unbind
d895c644 : Test: Adding test cases to NfcServiceTest
0036ce89 : DISCOVER_NTF: do not call discoverTechnologies() if reselecting
df0cbf3f : Handling of type A tags that don't answer to S(DESELECT)
14beda05 : Changes in tag detection procedures
20b4e2b5 : Add support for extended frames to PN532 library
c133c8fc : Test: Adding test cases to NfcServiceTest
3a2e556e : nfc(migration): Disable the boot completed receiver after migration
8887772d : Test: Adding test cases to NfcServiceTest
1b90d638 : nfc(app): Handle data migration for Google package name
45fc3777 : Use ComponentNameAndUser class outside of HostEmulationManager
42049ef0 : Adding default system code route to overwriteRoutingTable() API
3ce2abc3 : Test: Adding test cases to NfcServiceTest
72508625 : nfc: Rename jni tests to libnfc-nci-jni-tests
1a8b2856 : Test: Adding test cases to NfcServiceTest
9cb7c6e8 : Add missing import for hexlify
c1295cbb : Test: Adding test cases to NfcServiceTest
913ef387 : Test: Adding test cases to NfcServiceTest
b42b6c98 : Allow multiple service bindings during transaction
80235358 : Propagate NFA_EE_ACTION_EVT->NFC_EE_TRIG_SELECT up to onOffHostAidSelected()
157111e0 : Send DESELECT if needed when selecting next tag
ada457b2 : nfc: Disable flaky testDisableWhenDisabled
a222c9ce : Test : Adding test cases to NfcServiceTest
57a91f69 : Add a basic cli interface for pn532
3c6b50ce : nfc(app): Use retrieved package name instead of hardcoding it
a032ac82 : nfc(apex): Create NfcMigration APK to migrate persistent data
0ec646cd : Correctly notify of preferred service change on user change
82fb3b12 : Test : Adding test cases to NfcServiceTest
d8102022 : Send DESELECT when reselecting MIFARE RF interface
24f7d309 : Force connection if current connected protocol is different from target
1b916df9 : nfc(app): Use package name to check for process name
7ae36456 : [NfcApp] Hook up the implementation with extension api.
5ff72243 : Test : Adding test cases to NfcChargingTest
5065dd6e : nfc(app): Migrate settings files from RegisteredServicesCache
e6013b1b : Test : Adding test cases to NfcChargingTest
01fbe09c : Add test for DataQueue
65762a3b : Run NfcTestCases in presubmit
a85dd2d1 : Post call to CardEmulationManager.onObserveModeStateChange to handler
6f921ea8 : Post call to CardEmulationManager.onPollingLoopDetected to handler to get off of NFC callback thread.
4e658481 : Test : Adding test cases to NfcChargingTest
c51ecdcb : Fixing warnings in PN532 Kotlin library.
e2a245cf : Ignore return values for PN532 kotlin library.
c07d41f6 : Test : Adding test cases to NfcServiceTest
6d99c716 : [NfcApp] Implementation for more oem extension APIs.
6d25c291 : nfc(app): Migrate settings files from RegisteredServicesCache
0fb44d17 : [Nfc App] Add security log when nfc is enable/disabled.

+- Project: platform/packages/apps/OnDeviceAppPrediction

44ccf44 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/PhoneCommon

33997b7 : Import translations. DO NOT MERGE ANYWHERE
64d65be : Import translations. DO NOT MERGE ANYWHERE
0b5ca88 : Import translations. DO NOT MERGE ANYWHERE
c8bf5d2 : Import translations. DO NOT MERGE ANYWHERE
1f2dc28 : Add README with details about the status of support for this app

+- Project: platform/packages/apps/Protips

6ac362b : Add README with details about the status of support for this app

+- Project: platform/packages/apps/QuickAccessWallet

795c6e6 : Add README with details about the status of support for this app
b2d8903 : Update targetSdkVersion to 35 for the QuickAccessWallet.

+- Project: platform/packages/apps/QuickSearchBox

d39d677 : Update comments to point to the new location of event.logtags.
3270a5d : Add README with details about the status of support for this app

+- Project: platform/packages/apps/SafetyRegulatoryInfo

52c711a : Import translations. DO NOT MERGE ANYWHERE
09673b5 : Import translations. DO NOT MERGE ANYWHERE
65359c2 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/SecureElement

eb66d91 : [nfc] set mGetHalRetryCount to 0 if retry success
9519a1e : Make sendBroadcast() really asynchronous

+- Project: platform/packages/apps/Settings

f29f29cdc06 : Revert "Use BatteryOptimizeUtils to add packageName into PowerSaveWhitelistUserApps allowlist, which will set app into Unrestricted Mode"
6df5cb0bc43 : Do not show satellite messaging in sim settings if sms is not supported
e33852e0613 : Import translations. DO NOT MERGE ANYWHERE
76a78ab7faf : Import translations. DO NOT MERGE ANYWHERE
e8df6b2add6 : Import translations. DO NOT MERGE ANYWHERE
f496cf6fb91 : Fix NPE on modes page when schedule has no days
10970ad6be5 : NTN connected type is manual, UI status shall be checked by callback.
a3a7922dcee : Revert "[Biometric Onboarding & Edu] Update Set up Fingerprint/F..."
c8e84e540f9 : Revert "Migrate materialColor* attributes into colors"
0dc6180a276 : [Catalyst] Fix enable state for Wi-Fi hotspot
087ac777a89 : [Biometric Onboarding & Edu] Update Set up Fingerprint/Face Unlock page
c6544f991fa : Disable Reset app preference during the call. Similar with ag/27511802, this is for non-SPA environment
a02483adc94 : [Catalyst] Refactor some Text Reading items
d989dc5c36c : [Catalyst] Refine WifiHotspotSwitchPreference
387753c0050 : [Catalyst] Refine DataSaverMainSwitchPreference
7ca56d493ae : [Catalyst] Avoid creating new KeyValueStore for AirplaneModePreference
dc1a52a2eec : [Catalyst] Simplify KeyValueStore API calls
7935068c0e8 : Revert^2 "Integrate Settings with Preference Service"
8577c105358 : Migrate materialColor* attributes into colors
ef835bb57dd : Improve satellite category visibility logic.
858fbdb7599 : [Catalyst] Make AirplaneModePreference final
a6ec9026e62 : [AAPM] Tests for ActionDisabledByAdminDialog and ExternalSources page
b2986f77a4f : Revert "Integrate Settings with Preference Service"
572afa59e6a : Fix the unclickable learn more link.
63469da6f6a : Remove Satellite SOS entry from AOSP
6fb75678e50 : [Catalyst] Migrate Wi-Fi Hotspot preference
253cb874d8a : Add a method to get controllers with same parent class type.
d3a2041e19b : Import translations. DO NOT MERGE ANYWHERE
783349939d7 : Import translations. DO NOT MERGE ANYWHERE
9f10978d0c9 : [Catalyst] Migrate BluetoothTetherSwitchPreference.
2b504a2762e : [Expressive design] Remove extra background of MainSwitchPreference.
c2d25241a15 : Remove replicated page of Satellite SOS.
6107fd85917 : Cleanup deprecated functions in AccessibilityUtil
9edbc798bf4 : Integrate Settings with Preference Service
efbe144a9af : [Catalyst] Migrate Airplane Mode preference
b93828930ec : Post hiding loading view operation into the message queue
a7d6d462462 : Part 1 of `use bluetooth` toggle catalyst migration.
480053e03f1 : Fix failure of PrimarySimRepositoryTest#getPrimarySimInfo_verifySmsList
8394517609c : [Expressive design] Migrate WifiPrivacyPage.
85aa7f7c867 : Replace the existing notifications on lock screen preferences with new page
81d390968b4 : Implement the preference switch for minimalism on ls settings page
bd6561accf8 : Implement the Hide seen notifications toggle
4c507b1132c : Implement the hide silent notifications toggle
908bd398bdd : Adds null checks on callers of Nullable method getA11yServiceInfo().
c16c9b2df61 : Implement the show sensitive content toggle
0d75a26a68f : Implement the main switch of notifications on lock screen
dfdff04fb91 : Fix UI string
8839335d58b : Customize the availability of Testing Settings Menu
7ce71c7619b : [ExpressiveBattery] Replace LayoutPreference with IntroPreference.
f0e06b6cab9 : Fix crash due to resource not found.
9836d6a69a4 : Add the new settings page for lock screen notifs
5862b31f21c : Updates A11y Shortcut animations to not loop in code.
18d0d263958 : Create Double Tap Power Illustrations based on target action
d3af193384e : Support wallet launch in Double Tap Power Gesture Settings
406a01dfabe : Rename Double Tap Power To Open Camera Gesture Xml
75536b091ff : Refactor DoubleTapPowerPreferenceController
235556daeb3 : Create Double Tap Power For Wallet Launch Preference Controller
7e6e2b81c26 : Create Double Tap Power Gesture For Camera Preference Controller
260f37f849a : Create Double Tap Power Gesture Main Switch Preference Controller
c09909489f9 : Create DoubleTapPowerSettingsUtils
607b01760f0 : TopologyScale: limit vertical padding
5d2ddfc6dd6 : TopologyScale values for scaling the topology pane
f2d4490fa87 : [Catalyst] Refine WifiSwitchPreference
ceac8c1dded : Satellite UX - Add learn more into Footer
178befee5e6 : Skip the notification when the userId is not main
9329684c860 : Fix crash due to no Satellite
10f0cd821c0 : [Catalyst] Migrate "Use Wi-Fi calling"
9011a4f965c : [Catalyst] Fix NPE when open wifi calling screen
9faf231dc4a : [Catalyst] Migrate Wi-Fi switch preference
50e695fdb66 : Ignore test case to avoid postsubmit failure
7c11a45735e : [Connected devices page] Move the refresh logic to main thread.
880b463a4ae : Import translations. DO NOT MERGE ANYWHERE
dab1ba5e287 : Import translations. DO NOT MERGE ANYWHERE
0f591ca2205 : [Catalyst] Use lifecycleScope for BatterySaverPreference
e628248bab6 : Touchpad: remove tap dragging flag
acccaff128e : Cleaning up quick settings flag in Settings app
c952545683e : Hide loading view if config service is not available
a2085f18b75 : Fast enroll
b3468ca88ad : (Once more) fix minimum height of mode name edit box for a11y
32e388ad319 : [SPA] Add biometric authentication for package modification
b7616da05c8 : [Physical Keyboard] Add intent support for PK layout setting page
37ed7151459 : Completed basic UI of Satellite SOS
344ba6912c4 : [Catalyst] Migrate "Mobile data"
e731f7cce26 : Fix TetherPreferenceController ANR
36684ec80d3 : [Catalyst] Migrate SIMs entry point
e994e8eab53 : Hide loading view if config service is not available
a2b94eafde8 : [Catalyst] Ring volume migration (2/n)
67ac0faf3da : Add loading screen for Device details fragment to avoid ANR
115f92e5ca2 : Adds ignore annotation to avoid postsubmit failures.
bd4990d0206 : Fix checkSimIsReadyAndGoNext()
90092e7da4c : Create a Satellite SOS entry
1cf9ab21b8d : [ExpressiveBattery] Update BatteryHeaderTextPreference
0dc50c4556c : Update BatteryHeaderPreference with storage and read permit
8a97245b3d8 : [Catalyst] Add sensitivity level
2e7db438aba : New System API to launch SIM Preference in Setting
08a7f6a5e7a : [Catalyst] Vibration and haptics main switch migration
9ca87091731 : Disabled Settings preference in case Satellite's conditions.
a25868453cd : Update OWNERS for Android Settings
55de3bf24b4 : Fix crash due to no SatelliteManager
ad74d1f1fb6 : Modification for Satellite API change.
409e5f56979 : [Settings] Fix inconsistent ringtone keyword search
22370a8a4aa : Make Satellite messaging dynamically change wording by network type
5da12934944 : Revert "Turn off voice access in 16KB mode"
cc663236138 : Revert "Turn off voice access in 16KB mode"
d86526930f9 : Import translations. DO NOT MERGE ANYWHERE
4871bb17f6e : Import translations. DO NOT MERGE ANYWHERE
4a3af3a735a : Show built-in display if topology is visible
b253005342c : Add a11y setting for disabling touchpad system gestures
6eabebec549 : [Audiosharing] Fix add the secondary buds to audio sharing
57ca4d313eb : Add AppInfo setting for page size appcompat
25277a0e8fc : Convert hidden SatelliteManager APIs to System APIs.
cf48d5b7e54 : Import translations. DO NOT MERGE ANYWHERE
f76efce0bf1 : Import translations. DO NOT MERGE ANYWHERE
5892f6b9cf3 : Import translations. DO NOT MERGE ANYWHERE
c47df6271a4 : Import translations. DO NOT MERGE ANYWHERE
8ff0ee9c20a : Import translations. DO NOT MERGE ANYWHERE
7230cbcb613 : Import translations. DO NOT MERGE ANYWHERE
46a75405103 : Expose the regional preferences page
47e665802bf : [AAPM] Update ActionDisabledByAdminDialog and ExternalSourcesDetails strings
d8edea81549 : Add aconfig for biometric onboarding education
f54048ad4c4 : Modify App languages entry
d6f139b3892 : Update comments to point to the new location of event.logtags.
8e6066f5d69 : [PixelCare] Catalyst migration for battery header
47d8e1daee6 : Remove BatteryHeaderPreferenceController usage in PowerUsageSummary.
edfe2a3f246 : Update resolution name for screen resolution page
178eb0bd155 : Split battery header text to a separated preference
db43380380f : Make UI and wording dynamically show on
bc1b12db3ab : Settings: Fix the a11y focus issues in App Notifications subpage
6f8e823e0ea : Makes all custom caption settings unsearchable when custom captions are not active.
6f28e1a3c7b : Topology pane in extended displays list
d53777f96d4 : Use flag to control the hierarchy tree changes
ae1cc05b8d2 : Add aconfig for biometric onboarding education
a233ca783ab : Fix whenNoInstaller_notDisplayed
8346678b1a1 : Fix blank area at the bottom of device details page
457029b3558 : Branch the following two files to make the next cl easy to review.
1413f7428c6 : [Catalyst] Update ColorAndMotionScreen
d5c7d17e075 : Fix jank when loading more settings items
2be5ef9426d : [Physical Keyboard] Add keyboard touchpad/Mouse page - part1
77639002246 : [Catalyst] Add Read/Write permit for AdaptiveConnectivityTogglePreference
8e73d0f51e3 : [Catalyst] Update RemoveAnimationsPreference
330834b8190 : Cleanup flag "audio_balance_state_description"
de0e5cdff98 : Cleanup flag "check_prebundled_is_preinstalled"
bafd2cf5424 : Fix: in the scrollable Glif header design, the sub-title cannot be read out automatically on the Udfps enrolling page
f4d392b2788 : Disable checkcolor hook
bc2f58f2562 : [Keyboard Setting] Fix build break from util rename
9a9d099677d : [Physical Keyboard] Add flag for deeplink keyboard layout picker page
dceedf1b811 : Import translations. DO NOT MERGE ANYWHERE
6b28741c385 : Top intro shouldn't be searchable
470a4a86afa : Set InstallUnknownAppsListModel override check
5d8680e1937 : Fix redirect to Mobile security page
4072d431232 : Cleanup flag "never_restrict_a11y_activity"
2f996492fff : Revert "Use flag to control the hierarchy tree changes"
472e415659e : Control user switching UI visibility according to config
efcb4dfddfb : [Catalyst] Migrate AmbientDisplayAlwaysOnPreferenceController
b6132572ea9 : [Catalyst] Migrate "Lock screen" entry point
828578f703f : Make assistant label generic for AOSP.
5641752fb14 : Remove selectability from pointer preferences.
fb2668525b6 : Fix talkback of ANC toggle
34ea3dc0751 : Use flag to control the hierarchy tree changes
14926a34a67 : Update color for reset_esim_desc
c527b40370a : [Audiosharing] Update dialog style and wording
46b1369b515 : [Audiosharing] Fix hysteresis mode
c79f368dbf7 : Add logging for LE Audio toggle enable/disable
9ef696d0e87 : Migrate Remove Animations preference to Catalyst backend.
00ec5248cb2 : Add Settings page for three finger tap customization
44a35ac07bc : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
31e69deb280 : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
6e413c4cff2 : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
6f1c499e57f : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
4901c86b765 : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
a2f5c08370f : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
9882f43d9c0 : Don't let profiles open the UserSettings overflow [DO NOT MERGE]
46da53099c2 : clampPosition function for dragging display blocks
8fa1dcb0345 : Block the content scheme intent in AccountTypePreferenceLoader
d782f9c4bc7 : Block the content scheme intent in AccountTypePreferenceLoader
5f1fc5c2ee9 : Ignore PrivateSpace tests that are failing in pre-submits.
c9017355d94 : [Satellite] Add category UI for Satellite functions
294b41a8838 : Fix storage requirement for Linux terminal
83708d4499e : [Catalyst] Update PreferenceLifecycleContext.notifyPreferenceChange
66303e90fd7 : Import translations. DO NOT MERGE ANYWHERE
4311cb8c7a6 : Import translations. DO NOT MERGE ANYWHERE
fb6f25ff28f : [Keyboard Setting] Rename KeyboardSettingUtils
69c7f5dac58 : Don't let profiles open the UserSettings overflow
a1d5f2544ad : [Expressive design] SettingsMainSwitchPreference should not be in the group
c4dd7a82881 : Add flag for surfacing display topology prototype
ab0213a7e3a : [Settings] Refactor: Add LocalePickerBaseListPreferenceController
d5e121d582c : Revert^2 "Add intent filters for NfcTagAppsSettings"
a2ccf8979db : [expressive design] SimsSection
581b9b43462 : Revert "Add intent filters for NfcTagAppsSettings"
9d82dc346d3 : Add flag for migrating UX to Android settings app.
e24189df08e : [Catalyst] Sync APM preference key value
49045fb3617 : Remove the extra space above Spatial audio toggle
26c16784d4c : Make final confirmation of dialog only shows in-call state
a28c5e1ca52 : Skip user restriction check on desktop
4c5f8481bd3 : Remove nested PreferenceCategory in Battery Usage page.
f487880ce0c : Revert "Create a config flag for the new Screen Timeout settings screen with Dashboard style settings"
2a41717baf7 : Add intent filters for NfcTagAppsSettings
e635f1e0ed4 : [Catalyst] Support Getter API for settings service
dec8370d258 : [Catalyst] Migrate "Screen attention"
3a20528ae43 : [Catalyst] Support RadioButtonPickerFragment
e6ef4c24430 : [Catalyst] Add initial ScreenTimeoutScreen
8d3be114188 : Refactor duplicate isAdaptiveSleepSupported
cbc49e787d1 : [Settings] Refactor: Add RegionAndNumberingSystemPickerFragmet
a6334bc0a3a : Hide Linux terminal if device doesn't meet minimum requirement
22306eb982d : Add Special App Access page for WRITE_SYSTEM_PREFERENCES permission
1f24f061b41 : Hide the blurb for custom-manual types
eb134495930 : Updated interactors to use repos
33a7b96f5f9 : Settings: Fix queryShortcuts_shouldSortBasedOnPriority test
0baf70a4b37 : Exclude implicit modes from the summary of the Modes entry in Settings
80f8b87e28c : fix(ColorCorrection): Palette preview text contrast Issue
40b8917b57c : Fix WifiCallingSettingsForSubTest
64c83d498fe : [Catalyst] Allow external SET for Use Battery Saver
fb9d83ad68d : Add metrics for new bluetooth device details
52b8bc332d0 : Ignore the test case that's been failing for a month
a15948b53eb : [Settings] Refactor: Add SystemLocalePickerFragment
8f548123f4c : Fix NetworkProviderSettingsTest
b551184b158 : Update resource name and format previously added file
a794e01d458 : Revert "Revert "Migrate Battery percentage preference into catal..."
ffe54b4fd6c : Add OnAccountsUpdateListener in ContactsStorageSettings to refresh the account when there's account change.
66d6b926374 : Ignore failing NetworkDashboardScreenTest
56de354bdb4 : Import translations. DO NOT MERGE ANYWHERE
9a8fe4107db : [Catalyst] Update SettingsService
e4623cd2a33 : (1/n) Make the GlifHeader scrollable on FingerprintEnrollEnrolling( UDPFS) layout page.
1bbe798c8f7 : Revert "Migrate Battery percentage preference into catalyst. Ref..."
3eb06a9077e : Add android:key to device info PreferenceCategories
a9b4073e0aa : Clean up ApnSettings unused fields
9a91334dc90 : RESTRICT AUTOMERGE Add battery replacement strings
88df8aa9930 : [Catalyst] Create datastore for AdaptiveConnectivityToggle.
c2f72f3ceae : Migrate Battery percentage preference into catalyst. Refactor logic from BatteryPercentagePreferenceController to BatteryPercentageSwitchPreference
d6c932cd7e3 : Handle null audio attributes
ff0ea037bf7 : Remove the margin in Device Detail page
a37572a4e56 : reduce number of binder calls when loading page
2bab900fa0c : [Catalyst] Add Data Saver settings metadata
d2d5a1c2f94 : [Catalyst] Migrate "Use Data Saver" settings
6a52eeabbca : [Catalyst] Support main switch bar
25d8e56c538 : Relax WiFi cert installation restrictions in HSUM mode
cb32a15f1bd : [Audiosharing] Fix hysteresis mode
4e4ee7430cb : Unify and merge two hasAllApns()
31dfbdabf10 : [Settings] Refactor: Add LocalePickerBaseListPreferenceController
103ce4efa23 : Enable audio sharing hysteresis mode fix when preview is on.
865e9b29f57 : [Catalyst] Support multiple restriction keys
ea0b5d5950e : Clear Linux terminal app when disabled
f959debbe5e : Enable HearingAids#AudioRouting page search if the device supports hearing aid
e401ea47435 : [Safer intents] App Time Spent Preference
87dbf9c4b8e : [Safer intents] App Time Spent Preference
a84dd635485 : Introduce overlaid BatterySaverGoogleScreen
5642811b6e8 : Refine some preference names
db46e64d822 : Import translations. DO NOT MERGE ANYWHERE
3ec15679220 : [Touchpad & Mouse] Update title based on conditions
7fa2505e07c : Import translations. DO NOT MERGE ANYWHERE
73d023c54d2 : Import translations. DO NOT MERGE ANYWHERE
ed3abffcfc7 : Make Linux terminal option profile aware
aaa040e0856 : Revert ANC and Spatial audio UI change
23367e380aa : Migrate Use Battery Saver
5ddd74b917e : Fix Settings Search for OneHandedSettings
fd77ab61828 : Settings: Fix queryShortcuts_shouldOnlyIncludeSystemApp test
b3eea625ec5 : Caption settings cleanup
3863c67a012 : Use full day names in the a11y version of schedules in mode list descriptions
d31d6d110ca : Fix work apps interceptor biometric prompt icon.
bb9cc081727 : [Catalyst] Rebind preference immediately when restriction is changed
d7d4269851b : Caption preferences cleanup
d824a682b24 : [Catalyst] Add restriction for SIMs and Tethering
02753a7e299 : [Catalyst] Add restriction for Internet
b152c2d34f8 : Fix stray ProfileSelectDialog when only one profile exists
fe199d26568 : Update state when disabling and enabling develop option
0d8d97ad48a : Gate screen off unlock UDFPS until power optimized
841fb3846b2 : Block the content scheme intent in AccountTypePreferenceLoader
e3db9b8941e : [Connected devices page] Clear up old functions and add logs
2133c646f83 : Add feature flag for regional preferences APIs
329fbfed7dc : Settings: Re-enable WebViewAppPickerTest#testNotFinishedIfAdmin
24f961a07a5 : Changed to use the new hidden API.
776af69b32a : Move When to start tab below grid
13c6cb4c222 : Use the DND icon for modes of type UNKNOWN
3e96c9459bb : Explicitly mark the TopIntroPreference not searchable
2a99f03da53 : Import translations. DO NOT MERGE ANYWHERE
1df24f975ba : Import translations. DO NOT MERGE ANYWHERE
5ad1b75dd1f : Import translations. DO NOT MERGE ANYWHERE
33c580ec458 : [Catalyst] Support restriction for Display settings
9eab62acdc8 : Add an action name and support 2-pane UI in Backup page
ffa4fba144c : Also disable main content if switch is disabled
7e625884e37 : [Expressive design] preference group - fix preference highlight
d33b0a5114c : [Catalyst] Support restriction for Sound settings
f18e3bafe61 : Add SettingsPreferenceBindingFactory and support restriction
3551614bafa : Implement RestrictedPreferenceHelperProvider for restricted preference
2f16e5824e3 : A few updates on contacts storage settings. 1. Add account preference category to contacts storage settings page 2. Preload account icon in settings constructor 3. Re-fetch account in controller to refresh the default account after user sets a different default account.
83b1491d85d : Touchpad: add Framework Input team as owners of more files
976850dbdd8 : Add skeleton page for bundling
1fc4af8e5c8 : Fix expressive design preference group
7b73897b262 : settings: fix robolectric test in MainClearTest
c2d0b257531 : feat(brightness suw): adjust brightness toggles UI and remove autobrightness standalone page
b2ada5df656 : [Expressive design] Apply expressive design to Settings
f0e88a2b859 : Migrate overlaid DisplayGoogleScreen
db622c1aea6 : Allow connected Hearing Aid devices to be searched by names
898d1e07ad1 : Allow saved Hearing Aid devices to be searched by names
2fc788c35af : [Audiosharing] Enable audio sharing UI when preview option on.
64aaa1440a4 : Remove all pending messages when fragment destroyed
96aa9b27f1a : Fix catalyst settings test failure
5329552b749 : [Catalyst] Allow external SET for Display/Sound settings
dbcdbe85e76 : [Catalyst] Provide launch intent for Sound settings
d655faf9405 : Enable AutoBrightnessScreenTest
45f5ef607d5 : [Catalyst] Provide KeyValueStore for "Adaptive brightness"
369263156bc : [Catalyst] Migrate Adaptive brightness in the DisplayScreen
466b3e00661 : Updating Setting EvenDimmer->ExtraDim
400d90a0f61 : Fixes searchability for timeout settings page.
859981242b4 : Export App storage screen
547dd4ebd84 : Updated Orientation interactor
5e2adc26832 : Added interrupt to accessibility interactor
f55fbc4273b : Update flag namespace, names
8f0c731fcec : Change SUW theme style for Fingerprint and Face enrollment flow (3/n)
ce713f8cd39 : Fixes searchability for autoclick settings page
77aa48fe0b7 : [Catalyst] Migrate Brightness level
7c9379e5d2e : [Catalyst] Ring volume migration (1/n)
68529256769 : Allow launch activity from background for PendingIntent
75b443375fc : [Catalyst] Media volume migration
f035eefe674 : [Catalyst] Implement datastore for Smooth display
792d8a97dd1 : [Catalyst] Tethering screen migration
2b87ee1de7c : Import translations. DO NOT MERGE ANYWHERE
c5cfcf96e9e : Import translations. DO NOT MERGE ANYWHERE
8673b0e5ba7 : Import translations. DO NOT MERGE ANYWHERE
7c235e3c0c2 : Force Private Space to always use biometric prompt
307cb087604 : Migrate About phone
685074260d9 : Hide the mobile data enable dialog during factory reset
9332b06675b : [Catalyst] Provide launch intent
82a44bf5936 : Export Battery usage screen
005aafde24f : Change SUW theme style for Fingerprint and Face enrollment flow (1/n)
48a1edb12e8 : Migrate Smooth display
dba15f821f0 : [PhysicalKeyboard] Update Setting Feature Provider for follow up usage
2c9e4d373e3 : Remove extra space around profiles and audio category
55cd3c67cb0 : [Audiosharing] Cancel notification when BT BLE both off
b2b5255abf9 : Fix Settings crash due to no SatelliteManager.
101f0f74632 : [Catalyst] Migrate AdaptiveConnection preference.
88cf7037b0d : [Screen off unlock UDFPS] Fingerprint Settings integration 2/2
407e3c9e2fa : Removed flag from environment
689e0e4a17e : Use radio button style for single selection
e25afc6a007 : Re-enable all SetupFingerprintEnrollFinishTest test cases
79450d890d2 : Import translations. DO NOT MERGE ANYWHERE
6c2eb686a46 : Import translations. DO NOT MERGE ANYWHERE
e3a89c38a3a : Import translations. DO NOT MERGE ANYWHERE
4ad4f35f788 : [Catalyst] Specify order in hierarchy
6bfa0eead11 : Fix incorrect first() value from fingerprintSensor
7b1b253d44d : [PixelSettings] Group each preference by category and add android:key
8f4f3711313 : [Catalyst] Call volume migration
3f1c7b0f497 : Refactor sound settings page for catalyst
2d2f523abe1 : [ToA] Use radio button style for single selection
1729f3e0bc2 : FingerprintEnrollStageThresholdInteractor
d6f8c58f601 : Add flags for cell security features in Settings
451bd654718 : Fix incorrect Settings assumption
48d984511a0 : Enable HearingAid page search if the device supports hearing aid
ece6ca317c7 : [Audiosharing] Fix tests for hysteresis mode
1e8933123a7 : Add TtsSpan to schedule-time modes trigger segment so that full day names are read
bee4f9310ca : Migrate System -> Languages to Catalyst
97dbd0bb540 : Add developer option for le audio sharing ui flow.
2bb1c553146 : Add battery replacement strings
7db2eda52df : Fix startActivity for multi-user issue HSUM mode
ae7acae24ca : Show error message when inputting invalid audio sharing password.
6c1481e5e72 : Exit device details page when bond state is BOND_NONE
6f9a18ec687 : Hide spatial audio toggle when disconnected
7764a3e5af1 : Fix coroutine scope expired and UI animation issue
7557a48e2ee : [Catalyst] Create airplane mode preference
259d3a47c8c : Avoid disabling the Wi-Fi hotspot switch causing Talkback confusion
d9553708e6e : Show SIM account preference in Contacts Storage Settings when default account is set as SIM account.
a2741a76cb8 : Disable factory reset in DSU mode
27619dd45c4 : Disable factory reset in DSU mode
4296cc1977d : Disable factory reset in DSU mode
1bad4e27c14 : Identity Check API
d7dec0bf7f6 : Migrate Data Saver
be68a156217 : Modify the condition for showing DSDS dialog
a9e8225ba47 : Update summary of Modes entry in Settings
215e9c0f3b0 : Override aspect ratio to fullscreen when unset for universal resizeable
1cdc17a7f67 : FingerprintEnrollStageCountInteractor
8547d7ad963 : Make TetherSettings extend RestrictedDashboardFragment
6825f23d078 : Fix dialog message overlap title
2d1770e4f78 : [Catalyst] Migrate "Dark theme" settings
60521ee2f9a : Add WifiCallingScreen and corresponding TS flag
62c2800b3fd : Fix test failure
45a2c3e5b86 : Refactor DarkModePreference for catalyst
1694adb1aa3 : Change the icon in the trigger segment for TYPE_DRIVING modes
b415befd532 : Migrate Adaptive Connectivity
059593f1b15 : Update WifiCallingSettingsForSub to inherit DashboardFragment. Controller logic will be refactored later
2cf3e627ce7 : [Catalyst] Internet screen migration
0393a8165bb : [Physical Keyboard] add Mouse key main page
bdaadc471b5 : Refactor DataProcessManager Callback Function Logic
f6787fd88be : [Audiosharing] Update button background radius
b365c364733 : Import translations. DO NOT MERGE ANYWHERE
6b783c268f8 : Import translations. DO NOT MERGE ANYWHERE
76c284d5843 : Import translations. DO NOT MERGE ANYWHERE
66c5be9fdff : Remove dependencies on the 1-variant fallback
b48d9f0e563 : Change to use android.provider.new_default_account_api_enabled flag to control the contacts storage settings launch. Bug: 368641291 Flag: android.provider.new_default_account_api_enabled
b78509129b8 : Clean up android.webkit.update_service_v2.
f1415638309 : Add Lock Screen page skeleton and TS flag
5aac4357d99 : Create a config flag for the new Screen Timeout settings screen with Dashboard style settings
8f6145c143f : Add the condition to hide the MMS messages preference
b05a1aa543a : Remove backing fields for ColorAndMotionScreen
dfeeb297adb : Update backgroundcolor of advanced bt header image
e72b2d2be10 : Remove unused legacy clearMemory() method
a9002d157ca : Fix force close for updating UI after activity destroyed.
8189b40067a : Make NetworkProviderSettings extend RestrictedDashboardFragment
1732fa8db02 : Redesign the update logic of Allow Background Usage Page.
bd44c860413 : Revert "Fix force close for updating UI after activity destroyed."
cb2e1bdabb5 : Fix BatterySaverScheduleRadioButtonsControllerTest
8697bdfb259 : Enable catalyst test for BluetoothDashboardScreenTest
41440ff2487 : Removes overrides for shortcut-required service toggling.
688cc944138 : Create an empty color and motion screen with Catalyst Infra
cc693e950f1 : Enable catalyst test for PowerUsageSummaryScreenTest
99488fd1046 : [Catalyst] Location screen migration
7ae49a51eaf : Rename context variable and format code
97f2c3dfb42 : [Catalyst] Bluetooth screen migration
af1e8f7353f : [Catalyst] SIMs screen migration
d0d3879fa4d : Redirect to new APN edit page
3f4cf24ad35 : Avoid flakiness SLO
bf5c04e16be : Remove TelephonyManagerConstants
8e3e3ea9736 : Switch to use DefaultAccount related API to get and set default account for handling contacts.
4c3473134c1 : Hides the Edit Shortcut top preference when empty.
940b2143eb1 : Create a config flag for the new Screen Timeout settings screen
136244ab2a4 : Add Battery skeleton page and corresponding TS flag
033b7e5620e : Refine biometrics accessibility interactor
ae5f82d36be : Move the annotation right before String
29a163888f3 : [expressive design] Migrate App info page.
db3b6ee073a : Simplify settings datastore calls
1e134572674 : Rename CatalystScreenTestCase.context
9b62ec96167 : Provide icon for catalyst screens
0e772bd8d28 : Import translations. DO NOT MERGE ANYWHERE
612880bb936 : Import translations. DO NOT MERGE ANYWHERE
4597290de32 : Import translations. DO NOT MERGE ANYWHERE
c850dfac269 : Removes the A11y tutorial that displays on change to gesture navigation
591d4fd9329 : Enable Linux terminal app via developer settings
b0706be4afe : Set availability of AddUserWhenLockedPreference before it gets displayed.
3d2f06fd7b7 : Migrate Text Reading
2f9f582310d : Set the same density to all displays
68d5492bafe : Fix a crash in PhysicalKeyboardFragment
7aefcf71b62 : [Physical Keyboard] Add slow keys dialog
898feed16ad : Migrate Dial pad tones preference
d93efafd958 : Remove unneeded cast to SettingsActivity
a088e4c6c59 : Clean up getNetworkTypeFromRaf()
1b83703adc1 : Use BatteryOptimizeUtils to add packageName into PowerSaveWhitelistUserApps allowlist, which will set app into Unrestricted Mode
42d2b084542 : Remove isCatalystEnabled check from SoundSettings
c04e468bd43 : Enable catalyst test for NetworkDashboardScreenTest
d9c83e350b7 : Added error message to error state.
5951efefd77 : Fix name of COO.
2fbff9b6da9 : Fix missing open button under preferred service in credman settings screen
dc419524c5e : Add MaterialComponents.DayNight to SearchBarStyle
59cdc3e20ca : [CDM][NLS] Check if the NLS service has an intent-filter
36343c73582 : Add piano and remove tent from zen icon picker
3ab6b197a97 : Fix flaky tests due to shared system property
9c9c0a3d948 : [Catalyst] Network and Internet screen migration
7bfa060c5f3 : Treat as Unrestricted Mode in the UI if the current mode is Unknown.
7a3baf7d2e5 : Fix multi-toggle flicker bug
b923194def4 : [Catalyst] Vibration and haptics screen migration
538a01023a4 : Add ContactsStorageSettingsActivity to handle the android.provider.action.SET_DEFAULT_ACCOUNT intent
a11a960167e : Cleanup searchable items on Accessibility > System controls
42e146e9db1 : Revert^2 "[Catalyst] Add settings service"
eafda3c4903 : Revert "[Catalyst] Add settings service"
92354f8bed4 : [Catalyst] Add settings service
c9b450734a0 : Gray out toggle if isAllowChangingState is false
01ad71bc89a : Migrate Battery Saver
8cb8aaf36b6 : Fix bug when bluetooth profile is not in more settings
cbe75928d9e : [Physical Keyboard] Add main page for Repeat keys
44527db3563 : Fix PhoneNumberPreferenceControllerTest
44dfbe67480 : Ignore catalyst test in DisplayScreenTest
efe8d52f77e : Connect to OWE Wi-Fi network when QR code has no password
cbca375f205 : Convert the forEach() into a for-loop
d38549d60b1 : [Catalyst] Use hybrid mode for sound screen
92ce82809ee : [Catalyst] Use hybrid mode for display screen
20558e21bd6 : [Catalyst] Support hybrid mode
b2dd2e32031 : Updated toggles in Date and time settings page
2e914a311f0 : Make Extra Dim main toggle and shortcut searchable
a2c9a20f2b8 : [Contrast] Fix checkmark in RTL
796f631ff89 : Clean up 24Q3 aconfig flag reset_mobile_network_settings
f50ba4c6ece : Update VPN app dialog message to include version code
fa5b1a2329c : Don't log an error when isPackageEnabled = false
ded3c8a988c : Set the `Keep on after device restarts` non-searchable and add comments for the reasons.
57b8d612848 : Add comments for setting the searchable attribute false in DarkModePreference
9832fe92810 : Avoid test flakiness
29036fd2a9e : Sound screen migration
ae4ec175c92 : Fix BatterySaverScheduleRadioButtonsControllerTest
867c10e0f37 : Import translations. DO NOT MERGE ANYWHERE
ea06748ee2e : Provides an ordered array of shortcut types so Settings presents shortcuts in the desired order
476cd46b766 : Update Settings tests to new DeviceStateManager API
464c14649a6 : Use getEnabledProfiles for ProfileSelectDialog
8aeca8e7555 : Disable the phone number when no subscription
bab2edd0eff : Fix SpaSearchLandingActivity.isValidCall()
e3c4db5884e : Rename system caption "Text size" to "Caption size".
830654aa85d : Use media switcher dialog to control routing during call
0a6d37b871f : Pass the actual quantity/count to the MessageFormat and let it decide
66d518d6667 : Add deviceId in AssociationInfo
449f4eeebc1 : mouse: Add preference toggle for Mouse swap primary button
ff275c82aab : Mouse: Add preference toggle for mouse reverse vertical scrolling
9f9571a0786 : Move Contextual Search setting to AOSP.
cffb21b17fe : [Physical Keyboard] Add Repeat key toggle
96429c65892 : [Audiosharing] Save user preferred primary headset to SettingsProvider
0ac1ba59202 : Makes Use Color correction and Color correction shortcut searchable.
c3b20d3a316 : Also clear the foreground service usage time for BatteryDiffEntry that should hide background usage time.
f3095002811 : Handle IllegalStateExceptions in CellularSecurityPreferenceController
b246d2e8152 : [Homepage Revamp] hide the scroll bar of homepage
806b91414d2 : Makes Use Color inversion searchable.
3079b2d108a : Makes Settings > Accessibility > Magnification prefs searchable.
1133353e5df : Don't show default payment component in search if Wallet role is enabled
bdf3f6471a8 : Fix title on specific app's channel settings page
9b62541d801 : Remove Android %s from search results
012ba80c507 : Create a method to allow child classes to define its main switch toggle's pref key
31d82a02883 : [Physical Keyboard][A11y Page] Add custom slider
2545f06558b : Fix force close for updating UI after activity destroyed.
ea5832e174b : [Audiosharing] Update audio sharing section title in call
b77c619ebe7 : [Audiosharing] Fix call audio device in call
f0a01c51dd8 : [Physical Keyboard][A11y Page] Add Bounce keys dialog
b31c54b6f40 : Migrate DisplayScreen
2a47174e4a6 : Import translations. DO NOT MERGE ANYWHERE
cb6b37e922e : Import translations. DO NOT MERGE ANYWHERE
291e92d6493 : Fix the NPE in the tryToFetchUsageData() method
fa433271a86 : Add Contacts Storage fragment class for Contacts Storage settings page.
b237fe7e6a2 : Fixes for errorprone update
ad48f03f53f : Fully qualify @attr reference to android.R field
73c7ee115bb : Don't disable "Done" button when it cannot be pressed
504e927168f : Better Support for profiles in "People that can interrupt"
27d969bb488 : Refactor the getCurrentCarrierNameForDisplay
d440f2ef05c : [CDM][NLS] Check if the NLS service has an intent-filter
e4fe2f5b813 : [CDM][NLS] Check if the NLS service has an intent-filter
e5720f43ee1 : Don't crash when recreating ZenModeTimePickerFragment
7ae59a42eb1 : [CDM][NLS] Check if the NLS service has an intent-filter
aa322bc4f72 : Avoid using MANDATORY_BIOMETRICS bit if flag is not enabled
5644fe1aa78 : Add log for new error string in WiFi QR code scanning
c2c27227710 : Hide extra information link when it is empty
f0b6123acc1 : [Physical Keyboard][Setting] New pk a11y entry under a11y page
a994f04c2f5 : Update the illustration of the caption preferences under dark theme.
34d04eb143b : Add flag for the Display page migration
539bae8db42 : Add missing paren
c238b3b1cf0 : Remove obsolete zen-related entries from CustomSiteMapRegistry
92ada7e3a53 : Remove "Add supervised user" from search result if such user type is not allowed on the device
8c2312d8ab5 : Update dependent logic in modes vis page
8e6bc6f2897 : Add the init value to the VideoCalling UI
6a333a2ac3e : Show custom unlock title for private profile
b1e28233c18 : Use TextInputLayout for the name field in the create/rename mode page
1da65db12fb : Update search index in UserSettings based on allowed user types
dd7e4d27aed : Encounter unknown error in test
73f66eb8f17 : Distinguish different errors of wifi QR code
1955144ffc1 : Avoid test flakiness
cc3eac9b7ca : [dev_option] Move DesktopModeFlags out of android.windows.flags
42e47a9b2de : Import translations. DO NOT MERGE ANYWHERE
58a110bc483 : Set content description for the "settings" of starred contacts, priority conversations, etc
a18eea0283a : Revert "[ToA] Remove ToA languages"
4dc5a005c8c : Update the layout of AppDataUsagePreference
71671897d7f : Make the name EditText at least 48dp high in the Mode rename page
78367de795b : Fix a11y readout of the day toggles in Time Schedule of Modes
2ce3671e3e8 : Add test case for Firmware version and Legal screens
79f17d1188f : Make device setting able to use both Intent and PendingIntent
80e1b69aa7a : Create only one media session per audio stream media service.
78cce9cd504 : Add audio category preference in more settings fragment
3d4b0c5440e : Refactoring A11y shortcut functions in Settings to use GESTURE
97cbb8cba1a : Don't register DndConditionCardController anymore
2d65e7fbfa3 : Replace "DND Access" with "Modes Access" in Settings search results
8ce211965ad : Improve the contrast for the "schedule duration" text
ed5e803c87f : Hide the enabling mobile data
25db7a82f32 : Import translations. DO NOT MERGE ANYWHERE
5a1cf30ba1f : Import translations. DO NOT MERGE ANYWHERE
41cd514fb37 : Notification Minimalism Settings Main Switch
8b6431878a1 : Show the Tangor Unseen Notification Toggle when notification_minimalism is Enabled
f620f85484a : Add a header before the icon list in the new/rename mode screen
a2e71cc7cba : Delete Settings code related to super-obsolete zen onboarding
bb1cadb916d : Use hasScrollAction in ApnEditPageProviderTest
5aa95553cde : Do not auto-downgrade WPA3->Transition mode if password too short
9969334647b : Fix being unable to click SIMs
125bbc0cccf : Include settings flags into flags_packages
322f3cf4963 : Add support for ApnSetting.TYPE_OEM_PAID and OEM_PRIVATE
996afd17a16 : Protect the Settings application from potential null pointer exceptions.
2f39a808fcd : Fixed elapsed_time_millis in SettingsUIChanged event
8b7399a8d98 : Update titles and summaries of the Date and time settings page
91dd9af87f7 : Fix Fingerprint setup complete - illustration is just slightly cut off
54b0d18a047 : [Audiosharing] Use DialogFragment instead of raw AlertDialog
57355a0743c : [Audiosharing] Show dialogs when lifecycle isAtLeast STARTED
63ecd2781f2 : Add icon for more settings preference
fc27dcccde7 : Add metrics category for more settings fragment
1020e7132de : Fix two-panel issue in tablet
1c486036efc : Avoid potential ActivityNotFoundException
0b530fd405e : Use hasScrollAction in ApnEditPageProviderTest
73ab82f52fc : Correct behavior on Google Location History preference click
4329d8371ee : Migrate Android version preference
6231453e914 : Add contacts storage setting entry in app settings page.
3be2314facc : Add a check to ensure that intent data is available before proceeding.
214af3c7d22 : Fix for GestureShortcutOptionControllerTest
0c5eb711a94 : Make legacy DND pages not searchable
e060aeb1e12 : Migrate the Legal Info page to Catalyst
84d31c2ad96 : Migrate Android security update preference
b8d2bc12b1e : Migrate mainline module version preference
e0f734526d2 : refactor(brightness suw): decouple auto brightness initialization logic for setup flow
316d69ddc93 : Import translations. DO NOT MERGE ANYWHERE
d9f36f94f72 : [Settings] Fix color contrast on alert dialogs
5ea28f5f0a1 : [Audiosharing] Apply new string resid
18aa329d4a3 : Fix incorrect switch status when the user stays on the NightDisplaySettings page while scheduled Night Light turns on
f71cb5466c7 : Revert^2 "Fix text alignment in RTL mode for volume sliders"
6383e738d44 : Split up FingerprintManagerInteractor 2/N
e3e348b2dcf : Update AssociationInfo with deviceIcon field
97d7422b357 : [Language Setting] Cleanup Language and Input Setting
d45a07894b5 : Disable factory reset in DSU mode
76260a7884d : Disable factory reset in DSU mode
0764d71e821 : Disable factory reset in DSU mode
15af7d7d558 : Disable factory reset in DSU mode
772b19f6caf : fix crash in setting time zone

+- Project: platform/packages/apps/SettingsIntelligence

85629e2 : Import translations. DO NOT MERGE ANYWHERE
10e5571 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/Stk

9457153 : Import translations. DO NOT MERGE ANYWHERE
7ea678a : Fix STK input layout to fit system windows.
d6ff2cf : Revert "Disabled edge-to-edge display support for StkInputActivity"

+- Project: platform/packages/apps/StorageManager

a8c4c68 : Import translations. DO NOT MERGE ANYWHERE
9348d97 : Import translations. DO NOT MERGE ANYWHERE
80db13a : Import translations. DO NOT MERGE ANYWHERE
177c11e : Create a filegroup module for tests/unit/src so it can be used across projects.
3c58fa4 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/SystemUIGo

09d8001 : SysUIGo: Update heads-up related imports
ae2c732 : Update ClipboardOverlay modules in SystemUIGo
26faf27 : [CS] Use ReferenceNotificationsModule in SysUIGo
ff2082b : [CS] Add EmergencyGestureModule to SystemUIGoModule.
cff4d1c : Add wmshell_defaults to SystemUIGo
5effad7 : Remove ScrimController as CoreStartable
5446a80 : Update dependencies
1a311df : Include ClipboardOverlay modules in SystemUIGo
22f1269 : [sb] add StatusBarPhoneModule to sysui-go

+- Project: platform/packages/apps/Tag

b2a36c5 : Import translations. DO NOT MERGE ANYWHERE
06b9f59 : Import translations. DO NOT MERGE ANYWHERE
d4a6481 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/apps/ThemePicker

52fb2095 : Import translations. DO NOT MERGE ANYWHERE
4503c550 : Revert "Fix the layout of the grid option customization (2/2)"
b8738aab : Fix the layout of the grid option customization (2/2)
ed55267e : Migrates Monet's Style Enum to @IntDef
40996b73 : Use bouncy grid option item
d76ecf0a : Bind Toolbar colors (2/2)
6b1060f0 : Import translations. DO NOT MERGE ANYWHERE
a522dc5a : Import translations. DO NOT MERGE ANYWHERE
e09c7a0d : Make preview smartspace correspond to the selected clock (1/3)
0d8f3bbe : Delay color picker back navigation until apply is complete
1bc174ed : Make clock in lockscreen preview use the correct overlayed resources
f6d1335e : Added dependencies in ThemePicker for packagenotifier
2edd5aa5 : Add disabled background for the apply button (1/2)
64db8f5d : Import translations. DO NOT MERGE ANYWHERE
3a3777f8 : Remove Beta tag from themed icon toggle
f0da839a : Fix the icon for clock style
80e15c83 : Fix clock edit button cropped when overscroll
18f5a26c : Fix clock view out of bound in the carousel
189647c4 : Revert "Revert "Revert "Revert "Replace field injection with con..."
12b25265 : Clock thumbnail on clock customization entry
c9bb92af : Handle back navigation on clock font customization
83bcf350 : Fix small clock start padding is wrong in picker carousel in unfold portrait mode
3e11ba3b : Add to DarkModeViewModelTest
d0d56676 : Clock font content (2/2)
40a871f4 : Revert "Revert "Revert "Replace field injection with constructor..."
c00c07f1 : Revert "Revert "Replace field injection with constructor inject..."
fb64691c : Revert "Replace field injection with constructor injection."
3a8d186e : Fix clock size tap target height
8f119d2e : Fix selected option not updated after applying color (2/2)
45308198 : Replace field injection with constructor injection.
e642490b : Import translations. DO NOT MERGE ANYWHERE
87db6ec9 : Make clock editor support blueprint clock view in new customization picker
65d5aa7e : Replace field injection with constructor injection.
c2aaa1d3 : Clock font axes configuration content (2/2)
71492246 : Update previews on color config change (2/2)
26cb794e : Add dark theme preview integration with Launcher
635d5a12 : Import translations. DO NOT MERGE ANYWHERE
7fc70b56 : Added dependencies for third party category interactor in themepicker
76e88985 : Adjust tool bar tab dimension (1/2)
97a8743e : Remove Reactive Clock Touch prototype from Picker
51e87caf : Persist changes to Clock Font Axis Settings
aa3185d5 : Shape screen communication with the Launcher's app (1/2)
9ed38aa9 : Fix shape and grid content height
15aec091 : Only go back when apply is complete
dc79f1f1 : Make clock picker view model aware of whether it was edited
d9f543ba : Adjust color picker to only generate options from home wallpaper
2f78ea67 : Revert "Shape screen communication with the Launcher's app (1/2)"
0aefd6ea : Import translations. DO NOT MERGE ANYWHERE
b75fbee9 : Revert "Shape screen communication with the Launcher's app (1/2)"
d37545bf : Fix back navigation for the main screen (2/2)
50d0578a : OneGrid Grid Option Updates
a24423ae : Clock font axis editor dialog
9a1b58e4 : Import translations. DO NOT MERGE ANYWHERE
19f8cc8d : Shape screen communication with the Launcher's app (1/2)
43d32eb6 : Add color option icon morphing animation (2/2)
2039aa3e : Add clock edit entry point to clock list
91fa3a1b : Import translations. DO NOT MERGE ANYWHERE
303d8dd7 : Refactor Clock theme handling
d2dab08e : Hook picker default clock up to FlexClockView
07a99cf6 : Increase the title limit to 32
752b03f9 : Import translations. DO NOT MERGE ANYWHERE
df4ac748 : Rename ShapeGridInteractor and ShapeGridRepository
0d5e6b2f : Shape options
b7d459ed : Preview UI color based on color picker user selection
dfb64940 : Clean up color options
66db6b86 : Shape tab
5b1055c4 : Rename app_shape_and_grid to app_shape_grid
00d7c24a : Remove dependency on tracinglib
c21281ff : Dark theme toggle (2/2)
3955c0d8 : Morph animations for customization options (1/2)
8a49bc6c : Send messages to Launcher preview when previewing color
afc73292 : Clock floating sheet
3f294783 : Embed CustomizationPickerFragment2 in CustomizationPickerActivity2 (2/2)
b1fee73c : Color picker preview & apply pattern
6c1f7934 : Import translations. DO NOT MERGE ANYWHERE
0f3b6622 : Fix inability to select default clock color option
882c3deb : Fix grid screen options (2/2)

+- Project: platform/packages/apps/Traceur

c69e7a79 : Import translations. DO NOT MERGE ANYWHERE
999e120c : Import translations. DO NOT MERGE ANYWHERE
9ebcbf8b : Don't let Proguard minimize away the Parcelable.Creator field in TraceConfig.
2c2f0bbf : Explicitly cancel "Saving trace" notification
e1b42898 : Increase the buffer size allocated to the "small" buffer.
78009039 : Add statsd datasource and desktop mode atoms to traceur trace config.
33fe3bb1 : Always include the packages_list data source in Traceur traces.
a0d1b9f5 : Allow Share Actions with No Trace Uris to be shared
4ad6e8fe : Update TraceController to always reply when sharing

+- Project: platform/packages/apps/TvFeedbackConsent

07e64a4 : Grant CAPTURE_CONSENTLESS_BUGREPORT_DELEGATED_CONSENT permission

+- Project: platform/packages/apps/TvSettings

a6571d12e : [Realtek][TV][Accessibility]: Fix the Shortcut service option showing as off when returning to Accessibility after it was enabled by default
63dbdbcfd : Import translations. DO NOT MERGE ANYWHERE
9b55b2cef : Import translations. DO NOT MERGE ANYWHERE
5a0d960d0 : Support preference groups with child preferences in slices
24c146c8c : Import translations. DO NOT MERGE ANYWHERE
5d1b1c704 : Import translations. DO NOT MERGE ANYWHERE
1e385f482 : Import translations. DO NOT MERGE ANYWHERE
1d96da5b4 : Update comments to point to the new location of event.logtags.
ad2e3c7e0 : Import translations. DO NOT MERGE ANYWHERE
cbc26ef36 : Allow creating of non-slice preferences from slice data
cec7c982f : LE Audio: Handle LE Audio in pairing logic
621ebd0ed : Import translations. DO NOT MERGE ANYWHERE
9b5cbb40f : Allow (un)registering slice data observer
cb112134c : [Realtek][STB][Settings] Fix USB device issues: 1. Get the correct USB device size. 2. Allow a USB device to be forgotten after the user ejects it.
7bba00035 : Import translations. DO NOT MERGE ANYWHERE
752348aef : Copy automated code health fixes already merged in google3
b3a472384 : Import translations. DO NOT MERGE ANYWHERE
03bf87b86 : TvSettings: Update AppOps proto_logging package.
f77caba0f : Switch from versionedparcelable to Bundle serialization for Settings slices
71813efce : Import translations. DO NOT MERGE ANYWHERE
00ab158b9 : [Realtek][STB][Settings] Avoid getting stuck on the unpair page
21fc3f5e1 : Import translations. DO NOT MERGE ANYWHERE
cdc45b8b9 : Use a basic mode policy if low power standby is enabled by system in basic mode.
71aafe2bb : Use applicationContext vs activityContext to avoid memory leak in WifiScanner. Open settings from dashboard then press home and repeat. Then Checked with "adb shell dumpsys meminfo -p com.android.tv.settings" which produces 1 activity. See https://paste.googleplex.com/6271520082231296
42153c3cb : Display and Sound settings UI improvements
34b5f15e2 : Import translations. DO NOT MERGE ANYWHERE
20a110de1 : stopScanning before createBond to avoid rmt request and createBond at the same time.
dcb5dacba : Copy TwoPanelSettingsLib changes for google3 back into gerrit
3b7260dd9 : Fix Thread Toggle not working
6d873bf7d : Use Thread SDK APIs instead of reflection
c8ea4ad54 : Add Energy Mode related thread toggle changes
2a6c3139f : Revert^2 "Add thread network toggle. Display confirmation message when toggling off. Use reflection with ThreadNetworkHelper to get Thread APIs if available."
8dfdf93a1 : Import translations. DO NOT MERGE ANYWHERE
3aa7722e5 : Add thread network permission to priv-app. This was missing for ag/26126874
a292faf80 : Explicitly set apk optimizations enabled to true for TvSettingsTwoPanel to align with udc-tv-dev. Optimization was already enabled by default so this should not change anything.
8e58c082f : Revert "Make twopanelsettings buildable in both gerrit and Google3"
1c11da819 : Move style to top of view and remove maxHeight attribute by setting it to null. buttonBarButtonStyle has maxHeight of 40dp which causes problems with font scaling
08a7a8b4a : Make twopanelsettings buildable in both gerrit and Google3
0aa2ba598 : [DO NOT MERGE] Update DisplaySoundFragment.kt to match udc-tv-dev and remove DisplaySoundFragment.java. It seems the original ag/26948182 did not remove the java version when cherrypicking to main. This includes changes ag/27147754 and ag/29065782. Other changes made to java file were also added in kt version. See https://screenshot.googleplex.com/yZWFyhBHFwcuzvm
e1200807c : Add hook to check restriction before enabling Wifi
7723294c9 : Import translations. DO NOT MERGE ANYWHERE
4c28f01b5 : Enable logging for Talkback toggle as part of AGUA
ce5a2db74 : Support Settings slices with Intents as an alternative to PendingIntents
e878aad70 : Use system slice manager for calls except bind/map/get descendents
12072be0a : Import translations. DO NOT MERGE ANYWHERE
9cd74c37c : Remove unused method tryGetWifiInfoForVcn
feec1b194 : Import translations. DO NOT MERGE ANYWHERE
e02ba732c : Fix bug where selecting Force Conversion does not show conversion options
043deb72a : Adding Repeat Keys logging
9aede884b : Fix a crash in btservices library caused by unused twopanelsettings code
9d2c8e6db : Revert "Fix bug where selecting Force Conversion does not show conversion options"
5c2e901c7 : Turning on translation for Notification Timeout settings
57a911db0 : Import translations. DO NOT MERGE ANYWHERE
abaa88b54 : Import translations. DO NOT MERGE ANYWHERE
fd8aa5dba : Support using USB keyboard to update device name
4dd901911 : Fix bug where selecting Force Conversion does not show conversion options

+- Project: platform/packages/apps/TvSystemUI

5fb9256 : Import translations. DO NOT MERGE ANYWHERE
203f738 : Audio output: add a11y settings entry point
d13550b : Change built-in speaker subtitle when grouped
e21186e : Adds tooltip for audio output settings hint
da520c5 : Add fading edge to output panel.
722377b : Audio output settings: check MediaRoute2Info.Type
ca544fa : Add tooltip to slice preferences.
9cdd548 : Open SliceFragment for output device settings.
fabad49 : TV: Update HeadsUpEmptyImplModule import
acf1dc0 : Import translations. DO NOT MERGE ANYWHERE
c9afa36 : Create slice preferences with output dialog style
43cfbd2 : Refactor TvMediaOutputDialogActivity
ddc6688 : Move WMComponent to wm shell
66104f6 : Import translations. DO NOT MERGE ANYWHERE
237dadf : HDMI: Improve Active source lost pop-up [2/2]
708fc32 : HDMI: Improve Active source lost pop-up [1/2]
a703aba : [CS] Use ReferenceNotificationsModule in TvSystemUI
b802d17 : Fix sysui res reference
3a6947c : Update dependencies

+- Project: platform/packages/apps/WallpaperPicker2

eb61919f8 : Cleared persistent wallpaper repo value after reading it
ce006d1bd : Create utility class for testing android.app.WallpaperInfo with Robolectric
bbbe77ee6 : Ensure that wallpaper attribution entries are not null
1d01855b6 : Simplify adding color update flows
b3ef4bc4a : Revert "Fix the layout of the grid option customization (1/2)"
ac10dc584 : Import translations. DO NOT MERGE ANYWHERE
622911eb9 : Migrates Monet's Style Enum to @IntDef
3aa0a69ef : Fix the layout of the grid option customization (1/2)
5cb31e703 : Fix foldable set wallpaper dialog content description
17710778d : Bind Toolbar colors (1/2)
5d35956d5 : Don't show set wallpaper dialog after screen rotation
f6c48194e : Fix not showing low res wallpaper in small preview
87d2de954 : Import translations. DO NOT MERGE ANYWHERE
16e60a102 : Import translations. DO NOT MERGE ANYWHERE
7510f5072 : Make preview smartspace correspond to the selected clock (2/3)
57c1443e6 : Update layout for foldable apply wallpaper screen
2e991d505 : Expose sysui runtime values to wallpaper picker (1/2)
b27b531a9 : Import translations. DO NOT MERGE ANYWHERE
7d78c4796 : Cleanup preview wallpaper model before launching preview again
573476eeb : Fix always land on lock screen preview
9bc6aa3e8 : Fix status bar color
2a376affe : Add disabled background for the apply button (2/2)
f331f6508 : Intercept pager touch events when in secondary screen
2df9ad09f : Import translations. DO NOT MERGE ANYWHERE
fdc0dc94d : Changes for 3p app listeners
3eb7da989 : Fix preview pager not responding to scroll progress
5da766f5a : Migrate materialColor* attributes into colors
888d9b20b : Revert "Revert "Revert "Revert "Replace field injection with con..."
7b5b7f75f : Link category screen to main screen option
580861563 : Revert "Revert "Revert "Replace field injection with constructor..."
72277d095 : Use LWP description for floating info sheet
ef548a272 : Revert "Revert "Replace field injection with constructor inject..."
3756e9dab : Clock font content (1/2)
d258e122a : Revert "Replace field injection with constructor injection."
989dd2311 : Prevent launching preview before the wallpaper model is ready
e80afe67c : Fix selected option not updated after applying color (1/2)
fc8fef502 : Replace field injection with constructor injection.
5d0329ffd : RESTRICT AUTOMERGE: Disable multicrop flag
5f2dcce68 : Make clock editor support blueprint clock view in new customization picker
2a1a6607b : Replace field injection with constructor injection.
c43ffc0ef : Launch full preview based on screen and dimension on foldable
2a9cfdae2 : Update previews on color config change (1/2)
c814381b6 : Clock font axes configuration content (1/2)
f691b4589 : Add foldable preview pager transition
ea2842f14 : Remove leftover FakeSecureSettingsRepository
76f177214 : Added support for default drawable for 3p category apps
66910f6a5 : Fix errant nullable annotation
9b576c617 : Removes WallpaperDescription stream read/write methods
992ec5c8e : Adjust tool bar tab dimension (2/2)
a0609a6b6 : Add support for description-based live wallpaper content handling
22e451339 : Remove ReactiveTouch Prototype from WPP
75dc696f0 : Add set wallpaper for new preview UI
015e5eb2c : Fix leaking progress dialog when setting wallpaper
df6c4d8be : Fix back navigation for the main screen (1/2)
0be7aba37 : Import translations. DO NOT MERGE ANYWHERE
1d4341182 : Add color option icon morphing animation (1/2)
2a8ddef98 : Add axes update method
5fa6b0aec : Fix multi-crop set wallpaper dialog rotation issue
797d01ecd : Add back full preview when small preview clicked
a68baf5b9 : Introduce transition state and remove back press state
10fcd7338 : Fixed extra categories appearing in third party live wallpapers
b7c581805 : Add color for clock edit entry point icon
ecdf6ed84 : Fix wallpaper connection leak
109c450fa : Add tests for WallpaperPreviewViewModel
da4563e1c : Import translations. DO NOT MERGE ANYWHERE
43a3edf32 : [DO NOT MERGE] Add missing null parameter
596c139e2 : Add apply wallpaper screen and transition
017fa810e : Preview UI color based on color picker user selection
3a1c0d514 : Use motion layout for small preview pager
936b89549 : Fix effect toggle state when rotating screen
1219d0c2e : Dark theme toggle (1/2)
a4179ff36 : Fix tab click distance
db47d0902 : Fix collection ID for logging creative wallpapers
579d7f1f8 : Morph animations for customization options (2/2)
def0adc40 : Fix crash in Individual Picker Fragment
f0af07bdf : Update apply button style
e67508ba5 : Remove dependency of CategoriesFragment from CPA
235754620 : Add multi-layer MotionLayout support to SurfaceView
4af6a8519 : Formatting changes
e004070ab : Fix live wallpaper size incorrect in shortcuts preview
299aaf4b8 : Embed CustomizationPickerFragment2 in CustomizationPickerActivity2 (1/2)
81224be2b : Remove large_screen_wallpaper_collections flag
faeb43757 : Don't set SurfaceView radius for set wallpaper dialog
301978327 : Refresh network categories each time network status changes
c7853b3a2 : Fix grid screen options (1/2)

+- Project: platform/packages/inputmethods/LatinIME

c0b9f318b : Fix bug: AOSP keyboard is shown incompletely in Android V landscape mode
24e7ea358 : Fully qualify @attr reference to android.R field
859ebf0df : Use jni_libs to install shared library dependency

+- Project: platform/packages/modules/AdServices

b4a60c4cac : Import translations. DO NOT MERGE ANYWHERE
aa6abc49f6 : Enable adservices logs in PtbM11RampServerAuctionSimulationTest
61bfa4f348 : Use AdServicesFlagsSetterRule instead of shell commands in perf tests
e508a5d744 : Moving on device debug reporting to debug package
0abd0c0d95 : Adds non-background CELs for APIs
bd145c337e : Removed deprecated setFlag() method that took array and separator.
6ca44b6d13 : Improve debuggability of CustomAudienceServiceImplTest.
2ef28a3a10 : Rename ErrorLogUtilCallback to ErrorLogUtilSyncCallback.
d529b9491a : Quick follow-ups for If02512638c6ba302e26c1e8be4c79fb9313c010b.
971a2ed124 : Refactored FlagsSetterRule methods that set String arrays:
9b3ead6a95 : Removed @deprecated setSimpleArrayFlag()
af254d10c4 : Refactored setSimpleArrayFlag()
851c13faaa : Use API-specific rate limits in FLEDGE CB tests
0f350bfe90 : Initial version of AdServicesFlagsSetterRuleTest.
fc0b677fb2 : Fix ErrorLogUtil logging verification issues in tests.
797b425511 : Initial unit tests for FlagsSetterRule.
154f30be0a : Fix test_canAttemptForcedEncodingForBuyer_hasEncodedPayloadAfterCooldownStart_false
f6c197075e : Fix the MTS Configuration for AdServicesFrameworkUnitTests
4d1ffd1ca6 : Use table names from getMeasurementTableNames for clearDatabase() instead of constants.
ca64b1f908 : Update component ads documentation to reflect API review feedback
a9934d5b0e : Add a flag to enable the AdServices latency metrics RbATrace.
d57d207974 : Fix test_existingEncodedPayload_createdWithinCooldownWindow_skipped
e583f58219 : Using API-specific rate limits in FLEDGE
7bde0feba0 : Low level design for PerEventSampling sampling strategy
00461cd9e9 : Fix testDeleteAllUninstalledMeasurementData_success_recordsDeletion_S
5f23b27d6a : Remove dev mode from ARA
7a4dad9c52 : Add configuration delivery protos to adservices.
c218f33379 : Removed custom AtomicFileDatastore subclass.
0fdca7054e : [PAS][forced encoding] Add forced encoding E2E tests
96e9292263 : [PAS][forced encoding] Log service as the encoding source
8a95ce90cb : [PAS] Collect PAS encoding source type when logging EncodingJobRun metrics.
95120b9b1d : [PAS][forced encoding] Add implementation
1a4a466910 : Fixed NameValuePair so it allows null values.
50a7220b87 : Persist ReportingData and RegisteredAdInteraction
08f1999bd1 : devmode: Use in-memory DB for unit tests
449e06a375 : Return WinningSeller in AdSelectionOutcome
95e585c704 : Add support for component ads for joinCA
8f689a077f : Add a flag to use configurations manager to query enrollment data.
be9730746e : Unhide component seller in report event request
ab0fc2faa4 : Revert^2 "Mark @FlaggedApi flags as exported"
ebe89acf1b : Added unit tests for NameValuePair.
5e5f1332e7 : Hide deprecated AtomicFileDatastore (subclass).
4d41fd5740 : Initial version of LegacyAtomicFileDatastoreFactory.
204992cb29 : [DO NOT MERGE ANYWHERE] Remove testFailsToBuildWithInvalidDestinations from ReportEventRequestTest.
58da7c4790 : Added call to sync utils for new notification.
dfdba93c50 : Added null check and additional tests for migration logic for S > T.
855dbf4601 : Enable HTTP/2 in Fledge CB tests
c28d987ca1 : Removed module state class.
ac3c493626 : Using annotations to set flags in FledgeScenarioTests
712182a19f : Cosmetic changes on some UI classes:
25d404f252 : Adding check for developer options enabled in shell command CTS tests
aef96b9104 : Revert "Mark @FlaggedApi flags as exported"
2898ec1857 : [PAS][forced encoding] Add forced encoder factory
e86ef7e600 : Remove one ReportEventRequestCts test
9d3d0c9cb7 : Removed deprecated AtomicFileDatastore constructor take take strings...
44c4dbffef : Removed AtomicFileDatastore.teardownForTesting()
196a43c480 : Removed tearDownForTesting() from system-server components.
6a043c9bde : Build privacyParametersString using string concatenation instead of String.format to address Locale issues.
699ac72321 : Deprecated tearDownForTesting() methods.
141ba1f839 : Add service flags for component ads
66d5a239f2 : [Measurement] Rb ADR Implementation Alignment with Chrome
ba79902d7e : update provider stack trace message with cause
31292d267b : Changing foreground status enforcement to be per API
2d8525204e : Import translations. DO NOT MERGE ANYWHERE
e58c65e45f : Import translations. DO NOT MERGE ANYWHERE
7ce45463ac : Mark @FlaggedApi flags as exported
9697785219 : Improvements on AtomicFileDatastore:
30de193145 : Changed FakeRealLogger to use log.i()
8e81661553 : Add table for component ad data
e88ea745ec : Add CTS coverage for component ad data
a01109879f : add new errorlog method to log the root exception
7c87427842 : Import translations. DO NOT MERGE ANYWHERE
fefa3c55f7 : Add configurability to PAS flags
6d1a444aff : Refactor more service-core unit tests to follow internal guidelines
6f561f4319 : Add per-API flags for PA/PAS rate limit
85058cd259 : Added notification job for V2 notification.
7e12177972 : Refactor AdServicesServiceCoreDatabaseMigrationUnitTests to follow internal guidelines
584bcea6aa : Update Measurement interop tests.
610c459fed : Fixing the documentation for the foreground enforce flags
a7eb2236d8 : Data migration from old to new enrollment data.
c4907d0ae8 : Added new notification trigger for module state updates.
315ff98a44 : Adding flags for PA to enforce foreground status per API (FetchAndJoin, Leave and Schedule CA)
c6ba9bbd79 : Unhide API changes for component ads
6a2a38c0d9 : Add custom audience hidden api changes for component ads feature
47accc2163 : Fix the field number for winning seller.
9e21661550 : Import translations. DO NOT MERGE ANYWHERE
b243c3f363 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
7f13cee00f : Fix case on auto-generated flag keys
3ad8b00834 : Add flags for enabling forced encoding and cooldown
8504b4ee33 : Unhide new schedule CA constructor that doesn't require partial custom audiences
af519e0b6f : Deprecate the AtomicFileDatastore class contructor on the AdServices side.
9a0b1cbcfa : Re-enabled Ravenwwood tests that uses Context.getApplicationContext()
b3201f8c49 : devmode: Replace feature flag with debug flag
22f791a2db : Add adSelectionOutcome hidden api changes for component ads feature
4be07a80be : Pass signals stats builder into orchestrator
a4e58b3b69 : Remove suppress AvoidStaticContext warnings for private static method.
b59a78eba9 : Add instruction to suppress warnings in lint error message.
5f90818215 : Remove unused Context in static method.
208b7916ec : Enable StaticContextDetector lint
2ba0467b9b : Address (non feature related) comments for Flexible Contribution Filtering. Flag: EXEMPT (bug 376487941) Test: atest AdServicesServiceCoreMeasurementUnitTests Bug: 376487941 FLAG_GUARD: measurement_enable_flexible_contribution_filtering
17a3c08cbd : Minor improvements on AdServicesManagerService and RollbackHandlingManager:
3584db51f3 : Minor improvements on AndroidServiceBinder:
e15d43e537 : Preparing AdServicesSharedLibrariesUnitTestsRavenwood
5282db9252 : Adding prod debug flag to protected auction input payload
cd184d48e9 : Revert^2 "update to AtomicFileDatastore transactons"
9987f88e80 : Add logging for getValidFullFlexSourceWithNonDefaultEpsilon Flag: EXEMPT (bug 349590751) Test: atest SourceNoiseHandlerTest Bug: 349590751 FLAG_GUARD: measurement_enable_event_level_epsilon_in_source
63c4952439 : Added initial logic to show correct notification from module state update.
999a3ce308 : Make data types visible.
9e71b077c8 : Changed PhFlags.dump() to dynamically dump all flags.
7b8e634ade : Create lint to prevent passing a Context in static method.
7f2b835ffc : Update OWNERS files
14c1f97d8a : Deprecate AdServicesOutcomeReceiver
97f7b17a51 : Remove flaky annotation for AttributionJobHandlerTest#performAttribution_flexEventReport_triggerDataMismatch
8f9609f5a5 : Add log message to K-Anon for ease of debugging
208c9b5672 : Minor improvements on UserInstanceManager:
8dd819e644 : Revert "Re-enable PAS/PS E2E server test."
1ff2f76e86 : Change namespace for adservices_outcomereceiver_r_api_deprecated flag
18fd25e5c6 : Disable SdkSandboxMediaTest on U devices.
95ef913c8c : Revert "update to AtomicFileDatastore transactons"
7db7b80a09 : update to AtomicFileDatastore transactons
f1dee630f6 : Add header error report type to reporting status
bc991572c0 : Moved aconfig flags around to divide in 2 groups.
a54f2cdd51 : Part 1: Proto definition for log sampling config
73a67a328b : Move getConsentNotificationDebugMode method out of Flags.java
377770f7a1 : Removed deprecated AtomicFileDataStorage.dump() method.
cb71a66176 : Remove Foreground Activity for some PA Shell Cmd CTS Tests
edc1d850d8 : update the appsetidsdk manager test to pass empty appsetid
4ec0fd4252 : Add missing permission to PackageManagerHelperUnitTest
594eab8303 : Import translations. DO NOT MERGE ANYWHERE
3756f55e4a : Initial version of ConsentManager.dump()
127431af99 : Add limits on the number of CAs allowed per buyer adtech
7ead838425 : Distinguish instant reports from debug reports in testing.
2378849908 : Refactored AppManifestConfigMetricsLoggerTest to improve flakiness
f857011aff : Improvements on AtomicFileDatastore.dump():
f1eb0e94f7 : Add current user check to SdkSandboxShellHostTest
5b2b762f38 : un-hide winningSeller in AdSelectionOutcome
13ac8bdac5 : Add winning seller field in AuctionResult
40abda89b0 : Add COMPONENT_SELLER in reportEventRequest
ec412fabdc : Use correct namespace for winning seller flag
622f268ec4 : Add aconfig flag for new scheduleCA constructor
60b11e5f02 : Add winningSellerId field in AdSelectionOutcome
35b8efe122 : update getmodulestate and getuserchoice using param
4b293f01af : Add Chromium interop e2e tests for aggregatable named budgets.
8d5f3cd55d : Make last set value override previous values for module state params.
7cae56c085 : Renamed TestHelper.getAnnotation(s) methods(s) for readability.
7a38529518 : Add aconfig flag for component ads
18e09cca4d : Allocate ByteArray for ids using BigInteger instead of Long.
4771240d56 : [Measurement] Fix Broken MeasurementImpl Deletion Tests
4607e62692 : Fixed FlagsConstantsTest to support flags with domain.
ee6cd9a330 : Adding flag for filtering by installed packages in deny processing
c45131d962 : Removing flaky tags on deny job tests
7054dd6fd4 : Import translations. DO NOT MERGE ANYWHERE
65b5d94485 : Refactored TestHelper.getAnnotation(s) method(s)
9202b16578 : Add Register Trigger CUJ to Measurement Default Profile Suite
4847d81592 : Added TestHelper.getAnnotations() method
f6917b20e3 : Adding flag for package deny bg job for schedule period.
39762ce0b3 : Fix AppReportHistory Table to not use Latest
d8bb691b03 : Add multi seller mediation relatd PhFlags
073de548df : Adding flags for package deny package installed filter and background job period
676ab48caa : Adds a temporary solution for CEL logged within binder thread.
79e62f5c80 : Replace temp broadcast intent action constant with adservices sdk constant
00bcf35fba : Log all the exceptions in joinCustomAudience
4efe9feb3b : bugcop: Mark testGetTopicsLog as flaky
c56a945a2b : Use `BigInteger` to validate Trigger's filtering_id.
d4e1f26f47 : Allow 0 for named budget value
40f65cae48 : Adds E2E tests for Aggregatable Named Budgets.
fdd6e3f8c4 : Changes references to the LATEST table in measurement db migrators to a private table with the migrator version number.
317aa7488b : Add feature flag for extensible enrollment configuration delivery.
c04795ff64 : Add custom_audience_per_buyer_max_count PHFlag
38904ee803 : Add tests for more APIs in SdkSandboxManagerDisabledTest.
97d1dca4da : Ignoring failing dev session related end to end tests for PAS
72e4124ce6 : Re-enable PAS/PS E2E server test.
e140526089 : Reject Triggers with non-default filtering_id_max_bytes if source_registration_time is INCLUDE.
f811976a1c : Fixing flaky tests in PackageDenyBg job
5a7c1cd2dc : Add coverage for AdServicesExtDataParams getter methods to CTS test
c457710a98 : add atomic flag in system service
1a100c55fe : Broadcast notification display intent from mainline
e21b2562db : Import translations. DO NOT MERGE ANYWHERE
a7d8c1df55 : Add feature usage logging for Flexible Contribution Filtering Flag:EXEMPT (bug 376538090) Test: atest AdServicesLoggerImplTest Bug: 376538090 Change-Id: I9975570e02bbcf16a941351a618bac29f792e588
2c09d08739 : Implementation to store Aggregatable Named Budgets in a new table in the database.
d35ef64fc0 : Parse aggregatable_values from persisted Trigger table for Flexible Contributions Filtering.
4f77fa7415 : devmode: Mark test as flaky
1e43a5f78b : Fix and improve tests for Schedule CA v2 on CustomAudienceServiceEndToEndTest, AdditionalScheduleRequestsEnabledStrategyHelperTest, CustomAudienceDatabaseMigrationTest
f7150a1028 : Marking test flaky to unblock presubmit
cd6adc0497 : Add Method and Host to the AdServices fetcher logs
bea61bc81c : Add Fledge__schedule_custom_audience_update_max_bytes flag and apply it for scheduled updates
b072a2381e : [Measurement] Both Side Debug Keys E2E and Interop Tests
f2dc214f0a : set renotify test to flaky in presubmit
8c4459d47e : Fix AttributionJobHandlerIntegrationTest
bd69d669e4 : add coverage for AdservicesStates get methods
f1836eb800 : Modify sample app install config and add uninstall in teardown
de2b6eabc8 : Telemetry test in CustomAudienceServiceE2ETest
2c500e9c67 : Added scheduling of background fetch job for fetchCA and scheduleCA
fa671474f0 : IOException in Http failues in SCAUpdate telemetry
37e720da94 : Fix log in extractCustomAudienceToLeaveFromRequest
fb255ed45b : Add Api response Cobalt logging when Api call is logged.
51ebe1e45a : Remove dependencies on the 1-variant fallback
c54d0dd3ed : add coverage for UpdateAdServicesModuleStatesParams and UpdateAdServicesUserChoicesParams
f1f82b4636 : Add telemetry for ScheduleCAUpdate V2
59ee506932 : Add AGGREGATABLE_FILTERING_ID_MAX_BYTES to AggregateReport table.
6f4aa0d0be : Use Telemetry methods for ScheduleCaUpdate API
f71eed9860 : Omit source_registration_time when configured to EXCLUDE
940164a9e7 : Flag cleanup: measurement_enable_trigger_context_id
be71f95275 : Flag cleanup: measurement_source_registration_time_optional_for_agg_reports_enabled
3291cc4a02 : Deflake AbstractDbIntegrationTest
aab7047b7f : Improved AbstractFlagsPreparerClassRuleIntegrationTest.
7dc049bef4 : Deflake MeasurementCtsDebuggableTest
2240db33c1 : Revised requestAdServicesModuleUserChoices System API.
b27113012b : Clear encoded signals on revoked consent
fa3bc41eac : Remove flag sync commands from CTS AndroidTest files.
e8cad21ae3 : Support scanning annotations on implemented interfaces.
75f1b8164c : Install sample app for "current" user
bb2afb3b89 : add new flag for adid migration enabled
1ffe8fa7d2 : Marked AttributionJobHandlerTest as @FlakyTest
cc1fa07e1b : Marked AdPackageDenyPreProcessJobServiceTest as @FlakyTest.
11c885d433 : devmode: Throw ISE if storage not initialized
861a599c3f : Add image view to the manual testing app.
b775d8a8fb : Revert^2 "devmode: Add e2e tests across PA/PAS services"
da4c4508d9 : Remove @FlakyTest annotation from testAuctionServerResult_usedInWaterfallMediation_success
92557da09a : Clean up flag measurement_enable_aggregatable_report_payload_padding
c5c9bafe5b : Change request payload V1 from JSONArray to JSONObject
25942ab85c : Add identifier validation when scheduling additional requests
b34aa3567f : Refactor SandboxedAppSetIdManagerTest to use FlagsSetterRule.
5cb60035b4 : Update flag ramp testing group with all AdServices CTS tests.
7bfc2de75f : Add per_package_api_response metric
cc9b4f643d : Allow EchoCommand to take multiple args for message
ab1b0c1f42 : Add schedule ca updates v2 tests to CustomAudienceServiceEndToEndTest
0cbb7d95f5 : Log ScheduleCaUpdatePerformedStats
b56f0f080a : Split AbstractSdkLevelSupportedRuleTest into 2 to follow convention.
ea397d0b8c : Final fixes on FlagsPreparerClassRule tests:
e7b44b4ee7 : Moved some junit-related stuff into a CommonDescription helper.
8dd01666af : Removed gson dependency.
8831d4416e : Improved logging on ActionBasedRule.
0b65fc83a7 : Remove "Enable for R" TODOs
820d313029 : Implement StatsdAdServicesLogger methods.
4caa86544a : Revert "Add visibility for device/google"
23be9e96cd : Remove dependencies on the 1-variant fallback
f05e7ddf38 : Flag cleanup: measurement_null_aggregate_report_enabled
7b00298148 : Use logger methods to log BG job stats
b551fe0447 : Use AtomicFileDatastore#update() API in the adservices Consent Manager code
abbb12ea47 : Adding periodic bg job for Android S for executing loadMDD Package Deny Data package as events do not work on adservices on Android S
488b19513c : Use logger methdods in ScheduleUpdatesHandler
b35f82cc16 : Implement ScheduleCustomAudienceUpdateStrategy
f2ce4b2fc5 : Add visibility for device/google
d82b8888c0 : Include Throttler on AdServicesInternalProvider.dump()
a89d99f196 : Documented that AdServicesHostSideTestCase requires AdServicesHostTestsTargetPreparer
d087db8e31 : Executing loadMDD Package Deny Data on mdd file download and package add events
5588149aad : [Measurement] Two-Sided Debug Keys Business Logic
21aaec1ddf : Uses SetSdkSandboxStateEnabled annotation on some CTS modules.
2919b7fa65 : Removed methods that set actions from FlagsPreparerClassRule.
111141fc24 : Revised requestAdServicesModuleOverrides System API.
3f508b5a69 : Remove AdServicesExtDataStorageService Stub implementation
43ae4c60af : Install Attribution on S
c39c485a13 : Finalize foreground check enforcement for meaurement APIs after full launch.
007565bb30 : Create a flag for AtomicFileDatastore#update() API for batch operations
ed6d720f8b : Added proper support to annotations on ActionBasedRule.
86dd5b5321 : Fix key for deprecated Data Version Header
2757c5cdd8 : Changed some interface methods to return "self".
156a1bdff1 : Removed SyncDisabledModeForTest from FlagsPreparerClassRule constructor.
7e28d5c6d4 : Removed SafeActionMode wrapper from AbstractFlagsPreparerClassRule.
8ffe730825 : Fixed DeviceConfig.getSyncDisabledMode() for S.
b112f7a521 : Adding PackageDenyResolver service
8cb99a0060 : Update interop tests
ac859805f2 : [Topics] Fix empty topics issue by cleaning Topics DB and changing scheduleIfNeeded condition.
c05f0e3509 : Make schedule update v2 flag depend on v1
2ee4c1c4c8 : Simplify service list for back compat initialization
f621b70e3f : Revert "devmode: Add e2e tests across PA/PAS services"
5d85b531f5 : Disable facedown timeount in CTS tests.
c8b615ebec : Don't enable flag unsupported on S devices in CTS
8b24409665 : Add Stats classes for ScheduledCAUpdate telemetry
07ba73daea : Fix value for deprecated Data Version Header
ed20798f4f : Improved FlagsPreparerClassRuleIntegrationTestCase to check order of calls.
be1fb1ad81 : [Topics] Add a new topics feature flag to enable cleaning the Topics API database when the epoch job settings are changed.
22d2ed43d4 : Make Data Version Header Fix backwards compatible in CTS
df7a238181 : Fixed ActionBasedRule so it reset actions before test.
4af080599b : Refactor Aggregatable Buckets implementation to now be called Named Budgets.
6d2165e446 : Remove BuildCompatUtils
1a814799e6 : Added reset() method to Action.
97fe88e2db : Remove AndroidManifestConfigParser and related tests and resources
b8da214d79 : devmode: Add e2e tests across PA/PAS services
12860fef0e : Make Data Version Header Fix backwards compatible in CTS
892465b960 : Add DBCustomAudienceToLeave to CustomAudienceDao and allow_schedule_requests column to DBScheduledCustomAudienceUpdate
8471d3d3dd : Update AppUpdateTest to get user from Process
4a2198a471 : Make target_sdk_version for test SDKs 34
d29f80773f : Few improvements on testing infra:
bcd582d988 : Import translations. DO NOT MERGE ANYWHERE
4b9500c63e : [Measurement] Two-Sided Debug Keys In Reports Privacy Feature Flags
3a633411e0 : Add fledge_enable_schedule_custom_audience_update_additional_schedule_requests PHFlag
5574e272c8 : Remove Gson usage.
c17f605856 : Implementation for matching and adding contributions from a trigger to an attribution source's aggregatable buckets
4d6ea001ff : Add missing flag values to Ph dump
4cc25dfe13 : Add Locale when comparing doubles in tests
d96daf5fa0 : Replace temp intent action constant with adservices sdk constant
cc1833d434 : cli: Add --seller variant of get-ad-selection-data
0748543955 : Initial annotations support to FlagsPreparerClassRule:
66026b3229 : Improved FlagsPreparerClassRule by adding new methods:
2567f4555d : add back wait time for component intializer
ee6ea8ab5f : add message for the assertion fail
d87c4954e7 : Improved ActionBasedRule so it caches commands before the test run.
5aaa1a2d7b : Fixes for errorprone update
1b4d243627 : devmode: Add test for caller SE rejection & fix race condition
4d72132797 : Remove RvcPostOTAChannel and other remaining RVC-specific code
74b6ef39e3 : Annotate some CTS tests with EnableFlag instead of manually setting them
a21c097d7a : Update interop tests
b0ccf9761d : Created abstraction used to run cmd sdk_sandbox.
c00c826f7a : Check that exception is not null
a24a0f9fb9 : Remove flag dependency on test
6af78215d5 : Add E2E tests for PeriodicEncodingJobService
71f745e0d6 : update class set up with back compat initialier
3efff410fa : Split DeviceConfigAction into AbstractAction.
4259808cc2 : Add TLs to Infra OWNER file
7822373e7d : Refactored FlagsPreparerClassRule to use ActionBasedRule.
0f9c8d1014 : Use mockSdkLevelS method in MeasurementImplTest
05c162e02a : Add missing EMPTY_SUCCESS ErrorLogUtilCallback in CEL verifications
aa879ec1ec : Initial version of ActionBasedRule.
0dd5fdcbb5 : Disable facedown timeount in CTS tests.
2873cbe9e7 : Import translations. DO NOT MERGE ANYWHERE
96d596a41b : add user for HSUM
b305b021fe : Minor improvements on shared testing libraries:
e992771b35 : Add mockSdkLevelS method to AndroidStaticMocker
729a0645c2 : Remove unnecessary isDebuggable method
ca453b5515 : Remove sdk level S checks in UI tests
ffa9f60f79 : Remove unnecessary annotations, rules, and checks for Android S+
cab960d0ef : Refactor MeasurementImplTest to extend AdServicesExtendedMockitoTestCase
cd8cedddb6 : Call notification activity intent from mainline
43020dc527 : add flags to PhFlags.dump
3688944933 : Add ApiResponseCobalt logger for logging Api response metrics via Cobalt
e9887ce6a4 : Update Android.bp and module controllers to disable all tests on Android R
74efdc4fb4 : Implementation for trigger registration parsing with aggregatable buckets.
5ed277d02a : Add missing ErrorLogUtilCallback in CEL verifications
cd685599d8 : Set SyncDisabledModeForTest.Persistent before the annotated flags
8057569b80 : Hostside Test for ETSV cache invalidation
d7d5136436 : Use the correct flag for scheduleCA data cleanup
9851740198 : Clear protected signals when signals are removed
e9c4e6cff7 : Validate module code in Module State and User Choice Objects.
0efa2ba3e8 : Add test to make sure SetSyncModeAction handles null mode.
ca7bdcc4c8 : Use RequiresSkdLevel annotations instead of rule
06a27a791d : Add new Rvc test modules to TEST_MAPPING
3b86905460 : Moved adservices-shared-testing to shared/testing-libraries.
ea2f533396 : Split out RVC tests into separate test modules
71d23ac563 : devmode: Throw ISE if DevSessionState is UNKNOWN
df3173fd38 : Fix EpochJobServiceTest
baa5192a55 : Minor fixes based on checkstyle check.
9f378ab5e0 : Import translations. DO NOT MERGE ANYWHERE
8152e28799 : fix(devmode): disable feature in failing test
aed1fd18df : Revert^2 "devmode: Thow SE or ISE based on session state"
357231901a : Revert^2 "devmode: Read flag in DevContextFilter"
2c7934a666 : [Ravenwood] Strip mockito from AdService Ravenwood tests
d070b66612 : Revert "devmode: Read flag in DevContextFilter"
46b36c475f : Revert "devmode: Thow SE or ISE based on session state"
bbfde963d2 : Fixed string change for new PAS strings.
03749f8909 : Use SetSyncModeAction on FlagsPreparerRule
2d28b87770 : Added Action interface and initial implementations.
564b6ebe57 : Fixes for errorprone update
3409920526 : Fix OWNERS file for AdServices UX files.
7603b2f54d : devmode: Thow SE or ISE based on session state
4f07d1cd0c : devmode: Read flag in DevContextFilter
2222ff1431 : Pass debug flags in AdSelectionService
5626a5c42f : Marked CustomAudienceShellCommandsE2ETest as @FlakyTest.
b4ebff9b54 : Adds missing CEL in persistResult runner
93107de3c1 : Enable Cobalt merging registry when MDD downloaded registry is available
b3b748375e : Create flag for Api call response Cobalt logging.
57187833cc : Fix test
7eacacdfac : [cobalt] Log cobalt periodic job upload status
54e75f47f1 : Removed NotificationTypeParams builder.
af3f01ba1d : Enabled AdServicesApplicationContext.
87e9249295 : Split out remaining R-specific tests
a8187b1c14 : Added "canonical" OWNERS to UX owners
6c9647b22a : fix(devmode): Some minor style updates
8ae921281f : fix(devmode): Return NO_OP instead of throwing ISE
656a2ea33d : Improved testing of DebugFlags:
9612c7f951 : fix no activity tests
cc91081e12 : Initial version of AdServicesApplicationContext.
38a1e695fc : Updated notification layout to reflect new PAS strings.
57f07e7d42 : Added ApplicationContextSingleton.setAs()
97aac0eab6 : Adservice mocker extensions:
24be9de717 : Improvements on AdServicesInternalProvider.dump() and related friends:
ae95201411 : Split out R-specific tests in MeasurementServiceImplTest
497258bc0e : Add debug logging to the onPropertiesChanged callback
70d755258e : Crystalball test for interstitial ad show with back navigation
7da33699d3 : Pass debug flags in CustomAudienceService
3081e994ce : Add rule to recreate JS sandbox isolate before tests
fc5ee9bd15 : Refactor Topics unit tests to follow internal guidelines
c411b4c86f : devmode: Update DevContext to expose DevSession
b07b2763a6 : devmode: Add state machine to dev session
2f112e67f2 : Deprecate R-specific AdServicesOutcomeReceiver APIs
e5124b1069 : Remove FlaggedApi annotations for launched APIs
436d733265 : Add a Verify after the first callback in JobServiceTest
06dcc64bf9 : Fixed AdServicesEnrollmentTest for multi-user on AOSP builds
ffffdd88f6 : Mark packages/modules/AdServices apps in updatable apexes with updatable: true
daea8ddc11 : Get measurement table names programatically
dade08231c : Pass debug flags in ProtectedSignalsService
4a9ae67676 : Fixed race condition between latency test run and flag reset
87ff4499a8 : Create a lint to detect SharedPref usage and throw an error if used
f3ff76f159 : Added FlagsPreparerClassRule to AdServicesCtsTestCase.
538af6f6a1 : Added FlagsPreparerClassRule to AdServicesHostSideTestCase.
2a2cde8147 : devmode: Fix incorrect help text for --erase-db
ac51af82ad : Change Cobalt logging to run on blocking executor
ee50e5708f : Initial implementation of AbstractFlagsPreparerClassRule rules.
2277d0d642 : Added AdServicesDebugFlagsMocker.mockGetConsentNotificationDebugMode()
6e8fc5e4ab : Inlined more mocker references from job superclasses
a106d00bf6 : Removed AdServicesCommonResponse.
15da53c97a : Updated strings for PAS.
f804d77dfd : Initial version of AdServicesDebugFlagsMocker.
4dabf809f8 : Disable AdExtBootCompletedReceiver on R
e0e0932dfc : Add an update API to support batch operations in AtomicFileDatastore
5a8a1af9b6 : "trigger-reporting-origin-limit" should be called with "limit" set as the limit, not the current count
0bff70667d : Clean up flags and update FlagConstantsTest to ignore aconfig-only flags
940666820a : devmode: Register DevSessionCommand in factory
ec722643a8 : Add all tables to datastore state in E2E test debugging
b3db176858 : devmode: Implement reset w/ DatabaseRefresher
e21134d494 : Fixes for errorprone update
aced71da99 : Add giladbarkan to adservices OWNERS
dafbc5b8b4 : Inlined AdServicesJob[Service]TestCase.mockJobSchedulingLogger
a069d1e428 : Added mMockDebugFlags on base Mockito test cases
c17e25cab6 : Renamed setters to requesters.
a77ed01ed4 : Remove unused OutcomeReceiverConverter methods
dd58cc0b23 : Uses mocker to mock Clock methods.
40ba27734a : Added Clock expectations on SharedMocker.
f77bd4d0f2 : [B&A] Implement CEL collection in PersistAdSelectionResult and GetAdSelectionData APIs.
7751054845 : Adding missing test cases for mocker references.
4e7570ad70 : Use ETSV for Activity allowlist
30661b5b59 : Use ETSV for contentProvider allowlist
d3ef46b77f : Revert^2 "Remove AdIdCompatibleManager class"
58cb3751f2 : Revert "Revert "Refactor AdServicesOutcomeReceiver APIs in Measu..."
eb5c5e1e22 : Revert "Revert "Refactor AdServicesOutcomeReceiver APIs in AdId ..."
c97cbbfa07 : Removed Unknown Module.
3f49d571f5 : Revert "Refactor AdServicesOutcomeReceiver APIs in AdId service"
0311996f1f : Revert "Refactor AdServicesOutcomeReceiver APIs in Measurement s..."
b11908213a : Revert "Remove AdIdCompatibleManager class"
693650fa52 : Use ETSV for broadcastReceiver allowlist
25fb22426c : Move getConsentNotificationDebugMode to PhFlags.java
b4f4d20cc2 : Cherrypick from main to android15-tests-dev for Partner issues
6a1fab912d : Add more logs in load SDK flow
8c196251b0 : Merged jobMocker into mocker.
9c44a617d0 : Moved Flags.dump() from TopicsService to AdServicesInternalProvider.
b2964065e8 : Disable PA/ARA event reporting integration on Fledge CTS tests
b78d8e73b5 : Remove obsolete TODO
0c5886fb2b : Remove AdIdCompatibleManager class
23b78f4113 : Refactor AdServicesOutcomeReceiver APIs in Measurement service
9185cfecb9 : Refactor AdServicesOutcomeReceiver APIs in AdId service
300a3027d2 : Added tests to assert interfaces implemented by mocker references.
ae3b39ce41 : devmode: Extract reset logic from ConsentManager
463d3f6b0e : Minor improvements on SyncCallback infra:
b9601f92bb : Refactored AdSelectionManagerTest to use Mockito mocks.
f487a3bb69 : Improved PhFlagsTest#testDump() to check for its contents.
ab8f3e5816 : Update AtomicFileDatastore to revert the update when ioException occurs.
403db5b836 : Implementation for source registration parsing with aggregatable contribution buckets.
af19b9f11a : Improved logs on AnswerSyncCallback.
25e07013cc : Implements dump() on DebugFlags.
0730b47b7e : Un-deprecate the AdServicesOutcomeReceiver APIs in AdServicesCommonManager
cbcbd53a05 : Add abmehta as an owner for adservices measurement
1d61ca9d62 : Move debug flags methods out of Flags.java
8b9d5e8565 : Also include empty scopes when checking scope set uniquess per navigation registrations.
7ac18dadb3 : Fixed AdServicesFrameworkUnitTests Android.bp so it can use Mockito.
4013d2ecca : Import translations. DO NOT MERGE ANYWHERE
12ba477e05 : Refactored AdSelectionManagerTest to use OutcomeReceiverForTests
a2d67fcd0f : Add ETSV to dump output
df4586440b : Removed some calls to setDevice(ITestDevice)
b9737c62a7 : Handle tmp dex files being deleted during verification
d46213c05f : Flags for ignoring reports

+- Project: platform/packages/modules/AppSearch

4c36dd18 : Restrict Embeddings to Android W+.
3cfe4ddb : Add `max_allowed_app_function_schemas_per_package` to AppIndexerConfig.
6efc9c69 : Add a broad try-catch to AppFunctionSchemaParser and attribute parsing for joinableType.
235535bd : Update Framework from Jetpack.
af7f7812 : Update Framework from Jetpack.
dfce7b6b : Add a method to parse AppFunctionStaticMetadata from XML files using given schemas.
f8769247 : remove obsolete copyright line
ec6ebf29 : Fix bug with enterprise contacts test
b93e4e41 : Add flag to enable AppFunctionSchemaParser for dynamic schema parsing.
e5a2fe84 : Refactor IAppSearchResultCallback to take AppSearchParcelV2.
7556c9f5 : Update Framework from Jetpack.
d2740c1f : Add AppFunctionSchemaParser to parse XSD files and create AppSearch schemas for AppFunctions.
fc1aad75 : Sync BlobStore to AppSearch
f0fb13f6 : Small comment fix to keep in sync with GMSCore
dd9306d3 : Update Framework from Jetpack.
8aef4445 : Add mobile application qualified id to app open event
f70e5ced : Update Framework from Jetpack.
39de4574 : Update Framework from Jetpack.
4310fac6 : Remove appfunction code from appsearch
32d2bec1 : Revert^2 "Update Framework from Jetpack."
a853d9bc : Add AppOpenEventIndexerConfig interface and its implementation. Use it to configure periodic job scheduling.
a86e3ab1 : Create AppSearchBatchResultGeneralKeyParcel to support general key
e3ea28ef : Add AppOpenEventIndexerManagerService to index apps into AppSearch
f88e3b56 : Removes usages of streams from tests
d75b5f8b : Makes AppsIndexerImpl a bit more efficient with indexing
648d9d07 : Create AppSearch StatsUtil.
d5514653 : Move `parser.nextText().trim()` into a variable.
d85c491d : Revert "Update Framework from Jetpack."
48559bad : App open event indexer user-level management
625ec0bb : Changes CtsAppSearchTestCases to use appsearch_flags_java_lib
957a3b6a : Update Framework from Jetpack.
d80e6f84 : Changes default indexer type to CONTACTS_INDEXER
597e1c42 : Adds stats logging for app function indexing
08a44dfc : remove multi-user and enterprise deprecated functions from DeviceState
18f947e5 : Add AppOpenEventIndexerImpl which uses last run time to fetch the latest events and index them.
f6211ee6 : Add new PACKAGE_USAGE_STATS permission to SetSchemaRequest.Builder#addRequiredPermissionsForSchemaTypeVisibility.
84c02c3c : Fix AppSearchHelperTest#test_newAppFunction_parentSchemaIsInserted.
5aa35ede : move multi-user and enterprise annotations to the modules
155ca2ae : Adds enable_additional_builder_copy_constructors flag

+- Project: platform/packages/modules/Bluetooth

6cc36995e4 : Remove flags for encrypted AVDTP & AVCTP channel
45cedc6ae4 : Flag to retain identity address on BT restart
3eed398750 : Retain identity address LE-only devices on BT restart
8cd74c2126 : Retain identity address in device properties on BT restart
0b953d38fc : Don't use config object after std::move
4eabb4bb71 : Implement basic metrics for GATT.
b736d7d5ae : Notify socket disconnection event to socket hal
44fa29e185 : Revert^2 "HCI Vendor Specific Interface needs to be mocked for testing"
b08a1ce111 : Reduce test contact data
cabf7ea8c7 : Revert "HCI Vendor Specific Interface needs to be mocked for testing"
f3ca7a1876 : Fix a test
9005686093 : Add missing flag of socket_settings_api
ea42716775 : HCI Vendor Specific Interface needs to be mocked for testing
e9b7d1d0b0 : Set all-zero of peer address, peer address type, and peer IRK in resolving list for RPA offload
08c867dcc4 : Early flag removal for ble_scan_setting
210363be6f : Add system property for rpa_offload_to_bt_controller
3978eb59c9 : Fix issue with importing audio framework flag library
d2a7e1e4ce : Revert "Make SetBrowsedPlayer return current path and correct item count."
e55f41f736 : ADM: fix active device fallback on handleHfpActiveDeviceChanged
e5734a4b4c : Added 50ms delay after onSetReport() RPC call
d4aedcc392 : Import contexthub v4 for endpoint ID on offload socket context
c2dd4e254f : Fix L2CAP mocks in rfcomm-fuzzer
4434fc5a36 : Test: Add basic test cases for New socket Setting related APIs
f849da0c88 : leaudio: Fix ConfigureStream for device using Caching
5615c053c7 : Add conditional privileged permission annotation on pbap client
aa0a9cb56d : Read RFCOMM socket offload capabilities from bluetooth low power processor
73d8ba1e82 : Notify offloaded LE COC socket channel info to socket hal
e4447aef49 : Determine local mtu for offload LE socket
5c298722d7 : Set the initial local credits to 0 for offload LE socket
88ba5e075e : Send signal to indicate if app is accepting incoming connection on listen socket
77fbb39b60 : Add bluetooth offload socket API
fa42d93294 : Read socket offload capabilities from bluetooth low power processor
ea59bfabf3 : Add low power processor offload manager with socket hal shim
762667bd76 : Add helper functions to get LE L2CAP channel and ACL handle information
79112a6a57 : Update the advertising_event_properties to match the definition
2d91ac42a5 : Allow bluetooth-test-util-lib to be accessible from ranging cts tests
157fb2011f : [le audio] Null check when setting le audio active device
b415d7eec8 : Update code for Rust 1.82.0
1195643226 : OWNER freeze window
a08620d291 : btm_sec: Don't trigger new authentication before collision timer expired
ee39cdc5ff : leaudio: Improve disconnection during streaming
f4c887d187 : Add new API `getIdentityAddressType` to BluetoothDevice.java
16a771f3ba : Don't send HCI disconnect repeatedly
ad045fe993 : floss: wire hid_virtual_unplug from btclient
bc4c5706f0 : Add `identity_address_type` to `leAddressAssociateCallback`
54e599db65 : directed_advertising: fix call to high_duty_cycle
ce57f15683 : Get the BLE conn interval and pass to the HAL.
5583de1500 : handle the CS capabilities
b087499809 : system/stack/*: Fix -Wmissing-prototypes warnings
b179280bde : bumble_experimental/hid: Ensure that hid_report_queue is defined
f25990ec54 : Flag to prevent device removal from different thread
c87909f2a3 : Mock ContactsProvider for CallLogPullRequestTest
42d40c3d01 : Cleanup State Machines after connect tests
4589d4e103 : Downgrade exceptions to warns for tests
c85a6edcbc : leaudio: Improve dumpsys log for stream creation speed
400e8309a4 : Writes procedure data for HAL V2
5d9f08f74f : java.lang.ClassCastException fix
e71691b807 : leaudio: Improve cleanup function
d28a26b4d9 : Add dont_send_hci_disconnect_repeatedly flag
c329e5d16d : Flag to properly cancel the pending connections to the added HID device
897dc82785 : Update the doc of getChannelSoundingSupportedSecurityLevels
1c978e0bf0 : Gate channel sounding feature with feature flag.
947f34b1cd : ScanManager: fix flaky test
05b21943dc : ScanManager: Direct handler access
335505990a : ScanManager: Use provided AdapterService for context
174926bb68 : AppScanStats: allow to inject time provider
a69f4fc552 : Import translations. DO NOT MERGE ANYWHERE
d8e3c85738 : Import translations. DO NOT MERGE ANYWHERE
d596d7d1a0 : Import translations. DO NOT MERGE ANYWHERE
92e2af0c09 : Import translations. DO NOT MERGE ANYWHERE
90df1cedbe : leaudio: Fix recovery state
1d66a6aa2d : removed verifyNoMoreInteractions
fc7b21f15c : Remove LTO workaround
c5132bb980 : RootCanal: Update build.rs to directly use the pdl_compiler library
dc3fbdcec3 : audio_hal_interface/a2dp_encoding: Rename BluetoothAudioStatus -> Status
14a4e988ce : gd/dumpsys: Remove flatbuffer abstraction for dumpsys output
8703e24dc5 : gd/os/wakelock_manager: Rewrite dumpsys using std::format_to
a6a262d406 : Use STREAM_VOICE_CALL instead of STREAM_BLUETOOTH_SCO
d19d98a9ff : Make PbapClientBinder package private + use never() instead of times(0)
f3b6a5e53a : prevent_duplicate_uuid_intent flag to be enabled
881dc4b82e : broadcaster: Confirm resume request when already streaming
2d68547c28 : broadcaster: Handle not ready to resume broadcast
04d57ac3a4 : floss: reject media profile connections for second device
53743c632e : Export SUPPORT_BLUETOOTH_QUALITY_REPORT_V6
7267d245cf : AICS: add pts-bot testing
2bf4d716d1 : PhonePolicy: use looper to make test faster
f1fc93e21e : PhonePolicy: unify state change variable order
06fc6c1f24 : Link flag statically in unit test
b685c80a34 : Update A2DP_SESSION_REPORTED metric with the metric_id field
1404ca3a16 : HeadsetClient: merge "start" into constructor
605849baa8 : App: Update baseline
583d18388d : gd/hal/snoop_logger: Make explicit call to DumpSnoozLogToFile
d8fd433a4f : dumpsys: Remove shim dumpsys output
906704700a : Do not require BLUETOOTH_PRIVILEGED for KEY_MISSING broadcast
a037e89801 : Adapter: remove most of suppressLint
c5f070d68f : clear binder identity before executing on executor
4536389978 : Don't leak internal flag to CTS
6793743f5d : Storage Refactor: Introduce a new PbapClientStateMachine
61e1188aa9 : BluetoothLeAudioCodecConfig: Add OPUS codec type for LeAudio
ec562c33c1 : pass CS config parameters to hal
3d0dcd5763 : broadcaster: Introduce STOPPED audio state
abb11939b8 : Make KEY_MISSING into ordered broadcast
ff6300b137 : Import translations. DO NOT MERGE ANYWHERE
b3f4f5d359 : Add new API for directed advertising
11cd9cdb05 : Move PhonePolicy to Optional
ae3841031c : Storage Refactor: Introduce PbapClientObexClient and others
4cf53438e5 : Framework: update some nits in javadoc
00affc8e80 : Storage Refactor: Introduce PbapClientContactsStorage
315e665c37 : BluetoothCodecType: Add codec ID for LHDCv5
8df42ac896 : Add flag to ignore authentication request when collision timer is active
c37c8b67d6 : Fix NPE in AdapterService.onUserBondResponse
ebcafca247 : Avoid processing l2cap events when stack being shutdown
be9589e260 : Storage Refactor: Rename similar objects in preparation for refactor
d3694a2828 : Add parameter to APLEFM AT command
260295477a : Bluetooth Framework supports new BQR parameter for Android Bluetooth Metric
1b4c5a4b78 : avdt: remove redundant states and functions
9d4701b507 : avdt: connection role as enum
063a6a5db2 : avdt: refactor AvdtpTransportChannel
fa184a6bd0 : bta: Close connection on BAD_STATE AVDT suspend confirmation
a382d201b0 : has_client.cc: add missing BTA_GATTC_CancelOpen in Disconnect
0ae9c06d16 : Improve key_missing_classic_device handling
1a331e0f30 : ActiveDeviceManager: Update tracking connected devices
5e5ec68190 : RPA: RPA offload to BT controller
7b37914925 : gd/hci/acl_manager: Rewrite dumpsys using std::format
c4611d04b2 : cpplint: fix the majority of runtime/int
28af56c1c1 : audio_routing_centralization: rm flag and revert
3ebc2918ad : RfcommTest: Bumble pairing config override
b052e5857c : system: Migrate from {fmt} to std::format
709f2a3dc2 : using device config override command rather than setprop
a1c6829c2a : AICS: expose APIs
7aaee9915d : Add flag a2dp_lhdc_api
fa73e56edf : Abstract account type ready logic behind PbapClientAccountManager
58108ccf17 : Add the measurement frequency selection.
ff7a3b7a71 : Import audio framework flag library in Bluetooth module
c9eff0fddd : Add flag to avoid processing l2cap event while stack being shutdown
1b03899f3b : flags: Add leaudio_add_opus_codec_type
e2f924c48e : PhonePolicy: add HAP to processConnectOtherProfiles
d7fa3157fa : ActiveDeviceManager: Add logs when checking broadcast state
fac08f4662 : ActiveDeviceManager: Fix handling disconnect of streaming set member
eea12fe36e : Clean up BtaGattQueue when connected
08967a0406 : Remove gatt_cleanup_restricted_handles flag
e196a86635 : RAS: Use formal UUIDs
f5f3062fd7 : audio_hal_interface/a2dp_encoding: Rename BluetoothAudioPort -> StreamCallbacks
6dc72243b5 : Clean up PbapClientServiceTest
abacae6bc0 : Start OBEX Client as a function of supported features
9fe5c1330e : Create PbapSdpRecord wrapper for record and constants
5a8e6dc686 : Combine FakeObexServer and FakeObexTransport for simpler syntax
733678d319 : Rename RequestPhoneBook* classes to RequestPhonebook*
2c8afeb9c6 : Clean up base request object and subclasses
df194d6717 : Create PbapPhonebookMetadata to hold size and version information
27d5bbc26d : Create PbapApplicationParameters to abstract request params
4f2d3c8d0a : Rename PbapClientVcardList to PbapPhonebook and update test coverage
6dfd9194ea : HCI: Failing to open the snoop-log is not fatal
c1ce667ec7 : Add Disconnecting State to HeadsetClientState Machine
fee8d8a7a5 : osi: Always implement strlcpy as osi_strlcpy
9bb6900504 : le_audio: Implement default setter for fallback to unicast group
646c647581 : le_audio: Introduce set/get BroadcastToUnicastFallbackGroup API
c7480a1397 : SystemServer: Extends error prone enforced list
abcd2c1533 : AckPause when disarmed
ceb1aa2357 : Don't dump pending_discovery
c66e69a401 : flag: connect_hap_on_other_profile_connect
aa17a7c044 : Drop pending fragments for disconnected connection
42b79ddd44 : Lower potentially noisy alarm logs to debug
0e33c14fdf : Add a flag for fixing bug on round_robin_scheduler
461aa06270 : gd/hci/controller: Rewrite dumpsys using std::format
55bdbd8454 : Add ENCRYPTION_CHANGE broadcast
43cbeb6224 : Fix build.
3b3d2ceb3b : Notification: Allow 1 hour window + fix time delta
20e6684a5c : le_audio: Unify local audio source notification usage
79b29efa32 : LeAudio: Add additional debug logs
836bb18cc9 : Add ScanController mock to LeAudioServiceTest
5fa8ced8a2 : flags: leaudio_dev_options_respect_profile_sysprops
377d906d35 : HearingAid: stream to prevent getting the map twice
f0148ddfbe : Disconnect on encryption failure
5109637ad6 : Do not update security flags while saving CSRK
9640cb1d0a : GattService: merge "start" into constructor
e17229acec : Prevent profiles from initiating connection while the device is removed
d4d7594493 : flags: leaudio_config_profile_enabling
bd9496f407 : BluetoothMetrics: Log the bluetooth event for SMP pairing fail
01ff735616 : BluetoothMetrics: Log the User Confirmation Response at a higher level
e3e7d78e2a : Stop explicitly adding bionic subdirectories to the include path.
b8fdac5a82 : LeAudio: Fix the confusing logs
e9e864f423 : Add a flag to clean up BtaGattQueue
b082d6328d : Add flag for directed advertising APIs
ccf64210e8 : floss: enforce truncate string to be at char boundary
83bdd20955 : Import audio framework flag library in Bluetooth module
c8b0c1c1b1 : Not to switch active device when broadcasting
005a09a55c : flags: Add rpa_offload_to_bt_controller
2c103fba57 : Test: Update tests related to bond check from Profiles
d9de53d443 : Fix authentication bypass bug in SMP
ee51549ccb : BluetoothMetrics: Log LE ACL Completion Event
f54c6915f8 : SystemServer: keep gatt apps when scan disallowed
ed6bf8744a : Add ble_scan_setting_does_not_disconnect_if_bt_on
85c560e561 : system/bta: Apply clang-tidy fixes
eeaa468907 : Remove timeout based verifies for MapClientStateMachineTest
b255b19d7d : OWNERS_automotive: Replace user
ff16a0d1e5 : update the API about the channel sounding security levels
1fdde1b318 : Remove the channel sounding duration by default.
b039eb97a6 : Create a Utils class to hold common test functions
203ea95c12 : Add flag to not to update sec flags on csrk save
8c208c8bc6 : avdt: enable verbose logging for scb and ccb
46678c1d40 : avdt: simplify searching for ccb and scb events
6dbed84e72 : bta_av: remove btif_ extern declarations
35b899d6a7 : bta_av: Remove redundant function call
f42a4ac5ce : bta_av: Remove redundant API to SM event parser
444b1a9f1a : bass_client: Imitate receive state change when source removed
4ee5d7f1b4 : flags: add leaudio_monitor_unicast_source_when_managed_by_broadcast_delegator
6a70439d07 : Uprev Floss linux build env to Debian 13 (trixie) for C++20 support
c8e59e7b2d : broadcaster: Confirm stream resume after BIG create
2efa8e91ab : flags: avdt_handle_suspend_cfm_bad_state
26320ba611 : Bluetooth: Add edge-to-edge enforcement
0926c0a75b : Check rcb is released
55eaa78570 : ActiveDeviceManager: Factorize logic around LE Audio active device
0ba75ae1ea : Improve Bluetooth logging to include all remote devices interacting with Bluetooth
0868a9adc3 : OWNER update
f2116e8ab9 : Remove channel_sounding flag
8d07b99b9b : Add flag opp_set_insets_for_edge_to_edge
91f88c8cf3 : Fix floss build breakage "bt_transport.h" not found
0cd18e1fb7 : Flag to disconnect link when encryption fails
61b17802a2 : Update A2DP_SESSION_REPORTED metric with the metric_id field
8c9ba18264 : [BluetoothMetrics] Create new bloom filter for medical device identification
625606b775 : Flag to prevent new service connections after removeBond request
10d5cc121a : Remove pending HID connection when removing the device
d5a6dabce8 : Adapter: Reduce MainLooper load when connecting profiles
7711b78b86 : Fix AVRC RemoveRecord
cfad38567a : Move OBEX related objects to an OBEX folder and change name scheme
128e72046b : Rename AccountManager Account/Authentication Service and resources
a2281d30b2 : btif_av: Clear the pending start flag before restarting the audio session
83aeaf27ef : Create PbapClientBinder
185d7fbecc : Bass: Modify source on duplicate addition
2b142401e4 : Add flag initial_conn_params_p1
2199b2d6bd : Update db version for microphone
215c0198e2 : Remove 'system/types' from include_dirs
608555743a : AICS: expose implementation in framework
8d981af45b : AICS: getType implementation
ed59a230a7 : AICS: Gain Setting Properties implementation
70d537cd88 : AICS: Audio input control point expose up to aidl
fc1829b371 : AICS: getStatus/onStatusChanged implementation
969d55e0a6 : BluetoothMetrics: Adding logging for NONE and BONDED in the java layer
5fde1d6ee5 : [le audio] Fix removing broadcast source when PA is still synced
922a27435a : Update db version for microphone
02a268ceac : Checking if the channel sounding is supported by controller.
7d9992990d : Add flag a2dp_clear_pending_start_on_session_restart
3772b6a0ec : Update db version for microphone
77153ce385 : system/stack: Apply clang-tidy fixes
91ae45af1b : Add BQRv6 parameter parser for upload to framework
9e6b51eb16 : floss: A2DP: Avoid use-after-free of UIPC
dc2013f86f : Import translations. DO NOT MERGE ANYWHERE
e941f67cbf : sockets: Minor code cleanup and logging
4d680b1ce8 : Flag to allow reconnection from HOGP device
da5f3ac058 : RfcommTest: ConnectToServer and Disconnect
249755d66b : Remove 'system/device/include' from include_dirs
55ae2902f9 : cpplint: fix and enable readability/check
3ef7afd13f : cpplint: fix whitespace/comma
0c2d0cd5d3 : cpplint: fix and enable whitespace/blank_line
4607506725 : cpplint: fix and enable whitespace/newline
0db4c18431 : btm_io_capabilities_rsp: remove unnecessary null check
d7f88b60dd : Add new metadata APIs get/set for is_microphone_for_call_enabled
ab3212fcba : AICS: description set/get/onChange implementation
8b1e462f0a : AICS: pre-API definition and intdef
2e76b5f3bb : AICS: callback registration
88ba691416 : AICS: cleanup and preliminary changes
66fbafba10 : Update L2CAP socket comment
b15a075127 : topshim: a2dp: Avoid leaking the reference of a stack object
739fdb75ca : Flag to remove pending BTA HH connection instance on bond removal
6bc1005efe : RootCanal: Update to Core version 6.0
faa7a602f2 : sdp_db.cc: reduce indentation
af3a017be7 : Add socket settings interface
df850d2a66 : cpplint: fix and enable whitespace/ending_newline
62c77931a4 : HAP: offer a way to disable leaudio on some device
9c3ca59cc0 : LeAudio: Fix buffer size type conversion for the SW encoder
f192f5ebe2 : Improve key_missing_classic_device handling
1a1cfcd064 : le_audio: Change dialer sounds to SOUNDEFFECTS instead of MEDIA
903152615d : has_client.cc: cleanup device when GATT operations may be pending
6dea2095c6 : Store active player to avoid it being deleted by other threads
a4a8fd198f : floss: Fix 'android-base/parseint.h' file not found
2145924a1e : audio_hal_interface: Refine implementation of A2DP StopRequest
e0553fe41d : stack/a2dp: Inline A2DP_VendorXXX methods
c7edf4446d : leaudio: Add reconfiguration guard
cb111410a6 : leaudio: Minor fix on parameter name
ccf2dde724 : leaudio: Fix switching from Live to Game context
02482f38d4 : leaudio: Always start VBC timeout on Local Sink Suspend
4e7ccecce5 : AdapterService: Remove A2DP device from active devices when switching to LeAudio
f6954e8746 : Fix some VisibleForTesting usage problem in Metadata
573b2d3074 : [le audio] API broadcast to unicast fallback group changed callback
cc4b7636e0 : Do not update device properties of already bonded devices
bfefb319ef : CS: Use half channels and increase max_procedure_counter when interval less than 1s
e645383ea1 : HCI: Prevent usage of deleted instance
288279c59d : BQR: Do not enqueue HCI command binded to "null" callback instance.
99c133635c : Supports back to back concurrent measurement.
aecd844106 : VolumeControl: Fix restoring earbuds volume
72d193930b : ActiveDeviceManager: Fix A2DP or HFP Active device changes to null
64d465969a : leaudio: Fix removing CISes from the local storage
0ae282244f : Import translations. DO NOT MERGE ANYWHERE
5dbc960daf : Import translations. DO NOT MERGE ANYWHERE
6c8f475e61 : system/stack/btu: Fix -Wmissing-prototype errors
9c1965c4ee : blueberry/tests: Remove sl4a_sl4a and gd_sl4a tests
00555617dd : Fix more memory-unsafe logging
f20739b3d6 : btif_av: Move callback declarations from hardware/bt_av.h to btif_av.h
9f917bc946 : BluetoothMetrics: Mapping all the error codes to State enum variants
ab0c3a5153 : BluetoothMetrics: Adding the transport level breakdown for pairing
4510904b18 : BluetoothMetrics: Add BONDED state transition for pairing
f661e2e519 : BluetoothAdapter: cleanup
b50dc7c997 : HCI Vendor specific hook to GD/HAL
8b0522f37f : GD/HCI: Add fallback handler for vendor specific event
444208a0f5 : GD/HCI: Allow return of Command Status or Complete, without preconditioning
0dc741f8e2 : Framework implementation for HCI Vendor Specific Handling
694885b2cc : stack/a2dp: Implement unit test for bluetooth::a2dp::ParseCodecId
f243c3dbd9 : stack/a2dp: Cleanup the definition of a2dp codec IDs
4d2814783a : VolumeControl: Fix canceling opportunistic connect
da8c535b3c : AICS: onExtAudioInGainPropsChanged: skip serialize
5c4b8b1bef : AICS: spec rename gainValue -> gainSetting
48410cf0bd : AICS: onExtAudioInDescriptionChanged: skip serialize
57679d5356 : VCP: Split nativeInterface from nativeCallback
cfe847b827 : AICS: onExtAudioInTypeChanged: skip serialize
d1a4aa17b5 : AICS: onExtAudioInStatusChanged: skip serialize
1e71c96295 : AICS: onExtAudioInStateChanged: skip serialize
944a9dcac3 : stack::avrc Check alarm state before setting
8f53a23f3c : Replace libchrome StringPrintf in system/bta/test/ files
ac31113456 : Update owners file
de941d1f88 : Lint as error: GetterSetterNullability
04942915a0 : Obfuscate address logging in GattService and a few other files
c28ac6676c : Log attribution tag for last scans in dumpsys.
5a288596ab : AICS: Add native aics module
b598112fa3 : Add flag identity_address_type_api
693806af40 : system/btif: clang-tidy fixes
605bc9acad : osi: Use CLOCK_MONOTONIC on platforms that can't wake on BT
29cb8d036b : Obfuscate address in AdapterProperties dump
72a8ae06a9 : Fix state machine stuck after disconnection
fa5d5f10db : OPP: check content URI permissions of sending application on V+
90a75ade1f : [le audio] Add new APIs to get assistant local metadata for source
0861bbd1b8 : avatar: Add codec reconfiguration test
27948a5105 : avatar: Add AVDTP collision simulation test
1409d576b4 : SnoopLog: make sure mode is initialized
8af16b8933 : Add a flag to avoid updating bonded devices properties from advertisements and inquiry results
c264594ae3 : stack::avct Null out transmit queue pointer after free
44c46b107f : Prioritize strict address match over RPA resolution
948293c419 : flags: add leaudio_stop_updated_to_not_available_context_stream
323f226498 : floss: prefer SBC over AAC as default
eaa2abd938 : Fix setter being skipped over / ignored for field `mIdentityAddress` in RemoteDevices.java
9b743496ae : Add a flag to fix le_impl stuck in pausing state
f73d35aabe : stack::avct cleanup Remove link and browse functions
066933a998 : stack::avct Null out transmit queue pointer after free
2fa1c35c4b : Do not report read remote feature if it failed
ddc94c30f5 : Use LE as default transport for GATT
c7ef13e6a0 : Disable Secure Connections Only mode support in ICS
56d03852f4 : Disconnect when remote key is missing
c58264b1ab : AICS: use aidl definition in java
eb64c56bb9 : AICS: add aidl definition
d2d8fa18f1 : Adapter: re-order init to prevent null exception
0ea25cb03d : Add function to get the HAL version.
86084a6324 : Delete the procedure data correctly.
9fd6f2a721 : CS: Allow partial complete procedure for ranging
0899441766 : floss: mgmt: Wait until btadapterd completely stopped before restart
2677bba3c1 : Fix missing fallback to LE Audio when HFP removed active device
b00fba0c7e : floss: mgmt: Don't restart Floss if the index doesn't present
1c2e1e43e6 : Add flag: sec_disconnect_on_le_key_missing
01852c3433 : Update baseline
91e0391c5e : Trigger security procedure on l2cap connection request from remote
cf71148256 : [MTK] [CONVERGENCE] [ALPS07745105] Fix SWT when switch codec
773d3bfba2 : For `scan_manager_refactor`, wait for the completion of `start_up_rust_module_async`.
9f97e7bc51 : AVRCP: Fix SET_ABSOLUTE_VOLUME label leak
f1accdda32 : CS: Update default value of SyncAntennaSelection
98c733802b : Handle active device when get the audio policy value from remote
2e0d34a015 : Add missing return in GetMediaPlayerListResponse
9edacde018 : Pandora: add @match_description for IUT_SEND_WRITE_REQUEST in VCP
a9ba7df789 : BQR: Add Bluetooth Quality Report v7 feature - Controller health Monitor - Add Event Bitmask for health monitor stats.
262d00f71f : Import translations. DO NOT MERGE ANYWHERE
d62bf15de2 : Fix incorrect logging in sdp_server
7f74d44ceb : Fix incorrect logging in sdp_discovery
9237a1d09b : SystemServer: remove avoid_static_loading_of_native
b0fa765e3d : SystemServer: remove fast_bind_to_app
ca3326e752 : system/stack/acl: Fix -Wmissing-prototype errors
772b68e868 : HapClientService: use looper in test
ffb0d7f0dd : AICS: Check mute value and update to int type
03544e4c69 : AICS: Simplify VolumeControlInputDescriptor usage
06c98fdca7 : Remove flag "randomize_device_level_media_ids"
504ee53834 : Aics: shift ids to start at 0
4038e37aee : VCP: silence gmock warning in test
28286ee9f8 : system/stack/smp: Fix -Wmissing-prototype errors
ace9211172 : Flags: Add PBAP Client storage and caching flags
83fb60f6c8 : system/stack/gap: Fix -Wmissing-prototype errors
874c27d16f : Pass appropriate types to base::StringPrintf
fdd49146e5 : system/stack/gatt: Fix -Wmissing-prototype errors
f35fa03659 : le_audio: Make allowed context test more strict
0b4e318664 : leaudio: Fix setting disconnecting state for ASEs
9cc8e7c2ed : Remove gatt_fix_device_busy flag
513dcb1143 : Add flag: smp_state_machine_stuck_after_disconnection_fix
748e0f7b69 : AvrcpTargetService: merge "start" into constructor
d55576b2a6 : Framework VCP: use wrapper for readability
fc8e680af7 : A2dpSinkService: merge "start" into constructor
e63ab5d3b3 : Bumble Java Pairing Test cases
4770781981 : CS: Parsing the CS procedure data of mode 3
0bfbee16ba : CS: Parsing the CS procedure data of mode 1
15ea8e7239 : Adapter: prevent race condition around sServiceLock
66c5ff4998 : Adding profile breakdown when logging profile connection success/fail
dcaace5e6f : Remove suppress lint from Socket Apis
663d3b5e41 : gd: Move mgmt files from gd/hal to gd/os
bd318db3cd : stack::avct cleanup Remove link and browse functions
b22e0087fb : Flag for using LE as default GATT transport
9663f3e1a7 : Flag to trigger security procedure on incoming access req
8f33f462da : Remove libbluetooth-gdx
3f578a3fdb : LeAudio: Add names to audio set configurations from AIDL
615fdc1277 : LeAudio: Improve the debug state dumps and logs
29b039b990 : Floss: Support multiple volume callbacks
bdb4351b73 : Replace libchrome HexStringToInt with android-base's one
8426c3937e : Add missing reset for expected address rotation time
73076fd51e : system/stack/sdp: Fix -Wmissing-prototype errors
152d20e6e2 : API change to support additional remote device metadata fields
a39ff179e7 : A2dpService: merge "start" into constructor
78d2c3edc9 : Metadata: broadcast from main thread and improve logging
2267a58317 : leaudio: Non functional change
42ec73bc7d : CsipSetCoordinatorService: Fix possible null ptr dereference
3149298596 : Add avrcp_16_default flag
1922f13d30 : Remove obsolete GD style guide
afd2c8bfaa : SystemServer: reduce callback visibility
2dda064bf5 : Framework HAP: use wrapper for readability
b1ba026ac6 : Framework: utility method to shorten service calls
64581a6487 : BluetoothAdapter: throw when system server is dead
e100ef70d4 : system/stack/l2cap: Fix -Wmissing-prototype errors
031daf24d3 : btif_a2dp_source: Schedule internal calls to bt_main_thread
9a1379eac7 : Factorize system/packet/avrcp/tests/fuzzers/Android.bp
9e8859a800 : Jarjar the flag in bluetooth modules
0cbc81391d : Remove public visibility on BluetoothPacketSources
e2a5d393ee : Remove unused library system/profile/sdp
ae595fb449 : system/stack/avrc: Fix -Wmissing-prototype errors
de9adba3b8 : Remove public visibility on BluetoothGeneratedDumpsysTestData_h
8fb0b98796 : BumbleBluetoothTests: Add custom annotations to skip physical tests
36c8630f55 : SystemServer: Airplane change when FactoryReset
1a279f979f : Remove un-necessary post
41e77e909c : Untangled RESTORE_USER_SETTING message
751675a122 : remove redundant conflict checking
d9e44ae2e7 : CS: Get the Antenna Configuration Index correctly
3b9b4bf0c6 : Add flag le_scan_remove_non_oneway_binder_calls
f65692fc66 : Removed startAutoDispatch and stopAutoDispatch
708e9b1661 : Add a condition to check if curr_song_id is "Not provided" when there is no media selected for playing.
188a23d190 : ACL: Switch to GD thread to prevent race condition of accessing map
e13ca783ee : stack::avct: Add net_test_stack_avctp
d491e96522 : stack::avct::avct_l2c_connect_ind_cback Fix warning log
0f5ceefd87 : Declare opp_check_content_uri_permissions flag
c2d0aa2c33 : Adding the state transition for BOND_NONE state in the native layer
a9e06b35ba : Fix test mapping for net_test_conn_multiplexing
3210dbfcbb : Revert^2 "Make connection_manager dumpsys more concise"
fc46160cd9 : Revert "Revert "Make it obvious that connection_manager is separ..."
90fec2b308 : flags: leaudio_broadcast_api_manage_primary_group
d418570b60 : flags: leaudio_broadcast_api_get_local_metadata
39fbe60367 : flags: leaudio_broadcast_primary_group_selection
55d15d0065 : CS: Set tone_antenna_config_selection based on property
a5468e2c02 : stack::avct: Add dumpsys
55b05336b5 : Hcidoc: Support multiple concurrent connections with the same address
882559bd4a : LeAudio: Fix redundant AIDL call with software encoding
028c2458f8 : Bass: Handle sync info request in all cases
a0fbb0ea65 : Remove lock from FetchAudioProvider.
68dd146967 : Revert "Make it obvious that connection_manager is separate from..."
2b3c47056b : Revert "Make connection_manager dumpsys more concise"
6ee1a2f85b : CS: Retry when config creation fails
a3eefca823 : broadcast: Remove redundant broadcast start
5cd9c5e868 : Errorprone enforce NullablePrimitive NullableVoid
3a0ab1a200 : Adapter: static cache for getState
d0e8bb0d1f : Fix empty data mask detection
7e55631009 : btif_a2dp_source: Remove sanity checks from timer handler
a4391a5f6a : Add flag a2dp_source_threading_fix
50fbe743a9 : AdapterService: Don't check in teardown
4f585543cf : btif_a2dp_source: Trace offload a2dp sessions
558a9cc698 : stack::avct: Various cleanup
636011df0e : RemoteDevice: Fix for race condition on adding device properties
114d45c0a5 : flags: Add fix_add_device_properties
b309d99f3f : Hcidoc: Use hci_packets.pdl in rootcanal
d86585a62f : RootCanal: Add Authenticated Payload Timeout Expired event
9325c37930 : Hcidoc: Add disconnection initiator information
86db5fe0e5 : Hcidoc: Print cid info
2b126d8eb7 : Fix regression due to default bypass change in IpcDataCache.
43ab3cae2c : pandora: add support to VCP tests
41e81e10b5 : VCS: check group only for group set in `setDeviceVolume`
7f23913c4f : ActiveDeviceManager: Fix handling disconnect of a streaming set member
84a18be486 : floss: Refactor BAS connection/disconnection
eda47ee217 : Remove dependencies on the 1-variant fallback
3cfe3789b9 : btif_a2dp_source: Suppress underflow warning after stream is stopped
023cccba4d : errorprone: activate EnumOrdinal
ae49fae55e : Make connection_manager dumpsys more concise
5df81d0a78 : add the flag declaration in the framework
28b3e48412 : Make it obvious that connection_manager is separate from GATT
2def12c612 : BluetoothLeAudio: Use new API flag for API changes on Mono errata
14cff6158b : stack::avct: Associate browse l2cap channel with active link
4210130a2f : Define flag support_remote_device_metadata for remote device metadata API changes
c1f6ac4f71 : Remove cleanup_le_only_device_type
9e813d4481 : IsoManager: Fix sequence numbers for bidirectional CISes
588c7b6a81 : Bass: Do not notify about source lost immediately
100d467357 : Fix Bluetooth popup when sharing files
81fade894b : Flags: Sort hap flags
bed62da40c : errorprone: Discard InlineMeSuggester
50c0da0cd2 : android/app/jni: clang-tidy fixes
31dba778fc : Add bluetooth_tidy default
44f7d82e9a : Flag to have socket setting interface
1f6e690e71 : Add flag btsec_le_oob_pairing
09f669415a : Revert "CS: Update PDL based on core spec 6.0"
4a2fbefd08 : Csip: Improve multithreaded access to state machines
27175c3ab9 : l2cap: Workaround for PTS issue
c9b05fc53c : Remove le_scan_fix_remote_exception flag
9db247bde5 : Add assumption to test
813b6f2847 : l2cap: Upper tester implementation
14d18d5354 : CS: Update PDL based on core spec 6.0
5b5399e6f0 : CS: Update default parameters
c1571d8634 : VolumeControl: rm impossible null handler
aa078cb504 : Remove metadata_api_inactive_audio_device_upon_connection flag
594ba09fa1 : Allow cts/car to use the cts-bluetooth utils lib
34019dd83d : flags: Add adm_fix_disconnect_of_set_member
0f4dffa48f : Implement MSFT extension logic in ScanManager
32ef60b9fe : hh: Don't print unhandled event error on data output callback
8cc34b9fe8 : Floss: Don't disconnect unbonded HID when disconnected or disconnecting
ef6c5b5592 : Floss: Increase the manager timeout to 15 seconds
f8220c540f : bluetooth: acquire wakelock when connection is complete
bd308ffbf9 : Ensure UpdateNotificationThread being terminated
690e731348 : floss: SCO: Avoid use after free
cd3fd0c799 : Initialize member variables to avoid using uninitialized values
819eb2abb7 : [le audio] Remove external source without local metadata
f2133c577e : Fix a2dp state machine java crash
aeadcdbc33 : Remove os/log.h
e19aedc4c7 : VolumeControl: Clean api annotation
b5b954bb05 : VolumeControl: unify requireNonNull usage
c4aa78dbef : HearingAid profile: rm impossible null handler
50ed2fa5b0 : Remove leaudio_broadcast_volume_control_for_connected_devices flag
8f824a3eea : Remove //vendor external visibility on libbluetooth_hci_pdl
83597fe168 : leaudio: Fix race when switching between Sink and Source stream
17bfc0e9e4 : test/fake_osi: Improve fake alarms managing
cf4bfc9dfe : leaudio: Minor cleanup on suspent_timeout_
9c855f57e1 : Uses states to manage the measurement conflicting finely.
192cbdb6fc : has_client.cc: fix possible null pointer dereference in `Disconnect`
d87f7e1f50 : Clear storage on bluetooth factory reset
ea30605b2d : Continue removing unused DIS server from srvc_eng
9999a7d9f2 : Framework: unify callback execution
392e1aff1d : btif_a2dp_source: Run a2dp::set_audio_low_latency_mode_allowed on bt_a2dp_source_worker_thread
fc5f60773b : Flags 24Q3: Remove a2dp_async_allow_low_latency
fa2e4e26a9 : Battery Profile: rm impossible null handler
0a87c51352 : Utils: remove obsolete & not used method
c60e011aef : Fix wrong parameter order of btm_send_hci_set_scan_params
dc039dbd7d : Flags: Creation of flag for HCI Vendor Specific Extension
cb076efb99 : Ignore unrelated cancel bond
f8e9c6ad92 : Fix NPE
21f04ce776 : Limit tGATT_IF range
c1ad6fd065 : btif_sock_rfc: Cancel ongoing SDP on socket closed
a15f4f1404 : Gracefully handle IBluetoothAudioProviderFactory::openProvider failure
39037b8d6f : btif_a2dp_source: Remove unused APIs
f572ba83bb : Add flag to set LE data length to max on le coc connection
a0be4fd84f : l2c_link.cc: Remove unnecessary log
da7a2ee019 : Remove leaudio_broadcast_monitor_source_sync_status flag
ad9b228be7 : Simplify the onStartFail, keep java layer check only.
56c6d9afc1 : Add comments to interop database
5d02908733 : Add flag associate_browse_l2cap_request_with_active_control_channel
0d60b024d6 : floss: Add on_device_connection_failed callback
aaa616069f : has_client.cc: fix single device disconnection
ab969251ea : supports back 2 back measurement
d2cbc3cd76 : leaudio: Fix clearing watchdog
6126cbf48b : leaudio: Fix switching between LeAudio headsets
67df3b4f02 : Flag: com.android.bluetooth.flags.ignore_unrelated_cancel_bond
597b95bf55 : btif: Improve ACL connection failure reporting
98f5156691 : btif: fix btif_dm lint error
d41a2fac42 : stack: add support to convert HCI error codes into bt_status
38cd00772f : Add flag rfcomm_cancel_ongoing_sdp_on_close
eed2e6b623 : system/stack/avdt: Fix potential null dereference
7a99de96c8 : Hap: add relevant preset test
eeea0338a6 : HAP avatar test: Delete preset and check notify
b10bac9057 : RootCanal: Implement the command LE Read Local P-256 Public Key
cee50eb69e : MetricsLogger: fix pbap connection metrics
7beac6cb72 : Resolve incomplete fix for SMP authentication bypass
b2b618d97c : MetricsLogger: sort connection change logs
cd5641a199 : Uniformize times(1) mockito usage
918489e2ee : MetricsLogger: listen to connection change
483729b981 : AdapterProperties use correct looper
eb3ad027cd : Flag: add adapter_properties_looper
04410c9d37 : Implement a APCF->MSFT converter utility
1cb695c659 : MapbMessage: Apply pattern matching
9ada0f5e1b : PbapVcard: Apply pattern matching
d5f1667ee7 : interop: force a MTU exchange for certain keyboards
6ddf7c24cf : interop: extended truncated report map for specific device
34e6e38281 : Make SetBrowsedPlayer return current path and correct item count.
ec8920ad43 : Modified testBondLe_Reconnect test case
52a3a5bdd2 : Add flag gatt_clear_cache_on_factory_reset
06e4b602af : hh: Forward the GET/SET REPORT failures to UHID
ee26c37de3 : floss: GATT: Make sure OnConnectionUpdated is sent when ConnectionParameterUpdate
758a564ad0 : CS: Return confidence level
663489c837 : acl_create_le_connection_with_id -> create_le_connection
c8db5d6bd1 : handle the procedure enable command error
a0d13ee8ca : system/bta/pan: Enforce -Wmissing-prototype
bd66c8ea55 : RfcommTest: Fix server closure
76b0f18ffe : system/bta/{ag,ar}: Enforce -Wmissing-prototypes
e49a1e988c : Refactor - Use assigned config_id in CS commands
e4b3fa39ca : asrc: Fix logging on negative deviation
ca6865b07c : system/btif: Enforce -Wmissing-prototypes
7f91529ea0 : Fwk Lint: enforce as error
5e364b4532 : Fwk: Address most of AndroidLint WrongConstant
58a0fb3301 : Fwk: Lint fix SupportAnnotationUsage
406fc4344c : Fwk: Update lint-baseline with current failures
3298309e32 : system/btif: Enforce -Wmissing-prototypes
f4cbadecb9 : Allow SCN 30 to be freed
eca2c8eae5 : New Flag: allow_free_last_scn
7560a5281f : Broadcast active device only when Audio Fwk update AudioDevices
268ed8d577 : sdp: Use ERTM for PTS testing
44d9c45883 : l2cap: Fix using FCS field in S/I frames
7ac301122a : l2cap: Store remote FCS Configuration options
1f1282e043 : l2cap: Fix setting FCS Option
9007fa407e : flags: add metadata_api_microphone_for_call_enabled
741cf43085 : Remove the unused procedure counter
8916949bbd : Replace fmt::throw_format_error with fmt::report_error
ea4a80945b : flags: Add leaudio_mono_location_errata_api
ac3c7e64ec : Add lock_guard to FetchAudioProvider.
c2627e7ba4 : Floss: Ignore the timeout-ed HCI command instead of crash
bb17446bfc : Include attribution tag if available in dumpsys for LE clients.
ebac27c66c : Fix typo in A2dpService.java
f4d4928b10 : system/btif: Enforce -Wmissing-prototypes
6a426fda1f : GetNameAndAddress: run on proper thread
351dc1df5a : btif_a2dp_source: Inline the call to btif_a2dp_setup_codec
8b65d260cd : Print the AttributionSource that permission was checked for
b7fa19733a : AndroidLint: resolve ExtraTranslation strings lint
a314bc9369 : add Gmap Server
d7e55cf997 : Remove some unnecessary uses of loghex
6012433653 : Fix type confusion in avdt_msg.cc
673af20bd9 : TypographyEllipsis: replace unicode with xml escape
248e811836 : Remove unused resources
0899608a16 : Add flag com.android.bluetooth.flag.btsec_cycle_irks
29af1076a9 : HAL: Add mgmt via soong config to enable MSFT Extension
c4b7f13968 : Remove restricted devices only in non-restricted mode
5f53e096e7 : add Gatt Server Mock
b394a36fdd : add GMAP client
9c2c6aebb9 : RAS: Add timeout alarm for ranging data
60877998a4 : codec_manager: Feed offloader with Mono configuration
06ef2a7a7f : LeAudioService: Add groupId to the LeAudioGroupDescriptor
f9b122eef3 : flags: Add l2cap_fcs_option_fix
91b06909b3 : has_client.cc: Connect shall connect only requested device
026ff5a8f2 : ActiveDeviceManager: stop audio when switched between ASHA and LE Audio
d657641b85 : Save user ID in restricted mode
90e0648e63 : Fix javadoc reference to BluetoothDevice#getAddressType() constants.
cf9db39326 : SystemServer: add logging on some binder call
039d579a50 : Add MSFT extension support to JNI layer
e07d0a2281 : Test cases for removeBond() API
f215738226 : Revert "Bumble Java Android Headtracker Test Infra Changes"
fe984d0365 : Revert "Modified HidHostTest as per new hid rpc call"
5a0e942480 : Revert "Modified HidHostDualModeTest as per new hid rpc call"
ea4834880f : Revert "Modified LeAudioServiceDiscoveryTest"
ebd218b036 : mmi2grpc: run isort and format on mmi2grpc
8adbe408d3 : use_unified_connection_manager removal
9a5b6b8dab : Remove acl_create_le_connection
a2e3a18730 : Use default value instead of override for acl_create_le_connection_with_id
80ff0317b1 : system/btif: Remove unused parameters
88d714b940 : system/bta: Remove no-op method bta_ar_avdt_conn
ffe765cffc : system/btif: Remove unused method MediaCallbacks::SendActiveDeviceChanged
b58cae3d8c : Makes the ras code run in the BT main thread.
45242a22cb : mmi2grpc: remove typo in license
2afc798522 : Add flag to avoid waiting for lea discovery when there is no le acl
ee61f7e64a : floss: Mark controller as broken if we got INVALID_FD in Start()
63df5cecb5 : Handle error response for GetTotalNumberOfItems
8536b37cee : Fix build with fmtlib 11.0.2
0f17790af4 : floss: Rename DelayedActions to AdapterActions
c53f58b539 : VolumeControl: remove VDBG & DBG & log & isEnabled
97da86f8f6 : VolumeControl: Unify RequiresPermission
68fffd415e : Flag: add aics_api
26f3c1d2c9 : port_api: Remove function names from logs
94e3b26ef6 : Pandora: Enable Bumble config modification at runtime
481dfa2615 : Fix background connection
cce49d0225 : use shim api directly
5a096122eb : Supports start CS distance measurement from peripheral
1a5e5a6150 : Fix stopVoiceRecognition logging typo
5701b75f35 : Rename tPORT_STATE to PortSettings
0e25714253 : Lint: enforce as error
4acdbf6c0e : Add flag com.android.bluetooth.flag.btsec_avdt_msg_ind_type_confusion
a0950e32a2 : BUILD.gn: Remove cflag -Wno-unused-parameter
061242e393 : system: Enforce -Wunused-parameter
bf1b33b39d : system/osi/test/fuzzers: Enforce -Wunused-parameter
7004a39f26 : system/audio_bluetooth_hw/Android.bp: Enforce -Wunused-parameter
1e4b6842e8 : system/stack/test/fuzzers: Enforce -Wunused-parameter
f097560a19 : system/bta/Android.bp: Enforce -Wunused-parameter
68290871e2 : system/profile/avrcp/Android.bp: Enforce -Wunused-parameter
8b5c129cc5 : system/device/Android.bp: Enforce -Wunused-parameter
d245e0fbd9 : system/audio_hal_interface/Android.bp: Enforce -Wunused-parameter
f34fcc41bf : system/osi/Android.bp: Enforce -Wunused-parameter
84e3abe2a7 : system/stack/Android.bp: Enforce -Wunused-parameter
87eff145fa : system/gd/Android.bp: Enforce -Wunused-parameter
3883174862 : system/btif/Android.bp: Enforce -Wunused-parameter
5ffa202285 : leaudio: Do not update local metadata if not needed
e8218e30ac : [le audio] use broadcastId instead of Sid to check for local broadcast
1bd6a8d4a6 : floss: Remove the wrong "#[allow(unused)]"
8a3969cf16 : Floss: Fix Rust bluetooth logging on initialization
abb6c474d1 : Floss: Temporary skip setting log level for tags
e329332b72 : floss: fix some rust warnings
b08f8c0e27 : Bass: Broadcast resync helper, simplified sink pause detection
bc363fdd83 : Bass: Broadcast resync helper, out of range case
e9812f1b9a : btm_acl: Only allow automatic flush if packet boundary is supported
23a3c64829 : Properly inject mediaPlayerList in test
e590aafa0f : Properly escape % in xml
ffd365e6c6 : Update lint-baseline with current failures
771aed3ea2 : Reset permissions for not bonded device
55e3304a4e : [le audio] Get Broadcast quality based on selected output codec config
585fecf2ae : GetName/SetName: Broadcast intent from server
bafbd7187a : Fix crash in l2cap-fuzzer
fe9a28c65a : Initialize ras only if the controller supports channel sounding.
c84965b43b : PandoraServer: Implement L2cap#WaitDisconnection
a7e1ae52f6 : PandoraServer: Implement L2cap#disconnect
cc7d9dd01b : Revert "Msft: Remove dead code from android target"
4aab5e4081 : Hap: various cleanup in avatar
5145e1af3d : Hap: prevent setactive on unknown preset
0841307823 : fix code error for sending ras data
8189e07cc0 : Update PDL for the CS HCI according to the released BT core 6.0
dc54e0c5d2 : Enable hfp offload if hfp software path is not enabled
c34c81ace1 : BluetoothPbapActivityTest: Ignore exception from ActivityScenario.close
04f93fc909 : HeadsetClient Test: help formatter to be pretty
cabed1be6d : HeadsetClient Test: sendMessage process it
f1331c6514 : HeadsetClient Test: uniform mocking with doReturn
ae5728ce87 : HeadsetClient Test: remove flaky test
5dd4ce74bc : HeadsetClient Test: import BluetoothProfile
8ca8a52f79 : HeadsetClient Test: use test looper
99527fc62e : Interop fix to not read PPCP for devices that report incompatible values.
2ba6d51203 : Hap: set available on unknown preset no crash
108aa4d79e : Match package prefix instead of complete package name
6f58417c65 : Skip PbapServiceTest if feature is not supported
2fc3087b9a : Fix OOB writes in gatt_sr.cc
88853e0038 : Remove unused parameters from bt_interface_t::init
489fd50971 : floss: dbus_projection: Allow other attributes on top of dbus_method
2be3cadac9 : BassClientService: Sort scan results for source sync by fails counter
649e521c14 : floss: Re-enable ReadFailedContactCounter
d4f7a4e89f : HeadsetClient Test: force intent order
3e5d1e5111 : HeadsetClient Test: remove clearInvocation
c9164e9e07 : Add UnicastGameGatewayCharacteristicUuid
4eaac642cc : HeadsetClient Test: Truth & intent matcher
e0198166c1 : Fix OOB writes in gatt_sr.cc
23d827db63 : [le audio] Resync device to its peer active broadcast source
4cb73b3dfa : Add @okamil as project owner
3166e689c0 : Pandora: mutualize pyright files
ec44ba1db5 : Check in_use to prevent accessing deallocated cb
7d005d3245 : Convert leftover ALOGx macros in HFP JNI
79d5749512 : Fix a cpplint warning in LeAudioDevice constructor.
34dd83f45b : Add trace points for HCI packets
f0cba7267e : Remove //vendor:__subpackages__ from libbluetooth visibility
2a8fc4d13f : Pandora: remove experimental l2cap.proto
f32474f507 : PandoraServer: Implement new L2CAP interface
71d1f5de24 : Bass: Fix removing pending source operation by timeout
5ee00f2267 : SystemServer: parametrized test for current flags
3e0031ec65 : SystemServer: remove dead code in proxy
9369d1cabb : system/btif: Fix -Wmissing-prototypes errors
0c4f06b748 : Inform AudioManager when hfp is disconnected
144c8fa138 : Move adapter flag outside of system_service
eb065ba6ca : system/btif: Fix -Wmissing-prototypes errors
145b8b9082 : Add copyright and fix cpplint error
5bcfd34782 : RESTRICT AUTOMERGE backport "opp: validate that content uri belongs to current user"
2c5add83a1 : RESTRICT AUTOMERGE backport "opp: validate that content uri belongs to current user"
ab4312c862 : Fix build failure on acceptBluetoothBinding
1cc68cfda6 : Remove the init_flags module
374b17a6ce : Use TRANSPORT_LE for unknown device type
3bac67043a : SystemServer: test race condition on enable
f908728261 : move profiles connection away from main thread
0e32c5644c : Add test for AdapterSuspend
603732828a : Flag: hap_connect_only_requested_device
018da51fc3 : BluetoothLeAudio: Deprecated location invalid
4ebc6a043c : Test case to reconnect after restart
9018f822f7 : audio_hal_interface: Remove unused definitions in duplicate client interfaces
011a4671e0 : audio_hal_interface/aidl/a2dp: Remove BluetoothAudioCtrlAck
34e8ce242f : SystemServer: Remove unnecessary synchronize
f1fbeb1f1a : SystemServer: prevent state change after crash
6e0be93a4d : audio_hal_interface: Split off A2DP AIDL bindings
eebd8b2df7 : Mark packages/modules/Bluetooth apps in updatable apexes with updatable: true
07a3c0b2db : RfcommTest: Document test steps
0d0173695d : system/btif: Fix -Wmissing-prototypes errors
e0d8076730 : Add flag get_profile_use_lock
c0d3a9510d : Pbap: Stop depending on static AdapterService
b81adc256c : Remove leaudio_broadcast_audio_handover_policies flag
8c49b368bc : Remove leaudio_broadcast_feature_support flag
f787f083ce : Add new APIs of channel sounding
74d136c5ca : add @Override annotation
5d45ae436f : Add flag to control snoop logger tracing
dffb079b53 : Remove sink_audio_policy_handover flag
7700e22ee8 : System property for changing minimum key size
86b8a0d329 : flags: leaudio_sort_scans_to_sync_by_fails
24ed4cfdd9 : hh: disconnect if UHID is not ready after 10 seconds
0dafa1102e : btif: Remove unused methods DeleteActivePeer
b21d88309a : audio_hal_interface: Pass a2dp_offload_enabled flag to bluetooth::audio::a2dp::init
a32525f02c : audio_hal_interface: Inject a2dp dependencies through BluetoothAudioPort
85cc8f60dc : Do not send write request if tcb is closed
7802b148b9 : system/audio_hal_interface: Fix -Wmissing-prototypes errors
831f837b2c : system/btif/src: Fix -Wmissing-prototypes errors
76cdc9533a : system/stack/avdt: Fix -Wmissing-prototypes errors
1faa8d4cf1 : system/bta/av: Fix -Wmissing-prototypes errors
3bf0f6d6b4 : fix for BT crash caused by factoryReset pandora API.
41ed374f07 : Remove leaudio_multiple_vocs_instances_api flag
639478bd40 : floss: Deprecated InitFlags
893fdced51 : BluetoothLeAudio: Add MONO location
9932173924 : TbsGatt: test null URI for incoming call
8db8e6777c : Fix ConcurrentModificationException in AdvertiseManager.mAdvertisers
85259a9930 : Fixes for errorprone update
4e41e7ca9d : Reducing Gatt_Connect overloads
cc53561d4e : Skip ATT Read Blob request for name discovery before MTU exchange
ea9b83648d : Add ATT MTU preference for Tesla app
e6c62c59bd : csis: Fix possible assert
92f3a12b0b : Encrypt LE link immediately on reconnection
e5e338afe3 : Encrypt LE link immediately on reconnection
7613d0c389 : Send preferred MTU from GattService to native GATT module
ff4b0d6de4 : Remove reset_after_collision flag
c75eee9561 : Modified LeAudioServiceDiscoveryTest
e6f0bbeb1e : Modified HidHostDualModeTest as per new hid rpc call
6cf15457e1 : Modified HidHostTest as per new hid rpc call
543477492b : Bumble Java Android Headtracker Test Infra Changes
95be152136 : Remove key_missing_broadcast
3c842c7236 : leaudio: Add support for TWS with 2 bidirectional channels
6c59ded78a : flag: adm_verify_active_fallback_device
05c80f7a77 : Log setting phone book permission
f44c37a35e : gatt: Fix invalid EATT disconnection
1dded8e033 : gatt: Fix function name
f20f383a89 : Print warn log when rotation happens outside expected time range
af93b43c99 : CS: Fix null pointer dereference when stop_distance_measurement
674384c6f9 : Floss: Allow set log level from btclient
efb56a2b06 : leaudio: Remove redundant context type copy
4a6cbce679 : leaudio: Improve LeAudio switch during phone call
84bfcf3737 : leaudio: Improve sending QoSConfigure when CIG is already created
8252528681 : leaudio: Fix data path removal when ACL disconnect event arrives first
fe9f47048c : Update test for Truth8 deprecation.
0619778c0b : Broadcaster: Fix test cases with leaudio_big_depends_on_audio_state
a57ec5d817 : Limit gatt clients per app
83707a2119 : Add new BleOnStateTest for Scan Manager Refactor changes
3720e100c7 : gd/shim: Cleanup dead code in dumpsys implementation
7e76a98a88 : remove early return to make aptx SWB consistent with and without isScoManagedByAudioEnabled
dd2779efc3 : leaudio: Move QoS Configured state to right place
86689dba85 : leaudio: Minor cleanup
68c0433422 : Wire Encryption Change event to Java
9a25064a88 : LeAudio: Fix reconnecting LeAudio when profile is disabled
b52e6b6f7f : Introduce a soong config for HFP HAL selection
4a74579e50 : flag: Add forward_get_set_report_failure_to_uhid
071d042793 : BluetoothMetrics: Logging bytes_field instead of 0 for cross layer metric
7c5c2fc99d : reset hfp_pending_cmd_ to handle retry logic
69af983939 : move Audio related API outside synchronize to avoid deadlock
a007dcd33e : use snapshot instead of synchronized to simplify logic
22f5e11b2e : Enforce -Wmissing-prototypes
bee0353c29 : Relax the check for the error code in bta_av_setconfig_rej
59f1cf2744 : leaudio: replace magic number
a5220ba2c4 : broadcaster: Update terminate BIG reason
31eca2cc62 : floss: Connect to newly discovered profiles if it is user initiated
4c16fc6dcc : hh: Don't send SET IDLE
e3a7961f07 : HfpClientInterface: allow uni-directional audio stream on SCO
da9cdc0bbd : HfpClientInterface: r/w logs to verbose level
a56cd18391 : HfpClientInterface: implement UpdateAudioConfigToHal
f1a650c770 : HfpClientInterface: always ack on |*StreamingRequest|
75313b0a45 : HfpClientInterface: fix decoder client suspend
db95abe4ab : StopWatchLegacy: fix OOB array access in |RecordLog|
66c401199d : bta_ag_sco: Init and start HFP SW encode/decode
fe13231458 : Flag: Add dont_send_hid_set_idle
b30c0ef521 : RAS: Use notification if both registered
9cccf3f8fb : RAS: Refine variable and function naming
7cb84b932a : RAS: Fix null pointer dereference when remote device does not support real-time functionality
7de5617f7d : Fix OOB writes in gatt_sr.cc
cefd9244e2 : Bumble Java Le Audio Service Discovery Test cases
1a76345471 : Tbs: Fix restoring CCC values when not initialised
a91f0ccd47 : flag: Add leaudio_improve_switch_during_phone_call
a1b76d5aec : leaudio: Minor cleanups around state machine mocks
e07ce4f2d4 : leaudio: Minor cleanup
71558ba487 : bta_ag_sco: add mock files and funcs for tests
e88d7925d8 : Disable SWB for desktop if bluetooth.hfp.swb.supported=false
15a9026a58 : Remove flag enumerate_gatt_errors
472ac7e75c : Call Index: To Handle HFP index change after call Merge and disconnect
256ed77be1 : Add assumption to test
896d90a10b : fix for exception
2a954456cd : Pandora: update linter file for BumbleBluetoothTests
e0f16d16da : Pandora: add linter file for Pandora Server
b72f776ff5 : btm: btm_iso: Fix handling cancelled Create CIS procedure
2d2786a467 : Initial implementation for AdapterSuspend
2ecb9971be : Floss: Remove delay to register callback in non-interactive mode
4c97123eb6 : Flag: add adapter_suspend_mgmt
12a59cda2a : Add suspend preparation and resume functions to JNI
510559af0f : Don't use base::StringPrintf with non-constexpr format string
37fed2de9c : Flag lib link: Use test variant when building test
2eca6e61a2 : Flag to have offload socket API
3a7878e1e2 : Ignore **/._* MacOS metadata files
84f3b3a082 : Remove BluetoothPbapSimVCardManager#finalize
21b8064163 : Remove BluetoothPbapCallLogComposer#finalize
8b8ed167d2 : HidHostTest: Use Mockito to wait for events instead of Future
50d8fce799 : HidHostTest: fix lint issues
e7407a3fa5 : Bass: Fix parsing BASE structure
f413dbe5b8 : hh: Don't transmit the whole UHID struct
8a682877fe : le_audio: Add missing object for Audio Profile Preferences

+- Project: platform/packages/modules/CaptivePortalLogin

92c3edc : Import translations. DO NOT MERGE ANYWHERE
9fe27b0 : Fix spinner hide testing in directly open download
d2857e4 : Import translations. DO NOT MERGE ANYWHERE
1186119 : Import translations. DO NOT MERGE ANYWHERE
d3c79e6 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/modules/CellBroadcastService

6f46d1e : Mark packages/modules/CellBroadcastService apps in updatable apexes with updatable: true

+- Project: platform/packages/modules/ConfigInfrastructure

b55fc0c : Skip storage files that don't exist in list command
0794049 : remove get flag value by string in AconfigPackageInternal
461df81 : Changed DeviceConfig.dump() signature
eb5c649 : do not start aconfigd_mainline if daemon is not enabled by flags
0f86222 : Add new autofill flags to WritableFlags.java
b059705 : A bug fix and add rust aconfigd unit test in test mapping
b681bcc : Add RavenwoodKeepWholeClass to WritableFlags/Namespaces
ad71ac3 : Add adservices to WritableNamespaces and clean up WritableFlags
4fb6060 : Remove storage marker file check from StageOtaFlags.java
ad3d142 : Make build fingerprint first parameter
c42dc24 : AconfigPackage directly talks to storage
be0aacb : aconfigd: update CI aconfigd socket to read/send message length int first
ec30a02 : Add API to obtain DeviceConfig allowlisted namespaces
4e06a7c : Add mpgroover to OWNERS for WritableFlags/Namespaces
a0fbd5f : Update CI aconfigd
5ae265f : Update CI aconfigd
f3c373f : replace aconfig_storage_reader_java to aconfig_storage_reader_java
f579b64 : Minor improvements on DeviceConfig.dump():
c886746 : Update allowlisted flags with additional flags from tests
5718228 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
2f5888f : Revert^2 "Add AconfigPackageInternal"
4ddcb98 : Update the socket address for StageOtaFlags
465278d : Revert "Add AconfigPackageInternal"
2122b05 : Add AconfigPackageInternal
4b5be45 : Add AconfigPackageInternal
97223af : Add android.provider.x.android.provider.flags to the package_prefixes
0c5d564 : Implement FlagManager write system APIs
ad2c284 : aconfigd: allow test to run as root and added test points
832b56f : Update allowlisted flags to include flags set during tests
2b7cedb : aconfigd rustification part 14
73175f2 : rename the package name of AconfigPackage
6a252fc : Add AconfigPackageLocal
a9b3dd5 : update aconfigd error based on android rust team's suggestion
a50b745 : Allowed StageOtaFlags to stage legacy device config flags that don't contain a [period] character in the flag name
578d2ea : update AconfigStorageReadException
e262b92 : Update mainline aconfigd
512976f : Refactor AconfigPackage
4b58e00 : Aconfigd rust conversion part 12
94f085e : Add clear override options to aconfigd proto
e85d4a7 : Add clear override options to aconfigd proto
23e7c24 : add new dependency
bd1e759 : add new dependency
f8fe099 : Add API flag for storage write SystemAPIs
7c80bf1 : aconfigd rust conversion part 11
7e89bbf : aconfigd rust conversion part 10
7b926c4 : aconfigd: rust conversion part 9
47d60b0 : aconfigd: rust conversion part 8
79bfb83 : aconfigd: rust conversion part 7
1461a5f : Compute the hash of the contents of files
257303c : aconfigd: rust conversion part 6
a53a52b : aconfigd: rust conversion part 5
b214f97 : aconfigd: rust conversion part 4
4320c75 : Check existence of new storage
252dd58 : Move storage_files.h/cpp to rust implementation part 3
20d3882 : Move storage_files.h/cpp to rust implementation part 2
c2cc9b2 : Move storage_files.h/cpp to rust implementation
af47583 : Expose StageOtaFlags constants to system apps
b9b2617 : [Ravenwood] Support DeviceConfig
dfb7f0c : [Ravenwood] Support DeviceConfig
f6abede : update document
95d64ac : add public api to read flag value from new storage
5f11ac8 : Add storage marker file check to stageBooleanAconfigFlagsForBuild
7c920b9 : configinfra: move aconfigd_proto build targets to config infra module
6ef800d : configinfra: move aconfigd_proto build targets to config infra module
0525e00 : ConfigInfra: create aconfigd_mainline binary
c32b5d2 : ConfigInfra: create aconfigd_mainline binary
8639315 : Mark packages/modules/ConfigInfrastructure apps in updatable apexes with updatable: true
ad96de0 : ConfigInfra: create aconfigd_mainline binary
6db00b6 : Add flag for new api
6b31b1f : Reland "Implement stageFlagsForBuild API"
9914742 : Revert^3 "Implement stageFlagsForBuild API"
fbd2023 : Revert^2 "Implement stageFlagsForBuild API"
5211ede : Revert "Implement stageFlagsForBuild API"
dcee1c5 : Implement stageFlagsForBuild API
5269af5 : Delete Prominent Entities flags
98e52fb : add aconfig_storage_file_java in framework jar
85122cb : Revert^2 "add aconfig_storage_file_java in framework jar"
f48818e : Added --namespace parameter to dumpsys device_config

+- Project: platform/packages/modules/Connectivity

dd7e14491b : Import translations. DO NOT MERGE ANYWHERE
9d0c12ada9 : Import translations. DO NOT MERGE ANYWHERE
b3492b7f76 : Change the default value of the flag to true as discussed via chat
2c60124e10 : Throw `IllegalArgumentException` instead of `RuntimeException` in hexStringToByteArray()
b14cd877fb : Disable multi-user check for entitlement provisioning
bd1fe2056b : Use LongSupplier in LruCacheWithExpiry
5b018cfd06 : Handle non IPv6 packet in getRaPios
ad713e007a : Reconnect to wifi if supported
7ce2471354 : [Thread] refactor OtDaemonServer::Initialize argument order
4a79993944 : Make ConnectivityTestPreparer.apk path explicit
2f37e81851 : Make Tethering public API for DO/Carrier apps
5475f2dd74 : Add stopTethering(TetheringRequest) stubs
eab1de47f4 : Introduce interfaceName field into TetheringRequestParcel
d95c0c692c : Make ARM TVs running 32-bit userspace boot on V with new kernels.
2bb40d2849 : Update TimerFileDescriptor to support different task types
50c14402f6 : Build the timerfd wrapper class
cb0aabd14c : Add RtmNetlinkPrefixMessage class to represent RTM_NEWPREFIX message.
f599788e97 : Add IPv4 all host address definition
05b57b7724 : Skip NetworkStatsBinderTest if the binder interface is not accessible.
392336b50d : Read idle timer configurations from the boot namespace
fb2425d173 : Remove unnecessary semicolon
219d7f9275 : Remove synchronized block for test only variable
17ab27e71f : Add mcts and mcts-tethering to net-tests-utils-host-common for the dependency of ConnectivityTestPreparer
fc724ab59a : Only report tryTest errors when not caught
799aed88ad : Wait after putting a display to sleep in ApfIntegrationTest in Automotive Multi Display configuration instead of polling for interactiveness
2229796976 : Fix assertEventuallyTrue timeout
bb815f25dd : Add isDebuggable() to DeviceInfoUtils for Android R compatibility
cc8b8f9165 : Add QUIC testing to test preparer
9190aed0f8 : Simplify the Android CT properties
9efa5f8824 : Change ethernet enabled state from int to using boolean
3bd07a5ec6 : Add SignatureVerifier class to handle public keys from different sources
0adbb0277b : Stop tethering when the current user is not allowed to do entitlement check
5671e9d15d : Use TestableResultReceiver in EntitlementManagerTest
fa168e528b : Wait after putting a display to sleep in ApfIntegrationTest in Automotive Multi Display configuration instead of polling for interactiveness
2c15bee348 : Import translations. DO NOT MERGE ANYWHERE
b8eca6d9e0 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
92071eea46 : Fix CtsNetTestCases for secondary_user_on_secondary_display
c91d1654f1 : Fix CtsNetTestCases for secondary_user_on_secondary_display
5c4b965f15 : Fix TetheringManagerTest using setWifiSsid before T
282b7be3c6 : Fix setup on devices without printflags
ee0a07cfd9 : Sort framework-connectivity-defaults libs alphabetically
819b77addb : Improve testOpenConnection() reliability with fallback IPv6 check
4b43c079ff : Add WRITE_ALLOWLISTED_DEVICE_CONFIG when modifying DeviceConfig
956b8621da : Add diagnostics to testNativeDatagramTransmission
587f996eb4 : Expose Connectivity libraries to VCN and tests
c8a9dc2f6e : Use unique_fd for usableProgram
99ace23829 : Get rid of BPF_FD_JUST_USE_INT
5023143dcf : Add a new module-only API to preload HttpEngine
8c8d2b1bd8 : Create the timerfd helper class
70c4de8edc : Add daily CertificateTransparencyJob
0b1a7b8c98 : NetworkAgent: Log a metric if a message is queued before registration
1cc7769bbe : Add FrameworkConnectivityStatsLog for logging metrics
51d7725e08 : Add test for TREL service not unpublish when feature disabled
eb85ea5f2a : Remove BpfBaseTest processgroup test.
50607e6814 : Remove the useless networking tag.
c58cfb7c7d : Add BPF_LOAD_SKB_PKTTYPE macro to BpfClassic.h
676ad858e2 : Clean up TetheringRequest#toString()
685b8aa508 : Send TetheringRequest to restart Tethering from IpServer
b5b82cc424 : Add hostside CTS tests for SoftApConfig redaction
95fb0801bf : Remove the useless mts tag since the tests only contain in one MTS module test plan.
a3882a238b : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
694aa02036 : Fix setEthernetEnabled API to properly affect administrative state
bb7efba754 : Configure TREL feature from feature flag.
a66b78ab87 : Include SoftAP config in TetheringEventCallback
27fb206ded : Revert "Set diagnostics log-data-type to CONNDIAG"
abaf3db53c : Add IGMP network stack constants
32691a94c4 : Add IGMP type offset definition
4078a4064b : Set llndk.moved_to_apex explicitly
2580051834 : Delete unused flag
172fe79552 : Add new flag for Tethering w/ SoftApConfig
dfed8dc100 : Make newly created CT directories executable
26a68af724 : [Thread] Add a NAT64 test case after switching to another infra network
ebd31b8a6a : [Thread] introduce TestTunNetworkUtils and use it in existing test cases
717b44749a : bpf: Ringbuf: Ensure we acquire load the length for the ring buf entry
feeff37fca : Add a test for delayed teardown with unregisterAfterReplacement.
478ba513aa : Log CT log list download failures
5f71222f2b : Remove the VisibleForTesting annotation.
8a3f2e0460 : Refactor tether state change callback/broadcast for redactions
5f9339d6a9 : Validate packageName with AppOpsManager#checkPackage
26fc9a31e3 : Fix for CtsNetTestCasesUpdateStatsPermission
5801f1644a : Remove mcts tags since the net-tests-utils-host-common is host_required by other CTS cases
0435210627 : [Thread] avoid re-setting the infra link state when the state doesn't change
1ebe54b8d4 : Enable secondary_user_on_secondary_display for CtsNetTestCasesMaxTargetSdk33
e7bbdcba2e : Fix for CtsNetTestCasesUpdateStatsPermission
3b6bb1ba4a : [Thread] add integration tests for aosp/3334156
739d200d35 : Drop the redundant type and caller uid checks for TrafficStats
e9e00def4d : [Thread] save time in InternetAccessTest by only joining the network once
46722ad4c4 : Revert^2 "Add UID and package name to TetheringRequest"
59c654d252 : Enable secondary_user_on_secondary_display for CtsNetTestCasesInternetPermission
b3951a23b9 : Fix failed CtsNetTestCasesInternetPermission
a4983fd628 : Revert "Change the visibility of Config to be public, so it can be accessed from"
6d74933962 : Enable error-prone checking for CtsNetTestCasesDefaults
18e8b27832 : [Thread] add a NAT64 test case for UDP echo server
23ee433f03 : [Thread] create a base class TestUdpServer
2967cfaf4e : [Thread] add private API ThreadNetworkSpecifier
9b15fa040f : Make netbpfload invoke uprobestatsbpfload
a618596be5 : Change the visibility of Config to be public, so it can be accessed from the manual tests in the cts/ directory.
2918f066d5 : BpfHandler: Add kver checks around V BPF attach program checks
1de84f2fd2 : netd: Require 4.19+ for connect/recvmsg/sendmsg cgroup hooks
d5f09a914b : Add LRU cache with expiry class
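The "LRU cache with expiry class" above combines two eviction policies: least-recently-used capacity eviction plus per-entry time-to-live. A minimal Python sketch of that idea (the actual AOSP class lives in the Connectivity Java sources; names and structure here are illustrative assumptions, including the injectable `clock` parameter):

```python
import time
from collections import OrderedDict

class LruCacheWithExpiry:
    """Illustrative LRU cache whose entries also expire after a TTL."""

    def __init__(self, max_size, ttl_seconds, clock=time.monotonic):
        self._max_size = max_size
        self._ttl = ttl_seconds
        self._clock = clock  # injectable so tests can control time
        self._entries = OrderedDict()  # key -> (value, insert_time)

    def put(self, key, value):
        self._entries.pop(key, None)
        self._entries[key] = (value, self._clock())
        if len(self._entries) > self._max_size:
            self._entries.popitem(last=False)  # evict least recently used

    def get(self, key):
        item = self._entries.get(key)
        if item is None:
            return None
        value, inserted = item
        if self._clock() - inserted > self._ttl:
            del self._entries[key]  # entry expired
            return None
        self._entries.move_to_end(key)  # mark as recently used
        return value
```

Injecting the clock keeps the expiry logic deterministic under test, which matters for a class used in network-stats caching.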
e12dfea4f0 : Remove framework-connectivity-shared-srcs from net-utils-all-srcs
e61c10a2e7 : [Thread] join/leave the network only once to save time in BorderRoutingTest
fcb37cff82 : Test races around NetworkAgent creation.
56449db1be : [Thread] add private API for disabling border router
833b22da87 : [Thread] Disable flaky test nat64_withAilNat64Prefix_threadDevicePingIpv4InfraDevice_outboundPacketIsForwarded
c8df03ffd7 : Revert "Add UID and package name to TetheringRequest"
b9071f8512 : Add config for client-side traffic stats rate-limit cache.
e51bae6bdc : Fix system caller cannot query stats for other uids
1372ac2140 : Disable IWLAN in tests on 24Q2
339d6b20e7 : Fix system crash when netd return error
5e8a1fdfda : [Thread] add an integration test case for resolving a host via DNS recursive resolver
c623c946f5 : Make the idle timer for network data tracking configurable
2dc574bf1b : [nearby]Remove out of date APIs for Fast Pair.
b7028cb44b : Add bpf_sk_fullsock helper
f014ae2f2a : Use MANAGE_USERS permission to check for admin user
24fcc3668a : Use device config to enable/disable the rate-limit cache feature
0997dd93a1 : [Thread] Fix the overlap of top app bar and the system's status bar in demo app
2d9992c186 : [Thread] tell ot-daemon about the DNS servers on the infra link
a3cf536ffa : Add missing PacketReader.stop() call
af1c56c88f : Add CTS for cached services removal
9fb574d0c3 : Address review comments at aosp/3249945
625342ed21 : Clarify advertising logs
aa6b4c3db8 : Refactor query delay calculations
39f0ea9106 : [Thread] Add NAT64 switch to Thread demo app
001be79f0f : Add log when resetting DeviceConfig
431d11b1ff : Add CtsNetTestCase as presubmit for automotive multi-user multi-display
531a9f0a9b : Disable debug log by default
c19e8a00cc : Fix flakes due to cell reconnecting on wifi toggle
462703c850 : Add UID and package name to TetheringRequest
a7318da774 : [Thread] request/release the NAT64 CIDR
9ff8f4c81b : Remove dependencies on the 1-variant fallback
a9aafb7c8c : Add compatibility version to log lists installed by CT
dfe98fc5be : Do not crash when adding UID ranges to a destroyed network.
a81bfafda5 : Test that UID ranges can incorrectly be added to a destroyed network.
7616fdd777 : [Thread] Mark ThreadConfiguration#isDhcpv6PdEnabled() as @hide
13a9f53d98 : Refactor `is_packet_capture_supported()` by `expect_throws()`
a0c0168ae6 : Add `expect_throws()` to `assert_utils.py` to check expected exception
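The `expect_throws()` helper above wraps a call and asserts that it raises a particular exception. A hedged sketch of what such a test utility typically looks like (the real signature in `assert_utils.py` may differ; `UnexpectedBehaviorError` is an assumed name):

```python
class UnexpectedBehaviorError(Exception):
    """Raised when the wrapped call does not throw the expected exception."""

def expect_throws(runnable, exception_class):
    """Call `runnable` and assert it raises `exception_class`.

    Passes silently when the expected exception is raised; raises
    UnexpectedBehaviorError when the call completes without throwing.
    Any other exception type propagates unchanged.
    """
    try:
        runnable()
    except exception_class:
        return  # expected failure observed
    raise UnexpectedBehaviorError(
        f"Expected {exception_class.__name__} was not thrown.")
```

Such a helper lets a refactor like the `is_packet_capture_supported()` change above replace ad-hoc try/except blocks with a single declarative assertion.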
61f779ca04 : Create net-utils-framework-ipsec for IPsec module
ee4ab96116 : [Thread] fix the test util of setNat64EnabledAndWait to apply old config
52c72a11bf : [Thread] use setConfiguration API in NAT64 tests
380e8244a5 : [Thread] Let TunInterfaceController.getLinkProperties() return a copy of LinkProperties
1193f08d69 : Introduce dependencies object for EntitlementManager
b4a92ff791 : Improved Entitlement UI Launch for multiple admin users
5714031dcc : Set diagnostics log-data-type to CONNDIAG
2801791773 : [Thread] introduce ThreadNetworkControllerService#initializeInternal()
a9efc24ade : Add test for meshcop service state bit for epskc.
b1145dc6b1 : Update requestDownstreamAddress calls in PrivateAddressCoordinatorTest
389bc7f13e : Split requestDownstreamAddress into two methods
e980872140 : Add report header to failure diagnostics
68a760a273 : Disable perfetto lock for OnSetup/OnStart/OnStart.
7eb1bbd369 : [Thread] add CLI commands for getting/setting config
c0e709700c : Standardize handler thread checks
3c052f3625 : Clarify logs in MdnsServiceTypeClient
d789d1b8de : Update unit tests to reflect the changes to RoutingCoordinator and PrivateAddressCoordinator
69aee7b055 : Move PrivateAddressCoordinator from Tethering to staticlibs
55e005251d : Refactor IpServer to allocate IPv4 address via RoutingCoordinatorManager API
e5e06f9f7c : Introduce PrivateAddressCoordinator methods at RoutingCoordinatorManager
8589c09369 : Delete a surprising amount of dead code
9b8b300ca8 : Revert "Add link-related methods at NetlinkUtils"
1c7d51a743 : [Thread] add configuration methods in ThreadNetworkControllerWrapper
84a40ff16d : Consolidate handler thread check functions in HandlerUtils
33bb7fbfb7 : Refactor requestNetworkAndGenerateTraffic by using synchronized methods
eb73d4ddda : Create a timerfd JNI class
3feb788aa5 : Do not run diagnostics on assumption failures
51f46332da : Remove `test_mdns_via_hotspot` multi-devices test
33f8758600 : Remove `test_apf_drop_ethercat` multi-devices test
2665f0a4d5 : Add options to disable diagnostics collector
43730b3f0f : Add connectivity diagnostics on test failure
96267ec798 : Remove module tests from FrameworksNetTests
eab97a0552 : Remove the check for epskc flag value at runtime.
eb3c631b89 : Rename TapPacketReader to PollPacketReader for broader applicability
f311b43bef : Move buildMdnsServiceInfoFromResponse to MdnsUtils.
39ab513b7d : Fix lint warnings in thread_network_settings_fragment.xml
85f93ed38c : [demoapp] make the Thread Network Settings page scrollable
9a302c1e7a : [Thread] add unit test cases for Configuration API at ThreadNetworkController
f1930fdb4f : [demoapp] Add ephemeral key buttons and texts
f33436962c : Verify migration when multiple tunnels exist
485a7f1849 : [Thread] fix the issue that border routing is not available after ot-daemon restarts
802c8f1a4d : Move public key to a DeviceConfig flag
fe0fb3081e : Change OtDaemonState ephemeralKeyExpiryMillis to ephemeralKeyLifetimeMillis
651bfdaaf7 : [Thread] make ThreadNetworkController#setConfiguration a system API
ed15725987 : [Thread] implement the ThreadNetworkController#setConfiguration at service
78713203b3 : Ensure the dump service is completed
99a17f99af : Validate cellular network in testSetAvoidUnvalidated() for accurate testing
6f73361204 : Add Bessie to networksecurity OWNERS
d833d726da : Remove useless mts/mcts tags for CtsTetheringTest.
87d33bac74 : Remove unused method TetheringConfiguration#isRandomPrefixBaseEnabled()
88312f6451 : PrivateAddressCoordinator: introduce Dependencies class
731ae53d71 : PrivateAddressCoordinator: move the special handling of WiFi P2P dedicated IP to IpServer
34e9836b26 : [javadoc] add missing space
f4d6946440 : Rename queryCount -> queryIndex
6288cce6f6 : Rename getDelayUntilNextTaskWithoutBackoff
6bef5125d3 : remove useless variable
4c3a13c048 : Retrieve IPv4 and IPv6 addresses for client and server device in test setup
d07d47b1d3 : Add send and check reply packet function for multi-devices tests
1644f95852 : Add packet capture and match network_stack adb command to `apf_utils`
ba7d9dddc7 : Handle CMD_IPV6_TETHER_UPDATE in UnavailableState
487f21e9be : offload.c: do_forward6() pull less
4110b7dc70 : Remove unnecessary byte array allocation:
e611324c50 : BpfHandler: fix comment whitespace/quoting
c714bf6d8b : Add MonitorThreadLeak to TetheringMetricsTest
3e90bc85b3 : Simplify tethering active session counting
0a4758b592 : Add missing break in BaseServingState
23c02b0b7f : netd_updatable: only wait for bpf on <=V
4e9230dee8 : NetBpfLoad: doc updates...
469363749e : Update bpfloader related rc documentation.
359d23059c : Refactor shouldTestThisNetworkType
7065213fca : Update the usage data when upstream is changed
504e3da375 : Revert "Skip TestDropPacketToVpnAddress if the feature is disabled"
558f56bd10 : Mark packages/modules/Connectivity apps in updatable apexes with updatable: true
2afd53948f : Delete empty directory
a591d0432b : [Thread] support ephemeral key API in ThreadNetworkControllerService
018f163cc9 : [Thread] Add unit test for ephemeral key API
b8aa8bb47f : [Thread][API] add API for ephemeral key.
b9cfcce30a : Drastically simplify prefix and address selection process
b1280c0467 : Simplify getSanitizedSubAddr to 1-liner
dff99282e8 : Prefer framework-annotations-lib dependency
7f5b601864 : KernelUtils.h: add 6.12 to isLtsKernel().
65bf46db67 : Rename getStartedPrefixIndex to getRandomPrefixIndex
f77127509e : Remove unused imports
1cda02fd4a : PrivateAddress: Clarify the dependency
738bf999d5 : Support get multiple ipv4/ipv6 addresses in `apf_utils`
2bb5b7ebee : bpf: git mv DnsResolver bpf/dns_helper
7dd6014d8e : Import translations. DO NOT MERGE ANYWHERE
77f5e9af1a : Mark Martin as LAST_RESORT_SUGGESTION
f926010f90 : Mark Sat as LAST_RESORT_SUGGESTION
c3eccf9ef4 : Revert "Use unicast instead of broadcast as ethernet destination in `test_apf_drop_ethercat`"
a7fc900db7 : Add link-related methods at NetlinkUtils
8675fc0c5d : Change TetheringManager#RequestDispatcher as static
78bd66003a : Fix the test flake in testRemoveServicesAfterSocketDestroyed
2d52f8c139 : NetBpfLoad: don't allow prog-less bpf .o's targeting platform bpfloader
46102348e0 : Use unicast instead of broadcast as ethernet destination in `test_apf_drop_ethercat`
664e1f4b89 : PrivateAddressCoordinator: remove the dependency on TetheringConfiguration
f0bcd6f4d6 : Add createGetLinkMessage() at RtNetlinkLinkMessage
8df9d34eb3 : PrivateAddressCoordinator: get rid of UpstreamNetworkState for updateUpstreamPrefix()
1811edb633 : Move CtsHostsideNetworkTests to presubmit.
95d26ae188 : Add persist.bluetooth.finder.supported property
b756497e05 : Add get ipv4/ipv6 address function to `apf_utils`
0e1c1cc92d : Add packet_utils for multi-devices tests

+- Project: platform/packages/modules/CrashRecovery

c0dae13 : Introduce setHealthCheckResultCallback API
0ae004b : Updating the threshold names
bb01b57 : Moving startObservingHealth API to PackageWatchdog
78de1fc : Expose APIs for new observers
4593f5e : Rename onPackageFailure to notifyPackageFailure
f6bf6a1 : Add permitted_packages to framework-crashrecovery
6bc484f : Create isRecoveryTriggeredReboot SystemApi
e4bf836 : Expose important thresholds for mitigation impacts
75a7e6c : Jarjar the files to avoid duplication at runtime
a86a308 : Increase impl visibility to tests
29d5c9f : System server apis for service crashrecovery jar
a183b0a : Create framework-crashrecovery jar

+- Project: platform/packages/modules/DeviceLock

8c52ca3e : Import translations. DO NOT MERGE ANYWHERE
03401c88 : Import translations. DO NOT MERGE ANYWHERE
cd5ba81c : Import translations. DO NOT MERGE ANYWHERE
e766e903 : Import translations. DO NOT MERGE ANYWHERE
46d49e49 : Import translations. DO NOT MERGE ANYWHERE
7fe25100 : Add constants for UpdateFcmToken work
ff589251 : DLC: Use NATIVE SQLiteMode in Robolectric
0bb6ba30 : Add dmusila@ to OWNERS file
c97aacf8 : Add UpdateFcmTokenWorker job
26002fb2 : DLC: Use JobScheduler instead of WorkManager for SUW timeout.
33dd0a6e : Import translations. DO NOT MERGE ANYWHERE
b55dde47 : Baseline global lint errors in DeviceLock

+- Project: platform/packages/modules/DnsResolver

5483e926 : dns: simplify hardcoded constants
7e407d44 : Remove obsolete OWNERS from apex subdirectory.

+- Project: platform/packages/modules/ExtServices

7095ee7 : Import translations. DO NOT MERGE ANYWHERE
9753b32 : Restrict OTP detection to english language only
d995e41 : Import translations. DO NOT MERGE ANYWHERE
13521a3 : Import translations. DO NOT MERGE ANYWHERE
acda917 : Restrict OTP detection to the current SMS app
03c852f : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
ae67a09 : Revert "Require language specific regex, reduce english OTP words"
fbe917b : Add platform crashRecovery dependency
63cbf5b : Added AdServicesHostTestsTargetPreparer to CtsExtServicesHostTests
4ae1ae2 : Require language specific regex, reduce english OTP words
733b1b8 : Import translations. DO NOT MERGE ANYWHERE
8096c48 : Mark packages/modules/ExtServices apps in updatable apexes with updatable: true
0893724 : Import translations. DO NOT MERGE ANYWHERE
6ad6d17 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/modules/GeoTZ

2832148 : Add a constructor which accepts the size of the allocated buffer

+- Project: platform/packages/modules/HealthFitness

539d68e18 : Import translations. DO NOT MERGE ANYWHERE
b2c17705d : Added tests for UsageStatsCollector. This will allow us to clean up DailyLoggingServiceTests and move the DailyLoggingService to the Injector.
14b032b8b : Remove proto usage from framework and move protos to service
3daaebf90 : Fix flakiness in ExportImport API test
b14d8b519 : Update NoDataPreference to match empty 'No apps recently accessed Health Connect'
7e2507e4f : Throw exception if getMedicalDataSources list of ids too large
69d3cf369 : Update HomeFragment CTS test with new helper methods
a66f9a2e0 : Cleanup insert* methods in TransactionManager.
9c0a79eb3 : Deflakes medical UITests by: - attempting to scroll to most views before checking they exist - waiting for a new window when navigating
4f8cb7060 : Deflakes RequestHealthPermissionUITest by: - attempting to scroll to most views before checking they exist - waiting for a new window when navigating
650eb5e07 : Wrap ReadAccessLogsHelperTest with EnableFlags to filter out the test properly in Trunk Food.
f25cd8eea : Revert "Update HomeFragment CTS test with new helper methods"
34f303d07 : Deflakes ManageAppHealthPermissionUITest by: - attempting to scroll to most views before checking they exist - waiting for a new window when navigating
4b16f7e76 : Add nfortescue@ and sophiabotz@ as HC code owners.
3fed8b6ff : Use TopIntroPreference with link to support expressive theming
e03620a95 : Fix flaky writeLimitExceeded / readLimitExceeded tests
660d11681 : Revert^2 "Add support to insert detailed access logs for Read requests."
4989fa2b5 : Use standard utils for querying number of rows
838f1afbe : Update HomeFragment CTS test with new helper methods
cd5fa4d2d : Wait for the screen to be unlocked in the UI tests.
79316e594 : Internalize all variations for inserting data within UpsertTransactionRequest.
250b3dbdf : Reset screen size before running any UI tests.
2f95f1407 : Change coroutine testing for kotlinx.coroutines 1.9
09655e2a6 : Revert "Add support to insert detailed access logs for Read requests."
9d6528adc : Date navigation view in entries screen not resetting when navigating back to data and access screen
22a53b649 : Deflakes Additional permission request screens tests by: - attempting to scroll to most views before checking they exist - wait for a new window when navigating
77236ec28 : Fix data type UI tests when the PHR flag is disabled.
97eebce5a : Use CheckFlagsRule in ActivityIntensityFormatterTest.
3900134c8 : Deflakes new IA tests by: - attempting to scroll to most views before checking they exist - wait for a new window when navigating
55deafd81 : Throw exception if list of upsert requests contains duplicates
18a4e0c83 : Migrate tests to SQLiteDatabaseFixture
fa89209ea : Import translations. DO NOT MERGE ANYWHERE
c293ba4c1 : Fix HCServiceImplTest on hsum devices.
63ec6b0f0 : Fix flaky MedicalPermissionsRequestUITest
769ffaae6 : Fix flaky UI log test by checking view first, then logimpression
317d0f016 : Fixes flaky CTS UI test by: - attempting to scroll to a view before checking for its presence - replacing waitUntilIdle() with wait() until we see a new window.
6b4e8da9b : Change coroutine testing for kotlinx.coroutines 1.9
1286c6d86 : Fix ActivityIntensityFormatterTest in Mainline.
16d7af1e3 : Fix readMedicalResources query
4a71c4087 : Fixes flaky CTS UI test by replacing waitUntilIdle() with wait() until we see a new window.
f928aa5a0 : Move PHR methods from PreferenceHelper to PreferencesManager
d8e2ada33 : Add support to insert detailed access logs for Read requests.
84638a771 : Add flag for fixing the readMedicalResources query
3205dd7fa : Delete all records to increase cleanup coverage.
e21292dfc : Generate change logs for export&import and d2d transfer
0eb8cbe94 : Fix failing Activity Intensity tests.
4e796684e : Add proto definitions for all record types
44c342bad : Revert "Stop using Thread.sleep"
ba80fa1cf : Removed HealthConnectDeviceConfigManager as it was not being used.
ec976000e : Stop using Thread.sleep
074f680f9 : Moving dependencies of DailyLoggingJob to the Injector.
d06e52ce9 : Rate Limiter was using Instant.now, which means that when the user forwards the system clock and then changes it back, it would break rate limiting.
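The commit above describes a classic bug: rate limiting keyed to wall-clock time breaks when the user adjusts the system clock, because timestamps can jump forward or backward. The fix is to measure elapsed time on a monotonic clock. A simplified sketch of the distinction (this is an illustration of the principle, not the HealthFitness RateLimiter's actual implementation):

```python
import time

class WindowRateLimiter:
    """Sliding-window rate limiter driven by a monotonic clock."""

    def __init__(self, max_calls, window_seconds, clock=time.monotonic):
        self._max_calls = max_calls
        self._window = window_seconds
        # time.monotonic() never jumps when the user changes the system
        # clock, unlike wall-clock sources such as time.time()/Instant.now.
        self._clock = clock
        self._timestamps = []

    def try_acquire(self):
        now = self._clock()
        # Drop timestamps that have fallen out of the sliding window.
        self._timestamps = [t for t in self._timestamps
                            if now - t < self._window]
        if len(self._timestamps) >= self._max_calls:
            return False  # quota exhausted for this window
        self._timestamps.append(now)
        return True
```

With a wall-clock source, setting the clock forward would prematurely expire the window (defeating the limit), and setting it back would lock callers out; a monotonic clock is immune to both.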
23014fe30 : Use Context user to create StorageContext
fe65a10d7 : Move AutoDeleteService to the Injector.
4b88950bc : Revert^2 "Mark @FlaggedApi flags as exported"
7aed773c6 : Import translations. DO NOT MERGE ANYWHERE
07d605682 : Add BackupData proto that will be used to serialize cloud backup changes
7adfb3e51 : Fix Browse data top intro at orientation change
b64d42692 : Move BackupRestore to injector.
7e1e2ccbe : Use HealthConnectManager to identify health permissions
0a7e5b82a : Attempt to scroll down to text before verifying it's displayed.
9d10fcb5b : Revert "Mark @FlaggedApi flags as exported"
ce543d07d : Logging for allow and cancel button on request medical permission screen
6c7b7fa64 : Switch all backup util method calls to the user version.
d9421c480 : Refactor the code to address TODOs.
a8f357de1 : Migrate tests to SQLiteDatabaseFixture
828068f36 : Add mts-healthfitness tag to CtsHealthConnectHostSideTestCases
000aa5323 : Update UpsertTransactionRequest to use factory methods.
8118c6218 : Import translations. DO NOT MERGE ANYWHERE
4e7540804 : Use shared memory for upsertMedicalResources API
56edf5e91 : Import translations. DO NOT MERGE ANYWHERE
ecaf402de : Introduce context+user as a constructor param in HCPriorityHelper.
559a1140b : Stop unnecessarily using StorageContext
a3ab74b95 : Add fixture that closes and deletes opened databases
42712f8f6 : Update bug number for the phr_upsert_fix_parcel_size_calculation flag
c477e0d26 : Remove clearData calls from teardown as we do not need them anymore.
6ebeffe5f : Mark @FlaggedApi flags as exported
70e6c75e9 : Make export status preferences non selectable
c39b558c4 : Add primitive type validation
b0705fad0 : Hide Browse Medical data top intro when in delete state
0e9a828bd : Fix Talkback on Request write medical data screen
55357fed7 : Remove unused Context
404af462f : Cleanups in PriorityHelper.
db097c717 : Fixes checkbox views being recycled and not keeping the correct references.
b4a93399a : Add unit test for when PHR merge is enabled with more than max page size resources.
c9efc6cd7 : Waits for Auto-delete to be displayed before finishing the test to make sure the page is displayed.
220496c4f : Add E2E test for PHR d2d flow.
e324ae6ab : PermissionPackageChangesOrchestrator: Opt Wear apps out of intent requirement.
61c352dcf : Import translations. DO NOT MERGE ANYWHERE
1c99f52d5 : Fix UpsertMedicalResourceRequest parcel size calculation
04a4cddd0 : Sort required_fields to make script output deterministic
68284ea94 : Remove HC controller dependency on proto-lite and guava
b8df8bcd5 : Seed Activity Intensity data in Health Connect Toolbox.
e232b0833 : Remove flag for new aggregation source control.
1b0813840 : Remove unused migration notifications
9353b3566 : Change CTS UI logs test to be compatible with both old and new IA.
0d57d2d08 : Updates the Recent Access preference on the home screen to use simple preference when expressive style is enabled.
a19f6d1a9 : Add Activity Intensity to Health Connect Toolbox.
acb610f9b : Remove unnecessary logging left by mistake.
f276cf0a5 : Change coroutine testing for kotlinx.coroutines 1.9
9f217fe65 : AddAnApp screen uses AppPreference instead of Preference.
39c726aad : Enable R8 optimization and resource shrinking
188b6b882 : Change coroutine testing for kotlinx.coroutines 1.9
7ae120b86 : Implement UI for Activity Intensity.
491d90f87 : Wait for 3 seconds before checking that deletion is done.
63d018196 : Import translations. DO NOT MERGE ANYWHERE
b50d5eb5e : Filter out non-rolled-out health permissions from system permissions
43cd52f6c : Lock screen, etc telemetry
440515829 : Revert "Rebrand health permission group to "Fitness and wellness""
274378694 : Query apps by requested permissions instead of intent filter
5258e0c04 : Introduce context+user as a constructor param in AppInfoHelper.
4f450e1e1 : Fix HealthConnectPermissionHelper
29b933ec2 : Adds a checkbox to the app data deletion dialog to choose whether to also revoke all permissions for this app.
9753e31ff : Remove unnecessary call from the constructor.
8ebcaf985 : Rebrand health permission group to "Fitness and wellness"
5fb15bc69 : Check for the right user in FirstGrantTimeManager.
8b1a462b9 : Fix MedicalDataResourceHelperTest in preparation of ag/30435469.
db60279a8 : Add CTS hostside tests for PHR API, usage and storage stats logs to WW
2fb45c39c : Remove phr tests that are no longer passing
b522e9edb : Add Browse health records top intro
eb2be58db : PermissionManager: Build Wear Health PermissionManager flow - Control background read status for an app screen
a220f3570 : Enable healthfitness to be added to framework-doc-system-stubs.
011aba166 : HealthConnect: Prepare tests for running on watches.
a036102f1 : PermissionManager: Build Wear Health PermissionManager flow - Control single data type for an app screen
92e2b2a3b : DI Cleanup. Move remaining classes to the Injector.
8c101b257 : Add tests for disconnect app permissions dialogs
c6da3dec7 : Throw an error if resource id is not a string
8021f7a6f : Fix error prone warning in ag/30377554 and don't use proto enum toString
cbb144304 : Adds CTS UI tests for mindfulness in the new IA.
e3d0e3a50 : Add basic setup for restoring user settings.
05ff06519 : Remove unnecessary flag check
3768dda15 : Only show some of the permissions types on the Request write medical data screen
075cd1513 : HealthConnect: Allow Wear apps to request health permissions without rationale intent.
96f6aaebe : Update social history icon
63f0fac27 : Add PHR UI element logging
ba0263681 : Add cts tests for basic FHIR validation of complex types
a83874a4c : Add basic type validation for top level fields
b6317953d : Scroll down on Home screen in case banners are visible
944ca45d7 : Confirmation deletion dialog retains on orientation in entries screen
64b1ec292 : Fix flaky FirstGrantTimeUnitTest - Part 3.
da46724a8 : Swap Additional access and See app data button positions
5484b97eb : Flag activity intensity permissions in HealthConnectManager.
fe662a136 : Add a fixture for mocking Environment.getDataDirectory
f2dd8f70b : Medical permissions icons aligned with each other
4d4225296 : Remove file that was submitted by accident
7a6b62376 : Remove unncessary test from ag/30321876
41750b87e : Add support for merging PHR content in d2d and import/export flows.
d9942ec0c : Fix GrantTimePersistenceUnitTest#testWriteReadData_statesForTwoUsersWritten_restoredCorrectly on HSUM devices.
b2b6d4db0 : Log PHR database size to WW
01b025eaf : Retrieve the list of data types from framework mappings.
4c5d0f28c : Update upsertMedicalResources API doc to clarify validation behaviour
600f14d67 : Add support in ChangeLogsHelper to return data types being written in past 30 days.
8b4b29961 : PermissionManager: Fix Wear Health PermissionManager data type chip disabled issue
f71fa0aa5 : Move FHIR multi type field validation behind main validation flag
618b01847 : HealthConnect: Require permission to get HealthConnectManager on Wear.
e575fbc52 : Don't flag Activity Intensity permissions in the Manifest.
f0b09f062 : Fix GrantTimePersistenceUnitTest#testReadData_stateIsNotWritten_nullIsReturned.
12d576641 : Updates footer text in the permission management screens.
b0113558f : Adds text to "Apps need updating" banner to specify that updates may not be available for all apps.
63e9f830f : Fix nullaway warnings in AppInfoHelper.
4a5a738ec : Scrolls down to "Delete app data" before checking for the view.
c6472501b : Add logging for PHR pages
f05c48b05 : Add a Helper for measuring the size of tables for logging purposes.
f0b18a947 : Fix flakiness in FirstGrantTimeUnitTests - Part 2.
146ed57e7 : Validate FHIR type choice fields
ff9db6816 : Fix DeleteCategoryUseCaseTest when Activity Intensity flag is enabled.
4a86795e8 : Show type first, title second for Exercise and Mindfulness
57008a053 : Grant permissions: Allow only system health permission requests on Wear
cb675577f : Deprecate EXERCISE_SESSION_TYPE_GUIDED_BREATHING.
e1dc285ba : Adjust padding on tab buttons in entries screen
93df70e52 : Implement aggregations for ActivityIntensityRecord.
a6fe3bdf3 : InactiveAppPreference inherits from AppPreference instead of Preference.
73e359263 : Delete PHR table contents in the d2d flow when flag is enabled.
1e27f081a : Log number of apps with at least 1 PHR read perm to WW
bb8de632d : Only allow primitive data types to have fields starting with _
66ba2effc : Add auto-delete preference to settings backup.
498d5314f : Import translations. DO NOT MERGE ANYWHERE
cbfea56bc : Remove cyclic dependency between MigrationStateManager and MigrationBroadcastScheduler.
a51497b37 : HealthConnect: Small test cleanup.
a995a8cfb : Permissions: Link background permission for OxygenSaturation.
717710e18 : Permissions: Link background permission for SkinTemp.
73fa09a5b : Add incremental changes to backup
f97d608c1 : Revert^2 "Rename Immunizations to Vaccines - step 3"
268faae55 : Revert "Rename Immunizations to Vaccines - step 3"
6f32d21e2 : Work towards making PackageInfoUtil static.
566ed9c0d : Fix flaky FirstGrantTimeUnitTests.
8eaf7ae21 : Ensure HCDatabase is always initialised with StorageContext.
a73f30923 : Add ACTIVITY_INTENSITY permissions to the UI Controller category mappings.
19d4713ff : Extract data types from fhir spec
1efbdadf0 : Delete PHR table contents during export when flag is enabled.
688f816e4 : Log PHR monthly active user to WW
4217cefcf : Rename Immunizations to Vaccines - step 3
bf043c27d : PermissionManager: Build Wear Health PermissionManager flow- Remove all page
f6b47cf0a : PermissionManager: Build Wear Health PermissionManager flow - Per data type page
a4b730160 : PermissionManager: Build Wear Health PermissionManager flow - Vitals page
eed365905 : Lock screen banner
a5eeb0296 : Make the dropdown menu under the button instead of on top
8c0f2853c : Remove getInstance calls from MigrationStateManager.
b59fdad7f : Fix logspam
ac4fd94f4 : Clear storage before tests
8b68943b2 : Validate required fields are present in FHIR resources
d8e2bc115 : Add ActivityIntensityRecord to the UI Controller mappings.
7b217b19d : Rename Immunizations to Vaccines - step 2
148bb508e : Add several phr fhir validation flags
434311b3d : Fix an upsertMedicalResources test
6a1be4480 : Make app priority and delete icons lighter
a2fa8709f : Remove getInstance calls from HealthConnectDeviceConfigManager.
a22aa504d : Move MedicalDataSourceHelper and MedicalResourceHelper to the injector
eaa1def8c : Extract cardinality from fhir spec and add to proto
f339de493 : Localize separator for a11y content descriptions
c0c77abd1 : Inherit new SettingsBasePreferenceFragment and SwitchPreferenceCompat components.
1efc3f5e1 : Remove getInstance calls from TransactionManager.
d6701d95b : Import translations. DO NOT MERGE ANYWHERE
4c0f1d412 : Decrease left padding of date navigation spinner in entries screen
27768f898 : Separate the PHR API telemetry flag to WW and PWW ones
71e7d9d7b : Make padding above dialog buttons wider
73a61b9fe : AppInfo: Build Wear Health AppInfo permissions flow
5f18d8b73 : Fix logging issue in fitnessAppPermissions
2c002e2a5 : Move mapping init in IntentTracker to outside the constructor.
8ed2495dc : Add telemetry to log MedicalDataSource count and MedicalResource count to WW
da1fa8974 : Localize separator for a11y content descriptions
a68fc3854 : Fix adoptShellPermissionIdentity without paired drop
904f60ccd : Remove getInstance from PreferenceHelper and ActivityDateHelper.
26607bb18 : Adjust left alignment on checkbox and add padding
68de45ce2 : Match checkbox text color to the other text in the dialog
5aefb09fe : Remove getInstance calls for DeviceInfoHelper.
c59f0973b : Move dropShellPermissionIdentity to finally block
3b3507349 : Remove getInstance calls for AppInfoHelper.
83cb6c6e0 : Ensure test app is in the background before tests
f7713d679 : Change permissionOrchestrator to always remove permissions if intent is missing.
811ed9a4d : Add error handling for data backup
aeb35dfef : Rename Immunizations to Vaccines - step 1
31c9985bb : Increase right padding on drag icons in data sources screen
f460b01e0 : Align radio buttons with the subtitle in Auto-delete screen
bff9aeebd : Fix HomeFragment CTS test
e85daaeb5 : Revert "Add BackupData proto that will be used to serialize cloud backup changes"
27d7848c9 : Revert "Force stop test app before tests"
6bc7b3539 : Enable PHR flags in RequestPermissionViewModelTest
130a5e548 : Disable logging check for test_readAndWritePermission
54ee63b97 : Enable PHR and Activity Intensity DB Flags so that Ecosystem DB Flag can be set to true.
bc2f2a9c8 : Fix tests failing because PHR flag is not enabled in next
17d6ca287 : Fix flaky CTS test
18d7734f2 : Build Health&Fitness grant background permissions on Wear
2f1dbcb61 : Add BackupData proto that will be used to serialize cloud backup changes
e9bb628de : Add ReadAccessLogs table for Ecosystem Metrics.
41b93d64c : Revert^2 "Prepare the device for ui tests"
31ec6066a : Reject contained resources during upsert of FHIR resources
28259e14e : Grant Permissions: Build Health&Fitness grant single granular permissions on Wear
f48560567 : Grant Permissions: Build Health&Fitness grant multiple granular permissions on Wear
b14d8cccf : Revert "Revert "Fix MockedFitnessAppFragmentTest and FitnessAppF..."
c31e3e926 : Revert^2 "Delete HCUserContext."
a8381816f : Remove getInstance calls for AccessLogsHelper.
892591833 : Remove getInstance from PackageInfoUtils.
e22a169d4 : Remove getInstance from HealthDataCategoryPriorityHelper.
64ed6e4b2 : Add PHR API loggings to PWW
43e488cb9 : Invalidate the previous token if change logs are deleted
626f3669d : Force stop test app before tests
c8a20b7e0 : Add flag for merging PHR data in d2d and import/export flows.
a587bb51c : Use soong selects of flag value to conditionally add java_resources
347122411 : Remove getInstance from PriorityMigrationHelper.
03e49189f : Import translations. DO NOT MERGE ANYWHERE
446a16a2a : Create new SuccessDeletion dialog to fix See compatible apps crash
41a81b231 : Fix build breakage.
32b61af3f : Revert "Delete HCUserContext."
d15bc2d65 : Don't drop shell permissions until API calls are done
1cf994686 : Highlights the "No data" view for a11y readers.
bba716d88 : Fix StrictMode violation due to IO operations on the main thread.
bd123ae14 : Delete HCUserContext.
f6f1575fc : Add exercise routes to seed all data button in Health Connect Toolbox
b7c68fea5 : Implement initial version of FhirResourceValidator
f00d89e32 : Update MedicationStatement test data to be valid R4 FHIR
2e5347c45 : Remove user map from TransactionManager.
9017aab3b : Add a test to verify that PHR DB upgrade step is correct
7fa5147e2 : Add two separate flags for excluding PHR tables from export/import and d2d flows.
8d27a5499 : Name worker threads
889564101 : Use deleteAllMedicalData from phr utils in ServiceLogsTests
2eed6ca8b : Move getContributorApplicationsInfo tests to its existing file
f06a1816b : Adds two new flags needed for the expressive theming integration
40f0e0440 : Use generated page token for the subsequent data table backup
5cf8319b5 : Implement ActivityIntensityRecord in Health Connect service.
1c2982dac : Remove unused ReadMedicalResourcesRequest.aidl
d0869fe14 : Update test data to be valid R4 FHIR resources
6bc29b01a : Add a new flag for export/import
dd21bd532 : Revert "Fix MockedFitnessAppFragmentTest and FitnessAppFragmentTest"
3ecc21748 : Revert "Prepare the device for ui tests"
1a6bc2eb0 : Reapply "Extend CloudBackupManager to parse user settings as byte array."
aba716c7d : New IA fixes: inactive apps refresh after deletion; do not show the success dialog twice
0ca98c296 : Import translations. DO NOT MERGE ANYWHERE
9c3a1bd3d : Fixes for the new IA:
853fdd56c : Remove unnecessary adoptShellPermissionIdentity
aa956a99c : Move the existing PHR test utils to phr folder
4627c065b : Make StorageContext constructor private.
de25d9387 : Refactor code so that UsageStatsLogger does not rely on HealthConnectPermissionHelper.
99cec6eab : Fix the failing tests due to missing database table
9d860e262 : Revert "Extend CloudBackupManager to parse user settings as byte array."
09eab62fb : Move PermissionPackageChangesOrchestrator, MigrationCleaner and HealthConnectPermissionHelper to the injector.
05077d915 : Remove unused params from service method.
d9a53bb8d : Prepare the device for ui tests
46e8ab5be : Fix MockedFitnessAppFragmentTest and FitnessAppFragmentTest
f0c04c773 : Set timezone for Deletion tests
b3aee8555 : Extend CloudBackupManager to parse user settings as byte array.
3cc2fcfe6 : Update generate-fhir-spec.py to read from fhir spec
994ed0a38 : Don't drop shell permission until API calls are complete
41980a065 : Add WW telemetry for PHR API method and status
3b76fbd4f : Rename DatabaseContext to StorageContext.
d94c69e96 : Remove DI Flag as it was not bringing value. The Injector was just a layer, and the dependencies are initialized more or less the same way in the Injector as they were outside it (i.e. in the else branch of the flag).
14642b37f : Check in initial setup for FHIR structural validation
399b7416c : Fix flaky test.
7e59f7506 : Fix failing tests in HealthConnectServiceImplTest
e39f06199 : Revert^2 "Define ActivityIntensityRecord in the framework."
7e68dbdf2 : Disable flaky test for M-11 release
ebb6f4f58 : Revert "Define ActivityIntensityRecord in the framework."
80db5d4c3 : Replace UserContext with DatabaseContext.
43fe06e7f : Throw IllegalStateException instead of IllegalArgumentException when package info is not found.
cd92fbb68 : Add helpers for calling async APIs from tests
8e5853bcb : Add more tests for UpsertMedicalResources
027283dd3 : Drop shell permission identity for running shell util function and fix relevant tests
a75432361 : Add PHR M02 flags (entries screen, lock screen banner)
e2d270de5 : Move the existing PHR requests / responses tests to phr folder
09b2b2283 : Define ActivityIntensityRecord in the framework.
d1fe73bd5 : Enlarge the test waiting timeout duration
4d150e8fc : Revert "Define ActivityIntensityRecord in the framework."
eef9634cd : Replace sleep condition with polling checks for ExportImportApiTest
47fe7be2a : Enable the next day button at all times when viewing training plans.
9d577856b : Define ActivityIntensityRecord in the framework.
e42d48374 : Store changeLogsToken, dataTablePageToken and dataTableName as the change token for subsequent backup calls
d224fc1b2 : Move no permission test for UpsertMedicalResources
88c186ba1 : Fix user handling in ExportManager.
ad9f54f36 : Inline history read flag that is always true
ae552969d : Increase test timeout to match 15 sec.
2de5aedd1 : Remove TODOs related to already added CTS tests
6c792a2ba : Inline background read flag that is always true
2db3a17c8 : Import translations. DO NOT MERGE ANYWHERE
38bc93e05 : Remove dependencies on the 1-variant fallback
fe6b8acb3 : Move rate limit test for GetMedicalDataSourcesByIds
fab7560a3 : Fix the flaky tests by resetting app info and access log instances
e2b3c4676 : Move rate limit and memory limit tests for UpsertMedicalResources
5120f94be : Add flag for PHR UI telemetry
0913ff36d : Add missing mock initialization
01573a29d : Fixes time range shown on deletion dialogs and adds tests for the formatting method.
b8557226a : Move the remaining MedicalResourceType tests to its own file in phr folder.
a617bca9c : Revert "Use flag to re-enable HealthConnect on Wear"
881c77aef : Reloads permissions at all times.
b2bbe327b : Copy PermissionController Wear SysUI elements and themes to HealthConnect
75c35ae28 : Add a flag for PHR telemetry
077b331f7 : Fix RecentAccessViewModelTest
35281a12f : Fix ConnectedAppsFragmentTest Failing
a761f966f : Fix the flag in the test
30f1cd57c : Move rate limit test for ReadMedicalResources
4461366a7 : Adds CTS test for app with both medical and non-medical data.
cc07d6074 : Move rate limit test for DeleteMedicalResources
ee62b20d7 : Polish the java docs a bit more according to feedback
aea8b4ead : Revokes permissions in a viewModelScope.
b05543257 : Adds telemetry to the AllEntriesFragment in the new IA.
c2e02dae8 : Add mocks and todos for temporary injector fix
b774799ce : Use flag to re-enable HealthConnect on Wear
341d61cc5 : Add a comment to QueryDocumentProvidersTest.
08ac3969d : Adds CTS tests for the connected app UI for PHR apps.
2faceb005 : Move rate limit test for CreateMedicalDataSourceWithData
3377c350e : Add rate limit test for DeleteMedicalDataSourceWithData
fd89d0a58 : Add additional readMedicalResources permission tests
a16893ed6 : Adds telemetry to the AccessFragment in the new IA.
fa73a0cd9 : Move getMedicalDataSource by ids and upsertMedicalResources tests from HealthConnectManagerTest to phr specific test files.
9f8e2ce46 : Add tests for B&R parcelables
63fa8ff5b : Adds CTS tests for PHR app data screens.
7f4bc8195 : Remove redundant QueryAllMedicalResourceTypeInfos tests
663ea2dbd : Changes request permissions text based on which type of permissions are requested (read/write/medical/fitness/additional).
0a91b357e : Remove copy of SystemUtil.runWithShellPermissionIdentity
053bcdc9e : Migrate getRecordHelper to mappings.
e5e90d4a8 : Adds deletion tests to the new ia screens.
5968f6e19 : Adds CTS tests for all medical entries screen.
282d991b9 : Remove the guarantee of ordering when reading medical data by ids
a77883533 : Adds CTS tests for PHR all data screen.
f1ef46939 : Import translations. DO NOT MERGE ANYWHERE
11af99adf : Add the missing test Rules.
5e1dc722c : Use proper DI for cloud backup manager initialization
fbe37b768 : Move QueryAllMedicalResourceTypeInfos tests to new location
e0680f863 : Add more DeleteMedicalDataSourceWithData tests
cc5565287 : Add remaining tests for getContributorApplicationsInfo.
edea86c2a : Add basic structure changes for first backup call
d70ebef48 : Remove PHR telemetry
d29128288 : Move DeleteMedicalDataSourceWithData no permission tests to new location
fdae45d24 : Adds CTS tests for PHR request permissions.
ca82db699 : Adds CTS tests for PHR and New IA for the Home fragment.
c6ccd7dab : Revert^2 "Move DeleteMedicalDataSourceWithData tests to new location"
a604876be : Revert "Move DeleteMedicalDataSourceWithData tests to new location"
8c63fc40b : Remove tests that are no longer needed.
03c672ad0 : Add extra logging to ImportManager, adding more detail to error messages for debugging.
4a27b011a : Add teog@ and csitari@ to the OWNERS file for the mainline modules.
fb3fd6699 : Adding permission metrics.
322cd9bce : Move HealthConnectManagerTests for readMedicalResources to new location
9d98b6557 : Move DeleteMedicalDataSourceWithData tests to new location
7d99e3d4e : Add spinner in Toolbox app to be able to select which FHIR resource to insert
f7dec05c8 : Create aconfig flags for Activity Intensity.
b8c5fd262 : Add test for creating invalid request using reflection for getMedicalDataSource by request
c3bb160e0 : Add more CreateMedicalDataSource cts tests
7fb377e80 : Fix entries and access tab colours in the deletion flow (light and dark mode)
28be8db2d : Add more test coverage for GetMedicalDataSources by request.
60ee9e38b : Move FirstGrantTimeManager and HealthPermissionIntentAppsTracker to the Injector.
16a69770a : Migrate getRecordHelpers to mappings.
00a1be723 : Revert "Add ktfmt and ktlint hooks to preupload.cfg"
1eb46de3d : Move CreateMedicalDataSource tests to new location
50db2d1a7 : Add tests for MedicalDataSource lastDataUpdateTime behaviour for delete
aa7e42e52 : Add more tests for deleteMedicalResources APIs
f0b1f3120 : Add ktfmt and ktlint hooks to preupload.cfg
ad8455bcd : Migrate VALID_TYPES to mappings.
6f01ed4b2 : Migrate getRecordCategoryForRecordType to mappings.
c7533de7a : Add permission tests for getMedicalDataSources by request.
b863e80f6 : Fix deleteMedicalResourcesByRequestWithPermissionChecks
755dc7a1c : Fix a bug leading to deadlock in CTS tests.
d23208070 : Move the existing tests for getMedicalDataSources by request to phr folder and add support for the API in test app.
76d24c6ae : remove HealthConnectControllerUITests from mts-tests
a2c359fbc : Update OWNERS files for Mainline modules.
9efbcf049 : Import translations. DO NOT MERGE ANYWHERE
9179009c7 : Use HealthConnectMappings instead of RecordMapper everywhere.
42e619499 : Migrate supportsPriority/isDerivedType to mappings.
2fcda9cb7 : Move deleteMedicalResources HealthConnectManager tests to new phr test location
007c03f62 : Adds 'Also delete data' checkbox to the confirmation dialog for revoking all apps permissions from Health Connect
a18bfb2af : Revert "Add extra logging to ImportManager, adding more detail to error messages"
2457024ab : Add missing test for UpsertMedicalResourceRequest#setData
8a96864c3 : Censor message of IllegalArgumentException being sent to logcat to reduce logging of privacy sensitive information.
236a6ebb4 : Remove FHIR_RESOURCE_TYPE_UNKNOWN
712b8d773 : Remove MEDICAL_RESOURCE_TYPE_UNKNOWN
4465e115a : Add permissions tests for readMedicalResources by id
ac6045f78 : Add permission tests for ReadMedicalResources by request
7ee61defd : Add extra logging to ImportManager, adding more detail to error messages for debugging.
8c50baf94 : Add permissions tests for app running in background and using getMedicalDataSourcesByIds.
43e7e6485 : Add new flags.
a54515137 : Add an upsert test for migration in progress
0e5ff00bd : Fix Browse health records summary
d0a4faa50 : Fix footer on empty App data screen
d331da21c : Shows inactive medical apps in the access screen.
49a237c69 : Delete inactive app data from the AccessFragment.
b6586910f : Remove redundant call to deleteAppDataUseCase
7b685bd0f : Enable test for no job scheduled when export is off
42090d73e : Changes display of menstruation period entries depending on the selected date navigation period (day, week or month).
4a4fea834 : Add permission tests for getMedicalDataSources by Ids.
93474aa6c : Migrate Device config flagging to android flags on UI screens
6b6b419ec : Move PHR CTS tests to presubmit
afd648973 : DI Cleanup.
6cd4f4cbf : Fix QA bugs
0fe9d31f4 : Import translations. DO NOT MERGE ANYWHERE
c0d45323d : Fixes for errorprone update
6067d525e : Migrate getWriteHealthPermissionsFor to HealthConnectMappings.
b9c964eb5 : Avoid writing -1 app info id into db
bac231d45 : Use combined PHR flag in all implementation and test files.
3e60d52ac : Improve PHR API docs - part 3
6f363f1e8 : DeleteAppData for revoke all permissions and delete inactive app data
8af39aa09 : Add Cloud B&R APIs and classes for backup
44f090d6b : Add tests for invalid requests using reflection for readMedicalResources
9ac110f07 : Ignore flaky rotation test
fbc9d5c63 : Fix display name extraction for FHIR resources
cb1e56b75 : Add immediate export api
e70a900db : Add seed all data to Toolbox
6dd13a755 : Improved request permissions flow.
8cc92cd77 : Fix database upgrade steps to remove duplicate upgrades.
677a98136 : Update test for Truth8 deprecation.
c1d4c5c30 : Update test for Truth8 deprecation.
da5041175 : Migrate getHealthDataCategoryForWritePermission to HealthConnectMappings.
14291fb8b : Migrate more methods to HealthConnectMappings.
6e53c182b : Fix failing tests due to remaining FLAG_DEVELOPMENT_DATABASE in PHR unit tests
f453ee709 : Migrate isWritePermission() to HealthConnectMappings.
f16ce9faf : Add more readMedicalResources byRequest cts tests
808fe325b : Clarify that upsertMedicalResources will not have any partial upserts
9eab88eb9 : Do not record access logs when called with the manage permission
14c057b76 : Add missing target_sdk_version to PHR CTS test folder
d43c6bf03 : Fix typo in DataPermissionEnforcer exception message
d73542d6f : Add missing PHR APIs to HealthConnectManagerNoPermissionsGrantedTest
6f3717a6f : Add more cts tests for getMedicalDataSource by ids.
39876a0c2 : Fix flaky tests in b/370447278 with BooleanSupplier instead of TestRule
01d885faf : Add flag for Cloud B&R
cd913bdc0 : Import translations. DO NOT MERGE ANYWHERE
4bf77b084 : Add documentation for MedicalResource validation behaviour
c2dbd245b : Remove obsolete TODOs for access logs
884fb44a4 : Fix some errorprone issues around RecordHelperProvider.
fb42aed4c : Update MedicalDataSource's FhirVersion to be NonNull
eb5ec5266 : Remove tests that are no longer needed.
043dc263f : Remove an unused constant.
c343474f3 : Add tests for rate limiting of read APIs.
ff435b96f : Add CTS test for migration in progress blocking read by ids.
0d9b676c4 : Fix a NullAway issue suppression in TransactionManager.
93bc439ad : Change destination for 'Open' button intent when import complete to view all data.
2a6105597 : Add PHR CTS tests to TEST_MAPPING's postsubmit
586c4716a : Fix the failing export import job test
08d53a088 : Logs which permissions are USER_FIXED when requesting permissions.
86ae0f1b2 : Additional permissions screen for apps with medical permissions.
1ebc2c865 : Cleanup ChangeLogsHelper and ChangeLogsRequestHelper.
de15731b4 : Fix flaky PHR unit tests in presubmit by following b/370447278#comment3
ebbd6fd86 : Populate lastDataUpdateTime value from db
46acd72e4 : Add validation for package names in GetMedicalDataSourcesRequest
d8985bba3 : Change MedicalDataSourceHelper to use the new count SQL on ReadTableRequest. Move the common code into a helper method on TransactionManager.
90def63cf : Add CTS tests for read by IDs happy path.
22371a825 : Add missing RequiresFlagsEnabled for phr db flag to tests
0d438ad1a : Bring healthfitness-mainline-presubmit up to date.
a03c2b3c7 : Rename Immunizations to Vaccines, fix names, add toolbox data
b91ef3d1e : Fixes for errorprone update
c8dbb2c68 : Revert^2 "Add remaining count into the ReadMedicalResourceResponse."
a12f59417 : Rename Allergies and Immunizations to plurals.
59f2709c0 : Revert "Add remaining count into the ReadMedicalResourceResponse."
6c0faafbf : Return empty response early if read medical resources with type UNKNOWN
611fea858 : Improve PHR API docs - part 2
6e870a8fd : Add a separate folder for all PHR CTS tests
fbe759458 : Add remaining count into the ReadMedicalResourceResponse.
5a29233c8 : Fix PHR DB upgrade being executed incorrectly when infra flag is disabled.
7b4ef64ed : Validate data source ID in all public APIs
70be010ec : Revoke permissions flow when including PHR permissions.
ed9a40cfc : Add some nullable labels and fix some NullAway warnings in ReadTableRequest.
d9c936bfe : Import translations. DO NOT MERGE ANYWHERE
c8c817d19 : Improve PHR API docs - part 1
5c2377dd8 : Requests permission flags only for declared permissions.
5ac1e4b53 : Use org.mockito.kotlin.eq
4f06bc4fb : Do extra logs deletion only if export is off
96f62ffcf : Run HealthConnectDeviceConfigManager.initializeInstance with runWithShellPermissionIdentity.
cc1569d9a : Add flag for immediate export
f2bde723b : Finalize PHR DB schema changes
f94e7756f : Remove getInstance calls from DatabaseHelper
f4d1850df : Make classes extending DatabaseHelpers non-static.
ac5f8b5ab : Link READ_HEART_RATE android:backgroundPermission to READ_HEALTH_DATA_IN_BACKGROUND
cf9c76024 : Revert^2 "Add access logs for getMedicalDataSourcesByPackage."
afcfb066d : Add CtsExerciseRouteTestCases to MTS and MCTS.
d3336799a : Add icons and update Toolbox apps with newly added data types
63e08d9df : Revert "Add access logs for getMedicalDataSourcesByPackage."
11affc25b : Use org.mockito.kotlin.whenever
5621e8543 : Enforce that FhirVersion of upserted MedicalResources has to match data source
67330368f : Remove fhir_version column from MedicalResources table
e5d4b649f : Add fhirVersion to MedicalDataSource and CreateMedicalDataSourceRequest
ea556da45 : Add a flag for ecosystem metrics.
22c617dab : Add access logs for getMedicalDataSourcesByPackage.
57c08c37a : Update onboarding screen
0a569dd97 : Mark getInstance calls as deprecated.
fc0961677 : Remove existing TransactionManager.getInitializedInstance calls.
791847c7f : Rename PROBLEMS permission to CONDITIONS.
d21657c32 : Add access log for deleteMedicalDataSource.
083d5eef9 : Move ActivityDateHelper to the Injector.
f21116941 : Ignore flaky CTS test
4a8666217 : Fix app data navigation and rename raw fhir screen title
251be5ba1 : Import translations. DO NOT MERGE ANYWHERE
b5800212b : Add Visits (Encounter, Location & Organization) into Health Connect.
41fd5d5b1 : Add Practitioner & PractitionerRole (FHIR) / Clinician Information (Permissions) into Health Connect.
89559a411 : Delete medical app data, filter by app on medical entries screen
07b1722e8 : Add Patient (FHIR) / Personal Details (Permissions) into Health Connect.
fb69a7fe8 : App entries deletion flows.
0a0ba4adb : Fix crash loop in TestAppActivity
33a25b921 : Fix crash loop in TestAppActivity

+- Project: platform/packages/modules/IPsec

4330a6c7 : Update the package name of PersistableBundleUtils
3a04de97 : Renew certificates for ike test
80b81fdc : Include net-utils-framework-ipsec
4a61d0d2 : Disable tests with a test infra limitation
10d292b9 : Add SDK check for the suspend retransmission while mobike.
a1a671e1 : Add ShimUtils for Android W

+- Project: platform/packages/modules/IntentResolver

7cf3fdd7 : Import translations. DO NOT MERGE ANYWHERE
3826f464 : Fix Shareousel crash when recomposing
509339c7 : Add a11y role to the text preview copy action widget
8c481621 : Import translations. DO NOT MERGE ANYWHERE
47d954e0 : Import translations. DO NOT MERGE ANYWHERE
05eac691 : Import translations. DO NOT MERGE ANYWHERE
8046148e : Skip "last-chosen" query for Chooser
f3b4cbcd : Import translations. DO NOT MERGE ANYWHERE
9f9565e0 : Migrate materialColor* attributes into colors
7cdcdaae : ChooserActivity refreshes creator token
c7b80105 : Import translations. DO NOT MERGE ANYWHERE
b24a2c4f : Fix Shareousel not always centering initial selection
23f9e99c : Remove deduplication from Shareousel
83b40ed1 : Send content URI from sharesheet
a216f474 : Apply window inset spacing for new adapters
34f3b5d5 : Reload targets by recreating adapters in response to target pinning.
a942e515 : Import translations. DO NOT MERGE ANYWHERE
40389e40 : Fix shortcut icon loading.
2588fa09 : Set a11y role description for the edit, modify share, and action buttons
dc5cdc53 : Make DefaultTargetDataLoader assist-injectable
d9b3371b : Import translations. DO NOT MERGE ANYWHERE
76271ceb : Remove android.service.chooser.chooser_payload_toggling flag
8f00ea03 : Import translations. DO NOT MERGE ANYWHERE
971b4d12 : Save Shareousel state.
66c6c352 : Add keyboard focus outline for Chooser targets
b134f387 : Add Launcher hover effect
de09e2d7 : CachingTargetDataLoader to cache bitmaps and not drawables
7b0cf423 : Update test for Truth8 deprecation.
f6400db5 : Retrieve URI title metadata through separate query calls
148852fb : Add other null check in decorateActionFactoryWithRefinement
4779050b : Import translations. DO NOT MERGE ANYWHERE
1ad19a6f : Fix keyboard navigation for the nested preview scrolling
66f23786 : Import translations. DO NOT MERGE ANYWHERE
eb35273e : Update Chooser drawer max width value on config change.
a2cecbe1 : Do not store the initial intent's extra in the saved state.
62e54954 : Remove FLAG_ENABLE_CHOOSER_RESULT (IntentResolver)

+- Project: platform/packages/modules/Media

287aa4e : Flag library based on class name for the flags
50344f7 : Fix namespace of "mediametrics_to_module" flag
a889a61 : Create "mediametrics_to_module" flag

+- Project: platform/packages/modules/NetworkStack

b253c212a : Import translations. DO NOT MERGE ANYWHERE
a29da6b6c : Add `updateMulticastAddr()` to handle multicast address changes
dc0ad332e : Add `getIPv4MulticastAddresses()` helper function
cdeda6c0d : Add an experiment flag for the DHCPv6 p-flag feature.
1c37811ff : Parse RTM_NEWPREFIX netlink message in IpClientLinkObserver.
4853e0c39 : Bind RTMGRP_IPV6_PREFIX group to listen for RTM_NEWPREFIX message.
225943179 : Remove deprecated `getTransmittedPacket()`
54c2229f8 : Wait for expected callback in DDR mode change test
9ea0a5025 : Add IGMP APF counter definition
2f16d7950 : Import translations. DO NOT MERGE ANYWHERE
9f5578334 : Import translations. DO NOT MERGE ANYWHERE
9032afd88 : Replace `getTransmittedPacket()` by `getAllTransmittedPackets()`
899154f31 : Support `getAllTransmittedPackets()` for multiple allocated buffers
f1450477a : Modify transmit related JNI due to allocate buffer structure change
9dd202e03 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
f36967f13 : Revert "Set the default value for accept_ra_min_lft to 900s to watches"
2ad6a3475 : Ignore never-reachable neighbours by default.
2bc3e2a0a : Disable IGNORE_NEVER_REACHABLE_NEIGHBOR in more tests.
5ee43bf0e : Disable IGNORE_NEVER_REACHABLE_NEIGHBOR in tests that need it off.
5dc3da8cf : Consistently use Flag annotations for mocking feature flags
ab6c260c0 : Remove unused mUniqueCounter
f9602e8f8 : Only accept certain system properties for the DHCP custom hostname.
6aa613591 : Add a test for the DHCP hostname preferred properties.
1b08b4f26 : Provide RRO configuration to customize preferred properties for filling DHCP client hostname option
5670f3742 : Disable ip_reachability_ignore_organic_nud_failure in some tests.
9724a2a7a : Set the default value for accept_ra_min_lft to 900s to watches
e34a640c4 : Revert "Add integration test for link-related NetlinkUtils methods"
aacbd6071 : Remove unused function
0897039c6 : emitPrologue is not called by tests
7d5d657d2 : secondsSinceBoot is not called by tests
b0c98ca3b : s/emitPrologueLocked/emitPrologue/g
f0be694de : s/generateMdnsFilterLocked/generateMdnsFilter/g
4ad126975 : s/generateIPv6FilterLocked/generateIPv6Filter/g
6118e310c : s/generateNsFilterLocked/generateNsFilter/g
78a36075d : s/generateNonDadNaTransmitLocked/generateNonDadNaTransmit/g
80de0b37b : s/generateV4TcpPort7FilterLocked/generateV4TcpPort7Filter/g
e6d83d20e : s/generateIPv4FilterLocked/generateIPv4Filter/g
64b23683c : s/generateArpFilterLocked/generateArpFilter/g
2a81224ea : s/generateFilterLocked/generateFilter/g
4e5f918c7 : Remove unused function
ad33ec3dc : s/installNewProgramLocked/installNewProgram/g
26d23c374 : Remove locking semantics from ApfFilter
4e9cc7cfc : Rename TapPacketReader to PollPacketReader for broader applicability
515f0a703 : Reduce the performance impact of reading APF counters
91e4d86bf : Fix typo and improve comment.
e3567a2af : Don't keep storing NUD failures indefinitely.
06b2f9c3c : Refactor runIpReachabilityMonitorAddressResolutionTest.
c6b23e623 : NetworkStackNotifier is not nullable anymore
63174362b : Remove more R+ checks
748837c79 : Remove isAtLeastR() which is always true
8b3d34cff : Remove Q checks in IpClientIntegrationTest
cbe1060e7 : Use accept_ra_min_lft config for RDNSS lifetime
07628782a : Remove CMD_UPDATE_L2KEY_CLUSTER which is unsupported on R+
587f7d4ec : Remove isAtLeastR checks which are always true
2f9097bc9 : Update ApfFilter dump to reflect hardcoded allowlist in Android V
3eca209b6 : Prefer framework-annotations-lib dependency
50412196d : Add notes to APF counters to indicate errors and warnings.
ae0a0f6cf : Add integration test for link-related NetlinkUtils methods
5535eb4d4 : Add network_stack commands for packet capture and match
3cc92a0a2 : Fix APF_PROGRAM_ID counter off-by-one error.
effdfdc8e : Add `RawPacketTracker` for packet capture and validation
1175ae61a : CaptivePortalDataShimImpl#isSupported() is now always true
e96bc8471 : Remove another outdated check for Android Q
aa84cfbf5 : Remove api29 CaptivePortalDataShimImpl
0d9a8ff00 : Log the NUD failure event counts ignore/query metrics with expresslog.
820b03f16 : Remove api29 BroadcastOptionsShimImpl

+- Project: platform/packages/modules/NeuralNetworks

b4a598082 : Revert platform jar to state before module migration
733a4d69f : Add more NNAPI OWNERS.
f7831ad62 : Remove Androidbp-specific visibility from neuralnetworks apex
447553661 : Fix classname conflicts using jarjar and remove lint baseline
b86aa664f : Add jar targets for ondeviceintelligence to NeuralNetworks APEX
6e4b6dcb9 : Set llndk.moved_to_apex explicitly
de0973e15 : Add OWNERS file for framework and service directories.
602e239aa : android.hardware.neuralnetworks-service.example_fuzzer: Update fuzz_config

+- Project: platform/packages/modules/OnDevicePersonalization

ea118d11 : Refactor part of the authorization related classes to allow reuse from within ODP APK.
45a8e612 : Remove redundant copy of constants to reuse definition in common/ package instead.
a4f8857a : Require a functional ODP module for CTS tests.
2c921ccb : Revert^2 "Mark @FlaggedApi flags as exported"
9a904b57 : Revert "Mark @FlaggedApi flags as exported"
ce70bc2e : Update imports to reflect changed shared location of UploadInstruction proto.
bea58952 : Use FederatedComputeTraceEvent atom to log encryption key fetch background job execution.
6e148574 : Mark @FlaggedApi flags as exported
30f8e13d : Switch to new client error logging API that logs the root cause of the Throwable
6df67847 : Mock the MonotonicClock class in OdpExampleStoreServceTests.
20d5a65e : Unhide queryFeatureAvailability API.
d22d17c5 : Fix potential issues with flag mocking in OdpManagingServiceTest.
195cc401 : Unhide executorch inference API
b6def37f : Refactor API name to be queryFeatureAvailability.
425c60ab : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
1679b190 : Add debugging info for MinSeperationPolicy when deciding eligibility
69677597 : Add logging for invalid sourceAdId
18c34dea : Add isFeatureEnabled implementation to ODP Manager.
56446c43 : Add debugging info for not-enough training examples
0c505678 : Add missing ABSL includes.
1f3728d5 : Add FeatureStatusManager implementation.
2b143287 : Remove data class in InferenceInput
4c03a160 : Update log severity of onBindingDied and onNullBinding
21357823 : Add PhFlags for ODP isFeatureEnabled.
f9266139 : Introduce delay between AppRequest/Render/WebView flows to reduce flakiness in requestSurfacePackageSuccess test
88dd62a4 : Add aconfig flags for ODP isFeatureEnabled API.
132134a9 : Add executorch inference api flag
6062c228 : Add client error logging for getAdServicesCommonStates remote exception
401d31ae : More specific tracking of encryption key fetch failures during training.
0eb92bd0 : Update flag key names with odp prefix for aggregated error reporting to match new GCL.
a2e32580 : Add delay between 2 service flow runs to reduce MTS test flakiness
23e2739f : Update ModelManager API to support executorch model
19b0203d : Add flag guarded ability to encrypt aggregated error data.
42aa863a : Update test with proper flag environment mocking.
f8f56c4f : Make sure IPC calls complete before unbinding isolated service
f1c1853a : Skip this test in CTS
0b0f9cec : Update new flag name to match naming convention.
988785f3 : Add blueprint format to upload config and fix existing bp files
7e310e71 : Remove unused system service code.
f17c84f7 : Update debug logs in broadcast receiver
e0472c49 : Enable isolated service debugging in CTS.
ea79af66 : Add non-null assertions in test case.
22a7bf0e : Add test coverage for error reporting.
e00c0012 : Move static ProcessRunner instance to factory class.
28f7b2d5 : Add a new flag to control whether plugin should be used.
07e016d3 : Add a check for the shared isolated process flag inside the runner.
fe56154f : Update download flow to remove file groups on successful processing and valid failure cases. Trigger MDD garbage collection after successful download processing.
fedd45c5 : Temporarily disable PhFlagsTests for debugging.
8eae8912 : Fix flag mocking in unit test.
f624270c : Report interrupted exception during AdServices IPC call
b74f0eda : Remove TestableDeviceConfig from test cases that do not need it.
91de21de : Cleanup unused code
5983ffc1 : Use blocking executor for http request
d3d1a08a : Increase default timeout in ResultReceiver.
8decbcaa : Do not throw while fetching error message in ResultReceiver.
a1912d40 : Add MddLogger to MobileDataDownload for ODP file group metrics.
ead887ec : Refactor the FCP encryption code to move core logic to common/ for sharing with ODP apk.
268f754a : Add OdpTraceEvent logging.
ebb177a6 : Catch Throwable during isolated process execution
ca6e8598 : Increase wait timeout in ExampleStoreServiceTests
7b040a22 : No-op refactor to allow moving encryption dao and key manager code to common/
4f28f9ca : Offload ODP clients calls to ODP manager into other threads.
f588fbe5 : Address feedback from original change: add try/catch blocks and remove the singleton instance and cached state in the Worker class.
1616989f : Remove TestableDeviceConfig from test
e60d0538 : Minor cleanup to attestation generation code.
0c5bdb18 : Expanding error logging in DataAccessService
882e9c4d : Remove TestableDeviceConfig from test
9dc641c0 : Add cts test for FederatedComputeScheduleResponse
514d1f52 : Fix error logging in LocalData
c17a3eca : Add aggregated error code reporting logic.
3d33f22c : Unwrap cause inside ExecutionException for IPC call status reporting
03245ece : Minor no-op cleanup to ODPBR test.
903a9ca7 : Mock flag calls in tests
600ae205 : Add verbose logging to test executions
7f218bfc : Clean up launched flag ON_DEVICE_PERSONALIZATION_APIS_ENABLED.
b59c7406 : Update the ODPBroadcastReceiver to also schedule the aggregate error reporting job service.
15b9b284 : Remove test arms that are not exercised in our implementation (We only run Plugin on T and all U+ traffic uses SIP)
ff8806e7 : Increase Bind service timeout in pluginExecutorService
22f11397 : Refactor test to do less bind/unbind, to eliminate race conditions.
680b0748 : Add more debugging information in error logs
d48441c2 : Return IsolatedServiceException in sample app

+- Project: platform/packages/modules/Permission

9996aa176d : Import translations. DO NOT MERGE ANYWHERE
a6309910d9 : Add traces for RoleService
bc9ec8c7e1 : Relax test for opening location page from subpage
4ef96f332d : Run ktfmt --kotlinlang-style on Safety Center tests.
33dc5c7223 : Add in-call version of ECM dialog, remove new ECM api
862385154e : Add ACCESS_FINE_POWER_MONITORS permission
3496dde2fc : Added PERMISSION_READ_BRAKE_INFO to CTS permissions manifest
72d0feb59c : Added PERMISSION_READ_CAR_PEDALS to CTS permissions manifest
449b4e561a : Added read and write CAR_HORN permissions to cts manifest
817b05e7ab : Added PERMISSION_MILEAGE_3P to CTS permissions manifest
0b1cc7e418 : Added read and write EXTERIOR_LIGHTS_3P permissions to cts manifest
495a4ef754 : Import translations. DO NOT MERGE ANYWHERE
14feb1682f : Enable secondary_user_on_secondary_display for CtsPermissionUiTestCases
ede8f8fb78 : Enable secondary_user_on_secondary_display for CtsPermissionUiTestCases
4fbb5919e2 : Ensure cross-user roles are not available for private space profile/user
769838e9a9 : Add get/setDefaultHoldersForTest and is/setRoleVisibleForTest
0edca03416 : Add permission READ_SUBSCRIPTION_PLANS for policy
6ae0e0e557 : Add permission READ_SUBSCRIPTION_PLANS for policy
887cc8cec2 : Added new permission to the test to align with framework change
cbfc37f10e : Add API constant for in call blocking dialog
ca10d4e31f : Add Execute App functions device policy permission to the policy manager role
609438c856 : Import translations. DO NOT MERGE ANYWHERE
968c0d01d8 : Import translations. DO NOT MERGE ANYWHERE
065e008e85 : Introduce the role SYSTEM_VENDOR_INTELLIGENCE
325cfab5bf : Fix flag name
ce799bcddc : Disable default and fallback role holders for non-active users of profileGroup exclusive roles
024ecceb21 : Use an admin flag for the COPY_ACCOUNTS permission
866b5fdb93 : RoleParser role exclusivity requirement
821d181e38 : Add new signature permission for Population Density Provider
2ae66778af : Add permission for color zones
c39032e4a1 : Enforce profileExclusive available on full user
4cde0656a4 : Import translations. DO NOT MERGE ANYWHERE
76b38bd1c3 : Import translations. DO NOT MERGE ANYWHERE
389d1d1f9f : Revert "Use an admin flag for the COPY_ACCOUNTS permission"
60d5f4c3f3 : Add a check to ensure that preferred activities are not configured for profile group exclusive roles.
683f46f23f : Import translations. DO NOT MERGE ANYWHERE
42590ad3a7 : Import translations. DO NOT MERGE ANYWHERE
b5abf5e3b9 : Import translations. DO NOT MERGE ANYWHERE
ffb511a474 : Import translations. DO NOT MERGE ANYWHERE
cd8fef13a5 : Use an admin flag for the COPY_ACCOUNTS permission
d0fb4d028a : Revert "Add hidden SystemAPI permissions required for copyAccount and removeAccount"
313830cc23 : Catch errors related to Telephony being unavailable in ECMService
df5008d7ea : Set activeuser when setting role holder
56274716ae : Use profile labels as strings for default app sections for private and managed profiles
16bbb03799 : Enable secondary_user_on_secondary_display on CtsRoleTestCases
5ae6f789b7 : Enable secondary_user_on_secondary_display on CtsRoleTestCases
4b168ae5c4 : Remove role from protection level for BYPASS_CONCURRENT_RECORD_AUDIO_RESTRICTION permission
52a428c092 : Add permissions to cts tests
f08a96b017 : Import translations. DO NOT MERGE ANYWHERE
5ac164984e : Add splitPermissionsSystemTest for READ_HEART_RATE
5d56352edd : Add multiuser tests for profilegroup exclusivity
475e05b978 : Add new unknown call apis in ECM, add InCallService to track
e5162bf79a : Change SignaturePermissionAllowlistConfigTest assertion to Log.w
d338c133bc : Add owners for new role multiuser tests (cherry picked from https://android-review.googlesource.com/q/commit:45ece552fb712378aea59192292e64b6b421b0ac)
45ece552fb : Add owners for new role multiuser tests
8e087dea5e : When replaceBodySensorPermission flag is on, disable splitBodySensorPermission test
4a39c5cca9 : Implement initial Settings expressive style integration in Safety Center.
f1788f68f0 : [AAPM] Rename SET permission to MANAGE
56d0527d7a : Add ADD_MIRROR_DISPLAY permission to APP_STREAMING role
481bb512df : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
8fe26ea007 : Get and set users for cross-user roles in RoleService
a905bb653e : Permissions CTS: Remove Version SDK check in test
a35450fa5a : Permissions CTS: Fix test for upcoming permission split.
08ee1006b0 : Revert "[PermissionPolicyTest] add VERIFICATION_AGENT permission"
9cf2aa0773 : Revert "Add missing permission declaration to fix PermissionPolicyTest"
7d0790bb4a : Permission controller Material3 flag change
0093b23bbe : Import translations. DO NOT MERGE ANYWHERE
dded518b62 : Import translations. DO NOT MERGE ANYWHERE
5015852ae1 : Suppress NewApi warning until SdkLevel 36 is available
ba16e7ec78 : Update minSdkVersion=Baklava for health permissions background permission test
58f82b3ab3 : Add hidden SystemAPI permissions required for copyAccount and removeAccount
00b75b06b0 : Add BYPASS_CONCURRENT_RECORD_AUDIO_RESTRICTION permission
9a98da8c84 : Add SecureLockDevice permission to manifest
3018ab9680 : [ID] Replace "forensic" with "intrusion detection"
24f197c6d1 : Allow host permission dialogs to show on virtual devices
ba271c640f : update OWNERS
9072d0d6c6 : Suppress deprecation warnings for unsafeCheckOpNoThrow API call
ee9b513d01 : CTS test of updating the SET_DEFAULT_ACCOUNT_FOR_CONTACTS's permission protection level to knownSigner.
576d8ecd8c : Role re-evaluation should respect ask every time choice by user
04e3f30193 : Role re-evaluation should respect ask every time choice by user
dd962fc881 : Create new role for dependency installer.
96a37882f0 : Update policy for new permissions for applying and listening to picture profiles
3c7d11b5ec : Change filtering method that decides if perm group is used
365fd8b34b : Add notifications permissions to device streaming role
974be1cc69 : Remove isTagIntentAppPreferenceSupported permission test
b148f0c79c : Allow RoleBehavior to override exclusivity
f30b16c764 : [Forensic] Add permissions for CTS
f3a1ca0252 : Add android.permission.DYNAMIC_INSTRUMENTATION
51ee1af5f9 : Add appop protection level to WRITE_SYSTEM_PREFERENCES permission
63db70759f : Permission Dialog Icon Size change
46d473e56b : Add required permissions to android.app.role.SYSTEM_SUPERVISION.
6a5f6c8b10 : Move UserHandleCompat to framework-s for more convenient access
7d29232bfc : Update the permission CTS test
dec812a773 : Material3 component Simplification
f04b4446f1 : Import translations. DO NOT MERGE ANYWHERE
417edb12f2 : Import translations. DO NOT MERGE ANYWHERE
7f27dc5ff6 : Add Cross-user role persistence
40cb196a14 : Add eventually to testCardClickOpenPrivacyControls
14df936d0c : Add eventually to testCardClickOpenPrivacyControls
24e95993ef : Move USER_NULL to UserHandleCompat
69a9c56e44 : Supporting Material 3 UI in Role Request Screen
dec13bc178 : Add android.permission.TV_IMPLICIT_ENTER_PIP
4cb1f5d332 : Enable testSetWifiEnabled for passenger
160f7c6d52 : Ignore CameraMicIndicatorsPermissionTest for Multi-user-multi-display (MUMD)
8c7714a6e9 : Revert^4 "Add Cross-user role support xml parsing"
0c8594c30c : [Ranging] Add new permission for Android generic ranging.
1f8fc0ad09 : Define new permissions for Settings preference service access
8e29874c66 : Introduce permission to start vibration sessions
454a679a09 : Health: Enabled HealthConnect PermissionManager on Wear behind flag
5dc85d6a2c : Add missing permission declaration to fix PermissionPolicyTest
71b5e5bbb8 : Revert^3 "Add Cross-user role support xml parsing"
e0ffe7c7d2 : Declutter permission manager
b538b59e08 : Remove invalidGrantedUsedHealthConnectPermissionsAreListed test
561a43f41a : Import translations. DO NOT MERGE ANYWHERE
98e60aefec : Consider one time permission flag during backup/restore
9533206e00 : Health: Redirect WearAppPermissionsGroup view to H&F UI
d60d6806c1 : Revert^2 "Add Cross-user role support xml parsing"
85661e9741 : Move shared User util methods to UserUtils
30216160fc : [Ranging] Add new permission for Android generic ranging.
84bc75e061 : Revert "Add flags check rule to permission timeline unit test"
415e7cd9f6 : One time permissions should be counted as user choice
b5a2cfc3b9 : Revert "Add Cross-user role support xml parsing"
2258d6daf8 : Add Cross-user role support xml parsing
30d871d7ff : Revert "Ignore HibernationPolicyTest#onReceive_shouldInitializeAndAdjustStartTimeOfUnusedAppTracking"
2feb21569b : Ignore isPackageExempt unit test
71e09e5ad1 : Role re-evaluation should respect ask every time choice by user
1cf74b114e : Import translations. DO NOT MERGE ANYWHERE
51efffb03c : Adding SINGLE_USER_TIS_ACCESS permission in CTS.
795022cc68 : Revert "Clean up unused lib/rules in build file"
4ccb6f50d7 : Disable testSetWifiEnabled for passenger on MUMD devices
771f55d6ae : Import translations. DO NOT MERGE ANYWHERE
5fec37d1f1 : Import translations. DO NOT MERGE ANYWHERE
40ad34ecbf : Don't override user choice during OTA upgrade
9dc2b4bbc6 : Fix OneTimePermissionTest to use same thread.
a773db286e : Revert "Roles: Add pre-grant SkinTemp permission to WHS role."
a23a2efed7 : [EXP] Add flag: expressive_design_enabled
f9d5be93c7 : Update flag comments
ad33abbe6c : Add flags check rule to permission timeline unit test
6e91b355dc : [RESTRICT AUTOMERGE] Remove some mcts tags to align with main and CTS R1
7e1f7097f4 : Role re-evaluation should respect ask every time choice by user
7524fd5731 : CTS: Add permissions for media quality service
4e5889e2d0 : Clean up unused lib/rules in build file
6f54bdea22 : Add android.permission.ENTER_TRADE_IN_MODE
8e50885d31 : Remove dependencies on the 1-variant fallback
16290777dd : Import translations. DO NOT MERGE ANYWHERE
4f33250974 : CTS updates to acquire permissions from app-streaming/device-streaming roles
c201ad868e : Add safety source for Certificate Transparency
b29ff293dd : Inject device config parameter for tests
c33ad2d1cf : Roles: Add pre-grant SkinTemp permission to WHS role.
4e1b337231 : [AAPM] Introduce new Service for Android Advanced Protection Mode
d04df17f09 : Implemented a screen size check to prevent failures on small screens.
68952b9420 : remove multi-user and enterprise deprecated functions from DeviceState
29abf9d099 : Import translations. DO NOT MERGE ANYWHERE
bb2220efca : Add missing LocationUtils mock to PermissionUsageDetailsViewModelTest
089480bb75 : Disable A11y services before running ReviewAccessibilityServicesTest
cb4e88735b : Supporting Material 3 UI in Permission Grant Screen
2a180d47d3 : Material 2.5 theme based on Material3 resources
71d7e67dbe : DO NOT MERGE Disable A11y services before running ReviewAccessibilityServicesTest
12cf97611f : Ignore HibernationPolicyTest#onReceive_shouldInitializeAndAdjustStartTimeOfUnusedAppTracking
b53a322334 : Add flag for decluttering permission manager page
9c7d8fa507 : [BC25] Allow link footer to be underlined
15315a5b11 : Add BackgroundPermission cts test for health permissions
d7c866095f : Allow role protected permissions to be granted even if not requested by factory APK
0e79f36ef7 : Add health permission set to roles.xml and grant WHS by role
5fd3538358 : Import translations. DO NOT MERGE ANYWHERE
b4a8d5866a : Add Cross-user role support flag
5b7968bf1f : Revert^4 "Drop VDM permissions from Shell role."
79b8f371f8 : Ignore GetPermissionGroupUsageDetailsUseCaseTest#emergencyAccessesAreNotClusteredWithRegularAccesses
0f45c1a6fe : Revert^3 "Drop VDM permissions from Shell role."
899eec59a3 : Revert^2 "Drop VDM permissions from Shell role."
cff790e6f7 : Revert "Drop VDM permissions from Shell role."
16c7b0c584 : Import translations. DO NOT MERGE ANYWHERE
125639b26e : Fix RoleParserTest for R MTS
58b790b358 : Add ignoreDisabledSystemPackageWhenGranting for SYSTEM_SUPERVISION
93d445a8b6 : Add missing permission declaration to fix PermissionPolicyTest
0af5beaaae : Adding Flags for wear Material 3 migration
d3ced8a1b5 : Drop VDM permissions from Shell role.
10d832e73d : Adding Material 3 theme
32ec3b253d : [PermissionPolicyTest] add VERIFICATION_AGENT permission
7fa9cf4099 : Mark packages/modules/Permission apps in updatable apexes with updatable: true
5fd9eb6c4f : Revert "Add BackgroundPermission cts test for health permissions"
fb61df7a52 : [BC25] Add overlay styles for two preferences
ed4b834dab : Add mocking tests to hibernation TEST_MAPPING postsubmit
5152677aec : Add BackgroundPermission cts test for health permissions
bcd36031ea : move multi-user and enterprise annotations to the modules
f9014b933c : Reinstating capitalization of the subtitle text in permission controller screens
27884097f7 : Show attribution label from location provider only on permission timeline page
bf5da766a5 : Fix CtsRoleTestCase for devices without browser.
64ba833519 : Add vigneshrsastra@ to PermissionController/WEAR_OWNERS
e6fa876ff3 : Import translations. DO NOT MERGE ANYWHERE
c3bbe1fc60 : Import translations. DO NOT MERGE ANYWHERE
624e540b67 : Exempt caller apps from hibernation
db39e27bca : move multi-user annotations to bedstead-multiuser module
23b97b4d74 : Import translations. DO NOT MERGE ANYWHERE
9d915fc6ba : Auto: permission timeline page reuse modern architecture impl
b1000c3c2d : Use waitForBroadcasts() instead of Thread.sleep()

+- Project: platform/packages/modules/Profiling

b2e9c45 : Improve system triggered api and docs
5ac8a95 : Add redactor to apex
1c8a225 : Fix failing tests
e4aef09 : Update start trigger api name
03773e7 : Add aconfig library for external sources
199004e : SystemTriggered Trace Scheduling
559f8ca : Move processtrigger call off of calling thread
cc8ea7a : System Triggered Profiling MCTS
4253ca9 : Hook up system triggered profiling APIs
5b0619f : System Triggered Profiling APIs
a5aa2cd : Clone wait for process to finish
1b47837 : Replace system triggered flag with read only
2386366 : Add system triggered profiling test commands
148ea06 : Persist app triggers
c393f16 : Update rate limiter for system triggered profiling
44fe6aa : Add infra for storing app and using app triggers
c49abd3 : Add infra for starting and cloning system triggered traces
58e413f : Update TracingSession for trigger
8348126 : Move app file path out of service
be82764 : Add system triggered profiling flag
7d7a7b8 : Persist queued results

+- Project: platform/packages/modules/RemoteKeyProvisioning

677210c : Update PeriodicProvisioner and RegistrationBinder with Trace annotations
eb044c2 : Unbind service connection in case RKPD times out.
2d466ab : Remove dependencies on the 1-variant fallback
24b000d : Add Android OS information to the user-agent string for Widevine.
92a1247 : Mark packages/modules/RemoteKeyProvisioning apps in updatable apexes with updatable: true
572c395 : Skip this test if either system or vendor version is pre-Android U

+- Project: platform/packages/modules/Scheduling

2242620 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
70e10a4 : Remove dependencies on the 1-variant fallback

+- Project: platform/packages/modules/SdkExtensions

738fa51 : Bump SDK Extension version to 16
ec6ef00 : [HealthFitness] Add Activity Intensity data type to sdk extensions.
1096c41 : Add new symbols to SDK extensions info file for mediaprovider jar
7279329 : [HealthFitness] Add health connect PHR APIs to sdk extensions
9e81006 : Update OWNERS to match SDK_OWNERS
399206d : Remove dependencies on the 1-variant fallback
268577d : sdk-extensions-info-test: improve error messages
b4e3142 : sdk-extensions-info-test: check for unexpected whitespace
8eb4de9 : Make the matching pattern for AppSearch broader.
6ba415d : Make the matching pattern for AppSearch broader.

+- Project: platform/packages/modules/StatsD

8c6ca66b6 : Fixed StatsdStats lifetime to be available for worker threads
ac0d5f4c4 : [StatsD IoUring] Fixed aconfig flag definition for iouring
560f86d09 : Updated CtsStatsdAtomApp for Sdk 35
7d1b6dab0 : Fixed StatsdStats lifetime to be available for worker threads
b26ac1c41 : [StatsD IoUring] Create Aconfig flag for enabling iouring for statsd
394b9a934 : libstatssocket: disable build for Darwin
8acd18204 : Added version_script for statsd libraries
5f3aac313 : Updated statsd lib unit tests structure
a6d9f9343 : Applied ownership attribution recommendations
7f246319a : libstatssocket: disable build for Darwin
a070a193a : Added TeX specific benchmarks
e90bd07e5 : Added Atom Id to atrace message
5701a259d : Move TestExtensionAtomReported & TestRawAtomReported to statsd cts
e332fbae7 : Added version_script for statsd libraries
2a01b9e1b : Updated statsd lib unit tests structure
7246f51ad : libstatssocket: Atom BootTimeEventElapsedTime enforce bypass queue
c5bc20c79 : Test to confirm pull callbacks are grouped by atom id before invocation
52bb87754 : Reduce the initial size of the buffer to 512 bytes.
30e7d6975 : Added TeX specific benchmarks
afef3f24b : Move TestExtensionAtomReported & TestRawAtomReported to statsd cts
4f7bd9eb7 : Explicitly use implementations where necessary for modules in the same apex
e5c3b99ea : Added Atom Id to atrace message
66364669b : Replace test_for properties with explicit dependencies on implementations
3957f49bc : add null check for possibly null intent actions
f3e4ac75a : Remove dependencies on the 1-variant fallback
abf2296d8 : Support StatsD Java API on Ravenwood (statsd)
0ef977dbf : Add ability to specify uid fields in the metric
c3271ca01 : Remove unnecessary parentheses from default values
fa04ce421 : Tests for removing unused uids from uid map
464305040 : Use the optimize_for_size property instead of "-Os" cflag
712f0b4f2 : Correct team assignment for statsd
245744aa6 : [statsd] Update the estimate byte size to account for the data corrupted reasons
ba94e1828 : [statsd] Update the estimate byte size to account for the data corrupted reasons
ffe16d1b2 : Remove lock from StatsEvent.Buffer acquisition

+- Project: platform/packages/modules/Telephony

f102648 : Remove TelephonyStatsLib from apex
1656488 : Remove hidden API usages in VcnTransportInfo

+- Project: platform/packages/modules/UprobeStats

80c135b : Conditionally use flags in mainline container
600e8d7 : Add uprobestats_mainline_flags_java_lib
f48ca31 : Add uprobestats-mainline-presubmit
d1b462d : Fix: use executable_method_file_offsets API in Guardrail
367ebd2 : Fix: remove debug logging from UprobeStats
660d38e : Call ABinderProcess_startThreadPool
2b1bc59 : Update per const changes to dynamic_instrumentation_manager
33de0b3 : Mirror uprobestats system flags in Mainline
02624d0 : Add an EXECUTABLE_METHOD_FILE_OFFSETS flag to mirror ART
37c4995 : Integrate with getExecutableMethodFileOffsets API
5f4f9b5 : Create uprobestats-protos for use in CTS.
66ebcc6 : Fork platform bpfloader to uprobestatsbpfload
1c885d3 : Move libuprobestats_client into Mainline
f0ba53d : Add uprobestats files to its apex
9fdfac7 : Fork bpf_header and bpf_syscall_wrappers
eccff6b : Add a placeholder uprobestatsbpfload binary
d11af24 : Use ctl.start to start uprobestats service
a55a86a : Remove unnecessary copy & paste.
756add7 : Log updateDeviceIdleTempAllowlist via Hummingbird

+- Project: platform/packages/modules/Uwb

1842e422 : ranging(test): Make CS tests one way to reduce failures
2c48f1a5 : ranging: Handle CS report correctly
5faea1e2 : [ranging] Infra for oob sessions
26f576f2 : If there is nothing to set for hop mode key, don't put hop_mode_key on tlvbuffer
18959645 : [Ranging] Update correct ranging params
ab006abf : set random hop mode key for aliro
205b241c : add channel with prioritized channel if the channel is supported
df0c91b5 : ranging: Fix issues in BLE multi device test setup
39d5af63 : ranging: Jarjar guava library and support library
bbf9ddbd : set random hop_mode_key
15631c5b : Enable ranging cts tests
52df3ff8 : [Ranging] Wire channel sounding to get cts tests working
6e24d653 : [Test] Add a delay between adding and removing controlee
4a824ccb : [ranging] Copy in updated OOB objects
46a707f9 : [Ranging] Add cts tests and mechanisms to enable technology
d1dfdcb6 : [Ranging] Expose setSessionConfig and CsCapabilities.
d82597c7 : [Ranging] Use a single thread for all binder API calls.
92b0bc3c : Add bugreport collection for test failures
b2746489 : [uwb_snippet] Fix UWB Privilege permission
09908e37 : ranging: Use helper method for adopt/drop shell identity
b1ef60a7 : [Ranging] Enforce Ranging permission in grapi.
9b3c1706 : [Ranging] Improve API documentation and some minor updates
36795c2e : [ranging] Tests for multicasting
e9dea813 : [Ranging] Disable RRRM by default in grapi
98fa4166 : [ranging] Support multicast sessions from 1 adapter
c1028535 : [Ranging] Expose all remaining APIs for ranging
23aeb2a7 : Add Wifi Framework to ranging_aconfig_flags_lib visibility
b18ab1b1 : [ranging] Initialize capabilities adapters lazily
47eb020c : ranging: Add BLE CS test
5e4e5da9 : ranging: Add debug data
b0dab920 : Remove many-to-many option from multi-node Mode device capability parameters
9a7437ca : uwb: Handle MCC/MNC override for multi sim devices
11f8682a : [Uwb module] Add overlay to handle CCC 2-byte little endian config id.
23139f65 : uwb: Add support for handling MCC/MNC regulation
01b27876 : cs(adapter): Delay fetching of capabilities from BT stack
2602752c : Revert^2 "Add BT CS adapter"
7d1f173c : Add session state validation before invoking stop ranging
bc7d01b2 : Revert "Add BT CS adapter"
3f11a73c : Add BT CS adapter
757937d2 : [Ranging] Add new session configuration class
328747c4 : uwb: Ensure we clear all session on hw disable
0116dbcf : [uwb-cts] Add coverage for queryUwbsTimestampMicros
784883e1 : make ranging_aconfig_flags exportable
da58095f : Reason code addition for session stop after sts index reaches max value
8f67b4b2 : [Ranging] Remove @see tag
1601cb51 : [Ranging] Remove aoa and hprf support in config id
deb79547 : [Ranging] Add BLE rssi support in grapi
9e7eb30f : ranging(test): Add helper methods for BLE connection & bond
479d3ea8 : Implement RF periodic tx and stop command
60c19493 : ranging(test): Add BT and uwb snippet to ranging tests
baf19319 : ranging: Check if session exists when stop is called
a3f1656e : ranging: Add default impl for new @hide methods
d81ce035 : ranging: Clear calling identity before invoking underlying tech APIs
1c63a3ef : [Ranging] Add Ble rssi support in grapi API
c9cd0960 : [Ranging] Add periodic ranging support in capabilities
eb0b3c1e : [Ranging] Add knob for periodic ranging support in rtt
bc75c8af : [ranging] Implement ranging session callbacks
5fb5595d : [Ranging] Add few API support in GRAPI
6958fc2d : [uwb-backend] Handle peer disconnection events
382d7127 : [Ranging] Expose Ranging APIs
41feff0d : [Ranging] Add javadocs for oob ranging module
cde6e9a8 : [ranging] Fix concurrency bug in CapabilitiesProvider
a5f8665c : Add support for Hus capability parsing
8f8df79b : Implement UWB session count validation
63cc8fce : [Ranging] Check for ranging technology support dynamically
a2575af7 : [ranging] Wait for boot to register availability listeners
dd2b57cb : [ranging] Use 1 adapter per peer to match new API
55510186 : uwb(cts): Increase stop timeout for ranging reconfigure test
7aa034a8 : [Ranging] Add cts tests for rtt ranging
f80c89e1 : Revert "[Ranging] Enforce RANGING permission pre flight for start ranging"
fdefd5ad : [mobly snippet] provide reasonCode to host
838a5d4c : [ranging-uwb] Fix crash in capabilities adapter
07bc622b : Handling of race condition in case of device status ntf and unregistering the callback
b2dd2fcb : [Ranging] Minor fixes to make Rtt ranging work.
04bc2127 : [ranging] Implement capabilities listener
76bf3502 : [Ranging] Add transport handle support for OOB
a345c647 : uwb(shell): Added max retry / max measurement
c23dc8b8 : uwb(shell): Added key rotation commands
c59b1bac : [Ranging] Add javadocs and some minor changes in API
c29c979f : [Ranging] Add Rtt to multi device tests
bce92dee : [Ranging] Update multi device test cases name
35107799 : [ranging] End-to-end multidevice tests for UWB
cefbd648 : [Ranging] Define common update rate in GRAPI
42986263 : [ranging] Connect onData from service to framework
58104731 : uwb(backend): Handle empty response for getSpec from platform gracefully
70741010 : [Ranging] Updates service to handle new API structure
e0ec8bf4 : [Ranging] Change API structure
defaed25 : [Ranging] Fix ranging capabilities timing issue
9826cc12 : Handling vendor specific range data notifications for CCC session
1da5b962 : uwb(cts): Increase timeout for ranging reconfigure test
acc0e539 : [Ranging] Add Cs parameters and capabilities
79c8ee40 : Remove dependencies on the 1-variant fallback
22f9575f : uwb(radar): Add support for burst period in shell command
33243f07 : [Ranging] Add rtt ranging params
3a850ec7 : [Ranging] Upstream rtt code to grapi
8c21ec90 : [Ranging] Allow registration of ranging capabilities
1a74db66 : [androidx_backend] Add support for uwb availability listener
c0af28b5 : GRAPI OOB changes service side
e57bb382 : [Ranging] Add javadocs and add changes to be API review ready
a08ff055 : TransportHandle plumbing for GRAPI framework
6984446b : [Uwb] Ignore rf open sessions params test below android V
97760a13 : Implement UWB RF Test Session Support
8ca51fc8 : Add long option names to UwbShellCommand
b0c17d81 : Add GRAPI IPC classes for OOB.
409fe62f : [UwbManagerSnippet] Add Fira open ranging session params for other config ids.
0243207c : Implement UWB RF Test Session Support
bdb50659 : [Ranging] Enforce RANGING permission pre flight for start ranging
fceca577 : [Ranging] Add capabilities provider
1a85d3a6 : [ranging] Connect framework to service & pass basic CTS tests
227e168b : [ranging] Move RangingParameters to framework
8a24a3c5 : Add TransportHandle interface
9f4d2f8b : [Ranging] Reuse uwb namespace for ranging
0dbcdcdc : [Ranging] Make all classes in ranging/framework final
3d6bfe88 : Revert "uwb: Enable R8 optimizations"
c9c0306c : [ranging] Fix failing CtsHiddenApiKillswitchDebugClassTestCases
6d0c5430 : uwb: Enable R8 optimizations
6f72aca5 : Verify the maximum data transmission limit for the SendData API based on UWBS capabilities
64dc9e6e : [ranging] Rename UwbRangingParameters -> UwbParameters
95af2523 : [ranging] Convergence between framework & service params
cc7d829e : [grapi] Small cleanups in service & framework
32a9a465 : [grapi] Implement UwbConfig
2811210d : [grapi] Implement UwbParameters
d6ac5df1 : Revert^2 "[grapi] Add device role consistent with FiRa"
695e65d2 : [Ranging] Add next attribution to the context before starting a session
91630359 : Mark packages/modules/Uwb apps in updatable apexes with updatable: true
ed7b07ed : Revert "[grapi] Add device role consistent with FiRa"
1a20b278 : [Ranging] Add device role to uwb ranging parameters
2ef00590 : [Ranging] Add more skeletal flow for ranging fw and service
dfae0686 : [grapi] Add device role consistent with FiRa
9142827c : [Ranging] Use bootclasspath for config namespace
7832d257 : [Ranging] Add initial CTS test framework for ranging
4b3fffd9 : Fix ranging_flags container
453eb54e : [Ranging] Integrate generic ranging lib ranging service
e84475a9 : [grapi] Copy OOB from finder
a515dd17 : [uwb] Add helper method for passing unsigned session id
884c8491 : [grapi] Implement ranging peer
d2587fb9 : [grapi] Update params to include timeouts & fusion

+- Project: platform/packages/modules/Virtualization

67e49f5c6 : Terminal app's package name is com.android.virtualization.terminal
938b4fb50 : pvmfw: Parametrise GUEST_PAGE_SIZE
9277880a9 : pvmfw: Move DT sanitization to main()
0d4c09bb8 : pvmfw: Read kernel/initrd ranges from untrusted DT
72a82ce0d : Mute some notifications in VmTerminalApp
c52c977aa : VmTerminalApp: Show error activity for unrecoverable error
743adb144 : VmTerminalApp: Limit disk resize up to free disk size
02c6f56ba : vmbase: Add map_data_noflush()
4edfdadf5 : vmbase: Fix typo in map_image_footer() docs
41f918c8f : pvmfw: Remove old comment about unshare_all_memory
79ea604d3 : Use SDK_INT_FULL as the tag for the debian image
3cea481ef : Gracefully stop the vm
643c0a6a4 : Add null check for PortNotifier
3e5212883 : Fix the wrong upload path for the debian OS image
cba323e4e : Import translations. DO NOT MERGE ANYWHERE
e5c4369b7 : Import translations. DO NOT MERGE ANYWHERE
aade427e1 : Only interact with secretkeeper in updatable VMs
e1758af0a : Run VirtualMachineService only for Microdroid
b91472ae8 : Remove hack on APP_DATA_DIR
12f923ea7 : pvmfw: Delay UART & PT reset until jump_to_payload
9c9984110 : DebugPolicyHostTests: Capture console (UART) logs
4c9d64a1a : trusty: test_vm: Add test target for Trusty Unit Tests
1fed928b2 : trusty: security_vm: add TEST_MAPPING
05d7cf12a : Update is_nested_virtualization() to check for qemu
1542d154f : Add localdebs dir to install deb files
a4448a412 : Add use_fec: false to microdroid partitions
ae071610e : vmbase: Enter clients with dynamic PTs live
c26e220ef : vmbase: Introduce mem API & turn MEMORY private
eba831690 : vmbase: Default to largest stack size possible
d2c547206 : vmbase_example: Upgrade to new vmbase::memory API
229dd9d1e : pvmfw: Map image footer after dynamic PT switch
4cdd19f2d : Temporary workaround for Rust 1.82.0 update
bf8a9cb07 : build/debian: Several improvements to build scripts
da2077697 : microdroid: Add kernel image variant that supports UEFI
b05442013 : Announce action hint when focused on cursor
ce6cbd713 : [dice] Update DICE_PRIVATE_KEY_SIZE to DICE_PRIVATE_KEY_BUFFER_SIZE
99d880e37 : Stable working directory for vm_demo_native app
0628fbf56 : Do not use shared preference for disk resizing
c1521e74d : Get rid of legacy shared pref keys for port forwarding from config.xml
c1b0774bf : Modify english texts for port forwarding setting page
8bb246ef9 : Define PortsStateManager managing shared pref for port forwarding
b57abcc6c : Implement secretkeeper HAL v2
d23c5d9e1 : Introduce JavascriptAddon
f3536decc : Implement LLNDK for AVF
6dde27f42 : Import translations. DO NOT MERGE ANYWHERE
c9aa068bc : vmterminal: Run crosvm(virtiofs) from app domain
c28632a19 : Import translations. DO NOT MERGE ANYWHERE
b996399ef : vmbase: Define .image_footer linker section
bb6570176 : vmbase: Fix undetected EH stack overflows
0b02a2bda : vmbase: Clients separate eh_stack from .data+.bss
21ee99366 : Implement long touch to select text
235ec7dc6 : Preserve original inputType for webview
0fefd7ec3 : Use evaluateJavascript instead of loadUrl
ba00969ec : Read build id lazily
0ed696b63 : Add getBuildId method to InstalledImage
84caaa29d : Remove unused URL string from InstallerService
bf0373bcb : change build_id path to pwd
a5d0cded6 : VmTerminalApp: Set activity label of settings activity
e0fd9e655 : change build_id path
c0cc8b258 : build/debian: Fix typo in variable name in build.sh
26c4ef35d : build/debian: Add `-w` to keep temp workdir for debugging
0d5434d43 : vmbase: Move stack guard page to own section
1ea25a69c : vmbase_example: Map PCI & UART only
e4b0f3666 : Remove unused code
8d6bfc26f : vm_shell: Fix wrong example in help message
558f1ea50 : Force Portrait mode only if there is no hw keyboard
914606c99 : Portrait-mode if it's a phone or small tablet
6e6413f59 : Add the release script
8e1d01348 : VmTerminalApp: Log full output of resize2fs when failed
b94a6e065 : VmTerminalApp: Show confirmation dialog for resize
879ee4a92 : Add build_id to the image archive
10904ee22 : Don't show modifier keys when HW qwerty keyboard is present
690c1ca0e : Run VirtualMachineService only for Microdroid
f233a0a13 : Rephrase some sentences
3c5e7a7e6 : vmbase: Support missing SwiotlbInfo DT node
4c145c064 : pvmfw: Document com.android.virt.cap in README
fbbe9bc7d : Fix TerminalAppTests
92c156dae : Run ktfmt once
155f52a30 : Enable ktfmt for kotlin code
08e0a8c7a : Set activity label instead of window's accessibilityTitle
d37f138ae : Put proper value into windowLightStatusBar
42679a8a9 : Fix wrong xml in main_split_config.xml
4657d0e04 : Reapply "Add os parameter to composd_cmd"
f4280b75d : Set hint for some UI elements
2b1f9a37c : Import translations. DO NOT MERGE ANYWHERE
da6fb07e1 : build/debian: Remove old `vmlinuz` and `initrd.img` on x86_64
957afcecc : build/debian: Leave an open shell if build_in_container.sh fails
289020521 : Import translations. DO NOT MERGE ANYWHERE
371467461 : vm --enable-earlycon arg: also enable keep_bootcon
a5e3b35e0 : Set accessibility title for settings windows
17e555be3 : Revert "Add os parameter to composd_cmd"
204d1b986 : Disable longClickable for MaterialCardView items in setting pages
21214bbba : VmTerminalApp: Fix a11y for resize slider
771dcf1f3 : Support talkback for toggle buttons in port forwarding setting
e56cd19bb : Don't announce "not checked"
6cee5c205 : Make SettingsPortForwardingActivity aware of state changes
eeab0dda0 : Use current size as default value in shared preference for disk
79f52134f : Install `linux-image-generic` in x86_64 containerized builds
53f696d2f : Bug fix for "Use extracted partitions for x86_64"
803aa8c9a : move the PCI MMIO regions on aarch64
583f7da0e : vmbase_example: Use layout::crosvm for MMIO range
6f4db1d4e : Import translations. DO NOT MERGE ANYWHERE
c27864e03 : Import translations. DO NOT MERGE ANYWHERE
c839e29c3 : Replace InstallUtils with ImageArchive and InstalledImage
8b7c4bd36 : Fix an error in ImageArchive
b0cb004a5 : Add InstalledImage
7f08fb95e : Remove unused resource preference_min_disk_size_key
b44e8fce7 : (experimental) Enable VirGL if the flag file exists
ed3bfb178 : Measure image size and show it in the description
a893791d4 : Move launchErrorActivity into ErrorActivity
97e7c03db : Fix Ferrochrome bootloop
0124aa2a7 : Revert "grant post_notifications to terminal app by default"
1645b4a81 : Sort ports in SettingsPortForwardingActivity
ae41e97dd : Disable LLMNR in the guest VM
3be0e4fbb : Speak something for the invisible element
3051ffe66 : Check file contexts only for protected VMs
b1ca3e231 : Exclude 7681(ttyd) from port forwarding list
9cd1a6fec : Add ImageArchive
ad3eac0df : Do the keyword replacement when parsing the vm config file
dd1824284 : Use Path over File, or String wherever possible
e17e59faa : Use extracted partitions for x86_64
2072847a8 : Enable mod keyboard in a11y env as well
59134f36c : Refactor PortNotifier
4cc4ba997 : Move port forward notification handling routines to PortNotifier
54ce43215 : Rename start/stop methods in VmLauncherService
fa3d28cb3 : startForeground API call is inlined.
b7abb00b5 : Reorder methods in VmLauncherService in the order of their execution
7850d056e : Use explicit intent for identifying the VmLauncherService
8562d2a4c : Merge VmLauncherServices into VmLauncherService
55c2ff7ce : Put all logs from the VmTerminalApp under the same TAG
0d896b9d4 : Add a section about graphical environment in terminal app
294a0aa88 : Fix few lint of VmLauncherService from aosp/3356387
6a3c24651 : Fix monitoring active port logic in forwarder_guest_launcher
ba1a926a9 : Show in-app keyboard only if ime shows
678ebb878 : Implement notification for port forwarding
799d62d1e : Move connectToTerminalService into startVm
09530bfe9 : set nocache option in webview
88b0b2aab : Add [CHAR LIMIT=none] to the strings that talkback speaks
ea791ee15 : handle webview's timeout error as well
08f01ce33 : add modifier keys in terminal app
1ad2e5959 : Add TYPE_TEXT_FLAG_NO_SUGGESTIONS for web view
56de7e4ac : VmTerminalApp: Relax char limit of 'restart to apply'
1d3dfc9ca : Fix many accessibility issues.
5981c3081 : VmTerminalApp: Show install error
16e6f3d64 : Remove client.p12
22d79476e : Finish the activity when the VM stops.
014010365 : VmTerminalApp: Prevent snackbar from hiding UX
544083168 : Wrap WebView with TerminalView
ecbe807c8 : VmTerminalApp: Use full hyphenation
709196e4d : Import translations. DO NOT MERGE ANYWHERE
028c6b51f : Import translations. DO NOT MERGE ANYWHERE
77718ea3c : VmTerminalApp: Implement Wi-Fi only checkbox in installer
fe0b976b6 : Skip rollback protection in pvmfw for Trusty Security VM
87fbc4bb3 : [dice] Support multi key algorithm switch in pvmfw
d160b2f3c : Fix xtermjs a11y bug about onKey
24e3b5030 : Add a TODO to remove python3 -u
987969a07 : Remove RUST_LOG=debug from forwarder_guest_launcher.service
57acb5690 : platform.dts: Fix preprocessor bodging prop name
9c8637988 : VmTerminalApp: Enable hyphenation if needed
675f1336c : Unite vm_launcher_lib into VmTerminalApp
fe1733f52 : Remove VmLauncherApp
48bce5ee9 : VmTerminalApp: Increase estimated image size to 550MB
d948a4d16 : Implement backup in terminal
49c825922 : VmTerminalApp: Implement error activity
3919b8c62 : Do not hard-code `image.raw` filename in build script.
6eea2b893 : [microfuchsia] Disable VM console by default
08d229e51 : add getRootfsFile
485868d54 : Fix text overlapping when font and display size are large
6203d6714 : vmbase: Provide baremetal DiceClearMemory() as lib
e5fffd83f : tests: Use libopen_dice_clear_memory
eabd9bd43 : Update the security_vm to use non-test version
856466b77 : Disable ferrochrome tests that run ChromeOS VM
a4f63cab5 : VmTerminalApp: Prefer resources in XML
b3495bed5 : VmTerminalApp: Optimize settings_list_item
a9fc3fe44 : VmTerminalApp: Always prefer start/end over left/right
b0b761559 : VmTerminalApp: Turn on RTL support
4b4f67d4b : Import translations. DO NOT MERGE ANYWHERE
7053a27aa : Add string resource for backup
3fcfa9212 : Import translations. DO NOT MERGE ANYWHERE
b4c3e5372 : [test] Test nostd open-dice library in pvmfw dice test
12c8e2848 : [KM-VM] Set arm64 KM VM kernel to the AVB signed kernel
2b38411c6 : Verify that EmptyPayload can interact with the console
3a4f58bfc : [KM-VM] Check Trusty VM kernel has valid AVB footer
246a7661e : Add early_vms.xml to Virtualization package to allow reusability
9427f0109 : [dice] Move open-dice Rust wrapper tests to presubmit
4dda2fb53 : Add os parameter to composd_cmd
b6d62fb05 : [KM-VM] Add arm64 KM VM kernel targets
756329c98 : VmTerminalApp: Non-interactive installation for debug and development
503eded1f : Ignore privileged port from the result of tcpstate
1f4f1ad85 : VmTerminalApp: Restricted button texts length
54920aa60 : Add -r(release) flag to build.sh to build images for all architectures.
402fd2edb : microdroid_manager: wait for failure reason to transmit
67ca04615 : Minor fix for `python3 -u` wrapper on forwarder_guest_launcher
2fea12aa8 : Wrap command tcpstates-bpfcc with `python3 -u` in forwarder_guest_launcher
9951e4bf6 : Temporarily ignore check_tee_service_permission tests
14ca76a97 : Make open-dice rust wrapper compatible with upstream code
5790cef96 : [KM-VM] Add rollback index to KM VM AVB footer
73ee9c491 : Ignore outputIsNotRedirectedToLogcatIfNotDebuggable
ec0a60e20 : Unify and change notification channel to package name
0c5f01f1b : Report active ports periodically from VM
fda571ce6 : Adding parameter for trusty_security_vm_launcher vCPU configuration
028a41334 : Allow the payload running in Microdroid to interact with kmsg
474fba690 : Revert "Adding parameter for trusty_security_vm_launcher vCPU configuration"
1fc8db90c : Remove default 6 GB size from MainActivity
26243c2ac : Increase the size of the image to 6GB.
5b1d9b3ee : Remove connect_console in vm_config.json
247741a4a : Adding parameter for trusty_security_vm_launcher vCPU configuration
9515d0fc2 : virtmgr: Support dumping guest DT for debugging
043029126 : VmTerminalApp: Stop mentioning virtual machine
6955da176 : Add VSR indexes for VM remote attestation VTS
5d399fb3d : Add missing brace in the script
89f05f8c4 : [dice] Migrate from fixed public key/signature sizes to dynamic length
0b587bf0d : [dice] Remove redundant feature flag alloc
b1ed08447 : [dice] Migrate from DICE_COSE_KEY_ALG_VALUE to VM_KEY_ALGORITHM
03e27efaf : Install procps to the debian image
1742d0d14 : VmTerminalApp: Migrate Toast to Snackbar
ee32c5274 : VmTerminalApp: Remove Toast for external storage
06f4ac551 : Support release mode in build.sh
202ab23a4 : Modify port forwarding stored preference based on current state of guest
13748d239 : Implement permission check for VMs that request tee_services
c2180fc5b : Rename linkerconfig to linker_config
695218788 : vm cli: add --tee_services flag
b1416128b : virtmgr: get secontext of the caller app
9954838e3 : Add teeServices field to AppConfig & RawConfig
655f5a52a : Propagate RELEASE_AVF_ENABLE_VM_TO_TEE_SERVICES_ALLOWLIST to avf_build_flags_rust
15052b7a8 : Add headers to xml files
b5fffcc14 : Disable the leave alert in ttyd.
d095b357a : Use env_logger for printing logs in forwarder_guest_launcher
300eabf8a : Increase default font size to 13.
fbbebe566 : Define interface for reporting active ports in the guest
5a73c9487 : Import translations. DO NOT MERGE ANYWHERE
4232950ce : Remove eBPF-based port listener
f0327318f : Integrate port forwarding with setting page
7ec45d3bf : Turn hypervisor backends module into a library
56bfd1abb : Remove unnecessary OWNERS file in libs/dice
e0848f27e : Stop VmLauncherService when vm is finished
24d989c13 : Improve the layout of the installation page
7e1877bdd : Allow multiple xml files for early VMs
5f8e75df6 : Add an option to start a pVM without pvmfw
54c7b38c2 : Fix missing metrics due to crosvm name change
27cbf4ca2 : Update linker configuration for microdroid
79abe851a : Import translations. DO NOT MERGE ANYWHERE
a3574aa8e : VmTerminalApp: Use dl.google.com
5ccd03eae : Import translations. DO NOT MERGE ANYWHERE
6823ffc98 : Read the algorithm of the leaf public key in DICE chain in pvmfw
a9432ba50 : forwarder_host terminates when terminating VM
34164d6d7 : VmLauncherService doesn't start a new thread for starting gRPC server
467c69f7c : Modify installer to integrate disk resize
ed99bb459 : Add top appbar to the settings page
6ff1f2e09 : Allow changing VM name on configuration
6a2a4dc93 : Turn the optimization on for the terminal app
5c10bee9c : Enable Device Tree backcompat tests
38b6c78f9 : Rialto: Remove unnecessary unsharing after main
8ab7c374e : vmbase: Move CMOs under crate::arch
043dfb736 : vmbase: Abstract CPU arch in write_volatile_u8()
82aaf03f6 : vmbase: Add CPU-specific submodules to crate::arch
4e9d2f681 : vmbase: Temporarily rename crate::arch to aarch64
90dfd6926 : vmbase: Move MemoryTracker to memory::tracker
755a258f7 : vmbase: Prepare memory::shared privacy for move
1d0127731 : vmbase: Move handle_*_fault() to crate::exceptions
462bdf4fc : pvmfw: Move MemorySlices to crate::memory
84ba1a802 : pvmfw: Make template Fdt const
d0818b22f : vmbase: Replace flatten() with slice::as_flattened
955b65858 : Modify debian build script to integrate disk resize
1f968b1a1 : Refactor DICE chain parsing in pvmfw to collect BccPayload
1afff42ab : [doc] Fix broken links in service_vm.md
9bc612e17 : Use /data/data/<pkgname> for user0
39f23c087 : Clean-up the device after running TerminalAppTests
d566f5ec4 : Enable the terminal app activity without hard-coding its name
76eed8df8 : Use bash for starting forwarder_guest_launcher and ip_addr_reporter
af2b4f2ac : Fix broken tests due to crosvm name change
75112841c : Push vm image on the host
ba748c221 : Remove unused import
146d67ac5 : Move TerminalAppTests to the avf group
445f971bd : TerminalAppTests is a general-tests
d5846dc5d : Add test to verify page size of Microdroid VM
f49350ec4 : Reland Add microdroid_16k
c6acf79a8 : Measure Terminal boot time
505d5a442 : Use /data/user/<user_id>/<package_name> directory
71aeb9a42 : Revert "Add microdroid_16k"
127f07319 : Revert "Revert "Revert^4 "Add golden device tree test for backwa..."
8b183f58c : Add microdroid_16k
3dc23bd92 : Add tap and settings intent for the terminal notification
fefd3217a : grant post_notifications to terminal app by default
139ddfdf9 : Add copyright file in debian rootfs
7bacda491 : Use shared preference to store forwarding ports
d240c2de9 : Integrate recovery settings
3a8c603ee : Use random port for grpc service
fc92d5df3 : Debian service should check its caller is the vm
dbee0f29a : Uninstall downloaded VM image after test is done
536f718ae : Add test for the terminal app
000d92e42 : Reload for the case of ERROR_FAILED_SSL_HANDSHAKE as well
c7fb95a18 : Generate client certificate in runtime
d506fdcfb : Auto-downloading of VM image
0206c161d : Add waitForBootCompleted to the Terminal's MainActivity
4c8cb58f2 : Temporarily disable optimization for the terminal app
27f460cda : Don't put the terminal app on the devices with no AVF
9e2f53586 : Remove setDeleteIntent
4ec83804f : introduce $PACKAGE_NAME in vm_config
7e7f19d29 : build ttyd with client cert patch
5fb5ec075 : Hardcode 4096 as block size for dm-verity devices
5ee6a0dc3 : forwarder_guest_launcher is executed as systemd service
e04fac8ac : Host can send required info to the guest for performing forwarding.
9305b7498 : forwarder_guest_launcher can execute forwarder_guest
733150bfb : Make forwarder_host buildable
f7445e876 : Introduce host side forwarder agent
44460bb88 : Import translations. DO NOT MERGE ANYWHERE
5fcb63490 : virtmgr: Wait for socket file before connecting
0faa90332 : virtiofs: Add systemd unit to mount virtiofs inside guest
068e6740c : Update RpcServer::new_vsock calls for new return
2d91dd12f : Reland "Update kernel to builds 12570979"
43c936269 : Reland "vm tool: rename --gki arg to more generic --os one"
a2af2a71a : Reland "Generalize our parameterized tests"
62bb18170 : virtiofs: Request permission to access storage
94d6a236c : Import translations. DO NOT MERGE ANYWHERE
572d3bd86 : Revert "Add microdroid_16k kernel"
057642a3d : Revert "vm tool: rename --gki arg to more generic --os one"
bd7aec026 : Revert "Update kernel to builds 12570979"
ab6c5ca86 : Revert "Generalize our parameterized tests"
5dececfe8 : Improve doc for pVM DICE chain
44efd3e71 : VmTerminalApp: Fix string
8c8f8dcc8 : Add microdroid_16k kernel
fa12080e5 : Update kernel to builds 12570979
bca034d55 : vm tool: rename --gki arg to more generic --os one
bb3663fee : Generalize our parameterized tests
6338d9a0d : [KM-VM] Add AVB footer to the Security Trusty VM
1d409dea1 : Move Trusty VM kernel target from cf repo to virt
960104fa0 : Hide the boot progress
4d965f76a : Ignore null action in VirtualizationSystemService$Receiver.
4106e16ba : Set ttyd web page's title as an app name
64645aa70 : Parameterize manually benchmark collection in MicrodroidHostTests
32330bbc9 : Remove VmLauncher from the virt APEX
4d8477e58 : Update proguard.flags for GSON
61145c5e0 : Notification dismissal or clicking the close button will stop VM
fcbd73667 : VmTerminalApp: Mark configs as translatable="false"
a8204b943 : Clarify VM suspend/resume APIs.
cbd14eba0 : Use SearchArtifactUtil to search test file
e9431a929 : Update apex_available for libapkverify and libapkzip
22e8d88e1 : Import translations. DO NOT MERGE ANYWHERE
646a31b00 : Update ARM_SMCCC_KVM_FUNC_DEV_REQ_MMIO ABI
5ce6c6f4c : MockHypervisor: Get MMIO tokens by address only
7fb437fac : pvmfw: Validate size of assigned MMIO regions
eacdd0f42 : pvmfw: Warn for misaligned assigned MMIO regions
f4d8a0dc2 : pvmfw: Refactor DeviceAssignmentError for clarity
61593815a : Add optimize: true to Android.bp to optimize bytecode
1b155b940 : Import translations. DO NOT MERGE ANYWHERE
8790ea703 : virtmgr: Remove writeback=true and CachePolicy options
191e61773 : Changed the urls for CI
a48084d9d : Set grub timeout to zero
77845dc80 : Bash is the default shell
592b52a86 : Integrate disk resize settings with frontend page
6d08eb8c7 : do not use screen
099311a1f : Fix installing from url doesn't mark complete
899f7bfbb : Fix icons in dark mode
53986a82e : TerminalApp: Allow e2fsck to be used non-interactively
6676ea816 : virtmgr: Add "--skip-pivot-root" option
07cee6b5a : Remove avf_v_test_apis aconfig flag
e6028be0d : Remove reference to android14-6.1 microdroid kernel
bc849efd9 : Don't show the error page
c0ff52172 : VmTerminalApp: Initial skeleton for downloading from server
b6bcab8c4 : VmTerminalApp: Add UI skeleton for download activity and service
a96439f8c : vm_launcher_lib: Fix bug in InstallUtils#isImageInstalled()
f375ef13e : virtmgr: migrate from vm_control to crosvm_control
05515dc39 : virtiofs: Add AVF API to share directory paths
2b94409ad : Remove dependencies on the 1-variant fallback
f50ec3358 : Give crosvm process a unique name
bd471cfbe : VmTerminalApp: Handle a11y for text
35dbb8eac : Revert^2 "Add --no-pmu option to VM"
316e6428e : Remove dependencies on the 1-variant fallback
1d4bcdb38 : Rename trusty_vm_launcher and move it to packages/modules/Virtualization
1f50195b0 : Import translations. DO NOT MERGE ANYWHERE
4ec3a9329 : vmbase: uart: Move asm block to crate::arch
22a0168f2 : Revert "Revert^4 "Add golden device tree test for backwards comp..."
f17b2ca2e : Revert "Add --no-pmu option to VM"
a3bd6c4ac : Add mock notifications for terminal app
4e3021a8f : Modify vm_config.json properly
d5cba891b : Modify build artifact's output dir properly
904d96201 : Bug fix in build.sh from aosp/3313754
8e71198f9 : Move packaging logic from kokoro/**/build.sh to build.sh
88ae917db : Install vm payload from /sdcard to /data/data/app
c8800a100 : Use sockaddr_vm instead of VsockAddr for libbinder
5ba6a219a : Add --no-pmu option to VM
5a547efc8 : Include vm_config.json as a part of images.tar.gz
91c0d14bc : XTS: Ensure Updatable VM is supported
2e059cc08 : Avoid using static mut variables for heap.
aa1e82493 : Some UI change to the terminal app
bb51e6d5c : Show indefinite progress bar (circular type) while the VM is booting
5d52f8b30 : Support relative path of image.raw
5e9dd4ba0 : VmTerminalApp: Fix i18n issues in resize activity
1392d958f : Remove search bar in the settings page
b3ff5e23e : VmTerminalApp: Merge with LinuxInstaller
f6a987930 : ttyd uses client cert
1cfcb583c : Add forwarder_guest_launcher into debian VM.
9c6f59c51 : Define forwarder_guest_launcher communicating with host
f9d3bb85f : Revert "Set droid's gid 1000"
e369c9ece : Define gRPC method StreamForwardingOrder
a893cf3e3 : Add x86_64 image to kokoro build
43e121bf0 : Revert^4 "Add golden device tree test for backwards compatibility check"
4a44ea1cd : TerminalApp: Add hints for l10n
7eddbf459 : Update rialto_test to incorporate RkpInstance in session
1ced48d90 : Add document about RKP VM marker testing in VM attestation
9ccd14d20 : generate_assets.sh for LinuxInstaller
bcde0d606 : Use statically assigned IP for the VM
7ebeb1e46 : Remove virt-customize related code
bff487f15 : Add sudo for copying /var/log/fai/*
de2d33302 : Set droid's gid 1000
6c848034d : Modify source dir for gathering FAI logs
536374324 : Remove dependency on once_cell where it isn't actually used.
e8791860a : TerminalApp: Template for offline disk resize
a63b1e4f1 : Add trace sections for terminal app
e2f66d38f : Enable Linux terminal app via developer settings
92ed502c3 : rename artifact's name
8685f89c9 : Enable screenReaderMode in xtermjs if isTouchExplorationEnabled is true
e477a2432 : Trying to find exact location of log file in remote kokoro build
1100ae216 : Suppress error for copying log files of kokoro build
7813621ed : Add @Override in MainActivity
de1e73d7c : Fix typo
c9a7fd993 : Revert "Fix a typo"
12909188a : Gather build log in kokoro for debugging purpose
4bd67dac6 : Fix a typo
8b44302d3 : Add Kokoro presubmit job for AVF Debian package.
73d70253f : Revert^3 "Add golden device tree test for backwards compatibility check"
ce3a39689 : Define DebianService for host-guest communication
98f3d7250 : Add default account "droid"
1d6ecee7d : refactor build script
4246fd452 : Revert^2 "Add golden device tree test for backwards compatibility check"
36047f2dd : Revert "Add golden device tree test for backwards compatibility check"
0fdd051ab : build.sh: Add support for x86_64 architecture in the Debian image build script.
0b53a600d : Add mock settings pages to terminal app
0e359fba9 : Convert vintf_fragments into vintf_fragment module(s)
a94e92d15 : Use new method/field to get StagedApexInfo[]
407df8ff3 : Remove IAccessor implementation and use libbinder APIs
6d65f2454 : Add pkvm_experimental option
2cc96eb1b : Add golden device tree test for backwards compatibility check
c7c23579a : vmbase: Fully absorb libfdtpci as a crate module
f2c19d48e : vmbase: Re-expose fdtpci as vmbase::fdt::pci
992c2bb50 : liblibfdt: Add no_std variant for libfdt_baremetal
2b8936c75 : liblibfdt_bindgen: Only use libfdt as header_libs
1f086a853 : Revert "Build port_listener in debian VM"
5b9c99dd0 : Build port_listener in debian VM
0c69740c3 : Remove dexpreopt files of service-compos from /system's artifact list
a4ad719ee : Update the certificate type for AVF to "rkp-vm"
198a0fb57 : Manually install rustup when building debian VM
d8f465582 : build script which uses docker
c52a25324 : Make port_listener buildable
1cc8dbb21 : Introduce port_listener
6aba7c0ba : Debian VM holds forwarder_guest
6289ee189 : Fix broken links in README.md files
aa1e0019f : FerrochromeApp: Remove FerrochromeApp
9f1727308 : LinuxInstaller: Remove dependency with FerrochromeApp
57ba9c41b : pvmfw/avb: Add Capability::UefiSupport
e31592af2 : Set OWNERS for build/debian
f6ed866bb : Use compressed image file as artifact
fb9edfc4c : MicrodroidTests: Disable test when ran on native_bridge
b8424d4e6 : Skip MicrodroidHostTestCases on native bridge
dee8b14bd : Make guest side forwarder agent buildable
4cfc37423 : Add 'define_artifacts' for build artifacts in kokoro
74360b206 : Copy artifact to kokoro_artifacts
b137a5f40 : Remove hooks for containers
c1351afb8 : Introduce guest side forwarder agent
92fa7989b : Set up for kokoro build
a153f5f95 : Mark service-compos as system_ext_specific
5c807a2f8 : Add option to dump device tree blob in VM config
e511b301c : Revert "use python 3.10 for FAI"
c16ff32d3 : Revert "add -E option in sudo to preserve env variable for pyenv"
cb469d86e : Migrate to tradefed test Parameterizer
746800978 : add -E option in sudo to preserve env variable for pyenv
716e27f9d : use python 3.10 for FAI
362537905 : Generate more info for debug
8ea4b5b97 : Use normal build instead of container build
5689c80e6 : MicrodroidHostTests: Decorate VSR-7.1-001.008 tests
b25366ce8 : MicrodroidTests: Decorate CDD 9.17/C-3-4 tests
d02afd699 : MicrodroidHostTests: Decorate CDD 9.17/C-2-5 tests
30423ca5a : Add retry in wget for debootstrap
3120d7c40 : Check if some deb packages are accessible
82f68b3e5 : Make libforwarder buildable
a55e77321 : Introduce libforwarder
19be96fec : Link to loopXpY symbolically
961046ed6 : Delete unused loop devices before starting
bc2dfac56 : MicrodroidHostTests: Fix typo in @CddTest
faabfbd48 : Revert "Support vendor partition in non-debuggable pVMs"
4fc9ba29d : Remove mountdisks.BASE in favor of links
856e3becc : Customize mountdisks.BASE as well
dacad1ebc : multiply sector size for offset and sizelimit
3d187f204 : Update build script to run in container
da733ac18 : Update the docker setting for Kokoro
0e565edb5 : apt update before installing packages

+- Project: platform/packages/modules/Wifi

982abfd2ac : Import translations. DO NOT MERGE ANYWHERE
68d6c7f560 : Register death handler for the mainline supplicant service.
e77d13eaad : Initialize Java class to access the Mainline Supplicant service.
381c696e91 : Move APIs to check USD support to WifiManager
527b4c09ae : Add support for NAN periodic ranging.
d0c59a4cce : Change the mainline supplicant terminate method to oneway.
55e7f3a063 : Fix WifiAwareDataPathStateManagerTest
4805ca200f : Support for BLE assisted P2P discovery and pairing
ca8a8be278 : [CTS-V Migration] Replace current Wi-Fi utility adb commands with MBS WiFi APIs.
e3606965a5 : Block the current connected network
2d72f78232 : Add API for BSSID block
59e19a1bab : Add NIDL rro
6877f81cea : [CTS-V Migration][Wi-Fi Direct] Implement test case #1
98a3904dcc : Revert^2 "Mark @FlaggedApi flags as exported"
09db42e41e : Add skeleton implementation for the NAN periodic ranging callback.
b51c1630b9 : Import translations. DO NOT MERGE ANYWHERE
1be5296f0b : Revert "Mark @FlaggedApi flags as exported"
f90147328c : Add data capture API
15e1a3994c : Fix the performance logging
9e90f52181 : Gate getCachedScanData with service version 2
0774219033 : Update the import for ApIfaceParams.
6a518e2f63 : Add skeleton implementation for the USD callbacks in the Wifi framework.
692ec90550 : Import translations. DO NOT MERGE ANYWHERE
e94c6d2dde : [CTS-V][Wi-Fi Aware] Change max ranging distance of new CTS-V Wi-Fi aware tests to 100M.
af41fc0005 : Import translations. DO NOT MERGE ANYWHERE
2781af8650 : Mark @FlaggedApi flags as exported
d9faad4166 : wifi: Update java doc to avoid linking to system api from public api
51072aeb59 : Add datastall status in WiFi Scorer stats
dfddd250ae : Import WifiChannelWidthInMhz from the Supplicant AIDL instead of the Vendor HAL AIDL interface.
0f64ee12b1 : Logs firmware alert events
05c364cfe0 : Logs IP unreachable events
b58c958687 : Logs RSSI poll on/off
03e9286f06 : Remove current data capture logic
b3ca5f9902 : Add set operating frequencies for USD
04d2edc4fe : Add USD publish and subscribe APIs
df7a809ff7 : Add USD subscribe session callback
b15e0134c6 : Add USD publish session callback
45e5bd5af2 : Add USD session callbacks
06a842ca38 : Add USD publish session
1880dcf760 : Add USD subscribe session
2632391180 : Add USD discovery result
d6071bc999 : Add USD characteristics
15a453545c : Add USD subscribe config
92e4702682 : Add USD publish config
25ed275cae : Add USD config
13cccc6412 : Add USD availability APIs
e109cc832f : Add USD feature support check API
d52590ff4e : Add USD service
a73fdb6c93 : Add UsdManager
d799638a4c : Add android.net.wifi.flags to framework
349d0d1ba8 : wifi: Check AAPM flag before calling AAPM API
ebf4ba67b6 : Clear WifiUsabilityStatsEntry while reusing it
3ed00ccac5 : Increase ring buffer size to 80
b6e716f821 : WiFi Aware Matchfilter Test
39d5279f0f : Import translations. DO NOT MERGE ANYWHERE
ef0d0c8098 : wifi: Fix NPE (Vendor HAL is not supported case)
2cd3b41c31 : Clarified comments.
5621b644a2 : Add ART profile to wifi service.
bece4f3312 : Remove state argument from WifiStateChangedListener
35f69aed97 : wifi: fix build break
a50da370a4 : replace aconfig_storage_reader_java to aconfig_storage_stub
49ff193c7c : Restore the method signature
5bd92d4d35 : Revert "Fix TX/RX linkspeeds not updating properly on network details page"
1b40bb6bed : Create new proto for WiFi Scorer training data
1382866050 : Removes duplicate NULL check.
8fee949e02 : Import translations. DO NOT MERGE ANYWHERE
6297e9cafa : wifi: Support for USD and Pairing
666f3e762e : wifi: Support for Wi-Fi Direct R2
9200726319 : wifi: Support Sap config: client isolation Backup/Restore
048d6970c4 : Add callback for wifi state changes
7d3fc37d94 : Add overlay allowlist for AP/NAN concurrency
5d47b033fc : Add PASN comeback cookie for secure ranging
f3effc2ee0 : Update P2P dialog
a96cf8028e : Add Thread device roles into WiFi Scorer stats
d1e4865bd9 : Add missing quotation marks to the list of binaries in the Apex.
985ab30afb : Wifi Aware Message Test
7919761565 : Add feature flag for wifi state changed listener
cd53df3edd : wifi: Public APIs for LOHS band customization
c0653797ea : wifi: Add a new flag for public APIs for LOHS band customization
8848e27c84 : Add secure HE-LTF protocol version to capabilities
621ea7e6ae : Add flag for BSSID API
302f03ef33 : CTS-V Migration Wi-Fi Aware (Add utility functions for Wi-Fi Aware setup)
0b8cabd0d6 : Include mainline supplicant binary in the Wifi Apex.
f9c5f8d296 : wifi: Supports Soft AP client isolation configuration
1e27715eca : wifi: Adds API to indicate what AAPM features are supported
1e858db44b : Fix race in ActiveModeWarden change role
a95a755da3 : Post on Wifi thread to avoid ANR
7fb128af5c : Explicitly include VCN API stubs
604c192711 : Use the isSdkAtLeastB to check whether to perform the Keystore migration.
1a81f26f69 : Add WifiManager.SoftApCallback#onClientsDisconnected
87c59f9d6f : Add 11az secure ranging result parameters
5256ea9f2f : Add 11az secure ranging request config
c4d02972a8 : wifi: Add mld address in SoftApInfo
7bf33664f6 : wifi: Adds a new capability to indicate that the device supports MLO AP
9c35107f93 : Add L3ConnectingState
499a866cc3 : Add 11az secure ranging device capabilities
555c62a91a : Add 11az secure ranging capabilities in ScanResult
93920dda5a : Fix NPE caused by null value in mCachedStaticChipInfos
3288c7ce35 : Support for P2P Compatibility Mode
87b52ca0d5 : Add addUsdInterface and removeUsdInterface to the mainline supplicant AIDL interface.
a5a9cdc471 : Add the mainline supplicant rc and config files to the Apex /etc directory.
2d055316c0 : wifi: Supports MLO SAP
0b14e6dff8 : CTS-V Migration Wi-Fi Aware (Test #7)
63ae6baee4 : Update P2P Connection Changed Broadcast
c972d56355 : wifi: Disable WEP usage when AAPM is ON
f9045da34f : Import the mainline supplicant AIDL interface in the Wifi framework.
2334238692 : Add an overlay to control Wifi BugReport for subsystem restart
4f367a5cbe : Ignore null action in WifiServiceImpl.
54d2a4159a : Reorder permission and user check for Wifi Enablement
7201bd9903 : Update SoftApManager#updateConnectedClients
bcf3900d4a : wifi_aware_discovery Test#4
9370cde325 : Pass disconnect reason from hostapd
7e54b0e1e0 : Add a method to check SDK
a6fe242886 : Reformat IWifiManager
699018e9a1 : Remove BaseWifiService
f7beb5a23d : Wi-Fi Direct R2 USD callback functions
ad1818bf1e : Add SoftApDisconnectReason to WifiClient
d3a4cd09e2 : Add overlay to control showing P2P invitation timeout
17b480b50f : Remove timeout from all dialogs except P2P invitation received
1b935c06e9 : Add disconnect reason code to framework
2a23cbab49 : Update wifi AIDL versions
c77af9e3e2 : Wifi: Wifi Restart for Enabled state in ActiveModeWarden's state machine.
a31bae0b4f : Add annotation to API input
9f70db04a5 : CTS-V Migration Wi-Fi Aware (Test #8~13)
a1deea7526 : CTS-V Migration Wi-Fi Aware (Test #6)
f231f3c55e : CTS-V Migration Wi-Fi Aware (Test#4,Test #5)
240bfedcf3 : CTS-V Migration Wi-Fi Aware (Test #3)
67e813fbb4 : CTS-V Migration Wi-Fi Aware (Test #2)
16603128c8 : Revert^2 "Remove dependencies on the 1-variant fallback"
1d716cb184 : Remove the service declaration check in the JNI waitForService wrapper.
a74027cfd7 : Add voip mode into wifi scorer stats
7fc8c11291 : Add max supported linkspeed into wifi scorer metrics
0e18e70f06 : Add WiFi low latency state into WiFi scorer stats
7dce064b38 : Add UWB adapter state into WiFi Scorer stats
0e3918593b : Revert "Remove dependencies on the 1-variant fallback"
b97d07f625 : Remove dependencies on the 1-variant fallback
8614d97746 : Accept UID in the MulticastLockManager acquire and release methods.
5d486480c4 : Monitor the importance for multicast lock owners and update filtering accordingly.
5fe56e03e9 : wifi_aware_discovery Test#3
60ac817a41 : Track the number of locks held by each active UID in WifiMulticastLockManager.
6785572ac4 : wifi: Using registerReceiverForAllUsers for location.MODE_CHANGED
65cb6e4614 : Add WiFi link tx/rx linkspeed into wifi metrics
575413a4f5 : Add cached scan results into WiFi Scorer stats
55d3ef276d : Remove PSK when upgrading with an SAE-only passphrase.
aa42d0ee67 : Add bluetooth status into wifi scorer stats
addb539526 : Add throughput sufficiency into wifi scorer stats
f5d4eb5e0a : Add wifi framework state into WiFi Scorer stats
f61060ca48 : Log SoftAp session duration
c4a2f2d472 : Remove @Deprecated inconsistency in *removed.txt files
4b52903050 : test_create_discovery Test#2
fbef25bc91 : Fix NPE in WifiInfo.getSsid()
6bb0ffbc38 : wifi: Add a new overlay to indicate # of AP MLDs supported
1c485e393e : Add binder parameter to WifiManager#releaseMulticastLock.
45a81f513c : Merge WifiServiceImpl#getSupportedFeaturesIfAllowed into isFeatureSupported.
d79e5df46f : Add more preupload hook for Wifi
c5c50c5a92 : Initialize the unstable AIDL interface for the mainline supplicant.
8aec845a7e : Use sendMessage for disconnect()
0c6bd9edc4 : Adjust function call sequence of WiFi scorer stats
2b9cd64d2e : test_create_discovery Test#1
f22d0a43dc : WifiScoreReport limit logging
d0d2305d93 : Add JNI method to retrieve the binder for services in the mainline module.
354a333d78 : Add a feature flag for the mainline supplicant feature.
0d19f05972 : Resolve API feedbacks for set/get autojoin
16ceebd909 : Fix remove alias issue
91223a7751 : Fix the NPE in PairingConfigManager
60a9c19e52 : BC Auth failure message for unspecified reason code
ef459b3c34 : Add flags for sap disconnect
53566fe800 : Initialize the featureSet field to an empty BitSet by default.
c5786c6398 : wifi: Fix cases where the Nan iface is not removed resulting in HAL not stopping
b2962d51d6 : Remove cached chip capabilities from StaticChipInfo.
33e7346a68 : Change WifiManager capability values from long bitmask to int index.
55b5e5c391 : Add isFeatureSupported API to WifiManager.
c0bffc1e65 : Update capabilities in ActiveModeWarden and WifiServiceImpl to use the BitSet representation.
6d87e03338 : Update all capabilities in HalDeviceManager to use the BitSet representation.
06ab6f519a : Also log the scheduleLatency for state machine
54906dc39c : Update the Keystore migration call in the framework to use the new asynchronous parameters.
5a6a68b912 : Add number of LABEL_BAD event happened into Wifi Scorer stats
e15fd1adbc : Add tx, rx transmitted bytes into wifi scorer stats
fc5dfe8b24 : ClientModeImpl disconnect consistency
3569980aa7 : Add TxTimePerLevel into WiFi RadioStats
aaacb45d1d : Add Ratestats into PeerInfo for WiFi Scorer stats
f5d92c325c : Add PeerInfo into WiFi link stats for WiFi Scorer
eb652ecf45 : Add packet stats by access category into WiFi Link stats for WiFi Scorer
8a7dea6a5d : Add contention time stats to link stats
4b48962318 : Add channel stats into WiFi link
b64d884f4d : Add MLO Mode into WiFi Scorer Stats
8955bc2418 : Add WiFi link stats into WiFi Scorer
f378e92ecf : Add WiFi link count into WifiUsabilityStats
b28843e47d : Make performance logging more accurate
6d208aea35 : Log EXTERNAL_SCORER_DRY_RUN to WifiSettingInfo in the pulled atom
2c7d258845 : Log attribution tag when report scan results
b0f14d8048 : Mark packages/modules/Wifi apps in updatable apexes with updatable: true
3f5c551620 : Update the *ModeImpl classes to return their capabilities as a BitSet instead of a long.
c5affe41d6 : Update all capabilities in WifiNative to use the BitSet representation.
18ab40f16d : Clean up references to FeatureFlags in WifiConnectivityManager.
a9fe23e2f3 : Adds clarifying comments
d99209ddb8 : Remove the feature flag check for the delayed carrier scan optimization.
6646876e1e : Remove etancohen@ from WIFI_OWNERS
7e09dc4787 : Per iface last BSSID and frequency
3b2969b540 : Improve wifi scanner statemachine logging
70f44c9e05 : WifiManager setAutojoinRestrictionSecurityTypes, getAutojoinRestrictionSecurityTypes API
1a4cbff9ac : Migrate WiFi Aware Attached test from acts to mobly Test case: test_attach test_attach_apm_toggle_attach_again test_attach_multiple_sessions test_attach_with_identity test_attach_with_location_off test_attach_with_no_wifi
e2d31f03a2 : CoexCellChannel should accept band value of BAND_UNKNOWN
4e43f1ac8e : Enable RSN Overriding feature based on the chip capability

+- Project: platform/packages/modules/adb

0190f3b4 : Fix ANDROID_LOG_TAGS forwarding
4b6685f0 : Define adbd_recovery
1ee67895 : notice: remove obsolete copyright line
b794e34e : fdevent: remove obsolete personal copyright line
52a00a23 : Document ADB usage of Zero-Length packets
3cc9480c : Fix Macos USB backend ZLP
4af6e4ff : Make daemon resilient to unbound bulk transfer
aac24222 : [adb] Allow v4 signature in streaming installs
4e2f31db : Properly set UsbFfsConnection
ce2026f6 : Fix reuse-after-free upon claim interface (libusb)
8d185a43 : Add missing newlines to adb install error messages
64c0aca2 : Remove dependencies on the 1-variant fallback
e1caf9ed : Refactor/Split local_init into emu/server_socket
5fccd383 : Revert "Make bin2c_fastdeployment a cc_genrule"
8db94779 : Make bin2c_fastdeployment a cc_genrule
a9b3987d : Add a limited privilege "trade-in mode" for adbd.
01cbbf50 : Refactor local -> emulator
9f01cfa3 : Tidy AUTH logs
2cbf5915 : Add documentation
05db1403 : Enable appinfo test
be45567f : Delete extra ACK (ready function) call on local sockets
1bacec0a : Simplify default port handling
a78c78e7 : Add log to see pm commands during install
7f647f36 : Delete dead code
a550bb65 : Improve libusb error message

+- Project: platform/packages/modules/common

73af2645 : build/allowed_deps.txt: add libvpx_sve
c69b2532 : Reduce minSdkVersion for zerocopy.
6d1bf255 : Allow using old version of zerocopy during staged update.
6dcd5439 : Fix min sdk version of modules-utils-infra
32120393 : Add contexthub v4 ndk to allowed deps
476517cc : Expose AppSearch Flags
c0a2d225 : Adding new apex module deps to allowlist
c1a308ee : Update nfc apex dependencies
8c6f978b : Update nfc apex dependencies
ad26f460 : Update nfc apex dependencies
bc377668 : Update allow list
bb5ece84 : Allow admin aconfig flag
3b05cc5a : Revert "Allow admin aconfig flag"
468d9a24 : Add libdoubleconversion to new-allowed-deps.txt
ad5ec640 : Allow admin aconfig flag
e1fc71e5 : Add NFC to apex_defaults visibility for b-launched
e38ae212 : Update dependencies on graphics.common HAL to latest
86608a0d : cherry pick to aosp-main-future to avoid merge conflict
1ed381b9 : allow libaconfigd_rust to be used for platform
9d4d3d4b : add libaconfigd_rust to list
7584eed2 : Correct lib to exported one
93ece989 : Adds apex available for security aconfig flag
62702640 : Add android.media.codec-aconfig-cc to allowed_deps
6839ae65 : Update allowed deps for trace redactor
7c447912 : Move contents of ModuleDefaults.bp to Android.bp
35b530a5 : Add ABSL to allowed APEX deps.
d684210d : Update dependencies on graphics.common HAL to latest
c48fea65 : Add dependencies for import of clap_builder.
5c573849 : allowed_deps += uprobestats_mainline_flags_c_lib
1bdaca24 : Add bluetooth socket to allowed deps
fa3060ed : jspecify(minSdkVersion:1) updated
f32988ae : Use min_sdk_version 29 instead of 34
5e8c8703 : Revert^2 "Add nfc module to mainline SDK"
889b3c20 : Modules: @FlaggedApi: Enforce using constants instead of literals
90545595 : jspecify(minSdkVersion:1) updated
1f1c4d7f : Revert "Add nfc module to mainline SDK"
b8b3b649 : Add nfc module to mainline SDK
6074530b : Add art_flags_uprobestats_c_lib to allowed_deps
007872ab : Use min_sdk_version 29 instead of 34
95e7265c : Modify the visibility of the droidstubs of mainline modules
5ec8eb83 : Format sdk/ModuleDefaults.bp
f14428ed : Update allowed apex deps
2714f6ff : Add APV Software Codec deps
06981ed7 : add aconfig_storage_reader_java
4965610d : add new dependency
2ae4734a : build/allowed_deps.txt: add libdav1d_sve2
6a1e04b5 : Update channel sounding HAL API
e617d2e3 : Add VanillaIceCream to sdk snapshots.
0d555fc9 : Add android.crashrecovery.flags-aconfig-java to allowed_deps.txt
7713f010 : Add libopenssl and its deps to configinfrastructure
0d79e39e : Add deps for new module uprobestats
b7e9e1af : build/allowed_deps.txt: add libdav1d_dotprod_i8mm
190f9f39 : Revert "Added gson deps for Adservices"
e2050dd6 : Add initial mainline supplicant dependencies to allowed_deps.txt
6388f78e : Remove dependencies on the 1-variant fallback
83170a20 : update config infra deps
a95ebbc3 : Update allowed deps for changing riscv64 min SDK version to "current"
9c5845d3 : [DocsUI M3] Add aconfig deps to the allowed_deps file
c6cccb81 : Add "com.uwb.support.rftest" to allowed_deps.
7b7647e8 : Add "com.uwb.support.rftest" to allowed_deps.
d13c8cc4 : Make module-defaults visible to VCN
9219a126 : Update wifi AIDL versions
cda7fa93 : Ensure inner classes of @SystemApi classes are kept
14daa3df : [Ranging] Add ranging_rtt_backend to allowed deps
04b05a71 : build/allowed_deps.txt: add android.media.extractor.flags-aconfig-cc
545b4a1c : allowed_deps: Add dependency for DocumentsUI
9a7de82a : Allow adbd to rely on the static tradeinmode flag library.
35c1b037 : Allow adbd to rely on the static tradeinmode flag library.
3275045c : Update Owners file
f6919754 : [Ranging] Update minSdkVersion to 33
be343fc3 : Revert^2 "Allow proto lite lib use in ConfigInfra"
4b97b0b5 : Revert "Allow proto lite lib use in ConfigInfra"
35039062 : Allow proto lite lib use in ConfigInfra
96e6de0c : add new dependency
456b85b6 : Add timerfd jni libs to allowed_deps
c6e8d240 : Revert^2 "add new dependency"
7a5eb47f : allowed_deps: Add libion_headers
75cfd763 : build/allowed_deps.txt: add libvpx_neon_i8mm
7c46b9ae : build/allowed_deps.txt: add libvpx_neon_dotprod

+- Project: platform/packages/modules/vndk

6e59419 : Removing vndk apex v29

+- Project: platform/packages/providers/CalendarProvider

621255e : Import translations. DO NOT MERGE ANYWHERE
95035b4 : call getUserUnlockRealtime only when boot increase or DEBUG to avoid binder spam.
6753e53 : Update comments to point to the new location of event.logtags.
932c373 : Don't call getUserStartRealtime if not needed
7ae80df : Remove edge-to-edge opt out.
344f23d : Update calendar provider OWNERS

+- Project: platform/packages/providers/CallLogProvider

8074593 : Deduplicate call log entries during restore
622f91a : Migrate CallLogBackupAgentTest to JUnit4

+- Project: platform/packages/providers/ContactsProvider

2ebc369e : Use SQLite pre-compiled statement to create stale search index table
233e9af6 : Define AppCompat Change ID in a standalone java class and make it a standalone java-library, so that it could be recognized by AppCompat Framework
f7eaae7b : Revert "Migrated to the LocalSimContactsWriteException"
319beef1 : Remove table creation guard for stale search contacts temp table
deb528ca : Update comments to point to the new location of event.logtags.
62764751 : Migrated to the LocalSimContactsWriteException
87764471 : Flag off deletion of Custom Data rows
fc435d37 : Fixed the ContactsLookupKeyTest by combining the update operations of account name and account type (both to non-null) into a single operation.
fbfa71f6 : Import translations. DO NOT MERGE ANYWHERE
e57a15fb : Fix the movable contacts condition check in MoveContactsToDefaultAccountActivity. The condition should be if the sum of local contacts and sim contacts is greater than 0, the dialog should be called.
e25e7371 : Import translations. DO NOT MERGE ANYWHERE
9f99cfa9 : Disabled the raw contacts/group creation restriction when calling package's target SDK is below Android B.
9788f53c : Import translations. DO NOT MERGE ANYWHERE
dc45e5a3 : Create MoveContactsToDefaultAccountActivity class to support moving all local contacts to cloud account.(Including device and sim contacts).
21e1afbd : Import translations. DO NOT MERGE ANYWHERE
26f316d9 : Updated the ContactsProvider's owners.
9fe6a9df : Always return a bundle from get number methods
6009462f : Add flag to disable moves to ineligible Cloud DCA
f5f47227 : Support SIM to be a default account in CP2.
ee996d71 : Implement the CP2 handler of DefaultAccount.getEligibleCloudAccounts
bcae0a2a : Support calling the CP2 Move API
8e610cc3 : Implemented the CP2's handler of setDefaultAccountForNewContacts and getDefaultAccountForNewContacts API.
db25b754 : Allows the non-eligible cloud account being set as default account.
698cc183 : Fixed the double-initialization of ContactsProvider2.mDefaultAccountManager and ContactsProvider2.mAccountResolver
67e7ea06 : Allow contact insertion without account being specified, when default account is not set. Contact insertion in this situation will continue to be stored in the local account.
de728ae6 : Use an sqlite TEMP TABLE for search index contact id invalidation
c1f57b2f : Update MockTelephonyManager to align with the method being overridden
20af60ce : Switch the CP2 new default account rule implementation's flag from com.android.providers.contacts.flags.enable_new_default_account_rule_flag to android.provider.new_default_account_api_enabled, such that the API and implementation will be enabled/disabled by the same flag.
6cb0a7fe : Switch the ContactsProvider from using internal DeaultAccount class to ContactsContract.RawContacts.DefaultAccount.DefaultAccountAndState.

+- Project: platform/packages/providers/DownloadProvider

a4f92752 : Ensure ownership validation in downloadprovider insert method
8ca48365 : Import translations. DO NOT MERGE ANYWHERE
e22a124e : Import translations. DO NOT MERGE ANYWHERE
43a2753a : Import translations. DO NOT MERGE ANYWHERE
827b4b53 : Import translations. DO NOT MERGE ANYWHERE
3dd45d4f : Add HttpEngine to DownloadManager behind a flag
9142a4c6 : Check external_primary mount before MediaStore scan
df8df2a7 : Import translations. DO NOT MERGE ANYWHERE
b9376d5e : Import translations. DO NOT MERGE ANYWHERE
5923f38c : Set download as visible if destination uri is null

+- Project: platform/packages/providers/MediaProvider

e590ceffa : Import translations. DO NOT MERGE ANYWHERE
8719650bd : Added LocalizationHelper class to support the localization of count and dates
bb8b14c43 : Revert "Enabling rendering of stamp annotations and Freetext annotations"
dc59b4ecf : Invalidate dentry cache for nomedia files
69eab678b : Added log for printing packageUserId while saving grants
83224f1bf : Enabling rendering of stamp annotations and Freetext annotations
30eaee18c : Move db.beginTransaction() within try-catch-finally
0dafd0283 : Use Locale.ROOT for SQL queries constructed using String.format in Photopicker V2
27aca1031 : Update external URIs to volume-specific format
164f4986e : Created new class for stamp annotation
784de31ac : Make Flag declarations exportable
53561f883 : Adding pdf editor flag for enabling stamp annotations
51ab5b9a2 : Disable some page_tests on R.
01c7313d9 : Removing redundant builder from PdfPagePathObject
26f1e9df3 : Public APIs to support adding/removing page objects in AOSP
3623210f4 : Public APIs to support adding/removing annotations in AOSP
e4e73af89 : Add tests for SearchDataService
a63f28d1b : Implement SearchDataService.getSearchResults in Kotlin Picker
13b0a6623 : Update the logic to check if cloud search is enabled
27b2385d1 : [PDF| MediaProvider] Modify page.EnsureTextPageInitialized to return early if input page is null.
0721cab04 : Add logs for Picker Search
953a46b23 : Schedule media sets sync worker based on checks of the given provider's authority. Expose a hidden API to trigger the same
c5b740853 : Modify FuseDaemon#DeriveVolumeName to better handle invalid paths.
f4412e1c4 : Import translations. DO NOT MERGE ANYWHERE
c1544974c : Return early to prevent infinite recursion.
1323d76de : Add worker for syncing media in media sets - Also added mime type filter for both media sets and media in media sets - Added additional util function to fetch mediaSetId and mediaSetPickerId
ae841b5f7 : Display search progress icon while loading the search results
b794e0d98 : Restrict PrefetchDataService access to features
42d6c0b29 : Add tests for Prefetcher in Kotlin Picker
01fbecd1d : Support Prefetch in Kotlin Photopicker's Feature Manager
eeae201e2 : Revert^2 "Mark @FlaggedApi flags as exported"
262417f08 : Import translations. DO NOT MERGE ANYWHERE
9f906f06b : Add tests for Media Categories Picker Backend
73ae3121b : Return Albums and Categories from CMP to Kotlin Picker
5856c3bb2 : Revert "Mark @FlaggedApi flags as exported"
0066e0da3 : Import translations. DO NOT MERGE ANYWHERE
93ea06d09 : Import translations. DO NOT MERGE ANYWHERE
6adf29025 : Modify tests to use the new a11y strings.
2d163c637 : Mark @FlaggedApi flags as exported
4872cc0f6 : Add worker to fetch and cache media sets
625f54759 : Add check to defer Search Feature in picker choice mode
2a50b718f : Adding pdf editor flag for enabling annotations and page object
85509429c : Rename MediaCollections capability to MediaCategories for consistency
21dbd2332 : Define category response columns for Kotlin Picker UI contract
71e8457c0 : Import translations. DO NOT MERGE ANYWHERE
f84f2310d : Clean up SearchSuggestionType class and use CloudMediaProviderContract
f06899a5d : Change integer constants to strings in Search APIs
247682413 : Selection bar and Album Media Grid A11Y fixes.
7f39b48d5 : Add events code for photopicker sarch new atoms.
419ae8d5f : Adjust search bar placement when cloud feature is off
baf5d2bd0 : Fix: Ensure PDF GotoLinks are within Page bounds
20e5cb1cb : Add SearchViewModel to EmbeddedViewModelFactory.
332c7aeac : Make talkback more descriptive for media item in grid.
b15f6be58 : Import translations. DO NOT MERGE ANYWHERE
4565be904 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
72c7e209b : Export javac classes
086ed2983 : Show no suggestions when search suggestion is empty
b17a82707 : Fix SQL query to fetch previous page key for search results
1e09b5cae : Fix SQL query to fetch previous page key for media in media sets
34037d445 : Relocate the parenthesis position to avoid duplication in Localization
417d18f70 : Add database helper functions for media_in_media_sets table
4861f7ff2 : Test Picker internal APIs for search feature
2680e6bbd : Expose Picker internal APIs for search feature
81273c938 : Only release the SurfaceControlViewHost in Session#close
4e463db4b : Disable PhotoGrid scroll when embedded & collapsed
d966b9d8b : Limit the Photopicker Transcoding feature to Android T+
d8b274537 : Update transferTouchGestureToHost
d4becee50 : Add tooltip and disable search bar when search is disabled.
dd3971c97 : Create new flag for favorites API
1cebd5e92 : Fix a minimum height for the navbar in embedded.
f97c01e30 : Remove GROUP enum value from Media location for Search Metrics
a2a50f017 : Mark mark_media_as_favorite_api flag as read-only.
642b6ed89 : Add tests for the logic to fetch and cache search suggestions
ea2f6ebb0 : Fetch and cache search suggestions for Picker UI
172137ac0 : Adding database helper functions for media_sets table
f171a9a51 : Fix the bug that freeCache clears more space than expected
c5b31a4bb : Fix getType with a redacted uri
2dc0aff5a : Fix UserIdManager creation in PickerViewModel in PhotoPickerTestActivity
24dc202c2 : [Embedded] Dispatch and listen to telemetry events
c825d064f : Show icon for non-face suggestion type in the right
7d95a3ab4 : Modify the talkback for Profile Selector menu.
01c2ee84c : Add tests for search suggestion database utils
efb6fd20a : Add methods to insert and query search suggestions
10a713370 : Address API Council feedback for Photopicker Search APIs
06d31fb21 : Import translations. DO NOT MERGE ANYWHERE
21159bff9 : Fix false side of ternary for privacy explainer.
373a95422 : Reduce DEFAULT size for InMemory paging sources
8053d1caf : Add tables for search suggestions
3e052d64c : Fix talkback for select/deselect icon in preview mode.
e4b685480 : Fix SearchStateTest breakage on R devices
c7113af81 : Fix NPE when cleaning transcoded files
f0c0d3888 : Creating media_sets and media_in_media_sets tables for caching media_sets and their content metadata
2dd6c2572 : Move CtsScopedStoragePublicVolumeHostTest to presubmit tests
c863d5686 : Mark media as favorite API
b06c69cc8 : Create trackers for search results sync
5b663f74c : Add SearchState class that tracks the search feature state
a30d1637f : Disable CLOUD_CHOOSE_PROVIDER and CLOUD_CHOOSE_ACCOUNT banner in Embedded PhotoPicker
f28f99e43 : Update MediaStore version to remove DB version for targetSDK>V
09a248143 : Populate search results on query/suggestion search
a403d1ba6 : Import translations. DO NOT MERGE ANYWHERE
bc0fca3ea : Add tests for search results sync worker
726d3dff6 : Add worker for search results sync
c0696ab4b : Set app compat change ID to enabled for Baklava
bd9faab0c : Launch photopicker in full screen when search is active
9ee2df127 : Import translations. DO NOT MERGE ANYWHERE
349707f00 : Remove dependencies on the 1-variant fallback
f62186247 : Fix the error that media preparer returns an empty media list
ba3232aa9 : Fix wait for sync logic
944c4eb80 : Update documentations of ACTION_PICK_IMAGES and EXTRA_MEDIA_CAPABILITIES
32ba300a6 : Implement the Photopicker Transcoding logic in Photopicker Application
26e2f72ae : Add crossProfile=true to PhotopickerApplication.
7b3e0afef : Add new Motion Photo intents
888549057 : Add CloudMediaProviderContract.Capabilities APIs and MEDIA_CATEGORY_TYPE_USER_ALBUMS.
50141e003 : [Embedded Picker] Add animations to photopicker.
1297805ef : Add death recipient on client callback and cleanUp session resources
eaad1ea83 : Fix recomposition flicker with AlbumMediaGrid
538cc4502 : Test for SearchViewModel
e00613804 : Display dynamic search suggestions in search view
3d101b91e : Import translations. DO NOT MERGE ANYWHERE
91722b4d0 : Add tests for picker search results query and insertions in Picker DB
bc488cf50 : Use only primary storage for thumbnails
567f55308 : Add methods to insert and query search result media
bf332b018 : Move the logic to create sql projections out of PickerSQLConstants
fbabde26b : Refactor PickerDataLayerV2
8eff56137 : Add MIME_TYPE filters in Search APIs documentation
febe207e6 : Public volume recovery for stable URIs
85bbf3761 : Implement the open transcoded file flow of Photopicker Transcoding
f2c581aa8 : Implement the transcoding flow of Photopicker Transcoding
4609c638f : Populate the latitude and longitude columns of the files table
678e14034 : Import translations. DO NOT MERGE ANYWHERE
a0b83db34 : Add util methods to perform database operations on search_request table
682b96934 : Introduce Search Request and Search Result tables
4db1293bf : Fixes for errorprone update
47b0be276 : Add test to ensure long press to preview is disabled in embedded
dc5eec7c2 : Fix TimeoutException thrown during finalize() call of PdfRenderer.Page
526ec26d1 : Restrict invalid characters in file name
e3a6f0d19 : Use Upstream Passthrough if available
77511f446 : Import translations. DO NOT MERGE ANYWHERE
199ac14f3 : Restrict app file creation to prevent malicious behavior
c073d3208 : Remove overridden method EmbeddedPhotoPickerSessionWrapper.binderDied(Landroid/os/IBinder).
188040d73 : Fixes for errorprone update
77cb06096 : Import translations. DO NOT MERGE ANYWHERE
a7938a07a : Add QUERY_ALL_PACKAGES to Photopicker.
9b56546fa : Add missing Javadoc dependency to `framework-mediaprovider`
32d310f4d : Set video control timeout to 5 seconds.
5228928b5 : Use go/compat-framework to gate mediastore version lockdown
abafc3b67 : Search and Derived Collection Apis for CloudMediaProvider.
d5a33fda1 : Add SearchDataService
c6e21b4c2 : Reduce permission pop up icon size
cee53937c : Add CtsScopedStoragePublicVolumeHostTest to test mapping for postsubmit runs
f9c270858 : Adding helper classes for search
34bf053ca : In Search bar update placeholder text based on mime type filter
411bb2782 : [Embedded] Transfer touches to client in empty grid and align “No photos yet" screen properly
665058a86 : Add flag to control enabling the photopicker DeviceConfigReceiver
0f834abd5 : Mark packages/providers/MediaProvider apps in updatable apexes with updatable: true
8d621b5d7 : Add check on inode number before do_forget call
dea1b765f : Move initialization of NotificationChannel out of the MP boot path
abe995922 : Fixes for errorprone update
194752593 : Import translations. DO NOT MERGE ANYWHERE
0aee1e828 : Prevent crash for embedded when long pressing items in an album.
9054703a3 : Fix profile selector colors for accent color.
6c1447de9 : Support redacted URI for getType
137d42a16 : Fix banners in embedded photopicker photo grid
c1d07037c : Import translations. DO NOT MERGE ANYWHERE
574be48da : Add RequiresFlagsEnabled to testSetEmbeddedPhotopickerFeatureInfo tests
5e4bde747 : Add missing test module to mediaprovider-mainline-presubmit

+- Project: platform/packages/providers/TelephonyProvider

f6ada844 : Add NB_IOT_NTN
d49ab131 : Carrier ID table rollout all_sdk_carrier_list_Rollout_20241106 Publish the latest carrier id table. This build IS suitable for public release.
d92997f2 : Partially support database downgrade
09966699 : Fix SmsProviderTest on messaging-less devices
be17b7c4 : Carrier ID table rollout all_sdk_carrier_list_Rollout_20241009 Publish the latest carrier id table. This build IS suitable for public release.

+- Project: platform/packages/providers/TvProvider

062d38c : TvProvider: fix notifyChange issue with notifying non-system user

+- Project: platform/packages/screensavers/Basic

e075fc8 : Enable complications on screensaver.

+- Project: platform/packages/screensavers/PhotoTable

7dc3770 : Import translations. DO NOT MERGE ANYWHERE
595e75e : Import translations. DO NOT MERGE ANYWHERE
41227d1 : Import translations. DO NOT MERGE ANYWHERE
7c786c0 : Import translations. DO NOT MERGE ANYWHERE
7204a9c : Import translations. DO NOT MERGE ANYWHERE
b2057e3 : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/services/AlternativeNetworkAccess

a5304f8 : Support SubscriptionManager.canManagerSubscription for HSUM
4487b3e : Fixes for errorprone update

+- Project: platform/packages/services/BuiltInPrintService

02497d3 : Check Mopria cert level on printer & apply JPPS as per applicable spec
087c238 : Fix for adding multiple Wi-Fi Direct printers (part1)
0479ad6 : Import translations. DO NOT MERGE ANYWHERE
e4f8c2f : Import translations. DO NOT MERGE ANYWHERE
97d85f6 : Fix for Wi-Fi Direct printer connection on Printer Info Screen (part2)
bbf1e5d : Fix for blank page not printed for PWG face down printers
00ff2c3 : Implemented TODOs for analytics
65969d6 : Send "requesting-user-name" as part of `get-jobs` request
a9ec23e : Add another printing owner
7103712 : Fix for swapped printer capabilities issue
d133c4c : Add another printing owner
f02de4b : Revert "Remove analytics related code"
5795e7a : Import translations. DO NOT MERGE ANYWHERE
6670681 : Remove -Wno-implicit-function-declaration and fix the bugs.
eabaee8 : Revert "Remove analytics related code"
fbbb4fb : Import translations. DO NOT MERGE ANYWHERE

+- Project: platform/packages/services/Car

0e11f29d18 : Add flags to permission definitions in service/AndroidManifest.xml
b9cb9e1611 : Revert "Add PERMISSION_CAR_INFO for VEHICLE_CURB_WEIGHT."
54da115e63 : Add PERMISSION_CAR_INFO for VEHICLE_CURB_WEIGHT.
adb41e905a : Do not release real car service in CarServiceTest.
3fe0622cc1 : Fixed flaky car service unit test
badd65fc8c : Fixed focus interaction mock issues
6edb9815df : Added VEHICLE_PASSIVE_SUSPENSION_HEIGHT to API
6ddba30ab5 : Added BRAKE_FLUID_LEVEL_LOW to API
b3e30bf7d7 : Added BRAKE_PAD_WEAR_PERCENTAGE to API
8d21865bfc : Added BRAKE_PEDAL_COMPRESSION_PERCENTAGE to API
83d9274f39 : Added ACCELERATOR_PEDAL_COMPRESSION_PERCENTAGE to API
92550db5db : Added VEHICLE_DRIVING_AUTOMATION_TARGET_LEVEL to API
c848474b6a : Added VEHICLE_HORN_ENGAGED to API
73546a5798 : Parse HasSupportValueInfo from VHAL.
34603dc514 : Skip calling reg/unregSupportedValuesCb if not implemented.
07d83b9b6f : Implement SupportedValues in AidlVehicleStub
987602319a : Use audio HAL product strategies in new audio control HAL
f2acc6399c : Added INSTANTANEOUS_EV_EFFICIENCY to API
157bd4b96f : Remove PreferenceService from Car Manifest
b0c0ea02b9 : Import translations. DO NOT MERGE ANYWHERE
3715859e4a : Import translations. DO NOT MERGE ANYWHERE
baba63724c : Added INSTANTANEOUS_FUEL_ECONOMY to API
a0f470de4b : Added TURN_SIGNAL_LIGHT_STATE and TURN_SIGNAL_SWITCH to API
517071c106 : Added SAE_J3400 enums to EvChargingConnectorType.java
f8a153067c : Added INFO_VEHICLE_SIZE_CLASS to API
f2566a46e3 : Implement unregisterSupportedValuesChangeCb in CarPropSvc.
5358305b4c : Implement unregisterSupportedValuesChangeCb in CarPropMgr.
5f2ceedf83 : Implement registerSupportedValuesChange in VehicleHal.
4ba35ce13d : Added INFO_MODEL_TRIM to API
f352479632 : Remove expectation of IME StatsToken
83c000020d : [COMMIT 1/1 c9017356e471] This implementation violates the transitivity and symmetry contracts of the comparison method. Specifically, the issue arises when two tasks have the same priority and timestamp; the method always returns 1.
2dd464b137 : Add PERMISSION_CAR_DRIVING_STATE_3P for VEHICLE_DRIVING_AUTOMATION_CURRENT_LEVEL.
43743aae3a : Add PERMISSION_CAR_ENGINE_DETAILED_3P for ENGINE_RPM.
7a205818a5 : Add PERMISSION_READ_WINDSHIELD_WIPERS_3P for WINDSHIELD_WIPERS_STATE.
9b6a40c985 : Add PERMISSION_TIRES_3P for TIRE_PRESSURE.
5afae6f833 : Add PERMISSION_READ_CAR_SEATS for SEAT_OCCUPANCY.
56c2c72603 : Implement registerSupportedValuesChangeCallback in CarPropSvc.
10e544eaaf : Fix typo: "tto" -> "to" in KitchenSink default permissions xml.
7cb8a8c30e : Add new PERMISSION_MILEAGE_3P for PERF_ODOMETER.
166228d9fa : Add new PERMISSION_READ_STEERING_STATE_3P for PERF_STEERING_ANGLE.
e27e4b379d : Add FLAG_25Q2_3P_PERMISSIONS to PropertyHalServiceConfigs.
b16a8175e2 : Fix AndroidAutomotiveNotificationsTests for portrait
a946684379 : Fix CarDeveloperOptions test
6c8607bbb7 : Add DisplayCompat density feature flag
cce5af8313 : Fix a deadlock in AidlVhalClient.
854ae30d62 : Update aspect ratio icons for RB
ebcab8975e : Fix failing CarDeveloperOptions test
2eafb1a153 : Add text clarifying SSID case sensitivity
2b072f7b8d : Add wildcard to CarServiceHalUnitTest
d3c08dff57 : Add Hal*DebugUtilsUnitTests to Ravenwood tests
ea8f607415 : Adding scrollbars to the Menu and Panel to improve UX.
fc1342a74d : Import translations. DO NOT MERGE ANYWHERE
0b0d51f886 : Use notification for radio alert in kitchen sink
20ebae7884 : Add car audio service test for audio control HAL configuration
00932e80fe : Add feature flag to android manifest
c5ad97f668 : Update VHAL CPP client to use V4 interface
51b07bf460 : Fill in CarPropertyService for simulation changes
3887a0ffd3 : Update API usage
6e2963e224 : Support for Fabricated RROs in the RRO Panels.
78c4c7a079 : Use audio control zones helper to configure car audio service
fe3aafd86d : Assign remaining unassigned microphone input devices to primary zone
22dfb6f92c : Fix failing CarDeveloperOptionsUnitTests
05f6bbb7be : Import translations. DO NOT MERGE ANYWHERE
a09695eccf : Import translations. DO NOT MERGE ANYWHERE
34d096749a : Import translations. DO NOT MERGE ANYWHERE
09377173aa : Add audio mirror load to audio control zones helper
3f3b25982a : Fill in CarPropertySimulationManager
4ac9615a82 : Add additional SysUI overlays to package allowlist
e54cbcf40b : Add car audio context to audio control zones helper
c8d9880c91 : Fix failing CarDeveloperOptions test
b0f48e12d0 : Added android_b_vehicle_properties flag to car service.
ea814e381b : Updated VehiclePropertyIdsParser to ignore duplicate enums and sort
37b65e7011 : Add trunk stable flag for new 3p permissions.
234f73d05a : Update javadocs for VehiclePropertyIds.
c6729db9b2 : Update java docs for Car permissions.
ea15626565 : Update SystemAPI tag and java docs for Car permissions.
b1dc82ba5c : Update SystemAPI tag and java docs for Car permissions.
7ed7b45c26 : Update java docs for Car permissions.
b973c700ae : Update SystemAPI tag and java docs for Car permissions.
6aeda389d0 : Update SystemAPI tag and java docs for Car permissions.
ee828511b0 : Add audio zone id to occupant zone id load to audio control zones helper
2d3119c90b : Fix interface name label
e2e5ca5d1b : Add audio control HAL zones helper
6e6a88f64a : Import translations. DO NOT MERGE ANYWHERE
e783ebb4ad : Import translations. DO NOT MERGE ANYWHERE
8b38b5f85a : Import translations. DO NOT MERGE ANYWHERE
09157f2e36 : Import translations. DO NOT MERGE ANYWHERE
2dd35f84b2 : Import translations. DO NOT MERGE ANYWHERE
f1c2fdcf88 : Import translations. DO NOT MERGE ANYWHERE
8fa7858b5c : Import translations. DO NOT MERGE ANYWHERE
56c22d2510 : Import translations. DO NOT MERGE ANYWHERE
885f4d35c7 : Import translations. DO NOT MERGE ANYWHERE
e020d2b843 : Import translations. DO NOT MERGE ANYWHERE
cb0f8b590e : Import translations. DO NOT MERGE ANYWHERE
70f0f3e979 : Import translations. DO NOT MERGE ANYWHERE
af397473da : Update SystemAPI tag and java docs for Car permissions.
350c87b493 : Update SystemAPI tag and java docs for Car permissions.
8bbe296908 : Update comments to point to the new location of event.logtags.
9814e1bbfe : Define more VINTF fragments as a module type
7af774e3e8 : Implement registerSupportedValuesChangeCallback in CarPropMgr.
65ec18a377 : Migrate caruiportrait to DA
2d536d119d : Add audio input device converter to audio control HAL converter
bacd9e625b : Fix crash in InCallTaskStateRouter
564ec36d1c : Add convert audio mirroring devices
4c03f8271e : Skip startOrBindService when user is not running
157bcfdb44 : Skip startOrBindService when user is not running
f45f283730 : Adjust interface to receive IME statsToken
86ab7ef176 : Add check for fadeable usages error
5246ca7940 : Implement getSupportedValuesList.
49d749f97b : Skip startOrBindService when user is not running
68e791f753 : Implement getMinMaxSupportedValue
e426c42ace : Add audio zone fade converter to audio control HAL converter
3a2f4a1a87 : Add DriverUIDesignRRO
1c74e1f12e : Avoid deadlock between CarRemoteDeviceService and SystemActivityMonitoringService
555d2ae017 : Import translations. DO NOT MERGE ANYWHERE
7fc2f9a899 : Add CarPropertySimulationManager
a0400e5812 : Add textColorAlertDialogListItem to themes_device_defaults
8efc145d22 : Refactor generate car audio device info method
e01da252cc : Refactor audio utility method to create address to car audio device map
be1d3f6071 : Add transient fade configuration converter
3a62cdc4e6 : Increase VHAL version and implement supportDynamicSupportedValues.
fd0bca1c9f : Update doc for HVAC_FAN_DIRECTION.
36f0d0d91c : Implement getSupportedValuesList.
38a9cc8bcc : Fill in hasMin/MaxSupportedValue for legacy VHAL.
6e3ff50911 : Implement part of getMinMaxSupportedValue.
2dddc48c35 : Remove deprecated attributes from reference camera apps
e44c17f7d7 : Revert android.automotive.watchdog.internal as unstable AIDL
563b3b0f5c : Replace args with flags in droidstubs
f4a7451fe9 : Remove overlayed colors on Android Auto's keypad.
776a53d6d3 : Add audio fade configuration converter
5e93616b73 : Define more VINTF fragments as a module type
40aa20050f : Support toggle on control bar
b5458db601 : Add control center controller to passenger overlay
534e408fd1 : replace aconfig_storage_reader_java with aconfig_storage_stub
b3a66efc42 : Remove SystemAPI tag for DRIVER_MONITORING permissions.
08a90b6b23 : Remove SystemAPI tag for PERMISSION_READ_ADAS_STATES and PERMISSION_CONTROL_ADAS_STATES.
181260d8f8 : Remove SystemAPI tag for PERMISSION_READ_ADAS_SETTINGS and PERMISSION_CONTROL_ADAS_SETTINGS.
85913a423b : Define VINTF fragment as a module type
9975d022df : Fix Dialogs with custom view on Hudson
da7f90f76c : Add config to declare settings is using ActivityEmbedding
e26ba4b48f : Update SettingsRRO for new dual pane layout
c162b296eb : Fix system dialog title/icon styling
5cd1ccfec0 : Add OWNERS file for tools directory.
1a1d08a0c7 : Enable car portrait to install third-party IME
e9df6993b4 : Changed volume group activation type values
a15916902b : Add audio control zone converter
99fb9c4290 : Import translations. DO NOT MERGE ANYWHERE
d234518177 : Import translations. DO NOT MERGE ANYWHERE
b0eeae02bf : Add null checks to audio context converter parameters
5ffd9a4ed6 : Refactor audio device manager in car audio service unit test
8573c575a8 : Remove SystemAPI tag for PERMISSION_CAR_DRIVING_STATE.
e5d98f5f0a : Remove SystemAPI tag for PERMISSION_READ_ULTRASONICS_SENSOR_DATA.
af34b51231 : Remove SystemAPI tag for PERMISSION_CONTROL_STEERING_WHEEL.
7a95ee997c : Remove SystemAPI tag for PERMISSION_READ_CAR_AIRBAGS and PERMISSION_CONTROL_CAR_AIRBAGS.
d39ba62ec0 : Remove SystemAPI tag for PERMISSION_READ_WINDSHIELD_WIPERS and PERMISSION_CONTROL_WINDSHIELD_WIPERS.
f216144c2f : Remove SystemAPI tag for PERMISSION_READ_CAR_SEAT_BELTS.
114584c39b : Remove SystemAPI tag for PERMISSION_CONTROL_CAR_MIRRORS.
1aa21e71ec : Remove SystemAPI tag for PERMISSION_CONTROL_CAR_SEATS.
8b622bcf42 : Remove SystemAPI tag for PERMISSION_CONTROL_GLOVE_BOX.
e29ad659bf : Update docs for PERMISSION_CONTROL_CAR_MIRRORS.
dad1c7f843 : Update docs for CONTROL_CAR_DYNAMICS_STATE and PERMISSION_CONTROL_CAR_DYNAMICS_STATE.
f390625909 : Update docs for PERMISSION_CONTROL_CAR_CLIMATE.
9a23e57327 : Update docs for PERMISSION_CONTROL_CAR_DOORS.
bf874a45fe : Update docs for PERMISSION_CAR_ENGINE_DETAILED.
1fb6495d43 : Update docs for PERMISSION_READ_IMPACT_SENSORS.
d161f4df16 : Update docs for PERMISSION_READ_HEAD_UP_DISPLAY_STATUS and PERMISSION_CONTROL_HEAD_UP_DISPLAY.
a9c9e1cb03 : Update docs for PERMISSION_READ_VALET_MODE and PERMISSION_CONTROL_VALET_MODE.
245b4e4afc : Fix top system bar and nav bar mirroring in RTL
5fa425bc89 : Fix failing CarDeveloperOptionsUnitTests
bb54840b4e : Migrate materialColor* attributes into colors
6fb72c799e : Show state change of Dialpad button in control branch
0678c6302e : Remove phone settings activity
f0033250e3 : Change androidManifests and moved ProcFsInspector
235f15e623 : Remove SystemAPI tag for PERMISSION_CONTROL_CAR_DOORS.
bd9d27642f : Remove SystemAPI tag for PERMISSION_CONTROL_CAR_CLIMATE.
8131428700 : Remove SystemAPI tag for PERMISSION_CAR_DYNAMICS_STATE and PERMISSION_CONTROL_CAR_DYNAMICS_STATE.
be980d54df : Add convert volume group config method
ed2a662064 : Add convert audio context to device entry
93c6508e5d : Changed CarServiceUnitTest srcs
c640522513 : Moved some carservice_unit_test to CarLibTests
c986d3a72d : Dynamic rows and columns based on available space
377f60711f : Add STATE_SHUTDOWN_PREPARE to list of states that support completion
7c1e56e288 : Add owner field to automotive internal aidl_interface
9141eb14e9 : Move WMComponent to wm shell
d6d667fed3 : Add a permission for Aspect ratio feature
483715094b : Remove SystemAPI tag for PERMISSION_READ_HEAD_UP_DISPLAY_STATUS and PERMISSION_CONTROL_HEAD_UP_DISPLAY.
b8de3248cb : Remove SystemAPI tag for PERMISSION_READ_VALET_MODE and PERMISSION_CONTROL_VALET_MODE.
a611ec4195 : Remove SystemAPI tag for PERMISSION_READ_IMPACT_SENSORS.
7eb6ccc2a9 : Remove SystemAPI tag for PERMISSION_CAR_ENGINE_DETAILED.
84da33c0ae : Update PropertyHalServiceConfigs runtime flag checks.
949eee2972 : Add convert audio port to car audio device info
b5931467bc : Add utility to convert car audio context
c32885d6d8 : Add utility to convert volume activation configuration
a38af40aa5 : Add native device conversion
fe174e2ac9 : Remove unnecessary static initializer from core audio helper
cd50c3c869 : Add controllers to MUMD system bar buttons
33d0635b3d : Remove unnecessary api_levels_... properties from car-doc-stubs
53e746b2e4 : Remove ICarPowerPolicyDelegate from VINTF manifest
bba0bd42a1 : Update power policy notification mechanism
11645580b4 : Fix bug where --cancel-after can deadlock
dbde43321c : Add audio configuration APIs to audio control wrapper
aab8c887e8 : Inform systemUI of the control bar's height change
5d0adff0a0 : Initialize ICarImpl doPriorityInitInConstruction as true
93f0aac320 : ClusterHalService should not crash when VHAL property is not supported.
9cf01b2a7a : Convert hvac button to flexible ui
c9806ee7aa : Move DriverUI to car_product/
d8a0f66c50 : Add a test for resource leak check for LargeParcelable.
2da2e4849c : Remove duplicate comment for Payload.close.
e2107f97fc : Remove ICarPowerPolicyDelegate from VINTF manifest
f174ba8fff : Update dependency on graphics.common HAL to use defaults.
16fb8b53d5 : Remove activity from car dev options
15e7562454 : Changed to use the new hidden API
d297c2f428 : Import translations. DO NOT MERGE ANYWHERE
f310d7b576 : Point SystemUI Dialog theme to DeviceDefault
99ef3c759b : Update Dialog theme to be similar to AlertDialog theme
8acd81edfe : Remove unused resource value
4f8bedc9a1 : Add feature flag for car property simulation
a6959cc6b7 : Script to run a CUJ multiple times and generate KPI stats
eb4da0de73 : Generate KPI stats for KPI collected across several runs
0d1c6179fe : Add support for new KPIs in generate_kpis.py script
f916dace5b : Support device recovery and immediate app launch CUJ.
eb8663f0b2 : Support hover tooltip in the PSI plot.
4baab698b8 : Parse multiple events from a single line in PSI dump.
fa12b3cd75 : Print cli arguments and log error to log file.
c9d43145e2 : Fix failing CarActivityServiceTaskMonitorUnitTest for portrait UI
c65ccebdb8 : Revert the new default MCP value (ag/29980127) to 32 processes.
0b8ea093ea : Add trunk stable flag for removing system API on property constants.
c5e532fee0 : Refactor car audio configuration management
320e7cef12 : Fix SharedMemory leak issue.
2cb26d9d9a : Increase carpowerpolicyd connecting to VHAL timeout.
3d1a4dbf5a : Adds new oem tokens RROs.
4476b229bb : CarService test: workaround for new system usage
f3ac482465 : Add new APIs for car property supported values.
f6514a1fd5 : Implement radio alert in radio kitchen sink
0abaf26514 : Remove activity from car dev options
7d57833eb9 : Close the Payload object in car lib.
28b71e525a : Start-align dialog title on Hudson
f66a3df28f : New menu structure for the tool.
aeeaacb76c : Add checks for bundle info in audio focus request
f07bb6fa1f : Import translations. DO NOT MERGE ANYWHERE
1110179893 : Fix Dialogs created in fullscreen on RB
22bb1fd869 : Update Dialog width
50e5cd03dd : Moved internal carservice_unit_test to CarLibUnitTest
de87739dba : Moved CarPowerManagerUnitTest to CarServiceTest
e8ca687331 : Add smaller Car*ServiceUnitTest
13e697cd84 : Refactored EventsMask to EventFlag
2b52fd47d8 : Import translations. DO NOT MERGE ANYWHERE
9c1bbb062d : Import translations. DO NOT MERGE ANYWHERE
5a15e7fbbe : Allow guest user creation regardless of no_add_user
6f4e96e415 : Enable GAS RROs for CarUiPortrait targets.
1228ff83f4 : Fix the STR issue with Guest user
3187864d84 : Update locking for passenger users
0a28abd66c : Launch default navigation activity after terms of services are accepted
7da5e4a3f1 : Fix Dialer dialpad hidden by media controls
da0f0bd312 : Include telephony_system.mk
648f4d2d23 : Style car_ui_activity_background_color in reference designs
fb141a3f57 : Catch the UnsupportedOperationException when querying channels from the WifiManager.
d00352b644 : Optimize CarPropertyValue parcel marshal logic.
d9a0dec355 : Fix /java/bin no file found error.
c38b15581e : Optimize CarPropertyValue parcel marshal logic.
3ec3bfe9c3 : Create CW control bar activity
7ba4d77145 : Hide queue button when there is no queue
53ecec056e : Update dependency on graphics.common HAL to use defaults.
4e07eaab6e : Expose current drive state as shell command for easy debugging
58505292ed : Simplify Application RRO panel.
a65d9d66c9 : Add a permission required for SystemUI
445f64b44e : Add a permission required for Settings
32aed692b5 : Update disabled microphone dialog
567aaf1ce5 : Update disabled microphone dialog
9b1663a523 : Fix typo in ScreenOffHandler log
f417b5f6ed : Add TrunkStable flag for task view task reordering
33e923e8be : Fix listDialogPortraitTest for short screen
9a064c8360 : Remove @Deprecated inconsistency in *removed.txt files
97659df25f : VUR should not be set to false for subscribePropertyEvents.
6b81102d5b : Add the debug panel entry to RB
6594841a3b : Remove Settings Service from Manifest
ebbdff8b13 : Set BSSID in SoftApConfiguration when AP is started
935e4f4daf : Add Settings Core team to OWNERS for CarDeveloperOptions manifest file
6d1721a90b : Update documentation for init-value events.
355abcae89 : Add safety check for value not being null.
0be5927da6 : Move CarPropertyConfig/ValueTest to unit test.
3ec1ec4ecd : Support gone text fields in coolwhip
8cf2d2d325 : Always send initial value event for new subscription.
b21e6ebc2d : Async car audio service init.
28538323b3 : Add smaller CarServiceUnitTests
70d160ec8a : Add smaller CarServiceUnitTests
d61f117a20 : Import translations. DO NOT MERGE ANYWHERE
1e08571f69 : Import translations. DO NOT MERGE ANYWHERE
27d3cdd2a4 : Create RawPropertyValue.
5a6f94fb2c : Update carwatchdogd proto file imports
62f688e990 : Restart launcher if TaskView not initialized
9c77bfdcdd : Add custom notification builder to kitchensink
26cc5c08df : Fix SurfaceControlViewHostTests for portraitUI
ef8827bd96 : Minor update on doc.
3de7e4a248 : Make shell_transit value overridable for cw targets
5858d0b13c : Add logging to CPMS for boot times
417aa8d783 : Increases the margins at the top of the views
15e0eb3817 : Update ASV size on RB
6d39b04b8d : Fix RoleManagerTest for landscape UI
0b31f09d8c : Import translations. DO NOT MERGE ANYWHERE
e264b08026 : Import translations. DO NOT MERGE ANYWHERE
3929393005 : Add null condition before checking for blocked activity targets for ActivityBlockingActivity
95879d9a55 : Fix calm mode not opening when app grid is open
2bb7425264 : Added reserved field to performance_stats.proto
9beb017141 : Replace strong pointers in PackageInfoResolver with smart pointers
f696853ce9 : Enable AidlVehicleStubUnitTest on host.
1231ed7f9d : Fix HUN panel contrast
f517ce27b2 : Add support for parsing logcat and generating plots and KPIs from PSI monitor output.
ced5030852 : Move kernel start time to WatchdogPerfService dump
77bbb808a4 : Use checkService instead of getService in watchdog test client.
91d99cd126 : Remove duplicate code in HalPropValueBuilder.
a1b582e1f6 : Remove duplicated comment
8977e06693 : Make AIDL ICarDisplayProxy/default as default
facf0f2d24 : Handle kernel abort during hibernation
01ed482506 : Fix media card panel animation
4afcc6d5df : Remove duplicate code in MockedVehicleHal.
1f799103de : Add missing contextual colors to OEM design tokens
2dc50e1e55 : Remove ICarPowerPolicySystemNotification
03967776ea : Fixes for errorprone update
d40e4016b0 : Have the test preparer enable Bluetooth
f74ca067f8 : Add process name and uid to ProcessIdentifier
1830dcef6d : Fix portrait launcher crash during onDestroy
df806a731f : Delete Car Settings manifest entries related to super-obsolete zen onboarding
c075873e03 : Add a permission required for Settings
72bbabe9b5 : Replace hard-coded PER_DISPLAY_MAX_BRIGHTNESS.
b739fdd8e0 : Disable dock media apps for RB
8f2617849e : Launch companionUI after task move is complete
113e593196 : Avoid foreground user service being started by user 0
71fed2ea55 : Disable seeking media when queue/history panel is open
a6126a0582 : Refactor wtf handling in AbstractExtendedMockitoTestCase
165c767c10 : move multi-user annotations to bedstead-multiuser module
b2d220efd8 : Fix Toolbar in plugin
549573f581 : Revert^2 "Prevent resource leak in CarPowerPolicyServer."
0cc032e10b : Clean up old setting pages
7e487cb0eb : Fix Hotword test app launch

+- Project: platform/packages/services/DeviceAsWebcam

a1ce6f8 : Import translations. DO NOT MERGE ANYWHERE
5c1d1ad : Downgrade "UVC support not enabled" log to INFO
a4eaee4 : Import translations. DO NOT MERGE ANYWHERE
46eed1d : Update package name of DeviceAsWebcamPreview
1003f8e : Add null check for DeviceAsWebcamReceiver.
8ab4fbb : Import translations. DO NOT MERGE ANYWHERE
a49b0bd : Import translations. DO NOT MERGE ANYWHERE
2d3f197 : Remove flag high_quality_toggle
18c287e : Mark strings that shouldn't be localized to prevent erroneous translations.

+- Project: platform/packages/services/Iwlan

57980f0 : Use primitive boolean in ErrorPolicyManager
b86b6db : Utilize member variable to avoid EpdgTunnelManager getInstance call
3ce4ca7 : Refactor Singleton EpdgSelector
f312ccc : Remove hidden API usages in VcnTransportInfo
f72d29a : Remove hidden API usages in VcnTransportInfo
c2e6c38 : Update Network Validation Accept Timing and Status Reporting
b773f4b : Refactor remaining backoff duration return type to Use Duration Class for Duration Representation
b4aa92f : Refactor RetryAction to Use Duration Class for Duration Representation

+- Project: platform/packages/services/Mms

9f5097c : CarrierMessagingService: Add gated support for granular error for MMS flows

+- Project: platform/packages/services/Telecomm

24d97b428 : Import translations. DO NOT MERGE ANYWHERE
8fd1d687a : Add the import statements that got removed from TelecomServiceImpl
4ab4d3dd0 : Resolve cross account user icon validation.
38b952f29 : Synchronize active bluetooth device
9f30609a1 : Rename the flag of keep_bt_devices_cache_updated
f6be73f44 : Update the flag as bug fix
73983ae31 : Fix issue where call timeout is triggered for canceled calls.
422b0d24e : add anom report and more logs to Ringer haptics
df2bb5b98 : Keep the BT devices cache of BluetoothDeviceManager updated
16f94ccc7 : Allow privileged apps to accept and end VOIP calls.
ace080f92 : Clear the cache after the logs get pulled
9586d387e : Only attempt dock call routing if not already on speaker
60e685dcd : Revert^2 "Mark @FlaggedApi flags as exported"
065bca048 : Import translations. DO NOT MERGE ANYWHERE
cb192caf2 : Revert "Mark @FlaggedApi flags as exported"
770184092 : Import translations. DO NOT MERGE ANYWHERE
f9a4857ca : Import translations. DO NOT MERGE ANYWHERE
54559cc7b : Mark @FlaggedApi flags as exported
5fc967338 : Resolve READ_PRIVILEGED_PHONE_STATE bypass
b671d95cb : Ensure AppLabelProxy is multiuser aware.
b55fe9206 : Fix NPE in BluetoothDeviceManager
b0aeb39cd : Import translations. DO NOT MERGE ANYWHERE
b7e89e8c4 : Revert the security exception for voip phone account registration.
95d0739c1 : Resolve audio not routing to watch issue
a4d6a97d8 : Resolve call audio focus not abandoned
212d538ae : Ensure BT ICS unbind after completing BT future
5c9f677b2 : Improve the performance to avoid the potential ANR
96945d535 : Ensure vibration attributes are always logged.
6b2d00fc7 : Synchronize bluetooth routes map
4e64460ca : Update the uid value to the calling package uid
2e0acb99d : Fix NPE in CallAudioRouteController
ba4229636 : Improve headset media button logging in Telecom.
1fb021117 : Revert^2 "Modify Tests to make SessionManager final for tests"
6c7c4a961 : Revert "Modify Tests to make SessionManager final for tests"
136195a91 : Import translations. DO NOT MERGE ANYWHERE
6df175693 : Refrain from unbinding BT ICS when call is ongoing
69a816bb5 : Unbind CS if connection is not created within 15 seconds.
bf5e48425 : Unbind CS if connection is not created within 15 seconds.
14978e09b : Unbind CS if connection is not created within 15 seconds.
955f01121 : Ensure transactional accounts cannot be call capable.
4b34bf54c : Modify Tests to make SessionManager final for tests
4db9fecdf : Unbind CS if connection is not created within 15 seconds.
8deef8d28 : Call Sequencing initial changes/refactor
f81900399 : Import translations. DO NOT MERGE ANYWHERE
e973a3c34 : Ensure access to mDisconnectedToneBtFutures is synchronized.
463dc7c6a : Resolve user switch baseline route for video calls
48d6b0df9 : Unbind CS if connection is not created within 15 seconds.
1f15e021c : Revert "Unbind CS if connection is not created within 15 seconds."
8d1376ef8 : Clean up onCommunicationDeviceChanged callback logic
325aed4a7 : PhoneAccountRegistrarTest: use registrar as SYSTEM user
a48808e0f : Unbind CS if connection is not created within 15 seconds.
410f01079 : Use communication device callback and fix reinitialization routing
8398bb2b8 : Enable telecom metrics pulled atoms callback
0ba3a574d : Revert audio focus logic in Telecom.
cabe9dd4c : Resolve multiple BT device switching and active device updates
9ba59a001 : Update the prefix of the API enums' value for the API stats
d206891a7 : increase intermediate stuck voip call timeout to 2 mins
049ca1687 : Remove Recursion from Telecom Session Management/Traversal
cd5768fe9 : Fix NPE in CallStats.
74a5666b5 : Add Telecom API and module error metrics
9206e6d37 : Import translations. DO NOT MERGE ANYWHERE
aecb9a3d4 : Respect active BT device for routing and resolve flaky BT timing issues
85bd3be9e : Resolve NPE with AudioManager#getCommunicationDevice
25251261e : update the WATCHDOG_DISCONNECTED_STUCK_VOIP_CALL_MSG message
536338568 : Fixes for errorprone update
02e26a504 : Retrieve the main user and associated context in BlockCheckerFilter.
95b7770ad : CallAudioRouteController: fix possible null pointer dereference in onAvailableRoutesChanged()
e40f48c76 : Modified to display valid value set in Role when capturing log dump

+- Project: platform/packages/services/Telephony

9a96c2f51 : Import translations. DO NOT MERGE ANYWHERE
5f0ccae78 : Reject e911 call during non-emergency satellite session.
139baeade : Set the IsNtnOnlyCarrier field in satellite access controller metrics.
0f6683a6d : Add carrier id to satellite access controller
60a66e2e1 : No need to init mSatelliteAccessConfigMap during cleanup
58e46cad1 : Fix minor Telephony crashes
30b203802 : Remove earfcn max value validation check
74811429f : Import translations. DO NOT MERGE ANYWHERE
f5858b967 : Disable CDMA calls
f1af36c5d : Remove references to NV reset APIs
8f1a64653 : Add NB_IOT_NTN
39ae90292 : Fixed not showing satellite notifications if satellite is not supported
18f460827 : Removing permission check
a968e98f6 : TS43 Entitlement enhancement for Allowed Services
3a20001fe : "Add logic to update the notification whenever the satellite availability changes."
fa64af682 : TS43 Entitlement Enhancement with data plan type
b2265a6af : Added IsSatelliteSystemNotificationsEnabled check
bf05851cb : Convert hidden SatelliteManager APIs to System APIs.
2422ce3cc : Import translations. DO NOT MERGE ANYWHERE
2d155dfd7 : Import translations. DO NOT MERGE ANYWHERE
2c9e9ac7f : Update comments to point to the new location of event.logtags.
8bb664519 : Add DefaultMessageAppSupportedCheck for satellite SOS
cd37e7fbb : Revert "Add logic to update the notification whenever the satellite availability changes."
30e3e06f3 : Revert "Add logic to update the notification whenever the satellite availability changes."
364b6ff62 : [NTN][Geofence] CTS for geofence enhancement
dcd19f376 : Clear calling identity for setNtnSmsSupported API.
40516785b : Add requestSatelliteAccessConfigurationForCurrentLocation for TestSatelliteApp
93320242d : Add register/unregister for selected satellite subscription changed event in PhoneInterfaceManager.
8368b43b3 : [telephony] Delete unnecessary file
739b7d27e : Add logic to update the notification whenever the satellite availability changes.
acc8e35b1 : Import translations. DO NOT MERGE ANYWHERE
ba450046b : Change SatellitePosition @NonNull from @Nullable
8e30ad0d8 : Add requestSatelliteDisplayNameForSubscription in SatelliteManager
cabf244ee : Use the new HAL API updateSystemSelectionChannels to pass the bands/frequency to vendor service.
c4b074592 : Introduced forceResetCarrierKeysForImsiEncryption TestAPI to delete Imsi Certificate.
1834bcbb6 : Apply telephony_config_json_parser for satellite access control
5c3270ae7 : Re-evaluate disallowed reasons when carrier config changes
560c881f0 : Fix NPE by initializing notification manager
b9a4f612e : Added flag check in SatelliteAccessController
0b404db90 : Import translations. DO NOT MERGE ANYWHERE
da75775a7 : Update notification even when location settings are disabled.
5fa633d04 : Create the notification before registering the callback and add a null check logic.
dd8e083fd : Configure earfcns to modem
c29c54e0b : New System Api: getCarrierId using carrierIdentifier
99cc753b4 : Add requestSelectedSatelliteSubscriptionId API in PhoneInterfaceManager.
b1ead4b0c : Increase reliability of Teleservice tests.
f7b28c5bc : Improve SatelliteAccessControllerTest (unit test) isolation
3b0c4c21c : Fixed crash when user handle is null
bc11de1fa : Log the result of registering for satellite supported and provision state changes.
898f8cdea : [Geofence] Update SatS2FileCreator, SatS2LocationLookup, and DumpSatS2File to support config ID
e99cfc38e : Fix NPE at RadioInfo
a646bc13b : Add setNtnSmsSupported API in SatelliteManager.
edfa4c6c0 : Fix NPE in TelecomAccountRegistry isRttCurrentlySupported
22b93f8ec : Use TestableLooper.RunWithLooper instead of manually setting it up
33c445284 : Fix unit test NPEs when testing against Minimal Telephony
a1d2b1bcc : [Geofence] Write/read entry values to/from satellite block file
e31a8f3e8 : Allowlisted T-Mobile apps to access TM#getCarrierRestrictionStatus.
e30fbf643 : Revert "Import translations. DO NOT MERGE ANYWHERE"
edc8a1c72 : Add requestRegionalSatelliteConfigurationForCurrentLocation and onRegionalSatelliteConfigurationChanged
310544d1d : ImsConferenceControllerTest extends TelephonyTestBase
7db47c268 : Move PhoneFactory/PhoneGlobals static mocking to TelephonyTestBase
555ddec87 : Enhance IMS related shell commands to also specify an optional User
6216eaba1 : Add more null checks for getCarrierConfigForSubId
31e8dac5f : [satellite] fixed the crash when clicking the Test real or Test Demo button.
c14fe02ed : Clean up SatelliteEntitlementControllerTest
127483f39 : Add in-service wait timer config for ECC dialed in APM
84df0c104 : Import translations. DO NOT MERGE ANYWHERE
d42814253 : Prevent PhoneInterfaceManager mocks leaking between tests
3a9c8a6b6 : Put additional logs to check ResultReceiver created over limit
630db0176 : [satellite] Added condition for non-emergency in hidden menu
612be3b5b : [Geofence] Write/read entry value byte count and version number to/from the header block
8852e5127 : Add notification when satellite availability changes
609d78843 : Fix PackageManager handling in PhoneInterfaceManagerTest
709605f11 : Use test's shared preferences as a mock of system's
952f22da7 : Improve TeleServiceTests unit test isolation in singletons
8157fc445 : Import translations. DO NOT MERGE ANYWHERE
d7ea7085c : Add unit test for getImsPcscfAddresses and GetSimServiceTable
1c8068bcf : Minor Telephony Unit test cleanup fixes
322782bdd : ClearCallingIdentity for all satellite APIs in PhoneInterfaceManager
2070c44e2 : [Satellite] Changed start of satellite non-emergency mode in hidden menu to send intent instead of directly invoking satellite manager
b1c7ce61d : Added package to allowlist to access carrierlock stats
18466ee16 : Revert "Set forEmergencyCall of setRadioPower to false for norma..."
10a1955da : Ensure ImsConference is initialized with a valid number presentation.
05483bfaa : Skip lost mobile data notification when connected to satellite network that does not support data.
9c3d57bbc : Clear binding identity before calling satellite provision APIs
0d8c39d37 : Import translations. DO NOT MERGE ANYWHERE
42c5af4a9 : Use getSelectedSatelliteSubId() instead of getSatellitePhone().getSubId()
2980ac0a4 : Add reason to exit the emergency callback mode
23c56781d : Add getRegionalConfigIdForLocation for S2RangeSatelliteOnDeviceAccessController
54af3d175 : Refactor the code from using S2LevelRange to using SuffixTableRange
7a4f06084 : Ensure TelecomAccountRegistry is HSUM aware.
c79e3c520 : Suppress Notifications for Hidden Subs
20d8a0afc : Import translations. DO NOT MERGE ANYWHERE
c6ddac557 : Unit test fix for display-aware PowerManager API
02533f125 : Fixed GbaServiceTest on HSUM devices
4d4729271 : Update TestSatelliteService to use NB IoT radio technology and modem state
0468d891f : Updating the throttle time only after actual location query
059e34026 : Support HSUM for CarrierConfigManager APIs
e3a6dbcfc : adb shell command to trigger phone.notifyCarrierRoamingNtnEligibleStateChanged API.
1f968bbb0 : Fixed unit tests
26735295c : Fixed CTS EuiccManagerTest on HSUM devices
f46f0d64f : Add interface for satellite CTS
1e698f54d : Import translations. DO NOT MERGE ANYWHERE
1b7d97c5f : TestApp: Real & Demo mode for requestEnabled()
621c5ddf4 : Update Satellite Test App UI to work better on small screens such as Wear devices
578cdcde5 : Add deprovisionSatellite api
d69c27cd1 : Don't send MSG_WAIT_FOR_IMS_STATE_TIMEOUT message for reselect case.
965c84f31 : Enforce FEATURE_TELEPHONY_SUBSCRIPTION in getPackagesWithCarrierPrivileges
32bde21d0 : Add test case for location is not available
723105448 : Import translations. DO NOT MERGE ANYWHERE
94dde9fa1 : Force ESP loop back mode for test emergency number
dc3bcec33 : Set max wait time if setVoicemailNumber is stuck
2f163a641 : Drop FEATURE_TELEPHONY_GSM from interface requirements
59104154e : Update API description for setDeviceAlignedWithSatellite()
768afd036 : Combine isSatelliteEnabled and isSatelliteBeingEnabled
cc4364d9e : enforce the calling package is the default dialer for VisualVoicemail

+- Project: platform/packages/wallpapers/LivePicker

458b53c : Import translations. DO NOT MERGE ANYWHERE
02c8c5f : Import translations. DO NOT MERGE ANYWHERE
fb672d9 : Import translations. DO NOT MERGE ANYWHERE
c7edf2b : Import translations. DO NOT MERGE ANYWHERE
09c81bd : Fix errant nullable annotation
f2c9ab0 : Add support for description-based live wallpaper content handling

+- Project: platform/platform_testing

fcb94c131 : RequiresFlagsDisabled: return false when flag not defined
290af73fc : Fix: content provider throws an error when moving traces on seahawk
097ec5ea4 : Fixed selector for LockscreenSixDigitPinUi
06838e20c : Complete StsCommonUtilHostTests dependencies
40a79ce03 : TAPL rename: uiautomator_helpers -> uiautomatorhelpers
b2a693504 : Apply ktlint fixes to systemui tapl
06aa12f59 : Add DeviceProduct for cuttlefish-comet
45e692a3d : Clear both logcat on the device and LogcatReceiver buffer;
3ef11407e : Revert^4 "Introduce TestAnnotationScanner, and enable meta-annotations"
a8e76d370 : Move systemui-tapl to platform_testing
a01d8801e : Add a message with a link to http://go/motion-testing#managing-goldens in logs when tests fail.
7d946c9b7 : Add Java and Python Formatter and Linter to Automotive Tests.
5b6a407d8 : Fix: logcats during boot are truncated
880a86864 : Update test and Utils class for settings dual pane
c7c32ed56 : rdroidtest: Add module path to test case names
51f907d08 : Rule: Create WattsonMarkersRule
eb5136bc2 : Fix WMShellFlickerTestsDesktopMode SnapResizeAppWindow
a2e23a6eb : Add sysui scenario tests to presubmit for health/rules.
576c066d8 : Move Pip app helper to flicker lib
d1281d98d : Added testRestrictedSoundPalette in UxRestrictionFacetBarTest
34f74f2d9 : Add getter for package matcher to IComponentName
21fb59ae8 : Improve java interop for WindowManagerStateHelper
9e7949365 : Fix ktfmt errors in Components.kt
4fb707e3f : Re-add all activities to PerformanceLaunch APK
def139356 : Ensure flicker files are compatible with ktlint
759aaa99b : Improve java interop
778e9b261 : Refactor app helpers for better reuse and java compat
4b5f948f1 : Connectivity tests to verify the disabled Phone and Media Button after Reconnecting the paired device
699b774ce : Update below tests in NetworkPaletteTest to repeat verification.
8d6d4bac0 : Revert "Fixed testCurrentTime in CurrentDateTimeTest"
bc1f10c65 : Refactor BasePlugin to not be a Gradle plugin
49fccd472 : Apply license and developer to all .pom artifacts
79b9e90e3 : Fixed testCurrentTime in CurrentDateTimeTest
4214c8b35 : Fix atest command
04a369c30 : Fix SnapResizeAppWindowWithButtonTest
61bc1f218 : Fix SurfaceViewBackgroundMatcher, add test
3dfe57700 : Revert "Added new LaunchQuickControlsTest under AndroidAutomotiv..."
648725f4b : Make visibleRegion in LayerProperties non nullable
b4855c7a9 : Added new tests to UxRestrictionFacetBarTest
d9dab467f : Fix the import error in the python NonApiTest annotation.
4cade77d9 : Support MINIMIZE_AUTO_PIP_APP scenario
643f64272 : Migrate materialColor* attributes into colors
d74f0a000 : add support for templating Spectatio shell commands;
1d245d8d8 : Added new LaunchQuickControlsTest under AndroidAutomotiveStatusBarTests
086a26b9a : Add matcher for BRING_APPS_TO_FRONT
7f547668b : Add WithTopVisibleApp to WM state helper
8f12e7a3c : Revert^3 "Introduce TestAnnotationScanner, and enable meta-annotations"
b3e1cb134 : Fixes for errorprone update
9b8e92fde : Update desktop mode Components for OPEN_UNLIMITED_APPS scenario
b0342f89c : Use new Java Tradefed launcher
3c023b42d : Remove dependencies on the 1-variant fallback
cedaaf48c : Update layoutlib screenshot tests
7fbccb844 : Aligning Android color tokens with Material
9ff1b517c : Update the sdk dependency for PerformanceLaunch Test App
a90e68c4f : Add initialValue parameter to DoubleLineClockRule
fe8d5ac40 : Extract xts annotations and make it host supported
f493091a9 : Reverting g3-compatibility changes to Spectatio validator GSON reflection magic
c389a80eb : Change AutoRepro plugin version to 1.0.0-alpha1
0636a3001 : Wear uses getScreenBounds, so it must be public
1cd7ccd7e : Updates to Spectatio to make it compatible with google3 infra
b0bec6967 : Revert "Aligning Android color tokens with Material"
19d640c47 : Prevent double-running class rules on Parameterized
cb533de38 : Update screen bounds calculation for SpectatioUiUtil
d3d863692 : Aligning Android color tokens with Material
99aa4d417 : Update transition types with latest shell custom types
cd6a07986 : Revert "Added new tests for Apps Launch from App grid"
62fa6fc42 : Support for MINIMIZE{,_LAST}_APP scenarios
adce62e8b : Add CoroutineTracingPerfTests to instrumentation_metric_test_list
eab3bf418 : Added new tests for Apps Launch from App grid
56e4aef01 : Add more custom transitions to TransitionType
1fb4cb42a : Current Date and Time Test
411214a74 : Rename more references to STS SDK
0c985f1bf : Add Maven publishing information to AutoRepro
d769163e1 : Revert^2 "Introduce TestAnnotationScanner, and enable meta-annotations"
df27ae19b : Add new test to open Appgrid.
d7fbdad2c : uiautomator_helpers rename to uiautomatorhelpers
bd3772d8c : Update quickswitch transition filter
de32eb26a : Add p24 script to lock CPU, GPU, and BUS clocks to the max freq
f6ac7b487 : Fix assumption failure for MU test for GM
337508f24 : Add headroom mode option to memhog preparer.
47bda8069 : Create new properly named uiautomatorhelpers package
54eb2e559 : Ignore SurfaceViewBackground layers in VisibleLayersShownMoreThanOneConsecutiveEntry assertion
029530f50 : Run frameworks/base presubmits when changing testing rules
5645d8c13 : New Bluetooth Media Default State Test
32a5fa636 : Fix the Mobly snippet Installation Error
f7a006787 : Add osRelease field to KernelVersion
c9e923fb0 : Revert "Introduce TestAnnotationScanner, and enable meta-annotations"
8fd3b68f2 : Remove dependencies on the 1-variant fallback
79ee429be : Fix flicker test command documentation
94dfe709c : Introduce TestAnnotationScanner, and enable meta-annotations
822db58b2 : Delete flag info, because it only has legacy flags.
a6aaa73fa : Remove dependencies on the 1-variant fallback
d382bf4e5 : Fix flagging for microphone test
12d8dcc68 : Add ensureFinished option to FoldableRule#unfold
8ddadd3bc : Fixing FlickerServiceTracesCollectorTest to include view capture tracing addition to SysUI.
7473f05b2 : Fix Flicker Tests - Bounds Calculation
9d4357cc5 : PerfettoHelper: Use content provider to write trace to results directory
4880c3723 : Add locator for "SOFT_KEYBOARD_HIDE_BUTTON" as part of fixing AccountSettingTest#testAddRemoveAccount
cecb98894 : Rebrand STS SDK as AutoRepro
bc28b3168 : Add CoroutineTracingPerfTests to instrumentation_metric_test_list
1cb40fc1a : set perfetto_persist_pid_track to true as default
4c6d70511 : Enabling BT Dumpsys Logs in Media Tests
02e653256 : Dialer Phone card test fix
9ce91ab82 : Add waitingForDialog param to DialogScreenshotTest
d51dcb326 : Revert "Add CarrierAppIntegrationTestCases to instrumentation tests"
26277fbf0 : Build Wifi snippets
dcc8383db : Add CarrierAppIntegrationTestCases to instrumentation tests
9b368ff03 : Fixes for errorprone update
a6a654534 : Add a new enum for CF arm to DeviceProduct
fec6deb0a : Enabled DeleteGuestSelfNotAllowed Test
1831bcbb0 : Revert "Enabled SwitchUserQuickSettings test in RB"
9b857e57d : nfc: Add jni tests to native_test_list
383eefa8e : Enabled DeleteGuestNotAllowed test in RB
47260a9b0 : Enabled SwitchUserQuickSettings test in RB
bc1dea9b6 : Supports saving trace files during testing
4bc12aa5a : Enable Spectatio UI_ELEMENT to be selected with DESCRIPTION_CONTAINS
b1c0a077e : Removed ignore rule for catbox_functional_multiuser_system_user in RB
5382d4a40 : Fixes for errorprone update
df0af88e2 : Added conditional wait time to stabilize profile icon test
63b6bc621 : Clean up unnecessary modules
117ae0c4f : Revert "Fix UnusedVariable errorprone issues"
c0bb1c8cf : Updated test methods in MultiUserTests
8d0e570b6 : Removed ignore rule in ProfileIconTest for RB
b17841fc9 : Capture screenshot using the screenshot library.
dba9acf39 : native_tests: Add a new libnfc-nci-tests for unit tests
268fecf5a : Clear list of perfetto PIDs to stop
2af89d283 : Add an annotation for adding test to nonblocking Presubmit.
bd1f0cc6a : Fix Meta Data tests and add more media logs
e3fe71afc : Unify WM/SF dumps collection and loading
ef009c078 : Fix UnusedVariable errorprone issues
0344b56af : Style change for compatibility with google3 precheck
175012034 : Use the standard lib to read proto paths
9f1c03e48 : Add support for capturing built-in AnimatedContent transitions
10cf7c83f : Format Motion Test library files with the new version (0.52) of ktfmt
eafb31257 : Update device state config usage to use official API

+- Project: platform/prebuilts/abi-dumps/ndk

5191556 : Snap for 12710726 from 2fb491b53f4fb5e4da12b2a6f74ec65625669896 to 25Q1-release

+- Project: platform/prebuilts/abi-dumps/platform

3a1b53c : Snap for 12722466 from c0ceb94d76b5c6ee15f70303591010ca83826b8f to 25Q1-release

+- Project: platform/prebuilts/abi-dumps/vndk

f4439440 : Snap for 12710726 from 1203f220eaffb82fa0f91f382ec9315a841bad2e to 25Q1-release

+- Project: platform/prebuilts/android-emulator

d870e05 : Snap for 12562477 from c22634fd11f01c5828cbdc9bd504abf4be720598 to 25Q1-release

+- Project: platform/prebuilts/asuite

b3adc34 : Snap for 12765415 from 1f9f22cae214bb7c3fbbd0d4cb5d8db6fc5fa3a6 to 25Q1-release

+- Project: platform/prebuilts/build-tools

cda5da8d : Snap for 12765415 from 213386836e7e5e98a21ff1b88f4df74a86ae4ed3 to 25Q1-release

+- Project: platform/prebuilts/checkstyle

387726c : Snap for 12680993 from a3482992c451ea8ccf9dedf52ffa289e5957de18 to 25Q1-release

+- Project: platform/prebuilts/clang-tools

bed243d : Snap for 12609205 from 9c805d770ded11a6a7ebd28665eac0f35124cc38 to 25Q1-release

+- Project: platform/prebuilts/clang/host/linux-x86

96266255a : Snap for 12763142 from ee9089390b36376efd0b873ea4f483b0fad2356f to 25Q1-release

+- Project: platform/prebuilts/cmdline-tools

ec4b037 : Snap for 12399304 from 1c17e1076d3100642c9a266ab53ae03456cd2216 to 25Q1-release

+- Project: platform/prebuilts/devtools

4bfccde : [automerger skipped] Mark ab/11976889 as merged in aosp-main-future am: e011901b1d -s ours

+- Project: platform/prebuilts/gcc/linux-x86/host/x86_64-linux-glibc2.17-4.8

57c46e9 : Snap for 12763142 from 9eb2716c4064a933c7679e97aff4bd73ef6184ab to 25Q1-release

+- Project: platform/prebuilts/go/linux-x86

ca01d12c2 : Snap for 12477291 from 19032d71981ffa08d43efd94d15db34e0379e282 to 25Q1-release

+- Project: platform/prebuilts/gradle-plugin

fb9990eb9 : Snap for 12337407 from 82d4f29fe5ae38e7caadb0360324cbb77810b3f5 to 25Q1-release

+- Project: platform/prebuilts/jdk/jdk21

ebc610a : Snap for 12525496 from ef5bcc92586b839ae3dbacc154127092fa4002ec to 25Q1-release

+- Project: platform/prebuilts/ktlint

f26b5df : Snap for 12658558 from aa17ba46b7cc58ca7d5655e7ee34cf93c5eabef9 to 25Q1-release

+- Project: platform/prebuilts/manifest-merger

86567ad : Snap for 12735943 from 93603b6f7483f14666f74f67977794a3a8fc1594 to 25Q1-release

+- Project: platform/prebuilts/maven_repo/bumptech

d140436 : Snap for 12658558 from 426f9cdf58c3ff466e1b804a547b93f3af05dcb5 to 25Q1-release

+- Project: platform/prebuilts/misc

847dbab : Snap for 12735943 from 21e2a98857ed1b84689932b27a7864938620118b to 25Q1-release

+- Project: platform/prebuilts/module_sdk/AdServices

b2cc8f1 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871119'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/AppSearch

5ed5d89 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30870027'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Bluetooth

7da5ab8 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/31574360'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/ConfigInfrastructure

0b5cb34 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871990'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Connectivity

6a3210e : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871122'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/DeviceLock

2df370d : Snap for 12337407 from f9206292ead82379ab8a5dc370ef172599ce69d3 to 25Q1-release

+- Project: platform/prebuilts/module_sdk/HealthFitness

b6e7687 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871120'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/IPsec

1781e4d : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30870295'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Media

811a369 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30869998'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/MediaProvider

d4dd651 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30870000'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/OnDevicePersonalization

b9dc8a8 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30869941'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Permission

084591f : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30869944'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/RemoteKeyProvisioning

647bfd8 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30869945'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Scheduling

45d292c : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871751'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/SdkExtensions

9911dbc : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871752'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/StatsD

25c3a40 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30869943'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Uwb

51093fc : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30870001'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/Wifi

ccd4ef9 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30870281'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/art

c3a3f811 : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30870799'] into 25Q1-release.

+- Project: platform/prebuilts/module_sdk/conscrypt

789cf6c : Merge cherrypicks of ['googleplex-android-review.googlesource.com/30871137'] into 25Q1-release.

+- Project: platform/prebuilts/ndk

e0e58e67c : Merge changes from topic "reland-ndk-libcxx" into main am: 679207bd3a am: e8014b49a3

+- Project: platform/prebuilts/qemu-kernel

8f44c3f : [automerger skipped] Mark ab/11976889 as merged in aosp-main-future am: 60da75f13c -s ours

+- Project: platform/prebuilts/r8

f1fd37c : Snap for 12710726 from f45915f75afc8690f03db9e76b2ea53c5d14b2f9 to 25Q1-release

+- Project: platform/prebuilts/remoteexecution-client

ca812e7 : Snap for 12537680 from 83fd5c18a98d24f1aff47c00683ca6380590a84a to 25Q1-release

+- Project: platform/prebuilts/runtime

924841ff : Snap for 12727401 from 1a295adc2ac882a1c3fbb77b6e447756bacd541e to 25Q1-release

+- Project: platform/prebuilts/rust

b40554a23 : Snap for 12543847 from b30b36e455901be7fc86d687f1903ceb4ec721a2 to 25Q1-release

+- Project: platform/prebuilts/sdk

344a7f5ef1 : Snap for 12763142 from 7e8a2c1fac15040546c82d0949d59ff373e065a2 to 25Q1-release

+- Project: platform/prebuilts/tools

6774467a94 : Snap for 12727401 from bf8eb09c5871cba19fa5d68bd48e3b8f00bf78be to 25Q1-release

+- Project: platform/system/apex

b673351b : deapexer: extract files as host-readable
eab3d68b : apexer_with_DCLA_processing_test: use deapexer
12554f33 : Fix apexer and apexer_test
a1ce5a36 : Hard-code payload type for test APEXes for sharedlibs_host_tests
2761cfe5 : deapexer: remove '/' in os.path.join
ca0e2dd7 : Adding more comments and renaming some methods
33c01e28 : Mark apex_test.skip_validations
781ab051 : metrics: populate HAL info
869c8c0d : apexd-metrics: helper class to send atoms
6272a124 : apexd: Remove shell commands
a3a92c59 : Revert "Add 'partition' field to apexd_host tool"
df718b01 : Add 'partition' field to apexd_host tool
dab76008 : deapexer: fix lint errors
d7a9d07f : deapexer: info --print-payload-type
5a076ca7 : Clean up preinstalled path
9d2b19b8 : Add 'partition' field to ApexInfo
5aae3fef : Revert^2 "Removing unused dep: libvendorsupport_llndk_headers"
6b61b4a8 : Revert "Removing unused dep: libvendorsupport_llndk_headers"
841db210 : libapexsupport: add INTRODUCED_IN(V)
944675b1 : Removing unused dep: libvendorsupport_llndk_headers
63fa6d8c : Update 'enable_brand_new_apex' flag namespace
caded69e : Support verified brand-new APEX in GetPreinstalledPath
2cfe226a : Verify existing active public key for brand-new APEX
7129d2bb : Allow installation of brand-new APEX
87e74513 : Allow verified brand-new APEX in AddDataApex
d5258961 : Initialize public keys and blocklists and add verifier for brand-new APEX
9cee3957 : Add proto and file for brand-new apex blocklist
5a5e64ec : Add Aconfig feature flag for the feature of brand-new apex installation
bab3c2d6 : Add flag enabling bootstrap apex on all partitions
886ca146 : Remove dependencies on the 1-variant fallback
b671a348 : Add a VTS test for DIO support
165d1b7e : No sharedlibs directory in apexd-bootstrap
8e2af95f : Remove workaround for duplicate
c0c14ff3 : Loop creation error is propagated
5bb96ce8 : Remove redundant loop creation helper
f954b3bb : Remove workaround
b1aadf2f : Remove unused code
b506bc56 : Remove unused code
4d9225f5 : Remove unused code
52c1d703 : Switch from genrule to java_genrule
3f974d6c : host_apex_verifier: 'on' is for vendor apex
2c2d9d11 : Add apexd_host_tests to TEST_MAPPING
c0dc2a73 : Run checkvintf with all staged apexes mounted
e64ef710 : Force signapk.jar generated in out/host/soong
3f3cae06 : apexd: Refactor temp-mounts
74697634 : apexd talks to statsbootstrap
49cc6470 : Decouple statslog from libapexd
bcd3c1f6 : Clean up unused functions
74d4bf10 : apexd_metrics: store file hashes in session
d9f5ab61 : Fix apexd_host_tests regarding checkvintf
9c38ca40 : remove remountPackages
2d533c2c : apexd: Use a new interface for VintfObject
9a046a8e : Attach hosts tools in apexer_test package
221722ae : apexer: Fix non-existent required dependency
183dcd93 : Update libprocessgroup dependencies

+- Project: platform/system/authgraph

49b1d93 : Add dirgroup for trusty genrule
dc45b7e : Add unofficial/unsupported Cargo build files

+- Project: platform/system/bpf

d036b6f : BpfLoader-rs: Load libbpf programs
644d336 : BPF: Create android_bpf_defs
b01b5c8 : MOCK_BPF: Remove bpf mocking library
7802654 : BpfLoader-rs: add rustfmt.toml symlink
cbb87b6 : bpfloader-rs: implement rust native logging
ba06663 : BpfLoader: remove unnecessary logging functionality

+- Project: platform/system/bpfprogs

09e44fe : BpfProgs: timeInState: enable libbpf
93c094d : BpfProgs: transition to new android_bpf_defs.h
7ae3418 : MOCK_BPF: Remove bpf mocking library

+- Project: platform/system/chre

5e3279285 : Adds missing mutex lock in chppEnqueueTxErrorDatagram
cbcc63c31 : WifiSettingsTest increase retry timeout when FW busy
2efc12708 : wifi_scan_cache: Correctly calculate ageMs
474320d9f : Updates timestamp in CHPP linux log
03fea9585 : Update constants in message_common.h to match CHRE HAL
91d2fdb53 : [MessageRouter] Explicitly close sessions on hub unregister
c94ddda4e : subterra-hal: Use MessageHubManager
19cc72747 : [MessageRouter] Adds iteration function for all endpoints
0c4eaff06 : [CHQTS] Refactor chre_cross_validator_wifi test
2954df644 : subterra-hal: Add shared ContextHubV4Impl
57d2b29e4 : Enable hooks for handling event enqueue failures
27e6aca33 : platform/nanoapp_loader: Add missing dlfcn dep
df5dc926c : [Tinysys] Replace heap_free() with vPortFree()
c1e49b040 : subterra-hal: Use single global session id map
92201befe : Revert^2 "Mark @FlaggedApi flags as exported"
2c00542e0 : subterra-hal: Add flatbuffer definitions
cd8f75d3a : pw_log: remove transitional _PW_LOG_REQUIRES_VERBOSITY conditional
72c4018ea : Revert "Mark @FlaggedApi flags as exported"
551c1841b : Mark @FlaggedApi flags as exported
238d78fad : Fix typo in message_router.h
a9c594943 : subterra-hal: Adds MessageHubManager class
195b22017 : Add the hal_handle_nanoapp_query_test_mode flag
73cdec36b : Fix a typo in hal_client_manager.h
4ad270743 : [Multiclient HAL] Return an error from APIs if CHRE is not ready
f815da191 : [Multiclient HAL] Create a flag guarding CHRE restart handling thread change
beb2c9e7f : [Tinysys] Add shared wifi and wwan platform code to tinysys's build
d523f36fe : [Tinysys] refactor host link impl for dma usages
a7b299a76 : [Tinysys] Update DMA transmission for SCP > AP direction
5bb76dd2c : Updates version in HAL xml file
cd344e087 : Create BT socket offload encoders/decoders
44a25bb5f : Create BT Socket FlatBuffer Messages
8089d32cf : MessageRouter: Increase the maximum length of endpoint names
9402a57c8 : Integrate CHRE into MessageRouter
d215bbbdf : Add a note to chreWifiNanRequestRangingAsync that it requires WiFi permission.
8b37a52f0 : platform/shared: Instead of UNUSED_VAR don't create the symbol
fda6b4c28 : CMake: Move util system headers to the system target
3602a75c1 : Add endpoint info functions to EventLoop
1350e06c4 : Cleanup chre_unit_tests in Android.bp
c1fcbd220 : Fix typo in event.h
14d22b016 : cmake: Add missing backend for log_buffer_manager
dcb7a34c9 : Fix a typo in message_router.h
02fc3ce59 : Add script to flag improper target_platform headers
85742838a : [subterra-hal] Add contexthub v4 stubs
1c9f0c838 : cmake: Add more platform/shared support
d9e2531f3 : pal_wifi_stub: Resolve unused parameter warnings
c540627f2 : pal_wwan_stub: Resolve unused parameter warning
239d483a7 : [CHRE API] Add endpoint message and session events
7329c4793 : platform/cmake: Add additional Freertos and shared build rules
1d4dc9bbc : Add MessageRouterCallbackAllocator
a31b0efb5 : platform/freertos: Provide header isolation for backends
b67999f38 : chpp: Don't attempt recovery on failed OPEN
a63d23b48 : [Tinysys] change heap_malloc to pvPortMalloc
b6a5d964a : [Tinysys] Use macros to control task priorities
c761d78b2 : [Tinysys] Update dram/sram API usages
b09138b24 : Add include paths for platform/shared/public_platform_*
86ab3558b : wifi_scan_cache: Handle request while cache is busy
573e1ee69 : Move platform_cache_management to platform/shared
8d869992d : Drop weakest result in wifi scan cache when full
7d99ede0d : Add log to failure async event
4748f0070 : Remove unused WifiStatusCode alias
cbc4908e7 : chpp: Improve packet utils used in tests
c66258d30 : chpp: Fix ACK after retry log message
17924f8bc : chpp/wifi: Remove unused cookie parameter
7d643b10e : wifi_world: Re-request scans on timer expiry
891fdd0d2 : Update comment in test about chreAudioGetSource
7e36c68a9 : chpp: Acknowledge a duplicate packet
2ae0fe438 : platform/shared: Gate implementation based on flags
c960c1c12 : cmake: Add additional pal and platform/shared support
ba74cc1b5 : Move SynchronizedMemoryPool to util/system
2ed433b16 : chre/platform/shared: Isolate backend headers
6ed85874b : Improve debugging capability for wifiCrossValidation nanoapp
ccd2a09c3 : [Multiclient HAL] Create a flag for HAL's launch timeout changes
731795a33 : platform/shared/{re,core}: Deps fix
d0f63c622 : [Tinysys] Update dma operations for host link
d9529ab60 : [CHRE] Move fixed_size_blocking_queue.h to system utilities
b4a377c61 : Add missing include for cstddef in memory_impl.h
28217f9a8 : Update max payload setting checks
3afc76294 : Move synchronized_expandable_memory_pool.h to system utilities
9c24ba464 : CHRE_ASSERT size optimization
1cdd0bc20 : Minor cleanup to chre_api* sources
415a95d23 : Move chreAbort backend to chre_api_re.cc
f094f1e9e : Add missing stub for chreAudioGetStatus
f86d7f1e6 : chre/platform/cmake: Add build rules for chre_api backends
84da32893 : Fix typo for chreDebugDumpLog API
24ae88cf7 : chre/core/cmake: Add missing dependency
790397e42 : Store endpoint names in EndpointInfo
b2aa04d80 : chre/cmake: Add missing PUBLIC_INCLUDES
df756d9bd : Add broadcaster address filter to ble_world.cc
afa22a6c1 : Create CHRE BLE Socket Offload APIs
a71fd9913 : CHRE: fix over corrected CHPP WIFI time
ded79ea2f : Initialize MessageRouter in CHRE simulation tests
16f2a6281 : chre: Add Pigweed-based CMake build support
33ee7e71f : chre/core: Enable a consistent build graph
8ffaca8e9 : Define pwpb specific options file
796e329c4 : Disable CHRE embedded flags in the AOC build
85a146b43 : Fix missing-designated-field-initializers compilation issues
e1727c03e : pw_assert_nanoapp: Fix PW_HANDLE_ASSERT_FAILURE define
45ac35591 : chre/log_oom: Optimize the implementation for tokenization
c6bdbb574 : pw_assert_nanoapp: Explicitly pass verbosity to PW_LOG
bae70f600 : Remove unused include from telemetry_manager.cc
9e0baec39 : chre/platform: Remove extra include in platform_sensor_manager
1dab525c3 : chre/platform: Permit all platform implementations to inline
763fbb491 : Move TransactionManager to util/system
077535aec : Cleanup logging includes in core CHRE code
d45d6615b : Consolidate shared CHRE logging code
64353a6f9 : nanoapp_loader: Resolve sqrt double-promotion
8d497f404 : chre/platform/memory_manager: Correct include path
985e86dba : Update to clang-r530567
362dba929 : [Tinysys] Update HAL sending payload max size to 32K
0c3f5ee04 : [Tinysys] Implement message delivery status for reliable messages
a953a45d9 : [Tinysys] Use queue to send messages for token db info
883100185 : Add arthuri@ to system/chre OWNERS
8c0e77966 : Add nanoapp lookup and iteration functions to EventLoop
1ac0a21ee : Add endpoint types to message_common.h
323bd5308 : Fix typo in message_router.h
19a8d7179 : Create sensor_test nanoapp
919cc936e : Add boundary check for max log len for string logs
d08fe067a : Update message_world to reply with a message of custom size
25c7312ff : [CHRE] Cleanup CHRE 24Q3 flags
f49b6ea2f : MessageRouter: Update comment for forEachEndpointOfHub
bd828b50c : MessageRouter: Unregister hub on MessageHub destruction
ff7439038 : MessageRouter: Add Session information to Message
c580dd6bd : [Tinysys] Update the static buffer HAL uses for reading
b98549a2a : Add unique_ptr.cc to Pigweed sources for util.mk
e56a1fcd8 : [MessageRouter] Implement MessageHub and MessageRouter
e72f163f0 : [MessageRouter] Add MessageHub and MessageRouter API
8f759c001 : [MessageRouter] Add common definitions
72a5cfaa6 : CHRE add ldexpf export for nanoapp
bd7d4e16d : HostEndpointManager: Remove unnecessary else
8d27a3b02 : Fix usage of literal type suffixes in CHRE API
0b27dcefb : [Tinysys] Update build flags
7cdc04c4c : nearby: sync from google3 at CL 683068368
faf95941f : [Tinysys] Do not use Optional to avoid lld error
e38d7ac56 : Remove unused Android.mk for `chre`
914362d63 : Remove unused Android.mk for `chre`
487b1fde1 : [Tinysys] Remove toolchain settings from variant.mk
3cd4e7e63 : [chre-hal-xport] Add flag for using EFW xport
c140a5a09 : Improve logging and failure response of nanoapp loading
84c9fb1e8 : [Tinysys] Init logger and DramVoteClient
59e28cde5 : Add crsabotta@ to OWNERS file for CHRE
80a698bb0 : nearby: sync from google3 at CL 680184232
590854fb9 : Fix CHPP timeout retry reset recovery logic
a61545b68 : Fix CHPP timeout retry reset recovery logic
2432221f2 : [efw-xport] Add bugfix flag for rewind capability
72d2ea23f : [Multiclient HAL] Tolerate duplicated responses of nanoapp loading
1ccaff145 : [Multiclient HAL] Log transaction id when loading a nanoapp
506d9b272 : Add Permission support to test send_message
f0cdf6d71 : Add common.sh
0cd407f21 : Add status logging to rpc_server.cc
80bd59b35 : Remove unified metrics reporting flags from aconfig
bf116938b : Add CHRE flags for generic offload APIs and implementation
edf3fd504 : CHRE audio test: Allow equality in DC_OFFSET_LIMIT
5591c8ac0 : Delete chre.scons
1b9cf5c3d : Fix CHRE test on multi user system
9dc586f3b : Fix error log message
4a7cac246 : Add wwan neighbor capability definition
6709ffa0e : [Tinysys] Merge common settings of tinysys platforms
652f03609 : [Tinysys] Support up to 32k bytes messages sent to AP
2a5da3271 : Ensure chre::Optional can be instantiated at compile time
e4f8a3987 : Disable experimental editions in nanopb.mk
fd51c4f93 : [Tinysys] Create an image signer for testing purpose
3e52d07fc : Ignore incomplete scan results in wifi xval
0bd21ed12 : Optimize event delivery in distributeEventCommon
e883bba96 : Add distributeEventSync to event_loop
f243b119a : CHQTS: Clean up the gts test helper
576f946f7 : Fix logging

+- Project: platform/system/core

d189b8755 : Revert "Define ueventd.rc.recovery"
f9b38f91a : Define ueventd.rc.recovery
9b5c6fdce : Define init_second_stage.recovery
2e581b68c : Define reboot.recovery and watchdogd.recovery
fdd861ef7 : debuggerd: Use libprocessgroup to unfreeze
09e7cea7c : Define toolbox.recovery
44eca61ab : Replace partition-specific toybox make module with soong modules
683e3c076 : Start aconfigd socket defined in configinfra mainline module
7a1cf9a52 : Update trusty to use secretkeeper hal V1
ee7a71375 : ashmem: Ensure all memfds have non-executable permissions by default
00a32314a : libsnapshot: Cleanup temp metadata during rollback
8972ce18d : libprocessgroup: Remove ramdisk_available from libcgrouprc
62f8723f6 : libprocessgroup: Remove vendor_ramdisk_available from libcgrouprc
f26b13aeb : libprocessgroup: Remove recovery_available from libcgrouprc
8366faad1 : gatekeeperd_service_fuzzer: Add signal() to handle SIGPIPE
27dd6f8e6 : libutils OWNERS for shayba@
52d2446b4 : Deprecate cc_binary aconfigd and the controlling flag
3df083a49 : libprefetch: rename property name
9731ea7b6 : Update comments to point to the new location of event.logtags.
cadad290a : Fix the dm-verity Merkle tree caches to not expire so quickly
487584da2 : Move Trusty C++ KeyMint to v4
fef2dff80 : Remove /data/apex/hashtree directory
ef3a2c05f : libprefetch: Start prefetch service based on build
f00efa024 : Remove system/core/METADATA
0701fed36 : Remove no longer necessary MS_LAZYTIME definitions.
589afaa88 : fs_mgr: Support nosymfollow mount option
35ab96a42 : Add prefetch directory in /metadata
b912e3e54 : Fix permission of zram writeback and idle file
150483e3a : trusty: utils: rpmb_dev: secure storage support for test VM
7bb484d40 : snapuserd: Lock the buffer during snapshot-merge
3e7c17a8e : Reapply "libprocessgroup: Remove __BEGIN_DECLS and __END_DECLS"
e2efde374 : Use genfs labels version library
d5c8b0bdd : Remove |ro.hardware.| prefix in KM VM sys property
ae8313f8e : libprefetch: library to prefetch data using tracing.
267207a17 : use V4 Health HAL
fdaaef952 : Revert "libprocessgroup: Remove __BEGIN_DECLS and __END_DECLS"
6028880ac : Move snapuserd_test to postsubmit
44461354f : snapuserd: Use GTEST_SKIP in snapuserd_test.
d17d5c585 : ueventd: add support for driver section in ueventd.rc
668ffc395 : snapuserd: Use GTEST_SKIP in snapuserd_test.
f1d00f0f2 : libcutils: create rust bindings for android ids
6105d9dc8 : Create the mainline supplicant directory during initialization.
28b2556e9 : Fix typo of snapuserd_verify.h
6a5db6838 : Remove non-UTF8 characters from string fields.
d02b74411 : snapuserd: Change error message to verbose
5996d608a : Use the new 'partition' field in 'ApexInfo' to identify vendor apexes
fdf443235 : libprocessgroup: Remove __BEGIN_DECLS and __END_DECLS
b6071f19c : libprocessgroup: Convert CGROUPV2_HIERARCHY_NAME to std::string
9e5f74d4e : libprocessgroup: Remove CGROUPV2_CONTROLLER_NAME
148c2531e : libprocessgroup: Remove CGROUPS_RC_PATH
dd8edea85 : init: Add NVME support to the `boot_part_uuid` method of managing boot devices
9d0620882 : Fix failure in CowTest#InvalidMergeOrderTest.
03a14f528 : Declare support for v4 of KeyMint HAL
a4e852d03 : Stop explicitly adding bionic subdirectories to the include path.
16693fae2 : Rename system property to enable KeyMint VM
cbb59dd24 : Add source of unwind when fatal error.
1b5a7addd : Revert "Move snapuserd_test to presubmit-large group"
ac810ad71 : Move snapuserd_test to presubmit-large group
9ad453ffa : Set input thread priority to RT - try 4
64148e33d : libprocessgroup: Remove libcgrouprc_format
4be70e7db : Remove mitchp from OWNERS file
5f216ffdc : trusty: utils: rpmb_dev: add wv secure storage init.rc
945dd526e : Stop explicitly adding bionic subdirectories to the include path.
6facd1bfd : Test stack buffer size calculation.
5969d6924 : Declare previous version when using frozen HALs
e8ff8b494 : Set the proper FEATURE_HARDWARE_KEYSTORE version
f1eaa7516 : move aconfigd platform init service from init.rc into aconfigd.rc
fc7ec65a0 : Add AID for memory management daemon
dbb080d9b : Revert^2 "Deprecating libvendorsupport_llndk_headers"
46afe22f9 : init: Avoid extra string copies when finding devices by using const refs
3c700588c : Start aconfigd_system processes in init.rc
ab8f9717f : Revert "Deprecating libvendorsupport_llndk_headers"
76afb4a2c : Add BOARD_GENFS_LABELS_VERSION
ce6f7330a : Increase zram size percentage limit
e5a2af34b : Increase the test timeout
52da71d47 : Revert "Set input thread priority to RT - try 3"
eb3d280f1 : init: Look for partition only on a boot device if using boot_part_uuid
e9de31006 : init: Add the ability to find the boot device by partition UUID
3de05fcff : init: Move the stripping of "/devices" and "/devices/platform/" to a helper
6519e6d67 : init: Break FindPlatformDevice() into a helper function
9481f9760 : init: Factor GetBlockDeviceInfo() out of GetBlockDeviceSymlinks()
743e8f16a : init: Use ConsumePrefix() instead of open coding in GetBlockDeviceSymlinks()
a2bd0e6b5 : Set input thread priority to RT - try 3
01af5431f : snapuserd: typecast cow_op->new_block to uint64_t
06c4e96fd : Deprecating libvendorsupport_llndk_headers
9f760f8d4 : init: Reorder GetBlockDeviceSymlinks() so FindDmDevice() is first
e53e50e3f : init: Add partition_uuid to Uevent
519d3f8b3 : fs_mgr: Add getter for androidboot.boot_part_uuid
1de3ab901 : Revert^5 "Set block device as RO/RW before mount"
454167f5b : libsnapuserd: Handle empty response from server
0b678adb6 : libcutils: Update libprocessgroup dependencies
8a45aaaf1 : libprocessgroup: Update libprocessgroup dependencies
b1f0c1bfd : libsnapshot: Update libprocessgroup dependencies
a3cf826de : Set input thread priority to RT - try 2
de6707df0 : Revert "Set input thread priority to RT"
a5560db8c : libsnapshot: Add script to test snapshot updates
5cc1ca176 : Revert^3 "init: Look for super partition only on a boot device"
cf9f0870e : Add support for tombstone symbolization to pbtombstone.
39a1730a8 : Make pbtombstone a host tool.
e7f1d4123 : read() can return fewer bytes than requested
5fd1be1a5 : Revert^4 "Set block device as RO/RW before mount"
5d5c732a3 : Rename KM VM related system properties
930f77b02 : Set input thread priority to RT
132b1ecdf : Switch mkbootfs to C++.
03429acce : init: mount tracefs before apexd-bootstrap
dea6f7c2a : Fix overlayfs on cf page agnostic builds.
c434d801d : libdm: Redact keys from dm-crypt targets when calling GetTable.
c9e000c7a : libdm: Redact keys from dm-crypt targets when calling GetTable.
96fc434b6 : libprocessgroup: Remove ifdefery around SetTimerSlackAction::ExecuteForTask
e68f6cd6c : libdm: Redact keys from dm-crypt targets when calling GetTable.
390be0ae3 : libdm: Redact keys from dm-crypt targets when calling GetTable.
d94e6c537 : libdm: Redact keys from dm-crypt targets when calling GetTable.
9611c18aa : Don't use android::base::StartsWith / EndsWith
77ee43a9f : support f2fs device aliasing feature
3063d84e9 : libmodprobe: add support for dynamic module options
ac474ff7a : firmware_handler: extract part responsible for running ext program to lib
ec0f7774c : fs_mgr: avoid std::vector<const T>
b544df0e1 : Add support to update the DTB when flashing a vendor_boot ramdisk
7ae15190d : libdm: Redact keys from dm-crypt targets when calling GetTable.
9b9233f4c : libdm: Redact keys from dm-crypt targets when calling GetTable.
5bfb93678 : Revert^2 "init: Look for super partition only on a boot device"
03f7133b0 : Pin KeyMint dependency to correct/specific version
77dc1e7df : libsnapshot: Fix MapAllSnapshotsWithoutSlotSwitch test
09c18c17f : Remove dependencies on the 1-variant fallback
68c2cbf2b : libsnapshot: Fix MapAllSnapshotsWithoutSlotSwitch test
a190ecb6f : RefBase: document leak memory case
5331393cb : Mark the phony shell_and_utilities_vendor as vendor: true
cbe09a805 : Remove carlosgalo from libprocessgroup OWNERS
5ad59a4cf : libsnapshot: Resume snapshot merge if snapshots are in second phase
3aac36201 : Remove log spam.
a6af9bced : init: filter .##rc with preview SDK version
398461160 : libprocessgroup: Add SetSchedulerPolicy Action
44884d5c3 : Use uint64_t instead of size_t when calculating extent size to avoid overflow
9081b85a1 : Add proposed trendy teams for VTS modules
900ef7bf3 : Add proposed trendy teams for VTS modules
2a030efe6 : libprocessgroup: Remove SetClamps action
3e4b58e9d : libprocessgroup: Remove unused prctl include
cea66e89a : Add dirgroup for trusty genrule
b53eb9dbc : libprocessgroup: Use pid_t for ProfileAction::ExecuteForTask
075008174 : libprocessgroup: Remove prctl interface for setting timer slack
699faa849 : trusty: tipc-test: Fix D argument
e69b2d427 : snapshotctl: Initialize snapshot pointer when reverting snapshots
787ddbc8a : Reapply "libprocessgroup: Combine all 3 ActivateControllers imple..."
47580ff76 : Reapply "libprocessgroup: Remove ACgroupController_getMaxActivati..."
c76b6ada2 : Reapply "libprocessgroup: Remove dependency on libcgrouprc"
a09ee8ece : Reapply "libprocessgroup: Remove cgroup.rc file"
0fa49253a : Revert "libprocessgroup: Combine all 3 ActivateControllers imple..."
691ad736b : Revert "libprocessgroup: Remove dependency on libcgrouprc"
aeca8793f : Revert "libprocessgroup: Remove ACgroupController_getMaxActivati..."
972a2d30f : Revert "libprocessgroup: Remove cgroup.rc file"
e0ec952ba : Avoid unnecessary allocation in VectorImpl
42dbd2de8 : Remove libstats*_lazy tests from MTS
3a7cb4c63 : Use __ANDROID_VENDOR_API__ for vendor API surface
623682af2 : fiemap_writer_test: add block truncation and sync for safety
551c6018c : init/epoll: clean up reorder-init-list warning
50fd82214 : init: Remove schedtune support
8d71220df : Revert "init: Look for super partition only on a boot device"
36ea62f1f : Revert "init: Wait for /dev/hvc1 during ARCVM first-stage mount"
5161033f6 : libprocessgroup: Combine all 3 ActivateControllers implementations into one
c31c5a75c : libprocessgroup: Remove ACgroupController_getMaxActivationDepth
9d84103bd : libprocessgroup: Remove dependency on libcgrouprc
ae4ce8ccc : libprocessgroup: Remove cgroup.rc file
da77c2e1d : Change owner of ACPI BERT tables to system
f29f1a2a3 : trusty-ut-ctrl: Allow stream mode
6f451a9c8 : init: Issue a wipe on boot if trade-in mode was active.
a23013dd4 : libsnapshot: configure num worker threads
981664df0 : libprocessgroup: Remove schedtune support
bc067ef9f : libdm: Redact keys from dm-crypt targets when calling GetTable.
0d84909d6 : libsnapshot: Use words for xor ops
6f0ebcb52 : init: Look for super partition only on a boot device
78d5f7c6f : Added ownership attribution for test targets
717b4a718 : Add host support to libexpresslog
1ec529087 : Remove unnecessary getpriority() system call
8a32d8288 : Added ownership attribution for test targets
1396f9d3b : Add host support to libexpresslog
3d90ed0ce : trusty: utils: trusty-ut-ctrl: fix the vendor target
69f3da832 : trusty: support secure storage in system-ext
16f94816d : Revert "Support vendor partition in non-debuggable pVMs"
4f0e3eb6f : trusty: utils: trusty-ut-ctrl: add to system_ext
a3b9e560b : Replace ANDROID_VENDOR with ANDROID_VNDK in versioning
47718268d : Remove OEM_UNLOCK_PROP usage
34083b715 : Use std::sync::LazyLock rather than once_cell.
f3f3845b4 : Update struct to include far and elr on the NS side
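
The "read() can return fewer bytes than requested" commit above fixes a classic POSIX pitfall: a single read() call may return a short count even when more data is coming. A minimal sketch of the usual remedy, a retry loop (readFully is a hypothetical helper for illustration, not the actual AOSP code):

```cpp
#include <unistd.h>
#include <cassert>
#include <cerrno>
#include <cstddef>

// Hypothetical helper: loop until `count` bytes are read, EOF, or a real error.
// POSIX read() may legitimately return fewer bytes than requested, so a single
// call is not enough -- the bug class the commit above addresses.
ssize_t readFully(int fd, void* buf, size_t count) {
    size_t done = 0;
    char* p = static_cast<char*>(buf);
    while (done < count) {
        ssize_t n = read(fd, p + done, count - done);
        if (n == 0) break;                 // EOF: return what we have
        if (n < 0) {
            if (errno == EINTR) continue;  // retry after signal interruption
            return -1;                     // genuine error
        }
        done += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(done);
}
```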

+- Project: platform/system/dmesgd

f36be43 : Use the optimize_for_size property instead of "-Os" cflag

+- Project: platform/system/extras

411e9cc2 : Fix bugs with verify_trace.
bb4eee7d : Define mke2fs.conf.recovery
26485a29 : Add min_sdk_version
4cdb0494 : simpleperf: Fix reading old branch list files
b4a65e8c : Put builtin event types into a constexpr table to avoid build time bloat
fbda3aed : simpleperf: Update doc for enabling ETM data collection
fdaf232f : Implement torq simpleperf symbolization validation
0e0d352d : Remove error thrown when running rm <file> for a nonexistent file
a089ee43 : Fix bugs in torq
89b6f5f4 : Implement torq open subcommand
571dfbdd : Implemented the torq config pull subcommand.
28aab9c6 : Add a verify_trace tool.
a15b1c98 : simpleperf: autofdo doc: Change to extbinary format
ba6fe26b : profcollect: Move trace providers to a subdirectory
dc3f8dad : Refactor library for handling trace entries.
8fbc1c84 : Remove mitchp from mtectrl OWNERS
3f294177 : simpleperf: inject: Add --exclude-process-name
1cc11da3 : simpleperf: inject: Add --dump to dump branch list file
c7fabb64 : simpleperf: Support large branch list file
de028e29 : simpleperf: simpleperf_utils.py: Fix parse_version
e9ccca6c : simpleperf: Add more doc for source line/disassembly annotation
2db091e8 : simpleperf: Fix GetAndroidVersion
cb06493a : Added timestamp to torq host file names
54142d8f : Added waiting time when switching to from-user and made lint improvements.
fdd10675 : profcollectd: use BR_INST_RETIRED.NEAR_TAKEN for lbr sampling
4efb4ee5 : Implemented the torq config show subcommand.
25f5ff27 : Add LoginLauncherActivity in LauncherStart event
cb6fa9bb : Fixed simpleperf_event_exists to make a copy of events list
8d250b6c : simpleperf: Support intel event BR_INST_RETIRED.NEAR_TAKEN
01880b52 : Implemented torq simpleperf
a87e35fd : Implemented creating default predefined config on older versions of android.
52cfaa06 : simpleperf: Add example for collecting ETM data for the kernel
6f79e230 : view_profile: optimize png to fit under gitiles limit
c978e365 : Removed hw subcommand from program.
54180e59 : Add Perfetto to the list of recommended UIs for viewing simpleperf profiles
c9b9ebda : riscv64: simpleperf: Add basic support for RISC-V
012045b0 : Added torq unit tests for get_android_sdk_version
bc6bc69f : Fixed get_android_sdk_version
01fbd0d1 : simpleperf: Add doc for ETM data generation rate
31a0cab2 : simpleperf: inject: Add test for -j
5f716cb4 : Implemented handling of boot edge case on older versions of android.
626408a6 : simpleperf: inject: Add -j to read input files with multithread
1bd49768 : boottime_tools: fix timeout of bootanalyze.py
b30cca5e : boottime_tools: support adb -s in bootanalyze.sh
7f5f4dc5 : simpleperf: inject: Don't fail when seeing an invalid branch list file
631bd8f0 : simpleperf: Add BranchListMergedReader
5e2cfce1 : simpleperf: ignore binary_cache
8033be18 : simpleperf: inject: Speed up ETMBranchMap to instr range conversion
2131f7dc : simpleperf: inject: Fix ETMDecoder for indirect branch instructions
d8bd4480 : simpleperf: inject: Support --allow-mismatched-build-id for vmlinux
7b32a2d3 : simpleperf: log time info when `--log verbose` is used
135a37e9 : simpleperf: inject: Fix condition for using branch_addr
0a7330d4 : Implemented the torq config list subcommand.
4f5efbf2 : boottime_tools: Add Android.bp for bootanalyze
be9c2db2 : bootanalyze: Use alternative for timestamp alignment
4e8160d0 : simpleperf: inject: Add -z to compress branch-list file
56e102f1 : Make mke2fs.conf includable by others

+- Project: platform/system/gatekeeper

39ecf94 : Add dirgroup for trusty genrule

+- Project: platform/system/gsid

5635a61 : Add proposed trendy teams for VTS modules

+- Project: platform/system/hardware/interfaces

1527b3e : Create vendor-available vold AIDL interface
42f7e8d : SuspendSepolicyTests: Use target device grep
d3b6865 : SuspendSepolicyTests: Use target device grep
b8f791e : Multizone readiness: add zone Id field in Product Strategies
6e942f2 : Add AudioVolumeGroupChangeEvent parcelable
477ce09 : Add android.media.audio.eraser
b2dd4e1 : Add SoundClassification enum for audio ML use cases
568edba : Move fast_kernel_wakelock_reporting to wear_frameworks namespace
67dacfc : Move suspend OWNERS file up
3458bdf : Speed up kernel wakelock polling
8f759d9 : Revert^2 "Audio CAP: Address ANAPIC comments, Part 3."
5cd78ab : Revert "Audio CAP: Address ANAPIC comments, Part 3."
0d5f9a1 : Audio CAP: Address ANAPIC comments, Part 3.
3ce3107 : Update lint baseline for vFRC
9338258 : Add getSupplementaryAttestationInfo
d51e801 : Add keystore2/OWNERS file
37b08b1 : Audio CAP: Address ANAPIC comments, Part 2.
21e8b54 : Revert^2 "Add SPEAKER_CLEANUP system usage"
b443288 : Unfreeze Keystore and bump KeyMint import version
8604748 : Audio CAP: Address ANAPIC comments
b6a8ab2 : Revert "Add SPEAKER_CLEANUP system usage"
e69a5be : Add speakerLayout field to AudioDevice.aidl
f52428f : Add audio device type MULTICHANNEL_GROUP
8dcb253 : Add SPEAKER_CLEANUP system usage
25f9b6f : Add android.media.audio.eraser
ebab5c3 : suspend: Support 64b suspend stats
d25aff3 : Convert vintf_fragments into vintf_fragment module(s)
3e6352d : Add media audio common types rust default
d769787 : Unfreeze Keystore and bump KeyMint import version
fe443e6 : Add proposed trendy teams for VTS modules
c258b00 : Add proposed trendy teams for VTS modules
f2135b5 : Add frozen: true to android.system.suspend
7574d76 : Add SoundClassification enum for audio ML use cases
2631e3e : Replace std::stoi() with android::base::ParseInt()
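
The "Replace std::stoi() with android::base::ParseInt()" commit reflects that std::stoi throws (std::invalid_argument / std::out_of_range) on bad input, while libbase's parser reports failure via a return value. A sketch of a ParseInt-style helper, modeled loosely on that API shape rather than on the actual libbase implementation:

```cpp
#include <cassert>
#include <cerrno>
#include <climits>
#include <cstdlib>
#include <string>

// Hypothetical ParseInt-style helper (illustrative, not android::base's code):
// returns false on empty input, trailing junk, or overflow instead of throwing.
bool ParseIntSafe(const std::string& s, int* out) {
    if (s.empty()) return false;
    errno = 0;
    char* end = nullptr;
    long long v = strtoll(s.c_str(), &end, 10);
    if (end == s.c_str() || *end != '\0') return false;              // no digits / junk
    if (errno == ERANGE || v < INT_MIN || v > INT_MAX) return false; // overflow
    *out = static_cast<int>(v);
    return true;
}
```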

+- Project: platform/system/hwservicemanager

7ebb266 : Change TEST_MAPPING to use test_module_config rather than list options.

+- Project: platform/system/incremental_delivery

34c13b2 : error: no matching constructor for initialization of 'std::ifstream'
d894477 : error: no matching constructor for initialization of 'std::ifstream'
1c4f443 : error: no matching constructor for initialization of 'std::ifstream'

+- Project: platform/system/keymaster

408cc22 : Move C++ KeyMint to v4
46cb835 : Support KeyMint v4
f928919 : Bump KeyMint version
2c48e3e : Bump KeyMint version
271562f : Create a separate lib for keymaster_configuration.cpp
46742e1 : Add versioned lib_android_keymaster_keymint_utils
bf544f4 : Fix build on ToT Clang
164b05b : Add dirgroup for trusty genrule

+- Project: platform/system/keymint

28e41ff : Remove comments on some fields in the ASN.1 schema.
29ba069 : Simplify KeyMint setAdditionalAttestationInfo impl
9563f98 : Add cvlasov@ and kwadhera@ to OWNERS
3fff377 : Implement setAdditionalAttestationInfo & attest to modules
5a047f7 : Implement setAdditionalAttestationInfo & attest to modules
550fa92 : Add dirgroup for trusty genrule
8f43b39 : Update comment for fallback Verified Boot key value.
52dc61d : Move description of secure time to device section
92c96da : Update Cargo files

+- Project: platform/system/libartpalette

0970c9e : Remove prebuilts of Perfetto client libraries.

+- Project: platform/system/libbase

b1b2e01 : Make LogdChunkSplitter_TooLongTag a silent death test.
310ca83 : [base] Cleanup ScopeGuard ctor + noexcept
72bf961 : Add missing include for atoi().
46fc805 : Remove non-template android::base::Trim implementation
ca49101 : Remove unused android::base::Join explicit instantiations
4932431 : Speedup android::base::Join 3x
42f1935 : Add test case for android::base::Join of single non-string element
af4ae8d : Use std::string_view::starts_with and std::string_view::ends_with
da19fa6 : Remove unused variable.
03f1fc4 : Add dirgroup for trusty genrule
b0c2bdc : Fix build with fmtlib 11.0.2
38c889f : Improve host-side system properties support
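
The "Speedup android::base::Join 3x" commit above likely hinges on avoiding repeated reallocation while concatenating. One plausible technique, precomputing the output length and reserving once, is sketched below (illustrative only; the actual libbase change may differ):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of a Join that reserves the output buffer up front, so appending
// never reallocates -- one common way to speed up string joining.
std::string JoinReserve(const std::vector<std::string>& parts, char sep) {
    if (parts.empty()) return "";
    size_t total = parts.size() - 1;              // one separator between parts
    for (const auto& p : parts) total += p.size();
    std::string out;
    out.reserve(total);                           // single allocation
    out += parts[0];
    for (size_t i = 1; i < parts.size(); ++i) {
        out += sep;
        out += parts[i];
    }
    return out;
}
```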

+- Project: platform/system/libcppbor

3d00e9f : Address non critical clang errors for indirect imports
5d7b27e : Fix logging bug introduced in https://r.android.com/3376829
21b0e92 : Use a trivial local implementation of span<T> if the platform doesn't support std::span.
e5a86fa : Add additional compiler macro checks for checking float and double data formats conform to IEEE 754
c4a7a00 : Fix formatting and indentation.
03ee085 : cppbor: Allow indefinite length strings and bytestrings
960c76c : Add support for float and double values to libcppbor.
298f64f : Preallocate array and map stores during parsing.
7ee5e77 : Fix documentation of Item::semanticTag(size_t).
c9b5f63 : Apply clang-format
e8b5b63 : Fix parsing of SemanticTag followed by stop code.
5bc523e : Add dirgroup for trusty genrule
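
The "trivial local implementation of span<T>" commit above is the usual fallback when std::span (C++20) is unavailable: a non-owning view over contiguous data. A minimal read-only sketch of the idea (illustrative; libcppbor's actual fallback may differ):

```cpp
#include <cassert>
#include <cstddef>

// Minimal span-like view: pointer + length, no ownership. Just enough surface
// (size, indexing, iteration) for read-only traversal of contiguous data.
template <typename T>
class Span {
  public:
    Span(const T* data, size_t size) : data_(data), size_(size) {}
    const T* data() const { return data_; }
    size_t size() const { return size_; }
    const T* begin() const { return data_; }
    const T* end() const { return data_ + size_; }
    const T& operator[](size_t i) const { return data_[i]; }

  private:
    const T* data_;
    size_t size_;
};
```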

+- Project: platform/system/libfmq

4d606cc : Support cpp backend for libfmq
e63476b : Add proposed trendy teams for VTS modules

+- Project: platform/system/libhidl

bdd5545 : Change TEST_MAPPING to use test_module_config rather than list options.
b2f0f4e : Define VINTF fragment as a module type
2d81352 : Add vendor_manifest and odm_manifest from build/target/board
ff0d03f : Convert vintf_data to soong

+- Project: platform/system/libhwbinder

5cd08b1 : Change TEST_MAPPING to use test_module_config rather than list options.
23f5b06 : Add benchmark for sending a vector of binders over AIDL
08925d2 : libhwbinder: remove unused writeUnpadded

+- Project: platform/system/librustutils

7970ca5 : allow configinfra module to access these utils

+- Project: platform/system/libsysprop

592d253 : Remove old Keyboard backlight sysprop flags
dc875c9 : Rename CriticalIssues to BackportedFixes
deac44e : Rename CriticalIssues to BackportedFixes
473d7bc : Add rendering depth removal sysprop
cb9b3b9 : Make CrashRecovery Properties Public
6edc4e1 : Add CriticalIssues system properties
0392dc9 : Add system property for Gaming Audio profile (GMAP)

+- Project: platform/system/libufdt

76c3a18 : ufdt: use size_t for iteration in ufdt_apply_multioverlay

+- Project: platform/system/libvintf

4c61828 : Treat all unknown attribute values as an error
189ad26 : Update libvintf xsd for `accessor` for vts_halManifest_validate_test
db1bbd3 : Add new exclusive-to field to hal entries in VINTF
33cedb3 : Format reference arguments for consistency in parse_string.h
74b3a54 : Move generate xsd file to avoid requiring API Review
eeb48f1 : Add error string when failing parse xml due to element mismatch
45bf2fe : Extract nameWithVersion()
71d084f : Use 'partition' instead of 'preinstalledModulePath' field in 'ApexInfo'
1b1d2e3 : Add proposed trendy teams for VTS modules
b4c7088 : PathReplacingFileSystem with multiple replacements

+- Project: platform/system/libziparchive

465d787 : Define ziptool.recovery
2a047f0 : Fix unit/uint typos.

+- Project: platform/system/linkerconfig

6eefa86 : Add proposed trendy teams for GTS modules
339ad9d : Use 'partition' instead of 'preinstalledModulePath' field in 'ApexInfo' to decide apex partition
ee36778 : Add a fallback configuration entry for /tmp.
73b3189 : Remove GtsLinkerConfigTestCases from MTS

+- Project: platform/system/logging

163c599f : Extract LOG_ID_MIN/MAX checks to a helper function.

+- Project: platform/system/media

7a20ffe5 : audio: add audio_output_is_mixed_output_flags
ac09757f : AIDL effect: add version for draining state support
90f46b28 : Camera: Add desktop effects metadata tags
081badef : Add UUID for eraser effect uuid
432a85fc : Add audio header definitions for IAMF
4b1d8a6d : Add version number to support effect destroy at any state
2e856175 : Add audio_uuid_t ToString and equality operator utils
bfc170f8 : Spatializer: define SPATIALIZER_PARAM_SPATIALIZED_CHANNEL_MASKS
21cfc78c : Add audio_uuid_t ToString and equality operator utils
e5978e3e : AIDL effect: add version for draining state support
96f25526 : audio_utils: Add Trace handling
dd5545e9 : Camera: auto-generate non-HAL visible request keys
c00276c2 : Revert^2 "audio: Add audio_policy_forced_cfg_to_string"
8e7d523f : Revert "audio: Add audio_policy_forced_cfg_to_string"
b67df495 : audio_utils: Add name format change string utility
b612317c : Add element-wise min/max/clamp op for AIDL unions
043db477 : audio_utils: Move useful string utilities from mediametrics
ec1be51b : Add multi-client support in camera2
4faf825f : Camera: Fix missing enums for fwk_only visibility
5d6df118 : Camera: Improve physicalCameraIds dumpsys
7aada737 : Camera: Add fwk_ndk_public visibility
9c059773 : Camera metadata: Support system API for synthetic keys
3cd32b57 : Camera: Add AE priority mode tags
bf772c2c : Camera: Add Baklava for feature combination query version
21c627a2 : audio: Add audio_policy_forced_cfg_to_string
f72f59e8 : Camera: Add CONTROL_ZOOM_METHOD CaptureRequest key
a66b4a82 : Night Mode Indicator
eb87bb73 : Add elementwise min/max utils
cc1f5887 : Rename clamp_utils to elementwise_op
2a465158 : Add clamp_utils for structures and vector clamping
f2e576a9 : Add opAggregateImpl and opAggregateImpl_N for element-wise operation
fe17babd : Add elementwise min/max utils
18713780 : Add clampParameter for effects with elementwise_op utility
9884bedd : Rename clamp_utils to elementwise_op
67d6d17b : Add clamp_utils for structures and vector clamping
96991641 : Add opAggregateImpl and opAggregateImpl_N for element-wise operation
2278d20f : Add a speaker_layout_channel_mask to audio_port_config_device_ext
5e9c98e4 : Camera: Fix CameraMetadataTag.mako template
f6f73cc7 : Revert^2 "Add SPEAKER_CLEANUP system usage HAL definition"
6207e812 : Revert "Add SPEAKER_CLEANUP system usage HAL definition"
33d713f3 : Camera: Add color temperature metadata tags
8066b250 : Add SPEAKER_CLEANUP system usage HAL definition
c763cfdb : Add audio device type MULTICHANNEL_GROUP
4f841322 : Camera: Add HEIC UltraHDR stream configuration tags
918d570c : camera: Clarify hot pixel map coordinate system when sensor pixel mode is MAXIMUM_RESOLUTION
f57c7575 : Add UUID for eraser effect uuid
b46ad028 : Update AE_MODE_ON description for flash control.
e049bec2 : audio_utils: Add RunRemote to run methods on a separate process
5a4f996b : Change libalsautilsv2 for shared library to cc_library
78c7412f : audio_utils: add queue wait analysis
f59e9540 : audio_utils: Add a std::mutex timed_lock method
3e6c58d8 : audio: Add support for AC-4 level 4 audio format
38b214f2 : audio_utils: Enable unique_lock safety annotations for MelProcessor
9e9e0bd0 : camera: Remove session_hal_buf_manager flag
bbd6ac2c : audio_utils: Enable unique_lock safety annotations for CommandThread
1bd1d3dc : audio_utils: Add unique_lock variant for std::mutex
5a8dbea8 : Change libalsautilsv2 for shared library to cc_library
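
Several commits above add element-wise min/max/clamp utilities (elementwise_op, clampParameter) for validating effect parameters. The audio_utils helpers operate on AIDL structures and unions; the core idea reduces to clamping each element against per-element bounds, sketched here over plain vectors (hypothetical helper, not the actual audio_utils code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative element-wise clamp: each v[i] is forced into [lo[i], hi[i]].
// Assumes all three vectors have the same length.
template <typename T>
std::vector<T> elementwiseClamp(const std::vector<T>& v,
                                const std::vector<T>& lo,
                                const std::vector<T>& hi) {
    std::vector<T> out(v.size());
    for (size_t i = 0; i < v.size(); ++i) {
        out[i] = std::min(std::max(v[i], lo[i]), hi[i]);
    }
    return out;
}
```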

+- Project: platform/system/memory/libdmabufheap

91edc87 : Add proposed trendy teams for VTS modules

+- Project: platform/system/memory/libion

758cc9b : libion: Extract headers to a separate build target

+- Project: platform/system/memory/libmeminfo

23854c7 : Add exporter information into dmabuf_dump report.
b018d09 : elf_alignment_test: Ignore ODEX files for now
7f7a381 : elf_alignment_test: Ignore ODEX files for now
feedc8b : memevents: Fail fast for garbage event data
782e5de : memevents: Correct type of oom_score_adj field in OomKill struct
c5d4eba : Don't use android::base::StartsWith / EndsWith
901e4a7 : Remove carlosgalo from libmeminfo owners
3153f8d : elf_alignment_test: Remove unnecessary cpp_std: "gnu++20"
d7613c2 : memevents: Make ring buffer access thread safe
ccde3f1 : elf64: compare_libs: Return error when parsing fails
1cb2501 : elf64: parser: Validate index for the string table in elf file
09f058d : Program to compare shared libraries
8d1bf1d : Library to compare elf64 shared libraries
83bf899 : Add proposed trendy teams for VTS modules
6f5765c : Handle name change of kgsl maps entries.
89221df : Rename libraries that creates invalid elf64 shared libraries
60c647a : elf_alignment_test: Ignore vendor TEE binaries
4711afa : elf_alignment_test: Switch to using regex
a47082a : elf_alignment_test: Ignore vendor TEE binaries
9b1d6e9 : elf_alignment_test: Switch to using regex
af790d3 : Revert "elf_alignment_test: Ignore vendor TEE binaries"
23fb314 : Generates shared library which executable header has ZERO e_shoff
b0b2705 : Generates a shared library which section headers are all ZERO
669fd16 : elf_alignment_test: Ignore vendor TEE binaries
0028c62 : Move trusty VM kernel to etc/vm/trusty_vm directory
89d42bb : Generates shared library which executable header has an invalid e_shoff
8519c28 : Generates invalid shared library which elf64_hdr indicates ZERO section headers
63ba19b : memevents: Set kver 5.8 requirement for programs using ring buffers
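
The "elf64: parser: Validate index for the string table" commit above guards against untrusted offsets into an ELF string table. A sketch of that kind of bounds check (hypothetical helper; the actual parser's validation may differ):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Returns a pointer to the string at `offset`, or nullptr if the offset is out
// of range or the string is not NUL-terminated inside the table -- preventing
// a read past the end of the buffer on malformed ELF input.
const char* GetStrTabEntry(const char* strtab, size_t strtab_size, uint64_t offset) {
    if (strtab == nullptr || offset >= strtab_size) return nullptr;
    for (size_t i = offset; i < strtab_size; ++i) {
        if (strtab[i] == '\0') return strtab + offset;  // terminator found in-bounds
    }
    return nullptr;  // runs off the end of the table
}
```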

+- Project: platform/system/memory/libmemunreachable

4fff23e : Add proposed trendy teams for VTS modules

+- Project: platform/system/memory/lmkd

50e484b : Stop redefining __NR_process_mrelease.
a63948e : lmkd: count the number of times LMKD wakes up
c7eca43 : lmkd: fix higher event being reset by lower and polling start
d155efb : lmkd: Fix first poll of an event occurring sooner than intended.
f32fe4d : lmkd: Ensure node stats are being parsed
667fdbf : lmkd: fix handling of EPOLLHUP for pidfd

+- Project: platform/system/memory/mmd

5f4a1e5 : Revert^2 "Implement basic mmd zram setup"
6cc3928 : Revert "Implement basic mmd zram setup"
58d5cc0 : Implement basic mmd zram setup
a95d66d : Load system properties for mmd
5972aae : Starts mmd only if AConfig flag is enabled
a5a16b0 : Introduce mock os interface per feature
6e0e72d : Setup group and permission of /sys/block/zram0 files at mmd.rc
a43e82b : Add ZramRecompression
ec5216c : Load zram writeback size on init
670f5be : Apply daily limit to zram writeback
10ef95a : Support writeback_limit of zram
b6fb5b8 : Introduce ZramWriteback
edf7d4e : Add binder interface of mmd
1f98fd2 : Configure rustfmt for mmd preupload hook
5537295 : Add mmd.rc to start the daemon on boot
960d825 : Introduce mmd in Rust
18b571c : Add mmd_flags_lib build target
879839e : Add mmd skeleton
b6f8309 : Add Shin as an owner of mmd
eb99717 : Add mmd project owners
f161575 : Initial empty repository

+- Project: platform/system/netd

01a1e47f : Android W requires 64-bit userspace on new 6.7+ kernels.
146e1c47 : kernel_test: relax skip for bpf jit test
6b6e644b : Remove unused variable.
77e5d602 : Convert vintf_fragments into vintf_fragment module(s)
f7e3fead : Include if_id in the xfrm_migrate netlink message
462b4398 : vts kernel_test: next Android requires kernel version 5.4+

+- Project: platform/system/nfc

9f91299c : Test : Added test cases in nfa_dm_main_test.cc and nfa_dm_ndef_test.cc Bug : 371103644 Test : atest libnfc-nci-tests
14fd8aaa : Added NFA_SetNfcSecure() API
76519a13 : nfc(nci): Add min_sdk_version
ea60657a : Test : Added test cases for nfa_ce_act.cc Bug : 371103644 Test : atest libnfc-nci-tests
4e9d87ad : nfc: Add null pointer check correctly
f830a97d : Casimir: add `--nci_unix_fd` and `--rf_unix_fd`
b46720e3 : Test : Added test cases for nfa_ce_main_test.cc Bug : 371103644 Test : atest libnfc-nci-tests
62a841eb : Implement field on notifications and setting the power level on cuttlefish
bb19a90f : Add T4T Ndef Nfceee feature support
2daf9f39 : Test : Added test cases in nfa_ce_api_test.cc Bug : 371103644 Test : atest libnfc-nci-tests
ac411f3e : Fix race condition for RF_DEACTIVATE_CMD while in RFST_POLL_ACTIVE
be760359 : Modify RF_FIELD_INFO_NTF depending on listen technologies
c11f55d1 : In RFST_IDLE, block RF_DISCOVER_RSP callback only if DISABLING
120ad755 : Report correct deactivation type when returning NFA_DEACTIVATED_EVT
5dea353c : Check flags when receiving INTF_ACTIVATED_NTF in RFST_W4_HOST_SELECT
811741b9 : If DEACT_CMD(disc) received while in RFST_DISCOVERY, return to RFST_IDLE
3464276b : Update dm_disc_poll/listen_mask_dfl in nfa_dm_act_change_discovery_tech()
bfe819ec : Add deselect when performing sleep/wakeup pres check for T2T
d5b76d8e : Handling of NACK data received during T2T presence checking
46f1713e : Enforce cache consistency between sending thread and receiving thread
6769e5c3 : Do not clean address of thread resources if GKI_PTHREAD_JOINABLE
998bb973 : Wait end of GKI_run() to destroy mutex
8b69f465 : Fix collision of activation and deactivate CMD in RFST_LISTEN_SLEEP
8c16861d : Send DEACTIVATE(IDLE) for any deactivate cmd received in listen_sleep
a21e07fb : Fix the scheduling of RF_DEACTIVATE_CMD/NTF while in RFST_LISTEN_ACTIVE
d9710bad : Update fuzzer for Reading of Chinese Id card
8ad37336 : Reading of Chinese Id card
788a0e9d : Casimir: Add a `Unix` variant of `Listener`
227322ec : Test : Added more test cases in CrcChecksum_test.cc Bug : 371103644 Test : libnfc-nci-tests
545a888c : Casimir: Add a `Listener` enum above `TcpListener` in `main`
dcbe21a4 : Add configurable option for deactivate timeout instead of hardcode
90ce1e3f : Test : Added test cases for debug_nfcsnoop_test.cc
3bbc9d8d : Casimir: Accept Async{Read,Write} implementations in Device::{nci, rf}
2161a0b4 : Implement sending custom polling loop annotations to cuttlefish
59401bfd : Test : Added Test cases in debug_lmrt_test.cc and modified Android.bp Bug : b/314795235 Test : libnfc-nci-tests
81a444b8 : Separate the default libnfc-nci.conf for different products
1b55ae7d : Fix max_conn value at reception of CORE_INIT_RSP
1810c030 : Test: Added test cases in CrcChecksum_test.cc and ringbuffer_test.cc Bug: b/371103644 Test: atest libnfc-nci-tests
2d87ce32 : nfc(mts): Fix mainline module name
512032d3 : mts(nfc): Add a MTS module for nfc
71724e47 : Remove unused variable.
76f36cd4 : Fix unhandled event (0x7)
6c760e23 : No need to read max number of EE at NFC_ENABLE_REVT
ea741bc7 : Fixing warnings of code analysis tool
081e89da : Removing support of NFC-DEP for listen mode
26020175 : nfc(tests): Add a new libnfc-nci-tests for unit tests
bec9e709 : Restore flags if deactivate(idle) failed
809c88bb : Report tag as absent if deactivation for legacy pres checking failed
e15cc60a : Fix possible null p_r_apdu dereference in Le encoding fix
11bc5bfb : Fix Le encoding issue for non NFC Forum compliant tags
85fffafe : Check T3T State machine when calling nw_t3t_handle_nci_poll_ntf()
75285e33 : Stop the right timer in rw_t3t_unselect()

+- Project: platform/system/secretkeeper

f8b4832 : Implement secretkeeper HAL v2
aaf9488 : Add dirgroup for trusty genrule
6437184 : Add unofficial/unsupported Cargo build files

+- Project: platform/system/security

ca704495 : Add helpers for populating apex module info
e5f30870 : Update code for Rust 1.82.0
703bcc18 : More diagnostics for slow operations
ee784bb9 : Skip DE-critical system keys on clearNamespace
e30c1ecb : Create new Keystore2 java_aconfig_library
0a17cbbf : Create new Keystore2 java_aconfig_library
1cfc81d8 : Add getSupplementaryAttestationInfo
6d713f29 : Refactor function signature and move mock back
734e42a7 : Don't use deviceSuffix, it isn't needed
de6c81f1 : Create new Keystore2 java_aconfig_library
cf5f122e : Skip running the test using `MAX_USES_PER_BOOT` on devices with keymaster implementation.
54a9189a : Add keystore2_client_tests
32e1610e : Add more grant tests
8e92e746 : Don't allow a non-owner to grant access to APP key
3f38c1c7 : Fix warning for rkp_factory_extraction_tool
a3f26f0a : Remove OWNERS who have moved on
c91ea02c : Add cvlasov@ and kwadhera@ to OWNERS
542212b9 : Keystore and KeyMint version bump
6024c693 : Keystore and KeyMint version bump
ef897ec1 : Use a struct for key access information
5589c5c8 : tests: refactor grant tests
33b7a622 : Add and use run_as::run_as_child_app helper
26b2d53f : Add and use run_as::run_as_app helper
dcfacb93 : Add and use run_as::run_as_root helper
11ed60e2 : Minimize unsafe blocks
61c3ed56 : [rkp_factory_extraction_tool] adding requireUdsCerts flag
8371cd27 : Skip tests using APPLICATION_DATA on older devices with Keymaster implementation.
743f1785 : Add code to support keystore certificate post processing.
24e6a5f6 : Clean up keystore owners
fa63f234 : Adjust tests for newly-exposed Grant functionality
b01202dc : Generate Java host library for keystore2 flags
a634cd46 : Drop unused IAttestationManager interface
e9b32c58 : [factory_tool] Update factory tool with new CSR verification param
9f0007eb : Cope better with errors in child processes
acd3d989 : Add tests using real Gatekeeper
89e87d50 : Test failure arms for auth-bound keys
32034c76 : Convert vintf_fragments into vintf_fragment module(s)
621e2177 : Update tests using app-id and app-data.
f804556d : Log more information on KEY_USER_NOT_AUTHENTICATED
2065602e : Allow more error codes while trying to use incorrect APPLICATION_ID/APPLICATION_DATA.
8c9a54c8 : Skip device id attestation tests on devices receiving only system update and not vendor update, on such devices `android.software.device_id_attestation` feature may not be configured correctly.
327f3ace : Allow more error codes on multiple block modes failure. Updated tests to use `APPLICATION_ID` along with `APPLICATION_DATA`.
76d95baf : Remove incorrect/confusing Certificate use
95239541 : Replace libchrome string_util to libbase strings
8da288c5 : Avoid linear scans of database tables

+- Project: platform/system/sepolicy

4117ea81f : Terminal app's package name is com.android.virtualization.terminal
e4bbb5ab2 : Allow system_server access to aconfigd_mainline socket as well
b2409ecae : plat_file_contexts_test: fix typo
6cf49137d : Remove flag check for bluetooth socket hal in private/service_contexts
61544252b : Add prop to control PM appcompat
27783693c : SELinux update to support aconfigd_mainline process
0e30a88a2 : Revert^2 "Add mmd selinux policies for zram setup"
2d3643ac7 : Fix an untracked selinux denial
3f3d48f59 : Use "bootloader_prop" context for new "ro.boot.vbmeta.public_key_digest"
e7cd0eb22 : Sepolicy setting for crosvm virtiofs mounts
10e3d1f55 : Remove new storage test mission marker file code now that new storage is launched
7f32d14db : Revert "Add nlmsg constants and macros"
c2449d38c : Revert "Update netlink_audit_socket for nlmsg xperm"
aa577262c : Revert "Update netlink_tcpdiag_socket for nlmsg xperm"
b55aaff9a : Revert "Update netlink_xfrm_socket for nlmsg xperm"
fa1b5c7fb : Revert "Update netlink_route_socket for nlmsg xperm"
b6721d27d : Revert "Enable netlink_xperm capability"
43202350f : Revert "Add mmd selinux policies for zram setup"
be9c526e1 : Add mmd selinux policies for zram setup
fa4957fbb : Enable netlink_xperm capability
dd25127cc : Update netlink_route_socket for nlmsg xperm
33392718a : Enable 'fwk_devicestate_service' in 202404 builds
eb4cc3eed : Update netlink_xfrm_socket for nlmsg xperm
67c1b352c : Update netlink_tcpdiag_socket for nlmsg xperm
5bc9bb8c8 : Update netlink_audit_socket for nlmsg xperm
24dcdf4c2 : Add nlmsg constants and macros
d49aeccfa : Deprecate cc_binary aconfigd and the controlling flag
23871eaa0 : selinux: bpfloader: add fs_bpf:dir read and open
bf8901713 : Fix trade-in mode on user devices
93e601af7 : allow traced to read frozen processes from cgroup.freeze
2fd980647 : Revert "Add sepolicy for terminal app for composite disk and disk resizing"
4ba2c4b9b : Add tunables to control prefetch
3d4d2cdaa : Allow init to write to /sys/module/dm_bufio/parameters/max_age_seconds
ca8bd39d8 : Deprecate cc_binary aconfigd and the controlling flag
81c4baecc : Add sepolicy for mmd system properties
9444dabc6 : Add mmd.enabled_aconfig to sepolicy
f1c7e1421 : Allow system server to access udc sysfs
2712c9026 : Add sepolicy for mmd to execute zram maintenance
9c7d30642 : Add mmd selinux policies
92022fe6d : create new system label for biometric virtual hal sysprop
562e55473 : Add library to get vendor genfs version
5e91cdf5d : Add profiling apex file contexts
47425aee5 : [Thread] allow ot-daemon to read/write system_server created UDP socket
d1e90993e : Add sepolicy for KeyMint VM system properties exposed to vendors
0af6fb7a7 : Allow system_server to read /vendor/usr/idc directory
51bf1e414 : Allow "adb shell tradeinmode" on user builds.
7f9867940 : Add fuzzer name for the Wifi mainline supplicant service.
ca1d90afa : prefetch: Add new prefetch.te selinux policy
dd8c0902c : Prefetch: Add sepolicy to control prefetch properties
c9a644a8c : allow vmlauncher_app virtualizationservice_data_file:file rename
ec2bdfe10 : Removing adaptive auth selinux entries
ed2d5a61b : [wifi_usd] Add USD service
db02ad5f6 : [StatsD IoUring] Allow StatsD to use IoUring in SEPolicy
eb5872ed2 : Introduce Selinux policies for the mainline supplicant.
f98b51ef0 : Add sepolicy for intrusion detection service
93d17cffc : Revert^5 "Allow system server to access udc sysfs"
6f6008cf1 : Add 'fwk_devicestate_service'
6de5d5062 : Revert "Add service wait override property for audioserver clients"
2ea821d5c : microdroid_manager: allow tcdrain
6f991b010 : Renaming adaptive authentication selinux entries
1454b4ccc : Allow rkp_cert_processor to call system_server and package_native.
256f21f70 : Allow the payload to print logs
291884428 : Update 24q4 sepolicy.conf file to match aosp/3309521
262fb0702 : Update 24q4 sepolicy.conf file to match aosp/3309521
b95542e37 : Add service wait override property for audioserver clients
477e8f7ff : Add test_pkvm_tee_service example tee service
48966b610 : Add plumbing for new tee_service_contexts
c5e4033d8 : Add an adb_tradeinmode type for restricted adbd.
6854dd718 : Add com.android.nfcservices-file_contexts to Android.bp
ee0ae810c : create security context for aconfigd-system rust binary and aconfigd_system socket.
ffa0493ea : Add mlstrustedsubject to aconfigd.te
c7426d2e1 : sepolicy for crosvm to support virtiofs
9b32308a7 : Add BOARD_GENFS_LABELS_VERSION
747be20e6 : bluetooth: Add policy for bluetooth.hardware.wakeup_supported
aac58aa95 : Renaming adaptive auth selinux entries
171c5c63a : Revert "Add property accesses to gatekeeper, keymint policies"
3c6b5aa73 : add property to disable usecase validator
116aa1a1a : Add property accesses to gatekeeper, keymint policies
64a2dab7e : Rename critical_issues.fixed_issues to backported_fixes.alias_bitset
68f82c6d6 : Rename critical_issues.fixed_issues to backported_fixes.alias_bitset
f233b1fca : Add hardware ID property
53c2221bc : Add sepolicy for forensic service
6b6aae477 : Add update provider to SELinux policy
8fe656806 : Make SingletonModules into regular Modules
63a356d52 : SEPolicy for dynamic_instrumentation_service
e1c3a9b0b : Allow bootanim to talk to the statsd socket
1f1e4cad0 : Add sepolicy for terminal app for composite disk and disk resizing
209772d38 : bug_map update for crash_dump / keystore
abfc8b086 : Revert^5 "Allow system server to access udc sysfs"
6265605d0 : Refactor selinux_policy_nonsystem to separate phonies for vendor/odm
d1d66024f : VmTerminalApp can create and manage vsock socket.
3e5d33ee3 : Add bluetooth socket hal
f0945b63c : Allow init to chown sysfs_firmware_acpi_tables
c4d0eb2bc : Revert "Allow init to chown sysfs_firmware_acpi_tables"
41a6feb03 : Remove bert_collector_start_prop property
dc46228ba : Remove dependencies on the 1-variant fallback
561429cbd : [KM-VM] Add SELinux rules for system internal properties
ad0591113 : Move rkp_cert_processor to system_ext.
957bb1148 : Allow access to cgroups.json files
6a8e53d5a : sepolicy for crosvm to support virtiofs
872e66d84 : Add sepolicy for crosvm to support virtiofs mounts
bad9ca11d : Allow "adb shell tradeinmode" on userdebug/eng builds.
ad4802f37 : Allow vendor domain to access bootstrap bionic libs
fec75c77a : Allow oom adjuster to set sched group for VM processes
f5ff3eb4c : SePolicy rules for initial media quality HAL service
138c0865a : Relax crosvm flagged neverallow rules
30349e3a7 : Remove dependencies on the 1-variant fallback
ef742642a : system_app.te: Allow System app to read /system_dlkm
c568cedd4 : Silence rawio denial for vold
4549e89cf : Add media.c2.remove_rendering_depth to property_contexts
6d3126d89 : Relax crosvm flagged neverallow rules
df3a5964e : Adding RELEASE_AVF_ENABLE_WIDEVINE_PVM build flag
1550e2d4d : Label system properties to config audio codec priority
e740080c5 : set_prop(shell, bionic_linker_16kb_app_compat_prop)
155cc2fe0 : Mark selinux_policy_* phony targets with their partition
2f31d932c : Add sepolicy for /metadata/tradeinmode.
f1849d343 : Remove unused system/sepolicy/Android.mk
d85b55d21 : Collapse task_profiles_api_file into task_profiles_file
55c17f2f3 : Collapse cgroup_desc_api_file into cgroup_desc_file
8809e9b4f : Allow SurfaceFlinger to talk to statsd bootstrap
290f00577 : Add ITradeInMode service sepolicy.
71481bf39 : Add tradeinmode sepolicy.
cfe070251 : Revert "domain.te: tmpfs neverallows"
802520812 : Mark selinux_policy_system_ext as system_ext_specific
e7c2031f8 : Sepolicy rules for initial media quality framework.
2cb29e1da : Revert "Add desktop_ec_crash_collector_start_prop context"
95115ba80 : domain.te: tmpfs neverallows
ef4385538 : Revert "Allowing userdebug/eng builds crash dump access to ks"
da1790534 : Label uprobestatsbpfload as bpfloader_exec
6d7b218e2 : Allow statsd to control the uprobestats service
aff35ea5c : VmTerminalApp: Merge with LinuxInstaller
002607136 : Add sepolicy rules in preparation for DocumentsUI APEX bundle
0592448b2 : Update the context used in the image interface
3fce5ad00 : Add an adb_tradeinmode type for restricted adbd.
b0d75afc8 : Revert "Allow dexopt_chroot_setup to check /metadata in chroot."
c5c9753af : Resolve neverallow in retrofit devices
194d712ef : sepolicy: allow fastbootd to operate devpts
63053fe28 : Add gpu_device access to isolated_compute_app
0acb51190 : Allow vmlauncher app to run e2fsck and resize2fs
8659c156f : Convert file_contexts.bin from Android.mk to Android.bp
ac43c5e2c : Allow apexd to rename files in /data/apex/decompressed
852d19b5d : Allow access to cgroups.json files
62bcb0da5 : aconfigd: cleanup
b0a21a6a9 : Allow dexopt_chroot_setup to check /metadata in chroot.
ee97d9c50 : Convert selinux_policy to Android.bp
603bfdc80 : Add desktop_ec_crash_collector_start_prop context
92d3e2b22 : Allow init to chown sysfs_firmware_acpi_tables
041b53e96 : Revert "Allow system to access all cgroups.json files"
6db51b615 : Remove deprecated SEPolicy permission for keystore
8afcd7b19 : Add sepolicy for ACPI bert_collector
831f7caa8 : Add sepolicy for ro.build.critical_issues.fixed_issues.long_list
81225f103 : Add neverallow rule to restrict Unix streams from/to sdk sandbox
0c9edb386 : Convert selinux_policy_system to Android.bp
4c36b7052 : rename sepolicy rule to linux_vm_setup as well
38deee7b5 : Add keystore2 grant capability to all apps to support key grants
09e326f0c : Remove dependencies on the 1-variant fallback
f6c6f4555 : Add changes to bring up Keystore Certificate post processor.
25cfdb97e : Revert "Convert selinux_policy_system to Android.bp"
5c932ee43 : Reapply "Expose starting_at_board_api to access_vectors"
3dd73cc79 : Add flag-guarding also to compat modules
13df47f40 : Add compat plat_pub_versioned.cil modules to bp
74d86d000 : Convert selinux_policy_system to Android.bp
051406bed : Allow system to access all cgroups.json files
d38119602 : [AAPM] Introduce new Service for Android Advanced Protection Mode
5c90418f9 : Allow virtual camera binder call to surface flinger
d5f1d0d94 : Allow virtual camera binder call to surface flinger
a07b3712f : Convert selinux_policy_nonsystem to Android.bp
c3ee5c4b9 : Allow apexd to talk to statsbootstrap_service
c2ed6adf7 : Revert^2 "Add a new label for fingerprint vhal properties"
7c5e59f92 : Revert "Add a new label for fingerprint vhal properties"
fbfdc6745 : Add a new label for fingerprint vhal properties to support moving from vendor to system
740511cf2 : Add a selinux rule for nfc security logging.
a4fddc0ba : Allow apexd to rename files in /data/apex/decompressed
5c6d49999 : Wildcard graphics.common SPHAL version.
1408190b9 : Convert selinux_policy_product to Android.bp
e08bfeab6 : Build plat_file_contexts_data_test for selinux_policy_system
9bae515e6 : Split adbd.te into adbd.te and adbd_common.te.
eec9b70da : Allow SurfaceFlinger to read/write Unix sockets from automotive_display_service
3ea355cc0 : Allow debugfs_wifi_tracing to create dir
6399243c3 : Add hypervisor.virtmgr.dump_device_tree

+- Project: platform/system/server_configurable_flags

2d69e92 : aconfigd: don't fail OTA flag apply if one flag fails
f76d76e : flag guard transition to use mainline aconfigd
4058f60 : flag guard transition to use mainline aconfigd
69b3d07 : Remove new storage test mission marker file code now that new storage is launched
64d15fb : Fix error handling in aconfigd-system
d2dd55e : Deprecate cc_binary aconfigd and the controlling flag
ff7fc9e : Deprecate cc_binary aconfigd and the controlling flag
988d15c : aconfigd: migrate to full rust aconfigd
3cc5382 : aconfigd: migrate to full rust aconfigd
fc31980 : Remove marker file creation
cdf0f5c : Aconfigd: send an error message back thru socket in case of error
288c173 : Merge OTA messages that have the same build ID
4b19b95 : Deprecate flag that guards local immediate override
867caab : Demote spammy log.
729f297 : move aconfigd platform init service into aconfigd.rc
7bca5b5 : remove aconfigd-system rust binary from coverage build while build system fix is being worked on, in order to unblock current work.
b5dcbf7 : Fix test failures
e81ab2f : Switch from aconfigd to aconfigd-system on flag flip
6210e62 : Switch from aconfigd to aconfigd-system on flag flip
831f502 : Fix aconfigd showmap rss regression
a777c5d : Call C++ aconfigd implementation from Rust binary
459fb2e : aconfigd: create a rust binary target aconfigd_system
32d5b01 : aconfigd: split aconfig cc_binary into a cc_binary + a cc_library
9661c2d : aconfigd: split aconfig cc_binary into a cc_binary + a cc_library
909fd99 : Support clearing overrides immediately in aconfigd
691fd56 : Move aconfigd log messages to DEBUG
77c31db : aconfigd: Run some tests as root
7195616 : Revert^2 "Add cts_flags_tests_cc to aconfigd"
6f42c6d : Move more test data into separate v1/v2 folders.
29c0567 : aconfigd: migrate aconfigd proto build targets to config infra module
4ab41dc : aconfigd: migrate aconfigd proto build targets to config infra module
9eb1862 : Support different override types in Java utils
f626050 : Revert "Add cts_flags_tests_cc to aconfigd"
e455ee9 : Add cts_flags_tests_cc to aconfigd
ea45a8f : Update boot info file when immediately overriding
8e05864 : Add has_boot_local_override to FlagSnapshot
539a896 : Use flag info file from build directly
6e0e1ec : test update to work with revert
29860f4 : Add proto lite lib to aconfigd

+- Project: platform/system/teeui

af2cbb1 : Add dirgroup for trusty genrule
a9734c7 : Use jni_libs to install shared library dependency

+- Project: platform/system/testing/gtest_extras

c008bb7 : Move all options errors to a string in the object.

+- Project: platform/system/timezone

752c7fcf : Added MCC mappings to the telephony lookup file

+- Project: platform/system/tools/aidl

8c88296d : Update AIDL SDK versions for Baklava.
1c09aab6 : Escape AIDL commandline.
bd79b3ef : Add explicit lifetimes for method parameters.
ce039142 : Allow large byte literals in Rust.
8f7bdbcd : IsMutated -> IsFromWithinArray
8d58ba8e : Update error message for frozen interfaces to note different imports
96b78bdd : Revert "Remove usages of AddReverseDependency()"
0ab163b9 : Remove dependencies on the 1-variant fallback
eab1aaeb : aidl_test_client: add cpp/ndk client tests for extendable parcelables
51b1171d : Consistently use int32_t for positions to avoid implicit conversions
b50e0a92 : Add soong module for aidl_rust_glue
7775de70 : Add dirgroup for trusty genrule
fc607837 : Fix an error message
b5f563e9 : Stop checking in AIDL intermediate files.
98144c71 : add integration test coverage for vintf ParcelableHolder
1498893d : aidl: fix decoding of vintf stability ParcelableHolder
cbf964c1 : Remove usages of AddReverseDependency()
9063012b : Remove aidl "generator" modules
269fe40e : Reland "Change API for tracing in NDK"
0fad32cc : Revert^3 "Change API for tracing in NDK"
106a40f1 : aidl: fix typo
616332f8 : Revert^2 "Change API for tracing in NDK"
ee81d692 : Remove the aidl api modules

+- Project: platform/system/tools/hidl

7d446c06 : Revert "Remove usages of AddReverseDependency()"
a5439251 : Remove dependencies on the 1-variant fallback
8bd11bbe : Add proposed trendy teams for VTS modules
40bbeaa5 : Remove usages of AddReverseDependency()

+- Project: platform/system/tools/mkbootimg

7ca80d4 : rust: add OWNERS
358690c : rust: migrate bindings generation to soong rust_bindgen
a306f82 : Re-licence bootimg.h to BSD-3

+- Project: platform/system/tools/xsdc

ac50c14 : Fix UnusedVariable errorprone issues
4c1a15d : Create xsdc docs module in a load hook

+- Project: platform/system/unwinding

7d0f203 : Make thread unwind timeout error messages better.

+- Project: platform/system/update_engine

8ab0f410 : Remove unnecessary last_error_ field
399bd4da : Remove unused field save_rollback_data
34d21042 : Fix for UnmapPartitionOnDeviceMapper
94fa3e52 : Fix for UnmapPartitionOnDeviceMapper
08bbbd49 : Re-map source partitions with RW permission during OTA
212b0536 : Skip FEC reads for partial updates
0aaa7368 : Add triggerPostinstall API to IUpdateEngine.aidl
9c5baeb7 : Add stubs for new API triggerPostinstall
80e4391c : xor_writer: use 64bit for partition size
13bc0b8a : Stop explicitly adding bionic subdirectories to the include path.
58661043 : Add new triggerPostinstall() API to support async postinstall script run
ae222aa0 : update_engine: xor writer size_t
e227d99d : Fix issues with setShouldSwitchSlotOnReboot
0c18424d : Replace base string utils with android::base ones
1f46f5bb : Replace libchrome NumberToString with std::format
11c3da63 : Remove libchrome stringprintf.h functions
6fbf3bec : Set default max_thread to 256
3357e278 : Simply Replace libchrome StringToXXs with android base ones
b9a9aa22 : Replace chrome string util with android base ones
d021d81c : Limit partition parallelization to number of CPU cores
91aa2174 : Replace libchrome size() with C++ standard ones
1783eb35 : Replace libchrome macros.h with android base ones
e68e1304 : Remove xz_chromeos.cc
333a619c : Remove string_number_conversions.h from payload_generation_config.cc
a9a25138 : Remove included file string_util.h from payload_signer.cc
4e1be09c : Remove included file string_split.h from payload_signer.cc
e70b6b43 : Remove libchrome dependency from operations_generator.h
247c9fec : Use android-base/macro.h instead of base/macro.h
c485dddb : Remove libchrome dependency from full_update_generator.h
3ec48106 : Use android-base/logging.h instead of base/logging.h
8bf1e690 : Remove include base/string/stringprintf.h
4f029762 : Use android-base/stringprintf.h instead of base/string/stringprintf.h
99da2214 : Fix partial extraction feature in ota_extractor
6d8a94df : Add aidl_interface for update_engine
458f31d4 : update owners.
f1a360be : update_engine: uint64_t of partition size

+- Project: platform/system/usb_info_tools

f4195bb : typec_connector_class_helper: Add location, USB & DRM info
79acf06 : typec_connector_class_helper: Add typec class cable info
b231690 : typec_connector_class_helper: Add alt. mode & PDO information
22ab4f0 : typec_connector_class_helper: Add Partner Product Type VDOs
50df58c : typec_connector_class_helper: Add partner identity without product type VDOs
f326557 : usb info tools: add preupload config
a655546 : Initial type-c connector class helper functionality - print the contents of files from the port directories
909b708 : Initial empty repository

+- Project: platform/system/vold

73670aad : Add support for F2FS format for HDD device storage
3da20f4c : vold: include multiple devices for calculating storage size
04e04701 : Fix use-after-free
d651359c : Improve log messages when CE key is already protected by secret
6b083c65 : [USB] Change default volume label for ExFat format mounts while formatting.
0c841556 : add f2fs device aliasing related arguments in vold
7bdf7998 : More accurately validate the prerequisites for using emmc_optimized
bc75f869 : Stop using deprecated function AServiceManager_getService()
e1e62224 : Fix some compiler warnings in vold

+- Project: platform/test/app_compat/csuite

bcd18a7 : Add logging to help debug and reproduce csuite failures locally
3fb23dc : Fix webview-app-launch tradefed config
6a0a485 : Remove dependencies on the 1-variant fallback

+- Project: platform/test/catbox

db01871 : Add perfetto traces and metrics to boottimetest
d6ba234 : Disable setup wizard for MU tests
e7ced23 : Add passenger load to the driver screen;
dec6d00 : Hot fix for PassengerLoad preparer for MUMD
4297905 : Revert "Included LaunchQuickControlsTest under catbox-functional..."
618fbd5 : Included LaunchQuickControlsTest under catbox-functional-status-bar.xml
34f0205 : Common Catbox xml for debugging and enabling smoke tests
79c7110 : Updated existing test plan to include Bluetooth Media Test
97dab73 : Current Date and Time test xml file
6099af1 : Add longevity test base XML file.
f67cf71 : Modify existing PassengerLoad preparer for MUMD
144c081 : Fix: local run for start new user test does not report metric in test_results file

+- Project: platform/test/cts-root

b91aae6 : Rename startObservingHealth API
5845643 : Allow observers to register custom executor
83050d5 : Revert "Add CTS test for delegated bugreport consent"
13db953 : Add WRITE_ALLOWLISTED_DEVICE_CONFIG perm when modifying DeviceConfig
ec9d51a : CtsRootPackageWatchdog as MTS for crashrecovery
9e74c86 : Add CTS test for delegated bugreport consent
576496d : Rename onPackageFailure to notifyPackageFailure
017253c : Update test to use pre jarjared library
52ac40e : Revert "Add CTS test for delegated bugreport consent"
51ac37a : Update tests to use classes from Module
53395d9 : Remove dependencies on the 1-variant fallback
5d433be : Add CTS test for delegated bugreport consent
6cc23f5 : Revert "Revert "Revert "Revert "Test adjustments for removal of ..."
6f3e157 : Revert "Revert "Revert "Test adjustments for removal of VDM perm..."
4eb01de : Revert "Revert "Test adjustments for removal of VDM permissions ..."
662d938 : Revert "Test adjustments for removal of VDM permissions from Shell"
ee3f5cf : Test adjustments for removal of VDM permissions from Shell

+- Project: platform/test/dittosuite

6cf9840 : Stop redefining __NR_sched_setattr.
ab164ec : Script to automate Perfetto, Ditto, TraceProcessor execution

+- Project: platform/test/mts

752bd623 : Revert "Add mts tests to the test plan."
f93b2700 : Add mts tests to the test plan.
568a9709 : Add CtsHealthConnectHostSideTestCases to MTS healthfitness tests list.
81592132 : Add MTS for crashrecovery
d6f9becc : Regenerate the ART MTS definition (2024-11-01).
da944b8f : Regenerate the ART MTS definition (2024-10-22).
d8324b4a : mts(nfc): Add a MTS module for nfc
36041311 : add adservices and odp mts to mts-all config
47e99444 : Regenerate the ART MTS definition (2024-10-15).
aafe20f9 : Update MTS test plan
a58c5b82 : Remove GtsLinkerConfigTestCases from MTS
df5d2ec6 : Add new Rvc test modules to mts config files
1e8175ee : add adservices and odp mts to mts-all config
4ff032d6 : Add Health Connect PHR CTS tests to MTS
edd7c1f8 : Revert "mts-uwb: Add libuci_hal_android_tests to UWB MTS"
5a8f8860 : Add missing HealthFitness tests to MTS.

+- Project: platform/test/robolectric-extensions

f7b38bf : Implement a Junit4 RunListener based of a clone of tradefed's clearcut support.
a8b0d02 : Fix typo annotation
98867c5 : Enable logcat logging to stdout in Robolectric.
bd92169 : The upstream version of the native lib loader changed logic.
7085c55 : The upstream version of the native lib loader changed logic.
d9c92b7 : work around upstream, most of this class should be deleted
49b20aa : Use updated sdk int in run-android-test.sh.
166cbe5 : Fix AndroidNativeRuntimeLoader to work with Baklava
bd6456a : Fix AndroidNativeRuntimeLoader to work with Baklava
08d8d2b : Default to conscypt mode off everywhere. No-Typo-Check: don't control the annotation.
9c644f3 : Set all needed system props before read
7b4a615 : Fix AssetManagerTest when run via run-android-test.sh.

+- Project: platform/test/suite_harness

91e5c917 : Simplify IncrementalDeqpPreparer to collect dEQP dependencies only.
85574002 : Reuse the same EDI file to store incremental dEQP results for different run modes.
d0a78a56 : Use `modules` as the attribute name instead of `module` when saving data into the EDI files.
48ade124 : Supports to collect dEQP dependencies for trusted builds.
5e9ee314 : MediaPreparer places a completion sentinel
5203e802 : Revert "Use the targetFirst value from the module config"
61dc486d : Use the targetFirst value from the module config
de5f77e5 : Move filter to moduleDir
0e74014f : Revert "Update interface with moduleDir"
020c61f1 : Update interface with moduleDir
da6c2659 : Keep incremental dEQP build attributes in sync across shards.
6640081d : Check for file existence only

+- Project: platform/test/vts

f40304e7d : Fix cpu device check in VtsGpuTests
6683f13f7 : Exclude vts_kernelLifetimes_validate_test from vts-validation plan
d267ea554 : Exclude vts_kernelLifetimes_validate_test from vts-validation plan
4431447f0 : Exclude vts_eol_enforcement_test from vts-validation plan
dc8a0401d : Exclude vts_eol_enforcement_test from vts-validation plan
3005a6ee9 : Update VTS to exclude new graphics requirements for TV
55e1954f8 : Support VENDOR_25Q2 build version in dEQP level test.
69d1f2ac4 : Add proposed trendy teams for VTS modules
2bece7469 : DO NOT MERGE Update VTS tag version to V15_R3
0c1b42ead : DO NOT MERGE Update VTS tag version to V12_R15
e02bf9f90 : DO NOT MERGE Update VTS tag version to V12.1_R13
6ecb2c77b : DO NOT MERGE Update VTS tag version to V13_R11
7e1e889c0 : DO NOT MERGE Update VTS tag version to V14_R7
1313d9690 : Update VTS Platinum candidate plans with new tests.

+- Project: platform/test/vts-testcase/hal

46252789 : Make sure we don't have undeclared HALs in the device manifest
127a13ba : Rename KM VM system property
29ac231a : Rename system property to enable KeyMint VM
a893cecf : Mapper4 or later is hard-required for API 36+
e3bb92c8 : Changes to OWNERS files for USB.
2f7198d4 : Add proposed trendy teams for VTS modules
9b0f57ac : Add proposed trendy teams for VTS modules
73d8a698 : Add proposed trendy teams for VTS modules
e8ad1898 : Fixes for errorprone update
cf13fbae : Allow OEMs to specify MTP class/subclass/protocol to be 255/255/0
e2e0c9ef : Allow OEMs to specify MTP class/subclass/protocol to be 255/255/0
d864d8f8 : Allow empty vndk version for devices with ro.board.api_level
562a5ec9 : Allow empty vndk version for devices with ro.board.api_level

+- Project: platform/test/vts-testcase/kernel

673adf1a : Add some F2FS tests around compression
8cd14aad : EOL enforcement: Hard code branch name for Android 11
7eb9fc50 : remove bpf_native_test VTS test.
9a515f59 : ltp: Rewrite special make rules for ltp into soong
71e9702e : EOL enforcement: Hard code branch name for Android 11
aace6429 : Add encryption to automotive tests
573a031d : ltp: swapon tests that depend on v1 memory controller should be skippable
4be64365 : Add proposed trendy teams for VTS modules
3556e21a : Add proposed trendy teams for VTS modules
14301565 : Skip init_boot requirement based on ro.board.first_api_level
c5be2ab0 : Allow OEMs to update old 32/64 devices' vendor partitions.
df4e6f50 : ltp: enable inotify tests
5b1f977f : ltp: template.xml: Reduce number of entity references in xml template
a919e02d : Add proposed trendy teams for VTS modules
cf605c4d : Add proposed trendy teams for VTS modules
1fc548ae : Add proposed trendy teams for VTS modules
a6d6b855 : Fix compiler warnings in vts_kernel_encryption_test
d00bb42c : Vts16KPageSizeTest: Add 16KiB VMA test cases
3f951613 : Allow legacy sdcardfs devices to not use fuse-bpf
4bdbda52 : Adjust comment language
554aef99 : [VTS] Support custom FBE derivation algos
89c8597f : ltp: Abort if device or root is lost during test
784a1e99 : [VTS] Support custom FBE derivation algos
50e32f1a : ltp: Enable new tests in 20240524
a4e98bda : ltp: Update test suites to 20240524

+- Project: platform/test/vts-testcase/security

31bfb44 : Skip 32bit kernel check when api level <= 33
c233586 : Add proposed trendy teams for VTS modules
6d64a9a : Add proposed trendy teams for VTS modules

+- Project: platform/test/vts-testcase/vndk

77b42df : Add proposed trendy teams for VTS modules

+- Project: toolchain/pgo-profiles

9cb1bd8 : Merge cherrypicks of ['android-review.googlesource.com/3396835', 'googleplex-android-review.googlesource.com/30980213'] into 25Q1-release.

+- Project: platform/tools/acloud

1df901d : Tolerate missing log directories
a6344c2 : trusty: Allow mixing remote and local images
ad50048 : trusty: Add remote image support

+- Project: platform/tools/apksig

137d813 : Mention APK support in an example command in help
c1caa15 : apksigner: support codename Baklava
4ba7e9d : Fix UnusedVariable errorprone issues
9618f8b : Handle OOM error when allocating space for signature block

+- Project: platform/tools/asuite

aba0ad87 : Support incremental setup rollout process.
355375c3 : Allow adjust rolling output window height through env var
15d4fec1 : Increase rolling_tf_subprocess_output roll out percentage to 100%
c4cf3ffe : Increase deprecate_bazel_mode roll out to 60%
6ad6880f : Send feature roll out metric even when the percentage is 0 or 100
4d5cb28f : Add the rollout control for incremental setup.
62227291 : Remove the message print for deprecate bazel mode
7b63baa5 : Increase rolling subproces output rollout ratio to 60%
6802129f : Increase deprecate bazel mode rollout percentage to 30%
9dbc95d8 : Add go/a-update link/banner to adevice
47998cf1 : A update alias autocomplete.
92d6f9e0 : Change disable_bazel_mode_by_default feature to deprecate_bazel_mode
232c726d : Do not print rolling output feature message if not using it
36a3525d : Mark the gradual feature rollout message magenta
616ef815 : Increase the rollout percentage of rolling subprocess output to 30%
620dd8fa : Increase bazel mode deprecation rollout percentage to 10%
c9a1dec1 : Each rolling out feature specifies its own message
4565bd15 : Display how to disable partially rolled out features to early users
5c854068 : Roll out default bazel mode disabling to 5% users
3110fd2a : Roll out rolling subprocess output feature to 5% users
e5bd334a : Replace tabs in rolling output
c07d26de : Fix extra erase of 2 output lines when rolling print is still empty
9e55c000 : Intercept stderr as well during rolling output
b1f6d8d8 : Fix import
e530312e : Always enable gradual rolling out features for adte owners
bd918f6d : Add a member to owners
5a44e42d : Print full output when rolling output is enabled and subprocess failed
017409e3 : Use StringIO instead of a string list
360b2530 : Erase the rolling outputs after command finish
e78f944e : Only display rolling output on tty terminals
6843ed56 : Support displaying TF subprocess progress in a rolling window
2b04e08b : Skip logging and hash calculation for 0 or 100 rollout percentage
6dffe21e : Start gradual rollout of disabling bazel mode
642f5690 : Validate feature param values
20e3ba46 : Cache the controlled feature enablement result
75fcbfb3 : Send metric for partially rolled out features
7d891025 : Support gradual feature rollout
5b5d56c7 : module_finder: Fix linter complaints
165d3ff3 : Remove vts_kernel_tests target for ltp
424c772c : Remove python2 from atest bazel
6d5ec6da : Remove filter_teams
bd56297c : Fix help message when no args entered; alias fixes
177139fe : Assert on all atest runner commands in dry run integration tests
82c72b1c : Highlight bug report link
1a257183 : Disable log uploading when running integration tests locally
586b031b : Pipe input to integration test subprocess
24e33226 : Remove a misleading print
37d08859 : A tool ui improvements; support multiple aliases
1e7b3c51 : Add some detail to debug log
c4fea21c : A update: Add rest of make push aliases
95d439a9 : result_reporter: include the test iteration's run name in summary
0ee3841e : Use `llvm-readobj` to filter out irrelevant objects
acd7e15c : Exclude build output in atest logging
f71591bb : Remove '\n' from bash reset code
2e9485ae : Support printing build output in multiple limited lines and wrap long lines
ab712e56 : metrics_uploader path; display metrics send fails
98495568 : Copy build verbose log into atest log directory
788d6df2 : Copy build trace before generating log file html
f3f0e86d : adevice metrics_uploader path fix
0d0129a2 : Create a base template for performance-tests
fd4997e1 : Remove dry run diff tests from default integration test set
03317032 : Add back the test log directory print
d317a21b : Print individual perf test metric files with --print-perf-iterations flag
e91f8d74 : Support setting invocation properties in result upload for perf tests
d0854946 : Remove agueeva from OWNERS
d780e002 : Inject default args for perf tests
7c8f7814 : Speed up atest by running non-indexed test finders immediately
99a1d07a : Create integration tests to compare atest behavior on prod and dev binaries
a0df0e06 : Use a class to hold additional args for integration tests
78bfb4ba : Disable 2 brittle integration tests
64b0011d : A tool ui improvements
39ea3799 : Read the USB speed from the device instead of the adb server.
4839d41d : DO NOT MERGE: Add back FileProtoResultReporter

+- Project: platform/tools/currysrc

c14cd41 : Support flagged-api.json option

+- Project: platform/tools/deviceinfra/prebuilts

c687913 : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 947-20241206-001230, GitHub version 9c455bd81ba2d26efd9fd565e7a9ea1aaa7f8011
44e1f84 : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 947-20241206-001230, GitHub version 9c455bd81ba2d26efd9fd565e7a9ea1aaa7f8011
2a4b27e : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 947-20241206-001230, GitHub version 9c455bd81ba2d26efd9fd565e7a9ea1aaa7f8011
240f99d : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 947-20241206-001230, GitHub version 9c455bd81ba2d26efd9fd565e7a9ea1aaa7f8011
815edfa : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 945-20241204-074828, GitHub version a590c38cf638f6d35fdef7b6c816cad946a3d2aa
6e870f1 : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 945-20241204-074828, GitHub version a590c38cf638f6d35fdef7b6c816cad946a3d2aa
3a8babe : Add liutongbo@ to console OWNERS (cherry picked from https://android-review.googlesource.com/q/commit:403530c09ae0e5372dd72d4fedd064b296057134) Merged-In: I088fb2182c9b878b92c19fb9bf7d83f2f35b7d9d Change-Id: I088fb2182c9b878b92c19fb9bf7d83f2f35b7d9d
af097d0 : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 937-20241129-015708, GitHub version 959257a5027b297cfaa646989f82fbc4d1a3e1d9
a9b267a : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 937-20241129-015708, GitHub version 959257a5027b297cfaa646989f82fbc4d1a3e1d9
2063f6e : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 934-20241126-053213, GitHub version f3310c25328fe27901186922a2e7fe21482cfb1c
8ce6dce : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 934-20241126-053213, GitHub version f3310c25328fe27901186922a2e7fe21482cfb1c
61be9bc : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 927-20241121-040027, GitHub version 0f6d0913b60e3b116ded8bf0c67e81cfde17d379
185b8b9 : RESTRICT AUTOMERGE: Release deviceinfra prebuilts: version 927-20241121-040027, GitHub version 0f6d0913b60e3b116ded8bf0c67e81cfde17d379
f55ad4a : Release deviceinfra prebuilts: version 922-20241117-040035, GitHub version def601c03aef4a0dec9ada772ddd19a68b8bfb2c
52ef56f : Release deviceinfra prebuilts: version 922-20241117-040035, GitHub version def601c03aef4a0dec9ada772ddd19a68b8bfb2c
dd861b2 : Release deviceinfra prebuilts: version 922-20241117-040035, GitHub version def601c03aef4a0dec9ada772ddd19a68b8bfb2c
403530c : Add liutongbo@ to console OWNERS
6a5d160 : Release deviceinfra prebuilts: version 907-20241104-040021, GitHub version 0b054170f8378694cc5f2b089bc81959a48a77b0
d072316 : Release deviceinfra prebuilts: version 907-20241104-040021, GitHub version 0b054170f8378694cc5f2b089bc81959a48a77b0
ac96fdd : Release deviceinfra prebuilts: version 907-20241104-040021, GitHub version 0b054170f8378694cc5f2b089bc81959a48a77b0
f8cf3aa : Release deviceinfra prebuilts: version 903-20241031-040029, GitHub version 2c5697079cb7d488171a2d5ac27ea10d7b58a165
710e1e6 : Release deviceinfra prebuilts: version 903-20241031-040029, GitHub version 2c5697079cb7d488171a2d5ac27ea10d7b58a165
35ef2b8 : Release deviceinfra prebuilts: version 903-20241031-040029, GitHub version 2c5697079cb7d488171a2d5ac27ea10d7b58a165
3fdb575 : Release deviceinfra prebuilts: version 902-20241030-040408, GitHub version 838cac1418540466d0475c4e9c37100954c26c45
0cac54d : Release deviceinfra prebuilts: version 902-20241030-040408, GitHub version 838cac1418540466d0475c4e9c37100954c26c45
b235c0d : Release deviceinfra prebuilts: version 902-20241030-040408, GitHub version 838cac1418540466d0475c4e9c37100954c26c45
9df1007 : Release deviceinfra prebuilts: version 901-20241029-045825, GitHub version fa90770359ccbb6cf5ff0c335b61ffb4ebf2ce15
b9b82eb : Release deviceinfra prebuilts: version 901-20241029-045825, GitHub version fa90770359ccbb6cf5ff0c335b61ffb4ebf2ce15
3b582fc : Release deviceinfra prebuilts: version 901-20241029-045825, GitHub version fa90770359ccbb6cf5ff0c335b61ffb4ebf2ce15
b8118fe : Revert^2 "Release deviceinfra prebuilts: version 886-20241016-232705, GitHub version 828cad1d4191157a4af5e17c6789d0d09dea38da"
58d0947 : Revert "Release deviceinfra prebuilts: version 886-20241016-232705, GitHub version 828cad1d4191157a4af5e17c6789d0d09dea38da"
afc3fe6 : Release deviceinfra prebuilts: version 886-20241016-232705, GitHub version 828cad1d4191157a4af5e17c6789d0d09dea38da
4019b82 : Release deviceinfra prebuilts: version 886-20241016-232705, GitHub version 828cad1d4191157a4af5e17c6789d0d09dea38da
6fa839f : Release deviceinfra prebuilts: version 886-20241016-232705, GitHub version 828cad1d4191157a4af5e17c6789d0d09dea38da
a217951 : Release deviceinfra prebuilts: version 878-20241011-040025, GitHub version 9c14bfe09eb8cbf3621c7548c878d84f496e8fee
39441c0 : Release deviceinfra prebuilts: version 878-20241011-040025, GitHub version 9c14bfe09eb8cbf3621c7548c878d84f496e8fee
7f8e646 : Release deviceinfra prebuilts: version 878-20241011-040025, GitHub version 9c14bfe09eb8cbf3621c7548c878d84f496e8fee
ef9afb1 : Release deviceinfra prebuilts: version 874-20241007-040025, GitHub version 6aaf6b44bca694753e5e203898ae0dcf8d3813a4
230a5d4 : Release deviceinfra prebuilts: version 874-20241007-040025, GitHub version 6aaf6b44bca694753e5e203898ae0dcf8d3813a4
da69b38 : Release deviceinfra prebuilts: version 874-20241007-040025, GitHub version 6aaf6b44bca694753e5e203898ae0dcf8d3813a4
049b237 : Release deviceinfra prebuilts: version 860-20240924-040359, GitHub version 463331aa527c53e5ce095ca41430a022497ee805
f51276b : Release deviceinfra prebuilts: version 860-20240924-040359, GitHub version 463331aa527c53e5ce095ca41430a022497ee805
437bf95 : Release deviceinfra prebuilts: version 860-20240924-040359, GitHub version 463331aa527c53e5ce095ca41430a022497ee805

+- Project: platform/tools/doc_generation

4ae10ff : Reduce system call for ds-docs-switched

+- Project: platform/tools/external_updater

499dd08 : Add more fields to METADATA proto
c5dcad7 : Fix a bug
91a3756 : Allow upgrading to a specific version
d640800 : Suggest an alternative for tags that are not on any branch
abafcbf : Refactor git_updater.check()

+- Project: platform/tools/metalava

70f7508ae : Dedup creating Api from VersionedApis
ae3761482 : Add VersionedApi abstraction around readAndroidJar(...)
66d740a89 : Combine VersionedSourceApi and VersionedSignatureApi list
4dc930530 : Create VersionedApi abstraction
ce3b372bc : Add VersionedSourceApi to update Api from CodebaseFragment
dc0dd123a : Move API updating into VersionedSignatureApi
6df0b1cd5 : Encapsulate SignatureFileLoader in VersionedSignatureApi
d531754af : Make generateApiVersionsFromSignatureFilesConfig a fun
298aeb935 : Do not cache Codebase loaded from historical signature files
33a7788ae : Extract interface from SignatureFileLoader
4f6cd5e14 : Remove unused SignatureFileCache.load(SignatureFile, ...)
0c9247e1a : Change GenerateXmlConfig.currentApiLevel to ApiVersion
ba772c3fe : Simplify file handling in apilevels tests
a0ba4e57b : Move MinSdkVersion into Manifest.kt
758c8ed2a : Rename SdkVersion to ApiVersion
bbbd6a6ee : Update Api with Codebase version
b14dc8547 : Add test for removing classes in current codebase
baa41a786 : Add ApiElement.Updater for SDK extensions
1f35a023d : Abstract ApiElement updating into an Updater class
12715e90d : Remove ApiClass.getMethod(...)
c5a06da0a : Rename apilevels add...(...) methods to update...(...)
faafb0342 : Use maps to store super classes and interfaces in ApiClass
4f9337f06 : Stop passing version/deprecated to ApiElement constructor
7f59f3d57 : Make Api implement ParentApiElement instead of ApiElement
2d317f108 : Add ParentApiElement for use in ApiXmlPrinter
07c4bde97 : Combine ApiElement.update(...) methods with default value
ca491c67e : Improve debuggability of ApiElements
ac65afeb4 : Track changes to extends/implements in SDK extensions
6e4d76a5c : Fix backfill for system so it works the same as public
7651dc8a1 : Fix historical backfill for getAllExtensionVersions()
4a0c2d3d6 : Add missing copyright statements
ae33f474d : Improve testing of historical backfill
4beed27f8 : Prepare for improving testing of SdkExtensions
27846444f : Make ExtractSystemApiLevelsTest use system jars
d92705795 : Dedup updating of sdks on fields and methods
47c9aa0e1 : Test SDK extension changes to extends/implements
abc622ba4 : Avoid having to quote $ in expected XML contents
7f4e64d5f : Fix computation of version in removed attribute
13000663c : Refactor whether to include current Codebase in generateXml(...)
c91226199 : Remove references to JSON from generateJson(...)
0203480e9 : Make --generate-api-version-history format dependent on extension
ee9c87ef3 : Add ApiPrinter to GenerateJsonConfig
ee27a7923 : Dedup api-versions.json checks in ApiGeneratorTest
ddb208078 : Maintain indentation in checkApiVersionsXmlContent(...)
8f77b7f19 : Dedup api-versions.xml checks in ApiGeneratorTest
305cfeb59 : Improve abstraction of command line arguments in GenerateJsonConfig
3e631dd7a : Encapsulate generateJson(...) parameters in config class
5574e1a20 : Convert api versions json properties to Clikt
ca757b7b1 : Only track a single SDK extension version for @sdkExtSince
26861e431 : Use SdkVersion in DessertReleaseBasedSdkExtension
95929b84f : Differentiate dessert independent and dependent extensions
8ad4ce9c4 : Add SdkExtension.fromXmlAttributes(...) factory method
984cf5433 : Change SdkExtension from data class to normal class
c4baa1a1d : Rename SdkIdentifier to SdkExtension
62bdadbff : Clarify contents of calculateSdksAttr(...extensions...) parameter
659795609 : Move validation into AvailableSdkExtensions
5000de549 : Add AvailableSdkExtensions abstraction
fdd1b81b3 : Remove magic ApiElement.NEVER value
603a2d709 : Prevent creating SdkIdentifier with ANDROID_PLATFORM_SDK_ID
aa3b542d2 : Simplify handling of SDK ids in DocAnalyzer
8feb3a487 : Use SdkVersion major/minor/patch in generateJson(...)
6f79eb62b : Add major/minor/patch/preReleaseQuality properties to SdkVersion
a0b1985f0 : Add missing copyright headers to kt files
05fe8a668 : Fix handling of flagged super method with no previously released API
f0b90d430 : Add test to show incorrect handling of FlaggedApi on super method
3112fef3e : Make generateJson(...) from current codebase test more realistic
4af01c552 : Use Sdk/ExtVersion companion object properties and functions
9ea0255a2 : Replace Sdk/ExtVersion type aliases with value classes
4ecd1d88c : Differentiate between sdk and ext version
2375ea9c6 : Upgrade severity of value class lint to ERROR
9a72f4aeb : Remove secondary constructors from ApiElement
10d66c30a : Rename generateJson(...) currentApiVersion parameter
d475fab00 : Remove unnecessary @Throws annotations
a49914ace : Use `null` for the unset value of ApiElement.deprecatedIn
0a127898e : Simplify ApiClass.alwaysHidden
bb603a313 : Update AndroidX @Discouraged retention to CLASS
15897c242 : Move creation of GenerateXmlConfig into ApiLevelsGenerationOptions
6399573b3 : Move validation of sdkJarRoot and sdkInfoFile to ApiLevelsGenerationOptions
2e8ad6e82 : Move ApiGenerator.generateXml(...) parameters into data object
3b4abc24a : Move findAndroidJars(...) and supporting code to ApiLevelsGenerationOptions
e5529b013 : Call findAndroidJars(...) on demand
270107f4f : Prepare findAndroidJars(...) for move to ApiLevelsGenerationOptions
1ea7a7905 : Delegate Options.currentApiLevel
3f6f639db : Pass a filter for API levels into DocAnalyzer
88e41ed40 : Move isDeveloperPreviewBuild to ApiLevelsGenerationOptions
fd76766a3 : Move DocAnalyzer.getApiLevelLabel(...) to ApiLevelsGenerationOptions
58d61abc3 : Use numeric API level in Javadoc if --current-version not provided
fc7854c40 : Dedup getting label for an API level
0df40fe18 : Test providing --current-codename but not --current-version
e69aaf5e5 : Move DocAnalyzer.apiPredicateConfig to constructor property
dd681457a : Pass ExecutionEnvironment to Options constructor
e25abf9cb : Remove unnecessary setting of global options
747b3f002 : Remove unused --current-jar option
0c8a85e4a : Simplify ExtensionSdkJarReaderTest
46f7c5b55 : Remove unused --hide-sdk-extensions-newer-than option
4e7cd7ba1 : Fix applying api-versions.xml file that has just been generated
ac1e63f49 : Check docstubs in ApiGeneratorTest
bd1566554 : Generalize using a file or string contents in DriverTest.check(...)
3d43431ba : Stop useExistingSignatureFileOrCreateNewFile(..) supporting nullable
0758b6237 : Move Api*.print(...) methods to ApiXmlPrinter
d94df8215 : Make mSdks and mLastPresentIn accessible outside ApiElement
08cc88da6 : Remove Api.mMin and pass it in to Api.print(...) method
899bc3f0f : Make ApiJsonPrinter extend ApiPrinter
ec2afc2cf : Add general mechanism for printing API levels and use for XML
5bab578ec : Use PrintWriter not PrintStream when writing api-versions.xml
7f4e61f3b : Fix IDE reported issues in ...apilevels package
58426a10d : Dedup Api.clean()
a702f25df : Move --hide-sdk-extensions-newer-than to ApiLevelsGenerationOptions
9572cbc03 : Convert --sdk-extensions-* options to Clikt
ccd192519 : Convert --current-jar to Clikt
c289ac907 : Convert --android-jar-pattern to Clikt
46e2557f7 : Clean up handling of jar patterns when generating XML API levels
a3fc74f78 : Convert --current-codename to Clikt
717b4cf77 : Convert --first-version and --current-version to Clikt
36f4899f6 : Use null as the default value for --current-version
c41fccd35 : Convert --remove-missing-class-references-in-api-levels to Clikt
bc8fbdc70 : Add ApiLevelsGenerationOptions
c798fefd9 : Gather complete api-versions.xml for module-lib and system-server
1f5351fa1 : Move ...model.visitors to metalava-model
58d915677 : Move ApiVisitor and related classes to ...model.visitors package
cf9acfad9 : Remove default value for ApiPredicate.config
807fbf698 : Take a snapshot of every item in metalava-model-snapshot-testing
3e8964240 : Add selectedApiVariants test for a system API delta
d30252b22 : Dedup and parameterize the selectedApiVariants tests
ad209bffd : Add context to the error when an invalid PsiType is found
2aa12569e : Update baseline key generation for kt source elements
d9b780684 : Move systemApiSource to KnownSourceFiles
3a46f924e : Copy selectedApiVariants into snapshot
f7b91b764 : Hide forMainSurface/apiVariantType in SignatureFile
afdf15b87 : Replace ApiFile.forMainApiSurface with apiVariant.surface.isMain
7848f8579 : Update selectedApiVariants when parsing signature files
daff37add : Add SignatureFile.apiVariantFor(ApiSurfaces)
7f3e745d2 : Add test to show selectedApiVariants on previously released API
c8b114d9f : Show selectedApiVariants are not updated from signature files
a11bcb2fa : Create a single Codebase for flagged API reversion
fea2d2a57 : Fix cts-current-api-gz and related paths
da5905d8d : Disallow checking compatibility against .jar file
5fd7f2366 : Check compatibility against signature files not .jar files
e148e41ca : Use toTypeString(...) fully in ProguardWriter
ef5f72b97 : Add TypeStringConfiguration.nestedClassSeparator
3e4221c7b : Add TypeStringConfiguration.eraseGenerics
01881e94d : Add TypeStringConfiguration.treatVarargsAsArray
6000a210b : Dedup computation of array suffix to use in toTypeString(...)
e8eb2cd43 : Make toCanonicalType use toTypeString(stripJavaLangPrefix=NEVER)
6a0c21b83 : Expand CommonTypeStringTest to test toCanonicalType()
8534ee842 : Move multiple parameter TypeItem.toTypeString(...) to test fixture
87630db6d : Reuse TypeStringConfiguration instances
b825e87a0 : Reduce cost of maintaining TypeStringConfiguration.isDefault
d2c07ab4a : Remove assertions about MAGIC_VERSION used in API generator test
671872c72 : Dedup TypeStringConfiguration
a3060986d : Move filter from TypeStringConfiguration to TypeStringParameters
ace1193d0 : Update AndroidX @Discouraged retention to CLASS
d92f3b739 : Strip java.lang. prefixes consistently when not legacy
2ca547c3e : Add FileFormat.stripJavaLangPrefix
fd2c7f77f : Refactor SignatureInputOutputTest to make it easier to add more tests
f92450cc3 : Fix tests that do not configure API surfaces correctly
c84d5c4a4 : Add SelectableItem.selectedApiVariants
8469e3ee5 : Add ApiVariantSet.clear()
c4376a964 : Add Codebase.apiSurfaces
909efb263 : Add ApiVariantSet and related classes
1aa021bd1 : Add ApiSurfaces and configure in Options
31d4dc7d6 : Remove stripJavaLangPrefix(...) method
47865e836 : Add classpath to project description
fb4730aa5 : Add stripJavaLangPrefix to TypeItem.toTypeString(...)
eac360ac2 : Add tests to show java.lang. prefix stripping edge cases
50b75a80b : Simplify handling of outermost ClassTypeItem string
087736162 : Dedup the type string generation code
34054589c : Support testing the internal state of metalava main command
c98976d6a : Add API lint to flag value class definitions
fb63d0e36 : Move Item.emit to SelectableItem
e752c5d73 : Showability is only valid on SelectableItem
58b353e18 : Stop marking parameters as being part of the API
462b2db7d : Move Item.variantSelectors and related members to SelectableItem
970c246a2 : Support visiting SelectableItems in BaseItemVisitor
b676dff66 : Use SelectableItem instead of Item where necessary in DocAnalyzer
f3ee97dcd : Property actualItemToSnapshot only makes sense on SelectableItem
b4f6934f9 : Add DefaultSelectableItem
856e37a1b : Avoid visiting ParameterItems unnecessarily in BaseItemVisitor
e00705a4f : Pass the whole TestFixture through to test runners
15399c0f1 : Dedup code for calling visitItem/afterVisitItem
813d7cb8f : Change FilterPredicate = Predicate<SelectableItem>
27d1538f6 : Tweak report filtering in ApiLint to handle SelectableItem
2cf8bc816 : Replace Predicate<Item> with new FilterPredicate type alias
8c3846438 : Rename FilterPredicate to MatchOverridingMethodPredicate
da97bf7bc : Switch ItemTree to store SelectableItem instead of Item
c35c9cac8 : Stop including ParameterItems in ItemTree
ee7c6910c : Correct removed...Item(...) "from" parameter types for non-packages
41132e255 : Correct removedPackageItem(...) "from" parameter type
d4fbae98c : Disambiguate ComparisonVisitor added/removed/compare method names
179cafb38 : Stop tracking added/removed parameters in ComparisonVisitor
3105252b7 : Remove checking for AnnotationItem in ComparisonVisitor
593fa3cc1 : Stop TypeParameterItem extending Item
3ef88e45b : Remove itemLanguage from DefaultTypeParameterItem
81f4889f0 : Rename PsiTypeParameterItem.psiClass to psiTypeParameter
ed1431f71 : Stop calling filter reference on TypeParameterItem
c242ab42a : Update metalava to lint version 31.8.0-alpha08
59497d288 : Always provide Reporter to Codebase
53327aa8c : Update metalava to use lint version 31.8.0-alpha04
5ea9d6943 : Filter value class accessors for K1
b0c0d928a : Create property even without getters
adabf1c36 : Include property annotations even when the property has a backing field
839e4abcc : Refactor psi modifier flags computation
9bef112e8 : Create properties based on declared elements instead of getters
f16aead7a : Track value class primary constructors in K2
ca49e4aa9 : Filter out no-args value class primary constructor
5bc553922 : Add optionalReporter to Codebase.Config
f7bb2496d : Make BasicReporter a little simpler to use
c23c2226d : Wrap AnnotationManager in Codebase.Config
286b95d28 : Add SignatureFile.fromText(...)
3c1418dd4 : Add SignatureFile.fromStream(...)
61f380c35 : Make SignatureFile abstract with a private implementation
0b576f990 : Move responsibility for reading file to SignatureFile
7228b4d5e : Avoid creating SignatureFile directly
1d4011447 : Avoid ambiguity around phrase "current API surface"
ec6c7701b : Remove unused PreviouslyReleasedApi.files
b26366522 : Remove special support for single SignatureFile
7662e566c : Pull SignatureFileLoader creation out of SignatureFileCache
8882eee6a : Remove unused ApiFile.parseApi(...) entry point
df8cf7525 : Remove unnecessary defaults in PsiSourceParser constructor
0070ca9dc : Remove MutableCodebase, use DefaultCodebase instead
0e7aace1a : Combine AbstractItem and DefaultItem
7ba69e2a1 : Move codebase property from DefaultItem to AbstractItem
b6486fb66 : Add primary constructor to snapshots
7046c4e77 : Filter annotations applied to multiple items in location test
8cc8f9389 : Add associated property items to snapshots
cb34f8c8b : Cleanup IDE reported issues in Options.kt
d94a45fde : Allow DriverTest.check(...extraArguments...) to take varargs array
4d31e7752 : Make actualItemToSnapshot private
b59780bbf : Use snapshot when generating api-versions.xml for reverted items
9bd26ab3e : Use snapshot when generating stubs for reverted items
20fe9ecee : Use snapshot when generating signatures for reverted items
ed0a659e2 : Use CodebaseFragment in addApisFromCodebase(...)
bd1119572 : Use FilteringApiVisitor in addApisFromCodebase(...)
f2f718b9e : Propagate @Deprecated to class members in removed files
f4f36584a : Rename DeprecatedTestCase to DeprecatedTest
172b1700e : Use ApiFilters in ApiVisitor primary constructor
dd467b645 : Use ApiFilters in FilteringApiVisitor
52c79c785 : Use ApiFilters in JDiffXmlWriter.createFilteringVisitor(...)
5d660c62f : Add ApiType.getNonElidingApiFilters() for use in ApiLint
77d824858 : Restrict visibility of ApiVisitor properties
3f22e601f : Extract ApiVisitor.defaultEmitFilter(ApiPredicate.Config)
4c6576ad1 : Add ApiFilters and use it in ApiVisitor secondary constructor
d76eb884b : Remove workaround for incorrect super classes in legacy jars
d8cc09544 : Stop ordering callables by source order
01b414277 : Remove unnecessary ApiVisitor constructor parameters
b0d816a2b : Stop preserving class nesting in snapshot
24b0ecad1 : Do not emit snapshot classes added when resolving references
90df197c5 : Improve error message when failing to snapshot a class
d63ad995a : Add a package private constructor for file facade class
07275f9eb : Add test for file facade class constructors
7e40e9013 : Defer CodebaseFragment snapshot creation until needed
0971d4a98 : Make CodebaseFragment an abstract class
054c4386b : Make CodebaseFragment constructor private
97c1b8eee : Add separate visitor for handling references within a snapshot
b26fc713b : Support fully qualifying snapshot of PsiItemDocumentation
45c09dbfb : Fix detection of doc comments in Kotlin
569174cd6 : Add testsuite tests for ItemDocumentation
58ead2a79 : Remove workaround for invalid @attr ref R.styleable#<field> refs
6d21aae71 : Replicate doclava behavior when fully qualifying references
c48824cde : Remove unused workaround controlled by PREPEND_LOCAL_CLASS
2768cefca : Resolve classes when checking imports should be included
c059083db : Add test to show getImports(...) methods ignore unresolved classes
bb51ad6ff : Enforce total ordering when picking best stub constructor
be7e932dd : Rewrite StubConstructorManager.pickBest to use Comparator
79e573506 : Add tests to show stub constructor selection is order dependent
1bdf4b0be : Prevent @deprecatedSince from causing item to be deprecated
cca03b72d : Add test to show that @deprecatedSince is treated as @deprecated
0cf37226f : Clean up deprecation tests in CommonClassItemTest
4d59fe37a : Combine sourceFile() and getSourceFile()
496a7c0e3 : Replace sole use of Item.sourceFile() with annotation location
b0dbc7321 : Add test for filtering classes using a package filter
3a0ba022a : Snapshot AnnotationItem.info
da6bb8ee7 : Delegate AnnotationItem.targets to AnnotationInfo
e7353d22c : Separate NoOpAnnotationInfo from AnnotationInfo
998ec666d : Remove ApiVisitor.Config, replace with ApiPredicate.Config
a2cdc8c70 : Remove unused ApiVisitor.config primary constructor parameter
7a4cad17f : Make AnnotationItem.targets consistent across models
1fb2286cb : Add test for computing source path annotation targets
09c31944a : Use DefaultAnnotationManager as default in testsuite
02d3c7bdd : Add TestFixture to encapsulate annotationManager
38e22521c : Pass annotationManager through to ModelSuiteRunner
7d4aadb33 : Remove package filter check from ApiVisitor.include(...)
56deb5609 : Add apiPackages to SourceParser.parseSources(...)
ec9f692dd : Rename Options.stubPackages to apiPackages
e0aec4cb1 : Move PackageFilter into metalava-model
d0c8283d8 : Move DefaultAnnotationManager into ...model.annotation package
9fab448b1 : Move various annotation constants to metalava-model
c04a120e4 : Replace actualItem with actualItemToSnapshot
a6aa53d9f : Convert --stub-packages processing to Clikt
cc62a641b : Cleanup PackageFilter in preparation for move
55681a57a : Rename NonEmittableDelegatingVisitor to EmittableDelegatingVisitor
68f31a9cc : Update metalava's android lint version to 31.8.0-alpha01
869b61d0d : Remove CodebaseSnapshotTaker.typeItemFactory/Stack
9a685e39f : Move TypeParameterList.snapshot(...) to SnapshotTypeItemFactoryContext
f9bf5c4db : Use classTypeItemFactory instead of typeItemFactory
f9e59da54 : Add SnapshotTypeItemFactoryContext
4e71a6ae3 : Use CodebaseSnapshotTaker.inScope for fields and properties
222c5c9f4 : Return value from CodebaseSnapshotTaker.inScope(...)
1696185c3 : Remove CodebaseSnapshotTaker.currentClass
51f9fadbd : Remove CodebaseSnapshotTaker.currentPackage
1d8209d4a : Rename CodebaseSnapshotTaker.codebase to snapshotCodebase
0aeb8d133 : Add test that shows snapshot does not resolve nested classes
1b8380b41 : Fix handling of new class in SignatureToJDiffCommand
712a0c21b : Add SignatureToJDiff tests for adding new interface
7e04b9eab : Revert "Propagate class emit to package emit for text codebase"
6602a23da : Propagate class emit to package emit for text codebase
17b328c67 : Fix typos found when merging into Google3
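Several of the metalava changes above (d92f3b739, 2ca547c3e, eac360ac2) deal with stripping `java.lang.` prefixes from type strings consistently. As a rough illustration of the edge case being guarded against (this is a hypothetical helper, assuming nothing about metalava's actual Kotlin implementation):

```python
def strip_java_lang(type_string: str) -> str:
    """Strip a leading 'java.lang.' package prefix from a type string,
    but leave subpackages such as 'java.lang.annotation.' untouched.

    Illustrative sketch only; metalava's real logic lives in its
    Kotlin TypeItem.toTypeString(...) code.
    """
    prefix = "java.lang."
    if type_string.startswith(prefix):
        rest = type_string[len(prefix):]
        # Only strip when the remainder starts with a class name
        # (capitalized), not another package segment like
        # 'annotation.Target'.
        first = rest.split(".", 1)[0]
        if first and first[0].isupper():
            return rest
    return type_string
```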

+- Project: platform/tools/netsim

025bc3f7 : Remove C++ gRPC server code completely
8c5acd40 : Revert "Declare NETSIMD_RUST_LOG and set pica log level to debug"
0a896208 : Version Bump to 0.3.37
70b989c4 : Add framework for Wi-Fi encryption and decryption
7f8b4b9b : Reimplement IsNetsimdAlive in Rust
6379d2b9 : Add mod pattern_vec to support proxy bypass lists.
f9d5e7c1 : Add dns_manager to track IpAddr -> domain names.
66c3a6c5 : Adjust WSAPoll behavior on Windows to reduce CPU usage
98fcd5d3 : Crate for reading and writing pcap files.
ab87642d : Remove src/core and rust_grpc flag
b4edaddd : Add netsim-daemon to RunTest task for Linux
8666e33d : Prefix vsock with underscore to fix clippy warnings
794d92dd : Improve hostapd-rs documentation for cargo doc
efa6bef5 : Reliably identify path and only set necessary vars
e9e62195 : Utilize vsock and no_cli_ui for rust_grpc
5726f49e : Refactor: Split wifi::hostapd module into separate files
a5434d34 : Remove unused environment variables in cargo_test.cmd
f0904a7d : Run cargo clippy in RunTest task for Linux
5dd6ec9d : Configure Cargo build environment using cargo_env.sh
ac9687af : Log patched fields when patching device
11d49964 : Add comments for libslirp-rs
75f046a4 : Version Bump to 0.3.36
8fcbc709 : Fix a clippy format warning
e21b9fdd : Fix comment test
ad0514ff : Add comments
0566c3cd : Remove config.rs entirely (dev is only used in http_server)
db12fa0a : Remove some clippy warnings.
23cffca0 : Remove use of static Runtime.
9beecff7 : Remove config.rs pcap since it was set and read in only one place.
392403ad : Use mutex to avoid test conflict accessing env var
176a10dd : nest PatchDeviceFields for PatchDeviceRequest proto
847c2a0a : Migrate remaining packet defs to netsim-packets
9a23081a : Add DNS response parser for http proxy.
f3e92810 : Windows runtest task uses prebuilt cargo and enabled on buildbots
e09b31d7 : Temporarily disable udp test on windows
c4591add : Remove exclude-filter for CtsBluetoothTestCases
ff9a8450 : Version Bump to 0.3.35
5cc7641d : Support connecting to HTTP proxy with basic authentication
473c885b : Refactor: Remove qemu slirp and hostapd
13ac6dfc : Improve clarity of Proxy error messages
5c87dd24 : Hostapd - Clean up C args and streamline test
6b37abc6 : Support HTTP proxy
9ebecb84 : Enable rust_grpc
b806659f : Reland "Enhance hostapd tests by checking beacon frame"
b125e605 : rust-grpc handle error in grpc_server::server::start
6d370db3 : Enable setting http_proxy via environment variable
fcb158c2 : Remove unnecessary comments in wifi module
d9a996d1 : Replace ProxyConfigError with http_proxy::Error
d920ee16 : Version Bump to 0.3.34
c5af65d1 : Add Connector for HTTP proxy
aa34db03 : Revert "Enhance hostapd tests by checking beacon frame"
97981597 : Introduce ProxyManager struct in http-proxy
f063dca3 : Implement try_connect and remove callbacks in libslirp-rs
e825073e : Enhance hostapd tests by checking beacon frame
7ce7830a : Enable http-proxy and netsim-common Rust unit tests
4c435b2e : Update bindings.rs for libslirp-rs
fc37f6c6 : Temporarily disable PyTests on Windows
2d4f1c27 : Update existing path prefixes for netsim-ui
472fad0e : Add timeout on integration test.
e20d5ba7 : Enable configuring host DNS in Rust slirp
dcd79450 : Version Bump to 0.3.33
00b31331 : Refactor: Split wifi::libslirp module into separate files
5de8d85f : Use opaque to pass CallbackContext.
4e21dba1 : Migrate wifi packet definitions to packets crate
29533d27 : Add unit test for sockaddr_in and sockaddr_in6 conversions to SocketAddrV4 and SocketAddrV6.
80f0fad6 : Reland "Enable Rust slirp by default"
0c917b17 : Fix: SocketAddrV4 to sockaddr_in conversion
8ba61956 : Support IPv6 for http, socket, rust-grpc server
e40f0cd0 : Environment agnostic implementation for binding grpc server to IPv4 and/or IPv6
ad195d34 : Reduce netsimd CPU usage by setting poll timeout
80ba8050 : Fix new argument
d6b164f5 : Test UDP packets through libslirp using etherparse
4f3979b7 : Support ProxyConnect callback through the libslirp thread.
8d3ed4fa : Remove dependencies on the 1-variant fallback
f316476a : Version Bump to 0.3.32
129721d2 : Remove warning message about rust resolver in workspace
734bd618 : Add conversion between SocketAddr and sockaddr_storage for http proxy.
30880e42 : Revert^3 "Enable Rust slirp by default"
0d0fd7ce : set_capture, set_capture_all in netsim grpc client
c578f18b : Use localhost interface for socket and grpc server
25cb98c8 : Revert^2 "Enable Rust slirp by default"
e1b0e8ea : Format code
24a9f32a : Fix rust slirp timer management deadlock
e2dbad2f : Version Bump to 0.3.31
262506af : Revert "Temporarily disable PyTest for Mac Intel"
d3099438 : Run RunTest task in buildbots for Linux and Mac
4777f066 : Revert "Enable Rust slirp by default"
383665a2 : Update create_pipe to support ipv6
7479488b : Enable Rust hostapd by default
f6cedff9 : Implement initial hostapd set_ssid()
16e28945 : Temporarily disable PyTest for Mac Intel
6512d858 : Version Bump to 0.3.30
c3315968 : Implement hostapd terminate()
40707bac : Fix setting PATH in cargo_test.cmd
0cd21e86 : Add basic hostapd integration test & build rules
4fee035d : Fix rust-grpc GetCapture generating corrupted pcap files
4429bf70 : rust-grpc propagates bt_properties when adding chip
5d54ef44 : Version Bump to 0.3.29
847ace00 : Ensure reliable write to hostapd
d72a46a4 : Set selectedId for dropped devices [web ui]
f675139a : Declare NETSIMD_RUST_LOG and set pica log level to debug
392365e2 : Make socket_pair fd blocking before passing to hostapd
d71818e1 : Version Bump to 0.3.28
4d0b30ff : Enable Rust slirp by default
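Commit 6379d2b9 above adds a `pattern_vec` module for proxy bypass lists. netsim's real implementation is Rust and its exact pattern grammar is not shown here, so treat the wildcard/CIDR handling in this Python sketch as assumptions about how such a bypass list typically works:

```python
import fnmatch
import ipaddress

def bypasses_proxy(host: str, patterns: list[str]) -> bool:
    """Return True if `host` matches any bypass pattern (hypothetical
    grammar: glob hostnames like '*.example.com' or CIDR blocks)."""
    for pattern in patterns:
        # CIDR patterns match literal IP hosts; both parses can raise
        # ValueError for hostnames or glob patterns, which we skip.
        try:
            if ipaddress.ip_address(host) in ipaddress.ip_network(pattern, strict=False):
                return True
        except ValueError:
            pass
        # Glob patterns match hostnames.
        if fnmatch.fnmatch(host, pattern):
            return True
    return False
```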

+- Project: tools/platform-compat

4e0d18a : Fix UnusedVariable errorprone issues
bfed9e7 : Don't write ChangeId javadoc into config XML. Saves ~170kb of RAM on all Android devices.

+- Project: platform/tools/repohooks

17bc37e : Honor combine_stdout_stderr when the result is created from an exception.
3a454ed : Populate an error message if rustfmt fails with no output.
7278e0d : repo hooks: run builtins hooks prior to custom
aad4d88 : repohooks: Include mk files in aosp_license check
e745062 : Accept "Fix: " as a bug line
2036f46 : repohooks: Support --exclude-dirs in aosp_license
82ba8f2 : repohooks: Add AOSP license check
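Commit e745062 above accepts "Fix: " as a bug line in commit messages. The sketch below illustrates the kind of check involved; the exact set of accepted prefixes and value formats is an assumption, not repohooks' actual rule:

```python
import re

# Hypothetical prefix set; 'Fix:' is the one the commit above adds
# alongside the long-standing 'Bug:' line.
BUG_LINE_RE = re.compile(r"^(Bug|Fix):\s*(None|\d+)\s*$", re.MULTILINE)

def has_bug_line(commit_message: str) -> bool:
    """Return True if the commit message contains an accepted bug line."""
    return BUG_LINE_RE.search(commit_message) is not None
```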

+- Project: platform/tools/security

8e25e4a : [log] Log algorithms when key/signature algorithms are different
81a9f53 : Check signature on root in UDS certificate chain
1fb6a47 : Simplify wrappers for opaque boxed types
34c75f0 : Remove mitchp@ from OWNERS file
69011f2 : Ensure no private keys in CSR
157cc10 : have one implementation of deviceSuffix
fee784e : update sanitizer owners: remove mitchp, add pcc
0642842 : Expose more from hwtrust for DICE chain validation
894253f : Make some PublicKey COSE operations available to lib users
9f047ca : Make libhwtrust available to /product modules
95e05fe : adding test data to hwtrust
61444b2 : Fix expired test data in hwtrust
99fbaa0 : Check the certificate type according to the rkp instance in session
cb05e77 : [hwtrust] Add argument rkp-instance to `hwtrust csr`
42f05c3 : Fix expired test data in hwtrust
0056117 : Add aliceywang@ to hwtrust/OWNERS
5bc7311 : Update vsr profile ranges in hwtrust
d801183 : Validate RKP VM markers in the DICE chain
8a05120 : Add support to hwtrust for validating UdsCerts

+- Project: platform/tools/test/connectivity

8044e4367 : [WifiManagerTest] Add test case for reboot with bluetooth and location off
175d4f35b : [WifiSoftApTest] Add a test case for 2G hotspot when lte coex applied
2d5532012 : [Wifi] Add edge channel 64 and 144 test cases.
a6dfa1c1e : Catch connection.Error in WifiPreTest.setup_openwrt_connection()

+- Project: platform/tools/test/mobly_extensions

46021da : results_uploader v0.7.1 release notes
29d1925 : [results_uploader] Properly report target status for flaky runs
f122e38 : results_uploader v0.7 release notes
b06293a : [results_uploader] Add initial setup prompt in results_uploader

+- Project: platform/tools/tradefederation/contrib

dce5b96 : Create a target preparer to auto generate trace config

+- Project: platform/tools/tradefederation

15296607e : Add another test_module_config test to the allowed large list
f4dc7623b : If testZipRegexSet contains null, the discovery results are likely corrupted. Throw a more specific exception with an appropriate exit code and prevent the caller from processing a regex list that contains a null.
95cf22d60 : Due to the matching syntax in BuildWhatYouNeed we need a few matches
faa41199f : Update the error signature to more generic about resolve host failure
867277970 : Add the logic to check disk space before app installation.
f54c4ce16 : Add signing boot image key in add-hash-footer step.
d8990b25d : Log artifacts when their transfer fails in `ArtRunTest`.
c10add70a : Port resultDB protos to Tradefed
858ea1b4d : Apply the full exemption normalization to device image
8f66d2158 : Prevent runtime exception from stopping events
d93edc407 : Support partial loading of multi-device configs
235ef1eab : Add an error signature to track dns resolve failure in fetch cvd
56e6ef1af : Add additional fastboot command support in GkiDeviceFlashPreparer.
4ee1b5c4b : [curl->java.net.http] migrate GET /operations/{id}
9fc0c7a4c : Exercise the failed-transfer logic in `ArtRunTestTest`.
210e37ad4 : Use `assertThrows` to test for exceptions in `ArtRunTestTest`.
ebd427398 : Update IDiscoverDependencies to return a set of zip regexes instead of a single string.
876867527 : Update IDiscoverDependencies to return a set of zip regexes instead of a single string.
f0a84775f : Remove log start delay
ad56a945d : Add cas_download_error_build_id to InvocationMetricLogger
3248d6232 : Update error message to include aapt version
19acf2a70 : Report skipped module as sparse so we can batch report them
807e515ed : Ensure unchanged skipped module do not look up cache
9c2fdddaa : Honor aapt version in TestAppInstallSetup
81c01fd5f : Dedicate the option to invocation reporting of skipped modules
df59b4452 : Remove the os platform as it doesn't matter for results
a51b5ed9e : Reduce waiting time to poll device status via host orchestrator
f4be434ff : Fix baseline config for disable_usb_app_verification
91adbc928 : Revert "Per recommendation, set platform in Action directly"
066800c68 : Add support for --sandbox-tests-zips option in TestZipDiscoveryExecutor.
0e7794801 : Add an error signature for bluetooth_failed_to_stop
624e2cbf4 : Per recommendation, set platform in Action directly
87434cf49 : Only stop app process and skip app uninstallation if incremental setup is turned on.
6ab88fdfd : Allow applying cache reporting for presubmit only
06acb2d3b : Have a full trace of the skipInvocation trace
23bd6a91e : Deprecate FileUtil.findDirsUnder
78204d61b : Use file hashing comparison method to decide whether to install APKs.
99240325e : properly respect ignored changes in common location
e40ad3dea : Add an error signature for cf_ramdisk_mount_failure
c6b4ac3f6 : Use alternative mechanism for device wiping
c39c4b22d : Fix framework detection in reboot and command used
a373396be : Index tradefed.jar for caching results
73429593d : Also support shared_libs in non-minimal setup
7add70d04 : Avoid thread contention in EventParsingThread
a7a068853 : Add run-test-suite to TestDiscoveryInvoker, and throw exception to indicate it is not implemented.
91cf17849 : Add option to run a specific test suite built by test_suite rule.
914bc9c28 : Allow caching in presubmit
47d8b4748 : Create test record file in a shared directory for Cloud ATE if IS_CLOUD_ATE is set.
a05e82296 : Use the new get module directory for better performance
0e6ef546f : Track some operations for look up of cache
84767ca7a : Revert^2 "Remove any non-soong related robo cruft from tradefed, as the config was able to be moved in to robolectric plugs (though not as cleanly as I would have liked ( http://aosp/3312231 )."
8a04c02ae : Limit discovery considered to existing modules
3fcfdc16d : Carry java to isolatedHostTest subprocess
fd46521fb : Reland "MicrodroidBuilder: deprecate gki() and introduce os()"
e06b98aed : Revert "MicrodroidBuilder: deprecate gki() and introduce os()"
2933449e2 : MicrodroidBuilder: deprecate gki() and introduce os()
fd5e5396b : Recursively chown /data directory and subdirectories
6d235bc0d : Adds an option to override the TCP connection wait time.
bb5fce595 : Reduce classpath to jar files
8173b0513 : Handled UserInterruptException in Console.java class.
56c83024e : Always move baseline if not opted out
88c1c4670 : merge test mapping zip when it's extracted already
97f29a9bd : Parse test records asynchronously
84315882e : Fallback to tcp:serial fastboot serial as a last resort
afdb7d296 : Support local incremental setup specification in Atest.
5d9ff10aa : Add discover test zip capability to TradeFed Test Observatory
58df35e32 : Complete the events when a method times out
a146df8aa : stop reporting granular results in proto for caching
bb3976379 : improve error detection for OTA flasher
23e8c35c7 : Use relative paths for artifacts in content uploader
41e0ba5fb : Add discover test zip capability to TradeFed Test Observatory
9e315c111 : Close client socket connection at the end
27d24799e : Combine the setting clear_lock_screen and go_back_home
3facb9026 : Enable the possibility to do local incremental setup.
a314f2b18 : prioritize moduleDir from config while searching
b5507f77c : Remove dependencies on the 1-variant fallback
bcc2cf3cb : Clean up original cache logic in RunUtil
b58b53f47 : Extend IDiscoverDependencies interface to provide test artifact info
6c9a3a608 : Extend IDiscoverDependencies interface to provide test artifact info
3f185aaa3 : Revert "Remove any non-soong related robo cruft from tradefed, as the config was able to be moved in to robolectric plugs (though not as cleanly as I would have liked ( http://aosp/3312231 )."
b918f38ac : Remove any non-soong related robo cruft from tradefed, as the config was able to be moved in to robolectric plugs (though not as cleanly as I would have liked ( http://aosp/3312231 ).
db73d503f : Use cmd wifi by default
f1a800014 : Allow only one binary to exist
4db104f70 : Clean up the localized per-runner cache
059c2a571 : Spy build_provider object as well as part of discovery
bed950161 : Track more independently origin and destination
91911ed50 : Track upload/download metrics of cache
496ff4f6e : Add CONNDIAG log data type and option to use it
6d36b4d86 : Stage file in moduleDir for SearchArtifactUtil
50aec1fb3 : Allow first bootloader reboot to be direct instead of SVC
a45a7d0bf : PtsBotTest: Collect Android Snoop logs between tests
4377fcc07 : Catch the exception caused when the client cancels the grpc call during onNext.
33e4fbd66 : Try extracting bootloader to update it from userspace
bee4f7f2a : Change the logic of PackageInfo attribute parsing.
9da2a8637 : Add options to use shared libs in per module folder
877385e2d : Add cas_download_error_files metric to InvocationMetricLogger
1513c156e : Allow update of bootloader in userspace for AB devices
9a0dac1b3 : Add option to delete snapshot
684e88b72 : Only reset isolation bits if we ran the module
98ebd60b9 : Compute hash of common location for tests
f035bc834 : Move wipe device option before flash GKI boot image.
d0f534b99 : Reland Migrate doesFileExist to executeShellV2
582bb00e4 : Make revert-snapshot error surface directly
dcf8e052a : Run dex2oat but skip Checker in ArtRunTest for code coverage
c69795dfb : Revert "Migrate doesFileExist to executeShellV2"
f210e5096 : Stop linking runtime dependencies in IsolatedHostTest
dbd78bece : Add fastboot to the execution environment for ExecutableHostTest
0ae219d0c : Stop linking runtime deps into module folder
08b6d20a6 : Extract Sandbox logic of match extra zips as build targets to a public interface
eac15f918 : Add dump_dt to MicrodroidBuilder
bf93a5fab : Extract Sandbox logic of match extra zips as build targets to a public interface
401f9dc50 : Avoid writing the bytecode from direct python execution
50c417698 : Migrate doesFileExist to executeShellV2
12fef2718 : use cmd wifi to disconnect from network
3b847f2b8 : Handle aborted test analysis closer to evaluation
0b613a452 : Protect against NPE if the currentBuild isn't set
995eccdb4 : Revert "Use prioritize-host-config directly for search"
4f01354fe : Support the new abtd trigger name
a588ae2da : Use prioritize-host-config directly for search
3c88c8a2b : Skip reporting module config while applying cache
904514f30 : Add more unit tests for FastbootCommandPreparer.
14fe06914 : mark cached modules as sparse for batched processing
5d3ad5c45 : Handle extra file name replacement with the runtime file path for the fastboot flashing command, and add unit tests.
22bdb0659 : PtsBotTest: Fix misaligned run start -> end pairs
9069c9b4e : Add metadata for moduleDir, prioritize-host-config
f2b706158 : Store moduleDir, prioritizeHostConfig as metadata
0385239d4 : javatests: tradefed: GTestResultParserTest for rust test names
14cf24726 : Log image hash so we can investigate them offline
2555be988 : Gate use of the user id on it being valid
dbbc28c23 : Add ktap parser resolution option for KernelTargetTest
c596c0ef2 : Add error identifier to config error
467ac447c : test_framework: tradefed: ExecutableTargetTest: enable GTestResultParser
96f5b1129 : Remove options for include/exclude filter files
14d37e3b7 : Add a few guardrails to incremental update to limit churn
c2ff5be8f : move multi-user and enterprise annotations to the modules
099eca6a2 : Move filter to moduleDir when available
f97e30e2f : Revert "Create filter in the module scope"
c03e89495 : Revert "Reland Searching for filters with more safety"
f603bb14a : Reland Searching for filters with more safety
06ff868e5 : Ensure we backfill filters in the file filter if available
36c36b863 : skip setupwizard dismissal check for native device
a83d2f452 : Revert "Support searching for the filter file if not absolute path"
2aaf598ba : Handle abort analysis closer to where it occurs
e82ed25e2 : Support searching for the filter file if not absolute path
8e66249a4 : Collect fetch.log from oxygen
357b1508c : Create filter in the module scope
4c8d8a9ee : Avoid flashing if device already at right version
dae5a724b : Avoid double logging filter files
b7dcd44e4 : Add a setter for new incremental flow for rollout
cd190b346 : Report if the first bootloader reboot fails
a24c7d117 : Fixes for errorprone update
7371582ff : Add an error code to indicate host vm out of disk space for snapshot
0d2ab5168 : Revert "Migrate PushFilePreparer to use search util"
07b26f13e : Fix UnusedVariable errorprone issues
453a2a6b4 : Switch to Linked map/set for consistency
53d98d386 : Migrate PushFilePreparer to use search util
9e54f0c3d : Encode logics to pull cvd logs in a common location.
f49aed79c : Avoid using content provider on user 0
6daddf20b : Avoid actually storing things in the @option field
5faeef066 : Avoid duplicating filters using Set
89d33816f : Allow GCSFileDownloaderBase to retry timeout error
f2c93bd1e : Give subprocess exception a default error
0b9a2265a : Switch a bunch of runner to Linked set/map
a6426843e : Add time unit in the key. ATI: https://android-build.corp.google.com/builds/abtd/run/L28600030006595393
2c79a12b6 : Centralize WORK_NODE usage and add the new name
64ac54c5c : Increase time to wait for LHP to be connected and limit the memory usage of LHP.
17a163401 : Track module-name in hit and miss cache
92d2d28d7 : Avoid creating empty filters
098707703 : Include sharded tests when dumping for intra-module-sharding
3b5263a06 : Limit the maximum number of test cases in memory
5bb1915bb : Ensure we don't create and set empty filter files
43394ad40 : Add a flag to allow not swallowing runtime errors. This adds a guard to the fix originally proposed at https://android-review.git.corp.google.com/c/platform/tools/tradefederation/+/2967543
d4e23e75a : fix an issue where skipped param is passed as empty string
ed96cdc4f : ExecutableTargetTest: Add method to abort when device or root is lost
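
One recurring theme in the Tradefed list is avoiding redundant work: 78204d61b, for instance, decides whether to install an APK by comparing file hashes. Tradefed itself is Java; this Python sketch, with hypothetical helper names, only illustrates the underlying idea of skipping reinstall when the local and on-device copies hash identically:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Digest a file's contents; chunked reads keep memory bounded for large APKs."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_install(local_apk: Path, device_copy: Path) -> bool:
    """Reinstall only when the two copies hash differently (hypothetical helper)."""
    return sha256(local_apk) != sha256(device_copy)
```

Hashing both sides is cheap relative to pushing and installing a large APK, which is the trade-off such a change exploits.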

+- Project: platform/tools/tradefederation/prebuilts

f22bceb7 : Snap for 12770256 from 11f241d9b6e35be577ba9a62835d4cb2d2c21d60 to 25Q1-release

+- Project: platform/tools/treble

67698d1 : Prevent exception when removing nonexistent file from a ramdisk fragment.
9a67f56 : Modify Merge script to force include
92712a4 : Modify script to include staging and build super.img
77a98d3 : Pass env variable HOME to re-client for detecting valid credentials
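
The first treble change, 67698d1, guards against an exception when removing a file that is absent from a ramdisk fragment. A generic sketch of that defensive pattern (a hypothetical helper, not the actual treble script):

```python
from pathlib import Path

def remove_if_present(path: Path) -> bool:
    """Delete the file if it exists; report whether anything was removed."""
    try:
        path.unlink()
        return True
    except FileNotFoundError:
        # A file missing from the fragment is not an error for this cleanup step.
        return False
```

On Python 3.8+ the same effect is available via `path.unlink(missing_ok=True)`, at the cost of losing the removed/not-removed return value.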

+- Project: trusty/device/arm/generic-arm64

5bdb134 : linux: Remove arm_ffa module from build
9c1a527 : project/qemu: Remove firmware.android.dts
0113667 : make: Disable Rust CFI for clang-r498229b
e32e028 : hafnium: Configure managed exit as an IRQ
2cfea4f : qemu: Add new qemu SPD compatibility projects
9b266e2 : project/qemu: Capture and dump serial console for boot tests
9e89500 : Add dirgroup for trusty genrule
df25110 : hafnium: Remove boot information entries from manifest
7c44086 : qemu-atf: Set path to dtc in prebuilts
9495498 : qemu: Fix qemu package generation
0cc2499 : linux: Replace trusty_test fragment with system_heap

+- Project: trusty/device/x86/generic-x86_64

8e745f1 : Configure Trusty VM as non-secure on Cuttlefish
513044c : Add dirgroup for trusty genrule
1421a5b : project: generic-x86_64: Enable LTO and CFI

+- Project: trusty/host/common

f397a61 : replace_ramdisk_modules: Delete missing modules

+- Project: trusty/lk/trusty

dcdb8d9 : app: stdcalltest: Add FP/SIMD register clobber test
eb5c04c : trusty: mmutest: fix minor memory leaks
b318c40 : Revert^2 "unittest: count test for each test suite"
3dbe6f6 : Revert "lib: arm_ffa: Call FFA_VERSION before all other calls"
70f5893 : lib: arm_ffa: Call FFA_VERSION before all other calls
2e7f1f8 : Revert "unittest: count test for each test suite"
1cab25c : unittest: count test for each test suite
45e3c44 : lib: sm: Prevent restarts from different clients
553f644 : lib/dtb_service: Remove dependency on binder
a6e6ee2 : lib: arm_ffa: Register secondary core entry point
89c638f : trusty: sm: Delineate critical section in idle thread
734432d : trusty: sm: Reschedule idle thread if requested by interrupt handler
5289c7b : trusty: sm: Fix up function prototype for sm_intc_enable_interrupts()
7be2eed : Revert^2 "platform: generic-arm64: Add vsock-rust"
39d616a : platform: generic-arm64: Use FFA_CONSOLE_LOG
024f021 : lib: arm_ffa: Support FFA_CONSOLE_LOG
01a5a66 : lib: smc: Add return code for unknown SMC
476f6ad : lib: arm_ffa: Support direct message response
46f42f0 : lib: arm_ffa: Support for FFA_MSG_WAIT
05a1668 : lib: arm_ffa: Support FFA_ERROR
373b2e4 : platform: generic-arm64: Select Hafnium timer irq
a6ee31a : platform: generic-arm64: Skip GIC init for Hafnium
b9e2e03 : Revert "platform: generic-arm64: Add vsock-rust"
39a7c8a : platform: generic-arm64: Add vsock-rust
a399a59 : platform: generic-arm64: Fix assert panic on unaligned FDT size
638a6c5 : Add dirgroup for trusty genrule
7a83102 : trusty: prepare_dma: allow pmem parameter to be omitted.
7be352e : lib: trusty: trusty_app: Fix prg_hdr->p_memsz usage
5bef880 : lib: ubsan: Fix clang 18 build
17e76ce : Add -Wno-missing-field-initializers to generic_flags
3889910 : unittest: add duration to the test result report
137a6e4 : platform: generic-x86_64: Use FIND_CRATE to locate dependencies
f0bc279 : trusty: mmutest: fix pmm_reserve test
8134858 : Collect FAR, ELR, send them to be potentially obfuscated

+- Project: trusty/app/authmgr

dc3ac58 : Initial empty repository

+- Project: trusty/app/avb

49bb41e : Add dirgroup for trusty genrule
81e8963 : test: rules.mk: avb_test host test add unittest dep

+- Project: trusty/app/cast-auth

1dfcec3 : Add dirgroup for trusty genrule

+- Project: trusty/app/confirmationui

21157a4 : Add dirgroup for trusty genrule

+- Project: trusty/app/gatekeeper

9f451b8 : Add dirgroup for trusty genrule

+- Project: trusty/app/keymaster

2c803e3 : Support KeyMint v4
0984833 : Cope with new KeyMint tag
ba7a7ed : Report Trusty KeyMint as KmVersion::KEYMINT_4
90018b0 : Add dirgroup for trusty genrule
7d1f804 : host_unittest: rule.mk add unittest host lib dependency

+- Project: trusty/app/keymint

47c1f78 : rust: Update makefile rule to locate protobuf crate
4dbe97b : Set attestation IDs to None in KeyMint non-secure VM
97bc6c5 : Add dirgroup for trusty genrule

+- Project: trusty/app/sample

4cbd659 : hwcrypto: Removing batch key definition
4e4c3ec : Changing hwcrypto interface to stable
2df2b27 : Implement HMAC clear key import
a281b5f : Implement HMAC operations
fc07d35 : Add aidl file for hmac operations
7670a1a : Mock implementation for keyslot
a8d35e0 : Add protection ID to key metadata
0dbc980 : Adding failable conversion to macro
44cdcc9 : Separating key header policy and metadata
22173a0 : Add opaque key export/import functionality
ce69685 : Add key serialization code
b250de1 : Add connection information to opaque keys
7b59bcb : memref-test: make IPC structure size and alignment explicit
5b6251d : Use FIND_CRATE instead of external/rust/crates
65ff5d4 : Add dirgroup for trusty genrule
540b4f1 : hwrng-test: It should be possible to disable forward_sum test
edb8c13 : Add basic cbcs decryption/encryption
5b8ea02 : rules.mk: Use FIND_CRATE to locate dependencies

+- Project: trusty/app/secretkeeper

a3a297e : Add dirgroup for trusty genrule

+- Project: trusty/app/storage

d963aa4 : storage: Add method to AIDL impl
bc4fb9c : storage: Make test handle storage disconnects
40cf1c9 : storage: Keep AIDL sessions alive across reconnect
320e038 : storage: Correctly track `storage_sevice` init
57c87f9 : storage: Stop tearing down block devices on proxy reconnect
2546e2d : storage-unittest: fix StorageInitNoCommitCleanupTest
3e4b004 : storage: Make `block_device_tipc` support reconnect
1ec3956 : Add dirgroup for trusty genrule
80cdb34 : storage: Extract nsp fs init/destroy fn
251cdd2 : storage: Extract tdp fs init/destroy fn
e67eaa6 : storage: Extract ns fs init/destroy fn
b2faff3 : storage: Extract rpmb fs init/destroy fn
5a190dd : storage: Un-split NS filesystem initialization
38e0e8f : storage: Refactor rpmb partition start/size calcs
f5d8efe : host tests: rule.mk add unittest host lib dependency
fb77f3f : storage: Change `check_storage_size` to set size
aaaca9e : storage: Change args for `rpmb_check`
fbfc6cc : storage: Separate client code in `block_device_tipc`
857f8a1 : storage: Add logging to block_device_tipc_init

+- Project: trusty/lib

e9867a0 : [dice] Update DICE_PRIVATE_KEY_SIZE to DICE_PRIVATE_KEY_BUFFER_SIZE
c904610 : make: Update secure storage aidl location
cb12319 : Fix incompatibility issue with upstream open-dice
19fc1aa : [Trusty][Coverage] Increase number of apps, for when all test apps are loaded
b515705 : Update rustflags to work with new hal_v4 types
4daa65d : Update rustflags to work with new hal_v4 types
966c7d9 : Add OUT_DIR to rust_bindgen for boringssl
c601333 : Add visibility support to service_dispatcher!
eb6845d : lib:storage: default pointer returns for error paths
12a4c8a : make: Add bindgen and compiler flags as a bindgen rule dependency
4d5ef53 : Add support for trusty genrule build
bcb2f1d : Make AIDL_RUST_GULE_TOOL configurable
458cebd : Allow proto binaries to be configurable
891c2d9 : make: Pass rustfmt path to bindgen
3e57078 : make: Mark Rust compiler invocations as recursive
3c56e10 : trusty: metrics: enable metrics TA to restart after exiting to unblock metricsd
66e67d1 : experimental: lib: tidl: Add missing packed attribute
039488e : lib: tipc: impl Serialize for most numeric primitives
6c75182 : lib: sancov: Work around clang 18+ link failure
2f96eff : make: Pass -fno-PIC -fno-PIE if we don't pass -fPIC
420fbf3 : trusty: prepare_dma: add test for invalid arguments
01bbc42 : lib: unittest: remove get_current_time_ns declaration from header
7ec0d2a : make: Update logic to locate the crate source code
3a9917d : make: Use FIND_CRATE to locate external Rust dependencies
5feca22 : lib: unittest-rust: add duration to test case report
a433f98 : [Metrics] Add salt to the hashing method for crash metrics
b557802 : host: unittest: get_current_time_ns
abe8d37 : lib: trusty-std: Disable panic_info_message on 1.81
d78b2d8 : app: metrics: rules.mk add missing unittest lib
21d84fd : Use size_t for iterator on sizeof
d6d90ec : [Trusty][Metrics] Add FAR and ELR to the struct corresponding to crash metrics and obfuscate them conditionally

+- Project: trusty/vendor/google/aosp

3459569 : Add keep_gendir to genrule for incremental builds
59bc4bf : Update arm64 trusty project for testing
5eb1630 : test-map: Scan for device names for driver tests on FF-A
0fd099f : test-map: Update sysfs paths for modules
2c2c713 : android: Add non-staging dirgroup
ea12b2a : Explicitly set embedded_launcher
4862c9f : test-map: Add fpsimd test
04c9547 : Turn on incremental builds for Trusty Makefile
67b7d8b : Minimize dirgroup for Trusty build
db98c52 : test-map: Add storage proxy reconnect tests
acf26c4 : Update genrule_defaults to include more crates
199b05c : Add lk.elf genrule module
9f9baea : scripts: Merge profdata before generating reports
23f3b05 : scripts: Add the new SPD compatibility builds
2a8eca2 : run_tests: Fix dependency order when reloading qemu.py
88c69dc : run_tests: Reload qemu and qemu_error between projects
acb06c4 : Include desktop-arm64-test and desktop-x86_64-test
bfe7b1c : scripts: Update clang version to r522817
459f7b9 : scripts: Allow overriding clang and rust version
2dcba27 : Revert "scripts: Update clang version to r522817"
7cbc444 : scripts: Update clang version to r522817
37a4d4b : trusty: scripts: Add docs in build config