Insights: pytorch/xla
Overview
10 Pull requests merged by 6 people
- Add tuned parameters for Qwen/Qwen2.5-32B (#8966, merged Apr 12, 2025)
- Adapt Splash Attention from TorchPrime (#8911, merged Apr 11, 2025)
- Make scan-based GRU support batch_first parameter (#8964, merged Apr 11, 2025)
- [WRONG PR] @assume_pure (#8923, merged Apr 10, 2025)
- [call_jax] support returning PyTree from the JAX function (#8957, merged Apr 10, 2025)
- Remove deprecated typing module in ansible flow (#8955, merged Apr 10, 2025)
- Add test to ensure scan-based and the standard GRU are interchangeable (#8949, merged Apr 9, 2025)
- Update to pytorch 2.6 (#8944, merged Apr 9, 2025)
- Pin update to 20250406 (#8945, merged Apr 8, 2025)
- Use pages_per_seq * page_size instead of directly passing max_model_len (#8950, merged Apr 8, 2025)
6 Pull requests opened by 4 people
- Add runtime check when using non-kernel for ragged paged attn (#8958, opened Apr 10, 2025)
- assume_pure & call_jax composible. (#8961, opened Apr 10, 2025)
- @assume_pure (#8962, opened Apr 11, 2025)
- Add an option for JittableModule to dedup parameters. (#8965, opened Apr 11, 2025)
- Add a helper class to handle mesh and sharding (#8967, opened Apr 12, 2025)
- Disable one splash attention test (#8970, opened Apr 14, 2025)
6 Issues closed by 5 people
- [RFC] scan operator and scan_layers (#8620, closed Apr 10, 2025)
- [cleanup] Remove install_post_deps_pytorch_xla (#8934, closed Apr 10, 2025)
- Support Python 3.13 in nightly wheels (#8959, closed Apr 10, 2025)
- User built torch-xla wheel fails on import (#8940, closed Apr 9, 2025)
- torch.distributed.all_reduce not converted to stableHLO (#8854, closed Apr 9, 2025)
- GPU test failure: test_dynamo.py (#8952, closed Apr 9, 2025)
10 Issues opened by 5 people
- [call_jax] Bridge the torch_xla and JAX mesh (#8972, opened Apr 14, 2025)
- Splash attention test fail randomly only in github CI (#8971, opened Apr 14, 2025)
- Incorrect stableHLO output of all reduce (#8969, opened Apr 14, 2025)
- Alternative to torch.select_mask (#8968, opened Apr 13, 2025)
- `call_jax` doesn't take jax config into hashing. (#8963, opened Apr 11, 2025)
- TPU 6e-1 + Pytorch 2.6[TPU] does not work (#8960, opened Apr 10, 2025)
- GPU test failed again: AtenXlaTensorTest.TestDivInPlaceWithRoundingMode (#8956, opened Apr 10, 2025)
- MegaScale discovery is ran twice again (#8954, opened Apr 9, 2025)
- torch.linalg.lstsq issues on GPU/TPU (#8953, opened Apr 8, 2025)
- GPU master test failure: test_python_ops.py (#8951, opened Apr 8, 2025)
12 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Large number of graph break with flash_attention on dynamo openxla backend (#8913, commented on Apr 7, 2025; 0 new comments)
- Persistent cache doesn't work for GPU/TPU (#8930, commented on Apr 8, 2025; 0 new comments)
- Run backward on CPU before importing torch_xla cause future backward on XLA crash (#4174, commented on Apr 9, 2025; 0 new comments)
- TPU memory use increased significantly in torch/xla - 2.6.0.dev20241107 (#8423, commented on Apr 9, 2025; 0 new comments)
- Torch-XLA not compatible with static python (#8948, commented on Apr 10, 2025; 0 new comments)
- [Deprecation Tracking] API deprecation timeline summary (#8915, commented on Apr 11, 2025; 0 new comments)
- [torchax] jit compile the model constructor (#8635, commented on Apr 11, 2025; 0 new comments)
- [scan] Avoid re-tracing the combine function on every call (#8632, commented on Apr 14, 2025; 0 new comments)
- Transition to Hermetic CUDA (#8665, commented on Apr 8, 2025; 0 new comments)
- [Draft] Add Experimental limited sparse embedding bag (#8905, commented on Apr 11, 2025; 0 new comments)
- test on debian12 (#8928, commented on Apr 10, 2025; 0 new comments)
- add CreateGlobalShardedData prototype (#8932, commented on Apr 14, 2025; 0 new comments)