Insights: pytorch/ao
Overview
- 19 Merged pull requests
- 11 Open pull requests
- 0 Closed issues
- 3 New issues
19 Pull requests merged by 10 people
- Fix Bug in MX Builds (#2284, merged May 31, 2025)
- Add back AOPerModuleConfig for BC (#2282, merged May 31, 2025)
- Patch the _is_conv_node function (#2257, merged May 31, 2025)
- Fixes MX formats build for blackwell (#2278, merged May 30, 2025)
- Update CMake to enable building ops on iOS (#2274, merged May 30, 2025)
- Resolve logger warnings (#2250, merged May 30, 2025)
- Add Integration Tests to H100 CI (#2268, merged May 30, 2025)
- Make optim lazily intialize global state (#2277, merged May 30, 2025)
- Fix generate.py for fbgemm int4 integration (#2273, merged May 29, 2025)
- Mark QAT range learning as prototype for now (#2272, merged May 29, 2025)
- Enable range learning for QAT (#2033, merged May 29, 2025)
- Fix torchao generate script for cpu device (#2267, merged May 29, 2025)
- Enable fp16+int4 mixed precission path for int4 xpu path with int zero point (#2240, merged May 29, 2025)
- integration-vllm-test (#2258, merged May 28, 2025)
- Add support for fbgemm int4 mm kernel (#2255, merged May 28, 2025)
- [reland2][ROCm] preshuffled weight mm (#2207, merged May 28, 2025)
- Support INT8 SDPA template for CPU (#2148, merged May 28, 2025)
- Fix Per Row scaling for inference (#2253, merged May 27, 2025)
- Revert "Try fixing CI by pinning pytest (#2238)" (#2263, merged May 27, 2025)
11 Pull requests opened by 9 people
- test_affine_quantized_float.py pytest too unittest (#2261, opened May 25, 2025)
- Test d script (#2264, opened May 27, 2025)
- Update QAT docs, highlight axolotl integration (#2266, opened May 28, 2025)
- [float8 training] remove duplicate override for view (#2269, opened May 29, 2025)
- float8 moe training conversion API prototype (#2275, opened May 30, 2025)
- [WIP] Add support for fbgemm fp8 kernels (#2276, opened May 30, 2025)
- Fix QAT range learning, ensure scales get gradients (#2280, opened May 30, 2025)
- Remove Constraint for sm89 hardware (#2281, opened May 30, 2025)
- [do not land] testing if moving this breaks my PRs (#2283, opened May 30, 2025)
- Build mxfp4 kernel for sm120a (#2285, opened May 31, 2025)
- [optim] Fix bug when default dtype is BF16 (#2286, opened May 31, 2025)
3 Issues opened by 3 people
- QAT range learning tracker (#2271, opened May 29, 2025)
- [pt2e] QAT training and FSDP support (#2265, opened May 27, 2025)
- convert_to_float8_training and torch.compile make model slow (#2262, opened May 26, 2025)
10 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- GPTQ updates (#2235, commented on May 31, 2025 • 10 new comments)
- Add activation sparsity (24 + fp8 dynamic quant) subclass (#2213, commented on May 30, 2025 • 5 new comments)
- [roadmap/tracker] Low precision training for MoEs (#2147, commented on May 27, 2025 • 0 new comments)
- BatchNorm + Convolution fusion in `prepare_pt2e` removal (#2245, commented on May 28, 2025 • 0 new comments)
- [Quant] Can quant not be decomposed on inductor? (#2228, commented on May 29, 2025 • 0 new comments)
- int4_weight_only get plain weight are padded (#2249, commented on May 29, 2025 • 0 new comments)
- newer torchao breaks sglang? (#2226, commented on May 30, 2025 • 0 new comments)
- [CPU] Enable DA8W4 on CPU (#2128, commented on May 27, 2025 • 0 new comments)
- Enable Int4WeightOnlyGPTQQuantizer on Intel GPU. (#2200, commented on May 30, 2025 • 0 new comments)
- Fixes MX formats build for blackwell (#2214, commented on May 30, 2025 • 0 new comments)