Conversation

@avizon-aws

Currently, trying to use TP all-gather with MXFP8 raises an unimplemented error for the all-gather operation. This PR adds support for MXFP8 all-gather, which enables better performance by reducing the bandwidth required for the gather.
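
A back-of-envelope estimate of the saving (an illustration, assuming MXFP8 stores 1-byte FP8 elements plus a shared 1-byte E8M0 scale per 32-element block, versus 2 bytes per element for bf16):

```
# Back-of-envelope: bytes moved per element by the all-gather.
bf16_bytes_per_elem = 2.0                # bf16: 2 bytes per element
mxfp8_bytes_per_elem = 1.0 + 1.0 / 32    # 1-byte fp8 data + 1-byte E8M0 scale per 32-elem block
print(mxfp8_bytes_per_elem / bf16_bytes_per_elem)  # ~0.52: roughly half the bandwidth
```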

@pytorch-bot

pytorch-bot bot commented Dec 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3435

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 248a403 with merge base aa21b80:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla

meta-cla bot commented Dec 4, 2025

Hi @avizon-aws!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@meta-cla meta-cla bot added the CLA Signed label Dec 4, 2025
@danielvegamyhre
Contributor

cc @vkuzo @drisspg

@vkuzo
Contributor

vkuzo commented Dec 4, 2025

looks reasonable, would you mind rebasing as we've landed some refactors to MXTensor recently?

just curious, which framework are you using TP + mxfp8 from?

@avizon-aws
Author

I rebased just before creating the PR (last night), so it should be up to date with mainline. I fixed the ruff formatting to pass the workflows. I think one of the workflows was failing due to a missing tag on the PR; the right tag is probably "add new feature", but I wasn't able to add it, as I don't think I have the permission for it.

I tested the TP all-gather using native PyTorch with a test script I created; I didn't use any other framework. I updated the sharding strategy for to_mx, which is what required the TP all-gather. A sketch of the test setup is below.
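
A minimal sketch of such a script (not the exact one from this PR; it assumes torchao's prototype MXTensor.to_mx API, whose signature may have shifted with the recent MXTensor refactors, and a 2-GPU launch via torchrun):

```
# Run with: torchrun --nproc_per_node=2 tp_mx_allgather.py
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import DTensor, Replicate, Shard

# Prototype API; exact import path and signature may differ after recent refactors.
from torchao.prototype.mx_formats.mx_tensor import MXTensor


def main():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)
    mesh = init_device_mesh("cuda", (dist.get_world_size(),))

    # Each rank quantizes its local high-precision shard to MXFP8.
    local_hp = torch.randn(128, 256, device="cuda", dtype=torch.bfloat16)
    local_mx = MXTensor.to_mx(local_hp, torch.float8_e4m3fn, block_size=32)  # assumed signature

    # Wrap the MX shard as a DTensor sharded on dim 0, then replicate.
    # The redistribute dispatches an all-gather on the MXTensor local shard,
    # which previously raised an unimplemented error.
    dt = DTensor.from_local(local_mx, mesh, [Shard(0)])
    replicated = dt.redistribute(mesh, [Replicate()])
    print(f"rank {rank}: gathered shape {tuple(replicated.to_local().shape)}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```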

@danielvegamyhre
Contributor

danielvegamyhre commented Dec 5, 2025

LGTM! Will leave approval for @vkuzo, who I think is also taking a look at this.

@avizon-aws, to fix the code analysis CI job, please run the linter:
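
```
ruff check --fix torchao/ test/
ruff format torchao/ test/
```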

@danielvegamyhre danielvegamyhre added the topic: improvement label Dec 5, 2025
@avizon-aws
Author

Thanks @danielvegamyhre, I have used the commands shared above to fix the formatting.

vkuzo and others added 10 commits December 5, 2025 17:12
* Update [ghstack-poisoned]
* Update [ghstack-poisoned]
* Update [ghstack-poisoned]
Summary: taking Claude Code for a more thorough spin; will start with local instructions and see what makes sense to upstream.
* [Reland][PT2E][X86] Add Inductor fusion passes of float8 qconv for X86Inductor backend
* add torch version check for Qconv FP8 UTs
* fix format issue
* Skip tests for ROCm

---------

Co-authored-by: Sun, Jiayi <jiayi.sun@intel.com>
* Int8Tensor migration

Summary: This PR creates a new Int8Tensor and updates the configs to use the new Int8Tensor flow.

Test Plan:

To ensure BC:
```
pytest test/quantization/test_quant_api.py
```

To test new Int8Tensor:
```
pytest test/quantization/quantize_/workflows/int8/test_int8_tensor.py
```

* ruff fixes
* add init
* fix ruff again
* update
* wip
* undo update tests
* fix ruff
* fix varname
* fix typing
* add tests
* fix dtype
* fix ci
* address granularity cr
* update _choose_quant_func_and_quantize_tensor
* make block size required attribute
* made dtype required as well
* address nits
* skip per tensor weight only test for now
… XPU (pytorch#3368)

* enable test/dtypes/test_bitpacking.py on intel xpu
* enable test/dtypes/test_floatx.py
* enable test/dtypes/test_floatx.py
* fix format issue
* fix format issue
* update _DEVICES
…ze_pt2e_qat} UT files to intel XPU (pytorch#3405)

* add test/quantization/pt2e/test_quantize_pt2e.py
* add test/quantization/pt2e/test_quantize_pt2e.py
* test/quantization/pt2e/test_quantize_pt2e_qat.py
* test/quantization/pt2e/test_quantize_pt2e_qat.py
* fix format issue
* update format
* increase timeout for xpu
@avizon-aws
Author

I see some issues during the rebase; I'll fix them and then ping for review.

@avizon-aws
Author

@vkuzo, I have fixed the rebase.

@avizon-aws
Author

avizon-aws commented Dec 6, 2025

I think the test is failing in CI because of version and compute capability issues (I have run the test locally and it passes on compute capability >= 9.0). I will have to add some conditions so that the test is skipped for certain CUDA versions and devices.

@avizon-aws
Author

I added checks to ensure the test runs only on CUDA and only on compute capability >= 9.0 (H100 and above, where FP8 is well supported).
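
Roughly, guards of this shape (a minimal sketch, not the exact code from this PR; the test name is illustrative):

```
# Minimal sketch of a CUDA + SM90 guard using pytest (illustrative test name).
import pytest
import torch


def _has_sm90_plus() -> bool:
    # True only when a CUDA device with compute capability >= 9.0 (H100+) is present.
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major >= 9


@pytest.mark.skipif(
    not _has_sm90_plus(),
    reason="MXFP8 all-gather requires CUDA compute capability >= 9.0 (H100+)",
)
def test_mxfp8_all_gather():
    ...
```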

@avizon-aws
Author

avizon-aws commented Dec 6, 2025

It seems like there is only one H100 GPU available in CI, and hence the test is failing, because the world size required for my test is 2. What are the recommendations to proceed here? There are two options:

  1. Run the test on CPU
  2. Skip the test

I think skipping the test is risky; I would prefer option 1.
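
For reference, if skipping is chosen, a minimal guard could look like this (a sketch assuming pytest and a required world size of 2; the test name is illustrative):

```
# Sketch: skip the distributed test when fewer than 2 CUDA devices are available.
import pytest
import torch

REQUIRED_WORLD_SIZE = 2  # the test all-gathers across 2 ranks


@pytest.mark.skipif(
    torch.cuda.device_count() < REQUIRED_WORLD_SIZE,
    reason=f"requires at least {REQUIRED_WORLD_SIZE} CUDA devices",
)
def test_mxfp8_all_gather_tp():
    ...
```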
