Convert AutoAWQ checkpoints to compressed-tensors #2112
Code Review (gemini-code-assist)
This pull request introduces a new script to convert AutoAWQ checkpoints into a compressed-tensors-compatible format. The implementation covers loading model weights, dequantizing them according to the AutoAWQ GEMM version, and then re-packing them using ModelCompressor. The script also includes CLI and Python interfaces for conversion. Overall, the changes are well-structured and address the stated objective. However, there are a few areas related to security, correctness, and consistency that could be improved.
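For readers unfamiliar with the layout the review refers to, here is a minimal sketch of GEMM-style dequantization. It is an illustration only, not the code added by this PR: it assumes the usual AutoAWQ GEMM tensors (qweight and qzeros packing eight 4-bit values per int32, per-group scales) and ignores AutoAWQ's interleaved column ordering, which a real converter must also undo.

import torch

def unpack_int32_to_uint4(packed: torch.Tensor) -> torch.Tensor:
    # Unpack a [..., N] int32 tensor into [..., N * 8] unsigned 4-bit values.
    # Real AutoAWQ GEMM checkpoints also interleave columns in a fixed order,
    # which is omitted here for brevity.
    shifts = torch.arange(0, 32, 4, device=packed.device)   # [8]
    vals = (packed.unsqueeze(-1) >> shifts) & 0xF            # [..., N, 8]
    return vals.flatten(start_dim=-2)                        # [..., N * 8]

def dequantize_gemm(qweight, qzeros, scales, group_size=128):
    # qweight: [in_features, out_features // 8]              (packed int32)
    # qzeros:  [in_features // group_size, out_features // 8] (packed int32)
    # scales:  [in_features // group_size, out_features]      (fp16/fp32)
    w_q = unpack_int32_to_uint4(qweight).float()    # [in_features, out_features]
    zeros = unpack_int32_to_uint4(qzeros).float()   # [n_groups, out_features]
    # broadcast per-group zero points and scales over each group of input rows
    zeros = zeros.repeat_interleave(group_size, dim=0)
    s = scales.float().repeat_interleave(group_size, dim=0)
    # asymmetric dequantization; note the result is the transpose of
    # nn.Linear's [out_features, in_features] weight layout
    return (w_q - zeros) * s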
/gemini review
Code Review (gemini-code-assist)
This pull request introduces a valuable script for converting AutoAWQ checkpoints to the compressed-tensors format. The implementation is well-structured, with clear separation of concerns and good use of existing libraries. However, I've identified a couple of potential issues in the dequantization logic that could lead to incorrect behavior, particularly concerning tensor shapes and the handling of quantization parameters. My review includes suggestions to address these points to ensure the conversion is robust and correct for a wider range of models. The accompanying tests are a great start for validation.
brian-dellabetta left a comment
Thank you for the contribution! This looks great, but I have a couple questions about where this should be placed and if the test can be made a little shorter. I will add some other maintainers to review and discuss, and we can decide from there. Thanks again!
Review comment on def compare_models(model_name_or_path: str):
Running lm_eval can be expensive; when comparing models we just want to ensure the logits are the same for a given set of input_ids. One way to make this cheaper would be:
import torch

# assumes orig_model, new_model, and tokenizer are already loaded
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids
orig_logits = orig_model.forward(input_ids=input_ids).logits
new_logits = new_model.forward(input_ids=input_ids).logits

# possible things to compare
print(f"Norm Diff {(orig_logits - new_logits).norm()}")
print(f"Norm MSE {torch.nn.MSELoss()(orig_logits, new_logits).norm()}")
print(f"Norm {orig_logits.norm()}, {new_logits.norm()}")
Got it. Will work on this and see what I can do.
this may be more suitable for a higher scope, for example placing it into examples/ or tools/, or even directly into compressed-tensors as it does not involve any src code in llmcompressor.
This is the first time we've added a feature like this, just posting here to see what the rest of the team thinks, and we can decide after that.
agree, this is not a modifier and should probably be in tools or maybe utils?
Here's my take from a user experience perspective. None of them are strong opinions btw 😆:
- Users with their AutoAWQ checkpoint would naturally visit the documentation site of llm-compressor and look for things related to AWQ. Putting the conversion script under the AWQ modifier module improves discoverability.
- Putting it somewhere else (examples/, tools/) and mentioning it in the AWQ modifier's description should also work, but adds an extra layer of maintenance effort if things are subject to change.
- compressed-tensors does not have any documentation at the moment. Users would have to be redirected from warning messages, the AWQ modifier documentation, or the vLLM documentation to be aware that this tool exists.
Also, just curious, here's a question regarding the purpose of this tool:
- vLLM seems to support serving AutoAWQ checkpoints [1]. Why do we need to convert the format? Is vLLM planning to drop that support and remove the AutoAWQ kernels?
- If so, then mentioning this conversion tool in the vLLM docs and in the warning/error message makes the most sense.
- Otherwise, for what reasons, and through which entry points, do you expect users to need to convert AutoAWQ checkpoints?
AutoAWQ is no longer maintained. It'd be good to simplify formats and code while still supporting legacy usage through conversion.
The documentation site for AWQ and the place where the core AWQ components live aren't necessarily the same. I doubt people go file by file through core components to find what they want. There's also no README in there, and the general modifiers README one level up doesn't mention AWQ (bad). Searching for AWQ usually leads users to examples/AWQ, where there is a dedicated README; it feels like this is the expected landing spot. What path do you expect users to take to reach modifiers/awq as their first landing point?
brian-dellabetta left a comment
Hi @mutichung , we discussed this internally, and would like to move this to compressed-tensors instead, into a new module and file like src/compressed_tensors/converters/autoawq.py. We landed on this because it only has to do with moving something out of the W4A16 serialization format introduced by AutoAWQ, and doesn't really have anything to do with the AWQ algorithm, and because it doesn't rely on any src code in llmcompressor/AWQModifier. We can still include it in the examples/awq README though, and add it to our docs and the vllm docs. Anyone with llm-compressor installed will still be able to use it from the dependency.
Please let us know what you think. Happy to help set up that PR in compressed-tensors.
@brian-dellabetta Great, sounds reasonable to me! I'm happy to move this PR to compressed-tensors!
Closing in favor of compressed-tensors#531.
Thanks @mutichung, we can follow up on the compressed-tensors PR. Appreciate it!
Summary
This PR introduces a new script to convert AutoAWQ checkpoints into a compressed-tensors-compatible format under modifiers/awq. Resolves #2087.

Usage
Via CLI:
Via Python:
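A purely hypothetical sketch of what the Python entry point might look like; the module path, function name, and arguments below are illustrative assumptions, not the PR's actual API:

# Hypothetical illustration only -- the real converter lives under modifiers/awq
# in this PR (later moved to compressed-tensors); the names below are made up.
from llmcompressor.modifiers.awq import convert_autoawq_checkpoint  # hypothetical name

convert_autoawq_checkpoint(
    model_name_or_path="AMead10/Llama-3.2-3B-Instruct-AWQ",   # one of the checkpoints tested below
    save_dir="Llama-3.2-3B-Instruct-AWQ-compressed-tensors",  # hypothetical output directory
)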
Known Issue
Asymmetric Support in llm-compressor & compressed-tensors
- AutoAWQ with version GEMM only supports asymmetric quantization [1].
- An AssertionError will be raised despite setting zero_point=False.
- Zero-point decompression with PackedQuantizationCompressor is a WIP [2].
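To make the asymmetry concrete: the GEMM format stores an explicit per-group zero point, while a symmetric scheme fixes the zero point at zero. A small self-contained illustration of the two round-trips (not code from this PR):

import torch

w = torch.randn(1, 128)  # one quantization group of weights
n_bits = 4

# asymmetric: unsigned 4-bit range [0, 15] with an explicit zero point,
# dequantized with the same (q - z) * s form used for GEMM checkpoints
q_max = 2 ** n_bits - 1
scale_a = (w.max() - w.min()) / q_max
zero_a = torch.round(-w.min() / scale_a).clamp(0, q_max)
q_a = torch.round(w / scale_a + zero_a).clamp(0, q_max)
w_a = (q_a - zero_a) * scale_a

# symmetric: signed range [-8, 7], zero point implicitly zero
scale_s = w.abs().max() / (2 ** (n_bits - 1) - 1)
q_s = torch.round(w / scale_s).clamp(-(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
w_s = q_s * scale_s

print("asymmetric reconstruction MSE:", torch.mean((w - w_a) ** 2).item())
print("symmetric  reconstruction MSE:", torch.mean((w - w_s) ** 2).item())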
Test Plan
- Compared against an llmcompressor-compressed model with CompressedLinear.
- Logits do not match exactly under torch.testing.assert_close, potentially due to the GEMM kernel's internal precision?
- Checked outputs with AutoAWQForCausalLM and vLLM.
- Compressed with compressed-tensors based on [3].
- Compared against llmcompressor checkpoints.
- Checkpoints tested:
  - ruikangliu/DeepSeek-R1-Distill-Qwen-1.5B-quantized.awq-autoawq-w4g128
  - AMead10/Llama-3.2-3B-Instruct-AWQ
  - fbaldassarri/mistralai_Mistral-7B-Instruct-v0.3-autoawq-int4-gs128-asym
Future Work
- Switch to the packed-quantized format once asymmetric decompression is finalized.
- Replace AutoModelForCausalLM with a more generalized autoclass.

Footnotes
1. awq/modules/linear/gemm.py#L187
2. [Feature] Support Zero-point Decompression #1704
3. compressed-tensors@f9e7426
4. compressed-tensors@cf5980d