
Conversation

@kylesayrs (Collaborator) commented on Dec 18, 2025

Co-requisites

Changes

  • Move module-specific operations from on_initialize (before offload wrapping) to on_start (after offload wrapping); see the sketch after this list
  • Change dispatch functions to use the torch offloader
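
A minimal sketch of the first change, using the on_initialize/on_start hook names from llmcompressor's modifier lifecycle; the ExampleModifier class and _resolve_mappings helper are illustrative stand-ins, not code from this PR:

    class ExampleModifier:
        def on_initialize(self, state, **kwargs) -> bool:
            # Before this PR: module-specific setup (e.g. resolving module
            # mappings) ran here, before offload hooks wrapped the modules.
            return True

        def on_start(self, state, event, **kwargs):
            # After this PR: the same setup runs here, after the model has
            # been dispatched, so any resolved references point at the
            # offload-wrapped modules.
            self._resolved_mappings = self._resolve_mappings(state.model)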

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a significant refactoring of the model offloading and modifier initialization processes within the llmcompressor library. The primary goal is to enhance the clarity and efficiency of modifier lifecycle management by moving certain setup tasks to the on_start hook. Concurrently, the underlying offloading implementation has been modernized to more directly utilize the compressed-tensors library, thereby reducing reliance on accelerate for specific offloading functionalities. These changes contribute to a cleaner codebase, improved modularity, and better integration with updated external library practices.

Highlights

  • Modifier Lifecycle Hook Refactoring: The invocation of initialization-related methods like _set_resolved_mappings (AWQ), _infer_sequential_targets and get_layers (SparseGPT), _module_names preparation (GPTQ), and _resolve_mappings (SmoothQuant) has been moved from the on_initialize method to the on_start method across several modifiers. This ensures these operations are performed when the modifier is actively starting its process, rather than during its initial setup.
  • Offloading Mechanism Update: The model offloading logic has been updated to reduce direct dependencies on the accelerate library. Imports of accelerate.hooks and compressed_tensors.utils.offloaded_dispatch have been replaced with compressed_tensors.offload.dispatch_model and a new remove_dispatch utility in llmcompressor.utils.dev. This change aligns the offloading implementation more closely with the compressed-tensors library (see the first sketch after this list).
  • Sequential Tracer Simplification: The SequentialTracer in llmcompressor/pipelines/sequential/helpers.py has been simplified. The offloaded parameter and its associated logic, which previously tracked modules with offloaded parameters, have been removed. This streamlines the tracing process by no longer needing to explicitly manage offloaded modules during graph capture.
  • Untying Word Embeddings Refinement: The untie_word_embeddings function in llmcompressor/utils/transformers.py has been refactored. It now uses module.register_parameter directly to untie weights, eliminating the need for has_offloaded_params and register_offload_parameter from compressed_tensors (see the second sketch after this list).
  • Dispatch Function Relocation and Refinement: The dispatch_for_sequential utility has been relocated from llmcompressor/pipelines/sequential/helpers to llmcompressor/utils/helpers. Additionally, a new remove_dispatch function has been introduced in llmcompressor/utils/dev to provide a unified way to remove both accelerate and compressed_tensors dispatch hooks from a module.
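
As a rough illustration of the offloading highlights above, the new dispatch flow might be used like this. The import paths come from this PR's description, but the call signatures below are assumptions:

    from compressed_tensors.offload import dispatch_model
    from llmcompressor.utils.dev import remove_dispatch

    def calibrate_with_offload(model):
        # Attach compressed-tensors offload hooks (replacing the old
        # accelerate-based dispatch); signature assumed.
        model = dispatch_model(model)
        try:
            ...  # run calibration / compression
        finally:
            # remove_dispatch is described as stripping both accelerate
            # and compressed-tensors hooks from the module tree.
            remove_dispatch(model)
        return model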
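
Likewise, a minimal sketch of the untie_word_embeddings refactor: registering an independent copy of the weight via module.register_parameter instead of going through compressed-tensors' offload-aware registration. The body is illustrative, not the exact implementation:

    import torch

    def untie_word_embeddings(model):
        for module in (model.get_input_embeddings(), model.get_output_embeddings()):
            if module is None or getattr(module, "weight", None) is None:
                continue
            # Register a fresh, untied copy of the (previously shared) weight.
            untied = torch.nn.Parameter(module.weight.data.clone())
            module.register_parameter("weight", untied)
        # Assumed bookkeeping: record that the weights are no longer tied.
        model.config.tie_word_embeddings = False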


@kylesayrs changed the title from "[TorchOffloader]" to "[TorchOffloader] Prepare for torch offloader compatibility" on Dec 18, 2025
@kylesayrs changed the title from "[TorchOffloader] Prepare for torch offloader compatibility" to "[TorchOffloader] Switch to torch offloader from accelerate" on Dec 18, 2025
@github-actions (bot) commented

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist (bot) left a review comment


Code Review

This pull request refactors the model offloading mechanism, replacing the accelerate-based implementation with a new TorchOffloader from the compressed-tensors library. The changes are consistently applied across various components, including modifiers, pipelines, and utility functions. A notable improvement is the shift of setup logic in several modifiers from on_initialize to on_start, enhancing modularity and flexibility. The tests have also been updated to align with these changes. While the refactoring is well-executed, I've identified a critical import issue that will cause a runtime error.

The review comment targets this import block from the diff:

    DISABLE_QAC_MODIFIERS,
    DisableQuantization,
    calibration_forward_context,
    dispatch_for_sequential,

Critical: The import of dispatch_for_sequential from llmcompressor.utils.helpers will fail because the function is not defined or imported in that module. It seems the function was intended to be moved from llmcompressor.pipelines.sequential.helpers to llmcompressor.utils.helpers, but only the import statement was updated. Please move the function definition to llmcompressor.utils.helpers to resolve this ImportError.
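
An illustrative fix, assuming the relocated function simply wraps the compressed-tensors dispatch; the real body should be moved verbatim from llmcompressor.pipelines.sequential.helpers rather than rewritten:

    # llmcompressor/utils/helpers.py -- sketch of the relocated definition
    from compressed_tensors.offload import dispatch_model

    def dispatch_for_sequential(model):
        # Dispatch the model for sequential calibration so the import in
        # the hunk above resolves.
        return dispatch_model(model)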
