Granite4 FP8 Block Quantization #2001
base: main
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes

Hello @krishnateja95, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enables FP8 block quantization for the Granite4-small model by introducing a new example and essential architectural adaptations. It provides a mechanism to convert the model's Mixture-of-Experts (MoE) layers into a format amenable to quantization and then repackages the resulting weights for optimized storage and compatibility with high-performance inference frameworks like vLLM. This work streamlines the process of deploying quantized Granite models.

Highlights
Code Review
This pull request introduces FP8 block quantization for the Granite4-small model, including a new example script and utility functions. The core logic involves replacing MoE expert layers with standard linear layers for quantization and then packing them back into a 3D tensor format for compatibility. My review focuses on improving code quality, robustness, and fixing a bug that prevents the example from running. I've suggested removing unused imports, renaming a function for consistency and correctness, using isinstance for more robust type checking, and refactoring a large function for better maintainability. Overall, the changes are a good addition, and with these adjustments, the code will be more robust and easier to maintain.
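For illustration, the isinstance suggestion amounts to the pattern below; the expert-module class name is a hypothetical stand-in, not the actual class used in the PR.

```python
from torch import nn


class ParallelExperts(nn.Module):
    """Hypothetical stand-in for the Granite MoE expert module class."""
    ...


def is_expert_module(module: nn.Module) -> bool:
    # isinstance also matches subclasses and wrapped/patched variants,
    # unlike an exact type(module) == ParallelExperts comparison.
    return isinstance(module, ParallelExperts)
```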
dsikka
left a comment
Couple of key issues:
- We should be using the moe_context: https://github.com/vllm-project/llm-compressor/blob/main/src/llmcompressor/modeling/moe_context.py. We have many examples that apply this context which you can follow (see the sketch after this list).
- All saving logic should be handled by compressed-tensors. It seems like you're doing a lot of custom logic.
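For reference, a minimal sketch of the suggested approach, assuming the calibration helper exported from `llmcompressor.modeling`; the exact helper name, and whether it already covers Granite4, should be checked against the linked `moe_context.py`.

```python
# Sketch only: the helper name below is an assumption; confirm against
# src/llmcompressor/modeling/moe_context.py (newer versions may expose a
# MoE calibration context manager instead).
from transformers import AutoModelForCausalLM

from llmcompressor.modeling import replace_modules_for_calibration

model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-4.0-h-small", torch_dtype="auto"
)

# Swap MoE expert modules for calibration-friendly Linear equivalents via the
# shared MoE utilities rather than custom, model-specific surgery in the example.
model = replace_modules_for_calibration(model)
```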
```python
import torch
import json
import os
import shutil
```
Please use this context: https://github.com/vllm-project/llm-compressor/blob/main/src/llmcompressor/modeling/moe_context.py
@dsikka This saving logic is specific to the Granite4-small model; it ensures that the FP8 block-quantized model is compatible with vLLM.
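For readers following along, the repacking referred to here is conceptually like the sketch below; the function name, tensor shapes, and dtype handling are hypothetical illustrations, not the PR's actual saving code.

```python
# Illustrative only: hypothetical repacking of per-expert 2D Linear weights into
# a stacked 3D [num_experts, out_features, in_features] layout for serialization.
import torch


def repack_expert_weights(expert_weights: list[torch.Tensor]) -> torch.Tensor:
    # Each entry is an [out_features, in_features] FP8 weight from one expert's Linear.
    return torch.stack(expert_weights, dim=0)


# Arbitrary example shapes (not Granite4's real dimensions).
experts = [torch.zeros(1024, 4096, dtype=torch.float8_e4m3fn) for _ in range(8)]
packed = repack_expert_weights(experts)
assert packed.shape == (8, 1024, 4096)
```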
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Krishna Teja Chitty-Venkata <44275589+krishnateja95@users.noreply.github.com>
SUMMARY:
"Recipe for FP8 Block quantization of Granite4-small model (https://huggingface.co/ibm-granite/granite-4.0-h-small)"