55 changes: 55 additions & 0 deletions examples/windows/Benchmark.md
@@ -24,6 +24,8 @@ Memory savings and inference speedup are compared to the ONNX FP16 baseline.

### 1.2 Accuracy Comparison

#### 1.2.1 MMLU

For accuracy evaluation, the [Massive Multitask Language Understanding (MMLU)](https://arxiv.org/abs/2009.03300) benchmark is used. Please refer to the [detailed instructions](./accuracy_benchmark/README.md) for running the MMLU accuracy benchmark.

The table below shows the MMLU 5-shot score for some models.
@@ -39,3 +41,56 @@
| [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) | 61.76 | 60.73 |
| [Llama3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) | 60.8 | 57.71 |
| [Gemma-2b-it](https://huggingface.co/google/gemma-2b-it) | 37.01 | 37.2 |

#### 1.2.2 Perplexity (PPL)

Perplexity measures how well a probability model predicts a sample; lower perplexity indicates better model quality. The following table shows perplexity values at an input sequence length of 1024 with a chunk size of 512.

**Learn more about Perplexity:** [Perplexity - Wikipedia](https://en.wikipedia.org/wiki/Perplexity) | [Hugging Face - Perplexity of Fixed-Length Models](https://huggingface.co/docs/transformers/en/perplexity)
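
For reference, perplexity over a tokenized sequence $x_1, \dots, x_N$ under a model $p_\theta$ is the exponentiated average negative log-likelihood:

$$
\mathrm{PPL}(x_{1:N}) = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_\theta\left(x_i \mid x_{<i}\right)\right)
$$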

- **FP16-MB**: Baseline FP16 genai model (Model Builder)
- **Mixed AWQ-MO**: Important linear layers in INT8, rest in INT4 (AWQ), with ModelOpt.
- **Mixed RTN-MO**: Important linear layers in INT8, rest in INT4 (RTN), with ModelOpt.
- **Pure INT4 AWQ-MO**: All linear layers INT4 (AWQ) with ModelOpt.
- **Pure INT4 RTN-MO**: All linear layers INT4 (RTN) with ModelOpt.
- **Pure INT8 RTN-MO**: All linear layers INT8 (RTN) with ModelOpt.
- **Pure INT8 AWQ-MO**: All linear layers INT8 (AWQ) with ModelOpt.
- **Configuration**: Windows OS, GPU RTX 5090, nvidia-modelopt v0.39.0, onnxruntime-genai-cuda 0.9.2, onnxruntime-gpu 1.23.0, torch 2.8.0+cu128, transformers 4.49.0

| Model | FP16-MB | Mixed AWQ-MO | Mixed RTN-MO | Pure INT4 AWQ-MO | Pure INT4 RTN-MO | Pure INT8 RTN-MO | Pure INT8 AWQ-MO |
|:------|:--------|:-------------|:-------------|:-----------------|:-----------------|:-----------------|:-----------------|
| DeepSeek R1 Distill Qwen 1.5B | 39.447 | 41.699 | 44.332 | 44.213 | 46.304 | 39.802 | 39.713 |
| Llama 3.2 1B Instruct | 12.631 | 13.852 | 14.176 | 14.549 | 16.900 | 12.664 | 12.637 |
| Phi-3.5 Mini Instruct | 6.046 | 6.500 | 6.599 | 6.711 | 7.070 | - | - |
| Phi-4 Mini Instruct | 9.039 | 9.673 | 9.712 | 10.015 | 10.911 | - | - |
| Qwen 2.5 1.5B Instruct | 9.216 | 10.084 | 10.338 | 10.495 | 10.933 | 9.227 | 9.232 |

For detailed instructions on evaluating perplexity, please refer to the [Perplexity Evaluation Guide](./accuracy_benchmark/perplexity_metrics/README.md).
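
For orientation, here is a minimal sketch of chunked perplexity computation at a fixed sequence length, assuming a Hugging Face causal LM. It illustrates the metric described above and is not the repository's evaluation script; the model ID and evaluation text are placeholders.

```python
# Minimal chunked-perplexity sketch (illustrative only; not the repo's evaluation script).
# Assumptions: any Hugging Face causal LM, sequence length 1024, chunk size 512.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
model.eval()

text = "..."  # evaluation text, e.g. a WikiText-2 passage
ids = tokenizer(text, return_tensors="pt").input_ids[:, :1024].to(model.device)

chunk = 512
nll_sum, token_count = 0.0, 0
with torch.no_grad():
    for start in range(0, ids.shape[1], chunk):
        piece = ids[:, start:start + chunk]
        # Passing labels=input_ids makes the model compute the shifted next-token loss.
        out = model(piece, labels=piece)
        n_predicted = piece.shape[1] - 1  # the first token of each chunk has no target
        nll_sum += out.loss.item() * n_predicted
        token_count += n_predicted

ppl = torch.exp(torch.tensor(nll_sum / token_count)).item()
print(f"Perplexity over {token_count} predicted tokens: {ppl:.3f}")
```

A sliding-window (strided) variant, as described in the Hugging Face perplexity guide linked above, gives each chunk more left context and is usually preferred for reporting.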

#### 1.2.3 KL-divergence

KL-divergence (Kullback-Leibler divergence) quantifies the distributional difference between the quantized model and the baseline model. Lower KL-divergence values indicate that the quantized model's output distribution is closer to the original model.

**Learn more about KL-divergence:** [KL Divergence - Wikipedia](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) | [Understanding KL Divergence](https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained)
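
For reference, the KL-divergence between the baseline next-token distribution $P$ and the quantized model's distribution $Q$ over the vocabulary $\mathcal{V}$ is

$$
D_{\mathrm{KL}}(P \parallel Q) = \sum_{x \in \mathcal{V}} P(x)\,\log\frac{P(x)}{Q(x)},
$$

averaged here over token positions; it is zero exactly when the two distributions coincide.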

**Supported backends:** Evaluation is supported with both PyTorch and ONNX Runtime (onnxruntime-cuda and onnxruntime-trt-rtx-ep).

- **Baseline model**: Hugging Face FP16 model
- **Quantized models**: Models where quantization is simulated (a.k.a. fake quantization), typically using the PyTorch-CUDA backend for evaluation. In fake quantization, weights are quantized and immediately dequantized to simulate quantization error. The Inference Backend column in the table below indicates whether the reported results come from PyTorch simulation or ONNX Runtime-based inference.
- **Configuration**: Windows OS, GPU RTX 5090, nvidia-modelopt v0.39.0, onnxruntime-genai-cuda 0.9.2, onnxruntime-gpu 1.23.0, torch 2.8.0+cu128, transformers 4.49.0

| Model | Quantization Method | Quantization Granularity | KL-divergence | Inference Backend |
|:-----------------------|:-------------------------------------------------|:--------------------------------------------------------------------|:--------------|:------------------------------|
| Qwen2.5-1.5B-Instruct | Base FP16 (Baseline) | - | 0.000 | PyTorch (FP16) |
| Qwen2.5-1.5B-Instruct | int4+int8 Blockwise-max_algo-mixed_quant (simulated) | INT4: per-block (block-size=128), INT8: per-channel (row-wise) | 0.336 | PyTorch (fake quantization) |
| Qwen2.5-1.5B-Instruct | int4+int8 max_algo-mixed_quant (simulated, per-channel) | INT4: per-block (block-size=128), INT8: per-channel (row-wise) | 0.337 | PyTorch (fake quantization) |
| Llama-3.2-3B-Instruct | Base FP16 (Baseline) | - | 0.000 | PyTorch (FP16) |
| Llama-3.2-3B-Instruct | int4+int8 Blockwise-awq-lite_algo-mixed_quant (simulated) | INT4: per-block (block-size=128), INT8: per-channel (row-wise) | 0.228 | PyTorch (fake quantization) |
| Llama-3.2-3B-Instruct | int4+int8 per-channel-awq-lite_algo-mixed_quant (simulated) | INT4: per-block (block-size=128), INT8: per-channel (row-wise) | 0.230 | PyTorch (fake quantization) |
| Llama-3.2-3B-Instruct | int4+int8 Blockwise-max_algo-mixed_quant (simulated) | INT4: per-block (block-size=128), INT8: per-channel (row-wise) | 0.238 | PyTorch (fake quantization) |
| Llama-3.2-3B-Instruct | int4+int8 per-channel-max_algo-mixed_quant (simulated) | INT4: per-block (block-size=128), INT8: per-channel (row-wise) | 0.238 | PyTorch (fake quantization) |
| Llama-3.2-3B-Instruct | int4 Blockwise-max_algo only (simulated) | INT4: per-block (block-size=128) | 0.334 | PyTorch (fake quantization) |

*All KL-divergence results above are obtained via PyTorch fake-quantization simulation unless otherwise noted. Inference with ONNX Runtime can also be evaluated.*

For detailed instructions on computing KL-divergence, please refer to the [KL-divergence Evaluation Guide](./accuracy_benchmark/kl_divergence_metrics/README.md).
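
For orientation, the sketch below computes the average token-level KL-divergence between a baseline model's logits and a quantized (or fake-quantized) model's logits on the same input. It is an illustration of the metric only, not the repository's evaluation script; `baseline_model`, `quantized_model`, and `input_ids` are placeholders.

```python
# Token-level KL(P_baseline || Q_quantized) sketch (illustrative only).
import torch
import torch.nn.functional as F

def mean_token_kl(baseline_logits: torch.Tensor, quantized_logits: torch.Tensor) -> float:
    """Average KL-divergence over all token positions.

    Both tensors have shape (batch, seq_len, vocab_size). The baseline defines
    the reference distribution P; the quantized model defines Q.
    """
    log_p = F.log_softmax(baseline_logits.float(), dim=-1)
    log_q = F.log_softmax(quantized_logits.float(), dim=-1)
    # With log_target=True, F.kl_div(input=log_q, target=log_p) computes
    # P(x) * (log P(x) - log Q(x)); summing over the vocab dimension gives KL per position.
    kl = F.kl_div(log_q, log_p, log_target=True, reduction="none").sum(dim=-1)
    return kl.mean().item()

# Usage sketch: run both models on the same input_ids and compare logits.
# with torch.no_grad():
#     base_logits = baseline_model(input_ids).logits
#     quant_logits = quantized_model(input_ids).logits   # fake-quantized weights
# print(f"Mean token KL: {mean_token_kl(base_logits, quant_logits):.3f}")
```
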
76 changes: 76 additions & 0 deletions examples/windows/onnx_ptq/genai_llm/README.md
@@ -82,6 +82,82 @@ Note:

Please refer to `quantize.py` for further details on command-line parameters.

#### Mixed Precision Quantization (INT4 + INT8)

ModelOpt-Windows supports **mixed precision quantization**, where different layers in the model can be quantized to different bit-widths. This approach combines INT4 quantization for most layers (for maximum compression and speed) with INT8 quantization for important or sensitive layers (to preserve accuracy).

##### Why Use Mixed Precision?

Mixed precision quantization balances:

- **Model Size**: Primarily INT4 keeps the model small
- **Inference Speed**: INT4 layers compute faster and have a smaller memory footprint
- **Accuracy Preservation**: Critical layers in INT8 maintain model quality

Based on benchmark results, mixed precision quantization shows significant advantages:

| Model | Metric | INT4 RTN | Mixed RTN (INT4+INT8) | Improvement |
|:------|:-------|:-------------|:---------------------|:-----------|
| DeepSeek R1 1.5B | MMLU | 32.40% | 33.90% | +1.5% |
| | Perplexity | 46.304 | 44.332 | -2.0 (lower is better) |
| Llama 3.2 1B | MMLU | 39.90% | 44.70% | +4.8% |
| | Perplexity | 16.900 | 14.176 | -2.7 (lower is better) |
| Qwen 2.5 1.5B | MMLU | 56.70% | 57.50% | +0.8% |
| | Perplexity | 10.933 | 10.338 | -0.6 (lower is better) |

As shown above, mixed precision noticeably improves accuracy over pure INT4 with only a modest increase in on-disk model size (~85-109 MB).

##### How Mixed Precision Works

The quantization strategy selects which layers to quantize to INT8 vs INT4:

1. **INT8 Layers** (Higher Precision): Important layers that significantly impact model quality. Quantized per-channel.

2. **INT4 Layers** (Maximum Compression): All other layers. Quantized block-wise.

This strategy preserves accuracy for the most sensitive layers while maintaining aggressive compression elsewhere.
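
As an illustration of the idea only (the layer names, markers, and helper below are hypothetical and not ModelOpt's actual selection logic), the following sketch partitions linear-layer names into an INT8 set and an INT4 set:

```python
# Hypothetical layer-partition sketch: keep sensitive layers in INT8, the rest in INT4.
# This is NOT ModelOpt's selection logic; it only illustrates the mixed-precision concept.

def partition_layers(layer_names, int8_markers=("layers.0.", "layers.1.", "lm_head")):
    """Split layer names into (int8_layers, int4_layers) by simple substring matching."""
    int8, int4 = [], []
    for name in layer_names:
        (int8 if any(marker in name for marker in int8_markers) else int4).append(name)
    return int8, int4

names = [
    "model.layers.0.self_attn.qkv_proj",
    "model.layers.0.mlp.down_proj",
    "model.layers.10.self_attn.qkv_proj",
    "model.layers.15.mlp.down_proj",
]
int8_layers, int4_layers = partition_layers(names)
print("INT8 (per-channel):", int8_layers)
print("INT4 (block-wise): ", int4_layers)
```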

##### Using Mixed Precision Quantization

###### Method 1: Use the default mixed precision strategy

```bash
python quantize.py --model_name=meta-llama/Llama-3.2-1B \
--onnx_path="E:\models\llama3.2-1b-fp16\model.onnx" \
--output_path="E:\models\llama3.2-1b-int4-int8-mixed\model.onnx" \
--algo=awq_lite \
--calib_size=32 \
--enable_mixed_quant
```

The `--enable_mixed_quant` flag automatically applies the default strategy.

###### Method 2: Specify custom layers for INT8

```bash
python quantize.py --model_name=meta-llama/Llama-3.2-1B \
--onnx_path="E:\models\llama3.2-1b-fp16\model.onnx" \
--output_path="E:\models\llama3.2-1b-int4-int8-custom\model.onnx" \
--algo=awq_lite \
--calib_size=32 \
--layers_8bit="layers.0,layers.1,layers.15,layers.16"
```

The `--layers_8bit` option allows you to manually specify which layers to quantize to INT8. You can use:

- Layer indices: `layers.0,layers.5,layers.10`
- Layer paths: `model/layers.0/attn/qkv_proj`
- Partial names: `qkv_proj,down_proj`

##### Technical Details

- **Block Size**: INT4 layers use block-wise quantization (default block size 128); INT8 layers use per-channel quantization (see the sketch after this list)
- **Quantization Axis**: INT4 (per-block), INT8 (per-channel row-wise)
- **Compatibility**: Works with both `awq_lite` and `rtn_dq` algorithms
- **Automatic Detection**: The `--layers_8bit` option automatically enables mixed quantization
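
To make the two granularities concrete, here is a minimal NumPy sketch of symmetric fake quantization (quantize, then immediately dequantize) with per-channel INT8 versus block-wise INT4. It illustrates the concept only and is not ModelOpt's implementation.

```python
# Illustrative fake-quantization sketch (NOT ModelOpt's implementation):
# per-channel (row-wise) symmetric INT8 vs. block-wise symmetric INT4 on a weight matrix.
import numpy as np

def fake_quant_int8_per_channel(w: np.ndarray) -> np.ndarray:
    """Symmetric INT8 with one scale per output channel (row)."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -128, 127)
    return q * scale  # dequantize immediately: "fake" quantization

def fake_quant_int4_blockwise(w: np.ndarray, block: int = 128) -> np.ndarray:
    """Symmetric INT4 with one scale per row for every `block` input channels."""
    out = np.empty_like(w)
    for start in range(0, w.shape[1], block):
        blk = w[:, start:start + block]
        scale = np.abs(blk).max(axis=1, keepdims=True) / 7.0
        q = np.clip(np.round(blk / scale), -8, 7)
        out[:, start:start + block] = q * scale
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 512)).astype(np.float32)
err_int8 = np.abs(w - fake_quant_int8_per_channel(w)).mean()
err_int4 = np.abs(w - fake_quant_int4_blockwise(w)).mean()
print(f"mean abs error  INT8 per-channel: {err_int8:.4f}   INT4 block-wise: {err_int4:.4f}")
```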

For more benchmark results and detailed accuracy metrics, refer to the [Benchmark Guide](../../Benchmark.md).

### Evaluate the Quantized Model

To evaluate the quantized model, please refer to the [accuracy benchmarking](../../accuracy_benchmark/README.md) and [onnxruntime-genai performance benchmarking](https://github.com/microsoft/onnxruntime-genai/tree/main/benchmark/python).