From 896d8e2959370416fa71011526b131f0794f6123 Mon Sep 17 00:00:00 2001
From: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
Date: Wed, 30 Apr 2025 15:41:15 -0400
Subject: [PATCH 1/4] Refresh content for README.md
---
README.md | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 81 insertions(+), 7 deletions(-)
diff --git a/README.md b/README.md
index d30c6a9b5..7908118cc 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,93 @@
-# `bitsandbytes`
+<!-- Header: "bitsandbytes" logo and badges (image markup stripped in extraction) -->
-[![Downloads](…)](https://pepy.tech/project/bitsandbytes) [![Downloads](…)](https://pepy.tech/project/bitsandbytes) [![Downloads](…)](https://pepy.tech/project/bitsandbytes)
+`bitsandbytes` enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption for inference and training:
-The `bitsandbytes` library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.
+* 8-bit optimizers use block-wise quantization to maintain 32-bit performance at a small fraction of the memory cost.
+* LLM.int8(), or 8-bit quantization, enables large language model inference with only half the required memory and without any performance degradation. This method is based on vector-wise quantization, which quantizes most features to 8 bits while treating outliers separately with 16-bit matrix multiplication.
+* QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.
The library includes quantization primitives for 8-bit & 4-bit operations through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit`, and 8-bit optimizers through the `bitsandbytes.optim` module.
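+
+For example, a float16 model built from `torch.nn.Linear` can be swapped for an 8-bit version. This is a minimal sketch rather than official usage: the layer sizes are arbitrary, and quantization happens when the module is moved to a CUDA device.
+
+```python
+import torch
+import torch.nn as nn
+import bitsandbytes as bnb
+
+# A small fp16 model and an int8 equivalent built from Linear8bitLt.
+fp16_model = nn.Sequential(nn.Linear(64, 64), nn.Linear(64, 64)).half()
+int8_model = nn.Sequential(
+    bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False),
+    bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False),
+)
+int8_model.load_state_dict(fp16_model.state_dict())
+int8_model = int8_model.to("cuda")  # weights are quantized to int8 during transfer
+
+out = int8_model(torch.randn(1, 64, dtype=torch.float16, device="cuda"))
+```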
-There are ongoing efforts to support further hardware backends, i.e. Intel CPU + GPU, AMD GPU, Apple Silicon, hopefully NPU.
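+
+Likewise, an 8-bit optimizer is a drop-in replacement for its 32-bit counterpart. A minimal sketch, with a placeholder model and arbitrary hyperparameters:
+
+```python
+import torch
+import bitsandbytes as bnb
+
+model = torch.nn.Linear(64, 64).cuda()  # placeholder model
+# Optimizer state is stored in 8 bits via block-wise quantization.
+optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)
+
+loss = model(torch.randn(8, 64, device="cuda")).sum()
+loss.backward()
+optimizer.step()
+```
+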
+## System Requirements
+bitsandbytes has the following minimum requirements for all platforms:
-**Please head to the official documentation page:**
+* Python 3.9+
+* [PyTorch](https://pytorch.org/get-started/locally/) 2.2+
-**[https://huggingface.co/docs/bitsandbytes/main](https://huggingface.co/docs/bitsandbytes/main)**
+Platform-specific requirements are detailed below:
+#### Linux x86-64
+* glibc >= 2.24
-## License
+#### Windows x86-64
+* Windows Server 2019+ or Windows 11+
+
+
+## :book: Documentation
+* [Official Documentation](https://huggingface.co/docs/bitsandbytes/main)
+* 🤗 [Transformers](https://huggingface.co/docs/transformers/quantization/bitsandbytes)
+* 🤗 [Diffusers](https://huggingface.co/docs/diffusers/quantization/bitsandbytes)
+* 🤗 [PEFT](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model)
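+
+As a quick taste of these integrations, 4-bit loading with 🤗 Transformers is a one-line config change. A sketch only: the model ID is a placeholder, and NF4 with bfloat16 compute mirrors the QLoRA recipe.
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+quant_config = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_quant_type="nf4",
+    bnb_4bit_compute_dtype=torch.bfloat16,
+)
+model = AutoModelForCausalLM.from_pretrained("your-model-id", quantization_config=quant_config)
+```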
+
+## :heart: Sponsors
+The continued maintenance and development of `bitsandbytes` are made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.
+<!-- Sponsor logos (image markup stripped in extraction) -->
+
+## License
`bitsandbytes` is MIT licensed.
We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch) which we use for CPU quantization.
+
+## How to cite us
+If you found this library useful, please consider citing our work:
+
+### QLoRA
+
+```bibtex
+@article{dettmers2023qlora,
+ title={QLoRA: Efficient Finetuning of Quantized LLMs},
+ author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
+ journal={arXiv preprint arXiv:2305.14314},
+ year={2023}
+}
+```
+
+### LLM.int8()
+
+```bibtex
+@article{dettmers2022llmint8,
+ title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
+ author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
+ journal={arXiv preprint arXiv:2208.07339},
+ year={2022}
+}
+```
+
+### 8-bit Optimizers
+
+```bibtex
+@article{dettmers2022optimizers,
+ title={8-bit Optimizers via Block-wise Quantization},
+ author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
+ journal={10th International Conference on Learning Representations, ICLR},
+ year={2022}
+}
+```
From 96a8bf6081b19200bd0d4b52122dc4d9822a3e26 Mon Sep 17 00:00:00 2001
From: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
Date: Mon, 5 May 2025 14:49:13 -0400
Subject: [PATCH 2/4] Update accelerator support chart
---
README.md | 95 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 87 insertions(+), 8 deletions(-)
diff --git a/README.md b/README.md
index 7908118cc..6d576ee9f 100644
--- a/README.md
+++ b/README.md
@@ -31,14 +31,93 @@ bitsandbytes has the following minimum requirements for all platforms:
* Python 3.9+
* [PyTorch](https://pytorch.org/get-started/locally/) 2.2+
-
-Platform-specific requirements are detailed below:
-#### Linux x86-64
-* glibc >= 2.24
-
-#### Windows x86-64
-* Windows Server 2019+ or Windows 11+
-
+ * _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._
+
+#### Accelerator support
+
+| Platform | Accelerator | Hardware Requirements | Support Status |
+|----------|-------------|-----------------------|----------------|
+| 🐧 Linux x86-64 | ◻️ CPU | | 〰️ Partial Support |
+| 🐧 Linux x86-64 | 🟩 NVIDIA GPU | SM50+ minimum, SM75+ recommended | ✅ Full Support * |
+| 🐧 Linux x86-64 | 🟥 AMD GPU | gfx90a, gfx942, gfx1100 | 🚧 |
+| 🐧 Linux x86-64 | 🟦 Intel XPU | TBD | 🚧 |
+| 🐧 Linux aarch64 | CPU | | 〰️ Partial Support |
+| 🐧 Linux aarch64 | 🟩 NVIDIA GPU | SM75, SM80, SM90, SM100 | ✅ Full Support * |
+| 🪟 Windows x86-64 | ◻️ CPU | AVX2 | 〰️ Partial Support |
+| 🪟 Windows x86-64 | 🟩 NVIDIA GPU | SM50+ minimum, SM75+ recommended | ✅ Full Support * |
+| 🪟 Windows x86-64 | 🟦 Intel XPU | TBD | 🚧 |
+| 🍎 macOS arm64 | ◻️ CPU / Metal | Apple M1+ | ❌ Under consideration |
+
+\* Accelerated INT8 requires SM75+.
+
## :book: Documentation
* [Official Documentation](https://huggingface.co/docs/bitsandbytes/main)
From a7a88813eb824501e6d806ce09a69fe0b61cb28e Mon Sep 17 00:00:00 2001
From: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
Date: Mon, 5 May 2025 15:26:43 -0400
Subject: [PATCH 3/4] Add HPU to accelerator table in README
---
README.md | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 6d576ee9f..64429a749 100644
--- a/README.md
+++ b/README.md
@@ -72,9 +72,14 @@ bitsandbytes has the following minimum requirements for all platforms:
 | 🐧 Linux x86-64 | 🟦 Intel XPU | TBD | 🚧 |
+| 🐧 Linux x86-64 | 🟦 Intel HPU | Gaudi1, Gaudi2, Gaudi3 | 🚧 |
-| 🐧 Linux aarch64 | CPU | | 〰️ Partial Support |
+| 🐧 Linux aarch64 | ◻️ CPU | | 〰️ Partial Support |
From c8ef79d7c9851a90aa99d957dedac7524da6497e Mon Sep 17 00:00:00 2001
From: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
Date: Thu, 8 May 2025 11:02:38 -0400
Subject: [PATCH 4/4] update readme for intel XPU
---
README.md | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/README.md b/README.md
index 64429a749..24bf9944f 100644
--- a/README.md
+++ b/README.md
@@ -64,19 +64,26 @@ bitsandbytes has the following minimum requirements for all platforms:
-| 🐧 Linux x86-64 | 🟥 AMD GPU | gfx90a, gfx942, gfx1100 | 🚧 |
+| 🐧 Linux x86-64 | 🟥 AMD GPU | gfx90a, gfx942, gfx1100 | 🚧 In Development |
-| 🐧 Linux x86-64 | 🟦 Intel XPU | TBD | 🚧 |
+| 🐧 Linux x86-64 | 🟦 Intel XPU | Data Center GPU Max Series (Ponte Vecchio), Arc A-Series (Alchemist), Arc B-Series (Battlemage) | 🚧 In Development |
 | 🐧 Linux aarch64 | ◻️ CPU | | 〰️ Partial Support |
@@ -107,8 +114,11 @@ bitsandbytes has the following minimum requirements for all platforms:
-| 🪟 Windows x86-64 | 🟦 Intel XPU | TBD | 🚧 |
+| 🪟 Windows x86-64 | 🟦 Intel XPU | Arc A-Series (Alchemist), Arc B-Series (Battlemage) | 🚧 In Development |
 | 🍎 macOS arm64 | ◻️ CPU / Metal | Apple M1+ | ❌ Under consideration |