Mixtral 8x22B Instruct v0.1
Mixtral 8x22B Instruct v0.1 needs roughly 70.5GB of VRAM for weights at Q4_K_M quantization (282.0GB at FP16); with KV cache and runtime overhead, budget about 81GB total. 18 GPUs we track can run it fully in VRAM at 8k context.
Mistral AI · 141B params · 39B active (MoE) · 64k context · Apache 2.0 · Commercial use OK
VRAM at each quantization
Assumes 8k context. KV cache grows linearly with context length. Totals include weights, KV cache, and roughly 12% runtime overhead (activations and buffers), which is why they exceed the sum of the first two columns.
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 282.0 GB | 1.88 GB | 317.9 GB |
| Q8 | 141.0 GB | 1.88 GB | 160.0 GB |
| Q6_K | 105.8 GB | 1.88 GB | 120.5 GB |
| Q5_K_M | 88.1 GB | 1.88 GB | 100.8 GB |
| Q4_K_M | 70.5 GB | 1.88 GB | 81.1 GB |
| Q3_K_M | 56.4 GB | 1.88 GB | 65.3 GB |
| Q2_K | 42.3 GB | 1.88 GB | 49.5 GB |
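The table follows simple arithmetic. Below is a minimal sketch of it; the bits-per-weight values and the ~12% overhead factor are back-solved from the table rather than official figures, while the layer and head counts are Mixtral 8x22B's published architecture (56 layers, 8 KV heads, head dim 128).

```python
# Back-of-envelope VRAM estimate for Mixtral 8x22B Instruct v0.1.
# Bits-per-weight and the 1.12 overhead factor are inferred from
# the table above, not official figures.

PARAMS = 141e9       # total parameters (all experts must be resident)
N_LAYERS = 56        # transformer layers
N_KV_HEADS = 8       # grouped-query attention KV heads
HEAD_DIM = 128       # dimension per head
KV_BYTES = 2         # FP16 KV cache
OVERHEAD = 1.12      # ~12% runtime overhead (inferred from the table)

BITS_PER_WEIGHT = {
    "FP16": 16, "Q8": 8, "Q6_K": 6, "Q5_K_M": 5,
    "Q4_K_M": 4, "Q3_K_M": 3.2, "Q2_K": 2.4,
}

def kv_cache_gb(context: int) -> float:
    # 2x for keys and values; grows linearly with context length
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context * KV_BYTES / 1e9

def total_vram_gb(quant: str, context: int = 8192) -> float:
    weights = PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9
    return (weights + kv_cache_gb(context)) * OVERHEAD

for q in BITS_PER_WEIGHT:
    print(f"{q:7s} {total_vram_gb(q):6.1f} GB")
# FP16 -> ~317.9 GB, Q4_K_M -> ~81.1 GB, matching the table
```

At 8k context the KV cache works out to 2 × 56 × 8 × 128 × 8192 × 2 bytes ≈ 1.88GB, matching the table; doubling the context doubles that term.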
Benchmarks
GPUs that run Mixtral 8x22B Instruct v0.1 natively (18)
- NVIDIA H100 80GB · Q3_K_M · 236.2 t/s
- NVIDIA A100 80GB · Q3_K_M · 143.8 t/s
- AMD Instinct MI300X · Q8 · 149.5 t/s
- Apple M4 Ultra (384GB) · FP16 · 15.4 t/s
- Apple M4 Ultra (192GB) · Q8 · 30.8 t/s
- Apple M4 Max (128GB) · Q6_K · 20.5 t/s
- Apple M4 Max (96GB) · Q4_K_M · 30.8 t/s
- Apple M4 Max (64GB) · Q2_K · 51.3 t/s
- Apple M3 Max (128GB) · Q6_K · 15.0 t/s
- Apple M3 Max (96GB) · Q4_K_M · 22.6 t/s
- Apple M3 Max (64GB) · Q2_K · 37.6 t/s
- Apple M2 Ultra (384GB) · FP16 · 11.3 t/s
- Apple M2 Ultra (192GB) · Q8 · 22.6 t/s
- Apple M2 Max (96GB) · Q4_K_M · 22.6 t/s
- Apple M2 Max (64GB) · Q2_K · 37.6 t/s
- Apple M1 Ultra (128GB) · Q6_K · 30.1 t/s
- Apple M1 Ultra (64GB) · Q2_K · 75.2 t/s
- Apple M1 Max (64GB) · Q2_K · 37.6 t/s
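Each GPU in the list above lands on the largest quant whose total footprint fits its VRAM. A hypothetical helper that reproduces this selection from the table's 8k-context totals:

```python
# Hypothetical fit check: pick the largest quant from the table above
# that fits entirely in a given VRAM budget (8k-context totals).

TOTALS_GB = {  # quant -> total VRAM from the table, largest first
    "FP16": 317.9, "Q8": 160.0, "Q6_K": 120.5, "Q5_K_M": 100.8,
    "Q4_K_M": 81.1, "Q3_K_M": 65.3, "Q2_K": 49.5,
}

def best_quant(vram_gb: float) -> str | None:
    for quant, total in TOTALS_GB.items():
        if total <= vram_gb:
            return quant
    return None  # doesn't fit even at Q2_K -> needs CPU offload

print(best_quant(80))   # Q3_K_M (Q4_K_M needs 81.1 GB, just over)
print(best_quant(192))  # Q8
print(best_quant(32))   # None -> CPU offload territory
```

This is why the 80GB cards top out at Q3_K_M: Q4_K_M misses the budget by about 1GB once KV cache and overhead are counted.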
Plus 5 GPUs that run it with CPU offload (slower)
- NVIDIA RTX 5090 · Q2_K · 38.3 t/s
- NVIDIA A100 40GB · Q2_K · 33.2 t/s
- NVIDIA L40S · Q3_K_M · 13.8 t/s
- NVIDIA RTX A6000 · Q3_K_M · 12.3 t/s
- NVIDIA RTX 6000 Ada · Q3_K_M · 15.4 t/s
Notes
MoE: 141B total / 39B active. All 141B parameters must be resident in memory, but each token only routes through ~39B of them, so once the weights fit, decode speed is closer to that of a 39B dense model.
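A rough way to sanity-check the Apple figures: single-stream decoding is memory-bandwidth-bound, so tokens per second is approximately memory bandwidth divided by the bytes read per token, and for an MoE model only the active parameters are read. A back-of-envelope sketch, assuming the top-spec M4 Max's ~546 GB/s unified memory bandwidth:

```python
# Back-of-envelope decode speed for an MoE model: bandwidth-bound,
# only active parameters are read per token. The bandwidth figure
# is the published spec for the top-spec M4 Max; bits-per-weight
# as in the quantization table above.

ACTIVE_PARAMS = 39e9      # parameters activated per token
BANDWIDTH_GBPS = 546      # M4 Max unified memory bandwidth (GB/s)

def approx_tok_per_s(bits_per_weight: float) -> float:
    bytes_per_token = ACTIVE_PARAMS * bits_per_weight / 8
    return BANDWIDTH_GBPS * 1e9 / bytes_per_token

print(f"{approx_tok_per_s(6):.1f} t/s at Q6_K")    # ~18.7; listed: 20.5
print(f"{approx_tok_per_s(4):.1f} t/s at Q4_K_M")  # ~28.0; listed: 30.8
```

The estimates land within ~10% of the measured M4 Max numbers above, which is about as close as a pure bandwidth model gets.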
