Gemma 2 9B Instruct
Gemma 2 9B Instruct needs roughly 4.6 GB of VRAM for its weights at Q4 quantization (18.4 GB at FP16); with KV cache and runtime overhead at 8k context, plan for about 8.3 GB total. 57 of the GPUs we track can run it fully in VRAM at 8k context.
Google · 9.2B params · 8k context · Gemma · commercial use OK
VRAM at each quantization
Assumes 8k context. KV cache grows linearly with context length, and the totals include headroom for runtime overhead beyond weights + KV cache (see the sketch after the table).
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 18.4 GB | 2.82 GB | 23.8 GB |
| Q8 | 9.2 GB | 2.82 GB | 13.5 GB |
| Q6_K | 6.9 GB | 2.82 GB | 10.9 GB |
| Q5_K_M | 5.8 GB | 2.82 GB | 9.6 GB |
| Q4_K_M | 4.6 GB | 2.82 GB | 8.3 GB |
| Q3_K_M | 3.7 GB | 2.82 GB | 7.3 GB |
| Q2_K | 2.8 GB | 2.82 GB | 6.3 GB |
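The totals above follow from simple arithmetic: weights ≈ parameter count × bits per weight, KV cache ≈ 2 (K and V) × layers × KV heads × head dim × 2 bytes × context length, plus roughly 12% headroom for runtime buffers. The sketch below is a rough estimate only: it assumes the published Gemma 2 9B configuration (42 layers, 8 KV heads, head dim 256), an FP16 KV cache, nominal bits per weight for each quant, and an overhead factor inferred from the table rather than measured.

```python
# Rough VRAM estimate for Gemma 2 9B Instruct, reproducing the table above.
# Assumptions: 9.2B params; 42 layers, 8 KV heads, head_dim 256 (Gemma 2 9B config);
# FP16 KV cache; ~12% overhead factor inferred from the table; nominal bits per weight.

GB = 1e9  # decimal gigabytes, matching the table

PARAMS = 9.2e9
LAYERS, KV_HEADS, HEAD_DIM = 42, 8, 256
CONTEXT = 8192
OVERHEAD = 1.12  # assumed ~12% headroom for activations and runtime buffers


def kv_cache_bytes(context: int) -> float:
    # K and V tensors, 2 bytes per element (FP16), per layer and KV head
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * context


def total_vram_gb(bits_per_weight: float, context: int = CONTEXT) -> float:
    weights_bytes = PARAMS * bits_per_weight / 8
    return (weights_bytes + kv_cache_bytes(context)) * OVERHEAD / GB


if __name__ == "__main__":
    print(f"KV cache @ 8k: {kv_cache_bytes(CONTEXT) / GB:.2f} GB")  # ~2.82 GB
    for name, bpw in [("FP16", 16), ("Q8", 8), ("Q4_K_M", 4)]:
        print(f"{name:7s} total: {total_vram_gb(bpw):.1f} GB")      # ~23.8 / 13.5 / 8.3 GB
```

Real GGUF files deviate slightly from the nominal bit widths (Q4_K_M, for example, mixes block sizes), so treat these figures as planning estimates rather than exact requirements.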
Benchmarks
GPUs that run Gemma 2 9B Instruct natively (57)
- NVIDIA RTX 5090 · FP16 · 97.4 t/s
- NVIDIA RTX 4090 · Q8 · 109.6 t/s
- NVIDIA RTX 4080 · Q8 · 77.9 t/s
- NVIDIA RTX 4070 Ti · Q6_K · 73 t/s
- NVIDIA RTX 4070 · Q6_K · 73 t/s
- NVIDIA RTX 4060 Ti 16GB · Q8 · 31.3 t/s
- NVIDIA RTX 4060 · Q3_K_M · 73.9 t/s
- NVIDIA RTX 3090 · Q8 · 101.7 t/s
- NVIDIA RTX 3090 Ti · Q8 · 109.6 t/s
- NVIDIA RTX 3080 10GB · Q4_K_M · 165.2 t/s
- NVIDIA RTX 3060 12GB · Q6_K · 52.2 t/s
- NVIDIA H100 80GB · FP16 · 182.1 t/s
- NVIDIA A100 80GB · FP16 · 110.8 t/s
- NVIDIA A100 40GB · FP16 · 84.5 t/s
- NVIDIA L40S · FP16 · 47 t/s
- NVIDIA RTX A6000 · FP16 · 41.7 t/s
- NVIDIA RTX 6000 Ada · FP16 · 52.2 t/s
- AMD Radeon RX 7900 XTX · Q8 · 104.3 t/s
- AMD Radeon RX 7900 XT · Q8 · 87 t/s
- AMD Radeon RX 6800 XT · Q8 · 55.7 t/s
- AMD Instinct MI300X · FP16 · 288 t/s
- Apple M4 Ultra (384GB) · FP16 · 59.3 t/s
- Apple M4 Ultra (192GB) · FP16 · 59.3 t/s
- Apple M4 Max (128GB) · FP16 · 29.7 t/s
- Apple M4 Max (96GB) · FP16 · 29.7 t/s
- Apple M4 Max (64GB) · FP16 · 29.7 t/s
- Apple M4 Max (48GB) · FP16 · 29.7 t/s
- Apple M4 Pro (48GB) · FP16 · 14.8 t/s
- Apple M4 Pro (24GB) · Q8 · 29.7 t/s
- Apple M4 (32GB) · FP16 · 6.5 t/s
- Apple M4 (16GB) · Q6_K · 17.4 t/s
- Apple M3 Max (128GB) · FP16 · 21.7 t/s
- Apple M3 Max (96GB) · FP16 · 21.7 t/s
- Apple M3 Max (64GB) · FP16 · 21.7 t/s
- Apple M3 Max (48GB) · FP16 · 21.7 t/s
- Apple M3 Max (36GB) · FP16 · 21.7 t/s
- Apple M3 Pro (36GB) · FP16 · 8.2 t/s
- Apple M3 Pro (18GB) · Q8 · 16.3 t/s
- Apple M3 (24GB) · Q8 · 10.9 t/s
- Apple M3 (16GB) · Q6_K · 14.5 t/s
- Apple M2 Ultra (384GB) · FP16 · 43.5 t/s
- Apple M2 Ultra (192GB) · FP16 · 43.5 t/s
- Apple M2 Max (96GB) · FP16 · 21.7 t/s
- Apple M2 Max (64GB) · FP16 · 21.7 t/s
- Apple M2 Max (32GB) · FP16 · 21.7 t/s
- Apple M2 Pro (32GB) · FP16 · 10.9 t/s
- Apple M2 Pro (16GB) · Q6_K · 29 t/s
- Apple M2 (24GB) · Q8 · 10.9 t/s
- Apple M2 (16GB) · Q6_K · 14.5 t/s
- Apple M1 Ultra (128GB) · FP16 · 43.5 t/s
- Apple M1 Ultra (64GB) · FP16 · 43.5 t/s
- Apple M1 Max (64GB) · FP16 · 21.7 t/s
- Apple M1 Max (32GB) · FP16 · 21.7 t/s
- Apple M1 Pro (32GB) · FP16 · 10.9 t/s
- Apple M1 Pro (16GB) · Q6_K · 29 t/s
- Apple M1 (16GB) · Q6_K · 9.9 t/s
- Intel Arc A770 16GB · Q8 · 60.9 t/s
Plus 1 configuration that runs it with CPU offload (slower)
- CPU only (system RAM) · FP16 · 0.6 t/s
