GPT-OSS 20B
GPT-OSS 20B needs roughly 12.2 GB of VRAM at Q4_K_M quantization (47.5 GB at FP16). 61 of the GPUs we track can run it fully in VRAM at 8k context.
OpenAI · 21B params · 4B active (MoE) · 128k context · Apache 2.0 · Commercial use OK
VRAM at each quantization
Assumes 8k context. The KV cache grows linearly with context length. Totals exceed weights plus KV cache because they include estimated runtime overhead (compute buffers and the like); a back-of-envelope estimator follows the table.
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 42.0 GB | 0.40 GB | 47.5 GB |
| Q8 | 21.0 GB | 0.40 GB | 24.0 GB |
| Q6_K | 15.8 GB | 0.40 GB | 18.1 GB |
| Q5_K_M | 13.1 GB | 0.40 GB | 15.2 GB |
| Q4_K_M | 10.5 GB | 0.40 GB | 12.2 GB |
| Q3_K_M | 8.4 GB | 0.40 GB | 9.9 GB |
| Q2_K | 6.3 GB | 0.40 GB | 7.5 GB |
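To see where these numbers come from, here is a rough Python sketch. The layer and head counts are taken from OpenAI's published GPT-OSS 20B config (24 layers, 8 KV heads, head_dim 64) and the KV cache is assumed to be stored in FP16; the ~12% overhead factor is our own assumption, fitted to the totals in the table, not a published figure.

```python
# Back-of-envelope VRAM estimator for GPT-OSS 20B.
# Assumptions: 24 layers, 8 KV heads, head_dim 64 (OpenAI's published
# config), FP16 KV cache entries, and a runtime overhead of ~12% of the
# weight size -- the overhead factor is fitted to the table, not official.

GB = 1e9  # the table uses decimal gigabytes (21B params x 2 bytes = 42.0 GB)

def kv_cache_gb(context_len: int,
                layers: int = 24,
                kv_heads: int = 8,
                head_dim: int = 64,
                bytes_per_elem: int = 2) -> float:
    """K and V tensors: 2 * layers * kv_heads * head_dim * bytes, per token."""
    per_token_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return per_token_bytes * context_len / GB

def total_vram_gb(weights_gb: float, context_len: int,
                  overhead_frac: float = 0.12) -> float:
    """Weights + fitted runtime overhead + KV cache."""
    return weights_gb * (1 + overhead_frac) + kv_cache_gb(context_len)

print(f"{kv_cache_gb(8_192):.2f} GB")           # 0.40 GB, as in the table
print(f"{total_vram_gb(10.5, 8_192):.1f} GB")   # ~12.2 GB for Q4_K_M at 8k
print(f"{total_vram_gb(10.5, 131_072):.1f} GB") # Q4_K_M at the full 128k context
```

Running it reproduces the table's Q4_K_M row and shows why long contexts matter: at the full 128k window the KV cache alone adds about 6.4 GB.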
Benchmarks
GPUs that run GPT-OSS 20B natively (61)
- NVIDIA RTX 5090 · Q8 · 492.8 t/s
- NVIDIA RTX 4090 · Q6_K · 369.6 t/s
- NVIDIA RTX 4080 · Q5_K_M · 315.5 t/s
- NVIDIA RTX 4070 Ti · Q3_K_M · 346.5 t/s
- NVIDIA RTX 4070 · Q3_K_M · 346.5 t/s
- NVIDIA RTX 4060 Ti 16GB · Q5_K_M · 126.7 t/s
- NVIDIA RTX 4060 · Q2_K · 249.3 t/s
- NVIDIA RTX 3090 · Q6_K · 343.2 t/s
- NVIDIA RTX 3090 Ti · Q6_K · 369.6 t/s
- NVIDIA RTX 3080 10GB · Q2_K · 696.7 t/s
- NVIDIA RTX 3060 12GB · Q3_K_M · 247.5 t/s
- NVIDIA H100 80GB · FP16 · 460.6 t/s
- NVIDIA A100 80GB · FP16 · 280.4 t/s
- NVIDIA A100 40GB · Q8 · 427.6 t/s
- NVIDIA L40S · Q8 · 237.6 t/s
- NVIDIA RTX A6000 · Q8 · 211.2 t/s
- NVIDIA RTX 6000 Ada · Q8 · 264 t/s
- NVIDIA DGX Spark (128GB) · FP16 · 37.5 t/s
- AMD Radeon RX 7900 XTX · Q6_K · 352 t/s
- AMD Radeon RX 7900 XT · Q6_K · 293.3 t/s
- AMD Radeon RX 6800 XT · Q5_K_M · 225.3 t/s
- AMD Instinct MI300X · FP16 · 728.8 t/s
- AMD Strix Halo (128GB) · FP16 · 35.2 t/s
- AMD Strix Halo (96GB) · FP16 · 35.2 t/s
- AMD Strix Halo (64GB) · FP16 · 35.2 t/s
- Apple M4 Ultra (384GB) · FP16 · 150.2 t/s
- Apple M4 Ultra (192GB) · FP16 · 150.2 t/s
- Apple M4 Max (128GB) · FP16 · 75.1 t/s
- Apple M4 Max (96GB) · FP16 · 75.1 t/s
- Apple M4 Max (64GB) · FP16 · 75.1 t/s
- Apple M4 Max (48GB) · Q8 · 150.2 t/s
- Apple M4 Pro (48GB) · Q8 · 75.1 t/s
- Apple M4 Pro (24GB) · Q6_K · 100.1 t/s
- Apple M4 (32GB) · Q8 · 33 t/s
- Apple M4 (16GB) · Q3_K_M · 82.5 t/s
- Apple M3 Max (128GB) · FP16 · 55 t/s
- Apple M3 Max (96GB) · FP16 · 55 t/s
- Apple M3 Max (64GB) · FP16 · 55 t/s
- Apple M3 Max (48GB) · Q8 · 110 t/s
- Apple M3 Max (36GB) · Q8 · 110 t/s
- Apple M3 Pro (36GB) · Q8 · 41.3 t/s
- Apple M3 Pro (18GB) · Q4_K_M · 82.5 t/s
- Apple M3 (24GB) · Q6_K · 36.7 t/s
- Apple M3 (16GB) · Q3_K_M · 68.8 t/s
- Apple M2 Ultra (384GB) · FP16 · 110 t/s
- Apple M2 Ultra (192GB) · FP16 · 110 t/s
- Apple M2 Max (96GB) · FP16 · 55 t/s
- Apple M2 Max (64GB) · FP16 · 55 t/s
- Apple M2 Max (32GB) · Q8 · 110 t/s
- Apple M2 Pro (32GB) · Q8 · 55 t/s
- Apple M2 Pro (16GB) · Q3_K_M · 137.5 t/s
- Apple M2 (24GB) · Q6_K · 36.7 t/s
- Apple M2 (16GB) · Q3_K_M · 68.8 t/s
- Apple M1 Ultra (128GB) · FP16 · 110 t/s
- Apple M1 Ultra (64GB) · FP16 · 110 t/s
- Apple M1 Max (64GB) · FP16 · 55 t/s
- Apple M1 Max (32GB) · Q8 · 110 t/s
- Apple M1 Pro (32GB) · Q8 · 55 t/s
- Apple M1 Pro (16GB) · Q3_K_M · 137.5 t/s
- Apple M1 (16GB) · Q3_K_M · 46.8 t/s
- Intel Arc A770 16GB · Q5_K_M · 246.4 t/s
Plus 1 configuration that runs it with CPU offload (slower)
- CPU only (system RAM) · Q8 · 2.6 t/s
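If your hardware isn't listed, a small helper like the following can pick a starting quant. This is an illustrative sketch, not how the list above was generated: the GB totals are copied from the table (8k context), and `best_quant` is a hypothetical helper name.

```python
# Pick the highest-precision quant whose estimated total VRAM (from the
# table above, 8k context) fits a given budget. Totals copied verbatim.

QUANT_TOTALS_GB = [          # (quant, total VRAM in GB at 8k context)
    ("FP16",   47.5),
    ("Q8",     24.0),
    ("Q6_K",   18.1),
    ("Q5_K_M", 15.2),
    ("Q4_K_M", 12.2),
    ("Q3_K_M",  9.9),
    ("Q2_K",    7.5),
]

def best_quant(vram_gb: float) -> str | None:
    """Return the first (highest-precision) quant that fits, or None."""
    for quant, total in QUANT_TOTALS_GB:
        if total <= vram_gb:
            return quant
    return None

print(best_quant(32.0))  # e.g. RTX 5090      -> "Q8"
print(best_quant(16.0))  # e.g. RTX 4060 Ti 16GB -> "Q5_K_M"
print(best_quant(8.0))   # e.g. RTX 4060      -> "Q2_K"
```

Note that the picks in the list above sometimes leave extra headroom beyond this raw fit (for example, the 24 GB RTX 4090 is listed at Q6_K rather than a borderline 24.0 GB Q8), so treat the helper's answer as an upper bound.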
Notes
Smaller sibling of GPT-OSS 120B. Matches o3-mini on key benchmarks; runs on 16 GB of VRAM.