Llama 3.1 405B Instruct
Llama 3.1 405B Instruct needs roughly 202.5 GB of VRAM for its weights at Q4_K_M quantization (810.0 GB at FP16); with KV cache and runtime overhead at 8k context, the Q4_K_M total comes to about 231.5 GB. 5 GPUs we track can run it fully in VRAM at 8k context.
Meta · 405B params · 128k context · Llama 3.1 Community License · Commercial use ok
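The weight figures in the table below follow directly from the parameter count. Here is a rough sketch of the arithmetic; the bits-per-weight values for Q3_K_M (3.2) and Q2_K (2.4) are the effective rates implied by the table, back-solved rather than official GGUF block sizes:

```python
# Rule of thumb: weight memory ≈ params × bits-per-weight / 8.
# Effective bits for Q3_K_M (3.2) and Q2_K (2.4) are back-solved from
# the table below; real GGUF files vary slightly due to block metadata.
PARAMS = 405e9  # Llama 3.1 405B

for quant, bits in [("FP16", 16), ("Q8", 8), ("Q6_K", 6), ("Q5_K_M", 5),
                    ("Q4_K_M", 4), ("Q3_K_M", 3.2), ("Q2_K", 2.4)]:
    print(f"{quant:7s} {PARAMS * bits / 8 / 1e9:6.1f} GB")
# FP16 810.0, Q8 405.0, Q6_K 303.8, ..., Q4_K_M 202.5, Q2_K 121.5
```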
VRAM at each quantization
Assumes 8k context. KV cache grows linearly with context length. Totals also include roughly 12% overhead for activations and runtime buffers on top of weights plus KV cache (see the sketch after the table).
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 810.0 GB | 4.23 GB | 911.9 GB |
| Q8 | 405.0 GB | 4.23 GB | 458.3 GB |
| Q6_K | 303.8 GB | 4.23 GB | 344.9 GB |
| Q5_K_M | 253.1 GB | 4.23 GB | 288.2 GB |
| Q4_K_M | 202.5 GB | 4.23 GB | 231.5 GB |
| Q3_K_M | 162.0 GB | 4.23 GB | 186.2 GB |
| Q2_K | 121.5 GB | 4.23 GB | 140.8 GB |
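A minimal sketch of the table's arithmetic, assuming Llama 3.1 405B's published shape (126 layers, 8 KV heads under grouped-query attention, head dim 128) and an FP16 KV cache; the ~12% overhead factor is back-solved from the table's totals, not taken from an official source:

```python
# KV cache and total-VRAM arithmetic for Llama 3.1 405B.
N_LAYERS, N_KV_HEADS, HEAD_DIM = 126, 8, 128  # published model shape
KV_BYTES = 2  # FP16 keys/values, regardless of weight quantization

def kv_cache_gb(context_len: int) -> float:
    # 2x for keys and values, per layer, per KV head, per head dim.
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * KV_BYTES * context_len / 1e9

def total_gb(weights_gb: float, context_len: int = 8192,
             overhead: float = 0.12) -> float:
    # The table's totals match (weights + KV cache) × 1.12; the 12%
    # overhead for activations/buffers is inferred from those numbers.
    return (weights_gb + kv_cache_gb(context_len)) * (1 + overhead)

print(kv_cache_gb(8192))  # ~4.23 GB, linear in context length
print(total_gb(202.5))    # ~231.5 GB for Q4_K_M at 8k context
```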
Benchmarks
GPUs that run Llama 3.1 405B Instruct natively (5)
- AMD Instinct MI300X · Q2_K · 43.6 t/s
- Apple M4 Ultra (384GB) · Q6_K · 3.6 t/s
- Apple M4 Ultra (192GB) · Q3_K_M · 6.7 t/s
- Apple M2 Ultra (384GB) · Q6_K · 2.6 t/s
- Apple M2 Ultra (192GB) · Q3_K_M · 4.9 t/s
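A hypothetical helper showing how a list like this can be derived: given a device's usable VRAM, take the highest-quality quantization whose 8k-context total fits. The function name and the assumption that the full nominal capacity is usable are illustrative only; real runtimes reserve some memory, which is why a benchmark may use a smaller quant than the raw budget allows.

```python
# Totals (GB) at 8k context, copied from the table above,
# ordered from highest quality to lowest.
TOTALS_GB = {"FP16": 911.9, "Q8": 458.3, "Q6_K": 344.9, "Q5_K_M": 288.2,
             "Q4_K_M": 231.5, "Q3_K_M": 186.2, "Q2_K": 140.8}

def best_quant(vram_gb: float) -> str | None:
    # Hypothetical helper: return the first (highest-quality) quant
    # whose total footprint fits the given VRAM budget.
    for quant, total in TOTALS_GB.items():
        if total <= vram_gb:
            return quant
    return None  # does not fit at any tracked quantization

print(best_quant(192))  # Q3_K_M (186.2 GB fits in 192 GB)
print(best_quant(384))  # Q6_K   (344.9 GB fits in 384 GB)
```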
Notes
Frontier-class open weights. Realistically needs a multi-GPU server or a top-spec unified-memory machine.
