GLM-4.6 355B
GLM-4.6 355B needs roughly 202.3GB of VRAM at Q4_K_M quantization (798.7GB at FP16). Of the GPUs we track, 10 can run it fully in VRAM at 8k context.
Z.ai · 355B params · 32B active (MoE) · 198k context · MIT license · Commercial use OK
VRAM at each quantization
Assumes 8k context. KV cache grows linearly with context length. Totals exceed weights plus KV cache because they include additional runtime overhead.
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 710.0 GB | 3.09 GB | 798.7 GB |
| Q8 | 355.0 GB | 3.09 GB | 401.1 GB |
| Q6_K | 266.3 GB | 3.09 GB | 301.7 GB |
| Q5_K_M | 221.9 GB | 3.09 GB | 252.0 GB |
| Q4_K_M | 177.5 GB | 3.09 GB | 202.3 GB |
| Q3_K_M | 142.0 GB | 3.09 GB | 162.5 GB |
| Q2_K | 106.5 GB | 3.09 GB | 122.7 GB |
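The weight figures in the table follow directly from bits-per-parameter arithmetic, and the KV cache scales linearly with context. A minimal sketch of that estimate (an assumed reconstruction of the table's formula, not the site's exact method; the bits-per-parameter values are fitted to the table):

```python
PARAMS_B = 355  # total parameters, in billions

# Approximate effective bits per parameter for each quantization level
# (assumed values, back-derived from the table's weight column).
BITS_PER_PARAM = {
    "FP16": 16.0,
    "Q8": 8.0,
    "Q6_K": 6.0,
    "Q5_K_M": 5.0,
    "Q4_K_M": 4.0,
    "Q3_K_M": 3.2,
    "Q2_K": 2.4,
}

def weight_gb(quant: str) -> float:
    """Weight memory in GB: parameters (billions) * bits / 8."""
    return PARAMS_B * BITS_PER_PARAM[quant] / 8

def kv_cache_gb(context_tokens: int, gb_per_8k: float = 3.09) -> float:
    """KV cache grows linearly with context length; 3.09 GB at 8k."""
    return gb_per_8k * context_tokens / 8192

print(f"Q4_K_M weights: {weight_gb('Q4_K_M'):.1f} GB")  # 177.5 GB
print(f"KV cache at 32k: {kv_cache_gb(32768):.2f} GB")  # 12.36 GB
```

For example, quadrupling the context from 8k to 32k grows the KV cache from 3.09 GB to about 12.4 GB, while the weight footprint stays fixed.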
Benchmarks
Benchmarks for this model are not yet available on the Open LLM Leaderboard v2. This is common for recently released models. Check back soon.
GPUs that run GLM-4.6 355B natively (10)
- NVIDIA DGX Spark (128GB) · Q2_K · 31.3 t/s
- AMD Instinct MI300X · Q3_K_M · 455.5 t/s
- AMD Strix Halo (128GB) · Q2_K · 29.3 t/s
- Apple M4 Ultra (384GB) · Q6_K · 50.1 t/s
- Apple M4 Ultra (192GB) · Q3_K_M · 93.8 t/s
- Apple M4 Max (128GB) · Q2_K · 62.6 t/s
- Apple M3 Max (128GB) · Q2_K · 45.8 t/s
- Apple M2 Ultra (384GB) · Q6_K · 36.7 t/s
- Apple M2 Ultra (192GB) · Q3_K_M · 68.8 t/s
- Apple M1 Ultra (128GB) · Q2_K · 91.7 t/s
Notes
Context window expanded from 128K to 200K compared with GLM-4.5, alongside coding-focused improvements.