# GLM-5.1 754B
GLM-5.1 754B needs roughly 434.0 GB of VRAM at Q4_K_M quantization (1700.7 GB at FP16). Two of the GPUs we track can run it fully in VRAM at 8k context.
Z.ai · 754B params · 44B active (MoE) · 198k context · MIT license · Commercial use OK
## VRAM at each quantization
Assumes 8k context; the KV cache grows linearly with context length. Totals include roughly 12% runtime overhead on top of weights plus KV cache, which is why each Total exceeds the sum of its row.
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 1508.0 GB | 10.47 GB | 1700.7 GB |
| Q8 | 754.0 GB | 10.47 GB | 856.2 GB |
| Q6_K | 565.5 GB | 10.47 GB | 645.1 GB |
| Q5_K_M | 471.3 GB | 10.47 GB | 539.5 GB |
| Q4_K_M | 377.0 GB | 10.47 GB | 434.0 GB |
| Q3_K_M | 301.6 GB | 10.47 GB | 349.5 GB |
| Q2_K | 226.2 GB | 10.47 GB | 265.1 GB |
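The totals in the table above can be reproduced with a short sketch. The ~12% overhead factor and the effective bits-per-weight values below are inferred by back-solving the table itself (e.g. 754B params × 4 bits = 377 GB for Q4_K_M weights), not stated anywhere on this page:

```python
PARAMS_B = 754        # total parameters, in billions
KV_GB_AT_8K = 10.47   # KV cache at 8k context, from the table
OVERHEAD = 1.12       # inferred: every Total in the table is (weights + KV) * 1.12

# Effective bits per weight for each quantization, back-solved from the
# Weights column (K-quant formats use mixed precision, hence fractional bits).
BITS = {"FP16": 16, "Q8": 8, "Q6_K": 6, "Q5_K_M": 5,
        "Q4_K_M": 4, "Q3_K_M": 3.2, "Q2_K": 2.4}

def vram_gb(quant: str, context: int = 8192) -> float:
    """Estimate total VRAM in GB (1 GB = 1e9 bytes) for a given quant and context."""
    weights = PARAMS_B * BITS[quant] / 8          # params (B) * bytes per param
    kv = KV_GB_AT_8K * context / 8192             # KV cache scales linearly with context
    return round((weights + kv) * OVERHEAD, 1)

print(vram_gb("Q4_K_M"))          # 434.0, matching the table
print(vram_gb("Q4_K_M", 65536))   # same quant at 64k context
```

Doubling the context only moves the KV term, so even at long contexts the weights dominate for a model this size.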
## Benchmarks
### GPUs that run GLM-5.1 754B natively (2)
- Apple M4 Ultra (384GB): Q3_K_M · 68.3 t/s
- Apple M2 Ultra (384GB): Q3_K_M · 50 t/s
## Notes
Agentic engineering successor to GLM-5; ranks #1 on SWE-Bench Pro (58.4%). It uses the same DSA architecture.