GLM-5 744B
GLM-5 744B needs roughly 428.4GB of VRAM at Q4_K_M quantization (1678.3GB at FP16). Two GPUs we track can run it fully in VRAM at 8k context (at Q3_K_M).
Z.ai · 744B params · 40B active (MoE) · 198k context · MIT license · Commercial use OK
VRAM at each quantization
Assumes 8k context. KV cache grows linearly with context length. Totals include roughly 12% runtime overhead on top of weights and KV cache, which is why each total exceeds the sum of the two columns.
| Quant | Weights | KV cache | Total |
|---|---|---|---|
| FP16 | 1488.0 GB | 10.47 GB | 1678.3 GB |
| Q8 | 744.0 GB | 10.47 GB | 845.0 GB |
| Q6_K | 558.0 GB | 10.47 GB | 636.7 GB |
| Q5_K_M | 465.0 GB | 10.47 GB | 532.5 GB |
| Q4_K_M | 372.0 GB | 10.47 GB | 428.4 GB |
| Q3_K_M | 297.6 GB | 10.47 GB | 345.0 GB |
| Q2_K | 223.2 GB | 10.47 GB | 261.7 GB |
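The table follows from simple arithmetic: weights scale with bits per weight, the KV cache scales linearly with context, and the totals carry an overhead factor of about 1.12 (inferred from the table; the exact bits-per-weight values for the K-quants below are approximations that reproduce the listed numbers):

```python
# Sketch of the VRAM estimate behind the table above.
# Assumptions: bits-per-weight values below are fitted to the table,
# and the ~12% overhead factor is inferred from the listed totals.
PARAMS_B = 744      # total parameters, in billions
KV_8K_GB = 10.47    # KV cache at 8k context, from the table
OVERHEAD = 1.12     # runtime overhead factor inferred from the totals

BPW = {  # approximate bits per weight for each quantization
    "FP16": 16, "Q8": 8, "Q6_K": 6, "Q5_K_M": 5,
    "Q4_K_M": 4, "Q3_K_M": 3.2, "Q2_K": 2.4,
}

def vram_gb(quant: str, context_tokens: int = 8192) -> float:
    """Estimated total VRAM in GB for a given quant and context length."""
    weights = PARAMS_B * BPW[quant] / 8          # GB of weights
    kv = KV_8K_GB * context_tokens / 8192        # KV cache grows linearly
    return round((weights + kv) * OVERHEAD, 1)

print(vram_gb("Q4_K_M"))  # matches the 428.4 GB in the table
```

The same formula shows why long contexts get expensive: scaling the 10.47 GB KV cache from 8k tokens up to the full 198k context multiplies it by roughly 25x, into the hundreds of gigabytes, before any sparse-attention savings.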
GPUs that run GLM-5 744B natively (2)
- Apple M4 Ultra (384GB) · Q3_K_M · 75.1 t/s
- Apple M2 Ultra (384GB) · Q3_K_M · 55 t/s
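The native-fit logic is a simple lookup: take the largest quant whose total (from the table above) fits the device's VRAM. A minimal sketch, assuming the 8k-context totals:

```python
# Hypothetical fit-check: pick the largest quant that fits a VRAM budget.
# Totals (GB at 8k context) are taken directly from the table above,
# ordered from largest to smallest.
TOTALS_GB = {
    "FP16": 1678.3, "Q8": 845.0, "Q6_K": 636.7, "Q5_K_M": 532.5,
    "Q4_K_M": 428.4, "Q3_K_M": 345.0, "Q2_K": 261.7,
}

def best_fit(vram_gb: float) -> str | None:
    """Largest quantization whose total VRAM need fits the budget, or None."""
    for quant, total in TOTALS_GB.items():
        if total <= vram_gb:
            return quant
    return None

print(best_fit(384))  # the 384GB Apple Ultras land on Q3_K_M, as listed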
Notes
Uses DeepSeek Sparse Attention (DSA) for efficient long-context inference. 256 routed experts, 8 active per token.