CanItRun

GLM-5.1 754B

GLM-5.1 754B needs roughly 434.0 GB of VRAM at Q4_K_M quantization (1700.7 GB at FP16). Two of the GPUs we track can run it fully in VRAM at 8k context.

Z.ai · 754B params · 44B active (MoE) · 198k context · MIT · Commercial use OK

VRAM at each quantization

Assumes 8k context. KV cache grows linearly with context length.

Quant      Weights      KV cache    Total
FP16       1508.0 GB    10.47 GB    1700.7 GB
Q8          754.0 GB    10.47 GB     856.2 GB
Q6_K        565.5 GB    10.47 GB     645.1 GB
Q5_K_M      471.3 GB    10.47 GB     539.5 GB
Q4_K_M      377.0 GB    10.47 GB     434.0 GB
Q3_K_M      301.6 GB    10.47 GB     349.5 GB
Q2_K        226.2 GB    10.47 GB     265.1 GB
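The table above can be reproduced with a short calculation. This is a sketch based on values inferred from the table itself, not an official formula: it assumes decimal gigabytes, nominal bits per weight for each quant (e.g. 4.0 for Q4_K_M, 3.2 for Q3_K_M), the stated 10.47 GB KV cache at 8k context scaled linearly, and a flat 12% runtime overhead on top of weights + KV cache, which is consistent with every row of the table.

```python
# Rough VRAM estimate for GLM-5.1 754B, reproducing the table above.
# Assumptions (inferred from the table, not official): decimal GB,
# nominal bits per weight per quant, and a flat 12% runtime overhead
# applied to weights + KV cache.
PARAMS_B = 754       # total parameters, in billions
KV_8K_GB = 10.47     # KV cache at 8k context (from the table)
OVERHEAD = 1.12      # assumed 12% runtime overhead multiplier

BITS_PER_WEIGHT = {
    "FP16": 16, "Q8": 8, "Q6_K": 6, "Q5_K_M": 5,
    "Q4_K_M": 4, "Q3_K_M": 3.2, "Q2_K": 2.4,
}

def vram_gb(quant: str, context: int = 8192) -> float:
    """Total VRAM in GB for a given quant and context length."""
    weights = PARAMS_B * BITS_PER_WEIGHT[quant] / 8   # GB of weights
    kv = KV_8K_GB * context / 8192                    # KV grows linearly
    return round((weights + kv) * OVERHEAD, 1)

print(vram_gb("Q4_K_M"))  # 434.0, matching the table
```

Doubling the context to 16k at Q4_K_M adds roughly another 12 GB (the extra KV cache plus its share of overhead), which the same function reports directly via `vram_gb("Q4_K_M", 16384)`.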

Benchmarks

GPQA: 86.2

GPUs that run GLM-5.1 754B natively (2)

Notes

Agentic engineering successor to GLM-5; #1 on SWE-Bench Pro (58.4%). Same DSA architecture.

Hugging Face · Released 2026-04-07