
GLM-4.6 355B

GLM-4.6 355B needs roughly 202.3 GB of VRAM at Q4_K_M quantization (798.7 GB at FP16). 10 of the GPUs we track can run it fully in VRAM at 8k context.

Z.ai · 355B params · 32B active (MoE) · 198k context · MIT license · Commercial use OK

VRAM at each quantization

Assumes 8k context; the KV cache grows linearly with context length. Totals include an allowance for runtime overhead on top of weights and KV cache.

Quant     Weights     KV cache   Total
FP16      710.0 GB    3.09 GB    798.7 GB
Q8        355.0 GB    3.09 GB    401.1 GB
Q6_K      266.3 GB    3.09 GB    301.7 GB
Q5_K_M    221.9 GB    3.09 GB    252.0 GB
Q4_K_M    177.5 GB    3.09 GB    202.3 GB
Q3_K_M    142.0 GB    3.09 GB    162.5 GB
Q2_K      106.5 GB    3.09 GB    122.7 GB
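
As a rough sanity check, the totals above can be reproduced with a simple formula: per-quant weight size, plus a KV cache that scales linearly with context length, times an overhead factor. The sketch below is an approximation rather than the site's exact calculator; the ~12% overhead multiplier is an assumption inferred from the gap between weights + KV cache and the listed totals.

```python
# Rough VRAM estimator for GLM-4.6 355B, using the per-quant weight sizes
# and the 8k-context KV cache figure from the table above. OVERHEAD is an
# assumed factor (~12%) inferred from the table's totals; actual runtime
# overhead varies by inference backend.

WEIGHTS_GB = {
    "FP16": 710.0, "Q8": 355.0, "Q6_K": 266.3, "Q5_K_M": 221.9,
    "Q4_K_M": 177.5, "Q3_K_M": 142.0, "Q2_K": 106.5,
}
KV_GB_AT_8K = 3.09   # KV cache at 8k (8192-token) context
OVERHEAD = 1.12      # assumed runtime overhead multiplier

def vram_estimate_gb(quant: str, context_tokens: int = 8192) -> float:
    """Estimate total VRAM: weights + KV cache (linear in context), plus overhead."""
    kv_gb = KV_GB_AT_8K * (context_tokens / 8192)
    return (WEIGHTS_GB[quant] + kv_gb) * OVERHEAD

if __name__ == "__main__":
    for ctx in (8192, 32768, 131072):
        print(f"Q4_K_M @ {ctx:>6} tokens: ~{vram_estimate_gb('Q4_K_M', ctx):.1f} GB")
```

At 8192 tokens this reproduces the table's Q4_K_M total (~202.3 GB); longer contexts grow only the KV cache term.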

Benchmarks

Benchmarks for this model are not yet available on the Open LLM Leaderboard v2. This is common for recently released models. Check back soon.

GPUs that run GLM-4.6 355B natively (10)
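
A "runs natively" check reduces to comparing the required total against a setup's available VRAM. The snippet below is illustrative only; the GPU configurations and VRAM sizes are hypothetical examples, not the 10 tracked configurations referenced above.

```python
# Illustrative fit check against the Q4_K_M estimate at 8k context.
# The setups and VRAM sizes below are hypothetical examples, not the
# site's tracked GPU list.

REQUIRED_GB = 202.3  # Q4_K_M total at 8k context (from the table above)

setups = {
    "2x 141 GB": 2 * 141,
    "4x 80 GB": 4 * 80,
    "8x 24 GB": 8 * 24,
}

for name, vram_gb in setups.items():
    verdict = "fits in VRAM" if vram_gb >= REQUIRED_GB else "does not fit"
    print(f"{name}: {vram_gb} GB -> {verdict}")
```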

Notes

Context window expanded from 128K to 200K compared with GLM-4.5; coding-focused improvements.

Hugging Face ↗ · Released 2025-09-30