Llama 3.2 3B Instruct

Llama 3.2 3B Instruct needs roughly 1.6 GB of VRAM at Q4 quantization (6.4 GB at FP16). 60 of the GPUs we track can run it fully in VRAM at 8k context.

Meta · 3.2B params · 128k context · Llama 3.2 Community License · Commercial use OK

VRAM at each quantization

Assumes 8k context. KV cache grows linearly with context length. Totals exceed weights + KV cache because they include an allowance for runtime overhead.

Quant     Weights   KV cache   Total
FP16      6.4 GB    0.94 GB    8.2 GB
Q8        3.2 GB    0.94 GB    4.6 GB
Q6_K      2.4 GB    0.94 GB    3.7 GB
Q5_K_M    2.0 GB    0.94 GB    3.3 GB
Q4_K_M    1.6 GB    0.94 GB    2.8 GB
Q3_K_M    1.3 GB    0.94 GB    2.5 GB
Q2_K      1.0 GB    0.94 GB    2.1 GB
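
A minimal sketch of where these figures come from, in Python. The architecture values (28 transformer layers, 8 KV heads via grouped-query attention, head dim 128) are from the model's published config; the flat 3.2B parameter count and nominal bits per weight are simplifying assumptions that real GGUF files only approximate. At 8k context the KV-cache term reproduces the 0.94 GB above; the table's totals add runtime overhead on top.

```python
GB = 1e9  # the table uses decimal gigabytes

def kv_cache_bytes(context_len, n_layers=28, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    # K and V each store n_layers * n_kv_heads * head_dim values per
    # token, so the cache grows linearly with context length.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

def weight_bytes(n_params, bits_per_weight):
    # Nominal size; real quant formats fold block scales into this.
    return n_params * bits_per_weight / 8

params = 3.2e9  # assumed flat count; the official figure is ~3.21B
print(f"KV cache @ 8k: {kv_cache_bytes(8192) / GB:.2f} GB")
for quant, bits in [("FP16", 16), ("Q8", 8), ("Q4_K_M", 4)]:
    total = weight_bytes(params, bits) + kv_cache_bytes(8192)
    print(f"{quant}: {total / GB:.1f} GB before runtime overhead")
```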

GPUs that run Llama 3.2 3B Instruct natively (60)

Plus 1 GPU that runs it with CPU offload (slower)
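
Whether a card lands in the native or offload bucket is just a comparison of the total above against its VRAM; a hypothetical sketch:

```python
def run_mode(total_gb, gpu_vram_gb):
    # Hypothetical fit check: the model runs natively when weights,
    # KV cache, and overhead all fit in VRAM; otherwise some layers
    # spill to system RAM over PCIe, which is much slower.
    return "native" if total_gb <= gpu_vram_gb else "cpu offload"

print(run_mode(2.8, 8.0))  # Q4_K_M total on an 8 GB card -> native
```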

Notes

Small, efficient on-device model.

Hugging Face ↗ · Ollama ↗ · Released 2024-09-25