
Apple M5 (16GB)

The Apple M5 (16GB) has 16 GB VRAM and 153 GB/s memory bandwidth. It can run 32 of our 70 tracked models natively in VRAM at 8k context.

Vendor: Apple
Architecture: Apple M5
CPU cores: 10 (4P + 6E)
VRAM: 16 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 153 GB/s
Compute backend: Metal
Tier: Laptop
Released: 2025
Models (native): 32 / 70
Models (offload): 0 / 70
Software: MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative. Both are well supported by Ollama.

Models this GPU runs natively in VRAM (32)

Too large for this GPU (38)

Frequently asked questions

How much VRAM does the Apple M5 (16GB) have?
The Apple M5 (16GB) has 16 GB of LPDDR5X with 153 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
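As a rough rule of thumb (an illustrative sketch, not CanItRun's exact sizing model), a model's footprint is about parameter count × bits-per-weight ÷ 8 for the weights, plus a KV cache that grows with context length. The Llama 3.1 8B architecture numbers below (32 layers, 8 KV heads, head dimension 128) are taken from that model's published configuration:

```python
def weight_gb(params_b: float, bits: float) -> float:
    """Approximate weight memory in GB: params * bits/8, plus ~5% overhead."""
    return params_b * bits / 8 * 1.05

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: int = 2) -> float:
    """KV cache in GB: 2 (K and V) * layers * kv_heads * head_dim * ctx elements."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Llama 3.1 8B Instruct at Q8 with 8k context:
total = weight_gb(8.0, 8) + kv_cache_gb(32, 8, 128, 8192)
print(f"~{total:.1f} GB needed; fits in 16 GB: {total < 16}")

# Llama 3.3 70B at Q4 already exceeds 16 GB on weights alone:
print(f"70B Q4 weights: ~{weight_gb(70.0, 4):.1f} GB")
```

The same arithmetic explains the 70B answer below: even at 4-bit quantization the weights alone are well beyond 16 GB of unified memory.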
What LLMs can the Apple M5 (16GB) run locally?
The Apple M5 (16GB) can run 32 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at Q8, Llama 3.2 3B Instruct at FP16, Llama 3.2 1B Instruct at FP16.
Can the Apple M5 (16GB) run Llama 3.3 70B Instruct?
The Apple M5 (16GB) does not have enough VRAM to run Llama 3.3 70B Instruct. You would need more VRAM or a lower quantization level.
Can the Apple M5 (16GB) run Qwen 3.6 27B?
Yes. The Apple M5 (16GB) runs Qwen 3.6 27B natively in VRAM at Q2_K quantization, achieving approximately 18.9 tokens per second.
Can the Apple M5 (16GB) run Llama 3.1 8B Instruct?
Yes. The Apple M5 (16GB) runs Llama 3.1 8B Instruct natively in VRAM at Q8 quantization, achieving approximately 19.1 tokens per second.
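Single-stream decoding is typically memory-bandwidth-bound, so tokens per second can be ballparked as memory bandwidth divided by the bytes read per generated token (roughly the model's quantized weight size). A minimal sketch, illustrative only; real MLX or llama.cpp throughput also depends on compute, KV-cache reads, and overhead:

```python
def decode_tok_s(bandwidth_gbs: float, model_gb: float,
                 efficiency: float = 1.0) -> float:
    """Bandwidth-bound decode estimate: each token reads ~all weights once."""
    return bandwidth_gbs * efficiency / model_gb

# Apple M5 (16GB): 153 GB/s bandwidth; Llama 3.1 8B at Q8 is ~8.5 GB of weights.
est = decode_tok_s(153, 8.5)
print(f"~{est:.0f} tok/s")  # close to the ~19.1 tok/s figure above
```

This kind of estimate also shows why bandwidth, not GPU core count, is usually the limiting factor for local LLM inference on unified-memory machines.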