Apple M5 (16GB)
The Apple M5 (16GB) has 16 GB VRAM and 153 GB/s memory bandwidth. It can run 32 of our 70 tracked models natively in VRAM at 8k context.
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M5 |
| CPU cores | 10 (4P + 6E) |
| VRAM | 16 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 153 GB/s |
| Compute backend | Metal |
| Tier | Laptop |
| Released | 2025 |
| Models (native) | 32 / 70 |
| Models (offload) | 0 / 70 |
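Decode speed on a chip like this is dominated by memory bandwidth: every generated token has to stream the full weight set from memory, so tokens per second are capped at roughly bandwidth divided by model size. A minimal sketch of that back-of-the-envelope estimate (the function name and constants are illustrative, not from any measurement tool):

```python
# Rough roofline estimate for memory-bandwidth-bound LLM decoding:
# tokens/s <= bandwidth / model_size_in_bytes. Real throughput lands
# at or below this ceiling due to compute, KV-cache reads, and overhead.

BANDWIDTH_GBPS = 153  # Apple M5 (16GB) memory bandwidth, GB/s


def max_decode_tps(params_billions: float, bits_per_weight: float) -> float:
    """Upper bound on tokens/second for dense decoding."""
    model_bytes = params_billions * 1e9 * bits_per_weight / 8
    return BANDWIDTH_GBPS * 1e9 / model_bytes


# Llama 3.1 8B at Q8 (~8 bits/weight) -> ~8 GB of weights
tps = max_decode_tps(8.0, 8.0)
print(f"ceiling ≈ {tps:.0f} t/s")  # close to the ~19.1 t/s listed below
```

The ceiling for an 8B model at Q8 comes out near 19 t/s, which is why the measured figures in the table below cluster just under bandwidth/size.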
Software: MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative. Both are well supported by Ollama.
Models this GPU runs natively in VRAM (32)
| Model | Params | MMLU-Pro | Quant | Speed |
| --- | --- | --- | --- | --- |
| Nemotron 3 Nano 30B | 32B | — | Q2_K | ~187 t/s |
| Qwen3 30B-A3B (MoE) | 30B | — | Q2_K | ~187 t/s |
| Gemma 3 27B Instruct | 27B | — | Q2_K | ~18.9 t/s |
| Qwen 3.6 27B | 27B | — | Q2_K | ~18.9 t/s |
| Gemma 4 26B (MoE) | 26B | — | Q2_K | ~147.6 t/s |
| Mistral Small 3.1 24B Instruct | 24B | — | Q2_K | ~21.3 t/s |
| Mistral Small 22B | 22.2B | 49.2 | Q2_K | ~23 t/s |
| GPT-OSS 20B | 21B | — | Q3_K_M | ~105.2 t/s |
| Qwen3 14B | 14.8B | — | Q5_K_M | ~16.5 t/s |
| Qwen 2.5 14B Instruct | 14.7B | 51.2 | Q4_K_M | ~20.8 t/s |
| Phi-4 14B Instruct | 14B | 56.1 | Q5_K_M | ~17.5 t/s |
| Mistral Nemo 12B Instruct | 12.2B | 35.6 | Q6_K | ~16.7 t/s |
| Gemma 3 12B Instruct | 12.2B | — | Q6_K | ~16.7 t/s |
| Gemma 2 9B Instruct | 9.2B | 32.0 | Q6_K | ~22.2 t/s |
| Llama 3.1 8B Instruct | 8B | 37.5 | Q8 | ~19.1 t/s |
| DeepSeek R1 Distill Llama 8B | 8B | 41.0 | Q8 | ~19.1 t/s |
| Qwen3 8B | 8B | — | Q8 | ~19.1 t/s |
| Qwen 2.5 7B Instruct | 7.6B | 36.5 | Q8 | ~20.1 t/s |
| Mistral 7B Instruct v0.3 | 7.25B | 30.0 | Q8 | ~21.1 t/s |
| Gemma 3 4B Instruct | 4B | — | FP16 | ~19.1 t/s |
| Gemma 4 E4B | 4B | — | FP16 | ~19.1 t/s |
| Phi-3.5 Mini Instruct | 3.8B | 35.6 | Q8 | ~40.3 t/s |
| Llama 3.2 3B Instruct | 3.2B | 24.0 | FP16 | ~23.9 t/s |
| Qwen 2.5 3B Instruct | 3.1B | 32.4 | FP16 | ~24.7 t/s |
| Gemma 2 2B Instruct | 2.6B | 17.8 | FP16 | ~29.4 t/s |
| Gemma 4 E2B | 2B | — | FP16 | ~38.3 t/s |
| SmolLM2 1.7B Instruct | 1.7B | 19.0 | FP16 | ~45 t/s |
| Qwen 2.5 1.5B Instruct | 1.5B | 16.8 | FP16 | ~51 t/s |
| Llama 3.2 1B Instruct | 1.24B | 12.5 | FP16 | ~61.7 t/s |
| Gemma 3 1B Instruct | 1B | — | FP16 | ~76.5 t/s |
| Qwen 2.5 0.5B Instruct | 0.5B | 10.0 | FP16 | ~153 t/s |
| SmolLM2 360M Instruct | 0.36B | 8.0 | FP16 | ~212.5 t/s |
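The 8k-context qualifier matters because the KV cache consumes VRAM on top of the weights: one key and one value vector per layer per token. A sketch of the standard sizing formula, using Llama 3.1 8B's published configuration (32 layers, 8 KV heads via grouped-query attention, head dimension 128) and FP16 cache entries as assumptions:

```python
# KV-cache memory for a transformer at a given context length.
# The factor of 2 covers keys and values; grouped-query attention
# (fewer KV heads than query heads) is what keeps this small.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   ctx_tokens: int, bytes_per_elem: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * ctx_tokens * bytes_per_elem


# Llama 3.1 8B config at the 8k context used on this page
gib = kv_cache_bytes(32, 8, 128, 8192) / 2**30
print(f"{gib:.1f} GiB")  # prints "1.0 GiB"
```

So an 8B model at Q8 needs roughly 8.5 GB for weights plus about 1 GiB of KV cache at 8k, which still fits comfortably in 16 GB of unified memory.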
Too large for this GPU (38)
- Llama 3.3 70B Instruct
- Qwen 2.5 72B Instruct
- Qwen 2.5 32B Instruct
- Qwen 2.5 Coder 32B Instruct
- Mixtral 8x7B Instruct v0.1
- Gemma 2 27B Instruct
- DeepSeek R1 Distill Llama 70B
- DeepSeek R1 Distill Qwen 32B
- Command-R 35B
- Yi 1.5 34B Chat
- Llama 3.1 70B Instruct
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- Qwen3 32B
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.5 Air 106B
- GLM-4.6 355B
- GLM-4.6V 106B
- GLM-4.7 358B
- Gemma 4 31B
- Qwen 3.5 35B-A3B (MoE)
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Qwen 3.6 35B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the Apple M5 (16GB) have?
- The Apple M5 (16GB) has 16 GB of LPDDR5X with 153 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What LLMs can the Apple M5 (16GB) run locally?
- The Apple M5 (16GB) can run 32 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at Q8, Llama 3.2 3B Instruct at FP16, Llama 3.2 1B Instruct at FP16.
- Can the Apple M5 (16GB) run Llama 3.3 70B Instruct?
- The Apple M5 (16GB) does not have enough VRAM to run Llama 3.3 70B Instruct: even at the lowest tracked quantization, the weights alone exceed 16 GB. You would need a machine with more unified memory.
- Can the Apple M5 (16GB) run Qwen 3.6 27B?
- Yes. The Apple M5 (16GB) runs Qwen 3.6 27B natively in VRAM at Q2_K quantization, achieving approximately 18.9 tokens per second.
- Can the Apple M5 (16GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M5 (16GB) runs Llama 3.1 8B Instruct natively in VRAM at Q8 quantization, achieving approximately 19.1 tokens per second.
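The fits/doesn't-fit answers above reduce to a simple check: estimated weight size at a given quantization versus the 16 GB budget, minus headroom for the OS, KV cache, and activations. A hedged sketch of that check (the bits-per-weight figures are approximate llama.cpp averages, and the 3 GB headroom is an assumption, not a figure from this page):

```python
# Rough VRAM fit check for a quantized model on a 16 GB unified-memory chip.
# weight_gb: 1e9 params at 8 bits/weight is ~1 GB, so params_billions * bpw / 8.

VRAM_GB = 16
HEADROOM_GB = 3  # assumed: OS + KV cache + activations


def fits(params_billions: float, bits_per_weight: float) -> bool:
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb <= VRAM_GB - HEADROOM_GB


print(fits(8, 8.5))   # Llama 3.1 8B at Q8 (~8.5 bpw, ~8.5 GB)  -> True
print(fits(70, 2.6))  # Llama 3.3 70B at Q2_K (~2.6 bpw, ~23 GB) -> False
```

This is why a 70B model lands in the "too large" list: even aggressive 2-bit quantization leaves the weights well above what 16 GB can hold.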