Apple M5 Pro (36GB)
The Apple M5 Pro (36GB) has 36 GB of unified memory usable as VRAM and 307 GB/s of memory bandwidth. It can run 47 of our 70 tracked models natively in VRAM at 8K context.
The Apple M5 Pro (36GB) balances capability and value for local LLM inference on the MacBook Pro, pairing 36 GB of unified memory with 307 GB/s of bandwidth. Qwen 3.6 35B fits at Q4_K_M with headroom, and Gemma 4 31B runs at up to Q5_K_M entirely in memory (see the model list below), so two of the most-searched open-weight models run locally via MLX and llama.cpp without the premium of higher-memory tiers. This is the sweet spot for Apple Silicon users who want serious 30B-class inference in a thin-and-light Mac.
Apple M5 Pro (36GB) is a laptop-class Apple Silicon chip based on the Apple M5 Pro architecture, released in 2026. It features 36 GB of LPDDR5X unified memory with 307 GB/s of memory bandwidth. Because Apple Silicon unifies memory between the CPU and GPU, the full 36 GB can be allocated to model weights. MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative, and both are well supported by Ollama.
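As a concrete starting point, here is a minimal sketch of running an MLX-format model through the mlx-lm package. The model repo name is an assumption for illustration (any MLX checkpoint from the Hugging Face Hub works the same way), and the exact generate() keyword arguments may vary slightly between mlx-lm releases:

```python
# Minimal MLX inference sketch for Apple Silicon (assumes `pip install mlx-lm`).
# The repo name below is an assumption for illustration; substitute any
# MLX-format checkpoint from the Hugging Face Hub.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")
prompt = "Summarize the benefits of unified memory for LLM inference."
# Weights land in unified memory, so the GPU can address all of it directly.
print(generate(model, tokenizer, prompt=prompt, max_tokens=200))
```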
For local LLM inference, this chip runs 47 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in memory is Qwen 2.5 72B Instruct (~13 t/s at Q2_K), and it comfortably runs models in the ~27-32B range at Q4. Larger models exceed the 36 GB of unified memory outright; because the CPU and GPU already share that pool, there is no CPU-offload tier to fall back on. On Llama 3.3 70B Instruct, it achieves approximately 13.3 tokens per second at Q2_K quantization.
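Those throughput figures follow from simple bandwidth arithmetic: at batch size 1, decoding is memory-bandwidth bound, since every generated token reads the full set of (active) weights. A back-of-envelope sketch, with the caveat that the bits-per-weight values and the KV-cache allowance are rough assumptions rather than measurements:

```python
# Back-of-envelope fit and speed estimates for a bandwidth-bound decoder.
# Approximations, not measurements: real runs lose some throughput to the
# KV cache, activations, and kernel overhead.
BANDWIDTH_GB_S = 307.0   # Apple M5 Pro (36GB)
MEMORY_GB = 36.0

def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate in-memory weight size in GB."""
    return params_b * bits_per_weight / 8.0

def fits(params_b: float, bits_per_weight: float, kv_overhead_gb: float = 3.0) -> bool:
    """Crude fit check; kv_overhead_gb is an assumed allowance for an 8K KV cache."""
    return weight_gb(params_b, bits_per_weight) + kv_overhead_gb <= MEMORY_GB

def ceiling_tps(params_b: float, bits_per_weight: float) -> float:
    """Ideal decode tokens/s: one full weight read per generated token."""
    return BANDWIDTH_GB_S / weight_gb(params_b, bits_per_weight)

# Llama 3.3 70B at Q2_K (~2.6 bits/weight): ~22.8 GB of weights.
print(fits(70, 2.6), f"{ceiling_tps(70, 2.6):.1f} t/s")
# -> True 13.5 t/s, in line with the ~13.3 t/s listed below
```

The same math explains why quantization matters twice on this hardware: it both shrinks the model to fit and proportionally raises the decode ceiling.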
Apple's Metal API is fully supported by both MLX and llama.cpp, giving excellent performance on macOS. Among laptop chips, this configuration sits above the Apple M5 Pro (24GB) and Apple M2 Pro (32GB) in performance, but below the Apple M3 Max (36GB).
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M5 Pro |
| CPU cores | 15 (5S + 10P) |
| VRAM | 36 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 307 GB/s |
| Compute backend | Metal |
| Tier | Laptop |
| Released | 2026 |
| Models (native) | 47 / 70 |
| Models (offload) | 0 / 70 |
Models this GPU runs natively in VRAM (47)
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 58.1 · Q2_K · ~13 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q2_K · ~13.3 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q2_K · ~13.3 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q2_K · ~13.3 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q4_K_M · ~46.5 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q3_K_M · ~20.4 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro — · Q5_K_M · ~174.8 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro — · Q5_K_M · ~13.6 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q5_K_M · ~13.9 t/s
- Qwen3 32B · 32.8B · MMLU-Pro — · Q6_K · ~11.4 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 55.1 · Q5_K_M · ~14.7 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q5_K_M · ~14.7 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q5_K_M · ~14.7 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro — · Q6_K · ~137.3 t/s
- Gemma 4 31B · 31B · MMLU-Pro — · Q5_K_M · ~15.4 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro — · Q6_K · ~137.3 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q6_K · ~13.8 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro — · Q8_0 · ~11.4 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro — · Q6_K · ~13.9 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro — · Q8_0 · ~88.9 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro — · Q8_0 · ~12.8 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q8_0 · ~13.8 t/s
- GPT-OSS 20B · 21B · MMLU-Pro — · Q8_0 · ~84.4 t/s
- Qwen3 14B · 14.8B · MMLU-Pro — · Q8_0 · ~20.7 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 51.2 · Q8_0 · ~20.9 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 56.1 · Q8_0 · ~21.9 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · BF16 · ~12.6 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro — · BF16 · ~12.6 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · BF16 · ~16.7 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 37.5 · BF16 · ~19.2 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · BF16 · ~19.2 t/s
- Qwen3 8B · 8B · MMLU-Pro — · BF16 · ~19.2 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 36.5 · BF16 · ~20.2 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~21.2 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro — · FP32 · ~19.2 t/s
- Gemma 4 E4B · 4B · MMLU-Pro — · FP32 · ~19.2 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 35.6 · FP32 · ~20.2 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~24 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~24.8 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~29.5 t/s
- Gemma 4 E2B · 2B · MMLU-Pro — · FP32 · ~38.4 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~45.1 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~51.2 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~61.9 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro — · FP32 · ~76.8 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~153.5 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~213.2 t/s
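One pattern worth noting in the list above: the MoE entries (Qwen 3.5 35B-A3B, Nemotron 3 Nano 30B, Qwen3 30B-A3B, Gemma 4 26B, GPT-OSS 20B) decode several times faster than dense models of similar total size, because each token only reads the active experts' weights. A rough sketch using the same bandwidth math as earlier, where the active-parameter count and bits-per-weight are assumptions:

```python
# MoE decode reads only the *active* parameters per token (an approximation
# that ignores the router and any shared layers). Qwen3 30B-A3B activates
# ~3B of its 30B parameters; at Q6_K (~6.56 bits/weight) that is ~2.5 GB/token.
active_gb = 3 * 6.56 / 8.0
print(f"~{307 / active_gb:.0f} t/s ideal ceiling")
# -> ~125 t/s, the same order as the ~137 t/s listed (which likely
#    assumes slightly different constants)
```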
Too large for this GPU (23)
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.5 Air 106B
- GLM-4.6 355B
- GLM-4.6V 106B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
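The cutoff is straightforward arithmetic: even at the most aggressive common quantization, the weights of these models nearly fill or exceed 36 GB before accounting for an 8K KV cache and runtime overhead. A rough check on the smallest entries, where the bits-per-weight value is an approximation:

```python
# Fit check at Q2_K (~2.6 bits/weight), the most aggressive common quant.
for name, params_b in [("GLM-4.5 Air 106B", 106),
                       ("Llama 4 Scout 109B", 109),
                       ("GPT-OSS 120B", 120)]:
    print(f"{name}: ~{params_b * 2.6 / 8:.0f} GB of weights")
# -> ~34 GB, ~35 GB, ~39 GB: no room left for the KV cache in 36 GB
```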
Frequently asked questions
- How much VRAM does the Apple M5 Pro (36GB) have?
- The Apple M5 Pro (36GB) has 36 GB of LPDDR5X with 307 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What LLMs can the Apple M5 Pro (36GB) run locally?
- The Apple M5 Pro (36GB) can run 47 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include Llama 3.3 70B Instruct at Q2_K, Llama 3.1 8B Instruct at BF16, and Llama 3.2 3B Instruct at FP32.
- Can the Apple M5 Pro (36GB) run Llama 3.3 70B Instruct?
- Yes. The Apple M5 Pro (36GB) runs Llama 3.3 70B Instruct natively in VRAM at Q2_K quantization, achieving approximately 13.3 tokens per second.
- Can the Apple M5 Pro (36GB) run Qwen 3.6 27B?
- Yes. The Apple M5 Pro (36GB) runs Qwen 3.6 27B natively in VRAM at Q6_K quantization, achieving approximately 13.9 tokens per second.
- Can the Apple M5 Pro (36GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M5 Pro (36GB) runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 19.2 tokens per second.