Apple M5 Pro (24GB)
The Apple M5 Pro (24GB) has 24 GB VRAM and 307 GB/s memory bandwidth. It can run 42 of our 70 tracked models natively in VRAM at 8k context.
The Apple M5 Pro (24GB) is the entry point for local LLM inference on the M5 Pro MacBook Pro, with 24 GB of unified memory and 307 GB/s of bandwidth. Qwen 3.6 27B fits at Q4_K_M with some headroom, and Gemma 4 26B MoE runs efficiently at Q4_K_M; both run fully in memory via MLX and llama.cpp. Qwen 3.6 35B and Gemma 4 31B fit only after dropping to Q3_K_M, but the M5 Pro (24GB) delivers genuine 27B-class on-device AI capability without sacrificing MacBook portability.
Apple M5 Pro (24GB) is a mobile/laptop Apple Silicon chip based on the Apple M5 Pro architecture, released in 2026. It features 24 GB of LPDDR5X unified memory with 307 GB/s of memory bandwidth. Because Apple Silicon memory is unified between CPU and GPU, nearly the full 24 GB can be allocated to model weights. MLX gives the best performance on Apple Silicon; llama.cpp's Metal backend is a solid alternative and also powers Ollama.
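For the MLX path, the sketch below shows the typical load-and-generate flow with the mlx_lm Python package. It is a minimal example under stated assumptions: the Hugging Face repo name is illustrative, and any MLX-converted model that fits in 24 GB of unified memory (see the list further down) can be substituted.

```python
# Minimal MLX generation sketch (assumes `pip install mlx-lm` on Apple Silicon).
# The repo name is illustrative; swap in any MLX-converted, 4-bit model that
# fits within the 24 GB of unified memory described above.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")

prompt = "Summarize why unified memory matters for local LLM inference."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
print(text)
```

llama.cpp reaches the same hardware through its Metal backend, so the choice is mostly about ecosystem and model format (MLX weights versus GGUF).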
For local LLM inference, this GPU runs 42 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is Mixtral 8x7B Instruct v0.1 (79.6 t/s at Q2_K). It comfortably runs ~27B models at Q4_K_M and ~32B models at Q3_K_M; larger models need CPU offload or multi-GPU. On Qwen 3.6 27B, it achieves approximately 20.2 tokens per second at Q4_K_M quantization.
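Those throughput figures are consistent with single-stream decoding being memory-bandwidth-bound: each generated token streams roughly the full set of quantized weights. The sketch below is a back-of-the-envelope estimator, not a benchmark; the ~4.8 bits-per-weight figure for Q4_K_M is an assumption, and KV-cache traffic and MoE sparsity are ignored.

```python
# Rough, bandwidth-bound estimate of single-stream decode speed.
# Assumptions (not measured): ~4.8 bits/weight for Q4_K_M, and that every
# generated token reads the full weight set once; KV-cache reads and MoE
# sparsity are ignored, so treat the result as a sanity check, not a benchmark.
def estimated_decode_tps(params_billion: float, bits_per_weight: float, bandwidth_gbs: float) -> float:
    weight_gb_per_token = params_billion * bits_per_weight / 8  # GB of weights read per token
    return bandwidth_gbs / weight_gb_per_token

# Apple M5 Pro (24GB): 307 GB/s, dense 27B model at Q4_K_M
print(round(estimated_decode_tps(27, 4.8, 307), 1))  # ~19 t/s, in line with the ~20.2 t/s above
```

The same logic explains why the MoE entries in the list below (for example Qwen3 30B-A3B and Gemma 4 26B MoE) decode so much faster: only a few billion active parameters are read per token.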
Metal is fully supported by both MLX and llama.cpp, giving excellent performance on macOS. In overall performance it sits above the Apple M4 Pro (24GB) and the NVIDIA RTX 4060 Ti 16GB, but below the Apple M5 Pro (36GB).
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M5 Pro |
| CPU cores | 15 (5S + 10P) |
| VRAM | 24 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 307 GB/s |
| Compute backend | Metal |
| Tier | Laptop |
| Released | 2026 |
| Models (native) | 42 / 70 |
| Models (offload) | 0 / 70 |
Models this GPU runs natively in VRAM (42)
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q2_K · ~79.6 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro — · Q3_K_M · ~261.8 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro — · Q3_K_M · ~20.4 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q3_K_M · ~20.8 t/s
- Qwen3 32B · 32.8B · MMLU-Pro — · Q3_K_M · ~21.8 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 55.1 · Q3_K_M · ~22 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q3_K_M · ~22 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q3_K_M · ~22 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro — · Q3_K_M · ~261.8 t/s
- Gemma 4 31B · 31B · MMLU-Pro — · Q3_K_M · ~23 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro — · Q4_K_M · ~199.9 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q3_K_M · ~26.2 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro — · Q4_K_M · ~20.2 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro — · Q4_K_M · ~20.2 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro — · Q4_K_M · ~157.8 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro — · Q5_K_M · ~19.9 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q5_K_M · ~21.5 t/s
- GPT-OSS 20B · 21B · MMLU-Pro — · Q6_K · ~103 t/s
- Qwen3 14B · 14.8B · MMLU-Pro — · Q8_0 · ~20.7 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 51.2 · Q8_0 · ~20.9 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 56.1 · Q8_0 · ~21.9 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · Q8_0 · ~25.2 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro — · Q8_0 · ~25.2 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · Q8_0 · ~33.4 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 37.5 · BF16 · ~19.2 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · BF16 · ~19.2 t/s
- Qwen3 8B · 8B · MMLU-Pro — · BF16 · ~19.2 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 36.5 · BF16 · ~20.2 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~21.2 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro — · FP32 · ~19.2 t/s
- Gemma 4 E4B · 4B · MMLU-Pro — · FP32 · ~19.2 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 35.6 · BF16 · ~40.4 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~24 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~24.8 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~29.5 t/s
- Gemma 4 E2B · 2B · MMLU-Pro — · FP32 · ~38.4 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~45.1 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~51.2 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~61.9 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro — · FP32 · ~76.8 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~153.5 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~213.2 t/s
Too large for this GPU (28)
- Llama 3.3 70B Instruct
- Qwen 2.5 72B Instruct
- DeepSeek R1 Distill Llama 70B
- Command-R 35B
- Llama 3.1 70B Instruct
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.5 Air 106B
- GLM-4.6 355B
- GLM-4.6V 106B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the Apple M5 Pro (24GB) have?
- The Apple M5 Pro (24GB) has 24 GB of LPDDR5X with 307 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What LLMs can the Apple M5 Pro (24GB) run locally?
- The Apple M5 Pro (24GB) can run 42 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at BF16, Llama 3.2 3B Instruct at FP32, Llama 3.2 1B Instruct at FP32.
- Can the Apple M5 Pro (24GB) run Llama 3.3 70B Instruct?
- No. The Apple M5 Pro (24GB) does not have enough memory to run Llama 3.3 70B Instruct at the quantizations we track; you would need a configuration with more unified memory (see the rough fit estimate after this FAQ).
- Can the Apple M5 Pro (24GB) run Qwen 3.6 27B?
- Yes. The Apple M5 Pro (24GB) runs Qwen 3.6 27B natively in VRAM at Q4_K_M quantization, achieving approximately 20.2 tokens per second.
- Can the Apple M5 Pro (24GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M5 Pro (24GB) runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 19.2 tokens per second.
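As a rule of thumb for the fit questions above, the sketch below estimates whether a model's quantized weights fit in unified memory. The bits-per-weight values and the ~3 GB macOS reserve are assumptions rather than measurements, and KV-cache size varies with architecture and context length.

```python
# Hedged back-of-the-envelope check of whether a model fits in unified memory.
# Bits-per-weight values are typical GGUF figures (assumptions, not exact),
# the KV-cache term assumes a dense GQA model at 8K context, and the OS
# reserve approximates macOS's limit on GPU-wired memory. Real usage varies.
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8, "Q5_K_M": 5.5, "Q8_0": 8.5, "BF16": 16.0}

def fits_in_memory(params_billion: float, quant: str, memory_gb: float = 24.0,
                   kv_cache_gb: float = 1.0, os_reserve_gb: float = 3.0) -> bool:
    weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
    return weights_gb + kv_cache_gb <= memory_gb - os_reserve_gb

print(fits_in_memory(27, "Q4_K_M"))  # True  -> Qwen 3.6 27B fits at Q4_K_M
print(fits_in_memory(70, "Q4_K_M"))  # False -> Llama 3.3 70B does not
print(fits_in_memory(70, "Q2_K"))    # False -> still too large even at Q2_K
```

The results line up with the page data: dense ~27B models fit at Q4_K_M, while 70B-class models land in the "too large" list.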