Apple M5 Pro (48GB)
The Apple M5 Pro (48GB) has 48 GB VRAM and 307 GB/s memory bandwidth. It can run 51 of our 70 tracked models natively in VRAM at 8K context.
The Apple M5 Pro (48GB) is a strong fit for local LLM inference on the MacBook Pro and Mac Studio, pairing 48 GB of unified memory with 307 GB/s of bandwidth. It runs Qwen 3.6 35B and Gemma 4 31B at Q8_0 entirely in memory without CPU offload, covering two of today's most searched-for open-weight models via MLX and llama.cpp. The 15-core CPU and Apple Silicon's power efficiency make this a go-to on-device AI configuration for professionals who want top-tier models in a portable Mac form factor.
The Apple M5 Pro (48GB) is a mobile/laptop Apple Silicon chip based on the Apple M5 Pro architecture, released in 2026. It pairs 48 GB of LPDDR5X unified memory with 307 GB/s of memory bandwidth. Because memory is unified between CPU and GPU, nearly the full 48 GB can be allocated to model weights. MLX gives the best performance on Apple Silicon; llama.cpp's Metal backend is a solid alternative, and both are well supported by Ollama.
For local LLM inference, this chip runs 51 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is GPT-OSS 120B (205.3 t/s at Q2_K; it is fast for its size because it is a sparse MoE model). It handles most models up to the 70B class in VRAM, including some larger MoE models; on Llama 3.3 70B Instruct it achieves approximately 10.2 tokens per second at Q3_K_M quantization.
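The throughput figures above follow from the memory bandwidth: single-stream decode is memory-bound, so each generated token must stream the full set of quantized weights. A minimal sketch of this estimate, assuming t/s ≈ bandwidth / weight bytes (the bytes-per-parameter figures are rough approximations of common GGUF quant sizes, not CanItRun's exact accounting):

```python
# Rough decode-speed estimate for memory-bandwidth-bound LLM inference.
# Assumption (not from the source): each token streams all quantized
# weights from memory once, so t/s ~= bandwidth / weight_bytes.

BANDWIDTH_GBPS = 307  # Apple M5 Pro (48GB) memory bandwidth, GB/s

BYTES_PER_PARAM = {   # approximate effective bytes per weight
    "Q2_K": 0.35,
    "Q3_K_M": 0.44,
    "Q5_K_M": 0.69,
    "Q8_0": 1.06,
    "BF16": 2.0,
    "FP32": 4.0,
}

def est_tokens_per_sec(params_billions: float, quant: str) -> float:
    """Upper-bound decode speed for a dense model at the given quant."""
    weight_gb = params_billions * BYTES_PER_PARAM[quant]
    return BANDWIDTH_GBPS / weight_gb

# 70B at Q3_K_M: 307 / (70 * 0.44) ~= 10 t/s, close to the listed 10.2 t/s.
# 8B at FP32:   307 / (8 * 4.0)   ~= 9.6 t/s, matching the listed figure.
print(round(est_tokens_per_sec(70, "Q3_K_M"), 1))
print(round(est_tokens_per_sec(8, "FP32"), 1))
```

MoE models beat this estimate because only their active experts are read per token, which is why the 120B-class MoE entries in the list decode far faster than dense 70B models.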
MLX and llama.cpp both fully support Apple's Metal backend, delivering excellent performance on macOS. Among laptop chips, the M5 Pro (48GB) sits above the Apple M4 Pro (48GB) and Apple M5 Pro (36GB) in performance, but below the Apple M3 Max (48GB).
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M5 Pro |
| CPU cores | 15 (5S + 10P) |
| VRAM | 48 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 307 GB/s |
| Compute backend | METAL |
| Tier | Laptop |
| Released | 2026 |
| Models (native) | 51 / 70 |
| Models (offload) | 0 / 70 |
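The native/offload counts reduce to a capacity check: quantized weights plus the 8K-token KV cache must fit in unified memory after OS headroom. A rough sketch of that check; the model dimensions (80 layers, 8 KV heads, head_dim 128 for a 70B GQA model) and the 3 GB headroom figure are illustrative assumptions, not CanItRun's exact accounting:

```python
# Sketch of a "does it fit in 48 GB?" check for an 8K context window.

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx: int = 8192, bytes_per: int = 2) -> float:
    """KV cache size: K and V tensors per layer, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per / 1e9

def fits(params_b: float, bytes_per_param: float, kv_gb: float,
         vram_gb: float = 48, headroom_gb: float = 3) -> bool:
    """True if weights + KV cache fit in unified memory minus OS headroom."""
    weights_gb = params_b * bytes_per_param
    return weights_gb + kv_gb <= vram_gb - headroom_gb

# Llama 3.3 70B at Q3_K_M (~0.44 bytes/param): weights ~= 30.8 GB,
# KV cache ~= 2.7 GB -> fits; at Q8_0 (~1.06), weights ~= 74 GB -> does not.
kv = kv_cache_gb(n_layers=80, n_kv_heads=8, head_dim=128)
print(fits(70, 0.44, kv))   # True
print(fits(70, 1.06, kv))   # False
```

This is why 70B-class models appear in the list only at Q3_K_M, while 32B-and-below models fit comfortably at Q8_0 or higher precision.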
Models this GPU runs natively in VRAM (51)
- GPT-OSS 120B · 117B · MMLU-Pro — · Q2_K · ~205.3 t/s
- Llama 4 Scout 109B · 109B · MMLU-Pro 70.0 · Q2_K · ~60.4 t/s
- GLM-4.5 Air 106B · 106B · MMLU-Pro — · Q2_K · ~85.5 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro — · Q2_K · ~85.5 t/s
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 58.1 · Q3_K_M · ~9.9 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q3_K_M · ~10.2 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q3_K_M · ~10.2 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q3_K_M · ~10.2 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q5_K_M · ~40.6 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q5_K_M · ~13.6 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro — · Q8_0 · ~112.6 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro — · Q8_0 · ~8.8 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q8_0 · ~8.9 t/s
- Qwen3 32B · 32.8B · MMLU-Pro — · Q8_0 · ~9.4 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 55.1 · Q8_0 · ~9.4 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q8_0 · ~9.4 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q8_0 · ~9.4 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro — · Q8_0 · ~112.6 t/s
- Gemma 4 31B · 31B · MMLU-Pro — · Q8_0 · ~9.9 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro — · Q8_0 · ~112.6 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q8_0 · ~11.3 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro — · Q8_0 · ~11.4 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro — · Q8_0 · ~11.4 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro — · Q8_0 · ~88.9 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro — · Q8_0 · ~12.8 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q8_0 · ~13.8 t/s
- GPT-OSS 20B · 21B · MMLU-Pro — · Q8_0 · ~84.4 t/s
- Qwen3 14B · 14.8B · MMLU-Pro — · BF16 · ~10.4 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 51.2 · BF16 · ~10.4 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 56.1 · BF16 · ~11 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · BF16 · ~12.6 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro — · BF16 · ~12.6 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · BF16 · ~16.7 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 37.5 · FP32 · ~9.6 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · FP32 · ~9.6 t/s
- Qwen3 8B · 8B · MMLU-Pro — · FP32 · ~9.6 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 36.5 · FP32 · ~10.1 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · FP32 · ~10.6 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro — · FP32 · ~19.2 t/s
- Gemma 4 E4B · 4B · MMLU-Pro — · FP32 · ~19.2 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 35.6 · FP32 · ~20.2 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~24 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~24.8 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~29.5 t/s
- Gemma 4 E2B · 2B · MMLU-Pro — · FP32 · ~38.4 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~45.1 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~51.2 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~61.9 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro — · FP32 · ~76.8 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~153.5 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~213.2 t/s
Too large for this GPU (19)
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GLM-4.5 355B
- GLM-4.6 355B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the Apple M5 Pro (48GB) have?
- The Apple M5 Pro (48GB) has 48 GB of LPDDR5X with 307 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What LLMs can the Apple M5 Pro (48GB) run locally?
- The Apple M5 Pro (48GB) can run 51 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include: Llama 3.3 70B Instruct at Q3_K_M, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
- Can the Apple M5 Pro (48GB) run Llama 3.3 70B Instruct?
- Yes. The Apple M5 Pro (48GB) runs Llama 3.3 70B Instruct natively in VRAM at Q3_K_M quantization, achieving approximately 10.2 tokens per second.
- Can the Apple M5 Pro (48GB) run Qwen 3.6 27B?
- Yes. The Apple M5 Pro (48GB) runs Qwen 3.6 27B natively in VRAM at Q8_0 quantization, achieving approximately 11.4 tokens per second.
- Can the Apple M5 Pro (48GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M5 Pro (48GB) runs Llama 3.1 8B Instruct natively in VRAM at full FP32 precision, achieving approximately 9.6 tokens per second.