Apple M5 Max (128GB)
The Apple M5 Max (128GB) has 128 GB of unified memory and 614 GB/s of memory bandwidth. It can run 58 of our 70 tracked models natively in VRAM at 8K context.
The Apple M5 Max (128GB) is the flagship Apple Silicon configuration for local LLM inference, featuring 128GB of unified memory at 614 GB/s. It runs Qwen 3.6 35B and Gemma 4 31B entirely in memory at BF16 precision with room to spare, and can hold multiple models simultaneously for model-switching workflows without reloads. With an 18-core CPU and full support from MLX and llama.cpp, the M5 Max (128GB) is the definitive on-device AI platform for Mac.
The Apple M5 Max (128GB) is a laptop-class Apple Silicon chip released in 2026, featuring 128 GB of LPDDR5X unified memory at 614 GB/s of memory bandwidth. Because Apple Silicon unifies memory between the CPU and GPU, the bulk of the 128 GB can be devoted to model weights (macOS reserves a portion for the system). MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend, which also powers Ollama, is a solid alternative.
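A minimal sketch of the MLX path, assuming the mlx-lm package (`pip install mlx-lm`); the model repo named below is illustrative, not one of the benchmarked configurations:

```python
# Load a quantized model from the Hugging Face Hub and generate text on the
# Metal GPU via MLX. Any MLX-format repo can be substituted.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")

text = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one paragraph.",
    max_tokens=256,
)
print(text)
```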
For local LLM inference, this GPU runs 58 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is DeepSeek V4 Flash 284B (157.9 t/s at Q2_K); the remaining 12, which include 405B-class frontier models, exceed its memory. On Llama 3.3 70B Instruct, it achieves approximately 8.8 tokens per second at Q8_0 quantization.
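The listed decode speeds are consistent with a simple bandwidth-bound model: generating one token streams roughly the full weight footprint from memory once, so tokens/s ≈ bandwidth / weight bytes. A back-of-the-envelope check (bytes-per-parameter values are approximations):

```python
# Bandwidth-bound decode estimate: tokens/s ≈ bandwidth / weight bytes.
BANDWIDTH_GB_S = 614.0  # Apple M5 Max (128GB)

BYTES_PER_PARAM = {"Q8_0": 1.0, "BF16": 2.0, "FP32": 4.0}  # approximate

def est_decode_tps(params_billions: float, quant: str) -> float:
    weight_gb = params_billions * BYTES_PER_PARAM[quant]
    return BANDWIDTH_GB_S / weight_gb

print(round(est_decode_tps(70.0, "Q8_0"), 1))  # 8.8 -> Llama 3.3 70B
print(round(est_decode_tps(32.8, "BF16"), 1))  # 9.4 -> Qwen3 32B
print(round(est_decode_tps(27.0, "FP32"), 1))  # 5.7 -> Gemma 3 27B
```

MoE models beat these estimates because only their active parameters are read per token, which is why the A22B and A3B entries in the list below run far faster than their total parameter counts would suggest.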
Apple's Metal backend is fully supported by both MLX and llama.cpp, giving excellent performance on macOS. Among 128 GB Apple Silicon configurations, it sits above the laptop-class Apple M4 Max (128GB) and Apple M3 Max (128GB) in performance, while the desktop-class Apple M1 Ultra (128GB), with its higher memory bandwidth, remains ahead.
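For the llama.cpp route, the llama-cpp-python bindings use the Metal backend by default on Apple Silicon; a minimal sketch, with a placeholder GGUF path:

```python
# Run a GGUF model fully offloaded to the Metal GPU via llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3.3-70b-instruct-q8_0.gguf",  # placeholder
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # 8K context, matching the figures on this page
)

result = llm("Summarize the benefits of unified memory:", max_tokens=128)
print(result["choices"][0]["text"])
```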
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M5 Max |
| CPU cores | 18 (6E + 12P) |
| VRAM | 128 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 614 GB/s |
| Compute backend | Metal |
| Tier | Laptop |
| Released | 2026 |
| Models (native) | 58 / 70 |
| Models (offload) | 0 / 70 |
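The native/offload counts above reduce to a memory question: do the weights plus the 8K-context KV cache fit in what macOS lets the GPU allocate? A toy fit check, where every constant is an illustrative assumption rather than a measured value:

```python
# Toy "runs natively in VRAM" check: weights + KV cache vs. usable memory.
# All constants are illustrative assumptions, not measured values.
BYTES_PER_PARAM = {"Q2_K": 0.32, "Q8_0": 1.0, "BF16": 2.0, "FP32": 4.0}
USABLE_GB = 128 * 0.75  # assume ~75% of unified memory is GPU-allocatable

def fits_in_vram(params_billions: float, quant: str,
                 kv_cache_gb: float = 4.0) -> bool:
    """kv_cache_gb is a rough allowance for an 8K-token KV cache."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return weights_gb + kv_cache_gb <= USABLE_GB

print(fits_in_vram(70, "Q8_0"))   # True  -> Llama 3.3 70B fits
print(fits_in_vram(405, "Q2_K"))  # False -> 405B-class does not
```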
Models this GPU runs natively in VRAM (58)
| Model | Params | MMLU-Pro | Quant | Speed |
| --- | --- | --- | --- | --- |
| DeepSeek V4 Flash 284B | 284B | — | Q2_K | ~157.9 t/s |
| Qwen3 235B-A22B (MoE) | 235B | — | Q3_K_M | ~71.4 t/s |
| MiniMax M2.5 229B | 229B | — | Q3_K_M | ~157.1 t/s |
| MiniMax M2.7 229B | 229B | — | Q3_K_M | ~157.1 t/s |
| Mixtral 8x22B Instruct v0.1 | 141B | 40.0 | Q5_K_M | ~26.9 t/s |
| Qwen 3.5 122B-A10B (MoE) | 122B | — | Q6_K | ~82.4 t/s |
| Nemotron 3 Super 120B | 120B | — | Q6_K | ~68.6 t/s |
| GPT-OSS 120B | 117B | — | Q6_K | ~164.7 t/s |
| Llama 4 Scout 109B | 109B | 70.0 | Q6_K | ~48.5 t/s |
| GLM-4.5 Air 106B | 106B | — | Q8_0 | ~56.3 t/s |
| GLM-4.6V 106B | 106B | — | Q8_0 | ~56.3 t/s |
| Qwen 2.5 72B Instruct | 72B | 58.1 | Q8_0 | ~8.5 t/s |
| Llama 3.3 70B Instruct | 70B | 68.9 | Q8_0 | ~8.8 t/s |
| DeepSeek R1 Distill Llama 70B | 70B | 70.0 | Q8_0 | ~8.8 t/s |
| Llama 3.1 70B Instruct | 70B | 66.4 | Q8_0 | ~8.8 t/s |
| Mixtral 8x7B Instruct v0.1 | 46.7B | 29.7 | BF16 | ~26.2 t/s |
| Command-R 35B | 35B | 33.0 | BF16 | ~8.8 t/s |
| Qwen 3.5 35B-A3B (MoE) | 35B | — | BF16 | ~112.6 t/s |
| Qwen 3.6 35B | 35B | — | BF16 | ~8.8 t/s |
| Yi 1.5 34B Chat | 34.4B | 37.0 | BF16 | ~8.9 t/s |
| Qwen3 32B | 32.8B | — | BF16 | ~9.4 t/s |
| Qwen 2.5 32B Instruct | 32.5B | 55.1 | BF16 | ~9.4 t/s |
| Qwen 2.5 Coder 32B Instruct | 32.5B | 50.4 | BF16 | ~9.4 t/s |
| DeepSeek R1 Distill Qwen 32B | 32.5B | 65.0 | BF16 | ~9.4 t/s |
| Nemotron 3 Nano 30B | 32B | — | BF16 | ~112.6 t/s |
| Gemma 4 31B | 31B | — | BF16 | ~9.9 t/s |
| Qwen3 30B-A3B (MoE) | 30B | — | BF16 | ~112.6 t/s |
| Gemma 2 27B Instruct | 27.2B | 38.0 | BF16 | ~11.3 t/s |
| Gemma 3 27B Instruct | 27B | — | FP32 | ~5.7 t/s |
| Qwen 3.6 27B | 27B | — | FP32 | ~5.7 t/s |
| Gemma 4 26B (MoE) | 26B | — | FP32 | ~44.4 t/s |
| Mistral Small 3.1 24B Instruct | 24B | — | FP32 | ~6.4 t/s |
| Mistral Small 22B | 22.2B | 49.2 | FP32 | ~6.9 t/s |
| GPT-OSS 20B | 21B | — | FP32 | ~42.2 t/s |
| Qwen3 14B | 14.8B | — | FP32 | ~10.4 t/s |
| Qwen 2.5 14B Instruct | 14.7B | 51.2 | FP32 | ~10.4 t/s |
| Phi-4 14B Instruct | 14B | 56.1 | FP32 | ~11 t/s |
| Mistral Nemo 12B Instruct | 12.2B | 35.6 | FP32 | ~12.6 t/s |
| Gemma 3 12B Instruct | 12.2B | — | FP32 | ~12.6 t/s |
| Gemma 2 9B Instruct | 9.2B | 32.0 | FP32 | ~16.7 t/s |
| Llama 3.1 8B Instruct | 8B | 37.5 | FP32 | ~19.2 t/s |
| DeepSeek R1 Distill Llama 8B | 8B | 41.0 | FP32 | ~19.2 t/s |
| Qwen3 8B | 8B | — | FP32 | ~19.2 t/s |
| Qwen 2.5 7B Instruct | 7.6B | 36.5 | FP32 | ~20.2 t/s |
| Mistral 7B Instruct v0.3 | 7.25B | 30.0 | FP32 | ~21.2 t/s |
| Gemma 3 4B Instruct | 4B | — | FP32 | ~38.4 t/s |
| Gemma 4 E4B | 4B | — | FP32 | ~38.4 t/s |
| Phi-3.5 Mini Instruct | 3.8B | 35.6 | FP32 | ~40.4 t/s |
| Llama 3.2 3B Instruct | 3.2B | 24.0 | FP32 | ~48 t/s |
| Qwen 2.5 3B Instruct | 3.1B | 32.4 | FP32 | ~49.5 t/s |
| Gemma 2 2B Instruct | 2.6B | 17.8 | FP32 | ~59 t/s |
| Gemma 4 E2B | 2B | — | FP32 | ~76.8 t/s |
| SmolLM2 1.7B Instruct | 1.7B | 19.0 | FP32 | ~90.3 t/s |
| Qwen 2.5 1.5B Instruct | 1.5B | 16.8 | FP32 | ~102.3 t/s |
| Llama 3.2 1B Instruct | 1.24B | 12.5 | FP32 | ~123.8 t/s |
| Gemma 3 1B Instruct | 1B | — | FP32 | ~153.5 t/s |
| Qwen 2.5 0.5B Instruct | 0.5B | 10.0 | FP32 | ~307 t/s |
| SmolLM2 360M Instruct | 0.36B | 8.0 | FP32 | ~426.4 t/s |
Too large for this GPU (12)
Frequently asked questions
- How much VRAM does the Apple M5 Max (128GB) have?
- The Apple M5 Max (128GB) has 128 GB of LPDDR5X with 614 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What LLMs can the Apple M5 Max (128GB) run locally?
- The Apple M5 Max (128GB) can run 58 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include Llama 3.3 70B Instruct at Q8_0, Llama 3.1 8B Instruct at FP32, and Llama 3.2 3B Instruct at FP32.
- Can the Apple M5 Max (128GB) run Llama 3.3 70B Instruct?
- Yes. The Apple M5 Max (128GB) runs Llama 3.3 70B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 8.8 tokens per second.
- Can the Apple M5 Max (128GB) run Qwen 3.6 27B?
- Yes. The Apple M5 Max (128GB) runs Qwen 3.6 27B natively in VRAM at FP32 precision, achieving approximately 5.7 tokens per second.
- Can the Apple M5 Max (128GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M5 Max (128GB) runs Llama 3.1 8B Instruct natively in VRAM at FP32 precision, achieving approximately 19.2 tokens per second.