
Apple M5 Pro (24GB)

The Apple M5 Pro (24GB) has 24 GB VRAM and 307 GB/s memory bandwidth. It can run 42 of our 70 tracked models natively in VRAM at 8K context.

The Apple M5 Pro (24GB) is the entry point for local LLM inference on the M5 Pro MacBook Pro, with 24GB of unified memory and 307 GB/s bandwidth. Qwen 3.6 27B fits at Q4_K_M with some headroom, and Gemma 4 26B MoE runs efficiently at Q4_K_M — both fully in-memory via MLX and llama.cpp. Qwen 3.6 35B and Gemma 4 31B require CPU offload at this memory tier, but the M5 Pro (24GB) delivers genuine 27B-class on-device AI capability without sacrificing MacBook portability.

Apple M5 Pro (24GB) is a mobile/laptop Apple Silicon chip based on the Apple M5 Pro architecture, released in 2026. It features 24 GB of LPDDR5X unified memory with 307 GB/s of memory bandwidth. Because Apple Silicon memory is unified between the CPU and GPU, the full 24 GB can be allocated to model weights. MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative. Both are well-supported by Ollama.
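As a rough illustration of the unified-memory fit check described above, the sketch below estimates whether quantized weights plus a KV cache fit in 24 GB. All architecture numbers here (layer count, KV heads, head dimension, overhead) are illustrative assumptions, not specs of any particular model.

```python
def fits_in_memory(params_b, bits_per_weight, ctx_len,
                   n_layers, n_kv_heads, head_dim,
                   mem_gb=24.0, kv_bytes_per_elem=2, overhead_gb=2.0):
    """Rough fit check: quantized weight bytes + FP16 KV cache + a fixed
    allowance for the OS and runtime buffers, against the unified pool."""
    weights_gb = params_b * bits_per_weight / 8        # params_b in billions -> GB
    kv_gb = (2 * n_layers * n_kv_heads * head_dim      # K and V per layer
             * ctx_len * kv_bytes_per_elem) / 1e9
    total = weights_gb + kv_gb + overhead_gb
    return total, total <= mem_gb

# Hypothetical 27B dense model at ~4.8 bits/weight (approx. Q4_K_M), 8K context
total, ok = fits_in_memory(27, 4.8, 8192, n_layers=48, n_kv_heads=8, head_dim=128)
print(f"{total:.1f} GB -> {'fits' if ok else 'needs offload'}")  # 19.8 GB -> fits
```

With the same assumed architecture, a 35B model lands at roughly 24.6 GB and tips over the 24 GB pool, which matches the offload requirement noted above.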

For local LLM inference, this GPU runs 42 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is Mixtral 8x7B Instruct v0.1 (79.6 t/s at Q2_K). It comfortably runs models up to ~27-32B parameters at Q4. Larger models need CPU offload or multi-GPU. On Qwen 3.6 27B, it achieves approximately 20.2 tokens per second at Q4_K_M quantization.
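The throughput figures above are consistent with a simple memory-bandwidth model: single-stream decoding on a dense model is bound by reading every weight once per generated token, so tokens per second is roughly bandwidth divided by model size in bytes. The ~4.8 bits/weight figure for Q4_K_M is an approximation, and real runtimes can land slightly above or below this estimate.

```python
def est_decode_tps(params_b, bits_per_weight, bandwidth_gbs):
    """Rough estimate of bandwidth-bound decoding speed for a dense model:
    each generated token requires one full pass over the quantized weights."""
    weight_gb = params_b * bits_per_weight / 8  # params_b in billions -> GB
    return bandwidth_gbs / weight_gb

# Hypothetical 27B model at ~4.8 bits/weight on 307 GB/s
print(round(est_decode_tps(27, 4.8, 307), 1))  # 19.0 -- close to the measured 20.2 t/s
```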

Apple's Metal backend is fully supported by MLX and llama.cpp, giving excellent performance on macOS. Among laptop GPUs, it sits above Apple M4 Pro (24GB) and NVIDIA RTX 4060 Ti 16GB in performance, but below Apple M5 Pro (36GB).

Vendor: Apple
Architecture: Apple M5 Pro
CPU cores: 15 (5S + 10P)
VRAM: 24 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 307 GB/s
Compute backend: Metal
Tier: Laptop
Released: 2026
Models (native): 42 / 70
Models (offload): 0 / 70
Software: MLX (best performance on Apple Silicon); llama.cpp Metal backend as an alternative; both well-supported by Ollama

Models this GPU runs natively in VRAM (42)

Too large for this GPU (28)

Frequently asked questions

How much VRAM does the Apple M5 Pro (24GB) have?
The Apple M5 Pro (24GB) has 24 GB of LPDDR5X with 307 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
What LLMs can the Apple M5 Pro (24GB) run locally?
The Apple M5 Pro (24GB) can run 42 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include: Llama 3.1 8B Instruct at BF16, Llama 3.2 3B Instruct at FP32, Llama 3.2 1B Instruct at FP32.
Can the Apple M5 Pro (24GB) run Llama 3.3 70B Instruct?
The Apple M5 Pro (24GB) does not have enough VRAM to run Llama 3.3 70B Instruct. You would need a GPU with more VRAM, or a more aggressive (lower-bit) quantization.
Can the Apple M5 Pro (24GB) run Qwen 3.6 27B?
Yes. The Apple M5 Pro (24GB) runs Qwen 3.6 27B natively in VRAM at Q4_K_M quantization, achieving approximately 20.2 tokens per second.
Can the Apple M5 Pro (24GB) run Llama 3.1 8B Instruct?
Yes. The Apple M5 Pro (24GB) runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 19.2 tokens per second.