CanItRun

Apple M5 Max (64GB)

The Apple M5 Max (64GB) has 64 GB VRAM and 614 GB/s memory bandwidth. It can run 54 of our 70 tracked models natively in VRAM at 8K context.

The Apple M5 Max (64GB) delivers serious local LLM inference capability, pairing 64GB of unified memory with 614 GB/s bandwidth for fast on-device AI on Mac. Qwen 3.6 35B fits at Q8_0 with headroom, and Gemma 4 31B runs at Q8_0 or F16 precision without CPU offload — covering today's most capable open-weight models at high quality. The 18-core CPU and tight integration with MLX and llama.cpp make the M5 Max (64GB) the go-to Apple Silicon choice for uncompromised local inference.

The Apple M5 Max (64GB) is a mobile/laptop Apple Silicon chip based on the Apple M5 Max architecture, released in 2026. It features 64 GB of LPDDR5X unified memory with 614 GB/s of memory bandwidth. As an Apple Silicon chip, its memory is unified between CPU and GPU, so the full 64 GB can be allocated to model weights. MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative. Both are well supported by Ollama.
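The unified-memory fit claims above can be sanity-checked with a back-of-envelope calculation: weights plus KV cache plus runtime overhead must stay under 64 GB. This is a minimal sketch; the bytes-per-weight figures for GGUF quant levels are approximations, and the fixed KV-cache and overhead terms are illustrative assumptions, not measured values.

```python
# Rough fit check against a 64 GB unified-memory budget.
# Bytes-per-weight values are approximate averages for common GGUF
# quant levels; kv_cache_gb is an assumed allowance for 8K context.

BYTES_PER_WEIGHT = {"Q4_K_M": 0.57, "Q5_K_M": 0.69, "Q8_0": 1.06, "F16": 2.0}

def fits_in_memory(params_b, quant, mem_gb=64.0, kv_cache_gb=2.0, overhead_gb=2.0):
    """Return (footprint_gb, fits) for a model with params_b billion weights."""
    weights_gb = params_b * BYTES_PER_WEIGHT[quant]
    footprint_gb = weights_gb + kv_cache_gb + overhead_gb
    return footprint_gb, footprint_gb <= mem_gb

print(fits_in_memory(70, "Q5_K_M"))  # ~52 GB total: fits in 64 GB
print(fits_in_memory(70, "Q8_0"))    # ~78 GB total: does not fit
```

This matches the page's numbers: a 70B model fits at Q5_K_M but not at Q8_0, which is why the largest dense models on this chip run at 5-bit quantization or below.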

For local LLM inference, this GPU runs 54 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is Mixtral 8x22B Instruct v0.1 (52.6 t/s at Q2_K). It handles most models up to the 70B class in VRAM, including some larger MoE models. On Llama 3.3 70B Instruct, it achieves approximately 13.6 tokens per second at Q5_K_M quantization.
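The throughput figures quoted above are roughly what memory bandwidth predicts: during decode, each generated token reads every active weight once, so tokens per second is bounded by bandwidth divided by model size in bytes. This sketch is an idealized ceiling, not a benchmark; real numbers shift with KV-cache traffic, compute overhead, and (for MoE models) how many experts are active per token.

```python
# Bandwidth-bound decode estimate: t/s <= bandwidth / bytes read per token.
# For a dense model, bytes per token ~= total weight size.

def decode_ceiling_tps(model_gb, bandwidth_gbs=614.0):
    """Idealized tokens/s ceiling for a dense model of model_gb gigabytes."""
    return bandwidth_gbs / model_gb

# Llama 3.3 70B at Q5_K_M is roughly 48 GB of weights (~0.69 bytes/weight):
print(round(decode_ceiling_tps(48.3), 1))  # ~12.7 t/s
```

That estimate lands in the same ballpark as the quoted 13.6 t/s for Llama 3.3 70B, and it also explains why the MoE Mixtral 8x22B is much faster despite being larger on disk: only a fraction of its weights are read per token.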

Apple's Metal backend is fully supported by MLX and llama.cpp, giving excellent performance on macOS. Among Apple Silicon options, it sits above the Apple M4 Max (64GB) and Apple M5 Max (48GB) in performance, though the Apple M1 Ultra (64GB) still outpaces it.

Vendor: Apple
Architecture: Apple M5 Max
CPU cores: 18 (6S + 12P)
VRAM: 64 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 614 GB/s
Compute backend: Metal
Tier: Laptop
Released: 2026
Models (native): 54 / 70
Models (offload): 0 / 70
Software: MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative. Both are well supported by Ollama.

Models this GPU runs natively in VRAM (54)

Too large for this GPU (16)

Frequently asked questions

How much VRAM does the Apple M5 Max (64GB) have?
The Apple M5 Max (64GB) has 64 GB of LPDDR5X with 614 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
What LLMs can the Apple M5 Max (64GB) run locally?
The Apple M5 Max (64GB) can run 54 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include: Llama 3.3 70B Instruct at Q5_K_M, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
Can the Apple M5 Max (64GB) run Llama 3.3 70B Instruct?
Yes. The Apple M5 Max (64GB) runs Llama 3.3 70B Instruct natively in VRAM at Q5_K_M quantization, achieving approximately 13.6 tokens per second.
Can the Apple M5 Max (64GB) run Qwen 3.6 27B?
Yes. The Apple M5 Max (64GB) runs Qwen 3.6 27B natively in VRAM at Q8_0 quantization, achieving approximately 22.7 tokens per second.
Can the Apple M5 Max (64GB) run Llama 3.1 8B Instruct?
Yes. The Apple M5 Max (64GB) runs Llama 3.1 8B Instruct natively in VRAM at full FP32 precision, achieving approximately 19.2 tokens per second.