CanItRun

Apple M3 Ultra (512GB)

The Apple M3 Ultra (512GB) has 512 GB VRAM and 819 GB/s memory bandwidth. It can run 69 of our 70 tracked models natively in VRAM at 8k context.

Apple M3 Ultra (512GB): 512GB unified memory with 819 GB/s bandwidth. One of the few consumer systems that can run 600B+ parameter LLMs entirely in memory. Ideal for frontier-scale inference and massive MoE models.

Apple M3 Ultra (512GB): 2025 flagship desktop chip with 512GB unified LPDDR5X at 819 GB/s. 32-core CPU (24P+8E), 80-core GPU, 32-core Neural Engine. 184 billion transistors on TSMC 3nm.

Runs 600B+ parameter models locally. 70B at Q4 ~18-30 t/s, 32B at Q4 ~35-60 t/s, 7B at FP16 ~40-55 t/s. One of the few consumer systems capable of frontier-scale local inference.
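These throughput figures follow from memory bandwidth: each decoded token streams the full weight set through the memory bus, so tokens/s is capped at bandwidth divided by model size. A minimal sketch of that ceiling (the 4.5 bits/weight for Q4 is an assumption covering quantization scales; dense models only — MoE models stream just the active experts, so they beat this bound):

```python
def decode_tps_ceiling(params_b: float, bits_per_weight: float,
                       bandwidth_gbs: float = 819.0) -> float:
    """Bandwidth-bound upper limit on decode speed (tokens/s).

    Every generated token reads all weights once, so
    tokens/s <= bandwidth / model_bytes. Real throughput is somewhat lower.
    """
    model_gb = params_b * bits_per_weight / 8  # decimal GB of weights
    return bandwidth_gbs / model_gb

print(round(decode_tps_ceiling(70, 32), 1))   # 70B at FP32 -> 2.9
print(round(decode_tps_ceiling(70, 4.5), 1))  # 70B at ~Q4 (4.5 bits/weight) -> 20.8
```

The FP32 result lines up with the ~2.9 t/s figure for Llama 3.3 70B in the FAQ, and the Q4 result sits inside the quoted 18-30 t/s band.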

MLX framework native. llama.cpp Metal backend fully optimized. Thunderbolt 5 connectivity. Hardware-accelerated ray tracing and dynamic caching. Far better power efficiency than comparable multi-GPU PCs.

Vendor: Apple
Architecture: Apple M3 Ultra
CPU cores: 32 (24P + 8E)
VRAM: 512 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 819 GB/s
Compute backend: Metal
Tier: Workstation
Released: 2025
Models (native): 69 / 70
Models (offload): 0 / 70
Software: MLX gives the best performance on Apple Silicon; llama.cpp Metal backend is a solid alternative. Both are well-supported by Ollama.
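Any of these runtimes can serve the models listed below. As a sketch, here is a request body for Ollama's local HTTP API; the model tag and prompt are example values, and the body would be POSTed to http://localhost:11434/api/generate with an Ollama server running:

```python
import json

def ollama_generate_payload(model: str, prompt: str, num_ctx: int = 8192) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint.

    num_ctx sets the context window; 8192 matches the 8k-context
    figure used throughout this page.
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": num_ctx},
    })

body = ollama_generate_payload("llama3.1:70b", "Why is unified memory useful?")
print(body)
```

Setting `stream` to False returns the full completion in a single JSON response rather than a stream of chunks.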

Models this GPU runs natively in VRAM (69)

Too large for this GPU (1)

Frequently asked questions

How much VRAM does the Apple M3 Ultra (512GB) have?
The Apple M3 Ultra (512GB) has 512 GB of LPDDR5X with 819 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
What is the Apple M3 Ultra (512GB) best for?
With 512 GB of unified memory, the Apple M3 Ultra (512GB) is a workstation-class system designed for running the largest open-weight models (70B–405B) at high-precision quantizations with ample context.
What LLMs can the Apple M3 Ultra (512GB) run locally?
The Apple M3 Ultra (512GB) can run 69 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at FP32, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
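The 8k-context qualifier matters because the KV cache adds to the weight footprint. A rough sketch of that term, assuming Llama-style grouped-query attention dimensions and an FP16 cache (the 8B figures are from the Llama 3.1 8B architecture):

```python
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx: int = 8192, bytes_per_elem: int = 2) -> float:
    """KV-cache size in GiB for one sequence.

    Each layer stores a K and a V tensor (the leading factor of 2)
    of shape (kv_heads, ctx, head_dim) at bytes_per_elem per value.
    """
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 2**30

# Llama 3.1 8B: 32 layers, 8 KV heads (GQA), head_dim 128, FP16 cache at 8k
print(kv_cache_gib(32, 8, 128))  # -> 1.0
```

Even for a 70B-class model (80 layers, same GQA layout) the 8k cache is only a few GiB, which is why the 512 GB pool comfortably absorbs it on top of the weights.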
Can the Apple M3 Ultra (512GB) run Llama 3.3 70B Instruct?
Yes. The Apple M3 Ultra (512GB) runs Llama 3.3 70B Instruct natively in VRAM at FP32 quantization, achieving approximately 2.9 tokens per second.
Can the Apple M3 Ultra (512GB) run Qwen 3.6 27B?
Yes. The Apple M3 Ultra (512GB) runs Qwen 3.6 27B natively in VRAM at FP32 quantization, achieving approximately 7.6 tokens per second.
Can the Apple M3 Ultra (512GB) run Llama 3.1 8B Instruct?
Yes. The Apple M3 Ultra (512GB) runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 25.6 tokens per second.