Apple M3 Ultra (512GB)
The Apple M3 Ultra (512GB) is Apple's 2025 flagship desktop chip: 512 GB of unified LPDDR5X memory at 819 GB/s, a 32-core CPU (24 performance + 8 efficiency), an 80-core GPU, and a 32-core Neural Engine, built from 184 billion transistors on TSMC's 3nm process. Because the memory is unified and shared between CPU and GPU, nearly all 512 GB is available as VRAM, making this one of the only consumer systems capable of running 600B+ parameter LLMs entirely in memory. It runs 69 of our 70 tracked models natively in VRAM at 8k context, making it ideal for frontier-scale inference and massive MoE models.
Typical throughput: 70B models at Q4 reach ~18-30 t/s, 32B at Q4 ~35-60 t/s, and 7B at FP16 exceed 100 t/s.
Software and platform: the MLX framework runs natively, and llama.cpp's Metal backend is fully optimized. The chip also offers Thunderbolt 5 connectivity, hardware-accelerated ray tracing, dynamic caching, and far higher power efficiency than comparable multi-GPU PCs.
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M3 Ultra |
| CPU cores | 32 (24P + 8E) |
| VRAM | 512 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 819 GB/s |
| Compute backend | Metal |
| Tier | Workstation |
| Released | 2025 |
| Models (native) | 69 / 70 |
| Models (offload) | 0 / 70 |
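A quick way to sanity-check the "fits natively in VRAM" figures: a model's weight footprint is roughly parameter count × bytes per parameter, plus KV cache for the context window. The sketch below is a simplified estimate, not CanItRun's exact method — the per-quant byte sizes are approximate GGUF averages, and the flat 8 GB KV-cache allowance for 8k context is an assumption.

```python
# Rough "does it fit?" estimate for unified-memory inference.
# Ignores runtime overhead; real loaders need a few extra GB.

BYTES_PER_PARAM = {  # approximate effective bytes per weight (assumed values)
    "FP32": 4.0, "BF16": 2.0, "Q8_0": 1.0,
    "Q6_K": 0.82, "Q5_K_M": 0.69, "Q4_K_M": 0.57, "Q3_K_M": 0.44,
}

def fits_in_memory(params_b: float, quant: str, mem_gb: float = 512.0,
                   kv_cache_gb: float = 8.0) -> bool:
    """True if weights plus an assumed 8k-context KV cache fit in mem_gb."""
    weights_gb = params_b * BYTES_PER_PARAM[quant]  # billions of params -> GB
    return weights_gb + kv_cache_gb <= mem_gb

print(fits_in_memory(70, "FP32"))     # 70B at FP32 ~ 280 GB -> True
print(fits_in_memory(671, "Q5_K_M"))  # DeepSeek V3/R1 ~ 463 GB -> True
print(fits_in_memory(671, "Q8_0"))    # ~ 671 GB -> False, would need offload
```

This matches the listings above: DeepSeek's 671B models appear at Q5_K_M rather than Q8_0, since Q8_0 weights alone would exceed 512 GB.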
Models this GPU runs natively in VRAM (69)
| Model | Params | MMLU-Pro | Quant | Est. speed |
| --- | --- | --- | --- | --- |
| Kimi K2.6 | 1000B | 87.2 | Q3_K_M | ~65.5 t/s |
| GLM-5.1 754B | 754B | 86.5 | Q4_K_M | ~36.4 t/s |
| GLM-5 744B | 744B | 85.7 | Q4_K_M | ~40 t/s |
| DeepSeek V3 671B | 671B | 75.9 | Q5_K_M | ~37.8 t/s |
| DeepSeek R1 671B | 671B | 85.0 | Q5_K_M | ~37.8 t/s |
| MiniMax M1 456B | 456B | 81.1 | Q6_K | ~23.9 t/s |
| Llama 3.1 405B Instruct | 405B | 73.3 | Q8_0 | ~2 t/s |
| Llama 4 Maverick 400B | 400B | 80.5 | Q8_0 | ~53 t/s |
| GLM-4.7 358B | 358B | 84.3 | Q8_0 | ~28.2 t/s |
| GLM-4.5 355B | 355B | 84.6 | Q8_0 | ~28.2 t/s |
| GLM-4.6 355B | 355B | 84.5 | Q8_0 | ~28.2 t/s |
| DeepSeek V4 Flash 284B | 284B | 86.3 | Q8_0 | ~69.3 t/s |
| Qwen3 235B-A22B (MoE) | 235B | 84.4 | Q8_0 | ~41 t/s |
| MiniMax M2.5 229B | 229B | 84.8 | Q8_0 | ~90.1 t/s |
| MiniMax M2.7 229B | 229B | 86.0 | Q8_0 | ~90.1 t/s |
| Mixtral 8x22B Instruct v0.1 | 141B | 40.0 | BF16 | ~11.6 t/s |
| Qwen 3.5 122B-A10B (MoE) | 122B | 86.7 | BF16 | ~45 t/s |
| Nemotron 3 Super 120B | 120B | 83.7 | BF16 | ~37.5 t/s |
| GPT-OSS 120B | 117B | 80.7 | BF16 | ~90.1 t/s |
| Llama 4 Scout 109B | 109B | 74.3 | FP32 | ~13.2 t/s |
| GLM-4.5 Air 106B | 106B | 81.4 | FP32 | ~18.8 t/s |
| GLM-4.6V 106B | 106B | 79.9 | FP32 | ~18.8 t/s |
| Qwen 2.5 72B Instruct | 72B | 71.1 | FP32 | ~2.8 t/s |
| Llama 3.3 70B Instruct | 70B | 68.9 | FP32 | ~2.9 t/s |
| DeepSeek R1 Distill Llama 70B | 70B | 70.0 | FP32 | ~2.9 t/s |
| Llama 3.1 70B Instruct | 70B | 66.4 | FP32 | ~2.9 t/s |
| Mixtral 8x7B Instruct v0.1 | 46.7B | 29.7 | FP32 | ~17.5 t/s |
| Command-R 35B | 35B | 33.0 | FP32 | ~5.9 t/s |
| Qwen 3.5 35B-A3B (MoE) | 35B | 84.2 | FP32 | ~75.1 t/s |
| Qwen 3.6 35B | 35B | 85.2 | FP32 | ~5.9 t/s |
| Yi 1.5 34B Chat | 34.4B | 37.0 | FP32 | ~6 t/s |
| Qwen3 32B | 32.8B | 65.5 | FP32 | ~6.2 t/s |
| Qwen 2.5 32B Instruct | 32.5B | 69.0 | FP32 | ~6.3 t/s |
| Qwen 2.5 Coder 32B Instruct | 32.5B | 50.4 | FP32 | ~6.3 t/s |
| DeepSeek R1 Distill Qwen 32B | 32.5B | 65.0 | FP32 | ~6.3 t/s |
| Nemotron 3 Nano 30B | 32B | 78.3 | FP32 | ~75.1 t/s |
| Gemma 4 31B | 31B | 85.2 | FP32 | ~6.6 t/s |
| Qwen3 30B-A3B (MoE) | 30B | 61.5 | FP32 | ~75.1 t/s |
| Gemma 2 27B Instruct | 27.2B | 38.0 | FP32 | ~7.5 t/s |
| Gemma 3 27B Instruct | 27B | 67.5 | FP32 | ~7.6 t/s |
| Qwen 3.6 27B | 27B | 86.2 | FP32 | ~7.6 t/s |
| Gemma 4 26B (MoE) | 26B | 82.6 | FP32 | ~59.3 t/s |
| Mistral Small 3.1 24B Instruct | 24B | 66.8 | FP32 | ~8.5 t/s |
| Mistral Small 22B | 22.2B | 49.2 | FP32 | ~9.2 t/s |
| GPT-OSS 20B | 21B | 67.9 | FP32 | ~56.3 t/s |
| Qwen3 14B | 14.8B | 61.0 | FP32 | ~13.8 t/s |
| Qwen 2.5 14B Instruct | 14.7B | 63.7 | FP32 | ~13.9 t/s |
| Phi-4 14B Instruct | 14B | 70.4 | FP32 | ~14.6 t/s |
| Mistral Nemo 12B Instruct | 12.2B | 35.6 | FP32 | ~16.8 t/s |
| Gemma 3 12B Instruct | 12.2B | 60.6 | FP32 | ~16.8 t/s |
| Gemma 2 9B Instruct | 9.2B | 32.0 | FP32 | ~22.3 t/s |
| Llama 3.1 8B Instruct | 8B | 48.3 | FP32 | ~25.6 t/s |
| DeepSeek R1 Distill Llama 8B | 8B | 41.0 | FP32 | ~25.6 t/s |
| Qwen3 8B | 8B | 56.7 | FP32 | ~25.6 t/s |
| Qwen 2.5 7B Instruct | 7.6B | 56.3 | FP32 | ~26.9 t/s |
| Mistral 7B Instruct v0.3 | 7.25B | 30.0 | FP32 | ~28.2 t/s |
| Gemma 3 4B Instruct | 4B | 43.6 | FP32 | ~51.2 t/s |
| Gemma 4 E4B | 4B | 69.4 | FP32 | ~51.2 t/s |
| Phi-3.5 Mini Instruct | 3.8B | 47.4 | FP32 | ~53.9 t/s |
| Llama 3.2 3B Instruct | 3.2B | 24.0 | FP32 | ~64 t/s |
| Qwen 2.5 3B Instruct | 3.1B | 32.4 | FP32 | ~66 t/s |
| Gemma 2 2B Instruct | 2.6B | 17.8 | FP32 | ~78.8 t/s |
| Gemma 4 E2B | 2B | 60.0 | FP32 | ~102.4 t/s |
| SmolLM2 1.7B Instruct | 1.7B | 19.0 | FP32 | ~120.4 t/s |
| Qwen 2.5 1.5B Instruct | 1.5B | 16.8 | FP32 | ~136.5 t/s |
| Llama 3.2 1B Instruct | 1.24B | 12.5 | FP32 | ~165.1 t/s |
| Gemma 3 1B Instruct | 1B | 14.7 | FP32 | ~204.8 t/s |
| Qwen 2.5 0.5B Instruct | 0.5B | 10.0 | FP32 | ~409.5 t/s |
| SmolLM2 360M Instruct | 0.36B | 8.0 | FP32 | ~568.8 t/s |
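The estimated speeds above follow a simple memory-bandwidth roofline: to generate one token, the full set of (active) weights must stream through memory once, so tokens/s ≈ bandwidth ÷ weight bytes. A minimal sketch of that estimate — treating the full 819 GB/s as usable is an idealization, and MoE models should use active rather than total parameters:

```python
# Bandwidth-bound decode estimate: each output token reads every
# (active) weight once, so t/s ~= memory bandwidth / weight bytes.

BANDWIDTH_GBPS = 819.0  # M3 Ultra memory bandwidth

def decode_tps(params_b: float, bytes_per_param: float) -> float:
    """Idealized tokens/sec for a dense model of params_b billion weights."""
    weight_gb = params_b * bytes_per_param
    return BANDWIDTH_GBPS / weight_gb

print(round(decode_tps(70, 4.0), 1))   # 70B FP32: 819/280
print(round(decode_tps(8, 4.0), 1))    # 8B FP32: 819/32
print(round(decode_tps(405, 1.0), 1))  # 405B Q8_0: 819/405
```

These reproduce the listed figures (~2.9 t/s for 70B FP32, ~25.6 t/s for 8B FP32, ~2 t/s for 405B Q8_0), which is why the small 70B models at FP32 are paradoxically slower here than 1000B-class MoE models at Q3-Q5: only the active experts stream per token.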
Too large for this GPU (1)
Frequently asked questions
- How much VRAM does the Apple M3 Ultra (512GB) have?
- The Apple M3 Ultra (512GB) has 512 GB of LPDDR5X with 819 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What is the Apple M3 Ultra (512GB) best for?
- With 512 GB of unified memory available as VRAM, the Apple M3 Ultra (512GB) is a workstation-class system designed for running the largest open-weight models (400B-1000B parameters) at high-precision quantization with ample context.
- What LLMs can the Apple M3 Ultra (512GB) run locally?
- The Apple M3 Ultra (512GB) can run 69 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at FP32, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
- Can the Apple M3 Ultra (512GB) run Llama 3.3 70B Instruct?
- Yes. The Apple M3 Ultra (512GB) runs Llama 3.3 70B Instruct natively in VRAM at FP32 quantization, achieving approximately 2.9 tokens per second.
- Can the Apple M3 Ultra (512GB) run Qwen 3.6 27B?
- Yes. The Apple M3 Ultra (512GB) runs Qwen 3.6 27B natively in VRAM at FP32 quantization, achieving approximately 7.6 tokens per second.
- Can the Apple M3 Ultra (512GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M3 Ultra (512GB) runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 25.6 tokens per second.