Apple M3 Ultra (96GB)
The Apple M3 Ultra (96GB) is a 2025 desktop chip built on TSMC's 3nm process, pairing a 28-core CPU (20P + 8E), 60-core GPU, and 32-core Neural Engine with 96 GB of unified LPDDR5X memory at 819 GB/s. Because that unified pool is shared between CPU and GPU, it serves as VRAM, and the chip can run 57 of our 70 tracked models natively in VRAM at 8k context.

Expect roughly 18-30 t/s for 70B models at Q4 and 35-60 t/s for 32B models at Q4, which makes the chip excellent for 70B-class models at high quantization and for large MoE inference workloads.

MLX and llama.cpp's Metal backend are fully supported, and the machine ships with Thunderbolt 5. It is one of the fastest Apple Silicon chips for local AI, with the same memory bandwidth as the 256GB and 512GB tiers; a quick-start sketch follows.
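Since MLX is supported, here is a minimal text-generation sketch using the mlx-lm Python package (assumes `pip install mlx-lm`; the model repo ID below is illustrative, and any MLX-converted checkpoint should work the same way):

```python
# Minimal MLX generation sketch for Apple Silicon (assumes `pip install mlx-lm`).
from mlx_lm import load, generate

# Download (or load from the local cache) an MLX-converted model.
# The repo ID is illustrative; the weights land in unified memory,
# so the GPU can address the full 96 GB.
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")

# Generate a short completion; verbose=True prints tokens/sec stats.
text = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    max_tokens=128,
    verbose=True,
)
print(text)
```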
| Spec | Value |
| --- | --- |
| Vendor | Apple |
| Architecture | Apple M3 Ultra |
| CPU cores | 28 (20P + 8E) |
| VRAM | 96 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 819 GB/s |
| Compute backend | Metal |
| Tier | Workstation |
| Released | 2025 |
| Models (native) | 57 / 70 |
| Models (offload) | 0 / 70 |
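Single-stream decode speed on this chip is largely bandwidth-bound: each generated token streams the model's active weights out of memory, so bandwidth divided by weight bytes gives a rough ceiling. A back-of-the-envelope sketch (the bytes-per-weight values are nominal assumptions that ignore quantization block overhead):

```python
# Decode-speed ceiling: tokens/sec <= bandwidth / bytes streamed per token.
# For a dense model that is roughly parameter count * bytes per weight.
# Bytes-per-weight values are nominal assumptions for illustration.
BANDWIDTH_GB_S = 819  # M3 Ultra memory bandwidth

BYTES_PER_WEIGHT = {"Q4_K_M": 0.5, "Q8_0": 1.0, "BF16": 2.0, "FP32": 4.0}

def decode_ceiling(params_billions: float, quant: str) -> float:
    """Upper-bound tokens/sec for a dense model (ignores KV-cache traffic)."""
    gb_per_token = params_billions * BYTES_PER_WEIGHT[quant]
    return BANDWIDTH_GB_S / gb_per_token

print(f"70B @ Q8_0: ~{decode_ceiling(70, 'Q8_0'):.1f} t/s")  # ~11.7
print(f"8B @ FP32:  ~{decode_ceiling(8, 'FP32'):.1f} t/s")   # ~25.6
```

Both figures line up with the list below (~11.7 t/s for Llama 3.3 70B at Q8_0, ~25.6 t/s for Llama 3.1 8B at FP32). MoE models beat this ceiling because only their active experts are read per token.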
Models this GPU runs natively in VRAM (57)
Each entry lists model · parameters · MMLU-Pro score · quantization · estimated decode speed; a sketch of the underlying fit check follows the list.
- Qwen3 235B-A22B (MoE) · 235B · MMLU-Pro 84.4 · Q2_K · ~124.5 t/s
- MiniMax M2.5 229B · 229B · MMLU-Pro 84.8 · Q2_K · ~273.8 t/s
- MiniMax M2.7 229B · 229B · MMLU-Pro 86.0 · Q2_K · ~273.8 t/s
- Mixtral 8x22B Instruct v0.1 · 141B · MMLU-Pro 40.0 · Q4_K_M · ~41 t/s
- Qwen 3.5 122B-A10B (MoE) · 122B · MMLU-Pro 86.7 · Q5_K_M · ~139.9 t/s
- Nemotron 3 Super 120B · 120B · MMLU-Pro 83.7 · Q5_K_M · ~116.6 t/s
- GPT-OSS 120B · 117B · MMLU-Pro 80.7 · Q5_K_M · ~279.8 t/s
- Llama 4 Scout 109B · 109B · MMLU-Pro 74.3 · Q5_K_M · ~82.3 t/s
- GLM-4.5 Air 106B · 106B · MMLU-Pro 81.4 · Q5_K_M · ~116.6 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro 79.9 · Q5_K_M · ~116.6 t/s
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 71.1 · Q8_0 · ~11.4 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q8_0 · ~11.7 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q8_0 · ~11.7 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q8_0 · ~11.7 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q8_0 · ~69.8 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · BF16 · ~11.7 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · BF16 · ~150.2 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · BF16 · ~11.7 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · BF16 · ~11.9 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · BF16 · ~12.5 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · BF16 · ~12.6 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · BF16 · ~12.6 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · BF16 · ~12.6 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · BF16 · ~150.2 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · BF16 · ~13.2 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · BF16 · ~150.2 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · BF16 · ~15.1 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · BF16 · ~15.2 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · BF16 · ~15.2 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · BF16 · ~118.5 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · BF16 · ~17.1 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · BF16 · ~18.4 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · BF16 · ~112.6 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · FP32 · ~13.8 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · FP32 · ~13.9 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · FP32 · ~14.6 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · FP32 · ~16.8 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · FP32 · ~16.8 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · FP32 · ~22.3 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · FP32 · ~25.6 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · FP32 · ~25.6 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · FP32 · ~25.6 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · FP32 · ~26.9 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · FP32 · ~28.2 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · FP32 · ~51.2 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · FP32 · ~51.2 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · FP32 · ~53.9 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~64 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~66 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~78.8 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · FP32 · ~102.4 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~120.4 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~136.5 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~165.1 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · FP32 · ~204.8 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~409.5 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~568.8 t/s
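The "native in VRAM" cutoff above is, at heart, a capacity check: quantized weights plus a KV cache at 8k context must fit inside the 96 GB pool, minus headroom for macOS. A rough sketch of that check, with assumed reserve and cache sizes (the 13 models in the next section are the ones that fail it):

```python
# Rough "does it fit?" check against the 96 GB unified-memory budget.
# The OS reserve and KV-cache sizes below are illustrative assumptions.
TOTAL_GB = 96
OS_RESERVED_GB = 8  # headroom assumed for macOS and other processes
BUDGET_GB = TOTAL_GB - OS_RESERVED_GB

def fits(params_billions: float, bytes_per_weight: float,
         kv_cache_gb: float = 2.0) -> bool:
    """True if weights plus an 8k-context KV cache fit in the budget."""
    weights_gb = params_billions * bytes_per_weight
    return weights_gb + kv_cache_gb <= BUDGET_GB

print(fits(70, 1.0))    # Llama 3.3 70B @ Q8_0: ~70 GB of weights -> True
print(fits(141, 1.0))   # Mixtral 8x22B @ Q8_0: ~141 GB -> False
print(fits(141, 0.57))  # ...which is why the list shows it at Q4_K_M -> True
```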
Too large for this GPU (13)
Frequently asked questions
- How much VRAM does the Apple M3 Ultra (96GB) have?
- The Apple M3 Ultra (96GB) has 96 GB of LPDDR5X with 819 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What is the Apple M3 Ultra (96GB) best for?
- With 96 GB of unified memory, the Apple M3 Ultra (96GB) is a workstation-class chip suited to the largest open-weight models it can hold: 70B dense models at Q8_0, and MoE models up to ~235B at low-bit quantization, with ample context.
- What LLMs can the Apple M3 Ultra (96GB) run locally?
- The Apple M3 Ultra (96GB) can run 57 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at Q8_0, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
- Can the Apple M3 Ultra (96GB) run Llama 3.3 70B Instruct?
- Yes. The Apple M3 Ultra (96GB) runs Llama 3.3 70B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 11.7 tokens per second.
- Can the Apple M3 Ultra (96GB) run Qwen 3.6 27B?
- Yes. The Apple M3 Ultra (96GB) runs Qwen 3.6 27B natively in VRAM at BF16 quantization, achieving approximately 15.2 tokens per second.
- Can the Apple M3 Ultra (96GB) run Llama 3.1 8B Instruct?
- Yes. The Apple M3 Ultra (96GB) runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 25.6 tokens per second.