CanItRun

Apple M3 Ultra (96GB)

The Apple M3 Ultra (96GB) has 96 GB VRAM and 819 GB/s memory bandwidth. It can run 57 of our 70 tracked models natively in VRAM at 8k context.


Apple M3 Ultra (96GB): 2025 desktop with 96GB unified LPDDR5X at 819 GB/s. 28-core CPU (20P+8E), 60-core GPU, 32-core Neural Engine on TSMC 3nm.

70B at Q4 native ~18-30 t/s, 32B at Q4 ~35-60 t/s. Excellent for 70B-class models at high quantization and large MoE inference workloads.
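Decode throughput on unified-memory machines is roughly memory-bandwidth-bound, so the tokens-per-second figures above can be sanity-checked with simple arithmetic: each generated token reads the full weight set once, so t/s ≈ bandwidth / model size. A minimal sketch, where the bytes-per-parameter and bandwidth-efficiency figures are assumptions, not measured values:

```python
# Rough decode-speed estimate for memory-bound LLM inference.
# tokens/s ≈ achieved bandwidth / model size in bytes.
# The efficiency and bytes-per-param constants are assumptions.

BANDWIDTH_GBPS = 819  # Apple M3 Ultra memory bandwidth, GB/s

def est_tokens_per_sec(params_b: float, bytes_per_param: float,
                       efficiency: float = 0.7) -> float:
    """Estimate decode speed; `efficiency` is an assumed fraction
    of peak bandwidth actually achieved in practice."""
    model_gb = params_b * bytes_per_param
    return BANDWIDTH_GBPS * efficiency / model_gb

# 70B and 32B at Q4 (~0.56 effective bytes/param incl. overhead):
print(f"70B Q4: ~{est_tokens_per_sec(70, 0.56):.0f} t/s")
print(f"32B Q4: ~{est_tokens_per_sec(32, 0.56):.0f} t/s")
```

With these assumptions the estimates land in the same ballpark as the ranges quoted above; real throughput also depends on the runtime, context length, and prompt-processing overhead.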

MLX and llama.cpp Metal fully supported. Thunderbolt 5. One of the fastest Apple Silicon chips for local AI — same bandwidth as the 256GB and 512GB tiers.

Vendor: Apple
Architecture: Apple M3 Ultra
CPU cores: 28 (20P + 8E)
VRAM: 96 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 819 GB/s
Compute backend: Metal
Tier: Workstation
Released: 2025
Models (native): 57 / 70
Models (offload): 0 / 70
Software: MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative and also powers Ollama.
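Whether a model counts as "native" here comes down to whether its weights plus working buffers fit in unified memory. A sketch of that sizing check, where every constant is an assumption for illustration (not CanItRun's actual rules): roughly 12 GB reserved for macOS and other apps, approximate effective bytes per parameter for each quantization, and a flat allowance for the 8k-context KV cache and runtime buffers:

```python
# Sketch: does a dense model fit in 96 GB of unified memory?
# All constants below are illustrative assumptions.

USABLE_GB = 96 - 12  # assume ~12 GB reserved for macOS and other apps

BYTES_PER_PARAM = {   # approximate, including quantization overhead
    "Q4_K_M": 0.56,
    "Q8_0": 1.06,
    "FP16": 2.0,
    "FP32": 4.0,
}

def fits(params_b: float, quant: str, overhead_gb: float = 4.0) -> bool:
    """True if weights plus an assumed KV-cache/runtime overhead
    (overhead_gb) fit in the usable unified memory."""
    need_gb = params_b * BYTES_PER_PARAM[quant] + overhead_gb
    return need_gb <= USABLE_GB

print(fits(70, "Q8_0"))  # 70B at Q8_0: ~78 GB needed
print(fits(70, "FP16"))  # 70B at FP16: ~144 GB needed
```

Under these assumptions a 70B model fits at Q8_0 but not at FP16, matching the native/too-large split this page reports.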

Models this GPU runs natively in VRAM (57)

Too large for this GPU (13)

Frequently asked questions

How much VRAM does the Apple M3 Ultra (96GB) have?
The Apple M3 Ultra (96GB) has 96 GB of LPDDR5X with 819 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
What is the Apple M3 Ultra (96GB) best for?
With 96 GB of unified memory, the Apple M3 Ultra (96GB) is a workstation-class machine best suited to 70B-class models at high quantization and large MoE inference, with ample room for context.
What LLMs can the Apple M3 Ultra (96GB) run locally?
The Apple M3 Ultra (96GB) can run 57 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at Q8_0, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
Can the Apple M3 Ultra (96GB) run Llama 3.3 70B Instruct?
Yes. The Apple M3 Ultra (96GB) runs Llama 3.3 70B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 11.7 tokens per second.
Can the Apple M3 Ultra (96GB) run Qwen 3.6 27B?
Yes. The Apple M3 Ultra (96GB) runs Qwen 3.6 27B natively in VRAM at BF16 precision, achieving approximately 15.2 tokens per second.
Can the Apple M3 Ultra (96GB) run Llama 3.1 8B Instruct?
Yes. The Apple M3 Ultra (96GB) runs Llama 3.1 8B Instruct natively in VRAM at FP32 precision, achieving approximately 25.6 tokens per second.
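The "at 8k context" qualifier used throughout this page matters because the KV cache consumes memory on top of the weights. A rough sketch of KV-cache size, assuming an FP16 cache and the published Llama attention configurations (grouped-query attention with 8 KV heads and head_dim 128):

```python
# KV-cache memory at a given context length.
# Per token: 2 (K and V) * layers * kv_heads * head_dim * bytes.
# FP16 cache (2 bytes per value) assumed.

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx: int, bytes_per: int = 2) -> float:
    """KV-cache size in GiB for `ctx` cached tokens."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per
    return per_token * ctx / 2**30

# Llama 3.1 8B: 32 layers, 8 KV heads, head_dim 128
print(kv_cache_gib(32, 8, 128, 8192))  # -> 1.0 (GiB at 8k context)
# Llama 3.3 70B: 80 layers, 8 KV heads, head_dim 128
print(kv_cache_gib(80, 8, 128, 8192))  # -> 2.5 (GiB at 8k context)
```

At 8k context the cache is small relative to 96 GB, but it grows linearly with context length, so much longer contexts can push a model that barely fits past the limit.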