
Apple M3 Ultra (256GB)

The Apple M3 Ultra (256GB) has 256 GB VRAM and 819 GB/s memory bandwidth. It can run 66 of our 70 tracked models natively in VRAM at 8k context.

Apple M3 Ultra (256GB): a 2025 desktop chip with 256 GB of unified LPDDR5X at 819 GB/s. 28-core CPU (20P + 8E), 60-core GPU, and 32-core Neural Engine on TSMC 3nm. Built for extreme local inference workloads, including DeepSeek-class MoE models and multi-hundred-billion-parameter LLMs.

Runs 405B models at Q4 natively and DeepSeek-class MoE models locally. Expect roughly 18–30 t/s for 70B at Q4 and 35–60 t/s for 32B at Q4. Handles multi-hundred-billion-parameter LLMs.
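Whether a model fits in the 256 GB pool comes down to simple arithmetic: parameters × bits per weight. A minimal sketch, assuming ~4.5 bits/weight as a nominal figure for Q4-class quantization (real footprints vary by format, and macOS reserves a share of unified memory for the system):

```python
def weight_gb(params_b, bits_per_weight):
    """Approximate weight footprint: billions of params × bits per weight / 8 → GB."""
    return params_b * bits_per_weight / 8

UNIFIED_GB = 256  # the M3 Ultra config on this page; usable VRAM is somewhat less

# Nominal examples, not measured values:
for name, p, bits in [("405B @ Q4", 405, 4.5), ("70B @ BF16", 70, 16), ("70B @ Q4", 70, 4.5)]:
    gb = weight_gb(p, bits)
    print(f"{name}: ~{gb:.0f} GB -> {'fits' if gb < UNIFIED_GB else 'too large'}")
```

A 405B model at ~4.5 bits/weight lands around 228 GB of weights, which is why it squeezes into this configuration but not into smaller ones.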

MLX and llama.cpp Metal fully supported. Thunderbolt 5. Excellent for large MoE inference workloads. Same 819 GB/s bandwidth as higher-tier configurations.

Vendor: Apple
Architecture: Apple M3 Ultra
CPU cores: 28 (20P + 8E)
VRAM: 256 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 819 GB/s
Compute backend: Metal
Tier: Workstation
Released: 2025
Models (native): 66 / 70
Models (offload): 0 / 70
Software: MLX gives the best performance on Apple Silicon; the llama.cpp Metal backend is a solid alternative and also powers tools such as Ollama.

Models this GPU runs natively in VRAM (66)

Too large for this GPU (4)

Frequently asked questions

How much VRAM does the Apple M3 Ultra (256GB) have?
The Apple M3 Ultra (256GB) has 256 GB of LPDDR5X with 819 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
What is the Apple M3 Ultra (256GB) best for?
With 256 GB of unified memory, the Apple M3 Ultra (256GB) is a workstation-class machine designed for running the largest open-weight models (70B–405B) at high precision or light quantization, with ample room for context.
What LLMs can the Apple M3 Ultra (256GB) run locally?
The Apple M3 Ultra (256GB) can run 66 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at BF16, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
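The "at 8k context" qualifier matters because the KV cache consumes memory on top of the weights. A rough sketch of that overhead, using Llama 3.3 70B's published GQA configuration (80 layers, 8 KV heads, head dimension 128) and FP16 cache entries as assumptions:

```python
def kv_cache_gb(layers, kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """KV cache size: 2 (K and V) × layers × kv_heads × head_dim × tokens × bytes → GB."""
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

# Llama 3.3 70B at 8k context with an FP16 cache:
print(round(kv_cache_gb(80, 8, 128, 8192), 2))  # ≈ 2.68 GB
```

Thanks to grouped-query attention the 8k-context cache stays under 3 GB, so on this machine the weights, not the context, are the binding constraint.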
Can the Apple M3 Ultra (256GB) run Llama 3.3 70B Instruct?
Yes. The Apple M3 Ultra (256GB) runs Llama 3.3 70B Instruct natively in VRAM at BF16 quantization, achieving approximately 5.9 tokens per second.
Can the Apple M3 Ultra (256GB) run Qwen 3.6 27B?
Yes. The Apple M3 Ultra (256GB) runs Qwen 3.6 27B natively in VRAM at FP32 quantization, achieving approximately 7.6 tokens per second.
Can the Apple M3 Ultra (256GB) run Llama 3.1 8B Instruct?
Yes. The Apple M3 Ultra (256GB) runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 25.6 tokens per second.
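The throughput figures in these answers follow a simple roof-line model: single-stream decode is memory-bound, so tokens per second is roughly bandwidth divided by the bytes of weights read per token. A sketch checking the FAQ numbers against the 819 GB/s figure (a ceiling; real runs land somewhat below it):

```python
def est_tokens_per_sec(bandwidth_gbs, params_b, bytes_per_param):
    """Roof-line decode estimate: t/s ≈ bandwidth / weight bytes read per token."""
    return bandwidth_gbs / (params_b * bytes_per_param)

# 819 GB/s against the models quoted above:
est_tokens_per_sec(819, 70, 2)   # 70B at BF16 (2 bytes/param) → ≈ 5.9 t/s
est_tokens_per_sec(819, 27, 4)   # 27B at FP32 (4 bytes/param) → ≈ 7.6 t/s
est_tokens_per_sec(819, 8, 4)    # 8B at FP32 (4 bytes/param)  → ≈ 25.6 t/s
```

All three FAQ figures match this bandwidth-bound estimate almost exactly, which is the expected behavior for batch-size-1 generation on unified-memory hardware.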