NVIDIA DGX Spark (128GB) vs Apple M4 Ultra (192GB)
Side-by-side local AI comparison — VRAM, memory bandwidth, model compatibility, and estimated tokens per second across 70 open-weight models.
Quick verdict
Apple M4 Ultra (192GB) wins for local AI inference. It has 64 GB more unified memory and 4× the memory bandwidth (+300%), runs 64 models natively (vs 61), and fits 3 models the DGX Spark cannot. Note: the NVIDIA DGX Spark (128GB) uses CUDA while the Apple M4 Ultra (192GB) uses Metal — the software ecosystem matters for your framework.
Specs comparison
| Spec | NVIDIA DGX Spark (128GB) | Apple M4 Ultra (192GB) |
|---|---|---|
| VRAM | 128 GB unified | 192 GB unified |
| Memory type | LPDDR5X | LPDDR5X |
| Bandwidth | 273 GB/s | 1092 GB/s (+300%) |
| CPU cores | — | 32 (24P + 8E) |
| Architecture | Grace Blackwell | Apple M4 Ultra |
| Backend | CUDA | Metal |
| Tier | Workstation | Workstation |
| Released | 2025 | 2025 |
| Models (native) | 61 | 64 |
Estimated tokens per second
Computed from memory bandwidth and the model's active-parameter size at the listed quantization. Assumes the model fits natively in VRAM.
| Model | NVIDIA DGX Spark (128GB) | Apple M4 Ultra (192GB) | Delta |
|---|---|---|---|
| Llama 3.3 70B Instruct (70B) | 3.9 t/s (Q8) | 7.8 t/s (FP16) | -50% |
| Qwen 3.6 27B (27B) | 5.1 t/s (FP16) | 20.2 t/s (FP16) | -75% |
| Llama 3.1 8B Instruct (8B) | 17.1 t/s (FP16) | 68.3 t/s (FP16) | -75% |
| Qwen 2.5 7B Instruct (7.6B) | 18 t/s (FP16) | 71.8 t/s (FP16) | -75% |
Delta is NVIDIA DGX Spark (128GB) relative to Apple M4 Ultra (192GB).
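The per-model estimates above follow from treating token generation as memory-bandwidth-bound: every generated token requires reading all active weights once, so tokens/s ≈ bandwidth ÷ (active params × bytes per parameter). A minimal sketch, using the bandwidth figures from the specs table; the formula ignores KV-cache traffic and compute overhead, so real throughput will be somewhat lower:

```python
# Bytes per parameter at common quantization levels.
BYTES_PER_PARAM = {"FP16": 2.0, "Q8": 1.0, "Q4": 0.5}

def estimated_tps(bandwidth_gbs: float, active_params_b: float, quant: str) -> float:
    """Estimated decode tokens/s for a model whose weights fit in VRAM.

    bandwidth_gbs:   memory bandwidth in GB/s (273 for DGX Spark, 1092 for M4 Ultra)
    active_params_b: active parameters in billions (for MoE models, the active subset)
    """
    bytes_per_token_gb = active_params_b * BYTES_PER_PARAM[quant]
    return bandwidth_gbs / bytes_per_token_gb

# Llama 3.1 8B at FP16 on each machine (matches the table: ~17.1 vs ~68.3 t/s):
print(f"{estimated_tps(273, 8, 'FP16'):.1f}")
print(f"{estimated_tps(1092, 8, 'FP16'):.1f}")
```

This also explains the MoE rows: Qwen3 235B-A22B is estimated from its ~22B active parameters, not the full 235B, which is why it outpaces dense 70B-class models.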
Only NVIDIA DGX Spark (128GB) can run (0)
No exclusive models — Apple M4 Ultra (192GB) can run everything NVIDIA DGX Spark (128GB) can.
Only Apple M4 Ultra (192GB) can run (3)
Both run natively (61)
These models fit in VRAM on both machines. Bandwidth determines which runs them faster.
- GLM-4.7 358B: 31.3 t/s vs 93.8 t/s
- GLM-4.5 355B: 31.3 t/s vs 93.8 t/s
- GLM-4.6 355B: 31.3 t/s vs 93.8 t/s
- DeepSeek V4 Flash 284B: 77 t/s vs 184.8 t/s
- Qwen3 235B-A22B (MoE): 34.1 t/s vs 87.4 t/s
- MiniMax M2.5 229B: 75.1 t/s vs 192.2 t/s
- MiniMax M2.7 229B: 75.1 t/s vs 192.2 t/s
- Mixtral 8x22B Instruct v0.1: 10.3 t/s vs 30.8 t/s
- Qwen 3.5 122B-A10B (MoE): 40 t/s vs 120.1 t/s
- Nemotron 3 Super 120B: 33.4 t/s vs 100.1 t/s
- GPT-OSS 120B: 80.1 t/s vs 240.2 t/s
- Llama 4 Scout 109B: 23.6 t/s vs 70.7 t/s
- GLM-4.5 Air 106B: 25 t/s vs 100.1 t/s
- GLM-4.6V 106B: 25 t/s vs 100.1 t/s
- Qwen 2.5 72B Instruct: 3.8 t/s vs 7.6 t/s
- Llama 3.3 70B Instruct: 3.9 t/s vs 7.8 t/s
- +45 more on both
Which should you choose?
Choose the NVIDIA DGX Spark (128GB) if:
- You rely on CUDA-based tools (PyTorch, vLLM, Ollama)
Choose the Apple M4 Ultra (192GB) if:
- You need to run larger models (>128 GB of weights)
- Faster token generation is the priority
- You're on macOS and want native Metal acceleration (MLX, llama.cpp)
Frequently asked questions
- Which is better for local AI, the NVIDIA DGX Spark (128GB) or Apple M4 Ultra (192GB)?
- For local AI inference, the Apple M4 Ultra (192GB) has the edge. It offers 192 GB VRAM (vs 128 GB) and 1092 GB/s bandwidth (vs 273 GB/s), letting it run 64 models natively in VRAM vs 61 for its rival.
- How much VRAM does the NVIDIA DGX Spark (128GB) have vs the Apple M4 Ultra (192GB)?
- The NVIDIA DGX Spark (128GB) has 128 GB of LPDDR5X at 273 GB/s. The Apple M4 Ultra (192GB) has 192 GB of LPDDR5X at 1092 GB/s. The Apple M4 Ultra (192GB) has 64 GB more VRAM, allowing it to run 3 models the NVIDIA DGX Spark (128GB) cannot fit natively.
- Can the NVIDIA DGX Spark (128GB) run Llama 3.3 70B?
- Yes. The NVIDIA DGX Spark (128GB) runs Llama 3.3 70B natively at Q8 quantization, generating approximately 3.9 tokens per second.
- Can the Apple M4 Ultra (192GB) run Llama 3.3 70B?
- Yes. The Apple M4 Ultra (192GB) runs Llama 3.3 70B natively at FP16 quantization, generating approximately 7.8 tokens per second.
- What is the difference between the NVIDIA DGX Spark (128GB) and Apple M4 Ultra (192GB) for AI?
- The key difference for AI inference is VRAM and memory bandwidth. The NVIDIA DGX Spark (128GB) has 128 GB VRAM at 273 GB/s (CUDA backend). The Apple M4 Ultra (192GB) has 192 GB VRAM at 1092 GB/s (Metal backend). VRAM determines which models fit; bandwidth determines tokens per second. The NVIDIA DGX Spark (128GB) runs 61 models natively vs 64 for the Apple M4 Ultra (192GB).