NVIDIA RTX Pro 6000
The NVIDIA RTX Pro 6000 has 96 GB of VRAM and 1,344 GB/s of memory bandwidth. It can run 57 of our 70 tracked models natively in VRAM at 8K context.
The NVIDIA RTX Pro 6000 is the flagship Blackwell workstation GPU, doubling the RTX 6000 Ada's VRAM to 96 GB of ECC GDDR7 at 1,344 GB/s. It is built on the same GB202 silicon as the RTX 5090, with 24,064 CUDA cores enabled, in a workstation form factor with professional drivers and error-correcting memory. The 96 GB capacity is large enough to run 70B models at Q4_K_M or Q8_0 entirely in VRAM without any CPU offloading, and comfortably holds multiple smaller models simultaneously. At ~$6,300 MSRP, it is the definitive single-GPU option for on-prem LLM inference when model fit and professional reliability matter more than cost.
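To see why a 70B model fits so comfortably, a back-of-envelope estimate of weights plus KV cache is enough. The sketch below is illustrative only: the Llama 3.3 70B shape values (80 layers, 8 KV heads, head dim 128) and the ~4.8 bits/weight for a Q4_K_M-class quant are assumptions for illustration, not measured data.

```python
# Rough VRAM estimate: quantized weights + FP16 KV cache at a given context.
def vram_estimate_gb(params_b, bits_per_weight, n_layers, n_kv_heads,
                     head_dim, ctx_len, kv_bytes=2):
    weights = params_b * 1e9 * bits_per_weight / 8                  # weight bytes
    kv = 2 * n_layers * n_kv_heads * head_dim * ctx_len * kv_bytes  # K and V caches
    return (weights + kv) / 1e9

# Assumed Llama 3.3 70B shape: 80 layers, 8 KV heads (GQA), head dim 128.
print(vram_estimate_gb(70, 4.8, 80, 8, 128, 8192))  # ~44.7 GB, well under 96 GB
```

Even at Q8_0 (~8 bits/weight) the same estimate lands near 73 GB, which is why the card can hold a 70B model with headroom to spare.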
The NVIDIA RTX Pro 6000 is a professional workstation GPU based on NVIDIA's Blackwell architecture, released in 2025. It features 96 GB of GDDR7 at 1,344 GB/s of memory bandwidth. llama.cpp and Ollama work out of the box; Blackwell requires CUDA 12.8 or newer and a 570-series or newer driver.
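As a quick sanity check that the stack works, the official ollama Python client can drive a pulled model. The model tag below is a placeholder, and this assumes the Ollama daemon is running with the CUDA backend.

```python
# Minimal Ollama chat sketch; Ollama selects the GPU automatically and keeps
# the whole model resident in the 96 GB of VRAM when it fits.
import ollama  # pip install ollama

resp = ollama.chat(
    model="llama3.3:70b",  # placeholder tag; pull first with `ollama pull llama3.3:70b`
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp["message"]["content"])
```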
For local LLM inference, this GPU runs 57 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is Qwen3 235B-A22B (MoE), at roughly 204.3 t/s at Q2_K. On Llama 3.3 70B Instruct, it achieves approximately 38.4 tokens per second at NVFP4 quantization. One additional model fits with CPU offload, slower but usable.
NVIDIA's CUDA ecosystem provides broad out-of-the-box support across llama.cpp, Ollama, vLLM, and TensorRT-LLM. Among workstation-class options it sits above the Apple M4 Max (96GB) and the NVIDIA RTX 6000 Ada in performance, though the datacenter-class NVIDIA A100 40GB still leads on memory bandwidth.
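For serving-oriented stacks, a minimal vLLM offline-inference sketch might look like the following; the model ID and sampling settings are placeholders, and a CUDA build of vLLM is assumed.

```python
# Minimal single-GPU vLLM offline inference sketch.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
    max_model_len=8192,                        # matches the 8K-context fit figures
)
params = SamplingParams(temperature=0.7, max_tokens=64)
for out in llm.generate(["Explain GDDR7 in one sentence."], params):
    print(out.outputs[0].text)
```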
| Spec | Value |
| --- | --- |
| Vendor | NVIDIA |
| Architecture | Blackwell |
| VRAM | 96 GB |
| Memory type | GDDR7 |
| Memory bandwidth | 1,344 GB/s |
| Compute backend | CUDA |
| Tier | Workstation |
| Released | 2025 |
| Models (native) | 57 / 70 |
| Models (offload) | 1 / 70 |
Models this GPU runs natively in VRAM (57)
| Model | Params | MMLU-Pro | Quant | Speed (est.) |
| --- | --- | --- | --- | --- |
| Qwen3 235B-A22B (MoE) | 235B | 84.4 | Q2_K | ~204.3 t/s |
| MiniMax M2.5 229B | 229B | 84.8 | Q2_K | ~449.4 t/s |
| MiniMax M2.7 229B | 229B | 86.0 | Q2_K | ~449.4 t/s |
| Mixtral 8x22B Instruct v0.1 | 141B | 40.0 | NVFP4 | ~75.8 t/s |
| Qwen 3.5 122B-A10B (MoE) | 122B | 86.7 | NVFP4 | ~295.7 t/s |
| Nemotron 3 Super 120B | 120B | 83.7 | NVFP4 | ~246.4 t/s |
| GPT-OSS 120B | 117B | 80.7 | NVFP4 | ~591.4 t/s |
| Llama 4 Scout 109B | 109B | 74.3 | NVFP4 | ~173.9 t/s |
| GLM-4.5 Air 106B | 106B | 81.4 | NVFP4 | ~246.4 t/s |
| GLM-4.6V 106B | 106B | 79.9 | NVFP4 | ~246.4 t/s |
| Qwen 2.5 72B Instruct | 72B | 71.1 | NVFP4 | ~37.3 t/s |
| Llama 3.3 70B Instruct | 70B | 68.9 | NVFP4 | ~38.4 t/s |
| DeepSeek R1 Distill Llama 70B | 70B | 70.0 | NVFP4 | ~38.4 t/s |
| Llama 3.1 70B Instruct | 70B | 66.4 | NVFP4 | ~38.4 t/s |
| Mixtral 8x7B Instruct v0.1 | 46.7B | 29.7 | NVFP4 | ~229.2 t/s |
| Command-R 35B | 35B | 33.0 | BF16 | ~19.2 t/s |
| Qwen 3.5 35B-A3B (MoE) | 35B | 84.2 | BF16 | ~246.4 t/s |
| Qwen 3.6 35B | 35B | 85.2 | BF16 | ~19.2 t/s |
| Yi 1.5 34B Chat | 34.4B | 37.0 | BF16 | ~19.5 t/s |
| Qwen3 32B | 32.8B | 65.5 | BF16 | ~20.5 t/s |
| Qwen 2.5 32B Instruct | 32.5B | 69.0 | BF16 | ~20.7 t/s |
| Qwen 2.5 Coder 32B Instruct | 32.5B | 50.4 | BF16 | ~20.7 t/s |
| DeepSeek R1 Distill Qwen 32B | 32.5B | 65.0 | BF16 | ~20.7 t/s |
| Nemotron 3 Nano 30B | 32B | 78.3 | BF16 | ~246.4 t/s |
| Gemma 4 31B | 31B | 85.2 | BF16 | ~21.7 t/s |
| Qwen3 30B-A3B (MoE) | 30B | 61.5 | BF16 | ~246.4 t/s |
| Gemma 2 27B Instruct | 27.2B | 38.0 | BF16 | ~24.7 t/s |
| Gemma 3 27B Instruct | 27B | 67.5 | BF16 | ~24.9 t/s |
| Qwen 3.6 27B | 27B | 86.2 | BF16 | ~24.9 t/s |
| Gemma 4 26B (MoE) | 26B | 82.6 | BF16 | ~194.5 t/s |
| Mistral Small 3.1 24B Instruct | 24B | 66.8 | BF16 | ~28 t/s |
| Mistral Small 22B | 22.2B | 49.2 | BF16 | ~30.3 t/s |
| GPT-OSS 20B | 21B | 67.9 | BF16 | ~184.8 t/s |
| Qwen3 14B | 14.8B | 61.0 | FP32 | ~22.7 t/s |
| Qwen 2.5 14B Instruct | 14.7B | 63.7 | FP32 | ~22.9 t/s |
| Phi-4 14B Instruct | 14B | 70.4 | FP32 | ~24 t/s |
| Mistral Nemo 12B Instruct | 12.2B | 35.6 | FP32 | ~27.5 t/s |
| Gemma 3 12B Instruct | 12.2B | 60.6 | FP32 | ~27.5 t/s |
| Gemma 2 9B Instruct | 9.2B | 32.0 | FP32 | ~36.5 t/s |
| Llama 3.1 8B Instruct | 8B | 48.3 | FP32 | ~42 t/s |
| DeepSeek R1 Distill Llama 8B | 8B | 41.0 | FP32 | ~42 t/s |
| Qwen3 8B | 8B | 56.7 | FP32 | ~42 t/s |
| Qwen 2.5 7B Instruct | 7.6B | 56.3 | FP32 | ~44.2 t/s |
| Mistral 7B Instruct v0.3 | 7.25B | 30.0 | FP32 | ~46.3 t/s |
| Gemma 3 4B Instruct | 4B | 43.6 | FP32 | ~84 t/s |
| Gemma 4 E4B | 4B | 69.4 | FP32 | ~84 t/s |
| Phi-3.5 Mini Instruct | 3.8B | 47.4 | FP32 | ~88.4 t/s |
| Llama 3.2 3B Instruct | 3.2B | 24.0 | FP32 | ~105 t/s |
| Qwen 2.5 3B Instruct | 3.1B | 32.4 | FP32 | ~108.4 t/s |
| Gemma 2 2B Instruct | 2.6B | 17.8 | FP32 | ~129.2 t/s |
| Gemma 4 E2B | 2B | 60.0 | FP32 | ~168 t/s |
| SmolLM2 1.7B Instruct | 1.7B | 19.0 | FP32 | ~197.6 t/s |
| Qwen 2.5 1.5B Instruct | 1.5B | 16.8 | FP32 | ~224 t/s |
| Llama 3.2 1B Instruct | 1.24B | 12.5 | FP32 | ~271 t/s |
| Gemma 3 1B Instruct | 1B | 14.7 | FP32 | ~336 t/s |
| Qwen 2.5 0.5B Instruct | 0.5B | 10.0 | FP32 | ~672 t/s |
| SmolLM2 360M Instruct | 0.36B | 8.0 | FP32 | ~933.3 t/s |
Models that fit with CPU offload (1)
These use system RAM for layers that don't fit in VRAM — expect much slower inference.
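In llama.cpp-based stacks this split is controlled with the GPU-layer count. Here is a hedged llama-cpp-python sketch, where the path and layer count are placeholder values to tune for your own model.

```python
# Partial CPU offload: keep only as many transformer layers in VRAM as fit;
# the rest run from system RAM at much lower speed.
from llama_cpp import Llama  # pip install llama-cpp-python (CUDA build)

llm = Llama(
    model_path="models/example-Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=60,  # hypothetical: 60 of e.g. 80 layers on the GPU, rest on CPU
    n_ctx=8192,
)
print(llm("Q: What is 2 + 2? A:", max_tokens=8)["choices"][0]["text"])
```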
Too large for this GPU (12)
Compare NVIDIA RTX Pro 6000 with other GPUs
- NVIDIA RTX Pro 6000 vs NVIDIA RTX 6000 Ada (+48 GB VRAM)
- NVIDIA RTX Pro 6000 vs NVIDIA RTX 5090 (+64 GB VRAM)
- NVIDIA RTX Pro 6000 vs NVIDIA L40S (+48 GB VRAM)
- NVIDIA RTX Pro 6000 vs NVIDIA DGX Spark (128GB) (-32 GB VRAM)
- NVIDIA RTX Pro 6000 vs AMD Radeon AI Pro 9700 32GB (+64 GB VRAM)
- NVIDIA RTX Pro 6000 vs Apple M4 Ultra (192GB) (-96 GB VRAM)
- NVIDIA RTX Pro 6000 vs Apple M4 Max (96GB) (96 GB each)
- NVIDIA RTX Pro 6000 vs NVIDIA H100 80GB (+16 GB VRAM)
Frequently asked questions
- How much VRAM does the NVIDIA RTX Pro 6000 have?
- The NVIDIA RTX Pro 6000 has 96 GB of GDDR7 with 1,344 GB/s of memory bandwidth.
- What LLMs can the NVIDIA RTX Pro 6000 run locally?
- The NVIDIA RTX Pro 6000 can run 57 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include Llama 3.3 70B Instruct at NVFP4, Llama 3.1 8B Instruct at FP32, and Llama 3.2 3B Instruct at FP32.
- Can the NVIDIA RTX Pro 6000 run Llama 3.3 70B Instruct?
- Yes. The NVIDIA RTX Pro 6000 runs Llama 3.3 70B Instruct natively in VRAM at NVFP4 quantization, achieving approximately 38.4 tokens per second.
- Can the NVIDIA RTX Pro 6000 run Qwen 3.6 27B?
- Yes. The NVIDIA RTX Pro 6000 runs Qwen 3.6 27B natively in VRAM at BF16 quantization, achieving approximately 24.9 tokens per second.
- Can the NVIDIA RTX Pro 6000 run Llama 3.1 8B Instruct?
- Yes. The NVIDIA RTX Pro 6000 runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 42 tokens per second.