NVIDIA RTX 4000 Ada
The NVIDIA RTX 4000 Ada has 20 GB of VRAM and 320 GB/s of memory bandwidth. It runs 42 of our 70 tracked models natively in VRAM at 8K context.
The NVIDIA RTX 4000 Ada is the entry-level Ada Lovelace workstation GPU, built on a 160-bit memory bus with 20 GB of ECC GDDR6 at 320 GB/s. With 6,144 CUDA cores and a single-slot form factor, it is the most compact professional GPU in the Ada lineup. Its 20 GB of VRAM comfortably holds 13B models at Q8_0 and 14B–20B models at Q4_K_M, exceeding what typical 16 GB consumer cards can hold, while fitting into thermally constrained workstations and SFF builds.
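Those fit claims come down to simple arithmetic: quantized weights plus KV cache plus a little runtime overhead must stay under the 20 GB ceiling. Here is a minimal sketch of that check; the bits-per-weight figures approximate common quantization formats, and the KV-cache and overhead allowances are rough assumptions rather than measured values.

```python
# Rough VRAM-fit check for a 20 GB card at 8K context.
# Bits-per-weight values approximate common quant formats; real files
# vary by a few percent, and the KV/overhead allowances are guesses.

BITS_PER_WEIGHT = {"NVFP4": 4.0, "Q4_K_M": 4.85, "Q8_0": 8.5, "BF16": 16.0}

def fits_in_vram(params_b: float, quant: str,
                 vram_gb: float = 20.0, kv_gb: float = 1.0,
                 overhead_gb: float = 1.0) -> bool:
    """params_b is the parameter count in billions."""
    weights_gb = params_b * BITS_PER_WEIGHT[quant] / 8  # billions -> GB
    return weights_gb + kv_gb + overhead_gb <= vram_gb

print(fits_in_vram(13, "Q8_0"))    # True:  13B at Q8_0 fits
print(fits_in_vram(20, "Q4_K_M"))  # True:  20B at Q4_K_M fits
print(fits_in_vram(70, "NVFP4"))   # False: 70B needs CPU offload
```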
The NVIDIA RTX 4000 Ada is a professional workstation GPU built on NVIDIA's Ada Lovelace architecture and released in 2023. It features 20 GB of GDDR6 at 320 GB/s of memory bandwidth. llama.cpp and Ollama are supported out of the box; CUDA 12.x is recommended, and a driver at version 525 or newer is required.
For local LLM inference, this GPU runs 42 of the 70 models we track natively in VRAM at 8K context. The largest model it fits is Mixtral 8x7B Instruct v0.1 (82.9 t/s at Q2_K). Models in the 7B–14B range run at reasonable precision, and some 27B–32B models fit at lower quantization; on Qwen 3.6 27B, it achieves approximately 23.7 tokens per second at NVFP4. An additional 9 models fit with CPU offload, slower but usable.
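Those throughput figures follow almost directly from the memory bandwidth: single-stream decode is bandwidth-bound, because generating each token reads every active weight once. A back-of-the-envelope estimate, assuming the quantized model size is the only memory traffic:

```python
# Bandwidth-bound decode estimate: generating one token reads every active
# weight once, so tokens/s ≈ memory bandwidth / quantized model size.
# 320 GB/s is the card's spec; model sizes below are approximations.

BANDWIDTH_GBS = 320.0

def est_tokens_per_s(params_b: float, bits_per_weight: float) -> float:
    model_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return BANDWIDTH_GBS / model_gb

print(f"{est_tokens_per_s(27, 4):.1f} t/s")  # ~23.7 -- Qwen 3.6 27B @ NVFP4
print(f"{est_tokens_per_s(8, 4):.1f} t/s")   # ~80.0 -- Llama 3.1 8B @ NVFP4
```

Real speeds land at or a little below this bound once attention, KV-cache reads, and kernel overhead are counted, which is why the estimates above match the measured figures so closely.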
NVIDIA's CUDA ecosystem provides broad out-of-the-box support across llama.cpp, Ollama, vLLM, and TensorRT-LLM. Among workstation-class options, it sits above the Apple M5 Pro (24 GB) and the NVIDIA RTX 4060 Ti 16 GB in performance, but below the Intel Arc Pro B60 24 GB.
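As a concrete starting point, here is a minimal llama-cpp-python sketch. It assumes the package was installed with CUDA support enabled and that the GGUF path below is a placeholder for a file you actually have; both are assumptions, not part of this page's benchmark setup.

```python
# Minimal llama-cpp-python sketch. Assumes a CUDA-enabled build
# (e.g. CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python)
# and a locally downloaded GGUF file (the path is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-14b-instruct-q4_k_m.gguf",  # hypothetical path
    n_gpu_layers=-1,  # -1 offloads every layer to the GPU
    n_ctx=8192,       # the 8K context used for the figures on this page
)

out = llm("Explain memory-bandwidth-bound decoding in one sentence.",
          max_tokens=64)
print(out["choices"][0]["text"])
```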
| Spec | Value |
| --- | --- |
| Vendor | NVIDIA |
| Architecture | Ada Lovelace |
| VRAM | 20 GB |
| Memory type | GDDR6 |
| Memory bandwidth | 320 GB/s |
| Compute backend | CUDA |
| Tier | Workstation |
| Released | 2023 |
| Models (native) | 42 / 70 |
| Models (offload) | 9 / 70 |
Models this GPU runs natively in VRAM (42)
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q2_K · ~82.9 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · Q3_K_M · ~272.9 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · Q2_K · ~27.8 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q3_K_M · ~21.6 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · Q3_K_M · ~22.7 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · Q3_K_M · ~22.9 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q3_K_M · ~22.9 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q3_K_M · ~22.9 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · NVFP4 · ~234.7 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · Q3_K_M · ~24 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · NVFP4 · ~234.7 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · NVFP4 · ~23.5 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · NVFP4 · ~23.7 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · NVFP4 · ~23.7 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · NVFP4 · ~185.3 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · NVFP4 · ~26.7 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · NVFP4 · ~28.8 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · NVFP4 · ~176 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · NVFP4 · ~43.2 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · NVFP4 · ~43.5 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · NVFP4 · ~45.7 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · NVFP4 · ~52.5 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · NVFP4 · ~52.5 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · NVFP4 · ~69.6 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · NVFP4 · ~80 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · NVFP4 · ~80 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · NVFP4 · ~80 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · BF16 · ~21.1 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~22.1 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · FP32 · ~20 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · BF16 · ~40 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · BF16 · ~42.1 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~25 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~25.8 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~30.8 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · FP32 · ~40 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~47.1 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~53.3 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~64.5 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · FP32 · ~80 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~160 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~222.2 t/s
Models that fit with CPU offload (9)
These models place the layers that don't fit in VRAM into system RAM, so expect much slower inference; a partial-offload sketch follows the list.
- GPT-OSS 120B · 117B · MMLU-Pro 80.7 · Q2_K · ~48.6 t/s
- Llama 4 Scout 109B · 109B · MMLU-Pro 74.3 · Q2_K · ~14.3 t/s
- GLM-4.5 Air 106B · 106B · MMLU-Pro 81.4 · Q2_K · ~20.3 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro 79.9 · Q2_K · ~20.3 t/s
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 71.1 · NVFP4 · ~2.2 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · NVFP4 · ~2.3 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · NVFP4 · ~2.3 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · NVFP4 · ~2.3 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · NVFP4 · ~4.6 t/s
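With a llama.cpp-based runtime, partial offload is explicit: you pin as many layers in VRAM as fit and leave the remainder on the CPU. A minimal sketch with llama-cpp-python, where the model path and layer count are illustrative assumptions rather than tested values:

```python
# Partial-offload sketch for models larger than 20 GB: keep as many layers
# in VRAM as fit and leave the rest in system RAM. The path and the layer
# count are illustrative; tune n_gpu_layers until loading succeeds.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.3-70b-instruct-q4_k_m.gguf",  # hypothetical path
    n_gpu_layers=40,  # assumption: roughly half of a 70B's layers fit
    n_ctx=8192,
)
```

Every layer left on the CPU runs at system-RAM bandwidth rather than the GPU's 320 GB/s, which is why the offloaded 70B models above drop to a few tokens per second.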
Too large for this GPU (19)
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GLM-4.5 355B
- GLM-4.6 355B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the NVIDIA RTX 4000 Ada have?
- The NVIDIA RTX 4000 Ada has 20 GB of GDDR6 with 320 GB/s memory bandwidth.
- What is the NVIDIA RTX 4000 Ada best for?
- With 20 GB of VRAM, the NVIDIA RTX 4000 Ada comfortably runs 7B–14B models at high precision and 14B–20B models at Q4_K_M, with some 27B–32B models fitting at lower quantization. That makes it ideal for local LLM experimentation and lightweight production inference.
- What LLMs can the NVIDIA RTX 4000 Ada run locally?
- The NVIDIA RTX 4000 Ada can run 42 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include Llama 3.1 8B Instruct at NVFP4, Llama 3.2 3B Instruct at FP32, and Llama 3.2 1B Instruct at FP32.
- Can the NVIDIA RTX 4000 Ada run Llama 3.3 70B Instruct?
- The NVIDIA RTX 4000 Ada can run Llama 3.3 70B Instruct with CPU offload at NVFP4 quantization, but inference will be slower than native VRAM execution.
- Can the NVIDIA RTX 4000 Ada run Qwen 3.6 27B?
- Yes. The NVIDIA RTX 4000 Ada runs Qwen 3.6 27B natively in VRAM at NVFP4 quantization, achieving approximately 23.7 tokens per second.
- Can the NVIDIA RTX 4000 Ada run Llama 3.1 8B Instruct?
- Yes. The NVIDIA RTX 4000 Ada runs Llama 3.1 8B Instruct natively in VRAM at NVFP4 quantization, achieving approximately 80 tokens per second.