NVIDIA RTX 4090
The NVIDIA RTX 4090 has 24 GB of VRAM and 1008 GB/s of memory bandwidth. It can run 39 of our 53 tracked models natively in VRAM at 8k context.
24 GB VRAM · 1008 GB/s · CUDA · consumer
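For a back-of-the-envelope sense of where the fits/doesn't-fit cutoff lands, the check is roughly weight bytes at the listed quantization, plus KV cache at 8k context, against the 24 GB budget. Here is a minimal sketch in Python, assuming nominal bits per weight and a generic FP16 KV-cache shape; the exact accounting behind this page's lists may differ:

```python
# Rough check of whether a model fits in 24 GB of VRAM at 8k context.
# Nominal bits-per-weight values and the KV-cache shape are simplifying
# assumptions, not necessarily this page's exact methodology.

BPW = {"Q2_K": 2.0, "Q3_K_M": 3.0, "Q4_K_M": 4.0, "Q5_K_M": 5.0,
       "Q6_K": 6.0, "Q8": 8.0, "FP16": 16.0}

def fits_in_vram(params_b: float, quant: str, vram_gb: float = 24.0,
                 ctx: int = 8192, n_layers: int = 32, kv_dim: int = 4096,
                 overhead_gb: float = 1.0) -> bool:
    """Weight bytes + FP16 KV cache + fixed overhead vs. available VRAM."""
    weight_gb = params_b * BPW[quant] / 8          # params (B) * bytes/param
    kv_gb = 2 * 2 * n_layers * kv_dim * ctx / 1e9  # K and V, 2 bytes each
    return weight_gb + kv_gb + overhead_gb <= vram_gb

print(fits_in_vram(32.5, "Q4_K_M"))  # Qwen 2.5 32B at Q4_K_M -> True
print(fits_in_vram(70.0, "Q4_K_M"))  # Llama 3.x 70B at Q4_K_M -> False
```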
Models this GPU runs natively in VRAM (39)
| Model | Params | MMLU-Pro | Quant | Est. speed |
|---|---|---|---|---|
| Mixtral 8x7B Instruct v0.1 | 46.7B | 29.7 | Q3_K_M | ~214.9 t/s |
| Qwen 3.5 35B-A3B (MoE) | 35B | — | Q4_K_M | ~739.2 t/s |
| Yi 1.5 34B Chat | 34.4B | 37.0 | Q4_K_M | ~58.6 t/s |
| Qwen3 32B | 32.8B | — | Q4_K_M | ~61.5 t/s |
| Qwen 2.5 32B Instruct | 32.5B | 55.1 | Q4_K_M | ~62 t/s |
| Qwen 2.5 Coder 32B Instruct | 32.5B | 50.4 | Q4_K_M | ~62 t/s |
| DeepSeek R1 Distill Qwen 32B | 32.5B | 65.0 | Q4_K_M | ~62 t/s |
| Gemma 4 31B | 31B | — | Q4_K_M | ~65 t/s |
| Qwen3 30B-A3B (MoE) | 30B | — | Q5_K_M | ~591.4 t/s |
| Gemma 2 27B Instruct | 27.2B | 38.0 | Q5_K_M | ~59.3 t/s |
| Gemma 3 27B Instruct | 27B | — | Q5_K_M | ~59.7 t/s |
| Qwen 3.6 27B | 27B | — | Q5_K_M | ~59.7 t/s |
| Gemma 4 26B (MoE) | 26B | — | Q5_K_M | ~466.9 t/s |
| Mistral Small 3.1 24B Instruct | 24B | — | Q6_K | ~56 t/s |
| Mistral Small 22B | 22.2B | 49.2 | Q6_K | ~60.5 t/s |
| Qwen3 14B | 14.8B | — | Q8 | ~68.1 t/s |
| Qwen 2.5 14B Instruct | 14.7B | 51.2 | Q8 | ~68.6 t/s |
| Phi-4 14B Instruct | 14B | 56.1 | Q8 | ~72 t/s |
| Mistral Nemo 12B Instruct | 12.2B | 35.6 | Q8 | ~82.6 t/s |
| Gemma 3 12B Instruct | 12.2B | — | Q8 | ~82.6 t/s |
| Gemma 2 9B Instruct | 9.2B | 32.0 | Q8 | ~109.6 t/s |
| Llama 3.1 8B Instruct | 8B | 37.5 | FP16 | ~63 t/s |
| DeepSeek R1 Distill Llama 8B | 8B | 41.0 | FP16 | ~63 t/s |
| Qwen3 8B | 8B | — | FP16 | ~63 t/s |
| Qwen 2.5 7B Instruct | 7.6B | 36.5 | FP16 | ~66.3 t/s |
| Mistral 7B Instruct v0.3 | 7.25B | 30.0 | FP16 | ~69.5 t/s |
| Gemma 3 4B Instruct | 4B | — | FP16 | ~126 t/s |
| Gemma 4 E4B | 4B | — | FP16 | ~126 t/s |
| Phi-3.5 Mini Instruct | 3.8B | 35.6 | FP16 | ~132.6 t/s |
| Llama 3.2 3B Instruct | 3.2B | 24.0 | FP16 | ~157.5 t/s |
| Qwen 2.5 3B Instruct | 3.1B | 32.4 | FP16 | ~162.6 t/s |
| Gemma 2 2B Instruct | 2.6B | 17.8 | FP16 | ~193.8 t/s |
| Gemma 4 E2B | 2B | — | FP16 | ~252 t/s |
| SmolLM2 1.7B Instruct | 1.7B | 19.0 | FP16 | ~296.5 t/s |
| Qwen 2.5 1.5B Instruct | 1.5B | 16.8 | FP16 | ~336 t/s |
| Llama 3.2 1B Instruct | 1.24B | 12.5 | FP16 | ~406.5 t/s |
| Gemma 3 1B Instruct | 1B | — | FP16 | ~504 t/s |
| Qwen 2.5 0.5B Instruct | 0.5B | 10.0 | FP16 | ~1008 t/s |
| SmolLM2 360M Instruct | 0.36B | 8.0 | FP16 | ~1400 t/s |
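The speed column above is consistent with a simple bandwidth-bound model of single-stream decode: each generated token streams every active weight through the memory bus once, so tokens/s is roughly bandwidth divided by weight bytes. A sketch of that arithmetic, assuming nominal bits per weight and ignoring KV-cache traffic, kernel overhead, and batching:

```python
# Bandwidth-bound decode estimate: each generated token reads every active
# weight once, so tokens/s ~= memory bandwidth / weight bytes per token.
# Nominal bits-per-weight; real numbers also depend on KV-cache reads,
# kernels, and batch size.

BANDWIDTH_GBPS = 1008.0  # RTX 4090 memory bandwidth

def decode_tps(active_params_b: float, bits_per_weight: float) -> float:
    gb_per_token = active_params_b * bits_per_weight / 8
    return BANDWIDTH_GBPS / gb_per_token

print(decode_tps(8.0, 16))   # Llama 3.1 8B, FP16   -> 63.0 t/s
print(decode_tps(32.5, 4))   # Qwen 2.5 32B, Q4_K_M -> ~62 t/s
print(decode_tps(0.5, 16))   # Qwen 2.5 0.5B, FP16  -> 1008 t/s
```

This also explains the MoE outliers: entries like Qwen3 30B-A3B only stream their ~3B active parameters per token, so they decode several times faster than dense models of the same total size.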
Models that fit with CPU offload (7)
These use system RAM for layers that don't fit in VRAM — expect much slower inference.
| Model | Params | MMLU-Pro | Quant | Est. speed |
|---|---|---|---|---|
| Qwen 3.5 122B-A10B (MoE) | 122B | — | Q2_K | ~84 t/s |
| Llama 4 Scout 109B | 109B | 70.0 | Q2_K | ~49.4 t/s |
| Qwen 2.5 72B Instruct | 72B | 58.1 | Q4_K_M | ~7 t/s |
| Llama 3.3 70B Instruct | 70B | 68.9 | Q4_K_M | ~7.2 t/s |
| DeepSeek R1 Distill Llama 70B | 70B | 70.0 | Q4_K_M | ~7.2 t/s |
| Llama 3.1 70B Instruct | 70B | 66.4 | Q4_K_M | ~7.2 t/s |
| Command-R 35B | 35B | 33.0 | Q6_K | ~9.6 t/s |
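The slowdown follows from the same bandwidth math: the RAM-resident layers stream at host-memory speed, so per-token time is dominated by the spilled portion. A rough split-bandwidth sketch, where the ~50 GB/s host bandwidth and the ~22 GB usable-VRAM budget are illustrative assumptions rather than this page's measured setup:

```python
# Split-bandwidth estimate for CPU offload: per-token time is the sum of
# streaming the VRAM-resident and RAM-resident weight portions over their
# own buses, so throughput collapses toward the RAM share. The host
# bandwidth and usable-VRAM budget below are illustrative assumptions.

GPU_BW_GBPS = 1008.0  # RTX 4090 VRAM
CPU_BW_GBPS = 50.0    # assumed dual-channel host memory

def offload_tps(weight_gb: float, vram_budget_gb: float = 22.0) -> float:
    gpu_gb = min(weight_gb, vram_budget_gb)
    cpu_gb = weight_gb - gpu_gb
    return 1 / (gpu_gb / GPU_BW_GBPS + cpu_gb / CPU_BW_GBPS)

# Command-R 35B at Q6_K: ~26 GB of weights, ~4 GB spilling to system RAM
print(offload_tps(35 * 6 / 8))  # -> ~9.4 t/s, close to the ~9.6 t/s listed
```

The fast MoE entries at the top of this table fit the same pattern: at Q2_K their active-parameter working set is small enough that even a partial spill leaves decode comparatively quick.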
