AMD Radeon RX 7900 GRE
The AMD Radeon RX 7900 GRE has 16 GB of VRAM and 576 GB/s of memory bandwidth. It can run 40 of our 70 tracked models natively in VRAM at 8K context.
The AMD Radeon RX 7900 GRE is a consumer-grade AMD GPU based on the RDNA 3 architecture, released in 2023. It features 16 GB of GDDR6 VRAM with 576 GB/s of memory bandwidth. For inference it uses the ROCm backend, which requires llama.cpp compiled with ROCm support and is Linux-only; on Windows, use the Vulkan backend instead.
For local LLM inference, this GPU runs 40 of the 70 models we track natively in VRAM at 8K context. The largest model it holds entirely in VRAM is Qwen 3.5 35B-A3B (MoE), which reaches roughly 641.9 t/s at Q2_K. It handles smaller models in the ~7-14B range at reasonable precision, and some 27-32B models fit at lower quantization; Qwen 3.6 27B, for example, achieves approximately 49.6 tokens per second at Q3_K_M. An additional nine models fit with CPU offload, which is slower but usable.
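A rough fit check explains where these cutoffs come from: quantized weight size dominates, with the 8K KV cache on top. The sketch below uses approximate bits-per-weight figures for llama.cpp quant formats and illustrative layer/head counts for a dense 27B model; all of those numbers are assumptions, not measured values.

```python
# Back-of-envelope VRAM check: quantized weights plus KV cache against
# the 16 GB budget. Bits-per-weight values are approximate llama.cpp
# figures; the 27B layer/head numbers below are illustrative.

BPW = {"Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8, "Q6_K": 6.6, "Q8_0": 8.5}

def weights_gb(params_billion: float, quant: str) -> float:
    """Approximate GGUF weight size in GB at a given quantization."""
    return params_billion * BPW[quant] / 8

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx: int) -> float:
    """FP16 KV cache: 2 tensors (K and V) x 2 bytes per element."""
    return 2 * 2 * layers * kv_heads * head_dim * ctx / 1e9

total = weights_gb(27, "Q3_K_M") + kv_cache_gb(62, 8, 128, 8192)
print(f"~{total:.1f} GB needed vs 16 GB VRAM")  # ~15.2 GB: a tight fit
```

That tight margin is why the 27B entries in the list below sit at Q3_K_M rather than Q4 or higher.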
The ROCm backend works on Linux with llama.cpp compiled for AMD; Windows users should use the Vulkan backend instead. Among consumer GPUs, it sits above the Intel Arc A770 16GB and the AMD Radeon RX 6800 XT in performance, but below the NVIDIA RTX 4080.
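For reference, here is a minimal llama-cpp-python session that loads a model fully onto the GPU at the 8K context these figures assume. It presumes llama-cpp-python was built against a ROCm-enabled llama.cpp (the exact build flag varies by version), and the model path is a placeholder, not a real file from this page.

```python
from llama_cpp import Llama

# Load a GGUF model fully into the 16 GB of VRAM. The model path is a
# placeholder; pick any entry from the native list below.
llm = Llama(
    model_path="models/qwen2.5-7b-instruct-q8_0.gguf",  # hypothetical path
    n_gpu_layers=-1,  # -1 offloads every layer to the GPU
    n_ctx=8192,       # the 8K context used for the figures on this page
)

out = llm("Explain GDDR6 in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```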
| Spec | Value |
| --- | --- |
| Vendor | AMD |
| Architecture | RDNA 3 |
| VRAM | 16 GB |
| Memory type | GDDR6 |
| Memory bandwidth | 576 GB/s |
| Compute backend | ROCm |
| Tier | Consumer |
| Released | 2023 |
| Models (native) | 40 / 70 |
| Models (offload) | 9 / 70 |
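The bandwidth row is the number to watch: single-stream decoding reads the active weights from VRAM for every generated token, so bandwidth divided by weight bytes gives a rough ceiling on tokens per second. It also explains the outsized MoE figures below, since MoE models read only their active experts per token. A sketch of the estimate, using approximate weight sizes rather than measurements:

```python
# Decode-speed ceiling: each generated token streams the active weights
# from VRAM once, so bandwidth / weight-bytes bounds tokens per second.
# Weight sizes below are approximate GGUF file sizes, not measurements.

BANDWIDTH_GBPS = 576  # RX 7900 GRE memory bandwidth

def ceiling_tps(active_weights_gb: float) -> float:
    return BANDWIDTH_GBPS / active_weights_gb

print(f"{ceiling_tps(8.5):.0f} t/s")  # Llama 3.1 8B at Q8_0 (~8.5 GB): ~68
print(f"{ceiling_tps(1.0):.0f} t/s")  # a ~3B-active MoE at Q2_K (~1 GB): ~576
```

Both estimates land in the same ballpark as the measured ~72 t/s and ~641.9 t/s entries below; measured numbers can edge past the naive ceiling because, for instance, only one embedding row is read per token rather than the full embedding matrix.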
Models this GPU runs natively in VRAM (40)
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro — · Q2_K · ~641.9 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q2_K · ~50.9 t/s
- Qwen3 32B · 32.8B · MMLU-Pro — · Q2_K · ~53.4 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 55.1 · Q2_K · ~53.9 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q2_K · ~53.9 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q2_K · ~53.9 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro — · Q2_K · ~641.9 t/s
- Gemma 4 31B · 31B · MMLU-Pro — · Q2_K · ~56.5 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro — · Q2_K · ~641.9 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q2_K · ~64.4 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro — · Q3_K_M · ~49.6 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro — · Q3_K_M · ~49.6 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro — · Q3_K_M · ~387.8 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro — · Q3_K_M · ~55.8 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q3_K_M · ~60.3 t/s
- GPT-OSS 20B · 21B · MMLU-Pro — · Q4_K_M · ~281.3 t/s
- Qwen3 14B · 14.8B · MMLU-Pro — · Q6_K · ~47.5 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 51.2 · Q5_K_M · ~60.8 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 56.1 · Q6_K · ~50.2 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · Q8_0 · ~47.2 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro — · Q8_0 · ~47.2 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · Q8_0 · ~62.6 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 37.5 · Q8_0 · ~72 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · Q8_0 · ~72 t/s
- Qwen3 8B · 8B · MMLU-Pro — · Q8_0 · ~72 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 36.5 · Q8_0 · ~75.8 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · Q8_0 · ~79.4 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro — · BF16 · ~72 t/s
- Gemma 4 E4B · 4B · MMLU-Pro — · BF16 · ~72 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 35.6 · BF16 · ~75.8 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · BF16 · ~90 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~46.5 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~55.4 t/s
- Gemma 4 E2B · 2B · MMLU-Pro — · FP32 · ~72 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~84.7 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~96 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~116.1 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro — · FP32 · ~144 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~288 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~400 t/s
Models that fit with CPU offload (9)
These use system RAM for layers that don't fit in VRAM; expect much slower inference. A sketch of how to set up the split follows this list.
- GLM-4.5 Air 106B · 106B · MMLU-Pro — · Q2_K · ~36.5 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro — · Q2_K · ~36.5 t/s
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 58.1 · Q3_K_M · ~4.7 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q3_K_M · ~4.8 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q3_K_M · ~4.8 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q3_K_M · ~4.8 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q5_K_M · ~17.3 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q5_K_M · ~6.4 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro — · Q6_K · ~5 t/s
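With llama-cpp-python, CPU offload is just a lower n_gpu_layers: whatever doesn't get offloaded runs on the CPU out of system RAM. A minimal sketch, with a hypothetical model path and an illustrative layer split:

```python
from llama_cpp import Llama

# Partial offload: keep as many layers in the 16 GB of VRAM as fit and
# let the rest run on the CPU. The path and layer split are illustrative;
# raise n_gpu_layers until the model no longer loads without OOM errors.
llm = Llama(
    model_path="models/llama-3.3-70b-instruct-q3_k_m.gguf",  # hypothetical
    n_gpu_layers=35,  # of the 80 total layers in a 70B Llama model
    n_ctx=8192,
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```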
Too large for this GPU (21)
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.6 355B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the AMD Radeon RX 7900 GRE have?
- The AMD Radeon RX 7900 GRE has 16 GB of GDDR6 with 576 GB/s memory bandwidth.
- What LLMs can the AMD Radeon RX 7900 GRE run locally?
- The AMD Radeon RX 7900 GRE can run 40 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include: Llama 3.1 8B Instruct at Q8_0, Llama 3.2 3B Instruct at BF16, Llama 3.2 1B Instruct at FP32.
- Can the AMD Radeon RX 7900 GRE run Llama 3.3 70B Instruct?
- The AMD Radeon RX 7900 GRE can run Llama 3.3 70B Instruct with CPU offload at Q3_K_M quantization, but at roughly 4.8 tokens per second, inference is much slower than native VRAM execution.
- Can the AMD Radeon RX 7900 GRE run Qwen 3.6 27B?
- Yes. The AMD Radeon RX 7900 GRE runs Qwen 3.6 27B natively in VRAM at Q3_K_M quantization, achieving approximately 49.6 tokens per second.
- Can the AMD Radeon RX 7900 GRE run Llama 3.1 8B Instruct?
- Yes. The AMD Radeon RX 7900 GRE runs Llama 3.1 8B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 72 tokens per second.