
AMD Radeon RX 7900 GRE

The AMD Radeon RX 7900 GRE has 16 GB of VRAM and 576 GB/s of memory bandwidth. It can run 40 of our 70 tracked models natively in VRAM at 8K context.

The AMD Radeon RX 7900 GRE is a consumer-grade AMD GPU based on the RDNA 3 architecture, released in 2023. It features 16 GB of GDDR6 VRAM with 576 GB/s of memory bandwidth, accessed via the ROCm backend. ROCm is Linux-only; on Windows, use the Vulkan backend instead. Requires llama.cpp compiled with ROCm support.
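A typical llama.cpp build for each backend might look like the following. This is a sketch, not a full guide: the CMake options `GGML_HIP` and `GGML_VULKAN` are current llama.cpp flags, and `gfx1100` is the RDNA 3 target that should cover this card, but check the project's build documentation for your version before relying on them.

```shell
# Linux: build llama.cpp with ROCm (HIP) support for an RDNA 3 card
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build --config Release -j

# Windows (or any system without ROCm): build with the Vulkan backend instead
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
```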

For local LLM inference, this GPU runs 40 of the 70 models we track natively in VRAM at 8K context. The largest model it fits entirely in VRAM is Qwen 3.5 35B-A3B (MoE), at 641.9 t/s with Q2_K quantization. It handles smaller models up to ~7-14B at reasonable precision, with some 27-32B models fitting at lower quantization. On Qwen 3.6 27B, it achieves approximately 49.6 tokens per second at Q3_K_M quantization. An additional 9 models fit with CPU offload, which is slower but usable.
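These speeds follow largely from memory bandwidth: single-batch decode streams the active weights once per generated token, so tokens/s is roughly bounded by bandwidth divided by model size in bytes. A minimal sketch of that estimate (the bytes-per-weight figures for GGUF quant formats are approximations, and real-world throughput lands somewhat below the ceiling):

```python
# Rough bandwidth-bound decode ceiling: tokens/s ~= bandwidth / active weight bytes.
# Bytes-per-weight values below are approximations for common GGUF quant formats.
BYTES_PER_WEIGHT = {
    "Q2_K": 0.35,
    "Q3_K_M": 0.44,
    "Q4_K_M": 0.57,
    "Q8_0": 1.06,
}

def estimate_tps(active_params_b: float, quant: str, bandwidth_gbs: float) -> float:
    """Approximate upper-bound tokens/s from params (billions) and bandwidth."""
    model_gb = active_params_b * BYTES_PER_WEIGHT[quant]
    return bandwidth_gbs / model_gb

# RX 7900 GRE at 576 GB/s, dense 27B model at Q3_K_M:
print(round(estimate_tps(27, "Q3_K_M", 576), 1))  # -> 48.5
```

The ~48 t/s ceiling is in the same ballpark as the measured 49.6 t/s above, and the formula also shows why the 35B-A3B MoE is so fast: only ~3B parameters are active per token.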

Among consumer GPUs, the RX 7900 GRE sits above the Intel Arc A770 16GB and the AMD Radeon RX 6800 XT in performance, but below the NVIDIA RTX 4080.

Vendor: AMD
Architecture: RDNA 3
VRAM: 16 GB
Memory type: GDDR6
Memory bandwidth: 576 GB/s
Compute backend: ROCm
Tier: Consumer
Released: 2023
Models (native): 40 / 70
Models (offload): 9 / 70
Software: ROCm is Linux-only; on Windows, use the Vulkan backend instead. Requires llama.cpp compiled with ROCm support.

Models this GPU runs natively in VRAM (40)

Models that fit with CPU offload (9)

These use system RAM for layers that don't fit in VRAM — expect much slower inference.
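With llama.cpp, offload is controlled by `-ngl N`, which keeps the first N transformer layers in VRAM and runs the rest on the CPU. A rough way to pick N is to divide usable VRAM by the per-layer size. This sketch ignores the embedding/output tensors and KV cache beyond a fixed headroom, so treat it as a starting point, not an exact answer:

```python
def layers_on_gpu(model_gb: float, n_layers: int, vram_gb: float,
                  headroom_gb: float = 1.5) -> int:
    """Estimate how many transformer layers fit in VRAM (for llama.cpp -ngl)."""
    per_layer_gb = model_gb / n_layers          # crude: assumes evenly sized layers
    usable = max(vram_gb - headroom_gb, 0.0)    # reserve headroom for KV cache etc.
    return min(n_layers, int(usable / per_layer_gb))

# Example: a ~34 GB 70B GGUF (Q3_K_M, 80 layers) on this 16 GB card.
print(layers_on_gpu(34, 80, 16))  # -> 34 (roughly 34 of 80 layers on the GPU)
```

In that example you would launch with `-ngl 34` and let the remaining layers run from system RAM, which is why offloaded models are markedly slower.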

Too large for this GPU (21)
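Which bucket a model lands in comes down to its weight size at the chosen quant, plus KV cache at 8K context, versus the card's 16 GB of VRAM. A minimal sketch of that classification (the bytes-per-weight value, the 1 GB KV-cache allowance, and the offload cutoff are all rough assumptions, not CanItRun's actual criteria):

```python
def classify(params_b: float, bytes_per_weight: float,
             kv_cache_gb: float = 1.0, vram_gb: float = 16.0,
             max_offload_factor: float = 2.5) -> str:
    """Bucket a model as 'native', 'offload', or 'too large' for a given GPU."""
    weights_gb = params_b * bytes_per_weight
    total_gb = weights_gb + kv_cache_gb
    if total_gb <= vram_gb:
        return "native"
    # Assume offload is practical up to a few times VRAM (depends on system RAM).
    if total_gb <= vram_gb * max_offload_factor:
        return "offload"
    return "too large"

print(classify(8, 1.06))    # 8B at Q8_0 (~1.06 B/weight)  -> native
print(classify(70, 0.44))   # 70B at Q3_K_M (~0.44 B/weight) -> offload
print(classify(70, 1.06))   # 70B at Q8_0                    -> too large
```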

Frequently asked questions

How much VRAM does the AMD Radeon RX 7900 GRE have?
The AMD Radeon RX 7900 GRE has 16 GB of GDDR6 with 576 GB/s memory bandwidth.
What LLMs can the AMD Radeon RX 7900 GRE run locally?
The AMD Radeon RX 7900 GRE can run 40 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include Llama 3.1 8B Instruct at Q8_0, Llama 3.2 3B Instruct at BF16, and Llama 3.2 1B Instruct at FP32.
Can the AMD Radeon RX 7900 GRE run Llama 3.3 70B Instruct?
The AMD Radeon RX 7900 GRE can run Llama 3.3 70B Instruct with CPU offload at Q3_K_M quantization, but inference will be slower than native VRAM execution.
Can the AMD Radeon RX 7900 GRE run Qwen 3.6 27B?
Yes. The AMD Radeon RX 7900 GRE runs Qwen 3.6 27B natively in VRAM at Q3_K_M quantization, achieving approximately 49.6 tokens per second.
Can the AMD Radeon RX 7900 GRE run Llama 3.1 8B Instruct?
Yes. The AMD Radeon RX 7900 GRE runs Llama 3.1 8B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 72 tokens per second.