
AMD Radeon PRO W7900

The AMD Radeon PRO W7900 has 48 GB VRAM and 864 GB/s memory bandwidth. It can run 52 of our 70 tracked models natively in VRAM at 8K context.

The AMD Radeon PRO W7900 is AMD's flagship RDNA 3 workstation GPU, delivering 48 GB of ECC-capable GDDR6 on a 384-bit bus at 864 GB/s, with 96 MB of Infinity Cache and 6,144 stream processors. It can hold 34B models at Q8_0 and 70B models at Q4_K_M in VRAM, matching the NVIDIA RTX 6000 Ada on capacity at comparable bandwidth, which makes it AMD's most capable single-GPU workstation inference platform. ROCm support is Linux-only; on Windows, use the Vulkan backend.

The AMD Radeon PRO W7900 is a professional workstation GPU from AMD based on the RDNA 3 architecture, released in 2023. It features 48 GB of GDDR6 VRAM with 864 GB/s of memory bandwidth, driven by the ROCm backend. ROCm is Linux-only; on Windows, use the Vulkan backend instead. Requires llama.cpp compiled with ROCm support.

For local LLM inference, this GPU runs 52 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is Nemotron 3 Super 120B (240.7 t/s at Q2_K). It handles most models up to the 70B class in VRAM, including some larger MoE models. On Llama 3.3 70B Instruct, it achieves approximately 28.7 tokens per second at Q3_K_M quantization. An additional 2 models fit with CPU offload — slower but usable.
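The capacity claims above can be sanity-checked with back-of-the-envelope math: weight bytes plus KV cache against the 48 GB budget. This is a rough sketch, not llama.cpp's exact accounting; the bits-per-weight averages and the GQA key/value dimension (8 KV heads × head size 128 for Llama 70B) are assumptions.

```python
# Rough VRAM-fit check for a dense model on a 48 GB card.
# Bits-per-weight values are approximate GGUF averages (assumption),
# and the KV cache assumes FP16 keys and values.

BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5, "FP16": 16.0}

def fits_in_vram(params_b, quant, n_layers, kv_dim, ctx=8192, vram_gb=48.0):
    """Return (estimated_gb, fits) for params_b billion parameters."""
    weights_gb = params_b * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9
    # KV cache: 2 tensors (K and V) * layers * kv_dim * context * 2 bytes
    kv_gb = 2 * n_layers * kv_dim * ctx * 2 / 1e9
    total = weights_gb + kv_gb
    return round(total, 1), total <= vram_gb

# Llama 3.3 70B: 80 layers, GQA kv_dim = 8 KV heads * 128 = 1024
print(fits_in_vram(70, "Q4_K_M", 80, 1024))  # → (45.1, True)
```

At Q4_K_M a 70B dense model lands around 45 GB with an 8K FP16 KV cache, which is why it squeezes into 48 GB while FP16 weights (about 140 GB) do not.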

The ROCm backend works on Linux with llama.cpp compiled for AMD GPUs; Windows users should use the Vulkan backend instead. Among workstation GPUs, it sits above the NVIDIA RTX A6000 and the Apple M1 Ultra (64 GB) in performance, but below the NVIDIA RTX 6000 Ada.
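A minimal build sketch for the two backends. The CMake flags and the gfx target follow recent llama.cpp conventions and should be checked against the project's current build docs (older checkouts used -DLLAMA_HIPBLAS=ON); depending on your ROCm install you may also need to point CC/CXX at ROCm's clang.

```shell
# Linux: build llama.cpp with the ROCm (HIP) backend.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# gfx1100 is the RDNA 3 (Navi 31) target covering the W7900.
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release
cmake --build build -j

# Windows: ROCm is not an option here, so build the Vulkan backend instead:
#   cmake -B build -DGGML_VULKAN=ON
#   cmake --build build --config Release -j
```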

Vendor: AMD
Architecture: RDNA 3
VRAM: 48 GB
Memory type: GDDR6
Memory bandwidth: 864 GB/s
Compute backend: ROCm
Tier: Workstation
Released: 2023
Models (native): 52 / 70
Models (offload): 2 / 70
Software: ROCm is Linux-only; on Windows use the Vulkan backend instead. Requires llama.cpp compiled with ROCm support.

Models this GPU runs natively in VRAM (52)

Models that fit with CPU offload (2)

These use system RAM for layers that don't fit in VRAM — expect much slower inference.
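With llama.cpp, partial offload is controlled by the number of layers placed on the GPU. A hedged example of the invocation; the model path and layer count are placeholders to adapt:

```shell
# Partial offload: -ngl sets how many layers go to VRAM, the rest stay
# in system RAM on the CPU. Lower -ngl if you hit out-of-memory errors.
./build/bin/llama-cli -m models/model.gguf -ngl 60 -c 8192 -p "Hello"
```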

Too large for this GPU (16)

Frequently asked questions

How much VRAM does the AMD Radeon PRO W7900 have?
The AMD Radeon PRO W7900 has 48 GB of GDDR6 with 864 GB/s memory bandwidth.
What is the AMD Radeon PRO W7900 best for?
With 48 GB of VRAM, the AMD Radeon PRO W7900 is ideal for running 70B-class models at Q4 quantization and large MoE models — a workstation sweet spot for local inference.
What LLMs can the AMD Radeon PRO W7900 run locally?
The AMD Radeon PRO W7900 can run 52 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include: Llama 3.3 70B Instruct at Q3_K_M, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
Can the AMD Radeon PRO W7900 run Llama 3.3 70B Instruct?
Yes. The AMD Radeon PRO W7900 runs Llama 3.3 70B Instruct natively in VRAM at Q3_K_M quantization, achieving approximately 28.7 tokens per second.
Can the AMD Radeon PRO W7900 run Qwen 3.6 27B?
Yes. The AMD Radeon PRO W7900 runs Qwen 3.6 27B natively in VRAM at Q8_0 quantization, achieving approximately 32 tokens per second.
Can the AMD Radeon PRO W7900 run Llama 3.1 8B Instruct?
Yes. The AMD Radeon PRO W7900 runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 27 tokens per second.