
AMD Radeon PRO W7800

The AMD Radeon PRO W7800 has 32 GB of VRAM and 576 GB/s of memory bandwidth. It can run 47 of our 70 tracked models natively in VRAM at 8K context.

The AMD Radeon PRO W7800 is an RDNA 3 professional workstation GPU with 32 GB of ECC-capable GDDR6 on a 256-bit bus at 576 GB/s, backed by 64 MB of Infinity Cache and 70 compute units. Its 32 GB of VRAM fits 34B models at Q4_K_M and 27B models at Q8_0 fully in memory, on par with the NVIDIA RTX 5000 Ada on capacity. ROCm support applies on Linux; Windows users should use the Vulkan backend via llama.cpp.

The AMD Radeon PRO W7800 is a professional workstation AMD GPU based on the RDNA 3 architecture, released in 2023. It features 32 GB of GDDR6 VRAM at 576 GB/s memory bandwidth, accessed through the ROCm backend. ROCm is Linux-only; on Windows, use the Vulkan backend instead. Requires llama.cpp compiled with ROCm support.

For local LLM inference, this GPU runs 47 of the 70 models we track natively in VRAM at 8K context. The largest model it handles in VRAM is Qwen 2.5 72B Instruct (24.3 t/s at Q2_K). It comfortably runs models up to ~27-32B parameters at Q4. Larger models need CPU offload or multi-GPU. On Llama 3.3 70B Instruct, it achieves approximately 25 tokens per second at Q2_K quantization. An additional 7 models fit with CPU offload — slower but usable.
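The "fits in VRAM" claim above comes down to simple arithmetic: quantized weight size plus KV cache plus runtime overhead must stay under 32 GB. A minimal sketch of that check, with illustrative constants (bits-per-weight values, fp16 KV cache, a flat 1 GB overhead) that are assumptions rather than CanItRun's exact methodology:

```python
# Back-of-the-envelope check: does a model fit in 32 GB of VRAM at 8K context?
# Constants here are illustrative assumptions, not an exact sizing model.

def fits_in_vram(params_b: float, bits_per_weight: float, n_layers: int,
                 kv_dim: int = 1024, context: int = 8192,
                 vram_gb: float = 32.0) -> bool:
    """Rough fit test: quantized weights + fp16 KV cache + ~1 GB overhead."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) * 2 bytes (fp16) * layers * kv_dim * tokens
    kv_gb = 2 * 2 * n_layers * kv_dim * context / 1e9
    return weights_gb + kv_gb + 1.0 <= vram_gb

# A 70B model at ~2.6 bits/weight (roughly Q2_K) with 80 GQA layers squeezes in;
# the same model at ~4.5 bits/weight (roughly Q4_K_M) does not.
print(fits_in_vram(70, 2.6, 80))  # weights ~22.8 GB + KV ~2.7 GB -> True
print(fits_in_vram(70, 4.5, 80))  # weights ~39.4 GB alone -> False
```

This also shows why Q2_K is the only quantization listed for the 70B models: at 4-bit and above, the weights alone exceed the card's 32 GB.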

The ROCm backend works on Linux with llama.cpp compiled for AMD; Windows users need the Vulkan backend instead. Among workstation GPUs, it sits above the AMD Radeon RX 7900 GRE and Apple M2 Max (32GB) in performance, but below the AMD Radeon AI Pro 9700 32GB.

Vendor: AMD
Architecture: RDNA 3
VRAM: 32 GB
Memory type: GDDR6
Memory bandwidth: 576 GB/s
Compute backend: ROCm
Tier: Workstation
Released: 2023
Models (native): 47 / 70
Models (offload): 7 / 70
Software: ROCm is Linux-only; on Windows use the Vulkan backend instead. Requires llama.cpp compiled with ROCm support.

Models this GPU runs natively in VRAM (47)

Models that fit with CPU offload (7)

These use system RAM for layers that don't fit in VRAM — expect much slower inference.
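In llama.cpp, CPU offload is controlled by how many transformer layers you assign to the GPU (the `--n-gpu-layers` / `-ngl` flag, which is a real llama.cpp option). A rough sketch for estimating that number, assuming layers are roughly equal in size and reserving some VRAM for KV cache and scratch buffers (the helper name and 2 GB reserve are illustrative assumptions):

```python
# Sketch: estimate a -ngl value for llama.cpp when a model exceeds VRAM.
# Assumes roughly uniform layer sizes; the 2 GB reserve is an assumption.

def gpu_layers(total_layers: int, model_gb: float,
               vram_gb: float = 32.0, reserve_gb: float = 2.0) -> int:
    """How many layers fit on the GPU after reserving VRAM for KV/scratch."""
    per_layer_gb = model_gb / total_layers
    usable_gb = max(vram_gb - reserve_gb, 0.0)
    return min(total_layers, int(usable_gb / per_layer_gb))

# An 80-layer model weighing 40 GB: about 60 of 80 layers fit in 32 GB,
# so the remaining 20 layers run from system RAM, which dominates latency.
print(gpu_layers(80, 40.0))  # -> 60
```

Because the offloaded layers are bound by system-RAM bandwidth rather than the GPU's 576 GB/s, even a small CPU share slows generation considerably.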

Too large for this GPU (16)

Frequently asked questions

How much VRAM does the AMD Radeon PRO W7800 have?
The AMD Radeon PRO W7800 has 32 GB of GDDR6 with 576 GB/s memory bandwidth.
What is the AMD Radeon PRO W7800 best for?
With 32 GB of VRAM, the AMD Radeon PRO W7800 is well-suited for running 7B–32B models at Q4 with room for context, making it a great all-rounder for local LLM inference.
What LLMs can the AMD Radeon PRO W7800 run locally?
The AMD Radeon PRO W7800 can run 47 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include Llama 3.3 70B Instruct at Q2_K, Llama 3.1 8B Instruct at BF16, and Llama 3.2 3B Instruct at FP32.
Can the AMD Radeon PRO W7800 run Llama 3.3 70B Instruct?
Yes. The AMD Radeon PRO W7800 runs Llama 3.3 70B Instruct natively in VRAM at Q2_K quantization, achieving approximately 25 tokens per second.
Can the AMD Radeon PRO W7800 run Qwen 3.6 27B?
Yes. The AMD Radeon PRO W7800 runs Qwen 3.6 27B natively in VRAM at Q6_K quantization, achieving approximately 26 tokens per second.
Can the AMD Radeon PRO W7800 run Llama 3.1 8B Instruct?
Yes. The AMD Radeon PRO W7800 runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 36 tokens per second.