AMD Radeon AI Pro 9700 32GB
The AMD Radeon AI Pro 9700 32GB has 32 GB VRAM and 640 GB/s memory bandwidth. It can run 47 of our 70 tracked models natively in VRAM at 8K context.
The AMD Radeon AI Pro 9700 is a professional AI accelerator built on RDNA 4 with 32GB of GDDR6 memory and 640 GB/s bandwidth. Featuring 4,096 stream processors and 128 AI Accelerators on the Navi 48 die, it delivers 47.8 TFLOPS of FP32 and 95.7 TFLOPS of FP16 performance. Designed for local AI inference and development workloads, it offers a competitive professional option for single-GPU inference deployments where ECC memory and datacenter infrastructure aren't required.
The AMD Radeon AI Pro 9700 32GB is a workstation-class professional AMD GPU based on the RDNA 4 architecture, released in 2025. It pairs 32 GB of GDDR6 VRAM with 640 GB/s of memory bandwidth and runs local inference through the ROCm backend. ROCm is Linux-only; on Windows, use the Vulkan backend instead. llama.cpp must be compiled with ROCm support.
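As a minimal sketch, one way to drive a ROCm-enabled llama.cpp build from Python is the llama-cpp-python bindings (an assumption here, not the only option; the package itself must be installed with its ROCm/HIP backend enabled, and the GGUF path below is hypothetical):

```python
from llama_cpp import Llama

# Hypothetical local GGUF path; any model from the native list below works.
llm = Llama(
    model_path="models/qwen2.5-32b-instruct-q5_k_m.gguf",
    n_gpu_layers=-1,  # -1 offloads every layer to the GPU (fits-in-VRAM case)
    n_ctx=8192,       # the 8K context used for the figures on this page
)

out = llm("Briefly explain what memory bandwidth limits in LLM decoding.",
          max_tokens=64)
print(out["choices"][0]["text"])
```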
For local LLM inference, this GPU runs 47 of the 70 models we track natively in VRAM at 8K context. The largest model it handles entirely in VRAM is Qwen 2.5 72B Instruct (~27 t/s at Q2_K), and Llama 3.3 70B Instruct reaches approximately 27.8 tokens per second at the same quantization. It comfortably runs models up to ~32B parameters at Q4-Q5 quantization; larger models need CPU offload or multiple GPUs. An additional 7 models fit with CPU offload, slower but usable.
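Those figures follow from simple arithmetic: weights plus KV cache must fit in 32 GB, and decode speed on a bandwidth-bound card is at most memory bandwidth divided by the bytes read per token. A back-of-the-envelope sketch (the bits-per-weight values are approximate GGUF averages, not exact file sizes):

```python
# Rough sizing for this card: weights + KV cache vs. 32 GB of VRAM, and a
# bandwidth-derived decode ceiling. Real runs add runtime overhead.

VRAM_GB = 32.0
BANDWIDTH_GBPS = 640.0

BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.5, "Q8_0": 8.5, "BF16": 16.0}

def weight_gb(params_b: float, quant: str) -> float:
    """Approximate GGUF weight size in GB, params given in billions."""
    return params_b * BITS_PER_WEIGHT[quant] / 8

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx: int = 8192) -> float:
    """FP16 KV cache: 2 tensors (K and V) x 2 bytes x layers x heads x dim x tokens."""
    return 2 * 2 * layers * kv_heads * head_dim * ctx / 1e9

# Llama 3.3 70B at Q2_K: 80 layers, 8 KV heads (GQA), head dim 128.
w = weight_gb(70, "Q2_K")            # ~22.8 GB of weights
kv = kv_cache_gb(80, 8, 128)         # ~2.7 GB of KV cache at 8K context
print(f"fits in VRAM: {w + kv < VRAM_GB}")               # True, with headroom
print(f"decode ceiling: ~{BANDWIDTH_GBPS / w:.1f} t/s")  # ~28 t/s, near the ~27.8 above
```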
Among the GPUs we track, it sits above the Apple M5 Max (48GB) and the AMD Radeon RX 7900 GRE in performance, but below the NVIDIA RTX 4080.
| Spec | Value |
| --- | --- |
| Vendor | AMD |
| Architecture | RDNA 4 |
| VRAM | 32 GB |
| Memory type | GDDR6 |
| Memory bandwidth | 640 GB/s |
| Compute backend | ROCm |
| Tier | Workstation |
| Released | 2025 |
| Models (native) | 47 / 70 |
| Models (offload) | 7 / 70 |
Models this GPU runs natively in VRAM (47)
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 58.1 · Q2_K · ~27 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q2_K · ~27.8 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q2_K · ~27.8 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q2_K · ~27.8 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q3_K_M · ~126.9 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q3_K_M · ~42.5 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro — · Q5_K_M · ~364.4 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro — · Q5_K_M · ~28.4 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q5_K_M · ~28.9 t/s
- Qwen3 32B · 32.8B · MMLU-Pro — · Q5_K_M · ~30.3 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 55.1 · Q5_K_M · ~30.6 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q5_K_M · ~30.6 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q5_K_M · ~30.6 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro — · Q6_K · ~286.2 t/s
- Gemma 4 31B · 31B · MMLU-Pro — · Q5_K_M · ~32.1 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro — · Q6_K · ~286.2 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q6_K · ~28.7 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro — · Q6_K · ~28.9 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro — · Q6_K · ~28.9 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro — · Q6_K · ~225.9 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro — · Q8_0 · ~26.7 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q8_0 · ~28.8 t/s
- GPT-OSS 20B · 21B · MMLU-Pro — · Q8_0 · ~176 t/s
- Qwen3 14B · 14.8B · MMLU-Pro — · Q8_0 · ~43.2 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 51.2 · Q8_0 · ~43.5 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 56.1 · Q8_0 · ~45.7 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · BF16 · ~26.2 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro — · BF16 · ~26.2 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · BF16 · ~34.8 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 37.5 · BF16 · ~40 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · BF16 · ~40 t/s
- Qwen3 8B · 8B · MMLU-Pro — · BF16 · ~40 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 36.5 · BF16 · ~42.1 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~44.1 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro — · FP32 · ~40 t/s
- Gemma 4 E4B · 4B · MMLU-Pro — · FP32 · ~40 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 35.6 · FP32 · ~42.1 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~50 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~51.6 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~61.5 t/s
- Gemma 4 E2B · 2B · MMLU-Pro — · FP32 · ~80 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~94.1 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~106.7 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~129 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro — · FP32 · ~160 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~320 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~444.4 t/s
Models that fit with CPU offload (7)
These use system RAM for layers that don't fit in VRAM, so expect much slower inference; a partial-offload sketch follows the list.
- Mixtral 8x22B Instruct v0.1 · 141B · MMLU-Pro 40.0 · Q2_K · ~12.5 t/s
- Qwen 3.5 122B-A10B (MoE) · 122B · MMLU-Pro — · Q2_K · ~48.6 t/s
- Nemotron 3 Super 120B · 120B · MMLU-Pro — · Q2_K · ~40.5 t/s
- GPT-OSS 120B · 117B · MMLU-Pro — · Q2_K · ~97.3 t/s
- Llama 4 Scout 109B · 109B · MMLU-Pro 70.0 · Q3_K_M · ~21.9 t/s
- GLM-4.5 Air 106B · 106B · MMLU-Pro — · Q3_K_M · ~31 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro — · Q3_K_M · ~31 t/s
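With llama-cpp-python, partial offload is just a matter of setting n_gpu_layers below the model's layer count; a sketch assuming a hypothetical local Mixtral 8x22B GGUF (~46 GB of Q2_K weights, well over the card's 32 GB):

```python
from llama_cpp import Llama

# Hypothetical local GGUF path. Mixtral 8x22B at Q2_K is ~46 GB of weights,
# so only part of the model can live in the 9700's 32 GB of VRAM.
llm = Llama(
    model_path="models/mixtral-8x22b-instruct-v0.1-q2_k.gguf",
    n_gpu_layers=35,  # offload as many layers as fit; the rest run from system RAM
    n_ctx=8192,
)
```

Watching VRAM usage (for example with rocm-smi) while raising n_gpu_layers is the usual way to find the highest value that still loads.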
Too large for this GPU (16)
Frequently asked questions
- How much VRAM does the AMD Radeon AI Pro 9700 32GB have?
- The AMD Radeon AI Pro 9700 32GB has 32 GB of GDDR6 with 640 GB/s memory bandwidth.
- What LLMs can the AMD Radeon AI Pro 9700 32GB run locally?
- The AMD Radeon AI Pro 9700 32GB can run 47 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8K context. Top options include: Llama 3.3 70B Instruct at Q2_K, Llama 3.1 8B Instruct at BF16, Llama 3.2 3B Instruct at FP32.
- Can the AMD Radeon AI Pro 9700 32GB run Llama 3.3 70B Instruct?
- Yes. The AMD Radeon AI Pro 9700 32GB runs Llama 3.3 70B Instruct natively in VRAM at Q2_K quantization, achieving approximately 27.8 tokens per second.
- Can the AMD Radeon AI Pro 9700 32GB run Qwen 3.6 27B?
- Yes. The AMD Radeon AI Pro 9700 32GB runs Qwen 3.6 27B natively in VRAM at Q6_K quantization, achieving approximately 28.9 tokens per second.
- Can the AMD Radeon AI Pro 9700 32GB run Llama 3.1 8B Instruct?
- Yes. The AMD Radeon AI Pro 9700 32GB runs Llama 3.1 8B Instruct natively in VRAM at BF16 precision, achieving approximately 40 tokens per second.