Intel Arc Pro B60 24GB
The Intel Arc Pro B60 24GB has 24 GB VRAM and 380 GB/s memory bandwidth. It can run 42 of our 70 tracked models natively in VRAM at 8k context.
The Intel Arc Pro B60 is a Battlemage workstation GPU with 24GB of ECC GDDR6, announced at CES 2025. It shares the 24GB VRAM capacity of the B70 in a lower-power envelope, fitting 13B models at Q8 and 30B models at lower quantizations entirely in memory.
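As a rough way to reproduce these fit claims, the sketch below estimates weights-only GGUF size from parameter count and quant format. This is a back-of-envelope check, not CanItRun's exact methodology: the bytes-per-weight figures are approximate averages for llama.cpp quant formats, and the flat 2 GB reserve for KV cache and runtime buffers is an assumption.

```python
# Rule-of-thumb VRAM-fit estimator. Bytes-per-weight values are approximate
# averages for llama.cpp quant formats, not exact file sizes.
BYTES_PER_WEIGHT = {
    "Q2_K": 0.35, "Q3_K_M": 0.44, "Q4_K_M": 0.57,
    "Q5_K_M": 0.69, "Q6_K": 0.82, "Q8_0": 1.06, "BF16": 2.0, "FP32": 4.0,
}

def fits_in_vram(params_b: float, quant: str, vram_gb: float = 24.0,
                 overhead_gb: float = 2.0) -> bool:
    """Estimate whether a model fits, reserving overhead_gb (an assumed
    flat figure) for the 8k-context KV cache and runtime buffers."""
    weights_gb = params_b * BYTES_PER_WEIGHT[quant]
    return weights_gb + overhead_gb <= vram_gb

# 13B at Q8_0 is ~13.8 GB of weights -> fits in 24 GB.
print(fits_in_vram(13, "Q8_0"))    # True
# 70B at Q4_K_M is ~39.9 GB of weights -> needs CPU offload.
print(fits_in_vram(70, "Q4_K_M"))  # False
```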
Intel Arc Pro B60 24GB: 2025 Xe2-HPG Battlemage workstation GPU with 24GB ECC GDDR6 — lower-power Arc Pro B tier.
13B at Q8 or 30B at Q4 natively. ~25-26 t/s for 7B at BF16 via Vulkan.
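Those throughput figures are consistent with a simple bandwidth-bound model of decoding: each generated token streams roughly the whole weight file through memory once, so tokens per second is about bandwidth divided by model size. A hedged back-of-envelope check against the table below (it ignores compute, caches, and KV-cache traffic, so real numbers can land above or below):

```python
# Back-of-envelope decode speed for a memory-bandwidth-bound GPU:
# t/s ≈ bandwidth / weight bytes read per token (≈ full model for dense LLMs).
def est_tokens_per_s(model_gb: float, bandwidth_gbs: float = 380.0) -> float:
    return bandwidth_gbs / model_gb

# Qwen 2.5 7B Instruct at BF16: ~7.6B params * 2 bytes ≈ 15.2 GB of weights.
print(round(est_tokens_per_s(15.2), 1))  # 25.0, matching the ~25 t/s listed
```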
Vulkan via llama.cpp works cross-platform. A SYCL backend is also available through oneAPI. ISV-certified workstation card.
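For reference, llama.cpp's Vulkan backend is a build-time option (the GGML_VULKAN CMake flag), and its llama-server binary exposes an OpenAI-compatible HTTP API. Below is a minimal stdlib client sketch assuming such a server is already running on llama-server's default port; the model filename is illustrative, not a specific recommendation.

```python
# Minimal client for a local llama-server (llama.cpp) built with the Vulkan
# backend. Assumes the server is already running, started along these lines:
#   cmake -B build -DGGML_VULKAN=ON && cmake --build build
#   ./build/bin/llama-server -m qwen2.5-14b-instruct-q8_0.gguf -c 8192 -ngl 99
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",  # llama-server default port
    data=json.dumps({
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```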
| Spec | Value |
| --- | --- |
| Vendor | Intel |
| Architecture | Xe2-HPG (Battlemage) |
| VRAM | 24 GB |
| Memory type | GDDR6 |
| Memory bandwidth | 380 GB/s |
| Compute backend | Vulkan |
| Tier | Workstation |
| Released | 2025 |
| Models (native) | 42 / 70 |
| Models (offload) | 11 / 70 |
Models this GPU runs natively in VRAM (42)
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q2_K · ~98.5 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · Q3_K_M · ~324 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · Q3_K_M · ~25.2 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q3_K_M · ~25.7 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · Q4_K_M · ~20.6 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · Q3_K_M · ~27.2 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q3_K_M · ~27.2 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q3_K_M · ~27.2 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · Q4_K_M · ~247.5 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · Q3_K_M · ~28.5 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · Q5_K_M · ~216.4 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q4_K_M · ~24.8 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · Q5_K_M · ~21.9 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · Q5_K_M · ~21.9 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · Q5_K_M · ~170.8 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · Q5_K_M · ~24.6 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q6_K · ~20.9 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · Q6_K · ~127.4 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · Q8_0 · ~25.7 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · Q8_0 · ~25.9 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · Q8_0 · ~27.1 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · Q8_0 · ~31.1 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · Q8_0 · ~31.1 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · Q8_0 · ~41.3 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · BF16 · ~23.8 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · BF16 · ~23.8 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · BF16 · ~23.8 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · BF16 · ~25 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~26.2 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · FP32 · ~23.8 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · FP32 · ~23.8 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · FP32 · ~25 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~29.7 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~30.6 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~36.5 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · FP32 · ~47.5 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~55.9 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~63.3 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~76.6 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · FP32 · ~95 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~190 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~263.9 t/s
Models that fit with CPU offload (11)
These use system RAM for layers that don't fit in VRAM, so expect much slower inference; see the offload sketch after this list.
- Qwen 3.5 122B-A10B (MoE) · 122B · MMLU-Pro 86.7 · Q2_K · ~28.9 t/s
- Nemotron 3 Super 120B · 120B · MMLU-Pro 83.7 · Q2_K · ~24.1 t/s
- GPT-OSS 120B · 117B · MMLU-Pro 80.7 · Q2_K · ~57.8 t/s
- Llama 4 Scout 109B · 109B · MMLU-Pro 74.3 · Q2_K · ~17 t/s
- GLM-4.5 Air 106B · 106B · MMLU-Pro 81.4 · Q2_K · ~24.1 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro 79.9 · Q2_K · ~24.1 t/s
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 71.1 · Q4_K_M · ~2.3 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q4_K_M · ~2.4 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q4_K_M · ~2.4 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q4_K_M · ~2.4 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q6_K · ~3.3 t/s
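To illustrate the offload path: in llama.cpp the n_gpu_layers setting (the -ngl flag, or the n_gpu_layers parameter in llama-cpp-python) caps how many transformer layers go to VRAM, while the remainder runs on the CPU from system RAM, which is why dense 70B models sit in single-digit t/s above. A minimal sketch, assuming a GPU-enabled llama-cpp-python build; the filename and layer count are illustrative, not measured values.

```python
# Partial CPU offload with llama-cpp-python: layers beyond n_gpu_layers
# are kept in system RAM and computed on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.3-70b-instruct-q4_k_m.gguf",  # illustrative filename
    n_gpu_layers=30,  # assumed split: ~30 of 80 layers in 24 GB with 8k KV cache
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "One-line summary of GGUF?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```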
Too large for this GPU (17)
Frequently asked questions
- How much VRAM does the Intel Arc Pro B60 24GB have?
- The Intel Arc Pro B60 24GB has 24 GB of GDDR6 with 380 GB/s memory bandwidth.
- What is the Intel Arc Pro B60 24GB best for?
- With 24 GB of VRAM, the Intel Arc Pro B60 24GB is well-suited for running 7B–32B models at Q4 with room for context, making it a great all-rounder for local LLM inference.
- What LLMs can the Intel Arc Pro B60 24GB run locally?
- The Intel Arc Pro B60 24GB can run 42 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at BF16, Llama 3.2 3B Instruct at FP32, Llama 3.2 1B Instruct at FP32.
- Can the Intel Arc Pro B60 24GB run Llama 3.3 70B Instruct?
- The Intel Arc Pro B60 24GB can run Llama 3.3 70B Instruct with CPU offload at Q4_K_M quantization, but inference will be slower than native VRAM execution.
- Can the Intel Arc Pro B60 24GB run Qwen 3.6 27B?
- Yes. The Intel Arc Pro B60 24GB runs Qwen 3.6 27B natively in VRAM at Q5_K_M quantization, achieving approximately 21.9 tokens per second.
- Can the Intel Arc Pro B60 24GB run Llama 3.1 8B Instruct?
- Yes. The Intel Arc Pro B60 24GB runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 23.8 tokens per second.