
Intel Arc Pro B60 24GB

The Intel Arc Pro B60 24GB has 24 GB VRAM and 380 GB/s memory bandwidth. It can run 42 of our 70 tracked models natively in VRAM at 8k context.

The Intel Arc Pro B60 is a Battlemage workstation GPU with 24GB of ECC GDDR6, announced in 2025. Its 24GB of VRAM, in a lower-power envelope, fits 13B models at Q8 and 30B models at lower quantizations entirely in memory.

Intel Arc Pro B60 24GB: a 2025 Xe2-HPG (Battlemage) workstation GPU with 24GB of ECC GDDR6, in the lower-power Arc Pro B tier.

13B at Q8 or 30B at Q4 natively. ~6-10 t/s for 7B via Vulkan.
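The "13B at Q8 or 30B at Q4" rule follows from simple arithmetic: weight memory is roughly parameter count times bits per weight, plus a KV cache allowance for context. A minimal sketch, using approximate bits-per-weight values for common GGUF quantizations and an assumed ~1 GB KV-cache budget (both are rough estimates, not measured figures):

```python
# Approximate bits per weight for common quantizations (rule-of-thumb values).
BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.5, "Q8_0": 8.5, "BF16": 16.0}

def fits_in_vram(params_b, quant, vram_gb=24.0, kv_cache_gb=1.0):
    """Estimate whether weights plus a KV-cache allowance fit in VRAM.

    Ignores activation and backend overhead, which add more on top.
    """
    weight_gb = params_b * BPW[quant] / 8  # billions of params -> GB of weights
    return weight_gb + kv_cache_gb <= vram_gb

print(fits_in_vram(13, "Q8_0"))    # 13B at Q8: ~13.8 GB weights -> True
print(fits_in_vram(30, "Q4_K_M"))  # 30B at Q4: ~18 GB weights -> True
print(fits_in_vram(70, "Q4_K_M"))  # 70B at Q4: ~42 GB weights -> False
```

The same arithmetic explains why a 70B model needs CPU offload on this card: its Q4 weights alone are well past 24 GB.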

Vulkan via llama.cpp works cross-platform. SYCL backend available with oneAPI. ISV-certified workstation card.

Vendor: Intel
Architecture: Xe2-HPG (Battlemage)
VRAM: 24 GB
Memory type: GDDR6
Memory bandwidth: 380 GB/s
Compute backend: Vulkan
Tier: Workstation
Released: 2025
Models (native): 42 / 70
Models (offload): 11 / 70
Software: Vulkan backend works in llama.cpp; SYCL backend available with oneAPI toolkit. Primarily a workstation/professional card.
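Single-stream token generation is usually memory-bandwidth-bound: every generated token streams all of the weights once, so a rough ceiling is bandwidth divided by weight size. A sketch of that estimate using the 380 GB/s figure above (an upper bound that ignores compute and overhead, not a benchmark):

```python
def est_tokens_per_sec(params_b, bits_per_weight, bandwidth_gbs=380.0):
    """Bandwidth-bound decode estimate: t/s ~= bandwidth / bytes per token."""
    weight_gb = params_b * bits_per_weight / 8  # GB of weights streamed per token
    return bandwidth_gbs / weight_gb

# 8B model at BF16 (16 bits/weight): 16 GB streamed per token.
print(round(est_tokens_per_sec(8, 16.0), 1))  # ~23.8 t/s
```

This lines up with the throughput figures quoted on this page for models that fit natively; heavier quantization raises the ceiling by shrinking the bytes moved per token.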

Models this GPU runs natively in VRAM (42)

Models that fit with CPU offload (11)

These models place the layers that don't fit in VRAM in system RAM, so expect much slower inference.
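The slowdown can be sketched with the same bandwidth-bound model: layers held in system RAM stream at system-memory speed, which dominates the per-token time. A sketch assuming ~60 GB/s system-memory bandwidth (an illustrative assumption; real DDR4/DDR5 systems vary):

```python
def est_offload_tps(total_gb, vram_gb=24.0, vram_bw=380.0, sys_bw=60.0):
    """Estimate decode t/s when a model spills past VRAM into system RAM.

    sys_bw is an assumed system-memory bandwidth, not a measured value.
    """
    gpu_gb = min(total_gb, vram_gb)          # weights streamed from VRAM
    cpu_gb = max(total_gb - vram_gb, 0.0)    # weights streamed from system RAM
    seconds_per_token = gpu_gb / vram_bw + cpu_gb / sys_bw
    return 1.0 / seconds_per_token

# 70B at Q4_K_M (~42 GB of weights): ~24 GB in VRAM, ~18 GB offloaded.
print(round(est_offload_tps(42.0), 1))  # ~2.8 t/s under these assumptions
```

Even though less than half the model is offloaded, the slow system-RAM path accounts for most of the per-token time, which is why offloaded inference is described as much slower.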

Too large for this GPU (17)

Frequently asked questions

How much VRAM does the Intel Arc Pro B60 24GB have?
The Intel Arc Pro B60 24GB has 24 GB of GDDR6 with 380 GB/s memory bandwidth.
What is the Intel Arc Pro B60 24GB best for?
With 24 GB of VRAM, the Intel Arc Pro B60 24GB is well-suited for running 7B–32B models at Q4 with room for context, making it a great all-rounder for local LLM inference.
What LLMs can the Intel Arc Pro B60 24GB run locally?
The Intel Arc Pro B60 24GB can run 42 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at BF16, Llama 3.2 3B Instruct at FP32, Llama 3.2 1B Instruct at FP32.
Can the Intel Arc Pro B60 24GB run Llama 3.3 70B Instruct?
The Intel Arc Pro B60 24GB can run Llama 3.3 70B Instruct with CPU offload at Q4_K_M quantization, but inference will be slower than native VRAM execution.
Can the Intel Arc Pro B60 24GB run Qwen 3.6 27B?
Yes. The Intel Arc Pro B60 24GB runs Qwen 3.6 27B natively in VRAM at Q5_K_M quantization, achieving approximately 21.9 tokens per second.
Can the Intel Arc Pro B60 24GB run Llama 3.1 8B Instruct?
Yes. The Intel Arc Pro B60 24GB runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 23.8 tokens per second.