
Intel Arc B570 10GB

The Intel Arc B570 10GB has 10 GB VRAM and 380 GB/s memory bandwidth. It can run 25 of our 70 tracked models natively in VRAM at 8k context.
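A "fits natively" check like this boils down to quantized weight size plus KV cache plus runtime overhead staying under 10 GB. The sketch below illustrates the idea; the bytes-per-weight figures, KV-cache formula, and overhead reserve are approximations, not CanItRun's exact accounting.

```python
# Rough VRAM-fit estimate: quantized weights + KV cache + overhead.
# Byte-per-weight figures and the fp16 KV-cache formula are approximations.

BYTES_PER_WEIGHT = {"Q4_K": 0.56, "Q6_K": 0.82, "Q8_0": 1.06, "BF16": 2.0}

def fits_in_vram(params_b, quant, vram_gb=10.0, context=8192,
                 n_layers=32, n_kv_heads=8, head_dim=128, overhead_gb=0.8):
    """Return True if a model of params_b billion weights plausibly fits."""
    weights_gb = params_b * BYTES_PER_WEIGHT[quant]  # billions of params -> GB
    # KV cache: 2 tensors (K and V) * layers * kv_heads * head_dim * context,
    # stored in fp16 (2 bytes per element).
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context * 2 / 1e9
    return weights_gb + kv_gb + overhead_gb <= vram_gb

# Llama 3.1 8B at Q6_K with 8k context on a 10 GB card:
print(fits_in_vram(8.0, "Q6_K"))  # → True
print(fits_in_vram(8.0, "BF16"))  # → False
```

With the default Llama-3.1-8B-style shape (32 layers, 8 KV heads, head dim 128), an 8B Q6_K model lands around 8.4 GB, which matches the Q6_K recommendation in the FAQ below.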

The Intel Arc B570 is the entry-level Battlemage discrete GPU with 10GB of GDDR6. It handles 7B models at Q4 quantization and smaller 3B models at higher quality, making it a budget option for users who want more VRAM than similarly priced competitors offer.

Intel Arc B570 10GB: a 2025 Xe2-HPG (Battlemage) GPU with 10GB of GDDR6 at 380 GB/s, Intel's entry-level Battlemage card for LLM use.

Runs 7B models at Q4-Q8 natively and 3B models at higher quality; expect roughly 5-8 t/s for a 7B model via Vulkan.
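Token generation is largely memory-bandwidth bound: each decoded token reads roughly every weight once, so bandwidth divided by model size gives a theoretical ceiling. A minimal sketch (the ~4.4 GB Q4 7B size is an assumption; real Vulkan throughput sits well below this bound):

```python
# Bandwidth-bound ceiling on decode speed: each generated token streams
# (roughly) the full weight set from memory, so
#   tokens/sec <= bandwidth (GB/s) / model size (GB).
# This is an upper bound only; real Vulkan throughput is far lower.

def decode_ceiling_tps(model_gb, bandwidth_gbs=380.0):
    return bandwidth_gbs / model_gb

# ~4.4 GB Q4 7B model on the B570's 380 GB/s bus:
print(round(decode_ceiling_tps(4.4)))  # → 86
```

The large gap between this ceiling and the observed ~5-8 t/s reflects backend overhead and compute limits rather than memory bandwidth.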

Vulkan via llama.cpp works cross-platform. SYCL backend available with oneAPI. Ollama support limited.

Vendor: Intel
Architecture: Xe2-HPG (Battlemage)
VRAM: 10 GB
Memory type: GDDR6
Memory bandwidth: 380 GB/s
Compute backend: Vulkan
Tier: Consumer
Released: 2025
Models (native): 25 / 70
Models (offload): 22 / 70
Software: Vulkan backend works in llama.cpp; SYCL backend available but requires oneAPI toolkit. Ollama support is limited.
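For reference, building llama.cpp with its Vulkan backend is a short sequence; the `GGML_VULKAN` flag comes from llama.cpp's build documentation, while the model filename and `-ngl` value below are illustrative.

```shell
# Build llama.cpp with the Vulkan backend enabled.
git clone https://github.com/ggml-org/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DGGML_VULKAN=ON
cmake --build llama.cpp/build --config Release -j

# Offload all layers to the Arc B570 and run an 8B Q6_K model at 8k context
# (model path is a placeholder for whatever GGUF you have downloaded):
./llama.cpp/build/bin/llama-cli -m llama-3.1-8b-instruct-Q6_K.gguf -ngl 99 -c 8192 -p "Hello"
```

`-ngl 99` requests all layers on the GPU; llama.cpp clamps it to the model's actual layer count.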

Models this GPU runs natively in VRAM (25)

Models that fit with CPU offload (22)

These models use system RAM for the layers that don't fit in VRAM; expect much slower inference.
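In llama.cpp, the split is controlled by how many transformer layers you place on the GPU (`-ngl`). A sketch of estimating that count, assuming roughly uniform per-layer weight size and an assumed reserve for KV cache and buffers:

```python
# Sketch: how many transformer layers fit in VRAM when the full model doesn't.
# Per-layer size is approximated as total weight size / layer count; the
# reserve held back for KV cache and runtime buffers is an assumption.

def gpu_layers(model_gb, n_layers, vram_gb=10.0, reserve_gb=1.5):
    per_layer_gb = model_gb / n_layers
    usable = max(0.0, vram_gb - reserve_gb)
    return min(n_layers, int(usable / per_layer_gb))

# Llama 3.3 70B at Q2_K is roughly 26 GB across 80 layers, so only a
# fraction of its layers fit on a 10 GB card:
print(gpu_layers(26.0, 80))  # → 26
```

Layers left off the GPU run from system RAM over PCIe, which is why offloaded models are much slower than native ones.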

Too large for this GPU (23)

Frequently asked questions

How much VRAM does the Intel Arc B570 10GB have?
The Intel Arc B570 10GB has 10 GB of GDDR6 with 380 GB/s memory bandwidth.
What is the Intel Arc B570 10GB best for?
With 10 GB of VRAM, the Intel Arc B570 10GB is best for running compact models (1B–8B) at low quantization, suitable for edge inference, prototyping, and lightweight tasks.
What LLMs can the Intel Arc B570 10GB run locally?
The Intel Arc B570 10GB can run 25 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at Q6_K, Llama 3.2 3B Instruct at BF16, Llama 3.2 1B Instruct at FP32.
Can the Intel Arc B570 10GB run Llama 3.3 70B Instruct?
The Intel Arc B570 10GB can run Llama 3.3 70B Instruct with CPU offload at Q2_K quantization, but inference will be slower than native VRAM execution.
Can the Intel Arc B570 10GB run Qwen 3.6 27B?
The Intel Arc B570 10GB can run Qwen 3.6 27B with CPU offload at Q8_0 quantization, but inference will be slower than native VRAM execution.
Can the Intel Arc B570 10GB run Llama 3.1 8B Instruct?
Yes. The Intel Arc B570 10GB runs Llama 3.1 8B Instruct natively in VRAM at Q6_K quantization, achieving approximately 57.9 tokens per second.