
Intel Arc A380 6GB

The Intel Arc A380 6GB has 6 GB VRAM and 186 GB/s memory bandwidth. It can run 19 of our 70 tracked models natively in VRAM at 8k context.

Intel Arc A380 6GB: 2022 Xe-HPG Alchemist with 6GB GDDR6 at 186 GB/s — entry-level Arc discrete GPU.

3B–7B models at Q4 are a tight fit; the 6 GB cap rules out most 13B models. Expect roughly 2–4 t/s for a 7B model via Vulkan.
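A rough way to see why 6 GB is the binding constraint: the quantized weights plus the fp16 KV cache at 8k context must fit under the VRAM cap. A minimal sketch of that arithmetic (the architecture defaults and overhead figure are illustrative, Llama-3-8B-like assumptions, not measured values):

```python
def est_vram_gb(params_b, bits_per_weight, ctx=8192,
                n_layers=32, n_kv_heads=8, head_dim=128,
                overhead_gb=0.6):
    """Rough VRAM estimate: quantized weights + fp16 KV cache + fixed overhead.
    Defaults are illustrative Llama-3-8B-like values (GQA with 8 KV heads)."""
    weights = params_b * 1e9 * bits_per_weight / 8          # bytes of weights
    kv = 2 * n_layers * n_kv_heads * head_dim * ctx * 2     # K and V, fp16
    return (weights + kv) / 1e9 + overhead_gb

# 8B at ~3.9 bits/weight (Q3_K_M-ish): under the 6 GB cap
print(est_vram_gb(8, 3.9) < 6)        # True
# 13B at ~4.8 bits/weight (Q4-ish): weights alone exceed 6 GB
print(est_vram_gb(13, 4.8, n_layers=40) < 6)  # False
```

This is only a fit heuristic; real usage varies with the runtime's scratch buffers and the exact quantization format.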

The Vulkan backend in llama.cpp works cross-platform; the SYCL backend is available but requires the oneAPI toolkit. Ollama support is limited.

Vendor: Intel
Architecture: Xe-HPG (Alchemist)
VRAM: 6 GB
Memory type: GDDR6
Memory bandwidth: 186 GB/s
Compute backend: Vulkan
Tier: Consumer
Released: 2022
Models (native): 19 / 70
Models (offload): 28 / 70

Models this GPU runs natively in VRAM (19)

Models that fit with CPU offload (28)

These models use system RAM for the layers that don't fit in VRAM; expect much slower inference.
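The GPU/CPU split can be sketched numerically: llama.cpp's `-ngl` flag sets how many transformer layers stay on the GPU, and a back-of-envelope cutoff looks like this (the even per-layer weight split and the fixed reserve for KV cache and scratch buffers are simplifying assumptions):

```python
def layers_on_gpu(vram_gb, params_b, bits_per_weight, n_layers,
                  reserve_gb=1.0):
    """Estimate a value for llama.cpp's -ngl flag: how many layers fit
    in VRAM. Assumes weights split evenly per layer and reserves
    reserve_gb for KV cache and scratch (both rough simplifications)."""
    per_layer_gb = params_b * bits_per_weight / 8 / n_layers
    fit = int((vram_gb - reserve_gb) // per_layer_gb)
    return max(0, min(fit, n_layers))

# 13B at ~4.8 bits/weight, 40 layers, on a 6 GB card: partial offload
print(layers_on_gpu(6, 13, 4.8, 40))
# 8B at ~3.9 bits/weight, 32 layers: all layers fit
print(layers_on_gpu(6, 8, 3.9, 32))
```

Layers left off the GPU run from system RAM, which is why offloaded models are much slower than native ones.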

Too large for this GPU (23)

Frequently asked questions

How much VRAM does the Intel Arc A380 6GB have?
The Intel Arc A380 6GB has 6 GB of GDDR6 with 186 GB/s memory bandwidth.
What is the Intel Arc A380 6GB best for?
With 6 GB of VRAM, the Intel Arc A380 6GB is best for running compact models (1B–8B) at low quantization, suitable for edge inference, prototyping, and lightweight tasks.
What LLMs can the Intel Arc A380 6GB run locally?
The Intel Arc A380 6GB can run 19 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at Q3_K_M, Llama 3.2 3B Instruct at Q8_0, Llama 3.2 1B Instruct at BF16.
Can the Intel Arc A380 6GB run Llama 3.3 70B Instruct?
The Intel Arc A380 6GB can run Llama 3.3 70B Instruct with CPU offload at Q2_K quantization, but inference will be slower than native VRAM execution.
Can the Intel Arc A380 6GB run Qwen 3.6 27B?
The Intel Arc A380 6GB can run Qwen 3.6 27B with CPU offload at Q6_K quantization, but inference will be slower than native VRAM execution.
Can the Intel Arc A380 6GB run Llama 3.1 8B Instruct?
Yes. The Intel Arc A380 6GB runs Llama 3.1 8B Instruct natively in VRAM at Q3_K_M quantization, achieving approximately 54.1 tokens per second.
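Throughput figures like these can be sanity-checked against the card's 186 GB/s memory bandwidth: single-stream decode reads roughly the whole model once per generated token, so bandwidth divided by model size gives an upper bound on tokens per second. A hedged sketch (the ~3.9 GB file size assumed for an 8B model at Q3_K_M is an approximation, and real throughput lands below this ceiling):

```python
def decode_tps_ceiling(bandwidth_gbs, model_gb):
    """Memory-bandwidth ceiling for single-stream decode: each generated
    token requires reading (approximately) all model weights once."""
    return bandwidth_gbs / model_gb

# 186 GB/s card, ~3.9 GB quantized model: ceiling of roughly 48 t/s
print(round(decode_tps_ceiling(186, 3.9), 1))
```

This ignores KV-cache reads and compute limits, so treat it as an order-of-magnitude check rather than a prediction.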