CanItRun

Intel Arc 130V (16GB)

The Intel Arc 130V (16GB) has 16 GB VRAM and 137 GB/s memory bandwidth. It can run 31 of our 70 tracked models natively in VRAM at 8k context.

Intel Arc 130V (16GB): 2024 Xe2-LPG Battlemage iGPU (7 cores) with 16GB unified LPDDR5X at 137 GB/s — cut-down Lunar Lake.

Fits 7B at Q8 or 13B at Q4 in unified memory. Expect roughly 2–4 t/s for 7B via Vulkan; inference is memory-bandwidth-constrained.
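The "7B at Q8 / 13B at Q4" rule of thumb follows from simple bytes-per-weight arithmetic. A minimal sketch (the bits-per-weight figures are approximate llama.cpp averages, not exact sizes, and real usage adds KV cache and buffer overhead on top):

```python
# Approximate average bits per weight for common llama.cpp quant formats
# (assumed round numbers for illustration, not exact file sizes).
APPROX_BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5, "BF16": 16.0}

def weight_gb(params_b: float, quant: str) -> float:
    """Approximate weight size in GB for params_b billion parameters."""
    return params_b * APPROX_BPW[quant] / 8

for params, quant in [(7, "Q8_0"), (13, "Q4_K_M")]:
    gb = weight_gb(params, quant)
    print(f"{params}B at {quant}: ~{gb:.1f} GB of weights (fits in 16 GB: {gb < 16})")
```

Both land around 7–8 GB of weights, comfortably inside 16 GB of unified memory with room left for the KV cache.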

Vulkan via llama.cpp works; shares on-package memory with CPU. SYCL backend available with oneAPI. Ollama support limited.

Vendor: Intel
Architecture: Xe2-LPG (Battlemage)
VRAM: 16 GB (unified)
Memory type: LPDDR5X
Memory bandwidth: 137 GB/s
Compute backend: Vulkan
Tier: Integrated
Released: 2024
Models (native): 31 / 70
Models (offload): 0 / 70
Software: Vulkan backend works in llama.cpp. Shares on-package unified memory with the CPU. Ollama support is limited; SYCL backend available with oneAPI.
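"Bandwidth-constrained" has a concrete meaning: during decode, generating each token requires streaming roughly the entire weight set from memory, so memory bandwidth divided by model size gives a hard ceiling on tokens per second. A rough sketch of that roofline estimate:

```python
def decode_tps_upper_bound(bandwidth_gbps: float, model_gb: float) -> float:
    # Each decoded token reads (roughly) all weights from memory once,
    # so tokens/s cannot exceed bandwidth divided by model size.
    return bandwidth_gbps / model_gb

# Intel Arc 130V: 137 GB/s; an 8B model at Q8_0 is ~8.5 GB of weights
# (assumed approximate size for illustration).
print(decode_tps_upper_bound(137, 8.5))  # → ~16 t/s ceiling
```

Real throughput sits at or below this bound; compute limits, memory sharing with the CPU, and backend efficiency all pull the achieved number down.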

Models this GPU runs natively in VRAM (31)

Too large for this GPU (39)

Frequently asked questions

How much VRAM does the Intel Arc 130V (16GB) have?
The Intel Arc 130V (16GB) has 16 GB of LPDDR5X with 137 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
What is the Intel Arc 130V (16GB) best for?
With 16 GB of VRAM, the Intel Arc 130V (16GB) handles smaller models (7B–14B) at Q4–Q5 quantization — ideal for entry-level local LLM experimentation and lightweight inference.
What LLMs can the Intel Arc 130V (16GB) run locally?
The Intel Arc 130V (16GB) can run 31 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at Q8_0, Llama 3.2 3B Instruct at BF16, Llama 3.2 1B Instruct at FP32.
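"Natively in VRAM at 8k context" means the KV cache must fit alongside the weights. A sketch of the KV-cache arithmetic, using Llama 3.1 8B's published architecture (32 layers, 8 KV heads via GQA, head dimension 128 — stated here as assumptions, check the model card):

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   ctx: int, bytes_per_elem: int = 2) -> int:
    # K and V caches: 2 tensors per layer, each of shape [ctx, kv_heads, head_dim].
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem

# Llama 3.1 8B with an FP16 cache at 8k context.
gib = kv_cache_bytes(32, 8, 128, 8192) / 2**30
print(f"{gib:.2f} GiB")  # → 1.00 GiB
```

So an 8B model at Q8_0 needs roughly 8.5 GB of weights plus about 1 GiB of KV cache at 8k context, which is why it clears the 16 GB budget here.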
Can the Intel Arc 130V (16GB) run Llama 3.3 70B Instruct?
The Intel Arc 130V (16GB) does not have enough VRAM to run Llama 3.3 70B Instruct, even at aggressive quantization. You would need more VRAM or a smaller model.
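The same bytes-per-weight arithmetic shows why 70B is out of reach at any common quantization level (bits-per-weight figures are approximate):

```python
# Even the most aggressive common llama.cpp quant (~2.6 bits/weight for Q2_K,
# an approximate figure) puts 70B weights well past 16 GB, before any KV cache.
for bpw, name in [(2.6, "Q2_K"), (4.8, "Q4_K_M"), (8.5, "Q8_0")]:
    gb = 70 * bpw / 8
    print(f"Llama 3.3 70B at {name}: ~{gb:.0f} GB (exceeds 16 GB: {gb > 16})")
```

Even at ~2.6 bits per weight the model alone is around 23 GB, so no quantization choice brings 70B inside this GPU's 16 GB of unified memory.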
Can the Intel Arc 130V (16GB) run Qwen 3.6 27B?
Yes. The Intel Arc 130V (16GB) runs Qwen 3.6 27B natively in VRAM at Q2_K quantization, achieving approximately 15.4 tokens per second.
Can the Intel Arc 130V (16GB) run Llama 3.1 8B Instruct?
Yes. The Intel Arc 130V (16GB) runs Llama 3.1 8B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 17.1 tokens per second.