Intel Arc A310 4GB
The Intel Arc A310 4GB has 4 GB VRAM and 124 GB/s memory bandwidth. It can run 13 of our 70 tracked models natively in VRAM at 8k context.
Intel Arc A310 4GB: 2022 Xe-HPG (Alchemist) card with 4 GB GDDR6 at 124 GB/s, Intel's entry-level Arc card.
Expect a tight fit: 3B models at Q4-Q5 are the practical ceiling, and 4 GB is very limiting for LLM inference. Throughput for 3B models via Vulkan is roughly 49-60 t/s per the estimates below.
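That ballpark follows from the bandwidth figure alone: decode is memory-bound, since each generated token streams every active weight once. A back-of-the-envelope sketch in Python (the ~2.3 GB weight size for a Q5_K_M 3B model is an assumed, not measured, figure):

```python
# Bandwidth-bound ceiling on decode speed: each new token reads every
# active weight once, so t/s <= bandwidth / bytes read per token.
BANDWIDTH_GB_S = 124  # Arc A310, from the spec table below

def max_tokens_per_sec(weights_gb: float) -> float:
    """Upper bound that ignores compute, KV-cache reads, and overhead."""
    return BANDWIDTH_GB_S / weights_gb

# Assumed size: a 3B model at Q5_K_M is roughly 2.3 GB of weights.
print(f"~{max_tokens_per_sec(2.3):.0f} t/s")  # ~54, in line with the ~60 t/s listed below
```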
| Spec | Value |
| --- | --- |
| Vendor | Intel |
| Architecture | Xe-HPG (Alchemist) |
| VRAM | 4 GB |
| Memory type | GDDR6 |
| Memory bandwidth | 124 GB/s |
| Compute backend | Vulkan |
| Tier | Consumer |
| Released | 2022 |
| Models (native) | 13 / 70 |
| Models (offload) | 34 / 70 |
Software: Vulkan backend works in llama.cpp; SYCL backend available but requires oneAPI toolkit. Ollama support is limited.
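As a concrete starting point, here is a minimal llama-cpp-python sketch; it assumes a wheel or local build compiled with the Vulkan backend, and the model path is hypothetical:

```python
from llama_cpp import Llama  # requires a build compiled with the Vulkan backend

llm = Llama(
    model_path="models/llama-3.2-3b-instruct-q5_k_m.gguf",  # hypothetical path
    n_gpu_layers=-1,  # put all layers in the A310's 4 GB of VRAM
    n_ctx=8192,       # the 8k context assumed by the fit estimates on this page
)

out = llm("Q: Name the planets in order from the sun. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

With only 4 GB, `n_gpu_layers=-1` is realistic only for models in the native list below; larger models need partial offload, covered further down.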
Models this GPU runs natively in VRAM (13)
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · Q2_K · ~49.6 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · Q5_K_M · ~48.1 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · Q4_K_M · ~55.1 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · Q5_K_M · ~60.2 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · Q6_K · ~48.8 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · Q6_K · ~58.2 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · Q8_0 · ~62 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · Q8_0 · ~72.9 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · BF16 · ~41.3 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · BF16 · ~50 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · BF16 · ~62 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~62 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~86.1 t/s
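The native-fit check behind this list amounts to: quantized weights plus the KV cache at 8k context must stay under 4 GB. A simplified sketch of that arithmetic (the layer/head constants are Llama 3.2 3B's published shape, and the 2.3 GB weight size is an assumption; real runtimes also reserve compute buffers, so treat this as a lower bound on usage):

```python
# Does a model fit natively? Quantized weights + KV cache must fit in VRAM.
VRAM_GB = 4.0

def kv_cache_gb(n_layers=28, n_kv_heads=8, head_dim=128,
                n_ctx=8192, bytes_per_elem=2):
    """FP16 K and V caches across all layers at full context length.
    Defaults are Llama 3.2 3B's published shape (illustrative)."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem / 1e9

def fits_natively(weights_gb: float) -> bool:
    return weights_gb + kv_cache_gb() < VRAM_GB

print(f"KV cache at 8k: {kv_cache_gb():.2f} GB")  # ~0.94 GB
print(fits_natively(2.3))  # Q5_K_M 3B (~2.3 GB assumed): True, with little headroom
```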
Models that fit with CPU offload (34)
These use system RAM for the layers that don't fit in VRAM, so expect much slower inference; a partial-offload sketch follows this list.
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 71.1 · Q2_K · ~1.3 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q2_K · ~1.3 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q2_K · ~1.3 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q2_K · ~1.3 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q3_K_M · ~5.6 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q3_K_M · ~2.1 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · Q5_K_M · ~16 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · Q5_K_M · ~1.4 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q5_K_M · ~1.4 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · Q5_K_M · ~1.5 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · Q5_K_M · ~1.5 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q5_K_M · ~1.5 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q5_K_M · ~1.5 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · Q5_K_M · ~16 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · Q5_K_M · ~1.6 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · Q6_K · ~12.6 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q6_K · ~1.4 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · Q6_K · ~1.4 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · Q6_K · ~1.4 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · Q6_K · ~9.9 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · Q8_0 · ~1.3 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q8_0 · ~1.4 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · Q8_0 · ~7.8 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · Q8_0 · ~2.1 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · Q8_0 · ~2.1 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · Q8_0 · ~2.2 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · BF16 · ~1.3 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · BF16 · ~1.3 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · BF16 · ~1.7 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · BF16 · ~1.9 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · BF16 · ~1.9 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · BF16 · ~1.9 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~2.1 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · FP32 · ~2 t/s
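Partial offload in llama.cpp is a layer split: you pin however many transformer layers fit into VRAM and the rest run on the CPU from system RAM, which is why dense models above drop to low single-digit t/s while MoE models (which read only a few active experts per token) stay noticeably faster. A hedged llama-cpp-python sketch, with an illustrative, untuned path and layer count:

```python
from llama_cpp import Llama

# With only 4 GB of VRAM, large models keep most layers on the CPU.
llm = Llama(
    model_path="models/qwen2.5-32b-instruct-q5_k_m.gguf",  # hypothetical path
    n_gpu_layers=8,  # illustrative guess: raise until VRAM fills, then back off
    n_ctx=8192,
)
```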
Too large for this GPU (23)
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.5 Air 106B
- GLM-4.6 355B
- GLM-4.6V 106B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the Intel Arc A310 4GB have?
- The Intel Arc A310 4GB has 4 GB of GDDR6 with 124 GB/s memory bandwidth.
- What is the Intel Arc A310 4GB best for?
- With 4 GB of VRAM, the Intel Arc A310 4GB is best for running compact models (1B–8B) at low-bit quantization, suitable for edge inference, prototyping, and lightweight tasks.
- What LLMs can the Intel Arc A310 4GB run locally?
- The Intel Arc A310 4GB can run 13 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.2 3B Instruct at Q5_K_M, Llama 3.2 1B Instruct at BF16, Qwen 2.5 7B Instruct at Q2_K.
- Can the Intel Arc A310 4GB run Llama 3.3 70B Instruct?
- The Intel Arc A310 4GB can run Llama 3.3 70B Instruct with CPU offload at Q2_K quantization, but inference will be slower than native VRAM execution.
- Can the Intel Arc A310 4GB run Qwen 3.6 27B?
- The Intel Arc A310 4GB can run Qwen 3.6 27B with CPU offload at Q6_K quantization, but inference will be slower than native VRAM execution.
- Can the Intel Arc A310 4GB run Llama 3.1 8B Instruct?
- The Intel Arc A310 4GB can run Llama 3.1 8B Instruct with CPU offload at BF16 precision, but inference will be slower than native VRAM execution.