Intel Arc Pro A40 6GB
The Intel Arc Pro A40 6GB has 6 GB of GDDR6 VRAM and 192 GB/s of memory bandwidth. It can run 19 of our 70 tracked models natively in VRAM at 8k context.
Intel Arc Pro A40 6GB: 2022 Xe-HPG workstation GPU with 6GB ECC GDDR6 at 192 GB/s — single-slot Arc Pro entry.
Runs 7B models at Q4 natively, though 6 GB tightly limits model selection. Expect roughly 45 t/s for a 7B at Q4_K_M via Vulkan (see the native list below).
Vulkan via llama.cpp works cross-platform. SYCL backend available with oneAPI. Primarily a professional workstation card.
| Spec | Value |
| --- | --- |
| Vendor | Intel |
| Architecture | Xe-HPG (Alchemist) |
| VRAM | 6 GB |
| Memory type | GDDR6 |
| Memory bandwidth | 192 GB/s |
| Compute backend | Vulkan |
| Tier | Workstation |
| Released | 2022 |
| Models (native) | 19 / 70 |
| Models (offload) | 28 / 70 |
Software: the Vulkan backend works in llama.cpp out of the box; a SYCL backend is also available via the oneAPI toolkit. This is primarily a workstation/professional card.
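As a concrete starting point, here is a minimal sketch using the llama-cpp-python bindings. It assumes the package was installed with the Vulkan backend enabled and that you have a GGUF file small enough for 6 GB; the model path is illustrative, not something this page ships:

```python
# Minimal llama-cpp-python sketch for an Arc card via Vulkan.
# Assumes an install with the Vulkan backend compiled in, e.g.:
#   CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3.1-8b-instruct-q3_k_m.gguf",  # hypothetical local path
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # the 8k context assumed by the fit figures on this page
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

The same script works unchanged with the SYCL build of llama.cpp; only the install-time backend flag differs.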
Models this GPU runs natively in VRAM (19)
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · Q2_K · ~47.8 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · Q3_K_M · ~55.8 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · Q3_K_M · ~55.8 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · Q3_K_M · ~55.8 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · Q4_K_M · ~44.9 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · Q3_K_M · ~61.6 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · Q8_0 · ~48 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · Q8_0 · ~48 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · Q3_K_M · ~117.5 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · Q8_0 · ~60 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · Q8_0 · ~61.9 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · Q8_0 · ~73.8 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · BF16 · ~48 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · BF16 · ~56.5 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · BF16 · ~64 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · BF16 · ~77.4 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · FP32 · ~48 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~96 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~133.3 t/s
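The native-fit calls above reduce to simple arithmetic: quantized weights plus the KV cache for an 8k context must fit under 6 GB with some runtime headroom. A back-of-envelope sketch, assuming Llama 3.1 8B's published geometry (32 layers, 8 KV heads, head dim 128), an effective ~4 bits/weight for Q3_K_M, and a guessed fixed overhead:

```python
GIB = 1024**3

def fits_in_vram(n_params_b, bits_per_weight, n_layers, n_kv_heads,
                 head_dim, ctx_len, vram_gib=6.0, overhead_gib=0.6):
    """Rough fit check: quantized weights + FP16 KV cache + fixed overhead."""
    weights = n_params_b * 1e9 * bits_per_weight / 8
    # K and V tensors, per layer, per KV head, per position, 2 bytes (FP16) each.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * 2
    total_gib = (weights + kv_cache) / GIB + overhead_gib
    return total_gib, total_gib <= vram_gib

# Llama 3.1 8B at roughly 4 bits/weight (Q3_K_M average), 8k context.
total, ok = fits_in_vram(8.0, 4.0, n_layers=32, n_kv_heads=8,
                         head_dim=128, ctx_len=8192)
print(f"{total:.1f} GiB -> {'fits' if ok else 'does not fit'} in 6 GB")
```

This lands around 5.3 GiB, which is why 8B models only fit at Q3/Q4 here while 12B already needs Q2_K.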
Models that fit with CPU offload (28)
These models spill layers into system RAM when VRAM runs out, so expect much slower inference; a rough throughput estimate is sketched after this list.
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 71.1 · Q2_K · ~2 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q2_K · ~2.1 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q2_K · ~2.1 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q2_K · ~2.1 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q4_K_M · ~6.6 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q3_K_M · ~3.2 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · Q5_K_M · ~24.8 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · Q5_K_M · ~2.1 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q5_K_M · ~2.2 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · Q6_K · ~1.8 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · Q5_K_M · ~2.3 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q5_K_M · ~2.3 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q5_K_M · ~2.3 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · Q6_K · ~19.5 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · Q5_K_M · ~2.4 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · Q6_K · ~19.5 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q6_K · ~2.2 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · Q6_K · ~2.2 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · Q6_K · ~2.2 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · Q8_0 · ~12.6 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · Q8_0 · ~2 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q8_0 · ~2.2 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · Q8_0 · ~12 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · Q8_0 · ~3.2 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · Q8_0 · ~3.3 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · Q8_0 · ~3.4 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · BF16 · ~2 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · BF16 · ~2.6 t/s
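Why the offloaded 70Bs land near 2 t/s: single-stream decode is memory-bound, and each generated token streams every weight once, split between 192 GB/s VRAM and much slower system RAM. A rough estimator, assuming ~50 GB/s dual-channel system RAM and ignoring CPU compute cost (both are assumptions, not measurements from this page):

```python
GIB = 1024**3

def offload_tps(model_gib, vram_gib=6.0, gpu_bw=192e9, cpu_bw=50e9):
    """Memory-bound decode estimate: per-token time is bytes read from
    VRAM at gpu_bw plus bytes read from system RAM at cpu_bw."""
    model_bytes = model_gib * GIB
    # Leave ~15% of VRAM for KV cache and runtime buffers (assumed).
    gpu_bytes = min(model_bytes, vram_gib * GIB * 0.85)
    cpu_bytes = model_bytes - gpu_bytes
    seconds_per_token = gpu_bytes / gpu_bw + cpu_bytes / cpu_bw
    return 1 / seconds_per_token

# Llama 3.3 70B at Q2_K is roughly 26 GiB of weights.
print(f"~{offload_tps(26):.1f} t/s")
```

For a 26 GiB model this comes out near 2.1 t/s, matching the table: the handful of VRAM-resident layers barely matter once most of the weights sit behind system RAM bandwidth. The MoE entries run faster because only a few billion active parameters are read per token.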
Too large for this GPU (23)
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.5 Air 106B
- GLM-4.6 355B
- GLM-4.6V 106B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the Intel Arc Pro A40 6GB have?
- The Intel Arc Pro A40 6GB has 6 GB of GDDR6 with 192 GB/s memory bandwidth.
- What is the Intel Arc Pro A40 6GB best for?
- With 6 GB of VRAM, the Intel Arc Pro A40 6GB is best for running compact models (1B–8B), often at low-bit quantization, making it suitable for edge inference, prototyping, and lightweight tasks.
- What LLMs can the Intel Arc Pro A40 6GB run locally?
- The Intel Arc Pro A40 6GB can run 19 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at Q3_K_M, Llama 3.2 3B Instruct at Q8_0, Llama 3.2 1B Instruct at BF16.
- Can the Intel Arc Pro A40 6GB run Llama 3.3 70B Instruct?
- The Intel Arc Pro A40 6GB can run Llama 3.3 70B Instruct with CPU offload at Q2_K quantization, but inference will be slower than native VRAM execution.
- Can the Intel Arc Pro A40 6GB run Qwen 3.6 27B?
- The Intel Arc Pro A40 6GB can run Qwen 3.6 27B with CPU offload at Q6_K quantization, but inference will be slower than native VRAM execution.
- Can the Intel Arc Pro A40 6GB run Llama 3.1 8B Instruct?
- Yes. The Intel Arc Pro A40 6GB runs Llama 3.1 8B Instruct natively in VRAM at Q3_K_M quantization, achieving approximately 55.8 tokens per second.