Intel Arc 140V (32GB)
The Intel Arc 140V (32GB) has 32 GB VRAM and 137 GB/s memory bandwidth. It can run 43 of our 70 tracked models natively in VRAM at 8k context.
The Intel Arc 140V is the flagship Lunar Lake integrated GPU, built on the Xe2 (Battlemage) architecture and sharing 32GB of on-package LPDDR5X with the Core Ultra 200V CPU. At 137 GB/s it's bandwidth-constrained compared to discrete cards, but the 32GB unified pool lets it load 13B models at Q8 or 30B models at Q4 — making it one of the most capable laptop iGPUs for local LLM inference.
Intel Arc 140V (32GB): 2024 Xe2-LPG Battlemage iGPU with 32GB unified LPDDR5X at 137 GB/s — top Lunar Lake AI laptop iGPU.
13B at Q8 or 30B at Q4 fits in unified memory. Roughly 6–9 t/s for dense 7B–32B models via Vulkan; bandwidth-constrained.
llama.cpp's Vulkan backend works and shares on-package memory with the CPU. A SYCL backend is available via oneAPI; Ollama support is limited.
| Vendor | Intel |
| Architecture | Xe2-LPG (Battlemage) |
| VRAM | 32 GB (unified) |
| Memory type | LPDDR5X |
| Memory bandwidth | 137 GB/s |
| Compute backend | Vulkan |
| Tier | Integrated |
| Released | 2024 |
| Models (native) | 43 / 70 |
| Models (offload) | 0 / 70 |
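The throughput figures in the list below follow largely from the 137 GB/s bandwidth: decoding is memory-bound, so each generated token must stream the full weight set from memory. A minimal sketch of that ceiling (the 22 GB weight size for a 32B model at Q5_K_M is an assumed approximation, not a figure from this page):

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound GPU.
# Assumption: generating one token reads every model weight once,
# so tokens/s <= bandwidth / weight size. Real throughput is a bit
# lower due to KV-cache reads and compute overhead.

BANDWIDTH_GBPS = 137.0  # Arc 140V LPDDR5X bandwidth

def decode_ceiling(weights_gb: float) -> float:
    """Upper bound on tokens/second for a dense model of the given weight size."""
    return BANDWIDTH_GBPS / weights_gb

# A 32B model at Q5_K_M is roughly 22 GB of weights (assumed):
print(round(decode_ceiling(22.0), 1))  # ~6.2 t/s, near the listed ~6.5 t/s
```

MoE models (e.g. Qwen3 30B-A3B) beat this ceiling because only their active parameters are read per token, which is why they post much higher t/s than dense models of the same total size.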
Models this GPU runs natively in VRAM (43)
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · Q3_K_M · ~27.2 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · Q2_K · ~11.9 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · Q5_K_M · ~78 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · Q5_K_M · ~6.1 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · Q5_K_M · ~6.2 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · Q5_K_M · ~6.5 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · Q5_K_M · ~6.5 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · Q5_K_M · ~6.5 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · Q5_K_M · ~6.5 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · Q5_K_M · ~78 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · Q5_K_M · ~6.9 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · Q5_K_M · ~78 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · Q5_K_M · ~7.8 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · Q6_K · ~6.2 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · Q6_K · ~6.2 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · Q6_K · ~48.4 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · Q6_K · ~7 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · Q8_0 · ~6.2 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · Q8_0 · ~37.7 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · Q8_0 · ~9.3 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · Q8_0 · ~9.3 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · Q8_0 · ~9.8 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · Q8_0 · ~11.2 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · Q8_0 · ~11.2 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · BF16 · ~7.4 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · BF16 · ~8.6 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · BF16 · ~8.6 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · BF16 · ~8.6 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · BF16 · ~9 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · BF16 · ~9.4 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · FP32 · ~8.6 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · FP32 · ~8.6 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · FP32 · ~9 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~10.7 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~11 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~13.2 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · FP32 · ~17.1 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~20.1 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~22.8 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~27.6 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · FP32 · ~34.3 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~68.5 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~95.1 t/s
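The native/too-large split above comes down to whether weights plus KV cache fit in the 32 GB pool at 8k context. A simplified sketch of that check (the bytes-per-weight values, KV-cache size, and overhead are assumptions, not the exact accounting behind the 43/70 figure):

```python
# Sketch of a "fits in VRAM at 8k context" check. The bytes-per-weight
# table and the flat KV-cache/overhead allowances are simplifying
# assumptions for illustration only.

BYTES_PER_WEIGHT = {"Q4_K_M": 0.56, "Q8_0": 1.06, "BF16": 2.0, "FP32": 4.0}

def fits(params_b: float, quant: str, kv_cache_gb: float = 1.0,
         vram_gb: float = 32.0, overhead_gb: float = 1.5) -> bool:
    """True if weights + KV cache + runtime overhead fit in VRAM (GB)."""
    weights_gb = params_b * BYTES_PER_WEIGHT[quant]
    return weights_gb + kv_cache_gb + overhead_gb <= vram_gb

print(fits(8, "BF16"))     # True  — 16 GB of weights fits easily
print(fits(70, "Q4_K_M"))  # False — ~39 GB of weights alone exceeds 32 GB
```

This is why every 70B+ model lands in the "too large" list: even at 4-bit, a 70B dense model needs close to 40 GB before the KV cache is counted.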
Too large for this GPU (27)
- Llama 3.3 70B Instruct
- Qwen 2.5 72B Instruct
- DeepSeek R1 Distill Llama 70B
- Llama 3.1 70B Instruct
- Mixtral 8x22B Instruct v0.1
- Llama 3.1 405B Instruct
- DeepSeek V3 671B
- DeepSeek R1 671B
- Llama 4 Scout 109B
- Llama 4 Maverick 400B
- Qwen3 235B-A22B (MoE)
- MiniMax M1 456B
- GPT-OSS 120B
- GLM-4.5 355B
- GLM-4.5 Air 106B
- GLM-4.6 355B
- GLM-4.6V 106B
- GLM-4.7 358B
- Qwen 3.5 122B-A10B (MoE)
- MiniMax M2.5 229B
- GLM-5 744B
- MiniMax M2.7 229B
- Nemotron 3 Super 120B
- Kimi K2.6
- GLM-5.1 754B
- DeepSeek V4 Pro 1.6T
- DeepSeek V4 Flash 284B
Frequently asked questions
- How much VRAM does the Intel Arc 140V (32GB) have?
- The Intel Arc 140V (32GB) has 32 GB of LPDDR5X with 137 GB/s memory bandwidth (unified system memory, shared between CPU and GPU).
- What is the Intel Arc 140V (32GB) best for?
- With 32 GB of VRAM, the Intel Arc 140V (32GB) is well-suited for running 7B–32B models at Q4 with room for context, making it a great all-rounder for local LLM inference.
- What LLMs can the Intel Arc 140V (32GB) run locally?
- The Intel Arc 140V (32GB) can run 43 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.1 8B Instruct at BF16, Llama 3.2 3B Instruct at FP32, Llama 3.2 1B Instruct at FP32.
- Can the Intel Arc 140V (32GB) run Llama 3.3 70B Instruct?
- The Intel Arc 140V (32GB) does not have enough VRAM to run Llama 3.3 70B Instruct at 8k context. You would need a GPU with more VRAM, or a more aggressive quantization of the model.
- Can the Intel Arc 140V (32GB) run Qwen 3.6 27B?
- Yes. The Intel Arc 140V (32GB) runs Qwen 3.6 27B natively in VRAM at Q6_K quantization, achieving approximately 6.2 tokens per second.
- Can the Intel Arc 140V (32GB) run Llama 3.1 8B Instruct?
- Yes. The Intel Arc 140V (32GB) runs Llama 3.1 8B Instruct natively in VRAM at BF16 quantization, achieving approximately 8.6 tokens per second.