Intel Data Center GPU Max 1550
The Intel Data Center GPU Max 1550 has 128 GB of VRAM and 3,276 GB/s of memory bandwidth. It can run 58 of our 70 tracked models natively in VRAM at 8k context.
The Max 1550 is Intel's flagship HPC accelerator: a 2022 Xe-HPC (Ponte Vecchio) part with 128 GB of HBM2e at 3,276 GB/s, enough to run 70B models at Q8_0, or 100B+ models at lower-bit quantizations, entirely in GPU memory. It targets HPC and AI training workloads in supercomputing clusters and is programmed through Intel's oneAPI/SYCL stack.
For local inference it offers the highest memory bandwidth of any Intel GPU. SYCL/oneAPI gives the best performance, and the llama.cpp SYCL backend is supported. In practice the card is typically accessed through cloud or HPC allocations rather than owned outright.
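As a sketch of the SYCL path: the commands below build llama.cpp with its SYCL backend and run a model fully offloaded to the Intel GPU. The oneAPI install location and the model filename are placeholders, not values from this page.

```shell
# Build llama.cpp with the SYCL backend using the oneAPI icx/icpx compilers,
# then run inference fully offloaded to the Intel GPU. Paths are illustrative.
source /opt/intel/oneapi/setvars.sh
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j
# -ngl 99 offloads all layers; 128 GB of HBM2e holds a 70B Q8_0 GGUF with room to spare
./build/bin/llama-cli -m ./models/llama-3.3-70b-instruct-q8_0.gguf -ngl 99 -p "Hello"
```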
| Spec | Value |
| --- | --- |
| Vendor | Intel |
| Architecture | Xe-HPC (Ponte Vecchio) |
| VRAM | 128 GB |
| Memory type | HBM2e |
| Memory bandwidth | 3,276 GB/s |
| Compute backend | SYCL (oneAPI) |
| Tier | Datacenter |
| Released | 2022 |
| Models (native) | 58 / 70 |
| Models (offload) | 3 / 70 |
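A rough way to sanity-check the native/offload split above is to compare the quantized weight size against the 128 GB of VRAM. This is a minimal sketch: the bits-per-weight figures and the 1.2x overhead factor (KV cache, activations, and runtime buffers at 8k context) are assumptions, not numbers taken from this page.

```python
# Approximate effective bits per weight for common GGUF quantizations
# (k-quants carry scale metadata, so they sit slightly above their nominal bits).
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q3_K_M": 3.9, "Q5_K_M": 5.7,
                   "Q6_K": 6.6, "Q8_0": 8.5, "BF16": 16.0, "FP32": 32.0}

def fits_in_vram(params_b: float, quant: str,
                 vram_gb: float = 128.0, overhead: float = 1.2) -> bool:
    """Estimate the model's memory footprint and compare it to VRAM.

    params_b: parameter count in billions; overhead covers KV cache etc.
    """
    weight_gb = params_b * BITS_PER_WEIGHT[quant] / 8
    return weight_gb * overhead <= vram_gb

print(fits_in_vram(70, "Q8_0"))   # 70B at Q8_0: ~74 GB of weights, fits
print(fits_in_vram(70, "BF16"))   # 70B at BF16: ~140 GB of weights, too large
```

Under these assumptions a 70B model fits at Q8_0 but not at BF16, which matches the quantization levels listed for the 70B entries below.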
Models this GPU runs natively in VRAM (58)
- DeepSeek V4 Flash 284B · 284B · MMLU-Pro 86.3 · Q2_K · ~842.6 t/s
- Qwen3 235B-A22B (MoE) · 235B · MMLU-Pro 84.4 · Q3_K_M · ~380.9 t/s
- MiniMax M2.5 229B · 229B · MMLU-Pro 84.8 · Q3_K_M · ~838 t/s
- MiniMax M2.7 229B · 229B · MMLU-Pro 86.0 · Q3_K_M · ~838 t/s
- Mixtral 8x22B Instruct v0.1 · 141B · MMLU-Pro 40.0 · Q5_K_M · ~143.5 t/s
- Qwen 3.5 122B-A10B (MoE) · 122B · MMLU-Pro 86.7 · Q6_K · ~439.5 t/s
- Nemotron 3 Super 120B · 120B · MMLU-Pro 83.7 · Q6_K · ~366.2 t/s
- GPT-OSS 120B · 117B · MMLU-Pro 80.7 · Q6_K · ~878.9 t/s
- Llama 4 Scout 109B · 109B · MMLU-Pro 74.3 · Q6_K · ~258.5 t/s
- GLM-4.5 Air 106B · 106B · MMLU-Pro 81.4 · Q8_0 · ~300.3 t/s
- GLM-4.6V 106B · 106B · MMLU-Pro 79.9 · Q8_0 · ~300.3 t/s
- Qwen 2.5 72B Instruct · 72B · MMLU-Pro 71.1 · Q8_0 · ~45.5 t/s
- Llama 3.3 70B Instruct · 70B · MMLU-Pro 68.9 · Q8_0 · ~46.8 t/s
- DeepSeek R1 Distill Llama 70B · 70B · MMLU-Pro 70.0 · Q8_0 · ~46.8 t/s
- Llama 3.1 70B Instruct · 70B · MMLU-Pro 66.4 · Q8_0 · ~46.8 t/s
- Mixtral 8x7B Instruct v0.1 · 46.7B · MMLU-Pro 29.7 · BF16 · ~139.7 t/s
- Command-R 35B · 35B · MMLU-Pro 33.0 · BF16 · ~46.8 t/s
- Qwen 3.5 35B-A3B (MoE) · 35B · MMLU-Pro 84.2 · BF16 · ~600.6 t/s
- Qwen 3.6 35B · 35B · MMLU-Pro 85.2 · BF16 · ~46.8 t/s
- Yi 1.5 34B Chat · 34.4B · MMLU-Pro 37.0 · BF16 · ~47.6 t/s
- Qwen3 32B · 32.8B · MMLU-Pro 65.5 · BF16 · ~49.9 t/s
- Qwen 2.5 32B Instruct · 32.5B · MMLU-Pro 69.0 · BF16 · ~50.4 t/s
- Qwen 2.5 Coder 32B Instruct · 32.5B · MMLU-Pro 50.4 · BF16 · ~50.4 t/s
- DeepSeek R1 Distill Qwen 32B · 32.5B · MMLU-Pro 65.0 · BF16 · ~50.4 t/s
- Nemotron 3 Nano 30B · 32B · MMLU-Pro 78.3 · BF16 · ~600.6 t/s
- Gemma 4 31B · 31B · MMLU-Pro 85.2 · BF16 · ~52.8 t/s
- Qwen3 30B-A3B (MoE) · 30B · MMLU-Pro 61.5 · BF16 · ~600.6 t/s
- Gemma 2 27B Instruct · 27.2B · MMLU-Pro 38.0 · BF16 · ~60.2 t/s
- Gemma 3 27B Instruct · 27B · MMLU-Pro 67.5 · BF16 · ~60.7 t/s
- Qwen 3.6 27B · 27B · MMLU-Pro 86.2 · BF16 · ~60.7 t/s
- Gemma 4 26B (MoE) · 26B · MMLU-Pro 82.6 · FP32 · ~237.1 t/s
- Mistral Small 3.1 24B Instruct · 24B · MMLU-Pro 66.8 · FP32 · ~34.1 t/s
- Mistral Small 22B · 22.2B · MMLU-Pro 49.2 · FP32 · ~36.9 t/s
- GPT-OSS 20B · 21B · MMLU-Pro 67.9 · FP32 · ~225.2 t/s
- Qwen3 14B · 14.8B · MMLU-Pro 61.0 · FP32 · ~55.3 t/s
- Qwen 2.5 14B Instruct · 14.7B · MMLU-Pro 63.7 · FP32 · ~55.7 t/s
- Phi-4 14B Instruct · 14B · MMLU-Pro 70.4 · FP32 · ~58.5 t/s
- Mistral Nemo 12B Instruct · 12.2B · MMLU-Pro 35.6 · FP32 · ~67.1 t/s
- Gemma 3 12B Instruct · 12.2B · MMLU-Pro 60.6 · FP32 · ~67.1 t/s
- Gemma 2 9B Instruct · 9.2B · MMLU-Pro 32.0 · FP32 · ~89 t/s
- Llama 3.1 8B Instruct · 8B · MMLU-Pro 48.3 · FP32 · ~102.4 t/s
- DeepSeek R1 Distill Llama 8B · 8B · MMLU-Pro 41.0 · FP32 · ~102.4 t/s
- Qwen3 8B · 8B · MMLU-Pro 56.7 · FP32 · ~102.4 t/s
- Qwen 2.5 7B Instruct · 7.6B · MMLU-Pro 56.3 · FP32 · ~107.8 t/s
- Mistral 7B Instruct v0.3 · 7.25B · MMLU-Pro 30.0 · FP32 · ~113 t/s
- Gemma 3 4B Instruct · 4B · MMLU-Pro 43.6 · FP32 · ~204.8 t/s
- Gemma 4 E4B · 4B · MMLU-Pro 69.4 · FP32 · ~204.8 t/s
- Phi-3.5 Mini Instruct · 3.8B · MMLU-Pro 47.4 · FP32 · ~215.5 t/s
- Llama 3.2 3B Instruct · 3.2B · MMLU-Pro 24.0 · FP32 · ~255.9 t/s
- Qwen 2.5 3B Instruct · 3.1B · MMLU-Pro 32.4 · FP32 · ~264.2 t/s
- Gemma 2 2B Instruct · 2.6B · MMLU-Pro 17.8 · FP32 · ~315 t/s
- Gemma 4 E2B · 2B · MMLU-Pro 60.0 · FP32 · ~409.5 t/s
- SmolLM2 1.7B Instruct · 1.7B · MMLU-Pro 19.0 · FP32 · ~481.8 t/s
- Qwen 2.5 1.5B Instruct · 1.5B · MMLU-Pro 16.8 · FP32 · ~546 t/s
- Llama 3.2 1B Instruct · 1.24B · MMLU-Pro 12.5 · FP32 · ~660.5 t/s
- Gemma 3 1B Instruct · 1B · MMLU-Pro 14.7 · FP32 · ~819 t/s
- Qwen 2.5 0.5B Instruct · 0.5B · MMLU-Pro 10.0 · FP32 · ~1638 t/s
- SmolLM2 360M Instruct · 0.36B · MMLU-Pro 8.0 · FP32 · ~2275 t/s
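The dense-model throughput figures above are roughly consistent with a bandwidth-bound estimate: each generated token streams the full weight set from VRAM once, so tokens/s is approximately bandwidth divided by weight size. This is a back-of-the-envelope sketch; the ~8.5 effective bits per weight for Q8_0 is an assumption, not a figure from this page.

```python
# Estimate single-stream decode throughput for a memory-bandwidth-bound
# dense model: tokens/s ~= memory bandwidth (GB/s) / weight size (GB).
def est_tokens_per_s(params_b: float, bits_per_weight: float,
                     bandwidth_gbs: float = 3276.0) -> float:
    weight_gb = params_b * bits_per_weight / 8
    return bandwidth_gbs / weight_gb

# 70B at ~8.5 effective bits/weight -> roughly 44 t/s, close to the
# ~46.8 t/s listed above for the 70B Q8_0 entries.
print(round(est_tokens_per_s(70, 8.5), 1))
```

MoE models break this rule of thumb: only the active experts are read per token, which is why the 100B+ MoE entries list far higher throughput than same-size dense models would.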
Models that fit with CPU offload (3)
These use system RAM for layers that don't fit in VRAM — expect much slower inference.
Too large for this GPU (9)
Frequently asked questions
- How much VRAM does the Intel Data Center GPU Max 1550 have?
- The Intel Data Center GPU Max 1550 has 128 GB of HBM2e with 3,276 GB/s of memory bandwidth.
- What is the Intel Data Center GPU Max 1550 best for?
- With 128 GB of VRAM, the Intel Data Center GPU Max 1550 is a server-class GPU designed for running large open-weight models with ample context: 70B-class models at Q8_0, and 100B+ models at lower-bit quantizations.
- What LLMs can the Intel Data Center GPU Max 1550 run locally?
- The Intel Data Center GPU Max 1550 can run 58 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at Q8_0, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
- Can the Intel Data Center GPU Max 1550 run Llama 3.3 70B Instruct?
- Yes. The Intel Data Center GPU Max 1550 runs Llama 3.3 70B Instruct natively in VRAM at Q8_0 quantization, achieving approximately 46.8 tokens per second.
- Can the Intel Data Center GPU Max 1550 run Qwen 3.6 27B?
- Yes. The Intel Data Center GPU Max 1550 runs Qwen 3.6 27B natively in VRAM at BF16 quantization, achieving approximately 60.7 tokens per second.
- Can the Intel Data Center GPU Max 1550 run Llama 3.1 8B Instruct?
- Yes. The Intel Data Center GPU Max 1550 runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 102.4 tokens per second.