
Intel Data Center GPU Max 1100

The Intel Data Center GPU Max 1100 has 48 GB of VRAM and 1,229 GB/s of memory bandwidth. It can run 52 of our 70 tracked models natively in VRAM at 8k context.

Compared with the Max 1550, the Intel Data Center GPU Max 1100 offers 48 GB of HBM2e at 1,229 GB/s in a lower-power (300 W) package. It fits 30B–34B models at Q8 or 70B models at Q4 in memory, and its Xe-HPC (Ponte Vecchio) architecture also powers the Aurora supercomputer at Argonne National Laboratory.

Intel Data Center GPU Max 1100: 2022 Xe-HPC Ponte Vecchio with 48GB HBM2e at 1,229 GB/s — mid-tier Intel HPC GPU.

30B–34B models at Q8 or 70B models at Q4 fit in 48 GB. Strong bandwidth for datacenter inference.
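The fit claims above can be sanity-checked with a back-of-the-envelope estimate: model size is roughly parameter count times bits-per-weight divided by 8, plus some headroom for the KV cache and runtime buffers. The bits-per-weight figures and the 2 GB overhead below are rough assumptions, not measured values.

```python
# Rough VRAM-fit estimator: params * bits-per-weight / 8, plus fixed overhead.
# Bits-per-weight values are approximate GGUF averages (assumption).
BPW = {"Q4_K_M": 4.85, "Q8_0": 8.5, "FP16": 16.0}

def model_size_gb(params_b: float, quant: str) -> float:
    """Approximate in-VRAM weight size in GB for params_b billion parameters."""
    return params_b * 1e9 * BPW[quant] / 8 / 1e9

def fits(params_b: float, quant: str, vram_gb: float = 48.0,
         overhead_gb: float = 2.0) -> bool:
    """True if weights plus an assumed KV-cache/buffer overhead fit in VRAM."""
    return model_size_gb(params_b, quant) + overhead_gb <= vram_gb

print(f"70B @ Q4_K_M: {model_size_gb(70, 'Q4_K_M'):.1f} GB, fits: {fits(70, 'Q4_K_M')}")
print(f"32B @ Q8_0:   {model_size_gb(32, 'Q8_0'):.1f} GB, fits: {fits(32, 'Q8_0')}")
print(f"70B @ Q8_0:   {model_size_gb(70, 'Q8_0'):.1f} GB, fits: {fits(70, 'Q8_0')}")
```

By this estimate a 70B model at Q4_K_M lands around 42 GB and squeezes into 48 GB, while the same model at Q8 does not, matching the page's guidance.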

SYCL/oneAPI gives the best performance; llama.cpp's SYCL backend is supported. Typically accessed via cloud or HPC allocations.

Vendor: Intel
Architecture: Xe-HPC (Ponte Vecchio)
VRAM: 48 GB
Memory type: HBM2e
Memory bandwidth: 1,229 GB/s
Compute backend: SYCL
Tier: Datacenter
Released: 2022
Models (native): 52 / 70
Models (offload): 2 / 70
Software: Typically accessed via cloud or HPC allocations. SYCL/oneAPI gives the best performance; llama.cpp's SYCL backend is supported.
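For readers with shell access to a Max 1100 node, a minimal sketch of building llama.cpp with its SYCL backend via oneAPI looks like the following. The oneAPI install path and the model filename are placeholders, not values from this page.

```shell
# Load the oneAPI environment (provides the icx/icpx compilers and SYCL runtime);
# /opt/intel/oneapi is the default install location (assumption for your system).
source /opt/intel/oneapi/setvars.sh

# Configure and build llama.cpp with the SYCL backend enabled.
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j

# Run with all layers offloaded to the GPU (-ngl 99); model path is illustrative.
./build/bin/llama-cli -m ./models/llama-3.1-8b-instruct-q8_0.gguf \
                      -ngl 99 -p "Hello"
```

These are build/run commands rather than a testable program; consult llama.cpp's SYCL backend documentation for the current flag names before relying on them.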

Models this GPU runs natively in VRAM (52)

Models that fit with CPU offload (2)

These use system RAM for layers that don't fit in VRAM — expect much slower inference.

Too large for this GPU (16)

Frequently asked questions

How much VRAM does the Intel Data Center GPU Max 1100 have?
The Intel Data Center GPU Max 1100 has 48 GB of HBM2e with 1,229 GB/s of memory bandwidth.
What is the Intel Data Center GPU Max 1100 best for?
With 48 GB of VRAM, the Intel Data Center GPU Max 1100 is well suited to running 70B-class models at Q4 quantization and large MoE models, with datacenter-grade memory bandwidth for inference.
What LLMs can the Intel Data Center GPU Max 1100 run locally?
The Intel Data Center GPU Max 1100 can run 52 of the 70 open-weight models tracked by CanItRun natively in VRAM at 8k context. Top options include: Llama 3.3 70B Instruct at Q3_K_M, Llama 3.1 8B Instruct at FP32, Llama 3.2 3B Instruct at FP32.
Can the Intel Data Center GPU Max 1100 run Llama 3.3 70B Instruct?
Yes. The Intel Data Center GPU Max 1100 runs Llama 3.3 70B Instruct natively in VRAM at Q3_K_M quantization, achieving approximately 40.8 tokens per second.
Can the Intel Data Center GPU Max 1100 run Qwen 3.6 27B?
Yes. The Intel Data Center GPU Max 1100 runs Qwen 3.6 27B natively in VRAM at Q8_0 quantization, achieving approximately 45.5 tokens per second.
Can the Intel Data Center GPU Max 1100 run Llama 3.1 8B Instruct?
Yes. The Intel Data Center GPU Max 1100 runs Llama 3.1 8B Instruct natively in VRAM at FP32 quantization, achieving approximately 38.4 tokens per second.