
Intel Xe GPU Series For Machine Learning – An Overview


With the release of the Xe GPUs (“Xe”), Intel is now officially a maker of discrete graphics processors. In this article, we provide an overview of the new Xe microarchitecture and its suitability for computing complex AI workloads for machine learning tasks with optimized power consumption (efficiency).

 


 

About Intel Xe GPUs

A GPU, or graphics processing unit, is a key part of a computer that processes several pieces of data simultaneously. This makes it especially important for high-performance tasks such as gaming, video editing, and developing and running machine learning applications.

It has been more than 10 years since Intel’s last attempt to create a dedicated GPU, with prior prototypes ending up scrapped (for example, the Larrabee project in 2008). While Intel’s integrated Intel HD Graphics is built into most modern laptops, the company has lagged behind competitors like Nvidia and AMD when it comes to dedicated GPUs for high-end gaming or machine learning tasks. With the Xe GPU family, Intel has decided to build its own dedicated graphics cards, since its integrated GPUs aren’t built for gaming or ML tasks.

The Xe series of GPUs consists of several micro-architectures that together cover low- to high-performance needs. At present, the Intel Xe GPU family consists of 4 micro-architectures, each with a separate core configuration. The Xe architecture variants differ significantly from each other, and each is optimized for a specific purpose.

Originally scheduled for mid-2020, the release was delayed to Q1 of 2021 and finally to Q1 of 2022. The pricing is still subject to speculation.

Xe-LP: Integrated/Low power

The Intel Processor Graphics Xe-LP GPU is a low-power unit designed for entry-level discrete graphics. The Xe-LP has a significantly smaller form factor compared to the other Xe architectures. Laptops powered by this Intel iGPU have already been introduced in the market.

Xe-HP: High performance

The Xe-HP microarchitecture is a high-performance variant of the Xe series of GPUs. This variant can be used in data centers and offers multi-tile scalability. The Xe-HPC and Xe-HPG variants below are both more evolved versions of the Xe-HP.

Xe-HPG: Enthusiast/Gaming

The Xe-HPG is a high-performance micro-architecture designed for enthusiast and high-performance gaming.

Xe-HPC: Datacenter

The Intel Xe-HPC variant is targeted towards high-performance computing needs. This may include large-scale research or AI training applications on supercomputers and other high-performance computing clusters.

 


Key Features Of Intel Xe GPU

The new generation of Intel GPUs is designed to provide high performance for AI workloads and a better gaming experience, along with greater speed to make design and app development tasks easier. The key features of the Intel Xe GPU series include the following.

Faster Gaming Speed

The Intel Xe GPUs support faster and more immersive gaming with up to 1080p 60 fps support. However, the GPUs rank in the mid-range when compared to high-end Nvidia GPUs.

Smoother Video Streaming

The latest GPU technology used in the Intel Xe series ensures super-smooth video streaming to take your viewing experience up a notch.

AI Matrix Engine

An Xe-core contains vector and matrix arithmetic logic units, named vector and matrix engines. The low-power AI matrix engine is optimized for design, development, and export tasks, even when dealing with large and complex files. In addition, the encoding performance has been significantly improved.

The Intel Xe GPUs are optimized for use with the OpenVINO inferencing engine. In combination with OpenVINO, the Xe GPUs achieve high ML performance levels at a much better cost-efficiency compared to other AI computing platforms.
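As a simple illustration, the following minimal sketch runs a model on an Intel GPU through OpenVINO’s Python API. It assumes a network already converted to OpenVINO’s IR format; the path model.xml is a placeholder, and the dummy input stands in for real data:

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # placeholder path to an IR model

    # "GPU" targets the first Intel GPU (integrated or discrete) the runtime finds.
    compiled_model = core.compile_model(model, device_name="GPU")

    input_layer = compiled_model.input(0)
    output_layer = compiled_model.output(0)

    # Run a single synchronous inference request on dummy input data.
    dummy_input = np.random.rand(*input_layer.shape).astype(np.float32)
    result = compiled_model([dummy_input])[output_layer]
    print(result.shape)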

Better Energy Efficiency

The Intel Xe GPU Series features a low-power architecture that ensures a longer battery life, even when you are multitasking or running AI workloads on portable devices such as laptops.

 

Is Intel Xe Graphics Good for Machine Learning and Deep Learning?

GPUs are important for machine learning and deep learning because they can simultaneously process multiple pieces of data required for training the models. This makes the process easier and less time-consuming.

The new generation of GPUs by Intel is designed to better address issues related to performance-demanding tasks such as gaming, machine learning, artificial intelligence, and so on. While at present Intel has only introduced GPUs based on the Xe-LP micro-architecture framework, it is expected to soon roll out more advanced graphics processors geared towards higher performance workloads.

The Iris Xe Max is the first discrete graphics processing unit introduced by Intel for PCs. It is based on the Xe architecture, the Xe-LP micro-architecture to be exact. But is it good for machine learning or deep learning? Well, it surely is.

These GPUs are available in modern thin-and-light laptops designed for creators, combining the power and performance of the 11th Gen Intel Core processors, the Iris Xe iGPU, and the Iris Xe Max discrete graphics processor.

Intel added a technology called Deep Link to allow intelligent power-sharing between the CPU and GPU of the computer, boosting its performance for machine learning tasks. Hence, the accelerated performance processes AI workloads much faster than processors with comparable energy consumption.

Turning to the GPU architecture itself: the Xe-LP has 96 EUs (execution units) and larger cache sizes. In addition, it can operate at higher frequencies and supports running two simultaneous execution contexts. All of the above benefit its efficiency and performance when dealing with the heavy workloads of machine learning tasks.

The addition of the Iris Xe Max GPU therefore gives these laptops a better chance at any GPU-intensive activity, including running AI and machine learning models. It won’t be an exaggeration to say that the Intel Xe GPU Series is built to make machine learning easier.
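As a quick sanity check, the following minimal sketch (assuming OpenVINO’s Python API is installed) lists the devices visible to the runtime and prints the GPU’s full name, which should identify an Iris Xe or Iris Xe Max device:

    from openvino.runtime import Core

    core = Core()
    # List the devices the OpenVINO runtime can see, e.g. ['CPU', 'GPU'].
    print("Available devices:", core.available_devices)

    if "GPU" in core.available_devices:
        # Print the full device name of the detected Intel GPU.
        print(core.get_property("GPU", "FULL_DEVICE_NAME"))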

 

Intel Xe With OpenVINO for Deep Learning Acceleration

Intel’s OpenVINO Toolkit consists of a range of different development and deployment tools for deep learning inference applications. Until 2020, this toolkit included a low-precision runtime with support for CPU edge devices only. But, with a 2021 update, the low-precision inference runtime was extended to Intel’s Gen12 GPUs, i.e., those based on the Xe architecture. This brings further optimizations for deep learning tasks.

Another major advantage of OpenVINO is its multi-device inference support. Compared to single-device execution (CPU only, for example), this increases throughput by spreading work across multiple devices. Additionally, with multiple devices available (CPU and GPU), it is possible to run multiple deep learning inference requests concurrently. This ensures full utilization of the system, leading to better performance.
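To illustrate, here is a minimal sketch of multi-device inference with OpenVINO’s MULTI plugin; the model path is a placeholder, and the request count of four is an arbitrary choice for the example:

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # placeholder path to an IR model

    # "MULTI:GPU,CPU" spreads inference requests across both devices;
    # the GPU is listed first, so it is preferred when both are free.
    compiled_model = core.compile_model(model, device_name="MULTI:GPU,CPU")

    input_layer = compiled_model.input(0)
    dummy_input = np.random.rand(*input_layer.shape).astype(np.float32)

    # Several asynchronous requests keep the CPU and GPU busy at the same time.
    requests = [compiled_model.create_infer_request() for _ in range(4)]
    for request in requests:
        request.start_async({0: dummy_input})
    for request in requests:
        request.wait()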

 

Conclusion and Outlook

In short, the Intel Xe GPU series is a set of microarchitectures, ranging from the integrated and low-power variant (Xe-LP) to high-performance gaming/enthusiast (Xe-HPG), datacenter/high performance (Xe-HP), and high-performance computing (Xe-HPC). The hardware is optimized and accelerated with the use of Intel’s OpenVINO Toolkit.

The successor to the Xe has been announced as Xe2 and is currently under development.

What’s Next?

Read more articles about AI hardware, deep learning, and machine learning:

Get in touch with our team at viso.ai. We provide next-gen technology that leverages Intel Edge AI to develop and operate computer vision applications. Viso has joined the global Intel Partner Alliance.

 
