
AI Hardware: Overview of Edge Machine Learning Inference in 2024


With the growing demand for real-time deep learning workloads, today’s standard cloud-based Artificial Intelligence approach cannot meet the bandwidth, data privacy, and low-latency requirements of many applications. Hence, Edge Computing technology is needed to move AI tasks to the edge. As a result, recent Edge AI trends drive the need for specialized AI hardware for on-device machine learning inference.

Computer vision and artificial intelligence are transforming IoT devices at the edge. In this article, you will learn about specialized AI hardware, also called AI accelerators, created to accelerate data-intensive deep learning inference on edge devices in a cost-effective way. In particular, you will learn about:

  1. Machine learning inference (Basics)
  2. The need for specialized AI hardware
  3. List of the most popular AI accelerators in 2024

Machine Learning Inference at the Edge

AI inference is the process of taking a neural network model, generally trained with deep learning, and deploying it onto a computing device (Edge Intelligence). This device then processes incoming data (usually images or video) to look for and identify whatever pattern it has been trained to recognize.

While deep learning inference can be carried out in the cloud, the need for Edge AI is growing rapidly due to bandwidth, privacy concerns, or the need for real-time processing.

Installing a low-power computer with an integrated AI inference accelerator close to the source of data results in much faster response times and more efficient computation. In addition, it requires less internet bandwidth and graphics power. Compared to cloud inference, inference at the edge can potentially reduce the time for a result from a few seconds to a fraction of a second.
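To make this concrete, below is a minimal sketch of on-device inference with TensorFlow Lite, a runtime commonly used on edge devices; the model file name and the dummy input are illustrative placeholders, not part of any specific product mentioned in this article.

```python
import numpy as np
import tensorflow as tf  # on constrained devices, the lighter tflite_runtime package can replace full TensorFlow

# Load a detection model exported to TFLite (file name is a placeholder)
interpreter = tf.lite.Interpreter(model_path="person_detector.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one dummy frame shaped and typed like the model's expected input
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)

interpreter.invoke()  # runs entirely on the device, no cloud round trip
detections = interpreter.get_tensor(output_details[0]["index"])
```

In a real deployment, the dummy frame would be replaced by camera frames resized to the model's input resolution.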

People Detection with Edge AI Inference, here with privacy-preserving Face Blur

The Need for Specialized AI Hardware

Today, enterprises are extending analytics and business intelligence closer to the points where data is generated. Edge intelligence solutions place the computing infrastructure closer to the source of incoming data. This also places them closer to the systems and people who need to make data-driven decisions in real-time. In short, the AI model is trained in the cloud and deployed on the edge device.

Especially in computer vision, workloads are heavy and the tasks to be computed are highly data-intensive. Therefore, using AI hardware acceleration for edge devices has many advantages, the main one being speed:

  1. Speed and performance. By processing data closer to the source, edge computing greatly reduces latency. The end result is higher speeds, enabling real-time use cases.
  2. Better security practices. Critical data does not need to be transmitted across different systems. User access to the edge device can be very restricted.
  3. Scalability. Edge devices are endpoints of an AI system that can grow without performance limitations. This allows businesses to start small and with minimal costs. The development of cloud-based technology and edge computing has made it easier than ever for businesses to scale their operations.
  4. Reliability. Edge computing distributes processing, storage, and applications across various devices, making it difficult for any single disruption to take down the network (cyberattacks, DDoS attacks, power outages, etc.).
  5. Offline capabilities. An edge-based system is able to operate even with limited network connectivity, a crucial factor for mission-critical systems.
  6. Better data management. Fewer bottlenecks through distributed management of edge nodes. Only processed data of high quality is sent to the cloud.
  7. Privacy. Sensitive data can be processed locally and in real-time without streaming it to the cloud.

AI accelerators can greatly increase the on-device inference or execution speed of an AI model and can also be used to execute special AI-based tasks that cannot be conducted on a conventional CPU.

Most Popular Edge AI Hardware Accelerators

With AI becoming a key driver of edge computing, the combination of hardware accelerators and software platforms is becoming important to run models for inference. NVIDIA Jetson, Intel Movidius Myriad X, and Google Coral Edge TPU are popular options available to accelerate AI at the edge.

1.) VPU: Vision Processing Unit

Vision Processing Units allow demanding computer vision and edge computing AI workloads to be conducted with high efficiency. VPUs achieve a balance of power efficiency and compute performance.

One of the most popular examples of a VPU is the Intel Neural Compute Stick 2 (NCS2), which is based on the Intel Movidius Myriad X VPU. By running programmable computation strategies in parallel with workload-specific hardware acceleration, the Movidius Myriad X creates an architectural environment that minimizes data movement.

The Intel Movidius Myriad X VPU is Intel’s first VPU that features the Neural Compute Engine – a highly intelligent hardware accelerator for deep neural network inference.

The Myriad X VPU is programmable with the Intel Distribution of the OpenVINO Toolkit. Used in conjunction with the Myriad Development Kit (MDK), custom vision, imaging, and deep neural network workloads can be implemented using preloaded development tools, neural network frameworks, and APIs.
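As an illustrative sketch (not taken from Intel's documentation), running a converted model through the OpenVINO Python API might look as follows; the model file names are placeholders, and the "MYRIAD" device name assumes an OpenVINO release that still ships the NCS2/Myriad plugin.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# Read a model converted to OpenVINO IR format (paths are placeholders)
model = core.read_model("face_detector.xml")  # weights expected in face_detector.bin

# "MYRIAD" targets the Myriad X VPU / NCS2; "CPU" works as a fallback without the stick
compiled_model = core.compile_model(model, device_name="MYRIAD")

input_layer = compiled_model.input(0)
output_layer = compiled_model.output(0)

# Run a single inference on a dummy frame, assuming a static input shape
frame = np.zeros(tuple(input_layer.shape), dtype=np.float32)
result = compiled_model([frame])[output_layer]
```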

2.) GPU: Graphics Processing Unit

A GPU is a specialized chip designed for rapid parallel processing, particularly for computer graphics and image processing. One example of devices bringing accelerated AI performance to the edge in a power-efficient and compact form factor is the NVIDIA Jetson device family.

The NVIDIA Jetson Nano development board, for example, allows neural networks to run using the NVIDIA JetPack SDK. In addition to a 128-core GPU and a quad-core ARM CPU, it comes with Nano-optimized Keras and TensorFlow libraries, allowing most neural network backends and frameworks to run smoothly and with little setup.
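As a rough sketch of this workflow (the model file and input size are hypothetical placeholders), a script on a Jetson board could confirm GPU visibility and run a single Keras inference like this:

```python
import tensorflow as tf

# JetPack ships a CUDA-enabled TensorFlow build; confirm the integrated GPU is visible
print(tf.config.list_physical_devices("GPU"))

# Load a Keras model and run one forward pass on a dummy frame (file name is a placeholder)
model = tf.keras.models.load_model("detector.h5")
frame = tf.zeros([1, 224, 224, 3])  # example 224x224 RGB input
prediction = model(frame, training=False)
print(prediction.shape)
```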

With the release of its Xe GPUs (“Xe”), Intel is now also tapping into the market of discrete graphics processors. The Intel Xe GPU is optimized for AI workloads and machine learning tasks while focusing on efficiency. Hence, the different versions of the Intel Xe GPU family aim for state-of-the-art performance at lower power consumption.

3.) TPU: Tensor Processing Unit

A TPU is specialized AI hardware that implements the control and arithmetic logic needed to execute machine learning algorithms, typically by operating on predictive models such as artificial neural networks (ANNs).

The Google Coral Edge TPU is Google’s purpose-built ASIC designed to run AI at the edge. It ships as part of the Coral toolkit, built to enable production deployments with local AI. More specifically, the on-device inference capabilities of the Google Coral TPU allow users to build and power a wide range of on-device AI applications. Core advantages are the very low power consumption, cost-efficiency, and offline capabilities.

Google Coral devices are able to run machine learning frameworks and models (such as TensorFlow Lite, YOLO, R-CNN, etc.) for object detection in video streams from connected cameras, and to perform object tracking tasks.
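As an illustrative sketch using Google's PyCoral library (the compiled model file name is a placeholder), on-device object detection on a Coral accelerator might look like this:

```python
import numpy as np
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

# Load a detection model compiled for the Edge TPU (file name is a placeholder)
interpreter = make_interpreter("ssd_mobilenet_v2_edgetpu.tflite")
interpreter.allocate_tensors()

# Feed one dummy RGB frame resized to the model's input resolution
width, height = common.input_size(interpreter)
frame = np.zeros((height, width, 3), dtype=np.uint8)
common.set_input(interpreter, frame)

interpreter.invoke()
for obj in detect.get_objects(interpreter, score_threshold=0.5):
    print(obj.id, obj.score, obj.bbox)
```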

Google Coral AI Accelerator TPU – USB version (Source: Google Coral)

What’s Next?

Interested in reading more about real-world applications running on AI hardware accelerators?
