With the explosive growth of mobile computing and Internet of Things (IoT) applications, billions of mobile and IoT devices are being connected to the Internet, generating massive amounts of data at the network edge. As a result, the collection of massive volumes of data in cloud data centers incurs extremely high latency and network bandwidth usage.
Therefore, there is an urgent need to push the frontiers of artificial intelligence (AI) to the network edge to fully unleash the potential of big data. Edge AI is the combination of edge computing and AI. In this article, we will cover the following topics:
- What is Edge AI?
- Edge AI meaning and implication
- Why do we need AI at the edge?
- Applications of Edge AI
- AI future at the Edge
What is Edge AI?
Edge AI is the combination of Edge Computing and Artificial Intelligence to run machine learning tasks directly on connected edge devices.
To understand what Edge AI is, we need to look at the technological trends that drive the need to move AI computing to the edge.
Edge AI is driven by Big Data and IoT
Today, in the era of the Internet of Things (IoT), an unprecedented volume of data generated by connected devices needs to be collected and analyzed. This data is produced in real-time and in large quantities, and AI systems are required to make sense of it.
Traditionally, AI is Cloud-based
Initially, AI solutions were cloud-driven due to the need for high-end hardware capable of performing deep learning computing tasks and the ability to effortlessly scale the resources in the cloud. This involves offloading data to external computing systems (Cloud) for further processing, but this worsens latency, leads to increased communication costs, and drives privacy concerns.
What is Edge Computing?
To address the limitations of the cloud, there is a need to move the computing tasks to the edge of the network, closer to where the data is generated. Edge Computing refers to computations being performed as close to data sources as possible instead of on far-off, remote locations.
Hence, edge computing is used to extend the cloud as it is typically implemented in the form of edge-cloud systems, where decentralized edge nodes send processed data to the cloud.
Edge AI to run Machine Learning on Edge Devices
Edge AI, or Edge Intelligence, is the combination of edge computing and AI; it runs AI algorithms that process data locally on hardware, so-called edge devices. Therefore, Edge AI provides a form of on-device AI that takes advantage of rapid response times with low latency, high privacy, increased robustness, and more efficient use of network bandwidth.
The use of Edge AI is driven by emerging technologies such as machine learning, neural network acceleration, and model compression. ML edge computing opens up possibilities for new, robust, and scalable AI systems across multiple industries.
The entire field is very new and constantly evolving. Edge AI is expected to drive the AI future, by moving AI capabilities closer to the physical world.
What is an edge device?
An edge device is either an end device or an edge server that can perform computing tasks on the device itself. Edge devices process the data of connected sensors, for example, cameras that provide a video stream.
Basically, any computer of any form factor can serve as an edge device, from laptops and mobile phones to personal computers, embedded computers, or physical servers. Popular computing platforms for edge computing are x86, x64, and ARM.
For smaller prototypes, edge devices can be single-board computers such as the popular Raspberry Pi, or compact mini PCs such as the Intel NUC series. Such devices integrate all the basic components of a computer (CPU, memory, GPU, USB controller) in a small footprint.
Usually, ML tasks require fairly powerful AI hardware. However, AI models tend to become lighter and more efficient through techniques such as model compression and quantization. A popular object detection algorithm that is particularly lightweight and fast is YOLOv3.
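To make the idea of lighter models concrete, here is a minimal sketch of post-training weight quantization, one common technique for shrinking models so they fit on edge hardware. The layer shape and values are illustrative (not from any real model); the sketch only shows how mapping float32 weights to int8 cuts memory 4x at the cost of a small approximation error.

```python
import numpy as np

# Hypothetical float32 weight matrix of a small model layer (illustrative only)
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((256, 256)).astype(np.float32)

# Post-training quantization: map float32 weights to int8 with a scale factor
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to check the approximation error introduced by quantization
weights_restored = weights_int8.astype(np.float32) * scale
max_error = np.abs(weights_fp32 - weights_restored).max()

print(f"fp32 size: {weights_fp32.nbytes} bytes")  # 262144 bytes
print(f"int8 size: {weights_int8.nbytes} bytes")  # 65536 bytes (4x smaller)
print(f"max reconstruction error: {max_error:.4f}")
```

Real toolchains (e.g., TensorFlow Lite or ONNX Runtime) automate this and also quantize activations, but the size-versus-precision trade-off is the same.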
Advantages of Edge AI
Edge computing brings AI processing tasks from the cloud to near the end devices in order to overcome the intrinsic problems of the traditional cloud, such as high latency, bandwidth costs, and privacy concerns.
Hence, moving AI computations to the network edge opens up new opportunities for AI applications with new products and services.
1. Lower data transfer volume
Data is processed by the edge device, and only a significantly smaller amount of processed data is sent to the cloud. By reducing the traffic across the connection between a small cell and the core network, bandwidth bottlenecks can be prevented and the load on the core network is reduced.
2. Speed for Real-time computing
Real-time processing is a fundamental advantage of Edge Computing. The physical proximity of edge devices to the data sources makes it possible to achieve lower latency which improves real-time data processing performance. It supports delay-sensitive applications and services such as remote surgery, tactile internet, unmanned vehicles, and vehicle accident prevention. A diverse range of services, including decision support, decision-making, and data analysis, can be provided by edge servers in a real-time manner.
3. Privacy and security
Since transferring sensitive user data over networks makes it vulnerable to theft and distortion, running AI at the edge enables keeping the data private. Edge computing makes it possible to guarantee that private data never leaves the local device (on-device machine learning). For the cases where data must be processed remotely, edge devices can be used to discard personally identifiable information before data transfer, thus enhancing user privacy and security. If you want to learn more about data privacy with AI, I recommend checking out our article about privacy-preserving Deep Learning for Computer Vision.
4. High availability
Decentralization and offline capabilities make Edge AI more robust by providing transient services during a network failure or cyber-attacks. Therefore, deploying AI tasks to the edge ensures significantly higher availability and overall robustness needed for mission-critical or production-grade AI applications (on-device AI).
5. Cost advantage
Moving AI processing to the edge is highly cost-efficient because only processed, highly valuable data is sent to the cloud. While sending and storing huge amounts of data is still very expensive, small devices at the edge have become more computationally powerful, following Moore’s Law.
Overall, edge-based ML enables real-time data processing and decision-making without the natural limitations of cloud-based computing. With the growing importance of data privacy and regulatory changes such as GDPR, Edge ML might soon be the only viable way for businesses to use AI in products and services.
Edge AI and 5G
The urgent need for 5G in high-growth areas like fully self-driving cars, real-time virtual reality experiences, and mission-critical applications further drives innovation around edge computing and Edge AI. 5G is the next-generation cellular network that aims to achieve substantial improvements in quality of service, such as higher throughput and lower latency, offering roughly 10x faster data rates than existing 4G networks.
To understand the need for fast data transmission and local on-device computing, consider real-time packet delivery among self-driving cars, which requires an end-to-end delay of less than 10 ms. The minimum end-to-end delay for access to the cloud is greater than 80 ms, which is intolerable for many real-world applications.
Edge computing helps meet the strict latency requirements of 5G applications and reduces energy consumption by around 30-40%, which amounts to up to 5x less energy than accessing the cloud. Together, edge computing and 5G improve network performance to support and deploy real-time AI applications, such as AI-based video analytics, that depend on low-latency data transmission.
Edge Computing vs. Fog Computing
Fog Computing is a term introduced by Cisco, and it is closely related to Edge computing. The concept of Fog computing is based on extending the cloud to be closer to the IoT end-devices with the aim to improve latency and security by performing computations near the network edge.
The main difference between fog and edge computing pertains to where the data is processed: in edge computing, data is processed either directly on the devices to which the sensors are attached or on gateway devices physically very close to the sensors; in the fog model, data is processed further away from the edge, on devices connected using a local area network (LAN).
Deep Learning at the Edge
Performing deep learning tasks typically requires a lot of computational power and a massive amount of data. Low-power IoT devices, such as typical cameras, are continuous sources of data. However, their limited storage and compute capabilities make them unsuitable for the training and inference of deep learning models. Edge AI technology provides a solution by combining Deep Learning and Edge Computing.
Therefore, edge devices or servers are placed near those end devices and used to deploy deep learning models that operate on IoT-generated data. ML edge computing is one of the most important trends for Computer Vision (CV) applications, which involve heavy data such as video streams, and for Natural Language Processing (NLP) applications that require real-time processing.
Edge AI Applications
With Edge AI, it becomes possible to power scalable, mission-critical, and private AI applications. As Edge AI is still a very new technology, there are many more applications to be expected in the near future.
- Smart AI Vision including computer vision applications such as live video analytics to power AI vision systems in multiple industries. Intel developed special co-processors named Vision Processing Units (VPUs) to bring high-performance computer vision applications to edge devices.
- Smart Energy applications such as connected wind farms. A study examined a remote wind farm’s data management and processing costs using a cloud-only system versus a combined edge-cloud system. The wind farm uses several data-producing sensors and devices such as video surveillance cameras, security sensors, access sensors for employees, and sensors on wind turbines. The edge-cloud system turned out to be 36% less expensive than the cloud-only system, while the volume of data required to be transferred was reduced by 96%.
- AI Healthcare applications such as remote surgery and diagnostics, as well as monitoring of patient vital signs are mostly based on edge devices that perform AI at the edge. Doctors can use a remote platform to operate surgical tools from a distance where they feel safe and comfortable.
- Entertainment applications include virtual reality, augmented reality, and mixed reality, such as streaming video content to virtual reality glasses. The size of such glasses can be reduced by offloading computation from the glasses to edge servers near the end device. For example, Microsoft recently introduced HoloLens, a holographic computer built onto a headset for an augmented reality experience. Microsoft aims to design standard computing, data analysis, medical imaging, and gaming-at-the-edge tools using the HoloLens.
- Smart Factories applications, such as smart machines, aim to improve safety and productivity. For example, operators can use a remote platform to operate heavy machines, particularly those located at hard-to-reach and unsafe places, from a safe and comfortable place.
- Intelligent transportation systems, whereby drivers can share or gather information from traffic information centers to avoid vehicles that are in danger or stop abruptly, in a real-time manner to avoid accidents. In addition, unmanned vehicles can sense their surroundings and move safely in an autonomous manner.
Edge AI Software Platforms
With machine learning moving from the cloud to the edge, the complexity increases. While the cloud can rely on APIs, edge AI requires IoT capabilities to manage physical edge devices that need to connect to the cloud (edge-to-cloud).
Therefore, Edge AI software usually involves a cloud-side to orchestrate the edge-side that consists of multiple edge clients that connect to the cloud. The edge-to-cloud connection is important not only to manage a high number of endpoints but also to roll out updates and security patches.
The edge architecture requires offline capabilities, remote deployment and updating, edge device monitoring, and secure data at rest and in transit. On-device machine learning requires AI hardware and optimized edge computing chips to process the data of sensors or cameras efficiently.
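The offline capability mentioned above can be sketched as a small client that buffers results locally while the cloud is unreachable and flushes them in order when the connection returns. The `send` callable below stands in for an actual cloud API; the class and its interface are assumptions for illustration, not part of any specific platform.

```python
from collections import deque

class EdgeClient:
    """Minimal sketch of an edge client with offline buffering (illustrative)."""

    def __init__(self, send, max_buffer=1000):
        self.send = send                        # stand-in for a cloud upload call
        self.buffer = deque(maxlen=max_buffer)  # drop oldest results when full
        self.online = False

    def publish(self, result):
        """Queue a result; deliver immediately if the connection is up."""
        self.buffer.append(result)
        if self.online:
            self.flush()

    def flush(self):
        """Send all buffered results to the cloud in arrival order."""
        while self.buffer:
            self.send(self.buffer.popleft())

    def set_online(self, online):
        self.online = online
        if online:
            self.flush()  # connection restored: drain the backlog

# Usage: results queue while offline, then flush on reconnect
received = []
client = EdgeClient(send=received.append)
client.publish({"frame": 1, "count": 2})  # buffered (offline)
client.publish({"frame": 2, "count": 0})  # buffered
client.set_online(True)                   # reconnect triggers flush
print(len(received))  # 2
```

The bounded buffer is a deliberate choice: an edge device has limited storage, so during a long outage it is usually better to drop the oldest results than to run out of memory.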
At viso.ai, we have built an Edge AI software platform to power Computer Vision applications. If you want to learn more, check out the Edge Computer Vision Platform Viso Suite.
Edge computing is necessary for real-world AI applications because the traditional cloud computing model is not suitable for AI applications that are computationally intensive and require massive amounts of data.
We recommend you read the following articles that cover related topics:
- Read about Edge Intelligence in 2021
- Learn about Privacy-preserving Deep Learning for Computer Vision
- An easy-to-understand guide to Deep Face Recognition
- What you need to know about Self-Supervised Learning
- Examples and Methods of Deep Reinforcement Learning