OpenVINO is a powerful deep learning toolkit developed by Intel that enables optimized neural network inference across multiple hardware platforms. In this article, we discuss the features and benefits of OpenVINO and how it integrates with Viso Suite, the leading computer vision platform, to build and deliver scalable applications.
In particular, you will learn more about:
- What is OpenVINO?
- Why should you use the toolkit?
- How OpenVINO works
- Flagship features of the toolkit
- Use cases and advantages
About us: viso.ai provides Viso Suite, the world’s only end-to-end Computer Vision Platform. Our solution supports OpenVINO out of the box and enables developers and organizations worldwide to develop and deliver all their computer vision applications. Get a demo for your company.
What is OpenVINO?
OpenVINO is a cross-platform deep learning toolkit developed by Intel. The name stands for “Open Visual Inference and Neural Network Optimization.” OpenVINO focuses on optimizing neural network inference with a write-once, deploy-anywhere approach for Intel hardware platforms, and it also includes a post-training optimization tool.
The toolkit is free for use under the Apache License version 2.0 and has two versions:
- OpenVINO toolkit, which is supported by the open-source community, and
- Intel Distribution of OpenVINO toolkit, which is supported by Intel.
Using the OpenVINO toolkit, software developers can select models, including those in popular model formats, and deploy pre-trained deep learning models (YOLOv3, ResNet-50, YOLOv8, etc.) through a high-level C++ Inference Engine API integrated with application logic.
Hence, OpenVINO offers integrated functionalities for expediting the development of applications and solutions that solve several tasks using computer vision, automatic speech recognition, natural language processing (NLP), recommendation systems, machine learning, and more.
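To make this concrete, the same high-level API is also available from Python. The helper below sketches the typical load-compile-infer flow; it assumes the `openvino` Python package (2022.1 or later) is installed, and `model.xml` and the device name are placeholders:

```python
def run_inference(model_xml, inputs, device="CPU"):
    """Load an OpenVINO IR model, compile it for a device, and run one inference.

    Requires the `openvino` package (pip install openvino); it is imported
    lazily so this helper can be defined even where the runtime is absent.
    """
    from openvino.runtime import Core  # OpenVINO Python API (2022.1+)

    core = Core()
    model = core.read_model(model_xml)            # parses .xml topology + .bin weights
    compiled = core.compile_model(model, device)  # e.g. "CPU", "GPU", "AUTO"
    return compiled(inputs)                       # results keyed by output nodes
```

Here `model_xml` points at an Intermediate Representation produced by the Model Optimizer (discussed below), and `inputs` is a list or dict of arrays matching the model’s input shapes.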
OpenVINO Platform for Enterprises
Viso Suite, the world’s only end-to-end computer vision platform, leverages OpenVINO with powerful no-code/low-code capabilities and automated infrastructure. Our platform helps large enterprises worldwide to build, deploy and operate computer vision applications faster.
At viso.ai, we are an AI vision partner of Intel and have integrated the capabilities of OpenVINO as ready-made building blocks into our visual editor. In addition, Viso Suite provides everything around OpenVINO: image annotation, model management, edge device management, automated deployments, zero-trust security, data privacy, and full control over applications and data.
Why Use OpenVINO?
Deep Neural Networks (DNNs) have made considerable advances in many industrial domains in the past few years, bringing the accuracy of computer vision algorithms to a new level. However, deploying such accurate and useful models in production requires adapting them to the target hardware and its computational methods.
OpenVINO makes optimizing DNN models for inference a streamlined, efficient process by integrating various tools, including the ability to read models in popular formats, and by ensuring optimal execution on Intel hardware.
The OpenVINO toolkit is based on the latest generations of Artificial Neural Networks (ANN), such as Convolutional Neural Networks (CNN) as well as recurrent and attention-based networks. For more information on what Artificial Neural Networks (ANN) are all about and how they are incorporated into computer vision, we suggest you read ANN and CNN: Analyzing Differences and Similarities.
The OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware. It maximizes performance and accelerates application development. OpenVINO aims to accelerate AI workloads and speed up time to market using a library of predetermined functions as well as pre-optimized kernels. In addition, other computer vision tools such as OpenCV, OpenCL kernels, and more are included in the OpenVINO toolkit.
The OpenVINO toolkit also provides a streamlined intermediate representation (IR) for efficient optimization and deployment of deep learning models across diverse hardware platforms.
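The IR splits a model into two files: an .xml file describing the network topology and a .bin file holding the binary weights. The real IR schema is much richer, but the toy sketch below illustrates the same idea, metadata in XML with raw weights in a sidecar binary; the file names and the `<net>` stub are illustrative, not the actual schema:

```python
import struct
import tempfile
from pathlib import Path

def save_toy_ir(weights, out_dir):
    """Write weights to a .bin file and a minimal topology stub to a .xml file.

    Mimics only the *shape* of OpenVINO's IR (.xml + .bin pair); the real
    schema describes layers, edges, precisions, and byte offsets into .bin.
    """
    out = Path(out_dir)
    (out / "model.bin").write_bytes(struct.pack(f"{len(weights)}f", *weights))
    (out / "model.xml").write_text(
        f'<net name="toy"><blob precision="FP32" size="{len(weights)}"/></net>'
    )

def load_toy_weights(out_dir):
    """Read the float32 weights back out of the .bin file."""
    raw = (Path(out_dir) / "model.bin").read_bytes()
    return list(struct.unpack(f"{len(raw) // 4}f", raw))

with tempfile.TemporaryDirectory() as d:
    save_toy_ir([0.5, -1.25, 3.0], d)
    restored = load_toy_weights(d)
```

Separating topology from weights lets the runtime memory-map large weight blobs while parsing only a small structural description.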
What Are the Benefits of OpenVINO?
- Accelerate Performance: Expedite computer vision workloads by enabling simple execution across different Intel processors and accelerators, such as CPUs, GPUs/Intel Processor Graphics, VPUs (e.g., the Intel Neural Compute Stick 2 with Myriad X), and FPGAs.
- Streamline Deep Learning Deployment: Utilize Convolutional Neural Network (CNN)-based deep learning functions using one common API in addition to more than 30 pre-trained models and documented code samples. With more than 100 public and custom models, the OpenVINO toolkit streamlines deep learning innovation by providing one centralized method for implementing dozens of deep learning models.
- Extend and Customize: OpenCL (Open Computing Language) Kernels and other tools offer an open, royalty-free standard way to add custom code pieces straight into the workload pipeline, customize deep learning model layers without the burden of framework overheads, and implement parallel programming of various accelerators.
- Innovate Artificial Intelligence: The complete Deep Learning Deployment Toolkit within OpenVINO allows users to extend artificial intelligence within private applications and optimize artificial intelligence “all the way to the cloud” with processes such as the Model Optimizer, Intermediate Representation, nGraph Integration, and more.
- Full Viso Suite Integration (End-To-End): OpenVINO is fully integrated with the enterprise no-code computer vision platform Viso Suite. Viso Suite provides pre-built modules to fetch the video feed of any digital camera (IP cameras, webcams, etc.) and multi-camera support. Visual programming with logic workflows allows fast building and updating of complete computer vision applications that can be deployed to edge devices – all with one platform.
What Can The OpenVINO Toolkit Be Used For?
The toolkit can:
- Deploy computer vision inference on various hardware (more below)
- Import and optimize models from various frameworks such as PyTorch, TensorFlow, etc. (post-training, to accelerate inference)
- Run deep learning models outside of computer vision
- Perform “traditional” computer vision tasks (such as background subtraction)
The toolkit cannot:
- Train a machine learning model (although there are Training Extensions)
- Run “traditional” machine learning outside of computer vision (such as Support Vector Machines) – for these, check out OpenCV
- Interpret the output of the model
How OpenVINO Works on a High-Level
The OpenVINO workflow consists of four main steps:
- Train: A model is first trained in the framework of your choice; training itself happens outside of OpenVINO.
- Model Optimizer: The model is fed to the Model Optimizer, whose objective is to optimize the model and generate an Intermediate Representation (.xml + .bin files) of the model. The models are optimized with techniques such as quantization, freezing, fusion, and more. In this step, pre-trained models are configured according to the framework chosen and then converted with a simple, single-line command. Users can choose from an array of pre-trained models in the OpenVINO Model Zoo, which contains models for every purpose, from object detection to text recognition to human pose estimation.
- Inference Engine: The Intermediate Representation, along with input data, is fed to the Inference Engine. The Inference Engine’s job is to check for model compatibility based on the framework used to train the model as well as the hardware used (otherwise known as the environment). Frameworks supported by OpenVINO include TensorFlow, TensorFlow Lite, Caffe, MXNet, ONNX (including models exported from PyTorch or Apple Core ML), and Kaldi.
- Deployment: The application, along with the optimized model and input data, is deployed to devices. For enterprise-grade solutions, Viso Suite provides complete device management for automated and robust deployment at scale.
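Of the optimizations mentioned in the Model Optimizer step, quantization is among the most impactful for model size and speed. The snippet below is a simplified numpy illustration of symmetric INT8 post-training quantization, not OpenVINO’s actual implementation, which calibrates scales (often per channel) on representative data:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization: map the float range to int8 values.

    Simplified illustration only; assumes the weight tensor is not all zeros.
    """
    scale = np.abs(weights).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Each weight now occupies one byte instead of four, and the reconstruction error is bounded by half a quantization step, which is why accuracy typically drops only slightly.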
The Most Important OpenVINO Features
OpenVINO includes a variety of features specific to the toolkit, a few of which are especially worth mentioning. Below you will find two flagship features we discuss in depth. Bear in mind that these are just a portion of what the toolkit offers, and there are many more not covered in this article.
Intel processors combine strong x86 cores with a wide variety of integrated graphics and other hardware that allow for computation offload. For example, integrated graphics let users move heavy computations to the GPU that is already built into the chip, while keeping the CPU cores free for small, interactive, or low-latency functions.
Artificial intelligence workloads can take advantage of this kind of computation offload by using tools such as those offered by the OpenVINO toolkit. The runtime in the Intel OpenVINO toolkit can run inference tasks on integrated graphics just like on any other supported target (such as the CPU).
One of the signature features of the OpenVINO™ toolkit is “multi-device” execution. The multi-device compatibility allows developers to run inference on a combination of “compute devices” on one system in a transparent way. This methodology enables computer vision creators to maximize inferencing performance.
The multi-device plugin further allows users to take full advantage of the available hardware by combining devices. The multi-device mode uses both the available CPU (Central Processing Unit) and the integrated GPU (Graphics Processing Unit) for complete system utilization.
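Conceptually, multi-device execution keeps several compiled instances of the same model and hands each incoming inference request to an available device. The sketch below is a deliberately simplified, framework-free illustration of that dispatching idea; OpenVINO’s MULTI plugin schedules by actual device readiness rather than simple round-robin, and exposes the feature through a device string such as "MULTI:CPU,GPU":

```python
from itertools import cycle

class ToyMultiDevice:
    """Round-robin dispatcher over several 'devices' running the same model.

    A toy illustration of transparent multi-device inference; the device
    functions below stand in for a model compiled per device.
    """
    def __init__(self, device_fns):
        self._devices = cycle(device_fns.items())

    def infer(self, request):
        # Pick the next device and run the request on it, transparently
        # to the caller, who never names a device explicitly.
        name, fn = next(self._devices)
        return name, fn(request)

pool = ToyMultiDevice({"CPU": lambda x: x * 2, "GPU": lambda x: x * 2})
results = [pool.infer(i) for i in range(4)]
```

The caller sees one logical device; requests are spread across the underlying hardware, which is the essence of the "transparent" execution described above.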
Application Footprint Reduction
Tools within the OpenVINO toolkit, like the Deployment Manager, are easy to use and allow users to rapidly reduce the application footprint, that is, the amount of storage and memory an application occupies on the user’s device. Reducing the application footprint was therefore an important objective during the OpenVINO development process.
Depending on the use case, inferencing can require a substantial amount of memory to execute. To reduce this, recent updates to the OpenVINO toolkit were implemented. They include Custom Compiled Runtimes, Floating-point Model Representation, and decreased model sizes within pre-included libraries. To read more about the specificities of features that allow the OpenVINO toolkit to reduce application footprint, refer to this article written by OpenVINO engineers.
While the OpenVINO toolkit contains all necessary components verified on target platforms, users can also create custom runtime libraries. The open-sourced version of the toolkit allows users to compile runtime with certain modifications that will additionally reduce the size, such as enabling Link Time Optimizations (ENABLE_LTO).
OpenVINO Training Add-On – NNCF
There are various user-built add-ons for OpenVINO, designed for specific tasks and purposes, that are not included in the toolkit download. For example, there is a component for fine-tuning the accuracy of new deep learning models when the pre-packaged techniques do not reach the desired accuracy.
This component is called the Neural Network Compression Framework (NNCF). The supported optimization techniques and models in this add-on come directly from the OpenVINO toolkit.
Features demonstrated and included in the NNCF add-on for the OpenVINO Toolkit are:
- Automatic Model Transformation: The optimization methods transform the deep learning model automatically, so users do not need to modify it by hand.
- Unified API: This refers to the unified API (Application Programming Interface) for methods of optimization. All compression methods are based on specific abstractions introduced within the framework.
- Algorithm Combination: The ability to combine algorithms into the pipelines allows for the application of several algorithms simultaneously and enables the production of one optimized model per fine-tuning stage.
For example, algorithms performing optimization for sparsity and lower precision can be deployed at once by combining two separate algorithms in the pipeline.
- Distributed Training Support: Deep learning model fine-tuning can be organized on a multi-node distributed cluster.
- Uniform Configuration: Optimization methods can be configured in a standardized way through a JSON configuration file, which simplifies the setup of the compression parameters applied to the deep learning model.
- ONNX Export: The add-on can export optimized models to ONNX, the open standard for machine learning interoperability. Such models can then be converted to the OpenVINO Intermediate Representation, discussed earlier in this article, for further inference.
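To illustrate the uniform-configuration and algorithm-combination points, a compression setup is just a JSON document listing the algorithms to apply. The keys below follow the general shape of NNCF configs (an input description plus a "compression" section naming algorithms), but treat them as illustrative rather than a verbatim schema for any particular release:

```python
import json

# Illustrative NNCF-style configuration: two compression algorithms are
# combined in one pipeline, as described in the feature list above.
config = {
    "input_info": {"sample_size": [1, 3, 224, 224]},  # NCHW input shape
    "compression": [
        {"algorithm": "quantization"},        # lower precision ...
        {"algorithm": "magnitude_sparsity"},  # ... combined with sparsity
    ],
}

serialized = json.dumps(config, indent=2)  # what would be written to disk
restored = json.loads(serialized)
```

Because the whole setup lives in one declarative file, swapping or stacking algorithms requires no changes to the training code itself.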
Among the most important and novel advantages of the NNCF framework add-on is that it is able to use automatic model transformation when applying optimization methods. This specific add-on was created by OpenVINO computer vision engineers Alexander Kozlov, Yury Gorbachev, and many others. To read more about what it takes to become a computer vision engineer, we suggest reading Being a Computer Vision Engineer.
OpenVINO Industry Use Cases and Advantages
Intel’s Distribution of the OpenVINO toolkit is built to facilitate and promote the development, creation, and deployment of high-performance computer vision and deep learning inference applications across widely-used Intel platforms.
Use cases can be built for a wide range of industries, from industrial automation and security surveillance to smart city, retail, agriculture, healthcare, utilities and energy, sports, and more. To see more examples and find computer vision ideas, explore our extensive List of Computer Vision Applications and enterprise-grade computer vision solutions you can build and operate using OpenVINO with the application platform Viso Suite.
There are multiple case studies on how AI powered by the OpenVINO toolkit is currently solving real-world problems (person detection, head pose estimation, etc.).
The toolkit has been used for a multitude of purposes, from solutions for industrial manufacturing to city-wide transportation. Some specific examples include:
Workplace Hazards and Personal Protective Equipment Detection
Viso Suite, the computer vision platform, can be leveraged to develop a PPE recognition application that protects against workplace hazards and monitors compliance with safety protocols. The Viso Suite platform utilizes OpenVINO to extend deep learning capabilities and inference to the edge.
The PPE recognition application captures video footage from workspace surveillance, such as CCTV cameras, and uses the footage to identify whether employees are wearing the required PPE for the task at hand. In case of a violation, the application can disperse information to company staff through integrated notification, alerting, and logging features, allowing employees to take prompt action to address the safety hazard.
Manufacturing Defect Detection
Using Viso Suite, manufacturing companies can build and deliver custom AI inspection and defect detection applications. For example, in heavy machinery manufacturing, deep learning applications can detect porosity, casting defects, or defects in welded parts.
To do so, computer vision pipelines use algorithms to analyze video frames and detect quality issues automatically. Such real-time AI image analysis is capable of identifying defects that are not visible to the human eye.
Viso Suite is used by industrial manufacturers to implement automated defect detection solutions that ensure consistent quality control and thereby reduce the risk of defects in their products. Through high-performance computer vision, manufacturers can improve their production efficiency, product quality, customer satisfaction, and operational safety.
Sports Mega Event Security
The Viso Suite Platform is used at major tennis events for an integrated video surveillance system designed to meet the AI video monitoring needs across diverse sites, with mega crowds of more than 1 million visitors.
In order to safeguard players, spectators, venues, and equipment, the organizers needed to deploy AI video surveillance across dozens of cameras and sensors. The OpenVINO toolkit, which supports the delivery of deep learning software and inferencing for video analytics at the edge, was used to implement multiple AI vision applications for person detection, crowd counting, suspicious object recognition, parking, and traffic analysis.
Restaurant Industry Analytics With OpenVINO
Viso Suite can be deployed in the restaurant industry for comprehensive video surveillance. With OpenVINO toolkit integration, restaurants can utilize video analytics at the edge for enhanced safety, security, and operational efficiency within dining establishments. Viso Suite makes it possible to build a number of AI vision applications, including facial recognition for staff access control, occupancy monitoring to adhere to social distancing guidelines, and queue management to improve customer flow and service delivery.
Additionally, OpenVINO and Viso Suite enable real-time analysis of video feeds for various applications such as table occupancy detection, wait time estimation, and food quality monitoring. With AI-driven video analytics, operators can make data-driven decisions to enhance customer experiences, improve operational workflows, and ensure compliance with health and safety regulations.
OpenVINO General Information
The OpenVINO toolkit can be downloaded after filling out the form on this landing page: Download the Intel Distribution of the OpenVINO toolkit here. For help with the download, see the OpenVINO download documentation (a PDF that downloads directly to your computer).
The OpenCL 3.0 Finalized Specification, mentioned in this article, was released on September 30th, 2020, and can be used as part of the OpenVINO toolkit. The complete documentation site can be accessed here: OpenVINO Toolkit Overview.
OpenVINO 2022.1 introduced a new version of the OpenVINO API (API 2.0). Versions of OpenVINO prior to 2022.1 required changes in the application logic when migrating an application from other frameworks, such as TensorFlow, ONNX Runtime, PyTorch, or PaddlePaddle.
OpenVINO 2022 is a comprehensive toolkit for developing applications and solutions based on deep learning tasks, providing high-performance and rich deployment options from edge to cloud. It enables CNN-based and transformer-based deep learning inference on the edge or cloud and supports various execution modes across Intel technologies.
The installation package of OpenVINO 2022 includes two parts, OpenVINO Runtime (core set of libraries to run ML tasks) and OpenVINO Development Tools (set of utilities for working with OpenVINO models). The package can be installed via different methods depending on the user’s needs. New features and enhancements include improved performance and compatibility with more frameworks.
Tutorials and demos are available here to help users get started with OpenVINO 2022.
Get started – Build a Large-Scale Computer Vision Solution with OpenVINO
At viso.ai, we are partners of Intel, the developer of OpenVINO. We power Viso Suite, an end-to-end computer vision platform that provides OpenVINO capabilities out of the box. As an all-in-one solution, Viso Suite provides everything around OpenVINO, such as the integration of any digital camera (surveillance cameras, CCTV cameras, webcams) and robust infrastructure, enabling businesses and enterprises to deliver secure, robust, and scalable Edge AI vision systems dramatically faster and easier:
- Viso Suite provides an all-in-one solution to build, deploy and monitor computer vision systems.
- Use visual programming and low-code/no-code tools with automated infrastructure to deliver computer vision 10x faster.
- Avoid integration hassles and writing code from scratch; explore applications.
Get started: Get in touch with our team of AI experts and request a demo for your organization.
Thanks for reading this overview of the OpenVINO toolkit. If you enjoyed the contents of this article, we suggest you also take a look at: