What is OpenVINO? – The Ultimate Overview


The OpenVINO toolkit is designed for quickly developing a wide range of applications for several industries. In this article, we will provide an overview of the OpenVINO toolkit.

In particular, you will learn more about:

  1. What is OpenVINO?
  2. Why should you use the toolkit?
  3. How OpenVINO works
  4. Flagship features of the toolkit
  5. Use cases and advantages

What is OpenVINO?

OpenVINO is a cross-platform deep learning toolkit developed by Intel. The name stands for “Open Visual Inference and Neural Network Optimization”. OpenVINO focuses on optimizing neural network inference with a write-once, deploy-anywhere approach for Intel hardware platforms.

The toolkit is free for use under the Apache License 2.0 and is available in two versions: the open-source OpenVINO toolkit and the Intel Distribution of the OpenVINO toolkit.

Using the OpenVINO toolkit, software developers can deploy pre-trained deep learning models through a high-level Inference Engine API, available in C++ and Python, integrated with application logic.

Hence, OpenVINO offers integrated functionalities for expediting the development of applications and solutions that solve several tasks using computer vision, automatic speech recognition, natural language processing, recommendation systems, machine learning, and more.

Overview of OpenVINO: Enabling deep learning inference on the edge with a cross-platform toolkit. – Source

Why Use OpenVINO?

Deep Neural Networks (DNNs) have made considerable advances in many industrial domains in the past few years, bringing the accuracy of computer vision algorithms to a new level. However, deploying such accurate and useful models in production requires adapting them to the target hardware and its computational constraints.

OpenVINO allows the optimization of DNN models for inference to be a streamlined, efficient process through the integration of various tools.

The OpenVINO toolkit supports the latest generations of Artificial Neural Networks (ANN), such as Convolutional Neural Networks (CNN) as well as recurrent and attention-based networks. For more information on what Artificial Neural Networks (ANN) are all about and how they are incorporated in computer vision, we suggest you read ANN and CNN: Analyzing Differences and Similarities.

The OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware. It maximizes performance and accelerates application development. OpenVINO aims to accelerate AI workloads and speed up time to market using a library of predefined functions as well as pre-optimized kernels. In addition, other computer vision tools such as OpenCV, OpenCL kernels, and more are included in the OpenVINO toolkit.

 

OpenVINO workflow overview – Source
What Are the Benefits of OpenVINO
  1. Accelerate Performance: Expedite computer vision workloads by enabling simple execution methods across different Intel processors and accelerators such as CPU, GPU/Intel Processor Graphics, VPU, and FPGA.
  2. Streamline Deep Learning Deployment: Utilize Convolutional Neural Network (CNN)-based deep learning functions using one common API in addition to more than 30 pre-trained models and documented code samples. With more than 100 public and custom models, the OpenVINO toolkit streamlines deep learning innovation by providing one centralized method for implementing dozens of deep learning models.
  3. Extend and Customize: OpenCL (Open Computing Language) Kernels and other tools offer an open, royalty-free standard way to add custom code pieces straight into the workload pipeline, customize deep learning model layers without the burden of framework overheads, and implement parallel programming of diverse accelerators.
  4. Innovate Artificial Intelligence: The complete Deep Learning Deployment Toolkit within OpenVINO allows users to extend artificial intelligence within private applications and optimize artificial intelligence “all the way to the cloud” with processes such as the Model Optimizer, Intermediate Representation, nGraph Integration, and more.
What Can OpenVINO Be Used For

The toolkit can

  • Deploy computer vision inference on various hardware
  • Import and optimize models from various frameworks
    (Post-training to accelerate inference)
  • Run deep learning models outside of computer vision
  • Perform “traditional” computer vision tasks, such as background subtraction (a brief sketch follows this list)
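
On the last point, the OpenCV build bundled with the toolkit handles such “traditional” tasks directly. Below is a minimal, hypothetical Python sketch; the video file name is a placeholder, not part of any official sample:

```python
# Hypothetical sketch of a "traditional" CV task using the OpenCV build
# that ships with the OpenVINO toolkit: background subtraction on a video.
import cv2

# MOG2 maintains a per-pixel background model and flags moving foreground
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

cap = cv2.VideoCapture("input.mp4")  # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground mask for this frame
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```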

The toolkit cannot

  • Train a machine learning model (although there are Training Extensions)
  • Run “traditional” machine learning outside of computer vision
    (such as Support Vector Machine)
  • Interpret the output of the model

How OpenVINO Works on a High Level

The OpenVINO workflow consists of four main steps:

  1. Train: A model is first trained in a standard framework such as TensorFlow or PyTorch; training itself happens outside of OpenVINO.
  2. Model Optimizer: The model is fed to the Model Optimizer, whose objective is to optimize the model and generate an Intermediate Representation (.xml + .bin files) of the model. The models are optimized with techniques such as quantization, freezing, fusion, and more. In this step, pre-trained models are configured according to the framework chosen and then converted with a simple, single-line command. Users can choose from an array of pre-trained models in the OpenVINO Model Zoo, which contains models for every purpose, from object detection to text recognition to human pose estimation.
  3. Inference Engine: The Intermediate Representation is fed to the Inference Engine. The inference engine’s job is to check for model compatibility based on the framework used to train the model as well as the hardware used (otherwise known as the environment). Frameworks supported by OpenVINO include TensorFlow, Caffe, MXNet, ONNX (PyTorch, Apple ML), and Kaldi. (A minimal usage sketch follows this list.)
  4. Deployment: The application is deployed to devices.
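
To make these steps concrete, here is a minimal, hypothetical sketch using the Inference Engine Python API as shipped in 2021-era OpenVINO releases; the model file names and the random test input are placeholder assumptions, not part of any official sample:

```python
# Minimal sketch of steps 2-4. The IR (model.xml + model.bin) is assumed to
# have been produced beforehand by the Model Optimizer, e.g. with:
#   mo.py --input_model model.onnx
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Step 3: load the Intermediate Representation into the Inference Engine
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Build a dummy input that matches the network's expected shape (e.g. NCHW)
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape
image = np.random.rand(n, c, h, w).astype(np.float32)  # stand-in for a real image

# Step 4: run inference and inspect the output blob
result = exec_net.infer(inputs={input_name: image})
output_name = next(iter(net.outputs))
print(result[output_name].shape)
```

Note that retargeting the same network to different hardware only requires changing the device_name string, which is the “write-once, deploy-anywhere” idea described above.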

 

The workflow of OpenVINO: Optimize, tune, and run AI inference using the integrated model optimizer and development tools. – Source

The Most Important OpenVINO Features

OpenVINO includes a variety of toolkit-specific features, a few of which are especially worth mentioning. Below you will find two flagship features we’ve decided to discuss in depth. Bear in mind that these features are only a portion of those offered by the toolkit, and there are many more not covered in this article.

Multi-Device Execution

Intel processors pair strong x86 cores with a wide variety of integrated graphics, prime hardware for computation offload. For example, integrated graphics let users move heavy computations to the GPU that is already built into the chip, while reserving the CPU cores for small, interactive, or low-latency functions.

Artificial intelligence workloads can take advantage of this kind of computation offload through tools such as those offered by the OpenVINO toolkit. The runtime in the Intel OpenVINO toolkit can run inference tasks on integrated graphics just as on any other supported target (such as the CPU).

One of the signature features of the OpenVINO™ toolkit is “multi-device” execution. The multi-device compatibility allows developers to run inference on a combination of “compute devices” on one system in a transparent way. This methodology enables computer vision creators to maximize inferencing performance.

The multi-device plugin further allows users to take full advantage of the available hardware: the multi-device mode uses the CPU (Central Processing Unit) and the integrated GPU (Graphics Processing Unit) together for fuller system utilization.
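
As a brief illustration, the multi-device mode is exposed through the same API as a single device. In the 2021-era Python API, a hypothetical sketch looks like the following (model file names are assumptions):

```python
# Hypothetical sketch: the "MULTI" virtual device transparently balances
# inference requests across the listed devices (here integrated GPU, then CPU).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Device priorities are encoded in the device name string; no other code
# changes are needed compared to single-device execution.
exec_net = ie.load_network(network=net, device_name="MULTI:GPU,CPU", num_requests=4)
```

Multiple parallel inference requests are what allow MULTI to actually keep both devices busy, which is why num_requests is raised above the default here.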

OpenVINO integrates with the Intel Neural Compute Stick 2 – AI accelerator
Application Footprint Reduction

Tools within the OpenVINO toolkit, such as the Deployment Manager, make it easy to reduce the application footprint, that is, the amount of disk space and memory an application occupies on the user’s computing device. Reducing the application footprint was an important objective during the OpenVINO development process.

Depending on the use case, inferencing can require a substantial amount of memory to execute. Recent updates to the OpenVINO toolkit address this with Custom Compiled Runtimes, Floating-point Model Representation, and decreased model sizes within the pre-included libraries. To read more about the specifics of the features that allow the OpenVINO toolkit to reduce the application footprint, refer to this article written by OpenVINO engineers.

While the OpenVINO toolkit contains all the necessary components verified on target platforms, users can also create custom runtime libraries. The open-source version of the toolkit allows users to compile the runtime with modifications that further reduce its size, such as enabling Link Time Optimization (ENABLE_LTO).

Reduce the Application Footprint with OpenVINO – Source

OpenVINO Training Add-On – NNCF

There are various user-built add-ons for OpenVINO that are not included in the toolkit download, each targeting specific tasks and purposes. For example, there is a dedicated component for fine-tuning the accuracy of new deep learning models when other available techniques do not achieve the desired accuracy.

This component is called the Neural Network Compression Framework (NNCF). The supported optimization techniques and models in this add-on come directly from the OpenVINO toolkit.

Features demonstrated and included in the NNCF add-on for the OpenVINO Toolkit are listed below (a brief usage sketch follows the list):

  • Automatic Model Transformation: Applying an optimization method requires no user-implemented modifications; NNCF transforms the deep learning model automatically.
  • Unified API: All compression methods are exposed through one unified API (Application Programming Interface), based on common abstractions introduced within the framework.
  • Algorithm Combination: Algorithms can be combined into pipelines, so several optimizations are applied simultaneously and one optimized model is produced per fine-tuning stage.
    For example, sparsity and lower-precision optimization can be applied at once by combining the two algorithms in the pipeline.
  • Distributed Training Support: Fine-tuning of the deep learning model can be organized on a multi-node distributed cluster.
  • Uniform Configuration: Optimization methods are configured in a standardized way through a JSON configuration file, which simplifies the setup of the compression parameters applied to the deep learning model.
  • ONNX Export: Optimized models can be exported to the ONNX format, the open standard for machine learning interoperability. Such models can then be converted to the OpenVINO Intermediate Representation, discussed in the previous section of this article, for further inference.
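
As a rough, hypothetical illustration of the unified API, JSON configuration, and ONNX export described above, the sketch below uses the NNCF PyTorch backend; exact import paths and configuration keys vary between NNCF versions, so treat this as an assumption-laden outline rather than canonical usage:

```python
# Hypothetical NNCF sketch (PyTorch backend): wrap a model for compression-aware
# fine-tuning using a JSON-style config, then export it to ONNX.
import torchvision
from nncf import NNCFConfig
from nncf.torch import create_compressed_model  # path differs in older NNCF versions

model = torchvision.models.resnet18(pretrained=True)

# Uniform configuration: the compression algorithm(s) are declared in JSON form;
# quantization here is a placeholder choice, and algorithms can be combined.
nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
})

# Automatic model transformation: NNCF inserts the compression operations itself.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)

# ... fine-tune compressed_model with an ordinary PyTorch training loop ...

# ONNX export: the optimized model can then be converted to OpenVINO IR.
compression_ctrl.export_model("compressed_model.onnx")
```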

Among the most important and novel advantages of the NNCF framework add-on is that it is able to use automatic model transformation when applying optimization methods. This specific add-on was created by OpenVINO computer vision engineers Alexander Kozlov, Yury Gorbachev, and many others. To read more about what it takes to become a computer vision engineer, we suggest reading Being a Computer Vision Engineer in 2021.

OpenVINO Industry Use Cases and Advantages

Intel’s Distribution of the OpenVINO toolkit is built to facilitate and promote the development, creation, and deployment of high-performance computer vision and deep learning inference applications across widely-used Intel platforms.

Use cases can be built for a wide range of industries, from security surveillance to robotics, retail, AI, healthcare, transportation, and more. To see more examples, explore our extensive List of Computer Vision Applications in 2021.

There are multiple case studies on how AI-powered by the OpenVINO toolkit is currently solving real-world problems. For a complete list of companies currently using OpenVINO, refer to Success Stories: Case Studies by Intel.

People counting application with Object Detection, built using OpenVINO on the Viso Suite Platform

The toolkit has been used for a multitude of purposes, from solutions for industrial manufacturing to city-wide transportation. Some specific examples include:

Workplace Hazards and Spread of Infectious Diseases

Vulcan AI, an artificial intelligence application developer, uses OpenVINO to protect against workplace hazards and infectious disease spread by extending deep learning capabilities and inference to the edge.

The Vulcan AI “WorkSafe solution” captures video footage from workspace surveillance such as CCTV cameras and uses the footage to identify potential safety hazards. It then disseminates information to company staff as issues occur, through integrated notification, alerting, and logging features. These functions allow employees to take urgent action to address the safety hazard.

Porosity Defect Detection

ADLINK is a manufacturing company that designs and manufactures products for embedded computing, test & measurement, and automation applications. ADLINK uses the OpenVINO toolkit to detect porosity defects in robotic arc welds, which are critical to the quality of modern heavy machinery.

According to the case study, ADLINK and Intel “have created an automated weld-defect detection solution” based on various ADLINK software and the OpenVINO toolkit. Using an action recognition model made possible by the toolkit, the porosity detector is capable of automatically detecting porosity defects from video frames, something that is not possible using only the human eye.

Nerve Detection and Improving Workflow

Samsung Medison’s real-time inference of nerve image ultrasounds is powered by the OpenVINO toolkit and helps enhance treatment workflow and improve nerve detection accuracy for anesthesiologists. UGRA, otherwise known as ultrasound-guided regional anesthesia, helps anesthesiologists visualize target structures and deliver safe, accurate anesthesia around those nerves.

The OpenVINO toolkit was specifically used to improve the performance of real-time inference models that detect and identify nerve location while ultrasound scanning is ongoing. This method also helps improve the treatment workflow of UGRA practitioners.

FIFA World Cup Security Platform

The Axxon Intellect PSIM platform, powered by Intel Computer Vision products, specifically the OpenVINO toolkit, was the basis for an integrated video surveillance system designed to meet the monitoring needs of Russia’s Ministry of Internal Affairs (MIA) across diverse World Cup sites.

In order to safeguard players, spectators, venues, and equipment at the Russian World Cup, the “organizers needed to deploy video surveillance across dozens of cameras and sensors”. The OpenVINO toolkit, which supports the delivery of deep learning software and inferencing for video use cases at the edge, was used extensively.

What’s Next?

The OpenVINO toolkit can be downloaded after filling out the form on this landing page: Download the Intel Distribution of the OpenVINO toolkit here. For help downloading, visit the OpenVINO download documentation, a PDF that is downloaded directly onto your computer.

The OpenCL 3.0 Finalized Specification, mentioned in this article, was released on September 30th, 2020, and can be used as part of the OpenVINO toolkit. The complete documentation site can be accessed here: OpenVINO Toolkit Overview.

Thanks for reading this overview on the OpenVINO toolkit. If you enjoyed the contents of this article, we suggest you also take a look at:

  1. Being a Computer Vision Engineer in 2021
  2. Video Analytics: Ultimate Overview in 2021
  3. AI Hardware: Overview of Edge Machine Learning Inference in 2021