
ONNX Explained: A New Paradigm in AI Interoperability


ONNX is an open standard for representing computer vision and machine learning models. The ONNX standard provides a common format enabling the transfer of models between different machine learning frameworks such as TensorFlow, PyTorch, MXNet, and others.

 

ONNX (Open Neural Network Exchange) is an open-source format that facilitates interoperability between different deep learning frameworks for simple model sharing and deployment.

 

ONNX (Open Neural Network Exchange) is an open-source format. It promotes interoperability between different deep learning frameworks for simple model sharing and deployment.

This interoperability is crucial for developers and researchers. It allows models to be used across different frameworks without retraining or significant modification.

Key Aspects of ONNX:

  • Format Flexibility. ONNX supports a wide range of model types, including deep learning and traditional machine learning.
  • Framework Interoperability. Models trained in one framework can be exported to ONNX format and imported into another compatible framework. This is particularly useful for deployment or for continuing development in a different environment.
  • Performance Optimizations. ONNX models can benefit from optimizations available in different frameworks and efficiently run on various hardware platforms.
  • Community-Driven. Being an open-source project, it’s supported and developed by a community of companies and individual contributors. This ensures a constant influx of updates and improvements.
  • Tools and Ecosystem. Numerous tools are available for converting, visualizing, and optimizing ONNX models. Developers can also find libraries for running these models on different AI hardware, including GPUs and CPUs.
  • Versioning and Compatibility. ONNX is regularly updated with new versions. Each version maintains a level of backward compatibility to ensure that models created with older versions remain usable.

This standard is particularly beneficial in scenarios where flexibility and interoperability between different tools and platforms are essential. For instance, a model trained in a research setting might be exported to ONNX for deployment in a production environment built on a different framework.
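To make this concrete, here is a minimal sketch of that workflow: exporting a trained PyTorch model to the ONNX format. The specific model (a torchvision ResNet-18) and the file name are illustrative placeholders, not part of any particular deployment.

```python
# Minimal sketch: export a PyTorch model to ONNX so another framework or
# runtime can consume it. Model choice and file name are illustrative.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stands in for any trained nn.Module
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input used to trace the graph
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",                        # output file (placeholder name)
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
```

The resulting .onnx file can then be consumed by ONNX Runtime, TensorRT, or any of the other compatible tools discussed below.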

 

Real-World Applications of ONNX

We can view ONNX as a sort of Rosetta stone of artificial intelligence (AI). It offers unparalleled flexibility and interoperability across various frameworks and tools. Its design enables seamless model portability and optimization, making it a valuable asset within and across various industries.

Below are some of the applications in which ONNX has already begun to make a significant impact. Real-world use cases are broad, ranging from facial recognition and pattern recognition to object recognition:

  • Healthcare. In medical imaging, ONNX facilitates the use of deep learning models for diagnosing diseases from MRI or CT scans. For instance, a model trained in TensorFlow for tumor detection can be converted to ONNX for deployment in clinical diagnostic tools that operate on a different framework.
  • Automotive. ONNX aids in developing autonomous vehicle systems and self-driving cars. It allows for the integration of object detection models into real-time driving decision systems. It also gives a level of interoperability regardless of the original training environment.
  • Retail. ONNX enables recommendation systems trained in one framework to be deployed in diverse e-commerce platforms. This allows retailers to enhance customer engagement through personalized shopping experiences.
  • Manufacturing. In predictive maintenance, ONNX models can forecast equipment failures. Such models can be trained in one framework and deployed in factory systems running another, ensuring operational efficiency.
  • Finance. Fraud detection models developed in a single framework can be easily transferred for integration into banking systems. This is a vital component in implementing robust, real-time fraud prevention.
  • Agriculture. ONNX assists in precision farming by integrating crop and soil models into various agricultural management systems, aiding in efficient resource utilization.
  • Entertainment. ONNX can transfer behavior prediction models into game engines. This has the potential to enhance player experience through AI-driven personalization and interactions.
  • Education. Adaptive learning systems can integrate AI models that personalize learning content, allowing for different learning styles across various platforms.
  • Telecommunications. ONNX streamlines the deployment of network optimization models. This allows operators to optimize bandwidth allocation and streamline customer service in telecom infrastructures.
  • Environmental Monitoring. ONNX supports climate change models, allowing them to be shared and deployed across platforms. Environmental models are notorious for their complexity, especially when combining them to make predictions.

 

A real-world application for inventory detection in Retail

 

Popular Frameworks and Tools Compatible with ONNX

A cornerstone of ONNX’s usefulness is its ability to seamlessly interface with frameworks and tools already being used in different applications.

This compatibility ensures that AI developers can leverage the strengths of diverse platforms while maintaining model portability and efficiency.

With that in mind, here’s a list of notable and compatible frameworks and tools:

  • PyTorch. A widely used open-source machine learning library from Facebook. PyTorch is known for its ease of use and dynamic computational graph. The community favors PyTorch for research and development because of its flexibility and intuitive design.
  • TensorFlow. Developed by Google, TensorFlow is a comprehensive framework. TensorFlow offers both high-level and low-level APIs for building and deploying machine learning models.
  • Microsoft Cognitive Toolkit (CNTK). A deep learning framework from Microsoft. Known for its efficiency in training convolutional neural networks, CNTK is especially notable in speech and image recognition tasks.
  • Apache MXNet. A flexible and efficient open-source deep learning framework supported by Amazon. MXNet deploys deep neural networks on a wide array of platforms, from cloud infrastructure to mobile devices.
  • Scikit-Learn. A popular library for traditional machine learning algorithms. While not directly compatible, Scikit-Learn models can be converted with the sklearn-onnx converter (see the conversion sketch after this list).
  • Keras. A high-level neural networks API, Keras operates on top of TensorFlow, CNTK, and Theano. It focuses on enabling fast experimentation.
  • Apple Core ML. For integrating ML models into iOS applications, models can be converted from various frameworks to ONNX and then to Core ML format.
  • ONNX Runtime. A cross-platform, high-performance scoring engine. It optimizes model inference across hardware and is crucial for deployment.
  • NVIDIA TensorRT. An SDK for high-performance deep learning inference. TensorRT includes an ONNX parser and is used for optimized inference on NVIDIA GPUs.
  • ONNX.js. A JavaScript library for running ONNX models on browsers and Node.js. It allows web-based ML applications to leverage ONNX models.
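As a small illustration of the last few items, the sketch below converts a scikit-learn classifier with sklearn-onnx and then scores it with ONNX Runtime. The dataset and model are arbitrary placeholders chosen only to keep the example runnable.

```python
# Hedged sketch: convert a scikit-learn model to ONNX and run it with ONNX Runtime.
# The Iris dataset and logistic regression model are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

# Convert: declare the input name and shape that the ONNX graph should expect.
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])
with open("iris_logreg.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Score with ONNX Runtime, independent of scikit-learn at inference time.
session = ort.InferenceSession("iris_logreg.onnx", providers=["CPUExecutionProvider"])
predicted_labels = session.run(None, {"input": X[:5].astype(np.float32)})[0]
print(predicted_labels)
```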

 

See the popularity of PyTorch vs. TensorFlow

 

Understanding the Intricacies and Implications of ONNX Runtime in AI Deployments

ONNX Runtime is a performance-focused engine for running models. It ensures efficient and scalable execution across a variety of platforms and hardware. With a hardware-agnostic architecture, it allows deploying AI models consistently across different environments.

This high-level system architecture begins with converting an ONNX model into an in-memory graph representation. Next come provider-independent optimizations and graph partitioning based on the available execution providers. Each subgraph is then assigned to an execution provider that can handle its operations.

 

The Open Neural Network Exchange (ONNX) runtime structure consists of a set of APIs, libraries, and tools. These enable efficient and cross-platform execution of models represented in the format.

 

The design of ONNX Runtime allows multiple threads to invoke the Run() method simultaneously on the same inference session, since all kernels are stateless. The default execution provider guarantees support for all operators. Other execution providers may use internal tensor representations, but they are responsible for converting them at the boundaries of their subgraphs.
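The following sketch illustrates that concurrency model: one shared InferenceSession, several threads calling Run() (exposed as session.run in Python), and an execution-provider list taken from whatever the local onnxruntime build offers. The model file and input shape are placeholders for a real model.

```python
# Illustrative sketch: several threads share a single InferenceSession.
# "model.onnx" and the (1, 3, 224, 224) float32 input are placeholders.
import threading
import numpy as np
import onnxruntime as ort

providers = ort.get_available_providers()   # e.g. ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)
input_name = session.get_inputs()[0].name

def worker():
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    session.run(None, {input_name: batch})  # Run() is safe to call concurrently on one session

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```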

The positive implications for AI practitioners, developers, and businesses are manifold:

  • Ability to execute models across various hardware and execution providers.
  • Graph partitioning and optimizations improve efficiency and performance (see the configuration sketch after this list).
  • Compatibility with multiple custom accelerators and runtimes.
  • Supports a wide range of applications and environments, from cloud to edge.
  • Multiple threads can simultaneously run inference sessions.
  • Ensures support for all operators, providing reliability in model execution.
  • Execution providers manage their memory allocators, optimizing resource usage.
  • Easy integration with various frameworks and tools for streamlined AI workflows.
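Several of these properties are surfaced through ONNX Runtime's SessionOptions. As a hedged configuration sketch, the snippet below sets the graph optimization level and threading; the model path is a placeholder, and the defaults already enable most optimizations.

```python
# Hedged sketch: tuning graph optimizations and threading via SessionOptions.
# "model.onnx" is a placeholder for a real model file.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL  # constant folding, fusions, etc.
opts.intra_op_num_threads = 4                      # threads used within a single operator
opts.optimized_model_filepath = "model_opt.onnx"   # optionally save the optimized graph for inspection

session = ort.InferenceSession("model.onnx", sess_options=opts,
                               providers=ort.get_available_providers())
print(session.get_providers())                     # which execution providers the session ended up with
```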

 

Benefits and Challenges of Adopting the ONNX Model

As with any revolutionary new technology, ONNX comes with its own challenges and considerations alongside its benefits.

Benefits
  • Framework Interoperability. Facilitates the use of models across different ML frameworks, enhancing flexibility.
  • Deployment Efficiency. Streamlines the process of deploying models across various platforms and devices.
  • Community Support. Benefits from a growing, collaborative community contributing to its development and support.
  • Optimization Opportunities. Offers model optimizations for improved performance and efficiency.
  • Hardware Agnostic. Compatible with a wide range of hardware, ensuring broad applicability.
  • Consistency. Maintains model fidelity across different environments and frameworks.
  • Regular Updates. Continuously evolving with the latest advancements in AI and machine learning.
Challenges
  • Complexity in Conversion. Converting models to ONNX format can be complex and time-consuming, especially for models using non-standard layers or operations (the sketch after this list shows some basic post-conversion checks).
  • Version Compatibility. Ensuring compatibility with different versions of ONNX and ML frameworks can be challenging.
  • Limited Support for Certain Operations. Some advanced or proprietary operations may not be fully supported.
  • Performance Overheads. In some cases, converting models and running them through ONNX can introduce performance overhead.
  • Learning Curve. Requires understanding of ONNX format and compatible tools, adding to the learning curve for teams.
  • Dependency on Community. Some features may rely on community contributions for updates and fixes, which can vary in timeliness.
  • Intermittent Compatibility Issues. Occasional compatibility issues with certain frameworks or tools can arise, requiring troubleshooting.
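When conversion or compatibility problems do appear, a few basic checks with the onnx Python package are usually the first troubleshooting step. The hedged sketch below loads a converted model (placeholder file name), validates its structure, and prints the opsets it targets.

```python
# Hedged sketch: post-conversion sanity checks. "converted_model.onnx" is a placeholder.
import onnx

model = onnx.load("converted_model.onnx")
onnx.checker.check_model(model)   # raises if the model violates the ONNX spec

# Which opset versions does the model rely on? Useful for version-compatibility issues.
for imp in model.opset_import:
    print(imp.domain or "ai.onnx", "opset", imp.version)

# Human-readable view of the graph, handy for spotting unsupported operators.
print(onnx.helper.printable_graph(model.graph))
```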

 

Viso Suite supports ONNX format and facilitates building real-world computer vision applications

 

ONNX – An Open Standard Driven by a Thriving Community

As a popular open-source framework, a key component of ONNX’s continued development and success is community involvement. Its GitHub project has nearly 300 contributors, and its current user base is over 19.4k.

With 27 releases and over 3.6k forks at the time of writing, it’s a dynamic and ever-evolving project. There have also been over 3,000 pull requests (with 40 still active) and over 2,300 resolved issues (with 268 active).

The involvement of a diverse range of contributors and users has made it a vibrant and progressive project. And initiatives by ONNX, individual contributors, major partners, and other interested parties are keeping it that way:

  • Wide Industry Adoption. ONNX is popular among both individual developers and major tech companies for various AI and ML applications. Examples include:
    • Microsoft. Utilizes ONNX in various services, including Azure Machine Learning and Windows ML.
    • Facebook. As a founding member of the ONNX project, Facebook has integrated ONNX support in PyTorch, one of the leading deep learning frameworks.
    • IBM. Uses ONNX in its Watson services, enabling seamless model deployment across diverse platforms.
    • Amazon Web Services (AWS). Supports ONNX models in its machine learning services like Amazon SageMaker, for example.
  • Active Forum Discussions. The community participates in forums and discussions, providing support, sharing best practices, and guiding the direction of the project.
  • Regular Community Meetings. ONNX maintains regular community meetings, where members discuss advancements and roadmaps, and address community questions.
  • Educational Resources. The community actively works on developing and sharing educational resources, tutorials, and documentation.

 

The ONNX documentation provides comprehensive and accessible resources for users, offering detailed information, guides, and examples.

 

ONNX Case Studies

Numerous case studies have demonstrated ONNX’s effectiveness and impact in various applications. Here are two of the most significant ones from recent years:

 

Optimizing Deep Learning Model Training

Microsoft’s case study showcases how ONNX Runtime (ORT) can optimize the training of large deep-learning models like BERT. ORT implements memory usage optimizations by reusing buffer segments across a series of operations, such as gradient accumulation and weight update computations.

Among its most important findings was that ORT enabled training BERT with double the batch size compared to stock PyTorch, leading to more efficient GPU utilization and better performance. Additionally, ORT integrates the Zero Redundancy Optimizer (ZeRO) to reduce GPU memory consumption, further boosting batch size capabilities.
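The case study centers on Microsoft’s own training stack, but the general usage pattern is easy to sketch. Assuming the optional torch-ort package is installed and configured, a PyTorch model can be wrapped in ORTModule while the rest of the training loop stays ordinary PyTorch; the toy model, data, and hyperparameters below are stand-ins, not the BERT setup from the case study.

```python
# Hedged sketch: running a PyTorch training step through ONNX Runtime via ORTModule.
# Assumes the optional torch-ort package is installed; the tiny model and data are
# placeholders, not the BERT configuration described in the case study.
import torch
from torch_ort import ORTModule

base = torch.nn.Sequential(
    torch.nn.Linear(768, 768),
    torch.nn.ReLU(),
    torch.nn.Linear(768, 2),
)
model = ORTModule(base)                           # forward/backward now execute through ORT
optimizer = torch.optim.AdamW(base.parameters(), lr=1e-4)

x = torch.randn(32, 768)
y = torch.randint(0, 2, (32,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```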

 

Failures and Risks in Model Converters

A study in the ONNX ecosystem analyzed the failures and risks in deep learning model converters. The research highlights the growing complexity of the deep learning ecosystem, including the evolution of frameworks, model registries, and compilers.

 

ONNX acts as a common intermediary in the growing complexity of deep learning.

 

In the study, the researchers point out the increasing importance of common intermediaries for interoperability as the ecosystem expands. The study addresses the challenges of maintaining compatibility between frameworks and DL compilers, illustrating how ONNX helps navigate these complexities.

The research also delves into the nature and prevalence of failures in DL model converters, providing insights into the risks and opportunities for improvement in the ecosystem.

 

Getting Started

ONNX stands as a pivotal framework in open-source machine learning, fostering interoperability and collaboration across various AI platforms. Its versatile structure, supported by an extensive array of frameworks, empowers developers to deploy models efficiently. ONNX offers a standardized approach to bridging the gaps and reaching new heights of usability and model performance.
