Feature index

Viso Suite provides a single no-code platform for Computer Vision development, deployment, monitoring and scaling. Explore our full list of features below.

Low-code / No-code Editor

Visual Programming

Viso Suite provides a visual editor to use pre-built modules. Use drag and drop to wire together hardware and software to create your computer vision application as a workflow.

Infinitely extensible

Install new modules from the marketplace to add more computer vision, video processing and logic capabilities to your editor. Installed modules automatically appear in the editor - ready to use.

Version Management

Create and update your applications in the visual editor. Create new versions of your applications. Dependencies are managed out of the box. Revert to earlier application versions easily.

Powerful logic

Wire together modules to process camera streams with deep learning modules and make sense of the output data. Build complex flows with an intuitive rule engine.

Configure application modules

Configure the modules you wire together with visual drop down menus to set the available parameters easily. Find all module parameters in our documentation and tutorials.

Use over 2'000 function modules

Browse pre-built function modules for integrations, sending emails, if-this-then-that logic, and much more. Install function modules with one click to your workspace editor.

Add your custom code

Simply add your own JavaScript code to your application workflows. Write custom functions, for example to efficiently handle output data (text strings of classes, or counting results).
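For illustration, a custom function for handling output data might look like the sketch below. The input shape (an array of `{ class, confidence }` detections) and the function name are assumptions for this example, not the exact Viso Suite message format.

```javascript
// Hypothetical custom function: aggregate raw detections into
// per-class counts, dropping low-confidence results.
function countByClass(detections, minConfidence = 0.5) {
  const counts = {};
  for (const det of detections) {
    if (det.confidence < minConfidence) continue; // filter weak detections
    counts[det.class] = (counts[det.class] || 0) + 1;
  }
  return counts;
}

// Example: two confident "person" detections; the car is filtered out
const counts = countByClass([
  { class: "person", confidence: 0.91 },
  { class: "person", confidence: 0.84 },
  { class: "car", confidence: 0.32 },
]);
// counts is { person: 2 }
```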

Create your custom modules

Developers can wrap their own code into modules to use them in applications. Use the version management for your custom modules, to update them as part of applications in-use.

Use your own containers

Custom modules can easily be created from your own Docker container. Use the built-in Docker dependency management to deploy containers with your application.

Computer Vision Modules

Object Detection

Real-time object detection to recognize objects in digital images and videos. Deep learning models are used to recognize predefined classes.

People Detection

Detect people in images and real-time video feeds. Deep learning person detection is used to trigger an alert or send a message in various use cases.

Face Detection

Face detection is used to detect the presence of human faces in images and video. Deep learning models are used to achieve the highest accuracy.

Animal Detection

Deep learning models are used for the automatic detection of animals in video feeds. It is used in security, road safety and agriculture applications.

Object Tracking

Object tracking is used to track the movement of detected objects in a video feed. It is used to track multiple objects or people as they move around.

Object Counting

Object counting to track the number of detected objects in a video. Vision-based counting is used in traffic flow monitoring or product counting.

People Counting

People counting is used to count one or multiple people in video streams. A vision-based people counter is used in crowd counting and footfall analytics.

Dwell Time Tracking

Neural networks detect and track people and vehicles to calculate the average dwell time in specific areas. It is used to detect waiting times, inefficiencies and time spent by vehicles at stops.
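The dwell-time calculation itself reduces to averaging the time between a tracked object's entry and exit. A minimal sketch, assuming paired entry/exit timestamps in milliseconds (the field names are illustrative, not the platform's actual schema):

```javascript
// Illustrative only: average dwell time from tracked visits,
// where each visit records entry and exit timestamps in ms.
function averageDwellSeconds(visits) {
  if (visits.length === 0) return 0;
  const total = visits.reduce(
    (sum, v) => sum + (v.exitedAt - v.enteredAt), 0);
  return total / visits.length / 1000; // ms -> seconds
}

const avg = averageDwellSeconds([
  { enteredAt: 0, exitedAt: 30000 },    // 30 s in the area
  { enteredAt: 5000, exitedAt: 95000 }, // 90 s in the area
]);
// avg is 60
```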

Motion Heatmap

Object flow is used to create a motion heatmap based on real-time computer vision detection of objects. It is used for traffic analysis and footfall heatmaps.

Object Segmentation

Video object segmentation is used to segment several different objects with deep learning models in images from video streams.

Pose Estimation

Human pose estimation with deep learning is used to detect and track semantic key points, such as joints and limbs, in frames of video streams.

Fall Detection

Real-time human fall detection uses deep learning methods to detect and track human movement and analyze motion in video feeds.

Posture Recognition

Detect the human body posture in real-time on video images. Posture recognition is used to detect and track the movement of people to identify the postures "sitting", "lying", "standing".

Image Classification

Deep learning image classification assigns an entire video frame to a specific label. Classification is used to analyze images that contain one object.

Combine Modules

Use modules in combination to build a powerful application pipeline in high-performance computer vision applications. One application can contain multiple modules of the same type.

Input and Output Modules

Surveillance Camera Input

Get the video stream of one or multiple digital video surveillance cameras. Use the video feed node to apply deep learning models to the real-time stream of network cameras (IP cameras).

USB camera or Webcam Input

Get the video feed of USB cameras or webcams connected to an edge device (computer). Use one or multiple cameras to provide video streams that are analyzed in real-time using neural networks.

Video File Input

Use video files to provide the video stream simulating the stream of a camera. Video files can be used for testing real-time applications before switching to a physical camera - with one click.

Multi-Camera Input

Get the video feed of multiple cameras (IP cameras, webcam/USB cameras) in one application. Perform multi-stream processing in real-time with parallel computing.

Fisheye Dewarping

Use dewarping to correct the distortions of videos obtained from cameras equipped with a fisheye lens (ultra wide-angle). This image pre-processing is required to apply deep learning models trained on regular videos.

Frame Buffer

Use a frame buffer to accumulate the historical results from a prior module in your application workflow. Buffering is used in real-time object or people counting and tracking applications.
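The idea behind such a buffer can be sketched as a fixed-size sliding window over per-frame results, e.g. to smooth noisy people counts. This is not the built-in module's API, just the underlying concept:

```javascript
// Minimal sketch of a fixed-size frame buffer: keeps the last N
// per-frame results and exposes a smoothed (averaged) value.
class FrameBuffer {
  constructor(size) {
    this.size = size;
    this.frames = [];
  }
  push(result) {
    this.frames.push(result);
    if (this.frames.length > this.size) this.frames.shift(); // drop oldest
  }
  average() {
    if (this.frames.length === 0) return 0;
    return this.frames.reduce((a, b) => a + b, 0) / this.frames.length;
  }
}

const buf = new FrameBuffer(3);
[4, 5, 6, 7].forEach((count) => buf.push(count));
// buffer now holds [5, 6, 7]; buf.average() is 6
```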

Region of Interest (ROI)

Define one or multiple specific areas in a video stream to focus subsequent tasks on the selected regions only. Enable ROI-Cropping to increase inference performance.

Region of Interest Polygon

Draw flexible polygons to set one or even multiple areas in a video feed. Use polygons to exclude specific areas. Use it to detect objects or faces in specific areas, for example, in intrusion detection.
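Under the hood, filtering detections by a polygon ROI comes down to a point-in-polygon test. A hedged sketch using the standard ray-casting algorithm, checking whether a detection's center point falls inside a polygon given as `[x, y]` vertex pairs:

```javascript
// Standard ray-casting point-in-polygon test: cast a horizontal ray
// from the point and toggle "inside" on each polygon edge it crosses.
function insidePolygon(point, polygon) {
  const [x, y] = point;
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

const roi = [[0, 0], [100, 0], [100, 100], [0, 100]]; // square ROI
// insidePolygon([50, 50], roi) is true; insidePolygon([150, 50], roi) is false
```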

Region-based Counting

Draw one or even multiple regions of interest in a video stream to apply object or people counting. Draw rectangles with crossing lines at entrances or object counters at production lines.
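The crossing-line idea can be sketched with simple geometry: an object is counted when its tracked center moves from one side of the counting line to the other between consecutive frames. The code below is an illustration of that principle, not the module's implementation:

```javascript
// Which side of the line is the point on? The sign of the 2D cross
// product of (line direction) x (line start -> point) tells us.
function side(line, point) {
  const [[x1, y1], [x2, y2]] = line;
  const [px, py] = point;
  return Math.sign((x2 - x1) * (py - y1) - (y2 - y1) * (px - x1));
}

// An object crossed the line if its side flipped between two frames.
function crossedLine(line, prev, curr) {
  const a = side(line, prev);
  const b = side(line, curr);
  return a !== 0 && b !== 0 && a !== b;
}

const entrance = [[0, 50], [100, 50]]; // horizontal counting line
// crossedLine(entrance, [40, 40], [42, 60]) is true (moved across the line)
```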

Region of Interest Sections

Define one or multiple sections in a video stream with a flexible grid. Set different names and colors to track objects across sections (e.g. areas of a store or building). Exclude specific sections.

Video View

Display the video stream with the detected output results from prior modules. For privacy and performance reasons, the video output is usually disabled. It's added for testing and debugging computer vision applications.

Send Slack Messages

Use the Slack message module to send alerts and messages to Slack channels based on pre-defined rules. Alternatively, send messages by email or SMS with Twilio.

Collect detection results

Built-in data collectors send the data output (messages, counts) of the applications from edge devices to the cloud. Images are processed at the edge without the need for cloud data offloading.

Visualize detection results

Detection results of computer vision applications can be visualized in custom cloud dashboards. Collected data is aggregated across all devices and synchronized to the cloud workspace.

Video Recording

Store video frames or video files to export or process with a delay. Images or video recordings can be sent to the cloud and third party systems. Delayed processing is used to optimize efficiency.

Supported Algorithms and AI models

Pre-Trained Algorithms

In all modules, select from the best AI algorithms pre-trained on massive datasets. Viso.ai is constantly scouting and integrating the best-performing algorithms and AI models.

Custom Algorithm Import

All modules support importing your custom-trained or re-trained AI model. Use custom and retrained algorithms to detect special objects or situations to achieve mission-critical accuracy.

Switching Algorithms

For every module, we set a default algorithm. To change it, select another algorithm from the drop-down list. Update or exchange an algorithm of an existing application with one single click.

Object Detection AI Models

Use the best performing pre-trained deep learning models and neural networks for object detection.

  • SSD MobileNet v1
  • SSD MobileNet v2
  • SSD Inception v2
  • SSDlite MobileNet v2
  • SSD ResNet50 v1
  • Faster RCNN ResNet50
  • Faster RCNN ResNet50 v1
  • Faster RCNN ResNet101
  • Faster RCNN ResNet101 v1
  • Faster RCNN Inception v2
  • Faster RCNN Inception ResNet v2
  • EfficientDet D0
  • EfficientDet D4
  • YOLOv3
  • YOLOv3 tiny
  • YOLOv3 FP16
  • YOLOv4
  • YOLOv4 tiny
  • ResNet50
  • VGG19
  • Face Detection Retail 0004
  • Person Detection 0200
  • Person Detection 0201
  • Person Detection Retail 0013
  • Person Vehicle Bike Detection 2000
  • Person Vehicle Bike Detection Crossroad 0078
  • Vehicle Detection 0201
  • Vehicle Detection 0222

Image Classification AI Models

Robust pre-trained deep learning models and neural networks for image classification.

  • MobileNet
  • ResNet50
  • VGG19
  • Inception v1 (GoogLeNet)
  • Inception v4

Keypoint Detection AI Models

Pre-trained deep learning models for keypoint detection, used for human pose estimation.

  • ResNet50
  • ResNet101
  • ResNet152
  • ResNext50
  • ShuffleNetV2x1
  • ShuffleNetV2x2
  • OpenPose
  • PoseNet MobileNet v1
  • PoseNet ResNet50

Object Tracking Algorithms

Object tracking algorithms that track multiple objects detected by the object detection module.

  • DLIB
  • MOSSE
  • CSRT
  • DeepSORT
  • Geo Distance

Object Segmentation AI Models

Popular deep learning models for object segmentation tasks.

  • Mask RCNN Inception v2
  • Mask RCNN Inception ResNet v2
  • Mask RCNN ResNet50
  • Mask RCNN ResNet101

Face Recognition AI Models

Pre-trained deep learning models and algorithms for face recognition and facial attribute analysis.

  • VGG-Face
  • Google FaceNet
  • OpenFace
  • Facebook DeepFace
  • DeepID
  • Dlib
  • ArcFace
  • OpenCV
  • SSD
  • MTCNN
  • RetinaFace

Supported Deep Learning Frameworks and Libraries

TensorFlow 2.0

The popular open source TensorFlow machine learning library focused on neural networks. Originally developed by Google Brain.

TensorFlow Lite

TensorFlow Lite is a deep learning framework for on-device inference. It is optimized for ML on Edge Devices.

OpenVINO

OpenVINO tools optimize deep learning models for inference tasks on Intel hardware including CPU, integrated GPU and Movidius VPU.

PyTorch

PyTorch is a popular open-source ML library developed by Facebook and based on the Torch library. PyTorch is used by Tesla and Uber to power computer vision products.

Chainer

Chainer is a deep learning framework widely used for the development and application of deep learning methods. It supports NVIDIA CUDA for high-performance applications.

OpenCV

OpenCV is the most popular computer vision library, originally developed by Intel. It is aimed at real-time computer vision and features GPU acceleration.

OpenPose

OpenPose is a real-time multi-person keypoint detection library. It is one of the most popular open-source pose estimation technologies.

OpenPifPaf

OpenPifPaf is a neural network architecture for semantic keypoint detection (human body joints) and pose estimation at high speed.

Supported Devices and Hardware

Cross-platform compatibility

Applications built on Viso Suite are cross-platform portable and run on any supported device type. We optimize all modules out of the box. Due to our partnerships, Viso Suite customers are among the first to test and use the latest AI hardware.

Application Portability

Seamlessly move from one hardware platform to another. Exchange the device and hardware as the project advances, and switch from prototyping devices to optimized hardware. Your system is future-proof.

Virtual Devices

Viso Suite allows testing with virtual devices in the cloud. Even with no physical device available, you can run applications in simulation before deploying to physical devices.

Generic Computing Devices (x86)

x86 includes processors from Intel, AMD and others. It is the most popular type of processor, used in general-purpose computers, embedded systems, and small, ruggedized systems.

Intel NUC (amd64)

Intel NUC is a small-form-factor computer designed by Intel. The edge computing device has a built-in CPU and can be operated with VPU or TPU AI accelerators.

Up Board (amd64)

The Up Board is an embedded single board computer for edge computing. It can be used with one or multiple VPU or TPU AI accelerators.

Coral devices (aarch64)

The Google Coral AI accelerator is available as a single board computer for edge solutions. It is optimized for fast neural network inferencing.

Raspberry Pi 4 (aarch64)

The Raspberry Pi 4 is a small-form-factor single board edge computing device. It's affordable, popular for prototyping, and compatible with VPU and TPU accelerators.

NVIDIA Jetson

NVIDIA Jetson is a very popular small form-factor edge computing device that provides accelerated AI processing and GPU computing.

CPU Computing

Use the built-in CPU to process light AI inference tasks while prototyping. For in-production and real-time performance, switch to GPU, VPU or TPU with one single click.

GPU Computing

Use GPU computing for high-performance applications with complex tasks. NVIDIA AI acceleration with CUDA allows parallelized inference processing.

VPU Computing

Intel Movidius Vision Processing Units (VPUs) are hardware accelerators for deep neural networks. A Movidius Myriad X can be connected as USB stick or PCIe board to computing devices.

Multi-VPU Computing

Combine multiple VPUs for high-performance parallelized edge computing. In combination, VPUs achieve GPU-level performance at a much lower total price point.

TPU Computing

Use Google Coral Tensor Processing Units as USB-stick or PCIe board. TPU AI hardware accelerators can be attached to any generic computing device.

Workspace Library

Manage all your modules

Install, manage and uninstall modules in the library. Installed modules automatically add capabilities to your Viso Editor.

Dependency management

See which modules are currently being used by the applications you build. Manage dependencies of modules that use Docker containers.

Video file management

Add videos to your workspace video library. Use them to simulate camera input for testing with applications before switching to physical cameras.

Manage multiple applications

See everything about your applications in one place. Easily see which applications are currently deployed.

Application export and import

Export your complete application with a single click to your computer. Import Viso applications quickly to any of your workspaces.

Deployment

Automated edge deployments

Deploy to large fleets of enrolled edge devices at the click of a button. The brick-safe deployment process is fully automated. All you need to do is assign a profile with an application to a device.

Multiple deployment targets

Create different profiles for different use cases, for example to work with different applications in one workspace, or for different deployment targets (dev, staging, production).

Robust offline-online releases

Deployment automation makes it possible to roll out application versions even if some devices are temporarily offline.

Device Management

Complete device management

Use a fully integrated device management to manage up to 1'000 edge devices in a workspace. Smart filter systems help to manage many devices effectively.

Remote status management

The device status reports the online/offline status, deployment status and current health status of every device. We combined a set of parameters in one single status.

Automated device health check

Since camera systems usually require manual inspections, we developed a fully automated remote health-check. Device hardware, network, application and even cameras are checked.

Remote terminal access

Use a built-in remote SSH terminal to access devices for manual debugging. Deactivate the terminal for devices in a production setting.

Remote device health dashboard

Monitor every device in a real-time dashboard with detailed hardware and network stats. Remotely explore past data with dynamic filters.

Dashboard Builder

Create Custom Dashboards

Use a complete no-code dashboard builder to create and manage multiple dashboards. Visualize data from your deployed applications in real-time.

Over 40 Different Chart Widgets

Use different charts to visualize your application data for your use case. Design dashboards visually and resize widgets, change colors, titles, data formats and more.

Integrated Data-Connector

Aggregate the output data metrics from your deployed applications across all edge devices in the cloud and visualize it in dynamic dashboards.

Extensive dynamic filters

Filter and explore your application data in dashboards using dynamic drill-down filters and time-based filters.

Workspace

Invite and manage users

Set up user accounts and manage invitations to let your team members join the workspace. Manage all users in one place.

Full access management

Assign user roles to users to restrict access to workspace areas, data and applications. Ensure secure passwords across the workspace.

Create custom user roles

Add your own custom user roles to your workspace. Invite multiple users with custom permissions.

Real-time collaboration

Collaborate with your team in a workspace. Build and deploy together. Robust versioning and session management make collaboration easy.

Custom table views

Adjust and filter every table view with powerful filters and dynamic columns. Work efficiently, even with a large number of users, apps and devices.

Change theme and dark-mode

Customize the workspace look and make it yours: change the color theme and choose between light and dark mode.
