Comparative Analysis of the Visual Datasets COCO and OID


In this article, we will compare the visual datasets COCO and OID at a high level. Both are very popular open-source visual datasets for machine learning projects. Microsoft's COCO dataset and Google's OID dataset matter to computer vision as a whole because of their role in object detection, face detection, pose estimation, and more.

In the following, we will go over:

  • The purpose of each dataset
  • Qualities of the datasets
  • What makes them unique from one another

Purpose of COCO and OID Datasets

Understanding visual scenes is a prerequisite for successful computer vision applications, and it requires accumulating vast amounts of data. For the average programmer, such data can be difficult to gather because of limited resources and database access.

COCO is a dataset created by Microsoft to give programmers data for computer vision-oriented projects. Similarly, OID can be used by any programmer for any computer vision purpose. However, Google discloses that OID was originally built for internal use, which means there are gaps in the documentation that non-Google employees may find hard to follow.

Open Images Dataset (OID)

What makes it unique? Google annotates the images in the OID dataset with image-level labels, object bounding boxes, object segmentation masks, visual relationships, and localized narratives. This slightly broader annotation scheme makes OID usable for somewhat more computer vision tasks than COCO. The OID landing page also claims it is the largest existing dataset with object location annotations.

Data. Open Images is a dataset of approximately 9 million pre-annotated images. Google's Open Images Dataset landing page states that the bounding boxes have largely been manually drawn by professional annotators. This promotes accuracy and consistency across the dataset and, in turn, higher accuracy for the computer vision applications built on it.
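In practice, OID distributes its annotations as plain CSV files on the Open Images download page. Below is a minimal sketch of reading the box annotations with pandas; the filenames match the validation-split files offered for download, but treat the exact names and local paths as assumptions for your own setup:

```python
# Minimal sketch: reading OID bounding-box annotations with pandas.
# Assumes the validation-split CSVs from the Open Images download page
# have been saved to the working directory (paths are illustrative).
import pandas as pd

boxes = pd.read_csv("validation-annotations-bbox.csv")

# Class labels are machine identifiers (MIDs); human-readable names live
# in a separate, headerless mapping file: LabelName -> DisplayName.
names = pd.read_csv(
    "class-descriptions-boxable.csv",
    header=None,
    names=["LabelName", "DisplayName"],
)
boxes = boxes.merge(names, on="LabelName")

# OID coordinates (XMin, XMax, YMin, YMax) are normalized to [0, 1];
# multiply by each image's pixel width/height to recover pixel boxes.
print(boxes[["ImageID", "DisplayName", "XMin", "XMax", "YMin", "YMax"]].head())
```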

Common Objects in Context (COCO)

What makes it unique? Microsoft's pitch is that COCO introduces a large, pre-annotated dataset of images depicting complex everyday scenes with common objects in their natural context. This sets COCO apart from earlier object recognition datasets that focus on narrower tasks, such as image classification, bounding-box object localization, or semantic pixel-level segmentation.

COCO, meanwhile, focuses mainly on segmenting individual object instances. This broader focus lets COCO serve more use cases than other popular datasets such as CIFAR-10 and CIFAR-100, though not many more than OID. A short sketch of working with these per-instance annotations follows below.
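To make the instance-segmentation focus concrete, here is a minimal sketch using the official pycocotools library (pip install pycocotools). The local annotation file path is an assumption for illustration:

```python
# Minimal sketch: loading COCO per-instance segmentation masks with
# pycocotools. Assumes the 2017 validation annotations were extracted
# to ./annotations (the path is illustrative).
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")

# Pick an arbitrary image that contains at least one "person" instance.
cat_ids = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=cat_ids)
img_info = coco.loadImgs(img_ids[0])[0]

# Each annotation is one object instance; convert the first one's
# polygon/RLE segmentation into a binary mask of shape (height, width).
ann_ids = coco.getAnnIds(imgIds=img_info["id"], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(ann_ids)
mask = coco.annToMask(anns[0])

print(img_info["file_name"], "instances:", len(anns), "mask pixels:", mask.sum())
```

Note that each annotation maps to exactly one object instance, which is the per-instance emphasis that distinguishes COCO from classification-style datasets.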

Data. With a total of 2.5 million labeled instances across 328,000 images, which works out to roughly 7 to 8 labeled instances per image, COCO is a large and expansive dataset that supports many uses. Still, it does not match the scale of Google's OID, which contains some 9 million annotated images.

COCO's labeled instances were annotated manually by crowd workers, while OID discloses that parts of its annotations, such as image-level labels and segmentation masks, were first generated by automated methods and then verified by humans. Neither COCO nor OID publishes an official bounding-box accuracy figure, so it is left to the user to judge whether machine-assisted boxes are more or less precise than purely manual ones.

What’s Next?

Before using COCO or OID in a project, it is essential to understand how the two datasets differ so that all available resources are put to the best use.
