The 1st Occluded Video Instance Segmentation (OVIS) Challenge in conjunction with ICCV 2021

Organized by qjy



OVIS (short for Occluded Video Instance Segmentation) is a new large-scale benchmark dataset for the video instance segmentation task. It is designed with the philosophy of perceiving object occlusions in videos, which reveals the complexity and diversity of real-world scenes.

Abstract

Can our video understanding systems perceive objects when a heavy occlusion exists in a scene?

To answer this question, we collected a large-scale dataset called OVIS for occluded video instance segmentation, that is, to simultaneously detect, segment, and track instances in occluded scenes. OVIS consists of 296k high-quality instance masks from 25 semantic categories in which object occlusions commonly occur. While the human vision system can understand occluded instances through contextual reasoning and association, our experiments suggest that current video understanding systems fall short. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 14.4, which indicates that we are still at a nascent stage of understanding objects, instances, and videos in real-world scenarios. For more details, please refer to our website and the workshop homepage.

Overview

OVIS consists of:

  • 296k high-quality instance masks
  • 25 commonly seen semantic categories
  • 901 videos with severe object occlusions
  • 5,223 unique instances

Given a video, all objects belonging to the pre-defined category set are exhaustively annotated. All videos are annotated every 5 frames.

Distinctive Properties 

  • Severe occlusions. The most distinctive property of the OVIS dataset is that a large portion of the objects are under various types of severe occlusion caused by different factors.
  • Long videos. The average video duration and the average instance duration of OVIS are 12.77s and 10.05s, respectively.
  • Crowded scenes. On average, there are 5.80 instances per video and 4.72 objects per frame.

Categories 

The 25 semantic categories in OVIS are Person, Bird, Cat, Dog, Horse, Sheep, Cow, Elephant, Bear, Zebra, Giraffe, Poultry, Giant panda, Lizard, Parrot, Monkey, Rabbit, Tiger, Fish, Turtle, Bicycle, Motorcycle, Airplane, Boat, and Vehicle.

For a detailed description of OVIS, please refer to our paper.

For any questions or suggestions, please contact Jiyang Qi (jiyangqi@hust.edu.cn).

You can evaluate your results on this tab.

We use the same evaluation metrics as YouTube-VIS. The code for CMaskTrack R-CNN and our evaluation metric has been published.
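The YouTube-VIS metrics extend image-level AP to videos by matching predicted and ground-truth instances with a spatio-temporal mask IoU: per-frame intersections and unions are accumulated over the whole sequence before dividing. A minimal sketch of that IoU (the function name and toy data below are illustrative, not part of the released evaluation code):

```python
import numpy as np

def video_iou(masks_a, masks_b):
    """Spatio-temporal IoU between two instance mask sequences.

    masks_a, masks_b: lists of boolean arrays, one per annotated frame
    (an all-False array stands in for frames where the instance is absent).
    IoU is computed over the whole video: total intersection divided by
    total union across frames, as in YouTube-VIS-style evaluation.
    """
    inter, union = 0, 0
    for a, b in zip(masks_a, masks_b):
        inter += np.logical_and(a, b).sum()
        union += np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

# Toy example: two 2-frame sequences of 4x4 masks.
a = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
b = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
a[0][:2, :2] = True   # frame 0: identical 4-pixel masks
b[0][:2, :2] = True
a[1][:2, :] = True    # frame 1: 8 pixels each, overlapping in 4
b[1][1:3, :] = True
print(video_iou(a, b))  # intersection 8, union 16 -> 0.5
```

Note that an instance occluded or absent in some frames simply contributes nothing to the intersection there, so heavy occlusion directly depresses the IoU — which is why tracking through occlusion matters for AP on OVIS.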

OVIS is under Attribution-NonCommercial-ShareAlike License (CC BY-NC-SA 4.0).

Development

Start: June 1, 2021, midnight UTC

Description: Development phase: create models and submit results on the validation set.

Testing

Start: July 26, 2021, 6:59 a.m. UTC

Description: Testing phase: submit results on the test set.

Competition Ends

Aug. 2, 2021, 6:59 a.m. UTC

Leaderboard

  #  Username  Score
  1  huangke1  41.77
  2  deahuang  41.69
  3  Ach       39.72