BDD100K Multiple Object Tracking Challenge

Organized by bdd100k


CVPR 2020 BDD100K Multiple Object Tracking Challenge

 


 

The BDD100K Multiple Object Tracking challenge is part of the Workshop on Autonomous Driving at CVPR 2020. Understanding the temporal association of objects within videos is one of the fundamental yet challenging tasks in computer vision. We provide BDD100K MOT, a large-scale and diverse dataset, to advance the study of multiple object tracking. For full details of this task, please read the Evaluation page.

You can find detailed instructions on how to participate in the challenge on the workshop website.

Please refer to the following pages for FAQs.

Evaluation

Phases

There are two phases in the challenge: the val phase and the test phase. The final ranking will be based on the test phase.

Pre-training

It is fair game to pre-train your network on ImageNet or COCO, but if other datasets are used, please note this in the submission description. The official ranking will consider only methods that use no external datasets other than ImageNet and COCO.

Ignoring distractors

"other person", "trailer", and "other vehicle" are considered detractors in this challenge. As a preprocessing step, all predicted boxes are matched and the ones matched to distractor ground-truth boxes are ignored.

Crowd region

After bounding-box matching, we ignore all false-positive detections that have >50% overlap with a crowd region (a ground-truth box with the "Crowd" attribute).
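The exact notion of overlap is a detail of the official evaluation; the sketch below assumes it means the fraction of the detection's own area covered by a crowd box, a common convention for ignore regions.

    # Hypothetical sketch of crowd-region filtering. "Overlap" is assumed to be
    # the fraction of the detection's area covered by a crowd ground-truth box.
    def coverage(det, crowd):
        """Intersection area divided by the detection's own area.

        Boxes are (x1, y1, x2, y2) tuples.
        """
        iw = max(0.0, min(det[2], crowd[2]) - max(det[0], crowd[0]))
        ih = max(0.0, min(det[3], crowd[3]) - max(det[1], crowd[1]))
        det_area = (det[2] - det[0]) * (det[3] - det[1])
        return (iw * ih) / det_area if det_area > 0 else 0.0

    def drop_crowd_false_positives(false_positives, crowd_boxes, thresh=0.5):
        """Remove unmatched detections that lie mostly inside a crowd region."""
        return [
            det for det in false_positives
            if all(coverage(det, crowd) <= thresh for crowd in crowd_boxes)
        ]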

Super-category

In addition to the evaluation over all 8 classes, we merge the ground-truth and prediction categories into 3 super-categories and evaluate the results for each super-category. The super-category evaluation results are provided for reference only.
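As an illustration, the sketch below merges the 8 BDD100K MOT classes into a human / vehicle / bike grouping; this mapping is an assumption, so consult the official BDD100K toolkit for the authoritative grouping.

    # Assumed mapping from the 8 evaluated classes to 3 super-categories;
    # consult the official BDD100K toolkit for the authoritative grouping.
    SUPER_CATEGORY = {
        "pedestrian": "human",
        "rider": "human",
        "car": "vehicle",
        "truck": "vehicle",
        "bus": "vehicle",
        "train": "vehicle",
        "motorcycle": "bike",
        "bicycle": "bike",
    }

    def to_super_category(label):
        """Map a fine-grained class label to its super-category."""
        return SUPER_CATEGORY[label]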

Metrics

[Update 0521] We employ mean Multiple Object Tracking Accuracy (mMOTA, the mean of the per-category MOTA values) as our primary evaluation metric for ranking. All metrics are detailed below. See this paper for more details.

  • mMOTA (%): Mean multiple object tracking accuracy, reported as a percentage. It is calculated by averaging the MOTA values over the 8 categories.
  • mMOTP (%): Mean multiple object tracking precision, reported as a percentage. It is calculated by averaging the MOTP values over the 8 categories.
  • MOTA (%): Multiple object tracking accuracy, reported as a percentage.
  • MOTP (%): Multiple object tracking precision, reported as a percentage.
  • Misses: The total number of missed ground-truth boxes.
  • FP: The number of false-positive matches after global min-cost matching.
  • Switch: An identity switch is counted when a ground-truth object is matched to a track that differs from its last known assigned track.
  • Mostly Tracked: The number of objects with at least 80 percent of their lifespan tracked.
  • Mostly Lost: The number of objects with less than 20 percent of their lifespan tracked.
  • Partially Tracked: The number of objects with at least 20 percent and less than 80 percent of their lifespan tracked.
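To make the relationship between these counts concrete, here is a minimal sketch of the standard MOTA formula, MOTA = 1 - (misses + FPs + ID switches) / total ground-truth boxes, and the per-category averaging behind mMOTA. It is illustrative, not the official scoring script.

    # Illustrative computation of MOTA and mMOTA from accumulated counts;
    # not the official scoring script.
    def mota(misses, false_positives, id_switches, num_gt):
        """MOTA = 1 - (misses + FPs + ID switches) / total ground-truth boxes."""
        return 1.0 - (misses + false_positives + id_switches) / num_gt

    def mmota(per_category_counts):
        """Mean of per-category MOTA over the 8 evaluated categories.

        per_category_counts: dict mapping category -> (misses, fps, idsw, num_gt)
        """
        scores = [mota(*counts) for counts in per_category_counts.values()]
        return sum(scores) / len(scores)

    # Example: 1000 GT boxes, 100 misses, 50 false positives, 10 identity switches
    print(mota(100, 50, 10, 1000))  # 0.84, reported as 84.0%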

Please refer to the BDD data website for the terms of use of the BDD data.

val2020

Start: Jan. 1, 2020, midnight

Description: The val phase evaluates 200 sequences from the BDD100K MOT validation set. The data can be downloaded from https://bdd-data.berkeley.edu/. Evaluation usually takes about 5 minutes (there may be a delay of roughly 10 to 15 minutes before the evaluation starts). Click "Submit to Leaderboard" to submit your results to the leaderboard on the Results page.

test2020

Start: Jan. 1, 2020, midnight

Description: The test phase evaluates 400 sequences from the BDD100K MOT testing set. The data can be downloaded from https://bdd-data.berkeley.edu/. Evaluation usually takes about 10 minutes (there may be a delay of roughly 10 to 15 minutes before the evaluation starts). Click "Submit to Leaderboard" to submit your results to the leaderboard on the Results page.

Competition Ends

June 13, 2020, 6:59 a.m. UTC

Leaderboard

#   Username      Score
1   bdd100k       26.40
2   suntinger     10.87
3   zhaoxingjie    4.89