Various autonomous and assisted driving strategies rely on accurate and reliable perception of the environment around a vehicle. Among the commonly used sensors, radar has long been considered a robust and cost-effective solution even in severe driving scenarios, e.g., weak/strong lighting and bad weather. However, object detection on radar data has not been well explored in either academia or industry. The reasons are threefold: 1) Radar signal, especially radio frequency (RF) data, is not as intuitive as RGB images, so its role in autonomous driving is seriously underestimated; 2) Very few public datasets with proper object annotations are available, making it difficult to address the problem with powerful machine learning methods; 3) Extracting semantic information for object classification from radar signals is notably difficult.
The organizers are from the Information Processing Lab at the University of Washington, Silkwave Holdings Limited, Zhejiang University, and ETRI. The challenge organizers include:
The organizers will post announcements in the Forums. Questions about this challenge, including logistics and dataset issues, are welcome; participants can post their questions in the Forums, and the organizers will answer them actively.
The participants need to submit their radar object detection results for the testing set to the evaluation server. The evaluation metrics include the overall AP and the AP under four different driving scenarios, i.e., parking lot (PL), campus road (CR), city street (CS), and highway (HW). The main score for this challenge is the overall AP. The details of the evaluation method are described on the evaluation page.
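As a hedged illustration of how AP can be computed for point-based detections, the sketch below greedily matches predictions to ground-truth points within a distance threshold and integrates the precision–recall curve. The matching criterion and threshold here are assumptions for illustration; the organizers' official evaluation code defines the actual metric.

```python
import math

def average_precision(preds, gts, dist_thresh=2.0):
    """Illustrative point-detection AP (NOT the official metric).
    preds: list of (x, y, score); gts: list of (x, y) ground-truth points.
    A prediction is a true positive if it falls within dist_thresh of an
    unmatched ground-truth point; AP is the area under the PR curve."""
    if not gts:
        return 0.0
    preds = sorted(preds, key=lambda p: -p[2])  # rank by confidence
    matched = [False] * len(gts)
    tp_flags = []
    for x, y, _score in preds:
        best_i, best_d = -1, dist_thresh
        for i, (gx, gy) in enumerate(gts):
            d = math.hypot(x - gx, y - gy)
            if not matched[i] and d <= best_d:
                best_i, best_d = i, d
        if best_i >= 0:
            matched[best_i] = True
        tp_flags.append(best_i >= 0)
    ap, tp, fp, prev_recall = 0.0, 0, 0, 0.0
    for is_tp in tp_flags:
        tp += is_tp
        fp += not is_tp
        recall = tp / len(gts)
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision  # all-point integration
        prev_recall = recall
    return ap
```

Per-scenario AP (PL/CR/CS/HW) would simply apply the same computation to the frames of each scenario separately.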
The submitted zip file should contain 10 different txt files for the 10 testing sequences with the following names:
2019_05_28_CM1S013.txt 2019_05_28_MLMS005.txt 2019_05_28_PBMS006.txt 2019_05_28_PCMS004.txt 2019_05_28_PM2S012.txt
2019_05_28_PM2S014.txt 2019_09_18_ONRD004.txt 2019_09_18_ONRD009.txt 2019_09_29_ONRD012.txt 2019_10_13_ONRD048.txt
Each of them should have the following format:
frame_id range(m) azimuth(rad) class_name conf_score ...
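A submission file in the format above can be produced with a small helper. This is a minimal sketch: the one-detection-per-line layout, the numeric precision, and the example class names and values are assumptions, following only the column order shown above.

```python
def write_submission(path, detections):
    """Write detections in the challenge's txt format (assumed layout:
    one detection per line, space-separated fields).
    detections: list of (frame_id, range_m, azimuth_rad, class_name, conf)."""
    with open(path, "w") as f:
        for frame_id, rng, azm, cls, conf in detections:
            f.write(f"{frame_id} {rng:.4f} {azm:.4f} {cls} {conf:.4f}\n")

# Example usage with hypothetical detections for one testing sequence:
write_submission("2019_05_28_CM1S013.txt",
                 [(0, 12.5, 0.31, "car", 0.92),
                  (0, 7.8, -0.12, "pedestrian", 0.67)])
```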
The ROD2021 dataset (a subset of CRUW) for this challenge will be available to the participants once the challenge starts. The participants are required to use the provided dataset with annotations to develop an object detection method using the radar data only as the input. The participants are also allowed to propose their own object annotation methods based on the RGB and RF images in the training set, but the proposed annotation method needs to be clearly described in their method description as well as any future paper at ICMR 2021. The object detection results should be submitted to CodaLab, including the object classes and object locations in the radar range-azimuth coordinates, i.e., in the bird's-eye view. Each object in the radar's field of view is represented by a point in the RF image.
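Since results are reported in range-azimuth (polar) coordinates, a Cartesian bird's-eye-view position can be recovered with the standard polar-to-Cartesian conversion. Note the axis and sign conventions below are assumptions for illustration; the challenge page does not fix them here.

```python
import math

def polar_to_cartesian(range_m, azimuth_rad):
    """Convert a range-azimuth detection to Cartesian BEV coordinates.
    Assumed convention: azimuth measured from the radar boresight
    (the +y axis), positive toward +x."""
    x = range_m * math.sin(azimuth_rad)
    y = range_m * math.cos(azimuth_rad)
    return x, y
```

For example, a detection at range 5 m and azimuth 0 rad lies straight ahead at (0, 5) under this convention.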
The participants can form their own teams from different organizations. The detailed rules on the two evaluation phases, submission limits, and code release are listed below.
There will be two phases for this challenge:
First phase: a randomly selected 30% of the overall testing set is used for evaluation.
Second phase: the remaining 70% of the testing set is used. The final score is the AP in the second phase.
Some detailed rules are listed as follows:
The participants can form their own teams across different organizations, and the number of participants per team is not limited. However, only one team is allowed from an individual organization.
The participants are NOT allowed to use external data for either training or validation.
The teams need to provide their open-source code through GitHub after the challenge results are announced.
The participants are not allowed to use extra information from human labeling on the training or testing dataset for the challenge's target labels.
The participants are allowed to propose their own object annotation methods based on the RGB and RF images in the training set, but the proposed annotation method needs to be clearly described in their method description as well as any future paper at ICMR 2021.
During each of the two phases in the competition, each team can only submit their results for evaluation once per day, with fewer than 10 attempts in total.
Remember to submit your best results to the leaderboard before the phase deadline.
The provided dataset can only be used for academic purposes. By using this dataset and related software, you agree to cite our dataset and baseline paper.
Start: Jan. 18, 2021, midnight
Description: First phase submissions are evaluated on the selected 30% of the testing set.
Start: March 12, 2021, midnight
Description: Second phase submissions are evaluated on the remaining 70% of the testing set.
End: March 26, 2021, midnight