The goal of this challenge is to advance the area of learning to drive for autonomous driving. The driving model will learn to predict, given a set of sensor inputs, driving maneuvers consisting of steering wheel angle and vehicle speed at a point in the future (1 s ahead for this challenge). Participants are allowed and encouraged to explore the various sensor modalities supplied through our challenge dataset and to leverage various computer vision technologies (such as recognition, detection, depth and motion estimation) to achieve this goal.
This challenge is part of our ICCV 2019 workshop and is split into two phases. The first phase runs until the 10th of October 2019, and the best-performing methods will be presented by the challenge winners during our workshop session. Fact sheets need to be submitted for the winning methods. The second phase will be open-ended, allowing researchers to continuously improve their models and gauge performance on the public leaderboard.
For this particular challenge we will use the Drive360 dataset, which contains around 60 hours of driving recorded around Switzerland. The released Drive360 dataset contains recorded videos from 4 roof-mounted cameras (front-view, rear-view, left-view, and right-view), rendered videos from the TomTom visual route guidance system, 21 road attributes (such as the distance to the next traffic light or the road curvature) from the Navigational Map of HERE Technologies, and finally the ego-vehicle's speed and steering wheel angle recorded from the CAN bus. The data is split into a training set, a validation set and a test set. They can be accessed in the following way:
Challenge participants, upon signing up, will receive 1) three csv files (one for each of the three sub-sets) that specify the synchronized image paths, road attributes, GPS data and the CAN bus control labels (for the train and validation sets); and 2) a link to all the images extracted from the videos.
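As a rough sketch of how such a csv row is typically consumed, the snippet below builds a tiny stand-in frame and splits it into inputs and labels. The column names other than canSteering and canSpeed (e.g. cameraFront, hereSpeedLimit) are illustrative assumptions; the real files contain many more columns.

```python
import pandas as pd

# Tiny synthetic stand-in for one of the challenge csv files; the real
# column set is larger (image paths, HERE road attributes, GPS, ...).
# cameraFront and hereSpeedLimit are assumed, illustrative column names.
train = pd.DataFrame({
    "cameraFront": ["img/front_0001.jpg", "img/front_0002.jpg"],
    "hereSpeedLimit": [50, 50],       # assumed road-attribute column
    "canSteering": [3.2, 4.1],        # label: steering wheel angle
    "canSpeed": [38.5, 39.0],         # label: vehicle speed
})

# Split into model inputs and the two CAN bus control labels.
labels = train[["canSteering", "canSpeed"]]
inputs = train.drop(columns=["canSteering", "canSpeed"])
```

With the real files, `pd.read_csv` on the supplied paths would replace the synthetic frame.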
Please refer to the evaluation section for more info on evaluation and the data section for more detailed information on the dataset.
This challenge is launched based on the work of these two previous papers (please cite them if you use the data for your publication):
This learning-to-drive task is formulated as imitation learning, or learning from demonstrations. In essence, driving models (neural networks) are trained to imitate how a human driver drives in the same situations. To that end, we define the best-performing network as the one that drives exactly like the human driver in the dataset.
Challenge participants need to develop driving models that drive as similarly as possible to the human driver who recorded the dataset.
Specifically, at any given point the network needs to predict the steering wheel angle (canSteering) and vehicle speed (canSpeed) of the human driver 1 second into the future. We have already projected these maneuvers 1 second into the future within the supplied csv, so challenge participants simply use the current row's canSteering and canSpeed values as the labels to predict.
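The projection above amounts to shifting the control signals backwards in time. A minimal sketch, assuming a hypothetical 10 Hz sampling rate (the actual rate is fixed by the organizers, not stated here):

```python
import pandas as pd

# Assumed sampling rate: at 10 Hz, a 1 s horizon is a shift of 10 rows.
SAMPLING_HZ = 10

# Synthetic raw log of the two CAN bus signals.
raw = pd.DataFrame({
    "canSteering": [float(i) for i in range(30)],
    "canSpeed": [30.0 + i for i in range(30)],
})

# Move each row's label 1 s into the future, so row t carries the
# maneuver at t + 1 s; the released csv files already apply this shift.
projected = raw.shift(-SAMPLING_HZ).dropna()
```

After the shift, the first row's label is the control value recorded 1 s later in the raw log.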
Challenge participants will then submit their network's predictions for the drive360_test.csv test set. Using the validation set for model training is allowed.
There are three rules regarding input data to the driving model:
The performance of driving model submissions will be evaluated using the mean squared error (MSE) metric for both the speed and the steering predictions. The two errors are then averaged to give the overall score.
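The scoring described above can be sketched as follows; this is an illustrative reimplementation, and the official scorer may normalize or weight the two signals differently.

```python
import numpy as np

def challenge_score(pred_steer, true_steer, pred_speed, true_speed):
    """Per-signal mean squared error, averaged over the two signals.

    Sketch of the evaluation described in the text; not the official scorer.
    """
    mse_steer = np.mean((np.asarray(pred_steer) - np.asarray(true_steer)) ** 2)
    mse_speed = np.mean((np.asarray(pred_speed) - np.asarray(true_speed)) ** 2)
    return (mse_steer + mse_speed) / 2.0
```

For example, a constant steering error of 1 with perfect speed predictions yields a score of 0.5.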
Challenge participants will submit a file named test_predictions.csv: a two-column, comma-separated csv file with the column headers canSteering, canSpeed. This file must contain 316137 predictions for both canSpeed and canSteering on the drive360_test.csv test set.
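A minimal sketch of writing the submission file; the prediction values here are placeholders, and a real submission must contain 316137 rows ordered to match drive360_test.csv.

```python
import pandas as pd

# Placeholder predictions; a real submission has 316137 rows in the
# same order as the test-set csv.
predictions = pd.DataFrame({
    "canSteering": [2.7, -1.3, 0.4],
    "canSpeed": [41.0, 40.2, 39.8],
})

# index=False keeps the file to exactly the two required columns.
predictions.to_csv("test_predictions.csv", index=False)
```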
By downloading the data for this challenge you agree to the following terms:
Phase 1 start: June 1, 2019, midnight
Phase 2 start: Oct. 20, 2019, midnight