Learning to Drive Competition

Organized by heckers

Open-End Phase
Oct. 20, 2019, midnight UTC
Challenge: Learning To Drive (L2D)

The goal of this challenge is to advance the area of learning to drive for autonomous driving. The driving model will learn to predict – given a set of sensor inputs – driving maneuvers consisting of steering wheel angle and vehicle speed at a point in the future (1s in the future for this challenge). Participants are allowed and encouraged to explore various sensor modalities supplied through our challenge dataset and to leverage various computer vision technologies (such as recognition, detection, depth and motion estimation) to achieve this goal.

This challenge is part of our ICCV 2019 workshop and is split into two phases. The first phase will last until the 10th of October 2019, and the best-performing methods will be presented by the challenge winners during our workshop session. Fact sheets need to be submitted for the winning methods. The second phase will be open-ended, allowing researchers to continuously improve their models and gauge performance on the public leaderboard.

For this particular challenge we will use the Drive360 dataset, which contains around 60 hours of recorded driving around Switzerland. The released Drive360 dataset contains recorded videos from 4 roof-mounted cameras (front-view, rear-view, left-view, and right-view), rendered videos from the TomTom visual route guidance system, 21 road attributes (such as the distance to the next traffic light or the road curvature) from the Navigational Map of HERE Technologies, and finally the ego-vehicle's speed and steering wheel angle recorded from the CAN bus. The data is split into a training set, a validation set and a test set. They can be accessed in the following way:

Challenge participants, upon signing up, will receive 1) three csv files (one for each of the three sub-sets) that specify the synchronized image paths, road attributes, GPS and the CAN bus control labels (for the train and validation set); and 2) a link to all the images extracted from the videos.

Please refer to the evaluation section for more info on evaluation and the data section for more detailed information on the dataset.

This challenge is launched based on the work of these two previous papers (please cite them if you use the data for your publication):

  1. Simon Hecker, Dengxin Dai, and Luc Van Gool, "End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners", ECCV 2018.
  2. Simon Hecker, Dengxin Dai, and Luc Van Gool, "Learning Accurate, Comfortable and Human-like Driving", arXiv 2019.

Learning To Drive (L2D) Task

The learning to drive task is formulated as imitation learning (learning from demonstration). In essence, driving models (neural networks) are trained to imitate how a human driver drives in the same situations. To that end, we define the best-performing network as one that drives exactly like the human driver in the dataset.

Task Definition

Challenge participants need to develop driving models that drive as similarly as possible to the human driver who recorded the dataset.

Specifically, at any given point the network needs to predict the steering wheel angle (canSteering) and vehicle speed (canSpeed) of the human driver 1 second into the future. We have already projected these maneuvers 1 second into the future within the supplied csv files, so challenge participants can simply use the current row's values of canSteering and canSpeed as the labels to predict.
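The label convention above can be sketched as follows. This is a minimal illustration on an in-memory miniature of the csv; the cameraFront column name and the example values are assumptions, while canSteering and canSpeed are the label columns named in the challenge description.

```python
import pandas as pd

# Hypothetical miniature of a challenge csv. Each row already holds the
# maneuver projected 1 s into the future, so the current row's
# canSteering/canSpeed are used directly as labels (no manual shifting).
rows = pd.DataFrame({
    "cameraFront": ["img_0001.jpg", "img_0002.jpg"],  # input image paths
    "canSteering": [-4.2, 1.3],   # label: steering wheel angle 1 s ahead
    "canSpeed":    [48.0, 50.5],  # label: vehicle speed 1 s ahead
})

def get_sample(df, i):
    """Return (inputs, label) for row i; the labels are taken as-is."""
    inputs = {"front": df.loc[i, "cameraFront"]}
    label = (df.loc[i, "canSteering"], df.loc[i, "canSpeed"])
    return inputs, label
```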

Challenge participants will then submit their network's predictions for the drive360_test.csv test set. Using the validation set for model training is allowed.

Task Rules

There are three rules regarding input data to the driving model:

  1. Vehicle state information is NOT allowed to be used as input to the driving model. This means one cannot use any canSpeed and canSteering values as input to the model. 
  2. Any data from future time steps is NOT allowed to be used as input to the driving model. One can only use data from previous and current time steps. For instance, when predicting the canSpeed and canSteering of time step X + 1s, one can use the data recorded before and at time step X, but not the data recorded after X.
  3. Learning and reasoning over chapter boundaries are not allowed. The whole dataset consists of 27 driving routes in total. They are split into 5-minute intervals for the sake of simplicity. We call these 5-minute sequences chapters. The chapters are randomly shuffled before being grouped into the training set, validation set and test set in order to obtain a fair distribution of road situations across all three sets. These chapters should be treated as standalone sequences; learning and reasoning over chapter boundaries adds unnecessary complexity and can cause confusion.
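One simple way to respect rule 3 is to build temporal input windows only within a single chapter, never across two. The sketch below assumes a hypothetical "chapter" column identifying each 5-minute sequence; the actual column names in the csv files may differ.

```python
import pandas as pd

# Toy frame: two chapters, three and two rows respectively.
# The "chapter" column name is an assumption for illustration.
df = pd.DataFrame({
    "chapter": [0, 0, 0, 1, 1],
    "frame":   [0, 1, 2, 0, 1],
})

def history_windows(df, length=2):
    """Return index windows of `length` consecutive rows, each fully
    contained in one chapter, so no window crosses a chapter boundary."""
    windows = []
    for _, chap in df.groupby("chapter"):
        idx = chap.index.to_list()
        for start in range(len(idx) - length + 1):
            windows.append(idx[start:start + length])
    return windows
```

Because windows are generated per chapter, the last frames of chapter 0 and the first frames of chapter 1 never appear in the same window.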

Task Evaluation

The performance of driving model submissions will be evaluated using the mean squared error (MSE) metric for both the speed and the steering predictions. The two errors are then averaged to report the overall performance.
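The scoring rule described above can be written out as a short sketch: one MSE per target, then the mean of the two. The function name is illustrative, not the official evaluation script.

```python
import numpy as np

def l2d_score(pred_steer, true_steer, pred_speed, true_speed):
    """Mean squared error for steering and speed, averaged into one score."""
    mse_steer = np.mean((np.asarray(pred_steer) - np.asarray(true_steer)) ** 2)
    mse_speed = np.mean((np.asarray(pred_speed) - np.asarray(true_speed)) ** 2)
    return (mse_steer + mse_speed) / 2.0
```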

Submission Format

Challenge participants will submit a file named test_predictions.csv, a two-column, comma-separated csv file with the column headers specified as: canSteering, canSpeed

This file will contain 316137 predictions for both canSpeed and canSteering on the drive360_test.csv test-set.
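A minimal sketch of producing a file in this format is shown below, writing to an in-memory buffer for illustration; in practice the output would go to test_predictions.csv, with one row per test-set prediction.

```python
import csv
import io

def write_submission(predictions, fh):
    """Write the two-column submission format.

    predictions: iterable of (steering, speed) pairs, one per test row,
    in the same order as drive360_test.csv.
    """
    writer = csv.writer(fh)
    writer.writerow(["canSteering", "canSpeed"])  # required headers
    for steer, speed in predictions:
        writer.writerow([steer, speed])

# Example values are placeholders, not real predictions.
buf = io.StringIO()
write_submission([(-4.2, 48.0), (1.3, 50.5)], buf)
```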

By downloading the data for this challenge you agree to the following terms:

  1. You will not distribute the images and csv files.
  2. ETH Zurich makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
  3. You accept full responsibility for your use of the data and shall defend and indemnify ETH Zurich, including its employees, officers and agents, against any and all claims arising from your use of the data, including but not limited to your use of any copies of copyrighted images that you may create from the data.

ICCV Phase

Start: June 1, 2019, midnight

Open-End Phase

Start: Oct. 20, 2019, midnight
