Autonomous driving is one of the most significant applications of AI. From eliminating road accidents due to human errors to massively reducing urban space devoted to parking, autonomous driving promises to fundamentally change our daily lives in many ways. Deep learning, reinforcement learning, and multi-agent learning have achieved phenomenal success in recent years, and are now being actively researched for use in autonomous driving. However, large-scale research competitions and standard benchmarks in autonomous driving have mainly focused on perception and prediction rather than planning and interaction. To help advance the frontier of autonomous driving research and to stimulate research that takes multi-agent interaction in driving seriously, we organize the SMARTS Autonomous Driving Competition at DAI 2020.
Participants are expected to develop their autonomous driving planning and control solutions to tackle complex interactive traffic scenarios provided by the SMARTS simulation platform. The solutions will be evaluated according to their competence for driving, interaction, and generalization.
There are two separate tracks in the competition. Track 1 focuses on single-agent multi-lane cruising, where the AI agent controls a vehicle driving along lengthy routes that pass through intersections and roundabouts while interacting with social vehicles.
Participants will submit their autonomous driving agents (as model files) to the competition platform for automated evaluation. To reduce the variance of evaluation, each route will be run repeatedly with different random seeds. The evaluation metrics include:
SMARTS (Scalable Multi-Agent Reinforcement Learning Training School) is a first-of-its-kind simulation platform in that it is centrally focused on realistic, dynamic driving interaction. It allows the construction and control of interaction scenarios that emulate real-world behaviors at different levels of granularity. In doing so, it serves to bring research on multi-agent learning closer to the reality of autonomous driving than ever before.
The solution agent is expected to follow a specific route through a road network that includes intersections, merges, and roundabouts. The task of the agent is to follow the prescribed route, drive as quickly and safely as possible from the start line to the finish line, amid background traffic that consists only of other vehicles.
Refer to the Evaluation page.
Your submission should include the trained model and an agent interface to be called by the evaluation program. See the starter-kit README.md for more details, as well as the SMARTS documentation on competition guides.
For any questions, do not hesitate to raise them in the forums or the WeChat group, or email firstname.lastname@example.org.
When you submit your solution, we will put it through an evaluation similar to your local run.py script. The leaderboard score covers both the public dataset and the evaluation dataset. The evaluation will use different seeds, social vehicle types, social vehicle numbers, and agent missions. Therefore, each scenario will be evaluated multiple times to reduce variance.
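Averaging over repeated seeded runs can be mirrored locally when tuning a solution. The sketch below is hypothetical (the real evaluation harness is provided by the competition platform); `run_episode` is an assumed callable, not part of the starter kit.

```python
import statistics

def evaluate_with_seeds(run_episode, seeds):
    """Average an episode score over several random seeds.

    run_episode is a hypothetical callable taking a seed and returning a
    float score; averaging over seeds reduces the variance of the estimate.
    """
    return statistics.mean(run_episode(seed) for seed in seeds)

# Usage with a stand-in episode function:
# evaluate_with_seeds(lambda seed: 0.1 * seed, [1, 2, 3])
```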
In addition, we will compute test scores using private, previously unreleased maps. The final score will be calculated as a weighted total of the evaluation score and the test score.
Our evaluation will involve multiple episodes, and the results will reflect averages over these episodes. As mentioned above, multiple metrics will be considered. Specifically, they are:
The final score for a single map is mapped into [0, 1] by normalizing each metric independently. As shown in the formulation, B_max represents the maximum time consumption; in practice, it is calculated as route_length / minimum_speed_limit. W_road is half of the road width, and L_route is the length of the predefined route. In Track 1, \alpha is set to 0.4, \beta to 0.1, and \gamma to 0.5.
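The exact formulation appears on the evaluation page; as a rough illustration only, the per-map score could be computed along these lines. The metric names, normalization directions, and the weighted combination below are assumptions for illustration, not the official formula — only B_max, W_road, L_route, and the Track 1 weights come from the description above.

```python
def per_map_score(time_used, offset_from_center, distance_travelled,
                  route_length, minimum_speed_limit, road_width,
                  alpha=0.4, beta=0.1, gamma=0.5):
    """Hypothetical per-map score in [0, 1]; not the official formula."""
    b_max = route_length / minimum_speed_limit   # maximum time consumption
    w_road = road_width / 2                      # half of the road width
    l_route = route_length                       # length of the predefined route
    # Each metric is normalized into [0, 1] (assumed: higher is better).
    time_term = 1.0 - min(time_used / b_max, 1.0)
    center_term = 1.0 - min(offset_from_center / w_road, 1.0)
    progress_term = min(distance_travelled / l_route, 1.0)
    return alpha * time_term + beta * center_term + gamma * progress_term
```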
The overall score is a weighted average over scenario categories. The more complicated the scenario, the larger its weight. The scoring weights are [0.08, 0.12, 0.1, 0.1, 0.1, 0.22, 0.28] for simple_loop, sharp_loop, intersection_loop, merge_loop, roundabout_loop, mixed_loop, and all_loop, respectively.
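The weighted average can be computed directly from the listed weights. This small sketch uses the category weights verbatim; the dictionary layout is ours, not the platform's API.

```python
# Scenario-category weights from the scoring description above.
CATEGORY_WEIGHTS = {
    "simple_loop": 0.08,
    "sharp_loop": 0.12,
    "intersection_loop": 0.10,
    "merge_loop": 0.10,
    "roundabout_loop": 0.10,
    "mixed_loop": 0.22,
    "all_loop": 0.28,
}

def overall_score(per_category_scores):
    """Weighted average of per-category scores (each in [0, 1])."""
    return sum(CATEGORY_WEIGHTS[cat] * score
               for cat, score in per_category_scores.items())
```

Note that the weights sum to 1.0, so a solution scoring 1.0 on every category also receives an overall score of 1.0.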
There are 4 action space types allowed in this track, for both training and evaluation. Specifically, they are:
For more details, please read the docs provided in the starter kit.
1. The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.
2. The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.
3. Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.
4. Officers, directors, employees, and advisory board members (and their immediate families and members of the same household) of the Competition Organizer (i.e. the Decision Making and Reasoning Laboratory of Huawei Noah’s Ark Lab and the APEX Lab of Shanghai Jiao Tong University) and their respective affiliates are not eligible to participate in the competition.
5. You agree to use reasonable and suitable measures to prevent persons who have not formally agreed to these rules from gaining access to the software and data provided by the Competition Organizer. You agree not to transmit, duplicate, publish, redistribute, or otherwise provide or make such software and data available to any party not participating in the competition. You agree to notify Competition Organizer immediately upon learning of any possible unauthorized transmission or unauthorized access of such software and data and agree to work with Competition Organizer to rectify any unauthorized transmission. You agree that participation in the competition shall not be construed as having or being granted a license (expressly, by implication, or otherwise) under, or any right of ownership in, any of the software and data.
6. By downloading the software and data provided by the Competition Organizer you agree to the following terms:
6.1. You will not distribute the software and data.
6.2. You accept full responsibility for your use of the software and data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the software and data.
7. By joining the competition, you affirm and acknowledge that you agree to comply with applicable laws and regulations, and you may not infringe upon any copyrights, intellectual property, or patent of another party for the software you develop in the course of the competition, and will not breach of any applicable laws and regulations related to export control and data privacy and protection.
8. The Competition Organizer reserves the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the competition concerning your identity, residency, mailing address, telephone number, e-mail address, right of ownership, or information required for entering the competition, you may be immediately disqualified from the competition.
9. Participants grant to the Competition Organizer the right to use your winning submissions and the source code and data created for and used to generate the submission for any purpose whatsoever and without further approval.
10. Persons under the age of 18 are not allowed to participate.
11. Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.
12. Prize winnings will be transferred to the winner by a third party.
13. Competition prizes do not include tax payment. Any potential winner is solely responsible for all applicable taxes related to accepting the prize.
14. Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.
15. Organizers may collect the following types of personal data of all the teams of each track with your consent: (1) contact details: username and email address; (2) Affiliation background: this data includes the university you are studying at or the company you are working for.
16. Organizers may use your personal data to (1) verify whether you are eligible to enter the competition; (2) gather statistics about participants' affiliations, with the affiliation data anonymized; (3) deal with your queries in connection with the competition or any prize you win; and (4) determine winners and award prizes.
17. Organizers are committed to protecting your personal data; however, please note that no security measure is perfect. The organizers will keep your personal data for a period of one year after the competition closes.
1. This competition is organized by the Decision Making and Reasoning Laboratory of Huawei Noah's Ark Lab and the APEX Lab of Shanghai Jiao Tong University. The Competition Organizer is responsible for the execution of this competition, including disbursement of the award to the competition winners.
2. This competition is public, but the Competition Organizer approves each user’s request to participate and may elect to disallow participation according to its own considerations.
3. Submission format: zipped directory including the trained model and all the necessary agent interface code to allow the running of the solution together with the simulation platform provided by the Competition Organizer.
4. Users: Each participant must create a CodaLab account to submit their solution for the competition. Only one account per user is allowed.
5. If you are entering as a representative of a company, educational institution, or other legal entity, or on behalf of your employer, these rules are binding for you individually and/or for the entity you represent or are an employee of. If you are acting within the scope of your employment as an employee, contractor, or agent of another party, you affirm that such party has full knowledge of your actions and has consented thereof, including your potential receipt of a prize. You further affirm that your actions do not violate your employer’s or entity's policies and procedures.
6. Teams: Participants are allowed to form teams. A team may have at most 3 participants. You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted after the date specified on the competition website. Participants who would like to form a team should review the ‘Competition Teams’ section on CodaLab’s ‘user_teams’ Wiki page. In order to form a valid team, the total submission count of all of a team’s participants must be less than or equal to the maximum number allowed for a team. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running.
7. Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted after the "Team mergers deadline", if such a deadline is listed on the competition website. In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed for a single team as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running. The organizers don’t provide any assistance regarding team mergers.
8. External data: You may use a new dataset other than the software and data provided by the Competition Organizer to develop and test your models and submissions.
10. Total Prize Amount (USD): $10,000
11. Prize Allocation:
1st Place: $6,000
2nd Place: $3,000
3rd Place: $1,000
12. Upon being awarded a prize:
12.1. The prize winner must agree to submit and deliver a technical presentation of their solution at the DAI conference.
12.2. The prize winner must deliver to the Competition Organizer the software and data created for the purpose of the competition and used to generate the winning submission and associated documentation written in English. The delivered software and data must be capable of regenerating the winning submission and contain a description of the resources required to build and run the regenerated submission successfully.
12.3. The prize winner will grant to the Competition Organizer a nonexclusive license to the winning solution’s software and data and represent that the winner has the unrestricted right to grant that license.
12.4. The prize winner will sign and return all prize acceptance documents as may be required by the Competition Organizer.
13. If a team wins a monetary prize, Competition Organizer will allocate the prize money in even shares between team members unless the team unanimously contacts the Competition Organizer within three business days following the submission deadline to request an alternative prize distribution.
We will provide some computing resources for training; participants can send requests to email@example.com from their registered email. Remember to provide your team information, including team member names and emails, institute, and country or region.
1. Where can I get the competition package?
After registering a CodaLab account, go to the Participate page and send a request.
2. How long does it take for a participation request to be approved?
Within 6 hours. The result will be sent to your registered email; please check it in time.
3. What is the deadline for forming teams?
Before the end of the first stage.
4. What's the max number of members in a team?
Up to 3 members are allowed; see item 6 of the Terms.
1. Fatal server error: (EE) Server is already active for display 1
Just ignore this error; it simply indicates that Xorg is already running.
2. Can the SMARTS simulator run on Windows?
Our simulator was developed for Ubuntu (>=16.04) and macOS (>=10.15), and is not suitable for WSL1 or WSL2. To install it on Windows, some prerequisites need to be met: (1) system version >= 10; (2) install it via Docker (>=19.03.7).
3. Exception: Could not open window.
If you are running on a computer with a GUI and encounter this problem, **do not** try the solution below; use the Docker solution instead or contact us.
If you are running on a server without a GUI, you can try the following instructions to solve it:
```shell
# set DISPLAY
vim ~/.bashrc
# write the following line into bashrc
export DISPLAY=":1"
# refresh
source ~/.bashrc
# set up the Xorg server
sudo wget -O /etc/X11/xorg.conf http://xpra.org/xorg.conf
sudo /usr/bin/Xorg -noreset +extension GLX +extension RANDR +extension RENDER \
  -logfile ./xdummy.log -config /etc/X11/xorg.conf $DISPLAY &
```
4. Cannot use SUMO. You can export the SUMO path to bashrc manually:
```shell
# set SUMO_HOME
vim ~/.bashrc
# write the appropriate line into bashrc
# for Ubuntu
export SUMO_HOME="/usr/share/sumo"
# for macOS
export SUMO_HOME="/usr/local/opt/sumo/share/sumo"
# refresh
source ~/.bashrc
```
5. When I run `scl docs`, it returns an error. The reason is that you installed the smarts package without any virtual environment such as virtualenv or conda (in other words, virtualenv or conda is recommended). It will return the error:
Error: No docs found, try running: make docs
1. Address already in use
Envision uses port 8081; this error shows that another program is using that port. Kill that process or restart the computer.
2. Cars are rendered, but roads are not rendered properly
`supervisord` assumes by default that starter-kit and dataset_public are at the same directory level; if they are not, modify the default path in supervisord.conf.
3. Envision opens, but no roads or cars are rendered
If localhost:8081 cannot be accessed, make sure you have opened the Envision port via the `supervisord` or `scl` command.
If localhost:8081 can be accessed but no cars or roads are rendered, make sure `headless` mode is not set and the `scl` scenario path is correct.
If you still have problems, raise them in the WeChat group or the Forums.
4. See Envision on a remote server
Use SSH port forwarding, for example:
`ssh -L 8081:localhost:8081 -L 8082:localhost:8082 -L 6006:localhost:6006 username@server_ip`
1. Import agent error
The submission zip file should just zip the outer directory, like `submission_example`, and that directory must contain a file named `agent.py`.
2. Import other modules error
This means you used some modules that are not installed in the evaluation environment; contact us in the WeChat group or the forums.
Build the scenarios with `scl scenario build-all ../dataset_public`.
Start: Aug. 14, 2020, 6 a.m.
Start: Oct. 11, 2020, midnight
Start: Oct. 12, 2020, midnight
Oct. 15, 2020, midnight