DAI2020 SMARTS Competition Track 2: Multi-Agent Collaboration

Organized by DAI2020

Current

Public Leaderboard
Aug. 14, 2020, midnight UTC

Next

Private Leaderboard
Oct. 11, 2020, midnight UTC

End

Competition Ends
Oct. 15, 2020, midnight UTC

Track 2: Multi-Agent Collaboration

Introduction

Autonomous driving is one of the most significant applications of AI. From eliminating road accidents due to human errors to massively reducing urban space devoted to parking, autonomous driving promises to fundamentally change our daily lives in many ways. Deep learning, reinforcement learning, and multi-agent learning have achieved phenomenal success in recent years, and are now being actively researched for use in autonomous driving. However, large-scale research competitions and standard benchmarks in autonomous driving have mainly focused on perception and prediction rather than planning and interaction. To help advance the frontier of autonomous driving research and to stimulate research that takes multi-agent interaction in driving seriously, we organize the SMARTS Autonomous Driving Competition at DAI2020.

Principles

Participants are expected to develop their autonomous driving planning and control solutions to tackle complex interactive traffic scenarios provided by the SMARTS simulation platform. The solutions will be evaluated according to their competence for driving, interaction, and generalization. Track 2 focuses on multi-agent safe driving, where the AI agent is expected to control one or more vehicles that coordinate to accomplish scenario-specific missions such as a left turn at a T-junction, an on-ramp merge, etc. In this track, participants will submit their autonomous driving agents (as model files) to the competition platform for automated evaluation. To reduce the variance of evaluation, missions will be run repeatedly with different random seeds. The evaluation metrics include:

  • Safety: percentage of routes or missions completed without critical infractions (e.g. crashes with other vehicles);
  • Time: time taken to finish the task;
  • Control quality: deviation from the centerline of the lane;
  • Closeness to the goal: the closer, the better.

Environment

SMARTS (Scalable Multi-Agent Reinforcement Learning Training School) is a first-of-its-kind simulation platform in that it is centrally focused on realistic dynamical driving interaction. It allows the construction and control of interaction scenarios that emulate real-world behaviors at different levels of granularity. In doing so, it serves to bring research on multi-agent learning closer to the reality of autonomous driving than ever before. We provide the install package in the starter kit; participants can get it from the Participate panel.

Task

For this track, in each scenario, participants are expected to develop a parameter-sharing multi-agent model that controls a group of agents to accomplish short missions. Scenarios comprise Ramp, Double Merge, T-junction, Crossroads, and Roundabout. Note that a parameter-sharing policy is required within the same scenario but not necessarily across scenarios.
Each submission will be evaluated on all of these scenarios. Moreover, each scenario contains multiple missions and multiple agent instances (all sharing the same set of policy parameters), and each agent instance will be randomly assigned a mission. In addition to the agent-controlled vehicles, there will also be background traffic vehicles driving in the scenario.


Submission

Your submission should include the trained model shared by all agents and an agent interface to be called by the evaluation program. See the starter-kit README for more details.

Contact

For any questions, do not hesitate to raise them in the forums or the WeChat group, or email dmnrlab@huawei.com.

                                                                                    

Evaluation Rules

Evaluation is based on the average performance over multiple episodes and includes the following metrics:

  • A - Safety: percentage of missions completed without critical infractions (e.g. crashes with other vehicles) -- if in one scenario there are 5 missions for 5 agent instances and 4 missions were completed, then the ratio is 0.8.
  • B - Time: time taken to finish the task;
  • C - Control quality: average deviation from the centerline of the lane; and
  • D - The distance of the agent to its mission goal, averaged over multiple scenarios -- if in one scenario there are 5 missions for 5 agents and the final distances from the agent instances to their respective goals are 0.1, 0.2, 0.4, 0.3 and 0.5, then the average agent distance in the current episode is (0.1+0.2+0.4+0.3+0.5)/5 = 0.3. For each scenario, D will be the mean value of the averaged agent distances over multiple episodes.
[score formula image]

The overall score falls in the [0, 1] interval. As shown in the formula above, each component is normalized respectively: by B_max, the maximum time allowed (such as 1,000 time steps); by W_road, half of the road width; and by D_max, the maximum tolerated distance to the goal, which varies across missions and scenarios.
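Since the score formula itself was an image on the original page, here is one plausible reading in code. The equal weighting of the four normalized components and the default values for W_road and D_max are assumptions; only the normalizers B_max, W_road, and D_max are stated in the text:

```python
def overall_score(A, B, C, D, B_max=1000.0, W_road=2.0, D_max=50.0):
    """Combine the four metrics into a single score in [0, 1].

    A: safety ratio, already in [0, 1].
    B: time taken, normalized by B_max (maximum time allowed).
    C: centerline deviation, normalized by W_road (half road width).
    D: distance to goal, normalized by D_max (max tolerated distance).

    NOTE: equal weighting is an assumption, and the W_road/D_max
    defaults are illustrative; the true formula was an image.
    """
    terms = [A, 1 - B / B_max, 1 - C / W_road, 1 - D / D_max]
    return sum(terms) / len(terms)
```

A perfect run (all missions safe, instantaneous, centered, at the goal) makes every normalized term 1, so the score is 1.0 under this reading.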

Legal Action Spaces

There are 4 action space types allowed in this track, for both training and evaluation. Specifically, they are:

  • ActionSpaceType.Continuous
  • ActionSpaceType.ActuatorDynamic
  • ActionSpaceType.Lane
  • ActionSpaceType.LaneWithContinuousSpeed

For more details, please read the docs provided in the starter kit.
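For orientation, the kinds of actions each type expects can be sketched as below. The exact formats are defined by SMARTS, so treat these sample values as our reading of them, not as authoritative, and confirm them against the starter-kit docs:

```python
# Assumed action formats per ActionSpaceType; verify against the
# starter-kit docs before relying on them.
SAMPLE_ACTIONS = {
    "Continuous": (0.3, 0.0, -0.1),        # (throttle, brake, steering)
    "ActuatorDynamic": (0.3, 0.0, 0.05),   # (throttle, brake, steering_rate)
    "Lane": "keep_lane",                   # one of a few discrete lane commands
    "LaneWithContinuousSpeed": (10.0, 0),  # (target speed, lane-change choice)
}


def sample_action(space_type):
    """Return an example action for the given action-space type."""
    return SAMPLE_ACTIONS[space_type]
```

The two lane-based types trade fine-grained control for a much smaller action space, which is often easier to train in multi-agent settings.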

 

Competition Rules

1. This competition is organized by the Decision Making and Reasoning Laboratory of Huawei Noah's Ark Lab and the APEX Lab of Shanghai Jiao Tong University. The Competition Organizer is responsible for the execution of this competition, including disbursement of the award to the competition winners.

2. This competition is public, but the Competition Organizer approves each user’s request to participate and may elect to disallow participation according to its own considerations.

3. Submission format: zipped directory including the trained model and all the necessary agent interface code to allow the running of the solution together with the simulation platform provided by the Competition Organizer.

4. Users: Each participant must create a CodaLab account to submit their solution for the competition. Only one account per user is allowed.

5. If you are entering as a representative of a company, educational institution, or other legal entity, or on behalf of your employer, these rules are binding for you individually and/or for the entity you represent or are an employee of. If you are acting within the scope of your employment as an employee, contractor, or agent of another party, you affirm that such party has full knowledge of your actions and has consented thereof, including your potential receipt of a prize. You further affirm that your actions do not violate your employer’s or entity's policies and procedures.

6. Teams: Participants are allowed to form teams. A team may have at most 3 participants. You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted after the date specified on the competition website. Participants who would like to form a team should review the ‘Competition Teams’ section on CodaLab’s ‘user_teams’ Wiki page. In order to form a valid team, the total submission count of all of a team’s participants must be less than or equal to the maximum number allowed for a team. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running.

7. Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted after the "Team mergers deadline", if such a deadline is listed on the competition website. In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed for a single team as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running. The organizers don’t provide any assistance regarding team mergers.

8. External data: You may use additional datasets beyond the software and data provided by the Competition Organizer to develop and test your models and submissions.

9. Competition Duration: Aug. 14, 2020 to Oct. 14, 2020.

10. Total Prize Amount (USD): $10,000

11. Prize Allocation:

  • 1st Place: $6,000

  • 2nd Place: $3,000

  • 3rd Place: $1,000

12. Upon being awarded a prize:

     12.1. The prize winner must agree to submit and deliver a technical presentation of their solution at the DAI conference.

     12.2. The prize winner must deliver to the Competition Organizer the software and data created for the purpose of the competition and used to generate the winning submission and associated documentation written in English. The delivered software and data must be capable of regenerating the winning submission and contain a description of the resources required to build and run the regenerated submission successfully.

     12.3. The prize winner will grant to the Competition Organizer a nonexclusive license to the winning solution’s software and data and represent that the winner has the unrestricted right to grant that license.

     12.4. The prize winner will sign and return all prize acceptance documents as may be required by the Competition Organizer.

13. If a team wins a monetary prize, Competition Organizer will allocate the prize money in even shares between team members unless the team unanimously contacts the Competition Organizer within three business days following the submission deadline to request an alternative prize distribution.

 

 

Terms and Legal Considerations

1. The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.

2. The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.

3. Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.

4. Officers, directors, employees, and advisory board members (and their immediate families and members of the same household) of the Competition Organizer (i.e. the Decision Making and Reasoning Laboratory of Huawei Noah’s Ark Lab and the APEX Lab of Shanghai Jiao Tong University) and their respective affiliates are not eligible to participate in the competition.

5. You agree to use reasonable and suitable measures to prevent persons who have not formally agreed to these rules from gaining access to the software and data provided by the Competition Organizer. You agree not to transmit, duplicate, publish, redistribute, or otherwise provide or make such software and data available to any party not participating in the competition. You agree to notify Competition Organizer immediately upon learning of any possible unauthorized transmission or unauthorized access of such software and data and agree to work with Competition Organizer to rectify any unauthorized transmission. You agree that participation in the competition shall not be construed as having or being granted a license (expressly, by implication, or otherwise) under, or any right of ownership in, any of the software and data.

6. By downloading the software and data provided by the Competition Organizer you agree to the following terms:

      6.1. You will not distribute the software and data.

      6.2. You accept full responsibility for your use of the software and data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the software and data.

7. By joining the competition, you affirm and acknowledge that you agree to comply with applicable laws and regulations, that you will not infringe upon any copyrights, intellectual property, or patents of another party in the software you develop in the course of the competition, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.

8. The Competition Organizer reserves the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the competition concerning your identity, residency, mailing address, telephone number, e-mail address, right of ownership, or information required for entering the competition, you may be immediately disqualified from the competition.

9. Participants grant to the Competition Organizer the right to use your winning submissions and the source code and data created for and used to generate the submission for any purpose whatsoever and without further approval.

10. Persons under the age of 18 are not allowed to participate.

11. Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.

12. Prize winnings will be transferred to the winner by a third party.

13. Competition prizes do not include tax payment. Any potential winner is solely responsible for all applicable taxes related to accepting the prize.

14. Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

15. Organizers may collect the following types of personal data from all the teams of each track, with your consent: (1) contact details: username and email address; (2) affiliation background: the university you are studying at or the company you are working for.

16. Organizers may use your personal data to (1) verify whether you are eligible to enter the competition; (2) gather statistics about participants’ affiliations (the affiliation data is anonymized); (3) deal with your queries in connection with the competition or any prize you win; and (4) determine winners and award prizes.

17. Organizers are committed to protecting your personal data; however, please note that no security measure is perfect. The organizers will keep your personal data for a period of one year after the competition closes.

Training Resources [NO COMPUTING RESOURCES ARE AVAILABLE NOW]

We will provide some computing resources for training; participants can send requests to dmnrlab@huawei.com from their registered email. Remember to provide your team information, including each member's name and email, institute, and country or region.

FAQ

Participate

1. Where can I get the competition package?

    After registering a CodaLab account, go to the Participate page and send a request.

2. How long will the participation request be approved?

    Within 6 hours; the result will be sent to your registered email, so please check it in time.

3. What's the deadline for team formation?

    Before the end of the first stage.

4. What's the max number of members in a team?

    Up to 3 members are allowed; see Competition Rules, item 6.

Environment Config

1. Fatal server error: (EE) Server is already active for display 1

    You can ignore this error; it just confirms that Xorg is already running.

2. Can the SMARTS simulator run on Windows?

    Our simulator was developed for Ubuntu (>=16.04) and macOS (>=10.15), and is not suitable for WSL1 or WSL2. To install it on Windows, some prerequisites must be met: (1) system version >= 10; (2) install it via Docker (>=19.03.7).

3. Cannot use SUMO? You can export the SUMO path in your bashrc manually:

# set SUMO HOME
vim ~/.bashrc
# write following command into bashrc
# for ubuntu
export SUMO_HOME="/usr/share/sumo"
# for macos
export SUMO_HOME="/usr/local/opt/sumo/share/sumo"
# refresh
source ~/.bashrc
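After sourcing your bashrc, you can sanity-check the variable from Python. This small helper is our own, not part of the starter kit:

```python
import os


def check_sumo_home():
    """Return (ok, message) describing whether SUMO_HOME looks usable."""
    path = os.environ.get("SUMO_HOME")
    if not path:
        return False, "SUMO_HOME is not set"
    if not os.path.isdir(path):
        return False, f"SUMO_HOME points to a missing directory: {path}"
    return True, f"SUMO_HOME OK: {path}"
```

If the directory check fails, verify where your package manager actually installed SUMO before editing bashrc again.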

4. When I run `scl docs`, it returns an error. The reason is that you installed the smarts package without a virtual environment such as virtualenv or conda (in other words, virtualenv or conda is recommended). It will return the error:

Error: No docs found, try running:
make docs

This means `scl` cannot find `smarts_docs` in `/usr/`, because it was installed under `/usr/local/`. You can fix this with a symbolic link: `ln -s /usr/local/smarts_docs /usr/smarts_docs`; then `scl docs` will work.
 
5. Core dumped when building scenarios.
  Since scenario building runs in parallel, this error means you do not have enough resources to build concurrently. Try building the scenarios one by one.
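Building sequentially can be sketched as below; the actual build command comes from the starter kit, so it is passed in as a parameter rather than hard-coded here:

```python
import subprocess


def build_scenarios_sequentially(scenario_dirs, build_cmd):
    """Run the build command for one scenario at a time instead of in parallel.

    build_cmd is a command prefix as a list (e.g. the starter kit's
    scenario-build command); each scenario directory is appended as
    the final argument.
    """
    results = []
    for scenario in scenario_dirs:
        # Sequential: wait for each build to finish before starting the
        # next, so peak memory stays at a single build's footprint.
        proc = subprocess.run(build_cmd + [scenario], check=True)
        results.append(proc.returncode)
    return results
```

This trades build time for a much lower peak memory footprint, which is usually what the core dump indicates you need.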

Envision

1. Address already in use

    Envision uses port 8081. This error means another program is already using that port; kill that process or restart the computer.

2. Cars are rendered, but roads are not rendered properly

    By default, `supervisord` assumes that starter-kit and dataset_public are at the same directory level; if they are not, modify the default path in supervisord.conf.

3. Envision opens, but no roads or cars are rendered

    If localhost:8081 cannot be accessed, make sure you have opened the Envision port via the `supervisord` or `scl` command.

    If localhost:8081 can be accessed but no cars or roads are rendered, make sure `headless` mode is not set and the `scl` scenario path is correct.

    If you still have problems, raise them in the WeChat group or the forums.

 

Submission

1. Import agent error

   The submission zip file should be created by zipping the outer directory (e.g. `submission_example`), and that directory must contain a file named `agent.py`.
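Producing a zip with that layout (the outer directory name preserved inside the archive) can be sketched with the standard library; `submission_example` is just the example name from the answer above:

```python
import os
import zipfile


def zip_submission(outer_dir, zip_path):
    """Zip the outer directory itself, so archive entries look like
    'submission_example/agent.py' rather than a bare 'agent.py'."""
    base = os.path.dirname(os.path.abspath(outer_dir))
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(outer_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the PARENT of outer_dir so the
                # outer directory name is kept inside the archive.
                zf.write(full, os.path.relpath(full, base))
```

A common mistake is running `zip` from inside the directory, which strips the outer directory name and triggers the import error above.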

2. Import other modules error

   This means you use some modules that are not installed in the evaluation environment; contact us in the WeChat group or the forums.

Timelines

  • Aug. 14, 2020 to Oct. 11, 2020, submission on public leaderboard
  • Oct. 11 to Oct. 14, a final-solution submission for each team and a final private evaluation period with the submitted solution.
  • Oct. 19, 2020, winners announcement.
  • Oct. 25, 2020, top-5 teams of each track will be invited to give a presentation at DAI 2020.
  • The final winners will be determined by the advisory committee based on the leaderboard score and the technical presentation materials.
 

Public Leaderboard

Start: Aug. 14, 2020, midnight

Description: Evaluate submissions on the public dataset. The maximum episode length for each scenario is 1,000.

Private Leaderboard

Start: Oct. 11, 2020, midnight

Competition Ends

Start: Oct. 12, 2020, midnight

End: Oct. 15, 2020, midnight

Leaderboard (top 3)

#  Username     Score
1  leafzs       0.69
2  alombard     0.64
3  ChengheWang  0.63