L2RPN NEURIPS 2020 - Robustness Track

Organized by BDonnot
Reward $15,000

First phase

Warmup phase
July 8, 2020, midnight UTC

End

Competition Ends
Nov. 30, 2022, midnight UTC

Learning to Run a Power Network - Neurips Track 1

A forum is available for comments, suggestions and help on Discord: https://discord.gg/cYsYrPT. You can also TEAM-UP there.

We encourage OPEN submissions as a great way to help beginners, foster collaboration and develop new ideas. Prizes will be awarded for them by October 12th. Check the Prizes and Open-Submission sections.

The Grid2op documentation can be found here: https://grid2op.readthedocs.io/en/latest/

If you would first like to get familiar with the problem and get hands-on quickly, a sandbox competition on a smaller case remains open as well: L2RPN Sandbox

L2RPN in a sustainable world competition: Robustness Track

Power grids transport electricity across states, countries and even continents. They are the backbone of power distribution, playing a central economic and societal role by supplying reliable power to industry, services, and consumers. Their importance appears even more critical today as we transition towards a more sustainable world within a carbon-free economy, and concentrate energy distribution in the form of electricity. Problems that arise within the power grid range from transient brownouts to complete electrical blackouts, which can create significant economic and social perturbations, i.e. de facto freezing society. Grid operators are still responsible for ensuring that a reliable supply of electricity is provided everywhere, at all times. With the advent of renewable energy, electric mobility, and limitations placed on engaging in new grid infrastructure projects, the task of controlling existing grids is becoming increasingly difficult, forcing grid operators to do "more with less". This challenge aims at testing the potential of AI to address this important real-world problem for our future.

In this track, develop your agent to be robust to unexpected events and keep delivering reliable electricity everywhere, even in difficult circumstances. An opponent, which we disclose to you, will attack some lines of the grid in an adversarial fashion every day at different times (you can think of cyber-attacks, for instance). You will have to overcome its attacks and keep operating the grid safely. You will be tested against that opponent on hidden new scenarios not present in the training set, to assess the robustness of your agent. The 24 test scenarios (over which we evaluate your submission) each run over a whole week and are selected from every month of the year.

Visualization of a scenario showing an attack on the NeurIPS Track 1 grid and environment. Your submission results will include such a visualization for your agents.

Try your own submission to get your own visualization!

A competition running in 3 phases:

  • Warmup Phase: this phase lasts until the 17th of August. It lets each participant get familiar with the problem, start developing interesting agents and make good submissions. This is a phase during which we welcome plenty of feedback on the clarity, ergonomics, and difficulty of the competition. At the end of this phase, we will improve the competition based on this feedback. Besides the training data, which will not change (barring a major unexpected issue), everything else may change marginally for improvements.
  • Validation Phase: this is the main phase of the competition, which will last until the 28th of October. During this phase, you will be evaluated on the same problem you will eventually be tested on in the last phase. It allows each participant to make several submissions, regularly test how their agent is improving, and see how it performs on the leaderboard.
  • Test Phase: this is an "automatic" phase in which we evaluate your last submission of the validation phase on different but similar test scenarios. This tests against agent overfitting and produces the final leaderboard, from which we will announce the winners of the track.

Make sure to comply with the rules of the competition (see Terms and Conditions section) to appear on the final leaderboard of the competition and possibly win the competition.

Besides rewarding the best agents, we favor collaboration and very much welcome open submissions (see Open Submission section) to allow everyone to improve from one another during the competition. Everyone can vote for the open submissions they like most. Participants sharing their submissions and having the most impact during the competition will be rewarded (such prizes will be announced after the first month of competition).

To proceed in the competition:

  • Visit our website https://l2rpn.chalearn.org/ for an interactive introduction to power grid operations 
  • Read the companion white paper, the description of the competition, and our L2RPN 2019 paper to understand the problem in more depth.
  • Visit the Instructions subsection to get started with the competition
  • Understand the rules of the game and the evaluation of your submission in the related subsection
  • Review the terms and conditions that you will have to accept to make your first submission.
  • Dive into the starting kit for a guided tour and tutorial to get all set for the competition and start making submissions. It also helps you TROUBLESHOOT your submission if you run into trouble.
  • Take a look at the Grid2op documentation

You are all set to get started, and we are looking forward to your first submission in the Participate section. Become the control room operator of the future!

Instructions

In this section, we give you the instructions to help you:

  • configure your own environment,
  • quickly get a first agent ready with a starting kit,
  • get additional data,
  • make a submission on Codalab for the challenge,
  • finally discover your results.

Download the Starting Kit - Install everything

A starting kit is available for you to download in the Participate section on codalab, along with the proper game environment for the competition. Several notebooks should help you understand how to properly run the Grid2op platform using chronics to train and test your agents.

The starting kit also gives details about how to check that your submission is valid and ready to run on the competition servers. To have an environment as close as possible to the one used by Codalab, the full updated list of packages and their versions is available at https://github.com/rte-france/Grid2Op/issues/97 (first message). The main packages are:

  • grid2op version 1.2.2
  • lightsim2grid (backend) version 0.2.4
  • l2rpn-baselines version 0.5.0
  • tensorflow version 2.3.0
  • pytorch version 1.6.0

And lots of others.
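For convenience, these versions can be pinned locally in a single command (the full list in the GitHub issue above remains the reference; note that the pytorch package is named "torch" on PyPI):

pip install grid2op==1.2.2 lightsim2grid==0.2.4 l2rpn-baselines==0.5.0 tensorflow==2.3.0 torch==1.6.0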

To only get the Grid2op Platform and L2RPN-baselines

The challenge is based on an environment (and not only a dataset) in which an agent can learn from interactions. It runs under the Grid2op platform.

The Grid2op platform can be installed like any Python package (if you only want to install grid2op; otherwise use the command from the starting kit):

pip install grid2op

Similarly the "baseline" python package can be installed with:

pip install l2rpn-baselines

Those packages can get updated during the competition to improve them.

Get the data

Once grid2op is installed, you can get the competition data (approximately 2.0 GB) directly from the internet. The download will happen automatically the first time you create the environment of the competition from within a Python script or shell:

import grid2op

# Track 1 - Robustness
env = grid2op.make("l2rpn_neurips_2020_track1_small")

# Track 2 - Adaptability
env = grid2op.make("l2rpn_neurips_2020_track2_small")

You can visit the "Good to Know" section for more information and parametrization, for example to allow faster learning.

The general help of the platform is available at https://grid2op.readthedocs.io/en/latest/ .

As most of you are probably not familiar with power systems in general, we have made some introductory notebooks on the problem we are tackling and on the grid2op platform. These notebooks are available without any installation thanks to "mybinder" at the following link: https://mybinder.org/v2/gh/rte-france/Grid2Op/master.

Make a submission

Essentially, a submission should be a ZIP file containing at least these two elements:

  • submission: a folder in which your agent is defined.
  • metadata: a file giving instructions to Codalab on how to process the submission (should never be changed).

In the starting kit, a script is provided to help you create your submission and check that it is valid:

python3 check_your_submission.py --help

/!\ This is a code submission challenge, meaning that participants have to submit their code (and not their results).

Upon reception, the submission will be read by Codalab and the code will be run on the competition servers. The detailed structure of the submission directory can be found in the starting kit.
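For illustration, a minimal agent inside the submission folder could look like the sketch below. The make_agent factory follows the template distributed in the starting kit; check your starting kit for the exact expected signature.

from grid2op.Agent import BaseAgent

class MySubmissionAgent(BaseAgent):
    """A placeholder agent that always plays the do-nothing action."""
    def act(self, observation, reward, done=False):
        return self.action_space({})  # an empty dict is the do-nothing action

def make_agent(env, submission_dir):
    # Codalab calls this factory to build your agent; load your trained
    # model from submission_dir here if you have one.
    return MySubmissionAgent(env.action_space)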

Then, to upload your submission on Codalab:

  • Go to the competition homepage
  • Click on "Participate"
  • Click on "Submit / View Results"
  • Click on "Submit" and select your ZIP file to submit it

Codalab will take some time to process the submission and will display the scores on the same page once the submissions have been processed. You may need to refresh the page. As explained in the rules, if your submission takes more than 20 minutes to run, a timeout error will be raised and your submission will be ignored.

See your results

 In the "Submit / View Results" sub-section in the Participate section, you can see the status of your submission. Once it is processed, you can review your submission, see the score it obtained and the time it took to run. When clicking on the blue cross next to your submission, different logs are available for additional information. You can also download your submission again if you want to. More importantly, you can get logs of your agent's behavior over different scenarios in the folder "output from scoring step". Several indicators over all the scenarios that the agent was run on can be visualized in the html file.

To compare your score with those of the other participants, go to the Results page, where the leaderboard is displayed. Be aware that only your last submission's score is considered there.

 

Competition environment (IMPORTANT UPDATE)

As of the beginning of the validation phase, and until the end of the test phase, the competition environment is updated to grid2op version 1.2.2, lightsim2grid version 0.2.4 and l2rpn_baselines version 0.5.0.

From now on, and unless a critical bug is found in any of the packages listed above, the competition package versions will not be modified (to ensure fairness between all submissions).

You can install the dependencies of the packages with the command given in this github issue (first message):  https://github.com/rte-france/Grid2Op/issues/97

If you want to use docker, you can retrieve the exact same image used for this competition with:

docker pull bdonnot/l2rpn:neurips.2020.4

 

DEPRECATED

Competition environment (deprecated)

Throughout the warmup phase, the environment stays the same: from the start until the end, grid2op version 1.2.1 is used. The Python packages you can use on Codalab are listed as the "challenge" dependency of grid2op. To replicate the environment of the competition, you can install it this way:

pip install grid2op[challenge]==1.2.1

If you are familiar with docker, all the code that is used by codalab is used through docker. You can get the exact version used by the competition with:

docker pull bdonnot/grid2op:1.2.1

IMPORTANT: see at the bottom of this file for an important update (concerning the grid2op version that has changed since the release of this competition)

Important update on competition environment (deprecated)

As of August 3rd, 2 p.m. UTC, the competition environment includes gym (version 0.17.2) and is updated to grid2op version 1.2.0 and lightsim2grid version 0.2.3.

This change was made to ensure the same behaviour of actions across different backends (which was not the case in some corner cases) and to further the compatibility with the OpenAI Gym framework (asked for by the community).

Please upgrade your versions accordingly.

Terms and Conditions

This challenge is governed by the  general ChaLearn contest rules .

Challenge specific Rules

This challenge starts on July 8th 2020 and ends on October 30th 2020. Prizes for winning teams are listed in the Prizes section.

  • This challenge runs in 3 phases in which you can submit your code and see your score on the leaderboard.
  • The organizers may provide additional baseline agents during the challenge to stimulate the competition.
  • Participants are limited to 20 submissions per day and 1000 in total per phase (except for the test phase, during which only the last submission of the validation phase will be automatically tested).
  • Submissions are limited to 500 MB in size.
  • Each submission has a limited time to finish all scenarios: 30 minutes.
  • We will check that your submission is valid: it should not change the environment of the game in any way. Doing so would be considered cheating.
  • Teams should use a common account, under a group email. Multiple accounts are forbidden. We allow participants to form teams during the "validation phase" of the competition, and only up to 2 weeks before the start of the test phase. If you form a team, please send the organizers an email (or a message on Discord) with the Codalab name and email address of each team member.
  • The final leaderboard (and the final ranking of the participants) will be based on scores in the Test Phase only.
  • To receive any prize, a team should agree to open-source its code at the end of the competition.
  • We strongly encourage all teams to share their code and make it accessible as public submissions, and further in the l2rpn-baselines python package (see https://github.com/rte-france/l2rpn-baselines; more information on the official discord https://discord.gg/cYsYrPT).
  • Anyone can make entries.

Open-submissions additional rules:

Open submissions are subject to the preceding rules like any usual submission. An open-source licence should also be attached to any open submission (details about licences can be found in the "Collaboration & open-submission" section).
 
In addition, we evaluate whether an open submission is original:
  • By original, we mean that a new open submission should be different from existing, older open submissions; otherwise it will be counted as a duplicate submission. If a submission improves upon an existing one, we ask participants to faithfully acknowledge it with a mention in their submission.py file and a thumbs up on Codalab.
  • Submissions will be overseen by a jury to check that results, code, models and actions are indeed different enough from existing submissions.
  • A new submission building upon an existing one (which must be acknowledged) can still be considered original if it has either:
    • a score increase of at least 3 points, or
    • a computation time decrease of at least 25%, with a score no more than 1 point below the existing submission.

Rules of the Game

Objective of the game

The objective of the competition is to design an agent that can successfully operate a power grid. Operating a power grid here means finding ways to modify how the objects are interconnected (aka "changing the topology") or to modify the productions, so that the grid remains safe (see "Conditions of Game Over") while minimizing the energy losses.

More information is given in the 1_Power_Grid_101_notebook provided in the starting kit.

If you have any question, we are here to answer you on the official discord: https://discord.gg/cYsYrPT

Conditions of Game Over

As any system, a power grid can fail to operate properly, as illustrated on the challenge website. This can occur under conditions such as:

  • consumption is not met because no electricity is flowing to some loads, or more than n power plants get disconnected (n = 1 for this challenge);
  • the grid gets split apart into isolated sub-grids, so that the whole grid is no longer connected.

These conditions can appear when power lines in the grid get disconnected after being overloaded. When a line gets disconnected, its load gets distributed over other power lines, which in turn might get overloaded and disconnected as well, leading to a cascading failure (blackout).

Conditions on Overloads

When the power flowing through a line increases above its thermal limit, the line becomes overloaded. It can stay overloaded for a few timesteps before it gets disconnected if no proper agent action is taken to relieve the overload (2 timesteps are allowed in this challenge; see the Parameters class in grid2op): this is what we call a "soft" overload. If the overload is too high (above 200% of the thermal limit in this challenge), the line gets disconnected immediately: this is a "hard" overload. If some lines are already disconnected and others are heavily loaded, this can lead to a very rapid cascading failure within a single timestep.
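For illustration, these quantities can be read directly from a grid2op observation (a minimal sketch; attribute names are from the grid2op observation documentation):

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()
overloaded = obs.rho > 1.0          # rho = flow / thermal limit for each powerline
steps_over = obs.timestep_overflow  # consecutive timesteps each line has spent overloaded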

Conditions on Actions

Actions can consist of:

  • reconnecting / disconnecting a powerline
  • changing the topology of the grid (choosing to isolate some objects [productions, loads, powerlines] from others)
  • modifying the production set points with redispatching actions

These parameters are accessible through the "Parameters" class of grid2op. During training, you can modify some of these parameters to relax some constraints and initialize your training better.

Be aware that some actions can be considered illegal by grid2op if they do not comply with certain conditions. In that case, no action is taken at that timestep, similar to a do-nothing action.
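To make this concrete, here is a sketch of how each action type is built with the grid2op action-space dictionary (the element ids are arbitrary examples; see the grid2op documentation on actions for the authoritative format):

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()

# reconnect powerline 3 (+1 to connect, -1 to disconnect)
reconnect = env.action_space({"set_line_status": [(3, +1)]})

# topology: move the origin end of powerline 5 to bus 2 of its substation
change_topo = env.action_space({"set_bus": {"lines_or_id": [(5, 2)]}})

# redispatching: ask generator 0 for +5 MW (other dispatchable units compensate)
redispatch = env.action_space({"redispatch": [(0, 5.0)]})

obs, reward, done, info = env.step(redispatch)
print(info["is_illegal"])  # True if grid2op replaced the action by a do-nothing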

 

Observations to use

Observations about the state of the grid can be retrieved from the environment and used by your agent. Please read the table in the grid2op documentation. You can recover information about current productions, loads and, more importantly, about the flows on the lines and the topology of the grid. You are free to use whatever observations are available; make the best of them!
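For example, a few commonly used observation attributes (a sketch; names from the grid2op documentation):

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()
print(obs.prod_p)       # active production of each generator (MW)
print(obs.load_p)       # active consumption of each load (MW)
print(obs.rho)          # loading rate of each powerline (flow / thermal limit)
print(obs.line_status)  # connection status of each powerline
print(obs.topo_vect)    # bus assignment of every object (the grid topology)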

Environment parameters for challenge

Some parameters of the environment can easily be modified before running an agent on it. By doing so you can modulate the difficulty of a given problem and define a learning strategy, as explained in the section "Good To Know". For the competition, you will be tested with the following default parameter values:

Difficulty = "challenge" (default)

  • NO_OVERFLOW_DISCONNECTION: False
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 3
  • NB_TIMESTEP_COOLDOWN_SUB: 3
  • NB_TIMESTEP_COOLDOWN_LINE: 3
  • HARD_OVERFLOW_THRESHOLD: 2.0
  • NB_TIMESTEP_RECONNECTION: 12
  • IGNORE_MIN_UP_DOWN_TIME: True
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: True
  • ENV_DC: False
  • FORECAST_DC: False
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

 

Evaluation

Your agent is evaluated on 24 weekly scenarios spread over every month of the year, possibly starting on different weekdays.

You can have a look at the 3_Rules_Data_Score_Agent notebook provided in the starting kit.

Definition of a cost function

 

The cost function that an agent is evaluated on represents the cost of operating a power grid, as well as the cost of any blackout that could occur. Let us explain the details in the following.

1) cost of energy losses

To begin with, recall that transporting electricity always generates some energy losses Eloss(t), due to the Joule effect in resistive power lines, at any time t:

  • Eloss(t) = Σl rl × yl(t)²

where rl is the resistance of powerline l and yl(t) the current flowing through it at time t.


At any time t, the operator of the grid is responsible for compensating those energy losses by purchasing on the energy market the corresponding amount of production at the marginal price p(t). We can therefore define the energy loss cost closses(t):

  • closses(t)=Eloss(t) × p(t)

2) cost of redispatching productions after actions on generators


Then we should consider that operator decisions can induce costs, especially when requiring market actors to perform specific actions, as they should be paid in return. Topological actions are mostly free, as the grid belongs to the power grid operator and no energy cost is involved. However, redispatching actions involve producers, who should get paid. When the grid operator asks to redispatch an amount of energy Eredispatch(t), some power plants will increase their production by Eredispatch(t) while others will compensate by decreasing their production by the same amount, to keep the power grid balanced. Hence, the grid operator pays both producers for this redispatched energy, at a cost credispatching(t) higher than the marginal price p(t) (possibly by some factor):

  • credispatching(t) = 2×Eredispatch(t)×p(t)

3) total cost of operations


If no flexibility is identified or integrated on the grid, operational costs related to redispatching can dramatically increase due to renewable energy sources, as was the case recently in Germany with an avoidable 1 billion €/year increase.

We can hence define our overall operational cost coperations(t):

  • coperations(t) = closses(t) + credispatching(t)


Formally, we can define an "episode" e successfully managed by an agent up until time tend (over a scenario of maximum length Te) by:

  • e = { (o1, a1), (o2, a2), ..., (otend, atend) }

where ot represents the observation at time t and at the action the agent took at time t. In particular, o1 is the first observation and otend is the last one: either there is a game over at time tend, or the agent reached the end of the scenario, in which case tend = Te.

An agent can either manage to operate the grid for the entire scenario or fail after some time tend because of a blackout. In case of a blackout, the cost cblackout(t) at a given time t would be proportional to the amount of consumption not supplied Load(t), at a price higher than the marginal price p(t) by some factor beta:

  • cblackout(t) = Load(t) × p(t) × beta, with beta > 1

Notice that Load(t) >> Eredispatch(t), Eloss(t), which means that the cost of a blackout is a lot higher than the cost of operating the grid, as expected. It is even higher if we further consider the secondary effects on the economy (more information can be found with this blackout cost simulator: https://www.blackout-simulator.com). Furthermore, a blackout does not last forever and power grids restart at some point. But for the sake of simplicity, while preserving most of the realism, these additional complexities are not considered here.

Now we can define our overall cost c for an episode e:

  • c(e) = Σt=1..tend coperations(t) + Σt=tend..Te cblackout(t)


We still encourage the participants to operate the grid as long as possible, but penalize them for the remaining time after the game is over, as this is a critical system and safety is paramount.

Finally, participants will be tested on N hidden scenarios of different lengths, varying from one day to one week, and on various difficult situations according to our baselines. This will test agent behavior in various representative conditions. Over those episodes, our final score to minimize will be:

  • Score = Σi=1..N c(ei)
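Putting the pieces together, the whole cost model can be transcribed in standard notation (a direct transcription of the formulas above, not an additional definition):

\[ E_{\mathrm{loss}}(t) = \sum_{l} r_l \, y_l(t)^2, \qquad c_{\mathrm{operations}}(t) = E_{\mathrm{loss}}(t)\,p(t) + 2\,E_{\mathrm{redispatch}}(t)\,p(t) \]
\[ c_{\mathrm{blackout}}(t) = \mathrm{Load}(t)\,p(t)\,\beta \quad (\beta > 1), \qquad c(e) = \sum_{t=1}^{t_{\mathrm{end}}} c_{\mathrm{operations}}(t) + \sum_{t=t_{\mathrm{end}}+1}^{T_e} c_{\mathrm{blackout}}(t) \]
\[ \mathrm{Score} = \sum_{i=1}^{N} c(e_i) \]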

Rescaling of the scores

For a naive agent (a "do-nothing" agent that never takes any action), the cost function can get really high (in the order of billions of dollars) in our scenarios, since a blackout will most likely occur.
Comparing two agents whose scores are on a billion scale is not easy (e.g. it is not obvious that 33025056 is worse than 33025053). So we decided to apply a linear transformation to improve readability and better represent the ability of an agent to be both robust and performant:
- -100.0: no step played, maximum blackout penalty for all steps of all scenarios.
- 0.0: the "do nothing" baseline.
- 80.0: playing all the scenarios to completion, ignoring line disconnections due to overload, while keeping losses equal to the difference between the productions and consumptions of the scenario. Lines in maintenance also get reconnected once the maintenance is finished.
- 100.0: the same agent as for a score of 80, but assuming it manages to optimize the losses down to 80% of the previously computed losses.

This means that:
- the score should be maximized rather than minimized
- having a score of 100 is possibly out of reach
- having a positive score is already pretty good!
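To make the transformation concrete, here is a minimal sketch assuming straight linear interpolation between the four anchor points above (the organizers' exact transformation may differ in its details):

def rescale_score(cost, cost_blackout_max, cost_do_nothing, cost_no_game_over, cost_loss_optim):
    """Map an episode cost to the [-100, 100] scale; lower cost = higher score."""
    if cost >= cost_do_nothing:
        # between "do nothing" (0) and the full-blackout penalty (-100)
        return -100.0 * (cost - cost_do_nothing) / (cost_blackout_max - cost_do_nothing)
    if cost >= cost_no_game_over:
        # between "do nothing" (0) and "no game over" (80)
        return 80.0 * (cost_do_nothing - cost) / (cost_do_nothing - cost_no_game_over)
    # between "no game over" (80) and the loss-optimized upper bound (100)
    return 80.0 + 20.0 * (cost_no_game_over - cost) / (cost_no_game_over - cost_loss_optim)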

 

Rescaling of the scores: illustration

The first figure shows the operational costs (highlighted in red) of a few "interesting" controllers (note that these are theoretical controllers; some might not be feasible in practice). We compare the score with the operational cost of four of them:

- "Com. Game Over" is the worst possible controler. He does a complete game over for all the scenario

- "Do nothing": is the agent that does nothing. It serves as baseline.

- "No Game Over" is a controler (theoretical) that would not game over, but would not take any action except reconnecting the line that have been in maintenance.

- "No Game Over + loss optim." is a controler that does better than the previous one in the sense that it will also takes care (and succeed) in managing the losses. To make sure we have an upper bound on it, we supposes that such a controler is 20% more efficient than the "No Game Over" controler in reducing the operational cost. [NB for most scenarios, this is probably out of reach]

 

 

Figure: representation of the operational costs of the few (theoretical) controllers of interest.

 

In reality, though, we want to emphasize that keeping the grid running is one problem, but reducing the operational cost is also really interesting. To this end, we decided to assign different scores, as described in the image below:

- "Com. Game Over" has the worst possible score of -100.00. If this score is displayed, it means your submission is probably not valid.

- "Do nothing": is the "reference" agent. It has the score of 0. (NB. In all cases do nothing agent has a score of 0 regardless of its capacity to succeed to manage completely a scenario. This means that you can have a slightly negative score in such cases if your agent did worst than the do nothing at managing the scenarios (in terms of operation costs) but it manage to get to the end of it.)

- "No Game Over" is assigned a score of +80.00. (NB in case the do nothing successfully manage all the scenario, this part is "skipped" see the note bellow)

- "No Game Over + loss optim." is assigned a score of +100.00.

For a scenario that a do-nothing agent can handle until the end, your score will be 0 if you finish the scenario but do no better than the do-nothing agent at managing the losses. So, in addition to being robust, managing the electricity losses efficiently will be especially rewarded on some scenarios.

 

Figure: representation of the operational costs of the few (theoretical) controllers of interest, as well as their associated scores.

Note on the hidden scenarios

For this competition, there are 24 hidden scenarios, each 7 days long, distributed over the months of the year and the days of the week.

Scenarios have been cherry-picked to offer different levels of difficulty and can start at arbitrary time steps (but chronics always start at midnight!). The time interval between two consecutive time steps is fixed and will always be 5 minutes (i.e. 2016 timesteps for a 7-day scenario).

 

Using Your own reward

You can use any reward you want in grid2op, different from the cost function used for the competition evaluation, both at training time (when you train your agent on your computer) and at test time.

To change the reward signal you are using, you can, at training time, specify it at the creation of the environment:

  
import grid2op
from grid2op.Reward import GameplayReward
env = grid2op.make("l2rpn_neurips_2020_trackX", reward_class=GameplayReward)
  

We invite you to have a look at the official grid2op documentation about rewards at https://grid2op.readthedocs.io/en/latest/reward.html

 

 

 

Collaboration & Open submissions

During the competition, we favor collaboration and open submissions, which will be rewarded along the way (details will be announced after the first month of competition). We believe that helping each other and building on top of one another's work is a great way to make steady progress for everyone.

Sharing your submission

Sharing your submission is very simple: from the same place where you make a submission and look at its results, there is a button to make your submission public or private. By default, your submission is private.

We recommend that your agent interface follow the L2RPN baseline template, for better reusability and reproducibility.

While your submission only needs to contain your trained model, your open submission will have an even greater impact if you share the code used to train your model as well. You can hence include it in your submission to share it with everyone. To go further, you could even publish it as a new baseline in the L2RPN-baselines repository.

Using and acknowledging a public submission

To see the available public submissions, click on the Public submission section near the Result section. You will then discover a board listing all of them. You can download each of them, and the download count is monitored. You can see which participant made a specific submission available and acknowledge them with a like if it helped you improve your approach to the problem. Finally, you can give anyone feedback on Discord about their public submission, to help them improve it in return.

Open submission licence

When a submission is made public in the competition, it can be used by anyone within the competition.

It is best practice to attach a licence to a public submission, so that other people know in which contexts they can use it, and so that it can be used outside of the competition. Here are the licences we recommend: MIT, BSD 2-clause, BSD 3-clause, Apache, MPL v2.0.

It is your responsibility to ensure copyright compliance when sharing or using open submissions. The organizers can help with this process, but cannot be held responsible in case of infringement.

The following prizes are sponsored by RTE, Google Research, University College London, EPRI and IQT Labs. ChaLearn is also providing support to the competition through Codalab.

Over the whole competition, $15,000 in total will be awarded. $12,000 will be shared between the 3 best teams willing to share their code open-source following the L2RPN baseline template. This will be divided as follows:

  • 1st rank: the best team will be awarded $6,000 total: $4,000 in money and $2,000 in travel grants to either attend NeurIPS or visit RTE in Paris and INRIA at University Paris-Saclay (the top university worldwide for mathematics according to the latest Shanghai ranking).
  • 2nd rank: the team will be awarded $4,000 total: $2,000 in money and $2,000 in travel grants, under the same conditions.
  • 3rd rank: the team will be awarded $2,000 total in travel grants, under the same conditions.

The remaining $3,000 in prizes will be used to reward open submissions.

Open-submission Prizes:

  • By October 12th, the first 5 participants who share an original open submission with a score above 2 each win a $200 money prize.
  • On October 12th, the 5 best original open submissions with a score above 2 each win a $400 money prize. Your best-scoring open submission will be the one considered at that point, so you can make several and improve them.
See the "Terms and Conditions" section for the definition of "original".
Winners will be announced on October 21st, after checking that the submissions are valid.
As a reminder: an open submission must have an open-source licence attached (we recommend MPL 2.0), as explained in the "Collaboration & open-submission" section. It will not be valid if one is missing. Make sure to understand the rules.

 

A fast backend simulator

Pandapower, a well-known open-source library in the power system community, is the default Grid2op backend and was used as the default backend in previous competitions. However, it can be a bit too slow when it comes to running thousands of simulations. To that end, the lightsim2grid simulator (https://github.com/BDonnot/lightsim2grid) was developed in C++, imitating pandapower's behavior and reproducing its results for our current power grid modeling. A speedup factor of around 30 can be achieved, which should be of great use when training an agent. LightSim2Grid is now the backend used when running and evaluating your submissions on Codalab for the current competition.

NB: at the moment this simulator is not natively available on Microsoft Windows machines (unless you manage to compile the SuiteSparse framework on Windows), but it can be installed through a docker image. For other platforms, installation instructions are provided in the above-mentioned GitHub repository.

Once installed you can use it this way:

import grid2op
from lightsim2grid.LightSimBackend import LightSimBackend
backend = LightSimBackend()
env = grid2op.make("l2rpn_wcci_2020", backend=backend)
# and now you can do the code you want to do

The performance increase can be rather large. On a desktop (Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz), using a simple agent (without any call to external libraries) on the "l2rpn_wcci_2020" environment, the default backend performs about 22 it/s whereas the same agent with LightSimBackend performs about 618 it/s (a speedup of roughly 28:1 in favor of LightSimBackend).

Simulate function

As operators do in real life, you can simulate the effect of different actions before taking a decision (a step, in the Grid2op framework). This comes at a cost in terms of computation time, but allows you to validate the short-term relevance of your actions.

any_action = env.action_space()  # or any other action
obs = env.reset()
# simulate the effect of the action on the forecasted next timestep
state_after_simulate, simulated_reward, simulated_done, simulated_info = obs.simulate(any_action, time_step=1)

Here the simulation is run on the forecasted state of the next timestep (if the forecast is available; in this competition, a next-timestep forecast is indeed available). With time_step=0, you run the simulation on your current state.
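For instance, reusing the names from the snippet above, you could pick among a few candidate actions the one with the best simulated reward (a minimal sketch; obs.simulate returns the simulated observation, reward, done flag and info dict):

candidates = [env.action_space({}), any_action]  # do-nothing plus any other candidates
best_action = max(candidates, key=lambda a: obs.simulate(a, time_step=1)[1])
obs, reward, done, info = env.step(best_action)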

Grid2op documentation

To understand all the features of Grid2op framework and use it to its full potential, you will find most of the answers on how to use it through its documentation: https://grid2op.readthedocs.io/en/latest/.

Reward design

You can specify your own reward, a function that can be different from the score of the competition. We believe that reward design is an important aspect of the competition, and participants should think about which reward best lets their agent learn and explore.

To do so, you simply need to change the reward class when creating the environment:

import grid2op
from grid2op.Reward import L2RPNReward
env = grid2op.make("l2rpn_wcci_2020", reward_class=L2RPNReward)

As always more information on this feature can be found at https://grid2op.readthedocs.io/en/latest/reward.html
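Beyond the built-in rewards, you can also write your own by subclassing BaseReward. Below is a minimal sketch (the __call__ signature follows the grid2op reward documentation; obs.rho is the per-line loading rate):

import grid2op
from grid2op.Reward import BaseReward

class MarginReward(BaseReward):
    """Reward the remaining thermal margin on the most loaded powerline."""
    def __init__(self):
        BaseReward.__init__(self)
        self.reward_min = 0.0
        self.reward_max = 1.0

    def __call__(self, action, env, has_error, is_done, is_illegal, is_ambiguous):
        if has_error:
            return self.reward_min
        obs = env.get_obs()
        return max(self.reward_min, 1.0 - float(obs.rho.max()))

env = grid2op.make("l2rpn_wcci_2020", reward_class=MarginReward)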

Curriculum Learning by changing difficulty levels

The various parameters used to configure an environment allow you to modulate how difficult that environment is for an agent.

You can refer to the full description of the parameters used in each level at the end of this section for more information.

For instance, it is possible to inhibit line disconnection on overload, hence avoiding any blackout and allowing an agent to operate and learn until the end of the scenario. This easy mode could be the preferred mode when you start training your agent. By modifying the environment parameters you can hence design a learning curriculum for your agent, making the environment more and more difficult until it eventually operates in the full environment setting (see the sketch after the list below).

For this competition, 4 difficulty levels are available. For example, you get an easier environment with:

import grid2op
env = grid2op.make("l2rpn_wcci_2020", difficulty="0")
# in this case the environment does not simulate powerline disconnection on overflow, for example.

 

In increasing order of difficulty (see the addendum for details on every level):

  • env = grid2op.make("l2rpn_wcci_2020", difficulty="0"): the easiest mode. No powerline is ever disconnected, the environment takes no automatic action, and there are no cooldowns at all.
  • env = grid2op.make("l2rpn_wcci_2020", difficulty="1"): much harder than the previous level. Some powerlines will be automatically disconnected after a while if they are overloaded.
  • env = grid2op.make("l2rpn_wcci_2020", difficulty="2"): relatively close to the "real" environment; the major difference is that it is more permissive about the actions you can perform (you can act on objects much more quickly).
  • env = grid2op.make("l2rpn_wcci_2020", difficulty="competition"): the default difficulty. This is the one used to assess the performance of your agent on Codalab and thus to rank the participants.
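A minimal sketch of such a curriculum, looping through the levels in increasing order of difficulty (the training step itself is left out):

import grid2op

for level in ["0", "1", "2", "competition"]:
    env = grid2op.make("l2rpn_wcci_2020", difficulty=level)
    # ... train your agent on this environment for some number of episodes,
    # then move on to the next, harder, level
    env.close()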

 

Grid2Viz - visual study tool of your agents

To inspect and study particular scenarios and compare the behavior of different agents, the Grid2Viz interface is a great tool to try (https://github.com/mjothy/grid2viz).

Grid2Viz front page to start studying a scenario and agent results

Chronix2Grid - generate additional chronics

To generate the chronics of the environment for the competition, we used the chronix2grid package. If you want to generate additional chronics, you can use it yourself: https://github.com/mjothy/ChroniX2Grid/tree/master/chronix2grid

Visualize the behaviour of your submission

Once your submission has run on the platform, you can visualize how your agent behaves. Two main plots are available; here is an example:

To get them, go to your submission, click on the "+" sign and then "Download output from prediction step".

You can visualize the number of time steps completed and the score per scenario, the cost of operating your power grid at each time step, and a GIF image that sums up the result of your agent on a given scenario. For example, the score per scenario:

Addendum: detailed options for each level

In this section we list the different values taken by the Parameters used for the default difficulty levels. For more information about the exact definition of these attributes, you can visit: https://grid2op.readthedocs.io/en/latest/parameters.html

Difficulty = "0"

  • NO_OVERFLOW_DISCONNECTION: true
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 9999
  • NB_TIMESTEP_COOLDOWN_SUB: 0
  • NB_TIMESTEP_COOLDOWN_LINE: 0
  • HARD_OVERFLOW_THRESHOLD: 9999
  • NB_TIMESTEP_RECONNECTION: 0
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

Difficulty = "1"

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 6
  • NB_TIMESTEP_COOLDOWN_SUB: 0
  • NB_TIMESTEP_COOLDOWN_LINE: 0
  • HARD_OVERFLOW_THRESHOLD: 3.0
  • NB_TIMESTEP_RECONNECTION: 1
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

Difficulty = "2"

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 3
  • NB_TIMESTEP_COOLDOWN_SUB: 1
  • NB_TIMESTEP_COOLDOWN_LINE: 1
  • HARD_OVERFLOW_THRESHOLD: 2.5
  • NB_TIMESTEP_RECONNECTION: 6
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

Difficulty = "challenge" (default)

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 3
  • NB_TIMESTEP_COOLDOWN_SUB: 3
  • NB_TIMESTEP_COOLDOWN_LINE: 3
  • HARD_OVERFLOW_THRESHOLD: 2.0
  • NB_TIMESTEP_RECONNECTION: 12
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

More flexibility?

Yes: in case all these settings are not enough, you can define your own set of parameters at the creation of the environment. For this you can do:

import grid2op
from grid2op.Parameters import Parameters
param = Parameters()
param.NB_TIMESTEP_OVERFLOW_ALLOWED = ...
param.NB_TIMESTEP_COOLDOWN_LINE = ...
# change any other attribute of the parameter class
env = grid2op.make("l2rpn_wcci_2020", param=param)
# and now the created environment is configured with your parameters

 

 

Credits

This challenge would not have been possible without the help of many people.

Principal coordinators:

  • Antoine Marot (RTE, France)
  • Isabelle Guyon (U. Paris-Saclay; UPSud/INRIA, France and ChaLearn, USA)

Protocol and task design:

  • Gabriel Dulac-Arnold (Google Research, France)
  • Olivier Pietquin (Google Research, France)
  • Isabelle Guyon (U. Paris-Saclay; UPSud/INRIA, France and ChaLearn, USA)
  • Patrick Panciatici (RTE, France)
  • Antoine Marot (RTE, France)
  • Benjamin Donnot (RTE, France)
  • Camilo Romero (RTE, France)
  • Jan Viebahn (TenneT, Netherlands)
  • Adrian Kelly (EPRI, Ireland)
  • Mariette Awad (American University of Beirut, Lebanon)
  • Yang Weng (Arizona State Univ., USA)

Data format, software interfaces, and metrics:

  • Benjamin Donnot (RTE, France)
  • Mario Jothy (Artelys, France)
  • Gabriel Dulac-Arnold (Google Research, France)
  • Aidan O'Sullivan (UCL/Turing Institute, UK)
  • Zigfried Hampel-Arias (Lab 41, USA)
  • Jean Grizet (EPITECH & RTE, France)

Environment preparation and formatting:

  • Carlo Brancucci (Encoord, USA)
  • Vincent Renault (Artelys, France)
  • Camilo Romero (RTE, France)
  • Bri-Mathias Hodge (NREL, USA)
  • Florian Schäfer (Univ. Kassel/pandapower, Germany)
  • Antoine Marot (RTE, France)
  • Benjamin Donnot (RTE, France)

Baseline methods and beta-testing:

  • Kishan Prudhvi Guddanti (Arizona State Univ., USA)
  • Loïc Omnes (ENSAE & RTE, France)
  • Jan Viebahn (TenneT, Netherlands)
  • Medha Subramanian (TenneT & TU Delft, Netherlands)
  • Benjamin Donnot (RTE, France)
  • Jean Grizet (EPITECH & RTE, France)
  • Patrick de Mars (UCL, UK)
  • Lucas Tindall (Lab 41 & UCSD, USA)

Other contributors to the organization, starting kit, and datasets, include:

  • Balthazar Donnon (RTE R&D and UPSud/INRIA, France)
  • Kimang Khun (Ecole Polytechnique, France)
  • Luca Veyrin-Forrer (U. Paris-Saclay; UPSud, France)
  • Marvin Lerousseau
  • Joao Araùjo

Our special thanks go to:

  • Marc Schoenauer (U. Paris-Saclay; UPSud/INRIA, France)
  • Patrick Panciatici (RTE R&D, France)
  • Olivier Pietquin (Google Brain, France)

The challenge is running on the Codalab platform, administered by Université Paris-Saclay and maintained by CKCollab LLC, with primary developers:

  • Eric Carmichael (CKCollab, USA)
  • Tyler Thomas (CKCollab, USA)

ChaLearn and RTE are the challenge organization coordinators. RTE, Google Research, UCL, EPRI and IQT Labs are sponsors and donated prizes.

Our last special thanks go to Google Cloud Platform for donating the Cloud credits to run the competition on Codalab for its whole duration, hence actively supporting research and making new breakthroughs possible.

Get the Starting Kit

We put at your disposal a starting kit that you can download in the Participate section. It gives you an easy start for the competition, in the form of several notebooks and materials that explain the objectives of this competition, how to participate, and how to get started relatively smoothly.

  • 1_Power_Grid_101_notebook.ipynb explains the problem of power grid operation on a small grid using the grid2op platform.

 

  • 2_Neurips_Track1_Opponent.ipynb or 2_Neurips_Track_MultiMix (the name depends on the track you want to compete in) explains in more detail the objective of this track, how to download the environment (the data) and how to run your agent. It also shows how to define an agent, test it to make sure it is running correctly, and make a submission. In particular, this notebook illustrates how to check that your submission is valid and ready to run on the competition servers.

 

  • 3_Rules_Data_Score_Agent.ipynb details the rules of the competition and explains the dataset relatively concisely. It is also where the score (used to establish the leaderboard) is defined. Finally, you will be able to create a quick agent in this notebook.

 

  • 4_SubmitToCodalab.ipynb details the submission process on Codalab. In this competition you are required to submit code; this notebook explains how to do that using the Codalab interface.

 

  • 5_DebugYourSubmission.ipynb As stated above, you are asked to submit code that will be evaluated on private environments. This means that code that runs perfectly fine on your machine will be run on remote machines in the cloud. The process of providing such reproducible code can be counter-intuitive, and this last notebook explains how to fix the most common issues participants encountered in previous competitions.

If you need any help, do not hesitate to contact the competition organizers on the dedicated discord forum server that we opened for the competition: https://discord.gg/cYsYrPT

Download       Size (MB)   Phase
Starting Kit   47.369      #1 Warmup phase
Starting Kit   47.369      #2 Development phase
Starting Kit   47.369      #3 Test phase
Starting Kit   47.369      #4 Legacy phase

Warmup phase

Start: July 8, 2020, midnight

Description: Warmup phase: you can try your models in this phase

Development phase

Start: Aug. 19, 2020, 1 p.m.

Description: Validation Phase: your models will be tested on a validation dataset selected with the same rules as the test dataset

Test phase

Start: Nov. 1, 2020, 12:10 a.m.

Description: Test Phase: your last model will be tested only once on this private dataset

Legacy phase

Start: Nov. 1, 2020, 12:10 a.m.

Description: Legacy Phase: same environment as the neurips competition, would you have beaten the best submissions made during Neurips 2020?

Competition Ends

Nov. 30, 2022, midnight
