A forum is available for comments, suggestions and help on discord: https://discord.gg/cYsYrPT. You can also TEAM-UP there.
We encourage OPEN-submissions as a great way to help beginners, foster collaboration and develop new ideas. Prizes will be awarded for them by October 12th. Check the Prizes and Open-Submission sections.
The Grid2op documentation can be found here: https://grid2op.readthedocs.io/en/latest/
If you would like to get familiar with the problem first and quickly get hands-on, a sandbox competition on a smaller case remains open as well: L2RPN Sandbox
Power grids transport electricity across states, countries and even continents. They are the backbone of power distribution, playing a central economical and societal role by supplying reliable power to industry, services, and consumers. Their importance appears even more critical today as we transition towards a more sustainable world within a carbon-free economy, and concentrate energy distribution in the form of electricity. Problems that arise within the power grid range from transient brownouts to complete electrical blackouts, which can create significant economic and social perturbations, i.e. de facto freezing society. Grid operators are still responsible for ensuring that a reliable supply of electricity is provided everywhere, at all times. With the advent of renewable energy, electric mobility, and limitations placed on engaging in new grid infrastructure projects, the task of controlling existing grids is becoming increasingly difficult, forcing grid operators to do “more with less”. This challenge aims at testing the potential of AI to address this important real-world problem for our future.
In this track, develop your agent to be robust to unexpected events and keep delivering reliable electricity everywhere, even in difficult circumstances. An opponent, which we disclose to you, will attack some lines of the grid in an adversarial fashion every day at different times (you can think of cyber-attacks, for instance). You will have to overcome its attacks and keep operating the grid safely. You will be tested against that opponent on hidden new scenarios not present in the training set, to assess the robustness of your agent. The 24 test scenarios (over which we evaluate your submission) each run over a whole week and are selected from every month of the year.
figure: visualization of a scenario showing an attack on the NeurIPS track 1 grid and environment. Your submission results will come with such a visualization for your agents.
Try your own submission to get your own visualization!
Make sure to comply with the rules of the competition (see Terms and Conditions section) to appear on the final leaderboard of the competition and possibly win the competition.
Besides rewarding the best agents, we favor collaboration and very much welcome open submissions (see Open Submission section) to allow everyone to learn from one another during the competition. Everyone can vote for the open submissions they like most. Participants sharing their submissions and having the most impact during the competition will be rewarded (such prizes will be announced after the first month of competition).
You are ready to get started, and we are looking forward to your first submission in the Participate section. Become the control room operator of the future!
In this section, we give you the instructions to help you:
A starting kit is available for you to download in the Participate section on codalab, along with the proper game environment for the competition. Several notebooks should help you understand how to properly run the Grid2op platform using chronics to train and test your agents.
The starting kit also gives details about how to check that your submission is valid and ready to run on the competition servers. To have an environment as close as possible to the one used by codalab, the full updated list of the packages and their versions is available at https://github.com/rte-france/Grid2Op/issues/97 (first message). The main packages are:
And lots of others.
The challenge is based on an environment (and not only a dataset) in which an agent can learn from interactions. It runs under the Grid2op platform.
The Grid2op platform can be installed like any python package with the command below (if you want to install only grid2op; otherwise run the previous command from the starting kit):
pip install grid2op
Similarly the "baseline" python package can be installed with:
pip install l2rpn-baselines
These packages may be updated during the competition to improve them.
Once grid2op is installed, you can get the competition data (approximately 2.0 GB) directly from the internet. This download will happen automatically the first time you create the environment of the competition from within a python script or shell:
import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")  # for Track 1 - Robustness

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track2_small")  # for Track 2 - Adaptability
You can visit the "Good to Know" section for more information and parametrization, for example to allow faster learning.
The general help of the platform is available at https://grid2op.readthedocs.io/en/latest/ .
As most of you are probably not familiar with power systems in general, we have made some introductory notebooks about the problem we are tackling and the grid2op platform. These notebooks are available without any installation thanks to "mybinder" at the following link: https://mybinder.org/v2/gh/rte-france/Grid2Op/master.
Essentially, a submission should be a ZIP file containing at least these two elements:
In the starting kit, a script is provided to help you create your submission and check that it is valid:
python3 check_your_submission.py --help
/!\ This is a code submission challenge, meaning that participants have to submit their code (and not their results).
Upon reception, the participant's submission will be read by Codalab and the code will be run on the competition servers. The detailed structure of the submission directory can be found in the starting kit.
Then, to upload your submission on Codalab:
Codalab will take some time to process the submission and will display the scores on the same page once the submissions have been processed. You may need to refresh the page. As explained in the rules, if your submission takes more than 20 minutes to run, a timeout error will be raised and your submission will be ignored.
In the "Submit / View Results" sub-section in the Participate section, you can see the status of your submission. Once it is processed, you can review your submission, see the score it obtained and the time it took to run. When clicking on the blue cross next to your submission, different logs are available for additional information. You can also download your submission again if you want to. More importantly, you can get logs of your agent's behavior over different scenarios in the folder "output from scoring step". Several indicators over all the scenarios that the agent was run on can be visualized in the html file.
To compare your score to the ones of the other participants, please go on the Results page. The Leaderboard is displayed there. Be aware that only your last submission's score is considered there.
As of the beginning of the validation phase, and until the end of the test phase, the competition environment will be updated with grid2op version 1.2.2, lightsim2grid version 0.2.4 and l2rpn_baselines version 0.5.0.
From now on, and except if a really impactful bug is found in any of the above listed packages, the competition package versions will not be modified (to ensure fairness between all submissions).
You can install the dependencies of the packages with the command given in this github issue (first message): https://github.com/rte-france/Grid2Op/issues/97
If you want to use docker, you can retrieve the exact same image used for this competition with:
docker pull bdonnot/l2rpn:neurips.2020.4
DEPRECATED
Throughout the warm-up competition phase, the environment will be the same. From the start until the end, grid2op version 1.2.1 is used. The python packages you can use on codalab are listed as the "challenge" dependency of grid2op. To replicate the environment of the competition you can install it this way:
pip install grid2op[challenge]==1.2.1
If you are familiar with docker: all the code used by codalab runs through docker. You can get the exact image used by the competition with:
docker pull bdonnot/grid2op:1.2.1
IMPORTANT: see at the bottom of this file for an important update (concerning the grid2op version that has changed since the release of this competition)
As of August 3rd, 14:00 UTC, the competition environment will include gym (version 0.17.2) and will be updated with grid2op version 1.2.0 and lightsim2grid version 0.2.3.
This change is made to ensure the same behaviour of the actions on different backends (which was not the case in some corner cases) and to further the compatibility with the OpenAI gym framework (asked for by the community).
Please upgrade your versions accordingly.
This challenge is governed by the general ChaLearn contest rules.
This challenge starts on July 8th 2020 and ends on October 30th 2020. Prizes for winning teams are listed in the Prizes section.
The objective of the competition is to design an agent that can successfully operate a power grid. Operating a power grid here means: finding ways to modify how the objects are interconnected (aka "changing the topology") or modifying the productions, to make sure the grid remains safe (see "Conditions of Game Over") while minimizing the energy losses.
More information is given in the 1_Power_Grid_101_notebook provided in the starting kit.
If you have any question, we are here to answer you on the official discord: https://discord.gg/cYsYrPT
As any system, a power grid can fail to operate properly, as illustrated on the challenge website. This can occur under conditions such as:
These conditions can appear when power lines in the grid get disconnected after being overloaded. When a line gets disconnected, its load gets distributed over other power lines, which in turn might get overloaded and thus disconnected as well, leading to a cascading failure (blackout).
When the power in a line increases above its thermal limit, the line becomes overloaded. It can stay overloaded for a few timesteps before it gets disconnected, if no proper agent action is taken to relieve this overload (2 timesteps are allowed in this challenge, see the Parameters class in grid2op); this is what we call a "soft" overload. If the overload is too high, the line gets disconnected immediately (above 200% of the thermal limit in this challenge); this is a "hard" overload. At some point this can lead to a very rapid cascading failure within a single timestep, if some lines already got disconnected and other lines are heavily loaded.
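As an illustration, grid2op exposes the per-line loading in the observation as obs.rho (the ratio of current flow to thermal limit), so you can monitor soft and hard overloads yourself. A minimal sketch (the 2.0 threshold comes from the 200% hard-overload rule above):

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()
# rho is the flow / thermal-limit ratio of each powerline
soft_overload = (obs.rho > 1.0) & (obs.rho <= 2.0)  # disconnected after 2 timesteps if not relieved
hard_overload = obs.rho > 2.0                       # disconnected immediately
print("lines in soft overload:", soft_overload.nonzero()[0])
print("lines in hard overload:", hard_overload.nonzero()[0])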
Actions can consist of:
These parameters are accessible through the "Parameters" class of grid2op. During training, you can modify some of these parameters to relax some constraints and initialize your training better.
Be aware that some actions can be considered illegal by grid2op if they don't comply with some conditions. In that case, no action will be taken at that timestep, similar to a do-nothing.
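For example, you can detect this after the fact through the info dictionary returned by env.step, which contains an "is_illegal" flag. A minimal sketch:

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()
action = env.action_space()  # do-nothing here; replace with any action you want to test
obs, reward, done, info = env.step(action)
if info["is_illegal"]:
    print("the action was illegal and was replaced by a do-nothing")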
Observations about the state of the grid can be retrieved from the environment to be used by your agent. Please read the table in the grid2op documentation. You can recover information about current productions, loads, and, more importantly, about the flows over the lines and the topology of the grid. You are free to use whatever observations are available; make the best of them!
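For instance, here are a few observation attributes you can read directly (a sketch; the full list of attributes is in the grid2op documentation linked above):

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()
print(obs.prod_p)       # active production of each generator (MW)
print(obs.load_p)       # active consumption of each load (MW)
print(obs.rho)          # loading ratio of each powerline (flow / thermal limit)
print(obs.line_status)  # connection status of each powerline
print(obs.topo_vect)    # busbar assignment of every object (the grid topology)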
Some parameters of the environment can easily be modified before running an agent on it. By doing so you can modulate the difficulty of a given problem and define a learning strategy, as explained in the section "Good To Know". For the competition, you will be tested with the following default parameter values:
Your agent is evaluated on 24 weekly scenarios over every month of the year and possibly starting on different weekdays.
You can have a look at the 3_Rules_Data_Score_Agent notebook provided in the starting kit.
Definition of a cost function
The cost function that an agent is evaluated on represents the cost of operating a power grid, as well as the cost of any blackout that could occur. Let us explain the details in the following.
1) cost of energy losses
To begin with, we recall that transporting electricity always generates some energy losses $E_{loss}(t)$ due to the Joule effect in resistive power lines, at any time $t$:
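The equation here was an image in the original page; a plausible reconstruction from the Joule-effect definition (with $r_l$ the resistance of line $l$, $i_l(t)$ the current through it, and $\Delta t$ the timestep duration, all assumed notations) is:

$$E_{loss}(t) = \sum_{l \in \text{lines}} r_l \, i_l(t)^2 \, \Delta t$$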
At any time $t$, the operator of the grid is responsible for compensating those energy losses by purchasing on the energy market the corresponding amount of production at the marginal price $p(t)$. We can therefore define the following energy loss cost $c_{losses}(t)$:
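The original formula was an image; from the definitions above, it should read:

$$c_{losses}(t) = p(t) \times E_{loss}(t)$$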
2) cost of redispatching productions after actions on generators
Then we should consider that the operator's decisions when taking an action can induce costs, especially when requiring market actors to perform specific actions, as they should be paid in return. Topological actions are mostly free, as the grid belongs to the power grid operator and no energy cost is involved. However, redispatching actions involve producers, who should get paid. When the grid operator asks to redispatch energy $E_{redispatch}(t)$, some power plants will increase their production by $E_{redispatch}(t)$ while others will compensate by decreasing their production by the same amount to keep the power grid balanced. Hence, the grid operator will pay both producers for this redispatched energy at a cost $c_{redispatching}(t)$ higher than the marginal price $p(t)$ (possibly by some factor):
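The original formula was an image; a reconstruction consistent with the text, with $\alpha > 1$ standing for the (unspecified) markup factor:

$$c_{redispatching}(t) = \alpha \, p(t) \, E_{redispatch}(t)$$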
3) total cost of operations
If no flexibility is identified or integrated on the grid, operational costs related to redispatching can dramatically increase due to renewable energy sources as was the case recently in Germany with **an avoidable 1 billion €/year increase**.
We can hence define our overall operational cost $c_{operations}(t)$:
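The original formula was an image; from the two costs defined above it should read:

$$c_{operations}(t) = c_{losses}(t) + c_{redispatching}(t)$$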
Formally, we can define an "episode" $e$ successfully managed by an agent up until time $t_{end}$ (over a scenario of maximum length $T_e$) by:
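The original definition was an image; a reconstruction consistent with the explanation below is:

$$e = \left(o_1, a_1, o_2, a_2, \ldots, a_{t_{end}-1}, o_{t_{end}}\right)$$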
where $o_t$ represents the observation at time $t$ and $a_t$ the action the agent took at time $t$. In particular, $o_1$ is the first observation and $o_{t_{end}}$ is the last one: either there is a game over at time $t_{end}$, or the agent reached the end of the scenario, in which case $t_{end} = T_e$.
An agent can either manage to operate the grid for the entire scenario or fail after some time $t_{end}$ because of a blackout. In case of a blackout, the cost $c_{blackout}(t)$ at a given time $t$ would be proportional to the amount of consumption not supplied, $Load(t)$, priced higher than the marginal price $p(t)$ by some factor $\beta$:
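The original formula was an image; a reconstruction consistent with the text is:

$$c_{blackout}(t) = \beta \, p(t) \, Load(t), \quad \beta > 1$$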
Notice that $Load(t) \gg E_{redispatch}(t),\ E_{loss}(t)$,
which means that the cost of a blackout is a lot higher than the cost of operating the grid as expected. It is even higher if we further consider the secondary effects on the economy (More information can be found on this blackout cost simulator: https://www.blackout-simulator.com). Furthermore, a blackout does not last forever and power grids restart at some point. But for the sake of simplicity while preserving most of the realism, all these additional complexities are not considered here.
Now we can define our overall cost c for an episode e:
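The original formula was an image; a reconstruction consistent with the text (operational costs until the game over, blackout costs for the remaining timesteps) is:

$$c(e) = \sum_{t=1}^{t_{end}} c_{operations}(t) \;+\; \sum_{t=t_{end}+1}^{T_e} c_{blackout}(t)$$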
We still encourage the participants to operate the grid as long as possible, but penalize them for the remaining time after the game is over, as this is a critical system and safety is paramount.
Finally, participants will be tested on N hidden scenarios of different lengths, varying from one day to one week, and on various difficult situations according to our baselines. This will test agent behavior in various representative conditions. Under those episodes, our final score to minimize will be:
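The original formula was an image; a plausible reconstruction, summing the episode costs over the $N$ hidden scenarios, is:

$$\text{Score} = \sum_{i=1}^{N} c(e_i)$$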
For a naive agent (a "do-nothing" agent that never takes any action) the cost function can get really high (on the order of billions of $) in our scenarios, since a blackout will most likely occur in a scenario.
Comparing two agents whose scores are on a billion scale is not easy (e.g. it is not clear that 33025056 is worse than 33025053). So we decided to apply linear transformations to improve readability and better represent the ability of an agent to be robust and performant:
- -100.0: no step played, maximum blackout penalty for all steps of all scenarios.
- 0.0: the "do nothing" baseline.
- 80.0: playing all the scenarios completely, ignoring line disconnections due to overload, while keeping losses equal to the difference between productions and consumptions of the scenario. Lines in maintenance also get reconnected once the maintenance is finished.
- 100.0: the same agent as for a score of 80, but considering that it manages to optimize the losses down to 80% of the previously computed losses.
This means that:
- the score should be maximized rather than minimized
- having a score of 100 is possibly out of reach
- having a positive score is already pretty good!
The first figure shows the operational costs (highlighted in red) of a few "interesting" controllers (note that these are "theoretical" controllers; some might not be feasible in practice). We compare the score with the operational cost of four of them:
- "Com. Game Over" is the worst possible controler. He does a complete game over for all the scenario
- "Do nothing": is the agent that does nothing. It serves as baseline.
- "No Game Over" is a controler (theoretical) that would not game over, but would not take any action except reconnecting the line that have been in maintenance.
- "No Game Over + loss optim." is a controler that does better than the previous one in the sense that it will also takes care (and succeed) in managing the losses. To make sure we have an upper bound on it, we supposes that such a controler is 20% more efficient than the "No Game Over" controler in reducing the operational cost. [NB for most scenarios, this is probably out of reach]
figure: representation of the operational costs of the few (theoretical) controllers of interest.
In reality though, we want to emphasize that operating the grid without a game over is one part of the problem, but reducing the operational cost is also really interesting. To this end, we decided to assign different scores as described in the image below:
- "Com. Game Over" has the worst possible score of -100.00. If this score is displayed, it means your submission is probably not valid.
- "Do nothing": is the "reference" agent. It has the score of 0. (NB. In all cases do nothing agent has a score of 0 regardless of its capacity to succeed to manage completely a scenario. This means that you can have a slightly negative score in such cases if your agent did worst than the do nothing at managing the scenarios (in terms of operation costs) but it manage to get to the end of it.)
- "No Game Over" is assigned a score of +80.00. (NB in case the do nothing successfully manage all the scenario, this part is "skipped" see the note bellow)
- "No Game Over + loss optim." is assigned a score of +100.00.
In case of a scenario that a "do-nothing" agent can handle until the end, your score will be 0 if you finish the scenario and do not manage the losses better than a do-nothing. So in addition to being robust, managing the electricity losses efficiently will be especially rewarded on some scenarios. A sketch of the resulting piecewise-linear rescaling follows the figure below.
figure: representation of the operational costs of the few (theoretical) controllers of interest, as well as their associated scores.
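To make the rescaling concrete, here is a sketch of the piecewise-linear mapping described above (an illustration only: the function name and the linear-interpolation assumption are ours, and the actual scoring script may differ in its details):

def rescale_score(cost, worst_cost, do_nothing_cost, no_game_over_cost, loss_optim_cost):
    """Map an episode cost onto the [-100, 100] scale by linear interpolation
    between the four anchor controllers (hypothetical reconstruction)."""
    if cost >= do_nothing_cost:
        # between "do nothing" (score 0) and complete game over (score -100)
        return -100.0 * (cost - do_nothing_cost) / (worst_cost - do_nothing_cost)
    if cost >= no_game_over_cost:
        # between "do nothing" (score 0) and "no game over" (score 80)
        return 80.0 * (do_nothing_cost - cost) / (do_nothing_cost - no_game_over_cost)
    # between "no game over" (score 80) and "no game over + loss optim." (score 100)
    return 80.0 + 20.0 * (no_game_over_cost - cost) / (no_game_over_cost - loss_optim_cost)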
For this competition, there are 24 hidden scenarios, each 7 days long, distributed over the months of the year and over the days of the week.
Scenarios have been cherry-picked to offer different levels of difficulty, and can start at arbitrary time steps (but chronics always start at midnight!). The time interval between two consecutive time steps is fixed and will always be 5 minutes.
You can use any reward you want in grid2op, different from the cost function used for competition evaluation, both at training time (when you train your agent on your computer) and at test time.
To change the reward signal you are using, you can, at training time, specify it at the creation of the environment:
import grid2op
from grid2op.Reward import GameplayReward
env = grid2op.make("l2rpn_neurips_2020_trackX", reward_class=GameplayReward)
We invite you to have a look at the official grid2op documentation about rewards at https://grid2op.readthedocs.io/en/latest/reward.html
During the competition, we favor collaboration and open submissions, which will be rewarded along the competition (the details will be announced after the first month of competition). We believe that helping each other and building on top of one another is a great way to make steady progress for everyone.
Sharing your submission is very simple: from the same location you make a submission and look at its result, there is a button to enable or disable sharing your submission in public. By default, your submission is private.
We recommend that your agent interface follows the L2RPN baseline template for better reusability and reproducibility.
While your submission only needs to contain your trained model, your open submission will have an even greater impact if you share the code to train your model as well. You can hence include it in your submission to share it with everyone. To go further, you could even publish it as a new baseline in the L2RPN-baselines repository.
To see available public submissions, click on the Public Submissions section near the Results section. You will then discover a board listing all of them. You can download each of them, and the download count is monitored. You can see which participant made a specific submission available and acknowledge them with a like if it was of use in improving your approach to the problem. Finally, you can give any participant feedback on Discord about their public submission, to help them improve it in return.
When making a submission public in the competition, it can be used by anyone within the competition.
It is best practice to attach a licence to a public submission, so that other people know in which context they can use it and so that it can be used outside of the competition. Here are the licences we recommend: MIT, BSD 2-clause, BSD 3-clause, Apache, MPL v2.0.
It is your responsibility to ensure copyright compliance when sharing or using open submissions. Organizers will be able to help with this process, but cannot be held responsible in case of infringement.
The following prizes are sponsored by RTE, Google Research, University College of London, EPRI and IQT Labs. Chalearn is also providing support to the competition as well through Codalab.
Over the whole competition, 15 000$ in total will be awarded. 12 000$ will be shared between the 3 best teams willing to share their code open-source following the L2RPN baseline template. This will be divided as follows:
The remaining 3 000$ in prizes will be used to reward open submissions.
Pandapower, a well-known open-source library in the power system community, is the default Grid2op backend and has been used as the default backend in previous competitions. However, it can be a bit too slow when it comes to running thousands of simulations. To that end, the LightSim2Grid simulator (https://github.com/BDonnot/lightsim2grid) was developed in C++, imitating pandapower's behavior and reproducing its results for our current power grid modeling. A speedup factor of 30 can be achieved, which should be of great use when training an agent. LightSim2Grid is now the backend used when running and evaluating your submissions on Codalab for the current competition.
NB: at the moment this simulator is not natively available on Microsoft Windows based machines (unless you manage to compile the SuiteSparse framework on Windows), but can be installed through a docker image. For the other platforms, installation instructions are provided in the above-mentioned github.
Once installed you can use it this way:
import grid2op
from lightsim2grid.LightSimBackend import LightSimBackend
backend = LightSimBackend()
env = grid2op.make("l2rpn_wcci_2020", backend=backend)
# and now you can do the code you want to do
The performance increase can be rather large. On a desktop (Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz), a regular agent (without any call to external libraries) on "l2rpn_wcci_2020" performs 22 it/s with the default backend, whereas the same agent with LightSimBackend performs 618 it/s (a speedup of around 28:1 in favor of LightSimBackend).
As operators do in real life, you can simulate the effect of different actions before taking a decision (a step in the Grid2op framework). This comes at a cost in terms of computation time, but allows you to validate the relevance of your action in the short term.
any_action = env.action_space()  # or any other action
obs = env.reset()
state_after_simulate, simulated_reward, simulated_done, simulated_info = obs.simulate(any_action, time_step=1)
Here the simulation will be run on the forecasted state of the next timestep (if the forecast is available; in this competition, a next-timestep forecast is indeed available). With time_step=0, you will run the simulation on your current state.
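As an example, you can use obs.simulate to compare a handful of candidate actions and only step with the most promising one (a minimal sketch; the greedy selection by simulated reward is just one possible strategy):

import grid2op
env = grid2op.make("l2rpn_neurips_2020_track1_small")
obs = env.reset()

candidates = [env.action_space()]  # do-nothing; extend this list with your own actions
best_action, best_reward = candidates[0], -float("inf")
for action in candidates:
    # simulate on the next-timestep forecast without modifying the environment
    sim_obs, sim_reward, sim_done, sim_info = obs.simulate(action, time_step=1)
    if not sim_done and sim_reward > best_reward:
        best_action, best_reward = action, sim_reward
obs, reward, done, info = env.step(best_action)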
To understand all the features of Grid2op framework and use it to its full potential, you will find most of the answers on how to use it through its documentation: https://grid2op.readthedocs.io/en/latest/.
You can specify your own reward, a function that can be different from the score used for the competition evaluation. We believe that reward design is an important aspect of the competition, and participants should think about which reward best lets their agent learn and explore.
To do so, you simply need to change the reward class:
import grid2op
from grid2op.Reward import L2RPNReward
env = grid2op.make("l2rpn_wcci_2020", reward_class=L2RPNReward)
As always, more information on this feature can be found at https://grid2op.readthedocs.io/en/latest/reward.html
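If none of the provided rewards fits your needs, you can also write your own by subclassing BaseReward (a minimal sketch following the grid2op reward interface; the reward values are arbitrary):

import grid2op
from grid2op.Reward import BaseReward

class MyReward(BaseReward):
    """Toy reward: reward_max when the step succeeds, reward_min otherwise."""
    def __init__(self):
        BaseReward.__init__(self)
        self.reward_min = -10.0
        self.reward_max = 1.0

    def __call__(self, action, env, has_error, is_done, is_illegal, is_ambiguous):
        if has_error or is_illegal or is_ambiguous:
            return self.reward_min
        return self.reward_max

env = grid2op.make("l2rpn_wcci_2020", reward_class=MyReward)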
The different parameters used to configure an environment allow you to modulate the difficulty for an agent dealing with that environment.
You can refer to the full description of the parameters used in each level at the end of this section for more information.
For instance, it is possible to inhibit line disconnection on overload, hence avoiding any blackout and allowing an agent to operate and learn until the end of a scenario. This easy mode could be a preferred mode when you start training your agent. By modifying the environment parameters you can hence design a learning curriculum for your agent, making the environment more and more difficult, to eventually operate in the full environment setting.
For this competition, 4 difficulty levels are available. For example, you can get an easier environment with:
import grid2op
env = grid2op.make("l2rpn_wcci_2020", difficulty="0")
# in this case, the environment does not simulate powerline disconnection on overflow, for example
The increasing order of difficulty is (see the addendum for details on every level):
To inspect and study some particular scenarios and compare the behavior of different agents, the Grid2Viz interface is a great tool to try and use (https://github.com/mjothy/grid2viz)
Grid2Viz front page to start studying a scenario and agent results
To generate all the chronics of the environment for the competition, we used the chronix2grid package. If you want to generate additional chronics, you can use it yourself https://github.com/mjothy/ChroniX2Grid/tree/master/chronix2grid
Once your submission has run on the platform, you can visualize how your agent behaves. Two main plots are available; here is an example:
To get them, you can go to your submission, click on the "+" sign and then "Download output from prediction step".
You can visualize the number of time steps survived and the score per scenario, the cost of operating your power grid at each time step, and a GIF image that sums up the result of your agent on one given scenario. For example, the score per scenario:
In this section we list the different values taken by the Parameters that are used to define the default difficulty levels. For more information about the exact definition of these attributes, you can visit: https://grid2op.readthedocs.io/en/latest/parameters.html
In case all these settings are not enough, you can define your own set of parameters at the creation of the environment. For this you can do:
import grid2op
from grid2op.Parameters import Parameters
param = Parameters()
param.NB_TIMESTEP_OVERFLOW_ALLOWED = ...
param.NB_TIMESTEP_COOLDOWN_LINE = ...
# change any other attribute of the parameter class
env = grid2op.make("l2rpn_wcci_2020", param=param)
# and now the created environment is configured with your parameters
This challenge would not have been possible without the help of many people.
Principal coordinators:
Protocol and task design:
Data format, software interfaces, and metrics:
Environment preparation and formatting:
Baseline methods and beta-testing:
Other contributors to the organization, starting kit, and datasets, include:
Our special thanks go to:
The challenge is running on the Codalab platform administered by Université Paris-Saclay and maintained by CKCollab LLC, with primary developers:
ChaLearn and RTE are the challenge organization coordinators. RTE, Google Research, UCL, EPRI and IQT Labs are sponsors and donated prizes.
Our last special thanks go to Google Cloud Platform for donating the Cloud Credits to run the competition on Codalab all along its duration, hence actively supporting research and making new breakthroughs possible.
We put at your disposal a starting kit that you can download in the Participate section. It gives you an easy start for the competition, in the form of several notebooks and materials that explain the objectives of this competition, how to participate, and how to get started relatively smoothly.
If you need any help, do not hesitate to contact the competition organizers on the dedicated discord forum server that we opened for the competition: https://discord.gg/cYsYrPT
Download | Size (MB) | Phase
---|---|---
Starting Kit | 47.369 | #1 Warmup phase
Starting Kit | 47.369 | #2 Development phase
Starting Kit | 47.369 | #3 Test phase
Starting Kit | 47.369 | #4 Legacy phase
Start: July 8, 2020, midnight
Description: Warmup phase: you can try your models in this phase
Start: Aug. 19, 2020, 1 p.m.
Description: Validation Phase: your models will be tested on a validation dataset selected with the same rules as the test dataset
Start: Nov. 1, 2020, 12:10 a.m.
Description: Test Phase: your last model will be tested only once on this private dataset
Start: Nov. 1, 2020, 12:10 a.m.
Description: Legacy Phase: same environment as the neurips competition, would you have beaten the best submissions made during Neurips 2020?
End: Nov. 30, 2022, midnight