L2RPN WCCI 2020 Competition

Organized by BDonnot
Reward $6,000

Previous

Development phase
May 20, 2020, midnight UTC

Current

Test phase
June 30, 2020, midnight UTC

End

Competition Ends
Oct. 1, 2020, midnight UTC

Learning to Run a Power Network - WCCI Competition

A forum is available for comments, suggestions, and help on Discord: https://discord.gg/cYsYrPT.

The Grid2op documentation can be found here: https://grid2op.readthedocs.io/en/latest/

If you would first like to get familiar with the problem and get hands-on quickly, a sandbox competition on a smaller grid remains open as well: https://competitions.codalab.org/competitions/24493

The L2RPN Challenge

Power grids transport electricity across states, countries and even continents. They are the backbone of power distribution, playing a central economic and societal role by supplying reliable power to industry, services, and consumers. Their importance appears even more critical today as we transition towards a more sustainable world within a carbon-free economy and concentrate energy distribution in the form of electricity. Problems that arise within the power grid range from transient brownouts to complete electrical blackouts, which can create significant economic and social perturbations, i.e. de facto freezing society. Grid operators are still responsible for ensuring that a reliable supply of electricity is provided everywhere, at all times. With the advent of renewable energy, electric mobility, and limitations placed on engaging in new grid infrastructure projects, the task of controlling existing grids is becoming increasingly difficult, forcing grid operators to do “more with less”. This challenge aims at testing the potential of AI to address this important real-world problem for our future.

Figure: visualization of a scenario on the WCCI grid and environment. Your submission results will include such a visualization for your agent.

Try your own submission to get your own visualization!

To proceed in the competition:

  • Visit our website https://l2rpn.chalearn.org/ for an interactive introduction to power grid operations 
  • Read the companion white paper as well as our L2RPN 2019 paper to understand the problem in more depth.
  • Visit the Instructions subsection to get started with the competition
  • Understand the rules of the game and the evaluation of your submission in the related subsection
  • Review the terms and conditions that you will have to accept to make your first submission.
  • Dive into the starting kit for a guided tour and tutorial to get all set for the competition and start making submissions
  • Take a look at the Grid2op documentation

You are ready to get started, and we are looking forward to your first submission in the Participate section. Become the control room operator of the future!

Instructions

In this section, we give you the instructions to help you:

  • configure your own environment,
  • quickly get a first agent ready with a starting kit,
  • get additional data,
  • make a submission on Codalab for the challenge,
  • finally discover your results.

Get the Grid2op Platform

The challenge is based on an environment (and not only a dataset) in which an agent can learn from interactions. It runs under the Grid2op platform.

The Grid2op platform can be installed like any python package with:

pip install grid2op

We also strongly recommend installing the "l2rpn-baselines" python package, which will be updated during the competition:

pip install l2rpn-baselines

Download the Starting Kit

A starting kit is available for you to download in the Participate section on codalab, along with the proper game environment for the competition. Several notebooks should help you understand how to properly run the Grid2op platform using chronics to train and test your agents.

The starting kit also gives details about how to check that your submission is valid and ready to run on the competition servers.

Get the data

Once grid2op is installed, you can get the competition data (approximately 4-4.5 GB) directly from the internet. This download will happen automatically the first time you create the competition environment from within a python script or shell:

import grid2op
env = grid2op.make("l2rpn_wcci_2020")

You can visit the "Good to Know" section for more information and parametrization, for example to allow faster learning. The general help of the platform is available at https://grid2op.readthedocs.io/en/latest/ . As most of you are probably not familiar with power systems in general, we have made some introductory notebooks about the problem we are tackling and the grid2op platform. These notebooks are available without any installation thanks to "mybinder" at the following link: https://mybinder.org/v2/gh/rte-france/Grid2Op/master.
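
If you have never worked with this kind of interactive environment, the classical agent/environment loop is sketched below; it is a minimal sketch assuming only the environment created above and the DoNothingAgent shipped with grid2op:

import grid2op
from grid2op.Agent import DoNothingAgent

env = grid2op.make("l2rpn_wcci_2020")
agent = DoNothingAgent(env.action_space)  # baseline agent that never acts

obs = env.reset()                 # first observation of the scenario
reward, done = 0.0, False
while not done:
    action = agent.act(obs, reward, done)       # your agent's decision
    obs, reward, done, info = env.step(action)  # apply it to the grid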

Make a submission

Essentially, a submission should be a ZIP file containing at least these two elements:

  • submission: a folder in which your agent is defined.
  • metadata: file giving the instruction to Codalab on how to process the submission (should never be changed).

In the starting kit, a script helps you create your submission and check that it is valid:

python3 check_your_submission.py --help

/!\ This is a code submission challenge, meaning that participants have to submit their code (and not their results).

Upon reception, the challenger's submission will be read by Codalab and the code will be run on the competition servers. The detailed structure of the submission directory can be found in the starting kit.

Then, to upload your submission on Codalab:

  • Go to the competition homepage
  • Click on "Participate"
  • Click on "Submit / View Results"
  • Click on "Submit" and select your ZIP file to submit it

Codalab will take some time to process the submission and will display the scores on the same page once the submissions have been processed. You may need to refresh the page. As explained in the rules, if your submission takes more than 20 minutes to run, a timeout error will be raised and your submission will be ignored.

See your results

 In the "Submit / View Results" sub-section in the Participate section, you can see the status of your submission. Once it is processed, you can review your submission, see the score it obtained and the time it took to run. When clicking on the blue cross next to your submission, different logs are available for additional information. You can also download your submission again if you want to. More importantly, you can get logs of your agent's behavior over different scenarios in the folder "output from scoring step". Several indicators over all the scenarios that the agent was run on can be visualized in the html file.

To compare your score with those of the other participants, please go to the Results page, where the leaderboard is displayed. Be aware that only your last submission's score is taken into account there.

Competition environment

Throughout the competition, the environment remains the same. From start to end, grid2op version 0.9.1.post1 is used. The python packages you can use on Codalab are listed as the "challenge" dependency of grid2op. To replicate the competition environment, you can install it this way:

pip install grid2op[challenge]==0.9.1.post1

If you are familiar with Docker: all the code run by Codalab is executed inside a Docker container. You can get the exact image used by the competition with:

docker pull bdonnot/grid2op:0.9.1.post1

 

The following prizes are sponsored by Geirina and State Grid of China.

The top two teams will be awarded $3,000 each in the form of travel expenses to either: 

  • visit GEIRI North America (GEIRINA) which is located in the Silicon Valley of California, United States
  • attend a conference (IEEE PES General Meeting or World Congress on Computational Intelligence)

To get the prizes, the winners will be required to share their model open-source following the L2RPN baseline template.

 

Terms and Conditions

This challenge is governed by the general ChaLearn contest rules.

Challenge specific Rules

This challenge starts on May 20th 2020 and ends on July 1st 2020. Prizes for winners are listed in the Prizes section.

  • This challenge runs in 1 phase where you can submit your code under the same conditions as the L2RPN competition that will start soon.
  • The organizers may provide additional baseline agents during the challenge to stimulate the competition.
  • The participant will be limited to 5 submissions per day.
  • Submissions are limited to 300 Mb in size.
  • Each submission has a limited time to finish all scenarios: 30 minutes.
  • Submissions will be checked for validity: they must not change the environment of the game in any way; this would be considered cheating.
  • Teams should use a common account, under a group email. Multiple accounts are forbidden.
  • The final leaderboard (and the final ranking of the participants) will be based on scores in the Test Phase only.
  • We strongly encourage all teams to share their code and make it accessible in the l2rpn-baselines python package (see https://github.com/rte-france/l2rpn-baselines; more information on the official discord https://discord.gg/cYsYrPT)
  • Anyone can make entries.

Credits

This challenge would not have been possible without the help of many people.

Principal coordinators:

  • Antoine Marot (RTE, France)
  • Isabelle Guyon (U. Paris-Saclay; UPSud/INRIA, France and ChaLearn, USA)

Protocol and task design:

  • Gabriel Dulac-Arnold (Google Research, France)
  • Olivier Pietquin (Google Research, France)
  • Isabelle Guyon (U. Paris-Saclay; UPSud/INRIA, France and ChaLearn, USA)
  • Patrick Panciatici (RTE, France)
  • Antoine Marot (RTE, France)
  • Benjamin Donnot (RTE, France)
  • Camilo Romero (RTE, France)
  • Jan Viebahn (TenneT, Netherlands)
  • Adrian Kelly (EPRI, Ireland)
  • Di Shi (Geirina, USA)
  • Mariette Awad (American University of Beirut, Lebanon)

Data format, software interfaces, and metrics:

  • Benjamin Donnot (RTE, France)
  • Mario Jothy (Artelys, France)
  • Gabriel Dulac-Arnold (Google Research, France)
  • Aidan O'Sullivan (UCL/Turing Institute, UK)
  • Zigfried Hampel-Arias (Lab 41, USA)
  • Jean Grizet (EPITECH & RTE, France)

Environment preparation and formatting:

  • Carlo Brancucci (Encoord, USA)
  • Vincent Renault (Artelys, France)
  • Camilo Romero (RTE, France)
  • Bri-Mathias Hodge (NREL, USA)
  • Florian Schäfer (Univ. Kassel/pandapower, Germany)
  • Antoine Marot (RTE, France)
  • Benjamin Donnot (RTE, France)

Baseline methods and beta-testing:

  • Kishan Prudhvi Guddanti (Arizona State Univ., USA)
  • Jiajun Duan (Geirina, USA)
  • Loïc Omnes (ENSAE & RTE, France)
  • Jan Viebahn (TenneT, Netherlands)
  • Medha Subramanian (TenneT & TU Delft, Netherlands)
  • Benjamin Donnot (RTE, France)
  • Jean Grizet (EPITECH & RTE, France)
  • Patrick de Mars (UCL, UK)
  • Jan-Hendrik Menke (Univ. Kassel/pandapower, Germany)
  • Yan Zan (Geirina, USA)
  • Lucas Tindall (Lab 41 & UCSD, USA)

Other contributors to the organization, starting kit, and datasets, include:

  • Balthazar Donnon (RTE R&D and UPSud/INRIA, France)
  • Kimang Khun (Ecole Polytechnique, France)
  • Luca Veyrin-Forrer (U. Paris-Saclay; UPSud, France)
  • Marvin Lerousseau
  • Joao Araùjo

Our special thanks go to:

  • Marc Schoenauer (U. Paris-Saclay; UPSud/INRIA, France)
  • Patrick Panciatici (RTE R&D, France)
  • Olivier Pietquin (Google Brain, France)

The challenge is running on the Codalab platform, administered by Université Paris-Saclay and maintained by CKCollab LLC, with primary developers:

  • Eric Carmichael (CKCollab, USA)
  • Tyler Thomas (CKCollab, USA)

ChaLearn and RTE are the challenge organization coordinators and sponsors, and RTE donated prizes.

Get the Starting Kit

We put at your disposal a starting kit that you can download in the Participate Section. It gives you an easy start for the competition, in the form of several notebooks:

  • 1_Power_Grid_101_notebook.ipynb explains the problem of power grid operation on a small grid using the grid2op platform.

 

  • 2_Develop_And_RunLocally_An_agent.ipynb shows how to define an agent, test it to make sure it is running correctly, and make a submission. In particular, this notebook illustrates how to check that your submission is valid and ready to run on the competition servers.

 

  • 3_TestAndFormatYourAgent.ipynb is a shorter version of the previous notebook that allows you to easily test whether your submission is valid and can be run on the Codalab platform.

 

  • 4_DebugYourAgent.ipynb is a step by step helper to help you debug your agent if your submission fails to run on the Codalab platform.

If you need any help, do not hesitate to contact the competition organizers on the dedicated discord forum server that we opened for the competition: https://discord.gg/cYsYrPT

A fast backend simulator

The default backend is pandapower, a well-known open-source library in the power system community. However, it can be a bit too slow when it comes to running thousands of simulations. For this purpose, the lightSim2Grid simulator (https://github.com/BDonnot/lightsim2grid) was developed in C++; it imitates pandapower's behavior and reproduces its results for the current power grid modeling. A speedup factor of 30 can be achieved, which should be of great use when training an agent.

NB: at the moment this simulator is not available on Microsoft Windows machines (unless you manage to compile the SuiteSparse framework on Windows). For other platforms, installation instructions are provided in the above-mentioned GitHub repository.

Once installed you can use it this way:

import grid2op
from lightsim2grid.LightSimBackend import LightSimBackend
backend = LightSimBackend()
env = grid2op.make("l2rpn_wcci_2020", backend=backend)
# and now you can use the environment as usual

The performance increase can be rather large. On a desktop (Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz), using a regular agent (without any call to external libraries) on "l2rpn_wcci_2020", it is possible to perform 1000 steps in 72.8s (13.7 it/s), whereas the same agent with LightSimBackend takes only 2.7s (370.4 it/s) for the same number of steps (a speedup of about 27:1 in favor of LightSimBackend).
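
If you want to reproduce this kind of comparison on your own machine, a rough timing sketch (assuming grid2op and lightsim2grid are installed; the exact figures depend on your hardware) could look like this:

import time
import grid2op
from grid2op.Agent import DoNothingAgent
from lightsim2grid.LightSimBackend import LightSimBackend

def time_1000_steps(env):
    agent = DoNothingAgent(env.action_space)
    obs, reward, done = env.reset(), 0.0, False
    start = time.perf_counter()
    for _ in range(1000):
        if done:
            obs, reward, done = env.reset(), 0.0, False
        obs, reward, done, info = env.step(agent.act(obs, reward, done))
    return time.perf_counter() - start

print("pandapower backend  :", time_1000_steps(grid2op.make("l2rpn_wcci_2020")))
print("lightsim2grid backend:", time_1000_steps(grid2op.make("l2rpn_wcci_2020", backend=LightSimBackend())))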

/!\ ATTENTION /!\ We want to emphasize that the default grid2op backend is PandaPowerBackend. This default backend will be used to score your agent. There might be slight differences (we never noticed any difference higher than 1e-7 to 1e-6) between PandaPowerBackend and LightSimBackend.

Simulate function

As operators do in real life, you can simulate the effect of different actions before taking a decision (a step in the Grid2op framework). This comes at a cost in terms of computation time, but allows you to validate the relevance of your action in the short term.
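
As a minimal sketch (the candidate actions below are purely illustrative), an agent can call the observation's simulate method, which mirrors env.step but relies on forecasts, and keep the action with the best simulated reward:

import grid2op

env = grid2op.make("l2rpn_wcci_2020")
obs = env.reset()

do_nothing = env.action_space({})                            # candidate 1: do nothing
reconnect = env.action_space({"set_line_status": [(0, 1)]})  # candidate 2: force line 0 connected (illustrative)

candidates = [do_nothing, reconnect]
sim_rewards = [obs.simulate(act)[1] for act in candidates]   # simulate one step ahead, keep the simulated reward
best_action = candidates[sim_rewards.index(max(sim_rewards))]
obs, reward, done, info = env.step(best_action)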

Grid2op documentation

To understand all the features of Grid2op framework and use it to its full potential, you will find most of the answers on how to use it through its documentation: https://grid2op.readthedocs.io/en/latest/.

Reward design

You can specify your own reward, a function that can be different from the score of the competition. We believe that reward design is an important aspect of the competition, and participants should think about which reward best lets their agent learn and explore.

To do so, you simply need to change the reward class when creating the environment:

import grid2op
from grid2op.Reward import L2RPNReward
env = grid2op.make("l2rpn_wcci_2020", reward_class=L2RPNReward)

As always more information on this feature can be found at https://grid2op.readthedocs.io/en/latest/reward.html
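
For instance, here is a minimal sketch of a custom reward, assuming the BaseReward interface described in the documentation linked above (the SurvivalReward name and its values are purely illustrative):

import grid2op
from grid2op.Reward import BaseReward

class SurvivalReward(BaseReward):
    """Illustrative reward: +1 for each step survived, -10 when something goes wrong."""
    def __init__(self):
        BaseReward.__init__(self)
        self.reward_min = -10.0
        self.reward_max = 1.0

    def __call__(self, action, env, has_error, is_done, is_illegal, is_ambiguous):
        if has_error or is_illegal or is_ambiguous:
            return self.reward_min
        return self.reward_max

env = grid2op.make("l2rpn_wcci_2020", reward_class=SurvivalReward)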

Curriculum Learning by changing difficulty levels

Several parameters used to configure an environment allow you to modulate the difficulty an agent has to deal with.

You can refer to the full description of the parameters used in each level at the end of this section for more information.

For instance, it is possible to inhibit line disconnection when overloaded, hence avoiding any blackout and allowing an agent to operate and learn until the end of a scenario. This easy mode can be preferred when you start training your agent. By modifying the environment parameters, you can hence design a learning curriculum for your agent, making the environment more and more difficult until it eventually operates in the full environment setting (see the sketch after the difficulty list below).

For this competition, 4 difficulty levels are available. For example, you can get an easier environment with:

import grid2op
env = grid2op.make("l2rpn_wcci_2020", difficulty="0")
# in this case the environment does not simulate the powerline disconnection when there are overflows for example.

 

In increasing order of difficulty (see the addendum for details on every level):

  • env = grid2op.make("l2rpn_wcci_2020", difficulty="0"): is the easiest mode. No powerlines are ever disconnected. Nothing is really made by the environment and there is no cooldown at all.
  • env = grid2op.make("l2rpn_wcci_2020", difficulty="1"): is much harder than the previous level. Some powerlines will be automatically disconnected after a while if they are in overflow.
  • env = grid2op.make("l2rpn_wcci_2020", difficulty="2") : relatively close to the "real" environment, the major difference is that it is more permissive on the action you can perform (you can acting on object much quicker)
  • env = grid2op.make("l2rpn_wcci_2020", difficulty="competition"): the default difficulty. This is the one used to assess the performance of your agent on codalab and thus to rank the participants.

 

Grid2Viz - visual study tool of your agents

To inspect and study some particular scenarios and compare the behavior of different agents, the Grid2Viz interface is a great tool to try and use (https://github.com/mjothy/grid2viz)

Chronix2Grid - generate additional chronics

To generate all the chronics of the environment for the competition, we used the chronix2grid package. If you want to generate additional chronics, you can use it yourself https://github.com/mjothy/ChroniX2Grid/tree/master/chronix2grid

Visualize the behaviour of your submission

Once your submission has run on the platform, you can visualize how your agent behaves. Two main plots are available.

To get them, go to your submission, click on the "+" sign and then on "Download output from prediction step".

You can visualize the number of time steps survived and the score per scenario, the cost of operating your power grid at each time step, and a gif image that sums up the result of your agent on one given scenario.

Addendum: detailed options for each level

In this section we list the different values of the Parameters used to define the default difficulty levels. For more information about the precise definition of these attributes, you can visit: https://grid2op.readthedocs.io/en/latest/parameters.html

Difficulty = "0"

  • NO_OVERFLOW_DISCONNECTION:  true
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 9999
  • NB_TIMESTEP_COOLDOWN_SUB: 0
  • NB_TIMESTEP_COOLDOWN_LINE: 0
  • HARD_OVERFLOW_THRESHOLD: 9999
  • NB_TIMESTEP_RECONNECTION: 0
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

Difficulty = "1"

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 6
  • NB_TIMESTEP_COOLDOWN_SUB: 0
  • NB_TIMESTEP_COOLDOWN_LINE: 0
  • HARD_OVERFLOW_THRESHOLD: 300
  • NB_TIMESTEP_RECONNECTION: 1
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

Difficulty = "2"

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 3
  • NB_TIMESTEP_COOLDOWN_SUB: 1
  • NB_TIMESTEP_COOLDOWN_LINE: 1
  • HARD_OVERFLOW_THRESHOLD: 250
  • NB_TIMESTEP_RECONNECTION: 6
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

Difficulty = "challenge" (default)

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 3
  • NB_TIMESTEP_COOLDOWN_SUB: 3
  • NB_TIMESTEP_COOLDOWN_LINE: 3
  • HARD_OVERFLOW_THRESHOLD: 200
  • NB_TIMESTEP_RECONNECTION: 12
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

More flexibility?

Yes. In case all these settings are not enough, you can define your own set of parameters at the creation of the environment. For this you can do:

import grid2op
from grid2op.Parameters import Parameters
param = Parameters()
param.NB_TIMESTEP_OVERFLOW_ALLOWED = ...
param.NB_TIMESTEP_COOLDOWN_LINE = ...
# change any other attribute of the parameter class
env = grid2op.make("l2rpn_wcci_2020", param=param)
# and now the created environment is configured with your parameters

 

 

Rules of the Game

Objective of the game

The objective of the competition is to design an agent that can successfully operate a power grid. Operating a power grid here means finding ways to modify how the objects are interconnected (aka "changing the topology") or modifying the productions to make sure the grid stays safe (see "Conditions of Game Over").

More information is given in the 1_Power_Grid_101_notebook provided in the starting kit.

If you have any questions, we are here to answer them on the official discord: https://discord.gg/cYsYrPT

Conditions of Game Over

As any system, a power grid can fail to operate properly, as illustrated on the challenge website. This can occur under conditions such as:

  • some consumption is not met because no electricity is flowing to some loads, or more than n power plants get disconnected (n = 1 for this challenge);
  • the grid gets split apart into isolated sub-grids, making the whole grid non-connected.

These conditions can appear when power lines in the grid get disconnected after being overloaded. When a line gets disconnected, its load gets distributed over the other power lines, which in turn might get overloaded and thus disconnected as well, leading to a cascading failure (blackout).

Conditions on Overloads

When the power in a line increases above its thermal limit, the line becomes overloaded. It can stay overloaded for a few timesteps before it gets disconnected if no proper agent action is taken to relieve this overload (2 timesteps are allowed in this challenge, see the Parameters class in grid2op); this is what we call a "soft" overload. If the overload is too high, the line gets disconnected immediately (above 200% of the thermal limit in this challenge); this is a "hard" overload. At some point this can lead to a very rapid cascading failure within a single timestep, if some lines already got disconnected and other lines are heavily loaded.

Conditions on Actions

Actions can consist of:

  • reconnecting / disconnecting a powerline
  • changing the topology of the grid (choosing to isolate some objects [productions, loads, powerlines] from others)
  • modifying the production set points with redispatching actions

The limits on those actions are set through the "Parameters" class of grid2op. During training, you can modify some of these parameters to relax some constraints and bootstrap your training; a minimal sketch of how such actions can be built is given below.
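
The sketch below shows how each of these three kinds of actions can be built with the grid2op action space (the element ids and values are purely illustrative):

import grid2op

env = grid2op.make("l2rpn_wcci_2020")

# 1. disconnect powerline 3 (use +1 instead of -1 to reconnect it)
disconnect = env.action_space({"set_line_status": [(3, -1)]})

# 2. change the topology: move the origin side of powerline 5 to bus 2 of its substation
change_topo = env.action_space({"set_bus": {"lines_or_id": [(5, 2)]}})

# 3. redispatching: ask generator 0 (assumed dispatchable) to produce 5 MW more
redispatch = env.action_space({"redispatch": [(0, 5.0)]})

obs, reward, done, info = env.step(disconnect)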

 

Observations to use

Observations about the state of the grid can be retrieved from the environment and used by your agent. Please read the table in the grid2op documentation. You can recover information about the current productions, the loads, and more importantly the flows on the lines and the topology of the grid. You are free to use whatever observations are available; make the best of them!
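
For instance, a few of the most useful observation attributes can be read directly, as in the minimal sketch below (attribute names follow the grid2op documentation; this is not an exhaustive list):

import grid2op

env = grid2op.make("l2rpn_wcci_2020")
obs = env.reset()

print(obs.prod_p)        # active production of each generator (MW)
print(obs.load_p)        # active consumption of each load (MW)
print(obs.rho)           # loading of each powerline (flow / thermal limit)
print(obs.line_status)   # True if the powerline is connected
print(obs.topo_vect)     # bus to which each object is connected in its substation

overloaded = obs.rho > 1.0   # powerlines currently above their thermal limit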

Environment parameters for challenge

Some parameters of the environment can easily be modified before running an agent on it. By doing so you can actually modulate the difficulty of a given problem and define a learning strategy, as explained in the "Good To Know" section. For the competition, your agent will be tested with the following default parameter values:

Difficulty = "challenge" (default)

  • NO_OVERFLOW_DISCONNECTION: false
  • NB_TIMESTEP_OVERFLOW_ALLOWED: 3
  • NB_TIMESTEP_COOLDOWN_SUB: 3
  • NB_TIMESTEP_COOLDOWN_LINE: 3
  • HARD_OVERFLOW_THRESHOLD: 200
  • NB_TIMESTEP_RECONNECTION: 12
  • IGNORE_MIN_UP_DOWN_TIME: true
  • ALLOW_DISPATCH_GEN_SWITCH_OFF: true
  • ENV_DC: false
  • FORECAST_DC: false
  • MAX_SUB_CHANGED: 1
  • MAX_LINE_STATUS_CHANGED: 1

 

Evaluation

Your agent is evaluated on 10 scenarios of different lengths starting at different times.

You can have a look at the 2_Develop_And_RunLocally_An_agent notebook provided in the starting kit.

Definition of a cost function

 

The cost function that an agent will be evaluated on represents the cost of operating a power grid, as well as the cost of any blackout that could occur. Let us explain the details in the following.

1) cost of energy losses

To begin with, we will recall that transporting electricity always generates some energy losses Eloss(t) due to the Joule effect in resistive power lines at any time t:

  • Eloss(t) = Σl rl × yl(t)²

where rl is the resistance of powerline l and yl(t) the current flowing through it at time t.


At any time t, the operator of the grid is responsible for compensating those energy losses  by purchasing on the energy market the corresponding amount of production at the marginal price p(t). We can therefore define the following energy loss cost closses(t):

  • closses(t) = Eloss(t) × p(t)

2) cost of redispatching productions after actions on generators


Then we should consider that operator decisions can induce costs, especially when requiring market actors to perform specific actions, as they should be paid in return. Topological actions are mostly free, as the grid belongs to the power grid operator and no energy cost is involved. However, redispatching actions involve producers, who should get paid. As the grid operator asks to redispatch energy Eredispatch(t), some power plants will increase their production by Eredispatch(t) while others will compensate by decreasing their production by the same amount to keep the power grid balanced. Hence, the grid operator will pay both producers for this redispatched energy at a cost credispatching(t) higher than the marginal price p(t) (possibly by some factor):

  • credispatching(t) = 2×Eredispatch(t)×p(t)

3) total cost of operations


If no flexibility is identified or integrated on the grid, operational costs related to redispatching can dramatically increase due to renewable energy sources as was the case recently in Germany with **an avoidable 1 billion €/year increase**.

We can hence define our overall operational cost coperations(t):

  • coperations(t) = closses(t) + credispatching(t)


Formally, we can define an "episode" e successfully managed by an agent up until time tend (over a scenario of maximum length Te) by:

  • e = {o1, a1, o2, a2, ..., otend, atend}

where ot represents the observation at time t and at the action the agent took at time t. In particular, o1 is the first observation and otend is the last one: either there is a game over at time tend or the agent reached the end of the scenario, in which case tend = Te.

An agent can either manage to operate the grid for the entire scenario or fail after some time tend because of a blackout. In case of a blackout, the cost cblackout(t) at a given time t would be proportional to the amount of consumption not supplied Load(t), at a price higher than the marginal price p(t) by some factor beta:

  • cblackout(t) = Load(t) × p(t) × beta, with beta > 1

Notice that Load(t) >> Eredispatch(t) , Eloss(t)
which means that the cost of a blackout is a lot higher than the cost of operating the grid as expected. It is even higher if we further consider the secondary effects on the economy (More information can be found on this blackout cost simulator: https://www.blackout-simulator.com). Furthermore, a blackout does not last forever and power grids restart at some point. But for the sake of simplicity while preserving most of the realism, all these additional complexities are not considered here.

Now we can define our overall cost c for an episode e:

  • c(e) = Σt=0..tend coperations(t) + Σt=tend..Te cblackout(t)


We still encourage the participants to operate the grid as long as possible, but penalize them for the remaining time after the game is over, as this is a critical system and safety is paramount.

Finally, participants will be tested on N hidden scenarios of different lengths, varying from one day to one week, and on various difficult situations according to our baselines. This will test agent behavior in various representative conditions. Under those episodes, our final score to minimize will be:

  • Score = Σi=1..N c(ei)
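
Putting the formulas above together, the cost of one episode can be sketched in a few lines of Python; the per-step quantities below are placeholders for values extracted from the environment, and the default value of beta is only illustrative:

def episode_cost(e_loss, e_redispatch, load, price, t_end, beta=2.0):
    """Illustrative computation of c(e): operation costs until t_end, blackout costs afterwards.
    e_loss, e_redispatch, load and price are lists of per-step values over the full scenario."""
    total = 0.0
    for t in range(len(price)):
        if t < t_end:
            c_losses = e_loss[t] * price[t]                      # c_losses(t)
            c_redispatching = 2.0 * e_redispatch[t] * price[t]   # c_redispatching(t)
            total += c_losses + c_redispatching                  # c_operations(t)
        else:
            total += load[t] * price[t] * beta                   # c_blackout(t), with beta > 1
    return total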

Rescaling of the scores

For a naive agent (a "do-nothing" agent that does not actually take any action), the cost function can get really high (in the order of billions of dollars) on our scenarios, since a blackout will most likely occur at some point.
Comparing two agents whose scores are on a billion scale is not easy (e.g. it is not obvious that 33025056 is worse than 33025053). So we decided to apply linear transformations to improve readability and better represent the ability of an agent to be robust and performant:
- -100.0: no step played, maximum blackout penalty for all steps of all scenarios.
- 0.0: for the "do nothing" baseline.
- 80.0: for playing all the scenarios completely while keeping the losses equal to the difference between the productions and the consumptions of the scenario.
- 100.0: for the best possible agent: an agent that handles all scenarios with topology only, that is without additional redispatching cost, and which reduces the losses to 70% of the initial electricity losses of the do-nothing agent (NB: this is probably unachievable for most scenarios).

This means that:
- the score should be maximized rather than minimized
- having a score of 100 is probably out of reach
- having a positive score is already pretty good!
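
The exact scoring script is hidden, but a purely illustrative piecewise-linear rescaling consistent with the four anchor points listed above could look like the following sketch (c_worst, c_donothing, c_nogo and c_best stand for the raw costs of the corresponding reference controllers):

import numpy as np

def rescale_score(cost, c_worst, c_donothing, c_nogo, c_best):
    """Illustrative mapping of a raw episode cost to the [-100, 100] scale described above.
    Lower raw cost is better, so costs are negated to get an increasing axis for interpolation."""
    xs = [-c_worst, -c_donothing, -c_nogo, -c_best]   # increasing "quality" axis
    ys = [-100.0, 0.0, 80.0, 100.0]                   # the four anchor scores
    return float(np.interp(-cost, xs, ys))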

 

Rescaling of the scores: illustration

The first figure shows the operational costs (highlighted in red) of a few "interesting" controllers (note that these are "theoretical" controllers; some might not be feasible in practice). We compare the score with the operational cost of four of them:

- "Com. Game Over" is the worst possible controler. He does a complete game over for all the scenario

- "Do nothing": is the agent that does nothing. It serves as baseline.

- "No Game Over" is a controler (theoretical) that would not game over, but would not take any action.

- "No Game Over + loss optim." is a controler that does better than the previous one in the sense that it will also takes care (and succeed) in managing the losses. To make sure we have an upper bound on it, we supposes that such a controler is 30% more efficient than the "No Game Over" controler in reducing the operational cost. [NB for most scenarios, this is probably out of reach]

 

 

Figure: representation of the operational costs of the few (theoretical) controllers of interest.

 

We want to emphasize that keeping the grid up and running is the main problem, but reducing the operational cost is also really interesting. To this end, we decided to assign different scores as described in the image below:

- "Com. Game Over" has the worst possible score of -100.00. If this score is displayed, it means your submission is probably not valid.

- "Do nothing": is the "reference" agent. It has the score of 0. (NB. In all cases do nothing agent has a score of 0 regardless of its capacity to succeed to manage completely a scenario. This means that you can have a slightly negative score in such cases if your agent did worst than the do nothing at managing the scenarios (in terms of operation costs) but it manage to get to the end of it.)

- "No Game Over" is assigned a score of +80.00. (NB in case the do nothing successfully manage all the scenario, this part is "skipped" see the note bellow)

- "No Game Over + loss optim." is assigned a score of +100.00.

For a scenario that a "do-nothing" agent can handle until the end, your score will be 0 if you finish the scenario without managing the losses better than the do-nothing agent. So, in addition to being robust, managing the electricity losses efficiently will be especially rewarded on some scenarios.

 

Figure: representation of the operational costs of the few (theoretical) controllers of interest, as well as their associated scores.

Note on the hidden scenarios

For this competition, there are 10 hidden scenarios, each 3 days long, distributed over the months of the year and the days of the week.

Scenarios have been cherry-picked to offer different levels of difficulty and can start at arbitrary time steps (but chronics always start at midnight!). The time interval between two consecutive time steps is fixed and will always be 5 minutes.

 

Using your own reward

You can use any reward you want in grid2op, different from the cost function used for the competition evaluation, both at training time (when you train your agent on your computer) and at test time.

To change the reward signal you are using, you can, at training time, specify it at the creation of the environment:

import grid2op
from grid2op.Reward import GameplayReward
env = grid2op.make("l2rpn_wcci_2020", reward_class=GameplayReward)

We invite you to have a look at the official grid2op documentation about rewards at https://grid2op.readthedocs.io/en/latest/reward.html

 

 

 

Download: Starting Kit (2.391 MB), phase #1 (Development phase)

Development phase

Start: May 20, 2020, midnight

Description: Development phase: you can try your models in this phase

Test phase

Start: June 30, 2020, midnight

Description: Test phase: your model will be tested only once on this dataset

Competition Ends

Oct. 1, 2020, midnight

Leaderboard
# Username Score
1 shhong 75.72
2 zenghsh3 66.21
3 yzm_test 48.62