IWCS-2019 shared task: DRS Parsing

Organized by kowalsky


Competition starts: Feb. 25, 2019, noon UTC

Competition ends: March 12, 2019, 11:59 a.m. UTC

The shared task on DRS parsing will be co-located with IWCS-2019 held in Gothenburg, Sweden on 23-27 May.

What is DRS parsing?

DRS parsing is a semantic parsing task in which the meaning of natural language texts is automatically converted into a Discourse Representation Structure (DRS), a semantic representation with a long history in studies on formal semantics.

DRS parsing in a nutshell

Discourse Representation Structure

A Discourse Representation Structure (DRS) is a meaning representation introduced in Discourse Representation Theory (DRT), a well-studied formalism developed in formal semantics (Kamp, 1984; Van der Sandt, 1992; Kamp and Reyle, 1993; Asher, 1993; Kadmon, 2001).

DRSs are able to model many challenging semantic phenomena, for example, quantifiers, negation, pronoun resolution, presupposition accommodation, and discourse structure. Concepts are represented by WordNet synsets, relations by VerbNet roles.

Parallel Meaning Bank

The data used in the shared task is part of the Parallel Meaning Bank (PMB) project. To take a closer look at the PMB documents and annotations, visit the PMB online explorer.


  • R. van Noord, L. Abzianidze, A. Toral, J. Bos (2018): Exploring Neural Methods for Parsing Discourse Representation Structures. Transactions of the Association for Computational Linguistics, 6, 619-633. [PDF] [BibTeX]

  • R. van Noord, L. Abzianidze, H. Haagsma, J. Bos (2018): Evaluating Scoped Meaning Representations. LREC. [PDF] [BibTeX]

  • L. Abzianidze, J. Bjerva, K. Evang, H. Haagsma, R. van Noord, P. Ludmann, D. Nguyen, J. Bos (2017): The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations. EACL. [PDF] [BibTeX]

  • J. Liu, S. B. Cohen, M. Lapata (2018): Discourse Representation Structure Parsing. ACL. [PDF] [BibTeX]

Evaluation criterion is transparent

Competition submissions will be evaluated by computing the micro-average F-score on matching clauses of system output and the gold standard.
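Setting aside the variable-mapping search that the matcher performs, the final score boils down to micro-averaged precision, recall, and F-score over clause counts pooled across all documents. A minimal sketch of that last step (the counts below are made-up numbers, not real results):

```python
def micro_f1(matched: int, produced: int, gold: int) -> float:
    """Micro-averaged F-score: clause counts are pooled over all
    documents before computing precision and recall."""
    precision = matched / produced if produced else 0.0
    recall = matched / gold if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# hypothetical totals: 150 matching clauses, 180 produced, 200 gold
print(round(micro_f1(150, 180, 200), 3))  # → 0.789
```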

An animated illustration of the evaluation procedure (restart the animation with Shift+reload of the page):

Evaluation explained

As you can see, before the clause-matching procedure, the CLF referee checks whether system-produced clausal forms (CLFs) are well-formed and replaces ill-formed CLFs with a dummy never-matching CLF.

So you had better fix the ill-formed CLFs of your system before submission. For a strict validation of system-output CLFs, specify the signature file for the CLF referee. The strict validation additionally checks whether the semantic roles, discourse relations, and comparison operators (e.g., temporal and spatial) fall within the CLF signature. Run the strict validation of CLFs as follows:

python clf_referee.py system_clfs.txt -s clf_signature.yaml

We strongly advise running the strict validation on the system CLFs because the clauses with out-of-signature operators will never be matched to the gold clauses.

Terms and conditions are dull as usual

By participating in this shared task you agree to the following terms and conditions. If any of these conditions is violated by a participant, the task organizers reserve the right to ban the participant from the task and withhold the scores of their systems.

  • By submitting a system output to the shared task, you permit the task organizers to use it for research purposes and to publicly release the results derived from it. The types of release may include, but are not limited to, sharing on this website, at the corresponding workshop, and in the associated task-summarizing paper.
  • You accept that the ultimate decision of metric choice and score value is that of the task organizers.
  • Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.
  • You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.
  • Participants are not allowed to use more than one account for the task in order to obtain extra system runs. They may do so, however, if their systems differ from each other in a meaningful way and they plan to have separate submissions for these systems.
  • Tuning to the test data (used during the competition) is forbidden.
  • The datasets provided by the task organizers come with no warranties. 

Starting kit for a quick start

All you need for a quick start is available in the DRS parsing repository.

In short, the repository contains:

  • data/pmb-2.2.0/gold - training and development data splits, also a sample submission obtained from Boxer and co.
  • data/pmb-2.2.0/silver and data/pmb-2.2.0/bronze - large data useful for training data-hungry systems. These data are mostly automatically generated by Boxer and co (the silver one includes some manual annotations) and come with errors.
  • evaluation/ - evaluation scripts consisting of Counter (an evaluation tool) and CLF Referee (a CLF validator).
  • parsing/ - a baseline SPAR parser and an AMR2DRS tool converting AMRs into (clausal form) DRSs.

Contact the organizers by joining the discussion group.

Submission format is simple and colorful

Participants need to submit a zip file containing a single file clfs.txt. The clfs.txt file must be encoded in UTF-8, contain a list of DRSs separated by blank lines, and follow the order of the corresponding raw file.

Each DRS has to be formatted in the clause notation, i.e., represented as a set of clauses (order and repetitions do not matter), with one clause per line. Comments start with a %, and any information after this sign is not considered part of a clause (see the gold data for examples).

A quick anatomy of a clausal form of a sample DRS:

clausal form anatomy

A clause is a triple or a quadruple whose components are separated by whitespace.

The first component of a clause is always a variable standing for a box label (remember the box format of a DRS!) and the second component is always a functor that determines the type of the clause. The rest of the components are either variables or constants (the latter enclosed in double quotes).
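To make the notation concrete, the snippet below pairs a small hand-written clausal form (an illustrative example, not taken from the gold data) with a minimal Python reader that strips % comments and splits each clause into its components:

```python
# Illustrative clausal form (hypothetical example, not from the gold data):
# one clause per line, '%' starts a comment.
SAMPLE = """
b1 REF x1          % discourse referent for "man"
b1 man "n.01" x1   % WordNet concept
b1 REF e1          % discourse referent for the event
b1 smile "v.01" e1
b1 Agent e1 x1     % VerbNet role
"""

def parse_clauses(text):
    """Parse clause lines into tuples, dropping comments and blank lines."""
    clauses = []
    for line in text.splitlines():
        line = line.split('%', 1)[0].strip()  # everything after % is a comment
        if line:
            clauses.append(tuple(line.split()))
    return clauses

for clause in parse_clauses(SAMPLE):
    print(clause)  # triples or quadruples; box label first, functor second
```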

For more details:

  • Read about the clause and box notations of DRSs;
  • Have a look at clf_signature.yaml, which defines the signature of the clause notation. The definition file is used to spot ill-formed clausal forms produced by a system during the evaluation.

Discussion group is stylish

To facilitate communication between the task organizers and participants, we set up a discussion group on Slack:


Go to the above link and create a Slack account by simply typing your email address. Check your inbox, as you will get a verification message from Slack. Confirm the email and type an optional name for Slack. Skip all the intro procedures and you are ready to post in the discussion group.

We created several channels to keep the communication tidy:

#codalab for the issues related to this competition website, e.g., problems with registration or submission;

#general basically for everything that is related to the shared task and is not specific to the CodaLab site;

#random for conversations not really related to the shared task as such, e.g., talk about the weather, food, or football.

Important dates are indeed important

20 Dec 2018: Final data released (before the competition phase)
25 Feb - 11 Mar 2019: Competition phase
14 Mar 2019: Results are sent
01 Apr 2019: System description paper due by 11:59pm UTC-12
(we will also allow submissions of short system descriptions that will be included in the shared task overview paper; their authors will be offered co-authorship. In this scenario, registration and attendance at the shared task workshop is not obligatory)
08 Apr 2019: Notification of acceptance
24 Apr 2019: Camera-ready due by 11:59pm UTC-12
23-27 May 2019: IWCS main conference

Frequently asked questions are answered below

Where can I get the data?

You can get the data from the Get Data page under the Participate tab.

How do I show the score of my system in the Results table?

Go to your submit/results page, click the entry of the score you want to show, and press [Submit to Leaderboard]. A confirmation mark will then appear next to the submitted score.

With what parameters will Counter be run during the competition phase?

For the competition and pre-competition phases, Counter is used with the same parameters:

python counter.py -f1 system_clfs.txt -f2 gold_clfs.txt -prin -s conc -r 20 -ill dummy -coda scores

For the meaning of each parameter see the DRS parsing repo.

Why does my submission get the error "Invalid file type (text/plain)"?

One common reason can be that your zip archive contains a directory instead of the clfs.txt file. When creating the archive of your submission, use zip -j to junk the paths.
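If you script your packaging instead of calling zip directly, Python's standard zipfile module achieves the same effect as zip -j by setting arcname explicitly (the clfs.txt content below is a tiny placeholder standing in for real system output):

```python
import zipfile

# write a tiny placeholder clfs.txt standing in for real system output
with open("clfs.txt", "w", encoding="utf-8") as f:
    f.write("b1 REF x1\n")

# arcname="clfs.txt" stores the file at the archive root,
# junking any directory prefix, just like `zip -j`
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("clfs.txt", arcname="clfs.txt")

print(zipfile.ZipFile("submission.zip").namelist())  # → ['clfs.txt']
```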

Organizers (alphabetical order)

Lasha Abzianidze (University of Groningen)

Johan Bos (University of Groningen)

Hessel Haagsma (University of Groningen)

Rik van Noord (University of Groningen)


The organizers can be reached via the Slack discussion group.


Pre-competition phase
Start: Oct. 1, 2018, midnight

Description: During the pre-competition phase you can make a lot of submissions of your system output. After submitting a system output, you should see either a score or an error produced by the evaluation script. The same evaluation script will be used in the competition phase.

Competition phase
Start: Feb. 25, 2019, noon

Post-competition phase
Start: March 12, 2019, 11:59 a.m.

Description: The phase serves as a test bed for DRS parsers. The reference set and the raw input are the same as in the competition phase. The leaderboard can be used to check the state of the art in DRS parsing.
