The shared task on DRS parsing will be co-located with IWCS-2019 held in Gothenburg, Sweden on 23-27 May.
DRS parsing is a semantic parsing task in which the meaning of natural language text needs to be automatically converted into a Discourse Representation Structure (DRS), a semantic representation with a long history in studies on formal semantics.
A Discourse Representation Structure (DRS) is a meaning representation introduced by the Discourse Representation Theory (DRT), a well-studied formalism developed in formal semantics (Kamp, 1984; Van der Sandt, 1992; Kamp and Reyle, 1993; Asher, 1993; Kadmon, 2001).
DRSs are able to model many challenging semantic phenomena, for example, quantifiers, negation, pronoun resolution, presupposition accommodation, and discourse structure. Concepts are represented by WordNet synsets, relations by VerbNet roles.
R. van Noord, L. Abzianidze, A. Toral, J. Bos (2018): Exploring Neural Methods for Parsing Discourse Representation Structures. Transactions of the Association for Computational Linguistics, 6, 619-633. [PDF] [BibTeX]
L. Abzianidze, J. Bjerva, K. Evang, H. Haagsma, R. van Noord, P. Ludmann, D. Nguyen, J. Bos (2017): The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations. EACL. [PDF] [BibTeX]
Competition submissions will be evaluated by computing the micro-average F-score on matching clauses of system output and the gold standard.
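As a rough illustration of the metric (not the official scorer, which additionally searches for the best variable mapping between system and gold clauses), the F-score can be sketched as follows; the counts are made-up for the example:

```python
# Hedged sketch of the clause-level F-score. "Micro-average" means the matched,
# system, and gold clause counts are summed over all documents before computing
# precision and recall, rather than averaging per-document scores.
def micro_f1(matched, system_total, gold_total):
    """F-score from corpus-wide clause counts."""
    precision = matched / system_total if system_total else 0.0
    recall = matched / gold_total if gold_total else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 80 matched clauses against 100 system clauses and 90 gold clauses
print(round(micro_f1(80, 100, 90), 3))  # → 0.842
```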
An animated illustration of the evaluation procedure (restart the animation with Shift + page reload):
As you can see, before the clause-matching procedure, the CLF referee checks whether system-produced clausal forms (CLFs) are well-formed and replaces ill-formed CLFs with a dummy never-matching CLF.
You should therefore fix any ill-formed CLFs produced by your system before submission. For a strict validation of system output CLFs, specify the signature file for the CLF referee. The strict validation additionally checks whether the semantic roles, discourse relations, and comparison operators (e.g., temporal and spatial) fall in the CLF signature. Run the strict validation of CLFs as follows:
python clf_referee.py system_clfs.txt -s clf_signature.yaml
We strongly advise running the strict validation on the system CLFs because the clauses with out-of-signature operators will never be matched to the gold clauses.
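As a rough pre-submission sanity check, the idea behind the signature check can be sketched as below. This is an illustration only, not a substitute for clf_referee.py: the real referee also validates variable types, concepts, and the overall structure of each clausal form, and the signature set here is a made-up toy example:

```python
# Toy sketch: flag clauses whose operator (second column) is outside a given
# signature. The official clf_referee.py does much more than this.
def out_of_signature(clf_lines, signature):
    """Return clauses whose operator is not in the allowed signature set."""
    bad = []
    for line in clf_lines:
        clause = line.split("%")[0].strip()   # drop comments starting with %
        if not clause:
            continue
        parts = clause.split()
        if len(parts) >= 2 and parts[1] not in signature:
            bad.append(clause)
    return bad

# Hypothetical tiny signature, for illustration only
signature = {"REF", "NOT", "Agent", "Theme", "EQU"}
clauses = ["b1 REF x1", "b1 Agnt e1 x1  % typo: should be Agent"]
print(out_of_signature(clauses, signature))  # → ['b1 Agnt e1 x1']
```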
By participating in this shared task you agree to the following terms and conditions. If any of these conditions is violated by a participant, the task organizers reserve the right to ban the participant from the task and withhold the scores of their systems.
All you need for a quick start is available in the DRS parsing repository.
In short, the repository contains:
Contact the organizers by joining the discussion group.
Participants need to submit a zip file containing a single file clfs.txt. The clfs.txt file must be encoded in UTF-8, contain a list of DRSs separated by blank lines, and follow the order of the corresponding raw file.
Each DRS has to be formatted in clause notation by representing it as a set of clauses (the order and repetitions do not matter). Each line contains one clause. Comments start with a %, and any information after this sign is not considered part of the clause (see the gold data for examples).
A quick anatomy of a clausal form of a sample DRS:
A clause is a triple or a quadruple whose components are separated by whitespace.
The first component of a clause is always a variable standing for a box label (remember the box format of a DRS!) and the second component is always a functor that determines the type of the clause. The rest of the components are either variables or constants (the latter enclosed in double quotes).
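A minimal sketch of splitting one line of a clausal form into its components, under the conventions above (comments start at %, constants are enclosed in double quotes); the helper name is our own, not part of the task toolkit:

```python
import shlex

def parse_clause(line):
    """Split a clause line into its components, keeping quotes on constants.

    Returns None for empty or comment-only lines.
    """
    clause = line.split("%")[0].strip()   # drop the comment part
    if not clause:
        return None
    # posix=False preserves the double quotes around constants
    return shlex.split(clause, posix=False)

print(parse_clause("b1 REF x1"))                    # a triple
print(parse_clause('b1 Name x1 "tom"  % constant')) # a quadruple with a constant
```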
For more details:
To facilitate communication between the task organizers and participants, we set up a discussion group at slack:
Go to the above link and create a Slack account by simply entering your email address. Check your inbox for a verification message from Slack. Confirm the email and enter an optional display name. Skip the intro steps and you are ready to post in the discussion group.
We created several channels to keep the communication tidy:
#codalab for the issues related to this competition website, e.g., problems with registration or submission;
#general basically for everything that is related to the shared task and is not specific to the CodaLab site;
#random for conversations not really related to the shared task as such, e.g., talks about weather, food or football.
| 20 Dec 2018 | Final data released (before the competition phase) |
| 25 Feb - 11 Mar 2019 | Competition phase |
| 14 Mar 2019 | Results are sent |
| 01 Apr 2019 | System description paper due by 11:59pm UTC-12 (we will also allow submissions of short system descriptions that will be included in the shared task overview paper, with the authors offered co-authorship; in this scenario, registration and attendance at the shared task workshop is not obligatory) |
| 08 Apr 2019 | Notification of acceptance |
| 24 Apr 2019 | Camera-ready due by 11:59pm UTC-12 |
| 23-27 May 2019 | IWCS main conference |
You can get the data from the Get Data page under the Participate tab.
Go to your submit/results page, click the icon next to the score you want to show, and press [Submit to Leaderboard]. The icon will then be marked for the submitted score.
For the competition and pre-competition phases, Counter is used with the same parameters:
python counter.py -f1 system_clfs.txt -f2 gold_clfs.txt -prin -s conc -r 20 -ill dummy -coda scores
For the meaning of each parameter see the DRS parsing repo.
One common reason can be that your zip archive contains a directory instead of the clfs.txt file. When creating the archive of your submission, use zip -j to junk the paths.
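If you build the archive programmatically, the same effect as `zip -j` can be achieved with Python's standard zipfile module; the `output/` directory here is a made-up example path:

```python
import os
import zipfile

# Stand-in for real parser output, in a hypothetical output/ directory
os.makedirs("output", exist_ok=True)
with open("output/clfs.txt", "w", encoding="utf-8") as f:
    f.write("b1 REF x1\n")

# arcname drops the directory prefix, so clfs.txt sits at the archive root
# (the Python-side equivalent of `zip -j submission.zip output/clfs.txt`)
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("output/clfs.txt", arcname="clfs.txt")

print(zipfile.ZipFile("submission.zip").namelist())  # → ['clfs.txt']
```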
Lasha Abzianidze (University of Groningen)
Johan Bos (University of Groningen)
Hessel Haagsma (University of Groningen)
Rik van Noord (University of Groningen)
The organizers can be reached via the Slack discussion group.
Start: Oct. 1, 2018, midnight
Description: During the pre-competition phase you can make a lot of submissions of your system output. After submitting a system output, you should see either a score or an error produced by the evaluation script. The same evaluation script will be used in the competition phase.
Start: Feb. 25, 2019, noon
Start: March 12, 2019, 11:59 a.m.
Description: The phase serves as a test bed for DRS parsers. The reference set and the raw input are the same as in the competition phase. The leaderboard can be used to check the state of the art in DRS parsing.