SemEval-2017 Task 9 (parsing subtask): English Biomedical AMR parsing

Organized by jonmay

Welcome!

Overview:

Abstract Meaning Representation (AMR) is a compact, readable, whole-sentence semantic annotation. Annotation components include entity identification and typing, PropBank semantic roles, individual entities playing multiple roles, entity grounding via wikification, and treatment of modality, negation, etc.

Here is an example AMR for the sentence “The London emergency services said that altogether 11 people had been sent to hospital for treatment due to minor wounds.”

(s / say-01
      :ARG0 (s2 / service
            :mod (e / emergency)
            :location (c / city :wiki "London"
                  :name (n / name :op1 "London")))
      :ARG1 (s3 / send-01
            :ARG1 (p / person :quant 11)
            :ARG2 (h / hospital)
            :mod (a / altogether)
            :purpose (t / treat-03
                  :ARG1 p
                  :ARG2 (w / wound-01
                        :ARG1 p
                        :mod (m / minor)))))

Note the inclusion of PropBank semantic frames ('say-01', 'send-01', 'treat-03', 'wound-01'), grounding via wikification ('London'), and multiple roles played by an entity (e.g. '11 people' are the ARG1 of send-01, the ARG1 of treat-03, and the ARG1 of wound-01).
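
If you prefer to inspect AMRs programmatically, graphs in this notation can be read with an off-the-shelf AMR reader. The sketch below uses the third-party penman library (our suggestion, not part of the task materials) to list the concepts and the re-entrant variables of the example above.

import penman                      # third-party: pip install penman (an assumption, not required by the task)
from collections import Counter

amr = """
(s / say-01
      :ARG0 (s2 / service
            :mod (e / emergency)
            :location (c / city :wiki "London"
                  :name (n / name :op1 "London")))
      :ARG1 (s3 / send-01
            :ARG1 (p / person :quant 11)
            :ARG2 (h / hospital)
            :mod (a / altogether)
            :purpose (t / treat-03
                  :ARG1 p
                  :ARG2 (w / wound-01
                        :ARG1 p
                        :mod (m / minor)))))
"""

g = penman.decode(amr)             # parse the notation into (source, role, target) triples
variables = {src for src, role, _ in g.triples if role == ":instance"}

# Every concept in the graph, including the PropBank frames say-01, send-01, treat-03, wound-01
print("concepts:", [tgt for _, role, tgt in g.triples if role == ":instance"])

# Re-entrancies: variables that are the target of more than one relation,
# e.g. p ('11 people') fills roles in send-01, treat-03, and wound-01
targets = Counter(tgt for _, role, tgt in g.triples
                  if role != ":instance" and tgt in variables)
print("re-entrant:", {v: n for v, n in targets.items() if n > 1})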

In 2016, SemEval held its first AMR parsing challenge and received strong submissions from 11 diverse teams. In 2017 we have extended the challenge to cover both parsing of biomedical data and generation from AMR. This subtask is concerned with the former:

Subtask 1: Parsing Biomedical Data

As in 2016, participants will be provided with parallel English-AMR training data; they must parse new English data and return the resulting AMRs. The genre, however, is quite different from 2016: the data consists of scientific articles on cancer pathway discovery.

Here is an example parse of the sentence "Among tested agents, the B-Raf inhibitor dabrafenib was found to induce a strong V600E-dependent shift in cell viability."

(f / find-01
      :ARG1 (i2 / induce-01
            :ARG0 (s / small-molecule :name (n3 / name :op1 "dabrafenib")
                  :ARG0-of (i3 / inhibit-01
                        :ARG1 (e2 / enzyme :name (n2 / name :op1 "B-Raf")))
                  :ARG1-of (i4 / include-01
                        :ARG2 (a / agent
                              :ARG1-of (t2 / test-01))))
            :ARG2 (s2 / shift-01
                  :ARG1 (v / viability
                        :mod (c / cell))
                  :ARG0-of (d / depend-01
                        :ARG1 (m / mutate-01 :value "V600E"))
                  :mod (s3 / strong))))

Participants may use any resources at their disposal (but may not hand-annotate the blind data or hire other human beings to hand-annotate the blind data). The SemEval trophy goes to the system with the highest Smatch score.

More example Bio data with AMRs can be found here.

Existing AMR-related research: Kevin Knight has been keeping a list here. It is hard to keep up, though, so please send email to jonmay@isi.edu if yours is missing and you want a citation.

How to Participate in the Evaluation

Participation is a two-phase process:

  1. Participate in the Development/Dry Run (optional but highly recommended)
  2. Participate in the Evaluation

Participation in each phase is more or less the same:

  1. Train a parser and run it on the appropriate test set.
  2. Create an answer file containing the parses of the test set. The answer file must follow this form (the development and training data also follow it):
    • The parses should be in the same order as the sentences in the test set.
    • Extra whitespace and newlines within parses are ignored; that is, a parse may span one or more (contiguous) lines.
    • There must be one or more empty lines between parses.
    • Any number of lines prefixed with "#" may come before or after a parse; these will be ignored.
  3. Create a submission package. This is a .zip file with at least one but possibly more files in a flat hierarchy (no subdirectories). A minimal packaging sketch follows this list.
    • For the Development/Dry Run Phase, the news/forum answer file must be named nf_answer.txt and the Biomedical answer file must be named bio_answer.txt. A submission may contain either or both files.
    • For the Evaluation Phase a single test set is used, and the answer file must be named bio_answer.txt.
  4. Navigate to the 'Participate' tab and the 'Submit/View Results' subtab. Enter any information into the box and click 'Submit' to upload your submission package.
  5. Refresh the page periodically until the status of your submission is 'Finished'. If something goes wrong, you may wish to debug by inspecting the various output logs and your scores.
    • During the Development/Dry Run Phase you may resubmit an unlimited number of times.
    • During the Evaluation Phase you may only submit twice, to discourage hill-climbing on the test data. Your last submission will be considered your official submission.
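
The following is a minimal packaging sketch, not official tooling. It assumes your parser has already written its parses, in test-set order and in the format described in step 2, to a file named bio_parses.txt (a placeholder name), and it bundles them as bio_answer.txt inside a flat submission.zip.

import zipfile

SRC = "bio_parses.txt"   # placeholder: your parser's output, one AMR per block
DST = "bio_answer.txt"   # required file name for the Evaluation Phase
OUT = "submission.zip"   # the package you upload

with open(SRC, encoding="utf-8") as f:
    text = f.read()

# Light sanity checks against the answer-file format: parses are separated by
# blank lines, and lines starting with "#" are comments that the scorer ignores.
blocks = [b for b in text.split("\n\n") if b.strip()]
for block in blocks:
    body = " ".join(line for line in block.splitlines()
                    if not line.lstrip().startswith("#")).strip()
    assert body.startswith("(") and body.endswith(")"), "malformed parse block"
print(len(blocks), "parses found")

# Write the .zip with a flat hierarchy (no subdirectories).
with zipfile.ZipFile(OUT, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr(DST, text)

For the Development/Dry Run Phase, substitute nf_answer.txt (or include both files) as appropriate.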

Evaluation Criteria

Please note that all evaluation criteria are subject to change at the whim of the task organizer.

The primary trophy-determining metric for this subtask will be the automated metric Smatch, commit fda1c9ea564a142d2dd0a6455627e69348662c9b, the version from 2016-11-14, located at https://github.com/snowblink14/smatch. This is subject to change.
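
If you want to check your numbers locally before submitting, the sketch below (ours, not the official scoring harness) shells out to the pinned smatch.py from a clone of the repository above, comparing a file of your parses against a file of gold AMRs; the file names and the path to the checkout are placeholders.

import subprocess

# Assumes you have run: git clone https://github.com/snowblink14/smatch
# and checked out commit fda1c9ea564a142d2dd0a6455627e69348662c9b in ./smatch,
# using whichever Python interpreter that version of smatch expects.
result = subprocess.run(
    ["python", "smatch/smatch.py", "-f", "bio_answer.txt", "gold.txt"],
    capture_output=True, text=True, check=True)
print(result.stdout, end="")  # smatch prints the document-level F-score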

Other automated metrics (such as other versions of Smatch) and sub-evaluations (such as Smatch calculated on specific parts of the corpus) will also be displayed in the online submission system and may appear in the SemEval workshop proceedings. These metrics are not official and not trophy-determining.

We welcome the proposal of other human and automated metrics for this task, since it is not at all clear that the above proposed methods are in fact the best way to evaluate systems. That being said, unless otherwise indicated by the task organizer, the trophy-determining metric is that listed above.

Terms and Conditions

By submitting to the 'Evaluation' phase of this track you agree to the public release of your submissions' scores at the SemEval 2017 workshop and in the associated publicly available proceedings, at the task organizer's discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and other metrics as the task organizer sees fit. You accept that the ultimate decision of metric choice and score value is that of the task organizer. You further agree that your system will be named according to the team name provided at the time of submission or to a suitable shorthand, as determined by the task organizer. You agree that the task organizer is under no obligation to release scores and that scores may be withheld if it is the task organizer's judgement that the submission was incomplete, deceptive, or violated the letter or spirit of the competition's rules. Inclusion or exclusion of a submission's scores is not an endorsement or unendorsement of a team or individual's submission, system, or science. You further acknowledge that all trophy-making decisions are made at the sole discretion of the task organizer and that the organizer may present zero or more trophies. The definition of what constitutes a trophy is up to the task organizer.

News/Forum Development/Dry Run

Start: Aug. 1, 2016, midnight

Description: Parse News/Forum data from 2016 SemEval Task 8 (LDC2016E33) and/or the Bio AMR Corpus version 0.8 test subcorpus. The News/Forum data is included with the Newswire/Discussion Forum data pack released by LDC, and the Bio data is publicly available. Follow the instructions provided in the 'Get Data' tab to obtain it. See 'Evaluation' under 'Learn the Details' for information on how to submit.

Evaluation

Start: Jan. 9, 2017, midnight

Description: Parse the SemEval 2017 Task 9 Bio AMR Evaluation corpus. This data will be released when the evaluation period begins. See 'Evaluation' under 'Learn the Details' for information on how to submit.

Competition Ends

Jan. 20, 2017, midnight
