[OLD] DeftEval 2020 (SemEval 2020 - Task 6)

Organized by sspala


DeftEval: Extracting term-definition pairs in free text

PLEASE NOTE: The DeftEval competition has moved to a new Codalab page. Please make submissions on the new page.

We have migrated the competition to a new Codalab page in order to handle some bugs with the leaderboard in advance of our evaluation period. This page will remain available for your records, but please make new submissions on the new competition page. You will not be able to make submissions on this page during the formal evaluation period.

 

Motivation

Welcome! Definition extraction has been a popular topic in NLP research for well over a decade, but has historically been limited to well-defined, structured, and narrow conditions. In reality, natural language is complicated, and complicated language requires both more sophisticated solutions and data that reflects that reality. The DEFT corpus expands on these cases to include term-definition pairs that cross sentence boundaries, lack explicit definitors or definition-like verb phrases (e.g., "is," "means," "is defined as"), or appear in non-hypernym structures.

Subtasks

DeftEval is split into three subtasks:

Subtask 1: Sentence Classification

Given a sentence, classify whether or not it contains a definition. This is the traditional definition extraction task.
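
For illustration only, a minimal baseline for this subtask could treat it as plain binary text classification. The sketch below uses scikit-learn with TF-IDF features and logistic regression; the example sentences, labels, and the step of extracting them from the .deft files are hypothetical and not part of the official task pipeline.

    # Hypothetical Subtask 1 baseline sketch: binary sentence classification.
    # Extracting (sentence, label) pairs from the .deft files is assumed to have
    # been done already; see the Data page for the actual file format.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "A solid is a state of matter that holds its own shape.",
        "Heat the sample slowly over several minutes.",
    ]
    train_labels = [1, 0]  # 1 = contains a definition, 0 = does not

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(train_sentences, train_labels)

    print(clf.predict(["An enzyme is a protein that catalyzes chemical reactions."]))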

Subtask 2: Sequence Labeling

Label each token with BIO tags according to the corpus' tag specification (see Data page).
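
To give a rough sense of what the sequence labeling layer looks like (the authoritative tag specification and .deft column layout are on the Data page), the sketch below collapses a BIO-tagged token sequence into labeled spans. The sentence and tags are hypothetical.

    # Hypothetical BIO-tagged sentence; see the Data page for the real tag spec.
    tokens = ["A", "solid", "is", "a", "state", "of", "matter", "that",
              "holds", "its", "shape", "."]
    tags = ["O", "B-Term", "O", "B-Definition", "I-Definition", "I-Definition",
            "I-Definition", "I-Definition", "I-Definition", "I-Definition",
            "I-Definition", "O"]

    def bio_to_spans(tokens, tags):
        """Collapse a BIO sequence into (label, span text) pairs."""
        spans, label, buffer = [], None, []
        for token, tag in zip(tokens, tags):
            starts_new = tag == "O" or tag.startswith("B-") or label != tag[2:]
            if starts_new:
                if buffer:
                    spans.append((label, " ".join(buffer)))
                label, buffer = (tag[2:], [token]) if tag != "O" else (None, [])
            else:
                buffer.append(token)
        if buffer:
            spans.append((label, " ".join(buffer)))
        return spans

    print(bio_to_spans(tokens, tags))
    # [('Term', 'solid'), ('Definition', 'a state of matter that holds its shape')]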

Subtask 3: Relation Classification

Given the tag sequence labels, label the relations between each tag according to the corpus' relation specification (see Data page).
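
To make the relation layer concrete, here is a hedged illustration (not the official format; see the Data page for the actual relation specification): each labeled span carries an ID, and Subtask 3 asks for a relation label on pairs of those IDs, e.g. a Definition span that Direct-defines a Term span. The IDs and data structure below are hypothetical.

    # Hypothetical illustration of Subtask 3: relations are labels on pairs of
    # tagged spans. The real relation specification is on the Data page.
    spans = {
        "T1": ("Term", "solid"),
        "T2": ("Definition", "a state of matter that holds its shape"),
    }
    relations = [
        ("T2", "Direct-defines", "T1"),  # the Definition span directly defines the Term span
    ]
    for source, relation, target in relations:
        print(f"{spans[source][1]!r} --{relation}--> {spans[target][1]!r}")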

You may participate in any combination of the three subtasks, but note that the evaluation period for Subtask 3 will occur only after the end of the evaluation period for Subtask 2 in order to avoid any unfair release of test data.

Important Dates

Please note that there are new evaluation dates as of 3 Dec 2019 to reflect the new SemEval deadlines:

  • Trial Data Release: 15 Aug 2019
  • Training Period: 04 Sept 2019
  • Subtask 1 Evaluation Period: 19 Feb 2020 - 29 Feb 2020
  • Subtask 2 Evaluation Period: 19 Feb 2020 - 29 Feb 2020
  • Subtask 3 Evaluation Period: 1 March 2020 - 11 March 2020

Task Organizers

  • Sasha Spala, Adobe Document Cloud, sspala at adobe dot com
  • Nicholas A Miller, Adobe Document Cloud
  • Franck Dernoncourt, Adobe Research
  • Carl Dockhorn, Adobe Document Cloud

For questions and issues related to the task, please see the DeftEval-2020 forum. For questions and issues related to the data, please log issues on Github. To contact the organizers, please email the organizers at semeval-2020-task-6-organizers@googlegroups.com.

Evaluation Criteria

  1. Subtask 1: Sentence Classification
    We will report P/R/F1 for the positive and negative classes. The official score will be based on the F1 for the positive class.
  2. Subtask 2: Sequence labeling
    We will report P/R/F1 for each evaluated class, as well as macro- and micro-averaged F1 for the evaluated classes. The official score will be based on the macro-averaged F1 of the evaluated classes. Evaluated classes include: Term, Alias-Term, Referential-Term, Definition, Referential-Definition, and Qualifier.
  3. Subtask 3: Relation extraction
    We will report P/R/F1 for each evaluated relation, as well as macro- and micro-averaged F1 for the evaluated relations. The official score will be based on the macro-averaged F1 of the evaluated relations. The evaluated relations include: Direct-defines, Indirect-defines, Refers-to, AKA, and Qualifies.

You can run these metrics locally using the evaluation code available on the DEFT Github repo. The test set is drawn from the same distribution as the train and dev sets.
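
If you want a quick, unofficial sanity check of Subtask 2-style scoring before running the official scorer, something like the scikit-learn sketch below computes per-class and macro-/micro-averaged F1. The gold and predicted label lists are hypothetical, and the official evaluation code on the DEFT Github repo remains the reference for exactly how spans and classes are scored.

    # Unofficial sanity check of per-class and averaged F1 with scikit-learn.
    # The evaluation code on the DEFT Github repo is the reference scorer.
    from sklearn.metrics import precision_recall_fscore_support, f1_score

    evaluated = ["Term", "Alias-Term", "Referential-Term",
                 "Definition", "Referential-Definition", "Qualifier"]

    # Hypothetical gold and predicted class labels, one per token.
    gold = ["Term", "Definition", "Definition", "O", "Term"]
    pred = ["Term", "Definition", "O", "O", "Qualifier"]

    p, r, f1, _ = precision_recall_fscore_support(gold, pred, labels=evaluated, zero_division=0)
    for cls, p_c, r_c, f_c in zip(evaluated, p, r, f1):
        print(f"{cls:25s} P={p_c:.2f} R={r_c:.2f} F1={f_c:.2f}")

    print("macro F1:", f1_score(gold, pred, labels=evaluated, average="macro", zero_division=0))
    print("micro F1:", f1_score(gold, pred, labels=evaluated, average="micro", zero_division=0))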

 

Running evaluation via Codalab

You may wish to run your evaluation via the Codalab framework to check your input formatting and to submit to the public leaderboard before the evaluation period begins. You must submit through the Codalab evaluation framework during the evaluation period in order for your submission to count towards the official competition ranking.


Please note the following phases and their corresponding evaluation data:

Practice Data: Trial data, found here

Training: Dev data, found here

All Evaluation phases: Test data, unlabeled data to be posted during the appropriate evaluation phase dates for each task.

To submit through Codalab, follow these steps: 

  1. Navigate to Participate -> Submit/View Results
  2. To test your results against the trial data (for formatting only), select 'Practice Data'.
    To test your results against dev data during the training period, select 'All tasks: training'.
    To submit your final results during the evaluation period, select the appropriate evaluation task. Note that the evaluation dates for each task are staggered to prevent unfair advantages on any of the tasks.
  3. Click 'Submit' to submit a .zip file containing your files to evaluate

    Your submission zip should include the .deft files for the task you want to evaluate, named using the following convention:
    task_[task number]_[name of source file].deft

    For example, if you are submitting files for task 1 to evaluate against the dev files, you would include the following files:
    task_1_t1_biology_0_0.deft
    task_1_t1_biology_0_202.deft
    ... 
    task_1_t7_government_2_303.deft

    If you are submitting multiple task files at once, simply include those files in the same submission.

    Be sure to double check that you are using the correct data (i.e., dev vs. training), as there are overlaps in the naming conventions across files in the separate sets.
    When submitting your zip file, zip only the files in your results folder. Do not submit a .zip file containing a folder; one way to build such a flat archive is sketched after these steps.
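
Assuming your renamed task_*.deft files sit in a local results/ directory (the directory name and paths here are hypothetical), the following Python sketch builds a flat zip with no enclosing folder inside the archive.

    # Sketch: build a flat submission.zip from a results/ directory, with no
    # enclosing folder inside the archive. Paths and file names are hypothetical.
    import zipfile
    from pathlib import Path

    results_dir = Path("results")  # directory containing your task_*.deft files
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for deft_file in sorted(results_dir.glob("task_*.deft")):
            # arcname drops the directory so the files sit at the root of the zip.
            zf.write(deft_file, arcname=deft_file.name)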

 

Terms and Conditions

Data for this competition consists of annotations on excerpts from freely available textbooks at www.cnx.org. All data, including annotations, is provided under the CC-BY-NA 4.0 license.

Phases

  • Practice Data (starts Aug. 15, 2019, midnight)
  • All tasks: Training (starts Sept. 4, 2019, midnight)
  • Subtask 1: Evaluation (starts Feb. 19, 2020, midnight)
  • Subtask 2: Evaluation (starts Feb. 19, 2020, midnight)
  • Subtask 3: Evaluation (starts March 1, 2020, midnight)
  • Post-Evaluation (starts March 12, 2020, midnight)
  • Competition Ends: Never
