SemEval2021 Task 8 - MeasEval - Counts and Measurements

Organized by chrpr


Welcome to MeasEval: Counts and Measurements!

Overview

Counts and measurements are an important part of scientific discourse. It is relatively easy to find measurements in text, but a bare measurement like "17 mg" is not informative on its own: it only becomes useful when linked to the entity and property being measured. Relatively little attention has been given to parsing and extracting these semantic relations. The task is challenging because the way scientists write can be ambiguous and inconsistent, and the location of this information relative to the measurement can vary greatly.

MeasEval is a new entity and semantic relation extraction task focused on finding counts and measurements, attributes of these quantities, and additional information including measured entities, properties, and measurement contexts.

Task List

MeasEval is composed of five sub-tasks that cover span extraction, classification, and relation extraction, including cross-sentence relations. Given a paragraph from a scientific text (an illustrative annotated example follows this list):

  1. Identify all Quantities in the text, specify if they are counts or measurements, and identify their spans in the text.
  2. For measurements, identify the unit. For both counts and measurements, classify additional value information (count, range, approximate, mean, etc.).
  3. For both counts and measurements, identify the MeasuredEntity, if one exists. If a MeasuredProperty also exists, mark its span.
  4. Identify the location of "Qualifiers" to record any additional related context that is needed to either validate or understand the observed count or measurement.
  5. Create relationships between Quantity, MeasuredEntity, MeasuredProperty, and Qualifier spans using the HasQuantity, HasProperty, and Qualifies relation types.
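
To make the sub-tasks concrete, below is a minimal sketch of what one fully annotated example might look like, written as a Python dictionary. The sentence, character offsets, and field names are illustrative assumptions only and do not reflect the official MeasEval data format; see the GitHub repo below for the actual schema.

    # Hypothetical annotation for one sentence. Field names and structure
    # are illustrative only, NOT the official MeasEval distribution format.
    text = "The samples were heated to approximately 400 degrees C for two hours."

    annotation = {
        "quantities": [                          # Task 1: spans, count vs. measurement
            {"id": "q1", "span": (27, 54), "type": "measurement",
             "unit": "degrees C", "modifiers": ["approximate"]},   # Task 2
            {"id": "q2", "span": (59, 68), "type": "measurement",
             "unit": "hours", "modifiers": []},
        ],
        "measured_entities": [                   # Task 3: what is being measured
            {"id": "e1", "span": (0, 11)},       # "The samples"
        ],
        "measured_properties": [],               # none in this sentence
        "qualifiers": [],                        # Task 4: extra validating context
        "relations": [                           # Task 5
            {"type": "HasQuantity", "source": "e1", "target": "q1"},
            {"type": "HasQuantity", "source": "e1", "target": "q2"},
        ],
    }

    # Sanity-check the character offsets used above.
    assert text[27:54] == "approximately 400 degrees C"
    assert text[59:68] == "two hours"
    assert text[0:11] == "The samples"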

Data Availability

Additional resources and data will be available in the MeasEval GitHub repo.

Register and Participate

Register your team on the CodaLab Participate page.

Join our listserv at https://groups.google.com/forum/#!forum/measeval-semeval-2021

Important Dates

  • Trial data ready: July 31, 2020
  • Task website ready: August 14, 2020
  • Training data ready: October 1, 2020
  • Test data ready: December 3, 2020
  • Evaluation start: January 10, 2021
  • Evaluation end: January 31, 2021
  • Paper submission due: February 23, 2021
  • Notification to authors: March 29, 2021
  • Camera ready due: April 5, 2021
  • SemEval workshop: Summer 2021

Organizers

Corey Harper, Elsevier Labs and INDE lab at the University of Amsterdam
Jessica Cox, Elsevier Labs
Ron Daniel, Elsevier Labs
Paul Groth, INDE lab at the University of Amsterdam
Curt Kohler, Elsevier Labs
Antony Scerri, Elsevier Labs

Evaluation Criteria

Evaluation will be based on precision, recall, and F1 metrics for the classification subtasks, and on SQuAD-style Exact Match (EM) and Overlap ("F1") scores for the span components. We opt for SQuAD-style F1 because we wish to give partial credit for overlapping spans, given the difficulty of exactly matching entities, properties, and contexts, which may include various modifiers and determiners.

For the classification components of Task 2 and in the relations for Task 5, we will provide P/R/F1 for each of the evaluated classes, along with micro and macro averages.
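
As an informal illustration of per-class scores plus micro and macro averages (not the official evaluation script, which will be released separately), standard scikit-learn metrics compute these quantities directly:

    # A minimal sketch of per-class and averaged P/R/F1 using scikit-learn.
    # The gold/predicted labels below are invented for illustration.
    from sklearn.metrics import precision_recall_fscore_support

    gold = ["HasQuantity", "HasProperty", "Qualifies", "HasQuantity"]
    pred = ["HasQuantity", "HasQuantity", "Qualifies", "HasQuantity"]
    labels = ["HasQuantity", "HasProperty", "Qualifies"]

    # One precision/recall/F1 value per relation type.
    per_class = precision_recall_fscore_support(
        gold, pred, labels=labels, zero_division=0)

    # Micro pools all decisions together; macro averages the per-class scores.
    micro = precision_recall_fscore_support(
        gold, pred, average="micro", zero_division=0)
    macro = precision_recall_fscore_support(
        gold, pred, average="macro", zero_division=0)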

For the span identification components of Tasks 1, 2, 3, and 4, we will provide SQuAD-style Exact Match (EM) and Overlap (“F1”) scores for the provided spans.
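
For reference, here is a minimal sketch of the SQuAD-style overlap score for a single span pair, computed as the harmonic mean of token-level precision and recall; the official script may differ in tokenization and normalization details:

    from collections import Counter

    def span_overlap_f1(pred_span: str, gold_span: str) -> float:
        """SQuAD-style overlap "F1": token bag overlap between two spans."""
        pred_tokens = pred_span.split()
        gold_tokens = gold_span.split()
        num_same = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    # Exact Match (EM) is all-or-nothing; overlap F1 gives partial credit.
    print(span_overlap_f1("approximately 400 degrees C", "400 degrees C"))  # ~0.857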

Our evaluation script will be made available prior to the October 1 release of our training data.

Terms and Conditions

Data for this competition consists of annotations on CC-BY ScienceDirect articles available from the Elsevier Labs OA-STM-Corpus. All data, including annotations, is provided under the CC-BY license.

The organizers make no warranties regarding the Dataset, including but not limited to its being up-to-date, correct, or complete.

By submitting results to this competition, you consent to the public release of your scores at SemEval2021 and in related publications.

Phases

  • Practice: starts Oct. 1, 2020, midnight UTC
  • Evaluation: starts Jan. 10, 2021, midnight UTC
  • Post-Evaluation: starts Feb. 1, 2021, midnight UTC
  • Competition ends: never
