SemEval2021 Task 8 - MeasEval - Counts and Measurements

Organized by chrpr

Welcome to MeasEval: Counts and Measurements!


Counts and measurements are an important part of scientific discourse. It is relatively easy to find measurements in text, but a bare measurement like "17 mg" is not informative on its own: its meaning depends on what entity is being measured, which property of that entity, and under what conditions. Despite this, relatively little attention has been given to parsing and extracting these important semantic relations. The task is challenging because scientific writing can be ambiguous and inconsistent, and the location of this information relative to the measurement can vary greatly.

MeasEval is a new entity and semantic relation extraction task focused on finding counts and measurements, attributes of these quantities, and additional information including measured entities, properties, and measurement contexts.

Task List

(Updated 12 Nov 2020)

MeasEval is composed of five sub-tasks that cover span extraction, classification, and relation extraction, including cross-sentence relations. Note that all submissions will be evaluated against all five sub-tasks. Given a paragraph from a scientific text:

  1. For each paragraph of text, identify all Quantity spans.
  2. For each identified Quantity, identify its Unit of measurement, if one exists, and classify any additional value Modifiers (count, range, approximate, mean, etc.) that apply to the Quantity.
  3. For each identified Quantity, identify the MeasuredEntity it applies to (if one exists) and mark its span. If an associated MeasuredProperty also exists, identify it and mark its span as well.
  4. Identify and mark the span of any Qualifier that records additional context needed to validate or understand each identified Quantity.
  5. Identify relationships between Quantity, MeasuredEntity, MeasuredProperty, and Qualifier spans using the HasQuantity, HasProperty, and Qualifies relation types.
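To make the five sub-tasks concrete, the outputs for a single Quantity might be collected into a structure like the following. This is a hypothetical sketch for illustration only; the field names, the `Span` class, and the example sentence are all invented here, and the official submission format is defined in the Annotation Guidelines.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    start: int  # character offset into the paragraph
    end: int    # exclusive end offset
    text: str

@dataclass
class QuantityAnnotation:
    quantity: Span                             # sub-task 1: the Quantity span
    unit: Optional[str] = None                 # sub-task 2: Unit of measurement
    modifiers: List[str] = field(default_factory=list)  # sub-task 2: e.g. "range", "approximate"
    measured_entity: Optional[Span] = None     # sub-tasks 3/5: linked via HasQuantity
    measured_property: Optional[Span] = None   # sub-tasks 3/5: linked via HasProperty
    qualifier: Optional[Span] = None           # sub-tasks 4/5: linked via Qualifies

# Invented example paragraph and annotation:
text = "The sample was heated to approximately 300 K."
ann = QuantityAnnotation(
    quantity=Span(39, 44, "300 K"),
    unit="K",
    modifiers=["approximate"],
    measured_entity=Span(4, 10, "sample"),
)
```

Character offsets are exclusive at the end, so a span's text can be recovered by slicing the paragraph, e.g. `text[ann.quantity.start:ann.quantity.end]`.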

More detailed definitions can be found by reviewing our Annotation Guidelines.

Data Availability

Additional resources and data will be available on the MeasEval GitHub repo.

Register and Participate

Register your team on the CodaLab Participate page.

Join our listserv at!forum/measeval-semeval-2021

Important Dates

  • Trial data ready: July 31, 2020
  • Task website ready: August 14, 2020
  • Training data ready: October 1, 2020
  • Test data ready: December 3, 2020
  • Evaluation start: January 10, 2021
  • Evaluation end: January 31, 2021
  • Paper submission due: February 23, 2021
  • Notification to authors: March 29, 2021
  • Camera ready due: April 5, 2021
  • SemEval workshop: Summer 2021


Organizers

Corey Harper, Elsevier Labs and INDE lab at the University of Amsterdam
Jessica Cox, Elsevier Labs
Ron Daniel, Elsevier Labs
Paul Groth, INDE lab at the University of Amsterdam
Curt Kohler, Elsevier Labs
Antony Scerri, Elsevier Labs

Evaluation Criteria

(Updated 12 Nov 2020)

Evaluation will be based on a global F1 score averaged across all subtasks. For the classification and relation extraction subtasks, this is a binary match score; for the span identification subtasks, it is based on a SQuAD-style overlap ("F1") score.

Although more granular scores will not be included in the leaderboard, the evaluation code can be executed locally to provide Exact Match scores for span identification tasks and P/R/F1 scores for all subtask components. For self-evaluation purposes prior to the test and evaluation period, the code can also be configured to provide scores averaged by docId (paragraph) or for each of nine separate score components of the five subtasks. These score components are: Quantity, Unit, Modifiers, MeasuredProperty, MeasuredEntity, Qualifier, HasQuantity, HasProperty, and Qualifies.
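The SQuAD-style overlap score mentioned above can be sketched as token-level precision and recall between a predicted span and a gold span. This is a simplified illustration using whitespace tokenization, not the official scorer; the evaluation code in the MeasEval repo is authoritative.

```python
from collections import Counter

def overlap_f1(pred: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold span (SQuAD style)."""
    pred_tokens = pred.split()
    gold_tokens = gold.split()
    # Count tokens shared between prediction and gold (multiset intersection).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(overlap_f1("the measured sample", "measured sample"))  # partial overlap -> 0.8
print(overlap_f1("300 K", "300 K"))                          # exact match -> 1.0
```

Exact Match, by contrast, simply awards 1.0 when the two spans are identical and 0.0 otherwise, which is why the locally runnable scorer reports both.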

Terms and Conditions

Data for this competition is in the form of annotations on CC-BY ScienceDirect Articles available from the Elsevier Labs OA-STM-Corpus. All data, including annotations, is provided under the CC-BY license.

The organizers make no warranties regarding the Dataset, including but not limited to being up-to-date, correct, or complete.

By submitting results to this competition, you consent to the public release of your scores at SemEval2021 and in related publications.
