SemEval 2019 Task 11 Normalization of Medical Concepts in Clinical Narrative

Organized by SemEval.2019.Task.11


We are currently clarifying data access issues with i2b2.
Participation requests will be approved once these issues are resolved.
Thank you for your interest in the task.

Clinical findings, diseases, procedures, body structures, and medications recorded in medical notes constitute invaluable resources for diverse clinical applications. Effective use and exchange of information about clinically relevant concepts in free-text clinical narratives require two complementary processes: Named Entity Recognition (NER) and Named Entity Normalization (NEN). NER for clinical notes identifies the mention spans of clinically relevant concepts and has been well explored in the clinical NLP research community.

NEN involves linking named entities to concepts in standardized medical terminologies, thereby allowing for better generalization across contexts. For example, one may use heart attack, MI, or myocardial infarction to refer to the same general concept, and unless a mapping to a standardized vocabulary concept is available, generalizing across these mentions is very difficult. To date, there have been very few shared tasks focused on NEN, such as the well-known ShARe/CLEF eHealth 2013 Task 1, SemEval-2014 Task 7, and SemEval-2015 Task 14 challenges. However, those CLEF/SemEval challenges focused specifically on disorder mentions.

In this task, we focus specifically on NEN. We extend the previous CLEF/SemEval normalization work to a much broader set of clinical concepts, not limiting the task to disorders. The task involves normalization over an existing annotation of named entities, namely the clinical concepts annotated as medical problems, treatments, and tests in the fourth i2b2/VA Shared Task. Unlike the previous CLEF/SemEval tasks, a named entity is mapped to a Concept Unique Identifier (CUI) in the UMLS 2017AB version from either SNOMED CT or RxNorm. In the above example, the equivalent mentions referring to "Myocardial Infarction" may be mapped to the CUI C0027051 in the UMLS.
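
To illustrate the mapping itself, the sketch below shows a minimal exact-string-lookup baseline in Python. The lexicon file name, its tab-separated format, and the "CUI-less" fallback label are assumptions made for illustration only; they are not part of the task definition or the official data release.

    # Minimal dictionary-lookup normalization sketch (assumptions noted above).
    def load_lexicon(path):
        """Read hypothetical 'term<TAB>CUI' lines into a lookup dictionary."""
        lexicon = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                term, cui = line.rstrip("\n").split("\t")
                lexicon[term.lower()] = cui
        return lexicon

    def normalize(mention, lexicon):
        """Return the CUI for a mention; fall back to a placeholder label."""
        return lexicon.get(mention.lower(), "CUI-less")

    lexicon = load_lexicon("snomed_rxnorm_terms.tsv")  # hypothetical file
    for mention in ["heart attack", "MI", "myocardial infarction"]:
        print(mention, "->", normalize(mention, lexicon))

A real system would of course go beyond exact string matching (e.g., abbreviation expansion or context-sensitive disambiguation), but the input/output contract is the same: a mention span in, a CUI out.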

Note: Since the i2b2 data requires a Data Use Agreement (DUA), we will only provide the annotations, which include character offsets and the corresponding CUIs of the mention spans.

Accuracy is used to evaluate and compare system performance.
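
The official evaluation script is not reproduced here; the sketch below only illustrates the metric, assuming one gold CUI and one predicted CUI per line, in the same order. The file names are placeholders.

    # Accuracy sketch: fraction of mentions whose predicted CUI matches the gold CUI.
    def accuracy(gold_path, pred_path):
        with open(gold_path) as g, open(pred_path) as p:
            gold = [line.strip() for line in g]
            pred = [line.strip() for line in p]
        if len(gold) != len(pred):
            raise ValueError("prediction count must match gold count")
        correct = sum(1 for gc, pc in zip(gold, pred) if gc == pc)
        return correct / len(gold)

    print(accuracy("gold_answer.txt", "answer.txt"))  # placeholder file names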

Please follow the specifications below for successful evaluation.

  • A list of file names of the medical notes will be provided. Participants need to concatenate the predicted CUIs from the individual notes, following the order in the list, into a single submission file named "answer.txt" (see the sketch after this list).

  • Within an individual note, the predicted CUIs should follow the ascending order of the provided mention ids.

  • Each row in the submission file lists one CUI predicted by your system.
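
A minimal sketch of assembling answer.txt is shown below. It assumes each note's predictions are stored in a "<note name>.pred" file with one CUI per line, already sorted by ascending mention id, and that the provided list of note file names is saved as "note_list.txt"; these file names are assumptions for illustration.

    # Concatenate per-note predictions into a single answer.txt,
    # preserving the note order given in the provided list.
    with open("note_list.txt") as f:
        note_names = [line.strip() for line in f if line.strip()]

    with open("answer.txt", "w") as out:
        for name in note_names:
            with open(name + ".pred") as pred_file:  # hypothetical per-note file
                for cui in pred_file:
                    out.write(cui.strip() + "\n")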

Terms and Conditions

This page enumerates the terms and conditions of the competition.

  • Trial: starts June 1, 2018, midnight

  • Development: starts Sept. 17, 2018, midnight

  • Evaluation: starts Jan. 10, 2019, midnight

  • Post-Evaluation: starts Jan. 31, 2019, midnight

  • Competition ends: never
