SemEval 2019 Task 11: Normalization of Medical Concepts in Clinical Narrative

Organized by SemEval.2019.Task.11


Dear SemEval Task 11 participants,

As you know, we have been trying to resolve licensing issues with the i2b2 data. We are sorry to inform you that we were not able to resolve them by the SemEval deadline for the release of training data. Therefore, we are unfortunately forced to withdraw Task 11 from SemEval 2019.

However, we are happy to announce that the task will instead run as a spin-off task at N2C2 next spring. We invite all participants to join the task Google group so that we can keep you informed about the updated timeline (https://groups.google.com/forum/#!forum/semeval_2019_task_11_normalization). We will send invitations to join the group to all currently registered participants. If you haven't registered but would like to participate, please send us a request to join the group.

Please accept our apologies for this change in timeline! We are looking forward to making our dataset available to the clinical NLP community, and we hope you will be able to join us for the shared task at the new venue.

SemEval 2019 Task 11 Organizers

Yen-Fu Luo and Anna Rumshisky

Text Machine Lab for NLP

http://text-machine.cs.uml.edu


Clinical findings, diseases, procedures, body structures, and medications recorded in medical notes constitute invaluable resources for diverse clinical applications. Effective use and exchange of information about clinically relevant concepts in free-text clinical narratives require two complementary processes: Named Entity Recognition (NER) and Named Entity Normalization (NEN). NER for clinical notes identifies the mention spans of clinically relevant concepts, a task that has been well explored in the clinical NLP research community.

NEN involves linking named entities to concepts in standardized medical terminologies, thereby allowing for better generalization across contexts. For example, one may use heart attack, MI, or myocardial infarction to refer to the same general concept, and unless a mapping to a standardized vocabulary concept is available, generalizing across these mentions is very difficult. To date, very few shared tasks have focused on NEN, among them the well-known ShARe/CLEF eHealth 2013 Task 1, SemEval-2014 Task 7, and SemEval-2015 Task 14 challenges. However, these CLEF/SemEval challenges focused specifically on disorder mentions.

In this task, we focus specifically on NEN. We extend the previous CLEF/SemEval normalization work to a much broader set of clinical concepts, not limiting the task to disorders. The task involves normalization over an existing annotation of named entities: the clinical concepts annotated as medical problems, treatments, and tests in the fourth i2b2/VA Shared Task. Unlike previous CLEF/SemEval tasks, each named entity is mapped to a Concept Unique Identifier (CUI) from either SNOMED CT or RxNorm in UMLS version 2017AB. In the example above, the equivalent mentions referring to "myocardial infarction" would all be mapped to CUI C0027051 in the UMLS.
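To make the mapping concrete, here is a minimal illustrative sketch in Python (not an official baseline) that normalizes mentions by exact lookup in a dictionary built from UMLS concept names; the name_to_cui dictionary and the "CUI-less" fallback are hypothetical stand-ins.

    # Hypothetical lookup table that would be built offline from UMLS 2017AB
    # concept names, restricted to the SNOMED CT and RxNorm source vocabularies.
    name_to_cui = {
        "heart attack": "C0027051",
        "mi": "C0027051",
        "myocardial infarction": "C0027051",
    }

    def normalize(mention, fallback="CUI-less"):
        """Map a mention string to a CUI via case-insensitive exact match."""
        return name_to_cui.get(mention.strip().lower(), fallback)

    print(normalize("Myocardial Infarction"))  # -> C0027051
    print(normalize("unseen term"))            # -> CUI-less (hypothetical fallback)

A real system would of course need to handle mentions that do not match any concept name exactly, for instance via approximate string matching or a learned ranking model.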

Note: Since the i2b2 data requires a Data Use Agreement (DUA), we will only provide the annotations, which include the character offsets and corresponding CUIs of the mention spans.

Accuracy is used to evaluate and compare system performance.
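Concretely, accuracy here is the fraction of mentions whose predicted CUI exactly matches the gold CUI. A minimal sketch in Python, assuming aligned lists of gold and predicted CUIs:

    def accuracy(gold_cuis, pred_cuis):
        """Fraction of mentions whose predicted CUI matches the gold CUI."""
        assert len(gold_cuis) == len(pred_cuis)
        return sum(g == p for g, p in zip(gold_cuis, pred_cuis)) / len(gold_cuis)

    # Toy example: one of two predictions matches the gold CUI.
    print(accuracy(["C0027051", "C0020538"], ["C0027051", "C0011849"]))  # 0.5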

Please follow the specifications below for successful evaluation.

  • A list of the file names of the medical notes will be provided. Participants will need to concatenate the predicted CUIs from the individual notes, following the order in the list, into a single submission file named "answer.txt" (see the sketch after this list).

  • Within an individual note, the predicted CUIs should follow the provided mention ids in ascending order.

  • Each row in the submission file lists one CUI predicted by your system.
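A minimal sketch of assembling "answer.txt" is given below; the file-list name "file_list.txt" and the per-note predictor predict_cuis are hypothetical placeholders, not part of the official task materials.

    # Sketch: build answer.txt by concatenating per-note predictions in the
    # order given by the provided file list. Names below are hypothetical.

    def predict_cuis(note_file):
        """Hypothetical per-note predictor: return one CUI per mention,
        ordered by ascending mention id. Replace with your system's output."""
        return []

    with open("file_list.txt") as f:  # assumed name for the provided file list
        note_files = [line.strip() for line in f if line.strip()]

    with open("answer.txt", "w") as out:
        for note_file in note_files:
            for cui in predict_cuis(note_file):
                out.write(cui + "\n")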

Terms and Conditions

This page enumerates the terms and conditions of the competition.

Trial

Start: June 1, 2018, midnight

Development

Start: Sept. 17, 2018, midnight

Evaluation

Start: Jan. 10, 2019, midnight

Post-Evaluation

Start: Jan. 31, 2019, midnight

Competition Ends

Never
