We are currently clarifying data access issues with i2b2.
Participation requests will be approved once these issues are resolved.
Thank you for your interest in the task.
Clinical findings, diseases, procedures, body structures, and medications recorded in medical notes constitute invaluable resources for diverse clinical applications. Effective use and exchange of information about clinically relevant concepts in free-text clinical narratives require two complementary processes: Named Entity Recognition (NER) and Named Entity Normalization (NEN). NER identifies the mention spans of clinically relevant concepts in clinical notes, a task that has been well explored in the Clinical NLP research community.
NEN involves linking named entities to concepts in standardized medical terminologies, thereby allowing better generalization across contexts. For example, one may use heart attack, MI, or myocardial infarction to refer to the same general concept, and unless a mapping to a standardized vocabulary concept is available, generalizing across these mentions is very difficult. To date, only a few shared tasks have focused on NEN, notably the well-known ShARe/CLEF eHealth 2013 Task 1, SemEval-2014 Task 7, and SemEval-2015 Task 14 challenges. However, those CLEF/SemEval challenges focused specifically on disorder mentions.
In this task, we focus specifically on NEN. We extend the previous CLEF/SemEval normalization work to a much broader set of clinical concepts, not limiting the task to disorders. The task involves normalization over an existing set of named entity annotations: clinical concepts annotated as medical problems, treatments, and tests in the fourth i2b2/VA Shared Task. Unlike the previous CLEF/SemEval tasks, each named entity is mapped to a Concept Unique Identifier (CUI) from either SNOMED CT or RxNorm in the UMLS 2017AB release. In the example above, the equivalent mentions referring to "Myocardial Infarction" would all be mapped to the CUI C0027051 in the UMLS.
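To make the mapping concrete, here is a minimal sketch of dictionary-based normalization in Python. The lookup table is a toy stand-in (a real system would derive candidate terms from SNOMED CT and RxNorm through the UMLS), and the function name is our own illustration, not part of the task.

```python
# Toy term-to-CUI dictionary; in practice this would be built from UMLS 2017AB
# (SNOMED CT and RxNorm sources), not hand-written.
TERM_TO_CUI = {
    "heart attack": "C0027051",
    "mi": "C0027051",
    "myocardial infarction": "C0027051",
}

def normalize(mention: str) -> str:
    """Map a mention string to a UMLS CUI, or 'CUI-less' when no match is found."""
    return TERM_TO_CUI.get(mention.lower().strip(), "CUI-less")

print(normalize("Heart Attack"))           # C0027051
print(normalize("MI"))                     # C0027051
print(normalize("myocardial infarction"))  # C0027051
```

A lookup of this kind only handles exact string variants; competitive systems typically add candidate generation (e.g., string similarity over UMLS terms) and ranking on top of it.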
Note: Because the i2b2 data require a Data Use Agreement (DUA), we will provide only the annotations, which include the character offsets and corresponding CUIs of the mention spans.
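As a rough illustration of how such standoff annotations might be consumed, the sketch below assumes a simple pipe-delimited record of annotation id, CUI, and character offsets per line; the actual release format is defined by the organizers and may differ.

```python
# Assumed standoff format (illustrative only): "id||CUI||start||end" per line.

def load_annotations(path: str):
    """Yield (annotation_id, cui, start, end) tuples from a standoff file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            ann_id, cui, start, end = line.rstrip("\n").split("||")
            yield ann_id, cui, int(start), int(end)

def mention_text(note: str, start: int, end: int) -> str:
    """Recover the mention span from the note text using character offsets."""
    return note[start:end]
```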
Accuracy is used to evaluate and compare system performance.
Please follow the specifications below for successful evaluation.
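For a concrete picture of the metric, here is a minimal accuracy computation, assuming one predicted CUI per annotated mention, aligned in order with the gold CUIs; the official scorer may handle edge cases (e.g., missing predictions) differently.

```python
def accuracy(gold: list, pred: list) -> float:
    """Fraction of mentions whose predicted CUI exactly matches the gold CUI."""
    if len(gold) != len(pred):
        raise ValueError("gold and pred must have one entry per mention")
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold) if gold else 0.0

# One of two predictions matches the gold CUI, so accuracy is 0.5.
print(accuracy(["C0027051", "C0011849"], ["C0027051", "C0011860"]))  # 0.5
```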
This page enumerates the terms and conditions of the competition.
Phase start dates: June 1, 2018; Sept. 17, 2018; Jan. 10, 2019; Jan. 31, 2019 (all at midnight).