This is the CodaLab Competition for SemEval-2018 Task 6: Parsing Time Normalization.
Please join our Google Group to ask questions and get the most up-to-date information on the task.
Important Dates: 14 Aug 2017: Trial data release
The Parsing Time Normalizations shared task takes a new approach to time normalization based on recognizing semantically compositional time operators. Such operators are more expressive, able to represent many more time expressions, and more machine-learnable, since recognizing them can naturally be framed as a semantic parsing task.
Each operator in the semantic tree can be formally defined in terms of mathematical operations. For example, the operator BETWEEN can be expressed as:
Between([t1, t2): Interval, [t3, t4): Interval): Interval = [t2, t3)
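As a hedged sketch of this definition (the tuple-based `Interval` representation and the `between` function are our own illustration, not the task's API), the operator simply returns the span separating its two argument intervals:

```python
from datetime import datetime

def between(a, b):
    """Between([t1, t2), [t3, t4)) = [t2, t3).

    Intervals are half-open (start, end) pairs of datetimes; the first
    interval must end no later than the second begins.
    """
    (_, t2), (t3, _) = a, b
    assert t2 <= t3, "the first interval must precede the second"
    return (t2, t3)

# Between March 2017 and May 2017 -> the interval covering April 2017.
march = (datetime(2017, 3, 1), datetime(2017, 4, 1))
may = (datetime(2017, 5, 1), datetime(2017, 6, 1))
print(between(march, may))
```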
Thus, interpreting the formal operations that compose a time expression produces the corresponding time intervals. For the example in the figure above and assuming that the Doc-Time is April 21, 2017, the resulting intervals would be:
[2017-03-11T00:00,2017-03-12T00:00)
[2017-03-18T00:00,2017-03-19T00:00)
[2017-03-25T00:00,2017-03-26T00:00)
[2017-04-01T00:00,2017-04-02T00:00)
[2017-04-08T00:00,2017-04-09T00:00)
[2017-04-15T00:00,2017-04-16T00:00)
The ultimate goal of the shared task is to interpret time expressions in order to identify the appropriate intervals that can be placed on a timeline.
We offer two tracks: parsing text to time operators and producing time intervals. For the latter, we will provide an interpreter that infers time intervals from the time operators extracted by the participants. The interpreter is also able to obtain such intervals from timestamps in TimeML format. Thus, systems participating in Track 1 will automatically take part in Track 2. Furthermore, participants can join Track 2 directly by providing more traditional TimeML annotations.
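For instance, a TimeML annotation such as `<TIMEX3 tid="t1" type="DATE" value="2017-04-21">April 21, 2017</TIMEX3>` can be mapped to a calendar interval. A hedged sketch of that conversion for simple DATE values (the real interpreter's behavior may differ):

```python
from datetime import datetime, timedelta

def date_value_to_interval(value):
    """Map a TIMEX3 DATE value like '2017-04-21' to its half-open day interval."""
    start = datetime.strptime(value, "%Y-%m-%d")
    return (start, start + timedelta(days=1))

# '2017-04-21' -> the interval covering that whole day.
print(date_value_to_interval("2017-04-21"))
```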
Egoitz Laparra, Dongfang Xu, Steven Bethard (University of Arizona)
Ahmed S. Elsayed, Martha Palmer (University of Colorado)
Our dataset covers two different domains: newswire and notes on colon cancer. The data consists of annotated documents from the TimeBank/AQUAINT corpus of news articles and from the THYME corpus of clinical notes.
Participants interested in the clinical notes portion of the evaluation will have to sign a data use agreement with the Mayo Clinic to obtain the raw text of the clinical notes and pathology reports (the THYME corpus contains incompletely de-identified clinical data; the time expressions were retained). Participants in Clinical TempEval 2015, 2016, or 2017 have already completed this process and do not need to do anything more to participate in the clinical portion of Parsing Time Normalizations. New participants can follow the instructions for the process.
Please apply for a data use agreement as soon as possible! The process may take some time.
Read the DUA carefully before agreeing to it. Among other things, you will be agreeing:
The annotation is in Anafora XML format. This means that for each file in the corpus, there will be a corresponding directory. That directory will contain an XML file with stand-off annotations that follow the guidelines for the proposed time operator annotation scheme (example).
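To give a feel for the stand-off style, here is a schematic sketch of reading such a file with Python's standard library. The element names follow the Anafora convention of entities with ids, character-offset spans, types, and properties, but the annotation content in this fragment is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Schematic Anafora-style fragment; the entity content is hypothetical.
XML = """
<data>
  <annotations>
    <entity>
      <id>1@e@doc001@gold</id>
      <span>15,20</span>
      <type>Month-Of-Year</type>
      <properties><Type>April</Type></properties>
    </entity>
  </annotations>
</data>
"""

for entity in ET.fromstring(XML).iter("entity"):
    # The span holds character offsets into the raw source text.
    start, end = map(int, entity.findtext("span").split(","))
    print(entity.findtext("type"), (start, end))
```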
For both Track 1 (Parsing) and Track 2 (Intervals), results will be reported in terms of precision, recall, and F1. For Track 2, our scorer includes an interpreter that produces time intervals by reading annotations in either Anafora or TimeML format. The scores for each track are calculated as follows:
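The standard definitions of these metrics, for reference (the counts in the example call are hypothetical):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive,
    and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# E.g. 80 correct annotations, 20 spurious, 20 missed.
print(prf(tp=80, fp=20, fn=20))
```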
For Track 1, your system must produce files in Anafora XML format. For Track 2, submissions may instead be in TimeML format. In either case, your directory structure should be organized as follows:
Make sure that you comply with the following rules when you create your output directory:
By submitting results to this competition, you consent to the public release of your scores at the SemEval-2018 workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.
You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.
You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.
You agree not to redistribute the test data except in the manner prescribed by its licence.
Phase start dates: Aug. 14, 2017; Jan. 8, 2018; Jan. 30, 2018