SemEval 2017 Task 10 ScienceIE

Organized by flavioAlexander


The shared task ScienceIE at SemEval 2017 deals with automatic extraction of keyphrases from Computer Science, Material Sciences and Physics publications, as well as extracting types of keyphrases and relations between keyphrases.
PROCESS, TASK and MATERIAL are the fundamental objects in scientific work. Scientific research and practice are founded upon gaining, maintaining and understanding the body of existing work in specific areas related to these objects. Typical questions that researchers and practitioners often face include:

    • Which papers have addressed a specific TASK?
    • Which papers have studied a PROCESS or its variants?
    • Which papers have utilized such MATERIALS?
    • Which papers have addressed this TASK using variants of this PROCESS?

Review papers are seldom available in most research areas, and the capabilities of search engines for scientific publications are limited. In addition, researchers often have only vague search requirements, which makes it hard to answer the above questions efficiently.

Automatically extracting keyphrases from scientific documents, then labelling them and extracting relationships between them, can answer the above questions efficiently. This in turn enables utilities that recommend relevant articles to readers, match reviewers to submissions and help explore huge collections of papers.

 

Please see the ScienceIE website for more details, and please join the Google group to take part in discussions.

Evaluation



Schemes


There will be three evaluation scenarios:

  1. Only plain text is given (Subtasks A, B, C)

  2. Plain text with manually annotated keyphrase boundaries is given (Subtasks B, C)

  3. Plain text with manually annotated keyphrases and their types is given (Subtask C)



Metrics


The output of systems is matched exactly against the gold standard. Precision, recall and F1-score are computed, and the micro-average of these metrics is calculated across publications of the three genres. These metrics are reported for Subtasks A, B and C.
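The exact-match, micro-averaged scoring described above can be sketched as follows. This is an illustrative sketch, not the official scorer; the span representation `(start, end, label)` and the toy annotations are assumptions.

```python
# Hedged sketch of exact-match micro-averaged P/R/F1 over a collection of
# documents. A prediction counts only if it matches a gold annotation exactly.

def micro_prf(gold_by_doc, pred_by_doc):
    """gold_by_doc / pred_by_doc: dict mapping doc id -> set of annotations,
    here illustrated as (start, end, label) tuples.

    Micro-averaging pools true/false positives and false negatives across
    all documents before computing the metrics."""
    tp = fp = fn = 0
    for doc in gold_by_doc:
        gold = gold_by_doc[doc]
        pred = pred_by_doc.get(doc, set())
        tp += len(gold & pred)   # exact matches
        fp += len(pred - gold)   # spurious predictions
        fn += len(gold - pred)   # missed gold annotations
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy example (spans and labels are made up):
gold = {"doc1": {(0, 12, "PROCESS"), (20, 28, "TASK")}}
pred = {"doc1": {(0, 12, "PROCESS"), (30, 35, "MATERIAL")}}
p, r, f1 = micro_prf(gold, pred)  # p = 0.5, r = 0.5, f1 = 0.5
```

Note that because matching is exact, a keyphrase with a slightly wrong boundary counts as both a false positive and a false negative.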



Additional Resources


Participants may use additional external resources, as long as they declare this at submission time. However, participants may not manually annotate the test data.

 

Task Description

There are three subtasks:


Subtask (A): Identification of keyphrases


Given a scientific publication, the goal of this task is to identify all the keyphrases in the document.



Subtask (B): Classification of identified keyphrases


In this task, each keyphrase needs to be labelled with one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL.

PROCESS

Keyphrases relating to a scientific model, algorithm or process should be labelled PROCESS.

TASK

Keyphrases that denote an application, end goal, problem or task should be labelled TASK.

MATERIAL

MATERIAL keyphrases identify the resources used in the paper.
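The three types above can be illustrated with a toy labelling. The example phrases and their assignments below are our own illustrations, not drawn from the ScienceIE gold annotations.

```python
# Hedged illustration of the Subtask B label set; phrases are hypothetical.
LABELS = {"PROCESS", "TASK", "MATERIAL"}

examples = {
    "convolutional neural network": "PROCESS",   # a model/algorithm
    "image classification":         "TASK",      # an end goal / problem
    "silicon wafer":                "MATERIAL",  # a resource used in the work
}

# Every keyphrase gets exactly one of the three types.
assert set(examples.values()) <= LABELS
```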



Subtask (C): Extraction of relationships between two identified keyphrases


Every pair of keyphrases needs to be labelled with one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE.

HYPONYM-OF

The relationship between two keyphrases A and B is HYPONYM-OF if the semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.

SYNONYM-OF

The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.
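A practical consequence of these definitions is that SYNONYM-OF is symmetric (A SYNONYM-OF B implies B SYNONYM-OF A), while HYPONYM-OF is directed (Red HYPONYM-OF Color does not imply the reverse). A sketch of how relation tuples might be canonicalised for comparison, assuming a simple `(a, relation, b)` representation (this normalisation is illustrative, not the official scorer):

```python
# Hedged sketch: canonicalise relation tuples so that symmetric SYNONYM-OF
# pairs compare equal regardless of argument order, while HYPONYM-OF keeps
# its direction. NONE pairs are typically left implicit.

def normalize(relations):
    out = set()
    for a, rel, b in relations:
        if rel == "SYNONYM-OF":
            # order-independent: store the pair in sorted order
            out.add((min(a, b), rel, max(a, b)))
        elif rel == "HYPONYM-OF":
            out.add((a, rel, b))  # direction matters
    return out

gold = normalize([("ML", "SYNONYM-OF", "Machine Learning"),
                  ("Red", "HYPONYM-OF", "Color")])
pred = normalize([("Machine Learning", "SYNONYM-OF", "ML"),
                  ("Color", "HYPONYM-OF", "Red")])
# The synonym pair matches either way round; the reversed hyponym does not.
```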

Development

Start: Sept. 1, 2016, midnight

Description: Test your systems on the development data, up to 100 times in total. This will *not* be your final score for the shared task; it is so you can test your systems in the CodaLab environment.

Testing

Start: Jan. 1, 2017, midnight

Description: Test your systems on the official testing data. This will be your final score for the shared task. You are allowed to submit up to 3 runs for each evaluation setting (https://scienceie.github.io/evaluation.html); your best overall score will count.

Competition Ends

Jan. 31, 2017, 11 p.m.
