NLPContributionGraph is introduced for the first time as Task 11 at SemEval 2021. The task is defined on a dataset of NLP scholarly articles with their contributions structured to be integrable within Knowledge Graph infrastructures such as the Open Research Knowledge Graph. The structured contribution annotations are provided as: (1) Contribution sentences: a set of sentences about the contribution in the article; (2) Scientific terms and relations: a set of scientific terms and relational cue phrases extracted from the contribution sentences; and (3) Triples: semantic statements that pair scientific terms with a relation, modeled toward subject-predicate-object RDF statements for KG building. The Triples are organized under at least three (mandatory) of the twelve total information units (viz., ResearchProblem, Approach, Model, Code, Dataset, ExperimentalSetup, Hyperparameters, Baselines, Results, Tasks, Experiments, and AblationAnalysis).
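The three annotation layers above can be sketched in code. This is a minimal illustration, not the task's official data format: the `Triple` class, the example phrases, and the grouping under an information unit are all invented here for clarity.

```python
from dataclasses import dataclass

# Hypothetical illustration of one contribution triple, in the spirit of
# subject-predicate-object RDF statements (values are invented examples).
@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

# Triples are grouped under an information unit, e.g. ExperimentalSetup.
contribution = {
    "ExperimentalSetup": [
        Triple("model", "trained on", "single GPU"),
        Triple("training", "uses", "Adam optimizer"),
    ]
}

for unit, triples in contribution.items():
    for t in triples:
        print(f"{unit}: ({t.subject}, {t.predicate}, {t.obj})")
```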
For a complete submission to the Shared Task, systems will have to extract the following information:
For example, given the article:
Systems should identify:
Note that the above example covers only one contribution-related sentence from the article. Participating systems should identify all the contribution-related sentences and then perform the subsequent Phrases and Triples extraction tasks on those sentences, where the Triples extraction task entails categorizing each triples sequence under one of the twelve information units. In the example above, the set of triples pertains to the ExperimentalSetup information unit and, for the evaluation submission, will need to be saved in a file named after the information unit. More details on the task submission format can be found on the Evaluation page.
An NLPContributionGraph submission will be considered complete with predictions made for all 3 tasks (Sentence, Phrases, Triples). The evaluation metrics that will be applied are:
The focus of NLPContributionGraph is on the structuring of contributions in NLP scholarly articles to form a knowledge graph. To allow a thorough evaluation of systems, NLPContributionGraph will have multiple evaluation phases:
Evaluation Phase 1: End-to-end pipeline testing phase
Evaluation Phase 2: Phrases and Triples extraction testing phase
In Part 1: Phrase Extraction Testing, participant systems will be given the gold-annotated contribution sentences and will be expected to provide only their scientific term and predicate phrase extraction output. In
Part 2: Triples Extraction Testing, participant systems will be given the gold phrases and will be expected to provide their system output just for the triples.
While participation is encouraged in all Evaluation Phases and Parts, it is not required. Please see our Terms and Conditions for more information.
The evaluation metrics in the Evaluation Phases 1 and 2 will be the standard Precision, Recall, and F-score measures. Details of the evaluation units can be found in our evaluation script or in our Codalab competition configuration yaml file.
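As a sketch of how Precision, Recall, and F-score apply here, the snippet below scores predicted triples against gold triples by exact match. This is a simplified illustration, not the official evaluation script (whose exact matching criteria live in the repository's scoring code); the example triples are invented.

```python
# Exact-match precision, recall, and F-score over predicted vs. gold triples.
def prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # true positives: triples found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("model", "trained on", "single GPU"), ("training", "uses", "Adam")}
pred = {("model", "trained on", "single GPU"), ("model", "has", "12 layers")}
p, r, f = prf(pred, gold)  # p = 0.5, r = 0.5, f = 0.5
```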
In Evaluation Phase 1: End-to-end pipeline testing phase, the submission will have to be organized per the following directory structure:
[task-name-folder]/
├── [article-counter-folder]/
│   ├── sentences.txt
│   ├── entities.txt
│   └── triples/
│       ├── research-problem.txt
│       ├── model.txt
│       └── ...        # each article may be annotated by 3 or 6 information units
└── ...                # repeats for all articles annotated in a release
...                    # repeats depending on the number of tasks in the release
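The directory layout above can be generated programmatically. Below is a minimal sketch that writes one article's folders and files; the task name, article id, and the `(subject||predicate||object)` line serialization are assumptions for illustration, not the official format.

```python
import tempfile
from pathlib import Path

def write_article(root, task, article_id, sentences, entities, triples_by_unit):
    """Lay out one article's submission folders per the structure above."""
    article = Path(root) / task / str(article_id)
    (article / "triples").mkdir(parents=True, exist_ok=True)
    (article / "sentences.txt").write_text("\n".join(sentences))
    (article / "entities.txt").write_text("\n".join(entities))
    for unit, triples in triples_by_unit.items():
        # One file per information unit, e.g. triples/research-problem.txt.
        # Line format "(s||p||o)" is an assumed serialization for this sketch.
        lines = [f"({s}||{p}||{o})" for s, p, o in triples]
        (article / "triples" / f"{unit}.txt").write_text("\n".join(lines))

# Usage with invented task/article names and triples:
root = tempfile.mkdtemp()
write_article(root, "named_entity_recognition", 0,
              ["We propose a new model."], ["model"],
              {"research-problem": [("model", "addresses", "NER")]})
```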
Please see our Github repository https://github.com/ncg-task/sample-submission for detailed information and for sample system input and output data for each of the evaluation phases.
By participating in this task you agree to these terms and conditions. If, however, one or more of these conditions is a concern for you, send us an email and we will consider whether an exception can be made.
Evaluation Phase 1: End-to-end pipeline testing phase,
Evaluation Phase 2, Part 1: Phrase extraction testing, and
Evaluation Phase 2, Part 2: Triples extraction testing. You may choose to participate in any one or more of these.
To be considered a valid participation/submission in Evaluation Phase 1: End-to-end pipeline testing phase, you agree to:
Start: Aug. 16, 2020, midnight
Start: Jan. 10, 2021, midnight
Start: Jan. 18, 2021, midnight
Start: Jan. 25, 2021, midnight
Start: Feb. 1, 2021, midnight