Welcome to the leaderboard for the DocRED dataset. The leaderboard displays test set results. The data and evaluation script can be downloaded from GitHub.
Submissions are evaluated using relation extraction F1 and evidence F1 on the test set. The evaluation script, evaluation.py, is available at the GitHub repository. For more details, please refer to the paper.
This page enumerates the terms and conditions of the competition.
Participants need to compress the result file (result.json) into zip format (result.zip) for submission.
The submission file should contain all extracted relational facts from the test set, with the necessary meta information, following the JSON format:
"title": <str>, # title of the document
"h_idx": <int>, # index of the head entity
"t_idx": <int>, # index of the tail entity
"r": <str>, # Wikidata ID of the relation
"evidence": <list<int>> # indices of the evidence sentences
Start: June 9, 2019, midnight