This is the CodaLab Competition for SemEval 2018 Task 4: Character Identification on Multiparty Dialogues.
08/21/2017: Trial and training data release.
01/08/2018: Test data release.
01/29/2018: Evaluation end.
Character Identification is an entity linking task that identifies each mention as a certain character in multiparty dialogue. Let a mention be a nominal referring to a person (e.g., she, mom, Judy), and an entity be a character in a dialogue. The goal is to assign each mention to its entity, who may or may not participate in the dialogue. In one example from the corpus, the mention "mom" is not one of the speakers; nonetheless, it clearly refers to a specific character, Judy, who could appear in some other dialogue. Identifying such mentions as real characters requires cross-document entity resolution, which makes this task challenging.
This year's competition focuses on singular mentions with gold mention boundaries. We plan to open another competition in the following year that also challenges plural mentions and ambiguous mention types, using predicted mention boundaries.
Robust Coreference Resolution and Entity Linking on Dialogues: Character Identification on TV Show Transcripts. Chen, H. Y.; Zhou, E.; and Choi, J. D. Proceedings of the 21st Conference on Computational Natural Language Learning, CoNLL'17, Vancouver, Canada, 2017.
Character Identification on Multiparty Conversation: Identifying Mentions of Characters in TV Shows. Chen, H. Y.; and Choi, J. D. Proceedings of the 17th Annual SIGdial Meeting on Discourse and Dialogue, SIGDIAL'16, Los Angeles, CA, 2016.
The first two seasons of the TV show Friends are annotated for this task. Each season consists of episodes, each episode comprises scenes, and each scene is segmented into sentences. The following describes the distributed datasets:
All datasets follow the CoNLL 2012 Shared Task data format. Documents are delimited by the comments in the following format:
#begin document (<Document ID>)[; part ###]
Each sentence is delimited by a new line ("\n"), and the columns indicate the following, in order: document ID, scene ID, token ID, word form, part-of-speech tag, constituency tag, lemma, frameset ID (not provided; always "-"), word sense (not provided; always "-"), speaker, named entity tag (not provided; always "*"), and entity ID ("-" if the token does not begin or end a mention).
Here is a sample from the training dataset:
/friends-s01e01 0 0 He PRP (TOP(S(NP*) he - - Monica_Geller * (284)
/friends-s01e01 0 1 's VBZ (VP* be - - Monica_Geller * -
/friends-s01e01 0 2 just RB (ADVP*) just - - Monica_Geller * -
/friends-s01e01 0 3 some DT (NP(NP* some - - Monica_Geller * -
/friends-s01e01 0 4 guy NN *) guy - - Monica_Geller * (284)
/friends-s01e01 0 5 I PRP (SBAR(S(NP*) I - - Monica_Geller * (248)
/friends-s01e01 0 6 work VBP (VP* work - - Monica_Geller * -
/friends-s01e01 0 7 with IN (PP*)))))) with - - Monica_Geller * -
/friends-s01e01 0 8 ! . *)) ! - - Monica_Geller * -
/friends-s01e01 0 0 C'mon VB (TOP(S(S(VP*)) c'mon - - Joey_Tribbiani * -
/friends-s01e01 0 1 , , * , - - Joey_Tribbiani * -
/friends-s01e01 0 2 you PRP (NP*) you - - Joey_Tribbiani * (248)
/friends-s01e01 0 3 're VBP (VP* be - - Joey_Tribbiani * -
/friends-s01e01 0 4 going VBG (VP* go - - Joey_Tribbiani * -
/friends-s01e01 0 5 out RP (PRT*) out - - Joey_Tribbiani * -
/friends-s01e01 0 6 with IN (PP* with - - Joey_Tribbiani * -
/friends-s01e01 0 7 the DT (NP* the - - Joey_Tribbiani * -
/friends-s01e01 0 8 guy NN *)))) guy - - Joey_Tribbiani * (284)
/friends-s01e01 0 9 ! . *)) ! - - Joey_Tribbiani * -
A mention may include more than one word:
/friends-s01e02 0 0 Ugly JJ (TOP(S(NP(ADJP* ugly - - Chandler_Bing * (380
/friends-s01e02 0 1 Naked JJ *) naked - - Chandler_Bing * -
/friends-s01e02 0 2 Guy NNP *) Guy - - Chandler_Bing * 380)
/friends-s01e02 0 3 got VBD (VP* get - - Chandler_Bing * -
/friends-s01e02 0 4 a DT (NP* a - - Chandler_Bing * -
/friends-s01e02 0 5 Thighmaster NN *)) thighmaster - - Chandler_Bing * -
/friends-s01e02 0 6 ! . *)) ! - - Chandler_Bing * -
The mapping between the entity ID and the actual character can be found in friends_entity_map.txt.
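For concreteness, here is a minimal Python sketch of reading these annotations, assuming the 12-column layout shown above; the function name and the input path are illustrative and not part of the official distribution:

# Minimal sketch: extract (document ID, mention tokens, entity ID) triples
# from a CoNLL-style file as shown in the samples above.
def read_mentions(path):
    """Yield (document_id, mention_tokens, entity_id) for every annotated mention."""
    open_mentions = {}  # entity_id -> tokens collected so far (multi-word mentions)
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip blank lines and document delimiters
                continue
            cols = line.split()
            doc_id, word, entity = cols[0], cols[3], cols[-1]
            if entity == "-":
                for tokens in open_mentions.values():  # word inside an open mention
                    tokens.append(word)
            elif entity.startswith("(") and entity.endswith(")"):  # single-word mention, e.g. (284)
                yield doc_id, [word], int(entity[1:-1])
            elif entity.startswith("("):  # mention opens here, e.g. (380
                open_mentions[int(entity[1:])] = [word]
            elif entity.endswith(")"):  # mention closes here, e.g. 380)
                entity_id = int(entity[:-1])
                tokens = open_mentions.pop(entity_id, [])
                tokens.append(word)
                yield doc_id, tokens, entity_id

if __name__ == "__main__":
    for doc_id, tokens, entity_id in read_mentions("trial.conll"):  # placeholder path
        print(doc_id, " ".join(tokens), entity_id)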
Your output must consist of the entity ID of each mention, one per line, in sequential order. There are 6 mentions in the above examples, which will generate the following output:
284
284
248
248
284
380
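As an illustration, such an answer file could be written as follows; "answer.txt" is a placeholder name, and the submission instructions define the actual file to upload:

# Illustrative only: write one predicted entity ID per line, in mention order.
predictions = [284, 284, 248, 248, 284, 380]  # e.g., a system's predictions for the samples above
with open("answer.txt", "w") as f:  # placeholder file name
    f.write("\n".join(str(entity_id) for entity_id in predictions) + "\n")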
Given this output, the evaluation script will measure the label accuracy regarding all entities and the macro average of the F1 scores of all entities.
The following shows the command to run the evaluation script:
python evaluate.py input_dir output_dir
The macro average of the F1 scores of all entities will be used for the leaderboard ranking.
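For reference, the leaderboard metric can be approximated with scikit-learn as in the sketch below; this is not the official evaluate.py, and the file names are placeholders:

# Rough re-implementation of the metric: macro-averaged F1 over entity IDs
# (plus label accuracy), computed from two files with one entity ID per line.
from sklearn.metrics import accuracy_score, f1_score

def load_labels(path):
    with open(path, encoding="utf-8") as f:
        return [int(line) for line in f if line.strip()]

gold = load_labels("gold.txt")     # placeholder file names
pred = load_labels("answer.txt")

print("label accuracy  :", accuracy_score(gold, pred))
print("macro-average F1:", f1_score(gold, pred, average="macro"))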
By submitting results to this competition, you consent to the public release of your scores at the SemEval-2018 workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.
You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.
You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.
You agree not to redistribute the test data except in the manner prescribed by its licence.