Dialogue Evaluation 2020: Taxonomy Enrichment for the Russian Language


We invite you to participate in the Dialogue 2020 shared task on Taxonomy Enrichment for the Russian Language. Taxonomies are tree structures which organize terms into a semantic hierarchy. Taxonomic relations (or hypernyms) are “is-a” relations: cat is-a animal, banana is-a fruit, Microsoft is-a company, etc. This type of relation is useful in a wide range of natural language processing tasks for performing semantic analysis. The goal of this shared task is to extend an existing taxonomy with relations of previously unseen words.

Multiple evaluation campaigns for hypernym extraction (SemEval-2018 task 9), taxonomy induction (SemEval-2016 task 13, SemEval-2015 task 17), and, most notably, taxonomy enrichment (SemEval-2016 task 14) have been organized for English and other western European languages in the past. However, this is the first evaluation campaign of this kind for Russian or any other Slavic language. Moreover, the task has a more realistic setting than the SemEval-2016 task 14 taxonomy enrichment task, as the participants are given not the definitions of words but only new unseen words in context.

More concretely, the goal of this task is the following: given words that are not yet included in the taxonomy, associate each word with the appropriate hypernyms from the existing taxonomy. For example, given the input word “утка” (duck), we expect you to provide a ranked list of the 10 most probable candidate hypernym synsets the word could be attached to, e.g. “animal”, “bird”, and so on. Note that a word may be attached to one, two or more “ancestors” (hypernym synsets) at the same time.
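To make the expected output concrete, here is a minimal Python illustration of the shape of a prediction; the synset identifiers below are hypothetical placeholders made up for this example, not real ruWordNet ids:

```python
# Illustration only: for each new word, a ranked list of up to 10 candidate
# hypernym synsets, most probable first. The identifiers are hypothetical
# placeholders, not real ruWordNet synset ids.
predictions = {
    "утка": [        # "duck"
        "12345-N",   # hypothetical synset id standing for "bird"
        "67890-N",   # hypothetical synset id standing for "animal"
        # ... up to 10 candidates in total
    ],
}
```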

Join our discussion group in Telegram: https://t.me/joinchat/Ckja7Vh00qPOU887pLonqQ

Competition website: https://russe.nlpub.org/2020/isa/

We expect from participants a ranked list of 10 candidate hypernyms for each new word in the test set. We will evaluate the systems using the Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) scores. MAP takes into account the whole range of possible hypernyms, whereas MRR looks at how close to the top of the list the first correct prediction is. In addition, the F1 score will be computed to evaluate the top-1 prediction of each method. MAP will be the official metric used to rank the submissions.
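As a rough illustration of these two metrics, the following is a minimal sketch of MAP and MRR computed against flat gold sets of hypernyms. It is not the official scorer, which additionally takes into account the hypernym groups described below:

```python
# Simplified sketch of MAP and MRR for ranked hypernym candidates.
# NOTE: illustration only; the official RUSSE'2020 scorer also handles
# hypernym groups (connectivity components), which this sketch ignores.

def average_precision(predicted, gold, k=10):
    """Average precision of a ranked candidate list against a gold set."""
    hits, score = 0, 0.0
    for i, candidate in enumerate(predicted[:k], start=1):
        if candidate in gold:
            hits += 1
            score += hits / i            # precision at the rank of this hit
    return score / min(len(gold), k) if gold else 0.0

def reciprocal_rank(predicted, gold, k=10):
    """1 / rank of the first correct candidate, or 0 if none is found."""
    for i, candidate in enumerate(predicted[:k], start=1):
        if candidate in gold:
            return 1.0 / i
    return 0.0

def evaluate(predictions, gold_standard, k=10):
    """Macro-average AP and RR over all words of the gold standard."""
    words = list(gold_standard)
    map_score = sum(average_precision(predictions.get(w, []), gold_standard[w], k)
                    for w in words) / len(words)
    mrr_score = sum(reciprocal_rank(predictions.get(w, []), gold_standard[w], k)
                    for w in words) / len(words)
    return map_score, mrr_score
```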

In order to be less restrictive during the evaluation, we consider as correct answers not only immediate hypernyms of new words, but also hypernyms of these hypernyms. Therefore, if a system predicts a hypernym of a correct hypernym, this will also be considered a match.

However, the specificity of the ruWordNet taxonomy and our assumption about second-order hypernyms may result in confusion in the evaluation process. Let us consider the following examples:

  • One hypernym may be a “parent” of another hypernym: the synset “Moksha” has two parents, “tributary” and “river”, whereas “river” is also the hypernym of “tributary”. While computing the MAP score, it may not be clear which hypernym group gains the score: the one where “river” is an immediate hypernym or the one where “river” is a second-order hypernym.
  • Hypernyms may share common parents: “string instrument” and “folk instrument” have the hypernym “musical instrument” in common. In this case, if “musical instrument” appears in the candidate list, the MAP score is ambiguous as well.

In order to avoid this hypernym ambiguity, we split immediate and second-order hypernyms into separate groups.

Each group corresponds to a connectivity component in the subgraph reconstructed from these hypernyms. Depending on the structure of these links, all gold hypernyms of a word may form a single connectivity component, or the immediate hypernyms may fall into several different hypernym groups.

Therefore, the list of candidates for a given word should contain at least one hypernym from each hypernym group.
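To make the grouping concrete, here is a minimal sketch (not the official scorer) of how the gold hypernyms of a query word could be split into such connectivity components. The `parents` mapping, from a ruWordNet synset id to the set of its immediate hypernym ids, is a hypothetical helper assumed for this illustration:

```python
# Sketch: split the gold hypernyms of a query word into "hypernym groups",
# i.e. connected components of the subgraph induced by parent links among them.
# `parents` is an assumed lookup: synset id -> set of immediate hypernym ids.

from collections import defaultdict

def hypernym_groups(gold_hypernyms, parents):
    gold = set(gold_hypernyms)

    # Connect two gold synsets if one is a direct parent of the other.
    adjacency = defaultdict(set)
    for synset in gold:
        for parent in parents.get(synset, set()):
            if parent in gold:
                adjacency[synset].add(parent)
                adjacency[parent].add(synset)

    # Standard connected-components traversal over the induced subgraph.
    groups, seen = [], set()
    for synset in gold:
        if synset in seen:
            continue
        component, stack = set(), [synset]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.add(node)
            stack.extend(adjacency[node] - seen)
        groups.append(component)
    return groups
```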

The results are published in the following paper:

Irina Nikishina, Varvara Logacheva, Alexander Panchenko, and Natalia Loukachevitch (2020): RUSSE'2020: Findings of the First Taxonomy Enrichment Task for the Russian Language. In Proceedings of the 26th International Conference on Computational Linguistics and Intellectual Technologies (Dialogue-2020). Moscow, Russia.

BibTeX:

@inproceedings{nikishina2020taxonomy,
title={{RUSSE'2020: Findings of the First Taxonomy Enrichment Task for the Russian Language}},
author={Nikishina, Irina and Logacheva, Varvara and Panchenko, Alexander and Loukachevitch, Natalia},
booktitle={Computational Linguistics and Intellectual Technologies: papers from the Annual conference ``Dialogue''},
year={2020}
}

 

Predicted hypernyms for nouns from Yuriy's answer (top-1 for nouns):

| rank | saccharin | selfie | cashback |
|---|---|---|---|
| 1 | sweetener | picture (result) | discount |
| 2 | substitute | photographic image | line of business |
| 3 | food additives | photography | rendering services |
| 4 | supplement | cinema | accounting transaction |
| 5 | substance | portrait (depiction) | promissory note operation |
| 6 | sugar substitute | atelier for personal services | discount rate |
| 7 | material for manufacture | photo shop | to reduce the amount |
| 8 | carbohydrates | movement | exemption |
| 9 | sugar | self-portrait | purposeful action |
| 10 | food | constant entity | banking operation |

Ground truth:

| saccharin | selfie | cashback |
|---|---|---|
| sweetener | photographic image | to repay |
| substitute | photo portrait | return of physical assets |
| food additives | self-portrait | bonus (reward) |
| sugar substitute | portrait (depiction) | prize |

Predicted hypernyms for verbs from cointegrated's answer (top-1 for verbs):

| rank | to party | to be hanging around | to photoshop |
|---|---|---|---|
| 1 | to get together | to mess around | to reproduce |
| 2 | communication, connection | indecent behaviour | to improve the shortcomings, to correct mistakes |
| 3 | to have fun | wandering around | to copy, to make a copy |
| 4 | activity | to reside | to depict |
| 5 | people relationship | to lie on | to verify, to check |
| 6 | to spend time | to spend time | to supply, to provide |
| 7 | to have a good time | rest | to create (to make real) |
| 8 | to get to the place | to go by foot | to eliminate, to destroy |
| 9 | to go by foot | to have fun | to correct, to improve |
| 10 | rest | to hesitate | to reside |

Ground truth:

| to party | to be hanging around | to photoshop |
|---|---|---|
| to get together | to mess around | to exaggerate |
| to get to the place | indecent behaviour | to represent as |
| to hang out | to hesitate | to embellish |
| to spend time | purposeful action | to change, to alter |
| to have a good time | to rejuvenate, to refresh | to modify |
| activity | to restore previous state | |
Organizers

  • Irina Nikishina, Skoltech (Irina.Nikishina@skoltech.ru)
  • Varvara Logacheva, Skoltech (v.logacheva@skoltech.ru)
  • Alexander Panchenko, Skoltech
  • Natalia Loukachevitch, Lomonosov Moscow State University


Baselines

We will provide simple baselines based on distributional and neural language models. Moreover, we believe that popular context-aware neural models (such as ELMo and BERT) will be of particular use for this task, as they can represent out-of-vocabulary words on the basis of their context. Therefore, everyone interested in testing these and other distributional semantic models is welcome to participate.
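For a sense of what such a distributional baseline might look like, here is a minimal sketch (an assumption for illustration, not the official baseline code): embed the new word, find its nearest neighbours among words already in the taxonomy, and vote for the hypernym synsets of those neighbours. The embedding file name and the `synsets_of` / `hypernyms_of` mappings are hypothetical:

```python
# Hypothetical distributional baseline: rank candidate hypernym synsets for an
# out-of-taxonomy word by the hypernyms of its nearest neighbours in vector space.
# "ru_vectors.bin" stands for any pre-trained Russian word vectors;
# `synsets_of` (word -> synset ids) and `hypernyms_of` (synset id -> hypernym ids)
# are assumed to be built from the ruWordNet training data.

from collections import Counter
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("ru_vectors.bin", binary=True)

def predict_hypernyms(word, synsets_of, hypernyms_of, n_neighbours=10, k=10):
    """Return up to k candidate hypernym synsets, most probable first."""
    if word not in vectors:
        return []
    candidates = Counter()
    for neighbour, similarity in vectors.most_similar(word, topn=n_neighbours):
        for synset in synsets_of.get(neighbour, []):
            for hypernym in hypernyms_of.get(synset, []):
                candidates[hypernym] += similarity  # weight votes by similarity
    return [synset for synset, _ in candidates.most_common(k)]
```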

Tracks

The task will feature two tracks: detection of hypernyms for nouns and for verbs. The participants are allowed to use any additional datasets and corpora besides the train set based on the ruWordNet taxonomy. Moreover, we also provide additional data: a news text corpus, a parsed Wikipedia corpus, and a hypernym database from the Russian Distributional Thesaurus. However, we ask participants to mention all additional resources used for training their models.

The participants can test their models on the public test set by submitting the results to the leaderboards for each track (Nouns and Verbs). Once the private test set is released, the participants will have two weeks to predict hypernyms for it and submit their final results.

Important dates:

  • First Call for Participation: December 15, 2019.
  • Release of the Training Data: December 15, 2019.
  • Release of the Test Data: January 31, 2020.
  • Submission of the Results: February 14, 2020, extended to March 1, 2020 (Anywhere on Earth).
  • Results of the Shared Task: February 28, 2020, moved to March 3, 2020.
  • Article submission deadline: March 10, 2020.

 

Contacts

Irina.Nikishina@skoltech.ru

v.logacheva@skoltech.ru

 

Competition Phases

  • Practice (NOUNS): start Dec. 1, 2019, midnight
  • Practice (VERBS): start Dec. 1, 2019, midnight
  • Evaluation (NOUNS): start Nov. 15, 2020, midnight
  • Evaluation (VERBS): start Nov. 15, 2020, midnight
  • Post-Evaluation (NOUNS): start Nov. 16, 2020, 4 a.m.
  • Post-Evaluation (VERBS): start Nov. 16, 2020, 4 a.m.
  • Competition ends: never

Leaderboard:

| # | Username | Score |
|---|---|---|
| 1 | RefalMachine2 | 0.4263 |
| 2 | alvadia | 0.4007 |
| 3 | vvyadrincev | 0.3874 |