Over recent years, computational lexical semantics has seen a surge of interest in a wide range of approaches, from multi-prototype embeddings to sense-based and contextualized embeddings, all aimed at representing and understanding a word in context. However, evaluating such a variety of approaches within a single framework is not easy. For instance, traditional Word Sense Disambiguation (WSD) cannot test latent representations unless these are linked to explicit sense inventories such as WordNet and BabelNet. To address this limitation, we propose an innovative common evaluation benchmark which makes it possible to measure and compare the performance of the aforementioned context-based approaches. In this task, we follow and extend Pilehvar and Camacho-Collados (2019), who proposed a benchmark consisting of semi-automatically annotated English sentence pairs, requiring systems to determine whether a word occurring in two different sentences is used with the same meaning or not, without relying on a pre-defined sense inventory.
Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC) is the first SemEval task for Word-in-Context disambiguation, and it tackles the challenge of capturing the polysemous nature of words, without relying on a fixed sense inventory, in both a multilingual and a cross-lingual setting. MCL-WiC provides a single high-quality framework for evaluating a wide range of approaches on their capability to deeply understand word meaning in context. Compared to other datasets, MCL-WiC brings the following novelties:
- multilingual data, enabling within-language evaluation in several languages beyond English;
- cross-lingual data, in which the two sentences of a pair are in different languages;
- high-quality manual annotation which does not rely on a pre-defined sense inventory.
Participating systems will be asked to perform a binary classification task in which they indicate whether the target word is used with the same meaning (tagged as T for true) or with a different meaning (tagged as F for false) in the two sentences, which are either in the same language (multilingual dataset) or in different languages (cross-lingual dataset). Below are two examples of sentence pairs, the first from the multilingual part and the second from the cross-lingual part:
In the first sentence pair, the target word souris is tagged with T (True), since it is used with the same meaning in both sentences. In the second sentence pair, by contrast, the target word mouse and its corresponding translation into French are used with two distinct meanings; in this case, therefore, the expected output is F (False).
The following languages will be considered: Arabic, Chinese, English, French and Russian.
The manual annotation was performed according to the following criteria. Given a target word w occurring in two sentences in the same language (multilingual task), or a target word w in a sentence in one language and the corresponding target word w' in a sentence in a second language (cross-lingual task), we used the tag T when the two occurrences are used with the same meaning, and F otherwise.
GitHub data repository: https://github.com/SapienzaNLP/mcl-wic
Discussion forum: https://competitions.codalab.org/forums/23750/
Link to the paper: https://raw.githubusercontent.com/SapienzaNLP/mcl-wic/master/SemEval_2021_Task_2__Multilingual_and_Cross_lingual_Word_in_Context_Disambiguation__MCL_WiC___Paper_.pdf
The organizers gratefully acknowledge the support of the ELEXIS EU project No. 731015 and the MOUSSE ERC Consolidator Grant No. 726487 under the European Union’s Horizon 2020 research and innovation programme.
Systems will be asked to perform a binary classification on each sentence pair in the dataset, outputting T or F depending on whether the given target word occurring in the two sentences is used with the same meaning or with a different meaning, respectively. The goal is to determine to what degree systems can discriminate meanings within and across languages without necessarily relying on an explicit sense inventory.
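Concretely, producing a run amounts to mapping each sentence pair in a test file to a T/F tag. The sketch below shows this pipeline; the assumption that the .data files are JSON lists of records with "id", "sentence1" and "sentence2" fields reflects our reading of the repository format, so the README remains the authoritative reference.

```python
import json

def write_run(data_path, out_path, classify):
    """Write one {"id": ..., "tag": "T"/"F"} record per sentence pair.

    Assumes the .data file is a JSON list of records carrying an "id",
    a "sentence1" and a "sentence2" field (our reading of the format,
    not an official specification). `classify` is any callable that
    returns "T" or "F" for a record.
    """
    with open(data_path, encoding="utf-8") as f:
        instances = json.load(f)
    answers = [{"id": inst["id"], "tag": classify(inst)} for inst in instances]
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(answers, f, ensure_ascii=False)

# Trivial constant-tag run for the Russian multilingual sub-task;
# the input file name assumes step 3's convention plus the .data extension.
write_run("test.ru-ru.data", "test.ru-ru", lambda inst: "T")
```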
Results will be computed using accuracy. A thorough analysis will be carried out for each language pair (cross-lingual dataset), for the different types of approach declared by participants (context-specific embeddings, WSD, etc.), for the type and amount of training data used by each system, and for the domain and genre of the sentences (e.g. formal/parliamentary vs. encyclopedic). Furthermore, we will distinguish between systems which exploit the training set provided for the given language(s) and those which do not, e.g. systems based on vector similarities or traditional WSD systems which output T/F based on sense assignment.
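As a point of reference, here is a minimal scorer sketch; it assumes that gold and answer files are JSON lists of {"id": ..., "tag": "T"/"F"} records, which is our reading of the repository layout rather than an official specification.

```python
import json

def accuracy(gold_path, pred_path):
    """Fraction of instances whose predicted T/F tag matches the gold tag.

    Assumes both files are JSON lists of {"id": ..., "tag": ...} records
    (our reading of the repository layout, not an official spec).
    """
    with open(gold_path, encoding="utf-8") as f:
        gold = {r["id"]: r["tag"] for r in json.load(f)}
    with open(pred_path, encoding="utf-8") as f:
        pred = {r["id"]: r["tag"] for r in json.load(f)}
    return sum(pred.get(i) == t for i, t in gold.items()) / len(gold)
```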
Please follow these steps for the submission (see the packaging sketch after this list):
1. download the test data (.data) from our GitHub page https://github.com/SapienzaNLP/mcl-wic,
2. generate your answers,
3. name each file "test.{language}-{language}" (for example "test.ru-ru" if you wish to participate in the Russian multilingual sub-task),
4. create a submission.zip file containing all the answer files which you would like to submit (for example, the submission.zip file could contain the files "test.ru-ru" and "test.en-ru", indicating that you will participate in the Russian multilingual sub-task and the English-Russian cross-lingual sub-task), and
5. submit!
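Once the answer files are in place, step 4 can be scripted. A minimal sketch, assuming the files from step 3 sit in the current directory (the chosen sub-tasks here are just the example from step 4):

```python
import zipfile

# Hypothetical selection of sub-tasks: Russian multilingual and
# English-Russian cross-lingual, as in the example above.
answer_files = ["test.ru-ru", "test.en-ru"]

with zipfile.ZipFile("submission.zip", "w") as zf:
    for name in answer_files:
        zf.write(name)  # stored at the archive root (assumed expected layout)
```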
We will compare the performance of participating systems against a baseline neural classifier, which will be fed with different types of embeddings as input.
Interestingly, this will provide an effective multilingual and cross-lingual benchmark for all types of embeddings and NLU systems.
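For illustration, below is a minimal sketch of a similarity-based system of the kind mentioned in the evaluation section, which tags a pair T or F by thresholding the cosine similarity between the target word's contextualized embeddings in the two sentences. This is not the official baseline classifier; the model name, the character-offset interface and the 0.6 threshold are all illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-cased"  # illustrative choice, not the official baseline
tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL).eval()

def target_embedding(sentence, start, end):
    """Mean-pool the contextual vectors of the subwords covering the
    character span [start, end) of the target word (span interface assumed)."""
    batch = tok(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = batch.pop("offset_mapping")[0]  # (seq_len, 2) character spans
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state[0]  # (seq_len, hidden_size)
    mask = (offsets[:, 0] < end) & (offsets[:, 1] > start)  # subwords overlapping the span
    return hidden[mask].mean(dim=0)

def same_meaning(sent1, span1, sent2, span2, threshold=0.6):
    """Tag a pair T/F by thresholding cosine similarity; 0.6 is arbitrary
    and would normally be tuned on the development data."""
    v1 = target_embedding(sent1, *span1)
    v2 = target_embedding(sent2, *span2)
    return "T" if torch.cosine_similarity(v1, v2, dim=0).item() >= threshold else "F"
```

Since such a system requires no task-specific training, it falls into the group of systems, mentioned above, that do not exploit the provided training sets.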
The data of the Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC) task are released under the CC BY-NC 4.0 license. Attribution shall be provided by citing:
F. Martelli, N. Kalach, G. Tola, R. Navigli. SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC). Proc. of the 15th Workshop on Semantic Evaluation, 2021.
Schedule:
- Practice phase (starts Oct. 1, 2020): go to the GitHub repository to download the training and dev data and work on your system(s)!
- Evaluation phase (starts Jan. 10, 2021): submit your runs, which will be evaluated against the test data.
- Post-evaluation phase (starts Feb. 1, 2021): post-evaluation analysis and discussion.