SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC)

Organized by Federico Martelli


Multilingual and Cross-lingual Word-in-Context Disambiguation

Introduction

Over recent years, computational lexical semantics has seen a surge of interest in a wide range of approaches, from multi-prototype embeddings to sense-based and contextualized embeddings, all aimed at providing some form of representation and understanding of a word in context. However, evaluating such a variety of approaches in a single framework is not easy. For instance, traditional Word Sense Disambiguation (WSD) fails to test latent representations unless these are linked to explicit sense inventories such as WordNet and BabelNet. To address this limitation, we propose an innovative common evaluation benchmark which makes it possible to measure and compare the performance of the aforementioned context-based approaches. In this task, we follow and extend Pilehvar and Camacho-Collados (2019), who proposed a benchmark consisting of semi-automatically-annotated English sentence pairs, requiring systems to determine whether a word occurring in two different sentences is used with the same meaning or not, without relying on a pre-defined sense inventory.

Task overview

Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC) is the first SemEval task for Word-in-Context disambiguation, tackling the challenge of capturing the polysemous nature of words without relying on a fixed sense inventory, in both a multilingual and a cross-lingual setting. MCL-WiC provides a single high-quality framework for evaluating a wide range of approaches to a system's deep understanding of word meaning. Compared to previous benchmarks, MCL-WiC brings the following novelties:

  • it addresses multilinguality and cross-linguality,
  • it provides coverage of all parts of speech, and
  • it covers a high number of domains and genres.

Participating systems will be asked to perform a binary classification task in which they indicate whether the target word is used in the same meaning (tagged as T for true) or in a different meaning (F for false) in the same language (multilingual dataset) or across different languages (cross-lingual dataset). Below you can find two examples of sentence pairs, the first one from the multilingual part and the second one from the cross-lingual part:

  • la souris mange le fromage ("the mouse eats the cheese") -- le chat court après la souris ("the cat runs after the mouse")
  • click the right mouse button -- le chat court après la souris ("the cat runs after the mouse")

In the first sentence pair, the target word souris will be tagged with T (True), since it is used with the same meaning in both sentences. In the second sentence pair, by contrast, the target word mouse and its corresponding translation into French are used with two distinct meanings; therefore, in this case, the expected output will be F (False).
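As an illustration of the kind of decision systems must make (this is not the organizers' method), one simple approach thresholds the similarity between the contextual embeddings of the two target-word occurrences. The vectors and the threshold below are toy placeholders standing in for real contextual embeddings:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(vec1, vec2, threshold=0.5):
    """Tag a sentence pair T (same meaning) or F (different meaning)
    by thresholding the similarity of the two target-word vectors.
    The threshold value is illustrative, not tuned."""
    return "T" if cosine(vec1, vec2) >= threshold else "F"

# Toy vectors standing in for contextual embeddings of the target word.
same_context = np.array([1.0, 0.9, 0.1])
close_context = np.array([0.9, 1.0, 0.2])
far_context = np.array([-0.1, 0.2, 1.0])

print(predict(same_context, close_context))  # similar contexts -> T
print(predict(same_context, far_context))    # dissimilar contexts -> F
```

In practice the two vectors would come from a contextualized encoder run on each sentence, and the threshold would be tuned on the development set.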

Languages

The following languages will be considered:

  • Arabic
  • Chinese
  • English
  • French
  • Russian

Annotation

The manual annotation was performed according to the following criteria. Given a target word w occurring in two sentences in the same language (multilingual task), or a target word w in a sentence in one language and its counterpart w' in a sentence in another language (cross-lingual task), we used the tag:

  • T if the two words are used with the exact same meaning.
  • F if the two words are used in two different meanings (such as race in the meaning of competition vs. that of breed).

Important dates

  • Trial data: July 31, 2020
  • Training data ready: October 26, 2020 (postponed from October 1, 2020)
  • Test data ready: December 3, 2020
  • Evaluation start: January 10, 2021
  • Evaluation end: January 31, 2021
  • Paper submission due: February 23, 2021
  • Notification to authors: March 29, 2021
  • Camera ready due: April 5, 2021
  • SemEval workshop: Summer 2021

Key links

Github data repository: mcl-wic
Discussion forum: https://competitions.codalab.org/forums/23750/

Evaluation Criteria

Systems will be asked to perform a binary classification on each sentence pair in the dataset, outputting T or F depending on whether a given target word occurring in two sentences is used with the same meaning (T) or with a different meaning (F). The goal is to determine to what degree systems can discriminate meanings within and across languages without necessarily relying on an explicit sense inventory.

Results will be computed using the accuracy measure. A thorough analysis will be carried out for each language pair (cross-lingual dataset), for the different types of approach declared by participants (context-specific embeddings, WSD, etc.), for the type and amount of training data used by the system, by domain and genre of the sentences (e.g. formal/parliamentary vs. encyclopedic), etc. Furthermore, we will distinguish between systems which exploit the training set provided for the given language(s) and those which do not exploit it, e.g., systems based on vector similarities or traditional WSD systems which output T/F based on sense assignment.
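Accuracy here is simply the fraction of sentence pairs whose T/F tag matches the gold label; a minimal sketch:

```python
def accuracy(gold, pred):
    """Fraction of sentence pairs whose predicted T/F tag
    matches the gold label."""
    assert len(gold) == len(pred), "prediction and gold lists must align"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

gold = ["T", "F", "T", "T"]
pred = ["T", "F", "F", "T"]
print(accuracy(gold, pred))  # 0.75
```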

Baselines

We will compare the performance of participating systems against a baseline neural classifier. Our baseline system will take as input different types of embeddings:

  • sense embeddings, such as LMMS (Loureiro and Jorge, 2019) and SensEmBERT (Scarlini et al., 2020), which combine contextualized embeddings with the knowledge derived from resources such as WordNet and BabelNet;
  • context-specific word embeddings, such as Context2vec (Melamud et al., 2016), BERT (Devlin et al., 2019) etc.
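The exact architecture of the baseline is not specified here; a common way to feed a pair of target-word embeddings to a neural classifier is to concatenate them with element-wise interaction features. The feature layout below is an illustrative assumption, not the organizers' design:

```python
import numpy as np

def pair_features(emb1, emb2):
    """Build one classifier input vector for a sentence pair:
    the two target-word embeddings plus element-wise interaction
    features (absolute difference and product), a common choice
    for pair classification tasks."""
    return np.concatenate([emb1, emb2, np.abs(emb1 - emb2), emb1 * emb2])

rng = np.random.default_rng(0)
e1 = rng.standard_normal(4)  # stand-in for the target-word embedding in sentence 1
e2 = rng.standard_normal(4)  # stand-in for the target-word embedding in sentence 2
x = pair_features(e1, e2)
print(x.shape)  # (16,)
```

The resulting vector would then be passed to a small feed-forward network trained to output the T/F decision.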

Interestingly, this will provide an effective multilingual and cross-lingual benchmark for all types of embeddings and NLU systems.

Terms and Conditions

The data of the Multilingual and Cross-lingual Word-in-Context Disambiguation task are released under the CC-BY-NC 4.0 license. Attribution should be provided by citing the task, its authors and this website (once the corresponding SemEval paper is available, please cite the paper).

Training

Start: Oct. 1, 2020, midnight

Description: Please go to the github repository to download the training and dev data and work on your system(s)!

Evaluation

Start: Jan. 10, 2021, midnight

Description: During the evaluation phase, you can submit your runs, which will be evaluated against the test data.

Post Evaluation

Start: Jan. 31, 2021, midnight

Description: Post-evaluation analysis and discussion phase

Competition Ends

Never
