SemEval 2018 Task 10 Capturing Discriminative Attributes

Organized by krebs

Capturing Discriminative Attributes

Google Group: https://groups.google.com/forum/#!forum/semeval2018-discriminativeattributes

Summary

State-of-the-art semantic models do an excellent job of detecting semantic similarity, a traditional semantic task; for example, a model can tell that cappuccino, espresso and americano are similar to each other. It is clear, however, that no model can claim to capture semantic competence unless, in addition to similarity, it also predicts semantic differences between words. If you can tell that americano is similar to cappuccino and espresso but cannot tell the difference between them, you do not know what americano is. Consequently, any semantic model that is only good at similarity detection will be of limited practical use.

To fill this gap, we propose a novel task of semantic difference detection. The goal of our proposed task is to predict whether a word is a discriminative attribute between two other words. For example, given the words apple and banana, is the word red a discriminative attribute?
Semantic difference is a ternary relation between two concepts (apple, banana) and a discriminative feature (red) that characterizes the first concept but not the second. By its nature, semantic difference detection is a binary classification task: given a triple (apple, banana, red), the task is to determine whether it exemplifies a semantic difference or not.
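The task setup above can be sketched in a few lines. This is a toy illustration, not an official baseline: the tiny property lexicon below is a hypothetical stand-in for whatever semantic resource a real system would use (word embeddings, feature norms, corpus co-occurrence statistics, etc.).

```python
# Hypothetical property lexicon; a real system would derive properties
# from embeddings, feature norms, or corpus statistics instead.
PROPERTIES = {
    "apple":  {"red", "round", "fruit"},
    "banana": {"yellow", "long", "fruit"},
}

def is_discriminative(word1: str, word2: str, attribute: str) -> bool:
    """Return True iff `attribute` characterizes word1 but not word2."""
    return (attribute in PROPERTIES.get(word1, set())
            and attribute not in PROPERTIES.get(word2, set()))

# red distinguishes apple from banana; fruit applies to both, so it does not.
print(is_discriminative("apple", "banana", "red"))    # True
print(is_discriminative("apple", "banana", "fruit"))  # False
```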

Organizers

  • Denis Paperno (Lorraine Laboratory of Computer Science and its Applications (Loria, UMR 7503), National Center for Scientific Research (CNRS), France)
  • Alessandro Lenci (Department of Philology, Literature, and Linguistics of the University of Pisa, Italy)
  • Alicia Krebs (Textkernel BV, Amsterdam, Netherlands)

Evaluation Criteria

The models will be evaluated on the F1 measure, as is standard in binary classification tasks.
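For reference, a minimal F1 computation over gold and predicted labels looks like this (the example labels are illustrative, not taken from the task data):

```python
def f1_score(gold, pred, positive=1):
    """F1 for the positive class: harmonic mean of precision and recall."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Illustrative labels: precision = recall = 2/3, so F1 = 2/3.
gold = [1, 1, 0, 0, 1]
pred = [1, 0, 0, 1, 1]
print(round(f1_score(gold, pred), 4))  # 0.6667
```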

Terms and Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval-2018 workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

You agree not to redistribute the test data except in the manner prescribed by its licence.

Schedule

  • Practice — Start: June 1, 2017, midnight
  • Evaluation — Start: Jan. 8, 2018, midnight
  • Post-Evaluation — Start: Jan. 31, 2018, midnight
  • Competition Ends — Never

Results

  #  Username  Score
  1  vermouth  0.76
  2  rspeer    0.74
  3  esantus   0.73