SemEval-2018 Task 2, Multilingual Emoji Prediction

Organized by CamachoCollados


Welcome! 

Registration is open! (join the task in the Participate tab)
Also join the Google Group to follow the latest news and post questions.

Important Dates:

21 Aug 2017: Trial data release
18 Sep 2017: Training data release
8 Jan 2018: Test data release. Evaluation start
29 Jan 2018: Evaluation end
26 Feb 2018: System description paper deadline

 

Given the importance of visual icons as an additional layer of meaning in social media messages, on the one hand, and the prominent role of Twitter among social media platforms, on the other, we propose the Emoji Prediction task. We invite participants to submit systems that predict, given a tweet in English or Spanish, its most likely associated emoji. Systems will be challenged to predict emojis from a wide and heterogeneous emoji space. In our experimental setting, we remove the emoji from the tweet and ask systems to predict it; for simplicity, tweets containing more than one emoji are ignored (the same setting as [1]). We will provide data for two subtasks:

  • Subtask 1: Emoji Prediction in English
  • Subtask 2: Emoji Prediction in Spanish

Participants can take part in either or both subtasks.
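The setting described above can be sketched as follows. This is a minimal illustration, not the official preprocessing: the three-emoji label set and the helper name are hypothetical, and the real task uses the 20 most frequent emojis per language.

```python
# Hypothetical sketch of the experimental setting: a tweet qualifies only if
# it contains exactly one emoji from the label set; that emoji is stripped
# from the text and used as the gold label.

LABEL_SET = {"\u2764\ufe0f", "\U0001F602", "\U0001F60D"}  # red heart, tears of joy, heart-eyes

def make_example(tweet: str):
    """Return (text_without_emoji, emoji_label), or None if the tweet does
    not contain one and only one emoji from the label set."""
    found = [e for e in LABEL_SET if e in tweet]
    # Keep only tweets with a single label emoji, occurring exactly once.
    if len(found) != 1 or tweet.count(found[0]) != 1:
        return None
    emoji = found[0]
    text = tweet.replace(emoji, "").strip()
    return text, emoji

make_example("good morning \U0001F60D")        # kept: label is the removed emoji
make_example("wow \U0001F602 \U0001F60D")      # discarded: two emojis
```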

[1] Barbieri F., Ballesteros M., Saggion H. Are Emojis Predictable? In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain, 3-7 April 2017.

 

Task Details  

Training and Evaluation Data. The data for the task consist of 500K tweets in English and 100K tweets in Spanish. The tweets were retrieved with the Twitter APIs between October 2015 and February 2017, and geolocated in the United States and Spain. The dataset includes only tweets that contain one and only one emoji, drawn from the 20 most frequent emojis. Data will be split into Training (80%), Trial (10%), and Test (10%).
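The 80/10/10 split above can be reproduced with a simple shuffled partition. This is only an illustrative sketch; the official splits are released by the organizers.

```python
# Illustrative 80/10/10 train/trial/test split (not the official script).
import random

def split_80_10_10(examples, seed=42):
    """Shuffle the examples deterministically and partition them 80/10/10."""
    items = list(examples)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * 0.8)
    n_trial = int(len(items) * 0.1)
    train = items[:n_train]
    trial = items[n_train:n_train + n_trial]
    test = items[n_train + n_trial:]
    return train, trial, test

train, trial, test = split_80_10_10(range(1000))  # 800 / 100 / 100 examples
```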

Label set. As labels we will use the 20 most frequent emojis of each language; these differ between the English and Spanish corpora. Below we show the distribution of the emojis for each language (figures are the percentage of occurrences of each emoji).

 

Organizers

Francesco Barbieri 
Luis Espinosa-Anke
Francesco Ronzano
Horacio Saggion
Universitat Pompeu Fabra, LaSTUS lab, Spain

Jose Camacho-Collados
Valerio Basile
Sapienza University, Italy

Miguel Ballesteros
IBM Watson, USA

Viviana Patti
University of Torino, Italy

 

 

Evaluation Criteria 

For evaluation, the classic precision and recall metrics over each emoji are used. The official results will be based on Macro F-score, defined simply as the average of the individual label-wise F-scores. The rationale is to encourage systems that perform well across the whole label set, which implies a better sensitivity to the use of emojis in general, rather than, for instance, overfitting a model to do well on the three or four most common emojis in the test data. We will also report Micro F-score for informative purposes.
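The difference between the two metrics can be made concrete with a short implementation. This is an illustrative sketch, not the official scorer: macro F-score averages per-label F1, so every emoji counts equally regardless of frequency, while micro F-score weights labels by their frequency (and, for single-label classification, equals accuracy).

```python
# Illustrative macro vs. micro F-score for single-label classification
# (not the official evaluation script).
from collections import Counter

def macro_micro_f1(gold, pred, labels):
    """Return (macro_f1, micro_f1) over parallel gold/pred label lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    per_label = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        per_label.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_label) / len(labels)
    # With exactly one label per example, micro precision == recall == accuracy.
    micro = sum(tp.values()) / len(gold)
    return macro, micro

gold = ["\U0001F602", "\U0001F602", "\u2764\ufe0f", "\U0001F60D"]
pred = ["\U0001F602", "\u2764\ufe0f", "\u2764\ufe0f", "\U0001F602"]
macro, micro = macro_micro_f1(gold, pred, ["\U0001F602", "\u2764\ufe0f", "\U0001F60D"])
```

Note how the rare emoji that is never predicted correctly drags the macro score down but barely affects the micro score, which is exactly why macro F-score was chosen as the official metric.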

The official evaluation script can be found here.

Terms and Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval-2018 workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

You agree not to redistribute the test data except in the manner prescribed by its licence.

Practice

Start: June 1, 2017, midnight

Evaluation

Start: Jan. 8, 2018, midnight

Post-Evaluation

Start: Jan. 30, 2018, midnight

Competition Ends

Never


Top Three

Rank  Username      Score
1     Deffro        47.783
2     gguibon       45.798
3     m.yaghubzade  24.817