SemEval-2018 Task 2, Multilingual Emoji Prediction

Organized by CamachoCollados

Competition phases:

  • Practice: June 1, 2017, midnight UTC
  • Evaluation: Jan. 8, 2018, midnight UTC
  • Post-Evaluation (current): Jan. 30, 2018, midnight UTC
  • Competition ends: never

Welcome! 

Data and results are available! (check the tabs on the left)

(Join the Google Group to follow the latest news and post questions.)

Important Dates:

21 Aug 2017: Trial data release
18 Sep 2017: Training data release
8 Jan 2018: Test data release; evaluation starts
29 Jan 2018: Evaluation ends
5 Mar 2018: System description paper deadline
23 Mar 2018: System description reviews deadline
2 Apr 2018: Author notifications
16 Apr 2018: Camera-ready submissions deadline
5-6 Jun 2018: SemEval Workshop

 

Given the paramount importance of visual icons in providing an additional layer of meaning to social media messages, on one hand, and the indisputable role of Twitter as one of the most important social media platforms, on the other, we propose the Emoji Prediction task. We invite participants to submit systems designed to predict, given a tweet in English or Spanish, its most likely associated emoji. Systems are challenged to predict emojis from a wide and heterogeneous emoji space. In the experimental setting, the emoji is removed from the tweet and systems must predict it; for simplicity, tweets containing more than one emoji are ignored (the same setting as [1]). We provide data for two subtasks:

  • Subtask 1: Emoji Prediction in English
  • Subtask 2: Emoji Prediction in Spanish

Participants can take part in one or both subtasks.

[1] Barbieri, F., Ballesteros, M., Saggion, H. Are Emojis Predictable? In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain, 3-7 April 2017.

 

Task Details  

Training and Evaluation Data. The data for the task consists of 500k tweets in English and 100k tweets in Spanish. The tweets were retrieved with the Twitter API between October 2015 and February 2017, and geolocated in the United States and Spain. The dataset includes only tweets that contain exactly one emoji, drawn from the 20 most frequent emojis. The data is split into trial, training, and test sets.

Label set. The labels are the 20 most frequent emojis of each language; they differ between the English and Spanish corpora. The distribution of the emojis for each language (the percentage of occurrence of each emoji) is shown in a figure on the task website. [Figure not reproduced here.]

Note that, due to an issue, only 19 emojis are considered in the Spanish task (labels 0 to 18; the "top" emoji is omitted).

Organizers

Francesco Barbieri 
Luis Espinosa-Anke
Francesco Ronzano
Horacio Saggion
Universitat Pompeu Fabra, LaSTUS lab, Spain

Jose Camacho-Collados
Valerio Basile
Sapienza University of Rome, Italy

Miguel Ballesteros
IBM Watson, USA

Viviana Patti
University of Torino, Italy

 

Cite this task/dataset as:

@InProceedings{semeval2018task2,
  title={{SemEval-2018 Task 2: Multilingual Emoji Prediction}},
  author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
  booktitle={Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)},
  year={2018},
  address={New Orleans, LA, United States},
  publisher={Association for Computational Linguistics}
}
 

Evaluation Criteria 

For evaluation, the classic precision and recall metrics over each emoji are used. The official results are based on macro F-score, since the fundamental idea of this task is to encourage systems to perform well overall, which implies a better sensitivity to the use of emojis in general, rather than, for instance, overfitting a model to do well on the three or four most common emojis in the test data. Macro F-score is defined simply as the average of the individual label-wise F-scores. Micro F-score is also reported for informative purposes.
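As a minimal sketch of the official metric (the actual evaluation script linked below is authoritative), macro F-score can be computed by averaging per-label F1 over all labels:

```python
def per_label_prf(gold, pred, label):
    """Precision, recall, and F1 for one emoji label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def macro_f1(gold, pred):
    """Average of label-wise F1 scores over all gold labels."""
    labels = sorted(set(gold))
    return sum(per_label_prf(gold, pred, lab)[2] for lab in labels) / len(labels)

# Toy example with three labels:
gold = [0, 0, 1, 1, 2]
pred = [0, 1, 1, 1, 0]
print(round(macro_f1(gold, pred), 3))  # 0.433
```

Note how label 2, which the toy system never predicts correctly, contributes an F1 of 0 and drags the macro average down; this is exactly the property that rewards systems performing well on rare emojis too.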

The official evaluation script can be found here.

The dataset can be downloaded here.

The zip file includes three subdirectories: train, test, and trial (the latter may be used as development).

Test and trial tweets are already available along with their corresponding labels, but, due to Twitter restrictions, you will need to download the training-set tweets yourself. The process is straightforward, and all instructions and commands can be found in the train folder.
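Once the training tweets are downloaded, each split boils down to parallel text and label files. As a hedged illustration only (the file names and layout here are assumptions; the README in the train folder describes the actual format), pairing tweet texts with their numeric emoji labels might look like:

```python
# Hypothetical layout: one tweet per line in a .text file, and the matching
# numeric emoji label per line in a .labels file (check the train README
# for the real file names and format).
def load_split(text_path, labels_path):
    """Read aligned text and label files into (tweet, label) pairs."""
    with open(text_path, encoding="utf-8") as f:
        texts = [line.rstrip("\n") for line in f]
    with open(labels_path, encoding="utf-8") as f:
        labels = [int(line.strip()) for line in f]
    assert len(texts) == len(labels), "text/label files must align line by line"
    return list(zip(texts, labels))
```

Keeping the text and label files strictly line-aligned is what makes this pairing safe; the assertion guards against a partially downloaded split.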

The dataset can be cited with the BibTeX entry given above.
 

The results can be found here.
There are two tabs, one for English and one for Spanish. The columns show:
username, macro F1, precision, recall, accuracy, and per-emoji F1 (starting from label 0).

System Description Papers

Teams that have not completed the survey yet should do so as soon as possible.
The deadline for the submission of system descriptions is Mon 26 Feb 2018, 23:59 GMT-12:00. System descriptions must be submitted through SOFTCONF, following the detailed instructions at this URL: http://alt.qcri.org/semeval2018/index.php?id=papers

 

Terms and Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval-2018 workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

You agree not to redistribute the test data except in the manner prescribed by its licence.

