Sentimix Spanglish

Organized by suraj1ly

First phase

Start: Sept. 4, 2019, midnight UTC


Competition Ends
March 12, 2020, noon UTC

Overview paper:


Mixing languages, also known as code-mixing, is the norm in multilingual societies. Multilingual speakers who are non-native English speakers tend to code-mix using English-based phonetic typing and by inserting anglicisms into their main language. In addition to mixing languages at the sentence level, code-mixing at the word level is also fairly common. This linguistic phenomenon poses a great challenge to conventional NLP systems, which currently rely on monolingual resources and struggle to handle combinations of multiple languages. The objective of this task is to draw the research community's attention to sentiment analysis of code-mixed social media text. Specifically, we focus on the combination of English with Spanish (Spanglish) and Hindi (Hinglish), which are the 3rd and 4th most spoken languages in the world, respectively.

Hinglish and Spanglish - the Modern Urban Languages 
The evolution of social media texts such as blogs, micro-blogs (e.g., Twitter), and chats (e.g., WhatsApp and Facebook messages) has created many new opportunities for information access and language technology, but it has also posed many new challenges, making this one of the prime current research areas. Although current language technologies are primarily built for English, non-native English speakers combine English and other languages when they use social media. In fact, statistics show that half of the messages on Twitter are in a language other than English. This evidence suggests that other languages, including multilinguality and code-mixing, need to be considered by the NLP community. Code-mixing poses several unseen difficulties for NLP tasks such as word-level language identification, part-of-speech tagging, dependency parsing, machine translation, and semantic processing. Conventional NLP systems rely heavily on monolingual resources to address code-mixed text, which limits their ability to properly handle issues like English-based phonetic typing and word-level code-mixing. The next two phrases are examples of code-mixing in Spanglish and Hinglish. In the Spanglish example, in addition to code-mixing at the sentence level, the word "pushes" conjugates the English word "push" according to Spanish grammar rules, which shows that code-mixing can also happen at the word level. In the Hinglish example, only one English word, "enjoy", is used, but more noticeably, the Hindi words are written in English phonetic typing rather than Devanagari script, a popular practice in India.

The SentiMix task - A summary 
The task is to predict the sentiment of a given code-mixed tweet. The sentiment labels are positive, negative, and neutral, and the code-mixed language pairs are English-Hindi and English-Spanish. Besides the sentiment labels, we will also provide language labels at the word level. The word-level language tags are en (English), spa (Spanish), hi (Hindi), mixed, and univ (e.g., symbols, @ mentions, hashtags). Performance will be measured in terms of Precision, Recall, and F-measure.
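To make the annotation scheme above concrete, a single training example pairs a tweet-level sentiment label with per-token language tags. The structure and field names below are illustrative assumptions, not the official release format:

```python
# Hypothetical representation of one annotated code-mixed (Spanglish) tweet.
# Field names and structure are illustrative, not the official data format.
example = {
    "tweet_id": "12345",      # hypothetical identifier
    "sentiment": "positive",  # one of: positive, negative, neutral
    "tokens": [
        ("@amigo", "univ"),   # mentions, hashtags, symbols -> univ
        ("me", "spa"),
        ("gusta", "spa"),
        ("your", "en"),
        ("style", "en"),
    ],
}

# Word-level language tags are restricted to the announced tag set:
valid_tags = {"en", "spa", "hi", "mixed", "univ"}
assert all(tag in valid_tags for _, tag in example["tokens"])
assert example["sentiment"] in {"positive", "negative", "neutral"}
```

For the Hinglish track, the spa tag would be replaced by hi, with the rest of the scheme unchanged.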



If you are a participant or a researcher using our dataset or find this work useful, please cite the following paper:

@inproceedings{patwa-etal-2020-semeval,
    title = {{S}em{E}val-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets},
    author = {Patwa, Parth and
      Aguilar, Gustavo and
      Kar, Sudipta and
      Pandey, Suraj and
      PYKL, Srinivas and
      Gamb{\"a}ck, Bj{\"o}rn and
      Chakraborty, Tanmoy and
      Solorio, Thamar and
      Das, Amitava},
    booktitle = {Proceedings of the 14th International Workshop on Semantic Evaluation ({S}em{E}val-2020)},
    year = {2020},
    month = {December},
    address = {Barcelona, Spain},
    publisher = {Association for Computational Linguistics},
}


For the Tasks 
Official Competition Metric: The metric for evaluating the participating systems is the F1 score averaged across the positive, negative, and neutral classes, and the final ranking is based on this average F1 score. For further theoretical discussion, we will also release the macro-averaged recall (recall averaged across the three classes), since it has better theoretical properties than macro-averaged F1 and provides better consistency.
Each participating team will initially have access to the training data only. Later, the unlabelled test data will be released. After SemEval-2020, the labels for the test data will be released as well. We will ask the participants to submit their predictions in a specified format (within 24 hours), and the organizers will calculate the results for each participant. We will make no distinction between constrained and unconstrained systems, but the participants will be asked to report what additional resources they have used for each submitted run.

Organizer List

Dr. Amitava Das, Wipro AI Labs, Bangalore, India / Mahindra École Centrale, Hyderabad, India
Dr. Tanmoy Chakraborty, Indraprastha Institute of Information Technology Delhi, India
Dr. Thamar Solorio, University of Houston, USA
Dr. Björn Gambäck, Norwegian University of Science and Technology, Norway
Gustavo Aguilar, University of Houston, USA
Sudipta Kar, University of Houston, USA
Dr. Dan Garrette, Google Research, New York, USA
Srinivas P Y K L, Indian Institute of Information Technology Sri City, India

Student Volunteers:
Parth Patwa, Indian Institute of Information Technology Sri City, India
Suraj Pandey, Indraprastha Institute of Information Technology Delhi, India

Schedule

Trial data ready: July 31, 2019
Training data ready: September 4, 2019
Test data ready: February 19, 2020
Evaluation start: February 19, 2020
Evaluation end: March 1, 2020
Results posted: March 18, 2020
System description paper submissions due: May 1, 2020
Task description paper submissions due: May 8, 2020
Author notifications: June 24, 2020
Camera-ready submissions due: July 8, 2020

SemEval 2020: December 12-13, 2020


Terms and Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval workshop and in the associated proceedings, at the task organizers' discretion. Scores may include but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgment that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

By downloading the data or by accessing it in any manner, you agree not to redistribute the data except for non-commercial, academic-research purposes. The data must not be used for surveillance, or for analyses or research that isolate a group of individuals or any single individual for any unlawful or discriminatory purpose.

For any queries, contact via email:

Rank User Score 1 (Best Score) Score 2 Score 3
1 LiangZhao 0.806 0.805 0.794
2 rachel 0.776 0.755 0.749
3 asking28 0.756 0.612 0.595
4 dpalominop 0.755 0.742 0.703
5 kongjun 0.753 0.726 0
6 HaoYu 0.752 0.694 0.663
7 Taha 0.751 0.736 0.033
8 meiyim 0.745 0.725 0
9 Lavinia_Ap 0.744 0 0
10 jupitter 0.739 0.705 0
11 tangmen 0.732 0.721 0.716
12 hermosillo748 0.728 0.714 0
13 harsh_6 0.725 0.703 0
14 francesita 0.722 0.721 0.721
15 ajason08 0.71 0.478 0
16 caozhou 0.707 0.703 0.664
17 clementincercel 0.706 0.696 0.694
18 ayushk 0.703 0.664 0.634
19 ahmed0sultan 0.701 0.701 0.696
20 Genius1237 0.684 0.637 0.574
21 zyy1510 0.682 0.651 0.634
22 keshav22b 0.671 0.149 0
23 suraj1ly (organizer baseline) 0.656 0 0
24 Abhilash 0.656 0 0
25 souryadipta 0.651 0.189 0
26 joca 0.646 0.637 0
27 pribanp 0.638 0.63 0.596
28 lakshadvani 0.634 0.634 0
29 sjmaharjan 0.27 0 0

Class-wise F1 scores for the first 3 submissions of each participant (sorted by timestamp):

