SentiMix Hindi-English

Organized by suraj1ly


First phase
Sept. 4, 2019, 8 a.m. UTC


Competition Ends
Jan. 31, 2020, 11:33 p.m. UTC


Mixing languages, also known as code-mixing, is the norm in multilingual societies. Multilingual speakers who are non-native English speakers tend to code-mix using English-based phonetic typing and by inserting anglicisms into their main language. In addition to mixing languages at the sentence level, code-mixing at the word level is fairly common. This linguistic phenomenon poses a great challenge to conventional NLP systems, which currently rely on monolingual resources and struggle to handle combinations of multiple languages. The objective of this task is to bring the attention of the research community to sentiment analysis of code-mixed social media text. Specifically, we focus on combinations of English with Spanish (Spanglish) and with Hindi (Hinglish), which are the 3rd and 4th most spoken languages in the world, respectively.

Hinglish and Spanglish - the Modern Urban Languages 
The evolution of social media text such as blogs, micro-blogs (e.g., Twitter), and chats (e.g., WhatsApp and Facebook messages) has created many new opportunities for information access and language technology, but it has also posed many new challenges, making it one of the current prime research areas. Although current language technologies are primarily built for English, non-native English speakers frequently combine English with other languages when they use social media. In fact, statistics show that half of the messages on Twitter are in a language other than English. This suggests that other languages, including multilinguality and code-mixing, need to be considered by the NLP community. Code-mixing poses several previously unseen difficulties for NLP tasks such as word-level language identification, part-of-speech tagging, dependency parsing, machine translation, and semantic processing. Conventional NLP systems rely heavily on monolingual resources, which limits their ability to properly handle issues like English-based phonetic typing, word-level code-mixing, and others. Consider examples of code-mixing in Spanglish and Hinglish. In a Spanglish sentence, in addition to code-mixing at the sentence level, a word such as "pushes" conjugates the English word "push" according to Spanish grammar rules, showing that code-mixing can also happen at the word level. In a Hinglish sentence, even when only one English word (e.g., "enjoy") is used, the Hindi words are typically written in English phonetic typing rather than Devanagari script, a popular practice in India.

The SentiMix task - A summary 
The task is to predict the sentiment of a given code-mixed tweet. The sentiment labels are positive, negative, or neutral, and the code-mixed language pairs are English-Hindi and English-Spanish. Besides the sentiment labels, we will also provide language labels at the word level. The word-level language tags are en (English), spa (Spanish), hi (Hindi), mixed, and univ (e.g., symbols, @ mentions, hashtags). Performance will be measured in terms of Precision, Recall, and F-measure.
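As a rough illustration of the word-level language tags described above (the tokens below are invented for this sketch; the official data format and files are defined by the organizers), a Hinglish tweet might be represented as token-tag pairs:

```python
from collections import Counter

# Hypothetical Hinglish tweet with word-level language tags.
# Tags follow the set described above: en, hi, mixed, univ.
tagged_tweet = [
    ("@friend", "univ"),   # @ mentions are tagged univ
    ("movie", "en"),       # English word
    ("bahut", "hi"),       # Hindi written in Roman script (phonetic typing)
    ("acchi", "hi"),
    ("hai", "hi"),
    ("#Bollywood", "univ"),  # hashtags are tagged univ
]

# Count how many tokens carry each language tag
tag_counts = Counter(tag for _, tag in tagged_tweet)
print(tag_counts)  # Counter({'hi': 3, 'univ': 2, 'en': 1})
```

Note how the Hindi tokens appear in Roman script rather than Devanagari, reflecting the English-based phonetic typing common in code-mixed social media text.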


Evaluation
Official Competition Metric: Participating systems will be evaluated as follows. We will use the F1 score averaged across the positive, negative, and neutral classes, and the final ranking will be based on this average F1 score. For further discussion, we will also release macro-averaged recall (recall averaged across the three classes), since it has better theoretical properties than macro-averaged F1 and provides better consistency.
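The ranking metric and the supplementary metric can be sketched from scratch as follows (a minimal illustration with invented toy labels; the official scorer is provided by the organizers):

```python
# Sketch of the official metric: F1 averaged over the three sentiment
# classes (macro-F1), plus the supplementary macro-averaged recall.

LABELS = ["positive", "negative", "neutral"]

def per_class_prf(gold, pred, label):
    """Precision, recall, and F1 for a single class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def macro_scores(gold, pred):
    """Macro-F1 (the ranking metric) and macro-averaged recall."""
    scores = [per_class_prf(gold, pred, label) for label in LABELS]
    macro_recall = sum(r for _, r, _ in scores) / len(LABELS)
    macro_f1 = sum(f for _, _, f in scores) / len(LABELS)
    return macro_f1, macro_recall

# Toy example (invented labels, for illustration only):
gold = ["positive", "negative", "neutral", "positive", "neutral"]
pred = ["positive", "neutral",  "neutral", "negative", "neutral"]
macro_f1, macro_recall = macro_scores(gold, pred)
```

Macro-averaging gives each class equal weight regardless of how many tweets it contains, which is why macro-averaged recall is considered more robust to class imbalance.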
Each participating team will initially have access to the training data only. Later, the unlabelled test data will be released. After SemEval-2020, the labels for the test data will be released as well. We will ask participants to submit their predictions in a specified format within 24 hours, and the organizers will calculate the results for each participant. We will make no distinction between constrained and unconstrained systems, but participants will be asked to report what additional resources they used for each submitted run.

Organizer List

Dr. Amitava Das - Wipro AI Labs, Bangalore, India; Mahindra École Centrale, Hyderabad, India
Dr. Tanmoy Chakraborty - Indraprastha Institute of Information Technology Delhi, India
Dr. Thamar Solorio - University of Houston, USA
Dr. Björn Gambäck - Norwegian University of Science and Technology, Norway
Gustavo Aguilar - University of Houston, USA
Sudipta Kar - University of Houston, USA
Dr. Dan Garrette - Google Research, New York, USA
Srinivas P Y K L - Indian Institute of Information Technology Sri City, India

Student Volunteers
Parth Patwa - Indian Institute of Information Technology Sri City, India
Suraj Pandey - Indraprastha Institute of Information Technology Delhi, India

Schedule

Trial data ready: July 31, 2019
Training data ready: September 4, 2019
Test data ready: December 3, 2019
Evaluation start: January 10, 2020
Evaluation end: January 31, 2020
Paper submission due: February 23, 2020
Notification to authors: March 29, 2020
Camera ready due: April 5, 2020

SemEval workshop: Summer 2020

Terms & Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value rests with the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgment that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

By downloading the data or by accessing it in any manner, you agree not to redistribute the data except for non-commercial, academic-research purposes. The data must not be used for surveillance, or for analyses or research that isolates a group of individuals or any single individual, for any unlawful or discriminatory purpose.

For any queries, contact us by email:
