Memotion Analysis

Organized by semeval2019

First phase

Start: Sept. 4, 2019, midnight UTC

Competition ends: March 13, 2020, 11:55 p.m. UTC

Announcements:

Ground Truth: https://drive.google.com/drive/folders/13Rz7r6JZ4jmsZ_zceJmL3OsHNqJIaiYN?usp=sharing 

The dataset may be used in any paper, but only with citation.

bibtex:

@inproceedings{chhavi2020memotion,
title={{Task Report: Memotion Analysis 1.0 @SemEval 2020: The Visuo-Lingual Metaphor!}},
author="Sharma, Chhavi and
Paka, William Scott and
Bhageria, Deepesh and
Das, Amitava and
Poria, Soujanya and
Chakraborty, Tanmoy and
Gamb{\"a}ck, Bj{\"o}rn",
booktitle = "Proceedings of the 14th International Workshop on Semantic Evaluation ({S}em{E}val-2020)",
year = {2020},
month = {Sep},
address = "Barcelona, Spain",
publisher = "Association for Computational Linguistics"
}

 

Abstract
Information on social media comprises various modalities, such as text, images, and audio. The NLP and computer vision communities often leverage only one prominent modality in isolation to study social media, but the computational processing of Internet memes needs a hybrid approach. The growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter further suggests that we cannot ignore such multimodal content any longer. To the best of our knowledge, meme emotion analysis has so far received little research attention. The objective of this proposal is to draw the research community's attention to the automatic processing of Internet memes. The Memotion Analysis task will release 8K annotated memes with human-annotated labels, namely sentiment and type of humor, that is, sarcastic, humorous, or offensive.

The Multimodal Social Media
In the last few years, the growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter has become a topic of immense interest; "meme" is one of the most typed English words of recent times (Sonnad, 2018). Memes are often derived from our prior social and cultural experiences, such as a TV series or a popular cartoon character (think: One Does Not Simply, a now immensely popular meme taken from the movie The Lord of the Rings). These digital constructs are so deeply ingrained in our Internet culture that to understand the opinion of a community, we need to understand the type of memes it shares. Gal et al. (2016) aptly describe them as performative acts, which involve a conscious decision to either support or reject an ongoing social discourse.

Online Hate - A Brutal Job: The prevalence of hate speech in online social media is a nightmare and a great societal responsibility for many social media companies. The latest entrant, Internet memes (Williams et al., 2016), has doubled the challenge. When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or a paid worker. Even today, companies like Facebook and Twitter rely extensively on outside human contractors, from start-ups like CrowdFlower or companies in the Philippines. But with the growing volume of multimodal social media, this approach is becoming impossible to scale. The detection of offensive content on online social media is an ongoing struggle: OffensEval (Zampieri et al., 2019) is a shared task that has been organized at SemEval for the last two years. Detecting an offensive meme, however, is more complex than detecting offensive text: it involves both visual cues and language understanding. This is one of the motivating aspects that encouraged us to propose this task.
Multimodal Social Media Analysis - The Necessity: Analogous to textual content on social media, memes also need to be analyzed and processed to extract the conveyed message. A few researchers have tried to automate meme generation (Peirson et al., 2018; Oliveira et al., 2016), while others have recently tried to extract a meme's inherent sentiment (French, 2017). Nevertheless, much more needs to be done to distinguish finer aspects such as the type of humor or offense. We hope the Memotion Analysis task will bring research attention to the topic, and that this forum will be the place for researchers to continue the relevant discussions.

The Memotion Analysis Task

Task A - Sentiment Classification: Given an Internet meme, the first task is to classify it as a positive, negative, or neutral meme.

Task B - Humor Classification: Given an Internet meme, the system has to identify the type of humor expressed. The categories are sarcastic, humorous, offensive, and motivational. A meme can belong to more than one category.

Task C - Scales of Semantic Classes: The third task is to quantify the extent to which a particular effect is expressed. Details of these quantifications are reported in Table 1. Appropriately annotated data will be provided.

Evaluation Criteria
For Task A: macro F1

For Tasks B and C: macro F1 for each of the subtasks, then averaged over the subtasks.
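The scoring scheme above can be sketched in pure Python. This is our own illustration, not the official scorer, whose implementation may differ; the example labels are invented for demonstration.

```python
# Sketch of the evaluation criteria: macro F1 for Task A, and the
# average of per-subtask macro F1 scores for Tasks B and C.
def macro_f1(gold, pred):
    """Macro-averaged F1: compute F1 per class, then average the
    per-class F1 scores with equal weight."""
    labels = sorted(set(gold) | set(pred))
    f1s = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Task A: a single macro F1 over the sentiment labels [-1, 0, 1].
task_a_score = macro_f1([-1, 0, 1, 1], [-1, 1, 1, 0])

# Tasks B and C: macro F1 per subtask (humor, sarcasm, offensive,
# motivational), then the plain average over the four subtasks.
gold = {"humor": [0, 1], "sarcasm": [1, 1], "offensive": [0, 0], "motivational": [1, 0]}
pred = {"humor": [0, 1], "sarcasm": [1, 0], "offensive": [0, 1], "motivational": [1, 0]}
task_b_score = sum(macro_f1(gold[k], pred[k]) for k in gold) / len(gold)
```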

 

The training data has been cleaned and uploaded at the following link; we highly recommend that you use it: https://www.kaggle.com/williamscott701/memotion-dataset-7k

 

The test set was updated on February 26, 00:00 EST.

Please read the following very carefully.

  • Prepare a txt file for submission.
  • Only one file has to be generated, whether you participate in one task or all of them.
  • The txt file should not contain headers or indices.
  • Each row should contain the results for Task A, Task B, and Task C, separated by an underscore (_) delimiter.
  • If you do not wish to attempt a particular task, fill its values with 9s. If a task has 4 labels, it should be filled with 9 four times.
  • Sample row: -1_9999_2100
  • The fields are ordered by task: -1 is the (negative) score for Task A, 9999 is for Task B, and 2100 is for Task C. Since 9s are present for Task B, Task B scores will not be computed for this row.
  • For Task A, the score can be one of [-1, 0, 1]. For Task B, each score can be one of [0, 1]. For Task C, each score can be one of [0, 1, 2, 3].
  • The four digits for Task B and Task C should be in the following order: humor, sarcasm, offensive, motivational.
  • On the leaderboard, the average of your task scores will be shown. You can download individual scores after submission.
  • The row order of the results should match the csv provided in the test data.
  • The text file must be named answer.txt.
  • It should be zipped as res.zip.
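The format above can be assembled programmatically. Below is a hypothetical helper (the function name and its defaults are ours, not the organizers') that formats one row per the rules above and packages answer.txt into res.zip.

```python
import zipfile

# Hypothetical helper for the submission format described above.
# Tasks B and C each take four digits in the order: humor, sarcasm,
# offensive, motivational. A skipped task is filled with 9s.
def format_row(task_a=None, task_b=None, task_c=None):
    a = str(task_a) if task_a is not None else "9"
    b = "".join(str(v) for v in task_b) if task_b else "9999"
    c = "".join(str(v) for v in task_c) if task_c else "9999"
    return "_".join([a, b, c])

rows = [
    format_row(task_a=-1, task_c=[2, 1, 0, 0]),  # Task B skipped
    format_row(task_a=0, task_b=[1, 0, 1, 0], task_c=[3, 0, 1, 0]),
]

# Write answer.txt (no headers, no indices) and zip it as res.zip.
with open("answer.txt", "w") as f:
    f.write("\n".join(rows) + "\n")
with zipfile.ZipFile("res.zip", "w") as z:
    z.write("answer.txt")
```

The first row reproduces the sample from the instructions: `-1_9999_2100`.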

Baseline scores:

Task A: 0.2176489217

Task B: 0.5118483395

Task C: 0.2483801837

 

Trial data ready: July 31, 2019
Training data ready: September 4, 2019
Test data ready: February 19, 2020
Evaluation start: February 19, 2020
Evaluation end: March 11, 2020

Results posted: March 18, 2020
System description paper submissions due: May 1, 2020

Task description paper submissions due: May 8, 2020
Notification to authors: June 10, 2020
Camera-ready Submissions due: July 1, 2020
SemEval: September 13-14, 2020

Dr. Amitava Das.
Wipro AI Labs, Bangalore, India
Mahindra École Centrale, Hyderabad, India.

Dr. Tanmoy Chakraborty.
Indraprastha Institute of Information Technology Delhi, India.

Dr. Soujanya Poria.
Nanyang Technological University, Singapore.

Dr. Björn Gambäck.
Norwegian University of Science and Technology, Norway.

Chhavi Sharma.
Indian Institute of Information Technology, Sri City, India.

William Scott Paka.
Indraprastha Institute of Information Technology, Delhi.

Deepesh Bhageria.
Indian Institute of Information Technology, Sri City, India.
 

Terms & Conditions

By submitting results to this competition, you consent to the public release of your scores at the SemEval workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value rests with the task organizers.

You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgment that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.

By downloading the data or by accessing it in any manner, you agree not to redistribute the data except for non-commercial, academic-research purposes. The data must not be used for surveillance, or for analyses or research that isolate a group of individuals or any single individual for any unlawful or discriminatory purpose.

For any queries, contact us by email: semevalmemotion@gmail.com

Task A:

  • Negative and Very Negative => -1
  • Positive and Very Positive => 1
  • Neutral => 0

Task B:

  • Not humorous => 0 and Humorous (funny, very funny, hilarious) => 1
  • Not Sarcastic => 0 and Sarcastic (general, twisted meaning, very twisted) => 1
  • Not offensive => 0 and Offensive (slight, very offensive, hateful offensive) => 1
  • Not Motivational => 0 and Motivational => 1

Task C:
Humour :

  • Not funny => 0
  • Funny => 1
  • Very funny => 2
  • Hilarious => 3

Sarcasm:

  • Not Sarcastic => 0
  • General => 1
  • Twisted Meaning => 2
  • Very Twisted => 3

Offense:

  • Not offensive => 0
  • Slight => 1
  • Very Offensive => 2
  • Hateful Offensive => 3

Motivation:

  • Not Motivational => 0
  • Motivational => 1
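The label encodings above can be written out as plain dictionaries. This is a convenience sketch with names of our own choosing (the official data files use the textual labels); it also shows how the fine-grained Task C scales collapse to the binary Task B labels, where any non-zero intensity counts as the class being present.

```python
# Task A: three-way sentiment.
TASK_A = {"negative": -1, "neutral": 0, "positive": 1}

# Task C: fine-grained intensity scales per semantic class.
TASK_C = {
    "humour":     {"not_funny": 0, "funny": 1, "very_funny": 2, "hilarious": 3},
    "sarcasm":    {"not_sarcastic": 0, "general": 1, "twisted_meaning": 2, "very_twisted": 3},
    "offense":    {"not_offensive": 0, "slight": 1, "very_offensive": 2, "hateful_offensive": 3},
    "motivation": {"not_motivational": 0, "motivational": 1},
}

def to_task_b(task_c_scores):
    """Collapse Task C intensities to binary Task B labels:
    any non-zero intensity means the class is present (1)."""
    return {k: int(v > 0) for k, v in task_c_scores.items()}

# Example: a slightly offensive, very funny meme.
scores = {"humour": 2, "sarcasm": 0, "offense": 1, "motivation": 0}
binary = to_task_b(scores)  # {"humour": 1, "sarcasm": 0, "offense": 1, "motivation": 0}
```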

Announcements:

1. We encourage all participants to write a system description paper on the approaches implemented, in the format specified at http://alt.qcri.org/semeval2020/. Follow the SemEval website for paper submission details.

2. You must cite the task paper in your own paper, using the BibTeX entry provided above.

3. In the future, the dataset may be used in any paper, but only with citation.

4. Only submissions that were emailed to us were considered.


 

The results for each task are shared in a spreadsheet; the top three ranked teams for each task are considered the winners:

https://docs.google.com/spreadsheets/d/10hryIs0-CWUzuzaEGeLxInJz_ylXcHEkpwV02-pQQwU/edit?usp=sharing

 
