Information on social media comprises various modalities such as text, images, and audio. The NLP and Computer Vision communities often study social media through only one prominent modality in isolation. The computational processing of Internet memes, however, needs a hybrid approach. The growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter further suggests that we cannot ignore such multimodal content anymore. To the best of our knowledge, meme emotion analysis has received little research attention so far. The objective of this proposal is to draw the attention of the research community to the automatic processing of Internet memes. The Memotion analysis task will release 8K annotated memes with human-annotated tags, namely sentiment and type of humor, i.e., sarcastic, humorous, or offensive.
Multimodal Social Media
In the last few years, the growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter has become a topic of immense interest. "Meme" is one of the most typed English words in recent times (Sonnad, 2018). Memes are often derived from our prior social and cultural experiences, such as a TV series or a popular cartoon character (think: One Does Not Simply, a now immensely popular meme taken from the movie Lord of the Rings). These digital constructs are so deeply ingrained in our Internet culture that to understand the opinion of a community, we need to understand the type of memes it shares. Gal et al. (2016) aptly describe them as performative acts, which involve a conscious decision to either support or reject an ongoing social discourse.

Online Hate - A Brutal Job: The prevalence of hate speech in online social media is a nightmare and a great societal responsibility for many social media companies. The latest entrant, Internet memes (Williams et al., 2016), has doubled the challenge. When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or a paid worker. Even today, companies like Facebook and Twitter rely extensively on outside human contractors, from start-ups like CrowdFlower or companies in the Philippines. But with the growing volume of multimodal social media, this approach is becoming impossible to scale. The detection of offensive content on online social media is an ongoing struggle. OffensEval (Zampieri et al., 2019) is a shared task that has been organized at SemEval for the last two years. Detecting an offensive meme, however, is more complex than detecting offensive text: it requires both visual cues and language understanding. This is one of the motivating aspects that encouraged us to propose this task.
Multimodal Social Media Analysis - The Necessity: Analogous to textual content on social media, memes also need to be analyzed and processed to extract the conveyed message. A few researchers have tried to automate the meme generation process (Peirson et al., 2018; Oliveira et al., 2016), while others have recently tried to extract a meme's inherent sentiment (French, 2017). Nevertheless, much more needs to be done to distinguish finer aspects such as the type of humor or offense. We hope the Memotion analysis task will draw research attention to the topic, and that the forum will be the place for researchers to continue relevant discussions.
The Memotion Analysis Task
Task A - Sentiment Classification: Given an Internet meme, the first task is to classify it as a positive, negative, or neutral meme.
Task B - Humor Classification: Given an Internet meme, the system has to identify the type of humor expressed. The categories are sarcastic, humorous, and offensive. If a meme does not fall into any of these categories, it is marked as other. A meme can belong to more than one category.
Task C - Scales of Semantic Classes: The third task is to quantify the extent to which a particular effect is expressed. Details of these quantifications are reported in Table 1. Appropriately annotated data will be provided.
Evaluation: For Task A, systems are scored by macro F1. For Tasks B and C, macro F1 is computed for each subtask and then averaged.
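The official scorer is not reproduced here, but the scoring scheme above can be sketched in plain Python: a per-class F1 is computed, macro-averaged over classes, and, for Tasks B and C, the resulting subtask scores are averaged. The labels and subtask scores below are hypothetical examples, not drawn from the actual data.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Task A: a single macro F1 over the three sentiment labels (toy example)
score_a = macro_f1(["positive", "neutral", "negative", "positive"],
                   ["positive", "negative", "negative", "positive"])

# Tasks B and C: macro F1 per subtask, then averaged (hypothetical values)
subtask_scores = [0.52, 0.48, 0.61]
score_bc = sum(subtask_scores) / len(subtask_scores)
```

Participants using scikit-learn can obtain the same per-task number with `f1_score(y_true, y_pred, average="macro")`.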
The test set was updated on 26 February, 00:00 EST.
Please read the following very carefully.
Trial data ready: July 31, 2019
Training data ready: September 4, 2019
Test data ready: February 19, 2020
Evaluation start: February 19, 2020
Evaluation end: March 11, 2020
Results posted: March 18, 2020
System description paper submissions due: April 17, 2020
Task description paper submissions due: April 24, 2020
Notification to authors: June 10, 2020
Camera-ready Submissions due: July 1, 2020
SemEval: September 13-14, 2020
Dr. Amitava Das, Wipro AI Labs, Bangalore, India; Mahindra École Centrale, Hyderabad, India.
Indraprastha Institute of Information Technology Delhi, India.
Nanyang Technological University, Singapore.
Dr. Björn Gambäck.
Norwegian University of Science and Technology, Norway.
Indian Institute of Information Technology, Sri City, India.
William Scott Paka.
Indraprastha Institute of Information Technology, Delhi.
Indian Institute of Information Technology, Sri City, India.
Terms & Conditions
By submitting results to this competition, you consent to the public release of your scores at the SemEval workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value rests with the task organizers.
You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgment that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.
You further agree that your system may be named according to the team name provided at the time of submission, or to a suitable shorthand as determined by the task organizers.
By downloading the data or accessing it in any manner, you agree not to redistribute the data except for non-commercial, academic-research purposes. The data must not be used for surveillance, or for analyses or research that isolate a group of individuals or any single individual, for any unlawful or discriminatory purpose.
For any queries contact us on Email: firstname.lastname@example.org