OffensEval 2019 (SemEval 2019 - Task 6)

Organized by Shervin


OffensEval 2019: Identifying and Categorizing Offensive Language in Social Media (SemEval 2019 - Task 6)

This is the website for the OffensEval 2019 shared task organized at SemEval 2019.

Motivation

Offensive language is pervasive in social media. Individuals frequently take advantage of the perceived anonymity of computer-mediated communication to engage in behaviour that many of them would not consider in real life. Online communities, social media platforms, and technology companies have been investing heavily in ways to cope with offensive language and prevent abusive behaviour in social media.

One of the most effective strategies for tackling this problem is to use computational methods to identify offense, aggression, and hate speech in user-generated content (e.g. posts, comments, microblogs, etc.). This topic has attracted significant attention in recent years, as evidenced by recent publications (Waseem et al., 2017; Davidson et al., 2017; Malmasi and Zampieri, 2018; Kumar et al., 2018) and workshops such as AWL and TRAC.

In OffensEval, we break offensive content down into three sub-tasks, taking the type and target of offenses into account.

Sub-tasks

Sub-task A - Offensive language identification

Sub-task B - Automatic categorization of offense types

Sub-task C - Offense target identification

Data

The data is retrieved from social media and distributed in comma-separated format. More information will be available soon.
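Since the exact schema has not been released yet, the following is only a minimal sketch of reading such a comma-separated file with Python's standard library; the column names (id, tweet, label) and label values used here are illustrative assumptions, not the announced format.

```python
import csv
import io

# Hypothetical sample in the comma-separated format described above.
# Column names and labels are assumptions for illustration only.
sample = (
    "id,tweet,label\n"
    "1,an example post,OFF\n"
    "2,another example post,NOT\n"
)

# csv.DictReader maps each data row to a dict keyed by the header row.
rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    print(row["id"], row["label"])
```

Once the official data description is published, the reader above would only need its field names adjusted.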

Dates

The dates for data release and submissions will be announced soon. The information on the "phases" page will be updated accordingly.

Task Organizers

Marcos Zampieri (University of Wolverhampton, UK)

Shervin Malmasi (Amazon, USA)

Preslav Nakov (Qatar Computing Research Institute, Qatar)

Sara Rosenthal (IBM Research, USA)

Noura Farra (Columbia University, USA)

Ritesh Kumar (Bhim Rao Ambedkar University, India)

References

Davidson, T., Warmsley, D., Macy, M. and Weber, I. (2017) Automated Hate Speech Detection and the Problem of Offensive Language. Proceedings of ICWSM.

Kumar, R., Ojha, A.K., Malmasi, S. and Zampieri, M. (2018) Benchmarking Aggression Identification in Social Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC). pp. 1-11.

Malmasi, S. and Zampieri, M. (2018) Challenges in Discriminating Profanity from Hate Speech. Journal of Experimental & Theoretical Artificial Intelligence. Volume 30, Issue 2, pp. 187-202. Taylor & Francis.

Waseem, Z., Davidson, T., Warmsley, D. and Weber, I. (2017) Understanding Abuse: A Typology of Abusive Language Detection Subtasks. Proceedings of the Abusive Language Online Workshop.

Evaluation Criteria

Classification systems will be evaluated using the macro-averaged F1-score.
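The macro-averaged F1-score is the unweighted mean of the per-class F1 scores, so every class counts equally regardless of how frequent it is. A minimal self-contained sketch of the metric follows; the label names OFF/NOT in the example are illustrative assumptions, and scikit-learn's f1_score(..., average='macro') computes the same quantity.

```python
def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores over all observed labels."""
    labels = sorted(set(gold) | set(pred))
    f1_scores = []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Illustrative labels only; OFF/NOT are assumed, not the official label set.
gold = ["OFF", "NOT", "NOT", "OFF"]
pred = ["OFF", "NOT", "OFF", "NOT"]
print(f"{macro_f1(gold, pred):.3f}")  # 0.500
```

Because every class contributes equally to the average, a system that ignores a rare class is penalized more heavily than under accuracy or micro-averaged F1.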

Submission format information is available from the 'Participate' tab above.

 

Practice

Start: Sept. 10, 2018, midnight

Description: Submit practice predictions on the practice set. Use this to check your file format. A sample submission is available for download from the instructions page.

Evaluation

Start: Jan. 10, 2019, midnight

Description: Submit predictions for the test set.

Post-evaluation

Start: Feb. 1, 2019, midnight

Description: Evaluation after the official competition ends; submit additional test set predictions.

Competition Ends

Never
