Fakeddit Multimodal Fake News Detection Challenge 2020

Organized by sharonlevy

First phase (current): starts Aug. 26, 2020, midnight UTC
Competition ends: Feb. 16, 2021, 5:44 p.m. UTC

Fakeddit

Fake news has harmed society in politics and culture alike, adversely affecting both online social networks and offline communities and conversations. Automatic machine learning classification models are an efficient way to combat its widespread dissemination. However, a lack of effective, comprehensive datasets has held back fake news research and detection model development: prior fake news datasets do not provide multimodal text and image data, metadata, comment data, and fine-grained fake news categorization at the scale and breadth of our dataset.

We present Fakeddit, a novel multimodal dataset consisting of over 1 million samples from multiple categories of fake news. After several stages of review, the samples are labeled through distant supervision according to 2-way, 3-way, and 6-way classification schemes, providing the fine-grained categorization unique to Fakeddit. See our LREC paper for more details.

The Fakeddit Multimodal Fake News Detection Challenge aims to benchmark progress toward models that can accurately detect specific types of fake news in text and images. Awards will be given to the winning teams.

Evaluation Criteria

Guidelines

The aim of Fakeddit is to provide a fine-grained multimodal fake news detection dataset and advance efforts to combat the spread of misinformation in multiple modalities. As such, we require those using our dataset to adhere to these guidelines:

  1. Only use the "6_way_label" and "clean_title" columns from the public dataset linked on the GitHub page.
  2. Do not use additional paired text/image data.
  3. Only use the multimodal samples from the given dataset (samples that have both text and image); these are contained in the "multimodal_only_samples" folder. See the loading sketch below.
  4. Do not attempt to look up the ground truth labels for our samples on the Internet.

Disregarding these guidelines to improve evaluation scores is unethical and will not help improve future research in the area.
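
For concreteness, here is a minimal sketch of loading the data under these constraints, assuming pandas and the TSV layout of the public release. The path "multimodal_only_samples/multimodal_train.tsv" and the "id" column name are illustrative and should be checked against the actual files.

    import pandas as pd

    # Illustrative path: the multimodal-only training split from the public release.
    df = pd.read_csv("multimodal_only_samples/multimodal_train.tsv", sep="\t")

    # Guideline 1: keep only the permitted columns, plus a sample ID for
    # bookkeeping ("id" is an assumed column name).
    df = df[["id", "clean_title", "6_way_label"]]

    print(df.shape)
    print(df.head())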

Metrics

We evaluate detection models with accuracy: the percentage of text/image pairs in the test set that the model classifies correctly.
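
Concretely, accuracy is the number of correct predictions divided by the size of the test set. A minimal sketch, with placeholder label encodings:

    from typing import Sequence

    def accuracy(y_true: Sequence[int], y_pred: Sequence[int]) -> float:
        """Fraction of test text/image pairs classified correctly."""
        assert len(y_true) == len(y_pred)
        correct = sum(t == p for t, p in zip(y_true, y_pred))
        return correct / len(y_true)

    # Example: 3 of 4 predictions match the ground truth.
    print(accuracy([0, 1, 2, 5], [0, 1, 3, 5]))  # 0.75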

Ranking

The competition ranking is based on this metric: a team with higher accuracy is ranked above a team with lower accuracy.

Terms and Conditions

If you use this dataset, please cite:

    @article{nakamura2019r,
        title={r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection},
        author={Nakamura, Kai and Levy, Sharon and Wang, William Yang},
        journal={arXiv preprint arXiv:1911.03854},
        year={2019}
    }

When submitting your results, create a CSV file with the sample ID in the first column and the predicted label in the second column.
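
A minimal sketch of writing such a file follows. The IDs and labels below are placeholders, and since the instructions do not say whether a header row is expected, this sketch omits one.

    import csv

    # Placeholder values; in practice these come from your model's predictions
    # on the test split.
    sample_ids = ["sample_1", "sample_2", "sample_3"]
    predictions = [0, 3, 5]

    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        # First column: sample ID; second column: predicted label.
        writer.writerows(zip(sample_ids, predictions))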
