RumourEval-2017, subtask B (closed)

Organized by leondz

Schedule

Development starts: Aug. 1, 2016, midnight UTC

Testing starts: Jan. 16, 2017, midnight UTC

Competition ends: Feb. 2, 2017, midnight UTC

Welcome!

The goal of this subtask is to predict the veracity of a given rumour. The rumour is presented as a tweet reporting an update associated with a newsworthy event, but deemed unsubstantiated at the time of release. Given such a tweet/claim, and a set of other resources provided, systems should return a label describing the anticipated veracity of the rumour as true or false. The ground truth for this task is established manually by journalist members of the team, who identify official statements or other trustworthy sources of evidence that resolve the veracity of the given rumour.

Participants in this subtask can choose between two variants. In the first -- the closed variant -- the veracity of a rumour must be predicted solely from the tweet itself. In the second -- the open variant -- additional context is provided as input to veracity prediction systems; this context consists of snapshots of relevant sources retrieved immediately before the rumour was reported, including a snapshot of an associated Wikipedia article, a Wikipedia dump, news articles from digital news outlets retrieved from NewsDiffs, and preceding tweets from the same event. Critically, no external resources may be used that contain information from after the rumour's resolution. To control this, we will specify precise versions of the external information that participants may use. This is important to ensure that time sensitivity is built into the task of veracity prediction.

We take a simple approach to this task, using only true/false labels for rumours. In practice, however, many claims are hard to verify; for example, there were many rumours concerning Vladimir Putin's activities in early 2015, many of them wholly unverifiable. Therefore, we also expect systems to return a confidence value in the range [0, 1] for each rumour; if the rumour is unverifiable, a confidence of 0 should be returned.

Evaluation Criteria

A submission should be a single JSON file containing one dictionary, whose keys are tweet ids from the evaluation data and whose values are two-item lists.

The first item in each list is the string "true" or "false"; the second is a float in [0, 1] representing confidence. If you think it is impossible to determine the veracity of a rumour, return a confidence of zero.
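The format above can be sketched as follows. This is a minimal, unofficial example of building such a submission file; the tweet ids and the output filename are placeholders, not taken from the actual evaluation data.

```python
import json

# Placeholder predictions: tweet id -> [label, confidence].
# The ids below are illustrative only.
predictions = {
    "552783238415265792": ["true", 0.85],
    "552783667052167168": ["false", 0.0],  # confidence 0 = unverifiable
}

# Basic sanity checks before writing the file.
for tweet_id, (label, confidence) in predictions.items():
    assert label in ("true", "false")
    assert 0.0 <= confidence <= 1.0

# Write the submission as a single JSON dictionary.
with open("subtaskB_answers.json", "w") as f:
    json.dump(predictions, f)
```

Loading the resulting file with `json.load` should return exactly one dictionary in the shape described above.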

Test data should be downloaded from the SemEval webpage: http://alt.qcri.org/semeval2017/task8/index.php?id=data-and-tools.

Terms and Conditions

Use of the data indicates acceptance of the Twitter terms of service.
