RumourEval-2017, subtask A

Organized by leondz


Welcome!

Related to the objective of predicting a rumour's veracity, the first subtask deals with the complementary objective of tracking how other sources orient to the accuracy of the rumourous story. A key step in the analysis of the surrounding discourse is to determine how other users in social media regard the rumour. We tackle this analysis by looking at the replies to the tweet that presented the rumourous statement, i.e. the originating rumourous (source) tweet.

We will provide participants with a tree-structured conversation formed of tweets replying to the originating rumourous tweet, where each tweet expresses its own stance with regard to the rumour. We frame this in terms of supporting, denying, querying or commenting on (SDQC) the claim. The goal of this subtask is therefore to label the type of interaction between a given statement (rumourous tweet) and a reply tweet (which may be a direct or a nested reply). Each tweet in the tree-structured thread must be categorised into one of the following four categories:

  • Support: the author of the response supports the veracity of the rumour they are responding to.
  • Deny: the author of the response denies the veracity of the rumour they are responding to.
  • Query: the author of the response asks for additional evidence in relation to the veracity of the rumour they are responding to.
  • Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour they are responding to.
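To make the structure concrete, here is a minimal sketch of a tree-structured reply thread with SDQC labels. The tweet ids, texts, and dictionary layout are purely illustrative and are not the official data format:

```python
# A toy reply thread: each node carries an id, a text, an SDQC label
# (for replies), and a list of nested replies. All values are made up.
thread = {
    "id": "1001",  # originating rumourous (source) tweet
    "text": "Breaking: bridge has collapsed downtown",
    "replies": [
        {"id": "1002", "text": "Confirmed by local news.",
         "label": "support", "replies": []},
        {"id": "1003", "text": "Any photos from the scene?",
         "label": "query",
         "replies": [
             {"id": "1004", "text": "That's false, I just drove over it.",
              "label": "deny", "replies": []},
         ]},
    ],
}

def collect_labels(node):
    """Walk the thread depth-first and collect (tweet id, label) pairs
    for every reply, whether direct or nested."""
    pairs = []
    for reply in node.get("replies", []):
        pairs.append((reply["id"], reply["label"]))
        pairs.extend(collect_labels(reply))
    return pairs

print(collect_labels(thread))
# [('1002', 'support'), ('1003', 'query'), ('1004', 'deny')]
```

Note that every reply in the tree is labelled, not just the direct replies to the source tweet.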

Evaluation Criteria

A submission should be a JSON file consisting of a single dictionary, where each key is a tweet id from the evaluation data and the corresponding value is your system's prediction of the stance: support, deny, query, or comment. Test data can be downloaded from the SemEval webpage, http://alt.qcri.org/semeval2017/task8/index.php?id=data-and-tools.
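As a sketch, a valid submission file can be produced with the standard `json` module. The tweet ids and the output filename below are hypothetical; use the ids from the evaluation data:

```python
import json

# Hypothetical predictions keyed by tweet id (ids here are made up).
predictions = {
    "544283157245767680": "support",
    "544283211844302848": "deny",
    "544283243444703232": "query",
    "544283300541956096": "comment",
}

# Sanity-check that every predicted label is one of the four stances.
VALID_STANCES = {"support", "deny", "query", "comment"}
assert all(label in VALID_STANCES for label in predictions.values())

# Write the single-dictionary JSON file (filename is illustrative).
with open("subtaska.json", "w") as f:
    json.dump(predictions, f)
```

The keys are tweet ids as strings, matching how ids appear in the JSON evaluation data.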

Terms and Conditions

Use of the data indicates acceptance of the Twitter terms of service.

Development

Start: Aug. 1, 2016, midnight UTC

Testing

Start: Jan. 16, 2017, midnight UTC

Competition Ends

Feb. 2, 2017, midnight UTC
