Community Question Answering (CQA) forums are gaining popularity online. They are seldom moderated and are fairly open, and thus impose few restrictions, if any, on who can post and who can answer a question. On the positive side, this means that one can freely ask any question and expect some good, honest answers. On the negative side, it takes effort to go through all possible answers and to make sense of them. For example, it is not unusual for a question to have hundreds of answers, which makes it very time-consuming for a user to inspect and winnow them. The challenge we propose may help automate the process of finding good answers to new questions in a community-created discussion forum (e.g., by retrieving similar questions in the forum and identifying the posts in the answer threads of those questions that answer the question well).
We build on the success of the previous editions of our SemEval tasks on CQA, SemEval-2015 Task 3 and SemEval-2016 Task 3, and present an extended edition for SemEval-2017, which incorporates several novel facets.
This CodaLab competition is for Subtask B of SemEval-2017 Task 3: the Question-Question Similarity subtask.
Given an original question and the set of ten related questions associated with it, rerank the related questions according to their similarity with respect to the original question. In this case, we consider both the "PerfectMatch" and "Relevant" questions as good (i.e., we do not distinguish between them and treat both as "Relevant"), and they should be ranked above the "Irrelevant" questions. The gold labels for this subtask are contained in the RELQ_RELEVANCE2ORGQ field of the XML file; see the README file for a detailed explanation of their meaning. Again, this is not a classification task; it is a ranking task.
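To make the label handling concrete, here is a minimal Python sketch for reading the gold labels and collapsing "PerfectMatch" into "Relevant". The element and attribute names OrgQuestion, RelQuestion, ORGQ_ID, and RELQ_ID are assumptions based on the task's naming conventions; check them against the README and the actual XML files.

```python
import xml.etree.ElementTree as ET

def load_gold_labels(xml_path):
    """Return {original question ID -> list of (related question ID, label)}."""
    gold = {}
    tree = ET.parse(xml_path)
    # Element/attribute names below are assumed; verify against the data.
    for orgq in tree.getroot().iter("OrgQuestion"):
        orgq_id = orgq.get("ORGQ_ID")
        for relq in orgq.iter("RelQuestion"):
            label = relq.get("RELQ_RELEVANCE2ORGQ")
            # For Subtask B, PerfectMatch and Relevant are both treated as good.
            if label == "PerfectMatch":
                label = "Relevant"
            gold.setdefault(orgq_id, []).append((relq.get("RELQ_ID"), label))
    return gold
```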
More information on the task and all the subtasks can be found on the SemEval Task website.
On the Leaderboard, three scores will be provided: MAP, Average Recall, and MRR. The official evaluation measure, by which all systems will be evaluated and ranked, is mean average precision (MAP) over the 10 ranked related questions.
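For reference, here is a minimal sketch of the MAP computation described above: average precision is computed over each original question's 10 related questions in the order a system ranks them (with "Relevant"/"PerfectMatch" counted as relevant), and then averaged across original questions. The official scorer may handle edge cases (e.g., queries with no relevant related question) differently, so treat this only as an illustration.

```python
def average_precision(labels):
    """labels: booleans for the 10 related questions in system-ranked order."""
    hits, precisions = 0, []
    for rank, is_relevant in enumerate(labels, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(ranked_labels):
    """ranked_labels: {original question ID -> ranked list of booleans}."""
    scores = [average_precision(labels) for labels in ranked_labels.values()]
    return sum(scores) / len(scores) if scores else 0.0
```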
Note: the dataset is already formatted for training on this subtask. For each original question, you have to consider the 10 related questions associated with it; they appear consecutively in the dataset. The format required for your systems' output is detailed in the scorer and format-checker README files, which can be found here.
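As an illustration only, the sketch below writes one prediction line per related question. The tab-separated column layout used here (original question ID, related question ID, rank, score, "true"/"false" label) is an assumption and must be checked against the format-checker and scorer READMEs before submitting.

```python
def write_predictions(path, predictions):
    """predictions: iterable of (orgq_id, relq_id, rank, score, is_relevant).

    The column layout is hypothetical; confirm it with the format checker.
    """
    with open(path, "w", encoding="utf-8") as out:
        for orgq_id, relq_id, rank, score, is_relevant in predictions:
            out.write("{}\t{}\t{}\t{}\t{}\n".format(
                orgq_id, relq_id, rank, score,
                "true" if is_relevant else "false"))
```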
The name of the development file you submit needs to be SemEval2017-Task3-CQA-QL-dev.xml.subtaskB.pred, and it needs to be zipped.
The name of the test file you submit needs to be SemEval2017-task3-English-test.xml.subtaskB.pred, and it needs to be zipped.
By participating in this competition and submitting results in CodaLab, you agree to the public release of your results in the proceedings of SemEval-2017. Furthermore, you accept that the choice of evaluation metric is made by the task organizers, who have the right to decide the winner of the competition and to disqualify teams that do not follow its rules.
Development phase: starts Aug. 1, 2016, midnight
Testing phase: starts Jan. 9, 2017, midnight
Competition ends: Jan. 31, 2017, noon
# | Username | Score |
---|---|---|
1 | sagustian | 0.796 |
2 | Mr_Woo | 0.769 |
3 | naman | 0.751 |