COSC7336 - Community Question & Answering

Organized by UH_RiTUAL


The Competition

Given a new question Q (the original question) and the set of the first ten related questions retrieved from the forum by a search engine, each associated with the first ten comments appearing in its thread, the goal is to rank the 100 comments according to their relevance to the original question. We want the “Good” comments to be ranked above the “PotentiallyUseful” and “Bad” comments, both of which are treated simply as bad for evaluation purposes. Although the systems are supposed to rank all 100 comments, we take an application-oriented view in the evaluation, assuming that users would like good comments concentrated in the first ten positions. We believe users care much less about what happens lower in the ranking (e.g., after the 10th position), as they typically do not ask for the next page of results in a search engine such as Google or Bing. This is reflected in our primary evaluation score, MAP, which we restrict to the top ten results.
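For concreteness, here is a minimal sketch of how MAP truncated at the top ten results can be computed, assuming binary relevance ("Good" is relevant, everything else is not). The exact normalization used by the provided scorer may differ, so treat this as illustrative only:

```python
# Illustrative sketch of MAP truncated at the top ten results.
# Assumes binary relevance: "Good" is relevant; "PotentiallyUseful"
# and "Bad" are not. The official scorer's normalization may differ.

def average_precision_at_k(ranked_labels, k=10):
    """AP for one question; ranked_labels is ordered best score first."""
    n_relevant = sum(1 for label in ranked_labels if label == "Good")
    if n_relevant == 0:
        return 0.0
    hits, precision_sum = 0, 0.0
    for i, label in enumerate(ranked_labels[:k], start=1):
        if label == "Good":
            hits += 1
            precision_sum += hits / float(i)  # precision at position i
    return precision_sum / min(n_relevant, k)

def map_at_k(rankings, k=10):
    """Mean AP over the per-question label rankings."""
    return sum(average_precision_at_k(r, k) for r in rankings) / len(rankings)

# Example: one question, ten comments ranked by the system's score.
print(map_at_k([["Good", "Bad", "Good", "Bad", "Bad",
                 "Bad", "Bad", "Bad", "Bad", "Bad"]]))  # (1/1 + 2/3) / 2
```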

Evaluation Criteria

To upload a submission you will need to create an account with CodaLab. You'll then be able to register under the Participate tab after accepting the terms and conditions.

Once you've created an account and registered, you can begin submitting your output for evaluation. You can also run the evaluation locally using the script provided in the Dropbox folder; you will need Python 2.7 in your environment. More details are found within the script, along with snippets of what the truth.relevancy file will contain and what the scorer expects from your submission.predictions file.

Submissions

To complete your system submission, upload a single zip file containing a text file (submission.predictions) with your system's predictions and your source code.

Data Format

The scorer takes as input a "GOLD_FILE" and a "PREDICTIONS_FILE". Both files should contain one line per question–answer pair in the following format:

"Question_ID"     "Answer_ID"     "RANK"     "SCORE"     "LABEL"
where a tab character is used as the separator.

The file should be sorted by "Question_ID" and then by "Answer_ID" (this is already the order in the provided XML files, so no additional sorting is needed). "RANK" is a positive integer reflecting the rank of the answer with respect to the question. In fact, the value of "RANK" is not used in scoring (any integer may be put there); it is included only for better readability of the "GOLD_FILE". "SCORE" is a real number reflecting the relevance of the answer with respect to the question; a higher value means higher relevance. In the "PREDICTIONS_FILE", this value is used to rank the answers (in descending order) for each question, and is thus what determines MAP.
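As an illustration, the snippet below writes a predictions file in this format. The IDs, scores, and label values here are hypothetical placeholders standing in for your system's output; check the snippets in the Dropbox folder for the exact label values the scorer expects:

```python
# Hypothetical example of writing submission.predictions.
# `scores` stands in for your system's output: a (Question_ID, Answer_ID)
# pair mapped to a real-valued relevance score.
scores = {
    ("Q1", "Q1_C1"): 2.73,
    ("Q1", "Q1_C2"): -0.41,
}

with open("submission.predictions", "w") as out:
    # Sort by Question_ID, then by Answer_ID, as described above.
    for qid, aid in sorted(scores):
        score = scores[(qid, aid)]
        # RANK is not used in scoring, so a constant placeholder is fine.
        # The label values below are an assumption; see the provided
        # snippets for what the scorer actually expects in LABEL.
        label = "true" if score > 0 else "false"
        out.write("\t".join([qid, aid, "0", str(score), label]) + "\n")
```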

It’s important that the zip file contains only the two files mentioned above at its top level; a zip containing a folder with the contents inside it will cause the evaluation script to fail. Please name your zip file `cosc7336-assign3-student-name.zip`.
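One way to guarantee a flat archive is to add each file explicitly instead of zipping a directory, as in this sketch (`system.py` is a placeholder for your actual source file):

```python
import os
import zipfile

# Placeholder file names; substitute your own source file(s).
files = ["submission.predictions", "system.py"]

# Writing each file with a bare arcname keeps the archive flat,
# i.e. no enclosing folder that would break the evaluation script.
with zipfile.ZipFile("cosc7336-assign3-student-name.zip", "w") as zf:
    for path in files:
        zf.write(path, arcname=os.path.basename(path))
```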
NOTE: CodaLab is an open-source framework for running competitions. Submissions are ranked according to system accuracy, and the ranking is public, so it is important that the username you choose does not disclose your identity. So that we can identify which student gets credit for which system submission, please note in your report, and in your source code, the CodaLab username you used for your submission.
After successfully uploading a submission, be sure to click on the 'Submit to Leaderboard' button so that you can see your results against everyone else's.

Testing

To test against the gold dataset, submit your predictions in the Testing tab.

Terms and Conditions

Cheating, plagiarism, and other forms of academic dishonesty will not be tolerated. This includes copying code from the internet without the written consent of the author. Visit the University's Academic Honesty Policy for more information. By accepting these terms and conditions, you agree to the consequences if found guilty of any transgressions. Feel free to contact us if you have any doubts.

Testing

Start: Sept. 19, 2017, midnight UTC

Competition Ends

Dec. 15, 2017, 5 a.m. UTC
