SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals (Forum)


> task2: questions about "id"

I've tried the baseline method (the sequence-labelling method) using the provided code. The test data at "arielsho/Task-5_Baseline/task2_test.csv" contains one sample, whose ant_start_id, ant_end_id, con_start_id and con_end_id are 69, 108, 0 and 67. However, the code produces 70, 109, 0 and 68. We also noticed that the baseline code only uses the sentence, antecedent and consequent as inputs and does not use the ids. I'm wondering whether the evaluation will use the same method as the provided baseline or compare against the given ids.

Posted by: Chuck0314 @ Nov. 20, 2019, 1:57 p.m.

The sample training and test data files are only there to show the format of the input data, so we put just one sample in each. Please run the code with your own training and test data.
You're right that we did not use 'sentenceID' in the baseline model; in the evaluation stage, however, we will check the 'sentenceID' to make sure it matches our reference data. In the current practice phase, the labels and sentenceID in our reference data for task 2 are exactly the ones from 'train.csv'. So if you submit results identical to the labels in 'train.csv', you will get F1 = 1.

Posted by: Ariel_yang @ Nov. 21, 2019, 1:02 a.m.

Thanks. I still have a question: in the current practice phase, do I submit the code or the results? I see the form asks for a project URL.

Posted by: Chuck0314 @ Nov. 21, 2019, 7:39 a.m.

The baseline method also doesn't use "antecedent_startid", "antecedent_endid", "consequent_startid" or "consequent_endid". Instead, it derives these four gold (true) labels by locating the antecedent (or consequent) within the sentence using the "get_coordinate" function, rather than reading the ids directly from the sample.
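For reference, locating a span this way is just a substring search; a one-character difference like the one reported above is typically a convention mismatch over whether the end index is inclusive or exclusive. Below is a minimal sketch of the idea; `get_span` is a hypothetical helper, not the organizers' actual `get_coordinate` function, and the sentence is made up for illustration.

```python
def get_span(sentence: str, fragment: str):
    """Locate `fragment` inside `sentence` and return character indices.

    Hypothetical sketch, NOT the organizers' get_coordinate. It returns
    both end conventions, because mixing them up shifts indices by one,
    which is exactly the kind of discrepancy discussed in this thread.
    """
    start = sentence.find(fragment)
    if start == -1:
        raise ValueError("fragment not found in sentence")
    end_inclusive = start + len(fragment) - 1  # index of the last character
    end_exclusive = start + len(fragment)      # one past the last character
    return start, end_inclusive, end_exclusive


sentence = "If it rains tomorrow, the match will be cancelled."
antecedent = "If it rains tomorrow"
start, end_incl, end_excl = get_span(sentence, antecedent)
# sentence[start:end_excl] recovers the antecedent with Python slicing,
# while (start, end_incl) is the inclusive-end pair.
```

Whichever convention you adopt, the key point from the organizers' reply applies: keep your computed indices consistent with the spans you submit.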

Posted by: Chuck0314 @ Nov. 21, 2019, 7:49 a.m.

Thanks for your questions! To submit results, just upload your zip file according to the instructions under 'Evaluation' on the front page, and it will be evaluated automatically. The 'project URL' is not required for submission, but you can always put a link to your project there. Please click 'submit' and then upload your file.
We also provide the 'antecedent' and 'consequent' (spans of the sentences) in addition to 'antecedent_startid', 'antecedent_endid', 'consequent_startid' and 'consequent_endid', so either computing the indices yourself or reading them directly from our data file is fine, as long as you make sure they are consistent : )

Posted by: Ariel_yang @ Nov. 21, 2019, 3:44 p.m.

Thanks. OK, I got it.

Posted by: Chuck0314 @ Nov. 22, 2019, 6:49 a.m.