Hello,
my submissions for km-en fail: I have 1000 entries, but according to the stack trace, 990 entries are expected.
That seems wrong, because the data also contains 1000 source and translation sentences.
My error trace from CodaLab is below:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
  File "/tmp/codalab/tmp4JvYmc/run/program/evaluation.py", line 156, in <module>
    lp_str, disk_footprint, model_params, scores = getScores()
  File "/tmp/codalab/tmp4JvYmc/run/program/evaluation.py", line 143, in getScores
    disk_footprint, model_params, lp_str_pred, predictions = parse_submission(submission_file.readlines(), False)
  File "/tmp/codalab/tmp4JvYmc/run/program/evaluation.py", line 109, in parse_submission
    lp_str, testset_size_kmen, len(lp_segments)
AssertionError: Incorrect number of predicted scores for km-en, expecting 990, given 1000.
Dear gregor_geigle,
The test set for this language pair does indeed contain 990 sentences.
If you cloned `https://github.com/sheffieldnlp/mlqe-pe` more than a week ago,
please refresh your version as the data for this language pair has since been updated.
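If it helps, here is a rough way to sanity-check that your predictions file has one score per test sentence before you submit. This is only a sketch; the file names below are placeholders, so point them at the km-en test file in your refreshed clone and at your own submission file.

def count_lines(path):
    """Count non-empty lines in a text file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

# Placeholder paths: replace with the km-en test file from the
# refreshed repository and with your own predictions file.
test_size = count_lines("test.kmen.tsv")
pred_size = count_lines("predictions.txt")

# Note: if the test file has a header row, subtract one from test_size.
print(f"test sentences: {test_size}, predicted scores: {pred_size}")
assert test_size == pred_size, (
    f"mismatch: {pred_size} predictions for {test_size} test sentences"
)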
I have deleted your submission so that it won't count against your number of submissions for today.
Best,
Fred.
Thank you very much for the quick reply. The stale data was indeed the problem.
My multilingual submission also failed for the same reason. Could you remove that one, too?
Thank you in advance.
Best,
Gregor
Hi,
I have deleted your submission to the multilingual task.
Best,
Fred.