Hello,
I just made a test submission with random predictions and obtained a misclassification rate of 150%. How is that possible? Could you please check?
Best,
Boris
Hello, what's the prediction format in your submission?
I always get 200% misclassified. My submission format is as follows:
complementary_dominant
where complementary and dominant are numbers (1-7) or N, connected with '_'.
Thank you!
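To make the format above concrete, here is a minimal sketch of writing a submission file, assuming predictions are held as a list of (complementary, dominant) pairs; the filename and the example values are purely illustrative:

```python
# Write one 'complementary_dominant' line per validation image.
# Each label is a digit 1-7 or the string 'N' (illustrative values below).
predictions = [(1, 5), ("N", "N"), (7, 2)]

with open("submission.txt", "w") as f:
    for comp, dom in predictions:
        f.write(f"{comp}_{dom}\n")
```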
Posted by: charrin @ Jan. 18, 2017, 4:35 p.m.
And how many validation images do you have? I only have 7004 images.
Posted by: charrin @ Jan. 18, 2017, 4:54 p.m.
I have the same number of validation samples (rows in the .txt file) and the same format as yours.
Posted by: bknyaz @ Jan. 18, 2017, 5:09 p.m.
Hi!
It seems we had the same issue as with the training set: four duplicates were mixed in. I have added files containing the necessary info to the Google Drive folder for validation. The .txt file should have 7000 lines. However, if the result you get is still over 100% even after that, please let me know and I will look into this issue further.
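A quick sanity check for the line count mentioned above might look like this (the path passed in is up to you; `submission.txt` is a hypothetical name):

```python
def count_lines(path):
    """Return the number of prediction lines in a submission file."""
    with open(path, "r") as f:
        return sum(1 for _ in f)
```

For a complete submission, `count_lines("submission.txt")` should return 7000 after the duplicates were removed.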
I am sorry for the inconvenience and thank you for being so patient and participating in our challenge.
Posted by: icv @ Jan. 18, 2017, 8:08 p.m.
Sorry, I still got a misclassification of 199% for 7000 random predictions.
Hi!
Is everything ok with the submissions now?
Posted by: icv @ Jan. 20, 2017, 9:29 a.m.
Hi,
Now it seems fine, thanks.
I got a misclassification of 100.00% when all validation predictions were manually set to 5_5. This suggests that none of the validation images is labeled 5_5.
I also kept getting 100% misclassification, whether with all N_N or all 1_1.
But I think I discovered the problem: each line in the .txt file is expected to be 'N_N\r\n' (or '1_1\r\n'), not just 'N_N\n'. I don't know whether that is a feature or a bug.
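If the scorer really requires Windows-style line endings, a sketch of producing them explicitly from Python (the filename and label values are illustrative):

```python
predictions = ["1_5", "N_N", "7_2"]  # illustrative label strings

# newline="" disables Python's newline translation, so the literal
# "\r\n" written here is exactly what ends up in the file.
with open("submission.txt", "w", newline="") as f:
    for line in predictions:
        f.write(line + "\r\n")
```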
Thank you very much! 'N_N\r\n' (or '1_1\r\n') may be correct in the .txt file.
There are several problems with the evaluation.
I strongly recommend that the organizers provide a detailed description of the submission file format.
Open-sourcing the evaluation code would also be a good choice.
Thanks!
Posted by: csmath @ Jan. 20, 2017, 2:50 p.m.
Hi!
Thank you for trying so hard to figure out how to get a valid evaluation. I wrote the files in Python by simply adding "\n" to the end of each line. However, I will download your submissions and test until I have a working script.
Very sorry for the inconvenience and thank you for putting up with all these issues.
Posted by: icv @ Jan. 20, 2017, 4:10 p.m.