Outdoor Semantic Segmentation Challenge (DAGM GCPR 2021) Forum

> Is there any way that I could evaluate results on the test set for further article publishing?

I'd like to use the TAS500 v1.1 dataset for scientific results. Is there a place where I could submit my test set predictions, just like on Cityscapes (https://www.cityscapes-dataset.com/submit/)?

Best,

Posted by: slow @ June 14, 2022, 6:34 p.m.

Hi,

thank you for asking!
As of now, we haven't set up a results table for the TAS500 v1.1 dataset apart from the period of this codalab competition.
The dataset didn't draw enough attention to warrant a permanent submission table.
You can message me (peter.mortimer AT unibw.de) about your method if it is well tuned for semantic segmentation in this domain and you think it's worth publishing.

We could then maybe consider creating an "endless" codalab competition, where participants can always submit their most recent version.
The SemanticKITTI group has been doing something similar to this (https://competitions.codalab.org/competitions/20331).

Thank you and best regards,
Peter

Posted by: pemo @ June 15, 2022, 11:06 a.m.

Hey Peter, I sent you an email a few days ago. Have you received it? Let me know if I should try again.

Thanks,

Posted by: slow @ June 22, 2022, 4:23 p.m.

Hello,

we have migrated the Outdoor Semantic Segmentation Challenge over to the new Codalab server instance and extended the competition runtime indefinitely.
This should make it possible to get your predictions on the test set evaluated.

Here's the link to the new competition page: https://codalab.lisn.upsaclay.fr/competitions/5637

Best regards,
Peter

Posted by: pemo @ June 29, 2022, 8:10 a.m.

Hey Peter, thank you very much for making it available. However, the page seems to be offline, and I can't find the competition when I search for it on the platform.

Posted by: slow @ June 30, 2022, 10:11 a.m.

Thank you for the notice.
I hadn't formally published the competition yet.
The website should now be available.

Posted by: pemo @ June 30, 2022, 2:30 p.m.

Hey, I'm failing to submit my results. I have tried several different formats, but I'm confused about the right format for the prediction files.

The evaluation details say to zip a folder with PNG images, which I tried without success. The submission description says to save the prediction maps as MATLAB files, and I also tried several variants of that.

Can you describe the right way to do it? Thanks.

Posted by: slow @ July 3, 2022, 2:28 p.m.

Hi,

the format for the submission is rather specific.
You can look into the sample submission created in the get_starting_kit available with the challenge: https://codalab.lisn.upsaclay.fr/competitions/5637#learn_the_details-get_starting_kit

You want to create a zip file containing a compressed .mat file for each image in the test set.

Here's the part of the get_starting_kit that takes care of converting your dictionary containing each prediction map.
Notice that the predictions are first saved in a temporary folder and then written into the ZipFile object.
The "do_compression=True" argument to savemat is important so that the upload limit on Codalab is not exceeded.

import os, shutil, zipfile
import os.path as osp
import numpy as np
from scipy import io

# create the zip file for submission
# (`submission` maps each test image filename to its prediction map)
with zipfile.ZipFile('test_pred.zip', 'w') as zip_handle:
    if not osp.exists('tmp'):
        os.mkdir('tmp')
    for key in submission.keys():
        io.savemat('tmp/{}.mat'.format(key[:-4]),
                   {'data': submission[key].astype(np.uint8)},
                   do_compression=True)
        zip_handle.write('tmp/{}.mat'.format(key[:-4]), arcname=key[:-4] + '.mat')
    shutil.rmtree('tmp')
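In case it helps, here is a minimal sketch of how the `submission` dictionary used above could be assembled. The filenames, image size, and class count are all placeholders for illustration, not values from the starting kit:

```python
import numpy as np

# Hypothetical example: keys are test image filenames (the '.png'
# suffix is stripped by key[:-4] above), values are HxW maps of
# per-pixel class indices predicted by your model.
test_filenames = ['frame_000.png', 'frame_001.png']  # placeholder names
submission = {}
for name in test_filenames:
    # stand-in for your model's output; 10 classes and 512x1024 are placeholders
    pred = np.random.randint(0, 10, size=(512, 1024))
    submission[name] = pred
```

The real filenames come from the test split of the dataset, and the arrays should match the resolution of the test images.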

Posted by: pemo @ July 4, 2022, 2:51 p.m.

Here also a link to the example prediction used as the initial baseline of the competition:

https://drive.google.com/file/d/1gCnMJ3OVXwcdnAWkXbD82Xn0jZvM11Gz/view?usp=sharing
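As a quick sanity check before uploading, you can also round-trip a single array through savemat/loadmat. This is just a generic scipy check, not part of the starting kit:

```python
import numpy as np
from scipy import io

# write a compressed .mat file and read it back to confirm the format
arr = np.arange(12, dtype=np.uint8).reshape(3, 4)
io.savemat('check.mat', {'data': arr}, do_compression=True)
loaded = io.loadmat('check.mat')['data']
print((loaded == arr).all())  # prints True
```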

Let me know if you need any additional help in getting the submission process working.

Best regards,
Peter

Posted by: pemo @ July 4, 2022, 2:53 p.m.

Excellent! It worked like a charm!

Posted by: slow @ July 4, 2022, 7:34 p.m.