Recognizing Families In the Wild (FIW) Data Challenge Workshop - ACM MM 2017 Forum


> how to submit

Why is the max number of submissions per day 999, but only 3 in total? In addition, I submitted 3 times and each finished without error, but I can't see my score in the results sheet.

Posted by: naiven @ June 9, 2017, 5:03 p.m.

I apologize for the delayed response. For some reason, I do not get notified when a new forum thread is started, so I overlooked this earlier (I will check my preferences; hopefully there is an option to turn notifications on).

Your submissions are successfully being scored (all but the first); however, there is something glitchy with the results table display (sometimes it displays properly, then it doesn't, then it shows old results).

I will get this handled ASAP (i.e., by the end of the weekend). In the meantime, if you submit and the scores don't display, email me and I will send the scores back to you as an attachment.

Also, once we get this working (which I am almost certain is a Codalab issue, but either way I am getting right on it), all submissions will surely be displayed (so please don't worry; for now, email me and I will send scores back as needed, and once it is fixed all will be posted :)

My apologies for this, and trust me, I have already inquired about getting this fixed.

Thank you.

Joe

Posted by: jvision @ June 16, 2017, 7:52 p.m.

@naiven I increased the submission limit per day. Thank you for bringing this to my attention. I have no problem at all with no limit here, as how else do you know whether you are improving from one run to the next? This is, of course, a different matter when it comes to testing, but for validation, I encourage more experiments and, thus, more submissions :)

Also, the scoring script can be accessed via https://github.com/huskyjo/RFIW2017/tree/master/verification
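For anyone who wants to sanity-check results offline before the official script is run: verification tasks like this are typically scored as plain accuracy over predicted KIN / NOT-KIN labels for each image pair. The sketch below is a hypothetical stand-in (the function name and label format are my assumptions, not taken from the linked repo):

```python
# Hypothetical sketch of pairwise verification scoring: accuracy over
# binary KIN (1) / NOT-KIN (0) labels, one label per image pair.
def verification_accuracy(ground_truth, predictions):
    """ground_truth, predictions: equal-length lists of 0/1 labels."""
    if len(ground_truth) != len(predictions):
        raise ValueError("label lists must be the same length")
    correct = sum(g == p for g, p in zip(ground_truth, predictions))
    return correct / len(ground_truth)

# Example: 3 of 4 pairs predicted correctly.
print(verification_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

Check the actual repository script for the exact submission file format it expects.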

Expect more code to be added to the repo very soon. I will let everyone know when it is.

Thanks and I hope this addresses your concerns and helps in general. Keep me posted.

Best,
Joe

Posted by: jvision @ June 16, 2017, 8:08 p.m.

Thanks for your reply. First, I apologize for not having looked at the differences between image pairs in the validation phase and the training phase, but I found that my offline hold-out validation mostly scores 0.7–0.9. I think this hints that the families in the validation phase may seldom have been seen in the training phase; in other words, the most challenging part of this competition is how to overcome overfitting. Anyway, I am curious about the image split strategy, if it is okay to publish.

Regards,
naiven

Posted by: naiven @ June 18, 2017, 7:46 a.m.

Thank you for the feedback. We are looking into this now (overlap between train and validation should not exist). We will let you know the verdict once checked.

In any case, and in the end, both train and validation can be used to train the models run on the "blind" test set. Still, if there is overlap between training and validation, it will be addressed (I do not believe there is, but let us check before making any claims).

Also, we will provide details on how the splits were done. Let me check with the others about this, but this information should be okay to release sooner than later. I will send out an email when this information is available.

I really appreciate your engagement. We want to make this as productive and enjoyable as possible. Every bit of feedback helps ensure things are on par with the expectations of you and the other participants. Please continue the dialog as items arise.

Best regards,
Joe

Posted by: jrob.husky @ June 18, 2017, 3:42 p.m.

Hi naiven,

The split is based on families: if a family is used in TRAIN, it won't appear in VALIDATION or TEST. When you chose your hold-out set, I am not sure whether some images ended up in both the training and hold-out pairs; if so, that might cause problems.
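To illustrate the idea (this is a hypothetical sketch, not the organizers' actual split code; the function name, family IDs, and fractions are my assumptions): a family-disjoint split assigns every family wholly to one of TRAIN / VALIDATION / TEST, so no family spans two splits.

```python
import random

# Hypothetical family-disjoint split: each family goes entirely to one of
# TRAIN / VAL / TEST, so no family appears in more than one split.
def split_by_family(family_ids, fractions=(0.6, 0.2, 0.2), seed=0):
    """family_ids: list of unique family identifiers."""
    rng = random.Random(seed)
    families = list(family_ids)
    rng.shuffle(families)
    n = len(families)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return (families[:n_train],
            families[n_train:n_train + n_val],
            families[n_train + n_val:])

train, val, test = split_by_family([f"F{i:04d}" for i in range(1000)])
# The three splits are pairwise disjoint by construction.
assert not set(train) & set(val) and not set(val) & set(test)
```

Making your own offline hold-out at the family level (rather than the image level) should give validation scores closer to the official ones.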

Best,
Yue

Posted by: wuyuebupt @ June 18, 2017, 3:54 p.m.