ChaLearn LAP. Apparent Personality Analysis: First Impressions (first round) Forum


> Regarding Max Submission Times

Dear organizers,

During the final evaluation phase, the maximum number of submissions in total is 5. I noticed on the leaderboard that some teams may have used more than one ID to submit their final results.

I would like to know which is correct: does each team have 5 submission attempts, or does each member of a team have 5 attempts?

By the way, may I ask one more question: is the final result of this contest computed from a team's last submission, or from the submission with the highest accuracy?

Thanks!

Posted by: AlphaCV @ July 7, 2016, 11:35 a.m.

Dear Participant,

In fact, each team should not perform more than 100 submissions in total, and no more than 5 submissions per day. This is now fixed for the second test phase; thank you for reporting it! Note that these values are indicative, intended only to prevent automated submissions from saturating the server.

Regarding the final result, it is based on the last submission of each team.

Best regards,

Posted by: vponcel @ July 7, 2016, 5 p.m.

I'm confused about these new submission rules.
They make the testing phase no different from the learning phase.
I had assumed that each team could submit only 5 times in total before the final day,
not 5 times each day.

Posted by: tzzcl @ July 7, 2016, 6:13 p.m.

Thanks for your reply!

When designing a learning algorithm, we want the model to learn the underlying distribution and regularities of the whole input space; that is, we want the model to perform well on unseen data. Therefore, we usually need a test set whose test error serves as an estimate of the model's generalization error.

But most learning algorithms have hyperparameters that need to be set properly, so we usually need a separate validation set for tuning them.

In my opinion, the final test phase should be used only to measure the model's generalization error. If too many submissions are allowed, the final test data effectively becomes validation data: some teams will use it to tune their models' parameters. The tuned models will then overfit the test data, and we can no longer say that they generalize well or that they genuinely outperform other teams' models. A minimal sketch of the intended protocol is given below.
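To make the point concrete, here is a minimal sketch of the protocol described above, assuming scikit-learn; the dataset, model, and hyperparameter grid are illustrative placeholders, not anything from this challenge. Hyperparameters are tuned on the validation set only, and the held-out test set is touched exactly once.

```python
# Minimal sketch of the train/validation/test protocol (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

# Hold out a test set that is evaluated exactly once, at the very end.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)

# Tune the hyperparameter on the validation set only.
best_score, best_C = -1.0, None
for C in [0.1, 1.0, 10.0]:
    model = SVC(C=C).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_C = score, C

# A single final evaluation estimates generalization error.
# Repeating this step and picking C by test score would be
# exactly the test-set overfitting described above.
final_model = SVC(C=best_C).fit(X_trainval, y_trainval)
print("test accuracy:", final_model.score(X_test, y_test))
```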

So I think we should limit the maximum number of submissions, say to 5 per team (as in the competitions you previously held in conjunction with ICCV'15, CVPR'15, and CVPR'16), so that the test data cannot be used to tune the parameters of learning algorithms.

Best regards!

Posted by: AlphaCV @ July 8, 2016, 2:27 a.m.

Dear all,

We usually set the leaderboard to hidden during the test phase, which means that participants cannot see the results, so the test data cannot be used to tune the models. Due to an issue in Codalab, submissions failed under that leaderboard configuration, so instead we enabled the anonymous option, so that participants cannot see who sent each submission, and the evaluation script returns random values in the range 0.9 to 1.
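For clarity, a hypothetical sketch of what such a blind scoring stub could look like (this is an illustration only, not the organizers' actual script; the output file name and score key are assumptions):

```python
# Hypothetical blind scoring stub (illustration, not the real script).
import os
import random
import sys

# Codalab scoring programs receive an input dir and an output dir;
# the input dir (submission + ground truth) is deliberately ignored here.
_, input_dir, output_dir = sys.argv

# Emit a random placeholder in [0.9, 1.0] instead of the real accuracy,
# so the public leaderboard carries no usable information.
score = random.uniform(0.9, 1.0)

os.makedirs(output_dir, exist_ok=True)
with open(os.path.join(output_dir, "scores.txt"), "w") as f:
    f.write("score: %.6f\n" % score)
```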

We will download the last submission of each team and evaluate it to obtain the final result. Therefore, the information Codalab provides during the test phase is not informative and cannot be used to improve the models.

As some of you have already submitted many times, we will fix the maximum at 15 (5 more than the number of submissions made by the team that has submitted the most).

Thanks.

Posted by: xbaro @ July 8, 2016, 3:59 p.m.

What about the teams that have already submitted test results? Are their scores also random (they don't look random, to be honest, and the team names are still visible)? If not, it would be fairer to give each team the chance to see its actual test performance as many times as the team that has submitted the most results.

Posted by: spinpop56 @ July 8, 2016, 8:41 p.m.

Dear Participant,

Yes, they are also random; the team names are visible, but the ranking positions are not relevant. As noted before, the results in this evaluation phase are blind, so it is not possible either to see the results or to tune parameters on the test data. Instead, teams should take the learning-phase results as their reference and then decide which final submission they want to make for the test phase.

Best regards,

Posted by: vponcel @ July 11, 2016, 1:27 p.m.

"Yes, they are also random, and the team name is visible but the ranking positions are not rellevant. " Is it real?

Since the test phase is blind, what is the point of allowing 15 submissions? One would be enough, I think.

Posted by: flx @ July 11, 2016, 3:46 p.m.

Hello,

Will we see the results of our test set submissions, perhaps after the challenge is over? We want to report the performance of different models. Validation set submissions could be re-opened for this purpose too; that would also be a nice compensation for the last 2 days that were lost during the first phase.

Thanks,

Posted by: frkngrpnr @ July 11, 2016, 4:14 p.m.

Dear Participants,

Yes, you will see the results of your test submissions when the competition round finishes and all code and fact sheets have been verified.

Please see xbaro's earlier answer in this thread for why the maximum number of submissions has now been raised to 15.

Best regards,

Posted by: vponcel @ July 12, 2016, 11:59 a.m.