Hello - I have two questions.
1. I made a submission yesterday. It showed up nearly immediately (as it has in the past) and I took note of my score.
However, today it appears with an incorrect submission date, and I cannot submit another attempt today.
Is there any way to fix this so that I can make my intended submission today? Is this a known bug, and is there some way I should prepare for it in the remaining days of the submission window?
2. Let me preface my second question with a few words of appreciation. Thank you for all the hard work of putting together this competition, for providing a useful challenge to help advance the field, and for the many hours you have surely put into organizing, answering questions, and providing starter code.
I saw there was a previous message announcing the change from 4 daily submissions to 2, but it seems there is now only 1 allowed per day. Can you help me understand this change, and is there a possibility of returning to the original policy?
I saw another thread where someone proposed that having multiple submission attempts may lead to overfitting; however, this logic feels a bit flawed to me.
Consider this as a hypothetical learning problem - does it seem possible to learn a collection of ~25K integers, each ~1 byte, using only 100 queries (where each query returns only an average)?
Basically - a limited submission rate does not seem to relate to the problem of overfitting; it only seems to (1) introduce logistical trouble for participants, and (2) limit the number of competing submissions overall.
From the perspective of a transfer learning problem, I can certainly understand having SOME fixed budget (and 100 seems like a reasonably small number, as an example) - but I don't understand the purpose of limiting the RATE of submission.
(The only rationale I can think of is a parallel to web security: limiting the possible impact of a malicious user with multiple accounts - but then the same solution from web security applies here: use a trusted identity provider, such as a university or an employer.)
I scheduled my model re-training and development time for making final submissions based on the expectation that the submission process would stay constant, but the reduced submission rate unfortunately means I will only be able to submit very few of the model variants I have been working on.
Perhaps my comments come a bit too late to change anything about the process, given the timing - but I figured it was worthwhile to share my thoughts and ask about this anyway.
Of course, in one sense, it is generous of you to allow us multiple submissions, regardless of the form - I certainly made several mistakes during the phase 1 submissions where I used the wrong arguments and produced junk submissions, so your policy helped me greatly.
Really it is just the unexpected change in protocol that is causing me a bit of trouble.
And in any case, thanks again for your hard work on organizing this; I know there are often conflicting requests in such situations and there is not an approach which will perfectly satisfy all participants.
Best
Niklas
Dear Niklas, many thanks for your questions; I would be very glad to answer them.
1. This is normally caused by a delay in the Codalab system. We have had similar queries before: a submission can remain pending for some time, and the participant only sees the score later because of the system queue. This is exactly one of the reasons we provide multiple submissions in the final stage. I hope your submission is sorted out now.
2. Thanks for your ideas. We have heard from several participants about the submission limit and have discussed it internally as well. I agree that 100 submissions are not enough to tune the best parameters. The limit on phase 2 submissions comes from the separation between 'validation' and 'test' data sets. Strictly speaking, to test the transferability and adaptation ability of an algorithm, the ideal case is to allow only a single submission (a true 'test set'), which is also the most common situation when EEG decoding algorithms are used in the real world. As noted in the previous message, we opened submissions and score visibility in phase 2 for two reasons: first, to give you a feel for your final score (since you have been developing your algorithms for more than two months) rather than simply handing you a final 'you win' or 'you lose'; second, the Codalab system can sometimes delay submissions, and a single shot at the end might cause some submissions to miss the deadline. The balance we settled on is one submission per day, on different days. Apologies for the inconvenience. We will collect feedback and look for improved strategies for our next competition in surveys after this competition ends. For development purposes, the phase 1 leaderboard is still open, so you can continue to develop and test your algorithms there (multiple times a day); the data is very similar to that of the second phase.
Once again, thanks for your ideas and for your attention to BEETL.
Kind regards,
Xiaoxi