Will the labels for the validation data be released? If so, when? Also, in the final evaluation phase, will the leaderboard still be accurate, or will new submissions be given random scores around 0.9, as in the first round of this competition?
We will not provide validation labels; we will run the same procedure as in the first round.
Regarding the development stage, please note that those results are based on the same metric used for the final evaluation. In the challenge report to be published at ECCV you will see that methods obtain very high and very similar scores, since the per-video trait predictions are continuous values that lie close together across videos. Because of this, even a 1% difference between two methods implies a significantly better performance of one over the other.
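To illustrate why a 1% gap can matter when scores cluster near the top, here is a small sketch. The exact challenge metric is not specified in this thread; this assumes a "mean accuracy" style score of 1 minus the mean absolute error, with trait values normalized to [0, 1]. All numbers are made up for illustration.

```python
def mean_accuracy(preds, truth):
    """Hypothetical score in [0, 1]: 1 - mean absolute error. Higher is better."""
    errors = [abs(p - t) for p, t in zip(preds, truth)]
    return 1.0 - sum(errors) / len(errors)

truth    = [0.55, 0.60, 0.52, 0.58, 0.50]   # made-up ground-truth trait values
method_a = [0.54, 0.62, 0.50, 0.57, 0.49]   # made-up predictions, small errors
method_b = [0.50, 0.66, 0.46, 0.61, 0.45]   # made-up predictions, larger errors

print(mean_accuracy(method_a, truth))  # ≈ 0.986
print(mean_accuracy(method_b, truth))  # ≈ 0.950
```

Both methods score above 0.9, yet method A's errors are several times smaller than method B's: when everyone is near the ceiling, a few hundredths of a point separate clearly different methods.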
Please also note that this time your final score for prizes will be computed using both the development score and the "usability/downloads" of the pieces of code you provide to your competitors. We are thus promoting coopetition: the winner has to compete and cooperate ;)