* While you will be able to see the development datasets, don't tailor your algorithms to them too strongly. The evaluation datasets are entirely different.
* Don't spend much time tuning hyperparameters, training schedules, data augmentation, etc., unless it benefits the architecture search algorithm. Remember, submitted architectures will be trained from scratch with our fixed hyperparameters and no data augmentation.
* Try testing your algorithm on additional datasets of your own, to make sure it generalizes well (see the sketch after this list)!
* Use the Makefile to test your submissions locally; it's a lot easier to debug that way than by submitting to our servers.
* Check out the "ingestion output log" and "scoring output log" after a successful submission; they contain useful metrics about your algorithm and its performance.
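
As a concrete example of the third tip, here's a minimal sketch of running a search entry point against an extra dataset (KMNIST via torchvision) that isn't one of the development datasets. The `my_search` function and the array-based input format are assumptions standing in for your actual submission interface; adapt them to the real API in the starting kit.

```python
# Hedged sketch: sanity-check generalization by running your search
# algorithm on a dataset the competition doesn't ship (KMNIST here).
import numpy as np
import torch
import torchvision


def load_kmnist():
    """Load KMNIST as (N, C, H, W) float arrays in [0, 1].
    Adjust the shapes/types to match the competition's data format."""
    train = torchvision.datasets.KMNIST(root="data", train=True, download=True)
    test = torchvision.datasets.KMNIST(root="data", train=False, download=True)
    train_x = train.data.unsqueeze(1).float().numpy() / 255.0
    test_x = test.data.unsqueeze(1).float().numpy() / 255.0
    return train_x, train.targets.numpy(), test_x, test.targets.numpy()


def my_search(train_x, train_y):
    """Placeholder for your actual search entry point. Returns a trivial
    linear model here just so the script runs end to end."""
    n_classes = int(train_y.max()) + 1
    in_features = int(np.prod(train_x.shape[1:]))
    return torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(in_features, n_classes),
    )


if __name__ == "__main__":
    train_x, train_y, test_x, test_y = load_kmnist()
    model = my_search(train_x, train_y)
    # From here, train `model` from scratch with a fixed training pipeline
    # (no augmentation, fixed hyperparameters) and report held-out accuracy,
    # mirroring how submitted architectures are scored.
    print(model)
```

The point is to mimic the evaluation setting, an unseen dataset and a fixed, augmentation-free training recipe, rather than to squeeze out accuracy on this particular dataset.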
Posted by: robgeada @ March 16, 2021, 10:14 a.m.