Thank you for joining the Mobile AI (MAI) 2021 Learned Smartphone ISP Challenge.
Here’s the guideline for your final test result submission.
This is basically identical to what is described in the factsheet template:
The factsheet, TFLite models, source codes, and executables should be uploaded directly to [CodaLab] in ONE single archive.
Only your LAST upload counts; previous submissions will not be evaluated.
The uploaded archive should contain the following 5 folders:
1. TFLite/ - The folder with the TWO TFLite models (input shapes: [1, 128, 128, 4] and [1, 544, 960, 4]) and an inference script inference_tflite.py that loads your TFLite model and outputs processed images.
2. Factsheet/ - The folder with your final factsheet (PDF + LaTeX).
3. Model/ - This folder should contain a main.py file that restores your final model from a checkpoint (located in the same directory) and converts it to TFLite.
4. Source-Codes/ - The folder with the code used to train your final model; it should contain the implementation of the model, all loss functions, and the training pipeline. There should also be a README file explaining how to run the code.
5. Other/ - All other supplementary files should be placed in this directory.
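For reference, a minimal sketch of what `inference_tflite.py` might look like. The file names, the `.npy` RAW input format, and the [0, 1] output range of the model are assumptions here; adapt them to your own pipeline:

```python
# inference_tflite.py -- hypothetical sketch of the required inference script.
# Assumptions: RAW inputs are stored as .npy arrays of shape [H, W, 4] and the
# model outputs RGB values in [0, 1]; adjust both to match your solution.
import os

import numpy as np


def postprocess(rgb):
    """Map a float RGB array in [0, 1] to a uint8 image in [0, 255]."""
    return np.clip(rgb * 255.0, 0.0, 255.0).astype(np.uint8)


def run_inference(model_path, input_dir, output_dir):
    # Heavy dependencies are imported here so postprocess() stays importable
    # on its own.
    import tensorflow as tf
    from PIL import Image

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    os.makedirs(output_dir, exist_ok=True)
    for name in sorted(os.listdir(input_dir)):
        raw = np.load(os.path.join(input_dir, name)).astype(np.float32)
        interpreter.set_tensor(inp["index"], raw[None, ...])  # add batch dim
        interpreter.invoke()
        rgb = interpreter.get_tensor(out["index"])[0]
        Image.fromarray(postprocess(rgb)).save(
            os.path.join(output_dir, name.replace(".npy", ".png")))


if __name__ == "__main__" and os.path.exists("model.tflite"):
    # Example invocation; all three paths are placeholders.
    run_inference("model.tflite", "test_raw", "results")
```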
The CodaLab test server closes on March 21, 2021, 11:59 p.m., UTC.
A valid submission must be uploaded by then.
Please check the “Evaluation” section for more details.
1. Please select “Testing” for the final submission. If you want to evaluate on the validation data, select “Development” instead; otherwise you will waste your testing submission quota (maximum: 3 submissions).
2. During the testing phase, the leaderboard will not show a specific score. Since we will download all participants’ submissions and evaluate them on our devices, please ignore the “Failed” status on CodaLab. It will not affect your final testing evaluation results as long as you follow the final submission guideline.
3. inference_tflite.py is not listed in the factsheet template, but it is REQUIRED in this challenge.
4. We will double-check whether the two submitted TFLite models are produced from the same model weights.
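Point 4 can also be checked locally before submitting. Below is a hedged sketch (the file paths, test shape, and tolerance are all illustrative): feed the same random RAW tensor to both TFLite files, resizing each interpreter's input to a common shape, and confirm the outputs agree.

```python
# Hypothetical local sanity check that the two submitted TFLite files encode
# the same learned weights: run both on one random RAW tensor and compare
# the outputs. Paths, the test shape, and the tolerance are placeholders.
import numpy as np
import tensorflow as tf


def _run(model_path, raw):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    inp = interpreter.get_input_details()[0]
    # Resize so both models (fixed- or dynamic-shape) accept the same input.
    interpreter.resize_tensor_input(inp["index"], raw.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(inp["index"], raw)
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])


def outputs_match(path_a, path_b, shape=(1, 128, 128, 4), atol=1e-4):
    raw = np.random.rand(*shape).astype(np.float32)
    return bool(np.allclose(_run(path_a, raw), _run(path_b, raw), atol=atol))
```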
CVPR’21 Mobile AI Workshop - Learned Smartphone ISP Challenge
Do we still need to submit the "model_none.tflite" files?
Posted by: imgclear @ March 18, 2021, 2:03 p.m.
You need to submit TWO TFLite models (one for PSNR and the other for latency).
Please check the "Evaluation" section for more details.
The "Source-Codes/" folder should also contain code for generating processed images from the trained model/checkpoint (as `test_model.py` does in our provided starter codebase), not just training code.
The README file should also explain how to generate processed images.
There has been some discussion at the AI Benchmark Forums:
"""Hi, as MHChen replied in this issue, https://competitions.codalab.org/forums/24750/5335/.
Does it mean we should prepare three models, namely model.tflite, model_none.tflite, and one TFLite model whose input shape is [1, 128, 128, 4] and output shape is [1, 256, 256, 3]?"""
"""You need to submit three models, as was stated in the email. Sorry for the confusion; MHChen will modify his answer."""
So the submitted file names are: model.tflite, model_none.tflite, and ????
And why has the server been stuck since yesterday?
Sorry for the confusion. Please see the updated info as below:
1. TFLite/ - The folder with the TWO TFLite models (input shapes: [1, None, None, 4] for fidelity quality and [1, 544, 960, 4] for latency). An inference script inference_tflite.py (load your TFLite model and output processed images) is recommended in the final submission.
As mentioned in the email announcement, "model.tflite" corresponds to [1, 544, 960, 4], and "model_none.tflite" corresponds to [1, None, None, 4].
Posted by: MHChen @ March 19, 2021, 2:24 p.m.
Sorry to disturb you; I don't understand the purpose of "inference_tflite.py", because you said in your email: "If your model performs any image pre-processing (rescaling, normalization, etc.) - it should be integrated directly into it, no additional scripts are accepted."
Did you mean that both the pre-processing and the post-processing should be integrated into our '.tflite' file, or that we could put these operations (pre- and post-processing) in "inference_tflite.py", which you would then run? If it is the latter, how should we pre-process the test data and output the processed images?
And another question: should the output of our '.tflite' already be mapped to 0-255 so that it can be saved as an image directly, or can we put this post-processing (e.g., mapping the output from 0-1 to 0-255) in 'inference_tflite.py'?
Posted by: rhwang @ March 20, 2021, 3:10 a.m.
For some reason, I can't get a [1, None, None, 4] TFLite model.
Can we submit a [1, 128, 128, 4] TFLite model instead?
Sorry for the confusion about "inference_tflite.py". The clarification is as follows:
The main goal of "inference_tflite.py" is to introduce an additional sanity check for the submitted solution.
However, this script will not be used by us to evaluate your solution. We will be taking only the corresponding TFLite model and running it with the standard script that compares its outputs with the ground truth and calculates the final fidelity scores.
Therefore, it would be better to include "inference_tflite.py" in the final submission, but if it is missing, your solution will not be disqualified.
About the pre-/post-processing questions, please follow the guideline in the email.
That is, all the processing should be integrated into the required TFLite models.
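One way to integrate the processing in TF 2.x Keras is sketched below. This is only an illustration under assumptions: the black/white levels are placeholders for your sensor's calibration, and the core network is assumed to output RGB in [0, 1]. Wrapping the trained network this way before conversion puts the normalization and the 0-255 mapping inside the .tflite graph itself.

```python
# Hypothetical sketch: bake pre- and post-processing into the exported graph
# so the .tflite file maps raw sensor values straight to a 0-255 RGB image.
# black_level / white_level are placeholders for your sensor's calibration.
import tensorflow as tf


def wrap_with_processing(core_model, black_level=0.0, white_level=1023.0,
                         patch_size=128):
    inputs = tf.keras.Input(shape=(patch_size, patch_size, 4))
    x = (inputs - black_level) / (white_level - black_level)  # pre-processing
    rgb = core_model(x)            # learned ISP network, outputs in [0, 1]
    outputs = rgb * 255.0          # post-processing: map to [0, 255]
    return tf.keras.Model(inputs, outputs)
```

The wrapped model, not the bare core network, is then what gets passed to the TFLite converter.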
To get "model_none.tflite" with the input shape [1, None, None, 4], you can also try converting the model with TF 2.4 or TF-nightly if TF 1.15.0 doesn't work.
Only "model.tflite" is required to be generated using TF 1.15.0.
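For the TF 2.x route, one approach (a sketch; the model and output path are placeholders) is to convert a concrete function traced with dynamic spatial dimensions:

```python
# Hypothetical sketch (TF 2.4+): export model_none.tflite with input shape
# [1, None, None, 4] by tracing a concrete function with dynamic H and W.
import tensorflow as tf


def convert_dynamic(core_model, path="model_none.tflite"):
    fn = tf.function(lambda x: core_model(x))
    concrete = fn.get_concrete_function(
        tf.TensorSpec([1, None, None, 4], tf.float32))
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete])
    tflite_bytes = converter.convert()
    with open(path, "wb") as f:
        f.write(tflite_bytes)
    return tflite_bytes
```

At inference time, call `Interpreter.resize_tensor_input` before `allocate_tensors()` to pin the dynamic dimensions to a concrete resolution.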