Mobile AI 2021 Learned Smartphone ISP Challenge Forum


> Can't find the file "'/tmp/codalab/tmpqdAPkV/run/output/output.csv'"

Sorry to disturb you, but our submission returned the following error:

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
* daemon not running; starting now at tcp:5037
* daemon started successfully
INFO: Initialized TensorFlow Lite runtime.
INFO: TFLiteNeuronDelegate delegate: 704 nodes delegated out of 704 nodes with 1 partitions.

INFO: Init
INFO: BuildGraph
INFO: AddOpsAndTensors
INFO: AddOpsAndTensors done
INFO: total_input_byte_size: 8355840
INFO: total_output_byte_size: 25067520
INFO: NeuronModel_identifyInputsAndOutputs
INFO: NeuronModel_relaxComputationFloat32toFloat16: 1
INFO: NeuronModel_finish
INFO: BuildGraph done
INFO: NeuronCompilation_setPreference: 1
INFO: NeuronCompilation_setPriority: 110
terminating
Segmentation fault (core dumped)
Traceback (most recent call last):
File "/tmp/codalab/tmpqdAPkV/run/program/evaluation.py", line 173, in
file = open(LOG_NAME, 'r')
IOError: [Errno 2] No such file or directory: '/tmp/codalab/tmpqdAPkV/run/output/output.csv'
This confuses me; could you please help me out, or release your evaluation.py to us?

Posted by: rhwang @ Feb. 9, 2021, 6:55 a.m.

Could you please give me some suggestions? Thanks a lot.

Posted by: rhwang @ Feb. 9, 2021, 3:13 p.m.

Hi,

From the error log, it looks like the submitted TFLite model was successfully pushed to the evaluation device (so the folder structure of your submission is correct).
However, something unexpected happened on the device side. Please check whether TF 1.15.0 was used for the conversion from .pb to .tflite, as listed in https://competitions.codalab.org/competitions/28054#learn_the_details-evaluation:

====
4. Export a TFLite model (with Tensorflow 1.15.0) of your solution and add it to the abovementioned ZIP archive; we will run this model to perform the evaluation on the target platform, MediaTek Dimensity 1000+.

Please create a TFLite model with input shape [1, 544, 960, 4] and output shape [1, 1088, 1920, 3].
The runtime of your TFLite model will be automatically evaluated on the MediaTek Dimensity 1000+ platform and reported in the "runtime per image [s]" column of the leaderboard.
Participants are welcome to use our tutorial codes to export TFLite models.
====

Alternatively, you can directly apply our conversion script (with TF 1.15.0 installed) from https://github.com/MediaTek-NeuroPilot/mai21-learned-smartphone-isp.
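For reference, a minimal sketch of such a .pb-to-.tflite conversion under TF 1.15.0 is shown below. This is not the official conversion script from the repository; the file names and tensor names ("model.pb", "input", "output", "model.tflite") are assumptions and should be replaced with the names used in your frozen graph.

```python
# Minimal sketch: freeze-graph .pb -> .tflite conversion with TensorFlow 1.15.0.
# File names and tensor names below are assumptions, not the official ones.
import tensorflow as tf  # must be 1.15.0 for this challenge

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="model.pb",                  # hypothetical path to the frozen graph
    input_arrays=["input"],                     # hypothetical input tensor name
    output_arrays=["output"],                   # hypothetical output tensor name
    input_shapes={"input": [1, 544, 960, 4]},   # input shape required by the challenge
)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Converting with a different TF version can emit ops or op versions that the target runtime does not support, which is one common cause of on-device failures.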

Posted by: HsienKaiKuo @ Feb. 9, 2021, 3:32 p.m.

Thanks for your reply. Actually, I did use TF 1.15.0 for the conversion from .pb to .tflite, and I also used the conversion scripts (from .ckpt to .pb and from .pb to .tflite) that you released in https://github.com/MediaTek-NeuroPilot/mai21-learned-smartphone-isp. That is what really puzzles me.

In addition, this TFLite file runs on my own smartphone (using AI Benchmark), so I wonder whether you could release the evaluation.py.
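A quick way to double-check the converted model's I/O shapes with the TFLite Interpreter (the file name "model.tflite" is an assumption):

```python
# Sanity check of the converted model's input/output shapes.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

print(interpreter.get_input_details()[0]["shape"])   # expect [1, 544, 960, 4]
print(interpreter.get_output_details()[0]["shape"])  # expect [1, 1088, 1920, 3]
```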

Posted by: rhwang @ Feb. 9, 2021, 4:01 p.m.

Hi rhwang,

Could you post your submission time and submission ID so that we can check your model in detail?
Thank you.

Posted by: jimmy.chiang @ Feb. 10, 2021, 2:43 p.m.

Thanks for your reply. The submission time is "02/09/2021 16:54:39" and the submission ID is #8.

Posted by: rhwang @ Feb. 10, 2021, 3:07 p.m.