Mobile AI 2021 Learned Smartphone ISP Challenge Forum


> question about quantization

Hi,
As mentioned earlier, the FP16 models will be dequantized to FP32, while the submission message says:
"If you decided to use a quantized model for the final submission, there should be two *additional* fully-quantized INT8 TFLite files in this folder"
...
Does this mean that the INT8 model will not be dequantized to FP32?
In other words, is the INT8 model allowed to run as-is, which is generally faster than FP32?
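For reference, here is a minimal sketch of what producing a fully-quantized INT8 TFLite file looks like with the TFLite converter (the saved-model path, input shape, and calibration data are placeholders, not from the challenge code):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few calibration inputs; the shape here is an assumed
    # placeholder and must match the actual model input.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization: all ops, inputs, and outputs are INT8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8 = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_int8)
```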

Posted by: xushusong001 @ March 17, 2021, 11:32 a.m.

I think it is unfair that different model types are compared against each other, since runtime affects the score a lot.
If different model types are to be compared together, the scoring formula should be changed.
An INT8 model is 2-4 times faster than FP32.
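The 2-4x figure can be checked with a rough CPU timing sketch like the one below (the model file names are placeholders; real speedups depend on the device and delegate):

```python
import time
import numpy as np
import tensorflow as tf

def benchmark(path, runs=50):
    # Average per-invocation latency of a TFLite model, in milliseconds.
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"],
                           np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0

print("FP32: %.2f ms" % benchmark("model_fp32.tflite"))
print("INT8: %.2f ms" % benchmark("model_int8.tflite"))
```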

Posted by: xushusong001 @ March 17, 2021, 11:52 a.m.