Mobile AI 2021 Learned Smartphone ISP Challenge Forum


> Question about model quantization

Hi, we encountered some problems with model quantization while trying to reduce running time. From the same .pb file, we generated a float32 TFLite model using the provided pb2tflite script, and float16 and int8 models with the additional options '--post_training_quantize' and '--quantize_to_float16'. The float32 model can be tested successfully on the server, while the quantized models cannot: we get the error message "Segmentation fault (core dumped)". Does anyone have insights on this?

Posted by: frank_wang87 @ Feb. 3, 2021, 9:58 a.m.

Please do not use FP16 post-training quantization.
In an FP16 post-quantized model, the weights are dequantized back to FP32 at runtime, and the device does not support this type of dequantize operation.
In fact, your FP32 TFLite model already runs with FP16 data types on the APU, so FP16 quantization would not bring a speedup in any case.
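For reference, the two conversion paths discussed above would look roughly like this with TensorFlow's stock `tflite_convert` CLI. This is only a sketch: the challenge's pb2tflite wrapper script may differ, and the file names and input/output array names (`model.pb`, `input`, `output`) are placeholders, not taken from the challenge code.

```shell
# Plain FP32 conversion -- the variant that runs on the server
# (and, per the answer above, already executes in FP16 on the APU).
tflite_convert \
  --graph_def_file=model.pb \
  --output_file=model_fp32.tflite \
  --input_arrays=input \
  --output_arrays=output

# FP16 post-training quantization -- avoid this for the APU target:
# the FP16 weights are dequantized back to FP32 at runtime, and the
# device does not support that dequantize operation.
tflite_convert \
  --graph_def_file=model.pb \
  --output_file=model_fp16.tflite \
  --input_arrays=input \
  --output_arrays=output \
  --post_training_quantize \
  --quantize_to_float16
```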

Posted by: jimmy.chiang @ Feb. 3, 2021, 2:14 p.m.