AIM 2020 Efficient Super-Resolution Challenge Forum


> How to properly measure inference time?

Hi,

I have a few questions regarding the performance reporting.

1) Do we have to use your script "test_demo.py" to measure our model's runtime? If so, why do you set torch.backends.cudnn.benchmark = True? Setting it to False is much faster for me (see the timing sketch below).

2) Let's say that on my GPU MSRResNet runs at an average of 0.5 s/image and my model takes 0.3 s/image. You said that on your hardware MSRResNet runs at 0.170 s/image, so should I rescale my numbers to your hardware (meaning that on your GPU my model would probably take 0.170/0.5 * 0.3 = 0.102 s/image)?

3) Are we allowed to batch images of the same shape for faster inference (given that we have enough VRAM)?
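
For reference, this is roughly how I time my model at the moment. It is only a minimal sketch: `model` and `lr_images` (a list of low-resolution input tensors of shape 1xCxHxW) are placeholders, and the warm-up/synchronization details may differ from what test_demo.py does.

```python
import time
import torch

# Toggle this flag to compare the two settings from question 1.
torch.backends.cudnn.benchmark = True

device = torch.device('cuda')
model = model.to(device).eval()

with torch.no_grad():
    # Warm-up so cuDNN autotuning and lazy CUDA initialization
    # do not pollute the measurement.
    for img in lr_images[:5]:
        model(img.to(device))
    torch.cuda.synchronize()

    start = time.time()
    for img in lr_images:
        model(img.to(device))
    # GPU kernels run asynchronously; synchronize before reading the clock.
    torch.cuda.synchronize()
    avg_runtime = (time.time() - start) / len(lr_images)

print(f'average runtime: {avg_runtime:.3f} s/image')
```
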

Thanks for answering
Wsdea

Posted by: wsdea @ May 24, 2020, 1:40 p.m.

1) You can set torch.backends.cudnn.benchmark = False.
2) Yes, you should rescale relative to the MSRResNet runtime on your own hardware.
3) The organizers will test the code for a fair comparison.

Posted by: cszn @ May 31, 2020, 11:05 a.m.