Parameter count and FLOPs are framework-independent, but inference time differs across frameworks.
Do I need to convert all my models to the PyTorch version?
If not, how would you evaluate the inference time fairly?
You need to convert the model to PyTorch.
Posted by: cszn @ July 4, 2020, 6:39 a.m.
Can I keep the training code in TensorFlow and only convert the testing code to PyTorch?
Posted by: Mulns @ July 4, 2020, 8:14 a.m.
Yes, you can.
Posted by: Radu @ July 5, 2020, 8:53 a.m.
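For the fair-timing part of the question, a minimal sketch of how inference time is usually measured: several warm-up runs first (to exclude one-time costs such as kernel compilation and caching), then the average over many timed runs. The helper name and iteration counts below are illustrative, not from the challenge rules.

```python
# Sketch of fair inference-time measurement (framework-agnostic; the
# "model" here is a stand-in). With PyTorch on a GPU you would also call
# torch.cuda.synchronize() before each perf_counter() read, so that
# asynchronously queued CUDA kernels are included in the measurement.
import time

def time_inference(run_once, n_warmup=5, n_runs=20):
    """Average wall-clock time of run_once() after warm-up iterations."""
    for _ in range(n_warmup):   # warm-up runs are not timed
        run_once()
    start = time.perf_counter()
    for _ in range(n_runs):
        run_once()
    return (time.perf_counter() - start) / n_runs

# Example with a dummy workload standing in for model inference:
avg = time_inference(lambda: sum(i * i for i in range(10_000)))
print(f"avg inference time: {avg * 1e3:.3f} ms")
```

Measuring all submissions with the same harness on the same hardware (hence the single-framework requirement above) is what makes the comparison fair.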