We note that the following clause appears in the official Q&A:
For the creation track, cheating or adversarial attacks on our automatic evaluation algorithms for ID and image similarity are prohibited. They will be ruled out by the final check.
Does this mean that ID and image similarity loss functions cannot be used when training the model?
Posted by: YangShuai @ March 9, 2021, 12:55 p.m.

Hi. We demand that the deepfake creation result be a real and meaningful face swap to a human observer. We will visually check the top solutions in the creation track to ensure this. So adversarial attacks on face recognition models are not allowed.
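To make concrete what an "ID similarity loss in training" might look like, here is a minimal PyTorch sketch. The `face_encoder` argument stands in for some frozen face recognition backbone (e.g., an ArcFace-style model returning one embedding per image); it is an illustrative assumption, not something specified by the challenge:

```python
# Minimal sketch of an identity-similarity training loss.
# `face_encoder` is a hypothetical frozen face recognition model
# that maps a batch of face images to a batch of embeddings (B, D).
import torch
import torch.nn.functional as F

def id_similarity_loss(face_encoder: torch.nn.Module,
                       swapped: torch.Tensor,
                       source: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between identity embeddings, averaged over the batch."""
    with torch.no_grad():
        src_emb = face_encoder(source)   # reference identity; no gradients needed
    swp_emb = face_encoder(swapped)      # gradients flow back to the generator
    return (1.0 - F.cosine_similarity(swp_emb, src_emb, dim=1)).mean()
```

Using such a loss to make the swap carry the source identity is ordinary face-swap training; what the rule above forbids is optimizing adversarially against the evaluators' recognition models.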
As for image similarity with the target image, we use the SSIM metric, which I think is not attackable by an adversarial algorithm. However, just in case it can be attacked in some unknown way, we also demand that the deepfake creation result be visually similar to its original target image to a human observer. We will check this as well.
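For reference, here is a minimal sketch of how an SSIM comparison between a generated result and its target frame can be computed with scikit-image. The random stand-in images and the use of `structural_similarity` are illustrative; the organizers' exact evaluation pipeline and thresholds are not public:

```python
# Minimal SSIM sketch using scikit-image (channel_axis needs scikit-image >= 0.19).
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(result: np.ndarray, target: np.ndarray) -> float:
    """Mean SSIM over color channels for two uint8 images of equal shape."""
    return structural_similarity(result, target, channel_axis=-1)

# Example usage with random stand-in images:
rng = np.random.default_rng(0)
target = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
result = target.copy()
print(ssim_score(result, target))  # identical images -> 1.0
```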
Hope this clears up your doubts.