To maintain reasonable deepfake detection accuracy in the final detection phase, any potential winning submission must achieve an AUC of at least 0.8 on the baseline fake set (i.e., the CelebDF-v2 test set, reported as the “Det_score_baseline”). In other words, a submission with a high overall score but a baseline AUC below 0.8 will not qualify for the final top-3 ranking.
We note that nearly all participants already meet this restriction; its purpose is to encourage submissions based on reasonable deepfake detection methods.
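To make the qualification rule concrete, here is a minimal sketch of how the check might work. The function and variable names (`auc`, `qualifies`, `det_score_baseline`) are illustrative assumptions, not the organizers' actual evaluation code; the AUC is computed with the standard rank-based pairwise definition.

```python
# Hedged sketch of the eligibility rule: baseline AUC must be >= 0.8.
# All names here are hypothetical stand-ins for the real evaluation pipeline.

def auc(labels, scores):
    """Rank-based AUC: the fraction of (positive, negative) pairs in which
    the positive example receives the higher score (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def qualifies(det_score_baseline, threshold=0.8):
    """A submission is eligible for the top-3 ranking only if its
    baseline AUC meets the threshold, regardless of its overall score."""
    return det_score_baseline >= threshold

# Toy example: 1 = fake, 0 = real, scores = predicted P(fake).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.5, 0.2]
baseline_auc = auc(labels, scores)   # about 0.889 on this toy data
print(qualifies(baseline_auc))       # meets the 0.8 bar here
```

The key point of the rule is that `qualifies` depends only on the baseline AUC, not on the overall score.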
I'm sorry to bother you again, but this minor remedy is meant to prevent teams from relying solely on an adversarial-sample detection model at test time, right? But isn't detecting adversarial samples equivalent to using the ID information of the image? That is, treating adversarial sample = deepfake image, which I think is against the rules.
However, if I use two models, one for deepfake detection and one for adversarial-sample detection, then this minor remedy does not seem to prevent that from happening.
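To make the concern concrete, the two-model setup described above could be sketched as follows. Both `deepfake_model` and `adv_detector` are hypothetical stand-ins for trained classifiers; this is only an illustration of the potential loophole, not an endorsed or actual competition method.

```python
# Hypothetical sketch of the two-model loophole described above.
# `deepfake_model` and `adv_detector` are placeholder callables that
# return a score in [0, 1]; neither reflects any real submission.

def combined_score(image, deepfake_model, adv_detector):
    """Score an image as fake if EITHER the deepfake detector OR the
    adversarial-sample detector assigns it a high score."""
    p_fake = deepfake_model(image)   # P(image is a deepfake)
    p_adv = adv_detector(image)      # P(image is adversarially perturbed)
    return max(p_fake, p_adv)
```

Because `deepfake_model` alone can be made strong enough to clear the 0.8 baseline-AUC bar, the `max` with `adv_detector` lets the ensemble still exploit adversarial-sample cues on the attacked set, which is exactly the behavior the question suggests the remedy fails to rule out.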