[Private Leaderboard Released]

Track 1:
Team | Final Score
asdfqwer | 0.990854245

Track 2:
Team | Final Score
asdfqwer | 0.9629342989

Track 3:
Team | Final Score
asdfqwer | 0.5929659144
sgai_lab | 0.5465882277
DiDi_SSTG | 0.004723898158
The rapid progress of photorealistic synthesis techniques has reached a critical point where the boundary between real and manipulated images starts to blur. Thus, benchmarking and advancing digital forgery analysis have become a pressing issue. However, existing face forgery datasets either have limited diversity or only support coarse-grained analysis. To counter this emerging threat, we construct the ForgeryNet dataset, an extremely large face forgery dataset with unified annotations in image- and video-level data across four tasks: 1) Image Forgery Classification, including two-way (real / fake), three-way (real / fake with identity-replaced forgery approaches / fake with identity-remained forgery approaches), and n-way (real and 15 respective forgery approaches) classification; 2) Spatial Forgery Localization, which segments the manipulated area of fake images compared to their corresponding source real images; 3) Video Forgery Classification, which re-defines video-level forgery classification with manipulated frames in random positions (this task is important because attackers in the real world are free to manipulate any target frame); and 4) Temporal Forgery Localization, which localizes the temporal segments that are manipulated. ForgeryNet is by far the largest publicly available deep face forgery dataset in terms of data scale (2.9 million images, 221,247 videos), manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations), and annotations (6.3 million classification labels, 2.9 million manipulated-area annotations, and 221,247 temporal forgery segment labels). We perform extensive benchmarking and studies of existing face forensics methods and obtain several valuable observations.
To access the ForgeryNet dataset, please visit its GitHub repository. You can also find the detailed data description and usage in the download file.
In Phase 1, the test data can be downloaded at https://drive.google.com/file/d/1conYQXWguAwJ1eEwewHyMBGUtgjgR_sM/view?usp=sharing
In Phases 2 and 3, the test data can be downloaded at https://drive.google.com/file/d/1CSJOkDR_jJvq7qUGP8oGcwpoPdKGzfgb/view?usp=sharing
[UPDATE]
In the private test, participants should submit a public Docker image for evaluation.
The mail format is:
Title: [Track n]-[Team name]-[First submit]
mail-to: forgerynet@gmail.com
content:
docker hub: forgerynet/forgerynet-submit
or
registry.cn-shenzhen.aliyuncs.com/forgerynet_submit/forgerynet-submit:1.0
Submit the data in a .zip file. The zip file should contain a single file named 'eval-job-197000000000-trackn.csv'. The CSV should have two columns, filename and fake probability, with a space as the separator; that is, each line should be "filename fake_probability\n". Ensure that there is no blank line at the end of the file.
example:
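(A minimal illustration with hypothetical filenames; refer to the sample in the download file for the authoritative format.)

    0001.png 0.998
    0002.png 0.013
    0003.png 0.542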
Submit the data in a .zip file. The zip file should contain a single file named 'eval_pred.json'. The 'eval_pred.json' should follow the example below.
example:
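(A hypothetical sketch of the structure, assuming an ActivityNet-style result format in which each video maps to a list of predicted [start, end] segments with confidence scores; the sample in the download file and the example Docker image are the authoritative reference.)

    {
      "video_0001.mp4": [
        {"segment": [2.0, 5.5], "score": 0.91},
        {"segment": [10.0, 12.4], "score": 0.47}
      ]
    }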
July 12, 2021 - Submission start.
Sept. 12, 2021 - Public final submission deadline.
Sept. 19, 2021 - Private final submission deadline.
Oct. 4, 2021 - Technical report submission deadline.
Oct. 17, 2021 - Awards at the ICCV Workshop.
Please check the terms and conditions for further rules and details.
If you have any questions, please contact us by sending an email to forgerynet@gmail.com.
Evaluation Criteria for Classification
Considering face forgery detection as a binary classification problem, we leverage the AUC (Area Under the ROC Curve) as the evaluation criterion. Specifically, the fake class is 1 (true) and the real class is 0 (false).
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
The AUC determines the final ranking in classification tasks.
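For concreteness, the metric can be reproduced with scikit-learn (a minimal sketch, not the official evaluation code; the labels and scores below are placeholders):

    from sklearn.metrics import roc_auc_score

    # Ground-truth labels: fake = 1, real = 0 (placeholder values).
    labels = [1, 0, 1, 1, 0]
    # Predicted fake probabilities from the model (placeholder values).
    scores = [0.92, 0.10, 0.65, 0.80, 0.30]

    # AUC is the area under the ROC curve traced by (FPR, TPR)
    # as the decision threshold sweeps from 1 to 0.
    print(roc_auc_score(labels, scores))  # 1.0: every fake outscores every real here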
Evaluation Criteria for Temporal Localization
For each video, the forensics methods to be evaluated are expected to provide temporal boundaries of forgery segments and the corresponding confidence values. We follow the metrics used in the ActivityNet evaluation and employ Interpolated Average Precision (AP) as well as Average Recall@2 (AR@2) for evaluating predicted segments with respect to the ground-truth ones. To determine whether a detection is a true positive, we inspect the temporal intersection over union (tIoU) with a ground-truth segment and check whether or not it is greater than or equal to a given threshold (e.g., tIoU >= 0.5).
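As an illustration, tIoU between two segments can be computed as follows (a sketch assuming segments are given as [start, end] pairs in seconds; not the official evaluation code):

    def temporal_iou(pred, gt):
        """Temporal intersection over union of two [start, end] segments."""
        inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
        union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
        return inter / union if union > 0 else 0.0

    # A prediction counts as a true positive at threshold 0.5 if its tIoU with
    # some unmatched ground-truth segment is >= 0.5.
    print(temporal_iou([2.0, 5.5], [2.5, 6.0]))  # 0.75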
The ForgeryNet Challenge will run for around two months in three phases. The challenge will start together with CVPR 2021. Participants are restricted to training their algorithms on the publicly available ForgeryNet training dataset. Participants are expected to develop more robust and generalized methods for face forgery analysis in real-world scenarios.
When participating in the competition, please be reminded that:
Before downloading and using the ForgeryNet dataset, please agree to the following terms of use. You, your employer and your affiliations are referred to as "User". The authors and their affiliations, SenseTime, are referred to as "Producer".
@inproceedings{he2021forgerynet,
  title={ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis},
  author={He, Yinan and Gan, Bei and Chen, Siyu and Zhou, Yichun and Yin, Guojun and Song, Luchuan and Sheng, Lu and Shao, Jing and Liu, Ziwei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4360--4369},
  year={2021}
}
All rights reserved. Redistribution and use of the software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Start: July 12, 2021, midnight
Description: The online evaluation results must be submitted through this CodaLab competition site of the ForgeryNet Challenge. The online evaluation chances are refreshed every Friday at 11:59 p.m. (UTC). The organizer will offer some hints/code to ensure a successful submission. Note that challenge participants cannot use other external data, and the validation set cannot be used as training data.
Start: Sept. 12, 2021, midnight
Description:
1. Every submission should be contained in a Docker image. [You should submit a public Docker image path.]
2. No internet access is enabled.
3. The container has a run-time limit of 45 minutes on 24 vCPUs and a single Tesla V100-SXM2 32G. We recommend that you do not exceed the time limit.
4. External data must be freely and publicly available, including pre-trained models.
5. Any temp file created while running the container should be written to `/forgerynet_output/$your_team_name`. You can create this folder in your `run.sh` (see the sketch after this list). DO NOT create `/forgerynet_output/` in your image.
6. The dataset is mounted at the `/forgerynet_data/` directory in the container in a read-only manner.
7. The output format should be the same as in the public test.
8. You can pull `forgerynet/forgerynet-submit` from Docker Hub to get an example.
9. Send your Docker image path to forgerynet@gmail.com; we will respond to you as soon as possible.
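For orientation, a `run.sh` along these lines might look as follows (a hypothetical sketch: `predict.py`, its flags, and the team name are placeholders, and the example image `forgerynet/forgerynet-submit` on Docker Hub remains the authoritative reference):

    #!/bin/bash
    # Create the team-specific temp directory at run time
    # (do NOT bake /forgerynet_output/ into the image itself).
    mkdir -p /forgerynet_output/your_team_name
    # Read test data from the read-only mount and write the submission file
    # in the same format as the public test.
    python predict.py \
        --data_dir /forgerynet_data/ \
        --output_dir /forgerynet_output/your_team_name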
End: Sept. 19, 2021, 11:59 p.m.