ForgeryNet: Face Forgery Analysis Challenge 2021

Organized by heyinan

First phase: Track1 (Forgery Image Analysis) opens July 12, 2021, midnight UTC.

Competition ends: Sept. 19, 2021, 11:59 p.m. UTC.

Overview

[Private Leaderboard released]

Track1:

Team         Final Score
asdfqwer     0.990854245

Track2:

Team         Final Score
asdfqwer     0.9629342989

Track3:

Team         Final Score
asdfqwer     0.5929659144
sgai_lab     0.5465882277
DiDi_SSTG    0.004723898158

 

The rapid progress of photorealistic synthesis techniques has reached a critical point where the boundary between real and manipulated images starts to blur. Benchmarking and advancing digital forgery analysis have therefore become a pressing issue. However, existing face forgery datasets either have limited diversity or only support coarse-grained analysis. To counter this emerging threat, we construct the ForgeryNet dataset, an extremely large face forgery dataset with unified annotations in image- and video-level data across four tasks:

1) Image Forgery Classification, including two-way (real / fake), three-way (real / fake with identity-replaced forgery approaches / fake with identity-remained forgery approaches), and n-way (real and 15 respective forgery approaches) classification.
2) Spatial Forgery Localization, which segments the manipulated area of fake images compared to their corresponding source real images.
3) Video Forgery Classification, which re-defines video-level forgery classification with manipulated frames in random positions. This task is important because attackers in the real world are free to manipulate any target frame.
4) Temporal Forgery Localization, which localizes the temporal segments that are manipulated.

ForgeryNet is by far the largest publicly available deep face forgery dataset in terms of data scale (2.9 million images, 221,247 videos), manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations) and annotations (6.3 million classification labels, 2.9 million manipulated-area annotations and 221,247 temporal forgery segment labels). We perform extensive benchmarking and studies of existing face forensics methods and obtain several valuable observations.

 

Data Download

To access the ForgeryNet dataset, please visit its GitHub repository. A detailed data description and usage instructions can be found in the download file.

 

In phase 1, the test data can be downloaded at https://drive.google.com/file/d/1conYQXWguAwJ1eEwewHyMBGUtgjgR_sM/view?usp=sharing

In phase 2 and phase 3, the test data can be downloaded at https://drive.google.com/file/d/1CSJOkDR_jJvq7qUGP8oGcwpoPdKGzfgb/view?usp=sharing

Submission

[UPDATE]

In the private test, participants should submit a public Docker image for evaluation.

The email format is:
Title: [Track n]-[Team name]-[First submit]
mail-to: forgerynet@gmail.com
content:
docker hub: forgerynet/forgerynet-submit
or
registry.cn-shenzhen.aliyuncs.com/forgerynet_submit/forgerynet-submit:1.0

For phase1 and phase2

Submit the data in a .zip file containing a single file named 'eval-job-197000000000-trackn.csv', where n is the track number. The CSV should have two columns, filename and fake probability, separated by a space; i.e., each line should be "id label\n". Ensure that there is no blank line at the end of the file.

Example:
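A minimal sketch in Python of producing such a file; the filenames and probabilities below are hypothetical placeholders, and the track number in the file name should match the track you submit to:

    import zipfile

    # Hypothetical (filename, fake probability) predictions.
    predictions = [
        ("000001.jpg", 0.9982),
        ("000002.jpg", 0.0134),
    ]

    with open("eval-job-197000000000-track1.csv", "w") as f:
        for name, prob in predictions:
            f.write(f"{name} {prob}\n")  # space-separated "id label" lines

    # Package the single CSV into the .zip file to be submitted.
    with zipfile.ZipFile("submission.zip", "w") as zf:
        zf.write("eval-job-197000000000-track1.csv")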

For phase3

Submit the data in a .zip file containing a single file named 'eval_pred.json', formatted as in the example below.

Example:
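The exact JSON schema is not spelled out on this page; as a rough sketch only, the example below assumes an ActivityNet-style results layout (each video ID maps to a list of predicted segments with confidence scores), with all IDs and numbers being hypothetical placeholders. The example in the download file is authoritative:

    import json
    import zipfile

    # Hypothetical temporal forgery predictions, assuming an
    # ActivityNet-style layout: video ID -> list of predicted
    # segments ([start, end]) with confidence scores.
    results = {
        "video_0001": [
            {"segment": [12.5, 30.0], "score": 0.91},
            {"segment": [48.2, 55.7], "score": 0.34},
        ],
    }

    with open("eval_pred.json", "w") as f:
        json.dump(results, f)

    # Package the single JSON file into the .zip file to be submitted.
    with zipfile.ZipFile("submission.zip", "w") as zf:
        zf.write("eval_pred.json")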

 

Timeline

July 12, 2021 - Submission start.
Sept. 12, 2021 - Public final submission deadline.
Sept. 19, 2021 - Private final submission deadline.
Oct. 4, 2021 - Technical report submission deadline.
Oct. 17, 2021 - Awards at ICCV Workshop.

 

General Rules

Please check the terms and conditions for further rules and details.

 

Contact Us

If you have any questions, please contact us by sending an email to forgerynet@gmail.com.

 

 

Evaluation Criteria of Classification

Considering face forgery detection as a binary classification task, we can leverage AUC (Area Under the ROC Curve) as the evaluation criterion. Specifically, the fake class is 1 (True) and the real class is 0 (False).

ROC curve

An ROC curve (receiver operating characteristic curve) is a graph showing the performance of a classification model at all classification thresholds. The curve plots two parameters: the True Positive Rate (TPR) and the False Positive Rate (FPR). The True Positive Rate is a synonym for recall and is therefore defined as follows:

                   TPR = TP / (TP + FN)

The False Positive Rate is defined as follows:

                   FPR = FP / (FP + TN)

An ROC curve plots TPR vs. FPR at different classification thresholds. Lowering the classification threshold classifies more items as positive, thus increasing both False Positives and True Positives. AUC stands for "Area Under the ROC Curve": it measures the entire two-dimensional area underneath the ROC curve (think integral calculus) from (0,0) to (1,1).

The AUC determines the final ranking in classification tasks.
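For a quick sanity check, the AUC described above can be reproduced with scikit-learn; a minimal sketch, where the labels follow the convention above (fake = 1, real = 0) and the scores are hypothetical model outputs:

    from sklearn.metrics import roc_auc_score

    # Ground-truth labels: fake = 1 (True), real = 0 (False).
    y_true = [1, 0, 1, 1, 0, 0]
    # Hypothetical predicted fake probabilities.
    y_score = [0.93, 0.12, 0.78, 0.55, 0.40, 0.05]

    # Area under the ROC curve traced by sweeping the threshold.
    print(roc_auc_score(y_true, y_score))  # 1.0 for this toy data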

 

Evaluation Criteria of Temporal Localization

For each video, the forensics methods to be evaluated are expected to provide the temporal boundaries of forgery segments together with the corresponding confidence values. We follow the metrics used in the ActivityNet evaluation and employ Interpolated Average Precision (AP) as well as Average Recall@2 (AR@2) for evaluating predicted segments with respect to the ground-truth ones. To determine whether a detection is a true positive, we inspect its temporal intersection over union (tIoU) with a ground-truth segment and check whether it is greater than or equal to a given threshold (e.g., tIoU >= 0.5).
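A minimal sketch of the tIoU check described above, with hypothetical segment boundaries:

    def temporal_iou(pred, gt):
        """tIoU of two [start, end] segments (in seconds)."""
        inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
        union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
        return inter / union if union > 0 else 0.0

    # A prediction counts as a true positive only if its tIoU with a
    # ground-truth segment reaches the threshold (e.g., tIoU >= 0.5).
    print(temporal_iou([10.0, 20.0], [15.0, 25.0]))  # 1/3, below 0.5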

 

Terms and Conditions

General Rules

The ForgeryNet Challenge will run for about two months in three phases, starting together with CVPR 2021. Participants are restricted to training their algorithms on the publicly available ForgeryNet training dataset. Participants are expected to develop more robust and generalized methods for face forgery analysis in real-world scenarios.

When participating in the competition, please be reminded that:

  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • No additional data may be used in any track of this challenge.
  • Each entry must be associated with a team and provide its affiliation.
  • You cannot sign up for CodaLab with multiple accounts, and therefore you cannot submit from multiple accounts.
  • Using multiple accounts to increase the number of submissions, as well as private sharing outside teams, is strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the right to adjust the competition schedule and rules as circumstances require.
  • The best entry of each team will be public on the leaderboard at all times.
  • To compete for awards, participants must fill out a fact sheet briefly describing their methods.

 

Terms of Use: ForgeryNet Dataset

Before downloading and using the ForgeryNet dataset, please agree to the following terms of use. You, your employer and your affiliations are referred to as "User". The authors and their affiliations, SenseTime, are referred to as "Producer".

  • The ForgeryNet dataset may be used for non-commercial/non-profit research purposes only.
  • All the images in the ForgeryNet dataset can be used for academic purposes. However, the Producer is NOT responsible for any further use in a defamatory, pornographic or any other unlawful manner, or in violation of any applicable regulations or laws.
  • The User takes full responsibility for any consequence caused by his/her use of the ForgeryNet dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the ForgeryNet dataset to any third party for any purpose.
  • The User can provide his/her research associates and colleagues with access to the ForgeryNet dataset (the download link or the dataset itself) provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues agree to be bound by these terms of use.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the ForgeryNet dataset.
  • This agreement is effective for any potential User of the ForgeryNet dataset upon the date that the User first accesses the ForgeryNet dataset in any form.
  • The Producer reserves the right to terminate the User's access to the ForgeryNet dataset at any time.
  • If you use the ForgeryNet dataset, please cite the following paper:
    @inproceedings{he2021forgerynet,
    title={ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis},
    author={He, Yinan and Gan, Bei and Chen, Siyu and Zhou, Yichun and Yin, Guojun and Song, Luchuan and Sheng, Lu and Shao, Jing and Liu, Ziwei},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={4360--4369},
    year={2021}
    }

 

Software

All rights reserved. Redistribution and use of the software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the ForgeryNet Consortium nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Track1: Forgery Image Analysis

Start: July 12, 2021, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the ForgeryNet Challenge. The online evaluation quota is refreshed every Friday at 11:59 p.m. (UTC). The organizer will offer some hints/code to ensure a successful submission. Note that challenge participants cannot use external data, and the validation set cannot be used as training data.

Track2: Forgery Video Analysis

Start: July 12, 2021, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the ForgeryNet Challenge. The online evaluation quota is refreshed every Friday at 11:59 p.m. (UTC). The organizer will offer some hints/code to ensure a successful submission. Note that challenge participants cannot use external data, and the validation set cannot be used as training data.

Track3: Forgery Video Temporal Localization

Start: July 12, 2021, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the ForgeryNet Challenge. The online evaluation quota is refreshed every Friday at 11:59 p.m. (UTC). The organizer will offer some hints/code to ensure a successful submission. Note that challenge participants cannot use external data, and the validation set cannot be used as training data.

Private Test

Start: Sept. 12, 2021, midnight

Description:

1. Every submission should be contained in a Docker image. [You should submit a public Docker image path.]
2. No internet access is enabled.
3. The container is limited to 45 minutes of run-time on 24 vCPUs and a single Tesla V100-SXM2 32G; we recommend staying well under the time limit.
4. External data must be freely and publicly available, including pre-trained models.
5. Any temporary file created while running the container should be written under `/forgerynet_output/$your_team_name`. You can create this folder in your `run.sh`. DO NOT create `/forgerynet_output/` in your image.
6. The dataset is mounted read-only at `/forgerynet_data/` inside the container.
7. The output format should be the same as in the public test.
8. You can pull `forgerynet/forgerynet-submit` from Docker Hub to get an example.
9. Send your Docker image path to forgerynet@gmail.com; we will respond as soon as possible.
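As a rough illustration of items 5-7 above, a container entrypoint might look like the Python sketch below; the team name, file name, and scoring logic are hypothetical placeholders, and the `forgerynet/forgerynet-submit` image remains the authoritative example:

    import os

    DATA_DIR = "/forgerynet_data"           # mounted read-only
    OUT_DIR = "/forgerynet_output/my_team"  # hypothetical team name

    # Create the team-specific output folder at run time; the parent
    # /forgerynet_output/ must NOT be baked into the image itself.
    os.makedirs(OUT_DIR, exist_ok=True)

    # Placeholder inference loop: walk the read-only test data and
    # write predictions in the same format as the public test.
    with open(os.path.join(OUT_DIR, "predictions.csv"), "w") as f:
        for name in sorted(os.listdir(DATA_DIR)):
            fake_prob = 0.5  # hypothetical model score
            f.write(f"{name} {fake_prob}\n")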

Competition Ends

Sept. 19, 2021, 11:59 p.m. UTC
