RFIW 2020 is the 4th large-scale kinship recognition data competition, held in conjunction with FG 2020. It is made possible by the release of the largest and most comprehensive image database for automatic kinship recognition, Families in the Wild (FIW).
RFIW 2020 will support 3 laboratory-style evaluation protocols: (1) Kinship Verification, repeated from previous editions; (2) Tri-Subject Verification and (3) Search & Retrieval, both supported for the first time.
Additionally, we will have general paper and Brave New Idea tracks. Visit the RFIW workshop webpage to learn more about the other tasks, tracks, and the challenge workshop as a whole.
The best paper award will be presented, and accepted papers will be included in the 2020 IEEE Proceedings of Automatic Face and Gesture Recognition (FG). Note that individuals and/or teams can participate in just one or all tasks; in any case, standings will be determined for each task separately.
Automatic kinship recognition holds promise for an abundance of applications, such as aiding forensic investigations by providing a powerful cue to narrow the search space (e.g., had it been known sooner that the Boston Bombers were brothers, the suspects may have been identified earlier). There are many beneficiaries of such technology, whether the consumer (e.g., automatic photo library management), the scholar (e.g., historic lineage & genealogical studies), the data analyst (e.g., social-media-based analysis), or the investigator (e.g., cases of missing children and human trafficking).
A fair question to ask is this: if it is so applicable, why is visual kinship recognition technology not found, or even prototyped, in real-world products? The reasons are two-fold: existing data did not properly represent the distributions of real-world families, and it was not available at the scale required by modern data-driven methods.
Both points were addressed with the introduction of our FIW database, which provides data distributions that properly represent real-world scenarios at scales much larger than ever before. FIW now allows researchers and practitioners to employ complex, modern-day data-driven methods (i.e., deep learning) in ways not previously possible.
In the end, we hope FIW serves as a rich resource to further bridge the semantic gap in facial recognition-based problems and to support the broader goals of human-computer interaction.
Participants can submit to one or all tasks; note that standings will be determined for each task separately. We next introduce the 3 tasks of RFIW 2020.
Kinship verification aims to determine whether a pair of facial images are blood relatives of a given type (e.g., parent-child). This is a classical Boolean problem, with system responses being either KIN or NON-KIN (i.e., related or unrelated, respectively). Thus, this task tackles the one-to-one view of automatic kinship recognition.
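For illustration only, here is a minimal verification sketch, assuming precomputed face features (e.g., from any off-the-shelf face CNN) and a similarity threshold tuned on the validation split; none of these choices are prescribed by the protocol:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_pair(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.5) -> str:
    """Label a pair KIN if similarity exceeds the (validation-tuned) threshold."""
    return "KIN" if cosine_similarity(feat_a, feat_b) >= threshold else "NON-KIN"
```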
More details are found in the data section of the competition.
Tri-Subject Verification focuses on a slightly different view of kinship verification: the goal is to decide whether a child is related to a pair of parents. This is a more realistic assumption, as having knowledge of one parent typically means knowledge of the other is accessible. This is the first time this task has been supported using FIW data.
More details are found in the data section of the competition (register via the link above).
Large-Scale Search and Retrieval targets finding family members of missing children. The problem is posed as a many-to-many ranking problem, as one or more faces are provided for each unique individual. The goal is to find family members of the search subjects (i.e., the probes) in a search pool (i.e., the gallery). The task simulates the real-world problem of missing persons.
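As a rough sketch of the ranking formulation, assuming one feature vector per face image and simple mean-pooling to fuse a probe subject's multiple faces (the fusion scheme is an assumption, not part of the protocol), a baseline could rank the gallery by cosine similarity to each probe:

```python
import numpy as np

def rank_gallery(probe_feats: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Rank gallery subjects for a single probe.

    probe_feats:   (n_probe_imgs, d) features of the probe's face images
    gallery_feats: (n_gallery, d)    one aggregated feature per gallery subject
    Returns gallery indices sorted from most to least similar.
    """
    # Fuse the probe's multiple faces into one template (mean pooling here).
    probe = probe_feats.mean(axis=0)
    probe /= np.linalg.norm(probe)
    gallery = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    scores = gallery @ probe          # cosine similarity to every gallery subject
    return np.argsort(-scores)        # descending order of similarity
```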
More details are found in the data section of the competition (register via the link above).
Contact Joseph Robinson (robinson.jo@husky.neu.edu) and Zaid Khan (khan.za@husky.neu.edu) for all inquiries pertaining to 2020 RFIW and FIW.
Tri-Subject Verification focuses on a slightly different view of kinship verification: the goal is to decide whether a child is related to a pair of parents. This is a more realistic assumption, as having knowledge of one parent typically means knowledge of the other is accessible. Following this, we propose adding this track, but with additional tri-subject pair types (e.g., given a pair of known siblings, determine whether an unknown subject is also a sibling). Furthermore, this will be done at scales far greater than ever before possible.
Triplet pairs consist of Father (F) / Mother (M) - Child (C) (FM-C), where the child C could be either a Son (S) or a Daughter (D). Hence, the triplet pairs are FM-S and FM-D.
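One possible (hypothetical) baseline for the FM-C setting fuses the child's similarity to each parent, e.g., by averaging, before thresholding; the fusion rule and threshold below are illustrative assumptions rather than part of the protocol:

```python
import numpy as np

def verify_tri_subject(father: np.ndarray, mother: np.ndarray,
                       child: np.ndarray, threshold: float = 0.5) -> str:
    """Decide whether a child face is related to a (father, mother) pair
    by averaging the two parent-child cosine similarities."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    score = 0.5 * (cos(father, child) + cos(mother, child))
    return "KIN" if score >= threshold else "NON-KIN"
```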
This is the first time tri-subject verification has been done using the large-scale FIW dataset. Efforts here were inspired by earlier work that introduced the tri-subject task to the machine vision community:
Qin, Xiaoqian, Xiaoyang Tan, and Songcan Chen. "Tri-subject kinship verification: Understanding the core of a family." IEEE Transactions on Multimedia 17.10 (2015): 1855-1867.
The data for tri-subject verification is split into 3 disjoint sets: train, val, and test. Ground truth for train is provided in Phase 1; this is for self-evaluation, as runs on val can be submitted for scoring. Ground truth for val is made available in Phase 2. The "blind" test set will be released during Phase 3. Labels for the test set are withheld until after the challenge, when the results are made public. Teams are asked to process the test set only to generate submissions; hence, any attempt to analyze or understand the test set is prohibited. All sets are made up of an equal number of positive and negative pairs. Lastly, note that there is no family or subject identity overlap between any of the sets.
For more information, see the data section.
More information about FIW dataset is provided on the project page: https://web.northeastern.edu/smilelab/fiw/
As with conventional face verification, we offer 3 modes, listed and described as follows:
Participants will be allowed to make up to 6 submissions of different runs for each mode (i.e., teams participating in all 3 settings will be allowed to submit up to 18 sets of results). Note that runs must be processed independently of one another.
The metric used is verification accuracy, which is provided per triplet-pair type (i.e., FM-D and FM-S). The overall averaged accuracy is used to determine the leaderboard. ROC curves will also be provided upon request for final papers.
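As a sketch of the scoring, assuming the leaderboard number is the mean of the per-type accuracies (the function and variable names below are placeholders, not an official script):

```python
import numpy as np

def accuracy_by_type(labels, predictions, pair_types):
    """Per-type verification accuracy plus the averaged accuracy used for ranking.

    labels, predictions: 0/1 arrays (NON-KIN / KIN)
    pair_types:          array of strings, e.g., "FM-D" or "FM-S"
    """
    labels, predictions, pair_types = map(np.asarray, (labels, predictions, pair_types))
    per_type = {t: float((predictions[pair_types == t] == labels[pair_types == t]).mean())
                for t in np.unique(pair_types)}
    overall = float(np.mean(list(per_type.values())))
    return per_type, overall

# ROC curves (available upon request for final papers) can be produced from raw
# similarity scores, e.g., with sklearn.metrics.roc_curve(labels, scores).
```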
Source code and trained models can be found on Github.
Special attention will be given to the submission that provides supporting code.
Contact Joseph Robinson (robinson.jo@husky.neu.edu) and Zaid Khan (khan.za@husky.neu.edu) for all inquiries pertaining to 2020 RFIW and FIW.
If you are participating in the challenge or using FIW data, please cite the following papers:
Joseph P. Robinson, Ming Shao, Yue Wu, Hongfu Liu, Timothy Gillis, Yun Fu, "Visual Kinship Recognition of Families in the Wild." In IEEE TPAMI, 2018.
Joseph P. Robinson, Ming Shao, Handong Zhao, Yue Wu, Timothy Gillis, Yun Fu, "Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017." In RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild, 2017.
S. Wang, J. P. Robinson, and Y. Fu, "Kinship Verification on Families In The Wild with Marginalized Denoising Metric Learning." In 12th IEEE AMFG, 2017.
Joseph P. Robinson, Ming Shao, Yue Wu, and Yun Fu, "Families in the Wild (FIW): Large-scale Kinship Image Database and Benchmarks." In Proceedings of the ACM on Multimedia Conference, 2016.
See https://web.northeastern.edu/smilelab/rfiw2020/index.html for more information on submissions (i.e., continued terms and conditions, along with links to templates, a portal for authors, and more).
Contact Joseph Robinson (robinson.jo@husky.neu.edu) and Zaid Khan (khan.za@husky.neu.edu) for all inquiries pertaining to 2020 RFIW and FIW.
Matthew A. Turk
Toyota Technological Institute at Chicago (TTIC)
https://www.ttic.edu/mtur
Yu Yin
Northeastern University
Zaid Khan
Northeastern University
RFIW 2020 supports the traditional Kinship Verification task, along with two new evaluations (i.e., Tri-Subject Verification and Search & Retrieval), as well as General Paper Submission and Brave New Ideas tracks. See the Challenge Page for more details.
Contact Joseph Robinson (robinson.jo@husky.neu.edu) and Zaid Khan (khan.za@husky.neu.edu) for all inquiries pertaining to 2020 RFIW and FIW.
Start: Dec. 3, 2019, midnight
Description: Training and validation data made available. Labels available for Training; the server will be open for scoring Validation submissions.
Start: Dec. 5, 2019, midnight
Description: Labels for Validation made available. Evaluation scripts provided to participants. The Validation server will remain open for those who would rather upload results for automatic scoring, or those looking to make sure their submissions are formatted properly.
Start: Jan. 13, 2020, midnight
Description: Test data release. Validation server closed. Open for final submissions.
End: Jan. 21, 2020, noon
| # | Username | Score |
|---|----------|-------|
| 1 | codezakh | 0.51 |
| 2 | DeepBlueAI | 0.50 |