This is the 5th large-scale kinship recognition data competition, held in conjunction with FG 2021. It is made possible by the release of the largest and most comprehensive image database for automatic kinship recognition, Families in the Wild (FIW).
RFIW2021 will support 3 laboratory-style evaluation protocols:
- Kinship Verification (one-to-one)
- Tri-Subject Verification (one-to-two)
- Search & Retrieval (many-to-many)
Additionally, we will have a general paper submission track (more to come).
Accepted papers will be included in the 2021 IEEE Proceedings of Automatic Face and Gesture Recognition (IEEE FG), and a best paper award will be given. Note that individuals and/or teams can participate in just one or all tasks; in any case, standings will be determined for each task separately.
Background
Automatic kinship recognition holds promise for an abundance of applications, such as aiding forensic investigations by providing a powerful cue to narrow the search space (e.g., had we known sooner that the Boston Bombers were brothers, the suspects may have been identified earlier). There are many beneficiaries of such technology, whether the consumer (e.g., automatic photo library management), the scholar (e.g., historic lineage & genealogical studies), the data analyzer (e.g., social-media-based analysis), or the investigator (e.g., cases of missing children and human trafficking).
A fair question to ask: if it is so applicable, why is visual kinship recognition technology not found, or even prototyped, in real-world products? The reasons for this are two-fold: existing image collections were far too small to train modern data-driven models, and their data distributions did not properly represent real-world scenarios.
Both points were addressed by introducing our FIW database, with data distributions to properly represent real-world scenarios available at scales much larger than ever before. FIW now allows researchers and practitioners to employ complex, modern-day data-driven methods (i.e., deep learning) in ways not possible before.
In the end, we hope FIW serves as a rich resource to bridge further the semantic gap of facial recognition-based problems to the broader human-computer interaction incentive.
Participants can submit to one or all tasks; note that standings will be determined for each task separately. We next introduce the 3 tasks of 2021 RFIW.
Kinship verification aims to determine whether a pair of facial images are blood relatives of a certain type (e.g., parent-child). This is a classical Boolean problem, with system responses being either KIN or NON-KIN (i.e., related or unrelated, respectively). Thus, this task tackles the one-to-one view of automatic kinship recognition.
More details are found in the data section of the competition.
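To make the one-to-one formulation concrete, here is a minimal sketch of how a pair can be scored: embed both faces with any face encoder and threshold the cosine similarity. The encoder, threshold value, and random stand-in embeddings below are illustrative assumptions, not part of the official protocol.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_pair(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.5) -> str:
    """Return the Boolean decision required by the task: KIN or NON-KIN."""
    return "KIN" if cosine_similarity(emb_a, emb_b) >= threshold else "NON-KIN"

# Toy usage: random vectors stand in for embeddings from a real face encoder.
rng = np.random.default_rng(0)
emb_a, emb_b = rng.normal(size=512), rng.normal(size=512)
print(verify_pair(emb_a, emb_b))
```

In practice, the decision threshold is typically tuned per relationship type on the validation pairs.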
Tri-Subject Verification focuses on a slightly different view of kinship verification: the goal is to decide whether a child is related to a pair of parents. This is a more realistic setting, as knowing one parent typically means knowledge of the other is accessible. This is the first time this task has been run on FIW data.
More details are found in the data section of the competition (register via the link above).
Large-Scale Search and Retrieval of family members of missing children. The problem is posed as a many-to-many ranking problem, as one to many faces are provided for each unique individual. The goal is to find family members of the search subjects (i.e., the probes) in a search pool (i.e., the gallery). The task simulates the real-world problem of finding missing persons.
More details are found in the data section of the competition (register via the link above).
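As a rough sketch of the many-to-many setup (not an official baseline): each probe and each gallery subject has one or more face embeddings, a gallery subject is scored by its best probe-to-gallery match, and the gallery is returned ranked by that score. The max-pooling fusion and the toy data below are assumptions made for illustration.

```python
import numpy as np

def subject_score(probe_embs: np.ndarray, gallery_embs: np.ndarray) -> float:
    """Score a gallery subject by the best cosine match over all image pairs."""
    p = probe_embs / np.linalg.norm(probe_embs, axis=1, keepdims=True)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    return float((p @ g.T).max())

def rank_gallery(probe_embs: np.ndarray, gallery: dict) -> list:
    """Return gallery subject IDs sorted from most to least likely relative."""
    scores = {sid: subject_score(probe_embs, embs) for sid, embs in gallery.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy usage: a probe with 3 images and a gallery of 2 subjects with 2 images each.
rng = np.random.default_rng(1)
probe = rng.normal(size=(3, 512))
gallery = {"subject_a": rng.normal(size=(2, 512)), "subject_b": rng.normal(size=(2, 512))}
print(rank_gallery(probe, gallery))
```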
Tri-Subject Verification focuses on a slightly different view of kinship verification: the goal is to decide whether a child is related to a pair of parents. This is a more realistic setting, as knowing one parent typically means knowledge of the other is accessible. Following this, we propose adding this track, but with additional tri-subject pair types (e.g., given a pair of known siblings, determine whether an unknown subject is also a sibling). Plus, this will be done at scales far greater than ever before possible.
Triplet pairs consist of Father (F) / Mother (M) - Child (C) (FM-C), where child C could be either a Son (S) or a Daughter (D). Hence, the triplet pairs are FM-S and FM-D.
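One simple way to reduce an FM-C triplet to a single decision is sketched below: compare the child to each parent separately and fuse the two similarities before thresholding. Averaging the two scores is an illustrative choice; learned fusion over the three faces is equally valid.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_triplet(father: np.ndarray, mother: np.ndarray, child: np.ndarray,
                   threshold: float = 0.5) -> str:
    """FM-C decision: average father-child and mother-child similarity, then threshold."""
    score = 0.5 * (cosine(father, child) + cosine(mother, child))
    return "KIN" if score >= threshold else "NON-KIN"

# Toy usage with random stand-in embeddings.
rng = np.random.default_rng(2)
f, m, c = (rng.normal(size=512) for _ in range(3))
print(verify_triplet(f, m, c))
```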
This is the first time tri-subject verification has been done using the large-scale FIW dataset. Efforts here were inspired by earlier work that introduced the tri-subject task to the machine vision community:
Qin, Xiaoqian, Xiaoyang Tan, and Songcan Chen. "Tri-subject kinship verification: Understanding the core of a family." IEEE Transactions on Multimedia 17.10 (2015): 1855-1867.
The data for tri-subject verification is split into 3 disjoint sets: Train, Val, and Test. Ground truth for Train is provided in Phase 1; this is for self-evaluation, and runs on Val can be submitted for scoring. Ground truth for Val is made available in Phase 2. The "blind" Test set will be released during Phase 3. Labels for the Test set will not be released until after the challenge, when the results are made public. Teams will be asked to process the Test set to generate submissions; hence, any attempt to analyze or understand the Test set is prohibited. All sets are made up of an equal number of positive and negative pairs. Lastly, note that no family or subject identity overlaps between any of the sets.
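For intuition only, the snippet below shows how a family-disjoint split with the above properties could be constructed: whole families are assigned to a single set, so no identity ever crosses Train, Val, or Test. The family IDs and split fractions are hypothetical; participants should use the official splits shipped with the data.

```python
import random

def split_families(family_ids, frac=(0.6, 0.2, 0.2), seed=0):
    """Assign whole families to Train/Val/Test so no subject appears in two sets."""
    rng = random.Random(seed)
    ids = list(family_ids)
    rng.shuffle(ids)
    n_train = int(frac[0] * len(ids))
    n_val = int(frac[1] * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# Toy usage with hypothetical family IDs.
train, val, test = split_families([f"F{i:04d}" for i in range(1000)])
print(len(train), len(val), len(test))  # 600 200 200
```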
For more information, see the data section.
As in conventional face verification, we offer 3 modes, which are listed and described as follows:
- Unsupervised: no labeled data of any kind may be used.
- Image-restricted: only the kin/non-kin labels of the pairs provided for training may be used.
- Image-unrestricted: family-level labels may be used, e.g., to generate additional training pairs.
Participants will be allowed to make up to 6 submissions of different runs for each mode (i.e., teams participating in all 3 settings will be allowed to submit up to 18 sets of results). Note that runs must be processed independently of one another.
The metric used is verification accuracy, which is reported per triplet-pair type (i.e., FM-D and FM-S). The overall averaged accuracy is used to determine the leaderboard. Also, ROC curves will be provided upon request for final papers.
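For self-scoring on the validation set, the snippet below computes accuracy per triplet type and the overall average used for the leaderboard. The record layout (type, prediction, label) is a hypothetical convenience; follow the official submission format when uploading results.

```python
from collections import defaultdict

def leaderboard_accuracy(records):
    """records: iterable of (triplet_type, predicted, truth) tuples,
    e.g., ("FM-D", "KIN", "NON-KIN"). Returns per-type accuracies and their mean."""
    correct, total = defaultdict(int), defaultdict(int)
    for ttype, pred, truth in records:
        total[ttype] += 1
        correct[ttype] += int(pred == truth)
    per_type = {t: correct[t] / total[t] for t in total}
    overall = sum(per_type.values()) / len(per_type)
    return per_type, overall

# Toy usage: one correct FM-D prediction and one incorrect FM-S prediction.
print(leaderboard_accuracy([("FM-D", "KIN", "KIN"), ("FM-S", "KIN", "NON-KIN")]))
# -> ({'FM-D': 1.0, 'FM-S': 0.0}, 0.5)
```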
Source code and trained models can be found on GitHub.
Special attention will be given to submissions that provide supporting code.
If you are participating in either challenge or using FIW data, please cite the following papers (BibTeX found below):
@article{robinson2021survey,
title = {Survey on the Analysis and Modeling of Visual Kinship: A Decade In the Making},
author = {Robinson, Joseph P and Shao, Ming and Fu, Yun},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)},
publisher = {IEEE Computer Society},
number = {01},
pages = {1--1},
year = {2021},
}
@inproceedings{robinson2020recognizing,
title = {Recognizing Families In the Wild (RFIW): The 4th Edition},
author = {Robinson, Joseph P and Yin, Yu and Khan, Zaid and Shao, Ming and Xia, Siyu and Stopa, Michael and Timoner, Samson and Turk, Matthew A and Chellappa, Rama and Fu, Yun},
booktitle = {15th IEEE International Conference on Automatic Face and Gesture Recognition},
organization = {IEEE},
pages = {857--862},
year = {2020},
}
@article{robinson2018fiw,
author = {Robinson, Joseph P and Shao, Ming and Wu, Yue and Liu, Hongfu and Gillis, Timothy and Fu, Yun},
title = {Visual Kinship Recognition of Families In the Wild},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)},
publisher = {IEEE Computer Society},
year = {2018},
}
@inproceedings{robinson2016fiw,
title = {Families In the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks},
author = {Robinson, Joseph P and Shao, Ming and Wu, Yue and Fu, Yun},
booktitle = {Proceedings of the 2016 ACM on Multimedia Conference},
organization = {ACM},
pages = {242--246},
year = {2016},
}
The Families In the Wild image database, created by Joseph P. Robinson, is the largest and most comprehensive image set of its kind available to the public, with collections of family photos from 1,000 families. The image collection is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Paper submissions are to be written in English, with at most 8 pages (plus references) in double-column format. The paper format must follow the same guidelines as ACM MM (see the proceedings-template for more information).
The review process is double-blind: authors will not know the names of the reviewers, nor will reviewers know those of the authors. Final decisions will be based on the dialogue between reviewers and authors, in addition to the rank of the submission (i.e., the team's place in the standings).
Dual submission is not allowed according to policies of FG 2021.
coming soon
Accepted and presented papers (and posters) will be published after the conference in FG Workshops proceedings together with the 2021 FG main conference papers.
The author kit includes a LaTeX template (acmart-master.zip). See sig-alternate.pdf for details on paper formatting and styles.
Please cite the four papers listed above, per the terms and conditions for use of our data.
Matthew A. Turk
Toyota Technological Institute at Chicago (TTIC)
https://www.ttic.edu/mtur
RFIW2021 supports the traditional verification task, along with two new evaluations (i.e., Tri-Subject Verification and Search & Retrieval), as well as General Paper Submission and Brave New Ideas tracks. See the Challenge Page for more details.
Contact Joseph Robinson (robinson.jo [at] husky [dot] neu [dot] edu, robinson.jo@husky.neu.edu) and Ming Shao (mshao [at] umassd [dot] edu, mshao@umassd.edu) for all inquiries pertaining to 2021 RFIW and FIW.
More information about the FIW dataset is provided on the project page: https://web.northeastern.edu/smilelab/fiw/.
Phase 1
Start: July 8, 2021, noon
Description: Training and validation data made available. Labels available for Training; the server will be open for scoring Validation submissions.
Phase 2
Start: July 8, 2021, noon
Description: Labels for Validation made available. Evaluation scripts provided to participants. The Validation server will remain open for those who would rather upload results for automatic scoring and/or want to make sure their submissions are formatted properly.
Phase 3
Start: Sept. 1, 2021, midnight
Description: Test data released. Validation server closed. Open for final submissions.
End: Sept. 10, 2021, noon