This page describes Track 2 of RFIW 2018, Family Classification, including a brief overview of the task along with the data splits and metrics used. Any and all task-specific questions are encouraged as public forum posts: if something seems unclear, lacks needed details, or is missing entirely, please point this out for the sake of current and future participants (i.e., modifications will be made accordingly). It is important to note that the data has been updated since RFIW 2017. If you participated in RFIW last year, discard the previous data splits and use the modified sets provided for RFIW 2018. An error will be thrown if you evaluate and submit runs for old versions of the data splits, as many labels have been updated and the lists have been regenerated to account for this.
Given multiple members from a set of known families (i.e., classes), the goal is to model each family (i.e., build a classifier) and determine to which of these families a set of unseen subjects belongs. Thus, Family Classification is a one-to-many problem.
Family classification focuses on a slightly different problem: given a facial image, find the family to which the face belongs, i.e., families are modeled using facial images of other family members. Essentially, it is a one-to-many recognition problem, and it becomes more challenging as the number of families increases, since families contain large intra-class variations that typically fool feature extractors and classifiers. As with conventional facial recognition, when the target data are unconstrained faces in the wild (e.g., variations in pose, illumination, expression, etc.), the task gets increasingly difficult as it approaches real-world scenarios. These are, unfortunately, challenges that must be addressed for family recognition as well, i.e., the capability to recognize unconstrained families in the wild is needed in order to advance such technology for practical use.
For Family Classification, there is a pre-specified gallery of facial images for which all family labels are provided. The goal is to identify the family labels of a set of unseen faces. For example, the gallery could be composed of 25 families, each with 25 facial images of at least 5 family members. The task is then to determine which of the 25 families an unseen input face belongs to.
Note that the set of unseen test facial images are of individuals not included in the gallery (e.g., assuming a minimum of 5 members makes up a family in the gallery, none of those 5 individuals will be used for testing; additional, unseen family members will be).
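As a sketch of this one-to-many setup, a minimal baseline might average the gallery members' feature vectors into one template per family and assign an unseen probe to the nearest template. The function names, the averaging step, and the nearest-mean rule below are illustrative assumptions, not an official baseline; any feature extractor and classifier could be swapped in.

```python
import numpy as np

def build_family_templates(features, labels):
    """Average each family's gallery feature vectors into a single
    per-family template (the "model" of that family)."""
    return {fam: features[labels == fam].mean(axis=0)
            for fam in np.unique(labels)}

def classify_face(probe, templates):
    """Assign an unseen face to the family whose template is nearest
    in Euclidean distance (a stand-in for any real classifier)."""
    fams = list(templates)
    dists = [np.linalg.norm(probe - templates[f]) for f in fams]
    return fams[int(np.argmin(dists))]
```

In practice the feature vectors would come from a face descriptor (e.g., a deep network embedding); the sketch only illustrates that families, not individuals, are the classes.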
Photos of families sampled randomly from FIW (i.e., 27 of 1,000).
The results for this multi-class problem will be reported as top-1 error rates and visualized as confusion matrices.
Confusion matrices used for Family Classification.
FIW includes a total of 1,000 families with multiple samples for each member. Thus, the full range of modern-day data-driven approaches (i.e., deep learning models) is now possible, opening doors to new solutions to the problem of family classification.
The data for family classification is split into 3 disjoint sets, referred to as the Train, Validation, and Test sets. Ground truth for the training data will be provided during Phase 1, while submissions for the validation data can be uploaded for scoring. Ground truth for the validation data will then be made available during Phase 2. Lastly, the "blind" Test set will be released during Phase 3. No labels will be provided for the Test set until the competition is adjourned. Teams are asked to process the Test set only to generate submissions; hence, any attempt to analyze or understand the Test set is prohibited. The Training and Validation sets are made up of 512 unique families, while the others will be released for testing.
Submissions should include
To submit results, follow these steps:
Source code and trained models will be made available on GitHub: https://github.com/visionjo/FIW_KRT.
If you are participating in either challenge or using FIW data, please cite the following papers:
1) Joseph P. Robinson, Ming Shao, Yue Wu, and Yun Fu, "Families in the Wild (FIW): Large-scale Kinship Image Database and Benchmarks," in Proceedings of the 2016 ACM on Multimedia Conference, 2016.
2) S. Wang, J. P. Robinson, and Y. Fu, "Kinship Verification on Families in the Wild with Marginalized Denoising Metric Learning," in Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2017.
3) Joseph P. Robinson, Ming Shao, Handong Zhao, Yue Wu, Timothy Gillis, and Yun Fu, "Recognizing Families In the Wild (RFIW): Data Challenge Workshop in Conjunction with ACM MM 2017," in RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild, 2017.
RFIW is taking shape as a series of large-scale kinship recognition challenges, with RFIW 2018 (i.e., this challenge) supporting an additional task (i.e., Tri-Subject Verification) and an added track (i.e., General Paper Submission), along with rerunning Tracks 1 and 2 (more details and links for each task and the Call for Papers are available here: https://web.northeastern.edu/smilelab/RFIW2018/). RFIW 2017 results are summarized on the challenge site: https://web.northeastern.edu/smilelab/RFIW2017/.
Program Chairs Joseph Robinson (robinson.jo [at] husky [dot] neu [dot] edu) and Ming Shao (mshao [at] umassd [dot] edu) are the contact persons for the RFIW challenge.
More information about the FIW dataset is provided on the project page: http://smile-fiw.weebly.com/
Phase 1 — Start: Jan. 5, 2018, midnight
Description: Training and validation data made available. Labels available for Training. Server will be open for scoring.
Phase 2 — Start: Jan. 15, 2018, midnight
Description: Labels for Validation made available. Evaluation scripts provided to participants. Validation will remain open for automatic scoring.
Phase 3 — Start: Feb. 4, 2018, midnight
Description: Test data released. Validation server closed. Open for final submissions.
End: March 1, 2018, midnight