2018 FG Challenge: Recognizing Families In the Wild (RFIW2018)

Organized by jvision

First phase: Training (starts Jan. 5, 2018, midnight UTC)

Competition ends: March 1, 2018, midnight UTC


Family Classification (Track II)


Overview

This is the second large-scale kinship recognition data competition, held in conjunction with FG 2018.
It is built on the largest and most comprehensive database for kinship recognition, Families in the Wild (FIW).

RFIW2018 supports three laboratory-style evaluation protocols: (1) and (2) are repeated from RFIW2017, while (3) is being held for the first time. For the RFIW2018 website, see https://web.northeastern.edu/smilelab/RFIW2018/.
For more information on FG 2018, see https://fg2018.cse.sc.edu/.
For more information on the database, see the FIW homepage: https://web.northeastern.edu/smilelab/fiw/.
To look back at RFIW2017, visit https://web.northeastern.edu/smilelab/RFIW2017/.

Important dates

  • 2018.01.05 Team registration opens.
  • 2018.01.05 Training and validation data made available (Phase I).
  • 2018.01.05 Validation server online.
  • 2018.01.15 Validation labels released (Phase II).
  • 2018.02.01 Test "blind" set and labels for validation set are released; validation server closed (Phase III).
  • 2018.02.10 Test results and READMEs (i.e., brief descriptions of each submission) are due.
  • 2018.02.11 Results will be made public.
  • 2018.02.15 Notebook papers due.
  • 2018.05.?? RFIW2018 Challenge in conjunction with FG 2018.

Provided Resources

  • Scripts: Scripts will be provided to facilitate the reproducibility of the images and performance evaluation results once the validation server is online. More information is provided on the data page.
  • Contact: You can use the forum on the data description page (highly recommended!) or directly contact the challenge organizers by email (robinson.jo [at] husky.neu.edu and mshao [at] umassd.edu) if you have any questions.


Family Classification (Track 2)

This page describes Track 2 of RFIW2018, Family Classification, including a brief overview of the task, along with the data splits and metrics used. Any and all task-specific questions are encouraged to be asked on the public forum: if something seems unclear, lacks needed details, or is missing entirely, please point it out for the sake of current and future participants (modifications will be made accordingly). It is important to note that the data has been updated since RFIW 2017. If you participated in RFIW last year, discard the previous data splits and use the modified sets provided for RFIW 2018. An error will be thrown if you evaluate and submit runs against the old versions of the data splits, as many labels have been updated and the lists have been regenerated to account for this.

Overview

Provided multiple members from a set of known families (i.e., classes), the goal is to model each family (i.e., build a classifier) and determine to which of these families a set of unseen subjects belongs. Thus, Family Classification is a one-to-many problem.

Family classification focuses on a slightly different problem: given a facial image, find the family to which the pictured face belongs; i.e., families are modeled using facial images of other family members. Essentially, it is a one-to-many recognition problem, and it becomes more challenging as the number of families increases, since families contain large intra-class variations that typically fool feature extractors and classifiers. As in conventional facial recognition, when the target data are unconstrained faces in the wild (e.g., with variations in pose, illumination, expression, etc.), the task grows increasingly difficult, approaching the capability needed to handle real-world scenarios. These are, unfortunately, challenges that must be addressed for family recognition as well; i.e., the capability to recognize unconstrained families in the wild is needed in order to advance such technology for practical use.

For Family Classification, there is a pre-specified gallery of facial images for which all family labels are provided. The goal is to identify the family labels for a set of unseen faces. For example, the gallery could be composed of 25 families, each with at least 25 facial images of at least 5 family members. The task is then to determine which of the 25 families an unseen input face belongs to.

Note that the set of unseen test facial images depicts individuals who are not included in the gallery (e.g., assuming a minimum of 5 members makes up a family in the gallery, none of those 5 individuals will be used for testing; only additional, unseen family members will be).
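The one-to-many setup above can be sketched as a simple nearest-template baseline: each gallery family is represented by the mean of its members' face features, and an unseen face is assigned to the closest family by cosine similarity. This is only an illustrative sketch, not the official baseline; the feature extractor is left abstract (any face descriptor would do), and all names here are hypothetical.

```python
import numpy as np

def classify_family(probe_feats, gallery_feats, gallery_labels):
    """Assign each probe face feature to the family whose mean gallery
    feature (template) is nearest in cosine similarity."""
    gallery_labels = np.asarray(gallery_labels)
    families = sorted(set(gallery_labels.tolist()))
    # Build one template per family by averaging its gallery features.
    templates = np.stack([
        gallery_feats[gallery_labels == f].mean(axis=0) for f in families
    ])
    # L2-normalize so the dot product equals cosine similarity.
    templates /= np.linalg.norm(templates, axis=1, keepdims=True)
    probes = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    scores = probes @ templates.T  # shape: (n_probes, n_families)
    return [families[i] for i in scores.argmax(axis=1)]
```

In practice, stronger entries would replace the mean template with a per-family classifier (e.g., an SVM or a fine-tuned CNN head), but the one-to-many structure stays the same.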

Data Splits

 


[Figure] Photos of families sampled randomly from FIW (i.e., 27 of 1,000).

Evaluation Settings and Metrics

The results for this multi-class problem will be reported as top-1 error rates and visualized as confusion matrices.


[Figure] Confusion matrices used for Family Classification.
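The scoring above can be sketched as follows, assuming integer family labels; the official evaluation scripts (provided once the validation server is online) are authoritative.

```python
import numpy as np

def evaluate(y_true, y_pred, n_families):
    """Top-1 error and confusion matrix for a multi-class run.
    Rows of the confusion matrix are true families, columns predicted."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    top1_error = float(np.mean(y_true != y_pred))
    conf = np.zeros((n_families, n_families), dtype=int)
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    return top1_error, conf
```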

Data Splits

FIW includes a total of 1,000 families with multiple samples for each member. Thus, the range of modern-day data-driven approaches (i.e., deep learning models) now possible opens the door to many potential solutions to the family classification problem.

The data for family classification is split into 3 disjoint sets referred to as the Train, Validation, and Test sets. Ground truth for the training data will be provided during Phase 1, while submissions for the validation data can be uploaded for scoring. Ground truth for the validation data will then be made available during Phase 2. Lastly, the "blind" Test set will be released during Phase 3. No labels will be provided for the Test set until the competition is adjourned. Teams are asked to process the Test set only to generate submissions; hence, any attempt at analyzing or understanding the Test set is prohibited. Training and Validation are made up of 512 unique families, while the others will be reserved for testing.

 

Submissions

To submit results, follow these steps:

  1. Process each face image and record the results in the same order the images are listed. Predicted labels are integers representing the family IDs provided in the training set.
  2. Write a CSV file named results.csv (note the ',' delimiter).
  3. Create a zip archive containing results.csv and a readme.txt. Note that the archive should not include folders; both files should be in the root of the archive.
  4. The readme.txt file should contain a brief description of the method used to generate the results.
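The four steps above can be packaged as in the sketch below. The exact row format of results.csv (e.g., whether an image-name column is also required) is an assumption here; the scripts and data page provide the definitive format.

```python
import csv
import zipfile

def package_submission(predictions, out_zip="submission.zip",
                       readme_text="Brief description of the method used."):
    """Write results.csv (one predicted family ID per row, in the order
    the test faces are listed) plus readme.txt, then zip both files at
    the root of the archive, as the track requires (no folders)."""
    with open("results.csv", "w", newline="") as f:
        writer = csv.writer(f)  # ',' delimiter by default
        for family_id in predictions:
            writer.writerow([family_id])
    with open("readme.txt", "w") as f:
        f.write(readme_text)
    with zipfile.ZipFile(out_zip, "w") as zf:
        zf.write("results.csv", arcname="results.csv")  # at archive root
        zf.write("readme.txt", arcname="readme.txt")
    return out_zip
```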

Additional Resources

Source code and trained models will be made available on GitHub: https://github.com/visionjo/FIW_KRT.


Data Agreement

If you are participating in either challenge or using FIW data, please cite the following papers:

1) Joseph P. Robinson, Ming Shao, Yue Wu, and Yun Fu, "Families in the Wild (FIW): Large-scale Kinship Image Database and Benchmarks", in Proceedings of the 2016 ACM on Multimedia Conference.

@inproceedings{robinson2016fiw,
title={Families in the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks},
author={Robinson, Joseph P and Shao, Ming and Wu, Yue and Fu, Yun},
booktitle={Proceedings of the 2016 ACM on Multimedia Conference},
pages={242--246},
year={2016},
organization={ACM}
}


2) S. Wang, J. P. Robinson, and Y. Fu, "Kinship Verification on Families In The Wild with Marginalized Denoising Metric Learning," in FG, 2017 12th IEEE.

@InProceedings{kinFG2017,
author = {Wang, Shuyang and Robinson, Joseph P and Fu, Yun},
title = {Kinship Verification on Families in the Wild with Marginalized Denoising Metric Learning},
booktitle = {Automatic Face and Gesture Recognition (FG), 2017 12th IEEE International Conference and Workshops on}
}


3) Joseph P. Robinson, Ming Shao, Handong Zhao, Yue Wu, Timothy Gillis, and Yun Fu, "Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017," in RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild (2017).

@inproceedings{Fu:2017:3134421,
title={Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017},
author={Robinson, Joseph P and Shao, Ming and Zhao, Handong and Wu, Yue and Gillis, Timothy and Fu, Yun},
booktitle={RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild},
pages={5--12},
location = {Mountain View, California, USA},
publisher = {ACM},
address = {New York, NY, USA},
year={2017}
}

 

Submissions

 

 
See https://web.northeastern.edu/smilelab/RFIW2018/submissions.html for more information on submissions.


Organizers

RFIW is taking shape as a series of large-scale kinship recognition challenges, with RFIW2018 (i.e., this challenge) supporting an additional task (Tri-Subject Verification) and an added track (General Paper Submission), along with reruns of Tracks 1 and 2 (more details and links for each task and the Call for Papers are at https://web.northeastern.edu/smilelab/RFIW2018/). RFIW2017 results are summarized on the challenge site, https://web.northeastern.edu/smilelab/RFIW2017/.

 

Program Chairs Joseph Robinson (robinson.jo [at] husky [dot] neu [dot] edu) and Ming Shao (mshao [at] umassd [dot] edu) are the contact persons for the RFIW challenge.

 

More information about the FIW dataset is provided on the project page: https://web.northeastern.edu/smilelab/fiw/

Training

Start: Jan. 5, 2018, midnight

Description: Training and validation data made available. Labels available for Training. Server will be open for scoring.

Validation

Start: Jan. 15, 2018, midnight

Description: Labels for Validation made available. Evaluation scripts provided to participants. Validation will remain open for automatic scoring.

Challenge

Start: Feb. 4, 2018, midnight

Description: Test data release. Validation server closed. Open for final submissions.

Competition Ends

March 1, 2018, midnight
