2018 FG Challenge - Recognizing Families In the Wild (RFIW'18)

Organized by jvision

FG 2018 Recognizing Families In the Wild (RFIW2018) Challenge

Kinship Verification (Track I)


Overview

This is the second large-scale kinship recognition data competition, held in conjunction with FG 2018. It is built on the largest and most comprehensive database for kinship recognition, Families in the Wild (FIW).

RFIW2018 supports 3 laboratory-style evaluation protocols: (1) and (2) are repeated from RFIW2017, while (3) is held for the first time.

For the RFIW2018 website, see https://web.northeastern.edu/smilelab/RFIW2018/.
For more information on FG 2018, see https://fg2018.cse.sc.edu/.
For more information on the database, see the FIW homepage: https://web.northeastern.edu/smilelab/fiw/.
To look back at RFIW2017, visit https://web.northeastern.edu/smilelab/RFIW2017/.

A best paper award will be given to the top-performing team (in terms of both results and presentation).

Important dates

  • 2017.11.16 Team registration opens.
  • 2017.11.16 Training and validation data made available (Phase I).
  • 2017.11.17 Validation server online.
  • 2017.12.01 Validation labels released (Phase II).
  • 2017.12.23 "Blind" test set and labels for the validation set released; validation server closed (Phase III).
  • 2018.02.01 Test results and READMEs (i.e., brief descriptions of each submission) due (extended from 2018.01.07).
  • 2018.02.02 Results made public (extended from 2018.01.08).
  • 2018.02.15 Notebook papers due (extended from 2018.01.31).
  • 2018.05.?? RFIW2018 Challenge held in conjunction with FG 2018.

Provided Resources

  • Scripts: Scripts will be provided to facilitate reproducing the images and the performance evaluation results once the validation server is online. More information is provided on the data page.
  • Contact: Use the forum on the data description page (highly recommended!) or directly contact the challenge organizers by email (robinson.jo [at] husky.neu.edu and mshao [at] umassd.edu) if you have any questions.

 

RFIW Workshop and Challenge @ 2018 FG

 

Kinship Verification (Track 1)

This page describes Track 1 of RFIW2018, Kinship Verification, including a brief overview of the task along with the data splits and metrics used. Please ask any task-specific questions on the public forum: if something seems unclear, lacks needed details, or is missing entirely, point it out for the sake of current and future participants (modifications will be made accordingly). It is important to note that the data has been updated since RFIW 2017. If you participated in RFIW last year, discard the previous data splits and use the modified sets provided for RFIW 2018. An error will be thrown if you evaluate and submit runs on the old versions of the data splits, as many labels have been updated and the lists have been regenerated to account for this.

 

Overview

The goal of kinship verification is to determine whether the subjects in a pair of facial images are blood relatives of a particular type (e.g., parent-child). This is a classical boolean problem with classes kin and non-kin (i.e., related and unrelated, respectively). Thus, this task tackles the one-to-one view of automatic kinship recognition.
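
To make the one-to-one setting concrete, below is a minimal sketch (not a required or provided method) that scores a pair by thresholding the cosine similarity of two face descriptors. The embed function here is a hypothetical stand-in; any face encoder could be swapped in.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor; swap in any real face encoder."""
    return face_image.reshape(-1).astype(np.float64)

def verify_kin(face_a: np.ndarray, face_b: np.ndarray,
               threshold: float = 0.5) -> int:
    """Return 1 (KIN) if the cosine similarity of the two face
    descriptors exceeds the threshold, else 0 (NON-KIN)."""
    x, y = embed(face_a), embed(face_b)
    cos_sim = float(np.dot(x, y) /
                    (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
    return int(cos_sim > threshold)

# Toy example with random arrays standing in for aligned face crops:
rng = np.random.default_rng(0)
print(verify_kin(rng.random((64, 64)), rng.random((64, 64))))
```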



Prior research efforts have mainly considered parent-child kinship types, i.e., father-daughter (F-D), father-son (F-S), mother-daughter (M-D), and mother-son (M-S). Far less, but still some, attention has been given to sibling pairs, i.e., sister-sister (S-S), brother-brother (B-B), and siblings of opposite sex (SIBS). As research in both psychology and computer vision has revealed, different kin relations render different familial features. Hence, the different relationship (pair) types are typically handled independently during training (i.e., modeled separately). It follows that having more relationship (pairwise) types could boost both our intuition and our models. FIW provides a collection of facial pairs that is incomparably larger than related databases of its type. On top of that, four new relationship types (i.e., grandparent-grandchild) are introduced for the first time; these are displayed in the middle column of the table of facial image pairs below.

 

Data Splits

FIW includes a total of 654,304 face pairs spanning the 11 different kinship types. At this scale, modern data-driven approaches (i.e., deep learning models) become practical, opening the door to new solutions to the kinship verification problem.

The data for kinship verification is partitioned into 3 disjoint sets referred to as Train, Validation, and Test. Ground truth for Train will be provided during Phase 1 for self-evaluation, while runs on Validation can be submitted for scoring. Ground truth for Validation will be made available during Phase 2. The "blind" Test set will be released during Phase 3; no labels will be provided for it until the challenge is adjourned and results are reported. Teams are asked to process the Test set only to generate submissions; hence, any attempt to analyze or understand the Test set is prohibited. All sets are made up of an equal number of positive and negative pairs. Lastly, note that there is no family or subject identity overlap between any of the sets (a sanity-check sketch follows below).
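
As a concrete illustration of the no-overlap rule, here is a small sanity-check sketch. The pair-list file names and format (CSV rows whose first two columns are face paths with a leading family ID such as F0001) are assumptions for illustration, not the released format; adapt them to the actual lists.

```python
import csv

def family_ids(pair_list_csv: str) -> set:
    """Collect all family IDs referenced by a pair list (assumed format)."""
    ids = set()
    with open(pair_list_csv, newline="") as f:
        for row in csv.reader(f):
            for face_path in row[:2]:             # first two columns: face paths
                ids.add(face_path.split("/")[0])  # leading component = family ID
    return ids

# Hypothetical file names for the three splits:
train, val, test = (family_ids(p) for p in ("train.csv", "val.csv", "test.csv"))
assert not ((train & val) or (train & test) or (val & test)), \
    "family overlap between splits!"
```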

Links to download the data will be provided to registered participants. Register and access the data at the competition portal (coming soon).

RFIW2018 includes grandparent-grandchild pairs, marking the first time kin types spanning multiple generations are made available to the community for evaluation.


[Figure: table of example facial image pairs for each of the 11 relationship types, with the grandparent-grandchild types in the middle column]

Evaluation Settings and Metrics

As in conventional face verification, we offer 3 modes, listed and described as follows:

  1. Unsupervised: No labels given, i.e., no prior knowledge about kinship or subject IDs.
  2. Image-restricted: Kin/non-kin labels given for the training set, with no family overlap between the training and test sets.
  3. Image-unrestricted: Kinship labels and subject IDs given, which allows mining for additional negative pairs (see the sketch after this list).
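
To illustrate what the image-unrestricted mode permits, the sketch below mines extra negative pairs by pairing faces drawn from two different families. The (family ID → face list) input format is an assumption for illustration only.

```python
import random

def mine_negative_pairs(faces_by_family: dict, n_pairs: int, seed: int = 0):
    """Sample face pairs from two distinct families (guaranteed non-kin)."""
    rng = random.Random(seed)
    families = list(faces_by_family)
    pairs = []
    while len(pairs) < n_pairs:
        fam_a, fam_b = rng.sample(families, 2)  # two distinct families
        pairs.append((rng.choice(faces_by_family[fam_a]),
                      rng.choice(faces_by_family[fam_b])))
    return pairs

# Toy example with hypothetical file names:
demo = {"F0001": ["a.jpg", "b.jpg"], "F0002": ["c.jpg"], "F0003": ["d.jpg"]}
print(mine_negative_pairs(demo, 3))
```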

Participants will be allowed to make up to 6 submissions of different runs for each mode (i.e., teams participating in all 3 settings will be allowed to submit up to 18 sets of results). Note that runs must be processed independently of one another.

For all modes, verification accuracy is reported per pairwise type and averaged.
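
A sketch of how this metric can be computed is below; note that the unweighted mean over the 11 pairwise types is an assumption about how the averaging is done.

```python
def accuracy(pred, truth):
    """Fraction of pairs labeled correctly (labels are 0/1 integers)."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def rfiw_score(preds_by_type: dict, truth_by_type: dict):
    """Per-type verification accuracy and the mean across types."""
    per_type = {k: accuracy(preds_by_type[k], truth_by_type[k])
                for k in truth_by_type}
    return per_type, sum(per_type.values()) / len(per_type)

# Toy example with a single pair type:
per_type, avg = rfiw_score({"fd": [1, 0, 1]}, {"fd": [1, 1, 1]})
print(per_type, avg)
```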

Submissions

To submit results, follow these steps:

  • Process the set of face pairs for each of the 11 types and record the results in the same order as listed. Labels are 1 for KIN or 0 for NON-KIN (integers).
  • Write a CSV file for each type separately, named after the respective pair type, i.e., fd.csv (father-daughter), fs.csv (father-son), md.csv (mother-daughter), ms.csv (mother-son), ss.csv (sister-sister), bb.csv (brother-brother), sibs.csv (siblings), gfgd.csv (grandfather-granddaughter), gfgs.csv (grandfather-grandson), gmgd.csv (grandmother-granddaughter), gmgs.csv (grandmother-grandson). Note the delimiter ','; also, submissions must include all CSV files (named as listed) for scoring.
  • Zip an archive containing all (i.e., 11) CSV files and a readme.txt. Note that the archive should not include folders; all files should be in the root of the archive (see the sketch after this list).
  • The readme.txt file should contain a brief description of the method used to generate the results. Without it, a submission will not qualify for the competition.
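
Below is a minimal sketch of assembling such an archive. The one-integer-per-line CSV layout is an assumption based on the ordering instruction above; confirm the exact layout against the released evaluation scripts.

```python
import zipfile

# The 11 pair types, matching the required CSV file names.
PAIR_TYPES = ["fd", "fs", "md", "ms", "ss", "bb", "sibs",
              "gfgd", "gfgs", "gmgd", "gmgs"]

def write_submission(preds_by_type: dict, readme_text: str,
                     out: str = "submission.zip") -> None:
    """Write one CSV of 0/1 labels per pair type plus readme.txt,
    all at the archive root (no folders), as the rules require."""
    with zipfile.ZipFile(out, "w") as zf:
        for t in PAIR_TYPES:
            lines = "\n".join(str(int(p)) for p in preds_by_type[t]) + "\n"
            zf.writestr(f"{t}.csv", lines)      # e.g., fd.csv, fs.csv, ...
        zf.writestr("readme.txt", readme_text)  # brief method description

# Toy example: two dummy predictions per type.
write_submission({t: [1, 0] for t in PAIR_TYPES},
                 "Baseline: thresholded cosine similarity of face features.")
```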

 

 

 

 

 

Data Agreement

If you participate in either challenge or use the FIW data, please cite the following papers:

1) Joseph P. Robinson, Ming Shao, Yue Wu, and Yun Fu, "Families in the Wild (FIW): Large-scale Kinship Image Database and Benchmarks", in Proceedings of the 2016 ACM on Multimedia Conference.

@inproceedings{robinson2016fiw,
title={Families in the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks},
author={Robinson, Joseph P and Shao, Ming and Wu, Yue and Fu, Yun},
booktitle={Proceedings of the 2016 ACM on Multimedia Conference},
pages={242--246},
year={2016},
organization={ACM}
}

2) Shuyang Wang, Joseph P. Robinson, and Yun Fu, "Kinship Verification on Families in the Wild with Marginalized Denoising Metric Learning," in Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2017.

@InProceedings{kinFG2017,
author = {Wang, Shuyang and Robinson, Joseph P and Fu, Yun},
title = {Kinship Verification on Families in the Wild with Marginalized Denoising Metric Learning},
booktitle = {Automatic Face and Gesture Recognition (FG), 2017 12th IEEE International Conference and Workshops on},
year = {2017}
}

3) Joseph P. Robinson, Ming Shao, Handong Zhao, Yue Wu, Timothy Gillis, and Yun Fu, "Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017," in RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild, 2017.

@inproceedings{Fu:2017:3134421,
title={Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017},
author={Robinson, Joseph P and Shao, Ming and Zhao, Handong and Wu, Yue and Gillis, Timothy and Fu, Yun},
booktitle={RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild},
pages={5--12},
location = {Mountain View, California, USA},
publisher = {ACM},
address = {New York, NY, USA},
year={2017}
}

Submissions

See https://web.northeastern.edu/smilelab/RFIW2018/submissions.html for more information on submissions (i.e., continued terms and conditions, along with links to templates, the portal for authors, and more).



Training

Start: Nov. 15, 2017, midnight

Description: Training and validation data made available. Labels available for Training; the server will be open for scoring Validation runs.

Validation

Start: Dec. 1, 2017, midnight

Description: Labels for Validation made available. Evaluation scripts provided to participants. The Validation server will remain open for those who would rather upload results for automatic scoring and/or want to make sure their submissions are formatted properly.

Challenge

Start: Dec. 23, 2017, midnight

Description: Test data released. Validation server closed. Open for final submissions.

Competition Ends

Feb. 12, 2018, 11 p.m.
