2019 FG Challenge Recognizing Families In the Wild (RFIW19)

Organized by jvision


Recognizing Families In the Wild (RFIW2019)

A 2019 FG Challenge

Kinship Verification (Track I)


Overview

Recognizing Families In the Wild (RFIW) is a large-scale kinship recognition data challenge held in conjunction with the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019). RFIW 2019 (the 3rd edition) is organized using the largest and most comprehensive database for visual kinship recognition, Families in the Wild (FIW).
 
RFIW 2019 is supported by 3 laboratory-style evaluation protocols: (1) Kinship Verification (i.e., this track), (2) Family Classification, and (3) Tri-Subject, where (1) and (2) are repeated from prior editions and (3) is being held for the 1st time.
 
Register here to get more information and download resources for Family Classification via https://competitions.codalab.org/competitions/edit_competition/20196 and stay tuned for that track to go live (i.e., its portal opens soon)!
 

Look back at previous competitions, RFIW2017 and RFIW2018.
For more information about FIW visit https://web.northeastern.edu/smilelab/fiw/index.html.

Important dates

  • 2018.09.15 Team registration opens.
  • 2018.09.15 Training (labels) and validation (no labels) data made available (Phase Ⅰ).
  • 2018.09.15 Validation server online.
  • 2018.11.01 Validation labels released (Phase Ⅱ).
  • 2019.01.01 Test "blind" set and labels for validation set are released; validation server closed (Phase Ⅲ).
  • 2019.01.10 Test results and READMEs (i.e., brief descriptions of each submission) are due.
  • 2019.01.10 Challenge submissions due (i.e., scoring servers closed).
  • 2019.02.01 Challenge papers due.
  • 2019.02.10 Author notifications (i.e., oral or poster).
  • 2019.02.15 Results made public.
  • 2019.03.01 Camera-ready.
  • TBD RFIW2019 Challenge in conjunction with FG 2019; Winners announced during workshop.


This page describes Track 1 of RFIW 2019, Kinship Verification, including a brief task overview and specifications for the data splits and metrics used (i.e., the evaluation protocol). We encourage participants to raise any and all task-specific questions in the public forum provided as part of this portal. If something seems unclear, lacks needed detail, or is missing entirely, please point this out for the sake of current and future participants (i.e., modifications will be made accordingly). It is important to note that the data has been updated since last year's RFIW, so re-download the entire database. If you participated in RFIW last year, discard the previous data splits and use the modified sets provided for RFIW 2019. An error will be thrown if you evaluate and submit runs for old versions of the data splits, as many labels have been updated and the lists have been regenerated to account for this.

Overview

The goal of kinship verification is to determine whether or not a pair of face images are blood relatives of a certain type (e.g., parent-child). This is a classic binary (boolean) problem with classes KIN and NON-KIN (i.e., related and unrelated, respectively). Thus, this task tackles the one-to-one view of automatic kinship recognition.

Prior research has mainly considered parent-child kinship types, i.e., father-daughter (F-D), father-son (F-S), mother-daughter (M-D), and mother-son (M-S). Far less, but still some, attention has been given to sibling pairs, i.e., sister-sister (S-S), brother-brother (B-B), and siblings of opposite sex (SIBS). As research in both psychology and computer vision has revealed, different kin relations render different familial features. Hence, the different relationship (pair) types are typically handled independently during training (i.e., modeled separately). Thus, supporting more relationship (pairwise) types could boost both our intuition and model performance. FIW provides a collection of facial pairs that is incomparably larger than related databases. On top of that, FIW introduces 4 relationship types for the 1st time, i.e., the grandparent-grandchild types (Figure, column (c)).
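A common baseline for the one-to-one verification view is to embed each face with any face encoder and threshold a similarity score between the two embeddings. The sketch below assumes hypothetical precomputed feature vectors (the encoder and the threshold value are not specified by the challenge; the threshold would typically be tuned on the validation pairs):

```python
import numpy as np

def verify_kin(feat_a, feat_b, threshold=0.5):
    """Label a face pair KIN (1) or NON-KIN (0) by thresholding cosine similarity.

    feat_a, feat_b: face embeddings from any encoder (hypothetical inputs).
    """
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return int(sim >= threshold)
```

This is only one possible approach; participants are free to score pairs however they like, as long as the final labels are the integers 1 (KIN) and 0 (NON-KIN).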

Data Splits

FIW contains 654,304 face pairs of 11 different pair types. Thus, the range of modern-day data-driven approaches (i.e., deep learning models) now possible is far greater than ever before.

The data for kinship verification are partitioned into 3 disjoint sets (i.e., train, validation, and test). Ground truth for the train set is released in Phase Ⅰ, during which the scoring server is open for evaluating the validation set. Then, validation labels are released in Phase Ⅱ. Finally, the "blind" test set is released at the start of Phase Ⅲ, for which no labels are provided (i.e., test-set labels are not meant to be known until the competition is adjourned). Thus, it is expected that participants will only process the test set to generate submissions; hence, any and all attempts to analyze and/or understand the test-set data are prohibited. All sets are made up of an equal number of positive and negative pairs. Lastly, there is no family or subject (i.e., identity) overlap between any of the sets.

FIW includes grandparent-grandchild pairs, and is the first of its kind to support relationship types that span multiple generations in the verification task.



Evaluation Settings and Metrics

As in conventional face verification, we offer 3 modes, which are listed and described as follows:

  1. Unsupervised: No labels given, i.e., no prior knowledge about kinship or subject IDs.
  2. Image-restricted: KIN and NON-KIN labels given for train set, with no family overlap between training and test sets.
  3. Image-unrestricted: Kinship labels & IDs given, allowing for mining of additional negative pairs.
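In the image-unrestricted setting, the provided subject IDs make it possible to mine additional negative pairs beyond those listed. A minimal sketch, assuming a hypothetical mapping from family IDs to face images (FIW's actual lists may be organized differently):

```python
import random

def mine_negative_pairs(faces_by_family, n_pairs, seed=0):
    """Sample NON-KIN pairs by drawing two faces from two different families.

    faces_by_family: dict mapping a family ID to a list of face image paths
    (hypothetical structure for illustration only).
    """
    rng = random.Random(seed)
    families = list(faces_by_family)
    negatives = []
    while len(negatives) < n_pairs:
        fam_a, fam_b = rng.sample(families, 2)  # two distinct families
        negatives.append((rng.choice(faces_by_family[fam_a]),
                          rng.choice(faces_by_family[fam_b])))
    return negatives
```

Because no family overlaps between splits, faces drawn from two different families are guaranteed NON-KIN, which is what makes this mining valid.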

Participants will be allowed to make up to 6 submissions of different runs for each mode (i.e., teams participating in all 3 settings will be allowed to submit up to 18 sets of results). Note that runs must be processed independently of one another.

For all modes, verification accuracy is reported per pairwise type and averaged.
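Concretely, the score is the accuracy computed within each of the 11 pair types, then averaged without weighting by the number of pairs per type. A minimal sketch of that scoring rule:

```python
def score_run(predictions, labels):
    """Compute verification accuracy per pair type and the unweighted average.

    predictions, labels: dicts mapping a pair type (e.g., 'fd') to a list of 0/1 labels.
    """
    per_type = {}
    for pair_type, preds in predictions.items():
        truth = labels[pair_type]
        correct = sum(int(p == t) for p, t in zip(preds, truth))
        per_type[pair_type] = correct / len(truth)
    average = sum(per_type.values()) / len(per_type)
    return per_type, average
```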

Submissions

To submit results, follow these steps:

  • Process the set of face pairs for each of the 11 types and record results in the same order as listed. Labels are 1 for KIN and 0 for NON-KIN (integers).
  • Write a CSV file for each type separately, named after the respective pair type, i.e., fd.csv (father-daughter), fs.csv (father-son), md.csv (mother-daughter), ms.csv (mother-son), ss.csv (sister-sister), bb.csv (brother-brother), sibs.csv (siblings), gfgd.csv (grandfather-granddaughter), gfgs.csv (grandfather-grandson), gmgd.csv (grandmother-granddaughter), gmgs.csv (grandmother-grandson). Note the delimiter ','; also, submissions must include all CSV files (named as listed) for scoring.
  • Create a zip archive containing all 11 CSV files and a readme.txt. Note that the archive should not include folders; all files should be at the root of the archive.
  • The readme.txt file should contain a brief description of the method used to generate the results. Without this, the submission will not qualify for the competition.
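The steps above can be sketched as a small packaging script. Note this is a sketch under assumptions: it writes one integer label per row in the order of the provided pair list; the exact column layout expected by the scoring server should be checked against the provided lists and forum.

```python
import zipfile

# The 11 pair-type file stems named in the submission instructions.
PAIR_TYPES = ['fd', 'fs', 'md', 'ms', 'ss', 'bb', 'sibs',
              'gfgd', 'gfgs', 'gmgd', 'gmgs']

def package_submission(results, readme_text, out_path='submission.zip'):
    """Write one CSV per pair type plus readme.txt, all at the archive root.

    results: dict mapping a pair type to a list of integer labels
    (1 = KIN, 0 = NON-KIN), in the same order as the provided pair list.
    """
    with zipfile.ZipFile(out_path, 'w') as zf:
        for pair_type in PAIR_TYPES:
            rows = '\n'.join(str(int(label)) for label in results[pair_type])
            zf.writestr(pair_type + '.csv', rows + '\n')  # no folders: files at root
        zf.writestr('readme.txt', readme_text)
    return out_path
```

Using `zipfile.writestr` with bare file names guarantees the no-folders requirement, since each entry is written directly at the archive root.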

Additional Resources

Source code and trained models will be made available on GitHub: https://github.com/visionjo/FIW_KRT.



Organizers

Joseph Robinson, Northeastern University
http://www.jrobsvision.com

Ming Shao, University of Massachusetts (Dartmouth)
http://www.cis.umassd.edu/~mshao/

Yun Fu, Northeastern University
http://www1.ece.neu.edu/~yunfu/

Training

Start: Sept. 15, 2018, midnight

Description: Training and validation data made available. Labels available for Training. Scoring server open.

Validation

Start: Nov. 10, 2018, midnight

Description: Labels for Validation made available. Evaluation scripts provided to participants. Validation will still be open for those who would rather upload results for automatic scoring and/or those looking to make sure their submissions are formatted properly.

Challenge

Start: Dec. 20, 2018, midnight

Description: Test data released. Validation server closed. Open for final submissions.

Competition Ends

Jan. 10, 2019, midnight
