Recognizing Families In the Wild Data Challenge (4th Edition) in conjunction with FG 2020

Organized by jvision
Reward $500


RFIW Workshop and Challenge @ FG 2020

Search & Retrieval of Missing Children (Track III)


Overview

The 4th Recognizing Families In the Wild (RFIW) large-scale kinship recognition data competition is held in conjunction with FG 2020. It is made possible by the release of the largest and most comprehensive image database for automatic kinship recognition, Families in the Wild (FIW).

RFIW2020 supports three laboratory-style evaluation protocols: task (1) is repeated from previous editions, while tasks (2) and (3) are supported for the first time.

 

  1. Kinship Verification  (one-to-one)
  2. Tri-subject Verification (one-to-two) 
  3. Search and Retrieval (many-to-many)

 

 
Additionally, we will have general paper and Brave New Idea tracks. Visit the RFIW workshop website to learn more about the other tasks, tracks, and the challenge workshop as a whole.


A best paper award will be given, and accepted papers will be included in the proceedings of the 2020 IEEE International Conference on Automatic Face and Gesture Recognition (FG). Note that individuals and teams can participate in just one or all tasks; in any case, standings will be determined separately for each task.

Important dates

  • 2019.12.11 Team registration opens.
  • 2019.12.11 Training and validation data made available (Phase I).
  • 2019.12.11 Validation server online.
  • 2019.12.17 Validation labels released (Phase II).
  • 2020.01.13 Test "blind" set and labels for validation set are released; validation server closed (Phase III).
  • 2020.01.20 Test results and README(s) (i.e., brief descriptions of each submission) are due.
  • 2020.01.21 Results will be made public and standings listed on the leader-board.
  • 2020.01.28 Paper submission for task evaluations and general paper submissions are due.
  • 2020.02.06 Notification.
  • 2020.02.26 Camera-ready due.
  • 2020.05.[18-22] RFIW Challenge in conjunction with FG 2020.


Search & Retrieval of Missing Children

Overview

Track III addresses large-scale search and retrieval of family members of missing children. The problem is posed as a many-to-many ranking problem, as one or more faces are provided for each unique individual. The goal is to find family members of the search subjects (i.e., the probes) in a search pool (i.e., the gallery). The task simulates the real-world problem of missing persons.

A horrifying truth is that predators use online platforms to exploit children. Authorities may uncover a case but be unable to identify the child, even when faces are visible. The reason can simply be that the child has little to no record, and missing-person photos are often limited in number and outdated. Can we find relatives of children found digitally, in order to infer their true identities?

This is the first time search and retrieval has been done using the large-scale FIW dataset. Any and all questions, suggestions, and ideas are welcome and appreciated.

Data Splits

Data for large-scale Search & Retrieval is split into three sets referred to as train, val, and test: the first two include ground-truth labels for self-evaluation, while the last is reserved as the blind split with no labels provided. Participants are expected to process only the test set to generate results for submission; attempts to understand or interpret outputs of the test set are prohibited. For such analysis, use the val set. The train set is made up of complete families with labels (for details about family-level labels, see the README).

The val set, like test (except that test is blind), is composed of a set of search subjects, or missing children (i.e., probes). The number of samples per probe spans from one to many. The task is to search a gallery, also provided with val and test, and return for each probe a ranked list of all subjects in the gallery, ordered by how strongly the model predicts them to be relatives. The test set will consist only of sets of images for the probes. Diversity in terms of ethnicity is ensured for both sets: families span ethnic groups and continents (Caucasian, Asian, Spanish, and African American families, with roots in the USA, China, Japan, Spain, England, Vietnam, the Philippines, Brazil, and more). Effort was spent to balance the levels of diversity between val and test (i.e., so that the two closely mimic one another).

For Phase III, the test set, just like the unlabeled version of val, is then processed, with the output (i.e., the submission) being a ranked list over all members of the gallery for each probe. Again, participants are asked to process only the evaluation gallery when generating submissions. A minimal sketch of producing such ranked lists is shown below.
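
For illustration only, here is a minimal sketch (Python with NumPy) of how ranked lists over the gallery might be produced from face embeddings. The embedding model, the use of a single feature vector per probe, and cosine similarity as the scoring function are assumptions made for this example, not requirements of the protocol.

```python
import numpy as np

def rank_gallery(probe_feats, gallery_feats):
    """Return a (K x N) matrix of gallery indices, one ranked list per probe.

    probe_feats   : (K, d) array, one (e.g., averaged) face embedding per probe
    gallery_feats : (N, d) array, one embedding per gallery subject
    Embeddings are assumed L2-normalized, so the dot product is cosine similarity.
    """
    similarity = probe_feats @ gallery_feats.T   # (K, N) probe-to-gallery scores
    return np.argsort(-similarity, axis=1)       # most-similar gallery subject first

# Toy example with random vectors standing in for real face features.
rng = np.random.default_rng(0)
probes = rng.normal(size=(3, 512))
gallery = rng.normal(size=(10, 512))
probes /= np.linalg.norm(probes, axis=1, keepdims=True)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
ranked = rank_gallery(probes, gallery)   # ranked[i] lists all 10 gallery indices for probe i
print(ranked.shape)                      # (3, 10)
```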

For complete data descriptions, see the Data tab.
More information about the FIW dataset is provided on the project page: https://web.northeastern.edu/smilelab/fiw/

Evaluation Metrics

Mean Average Precision (MAP) is the main metric used for the leaderboard; however, CMC curves and Rank@K are also appropriate for such a problem. Thus, we include all three for analysis and reporting.

There are, of course, many other metrics that could provide different insights into the results. To learn more, a simple introductory source on information retrieval metrics is suggested. Feel free to reach out for specific metrics to be run on submissions (just provide a pointer to the submission(s) to be processed).

MAP

MAP provides a single measure of quality across recall levels. Among evaluation measures, MAP has been shown to score in a decisive and consistent manner (source). Average Precision (AP), as the name implies, is the precision averaged over the ranks k at which true relatives are retrieved for a single instance (i.e., search probe). Mathematically speaking:

    AP(f) = (1 / P_F) * Σ_k P(k) · rel(k)

where AP is a function of the f-th family of the set F, P_F is the number of true relatives of that family in the gallery, P(k) is the precision at cut-off k of the ranked list, and rel(k) indicates whether the item at rank k is a true relative.

MAP then averages the AP scores over all probes. Mathematically, it is defined as

    MAP = (1 / N) * Σ_{f=1..N} AP(f)

where N is the total number of missing children (i.e., search probes).
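
As a concrete reference, here is a short Python sketch of AP and MAP as defined above; the list-of-lists input format is an assumption for illustration, not the official evaluation script.

```python
import numpy as np

def average_precision(ranked_gallery_ids, relative_ids):
    """AP for one probe: precision at each rank that holds a true relative,
    averaged over P_F, the number of true relatives in the gallery."""
    relative_ids = set(relative_ids)
    hits, precisions = 0, []
    for k, gid in enumerate(ranked_gallery_ids, start=1):
        if gid in relative_ids:
            hits += 1
            precisions.append(hits / k)   # precision at cut-off k
    return sum(precisions) / len(relative_ids)

def mean_average_precision(ranked_lists, relatives_per_probe):
    """MAP: mean of AP over the N probes."""
    return float(np.mean([average_precision(r, rel)
                          for r, rel in zip(ranked_lists, relatives_per_probe)]))

# Toy check: the probe's two relatives sit at ranks 1 and 3 of the ranked list.
print(mean_average_precision([[7, 2, 5, 9]], [[7, 5]]))  # (1/1 + 2/3) / 2 ≈ 0.833
```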

Cumulative Matching Characteristics (CMC)

CMC curves are the most popular evaluation metrics for search and retrieval based problems. As a simple case, let us assume a single-gallery-shot setting: each subject in the gallery has just one instance. For each probe, an algorithm will rank all the gallery samples according to similarity (or distance) scores in order. The CMC top-k accuracy is

    acc_k = 1 if the top-k ranked gallery samples contain a true relative of the probe, and 0 otherwise.

In essence, it is just a shifted step function, with the final CMC curve computed as a running average across probes.

While single-gallery-shot CMC is well defined, there is no common agreement on how to handle the multi-gallery-shot setting, where each gallery identity can have multiple instances. In this competition, however, we assume each gallery subject is a single instance. Probes, on the other hand, are made up of as many images as are present for a particular probe (i.e., missing child).
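
A minimal sketch of the CMC computation under the single-instance assumption stated above; the ranked-list and label formats are illustrative assumptions.

```python
import numpy as np

def cmc_curve(ranked_lists, relatives_per_probe, max_k):
    """CMC: for k = 1..max_k, the fraction of probes whose top-k ranked gallery
    samples contain at least one true relative (averaged across probes)."""
    hits = np.zeros(max_k)
    for ranked, relatives in zip(ranked_lists, relatives_per_probe):
        relatives = set(relatives)
        # 1-based rank of the first true relative in this probe's ranked list
        first = next((k for k, gid in enumerate(ranked, 1) if gid in relatives), None)
        if first is not None and first <= max_k:
            hits[first - 1:] += 1   # step function: 0 before the first hit, 1 after
    return hits / len(ranked_lists)

# Toy example: first hits at rank 1 (probe 0) and rank 3 (probe 1).
print(cmc_curve([[4, 1, 2], [5, 6, 1]], [[4], [1]], max_k=3))  # [0.5 0.5 1.0]
```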

Rank@k

Rank@k measures correctness with respect to the top k. In other words, Rank@1 is the fraction of probes for which a correct answer is ranked first in the returned list; Rank@20 is the fraction for which a correct answer appears in the top 20. Essentially, these are specific points on the CMC curve for the respective value of k.
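
Equivalently, Rank@k can be read off the CMC curve or computed directly, as in this small sketch (same illustrative input format as above).

```python
def rank_at_k(ranked_lists, relatives_per_probe, k):
    """Rank@k: fraction of probes with at least one true relative in the top k."""
    hits = 0
    for ranked, relatives in zip(ranked_lists, relatives_per_probe):
        relatives = set(relatives)
        if any(gid in relatives for gid in ranked[:k]):
            hits += 1
    return hits / len(ranked_lists)

# Rank@1 and Rank@20 are simply the k = 1 and k = 20 points of the CMC curve.
print(rank_at_k([[4, 1, 2], [5, 6, 1]], [[4], [1]], k=1))  # 0.5
```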

Submissions

The format of submissions is kept simple: a K x N matrix in which each of the K rows (one per probe) contains the full list of N gallery indices as a ranked list. The order must be preserved, such that probes are listed in the submitted file in the same order as provided in the protocol lists.

Participants will be allowed to make up to 6 submissions of different runs for each task (i.e., teams participating in all three tasks will be allowed to submit up to 18 sets of results). Note that runs must be processed independently of one another. A sketch of writing out a submission file is provided below.
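
A sketch of writing out a K x N submission. Only the matrix shape and row order are specified above, so the CSV encoding and the filename here are assumptions; check the README for the exact file format.

```python
import csv
import numpy as np

def write_submission(ranked, path):
    """Write a K x N matrix of gallery indices, one ranked row per probe,
    preserving the probe order given in the protocol lists."""
    ranked = np.asarray(ranked)
    assert ranked.ndim == 2, "expected one ranked row of gallery indices per probe"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for row in ranked:
            writer.writerow(row.tolist())

# e.g., 3 probes ranked over a 10-subject gallery (see the earlier ranking sketch)
write_submission(np.argsort(-np.random.rand(3, 10), axis=1), "track3_run1.csv")
```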

Source code and trained models can be found on Github.

Special attention will be given to submissions that provide supporting code.

Contact

Contact Joseph Robinson (robinson.jo@husky.neu.edu) and Zaid Khan (khan.za@husky.neu.edu) for all inquiries pertaining to 2020 RFIW and FIW.


Data Agreement

If you are participating in the challenge or using FIW data, please cite the following papers:


Joseph P. Robinson, Ming Shao, Yue Wu, Hongfu Liu, Timothy Gillis, Yun Fu, "Visual Kinship Recognition of Families in the Wild." In IEEE TPAMI, 2018.

@article{robinson2018visual,
title = {Visual kinship recognition of families in the wild},
author = {Robinson, Joseph P and Shao, Ming and Wu, Yue and Liu, Hongfu and Gillis, Timothy and Fu, Yun},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {40},
number = {11},
pages = {2624--2637},
year = {2018},
publisher = {IEEE}
}



Joseph P. Robinson, Ming Shao, Handong Zhao, Yue Wu, Timothy Gillis, Yun Fu, "Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017." In RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild, 2017.

@inproceedings{Fu:2017:3134421,
title = {Recognizing Families In the Wild (RFIW): Data Challenge Workshop in conjunction with ACM MM 2017},
author = {Robinson, Joseph P and Shao, Ming and Zhao, Handong and Wu, Yue and Gillis, Timothy and Fu, Yun},
booktitle = {RFIW '17: Proceedings of the 2017 Workshop on Recognizing Families In the Wild},
pages = {5--12},
location = {Mountain View, California, USA},
publisher = {ACM},
address = {New York, NY, USA},
year = {2017}
}



S. Wang, J. P. Robinson, and Y. Fu, "Kinship Verification on Families In The Wild with Marginalized Denoising Metric Learning." In 12th IEEE AMFG, 2017.

@inproceedings{kinFG2017,
author = {Wang, Shuyang and Robinson, Joseph P and Fu, Yun},
title = {Kinship Verification on Families in the Wild with Marginalized Denoising Metric Learning},
booktitle = {Automatic Face and Gesture Recognition (FG), 2017 12th IEEE International Conference and Workshops on},
year = {2017}
}



Joseph P. Robinson, Ming Shao, Yue Wu, and Yun Fu, "Families in the Wild (FIW): Large-scale Kinship Image Database and Benchmarks." In Proceedings of the ACM on Multimedia Conference, 2016.

@inproceedings{robinson2016fiw,
title = {Families in the Wild (FIW): Large-Scale Kinship Image Database and Benchmarks},
author = {Robinson, Joseph P and Shao, Ming and Wu, Yue and Fu, Yun},
booktitle = {Proceedings of the 2016 ACM on Multimedia Conference},
pages = {242--246},
year = {2016},
organization = {ACM}
}

Submissions

See https://web.northeastern.edu/smilelab/rfiw2020/index.html for more information on submissions (i.e., continued terms and conditions, along with links to templates, a portal for authors, and more).

More information about FIW dataset is provided on the project page: https://web.northeastern.edu/smilelab/fiw/



Organizers

Honorary Chairs

 

 
• University of Maryland
• Toyota Tech. Institute at Chicago (TTIC)

 

General Chair

• Northeastern University
 

  

Workshop Chairs

• Northeastern University
• U-Massachusetts (Dartmouth)
• Southeast University (China)
• Mike Stopa
• Samson Timoner, ISMConnect
• Yu Yin, Northeastern University

Web and Publicity Co-Chairs

• Zaid Khan, Northeastern University


Training

Start: Dec. 11, 2019, midnight

Description: Training and validation data made available. Labels available for training; the server will be open for scoring validation submissions.

Validation

Start: Dec. 17, 2019, midnight

Description: Labels for validation made available. Evaluation scripts provided to participants. The validation server will remain open for those who would rather upload results for automatic scoring, and for those looking to make sure their submissions are formatted properly.

Challenge

Start: Jan. 10, 2020, midnight

Description: Test data released. Validation server closed. Open for final submissions.

Competition Ends

Jan. 21, 2020, noon
