Chalearn LAP Inpainting Competition Track 3 - Fingerprint Denoising and Inpainting

Organized by mmadadi


Welcome to Chalearn LAP Inpainting Competition Track 3 - Denoising and inpainting for fingerprint verification 

Biometrics play an increasingly important role in security to ensure privacy and identity verification, as evidenced by the increasing prevalence of fingerprint sensors on mobile devices. Fingerprint retrieval also remains an important law-enforcement tool in forensics. However, much remains to be done to improve the accuracy of verification, both in terms of false negatives (due in part to poor image quality when fingers are wet or dirty) and in terms of false positives due to the ease of forgery. This track involves reducing noise (denoising) and/or replacing missing parts (inpainting) caused by various kinds of alterations or sensor failures in fingerprint images. This is viewed as a preprocessing step to ease verification carried out either by humans or by existing third-party software. To protect privacy and have full control over the experimental design, we synthesized a (very) large dataset of realistic artificial fingerprints for the purpose of training learning machines.

We generated the images by first degrading synthetic fingerprints with a distortion model (blur, brightness, contrast, elastic transformation, occlusion, scratch, resolution, rotation), then overlaying the fingerprints on top of various backgrounds. The resulting images are typical of what law enforcement agents have to deal with. For training you will get pairs of original and distorted images. Your goal will be to reconstruct the original images from the distorted images on a test dataset.
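As an illustration, the degradation step above can be sketched with a toy NumPy pipeline. This is a hypothetical approximation for intuition only, not the organizers' generator, which additionally applies elastic transformations, scratches, rotation, and resolution changes:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(fingerprint, background):
    """Toy degradation: blur, contrast/brightness jitter, occlusion,
    then overlay on a background image.

    Both inputs are uint8 grayscale arrays of the same shape; dark
    pixels are fingerprint ridges.
    """
    img = fingerprint.astype(np.float64)
    h, w = img.shape

    # crude 3x3 box blur
    padded = np.pad(img, 1, mode="edge")
    img = sum(padded[dy:dy + h, dx:dx + w]
              for dy in range(3) for dx in range(3)) / 9.0

    # random contrast and brightness jitter
    img = img * rng.uniform(0.7, 1.3) + rng.uniform(-20.0, 20.0)

    # rectangular occlusion (simulates a missing patch to be inpainted)
    y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
    img[y0:y0 + h // 4, x0:x0 + w // 4] = 255.0

    # overlay: keep the darker of fingerprint and background per pixel,
    # since ridges are dark on a light background
    out = np.minimum(img, background.astype(np.float64))
    return np.clip(out, 0, 255).astype(np.uint8)
```

The overlay rule (pixel-wise minimum) is one simple choice; any compositing model that preserves ridge structure while mixing in background texture would serve the same illustrative purpose.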

Thus, the objective of participants is to develop algorithms that can inpaint and denoise fingerprint images that contain artifacts like noise, scratches, etc. to improve the performance of subsequent operations like fingerprint verification that are typically applied on such images. Developed algorithms will be evaluated based on reconstruction performance. That is, participants will be required to reconstruct the degraded fingerprint images using their developed algorithms and submit the reconstructed fingerprint images. After the submission, the reconstructed fingerprint images will be compared against the corresponding ground-truth fingerprint images in the pixel space to determine the quality of the reconstructions.

Participants will have access to the synthetic training data (pairs of {ground-truth, degraded} images) and the synthetic validation/test set (degraded images only). Furthermore, we will make available to them:

  • The software to generate the degraded images.

  • The evaluation scripts.

  • A Jupyter-notebook as a “starting kit”, including data visualization tools, baseline reconstruction methods, and instructions to create submissions.

Evaluation Criteria

Submissions will be evaluated based on their physical and perceptual qualities with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), respectively (the average rank of these measures will be used to determine the winners). 

- PSNR is defined as the ratio between the peak power of the image and the power of the noise in the image (the Euclidean distance between the image and the reference image):

PSNR(t, y) = 10 · log10( DR² / MSE ),   MSE = (1/(3m)) · Σ_{i,j,k} (t_{i,j,k} − y_{i,j,k})²

where t is the reference image, y is the reconstructed image, DR is the dynamic range, and m is the total number of pixels in each of the three color channels.
 
- SSIM is defined as the multiplicative combination of the similarities between the image and the reference image in terms of contrast, luminance and structure, averaged over local windows:

SSIM(t, y) = mean_{i,j,k} [ (2 µ(t_{i,j,k}) µ(y_{i,j,k}) + C1)(2 σ(t_{i,j,k}, y_{i,j,k}) + C2) / ((µ(t_{i,j,k})² + µ(y_{i,j,k})² + C1)(σ(t_{i,j,k})² + σ(y_{i,j,k})² + C2)) ]

where µ(t_{i,j,k}), µ(y_{i,j,k}), σ(t_{i,j,k}), σ(y_{i,j,k}) and σ(t_{i,j,k}, y_{i,j,k}) are the means, standard deviations and cross-covariance of windows centered around pixel (i, j), and C1 = (0.01 · DR)² and C2 = (0.03 · DR)².
 
Next to these main metrics, the leaderboard will also display the mean squared error of the submissions.
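For intuition, the two main metrics can be sketched in a few lines of NumPy. This is a simplified stand-in, not the official evaluation script: in particular, `ssim_global` computes SSIM over a single whole-image window, whereas the official SSIM averages over local windows:

```python
import numpy as np

def psnr(t, y, dr=255.0):
    """PSNR between reference t and reconstruction y (same shape)."""
    mse = np.mean((t.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(dr ** 2 / mse)

def ssim_global(t, y, dr=255.0):
    """Single-window SSIM over the whole image (sketch of the formula only)."""
    t = t.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * dr) ** 2, (0.03 * dr) ** 2
    mu_t, mu_y = t.mean(), y.mean()
    var_t, var_y = t.var(), y.var()
    cov = ((t - mu_t) * (y - mu_y)).mean()
    return ((2 * mu_t * mu_y + c1) * (2 * cov + c2)) / (
        (mu_t ** 2 + mu_y ** 2 + c1) * (var_t + var_y + c2))
```

Identical images give an infinite PSNR and an SSIM of 1.0; worse reconstructions drive both scores down.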
 
In addition to evaluating the reconstruction performance on the synthetic test set with these metrics, the participants' code (which must be submitted) will also be evaluated on a real test set (which will not be made public) based on verification performance. Details will be provided in the later stages of the competition. The final ranking is computed as the mean over the ranks of all metrics.

Evaluation Script

 
If you want to test your method locally before submitting, you can download the evaluation script from the data page. Briefly, this script compares the submitted images with their ground-truth counterparts using the metrics defined above and returns the mean over the data set. Submissions must therefore follow a specific format.

Submission Format

The submissions should be in the form of a .zip file, which contains the reconstructed fingerprint images. The images in the .zip file should have the same name as the input images that were used to reconstruct them. For example:
 
Assuming that the validation_input folder contains the following images:
 
- 1.jpg
- 2.jpg
- 3.jpg
 
The .zip file should contain their reconstructed version with the exact same name:
 
- 1.jpg
- 2.jpg
- 3.jpg
 
Note that the reconstructed images should be zipped directly, not placed inside a folder first.
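Using Python's standard library, a submission archive matching this format could be built as follows (the function name `make_submission` is illustrative):

```python
import zipfile
from pathlib import Path

def make_submission(image_dir, zip_path):
    """Zip the reconstructed images flat: each archive entry is just the
    file name (1.jpg, 2.jpg, ...), with no enclosing folder."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(Path(image_dir).glob("*.jpg")):
            zf.write(img, arcname=img.name)  # arcname drops the directory prefix
```

Passing `arcname` is the key step: without it, `ZipFile.write` stores the full relative path, which would put the images inside a folder in the archive.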

 

CHALEARN Contest Rules for ChaLearn Looking at People

Image and Video Inpainting 2018

 

Official rules

Common terms used in these rules: 

These are the official rules that govern how the ChaLearn Looking at People Image and Video Inpainting 2018 contest promotion will operate. This promotion will be referred to simply as the “contest” or the “challenge” throughout the rest of these rules and may be abbreviated on our website, in our documentation, and in other publications as ChaLearn LAP 2018.

In these rules, “organizers”, “we,” “our,” and “us” refer to CHALEARN and "participant”, “you,” and “yourself” refer to an eligible contest participant.  

  1. Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s). There are three tracks associated with this contest, as described below:

  • Track 1: Pose estimation in occluded images. Given an image with multiple blocks of black pixels occluding the original image, and a mask indicating the position and size of these blocks, participants should restore the masked parts of the image in a way that resembles the original content and looks plausible to a human. Participants will be provided with a dataset composed of tens of thousands of images, where each image is centered on a human who may or may not be partially occluded. The ratio of background information to human body is roughly the same for all images, although other humans may appear in the background at an arbitrary size. The humans appear in a wide range of poses, from simple rigid poses to complicated sport-oriented poses with self-occlusions.

The following metrics will be used to evaluate participants' solutions: MSE, PSNR, DSSIM and WNJD. The average rank of these measures will be used to determine the winners.

  • Track 2: Video de-captioning. In this track, participants will develop methods to remove captions from videos. They will be provided with captioned and de-captioned versions of videos that can be used to develop their methods. For evaluation, participants will be provided with captioned videos, and performance will be determined by the ability of their methods to recover the original image sequences.

A new data set was designed for this video inpainting task. It consists of (X, Y) pairs, where X is a five-second video clip (containing encrusted text) and Y is the corresponding target video clip (without encrusted text).

The evaluation for this track will consist of two parts. The first will assess the quality of the reconstruction using MSE, PSNR and DSSIM; the average rank of these measures will be used to determine the winners. After that, the speed of video-clip generation will be measured to define a secondary ranking: after computing the standard inpainting measures from participants' predictions, we will run the proposed algorithms for a limited amount of time, and the number of processed video clips will be used to generate this ranking.

  • Track 3: Denoising and inpainting for fingerprint verification. In this track, participants will develop methods that can inpaint and denoise fingerprint images containing artifacts like noise, scratches, etc., with the goal of improving the performance of subsequent operations, like fingerprint verification, that are typically applied to such images. Synthetic and real data sets will be considered for the competition. In particular, a large data set with synthetic samples will be provided for development, and a test set will be provided for the final evaluation.

Evaluation on the synthetic data set will be performed with the following reconstruction-error measures: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM); the average rank of these measures will be used to determine the winners. Participants will be required to reconstruct the degraded fingerprint images in this set using their developed algorithms and submit the reconstructed fingerprint images.

For the three tracks, eligible entries received will be judged using the criteria described above to determine winners.

  2. Tentative Contest Schedule

The registered participants will be notified by email of any change in the schedule.

1st March, 2018: Beginning of the competition, release of development and validation data for all tracks.

30th June, 2018: Release of encrypted final evaluation data and validation labels. Participants can start training their methods with the whole data set.

13th July, 2018: Deadline for code submission.

14th July, 2018: Release of final evaluation data decryption key. Participants start predicting the results on the final evaluation data.

20th July, 2018: End of all tracks of the competition. Deadline for submitting the predictions over the final evaluation data. The organizers start the code verification by running it on the final evaluation data.

20th July, 2018:  Deadline for submitting the fact sheets.

25th July, 2018. Release of the verification results to the participants for review. Participants are invited to follow the paper submission guide for submitting contest papers.

9th September, 2018. ECCV 2018 Inpainting challenge, dissemination of results, award ceremony.

  3. Eligibility

You are eligible to enter this contest if you meet the following requirements:

  1. You are an individual or a team of people desiring to contribute to the tasks of the challenge and accepting to follow its rules; you are employed by a research laboratory, startup, or other legal entity having a scientific research department or activity; and

  2. You are NOT a resident of any country constrained by US export regulations included in the OFAC sanction page http://www.treasury.gov/resource-center/sanctions/Programs/Pages/Programs.aspx. Therefore residents of these countries / regions are not eligible to participate; and

  3. You are not an employee of CHALEARN or any of the sponsoring or co-organizing entities; and

  4. You are not involved in any part of the administration and execution of this contest; and

  5. You are not an immediate family member (parent, sibling, spouse, or child) or household member of an employee of CHALEARN or of a person involved in any part of the administration and execution of this contest.

This contest is void within the geographic area identified above and wherever else prohibited by law.

If you choose to submit an entry, but are not qualified to enter the contest, this entry is voluntary, and any entry you submit is governed by the remainder of these contest rules; CHALEARN reserves the right to evaluate it for scientific purposes. If you are not qualified to submit a contest entry and still choose to submit one, under no circumstances will such entries qualify for sponsored prizes.

  4. Entry

To be eligible for judging, an entry must meet the following content/technical requirements:   

  1. Entry contents: Participants are required to submit prediction results and code. To be eligible for prizes, the top-ranking participants are required to publicly release their code under a license of their choice, taken from among popular OSI-approved licenses (http://opensource.org/licenses), and make their code accessible on-line for a period of not less than three years following the end of the challenge (only required for the top three ranked participants of the competition). To be part of the final ranking, participants will be asked to fill out a survey (fact sheet) briefly describing their method. All participants are also invited (though not required) to submit a paper of at most 8 pages for the proceedings of the associated ECCV 2018 ChaLearn Workshop under evaluation (to be held in October 2018). To be eligible for prizes, top-ranked participants' scores must improve on the baseline performance provided by the challenge organizers.

  2. Pre-requisite: There is no pre-requisite to participate, including no requirement to have participated in previous challenges.

  3. Use of data provided: All data provided by CHALEARN are freely available to the participants from the website of the challenge under license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. ChaLearn and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the videos remains in property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify ChaLearn and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.

  • Test data: The organizers will use test data to perform the final evaluation, hence the participants’ final entry will be based on test data.
  • Training and validation data: The contest organizers will make available to the participants a training dataset with truth labels, and a validation set with no truth labels. The validation data will be used by the participants for practice purposes to validate their systems. It will be similar in composition to the test set (validation labels may be provided in the final test stage of the challenge).
  • Post-challenge analyses: The organizers may also perform additional post-challenge analyses using extra data, but the results will not affect the ranking of the challenge performed with the test data.
  4. Submission: The entries of the participants will be submitted on-line via the Codalab web platform. During the development period (quantitative competition), participants will receive immediate feedback on validation data released for practice purposes. For the final quantitative evaluation, the results will be computed automatically on test-data submissions. The performances on test data will not be released until the challenge is over.

  5. Original work, permissions: In addition, by submitting your entries into this contest you confirm that, to the best of your knowledge:

    1. Your entry is your own original work, the exception being participants of the coopetition using code shared by other participants; and

    2. Your entry only includes material that you own, or that you have permission from the copyright / trademark owner to use.

  5. Potential use of entry

Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:

  1. Are granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:

    1. Use, review, assess, test and otherwise analyze results submitted or produced by your code and other material submitted by you in connection with this contest and any future research or contests sponsored by; and

    2. Feature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);

  2. Agree to sign any necessary documentation that may be required for us and our designees to make use of the rights you granted above;

  3. Understand that we cannot control the incoming information you will disclose to our representatives or our co-sponsor's representatives in the course of entering, or what our representatives will remember about your entry. You also understand that we will not restrict work assignments of representatives or our co-sponsor's representatives who have had access to your entry. By entering this contest, you agree that use of information in our representatives' or our co-sponsor's representatives' unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law;

  4. Understand that you will not receive any compensation or credit for use of your entry, other than what is described in these official rules.

If you do not want to grant us these rights to your entry, please do not enter this contest.

  6. Submission of entries

    1. Follow the instructions on the Codalab website to submit entries.

    2. The participants will be registered as mutually exclusive teams. Each team may submit only one single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but are not functioning properly.

    3. The participants must follow the instructions. We will automatically disqualify incomplete or invalid entries.

  7. Judging the entries

The board of CHALEARN will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select three winners for each of the three tracks. Winners will be determined based upon the prediction score on test data. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.

The decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below.  In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

  8. Prizes and Awards

  1. ChaLearn, Google, Amazon, Disney Research, University of Barcelona, Human Pose Recovery and Behavior Analysis Group are the financial sponsors of this contest. There may be economic incentive prizes and travel grants for the winners (based on availability) to boost contest participation; these prizes will not require participants to enter into an IP agreement with any of the sponsors, to disclose algorithms, or to deliver source code to them.

  2. Incentive Prizes for each track

Award certificates and travel awards (based on availability) will be given to the top three ranked participants of each track. In addition, top-ranked participants will be invited to submit a paper to the associated ECCV 2018 Workshop (pending acceptance) and a paper to a special issue of a top-ranked journal, TBA.

(*) The amount of travel awards will be based on need and availability. The travel award may be used for one of the workshops organized in conjunction with the challenge. The award money will be granted in reimbursement of expenses including airfare, ground transportation, hotel, or workshop registration. Reimbursement is conditioned on (i) attending the workshop, (ii) making an oral presentation of the methods used in the challenge, and (iii) presenting original receipts and boarding passes.

  3. Travel awards: Other travel awards may be distributed to deserving participants based on need and availability.

4. If for any reason the advertised prize is unavailable, unless to do so would be prohibited by law, we reserve the right to substitute a prize(s) of equal or greater value, as permitted. We will only award one prize per team.  If you are selected as a potential winner of this contest:

    1. If your prize is not in cash, you may not exchange your prize for cash; you may not exchange any prize for other merchandise or services.

    2. You may not designate someone else as the winner. If you are unable or unwilling to accept your prize, we will award it to an alternate potential winner.

    3. If you accept a prize, you will be solely responsible for all applicable taxes related to accepting the prize.

    4. If you are a minor in your place of residence, we may award the prize to your parent/legal guardian on your behalf and your parent/legal guardian will be designated as the winner.

9. Other Sponsored Events

    1. To stimulate participation, the organizers are making available several channels of scientific paper publication. Publishing papers is optional and will not be a condition to entering the challenge or winning prizes.

    2. The results of the challenge will be presented in the competition program of WCCI 2018; an overview paper will also be published in the associated ECCV 2018 ChaLearn Workshop (pending acceptance). Authors of a selection of the best workshop papers may be invited to submit extended versions of their papers for a special issue in a top-tier journal.

The organizers may also sponsor other events to stimulate participation.

  10. Notifications

If there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified at the email they provided with the registration.

If you are a potential winner, we will notify you by sending a message to the e-mail address listed on your final entry within seven days following the determination of winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.

Winners who have entered the contest as a team will be responsible for sharing any prize among their members. The prize will be delivered to the registered team leader. If this person becomes unavailable for any reason, the prize will be delivered to the authorized account holder of the e-mail address used to make the winning entry.

If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity and liability/publicity release and applicable tax forms. If you are a potential winner and a minor in your place of residence, we may require that your parent or legal guardian be designated as the winner and that they sign a declaration of eligibility, use, indemnity and liability/publicity release on your behalf. If you (or your parent/legal guardian, if applicable) do not sign and return these required forms within the time period listed in the winner notification message, we may disqualify you (or the designated parent/legal guardian) and select an alternate winner.

  11. On-line notification

We will post changes in the rules or changes in the data as well as the names of confirmed winners (after contest decisions are made by the judges) online on http://gesture.chalearn.org/ and http://chalearnlap.cvc.uab.es/. This list will remain posted for one year or will be made available upon request by sending email to mmgesture@chalearn.org.

  12. Conditions. By entering this contest you agree:

    1. To abide by these official rules;

    2. To the extent allowable under applicable law, to release and hold harmless CHALEARN and sponsors, their respective parents, subsidiaries, affiliates, employees and agents from any and all liability or any injury, loss, damage, right, claim or action of any kind arising from or in connection with this contest or any prize won save for residents of the United Kingdom, Chile, Korea, Greece, Brazil, Turkey, Hong Kong, France and Germany with respect to claims resulting from death or personal injury arising from CHALEARN’s and University of Barcelona’s negligence, for residents of the United Kingdom with respect to claims resulting from the tort of deceit or any other liabilities that may not be excluded by law, and for residents of Australia in respect of any implied condition or warranty the exclusion of which from these official rules would contravene any statute or cause any part of these official rules to be void;

    3. That CHALEARN’s decisions will be final and binding on all matters related to this contest; and

    4. That by accepting a prize, CHALEARN and competition sponsors may use your team name, your name, and your place of residence online and in print, or in any other media, in connection with this contest, without payment or compensation to you. The declaration of eligibility, use, indemnity and liability/publicity release provided to the potential winner will make reference to obtaining his/her free consent to use his/her name and place of residence. In any case, the lack of such consent does not prevent the winner from receiving the prize.

    5. This contest will be governed by the laws of the state of California, and you consent to the exclusive jurisdiction and venue of the courts of the state of California for any disputes arising out of this contest. For residents of Austria only: you may withdraw your submission from this contest within seven days of your entry. If you withdraw within seven days of entry, your submission will be returned to you, and we will not make any use of your submission in the future. However, you will not be eligible to win a prize. If you do not withdraw within seven days of entry, you will be bound by the provisions of these official rules. For residents of the United Kingdom only: the provisions of the Contracts (Rights of Third Parties) Act 1999 will not apply to this agreement. For residents of New Zealand only: the provisions of the Contracts (Privity) Act of 1982 will not apply to this agreement. For Quebec residents: any litigation respecting the conduct or organization of a publicity contest may be submitted to the Régie des Alcools, des Courses et des Jeux for ruling. Any litigation respecting the awarding of a prize may be submitted to the Régie only for the purpose of helping the parties reach a settlement. For residents of Israel only: this agreement does not entitle third parties to benefits under this agreement as defined in Chapter “D” of the Contracts Act (General Part) – 1973.

    6. The data are available only for research and educational purposes, within the scope of the challenge. ChaLearn and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the videos remain the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify ChaLearn and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.

  13. Unforeseen event

If an unforeseen or unexpected event (including, but not limited to: someone cheating; a virus, bug, or catastrophic event corrupting data or the submission platform; someone discovering a flaw in the data or modalities of the challenge) that cannot be reasonably anticipated or controlled (also referred to as force majeure) affects the fairness and/or integrity of this contest, we reserve the right to cancel, change or suspend this contest. This right is reserved whether the event is due to human or technical error. If a solution cannot be found to restore the integrity of the contest, we reserve the right to select winners based on the criteria specified above from among all eligible entries received before we had to cancel, change or suspend the contest, subject to obtaining the approval of the Régie des Alcools, des Courses et des Jeux with respect to the province of Quebec.

Computer “hacking” is unlawful. If you attempt to compromise the integrity or the legitimate operation of this contest by hacking or by cheating or committing fraud in any way, we may seek damages from you to the fullest extent permitted by law. Further, we may ban you from participating in any of our future contests, so please play fairly.

  14. Sponsor

ChaLearn is the sponsor of this contest.

955 Creston Road,

Berkeley, CA 94708, USA

events@chalearn.org
and

University of Barcelona and its Human Pose Recovery and Behavior Analysis group are the co-sponsors of this contest. Additional sponsors may be added during the competition period.

15. Privacy

During the development phase of the contest and when they submit their final entries, contest participants do not need to disclose their real identity, but must provide a valid email address where we can deliver notifications to them regarding the contest. To be eligible for prizes, however, contest participants will need to disclose their real identity to the contest organizers, informing them by email of their name, professional affiliation, and address. To enter the contest, participants will need to become users of the Codalab platform. Any profile information stored on this platform can be viewed and edited by the users. After the contest, participants may cancel their account and cease to be users of that platform; all personal information will then be destroyed. The Codalab privacy policy will apply to contest information submitted by participants on Codalab. Otherwise, CHALEARN's privacy policy will apply to this contest and to all information from your entry that we receive directly from you or that you have submitted as part of your contest entry on Codalab. Please read the privacy policy on the contest entry page before accepting the official rules and submitting your entry. Please note that by accepting the official rules you are also accepting the terms of the CHALEARN privacy policy: http://www.chalearn.org/privacy.html.

16. DISCLAIMER

ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". THE ORGANIZERS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL CHALEARN AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE FOR THE CHALLENGE. 

Track 3 feed-back

Start: March 1, 2018, midnight

Description: You can submit your results on the validation set for track 3. Please use a high-speed connection to submit results and try to keep submissions under 400 MB. Each evaluation takes around 5-10 minutes.

Track 3 test

Start: July 14, 2018, midnight

Description: You can submit your results on the test set for track 3. Please use a high-speed connection to submit results and try to keep submissions under 400 MB. Each evaluation takes around 5-10 minutes.

Competition Ends

Never

# Username Score
1 CVxTz 1.0000
2 rgsl888 2.3333
3 sukeshadigav 3.3333