Photo Triage Benchmark

Organized by changhw

Photo Triage: The photo with the green star in each series is the one preferred by the majority of people, while the percentage below each other photo indicates what fraction of people would prefer that photo over the starred one in the same series.

Problem

People often take a series of nearly redundant pictures to capture a moment or scene. However, selecting which photos to keep or share from a large collection is a painful chore. To address this problem, we seek a relative quality measure within a series of photos taken of the same scene, using the Princeton Adobe Photo Triage dataset.

Benchmark

The dataset contains 15,545 unedited photos distilled from personal photo albums. The photos are organized into 5,953 series, and human preferences for each series were collected through a crowd-sourced user study. For the benchmark, we split the dataset into training, validation, and held-out test sets with a size ratio of about 25:1:5. This CodaLab challenge provides a platform to measure performance on the held-out test set. An evaluation script is provided that computes two performance metrics on the submitted predictions. The training, validation, and test data can be downloaded from the Photo Triage Website.

Please download the data from the dataset's download page before running the evaluation.

Evaluation

The evaluation measures prediction performance at two levels: series-level and pair-level. Both the 'trainval' and 'test' folders contain the list of pairs on which participants' methods should be trained or tested. Please submit your predictions for the pairs in the test_pairlist by storing them as a single column in answer.txt and uploading the zipped txt file. For more information, please refer to the README included with the data.
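As an illustration, the minimal Python sketch below writes one prediction per line into answer.txt and zips it for upload. The prediction encoding (a single value per pair in the test_pairlist, e.g. 1 if the first photo of the pair is preferred and 0 otherwise) is an assumption; please follow the exact format described in the README.

    import zipfile

    def write_submission(predictions, txt_path="answer.txt", zip_path="submission.zip"):
        # Write one prediction per line, forming the single column expected in answer.txt.
        with open(txt_path, "w") as f:
            for p in predictions:
                f.write(f"{p}\n")
        # Zip answer.txt so the archive can be uploaded to the challenge.
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.write(txt_path, arcname="answer.txt")

    # Example with dummy predictions (one value per pair in the test_pairlist).
    write_submission([1, 0, 1])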

If you want to test your method locally before submitting, you can run the evaluation script on the validation set offline.
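For a quick local sanity check of pair-level performance, a simple accuracy over pairs can be computed as in the sketch below, assuming one ground-truth label and one predicted label per validation pair. The official evaluation script shipped with the data defines the actual series-level and pair-level metrics and should be used for reported results.

    def pair_accuracy(predictions, labels):
        # Fraction of validation pairs whose predicted preference matches the ground truth.
        assert len(predictions) == len(labels)
        correct = sum(int(p == y) for p, y in zip(predictions, labels))
        return correct / len(labels)

    # Toy example: 3 of 4 pairs predicted correctly -> 0.75
    print(pair_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))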

Please refer to the Terms of Use of the dataset.

Challenge

Start: May 1, 2016, midnight UTC
Competition Ends: Never
