SALICON Saliency Prediction Challenge (LSUN 2017)

Organized by mjiang



Understanding and predicting visual attention in natural scenes is a challenging problem in vision. This challenge evaluates the performance of algorithms that predict visual saliency in natural scene images. It has three motivations: (1) to facilitate the study of attention in context and with non-iconic views, (2) to provide larger-scale human attentional data, and (3) to encourage the development of methods that leverage the multiple annotation modalities of Microsoft COCO. Saliency prediction results could, in turn, benefit other tasks such as recognition and captioning, since humans make multiple fixations to understand the visual input in natural scenes. Teams compete by training their algorithms on the SALICON dataset, and their results are compared against human behavioral data. We look forward to receiving submissions based on novel, context-aware saliency prediction models.

To maintain the consistency of evaluation results, we use the same evaluation tools as the MIT saliency benchmark; the Matlab code can be obtained here. Submissions will be ranked using four evaluation metrics: SAUC, IG, NSS, and CC. Other metrics, including AUC, SIM, and KL, will be computed but not used for ranking.
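For intuition about two of the ranking metrics, here is a minimal NumPy sketch of NSS (Normalized Scanpath Saliency) and CC (linear Correlation Coefficient), following their standard definitions. This is not the benchmark's Matlab code; the function names are illustrative, and it assumes the prediction is a 2-D array, the fixation map is binary, and the density map is a precomputed (typically Gaussian-blurred) fixation distribution.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """NSS: mean of the standardized (zero-mean, unit-variance)
    saliency values at the human fixation locations."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return s[fixation_map.astype(bool)].mean()

def cc(saliency_map, density_map):
    """CC: Pearson correlation between the predicted saliency map
    and the empirical fixation density map."""
    a = (saliency_map - saliency_map.mean()) / saliency_map.std()
    b = (density_map - density_map.mean()) / density_map.std()
    return (a * b).mean()
```

Both metrics are higher-is-better: a perfect prediction gives CC = 1, and NSS grows with how far above the map's mean the fixated locations score.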

Note that for SAUC and IG, the evaluation baseline is a center-prior map that aggregates fixations from all images except the one being evaluated.
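The leave-one-out center prior and the IG computation can be sketched as follows. This is an illustrative reading of the note above, not the official evaluation code: it assumes binary per-image fixation maps, normalizes both maps to probability distributions, and omits the smoothing that a real baseline map would typically apply to the aggregated fixations.

```python
import numpy as np

def leave_one_out_baseline(fixation_maps, i):
    """Center-prior baseline for image i: aggregate the fixation maps
    of every other image and normalize to a probability distribution.
    (A real baseline would usually also blur this aggregate.)"""
    agg = sum(m.astype(float) for j, m in enumerate(fixation_maps) if j != i)
    return agg / agg.sum()

def information_gain(saliency_map, baseline_map, fixation_map, eps=1e-12):
    """IG: average log-likelihood advantage (in bits per fixation) of
    the model over the baseline at the fixated pixels."""
    p = saliency_map / saliency_map.sum()
    q = baseline_map / baseline_map.sum()
    fix = fixation_map.astype(bool)
    return np.mean(np.log2(p[fix] + eps) - np.log2(q[fix] + eps))
```

A model that assigns more probability than the center prior to the actual fixations scores IG > 0; a model no better than the baseline scores 0 or below.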


The annotations in this dataset belong to the VIP lab and are licensed under a Creative Commons Attribution 4.0 License.


We do not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.


Start: June 5, 2017, midnight

Description: Please provide your team name, authors, corresponding author and email, institute, a brief method description. If external data is used, please specify.



Leaderboard:

Rank  Username  Score
1     rdroste   0.767
2     RecSal    0.747
3     xubinwei  0.747