Depth From Focus Competition on the DDFF 12-Scene Dataset

Organized by hazirbas

Current phase: TEST
Start: Dec. 14, 2017, midnight UTC
Competition ends: Never

Welcome to the Depth from Focus Competition

This is a competition for depth-from-focus methods. The main problem is to predict an accurate depth map from a focal stack, in which the focus gradually shifts from close to far distances. Methods are evaluated on the DDFF 12-Scene dataset.

Please cite the following paper if you participate in the competition:

@inproceedings{hazirbas18ddff,
  author = {C. Hazirbas and S. Soyer and M. Staab and L. Leal-Taixé and D. Cremers},
  title = {Deep Depth From Focus},
  month = {December},
  year = {2018},
  booktitle = {ACCV},
  eprint = {1704.01085},
  url = {https://hazirbas.com/projects/ddff/}
}
 

Evaluation Metrics

Results are reported for several error metrics, computed between the predicted and ground-truth disparity maps. Errors are computed over the disparity interval [0.28, 0.02] pixels, equivalent to [0.5, 7] meters. MSE and RMS are also reported for the actual depth errors.
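As a rough illustration of how such disparity errors can be computed, the sketch below masks pixels to the evaluated disparity interval and reports MSE and RMS. This is an assumption-laden sketch, not the official evaluation script; the function name and masking convention are hypothetical.

```python
import numpy as np

def disparity_errors(pred, gt, lo=0.02, hi=0.28):
    """Sketch of MSE/RMS over pixels whose ground-truth disparity lies in
    the evaluated interval (hypothetical, not the official DDFF script)."""
    mask = (gt >= lo) & (gt <= hi)          # restrict to the evaluated range
    diff = pred[mask] - gt[mask]
    mse = float(np.mean(diff ** 2))         # mean squared error
    rms = float(np.sqrt(mse))               # root mean square error
    return mse, rms
```

A perfect prediction yields zero for both metrics; pixels outside the interval are ignored.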

Disparity maps must be saved as float arrays under each test folder using the numpy.save() function.
Example: cafeteria/DISP_0001.npy, where DISP_0001.npy is the disparity map.
A "runtime.txt" file should also be included, where each line gives the runtime for one image in seconds.
Example: cafeteria/DISP_0001 0.6543
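A submission following the layout above could be produced roughly as follows. The `predict_disparity` function, the dummy stack shape, and the loop count are placeholders for your own method and data; only the output file layout follows the instructions.

```python
import os
import time
import numpy as np

def predict_disparity(focal_stack):
    # Placeholder for your depth-from-focus method; returns a dummy map here.
    return np.zeros(focal_stack.shape[1:], dtype=np.float32)

scene = "cafeteria"                      # one test folder per scene
os.makedirs(scene, exist_ok=True)

lines = []
for i in range(1, 3):                    # iterate over the scene's focal stacks
    stack = np.random.rand(10, 256, 256).astype(np.float32)  # dummy stack
    t0 = time.time()
    disp = predict_disparity(stack)
    runtime = time.time() - t0
    name = f"DISP_{i:04d}"
    np.save(os.path.join(scene, name + ".npy"), disp)  # float array, as required
    lines.append(f"{scene}/{name} {runtime:.4f}")      # one runtime line per image

with open("runtime.txt", "w") as f:      # per-image runtimes in seconds
    f.write("\n".join(lines) + "\n")
```

Each `.npy` file lands under its scene folder, and `runtime.txt` mirrors the `scene/DISP_xxxx runtime` format from the example above.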

Please see the following paper for details.

@article{hazirbas17ddff,
  title   = {Deep Depth From Focus},
  author  = {C. Hazirbas and L. Leal-Taixé and D. Cremers},
  journal = {arXiv preprint arXiv:1704.01085},
  month   = {April},
  year    = {2017},
}

Terms and Conditions

All data in the DDFF 12-Scene benchmark is licensed under a Creative Commons 4.0 Attribution License (CC BY 4.0).

