NOTE: This challenge is read-only as CodaLab suspended their old evaluation server. The challenge has been migrated to https://codalab.lisn.upsaclay.fr/competitions/7404.
The automatic generation of captions for images is a long-standing and challenging problem in artificial intelligence. To promote and measure progress in this area, we carefully created the Microsoft Common Objects in COntext (MS COCO) dataset to provide resources for training, validating, and testing automatic image caption generation. Currently, the MS COCO 2014 dataset contains one million captions and over 160,000 images.
This CodaLab evaluation server provides a platform to measure performance on the validation and held-out test sets. The MS COCO Caption Evaluation API is provided to compute several performance metrics for evaluating caption generation results. More details about data collection and evaluation metrics can be found in the paper Microsoft COCO Captions: Data Collection and Evaluation Server.
To participate, see the instructions on the MS COCO website; in particular, the overview, download, format, evaluate (captions), and upload pages give further details. A sketch of the expected results format appears below.
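For concreteness, results are submitted as a JSON array pairing each image id with a single generated caption. The sketch below illustrates the general shape only; the image ids, caption text, and file name are hypothetical placeholders, not values from the dataset:

```python
import json

# Hypothetical candidate captions; image ids and text are placeholders only.
results = [
    {"image_id": 404464, "caption": "a black and white photo of a street"},
    {"image_id": 380932, "caption": "a group of people standing on a beach"},
]

# Write the submission file (assumed name; follow the naming convention
# given on the upload page for your own algorithm).
with open("captions_val2014_myalg_results.json", "w") as f:
    json.dump(results, f)
```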
The MS COCO Caption Evaluation API is used to evaluate results. The software takes both candidate and reference captions, applies sentence tokenization, and outputs several performance metrics, including BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR, and CIDEr-D; a usage sketch follows. More details can be found in the paper Microsoft COCO Captions: Data Collection and Evaluation Server.
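As a rough sketch of how the evaluation code is typically driven locally (assuming the coco-caption repository with its pycocotools and pycocoevalcap modules is installed; the annotation and results file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Load reference captions and candidate results (placeholder file names).
coco = COCO("annotations/captions_val2014.json")
coco_res = coco.loadRes("captions_val2014_myalg_results.json")

# Tokenize and score the candidates against the references.
coco_eval = COCOEvalCap(coco, coco_res)
coco_eval.params["image_id"] = coco_res.getImgIds()  # score only submitted images
coco_eval.evaluate()

# Each metric name maps to a corpus-level score.
for metric, score in coco_eval.eval.items():
    print(f"{metric}: {score:.3f}")
```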
Please refer to the MS COCO Terms of Use.
Start: March 15, 2015, midnight
End: Never