The automatic generation of captions for images is a long-standing and challenging problem in artificial intelligence. To promote and measure progress in this area, we carefully created the Microsoft Common Objects in COntext (MS COCO) dataset to provide resources for training, validation, and testing of automatic image caption generation. Currently, the MS COCO 2014 dataset contains one million captions and over 160,000 images.
This CodaLab evaluation server provides a platform to measure performance on the validation and held-out test sets. The MS COCO Caption Evaluation API is provided to compute several performance metrics on caption generation results. More details about data collection and evaluation metrics can be found in the paper Microsoft COCO Captions: Data Collection and Evaluation Server.
The MS COCO Caption Evaluation API is used to evaluate results. The software takes both candidate and reference captions, applies sentence tokenization, and outputs several performance metrics, including BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR, and CIDEr-D.
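Before scoring, submitted captions must be serialized in the COCO results format: a JSON array with one entry per image, each carrying an integer "image_id" and a string "caption". The sketch below builds and validates such a file using only the standard library; the image ids and the output filename are placeholders, not values from the dataset.

```python
import json

# Candidate captions in the COCO results format: one entry per image,
# each with an integer "image_id" and a string "caption".
# The ids below are placeholders for illustration only.
results = [
    {"image_id": 404464, "caption": "a black and white photo of a street"},
    {"image_id": 380932, "caption": "a group of people standing on a beach"},
]

def validate(entries):
    """Basic sanity checks before uploading: every entry must carry both
    keys, with an int image id and a non-empty caption string."""
    for e in entries:
        assert isinstance(e["image_id"], int)
        assert isinstance(e["caption"], str) and e["caption"].strip()
    return len(entries)

# Serialize to a JSON file (the filename here is a placeholder).
validate(results)
with open("captions_results.json", "w") as f:
    json.dump(results, f)
```

A file in this shape can then be fed to the evaluation API, which tokenizes the candidate and reference captions and computes the metrics listed above.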
Start: March 15, 2015, midnight