The Large Scale Movie Description Challenge (LSMDC) 2017 : Movie Description

Organized by arohrbach


Blind test
Aug. 1, 2017, 5:48 p.m. UTC


Public test
Aug. 1, 2017, 5:48 p.m. UTC


Competition Ends
Oct. 1, 2017, 12:58 p.m. UTC


Automatically describing open-domain videos using rich natural sentences is among the most challenging tasks of computer vision, natural language processing and machine learning. To stimulate research on this topic, we propose the Large Scale Movie Description (LSMDC) Challenge, which features a unified version of the recently published large-scale movie datasets (M-VAD and MPII-MD). More information about the datasets can be found here.

In this challenge the task is to generate single sentence descriptions of individual video clips. The challenge consists of two phases: public test set evaluation and blind (where we will not provide the sentence descriptions) test set evaluation.

To participate, you should first create an account on CodaLab. In order to submit your results, please perform the following steps:

      • To officially take part in the challenge, you have to submit your results on both the public and the blind test sets.
      • Convert your generated descriptions into the following JSON format (a list with one entry per video clip):
        [
          {
            "video_id": int,
            "caption": str
          },
          ...
        ]
        where "video_id" is an integer starting from 1.
      • Name your JSON file publictest_[your_algorithm_name]_results.json or blindtest_[your_algorithm_name]_results.json, depending on the challenge phase, and zip it into an archive.
      • Go to the "Participate" tab, click "Submit / View Results", and select the respective challenge phase.
      • Fill in the form (specify any external training data used by your algorithm in the "Method description" field) and upload your ZIP archive.
      • Click "Refresh Status" to see how your submission is being processed. In case of errors, please check and correct your submission.
      • Once the submission is successfully processed, you can view your scores via "View scoring output log" and click "Post to leaderboard" to make your results publicly available. You can access the detailed evaluation output via "Download evaluation output from scoring step".
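The formatting and packaging steps above can be sketched as follows. This is an illustrative example only: the captions are placeholders, and "myalgo" stands in for [your_algorithm_name]; a real submission must cover every clip in the test set.

```python
import json
import zipfile

# Placeholder captions; a real submission has one caption per test clip.
captions = [
    "SOMEONE walks into the room.",
    "A car speeds down the highway.",
]

# Build the submission list: one entry per clip, video_id starting from 1.
submission = [
    {"video_id": i, "caption": c} for i, c in enumerate(captions, start=1)
]

# "myalgo" is a placeholder algorithm name.
json_name = "publictest_myalgo_results.json"
with open(json_name, "w") as f:
    json.dump(submission, f)

# Zip the JSON file into the archive that gets uploaded.
with zipfile.ZipFile("publictest_myalgo_results.zip", "w") as zf:
    zf.write(json_name)
```

The same script works for the blind test phase by switching the "publictest_" prefix to "blindtest_".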

Note that we allow up to 5 submissions per day (100 in total) for the public test phase and 1 submission per day (5 in total) for the blind test phase.


We thank the "Microsoft COCO Image Captioning Challenge" organizers for sharing the evaluation code.

The MS COCO Caption Evaluation API is used to evaluate the results. The software takes both candidate and reference captions, applies sentence tokenization, and outputs several performance metrics, including BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR, and CIDEr-D. More details can be found in the paper "Microsoft COCO Captions: Data Collection and Evaluation Server".
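As a rough illustration of what these metrics measure, a toy BLEU-1 score (clipped unigram precision times a brevity penalty) can be computed in a few lines. This is only a sketch: the official evaluation uses the full MS COCO toolkit with PTB tokenization, higher-order n-grams, and corpus-level aggregation.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Toy BLEU-1 against a single reference.

    Clipped unigram precision multiplied by a brevity penalty that
    discourages overly short candidate sentences. The official MS COCO
    evaluation also uses 2- to 4-grams and proper tokenization.
    """
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_counts = Counter(cand)
    ref_counts = Counter(ref)
    # Clip each candidate unigram count by its count in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# A candidate shorter than the reference is penalized even if every
# word it contains matches: score = exp(1 - 5/4) * 1.0 here.
score = bleu1("someone opens the door", "someone opens the door slowly")
```

A candidate identical to the reference scores 1.0; the example above scores exp(-0.25) ≈ 0.78 because of the brevity penalty.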

Winners will be selected based on a human evaluation of submissions on the blind test set (second phase of the challenge).


#   Username     Score
1   ElTanque     0.134
2   danieljf24   0.168
3   arohrbach    0.163