The Large Scale Movie Description Challenge (LSMDC) 2017: Movie Retrieval

Organized by atousa


Movie Retrieval

Start: Aug. 25, 2016, midnight UTC

If you have questions, please email the organizers.


Natural language-based video and image search has been a long-standing topic of research in the information retrieval, multimedia, and computer vision communities. Several existing online platforms (e.g. YouTube) rely on massive human curation efforts and manually assigned tags; however, as the amount of unlabeled video content grows with the advent of inexpensive mobile recording devices (e.g. smartphones), the focus is rapidly shifting to automated understanding, tagging, and search. In this challenge, we would like to explore a variety of joint language-visual learning models for the video annotation and retrieval task, based on a unified version of the recently published large-scale movie datasets (M-VAD and MPII-MD). More information about the datasets and challenge can be found here.

Movie Retrieval: We compute Recall@1, Recall@5, Recall@10, and Median Rank for video retrieval (given a caption, rank the videos). The evaluation uses only 1000 samples of the public test set.

Other challenges:

  • LSMDC 2016: Movie Description, click here
  • LSMDC 2016: Movie Fill-in-the-Blank, click here
  • LSMDC 2016: Movie Multiple-Choice Test, click here




To participate, you should first create an account on CodaLab. To submit your results, perform these steps:

  • Convert your retrieval similarities (i.e. a higher value means higher similarity between a caption and a video) into the following JSON format:
    {
        "caption_id": int,
        "video_sims": "sim1\tsim2\t...\tsim1000"
    }
  • video_sims contains all 1000 similarities between the current caption (caption_id) and all 1000 videos, in order from video 1 to video 1000. Convert the similarities from float to string and separate them with tabs ("\t"). We also provide a script to create a JSON file in the above format here. PLEASE NOTE THAT IF YOU HAVE COMPUTED DISTANCES RATHER THAN SIMILARITIES, YOU NEED TO MULTIPLY THE VALUES BY (-1.0) BEFORE SUBMITTING YOUR RESULTS.
  • Name the JSON file "publictest_[your_algorithm_name]_results.json" and zip it into an archive. The ZIP file should have the same name as your JSON file, i.e. "publictest_[your_algorithm_name]".
  • Go to "Participate" tab, click "Submit / View Results".
  • Fill in the form (specify any external training data used by your algorithm in the "Method description" field) and upload your ZIP archive.
  • Click "Refresh Status" to see how your submission is being processed. In case of errors, please check and correct your submission.
  • Once the submission is successfully processed, you can view your scores via "View scoring output log" and click "Post to leaderboard" to make your results publicly available.
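The steps above can be sketched in Python. This is a minimal, unofficial sketch (the official script linked above is authoritative); it assumes the JSON file is a list of `{"caption_id", "video_sims"}` objects, one per caption, and that similarities are stored row by row in a 1000×1000 matrix:

```python
import json
import zipfile

def build_submission(sims, algorithm_name):
    """Write the submission JSON and wrap it in a matching ZIP.

    sims: a nested list where sims[i][j] is the similarity between
    caption i+1 and video j+1. If you computed distances instead,
    multiply them by -1.0 first (higher must mean more similar).
    The list-of-objects layout is an assumption; check the official
    script for the exact file structure.
    """
    entries = []
    for caption_idx, row in enumerate(sims, start=1):
        entries.append({
            "caption_id": caption_idx,
            # similarities as strings, tab-separated, video 1..N in order
            "video_sims": "\t".join(str(float(s)) for s in row),
        })
    json_name = "publictest_%s_results.json" % algorithm_name
    with open(json_name, "w") as f:
        json.dump(entries, f)
    # the ZIP archive shares the JSON file's base name
    with zipfile.ZipFile("publictest_%s.zip" % algorithm_name, "w") as zf:
        zf.write(json_name)
    return json_name
```

The resulting ZIP is what you upload under "Submit / View Results".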

Note that we allow up to 10 submissions per day, and a maximum of 100 submissions per team in total.


The evaluation is based on Recall@1, Recall@5, Recall@10, and MedR on a 1000-sample subset of the public test set, provided on the challenge website for movie retrieval here. Recall@k is the percentage of queries for which the ground-truth video appears among the top k retrieved videos, and MedR is the median rank of the ground-truth video.
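For local validation, these metrics can be computed from a similarity matrix. A minimal sketch (not the official scorer), assuming the ground-truth video for caption i is video i, i.e. one matching video per caption:

```python
from statistics import median

def retrieval_metrics(sims):
    """Recall@1/5/10 (in percent) and MedR for caption-to-video retrieval.

    sims[i][j] is the similarity between caption i and video j.
    Assumes video i is the ground truth for caption i.
    """
    ranks = []
    for i, row in enumerate(sims):
        # rank of the ground-truth video = 1 + number of other videos
        # scored strictly higher than it
        rank = 1 + sum(1 for j, s in enumerate(row) if j != i and s > row[i])
        ranks.append(rank)
    n = float(len(ranks))
    def recall_at(k):
        return 100.0 * sum(1 for r in ranks if r <= k) / n
    return recall_at(1), recall_at(5), recall_at(10), median(ranks)
```

For example, a matrix whose diagonal entries dominate each row yields Recall@1 = 100.0 and MedR = 1.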

Winners will be selected based on the highest Recall@10.




Top Three

Rank  Username      Score
1     yj            17.000
2     danieljf24    14.400
3     antoine77340   7.700