RedICA Text-Image Matching (RICATIM) Challenge

Organized by luis.pellegrin

First phase: Development Phase (starts July 3, 2017, midnight UTC)

End: Competition ends Aug. 17, 2017, 5 a.m. UTC

The RedICA Text-Image Matching Challenge

The aim of this challenge is to approach the image-text matching problem as one of binary classification. Participants are provided with a classification data set in which the feature space of each instance encodes an (image, keyword) pair, the class of the instance being +1 (when the keyword is relevant for describing the image) or 0 (when the keyword is irrelevant). Relevance of a keyword is determined using an undisclosed methodology that may not be apparent to participants (i.e., a keyword may be relevant even if it does not correspond to an object visually observable in the image). Images are represented by CNN-based features, whereas keywords are encoded with their word2vec representation. Additionally, the raw images and words will be made publicly available, so that participants can take advantage of such information. Classification performance will be used to determine the winners of the challenge.

 

Overview of the task 


 

Participation of members of the Red Temática CONACyT en Inteligencia Computacional Aplicada is encouraged, although this is a challenge open to anyone (see the terms & conditions section). 

 

Credits:


Organizing team: Luis Pellegrin, Hugo Jair Escalante, Alicia Morales, Eduardo Morales, Carlos A. Reyes-García 

Organizers are grateful to CodaLab (running on MS Azure) and to ChaLearn.

Sponsors: Red temática en Inteligencia Computacional Aplicada (RedICA), CONACyT, INAOE

Evaluation

The approached problem is a binary classification task. Each sample is characterized by a feature vector encoding an image-text pair, where images are encoded by a CNN-based representation (4,096 features) and keywords are encoded with their 200-dimensional word2vec representation. Participants must predict the relevance of the matchings: a matching is relevant (class 1) if the keyword is relevant to the corresponding image, and non-relevant (class 0) otherwise. Relevance of a keyword is determined using an undisclosed methodology that may not be apparent to participants (i.e., a keyword may be relevant even if it does not correspond to an object visually observable in the image).
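As a minimal illustration of the feature layout described above, each sample can be viewed as the concatenation of the two representations. The arrays here are random stand-ins, not real CNN or word2vec features:

```python
import numpy as np

# Hypothetical sketch: each sample concatenates a CNN image descriptor
# (4,096 dimensions) with a word2vec keyword embedding (200 dimensions).
rng = np.random.default_rng(0)
image_feat = rng.standard_normal(4096)  # stand-in for a real CNN feature
word_feat = rng.standard_normal(200)    # stand-in for a real word2vec vector

sample = np.concatenate([image_feat, word_feat])
print(sample.shape)  # (4296,)
```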

 Overview of the data generation process


Participants are given a training data set (ricatim_train) of 20,000 samples, each with 4,296 features. Training samples are labeled (ricatim_train_labels). Participants must use the training set to build their models and submit predictions for the validation data during the challenge (a sample submission file is provided with the validation data set). For the final phase, labels for the validation data set will be released and participants will have to submit predictions for the test data set. Predictions should be submitted in a text file with one prediction (0 or 1) per instance, in the same order as the instances appear in the data matrix. For both the validation and test data sets the number of instances is 5,000 (i.e., your prediction file should have 5,000 lines).
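The expected workflow can be sketched as follows. This is only an illustration: tiny synthetic arrays stand in for the real 20,000 x 4,296 training matrix, and a simple nearest-centroid rule stands in for whatever model a participant would actually build:

```python
import numpy as np

# Synthetic stand-in data; shapes mirror the challenge layout at small scale.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 4296))   # real matrix: 20,000 x 4,296
y_train = rng.integers(0, 2, size=200)       # binary labels in {0, 1}
X_valid = rng.standard_normal((50, 4296))    # real validation set: 5,000 rows

# Toy model: assign each sample to the class whose mean vector is closest.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_valid[:, None, :] - centroids[None, :, :], axis=2)
preds = dists.argmin(axis=1)

# Write one 0/1 prediction per line, in the same order as the data matrix.
np.savetxt("answer.txt", preds, fmt="%d")
```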

Accuracy will be used as the evaluation measure.

 

There are two phases in the RICATIM challenge:

  • Phase 1: development phase. We provide participants with labeled training data and unlabeled validation data. Participants must submit predictions for the validation data set and will receive immediate feedback (through the leaderboard) on the performance of their submissions. The performance of their BEST submission will be displayed on the leaderboard.
  • Phase 2: final phase. We provide participants with unlabeled test data, and we will release labels for the validation set. Participants will have 2 days to submit predictions on the test set (validation data can be used for training models). Winners of the challenge will be determined by test set performance.

Important: Submission files should be named "answer.txt" and should contain only a vector of predictions (one line per instance; see the sample submission file here); the file should be compressed in zip format.
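Packaging a submission can be done with Python's standard zipfile module. The archive name "submission.zip" and the toy predictions here are illustrative assumptions; only the inner file name "answer.txt" is specified by the challenge:

```python
import zipfile

# Write a toy prediction file: one 0/1 label per line.
with open("answer.txt", "w") as f:
    f.write("\n".join(["0", "1", "1", "0"]) + "\n")

# The scoring program expects a zip archive containing "answer.txt".
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("answer.txt")
```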

This competition allows you to submit only prediction results (no code); however, please note that code verification will be performed to determine the winners (see the rules section).

Submissions will be evaluated using classic metrics such as accuracy, F1, precision, and recall (accuracy will be used to rank the winners).
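For reference, the four metrics can be computed from confusion-matrix counts; the label vectors below are made-up toy data:

```python
import numpy as np

# Toy ground truth and predictions for an 8-instance example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

accuracy = np.mean(y_pred == y_true)        # the ranking metric
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)  # 0.75 0.75 0.75 0.75
```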

Rules

This challenge is governed by the general ChaLearn contest rules.

With the following amendments:

  1. Participants can enter the competition individually or as part of a team; however, no participant may appear in more than one team.
  2. Winners of the challenge will be determined according to their test set performance. The code of the top ranked performers will be verified before determining the winners of the challenge.
  3. A winner certificate will be awarded to the top 3 ranked participants in test data (provided their code is verified). 
  4. Top ranked participants residing in Mexico may be awarded travel grants (and additional prizes) to attend the SNAIC/ENAI events (http://ccc.inaoep.mx/SNAIC/). Only people residing in Mexico are eligible for these awards because of funding restrictions from RedICA-CONACyT. To be eligible for the travel awards and prizes, participants must commit to making their code available.
  5. Anyone is welcome to participate, excluding the organizing team. 

Development Phase

Start: July 3, 2017, midnight

Description: Development phase: tune your models and submit prediction results on validation set.

Final Phase

Start: Aug. 14, 2017, midnight

Description: Final phase (submit prediction results on test set).

Competition Ends

Aug. 17, 2017, 5 a.m.
