DivFusion @ ICPR 2018 - Information Fusion for Social Image Retrieval & Diversification Task

Organized by lstefan

First phase (Challenge): Feb. 25, 2018, midnight UTC

End (Competition Ends): June 16, 2018, midnight UTC

2018 DivFusion @ ICPR Multimedia Information Processing for Personality & Social Networks Analysis Challenge

Brought to you by ChaLearn, ImageCLEF, MediaEval, IAPRTC12

 

Diversification of image search results is a hot research problem in multimedia. Search engines are fostering techniques that provide the user with a diverse representation of the search results, rather than redundant information, e.g., the same perspective of a monument or location. The DivFusion task builds on the MediaEval Retrieving Diverse Social Images tasks and challenges participants to develop highly effective information fusion techniques for diversifying social image search results.

Participation in this task involves the following steps:

  1. Design your algorithms on the development data (devset): download the development data and design your approach for the task (see the Participate/Get Data section). These data come with ground truth;
  2. Validate and optimize your algorithms on the validation data (validset): download the validation data and evaluate the performance of your algorithms (see the Participate/Get Data section). Optimize the parameters and the performance. These data come with ground truth;
  3. Test your algorithms on the test data (testset): download the test data, build your final runs and submit them to the challenge (see the Participate/Get Data section for downloading the data, the Evaluation section for formatting your runs and the Participate/Submit section for submitting your runs). You may submit only 5 runs during the entire duration of the task. The ground truth for these data is not available to participants;
  4. Receive your evaluation results: your results on test data are available in real-time on the challenge leaderboard, where you can compare them against a baseline and other participant results (see the Results section).

 

Goal of task

Participants will receive a list of image search queries, each with up to 300 photos retrieved from Flickr and ranked with Flickr’s default "relevance" algorithm. These data are accompanied by various metadata and content descriptors. Each query also comes with a variety of diversification system outputs (participant runs from previous years).


The requirements of the task are to fuse the provided systems' outputs and return a ranked list of up to 50 photos that are both relevant and diverse representations of the query.


Relevance: a photo is considered to be relevant for the query if it is a common photo representation of all query concepts at once. Low quality photos (e.g., severely blurred, out of focus, etc.) are not considered relevant in this scenario.

Diversity: a set of photos is considered diverse if it depicts different visual characteristics of the query topics and subtopics (e.g., sub-locations, temporal information, typical actors/objects, genesis information, different views at different times of the day/year and under different weather conditions, close-ups on architectural details, sketches, creative views) with a certain degree of complementarity, i.e., most of the perceived visual information differs from one photo to another.

 

Use Scenario

The provided data cover two use case scenarios: (i) a tourist (single-topic query) scenario, in which a person looks for more information about a place or event they might visit or attend and wants a more complete visual description of the target; (ii) a general ad-hoc (multi-topic query) scenario, in which the user searches for general-purpose images.

For more information, see the challenge webpage on the ChaLearn website.


 

You may submit up to 5 runs during the entire duration of the challenge, making use either of the provided information (e.g., content descriptors, metadata, etc.) or of external information of your own.

Each submission has to contain two separate run files, one for each test data set, as follows:

  • run 1: your system output on the seenIR testset data, which corresponds to the seen information retrieval scenario, i.e., the fused systems from the testset are the same as those from the development data, in particular those from devset2. Please name your run file seenIR.txt;
  • run 2: your system output on the unseenIR testset data, which corresponds to the unseen information retrieval scenario, i.e., the fused systems from the testset are new and do not appear in the development or the validation data. Please name your run file unseenIR.txt.

Important note: the system run on the seenIR and unseenIR data should be the same (method and parameters). Please do not submit different system outputs. These two data sets exist to make the results comparable across the two contexts.

A valid run consists of a .zip (zip archive) containing the two run files (seenIR.txt and unseenIR.txt). This is the file you should upload in the Participate/Submit section.
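Packaging the archive can be scripted; the sketch below is only an illustration (the file and archive names other than seenIR.txt and unseenIR.txt are hypothetical):

```python
import zipfile

def build_submission(seen_path="seenIR.txt", unseen_path="unseenIR.txt",
                     archive="submission.zip"):
    """Package the two run files into a single .zip submission archive."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        # The archive entry names are mandated by the task.
        zf.write(seen_path, arcname="seenIR.txt")
        zf.write(unseen_path, arcname="unseenIR.txt")
    return archive
```

The resulting submission.zip is the file to upload in the Participate/Submit section.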

Submission format

Please submit your runs in the form of a trec topic file. This file is compatible with the trec_eval evaluation software (for more information please follow the previous link – you will find two archives trec_eval.8.1.tar.gz and trec_eval_latest.tar.gz - see the README file inside). The trec topic file has the structure illustrated by the following example of a file line (please note that values are separated by whitespaces):

030 Q0 ZF08 0 4238 prise1
qid iter docno rank sim run_id

where:

  • qid is the unique query id (please note that each query has a certain query id code that is provided with the data set in the topic xml files);
  • iter – is ignored;
  • docno – is the unique photo id (as provided with the data set);
  • rank – is the photo rank in the refined list provided by your method. Rank is expected to be an integer value ranging from 0 (the highest rank) up to 49;
  • sim – is the similarity score of your photo to the query and is mandatory for the submission. The similarity values need to be higher for the photos ranked first and should correspond to your refined ranking (e.g., the photo with rank 0 should have the highest sim value, followed by the photo with rank 1 with the second highest sim value, and so on). If your approach does not explicitly provide similarity scores, you are required to create dummy similarity scores that decrease as the rank increases (e.g., you may use the inverse of the rank);
  • run_id - is the name of your run (which you can choose, but should be as informative as possible without being too long – please note that no whitespaces or other special characters are allowed).

Please note that each run needs to contain at least one result for each query. An example of a run file should look like this:

1 0 3338743092 0 0.94 run1_audiovisualRF
1 0 3661411441 1 0.9 run1_audiovisualRF
...
1 0 7112511985 48 0.2 run1_audiovisualRF
1 0 711353192 49 0.12 run1_audiovisualRF
2 0 233474104 0 0.84 run1_audiovisualRF
2 0 3621431440 1 0.7 run1_audiovisualRF
...

When you format your runs, please make sure that the queries are ordered as provided in the topic file, i.e., in ascending qid order (see the example above). This is not necessarily alphabetic order, so you may need to sort explicitly.
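The formatting rules above (ascending qid order, ranks 0-49, strictly decreasing dummy similarity scores) can be sketched in a small helper; this is an illustration only, not an official tool:

```python
def write_run_file(results, run_id, path):
    """Write a trec-style run file.

    results: {qid: [photo_id, ...]} with photos already in the refined
    (diversified) order. Queries are emitted in ascending qid order, and
    an inverse-rank dummy similarity score is generated, as the task
    guidelines allow for methods without explicit scores.
    """
    with open(path, "w") as f:
        for qid in sorted(results):              # ascending qid order
            for rank, photo_id in enumerate(results[qid][:50]):
                sim = 1.0 / (rank + 1)           # decreases as rank grows
                f.write(f"{qid} 0 {photo_id} {rank} {sim:.4f} {run_id}\n")
```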

You can experiment with your own runs on the development and validation data; in our experience this helps avoid formatting errors. You are provided with tools to check run consistency and to compute the metrics yourself. See also the information below.

 

Evaluation metrics

Performance is assessed for both diversity and relevance. We compute Cluster Recall at X (CR@X), which assesses how many different clusters from the ground truth are represented among the top X results (only relevant images are considered); Precision at X (P@X), which measures the proportion of relevant photos among the top X results; and the F1-measure at X (F1@X), the harmonic mean of the previous two. Various cut-off points are considered, e.g., X = 5, 10, 20, 30, 40, 50.

The official ranking metric will be CR@20. This metric simulates the content of a single page of a typical web image search engine and reflects user behavior, i.e., inspecting the first page of results in priority. Metrics are computed individually on each test data set, i.e., the seenIR and unseenIR data. The final ranking will be based on overall mean values for CR@20, followed by P@20 and then F1@20.
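As an unofficial illustration of these definitions (the provided div_eval.jar remains the authoritative scorer), the per-query metrics could be computed as:

```python
def metrics_at(ranked, relevant, cluster_of, x=20):
    """P@X, CR@X and F1@X for one query (unofficial sketch).

    ranked: photo ids in submitted order; relevant: set of relevant ids;
    cluster_of: {photo_id: cluster_id} from the diversity ground truth.
    Only relevant images count toward cluster recall, as in the task.
    """
    top = ranked[:x]
    rel = [p for p in top if p in relevant]
    precision = len(rel) / x
    total_clusters = len(set(cluster_of.values()))
    hit_clusters = {cluster_of[p] for p in rel if p in cluster_of}
    cluster_recall = len(hit_clusters) / total_clusters
    s = precision + cluster_recall
    f1 = 2 * precision * cluster_recall / s if s else 0.0
    return precision, cluster_recall, f1
```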

 

Scoring tool

To allow participants to evaluate their systems' results on their own, the official evaluation tool (div_eval.jar) is provided with the data. It computes the official evaluation metrics at different cut-off points (see the previous section) for each query, together with the overall average values. The tool is written in Java, so you need Java installed on your machine; to check, run "java -version" in a command window. If you don't have Java installed, please visit this link, download the Java package for your environment and install it.

To run the script, use the following syntax (make sure you have the div_eval.jar file in your current folder):

java -jar div_eval.jar -r <runfilepath> -rgt <rGT directory path> -dgt <dGT directory path> -t <topic file path> -o <output file directory> [optional: -f <output file name>]

where:

-r <runfilepath> - specifies the file path to the current run file for which you want to compute the evaluation metrics. The file should be formatted according to the instructions above;
-rgt <rGT directory path> - specifies the path to the relevance ground truth (denoted by rGT) for the current data set;
-dgt <dGT directory path> - specifies the path to the diversity ground truth (denoted by dGT) for the current data set;
-t <topic file path> - specifies the file path to the topic xml file for the current data set;
-o <output file directory> - specifies the path for storing the evaluation results. Evaluation results are saved as .csv files (comma separated values);
-f <output file name> - is optional and specifies the output file name. By default, the output file will be named according to the run file name + "_metrics.csv".

Run example:

java -jar div_eval.jar -r c:\divtask\RUNd2.txt -rgt c:\divtask\rGT -dgt c:\divtask\dGT -t c:\divtask\devsetkeywordsGPS_topics.xml -o c:\divtask\results -f my_first_results
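If you script your experiments, the command line above can be assembled programmatically; a hypothetical wrapper (only the flags documented above are assumed):

```python
def div_eval_cmd(run, rgt, dgt, topics, out_dir, out_name=None):
    """Build the div_eval.jar argument list described above.

    Returns a list suitable for subprocess.run(); the optional -f flag
    is appended only when an output file name is given.
    """
    cmd = ["java", "-jar", "div_eval.jar",
           "-r", run, "-rgt", rgt, "-dgt", dgt,
           "-t", topics, "-o", out_dir]
    if out_name:
        cmd += ["-f", out_name]
    return cmd
```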

Output file example:

--------------------
"Run name","my_first_results.txt"
--------------------
"Average P@20 = ",.7222
"Average CR@20 = ",.3901
"Average F1@20 = ",.4993
--------------------
"Query Id","Location name",P@5,P@10,P@20,P@30,P@40,P@50,CR@5,CR@10,CR@20,CR@30,CR@40,CR@50,F1@5,F1@10,F1@20,F1@30,F1@40,F1@50
1,"the_great_wall_of_china",.8,.7,.75,.7667,.775,.78,.12,.24,.4,.52,.56,.72,.2087,.3574,.5217,.6197,.6502,.7488
2,"grande_arche_paris",.4,.2,.25,.2667,.375,.46,.08,.08,.2,.32,.52,.6,.1333,.1143,.2222,.2909,.4358,.5208

30,"el_angel_mexico",1.0,.6,.45,.3,.275,.22,.1905,.1905,.3333,.3333,.381,.381,.32,.2892,.383,.3158,.3194,.2789
31,"bibliotheque_nationale_de_france",1.0,.9,.8,.7,.65,.72,.12,.24,.4,.48,.52,.64,.2143,.3789,.5333,.5695,.5778,.6776
--------------------
"--","Avg.",P@5,P@10,P@20,P@30,P@40,P@50,CR@5,CR@10,CR@20,CR@30,CR@40,CR@50,F1@5,F1@10,F1@20,F1@30,F1@40,F1@50
,,.7905,.7492,.7222,.7048,.7067,.7063,.139,.2388,.3901,.4821,.5837,.6605,.2341,.3578,.4993,.565,.6319,.6746

 

CHALEARN Contest Rules for Multimedia Information Processing for Personality and Social Networks Analysis Contest 2018

 

Entries must be submitted before April 25, 2018, midnight UTC. You may submit a total of 5 runs during the entire duration of the challenge.

Official rules
Common terms used in these rules:

These are the official rules that govern how the Multimedia Information Processing for Personality and Social Networks Analysis contest promotion will operate. This promotion will be simply referred to as the “contest” or the “challenge” throughout the rest of these rules and may be abbreviated on our website, in our documentation, and other publications as ChaLearn ICPR2018 LAP.

In these rules, “organizers”, “we,” “our,” and “us” refer to CHALEARN and "participant”, “you,” and “yourself” refer to an eligible contest participant.

 

SECTION 1 Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s). There are two tracks associated with this contest, as described below:

    1. DivFusion track. This track builds on the MediaEval Retrieving Diverse Social Images tasks, which specifically addressed the diversification of image search results in the context of social media. The task challenges participants to develop highly effective information fusion techniques. Participants will be provided with several query results, content descriptors and the output of various existing diversification systems. They are to employ fusion strategies to refine the retrieval results and thus further improve the diversification performance of the existing systems. This track reuses the publicly available datasets from the 2013-2016 MediaEval Retrieving Diverse Social Images tasks, together with the participant runs. These data consist of hundreds of Flickr image query results (>600 queries, both single- and multi-topic) and include: images (up to 300 per query), social metadata (e.g., description, number of comments, tags), descriptors for visual, text and social information (e.g., user tagging credibility), deep learning features, and expert annotations for image relevance and diversification (i.e., clustering of images according to the similarity of their content). An example is presented in Figure 1. The data are accompanied by 180 participant runs, which correspond to the output of image search diversification techniques (each run contains the diversification of each query from a dataset). These allow experimenting with various fusion strategies. The winners of the challenge will be determined by their scores on a held-out partition of the data set. Cluster recall and related measures will be used to rank participants.

      Figure 1. Example of query data to be diversified by the systems: Flickr image results for query “Pingxi Sky Lantern Festival” (first 14 images from 300) and metadata example for one image.

      <photo date_taken="2007-10-02 22:35:43" description="Taking by a slow shutter with 30 seconds. Sky lanterns flied from the ground to the sky. People wrote the wishes on the lanterns and expect them to be true ! In Ping-Xi town, fly the sky lantern is a ceremony that hold in Lantern Festival every year. 被引用了: www.colourlovers.com/blog/2009/02/09/colors-of-the-lanter… 很高興能跟其他很棒的作品排在一起designsbuzz.com/index.php/inspiration/60-best-inspiration..." id="1475159652" latitude="0" license="1" longitude="0" nbComments="29" rank="2" tags="taiwan 平溪 天燈 元宵節 pingxi skylantern supershot 20mmf28d mywinners abigfave anawesomeshot thelanternfestival ysplix thelanternday" title="Burning Hell? No, they are sky lanterns !! #18" url_b="https://farm2.staticflickr.com/1317/1475159652_e319b240ea_b.jpg" username="勞動的小網管" userid="7178701@N02" views="3663"/>
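      Purely as an illustration (the task prescribes no particular fusion method), a simple late-fusion baseline such as Borda-count rank aggregation over the provided system runs could be sketched as:

```python
from collections import defaultdict

def borda_fuse(runs, top_k=50):
    """Fuse several ranked lists for one query by Borda count.

    runs: list of ranked photo-id lists, one per diversification system.
    A photo scores (list length - rank) in every run containing it;
    photos are then re-ranked by total score.
    """
    scores = defaultdict(float)
    for ranking in runs:
        n = len(ranking)
        for rank, photo in enumerate(ranking):
            scores[photo] += n - rank
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Real entries would likely weight the input runs or exploit the provided descriptors; this sketch only shows the fusion step in its simplest form.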

 

  1. HWxPI: Handwritten texts for Personality Identification track. The task consists of estimating the personality traits of users from their handwritten texts and the corresponding transcripts (see the dataset description above). The challenge comprises two phases: development and final. For the first phase, participants should develop their systems using a set of development pairs of handwritten essays (including image and text) from 418 subjects. Each subject has an associated class, 1 or 0, corresponding to the presence of the high pole or the low pole of a specific personality trait. The traits correspond to the Big Five personality model used in psychology: Extraversion, Agreeableness, Conscientiousness, Emotional stability, and Openness to experience. Participants will thus have to develop a classifier to predict the pole of each trait; this classifier should be able to use information from both modalities (i.e., textual and visual). For the final evaluation phase, an independent set of 293 unlabeled samples will be provided to the participants, who will have to produce predictions using the models trained on the development data. The winners of the challenge will be determined based on final-phase performance. Participants will be evaluated with standard metrics such as F-measure, accuracy, and ROC AUC.
    The corpus used in this task consists of handwritten Spanish essays from undergraduate Mexican students. For each essay two files are available: a manual transcript of the text and a scanned image of the original sheet on which the subject handwrote the essay. The manual transcripts contain tags marking several handwritten phenomena, namely: <FO:well-written word> (misspelling), <D:description> (drawing), <IN> (insertion of a letter into a word), <MD> (modification, i.e., correction, of a word), <DL> (elimination of a word), <NS> (two words written together, e.g., "Iam" instead of "I am") and <SB> (syllabification). Figure 2 below shows a pair of essays with their corresponding image and text. Each essay is labelled with five classes corresponding to the five traits of the Big Five model. The classes for each trait are 1 and 0, corresponding to the high pole and low pole of the trait, respectively. Labels were assigned using an instrument named TIPI (Ten Item Personality Inventory), which includes a specific set of norms for each trait.

    Figure 2. Examples of two pairs of essays: image and text, left and right, respectively.

    Una vez sali <FO:salí> con un amigo no muy cercano, fuimos a comer y en la comida el chico se comportaba de forma extraña algo como <DL> desagradable <DL> <DL> con un <MD> aire de superioridad <MD> algo muy desagradable tanto para <DL> mi <FO:mí> como para las personas que estaban en nuestro alrededor pero ya despues <FO:después> cuando se dio cuenta de <DL> su comportamiento cambio <FO:cambió> la forma de como <FO:cómo> se portaba y fue muy humilde.
    Bueno soy un chico que le gusta divertirse busco todo lo bueno de cada cosa, y lo malo intento analizarlo y <NS> dar una solución. Me gusta escuchar a la gente y apoyarla si puedo , no me gusta el fut <FO:futbol> las mujeres jaja. mucho y tener amistades me encantan los retos y mis triunfos me saben mejor si me cuestan esfuerzo.
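    As a hypothetical illustration, the annotation tags listed above could be counted as simple handwriting-phenomena features for the classifiers (the tag inventory comes from the description; everything else is an assumption):

```python
import re
from collections import Counter

# Tag inventory from the corpus description: FO (misspelling, carries the
# corrected word), D (drawing, carries a description), and the bare tags
# IN, MD, DL, NS, SB.
TAG_RE = re.compile(r"<(FO|D)(?::[^>]*)?>|<(IN|MD|DL|NS|SB)>")

def tag_counts(transcript):
    """Count annotation tags in one transcript."""
    return Counter(m.group(1) or m.group(2) for m in TAG_RE.finditer(transcript))
```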

For the two tracks, eligible entries received will be judged using the criteria described above to determine winners.

 

SECTION 2 Tentative Contest Schedule

The registered participants will be notified by email of any change in the schedule.
25th February, 2018: Beginning of the quantitative competition. Track 1: release of labeled development and unlabeled validation data. Track 2: release of labeled development and validation data and unlabeled test data.
21st April, 2018: Deadline for code submission. Participants submit code for verification.
22nd April, 2018: For track 2 only: release of final evaluation data and possibly validation labels (to be confirmed). Participants can start training the final version of their methods and submitting predictions on the final evaluation data.
24th April, 2018: End of both tracks of the competition. Deadline for submitting predictions on the final evaluation data. The organizers start the code verification process.
27th April, 2018: Deadline for submitting the fact sheets.
3rd May, 2018: Release of verification results to the participants for review.
21st August 2018: ICPR 2018 Joint Contest on Multimedia Challenges Beyond Visual Analysis, challenge results, award ceremony.

 

SECTION 3 Eligibility

You are eligible to enter this contest if you meet the following requirements:

  1. You are an individual or a team of people desiring to contribute to the tasks of the challenge and accepting to follow its rules; and you are NOT a resident of any country constrained by US export regulations listed on the OFAC sanctions page http://www.treasury.gov/resource-center/sanctions/Programs/Pages/Programs.aspx (residents of these countries/regions are not eligible to participate); and you are not an employee of CHALEARN or any of the sponsoring or co-organizing entities; and
  2. You are not involved in any part of the administration and execution of this contest; and
  3. You are not an immediate family (parent, sibling, spouse, or child) or household member of an employee of CHALEARN, or of a person involved in any part of the administration and execution of this contest.

This contest is void within the geographic area identified above and wherever else prohibited by law.

If you choose to submit an entry, but are not qualified to enter the contest, this entry is voluntary, and any entry you submit is governed by the remainder of these contest rules; CHALEARN reserves the right to evaluate it for scientific purposes. If you are not qualified to submit a contest entry and still choose to submit one, under no circumstances will such entries qualify for sponsored prizes.

 

SECTION 4 Entry

To be eligible for judging, an entry must meet the following content/technical requirements:

  1. Entry contents: The participants are required to submit prediction results and code. To be eligible for prizes, the top-ranking participants are required to publicly release their code under a license of their choice, taken among popular OSI-approved licenses (http://opensource.org/licenses), and to make their code accessible on-line for a period of not less than three years following the end of the challenge (only required for the top three ranked participants of the competition). To be part of the final ranking, participants will be asked to fill out a survey (fact sheet) briefly describing their method. All participants are also invited (not required) to submit a paper for the ICPR 2018 Multimedia Information Processing for Personality and Social Networks Analysis Workshop (to be held in August 2018, pending acceptance). Additionally, organizers may invite participants to submit a paper to a special issue (under evaluation). To be eligible for prizes, top-ranked participants' scores must improve on the baseline performance provided by the challenge organizers.
  2. Pre-requisite: There is no pre-requisite to participate, including no requirement to have participated in previous challenges.
  3. Use of data provided: All data provided by CHALEARN are freely available to the participants from the website of the challenge under license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. ChaLearn and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images, texts, metadata and descriptors remains in property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify ChaLearn and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.
    1. Test data:
      The organizers will use test data to perform the final evaluation, hence the participants’ final entry will be based on test data.
    2. Training and validation data:
      The contest organizers, depending on the track, will make available to the participants a training dataset and a validation dataset with truth labels. The validation data will be used by the participants for practice purposes to validate their systems. It will be similar in composition to the test set.
    3. Post-challenge analyses:
      The organizers may also perform additional post-challenge analyses using extra data, but the results will not affect the ranking of the challenge performed with the test data.
  4. Submission: The entries of the participants will be submitted on-line via the CodaLab web platform. During the development period, depending on the track, participants will receive immediate feedback on test data (for track 1) or on validation data (for track 2) released for practice purposes. For the final evaluation, the results will be computed automatically on test data submissions. For track 2, the performances on test data will not be released until the challenge is over.
  5. Original work, permissions: In addition, by submitting your entry into this contest you confirm that, to the best of your knowledge:
    1. Your entry is your own original work; and
    2. Your entry only includes material that you own, or that you have permission from the copyright / trademark owner to use.

 

SECTION 5 Potential use of entry

Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:

  1. Are granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:
    1. Use, review, assess, test and otherwise analyze results submitted or produced by your code and other material submitted by you in connection with this contest and any future research or contests sponsored by CHALEARN; and
    2. Feature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);
  2. Agree to sign any necessary documentation that may be required for us and our designees to make use of the rights you granted above;
  3. Understand and acknowledge that involved organizations and other entrants may have developed or commissioned materials similar or identical to your submission and you waive any claims you may have resulting from any similarities to your entry;
  4. Understand that we cannot control the incoming information you will disclose to our representatives or our co-sponsor’s representatives in the course of entering, or what our representatives will remember about your entry. You also understand that we will not restrict work assignments of representatives or our co-sponsor’s representatives who have had access to your entry. By entering this contest, you agree that use of information in our representatives’ or our co-sponsor’s representatives unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law;
  5. Understand that you will not receive any compensation or credit for use of your entry, other than what is described in these official rules.

If you do not want to grant us these rights to your entry, please do not enter this contest.

 

SECTION 6 Submission of entries

  1. Follow the instructions on the Codalab website to submit entries.
  2. The participants will be registered as mutually exclusive teams. For track 1 each team may submit several final entries to the proposed tasks. For track 2 each team may submit only one single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but are not functioning properly.
  3. The participants must follow the instructions. We will automatically disqualify incomplete or invalid entries.

 

SECTION 7 Judging the entries

The organizers will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select three winners for each of the two competition tracks based upon the prediction score on test data. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.

The decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

 

SECTION 8 Prizes and Awards

  1. ChaLearn, INAOE, University of Barcelona, Human Pose Recovery and Behavior Analysis Group, UAM, University Politehnica of Bucharest, and IAPR TC12 are the financial sponsors of this contest. Winners of each track will receive a certificate together with (depending on availability) a representative gift.
  2. If for any reason the advertised prize (either certificate or gift) is unavailable, unless to do so would be prohibited by law, we reserve the right to substitute a prize(s) of equal or greater value, as permitted. We will only award one prize per team. If you are selected as a potential winner of this contest:
    1. You may not designate someone else as the winner. If you are unable or unwilling to accept your prize, we will award it to an alternate potential winner.
    2. If you accept a prize, you will be solely responsible for all applicable taxes related to accepting the prize.
    3. If you are a minor in your place of residence, we may award the prize to your parent/legal guardian on your behalf and your parent/legal guardian will be designated as the winner.

 

SECTION 9 Other Sponsored Events

  1. To stimulate participation, the organizers are making available several channels of scientific paper publication. Publishing papers is optional and will not be a condition to entering the challenge or winning prizes.
  2. The results of the challenge will be published in the ICPR 2018 Multimedia Information Processing for Personality and Social Networks Analysis Workshop proceedings (pending acceptance).
  3. The best conference papers related to the participants' methods may be invited to be extended and submitted to a Special Issue (TBA).

The organizers may also sponsor other events to stimulate participation.

 

SECTION 10 Notifications

If there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified at the email they provided with the registration.

If you are a potential winner, we will notify you by sending a message to the e-mail address listed on your final entry within seven days following the determination of winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.

Winners who have entered the contest as a team will be responsible for sharing any prize among their members. The prize will be delivered to the registered team leader. If this person becomes unavailable for any reason, the prize will be delivered to the authorized account holder of the e-mail address used to make the winning entry.

If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity and liability/publicity release and applicable tax forms. If you are a potential winner and are a minor in your place of residence, we may require that your parent or legal guardian be designated as the winner and sign a declaration of eligibility, use, indemnity and liability/publicity release on your behalf. If you (or your parent/legal guardian, if applicable) do not sign and return these required forms within the time period listed in the winner notification message, we may disqualify you (or the designated parent/legal guardian) and select an alternate winner.

 

SECTION 11 On-line notification

We will post changes to the rules or the data, as well as the names of confirmed winners (after contest decisions are made by the judges), online at http://chalearnlap.cvc.uab.es. This list will remain posted for at least one year.

 

SECTION 12 Conditions. By entering this contest you agree:

  1. To abide by these official rules;
  2. To the extent allowable under applicable law, to release and hold harmless CHALEARN and sponsors, their respective parents, subsidiaries, affiliates, employees and agents from any and all liability for any injury, loss, damage, right, claim or action of any kind arising from or in connection with this contest or any prize won, save for residents of the United Kingdom, Chile, Korea, Greece, Brazil, Turkey, Hong Kong, France and Germany with respect to claims resulting from death or personal injury arising from the negligence of CHALEARN, University Politehnica of Bucharest, IAPRTC 12, INAOE, UAM or the University of Barcelona; for residents of the United Kingdom with respect to claims resulting from the tort of deceit or any other liabilities that may not be excluded by law; and for residents of Australia in respect of any implied condition or warranty the exclusion of which from these official rules would contravene any statute or cause any part of these official rules to be void;
  3. That CHALEARN’s decisions will be final and binding on all matters related to this contest; and
  4. That by accepting a prize, CHALEARN and competition sponsors may use your team name, your name, and your place of residence online and in print, or in any other media, in connection with this contest, without payment or compensation to you. The declaration of eligibility, use, indemnity and liability/publicity release provided to the potential winner will make reference to obtaining his/her free consent to use his/her name and place of residence. In any case, the lack of such consent does not prevent the winner from receiving the prize.
  5. This contest will be governed by the laws of the state of California, and you consent to the exclusive jurisdiction and venue of the courts of the state of California for any disputes arising out of this contest. For residents of Austria only: you may withdraw your submission from this contest within seven days of your entry. If you withdraw within seven days of entry, your submission will be returned to you, and we will not make any use of your submission in the future. However, you will not be eligible to win a prize. If you do not withdraw within seven days of entry, you will be bound by the provisions of these official rules. For residents of the United Kingdom only: the provisions of the Contracts (Rights of Third Parties) Act 1999 will not apply to this agreement. For residents of New Zealand only: the provisions of the Contracts (Privity) Act 1982 will not apply to this agreement. For Quebec residents: any litigation respecting the conduct or organization of a publicity contest may be submitted to the Régie des Alcools, des Courses et des Jeux for ruling. Any litigation respecting the awarding of a prize may be submitted to the Régie only for the purpose of helping the parties reach a settlement. For residents of Israel only: this agreement does not entitle third parties to benefits under this agreement as defined in Chapter "D" of the Contracts Act (General Part) – 1973.

 

SECTION 13 Unforeseen event

If an unforeseen or unexpected event (including, but not limited to: someone cheating; a virus, bug, or catastrophic event corrupting data or the submission platform; someone discovering a flaw in the data or modalities of the challenge) that cannot be reasonably anticipated or controlled (also referred to as force majeure) affects the fairness and/or integrity of this contest, we reserve the right to cancel, change or suspend this contest. This right is reserved whether the event is due to human or technical error. If a solution cannot be found to restore the integrity of the contest, we reserve the right to select winners based on the criteria specified above from among all eligible entries received before we had to cancel, change or suspend the contest, subject to obtaining the approval of the Régie des Alcools, des Courses et des Jeux with respect to the province of Quebec.

Computer “hacking” is unlawful. If you attempt to compromise the integrity or the legitimate operation of this contest by hacking or by cheating or committing fraud in any way, we may seek damages from you to the fullest extent permitted by law. Further, we may ban you from participating in any of our future contests, so please play fairly.

 

SECTION 14 Sponsor

ChaLearn is the sponsor of this contest.
955 Creston Road,
Berkeley, CA 94708, USA
events@chalearn.org
Additional sponsors can be added during the competition period.

Privacy

During the development phase of the contest and when they submit their final entries, contest participants do not need to disclose their real identity, but they must provide a valid email address where we can deliver notifications regarding the contest. To be eligible for prizes, however, contest participants will need to disclose their real identity to the contest organizers, informing them by email of their name, professional affiliation, and address. To enter the contest, participants will need to become users of the Codalab platform. Any profile information stored on this platform can be viewed and edited by the users. After the contest, participants may cancel their Codalab account and cease to be users of that platform. All personal information will then be destroyed. The Codalab privacy policy will apply to contest information submitted by participants on Codalab. Otherwise, CHALEARN's privacy policy will apply to this contest and to all information from your entry that we receive directly from you or which you submitted as part of your contest entry on Codalab. Please read the privacy policy on the contest entry page before accepting the official rules and submitting your entry. Please note that by accepting the official rules you are also accepting the terms of the CHALEARN privacy policy: http://www.chalearn.org/privacy.html.

 

Certificate of acceptance of prize for Multimedia Information Processing for Personality and Social Networks Analysis Contest 2018

Team name:
Contact name:
Address:
Country of residence:
Date of birth:
Email:
Rank in challenge:
Prize received:

By accepting this prize, I certify that I have read and understood the rules of the challenge and that I am a representative of the team authorized to receive the prize and sign this document. To the best of my knowledge, all the team members followed the rules and did not cheat in participating in the challenge. I certify that the team complied with all the challenge requirements, including that:

  • The team publicly released the source code of the software necessary to reproduce the final entry at hugo.jair@gmail.com and http://gesture.chalearn.org/ under a public license, chosen from among popular OSI-approved licenses (http://opensource.org/licenses). The code will remain publicly accessible on-line for a period of not less than three years following the end of the challenge.
  • The team filled out the requested survey (fact sheet) and, to the best of my knowledge, all information provided is correct.
  • The team is invited (though not required) to submit a paper of at most 6 pages to the Multimedia Information Processing for Personality and Social Networks Analysis Workshop 2018 (pending acceptance), summarizing their contribution to the contest.

I recognize that I am solely responsible for all applicable taxes related to accepting the prize. NOTE: IF A PRIZE IS DONATED BY CHALEARN, THE RECIPIENT MUST FILL OUT A W-9 OR W-8BEN FORM.

I grant CHALEARN, ChaLearn LAP 2018 competition sponsors, and the contest organizers the right to use, review, assess, test and otherwise analyze my results and other material I submitted in connection with this contest and any future research or contests sponsored by CHALEARN and co-sponsors of this competition, as well as the right to feature my entry and all its content in connection with the promotion of this contest in all media (now known or later developed).

CHALEARN and ChaLearn LAP 2018 competition sponsors may use the name of my team, my name, and my place of residence online and in print, or in any other media, in connection with this contest, without payment or compensation.

Name:
Date:
Signature:
