Multimodal (Audio, Facial and Gesture) based Emotion Recognition challenge


Emotion recognition plays a key role in affective computing. People express emotions through different modalities, and integrating verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several forms of expression can facilitate research on emotion recognition as well as on human-machine interaction. We organise a competition around two important problems: (1) recognition of compound emotions, which requires, in addition to effective visual analysis, dealing with the recognition of micro emotions. The database includes 31,250 facial images with different emotions from 115 subjects with an almost uniform gender distribution. (2) recognition of multi-modal emotions composed of three modalities, namely facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female).

  • Dorota Kaminska. Scientist and educator at the Institute of Mechatronics and Information Systems.
    E-mail: dorota.kaminska@p.lodz.pl
  • Tomasz Spanski. Ph.D. student at the Institute of Mechatronics and Information Systems.
    E-mail: tomasz.spanski@p.lodz.pl
  • Kamal Nasrollahi. Associate professor at the Visual Analysis of People Laboratory at Aalborg University in Denmark.
    E-mail: kn@create.aau.dk
  • Thomas B. Moeslund. Head of the Visual Analysis of People Laboratory at Aalborg University.
    E-mail: tbm@create.aau.dk
  • Sergio Escalera. Associate professor at the Department of Mathematics and Informatics, Universitat de Barcelona.
    E-mail: sergio@maia.ub.es
  • Cagri Ozcinar. Research fellow in the Graphics Vision and Visualisation group at Trinity College Dublin.
    E-mail: ozcinar@scss.tcd.ie
  • Jüri Allik. Professor of Experimental Psychology at the University of Tartu.
    E-mail: juri.allik@ut.ee
  • Gholamreza Anbarjafari. Head of the iCV Lab in the Institute of Technology at the University of Tartu.
    E-mail: shb@ut.ee
  • Maris Popens. Research assistant in the iCV Lab in the Institute of Technology at the University of Tartu.
    E-mail: maris@icv.tuit.ut.ee

The participants have to analyze all three modalities and perform emotion recognition based on them. The participants must submit their code and all dependencies via CodaLab, and the organizers will run the code.

The evaluation will be based on the average correct emotion recognition using each modality individually as well as all three modalities together.

In case of equal performance, the processing time will be used to determine the ranking. The training data will be provided first, followed by the validation dataset. The test data will be released last, without labels, and will be used for the evaluation of the participants.

In addition, the participants will be provided with a fact sheet template that they should fill in and submit along with their code; it will also be used for the evaluation.

Evaluation Criteria

The percentage of correctly classified samples will be calculated for each emotion class, and the average of these percentages will be taken as the final performance rate.
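For reference, below is a minimal sketch of this scoring rule (a macro-averaged per-class accuracy). The function name, variable names and toy labels are illustrative only and do not reproduce the organizers' actual scoring script.

    from collections import defaultdict

    def macro_accuracy(ground_truth, predictions):
        # ground_truth and predictions map descriptor file names to emotion labels
        correct = defaultdict(int)
        total = defaultdict(int)
        for name, true_label in ground_truth.items():
            total[true_label] += 1
            if predictions.get(name) == true_label:
                correct[true_label] += 1
        # per-class accuracy, then the mean over all classes
        per_class = [correct[label] / float(total[label]) for label in total]
        return sum(per_class) / len(per_class)

    # toy example with two of the seven class codes used in this challenge
    truth = {"frames_F2_An3": "An", "frames_F2_Ha1": "Ha"}
    preds = {"frames_F2_An3": "An", "frames_F2_Ha1": "Sa"}
    print(macro_accuracy(truth, preds))  # 0.5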

Submission format

In this track, the participants should submit a ZIP file containing only one pkl file, named exactly as required below (without any subfolders in the ZIP file).

The predictions for the validation and test sets should be submitted as a pickled dictionary with the following format:

{DESCRIPTOR_FILE_NAME.txt : PREDICTED_EMOTION_CLASS, ... }, where:

DESCRIPTOR_FILE_NAME - should be the same as the descriptor file name containing the predicted emotional state

The name of the prediction file should be valid_prediction.pkl for the validation set (phase 2) and test_prediction.pkl for the test set (phase 3)

PREDICTED_EMOTION_CLASS - should have only the following values: An, Di, Fe, Ha, Ne, Sa, Su

Examples:

- for phase 2: {frames_F2_An3 : Sa, ...} 

- for phase 3: {frames_TEST1 : An, ...}
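A minimal sketch of how such a dictionary could be assembled and written for phase 2 is shown below. The descriptor names and predicted labels are placeholders taken from the examples above; in practice each value comes from your model's prediction for the corresponding descriptor file.

    import pickle

    # keys must match the descriptor file names, values must be one of
    # An, Di, Fe, Ha, Ne, Sa, Su
    predictions = {
        "frames_F2_An3": "Sa",
        "frames_F2_Ne1": "Ne",
    }

    with open("valid_prediction.pkl", "wb") as f:
        # protocol=2 keeps the file readable under Python 2.7 (see note below)
        pickle.dump(predictions, f, protocol=2)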

About PKL 

A PKL file is a file created by pickle, a Python module that enables objects to be serialized to files on disk and deserialized back into the program at runtime. It contains a byte stream that represents the objects.

Please keep in mind that CodaLab uses Python 2.7 by default. If you serialize the dictionary with predictions in Python 3, pass protocol=2 to pickle.dump.
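A minimal sketch of packaging the prediction file for upload is shown below, assuming valid_prediction.pkl has already been written as above. The archive name submission.zip is an arbitrary choice, not a requirement of the platform; the only constraint is that the ZIP contains just the pkl file, with no subfolders.

    import zipfile

    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        # arcname places the pkl at the root of the archive (no subfolders)
        zf.write("valid_prediction.pkl", arcname="valid_prediction.pkl")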

You can download the database here.

Start

Start: Nov. 5, 2019, midnight

Learning

Start: Nov. 5, 2019, midnight

Description: Learning

Final Evaluation

Start: Feb. 22, 2020, midnight

Description: Final Evaluation

Competition Ends

March 4, 2020, 11 p.m.
