GermEval 2020 Shared Task on the Prediction of Intellectual Ability and Personality Traits from Text

Overview

The validity of high school grades as a predictor of academic success is controversial. Researchers have found indications that linguistic features, such as the function words used in a prospective student's writing, predict academic success better than grades do (Pennebaker et al., 2014).

During an aptitude test, participants are asked to write freely associated texts in response to provided questions and images. Trained psychologists can predict behavior, long-term development, and subsequent success from these expressions. Paired with an IQ test and the provided high school grades, the prediction of intellectual ability from text can be investigated. Such an approach would go beyond mere text classification and could reveal insightful psychological traits.

Operant motives are unconscious intrinsic desires that can be measured by implicit or operant methods, such as the Operant Motive Test (OMT) or the Motive Index (MIX). During the OMT and the MIX, participants are asked to write freely associated texts in response to provided questions and images. Trained psychologists label these textual answers with one of five motives and corresponding levels. The identified motives allow psychologists to predict behavior, long-term development, and subsequent success. For our task, we provide extensive amounts of textual data from both the OMT and the MIX, paired with IQ scores and high school grades (MIX) and motive labels (OMT).

With this task, we aim to foster research within this context. The task focuses on classifying German psychological text data in order to predict the IQ and high school grades of college applicants, as well as on performing speaker identification from the same image descriptions.

The shared task is organized by Dirk Johannßen, Chris Biemann, Steffen Remus, and Timo Baumann from the Language Technology group of the University of Hamburg, as well as David Scheffer from the NORDAKADEMIE Elmshorn, Nicola Baumann from the Universität Trier, and Gudula Ritz from Impart GmbH (Germany).

A bundle with two exemplary systems, the data of the current phase, and an evaluation script can be found here. This task is accompanied by a Language Technology Group page here.

References

  • J. W. Pennebaker, C. K. Chung, J. Frazee, G. M. Lavergne, and D. I. Beaver, "When small words foretell academic success: The case of college admissions essays," PLOS ONE, vol. 9, no. 12, e115844, 2014, ISSN: 1932-6203. doi: 10.1371/journal.pone.0115844. [Online]. Available: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0115844.

Evaluation

System submissions are done in teams. There is no restriction on the number of people in a team. However, keep in mind that a participant is allowed to be in multiple teams, so splitting up into teams with overlapping members is a possibility. Every participating team is allowed to submit 3 different systems to the competition. For submission in the final evaluation phase, every team must name their submission (the .zip and the actual submission .txt files) in the form "[Teamname]__[Systemname]" (note the two underscores!). E.g. your submission could look like

Funtastic4__SVM_NAIVEBAYES_ensemble1.zip
|
+-- Funtastic4__SVR_TF_IDF_ensemble1__task1.txt
+-- Funtastic4__SVC_TF_IDF_ensemble1__task2.txt
 

We also ask you to put exactly this name into the description before submitting your system. This identification method is needed to correctly associate each submitted system with its description paper. Thus, please make sure to write the name exactly as it will appear in your description paper (i.e. case-sensitive). If your submission does not follow these rules, it might not be evaluated. The evaluation script has been adapted to perform a formality check; a sketch of such a check follows.
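The exact checks performed by the official script are not documented here, so the following is only a minimal Python sketch of what such a formality check could look like; the pattern and the function name are illustrative assumptions.

import re

# Hypothetical formality check for the "[Teamname]__[Systemname]" convention:
# the team name must not contain underscores, so the first double underscore
# unambiguously separates the team name from the system name.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9-]+__[A-Za-z0-9_-]+\.(zip|txt)$")

def check_submission_name(filename):
    # Returns True if the file name follows the naming convention.
    return NAME_PATTERN.match(filename) is not None

print(check_submission_name("Funtastic4__SVM_NAIVEBAYES_ensemble1.zip"))  # True
print(check_submission_name("Funtastic4-SVM_ensemble1.zip"))              # False (no double underscore)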

Only the person who makes the submission is required to register for the competition. All team members need to be stated in the description paper of the submitted system. The last submission of a system will be used for the final evaluation. Participants will see whether the submission succeeds; however, there will be no feedback regarding the score. The leaderboard will thus be disabled during the test phase.

The evaluation script is provided with the data so that participants can evaluate on their own data splits. The following file contains this year's evaluation tool:

evaluationScriptGermeval2020_psychpred.py

The evaluation tool comes as a self-contained Python script and is able to handle both tasks. For the tasks to be distinguishable, you need to include a text file in your submission, named either

task1.txt


for Task 1 and

task2.txt 


for Task 2.

The evaluation tool requires three files: the task1.txt or task2.txt marker described above, the file with the system predictions, and a gold standard file. The latter two files have to comply with the following tab-separated formats. For Subtask 1, the target rank (the averaged z-standardized scores of a participant) relative to all participants in a collection (i.e. train / dev / test) is to be reproduced:

student_ID rank 
 

and for Task 2:

UUID motive level 
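As a minimal sketch (not part of the official tooling), prediction files in these formats could be written with Python's csv module. Whether the header line shown above is expected in submission files is an assumption based on the samples; the file names are the illustrative ones from the submission example.

import csv

def write_predictions(path, header, rows):
    # Writes one header line followed by tab-separated prediction rows,
    # mirroring the samples above.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(header)
        writer.writerows(rows)

# Subtask 1: student_ID and rank
write_predictions("Funtastic4__SVR_TF_IDF_ensemble1__task1.txt",
                  ["student_ID", "rank"], [("1034-875791", 15)])

# Subtask 2: UUID, motive, and level
write_predictions("Funtastic4__SVC_TF_IDF_ensemble1__task2.txt",
                  ["UUID", "motive", "level"], [("6221323283933528M10", "F", 5)])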
 

To get more information about its usage, simply type:

python evaluationScriptGermeval2020_psychpred.py --help 

For the task being evaluated, the script computes precision, recall, and F1 score for each class. As summarizing scores, the tool computes accuracy and the macro-average precision, recall, and F1 score.

Although the evaluation tool outputs several evaluation measures, the official ranking of the systems will be based on the macro-average F1 score only. Please remember this when tuning your classifiers. A classifier that is optimized for accuracy may not necessarily produce optimal results in terms of the macro-average F1 score.
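To illustrate why this matters, consider the following minimal sketch (using scikit-learn as an assumption for illustration; the official script computes its own scores): a majority-class guesser can look decent on accuracy while its macro-average F1 score stays low, because every class contributes equally to the macro average.

from sklearn.metrics import accuracy_score, f1_score

gold = ["A", "A", "A", "A", "F", "M"]  # hypothetical gold labels
pred = ["A", "A", "A", "A", "A", "A"]  # always predicting the majority class

print(accuracy_score(gold, pred))             # 0.67
print(f1_score(gold, pred, average="macro"))  # 0.27, since classes F and M score 0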


Terms and Conditions

The copyright to the provided data belongs to the NORDAKADEMIE and, for the OMT-related tasks, to the University of Trier and Impart GmbH, its licensors, vendors, and/or its content providers. The scores and instances serve promotional/public purposes, and permission has been granted by the NORDAKADEMIE and the University of Trier, which both share this dataset. The dataset is redistributed under the Creative Commons license CC BY-NC-SA 4.0.

By participating in this competition, you consent to the public release of your anonymized scores at the GermEval-2020 workshop and in the respective proceedings, at the task organizers' discretion.

 

Important Dates

  • 01-Dec-2019: Release of trial data
  • 01-Jan-2020: Release of training data (train + validation)
  • 08-May-2020: Release of test data
  • 01-Jun-2020: Final submission of test results
  • 03-Jun-2020: Submission of description paper
  • 04-11-Jun-2020: Peer reviewing: participants are expected to review other participants' system descriptions
  • 12-Jun-2020: Notification of acceptance and reviewer feedback
  • 18-Jun-2020: Camera-ready deadline for system description papers
  • 23-Jun-2020: Workshop in Zurich, Switzerland at the KONVENS 2020 and SwissText joint conference

All due times are at 23:59 (AoE).

The shared task on the prediction of intellectual ability from text consists of two subtasks, described below. You can participate in either of them, may learn from external data and/or utilize the data of the respective other subtask for training, as well as perform e.g. multi-task or transfer learning.

Subtask 1: Prediction of Intellectual Ability

The task is to predict measures of intellectual ability based solely on text. For this, z-standardized high school grades and IQ scores of college applicants are summed and globally ranked. The goal of this subtask is to reproduce this ranking; systems are evaluated by the Pearson correlation coefficient between the system ranking and the gold ranking. An exemplary illustration can be found in the Data area.
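As a minimal illustration of this measure (assuming scipy for the computation; the official evaluation script is shipped with the data), the correlation between a system ranking and the gold ranking can be computed as follows:

from scipy.stats import pearsonr

gold_ranks = [1, 2, 3, 4, 5]    # gold ranking of five hypothetical students
system_ranks = [1, 3, 2, 4, 5]  # system ranking with two students swapped

r, _ = pearsonr(gold_ranks, system_ranks)
print(r)  # 0.9; perfectly reproducing the gold ranking yields 1.0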

An example instance looks as follows (including spelling errors made by the participant), with the unique ID (consisting of studentID_imageNo_questionNo), a student ID, an image number, an answer number, the German grade points, the English grade points, the math grade points, the language IQ score, and the logic IQ score (all z-standardized).

The data is delivered in two files, one containing the participant data, the other containing the sample data, connected by a student ID. The rank in the sample data reflects the averaged performance relative to all instances within the collection (i.e. within train / dev / test), which is to be reproduced for the task.

student_ID image_no answer_no UUID MIX_text
1034-875791 2 2 1034-875791_2_2 Die Person fühl sich eingebunden in die Unterhatung.

student_ID german_grade english_grade math_grade lang_iq logic_iq
1034-875791 -0.08651999119820285 0.3747985587188588 0.5115559707967757 -0.010173719700624676 -0.13686707618782515
 
student_ID rank
1034-875791 15
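As a minimal sketch of how such a target ranking could be derived (the exact aggregation of the z-standardized columns and the sort direction are assumptions, and the second student is hypothetical):

# Average the z-standardized grade and IQ scores per student, then rank.
scores = {
    "1034-875791": [-0.0865, 0.3748, 0.5116, -0.0102, -0.1369],
    "1034-000000": [0.5, 0.2, 0.1, 0.3, 0.4],  # hypothetical second student
}

averaged = {sid: sum(vals) / len(vals) for sid, vals in scores.items()}

# Rank 1 = highest averaged score; tie handling is omitted here.
for rank, sid in enumerate(sorted(averaged, key=averaged.get, reverse=True), start=1):
    print(f"{sid}\t{rank}")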
 

The training data set contains 80% of all available data, which is 62,280 expressions from 2,076 participants; the development and test sets contain roughly 10% each, which are 7,800 expressions from 260 participants for the dev set and 7,770 expressions (259 participants) for the test set (this split has been chosen in order to preserve the order and completeness of the 30 answers per participant).

For the final results, participants of this shared task will be provided with the MIX_text only and are asked to reproduce the ranking of each student relative to all students in a collection (i.e. within the test set).

Subtask 2: Classification of the Operant Motive Test (OMT)

Operant motives are unconscious intrinsic desires that can be measured by implicit or operant methods, such as the Operant Motive Test (OMT) (Kuhl and Scheffer, 1999). During the OMT, participants are asked to write freely associated texts in response to provided questions and images. An exemplary illustration can be found in the Data area. Trained psychologists label these textual answers with one of four motives and a corresponding level. The identified motives allow psychologists to predict behavior, long-term development, and subsequent success.

For this task, we provide the participants with a large dataset of labeled textual data, which emerged from an operant motive test. The training data set contains 80% of all available data (167,200 instances), and the development and test sets contain 10% each (20,900 instances).

 
UUID OMT_text
6221323283933528M10 Sie wird ausgeschimpft, will jedoch das Gesicht bewahren.Beleidigt.Weil sie sich schämt, ausgeschimpft zu werden. Die blaue Person ist verletzt und hört nicht auf die Worte der weißen Person.
 
UUID motive level
6221323283933528M10 F 5
 

For this shared task, participants will be provided with the OMT_text and are asked to predict the motive and level of each instance. Success will be measured with the macro-averaged F1 score.
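As a starting point, here is a minimal baseline sketch (not an official baseline; scikit-learn and the joint motive_level label encoding are assumptions) that treats motive and level as one joint class and trains a linear classifier on TF-IDF features:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; the real OMT texts and labels come from the provided files.
texts = ["Sie wird ausgeschimpft, will jedoch das Gesicht bewahren.",
         "Die Person moechte die Gruppe von ihrer Idee ueberzeugen."] * 10
labels = ["F_5", "M_2"] * 10  # motive and level joined into a single class label

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

predictions = clf.predict(texts)
print(f1_score(labels, predictions, average="macro"))  # evaluated on training data only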

References

  • Julius Kuhl and David Scheffer. 1999. Der operante Multi-Motiv-Test (OMT): Manual [The operant multi-motive-test (OMT): Manual]. Impart, Osnabrück, Germany: University of Osnabrück.

System Description Paper Author Guidelines

TBD

Preparation

Start: Dec. 1, 2019, 10 a.m.

Description: Preparation: Submit practice predictions on the sample dataset. Use this to check your file format. A sample submission is available for download under the tab Participate/Files.

Validation

Start: Jan. 1, 2020, 10 a.m.

Description: Evaluation Validation Set: Submit predictions for the validation set. The scoreboard will be enabled.

Test

Start: May 8, 2020, 10 a.m.

Description: Evaluation Test Set: Submit predictions for the test set. Results during this phase will be used to assess the performance of a submission for this shared task. The scoreboard is disabled.

Competition Ends

Never
