Submissions are evaluated on the test set. We report Mean Top-5 Recall (MT5R) on the following subsets of the test set:
For a definition of Top-5 Recall, see Section 3.2 of [1]. Mean Top-5 Recall is obtained by averaging Top-5 Recall values computed for each class appearing in the test set.
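The following is a minimal sketch of how Mean Top-5 Recall can be computed, assuming a (num_instances, num_classes) score matrix and integer ground-truth labels; the function name and array layout are illustrative, and the official evaluation code remains authoritative:

import numpy as np

def mean_top5_recall(scores, labels):
    # scores: (num_instances, num_classes) array of class scores.
    # labels: (num_instances,) array of ground-truth class indices.
    top5 = np.argsort(scores, axis=1)[:, -5:]        # the 5 highest-scoring classes per instance
    hit = (top5 == labels[:, None]).any(axis=1)      # True where the ground truth is in the top 5
    # Top-5 Recall per class appearing in the test labels, then the mean over classes.
    recalls = [hit[labels == c].mean() for c in np.unique(labels)]
    return float(np.mean(recalls))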
To submit your results to the leaderboard, you must construct a submission zip file containing a single file, test.json, with your model's results on the test set. This file should follow the format detailed in the next section.
The JSON submission format is composed of a single JSON object containing entries for every action in the test set. Specifically, the JSON file should contain:

- a 'version' property, set to '0.2';
- a 'challenge' property, which can take one of the following values, depending on the challenge: ['action_recognition', 'action_anticipation'];
- 'sls' properties (see the Supervision Levels Scale (SLS) page for more details):
  - sls_pt: SLS Pretraining level.
  - sls_tl: SLS Training Labels level.
  - sls_td: SLS Training Data level.
- a 'results' object containing entries for every action in the test set (e.g. 'P01_101_0' is the first narration ID in the test set). Each action segment entry is a nested object composed of two entries: 'verb', specifying the class score for every verb class, and 'noun', specifying the score for every noun class. Action scores are automatically computed by applying softmax to the verb and noun scores and computing the probability of each possible action.
{
"version": "0.2",
"challenge": "action_recognition",
"sls_pt": -1,
"sls_tl": -1,
"sls_td": -1,
"results": {
"P01_101_0": {
"verb": {
"0": 1.223,
"1": 4.278,
...
"96": 0.023
},
"noun": {
"0": 0.804,
"1": 1.870,
...
"299": 0.023
}
},
"P01_101_1": { ... },
...
}
}
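A submission file matching the example above can be generated programmatically. Below is a minimal sketch, assuming per-segment verb and noun score arrays keyed by narration ID; the variable names and the score_dict helper are illustrative and not part of any official tooling, and the class counts (97 verbs, 300 nouns) simply mirror the example:

import json
import numpy as np

def score_dict(scores):
    # Map each class index (as a string) to its raw score as a plain float.
    return {str(i): float(s) for i, s in enumerate(scores)}

# Placeholder model outputs, keyed by narration ID.
verb_scores = {"P01_101_0": np.random.randn(97)}
noun_scores = {"P01_101_0": np.random.randn(300)}

submission = {
    "version": "0.2",
    "challenge": "action_recognition",
    "sls_pt": -1,
    "sls_tl": -1,
    "sls_td": -1,
    "results": {
        narration_id: {
            "verb": score_dict(verb_scores[narration_id]),
            "noun": score_dict(noun_scores[narration_id]),
        }
        for narration_id in verb_scores
    },
}

with open("test.json", "w") as f:
    json.dump(submission, f)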
If you wish to compute your own action scores, you can augment each segment submission with exactly 100 action scores under the key 'action':
{
...
"results": {
"P01_101_0": {
"verb": {
"0": 1.223,
"1": 4.278,
...
"96": 0.023
},
"noun": {
"0": 0.804,
"1": 1.870,
...
"299": 0.023
},
"action": {
"0,1": 1.083,
...
"96,299": 0.002
}
},
"P01_101_1": { ... },
...
}
}
The keys of the 'action' object are of the form <verb_class>,<noun_class>.
You can provide scores in any float format that numpy is capable of reading (i.e. you do not need to stick to 3 decimal places).
If you do not provide your own action scores, we will compute them as
p(a = (v, n)) = p(v) * p(n)
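If you do compute your own action scores, one reasonable way to populate the 'action' object is to take, for each segment, the 100 most probable (verb, noun) pairs under this same product rule. The sketch below assumes raw verb and noun scores are converted to probabilities with softmax; it mirrors the server-side computation described above but is not the official scoring code:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def top100_action_scores(verb_scores, noun_scores):
    # Return the 100 largest p(v) * p(n) products as an 'action' dict
    # keyed by '<verb_class>,<noun_class>'.
    p_verb = softmax(np.asarray(verb_scores, dtype=float))
    p_noun = softmax(np.asarray(noun_scores, dtype=float))
    prod = np.outer(p_verb, p_noun)                      # prod[v, n] = p(v) * p(n)
    flat = np.argsort(prod, axis=None)[-100:][::-1]      # indices of the 100 largest products
    verbs, nouns = np.unravel_index(flat, prod.shape)
    return {f"{v},{n}": float(prod[v, n]) for v, n in zip(verbs, nouns)}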
To upload your results to CodaLab, you have to zip the test file into a flat zip archive (test.json cannot sit inside a folder within the archive).
You can create a flat archive using the following command, provided the JSON file is in your current directory:
$ zip -j my-submission.zip test.json
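To double-check that the archive is flat, you can list its contents; test.json should appear at the top level with no directory prefix:

$ unzip -l my-submission.zip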
Start: July 14, 2021, midnight
End: Nov. 25, 2021, 11:59 p.m.
Description: 2021 Open Testing Phase - Action Anticipation