EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition

Organized by jonmun

CVPR 2021 Challenge

Start: Aug. 5, 2020, midnight UTC
End: May 28, 2021, midnight UTC


Welcome to the EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition.

EPIC-KITCHENS-100 is an unscripted egocentric action dataset collected from 45 kitchens from 4 cities across the world. The unsupervised domain adaptation challenge tests how models can cope with similar data collected 2 years later on the task of action recognition.

Dataset Details

  • A labelled source domain containing videos from EPIC-KITCHENS-55 (recorded in 2018).
  • An unlabelled target domain containing videos from EPIC-KITCHENS-100 (recorded in 2020).

A separate set of participants is available for validation/hyper-parameter tuning.

Goal

Given labelled videos from the source domain and unlabelled videos from the target domain, the goal is to classify actions in the target domain. An action is defined as a verb and a noun depicted in a trimmed video clip.

Evaluation Criteria

Submissions are evaluated on the target test set. We report top-1 and top-5 accuracy on the target test set (and on the source test set, for reference only).
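As a rough illustration of the metric (not the official evaluation script), top-k accuracy counts a sample as correct when its true class appears among the k highest-scoring classes:

```python
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest scores.

    scores: (num_samples, num_classes) array of class scores.
    labels: (num_samples,) array of ground-truth class indices.
    """
    topk = np.argsort(scores, axis=1)[:, -k:]        # indices of the k largest scores per row
    hits = (topk == labels[:, None]).any(axis=1)     # was the true label among them?
    return hits.mean()

# Toy example with 3 classes: first sample's top-1 prediction is correct, second's is not.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
labels = np.array([1, 2])
print(topk_accuracy(scores, labels, k=1))  # 0.5
```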


Terms and Conditions

  • You agree to us storing your submission results for evaluation purposes.
  • You agree that if you place in the top-10 at the end of the challenge you will submit your code so that we can verify that you have not cheated.
  • You agree not to distribute the EPIC-KITCHENS-100 dataset without prior written permission.

Submissions

To submit your results to the leaderboard you must construct a submission zip file containing a single file, test.json, with the model's results on the target and source test sets. This file should follow the format detailed in the next section.

JSON Submission Format

The JSON submission format is composed of a single JSON object containing entries for every action in the test set. Specifically, the JSON file should contain:

  • a 'version' property, set to '0.2'
  • a 'challenge' property, which for this challenge must be set to 'domain_adaptation';
  • a set of sls properties (see Supervision Levels Scale (SLS) page for more details):
    • sls_pt: SLS Pretraining level.
    • sls_tl: SLS Training Labels level. Note: this refers to the source domain only. No annotations are allowed in the target domain.
    • sls_td: SLS Training Data level.
  • a 'results_target' object containing entries for every action in the target test set (e.g. 'P01_101_0' is the first narration ID in the target test set).
  • a 'results_source' object containing entries for every action in the source test set (e.g. 'P01_11_0' is the first narration ID in the source test set).

Each action segment entry is a nested object with two entries: 'verb', giving a score for every verb class, and 'noun', giving a score for every noun class. Action scores are computed automatically by applying softmax to the verb and noun scores and multiplying the resulting probabilities to obtain the probability of each possible action.

{
  "version": "0.2",
  "challenge": "domain_adaptation",
  "sls_pt": -1,
  "sls_tl": -1,
  "sls_td": -1,
  "results_target": {
    "P01_101_0": {
      "verb": {
        "0": 1.223,
        "1": 4.278,
        ...
        "96": 0.023
      },
      "noun": {
        "0": 0.804,
        "1": 1.870,
        ...
        "299": 0.023
      }
    },
    "P01_101_1": { ... },
    ...
  },
  "results_source": {
    "P01_11_0": {
      "verb": {
        "0": 1.223,
        "1": 4.278,
        ...
        "96": 0.023
      },
      "noun": {
        "0": 0.804,
        "1": 1.870,
        ...
        "299": 0.023
      }
    },
    "P01_11_1": { ... },
    ...
  }
}
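A minimal sketch of building such a submission file in Python. The class counts (97 verbs, 300 nouns) match the score indices shown above; the random scores stand in for real model outputs:

```python
import json
import numpy as np

# Class counts implied by the example above ("96" is the last verb key, "299" the last noun key).
N_VERBS, N_NOUNS = 97, 300

def segment_entry(verb_scores, noun_scores):
    """Format one segment's raw scores as the expected class-index -> score mapping."""
    return {
        "verb": {str(i): float(s) for i, s in enumerate(verb_scores)},
        "noun": {str(i): float(s) for i, s in enumerate(noun_scores)},
    }

submission = {
    "version": "0.2",
    "challenge": "domain_adaptation",
    "sls_pt": -1, "sls_tl": -1, "sls_td": -1,
    "results_target": {
        # One entry per narration ID in the target test set; random scores as placeholders.
        "P01_101_0": segment_entry(np.random.randn(N_VERBS), np.random.randn(N_NOUNS)),
    },
    "results_source": {
        "P01_11_0": segment_entry(np.random.randn(N_VERBS), np.random.randn(N_NOUNS)),
    },
}

with open("test.json", "w") as f:
    json.dump(submission, f)
```

In a real submission, results_target and results_source must of course contain an entry for every narration ID in the respective test set.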

If you wish to compute your own action scores, you can augment each segment entry with exactly 100 action scores under the key 'action':

{
  ...
  "results_target": {
    "P01_101_0": {
      "verb": {
        "0": 1.223,
        "1": 4.278,
        ...
        "96": 0.023
      },
      "noun": {
        "0": 0.804,
        "1": 1.870,
        ...
        "299": 0.023
      },
      "action": {
        "0,1": 1.083,
        ...
        "96,299": 0.002
      }
    },
    "P01_101_1": { ... },
    ...
  }
}

The keys of the action object are of the form <verb_class>,<noun_class>.

You can provide scores in any float format that numpy is capable of reading (i.e. you do not need to stick to 3 decimal places).

If you do not provide your own action scores, we will compute them by:

  1. Obtaining softmax probabilities from your verb and noun scores.
  2. Finding the top-100 action probabilities, where p(a = (v, n)) = p(v) * p(n).
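The two steps above can be sketched as follows (an illustrative re-implementation, not the official scoring code):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def top_actions(verb_scores, noun_scores, k=100):
    """Top-k action probabilities p(v, n) = p(v) * p(n), keyed '<verb_class>,<noun_class>'."""
    pv = softmax(np.asarray(verb_scores, dtype=float))
    pn = softmax(np.asarray(noun_scores, dtype=float))
    probs = np.outer(pv, pn)                  # p(v) * p(n) for every verb/noun pair
    flat = probs.ravel()
    top = np.argsort(flat)[::-1][:k]          # indices of the k largest probabilities
    n_nouns = probs.shape[1]
    return {f"{i // n_nouns},{i % n_nouns}": float(flat[i]) for i in top}
```

For example, with 2 verb scores and 3 noun scores, top_actions([0.0, 1.0], [0.0, 0.0, 1.0], k=2) ranks the pair of the highest-scoring verb and noun ("1,2") first.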

Submission archive

To upload your results to CodaLab you have to zip the test file into a flat zip archive (it can’t be inside a folder within the archive).

You can create a flat archive using the following command, provided the JSON file is in your current directory:

$ zip -j my-submission.zip test.json

