EPIC-Kitchens Action Recognition

Organized by willprice

ECCV 2018 Action Recognition Challenge

Start: June 30, 2018, midnight UTC
Competition ends: Never

EPIC-Kitchens 2018 Action Recognition Challenge

Welcome to the EPIC-Kitchens 2018 Action Recognition challenge. EPIC-Kitchens is an unscripted egocentric action dataset recorded by 32 participants in 4 cities across the world.

This challenge is part of the ECCV 2018 workshop.

Dataset details

  • 55 hours of video
  • 11.5M frames
  • 39,594 total action segments
  • 125 verb classes, 352 noun classes
  • 28,472 training action segments
  • Seen kitchens test set - 8,047 action segments
  • Unseen kitchens test set - 2,929 action segments

Goal

Classify trimmed action segments from seen and unseen kitchens by action verb and noun.

Evaluation Criteria

Submissions are evaluated across 2 test sets:

  • Seen kitchens (kitchens that have action segments in the training set)
  • Unseen kitchens (kitchens that have no action segments in the training set)

We evaluate model performance across two sets of metrics:

  • Aggregate

    These metrics are micro-averaged, weighting each class in proportion to its frequency in the test set under evaluation.

    • Top-1 accuracy
    • Top-5 accuracy
  • Per-class

    These metrics are macro-averaged, giving equal weight to all classes regardless of their prevalence. We compute them for many-shot classes only: verb/noun classes that appear more than 100 times in training and, in the case of actions, pairs from the cross product of the many-shot verb and many-shot noun classes that appear at least once in training.

    The many-shot classes can be found on GitHub for verbs, nouns, and actions.

    • Precision
    • Recall
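
    As a toy illustration of the difference between the two averaging schemes (this is not the official evaluation code, just a sketch of the idea):

    ```python
    import numpy as np

    # Micro-averaged top-1 accuracy weights classes by their frequency;
    # macro-averaged recall gives every class equal weight.
    y_true = np.array([0, 0, 0, 0, 1, 2])  # class 0 dominates the test set
    y_pred = np.array([0, 0, 0, 0, 2, 1])  # both rare classes are missed

    micro_top1 = (y_true == y_pred).mean()  # 4/6: dominated by class 0

    classes = np.unique(y_true)
    per_class_recall = [(y_pred[y_true == c] == c).mean() for c in classes]
    macro_recall = np.mean(per_class_recall)  # (1 + 0 + 0) / 3: rare classes count equally
    ```

    A model that only ever predicts frequent classes can score well on the aggregate metrics while scoring poorly on the per-class ones.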

Terms and Conditions

PLEASE ONLY SIGN UP WITH AN EMAIL ADDRESS FROM A UNIVERSITY/COMPANY DOMAIN (gmail, qq.com, etc. will be rejected)

  • You agree to us storing your submission results for evaluation purposes.
  • You agree that if you place in the top-10 at the end of the challenge you will submit your code so that we can check for cheating.
  • You agree not to distribute the EPIC-Kitchens dataset without prior written permission.

Submissions

To submit your results to the leaderboard you must construct a submission zip file containing two files:

  • seen.json - Model inference on the seen kitchens test set (S1)
  • unseen.json - Model inference on the unseen kitchens test set (S2)

Both of these files follow the same format detailed below:

JSON Submission Format

The JSON submission format is composed of a single JSON object containing entries for every action in the test set. Specifically, the JSON file should contain:

  • a 'version' property, set to '0.1' (the only supported version so far);
  • a 'challenge' property, which can assume the following values, depending on the challenge: ['action_recognition', 'action_anticipation'];
  • a 'results' object containing entries for every action in the test set (e.g. '1924' is the first action ID in the seen test set).

Each action segment entry is a nested object with two entries: 'verb', specifying a score for every verb class, and 'noun', specifying a score for every noun class. Action scores are automatically computed by applying softmax to the verb and noun scores and multiplying to obtain the probability of each possible action.

{
  "version": "0.1",
  "challenge": "action_recognition",
  "results": {
    "1924": {
      "verb": {
        "0": 1.223,
        "1": 4.278,
        ...
        "124": 0.023
      },
      "noun": {
        "0": 0.804,
        "1": 1.870,
        ...
        "351": 0.023
      }
    },
    "1925": { ... },
    ...
  }
}
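
A submission file in this format can be assembled with a few lines of Python. This is an illustrative sketch, not official tooling; the helper names are ours, and real score lists would cover all 125 verb and 352 noun classes:

```python
import json

def build_entry(verb_scores, noun_scores):
    """Convert per-class score lists into the nested submission format."""
    return {
        "verb": {str(i): float(s) for i, s in enumerate(verb_scores)},
        "noun": {str(i): float(s) for i, s in enumerate(noun_scores)},
    }

def build_submission(results):
    """results maps action segment IDs (strings) to (verb, noun) score lists."""
    return {
        "version": "0.1",
        "challenge": "action_recognition",
        "results": {
            seg_id: build_entry(v, n) for seg_id, (v, n) in results.items()
        },
    }

# Truncated demo scores, mirroring the example above
submission = build_submission({"1924": ([1.223, 4.278], [0.804, 1.870])})
print(json.dumps(submission, indent=2))
```

Writing the result with `json.dump` to seen.json and unseen.json produces the two files required in the archive.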

If you wish to compute your own action scores, you can augment each segment entry with exactly 100 action scores under the key 'action':

{
  "version": "0.1",
  "challenge": "action_recognition",
  "results": {
    "1924": {
      "verb": {
        "0": 1.223,
        "1": 4.278,
        ...
        "124": 0.023
      },
      "noun": {
        "0": 0.804,
        "1": 1.870,
        ...
        "351": 0.023
      },
      "action": {
        "0,1": 1.083,
        ...
        "124,351": 0.002
      }
    },
    "1925": { ... },
    ...
  }
}

The keys of the action object are of the form <verb_class>,<noun_class>.

You can provide scores in any float format that numpy is capable of reading (i.e. you do not need to stick to 3 decimal places).

If you do not provide your own action scores, we will compute them by:

  1. Obtaining softmax probabilities from your verb and noun scores
  2. Taking the top 100 action probabilities, where p(a = (v, n)) = p(v) * p(n)
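
The two steps above can be sketched in Python. This is an illustration of the stated procedure, not the official evaluation code, and the function name is ours:

```python
import itertools
import numpy as np

def top_100_action_scores(verb_scores, noun_scores):
    """Softmax verbs and nouns independently, then keep the 100 most
    probable (verb, noun) pairs, keyed as '<verb_class>,<noun_class>'."""
    def softmax(x):
        e = np.exp(x - np.max(x))  # shift by max for numerical stability
        return e / e.sum()

    p_verb = softmax(np.asarray(verb_scores, dtype=float))
    p_noun = softmax(np.asarray(noun_scores, dtype=float))
    # p(a = (v, n)) = p(v) * p(n)
    pairs = (
        (f"{v},{n}", p_verb[v] * p_noun[n])
        for v, n in itertools.product(range(len(p_verb)), range(len(p_noun)))
    )
    top = sorted(pairs, key=lambda kv: kv[1], reverse=True)[:100]
    return dict(top)

# Tiny demo with 2 verb and 3 noun classes
scores = top_100_action_scores([1.0, 3.0], [0.5, 0.5, 2.0])
```

If you compute your own action scores, following the same key convention keeps your submission consistent with the server-side fallback.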

Submission archive

To upload your results to CodaLab you have to zip both files into a flat zip archive (they can’t be inside a folder within the archive).

You can create a flat archive using the following command, provided the JSON files are in your current directory:

$ zip -j my-submission.zip seen.json unseen.json
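
If you prefer to build the archive programmatically, the same flat layout can be produced with Python's zipfile module (a sketch; the function name is ours):

```python
import os
import zipfile

def make_submission_zip(archive_path, json_paths):
    """Equivalent of `zip -j`: store each file at the archive root."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in json_paths:
            # basename strips any leading directories so the archive stays flat
            zf.write(path, arcname=os.path.basename(path))
```

Either way, check that `seen.json` and `unseen.json` sit at the root of the archive before uploading.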
