3D pose estimation remains one of the fundamental problems in computer vision. It has numerous applications in activity recognition, motion retargeting, autonomous driving, etc. In the past few years, several algorithms have been proposed for this problem that show impressive performance on datasets such as Human3.6M, MPI-INF-3DHP, and HumanEva. Though these datasets contain challenging poses, they are all collected in indoor or heavily constrained environments. Consequently, we can ascertain the performance of these algorithms only for a limited subset of possible images. This challenge aims to evaluate the performance of 3D pose estimation algorithms on images collected in the wild.
For evaluation, we make use of the 3DPW dataset - the first in-the-wild dataset annotated with 3D poses. The dataset was collected using a hand-held video camera and IMUs to record the activities of people. The IMUs are used to estimate the 3D pose of the people, and these 3D poses are then assigned to the 2D poses detected in the image using a state-of-the-art pose estimator. 3DPW is the first dataset annotated with 3D poses of people performing routine daily activities such as catching a bus, playing a guitar, and doing sports. In this challenge, we do not use the original splits of the dataset; the entire dataset, including its train, validation, and test splits, is used for evaluation. Your algorithm cannot use any part of the 3DPW dataset for training. You may use any of the other widely available datasets, such as Human3.6M, HumanEva, and MPI-INF-3DHP, for training.
The 3D human pose performance is evaluated according to the following metrics:
1) MPJPE: Mean Per Joint Position Error (in mm). It measures the average Euclidean distance between the predicted and ground-truth joint positions.
2) MPJPE_PA: Mean Per Joint Position Error (in mm) after Procrustes alignment.
3) MPJAE: it measures the angle in degrees between the predicted part orientation and the ground-truth orientation. The orientation difference is measured as the geodesic distance in SO(3). The nine parts considered are: left/right upper arm, left/right lower arm, left/right upper leg, left/right lower leg, and root.
4) MPJAE_PA: it measures the angle in degrees between the predicted part orientation and the ground-truth orientation after rotating all predicted orientations by the rotation matrix obtained from the Procrustes matching step.
5) PCK: percentage of correct joints. A joint is considered correct if it is less than 50 mm away from the ground truth. The joints considered for PCK are: shoulders, elbows, wrists, hips, knees, and ankles.
6) AUC: the total area under the PCK-threshold curve, computed by evaluating PCK while varying the correctness threshold from 0 to 200 mm.
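For illustration, the positional and angular metrics above can be sketched in Python as follows. This is a simplified sketch, not the official evaluation code: it assumes joint positions in meters (so the 50 mm PCK threshold becomes 0.05) and uses a standard similarity-transform Procrustes alignment; all function names here are ours.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth joints.
    pred, gt: (J, 3) arrays in meters."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def procrustes_align(pred, gt):
    """Align pred to gt with a similarity transform (rotation, scale, translation)."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(X.T @ Y)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # avoid reflections
    R = U @ D @ Vt                              # rotation so that X @ R best matches Y
    scale = (S * np.diag(D)).sum() / (X ** 2).sum()
    return scale * X @ R + mu_g

def pck(pred, gt, thresh=0.05):
    """Fraction of joints within `thresh` meters of the ground truth."""
    return (np.linalg.norm(pred - gt, axis=-1) < thresh).mean()

def auc(pred, gt, max_thresh=0.2, steps=31):
    """Normalized area under the PCK curve for thresholds in [0, max_thresh]."""
    ts = np.linspace(0.0, max_thresh, steps)
    return np.trapz([pck(pred, gt, t) for t in ts], ts) / max_thresh

def mpjae(R_pred, R_gt):
    """Mean geodesic distance in SO(3), in degrees. R_pred, R_gt: (K, 3, 3)."""
    tr = np.einsum('kij,kij->k', R_pred, R_gt)      # trace(R_pred^T @ R_gt) per part
    return np.degrees(np.arccos(np.clip((tr - 1.0) / 2.0, -1.0, 1.0))).mean()
```

MPJPE_PA and MPJAE_PA would apply `procrustes_align` (respectively, its rotation) before computing the corresponding metric.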
Please download the 3DPW dataset from this link. Please run your algorithm on all the images in the train/validation/test folders and submit the results in the format described below. Please do not use this data for fine-tuning your algorithm or for validation purposes. The entire dataset will be used for evaluation.
The structure of the submission directory should mirror that of the ground-truth directory. The submission directory should have three sub-directories: train, validation, and test. For each pickle file in the ground-truth directory, there should be a pickle file with exactly the same name in the submission directory. Each pickle file should contain a dictionary with two keys: 'jointPositions' and 'orientations'.
'jointPositions' - must be an array of shape P x N x 24 x 3 containing the 3D location of each SMPL joint. The joint positions must be in meters.
'orientations' - must be an array of shape P x N x 9 x 3 x 3 containing the orientation matrices, in the global coordinate frame, of a subset of the SMPL parts.
P: The number of people tracked in the sequence. If there is just one tracked person, the shape should be 1 x N x 24 x 3.
N: The number of frames in the sequence.
The order of the jointPositions should follow the canonical ordering of the SMPL joints: the root joint first, followed by the two hip joints, and so on.
The order of the orientations should be: root (JOINT 0), left hip (JOINT 1), right hip (JOINT 2), left knee (JOINT 4), right knee (JOINT 5), left shoulder (JOINT 16), right shoulder (JOINT 17), left elbow (JOINT 18), right elbow (JOINT 19).
If your algorithm does not make predictions for a particular frame (for example, when fewer than six 2D pose keypoints are detected), please fill that space with zeros or some other dummy value. No algorithm will be evaluated on those frames. The evaluation protocol also ignores frames where the camera has not been well aligned with the image; these instances are labelled in the valid_camposes array in the annotated pkl files.
If you only want to be evaluated on the joint positions and do not want to provide orientations, please include only the 'jointPositions' key in each dictionary. Likewise, please include only the 'orientations' key in each dictionary if you want your method to be evaluated only on the angular metrics.
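As a concrete example, the per-sequence pickle files described above could be written with a small helper like the following. This is a sketch under our own assumptions: the helper name and the sequence name 'example_sequence' are placeholders, and the zero/identity arrays stand in for real predictions.

```python
import os
import pickle
import numpy as np

def write_prediction(out_dir, split, seq_name, joint_positions, orientations=None):
    """Save predictions for one sequence as <out_dir>/<split>/<seq_name>.pkl.

    joint_positions: (P, N, 24, 3) array of SMPL joint locations in meters.
    orientations:    optional (P, N, 9, 3, 3) array of global rotation matrices.
    """
    os.makedirs(os.path.join(out_dir, split), exist_ok=True)
    result = {'jointPositions': np.asarray(joint_positions)}
    if orientations is not None:   # omit this key to skip the angular metrics
        result['orientations'] = np.asarray(orientations)
    with open(os.path.join(out_dir, split, seq_name + '.pkl'), 'wb') as f:
        pickle.dump(result, f)

# One tracked person (P = 1) over 100 frames; zeros mark frames without predictions.
joints = np.zeros((1, 100, 24, 3))
rots = np.tile(np.eye(3), (1, 100, 9, 1, 1))   # shape (1, 100, 9, 3, 3)
write_prediction('submission', 'test', 'example_sequence', joints, rots)
```

Repeating this for every sequence in the train, validation, and test splits produces the required directory layout.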
Please submit the three folders - train, validation, and test - containing all the pkl files with your predictions as one zip file. Please ensure that an extra folder is not created during the zipping process. The command 'zip -r submission.zip train validation test' might be useful.
To participate in this challenge you need to agree with the following conditions: https://virtualhumans.mpi-inf.mpg.de/3DPW/license.html
Start: May 1, 2020, midnight
Description: In this phase you are allowed to use an object detector to first detect the people in the image.
End: July 31, 2020, midnight