HUMBI is an ideal dataset for evaluating the ability to model human appearance. To measure this ability, we formulate a novel benchmark challenge on a pose-guided appearance rendering task: given a single-view image of a person, render the person's appearance from other views and poses. HUMBI provides ground truth for this challenging task, so the performance of different approaches can be precisely characterized. We validate the feasibility of the benchmark challenge using state-of-the-art rendering methods.
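As a concrete reading of the task, a submission can be thought of as a function that maps a single source-view image and a target pose (2D keypoints) to a rendered image, which is then compared against the held-out ground-truth view. The sketch below is only illustrative: the function name, array shapes, and the trivial copy-source baseline are assumptions, not an official API.

```python
import numpy as np

def copy_source_baseline(source_image: np.ndarray,
                         target_keypoints: np.ndarray) -> np.ndarray:
    """Trivial baseline: ignore the target pose and return the source
    view unchanged. A real submission replaces this with a pose-guided
    rendering model; the shapes below are illustrative assumptions.

    source_image     : (H, W, 3) uint8 image of the person from one view.
    target_keypoints : (num_joints, 2) 2D keypoints of the target pose.
    Returns an (H, W, 3) uint8 image to be compared to the ground truth.
    """
    return source_image.copy()
```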
Training set: images, 2D keypoints, training list text file
Validation set: images, foreground masks, 2D keypoints, validation list text file
Testing set: images (source view only), 2D keypoints, testing list text file
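The exact layout of the list text files and keypoint files is not specified here; the following is a minimal loading sketch under assumed formats (each list line names a source/target pair, and keypoints are stored as plain-text files next to the images). All paths and field layouts are illustrative assumptions.

```python
import os
import numpy as np
from PIL import Image

def load_pairs(list_path, root):
    """Read a split list file; each non-empty line is assumed to name a
    source image and a target image separated by whitespace
    (an assumption, not the official format)."""
    pairs = []
    with open(list_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                pairs.append((os.path.join(root, parts[0]),
                              os.path.join(root, parts[1])))
    return pairs

def load_sample(src_path, tgt_path):
    """Load a source image and the target 2D keypoints (assumed to be a
    .txt file next to the target image, one 'x y' row per joint)."""
    src_img = np.asarray(Image.open(src_path).convert("RGB"))
    kpt_path = os.path.splitext(tgt_path)[0] + ".txt"   # hypothetical naming
    tgt_kpts = np.loadtxt(kpt_path)                      # (num_joints, 2)
    return src_img, tgt_kpts

# Usage (paths are placeholders):
# for src, tgt in load_pairs("train_list.txt", "HUMBI/"):
#     image, keypoints = load_sample(src, tgt)
```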
References and credits:
1) HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge (TPAMI under review) [paper]
Jae Shin Yoon, Zhixuan Yu, Jaesik Park, and Hyun Soo Park
2) HUMBI: A Large Multiview Dataset of Human Body Expressions (CVPR 2020) [paper]
Zhixuan Yu*, Jae Shin Yoon*, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, and Hyun Soo Park (*: equal contribution)
Project page: HUMBI.
Contact: jsyoon [at] umn.edu
1. You may submit 3 entries every day.
2. You cannot sign up for multiple accounts, and therefore you cannot submit from multiple accounts.
1. You are not allowed to use any testing data at training time.
2. You are not allowed to use any external data.
3. You are allowed to use the validation set freely at training time.
4. You are allowed to edit the training and validation .txt list files (e.g., to add more training pairs).
5. You are allowed to augment the data (e.g., with translation, rotation, and so on) within the provided training and validation sets (see the sketch after this list).
6. You are allowed to use additional detection results (e.g., fashion segmentation, DensePose detection, single-view depth prediction, and so on) computed within the dataset.
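Referring to rule 5, a minimal sketch of in-split augmentation is shown below: a random rotation and translation applied consistently to an image and its 2D keypoints. The keypoint layout, parameter ranges, and OpenCV-based implementation are assumptions for illustration, not part of the official toolkit.

```python
import cv2
import numpy as np

def augment(image, keypoints, max_shift=20, max_angle=15, rng=None):
    """Randomly rotate and translate an image together with its 2D
    keypoints (keypoints assumed to be an (N, 2) array of pixel x, y)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    angle = rng.uniform(-max_angle, max_angle)
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)

    # Affine matrix: rotate about the image center, then translate.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    M[:, 2] += (tx, ty)

    aug_image = cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    aug_keypoints = keypoints @ M[:, :2].T + M[:, 2]
    return aug_image, aug_keypoints
```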
1. Each submission will be scored and ranked by the evaluation metrics stated on the Competition Website. The ranking will be visible on the Competition Website's leaderboard if the submission is set to public. The potential winner(s) are determined solely by the ranking on the leaderboard, subject to compliance with these Rules.
2. In the event of a tie, the Submission that was entered into the Competition first will be the winner.
3. The ranking will be based on MSSIM (mean structural similarity index).
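Following rule 3, the sketch below shows how a submission might be scored with SSIM averaged over prediction/ground-truth image pairs, using scikit-image's structural_similarity. The directory layout, file naming, and PNG format are assumptions; the official evaluation script may differ.

```python
import glob
import os
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity

def mean_ssim(pred_dir, gt_dir):
    """Average SSIM over all prediction/ground-truth image pairs,
    matched by filename (assumed layout, not the official scorer)."""
    scores = []
    for pred_path in sorted(glob.glob(os.path.join(pred_dir, "*.png"))):
        gt_path = os.path.join(gt_dir, os.path.basename(pred_path))
        pred = imread(pred_path)
        gt = imread(gt_path)
        scores.append(structural_similarity(gt, pred,
                                            channel_axis=-1,
                                            data_range=255))
    return float(np.mean(scores))

# Usage (placeholder paths):
# print(mean_ssim("submission/renderings", "ground_truth/renderings"))
```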
Start: June 30, 2013, midnight
Competition ends: Never
# | Username | Score |
---|---|---|
1 | cpmolnar | 0.829868 |
2 | humbi | 0.812742 |
3 | ejmccalla | 0.813914 |