

> MANO data generation

Hi,
Thanks for organising this Task-1 challenge. I see that you recommended using MANO synthetic data generation for data augmentation. While running the MANO script, I found that the generated training images look more or less the same as the originals, which was not my goal in this case. For any particular image, I was hoping to get MANO-generated images that look considerably different.
Could you please give some idea of which parameters control the generation of poses that differ from an original input image?

Posted by: nnn @ Sept. 10, 2019, 7:35 p.m.

In lines 65-68 of visualize_task1_task2.py in src.zip, you can see that our MANO parameters for Tasks 1 and 2 are composed of:
1) mano_cam: 4-dimensional camera parameter (0: scale, 1: x-axis translation, 2: y-axis translation, 3: z-axis translation),
2) mano_quat: quaternion parameter (4 values deciding the global viewpoint of the hands),
3) mano_art: articulation parameter (45-dimensional articulation PCA coefficients),
4) mano_shape: shape parameter (10-dimensional shape PCA coefficients).
The articulation parameters are related to the hand poses, so you can change these values to modify the hand poses.
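As a rough illustration (this is not actual code from src.zip; the initial values and noise scale below are just placeholders), you could perturb mano_art while keeping the camera, viewpoint, and shape parameters fixed, e.g.:

import numpy as np

# Sketch: sample perturbed MANO parameter sets so that the rendered hands
# differ noticeably from the original. Names follow the layout described
# above; the starting values and noise scale are illustrative guesses.

rng = np.random.default_rng(seed=0)

mano_cam = np.array([1.0, 0.0, 0.0, 0.5])   # 0: scale, 1-3: x/y/z translation
mano_quat = np.array([1.0, 0.0, 0.0, 0.0])  # unit quaternion, global viewpoint
mano_art = np.zeros(45)                     # 45-D articulation PCA coefficients
mano_shape = np.zeros(10)                   # 10-D shape PCA coefficients

variants = []
for _ in range(5):
    # Perturb only the articulation coefficients: a larger scale gives
    # poses that look more different from the input hand.
    art = mano_art + rng.normal(scale=2.0, size=mano_art.shape)
    variants.append(np.concatenate([mano_cam, mano_quat, art, mano_shape]))

print(np.stack(variants).shape)  # (5, 63): five 4+4+45+10 parameter vectors

Each perturbed parameter vector would then be fed to the provided rendering script in place of the original parameters.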
For more explanation, you can refer to:
Section 3.2 of “Pushing the Envelope for RGB-based Dense 3D Hand Pose Estimation via Neural Rendering”, CVPR’19
and “Embodied Hands: Modeling and Capturing Hands and Bodies Together”, ToG’17

Posted by: guiggh @ Sept. 11, 2019, 10:53 a.m.

Thanks a lot for your response. I will try it accordingly.

Posted by: nnn @ Sept. 11, 2019, 7:48 p.m.