HUMBI is a Human Behavioral Multiview Imaging dataset: a large corpus of high-fidelity 3D models of behavioral signals from a diverse population, measured by a massive multi-camera system. With our novel portable imaging system (consisting of 107 HD cameras), we collected human behaviors from 772 subjects spanning gender, ethnicity, age, and physical condition at a public venue.
Using the multiview image streams, we reconstruct high-fidelity models of five elementary body parts: gaze, face, hands, body, and clothing. As a byproduct, the 3D model provides geometrically consistent image annotation via 2D projection, for example, body part segmentation. This dataset is a significant departure from existing human datasets, which suffer from limited subject diversity.
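The projection-based annotation described above can be sketched with a standard pinhole camera model: a reconstructed 3D point is mapped into each camera view with that view's intrinsics and extrinsics. The calibration values below are toy numbers for illustration, not HUMBI's actual camera parameters or file format:

```python
import numpy as np

def project_points(X, K, R, t):
    """Project Nx3 world-space points to Nx2 pixel coordinates
    with a pinhole model: x ~ K (R X + t)."""
    Xc = X @ R.T + t             # world -> camera coordinates
    x = Xc @ K.T                 # apply intrinsics
    return x[:, :2] / x[:, 2:3]  # perspective divide

# Toy calibration: identity rotation, camera 5 units from the subject.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

X = np.array([[0.0, 0.0, 0.0],   # lands on the principal point (640, 360)
              [0.5, 0.0, 0.0]])
print(project_points(X, K, R, t))
```

Projecting every vertex of a reconstructed mesh this way, per camera, is what yields consistent 2D annotations (keypoints, part segmentation) across all 107 views.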
We hope HUMBI opens up new opportunities for human behavioral imaging.
- Aug. 2020 : We provide a script to download the dataset per subject for stable access.
- Jun. 2020 : Our paper was presented at CVPR 2020.
- Jun. 2020 : The datasets for all body elements and the utility code are available. We will keep fixing and updating the data for missing subjects.
- Mar. 2020 : HUMBI dataset webpage is open.
The HUMBI Gaze contains multiview eye images, textures, and 3D gaze reconstruction with head pose.
The HUMBI Face contains multiview images, textures, and 3D geometry (keypoints and mesh).
The HUMBI Hand contains multiview images of the left/right hands, 3D keypoints, and meshes.
The HUMBI Body&Clothing contains multiview images, 3D geometry, and body texture.
Email: humbi.data [at] gmail [dot] com