Abstract
We present a system for acquiring synchronized multi-view color and depth (RGB-D) data using multiple off-the-shelf Microsoft Kinect sensors, together with a new method for reconstructing spatio-temporally coherent 3D animation from time-varying dynamic RGB-D data. Our acquisition system does not depend on any specific hardware component for synchronizing the cameras. We show that the data acquired by our framework can be synchronously registered in a global coordinate system and then used to reconstruct the 3D animation of a dynamic scene. The main benefit of our work is that, instead of relying on expensive multi-view video capture setups, multiple low-cost Microsoft Kinect sensors can capture both the image and the depth data needed for a 360° reconstruction of a dynamic scene. We also present a new algorithm for tracking dynamic three-dimensional point cloud data that reconstructs a time-coherent representation of a 3D animation without using any template model or a priori assumptions about the underlying surface. We show that, despite some limitations imposed by the hardware on the synchronous acquisition of the data, we obtain reasonably good reconstructions of the animated 3D geometry, which can be used in a number of applications.
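The paper details the acquisition, calibration, and registration pipeline in Sections 3–5. As a rough, non-authoritative sketch of the global-registration idea only, the snippet below back-projects each Kinect depth frame into camera space and maps it into a shared world frame using per-camera intrinsics and extrinsics; the helper names and the assumption that calibrated intrinsics K and extrinsics (R, t) are available offline are ours, not the authors'.

```python
# Minimal sketch (not the authors' implementation): fuse one synchronized set
# of depth frames from several calibrated Kinect sensors into a single point
# cloud expressed in a global coordinate system.
import numpy as np

def depth_to_points(depth, K):
    """Back-project a depth image (in meters) into camera-space 3D points."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # keep only valid (non-zero) depth samples

def to_world(points_cam, R, t):
    """Map camera-space points into the global coordinate system: X_w = R X_c + t."""
    return points_cam @ R.T + t

def fuse_frames(depth_frames, intrinsics, extrinsics):
    """Register one synchronized frame set from all cameras into one cloud."""
    clouds = []
    for depth, K, (R, t) in zip(depth_frames, intrinsics, extrinsics):
        clouds.append(to_world(depth_to_points(depth, K), R, t))
    return np.concatenate(clouds, axis=0)
```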
Contents
1. Introduction
2. Related Work
3. Data Acquisition
4. System Geometry & Calibration
5. Global Registration and Segmentation
6. Spatio-Temporally Coherent 3D Animation
7. Results
8. Conclusion
Acknowledgements
References