Efficient and versatile three-dimensional scene modeling by sparse-depth dense-viewpoint acquisition

Mihai Mudure, Purdue University

Abstract

The present study describes an automated modeling approach for creating 3D digital models of real-world scenes. The approach is based on sparse-depth sampling from a dense set of acquisition viewpoints. We show that in sparse-depth/dense-viewpoint (SDDV) scanning, the depth data quickly accumulate to generate models with good scene coverage. Data are acquired efficiently and robustly for each viewpoint, which enables an interactive, operator-in-the-loop modeling pipeline. Problems are detected and addressed at acquisition time, ensuring that high-quality models are obtained in a single scanning session. The resulting models are compact, enable photorealistic virtual walkthroughs at interactive rates, and are suitable for computer graphics applications such as virtual training, cultural heritage preservation, and real-estate marketing.

We implement the approach in an efficient modeling system that acquires scenes with complex geometry and complex reflective properties from thousands of viewpoints in minutes. The system handles a variety of scenes efficiently, from small scenes (a 50 cm cube) to large, room-sized environments, and it is robust without requiring scene objects to be displaced or scene lighting to be altered.

We employ an acquisition device that captures sparse depth and dense color at interactive rates. The device consists of a video camera rigidly attached to a laser system that casts a 7x7 pattern of dots into the camera's field of view. The dots are detected in the video frame and converted to 3D points by triangulation. By construction, the laser beams project onto the video frame as disjoint epipolar segments, which makes dot detection efficient and robust. The device is mounted on a mechanical tracking arm that reports its pose in real time and allows six-degree-of-freedom motion within a 50 cm sphere centered at the arm base.

Depth and color data are acquired in two separate passes. The operator sweeps the scene with the device to acquire the data, and monitors and guides the acquisition process through immediate visual feedback. The acquired data are combined into a view-dependent model that produces quality novel views of the scene at interactive rates.
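
The epipolar-segment formulation lends itself to a simple per-frame processing loop: each of the 49 laser beams is calibrated in advance, its image projection is confined to a known, disjoint segment, and a dot found on a segment is triangulated against its corresponding beam. The following sketch illustrates the idea in Python with NumPy; the function names, the brightest-pixel detector, and the least-squares ray intersection are illustrative assumptions, not the implementation described in the dissertation.

    import numpy as np

    def detect_dot_on_segment(frame, seg_pixels, threshold=200):
        """Search one precalibrated epipolar segment (a list of (row, col)
        pixel coordinates in a grayscale frame) for a laser dot; return the
        brightest pixel above the threshold, or None if the beam does not
        hit the scene in view."""
        best, best_val = None, threshold
        for (r, c) in seg_pixels:
            if frame[r, c] > best_val:
                best, best_val = (r, c), frame[r, c]
        return best

    def triangulate(cam_ray_dir, laser_origin, laser_dir):
        """Triangulate a 3D point as the midpoint of the common perpendicular
        between the camera ray through the detected dot (camera center at the
        origin) and the calibrated laser beam."""
        d1 = np.asarray(cam_ray_dir, dtype=float)
        d1 = d1 / np.linalg.norm(d1)
        d2 = np.asarray(laser_dir, dtype=float)
        d2 = d2 / np.linalg.norm(d2)
        o2 = np.asarray(laser_origin, dtype=float)
        # Least-squares solution of t * d1 = o2 + s * d2
        A = np.array([[d1 @ d1, -d1 @ d2],
                      [d1 @ d2, -d2 @ d2]])
        b = np.array([d1 @ o2, d2 @ o2])
        t, s = np.linalg.solve(A, b)
        return 0.5 * (t * d1 + o2 + s * d2)

Because the 49 segments are disjoint and fixed with respect to the camera, only a small fraction of each video frame needs to be examined, which is what makes depth acquisition at interactive rates practical.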
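
Since the tracking arm reports the device pose in real time, the dots triangulated in each frame can be accumulated directly into a common world frame as the operator sweeps the scene. Below is a minimal sketch, assuming the pose is available as a rotation matrix and translation per frame (the abstract does not specify the pose representation); the names are hypothetical.

    import numpy as np

    def accumulate_sweep(frames):
        """Accumulate per-frame triangulated dots into one world-frame point
        cloud. `frames` yields (points_device, R, t) tuples: an (N, 3) array
        of dots triangulated in the device frame, plus the device pose
        reported by the tracking arm for that video frame."""
        world_points = []
        for points_device, R, t in frames:
            if len(points_device) == 0:
                continue  # no laser dots detected in this frame
            # Rigid transform of every point: x_world = R @ x_device + t
            world_points.append(points_device @ np.asarray(R).T + np.asarray(t))
        return np.vstack(world_points) if world_points else np.empty((0, 3))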

Degree

Ph.D.

Advisors

Popescu, Purdue University.

Subject Area

Computer science
