3D Reconstruction from Passive Sensors
In the past few decades, image-based 3D reconstruction has been widely adopted across a variety of applications for deriving accurate 3D models of detected objects. Although current commercial software has automated the process of extracting 3D information from 2D images, a transparent system that can incorporate different user-defined constraints remains the preferred option for many applications. In this regard, this dissertation introduces a generic framework for image-based 3D reconstruction from RGB frame cameras with different configurations (i.e., single or multi-camera systems). Within this generic framework, a fully automated aerial triangulation procedure, which assumes the availability of prior information regarding the platform trajectory, is first proposed for accurate 3D reconstruction from images captured by a single camera. Then, an automated 3D reconstruction procedure is introduced for image-based point cloud generation from multi-camera systems. Like the automated aerial triangulation, this procedure also takes advantage of prior trajectory information to facilitate the image-based 3D reconstruction process. Since both proposed procedures are built on a Structure-from-Motion (SfM) framework, they can handle image-based 3D reconstruction in the absence of GNSS (Global Navigation Satellite System)/INS (Inertial Navigation System) information, or in the presence of only less-accurate POS (Position and Orientation System) information. Finally, an object space-based approach is proposed to achieve more efficient image-based dense point cloud generation. Experimental results from real datasets demonstrate the feasibility of the proposed procedures in providing accurate image-based 3D reconstruction from images acquired with different configurations.
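To make the role of prior trajectory information concrete, the sketch below shows the core geometric step such pipelines rely on: given two frame cameras with known (prior) poses, as when platform trajectory information is available from GNSS/INS, matched image points can be triangulated into 3D object points by linear (DLT) triangulation. This is a minimal illustration with synthetic, noise-free data and hypothetical camera parameters, not the dissertation's actual procedure, which additionally refines poses within an SfM/aerial-triangulation framework.

```python
import numpy as np

# Hypothetical intrinsics for an RGB frame camera (focal length and
# principal point in pixels) -- illustrative values only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Prior poses: camera 1 at the world origin, camera 2 translated
# 1 unit along x (a simple stereo baseline, as if from a trajectory).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project 3D points (N,3) through a 3x4 camera matrix to pixels (N,2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of corresponding image points."""
    out = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        # Each observation contributes two linear constraints on X.
        A = np.vstack([u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0],
                       v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        Xh = Vt[-1]                 # null-space solution (homogeneous)
        out.append(Xh[:3] / Xh[3])  # dehomogenize
    return np.array(out)

# Synthetic object points in front of both cameras.
rng = np.random.default_rng(42)
pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(20, 3))

x1, x2 = project(P1, pts3d), project(P2, pts3d)
recovered = triangulate(P1, P2, x1, x2)
err = np.max(np.abs(recovered - pts3d))
print(err)  # near zero for noise-free observations
```

In a full pipeline the prior poses would only initialize a bundle adjustment that refines both the trajectory and the object points; without GNSS/INS, an SfM front end must estimate the relative poses from the image correspondences themselves before this triangulation step can run.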
Habib, Purdue University.
Computer Engineering|Engineering|Civil engineering