3D building reconstruction from airborne laser scanning data

KyoHyouk Kim, Purdue University

Abstract

Buildings are commonly acknowledged as the most prominent objects in the generation of a 3D virtual model of our environment. Determining their exact locations, spatial extents, patterns, or detailed geometric structures is required in a range of applications, such as urban modeling, city planning, and virtual reality. Land surveying and photogrammetry have been the conventional approaches for this purpose; however, both are labor-intensive and time-consuming. Since the late 1990s, new data sources, such as high-resolution images and airborne laser scanning datasets, have become available, requiring the development of more advanced algorithms. These approaches continue to be studied as new sensor technologies with higher resolution and accuracy are developed. Chief among present-day technologies is Light Detection And Ranging (LiDAR), which has been used successfully in this field over the past two decades. While most existing approaches show promising results toward the automatic generation of 3D building models, a number of issues remain to be addressed. The main objective of this research was to reconstruct 3D building models from airborne laser scanning (ALS) data. To achieve this objective, we proposed a complete framework for the process, including LiDAR filtering, building extraction, roof plane segmentation, and 3D reconstruction. In the "LiDAR filtering" step, the raw LiDAR points were first separated into two groups, the ground and non-ground points, by an adaptive morphological filter based on curvature and elevation difference; the non-ground points were then further processed to generate 3D buildings. Buildings at the complexity level of LOD1 were generated from the building footprints alone, for which building boundary detection and regularization were studied.
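The morphological filtering idea behind the ground/non-ground separation can be sketched as follows. This is only a minimal, single-window illustration using a grey-scale opening on a rasterized surface; the adaptive filter in the thesis additionally uses curvature and a progressively grown window, and the function name, window size, and threshold here are hypothetical.

```python
import numpy as np
from scipy.ndimage import grey_opening

def morphological_ground_filter(dsm, window=5, height_threshold=2.0):
    """Flag raster cells as non-ground via a morphological opening.

    A grey-scale opening with a `window` x `window` structuring element
    removes raised objects (buildings, trees) narrower than the window;
    cells whose elevation exceeds the opened surface by more than
    `height_threshold` metres are flagged as non-ground.  Sketch only --
    not the adaptive, curvature-aware filter proposed in the thesis.
    """
    opened = grey_opening(dsm, size=(window, window))
    non_ground = (dsm - opened) > height_threshold
    return non_ground

# toy DSM: flat ground at 100 m with a 2x2-cell, 5 m tall "building"
dsm = np.full((10, 10), 100.0)
dsm[4:6, 4:6] += 5.0
mask = morphological_ground_filter(dsm)
```

On this toy surface the opening flattens the small raised block back to the ground level, so exactly the four building cells are flagged as non-ground.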
Buildings with higher levels of detail can be reconstructed by extracting the individual roof planes and combining them based on their spatial adjacency. This thesis treated this as a two-step task: segmentation and reconstruction. Segmentation finds the planar roof patches, while reconstruction determines their adjacency and integrity. Roof plane segmentation was approached with a multiphase, multichannel level set method. The segmentation output included the segmented points of the individual roof planes as well as a labeled image. The roof vertices were then determined by intersecting adjacent roof segments and connected based on their topological relations inferred from the labeled image. Finally, we evaluated the proposed approach with three different LiDAR data sets.
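The core of the vertex-determination step, intersecting adjacent roof planes, can be sketched as a small linear solve. Each plane is written as n · x = d; a vertex shared by three mutually adjacent planes is the solution of the 3x3 system stacking their normals. The plane parameters below are hypothetical, not estimated from LiDAR segments as in the thesis.

```python
import numpy as np

def roof_vertex(planes):
    """Vertex shared by three adjacent roof planes.

    Each plane is a pair (normal, d) with points x satisfying n . x = d.
    Solving the stacked 3x3 system yields their common intersection point,
    provided the normals are linearly independent.
    """
    normals = np.array([n for n, _ in planes], dtype=float)
    ds = np.array([d for _, d in planes], dtype=float)
    return np.linalg.solve(normals, ds)

# a gable ridge endpoint: two sloped roof faces meeting a vertical end wall
left  = ((0.0, -1.0, 1.0), 5.0)   # z - y = 5
right = ((0.0,  1.0, 1.0), 5.0)   # z + y = 5
wall  = ((1.0,  0.0, 0.0), 0.0)   # x = 0 (end wall)
vertex = roof_vertex([left, right, wall])   # ridge endpoint (0, 0, 5)
```

In the full reconstruction, which planes are intersected with which is dictated by the adjacency inferred from the labeled segmentation image, so only topologically meaningful vertices are generated.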

Degree

Ph.D.

Advisors

Jie Shan, Purdue University.

Subject Area

Geodesy|Engineering, Civil|Remote Sensing
