A SYNTACTIC APPROACH TO THREE-DIMENSIONAL OBJECT REPRESENTATION AND RECOGNITION (COMPUTER VISION, PATTERN)
Abstract
The syntactic approach to pattern representation and scene analysis has received increasing attention because of its unique capability for handling pattern structures and their relationships. However, it has been applied mostly to one- and two-dimensional pattern recognition problems. The difficulties the syntactic approach faces with three-dimensional objects or scenes are that (1) model primitives are described in an observer-centered coordinate system, (2) mechanisms for relating a two-dimensional image to three-dimensional objects are lacking, and (3) the projections of some primitives in the image are invisible or partially occluded. In this thesis, we propose a syntactic approach to three-dimensional object recognition from a single view. The system consists of two major parts: analysis and recognition. The analysis part consists of primitive surface-patch selection and modeling-grammar construction. The recognition part consists of preprocessing, image segmentation, visible primitive surface identification, camera model estimation, and structural analysis. In the modeling phase, a three-dimensional object model is represented using surface patches as primitives and 3D-plex grammar rules as structural relationship descriptors. Several algorithms for extracting information useful for recognition from a given 3D-plex grammar are presented. The recognition task starts with preprocessing and image segmentation. The transformation from three-dimensional object space to two-dimensional image space is then determined by a camera model estimation procedure. The final phase of recognition is carried out by a semantic-directed top-down backtrack recognizer.
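The camera model estimation step mentioned above determines how three-dimensional object coordinates map to two-dimensional image coordinates. The thesis's actual procedure is not given in the abstract; as a hedged illustration only, the sketch below uses the standard direct linear transform (DLT) to recover a 3x4 projection matrix from 3D-2D point correspondences. The function names (`estimate_projection_matrix`, `project`) are illustrative assumptions, not names from the dissertation.

```python
"""
Illustrative sketch of a camera-model estimation step: recover a 3x4 projection
matrix P from known 3D model points and their 2D image projections using the
textbook direct linear transform (DLT). Not claimed to be the thesis's procedure.
"""
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Least-squares DLT; needs at least 6 non-degenerate 3D-2D correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear constraints on the 12 entries of P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Apply the estimated transform: 3D point -> 2D image coordinates."""
    x = P @ np.append(np.asarray(point_3d, dtype=float), 1.0)
    return x[:2] / x[2]
```

Once such a transform is available, the projections of the model's primitive surface patches can be predicted in the image plane, which is what makes the subsequent structural analysis against a single view possible.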
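To make the structural-analysis phase concrete, the toy sketch below represents an object as a set of surface-patch primitives joined along shared edges, loosely in the spirit of plex-grammar attaching points, and matches the visible surfaces found in an image to model primitives with a top-down backtracking search. All names (`Patch`, `cube_model`, `match`) and the cube example are assumptions for illustration; the dissertation's 3D-plex grammar formalism and semantic-directed recognizer are more elaborate than this.

```python
"""
Toy sketch (not the thesis's algorithm): match visible image surfaces to the
surface-patch primitives of a 3D model by top-down backtracking, enforcing that
surface types and adjacency (shared edges) are preserved.
"""
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Patch:
    """A primitive surface patch: label, surface type, and adjacent patches."""
    name: str
    kind: str                      # e.g. "planar", "cylindrical"
    neighbours: frozenset = field(default_factory=frozenset)

def cube_model():
    """Hypothetical model of a cube: six planar faces, each adjacent to four others."""
    faces = ["top", "bottom", "front", "back", "left", "right"]
    opposite = {"top": "bottom", "bottom": "top", "front": "back",
                "back": "front", "left": "right", "right": "left"}
    return {f: Patch(f, "planar",
                     frozenset(g for g in faces if g not in (f, opposite[f])))
            for f in faces}

def match(visible, model, assignment=None):
    """
    Top-down backtracking matcher (illustrative only).

    `visible` maps each image region to (surface kind, set of adjacent regions);
    the matcher searches for an injective assignment of image regions to model
    patches that preserves kind and adjacency, backtracking on failure.
    """
    assignment = assignment or {}
    if len(assignment) == len(visible):
        return assignment
    region = next(r for r in visible if r not in assignment)
    kind, region_adj = visible[region]
    for name, patch in model.items():
        if patch.kind != kind or name in assignment.values():
            continue
        # Already-assigned adjacent regions must map to adjacent model patches.
        if all(assignment[r] in patch.neighbours
               for r in region_adj if r in assignment):
            result = match(visible, model, {**assignment, region: name})
            if result is not None:
                return result            # success along this branch
    return None                          # dead end: backtrack

if __name__ == "__main__":
    # Three visible faces of a cube seen from one corner; each pair shares an edge.
    visible = {
        "A": ("planar", {"B", "C"}),
        "B": ("planar", {"A", "C"}),
        "C": ("planar", {"A", "B"}),
    }
    print(match(visible, cube_model()))  # e.g. {'A': 'top', 'B': 'front', 'C': 'left'}
```

The backtracking structure mirrors the abstract's description of a top-down recognizer that abandons a partial interpretation when a structural constraint fails and tries the next candidate primitive.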
Degree
Ph.D.
Subject Area
Computer science