Date of Award

Fall 2013

Degree Type


Degree Name

Doctor of Philosophy (PhD)

Department

Electrical and Computer Engineering

First Advisor

Avinash C. Kak

Second Advisor

Johnny Park

Committee Chair

Avinash C. Kak

Committee Co-Chair

Johnny Park

Committee Member 1

Akio Kosaka

Committee Member 2

Charles A. Bouman


There is much research interest currently in having mobile robots build accurate and visually dense models of interior spaces as they traverse them. One of the interesting problems to come out of this research is that of visual place recognition and self-localization, which forms the focus of the present dissertation. We show how dense and accurate 3D models of interior space can be constructed using a hierarchical sensor-fusion architecture. Our system fuses images from a single photometric camera with range data from a laser scanning sensor. The range data used is rudimentary: the range measurements are line scans taken just a few inches above the floor to estimate the positions and orientations of the hallway walls.
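To illustrate the kind of wall estimation such line scans permit, the sketch below fits a 2D line to the scan points by total least squares (via SVD). This is a generic illustration of line fitting to range data, not the dissertation's actual fusion architecture; the function name and interface are hypothetical.

```python
import numpy as np

def fit_wall_line(points):
    """Fit a 2D line to laser line-scan points by total least squares.

    Illustrative sketch only: returns the centroid of the points, a unit
    vector along the estimated wall, and the wall's heading angle.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The first right singular vector of the centered points is the
    # direction of maximum spread, i.e. the direction along the wall.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    angle = np.arctan2(direction[1], direction[0])
    return centroid, direction, angle
```

A scan of a straight hallway wall yields nearly collinear points, so the fitted direction directly gives the wall's orientation relative to the robot.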

This dissertation also proposes two hypothesize-and-verify matching frameworks for place recognition and robot self-localization using the information contained in the constructed models: (1) a fast framework based on a new type of image feature that we call 3D-JUDOCA. We derive these features from stereo imagery and show that they possess superior viewpoint invariance compared to other similar features. We organize the features in a data structure, which we call the Feature Cylinder, that permits low-order polynomial-time verification of localization hypotheses. (2) A signature-based hypothesize-and-verify framework in which the signatures are derived from the 3D-JUDOCA features. We present a criterion for selecting the best signatures for hypothesis generation and verification. This second approach allows the robot to carry out place recognition and self-localization in constant time. We provide extensive experimental evidence demonstrating the usefulness of both frameworks.
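The general hypothesize-and-verify pattern underlying both frameworks can be sketched with a toy example: each tentative feature correspondence hypothesizes a robot pose (here simplified to a 2D translation), and the hypothesis supported by the most consistent correspondences is accepted. This is a minimal generic sketch, not the dissertation's 3D-JUDOCA or Feature Cylinder method; all names and the pose model are assumptions for illustration.

```python
import numpy as np

def localize(query, model, tol=0.1, min_support=3):
    """Toy hypothesize-and-verify localization (illustrative only).

    Features are (descriptor, xy) pairs. Each descriptor match between a
    query feature and a model feature hypothesizes a 2D translation of the
    robot; verification counts how many other matches agree with it.
    """
    best_pose, best_support = None, 0
    for dq, pq in query:
        for dm, pm in model:
            if dq != dm:
                continue
            # Hypothesis: the translation mapping this query point onto the model.
            t = np.subtract(pm, pq)
            # Verification: count matched pairs consistent with translation t.
            support = sum(
                1
                for dq2, pq2 in query
                for dm2, pm2 in model
                if dq2 == dm2 and np.allclose(np.add(pq2, t), pm2, atol=tol)
            )
            if support > best_support:
                best_pose, best_support = t, support
    return (best_pose, best_support) if best_support >= min_support else (None, 0)
```

In this simplified setting verification is quadratic in the number of features; the dissertation's Feature Cylinder and signature-based frameworks exist precisely to reduce that verification cost, down to constant time in the second case.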

Included in

Robotics Commons