Vision-guided mobile robot navigation using neural networks and topological models of the environment

Min Meng, Purdue University

Abstract

Large strides have been made in the model-based approach to vision-guided mobile robot navigation in indoor environments. Although the model-based method does indeed result in very robust reasoning and control architectures, this approach requires precise geometrical modeling of those elements of the environment that are considered visually significant, a requirement that can be difficult to fulfill in some cases. The need for geometrical modeling of the environment also makes such systems “non-human-like.” Based on our observations of human navigators, we have developed a new kind of reasoning and control architecture for vision-guided navigation that makes a robot more “human-like.” This system, called NEURO-NAV, discards the more traditional geometrical representation of the environment and instead uses a semantically richer topological representation, in which a hallway is modeled by the order of appearance of various landmarks and by adjacency relationships. With such a representation, the robot can respond to human-supplied commands like “Follow the corridor and turn right at the second T junction.” This capability is achieved by an ensemble of neural networks whose activation and deactivation are controlled by a supervisory controller. The individual neural networks in the ensemble are trained to interpret visual information and perform primitive navigational tasks such as hallway following and landmark detection.
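The representation and control scheme described above can be illustrated with a minimal Python sketch. This is not NEURO-NAV's actual implementation; every name here (Landmark, Hallway, nth_landmark, Supervisor, and the behavior names) is a hypothetical stand-in, and plain callables take the place of the trained neural networks. It shows only the two ideas the abstract names: a hallway modeled by the order of appearance of landmarks plus adjacency relationships, and a supervisory controller that activates and deactivates one primitive behavior at a time.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Landmark:
        name: str   # e.g. "door-101"
        kind: str   # e.g. "door", "T-junction"

    @dataclass
    class Hallway:
        # Landmarks stored in their order of appearance along the hallway.
        landmarks: List[Landmark] = field(default_factory=list)
        # Adjacency: landmark name -> hallway reachable by turning there.
        adjacent: Dict[str, str] = field(default_factory=dict)

    def nth_landmark(hallway: Hallway, kind: str, ordinal: int) -> int:
        """Index of the ordinal-th landmark of a given kind, so a command
        like 'turn right at the second T junction' maps to
        kind='T-junction', ordinal=2."""
        seen = 0
        for i, lm in enumerate(hallway.landmarks):
            if lm.kind == kind:
                seen += 1
                if seen == ordinal:
                    return i
        raise ValueError(f"hallway has fewer than {ordinal} '{kind}' landmarks")

    class Supervisor:
        """Activates and deactivates primitive behaviors one at a time.
        Plain callables stand in for the trained neural networks."""
        def __init__(self, behaviors: Dict[str, Callable]):
            self.behaviors = behaviors
            self.active = None

        def activate(self, name: str) -> None:
            self.active = self.behaviors[name]

        def step(self, image):
            # Delegate the current camera frame to the active behavior,
            # which would return e.g. a steering correction.
            return self.active(image)

    if __name__ == "__main__":
        hall = Hallway(landmarks=[
            Landmark("door-101", "door"),
            Landmark("junction-A", "T-junction"),
            Landmark("door-102", "door"),
            Landmark("junction-B", "T-junction"),
        ])
        # "turn right at the second T junction" -> index 3 (junction-B)
        print(nth_landmark(hall, "T-junction", 2))

        sup = Supervisor({"hallway_following": lambda img: 0.0,
                          "landmark_detection": lambda img: None})
        sup.activate("hallway_following")
        print(sup.step(image=None))

Note how the turn command is resolved purely by counting landmarks of the requested kind in their order of appearance; no metric coordinates are needed, which is the point of the topological representation.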

Degree

Ph.D.

Advisors

Kak, Purdue University.

Subject Area

Electrical engineering|Computer science|Artificial intelligence
