A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics

Guilherme Nelson DeSouza, Purdue University

Abstract

We present a vision-based architecture for smart robotics that is composed of modules, each with a specialized level of competence. We currently demonstrate the utility of the architecture for stereoscopic visual servoing, but the architecture could easily be extended to tasks such as "assembly-on-the-fly," in which a robot performs assembly operations on a moving target. Our architecture is subsumptive and hierarchical, in the sense that each module adds to the competence of the modules below it, and in the sense that the modules present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. At the finest level, the processing of stereo information enables a robot to know more precisely the position and orientation of an object in the coordinate frame of the robot. The processing at each level is completely independent and can be performed at its own rate. A control arbitrator ranks the results of each level according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages in overall system performance, which is not limited by the "slowest link," and in fault tolerance, since a fault in one module does not affect the other modules. A highlight of this architecture is that the same overall architectural framework can be devised for both mobile robots and robot arms; while the framework is the same, the specific demonstrations of robotic competence naturally differ.
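To make the arbitration idea concrete, the following is a minimal, hypothetical sketch (not taken from the dissertation) of a control arbitrator in Python: each vision module posts its proposed command and a self-derived confidence index at its own rate, and the arbitrator simply forwards the highest-confidence result at every control cycle. The module names, data fields, and class names are illustrative assumptions.

    # Hypothetical sketch of a confidence-ranked control arbitrator.
    # Modules run asynchronously; a slow or faulty module never blocks the others.
    import threading
    import time
    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class ModuleOutput:
        name: str            # e.g. "coarse_localizer" or "stereo_servo" (assumed names)
        command: tuple       # motion command proposed by the module
        confidence: float    # confidence index derived solely from the module's sensing
        timestamp: float = field(default_factory=time.time)

    class ControlArbitrator:
        """Keeps the latest output of each independent module and, at every
        control cycle, selects the command with the highest confidence index."""

        def __init__(self) -> None:
            self._latest: Dict[str, ModuleOutput] = {}
            self._lock = threading.Lock()

        def report(self, output: ModuleOutput) -> None:
            # Each module posts results at its own rate.
            with self._lock:
                self._latest[output.name] = output

        def select(self) -> Optional[ModuleOutput]:
            # Rank the most recent results purely by confidence; a module that
            # stops reporting simply stops contributing.
            with self._lock:
                candidates = list(self._latest.values())
            return max(candidates, key=lambda o: o.confidence, default=None)

In this reading, overall performance is decoupled from the "slowest link" because the arbitrator never waits on any single module, and fault tolerance follows because a failed module merely ceases to post candidates.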

Degree

Ph.D.

Advisors

Avinash C. Kak, Purdue University.

Subject Area

Electrical engineering | Computer science
