Talking to TAD: An exercise in augmenting an everyday object for gestural interaction

Matthew P DeVito, Purdue University

Abstract

Thanks to continuing improvements in commercial technology, everyday objects are coming to life with digital enhancements. Contributing to this trend is the push for ubiquitous computing, in which computer technology becomes so well integrated into everyday objects that the computers themselves become invisible. Additionally, advances in gesture-recognition technology are granting gesture sensitivity to these computer-enhanced objects. Such objects can play music at the clap of a hand, project information onto any surface, and even allow complete computer control without any physical input device. A particularly interesting application of this technology is the concept of "augmented workspaces," in which large touch- and gesture-sensitive displays allow intuitive control of digital content. With the ubiquitous computing trend and the increasing economic feasibility of powerful sensors and actuators, digitally augmented physical objects and appliances will soon find their way into these spaces, yet only gestural interaction with two-dimensional digital content has been explored. The present work attempts to fill part of the gap in knowledge regarding three-dimensional gesture control of three-dimensional physical objects rather than two-dimensional content alone. To explore interactions with physical objects in augmented spaces, a common off-the-shelf desk lamp was augmented with actuators, expanded lighting capabilities, and spatial gesture recognition, and a user study was conducted with the resulting Tabletop Assistive Droid (TAD). Untrained users were allowed to issue commands from anywhere in the augmented space; it was unknown whether they would leverage the space's range and command from a comfortable position or reach in to interact with the objects. Participants generated gestures for controlling the augmented lamp's functions, and a final gesture set comprising the most agreed-upon gestures was compiled. Analysis of the gestures revealed that users instinctively reached into the three-dimensional space to interact directly with the objects.

Degree

M.S.M.E.

Advisors

Ramani, Purdue University.

Subject Area

Behavioral psychology; Robotics
