Effective and interactive interpretation of gestures by individuals with mobility impairments
There has been increasing interest in the adoption of commodity cameras and gaming technology to support and complement rehabilitation therapies for users with upper extremity mobility impairments (UEMIs). Innovative applications leveraging Kinect® and Wii® technologies have been integrated into rehabilitation programs as part of a new paradigm often referred to as exergaming – exercising while playing. These platforms use physical expressions, such as gestures, as the main form of interaction, and offer an alternative to traditional rehabilitation sessions intended to improve rehabilitative outcomes and prevent rehospitalization. The problem is that such platforms rely on gesture interfaces designed primarily for individuals without motor limitations, thus excluding a critical mass of users with some degree of UEMI who could benefit from this technology. The assistive technology (AT) community has addressed this challenge by customizing hand-gesture interfaces for specific quadriplegic users, which is tedious, time-consuming, and costly. There is no systematic method to convert existing gesture interfaces (designed for individuals without disabilities) into usable interfaces for persons with UEMIs. The objective of this research is to overcome this hurdle by proposing a framework that establishes guidelines, metrics, and procedures for the design of gesture sets (lexicons) suitable for users with UEMIs, grounded in scientifically sound principles. The key idea is to project the existing patterns of gestural behavior of persons without disabilities onto those exhibited by users with quadriplegia due to common cervical spinal cord injuries (SCIs). Two approaches (a user-centered and an analytic approach) have been developed and validated to provide users with quadriplegia with both individualized and universal solutions.
The feasibility of the proposed methodology was validated through simulation and user-based experiments. These studies found that subjects with UEMIs preferred gestures generated by our approach over the standard gestures (thirty-six out of forty-two constrained gestures). Gesture-variability analysis was conducted to further validate the gesture sets, and robotic execution was used to mimic gesture trajectories. From these executions, a physical metric (referred to as work) was empirically obtained to compare the physical effort of each gesture. An integration method was presented to determine the accessible gesture set from both the stability analysis and the empirical robot execution. For all gesture types, the accessible gestures were found to lie within 34% of optimal with respect to stability and work. Lastly, the gesture set determined by the proposed methodology was evaluated by target users in experiments involving a spatial navigation problem.
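The work metric and the 34% optimality bound described above can be illustrated with a minimal sketch. Note that this is not the dissertation's actual implementation: the point-mass model, mass value, sampling interval, equal weighting of work and stability, and all function and field names here are illustrative assumptions.

```python
import numpy as np

def trajectory_work(points, mass=0.5, dt=0.02):
    """Approximate the mechanical work expended moving a point mass
    (a simplified stand-in for the hand) along a gesture trajectory.
    `points` is an (N, 3) array of positions sampled every `dt` seconds.
    Simplifying assumption: work = sum of |F . dx| per segment, with
    F = m*a from finite differences (no gravity or friction terms)."""
    points = np.asarray(points, dtype=float)
    vel = np.diff(points, axis=0) / dt            # per-segment velocity
    acc = np.diff(vel, axis=0) / dt               # per-segment acceleration
    force = mass * acc                            # F = m * a
    disp = np.diff(points, axis=0)[1:]            # displacements aligned with acc
    return float(np.sum(np.abs(np.einsum('ij,ij->i', force, disp))))

def accessible_set(gestures, tolerance=0.34):
    """Keep gestures whose combined normalized cost (work + instability)
    lies within `tolerance` of the best-scoring gesture, echoing the
    34%-of-optimal bound reported in the study."""
    work = np.array([g['work'] for g in gestures])
    instab = np.array([g['instability'] for g in gestures])
    norm = lambda x: (x - x.min()) / (np.ptp(x) or 1.0)  # scale to [0, 1]
    cost = 0.5 * norm(work) + 0.5 * norm(instab)         # assumed equal weights
    best = cost.min()
    return [g['name'] for g, c in zip(gestures, cost) if c <= best + tolerance]
```

For example, a straight-line hand path can be scored with `trajectory_work`, and a candidate lexicon filtered with `accessible_set` to retain only the low-effort, stable gestures.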
Duerstock, Purdue University.
Computer engineering|Electrical engineering|Industrial engineering