Generating Locative Gestures from Speech for an Animated Pedagogical Agent
The presence of Animated Pedagogical Agents (APAs) in education is growing steadily: they are used in online courses as well as in research studies. There is therefore a need to study ways in which lectures featuring APAs can be created quickly. Gestures are an inseparable part of an APA’s communication with users, yet the gestures generated automatically in prior research efforts did not point to specific locations in the virtual world. The research presented here aimed to address this gap. More specifically, the work reported in this thesis aimed to develop and assess a system that automatically generates locative gestures for an Animated Pedagogical Agent. The system takes audio and text as inputs and generates animated gestures based on a set of rules; the automatically generated gestures point to the exact locations of objects in the virtual world surrounding the APA. We conducted a study with 100 subjects in which we compared lecture videos containing system-generated gestures to the same lecture videos containing manually scripted gestures. The results show that the manual and automated lectures were equivalent in terms of the timing and number of gestures.
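The abstract describes a rule-based pipeline that aligns a lecture's transcript with object positions in the virtual scene. A minimal sketch of that idea, in which every name, rule, and coordinate is an invented illustration rather than the thesis's actual implementation, might look like this: whenever a time-stamped word in the transcript names a known scene object, a pointing gesture toward that object's location is scheduled.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the audio

@dataclass
class Gesture:
    time: float              # when the agent should point
    target: tuple            # (x, y, z) world position to point at

# Hypothetical scene layout: object name -> world coordinates.
SCENE = {
    "triangle": (1.2, 0.8, 0.0),
    "equation": (-0.5, 1.1, 0.0),
}

def generate_locative_gestures(transcript):
    """Toy rule: whenever a scene object is named, point at its location."""
    gestures = []
    for word in transcript:
        key = word.text.lower().strip(".,")
        if key in SCENE:
            gestures.append(Gesture(time=word.start, target=SCENE[key]))
    return gestures

transcript = [Word("Look", 0.0), Word("at", 0.3), Word("the", 0.4),
              Word("triangle", 0.6), Word("here", 1.1)]
print(generate_locative_gestures(transcript))
```

In a full system the `SCENE` table would come from the virtual world's scene graph and the gesture events would drive the APA's arm animation; this sketch only shows the transcript-to-target alignment step.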
Popescu, Purdue University.
Subject areas: Educational technology, Artificial intelligence, Mathematics education