Using Tangible Interaction and Virtual Reality to Support Spatial Perspective Taking Ability
According to several large-scale and longitudinal studies, spatial ability, one of the primary mental abilities, is a significant predictor of learning and career success in STEM (Science, Technology, Engineering, and Mathematics). Frameworks in HCI (Human-Computer Interaction) and TEI (Tangible and Embodied Interaction) also indicate that the spatial aspects of interaction are a common design theme for interfaces built with emerging technologies. However, few interactive systems using TEI are currently designed around a target spatial ability, and TEI's direct effects on spatial ability are not well investigated. Meanwhile, a growing body of research in the cognitive sciences, such as work on embodied cognition and Common Coding Theory, shows that physical movement can enhance cognition in tasks that involve spatial thinking. Virtual reality (VR) also affords better 3D perception of digital environments and provides design opportunities to engage users with spatial tasks that could not otherwise be imagined or achieved in the real world. This research describes how we designed and built TASC (Tangibles for Augmenting Spatial Cognition), a system that combines body-movement tracking and tangible objects with VR. We recap our design process and rationale, along with how the finalized system was designed to enhance embodiment as a means to activate, support, engage, and hopefully augment spatial perspective taking ability. We conducted a user study with both qualitative and quantitative evaluation methods: the qualitative evaluation aimed to understand how participants used the system, while the quantitative evaluation was a multi-condition experiment with pre-tests and post-tests that investigated if and how the system could improve spatial perspective taking ability. We built the digital pre/post-tests based on the PTSOT (Perspective Taking/Spatial Orientation Test) (Hegarty, Kozhevnikov, & Waller, 2008).
The study involved 52 participants in total: 6 (3M/3F) in the pilot study and 46 in the main study (3 conditions, around 15 per condition, each condition overall gender-balanced). The qualitative analysis focused on the VR-TEI condition (the "main system"). Using thematic analysis of the video clips and written notes (both taken during the participants' interaction) and the audio clips (recorded during the post-interaction interviews), we synthesized the qualitative results into 4 themes: (1) Spatial strategies: akin but unique; (2) The use of gestures & verbalization; (3) Positive experience with the system; (4) The potential of the system. The quantitative statistical analysis, using ANOVA and t-tests for the 3-condition experiment, showed that every condition yielded an improvement in perspective taking from taking the test twice; however, only the VR-TEI condition led to a statistically significant improvement. We conclude the research with a discussion of future possibilities around these themes: (1) The role of embodiment; (2) Further exploration of intermediate conditions; (3) A deeper look at sample size and validity; (4) Designing & evaluating TEIs for other spatial abilities; (5) Integration with STEM curricula. The main contribution of this dissertation is that it reports how a VR-TEI system can be designed, built, and evaluated for a target spatial ability. We hope this research also contributes to bridging knowledge gaps between interaction design, cognitive science, and STEM learning.
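The pre/post, three-condition analysis described above can be sketched in a few lines of Python. The data, the condition names other than VR-TEI, and the score magnitudes below are all hypothetical, synthetic stand-ins; only the overall shape of the analysis (a paired t-test within each condition, then a one-way ANOVA on the gain scores across conditions) follows the abstract.

```python
import numpy as np
from scipy import stats

# Synthetic PTSOT-style scores (angular error in degrees; lower = better).
# Condition names "VR-only" and "Baseline" are assumptions for illustration.
rng = np.random.default_rng(0)
conditions = {
    "VR-TEI":   (rng.normal(35, 8, 15), rng.normal(24, 8, 15)),  # (pre, post)
    "VR-only":  (rng.normal(35, 8, 15), rng.normal(31, 8, 15)),
    "Baseline": (rng.normal(35, 8, 16), rng.normal(33, 8, 16)),
}

# Within each condition: paired t-test comparing pre- and post-test scores.
for name, (pre, post) in conditions.items():
    t, p = stats.ttest_rel(pre, post)
    print(f"{name}: mean gain = {np.mean(pre - post):.1f} deg, p = {p:.3f}")

# Across conditions: one-way ANOVA on the gain scores (pre - post).
gains = [pre - post for pre, post in conditions.values()]
f, p_anova = stats.f_oneway(*gains)
print(f"ANOVA on gains: F = {f:.2f}, p = {p_anova:.3f}")
```

With real data, the per-condition paired tests would show whether each group improved from taking the test twice, and the ANOVA on gains would show whether the conditions differed in how much they improved.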
Mohler, Purdue University.