Exploring Novel Human Smart-Thing Interaction Through Augmented Reality Framework Design

Yuanzhi Cao, Purdue University

Abstract

We have never felt so connected with our surrounding social and physical environment, thanks to increasingly ubiquitous mobile computing devices and rapidly developing high-speed networks. These technologies transform everyday objects into smart-things and give us access to a large amount of digital information and intelligence closely tied to physical reality. To bridge the gap between digital interfaces and physical smart-things, Augmented Reality (AR) has emerged as a promising medium that allows users to visually link digital content to its physical target with spatial and contextual awareness. Thanks to vast improvements in personal computing devices, AR technologies are reaching popular real-world scenarios, empowered by commercially available software development kits (SDKs) and hardware platforms, which makes it easier for human users to interact with surrounding smart-things.

Within the scope of this thesis, we are interested in smart-things that can physically interact with the real world, such as machines, robots, and IoT devices. Our overarching goal is to create a better experience for users to interact with these smart-things, one that is visual, spatial, contextual, and embodied, and we pursue this goal through novel augmented reality system workflow and framework design. This thesis is based on our four published conference papers [1-4], which are described in Chapters 3-6, respectively. On a broader level, the work in this thesis explores spatially situated visual programming techniques for human smart-thing interaction. In particular, we combine contextual awareness in the AR environment with the interactivity of physical smart-things. We explore (1) spatial and visual input techniques and modalities that let users intuitively interact with physical smart-things, through interaction and interface design, and (2) the ecology of human smart-thing interaction, through system workflow design grounded in the contextual awareness afforded by the AR interface.

In this thesis, we mainly study the following spatially aware AR interactions through our completed work: (i) Ani-Bot demonstrates Mixed-Reality (MR) interaction with tangible modular robotics through a Head-Mounted Device (HMD) and mid-air gestures; (ii) V.Ra describes spatially situated visual programming for Robot-IoT task planning; (iii) GhostAR presents a time-space editor for Human-Robot Collaboration (HRC) task authoring; and (iv) the AvaTutAR study presents an exploratory study that provides valuable design guidance for future AR avatar-based tutoring systems. We further develop the enabling techniques, including a modular robotics kit with assembly awareness and corresponding MR features for the major phases of its lifecycle; a lightweight, coherent ecosystem design that enables spatial and visual programming as well as interactive and navigatory IoT task execution with a single AR-SLAM mobile device; and a novel HRC task-authoring workflow that applies robot programming by human demonstration within the AR scene, using avatar references and motion mapping with dynamic time warping (DTW). Overall, we design system workflows and develop applications that increase the flexibility of AR content manipulation, creation, and authoring, and that support intuitive, visual, and pervasive interaction with the smart environment.
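As one concrete illustration of the techniques above, the GhostAR workflow maps a user's demonstrated motion onto a recorded reference using dynamic time warping (DTW). The following is a minimal sketch of DTW-based sequence alignment in Python, assuming simplified 1-D motion signals; the function name dtw_align and all variable names are illustrative and not taken from the thesis implementation.

# Minimal sketch of motion mapping via dynamic time warping (DTW).
# Simplified to 1-D signals; names here are illustrative, not from the thesis.
import numpy as np

def dtw_align(live_motion, demo_motion):
    """Return the DTW cost matrix and the optimal warping path
    aligning two motion sequences (each a 1-D array of samples)."""
    n, m = len(live_motion), len(demo_motion)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(live_motion[i - 1] - demo_motion[j - 1])
            # Each cell extends the cheapest of the three admissible moves.
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match
                                 cost[i - 1, j],      # insertion
                                 cost[i, j - 1])      # deletion
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[1:, 1:], path[::-1]

# Example: align a slower, offset replay against the original demonstration.
demo = np.sin(np.linspace(0, 2 * np.pi, 20))
live = np.sin(np.linspace(0, 2 * np.pi, 30)) + 0.05
_, warp = dtw_align(live, demo)

In the actual system, each sample would be a higher-dimensional body or end-effector pose and the per-step distance d would be a pose metric rather than an absolute difference, but the alignment logic is the same.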

Degree

Ph.D.

Advisors

Karthik Ramani, Purdue University.

Subject Area

Robotics | Design | Education

