DOI
10.5703/1288284318534
Description
Learning feature-rich software often requires mastering complex keyboard-mouse combinations, yet text-based tutorials struggle to convey how these interactions should physically happen. Many learners pause, re-read, and search online, still unsure how to execute commands until a TA or peer demonstrates the gesture in person. We explore an AR-based tutorial approach that displays 3D hand and input animations directly within the learner’s view as they operate the software. By turning written instructions into embodied, real-time motion guidance, this system aims to make advanced software interactions more intuitive and accessible, reducing confusion and supporting confident skill development.
From Text to Action: Seeing Software Commands Through AR Gestures