Pose Imitation Constraints for Kinematic Structures

Glebys Gonzalez, Purdue University

Abstract

Robot usage has increased across many areas of society and human work, including medicine, transportation, education, space exploration, and the service industry. This phenomenon has generated a surge of enthusiasm for developing more intelligent robots, better equipped to perform tasks at least as well as humans do. Such jobs require human involvement as operators or teammates, since robots struggle with automation in everyday settings. Soon, the role of humans will extend far beyond that of users or stakeholders to include those responsible for training such robots. A popular form of teaching is to allow robots to mimic human behavior. This method is intuitive and natural and does not require specialized robotics knowledge. While there are other methods for robots to complete tasks effectively, collaborative tasks require mutual understanding and coordination, which is best achieved by mimicking human motion. This mimicking problem has been tackled through skill imitation, which reproduces human-like motion during a task shown by a teacher. Skill imitation builds on faithfully replicating the human pose and requires two steps. First, an expert's demonstration is captured and pre-processed, and motion features are extracted. Then, a learning algorithm optimizes for the task. Learning algorithms are often paired with traditional control systems to transfer the demonstration to the robot successfully. However, this methodology currently faces a generalization issue: most solutions are formulated for specific robots or tasks. This lack of generalization is a problem, especially as robots are replaced and improved in collaborative environments far more frequently than in traditional manufacturing. As with humans, we expect robots to have more than one skill, and we expect the same skill to be performed by more than one type of robot.
Thus, we address this issue by proposing a human-motion imitation framework that can be computed efficiently and generalized to different kinematic structures (e.g., different robots). To develop this framework, we first train an algorithm to augment collaborative demonstrations, facilitating generalization to unseen scenarios. Then, we create a pose-imitation model that converts human motion into a flexible constraint space. This space can be mapped directly to different kinematic structures by specifying a correspondence between the main human joints (i.e., shoulder, elbow, wrist) and robot joints. The model permits an arbitrary number of robotic links between two assigned human joints, allowing different robots to mimic both the demonstrated task and the human pose. Finally, we incorporate the constraint model into a reward that guides a reinforcement learning algorithm during optimization. We tested the proposed methodology in different collaborative scenarios, assessing task success rate, pose-imitation accuracy, the occlusion the robot produces in the environment, the number of collisions, and the learning efficiency of the algorithm. The results show that the proposed framework enables effective collaboration across different robots and tasks.

Degree

Ph.D.

Advisors

Wachs, Purdue University.

Subject Area

Robotics | Aerospace engineering
