Professor, Deputy Director, Advanced Robotics Center
Learning Dynamic Robot Object Handover Skills from Human Feedback
David Hsu, Andras Kupcsik
In the future, robots interacting with humans in everyday tasks will need seamless, human-like skills to be trustworthy and efficient helpers. Handing over objects is one of the most essential physical interaction channels between humans and robots, yet it is particularly difficult to program into robots today. Robots do not exhibit the complex motion planning, perception, and actuation capabilities of humans. Moreover, even humans find it difficult to define what a good handover is, so it is hard to specify a controller that handles handovers across different situations. Humans, however, are experts in handovers and can assess a robot's performance while interacting with it. In addition, we have high-quality robot control architectures that can potentially encode human-like handover skills. In this talk, we discuss how robots can learn static and dynamic handover skills from high-level human feedback while physically interacting with the human. We use a motion capture system to record the trajectories of the human body and hand; this information is fed directly to the robot, which adapts its movement accordingly. The learning process not only yields high-quality robot handover skills, but also an estimate of the latent reward function of the handover task that humans aim to maximize. This, in turn, provides a quantitative measure of which controllers humans prefer in different situations.
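As a toy illustration of the idea of recovering a latent reward function from scalar human feedback (not the specific method presented in the talk), one can pose the problem as Bayesian linear regression over hand-designed handover features; all feature names, noise levels, and dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each handover trial is summarized by a feature
# vector (e.g. approach speed, release timing, end-effector distance).
# The human's latent reward is assumed to be linear in these features.
true_w = np.array([1.0, -2.0, 0.5])          # unknown to the learner

def human_rating(features, noise=0.1):
    """Noisy scalar feedback the human gives after one trial."""
    return features @ true_w + noise * rng.standard_normal()

# Collect ratings for a batch of trials.
X = rng.standard_normal((50, 3))              # trial feature vectors
y = np.array([human_rating(x) for x in X])    # human feedback scores

# Bayesian linear regression: posterior over reward weights w,
# with prior w ~ N(0, I) and observation noise variance sigma^2.
sigma2 = 0.1 ** 2
S_inv = np.eye(3) + X.T @ X / sigma2          # posterior precision
w_mean = np.linalg.solve(S_inv, X.T @ y / sigma2)

print("estimated reward weights:", np.round(w_mean, 2))
```

With enough rated trials, the posterior mean concentrates near the latent weights, giving exactly the kind of quantitative preference measure the abstract describes; a controller can then be selected to maximize the predicted reward.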