Learning from Multiple Demonstrations using Trajectory-Aware Non-Rigid Registration with Applications to Deformable Object Manipulation
Alex X. Lee, Abhishek Gupta, Henry Lu, Sergey Levine, Pieter Abbeel

Abstract — Learning from demonstration by means of non-rigid point cloud registration is an effective tool for learning to manipulate a wide range of deformable objects. However, most methods that use non-rigid registration to transfer demonstrated trajectories assume that the test and demonstration scenes are structurally very similar, with any variation explained by a non-linear transformation. In real-world tasks with clutter and distractor objects, this assumption is unrealistic. In this work, we show that a trajectory-aware non-rigid registration method, which uses multiple demonstrations to focus the registration on points relevant to the task, can handle significantly greater visual variation than prior methods that are not trajectory-aware. We demonstrate that this approach achieves superior generalization on several challenging tasks, including towel folding and grasping objects in a box containing irrelevant distractors.

Paper (with Appendix): [PDF]
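
To make the idea in the abstract concrete, below is a minimal numpy sketch (not the paper's algorithm) of trajectory transfer via non-rigid registration: a regularized 2D thin-plate-spline warp is fit between a demonstration scene and a test scene with correspondences assumed given, scene points far from the demonstrated trajectory are crudely down-weighted as a stand-in for trajectory awareness, and the fitted warp is applied to the demonstrated end-effector path. All function names, weights, and parameters here are hypothetical illustrations.

```python
"""Illustrative sketch only: NOT the authors' method. Fits a weighted,
regularized 2D thin-plate-spline (TPS) warp and uses it to transfer a
demonstrated trajectory to a new scene. Correspondences are assumed known."""
import numpy as np


def tps_kernel(dists):
    # TPS radial basis U(r) = r^2 log(r), with U(0) defined as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        k = dists ** 2 * np.log(dists)
    k[~np.isfinite(k)] = 0.0
    return k


def fit_weighted_tps(src, dst, weights, reg=1e-3):
    """Fit a smoothing TPS f: R^2 -> R^2 mapping src -> dst.

    Points with small weights are fit loosely (larger diagonal regularizer),
    so the warp is dominated by the highly weighted, task-relevant points.
    """
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(d) + reg * np.diag(1.0 / np.maximum(weights, 1e-6))
    P = np.hstack([np.ones((n, 1)), src])          # affine part of the warp
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coef = np.linalg.solve(A, b)
    return coef[:n], coef[n:], src                  # kernel weights, affine coeffs, centers


def apply_tps(points, w, a, centers):
    # Evaluate the fitted warp at arbitrary points (e.g. a trajectory).
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return tps_kernel(d) @ w + np.hstack([np.ones((len(points), 1)), points]) @ a


# Toy usage with synthetic data.
rng = np.random.default_rng(0)
demo_scene = rng.uniform(0, 1, size=(40, 2))        # demonstration point cloud
test_scene = demo_scene * 1.2 + 0.1                 # deformed "test" point cloud
demo_traj = np.stack([np.linspace(0.2, 0.8, 20),
                      np.full(20, 0.5)], axis=1)    # demonstrated end-effector path

# Crude proximity weighting: scene points near the demo trajectory matter more.
dist_to_traj = np.min(np.linalg.norm(
    demo_scene[:, None, :] - demo_traj[None, :, :], axis=-1), axis=1)
weights = np.exp(-dist_to_traj / 0.1)

w, a, centers = fit_weighted_tps(demo_scene, test_scene, weights)
test_traj = apply_tps(demo_traj, w, a, centers)      # warped trajectory for the test scene
print(test_traj[:3])
```

The key difference from this sketch is that the paper's approach uses multiple demonstrations to learn which points are task-relevant, rather than a hand-tuned distance-based weighting; the sketch only illustrates the mechanics of warping a demonstrated trajectory with a weighted non-rigid registration.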