Leveraging Appearance Priors in Non-Rigid Registration,
with Application to Manipulation of Deformable Objects
Sandy Huang¹, Jia Pan², George Mulcaire¹, Pieter Abbeel¹
¹UC Berkeley, CA, USA  ²The University of Hong Kong, Hong Kong.
{shhuang, gmulcaire, pabbeel}@cs.berkeley.edu, jpan@cs.hku.hk

  Abstract — Manipulation of deformable objects is a widely applicable but challenging task in robotics. One promising non-parametric approach to this problem is trajectory transfer, in which a non-rigid registration is computed between the starting scene of the demonstration and the scene at test time. This registration is extrapolated to a function from ℝ³ to ℝ³, which is then used to warp the demonstrated robot trajectory and generate a proposed trajectory to execute in the test scene. In prior work, only depth information from the scenes has been used to compute this warp function. This ignores appearance information, yet there are situations in which both shape and appearance cues are necessary for finding high-quality non-rigid warp functions.
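To make the warping step concrete, here is a minimal sketch of trajectory transfer with a thin plate spline warp, assuming point correspondences between the demonstration and test scenes are already given (the actual registration estimates them itself, TPS-RPM-style, with its own regularization; the point clouds and trajectory below are synthetic placeholders):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-ins (n x 3): points from the demonstration scene and
# their (assumed known) matches in the test scene.
rng = np.random.default_rng(0)
demo_points = rng.uniform(-0.5, 0.5, size=(40, 3))
test_points = demo_points + 0.05 * rng.standard_normal((40, 3))

# Fit a thin plate spline warp f: R^3 -> R^3 mapping the demonstration
# scene onto the test scene; smoothing > 0 regularizes the fit.
warp = RBFInterpolator(demo_points, test_points,
                       kernel='thin_plate_spline', smoothing=1e-4)

# Warp the demonstrated end-effector positions to propose a trajectory
# for the test scene.
demo_trajectory = rng.uniform(-0.5, 0.5, size=(100, 3))
proposed_trajectory = warp(demo_trajectory)
print(proposed_trajectory.shape)  # (100, 3)
```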

In this paper, we describe an approach that uses deep learning to learn relevant appearance information about deformable objects, and uses this additional information to improve the quality of non-rigid registration between demonstration and test scenes. Our method better registers areas of interest on deformable objects that are crucial for manipulation, such as rope crossings and towel corners and edges. We experimentally validate our approach both in simulation and in the real world, and show that using appearance information leads to a significant improvement both in selecting the best matching demonstration scene for a given test scene and in finding a high-quality non-rigid registration between those two scenes.
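One plausible way to fold appearance into the registration, sketched below under the assumption that each point carries a learned appearance descriptor (the `*_feat` arrays are hypothetical network outputs, not the paper's exact features), is to add a descriptor-distance term to the pairwise matching cost before computing soft correspondences:

```python
import numpy as np

def matching_cost(demo_xyz, demo_feat, test_xyz, test_feat, beta=1.0):
    """Pairwise cost combining 3-D distance with distance between learned
    appearance descriptors; beta trades off shape against appearance."""
    d_shape = np.linalg.norm(demo_xyz[:, None] - test_xyz[None, :], axis=-1)
    d_app = np.linalg.norm(demo_feat[:, None] - test_feat[None, :], axis=-1)
    return d_shape + beta * d_app

def soft_correspondences(cost, temperature=0.1):
    """Row-normalized Gaussian weights, as in a TPS-RPM-style alternation
    between correspondence estimation and warp fitting."""
    weights = np.exp(-cost / temperature)
    return weights / weights.sum(axis=1, keepdims=True)

# Synthetic example: 30 demo points and 35 test points, each carrying a
# hypothetical 16-dimensional learned appearance descriptor.
rng = np.random.default_rng(0)
demo_xyz, test_xyz = rng.uniform(size=(30, 3)), rng.uniform(size=(35, 3))
demo_feat = rng.standard_normal((30, 16))
test_feat = rng.standard_normal((35, 16))

corr = soft_correspondences(
    matching_cost(demo_xyz, demo_feat, test_xyz, test_feat))
print(corr.shape)  # (30, 35): each row is a distribution over test points
```

With beta = 0 this reduces to the shape-only matching of prior work; raising beta lets appearance disambiguate geometrically similar regions such as the two sides of a rope crossing.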

Paper: [PDF]
Supplementary Videos (mp4): [Towel Folding] [Towel Folding Failure Cases]
