Tracking Results

Tasks

The tracking algorithm was tested in the following manipulation scenarios:

  1. Human folding cloth
  2. Human tying (and untying) knots with a rope
  3. Robot tying knots with a rope

The experiments shown below were performed with ground-truth markers on the objects; however, the tracking results shown do not use the marker information. The tracker used point clouds from a single Asus Xtion Pro Live camera (equivalent to the Microsoft Kinect). Scenarios 1 and 2 (human manipulating cloth and rope) used color information; scenario 3 (robot manipulating rope) used no color information. All videos within each scenario were generated with the same set of tracker parameters.
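
To illustrate the kind of input the tracker consumes, here is a minimal sketch that builds a colored point cloud from a single RGB-D frame. This is an illustration only, not our actual pipeline: the use of Open3D, the placeholder file names, and the default PrimeSense intrinsics (the Xtion Pro Live is a PrimeSense-class sensor) are assumptions made for the example.

    # Minimal sketch (not our pipeline): build a colored point cloud from one
    # RGB-D frame of an Xtion/Kinect-class sensor using Open3D.
    import open3d as o3d

    color = o3d.io.read_image("color.png")   # 8-bit RGB frame (placeholder path)
    depth = o3d.io.read_image("depth.png")   # 16-bit depth frame in millimeters

    # Pair color and depth into an RGB-D image; keep the color channels,
    # as in scenarios 1 and 2.
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=3.0,
        convert_rgb_to_intensity=False)

    # Default intrinsics for PrimeSense-class sensors (Xtion Pro Live, Kinect v1).
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

    # Back-project the RGB-D image into a colored point cloud.
    cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
    o3d.visualization.draw_geometries([cloud])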

For each experiment below, we show a video of the camera image of the object being tracked alongside a rendering of the state estimate produced by our tracking algorithm.

Full videos for all of the test sequences referred to in the paper are included below (at 1x speed).


Human Manipulating Cloth

Experiment tasks:

  1. Fold diagonal left
  2. Fold diagonal right
  3. Fold double
  4. Fold long
  5. Fold short
  6. Fold triple


For the following videos of a human manipulating rope, the point cloud is also drawn in the state-estimate rendering. The points are colored on a scale from green to black, where darker points are more likely to be outliers.
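
As a concrete illustration of this coloring scheme, the sketch below maps a per-point outlier probability to an RGB color. The linear green-to-black fade and the function name are assumptions for illustration; the actual rendering code may differ.

    import numpy as np

    def outlier_colors(outlier_prob):
        """Map per-point outlier probabilities in [0, 1] to RGB colors.

        Likely inliers (probability near 0) are drawn green; likely outliers
        (probability near 1) are drawn black, with a linear fade in between.
        """
        p = np.clip(np.asarray(outlier_prob, dtype=float), 0.0, 1.0)
        colors = np.zeros((p.shape[0], 3))
        colors[:, 1] = 1.0 - p          # green channel fades toward black
        return colors

    # Example: five points ranging from clear inlier to clear outlier.
    print(outlier_colors([0.0, 0.25, 0.5, 0.75, 1.0]))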


Human Manipulating Rope

Experiment tasks:

  1. Overhand knot (I)
  2. Overhand knot (II): tracking fails in this task because an occlusion creates an ambiguity when one end of the rope is moved under another section of the rope (0:15). This ambiguity does not arise when two cameras are used (see the Comparisons section).
  3. Double overhand knot
  4. Figure-eight knot (I)
  5. Figure-eight knot (II)
  6. Overhand knot + untie: the rope is tied in such a way that it can be untied simply by pulling both of its ends.

Robot Manipulating Rope

Experiment tasks:

  1. Overhand knot
  2. Figure-eight knot (I)
  3. Figure-eight knot (II)



Comparisons

In this section we compare the performance of our tracking algorithm when a second camera is added or when color information is removed.


Comparison experiments (each shown with the camera video and renderings of our state estimates computed from different input information):

  1. Overhand knot (I): color vs. no color, both using one camera
  2. Overhand knot (II): two cameras vs. one camera, both using color



Other deformable objects

In this section we present qualitative results of our algorithm tracking other objects.


Experiment tasks:

  1. Sponge manipulation: no color, using one camera