Learning Visual Servoing with Deep Features and Fitted Q-Iteration
Alex X. Lee, Sergey Levine, Pieter Abbeel
University of California, Berkeley, Department of Electrical Engineering and Computer Sciences
OpenAI
International Computer Science Institute

  Abstract — Visual servoing involves choosing actions that move a robot in response to observations from a camera, in order to reach a goal configuration in the world. Standard visual servoing approaches typically rely on manually designed features and analytical dynamics models, which limits their generalization capability and often requires extensive application-specific feature and model engineering. In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using only a small amount of data of the target in question, to enable quick adaptation to new targets. Our approach is based on servoing the camera in the space of learned visual features, rather than image pixels or manually designed keypoints. We demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. A key component of our approach is a sample-efficient fitted Q-iteration algorithm that learns which features are best suited for the task at hand. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
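The bilinear predictive model mentioned in the abstract can be illustrated with a minimal sketch. The class below is our own dense toy formulation, not the paper's implementation (which uses locally connected bilinear models over convolutional feature maps): it predicts the next feature vector as f + Σ_j u_j (A_j f + b_j), which is linear in the parameters, so the model can be fit by ordinary least squares.

```python
import numpy as np

class BilinearDynamics:
    """Sketch of a bilinear feature-dynamics model:
        f_next ~ f + sum_j u_j * (A_j f + b_j)
    Illustrative only; the paper uses locally connected bilinear models
    over convolutional feature maps rather than this dense form."""

    def __init__(self, d_feat, d_act):
        self.A = np.zeros((d_act, d_feat, d_feat))
        self.b = np.zeros((d_act, d_feat))

    def predict(self, f, u):
        # f: (d_feat,), u: (d_act,) -> predicted next feature vector
        return f + np.einsum('j,jkl,l->k', u, self.A, f) + u @ self.b

    def fit(self, F, U, F_next):
        # The model is linear in (A, b), so fit them with least squares
        # on the feature differences F_next - F.
        n, d = F.shape
        da = U.shape[1]
        # Design matrix: for each action dim j, the columns [u_j * f, u_j]
        X = np.concatenate(
            [np.concatenate([U[:, j:j + 1] * F, U[:, j:j + 1]], axis=1)
             for j in range(da)], axis=1)
        Y = F_next - F
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (da * (d + 1), d)
        for j in range(da):
            blk = W[j * (d + 1):(j + 1) * (d + 1)]
            self.A[j] = blk[:d].T
            self.b[j] = blk[d]
```

Servoing then amounts to choosing the action u that drives the predicted features toward the goal features.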


[Paper]
[Code]
[Servoing Benchmark Code]

Supplementary Videos

Click on any of the costs in the table entries below to see the trajectories corresponding to each of the costs.


Dynamics-Based Servoing Policies Learned with Reinforcement Learning


Policy optimization algorithms (columns; number of training trajectories in parentheses):
  - unweighted feature dynamics
  - feature dynamics + CEM (1500)
  - feature dynamics + CEM (3250)
  - feature dynamics + TRPO (≥ 80)
  - feature dynamics + TRPO (≥ 2000)
  - feature dynamics + FQI (20) (ours)

Feature dynamics (rows):
  - pixel, FC (fully connected)
  - pixel, LC (locally connected)
  - VGG conv1_2
  - VGG conv2_2
  - VGG conv3_3
  - VGG conv4_3
  - VGG conv5_3

Costs when using the set of cars seen during learning.
Policy optimization algorithms (columns; number of training trajectories in parentheses):
  - unweighted feature dynamics
  - feature dynamics + CEM (1500)
  - feature dynamics + CEM (3250)
  - feature dynamics + TRPO (≥ 80)
  - feature dynamics + TRPO (≥ 2000)
  - feature dynamics + FQI (20) (ours)

Feature dynamics (rows):
  - pixel, FC (fully connected)
  - pixel, LC (locally connected)
  - VGG conv1_2
  - VGG conv2_2
  - VGG conv3_3
  - VGG conv4_3
  - VGG conv5_3

Costs when using novel cars, none of which were seen during learning.
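The FQI column above refers to fitted Q-iteration. The sketch below is a generic, toy illustration of the algorithm's structure (a tabular problem with a one-hot feature map and a least-squares regressor), not the paper's Q-function parameterization, which learns weights over servoing feature channels. Each iteration regresses Q onto the bootstrapped targets r + γ max over a' of Q(s', a'):

```python
import numpy as np

def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.9, n_iters=200):
    """Generic fitted Q-iteration on (s, a, r, s') tuples. The one-hot
    (tabular) feature map makes the least-squares fit exact here."""
    def phi(s, a):
        x = np.zeros(n_states * n_actions)
        x[s * n_actions + a] = 1.0
        return x

    X = np.array([phi(s, a) for s, a, _, _ in transitions])
    w = np.zeros(n_states * n_actions)
    for _ in range(n_iters):
        q = w.reshape(n_states, n_actions)
        # Bootstrapped regression targets: r + gamma * max_a' Q(s', a')
        y = np.array([r + gamma * q[s2].max() for _, _, r, s2 in transitions])
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w.reshape(n_states, n_actions)

# Toy deterministic MDP: reaching and staying in state 1 yields reward 1 forever.
transitions = [(0, 0, 0.0, 0), (0, 1, 1.0, 1), (1, 0, 1.0, 1), (1, 1, 1.0, 1)]
Q = fitted_q_iteration(transitions, n_states=2, n_actions=2)
```

Because the regression is exact in this tabular setting, the iterates converge to the true Q-values (here 1/(1 - γ) = 10 for the absorbing state).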

End-to-End Policies Learned with TRPO


Observation modalities (rows):
  - ground truth car position
  - raw pixel-intensity images
  - VGG conv1_2 features
  - VGG conv2_2 features
  - VGG conv3_3 features

Costs when using the set of cars seen during learning.
Observation modalities (rows):
  - ground truth car position
  - raw pixel-intensity images
  - VGG conv1_2 features
  - VGG conv2_2 features
  - VGG conv3_3 features

Costs when using novel cars, none of which were seen during learning.

Classical Image-Based Visual Servoing


Observation modalities (feature points):
  - corners of bounding box from C-COT tracker
  - corners of ground truth bounding box
  - corners of next frame's bounding box from C-COT tracker
  - corners of next frame's ground truth bounding box
  - SIFT feature points
  - SURF feature points
  - ORB feature points
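The classical image-based baselines above compute a camera velocity directly from point-feature errors. A minimal sketch of standard IBVS with the point-feature interaction matrix and the control law v = -λ L⁺ (s - s*) (function and variable names are ours, not from the paper's code):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Standard interaction (feature Jacobian) matrix for one normalized
    # image point (x, y) observed at depth Z.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, points_goal, depths, lam=0.5):
    """Stack the per-point interaction matrices and apply the classical
    IBVS law v = -lam * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points, float) - np.asarray(points_goal, float)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

With four non-degenerate points, such as bounding-box corners, the stacked 8x6 matrix has full column rank, so all six velocity components are determined.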

Classical Position-Based Visual Servoing


Policy variants (columns):
  - use rotation
  - ignore rotation

Observation modalities (pose; rows):
  - car pose
  - next frame's car pose
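The two policy variants above differ only in whether the rotational error is acted on. A minimal sketch of the standard PBVS control law, with the rotation error given as an axis-angle vector θu (names are ours):

```python
import numpy as np

def pbvs_control(t_err, aa_err, lam=1.0, use_rotation=True):
    """Standard position-based visual servoing law.

    t_err:  translation error of the goal frame w.r.t. the camera frame
    aa_err: rotation error as an axis-angle vector (theta * u)
    Returns (linear velocity v, angular velocity w). With
    use_rotation=False the rotational command is zeroed, matching the
    'ignore rotation' variant above.
    """
    v = -lam * np.asarray(t_err, dtype=float)
    w = -lam * np.asarray(aa_err, dtype=float) if use_rotation else np.zeros(3)
    return v, w
```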