Adversarial Attacks on Neural Network Policies
Sandy Huang†, Nicolas Papernot‡, Ian Goodfellow§, Yan Duan†§, Pieter Abbeel†§
†University of California, Berkeley, Department of Electrical Engineering and Computer Sciences
‡Pennsylvania State University, School of Electrical Engineering and Computer Science
§OpenAI
shhuang@cs.berkeley.edu, ngp5056@cse.psu.edu, {ian, rocky, pieter}@openai.com

  Abstract — Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show that adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show that existing adversarial example crafting techniques can be used to significantly degrade the test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception.
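For concreteness, a minimal sketch of the white-box attack is given below: a fast gradient sign method (FGSM) perturbation of the observation fed to the policy. It assumes a PyTorch policy network (here called `policy`) that maps a preprocessed observation tensor with pixel values in [0, 1] to action logits; since no ground-truth label exists at test time, the loss is computed against the policy's own most-probable action. The function name and preprocessing assumptions are illustrative, not the exact code behind the videos below.

import torch
import torch.nn.functional as F

def fgsm_perturb_observation(policy, obs, epsilon):
    """Return an observation perturbed within an ell_infinity ball of radius epsilon.

    Assumes `policy(obs)` returns action logits for a batch of observations
    and that `obs` holds pixel values in [0, 1].
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # No true label is available at test time, so use the policy's own
    # most likely action as the target of the cross-entropy loss.
    target_action = logits.argmax(dim=-1)
    loss = F.cross_entropy(logits, target_action)
    loss.backward()
    # FGSM: move each input coordinate by epsilon in the direction that
    # increases the loss, i.e. away from the policy's preferred action.
    adv_obs = obs + epsilon * obs.grad.sign()
    return adv_obs.clamp(0.0, 1.0).detach()  # keep pixels in the valid range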


Paper: [arXiv]

Supplementary Videos:
White-Box
Black-Box (Transferability Across Policies)
Black-Box (Transferability Across Algorithms)

Supplementary Videos: White-Box

[Videos selectable by FGSM ε]

Supplementary Videos: Black-Box (Transferability Across Policies)

[Videos selectable by FGSM ε]

Supplementary Videos: Black-Box (Transferability Across Algorithms)


Videos are organized by adversary type, by the algorithm used to train the policy the adversary computes perturbations against, and by the algorithm used to train the target policy.

Adversary Type    Algorithm (Adversary)    Deep Reinforcement Learning Algorithm (Target)
                                           DQN    TRPO    A3C
∞-norm FGSM       DQN
                  TRPO
                  A3C
2-norm FGSM       DQN
                  TRPO
                  A3C
1-norm FGSM       DQN
                  TRPO
                  A3C

[Videos selectable by FGSM ε]
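The three adversary types in the table differ only in the norm used to constrain the perturbation. The sketch below (same assumptions as the sketch under the abstract, with `grad` the loss gradient with respect to the observation) shows one way to reshape that gradient for an ∞-, 2-, or 1-norm style budget; the exact budget normalizations and pixel-selection rule used for the videos may differ.

import torch

def fgsm_direction(grad, epsilon, norm="linf"):
    """Turn a loss gradient into a perturbation respecting the chosen norm budget.

    The budget scalings below are illustrative, chosen so the three variants
    spend a comparable total amount of perturbation.
    """
    flat = grad.flatten()
    d = flat.numel()
    if norm == "linf":
        # Every coordinate moves by epsilon in the direction of its gradient sign.
        return epsilon * grad.sign()
    if norm == "l2":
        # Rescale the whole gradient to a fixed ell_2 length (epsilon * sqrt(d)),
        # so the total energy is comparable to the ell_infinity case.
        return epsilon * (d ** 0.5) * grad / (grad.norm(p=2) + 1e-12)
    if norm == "l1":
        # Concentrate the whole budget (epsilon * d) on the coordinates with the
        # largest gradient magnitude; the fraction perturbed here is an assumption.
        k = max(1, d // 100)
        topk = flat.abs().topk(k).indices
        eta = torch.zeros_like(flat)
        eta[topk] = (epsilon * d / k) * flat[topk].sign()
        return eta.view_as(grad)
    raise ValueError(f"unknown norm: {norm}")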

Last Update: February 17, 2017