The webpage for the NIPS 2016 Deep RL workshop is here.
Deep Reinforcement Learning Workshop, NIPS 2015
The first-ever Deep Reinforcement Learning Workshop will be held at NIPS 2015 in Montréal, Canada, on Friday, December 11th. More details about the program are coming soon.
Organizers: John Schulman, Pieter Abbeel, David Silver, and Satinder Singh.
Co-Sponsored by Osaro and Google DeepMind
Update: videos are available.
Call for Papers
The deadline has passed.
We invite you to submit papers that combine neural networks with reinforcement learning; accepted papers will be presented as talks or posters. The submission deadline is October 10th (midnight), and decisions will be sent out on October 29th (originally October 24th). Please submit papers by email to this address.
Submissions should be in the NIPS 2015 format with a maximum of eight pages, not including references. Accepted submissions will get a spotlight talk and a poster presentation.
Abstract
Although the theory of reinforcement learning addresses an extremely general class of learning problems with a common mathematical formulation, its power has been limited by the need to develop task-specific feature representations. A paradigm shift is occurring as researchers figure out how to use deep neural networks as function approximators in reinforcement learning algorithms; this line of work has yielded remarkable empirical results in recent years. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help researchers with expertise in one of these fields to learn about the other.
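To make the "function approximator" idea concrete, here is a minimal sketch (not from any workshop materials) of the core update in Q-learning when a neural network stands in for the Q-table, in the spirit of DQN. Everything here is an illustrative assumption: the network architecture, the names q_net and td_update, and the task sizes obs_dim, n_actions, and gamma.

```python
# Minimal sketch: a neural network as a Q-function approximator.
# All shapes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99  # hypothetical task sizes

# Q-network: maps an observation to one value per discrete action.
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(obs, action, reward, next_obs, done):
    """One temporal-difference step toward r + gamma * max_a' Q(s', a')."""
    q_pred = q_net(obs)[action]            # Q(s, a) under current weights
    with torch.no_grad():                  # bootstrap target is held fixed
        q_target = reward + gamma * (1.0 - done) * q_net(next_obs).max()
    loss = (q_pred - q_target) ** 2        # squared TD error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example call on a single (fabricated) transition:
td_update(torch.randn(obs_dim), action=1, reward=1.0,
          next_obs=torch.randn(obs_dim), done=0.0)
```

The point of the sketch is only that the gradient step replaces the per-state table update of classical Q-learning, so the same algorithm applies to raw, high-dimensional observations without hand-designed features.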
Schedule
09:00 - 09:30 Honglak Lee, Deep Reinforcement Learning with Predictions
09:30 - 10:00 Juergen Schmidhuber, Reinforcement Learning of Programs in General Purpose Computers with Memory
10:00 - 10:30 Michael Bowling
10:30 - 11:00 Morning coffee
11:00 - 11:30 Volodymyr Mnih, Faster Deep Reinforcement Learning
11:30 - 12:00 Gerry Tesauro, Deep RL and Games Research at IBM
12:00 - 12:05 Osaro, tech talk
12:05 - 14:00 Lunch
14:00 - 14:30 Sergey Levine, Deep Sensorimotor Learning for Robotic Control
14:30 - 15:00 Yoshua Bengio
15:00 - 16:00 Spotlight talks for contributed papers
16:00 - 17:00 Poster presentations & coffee
17:00 - 17:30 Martin Riedmiller, Deep RL for Learning Machines
17:30 - 18:00 Jan Koutnik, Compressed Neural Networks for Reinforcement Learning
Contributed Papers
Accepted papers will be presented in a spotlight talk and a poster session.
The importance of experience replay database composition in deep reinforcement learning
Tim de Bruin, Jens Kober, Karl Tuyls, Robert Babuška
Continuous deep-time neural reinforcement learning
Davide Zambrano, Pieter R. Roelfsema, Sander M. Bohte
Memory-based control with recurrent neural networks
Nicolas Heess, Jonathan J Hunt, Timothy Lillicrap, David Silver
How to discount deep reinforcement learning: towards new dynamic strategies
Vincent François-Lavet, Raphael Fonteneau, Damien Ernst
Strategic Dialogue Management via Deep Reinforcement Learning
Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
Deep Reinforcement Learning in Parameterized Action Space
Matthew Hausknecht, Peter Stone
Guided Cost Learning: Inverse Optimal Control with Multilayer Neural Networks
Chelsea Finn, Sergey Levine, Pieter Abbeel
Learning Deep Control Policies for Autonomous Aerial Vehicles with MPC-Guided Policy Search
Tianhao Zhang, Gregory Kahn, Sergey Levine, Pieter Abbeel
Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning
Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov
Deep Inverse Reinforcement Learning
Markus Wulfmeier, Peter Ondruska, Ingmar Posner
ADAAPT: A Deep Architecture for Adaptive Policy Transfer from Multiple Sources
Janarthanan Rajendran, P Prasanna, Balaraman Ravindran, Mitesh Khapra
Q-Networks for Binary Vector Actions
Naoto Yoshida
The option-critic architecture
Pierre-Luc Bacon, Doina Precup
Learning Deep Neural Network Policies with Continuous Memory States
Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel
Deep Attention Recurrent Q-Network
Ivan Sorokin, Alexey Seleznev, Mikhail Pavlov, Aleksandr Fedorov, Anastasiia Ignateva
Generating Text with Deep Reinforcement Learning
Hongyu Guo
Deep Spatial Autoencoders for Visuomotor Learning
Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel
Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models
John-Alexander M. Assael, Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth
One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors
Justin Fu, Sergey Levine, Pieter Abbeel
Learning Visual Models of Physics for Playing Billiards
Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, Jitendra Malik
Conditional computation in neural networks for faster models
Emmanuel Bengio, Joelle Pineau, Pierre-Luc Bacon, Doina Precup
Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
Bradly C. Stadie, Sergey Levine, Pieter Abbeel
Learning Simple Algorithms from Examples
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, Rob Fergus