CS 294: Deep Reinforcement Learning, Spring 2017
Instructors: Sergey Levine, John Schulman, Chelsea Finn
Lectures: Mondays and Wednesdays, 9:00am-10:30am in 306 Soda Hall.
Office Hours: MW 10:30-11:30, by appointment (see signup sheet on Piazza)
Communication: Piazza will be used for announcements, general questions and discussions, clarifications about assignments, and student questions to each other. To join, go to Piazza and sign up with “UC Berkeley” and “CS294-112”.
For people who are not enrolled but are interested in following and discussing the course, there is a subreddit: reddit.com/r/berkeleydeeprlcourse/
Please do not email the course instructors about MuJoCo licenses if you are not enrolled in the course. Unfortunately, we do not have any license that we can provide to students who are not officially enrolled in the course for credit. |
Table of Contents
- Lecture Videos
- Lectures, Readings, and Assignments
- Prerequisites
- Related Materials
- Previous Offerings
Lecture Videos
The course lectures are available below. The course is not being offered as an online course, and the videos are provided only for your personal informational and entertainment purposes. They are not part of any course requirement or degree-bearing university program.
For all videos, click here. For the live stream, click here.
Lectures, Readings, and Assignments
Below you can find an outline of the course. Slides and references will be posted as the course proceeds.
- Jan 18: Introduction and course overview (Levine, Finn, Schulman)
- Jan 23: Supervised learning and decision making (Levine)
- Slides
- End to End Learning for Self-Driving Cars
- A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning (DAgger paper)
- A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots
- Learning Transferable Policies for Monocular Reactive MAV Control
- Learning Real Manipulation Tasks from Virtual Demonstrations using LSTM
- Jan 25: Optimal control and planning (Levine)
- Jan 27 (10 am, SDH 240): Review section: autodiff, backpropagation, optimization (Finn)
- Jan 30: Learning dynamical system models from data (Levine)
- Homework 1 is out: Imitation Learning
- Plotting and Visualization Handout: Handout
- Slides
- Feb 1: Learning policies by imitating optimal controllers (Levine)
- Feb 6: Guest lecture: Igor Mordatch, OpenAI
- Feb 8: RL definitions, value iteration, policy iteration (Schulman)
- Homework 1 is DUE
- Homework 2 is out: Basic RL (see the hw2 directory in the course GitHub repository)
- Slides
- Feb 13: Reinforcement learning with policy gradients (Schulman)
- Feb 15: Learning Q-functions: Q-learning, SARSA, and others (Schulman)
- Feb 22: Advanced Q-learning: replay buffers, target networks, double Q-learning (Schulman)
- Homework 2 is DUE
- Homework 3 is out: Deep Q Learning
- Slides (coming soon)
- Feb 27: Advanced model learning: predicting images and videos (Finn)
- Mar 1: Advanced imitation: policy distillation, guided policy search revisited (Finn)
- Mar 6: Inverse RL: acquiring objectives from demonstration (Finn)
- Mar 8: Advanced policy gradients: natural gradient and TRPO (Schulman)
- Homework 3 is DUE
- Homework 4 is out: Deep Policy Gradients
- Slides (coming soon)
- Mar 13: Policy gradient variance reduction and actor-critic algorithms (Schulman)
- Mar 15: Summary of policy gradients and temporal difference methods (Schulman)
- Mar 20: The exploration problem (Schulman)
- Mar 22: Open problems and challenges in deep reinforcement learning (Levine)
- Homework 4 is DUE
- Deadline to form final project groups
- Slides (coming soon)
- Apr 3: Parallelism and asynchrony in deep RL (Levine)
- Apr 5: Guest lecture: Mohammad Norouzi, Google Brain Team
- Apr 10: Guest lecture: Pieter Abbeel, UC Berkeley and OpenAI
- Apr 12: Advanced imitation learning and inverse RL algorithms (Finn)
- Apr 17: Project milestone presentations
- Final project milestone reports DUE
- Apr 19: Guest lecture: TBD
- Apr 24: Guest lecture: Aviv Tamar, UC Berkeley
- Apr 26: Final project presentations
- May 1: Final project presentations
- May 3: Final project presentations (spillover period)
Prerequisites
CS189 or equivalent is a prerequisite for the course. This course assumes some familiarity with reinforcement learning, numerical optimization, and machine learning. Students who are not familiar with the concepts below are encouraged to brush up using the references that follow this list. We will review this material in class, but only cursorily.
- Reinforcement learning and MDPs
- Definition of MDPs
- Exact algorithms: policy and value iteration
- Search algorithms
- Numerical Optimization
- gradient descent, stochastic gradient descent
- backpropagation algorithm
- Machine Learning
- Classification and regression problems: what loss functions are used, how to fit linear and nonlinear models
- Training/test error, overfitting.
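The MDP prerequisites above (definitions, value iteration) fit in a few lines of NumPy. The following is a minimal sketch, not course code: the two-state MDP, its rewards, and all variable names are invented for illustration.

```python
import numpy as np

# Toy 2-state, 2-action MDP (all numbers are illustrative).
# P[a, s, s'] = probability of moving from s to s' under action a.
P = np.array([
    [[0.9, 0.1],
     [0.2, 0.8]],   # action 0
    [[0.5, 0.5],
     [0.0, 1.0]],   # action 1
])
R = np.array([      # R[s, a] = expected immediate reward
    [0.0, 1.0],
    [0.5, 2.0],
])
gamma = 0.9         # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Apply the Bellman optimality backup until the values stop changing."""
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V_star, pi_star = value_iteration(P, R, gamma)  # pi_star is the greedy policy
```

Policy iteration alternates the same backup with an explicit policy-evaluation step; for finite MDPs both converge to the same optimal values.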
For introductory material on RL and MDPs, see
- CS188 EdX course, starting with Markov Decision Processes I
- Sutton & Barto, Ch 3 and 4.
- For a concise intro to MDPs, see Ch 1-2 of Andrew Ng’s thesis
- David Silver’s course, links below
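The numerical-optimization prerequisites (gradient descent, SGD) can likewise be illustrated with a tiny example. This is a generic sketch with invented data, learning rate, and model: fitting a 1-D linear model by per-sample stochastic gradient steps on squared error.

```python
import numpy as np

# Synthetic data for y ≈ 3x - 0.5 (purely illustrative values).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * X - 0.5 + 0.1 * rng.standard_normal(200)

w, b = 0.0, 0.0   # parameters of the model y_hat = w*x + b
lr = 0.1          # learning rate

for epoch in range(50):
    for i in rng.permutation(len(X)):   # one shuffled sample at a time
        err = (w * X[i] + b) - y[i]     # residual on this sample
        # Gradients of the per-sample loss 0.5 * err**2:
        w -= lr * err * X[i]
        b -= lr * err
```

Batch gradient descent would instead average these gradients over all samples before each update; backpropagation is the same chain-rule computation applied layer by layer through a deep network.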
For introductory material on machine learning and neural networks, see the courses by Nando de Freitas and Andrej Karpathy listed under Related Materials below.
Related Materials
John's lecture series at MLSS
- Lecture 1: intro, derivative free optimization
- Lecture 2: score function gradient estimation and policy gradients
- Lecture 3: actor critic methods
- Lecture 4: trust region and natural gradient methods, open problems
Courses
- David Silver’s course on reinforcement learning / Lecture Videos
- Nando de Freitas’ course on machine learning
- Andrej Karpathy’s course on neural networks
Relevant Textbooks
- Deep Learning
- Sutton & Barto, Reinforcement Learning: An Introduction
- Szepesvari, Algorithms for Reinforcement Learning
- Bertsekas, Dynamic Programming and Optimal Control, Vols I and II
- Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming
- Powell, Approximate Dynamic Programming