This course accompanies the ml-course and provides an overview of both classical and novel approaches in reinforcement learning (RL).
Date | Class num | Lecture topic | Lecture to be provided | This year webinar | Prev year webinar
---|---|---|---|---|---
- | 0 | DL recap | - | - | -
18/9 | 1 | Intro to RL | Lecture 1 | Webinar 1 | Webinar 1
25/9 | 2 | Bellman equations | Lecture 2 | Webinar 2 | Webinar 2
4/10 | 3 | Model-free learning | Lecture 3 | Webinar 3 | Webinar 3
16/10 | 4 | Approximate Q-learning, DQN | Lecture 4 | - | Webinar 4
18/10 | 5 | Policy gradient, REINFORCE | - | - | Webinar 5
23/10 | 6 | Advantage Actor Critic | Lecture 6 | - | -
This course is inspired by, and draws heavily on, the following brilliant courses, blogs, and materials:
- RL course by Yandex School of Data Analysis
- Spinning Up by OpenAI (primarily developed by Josh Achiam)
- UCL Course on RL by David Silver
- Stable Baselines 3, a great repository of reliable RL baseline implementations
- Reinforcement Learning: An Introduction book by Richard S. Sutton and Andrew G. Barto