RL Basics

There are two fundamental problems in sequential decision making: planning and reinforcement learning. In planning, a model of the environment is known and the agent can compute a policy without interacting with the real environment; in reinforcement learning, the environment is unknown and the agent improves its policy by interacting with it. Within reinforcement learning there are two kinds of task: prediction and control. Prediction: given a policy, evaluate how well it performs, i.e., estimate its value function. Control: optimize over policies to find the optimal one. In practice, RL alternates prediction and control steps to arrive at the best policy.

In terms of methods, RL algorithms can be categorized into two types: model-free and model-based. In model-free algorithms, we do not want to (or cannot) learn the system dynamics; instead, we sample actions and use the observed rewards to optimize a policy or fit a value function. Model-free RL can be further divided into two families: policy optimization and value learning.

  • Planning
    • Value iteration: use dynamic programming to compute the optimal value function by repeatedly applying the Bellman optimality backup (a minimal sketch follows this list).
    • Policy iteration: alternate between evaluating the current policy (computing its value function) and improving the policy greedily with respect to that value function.
  • RL
    • Value learning / Q-learning: without representing an explicit policy, fit the value function or Q-function iteratively from observed rewards. Actions are chosen by a behavior policy such as ε-greedy, which usually picks the action with the highest Q-value but occasionally acts at random for exploration; because the policy being learned differs from the behavior policy, Q-learning is off-policy (a tabular sketch follows this list).
    • Policy gradients: approximate the policy directly, typically with a neural network, and optimize it by gradient ascent on the expected return (a REINFORCE-style sketch follows this list).
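
To make the value-iteration item concrete, here is a minimal sketch of the Bellman optimality backup on a hypothetical two-state MDP (the states, transition probabilities, and rewards are made up for illustration):

```python
import numpy as np

# P[s][a] is a list of (probability, next_state, reward) transitions.
# This toy MDP is hypothetical, chosen only to illustrate the backup.
P = {
    0: {0: [(1.0, 0, 0.0)],                    # state 0, action "stay"
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},    # state 0, action "go": usually reaches state 1
    1: {0: [(1.0, 1, 2.0)],                    # state 1, action "stay": reward 2
        1: [(1.0, 0, 0.0)]},                   # state 1, action "back"
}
gamma = 0.9
V = np.zeros(len(P))

for _ in range(1000):
    V_new = np.zeros_like(V)
    for s in P:
        # Bellman optimality backup: V(s) = max_a sum_{s'} p(s'|s,a) * [r + gamma * V(s')]
        V_new[s] = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
    if np.max(np.abs(V_new - V)) < 1e-8:   # stop once the values have converged
        V = V_new
        break
    V = V_new

print(V)  # converged optimal state values for the toy MDP
```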
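
For the Q-learning item, a minimal tabular sketch with an ε-greedy behavior policy, on a hypothetical five-state chain environment (again invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chain world: start in state 0, actions are 0 = left and 1 = right,
# and reaching state 4 ends the episode with reward 1; all other rewards are 0.
N_STATES, N_ACTIONS = 5, 2
GOAL = N_STATES - 1

def step(s, a):
    s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    done = s_next == GOAL
    return s_next, (1.0 if done else 0.0), done

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    for _ in range(100):                        # cap episode length
        # epsilon-greedy behavior policy: mostly greedy w.r.t. Q, sometimes random;
        # ties among greedy actions are broken at random.
        if rng.random() < epsilon:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next, r, done = step(s, a)
        # Q-learning (off-policy) update: bootstrap from max_a' Q(s', a'),
        # regardless of which action the behavior policy takes next.
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
        if done:
            break

print(np.argmax(Q, axis=1))  # greedy action per state; non-goal states should prefer "right"
```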
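
And for the policy-gradient item, a minimal REINFORCE-style sketch on a hypothetical three-armed bandit (arm means are made up); it relies on the closed-form gradient of the log of a softmax policy rather than a neural network, to keep the example tiny:

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.5, 0.8])   # hypothetical mean rewards, unknown to the agent
theta = np.zeros(3)                      # policy parameters: one logit per arm
alpha = 0.1                              # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(5000):
    pi = softmax(theta)                  # current stochastic policy
    a = rng.choice(3, p=pi)              # sample an action from the policy
    r = rng.normal(true_means[a], 1.0)   # observed reward (the return, for a bandit)
    # REINFORCE update: for a softmax policy, grad log pi(a) = onehot(a) - pi
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi     # gradient ascent on expected reward

print(softmax(theta))  # the learned policy should concentrate on the best arm
```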

Concepts

A return is a realized quantity (a random variable): the actual discounted sum of rewards observed after a specific state or state-action pair. A value function is the expected return, viewed as a function of the state (or state-action pair).
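
In standard notation, with rewards R_t and discount factor γ ∈ [0, 1):

```latex
% Return following time t (a random variable) and the corresponding
% state-value and action-value functions for a policy \pi:
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots
    = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}
\qquad
V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[ G_t \mid S_t = s \right]
\qquad
Q^{\pi}(s,a) = \mathbb{E}_{\pi}\!\left[ G_t \mid S_t = s, A_t = a \right]
```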
