TD learning algorithms are based on reducing the differences between estimates the agent makes at different times. Q-learning, seen in the previous section, is a TD algorithm, but it is based on the difference between estimates at immediately adjacent time steps; TD in general is broader and may also consider time steps and states further in the future.
It combines ideas from the Monte Carlo (MC) method and Dynamic Programming (DP).
MC methods solve reinforcement learning problems by averaging the returns obtained from sampled episodes.
DP refers to a family of algorithms that can compute an optimal policy given a perfect model of the environment in the form of an MDP.
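The blend of the two ideas can be seen in the TD(0) value update: like MC, it learns from sampled transitions, and like DP, it bootstraps from the current estimate of the next state's value. A minimal tabular sketch (the state names, reward, and hyperparameter values below are illustrative, not from the text):

```python
# Sketch of the TD(0) update V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)),
# assuming a tabular value function stored as a dict. States "A"/"B" and the
# hyperparameters alpha, gamma are illustrative choices.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Move V[s] toward the bootstrapped target r + gamma * V[s_next]."""
    td_error = r + gamma * V[s_next] - V[s]  # difference between estimates
    V[s] += alpha * td_error
    return td_error

V = {"A": 0.0, "B": 0.0}
# One observed transition: from state "A", reward 1.0, landing in state "B".
err = td0_update(V, "A", 1.0, "B")
```

After this single update, V["A"] moves a fraction alpha of the way toward the target 1.0 + 0.9 * V["B"], i.e. from 0.0 to 0.1; unlike MC, no complete episode is needed before updating.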
A TD algorithm can learn directly from raw ...