Temporal difference learning

TD learning algorithms work by reducing the differences between the estimates the agent makes at different times. Q-learning, seen in the previous section, is a TD algorithm, but it bootstraps only from the immediately following time step, a one-step difference. TD methods in general are more flexible and can compare estimates across states several steps apart (n-step TD).
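To make the one-step versus multi-step distinction concrete, here is a small sketch that computes both kinds of TD target from a hypothetical trajectory. The reward sequence, discount factor, and value estimates are all made-up numbers for illustration, not values from the book.

```python
gamma = 0.9
rewards = [1.0, 0.0, 2.0]          # illustrative rewards r_{t+1}, r_{t+2}, r_{t+3}
values = {"s1": 0.5, "s3": 1.5}    # assumed current estimates V(s_{t+1}), V(s_{t+3})

# One-step TD target: one real reward, then bootstrap from the next state's estimate.
one_step = rewards[0] + gamma * values["s1"]

# Three-step TD target: accumulate three real rewards before bootstrapping.
n_step = sum(gamma**k * r for k, r in enumerate(rewards)) + gamma**3 * values["s3"]

print(one_step)  # 1.0 + 0.9 * 0.5 = 1.45
print(n_step)    # 1.0 + 0.81 * 2.0 + 0.729 * 1.5 = 3.7135
```

The larger the lookahead, the more the target relies on observed rewards and the less on the current (possibly inaccurate) estimates.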

TD learning combines ideas from the Monte Carlo (MC) method and Dynamic Programming (DP).

MC methods solve reinforcement learning problems by averaging the sample returns collected over complete episodes.
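The averaging idea can be sketched as a first-visit MC estimate of the state-value function. The two episodes below are invented (state, reward) sequences used purely for illustration, assuming an undiscounted task.

```python
gamma = 1.0  # undiscounted, for simplicity
# Made-up episodes: lists of (state, reward-received-on-leaving-that-state) pairs.
episodes = [
    [("A", 0.0), ("B", 1.0)],
    [("A", 2.0)],
]

returns = {}  # state -> list of returns observed from its first visit
for episode in episodes:
    seen = set()
    for t, (state, _) in enumerate(episode):
        if state in seen:
            continue  # first-visit MC: only the first occurrence counts
        seen.add(state)
        # Return from time t: discounted sum of the rewards that follow.
        G = sum(gamma**k * r for k, (_, r) in enumerate(episode[t:]))
        returns.setdefault(state, []).append(G)

# The value estimate is simply the average of the observed returns.
V = {s: sum(gs) / len(gs) for s, gs in returns.items()}
print(V)  # {'A': 1.5, 'B': 1.0}
```

Note that MC must wait for an episode to finish before any return (and hence any update) is available, which is exactly the limitation TD removes.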

DP refers to a family of algorithms that can compute an optimal policy given a perfect model of the environment in the form of an MDP.
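As a sketch of the DP side, here is value iteration on a tiny made-up two-state MDP. The transition model `P` (probability, next state, reward triples per action) is an assumption invented for this example; the point is that DP needs this model up front.

```python
gamma = 0.9
# Hypothetical model: P[state][action] = [(probability, next_state, reward), ...]
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(1.0, "s1", 1.0)]},
    "s1": {"stay": [(1.0, "s1", 0.0)], "go": [(1.0, "s0", 0.0)]},
}

V = {s: 0.0 for s in P}
for _ in range(100):  # repeated Bellman-optimality sweeps
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in P.items()
    }

# Greedy policy with respect to the converged values.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
    for s, actions in P.items()
}
print(V, policy)
```

Every backup sums over the model's transition probabilities, so this approach is unusable when no such model is available, which is where sampling-based methods like MC and TD come in.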

A TD algorithm can learn directly from raw experience, without a model of the environment's dynamics, and it updates its estimates partly on the basis of other learned estimates (bootstrapping), without waiting for the final outcome of an episode.
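Putting the pieces together, the following sketch runs tabular TD(0) on an invented five-state random-walk chain: the agent moves left or right at random, receives reward 1 only for stepping off the right end, and updates its value estimates after every single transition, with no transition model anywhere in the learning code. All constants here are illustrative assumptions.

```python
import random

random.seed(0)

alpha, gamma = 0.1, 1.0        # assumed step size and (undiscounted) discount
V = {s: 0.0 for s in range(5)}  # one value estimate per non-terminal state

for _ in range(2000):
    s = 2  # every episode starts in the middle of the chain
    while True:
        s2 = s + random.choice([-1, 1])  # raw experience: a sampled transition
        if s2 < 0:
            r, done = 0.0, True          # fell off the left end
        elif s2 > 4:
            r, done = 1.0, True          # fell off the right end
        else:
            r, done = 0.0, False
        # TD(0) update: nudge V(s) toward the one-step bootstrapped target.
        target = r + (0.0 if done else gamma * V[s2])
        V[s] += alpha * (target - V[s])
        if done:
            break
        s = s2

print({s: round(v, 2) for s, v in V.items()})
```

For this chain the true values are (s + 1) / 6, so the learned estimates should increase from left to right and sit near 0.5 in the middle; unlike MC, every update happens mid-episode, and unlike DP, no transition probabilities were required.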
