Memory and experience replay

A clever solution to both of these problems is to introduce a finite memory where we store a set of experiences the agent has had. At each step, the agent remembers the state, the action it took, the reward it received, and the resulting next state. Then, periodically, the agent replays these experiences by sampling a random minibatch from memory and updating the DQN weights on that minibatch.
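
A minimal sketch of such a memory might look like the following. The class name, the capacity, and the tuple layout are illustrative assumptions, not definitions from the text:

```python
import random
from collections import deque


class ReplayMemory:
    """Fixed-capacity buffer of past experiences; the oldest are evicted first."""

    def __init__(self, capacity=50000):
        # deque with maxlen drops the oldest transition once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def remember(self, state, action, reward, next_state, done):
        # Store one transition observed by the agent
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Draw a uniformly random minibatch, breaking the correlation
        # between consecutive experiences
        return random.sample(self.buffer, batch_size)
```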

This replay mechanism allows the agent to learn from its experiences over the longer term, and in a more general way, since it samples randomly from its memory rather than updating the network using only the most recent experience.
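
For concreteness, here is one way a sampled minibatch could drive a weight update. This sketch assumes a Keras-style model exposing `predict` and `train_on_batch`, a discount factor `gamma`, and the `ReplayMemory` above; those details are assumptions for illustration, not the book's exact implementation:

```python
import numpy as np


def replay(memory, model, gamma=0.99, batch_size=32):
    # Build Q-learning targets from a random minibatch of past experiences
    batch = memory.sample(batch_size)
    states = np.array([s for s, a, r, s2, d in batch])
    next_states = np.array([s2 for s, a, r, s2, d in batch])

    # Current Q-value estimates, and estimates for the successor states
    targets = model.predict(states, verbose=0)
    next_q = model.predict(next_states, verbose=0)

    for i, (s, a, r, s2, done) in enumerate(batch):
        # Terminal transitions contribute only the immediate reward;
        # otherwise bootstrap from the best next-state Q-value
        targets[i][a] = r if done else r + gamma * np.max(next_q[i])

    # One gradient step on the whole minibatch
    model.train_on_batch(states, targets)
```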
