Implementing a deep n-step advantage actor-critic agent

We have prepared ourselves with all the background information required to implement the deep n-step advantage actor-critic (A2C) agent. Let's look at an overview of the agent implementation process and then jump right into the hands-on implementation.

The following is the high-level flow of our A2C agent:

  1. Initialize the actor's and critic's networks.
  2. Use the actor's current policy to gather n-step experiences from the environment and calculate the n-step returns.
  3. Calculate the actor's and critic's losses.
  4. Perform the stochastic gradient descent optimization step to update the actor's and critic's parameters (steps 2 to 4 are sketched in the code after this list).
  5. Repeat from step 2.
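
Before moving to the class-based implementation, the following is a minimal sketch of steps 2 to 4 in isolation. It assumes PyTorch; the helper names `calculate_n_step_returns` and `a2c_update`, the discount factor `gamma`, and the two optimizers are illustrative placeholders rather than the final implementation we will build.

```python
import torch
import torch.nn.functional as F


def calculate_n_step_returns(rewards, final_state_value, gamma=0.99):
    """Compute bootstrapped n-step returns:
    G_t = r_t + gamma * r_{t+1} + ... + gamma^n * V(s_{t+n}).
    `rewards` is a list of floats; `final_state_value` is the critic's
    tensor-valued estimate for the last observed state."""
    returns = []
    g = final_state_value
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    return returns


def a2c_update(log_probs, values, returns, actor_optimizer, critic_optimizer):
    """One stochastic gradient descent step for the actor and the critic."""
    log_probs = torch.stack(log_probs)
    values = torch.stack(values).squeeze(-1)
    # The n-step returns are treated as fixed targets, hence the detach
    returns = torch.stack(returns).squeeze(-1).detach()

    # Advantage estimate: A(s_t, a_t) = G_t - V(s_t)
    advantages = returns - values

    # Actor loss: policy gradient weighted by the (detached) advantage
    actor_loss = -(log_probs * advantages.detach()).mean()
    # Critic loss: mean squared error between n-step return and value estimate
    critic_loss = F.mse_loss(values, returns)

    actor_optimizer.zero_grad()
    critic_optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    actor_optimizer.step()
    critic_optimizer.step()
```

Note that the advantage is detached in the actor's loss so that the policy gradient does not back-propagate into the critic; the critic is trained purely by regressing its value estimates toward the n-step returns.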

We will implement the agent in a Python class named ...
