Quick summary of advantages and applications

A few of the key advantages of the Rainbow agent are summarized here for your quick reference:

  • Combines several notable extensions to Q-learning developed over recent years
  • Achieves state-of-the-art results on the Atari benchmark
  • n-step targets with a suitably tuned value of n often lead to faster learning (see the sketch after this list)
  • Unlike other DQN variants, the Rainbow agent can start learning with 40% fewer frames collected in the experience replay memory
  • Matches the best performance of DQN in under 10 hours (7 million frames) on a single-GPU machine
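To make the n-step target point concrete, here is a minimal sketch of how such a target can be computed: the discounted sum of the next n rewards, plus a bootstrapped value estimate for the state n steps ahead. This is illustrative only, not the book's implementation; the function name n_step_target and its arguments are assumptions.

    def n_step_target(rewards, bootstrap_value, gamma=0.99, done=False):
        """Compute an n-step return target.

        rewards:         the n rewards r_t, ..., r_{t+n-1}
        bootstrap_value: value estimate at the state n steps ahead,
                         e.g. max_a Q(s_{t+n}, a)
        done:            True if the episode terminated within these
                         n steps, in which case no bootstrapping is used
        """
        target = 0.0
        for k, r in enumerate(rewards):
            target += (gamma ** k) * r          # discounted sum of rewards
        if not done:
            target += (gamma ** len(rewards)) * bootstrap_value
        return target

    # Example: a 3-step target with rewards [1, 0, 1] and a bootstrapped
    # value estimate of 2.5 at the state three steps ahead.
    print(n_step_target([1.0, 0.0, 1.0], bootstrap_value=2.5))

With n = 1 this reduces to the familiar one-step DQN target; larger values of n propagate reward information faster at the cost of higher variance, which is why n needs to be tuned.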

The Rainbow algorithm has become one of the most sought-after agents for control problems where the action space is small and discrete. It has been very successful ...
