Lunar Lander

Thanks to Keras-RL, the agent we use for Lunar Lander is almost identical to the CartPole agent; only the model architecture and a few hyperparameters change. The Lunar Lander environment has eight inputs instead of four, and our agent can now choose among four actions instead of two.
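To make those dimensions concrete, here is a minimal NumPy sketch of a Q-network with Lunar Lander's shapes: 8 observations in, one Q-value per action (4) out. This is an illustration only, not the book's actual Keras-RL model; the hidden-layer size (16) and the random weights are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only: a tiny fully connected Q-network matching the
# Lunar Lander dimensions (8 observations in, 4 Q-values out). The hidden
# size (16) is a placeholder, not the book's actual architecture.
rng = np.random.default_rng(0)

obs_dim, n_actions = 8, 4            # Lunar Lander: 8 inputs, 4 actions
hidden = 16

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(size=(obs_dim, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, n_actions))
b2 = np.zeros(n_actions)

def q_values(state):
    """Forward pass: an 8-dimensional state -> one Q-value per action."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

state = rng.normal(size=obs_dim)           # a fake 8-dimensional observation
q = q_values(state)
print(q.shape)             # (4,): one Q-value per action
print(int(np.argmax(q)))   # greedy action index, 0-3
```

In the actual agent, a Keras model with this input/output shape is handed to Keras-RL's `DQNAgent`, which handles memory, the target network, and training.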

If you're inspired by these examples and decide to try your hand at building a Keras-RL network, keep in mind that hyperparameter choice is very, very important. In the case of the Lunar Lander agent, even the smallest changes to the model architecture caused my agent to fail to learn a solution to the environment. Getting the network just right is hard work.
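One concrete example of how much a single hyperparameter matters is the exploration rate in an epsilon-greedy policy. The sketch below is generic (not the book's code): the one knob `eps` decides how often the agent explores versus exploits, and the values shown are illustrative only.

```python
import numpy as np

# Generic illustration (not from the book): epsilon-greedy action selection.
# The hyperparameter `eps` alone decides how often the agent explores,
# one example of how a single knob can make or break learning.
rng = np.random.default_rng(42)

def epsilon_greedy(q_values, eps, rng):
    """With probability eps pick a random action, else the greedy one."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

q = np.array([0.1, 0.5, -0.2, 0.3])            # fake Q-values for 4 actions
greedy = epsilon_greedy(q, eps=0.0, rng=rng)   # eps=0: always exploit
print(greedy)  # 1, the argmax

# With eps=1.0 every choice is uniformly random over all 4 actions.
random_action = epsilon_greedy(q, eps=1.0, rng=rng)
print(0 <= random_action <= 3)  # True
```

Set `eps` too low and the agent never discovers better landings; too high and it never exploits what it has learned. The same sensitivity applies to learning rate, memory size, and layer widths.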
