We have implemented the method to calculate the reward and defined the permitted actions, the observations, and the reset method for the custom CARLA environment. According to our custom Gym environment creation template, these are the required methods we need to implement to create a custom environment that is compatible with the OpenAI Gym interface.
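To make the required interface concrete, here is a minimal, dependency-free sketch of the step/reset contract a Gym-compatible environment must honor. The class name, action set, observation, and reward logic are all placeholders for illustration; the actual CARLA environment's spaces and reward calculation are defined as described in this chapter:

```python
import random


class CustomEnvSketch:
    """Hypothetical sketch of the Gym-style interface: an environment must
    expose its permitted actions and observations, a reset() method that
    starts a new episode, and a step() method that advances it."""

    def __init__(self):
        # Illustrative discrete action set (e.g., steer left/straight/right)
        self.action_space = [0, 1, 2]
        # Placeholder observation: a single scalar standing in for sensor data
        self.observation = 0.0

    def reset(self):
        """Reset the environment state and return the initial observation."""
        self.observation = 0.0
        return self.observation

    def step(self, action):
        """Apply one action; return (observation, reward, done, info)."""
        assert action in self.action_space
        self.observation += random.uniform(-1.0, 1.0)
        reward = -abs(self.observation)      # toy reward: stay near zero
        done = abs(self.observation) > 5.0   # toy termination condition
        return self.observation, reward, done, {}


env = CustomEnvSketch()
obs = env.reset()
obs, reward, done, info = env.step(1)
```

An agent interacts with such an environment by calling `reset()` once per episode and then looping on `step()` until `done` becomes `True`.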
While this is true, there is one more thing we need to take care of so that the agent can interact with our environment continuously. Recall that when we were developing our Q-learning agent for the mountain car environment in Chapter 5, Implementing your First Learning Agent – Solving the Mountain Car problem, the environment always ...