As you might have guessed, this is the most important method of the Q_Learner class: it does the magic of learning the Q-values, which in turn enables the agent to take intelligent actions over time! The best part is that it is not that complicated to implement. It is merely the implementation of the Q-learning update equation that we saw earlier. Don't believe me when I say it is simple to implement? Alright, here is the implementation of the learning function:
    def learn(self, obs, action, reward, next_obs):
        discretized_obs = self.discretize(obs)
        discretized_next_obs = self.discretize(next_obs)
        # TD target: r + gamma * max_a' Q(s', a')
        td_target = reward + self.gamma * np.max(self.Q[discretized_next_obs])
        # TD error: how far the current estimate is from the target
        td_error = td_target - self.Q[discretized_obs][action]
        # Move Q(s, a) toward the target by a step of size alpha
        self.Q[discretized_obs][action] += self.alpha * td_error
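To see the update in isolation, here is a minimal, self-contained sketch of a single Q-learning step on a toy Q-table. The table shape (3 states, 2 actions) and the alpha and gamma values are illustrative choices, not taken from the Q_Learner class; the three update lines mirror the body of learn() above.

```python
import numpy as np

# Toy Q-table: 3 discrete states, 2 actions, all values start at zero.
Q = np.zeros((3, 2))
alpha, gamma = 0.5, 0.9  # illustrative learning rate and discount factor

# One observed transition: took action 1 in state 0, got reward 1.0, landed in state 2.
obs, action, reward, next_obs = 0, 1, 1.0, 2

td_target = reward + gamma * np.max(Q[next_obs])  # r + gamma * max_a' Q(s', a')
td_error = td_target - Q[obs][action]             # gap between target and estimate
Q[obs][action] += alpha * td_error                # step toward the target

print(Q[obs][action])  # 0.5: moved half-way from 0.0 toward the target of 1.0
```

With alpha = 0.5, a single update closes half the gap to the TD target; repeated visits to the same state-action pair would keep shrinking the remaining error geometrically.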