Markov Decision Processes

This world that we've framed up happens to be a Markov Decision Process (MDP), which has the following properties (a short code sketch after the list makes them concrete):

  • It has a finite set of states, S
  • It has a finite set of actions, A
  • P(s′ | s, a) is the probability that taking action a in state s will transition to state s′
  • R(s, s′, a) is the immediate reward for the transition between s and s′
  • γ is the discount factor, which is how much we discount future rewards relative to present rewards (more on this shortly)
