Chapter 6. Fully Distributed Learning Algorithms
In fully distributed learning algorithms, we assume that players have access to less information about the other players and about the history of the game. Since it is not always apparent that such problems can be formulated as games, a general game model for fully distributed learning is presented.
Learning by experimentation and trial-and-error can come close to pure Nash equilibrium play under certain conditions.
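As a rough illustration of this idea, the following sketch implements a simplified experimentation dynamic in a hypothetical 2x2 coordination game (the payoff matrices, the experimentation rate, and the benchmark-update rule are all illustrative assumptions, not the specific trial-and-error procedure analyzed in this chapter). Each player keeps a benchmark action and payoff, occasionally experiments with a random action, and adopts the trial only if it strictly improved the payoff:

```python
import random

random.seed(0)

# Hypothetical 2x2 coordination game; pure Nash equilibria at (0, 0) and (1, 1).
U1 = [[2.0, 0.0], [0.0, 1.0]]  # row player's payoffs
U2 = [[2.0, 0.0], [0.0, 1.0]]  # column player's payoffs

EPS = 0.1                       # experimentation probability (assumed value)
bench_a = [0, 1]                # benchmark actions, deliberately off-equilibrium
bench_u = [U1[0][1], U2[0][1]]  # benchmark payoffs at the profile (0, 1)

for _ in range(2000):
    tried = [random.random() < EPS for _ in range(2)]
    act = [random.randrange(2) if tried[i] else bench_a[i] for i in range(2)]
    pay = [U1[act[0]][act[1]], U2[act[0]][act[1]]]
    for i in range(2):
        if tried[i] and pay[i] > bench_u[i]:
            # an experiment that strictly improved the payoff is adopted
            bench_a[i], bench_u[i] = act[i], pay[i]
        elif not tried[i]:
            # otherwise track the payoff of the current benchmark action
            # (it may fluctuate when the opponent experiments)
            bench_u[i] = pay[i]

print(bench_a)  # with high probability a pure Nash equilibrium profile
```

Each player uses only its own realized payoff, never the opponent's action or utility, which is the sense in which the scheme is fully distributed.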
In reinforcement learning, players relate the utilities they have received to the actions previously taken, and use this relationship to optimize future rewards.
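A minimal sketch of this payoff-based idea is cumulative (Erev-Roth-style) reinforcement: the player keeps a propensity for each action, chooses actions with probability proportional to propensities, and reinforces the chosen action by the realized payoff. The payoff values and initial propensities below are illustrative assumptions:

```python
import random

random.seed(0)

payoffs = [1.0, 0.1]       # assumed deterministic payoffs; action 0 is better
propensities = [1.0, 1.0]  # initial propensities (assumed)

for _ in range(1000):
    total = sum(propensities)
    probs = [p / total for p in propensities]
    # sample an action with probability proportional to its propensity
    action = random.choices([0, 1], weights=probs)[0]
    # reinforce the chosen action by the payoff it produced
    propensities[action] += payoffs[action]

total = sum(propensities)
probs = [p / total for p in propensities]
print(probs)  # the probability of the better action 0 grows toward 1
```

Note that the update uses only the player's own realized payoff, with no knowledge of other players' actions or utilities.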
Regret minimization and Boltzmann-Gibbs learning algorithms are also considered, in which the maxima and equilibria are approached after a reasonably small number of iterations.
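To illustrate how few iterations are needed, the following sketch runs regret matching in self-play on matching pennies, using expected payoffs against the opponent's current mixed strategy (a full-feedback simplification, assumed for this example; the game and initial regrets are likewise illustrative). Each player accumulates regrets and plays actions with probability proportional to the positive part of the regrets; Boltzmann-Gibbs learning would instead pass payoff estimates through a softmax. The time-averaged strategies approach the mixed equilibrium (0.5, 0.5):

```python
def strategy_from_regrets(regrets):
    """Play proportionally to positive regrets; uniform if none are positive."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total <= 0.0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in pos]

# Matching pennies: the row player wants to match, the column player to mismatch.
U1 = [[1.0, -1.0], [-1.0, 1.0]]
U2 = [[-1.0, 1.0], [1.0, -1.0]]

R1, R2 = [1.0, 0.0], [0.0, 0.0]  # cumulative regrets (asymmetric start, assumed)
S1, S2 = [0.0, 0.0], [0.0, 0.0]  # running sums of the strategies played

T = 10000
for _ in range(T):
    s1 = strategy_from_regrets(R1)
    s2 = strategy_from_regrets(R2)
    # expected payoff of each pure action against the opponent's current mix
    a1 = [sum(U1[i][j] * s2[j] for j in range(2)) for i in range(2)]
    a2 = [sum(U2[i][j] * s1[i] for i in range(2)) for j in range(2)]
    v1 = sum(s1[i] * a1[i] for i in range(2))  # expected payoff actually earned
    v2 = sum(s2[j] * a2[j] for j in range(2))
    for i in range(2):
        R1[i] += a1[i] - v1  # regret for not having played action i
        R2[i] += a2[i] - v2
        S1[i] += s1[i]
        S2[i] += s2[i]

avg1 = [x / T for x in S1]
avg2 = [x / T for x in S2]
print(avg1, avg2)  # both close to the mixed equilibrium [0.5, 0.5]
```

The current strategies oscillate, but the averages settle near the equilibrium at a rate of order one over the square root of the number of iterations.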
In the schemes ...