
Bandit Algorithms for Website Optimization

By John Myles White. Published by O'Reilly Media, Inc.
  1. Bandit Algorithms for Website Optimization
  2. Preface
    1. Finding the Code for This Book
    2. Dealing with Jargon: A Glossary
    3. Conventions Used in This Book
    4. Using Code Examples
    5. Safari® Books Online
    6. How to Contact Us
    7. Acknowledgments
  3. 1. Two Characters: Exploration and Exploitation
    1. The Scientist and the Businessman
      1. Cynthia the Scientist
      2. Bob the Businessman
      3. Oscar the Operations Researcher
    2. The Explore-Exploit Dilemma
  4. 2. Why Use Multiarmed Bandit Algorithms?
    1. What Are We Trying to Do?
    2. The Business Scientist: Web-Scale A/B Testing
  5. 3. The epsilon-Greedy Algorithm
    1. Introducing the epsilon-Greedy Algorithm
    2. Describing Our Logo-Choosing Problem Abstractly
      1. What’s an Arm?
      2. What’s a Reward?
      3. What’s a Bandit Problem?
    3. Implementing the epsilon-Greedy Algorithm
    4. Thinking Critically about the epsilon-Greedy Algorithm
  6. 4. Debugging Bandit Algorithms
    1. Monte Carlo Simulations Are Like Unit Tests for Bandit Algorithms
    2. Simulating the Arms of a Bandit Problem
    3. Analyzing Results from a Monte Carlo Study
      1. Approach 1: Track the Probability of Choosing the Best Arm
      2. Approach 2: Track the Average Reward at Each Point in Time
      3. Approach 3: Track the Cumulative Reward at Each Point in Time
    4. Exercises
  7. 5. The Softmax Algorithm
    1. Introducing the Softmax Algorithm
    2. Implementing the Softmax Algorithm
    3. Measuring the Performance of the Softmax Algorithm
    4. The Annealing Softmax Algorithm
    5. Exercises
  8. 6. UCB – The Upper Confidence Bound Algorithm
    1. Introducing the UCB Algorithm
    2. Implementing UCB
    3. Comparing Bandit Algorithms Side-by-Side
    4. Exercises
  9. 7. Bandits in the Real World: Complexity and Complications
    1. A/A Testing
    2. Running Concurrent Experiments
    3. Continuous Experimentation vs. Periodic Testing
    4. Bad Metrics of Success
    5. Scaling Problems with Good Metrics of Success
    6. Intelligent Initialization of Values
    7. Running Better Simulations
    8. Moving Worlds
    9. Correlated Bandits
    10. Contextual Bandits
    11. Implementing Bandit Algorithms at Scale
  10. 8. Conclusion
    1. Learning Life Lessons from Bandit Algorithms
    2. A Taxonomy of Bandit Algorithms
    3. Learning More and Other Topics
  11. Colophon
  12. Copyright

Chapter 8. Conclusion

Learning Life Lessons from Bandit Algorithms

In this book, we’ve presented three algorithms for solving the Multiarmed Bandit Problem, each sketched in code after this list:

  • The epsilon-Greedy Algorithm
  • The Softmax Algorithm
  • The UCB Algorithm
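
As a refresher, here is a minimal sketch of each algorithm’s arm-selection rule, in the spirit of the book’s Python examples. It assumes the bookkeeping used throughout the earlier chapters: two parallel lists, counts (how many times each arm has been played) and values (each arm’s estimated average reward). The standalone function names are ours:

    import math
    import random

    def epsilon_greedy_select(values, epsilon):
        # With probability epsilon, explore a random arm;
        # otherwise, exploit the arm with the highest estimated value.
        if random.random() < epsilon:
            return random.randrange(len(values))
        return values.index(max(values))

    def softmax_select(values, temperature):
        # Choose each arm with probability proportional to
        # exp(value / temperature): higher temperatures explore more.
        z = [math.exp(v / temperature) for v in values]
        total = sum(z)
        r, cumulative = random.random(), 0.0
        for arm, weight in enumerate(z):
            cumulative += weight / total
            if r <= cumulative:
                return arm
        return len(z) - 1

    def ucb1_select(counts, values):
        # Play every arm once, then pick the arm with the highest
        # estimated value plus an upper confidence bonus.
        for arm, n in enumerate(counts):
            if n == 0:
                return arm
        total = sum(counts)
        bonuses = [math.sqrt(2 * math.log(total) / n) for n in counts]
        scores = [v + b for v, b in zip(values, bonuses)]
        return scores.index(max(scores))

Seen side by side, the family resemblance is clear: all three algorithms rank arms by estimated value and differ only in how they inject the exploration needed to keep those estimates honest.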

To take full advantage of these three algorithms, you’ll need to develop a good intuition for how they’ll behave when you deploy them on a live website. That intuition matters because there is no universal bandit algorithm that always does the best job of optimizing a website: domain expertise and good judgment will always be necessary.

To help you develop the intuition and judgment you’ll need, we’ve advocated a Monte Carlo simulation framework that lets you see how these algorithms (and others) will behave in hypothetical worlds. By testing an algorithm in many different hypothetical worlds, you can build an appreciation for the qualitative dynamics that cause a bandit algorithm to succeed in one scenario and fail in another.
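
As a concrete reminder of what that framework looks like, here is a minimal, self-contained sketch of such a Monte Carlo harness, assuming the Bernoulli-reward arms used in the book’s simulations; the names simulate and eps_greedy are ours rather than the book’s:

    import random

    class BernoulliArm:
        # A simulated arm that pays 1.0 with probability p and 0.0 otherwise.
        def __init__(self, p):
            self.p = p

        def draw(self):
            return 1.0 if random.random() < self.p else 0.0

    def update(counts, values, arm, reward):
        # Keep a running average of the rewards observed for each arm.
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]

    def simulate(select_arm, arms, num_sims, horizon):
        # Run num_sims independent simulations of horizon steps each,
        # recording (simulation, step, chosen arm, reward) at every step.
        results = []
        for sim in range(num_sims):
            counts = [0] * len(arms)
            values = [0.0] * len(arms)
            for t in range(horizon):
                arm = select_arm(counts, values)
                reward = arms[arm].draw()
                update(counts, values, arm, reward)
                results.append((sim, t, arm, reward))
        return results

    def eps_greedy(counts, values, epsilon=0.1):
        # An inline epsilon-Greedy rule, for demonstration purposes.
        if random.random() < epsilon:
            return random.randrange(len(values))
        return values.index(max(values))

    # One hypothetical world: four weak arms and one clearly better arm.
    arms = [BernoulliArm(p) for p in [0.1, 0.1, 0.1, 0.1, 0.9]]
    results = simulate(eps_greedy, arms, num_sims=100, horizon=250)

From the records in results, you can compute any of the three diagnostics described in Chapter 4: the probability of choosing the best arm, the average reward at each point in time, or the cumulative reward at each point in time.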

In this last section, we’d like to help you further down that path by highlighting these qualitative patterns explicitly.

We’ll start off with some general life lessons that we think are exemplified by bandit algorithms, but actually apply to any situation you might ever find yourself in. Here are the most salient lessons:

Trade-offs, trade-offs, trade-offs
In the real world, you always have to trade off between gathering data and acting on that data. Pure experimentation ...
