Quantifying the User Experience

Book Description

You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you may hesitate to analyze the data: you may be unsure which statistical tests to use, or have trouble defending the small sample sizes typical of usability studies.

This book is a practical guide to using statistics to solve common quantitative problems in user research. It addresses questions you face every day, such as: Is the current product more usable than our competition? Can we be sure at least 70% of users can complete the task on the first attempt? How long will it take users to purchase products on the website? The book shows you which test to use and provides a foundation in both the statistical theory and best practices for applying it. The authors draw on decades of statistical literature from human factors, industrial engineering, and psychology, as well as their own published research, to provide the best solutions. They offer concrete solutions (Excel formulas, links to their own web calculators) along with an engaging discussion of why the tests work and how to communicate the results effectively.



  * Provides practical guidance on solving usability testing problems with statistics for any project, including those using Six Sigma practices
  * Shows practitioners which test to use, why the tests work, and best practices in application, along with easy-to-use Excel formulas and web calculators for analyzing data
  * Recommends ways for practitioners to communicate results to stakeholders in plain English
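
Below is a minimal sketch, in Python, of the kind of calculation the book covers (for example, the confidence interval for a completion rate discussed in Chapter 3). It uses an adjusted-Wald binomial interval, one common approach for small-sample completion rates; the function name and the example numbers are illustrative and not taken from the book, which presents such calculations with Excel formulas and web calculators.

    import math
    from statistics import NormalDist

    def adjusted_wald_interval(successes, trials, confidence=0.95):
        """Adjusted-Wald confidence interval for a binomial proportion,
        e.g. a task completion rate from a small usability sample.
        Returns (lower, upper) bounds clipped to [0, 1]."""
        # Two-sided critical value from the standard normal distribution.
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
        # Adjust the observed counts by adding z^2/2 successes and z^2 trials.
        adj_n = trials + z ** 2
        adj_p = (successes + z ** 2 / 2) / adj_n
        margin = z * math.sqrt(adj_p * (1 - adj_p) / adj_n)
        return max(0.0, adj_p - margin), min(1.0, adj_p + margin)

    # Hypothetical small-sample study: 9 of 10 users completed the task.
    low, high = adjusted_wald_interval(9, 10)
    print(f"95% CI for the completion rate: {low:.2f} to {high:.2f}")

With 9 of 10 users succeeding, the interval runs from roughly 57% to 100%, illustrating how wide the plausible range can be with the small samples typical of usability studies.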

Table of Contents

  1. Cover Image
  2. Content
  3. Title
  4. Copyright
  5. Dedication
  6. Acknowledgments
  7. About the Authors
  8. Chapter 1. Introduction and How to Use This Book
    1. Introduction
    2. The Organization of This Book
    3. How to Use This Book
    4. Key Points from the Chapter
    5. Chapter Review Questions
    6. References
  9. Chapter 2. Quantifying User Research
    1. What Is User Research?
    2. Data from User Research
    3. Usability Testing
    4. A/B Testing
    5. Survey Data
    6. Requirements Gathering
    7. Key Points from the Chapter
    8. References
  10. Chapter 3. How Precise Are Our Estimates? Confidence Intervals
    1. Introduction
    2. Confidence Interval for a Completion Rate
    3. Confidence Interval for Rating Scales and Other Continuous Data
    4. Key Points from the Chapter
    5. Chapter Review Questions
    6. References
  11. Chapter 4. Did We Meet or Exceed Our Goal?
    1. Introduction
    2. One-Tailed and Two-Tailed Tests
    3. Comparing a Completion Rate to a Benchmark
    4. Comparing a Satisfaction Score to a Benchmark
    5. Comparing a Task Time to a Benchmark
    6. Key Points from the Chapter
    7. Chapter Review Questions
    8. References
  12. Chapter 5. Is There a Statistical Difference between Designs?
    1. Introduction
    2. Comparing Two Means (Rating Scales and Task Times)
    3. Comparing Completion Rates, Conversion Rates, and A/B Testing
    4. Key Points from the Chapter
    5. Chapter Review Questions
    6. References
  13. Chapter 6. What Sample Sizes Do We Need? Part 1: Summative Studies
    1. Introduction
    2. Estimating Values
    3. Comparing Values
    4. What Can I Do to Control Variability?
    5. Sample Size Estimation for Binomial Confidence Intervals
    6. Sample Size Estimation for Chi-Square Tests (Independent Proportions)
    7. Sample Size Estimation for McNemar Exact Tests (Matched Proportions)
    8. Key Points from the Chapter
    9. Chapter Review Questions
    10. References
  14. Chapter 7. What Sample Sizes Do We Need? Part 2: Formative Studies
    1. Introduction
    2. Using a Probabilistic Model of Problem Discovery to Estimate Sample Sizes for Formative User Research
    3. Assumptions of the Binomial Probability Model
    4. Additional Applications of the Model
    5. What Affects the Value of p?
    6. What Is a Reasonable Problem Discovery Goal?
    7. Reconciling the “Magic Number 5” with “Eight Is Not Enough”
    8. More about the Binomial Probability Formula and Its Small Sample Adjustment
    9. Other Statistical Models for Problem Discovery
    10. Key Points from the Chapter
    11. Chapter Review Questions
    12. References
  15. Chapter 8. Standardized Usability Questionnaires
    1. Introduction
    2. Poststudy Questionnaires
    3. Post-task Questionnaires
    4. Questionnaires for Assessing Perceived Usability of Websites
    5. Other Questionnaires of Interest
    6. Key Points from the Chapter
    7. Chapter Review Questions
    8. References
  16. Chapter 9. Six Enduring Controversies in Measurement and Statistics
    1. Introduction
    2. Is It Okay to Average Data from Multipoint Scales?
    3. Do You Need to Test at Least 30 Users?
    4. Should You Always Conduct a Two-Tailed Test?
    5. Can You Reject the Null Hypothesis When p > 0.05?
    6. Can You Combine Usability Metrics into Single Scores?
    7. What If You Need to Run More Than One Test?
    8. Key Points from the Chapter
    9. Chapter Review Questions
    10. References
  17. Chapter 10. Wrapping Up
    1. Introduction
    2. Getting More Information
    3. Good Luck!
    4. Key Points from the Chapter
    5. References
  18. APPENDIX. A Crash Course in Fundamental Statistical Concepts
    1. Introduction
    2. Types of Data
    3. Populations and Samples
    4. Measuring Central Tendency
    5. Standard Deviation and Variance
    6. The Normal Distribution
    7. Area Under the Normal Curve
    8. Applying the Normal Curve to User Research Data
    9. Central Limit Theorem
    10. Standard Error of the Mean
    11. Margin of Error
    12. t-Distribution
    13. Significance Testing and p-Values
    14. The Logic of Hypothesis Testing
    15. Errors in Statistics
    16. Key Points from the Appendix
  19. Index