Doing Bayesian Data Analysis

Book Description

There is an explosion of interest in Bayesian statistics, primarily because recently developed computational methods have finally made Bayesian analysis tractable and accessible to a wide audience. Doing Bayesian Data Analysis: A Tutorial Introduction with R and BUGS is for first-year graduate students or advanced undergraduates and provides an accessible approach, as all mathematics is explained intuitively and with concrete examples. It assumes only algebra and ‘rusty’ calculus. Unlike other textbooks, this book begins with the basics, including essential concepts of probability and random sampling, and gradually climbs all the way to advanced hierarchical modeling methods for realistic data. The text provides complete examples with the R programming language and BUGS software (both freeware), beginning with basic programming examples and working up gradually to complete programs for complex analyses and presentation graphics. These templates can easily be adapted to students' own research needs. The textbook bridges students from their undergraduate training into modern Bayesian methods.



-Accessible, beginning with the basics: essential concepts of probability and random sampling

-Examples with the R programming language and BUGS software

-Comprehensive coverage of all scenarios addressed by non-Bayesian textbooks: t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis)

-Coverage of experiment planning

-R and BUGS computer programming code on website

-Exercises have explicit purposes and guidelines for accomplishment
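As a taste of the kind of analysis the book teaches (Chapter 6 infers a binomial proportion via grid approximation), here is a minimal sketch of that technique — written in Python for illustration rather than taken from the book's R/BUGS code, with hypothetical data (7 heads in 10 flips):

```python
import numpy as np

# Grid approximation of the posterior for a binomial proportion theta,
# in the spirit of Chapter 6 (this is an illustrative sketch, not the book's code).
theta = np.linspace(0.0, 1.0, 1001)      # grid of candidate proportions
prior = np.ones_like(theta)              # uniform prior over the grid
prior /= prior.sum()                     # normalize so the prior sums to 1

heads, flips = 7, 10                     # hypothetical data: 7 heads in 10 flips
likelihood = theta**heads * (1 - theta)**(flips - heads)  # Bernoulli likelihood

posterior = prior * likelihood           # Bayes' rule, up to a constant
posterior /= posterior.sum()             # normalize over the grid

post_mean = (theta * posterior).sum()    # posterior mean estimate of theta
```

With a uniform prior this matches the exact Beta-distribution answer of Chapter 5 (posterior mean (7+1)/(10+2) ≈ 0.667), which is what makes grid approximation a useful stepping stone to the MCMC methods of Chapters 7 and 8.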

Table of Contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Dedication
  6. Chapter 1. This Book’s Organization
    1. 1.1 Real People Can Read This Book
    2. 1.2 Prerequisites
    3. 1.3 The Organization of This Book
    4. 1.4 Gimme Feedback (Be Polite)
    5. 1.5 Acknowledgments
  7. Part 1: The Basics: Parameters, Probability, Bayes’ Rule, and R
    1. Chapter 2. Introduction
      1. 2.1 Models of Observations and Models of Beliefs
      2. 2.2 Three Goals for Inference from Data
      3. 2.3 The R Programming Language
      4. 2.4 Exercises
    2. Chapter 3. What Is This Stuff Called Probability?
      1. 3.1 The Set of All Possible Events
      2. 3.2 Probability: Outside or Inside the Head
      3. 3.3 Probability Distributions
      4. 3.4 Two-Way Distributions
      5. 3.5 R Code
      6. 3.6 Exercises
    3. Chapter 4. Bayes’ Rule
      1. 4.1 Bayes’ Rule
      2. 4.2 Applied to Models and Data
      3. 4.3 The Three Goals of Inference
      4. 4.4 R Code
      5. 4.5 Exercises
  8. Part 2: All the Fundamentals Applied to Inferring a Binomial Proportion
    1. Chapter 5. Inferring a Binomial Proportion via Exact Mathematical Analysis
      1. 5.1 The Likelihood Function: Bernoulli Distribution
      2. 5.2 A Description of Beliefs: The Beta Distribution
      3. 5.3 Three Inferential Goals
      4. 5.4 Summary: How to Do Bayesian Inference
      5. 5.5 R Code
      6. 5.6 Exercises
    2. Chapter 6. Inferring a Binomial Proportion via Grid Approximation
      1. 6.1 Bayes’ Rule for Discrete Values of θ
      2. 6.2 Discretizing a Continuous Prior Density
      3. 6.3 Estimation
      4. 6.4 Prediction of Subsequent Data
      5. 6.5 Model Comparison
      6. 6.6 Summary
      7. 6.7 R Code
      8. 6.8 Exercises
    3. Chapter 7. Inferring a Binomial Proportion via the Metropolis Algorithm
      1. 7.1 A Simple Case of the Metropolis Algorithm
      2. 7.2 The Metropolis Algorithm More Generally
      3. 7.3 From the Sampled Posterior to the Three Goals
      4. 7.4 MCMC in BUGS
      5. 7.5 Conclusion
      6. 7.6 R Code
      7. 7.7 Exercises
    4. Chapter 8. Inferring Two Binomial Proportions via Gibbs Sampling
      1. 8.1 Prior, Likelihood, and Posterior for Two Proportions
      2. 8.2 The Posterior via Exact Formal Analysis
      3. 8.3 The Posterior via Grid Approximation
      4. 8.4 The Posterior via Markov Chain Monte Carlo
      5. 8.5 Doing It with BUGS
      6. 8.6 How Different Are the Underlying Biases?
      7. 8.7 Summary
      8. 8.8 R Code
      9. 8.9 Exercises
    5. Chapter 9. Bernoulli Likelihood with Hierarchical Prior
      1. 9.1 A Single Coin from a Single Mint
      2. 9.2 Multiple Coins from a Single Mint
      3. 9.3 Multiple Coins from Multiple Mints
      4. 9.4 Summary
      5. 9.5 R Code
      6. 9.6 Exercises
    6. Chapter 10. Hierarchical Modeling and Model Comparison
      1. 10.1 Model Comparison as Hierarchical Modeling
      2. 10.2 Model Comparison in BUGS
      3. 10.3 Model Comparison and Nested Models
      4. 10.4 Review of Hierarchical Framework for Model Comparison
      5. 10.5 Exercises
    7. Chapter 11. Null Hypothesis Significance Testing
      1. 11.1 NHST for the Bias of a Coin
      2. 11.2 Prior Knowledge about the Coin
      3. 11.3 Confidence Interval and Highest Density Interval
      4. 11.4 Multiple Comparisons
      5. 11.5 What a Sampling Distribution Is Good For
      6. 11.6 Exercises
    8. Chapter 12. Bayesian Approaches to Testing a Point (“Null”) Hypothesis
      1. 12.1 The Estimation (Single Prior) Approach
      2. 12.2 The Model-Comparison (Two-Prior) Approach
      3. 12.3 Estimation or Model Comparison?
      4. 12.4 R Code
      5. 12.5 Exercises
    9. Chapter 13. Goals, Power, and Sample Size
      1. 13.1 The Will to Power
      2. 13.2 Sample Size for a Single Coin
      3. 13.3 Sample Size for Multiple Mints
      4. 13.4 Power: Prospective, Retrospective, and Replication
      5. 13.5 The Importance of Planning
      6. 13.6 R Code
      7. 13.7 Exercises
  9. Part 3: Applied to the Generalized Linear Model
    1. Chapter 14. Overview of the Generalized Linear Model
      1. 14.1 The Generalized Linear Model (GLM)
      2. 14.2 Cases of the GLM
      3. 14.3 Exercises
    2. Chapter 15. Metric Predicted Variable on a Single Group
      1. 15.1 Estimating the Mean and Precision of a Normal Likelihood
      2. 15.2 Repeated Measures and Individual Differences
      3. 15.3 Summary
      4. 15.4 R Code
      5. 15.5 Exercises
    3. Chapter 16. Metric Predicted Variable with One Metric Predictor
      1. 16.1 Simple Linear Regression
      2. 16.2 Outliers and Robust Regression
      3. 16.3 Simple Linear Regression with Repeated Measures
      4. 16.4 Summary
      5. 16.5 R Code
      6. 16.6 Exercises
    4. Chapter 17. Metric Predicted Variable with Multiple Metric Predictors
      1. 17.1 Multiple Linear Regression
      2. 17.2 Hyperpriors and Shrinkage of Regression Coefficients
      3. 17.3 Multiplicative Interaction of Metric Predictors
      4. 17.4 Which Predictors Should Be Included?
      5. 17.5 R Code
      6. 17.6 Exercises
    5. Chapter 18. Metric Predicted Variable with One Nominal Predictor
      1. 18.1 Bayesian Oneway ANOVA
      2. 18.2 Multiple Comparisons
      3. 18.3 Two-Group Bayesian ANOVA and the NHST t Test
      4. 18.4 R Code
      5. 18.5 Exercises
    6. Chapter 19. Metric Predicted Variable with Multiple Nominal Predictors
      1. 19.1 Bayesian Multifactor ANOVA
      2. 19.2 Repeated Measures, a.k.a. Within-Subject Designs
      3. 19.3 R Code
      4. 19.4 Exercises
    7. Chapter 20. Dichotomous Predicted Variable
      1. 20.1 Logistic Regression
      2. 20.2 Interaction of Predictors in Logistic Regression
      3. 20.3 Logistic ANOVA
      4. 20.4 Summary
      5. 20.5 R Code
      6. 20.6 Exercises
    8. Chapter 21. Ordinal Predicted Variable
      1. 21.1 Ordinal Probit Regression
      2. 21.2 Some Examples
      3. 21.3 Interaction
      4. 21.4 Relation to Linear and Logistic Regression
      5. 21.5 R Code
      6. 21.6 Exercises
    9. Chapter 22. Contingency Table Analysis
      1. 22.1 Poisson Exponential ANOVA
      2. 22.2 Examples
      3. 22.3 Log Linear Models for Contingency Tables
      4. 22.4 R Code for the Poisson Exponential Model
      5. 22.5 Exercises
    10. Chapter 23. Tools in the Trunk
      1. 23.1 Reporting a Bayesian Analysis
      2. 23.2 MCMC Burn-in and Thinning
      3. 23.3 Functions for Approximating Highest Density Intervals
      4. 23.4 Reparameterization of Probability Distributions
  10. Index