## Book Description

Bayesian inference provides a simple and unified approach to data analysis, allowing experimenters to assign probabilities to competing hypotheses of interest on the basis of the current state of knowledge. By incorporating relevant prior information, it can sometimes improve model parameter estimates by many orders of magnitude. This book provides a clear exposition of the underlying concepts with many worked examples and problem sets. It also discusses implementation, including an introduction to Markov chain Monte Carlo integration and linear and nonlinear model fitting. Particularly extensive coverage of spectral analysis (detecting and measuring periodic signals) includes a self-contained introduction to Fourier and discrete Fourier methods. There is a chapter devoted to Bayesian inference with Poisson sampling, and three chapters on frequentist methods help to bridge the gap between the frequentist and Bayesian approaches. Supporting Mathematica® notebooks with solutions to selected problems, additional worked examples, and a Mathematica tutorial are available at www.cambridge.org/9780521150125.
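The core idea the description summarizes — using Bayes' theorem to assign posterior probabilities to competing hypotheses — can be sketched in a few lines. This is an illustrative example with made-up data, not taken from the book (whose notebooks are in Mathematica): two hypotheses for a coin, compared via their likelihoods under a binomial sampling model.

```python
from math import comb

def binom_lik(k, n, p):
    # Binomial likelihood p(D | p, I): probability of k heads in n tosses
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 14 heads in 20 tosses (illustrative, not from the book)
k, n = 14, 20

# Two competing hypotheses, assigned equal prior probabilities
lik_fair = binom_lik(k, n, 0.5)  # H1: fair coin, p = 0.5
lik_bias = binom_lik(k, n, 0.7)  # H2: biased coin, p = 0.7

# Bayes' theorem with equal priors: the posterior odds ratio
# p(H1|D,I)/p(H2|D,I) reduces to the likelihood ratio
odds = lik_fair / lik_bias
post_fair = odds / (1 + odds)    # posterior probability of H1
print(f"odds(H1/H2) = {odds:.3f}, p(H1|D,I) = {post_fair:.3f}")
```

With these numbers the data favor the biased-coin hypothesis by roughly 5 to 1; chapters 3 and 4 of the book develop this machinery (odds ratios, likelihood construction, priors) in full.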

## Table of Contents

- Cover
- Half Title
- Title Page
- Contents
- Preface
- Acknowledgements
- 1\. Role of probability theory in science
  - 1.1 Scientific inference
  - 1.2 Inference requires a probability theory
  - 1.3 Usual form of Bayes’ theorem
  - 1.4 Probability and frequency
  - 1.5 Marginalization
  - 1.6 The two basic problems in statistical inference
  - 1.7 Advantages of the Bayesian approach
  - 1.8 Problems
- 2\. Probability theory as extended logic
  - 2.1 Overview
  - 2.2 Fundamentals of logic
  - 2.3 Brief history
  - 2.4 An adequate set of operations
  - 2.5 Operations for plausible inference
  - 2.6 Uniqueness of the product and sum rules
  - 2.7 Summary
  - 2.8 Problems
- 3\. The how-to of Bayesian inference
  - 3.1 Overview
  - 3.2 Basics
  - 3.3 Parameter estimation
  - 3.4 Nuisance parameters
  - 3.5 Model comparison and Occam’s razor
  - 3.6 Sample spectral line problem
  - 3.7 Odds ratio
  - 3.8 Parameter estimation problem
  - 3.9 Lessons
  - 3.10 Ignorance priors
  - 3.11 Systematic errors
  - 3.12 Problems
- 4\. Assigning probabilities
  - 4.1 Introduction
  - 4.2 Binomial distribution
  - 4.3 Multinomial distribution
  - 4.4 Can you really answer that question?
  - 4.5 Logical versus causal connections
  - 4.6 Exchangeable distributions
  - 4.7 Poisson distribution
  - 4.8 Constructing likelihood functions
  - 4.9 Summary
  - 4.10 Problems
- 5\. Frequentist statistical inference
  - 5.1 Overview
  - 5.2 The concept of a random variable
  - 5.3 Sampling theory
  - 5.4 Probability distributions
  - 5.5 Descriptive properties of distributions
  - 5.6 Moment generating functions
  - 5.7 Some discrete probability distributions
  - 5.8 Continuous probability distributions
  - 5.9 Central Limit Theorem
  - 5.10 Bayesian demonstration of the Central Limit Theorem
  - 5.11 Distribution of the sample mean
  - 5.12 Transformation of a random variable
  - 5.13 Random and pseudo-random numbers
  - 5.14 Summary
  - 5.15 Problems
- 6\. What is a statistic?
  - 6.1 Introduction
  - 6.2 The χ² distribution
  - 6.3 Sample variance S²
  - 6.4 The Student’s t distribution
  - 6.5 F distribution (F-test)
  - 6.6 Confidence intervals
  - 6.7 Summary
  - 6.8 Problems
- 7\. Frequentist hypothesis testing
  - 7.1 Overview
  - 7.2 Basic idea
  - 7.3 Are two distributions the same?
  - 7.4 Problem with frequentist hypothesis testing
  - 7.5 Problems
- 8\. Maximum entropy probabilities
  - 8.1 Overview
  - 8.2 The maximum entropy principle
  - 8.3 Shannon’s theorem
  - 8.4 Alternative justification of MaxEnt
  - 8.5 Generalizing MaxEnt
  - 8.6 How to apply the MaxEnt principle
  - 8.7 MaxEnt distributions
  - 8.8 MaxEnt image reconstruction
  - 8.9 Pixon multiresolution image reconstruction
  - 8.10 Problems
- 9\. Bayesian inference with Gaussian errors
  - 9.1 Overview
  - 9.2 Bayesian estimate of a mean
  - 9.3 Is the signal variable?
  - 9.4 Comparison of two independent samples
  - 9.5 Summary
  - 9.6 Problems
- 10\. Linear model fitting (Gaussian errors)
  - 10.1 Overview
  - 10.2 Parameter estimation
  - 10.3 Regression analysis
  - 10.4 The posterior is a Gaussian
  - 10.5 Model parameter errors
  - 10.6 Correlated data errors
  - 10.7 Model comparison with Gaussian posteriors
  - 10.8 Frequentist testing and errors
  - 10.9 Summary
  - 10.10 Problems
- 11\. Nonlinear model fitting
  - 11.1 Introduction
  - 11.2 Asymptotic normal approximation
  - 11.3 Laplacian approximations
  - 11.4 Finding the most probable parameters
  - 11.5 Iterative linearization
  - 11.6 Mathematica example
  - 11.7 Errors in both coordinates
  - 11.8 Summary
  - 11.9 Problems
- 12\. Markov chain Monte Carlo
  - 12.1 Overview
  - 12.2 Metropolis–Hastings algorithm
  - 12.3 Why does Metropolis–Hastings work?
  - 12.4 Simulated tempering
  - 12.5 Parallel tempering
  - 12.6 Example
  - 12.7 Model comparison
  - 12.8 Towards an automated MCMC
  - 12.9 Extrasolar planet example
  - 12.10 MCMC robust summary statistic
  - 12.11 Summary
  - 12.12 Problems
- 13\. Bayesian revolution in spectral analysis
  - 13.1 Overview
  - 13.2 New insights on the periodogram
  - 13.3 Strong prior signal model
  - 13.4 No specific prior signal model
  - 13.5 Generalized Lomb–Scargle periodogram
  - 13.6 Non-uniform sampling
  - 13.7 Problems
- 14\. Bayesian inference with Poisson sampling
  - 14.1 Overview
  - 14.2 Infer a Poisson rate
  - 14.3 Signal + known background
  - 14.4 Analysis of ON/OFF measurements
  - 14.5 Time-varying Poisson rate
  - 14.6 Problems
- Appendix A: Singular value decomposition
- Appendix B: Discrete Fourier Transforms
  - B.1 Overview
  - B.2 Orthogonal and orthonormal functions
  - B.3 Fourier series and integral transform
  - B.4 Convolution and correlation
  - B.5 Waveform sampling
  - B.6 Nyquist sampling theorem
  - B.7 Discrete Fourier Transform
  - B.8 Applying the DFT
  - B.9 The Fast Fourier Transform
  - B.10 Discrete convolution and correlation
  - B.11 Accurate amplitudes by zero padding
  - B.12 Power-spectrum estimation
  - B.13 Discrete power spectral density estimation
  - B.14 Problems
- Appendix C: Difference in two samples
  - C.1 Outline
  - C.2 Probabilities of the four hypotheses
  - C.3 The difference in the means
  - C.4 The ratio of the standard deviations
- Appendix D: Poisson ON/OFF details
  - D.1 Derivation of p(s | N_on, I)
  - D.2 Derivation of the Bayes factor B_{s+b,b}
- Appendix E: Multivariate Gaussian from maximum entropy
- References
- Index