Book Description

The standard rules of probability can be interpreted as uniquely valid principles in logic. In this book, E. T. Jaynes dispels the imaginary distinction between 'probability theory' and 'statistical inference', leaving a logical unity and simplicity which provides greater technical power and flexibility in applications. This book goes beyond the conventional mathematics of probability theory, viewing the subject in a wider context. New results are discussed, along with applications of probability theory to a wide variety of problems in physics, mathematics, economics, chemistry and biology. It contains many exercises and problems, and is suitable for use as a textbook in graduate-level courses involving data analysis. The material is aimed at readers who are already familiar with applied mathematics at an advanced undergraduate level or higher. The book will be of interest to scientists working in any area where inference from incomplete information is necessary.
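To give a flavor of the "rules of probability as logic" the description refers to, here is a minimal illustrative sketch (not taken from the book): the product rule P(AB|C) = P(A|B,C)P(B|C) and the sum rule P(A|C) + P(not-A|C) = 1, from which Bayes' theorem follows. The numbers below are an invented example, not data from the text.

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """P(A|B) = P(B|A) * P(A) / P(B), a direct consequence of the product rule:
    P(A|B,C) P(B|C) = P(B|A,C) P(A|C)."""
    return p_b_given_a * p_a / p_b

# Invented example: a diagnostic test with 95% sensitivity, a 1% base rate,
# and a 5% false-positive rate.
p_a = 0.01                                    # prior P(A)
p_b_given_a = 0.95                            # likelihood P(B|A)
p_b = p_b_given_a * p_a + 0.05 * (1 - p_a)    # P(B), via the sum and product rules
posterior = bayes(p_b_given_a, p_a, p_b)
print(round(posterior, 3))                    # prints 0.161
```

Even with a highly sensitive test, the low base rate keeps the posterior around 16% — the kind of reasoning from incomplete information the book develops systematically.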

1. Cover
2. Half Title
3. Title Page
5. Dedication
6. Contents
7. Editor’s foreword
8. Preface
9. Part I: Principles and elementary applications
    1. Plausible reasoning
        1.1 Deductive and plausible reasoning
        1.2 Analogies with physical theories
        1.3 The thinking computer
        1.4 Introducing the robot
        1.5 Boolean algebra
        1.6 Adequate sets of operations
        1.7 The basic desiderata
    2. The quantitative rules
        2.1 The product rule
        2.2 The sum rule
        2.3 Qualitative properties
        2.4 Numerical values
        2.5 Notation and finite-sets policy
    3. Elementary sampling theory
        3.1 Sampling without replacement
        3.2 Logic vs. propensity
        3.3 Reasoning from less precise information
        3.4 Expectations
        3.5 Other forms and extensions
        3.6 Probability as a mathematical tool
        3.7 The binomial distribution
        3.8 Sampling with replacement
        3.9 Correction for correlations
        3.10 Simplification
    4. Elementary hypothesis testing
        4.1 Prior probabilities
        4.2 Testing binary hypotheses with binary data
        4.3 Nonextensibility beyond the binary case
        4.4 Multiple hypothesis testing
        4.5 Continuous probability distribution functions
        4.6 Testing an infinite number of hypotheses
        4.7 Simple and compound (or composite) hypotheses
    5. Queer uses for probability theory
        5.1 Extrasensory perception
        5.2 Mrs Stewart’s telepathic powers
        5.3 Converging and diverging views
        5.4 Visual perception – evolution into Bayesianity?
        5.5 The discovery of Neptune
        5.6 Horse racing and weather forecasting
        5.8 Bayesian jurisprudence
    6. Elementary parameter estimation
    7. The central, Gaussian or normal distribution
    8. Sufficiency, ancillarity, and all that
        8.1 Sufficiency
        8.2 Fisher sufficiency
        8.3 Generalized sufficiency
        8.4 Sufficiency plus nuisance parameters
        8.5 The likelihood principle
        8.6 Ancillarity
        8.7 Generalized ancillary information
        8.8 Asymptotic likelihood: Fisher information
        8.9 Combining evidence from different sources
        8.10 Pooling the data
        8.11 Sam’s broken thermometer
    9. Repetitive experiments: probability and frequency
        9.1 Physical experiments
        9.2 The poorly informed robot
        9.3 Induction
        9.4 Are there general inductive rules?
        9.5 Multiplicity factors
        9.6 Partition function algorithms
        9.7 Entropy algorithms
        9.8 Another way of looking at it
        9.9 Entropy maximization
        9.10 Probability and frequency
        9.11 Significance tests
        9.12 Comparison of psi and chi-squared
        9.13 The chi-squared test
        9.14 Generalization
        9.15 Halley’s mortality table
    10. Physics of ‘random experiments’
10. Part II: Advanced applications
    11. Discrete prior probabilities: the entropy principle
    12. Ignorance priors and transformation groups
        12.1 What are we trying to do?
        12.2 Ignorance priors
        12.3 Continuous distributions
        12.4 Transformation groups
    13. Decision theory, historical background
        13.1 Inference vs. decision
        13.2 Daniel Bernoulli’s suggestion
        13.3 The rationale of insurance
        13.4 Entropy and utility
        13.5 The honest weatherman
        13.6 Reactions to Daniel Bernoulli and Laplace
        13.7 Wald’s decision theory
        13.8 Parameter estimation for minimum loss
        13.9 Reformulation of the problem
        13.10 Effect of varying loss functions
        13.11 General decision theory
    14. Simple applications of decision theory
        14.1 Definitions and preliminaries
        14.2 Sufficiency and information
        14.3 Loss functions and criteria of optimum performance
        14.4 A discrete example
        14.5 How would our robot do it?
        14.6 Historical remarks
        14.7 The widget problem
    15. Paradoxes of probability theory
        15.1 How do paradoxes survive and grow?
        15.2 Summing a series the easy way
        15.3 Nonconglomerability
        15.4 The tumbling tetrahedra
        15.5 Solution for a finite number of tosses
        15.6 Finite vs. countable additivity
        15.9 Discussion
        15.10 A useful result after all?
        15.11 How to mass-produce paradoxes
    16. Orthodox methods: historical background
    17. Principles and pathology of orthodox statistics
        17.1 Information loss
        17.2 Unbiased estimators
        17.3 Pathology of an unbiased estimate
        17.4 The fundamental inequality of the sampling variance
        17.5 Periodicity: the weather in Central Park
        17.6 A Bayesian analysis
        17.7 The folly of randomization
        17.8 Fisher: common sense at Rothamsted
        17.9 Missing data
        17.10 Trend and seasonality in time series
        17.11 The general case
    18. The Ap distribution and rule of succession
    19. Physical measurements
        19.1 Reduction of equations of condition
        19.2 Reformulation as a decision problem
        19.3 The underdetermined case: K is singular
        19.4 The overdetermined case: K can be made nonsingular
        19.5 Numerical evaluation of the result
        19.6 Accuracy of the estimates
    20. Model comparison
        20.1 Formulation of the problem
        20.2 The fair judge and the cruel realist
        20.3 But where is the idea of simplicity?
        20.4 An example: linear response models
    21. Outliers and robustness
    22. Introduction to communication theory
11. Appendix A: Other approaches to probability theory
12. Appendix B: Mathematical formalities and style
    B.1 Notation and logical hierarchy
    B.2 Our ‘cautious approach’ policy
    B.3 Willy Feller on measure theory
    B.4 Kronecker vs. Weierstrasz
    B.5 What is a legitimate mathematical function?
    B.6 Counting infinite sets?
    B.7 The Hausdorff sphere paradox and mathematical diseases
    B.8 What am I supposed to publish?
    B.9 Mathematical courtesy
13. Appendix C: Convolutions and cumulants
14. References
15. Bibliography
16. Author index
17. Subject index