Regularizing priors

Using informative and weakly informative priors is a way of introducing bias into a model; if done properly, this can be a good thing because it helps to prevent overfitting.

The regularization idea is so powerful and useful that it has been discovered several times, including outside the Bayesian framework. In some fields, this idea is known as Tikhonov regularization. In non-Bayesian statistics, it takes the form of two modifications of the least-squares method, known as ridge regression and Lasso regression. From the Bayesian point of view, ridge regression can be interpreted as using normal distributions for the beta coefficients (of a linear model), with a small standard deviation that pushes the coefficients toward zero. Similarly, Lasso regression can be interpreted as placing Laplace priors on the coefficients.
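To make the ridge/normal-prior correspondence concrete, here is a minimal sketch using NumPy on synthetic data (the data, the penalty value `lam`, and the known noise scale `sigma` are all illustrative assumptions, not from the text). With a Normal(0, τ) prior on each coefficient and known noise standard deviation σ, the MAP estimate matches the ridge solution when τ = σ/√λ:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic linear-regression data (illustrative only)
n, p = 100, 3
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, -2.0, 0.5])
y = X @ true_beta + rng.normal(scale=1.0, size=n)

# Ridge regression: minimize ||y - X b||^2 + lam * ||b||^2
lam = 10.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian MAP with Normal(0, tau) priors on the betas and known noise sd sigma:
# the log-posterior adds a penalty (sigma^2 / tau^2) * ||b||^2 to least squares,
# so choosing tau = sigma / sqrt(lam) reproduces the ridge estimate.
sigma = 1.0
tau = sigma / np.sqrt(lam)
beta_map = np.linalg.solve(X.T @ X + (sigma**2 / tau**2) * np.eye(p), X.T @ y)

print(np.allclose(beta_ridge, beta_map))  # the two estimates coincide
```

Smaller values of τ (a tighter prior) correspond to larger λ, i.e., stronger shrinkage of the coefficients toward zero.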
