Chapter 5. Bayesian Inference in State-Space Time Series Models

The previous chapter took an important first step in constructing adaptive time series models. Within the beta-Bernoulli model and the normal linear regression model, we derived hyperparameter updating rules that could be computed on an observation-by-observation basis. We also introduced discount factors to ensure that the accumulated weight of the prior distribution did not drown out new data. The resulting balance between past and present data keeps us from arriving at spuriously precise conclusions.
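As a reminder of what such an update looks like in practice, the following is a minimal sketch of a discounted conjugate update for the beta-Bernoulli case. The function name, the baseline hyperparameters a0 and b0, and the discount factor delta are illustrative assumptions, not the book's notation; the discounting scheme shown (multiplying the accumulated hyperparameters by delta before each update) is one common way to implement a forgetting factor.

```python
import numpy as np

def discounted_beta_bernoulli(y, a0=1.0, b0=1.0, delta=0.98):
    """Sequentially update Beta(a, b) hyperparameters for Bernoulli data y,
    discounting the accumulated evidence by delta before each observation."""
    a, b = a0, b0
    post_means = []
    for y_t in y:
        # Discount the prior weight so that old data cannot overwhelm new data.
        a *= delta
        b *= delta
        # Standard conjugate update with the new observation.
        a += y_t
        b += 1 - y_t
        post_means.append(a / (a + b))  # posterior mean of the success probability
    return a, b, np.array(post_means)

# Example: a success probability that shifts halfway through the sample.
rng = np.random.default_rng(0)
y = np.concatenate([rng.binomial(1, 0.2, 200), rng.binomial(1, 0.7, 200)])
a, b, means = discounted_beta_bernoulli(y)
```

With delta < 1 the effective sample size carried by the posterior is bounded, so the posterior mean tracks the shift in the second half of the sample rather than averaging over the whole history.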

However, it is possible to improve upon this adaptive but static design in a number of ways. A successful strategy that has been employed for more than 50 years in engineering applications is to specify dynamics for the evolution of model parameters. The parameters are assumed to evolve as states on an underlying state space. More precisely, the parameters are considered to be part of a complete description of the state of the system at any point in time, rather than fixed but unknown constants. Learning about parameters may then be reframed as a problem of filtering unobservable states with the aid of observable data, rather than as a problem of estimation and inference. Filtered state estimates can be improved further by retrospectively smoothing previous estimates with the help of subsequent observations. Our task in this chapter is to understand the formulation of state-space models and the related problems ...
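To make the filtering-and-smoothing idea concrete, here is a minimal sketch for the simplest state-space model, a local level observed with noise, using a Kalman filter forward pass and a Rauch-Tung-Striebel smoothing pass. The variable names and the noise variances are illustrative assumptions rather than the chapter's notation, and the known-variance setting is chosen only to keep the example short.

```python
import numpy as np

def local_level_filter_smoother(y, sigma_obs=1.0, sigma_state=0.1, m0=0.0, C0=10.0):
    """Kalman filter and RTS smoother for the local level model:
        y_t     = theta_t + v_t,      v_t ~ N(0, sigma_obs^2)
        theta_t = theta_{t-1} + w_t,  w_t ~ N(0, sigma_state^2)."""
    n = len(y)
    m = np.zeros(n)   # filtered state means
    C = np.zeros(n)   # filtered state variances
    a = np.zeros(n)   # one-step-ahead prior means
    R = np.zeros(n)   # one-step-ahead prior variances

    m_prev, C_prev = m0, C0
    for t in range(n):
        # Predict: the state drifts by one random-walk step.
        a[t] = m_prev
        R[t] = C_prev + sigma_state**2
        # Update: weight the prediction against the new observation.
        K = R[t] / (R[t] + sigma_obs**2)          # Kalman gain
        m[t] = a[t] + K * (y[t] - a[t])
        C[t] = (1 - K) * R[t]
        m_prev, C_prev = m[t], C[t]

    # Backward pass: revise earlier estimates using later observations.
    s = m.copy()
    S = C.copy()
    for t in range(n - 2, -1, -1):
        J = C[t] / R[t + 1]
        s[t] = m[t] + J * (s[t + 1] - a[t + 1])
        S[t] = C[t] + J**2 * (S[t + 1] - R[t + 1])
    return m, C, s, S

# Example: noisy observations of a slowly drifting level.
rng = np.random.default_rng(1)
theta = np.cumsum(rng.normal(0.0, 0.1, 300))
y = theta + rng.normal(0.0, 1.0, 300)
filtered, _, smoothed, _ = local_level_filter_smoother(y)
```

The forward pass produces the filtered estimates available in real time; the backward pass illustrates the retrospective smoothing mentioned above, in which each earlier state estimate is revised once the subsequent observations are known.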
