CHAPTER 4

LINEAR LEAST-SQUARES ESTIMATION: FUNDAMENTALS

After laying the groundwork on model types and approaches to modeling, we now address the first estimation topic: the method of least squares. It will be seen that there are several different approaches to least-squares estimation and that the solution depends on the assumptions made. The simplest approach assumes only that the measurement data represent the output of a model corrupted by “measurement errors” of unknown characteristics, and the goal is to find the model parameters that give the “best” fit to the noisy data. Other approaches are similar but assume that either the variance or the probability distribution of the measurement noise is known. Finally, the “Bayesian” approach treats the state as a random variable and assumes that the statistics of both the measurement noise and the errors in the a priori state estimate are known. These approaches are known by names such as weighted least squares, minimum variance, minimum mean-squared error, best linear unbiased estimate, maximum likelihood, and maximum a posteriori. The assumptions and properties of each method are discussed in later sections. Since maximum likelihood and maximum a posteriori solutions are based on assumed probability distributions, they are not strictly “least-squares” methods. They are included in this chapter, however, because their estimates are identical to other least-squares solutions when the measurement noise (and, for maximum a posteriori, the a priori estimation error) is Gaussian-distributed.
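To make the distinction between the simplest approach and the known-variance case concrete, the following minimal Python sketch fits a linear measurement model y = Hx + v by both ordinary and weighted least squares. The straight-line model, the noise levels, and all symbol names here are illustrative assumptions chosen for the sketch, not an example from this book.

    # Sketch: ordinary vs. weighted least squares for y = H x + v.
    # The toy model and noise statistics below are assumptions for
    # illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: y_i = x0 + x1 * t_i, with additive measurement noise
    t = np.linspace(0.0, 10.0, 21)
    H = np.column_stack([np.ones_like(t), t])   # measurement (design) matrix
    x_true = np.array([1.0, 0.5])
    sigma = 0.2 + 0.05 * t                      # noise std dev grows with t
    y = H @ x_true + rng.normal(0.0, sigma)

    # Ordinary least squares: minimize ||y - H x||^2
    # (no knowledge of the noise statistics assumed)
    x_ols, *_ = np.linalg.lstsq(H, y, rcond=None)

    # Weighted least squares: minimize (y - H x)^T R^{-1} (y - H x),
    # where R = diag(sigma^2) is the assumed-known noise covariance
    W = np.diag(1.0 / sigma**2)                 # R^{-1}
    x_wls = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

    print("true   :", x_true)
    print("OLS fit:", x_ols)
    print("WLS fit:", x_wls)

When the noise variance is not constant, the weighted solution downweights the noisier measurements and generally recovers the parameters with smaller error; when the variance is constant, the two solutions coincide.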
