9.1 INTRODUCTION

In this chapter, we are interested in finding a function of a set of random variables that can be used to estimate an unknown parameter of the underlying distribution. Such a function is called a statistic, and it generally maps the random variables, called samples, to a lower-dimensional quantity without loss of “information” about the parameter. The scenario can be summarized as follows:

  • Let X(t) be a random process with fixed probability density function (pdf) fX(x) that is parameterized by scalar θ. Assume that θ is not observable: it cannot be sampled directly and information about θ is obtained only through samples of X(t).
  • The random process is sampled N times, yielding N independent and identically distributed (iid) random variables {Xn}. Although the {Xn} contain identical information about the unknown θ, their specific outcomes generally vary.
  • The goal after sampling is to combine the {Xn} using functions that give “simpler” quantities for estimating the unknown θ. Since N is usually much larger than the number of parameters, combining the {Xn} condenses the samples to one or more random variables called sufficient statistics. These statistics are “sufficient” because information about θ is not lost by the mapping from N variables to a lower dimension; a concrete sketch follows this list.

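To make the scenario concrete, the following is a minimal sketch (not from the text, and assuming a particular model): X(t) is taken to be Gaussian with unknown mean θ and known variance, in which case the sample mean is a sufficient statistic for θ, condensing the N samples to a single estimating random variable.

    # Minimal sketch (assumed model, not from the text): N iid Gaussian
    # samples with unknown mean theta and known standard deviation sigma.
    # The sample mean is a sufficient statistic for theta: it maps the N
    # samples to one random variable with no loss of information about theta.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.5   # the unknown parameter (chosen here only to simulate data)
    sigma = 1.0   # known standard deviation
    N = 1000      # number of samples of the process

    X = rng.normal(loc=theta, scale=sigma, size=N)   # the samples {Xn}

    theta_hat = X.mean()   # sufficient statistic: condenses N values to one
    print(f"estimate of theta from N={N} samples: {theta_hat:.3f}")
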
The {Xn} could also be derived from samples of a random sequence X[k], or they may simply be N samples of a particular ...
