6.5. Gradient algorithm

We have seen previously that the optimal vector λ, that is to say the one that minimizes the cost function J(λ), is written:

$$\lambda_{\mathrm{opt}} = R^{-1}\, r$$

where R is the autocorrelation matrix of the observations and r is the cross-correlation vector between the observations and the signal to be estimated.

Now, to solve this equation, we have to invert the autocorrelation matrix. That can involve a great deal of computation if this matrix R is not a Toeplitz matrix. It is a Toeplitz matrix if R(i, j) = c(i − j), with c representing the autocorrelation function of the process.
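As an illustrative sketch (in Python with NumPy and SciPy, neither of which appears in the original text), the two routes can be compared: a general solve of the normal equations, and the cheaper Levinson-type solve that scipy.linalg.solve_toeplitz applies when R is Toeplitz. The autocorrelation sequence c and cross-correlation vector r below are hypothetical placeholders, not values from the book.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Hypothetical autocorrelation sequence c(0), c(1), ..., c(p-1) of a
# stationary process, and a hypothetical cross-correlation vector r.
c = np.array([1.0, 0.5, 0.25, 0.125])
r = np.array([0.4, 0.2, 0.1, 0.05])

# General route: form R explicitly and solve R @ lam = r, O(p^3).
R = toeplitz(c)
lam_general = np.linalg.solve(R, r)

# Toeplitz route: Levinson-type solver, O(p^2), never forms R.
lam_toeplitz = solve_toeplitz(c, r)

assert np.allclose(lam_general, lam_toeplitz)
print(lam_general)
```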

Let us examine the evolution of the cost function J(λ) plotted previously.

Let λ_K be the vector of coefficients (or weights) at instant K. If we wish to arrive at λ_opt, we must make λ_K evolve at each iteration, the step from instant K to instant K + 1 taking into account its position relative to the optimum.
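The standard recursion that accomplishes this is the classical steepest-descent update (the step size μ > 0 is not yet specified at this point in the text):

$$\lambda_{K+1} = \lambda_K - \mu \, \nabla_{\lambda} J(\lambda_K)$$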

For a given cost function J(λ), the gradient of J with respect to the vector λ is normal to the level curves of J, that is, to the contours of constant cost. Moving against the gradient therefore carries λ_K toward the minimum by the steepest route.
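A minimal sketch of this descent, assuming the quadratic Wiener cost whose gradient is ∇J(λ) = 2(Rλ − r); the step size mu, the iteration count, and the test correlations are illustrative choices, not values from the text:

```python
import numpy as np
from scipy.linalg import toeplitz

def gradient_algorithm(R, r, mu, n_iter=500):
    """Steepest descent on the quadratic Wiener cost.

    Gradient: grad J(lambda) = 2 (R lambda - r), so each iteration moves
    lambda against the gradient, i.e. perpendicular to the level curves
    of J, toward the optimum R^{-1} r.
    """
    lam = np.zeros_like(r)
    for _ in range(n_iter):
        grad = 2.0 * (R @ lam - r)
        lam = lam - mu * grad
    return lam

# Same hypothetical correlations as in the previous sketch.
c = np.array([1.0, 0.5, 0.25, 0.125])
r = np.array([0.4, 0.2, 0.1, 0.05])
R = toeplitz(c)

lam = gradient_algorithm(R, r, mu=0.1)
print(lam)                    # converges toward ...
print(np.linalg.solve(R, r))  # ... the exact Wiener solution
```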

In order for the algorithm to converge, it must, quite obviously, ...
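Although the condition is left unstated here, the classical requirement for the recursion λ_{K+1} = λ_K − 2μ(Rλ_K − r) is a standard result: the step size must satisfy

$$0 < \mu < \frac{1}{\nu_{\max}}$$

where ν_max denotes the largest eigenvalue of R (the symbol ν is used only to avoid confusion with the coefficient vector λ).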
