9.5 Conclusions

When an iterative algorithm is implemented without prior knowledge, selecting an appropriate initial condition is a crucial step for both convergence and reduction of computing time. Because prior information is lacking, a general and common practice is to initialize an algorithm with randomly generated initial conditions. Such an approach seems natural and is well accepted in algorithm design. However, three issues associated with it need to be addressed. First, there is a convergence issue. A bad initial condition may cause an algorithm to diverge, as with Newton's method, or to become trapped in a local optimum. This issue has received considerable attention in areas such as vector quantization. The second is an inconsistency issue. Different random initial conditions generally produce different final results. A good example is ISODATA, also known as K-means or C-means clustering. To resolve this dilemma, an algorithm is generally run with a number of different random initial conditions, and the results are then averaged to produce the final result. This does not guarantee that the obtained results are the desired ones. The third and last issue is high computational complexity. A search starting from a random initial condition can take a long journey to reach its destination, whereas an appropriate initial condition can facilitate the search process and significantly shorten the time to convergence. Unfortunately, all three of these issues have ...
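The inconsistency issue can be seen in a minimal, self-contained sketch (not taken from this book): the same K-means procedure, started from two different initial conditions, converges to two different local optima. The function `kmeans`, the 1-D data `pts`, and the particular initial centers are all hypothetical, chosen only to make the effect visible.

```python
def kmeans(points, init_centers, iters=50):
    """Plain K-means on 1-D data, started from a given list of centers."""
    centers = list(init_centers)
    k = len(centers)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster
        # (an empty cluster keeps its previous center).
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    # Within-cluster sum of squares measures the quality of the solution.
    sse = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, sse

# Hypothetical data: three well-separated groups.
pts = [0, 1, 2, 10, 11, 12, 20, 21, 22]

good_centers, good_sse = kmeans(pts, [1, 11, 21])  # one center per group
bad_centers, bad_sse = kmeans(pts, [0, 1, 2])      # all centers in one group
# good_sse is 6.0, while bad_sse is far larger: two centers stay
# stuck inside the first group, a local optimum.
```

Running K-means many times with different initializations and keeping the run with the smallest within-cluster sum of squares is a common way to mitigate, though not eliminate, this sensitivity.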
