10.2 Random PPI (RPPI)

To implement the PPI in Section 7.2.1, one must first know how many skewers, K, need to be generated; this value of K is generally determined on an empirical basis. Second, one must also know how many dimensions, denoted by q, are to be retained after dimensionality reduction. Third, although PPI does not require knowledge of the number of endmembers, p, it does require a parameter t used to threshold the PPI counts it generates in order to extract endmembers. Finally, it requires human intervention to manually select the final endmembers from those data sample vectors whose PPI counts pass the threshold t. Most importantly, for PPI to work effectively, the skewers must be generated in a sufficiently random manner that they cover as many directions onto which the data samples are projected as possible. However, this practice also comes with a drawback: the results obtained from different sets of the same number of skewers are different. Consequently, the results must be interpreted by human analysts, who manipulate them to produce the best possible sets of endmembers.
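The skewer-projection and count-thresholding steps described above can be sketched as follows. This is a minimal illustration, not the book's implementation; the function and parameter names (`ppi_counts`, `extract_candidates`, `seed`) are assumptions introduced here, and the dimensionality reduction to q dimensions is assumed to have been done beforehand.

```python
import numpy as np

def ppi_counts(data, K, seed=0):
    """Accumulate PPI counts for N samples in q-dimensional space.

    data : (N, q) array of dimensionality-reduced sample vectors.
    K    : number of random skewers (empirically chosen, as in the text).
    Illustrative sketch only; names and signature are not from the book.
    """
    rng = np.random.default_rng(seed)
    N, q = data.shape
    counts = np.zeros(N, dtype=int)
    for _ in range(K):
        skewer = rng.standard_normal(q)
        skewer /= np.linalg.norm(skewer)   # random unit direction
        proj = data @ skewer               # project all samples onto skewer
        counts[np.argmax(proj)] += 1       # extreme samples earn a count
        counts[np.argmin(proj)] += 1
    return counts

def extract_candidates(counts, t):
    """Threshold PPI counts by t to obtain candidate endmember indices;
    the text notes a human analyst still selects the final endmembers."""
    return np.flatnonzero(counts > t)
```

Because the skewers are random, two runs with different seeds generally yield different counts for the same K, which is exactly the drawback noted above.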

The RPPI presented in this section is derived from an earlier version, the automatic PPI (APPI), developed in Chaudhry et al. (2006). It inherits the original structure of PPI but remedies the drawbacks described above. Like the APPI, it does not require dimensionality reduction (DR). More specifically, RPPI converts the disadvantage ...
