Partial least squares
This chapter provides a detailed analysis of PLS and its maximum redundancy formulation. The data models, including the underlying assumptions for obtaining a PLS and an MRPLS model, are outlined in Sections 2.2 and 2.3, respectively.
Section 10.1 presents preliminaries of projecting the recorded samples of the input variables onto an n-dimensional subspace, n ≤ nx, and shows how a sequence of rank-one matrices extracts variation from the set of input variables, x0, and from the set of output variables, respectively. Section 10.2 then develops a PLS algorithm and Section 10.3 summarizes the basic steps of this algorithm.
Section 10.4 then analyzes the statistical and geometric properties of PLS and, finally, Section 10.5 discusses the properties of MRPLS. Further material covering the development and analysis of PLS may be found in de Jong (1993); Geladi and Kowalski (1986); Höskuldsson (1988, 1996); ter Braak and de Jong (1998); Wold et al. (1984) and Young (1994).
In a similar fashion to PCA, PLS extracts information from the input and output data matrices by defining a series of rank-one matrices.
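To illustrate the idea of extracting variation through a sequence of rank-one matrices, the following is a minimal sketch of a NIPALS-style PLS iteration, not the specific algorithm developed later in this chapter. The data dimensions, the initialization of the score vector, and the helper name `nipals_pls` are assumptions for illustration; each extracted component contributes a rank-one matrix that is deflated from the data before the next component is computed.

```python
import numpy as np

def nipals_pls(X, Y, n_components, tol=1e-10, max_iter=500):
    """Sketch of NIPALS-style PLS: each component yields a rank-one
    matrix t @ p.T that is subtracted (deflated) from X, so successive
    components capture the remaining covariation between X and Y."""
    X, Y = X.copy(), Y.copy()
    T, P = [], []
    for _ in range(n_components):
        u = Y[:, [0]]                              # initialize with a column of Y
        for _ in range(max_iter):
            w = X.T @ u
            w /= np.linalg.norm(w)                 # input weight vector
            t = X @ w                              # input score vector
            q = Y.T @ t
            q /= np.linalg.norm(q)                 # output weight vector
            u_new = Y @ q                          # output score vector
            if np.linalg.norm(u_new - u) < tol:
                break
            u = u_new
        p = X.T @ t / (t.T @ t)                    # input loading vector
        X -= t @ p.T                               # rank-one deflation of X
        Y -= t @ (Y.T @ t / (t.T @ t)).T           # deflate Y with the same scores
        T.append(t)
        P.append(p)
    return np.hstack(T), np.hstack(P), X

# Illustrative data: mean-centered inputs and linearly related outputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
Y = X @ rng.standard_normal((5, 2)) + 0.1 * rng.standard_normal((50, 2))
X -= X.mean(axis=0)                                # PLS assumes mean-centered data
Y -= Y.mean(axis=0)

T, P, X_res = nipals_pls(X, Y, n_components=3)
```

After three components, the residual of X has a smaller Frobenius norm than the original matrix, reflecting the variation removed by the three rank-one matrices t p^T.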
The data matrices store mean-centered ...