# User:Dfbeaton/sandbox

Partial least squares (PLS) (sometimes called projection to latent structures) is an umbrella term for a variety of statistical methods that, in general, aim to explain the relationship between two matrices (say, X and Y)[1][2]. Like principal components analysis, canonical correlation analysis, and principal components regression, PLS takes a latent variable approach to modeling the covariance between X and Y.

There are three basic versions of PLS, each with a wide array of extensions. The term partial least squares was introduced by the Swedish statistician Herman Wold, who developed the first PLS approach with his son, Svante Wold; this approach is commonly called PLS regression. The second approach, introduced as PLS by Bookstein and McIntosh, is sometimes called PLS correlation or PLS-SVD; it can be traced back to Ledyard Tucker's interbattery factor analysis. The third approach, developed by Tenenhaus and colleagues,[3] is called PLS path modelling: a PLS regression approach to path modelling.

PLS regression bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the X and Y data are projected to new spaces, the PLS family of methods are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when the Y is categorical.

PLS is used to find the fundamental relations between two matrices (X and Y): it is a latent variable approach to modeling the covariance structures in these two spaces. A PLS model tries to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among the X values. By contrast, standard regression will fail in these cases.
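The covariance-maximization idea can be sketched in a few lines of NumPy (a generic illustration of the objective, not any particular published PLS algorithm): the leading pair of singular vectors of ${\displaystyle X^{T}Y}$ gives the unit directions ${\displaystyle w}$ and ${\displaystyle c}$ that maximize the covariance between the projections ${\displaystyle Xw}$ and ${\displaystyle Yc}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 100, 6, 4
X = rng.standard_normal((n, m))
# Build Y so that it shares structure with X, plus noise.
Y = X @ rng.standard_normal((m, p)) + 0.5 * rng.standard_normal((n, p))

# Center both blocks, since covariance is defined on centered data.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

# The leading singular vectors of Xc^T Yc maximize the covariance of
# (Xc w, Yc c) over all pairs of unit-norm directions (w, c).
U, s, Vt = np.linalg.svd(Xc.T @ Yc)
w, c = U[:, 0], Vt[0, :]

best = (Xc @ w) @ (Yc @ c)      # covariance (up to 1/n) along (w, c)
assert np.isclose(best, s[0])   # equals the largest singular value

# No random pair of unit directions achieves more covariance.
for _ in range(100):
    wr = rng.standard_normal(m); wr /= np.linalg.norm(wr)
    cr = rng.standard_normal(p); cr /= np.linalg.norm(cr)
    assert abs((Xc @ wr) @ (Yc @ cr)) <= best + 1e-9
```

This is the objective shared by the PLS family; the variants differ in how successive direction pairs are extracted and how the blocks are deflated between components.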

The PLS algorithm is employed in PLS path modelling,[3][4] a method of modeling a "causal" network of latent variables; the word 'causal' is put in quotes because causes cannot be determined without experimental or quasi-experimental methods. This technique is a form of structural equation modeling, distinguished from the classical method by being component-based rather than covariance-based.[5]

An alternative term for PLS (and, according to Svante Wold,[6] a more correct one) is projection to latent structures, but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience and anthropology. In contrast, PLS path modeling is most often used in social sciences, econometrics, marketing and strategic management.

## Underlying model

The general underlying model of multivariate PLS is

${\displaystyle {\begin{aligned}X&=TP^{\top }+E\\Y&=UQ^{\top }+F,\end{aligned}}}$

where ${\displaystyle X}$ is an ${\displaystyle n\times m}$ matrix of predictors, ${\displaystyle Y}$ is an ${\displaystyle n\times p}$ matrix of responses; ${\displaystyle T}$ and ${\displaystyle U}$ are ${\displaystyle n\times l}$ matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores); ${\displaystyle P}$ and ${\displaystyle Q}$ are, respectively, ${\displaystyle m\times l}$ and ${\displaystyle p\times l}$ orthogonal loading matrices; and matrices ${\displaystyle E}$ and ${\displaystyle F}$ are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of X and Y are made so as to maximise the covariance of T and U.
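The dimensions in this model can be made concrete with a small NumPy sketch (all sizes below are arbitrary illustrative choices, and the scores U are constructed to covary with T):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, l = 50, 8, 3, 2          # observations, predictors, responses, components

T = rng.standard_normal((n, l))                 # X scores (n x l)
U = T + 0.1 * rng.standard_normal((n, l))       # Y scores, built to covary with T
P = rng.standard_normal((m, l))                 # X loadings (m x l)
Q = rng.standard_normal((p, l))                 # Y loadings (p x l)
E = 0.1 * rng.standard_normal((n, m))           # X residuals
F = 0.1 * rng.standard_normal((n, p))           # Y residuals

X = T @ P.T + E                   # n x m block of predictors
Y = U @ Q.T + F                   # n x p block of responses
assert X.shape == (n, m) and Y.shape == (n, p)
```

Fitting PLS runs this construction in reverse: given X and Y, it estimates scores and loadings such that the covariance of T and U is maximal.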

## Algorithms

A number of variants of PLS exist for estimating the factor and loading matrices ${\displaystyle T,P}$ and ${\displaystyle Q}$. Most of them construct estimates of the linear regression between ${\displaystyle X}$ and ${\displaystyle Y}$ as ${\displaystyle Y=X{\tilde {B}}+{\tilde {B}}_{0}}$. Some PLS algorithms are only appropriate for the case where ${\displaystyle Y}$ is a column vector, while others deal with the general case of a matrix ${\displaystyle Y}$. Algorithms also differ on whether they estimate the factor matrix ${\displaystyle T}$ as an orthogonal (that is, orthonormal) matrix or not.[7][8][9][10][11][12] The final prediction will be the same for all these varieties of PLS, but the components will differ.

### PLS1

PLS1 is a widely used algorithm appropriate for the vector ${\displaystyle Y}$ case. It estimates ${\displaystyle T}$ as an orthonormal matrix. In pseudocode it is expressed below (capital letters are matrices, lower case letters are vectors if they are superscripted and scalars if they are subscripted):

function PLS1(${\displaystyle X,y,l}$)
    ${\displaystyle X^{(0)}\gets X}$
    ${\displaystyle w^{(0)}\gets X^{T}y/||X^{T}y||}$, an initial estimate of ${\displaystyle w}$
    ${\displaystyle t^{(0)}\gets Xw^{(0)}}$
    for ${\displaystyle k}$ = 0 to ${\displaystyle l-1}$
        ${\displaystyle t_{k}\gets {t^{(k)}}^{T}t^{(k)}}$ (note this is a scalar)
        ${\displaystyle t^{(k)}\gets t^{(k)}/t_{k}}$
        ${\displaystyle p^{(k)}\gets {X^{(k)}}^{T}t^{(k)}}$
        ${\displaystyle q_{k}\gets {y}^{T}t^{(k)}}$ (note this is a scalar)
        if ${\displaystyle q_{k}}$ = 0
            ${\displaystyle l\gets k}$, break the for loop
        if ${\displaystyle k<l-1}$
            ${\displaystyle X^{(k+1)}\gets X^{(k)}-t_{k}t^{(k)}{p^{(k)}}^{T}}$
            ${\displaystyle w^{(k+1)}\gets {X^{(k+1)}}^{T}y}$
            ${\displaystyle t^{(k+1)}\gets X^{(k+1)}w^{(k+1)}}$
    end for
    define ${\displaystyle W}$ to be the matrix with columns ${\displaystyle w^{(0)},w^{(1)},...,w^{(l-1)}}$; define ${\displaystyle P}$ and the vector ${\displaystyle q}$ analogously from the ${\displaystyle p^{(k)}}$ and ${\displaystyle q_{k}}$
    ${\displaystyle B\gets W{(P^{T}W)}^{-1}q}$
    ${\displaystyle B_{0}\gets q_{0}-{p^{(0)}}^{T}B}$
    return ${\displaystyle B,B_{0}}$


This form of the algorithm does not require centering of the input ${\displaystyle X}$ and ${\displaystyle Y}$, as this is performed implicitly by the algorithm. The algorithm features 'deflation' of the matrix ${\displaystyle X}$ (subtraction of ${\displaystyle t_{k}t^{(k)}{p^{(k)}}^{T}}$), but deflation of the vector ${\displaystyle y}$ is not performed, as it is not necessary (it can be proved that deflating ${\displaystyle y}$ yields the same results as not deflating). The user-supplied variable ${\displaystyle l}$ is the limit on the number of latent factors in the regression; if it equals the rank of the matrix ${\displaystyle X}$, the algorithm will yield the least squares regression estimates for ${\displaystyle B}$ and ${\displaystyle B_{0}}$.
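A direct NumPy transcription of the pseudocode might look as follows (an illustrative sketch, not an optimized or reference implementation). The usage at the bottom checks the least-squares property: with ${\displaystyle l}$ equal to the rank of ${\displaystyle X}$ and ${\displaystyle y}$ exactly linear in ${\displaystyle X}$, the fit reproduces ${\displaystyle y}$.

```python
import numpy as np

def pls1(X, y, l):
    """PLS1 as in the pseudocode: y is a vector, l the number of components."""
    Xk = np.array(X, dtype=float)          # X^(0)
    y = np.asarray(y, dtype=float)
    w = Xk.T @ y
    w = w / np.linalg.norm(w)              # initial estimate of w
    t = Xk @ w                             # t^(0)
    W, P, q = [], [], []
    for k in range(l):
        tk = t @ t                         # scalar t^(k)T t^(k)
        t = t / tk
        p = Xk.T @ t
        qk = y @ t                         # scalar
        if qk == 0:                        # no remaining covariance with y
            break
        W.append(w); P.append(p); q.append(qk)
        if k < l - 1:                      # deflate X and form the next w, t
            Xk = Xk - tk * np.outer(t, p)
            w = Xk.T @ y
            t = Xk @ w
    W = np.column_stack(W)
    P = np.column_stack(P)
    q = np.asarray(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    B0 = q[0] - P[:, 0] @ B
    return B, B0

# With l = rank(X) and y exactly linear in X, PLS1 recovers the
# least-squares coefficients and fits y exactly.
rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta
B, B0 = pls1(X, y, 3)
assert np.allclose(X @ B + B0, y)
assert np.allclose(B, beta)
```

Note that, as in the pseudocode, only ${\displaystyle w^{(0)}}$ is normalized; the scaling of later ${\displaystyle w^{(k)}}$ cancels in the final expression for ${\displaystyle B}$.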

## Extensions

In 2002, a new method called orthogonal projections to latent structures (OPLS) was published. In OPLS, continuous variable data is separated into predictive and uncorrelated information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models.[13] L-PLS extends PLS regression to 3 connected data blocks.[14] Similarly, OPLS-DA (discriminant analysis) may be applied when working with discrete variables, as in classification and biomarker studies.

## Software implementation

Most major statistical software packages offer PLS regression.[citation needed]

## References

1. ^ Esposito Vinzi, V.; Russolillo, G. (2013). "Partial least squares algorithms and methods". WIREs Computational Statistics. 5 (1): 1–19. doi:10.1002/wics.1239.
2. ^ Abdi, H. (2010). "Partial least squares regression and projection on latent structure regression (PLS Regression)" (PDF). WIREs Computational Statistics. 2 (1): 97–106. doi:10.1002/wics.51.
3. ^ Tenenhaus, M.; Esposito Vinzi, V.; Chatelinc, Y-M.; Lauro, C. (January 2005). "PLS path modeling" (PDF). Computational Statistics & Data Analysis. 48 (1): 159–205. doi:10.1016/j.csda.2004.03.005.
4. ^ Vinzi, V.; Chin, W.W.; Henseler, J.; Wang, H., eds. (2010). Handbook of Partial Least Squares. ISBN 978-3-540-32825-4.
5. ^ Tenenhaus, M. (2008). "Component-based structural equation modelling" (PDF).
6. ^ Wold, S; Sjöström, M.; Eriksson, L. (2001). "PLS-regression: a basic tool of chemometrics". Chemometrics and Intelligent Laboratory Systems. 58 (2): 109–130. doi:10.1016/S0169-7439(01)00155-1.
7. ^ Lindgren, F; Geladi, P; Wold, S (1993). "The kernel algorithm for PLS". J. Chemometrics. 7: 45–59. doi:10.1002/cem.1180070104.
8. ^ de Jong, S.; ter Braak, C.J.F. (1994). "Comments on the PLS kernel algorithm". J. Chemometrics. 8 (2): 169–174. doi:10.1002/cem.1180080208.
9. ^ Dayal, B.S.; MacGregor, J.F. (1997). "Improved PLS algorithms". J. Chemometrics. 11 (1): 73–85. doi:10.1002/(SICI)1099-128X(199701)11:1<73::AID-CEM435>3.0.CO;2-#.
10. ^ de Jong, S. (1993). "SIMPLS: an alternative approach to partial least squares regression". Chemometrics and Intelligent Laboratory Systems. 18 (3): 251–263. doi:10.1016/0169-7439(93)85002-X.
11. ^ Rannar, S.; Lindgren, F.; Geladi, P.; Wold, S. (1994). "A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm". J. Chemometrics. 8 (2): 111–125. doi:10.1002/cem.1180080204.
12. ^ Abdi, H. (2010). "Partial least squares regression and projection on latent structure regression (PLS-Regression)". Wiley Interdisciplinary Reviews: Computational Statistics. 2: 97–106. doi:10.1002/wics.51.
13. ^ Trygg, J; Wold, S (2002). "Orthogonal Projections to Latent Structures". Journal of Chemometrics. 16 (3): 119–128. doi:10.1002/cem.695.
14. ^ Sæbøa, S.; Almøya, T.; Flatbergb, A.; Aastveita, A.H.; Martens, H. (2008). "LPLS-regression: a method for prediction and classification under the influence of background information on predictor variables". Chemometrics and Intelligent Laboratory Systems. 91 (2): 121–132. doi:10.1016/j.chemolab.2007.10.006.
