Partial least squares regression

From Wikipedia, the free encyclopedia
Partial least squares (PLS) regression is a statistical method that bears some relation to principal components regression and is a reduced rank regression;[1] instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space of maximum covariance (see below). Because both the X and Y data are projected to new spaces, the PLS family of methods is known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when Y is categorical.

PLS is used to find the fundamental relations between two matrices (X and Y), i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases (unless it is regularized).

Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is projection to latent structures,[2][3] but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.

Core idea

Figure: Core idea of PLS. The loading vectors $\vec{p}_1, \vec{q}_1$ in the input and output space are drawn in red (not normalized for better visibility). When $x_1$ increases (independently of $x_2$), $y_1$ and $y_2$ increase.

We are given a sample of $n$ paired observations $(\vec{x}_i, \vec{y}_i)$, $i \in \{1, \ldots, n\}$. In the first step $j = 1$, partial least squares regression searches for the normalized directions $\vec{p}_j$, $\vec{q}_j$ that maximize the covariance[4]

$$\max_{\vec{p}_j,\,\vec{q}_j} \operatorname{E}\left[\underbrace{(\vec{p}_j \cdot \vec{X})}_{t_j}\,\underbrace{(\vec{q}_j \cdot \vec{Y})}_{u_j}\right].$$

Below, the algorithm is written in matrix notation.
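As an illustration of this maximization (this example is not part of the original text): for centered data, the first pair of directions can be obtained as the leading left and right singular vectors of the sample cross-covariance matrix $X^{\mathrm{T}}Y$. A minimal NumPy sketch, with the function name first_pls_directions chosen here only for illustration:

```python
import numpy as np

def first_pls_directions(X, Y):
    """Unit vectors p, q maximizing the sample covariance between X @ p and Y @ q.

    X is n-by-m, Y is n-by-k; both blocks are centered internally."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # The maximizing directions are the leading singular vectors of the cross-covariance X^T Y.
    U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    return U[:, 0], Vt[0, :]

# Illustration on synthetic data: Y depends on the first two coordinates of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = X[:, :2] @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(100, 3))
p, q = first_pls_directions(X, Y)
t, u = X @ p, Y @ q          # first pair of score vectors
print(np.cov(t, u)[0, 1])    # their covariance is large by construction
```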

Underlying model


The general underlying model of multivariate PLS with $\ell$ components is

$$X = T P^{\mathrm{T}} + E$$
$$Y = U Q^{\mathrm{T}} + F$$

where

  • X is an $n \times m$ matrix of predictors and Y is an $n \times p$ matrix of responses;
  • T and U are $n \times \ell$ score matrices (projections of X and of Y, respectively);
  • P and Q are, respectively, $m \times \ell$ and $p \times \ell$ loading matrices; and
  • E and F are the error terms.

The decompositions of X and Y are made so as to maximise the covariance between T and U.

Note that this covariance is defined pair by pair: the covariance of column i of T (length n) with column i of U (length n) is maximized. Additionally, the covariance of column i of T with column j of U (with $i \neq j$) is zero.

In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference from PCA, where orthogonality is imposed on the loadings (and not the scores).

Algorithms


A number of variants of PLS exist for estimating the factor and loading matrices T, U, P and Q. Most of them construct estimates of the linear regression between X and Y as $Y = X\tilde{B} + \tilde{B}_0$. Some PLS algorithms are only appropriate for the case where Y is a column vector, while others deal with the general case of a matrix Y. Algorithms also differ on whether they estimate the factor matrix T as an orthogonal (that is, orthonormal) matrix or not.[5][6][7][8][9][10] The final prediction will be the same for all these varieties of PLS, but the components will differ.

PLS consists of iteratively repeating the following steps $k$ times (for $k$ components):

  1. finding the directions of maximal covariance in input and output space
  2. performing least squares regression on the input score
  3. deflating the input $X$ and/or target $Y$

PLS1


PLS1 is a widely used algorithm appropriate for the vector Y case. It estimates T as an orthonormal matrix. (Caution: the t vectors in the pseudocode below may not be normalized appropriately.) In pseudocode it is expressed below (capital letters are matrices, lower case letters are vectors if they are superscripted and scalars if they are subscripted).

function PLS1(X, y, ℓ)
    X^(0) ← X
    w^(0) ← X^T y / ‖X^T y‖, an initial estimate of w
    for k = 0 to ℓ − 1
        t^(k) ← X^(k) w^(k)
        t_k ← (t^(k))^T t^(k)                (note this is a scalar)
        t^(k) ← t^(k) / t_k
        p^(k) ← (X^(k))^T t^(k)
        q_k ← y^T t^(k)                      (note this is a scalar)
        if q_k = 0
            ℓ ← k, break the for loop
        if k < ℓ − 1
            X^(k+1) ← X^(k) − t_k t^(k) (p^(k))^T
            w^(k+1) ← (X^(k+1))^T y
    end for
    define W to be the matrix with columns w^(0), w^(1), …, w^(ℓ−1).
        Do the same to form the P matrix and q vector.
    B ← W (P^T W)^{−1} q
    B_0 ← q_0 − (P^(0))^T B
    return B, B_0

This form of the algorithm does not require centering of the input X and Y, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix X (subtraction of $t_k t^{(k)} {p^{(k)}}^{\mathrm{T}}$), but deflation of the vector y is not performed, as it is not necessary (it can be proved that deflating y yields the same results as not deflating[11]). The user-supplied variable ℓ is the limit on the number of latent factors in the regression; if it equals the rank of the matrix X, the algorithm will yield the least squares regression estimates for B and $B_0$.
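For concreteness, a NumPy transcription of the pseudocode above follows. It is only a sketch: the function name pls1 is chosen here, the reading of P^(0) as the first column of P is an interpretation, and no claim is made that this matches any particular library implementation.

```python
import numpy as np

def pls1(X, y, n_components):
    """Sketch of the PLS1 pseudocode above (vector response y).

    X: (n, m) matrix of predictors; y: (n,) response vector.
    Returns coefficients B and intercept B0 with y approximated by X @ B + B0."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xk = X.copy()                              # X^(0)
    w = X.T @ y
    w = w / np.linalg.norm(w)                  # w^(0), initial estimate of w

    Ws, Ps, qs = [], [], []
    for k in range(n_components):
        t = Xk @ w                             # t^(k)
        tk = t @ t                             # scalar t_k (a squared norm, as in the pseudocode)
        t = t / tk
        p = Xk.T @ t                           # p^(k)
        qk = y @ t                             # scalar q_k
        if qk == 0:                            # ℓ ← k, break
            break
        Ws.append(w.copy()); Ps.append(p); qs.append(qk)
        if k < n_components - 1:
            Xk = Xk - tk * np.outer(t, p)      # deflation of X
            w = Xk.T @ y                       # w^(k+1), not re-normalized (per the pseudocode)

    W, P, q = np.column_stack(Ws), np.column_stack(Ps), np.asarray(qs)
    B = W @ np.linalg.solve(P.T @ W, q)
    B0 = q[0] - P[:, 0] @ B                    # interpreting P^(0) as the first column of P
    return B, B0

# Hypothetical usage:
#   B, B0 = pls1(X, y, n_components=3)
#   y_hat = X @ B + B0
```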

Figure: Geometric interpretation of the deflation step in the input space.

Extensions


OPLS


In 2002, a new method called orthogonal projections to latent structures (OPLS) was published. In OPLS, continuous variable data are separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models.[12] Similarly, OPLS-DA (Discriminant Analysis) may be applied when working with discrete variables, as in classification and biomarker studies.

The general underlying model of OPLS is

$$X = T P^{\mathrm{T}} + T_{\text{Y-orth}} P_{\text{Y-orth}}^{\mathrm{T}} + E$$
$$Y = U Q^{\mathrm{T}} + F$$

or in O2-PLS[13]

$$X = T P^{\mathrm{T}} + T_{\text{Y-orth}} P_{\text{Y-orth}}^{\mathrm{T}} + E$$
$$Y = U Q^{\mathrm{T}} + U_{\text{X-orth}} Q_{\text{X-orth}}^{\mathrm{T}} + F$$
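For a single response y, one way the Y-orthogonal variation in X can be filtered out before an ordinary PLS fit is sketched below. This is an illustration only: the function name opls_filter is chosen here, a single orthogonal component is extracted, and published OPLS implementations differ in scaling conventions and in how many orthogonal components they remove.

```python
import numpy as np

def opls_filter(X, y):
    """Remove one Y-orthogonal component from X (single response y); a sketch, not a full OPLS."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    w = w / np.linalg.norm(w)                      # predictive weight direction
    t = Xc @ w                                     # predictive score
    p = Xc.T @ t / (t @ t)                         # loading of the predictive component
    w_orth = p - (w @ p) * w                       # part of the loading orthogonal to w
    w_orth = w_orth / np.linalg.norm(w_orth)
    t_orth = Xc @ w_orth                           # Y-orthogonal score (uncorrelated with y by construction)
    p_orth = Xc.T @ t_orth / (t_orth @ t_orth)     # Y-orthogonal loading
    X_filtered = Xc - np.outer(t_orth, p_orth)     # X with the orthogonal variation removed
    return X_filtered, t_orth, p_orth
```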

L-PLS


Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects three related data blocks to improve predictability.[14] In brief, a new Z matrix, with the same number of columns as the X matrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables.

3PRF


In 2015, partial least squares was related to a procedure called the three-pass regression filter (3PRF).[15] Supposing that the number of observations and variables is large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth.[16]
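The three passes themselves are simple regressions. The sketch below is a simplified, centered version (the function name three_prf, the default use of the target itself as the single proxy, and the omission of intercepts are choices made here, not details taken from the cited paper):

```python
import numpy as np

def three_prf(X, y, Z=None):
    """Simplified three-pass regression filter (illustrative sketch).

    X: (T, N) predictors, y: (T,) target, Z: (T, L) proxies (defaults to the target itself).
    Returns in-sample fitted values of y."""
    X = X - X.mean(axis=0)                           # centered data, so intercepts are dropped
    y = y - y.mean()
    Z = y[:, None] if Z is None else Z - Z.mean(axis=0)
    # Pass 1: time-series regression of each predictor on the proxies -> loadings phi (N, L).
    phi = np.linalg.lstsq(Z, X, rcond=None)[0].T
    # Pass 2: cross-section regression of the predictors on the loadings, period by period -> factors F (T, L).
    F = np.linalg.lstsq(phi, X.T, rcond=None)[0].T
    # Pass 3: predictive regression of the target on the estimated factors.
    beta = np.linalg.lstsq(F, y, rcond=None)[0]
    return F @ beta
```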

Partial least squares SVD


A PLS version based on singular value decomposition (SVD) provides a memory efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware.[17]
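The memory saving comes from working with the cross-product matrix $X^{\mathrm{T}}Y$ (size m × k) rather than with the full data blocks at once; this matrix can be accumulated over row blocks and then decomposed. A rough sketch of that idea, not of the cited implementation (block size and function name are chosen here):

```python
import numpy as np

def pls_svd_blockwise(X, Y, n_components, block=1000):
    """Accumulate C = X^T Y over row blocks, then take its SVD.

    Memory is dominated by C (m x k), not by the n-row data blocks."""
    n, m = X.shape
    k = Y.shape[1]
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)   # a fully streaming version would accumulate these too
    C = np.zeros((m, k))
    for start in range(0, n, block):
        stop = min(start + block, n)
        C += (X[start:stop] - x_mean).T @ (Y[start:stop] - y_mean)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    # Columns of U and rows of Vt are the X- and Y-side weight vectors of the leading components.
    return U[:, :n_components], s[:n_components], Vt[:n_components, :]
```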

PLS correlation


PLS correlation (PLSC) is another methodology related to PLS regression,[18] which has been used in neuroimaging[18][19][20] and sport science,[21] to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups) each containing one or more variables, and then uses singular value decomposition (SVD) to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups.[22] It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration.[22][18]
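A compact sketch of that procedure follows (the per-column z-scoring and the function name plsc are choices made here; PLSC variants differ in how each block is normalized):

```python
import numpy as np

def plsc(X, Y):
    """PLS correlation between two blocks measured on the same n observations.

    Returns the singular values, their sum (the inertia), and the latent variables of each block."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # z-score each column
    Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)
    R = Yz.T @ Xz / (len(X) - 1)                        # cross-block correlation matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    inertia = s.sum()                                   # overall strength of the shared information
    Ly = Yz @ U                                         # latent variables for the Y block
    Lx = Xz @ Vt.T                                      # latent variables for the X block
    return s, inertia, Lx, Ly
```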

References

  1. ^ Schmidli, Heinz (13 March 2013). Reduced Rank Regression: With Applications to Quantitative Structure-Activity Relationships. Springer. ISBN 978-3-642-50015-2.
  2. ^ Wold, S; Sjöström, M.; Eriksson, L. (2001). "PLS-regression: a basic tool of chemometrics". Chemometrics and Intelligent Laboratory Systems. 58 (2): 109–130. doi:10.1016/S0169-7439(01)00155-1. S2CID 11920190.
  3. ^ Abdi, Hervé (2010). "Partial least squares regression and projection on latent structure regression (PLS Regression)". WIREs Computational Statistics. 2: 97–106. doi:10.1002/wics.51. S2CID 122685021.
  4. ^ See lecture https://www.youtube.com/watch?v=Px2otK2nZ1c&t=46s
  5. ^ Lindgren, F; Geladi, P; Wold, S (1993). "The kernel algorithm for PLS". J. Chemometrics. 7: 45–59. doi:10.1002/cem.1180070104. S2CID 122950427.
  6. ^ de Jong, S.; ter Braak, C.J.F. (1994). "Comments on the PLS kernel algorithm". J. Chemometrics. 8 (2): 169–174. doi:10.1002/cem.1180080208. S2CID 221549296.
  7. ^ Dayal, B.S.; MacGregor, J.F. (1997). "Improved PLS algorithms". J. Chemometrics. 11 (1): 73–85. doi:10.1002/(SICI)1099-128X(199701)11:1<73::AID-CEM435>3.0.CO;2-#. S2CID 120753851.
  8. ^ de Jong, S. (1993). "SIMPLS: an alternative approach to partial least squares regression". Chemometrics and Intelligent Laboratory Systems. 18 (3): 251–263. doi:10.1016/0169-7439(93)85002-X.
  9. ^ Rannar, S.; Lindgren, F.; Geladi, P.; Wold, S. (1994). "A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm". J. Chemometrics. 8 (2): 111–125. doi:10.1002/cem.1180080204. S2CID 121613293.
  10. ^ Abdi, H. (2010). "Partial least squares regression and projection on latent structure regression (PLS-Regression)". Wiley Interdisciplinary Reviews: Computational Statistics. 2: 97–106. doi:10.1002/wics.51. S2CID 122685021.
  11. ^ Höskuldsson, Agnar (1988). "PLS Regression Methods". Journal of Chemometrics. 2 (3): 219. doi:10.1002/cem.1180020306. S2CID 120052390.
  12. ^ Trygg, J; Wold, S (2002). "Orthogonal Projections to Latent Structures". Journal of Chemometrics. 16 (3): 119–128. doi:10.1002/cem.695. S2CID 122699039.
  13. ^ Eriksson, S. Wold, and J. Trygg. "O2PLS® for improved analysis and visualization of complex data." https://www.dynacentrix.com/telecharg/SimcaP/O2PLS.pdf
  14. ^ Sæbø, S.; Almøy, T.; Flatberg, A.; Aastveit, A.H.; Martens, H. (2008). "LPLS-regression: a method for prediction and classification under the influence of background information on predictor variables". Chemometrics and Intelligent Laboratory Systems. 91 (2): 121–132. doi:10.1016/j.chemolab.2007.10.006.
  15. ^ Kelly, Bryan; Pruitt, Seth (2015-06-01). "The three-pass regression filter: A new approach to forecasting using many predictors". Journal of Econometrics. High Dimensional Problems in Econometrics. 186 (2): 294–316. doi:10.1016/j.jeconom.2015.02.011.
  16. ^ Kelly, Bryan; Pruitt, Seth (2013-10-01). "Market Expectations in the Cross-Section of Present Values". The Journal of Finance. 68 (5): 1721–1756. CiteSeerX 10.1.1.498.5973. doi:10.1111/jofi.12060. ISSN 1540-6261.
  17. ^ Lorenzi, Marco; Altmann, Andre; Gutman, Boris; Wray, Selina; Arber, Charles; Hibar, Derrek P.; Jahanshad, Neda; Schott, Jonathan M.; Alexander, Daniel C. (2018-03-20). "Susceptibility of brain atrophy to TRIB3 in Alzheimer's disease, evidence from functional prioritization in imaging genetics". Proceedings of the National Academy of Sciences. 115 (12): 3162–3167. Bibcode:2018PNAS..115.3162L. doi:10.1073/pnas.1706100115. ISSN 0027-8424. PMC 5866534. PMID 29511103.
  18. ^ Krishnan, Anjali; Williams, Lynne J.; McIntosh, Anthony Randal; Abdi, Hervé (May 2011). "Partial Least Squares (PLS) methods for neuroimaging: A tutorial and review". NeuroImage. 56 (2): 455–475. doi:10.1016/j.neuroimage.2010.07.034. PMID 20656037. S2CID 8796113.
  19. ^ McIntosh, Anthony R.; Mišić, Bratislav (2013-01-03). "Multivariate Statistical Analyses for Neuroimaging Data". Annual Review of Psychology. 64 (1): 499–525. doi:10.1146/annurev-psych-113011-143804. ISSN 0066-4308. PMID 22804773.
  20. ^ Beggs, Clive B.; Magnano, Christopher; Belov, Pavel; Krawiecki, Jacqueline; Ramasamy, Deepa P.; Hagemeier, Jesper; Zivadinov, Robert (2016-05-02). de Castro, Fernando (ed.). "Internal Jugular Vein Cross-Sectional Area and Cerebrospinal Fluid Pulsatility in the Aqueduct of Sylvius: A Comparative Study between Healthy Subjects and Multiple Sclerosis Patients". PLOS ONE. 11 (5): e0153960. Bibcode:2016PLoSO..1153960B. doi:10.1371/journal.pone.0153960. ISSN 1932-6203. PMC 4852898. PMID 27135831.
  21. ^ Weaving, Dan; Jones, Ben; Ireton, Matt; Whitehead, Sarah; Till, Kevin; Beggs, Clive B. (2019-02-14). Connaboy, Chris (ed.). "Overcoming the problem of multicollinearity in sports performance data: A novel application of partial least squares correlation analysis". PLOS ONE. 14 (2): e0211776. Bibcode:2019PLoSO..1411776W. doi:10.1371/journal.pone.0211776. ISSN 1932-6203. PMC 6375576. PMID 30763328.
  22. ^ Abdi, Hervé; Williams, Lynne J. (2013), Reisfeld, Brad; Mayeno, Arthur N. (eds.), "Partial Least Squares Methods: Partial Least Squares Correlation and Partial Least Square Regression", Computational Toxicology, vol. 930, Humana Press, pp. 549–579, doi:10.1007/978-1-62703-059-5_23, ISBN 978-1-62703-058-8, PMID 23086857

