
Partial correlation

From Wikipedia, the free encyclopedia

Concept in probability theory and statistics
Not to be confused with Coefficient of partial determination.

In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed. When determining the numerical relationship between two variables of interest, using their correlation coefficient will give misleading results if there is another confounding variable that is numerically related to both variables of interest. This misleading information can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient. This is precisely the motivation for including other right-side variables in a multiple regression; but while multiple regression gives unbiased results for the effect size, it does not give a numerical measure of the strength of the relationship between the two variables of interest.

For example, given economic data on the consumption, income, and wealth of various individuals, consider the relationship between consumption and income. Failing to control for wealth when computing a correlation coefficient between consumption and income would give a misleading result, since income might be numerically related to wealth which in turn might be numerically related to consumption; a measured correlation between consumption and income might actually be contaminated by these other correlations. The use of a partial correlation avoids this problem.

Like the correlation coefficient, the partial correlation coefficient takes on a value in the range from –1 to 1. The value –1 conveys a perfect negative correlation controlling for some variables (that is, an exact linear relationship in which higher values of one variable are associated with lower values of the other); the value 1 conveys a perfect positive linear relationship, and the value 0 conveys that there is no linear relationship.

The partial correlation coincides with the conditional correlation if the random variables are jointly distributed as the multivariate normal, other elliptical, multivariate hypergeometric, multivariate negative hypergeometric, multinomial, or Dirichlet distribution, but not in general otherwise.[1]

Formal definition


Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, ..., Zn}, written ρXY·Z, is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively. The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation, are available in Guilford (1973, pp. 344–345).[2]
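For a single controlling variable Z, this verbal description corresponds to the explicit first-order formula (derived below under "Using recursive formula"):

\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{ZY}}{\sqrt{1-\rho_{XZ}^{2}}\,\sqrt{1-\rho_{ZY}^{2}}}

where each factor \sqrt{1-\rho^{2}} in the denominator is the coefficient of alienation of the corresponding removable correlation.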

Computation


Using linear regression


A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems and calculate the correlation between the residuals. Let X and Y be random variables taking real values, and let Z be the n-dimensional vector-valued random variable. Let xi, yi and zi denote the ith of N i.i.d. observations from some joint probability distribution over real random variables X, Y, and Z, with zi having been augmented with a 1 to allow for a constant term in the regression. Solving the linear regression problem amounts to finding (n + 1)-dimensional regression coefficient vectors wX* and wY* such that

\mathbf{w}_X^{*} = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^{N} \left( x_i - \langle \mathbf{w}, \mathbf{z}_i \rangle \right)^2 \right\}

\mathbf{w}_Y^{*} = \arg\min_{\mathbf{w}} \left\{ \sum_{i=1}^{N} \left( y_i - \langle \mathbf{w}, \mathbf{z}_i \rangle \right)^2 \right\}

where N is the number of observations, and ⟨w, zi⟩ is the scalar product between the vectors w and zi.

The residuals are then

e_{X,i} = x_i - \langle \mathbf{w}_X^{*}, \mathbf{z}_i \rangle

e_{Y,i} = y_i - \langle \mathbf{w}_Y^{*}, \mathbf{z}_i \rangle

and the sample partial correlation is then given by the usual formula for sample correlation, but between these new derived values:

\begin{aligned}
\hat{\rho}_{XY\cdot\mathbf{Z}} &= \frac{N\sum_{i=1}^{N} e_{X,i} e_{Y,i} - \sum_{i=1}^{N} e_{X,i} \sum_{i=1}^{N} e_{Y,i}}{\sqrt{N\sum_{i=1}^{N} e_{X,i}^{2} - \left(\sum_{i=1}^{N} e_{X,i}\right)^{2}}\;\sqrt{N\sum_{i=1}^{N} e_{Y,i}^{2} - \left(\sum_{i=1}^{N} e_{Y,i}\right)^{2}}} \\
&= \frac{N\sum_{i=1}^{N} e_{X,i} e_{Y,i}}{\sqrt{N\sum_{i=1}^{N} e_{X,i}^{2}}\;\sqrt{N\sum_{i=1}^{N} e_{Y,i}^{2}}}.
\end{aligned}

In the first expression the three terms after minus signs all equal 0 since each contains the sum of residuals from an ordinary least squares regression.

Example


Consider the following data on three variables, X, Y, and Z:

 X   Y   Z
 2   1   0
 4   2   0
15   3   1
20   4   1

Computing the Pearson correlation coefficient between variables X and Y results in approximately 0.970, while computing the partial correlation between X and Y, using the formula given above, gives a partial correlation of 0.919. The computations were done using R with the following code.

> x <- c(2, 4, 15, 20)
> y <- c(1, 2, 3, 4)
> z <- c(0, 0, 1, 1)
> # regress x onto z and compute residuals
> res_x <- lm(x ~ z)$residuals
> # regress y onto z and compute residuals
> res_y <- lm(y ~ z)$residuals
> # compute correlation of residuals
> cor(res_x, res_y)
# [1] 0.919145
> # show this is distinct from the correlation between x and y
> cor(x, y)
# [1] 0.9695016
> # compute generalized partial correlations
> generalCorr::parcorMany(cbind(x, y, z))
#      nami namj partij   partji rijMrji
# [1,] "x"  "y"  "0.8844" "1"    "-0.1156"
# [2,] "x"  "z"  "0.1581" "1"    "-0.8419"

The lower part of the above code reports the generalized nonlinear partial correlation coefficient between X and Y after removing the nonlinear effect of Z to be 0.8844, and the generalized nonlinear partial correlation coefficient between X and Z after removing the nonlinear effect of Y to be 0.1581. See the R package `generalCorr` and its vignettes for details. Simulation and other details are in Vinod (2017), "Generalized correlation and kernel causality with applications in development economics," Communications in Statistics – Simulation and Computation, vol. 46, pp. 4513–4534, available online 29 Dec 2015, https://doi.org/10.1080/03610918.2015.1122048.

Using recursive formula


It can be computationally expensive to solve the linear regression problems. However, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρXY·Ø is defined to be the regular correlation coefficient ρXY.

It holds, for any Z0 ∈ Z, that[3]

\rho_{XY\cdot\mathbf{Z}} = \frac{\rho_{XY\cdot\mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}\,\rho_{Z_0Y\cdot\mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1-\rho_{XZ_0\cdot\mathbf{Z}\setminus\{Z_0\}}^{2}}\,\sqrt{1-\rho_{Z_0Y\cdot\mathbf{Z}\setminus\{Z_0\}}^{2}}}

Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of O(n³).

Note in the case where Z is a single variable, this reduces to:[citation needed]

\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{ZY}}{\sqrt{1-\rho_{XZ}^{2}}\,\sqrt{1-\rho_{ZY}^{2}}}
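The recursion with caching can be sketched in a few lines of R; the function name pcor_rec and its memoisation via an environment are purely illustrative, not part of any package:

# Recursive partial correlation from a correlation matrix R, caching
# intermediate results so the run time stays polynomial.
# i, j: indices of the two variables of interest; k: indices of the controls.
pcor_rec <- function(R, i, j, k, cache = new.env()) {
  key <- paste(c(sort(c(i, j)), sort(k)), collapse = "-")
  if (exists(key, envir = cache, inherits = FALSE)) return(get(key, envir = cache))
  if (length(k) == 0) {
    val <- R[i, j]                        # zeroth order: plain correlation
  } else {
    k0   <- k[1]                          # pick one control variable Z0 to remove
    rest <- k[-1]
    r_xy <- pcor_rec(R, i, j, rest, cache)
    r_xz <- pcor_rec(R, i, k0, rest, cache)
    r_zy <- pcor_rec(R, k0, j, rest, cache)
    val  <- (r_xy - r_xz * r_zy) / (sqrt(1 - r_xz^2) * sqrt(1 - r_zy^2))
  }
  assign(key, val, envir = cache)
  val
}

# Example with the data of the Example section above
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
R <- cor(cbind(x, y, z))
pcor_rec(R, 1, 2, 3)   # about 0.919, matching the residual-based computation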

Using matrix inversion


The partial correlation can also be written in terms of the joint precision matrix. Consider a set of random variables V = {X1, ..., Xn} of cardinality n. We want the partial correlation between two variables Xi and Xj given all others, i.e., V \ {Xi, Xj}. Suppose the (joint/full) covariance matrix Σ = (σij) is positive definite and therefore invertible. If the precision matrix is defined as Ω = (pij) = Σ⁻¹, then

\rho_{X_i X_j \cdot \mathbf{V} \setminus \{X_i, X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}\,p_{jj}}} \qquad (1)

Computing this requires Σ⁻¹, the inverse of the covariance matrix Σ, which runs in O(n³) time (using the sample covariance matrix to obtain a sample partial correlation). Note that only a single matrix inversion is required to give all the partial correlations between pairs of variables in V.
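This route can be sketched in R as follows; the helper name partial_cor_matrix is illustrative, and the code assumes the sample covariance matrix is invertible:

# All pairwise partial correlations (each controlling for the remaining
# variables) from a single inversion of the sample covariance matrix.
partial_cor_matrix <- function(data) {
  S <- cov(data)                 # sample covariance matrix
  P <- solve(S)                  # precision matrix (assumes S is invertible)
  D <- diag(1 / sqrt(diag(P)))
  R <- -D %*% P %*% D            # rho_ij = -p_ij / sqrt(p_ii * p_jj)
  diag(R) <- 1
  dimnames(R) <- dimnames(S)
  R
}

# With the three-variable example data this reproduces 0.919 for (x, y):
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
partial_cor_matrix(cbind(x, y, z))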

To prove Equation (1), return to the previous notation (i.e. X, Y, Z ↔ Xi, Xj, V \ {Xi, Xj}) and start with the definition of partial correlation: ρXY·Z is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively.

First, suppose β, γ are the coefficients for the linear regression fits; that is,

\beta = \operatorname{argmin}_{\beta}\, \mathbb{E}\|X - \beta^{T} Z\|^{2}

\gamma = \operatorname{argmin}_{\gamma}\, \mathbb{E}\|Y - \gamma^{T} Z\|^{2}

Write the joint covariance matrix for the vector (X, Y, Zᵀ)ᵀ as

\Sigma = \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} & \Sigma_{XZ} \\ \Sigma_{YX} & \Sigma_{YY} & \Sigma_{YZ} \\ \Sigma_{ZX} & \Sigma_{ZY} & \Sigma_{ZZ} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}

where

C_{11} = \begin{bmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{bmatrix}, \qquad C_{12} = \begin{bmatrix} \Sigma_{XZ} \\ \Sigma_{YZ} \end{bmatrix}, \qquad C_{21} = \begin{bmatrix} \Sigma_{ZX} & \Sigma_{ZY} \end{bmatrix}, \qquad C_{22} = \Sigma_{ZZ}

Then the standard formula for linear regression gives

\beta = \left(\Sigma_{ZZ}\right)^{-1} \Sigma_{ZX}

Hence, the residuals can be written as

R_X = X - \beta^{T} Z = X - \Sigma_{XZ}\left(\Sigma_{ZZ}\right)^{-1} Z

Note that RX has expectation zero because of the inclusion of an intercept term in Z. Computing the covariance now gives

\operatorname{Cov}(R_X, R_Y) = \mathbb{E}[R_X R_Y] = \dots = \Sigma_{XY} - \Sigma_{XZ}\left(\Sigma_{ZZ}\right)^{-1}\Sigma_{ZY} \qquad (2)

Next, write the precision matrix Ω = Σ⁻¹ in a similar block form:

\Omega = \begin{bmatrix} \Omega_{XX} & \Omega_{XY} & \Omega_{XZ} \\ \Omega_{YX} & \Omega_{YY} & \Omega_{YZ} \\ \Omega_{ZX} & \Omega_{ZY} & \Omega_{ZZ} \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}

Then, by Schur's formula for block-matrix inversion,

P_{11}^{-1} = C_{11} - C_{12} C_{22}^{-1} C_{21}

The entries of the right-hand-side matrix are precisely the covariances previously computed in (2), giving

P_{11}^{-1} = \begin{bmatrix} \operatorname{Cov}(R_X, R_X) & \operatorname{Cov}(R_X, R_Y) \\ \operatorname{Cov}(R_Y, R_X) & \operatorname{Cov}(R_Y, R_Y) \end{bmatrix}

Using the formula for the inverse of a 2×2 matrix gives

\begin{aligned}
P_{11}^{-1} &= \frac{1}{\det P_{11}} \begin{pmatrix} [P_{11}]_{22} & -[P_{11}]_{12} \\ -[P_{11}]_{21} & [P_{11}]_{11} \end{pmatrix} \\
&= \frac{1}{\det P_{11}} \begin{pmatrix} p_{YY} & -p_{XY} \\ -p_{YX} & p_{XX} \end{pmatrix}
\end{aligned}

So indeed, the partial correlation is

\rho_{XY\cdot Z} = \frac{\operatorname{Cov}(R_X, R_Y)}{\sqrt{\operatorname{Cov}(R_X, R_X)\operatorname{Cov}(R_Y, R_Y)}} = \frac{-\tfrac{1}{\det P_{11}}\,p_{XY}}{\sqrt{\tfrac{1}{\det P_{11}}\,p_{XX}\,\tfrac{1}{\det P_{11}}\,p_{YY}}} = -\frac{p_{XY}}{\sqrt{p_{XX}\,p_{YY}}}

as claimed in (1).
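A quick numerical check of identity (1) in R, reusing the example data from the Computation section, can be sketched as follows (variable names are as in the earlier snippet):

# Verify that the residual-based and precision-matrix-based definitions agree
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)

# residual route
r1 <- cor(lm(x ~ z)$residuals, lm(y ~ z)$residuals)

# precision-matrix route
P  <- solve(cov(cbind(x, y, z)))
r2 <- -P["x", "y"] / sqrt(P["x", "x"] * P["y", "y"])

all.equal(r1, r2)   # TRUE; both are about 0.919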

Interpretation

Figure: Geometrical interpretation of partial correlation for the case of N = 3 observations and thus a 2-dimensional hyperplane

Geometrical


Let three variables X, Y, Z (where Z is the "control" or "extra variable") be chosen from a joint probability distribution over n variables V. Further, let vi, 1 ≤ i ≤ N, be N n-dimensional i.i.d. observations taken from the joint probability distribution over V. The geometrical interpretation comes from considering the N-dimensional vectors x (formed by the successive values of X over the observations), y (formed by the values of Y), and z (formed by the values of Z).

It can be shown that the residuals eX,i coming from the linear regression of X on Z, if also considered as an N-dimensional vector eX (denoted rX in the accompanying graph), have a zero scalar product with the vector z generated by Z. This means that the residuals vector lies on an (N–1)-dimensional hyperplane Sz that is perpendicular to z.

The same also applies to the residuals eY,i generating a vector eY. The desired partial correlation is then the cosine of the angle φ between the projections eX and eY of x and y, respectively, onto the hyperplane perpendicular to z.[4]: ch. 7
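This geometric reading can be checked directly in R on the example data above: with an intercept in each regression the residual vectors are centred, so the cosine of the angle between them equals their correlation (a sketch, not library code):

# The partial correlation as the cosine of the angle between residual vectors
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
e_x <- lm(x ~ z)$residuals
e_y <- lm(y ~ z)$residuals
cos_phi <- sum(e_x * e_y) / (sqrt(sum(e_x^2)) * sqrt(sum(e_y^2)))
cos_phi   # about 0.919, the same value as cor(e_x, e_y)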

As conditional independence test

See also: Fisher transformation

With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρXY·Z is zero if and only if X is conditionally independent from Y given Z.[1] This property does not hold in the general case.

To test if a sample partial correlation ρ̂XY·Z implies that the true population partial correlation differs from 0, Fisher's z-transform of the partial correlation can be used:

z(\hat{\rho}_{XY\cdot\mathbf{Z}}) = \frac{1}{2}\ln\left(\frac{1+\hat{\rho}_{XY\cdot\mathbf{Z}}}{1-\hat{\rho}_{XY\cdot\mathbf{Z}}}\right)

The null hypothesis is H0: ρXY·Z = 0, to be tested against the two-tail alternative HA: ρXY·Z ≠ 0. H0 can be rejected if

\sqrt{N - |\mathbf{Z}| - 3}\cdot \left|z(\hat{\rho}_{XY\cdot\mathbf{Z}})\right| > \Phi^{-1}(1-\alpha/2)

where Φ is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, α is the significance level of H0, and N is the sample size. This z-transform is approximate, and the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient, and the partial variances is available.[5]

The distribution of the sample partial correlation was described by Fisher.[6]
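A minimal sketch of this test in R (the helper name pcor_test is illustrative and implements only the Fisher approximation above; the call with N = 30 is a hypothetical illustration, since the four-observation example gives N − |Z| − 3 = 0):

# Fisher z-test for H0: rho_{XY.Z} = 0, following the approximation above.
# r: sample partial correlation, n: sample size, k: number of controls |Z|.
pcor_test <- function(r, n, k, alpha = 0.05) {
  z    <- 0.5 * log((1 + r) / (1 - r))     # Fisher z-transform
  stat <- sqrt(n - k - 3) * abs(z)         # test statistic
  crit <- qnorm(1 - alpha / 2)             # two-tailed critical value
  list(statistic = stat,
       p_value   = 2 * (1 - pnorm(stat)),
       reject_H0 = stat > crit)
}

# Hypothetical example: r = 0.919 with n = 30 observations and one control
pcor_test(r = 0.919, n = 30, k = 1)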

Semipartial correlation (part correlation)


The semipartial (or part) correlation statistic is similar to the partial correlation statistic; both compare variations of two variables after certain factors are controlled for. However, to calculate the semipartial correlation, one holds the third variable constant for either X or Y but not both; whereas for the partial correlation, one holds the third variable constant for both.[7] The semipartial correlation compares the unique variation of one variable (having removed variation associated with the Z variable(s)) with the unfiltered variation of the other, while the partial correlation compares the unique variation of one variable to the unique variation of the other.

The semipartial correlation can be viewed as more practically relevant "because it is scaled to (i.e., relative to) the total variability in the dependent (response) variable."[8] Conversely, it is less theoretically useful because it is less precise about the role of the unique contribution of the independent variable.

The absolute value of the semipartial correlation of X with Y is always less than or equal to that of the partial correlation of X with Y. The reason is this: Suppose the correlation of X with Z has been removed from X, giving the residual vector eX. In computing the semipartial correlation, Y still contains both unique variance and variance due to its association with Z. But eX, being uncorrelated with Z, can only explain some of the unique part of the variance of Y and not the part related to Z. In contrast, with the partial correlation, only eY (the part of the variance of Y that is unrelated to Z) is to be explained, so there is less variance of the type that eX cannot explain.
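The distinction can be sketched in R on the example data from the Computation section: the semipartial correlation residualises only X, while the partial correlation residualises both variables:

# Semipartial vs. partial correlation of x and y, controlling for z in x only
# (semipartial) or in both x and y (partial).
x <- c(2, 4, 15, 20); y <- c(1, 2, 3, 4); z <- c(0, 0, 1, 1)
e_x <- lm(x ~ z)$residuals
e_y <- lm(y ~ z)$residuals

semipartial <- cor(e_x, y)    # unique part of x against the raw y
partial     <- cor(e_x, e_y)  # unique part of x against the unique part of y

c(semipartial = semipartial, partial = partial)
# the absolute semipartial value (about 0.41 here) is never larger than
# the partial value (about 0.92 here)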

Use in time series analysis


In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag h, as[citation needed]

\varphi(h) = \rho_{X_0 X_h \,\cdot\, \{X_1,\,\dots\,,X_{h-1}\}}

This function is used to determine the appropriate lag length for an autoregression.
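In R, the sample partial autocorrelation function is provided by pacf() in the stats package; a brief sketch (the AR(2) simulation is only an illustration):

# Sample partial autocorrelation of a simulated AR(2) series; the PACF
# should drop to roughly zero after lag 2, suggesting an AR(2) model.
set.seed(1)
ts_data <- arima.sim(model = list(ar = c(0.6, 0.3)), n = 500)
pacf(ts_data, lag.max = 10)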

Partial correlations with shrinkage


When the sample size is smaller than the number of variables (the high-dimensional setting), estimating partial correlations can be challenging. In this scenario, the sample covariance matrix Σ̂ is not well-conditioned, and finding its inverse Ω̂ becomes problematic.

Shrinkage estimation methods improve Σ̂ or Ω̂ and produce more reliable partial correlation estimates. One example is the Ledoit–Wolf shrinkage estimator,[9]

\hat{\Sigma}^{[\lambda]} = \lambda T + (1-\lambda)\hat{\Sigma}

where Σ̂ is the sample covariance matrix, T is a target matrix (e.g., a diagonal matrix), and λ ∈ (0, 1) is the shrinkage intensity.

The partial correlation under the Ledoit-Wolf shrinkage[10] is then:

\hat{P}_{ij}^{[\lambda]} = -\frac{\hat{\Omega}_{ij}^{[\lambda]}}{\sqrt{\hat{\Omega}_{ii}^{[\lambda]}\,\hat{\Omega}_{jj}^{[\lambda]}}}

where Ω̂^[λ] = (Σ̂^[λ])⁻¹ is the inverse of the shrunk covariance matrix, consistent with equation (1). This method is used in a variety of fields, including finance and genomics.[11]
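A sketch of this route in R with a diagonal target and a fixed, hand-picked λ (a genuine Ledoit–Wolf estimator chooses λ analytically from the data; packages such as corpcor automate that choice):

# Shrinkage-based partial correlations: shrink the sample covariance toward
# a diagonal target, invert, and convert the precision matrix to partial
# correlations. lambda is fixed here for illustration only.
shrunk_partial_cor <- function(data, lambda = 0.2) {
  S      <- cov(data)
  target <- diag(diag(S))                       # diagonal target matrix T
  S_shr  <- lambda * target + (1 - lambda) * S  # shrunk covariance
  P      <- solve(S_shr)                        # shrunk precision matrix
  D      <- diag(1 / sqrt(diag(P)))
  R      <- -D %*% P %*% D                      # partial correlations
  diag(R) <- 1
  dimnames(R) <- dimnames(S)
  R
}

# Example with more variables than observations (p = 10, n = 5), where the
# raw sample covariance is singular but the shrunk one is invertible:
set.seed(42)
dat <- matrix(rnorm(5 * 10), nrow = 5, ncol = 10)
shrunk_partial_cor(dat)[1:3, 1:3]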


References

  1. ^ a b Baba, Kunihiro; Ritei Shibata; Masaaki Sibuya (2004). "Partial correlation and conditional correlation as measures of conditional independence". Australian and New Zealand Journal of Statistics. 46 (4): 657–664. doi:10.1111/j.1467-842X.2004.00360.x. S2CID 123130024.
  2. ^ Guilford J. P., Fruchter B. (1973). Fundamental statistics in psychology and education. Tokyo: McGraw-Hill Kogakusha, LTD.
  3. ^ Kim, Seongho (November 2015). "ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients". Communications for Statistical Applications and Methods. 22 (6): 665–674. doi:10.5351/CSAM.2015.22.6.665. ISSN 2287-7843. PMC 4681537. PMID 26688802.
  4. ^ Rummel, R. J. (1976). "Understanding Correlation".
  5. ^ Kendall M. G., Stuart A. (1973). The Advanced Theory of Statistics, Volume 2 (3rd Edition), ISBN 0-85264-215-6, Section 27.22.
  6. ^ Fisher, R. A. (1924). "The distribution of the partial correlation coefficient". Metron. 3 (3–4): 329–332.
  7. ^ "Partial and Semipartial Correlation". Archived from the original on 6 February 2014.
  8. ^ StatSoft, Inc. (2010). "Semi-Partial (or Part) Correlation", Electronic Statistics Textbook. Tulsa, OK: StatSoft, accessed January 15, 2011.
  9. ^ Ledoit, O., & Wolf, M. (2004). "A well-conditioned estimator for large-dimensional covariance matrices". Journal of Multivariate Analysis, 88(2), 365–411. https://doi.org/10.1016/S0047-259X(03)00096-4
  10. ^ Schäfer, J., & Strimmer, K. (2005). "A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics". Statistical Applications in Genetics and Molecular Biology, 4(1). https://doi.org/10.2202/1544-6115.1175
  11. ^ Ledoit, O., & Wolf, M. (2022). "The power of (non-)linear shrinking: A review and guide to covariance matrix estimation". Journal of Financial Econometrics, 20(1), 187–218. https://doi.org/10.1093/jjfinec/nbaa007

External links

Wikiversity has learning resources about Partial correlation